cess to build testable prototypes and achieve a compact but
robust and assembly-friendly design.
For embedded logic, timing and control code development, the tools were dictated by the selected components
— Xilinx FPGA and Microchip PIC32 microcontroller.
The design workflow included three core tools: the Xilinx
Vivado Design Suite, the ModelSim simulator, and the MPLAB
Integrated Development Environment and In-Circuit
Emulator. These tools can be quickly accessed via evaluation kits or development boards that include example
projects and libraries that can be retargeted to the specific design.
A significant part of the development included the
design of the vision engine, the implementation of a direct
image data path and the creation of tools for testing
and debugging.
The embedded vision architecture can be divided into
the application engine and the vision engine. The goal of
the vision engine is to capture image data efficiently and
deterministically from the image sensors into CPU memory
for processing. “Efficiently” means handling use-case
flexibility while maintaining low latency and requiring minimal resources. “Deterministically” means keeping
track of frame counts, timestamps and other image parameters. Adhesive bead inspection can last for seconds
or minutes, acquiring hundreds of thousands of images.
Therefore, it requires a continuous streaming architecture as opposed to a burst architecture.
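As a rough illustration of the continuous-streaming idea (not the product's actual implementation), a circular buffer of frame slots in CPU memory can be sketched in Python. The `Frame` fields, slot count and drop-oldest policy here are assumptions: the point is that the producer (DMA from the vision engine) never stalls acquisition, so the consumer must keep up or lose frames.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    frame_count: int   # monotonically increasing acquisition counter
    timestamp_us: int  # acquisition timestamp in microseconds
    payload: bytes     # raw image data

class CircularFrameBuffer:
    """Fixed-size circular buffer bridging producer and consumer."""

    def __init__(self, slots: int):
        self.slots = slots
        self.buf = [None] * slots
        self.write_idx = 0   # next slot the producer fills
        self.read_idx = 0    # next slot the consumer drains
        self.dropped = 0     # frames overwritten before being read

    def produce(self, frame: Frame) -> None:
        if (self.write_idx - self.read_idx) == self.slots:
            # Buffer full: drop the oldest frame rather than block acquisition.
            self.read_idx += 1
            self.dropped += 1
        self.buf[self.write_idx % self.slots] = frame
        self.write_idx += 1

    def consume(self):
        if self.read_idx == self.write_idx:
            return None  # nothing new yet
        frame = self.buf[self.read_idx % self.slots]
        self.read_idx += 1
        return frame
```

In a burst architecture the producer would simply stop when the buffer fills; the drop-oldest choice above is what lets an inspection run for minutes without pausing acquisition.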
The vision engine comprises four laser line profilers, each consisting of a CMOS image sensor chip,
a visible laser line projector as the illumination source and associated optics. The vision engine is governed by the field-programmable gate array (FPGA) and the PIC32 microcontroller.
The FPGA is the heart of the vision engine, managing the image data paths from the four image sensors,
applying preprocessing and appending acquisition information. The image sensors are controlled by and
interfaced directly to the FPGA for tight exposure and illumination
synchronization and low-latency image data processing.
The microcontroller has interrupt inputs from the FPGA
and can be used as a low-latency path to the application
engine. Otherwise, the microcontroller has connections
to various system resources and diagnostic chips — e.g.,
inertial motion sensors, temperature sensors, current detectors and voltage monitors.
The application engine needs to keep up with the image
acquisitions arriving in the CPU memory circular buffer
and apply algorithms to extract and transform image data
into application data. The application engine maintains
multiple communication paths with the vision engine
(via microcontroller and via PCIe channel) to begin/end
image acquisition and to set or update imaging parameters (e.g., exposure time, frame rate and readout
window size). The 3D processing application software suite
resides on a small form factor SoM single board computer. All external factory IO and protocols are managed
by the application engine.
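To make the parameter-update path concrete, here is a hypothetical sketch of how such a message might be encoded for transfer between the application engine and the vision engine. The parameter names, IDs and byte layout are invented for illustration; they are not the product's actual protocol.

```python
import struct

# Illustrative parameter IDs; the real system's register map is not public.
PARAM_IDS = {
    "exposure_us": 0x01,
    "frame_rate_hz": 0x02,
    "roi_width": 0x03,
    "roi_height": 0x04,
}
PARAM_NAMES = {v: k for k, v in PARAM_IDS.items()}

def encode_param_update(param: str, value: int) -> bytes:
    # One parameter per message: a 1-byte ID followed by a 32-bit value,
    # little-endian to match a typical FPGA register write.
    return struct.pack("<BI", PARAM_IDS[param], value)

def decode_param_update(msg: bytes):
    pid, value = struct.unpack("<BI", msg)
    return PARAM_NAMES[pid], value
```

A fixed, compact layout like this suits both of the paths the article mentions: it is small enough for a low-latency microcontroller link and trivially mapped onto a PCIe register write.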
Debug and test
Tight integration of the embedded system brings
many advantages, but it can make debugging and testing very difficult. The embedded architecture does not
give visibility into subcomponents of the system, much
less individual signals internal to the FPGA or microprocessor. ModelSim enables end-to-end verification
of the image path. Verilog models of the image sensor
can be quickly coded, and back-end DMA transfers are
modeled by vendor-specific test benches and bus functional models.
One technique that embedded vision enables is recording metadata in each image acquired. The FPGA records
image count, timestamp, image sensor settings, illumination settings and firmware revisions, and allocates space
for custom data that can be set by the microcontroller (or
the application engine via the microcontroller). This supports run-time diagnostics and post-analysis of settings
and signals via stored image sets.
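The metadata fields listed above could be laid out as a fixed binary header on each frame. The field widths, order and format below are assumptions for illustration, not the actual FPGA layout; the article only states which categories of data are recorded.

```python
import struct

# Assumed layout: image count (u32), timestamp in µs (u64),
# exposure (u16), illumination level (u16), firmware major/minor (u16 each),
# and an 8-byte custom-data area writable by the microcontroller.
HEADER_FMT = "<IQHHHH8s"
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 28 bytes with this layout

def pack_metadata(image_count, timestamp_us, exposure_us,
                  illum_level, fw_major, fw_minor, custom=b""):
    return struct.pack(HEADER_FMT, image_count, timestamp_us, exposure_us,
                       illum_level, fw_major, fw_minor,
                       custom.ljust(8, b"\x00"))

def unpack_metadata(header: bytes) -> dict:
    keys = ("image_count", "timestamp_us", "exposure_us",
            "illum_level", "fw_major", "fw_minor", "custom")
    return dict(zip(keys, struct.unpack(HEADER_FMT, header)))
```

Because the header travels with every frame, stored image sets remain self-describing: post-analysis tools can recover the exact settings in effect for any frame without consulting external logs.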
Additional debugging and testing are supported via the FPGA
register interface and test applications with read-and-write
access to the FPGA via the microcontroller interface.
The Xilinx ChipScope Pro tool and Microchip
MPLAB debugger are used together for detailed testing
scenarios. A major challenge is that no single system component has direct access to all relevant information. The
application engine integrates the timing and event information from the adhesive dispenser and robot with the
image data to allow full replay and step-by-step debugging.
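A minimal sketch of the kind of register-access test application described above, with the microcontroller link replaced by an in-memory stand-in. The register names, addresses and transport are illustrative assumptions, not the actual register map.

```python
class FakeMcuTransport:
    """Stand-in for the microcontroller link to the FPGA register interface."""

    def __init__(self):
        self.regs = {}

    def write_reg(self, addr: int, value: int) -> None:
        self.regs[addr] = value & 0xFFFFFFFF  # 32-bit registers assumed

    def read_reg(self, addr: int) -> int:
        return self.regs.get(addr, 0)  # unwritten registers read as zero

# Illustrative register addresses.
REG_FRAME_COUNT = 0x0010
REG_EXPOSURE_US = 0x0014

def exposure_roundtrip_test(link) -> bool:
    # A minimal read-after-write check of the exposure register,
    # the sort of probe a register-interface test application runs.
    link.write_reg(REG_EXPOSURE_US, 250)
    return link.read_reg(REG_EXPOSURE_US) == 250
```

In the real system the transport methods would issue commands over the microcontroller interface; keeping the test logic separate from the transport lets the same checks run against simulation and hardware.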
Each bead-dispensing application brings its own requirements based on linear dispense speed (part motion or nozzle motion) and the minimum detectable
defect desired. Faster dispensing speeds and smaller
defects require higher acquisition rates. Higher acquisition rates are achievable with smaller inspection ranges (and vice versa). The higher-speed applications dispense the beads at 1000 mm/sec. Inspecting
for gaps as small as 2 to 3 mm requires 1-mm sampling along the bead, which equates to 1000 bead
profiles per second (pps).
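The sampling arithmetic above reduces to a one-line relation between dispense speed and sampling pitch:

```python
def required_profile_rate(dispense_speed_mm_s: float,
                          sampling_pitch_mm: float) -> float:
    """Profiles per second needed to capture one profile every
    `sampling_pitch_mm` of bead at the given dispense speed."""
    return dispense_speed_mm_s / sampling_pitch_mm

# The article's case: 1000 mm/sec dispensing with 1-mm sampling
# yields 1000 profiles per second, enough to resolve 2- to 3-mm gaps.
rate = required_profile_rate(1000.0, 1.0)  # 1000.0 pps
```

The same relation shows why smaller defects are costly: halving the minimum detectable gap roughly halves the allowable sampling pitch and therefore doubles the required profile rate.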
A primary goal of the vision engine was to maxi-
IP July 2017
[Table: Trade-offs between bead height and acquisition rate. Column headings recovered from the layout: optics bead height; exposure limit at max frame rate; 1× PCIe 2.0 limited profile rate. Data rows as extracted: 11, 605, 151, 6613, 5327 and 48, 2275, 569, 1758, 1221.]
[Figure: Within the embedded vision architecture, the CPU memory bridges the vision engine and the application engine. SoC: system on chip; SoM: system on module.]