
Data Integrity Attacked from All Sides in Vision System

POSTED 08/15/2008  | By: Winn Hardin, Contributing Editor


Data integrity, data handling and timing form a triangle at the heart of every machine vision system. Should one side fail, the stable triangle becomes a “Bermuda Triangle,” sucking in lost bits, trigger signals and action commands, and creating a nightmare for the engineer trying to debug a randomly crashing system.

The threat to data integrity, “…increases exponentially as system complexity increases because an error from one element can take down the entire system,” explains Yvon Bouchard, Director of Systems Architecture at DALSA  (Waterloo, Ontario).

These days, it’s hard to find a vision system that isn’t complex. Even a single camera attached to a single host can be a complex solution if the camera generates large data streams, interfaces with multiple peripherals, or works to extremely tight tolerances. Add components built by different manufacturers with different interpretations of the standards, plus networks made for consumers rather than for real-time, deterministic industrial applications, and you have a problem that requires talented integrators and engineers.

In response, vision suppliers are developing new ways to leverage technological innovations from the consumer world and hardening them for industrial applications. From modified consumer networks to new hardware and software designs, vision systems are taking data integrity, handling and timing to the next level in their search for ease of use and robustness.

Hardware to the Fore
“Real time can be mistaken for high-speed applications,” explains Perry West, President of Automated Vision Systems Inc. (San Jose, California), “but what it really means is that the system has determinism. It knows when things are going to happen across the entire system. In reality, there’s soft real time and hard real time. Soft real time systems do not carry a high cost for missing a real time window, although a miss can slow down the production line, which decreases the value of the vision system. Hard real time systems typically are moving applications. Inspections have to be done at a specific time and ejector signals have to be delivered at a precise time. Too early, or too late, causes problems. The majority of machine vision systems are either soft or hard real time, inline systems.”
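West’s distinction between soft and hard real time comes down to what a missed window costs. A toy illustration of the hard case (the function name and every number below are invented for this sketch, not from the article): an ejector signal that fires outside its window is a failure whether it is early or late.

```python
# Hard real time: a result outside its window is a failure, not just a
# slowdown. Times are in milliseconds; all values here are illustrative.

def ejector_on_time(trigger_ms, latency_ms, window_start_ms, window_end_ms):
    """True only if the ejector fires inside its valid window.
    Too early and too late both count as misses."""
    fire_ms = trigger_ms + latency_ms
    return window_start_ms <= fire_ms <= window_end_ms

# Suppose the rejected part passes the ejector 40-45 ms after the trigger.
print(ejector_on_time(0.0, 42.0, 40.0, 45.0))  # True  - deadline met
print(ejector_on_time(0.0, 38.0, 40.0, 45.0))  # False - too early is a miss
```

In a soft real-time system the second case would merely slow the line; in a hard real-time system it means a bad part ships or a good part is rejected.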

The search for real time machine vision systems on par with the ubiquitous programmable logic controller (PLC) requires attention to all aspects of the vision system, across all disciplines, components, suppliers and integrators.

“A year ago, we had someone conduct a survey of system integrators for us, and they confirmed that the integration and software effort was the highest cost in the system, not the hardware,” noted Cor Maas, President, Components Division at LMI Technologies (Delta, British Columbia). “But interestingly, we also asked how many systems require more than one camera, or motion, or multiple lights, and synchronization, and it turned out that 30% or so needed motion or synchronization with a moving target; another 30% required more than one camera or light source. This complexity is where [timing and data integrity] problems start because you have to consider interference between different cameras and light sources while collecting vision data from a passing target and delivering a result. Systems that require precise synchronization among different components are the majority, not the exception.”

“This is why we designed the FireSync platform,” continued Maas. “We know that people need to develop systems and software with very deterministic behavior while most available components are not real time. With a camera, it is relatively easy to determine how long it takes between a trigger signal and actual sensor exposure and electronic read out of the sensor. You can also determine how fast the target object is moving, what your smallest feature is that needs to be detected, and how long it takes for that feature to move one pixel.”

These calculations tell you a maximum acquisition time for the camera to freeze target motion, Maas says, but beyond the camera, determinism is harder to come by, which is why LMI’s FireSync adds additional conductors to Gigabit Ethernet to provide power, interlock and global synchronization timing signals for all components on the vision network. A light source can now be synchronized perfectly with the exposure of a camera. “FireSync Studio software allows all these timing parameters to be set up very easily, show the camera images and enable algorithm development,” says Maas. “You could call it a Vision-PLC combined with a Software Development Kit.”
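Maas’s back-of-the-envelope timing calculation can be sketched in a few lines. Every number below is an invented example, not a FireSync specification; the point is the chain of reasoning from target speed and feature size to a maximum exposure time.

```python
# Maximum exposure time that freezes target motion to sub-pixel blur.
# All figures below are illustrative assumptions, not from the article.

target_speed_mm_s = 500.0     # how fast the target object is moving
smallest_feature_mm = 0.1     # smallest feature that must be detected
pixels_per_feature = 2.0      # sample each feature with at least 2 pixels

# Object-space size of one pixel on the target.
pixel_size_mm = smallest_feature_mm / pixels_per_feature   # 0.05 mm

# How long it takes that feature to move one pixel; the exposure must
# finish within this window or the feature smears across pixels.
max_exposure_s = pixel_size_mm / target_speed_mm_s

print(f"max exposure: {max_exposure_s * 1e6:.0f} us")  # 100 us
```

Against that budget, the camera’s trigger-to-exposure latency and sensor readout time (which Maas notes are relatively easy to measure) can then be checked for fit.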

Injecting Determinism
By combining the image processing elements with the sensor, smart cameras offer another way to improve the determinism of a small vision network.

“The benefit of a smart camera platform is that once the image is in the system and the image processing software is told to go to work, it can be very deterministic because there’s not much out there to get in the queue or delay data flow,” explains Gary Kocken, Director of Sales for the Americas, PPT Vision, Inc. (Eden Prairie, Minnesota).

Smart cameras still have to communicate ‘outside the box’ to PLCs, other smart cameras on a vision network, and other real-time systems. With vision networks using multiple cameras across non-real-time networks, Kocken says customers often number the cameras and write code that requires each camera image to be acquired in sequence, or raises an alarm when data is lost. With PLCs, motion controllers and other non-vision devices, Kocken says end users often use some form of handshake across a real-time industrial network, where the smart camera and PLC each acknowledge a readiness to receive data, and then acknowledge the data after it is sent; some users go so far as to duplicate the I/O channel between vision systems and PLCs.
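The sequence-numbering scheme Kocken describes can be sketched as a simple gap detector; the function name, structure, and data below are illustrative, not PPT Vision’s API.

```python
# Detect lost frames on a non-real-time network by numbering them and
# alarming when a sequence number is skipped. Illustrative sketch only.

def check_sequence(frame_numbers):
    """Yield (expected, got) for every gap in the frame sequence."""
    expected = frame_numbers[0]
    for got in frame_numbers:
        if got != expected:
            yield (expected, got)   # lost data: raise the alarm here
            expected = got          # resynchronize on the frame we did get
        expected += 1

# Frame 4 never arrived over the network:
gaps = list(check_sequence([1, 2, 3, 5, 6]))
print(gaps)  # [(4, 5)]
```

The handshake with a PLC works the same way at a higher level: each side confirms readiness before the transfer and confirms receipt after it, so a silent drop on either end is detected rather than ignored.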

Today, few networks, whether industrial (e.g., Profibus, ControlNet) or consumer, offer both guaranteed bandwidth and determinism. Ethernet switches ensure data delivery and offer high bandwidth, but vision designers cannot dictate when the data will arrive or how much bandwidth each device will have on the network. Firewire with the IIDC camera specification (previously DCAM) can operate in asynchronous (guaranteed delivery) or isochronous (guaranteed bandwidth) modes. USB offers the same two transfer modes, but has no camera interface specification. Camera Link®, the only transmission standard designed specifically for vision systems, delivers both determinism and bandwidth for point-to-point communications, but adds a frame grabber to the bill of materials. Camera Link® is also overcoming manufacturing issues associated with some high-frequency Camera Link® cables at longer lengths.

“To guarantee data integrity is probably the most crucial component of the vision system,” says DALSA’s Bouchard. “For the longest time, before we started Trigger to Image Reliability, the only way the end user could verify the integrity of an image was to visually inspect the received image, or possibly suffer the consequence of system downtime as a result of false defects and other errors. With Trigger to Image Reliability, we essentially give the user the ability to detect if the system has misbehaved in some way. After all, it’s not the error that occurs once an hour that’s hard to fix. It’s the error that occurs once a month and is unacceptable to the customer.” DALSA’s Trigger to Image Reliability includes several data tracking mechanisms in both camera hardware and software. DALSA uses checksums between the frame grabber and camera to verify data length, and handshakes throughout the network for local area determinism.
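The checksum idea is straightforward to sketch. The 16-bit sum below is a generic placeholder chosen for illustration; the article does not specify which checksum or frame format DALSA’s Trigger to Image Reliability actually uses.

```python
# Sketch of checksum verification between a camera and frame grabber:
# the sender declares the frame length and a checksum, and the receiver
# recomputes both on arrival. The 16-bit sum here is a generic stand-in.

def checksum16(payload: bytes) -> int:
    """Simple 16-bit additive checksum over the frame payload."""
    return sum(payload) & 0xFFFF

def verify_frame(payload: bytes, declared_length: int, declared_sum: int) -> bool:
    """Reject frames with missing bytes or corrupted data."""
    return len(payload) == declared_length and checksum16(payload) == declared_sum

frame = bytes(range(16))
print(verify_frame(frame, 16, checksum16(frame)))       # True  - intact
print(verify_frame(frame[:-1], 16, checksum16(frame)))  # False - short frame
```

A length check alone catches truncation, the once-a-month failure mode Bouchard describes; adding the checksum also catches frames that arrive complete but corrupted.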

“The next step will be to verify the data itself,” continued Bouchard, “for both media and data transmission integrity. This will alleviate a lot of the problems that many OEMs have in the field. You can test all the components in a system individually and have them pass, but when you put them in a system, you can experience rare hiccups that are often not acceptable to the end user. For the past 10 years, the primary problem has been software issues. But things have dramatically changed, and today, hardware reliability is the #1 issue out there.”