Vision and Motion: Better Together, Part I

POSTED 11/20/2009 | By: Kristin Lewotsky, Contributing Editor

Machine vision has been a part of manufacturing -- and motion control -- for decades. As applications have become more demanding and the sophistication of hardware and software has risen, implementations have gone from completely uncoupled systems in which motion serves vision to tightly coupled combinations in which vision serves motion. Ultimately, the application drives the choice of hardware, software, and architecture.

To achieve the best results from the technology, users need to be aware of its limitations as well as its capabilities. A machine vision system can inspect piece parts many times faster than a human being, for example, and unlike a human being, it can continue to do so 24/7 for many months. At the same time, a machine vision system may be largely incapable of performing what appears to us to be a simple recognition task, say reliably locating a part when a shadow falls over it. “Typically, engineers who have not worked with vision technology have the same kind of biases that the general population does, which is, ‘I can see it, why can't the machine?’” says Ben Dawson, Director of Strategic Development at Dalsa IPD (Billerica, Massachusetts). “We need to help users set expectations that are realistic.”

Machine Vision 101
In its simplest incarnation, a machine vision system consists of an illumination source, a lens to gather and focus light, a detector to capture the image, and an interface to pass the data to the processor. The processor, in turn, extracts information from the image data. For industrial applications, the detector and lens are often packaged into cameras; more recently, manufacturers have been including processors to create so-called smart cameras that can simplify integration for certain applications.
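
To make that chain from illumination to processor concrete, here is a minimal sketch in Python, assuming the OpenCV library and a camera exposed as device 0; the process_image() function is a placeholder for whatever measurement an application actually needs, not something from the article.

```python
# Minimal acquire-then-process sketch (assumptions: OpenCV, camera at index 0).
import cv2

def process_image(gray):
    # The "processor" stage: extract information from the image data.
    # Here it simply returns the mean intensity as a stand-in measurement.
    return float(gray.mean())

cap = cv2.VideoCapture(0)            # lens + detector packaged as a camera
ok, frame = cap.read()               # interface hands the image data to the processor
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(f"Mean intensity: {process_image(gray):.1f}")
cap.release()
```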

Success starts with understanding the parameters of the problem. Before buying any hardware, users should understand the requirements of their application. What are they trying to evaluate? What metrics will yield the answers or performance they require, and can a vision system provide that? If so, what kind of machine vision system? Edge detection systems, for example, are good at finding objects and at looking for cracks or textural defects. Blob analysis is based on contrast -- determining whether an area of interest exceeds a set threshold. Blob analysis can be used for a range of tasks, including detecting point defects like pinholes or contamination, variations that in general provide good contrast with the background.
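
As an illustration of the blob-analysis idea -- threshold against a set contrast level, then measure the connected regions that result -- the following Python sketch uses OpenCV. The threshold value and minimum blob area are assumed figures chosen for the example, and the dark-defect-on-bright-background case is just one possibility.

```python
import cv2

def find_blobs(gray, threshold=60, min_area=25):
    """Flag dark regions (e.g., pinholes or contamination) on a bright background."""
    # Pixels below the threshold become foreground "blobs".
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)
    count, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    blobs = []
    for i in range(1, count):                       # label 0 is the background
        area = int(stats[i, cv2.CC_STAT_AREA])
        if area >= min_area:                        # ignore single-pixel noise
            blobs.append({"centroid": tuple(centroids[i]), "area": area})
    return blobs
```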

Only after the problem is well characterized should system design begin. Like safety, vision systems operate best when they are integrated into the design process from the beginning. The optics and the lighting, in particular, need to be designed for the application and integrated into the machine, not just slapped into a nook where they fit. “The thing you often run into with people who are using vision for the first time is they say, ‘I'm going to put a camera here,’ wherever ‘here’ is,” says Perry West, President of Automated Vision Systems (San Jose, California). “The problem is that a camera is only a device that senses an image. What they really need to do is choose the type of lens they are going to use and decide where it needs to be located to create the image for the camera to sense.”

“Anybody who really has been around vision very long and understands it well will tell you that there are two things that are really key to making a vision system work: the lighting and optics,” agrees Brian Powell, Vice President of Sales and Operations at Precise Automation (Morgan Hill, California). “Understanding how those two interact is really the key to making a machine vision system successful.”

Single-wavelength LED lighting. (Image: DALSA, Engineering Specialities Inc.)

The human eye has at times been likened to a simple lens -- but one that is backed by a supercomputer. Detecting targets under constantly changing lighting conditions is relatively straightforward for a human being. In a vision system, the quality and consistency of the lighting can make or break the application. Factory floors are already lit, making it tempting for OEM machine builders to leverage that light for their machine vision system, cutting cost, reducing space requirements, and giving themselves one less subsystem to worry about.

That’s a mistake, most vision system integrators will advise. “The thing we still preach against is trying to use ambient light,” says West. “This temptation to use ambient light could become a little bit more enticing in the motion control environment because of the difficulty of placing the light source. The sad thing is that we are 90% of the way there to be able to use ambient light -- but that other 10% will kill an application.”

Perry Cornelius, Advanced Systems Consultant at ABCO Automation Inc. (Brown Summit, North Carolina), tells of a factory floor inspection system that worked perfectly -- except for a fault that hit every day at around 3 p.m. A visit to the factory revealed the source of the problem: the light of the setting sun shining in through the windows.

“What I like to do if possible is to use LED lighting, which is a single wavelength such as red, blue, or green,” Cornelius says. “It can be more expensive [than ambient or conventional lighting] for the application but with a single-wavelength light, we can use bandpass filters on the lens to filter out the other wavelengths and that really helps to attenuate ambient light.”

Extreme cases such as the example above may also require light enclosures or baffling to sufficiently attenuate optical noise. This could be challenging in some cases for OEM machine builders, who face space, footprint, and cost constraints, but the results are worth it.

Get Smart
At the heart of a camera is the detector, and the type of detector must be determined by the vision task required. In the case of discrete items presented one at a time, such as bottles in a packaging line, a photocell or laser line triggers an area scan camera with an M x N detector to capture an image. In the case of a continuous flow of material, for example in a web process, a line scan camera with a 1 x N detector might be a better choice.
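
To make the line scan case concrete, here is a brief sketch in Python with NumPy; the acquisition call, line width, and frame height are placeholder assumptions rather than any particular camera's API. Each motion-synchronized trigger captures one 1 x N line, and successive lines are stacked into an M x N frame for inspection.

```python
import numpy as np

LINE_WIDTH = 2048        # N pixels across the web (assumed value)
LINES_PER_FRAME = 1024   # M lines stacked into each processed frame (assumed)

def acquire_line():
    # Placeholder for the camera's real line-acquisition call,
    # fired once per encoder pulse as the web moves past the sensor.
    return np.zeros(LINE_WIDTH, dtype=np.uint8)

frame = np.empty((LINES_PER_FRAME, LINE_WIDTH), dtype=np.uint8)
for row in range(LINES_PER_FRAME):
    frame[row, :] = acquire_line()
# 'frame' is now an M x N image of the moving material, ready for inspection.
```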
 
For the right application, smart cameras can streamline the process of building a system. Certainly, the level of integration offered by smart cameras makes building a system with them simpler and less expensive than buying and assembling individual components, with fewer worries about compatibility or custom software. The tradeoff is limitations in system functionality, however, so it’s important to be clear about what a given camera can and cannot do.

“To keep costs reasonable, vendors build products to address 90% of the problems that people want to solve rather than 100%, so you still have some tasks that even a very capable smart camera won’t do,” says Dawson. “The smart camera might offer the choice of, say, five operations; for example, it can look at contrast, it can find a pattern, it can check for certain limited classes of defects like dirt, and so on and those are great -- as long as that’s what you need to do."

A smart camera might be ideal for guiding a pick-and-place unit to a part, for example. 3-D positioning of an object that needs to be picked up and moved to another location might also be feasible, though it may push the capabilities of a smart camera and potentially bog down the system. More sophisticated applications might present inescapable limitations, however. “When you get to high-speed web applications, the data rate is way over what a smart camera can do,” says Dawson. That means turning to more sophisticated hardware, such as a PC-based architecture or a system with a separate frame grabber.

Today's frame grabbers have become quite sophisticated, typically integrating specialized interfaces such as Camera Link as well as high-performance processors. For demanding applications like web processing and flat-panel-display inspection that require the processing of tens or hundreds of megabits of data per second, such systems may be essential.

The pitfalls
The architecture of vision systems integrated with motion can vary widely. In some cases, a single camera is sufficient for an inspection or location task. In the case of a packaging line, however, multiple cameras at multiple positions may need to be ganged together for optimal performance. Elsewhere, a system with six degrees of freedom may leverage six views from a single camera, or six different cameras, to provide the required perspective.

Making connections

The most common camera interfaces are:

  • Camera Link: Developed by the Automated Imaging Association, Camera Link is a point-to-point interface with a data rate of 2 Gb/s.
  • FireWire (IEEE 1394): A logical bus, FireWire supports tree, daisy chain, and loop topologies. Although the newest specification, FireWire S3200, provides a data rate of 3.932 Gb/s, no commercial product is currently available; the widely available S800 operates at 983 Mb/s.
  • GigE Vision: Based on Gigabit Ethernet, it supports networking and operates at 1 Gb/s. Because of the packet-based nature of the Ethernet standard, determinism can be a challenge.
  • USB: The ubiquitous interface operates in a master-slave configuration, although USB hubs add a measure of topological flexibility. The newly released USB 3.0 computer peripheral interface specifies a signaling rate of 5 Gb/s (the data rate is 4 Gb/s), although chipsets are not yet commercially available.

--K.L.

A common camera interface at present is Camera Link®, a point-to-point connection that can run as fast as 2 Gb/s. It is not a communications bus, however, so it cannot support multiple cameras in a networked configuration. Alternatives exist, including GigE Vision®, a camera interface based on Gigabit Ethernet. When GigE Vision is used to network multiple cameras, the same issue arises that plagues industrial Ethernet in motion control -- lack of determinism.
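
A rough, back-of-envelope calculation shows what a 2 Gb/s point-to-point link implies in practice; the 1024 x 1024, 8-bit monochrome camera in this sketch is an assumed example, and protocol overhead is ignored.

```python
LINK_BITS_PER_SECOND = 2e9                       # nominal 2 Gb/s link
WIDTH, HEIGHT, BITS_PER_PIXEL = 1024, 1024, 8    # assumed monochrome camera

bits_per_frame = WIDTH * HEIGHT * BITS_PER_PIXEL
max_fps = LINK_BITS_PER_SECOND / bits_per_frame
print(f"Roughly {max_fps:.0f} frames per second before overhead")  # ~238 fps
```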

For a system that networks multiple cameras into a single processor, communications bandwidth can become a concern, but latency introduced by the packet-based nature of industrial Ethernet can be a far bigger problem. “With Camera Link, there's no real concern with the latency,” says Dawson. “When you trigger it to take a picture, you get the picture with known time latency. When you use Ethernet, there is always a potential for latency issues, especially if you have multiple devices.” For some applications, this is not a problem. In others, it can be critical.

The challenges don’t stop there. Processor delays can introduce additional latencies. In a laboratory, a system can be designed to be deterministic, but in the real world, such conditions -- and results -- may be hard to reproduce. The vagaries of real-world conditions will likely require additional processing, which means additional time. To compensate, some system designers add a delay to provide a window that accommodates variations in processing while ensuring that the system gets the data it needs at the time it expects it. In effect, this sets up a fixed frame for the processing time.
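
One way to picture that fixed window is sketched below in Python; the 50 ms budget and the function names are illustrative assumptions, not details from the article.

```python
import time

PROCESSING_BUDGET_S = 0.050        # assumed fixed window granted to the vision step

def handle_trigger(image, process):
    t_trigger = time.monotonic()
    result = process(image)                        # processing time varies run to run
    remaining = (t_trigger + PROCESSING_BUDGET_S) - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)                      # pad out to the fixed window
    # If remaining <= 0, the budget was overrun; a real system would flag
    # a missed deadline here rather than silently delivering late data.
    return result                                  # delivered at a known latency
```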

In systems with so-called soft determinism that can tolerate some time lag, this approach can be a good solution. For an application requiring hard determinism -- inspection of print head nozzles microseconds after the ink droplets are ejected, for example -- there is no room for latency. Again, the application drives the design.

In recent years, that design process has gotten easier than it once was. Vendors recognized that for vision to really penetrate the industrial market, it had to be more user friendly. “The problem with machine vision 15 years ago was that the stuff was just too hard to use and required a lot of knowledge about vision and programming,” says Dawson. “It was not uncommon for it to take six months or even two years for someone to go from specifying an inspection system to actually having it on the line, and that delay is the kiss of death for the system if there is a product change or a management change."

In response, vendors began focusing on user friendliness, with advanced algorithms and graphical user interfaces couched in the language of the end user. “Instead of talking about Sobel edge detectors and blob analysis, we talk about diameters and areas and thereby move the algorithms into the background,” says Dawson. “The goal is to not have to educate users up to the level of a machine vision expert.”
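
A small sketch of what that translation might look like in Python; the calibration factor and the equivalent-circle diameter calculation are assumptions for illustration, not a description of Dalsa's software.

```python
import math

MM_PER_PIXEL = 0.05   # assumed camera calibration factor (mm per pixel)

def blob_to_user_units(area_px):
    # Report results as areas and diameters rather than raw blob statistics.
    area_mm2 = area_px * MM_PER_PIXEL ** 2
    diameter_mm = 2.0 * math.sqrt(area_mm2 / math.pi)   # equivalent-circle diameter
    return {"area_mm2": round(area_mm2, 2), "diameter_mm": round(diameter_mm, 2)}

print(blob_to_user_units(1200))   # {'area_mm2': 3.0, 'diameter_mm': 1.95}
```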

In part II of this article, we’ll look at new developments in the integration of machine vision and motion control, focusing on capabilities that promise to make motion control more sophisticated and functional than ever.