
3D vision: Simpler, Smarter, Spreading

POSTED 06/08/2005  | By: Winn Hardin, Contributing Editor

It wasn’t too many years ago that a 3D machine vision system came with a Sun workstation. Multi-megabit images pushed the capabilities of the most advanced computing platforms. This put the cost of 3D systems out of reach for many small to medium-sized businesses.

“The first 3D analysis systems were originally put together by a team of PhDs from [the University of] Michigan, and although they were good tools, you pretty much needed a PhD in geometry to use them,” said Kevin Harding, Optical Metrology Leader for GE Global Research (Niskayuna, New York) and Chair of the Advances in 3D Machine Vision conference recently held at The Vision West Show (San Jose, California, May 16-19). “Now with new computing power, you have the potential to buy a computer for $1,000 and analyze 5 MB of data very quickly… Today, the 3D sensor is about where machine vision was 20 years ago. You can buy a [2D] vision system that’s built into the camera with Ethernet, and almost anyone can program the thing, and that’s where we need to get with 3D vision.”

And vision suppliers are listening.

Out with the old
3D vision systems differ from 2D systems in that they also measure an object’s height, the z-axis. For this reason, 3D systems typically perform at least one of two general tasks: creating 3D profiles of objects for inspection, or locating an object in 3D space to guide a robot or other mobile apparatus. These applications require complex solutions, and with that in mind, suppliers are focusing on making the systems more compact and easier to integrate and operate, similar to “smart camera” trends in the general-purpose machine vision market.
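The first of those tasks, profile inspection, reduces to comparing measured z-values against a nominal profile within a tolerance. As a minimal sketch (the profiles, tolerance, and function name here are illustrative, not taken from any vendor's system):

```python
# Hedged sketch: compare a measured 3D height profile (z-values along one
# scan line, in mm) against a nominal profile and flag out-of-tolerance
# points. All numbers below are made up for illustration.

def inspect_profile(measured, nominal, tol_mm=0.05):
    """Return the indices where |measured - nominal| exceeds the tolerance."""
    return [i for i, (m, n) in enumerate(zip(measured, nominal))
            if abs(m - n) > tol_mm]

nominal  = [1.00, 1.10, 1.20, 1.10, 1.00]
measured = [1.01, 1.09, 1.32, 1.11, 1.00]

print(inspect_profile(measured, nominal))  # -> [2]
```

A real system would run this per scan line over millions of points, which is why the on-sensor processing discussed below matters.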

Today, manufacturers still depend on functional gauges to measure gaps, threads, and specific features of machined parts. A single airfoil, for instance, can require 6 to 10 gauges, explains GE’s Harding, and companies like GE may make thousands of different airfoil models, resulting in “millions of dollars of functional gauges.” Other manufacturers set aside a line specifically for dozens of coordinate measurement machines (CMMs), requiring engineers, technicians and operators to take parts to the CMM line for measuring, instead of collecting data from fully automated, 100% inspection systems like machine vision.

“3D machine vision’s biggest problem is that it’s not as easy to use as 2D, because it needs a combination of light sources and calibration built into the sensor more than 2D cameras do,” explains Leonard Metcalfe, CEO of LMI Technologies Inc. (Delta, British Columbia, Canada).

Smart 3D
LMI is tackling the simplicity challenge by offering lines of 3D “smart gauges” that include sensor(s), lighting, processing and calibration in one package. “Some smart gauges have more than one camera – stereoscopy with structured light. Others use lasers for fast applications because of the light intensity that lasers provide,” said LMI’s Metcalfe. “We’re rolling out a new sensor next week in Hanover [Germany] that does full 3D imaging in full color. Each sensor has multiple cameras and multiple light sources at different frequencies, and the system combines the images automatically to provide a mixture of 3D and color. This is particularly useful in industries like organics and lumber, where you want the 3D dimensional characteristics as well as finding rot, stain, fungus, etc., so those sections can be removed.”

As production lines increase their speed, light budgets are stretched to the limit, resulting in less image contrast when shutter speeds are too short for the sensor to collect enough light. For this reason, as well as the inherent structure of coherent light (it goes where you point it and only where you point it), laser triangulation is the favored method of illuminating 3D machine vision applications.
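The geometry behind laser triangulation is simple: project a laser sheet onto the surface at a known angle to the camera's optical axis, and any raised feature shifts the imaged line sideways in proportion to its height. A minimal sketch of that relationship, with an assumed 30-degree laser angle and shifts already converted to object-space millimeters:

```python
import math

# Hedged sketch of laser-triangulation height recovery. With the laser sheet
# inclined at angle theta to the camera's viewing direction, a surface point
# raised by height z shifts the imaged line laterally by dx = z * tan(theta),
# so z = dx / tan(theta). The 30-degree angle is an illustrative assumption.

def height_from_shift(dx_mm, laser_angle_deg=30.0):
    """Convert an observed lateral line shift (mm, object space) to height."""
    return dx_mm / math.tan(math.radians(laser_angle_deg))

print(round(height_from_shift(1.0), 3))  # -> 1.732 mm at a 30-degree angle
```

Steeper laser angles give finer height resolution per pixel of shift, at the cost of more occlusion – one of the standard trade-offs when setting up such a sensor.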

“Lasers are the most robust method,” said Marcus Maurer, product manager for 3D technology at Vitronic (Louisville, Kentucky). “They’re not influenced by ambient light, and it’s a proven and simple technique.” Vitronic has organized its 3D vision business into four areas: body scanning for the gaming, fashion, and ergonomics industries; inspection; robot guidance; and welding. In addition to lighting and computing developments, Maurer points to improved autocalibration routines as an enabling development that is expanding 3D machine vision’s utility.

Eating up bandwidth
Karl Gunnarsson, business development manager for vision at SICK Inc. (Bloomington, Minnesota), says the company’s IVC-3D helps to simplify 3D measurements by placing image filtering circuitry on the same chip as a CMOS optical sensor. The IVC-3D also uses laser line triangulation to create 3D data sets, but the processing on the CMOS chip allows the sensor head to isolate the laser line within the 512x1536-pixel image and transmit only the 3D information, with sub-pixel accuracy. This allows the system to run at extremely high frame rates of up to 5,000 fps at full frame. “The IVC-3D Smart Camera, with more than 100 vision tools, can easily be programmed to solve a variety of inspection tasks,” Gunnarsson explains. “The offering from SICK combines the advantages of precise and robust 3D imaging with a smart system, creating a simple solution for machine vision applications.”
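Isolating a laser line with sub-pixel accuracy is commonly done with an intensity-weighted centroid down each image column: rather than reporting the single brightest row, the sensor averages the rows of the bright stripe, weighted by brightness. A hedged sketch of that idea on a toy image (the source does not say this is SICK's exact on-chip algorithm; the centroid method here is a standard textbook approach, and the image and threshold are made up):

```python
# Hedged sketch of sub-pixel laser-line extraction: for each column, compute
# the intensity-weighted centroid of pixels above a threshold, yielding the
# line's row position with sub-pixel precision. Real sensors do this on-chip.

def line_centroids(image, threshold=50):
    """Return one sub-pixel row coordinate per column (None if no line found)."""
    rows, cols = len(image), len(image[0])
    result = []
    for c in range(cols):
        bright = [(r, image[r][c]) for r in range(rows) if image[r][c] > threshold]
        if not bright:
            result.append(None)
            continue
        total = sum(w for _, w in bright)
        result.append(sum(r * w for r, w in bright) / total)
    return result

img = [
    [0,     0,   0],
    [80,  200,  60],
    [200,  90, 200],
    [80,    0,  60],
    [0,     0,   0],
]
print(line_centroids(img))  # columns 0 and 2 peak symmetrically at row 2.0
```

Transmitting only this one row coordinate per column, instead of the whole image, is what keeps the bandwidth low enough for the frame rates quoted above.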

Fast 3D analysis is critical to new vision applications that track moving objects, such as furniture on moving chain hooks or mobile robots on carts.

“The math is pretty well established for 3D nonlinear motion, but the problem is bandwidth between the robot and vision system,” explains Adil Shafi, president of Shafi Inc. (Brighton, Michigan). “How fast can the robot update the coordinate systems based on vision servoing or other approaches? Some use FireWire, others use Ethernet, but vision servoing requires more than that, especially if you’re tracking something that’s moving quickly. If an object is moving 1 inch a second, that’s one thing, but if it’s moving 10 inches a second, that’s something else.”
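Shafi's point about update rates can be made concrete with back-of-the-envelope arithmetic: the distance an object travels between coordinate updates is its speed divided by the update rate. A minimal sketch, with an assumed 30 Hz vision loop (illustrative only; the source quotes no specific rate):

```python
# Hedged back-of-the-envelope: how far an object moves between vision
# updates. The 30 Hz loop rate is an illustrative assumption; the 1 in/s
# and 10 in/s speeds come from Shafi's example in the article.

def travel_between_updates(speed_in_per_s, update_hz):
    """Distance (inches) the object moves between coordinate updates."""
    return speed_in_per_s / update_hz

for speed in (1.0, 10.0):
    lag = travel_between_updates(speed, 30.0)
    print(f"{speed} in/s at 30 Hz -> {lag:.3f} in of travel per update")
```

At 10 in/s the robot's target has drifted a third of an inch between updates, which is why fast-moving parts demand either much higher update bandwidth or predictive tracking.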

Unlike the laser scanning systems mentioned above, Shafi uses Cognex (Natick, Massachusetts) geometry-based search algorithms to locate features in the image and their relative relationship to the object’s position in 3D space. “Laser-based systems are in general less able to recognize features and require more setup, safety considerations and maintenance than geometry-based systems (with off-the-shelf CCD cameras),” Shafi says. “Geometry-based recognition systems can recognize more variations, e.g., changes in lighting, focus, feature, shape, etc., than laser-based systems. They are key to handling bin picking, autoracking and products that can change shape.”

Integrators are solving the ‘mobility’ problem by adding tapers to chain hooks and other material handling systems that allow some degree of free motion, limiting the movement of objects to a single plane.

Bin picking
In other durable-goods manufacturing operations, bin picking is becoming another major driver for the adoption of 3D vision. Like the addition of color to vision systems for inspection and profiling of lumber and other organic products, 3D vision suppliers involved in bin picking applications are learning to develop vision functions that answer specific end-user applications, rather than depending on customers and integrators to solve specific problems with general-purpose vision systems. For instance, a major part of Shafi’s bin picking strategy relies on expanding its classes of stored 3D geometric shapes, such as a cylinder with a flange on one end, which is used as a model for bin picking axle shafts.

The greatest challenge with bin picking is locating parts that are partially hidden by other parts or oriented in ways that make them hard for the vision system to recognize, or the robot to handle. “People are extremely cautious about bin picking right now because a failed system can really impact productivity,” Shafi said. “It’s like where autoracking was four years ago. A few customers had success, but now people are starting to commit because of the successes demonstrated over time. There’s a big push to go to low volume manufacturing in the durable goods industry to accommodate just-in-time inventories and a wider range of models on a single line. Robot guidance and bin picking can give them that ability by reducing the need for hard fixtures and the associated costs and line turnover time.”

As 3D vision matures, more functions will be combined into single, smart systems: inspection with robot guidance, which is already happening today; color and spectral analysis with 3D vision; and, finally, nonlinear motion tracking. As LMI’s Metcalfe explains, the vision industry needs to continue to adopt new technologies, with the focus on making these components into easy-to-use systems.

“We’re very close to the imager people, and also signal processing and computer signal processing on the DSP side, and also the LED technology and laser technology, because a lot of these components are key,” Metcalfe said. “We need more and more light all the time, as machines move faster and faster, and the LED business is helping a lot. Someday, we’re going to see multispectral combined with 3D, a UV picture combined with visible and IR pictures, all in 3D.”