

AIA - Advancing Vision + Imaging has transformed into the Association for Advancing Automation, the leading global automation trade association of the vision + imaging, robotics, motion control, and industrial AI industries.

3D Imaging Ushers in a New Age of Inspection

POSTED 05/12/2019  | By: Winn Hardin, Contributing Editor

Advances in 3D imaging have allowed vision users to overcome some challenging inspection tasks. In the machine vision marketplace, 3D imaging continues to mature, tackling applications 2D imaging cannot. Improvements in 3D technology are cost-effectively simplifying a variety of inspection tasks and taking machine vision inspection to the next level.

“In a manufacturing setting, the fusion of 2D with 3D is necessary to measure how well components go together into an assembly and assess the product for final fit, finish, and packaging,” says Terry Arden, CEO of LMI Technologies.

According to David Dechow, Principal Vision Systems Architect at Integro Technologies, a systems integrator with broad experience helping companies implement 2D and 3D machine vision for industrial automation, accuracy has improved as well. And in 3D inspection tasks, which may include measurement or reconstruction, precision is even more essential than in most robotic guidance or bin-picking applications.

“End users should now recognize that the components are not just generic 3D image acquisition systems but rather have become in many cases more targeted in their applicability to specific tasks,” Dechow says.

The Gocator structured-light sensors provide full-field 3D inspection of objects with start/stop motion using a single "snapshot" scan. Photo courtesy of LMI Technologies.

Many 3D imaging devices also are extending past the 3D point cloud to encompass color, texture, and other 2D imaging features. “The combination of a 3D image with grayscale ‘texture’ and/or color information is somewhat new in the machine vision marketplace,” Dechow says. “Having this information can be critical to the identification, differentiation, location, or measurement of an object.”

A 3D image by itself has no grayscale or color content. “Incorporating this content in a spatially correct way into a 3D image can have great benefit,” Dechow adds. “In many cases, the 3D profile simply does not fully define the object or its features.”
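Conceptually, incorporating grayscale or color content into a 3D image in a spatially correct way means back-projecting each depth pixel into space and carrying its pixel value along. A minimal sketch, assuming a pinhole camera model and a color image already registered (pixel-aligned) to the depth map; the function name and intrinsic parameters are illustrative, not any vendor's API:

```python
import numpy as np

def fuse_color_with_depth(depth, color, fx, fy, cx, cy):
    """Back-project a depth map to 3D points and attach per-pixel color.

    Assumes the color image is registered to the depth map, as many 3D
    sensors provide. Returns an (N, 6) array of [X, Y, Z, R, G, B] rows,
    skipping invalid (zero) depth readings.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(float)
    valid = z > 0
    x = (us - cx) * z / fx          # pinhole back-projection
    y = (vs - cy) * z / fy
    pts = np.stack([x[valid], y[valid], z[valid]], axis=1)
    rgb = color[valid].astype(float)
    return np.hstack([pts, rgb])
```

Each output row is a textured 3D point, so downstream tools can use appearance and shape together, as Dechow describes.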

Additionally, 3D enables users to measure the angle of a surface as well as planar features such as distance. “This is an important capability to support robots, which move with six degrees of freedom and need both angle and position to operate effectively,” Arden says.
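The angle-plus-position output Arden describes can be illustrated with a plane fit: the centroid of a patch of 3D points gives position, and the fitted plane's normal gives surface orientation. A hedged sketch using a standard SVD plane fit, not any particular sensor's algorithm:

```python
import numpy as np

def plane_from_points(points):
    """Fit a plane to an (N, 3) point patch and return
    (centroid, unit_normal): the position and angle a robot needs."""
    centroid = points.mean(axis=0)
    # The singular vector for the smallest singular value of the
    # centered points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    if normal[2] < 0:               # orient consistently (+Z up)
        normal = -normal
    return centroid, normal

def tilt_deg(normal):
    """Angle between the surface normal and the vertical axis."""
    return np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
```

For a patch of points lying on a 45-degree ramp, `tilt_deg` reports 45, and the centroid locates the patch in the sensor frame.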

Advancing Technologies
Machine vision component suppliers are developing products that address common 3D imaging needs. Hardware and software manufacturer Euresys provides frame grabbers capable of interfacing with traditional 2D cameras and laser-line projectors to build 3D height data, which can be combined with images. Additionally, the Euresys Open eVision Easy3D software library accurately calibrates and produces data sets from 2D image data and 3D shape data, which can now be processed using traditional machine vision tools such as gauging and measurement for metrology, OCR, and pattern matching to solve challenging vision applications.

“Previously, 3D imaging was accomplished using multiple cameras looking at the same object from known fixed positions or by using a single camera combined with a structured light pattern or laser point or line generator,” says Mike Cyros, Vice President Sales & Support Americas at Euresys. This required precise alignment of the cameras and pattern generators to be able to calculate, or triangulate, each point in the image to determine its distance from the camera. This process required a lot of computing power.
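In the simplest geometry, the triangulation Cyros describes reduces to converting the laser line's sideways shift in the image into a height. A toy sketch, assuming a camera looking straight down at the surface and a laser sheet striking it at a known angle; the parameter names and values are illustrative:

```python
import math

def height_from_shift(pixel_shift, pixel_pitch_mm, magnification, laser_angle_deg):
    """Convert a laser-line displacement in the image (pixels) to a
    surface height (mm) for a simple triangulation geometry.

    A raised surface shifts the imaged laser line sideways by
    height * tan(angle), scaled by the optics, so inverting that
    relation recovers the height.
    """
    shift_mm = pixel_shift * pixel_pitch_mm / magnification  # shift on the object
    return shift_mm / math.tan(math.radians(laser_angle_deg))
```

Doing this for every column of every frame, at line rate, is exactly the per-point workload that FPGAs and multicore processors now absorb.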

“Today’s FPGA and multicore embedded processor architectures make it possible to do these calculations at increasing resolutions and imaging rates, which enables in-line, real-time use of 3D image data for machine vision,” Cyros says. “What was once difficult to achieve can now be accomplished with off-the-shelf sensors and advanced software libraries, making it possible for systems integrators and machine makers to more easily integrate 3D measurements into their vision processes.

“When customers can just concentrate on how to mount a 3D sensor on their machine and trust that it will produce accurate, repeatable 3D point clouds, they’re more likely to adopt the technology,” Cyros continues.

Euresys’s Coaxlink Quad 3D-LLE laser-line extracting frame grabber features onboard processing for 3D profiling with zero host CPU usage. Photo courtesy of Euresys.

Machine vision customers also have access to many 3D measurement technologies, including time of flight, stereo, lidar, and laser triangulation. In its Gocator product portfolio, LMI Technologies offers 3D smart sensors based on laser triangulation for moving parts, or stereo fringe projection for stationary parts. Laser profilers are used across a broad range of applications from log scanning in sawmills for optimal volume extraction to protein portioning in fish and meat processing and fastener location inspection in cell phone assembly.

In all these cases, the object being scanned moves through the scanning plane and a 3D surface is built up on the fly for inspection. Fringe projection is ideal in robot metrology to measure features such as slots, holes, openings, and studs in automotive body-in-white applications. During inspection, the robot remains stationary for the time it takes to snap a 3D point cloud.
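The on-the-fly surface build-up amounts to stacking one height profile per encoder tick into a 2D range image, which can then feed inspections such as the volume optimization used in sawmills. A minimal sketch, assuming uniform sampling in both axes; this is an illustration of the idea, not LMI's implementation:

```python
import numpy as np

def accumulate_profiles(profiles):
    """Stack successive laser profiles (one per encoder tick) into a
    2D height map as the part moves through the scan plane."""
    return np.vstack([np.asarray(p, dtype=float) for p in profiles])

def volume_mm3(height_map, x_res_mm, y_res_mm):
    """Estimate scanned volume by summing heights over the pixel grid,
    the kind of calculation behind log-volume optimization."""
    return float(height_map.sum() * x_res_mm * y_res_mm)
```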

“Laser triangulation and stereo fringe projection offer high accuracy and speed, and their integrated bandpass filters allow them to tolerate high levels of ambient light in order to deliver reliable data for inline automation, inspection, and optimization processes,” Arden says.

Meanwhile, industrial robot manufacturer FANUC America Corporation offers three different structured-light technologies for vision-guided robotics, one of the most common uses for 3D vision. The company’s 3DL product uses two laser lines in a 2D image to produce one 3D pose result. The 3DA uses multi-image stereo with two digital 2D cameras and a DLP light projector. The 3DV uses the same approach but with only one image, making it faster.

“The multi-image approach can provide more accurate data than the single image, but it really depends on the application as to which one is used,” says David Bruce, Engineering Manager at FANUC. “The addition of 3D data makes part segmentation easier and allows for precise 3D location to be extracted, which for robotic guidance is huge.”

Easy3D software library accurately calibrates and produces data sets from both 2D image data and 3D shape data, which can be processed using traditional machine vision tools. Photo courtesy of Euresys.

That’s why bin picking is one of the largest applications for 3D imaging. “You need accurate 3D positional information of each part in the bin, and there’s no good way to do this with 2D,” Bruce says. “Tracking parts on a conveyor has a lot of advantages if you can use 3D because the 2D contrast between the part and the belt is not an issue when using 3D.”

For customers who want to build their own 3D imaging system, Euresys offers a specialized CoaXPress compatible Coaxlink Quad 3D-LLE laser-line extracting frame grabber with onboard processing for 3D profiling. This can be combined with the Easy3D library for accurate calibration of data.

“The goal is to provide greater flexibility to systems integrators and machine makers. They can select an appropriate 2D camera and laser-line generator that suits their field of view and imaging resolution requirements to build their own 3D imager from components, rather than having to select from preconfigured 3D scanning solutions that may not offer the right combination of resolution and field of view,” says Cyros.

Overcoming Obstacles
Like any machine vision technology, 3D imaging presents challenges. “Part reflectivity is a consideration with structured-light stereo 3D imaging. If the part is highly reflective, the depth image or point cloud algorithms will have issues doing this stereo calculation,” Bruce says, adding that the same issue applies if the part is not reflective enough, which can occur with dark or black parts. FANUC’s 3DV technology overcomes these issues by doing multiple snaps at different exposures and light intensities and then merging those images.
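FANUC merges the images captured at different exposures before computing depth; a simplified way to see the idea is to merge the resulting depth maps instead, keeping the first valid reading at each pixel. A sketch under that assumption:

```python
import numpy as np

def merge_exposures(depth_maps):
    """Merge depth maps captured at different exposures, keeping the
    first valid (finite, nonzero) reading at each pixel. Bright
    exposures recover dark parts; dim exposures recover shiny ones."""
    merged = np.full(np.shape(depth_maps[0]), np.nan)
    for d in depth_maps:
        # Treat NaN (no return) and zero as invalid readings.
        d = np.nan_to_num(np.asarray(d, dtype=float), nan=0.0)
        valid = (d > 0) & ~np.isfinite(merged)  # fill only empty pixels
        merged[valid] = d[valid]
    return merged
```

Pixels that dropped out of a shiny region at one exposure are filled in from another, which is why the merged result tolerates a wider range of part reflectivities.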

Positional accuracy can be another trouble spot. “You’ve got to mount, or fixture, the sensors in a very accurate, precise way, and that can be difficult,” Cyros says.

As with any new machine vision technology, educating end users can be a struggle. “Many customers aren’t familiar with the technology and hold various myths about 3D, such as the widely held misconception that 3D is much more difficult to set up or maintain or is somehow more expensive than 2D,” Arden says.

LMI overcomes this misconception with its FactorySmart approach, which allows users to connect to the company’s smart sensors using any web browser running on a PC or mobile device. “The software makes it simple to visualize your part in 3D, drag and drop your measurement tools, and communicate results downstream for pass/fail decision-making,” Arden says.

When it comes to advanced machine vision technologies like 3D imaging, end users expect the same benefits they always do: ease of use and cost-effectiveness. Vision product manufacturers are addressing these demands by developing targeted solutions that improve accuracy and reliability in numerous inspection tasks.