Advances in 3D Vision Tackle Tough Automation Challenges
| By: John Lewis, Contributing Editor
|The Z-Trak2 series of 3D profile sensors, built on Teledyne DALSA’s 3D image sensor technology, ushers in a new era of 5GigE 3D profile sensors for high-speed, in-line 3D applications.|
Once upon a time, 3D imaging systems were prohibitively expensive and used mainly for niche applications. That’s certainly not the case anymore. Today a system-on-a-chip (SoC) can integrate ARM processors, neural processors, and FPGA fabrics all in one, allowing a whole series of 3D processing pipelines to run on the chip. This multiplicity of new onboard processing devices allows more intensive 3D image processing operations to run inside 3D scanning devices.
The advent of compact, affordable industrial 3D cameras is impacting all 3D vision product categories — from passive 3D imaging to active 3D camera systems that integrate laser line scanning, speckle projection, fringe pattern projection, and time-of-flight (ToF) technologies. What was once thought of as a technology of the future is increasingly being deployed in factories, improving efficiency, increasing production, reducing costs, optimizing floor space, boosting quality, and making facilities more profitable.
Time-of-Flight Takes Off
Time-of-flight sensors and cameras have made major strides recently. About a decade ago, in initial commercial deployments, ToF sensors were slow and offered low 2D, angular, and depth resolution. Today, a few machine vision companies, including Teledyne, not only have the ability to drive ToF sensor development but also have the capabilities for system-level development. This results in more commercial options with improved overall performance, including the ability for 3D to meet requirements in logistics, autonomous robotics, and other challenging applications.
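The depth arithmetic underlying ToF sensing is simple: distance follows from the round-trip travel time of emitted light, or, in indirect ToF, from the phase shift of a modulated signal. A minimal sketch of that geometry (illustrative values only, not any particular sensor's implementation):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(t_seconds: float) -> float:
    """Direct ToF: distance = (speed of light * round-trip time) / 2."""
    return C * t_seconds / 2.0

def depth_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Indirect (phase-based) ToF: the unambiguous range is c / (2 * f_mod);
    depth is the measured phase as a fraction of that range."""
    max_range = C / (2.0 * mod_freq_hz)
    return (phase_rad / (2.0 * math.pi)) * max_range
```

A 10 ns round trip, for example, corresponds to roughly 1.5 m of depth; raising the modulation frequency improves depth resolution but shortens the unambiguous range, which is why multi-frequency schemes are common.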
Teledyne’s main 3D imaging solutions include its Z-Trak series of laser profilers and its Hydra3D ToF sensors. For the Z-Trak 3D laser profilers, key markets and applications include robot guidance, chip lead inspection, part identification, and general machine vision inspection. For the Hydra3D sensor, applications include factory automation, robotics, logistics, surveillance, intelligent traffic systems (ITS), mapping/building, and automated guided vehicles, including drones.
“The flexibility of the Hydra3D time-of-flight CMOS image sensor provides reliable 3D detection even in challenging applications, and real-time adaptability to environmental conditions,” explains Yoann Lochardet, 3D marketing manager at Teledyne e2v. “High resolution that increases the field of view with good angular resolution, combined with long-distance range capability and an on-chip multi-system feature that is robust to other systems operating in the same area, all without motion artifacts, has opened doors to new machine vision applications.”
|LUCID’s Helios2+ ToF camera offers two depth processing modes: high dynamic range (HDR) mode and high-speed mode.|
High Dynamic Range Imaging
Most recent developments in 3D machine vision have relied on active scanning techniques, where spatially or temporally modulated light is projected at the scene, reflected by the object, and captured by the vision system, usually a CMOS-based camera. Active scanning has helped increase working distance and improved the accuracy of 3D measurement systems while reducing processing time. However, images remain subject to degradation when complex scenes contain heterogeneous objects with a broad variety of reflection properties, resulting in an overall wide dynamic range, according to Alexis Teissie, product marketing manager at LUCID.
“One recent advancement we’ve seen is the adaptation of high dynamic range (HDR) imaging techniques, originally developed for standard vision systems, to 3D applications,” says Teissie. “The exact same technique cannot be ported directly to a 3D imaging system and needs to be adapted to make best use of the intrinsic properties of a given 3D scanning technology, but the overall principle remains very similar.”
That’s what LUCID realized with the introduction of the Helios2+ time-of-flight camera. Multiple exposures are combined in the phase domain, and the best exposure time is selected automatically at the pixel level on the camera image signal processor (ISP) to output a single point cloud with high dynamic range. The HDR point cloud is processed entirely in-camera, with no added burden to the back-end system, making it a flexible and useful tool for the application designer.
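The per-pixel selection described above can be sketched as follows. This is an illustrative simplification with a made-up saturation threshold, not the Helios2+ ISP's actual algorithm: for each pixel, keep the depth value from the exposure whose return signal is strongest without being saturated.

```python
import numpy as np

SATURATION = 0.95  # hypothetical normalized amplitude ceiling

def hdr_merge(depth_stack, amp_stack):
    """Given depth maps and amplitude maps from several exposures
    (stacked along axis 0), keep, per pixel, the depth from the
    exposure with the strongest non-saturated return."""
    amp = np.where(np.asarray(amp_stack) >= SATURATION, -1.0, amp_stack)
    best = np.argmax(amp, axis=0)           # winning exposure index per pixel
    rows, cols = np.indices(best.shape)
    return np.asarray(depth_stack)[best, rows, cols]
```

Because the merge is a per-pixel lookup rather than a global blend, it maps naturally onto an in-camera ISP, which is what makes the single-point-cloud output possible without back-end processing.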
As the 3D sensing adoption rate picks up, more potential use cases are being explored across multiple industries. In some of the more complex configurations, the target scene may contain different parts with either high- or low-reflectivity surfaces, leading to a large dynamic range. This is typically the case with automotive assembly, with a broad assortment of materials, textures, and finish qualities, and in logistics and palletizing applications, where the type of objects can vary greatly and become difficult to image correctly with a fixed exposure time. The addition of HDR imaging capabilities opens a new set of opportunities for those developing 3D imaging systems and enables deployment in challenging environments.
|The cost-effective IDS Ensenso S series works with AI-based laser point pattern triangulation instead of stereo vision.|
Factory Calibrated, Fully Integrated 3D Profilers
When selecting a 3D profiler, users must consider not only the cost of the unit but also software tools, deployment time, and, equally important, in-field service. Such devices can take on inspection challenges that are difficult if not impossible to handle with 1D or 2D imaging techniques, including variation in height, defects caused by indentation or bubbling of laminate, measuring object thickness, coplanarity of adjoining surfaces, uniformity, and asymmetry of extruded parts.
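Several of those checks reduce to simple geometry on the height profile a laser profiler returns. As a hedged sketch (an assumed tolerance, not any vendor's method), coplanarity of a surface cross-section can be scored as the worst deviation from a least-squares line fit:

```python
import numpy as np

def coplanarity(x, z, tol_mm=0.05):
    """Fit a line z = a*x + b to a laser-profile cross-section and report
    the worst deviation from it. tol_mm is an assumed acceptance threshold."""
    a, b = np.polyfit(x, z, 1)
    residual = np.abs(z - (a * x + b))
    worst = residual.max()
    return worst, worst <= tol_mm
```

A flat, tilted surface passes regardless of its slope, while a local indentation or bubble shows up as a residual spike; the same residual logic extends to plane fits on full 3D profiles.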
“Customers who require 3D vision capabilities can greatly benefit from factory-calibrated, fully integrated 3D profile sensors like Z-Trak2,” explains Inder Kohli, senior product manager for vision solutions with Teledyne DALSA. “For example, Z-Trak2 comes in a wide variety of configurations and laser options to handle applications ranging from small electronic parts to large automobile engine parts, and from door frames to the entire chassis.”
As new SoCs and sensors support development of more cost-effective 3D systems, many suppliers have broadened their portfolios of 3D vision products at every price level. Quality 3D data, in the form of geometrically correct and low-noise point clouds, is now available to manufacturers at a lower cost. For example, the new Ensenso S 3D camera series from IDS Imaging Development Systems not only applies laser point triangulation but also includes products with time-of-flight sensing, albeit with differences in data quality, according to Martin Hennemann, IDS product manager for 3D cameras and machine vision software.
“Thanks to budget-friendly products such as Ensenso S, 3D data can be used in many different, even low-priced applications in the mass market,” explains Hennemann. “For production and (intra)logistics processes, the range of detailed applications that can be automated on a given budget is expanding. For instance, stations that check tray contents for automatic storage can be multiplied more easily.”
Such products fill the gap between proof-of-concept applications using low-cost consumer-grade 3D cameras and powerful but more expensive established industrial 3D sensor technologies. One area of particular interest is autonomous vehicles in factory and other environments. 3D image processing is of enormous importance for automation in industry, and it is also spurred on by research and development for consumer areas such as gesture control, facial recognition, and virtual reality, which will lead to exciting new application possibilities.
|Capable of scanning objects moving at 144 km/h, MotionCam-3D from Photoneo eliminates the trade-off between resolution and speed.|
3D Vision for Moving Objects
3D sensing of moving objects presents various challenges, including limited exposure time and lack of light. ToF camera systems offer high speed but at the cost of lower resolution and higher noise. Active pattern projection systems provide robust 3D data and high accuracy but only when scanning static scenes. The Photoneo 3D camera MotionCam-3D delivers both high scanning speed and high-quality 3D data and puts an end to the seemingly inherent trade-off between scanning speed and scanning quality.
“This is the only 3D area sensing technology on the market with which one can get both fast scanning speed and high-resolution 3D images without motion artifacts,” says Andrea Pufflerova, PR specialist at Photoneo. “In numbers, the parallel structured light technology enables the capture of objects moving up to 144 km/h (89 mph) while providing a point cloud with 900,000 3D points, submillimeter accuracy of 150 to 900 μm, low noise, detailed contours, and completeness of the scan on various materials.”
Applications deploying machine vision and vision-guided robots are directly limited by the constraints of machine vision. Because parallel structured light extends robotic vision capabilities and possibilities, it opens the door to completely new applications. For instance, applications with a hand–eye setup, where a 3D camera is attached directly to a robotic arm, can now reach faster cycle times, since the robot does not need to stop moving for each scan acquisition.
The new machine vision technology also greatly impacts collaborative robotics, as it effectively eliminates the effects of vibrations. Thanks to this, the movement and shaking of the human hand do not affect the final quality of image data. Other new applications enabled by parallel structured light technology include the handling, picking, and sorting of objects moving on a carrier such as a conveyor belt or an overhead conveyor, 100% quality control, and inspection and dimensioning of a wide variety of objects of different sizes — all without interruption and with objects in motion.
An example of a dimensioning application is luggage moving on a conveyor belt. The luggage is scanned to calculate its exact dimensions, and this information is further used for the calculation of service costs or for optimizing luggage placement in a transportation container.
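A dimensioning step like this can be approximated with an axis-aligned bounding box over the scanned point cloud. The sketch below is a minimal illustration; production systems typically segment away the belt plane first and fit an oriented, rather than axis-aligned, box:

```python
import numpy as np

def dimensions_mm(points):
    """Axis-aligned extents (largest first) of an N x 3 point cloud in mm."""
    pts = np.asarray(points, dtype=float)
    extents = pts.max(axis=0) - pts.min(axis=0)  # per-axis span
    return tuple(sorted(extents, reverse=True))
```

Feeding the resulting length, width, and height into a tariff table or a bin-packing routine covers both uses the article mentions: service-cost calculation and container placement.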