Industry Insights
Novel 3D Vision Techniques Suit Evolving Machine Vision Tasks
POSTED 03/09/2021
By: Jimmy Carroll, Contributing Editor
3D vision technologies must evolve and adapt to changing market conditions — especially during periods of accelerated change, such as when global pandemics reshape nearly every aspect of life. Different types of 3D suit disparate tasks and applications across many industries. Whether stereo vision, laser triangulation, structured light, time-of-flight (ToF), or something else, 3D imaging technologies are simultaneously evolving and expanding into more markets, helping end users and OEMs solve new and challenging imaging problems.
Bin Picking With 3D Area Scan Techniques
Because of COVID-19, people have turned to e-commerce platforms such as Amazon and Instacart at unprecedented levels, placing huge demands on companies involved in logistics, warehousing, and packaging, which have been forced to constantly expand and scale operations to keep up. These logistics applications — including the delivery, warehousing, and packaging of goods — rely heavily on 3D imaging of all types.
“Area scan and point cloud 3D implementations are required for bin picking applications, but despite the fact that [bin picking] gets arguably the most publicity, it is the hardest type of 3D to implement,” said David Dechow, Principal Vision Systems Architect, Integro Technologies, Inc. “[Most bin picking] doesn’t yet meet the needs of all customers because it requires extremely high-speed, random picks, and requires the robot to move like a human. But if there were one application that would be the 3D vision silver bullet should that be accomplished, [bin picking] is it.”
Dimensioning and Counting Boxes With Time-of-Flight
While area scan 3D techniques such as stereo vision and structured light may assist in bin picking applications, time-of-flight (ToF) helps with tasks such as measuring and counting boxes, for example. Targeting logistics, material handling, and robotics applications, the new Helios2 ToF camera from LUCID Vision Labs offers 2.5 times more light transmission and more than 50% better 3D precision compared to the original Helios.
“LUCID can always count on customers to push the limits of whatever technology we offer,” said Rod Barman, Founder and President at LUCID Vision Labs. “With that in mind, we design our products to accommodate both the current and potential future challenges in applications such as dimensioning, robotics, and AGVs.”
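To give a rough sense of how a ToF depth map drives dimensioning, the sketch below estimates a box’s height and footprint from an overhead depth image. It is a minimal example that assumes a camera mounted at a known height above the floor and a simple pinhole model; the function name, parameters, and numbers are illustrative and not part of any vendor’s API.

```python
# Minimal sketch: estimating a box's height and footprint from a ToF depth map.
# Assumes an overhead camera at a known height above the floor and a pinhole
# model with a known focal length; all names and values are illustrative.
import numpy as np

def dimension_box(depth_map, camera_height_mm, focal_length_px, floor_tolerance_mm=10.0):
    """Return (height_mm, length_mm, width_mm) of the object in view, or None."""
    # Pixels significantly closer to the camera than the floor belong to the box.
    box_mask = depth_map < (camera_height_mm - floor_tolerance_mm)
    if not box_mask.any():
        return None

    # Box height = camera height minus the distance to the box's top surface.
    top_depth_mm = np.median(depth_map[box_mask])
    height_mm = camera_height_mm - top_depth_mm

    # Footprint in pixels, converted to millimeters with the pinhole model:
    # size_mm = size_px * depth_mm / focal_length_px.
    rows, cols = np.nonzero(box_mask)
    length_px = rows.max() - rows.min() + 1
    width_px = cols.max() - cols.min() + 1
    mm_per_px = top_depth_mm / focal_length_px
    return height_mm, length_px * mm_per_px, width_px * mm_per_px

# Illustrative use with a synthetic depth map (camera 2000 mm above the floor).
depth = np.full((480, 640), 2000.0)
depth[200:300, 250:400] = 1700.0   # a 300 mm tall box
print(dimension_box(depth, camera_height_mm=2000.0, focal_length_px=525.0))
```

A production system would also handle multiple boxes, tilted or overlapping loads, and calibration of the camera pose, but the basic geometry stays the same.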
For Cognex, the biggest demand in 3D applications in logistics has been box dimensioning with parts in motion. Historically, multiple lasers and an external camera have been used to capture images in these applications. But the 3D-A1000 system, for example, a smart camera that combines 2D and 3D, enables quick dimensioning from a web interface, explained John Keating, Senior Director and Business Unit Manager, 3D Products, Cognex.
“With Cognex and the A-1000, it’s all about simplicity of setup. It’s a web-based interface, but it’s still an In-Sight camera, which offers an easy walkthrough for setup,” he said. “People are becoming less scared of 3D the easier it becomes to deploy.”
With customers finding success with the 3D-A1000, new 3D applications have opened up in logistics, such as locating objects in the classic empty-tray scenario, which has been a difficult application for quite some time, said Keating.
For Mathieu Larouche, Product Manager, 3D Sensors, Matrox Imaging, traditional applications such as vision-guided robotics for bin picking and palletizing as well as box dimensioning are all significant driving forces behind 3D imaging. And in the same vein of making things easier for customers, Matrox Imaging proposed and is an active participant in GenICam GenDC, which aims to standardize the exchange of component data between 3D cameras/sensors and application software.
One area of 3D imaging to watch, in terms of potential market impact, is the application of deep learning to generalize the technology’s use, suggested Larouche.
Cycle Time Considerations and Backend Packaging
Packaging applications also rely increasingly on 3D vision, but end users and OEMs must consider cycle time needs up front to avoid any issues, according to Jared Glover, CEO, CapSen Robotics.
“With many 3D vision-guided robotics applications, the robots are moving differently with each pick or place, for example,” he said. “You aren’t going to have as consistent a cycle time with a 3D vision-guided robot as with something that is repeating the exact same motion over and over, which is why it is important to factor this into a line design. Extra buffers should be created before and after the bin picking robots, or extra robots should be added for redundancy if hitting a certain cycle time with no deviation is necessary.”
CapSen Robotics designed its CapSen PiC bin-picking product with these cycle time considerations in mind. The system uses highly accelerated proprietary GPU math libraries along with parallel image processing and motion control to achieve cycle times as low as two seconds on traditional 6-axis robots.
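To make the buffering argument concrete, here is a small simulation sketch (not CapSen code) of a picking robot whose cycle time varies, feeding a downstream station that pulls a part at a fixed takt time; all timing values are illustrative assumptions.

```python
# Illustrative simulation: a variable-cycle-time picking robot feeding a
# fixed-takt downstream station, with a buffer of parts in between.
import random

random.seed(0)
TAKT_S = 3.0                         # downstream station pulls one part every 3 s
MEAN_PICK_S, JITTER_S = 2.5, 1.5     # robot averages 2.5 s per pick, +/- 1.5 s

def pick_time():
    """One randomized pick cycle for the vision-guided robot."""
    return random.uniform(MEAN_PICK_S - JITTER_S, MEAN_PICK_S + JITTER_S)

def starved_cycles(n_cycles, initial_buffer):
    """Count downstream cycles that find the buffer between robot and line empty."""
    buffer_level = initial_buffer
    next_delivery = pick_time()
    starved = 0
    for cycle in range(1, n_cycles + 1):
        pull_time = cycle * TAKT_S
        # Credit every part the robot delivered before this pull.
        while next_delivery <= pull_time:
            buffer_level += 1
            next_delivery += pick_time()
        if buffer_level > 0:
            buffer_level -= 1
        else:
            starved += 1
    return starved

for buf in (0, 2, 5):
    print(f"initial buffer of {buf}: {starved_cycles(10_000, buf)} starved cycles")
```

Even though the robot keeps up on average, a run of slow picks can starve the downstream station when there is no buffer, while a modest buffer absorbs the variation, which is exactly the line-design point Glover raises.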
Cognex, with its new In-Sight 3D-L4000 embedded vision system, has been encouraged by what 3D can do in the backend packaging process itself.
“Consider a scenario where boxes are being filled with bottles and you have to deal with something like tilt,” said Keating. “2D vision has problems with this when the bottles move around, but 3D vision provides the ability to understand depth.”
Cognex saw a large pickup in how quickly customers could solve problems in packaging. For example, in situations where customers needed to see below a certain area where the lower level is mixed, 3D solved the problem.
“Eyes are opening faster in packaging than we expected,” said Keating.
Software Opens New Doors in and Beyond the Factory
Several applications beyond what might be considered traditional machine vision benefit from 3D vision as well. Take railway inspection, for example. High speeds, constant vibration, and tough environmental conditions can make 3D railway inspection applications difficult. But a correctly installed system that inspects rail profiles and wheel rims for wear and damage helps prevent accidents and ensure safe and reliable operation. This is a popular application for Photonfocus’ triangulation-based cameras — which pair a separate camera with an external laser rather than combining them into an integrated unit — and the company has seen consistent and even increasing business in rail.
Photonfocus offers a few specific advantages for such an application, including an in-camera firmware feature called LinLog that allows the cameras to perform high-dynamic-range imaging in real time.
“In a rail application where the camera points downward, there may be a nice dark surface that allows an integrator or end user to control the lighting on the surface to an extent,” said Mike Faulkner, Sales Manager, Americas. “As soon as you start going up and factoring in things like the sun, this can create problems fast.”
With LinLog, users can have bright and dark conditions at the same time and the technology provides an averaged-out image that can be used for inspection purposes.
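The general idea behind a linear-logarithmic response is that the sensor responds linearly at low illumination and compresses bright illumination logarithmically, so dark and bright regions fit in one frame without clipping. The sketch below illustrates that idea only; the knee point and scaling values are made-up assumptions, not Photonfocus calibration data.

```python
# Minimal sketch of a linear-logarithmic response curve: linear up to a knee
# point, then logarithmic compression of bright light. Values are illustrative.
import numpy as np

def linlog_response(illuminance, knee=0.1, max_illuminance=100.0, full_scale=255.0):
    """Map scene illuminance (arbitrary units) to a pixel value without hard clipping."""
    illuminance = np.asarray(illuminance, dtype=float)
    linear_part = illuminance / knee                   # 0..1 over the linear range
    log_part = 1.0 + np.log10(illuminance / knee)      # compressed above the knee
    response = np.where(illuminance <= knee, linear_part, log_part)
    max_response = 1.0 + np.log10(max_illuminance / knee)
    return np.clip(full_scale * response / max_response, 0.0, full_scale)

# Dark and very bright regions both stay in range instead of saturating.
print(linlog_response([0.01, 0.1, 1.0, 10.0, 100.0]).round(1))
```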
Detecting multiple laser line profiles is a complex inspection task. Photonfocus offers this capability with its Multipeak Linefinder algorithm. For example, when a laser beam runs over something specular, it bounces around, creating one main line as well as a number of secondary lines. The Photonfocus algorithm helps the OEM find the right laser line for inspection purposes.
“Photonfocus has deep experience with 3D algorithms, and we always improve them based on customer needs,” said Reto Lienhardt, Head of Firmware/Design. “For example, when scanning metal or glass, the inherent issues of reflection can be solved in the algorithms. This is one of the main reasons we implemented the detection of more than one line in parallel for our customers, and why our 3D solutions are deployed into railway inspection, along with several other markets.”
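As a simplified illustration of the idea (not the Photonfocus Multipeak Linefinder itself), the sketch below collects every intensity peak above a threshold in each image column as a candidate laser-line position, then keeps the strongest candidate per column as the primary profile.

```python
# Generic sketch of multi-peak laser line detection: per column, find all
# intensity peaks above a threshold, then keep the strongest as the main line.
import numpy as np

def find_line_peaks(image, threshold=60, max_peaks=3):
    """Return, per column, up to max_peaks candidate row positions (strongest first)."""
    h, w = image.shape
    candidates = []
    for col in range(w):
        column = image[:, col].astype(float)
        # A local maximum above the threshold is a candidate laser-line peak.
        is_peak = (column[1:-1] > column[:-2]) & (column[1:-1] >= column[2:]) & (column[1:-1] > threshold)
        rows = np.nonzero(is_peak)[0] + 1
        rows = rows[np.argsort(column[rows])[::-1]][:max_peaks]
        candidates.append(rows)
    return candidates

def primary_profile(candidates):
    """Take the strongest peak per column as the main laser line; NaN where none was found."""
    return np.array([c[0] if len(c) else np.nan for c in candidates])

# Illustrative use: a synthetic image with a bright main line and a weaker reflection.
img = np.zeros((100, 200), dtype=np.uint8)
img[50, :] = 200       # main laser line
img[70, 80:120] = 120  # spurious secondary reflection
profile = primary_profile(find_line_peaks(img))
print(profile[:5])
```

A real implementation might also use sub-pixel peak localization and continuity checks across columns to reject the secondary reflections described above.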
For machine makers and systems integrators wanting to develop their own specialized 3D imaging systems, software tools such as the Easy3D libraries, which are part of Euresys’ Open eVision software, enable the development of 3D machine vision inspection applications. For example, a customer looking to implement a 3D imaging system with one or two laser lines projected onto a moving surface requires laser line extraction; the extracted lines can then be converted to a height map or 3D point cloud for processing and analysis in 3D, explained Mike Cyros, Vice President, Sales and Support, Americas.
“In addition to the Easy3D software libraries, Euresys also offers ready-made hardware for applications such as this with our Coaxlink Quad 3D-LLE frame grabber, which does the laser line extraction from a CoaXPress camera automatically, on the fly,” he said. “Rather than offering a full fixed-function 3D solution, we instead offer the building blocks necessary for our customers to more easily build their own 3D vision systems. Easy3D software and the Coaxlink Quad 3D-LLE offer an easy-to-use, extensive range of functionality for system developers.”
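For a sense of what happens after the line extraction step, the sketch below converts extracted laser-line row positions into a height map and a 3D point cloud under a simple linear calibration. It does not use the Easy3D or Coaxlink APIs; the calibration constants and scan parameters are illustrative assumptions.

```python
# Generic sketch: turning extracted laser-line row positions into a height map
# and a point cloud as the part moves under the laser. Constants are illustrative.
import numpy as np

MM_PER_ROW = 0.1        # height change per pixel of line displacement
MM_PER_COL = 0.2        # lateral resolution across the laser line
MM_PER_PROFILE = 0.5    # conveyor travel between successive profiles
REFERENCE_ROW = 400.0   # row where the line falls on the flat reference surface

def profiles_to_height_map(line_rows):
    """line_rows: (num_profiles, num_columns) array of extracted laser-line row positions."""
    # Under this simple calibration, displacement from the reference row maps linearly to height.
    return (REFERENCE_ROW - np.asarray(line_rows, dtype=float)) * MM_PER_ROW

def height_map_to_point_cloud(height_map):
    """Return an (N, 3) array of XYZ points in millimeters."""
    num_profiles, num_columns = height_map.shape
    y = np.arange(num_profiles) * MM_PER_PROFILE   # along the direction of travel
    x = np.arange(num_columns) * MM_PER_COL        # across the laser line
    xx, yy = np.meshgrid(x, y)
    return np.column_stack([xx.ravel(), yy.ravel(), height_map.ravel()])

# Illustrative use: 300 profiles of a part that raises the line by 50 rows in the middle.
rows = np.full((300, 640), 400.0)
rows[:, 200:440] = 350.0
cloud = height_map_to_point_cloud(profiles_to_height_map(rows))
print(cloud.shape, cloud[:, 2].max())   # (192000, 3) and a 5.0 mm peak height
```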
3D Imaging, Moving Forward
3D imaging technologies continue to grow and may be reaching an inflection point. People will begin to think of 3D as a generic term for a collection of different devices that create 3D images in different ways, just like there are 2D cameras that create 2D images in different ways, suggested John Keating.
“3D can’t really be considered one technology, since we have laser-based methods, time-of-flight, stereo vision, and so on,” he said. “I think people are starting to shake their preconceived notions of 3D and will start thinking of it in a more similar vein to 2D, and soon.”
Even the names of 3D imaging technologies vary; many companies do not market comparable products under the same name. But a solution by any other name still boosts productivity, which is what matters most.