Industry Insights
2D or 3D Machine Vision? Why Not Both?
POSTED 09/28/2015 | By: Winn Hardin, Contributing Editor
It wasn’t too many years ago that system designers and integrators would do whatever they could to avoid 3D machine vision. It required complex lighting systems, lots of processing power, more engineering, and even more money.
Today, with greater computing power and newer, faster CMOS camera sensors, 3D machine vision equipment and software suppliers have simplified 3D system setup while adding capabilities to their products, such as combining 2D and 3D images to make their systems more robust. As a result, applications that could never have justified the cost of 3D are adopting the technology at record rates.
2D vs. 2.5D vs. 3D
Any 3D imaging discussion starts with a definition of terms. A standard 2D machine vision image is flat: calibrated to allow the measurement of length and width, it provides no height information. The next step, 2.5D, adds Z-axis (height) information to the X and Y axes; it also provides information that allows the machine vision system to estimate the object’s rotation (pitch and yaw) around two of the three axes. True 3D provides X, Y, and Z information as well as rotational information around all three axes (rX, rY, and rZ). For the “holy grail” of 3D vision (bin picking) and many other emerging applications, only 3D will do.
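To make the distinction concrete, the three levels can be thought of as progressively richer pose records. The Python sketch below is purely illustrative; the field layout is an assumption, not any vendor’s data model:

```python
from dataclasses import dataclass

@dataclass
class Pose2D:
    """2D result: a position in a calibrated plane; no height information."""
    x: float  # mm
    y: float  # mm

@dataclass
class Pose25D(Pose2D):
    """2.5D adds height plus estimated tilt around two of the three axes."""
    z: float   # mm, height above the reference plane
    rx: float  # degrees, estimated pitch
    ry: float  # degrees, estimated yaw

@dataclass
class Pose3D(Pose25D):
    """Full 3D pose: all six degrees of freedom (X, Y, Z, rX, rY, rZ)."""
    rz: float  # degrees, rotation around the Z axis
```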
“3D vision and robot bin picking have been tied very tightly together,” explains Jim Anderson, Product Manager – Vision at SICK, Inc. (Minneapolis, Minnesota). “Visionaries like Adil Shafi worked on this for 15 years, but now, bin picking is really starting to bear fruit.”
Companies like SICK are simplifying the deployment of machine vision’s toughest applications by leveraging computational horsepower and smarter software. For example, SICK offers a productized bin-picking system, known as precise location of parts in bins (PLB), that couples a laser triangulation-based machine vision system with specialized software that handles the many ancillary considerations involved in a successful bin-picking application.
Better 3D Software
“Historically, all 3D software was based on 2D algorithms that were put to 3D use,” says Nicholas Tebeau, Manager, Vision Solutions Product Group at LEONI Engineering Products & Services (Lake Orion, Michigan). “Now, companies are offering proper 3D tools that make the whole application easier. And it’s not just a question of 2D or 3D. With bin picking, for instance, you have to have proper grippers. The software has to account for the gripper and make sure it doesn’t collide with the bin walls, and it has to not just find the part’s orientation but also know where you can grab it and where you can’t based on the part geometry. Luckily, we can use FPGAs and other tools to make the whole process faster, but there is still a lot to consider from an integrator’s point of view.”
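As a rough illustration of the kind of check Tebeau describes, a bin-picking planner might test each candidate grasp against the bin geometry before committing to it. The axis-aligned sketch below is a deliberately simplified stand-in for real collision checking; the names and box-only geometry are assumptions, not LEONI’s method:

```python
def gripper_clears_bin(gripper_min, gripper_max, bin_min, bin_max, margin=5.0):
    """Reject a grasp if the gripper's bounding box (at the candidate pose)
    would come within `margin` mm of the bin's interior walls.
    All arguments are (x, y, z) tuples in the bin's coordinate frame."""
    return all(b_lo + margin <= g_lo and g_hi <= b_hi - margin
               for g_lo, g_hi, b_lo, b_hi in zip(gripper_min, gripper_max,
                                                 bin_min, bin_max))

# A real planner would run a check like this for every reachable grasp on every
# detected part and discard the candidates that fail before the robot moves.
```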
Not every robot guidance application requires full 3D. According to Ed Roney, National Account Manager and machine vision expert at FANUC America Corp. (Rochester Hills, Michigan), “many times all the robot needs to know is the distance to the object from the robot’s perspective. In those cases, 2.5D can work fine. But if the part isn’t located on a flat surface, or the scale of the part is unknown, a 3D point cloud is the way to go. But even when the object is on a flat surface, like boxes on a pallet, lack of contrast can be a reason to go 3D. A robot using 2D or 2.5D may not be able to easily tell where one box ends and another begins because there isn’t enough contrast between the two boxes.”
Structured light is one way that companies like SICK and Tordivel AS (Oslo, Norway) create contrast in 3D images. Tordivel, best known for its Scorpion Vision machine vision library, recently released the Scorpion 3D Stinger Camera. Unlike most stereovision cameras, which generate a Z value for every pixel based on slight differences in images acquired by two separate cameras built into a single housing and separated by a known distance, Tordivel combines stereovision with laser projection.
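The geometry behind that per-pixel Z value is textbook stereo: depth follows from the pixel offset (disparity) between the two views, the lens focal length, and the known camera separation. A minimal sketch of the relation, not Tordivel’s implementation:

```python
def depth_from_disparity(disparity_px: float, focal_px: float,
                         baseline_mm: float) -> float:
    """Pinhole stereo relation Z = f * B / d: the larger the disparity,
    the closer the point."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_mm / disparity_px

# e.g., a 140 px disparity with a 1,400 px focal length and 100 mm baseline
# puts the point 1,000 mm from the cameras.
```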
“The random pattern projector (RPP) guarantees that the object will have sufficient texture for robust stereovision calculations,” says Thor Vollset, CEO of Tordivel. “I would argue that laser triangulation is less sophisticated than stereovision because the 3D points are generated based on the angle between the camera and the laser, in fact a simple 2D calculation. A 3D-calibrated camera knows where every pixel moves in space. This lets the system move between the 2D and 3D images, using the 3D object pose to extract the most accurate 3D coordinates from the very accurate edges in the 2D images. In a laser triangulation scanner, the edges cannot be accurately described because multiple pixels are normally required to describe a 3D point. I would argue that most 3D point clouds actually have much less information than a 2D image.” [Note: Proponents of laser triangulation say their systems offer better 3D resolution than stereoscopic systems, but in the end, it comes down to stand-off distance, laser scan speed, camera resolution, and other factors.]
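For comparison, the “simple 2D calculation” Vollset refers to can be written in a couple of lines. In one common laser-triangulation geometry (camera looking straight down, laser sheet tilted at a known angle; an assumed setup, not any particular vendor’s design), a surface point’s height follows from how far the laser line shifts sideways in the image:

```python
import math

def height_from_line_shift(shift_mm: float, laser_angle_deg: float) -> float:
    """Laser triangulation: with the laser sheet tilted laser_angle_deg from
    vertical, a surface raised by h shifts the line laterally by h * tan(angle),
    so h = shift / tan(angle). shift_mm is the calibrated lateral line shift."""
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# e.g., a 5 mm line shift at a 30-degree laser angle implies a feature
# roughly 8.7 mm tall.
```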
“With our Scorpion 3D Stinger Camera and companion software, we generate a dense 3D point cloud and a companion high-resolution 2D image set,” adds Vollset. “Starting with the 3D image, you can establish an object pose or the object plane and then move to the 2D image, where we do the most accurate 3D measurements. Two years ago, I didn’t think this was possible. But now, we can extract with millimeter precision every point within a Euro pallet field of view of 800 mm x 1200 mm x 1,000 mm in a second. To do that with laser scanning could take 2 to 5 seconds, depending on the laser scan time. Another benefit of stereovision is that we can capture 3D data from moving objects without latency.”
2D and 3D: Best of Both Worlds
While stereoscopic systems can generate both high-resolution 2D images and 3D images, whether for data enhancement or to make the system easier for humans to use, not every application has the real estate for a stereoscopic camera.
Thanks to high-speed CMOS sensors, a single camera can collect both high-resolution 2D images and 3D data using laser triangulation. “Not only can you take a high-resolution grayscale or color image every 1000th frame, for example, but you can also set different regions within the camera’s field of view and collect 3D data, 2D data, or both,” notes SICK’s Anderson.
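A sketch of what that interleaved acquisition might look like in software, with a toy laser-line peak finder standing in for the camera’s on-board 3D extraction (hypothetical function names, not SICK’s API):

```python
def extract_profile(frame):
    """Toy laser-line extraction: for each image column, take the row of the
    brightest pixel as the line position (real cameras do sub-pixel peak
    fitting on-chip)."""
    return [max(range(len(column)), key=column.__getitem__)
            for column in zip(*frame)]

def acquire(frames, gray_every=1000):
    """Interleave modes: yield a full 2D image every `gray_every` frames,
    and a 3D laser-line profile for everything in between."""
    for i, frame in enumerate(frames):
        if i % gray_every == 0:
            yield ("2d", frame)                   # full-resolution 2D image
        else:
            yield ("3d", extract_profile(frame))  # one height profile per frame
```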
Capabilities like these are bringing new customers to 3D machine vision. “We have customers in the snack-food industry that are using 3D to measure 100% of their product as it passes below the camera on a conveyor,” Anderson says. “If you promise a 5-in. chocolate bar and you only produce a 4.9-in. bar, that’s fraud. So you tolerance them between 5 and 5.2 in. to make sure you’re safe. But if you can cut that down to 5.05 in., that company will save millions of dollars each year. That’s the sort of application that our new Ranger E, which provides high-resolution color images as well as 3D laser triangulation, was designed to solve.” SICK also recently introduced stationary 3D object scanning using technology similar to that of image-based 2D code-reading scanners.
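The economics behind that example are straightforward to sketch. With hypothetical production figures (none of these numbers come from SICK), the saving from tightening the mean bar length is simply the reduction in giveaway multiplied by volume:

```python
def annual_giveaway_savings(promised_in, old_mean_in, new_mean_in,
                            cost_per_in, bars_per_year):
    """Giveaway is whatever you produce beyond the promised size; tighter
    tolerances shrink it. All inputs here are illustrative assumptions."""
    old_giveaway = old_mean_in - promised_in
    new_giveaway = new_mean_in - promised_in
    return (old_giveaway - new_giveaway) * cost_per_in * bars_per_year

# Trimming the mean from 5.1 in. to 5.05 in., at $0.10 of chocolate per inch,
# across 100 million bars a year:
print(annual_giveaway_savings(5.0, 5.1, 5.05, 0.10, 100_000_000))  # ~$500,000
```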
As new solutions become available that make it easier for new machine vision users to adopt 3D technology, insiders expect the market to continue to expand. “Multidimensional imaging is certainly growing,” notes FANUC’s Roney. “We see more customers asking for it because 3D has become as easy to use as 2D machine vision.”