Vision & Imaging Blog
Emerging 3D Vision Technologies for Robot Vision & Machine Vision
New robot applications call for increased speed and the ability to find parts positioned randomly on moving conveyors, stacked in bins, or arranged on pallets. Machine vision systems are being paired with robots to help them locate and process these parts.
Often, 2D systems work just fine for vision guided robotics (VGR). 2D VGR systems can quickly process randomly located parts lying in a flat plane relative to the robot, and they are usually easier to implement, requiring only a single digital camera and software that analyzes the image. 3D VGR systems, however, let robots determine the location and orientation of parts in all three dimensions.
Application of 3D Vision
3D machine vision is a growing trend that delivers accurate, real-time position information to improve performance in guidance, inspection, and measurement applications. Because 3D machine vision detects objects regardless of their position and orientation, robots gain more flexibility and independence than their 2D-only counterparts. Robot vision with 3D lets the machine know whether an object is lying down, upright, or hanging.
Robots with 3D machine vision can perform a variety of tasks without reprogramming and can account for unexpected variables in the work environment. 3D vision allows robots to know what's in front of them and react appropriately. 3D imaging is currently used in metrology, guidance, and defect analysis systems.
Types of 3D Vision Technologies
There are different ways to implement 3D machine vision. Active techniques, such as time of flight, use an active light source to provide distance information. Passive techniques, such as stereo vision, rely on image data alone and work much like the depth perception of the human visual system.
Stereo Vision
In stereo vision, 3D information about a part is obtained by observing a common feature from two distinct viewpoints. Triangulating the feature's position between the two views yields its X-Y-Z coordinates, and if multiple features are located on the same part, the part's 3D orientation can be calculated as well.
3D stereo vision can be very inexpensive: a single 2D camera can be mounted on a robot that moves the camera between two points of view. The main disadvantage of stereo vision is that only one part can be located per “snap” of the camera.
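To make the triangulation concrete, here is a minimal Python sketch of pinhole stereo depth under assumed conditions: a rectified camera pair (or one camera moved along a known baseline), a known focal length, and a feature matched in both views. All numeric values and the triangulate helper are hypothetical, chosen for illustration rather than taken from any particular system.

```python
# Minimal pinhole-camera stereo triangulation sketch. All values here
# (focal length, baseline, principal point) are hypothetical calibration
# numbers for illustration, not figures from a real system.

F = 1200.0             # focal length in pixels (assumed)
BASELINE = 0.10        # distance between the two viewpoints in meters (assumed)
CX, CY = 640.0, 480.0  # principal point of an assumed 1280x960 sensor

def triangulate(u_left, v_left, u_right):
    """Return the (X, Y, Z) position in meters of one feature matched
    in a rectified left/right image pair."""
    disparity = u_left - u_right       # horizontal pixel shift between views
    if disparity <= 0:
        raise ValueError("matched feature must shift left in the right image")
    z = F * BASELINE / disparity       # depth from similar triangles
    x = (u_left - CX) * z / F          # back-project pixel to camera frame
    y = (v_left - CY) * z / F
    return (x, y, z)

# Example: a feature at (700, 500) in the left view and (660, 500) in the
# right view has a 40-pixel disparity, putting it 3.0 m from the camera.
print(triangulate(700.0, 500.0, 660.0))   # -> (0.15, 0.05, 3.0)
```

The key relationship is that depth is inversely proportional to disparity, which is why stereo accuracy falls off quickly for parts far from the camera.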
Time of Flight
Time of flight (TOF) 3D sensors measure how long it takes light to travel from the sensor to the scene and back to each element in the array, which is organized much like the pixel array of a CCD or CMOS vision sensor. Z-axis information is obtained from every element in the array, producing a point cloud.
The phase shift between the emitted light and the received light provides enough information to calculate the round-trip time, and spatial location is then computed by multiplying that time by the speed of light. TOF sensors typically provide lower Z-resolution than stereo systems, but at much higher frame rates.
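The distance calculation itself is simple enough to show in a few lines. The following Python sketch assumes a continuous-wave TOF sensor modulated at 20 MHz; the F_MOD value and the phase_to_distance helper are illustrative assumptions, not a specific sensor's interface.

```python
import math

# Phase-shift time-of-flight ranging sketch. The modulation frequency and
# the phase_to_distance helper are assumptions for this example, not a
# specific sensor's API.

C = 299_792_458.0   # speed of light in m/s
F_MOD = 20e6        # assumed continuous-wave modulation frequency, 20 MHz

def phase_to_distance(phase_shift_rad):
    """Convert one array element's measured phase shift (radians)
    into distance in meters."""
    # Round-trip travel time recovered from the modulation envelope's phase.
    t = phase_shift_rad / (2.0 * math.pi * F_MOD)
    # The light covers the distance twice (out and back), so halve it.
    return C * t / 2.0

# Phase wraps every 2*pi, so a single frequency is unambiguous only up to:
AMBIGUITY_RANGE = C / (2.0 * F_MOD)   # about 7.5 m at 20 MHz

# Example: a pi/2 phase shift corresponds to roughly 1.87 m.
print(phase_to_distance(math.pi / 2), AMBIGUITY_RANGE)
```

Because the phase wraps every full cycle, a single modulation frequency gives a limited unambiguous range; sensors often combine several modulation frequencies to extend it.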
Register now for AIA’s free webinar on Emerging 3D Vision Technologies: Time of Flight, Active/Passive Imaging, Options at Vision Online.