Industry Insights
Machine Vision Follows and Leads Robotic Technology
POSTED 08/07/2020
It may seem counterintuitive that a “sensing” technology like machine vision would follow the material handling technology it guides, such as robotics, rather than lead it. After all, most people have to see an object to grasp it. But as with so many emerging technology trends, context is everything.
In this case, “follow” has two very different meanings. First, robots do follow machine vision guidance when it comes to vision-guided robot (VGR) applications such as bin picking and machine tending. In these cases, the machine vision system locates both the robot’s end effector and the target product in 3D space. The machine vision system then acts as the bridge between the two coordinate systems, providing a continuous stream of offsets or spatial corrections for the robot path as it moves toward an object for picking or a container for placing. In this operational case, the robot follows the machine vision system.
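To make that bridge concrete, the sketch below shows the basic math of the handoff: a 3D point located in the camera’s coordinate frame is mapped into the robot’s frame through a hand-eye calibration transform, and its difference from the current tool pose becomes the offset streamed to the controller. This is a minimal illustration assuming a fixed, pre-calibrated camera; the matrix values and names are hypothetical, not any vendor’s API.

```python
import numpy as np

# Hand-eye calibration result: a 4x4 homogeneous transform mapping
# camera-frame coordinates into robot-frame coordinates. These values
# are placeholders; in practice they come from a calibration routine.
T_robot_camera = np.array([
    [0.0, -1.0, 0.0, 0.40],
    [1.0,  0.0, 0.0, 0.10],   # upper-left 3x3: rotation between the frames
    [0.0,  0.0, 1.0, 0.75],   # last column: camera origin in robot frame (m)
    [0.0,  0.0, 0.0, 1.00],
])

def camera_to_robot(point_camera):
    """Map a 3D point measured in the camera frame into the robot frame."""
    p = np.append(point_camera, 1.0)        # homogeneous coordinates
    return (T_robot_camera @ p)[:3]

# One vision cycle: locate the part, express it in robot coordinates,
# and compute the positional correction relative to the current tool pose.
part_in_camera = np.array([0.02, -0.15, 0.60])   # from the 3D locate step
part_in_robot = camera_to_robot(part_in_camera)
tool_in_robot = np.array([0.45, 0.05, 0.30])     # reported by the controller
offset = part_in_robot - tool_in_robot           # streamed to the robot path
```

Repeating this cycle as the robot moves is what produces the continuous stream of spatial corrections described above.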
However, when it comes to business cases and trends — for example, vision-guided robotic adoption rates — vision follows the robot.
“At Baumer, we’re seeing machine vision technology follow robotic installations into many markets,” says Doug Erlemann, Business Development Manager for Cameras at Baumer Group. “This trend is particularly pronounced in manufacturing-intensive regions such as China and Pacific Rim countries but also holds true for new automation users in the U.S. and Europe.”
Global Supply Chain Drivers
As the cradle of manufacturing, Asia-Pacific markets, particularly China, represent a huge opportunity for traditional machine vision solutions. Despite increasing labor costs, growing environmental restrictions, and related factors that add to the cost of manufacturing, China is striving to improve quality controls and maintain its status as the world’s manufacturing hub. The growth of VGR applications across all manufacturing segments is considered a driving force behind the region’s market leadership and demand.
“Just a few years ago, we did a study of robotic installations that indicated maybe only 15 percent of robot work cells used machine vision,” says David Dechow, Machine Vision Technology and Systems Expert at machine vision integrator Integro Technologies Corp. “But since that time, we’ve seen nothing but increasing interest in vision-guided robotics. Unfortunately, most robot integrators do not have machine vision professionals on staff, so they miss a lot of opportunities to sell robotic solutions where machine vision would be the enabling technology. That’s one of the exciting things about being part of Integro, where we have high-level robot and machine vision designers focused on solving applications with integrated solutions.”
Misconceptions and education hurdles still stand in the way of greater VGR adoption, adds Dechow. Today, there’s a belief that 3D VGR is too difficult to solve reliably for applications such as machine loading, palletizing, and flexible pick and place. And while it is true that tightly packed boxes or parts are a challenging VGR application, today’s machine vision technology is fully capable of guiding robots for accurate pick-and-place, bin picking, and related applications, Dechow says.
Bin Picking Simplified
Robot companies such as Fanuc and Universal Robots have followed machine vision companies such as Keyence and Matrox in providing optimized VGR solutions that simplify bin picking applications.
For example, Universal Robots’ new ActiNav Autonomous Bin Picking solution brings the simplicity of “teach by demonstration,” common to UR’s collaborative robot programming platform, to machine vision-guided bin picking.
“Machine tending has always been one of the mainstay applications for our collaborative robot arms,” says Jim Lawton, VP of Product and Applications Management at Universal Robots. “We discovered a significant market need for a simple solution that enables UR cobots to autonomously locate and pick parts out of deep bins and place them precisely into a machine. This is not pick and drop; this is accurate pick and part-oriented placement.”
While there are a variety of approaches to automating machine tending stations, many of which include implementing trays, bowl feeders, or conveyors to get the parts to the machine, Lawton explains how ActiNav bypasses this step. “Parts are often already in bins, so the most flexible and scalable option is to deliver that bin of parts to the machine and then pick them directly from the bin and place them into the machine,” he says. “This minimizes floor space and reduces the need for part-specific tooling.”
ActiNav autonomously inserts parts into computer numerical control (CNC) or other processing machines for operations such as drilling, deburring, welding, trimming, or tapping. A high-resolution 3D sensor and CAD matching enable high-accuracy picks, powered by ActiNav’s Autonomous Motion Module (AMM), which determines how to pick each part and then controls the robot to pick it and place it in a fixture every time. The AMM enables ActiNav to operate inside deep bins that hold more parts, something stand-alone bin picking vision systems struggle to accomplish.
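A simplified sketch of the pick cycle such a motion module manages appears below. The sensor, matcher, planner, and robot objects and their methods are hypothetical stand-ins for illustration, not ActiNav’s actual interface.

```python
# Hypothetical sketch of an autonomous bin-picking cycle. The four
# collaborators (sensor, matcher, planner, robot) are assumed duck-typed
# objects; none of these names correspond to a real vendor API.

def bin_picking_cycle(sensor, matcher, planner, robot, fixture_pose):
    while True:
        cloud = sensor.capture_3d()                  # high-resolution 3D scan of the bin
        candidates = matcher.match_cad(cloud)        # CAD matching yields candidate part poses
        if not candidates:
            break                                    # bin is empty; cycle complete
        pick = planner.select_reachable(candidates)  # collision-free motion into a deep bin
        if pick is None:
            continue                                 # rescan: parts may have shifted
        robot.move_to(pick.approach_pose)            # approach without disturbing neighbors
        robot.grasp(pick.grasp_pose)
        robot.move_to(fixture_pose)                  # oriented placement into the fixture,
        robot.release()                              # not merely pick-and-drop
```

The design point the sketch reflects is that vision and motion planning are coupled: the planner must reason about reachability and collisions against the bin walls, which is what allows the cycle to keep working deep inside the bin.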
VGR Moves Beyond Manufacturing
While machine tending represents a traditional manufacturing operation, the adoption of machine vision and robotics technologies is accelerating outside of traditional manufacturing markets.
Examples include significant interest in using robots as disinfecting bots during the global pandemic, as butchers to allow social distancing in food processing, and as productivity enhancement systems for overloaded warehouse fulfillment centers.
Today, automated guided vehicles (AGVs), including automated forklift trucks (AFTs), are helping the world’s largest distribution centers keep up with the massive increases in demand that have accompanied the global pandemic. A recent study from Adobe reveals that U.S. e-commerce reached $82.5 billion in May, essentially condensing 4–6 years of anticipated growth into the first half of 2020. This places enormous strain on warehouse, distribution, and shipping networks across the globe.
To increase productivity, companies such as ifm efector, inc. have developed 3D sensing solutions to allow autonomous forklifts to safely pick pallets and move them to racks without any human intervention — and at higher speeds overall than manual workers can maintain across an entire shift.
ifm’s Pallet Detection System (PDS) is mounted between the forks of an AFT. After receiving a material handling mission, the vehicle triggers ifm’s PDS camera to collect a 3D image of the pallet or rack. An embedded computer inside the camera, running a custom algorithm, filters the image for optimal AFT guidance and determines the mission target’s position relative to the AFT with 6 degrees of freedom (6DoF). At the mission’s terminus, ifm’s PDS conducts a volume sweep, using a process similar to the one used to locate the pallet, to verify that the rack or staging area is ready to receive the pallet.
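As a rough illustration of that final step, the sketch below shows one way a volume sweep check could be expressed: count the 3D points that fall inside the intended drop volume and flag the region as obstructed past a tolerance. The bounds, tolerance, and 6DoF layout are assumptions for illustration, not ifm’s implementation.

```python
import numpy as np

# Hypothetical volume sweep: given a point cloud already expressed in the
# vehicle's coordinate frame, verify that the staging volume is clear.
def volume_is_clear(points_xyz, target_min, target_max, tolerance=50):
    """Return True if at most `tolerance` points lie inside the drop volume.

    points_xyz: Nx3 array of 3D points in vehicle coordinates (m).
    target_min, target_max: axis-aligned bounds of the staging volume (m).
    """
    inside = np.all((points_xyz >= target_min) & (points_xyz <= target_max), axis=1)
    return int(inside.sum()) <= tolerance

# A 6DoF pose pairs translation (x, y, z) with rotation (roll, pitch, yaw),
# telling the truck both where the pallet pockets are and how they are tilted.
pallet_pose = {"x": 1.8, "y": -0.1, "z": 0.4, "roll": 0.0, "pitch": 0.01, "yaw": 0.05}
```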
Starting in the 1980s, Intel co-founder Gordon E. Moore taught machine vision technologists not to worry about glory or position but rather to focus on enabling new solutions for real-world problems. Intel never made a chip for vision-guided robotics, for example, but machine vision designers certainly have made excellent use of a technology developed for personal computers. The same may be true of the relationship between robotics and machine vision. We may not lead the business case today, but the world is learning, albeit slowly, that the future lies with those who have true vision.