Industry Insights
Warehouse, Collaborative Robotics Expand VGR Universe
POSTED 07/26/2017
| By: Winn Hardin, Contributing Editor
Editor's note: The Collaborative Robots & Advanced Vision Conference, November 15–16, 2017, in San Jose, California, will explore a range of current advancements in both fields, focusing on technology, applications, safety implications, and human impacts.
The evolution of vision-guided robotics (VGR), especially applications that can "see" in 3D, is allowing manufacturers and distribution centers to precisely automate tasks ranging from inspection at multiple distances to bin picking of random parts.
"General customer demands in VGR boil down to speed, accuracy, and reliability in the sense that the systems need to be able to pick or guide without error from the machine vision side," says David Dechow, Staff Engineer for Intelligent Robotics/Machine Vision with FANUC America Corp. (Rochester Hills, Michigan). Robotics suppliers also are meeting users' expectations for lower costs as even the most complex 3D vision guidance systems are coming down in price.
In terms of system deployment, traditional VGR applications such as machining, machine tending, and assembly, particularly in the automotive industry, remain strong. Meanwhile, "we are seeing heavy growth in the cutting-edge fields of aerospace, warehousing and distribution, and order fulfillment," Dechow says.
Dechow credits increased deployment in these areas to technological improvements such as higher-resolution imaging, better algorithms for locating and differentiating objects, and the ability to pick from databases of objects for part tracking or mixed-part picking.
One segment of the warehouse environment adopting VGR is the palletizing and depalletizing of structured and unstructured loads of boxes and even the unloading and loading of trailers. Furthermore, while machine vision has been guiding robots in the industrial space to pick parts from a bin and place them on a conveyor, the technology is now staking its claim in large distribution centers where the speed/accuracy/reliability triad is critical in the movement of high-volume products.
"VGR is used for individual part or mixed-bin picking to fulfill random, on-demand orders," Dechow says. "Of course that involves some 3D vision for robotics and other technologies associated with robotics, such as gripping technology and force sensing to accurately and reliably pick parts for those types of applications."
While not every robot guidance application requires 3D vision, bin picking is nearly impossible without it. The biggest challenge in 3D robotic guidance for bin picking in order fulfillment isn't the vision itself but rather finding a gripper that is suited for the wide mix of parts and the broad way they will be presented to the system, Dechow says. "Grasping a teddy bear is dramatically different than grasping a box containing a USB drive."
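To make that concrete, the sketch below shows the kind of minimal reasoning a 3D bin-picking step performs: find a candidate point in the sensor's point cloud and derive a gripper approach vector from the local surface normal. It is an illustration only, using the open-source Open3D library rather than any vendor's toolchain, and the file name bin_scan.ply is a placeholder for a real sensor capture.

```python
# Minimal sketch of one naive bin-picking step: take the topmost point in the
# scene's point cloud (often the least-occluded candidate) and use its surface
# normal as the gripper approach vector. "bin_scan.ply" is a placeholder for
# a real 3D sensor capture.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("bin_scan.ply")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

points = np.asarray(pcd.points)
normals = np.asarray(pcd.normals)

top = np.argmax(points[:, 2])       # highest point, assuming the cloud's Z axis points up
approach = normals[top]
if approach[2] < 0:                 # flip the normal so it points up, toward the gripper
    approach = -approach

print("pick point (m):", points[top])
print("gripper approach vector:", approach)
```

A production system layers part matching, collision checking, and grasp planning on top of a step like this; the grasping problem Dechow describes sits largely outside the vision code.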
Simplifying the User Experience
Machine vision integrator Artemis Vision (Denver, Colorado) is fielding fewer requests for picking and placing parts on conveyors for assembly and inspection. Instead, interest in VGR is coming from customers looking to drive instrumentation to specific positions for precise inspection and measurement. "Customers also are asking to guide robots for cutting, deburring, or other actions taken on a part," says Tom Brennan, President of Artemis Vision.
One thing helping companies such as Artemis develop VGR applications is the availability of more off-the-shelf 3D machine vision options than there were even five years ago. "These tools make it a lot easier to test the system because you can essentially set things up in the lab and see how effectively the camera generates a 3D point cloud," Brennan says. "You're not setting up your own light, your own camera, and your own lens and trying to calibrate everything to make the package work."
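A bench test of the sort Brennan describes can be sketched in a few lines. The example below, again using the open-source Open3D library as a stand-in for whatever software ships with the camera, fits a plane to a scan of a flat reference plate and reports the residual noise; the file name plate_scan.ply and the 2 mm RANSAC threshold are assumptions for illustration.

```python
# Quick bench check of a candidate 3D camera: scan a flat reference plate,
# fit a plane with RANSAC, and report the residual noise of the point cloud.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("plate_scan.ply")
model, inliers = pcd.segment_plane(distance_threshold=0.002,
                                   ransac_n=3, num_iterations=1000)
model = np.asarray(model).ravel()     # [a, b, c, d] for the plane ax+by+cz+d=0
n, d = model[:3], model[3]

pts = np.asarray(pcd.points)[inliers]
dist = np.abs(pts @ n + d) / np.linalg.norm(n)   # point-to-plane distances
rms_mm = np.sqrt((dist ** 2).mean()) * 1000.0
print(f"{len(inliers)} inlier points; RMS plane-fit noise {rms_mm:.3f} mm")
```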
On the hardware side, Brennan sees liquid lens technology as a valuable tool to enable flexible configurability in VGR because it allows inspections at multiple object distances in a way that's quicker and more robust than using a lens with a motorized focus. "A camera with a liquid lens mounts on the robot and performs inspections all around a part, whether the object distance is 2 inches or 8 inches," he says.
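In software, refocusing a liquid lens typically means commanding a new optical power. The sketch below uses the thin-lens approximation, in which shifting focus from infinity to a working distance of d meters takes roughly 1/d diopters of added power; the LiquidLens class and its set_power() method are hypothetical stand-ins for a vendor SDK, not a real API.

```python
# Hypothetical sketch of multi-distance inspection with a robot-mounted
# liquid-lens camera. Thin-lens approximation: focusing at d meters instead
# of infinity takes roughly 1/d diopters of added optical power.

INCH = 0.0254  # meters per inch

class LiquidLens:
    """Placeholder driver; a real SDK would talk to the lens controller."""
    def set_power(self, diopters: float) -> None:
        print(f"lens power -> {diopters:+.1f} dpt")

def focus_at(lens: LiquidLens, working_distance_m: float) -> None:
    lens.set_power(1.0 / working_distance_m)   # thin-lens approximation

lens = LiquidLens()
for d_inches in (2, 8):                        # the distances Brennan cites
    focus_at(lens, d_inches * INCH)
    # ...move the robot, trigger acquisition, run the inspection here...
```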
Other developments in VGR hardware and software aim to simplify the end-user experience. In September 2017, FANUC will introduce a new robot controller, the R-30iB Plus. iRVision, the company's robotic vision software, has been integrated with the FANUC robot controller since 2006: the camera interface hardware is incorporated into the controller's computer architecture, and the full iRVision software suite runs in the robot controller's processing environment along with the teach pendant and PC interfaces.
The latest controller release continues with that integrated architecture and includes significant advances in iRVision components and enhancements to the iRVision software. Additionally, iRVision's general-purpose tools, from pattern matching and blob analysis to inspection, are continually being updated and improved, according to Dechow.
"iRVision not only makes an application more efficient and better functioning because it is targeted toward a specific task, it's also easier for the customer to apply machine vision to that targeted task."
Application setup will be driven by a wizard that walks the engineer or end user through some of the more difficult aspects of robotic guidance, such as camera setup, calibration, and initial configuration of the application. Furthermore, FANUC is making changes and additions to some of the 3D VGR tools and components, as well as enhancements to existing algorithms.
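Hand-eye calibration, solving for the fixed transform between the camera and the robot's gripper flange, is exactly the kind of step such a wizard automates. The sketch below is not FANUC's implementation; it demonstrates the underlying math with OpenCV's calibrateHandEye function (available since OpenCV 4.1), using synthetic pose pairs in place of real robot poses and calibration-target detections so it runs standalone.

```python
# Demonstration of hand-eye calibration with cv2.calibrateHandEye.
# Synthetic pose pairs replace real robot poses and checkerboard detections.
import cv2
import numpy as np

rng = np.random.default_rng(0)

def rand_pose():
    """Random rotation (3x3) and translation (3x1)."""
    R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, (3, 1)))
    return R, rng.uniform(-0.2, 0.2, (3, 1))

def to_T(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3:] = R, t
    return T

X = to_T(*rand_pose())               # ground-truth camera-to-gripper transform
T_target2base = to_T(*rand_pose())   # fixed calibration target in the robot base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):                  # ten robot "stations" viewing the target
    T_g2b = to_T(*rand_pose())
    T_t2c = np.linalg.inv(X) @ np.linalg.inv(T_g2b) @ T_target2base
    R_g2b.append(T_g2b[:3, :3]); t_g2b.append(T_g2b[:3, 3:])
    R_t2c.append(T_t2c[:3, :3]); t_t2c.append(T_t2c[:3, 3:])

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
print("recovered camera-to-gripper transform:\n", to_T(R_est, t_est))
print("matches ground truth:", np.allclose(to_T(R_est, t_est), X, atol=1e-4))
```

In a real cell, the gripper-to-base poses come from the robot controller and the target-to-camera poses from detecting a calibration board in each image; wizards of the kind Dechow describes hide that bookkeeping from the user.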
Dechow expects that such integrated functionality will someday be commonplace in a robot controller. "Machine vision, along with other sensing technologies and capabilities, ultimately will be standard embedded parts of any complete robotic system, although the exact architecture of such a system might not be something that we can completely envision today," he says.
Let's Work Together
The arrival of collaborative robots on the factory floor also presents new opportunities for vision-guided projects. In collaborative applications, robots and humans work safely side by side, without the fences or light curtains that separate people from traditional industrial robots bolted to the floor.
Factories are deploying collaborative robots for a variety of applications, including multimachine inspection. "Collaborative robots allow end users to have flexible systems where they can bring the camera to many positions without having to provide all the safety mechanisms or purchase tons of cameras," Artemis' Brennan says.
To create a flexible collaborative work cell, "vision-guided robotics is the technology to enable it [to perform a task]," Dechow says. "The natural progression would be having the collaborative robot see what is going on in its entire space."
The nature of collaborative applications, however, presents some challenges when deploying VGR. In a fixed cell, where the robot is guarded and the camera constrained, "we can set the lighting and imaging so it doesn't change dramatically," Dechow says. But in a collaborative work cell with more freedom of movement, a human can interfere with those parameters and affect the reliability of the robotic processes.
"As we provide better and easier-to-use machine vision in those environments, particularly in application-based processes, we will better overcome those collaborative challenges," Dechow says.
While collaborative robots are promoted as intrinsically safe, manufacturers are still on the learning curve with this relatively new technology. "The biggest challenge goes back to safety because that familiarity is not there yet, and there hasn't been a definition of how to implement it around a collaborative robot," says Shelley Fellows, Vice President of Operations for machine vision integrator Radix Inc. (Tecumseh, Ontario).
The goal of the integrator, Fellows says, is to work closely with the customer to establish a comfort level for the operator while maximizing the efficiency of the collaborative robot.
"Perhaps the biggest obstacle in implementing VGR is the misconception among customers that the technology is hard to use. That comes from the need for more information," Dechow says. "We need to better train engineers on how to use vision-guided robotics and how to use it efficiently to make it very reliable. If we have a robot that can see and work in a more flexible environment, that makes the customer more productive and the application more efficient."