Industry Insights
Teaching Vision-Guided Robots Not to Fear the Bin
POSTED 07/09/2019 | By: Winn Hardin, Contributing Editor
Few applications exemplify the evolutionary path of vision-guided robots (VGRs) better than bin picking. Unlike transferring items from a conveyor – a task that VGRs have excelled at for decades – accurately and repeatedly reaching into an unstructured bin to empty it one part at a time has traditionally been more challenging for the average vision-guided robot workcell. Yet the ever-present demand to automate repetitive tasks continues to fuel growing interest in VGR bin picking, and the technology has progressed in step with the introduction of lower-cost smart cameras, advanced 3D imaging systems, PC-based frame grabbers, and more user-friendly robot programming environments.
Bin picking cycle times have fallen considerably, for example, but that productivity often comes at the price of higher system cost and more complex programming on the machine vision side of the application. Consequently, most commercial applications tend toward structured or semi-structured bin picking, where the robot can still rely on comparatively simple and cost-effective vision systems to find similarly shaped and oriented parts arranged in a stack.
“Vision-guided robotics is still largely based on 2D imaging technology because it favors applications familiar enough to map robot movement,” says Jim Reed, Vision Products Manager at LEONI. “The software has come a long way, but the geometry of the part is what hinders or affects the ‘go’ or ‘no go’ of a project.”
Within these conventional structured bin picking applications, however, VGRs are increasingly able to handle more variable tasks in shorter cycle times, even as calibration and training become simpler.
Camera-Robot Coordination
There is no shortage of software solutions for either machine vision or robot control. The challenge for vision-guided robotics is to integrate the two platforms so that the robot’s motions are accurately calibrated to the camera’s field of view (FOV) and any objects within it. As in any machine vision application, the imaging software must be trained to recognize the part. But it must then translate that data into coordinates the robot controller can use, so the robot can quickly move to the correct position, pick the part up, and place it.
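In the simplest 2D case, that translation amounts to mapping pixel coordinates into the robot’s coordinate frame. The sketch below is a minimal illustration of the idea rather than any vendor’s calibration routine: it fits an affine pixel-to-robot transform from a few corresponding points, gathered by jogging the robot to touch marks the camera can see. All names and values are hypothetical.

    import numpy as np

    def fit_pixel_to_robot(pixel_pts, robot_pts):
        # pixel_pts, robot_pts: (N, 2) arrays of corresponding points,
        # N >= 3, collected during calibration (e.g., by jogging the
        # robot to touch marks on a plate visible to the camera).
        px = np.asarray(pixel_pts, dtype=float)
        rb = np.asarray(robot_pts, dtype=float)
        A = np.hstack([px, np.ones((len(px), 1))])   # homogeneous [u, v, 1]
        M, *_ = np.linalg.lstsq(A, rb, rcond=None)   # 3x2 affine matrix
        return M

    def pixel_to_robot(M, u, v):
        # Map one image pixel (u, v) to robot-frame (x, y) in mm.
        return np.array([u, v, 1.0]) @ M

    # Hypothetical calibration: three plate marks as seen by the camera
    # (pixels) and the robot positions that touched them (mm).
    M = fit_pixel_to_robot([(120, 80), (1100, 95), (610, 700)],
                           [(250.0, 40.0), (250.5, 290.2), (405.1, 163.9)])
    x, y = pixel_to_robot(M, 640, 360)   # part centroid reported by vision

A real system adds lens-distortion correction and height handling, but the principle – tying what the camera sees to where the robot moves – is the same.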
To ease this process, robot OEMs like Fanuc and Epson Robots have developed turnkey software packages of their own that simplify calibration and training of the vision system, so a programming degree is not required to operate their products. When Epson Robots developed its Vision Guide software nearly two decades ago, for example, it was motivated by business rather than technical goals. “Our goal at the time was to make a vision guidance system for robots that could help developers find and place parts in under a day,” says Rick Brookshire, Director of Product Development, Epson Robots. “The challenge was that our customers couldn’t be writing 100 to 1,000 lines of code just to get this to work.”
Today, the point-and-click environment of Epson’s Vision Guide enables non-programmers to easily calibrate and train the company’s four-axis SCARA and six-axis articulated robots for structured bin picking applications. More importantly, it has allowed the robot maker to expand its potential customer base to include small- to medium-sized enterprises that cannot support a resident programmer.
Rather than focusing on software to ease VGR training, Soft Robotics applied material science. The company’s suite of robotic grippers – actuated soft elastomeric end effectors – is designed with a high degree of adaptability and conformability, enabling them to pick up a wide variety of items with minimal vision system training. This capability is particularly important for bin picking systems targeting high-mix applications in e-commerce, grocery, retail, and other industries characterized by frequent product changeovers.
“If I have to train my machine vision system to accommodate 12 SKUs, I’m okay. If I have to train it for 12 million, I’m not okay,” explains Carl Vause, CEO of Soft Robotics. “And if I’m picking up bananas, my products may all share the same SKU, but they don’t necessarily share the same shape.”
The rule of thumb in retail is that VGRs can successfully pick up 30 to 40 percent of items with an end-of-arm vacuum tool. According to Vause, first-pass analyses of Soft Robotics’ end-of-arm tools (EOATs) are showing pickup rates of 90 percent. “But the startup speed is faster, which means you can change out SKUs much faster,” says Vause.
While not expressly a machine vision solution, Soft Robotics’ EOAT technology can significantly reduce the complexity and cost of a VGR system’s vision component. The fastest VGR systems today can singulate an object, find the right approach, and calculate how to apply the vacuum picker in a few hundred milliseconds – provided the system packs sufficiently high camera resolution and computing power. In comparison, Soft Robotics’ SuperPick, Vause says, can achieve 65-millisecond scan-to-pick times using a low-cost 2D digital imager and a highly forgiving but accurate gripper design.
Axes of Retrieval
While VGRs have become easier to calibrate and train, other complexities await. The challenges mount quickly as developers confront two things in particular: the number of axes of robot movement and the level of randomness in the bin.
“Six-axis robots change a lot of things,” Brookshire says. “It’s not just a linear increase in complexity. Four-axis robots pretty much depend on parts coming from a flat surface and moving to a flat surface. But when you add two more axes to the end of the arm, you’re dealing with pitch, yaw and roll instead of just roll. That pitch and yaw adds a lot of complexity.”
Pitch and yaw give six-axis robots more freedom of movement, in theory making them better equipped to handle variation when picking up items. If objects on a flat surface are lying at a 45-degree angle to the horizontal plane, for example, a conventional 2D image map from an arm-mounted camera is often enough to distinguish individual objects and orient the robot arm for best effect.
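In the flat-surface case, the in-plane rotation – roll – is all the vision system has to recover. The sketch below shows one generic way to get it, assuming OpenCV 4 and a simple thresholded top-down image; these are illustrative choices, not anything Epson or LEONI has described, and a production system would use trained pattern matching rather than a bare threshold.

    import cv2

    def locate_part(gray_image):
        # Find the largest blob and report its centroid and in-plane
        # rotation - the "roll" a four-axis robot needs to align a gripper.
        _, binary = cv2.threshold(gray_image, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        part = max(contours, key=cv2.contourArea)
        (cx, cy), _, roll_deg = cv2.minAreaRect(part)
        return cx, cy, roll_deg

A tilted part, by contrast, needs pitch and yaw as well – exactly where 2D imaging runs out of information and the complexity Brookshire describes begins.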
As with four-axis robots, however, the challenge is calibrating the camera to the robot – only the challenge is greater with six axes. If the camera is mounted on the robot arm, the calibration target – the camera’s FOV – can be smaller, and therefore easier to calibrate. The drawback is that the vision guidance system becomes more dependent on the accuracy of the robot. As the robot moves farther from the position in which it was calibrated, its accuracy declines – meaning that 1 mm of robot movement will not necessarily translate to 1 mm in real-world space.
Static camera positions can help address this, and they can improve cycle time, since the camera can capture images while the robot moves into position. But this approach brings its own difficulties: a camera’s FOV affects its effective resolution, and the resolution required is generally dictated by the robot’s accuracy.
“If accuracy is ±2 millimeters for the robot, then we need vision hardware to deliver results in tenths of millimeters,” says LEONI’s Reed. “That means the vision sensor needs to provide pixel density equating with 0.2 mm of accuracy, and most applications don’t deliver that.”
As FOV expands, resolution declines – and large FOVs are common in automotive applications, where bins can measure more than 5 x 10 feet. The solution often comes down to mechanics or mathematics: gripper designs can sometimes compensate for near-sighted robots, while image algorithms can often derive subpixel results. But calibrating vision systems to guide six-axis robots is just preparation for the next and final frontier: random bin picking.
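A quick back-of-the-envelope calculation shows why subpixel methods matter at that scale. The numbers below are hypothetical but typical: spread a 5-megapixel sensor across a 10-foot bin and whole-pixel accuracy falls well short of the 0.2 mm figure Reed cites, while a tenth-of-a-pixel subpixel algorithm recovers it.

    # Hypothetical setup: a 5 MP sensor (2448 x 2048 pixels) imaging a
    # 5 x 10 ft bin, long axis (~3048 mm) mapped to the 2448-pixel axis.
    fov_mm = 3048.0
    pixels = 2448
    mm_per_pixel = fov_mm / pixels        # ~1.25 mm per whole pixel
    subpixel_mm = mm_per_pixel / 10       # ~0.12 mm at 1/10-pixel precision
    print(mm_per_pixel, subpixel_mm)      # 0.2 mm is out of reach per pixel,
                                          # within reach with subpixel math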
“Random bin picking has been the Holy Grail of vision-guided robotics for a decade,” says Reed. “Every application poses unique challenges, and the part geometry dictates which solution will work best. In our professional opinion, no single application or software platform can handle all variations.”
Picking the Grail
In addition to six-axis robots, random bin picking demands 3D imaging schemes that often leverage more than one camera to guide robot movement along a third axis. 3D imaging also involves more complex imaging and illumination techniques – such as time of flight, stereo vision, lidar, and laser triangulation – to calculate depth. The denser image data subsequently requires more muscular image processing hardware. And lastly, the 3D imaging system must be calibrated to a six-axis robot.
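That calibration boils down to a chain of transforms: deproject a depth pixel into camera coordinates using the camera’s intrinsics, then carry the point into the robot’s base frame through a hand-eye calibration matrix. The sketch below illustrates the chain generically – it is not any vendor’s pipeline, and the intrinsics and the 4 x 4 extrinsic are stand-ins for real calibration outputs.

    import numpy as np

    # Hypothetical pinhole intrinsics (pixels) from camera calibration.
    fx, fy, cx, cy = 915.0, 915.0, 640.0, 360.0

    # Hypothetical hand-eye result: 4x4 transform taking camera-frame
    # points (mm) into the robot base frame.
    T_base_cam = np.array([[ 0.0, -1.0,  0.0, 400.0],
                           [-1.0,  0.0,  0.0, 250.0],
                           [ 0.0,  0.0, -1.0, 900.0],
                           [ 0.0,  0.0,  0.0,   1.0]])

    def pixel_depth_to_base(u, v, depth_mm):
        # Deproject the pixel into the camera frame, then transform it
        # into the robot base frame for the motion planner to target.
        x = (u - cx) * depth_mm / fx
        y = (v - cy) * depth_mm / fy
        p_cam = np.array([x, y, depth_mm, 1.0])
        return (T_base_cam @ p_cam)[:3]

    grasp_point = pixel_depth_to_base(700, 410, 812.0)

Each link in the chain – intrinsics, depth measurement, hand-eye transform, and the robot’s own accuracy – contributes its own error, and those errors compound.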
“When you deal with this whole other dimension, it’s not like 1+1+1 is 3,” says Brookshire. “That third dimension adds a whole lot more complexity both on the vision algorithm side and the robot precision side.”
As the number of 3D bin picking demonstrations at the Automate 2019 exhibition attests, random bin picking is a hot application that has inspired a groundswell of candidate solutions building on component-level advances ranging from 3D camera systems to conformable grippers. Meanwhile, system-level innovations are enabling smoother interactions not only between vision and robotic elements, but also between VGR systems and human operators who lack advanced programming skills. Unlike machine vision applications that are bounded by the limitations of physics, random bin picking is largely an engineering challenge – which means it’s only a matter of time before the industry has solved it six ways to Sunday and moved on to the next thing.