Tips and Tricks for Designing Successful Vision-Guided Robotic Solutions
| By: Winn Hardin, Contributing Editor
When it comes to vision-guided robotics (VGR), customers can choose from a variety of systems depending on their particular need. But it’s not always clear which system is best suited to a given application. In fact, choosing a system can be downright confusing. It’s important to know what to keep in mind, and which pitfalls to avoid, when planning a VGR application.
As usual, a careful examination of the application and its context is the best starting point. Nick Tebeau, Manager of Vision Systems at Lake Orion, Michigan-based Leoni Vision Solutions, begins with a comprehensive list of questions he asks the customer when determining the system that will best address the application. First, what does the part look like? How much is it going to move? What are its component parts? The answers to these questions are integral to the design of a system.
“We’ve done simple 2D systems that may use [FANUC] iRVision, which is built in [with FANUC robots], or a Cognex In-Sight smart camera mounted to a robot, which works perfectly fine,” Tebeau says. “Or let’s say we’re picking out of a basket and that basket is moving from station to station. We may use external, fixed machine vision cameras not on the robot itself to locate the basket.”
Of all the questions he has, there’s one Tebeau generally keeps to himself. “I don’t typically ask the customer if they think it’s a 2D or a 3D application,” he says. “I’m going to do that analysis on my own and then have a discussion with the customer about my findings. They may think that 3D will always be the better option when it’s actually not always appropriate or adds unnecessary complexity and cost to the system. The number-one pitfall is still incorrect lighting.” More often than not, if a vision system is not working well on the factory floor, Tebeau finds that the culprit is a poor lighting solution.
All Vibes Are Bad Vibes
Details matter, says Tom Brennan, President of machine vision integrator Artemis Vision, Denver, Colorado. “Every slight part variation can mean a different pattern match, and each different routine will need to be tested for thousands of cycles to ensure reliability,” he notes. “Be sure you have a firm grip on the details. Does the application involve a pallet, a rack, a conveyor belt? Details often extend beyond just the part.”
It is important to hold all external factors constant, Brennan says. Robots and conveyors produce motion and vibration, for example, and slight vibrations can foul a machine vision system’s calculations, causing it to miss picks in a VGR application. Brennan advises that customers keep this in mind as they are developing their system. At the same time, be aware of what exactly the robot is doing and when. Think in parallel, Brennan adds, to maximize the workcell’s throughput. “Design your system so the robot is always moving.”
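Brennan’s “think in parallel” advice amounts to pipelining the workcell: while the robot executes the current pick, the vision system is already imaging and locating the next part, so the robot rarely waits on the camera. The sketch below illustrates that overlap with a single vision worker thread; the `locate_next_part` and `execute_pick` functions are hypothetical placeholders, not any specific camera or robot API.

```python
# Sketch of a pipelined pick cycle: vision for part N+1 runs concurrently
# with the robot's pick of part N. Placeholder functions stand in for
# real camera and robot-controller calls.
from concurrent.futures import ThreadPoolExecutor

def locate_next_part():
    # Placeholder: acquire an image and compute a pick pose.
    return {"x": 10.0, "y": 20.0}

def execute_pick(pose):
    # Placeholder: command the robot to the pose and actuate the gripper.
    return f"picked at ({pose['x']}, {pose['y']})"

def run_cell(cycles=3):
    results = []
    with ThreadPoolExecutor(max_workers=1) as vision:
        pending = vision.submit(locate_next_part)          # image part 1
        for _ in range(cycles):
            pose = pending.result()                        # wait for vision
            pending = vision.submit(locate_next_part)      # image next part...
            results.append(execute_pick(pose))             # ...while robot picks
    return results

print(run_cell())
```

In a real cell the vision step would also need the scene to be still (or the part pose to be motion-compensated) at exposure time, which is exactly why Brennan stresses controlling vibration.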
Anticipating potential problems is crucial when designing a system, but so is anticipating future changes that arise as customers alter or expand their product lines. To this end, Brennan recommends choosing flexible hardware. “Changing software is typically cheaper than changing hardware, so make hardware selections that can be general purpose,” he says. “You’d be surprised at how quickly requirements and requests can grow.”
An Illuminating Problem
Lighting is a key ingredient to every machine vision system, and VGR solutions are no different. In fact, considering the size of many robotic workcells — and the difficulty in screening the workcell from outside lighting influences — lighting can be an even greater challenge for VGR systems.
According to Steve Kinney, Director of Technical Pre-Sales at JAI, Inc. in San Jose, California, ambient light on the factory floor is often unreliable, fixed lighting can be undesirable, and there’s usually little room for lighting on the equipment itself.
“Most VGR systems still use VGA-resolution cameras,” Kinney says. “The large pixels found in these cameras are especially sensitive and thus can help to overcome problems with lighting. In recent years, though, we have seen a push toward digital HDTV and megapixel cameras, which is rendering the older TV formats obsolete. However, because the cameras use smaller pixels, the move toward higher-resolution cameras can complicate the lighting situation with vision-guided robotics applications.”
Fortunately, there are several solutions to the light problem, Kinney says. Binning can increase light collection, for example, effectively providing the sensitivity of a larger pixel. A 5-MP camera with 5-micron pixels can be 2x2 binned, yielding a resolution of 1.25 MP and the equivalent of a 10-micron pixel.
Binning lowers the resolution of an image, but 1.25 MP still gives four times the number of pixels compared to VGA formats, allowing VGR designers to look at a much larger field of view. “A lot of VGR applications are not resolution limited,” Kinney says.
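Kinney’s binning arithmetic can be checked directly: 2x2 binning divides resolution by four and doubles the effective pixel pitch, and 1.25 MP is roughly four times VGA’s 0.307 MP. A minimal sketch, using the article’s example sensor numbers and a common NumPy reshape idiom for software binning (real cameras typically bin on the sensor itself):

```python
import numpy as np

def binned_specs(megapixels, pixel_um, factor=2):
    """Resolution (MP) and effective pixel pitch (um) after factor x factor binning."""
    return megapixels / factor**2, pixel_um * factor

res_mp, pitch_um = binned_specs(5.0, 5.0)   # 5-MP sensor, 5-micron pixels
print(res_mp, pitch_um)                     # 1.25 MP, 10-micron equivalent

print(res_mp * 1e6 / (640 * 480))           # ~4.07x the pixels of VGA

def bin2x2(img):
    """Sum each 2x2 pixel block (software binning); halves each dimension."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.ones((4, 4), dtype=np.uint16)
print(bin2x2(frame))                        # every binned pixel collects 4x the signal
```

Summing four pixels collects roughly four times the photons per output pixel, which is why binning recovers the light sensitivity of the older large-pixel VGA sensors at the cost of resolution.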
CMOS sensors are allowing camera makers to offer higher-resolution cameras with strong dynamic range at competitive prices for VGR and other machine vision applications. The inclusion of global shutters as a standard feature for most CMOS cameras used in machine vision also is encouraging the adoption of this low-cost sensor technology.
Today, customers have more hardware and software options than ever for building their VGR applications. By carefully matching the application requirements and contextual environment to hardware and software selections, customers can be confident that their next VGR workcell will deliver the high ROI they need to stay competitive in today’s global manufacturing marketplace.