The Eyes Have It: Robotic Vision and Guidance
| By: Bennett Brumson, Contributing Editor
As vision and guidance systems become less expensive and more user-friendly, they are being integrated into ever more robotic work cells, and the range of applications for vision continues to grow. This is particularly true in the food-processing industry.
Food for Thought
"We at BluePrint Robotics specialize in food-packing robotics. Our focus is mainly wrapper loading, tray loading, carton/case loading, and kit assembly," said Joseph Crompton, Director of Software and Controls Engineering. The robotics company is a member of the BluePrint Automation Group, based in Boulder, Colorado. Kit assemblies are made up of several components that a robot inserts into trays, containers, boxes, or cartons.
"BluePrint focuses on primary and secondary packaging," explained Crompton. "Primary packaging has robots taking unwrapped product and putting it into machines that put on the initial packaging." The food product comes out of a freezer, oven, or fryer. The company also does secondary packaging, which is taking those packages and putting them into cases or cartons.
Another vision and guidance application the company handles in the food industry is sandwiching cookies. "Components come in on a belt, where delta robots find the tops of cookies and put them on the bottoms," said Crompton.
"In food processing there are two things to keep in mind," said Edward Roney of FANUC Robotics America, Inc. "One is that the objects are typically moving, and secondly, accuracy and speed are very important." Roney is Manager of Development for Vision Products and Applications at FANUC Robotics, a robot manufacturer and systems integration firm located in Rochester Hills, Michigan.
Roney went on to say that food items, like most consumer products, also tend to be produced in high volume. "Product typically does not come to a stop station, but keeps moving in a constant flow. Integrators have to couple the vision system to the robot in a way that cameras can take an image of a part that is moving and then provide the position of that object to the robot further down the belt."
Roney said that it is necessary to have coordination between the conveyor and the robot as a tracking system, so that the robot can determine precisely how far the object has moved. "Getting the timing just right is critical. If you do not get the timing of your camera with the tracking system down within a couple of milliseconds, you could be millimeters off your pick."
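Roney's arithmetic can be made concrete with a small sketch. This is not FANUC's tracking implementation; the function names and the belt speed are illustrative assumptions, but they show why a few milliseconds of camera timing error translates directly into millimeters at the pick point.

```python
# Illustrative sketch (not vendor code): how a timing error between the
# camera trigger and the conveyor-tracking system becomes a pick error.

def predicted_position(x_at_image_mm: float, belt_speed_mm_s: float,
                       elapsed_s: float) -> float:
    """Where the tracking system believes the object is now,
    given its position when the image was taken."""
    return x_at_image_mm + belt_speed_mm_s * elapsed_s

def pick_error_mm(belt_speed_mm_s: float, timing_error_s: float) -> float:
    """Position error caused purely by mistiming the camera snapshot."""
    return belt_speed_mm_s * timing_error_s

# On a belt moving 500 mm/s (an assumed speed), a 3 ms timestamp error
# puts the pick off by 1.5 mm -- consistent with "a couple of
# milliseconds ... millimeters off your pick."
error = pick_error_mm(belt_speed_mm_s=500.0, timing_error_s=0.003)
print(f"pick error: {error:.1f} mm")  # pick error: 1.5 mm
```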
Precise Timing Leads to Better Accuracy and Speed
"With accuracy, you need to be careful. If operators are the slightest bit off, this could have a large effect when you have a high volume of product moving by," Roney cautioned. In high-speed food applications, a relatively small error in timing can rapidly accumulate into a choke point downstream on the assembly line.
There is another speed-related issue to consider when integrating food-processing applications. FANUC Robotics' Ed Roney addressed this. "Typically, vision systems can locate objects faster than a single robot can pick them. In many food applications, you would want two or three robots to one vision system."
Roney went further by saying, "Depending on how fast product is pulled off, there could be two to four robots on the line. The vision system tells which robot to pick which part and provides its location." Generally, robots have a longer cycle time than vision systems because a robot has to physically pick up objects, place them, and return to its original position for another cycle.
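One way to picture a single vision system feeding several robots is a simple dispatcher. The queue-based policy below is an assumption for illustration, not FANUC's actual scheduling logic: each detected part goes to the robot with the fewest picks already queued.

```python
# Illustrative sketch (assumed policy, not vendor code): one vision system
# assigning detected parts to whichever robot is least loaded.
from collections import deque

class PickDispatcher:
    def __init__(self, robot_ids):
        # One pending-pick queue per robot on the line.
        self.queues = {rid: deque() for rid in robot_ids}

    def assign(self, part_location_mm: float) -> str:
        """Queue the part for the robot with the fewest pending picks
        and return that robot's id."""
        rid = min(self.queues, key=lambda r: len(self.queues[r]))
        self.queues[rid].append(part_location_mm)
        return rid

dispatcher = PickDispatcher(["robot_1", "robot_2", "robot_3"])
for x in (120.0, 135.5, 151.0, 166.5):   # belt positions reported by the camera
    print(dispatcher.assign(x))
# robot_1, robot_2, robot_3, then back to robot_1
```

A real cell would also remove parts from a queue as the robot completes picks, and skip parts that have already traveled past a robot's reach.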
"BluePrint breaks vision down into two distinct aspects, robot guidance and inspection," said Crompton. Food products may be floating in a fryer, from which they go out to a cooling tunnel. From the tunnel, the product goes onto a belt, where a robot must find its location. The main challenge for the vision system is to find these randomly oriented objects and direct the robot to pick them. Complicating the process, the product is often presented to the vision system at odd angles.
"In two-dimensional robot guidance, ISRA can do part handling, palletizing, and depalletizing," said Kevin Taylor, Sales Manager at ISRA Vision Systems, a systems integrator headquartered in Lansing, Michigan. In three-dimensional applications, the company is involved in automotive painting.
Taylor continued, "In car painting applications, after bodies are primed, there is a process of seam sealing, especially on underbodies, to prevent leaks." The firm's three-dimensional vision system locates car bodies so that the robot can adjust its path to where these seams are.
He also related other automotive applications for the company's guidance systems. "ISRA does a lot with glass insertion. Typically, front and rear windshields are installed with a robot." Taylor explained that a vehicle is brought into position, but the glass opening is in a different place in three-dimensional space each time. The guidance system adjusts the path of the robot so that it can insert the glass correctly. Glass insertion demands high accuracy.
Taylor also mentioned another use for his company's robotic guidance systems. "We have an application where polyurethane foam is filling specific cavities in car bodies to quiet the vehicle. ISRA has a three-dimensional vision system to guide the robot to within 0.5 mm of these holes."
Ed Roney also spoke of the role of vision in robot guidance. "The largest use for robotic vision is guidance, which utilizes a camera in a work cell or within a tool that the robot holds. The camera looks out in two- or three-dimensional space to find an object. Once the vision system finds that object, the robot's path is adjusted for that new position."
Roney continued by saying that guidance adds real value to a robot application in ways that inspection and measurement do not. "In inspection and measurement, vision is not really adding value to the robot, just using the robot in a functional way. Robots have no way of changing their path without input. Vision provides that input about what has changed and provides guidance information to the robot about that new position."
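The path adjustment Roney describes can be sketched in the two-dimensional case: vision reports how far the part has shifted and rotated from its nominal pose, and that correction is applied to every point of the taught path. The function below is a minimal illustration under that assumption, not any vendor's guidance API.

```python
# Illustrative sketch: applying a vision-reported pose correction
# (translation dx, dy and rotation dtheta) to a taught 2D robot path.
import math

def shift_path(path, dx, dy, dtheta):
    """Rotate each taught (x, y) point by dtheta about the origin,
    then translate by (dx, dy) -- the correction reported by vision."""
    c, s = math.cos(dtheta), math.sin(dtheta)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in path]

# Taught path (mm) and an assumed vision correction: the part is
# 2.5 mm right, 1.0 mm low, and rotated 1 degree from nominal.
taught = [(0.0, 0.0), (100.0, 0.0), (100.0, 50.0)]
corrected = shift_path(taught, dx=2.5, dy=-1.0, dtheta=math.radians(1.0))
```

Three-dimensional guidance works the same way in principle, with a full 6-degree-of-freedom transform (rotation matrix plus translation) in place of the planar one.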
Already tested in commercial greenhouse applications and also being developed for long-range space missions, vision-guided six-axis Motoman robots perform intelligent tomato harvesting without damaging the plants or the produce.
"The vision system detects location, size and relative position between a targeted fruit and the robotic harvesting gripper. Working together, the 2-D vision system and robot controller guide the robot, equipped with a four-finger prosthetic hand, to the target (ripe tomato). Fast algorithms allow the system to find the mature tomatoes even if they are partially occluded by immature fruit, leaves or other plant parts. The gripper detaches the fruit from the plant, then gently holds and transfers it to an appropriate location," says Peter P. Ling, Associate Professor in the Department of Food, Agricultural, and Biological Engineering at The Ohio State University in Wooster, Ohio.
"The sensing and picking capability of the units has been demonstrated in the laboratory and also in commercial greenhouse environments where the Motoman robot is mounted on a mobile platform. Success rates of tomato fruit sensing and picking were better than 95% and 85%, respectively," Ling continues.
Part inspection is another important function of vision systems. Often, guidance and inspection are integrated into a single operation.
"First, vision systems locate the part, then inspect it. If a part is to be inspected closely, the robot needs to know its location and orientation," said Babak Habibi, President and C.O.O. of Braintech, Inc. "Otherwise, you are limited to a fixed camera that might not be able to see the features on the part that you want to inspect." Braintech, of North Vancouver, British Columbia, Canada, is a maker of robotic software packages.
Habibi went on to say, "Our software is able to locate parts in three-dimensional space using an image from a robot-mounted camera. This information is utilized to focus inspection on specific features of a part. Besides inspection, the position and orientation of parts can be used to instruct the robot to perform operations such as handling without the need for expensive fixturing and precision dunnage."
Joe Crompton of BluePrint says inspection verifies that food products are the right size and shape, and are not broken. Some pieces cannot be picked if they are frozen together, because there might not be clearance for the gripper to go around them. "Inspection is where vision is really strong, especially when combining it with guidance, so the robot only picks good product and lets the rest pass," Crompton said.
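Crompton's "pick good product, let the rest pass" pattern amounts to a filter between inspection and guidance. The sketch below illustrates the idea; the thresholds and field names are assumptions for illustration, not BluePrint's actual criteria.

```python
# Illustrative sketch (assumed thresholds): inspection results gate which
# detected products get handed to the robot as pick targets.

def passes_inspection(product, min_mm=40.0, max_mm=60.0):
    """True only if the product is within the assumed size window
    and was not flagged as broken."""
    return min_mm <= product["size_mm"] <= max_mm and not product["broken"]

detected = [
    {"size_mm": 52.0, "broken": False, "x_mm": 110.0},  # good: pick
    {"size_mm": 31.0, "broken": False, "x_mm": 140.0},  # undersized: let pass
    {"size_mm": 55.0, "broken": True,  "x_mm": 170.0},  # broken: let pass
]
pick_list = [p["x_mm"] for p in detected if passes_inspection(p)]
print(pick_list)  # [110.0]
```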
Fringe Benefits of Vision
Beyond performing guidance and inspection tasks, robotic vision systems deliver other advantages when deployed. Adil Shafi of Shafi, Inc., Brighton, Michigan, outlined some of them. "Customers save on labor, maintain throughput consistency, and get increased flexibility."
Shafi, President of the software firm, elaborated on these points. "Traditionally, assembly lines required labor to pick up heavy and sharp parts. Vision systems help save on that labor. Also, the variability in how fast people put parts onto assembly lines affects how much a plant or an assembly line makes in a day. Throughput is more consistent with robotic vision."
Shafi also spoke of the added flexibility that vision brings to already flexible robotic work cells. "End-users do not need to retool as often, making them more globally competitive. Retooling often takes two weeks and often requires plants to shut down." This increased flexibility helps accelerate the return on investment in vision systems and justify the purchase.
Experts in robotic vision systems will have a chance to network and share ideas at the Machine Vision for Robot Guidance Workshop, sponsored by the Automated Imaging Association (AIA). Its sister trade group, the Robotic Industries Association (RIA), is also actively involved in organizing the event, which will be held in Nashville, Tennessee on October 5 and 6 and will include a series of workshops and tutorials on the art and science of machine vision.
Braintech's Babak Habibi will conduct a workshop that focuses on three areas. "I will contrast traditional machine vision with robot guidance technologies and describe the differences between 2D, 2.5D and 3D algorithms in the context of application examples. I will describe single-camera 3D technology, which enables reliable, practical vision-guided robots. Finally, I will highlight factors contributing to the success or failure of vision-guided robot (VGR) systems, such as the importance of choosing a fully engineered product based upon a common software platform and proper integration."
Adil Shafi's workshop promises to be equally interesting. "First, I will review a few case examples. Second, I will give a quick overview of robots and vision, covering what the combination can do. I will ask, 'If robots continue to be flexible, why do end-users need vision?' and take people through a number of short movies on various applications. Third, I will illustrate various vision solutions that are popular."