Robotics+Vision at a Glance: The Dos, Don’ts and Applications

POSTED 07/12/2013 | By: Tanya M. Anandan, Contributing Editor

Photovoltaic solar cells are aligned for processing using vision guided robotics (Courtesy of Cognex Corporation)

Robots are renowned for their repeatability and speed. They take on monotonous, physically taxing, and even hazardous jobs without breaking a sweat, and all while boosting throughput. This works great in the structured environments that robots have been working in for decades, where you have fixed part presentation.

Once you introduce variability in the way the parts are presented, blind robots need some assistance. That’s when the robot’s guide – machine vision – comes into play. For increased flexibility, robots need visual perception.

“When users require more flexibility or are looking to save money on fixturing, that’s what might push you from a blind robotics application to a vision guided robotics application,” says David J. Michael, Ph.D., Director of Core Vision R&D at Cognex Corporation in Natick, Massachusetts. “Compared to blind robots, vision guided robots eliminate the need for hard tooling and are more suitable for mixed-model processing applications.”

Variability, Flexibility and Quality
Being able to manage part variability without fixturing and facilitate quick changeovers are the hallmarks of vision guided robotics (VGR). Consistent quality is another primary advantage.

“With a guided system and the correct field of view, you’re able to drive the assembly of components down to levels of 0.001 inch or better depending on the robot,” says Phil Baratti, Applications Engineering Manager at Epson Robots in Carson, California. “This is possible with manual assembly, but to do it consistently over long periods of time can take a toll on the operator. Once a vision guided system is debugged and running well you can expect years of uptime and productivity.”

VGR technology, however, comes with its own set of complexities. There are numerous considerations when evaluating a robotics application for vision. We’ll survey them here and examine some of the pitfalls to avoid. We’ll also explore new technologies shifting the vision world on its axis, while demonstrating a wide variety of VGR applications, from machine tending and auto assembly to food processing and logistics.

Major Considerations for VGR
When evaluating an automated application for vision guidance, many of the considerations are the same as for any robotics application.  Robustness, accuracy, speed, ease of use, and ease of setup all need to be assessed.

“Several questions need to be answered before considering how to automate the vision aspect of an application,” says Tim DeRosett, Director of Strategic Initiatives at Yaskawa Motoman Robotics in Miamisburg, Ohio. “These include: What am I moving and how is it presented? What is the size of the part? Does up/down orientation matter? Is radial orientation important? How will the robot acquire the part and what are the space constraints? What must the robot do with the part after acquiring it?”

“Also, do I need to distinguish between multiple parts? Are they disjointed, touching or overlapping?” DeRosett adds. “Singulated parts are pretty straightforward nowadays.”

Stäubli TX60 robot performing vision guided conveyor tracking to locate electrical connector components (Courtesy of Stäubli Corporation)

“You need to have good information from the vision system (backlighting, lens selection, filtering, etc.) or the end result will not be reliable,” explains Chad Henry, North American Sales Manager at Stäubli Corporation in Duncan, South Carolina. “This data, along with the other variables in the cell (robot, gripper, product variation, etc.), determine the total capability of the system and must be taken into consideration.”

Reflective (specular) objects or parts with low contrast require special consideration. “Specular objects present a challenge,” says Cognex’s Michael. “You need to be careful with illumination, with the kinds of cameras used, the exposure, and how you take the image. You also need vision tools that are tolerant of specularity.”

Lighting
Many of the challenges surrounding vision guided robotics over the last couple of decades have centered on lighting and optics (lens types, field of view, and depth of focus). Recent strides in sensor technology and software have made vision systems more adept at handling lighting variability. Advancements in lighting technology have also increased reliability.

“LED lighting has really come on strong and is much more reliable,” says Yaskawa’s DeRosett. “They don’t degrade over time like incandescent and don’t switch off and on like fluorescent lighting, which can impact high-speed applications. LEDs also have less heat output and longer life than incandescent and even halogen.”

Epson’s Baratti cautions against using one lighting scheme for different parts in an application. “The designer needs to consider a specific solution for each part,” explains Baratti. “One part may need a backlight, another may need on-axis lighting, and another may image best with a diffused ring light. Having these different configurations isn’t necessarily a bad thing, as long as you design into the system the ability to turn on and off each light as needed.”

Baratti says that it’s also important to consider the part features that need to be identified. “For instance, if there is a pattern on the top of the part that needs to be aligned with the inside edge of a bezel, then the system needs to be designed for top-light imaging. If you backlight a part that has top features needed for assembly, the resulting part silhouette may not provide the precision you need for the application.”
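As a rough illustration of that design choice, a cell program might keep a lighting recipe for each part type and switch the individual lights just before each image is taken. The recipe entries and the set_light() call below are hypothetical placeholders, not any particular controller’s API:

```python
# Hypothetical sketch of per-part lighting recipes. Light names and set_light()
# stand in for whatever digital outputs the real cell controller exposes.

LIGHTING_RECIPES = {
    "bezel_pattern": {"backlight": False, "on_axis": True,  "ring_diffuse": False},
    "gasket":        {"backlight": True,  "on_axis": False, "ring_diffuse": False},
    "machined_cap":  {"backlight": False, "on_axis": False, "ring_diffuse": True},
}

def set_light(name, on):
    print(f"{name} -> {'ON' if on else 'OFF'}")  # placeholder for a real output channel

def apply_recipe(part_type):
    """Switch each light on or off for the part about to be imaged."""
    for light, state in LIGHTING_RECIPES[part_type].items():
        set_light(light, state)

apply_recipe("bezel_pattern")
```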

Production-Level Parts
Vision systems, although flexible, are still limited if they are not designed to handle part variation. Baratti says a common pitfall to avoid is developing a vision solution with parts that are not production level.

“The parts used may image up very clean during the development, but may be presented to the production system with fluids on them, or may change in image quality due to oxidation or material variations,” he explains. “Production-level parts are needed during the design stage of a vision solution, so that the variables can be accounted for during development.”  

Delivery Systems
Most VGR applications consist of three core systems: the robotics, the vision system, and some type of delivery or bulk handling system. The part delivery system is a major consideration in designing any VGR solution.

“Are the objects moving in or through the robot cell?” asks Pierantonio Boriero, Product Line Manager at Matrox Imaging in Montreal, Canada. “This motion will need to be tracked, through vision or other electromechanical means, to compensate for the delay between finding the object and moving the robot to that location.”
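As a rough sketch of that compensation, assuming a constant-speed belt and known vision and motion delays (real cells typically read belt travel from an encoder rather than a clock), the predicted pick position might be computed as below; the function name and numbers are illustrative only:

```python
# Minimal sketch: compensating for the delay between imaging a part and
# picking it from a moving conveyor. Names and numbers are hypothetical.

def predict_pick_x(x_at_capture_mm, belt_speed_mm_s, vision_latency_s, robot_move_s):
    """Estimate where the part will be (along the belt axis) when the gripper arrives."""
    delay = vision_latency_s + robot_move_s
    return x_at_capture_mm + belt_speed_mm_s * delay

# Example: part found at x = 250 mm, belt running at 200 mm/s,
# 0.1 s of vision processing plus 0.4 s of robot motion.
target_x = predict_pick_x(250.0, 200.0, 0.1, 0.4)
print(f"Pick the part at roughly x = {target_x:.0f} mm downstream")  # ~350 mm
```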

Epson’s Baratti says bulk singulation is becoming more common. “Since parts are in bulk bags or boxes, there needs to be some way of separating the parts for assembly. This can be done with a custom bowl feeder, or with a vision guided system and a flexible feeder.”

This video courtesy of Epson Robots demonstrates vision guidance for circular conveyor tracking. An Epson G6 SCARA robot is using 2D vision to locate two types of parts and place them on different pallets.

“Circular conveyor tracking works like a recirculating flexible feeding unit,” says Baratti. “It allows you to recirculate naturally. Compared to linear conveyor systems, circular systems provide a smaller footprint for bulk singulation. They are faster and there’s less travel on the part. It doesn’t have to go all the way down to the end of the belt and come back.”

2D, 2.5D or 3D
Epson’s circular conveyor demonstration uses 2D VGR. What’s the difference between two-dimensional (2D), two-and-a-half-dimensional (2.5D), and three-dimensional (3D) vision technology? When do you use one over the other?

Comparing 2D, 2.5D and 3D VGR (Courtesy of Universal Robotics)

When you consider an object in 2D, you have the horizontal (X) and vertical (Y) coordinates. You can also have rotation around its center axis (Rz in the diagram). This changes the angle of the object (also referred to as ‘roll’). But there’s no tilt. The part remains flat.

A 2.5D application adds another measurement. Cognex’s Market Development Manager John Lewis explains: “Take a stack of plastic trays, each one containing several parts. The robot unloads the top tray and when that’s empty, the robot removes it and goes back to empty the next tray. Each time the robot empties and removes a tray, the vision system needs to measure the height of the remaining trays, so the robot can position itself for the next task.” That depth measurement (Z in the diagram) coupled with the flat-plane measurements makes it 2.5D.

Now think of an airplane. It may ‘pitch’ forward, pointing its nose downward, or pull upward, dipping its tail. Pitch is represented by Rx in the diagram. The plane can also ‘yaw,’ swinging its nose to the left or right, represented by Ry. In 3D applications all six coordinates may come into play, also called six degrees of freedom.
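One way to picture how the degrees of freedom stack up, using the symbols from the diagram, is as a set of pose records; the field names below are purely illustrative:

```python
# Illustration of the degrees of freedom discussed above (symbols follow the diagram).
from dataclasses import dataclass

@dataclass
class Pose2D:            # flat part on a plane: position plus in-plane rotation
    x: float             # mm
    y: float             # mm
    rz: float            # deg ("roll" in the article's terms)

@dataclass
class Pose2_5D(Pose2D):  # adds a height measurement, e.g. the top of a tray stack
    z: float             # mm

@dataclass
class Pose3D(Pose2_5D):  # full six degrees of freedom
    rx: float            # deg, pitch
    ry: float            # deg, yaw

# A 2.5D tray-unloading fix: same X/Y/Rz find, but Z drops by one tray thickness per cycle
top_of_stack = Pose2_5D(x=412.0, y=88.5, rz=1.2, z=164.0)
```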

Matrox’s Boriero notes specific considerations regarding 3D VGR. “Do the objects have unique features? This is particularly relevant in establishing the imaging, positioning and orientating techniques to employ. How many possible resting states can an object have? Is the object rigid or flexible? This is particularly relevant for 3D VGR in establishing the complexity level for finding an object and processing it.”

This video courtesy of Kawasaki Robotics (USA) Inc. shows a 3D VGR application where the vision system is being used to locate randomly positioned bread dough on a conveyor. Then a Kawasaki RS Robot equipped with an ultrasonic cutting knife scores the baguettes prior to baking.

Keep It Simple
Most robot manufacturers, vision suppliers and robot integrators agree that when considering 2D versus 3D, the simplest solution is the best. Matrox’s Boriero stresses the importance of taking the time to carefully evaluate an application to see if it can be simplified for 2D VGR.

Yaskawa’s DeRosett sums it up: “There have been a lot of exciting advancements in 3D technology over the last few years, but if an application can be solved with a 2D system, then use 2D. Typically, 2D systems are more economical and less complicated to implement.”

“You have to consider the long-term goal for the cell and how flexible you want it to be,” suggests Earl Raynal Jr., Sales Manager for Motion Controls Robotics Inc. (MCRI), an RIA Certified Robot Integrator in Fremont, Ohio. “One of the best practices we can do is to focus on the customer’s needs and try to provide the least complicated solution.”

2D for Consumer Goods
MCRI took a rather complex process and designed a 2D VGR solution for assembling specialty plastic bottles used for packaging face cleansing products. The first stage in the process is bottle descrambling, which uses one camera, line tracking software, and FANUC PickPRO software to distribute the workload among four robots.

Four FANUC LR Mate 200iC robots employ one 2D camera to pick and place randomly oriented plastic bottles for downstream processing (Courtesy of Motion Controls Robotics Inc.)

The bottle assembly process involves three stages. First, the bottles are fed from a bulk feeder in random positions and orientations as they travel on a conveyor at a high rate of speed (120 units per minute). One 2D camera looks at the position and orientation of the bottles. The first, or master, robot in the series uses software to assign bottles to each of four robots ahead of time to distribute the workload. The bottles are then placed onto two conveyors to move them to an inversion operation.

“The bottles were a challenge because they are asymmetrical, but need to be presented to the downstream operation in a consistent orientation,” says Raynal. “A simple solution was to provide a left-hand and a right-hand conveyor, so bottles oriented one way can be set down on one conveyor and bottles oriented the other way can be set down on the other conveyor.”
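A rough sketch of that kind of workload distribution (not FANUC PickPRO’s actual interface) might round-robin the detected bottles across the four robots and route each one to the left- or right-hand conveyor based on its measured angle; the threshold and field names are hypothetical:

```python
# Hedged sketch: spreading detected bottles across four pick robots and routing
# them to a left- or right-hand conveyor based on how they are lying.
from itertools import cycle

robots = cycle(["R1", "R2", "R3", "R4"])   # simple round-robin load balancing

def assign(bottles):
    """bottles: list of dicts with 'x', 'y', 'angle_deg' from the 2D camera."""
    plan = []
    for b in bottles:
        conveyor = "left" if -90.0 <= b["angle_deg"] < 90.0 else "right"
        plan.append({"robot": next(robots), "conveyor": conveyor, **b})
    return plan

found = [{"x": 120.0, "y": 45.0, "angle_deg": 30.0},
         {"x": 310.0, "y": 52.0, "angle_deg": 170.0}]
for job in assign(found):
    print(job)
```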

The second stage involves additional robots to invert the bottles and place them into a screw conveyor for final assembly; no vision is required in this area. In the third and final stage, another 2D vision system guides the robots to locate face scrublets in shipping trays. The robots then pick the scrublets and place them into the bottles, which are continuously moving with the conveyor.

This video courtesy of MCRI shows all three stages of the assembly process. In this case, the simple solution for a complex problem was 2D VGR, multiple conveyors, and performing specific sub-operations in discrete robot work cells.

2D VGR AutoRacking for spot welding truck bed inners (Courtesy of Adil Shafi)

2D for AutoRacking
Another 2D VGR application that has stood the test of time involves an AutoRacking process for welding Dodge 4x4 truck bed inners. According to the inventor and installer, Adil Shafi, this was one of the first AutoRacking solutions in the automotive industry and is still in production today.

“It was implemented at the Chrysler Twinsburg Stamping Plant in Ohio during the summer of 2001,” says Shafi, President of ADVENOVATION Inc., a vision guided robotics innovation and systems integration firm in Rochester Hills, Michigan. “Since then, hundreds of similar solutions have been implemented in the industry.”

The application incorporates six ceiling-mounted Cognex In-Sight cameras (visible in the photograph) and a Nachi robot to rack the 6-foot or 8-foot-long truck bed inners. The cameras perform rack validation checks, and two of them perform linked field-of-view offsetting for proper part placement.

“We had to integrate this solution in six weeks,” says Shafi. “The challenges included having the cameras mounted 12 feet above the parts to avoid welding sparks, while viewing the 8-foot-long parts within 2-mm repeatability.”
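A simplified sketch of the math behind a linked field-of-view offset, assuming each camera reports one feature point in a common robot frame: the part’s translation and rotation offsets come from comparing the measured pair of points to the taught nominal pair. The coordinates below are illustrative, not from the Twinsburg system:

```python
# Hedged sketch of a two-camera "linked field of view" offset: each camera finds one
# feature on a long part, and the pair of points gives a translation + rotation offset
# relative to the taught (nominal) part position. All values are illustrative.
import math

def linked_fov_offset(p_left, p_right, nom_left, nom_right):
    """Each argument is an (x, y) point in a common robot/world frame, in mm."""
    meas_angle = math.atan2(p_right[1] - p_left[1], p_right[0] - p_left[0])
    nom_angle = math.atan2(nom_right[1] - nom_left[1], nom_right[0] - nom_left[0])
    d_theta = meas_angle - nom_angle                      # part rotation (rad)
    # Translation of the part midpoint from nominal
    mid_meas = ((p_left[0] + p_right[0]) / 2, (p_left[1] + p_right[1]) / 2)
    mid_nom = ((nom_left[0] + nom_right[0]) / 2, (nom_left[1] + nom_right[1]) / 2)
    dx, dy = mid_meas[0] - mid_nom[0], mid_meas[1] - mid_nom[1]
    return dx, dy, math.degrees(d_theta)

# Roughly 8-foot (2438 mm) part, small shift and twist from nominal
dx, dy, drz = linked_fov_offset((0.8, 1.1), (2438.5, 2.0), (0.0, 0.0), (2438.4, 0.0))
print(f"shift part frame by ({dx:.1f}, {dy:.1f}) mm and {drz:.3f} deg")
```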

Shafi says the system has been running for 12 years now, and over 50 more have been implemented in another 10 Chrysler plants. “They have helped to speed up assembly lines and save on labor cost, while reducing injuries from handling large, sharp parts.”

2.5D for Machine Tending
Robot integrator Interactive Design in Lenexa, Kansas, is using VGR in a milling machine tending application for replacement lawn mower blades. In this 2.5D application, an ABB six-axis robot uses a 2D camera and dedicated LED lighting mounted to its end-of-arm tool to locate the blades on a cart. A laser sensor measures the height distance to the blades.

ABB IRB 1600 robot equipped with Cognex In-Sight camera tends a milling application for replacement lawn mower blades (Courtesy of Interactive Design, Inc.)

Nathan Maholland describes the process: “The robot acquires the blade from the cart and places it onto the vertical mill turntable fixture. This process continues until the turntable fixture is full. During the loading process, the machine is milling the blades on the other side of the turntable. The robot acquires recently completed blades from the vertical mill and places them on the completed product cart.”

“The machine vision guided robot made it easy for the customer because it allows them to load a large quantity of blades into the cell without having to fixture any of the individual parts,” adds Maholland.
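A minimal sketch of how a 2.5D pick target could be assembled from a 2D camera result plus a laser distance reading; the mounting height, offsets, and function name below are assumptions for illustration, not Interactive Design’s implementation:

```python
# Hedged sketch: building a 2.5D pick target from a 2D camera result (X, Y, Rz)
# plus a laser range reading. The mounting height and values are hypothetical.

LASER_MOUNT_HEIGHT_MM = 350.0   # assumed laser emitter height above the cart base

def pick_target(cam_x_mm, cam_y_mm, cam_rz_deg, laser_range_mm):
    """Return (x, y, z, rz): the blade's top height comes from the laser, not the camera."""
    blade_top_z = LASER_MOUNT_HEIGHT_MM - laser_range_mm
    return (cam_x_mm, cam_y_mm, blade_top_z, cam_rz_deg)

print(pick_target(512.3, -48.7, 12.5, 282.0))   # -> (512.3, -48.7, 68.0, 12.5)
```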

Fixed-Camera vs. End-of-Arm Mount
In the previous application, the camera was mounted on the end-of-arm tool. That choice brings specific considerations of its own.

“The advantages of an end-of-arm mount are better accuracy and generally better pose (combination of position and orientation), versus when the camera is mounted farther away and looking down,” explains Cognex’s Michael. (Imagine having an eye on your hand.) “You can inspect quality with an end-of-arm mount. You can also get a finer estimate of the part location and take shots at different angles. If fixed-mounted, you may need more than one camera to capture those angles.”

“The trade-off is that you’re putting your camera in harm’s way,” says Michael. “Also, you have to make sure you don’t knock the camera while the robot is moving around. You may need an industrial-strength enclosure around the camera.”

“If your camera is mounted on the end-of-arm tooling and it’s moving with the robot, we recommend using high-flex cables so they don’t wear out due to repetitive motion,” adds Cognex’s Lewis. “Standard industrial cable will have a shorter life.”

Another consideration is the field of view. “Your field of view should only be as big as it needs to be,” says FANUC’s Bruce. “How big is the feature you’re looking at and how much is it going to move?”

“For a large part, you can have two cameras mounted above looking at each end of a part, or you can have a camera mounted on the end-of-arm tool that looks at one end and then moves to the other end of the part,” Bruce explains. “With a fixed-camera mount, it snaps the shot as fast as half a second or less. Then the vision analysis can happen in the background while the robot is doing something else. An end-effector mount is more flexible, but it adds to the cycle time each time the robot has to stop and take a picture of the part.”
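A back-of-envelope version of that field-of-view sizing, assuming the field only needs to cover the feature plus its expected motion, with resolution following from the sensor’s pixel count; all numbers are illustrative:

```python
# Back-of-envelope sizing for "field of view only as big as it needs to be".
# Numbers are illustrative, not from the article.

def fov_and_resolution(feature_mm, expected_shift_mm, sensor_pixels):
    fov_mm = feature_mm + 2 * expected_shift_mm      # margin on both sides of the feature
    mm_per_pixel = fov_mm / sensor_pixels
    return fov_mm, mm_per_pixel

fov, res = fov_and_resolution(feature_mm=40.0, expected_shift_mm=10.0, sensor_pixels=1280)
print(f"FOV ~ {fov:.0f} mm across, ~ {res:.3f} mm/pixel")   # 60 mm, ~0.047 mm/pixel
```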

From camera location to lighting considerations, VGR suppliers and users as a whole are getting better at identifying the critical variables and avoiding the pitfalls. As vision technology advances and the experience level of robot manufacturers, vision suppliers, robot integrators, and even end users continues to mature, we are approaching VGR with a keen eye. Sometimes even a googly eye.

The Googly-Eyed Robot
One company that’s “seeing” VGR in a whole new light is Recognition Robotics. This Ohio-based software developer distinguishes itself from other vision suppliers as a software recognition company.

“We don’t use traditional machine vision algorithms,” says Joe Cyrek, Vice President of Recognition Robotics Inc. (RRI) in Elyria, Ohio. “Our patented algorithms are based on the human cognitive ability to recognize objects.”

RRI’s RobEye system uses a 2D camera to align vehicle underbody components in six degrees of freedom (Courtesy of Recognition Robotics Inc.)

“In a machine vision world, the system is basically looking for geometric patterns and distances,” explains Cyrek. “But the human brain doesn’t process measurements. We do coordinated eye-hand movements based on what we recognize. Our recognition software basically works the same way.”

The software, called CortexRecognition®, is the brainchild of company Founder, President and CEO Dr. Simon Melikian. RRI takes its software and packages it with off-the-shelf hardware for a turnkey vision solution branded RobEye™, pronounced robe (short for robot) eye.

“The real groundbreaking and unique thing here is that with a single image from a 2D camera, we are telling the robot what the object is and where it is in space in 3D, or six degrees of freedom.”

Basically, RRI claims to be accomplishing 3D VGR with a 2D camera.

This video courtesy of RRI shows the system in action successfully aligning vehicle underbody components in a lab demo (note the googly eye). Even when the assembly components are relocated, the system adjusts the robot’s movements for proper alignment. 

RRI’s lab demo was conducted using a vehicle underbody assembly provided by a major auto manufacturer. “Our customer already uses RobEye systems for part picking,” says Cyrek. “This is the first application of the same system actually loading a part to the vehicle. We did the drill holes as a simple way to verify the parts lined up correctly and that the process was repeatable.”

“The idea is to use the RobEye system for multiple models without having to drastically change the end effector and add multiple cameras,” adds Cyrek. “That’s a huge advantage for the customer.”

Cyrek says RobEye is a “DIY (do-it-yourself) robot guidance solution” that works with any robot.

3D VGR for Logistics
Another software engineering company that’s raising eyebrows in the VGR world is Universal Robotics. The company recently announced the industry’s first unlimited depalletization system.

Yaskawa Motoman’s dual-arm robot moves boxes on the warehouse floor (Courtesy of Universal Robotics)

‘Unlimited’ refers to the software’s ability to handle an unlimited number of SKU types or box types, and any size and shape of boxes. The boxes can also be in any random orientation or location.

This video courtesy of Universal Robotics shows unlimited depalletizing in action at a distribution center.

“A lot of vision systems today require static definitions of labels or other attributes to identify a specific carton,” explains Hob Wubbena, Vice President at Universal Robotics in Nashville, Tennessee. “Ours does not. It has a whole layer of intelligence totally separate from the vision. Cartons can have any labels, printing, or graphics of any kind; it doesn’t matter. We don’t recognize boxes or cases based on size, labels, color or location.”

The artificial intelligence Wubbena is referring to is called Neocortex. The vision sensing technology is branded Spatial Vision™. Neocortex grew out of a seven-year development program between NASA and Vanderbilt University for humanoid space exploration. Both technologies have been used successfully in commercial logistics applications.

Yaskawa Motoman’s dual-arm SDA-series robots, a new breed of human collaborative robot, are equipped with Universal Robotics’ technology. This allows them to recognize boxes of varied proportions and move them to and from conveyor systems.

VGR technology is gaining steam in logistics, where randomness requires 3D accuracy. Universal’s sensor fusion technology, which leverages multiple sensing systems including Microsoft’s Kinect sensor, is expanding the possibilities for vision guided robotics in new markets.