
Advanced Vision Guided Robotics: Technology & Trends Impacting VGR Proliferation

POSTED 04/02/2013  | By: Tanya M. Anandan, Contributing Editor

Give a robot “sight” and you expand its range of possibilities. When a robot can see an object, various items can be picked and placed without the need for custom tooling. Generic bins, racks and conveyor systems can be reused with different products. These advantages typify even the most basic vision guided robotics (VGR) applications.

Enter advanced VGR, with smart camera-wielding robots employing the latest in 3D vision technology and software. Now objects of different geometry and size, contrast and color, even touching and overlapping items, become easier to detect and maneuver. Robot work cells become more adaptable to different products, short runs and quick changeovers.

“Companies could save so much money by applying vision, specifically with changeover in mind,” says Steven Prehn, senior vision product manager for FANUC Robotics America Corp. in Rochester Hills, Michigan. “It makes your tooling much simpler because now you don’t need very tight robot compliance. The robot can respond relative to its environment. It’s huge!”

Technology and ease of use have converged to where a variety of industrial applications can leverage VGR effectively. Advanced 3D sensing technologies are refining object detection and making VGR a practical solution for random bin picking, while emerging technologies are providing a tantalizing view into VGR’s future. Collaboration between vision and robotics suppliers is making implementation easier, more reliable and cost-effective. VGR is gaining momentum.

3D Sensor Technology
Object recognition has come a long way. Advanced 3D sensor technology helps detect objects with poor contrast or specularity, especially in less-than-optimal lighting conditions.

3D area sensor maps the positions of multiple parts in a bin (FANUC Robotics America Corp.)

“Object reflectivity and low contrast are age-old challenges for 2D-based systems,” says Brian Windsor, national product manager - 2D/3D smart cameras for SICK Inc. in Minneapolis, Minnesota. “The emergence of 3D scanning technologies has provided the ability to identify and locate objects based on their shape, which allows for reliable detection of objects with low contrast or complex geometries.”

“Back then we were only able to take one part and calculate its position with six degrees of freedom,” explains Prehn, noting that FANUC has had 3D products for nearly 15 years. “Now we’re at a point where we can get multiple parts within a single image view and render their positions. We now have a magnitude increase in processing capability. That opens the door for high-speed bin picking.”
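
For reference, a part position “with six degrees of freedom” is three translations plus three rotations, commonly packed into a 4x4 homogeneous transform that a robot controller can consume. The minimal sketch below is illustrative only; the ZYX Euler convention and the numbers are assumptions, not FANUC’s interface.

```python
import numpy as np

def pose_to_matrix(x, y, z, roll, pitch, yaw):
    """Pack a 6-DoF pose (meters + ZYX Euler angles in radians) into a
    4x4 homogeneous transform: rotation composed as Rz(yaw) Ry(pitch) Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    R = np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [x, y, z]
    return T

# One detected part: six numbers fully describe where it sits in the bin.
part_pose = pose_to_matrix(0.42, -0.10, 0.05, 0.0, 0.1, 1.57)
```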

The Holy Grail in 3D
Regarded as the holy grail of VGR applications, random bin picking has advanced considerably in the last few years. In the past the process was typically separated into stages. Parts were first isolated, then detected and retrieved, with another stage for more precise part orientation for subsequent operations.

Prehn describes the advantage of 3D area sensor technologies that allow a robot to be “self-aware” and hence react to its environment, ultimately speeding up the bin picking process: “Being self-aware is the knowledge of robot kinematics, tool position, how to engage with the part, where the bin walls are, and where the part is with six degrees of freedom, all in a matter of seconds.”

This video demonstrates how a robot equipped with a 3D area sensor locates and picks randomly positioned parts in a bin. Then 2D vision detects the orientation of parts on the fly, so that the robot can place each part on a conveyor in consistent orientation. Interference avoidance software prevents the robot and tooling from coming into contact with the bin walls.

“Only now has VGR technology matured to the point where bin picking is practical to apply in a multitude of general industrial applications,” says Prehn.

Bin picking with 3D image capture (SICK Inc.)

In another bin picking scenario, a 3D camera and part localization software are used to locate parts by matching CAD models to the 3D range data captured by the camera. The software analyzes the height data to locate parts that can be retrieved without the gripper colliding with other parts or the bin walls.
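
The height-data collision check can be illustrated with a simple depth-image filter. The sketch below is a simplification, not SICK’s algorithm: it assumes a height map aligned with the bin, a square gripper footprint, and hypothetical names and thresholds.

```python
import numpy as np

def pickable(height_map, candidates, gripper_half_px, clearance_m):
    """Filter candidate picks whose gripper footprint clears neighboring parts.

    height_map:  2D array of surface heights (m) from the 3D camera
    candidates:  (row, col) part locations, e.g. from CAD matching
    A candidate survives if nothing inside the gripper footprint rises
    more than clearance_m above the part's own top surface."""
    safe = []
    for r, c in candidates:
        r0, c0 = max(r - gripper_half_px, 0), max(c - gripper_half_px, 0)
        footprint = height_map[r0:r + gripper_half_px + 1,
                               c0:c + gripper_half_px + 1]
        if footprint.max() <= height_map[r, c] + clearance_m:
            safe.append((r, c))
    # Serve the highest safe part first; it is the least likely to be buried.
    return sorted(safe, key=lambda rc: height_map[rc], reverse=True)
```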

Jim Anderson, SICK’s national product manager - vision, says their PLB vision system uses a combination of scanning and snapshot 3D image capture. “This gives the system the resolution to work with a wider spectrum of objects that scanning systems enjoy, but without having to move the camera like traditional snapshot camera systems.” By integrating the 3D image capture and localization function, system setup is simplified.

Time-of-Flight Technology
Another 3D technology, time-of-flight (ToF), is taking object detection to new levels. A ToF camera measures how long a light signal takes to travel from the camera to an object and back, and converts that travel time into the object’s depth.
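
The arithmetic behind direct (pulsed) ToF is compact: depth is the speed of light multiplied by the measured round-trip time, halved. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(round_trip_time_s):
    """Depth from a pulsed time-of-flight measurement: the signal travels
    out to the object and back, so halve the round trip."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A 4 ns round trip puts the surface roughly 0.6 m from the camera.
print(tof_depth(4e-9))  # ~0.5996
```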

Parallel robot with integrated vision guidance and conveyor tracking stacks pancakes for packaging (Adept Technology Inc.)

“When you eliminate the 2D image and focus on the depth image of a product, you eliminate lighting and background coloration as concerns,” says Deron Jackson, Ph.D., chief technical officer at Adept Technology Inc. in Pleasanton, California.

“We do a lot in the food industry, with chicken, fish or even bagged goods,” explains Jackson. “Food residue on the conveyor belt can cause a real problem with object detection. Having the right vision tools and capability to isolate the part is very important. If the product has dirt on it or the lighting is poor, that’s all irrelevant (with ToF). Now you’re not looking at the color, you’re just looking at the depth.”

Calibration Wizards
To improve VGR’s reliability, suppliers are working harder to make the calibration process easier, not only for robot manufacturers and system integrators to integrate, but also for end users to operate. Calibration wizards bring the process closer to plug-and-play.

Calibration allows vision systems to report part positions in units robots can understand. The process usually involves a dot grid or checkerboard pattern, typically called a calibration plate.
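
Commercial wizards hide the details, but the underlying procedure resembles standard checkerboard calibration. The sketch below uses OpenCV as a stand-in, not any vendor’s tool; the pattern size, square size and image names are illustrative.

```python
import cv2
import numpy as np

PATTERN = (9, 6)    # inner corners of the checkerboard (illustrative)
SQUARE_M = 0.025    # square size in meters (illustrative)

# 3D corner positions on the plate, expressed in the plate's own frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_M

obj_points, img_points = [], []
for path in ["view0.png", "view1.png", "view2.png"]:  # hypothetical images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:  # tolerate views where the plate is partly out of frame
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics plus per-view extrinsics: the bridge from pixels to robot units.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```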

"It’s important for us to make the process more robust,” says David J. Michael, Ph.D., director of core vision R&D at Cognex Corporation in Natick, Massachusetts, “so that it still works even if the user doesn’t put the calibration plate in exactly the right place, or there are multiple cameras and they’re not quite all in the same field of view, or if you can’t see the entire plate from all cameras at all times.”

3D calibration example (Cognex Corporation)

Adept offers calibration wizards with its vision-integrated robots, plus specific software for conveyor-based packaging applications. “Aligning the camera to the belt, and aligning the belt to the robot, it’s all in one tool,” says Jackson. “You don’t need to worry about how to relate one coordinate system to another.”
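
Behind a wizard like that, relating one coordinate system to another comes down to chaining homogeneous transforms. A minimal sketch, with placeholder calibration matrices and an assumed belt travel axis:

```python
import numpy as np

# 4x4 homogeneous transforms produced by calibration (placeholders here).
T_belt_from_cam = np.eye(4)    # camera frame -> belt frame
T_robot_from_belt = np.eye(4)  # belt frame -> robot base frame

def part_in_robot_frame(p_cam, belt_travel_m):
    """Map a camera-frame detection into robot coordinates, accounting for
    how far the conveyor has moved since the image was captured."""
    p_belt = T_belt_from_cam @ np.append(p_cam, 1.0)
    p_belt[0] += belt_travel_m  # belt assumed to run along its +X axis
    return (T_robot_from_belt @ p_belt)[:3]

# Part seen at (0.10, 0.05, 0.0) m; the belt has moved 0.30 m since then.
print(part_in_robot_frame(np.array([0.10, 0.05, 0.0]), 0.30))
```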

Vision-Integrated Robotics
More suppliers are trending toward offering a combination of robots, vision hardware and software for a single-source VGR solution.

A major advantage of vision-integrated robotics is the engineering that’s already built into the system, or “the mathematics behind the scene,” explains FANUC’s Prehn. “A somewhat experienced engineer can take one of our products and have a functional system in less than 2 hours. That’s versus 3 weeks or more for building a system from scratch with your own algorithms and tying it into the kinematic model of a robot.”

SICK’s Anderson says there is a prevailing feeling among end users that every VGR system must be custom built from the ground up. “This new-every-time mindset has pushed costs higher and the time to deliver a working solution for the customer much longer. With vision manufacturers now creating systems for robot integration as the goal, these challenges of system cost and time to market are becoming less of a problem.”


Vision integration delivers ‘plug-and-play’ functionality for machine tool tending systems (ABB Inc.)

“Taking the mystery out of vision technology remains one of the hurdles for the robot manufacturer,” says Nicholas Hunt, manager of automotive technology and support for ABB Inc. in Auburn Hills, Michigan. “Doing this involves a strategy of collaboration between robot manufacturers and vision technology providers. ABB has made major strides over just the past year by partnering with leading vision sensor developers; in a sense, piggybacking onto their core strengths and embedding their technology into our robot controller.”

ABB recently collaborated with SVIA Industrial Automation of Sweden to bring SVIA’s smart camera system for machine tool tending to the North American market. Introduced at Automate 2013, the vision system is tightly integrated with ABB’s IRC5 controller.

The Integrator’s Role
Robot integrators play a significant role in ensuring the reliability of VGR installations, especially when the application involves sophisticated technology or specialized markets.

“Integrators are the ones usually responsible to the end users,” says Adil Shafi, president of Advenovation Inc., a vision guided robotics innovation and systems integration firm in Rochester Hills, Michigan.

Shafi stresses the importance of planning and preparation when integrating advanced VGR technology into your factory operations: work with suppliers that have proven track records, understand your product and application, ensure proper training for your operators, manage part variation (size, color and reflectivity), and, a point he says he cannot stress enough, insist on demos with all of your parts at the desired cycle times.

“All of these new innovations should be undertaken in a careful and reliable way, so as not to displace the trust that end users put in these new VGR technologies,” adds Shafi.

Reducing Costs, Reshoring Manufacturing
As reliability and ease of use progress, VGR technology is becoming a cost-effective replacement for manual assembly. Some end users are finding the increased productivity and cost savings substantial enough to reshore operations.

Adept’s CTO describes a case with Dutch producer Royal Philips Electronics: “We took what was a manual assembly line for electric shavers with 2,400 workers in China, and created an automated system in the Netherlands using Adept robots, vision systems and flexible feeders,” explains Jackson. “We worked with an integrator to put together a system of 150 of our vision-equipped six-axis and SCARA robots. That assembly line can build 600 different part numbers of Philips shavers all on the same physical system.”

The flexible feeders also use VGR technology and play a key role in the installation’s ‘future-proof’ flexibility. The systems are capable of separating and orienting a wide variety of parts and feeding them to the assembly operation.

Future Trends
As demand for mass customization grows globally, future-proof assembly is on everyone’s mind. Applications where new product models are introduced frequently, where production runs are shorter, or where changeover is more common, will benefit the most from advanced VGR. Automotive assembly, medical device manufacturing, food packaging, and pharmaceutical manufacturing are expected to be early adopters of leading-edge VGR technology.

Industry insiders expect new markets to crop up for VGR. From agricultural applications with vision guided robots working in the fields, harvesting, feeding, weeding and transporting produce and grain, to collaborative scenarios where robots are working alongside humans in manufacturing plants, the prospects are diverse. In the non-industrial realm, service robots are expected to be the largest sector for VGR growth.

“Organizations working on autonomous vehicle navigation, like Google or NASA, and humanoid assistance robots such as Toyota’s Partner Robot are leading the way in VGR technology,” says Lisa Maitre, senior application engineer at Kawasaki Robotics (USA) Inc. in Wixom, Michigan.

“With such worldwide interest in R&D of navigation systems,” explains Maitre, “we would expect to see major advances in the complex software systems required to analyze data collected by sensors and cameras in the near future. The same technology can be extrapolated on a smaller and simpler scale for industrial VGR.”

Structured-Light 3D Sensing
As mass-market appeal for consumer-oriented 3D technology drives down prices, we’ll see these technologies move into the industrial sector. Structured-light 3D sensors, such as Microsoft’s Kinect sensor for video gaming, cast an invisible infrared light pattern on an object, then use a 2D camera to detect the distortions of that light pattern and generate a 3D depth representation of the object. This same process can be used for 3D mapping of multiple objects, such as the scenario in random bin picking.
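
The depth recovery behind such pattern-based sensors is ordinary triangulation: the pattern’s apparent shift (disparity) in the camera image falls off inversely with object distance. A hedged sketch, using approximate Kinect-like numbers rather than published specifications:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulated depth for a structured-light sensor: Z = f * b / d.
    The projected pattern shifts in the image in inverse proportion
    to how far away the surface is."""
    if disparity_px <= 0:
        return float("inf")  # no measurable shift -> beyond working range
    return focal_px * baseline_m / disparity_px

# Roughly Kinect-like numbers (assumed): 575 px focal length, 7.5 cm baseline.
print(depth_from_disparity(disparity_px=21.5, focal_px=575.0, baseline_m=0.075))
# ~2.0 m
```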

“We’re at the point where the products are out there in the consumer market, but they haven’t quite found their way into the industrial market at the right price point,” says Adept’s Jackson. “But I think that’s going to happen in the next few years.”

Also intriguing to VGR suppliers are companies pushing the envelope with common software architectures and point cloud perception libraries, such as the collaboration between Microsoft and iTripoli to develop 3D image extraction for robotics.

“We’ve seen systems displayed at trade shows that leverage technology like the Kinect sensor for 3D mapping and rendering of parts in six degrees of freedom. But more important than the individual technology used to render the 3D point cloud is the ability of the robot to take in that information and react to it,” explains Prehn. “Right now, we’re in a period of tremendous growth for 3D extraction. For FANUC, it’s about providing a complete solution that manages all aspects of a difficult process while being mindful of ease of use.”

Self-Aware Robots
Ease of use, a recurring theme, is driving bleeding-edge VGR technology – self-aware robots.

“That’s the next big goal, making the system easier to use regardless of the application,” says Prehn. “We want to push the envelope even further, so that the whole vision experience can be done in an automated fashion. So the robot is self-aware enough that it can automatically calibrate, automatically find a part, and automatically respond to that part’s position.”

Watch RIA’s upcoming webinar on Advanced Vision Guided Robotics airing April 25.

Presenter Adil Shafi will cover 2.5D and 3D VGR, calibration techniques, and the evolution of 3D-structured and random bin picking.

Robot Precision
Lest we forget about robotics while focusing on vision technology, robot manufacturer ABB reminds us that robotics engineering will need to step up its game.

“A kind of technological ‘push-pull’ between robot producers and consumers has been going on for some time, and a familiar driver is once again emerging – accuracy,” explains Hunt. “Robot users are craning their necks to see over that next technology wall and what they see is the next shoe to drop – the mechanical arm.”

“Robot engineers, peering once again into the mechatronic alphabet soup, are looking for ways to achieve reduced gear train backlash, faster sampling, better compensation, lighter links, tighter links and differential feedback,” says Hunt, “anything that results in finer motion granularity and improved dynamic response.”

Armed with new advances in vision technology and robot control, suppliers are focused on making VGR implementation and operation easier, more reliable and more flexible, while delivering measurable ROI. Their concerted efforts continue to accelerate VGR’s proliferation.