Industry Insights
A Preview of the International Conference for Vision-Guided Robotics
POSTED 09/03/2008 | By: Bennett Brumson, Contributing Editor
As robotic vision systems become increasingly powerful, more adaptable and less expensive, robot suppliers are incorporating them as a standard feature in systems provided to end-users. Robot suppliers, integrators and end-users have the opportunity to see, learn and contemplate the latest in vision-guided robotics at the International Conference for Vision-Guided Robotics (ICVGR), to be held September 30 through October 2 in Novi, Michigan.
“The use of vision with robots has been growing substantially over the last several years. Integrators and end users are discovering the advantages of using Intelligent Robotic systems to be adaptive and flexible for changes in part presentation and part styles,” says Ed Roney, Development Manager for FANUC Robotics America, Inc. “This uniquely ‘focused’ conference on vision-guided robotics will help first-time and experienced robot users learn more about the advantages of incorporating both 2D and 3D vision technology into their robotic automation applications.” FANUC Robotics will be making a presentation at the conference in addition to having a display at the tabletop exhibits.
Cognex also looks forward to participating in the conference sessions and tabletop event, and to the opportunity to present the advantages that vision can provide to users of robotic technologies. “For experienced users of vision-guided robotics, we want to show how using vision's capabilities can provide quality inspection and positional data for robots, while enhancing payback for their project, as well as providing tangible quality improvements,” says Lisa Eichler, Director of Marketing at Cognex Corp., Natick, Massachusetts.
Presentations in Three Dimensions
Attendees at the ICVGR can look forward to an agenda that includes two dozen presentations and tabletop exhibits on vision-guided robotics. From case studies on vision-guided robotics in the food and pharmaceutical industries to lessons learned and advances in vision technology, the ICVGR offers something for most end-users of robotics.
Brian Windsor, Business Development Manager at SICK Inc., Minneapolis, Minnesota, provides a brief rundown of his presentation, “Using Three-Dimensional Laser Scanning for Robot Guidance.” Windsor says, “I will be talking about how laser scanning technology works and how the camera grabs the information. My goal is to help end-users understand the benefits of three-dimensional laser scanning in comparison to other technologies.”
Windsor goes on to say that he will talk about how three-dimensional laser scanners work in applications that have low contrast and how traditional two-dimensional systems struggle due to inconsistent lighting. In addition, Windsor will speak about challenges posed by varying material surfaces and parts with reflectivity issues. “Three-dimensional triangulation cameras are good in applications with poor contrast or inconsistent lighting,” maintains Windsor.
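To make the triangulation idea concrete, here is a minimal sketch, not SICK's implementation, of how a laser line's lateral shift in a camera image can be converted into surface height; the camera angle and millimeters-per-pixel scale are assumed calibration values chosen only for illustration.

```python
# Minimal laser-triangulation height sketch (illustrative only, not SICK's code).
# The laser projects a line straight down; the camera views the scene at an angle,
# so a raised surface shifts the line sideways in the image and simple trigonometry
# recovers the height from that shift.
import math

CAMERA_ANGLE_DEG = 30.0   # assumed angle between the laser plane and the camera axis
MM_PER_PIXEL = 0.1        # assumed image-to-world scale from calibration

def height_from_line_shift(shift_pixels):
    """Convert the laser line's lateral shift (pixels) into surface height (mm)."""
    shift_mm = shift_pixels * MM_PER_PIXEL
    return shift_mm / math.tan(math.radians(CAMERA_ANGLE_DEG))

# Example: a 45-pixel shift corresponds to roughly 7.8 mm of height with these values.
print(round(height_from_line_shift(45), 1))
```

Because the measurement depends on where the laser line falls rather than on image brightness, contrast and ambient lighting matter far less than they do for conventional 2D imaging.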
Furthermore, Windsor will discuss field of view. “If an application requires a large field of view, our system is flexible enough to handle parts of different heights. In these applications, end-users do not have to change the position of the camera when looking at a wide variety of objects in a given field of view.”
Kevin Taylor, Vice President of ISRA Vision Systems, Lansing, Michigan, also has some interesting things to say about vision-guided robotics. “My presentation will be about successfully applying three-dimensional robot guidance for a cosmetic sealer in automotive applications.” Taylor adds, “While robotic seam sealing is something end-users have done for many years, automotive manufacturers are moving towards cosmetic sealing applications.”
Taylor points out that the appearance of cosmetic sealing beads is important, so the robot needs to be much more accurate than when sealer is applied to hidden areas of vehicles, such as the underbody. “Cosmetic beads are those seen by the customer, such as on trunk lids, hoods and doors. Two panels are set together, which creates a seam that is seen every time the door, trunk or hood is opened.”
Applied Manufacturing Technologies Inc. (AMT), Orion, Michigan, intends to offer ICVGR visitors a couple of presentations. “AMT is presenting two studies, one on wireless vision with robot guidance accuracy, the other on top mistakes of vision integration,” reports Eric Hershberger, Senior Engineer. These presentations, “High-Accuracy Robot Calibration, Wireless Networking, and Related Technical Issues” and “Top Lessons Learned in Vision Guidance Applications,” will be delivered by Hershberger and his colleague at AMT, David Wyatt, Staff Engineer.
According to Eichler, John Keating, Product Manager at Cognex, will speak on “Technology Advances in Two-Dimensional Vision Guided Robotics.” Eichler says, “We plan to show our new vision system with a turntable demonstration using location and inspection features.” Locating parts is often challenging because of changing environmental conditions in robotic work cells, Eichler explains, and the robot's own movement adds to the complexity: as the robot moves within the work cell, the scale and perspective angle of a part in the image change, making it harder to locate.
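As a rough illustration of that scale effect, the sketch below applies a simple pinhole-camera relationship; the focal length, part width and camera distances are assumed values for illustration, not figures from Cognex.

```python
# Pinhole-camera sketch (illustrative only) of why robot motion changes apparent
# part scale: image size is proportional to focal_length / distance, so moving the
# camera closer to or farther from the part rescales its features in the image.
FOCAL_LENGTH_MM = 8.0   # assumed lens focal length
PART_WIDTH_MM = 120.0   # assumed real width of the part

def apparent_width_mm(distance_mm):
    """Width of the part's image on the sensor at a given camera-to-part distance."""
    return FOCAL_LENGTH_MM * PART_WIDTH_MM / distance_mm

# If a robot-mounted camera moves from 600 mm to 400 mm from the part, the part's
# image grows by a factor of 600 / 400 = 1.5, which can defeat a fixed-scale match.
print(apparent_width_mm(600.0), apparent_width_mm(400.0))
```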
Steve Prehn, Product Manager for FANUC Robotics America Inc., notes that 2D image processing techniques are sometimes not enough to deal with parts that are not consistently fixtured. In his presentation at the conference, Prehn will examine techniques that can be applied to extract accurate part locations from 3D points, providing the needed full six degrees of freedom: X, Y, Z, yaw, pitch and roll.
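One common way to recover such a six-degree-of-freedom pose, sketched below purely as an illustration and not as FANUC's method, is to fit a rigid rotation and translation to matched 3D points with the SVD-based Kabsch algorithm and then read roll, pitch and yaw from the resulting rotation matrix.

```python
# Rigid 6-DOF pose estimation from matched 3D points (Kabsch algorithm) -
# a generic sketch, not FANUC's implementation.
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Return rotation R and translation t mapping model points onto observed points."""
    cm = model_pts.mean(axis=0)                      # centroid of the model points
    co = observed_pts.mean(axis=0)                   # centroid of the measured points
    H = (model_pts - cm).T @ (observed_pts - co)     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t

def roll_pitch_yaw_deg(R):
    """Extract roll, pitch, yaw (Z-Y-X convention) in degrees from a rotation matrix."""
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([roll, pitch, yaw])
```

In this formulation the translation t supplies X, Y and Z, while the rotation matrix supplies the remaining three degrees of freedom.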
Adept Technology Inc., Livermore, California, will zoom in on vision in packaging applications. “I will be giving a presentation, ‘Vision Guided Robot Applications for Packaging and Flexible Feeding,’ focusing on vision guidance for robotics and flexible feeding,” notes Mark Noschang, Applications Engineer at Adept. “I believe that furthering robotic technology is imperative for improving manufacturing. Increasing throughput, quality, flexibility, and achieving greater repeatability will allow for better products. I am excited to be able to share this technology with others.”
Greg Garmann, Software and Controls Technology Leader at Motoman Inc., West Carrollton, Ohio, will target opportunities in vision for multiple-arm manipulators and 3D vision solutions. “New developments in robot technology require new ways of working with vision systems,” Garmann says. “The ‘human-like’ flexibility of movement with Motoman’s new dual-arm robots provides unique solutions for the automation world. Combining new developments in vision with these highly flexible robots gives an additional dimension in solving material handling applications,” he explains.
James W. Wells, Senior Staff Research Engineer, and Dr. Jane Shi, Staff Researcher at General Motors (GM), Warren, Michigan, will lend an end-user's perspective on potential new areas for vision-guided robots. “Our presentation, ‘Robot Visual Servoing - Opportunities and Challenges Ahead,’ will be about moving towards developing applications where we might use visual servoing along with visual line tracking.” Wells and Dr. Shi will explain why the car-making giant wants to attempt visual line tracking.
“Our final vehicle assembly area is a moving line, and to use robots without visual servoing would require significant investments in fixtures and stop stations,” says Wells. Continuing, Wells explains that stop stations allow a vehicle to be temporarily disconnected from a moving assembly line so that a robot could perform its task. “If the robot could visually acquire its target by relying on visual fixturing or visual servoing, then robots could play more of a role in the general assembly process of vehicles without the need of costly investments in hard fixtures or stop stations.” Dr. Shi says, “Visual servoing would make the general assembly area of the plant more flexible for new vehicle models as they are introduced.”
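For readers new to the term, the sketch below outlines a basic image-based visual servoing step in generic form: the error between tracked and desired image features is pushed through a pseudo-inverse of the image Jacobian to produce a velocity command. The gain, tolerance and the camera and robot interfaces named in the comments are assumed placeholders, not GM's implementation.

```python
# Generic image-based visual servoing step (illustrative sketch, not GM's system).
import numpy as np

LAMBDA = 0.5          # assumed proportional gain
TOLERANCE_PX = 2.0    # assumed convergence threshold in pixels

def servo_step(current_features, desired_features, image_jacobian):
    """Compute a 6-DOF camera/tool velocity command from the image-feature error."""
    error = (current_features - desired_features).ravel()
    velocity = -LAMBDA * np.linalg.pinv(image_jacobian) @ error
    return velocity, np.linalg.norm(error)

# Hypothetical control loop (camera and robot objects are placeholders):
#   feats = camera.track_features()
#   v, err = servo_step(feats, goal_feats, interaction_matrix(feats))
#   robot.command_cartesian_velocity(v)
#   ...repeat until err < TOLERANCE_PX
```

On a moving assembly line, the desired feature positions would come from tracking the vehicle itself, which is what would let the robot follow the line without hard fixtures or stop stations.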
Robots in Exhibition
Dozens of companies that market robot vision products will have tabletop exhibits at the ICVGR. “DENSO Robotics will bring a fold-out display and a portable sample of small tabletop robots,” says Peter Cavallo, Robot Sales Manager at DENSO, Long Beach, California. “DENSO finds that at smaller events such as the ICVGR, we can have more personal interactions and assist people in developing their applications. In the intimate environment of the ICVGR, we can give companies ideas on how to begin to implement their automation projects.”
SICK Inc. will exhibit one of its camera packages along with its software, says Brian Windsor. “We will have a demonstration where we move objects under the camera so that people can get a sense of the three-dimensional information that is captured. Also, we will demonstrate different tools used in typical vision-guided robot applications.” In addition, SICK will have a camera with on-board processing, a common package that manufacturers use in various types of applications.
ISRA Vision Systems will be showcasing their new products and new technologies, declares Kevin Taylor. “ISRA will exhibit a sensor that is used for cosmetic sealing and can be used for other applications, such as glass insertion. Also, we will show off our newest software for inspection that is capable of detecting the presence or absence of parts.” In short, ISRA’s exhibits will focus on robot guidance and part inspection, particularly bead inspection, says Taylor.
Bin There, Doing That
Random robotic bin-picking is a major goal for manufacturers of vision systems. While bin-picking of completely random parts has not yet been achieved, progress is being made, says Adept’s Mark Noschang. “True, robust bin-picking is the Holy Grail for vision-guided applications. Several companies have done some very impressive demonstrations of random bin-picking, but a system that is as adaptive, intuitive and capable as human workers is not available yet.” Noschang expects continued progress towards bin-picking as new and improved technologies come to market.
Brian Windsor also sees progress towards robotic-based random bin-picking. “End-users want random three-dimensional bin-picking. While we are still making strides toward getting there, random bin-picking is something that will come.” Windsor observes that random bin-picking in a laboratory environment is moving along quite well. “On the manufacturing plant floor, several hurdles still must be overcome. In three to five years, we might start to see true bin-picking systems performing well enough that the end-user can justify the cost.”
Likewise, Lisa Eichler suggests, “Three-dimensional guidance systems have generated interest in the past few years, especially in flexible bin picking applications. Bin-picking is a newer technology that is still maturing into the mainstream.” Eichler believes that end-users are surprised by the wide range of applications that vision-guided robots currently perform reliably.
Eichler delineates current trends in vision-guided robotics. “Improved vision algorithms and the availability of shared communication protocols are the most significant advancements on the software side of vision-guided robotics. Hardware continues to be more affordable, with more processing capabilities, smaller size and improved ruggedness.”
David Wyatt of AMT sees more three-dimensional vision-guided robotics. “In the past five years, the major technological development in vision-guided robotics has been the use of two-dimensional cameras to extract nearly three-dimensional information. I see the continued march of distributed control being applied to motion through visual and tactile smart sensors.”
Eric Hershberger sees vision systems integrated into the robot’s controller as a benefit to end-users. “A separate computer within a cabinet adds complexity to the shop floor, while taking up valuable space as well as giving operators another piece of equipment that needs troubleshooting. In the next five years I hope to see more accurate vision guidance systems that can be better integrated into the robot.”
Hershberger believes vision systems are becoming less operator-intensive. “I am very excited about the gigabit Ethernet camera standard and power over Ethernet. Reducing the cables on a robot will greatly reduce downtime, troubleshooting and cost to the end-user.”
Peter Cavallo of DENSO notices that in vision-guided robotics, tasks that once were very difficult are now very easy to do. Cavallo points to conveyor tracking to illustrate his point. “Conveyor tracking used to be a big deal but it is now a small add-on to the robot. Also, cameras have much more capability within them, so a small device is able to perform all necessary calculations and provide that information to the robot.”
Seeing More Clearly
Vision-guided robotics is a maturing sector that has potential for significant technological advancement. “When I see a robot holding a pair of chopsticks catching an insect as it flies through a robot’s work envelope, then I will know robotic vision is fully developed,” says Peter Cavallo.
Editor’s Note:
This article has been reviewed by members of the RIA Editorial Advisory Group.
For a company profile and contact information for RIA members referenced in this article, click Find a Company on Robotics Online.