
Debate Continues on Fieldability of Visual Servoing
by Winn Hardin, Contributing Editor, courtesy of AIA's Machine Vision Online

POSTED 03/02/2007


Visual servoing sparks as much debate among technologists as it does among end users, as a panel discussion on the topic at AIA's recent Vision & Robots for Automotive Manufacturing Workshop illustrated.

"Visual servoing is hard to do, so why would we like to do it?" asks James Wells, Senior Staff Research Engineer at General Motors Research and Development (Warren, Michigan). The answer, he says, lies in matching dynamic production lines that don't want to stop with today's automation solutions (e.g., robots), which require the production line to index, or stop, before the robot can operate. If the robot could be given sufficient sensory data and processing capability – essentially visual servoing – it could work dynamically, visually tracking a moving target with full six degrees of freedom and using the data to adjust its operational path to match the object's movement and velocity. The growth in pattern searching, 3D feature extraction, and low-cost processing power is driving vision-guided robotics toward greater capability, so what more is needed to enable visual servoing applications?

Frank Maslar of Ford Motor Company's Advanced Manufacturing Technology organization (Dearborn, Michigan) has an answer. "Visual servoing would reduce the overall cost and open the door for automating operations never before possible," Maslar says.

Why So Difficult?
"One reason we don't do dynamic versus static solutions is that the technology and commercial disconnects don't allow you to implement robust dynamic tracking applications very well," Wells explains. But perhaps that's changing, starting at Purdue University.

Professor Avinash Kak of Purdue University (West Lafayette, Indiana), a leading expert on machine vision and robot guidance, suggests this definition of visual servoing: "Visual servoing means you take a sensor input where the robot is, make a prediction about where the object is supposed to be, and find the error between the prediction of the object and where the object really is. Then you feed that into a robot controller to correct its path." Visual servoing is typically viewed as the next step in an automation progression that began with mechanical line tracking and drag lines, moved to encoder-based tracking, then to feature tracking through machine vision, and finally to visual servoing with full six degrees of freedom.
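Kak's definition can be sketched as a simple control loop: compute the error between the predicted and measured object pose, and feed a correction to the controller. The function name, pose representation, and proportional gain below are illustrative assumptions, not Kak's implementation.

```python
# Minimal sketch of the visual-servoing loop described above: predict
# where the object should be, measure where it really is, and convert
# the error into a corrective command for the robot controller.
# A real system would run this at camera frame rate with full 6-DOF poses.

def servo_step(predicted_pose, measured_pose, gain=0.5):
    """Return a corrective velocity command from the pose error."""
    error = [m - p for m, p in zip(measured_pose, predicted_pose)]
    # Proportional control (an assumed, simplest-possible control law):
    # command a velocity proportional to the tracking error.
    return [gain * e for e in error]

# Example: the object drifted +4 mm in x and -2 mm in y from the prediction.
cmd = servo_step(predicted_pose=[100.0, 50.0, 0.0],
                 measured_pose=[104.0, 48.0, 0.0])
print(cmd)  # [2.0, -1.0, 0.0]
```

In practice the "prediction" step would also incorporate a motion model of the target, so the loop corrects only the unexpected part of the motion.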

Working on a cooperative research project for Ford Motor Company, Kak successfully demonstrated a visual servoing solution that networks seven computers and multiple cameras to guide a robot toward a shaking target. A rope is connected to the target, and the robot is instructed to guide itself to the target while a person yanks on the rope, moving the target in random directions.


Kak builds robustness into his visual servoing system by taking two different vision-guidance approaches: one based on feature extraction that generates 3D position data at 30 fps, and a second, 3D model-based approach that generates data at 7 fps. If one approach fails, a separate PC acting as an arbitrator pulls data from the other method and sends it to the robot controller. "It's a subsumptive hierarchical controller that has multiple control loops. If one loop doesn't work, another takes over."
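The subsumptive arbitration Kak describes can be illustrated with a small sketch: the fast feature-based loop is preferred, and the slower model-based loop takes over when the fast one fails. The estimator names, the use of `None` to signal failure, and the fallback behavior are all assumptions made for illustration.

```python
# Sketch of a subsumptive arbitrator with two vision loops, as described
# above: a fast feature-based estimator (30 fps) and a slower model-based
# estimator (7 fps). The arbitrator forwards the highest-priority valid
# estimate to the robot controller.

def arbitrate(feature_estimate, model_estimate):
    """Pick the highest-priority loop that produced a valid pose.

    Each argument is either a pose (sequence of numbers) or None on failure.
    Returns (source_name, pose).
    """
    if feature_estimate is not None:
        return ("feature", feature_estimate)   # fast loop wins when it works
    if model_estimate is not None:
        return ("model", model_estimate)       # slower loop subsumes control
    return ("hold", None)                      # no valid data: hold last command

# Example: the feature tracker lost the target, so the model-based
# estimate is forwarded instead.
print(arbitrate(None, [1.0, 2.0, 3.0]))  # ('model', [1.0, 2.0, 3.0])
```

The design choice is that failure of any single loop degrades tracking rate rather than stopping the robot, which is what makes the demonstration robust to a randomly yanked target.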

Multiple cameras, one of them positioned overhead, also allow the vision system to track the object should it stray beyond the main camera's field of view.

The processing needs of Kak's system point to one of the automotive industry's main concerns about the technology. "Obviously, the 7 computers need to be compressed down through algorithms and other methods," comments Greg Garmann, Software and Controls Technology Leader for Motoman Inc. (Troy, Ohio). "These systems tend to work better in controlled environments, like the semiconductor industry, but automotive needs a turnkey solution for a changing environment, with lighting and part-surface changes that significantly affect the results."

Perhaps the solution is closer than we think. According to Karl Sachs, president of Robot & Vision Manufacturing LLC, the Purdue demonstration could likely be handled by a single PC running two motherboards. "And as we see it, it'll be one board pretty soon."

Integrated Solutions
One reason Purdue's Kak has been able to wow the crowds is his ability to tightly integrate the vision system with the robot controller. For many integrators, this task is made more difficult by the proprietary nature of each robot controller, limited access to the controller's code, and limited available bandwidth.

"We're going to have to move into a higher bandwidth between the vision system and robot controller," notes Motoman's Garmann.

Another approach to beating the processing and bandwidth challenges is to limit the movement of the object through mechanical means, and to educate potential users about this approach.

"We've been trying to tell people to get the fear factor out of the way. We're not truly shooting for the stars, like tracking airplanes or military systems," explains Sachs. "Under normal circumstances and combined with rudimentary mechanical devices that limit the sway or swing of the object, automotive applications are essentially 2D, and other applications like meat cutting can be the same."

Mechanical limiters reduce an object's movement to a single plane, shorten a fixture's sway path, or maintain the object's orientation. For instance, sides of beef are hung from hooks as they travel through the fat-trimming stations in a beef production plant. Limiting the amount of sway would keep the side of beef in a camera's field of view, while limiting the hook's movement to one plane would essentially turn a 3D operation into a 2.5D machine vision application, reducing the processing requirements for visual servoing.
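The processing saving comes from geometry: once the fixture pins the object to a known plane, a single 2D image measurement determines the full 3D position, with no stereo matching or model fitting required. The sketch below assumes a standard pinhole camera model; the intrinsic parameters and the fixed working depth are illustrative values, not from the article.

```python
# Why constraining motion to a plane turns 3D tracking into a 2.5D
# problem: with the depth fixed by the fixture (e.g., the hook holds the
# side of beef in one plane), a pixel measurement back-projects directly
# to a 3D position using the pinhole camera model.

def image_to_plane(u, v, fx, fy, cx, cy, depth):
    """Back-project pixel (u, v) onto the known working plane at `depth`.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)  # full 3D position from a single 2D measurement

# Example: assumed 640x480 camera, 500 px focal length, plane at 1.2 m.
pos = image_to_plane(400, 260, fx=500, fy=500, cx=320, cy=240, depth=1.2)
print(pos)
```

Without the planar constraint, recovering the same position would require a second camera or a 3D model fit, which is exactly the processing load the mechanical limiters remove.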

"If we just look at restricting the movement a little, we can do visual servoing just fine in industrial environments," Sachs concludes.

Going Forward
Whether you believe visual servoing is a proven technology ready for the plant floor or one still in need of development, the attraction of dynamic over static automation will continue to grow as manufacturers seek greater efficiency.

"We'd like to have a process that's interchangeable with good compatibility between people and automated processes. Mixing stop stations where you have to stop the product and then accelerate it somewhere else and then stop it again creates logistical problems," explains GM's Wells. "And it's a crying shame that robots are capable of going to different places through programmed controls, but spend their entire life going to only 10 or 15 points."

Business conditions may prove to be one of the biggest hurdles facing visual servoing. The complexity of visual servoing requires tight integration among the vision system, robot controller, and material handling equipment, but these products are typically built by separate manufacturers using proprietary systems. Open standards do not exist to cover all the interfaces, and only 6 to 10 percent of robots in the field require vision guidance, which means suppliers among the three groups have little incentive to make their technology "open" and work together on common standards. At the same time, end users – who would like to see greater integration and a commoditization of all markets, leading to lower prices – like "having choices" when it comes to selecting suppliers.

Many of these conditions could change, however, as visual servoing continues to prove itself. "Even though we're not implementing these solutions yet in production, it attracts attention if our suppliers are able to demonstrate the capability," says Wells. "It's a chicken and the egg thing."

Movie clip "Robot simulates a peg-n-hole motion on the target" courtesy of Prof. Kak, Purdue University.