
Content Filed Under:

Industry: Robotics
Application: Visual Inspection & Testing

3D Vision Guided Robotics: When Scanning Just Won’t Do

POSTED 03/20/2008  | By: Winn Hardin, Contributing Editor

 

During the past several years, machine vision for robot guidance has improved the accuracy of robot-based assembly while adding the capability to perform assembly and inspection at the same time. Combining the two reduces the productivity losses that result when defects are found further downstream, after multiple value-added operations have already been completed.

3D systems can be divided into many categories; one useful distinction is scanning versus area systems. Scanning systems typically use a single camera to image a line of laser light projected onto the object under test. Depending on the quality of the components used, laser-scanning vision systems can generate extremely accurate 3D surface maps for robot guidance and inspection applications, but they can be slow, whether in large-area coordinate measuring machines (CMMs) for assembling aircraft, or as accuracy requirements tighten, such as with densely packed ball grid array (BGA) microchips.
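
The geometry behind laser-line scanning is simple triangulation: the projected line shifts in the image in proportion to surface height. The sketch below, with assumed triangulation angle and pixel-scale values, converts a laser line's row positions into a height profile; stacking many such profiles as the part moves is what makes scanning accurate but slow.

```python
import numpy as np

# Minimal sketch of laser-line triangulation (parameters are illustrative).
# A camera views a laser line projected at angle THETA onto the part;
# surface height shifts the imaged line, and the shift maps to height.

THETA_DEG = 30.0      # laser-to-camera triangulation angle (assumed)
MM_PER_PIXEL = 0.05   # object-space scale of one pixel (assumed)

def height_profile(line_rows, reference_row):
    """Convert the laser line's row position in each image column
    into a height profile along that slice of the part."""
    shift_px = np.asarray(line_rows, dtype=float) - reference_row
    # Height is the observed line shift divided by tan(triangulation angle).
    return shift_px * MM_PER_PIXEL / np.tan(np.radians(THETA_DEG))

# One scan line only: a full surface map needs the part (or sensor) to move,
# stacking successive profiles -- which is why scanning systems can be slow.
profile_mm = height_profile([242, 241, 239, 236], reference_row=240)
print(profile_mm)
```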

Non-scanning machine vision systems can also be highly accurate, achieving sub-millimeter or better accuracy over large areas. They also have inherently higher throughput because they generate 3D data for every pixel in the field of view without physically moving the camera, the light source or the object under test. This article looks at advances in non-scanning 3D machine vision, from the maturing of bin picking applications to new developments that make these systems more robust to lighting, speed and movement.

Hardened Against Ambient
3D robot guidance applications utilizing machine vision continue to expand in number. Driven in the early years by the automotive industry’s need to improve quality and productivity, vision-guided robots now lift, fix, fasten, weld and paint most of a modern automobile during its original manufacture. Several years ago, robots also began to feed parts to workcells themselves, an application often referred to as auto racking. Unlike bin picking, parts in auto racking applications are presented to the robot one at a time, although the exact location of each part varies depending on the rack used to ship it, the condition of the rack and other factors. And unlike its predecessor, pick-and-place, auto racking requires the vision system to locate the singulated part for the robot in 3D space, rather than in 2D space on a flat conveyor or tray.
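
One common way to recover a singulated part's 3D pose from a single image is a perspective-n-point solve against known model geometry. The sketch below uses OpenCV's solvePnP with illustrative feature coordinates and camera intrinsics; it stands in for, rather than reproduces, any particular vendor's auto racking solution.

```python
import numpy as np
import cv2

# Sketch: recovering a racked part's 3D pose from one image, assuming
# four known feature locations on the part's CAD model (values illustrative).
model_pts = np.array([[0, 0, 0], [120, 0, 0], [120, 80, 0], [0, 80, 0]],
                     dtype=np.float32)          # part frame, mm
image_pts = np.array([[412, 310], [655, 322], [648, 480], [405, 468]],
                     dtype=np.float32)          # detected in the image, px

K = np.array([[1400, 0, 640],                   # camera intrinsics from an
              [0, 1400, 480],                   # offline calibration (assumed)
              [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, distCoeffs=None)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation: part frame -> camera frame
    # tvec places the part origin in the camera frame; a hand-eye
    # calibration (not shown) would carry this into robot coordinates.
    print("part origin in camera frame (mm):", tvec.ravel())
```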

ISRA VISION SYSTEMS Inc. (Lansing, Michigan) is one company that uses both single-camera light-projector and multi-camera photogrammetric approaches for auto racking and automotive assembly applications. According to Kevin Taylor, ISRA’s National Sales Manager for North America, the company’s new robot guidance sensor (RGS) systems come with two LED light sources: a 6-line projector for measuring surfaces without physical features (windshields, etc.), and standard white-light illumination for physical features, such as holes.

As with all machine vision systems, lighting is always a challenge, and 3D applications are no exception. If there is too much or too little light, contrast suffers and even the best vision algorithm can’t extract 3D coordinates from the image. To make its systems more robust, ISRA has added a new software routine for robot guidance called Color Dynamic Control (CDC). In the event of a failed measurement, ISRA’s ROVIS software triggers CDC, which runs the camera through different shutter settings until it can acquire a quality image. “In the past, you trained the system with a longer exposure time for a black car, for instance, compared to a white car, but ambient lighting conditions can change considerably,” explains Taylor. “It may add a half-second to the cycle time, but manufacturers would rather take a fraction of a second than fail a part or process.”
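
ISRA's CDC implementation isn't public, but the retry-on-failure idea can be sketched as an exposure-bracketing loop: on a failed measurement, step through shutter settings until an image passes a contrast check. In the sketch below, acquire and measure are hypothetical stand-ins for the camera driver (returning a NumPy image) and the 3D measurement routine.

```python
# Hedged sketch of an exposure-retry loop in the spirit of ISRA's CDC:
# if a measurement fails, re-acquire at different shutter settings until
# the image has usable contrast. `acquire` and `measure` are hypothetical
# stand-ins for the camera driver and the 3D measurement routine.

EXPOSURES_MS = [2.0, 5.0, 12.0, 30.0]   # bracketing ladder (illustrative)
MIN_CONTRAST = 25.0                      # acceptance threshold (illustrative)

def robust_measure(acquire, measure):
    for exposure in EXPOSURES_MS:
        img = acquire(exposure_ms=exposure)  # assumed to return a NumPy array
        if img.std() < MIN_CONTRAST:         # too dark or washed out: retry
            continue
        result = measure(img)
        if result is not None:               # measurement succeeded
            return result
    return None                              # fail the part after the ladder
```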



A Moving Target
Bin picking took the next step in the evolution of 3D vision for robot guidance, challenging the vision system to find a part in a complex scene of randomly grouped parts. As with the move from 2D pick-and-place to 3D auto racking, extra computational horsepower combined with more powerful image processing algorithms proved to be the key. Vision evolved from pixel crunching to vector-based pattern searches that locate key features regardless of size and orientation. Faster processors meant the systems could use larger, more complex vector descriptions, as well as the relationships between separate vectors based on mathematical models and CAD/CAM data. These model-based approaches proved more robust and capable of overcoming “new technology” skepticism.
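
The vendors' vector-based searches are proprietary, but scale- and rotation-invariant feature matching conveys the idea of finding a trained part in a cluttered scene. The sketch below uses OpenCV's ORB detector as a generic stand-in; the image paths and parameters are illustrative.

```python
import cv2

# Sketch of locating a trained part in a cluttered scene with features
# that tolerate rotation and scale changes. ORB here is a generic stand-in
# for the proprietary vector/model-based searches the article describes.
model = cv2.imread("part_model.png", cv2.IMREAD_GRAYSCALE)  # illustrative path
scene = cv2.imread("bin_scene.png", cv2.IMREAD_GRAYSCALE)   # illustrative path

orb = cv2.ORB_create(nfeatures=1000)
kp_m, des_m = orb.detectAndCompute(model, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

# Hamming-distance matching suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_m, des_s), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences; "
      "a homography or pose fit over the best ones localizes the part")
```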

“Bin picking has certainly left the bleeding edge,” notes Adil Shafi, president of Shafi Inc. (Brighton, Michigan) and a software specialist in 3D bin picking. “What we are seeing is more and more bin picking solutions implemented in industry every year. It’s not going into every [automotive] program, but it’s just a few steps from that.”

Despite the maturing of vision systems, there is still one area where humans regularly beat vision systems on the plant floor: operating on a moving object, such as a chain-pulled automotive frame or an animal carcass.

Visual servoing, the ability to track moving objects with the speed and precision needed to guide a robot through a task, is generally considered beyond today’s machine vision systems. But don’t tell that to the raw-pixel triangulators at 3D vision specialist TYZX Inc. (Menlo Park, California).
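
In its simplest image-based form, visual servoing is a control loop on pixel error: measure where a tracked feature is, compare with where it should be, and command a motion proportional to the difference. The sketch below assumes hypothetical track and command_velocity interfaces; real systems map image error through an interaction matrix rather than a bare gain.

```python
import numpy as np

# Minimal image-based visual-servoing sketch: drive the robot so a tracked
# feature stays at a target pixel location. `track` and `command_velocity`
# are hypothetical stand-ins for the vision system and robot controller.

GAIN = 0.8                           # proportional gain (illustrative)
TARGET = np.array([640.0, 480.0])    # desired feature position, px

def servo_step(track, command_velocity):
    u, v = track()                      # current feature position, px
    error = TARGET - np.array([u, v])   # image-space error
    # Classic IBVS maps image error through an interaction matrix; a pure
    # proportional law on two image axes is the simplest special case.
    command_velocity(GAIN * error)      # pixel error -> velocity command
    return np.linalg.norm(error)        # residual, for convergence checks
```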

TYZX has already had some success with its DeepSea G2 Vision System platform in the food processing industry, tracking animal carcasses and other objects in real time for robot guidance. The G2 is a stereoscopic design with two fixed CMOS sensors in a calibrated housing. Like all stereoscopic systems, it can determine the 3D coordinate of every pixel in an image by comparing parallax shifts between the images acquired by the separate cameras.
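
The parallax-to-depth relationship behind any calibrated, rectified stereo pair is compact: depth Z = f * B / d, where f is the focal length in pixels, B the baseline between the two sensors and d the per-pixel disparity. The sketch below uses illustrative values, not TYZX's.

```python
import numpy as np

# Stereo parallax -> depth for a calibrated, rectified pair:
#   Z = f * B / d
# where f is focal length (px), B the baseline between the two sensors,
# and d the per-pixel disparity. Values below are illustrative.

FOCAL_PX = 900.0     # focal length in pixels (assumed)
BASELINE_M = 0.12    # distance between the two CMOS sensors (assumed)

def depth_map(disparity_px):
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return FOCAL_PX * BASELINE_M / d   # metres; inf where d == 0

# A real system computes d per pixel via stereo correlation; here we
# feed a tiny hand-made disparity patch to show the mapping.
print(depth_map([[30.0, 27.5], [0.0, 45.0]]))
```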

Typically, this approach is computationally expensive, adding to the cost of the processor and of the network elements needed to handle the required bandwidth, but TYZX expedites the solution by combining application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) and PowerPC chips in a single housing. This configuration allows the ‘smart camera’ to crunch the data at speeds up to 60 fps while eliminating the need for high-bandwidth camera-to-processor and interprocessor links. The ASIC is tasked with correction and stereo correlation, while the FPGA handles the 3D calculations. The robot path is then determined from the collected 3D data.

While it’s no longer about increasing clock speeds, multicore processor designs continue to give vision designers the basic tools they need to build more powerful algorithms without sacrificing overall system speed. That combination keeps pushing the performance of vision systems beyond what the manufacturing industry believes is possible. The good news is that, over time, today’s impossible task becomes tomorrow’s standard operating procedure. Soon, visual servoing will head the list of “mature” vision capabilities, enabling new levels of labor savings and productivity gains across industries as disparate as automobiles and microchips.