Lessons Learned from VGR Applications Drive Robotic Palletizing Improvements

POSTED 03/19/2013

 | By: Winn Hardin, Contributing Editor

The automotive industry has always been a pioneering industry for development of vision-guided robotics (VGR). From parts manufacturing and packaging, to production line assembly, painting and welding, the automotive industry has helped fund the development of exciting convergences between machine vision and robotics.

Today, lessons learned from the automotive industry are also helping to drive a fast-growing application area for VGR systems in palletizing and depalletizing, including the use of 3D modeling software to optimize mixed pallet builds, ‘learning robots’ that can improve their own cycle times by 10% over the most experienced robotic programmers’ efforts, and faster, better ways to acquire 3D point clouds for everything from flexible packages (aka bags) to the placement of car windows and doors.

Gap and Flush of BIW in the Body Shop. Courtesy of ISRA VISION.

Types of Palletizing Applications
“The first big segmentation in palletizing is rigid packages, such as cases and layers of cases, vs. flexible packages, such as bags,” explains Dick Motley, Senior Account Manager for FANUC Robotics North America Packaging Distribution Group (Rochester Hills, Michigan). “The first question is: are you building a pallet or unloading it? Robotic palletizers, sometimes vision-assisted, can provide accurate tracking of items placed on a pallet, whether to meet critical traceability requirements for products such as pharmaceuticals or to confirm order accuracy in a custom built-to-order mixed pallet. Vision can play a larger role in depalletizing operations because you don’t know how the pallet may have shifted during transport, or the pallet could be of flexible packages, such as bags, so you need machine vision to guide the robot to each package or layer. Within palletizing and depalletizing, the throughput rate further segments the application. At lower rates, items can be handled individually or in small groups, but at much higher rates, like those found in the soft drink and beer industries, items are often pre-arranged into complete layers, and a robot palletizes those layers using large, complex end-of-arm tools [EOATs]. Finally, the packaging material itself can further segment the application, dictating the type of tooling needed to handle items reliably and undamaged. Tools employing vacuum, or a variety of mechanical side-clamping or bottom-supporting mechanisms, can be used alone or in combination, depending on the package characteristics.”
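Motley's segmentation can be summarized as a small decision helper. This is purely illustrative: the category labels and the 60-items-per-minute threshold are assumptions, not figures from FANUC.

```python
# Toy classifier for the palletizing segmentation described above.
# Labels and the rate threshold are illustrative assumptions.

def classify(package, operation, rate_per_min):
    """Suggest a handling style and vision role for a palletizing job."""
    # Assumed threshold: above ~60 items/min, complete layers are handled.
    if rate_per_min > 60:
        handling = "layer handling with large EOATs"
    else:
        handling = "individual or small-group handling"
    # Depalletizing and flexible packages both call for vision guidance.
    if operation == "depalletize" or package == "flexible":
        vision = "vision-guided (load may have shifted or be irregular)"
    else:
        vision = "vision-assisted tracking (optional)"
    return handling, vision

print(classify("flexible", "depalletize", 20))
```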

All these palletizing segments have their unique challenges, adds Motley. Individual cases and layered pallets of rigid packages need the corners and edges aligned all the way around the pallet and layer-to-layer to exploit the stacking strength of the package. Flexible products typically don’t have the accuracy requirements of rigid products, but as a result, cycle times depend much more on the maximum performance of the robot itself.

Measuring Gap and Flush after Vision-Guided Glass Decking in Final Assembly. Courtesy of ISRA VISION.

What Bin Picking Teaches About Depalletizing
Bin picking, or the unloading of large bins of parts, often in random order, requires a vision system to guide the robot to each part so it can be gripped and moved. But in random bins, parts lie on top of one another, forcing the machine vision system to choose the best part to pick first. And once a part is identified as being on top and ready for picking, the robot controller has to figure out how to get the robotic EOAT into the bin and grab the part without damaging itself, the part, or the bin. “You set up boundaries that tell the robot not to get within a minimal distance, and hopefully the bin is designed so that parts at the edges slide or roll down away from the walls, but it doesn’t always work that way,” explains Dan Broihan, Senior Sales Manager at ISRA VISION (Bloomfield Hills, Michigan).
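The two constraints Broihan describes — pick the topmost part, but stay clear of the bin walls — can be sketched in a few lines. This is a hypothetical simplification: the bin dimensions, clearance distance, and part representation are assumptions, and real systems reason about the full gripper geometry, not just centroids.

```python
# Sketch of topmost-part selection with a wall keep-out zone (assumed values).

BIN_MIN, BIN_MAX = (0.0, 0.0), (600.0, 400.0)   # bin interior, mm (assumed)
WALL_CLEARANCE = 50.0                            # keep-out distance, mm (assumed)

def pickable(part):
    """True if the part centroid keeps the minimum clearance from every wall."""
    x, y, _ = part["centroid"]
    return (BIN_MIN[0] + WALL_CLEARANCE <= x <= BIN_MAX[0] - WALL_CLEARANCE and
            BIN_MIN[1] + WALL_CLEARANCE <= y <= BIN_MAX[1] - WALL_CLEARANCE)

def select_part(parts):
    """Among reachable parts, prefer the one sitting highest in the bin."""
    candidates = [p for p in parts if pickable(p)]
    if not candidates:
        return None
    return max(candidates, key=lambda p: p["centroid"][2])

parts = [
    {"id": "A", "centroid": (300.0, 200.0, 120.0)},  # high, well inside
    {"id": "B", "centroid": (580.0, 200.0, 150.0)},  # highest, but near a wall
    {"id": "C", "centroid": (150.0, 150.0, 80.0)},
]
print(select_part(parts)["id"])  # → A
```

Note that the highest part ("B") loses to "A" here precisely because its centroid sits inside the keep-out zone — the situation Broihan says "doesn't always work that way."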

Tricky stuff indeed. But according to FANUC’s Motley, the lessons learned in bin picking have really helped depalletizing applications. “The good news for the consumer space is that all the effort and technology invested to solve the bin picking problems really helps depalletizing operations where you don't have the constraints of the bin,” Motley says. “It’s pretty exciting when you can leverage bin picking and apply the technology to a non-heavy-industry application.”

For example, pallets of flexible packages can be built haphazardly or shift considerably during transport. To make the depalletizing of flexible packages as efficient as possible, FANUC’s 3D stereovision-based system developed for bin picking offers the potential to use the same projected light grids to improve the contrast from one bag to the next based on its 3D curved shape. 2D vision has often been successfully employed to guide depalletizing robots, using pre-trained targets on the exterior of the package, but the 3D approach is a more general solution.
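One minimal way to picture the 3D approach — not FANUC's actual algorithm — is to take an acquired point cloud, keep only the points near the top of the load, and split them into bag candidates wherever a large gap appears. The tolerances below are assumptions, and real systems segment in full 3D rather than along one axis.

```python
# Toy depalletizing segmentation: isolate the top layer of a point cloud
# by height, then split it into bag candidates by gaps along X.
# Tolerances are illustrative assumptions (units: mm).

TOP_TOLERANCE = 40.0   # how far below the highest point still counts as "top"
GAP_SPLIT = 120.0      # X gap wide enough to separate two bags

def top_layer(points):
    """Keep only points near the highest surface of the load."""
    z_max = max(p[2] for p in points)
    return [p for p in points if p[2] >= z_max - TOP_TOLERANCE]

def split_bags(points):
    """Group top-layer points into bag candidates by gaps along X."""
    xs = sorted(p[0] for p in points)
    groups, current = [], [xs[0]]
    for x in xs[1:]:
        if x - current[-1] > GAP_SPLIT:
            groups.append(current)
            current = []
        current.append(x)
    groups.append(current)
    return groups

cloud = ([(x, 0.0, 300.0) for x in (0, 20, 40)] +      # bag 1, top layer
         [(x, 0.0, 305.0) for x in (400, 420)] +       # bag 2, top layer
         [(x, 0.0, 150.0) for x in (200, 220)])        # lower layer, ignored
print(len(split_bags(top_layer(cloud))))  # → 2 bag candidates
```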

ISRA VISION also uses projected light with a single camera to improve gap measurements during automotive door-hanging and window-insertion applications. By using six collimated LED line lights – three on each side of a missing seventh center line – ISRA has improved on traditional approaches to alignment by collecting not just X, Y, and Z information for the gap but also rotational information around each axis.
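To make the gap-and-flush idea concrete, here is a back-of-the-envelope sketch, assuming 3D points have already been sampled along the door edge and the body edge. The gap is the lateral offset, the flush is the height offset, and a tilt angle comes from the difference in edge slopes. ISRA's actual sensor math is proprietary; all names and the simple line-fit are assumptions for illustration.

```python
# Toy gap/flush/tilt computation from sampled edge points (x, y, z) in mm.
import math

def slope(points):
    """Least-squares slope dz/dy of an edge line (simple 2D fit)."""
    n = len(points)
    my = sum(p[1] for p in points) / n
    mz = sum(p[2] for p in points) / n
    num = sum((p[1] - my) * (p[2] - mz) for p in points)
    den = sum((p[1] - my) ** 2 for p in points)
    return num / den

def gap_and_flush(door_edge, body_edge):
    """Return (gap in X, flush in Z, relative tilt in degrees)."""
    gap = (sum(p[0] for p in body_edge) / len(body_edge) -
           sum(p[0] for p in door_edge) / len(door_edge))
    flush = (sum(p[2] for p in door_edge) / len(door_edge) -
             sum(p[2] for p in body_edge) / len(body_edge))
    tilt = math.degrees(math.atan2(slope(door_edge) - slope(body_edge), 1.0))
    return gap, flush, tilt

door = [(0.0, y, 0.5 + 0.01 * y) for y in (0.0, 50.0, 100.0)]  # tilted door edge
body = [(4.0, y, 0.0) for y in (0.0, 50.0, 100.0)]             # flat body edge
g, f, t = gap_and_flush(door, body)
print(round(g, 2), round(f, 2), round(t, 2))  # → 4.0 1.0 0.57
```

The extra rotational terms are exactly what a single X/Y/Z measurement misses: two edges can have the correct average gap while one is tilted relative to the other.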

In the automotive domain, bin picking was also one of the first mainstream VGR applications to use 3D models in conjunction with geometric, vector-based search techniques to help the vision system quickly identify different parts inside the bin.

Various sensing and scanning technologies can also be used to derive 3D size information for products entering a system that’s stacking custom built-to-order pallets. Barcodes are valuable for establishing nominal product dimensions, but the actual product can vary enough from nominal to affect the stability of a mixed pallet. Based on 3D dimensional data, the various items in a customer’s order can be arranged in mixed pallet loads to provide high shipping efficiency while protecting against product damage. FANUC robots are installed in systems where items are arranged on the pallet(s) “on the fly,” and also in systems where the mixed pallets are highly optimized by collecting and analyzing the customer’s order in advance of the mixed pallet build. “You might not outrun a human building the pallet, but the robot has 100% accuracy on what products were placed on the pallets, doesn’t break products, and shrinkage and loss are eliminated from that point of the operation.”
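The "optimize the whole order in advance" idea can be sketched as a simple placement-ordering rule: using measured 3D dimensions rather than nominal barcode dimensions, put the largest, heaviest footprints down first so they form the lower layers. Real mixed-pallet planners solve a far richer 3D packing problem; the item fields and the greedy rule below are assumptions for illustration.

```python
# Toy mixed-pallet placement ordering: largest/heaviest footprints first.
# Dimensions in mm, weight in kg (illustrative values).

def build_order(items):
    """Return items in placement order: big, heavy footprints go down first."""
    return sorted(items,
                  key=lambda it: (it["l"] * it["w"], it["weight"]),
                  reverse=True)

order = [
    {"sku": "cereal",  "l": 300, "w": 200, "weight": 0.5},
    {"sku": "water",   "l": 400, "w": 300, "weight": 12.0},
    {"sku": "tissues", "l": 250, "w": 250, "weight": 0.3},
]
print([it["sku"] for it in build_order(order)])  # → ['water', 'tissues', 'cereal']
```

Because the ordering uses measured footprints, an item that runs larger than its nominal barcode dimensions automatically moves earlier in the build — the stability concern the paragraph above describes.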

Welding and the Learning Robot
FANUC’s Learning Robot technology improves the cycle-time performance of FANUC robots. “Our robot arms ship to our customers fully optimized for best performance, but then the customer attaches their tooling, and that alters the characteristics of the overall mechanical system,” explains FANUC’s Motley. “The Learning Robot involves temporarily instrumenting the robot and tooling with an accelerometer, which is then removed before real production. Using data from the accelerometer while the arm runs through its programmed path, the robot system iteratively optimizes its own motions, accounting for the effects of the customer’s tooling. Our studies show that the Learning Robot can improve cycle times by 8% to 10% – and that’s over expert programmers…someone who really knows what they’re doing. While the initial successes have been in spot welding applications, this technology has a lot of promise in palletizing operations as well.”
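The iterative loop Motley describes can be caricatured in a few lines: run the programmed path, read the temporary accelerometer, and raise the speed override until vibration approaches a limit. FANUC's actual motion optimization is far more sophisticated; the vibration model, limit, and step size below are all assumptions for illustration only.

```python
# Caricature of accelerometer-driven cycle-time learning.
# The vibration model and limit are toy assumptions.

VIBRATION_LIMIT = 1.0   # arbitrary units (assumed)

def measured_vibration(speed_override):
    """Stand-in for the accelerometer: vibration grows with speed."""
    return (speed_override / 110.0) ** 2  # toy model, not real dynamics

def learn_speed(start=100.0, step=2.0):
    """Raise the speed override while the next trial stays under the limit."""
    speed = start
    while measured_vibration(speed + step) <= VIBRATION_LIMIT:
        speed += step   # next trial run at a slightly higher override
    return speed

print(learn_speed())  # → 110.0, a ~10% gain over the starting override
```

With these toy numbers the loop converges on a roughly 10% higher override, echoing the 8–10% improvement figure quoted above, but that agreement is by construction, not evidence.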

According to ISRA’s Broihan, the automotive industry is pushing robots into applications that were once considered the sole domain of manual labor, including, for example, the finishing step where gaps are measured after final assembly. ISRA’s new single-shot 3D vision solution is well suited for finishing applications where cars may roll unexpectedly and operations need to be done quickly, and then the robot moved out of the way. If robot adoption continues in the finishing phase of automotive manufacture, in addition to painting, welding, body-in-white, and other parts of the production process, who knows what new capabilities will transfer out of automotive and into consumer goods and other industries?
