Increased Perception Will Likely Unlock the Next Level of Mobile Robot Deployment

POSTED 09/07/2022

 | By: John Lewis, A3 Contributing Editor, TechB2B

Mobile robotic applications are particularly challenging when multicamera, multimodal solutions are required to adapt to an ever-changing environment. Complex multicamera development leads to long development cycles, increasing a robot's total cost of ownership, while an unpredictable environment leads to reduced fleet availability, decreasing throughput.

While most warehouses have yet to implement robotic solutions, machine vision suppliers have been working to reduce development time and increase overall camera signal robustness to meet speed, resolution, and accuracy requirements. The goal is to unlock the next level of robotic deployment by reducing cost of ownership for end users.

3D Imaging Advancements Improve Mobile Robot Efficiency

Teledyne DALSA customer Kibele PIMS is a case in point. The company has developed and commissioned two state-of-the-art, fully automated systems for Unilever companies Knorr and Lipton. The systems enable robots to identify, sort, and stack food products on pallets.

Kibele PIMS built the larger of the two systems to help Knorr classify its soups, sauces, and other products. Packaged in small batches, the products exit production via a 27-meter-long feeding conveyor and are then transferred in cartons by product type to their respective packing stations.

The cartons are sorted according to type by three Kuka robots moving on a linear axis to work across 17 stations. The robots place the cartons on pallets and wrap them with adhesive film. As soon as a pallet is fully loaded and wrapped, the robots place it on conveyor belts for collection by trucks.

“Image processing systems play a decisive role in this application,” states Erdal Başaraner, who played a key role in the development of the two systems at Kibele PIMS. “Without them, a reliable solution would not have been possible.”

Knorr’s system features a total of 19 Teledyne DALSA Genie Nano M1920 area cameras. At the beginning of each packaging station, a camera reads barcodes on the boxes, which are used to classify the product type and assign it to the correct conveyor belt. At the same time, the incoming boxes are checked for possible damage to the packaging. Defective cardboard boxes just stay on the feeding conveyor belt until they reach the end, where they are collected and manually assessed and repacked, if possible.
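The routing logic described above amounts to a barcode lookup plus a damage check. A minimal sketch of that decision, with hypothetical product codes and station numbers (not Knorr's actual data):

```python
from typing import Optional

# Illustrative barcode-driven carton routing; product codes and
# station numbers are hypothetical, not Knorr's actual data.
ROUTING = {
    "SOUP-01": 3,    # product type -> packing station
    "SAUCE-02": 7,
}

def route_carton(barcode: str, damaged: bool) -> Optional[int]:
    """Return the target packing station, or None to leave the carton on
    the feeding conveyor for manual assessment at the end of the line."""
    if damaged or barcode not in ROUTING:
        return None
    return ROUTING[barcode]
```

An unknown or damaged carton simply gets no station assignment, mirroring how defective boxes ride the conveyor to the end for manual handling.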

After the robots have loaded undamaged boxes onto the pallets, the completed pallets are transported past two additional cameras, which are used to record the number of pallets, the expiry dates, and again the type of product. This information is then sent to a labeling machine that prints the associated transport labels, attaches them to the pallets, and thus releases them for shipment.

In addition to Teledyne DALSA Genie Nano M1920 area cameras, the system includes four GEVA-312T industrial image processing PCs, each equipped with the image processing software iNspect, which evaluates all images. The customer then saves the complete product and pallet data directly on the GEVA PCs and can use this data to generate a wide variety of reports.

“I’m very satisfied with the results of both sorting lines,” says Başaraner. “Before switching to the fully automated solutions, the food packages were manually sorted, which was a very exhausting task. In the new setup, the boxes arrive every 1.5 seconds on average on the Knorr line and every 2 seconds on the Lipton line. Compared to manual sorting, this means a significant increase in profitability and considerably fewer errors in the correct packaging of the food. Teledyne DALSA’s imaging components used in these fully automated sorting and packaging lines have been an essential guarantee for the achieved success.”

Kibele PIMS uses the Teledyne DALSA GEVA-312T, an integrated vision system complete with a 12-inch touch screen display. The slim-line form factor maximizes control panel real estate for PLCs, motion controllers, and nonvision-related components. (Image courtesy of Teledyne DALSA.)

Increasing Perception Reduces Costs

Use of autonomous mobile robot (AMR) technology is increasing in a variety of industries to help automate processes, leading to improved productivity and efficiency. To ensure AMRs work as needed, a variety of perception solutions are necessary, from edge processing hardware and ultrasonic sensors to artificial intelligence (AI) and 3D vision systems.

Mobile robots are always measured by availability and efficiency. More efficient robots mean a smaller fleet can meet a given throughput requirement: a smaller, faster fleet can manage the same throughput as a larger, slower fleet, saving tens to hundreds of thousands of dollars depending on how many robots are required.
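The fleet-sizing arithmetic behind that claim can be made concrete. In this back-of-the-envelope sketch, every figure (throughput target, per-robot mission rate, unit cost) is an assumption for illustration, not data from the article:

```python
import math

# Back-of-the-envelope fleet sizing; all figures are illustrative
# assumptions, not data from the article.
REQUIRED_MISSIONS_PER_HOUR = 120
COST_PER_ROBOT = 50_000  # assumed unit cost in dollars

def fleet_size(missions_per_robot_per_hour: float) -> int:
    # Round up: a fractional robot has to become a whole one.
    return math.ceil(REQUIRED_MISSIONS_PER_HOUR / missions_per_robot_per_hour)

slow_fleet = fleet_size(6.0)   # 20 robots
fast_fleet = fleet_size(7.5)   # 16 robots after a 25% efficiency gain
savings = (slow_fleet - fast_fleet) * COST_PER_ROBOT
print(slow_fleet, fast_fleet, savings)  # 20 16 200000
```

Under these assumed numbers, a 25% per-robot efficiency gain removes four robots from the fleet, which is where the six-figure savings come from.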

Adding perception devices to a robot allows for greater efficiency during each mission. Simply put, the robot has less downtime due to unplanned stops. This additional perception also reduces cycle time for tasks that require the robots to engage with the environment, such as when automated fork trucks load pallets for transport.

“Seconds count in the mobile robot industry,” explains Garrett Place, Business Development, Robotics Perception, ifm efector, inc. “Saving 30 seconds per mission has a great impact on throughput at the end of the week, month, and year. Camera hardware is starting to provide a robust dataset in varying environments. While further work can be done here, the gains may not be as significant as what has been achieved over the last 8–10 years.”
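The "seconds count" point is easy to quantify. In this rough sketch, the fleet size and missions-per-day figures are assumptions for illustration, not from ifm:

```python
# Rough aggregation of a 30-second-per-mission saving; fleet size and
# mission counts are assumed values for illustration.
SECONDS_SAVED_PER_MISSION = 30
MISSIONS_PER_ROBOT_PER_DAY = 200  # assumed
FLEET_SIZE = 25                   # assumed

daily_hours_saved = (SECONDS_SAVED_PER_MISSION
                     * MISSIONS_PER_ROBOT_PER_DAY
                     * FLEET_SIZE) / 3600
yearly_hours_saved = daily_hours_saved * 365
print(round(daily_hours_saved, 1), round(yearly_hours_saved))  # 41.7 15208
```

Even with modest assumptions, half a minute per mission compounds into thousands of robot-hours per year across a fleet.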

Will Corns, Senior Manager of Machine Vision Strategic Accounts at Zebra Technologies, agrees that “perception packages need to provide a wider field of view or incorporate 360-degree views, especially overhead to avoid overhead collisions. Also, processing times must be faster. Imagine the ability to move product from one side of a warehouse to the other seconds or minutes faster. This results in increased overall efficiency, either providing more throughput or using fewer robots to move product. By simply adding additional sensor packages, this can be achieved.”

Used for robotic perception on many AMRs and AGVs, ifm efector’s O3R platform camera heads are about the size of the Intel RealSense D series cameras but with full industrial specifications rated for shock, vibration, and dust. (Image courtesy of ifm efector.)

Dynamic Landscape Challenge

Another continuing challenge is the dynamic landscape in which AMR technology is asked to operate. From staying in calibration to avoidance techniques, technology is continuously being deployed to reduce collision risk, improve obstacle avoidance, and manage interaction with humans. Some perception packages use multiple sensor payloads to provide wider feedback.

“Many of the biggest challenges in mobile technology applications are similar to the ones we deal with in automation in general,” says Corns. “These include the ability to find efficiency gains while keeping product quality as high as possible, creating workflows and collaboration with integrated devices to take items out of a facility warehouse management system, providing guidance to robots for collision avoidance, and incorporating multiple sources, such as ultrasonic, vision and 3D.”

The ability to process AI algorithms at the edge is getting faster every year, while systems are increasingly expected to provide more analytics, make decisions on their own, and use less power. Along with multiple sensor payloads, marrying 2D and 3D information, and even providing environment data such as SKUs or barcode capture, is significant.
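Marrying 2D and 3D information often comes down to back-projecting a 2D detection (say, a barcode's pixel location) into a 3D point using a depth reading and the camera's intrinsics. A minimal sketch, assuming an ideal pinhole camera; the intrinsics and depth value here are made-up illustrative numbers:

```python
# Sketch of fusing a 2D detection with 3D depth via pinhole back-projection.
# The intrinsics (fx, fy, cx, cy) and the depth reading are assumed values.
def pixel_to_3d(u, v, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) with a depth reading (meters) to a
    camera-frame (x, y, z) point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A barcode detected at pixel (500, 300), with the 3D sensor reading 1.2 m:
point = pixel_to_3d(500, 300, 1.2)  # roughly (0.36, 0.12, 1.2) meters
```

Real systems also correct for lens distortion and transform the point from the camera frame into the robot or warehouse frame, but the core fusion step is this back-projection.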

AMRs are deployed in various applications and usually start by moving product from an induct area to a stocking location. Warehouse and manufacturing operations quickly figure out that other tasks can be performed along the way. From inventory to floor plan layout, vision plays a significant role in the dynamic landscape in which mobile robots operate.

Zebra’s PalletTransport1500 AMR supports cross-docking, returns, and case-picking workflows for contactless pallet transport in distribution centers. (Image courtesy of Zebra Technologies.)

“With the recent addition of Matrox Imaging to Zebra, we now have some of the most robust machine vision technology to allow collaboration between mobile robots, 6–7 axis robots, or pick-and-place systems,” says Corns. “With 2D and 3D capabilities and innovative software and hardware offerings, we can also provide AI deep learning to the mix with our previous acquisition of Adaptive Vision.”

Increasing Environmental Awareness for the Future

Increased availability of industrial 3D cameras, along with lower-cost 3D lidar, has helped open the door to greater environmental awareness for AMRs and other vehicles, which leads to increased throughput. New CPU/GPU combinations from companies such as NVIDIA provide the compute needed to handle the increased data generated on each robot.

Achieving higher throughput, greater operational efficiency, and fewer product touches — whether in manufacturing or warehousing — equates to reduced costs. It also allows for redeployment of other resources to more important jobs, leveraging invested assets to their fullest potential. This is especially critical today with the current labor shortage.

“By allowing major transportation and logistics companies to move thousands of boxes to the continuous resupply of a manufacturing process, our customers have been able to deploy a completely autonomous ecosystem,” says Corns.

Additional efficiencies seen today include the ability to measure in flight, collaborate with multiple systems, and increase productivity. Several advancements, however, need to come quickly, according to Corns: adopting perception packages on 6–7 axis robots, adding sensor packages to AMRs to allow for more accurate work performance, and using edge processing for quicker, more deterministic routines that reduce steps in the process.

“Developments in the area of signal processing and algorithm development are perhaps the next step function change we can see in the mobile robot industry,” says Place. “The system must make the best decisions based on the received signals. Additional potential inferences can make a big impact on robot efficiency. Helping the robot perceive its environment with greater fidelity will improve the overall operation of the vehicle.”