Machine Vision Is on a Roll in the Autonomous Mobile Robot Market
POSTED 04/15/2020 | By: Dan McCarthy, Contributing Editor
For decades, industrial robots have steadily taken the dirty, dangerous, and dull jobs from human operators wherever they could efficiently do so. Until recently, however, the task of shifting pallets, parts, and inventory around a production or logistics facility has largely remained in human hands.
That began to change with the emergence of automated guided vehicles (AGVs), which added greater flexibility to material deliveries in large logistics and production facilities — as long as those facilities had predictably consistent layouts and specific navigational infrastructure, such as wires, magnetic strips, or reflectors. AGVs carry just enough onboard sensors and intelligence to use this infrastructure to navigate and stop when an obstacle blocks their pre-established travel path. Making even minor alterations to a route means moving that navigational infrastructure around, which can require expensive and disruptive changes to the surrounding facility.
Enter autonomous mobile robots (AMRs). AMRs contain additional software and sensor technologies that allow them to validate their position on an internally stored map of a facility and dynamically navigate around random obstacles in their path to find the most efficient route to a target destination.
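How that looks in practice varies by vendor, but the core loop is the same: hold a map onboard and replan when the world changes. The following is a minimal sketch of that idea, assuming a simple 4-connected occupancy grid and A* search rather than any vendor's actual navigation stack:

```python
# Minimal sketch of the AMR idea described above: keep an internal map and
# replan around an obstacle that appears at run time. The grid, coordinates,
# and 4-connected A* search are illustrative assumptions.
import heapq

def astar(grid, start, goal):
    """4-connected A* over an occupancy grid (0 = free, 1 = occupied)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no route to the goal

# A small facility map stored on the robot; plan a delivery route.
grid = [[0] * 5 for _ in range(5)]
route = astar(grid, (0, 0), (4, 4))

# A pallet is dropped in the aisle: mark the cell occupied and replan,
# rather than stopping at the blockage the way an AGV would.
grid[2][2] = 1
new_route = astar(grid, (0, 0), (4, 4))
print(route)
print(new_route)
```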
“From the business side, the value of AMRs is that the navigational system is fully self-contained on the robot,” says Josh Cloer, sales director for the eastern U.S. at Mobile Industrial Robots (MiR). “You’re not reliant on any external infrastructure beyond the bot itself, as you would be with an AGV.”
The market for both AGVs and AMRs is unlike other end markets that pit different machine vision technologies against one another in a zero-sum race for market share. Instead, these mobile platforms make room onboard for multiple imaging components and other sensor technologies. The overlapping input from these multiple sensors helps to maximize robot safety and efficiency in challenging industrial environments.
A Rolling Horror Show
“Challenging” may be an understatement in that context, according to Garrett Place, head of business development for robotics perception at ifm efector. “You have reflectors all over, shrink-wrap everywhere, shiny metal, constantly moving objects. The lights are off. The lights are on. It’s a horror show for an imaging system,” he says.
Hence the need for data fusion. “There are no unicorns,” Place adds. “Localization is best done with sparse point clouds at long ranges to get that fidelity you need, whereas obstacle detection in the near range is better suited by a higher number of points, such as you get from 3D cameras. And seeing glass is best done with ultrasonic or radar. All of these things exist on standard mobile robotics today.”
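In code terms, that division of labor can be as simple as splitting each scan by range before handing it to different modules. The snippet below is only an illustration, with an assumed 3-meter cutoff and downsampling ratio:

```python
# Illustrative split of a single scan into a sparse long-range set for
# localization and a dense near-range set for obstacle detection. The
# cutoff, downsampling ratio, and synthetic points are assumptions.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-10.0, 10.0, size=(5000, 3))   # fake x, y, z returns in meters
ranges = np.linalg.norm(points[:, :2], axis=1)      # planar distance from the robot

near = points[ranges < 3.0]                         # dense cloud -> obstacle detection
far = points[ranges >= 3.0][::10]                   # sparse long-range cloud -> scan matching

print(f"{len(near)} near-range points for obstacle detection")
print(f"{len(far)} downsampled far-range points for localization")
```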
MiR robots offer a representative model of the sensor suite onboard many mobile platforms today. All the company’s robots carry two laser scanners, specifically SICK microScan3 units mounted on the front and back, to provide a 360-degree field of view (FOV) for spotting objects up to 8 meters away. These lidar systems support both localization and safety. In addition to helping a robot map out a new facility, they ensure that it can dynamically avoid obstacles and people when navigating. The scanners’ infrared lasers add a further measure of safety by providing some immunity to interference from sunlight and dust.
MiR robots also leverage 3D imaging in the form of two forward-facing Intel D435 depth cameras to detect obstacles in a FOV measuring roughly 2 meters high. That vertical FOV is important for detecting tables, cantilevered obstacles, or dangling objects that appear high up in a robot’s path but nevertheless pose a risk to the load it is carrying.
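To see why that vertical coverage matters, consider a rough sketch that back-projects a depth frame and flags returns sitting above a planar lidar's scan height but below the top of the load. The intrinsics, mounting heights, and synthetic frame are all assumptions, not the D435's actual parameters:

```python
# Hedged sketch: flag depth returns that hang above the lidar scan plane but
# below the top of the payload, i.e. obstacles a planar scanner would miss.
import numpy as np

FX = FY = 380.0            # assumed focal lengths (pixels)
CX, CY = 320.0, 240.0      # assumed principal point
LIDAR_PLANE_Z = 0.2        # lidar scan plane height above the floor (m)
LOAD_TOP_Z = 1.8           # top of robot plus payload (m)
CAMERA_Z = 0.3             # camera mounting height (m)

depth = np.full((480, 640), 4.0, dtype=np.float32)   # synthetic frame, 4 m background
depth[100:150, 300:340] = 1.2                        # a dangling object 1.2 m ahead

v, u = np.indices(depth.shape)
z_cam = depth
y_cam = (v - CY) * z_cam / FY                        # +y points down in image coordinates
height = CAMERA_Z - y_cam                            # rough height above the floor

overhanging = (z_cam < 2.0) & (height > LIDAR_PLANE_Z) & (height < LOAD_TOP_Z)
print(f"{overhanging.sum()} pixels flag an obstacle above the lidar plane")
```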
An additional 24 proximity sensors — six on each corner of MiR bots — scan the floor for objects less than 20 centimeters high, such as pallets, cables, and human feet. Smaller payload models add ultrasonic sensors to detect glass, while MiR’s heavy payload bots carry floor-scanning time-of-flight (ToF) systems as part of the sensor suite.
Additional sensors commonly found onboard AMRs and AGVs include accelerometers and gyroscopes to sense inertial force, acceleration, and rotation. Encoders on each wheel measure speed, providing feedback that can be cross-checked against the laser scanners to detect whether the robot is slipping on a wet floor.
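A hedged sketch of that cross-check, with illustrative thresholds and sample values:

```python
# Compare the velocity the wheel encoders report against the velocity implied
# by the laser scanners' scan matching. Threshold and readings are assumptions.
def detect_slip(encoder_speed_mps, scan_match_speed_mps, tolerance_mps=0.05):
    """Flag probable wheel slip when the encoders claim more motion than the
    lidar-derived estimate confirms (e.g. wheels spinning on a wet floor)."""
    return (encoder_speed_mps - scan_match_speed_mps) > tolerance_mps

print(detect_slip(0.80, 0.78))   # consistent readings -> False
print(detect_slip(0.80, 0.40))   # encoders outrun the lidar estimate -> True
```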
Speeding the Last Meter
No matter how much warehouse or production floor an AGV or AMR navigates, much of its efficiency and value derive from what happens at the “last meter,” where it fulfills the task for which it is designed. Whether that task is human-aided or automated, navigating the last meter demands a higher level of precision and repeatability to compete with the efficiency of human operators.
“We talk about efficiency in terms of missions for AGVs,” says Place. “How many missions can this pallet-moving robot make in an hour, shift, or day? If an AGV fleet can eliminate three to four seconds per pick per vehicle, those efficiencies add up over the course of a shift or a day to become comparable to what humans can normally provide.”
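That arithmetic is easy to make concrete. With assumed (not quoted) figures for fleet size, pick rate, and shift length, a few seconds per pick compounds into hours of recovered capacity:

```python
# Back-of-the-envelope version of the efficiency math Place describes.
# Fleet size, pick rate, and shift length are assumed for illustration.
seconds_saved_per_pick = 3.5
picks_per_vehicle_per_hour = 30
fleet_size = 10
shift_hours = 8

saved_seconds = seconds_saved_per_pick * picks_per_vehicle_per_hour * fleet_size * shift_hours
print(f"~{saved_seconds / 3600:.1f} vehicle-hours recovered per shift")
```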
The enhanced repeatability and efficiency required in the last meter can come from improvements to sensors, software, processing, and/or overall system design. Some AMR systems take a page from AGVs and rely on a short length of magnetic tape on the floor or a QR code to speed docking. MiR uses such tools to provide 10-millimeter precision at the last meter.
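The snippet below sketches that last-meter idea in the abstract: a fiducial such as a QR code gives the robot a measured offset from the dock, and a simple correction loop drives that offset below a 10-millimeter tolerance. The gains, noise model, and loop structure are illustrative assumptions, not MiR's docking controller:

```python
# Toy last-meter docking loop against a floor fiducial. All values assumed.
import math
import random

TOLERANCE_M = 0.010           # 10 mm docking precision target
GAIN = 0.6                    # proportional correction per iteration

x_err, y_err = 0.120, -0.080  # initial offset from dock center (m), e.g. from a QR detection
for step in range(1, 21):
    # Apply a proportional correction, with a little measurement noise.
    x_err -= GAIN * x_err + random.uniform(-0.001, 0.001)
    y_err -= GAIN * y_err + random.uniform(-0.001, 0.001)
    if math.hypot(x_err, y_err) < TOLERANCE_M:
        print(f"docked within tolerance after {step} correction steps")
        break
else:
    print("failed to reach docking tolerance")
```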
That said, cleaner, more robust sensor input can help enhance precision and efficiency in the last meter and also help lighten the load on processing systems and software. “So much of the software problem is mitigating artifacts in the environment,” says Place. “If the hardware suppliers can do that with their design, then less work needs to be done with filtering. And you’re decreasing the barrier to these problems.”
Yet some challenges will always remain for software developers, adds Sean Kelly, senior software engineer for robotics perception at ifm efector. He presents the software driving ifm’s pallet detection system as an example. “It’s all about improving the modeling of the world we’re working within. Within palletization, we talk about modeling stretch wrap a lot. That takes looking at hundreds to thousands of unique instances to try to model what stretch wrap looks like in the imager you’re using versus how it looks through a different camera. You’re hand-tailoring these algorithms to isolate the noise from what you’re trying to look for. That’s just the nature of the game right now across all solutions, companies, and sensors.”
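A toy version of that noise-isolation step might look like the following, which drops sparse, isolated returns (stand-ins for stretch-wrap glints) while keeping the dense cluster of a pallet face. The neighbor count and distance threshold are assumed values, not ifm's algorithm:

```python
# Suppress sparse, noisy returns before a pallet detector looks for structure.
import numpy as np

def remove_sparse_noise(points, k=8, max_mean_dist=0.05):
    """Drop points whose mean distance to their k nearest neighbors is large;
    isolated glints get removed, dense pallet surfaces survive."""
    diffs = points[:, None, :] - points[None, :, :]   # pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1)
    dists.sort(axis=1)
    mean_knn = dists[:, 1:k + 1].mean(axis=1)         # skip the zero self-distance
    return points[mean_knn < max_mean_dist]

rng = np.random.default_rng(1)
pallet_face = rng.normal(0.0, 0.01, size=(300, 3))    # tight cluster ~ pallet surface
glints = rng.uniform(-1.0, 1.0, size=(20, 3))         # scattered stretch-wrap reflections
cloud = np.vstack([pallet_face, glints])
print(f"{len(cloud)} points in, {len(remove_sparse_noise(cloud))} after filtering")
```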
Before software can model the data streams from a mobile platform’s sensors, the data must be quickly processed and compiled into a common format. That means combining sparse point-cloud data from laser scanners with RGB images from stereoscopic cameras and fusing both in real time with additional data from ToF cameras, ultrasonic and radar sensors, and gyroscope and accelerometer signals. Additional calibration is needed to account for the relative position of all these sensors as they capture data from different perspectives on a moving platform. Finally, the robot must actively decide which sensors to trust based on environmental and other factors.
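The calibration step in particular reduces to applying each sensor's extrinsic transform, its known pose on the robot, before fusion. A minimal sketch, with an assumed rear-facing camera mount:

```python
# Move points from a sensor's frame into the robot body frame using the
# sensor's extrinsic calibration. The example mount is an assumption.
import numpy as np

def to_body_frame(points_sensor, T_body_sensor):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    homo = np.hstack([points_sensor, np.ones((len(points_sensor), 1))])
    return (T_body_sensor @ homo.T).T[:, :3]

# Assumed extrinsics: a camera 0.4 m behind the body origin, 0.3 m up,
# rotated 180 degrees about z so it looks backwards.
T_body_camera = np.array([
    [-1.0,  0.0, 0.0, -0.4],
    [ 0.0, -1.0, 0.0,  0.0],
    [ 0.0,  0.0, 1.0,  0.3],
    [ 0.0,  0.0, 0.0,  1.0],
])

camera_points = np.array([[1.0, 0.0, 0.0],    # 1 m in front of the camera
                          [2.0, 0.5, 0.1]])
print(to_body_frame(camera_points, T_body_camera))
```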
“There’s this whole fault process behind giving weights to the various sensors, and that’s a dynamic process,” says Kelly. “There’s no universal architecture. But we’re seeing trends toward edge processing because you can centralize all that decision making as close as possible to where the data is coming from to minimize network latency and the impact of different clocks on different computers doing different things. If it’s all happening in one location, a lot of those challenges become a lot easier.”
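In its simplest form, that weighting scheme is a confidence-weighted average whose weights shift as conditions change. The sensors, numbers, and glare trigger below are assumptions for illustration:

```python
# Fuse range estimates from several sensors with weights that change
# dynamically (here, glare degrades the depth camera). Values are assumed.
def fuse(estimates, weights):
    total = sum(weights.values())
    return sum(estimates[name] * w for name, w in weights.items()) / total

estimates = {"lidar": 2.05, "depth_camera": 2.40, "radar": 2.10}  # meters to obstacle

normal = {"lidar": 0.5, "depth_camera": 0.3, "radar": 0.2}
glare  = {"lidar": 0.6, "depth_camera": 0.05, "radar": 0.35}      # camera downweighted

print(f"fused range, normal lighting: {fuse(estimates, normal):.2f} m")
print(f"fused range, heavy glare:     {fuse(estimates, glare):.2f} m")
```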
AGVs and AMRs are paving the way toward greater efficiency and faster return on investment when automating transport tasks in production and logistics facilities. However, with greater autonomy come greater challenges. As vision providers continue to introduce higher-performing imaging systems, improved software modeling, and more robust data processing, these mobile platforms will be increasingly capable of navigating dynamic production environments safely and efficiently.