3D Camera and Sensor Innovations Keep Mobile Robots Moving
By Jimmy Carroll, Contributing Editor
Like several other exciting technologies in the automation space, autonomous mobile robots (AMRs) continue to gain in popularity in places such as warehouses, distribution centers, and factories. As the market for AMRs continues to grow, the technologies that allow these robots to navigate challenging environments must also advance. This article dives into some of the latest technologies empowering AMRs to navigate, avoid obstacles and collisions, and work alongside people on the factory floor.
Unstructured, Challenging Environments
More than ever before, robots today handle a plethora of nontraditional roles in areas such as manufacturing, delivery, and security. Mobile robots face challenges in traversing changing and unstructured environments and must be designed to detect and classify objects at ranges that allow appropriate decision making and safe, efficient navigation. An AMR requires perception data capable of supporting the robot’s ability to identify and distinguish between objects of varying motion, shape, reflectivity, and material composition, according to Vishal Jain, Vice President of Software Engineering at Velodyne Lidar.
Lidar technology, like the products offered by Velodyne Lidar, gives different types of robots rich, accurate 3D information for high-speed, safe navigation in varying environments. With it, a robot can avoid collisions with small objects such as dunnage, overhanging objects such as cables or light fixtures, and moving obstacles such as people, with ample time to react. The high-resolution, dense 3D perception data gathered by Velodyne sensors enables localization, mapping, object classification, and object tracking.
The company’s new Velarray M1600 solid-state lidar sensor, for example, provides AMRs with real-time, near-field perception data up to 30 meters and a broad 32-degree vertical field of view, allowing them to traverse unstructured and changing environments.
“The M1600 solid-state lidar sensor is built using Velodyne’s proprietary micro-lidar array architecture, which features the company’s optical chip technology with eight lidar channels miniaturized to the size of a penny, which forms the ‘engine’ of the lidar sensor,” said Jain. “The miniaturization combined with Velodyne’s proprietary, fully automated manufacturing process enables cost-effective, high-quality mass production.”
Safety Standard Considerations
In terms of robot safety, little standards guidance existed until recently, when the RIA introduced the first national safety standard for industrial mobile robots. ANSI/RIA R15.08-1-2020 – American National Standard for Industrial Mobile Robots – Safety Requirements – Part 1: Requirements for the Industrial Mobile Robot provides technical requirements for the design of industrial mobile robots to support the safety of the people who work near them. Aaron Rothmeyer, Market Product Manager at industrial sensor company SICK, believes the standard will lead to increased robot deployments.
“RIA’s R15.08 can really open things up for wider adoption because companies that are hesitant to deploy robots now have standardized safety guidelines to follow and can feel comfortable,” he said.
Having participated in the development of the standard, SICK now uses its guidelines to create products designed to comply with it.
“Many devices used in mobile robots are getting smaller and lighter, so we’ve looked at the new standard and where the market is going and have started releasing more small, low-cost, lightweight products that can fit into smaller platforms,” said Rothmeyer.
Examples include SICK’s nanoScan3 safety laser scanner and its TIM781 2D lidar sensor, which also has a safety variant. The TIM781, according to Rothmeyer, measures about 3 by 2 inches and can fit into small spaces. It has also opened new possibilities for mobile robots in terms of IEC safety standards.
“Most every safety laser scanner out there is Performance Level D, which is meant to protect against the highest level of risk, where you’ve got a machine that could potentially kill someone on a factory floor,” said Rothmeyer. “With the TIM781, we’ve been able to take some of the learnings from previous sensors and factor in the RIA R15.08 guidelines to do something that’s never been done before: protect lower-risk applications at Level B.”
Level B represents a vastly different category from Level D and often provides auxiliary protection against less severe injuries. For example, if an automated guided vehicle (AGV) with Level D performance pulls a trailer, and the trailer creates a pinch point when turning, that pinch point is the Level B application: someone could get hurt there but is not at risk of dying.
3D Takes Flight
Another technology commonly used in AMRs is 3D Time of Flight (ToF). Companies such as Basler offer ToF solutions like the blaze-101 camera, which provides a large measuring range of up to 10 meters and frame rates of up to 30 fps. This camera, according to Martin Gramatke, Product Manager, 3D Image Acquisition at Basler, helps robots navigate and avoid collisions in challenging environments with different surfaces and varying ambient light (Figure 1).
“In scenarios where a laser scanner may miss an object like the forks of another forklift when the scan plane is below the obstacle, a ToF camera can help prevent machine damage,” he said.
Gramatke does not believe that ToF cameras alone can solve the problem, however.
“The key to reliable navigation and obstacle detection is the combination of different sensors,” he said. “For example, we provide software that projects color image data onto 3D image data from the ToF camera. AI can then classify the color data to make better decisions in navigation and obstacle detection.”
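The color-onto-depth projection Gramatke describes can be sketched as follows. This is a minimal illustration, not Basler's actual software: it assumes calibrated pinhole models for both cameras, and the function and parameter names are invented for the example.

```python
import numpy as np

def colorize_point_cloud(depth, K_tof, K_rgb, R, t, rgb_image):
    """Project ToF depth pixels into a color image to attach an RGB value
    to each 3D point. K_tof/K_rgb are 3x3 pinhole intrinsics; (R, t) maps
    points from the ToF frame to the color camera frame (from calibration).
    Hypothetical sketch, not a vendor API."""
    h, w = depth.shape
    # Back-project every depth pixel to a 3D point in the ToF camera frame.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K_tof[0, 2]) * z / K_tof[0, 0]
    y = (v - K_tof[1, 2]) * z / K_tof[1, 1]
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)  # N x 3 points

    # Transform into the color camera frame and project with its intrinsics.
    pts_rgb = pts @ R.T + t
    uv = pts_rgb[:, :2] / pts_rgb[:, 2:3]
    px = (K_rgb[0, 0] * uv[:, 0] + K_rgb[0, 2]).round().astype(int)
    py = (K_rgb[1, 1] * uv[:, 1] + K_rgb[1, 2]).round().astype(int)

    # Keep only points that land inside the color image with valid depth.
    H, W = rgb_image.shape[:2]
    ok = (px >= 0) & (px < W) & (py >= 0) & (py < H) & (pts[:, 2] > 0)
    colors = np.zeros((pts.shape[0], 3), dtype=rgb_image.dtype)
    colors[ok] = rgb_image[py[ok], px[ok]]
    return pts, colors
```

The resulting colored point cloud is what a downstream AI classifier would consume; in practice lens distortion and occlusion handling would also be needed.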
Since the introduction of the Helios2 3D ToF camera at VISION Stuttgart in 2018, LUCID Vision Labs has been collecting market feedback and gaining expertise in real-world applications, allowing it to implement features that AMR customers are seeking, including a multichannel feature. With traditional ToF methods, if two or more AMRs meet at an intersection, the light emitted by their ToF cameras can interfere. With the multichannel feature, multiple ToF cameras can image the same space simultaneously without disrupting one another’s depth data.
“We also choose to implement our Helios camera at Sony’s recommended 850 nm wavelength, which addresses the indoor AMR market well. The sensor has two times the quantum efficiency in this range versus 940 nm and provides customers exceptional 3D depth data with submillimeter precision, enabling superior point clouds and unfiltered data,” said Rod Barman, Founder and President at LUCID Vision Labs.
“AMRs operate in some tough environments, so we designed our next-generation Helios2 camera to withstand these rigors,” Barman continued. “Helios2 offers ‘factory tough’ IP67 protection in a compact 60 x 60 x 77.5 mm form factor, GigE Vision PoE, and an industrial M12 connector for cable lengths up to 100 m. We test to the DIN EN 60068-2-27 and DIN EN 60068-2-64 shock and vibration standards, as well as DIN EN 61000-6-2 industrial EMC immunity.”
360-degree 3D Data
Designed specifically for use with robots, the PAL line of 3D vision systems from DreamVu provides 360-degree 3D vision. According to Mark Davidson, Chief Revenue Officer at DreamVu, many customers say that some of their biggest AMR navigation challenges lie in the sheer breadth of environmental situations that must be considered. In a large warehouse, tasks such as localization can prove difficult, but the company’s 3D vision systems have addressed these challenges.
The company addresses them by continually improving the algorithms in its software releases, learning from every deployment and carrying that knowledge into the next, and by taking a highly collaborative approach with its customers.
“We spent a lot of time, engineer to engineer, over Zoom calls passing assets back and forth and getting feedback from our customers on what they need for a specific application,” he said. “Semantic segmentation, for example, is something we spend a lot of time on. We continue to advance these algorithms and put a lot of investment into R&D.”
In one recent example, DreamVu helped a client with navigation and obstacle avoidance issues on the floor. The robot navigation company’s end customer had brought a floor scrubber to market that struggled to identify objects on the floor. The company’s vice president of engineering tested 10 different sensors, but none was up to the challenge: in testing, the sensors falsely flagged white tape on a gray floor, and black tape on all floor types, as obstacles, even though tape has no height and is not an obstacle. In about two weeks, DreamVu solved the issue for its customer through R&D efforts.
DreamVu’s camera-based systems use patented optics and imaging software to deliver a 360° x 110° RGB-D field of view, complete with color and depth. According to Davidson, very little software exists that can take advantage of 360-degree data, so DreamVu created its own Vision Intelligence Software for its cameras. Many mobile robots use 2D lidar that scans a single plane; anything below or above that plane, such as a person lying on the ground, is missed by the light. With DreamVu’s 3D obstacle detection capabilities, the robot can see down to the floor and up above itself as well.
“Any hanging obstacle or something small on the floor, the system detects that,” said Davidson. “However, that may burden the robot nav system with handling all the 3D data, which can mean too much data, memory, and computation. What we do is take that 3D optical detection and flatten it into a 2D laser scan, which means that our system is compatible with the 2D mapping solutions that most of our customers are using right now.”
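The flattening Davidson describes, collapsing 3D detections into a 2D-laser-scan-compatible form, can be sketched roughly as follows. This is an illustrative approximation, not DreamVu's Vision Intelligence Software; the beam count, range limit, and height band are assumed parameters.

```python
import numpy as np

def flatten_to_scan(points, n_beams=360, max_range=10.0,
                    z_min=-1.0, z_max=1.0):
    """Collapse a 3D point cloud (robot-centered, x forward, z up) into a
    2D laser scan: for each angular bin, keep the nearest obstacle seen at
    any height inside the robot's vertical envelope [z_min, z_max].
    All parameters are illustrative assumptions."""
    ranges = np.full(n_beams, max_range)
    # Keep only points that could actually collide with the robot.
    pts = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    r = np.hypot(pts[:, 0], pts[:, 1])        # horizontal distance
    theta = np.arctan2(pts[:, 1], pts[:, 0])  # bearing in [-pi, pi]
    bins = ((theta + np.pi) / (2 * np.pi) * n_beams).astype(int) % n_beams
    # For each bin, record the closest point (unbuffered reduction).
    np.minimum.at(ranges, bins, np.minimum(r, max_range))
    return ranges
```

A hanging cable above the robot and a small box on the floor both fall inside the height band and so shrink the range in their bearing bin, which is exactly why the flattened scan stays compatible with existing 2D mapping stacks while preserving 3D awareness.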