


Data Fusion Helps Autonomous Platforms Make 3D Maps More Efficiently

POSTED 04/30/2018 | By: Winn Hardin, Contributing Editor - AIA

Whether designing autonomous automobiles, mobile robots, drones, or futuristic pilotless aircraft, developers must employ multiple sensors to localize the vehicle in three-dimensional space. Depending on the system, different types of image sensors can be used to accomplish this task. Autonomous vehicles, for example, may use lidar to generate a 3D point cloud of the surrounding area. While data from lidar systems is accurate over large distances, the data generated is sparse, often necessitating the use of traditional, complementary machine vision solutions for close-up sensing and safety.

Figure 1: The µINS-2 from Inertial Sense is smaller than a U.S. quarter and uses sensor data and GPS data to provide position estimation.

Lidar sensors can be used to map the environment in either 3D or 2D, depending on the system’s application requirements. The HDL-64E from Velodyne LiDAR provides 3D data over a 360° horizontal and 26.9° vertical field of view (FOV), while the TiM571 LiDAR scanning range finder from SICK provides a 2D 220° FOV.
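For readers who want to see the geometry involved, the sketch below (a generic illustration, not code from Velodyne or SICK) converts a single lidar return given as range, azimuth, and elevation into a Cartesian point; a 2D scanner is simply the special case of zero elevation. The function name and angle conventions are assumptions made for illustration.

```python
import math

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg=0.0):
    """Convert one lidar return (range, azimuth, elevation) to a Cartesian
    point in the sensor frame. A planar 2D scanner is the special case of
    zero elevation."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# Example: a return at 12.5 m, 45 degrees azimuth, 10 degrees elevation
print(lidar_return_to_xyz(12.5, 45.0, 10.0))
```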

2D laser scanners are often used in conjunction with camera systems to locate objects and their distances within the FOV of the autonomous vehicle. For these applications, different types of camera systems can include single (mono) cameras, stereo cameras, or cameras that capture RGB and depth (RGB-D) images. If single-camera systems are used, 3D images must be computed based on the motion of the camera, and any errors associated with this movement will have a detrimental effect on the resultant 3D image. 

RGB-D sensors include both RGB cameras and IR structured light projectors and sensors. For example, the Structure Sensor from Occipital operates with portable tablets such as the iPad, which handles the RGB image acquisition, while the accessory provides the structured light projector and IR sensor for capturing depth information.

Alternatively, pre-calibrated stereo cameras such as the Bumblebee from FLIR Systems, Inc. provide similar capability. By identifying differences between features in the left and right images, stereo vision cameras determine depth. While stereo cameras produce dense data sets for short distances, they often lack range. As a result, many 3D mapping systems use both lidar and stereo vision imaging systems to obtain more accurate data. 
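The depth calculation behind stereo vision follows the classic pinhole relation Z = f × B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity between matched features. The short sketch below illustrates the relation with made-up numbers; it also shows why small disparities at long range produce coarse, noisy depth estimates, which is the "lack of range" mentioned above.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d.
    focal_px     - focal length in pixels
    baseline_m   - distance between the left and right optical centers
    disparity_px - horizontal shift of a feature between left and right images
    """
    if disparity_px <= 0:
        return float("inf")  # unmatched or effectively at infinity
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: small disparities (distant objects) give large,
# noisy depth estimates, which is why stereo cameras lack range.
for d in (64.0, 8.0, 1.0):
    print(d, "px ->", round(depth_from_disparity(1000.0, 0.12, d), 2), "m")
```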


Multiple Sensors for Autonomous Vehicles
To estimate the position of a vehicle, robot, or drone in space, designers usually depend on layered sensing networks that include more than just machine vision and lidar. Designers typically use on-board inertial measurement units (IMUs), which combine accelerometers, gyroscopes, and magnetometers, to measure acceleration and angular rate. Of course, these “dead reckoning” systems need to know a starting point and absolute location to stay safely on track. In many designs, this is provided by a global positioning system (GPS) with an accuracy of approximately 5 meters, or by a differential GPS that uses fixed ground-based reference stations to improve this accuracy to approximately 10 cm.
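As a rough, one-dimensional illustration of how a drifting dead-reckoning estimate can be pulled back toward an absolute GPS fix, the sketch below blends integrated acceleration with occasional position measurements. The gain and the numbers are arbitrary assumptions; production systems typically use a full 3D Kalman or complementary filter.

```python
class DeadReckoningWithGPS:
    """Minimal 1D sketch: integrate acceleration (dead reckoning) and blend
    in absolute GPS fixes when they arrive. The gain value is an arbitrary
    illustrative choice, not a tuned filter parameter."""

    def __init__(self, x0=0.0, v0=0.0, gps_gain=0.2):
        self.x = x0        # position estimate (m)
        self.v = v0        # velocity estimate (m/s)
        self.gps_gain = gps_gain

    def predict(self, accel, dt):
        # IMU-style propagation: errors accumulate without an absolute fix.
        self.v += accel * dt
        self.x += self.v * dt

    def correct(self, gps_x):
        # Pull the drifting estimate toward the absolute GPS measurement.
        self.x += self.gps_gain * (gps_x - self.x)

est = DeadReckoningWithGPS()
for step in range(10):
    est.predict(accel=0.1, dt=0.1)             # inertial propagation
    if step % 5 == 4:
        est.correct(gps_x=0.05 * (step + 1))   # occasional GPS fix
print(round(est.x, 3))
```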

Figure 2: At the Norwegian Institute of Bioeconomy Research, a map in the form of a 3D point cloud was used by simultaneous localization and mapping (SLAM) algorithms for localization of the forestry vehicle and to measure properties such as tree trunk diameter and the coordinates of single trees.

Thanks to microelectromechanical systems (MEMS) technology, the size of these IMU/GPS systems has shrunk dramatically, making them useful in the design of autonomous aircraft. The μINS-2 from Inertial Sense, for example, is smaller than a U.S. quarter and uses sensor data and GPS data to provide position estimation (Figure 1).

In ground-based autonomous vehicles such as mobile robots and automobiles, data from IMUs can be used with data from wheel encoders to determine the position of the vehicle. 
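A minimal sketch of that kind of wheel-encoder/IMU fusion for a differential-drive robot is shown below; the wheel-base value and the choice to let an IMU heading override the encoder-derived heading are illustrative assumptions, not a description of any particular product.

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base, imu_heading=None):
    """Differential-drive odometry update from wheel encoder distances.
    If an IMU heading is available, it replaces the encoder-derived heading,
    since gyroscopes usually track rotation better than slipping wheels do."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    theta_new = imu_heading if imu_heading is not None else theta + d_theta
    x += d_center * math.cos((theta + theta_new) / 2.0)
    y += d_center * math.sin((theta + theta_new) / 2.0)
    return x, y, theta_new

# Illustrative values: 0.30 m wheel base, slightly unequal wheel travel.
pose = (0.0, 0.0, 0.0)
pose = update_pose(*pose, d_left=0.10, d_right=0.12, wheel_base=0.30)
print(tuple(round(v, 3) for v in pose))
```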


SLAM Dunk
To autonomously navigate environments as diverse as deserts and warehouse interiors, a vehicle must combine known locations with movement tracking, constructing a map of the environment while simultaneously locating itself within that map. To do so, it must perform simultaneous localization and mapping (SLAM).

SLAM consists of first extracting landmarks or features from the point-cloud data generated by a 2D or 3D lidar, sonar, or 3D camera system, and then confirming each feature’s location by matching data from the different sensor networks. SLAM navigation systems then update the current position of the vehicle using GPS, odometry, and/or INS data before estimating where future landmarks will appear based on the mobile platform’s current position.
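The sketch below walks through one highly simplified SLAM iteration along those lines: predict the pose from odometry, associate observed landmarks with the map, nudge the pose to reduce the mismatch, and add unmatched observations as new landmarks. The 2D world-frame landmarks, the distance gate, and the correction gain are all simplifying assumptions rather than a production SLAM algorithm.

```python
import math

def nearest_landmark(obs, map_landmarks, gate=1.0):
    """Associate an observed landmark with the closest known landmark,
    if it falls within a simple distance gate."""
    best, best_d = None, gate
    for lm in map_landmarks:
        d = math.hypot(obs[0] - lm[0], obs[1] - lm[1])
        if d < best_d:
            best, best_d = lm, d
    return best

def slam_step(pose, odom_delta, observations, map_landmarks, gain=0.3):
    """One simplified SLAM iteration:
    1. predict the new pose from odometry,
    2. associate observed landmarks (given in the world frame here) with the map,
    3. nudge the pose to reduce the average landmark mismatch,
    4. add unmatched observations to the map as new landmarks."""
    x, y, theta = pose
    x, y, theta = x + odom_delta[0], y + odom_delta[1], theta + odom_delta[2]
    dx_sum = dy_sum = n = 0
    for obs in observations:
        match = nearest_landmark(obs, map_landmarks)
        if match:
            dx_sum += match[0] - obs[0]
            dy_sum += match[1] - obs[1]
            n += 1
        else:
            map_landmarks.append(obs)      # newly discovered landmark
    if n:
        x += gain * dx_sum / n             # correct the predicted pose
        y += gain * dy_sum / n
    return (x, y, theta), map_landmarks

pose, landmarks = slam_step((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                            [(3.05, 1.02)], [(3.0, 1.0)])
print(tuple(round(v, 3) for v in pose))
```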

Today, SLAM, or variations of it, is being used in a number of applications to map the environment. Examples include mobile robots used to generate local maps of forests, autonomous mobile robots (AMRs) for material handling applications, and drones for environmental awareness. 

Mapping Applications Abound
To measure the diameter of tree trunks, Marek Pierzchała of the Norwegian Institute of Bioeconomy Research and his colleagues developed a mobile platform that combines a number of sensors to perform automated 3D mapping of forests. These include a Velodyne VLP-16 LiDAR, a FLIR Bumblebee stereo vision camera to record stereo images, and an onboard IMU and GPS interfaced to a Pixhawk microcontroller for global reference.

Figure 3: Fetch Robotics’ Freight robot employs a lidar scanning range finder, an IMU, and wheel encoders to generate a map of its surroundings; a web-based GUI can then be used to limit the motion of the robot and define the positions of charging stations.

Employing the Robot Operating System (ROS) running under Ubuntu 14.04, the system produces a map in the form of a 3D point cloud, which is then used by SLAM algorithms for localization of the forestry vehicle. The point cloud is also used to compute properties such as tree trunk diameter and the coordinates of single trees (Figure 2).
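For readers curious how a trunk diameter might be extracted from such a point cloud, one common approach (not necessarily the one used in this project) is to take a horizontal slice of points around a stem and fit a circle to it. The sketch below uses an algebraic (Kasa) circle fit on synthetic data purely as an illustration.

```python
import numpy as np

def trunk_diameter_from_slice(points_xy):
    """Estimate a tree trunk diameter from a horizontal slice of lidar points
    (N x 2 array of x, y) using an algebraic (Kasa) circle fit:
    solve x^2 + y^2 = a*x + b*y + c for a, b, c in a least-squares sense."""
    pts = np.asarray(points_xy, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    b = pts[:, 0] ** 2 + pts[:, 1] ** 2
    a_coef, b_coef, c_coef = np.linalg.lstsq(A, b, rcond=None)[0]
    radius = np.sqrt(c_coef + a_coef ** 2 / 4.0 + b_coef ** 2 / 4.0)
    return 2.0 * radius

# Synthetic half-arc of points on a 0.4 m diameter trunk, as a lidar might see.
angles = np.linspace(-np.pi / 2, np.pi / 2, 50)
slice_pts = np.column_stack([0.2 * np.cos(angles) + 5.0,
                             0.2 * np.sin(angles) + 1.0])
print(round(trunk_diameter_from_slice(slice_pts), 3))  # approximately 0.4
```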

Meanwhile, Fetch Robotics is using AMRs for warehousing and intra-logistics applications. In the past, installing automated guided vehicles was time-consuming and expensive since dedicated tracks needed to be installed on the warehouse floor for these vehicles to follow. Now, AMRs that do not require dedicated paths can automatically generate a map of a warehouse.

Figure 4: Top-down view of a 250 m drone trajectory (red) through a forest, overlaid with a 3D map (gray dots) generated by direct sparse odometry SLAM.

To accomplish this, Fetch Robotics’ Freight robot employs SICK’s TiM571 LiDAR scanning range finder, which uses time-of-flight measurement to locate objects up to 25 meters away. Polar positional coordinates (the distance and angle measured by the lidar) are transferred to the system’s host CPU and combined with data from the IMU to generate a 2D map. The Freight robot also uses a Carmine 1.09 short-range 3D camera sensor from PrimeSense to allow visualization above and below the lidar’s FOV. A map of the robot’s surroundings can then be made by combining data from its wheel encoders, IMU, and lidar. Once the map is generated, a web-based GUI can be used to limit the motion of the robot and define the positions of charging stations (Figure 3).
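A simplified sketch of that polar-to-map fusion is shown below: each beam’s distance and angle is projected into the world frame using the pose estimated from the wheel encoders and IMU, and the hit points are binned into grid cells. The scan parameters, grid resolution, and function name are illustrative assumptions, not Fetch Robotics’ implementation.

```python
import math

def scan_to_occupied_cells(ranges_m, angle_min, angle_increment,
                           robot_pose, cell_size=0.05, max_range=25.0):
    """Project a planar lidar scan (polar: distance + angle per beam) into
    world-frame grid cells using the robot pose estimated from wheel
    encoders and the IMU. Returns the set of occupied (i, j) cells."""
    rx, ry, rtheta = robot_pose
    occupied = set()
    for i, r in enumerate(ranges_m):
        if not (0.0 < r < max_range):
            continue  # drop invalid or out-of-range beams
        beam_angle = rtheta + angle_min + i * angle_increment
        wx = rx + r * math.cos(beam_angle)
        wy = ry + r * math.sin(beam_angle)
        occupied.add((int(wx // cell_size), int(wy // cell_size)))
    return occupied

# Illustrative 220-degree scan, robot at (1 m, 2 m) with a 30-degree heading.
n_beams = 661
ranges = [5.0] * n_beams
cells = scan_to_occupied_cells(ranges, math.radians(-110),
                               math.radians(220) / (n_beams - 1),
                               (1.0, 2.0, math.radians(30)))
print(len(cells))
```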

To further illustrate how companies are using layered sensing systems for 3D autonomous navigation, Nikolai Smolyanskiy and his colleagues at NVIDIA have demonstrated how the technique can be used in the development of a micro aerial vehicle for autonomously following trails in unstructured outdoor environments such as forests. Packed onto a 3DR Iris+ Quadcopter from 3D Robotics with a Pixhawk module and an NVIDIA Jetson TX1 embedded supercomputer on a J120 carrier board from Auvidea, the system features a forward-facing Microsoft HD Lifecam HD5000 USB camera, a downward-facing PX4FLOW optical flow sensor from Pixhawk with sonar, and a Lidar Lite V3 optical distance measurement sensor from Garmin (Figure 4).

While autonomous cars may have garnered the most publicity for 3D mapping, other applications are using the same technology in areas as diverse as farming, forestry, and warehouse mapping. The technology is also being used to map the ocean floor, measure fish school populations, and enable remote underwater vehicle guidance. And as the number of 3D mapping applications rises, so, too, do the opportunities for vision and imaging companies.
 
