Embedded Sensing Layers for Autonomous Navigation

Autonomous navigation is difficult to achieve, and dozens of companies are racing to lead in this technology. While no single approach has yet proven the most effective, many companies are converging on a similar set of technologies for autonomous navigation.

Whether it’s drones, boats, military vehicles or consumer vehicles, the world is full of unpredictable variables. Any autonomous vehicle has to account for these variables without relying on pre-programmed responses. This requires fusing a number of different sensor types and data streams, which is currently one of the biggest challenges in autonomous navigation.

What Types of Embedded Sensing Layers are Used in an Autonomous Vehicle?

A typical autonomous vehicle has to estimate its own position, calculating acceleration and angular rate, while mapping or sensing the environment around it. This requires layered sensing networks with machine vision and lidar technology at the heart of it all. However, other data streams must also be incorporated for true autonomy.
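To illustrate the position-estimation half of this problem, the sketch below (a deliberately simplified 2D model, not any vendor's implementation) dead-reckons a pose by integrating acceleration and angular rate samples over fixed time steps:

```python
import math

def dead_reckon(samples, dt):
    """Integrate (forward_accel, angular_rate) samples into a 2D pose.

    samples: iterable of (a, w) pairs -- forward acceleration in m/s^2
             and angular rate in rad/s, sampled every dt seconds.
    Returns (x, y, heading, speed). Pure integration like this drifts
    quickly, which is why real systems fuse in GPS and other sensors.
    """
    x = y = heading = speed = 0.0
    for a, w in samples:
        heading += w * dt                     # angular rate -> heading
        speed += a * dt                       # acceleration -> speed
        x += speed * math.cos(heading) * dt   # speed -> position
        y += speed * math.sin(heading) * dt
    return x, y, heading, speed

# Constant 1 m/s^2 forward acceleration, no turning, 10 steps of 0.1 s:
print(dead_reckon([(1.0, 0.0)] * 10, 0.1))
```

Because every sample's noise is accumulated, the estimate degrades over time; the GPS and other absolute references discussed below exist to bound that drift.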

Many autonomous vehicles are heavily dependent upon GPS – either a standard GPS with an accuracy of about 5 meters or a differential GPS with an accuracy of about 10 centimeters. These GPS fixes are fused with data from accelerometers, gyroscopes and magnetometers in on-board inertial measurement units (IMUs) for accurate position estimation. Microelectromechanical systems (MEMS) technology has made these sensors small and powerful enough to embed in a wide range of vehicles for autonomous navigation.
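The simplest way to picture this GPS/IMU fusion is a complementary blend: trust the IMU for short-term smoothness and the GPS for long-term accuracy. The sketch below is a hypothetical illustration (the function name and weight are assumptions); production systems typically use a Kalman filter instead:

```python
def fuse_position(imu_estimate, gps_fix, gps_weight=0.2):
    """Blend a drifting IMU position estimate with a GPS fix.

    gps_weight would be tuned to receiver accuracy (~5 m for standard
    GPS, ~10 cm for differential GPS): the more accurate the fix, the
    more heavily it can be weighted against the IMU estimate.
    """
    x_i, y_i = imu_estimate
    x_g, y_g = gps_fix
    w = gps_weight
    return ((1 - w) * x_i + w * x_g,
            (1 - w) * y_i + w * y_g)

# The IMU has drifted to (10.4, 3.1); the GPS fix reads (10.0, 3.0).
# The fused estimate is pulled part of the way back toward the fix:
print(fuse_position((10.4, 3.1), (10.0, 3.0)))
```

Running this blend at every GPS update keeps the fast IMU estimate from wandering while avoiding the jumpiness of raw GPS fixes.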

Real World Example of Embedded Sensing Layers for Autonomous Navigation

A new autonomous mobile robot (AMR) for use in warehouses leverages a lidar scanning range finder that uses time-of-flight measurement to locate objects up to 25 meters away. A host CPU combines data from an IMU system and polar positional coordinates from the lidar to generate an accurate 2D map of the surrounding environment. A short-range 3D imaging sensor is also deployed for better visualization outside the lidar’s field of view. After the data from the IMU, lidar and wheel encoders are combined, a comprehensive map is created that allows for 3D autonomous navigation.
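The two core calculations in that pipeline can be sketched in a few lines: time-of-flight ranging (half the round-trip distance of a light pulse) and projecting the lidar's polar coordinates into the map frame using the robot pose from the IMU and wheel encoders. This is a minimal illustration, not the AMR's actual code; the function names are assumptions.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_range(round_trip_seconds):
    """Time-of-flight ranging: the pulse travels out and back,
    so range is half the round-trip distance."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def polar_to_map(scan, pose):
    """Project lidar returns into the 2D map frame.

    scan: list of (range_m, bearing_rad) in the sensor frame.
    pose: robot (x, y, heading_rad) from the IMU/encoder estimate.
    Returns world-frame (x, y) points for the map.
    """
    x0, y0, th = pose
    return [(x0 + r * math.cos(th + b),
             y0 + r * math.sin(th + b)) for r, b in scan]

# A return arriving ~167 ns after the pulse is roughly 25 m away,
# around the maximum range quoted for the sensor above:
print(tof_range(167e-9))

# Two returns seen by a robot at (2, 1) facing +x:
print(polar_to_map([(1.0, 0.0), (2.0, math.pi / 2)], (2.0, 1.0, 0.0)))
```

Each new scan, transformed this way and stamped with the current pose estimate, becomes another layer of points in the accumulating 2D map.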

There are many real-world examples of embedded sensing layers for autonomous navigation, but the one above may best highlight the interacting data streams and how they’re combined. Autonomy is easier in a warehouse than on the road because a warehouse is a far less dynamic environment than the outdoors – autonomous vehicles that operate outside need to compensate for these variables with more advanced sensing systems.

Autonomous navigation is difficult. It requires a complex system of embedded sensing layers and data fusion to create intelligible information for a robot or vehicle to move on its own. While autonomous navigation is still in its early stages, there have been many promising breakthroughs.

To learn more about how autonomous navigation works, take a deeper dive in our featured article, “Data Fusion Helps Autonomous Platforms Make 3D Maps More Efficiently.”

