Embedded Vision Explained: How Do Autonomous Cars See?
If autonomous cars are going to drive better than humans, they must use embedded vision to “see” better than humans. Developers are creating detection systems that can sense a vehicle’s environment better than human eyesight through a combination of camera, radar, and LiDAR sensors (Light Detection and Ranging: a remote sensing method that uses light in the form of a pulsed laser to measure ranges).
Working together, these diverse sensors can verify each other’s detections. Camera, radar, and LiDAR give the car a view of its surroundings and also measure the speed and distance of nearby objects, while inertial measurement units (IMUs) track the vehicle’s own acceleration and location.
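To get a feel for the IMU’s contribution, here is a minimal dead-reckoning sketch: integrating acceleration samples over time to estimate the vehicle’s velocity and position. The sample rate and values are illustrative, and real systems correct the inevitable drift with GPS and wheel odometry.

```python
# Minimal dead-reckoning sketch (illustrative, not production code):
# integrate IMU acceleration samples to estimate velocity and position
# along one axis.

def dead_reckon(accels, dt, v0=0.0, x0=0.0):
    """Integrate acceleration samples (m/s^2) taken every dt seconds."""
    v, x = v0, x0
    for a in accels:
        v += a * dt   # velocity update
        x += v * dt   # position update
    return v, x

# Example: constant 2 m/s^2 acceleration for 1 second, sampled at 100 Hz
v, x = dead_reckon([2.0] * 100, dt=0.01)
print(f"velocity = {v:.2f} m/s, position = {x:.2f} m")  # 2.00 m/s, 1.01 m
```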
Cameras Provide a Visual
Cameras are the most accurate way for an embedded vision system to capture a visual representation of the world. Autonomous cars carry cameras on every side: front, rear, left, and right, and stitch those images into a 360-degree view of the environment.
Some cameras use wide-angle lenses with a shorter range, while others offer a narrower field of view for long-range visuals. Fish-eye cameras provide panoramic views and are often used to help the vehicle park itself.
Vehicle manufacturers often use a series of CMOS imaging sensors producing images of 1 to 2 megapixels; most use relatively inexpensive 2D cameras, though some are incorporating 3D cameras as well. Embedded vision systems need sensors with a high dynamic range, upwards of 130 dB, to ensure a clear image in all conditions, including direct sunlight.
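For a sense of what that 130 dB figure means, here is a quick sketch of the standard dynamic-range formula for an image sensor: 20·log10 of the ratio between the brightest and darkest signals it can capture. The numbers below are illustrative, not specs of any particular automotive sensor.

```python
import math

# Sketch of the dynamic-range figure quoted above: dynamic range in dB
# is 20 * log10(brightest representable signal / darkest). Values are
# illustrative, not from a specific sensor datasheet.

def dynamic_range_db(max_signal, min_signal):
    return 20 * math.log10(max_signal / min_signal)

# An HDR sensor spanning a ~4,000,000:1 signal ratio (e.g. by merging
# several exposures) clears the 130 dB bar:
print(f"{dynamic_range_db(4_000_000, 1):.0f} dB")  # 132 dB
```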
Radar Sensors Extend Capabilities
Radar sensors are used to supplement camera vision in low visibility, such as night driving or poor weather. Radar transmits radio waves in pulses; those waves bounce off an object and return to the sensor, providing information about the object’s speed and location.
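Here is a minimal sketch of that math, with illustrative numbers: the round-trip time of a pulse gives distance, and the Doppler shift of the echo gives relative speed.

```python
# Sketch of the radar math described above (illustrative values only).

C = 3.0e8  # speed of light, m/s

def radar_range(round_trip_s):
    """Distance to target: the pulse travels out and back, so halve it."""
    return C * round_trip_s / 2

def doppler_speed(freq_shift_hz, carrier_hz):
    """Relative radial speed from the Doppler shift of the echo."""
    return C * freq_shift_hz / (2 * carrier_hz)

# A 77 GHz automotive radar echo returning after 0.5 microseconds:
print(radar_range(0.5e-6))        # 75.0 m
print(doppler_speed(5132, 77e9))  # ~10 m/s closing speed
```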
Radar sensors also usually surround the car so objects can be detected at all angles. They can measure speed and distance, but they can’t tell what kind of object they’re seeing; that’s where the next sensor in the embedded vision system comes in.
LiDAR Provides a 3D View
Camera and radar are included on most new vehicles, where they power advanced driver assistance and park assist technologies. They can also offer some level of autonomy as long as a human remains in the driver’s seat, in control of the vehicle.
For a completely autonomous car, the embedded vision system adds LiDAR. LiDAR provides a 3D view of the environment, with the added benefit of working well in low-light conditions. It detects the shape and depth of surrounding cars, pedestrians, and road geography, and only a few LiDAR sensors are needed for an effective embedded vision system.
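Each LiDAR return is essentially a range measurement along a known beam direction. Here is a minimal sketch of turning one return into a 3D point; a full scan repeats this hundreds of thousands of times per second to build the point cloud. Angle conventions vary by sensor, and these values are illustrative.

```python
import math

# Sketch: convert one LiDAR return (range plus the beam's azimuth and
# elevation angles) into a 3D point. Axis conventions are assumptions:
# x forward, y left, z up.

def lidar_point(range_m, azimuth_deg, elevation_deg):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return x, y, z

# A return 20 m ahead, 15 degrees to the left, slightly below the sensor
print(lidar_point(20.0, azimuth_deg=15.0, elevation_deg=-2.0))
```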
Like the human brain, an autonomous car’s embedded vision system must take in the data from all of these sensors and interpret the information. Using different sensor types provides redundancy and overlap: a fail-safe that can cross-check detections in real time.
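Here is a toy sketch of that cross-checking idea: a detection is only trusted when at least two independent sensors place an object in roughly the same spot. The class name, sensor labels, and the 1-meter tolerance are illustrative assumptions, not any production fusion algorithm.

```python
from dataclasses import dataclass

# Toy cross-checking sketch: trust a detection only when independent
# sensors agree on roughly the same position. Names and the tolerance
# are illustrative assumptions.

@dataclass
class Detection:
    sensor: str   # "camera", "radar", or "lidar"
    x: float      # position ahead of the car, meters
    y: float      # lateral offset, meters

def confirmed(detections, tolerance_m=1.0, min_sensors=2):
    """Confirm an object if at least min_sensors agree within tolerance."""
    agreeing = {
        d.sensor
        for d in detections
        for other in detections
        if d.sensor != other.sensor
        and abs(d.x - other.x) <= tolerance_m
        and abs(d.y - other.y) <= tolerance_m
    }
    return len(agreeing) >= min_sensors

obs = [Detection("camera", 30.2, 1.1),
       Detection("radar", 30.0, 1.0),
       Detection("lidar", 29.9, 0.8)]
print(confirmed(obs))  # True: all three sensors agree on the object
```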