Vision Systems Cruise into Autonomous Vehicles

POSTED 04/24/2018 | By: Winn Hardin, Contributing Editor

Although it will be a few more years before consumers can purchase autonomous vehicles, automobile manufacturers are already demonstrating vehicles capable of navigating without human intervention. Just as blind spot monitoring, forward collision warning, and lane departure warning systems have emerged in the latest generation of vehicles, fully autonomous driving is expected to follow.

According to the National Highway Traffic Safety Administration (NHTSA), automated vehicles that operate independently, without a human driver, will replace those that require a fully engaged driver at all times. By adopting Standard J3016 of SAE International in 2016, NHTSA validated SAE’s six levels of automated/autonomous driving (Table 1). 

Table 1: By adopting Standard J3016 of SAE International in 2016, NHTSA validated SAE’s six levels of automated/autonomous driving. (Courtesy of SAE International)

“Although no automobile manufacturer has achieved Level 3 or higher in production, several have produced demonstration vehicles,” says Uwe Voelzke, Technical Marketing Manager at STMicroelectronics. “Some countries are working on possible approval of Level 3 vehicles. That’s expected in 2020/2021.” Whatever the time frame, it seems certain that numerous disparate sensor types will be required to achieve the promise of fully automated vehicles. In an autonomous car, cameras, radar, sonar, global positioning systems (GPS), and light detection and ranging (LIDAR) will generate a tremendous amount of data.

“Cameras will generate 20 to 60 megabytes per second, radar upward of 10 kilobytes per second, and sonar 10 to 100 kilobytes per second. GPS will run at 50 kilobytes per second, and LIDAR will range between 10 and 70 megabytes per second. Thus each autonomous vehicle will generate approximately 4 terabytes of data per day,” says Brian Krzanich, Chief Executive Officer of Intel. As important as the systems used to capture this data is the computing infrastructure required to process and extract information from it.
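
To put those rates in perspective, the back-of-the-envelope arithmetic below shows how per-second sensor rates accumulate into terabytes per day; the sensor counts, range midpoints, and four-hour duty cycle are assumptions chosen for illustration rather than Intel’s figures.

```python
# Rough illustration of how the per-sensor rates quoted above add up to
# terabytes per day. Sensor counts, range midpoints, and the 4-hour duty
# cycle are assumptions chosen for illustration, not figures from Intel.

RATE_MB_PER_S = {          # midpoints of the quoted ranges
    "camera": 40.0,        # 20-60 MB/s
    "radar":  0.01,        # ~10 kB/s
    "sonar":  0.055,       # 10-100 kB/s
    "gps":    0.05,        # 50 kB/s
    "lidar":  40.0,        # 10-70 MB/s
}
COUNT = {"camera": 6, "radar": 4, "sonar": 8, "gps": 1, "lidar": 1}  # assumed sensor counts

SECONDS = 4 * 3600         # assumed 4 hours of operation per day
total_mb = sum(RATE_MB_PER_S[s] * COUNT[s] for s in RATE_MB_PER_S) * SECONDS
print(f"~{total_mb / 1e6:.1f} TB per day")   # roughly 4 TB with these assumptions
```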

Sensor Types
“Today most advanced driver assistance systems [ADAS] operate independently and hardly exchange information. Rearview cameras, surround-view systems, radar, and front cameras all have individual purposes,” says Hannes Estl, General Manager of the Automotive ADAS Sector at Texas Instruments. While each sensor type has its own limitations, systems use data fusion (combining information from multiple sensors) to produce a more accurate representation of the environment (Figure 1). 

Figure 1: While each sensor type has its own limitations, systems use data fusion (combining information from multiple sensors) to produce a more accurate representation of the environment. (Courtesy of Texas Instruments)
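
To make the fusion idea concrete, the sketch below uses one common approach, inverse-variance weighting of independent range estimates from different sensors; the sensor names and noise figures are illustrative assumptions, not values from Texas Instruments.

```python
# Minimal data-fusion sketch: combine independent range estimates from several
# sensors by weighting each by the inverse of its variance. Sensor names and
# noise figures below are illustrative assumptions, not vendor specifications.

def fuse(estimates):
    """estimates: list of (measured_range_m, std_dev_m) tuples."""
    weights = [1.0 / (sigma ** 2) for _, sigma in estimates]
    fused = sum(w * r for w, (r, _) in zip(weights, estimates)) / sum(weights)
    fused_sigma = (1.0 / sum(weights)) ** 0.5
    return fused, fused_sigma

# Example: camera, radar, and LIDAR each estimate the range to the same object.
readings = [(25.4, 1.5),   # camera: coarse range estimate
            (24.8, 0.5),   # radar: good range accuracy
            (25.0, 0.2)]   # LIDAR: best range accuracy
range_m, sigma = fuse(readings)
print(f"fused range: {range_m:.2f} m +/- {sigma:.2f} m")
```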

Of course, the processing requirements of each sensor type are diverse. Just as the smart cameras used in machine vision systems perform low-level image processing tasks such as flat-field correction and Bayer interpolation, cameras used in automotive applications also perform heterogeneous processing. While the cameras and the radar, sonar, GPS, and LIDAR systems can all perform their own specific low-level processing functions, fusing the data generated will require more complex algorithms.
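
As a simplified illustration of the low-level steps named above, the sketch below applies flat-field correction and a rough green-channel Bayer reconstruction to a stand-in raw frame; the calibration frames and the RGGB layout are assumptions made for the example.

```python
# Sketch of two low-level camera processing steps: flat-field correction
# followed by a simple green-channel reconstruction of an RGGB Bayer frame.
# The calibration frames (dark, flat) and the RGGB layout are assumptions.

import numpy as np

def flat_field(raw, dark, flat):
    """Remove fixed-pattern offset and per-pixel gain variation."""
    gain = np.mean(flat - dark) / (flat - dark)
    return (raw - dark) * gain

def demosaic_green(raw):
    """Very rough green-channel reconstruction for an RGGB Bayer pattern."""
    g = raw.astype(float).copy()
    mask = np.zeros_like(g, dtype=bool)
    mask[0::2, 0::2] = True   # red sites
    mask[1::2, 1::2] = True   # blue sites
    padded = np.pad(g, 1, mode="edge")
    # Average the four green neighbours at red and blue sites.
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    g[mask] = neigh[mask]
    return g

raw = np.random.randint(0, 4095, (8, 8)).astype(float)   # stand-in 12-bit frame
dark = np.full_like(raw, 64.0)                            # assumed dark frame
flat = np.full_like(raw, 3000.0)                          # assumed flat frame
green = demosaic_green(flat_field(raw, dark, flat))
```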

“Cars, taxis, and trucking represent a market worth $10 trillion,” says Jensen Huang, founder and CEO of NVIDIA. It’s no wonder then that camera and sensor companies are courting large automobile manufacturers, often publicizing their successes in autonomous vehicle designs. For example, researchers at VisLab equipped a prototype vehicle called BRAiVE with four laser scanners, a GPS, and an inertial measurement unit (IMU). 

Figure 2: In a prototype autonomous vehicle developed by VisLab, cameras from FLIR Integrated Imaging Solutions assist in front obstacle detection, intersection estimation, parking space detection, blind spot monitoring, and rear obstacle detection. (Courtesy of FLIR)

For forward obstacle/vehicle detection, lane detection, and traffic sign recognition, four Dragonfly2 cameras from FLIR Integrated Imaging Solutions are mounted behind the upper part of the windshield. Two have color sensors, and two have grayscale sensors. Two more Dragonfly2 cameras are mounted behind the body of the car, looking sideways for parking and traffic intersection detection. An additional two Firefly cameras are integrated into the rearview mirror to detect overtaking vehicles, and another two Dragonfly2 cameras monitor nearby obstacles (Figure 2).

Recognizing that a broader mix of sensing technologies in ADAS provides a better path to Level 4 and Level 5 capability, FLIR also offers cameras based on its Boson thermal camera core, which allow objects to be identified through darkness, fog, glare, and smoke.

Like FLIR, Foresight Autonomous Holdings sees the value of combining visible, stereo, and infrared (IR) imaging. At this year’s Consumer Electronics Show (CES) in Las Vegas, Nevada, in January, Foresight showed its QuadSight vision system. The system combines a stereoscopic pair of IR cameras with a stereoscopic pair of visible-light cameras to achieve near-100 percent obstacle detection with near-zero false alerts in challenging conditions such as complete darkness, rain, haze, fog, and glare.

Figure 3: Attendees at CES had the opportunity to ride in what maker NAVYA calls the first robotized production cab on the market. The company’s AUTONOM CAB transported more than 600 people around the show.

LIDAR and RADAR
Attendees at CES also had the opportunity to ride in what maker NAVYA calls the first robotized production cab on the market. The company’s AUTONOM CAB (Figure 3) transported more than 600 people around the show. The vehicle includes no fewer than 10 LIDAR sensors, six cameras, four radars, two global navigation satellite system antennae, and one IMU.

While VLP LIDARs from Velodyne provide 360-degree peripheral vision, SCALA LIDARs from Valeo can be used for distances up to 200 meters. Together, the multiple sensors provide at least triple redundancy across all functions in the system.

Short-, medium-, and long-range radar modules are also critical to fully autonomous driving due to their robustness in varying environmental conditions, such as rain, dust, and sunlight. “Radar sensors can be classified as short-range radars, with a 0.2- to 30-meter range; medium-range radars in the 30- to 80-meter range; and long-range radar, with an 80- to 200-meter range,” says Ali Osman Ors, Director of R&D for Automotive Microcontrollers and Processors at NXP Semiconductors. 

“Long-range radar [LRR] is the de facto sensor used in adaptive cruise control [ACC] and highway automatic emergency braking systems [AEBS]. Currently deployed systems using only LRR for ACC and AEBS have limitations, such as detecting motorcycles, and could be paired with a camera sensor to provide additional detection context,” says Ors. To address these needs, NXP offers a range of radar sensing and processing, vision processing, and sensor fusion products for the ADAS market. 
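
As a concrete illustration of the range bands Ors describes, and of pairing a long-range radar track with a camera detection for added context, the sketch below uses hypothetical detection records and a simple bearing-matching rule; it is not based on NXP’s products.

```python
# Illustrative sketch: bucket radar detections into the short/medium/long
# range classes quoted above, and flag long-range tracks that a camera has
# also confirmed. Detection records and the matching rule are hypothetical.

def radar_class(range_m):
    """Classify a detection by range, using the bands quoted above."""
    if 0.2 <= range_m <= 30:
        return "short-range"
    if range_m <= 80:
        return "medium-range"
    if range_m <= 200:
        return "long-range"
    return "out of range"

def confirmed_by_camera(radar_track, camera_objects, max_bearing_err=2.0):
    """Pair a radar track with a camera object at a similar bearing (degrees)."""
    return any(abs(radar_track["bearing"] - obj["bearing"]) < max_bearing_err
               for obj in camera_objects)

radar_track = {"range_m": 120.0, "bearing": -1.2}            # e.g., a motorcycle
camera_objects = [{"label": "motorcycle", "bearing": -0.8}]  # camera detection

print(radar_class(radar_track["range_m"]))                   # "long-range"
print(confirmed_by_camera(radar_track, camera_objects))      # True
```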

Diversity and Redundancy
“Diversity and redundancy” was a major theme when NVIDIA’s Jensen Huang spoke of the company’s latest achievements in autonomous vehicle technology at this year’s CES. Huang unveiled the company’s DRIVE Xavier autonomous machine processor system-on-a-chip, comprising a custom eight-core CPU, a 512-core Volta GPU, a deep-learning accelerator, computer vision accelerators, and high dynamic range (HDR) video processors, which he said could help autonomous vehicles reach Level 4. “However, for the Level 5 market,” says Huang, “we have created the NVIDIA DRIVE Pegasus robotaxi AI computer, which is powered by two such processors.”

Combining deep learning, sensor fusion, and surround vision, the NVIDIA DRIVE platform is designed around redundant system architecture and is built to support ASIL D, the highest level of automotive functional safety. 

“But the processor is just the first step,” says Huang. “We have been building the entire software stack of a self-driving car all along.” To date, the company has released its DriveWorks SDK, which contains reference applications, tools, and library modules for detection, localization, planning, and visualization. The company is currently building a map perception component to create a scalable crowd-sourced mapping platform for autonomous driving. It will enable a fleet of autonomous vehicles to create and consume map data collaboratively.
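
As a purely hypothetical illustration of how detection, localization, and planning stages fit together in a stack like the one described above, the skeleton below chains invented placeholder functions; none of these names come from the DriveWorks API.

```python
# Hypothetical self-driving software stack skeleton, illustrating the kind of
# detection -> localization -> planning chain described above. All names here
# are invented for illustration; this is not the DriveWorks API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Obstacle:
    x: float          # meters ahead, vehicle frame
    y: float          # meters left/right
    label: str

@dataclass
class WorldState:
    pose: tuple                      # (x, y, heading) from localization
    obstacles: List[Obstacle] = field(default_factory=list)

def detect(camera_frame, lidar_points) -> List[Obstacle]:
    # Placeholder: a real stack would run neural detectors on each sensor stream.
    return [Obstacle(12.0, -0.5, "car")]

def localize(gps_fix, imu_sample, map_tile) -> tuple:
    # Placeholder: a real stack would fuse GPS/IMU with map matching.
    return (100.0, 250.0, 0.02)

def plan(state: WorldState):
    # Trivial rule: slow down if any vehicle is within 15 m ahead.
    if any(o.label == "car" and 0 < o.x < 15 for o in state.obstacles):
        return {"target_speed_mps": 5.0}
    return {"target_speed_mps": 15.0}

state = WorldState(pose=localize(None, None, None), obstacles=detect(None, None))
print(plan(state))   # {'target_speed_mps': 5.0}
```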

Already, Baidu and ZF Friedrichshafen AG have selected NVIDIA DRIVE Xavier for product development in China, and NVIDIA and Uber Technologies will partner to build self-driving Ubers, according to Huang.

To ensure that autonomous automobiles function even when faults are detected, the company has created ISO 26262 ASIL D safety-compatible NVIDIA Drive functional architecture, which covers the processes, processor design, algorithms, system design, and validation methods used. “So sophisticated is this,” says Huang, “that people who build airplanes are beginning to knock on our door!” 

Understanding that simulating every possible environment encountered by an autonomous vehicle is impossible, the company has also developed a simulation environment that allows, for example, designers to move cameras on the virtual design of a car and then simulate lighting and road conditions in this environment (Figure 4). 
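
As a purely hypothetical sketch of the kind of sweep such a simulator enables, the snippet below scores a few candidate camera mounting positions under varied lighting and road-surface conditions; the scoring function, positions, and scenario lists are invented placeholders, not part of NVIDIA’s tools.

```python
# Hypothetical parameter sweep: evaluate candidate camera mounting positions
# under varied lighting and road conditions, then keep the position with the
# best worst-case score. Scoring and scenario lists are invented placeholders.

import itertools

camera_positions = [(0.0, 1.4), (0.0, 1.6), (0.3, 1.5)]   # (lateral, height) in m
lighting = ["noon", "dusk", "night", "low sun glare"]
road = ["dry", "wet", "snow"]

def simulate(position, light, surface):
    """Placeholder for a rendering + perception run; returns a detection score."""
    lateral, height = position
    penalty = {"noon": 0.0, "dusk": 0.1, "night": 0.25, "low sun glare": 0.3}[light]
    penalty += {"dry": 0.0, "wet": 0.1, "snow": 0.2}[surface]
    return max(0.0, (0.6 + 0.25 * height - 0.2 * abs(lateral)) - penalty)

# Rank mounting positions by their worst-case score across all scenarios.
worst_case = {
    pos: min(simulate(pos, l, r) for l, r in itertools.product(lighting, road))
    for pos in camera_positions
}
best = max(worst_case, key=worst_case.get)
print(best, round(worst_case[best], 3))
```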

Eighty-two million automobile accidents each year, and the approximately one million fatalities they cause, cost society $500 billion. With the advent of Level 5 autonomy in every car, those accidents could theoretically be eliminated, with the added benefits of fewer traffic jams and lower transportation costs.
 
