
Vision Navigates Obstacles on the Road to Autonomous Vehicles

POSTED 10/04/2019 | By: Dan McCarthy, Contributing Editor

Image courtesy of NVIDIA

We humans are notoriously bad at judging our own abilities objectively. Surveys of high school students have found that only 1% consider their social skills below average. A separate study of college professors found that more than 90% rated themselves as above-average teachers. And automobile drivers? Yet another survey showed that 88% of U.S. motorists believe their driving abilities are above the norm.

Given that track record of self-assessment, it should surprise no one that the early-2010s media predictions that autonomous vehicles would be ubiquitous by 2018 haven't panned out. But if the roughly 40,000 automobile fatalities in the U.S. in 2018 illustrate one thing, it's that most drivers could benefit from more vision technology and artificial intelligence (AI) aboard their vehicles.

Safety Automated
The National Highway Traffic Safety Administration estimates that human error, rather than technology or driving conditions, is to blame for 94% of traffic accidents. Causes range from distracted driving to drowsiness, speeding, and alcohol impairment. Consequently, more and more new cars incorporate some type of advanced driver assistance system (ADAS), such as cameras for reverse assistance, automatic emergency braking, and blind-spot monitoring. Upcoming car models will add increasing arrays of sensors to collect data about a vehicle’s surroundings or the state of its driver to enhance safety on the road. Vision systems based on optical and infrared cameras are often part of this mix, along with light detection and ranging (LIDAR) and radar sensors.

The Society of Automotive Engineers' J3016 standard identifies six levels of vehicle automation, from L0 (no automation) to L5 (fully autonomous). L3 marks the critical boundary where a vehicle shifts from assisting the driver to taking over control of the wheel, albeit within limited conditions. Last year, Audi's A8 sedan became the first series-production car to incorporate an L3 ADAS, the AI Traffic Jam Pilot. The system is based on Delphi's zFAS control unit, which compiles data from LIDAR and a front-facing camera operating at 18 frames per second (fps) to handle start-up, acceleration, steering, braking, and even timely reaction to vehicles cutting in front of the car, all without driver input.

While the Audi A8 is an impressive achievement, other L3 models remain on the drawing board or the test course. Some automakers, including Ford, Daimler, and BMW, aim to leapfrog L3 automation altogether and launch L4 vehicles by the early 2020s. L4 systems make autonomous operation routine and hand control back to the driver only in unmapped areas or severe weather. Many industry analysts have cautiously accepted these optimistic projections and now forecast a significant number of cars, mostly luxury models or commercial vehicles, with L4 capability by the early 2020s.

Some Caution Advised
Analysts are right to be cautious, however. In addition to navigating emerging regulations and liability concerns, designers of autonomous vehicles still confront technological hurdles. But vision technology is the least of their worries. Whether part of a camera module or an embedded solution, the image sensors, processing chips, cables, and lens modules involved are all familiar tools, and the calculations required for object detection are comparatively simple. In one scenario formulated by Texas Instruments, for example, a 1/3-inch CMOS sensor with a 43° field of view (FOV) and 1-megapixel (MP) resolution could detect a pedestrian at a maximum distance of 34 meters. Swapping in an 8-MP imaging chip increases the detection distance to 101 meters.

Thermal imagers can extend detection of pedestrians and animals to as much as four times the range of optical cameras, according to FLIR Systems, though this ability relies in part on narrowing the camera's FOV. The detection distance for a FLIR ADK camera with a horizontal FOV of 50° is 67 meters; narrowing that FOV to 18° pushes detection distance out to 186 meters.
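
Both of these scaling effects, resolution and field of view, fall out of a simple pinhole-camera, pixels-on-target calculation. The Python sketch below assumes an illustrative target width of 0.5 meters and a threshold of roughly 24 horizontal pixels for a reliable detection; those two parameters are guesses chosen so the results land near the published figures, not values disclosed by TI or FLIR.

```python
import math

def detection_distance_m(h_pixels, hfov_deg, target_width_m=0.5, min_pixels_on_target=24):
    """Farthest distance at which a target of the given width still spans at
    least `min_pixels_on_target` horizontal pixels, using a simple pinhole
    camera model. The default target width and pixel threshold are
    illustrative guesses, not parameters published by TI or FLIR."""
    # The full field of view covers a width of 2 * d * tan(hfov/2) at
    # distance d, so pixels on target fall off linearly with distance.
    half_angle = math.radians(hfov_deg) / 2
    return (target_width_m * h_pixels) / (min_pixels_on_target * 2 * math.tan(half_angle))

# Resolution effect (cf. the TI scenario, 43-degree FOV):
print(detection_distance_m(h_pixels=1280, hfov_deg=43))   # ~34 m for a ~1-MP sensor
print(detection_distance_m(h_pixels=3840, hfov_deg=43))   # ~102 m for an ~8-MP sensor

# FOV effect (cf. the FLIR scenario): the same imager sees roughly 3x
# farther when a 50-degree lens is swapped for an 18-degree one.
print(detection_distance_m(640, 18) / detection_distance_m(640, 50))   # ~2.9
```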

Detection distance, however, is only half the equation. An average sedan traveling at 40 mph has a typical stopping distance of 36 meters. In the 1-MP sensor scenario described above, then, the vehicle would not have time to stop before reaching a detected pedestrian. Tweaks to pixel size, processing power, and lens parameters all help extend the 1-MP sensor's detection distance, and an 8-MP sensor would likely allow the car to stop with meters to spare, even at 70 mph.
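
The arithmetic behind that comparison is the standard two-term stopping model: the distance covered while the system (or driver) reacts, plus the kinematic braking distance v²/(2a). In the sketch below, the reaction time and deceleration are illustrative assumptions picked to be roughly consistent with the 36-meter figure, not measured vehicle data.

```python
def stopping_distance_m(speed_mph, reaction_time_s=0.7, decel_mps2=6.7):
    """Distance covered during the reaction (or system latency) interval plus
    the kinematic braking distance v^2 / (2a). The default reaction time and
    deceleration are illustrative assumptions, chosen to be roughly consistent
    with the 36 m figure commonly quoted for 40 mph."""
    v = speed_mph * 0.44704                    # mph -> m/s
    return v * reaction_time_s + v**2 / (2 * decel_mps2)

print(stopping_distance_m(40))   # ~36 m -- just beyond the ~34 m reach of the 1-MP sensor
print(stopping_distance_m(70))   # ~95 m -- inside the ~101 m reach of the 8-MP sensor
```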

Camera frame rate also factors into stopping distance. According to another Texas Instruments scenario, a camera capturing 10 fps would be sufficient for an ADAS to bring a vehicle traveling at 80 kilometers per hour to a full stop in 55 meters. Increasing the frame rate to 30 fps would enable the ADAS to stop the car in 45 meters at the same speed.
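
The frame-rate effect comes from perception latency: before braking can begin, the system has to accumulate enough frames to confirm a detection, and each of those frames costs 1/fps seconds of travel. The sketch below uses guessed values for the confirmation window, fixed processing latency, and deceleration, so it lands near, but not exactly on, TI's 55-meter and 45-meter figures.

```python
def stop_distance_at_kmh(speed_kmh, fps, frames_to_confirm=5,
                         other_latency_s=0.2, decel_mps2=7.0):
    """Stopping distance including perception latency. The frame count needed
    to confirm a detection, the fixed processing latency, and the deceleration
    are illustrative guesses, not parameters from the TI scenario; the point
    is only that latency scales with 1/fps."""
    v = speed_kmh / 3.6                        # km/h -> m/s
    latency_s = frames_to_confirm / fps + other_latency_s
    return v * latency_s + v**2 / (2 * decel_mps2)

print(stop_distance_at_kmh(80, fps=10))   # ~51 m
print(stop_distance_at_kmh(80, fps=30))   # ~43 m
```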

If calculating camera detection distance and frame rate were all that was required for ADAS design, autonomous vehicles might well have been on the road by 2018. But the most challenging technical hurdle involves processing and fusing data from multiple and disparate sensors and funneling it through AI algorithms quickly enough to enable an ADAS to make timely reactions in unpredictable environments.

A car equipped with 10 high-resolution cameras generates 2 gigapixels of data per second, according to NVIDIA. Processing that data through multiple deep neural networks (DNNs) requires an onboard computer to perform approximately 250 tera (trillion) operations per second (TOPS). What's more, the number and variety of ADAS sensors aboard vehicles are growing, and the decision trees stemming from this data are far more complex and mutable than those required for vision applications on the factory floor.
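
Those throughput figures can be sanity-checked with simple arithmetic, which also hints at why the compute budget balloons: even 250 TOPS leaves only on the order of a hundred thousand operations per pixel to be shared across every DNN in the stack. In the sketch below, the camera count comes from the quote, while the per-camera resolution and frame rate are assumptions chosen to land near the 2-gigapixel figure.

```python
# Back-of-the-envelope check on the data rate and compute budget quoted above.
cameras = 10                   # as quoted
megapixels_per_camera = 8      # assumed
frames_per_second = 30         # assumed

pixel_rate = cameras * megapixels_per_camera * 1e6 * frames_per_second
print(f"{pixel_rate / 1e9:.1f} gigapixels/s")              # 2.4 gigapixels/s

ops_budget = 250e12            # 250 TOPS, as quoted
print(f"{ops_budget / pixel_rate:,.0f} ops per pixel")     # ~104,167 operations per pixel
```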

“The industry has recognized that the complexity of computation is greater than initially anticipated,” says Danny Shapiro, senior director of automotive at NVIDIA. “The amount of software complexity keeps growing and, as such, we’ve released products that deliver up to 320 TOPS. Yet our customers want even more because of the need for all these different deep neural networks required to ensure the safety of the occupants and other road users.”

Worst Road Trip Ever
The biggest hurdle for autonomous cars may not be associated with hardware or software but rather with the “wetware” of the human brain. Nearly 75% of American drivers surveyed by the American Automobile Association last year reported that they would be afraid to ride in a fully autonomous vehicle.

Compiling safety data for self-driving cars is the obvious way to win skeptics over. But how? To achieve an apples-to-apples comparison with human-driven vehicles, autonomous cars would need to rack up hundreds of millions, if not billions, of road miles, according to a study by the RAND Corporation.
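
The scale of that requirement follows from a simple statistical bound: a fleet that drives N miles without a fatality can only claim, at 95% confidence, that its fatality rate is below roughly 3/N. The sketch below applies that bound to an approximate U.S. human fatality rate; the inputs are illustrative, but the answer lands in the hundreds of millions of miles, consistent with the RAND finding, and demonstrating a rate meaningfully better than human would push the total toward billions.

```python
import math

# How many failure-free test miles are needed to show, at a given confidence,
# that a fleet's fatality rate is below a human benchmark? The exact Poisson
# ("rule of three") bound gives: miles >= -ln(1 - confidence) / benchmark_rate.
# The benchmark below (~1.1 fatalities per 100 million miles) is an
# approximate U.S. figure, used here only for illustration.
benchmark_rate_per_mile = 1.1e-8
confidence = 0.95

miles_needed = -math.log(1 - confidence) / benchmark_rate_per_mile
print(f"{miles_needed / 1e6:.0f} million failure-free miles")   # ~272 million miles
```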

Fortunately, one benefit of the digital sensor and computing technologies underlying automation is that they can readily be tested in simulation. Everything an ADAS’s sensors, processors, and control software “sees” can be synthetically generated and manipulated to test the system’s reaction under the most adverse circumstances.

NVIDIA’s DRIVE Constellation system, for example, operates on two servers. The first feeds photorealistic simulation data directly to a self-driving vehicle’s camera, LIDAR, and radar sensors to create a wide range of testing environments and scenarios, including different weather and road conditions, day or nighttime operation, and even heavy traffic or an animal darting in front of the car. The second server contains NVIDIA’s powerful DRIVE AGX Pegasus AI car computer (the 320 TOPS product that Shapiro referenced) and processes the simulated data as if it were coming from the sensors of an actual car on the road. Combined, DRIVE Constellation’s two servers create a hardware-in-the-loop system that tests driving scenarios at a significantly accelerated rate.
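
Conceptually, the hardware-in-the-loop arrangement reduces to a tight loop between a scenario generator and the compute node under test, with repeatability coming from scripted scenarios and seeded randomness. The Python sketch below shows only that structure; every class and method name in it is hypothetical and bears no relation to NVIDIA's actual DRIVE Constellation interfaces.

```python
from dataclasses import dataclass
import random

# Toy illustration of the hardware-in-the-loop pattern described above: one
# side synthesizes sensor data for a scripted scenario, the other side (a
# stand-in for the in-vehicle computer) turns each frame into a control
# command, and the loop repeats far faster than real time. All names here
# are hypothetical; this is not NVIDIA's DRIVE Constellation API.

@dataclass
class SensorFrame:
    camera: bytes          # simulated image payload
    lidar: list            # simulated point cloud
    radar: list            # simulated radar returns

@dataclass
class ControlCommand:
    steering: float        # radians
    throttle: float        # 0..1
    brake: float           # 0..1

class ScenarioSimulator:
    """Synthesizes sensor frames for a scripted scenario and advances the
    simulated world in response to the commands it receives."""
    def __init__(self, scenario: str, seed: int = 0):
        self.scenario = scenario
        self.rng = random.Random(seed)    # seeded, so every run is repeatable
        self.time_s = 0.0

    def render_frame(self) -> SensorFrame:
        return SensorFrame(camera=b"...", lidar=[], radar=[])

    def step(self, cmd: ControlCommand, dt: float = 1 / 30):
        # A real simulator would apply cmd to a vehicle dynamics model here.
        self.time_s += dt

class VehicleComputerStub:
    """Stands in for the car computer running the driving stack in the loop."""
    def compute(self, frame: SensorFrame) -> ControlCommand:
        return ControlCommand(steering=0.0, throttle=0.2, brake=0.0)

sim = ScenarioSimulator("pedestrian_crossing_at_night")
ecu = VehicleComputerStub()
for _ in range(300):                      # 10 simulated seconds at 30 fps
    command = ecu.compute(sim.render_frame())
    sim.step(command)
print(f"'{sim.scenario}' ran for {sim.time_s:.1f} simulated seconds")
```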

“The challenge when you’re testing in the real world is that most of those miles are boring miles. Nothing happens,” says Shapiro. “What you can do in simulation is run the system through databases of dangerous scenarios and challenging conditions. Plus, the tests are repeatable. Finally, they can be augmented either with human-in-the-loop drivers inside the simulator or AI agents.”

Just as important, simulation allows automated driving data to be compiled in repeatable scenarios, which is critical to confirming that an autonomous vehicle design is improving over time.

From a vision supplier’s perspective, ADAS and autonomous vehicles represent, for once, more of an opportunity than a challenge. Yes, camera sensors, modules, and connectors will need to be ruggedized, power-efficient, low-cost, and optimized for speed. But the demands of the road do not require advanced imagers or substantial redesigns of chips or image processors. Most of the advances that will drive autonomous cars the last mile to market will address how sensor data can be fused, processed, and analyzed more quickly and cost-effectively. Simulated hardware-in-the-loop testing will also be a critical step. The good news is that several carmakers are already testing next-generation ADASes in simulation, on closed courses, and on city streets. Projections that L4 autonomous vehicles will be navigating streets and highways by the early 2020s are beginning to look more realistic than optimistic.