

Vision Systems Drive Auto Industry Toward Full Autonomy

POSTED 10/20/2017  | By: Winn Hardin, Contributing Editor

The race is on to make self-driving vehicles ready for the road. More than 2.7 million passenger cars and commercial vehicles equipped with partial automation are already in operation, enabled by a global automotive sensor market estimated to reach $25.56 billion by 2021. Among those sensors, cameras are expected to see the largest growth, approaching 400 million units by 2030.

Estimates about the arrival of fully autonomous vehicles vary depending on whom you ask. The research firm BCG expects that vehicles designated by SAE International as Level 4 high automation — in which the car makes decisions without the need for human intervention — will appear in the next five years. 

Meanwhile, most automotive manufacturers plan to make autonomous driving technology standard in their models within the next 2 to 15 years. Tesla, whose much-admired and much-criticized Autopilot system features eight cameras providing 360 degrees of visibility at up to 250 meters, hopes to reach Level 5 full autonomy in 2019.

Carmakers are building upon their advanced driver-assistance systems (ADAS), which include functions such as self-parking and blind-spot monitoring, as the foundation for developing self-driving cars. The core sensors that enable automated driving (camera, radar, lidar, and ultrasound) are well developed but continue to improve in size, cost, and operating distance.

The industry still must overcome other technological challenges, however. These include mastering the deep learning algorithms that help cars navigate the unpredictable conditions of public roadways and handling the heavy processing demands of the data those sensors generate. To carve a path toward full autonomy, automakers are turning to vision software companies as key partners.

Algorithms Get Smarter
The machine vision industry is no stranger to the outdoor environment, with years of experience developing hardware and software for intelligent transportation systems, automatic license plate readers, and border security applications. While such applications require sophisticated software that accounts for uncontrollable factors like fog and sun glare, self-driving vehicles encounter and process many more variables that differ in complexity and variety.

“Autonomous driving applications have little tolerance for error, so the algorithms must be robust,” says Jeff Bier, founder of the Embedded Vision Alliance, an industry partnership focused on helping companies incorporate computer vision into all types of systems. “To write an algorithm that tells the difference between a person and a tree, despite the range of variation in shapes, sizes, and lighting, with extremely high accuracy can be very difficult.”

But algorithms have reached a point where, on average, “they’re at least as good as humans at detecting important things,” Bier says. “This key advance has enabled the deployment of vision into vehicles.”

AImotive (Budapest, Hungary) is one software company bringing deep learning algorithms to fully autonomous vehicles. Its hardware-agnostic aiDrive platform uses neural networks to make decisions in any type of weather or driving condition. aiDrive comprises four engines. Recognition Engine uses camera images as the primary input. Location Engine supplements conventional map data with 3D landmark information, while Motion Engine takes the positioning and navigation output from Location Engine to predict movement patterns of surroundings. Finally, Control Engine controls the vehicle through low-level actuator commands such as steering and braking.
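The four-engine data flow described above can be pictured as a simple pipeline: camera input flows in one end, actuator commands come out the other. The sketch below is purely illustrative; the function names mirror aiDrive's engine names from the article, but every interface, data structure, and stubbed result here is an assumption, not AImotive's actual API.

```python
# Illustrative sketch of a four-stage autonomous-driving pipeline modeled on
# the engine names described above. All interfaces are hypothetical stubs.
from dataclasses import dataclass, field

@dataclass
class Frame:
    """Stand-in for a raw camera image plus capture timestamp."""
    pixels: list = field(default_factory=list)
    timestamp: float = 0.0

def recognition_engine(frame):
    """Detect objects in the camera image (stubbed detections)."""
    return {"objects": ["car", "pedestrian"], "timestamp": frame.timestamp}

def location_engine(detections, map_landmarks):
    """Fuse detections with 3D landmark map data to estimate a pose (stubbed)."""
    return {"pose": (12.0, 4.5), "objects": detections["objects"]}

def motion_engine(localized):
    """Predict movement patterns of surrounding objects (stubbed)."""
    return {"pose": localized["pose"],
            "predicted_paths": {obj: "ahead" for obj in localized["objects"]}}

def control_engine(motion):
    """Translate predictions into low-level actuator commands (stubbed)."""
    return {"steering_deg": 0.0,
            "brake": "pedestrian" in motion["predicted_paths"]}

# Chain the engines in the order the article describes them.
frame = Frame()
command = control_engine(
    motion_engine(location_engine(recognition_engine(frame), map_landmarks={})))
print(command)  # the stubbed "pedestrian" detection triggers braking
```

The point of the sketch is the separation of concerns: each stage consumes only the previous stage's output, so a sensor or map source can be swapped without touching the control logic.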

For an automated vehicle to make critical decisions based on massive volumes of real-time data coming from multiple sensors, processors have had to become more powerful computationally while consuming less operational power. Software suppliers in this space are developing specialized processor architectures “that easily yield factors of 10 to 100 times better efficiency to enable these complex algorithms to fit within the cost and power envelope of the application,” Bier says. “Just a few years ago, this degree of computational performance would have been considered supercomputer level.”

To make safe, accurate decisions, sensors need to process approximately 1 GB of data per second, according to Intel. Waymo, Google’s self-driving car project, is using the chipmaker’s technology in its driverless, camera-equipped Chrysler Pacifica minivans, which are currently shuttling passengers around Phoenix as part of a pilot project.
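Intel's roughly 1 GB/s figure is easy to sanity-check with back-of-envelope arithmetic. The resolution, bit depth, frame rate, and camera count below are illustrative assumptions (an eight-camera rig similar to the ones mentioned earlier), not Intel's published breakdown.

```python
# Rough sanity check of the ~1 GB/s sensor-data figure.
# All parameters are illustrative assumptions, not a vendor specification.
width, height = 1920, 1080   # pixels per frame
bytes_per_pixel = 2          # e.g., raw 16-bit sensor output
fps = 30                     # frames per second
cameras = 8                  # a multi-camera rig, as on some production vehicles

per_camera = width * height * bytes_per_pixel * fps  # bytes/s per camera
total = per_camera * cameras                         # bytes/s for the whole rig

print(f"{per_camera / 1e6:.0f} MB/s per camera")  # ~124 MB/s
print(f"{total / 1e9:.2f} GB/s total")            # ~1.00 GB/s
```

Even with these modest assumptions, raw camera data alone approaches the 1 GB/s mark, before radar, lidar, and ultrasound streams are added.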

However, the industry still needs to determine where the decision-making should occur. “In our discussions with manufacturers, there are two trains of thought as to what these systems will look like,” says Ed Goffin, Marketing Manager for Pleora Technologies (Kanata, Ontario). “One approach is analyzing the data and making a decision at the smart camera or sensor level, and the other is feeding that data back over a high-speed, low-latency network to a centralized processing system.”

Pleora’s video interface products already play in the latter space, particularly in image-based driver systems for military vehicles. “In a military situational awareness system, real-time high-bandwidth video is delivered from cameras and sensors to a central processor, where it is analyzed and then distributed to the driver or crew so they can take action or make decisions,” Goffin says. “Designers need to keep that processing intelligence protected inside the vehicle. Because cameras can be easily knocked off the vehicle or covered in dust or mud, they need to be easily replaceable in the field without interrupting the human decision-making process.”

Off the Beaten Path
While the self-driving passenger car dominates media coverage, other autonomous vehicle technology is quietly making a mark away from the highway. In September 2016, Volvo began testing its fully autonomous FMX truck 1,320 meters underground in a Swedish mine. Six sensors, including a camera, continuously monitor the vehicle’s surroundings, allowing it to avoid obstacles while navigating rough terrain within narrow tunnels.

Meanwhile, vision-guided vehicles (VGVs) from Seegrid (Pittsburgh, Pennsylvania) have logged more than 758,000 production miles in warehouses and factories. Unlike traditional automated guided vehicles, which rely on lasers, wires, magnets, or floor tape to operate, Seegrid's VGVs use multiple on-vehicle stereo cameras and vision software to map existing facility infrastructure and identify their location for navigation.

As Bier of the Embedded Vision Alliance points out, even the Roomba robotic vacuum cleaner, equipped with a camera and image-processing software, falls under the category of autonomous vehicles.

Whether operating in the factory or on the freeway, self-driving vehicles promise to transport goods and people in a safe, efficient manner. Debate persists over when fully autonomous cars will hit the road in the U.S. Even as the industry overcomes technical challenges, governmental safety regulations and customer acceptance will affect the timing of autonomous vehicles’ arrival.

In the meantime, automakers and tech companies continue to pour billions of dollars into research and development. Each week seems to bring a new announcement, acquisition, or milestone in the world of self-driving vehicles. And vision companies will be there for the journey.
