
On the Road with Machine Vision

POSTED 11/02/2011 | By: Winn Hardin, Contributing Editor

Drive a car down a road in a developed country, and you’re probably the unwitting star of your own TV show. Machine vision and imaging technology can be found along most major roadways in the U.S., Europe, and many parts of Asia and the Middle East to track traffic flows and notify emergency workers of accidents or other problems.

And as past articles have documented, the application of image processing to transportation data streams has helped automate license plate reading for toll, red light, and speeding enforcement, as well as the tracking and control of commercial trucks. In recent years, thanks to government stimulus funding, transportation has been a growth segment for the machine vision industry.

Success in public infrastructure projects helped machine vision’s image as well, educating the general population about the strengths and capabilities of the technology. This is helping to push machine vision technology further into the vehicles themselves, as evidenced by a renewed push for intelligent transportation systems (ITS) that use machine vision technology.

Safe Drivers

Machine vision technology was once considered a leading sensor technology to control advanced, two-stage airbag deployment. However, the component cost led automotive manufacturers to use less-expensive weight- and radar-based technology to sense occupant positions.

This small setback in a high-volume airbag application had little long-term negative impact on machine vision inside the car, however, as demonstrated by new safety systems such as Mercedes’ eye-tracking and driver-alertness systems. These systems use low-resolution cameras and image-processing algorithms to isolate eye position, size, and blink rate to calculate the average “alertness” of the driver. Blink too slowly, or let your eyes nearly close, and the car will tell you to wake up and watch the road – in a nicer tone, of course.
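
To make the idea concrete, here is a minimal sketch of the kind of blink-rate logic such a system might apply once a face tracker has located the eyes. The landmark ordering, threshold, and frame count are illustrative assumptions, not details of Mercedes’ system:

```python
import numpy as np

# Illustrative sketch of drowsiness detection from eye landmarks.
# Assumes an upstream face tracker supplies six (x, y) points per eye,
# ordered around the eye contour (as in common 68-point landmark schemes).

EAR_THRESHOLD = 0.2       # eye considered "closed" below this ratio (tunable)
CLOSED_FRAMES_ALARM = 48  # ~1.6 s at 30 fps before warning the driver

def eye_aspect_ratio(eye):
    """Ratio of eye height to width; drops toward 0 as the eye closes."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance, inner pair
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance, outer pair
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

closed_frames = 0

def update(left_eye, right_eye):
    """Call once per video frame; returns True when the driver looks drowsy."""
    global closed_frames
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    closed_frames = closed_frames + 1 if ear < EAR_THRESHOLD else 0
    return closed_frames >= CLOSED_FRAMES_ALARM

# Synthetic example: an open eye is "tall" relative to its width.
open_eye = [(0, 2), (2, 0), (4, 0), (6, 2), (4, 4), (2, 4)]
print(eye_aspect_ratio(open_eye))  # ~0.67, comfortably above the threshold
```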

MIT’s AgeLab and the Volpe National Transportation Systems Center – both of which work in conjunction with the U.S. Department of Transportation – are furthering this work, according to Adil Shafi, President of Advenovation Inc. (Brighton, Michigan). They are using imaging and machine vision technology to track physical cues in older driver populations that may indicate a loss of attention or confusion, along with lane-departure alarms and other imaging-based automotive safety systems. With aging driver populations across most of the developed world, the machine vision industry can expect more of these imaging-based safety systems to attract the attention of automotive OEMs and Tier 1 suppliers.

ITS and You

While an airbag can be controlled with a weight/resistance sensor and low-cost ranging radar, a more advanced ITS element such as autonomous vehicles requires a layered sensor approach to accommodate changing environments and to make these systems more robust in an unpredictable world.

DARPA’s Grand Challenge has been the poster child of autonomous vehicle technology development, but recently, Google has upped the ante with its Google Driverless Car, alongside other projects such as Italy’s University of Parma’s VisLab.

“Laser and ranging techniques such as LIDAR or Microsoft’s Kinect are often used to track and follow moving objects,” explains Shafi. In autonomous cars, a combination of sensors, or “sensor fusion,” is key to success. In these solutions, a supervisory program determines what kind of environment the car is in; based on that, an appropriate sensing technique is used for computation, interpretation, and follow-on action.
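
The supervisory idea can be sketched in a few lines. The environment classes, sensor names, and trust weights below are purely hypothetical illustrations of the pattern Shafi describes, not any production system’s logic:

```python
# Hypothetical sketch of a "supervisory program": classify the driving
# environment coarsely, then weight each sensing modality accordingly.

def classify_environment(gps_speed_mps, ambient_lux, gps_fix_ok):
    if not gps_fix_ok:
        return "urban_canyon"      # GPS degraded: lean on vision + LIDAR
    if ambient_lux < 10:
        return "night"             # low light: lean on LIDAR/radar
    if gps_speed_mps > 25:
        return "highway"           # high speed: radar ranging dominates
    return "daytime_surface"

SENSOR_WEIGHTS = {
    # environment: (camera, lidar, radar) trust weights, summing to 1.0
    "urban_canyon":    (0.5, 0.4, 0.1),
    "night":           (0.1, 0.5, 0.4),
    "highway":         (0.2, 0.3, 0.5),
    "daytime_surface": (0.5, 0.3, 0.2),
}

def fuse_range_estimates(env, camera_m, lidar_m, radar_m):
    """Weighted blend of per-sensor range estimates to a tracked object."""
    wc, wl, wr = SENSOR_WEIGHTS[env]
    return wc * camera_m + wl * lidar_m + wr * radar_m

env = classify_environment(gps_speed_mps=30.0, ambient_lux=5000.0, gps_fix_ok=True)
print(env, fuse_range_estimates(env, camera_m=42.0, lidar_m=40.5, radar_m=41.2))
```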

The “Google Car” was developed by Sebastian Thrun, director of Stanford University’s Artificial Intelligence Laboratory and co-inventor of Google’s “Street View” product, which combines visual images with GPS and map coordinates to show search engine users what a store or location looks like, along with location information and driving directions. The car pairs information gathered from Google Street View with artificial intelligence software that fuses input from video cameras inside the car, a LIDAR sensor on top of the vehicle, radar sensors on the front, and a position sensor attached to one of the rear wheels that helps locate the car’s position on the map. The car must first be driven along a particular route so that it can image and map that route. Subsequent trips can be driven autonomously, using the various sensors to react to changes along the route, such as parked cars, pedestrians, and other random events.

Recently, Google petitioned the Nevada state government to change its laws to allow autonomous vehicles on public roadways. Google chose Nevada as its new test bed because the state offers plenty of relatively empty roads. So far, the Google Car has driven more than 140,000 autonomous miles.

The University of Parma’s VisLab autonomous vehicle used seven cameras, including stereovision and panoramic stitching with image analysis software, to drive the minivan 13,000 miles from Parma, Italy, to Shanghai in 2010. Photo courtesy of VisLab.

VisLab’s autonomous vehicle drove thousands of miles last year from Parma, Italy, to Shanghai, China. Unlike the Google Car, which uses maps of the route, VisLab’s car had to travel through undeveloped, relatively unmapped areas in Mongolia and Kazakhstan, requiring the vehicle to use vision sensors to follow a lead vehicle, as well as occasionally depend on human intervention.

VisLab’s minivans use seven cameras, four laser scanners, a GPS unit, and an inertial sensor suite. A pair of stereovision cameras mounted above the windshield identifies lane markings and the terrain slope. Three synchronized cameras behind the windshield stitch their images into a 180-degree panoramic frontal view. The laser scanners – three mono-beam units and one four-plane scanner – detect obstacles, pedestrians, and other vehicles, along with ditches and bumps on the road.
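
As a rough illustration of the stitching step (not VisLab’s actual pipeline), OpenCV’s high-level Stitcher can merge overlapping, synchronized camera frames into a single wide view; the file names here are placeholders:

```python
import cv2

# Minimal sketch: merge three overlapping, synchronized forward camera
# frames into one wide panoramic image. File names are placeholders.
frames = [cv2.imread(f) for f in ("cam_left.png", "cam_center.png", "cam_right.png")]

stitcher = cv2.Stitcher_create()           # feature-based panorama stitcher
status, panorama = stitcher.stitch(frames)

if status == 0:                            # 0 == Stitcher.OK
    cv2.imwrite("panorama_180.png", panorama)
else:
    # Typical failure cause: too little overlap between adjacent views.
    print("stitching failed, status code:", status)
```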

Each vehicle also carries three computers. One computer is responsible for processing images and data from the front of the vehicle. Another computer handles data for the sides. The third integrates all the data and plans a path, which in turn triggers low-level controls for steering, accelerating, and braking the vehicle. Solar panels on top of the vans power the electronics.

A key piece of software processes the 180-degree frontal view. It takes the large panoramic image and identifies the lead van, even when approaching a tight turn or steep hill, and it can also detect road markings, pedestrians, and obstacles in general.
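
VisLab’s detector itself is proprietary, but the general pattern of scanning a wide frontal image for pedestrians can be illustrated with OpenCV’s stock HOG-plus-SVM people detector; the input file name is a placeholder:

```python
import cv2

# Illustrative only: scan a wide frontal image for pedestrians using
# OpenCV's built-in HOG descriptor with its default people detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("panorama_180.png")     # placeholder input image
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

# Draw one rectangle per detection and save the annotated frame.
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.png", frame)
```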

Future projects may have an easier road to drive thanks to new standards and autonomous vehicle communities, as well as open source software offerings such as OpenSLAM. SLAM stands for Simultaneous Localization and Mapping; it helps an autonomous vehicle or robot understand its surroundings by updating stored maps based on the robot’s physical location, constantly answering two questions: Where am I (localization), and what is around me (mapping)?
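
A toy one-dimensional version of that predict/correct loop, shown below, conveys the idea; real SLAM packages (such as those hosted on OpenSLAM) work in two or three dimensions with far more sophisticated models, and all of the probabilities here are illustrative:

```python
import numpy as np

# Toy 1-D SLAM loop: maintain a belief over position (localization)
# while sensor readings refine a landmark map (mapping).
np.random.seed(0)
TRUE_MAP = np.array([1, 0, 0, 1, 0, 1, 0, 0, 0, 1])  # 1 = landmark in cell

belief = np.full(len(TRUE_MAP), 1.0 / len(TRUE_MAP))  # uniform position belief
map_est = np.full(len(TRUE_MAP), 0.5)                 # P(landmark) per cell

for step in range(20):
    # Predict: move one cell forward, with some odometry slip.
    belief = 0.8 * np.roll(belief, 1) + 0.2 * belief

    # Sense: noisy landmark detector at the (hidden) true position.
    true_pos = step % len(TRUE_MAP)
    z = TRUE_MAP[true_pos] if np.random.rand() < 0.9 else 1 - TRUE_MAP[true_pos]

    # Correct localization: favor cells whose map agrees with the reading.
    likelihood = np.where(map_est > 0.5, z, 1 - z) * 0.8 + 0.1
    belief = belief * likelihood
    belief /= belief.sum()

    # Update the map at the most likely position.
    i = int(np.argmax(belief))
    map_est[i] = 0.9 * map_est[i] + 0.1 * z

print("position belief:", np.round(belief, 2))
print("estimated map:  ", np.round(map_est, 2))
```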

Pundits have said the machine vision industry isn’t an industry, per se, but rather an enabling technology that helps other industries. But as machine vision changes more core elements of our lives – phones, cars, entertainment, safety, and now transportation – that point of view must come under increasing scrutiny. Was the combustion engine merely an enabling technology for transportation because it has also been used in everything from electrical generators to lawn mowers?

One thing seems clear, particularly when viewed through the lens of the transportation industry: Machine vision adoption is accelerating at an unprecedented pace. The industry has entered the magical part of the technology adoption curve where coincidence and genius combine in unexpected ways, often changing everything around them. While no one really knows what the machine vision industry will look like in 5 or 10 years, the safe bet is that it’s going to be big.
