
Sensor Fusion: Got Standards?

POSTED 08/07/2012 | By: Winn Hardin, Contributing Editor

[Figure: The Three Levels of Sensor Fusion]

What do tanks and aircraft carriers, the Long Beach Port Authority and a dirt airstrip in Afghanistan, and Google’s autonomous car and ABB’s new HMI for hazardous production environments have in common? Each installation depends on a sensor network to survive.

Printing presses, medical imaging, security, and most definitely military applications all rely on computation systems and automated decisions built on heterogeneous sensing networks. Automation giant ABB recently announced new human-machine interfaces (HMIs) under development for hazardous production environments that combine camera-based body and gesture tracking with other discrete sensors to control production equipment. But while the machine vision industry has a clear understanding of how to combine the outputs from three separate CCD or CMOS chips to create a “color” image and then use that data in a meaningful way, not all sensor fusion applications are so straightforward.
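
The three-chip color case can be made concrete with a short sketch. The frame sizes and random data below are purely illustrative, and NumPy stands in for whatever acquisition SDK a real camera would use:

```python
import numpy as np

# Hypothetical outputs of a 3-chip camera: one monochrome frame per
# CCD/CMOS sensor behind the red, green, and blue prism paths.
height, width = 480, 640
red = np.random.randint(0, 256, (height, width), dtype=np.uint8)
green = np.random.randint(0, 256, (height, width), dtype=np.uint8)
blue = np.random.randint(0, 256, (height, width), dtype=np.uint8)

# Because the three sensors are optically co-registered, "fusing" them
# is a simple channel stack; downstream algorithms then treat the
# result as a single color image.
color = np.dstack((red, green, blue))
```

Fusing heterogeneous sensors is harder precisely because this kind of pixel-for-pixel alignment and common data type cannot be assumed.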

If only there were a standard for developing sensor fusion applications.

As it turns out, there are many standardized sensor fusion “methods” that potentially could help machine vision companies prepare for future industrial imaging market opportunities. Hundreds of these methods are defined by military and remote-sensing systems with very specific I/O, data, and computational requirements. But there are a few frameworks that provide guidance beyond a specific platform, such as simultaneous localization and mapping (SLAM) in mobile robotics (see sidebar), Tactical Automated Security Systems (TASS) in military and security applications, and Open Geospatial Consortium’s (OGC) Sensor Web Enablement (SWE) framework of standards for all kinds of sensor fusion applications.

More for Less Drives Sensor Fusion

Although there is no “sensor fusion” market to analyze, a growing number of multi-sensor devices points to strong commercial momentum behind highly integrated sensor packages.

For example, a survey released at the 2012 Sensors Expo from Yole Développement predicted that discrete microelectromechanical systems (MEMS) unit sales would decline during the next five years. However, the decline will be more than offset by growth in combo sensors that combine multiple discrete sensors into a single unit, using new, dedicated microprocessors that can process multiple data streams without having to burden a system’s central processing unit (CPU).

Closer to the machine vision market, the Khronos Group’s StreamInput Working Group is developing ways to combine multiple cameras into a fused imaging sensor, or to combine camera data with other sensor and I/O devices. Khronos specializes in graphics processing unit (GPU) technology. The consortium’s StreamInput Working Group hopes to develop silicon-based sensor fusion for cameras, accelerometers, and gyros, as well as traditional touchscreens, joysticks, and keyboards, for a wide variety of applications in entertainment, industry, and the military. Today, Intel, Freescale Semiconductor, Motorola, Broadcom, Sony, ST, and Nvidia are all part of the StreamInput Working Group.
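
StreamInput itself had not published an API at the time of writing, so the following is only a generic sketch of the kind of computation such a combo-sensor hub offloads from the CPU: a complementary filter blending a drifting (simulated) gyroscope rate with a noisy but drift-free accelerometer angle. All names and constants here are illustrative assumptions, not part of any Khronos specification:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate with an accelerometer-derived angle.

    The gyro integrates smoothly but drifts over time; the accelerometer
    is noisy but drift-free. Blending the two is one of the simplest
    forms of sensor fusion a dedicated sensor hub can perform.
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Simulated samples: a device held at a constant 10-degree tilt, with a
# gyro that reports a small spurious drift rate of 0.5 deg/s.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.5, accel_angle=10.0, dt=0.01)
# The fused estimate settles near the true 10-degree tilt despite the drift.
```

A hardware hub running this loop at the sensor’s native sample rate is exactly the “process multiple data streams without burdening the CPU” arrangement described above.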

Don’t Get All Hyper

Khronos’ solution could help companies like FluxData Inc., a multispectral imaging specialist that has used a number of specialized cameras from Imaging Solutions Group (Fairport, New York) in its products.

“Booz Allen [Hamilton] called about five years ago asking if there were standards or conversations happening that would lead to standards for multispectral and hyperspectral imaging. The answer was ‘no,’ but wouldn’t it be great if there were,” recalls Pano Spiliotis, CEO of FluxData Inc. “SPIE [The International Society for Optics and Photonics] has done some system-level work in this area, but no real standards have emerged.”

OGC’s SWE framework could help provide some answers for companies like FluxData. OGC standards in the SWE framework include: 

  • Observations & Measurements (O&M) - General models and XML encodings for observations and measurements.
  • Sensor Model Language (SensorML) - Standard models and an XML schema for describing the processes within sensors and observation processing systems.
  • PUCK - A protocol for retrieving a SensorML description, sensor "driver" code, and other information from the device itself, enabling automatic sensor installation, configuration, and operation.
  • Sensor Observation Service (SOS) - An open interface for a web service that provides observations and sensor and platform descriptions from one or more sensors.
  • Sensor Planning Service (SPS) - An open interface for a web service by which a client can 1) determine the feasibility of collecting data from one or more sensors or models and 2) submit collection requests.
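
As a rough sketch of how a client might exercise the SOS interface, the snippet below assembles a key-value-pair GetObservation request. The parameter names follow the OGC SOS standard, while the endpoint URL, offering, and observed-property values are hypothetical placeholders:

```python
from urllib.parse import urlencode

# Hypothetical SOS endpoint; real deployments publish their own URLs.
endpoint = "https://sensors.example.org/sos"

# Key-value-pair form of an SOS GetObservation request. The parameter
# names come from the OGC SOS standard; the values are illustrative.
params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "thermal_camera_1",
    "observedProperty": "http://example.org/properties/temperature",
}
url = endpoint + "?" + urlencode(params)
# A client would fetch `url` and parse the returned O&M XML document,
# regardless of whether the sensor behind it is a camera or a thermistor.
```

The same request shape works for any sensor modality the service exposes, which is precisely the interoperability argument made below.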

When combined, these standards serve many of the same purposes as the GenICam standard’s modules (GenApi, the Standard Feature Naming Convention (SFNC), and GenTL), while opening up interoperability to all types of sensor modalities in addition to electro-optical sensors.

Standards like GenICam and the SWE framework offer machine vision companies a way to expand interoperability and prepare for a more connected, more integrated world. For companies looking to stay ahead of the curve when it comes to delivering a competitive edge through highly integrated solutions, a look at the sensor fusion standards under development today could be the key to new business tomorrow.
