Industry Insights
Sensor Fusion: Good for Humans, Hard on Systems
POSTED 01/17/2003 | By: Winn Hardin, Contributing Editor
Say "sensor fusion" to an imaging engineer and he or she is likely to nod in understanding. Ask for a definition and you are likely to get a quick response along the lines of "fusing more than one sensor together." Ask for an example, however, and the silence may stretch a bit longer.
The truth is, machine vision has long been a proponent of sensor fusion. Strictly speaking, a stereoscopic system that uses a pair of cameras to make 3D measurements is an example of sensor fusion. Unlike a system that uses a photoelectric sensor merely to trigger a camera (two sensors, but no fusion), the 3D system registers data from the two sensors to a common frame of reference and then extracts depth from the differences between the two data sets, information that neither camera could provide on its own. Combining these data streams represents a "fusing" of the two sensors.
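To make the stereo example concrete, depth falls out of the disparity between matched pixels once the two cameras are calibrated and registered to a common frame. The following is a minimal Python sketch, not drawn from any system described in this article; the focal length, baseline and pixel coordinates are hypothetical values chosen only for illustration.

```python
# Minimal stereo depth sketch: two cameras, one common frame of reference.
# Assumes a rectified pair with known focal length (pixels) and baseline (meters).

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Triangulate range (meters) from the horizontal disparity between
    matching pixels in the left and right images."""
    disparity = x_left - x_right          # pixels; larger for closer objects
    if disparity <= 0:
        raise ValueError("Matched point must have positive disparity")
    return focal_px * baseline_m / disparity

# Hypothetical values: ~800 px focal length, cameras 120 mm apart,
# matched feature at x = 412 (left image) and x = 396 (right image).
z = depth_from_disparity(412.0, 396.0, 800.0, 0.12)
print(f"Estimated range: {z:.2f} m")      # neither camera alone can give this
```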
Belur Dasarathy, president of Information Technology Consultants (Huntsville, AL), a renowned expert on sensor and information fusion and chair of several international conferences on the subject, adds, "Sensor fusion is more relevant and important when sensors are of a different type, where you have information that is more complementary in nature. You want to exploit the synergy of the different sensor types."
Military leads sensor fusion
According to Dasarathy, the military and biomedicine are the two largest disciplines using sensor fusion today, although industrial applications are emerging. The military has long benefited from sensor overlays. Unmanned aerial vehicles (UAVs) such as the Predator use infrared and visible sensors to help detect camouflaged targets, while companies like Harsh Environment Applied Technologies Inc. (Annapolis, MD) combine various IR sensors to create better night vision systems for soldiers (FusionWarrior), motorized vehicles (ArmorVision) and ships (MarineDefender).
In most military applications, as with sensor fusion applications in general, acquiring data from more than one sensor is only one element that shapes the system architecture. Beyond simply providing more than one camera port on a PC or processor, sensor fusion systems must register the data streams to each other, sometimes using a third data set as a reference. For instance, Sarnoff Corporation's (Princeton, NJ) UAV visualization system, called the Video Information Capability Enhancement (VICE) system, uses a dual-Pentium server-class workstation with special image processing boards to register both IR and visible images to a stored topographic digital image database. Because the image streams represent the real world, the position of the camera relative to the scene must first be determined from altimeter and GPS data. The perspective of the scene must then be normalized among the three data streams so that, for example, the IR, visible and digitally stored images of the same hill all look alike. This process often relies on neural network, fuzzy logic or genetic algorithms that are computationally intensive. In response, Sarnoff has created a frame grabber with application-specific integrated circuits (ASICs) capable of 80 GOPS to handle the 3D aspects of registering data from different sources.
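One conceptual step in that registration, once a perspective transform between two sensors has been estimated, is to warp one modality into the other's frame so the pixels line up. The sketch below is a rough, generic illustration of that step, not Sarnoff's VICE implementation; the 3x3 homography, image sizes and random stand-in imagery are all invented for the example.

```python
import numpy as np

def warp_homography(src, H, out_shape):
    """Warp a grayscale image into a reference frame using a 3x3 homography.
    Inverse mapping with nearest-neighbor sampling keeps the example short."""
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    ones = np.ones_like(xs)
    # Map every output pixel back into the source image.
    pts = np.stack([xs, ys, ones], axis=-1).reshape(-1, 3).T   # 3 x N
    src_pts = np.linalg.inv(H) @ pts
    src_pts = src_pts / src_pts[2]                              # perspective divide
    sx = np.round(src_pts[0]).astype(int).reshape(h_out, w_out)
    sy = np.round(src_pts[1]).astype(int).reshape(h_out, w_out)
    valid = (sx >= 0) & (sx < src.shape[1]) & (sy >= 0) & (sy < src.shape[0])
    out = np.zeros(out_shape, dtype=src.dtype)
    out[valid] = src[sy[valid], sx[valid]]
    return out

# Hypothetical transform: the IR frame is shifted and slightly rotated relative
# to the visible frame; in practice H would come from camera pose (GPS/altimeter).
theta = np.deg2rad(2.0)
H = np.array([[np.cos(theta), -np.sin(theta), 12.0],
              [np.sin(theta),  np.cos(theta), -5.0],
              [0.0,            0.0,            1.0]])
ir_frame = np.random.rand(480, 640).astype(np.float32)   # stand-in for real imagery
ir_in_visible_frame = warp_homography(ir_frame, H, (480, 640))
```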
Like Sarnoff, Datacube (Danvers, MA) also had to use on-board processing engines to accommodate the intense processing requirements of military systems that use multiple sensors. Whenever possible, Datacube offers field programmable gate array (FPGA) modules in place of ASICs to make future system upgrades easier. Today's FPGA modules can also be designed to handle some processes that are not widely available in ASICs, such as Fast Fourier Transforms (FFTs). "All our products are built to standard commercial parameters, but can be incorporated into rugged chassis for military applications. There's a big movement in the military to go commercial off the shelf (COTS)," said Jeremy Riecks, aerospace and military systems sales manager at Datacube. Modular approaches, built on FPGAs, ASICs and software libraries that can accommodate slightly different read routines for Camera Link-compatible cameras, for example, are necessary for today's sensor fusion applications, which demand some of the highest processing capability at the lowest possible cost.
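On the software side, one way to accommodate those slightly different read routines is to hide each camera's quirks behind a common interface so the fusion code downstream never changes. The classes below are a purely hypothetical Python sketch of that pattern; the `grabber` objects and their `acquire_taps` / `acquire_frame` calls are invented placeholders, not Datacube library functions.

```python
from abc import ABC, abstractmethod
import numpy as np

class CameraReader(ABC):
    """Common interface so an application can swap cameras whose read
    routines differ slightly without touching downstream fusion code."""

    @abstractmethod
    def read_frame(self) -> np.ndarray:
        ...

class TapInterleavedReader(CameraReader):
    """Hypothetical camera that delivers two taps that must be interleaved."""
    def __init__(self, grabber):
        self.grabber = grabber

    def read_frame(self) -> np.ndarray:
        tap_a, tap_b = self.grabber.acquire_taps()      # assumed grabber call
        frame = np.empty((tap_a.shape[0], tap_a.shape[1] * 2), dtype=tap_a.dtype)
        frame[:, 0::2] = tap_a                          # reassemble even columns
        frame[:, 1::2] = tap_b                          # reassemble odd columns
        return frame

class SingleTapReader(CameraReader):
    """Hypothetical camera that delivers whole frames directly."""
    def __init__(self, grabber):
        self.grabber = grabber

    def read_frame(self) -> np.ndarray:
        return self.grabber.acquire_frame()             # assumed grabber call
```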
Medicine and Industry
While the military is combining different portions of the spectrum, biomedicine is combining imaging modalities such as Computed Tomography (CT) with Positron Emission Tomography (PET) and other non-invasive imaging techniques to expand the physician's ability to detect disease. As in military applications, fusing different medical imaging modalities requires high-end processing well beyond the sum of the two individual operations.
Although 3D rendering software for medical imaging does exist for PC-level systems, most modalities, including those that combine different modalities, require a Unix-based system, a server-class workstation or a Silicon Graphics workstation. As in the military examples, both the CT and PET scans must be registered to a common ground: the patient's anatomy. CT scanners, like GE's LightSpeed, take thousands of high-resolution pictures in a single scan, dramatically increasing the power needed for image processing. Software for these systems is no cheaper than the processing hardware. Because of the specific needs of radiologists and the size and complexity of the images, programs costing $30,000 and up are not uncommon for sensor fusion applications in biomedicine.
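Registering CT to PET usually means searching for the transform that best aligns the two volumes, and because the modalities report completely different intensities, alignment is often scored with a statistical measure such as mutual information rather than a direct pixel difference. The snippet below is a simplified 2D numpy sketch of that metric, not any vendor's registration algorithm; the synthetic test data stands in for real slices.

```python
import numpy as np

def mutual_information(slice_a, slice_b, bins=32):
    """Mutual information between two co-sized image slices of the same anatomy.
    Higher values indicate better alignment, even though CT and PET
    intensities are on completely different scales."""
    joint, _, _ = np.histogram2d(slice_a.ravel(), slice_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy check with synthetic data: a slice scores higher against itself
# than against a shifted copy, mimicking a misregistered pair.
rng = np.random.default_rng(0)
ct = rng.random((128, 128))
print(mutual_information(ct, ct))                  # aligned
print(mutual_information(ct, np.roll(ct, 8, 1)))   # shifted 8 pixels
```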
Unlike the military and medicine, industry does not necessarily require wide-area, real-time images of the highest resolution. Most instances of industrial "sensor fusion" involve the use of a camera with one or more other sensor technologies: ultrasound, capacitance, gap or photoelectric sensors, for instance. Emerging industrial applications therefore use PC hosts whenever possible, although the application is still likely to require a powerful PC with an additional image processing engine. Hot examples can be found in intelligent transportation systems (ITS) and non-destructive testing.
Government agencies, automobile manufacturers and universities have been developing ITS auto-guidance systems for both cars and robots for the past decade. In the case of cars, laser radar is the sensor of choice in good weather because of its relatively cheap light sources, very fast scan times and absolute measurements. When it rains, however, both visible diode lasers and IR lasers are absorbed. Millimeter-wave and traditional radar systems are more expensive, but penetrate bad weather far better than visible automated imaging systems. Researchers at MIT (Cambridge, MA) and automobile manufacturers in Japan have combined the two sensors into a complementary system that runs on a single PC. Similar designs can be found in agencies and universities around the world.
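A simple way to express that complementarity is to weight each sensor's range estimate by how much it can be trusted under current conditions, down-weighting the laser when it rains. The snippet below is a generic inverse-variance fusion sketch, not the MIT or Japanese implementation; the noise figures are invented for illustration.

```python
def fuse_ranges(lidar_range_m, lidar_sigma_m, radar_range_m, radar_sigma_m):
    """Inverse-variance weighting of two range estimates of the same target.
    The noisier sensor (larger sigma) contributes less to the fused value."""
    w_lidar = 1.0 / lidar_sigma_m ** 2
    w_radar = 1.0 / radar_sigma_m ** 2
    fused = (w_lidar * lidar_range_m + w_radar * radar_range_m) / (w_lidar + w_radar)
    fused_sigma = (w_lidar + w_radar) ** -0.5
    return fused, fused_sigma

# Clear weather: the laser radar is precise, so it dominates the estimate.
print(fuse_ranges(42.3, 0.1, 41.0, 1.0))
# Heavy rain: laser returns degrade, so the millimeter-wave radar takes over.
print(fuse_ranges(44.0, 2.5, 41.2, 1.0))
```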
Similarly, visible and IR imagers have been combined for sentry robots such as ROBART III, developed by H.R. Everett at the Space and Naval Warfare Systems Center in San Diego, CA. In this case, the sensors are used both for surveillance and for control of an air-powered Gatling gun defending armed forces warehouses and storage facilities day and night, when ambient light may not be available.
On the manufacturing floor, researchers have developed systems to inspect carbon fiber composites. Xavier Gros at the Institute for Energy in The Netherlands is one example. Gros, who has written a book on NDT sensor fusion, has developed a system that combines eddy current sensing with IR imaging to inspect the integrity of composite materials.
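At the decision level, this kind of NDT fusion can be as simple as registering the eddy current and thermographic indications to the same coordinate grid and flagging regions that either sensor, or both, consider suspect. The sketch below is a generic illustration of that idea, not Gros's published method; the thresholds, array sizes and random stand-in data are arbitrary.

```python
import numpy as np

# Stand-ins for co-registered inspection maps over the same composite panel:
# eddy current response and IR (thermographic) contrast, both normalized 0..1.
rng = np.random.default_rng(1)
eddy_map = rng.random((64, 64))
ir_map = rng.random((64, 64))

eddy_flags = eddy_map > 0.95          # arbitrary per-sensor thresholds
ir_flags = ir_map > 0.95

either = eddy_flags | ir_flags        # sensitive: flag if either sensor fires
both = eddy_flags & ir_flags          # conservative: require agreement

print(f"Regions flagged by either sensor: {int(either.sum())}")
print(f"Regions confirmed by both sensors: {int(both.sum())}")
```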
The ubiquity of sensors in automated inspection cannot be overstated, as evidenced by this sampling of applications that use more than one method to inspect, direct or assist machine-driven solutions. As these systems seek to mirror or even enhance human vision, the industry can expect more sensor fusion rather than less. People use more than one sense (eyes, ears, touch) to navigate the world, and so will machines. As one anonymous author put it, "It is easier to teach a robot to play chess than it is to teach it to move the chess pieces around the board." As development continues in sensor fusion applications in the military, medicine and industry, system designers will continue to wrestle with questions about when, where and why multisensor approaches can and should be used, and what architecture, acquisition methods and algorithms will result in the most effective automated inspection system.