Industry Insights
Vision Helps Protect, Train Military
POSTED 11/19/2008 | By: Winn Hardin, Contributing Editor
Machine vision is just one of the many industries that have benefited from military development and funding. Many of the image processing algorithms and non-visible imaging hardware (infrared, multispectral, etc.) used in today’s industrial vision applications are direct descendants of military R&D, funded through budgets such as the U.S. Department of Defense’s Research, Development, Test & Evaluation (RDT&E) accounts.
Military machine vision applications are also excellent examples of convergence among imaging systems, embedded electronics, and a growing variety of image and signal processors used for machine vision tasks – similar to the ‘smart camera’ trend prevalent in industrial vision applications. This convergence of vision with a growing variety of computational components is helping the military to automatically identify targets and improve battlefield awareness*, secure populaces and installations against terrorist threats through robust biometrics, improve man-machine interactions, and train new warriors in safe, controlled environments.
Vision Connects Imaging Modalities
Infrared and low-light sensitive imaging modalities are the most widely used imaging systems in military applications, measured by units in the field – thanks to night-vision gun sights, binoculars, and similar nighttime military operation systems. The majority of these applications do not use image-processing algorithms and are therefore not true machine vision applications. Even so, machine vision has benefited greatly from the development of uncooled IR imagers: much as multicore processors now speed today’s cutting-edge image processing algorithms, the vision industry found ways to leverage technologies developed for non-machine-vision applications.
While an infantryman may not have a geometric pattern search algorithm running on his M16’s thermal sight, machine vision algorithms are helping commanders and security personnel with battlefield and situational awareness, namely by aligning and overlaying multiple imaging modalities to common global coordinate systems (see archived feature article on data fusion). In these applications, search algorithms find common features and structural patterns (buildings, fences, etc.) in each imaging stream, and then align the imagery to a common global coordinate system. This gives commanders and security personnel multiple ways to confirm a potential threat – for instance, by using visible imagery to confirm that a heat source is indeed a man rather than a dog.
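To make the alignment step concrete, the following is a minimal sketch of feature-based registration between two modalities using OpenCV – an illustrative choice, since the article does not name the software actually deployed. Real cross-modal registration (visible versus thermal) is harder than this sketch suggests, because descriptors such as ORB respond differently in each band; fielded systems often rely on edge- or mutual-information-based methods instead.

```python
# Hypothetical sketch: warping a 'moving' image (e.g., thermal) into the
# coordinate frame of a 'reference' image (e.g., visible) by matching
# shared structural features and fitting a homography with RANSAC.
import cv2
import numpy as np

def align_to_reference(reference, moving):
    """Return 'moving' resampled into the pixel frame of 'reference'."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(reference, None)
    kp2, des2 = orb.detectAndCompute(moving, None)

    # Match descriptors; keep only the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC discards mismatches caused by modality differences.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))
```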
Battlefield Biometrics
Near infrared (NIR) imaging, combined with wavelet compression and correlation algorithms for high-quality matches, is also helping the U.S. Army to perform the first post-Saddam census and protect against insurgent threats in war zones. Today, the infantry are using 6,700 Handheld Interagency Identity Detection Equipment (HIIDE) biometric systems to collect iris biometric information from local nationals in Afghanistan and Iraq. Mathematical representations of key iris features are then compared against stored watch lists of insurgents, enemy combatants, and terrorists. NIR sensors are particularly useful because the eye is covered in a thin layer of water that makes the surface highly reflective, while the iris pattern is actually located a few millimeters below the surface of the eye. Cameras optimized for the NIR greatly improve iris biometric acquisition because IR radiation penetrates tissue to a depth of a few millimeters.
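The matching step can be sketched as follows. The specific algorithms inside HIIDE and the DoD databases are not public, so this is purely illustrative: Daugman-style iris systems encode the wavelet-filtered iris pattern as a binary “iris code” and score candidates by fractional Hamming distance, with masks excluding bits hidden by eyelids or glare. All names and the 0.32 threshold (a commonly cited operating point in the academic literature) are assumptions, not details from the article.

```python
# Hypothetical watch-list comparison using Daugman-style iris codes.
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of mutually usable bits that disagree between two codes.
    Masks flag bits obscured by eyelids, lashes, or specular reflections."""
    usable = mask_a & mask_b
    disagreements = (code_a ^ code_b) & usable
    return disagreements.sum() / usable.sum()

def search_watch_list(probe_code, probe_mask, watch_list, threshold=0.32):
    """Return watch-list entries scoring below the decision threshold,
    best match first. watch_list maps an ID to a (code, mask) pair."""
    hits = []
    for entry_id, (code, mask) in watch_list.items():
        d = hamming_distance(probe_code, code, probe_mask, mask)
        if d < threshold:
            hits.append((entry_id, d))
    return sorted(hits, key=lambda x: x[1])
```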
Soldiers and Marines have collected more than 240,000 records for the Department of Defense’s Automated Biometrics Identification System (ABIS). ABIS has helped to identify more than 200 individuals who have the potential to cause harm to Coalition forces.
Military-Style Tracking
Combined with new types of computational engines that significantly lower the price per gigaflop while enabling ever more complex image processing applications, pattern search algorithms are also helping to improve man-machine interactions, training simulations, and military system designs through new approaches to 3D tracking.
Stereoscopic imaging – a method that extracts the 3D locations of features within a scene from the slight, parallax-induced differences between two pictures of the same object taken from slightly different angles – is not a new approach to tracking. However, computational costs have limited the use of stereoscopic systems. Today, multicore processors, field programmable gate arrays (FPGAs), digital signal processors (DSPs), and application-specific integrated circuits (ASICs) are all speeding 3D image processing, enabling helmet-based targeting in military aircraft, better pilot simulation systems, and improved targeting systems.
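The geometry behind stereo depth is simple; the cost lies in finding the per-pixel correspondences (the disparity search) that the hardware above accelerates. For a rectified camera pair, depth follows Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. The sketch below is a minimal illustration; the focal length and baseline values are made up, not taken from any system described in this article.

```python
# Minimal sketch: converting a disparity map to depth for a rectified
# stereo pair. Computing the disparity map itself (a correlation search
# over candidate offsets at every pixel) is the expensive step that
# ASICs/FPGAs accelerate in embedded stereo systems.
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Return a depth map in meters; zero disparity means no match."""
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Illustrative numbers only: a 640x480 pair, 800 px focal length,
# 22 cm baseline.
disparity = np.random.randint(1, 64, size=(480, 640))
depth_m = depth_from_disparity(disparity, focal_px=800.0, baseline_m=0.22)
```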
AIA member TYZX Inc. (Menlo Park, California) has already had success mating its DeepSea G2 Vision System platform to automated Taser firing systems for securing highly sensitive areas. TYZX uses a stereoscopic design with two fixed CMOS sensors in a calibrated housing. Typically, this approach is computationally intensive, adding to the cost of the processor and of the network elements needed to handle the required bandwidth, but TYZX expedites the solution by combining an ASIC, an FPGA, and a PowerPC chip in a single housing. This configuration allows the ‘smart camera’ to crunch the data at speeds up to 60 fps while eliminating the need for high-bandwidth camera-to-processor and interprocessor links. The ASIC is tasked with image correction and stereo correlation, while the FPGA handles the 3D calculations and determines the robot path based on the collected 3D data.
Success tracking objects in wind tunnels prompted Advanced Imaging, a leading publication covering the electronic imaging industry, to present camera and frame grabber manufacturer DALSA (Waterloo, Ontario) with a ‘Solutions of the Year’ award earlier this year. DALSA won the machine vision category for its design and development work on the ‘Stereo Metric Tracking System’ used by the low-speed wind tunnel in Braunschweig, Germany, which is operated by German-Dutch Windtunnels (DNW). The system is used to test the typical trajectories of objects jettisoned from transport aircraft. Understanding the dynamics of jettisoned objects has applications in fields such as rapid delivery of humanitarian supplies.
In this application, dual high-speed cameras and frame grabbers capture images of the falling boxes, acting as the ‘eyes’ of a vision system that uses the two images to reconstruct the 3D coordinates of each box. Three markers on each side of a box provide reference points that help identify its location and orientation. To meet the rigorous performance needs of the system, DALSA’s Xcelera frame grabbers were deployed to handle data rates of up to 1 gigabyte per second, along with the company’s Sapera Essential image processing software.
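The reconstruction step can be illustrated with standard two-view triangulation. DALSA’s actual pipeline is proprietary, so the sketch below only shows the underlying geometry, using OpenCV as a stand-in: given calibrated projection matrices for the two cameras and a marker’s pixel position in each view, its 3D coordinates follow directly.

```python
# Illustrative sketch: recovering a marker's 3D position from its pixel
# coordinates in two calibrated cameras. P1 and P2 are 3x4 projection
# matrices obtained from camera calibration (assumed, not described in
# the article).
import cv2
import numpy as np

def triangulate_marker(P1, P2, px1, px2):
    """Return the (X, Y, Z) point seen at pixel px1 in camera 1 and
    pixel px2 in camera 2."""
    pts1 = np.asarray(px1, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(px2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()

# With three such markers per box face, the box's position and
# orientation follow from fitting a rigid transform to the
# reconstructed points, frame by frame.
```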
The military continues to explore more advanced uses for machine vision technology, including the new fields of vision-guided combat robots and automated guided vehicles (AGVs). According to Alan Schultz, head of the Intelligent Systems Section at the Navy Center for Applied Research in Artificial Intelligence (NCARAI), today’s machine vision focuses on automating the recognition of objects in the natural world. What the military needs for tomorrow is general-purpose vision with contextual knowledge – vision that can spot a sniper hidden in the trees or navigate a vehicle through difficult terrain.
*Editor’s Note: In an upcoming AIA feature on machine vision in the aerospace industry, we’ll delve more deeply into the new target recognition and search algorithms the military is using, particularly as they relate to unmanned vehicles and their need to acquire high-resolution imagery, process that imagery, and relay prioritized imagery back to central command locations for further action.