Play Nice: Machine Vision Joins Other Technologies to Secure the Homeland
POSTED 12/20/2011 | By: Winn Hardin, Contributing Editor
If you own a business and want it secure, you place a camera over each entrance to watch your customers, over the cash register to watch your employees, and outside to watch everything else. But if you want to secure a country, closed-circuit television (CCTV) cameras aren’t the best solution.
The U.S. Department of Homeland Security (DHS) learned this the hard way in January 2011, when DHS Secretary Janet Napolitano halted funding of the SBInet program, also called the U.S. “Virtual Border Program.” The Virtual Border was to be a layered system, starting with 1,800 towers armed with radar and visible imaging systems, supported by border patrol agents carrying PDAs connected to distributed command centers. Unfortunately, software problems and difficulties with the visible imaging systems led to massive cost overruns, prompting DHS to shut the program down.
Despite the monetary losses, SBInet wasn’t a complete failure. DHS learned many lessons, including that no single sensor technology is foolproof. The best solution incorporates many different sensing technologies working together to overcome location-specific problems. And of course, machine vision technologies are part of the solutions that are surviving and succeeding along the U.S. border.
Mobile, Not Stationary
SBInet’s radar towers and visible cameras were susceptible to bad weather, leading to legions of false alarms. But a small part of the Virtual Border program continues to gain interest among government security agencies and the military: mobile platforms using thermal cameras for long-range detection backed by visible cameras for identification.
Under the SBInet program, moviMED (Irvine, California) worked with defense contractors to develop the software, pan/tilt/zoom control, and imaging system for a surveillance truck. An extension boom was mounted on a trailer in the back of the truck with a gimbal system hosting a 1.3 megapixel (MP) visible color camera and a cooled mid-wave (3 to 5 micron InSb) infrared camera.
Using a PC built into the dashboard, the operator could select paths or other regions of interest, and the cameras would automatically image that area, dwell for a few seconds, and move to the next location. The system used image processing sparingly, mainly inside the cameras for contrast enhancement and similar camera controls. The pan-tilt used gyroscope stabilizers with a pair of PID loops to keep the cameras locked on a spot, even while the truck was moving over uneven ground.
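The stabilization scheme described above — one PID loop per gimbal axis, fed by gyroscope feedback — can be sketched in a few lines. This is an illustrative toy model, not moviMED’s implementation: the gains, the control rate, and the constant-disturbance “truck motion” plant are all assumptions.

```python
# Hypothetical sketch of one axis of the gyro-stabilized gimbal:
# a PID loop drives the camera's pointing angle back toward the
# commanded line of sight while the vehicle disturbs it.
# Gains and the plant model are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def stabilize(disturbance, steps=500, dt=0.01):
    """Hold one gimbal axis at 0 degrees against a constant angular
    disturbance (deg/s) meant to mimic truck motion; returns the
    residual pointing error after `steps` control ticks."""
    pid = PID(kp=4.0, ki=2.0, kd=0.5, dt=dt)
    angle = 0.0
    for _ in range(steps):
        # each tick: gyro reports the current angle, the PID output
        # counters the disturbance, and the angle integrates both
        correction = pid.update(0.0, angle)
        angle += (disturbance + correction) * dt
    return angle
```

In practice the integral term is what cancels a steady disturbance: after a few seconds of simulated control, the residual error settles near zero even though the disturbance never stops. A second, identical loop would run on the other axis.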
“The system allowed the border patrol to switch between visible and thermal images,” explains Markus Tarin, President of moviMED, “but there wasn’t interest at the time in image fusion. And while fusing two different images into a single view to use the best of both modalities has gained more attention in recent years, visible light really doesn’t offer much for nighttime scenes, unless it’s intensified or you use active illumination, which is counterproductive for surveillance operations. Mainly, these homeland security applications want the operator to choose the right sensor for the particular moment or location.”
While ground radar failed SBInet, it is still widely used as a detection modality, backed up by intensified visible (night vision) or infrared cameras. “There’s a multisensor system approach that’s on the rise these days,” continues Tarin. “Ground radar may detect the intrusion, and then you want software to automatically pan and tilt the camera to the location so you can see what’s there, possibly combined with laser range finding and image stabilization. It can be pretty complex and it really goes beyond a typical imaging project.”
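The radar-cues-camera handoff Tarin describes reduces, at its core, to converting a radar ground fix into pan and tilt angles for the slew command. A minimal geometric sketch, assuming a local east/north/up frame in meters (the coordinate convention and function name are illustrative, not any vendor’s actual interface):

```python
import math

def cue_camera(cam_pos, target_pos):
    """Convert a radar-reported target position into pan/tilt angles
    (degrees) for a camera at cam_pos.

    Positions are (east, north, up) tuples in meters -- an assumed
    convention for illustration only.
    """
    de = target_pos[0] - cam_pos[0]
    dn = target_pos[1] - cam_pos[1]
    du = target_pos[2] - cam_pos[2]
    # pan: azimuth measured clockwise from north
    pan = math.degrees(math.atan2(de, dn))
    # tilt: elevation above the horizontal plane
    tilt = math.degrees(math.atan2(du, math.hypot(de, dn)))
    return pan, tilt
```

For a camera on a 10-meter tower cueing on a target 1 kilometer due east, this yields a 90-degree pan and a slight downward tilt; a laser range finder, as Tarin notes, would refine the range term that focus and zoom depend on.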
Dock of the Bay
Stationary systems have found a home in homeland port security applications, however, where the targets are in confined waterways and channels, unlike land borders, which present constantly changing design challenges based on the topography.
Innovative Signal Analysis (ISA) recently developed a coastal surveillance and change-detection imaging system for the U.S. Coast Guard at California’s Port of Long Beach. The system offers large-area (roughly 7 to 50 square miles for human detection, depending on the selected sensor types and lenses), wide-angle (90-degree) surveillance with automated change detection and change-history display in day and night conditions. A real-time data server allows users in the U.S. Coast Guard, the port authorities of both Long Beach and Los Angeles, and U.S. Customs and Border Protection (CBP) to simultaneously access and control the surveillance video. To help officials identify potential threats across the 80 MP panoramic video feed, the WAVcam system uses specialized image processing algorithms to detect moving objects in the video feed and maintain a history of those tracks over time.
In its simplest form, the WAVcam system is essentially two boxes: a sensor subsystem and an image processing unit/video archive/server. The third element is a client-side software program, WAVclient, which allows any user with access to the network to control the video feed as if they controlled the camera subsystem.
At the Angels Gate installation and technology demonstration, DHS asked ISA to supply two sensor subsystems to provide long-range daytime and nighttime surveillance: a WAVcam-VIS and a WAVcam-MIR. Both subsystems include a camera, camera optics, an astronomy-grade mirror with motion controls, and a processing board that handles camera and optics interface and control functions, inertial sensing, and image processing, and that compresses the images and bundles the video data with sensor data for TCP/IP communication over the local area network (LAN). Each subsystem also includes temperature sensors and control functions.
The daytime WAVcam sensor subsystem contains an Imperx Inc. (Boca Raton, Florida) Bobcat, a 2 MP visible CCD camera with Camera Link output. “The HD Imperx camera has worked extremely well,” says Wayne Tibbit, Business Development Manager for ISA. “It offers beautiful imagery, a full set of controls to command the camera and control integration time, as well as other camera functions. Controlling the integration time is critical to making sure each part of the panoramic image offers the proper contrast for accurate downstream processing.”
Because of the size of the image, WAVanalytics also uses “change detection” algorithms, essentially comparing successive panoramic images using proprietary signal processing algorithms to identify changes against the stationary scene. These changes represent moving objects, which are then color-highlighted in the panoramic image. At the same time, the display shows a sonar-like video feed that visually displays the change-detection targets’ motion over time, generating easily recognized “target signatures.”
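ISA’s algorithms are proprietary, but the basic change-detection idea — difference each new frame against a slowly updated background model, flag pixels that moved, and accumulate the flags into a track history — can be sketched as follows. Function names, the threshold, and the blending factor are all illustrative assumptions.

```python
# Simplified stand-in for panoramic change detection: threshold the
# per-pixel difference between the current frame and a background
# model, and blend each frame into the background so slow scene
# changes (lighting, tide) don't trigger alarms. Images are plain
# nested lists of brightness values for illustration.

def detect_changes(background, frame, threshold=30):
    """Return (x, y) coordinates whose brightness changed by more
    than `threshold` relative to the background model."""
    hits = []
    for y, (brow, frow) in enumerate(zip(background, frame)):
        for x, (b, f) in enumerate(zip(brow, frow)):
            if abs(f - b) > threshold:
                hits.append((x, y))
    return hits


def update_background(background, frame, alpha=0.05):
    """Blend the new frame into the background model with an
    exponential moving average; alpha sets how fast the model
    adapts to gradual scene changes."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]
```

Appending each frame’s detections to a per-target history is what produces the persistent motion traces the article describes as “target signatures” on the sonar-like display.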
According to Tibbit, developing future systems that integrate multiple sensor modalities, even when used as separate channels, will soon be easier thanks to new standards such as the C2 Display Equipment Information Interchange standard (SEIWG ICD-0101A) coming out of the Department of Defense’s Physical Security Equipment Action Group. The new standard should help machine vision and security companies develop products that work with other sensor modalities in an open-source environment.