
Content Filed Under:

Laser Inspection and Laser Measurement

Redefining Visible: How Infrared, Multispectral, and Time of Flight Are Expanding the Machine Vision Landscape

POSTED 06/10/2019 | By: Winn Hardin, Contributing Editor

The visible range of the electromagnetic spectrum is defined largely by the wavelengths humans are able to see. But “visible” becomes a matter of perspective when using imagers, such as Teledyne DALSA’s Piranha4 multispectral camera, which captured these three images via its color, monochrome, and NIR outputs. The banknote’s architectural windows were half-printed with IR security inks — a detail that is invisible to the naked eye and absent in the color and monochrome images. The third image, illuminated and captured in the NIR range at 850 nm, clearly reveals the use of these inks.

Photo courtesy of Teledyne DALSA.

Automation has become such a hot topic lately that it’s easy to forget that machine vision isn’t simply a more efficient and tireless alternative to human inspection. It takes an extraordinary amount of design and engineering expertise to get a camera and computer to derive actionable intelligence from a bunch of photons. So, why go through all the trouble?

One of the many reasons machine vision is superior to manual inspection is that our eyes operate only within a comparatively thin sliver of the electromagnetic spectrum, from roughly 400 to 700 nm — the so-called visible range. So leveraging cameras that operate beyond the visible range is very useful, if only because it substantially increases the amount of light available to interact with a sample. The infrared spectrum alone offers more than 14,500 discrete wavelengths to work with. When you factor in other imaging techniques and modalities, such as multispectral and time-of-flight (ToF) imaging, machine vision enables a whole new perspective on what humans can inspect, analyze, sort, and read.

Multispectral Plus 1
Just above the rainbow that is the visible spectrum, between 800 and 1050 nm, lies the near infrared (NIR), where some nonvisible imaging applications are possible with visible-range CCD or CMOS cameras. Such applications typically require NIR illumination, as well as long-pass or band-pass filters to block out visible light. Conventional silicon sensors exhibit high quantum efficiency up to about 800 nm, above which their sensitivity drops off dramatically.
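As a rough sketch of that silicon roll-off, the following Python snippet estimates how much of a sensor's total signal survives behind an ideal long-pass filter. The QE curve and the 780 nm filter edge are invented illustrative values, not data for any particular sensor.

```python
import numpy as np

# Hypothetical silicon sensor behind an ideal long-pass filter.
# All numbers below are illustrative, not real device data.

wavelengths = np.arange(400, 1051, 10)  # nm, visible through NIR

# Toy QE model: flat 0.8 through 800 nm, linear roll-off to zero at 1050 nm
qe = np.where(wavelengths <= 800, 0.8,
              np.clip(0.8 * (1050 - wavelengths) / 250, 0, 0.8))

# Ideal 780 nm long-pass filter: blocks visible, passes NIR
transmission = (wavelengths >= 780).astype(float)

# Fraction of the unfiltered signal that remains, assuming flat illumination
effective = qe * transmission
nir_fraction = effective.sum() / qe.sum()
print(f"Fraction of signal in the NIR band: {nir_fraction:.2f}")
```

Even with this toy model, the point of the paragraph comes through: a meaningful share of a silicon sensor's response survives above the visible band, which is what makes NIR imaging with conventional cameras practical.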

However, within that window, advances in manufacturing are contributing to silicon-based sensors with higher quantum efficiency in the NIR. These advances include implementation of thicker epitaxial layers on sensor chips as well as more highly resistive barriers between pixels, says Jens Hashagen, Senior Product Manager at Allied Vision. He adds that such architectures enable deeper intrusion of NIR photons into sensors and a higher probability of electron generation.

Silicon’s sensitivity into the NIR also allows the design of multispectral cameras that combine visible and NIR channels on a single chip. Select models in Teledyne DALSA’s Piranha and soon its Linea ML camera series, for example, encompass channels operating across eight or more spectral bands, spanning the visible and NIR range beyond 800 nm. Such cameras would be useful for measuring different vegetation indices or for food-sorting applications.
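For the vegetation-index use case, the standard calculation is NDVI, which compares NIR and red reflectance pixel by pixel; healthy vegetation reflects strongly in NIR. A minimal sketch in Python, using made-up sample reflectance values and an arbitrary 0.3 vegetation threshold:

```python
import numpy as np

# Tiny 2x2 "images" with invented reflectance values, one per band
red = np.array([[0.10, 0.45],
                [0.08, 0.50]])   # red-band reflectance
nir = np.array([[0.60, 0.48],
                [0.55, 0.52]])   # NIR-band reflectance

# NDVI = (NIR - Red) / (NIR + Red), bounded to [-1, 1]
ndvi = (nir - red) / (nir + red + 1e-9)

# Simple sorting rule: flag likely vegetation above a chosen threshold
vegetation_mask = ndvi > 0.3
print(ndvi.round(2))
print(vegetation_mask)
```

A camera delivering co-registered visible and NIR planes makes this a per-pixel arithmetic operation rather than a two-camera alignment problem, which is the practical appeal of single-chip multispectral designs.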

Because visible and NIR channels operate on a single silicon-based sensor array, the impact on camera design is minimal. “If we are talking about separating out the silicon-bound NIR band from the visible band, we can add different types of filters to the image sensor that enable this separation,” says Matthias Sonder, Advanced Development Leader, Scanning, at Teledyne DALSA. “As long as a reliable manufacturing process with good quality control exists, this has little impact on the actual imaging hardware.

“In terms of managing the data, we had to do some development work to convert the RGB community to accept four independent spectral planes from a camera. But this is now reasonably available,” Sonder adds.

Entering the SWIR
From 900 nm to about 1.7 µm — the shallow end of the SWIR range — sensors based on indium gallium arsenide (InGaAs) become the prevailing imager technology. However, a potential challenger emerged in March when SWIR Vision Systems launched its new Acuros SWIR camera, which leverages quantum dot technology. Specifically, the imager incorporates a 2.1 MP broadband sensor based on colloidal quantum dot (CQD) thin-film photodiodes fabricated monolithically on silicon readout wafers. While CQD sensors have comparatively lower quantum efficiency than InGaAs, SWIR Vision Systems claims that the technology can deliver comparable performance when paired with active illumination and can offer higher resolution at a lower overall system cost.

Like silicon-based CCD or CMOS sensors, InGaAs and CQD sensors convert photons into electrons, thereby acting as quantum detectors. Both enable detection of reflected wavelengths between 1 and 1.7 µm, where familiar materials appear very different. Water, for example, absorbs strongly at 1.45 µm and, when viewed through a band-pass filter, appears almost black in SWIR images, revealing bruised fruit, well-irrigated crops, or leaking yogurt containers. Materials that look identical in the visible range, such as different plastics and metals, also become distinguishable in SWIR, allowing them to be sorted more easily.
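A hedged sketch of how such a SWIR sorting step might look in software: behind a 1.45 µm band-pass filter, water-bearing regions come out dark, so a plain intensity threshold separates them. The pixel values and the cutoff below are invented for illustration.

```python
import numpy as np

# Invented 8-bit SWIR image captured through a 1.45 µm band-pass filter;
# low values correspond to strong water absorption (e.g. a bruise)
swir = np.array([[200, 195, 198],
                 [190,  30,  25],   # dark patch: water absorption
                 [197,  28, 192]], dtype=np.uint8)

threshold = 100                      # hypothetical cutoff for "dark"
water_mask = swir < threshold
print(f"Water-absorbing pixels: {water_mask.sum()}")
```

The segmentation itself is trivial; the value of SWIR is that the contrast exists at all, since the same scene in visible light would show no difference.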

Imaging at the deeper end of SWIR, from 1.7 to 2.5 µm, has typically been the realm of mercury cadmium telluride (MCT) sensors. But by changing the composition of the three elements comprising InGaAs detectors, these sensors can capture images up to 2.2 µm. “Prices for such sensors are, in general, much lower than those for MCT sensors, and the sensor cooling is not as demanding, resulting in lower system costs,” says Hashagen.

Thermal Gets Sensitive
MCT cameras, as well as devices based on indium antimonide (InSb) sensors and, to a lesser extent, lead selenide (PbSe), extend imaging past SWIR into the MWIR and LWIR ranges — also known as thermal infrared because imaging in this region relies on capturing radiation emitted from the object itself. While that eliminates the need for external light sources when capturing an image, many thermal cameras — including MCT-based devices — require active cooling to manage sensitivity and noise. The trade-off is that cryogenic coolers increase the cost and size of these cameras, which often makes uncooled microbolometer-based imagers a preferable alternative.
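Why room-temperature scenes land in the LWIR band can be made concrete with Wien's displacement law, which gives the wavelength at which a blackbody's thermal emission peaks. This is textbook physics rather than anything vendor-specific:

```python
# Wien's displacement law: peak emission wavelength = b / T

WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temp_kelvin: float) -> float:
    """Peak blackbody emission wavelength in micrometres."""
    return WIEN_B / temp_kelvin * 1e6

# A ~300 K (room-temperature) object peaks near 9.7 um, squarely in LWIR
print(f"300 K -> {peak_wavelength_um(300):.1f} um")
# A hot 1000 K surface peaks near 2.9 um, toward the MWIR/SWIR boundary
print(f"1000 K -> {peak_wavelength_um(1000):.1f} um")
```

That is why LWIR cameras can image people and machinery with no illumination at all, and why hotter targets are often better served by MWIR devices.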

Based largely on amorphous silicon (a-Si) or vanadium oxide (VOx), microbolometer devices have reconfigured LWIR cameras — a once costly technology available only to law enforcement and the military — into an embedded imaging modality able to fit into your pocket. FLIR Systems’ 10.50 × 12.7 × 7.14 mm Lepton microthermal camera module, for example, is now integrated into the Caterpillar S60 smartphone as well as firefighter heads-up displays.

Historically, the drawback to microbolometers was their comparative lack of sensitivity. This picture is starting to change, however, according to Mike Walters, Vice President for Product Management, Micro Cameras, at FLIR Systems. “Two things are happening,” Walters says. “First, we’ve gotten pixel sizes down to 12 µm, which makes these LWIR cameras comparable in resolution to visible sensors that capture wavelengths below 1 µm. But we’re aiming for pixel sizes under 10 µm while maintaining sensitivity. Second, since we’re looking at thermal emission, we are modifying the semiconductor layers to ensure pixels absorb all that heat.”

Time of Flight Ready for Takeoff
Human vision still intuits collections of 3D shapes more easily than a computer does. But techniques such as ToF have become sophisticated enough to help out with industrial applications such as robot pick-and-place operations, sorting objects in a bin, or scanning packages on a conveyor.

Like conventional machine vision systems, ToF systems rely on active illumination and quantum detectors to convert reflected photons into electrons. But instead of producing a 2D image, ToF technology produces a 3D point cloud. It does this in one of two ways. The first, generally known as direct ToF, fires short pulses of light over a field of view and measures their return time to the camera. Indirect ToF, the second method, emits a continuously modulated light signal and measures the phase shift between the emitted and returning waves.
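The two measurement principles reduce to simple formulas: direct ToF converts a round-trip time into distance, while indirect ToF converts a phase shift at a known modulation frequency. A sketch in Python (the function names are mine, not any vendor API):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def direct_tof_distance(round_trip_s: float) -> float:
    """Direct ToF: distance from a measured pulse round-trip time."""
    return C * round_trip_s / 2

def indirect_tof_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """Indirect ToF: distance from the phase shift of a modulated wave."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

# A 6.67 ns round trip corresponds to roughly 1 m
print(f"{direct_tof_distance(6.67e-9):.3f} m")
# At 100 MHz modulation, a pi/2 phase shift is about 0.375 m; beyond the
# unambiguous range of c / (2f), roughly 1.5 m here, the phase wraps around
print(f"{indirect_tof_distance(math.pi / 2, 100e6):.3f} m")
```

The phase wrap is why indirect ToF cameras must balance modulation frequency against the working distances an application demands.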

Indirect ToF is more forgiving when it comes to sensor design and selection and can leverage conventional CCD or CMOS devices. That isn’t to say that these sensors cannot be optimized for ToF. After its acquisition of SoftKinetic, Sony combined that company’s proprietary pixel structure, which is capable of efficient high-speed sampling, with its in-house back-side illuminated sensor technology. The result was Sony’s new DepthSense ToF sensor, which provides more efficient light collection for NIR wavelengths at 60 fps.

“LUCID incorporated Sony’s DepthSense sensor into its Helios ToF camera, which delivers high accuracy within 2 mm, with a standard deviation of less than 2 mm at a 1 m distance,” says Jenson Chang, Product Marketing Manager at LUCID Vision Labs. “Sony’s new sensor offers a large 3D resolution of 640 × 480, while delivering higher speed performance compared to existing products on the market.”

ToF systems, in general, require fewer components, less calibration and computation, and a smaller footprint than either stereo vision or structured light, and ToF more easily scales to longer working distances. But improvements in accuracy are what drive both adoption and continuing development of the technology.

“Most of the advances are triggered by applications that need more accuracy, as accuracy is one of the decisive criteria when judging a ToF system,” says Sebastian von Holdt, Head of Product Management at Basler. Accuracy matters in robotic pick-and-place applications, where the camera must deliver point cloud data that matches the accuracy of the robot. It matters equally in automated logistics, where the size and shape of boxes on a conveyor belt help determine shipping costs, storage space, or where an automated forklift should move a box.

Improving the accuracy of ToF is occurring along three paths. The first is illumination, where a growing preference for VCSELs over IR light-emitting diodes has enabled short, efficient, and precise light pulses, which significantly improve accuracy and reduce temporal noise, even in outdoor applications in bright daylight.

The second is sensor technology, where new devices offering modulation frequencies up to 300 MHz, greater dynamic range through back-side illumination, and VGA resolution (640 × 480) or higher are contributing to denser, more detailed point clouds.
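The modulation frequency mentioned above involves a standard trade-off in phase-based ToF: higher frequencies shrink the distance spanned per radian of phase, improving precision, but also shrink the unambiguous range before the phase wraps. A quick illustration using the standard relationships (the specific frequencies are just examples):

```python
import math

C = 299_792_458.0  # speed of light, m/s

for f_mhz in (20, 100, 300):
    f = f_mhz * 1e6
    unambiguous_m = C / (2 * f)            # range before the phase wraps
    m_per_radian = C / (4 * math.pi * f)   # distance spanned per radian
    print(f"{f_mhz:3d} MHz: unambiguous range {unambiguous_m:5.2f} m, "
          f"{m_per_radian * 100:.1f} cm per radian of phase")
```

At 300 MHz the unambiguous range drops to roughly half a metre, which is why practical cameras combine multiple modulation frequencies or timing patterns to recover both precision and range.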

The third factor driving higher accuracy is improvements to timing. “Timing is everything when it comes to high-quality ToF camera systems,” says von Holdt. “Finding the best timing pattern combinations based on the frame rate and the modulation frequency of the sensor is key to enabling higher accuracy, lower distortions from motion, reduced reflections and stray light, as well as better quality at longer distances.”

A Look Ahead Into the Nonvisible
We cannot improve on human biology (yet). But we still have much room for improvement in what machine vision can achieve in the nonvisible, multispectral, and 3D vision realms. From denser and more sensitive pixel arrays, to back-side-illuminated detectors or stacked sensor designs, to lower cost and smaller systems, today’s camera makers and vision system integrators are improving our ability to sort, inspect, and read what the human eye cannot perceive.