New Machine Vision Applications and Sensors Drive Advances in Optics
By John Lewis, A3 Contributing Editor
Before any camera can capture an image of an object, the lens must collect scattered light from the object and distribute it properly over the sensor’s active area. That’s one reason why new machine vision applications and camera sensors tend to drive parallel advances in optics, and optics suppliers must continually evolve as machine vision technologies progress.
Designing Optics for New Large Format Sensors
“We consistently see machine vision applications going to higher resolutions,” says Nick Sischka, recently promoted to director of imaging product development at Edmund Optics. “We’ve stopped noticing pixels getting significantly smaller and instead see sensor sizes growing.”
As companies like Gpixel and Sony continue to release ever-larger sensors (e.g., 152 MP), the challenge for optics suppliers is to keep pace. These lenses will be quite large — larger than full-frame lenses typically used in photography — and until new ones are developed, a photographic medium-format lens may be the best option.
“Lenses for the very large 60.6-mm diagonal sensors that offer greater than 150 MP must be developed,” says Mark Peterson, cofounder and vice president, advanced technology, Theia Technologies. “Unfortunately, the cost for development, the large size of the lens, and the initial low volume will keep these lenses at a premium price.”
Jason Baechler, president of MORITEX North America, Inc., concurs. “When designing optics for increasingly large sensors, several challenges exist, especially if the pixel sizes/pitches are quite small,” he says. “The two main challenges are controlling the size and cost of the lenses/optics to align with the cameras using such new sensors. Beyond just the challenge to address all the camera sensors out there, those two factors increase the necessity of making tradeoffs in optical and mechanical designs.”
For bi-telecentric lenses, MORITEX designs the front/objective lenses to match the maximum field of view at a targeted resolution. That front portion of the lens is then suitable for a wide range of image sensor formats, which minimizes component variation within the portfolio and optimizes the cost of the most expensive telecentric lens components. The back (image-side) lens portion, however, varies with image format, so the overall length is not always optimized.
For that reason, Baechler notes, “we still offer object-side telecentric lenses of different formats for applications that have tighter space requirements.” However, those lenses (MORITEX MMLs) can only be used with a specific sensor format or smaller, assuming the mount size is the same. For factory automation and other non-telecentric fixed focal-length lenses (where no large or high-power objective lens drives the cost), the tradeoffs grow as the products are pushed toward greater versatility and cost competitiveness.
By minimizing the number of elements in a lens and simplifying the opto-mechanical systems for working distance (WD), aperture, and focus adjustment, a lens can be designed for a limited working distance range and/or aperture size. Another way to approach this is to design products to cover some range of sensor formats and offer mount adapters to match up with different cameras.
“This approach has the advantage of improved resolution as the sensor image format gets smaller versus the maximum image format of the lens,” Baechler explains. “As a result, a lens designed for 62-mm image diagonal (or line) and 5 µm pixels could match a sensor with 3 µm pixels and a 43.5-mm diagonal.”
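Baechler’s figures can be sanity-checked against the sensor-side Nyquist limit, which sets the spatial frequency a lens must deliver for a given pixel pitch. A minimal sketch (the helper name is my own; the pixel and diagonal values come from the quote above):

```python
def nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    """Sensor Nyquist limit in line pairs per millimeter.

    One line pair needs at least two pixels, so the limit is
    1000 / (2 * pixel pitch) with the pitch given in micrometers.
    """
    return 1000.0 / (2.0 * pixel_pitch_um)

# Figures from the quote: a lens designed for a 62-mm image diagonal
# at 5-um pixels, reused on a 43.5-mm diagonal sensor with 3-um pixels.
print(nyquist_lp_per_mm(5.0))  # 100.0 lp/mm at the original design target
print(nyquist_lp_per_mm(3.0))  # ~166.7 lp/mm demanded by the finer pixels
```

The smaller sensor demands a higher spatial frequency but uses only the central portion of the image circle, where aberrations are typically lowest, which is why a lens designed for the larger format can still match the finer pixels there.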
In response to the growing release of larger sensors, Navitar Inc. has also developed optics optimized for use with the larger camera formats. One new product called the Navitar SingleShot™ Wide Field Objective Imaging System is suitable for high-end industrial applications such as semiconductor wafer inspection, FPD inspection, and MEMS, as well as life science and biomedical applications such as multi-well experiments and cell imaging.
The Navitar SingleShot is a modular fixed-focal-length imaging system that combines the field of view of a macro lens with the resolution of a microscope objective. Optimized for use with larger camera sensors, such as 4/3, 1.1-in., and 1-in. formats, the system is compatible with coaxial, ring lighting, and Kohler-style illumination.
“Digital zoom, or region of interest functionality, is fully supported when combined with a Pixelink CMOS or 10GigE camera,” says Jeremy Goldstein, owner and CEO of Navitar. “The lens was designed to be the perfect complement for the Sony IMX530 sensor (PL-X9524) at 2.74-µm pixels, which makes the lens one of the few products on the market that can fully utilize this 24 MP sensor.”
Navitar’s new imaging system offers feature resolution of 1.3 µm at a 10-mm field of view. “For comparison, a traditional long working distance 10x objective would resolve the same feature size at a FOV of 2.4 mm,” Goldstein says. “This is ideal for magnifying smaller regions of interest to better observe cellular activity or closely examine defect characteristics.”
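A back-of-envelope comparison using the figures Goldstein cites shows where the throughput gain comes from; the function name is illustrative, not part of Navitar’s documentation:

```python
def features_across_fov(fov_mm: float, feature_um: float) -> int:
    """Number of minimum-size features that fit across the field of view."""
    return int(fov_mm * 1000.0 / feature_um)

# SingleShot: 1.3-um features over a 10-mm FOV, versus a traditional
# long-working-distance 10x objective resolving 1.3 um over 2.4 mm.
print(features_across_fov(10.0, 1.3))  # 7692 features per line
print(features_across_fov(2.4, 1.3))   # 1846 features per line
print(round((10.0 / 2.4) ** 2, 1))     # ~17.4x more sample area per frame
```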
Goldstein also notes that new optics that take advantage of the larger format cameras “enable Navitar to provide customers with larger-field-of-view imaging so the customer can see more of the object at one time. This increases the processing throughput and speed of data collection, which is extremely important to all industries, both industrial and medical.”
In biological research and instruments especially, capturing a larger surface area of a sample decreases the amount of time to collect meaningful data and the possibility of photo bleaching of labeled cells. Time previously spent moving the optics, the stage, or waiting for software to stitch multiple images together is greatly reduced when using the Navitar SingleShot system, according to Goldstein.
More Than Meets the Eye
Industry reports expect non-visible imaging, including ultraviolet, short-wave infrared (SWIR), IR, and hyperspectral and multispectral imaging, to grow by more than 30 percent over the next five years. As sensor and camera options have rapidly expanded in recent years, costs have come down, spurring an explosion of new applications — which, in turn, has expanded demand for the variety of lenses required to address them.
Peterson notes that Theia developed SWIR-only lenses before the new hyperspectral sensors were available. “Now we are evaluating new lens types that can work with the advanced vis-SWIR sensors that are responsive to wavelengths from 400 nm to 1,700 nm, including lenses using our patented Linear Optical Technology®, to provide an ultra-wide image and remove barrel distortion without software.”
He adds: “The ‘killer app’ for this vis-SWIR wavelength band hasn’t been identified yet, but the decreasing cost of hyperspectral sensors will necessitate the availability of lenses that complement the wide wavelength band.”
In hyperspectral imaging, Sony has introduced broadband imaging sensors, but until an actual hyperspectral coating is applied over the sensor, most applications will use a narrow wavelength band across it, such as SWIR, according to Sischka.
“It often makes more sense to have a multi-camera solution with different sensors focusing on different wavebands rather than having one very expensive camera that can image across the whole broad spectrum,” Sischka says. “An exception to this would be an application where minimizing weight is a priority, like aerial imaging from a drone.”
SWIR Applications and Advantages
In response to the advances in SWIR technology, Navitar has developed the Resolv4K SWIR lens series, a modular product that can be configured in many SWIR fluorescence, microscopy, and imaging configurations.
“The SWIR region at 1000 nm to 2000 nm provides several advantages over the visible and near-IR regions for in vivo imaging, which is a market we support with our SWIR imaging optics,” says Goldstein. “The general lack of autofluorescence, low light absorption by blood and tissue, and reduced scattering can render a mouse translucent when imaged in the SWIR region.”
One application that can benefit from SWIR imaging is advanced tumor diagnosis and treatment. Incorporating SWIR imaging with multispectral imaging approaches provides a useful platform for deciphering tumor development and metastasis, and it can help with tumor therapy.
By combining with SWIR-emitting core/shell quantum dots, intravital microscopy can generate detailed three-dimensional quantitative flow maps of brain vasculature. This allows visualization of the differences between healthy tissue and a tumor.
In the field of neurosciences, SWIR imaging can detect ions and neurotransmitters and help visualize biological processes deep within a living brain. This application allows scientists to better understand cognitive functions and neurodegenerative diseases such as Alzheimer’s and Parkinson’s, which Goldstein believes will lead to cutting-edge therapies.
In the area of multispectral/hyperspectral imaging, Navitar has developed many custom solutions in the medical and industrial imaging markets. For example, a medical technology company needed a camera manufacturer to help them create a new imaging system that combined artificial intelligence and hyperspectral imaging for real-time surgical guidance. Navitar’s Pixelink camera division is designing and building a medical-grade camera system integrating a hyperspectral image sensor to make it suitable for surgery.
The camera needs to meet ISO 13485, which addresses quality management in medical devices, and all related regulatory and electromagnetic compatibility requirements. With the new hyperspectral camera, the surgical system can analyze the difference between tissue types during a surgical procedure.
In another example, a startup developing an innovative robotic vision solution that combines 3D vision with multispectral imaging capabilities asked Navitar’s Pixelink camera division to provide a board-level camera integrated with a hyperspectral imaging sensor. Pixelink integrated the sensor and provided custom calibration to ensure sensor-to-sensor uniformity, enabling the customer to create a new 3D robotic vision solution with advanced tracking capabilities. The system can detect defects in product colors or drops of liquid early in the manufacturing process.
Lenses for Discrete Sensor Cameras
MORITEX has been selling lenses and other components for applications using IR-SWIR, multispectral, and hyperspectral cameras for at least 15 years. As such, the company offers a “large portfolio of lenses to address multispectral area and line-scan camera requirements,” says Baechler. “For visible-range single-sensor area-scan cameras, there is nearly 100 percent overlap between lenses for color and multispectral applications.
“For discrete multisensor line-scan cameras, on the other hand, MORITEX has specific lenses that better manage the requirements for flange focal distance for high-resolution multispectral imaging with smaller pixel pitches.”
For SWIR applications, MORITEX offers a standard product portfolio of telecentric and non-telecentric products and can quickly manufacture various standard products with slight modifications to cover required wavelength ranges. Since full IR-SWIR–range illumination sources are not readily available with high output, most customer applications have specific wavelengths of interest over this range. As the number of cost-effective applications expands, however, Baechler says MORITEX will continue to release more options and have them readily available.
“For hyperspectral, we have a similar experience,” Baechler says. “Of course, MORITEX has lenses to address hyperspectral application needs, but due to the limitations of light sources, many of these are used only in low-volume applications outside of factory automation or logistics-type applications.”
Lenses for Prism-Based Cameras
MORITEX has been one of few companies to offer lenses for area-scan and line-scan prism-based multisensor cameras, an area of the market dominated by camera maker JAI. These lenses are especially useful for applications requiring high color resolution for different wavelengths, whether in the visible, SWIR, or multispectral ranges, according to Baechler.
Motorized Zoom and Focus
Other advancements in optics that are significant to the machine vision market include zoom lenses, new optical materials, and improved manufacturing methods. For example, zoom lenses — which were previously thought undesirable due to concerns about vibration effects on the motorized mechanics — are being adopted in greater numbers, according to Peterson.
The increased adoption is driven by applications where the subject is not just a uniformly shaped product on an assembly line, but an object at unknown and variable distance. One example is mobile applications, where the object or the imager or both may be moving. Meanwhile, fixed robots and roving autonomous robots require the flexibility of a zoom lens to navigate and identify objects in a dynamic environment.
Peterson also says that motorized lenses with adjustable zoom and focus provide greater versatility and convenience, allowing adjustment to changing conditions, remote setup, and operation. Combined with high resolution and NIR correction, as well as AI and machine learning, this type of lens can be a powerful tool to enable higher level identification and recognition tasks, such as defect detection, optical character recognition (OCR) and automatic number plate recognition.
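The field-of-view flexibility Peterson describes follows directly from the pinhole geometry of a rectilinear lens. A minimal sketch, assuming an illustrative 7.2-mm-wide sensor behind a hypothetical 8–40 mm motorized zoom (neither figure is from the article):

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal angle of view for a rectilinear lens (pinhole model)."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Motorized zoom lets the field of view be re-optimized remotely,
# after installation, without touching the camera:
for f_mm in (8.0, 16.0, 40.0):
    print(f"{f_mm:>4} mm -> {horizontal_fov_deg(7.2, f_mm):.1f} deg")
```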
One such application is intelligent traffic systems (ITS), where the flexibility to optimize the field of view and focus distance after initial installation and without the need for stopping traffic is enabled by motorized zoom and focus. Image optimization is critical to getting the best performance to identify targets using OCR, AI, and machine learning.
“There is a common misunderstanding that an imaging system is only useful if it is pixel-limited and operating at the Nyquist frequency,” Sischka says. “Imaging lenses can actually be very useful beyond the Nyquist frequency, meaning that pixel size is smaller than the focused spot size of the lens. For example, smearing multiple pixels across a feature is useful for intelligent traffic solutions because it allows them to notice small differences between objects that look relatively similar to each other.”
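Sischka’s point is about the sampling ratio, i.e., how many pixels the lens’s focused spot spreads across. A quick sketch with illustrative numbers (the 2.74-µm pixel pitch appears earlier in the article; the spot diameter is an assumption):

```python
def pixels_per_spot(spot_diameter_um: float, pixel_pitch_um: float) -> float:
    """How many pixels the lens's focused spot spans per axis."""
    return spot_diameter_um / pixel_pitch_um

# A pixel-limited system has a ratio near 1; above ~2 each resolvable
# feature is "smeared" over several pixels, which can help classifiers
# separate similar-looking objects, as in the traffic example above.
print(round(pixels_per_spot(7.0, 2.74), 2))  # 2.55
```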
Metamaterials and Materials Science
The incorporation of metamaterials into lenses will be very exciting but is not quite here yet, says Sischka. Materials science will be key to further developments in serving larger sensors and broader wavelength ranges. Computational imaging and the ability to use diffractive elements have also led to recent advances.
Baechler agrees. “Material options have helped keep costs down while performance increases for lenses designed for IR-SWIR sensors,” he says. “Liquid lenses allow for novel solutions and also increased versatility of existing base optical designs.”
Navitar is developing its HR Microscope objective series, which provides larger fields of view with the same resolving power as higher magnification objectives. Navitar’s HR Plan Apo Infinity corrected objective lens series includes magnifications of 1X, 2X, 4X, 6X, 10X, and 20X.
“These lenses offer field of view increases of 67 percent to 150 percent, compared to objectives with similar numerical aperture (NA),” Goldstein says. “They offer up to 30 percent increased resolving power over competing lenses with similar magnifications and fields of view.”
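The NA/resolving-power tradeoff Goldstein refers to follows from the Rayleigh criterion for diffraction-limited optics. A minimal sketch; the wavelength and NA values are illustrative, not Navitar specifications:

```python
def rayleigh_resolution_um(wavelength_nm: float, na: float) -> float:
    """Diffraction-limited lateral resolution: 0.61 * wavelength / NA."""
    return 0.61 * (wavelength_nm / 1000.0) / na

# Green light at 550 nm: doubling NA halves the smallest resolvable feature.
print(round(rayleigh_resolution_um(550.0, 0.28), 2))  # 1.2 um
print(round(rayleigh_resolution_um(550.0, 0.14), 2))  # 2.4 um
```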
When Samsung, a specialist in LCD display manufacturing, shifted its focus from making LCD panels to manufacturing QD-OLED (quantum-dot organic light-emitting diode) panels, the company required new automated optical inspection (AOI) machines built around a modular microscope system. Samsung evaluated objective lenses from various manufacturers but chose the Navitar 4X HR objective because it provided both a larger field of view and a higher NA than a competing 5X objective.
“The Navitar HR objective delivered 10 percent higher contrast and higher NA,” Goldstein says. “The larger field of view enables faster throughput, which is a common theme we are hearing from our customers in both the industrial and life science markets.”
Advanced Assembly Techniques
Goldstein also sees the continued expansion of using “advanced assembly techniques that ensure the centering of each optical element of the lens assembly instead of relying on just simply dropping lens elements in barrels,” adding that Navitar is at the forefront of this technology. “By combining the advanced assembly techniques with the expanded use of automated lens-sensor integration equipment, we improve lens performance and greatly improve the imaging performance of the optical system.”
As lens makers keep pace with the release of new camera sensors and expanding machine vision applications where advanced solutions are now feasible, new opportunities continually emerge, including in robotics and autonomous machines. Increases in dynamic range make autonomous vehicles much more robust, and robots are now able to multitask. In retail applications, they can simultaneously recognize product outages, safety hazards, and label discrepancies. Like most robots, they automate tedious, repetitive tasks, freeing human talent for higher-value work requiring more nuance and discretion.
In addition to enabling new applications, existing applications also benefit from the greater throughput achieved by larger-format optics that take advantage of larger-format sensors. Providing more data to improve image quality and image-processing speed should improve AI efficiency and performance, allowing further advancements in automation. Precise data and metadata on optics and sensors are critical to enabling concepts such as digital twins and Industry 4.0.
Get hands-on with machine vision and sensing applications at The Vision Show, October 11-13 in Boston, MA.