
Machine Vision Optics: More Than A Lens

POSTED 04/03/2012 | By: Winn Hardin, Contributing Editor

A machine vision lens is composed of a number of individual pieces of glass – usually transparent to the human eye – contained in a mechanical housing. Unlike the camera, which is usually seen as the star of the show, the optic is the water boy. The introvert. The tail on the dog.

Top 6 Considerations When Selecting Optics

1. Pixel limited or not?
2. Working distance constraints, including minimum and maximum available distance from camera to object under test
3. Required field of view at a given working distance
4. Level of detail and/or minimum defect size within a given field of view
5. Minimum depth of field within the camera’s field of view
6. Light: What color will you use, and is monochromatic an option?
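
These tradeoffs can be roughed out before any hardware is ordered. Below is a minimal Python sketch of the first-pass math; the function names, sensor width, and example numbers are illustrative assumptions, not values from the article:

```python
# Rough first-pass lens selection math for the considerations above.
# All names and example numbers are illustrative assumptions.

def required_focal_length_mm(working_distance_mm, fov_mm, sensor_width_mm):
    """Thin-lens estimate of focal length: with magnification m = sensor/FOV,
    f = m * WD / (1 + m). A starting point only, not a lens design."""
    m = sensor_width_mm / fov_mm
    return m * working_distance_mm / (1.0 + m)

def object_space_pixel_um(fov_mm, pixels_across):
    """Size of one pixel projected onto the object; the smallest reliably
    detectable feature typically spans at least two pixels."""
    return fov_mm * 1000.0 / pixels_across

# Example: image a 100 mm wide part from 300 mm away onto a 2/3-in. sensor
# (~8.8 mm wide, assumed) with 2448 pixels across the width.
f = required_focal_length_mm(300.0, 100.0, 8.8)
px = object_space_pixel_um(100.0, 2448)
print(f"focal length ≈ {f:.1f} mm, object-space pixel ≈ {px:.1f} µm")
```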

Unfortunately, treating the lens as an afterthought can waste a great deal of money and time.

“The key is that you really need to select your optics at the same time you select your camera,” explains Nicholas James, Product Line Manager – Imaging at Edmund Optics (Barrington, New Jersey). “You might think that you want to go with the highest resolution camera you can afford, but if there’s not a cost-effective lens to match, you may be wasting money on that high-resolution camera.”

My Sensor Is Bigger Than Yours
While imaging amateurs use megapixels (MP) and sensor spatial resolution to describe a camera’s relative strength against the competition, vision engineers realize that a 16 MP sensor is useless if the optical transfer function (OTF) for the entire optical chain doesn’t support the sensor’s pixel size across the full width and height of the sensor. OTF includes everything from the creation of the photon to its absorption by the sensor, including the optical spectrum from the illuminator; the reflected or transmitted spectrum from the object under test; the aperture size; individual and combined optical elements; and finally, the sensor itself.

Another way to select an optic for a machine vision system is to ask the question: Is the application pixel limited or not? Megapixels refers to how many millions of pixels are on a sensor. Sensors come in a variety of sizes, commonly from 1/2-in. and 2/3-in. formats to newer 1-in. formats. Divide the active area of the sensor by the number of pixels and you get a general idea of the size of an individual pixel. Being pixel limited means the optics can resolve a change from black to white from one pixel to the next. Common industrial cameras of a few megapixels or less in a 1/2-in. or 2/3-in. sensor format typically have pixels in the 6 to 7 μm range. Finding optics with a modulation transfer function (MTF) – a measure of how many line pairs per millimeter a lens can resolve, and at what contrast – high enough to resolve changes across 6-micron pixels is not all that difficult. However, buy a camera based on a 10 MP, 1/2-in. CMOS sensor, and now you need an optic good enough to focus light on pixels smaller than 2 microns without blurring the overall image. And that’s not as easy…or cheap.
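
The pixel-pitch arithmetic described above is easy to sketch. The snippet below, using assumed active-area dimensions for the sensor formats mentioned, estimates pixel pitch from area and megapixel count, plus the corresponding sensor Nyquist limit in line pairs per millimeter:

```python
import math

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate (square) pixel pitch from active area and pixel count."""
    area_um2 = sensor_w_mm * sensor_h_mm * 1e6
    return math.sqrt(area_um2 / (megapixels * 1e6))

def nyquist_lp_per_mm(pitch_um):
    """Sensor Nyquist limit: one line pair spans two pixels."""
    return 1000.0 / (2.0 * pitch_um)

# Assumed active areas: 2/3-in. ≈ 8.8 x 6.6 mm, 1/2-in. ≈ 6.4 x 4.8 mm.
for name, w, h, mp in [("2 MP, 2/3-in.", 8.8, 6.6, 2.0),
                       ("10 MP, 1/2-in.", 6.4, 4.8, 10.0)]:
    p = pixel_pitch_um(w, h, mp)
    print(f"{name}: pitch ≈ {p:.1f} µm, Nyquist ≈ {nyquist_lp_per_mm(p):.0f} lp/mm")
```

The second case reproduces the article's point: at 10 MP on a 1/2-in. sensor, the pitch drops below 2 µm and the lens must deliver several hundred lp/mm to stay pixel limited.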

“A CMOS sensor with 10 MP can cost less than $1,000, but the lens can also cost that much if you want to be pixel limited across the entire sensor,” explains James. “At that point, hopefully the designer is more interested in the number of pixels they can put on a specific feature, rather than achieving the full resolving power of the sensor because you’re starting to push the boundaries of physics. If you have an unlimited budget, it’s possible. But most machine vision systems need to use off-the-shelf optics to keep costs down.”

The Good and Bad of Vignetting and Roll-Off
When it comes to machine vision optics, the size of the pixel and overall sensor aren’t the only important dimensions. There is also the angle of incident light at the sensor to consider. Vignetting refers to a common condition where pixels at the edge of a sensor are darker than pixels at the center of a sensor. This has nothing to do with the response of one pixel to another and everything to do with the increased likelihood that a photon will bounce off a surface as the incident angle increases. Just as stones “skip” across the water when thrown nearly parallel to the water’s surface, photons will bounce off the surface of a pixel (or covering lenslet) as the “off axis” incident angle increases.
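
For an ideal lens, this geometric falloff follows the well-known cos⁴ law; real lenses add mechanical and pixel-level vignetting on top of it. A minimal sketch (illustrative only – the article does not quantify the effect):

```python
import math

def relative_illumination(off_axis_deg):
    """Natural (cos^4) falloff of image illumination with chief-ray angle.
    Ideal-lens approximation; mechanical and pixel vignetting add to this."""
    return math.cos(math.radians(off_axis_deg)) ** 4

for angle in (0, 10, 20, 30):
    ri = relative_illumination(angle)
    print(f"{angle:2d} deg off axis -> {ri:.0%} of center brightness")
```

At 30 degrees off axis, the natural falloff alone leaves barely half the center brightness – exactly the kind of edge darkening a linear sensor makes obvious and the eye does not.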

These graphs show the quantum efficiency (QE) of pixels with lenslets as the incident angle of the incoming illumination deviates from normal incidence (zero degrees off axis). The top graph (1A) shows the pixel QE based on angular changes in both the horizontal and vertical axis, while the bottom graph (1B) shows the drop-off in QE for each of the three primary colors used in a color camera.

“Most of the camera optics exhibit some degree of vignetting,” explains Steve Gum, Senior Optical Engineer at Navitar (Rochester, New York). “And many designers will use vignetting to their advantage to reduce other optical aberrations. The human eye will look through a lens and not notice vignetting because it works on a logarithmic scale. But a sensor works on a linear scale, and the decrease in brightness and contrast at the edges of a sensor can be very noticeable and problematic to a machine vision system that doesn’t have the built-in correction capabilities of the human eye.

“A double telecentric lens, one that is telecentric in both object and image space so chief rays travel parallel to the optical axis on both sides, can solve this problem, and that’s what we recommend to most of our machine vision customers,” Gum continues. “It can be more expensive and slower too, but sometimes it's a necessity.”

Optical manufacturers can design custom lenses with larger “sweet spots” that reduce the refraction of light at the edges of the sensor, but this adds cost to the optics. “Kodak was angling their lenslets at the edge of their sensors to better collect light coming in off axis,” Gum adds. “And sensors used to come with an Exit Pupil Location specification that allowed you to calculate the acceptable angle for a given sensor, but somewhere along the way that specification was dropped.”

Coatings, Color, and Line Scan
Anti-reflective coatings applied to each element in a lens design also contribute to overall light loss. As sensors grow in size while individual pixel sizes decrease, there’s more demand on lens manufacturers to develop optics that can achieve pixel-limited operation across larger areas. This typically requires a lens with 8 or 10 elements rather than the 4 or 5 elements typical of more common, lower-resolution machine vision lenses.

“If you have 4 or 5 elements in an optical train, then reflecting 1-2% of light off each element isn’t a huge deal,” explains Edmund Optics’ James. “But if you have 9 or 10 elements, 20% loss is pretty significant. So at higher resolutions, coatings can play a big role in throughput since the more light you lose, the longer the exposure time needed for the camera to collect sufficient light for the machine vision algorithms to do their job effectively.”
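
The arithmetic behind James’ point is simple compound loss. A small sketch, treating each element as a single lumped loss as the quote does (in reality each element has two air-glass surfaces):

```python
def throughput(n_elements, loss_per_element=0.02):
    """Cumulative transmission when each element reflects away a fixed
    fraction of the light. 2% per element is the figure from the quote."""
    return (1.0 - loss_per_element) ** n_elements

for n in (4, 5, 9, 10):
    t = throughput(n)
    print(f"{n} elements at 2% loss each: {t:.0%} transmitted, {1 - t:.0%} lost")
```

Ten elements at 2% each lose roughly 18% of the light, which matches the quote's "20% loss" ballpark and shows why coating quality matters more as element counts climb.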

Chromatic aberration can be another design problem for high-resolution color machine vision systems. Different wavelengths of light refract at different angles when passing through optics. The result can be blurring or false images inside the overall image. The flat-panel industry is one application where high-resolution color imaging is critical. Luckily, flat panels are inspected in controlled environments, allowing the vision designer to use monochrome or laser light, which precludes the possibility of spatial shifts caused by different wavelengths of light passing through the same optic.

Finally, line-scan cameras continue to gain market share in web, printing, and flat-panel inspections because they offer a cost-effective way to image very large areas – sometimes meters in width – while still maintaining high spatial resolution to find very small defects. To the optical engineer, a 60-mm line scan sensor is much the same as an area sensor 60 mm across, with one caveat: be careful of tilt issues caused by misalignment between sensor and optic. “It can be tricky to maintain micron-level consistency between an optic and a line scan sensor that’s 90 mm long,” explains James. “But if you don’t, you can expect resolution and contrast issues.”
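
The sensitivity to tilt can be estimated with simple geometry: a tilt of angle θ displaces the ends of a line sensor of length L by (L/2)·sin θ from the focal plane, which can then be compared against the lens's depth of focus. A sketch with assumed, illustrative numbers (none are from the article):

```python
import math

def edge_defocus_um(sensor_len_mm, tilt_deg):
    """Axial displacement of a line sensor's ends caused by a small tilt."""
    return (sensor_len_mm / 2.0) * math.sin(math.radians(tilt_deg)) * 1000.0

def depth_of_focus_um(f_number, pixel_pitch_um):
    """One-sided depth of focus, using pixel pitch as the circle of confusion."""
    return f_number * pixel_pitch_um

# A tiny 0.05-degree tilt on a 60 mm line sensor, f/4 lens, 7 µm pixels:
d = edge_defocus_um(60.0, 0.05)
tol = depth_of_focus_um(4.0, 7.0)
print(f"edge defocus ≈ {d:.0f} µm vs. ±{tol:.0f} µm depth of focus")
```

Even a twentieth of a degree of tilt pushes the sensor ends close to the focus tolerance, which is why micron-level mechanical alignment matters for long line-scan sensors.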

When it comes to the newest line scan cameras that use two sensor arrays to capture both visible and infrared light, James says the issues remain the same. “Just make sure you have an optical material that is optimized for both wavelengths. Often you’ll see optics that are optimized for one color or the other instead of both visible and infrared.”

All About the Benjamins
Like the cars their systems help build, machine vision customers can order an optic with every premium feature – perfect color response across a large spectrum, high MTF, and a large image area – but only if they’re willing to pay for it. And cost isn’t the only consideration. Many machine vision designers make the mistake of leaving optical selection until the end of the design. Custom lenses take time to design and manufacture. So if a system requires a special optic, an asphere, or a new glass material, or you’re considering a move into short-wave infrared: Don't wait until the last minute to engage your optical engineer!

“If you have enough resources, you can design an optic to solve most problems, but the majority of machine vision applications need to work within budgetary constraints,” James concludes. “Customers have to weigh a number of different things, including overall system performance requirements and how lights, cameras, optics, depth of field, footprint, and many other factors will work together. The best solution is to select all the optical train elements together so you can make the best decision at the best price and avoid any last-minute surprises.”