Using Optics to Optimize Your Machine Vision Application
POSTED 11/23/2016 | By: Kaitlin Bowes, Marketing Programs Specialist, Americas
Introduction
The lens is responsible for creating sufficient image quality to enable the vision system to extract the desired information about the object from the image. The lens is critical to machine vision performance because information that is not captured by the lens cannot be re-created in software. In a typical application, the lens is required to locate features within the field of view (FOV), ensure the features are in focus, maximize contrast and avoid perspective distortion. What may be adequate image quality for one application may be insufficient for another. This white paper will explain the fundamentals of using optics to optimize a machine vision application.
Basics of machine vision optics
The object area imaged by the lens is called the field of view (FOV). The FOV should cover all features that are to be inspected with tolerance for alignment errors. Features within the FOV must be large enough to be measured. In alignment and gauging applications, the lens is also responsible for presenting the image in a fixed geometry that is calibrated to the object’s position in space. The working distance (WD) is the distance from the front of the lens to the object being imaged.
The depth of field (DOF) is the maximum object depth that can be maintained entirely in focus. The DOF also determines the amount of variation in the working distance that can be allowed while still achieving an acceptable focus.
The sensor size is the size of a camera sensor’s active area, typically specified in the horizontal dimension. The primary magnification (PMAG) is the ratio between the sensor size and the field of view. With primary magnification held constant, reducing the sensor size reduces the field of view and increasing the sensor size increases the field of view. If the sensor is large enough, it will exceed the image circle created by the lens, leaving dark, unexposed corners in the image, an effect known as vignetting.
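Written as a formula (the numbers below are illustrative, not taken from the figures):

$$\mathrm{PMAG} = \frac{\text{sensor size}}{\mathrm{FOV}} \quad\Longleftrightarrow\quad \mathrm{FOV} = \frac{\text{sensor size}}{\mathrm{PMAG}}$$

For example, a sensor with an 8.8 mm horizontal active area imaging a 100 mm wide FOV is operating at a primary magnification of 8.8/100 = 0.088x.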
Resolution
Resolution is a measurement of the vision system’s ability to reproduce object detail. Figure 2 (a) shows two small objects separated by a finite distance. As they are imaged through the lens onto the sensor, they are so close together that they fall on adjacent pixels; if we zoomed in, we would see a single object two pixels wide, because the sensor cannot resolve the gap between them. In Figure 2 (b), on the other hand, the separation between the objects has been increased to the point that there is a pixel of separation between them in the image. This pattern (a pixel on, a pixel off, a pixel on) is referred to as a line pair and is used to define the pixel-limited resolution of the system.
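Since one cycle of a line pair spans at least two pixels, the pixel-limited resolution in object space follows directly (a standard back-of-the-envelope relation; the numbers here are illustrative, not from the paper):

$$R_{\text{object}} = \frac{2 \times s_{\text{pixel}}}{\mathrm{PMAG}}$$

so a camera with 5 µm pixels at 0.1x primary magnification cannot resolve object features spaced closer than 2 × 5 µm / 0.1 = 100 µm, no matter how good the lens is.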
Figure 3 shows a spark plug imaged on two sensors with different resolutions. Each cell in the grid overlaid on the image represents one pixel. The resolution of the image on the left, taken with a 0.5 megapixel sensor, is not sufficient to distinguish characteristics such as spacing, scratches or bends in the features of interest. The image on the right, taken with a 2.5 megapixel sensor, provides the ability to discern details in the features of interest. In this case, simply swapping sensors provides a considerable improvement in resolution.
But as we move to higher-resolution sensors, we need to ensure that the optics are able to reproduce the details that we need to image. Test targets can be used to determine the limiting resolution of a system and how well the sensor and optics complement each other. The USAF 1951 target shown in Figure 4 has both horizontal and vertical lines, so it can be used to test both horizontal and vertical resolution. Measurements are made in the frequency domain, where spatial frequency is usually measured in line pairs per millimeter (LP/mm). The lines are arranged so that their spatial frequency increases moving in a spiral toward the center of the target.
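As a quick sanity check when matching a lens to a sensor, the sensor’s own Nyquist limit in LP/mm can be computed from the pixel size. The sketch below is a minimal illustration; the 3.45 µm pixel size is an assumed example, not a value from this paper:

```python
def sensor_limit_lp_per_mm(pixel_size_um):
    """Nyquist-limited resolution of a sensor in line pairs per millimeter.

    One line pair needs at least two pixels (one on, one off), so the
    limit is 1 / (2 * pixel size), with pixel size given in micrometers.
    """
    return 1000.0 / (2.0 * pixel_size_um)

# A sensor with 3.45 um pixels tops out near 145 LP/mm at the sensor plane;
# multiply by PMAG to convert to the equivalent object-space frequency.
print(sensor_limit_lp_per_mm(3.45))  # ~144.9
```

A lens whose contrast has already collapsed at that frequency will waste the sensor’s resolution, which is exactly the mismatch that targets like the USAF 1951 reveal.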
Contrast
Contrast is the separation in intensity between blacks and whites in an image. The greater the difference between a black and a white line, the better the contrast. Figure 5 shows two images of a UPS label taken with the same high-resolution sensor at the same position and focal length, but with different lenses. The lens used for the image on the right delivers higher contrast because it is a better match for the high-resolution sensor.
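Contrast is commonly quantified with the Michelson formula, (Imax − Imin)/(Imax + Imin). Here is a minimal sketch of how one might measure it over a region of interest; the sample values are made up for illustration:

```python
import numpy as np

def michelson_contrast_percent(roi):
    """Percent contrast of an image region: (Imax - Imin) / (Imax + Imin).

    roi: 2-D numpy array of pixel intensities, e.g. a crop spanning one
    black/white line pair on a test target.
    """
    i_max, i_min = float(roi.max()), float(roi.min())
    return 100.0 * (i_max - i_min) / (i_max + i_min)

# A perfectly resolved black (0) / white (255) line pair measures 100%;
# a blurred pair whose values only swing between 80 and 170 measures 36%.
print(michelson_contrast_percent(np.array([[80, 170], [80, 170]])))
```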
Color filtering can be used to increase contrast. Figure 6 shows a machine vision application designed to distinguish between red and green gel capsules. The unfiltered image on the left shows only a subtle difference in contrast between the two colors of capsules. A sensor could distinguish between them in this image; however, variations in lighting or in the ambient environment could generate false positives or false negatives. Adding either a red or a green filter increases the contrast to the point that the vision solution becomes much more robust.
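A physical filter in front of the lens is the robust way to do this, since it acts before the light reaches the sensor, but the effect can be approximated in software by isolating one color channel. A rough OpenCV sketch, with a hypothetical file name and threshold:

```python
import cv2

# Keeping only the red channel mimics a red filter: red capsules stay
# bright while green capsules go dark, so a simple threshold separates them.
img = cv2.imread("capsules.png")        # OpenCV loads images in BGR order
blue, green, red = cv2.split(img)

_, mask = cv2.threshold(red, 128, 255, cv2.THRESH_BINARY)
cv2.imwrite("red_capsule_mask.png", mask)
```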
Diffraction
In the real world, diffraction, sometimes called lens blur, reduces the contrast at high spatial frequencies, setting a lower limit on image spot size. The differences between ideal and real lens behavior are called aberrations. Figure 7 shows how these effects degrade the quality of the image. The object at the top of Figure 7 has a relatively low spatial frequency, while the object at the bottom has a higher spatial frequency. After passing through the lens, the upper image retains 90% contrast, while the bottom image retains only 20% due to its higher spatial frequency. Lens designers choose the geometry of the lens to keep aberrations within acceptable limits; however, it is impossible to design a lens that works perfectly under all possible conditions, so lenses are generally designed to operate under a specific set of conditions, such as field of view, wavelengths, etc.
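The scale of these diffraction effects follows from standard optics (these are textbook approximations, not values from the paper’s figures): a diffraction-limited lens focuses a point to an Airy disk of diameter

$$d_{\text{Airy}} \approx 2.44 \, \lambda \, (f/\#)$$

and passes no contrast at all above the cutoff frequency

$$\nu_{\text{cutoff}} = \frac{1}{\lambda \, (f/\#)}.$$

At λ = 0.55 µm (green light) and f/2.8, for example, the spot is about 3.8 µm across and the cutoff is roughly 650 LP/mm.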
Now let’s look at lens performance across an entire field of view. The three images enclosed in different colors in Figure 8 are close-up views of the boxes shown in the same colors on the larger image. The chart at the bottom of Figure 9 shows the modulation transfer function (MTF) of the lens at each position in the field of view. MTF is a measurement of an optical system’s ability to transfer detail from the object to the image, as shown by the degree of contrast in the image; it expresses the percentage of the object’s contrast that is preserved in the image as a function of spatial frequency in units of LP/mm. The lens in Figure 9 has 59% average contrast in the center section, 56% in the bottom middle and 62% in the corner. The image demonstrates the importance of checking the MTF of a lens over the entire area that will be used in the application.
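Formally, the MTF at a spatial frequency ν is the ratio of the contrast in the image to the contrast in the object:

$$\mathrm{MTF}(\nu) = \frac{C_{\text{image}}(\nu)}{C_{\text{object}}(\nu)}, \qquad C = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}$$

so the 59% figure above means the lens delivers 59% of the object’s contrast to the sensor at the test frequency.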
Figure 10 shows the same test applied to a different lens with the same focal length and same field of view using the same image sensor. In this case the contrast is reduced to 47% in the center, 42% in the bottom middle and 37% in the corner. The third lens, shown in Figure 11, is different in that the performance is good in the center of the image at 52% contrast, drops off to 36% in the corner, and drops even further to 22% in the bottom middle. It’s important to note that all three of these lenses have the same FOV, depth of field, resolution and primary magnification. The differences in their performance show how dramatic an impact the lens can have on the ability of the sensor to discern the details that are important in the application.
Depth of field
Depth of field is the difference between the closest and furthest working distances at which an object may be viewed before an unacceptable blur is observed. The F-stop number (F/#), also called the aperture or iris setting of the lens, helps determine the depth of field. The F/# is the focal length of the lens divided by the diameter of the entrance pupil (the effective aperture). For most lenses, the F/# is specified with the lens focused at infinity. As the F/# is increased, that is, as the aperture is made smaller, the lens collects less light and the diffraction-limited resolution of the lens degrades. At the same time, making the aperture smaller increases depth of field, as shown in Figure 12. The purple lines show the depth of field and the red lines indicate the maximum allowable blur. Increasing the allowable blur also increases the depth of field. The best focused position within the depth of field is indicated by the green line, which lies close to the end of the depth of field nearest the lens.
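A common first-order estimate ties these quantities together as DOF ≈ 2 · (f/#) · c · (PMAG + 1) / PMAG², where c is the allowable blur (circle of confusion) on the sensor. The helper below is a hypothetical illustration of that approximation, not a substitute for the lens maker’s data:

```python
def depth_of_field_mm(f_number, blur_circle_mm, pmag):
    """First-order total depth of field in mm.

    Thin-lens estimate: DOF ~ 2 * N * c * (m + 1) / m**2, where N is the
    f-number, c the maximum allowable blur on the sensor (circle of
    confusion) and m the primary magnification. Real lenses deviate from
    this, and stopping down too far trades DOF for diffraction blur.
    """
    return 2.0 * f_number * blur_circle_mm * (pmag + 1.0) / pmag**2

# Example: f/8, 10 um allowable blur, 0.1x magnification -> ~17.6 mm DOF.
print(depth_of_field_mm(8, 0.010, 0.1))
```

The formula makes the tradeoffs in the figure explicit: raising the F/# or loosening the allowable blur grows the DOF, while higher magnification shrinks it rapidly.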
Figure 13 shows a depth of field target with a set of lines on a sloping base. A ruler on the target makes it simple to determine how far above and below the best focus the lens is able to resolve the image.
Figure 14 shows the performance of a short fixed focal length lens of the type used in machine vision applications. With the aperture completely open, looking far up the target in the area defined by the red box, beyond the depth of field range, we see a considerable amount of blur. With the aperture half open, the resolution at this depth of field position increases: the lines are crisper and clearer and the numbers are now legible. But when we continue to close the iris to the point where very little light is coming in, diffraction reduces the overall resolution and the numbers and lines both become less clear.
Figure 15 shows the same lens, this time at the best focus position. With the iris completely open, we see the image and numbers clearly. With the iris half open, the image has become blurred. The resolution degrades even further with the iris mostly closed.
Figures 16 and 17 show another lens with a different focal length that is designed specifically for machine vision applications. With the iris completely open, the lines are gray rather than black and white and the numbers are somewhat legible but highly blurred. With the iris halfway closed, the lines come into sharper focus and the numbers are crisper. With the iris mostly closed, the resolution improves even more in the area of interest and the image is sharp throughout the range of working distances shown.
Distortion
Figure 18 shows an example of distortion, an optical error (aberration) that results in a difference in magnification at different points within the image. The black dots show the positions of points on the object as seen through the lens, while the red dots show the objects’ actual positions. Distortion can sometimes be corrected by the vision system, which calculates where each pixel is supposed to be and moves it to the correct position.
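In OpenCV, for example, this correction is a one-line remap once the lens has been calibrated. The matrix and coefficient values below are placeholders; in practice they come from a calibration routine such as cv2.calibrateCamera():

```python
import cv2
import numpy as np

# Placeholder intrinsics: focal lengths and principal point in pixels.
camera_matrix = np.array([[1200.0,    0.0, 640.0],
                          [   0.0, 1200.0, 480.0],
                          [   0.0,    0.0,   1.0]])
# Placeholder radial (k1, k2, k3) and tangential (p1, p2) coefficients.
dist_coeffs = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])

img = cv2.imread("distorted.png")     # hypothetical input image
corrected = cv2.undistort(img, camera_matrix, dist_coeffs)
cv2.imwrite("corrected.png", corrected)
```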
As shown in Figure 19, perspective distortion is caused by the fact that the further an object is from the camera, the smaller it appears through the lens. It is particularly important in gauging and other high-precision applications. Perspective distortion can be minimized by keeping the camera perpendicular to the field of view.
Perspective distortion can also be minimized optically with a telecentric lens as shown in Figure 20. The image on the right shows the objects – four pins mounted perpendicular to a base. The image captured by the conventional lens suffers from perspective distortion. The telecentric lens, on the other hand, maintains magnification over the depth of field so it reduces or eliminates perspective distortion.
Figure 21 shows perspective distortion in a real world scenario. The object in the top center appears through two different lenses in the left and right lower images. Using a conventional fixed focal length lens produces the image on the lower left. The two parts appear to be different heights on the monitor even though they are exactly the same height in real life. This is the same way our eyes see the objects although our brain automatically corrects for perspective distortion and we perceive the objects as being of equal height. In the image on the lower right, the telecentric lens has corrected for perspective distortion and the objects can be measured accurately.
Conclusion
Optics are critical to the overall success of a machine vision application. The examples shown here demonstrate the importance of considering the overall system, including the optics, lighting and vision system, as opposed to simply picking out components. When you discuss the application with suppliers, be sure to explain the goals of the inspection completely rather than just asking for specific components, so that the supplier can contribute to the success of the application. Finally, expect a lot from your optical and vision system suppliers, and find trusted partners that are committed to the success of your application and willing to put in the effort needed to make it happen.