
Machine Vision Considerations for Metrology Applications

POSTED 06/12/2000 | By: Nello Zuech, Contributing Editor

No matter what instrument is used to measure a parameter, there are two critical factors: accuracy and repeatability.  A basic rule of thumb is that the measuring instrument should be at least ten times better than the process specification it is to measure.  In other words, it should be at least ten times as repeatable and accurate as the process.

All measuring instruments have a scale made up of a number of "ticks" or markings along the scale.  In the case of machine vision, the distance between "ticks" is the size of the pixel (subpixel) or, alternatively, the distance between pixels (subpixels).  In machine vision a "tick" corresponds to resolution and may, but does not necessarily, correspond to the sensitivity of the machine vision system - the smallest change in the measured quantity that the system is capable of detecting.  In machine vision this corresponds to the pixel (subpixel) increment or pixel (subpixel) resolution.

When using machine vision to gauge a part, one is faced with the dilemma that the edge of the part feature generally does not fall precisely on a pixel or precisely between two pixels.  The effect of an edge is generally felt over several neighboring pixels.  One cannot distinguish between two edges that fall on the same pixel. Typically the value of gray shade that is encoded represents the average value across a pixel.

An edge can be characterized by four properties:

  1. Contrast - cumulative intensity change across the line characterized as an edge
  2. Width (fuzziness) - size of interval across the profile in which the bulk of the intensity change occurs
  3. Steepness - surface slope within this interval
  4. Direction - angle of vector perpendicular to edge pixel

Because the edge of an object typically covers several contiguous pixels with a specific gray-shade profile (consider the gray value as a third-dimensional property at each spatial data point), and because that profile is known from the specific application (the actual shape of the edge profile across the pixel set), one can use any number of mathematical or statistical schemes to infer the location of the edge point, or to establish the location of the edge to within some fraction of the distance effectively subtended by a pixel in object space.  Viewing the gray-scale profile as a curve, for example, one could calculate the zero crossing of its second derivative - the specific point where the rate of change peaks - and define that as the edge location.

Different machine vision algorithms take advantage of the various properties of an edge to calculate to within a pixel (subpixel) the position of an edge.  Significantly, different algorithms do yield different results in terms of the size of the subpixel increment.
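
A common family of subpixel techniques operates on the gray-scale gradient near the edge.  As an illustrative sketch (not any particular vendor's algorithm), the following Python fragment fits a parabola through the peak of the discrete first derivative of a one-dimensional profile and takes the parabola's vertex as the edge position:

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1-D gray-scale profile to subpixel precision.

    Takes the discrete first derivative, finds its strongest response,
    and fits a parabola through that peak and its two neighbors; the
    parabola's vertex gives the subpixel edge position.
    """
    grad = np.diff(profile.astype(float))      # first derivative
    k = int(np.argmax(np.abs(grad)))           # pixel of strongest change
    if k == 0 or k == len(grad) - 1:
        return k + 0.5                         # peak at boundary: no fit possible
    y0, y1, y2 = abs(grad[k - 1]), abs(grad[k]), abs(grad[k + 1])
    denom = y0 - 2 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return k + 0.5 + offset                    # grad[k] lies between pixels k and k+1

# A blurred edge whose intensity change is spread over several pixels:
profile = np.array([10, 10, 10, 30, 80, 95, 100, 100, 100])
print(round(subpixel_edge(profile), 3))        # → 3.462
```

In this example the bulk of the intensity change falls between pixels 3 and 4, and the parabolic fit nudges the estimate toward the side with the larger neighboring gradient.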

Accuracy and Repeatability
Accuracy is dictated by the calibration procedure.  In machine vision, as in most digital systems, the "calibration" knob can be changed one "tick" (one pixel or subpixel distance) at a time.  Each "tick" represents a discrete value change in the system's output, the discrete value being a physical dimensional increment.

The procedure to determine system accuracy requires the operator to place a "standard" in the correct location - a referenced position established during the initial calibration procedure.  With a machine vision system, the operator adjusts the calibration until the measured value is as close as possible to the "standard's" value.  This then determines the pixel (subpixel) dimension.

Most metrologists prefer to have at least ten "ticks" (pixel or subpixel units) across the tolerance range.  With ten "ticks", the most the system can be miscalibrated from the standard's value is one-half a "tick" at each measurement point on the standard.  Therefore, the most the norm of the readings (accuracy) can differ from the true (standard) value is one twentieth of the total span of the specification.

For example, consider a nominal dimension of 0.1" with a tolerance of ±0.005" (a total tolerance range of 0.01").  Each "tick" (pixel or subpixel distance) of the calibration knob should therefore have a value of one tenth of 0.01", or 0.001".  One half of each step is thus equal to 0.0005".  In other words, the accuracy of the machine vision system should be equal to or better than 0.0005".

Since the rule of thumb for repeatability is the same as that for accuracy, the system requirement for repeatability is the same, i.e., the repeatability should be equal to the dimension of a "tick" - 0.001" in this application, where the tolerance on the part is ±0.005".
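
The rule-of-thumb arithmetic above can be sketched in a few lines of Python (the function name and structure are mine, for illustration):

```python
def gauge_requirements(tolerance_band, ticks=10):
    """Apply the 10:1 rule of thumb to a total tolerance band.

    The "tick" (pixel or subpixel value) is 1/10 of the band, accuracy
    must be within half a tick, and repeatability should equal one tick.
    Returns (tick, accuracy_requirement, repeatability_requirement).
    """
    tick = tolerance_band / ticks
    return tick, tick / 2, tick

# Nominal 0.1" with a ±0.005" tolerance -> total band of 0.010"
tick, accuracy, repeatability = gauge_requirements(0.010)
print(tick, accuracy, repeatability)   # 0.001" tick, 0.0005" accuracy, 0.001" repeatability
```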

While accuracy may not be as critical in a given application, because it can be corrected by calibration, repeatability is more critical since it cannot be corrected by calibration or otherwise.  The above analysis is considered conservative by many.  Consequently, some suggest relaxing the repeatability requirement from 10:1 to 5:1.  In the application example given, an acceptable system repeatability would then be 0.002".  This should be at least the one-sigma repeatability of the machine vision system in the application.

In some cases the rule of thumb used is that the sum of accuracy and repeatability should be less than one third of the tolerance band.  No matter which "rules" are observed, the accuracy or repeatability of the measuring instrument should never equal the tolerance of the dimension being measured and, in fact, must be appreciably smaller!

Machine vision with subpixel capability can often satisfy such "rules" in many metrology applications.  In some cases the performance approaches the practical limit of machine vision in an industrial setting, regardless of the resolution of a system or the theoretical pixel size (field-of-view divided by the number of pixels in the horizontal/vertical direction).

In the above example application, where the part dimension to be measured is 0.1", and given that the full field-of-view of the camera/machine vision system is applied across this dimension, the theoretical subpixel resolution could be 0.1"/1500 (based on a 500 x 500 area-camera-based machine vision system and a subpixel capability of 1/3 the pixel resolution), or 0.000066" - well within the required 0.0005".
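
That theoretical figure reduces to a single division; as a sketch:

```python
def theoretical_increment(field_of_view, pixels, subpixel_factor):
    """Finest theoretical measurement increment across the field of view."""
    return field_of_view / (pixels * subpixel_factor)

# 0.1" field of view, 500 pixels across it, subpixelling to 1/3 of a pixel
r = theoretical_increment(0.1, 500, 3)
print(f'{r:.6f}"')   # 0.000067", comfortably inside the 0.0005" requirement
```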

However, more pixels are not necessarily better.  There are other potential physical limits.  For example, resolutions below 0.0001" may be limited by the diffraction limit of the optics.

Diffraction is fundamental to the propagation of electromagnetic radiation and results in a characteristic spreading of energy beyond the bounds predicted by a simple geometric model.  It is most apparent at the microscopic level when the size of the optical distribution is on the order of the wavelength of light.

For imaging, this means the light collected from a single point in the object is focused to a finite, rather than infinitesimal, spot in the image.  If the radiation from the object point fills the lens aperture uniformly, and the lens aperture is circular and unobstructed, the resultant spot distribution will appear as a central disk surrounded by concentric rings.  According to the wave theory of light, the central disk contains 83.8% of the total energy in the spot distribution.

The most immediate image effect is that adjacent points blur together and, therefore, are unresolved one from the other.  In a machine vision application, the points referred to are the subpixel "ticks".  The diffraction limit is defined by the Rayleigh criterion as:
                        R = 1.22λN
where
                        N = f-number of the lens
                        λ = wavelength of light
For example, with
                        N = f/2
                        λ = 500 nm (approximate average of white light)
the limit is
                        R = 1.22 microns, or 0.000048"

What this suggests is that while games can be played by using blue light and an f/0.8 lens, for example, the theoretical limit is on the order of 0.00002".
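
The Rayleigh arithmetic is easy to check in code; here is a small calculator in Python (the 450 nm "blue" wavelength below is my illustrative choice):

```python
def rayleigh_limit_inches(wavelength_m, f_number):
    """Rayleigh diffraction limit R = 1.22 * lambda * N, in inches."""
    return 1.22 * wavelength_m * f_number / 0.0254   # 1 inch = 0.0254 m

# The article's example: ~500 nm white light at f/2
print(f'{rayleigh_limit_inches(500e-9, 2):.6f}"')    # 0.000048"

# "Playing games": blue light (~450 nm) with an f/0.8 lens
print(f'{rayleigh_limit_inches(450e-9, 0.8):.6f}"')  # ~0.000017", on the order of 0.00002"
```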

This limit, however, is further degraded by application conditions and variables such as: light-level and spectral changes, optical distortions and aberrations, camera sensitivity non-uniformity, vision-algorithm interpretation variations, temperature, vibration, etc.  To say nothing of part appearance and presentation variables.  The result is that in any given machine vision application the practical limit is on the order of 0.00008 - 0.0001".

This is analogous to having a ruler with a scale in 0.0001" increments.  The measurement sensitivity is half this (0.00005") - i.e., the scale is read to one or the other of two neighboring hash marks or "ticks".  Note also that when a dimensional check is made between two edges on a part, this measurement sensitivity applies to the determination of the position of one of the two measurement points.  There is a similar sensitivity in the determination of the other position.  This, too, contributes to repeatability errors.
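
The two-endpoint effect can be made concrete with a short simulation (the tick size matches the analogy above; the edge positions are invented for illustration).  Quantizing each endpoint reading to the nearest tick can cost up to half a tick at each end, and in the worst case the two errors oppose, so the length reading is off by nearly a full tick:

```python
def quantize(x, tick=0.0001):
    """Simulate reading a scale to the nearest tick mark."""
    return round(x / tick) * tick

tick = 0.0001
# Worst case: the two endpoint readings round in opposite directions.
left, right = 0.020051, 0.120049                 # "true" edge positions, inches
measured = quantize(right, tick) - quantize(left, tick)
error = abs(measured - (right - left))
print(f"length error = {error:.6f} inch")        # close to one full tick
```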

In metrology applications of machine vision it is not unusual to deal in dimensions that have tolerances that require .0001" repeatability and accuracy. These are very demanding and, therefore, require attention to detail.

The ideal lighting is a back light arrangement using collimated lighting to provide the highest contrast with the sharpest defined edges.  This will only work if the features to be measured can be observed in the silhouette of the object.  Another consideration may be that the ideal light should be one with blue spectral output.  A xenon strobe has a blue output, given that the rest of the visible spectrum and the IR are filtered out.  A strobe offers the additional advantage of reducing the effects of motion and vibration on image smear.

Using a strobe has the advantage of high efficiency and an ability to accurately control the timing of the light pulse, either to the camera or to the moving object.  Strobes reduce the effect of smear to the duration of the strobe pulse.  However, ambient light must be controlled to avoid "washing out" the image produced by the strobe.  A camera with an electronic shutter will minimize this effect.

The alternative, a top-lighted arrangement, may not result in measuring the absolute dimensions because of radii artifacts, fuzzy edges, etc.  Measurement with a top-lighted arrangement - again, ideally collimated - should likewise use blue light to optimize the result.

Another issue in top lighting is that the lighting should be as uniform as possible.  A ring of light is one possibility.  This can be accomplished with an arrangement of fiber-optic light pipes or light-emitting diodes (LEDs).  These arrangements are commercially available.  The light pipe can be connected to a strobe light, and LEDs can themselves be strobed.

The collecting optics should be a telecentric design to obtain the least edge distortion and to be tolerant of potential magnification changes due to positional and vibration variations.

In some applications microscopic optics can be employed; that is, optics that magnify the image, since the imager size is on the order of 8 millimeters and the part is smaller.  Notwithstanding the magnification, the key issue associated with the optics is their resolution.  In the case of optics, resolution refers to the ability to distinguish the distance between two objects.  In measurements this is analogous to the ability to distinguish between two "ticks".

Under idealized laboratory conditions, a microscope can be designed which has a resolution to 0.000020".  In a practical industrial environment, however, one is unlikely to get better than .00005 - .0001" resolution.

The presentation of the part and the fixturing of the cameras should be such that the part is always presented square, or parallel, to the image plane of the camera.  This will avoid pixel stretching due to perspective and keystoning effects.  Note that other properties of the optics can also affect resolution, especially off-axis resolution: distortion, astigmatism, field curvature, vignetting, etc.  In some cases these will become a substantial percentage of error compared to the pixel size and, consequently, must be compensated for by the machine vision system.  Alternatively, better optics may be required.

The imager that is used in the camera should have a resolution, in terms of the number of photosites that it incorporates, on the order of at least 500 x 500. In the case of the imager/camera, resolution in machine vision is often equated to the number of photosites in the imager.  The camera should have as high a response to the blue spectrum of the illumination as possible to yield an image with as high a contrast as possible.

In a back-lighted arrangement this is less critical but may still be an issue.  Different image-sensor designs have different spectral responses.  A CID sensor generally has a higher sensitivity than interline-transfer CCD sensors, although a frame-transfer CCD sensor may be comparable because its higher fill factor results in a higher effective quantum efficiency.

A preferred camera would be one that has square pixels, so that horizontal and vertical values are the same.  While non-square pixels can be corrected for in software, this requires additional compute power and, in turn, more time.

The camera itself should have an asynchronous scanning capability especially if a strobe operation is anticipated.  An exposure control feature would also be useful to further reduce the effects of background illumination.

A camera with an asynchronous scanning capability also has advantages in synchronizing the event to the camera.  Synchronizing the camera to the event ensures that the object will always be physically in the same location in the image.  This may minimize the need for translation correction, an algorithm that adds processing time and will, therefore, slow down the machine vision system's performance.

Issues that affect resolution in cameras include: lighting, electronic noise, mechanical noise, optics, and aliasing.

The A-D/frame buffer should be compatible with the number of photosites in the imager.  There should also be a capability to synchronize the camera from the frame buffer in order to eliminate the effects of pixel jitter - a practice common in commercially available machine vision systems that make such issues transparent to the user.

Pixel jitter can result in an error in repeatably mapping a pixel to object space.  This is more of a factor with higher-resolution cameras, such as those that should be used in metrology applications, where jitter can amount to a one-pixel error.

Vision Computer
The vision computer should have the capacity to perform subpixel algorithmic analysis.  Significantly, there are many different approaches to subpixel processing, and they yield results that differ in robustness.  While many companies purport an ability to subpixel, in effect, to 1/64th of a pixel, it is commonly agreed that the best one can generally achieve in an industrial application is on the order of one part in ten or one part in twenty of a pixel.

Subpixelling approaches take advantage of the fact that the edge of an object will actually fall across several pixels, resulting, in effect, in a gray-scale curve.  Any number of different mathematical approaches operating on that curve will yield an ability to interpolate the position of the edge to within a pixel.  Essentially, a subpixel represents the finest distance between two "ticks"; it relates to the vision system's ability to make measurements finer than a pixel, not to detect attributes that are smaller than a pixel.

The vision computer should have the capability to compensate for modest translation (0.005") and rotation (5-degree) errors.  It should also have the ability to operate on edges using subpixel processing techniques (to at least 1:10).  Ideally it should be able to trace an edge based on a best fit of the pixel data and make measurements between two such lines.  All this processing, and a measurement decision, must be done at production rates.

Significantly, a vision computer with higher processing speeds (3,000 per minute or greater) may have an advantage in that it can average a number of pictures taken of a single part.  Such averaging may improve the signal-to-noise ratio and, therefore, effectively the resolution of the system, typically by the square root of the number of samples.
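
A short simulation illustrates the square-root improvement (the 0.05-pixel single-frame repeatability and the 16-frame average are assumed values for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.05                    # assumed single-frame repeatability, in pixels

# 10,000 simulated single-frame edge readings around a "true" position
single = 3.46 + sigma * rng.normal(size=10_000)

# Average 16 frames per part: the spread shrinks by about sqrt(16) = 4
averaged = single.reshape(-1, 16).mean(axis=1)

print(round(single.std(), 4), round(averaged.std(), 4))
```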

Given this attention to detail in a gauging application, the best repeatability and accuracy that can be expected from a machine vision system is on the order of 0.00005 - 0.0001".
