AIA - Advancing Vision + Imaging has transformed into the Association for Advancing Automation, the leading global automation trade association of the vision + imaging, robotics, motion control, and industrial AI industries.

Content Filed Under:

Industry: N/A

Application: Visual Inspection & Testing

Understanding Flaw Inspection Applications

POSTED 08/17/2000 | By: Nello Zuech, Contributing Editor

Overview
Machine vision applications that involve flaw detection or inspecting for cosmetic concerns are generally subjective in nature. The perception of what constitutes a flaw varies from individual to individual, and even one individual may have different sensitivities from time to time. It is also likely that a surface flaw can only be seen under certain conditions - the specific way that light hits the object while it is being viewed. It is also possible that certain observed anomalies are characterized as flaws only if they meet certain conditions or appear in certain locations, and only then do they affect product functionality. Consequently, the definition of what constitutes a flaw is arbitrary. In the machine vision literature this has often been referred to as the 'anything/anywhere' application, where an unwanted appearance anomaly can appear anywhere on the part.

Nevertheless, you want to somehow determine whether the project is at least remotely feasible, and you don't want to rely on a company's salesman to do the evaluation. How does one go about doing that? There are 'rules of thumb' that can be used to get at least some measure of feasibility. These start with having at least a fundamental understanding of how a computer operates on a television image to sample and quantize the data. Understanding what happens is relatively straightforward if one recognizes that the TV image is very analogous to a photograph.

The computer operating on the television image in effect samples the data in object space into a finite number of spatial (2D) data points, which are called pixels. Each pixel is assigned an address in the computer and a quantized value, which can vary from 0 to 63 in some machine vision systems or 0 to 255 in others. The actual number of sampled data points is dictated by the camera properties, the analog-to-digital converter sampling rate, and the memory format of the picture buffer, or frame buffer as it is called.

Today, more often than not, the limiting factor is the television camera that is being used. Since most machine vision vendors today are using cameras that have solid-state photosensor arrays on the order of 500 by 500, one can make certain judgments about an application just knowing this figure and assuming each pixel is approximately square. For example, given that the object you are viewing is going to take up a one-inch field of view, the size of the smallest piece of spatial data in object space will be on the order of 2 mils, or one inch divided by 500. In other words, the data associated with a pixel in the computer will reflect a geographic region on the object on the order of 2 mils by 2 mils.

One can establish what the smallest spatial data point in object space will be very quickly for any application: X (mils) = largest dimension/500. Significantly this may not be the size of the smallest detail a machine vision system can observe in conjunction with the application. The nature of the application, contrast associated with the detail that you want to detect, and positional repeatability are the principal factors that will also contribute to the size of the smallest detail that can be seen by the machine vision system. Contrast has to do with the difference in shade of gray between what you want to discriminate and the background, for example.
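As a sketch, this rule of thumb can be expressed as a quick feasibility check (the 500-element sensor and the one-inch field of view are the figures assumed in the text above):

```python
def smallest_spatial_data_point(largest_dimension_mils, sensor_pixels=500):
    """Rule-of-thumb size, in mils, of one pixel projected into object space."""
    return largest_dimension_mils / sensor_pixels

# A one-inch (1000 mil) field of view imaged onto a ~500 x 500 sensor:
print(smallest_spatial_data_point(1000))  # 2.0 (mils per pixel side)
```

Opening the field of view up, say to accommodate poor positional repeatability, directly coarsens this figure.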

Positional repeatability is in effect just that - how repeatably the object will be positioned in front of the camera. If it cannot be positioned precisely the same way each time, then the field of view will have to be opened up to include the entire area in which one can expect to find the object. This in turn means that there will be fewer pixels covering the object itself. Vibration is another issue that can impact the size of a pixel, as can motion in the direction of the camera itself, since optical magnification may then become a factor - increasing or decreasing the size of the spatial data point in object space.

Application Analysis
For applications involving flaw detection, contrast is especially critical in determining what can be detected. Where contrast is extremely high, virtually white on black, it is possible to detect flaws that are on the order of one-third of a pixel, though with only about 70% probability. Significantly, one can detect these flaws but not actually measure or classify them. When detecting flaws that are geometric in nature, for example scratches or porosity, the presence of such flaws can frequently be exaggerated by creative lighting and staging techniques. So if those were the only flaws one wanted to detect, and detection was all that was necessary, a rule of thumb would be that the flaw has to be greater than one-third of a pixel in size to be reliably detected most of the time.

Where contrast is moderate, the rule of thumb suggests that the flaw cover an area of three-by-three pixels. Classifying a flaw with moderate contrast would require that it cover a larger area, on the order of 25 pixels or so. Where contrast associated with a flaw is relatively low as is the case with many stains, the 1% of the field of view rule would hold or it should cover 2500 or so pixels. Significantly, if it is a question that one is trying to detect flaws in a background that is itself a varying pattern (stains on a printed fabric, for example), the chances are that one would only be able to detect very high contrast flaws.
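These rules of thumb can be collected into a small helper; the mapping from qualitative contrast levels to pixel counts is a simplification of the figures quoted above, not hard limits:

```python
def min_flaw_coverage_pixels(contrast, classify=False):
    """Rule-of-thumb minimum flaw coverage, in pixels, for reliable handling.

    contrast is 'high', 'moderate', or 'low'; numbers follow the article's
    guidelines and are illustrative only.
    """
    if contrast == "high":
        return 1.0 / 3.0               # detection only, ~70% probability
    if contrast == "moderate":
        return 25 if classify else 9   # 3 x 3 to detect; ~25 to classify
    if contrast == "low":
        return 2500                    # ~1% of a 500 x 500 field of view
    raise ValueError("contrast must be 'high', 'moderate', or 'low'")

print(min_flaw_coverage_pixels("moderate"))        # 9
print(min_flaw_coverage_pixels("moderate", True))  # 25
```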

Review Of Defect Detection Issues
Since the late 1960s, companies have applied a variety of techniques to inspect products for cosmetic concerns, either geometric flaws (scratches, bubbles, etc.) or reflectance flaws (stains, etc.). Some arrangements are only able to detect high-contrast flaws, such as holes in the material with back lighting, or very dark blemishes in a white or clear base material. Some arrangements are transmissive, others reflective, others a combination. Some arrangements are only able to detect flaws in a reflective mode that are geometric (scratches, porosity, bubbles, blisters, etc.) in nature; others only those based on reflectance changes.

Some systems only have detection capability; others have the ability to operate on the flaw image, develop descriptors, and, therefore, classify the flaw. These latter systems lend themselves to understanding and interpreting process variables and ultimately to automatic feedback and control. As machine vision techniques improve in image processing and analysis speeds, there will be opportunity to substitute these more intelligent systems for those installed earlier with only flaw detection capability.

Advances in sensors have emerged that improve the signal-to-noise ratio out of the cameras, making it possible to detect even more subtle reflectance changes, which in some processes are considered flaws. These advances alternatively make it possible to use less light and/or operate at faster speeds. Among these advances are time-delay-and-integration (TDI) linear-array-based cameras, which are more suitable in situations where there is motion - either the product moves under the camera or the camera moves over the product.

The problems associated with the inspection for flaws in products have two primary considerations: data acquisition and data processing. The data acquired by the sensor must be reliable (capable of repeatedly detecting a flaw without excess false rejects) and quantifiable.

Ideally it should also provide sufficient detail so one can classify the defect. This should ultimately make it possible to interpret the condition causing the flaw and to take corrective action to keep the process under control. In order to quantify the defects, the data must be in a separable form - separating depth, size, and location information. It is anticipated that this type of data can be quantitatively related to numbers with engineering significance and analyzed in a manner that can be correlated to human perception parameters.

What follows is an attempt to characterize the different approaches to flaw detection that have been described in the literature. One observation is that the flaws to be detected could require 3D information. However, video images are two-dimensional projections in which the third dimension is lost and must be recovered in some manner. The most developed techniques are those that operate on gray scale photometric data in a scene. Significantly, there are many different implementations based on different data acquisition schemes as well as different data processing and analysis schemes.

Gray Scale/Photometric
Techniques to detect 'high frequency' or local geometric deviations by capitalizing on their specular properties have been studied more than any other approach. Complementing these techniques are those that operate on the diffusely reflective properties of an object. These have been successful in extracting texture data - color, shadows, etc. Using this approach, 'shape from shading' (low frequency or global geometric deviations) can detect bulges. The diffuse and specular components can be distinguished because, as one scans light across a surface, the specular component has a cosine-to-the-nth-power signature while the diffuse component has a cosine signature.
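The distinction between the two signatures can be illustrated numerically; the exponent n, which models how sharply the specular lobe falls off, is an assumed parameter chosen only for illustration:

```python
import math

def diffuse_signature(angle_deg):
    """Diffuse (Lambertian) component falls off as cos(theta)."""
    return max(0.0, math.cos(math.radians(angle_deg)))

def specular_signature(angle_deg, n=50):
    """Specular component falls off much faster, as cos(theta) ** n."""
    return max(0.0, math.cos(math.radians(angle_deg))) ** n

# A few degrees off the specular direction, the specular lobe has already
# collapsed while the diffuse component is nearly unchanged:
print(round(diffuse_signature(10), 3))   # 0.985
print(round(specular_signature(10), 3))  # 0.465
```

This sharp falloff is why a sensor placed at the specular angle sees a pronounced drop in signal when a geometric flaw redirects the light.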

In other words, scanning a light beam across the surface of an object creates specularly reflected and diffusely scattered signatures. When light passes over a flaw having a deformation or a protrusion, the reflected light at the sensor position 'dissolves' in amplitude as a result of changing the reflection angle. The energy density within the light bundle collapses. The flaw geometry redirects much or most of the reflected light away from the specular sensor. The sensor detects a lack or reduction of light.

All this is compounded by the fact that in addition to a change in angle of reflection these flaws also cause a change in the absorption of the material at the defect. For example, in metals this is particularly so for scratches, though less so for dents. In other words, in theory one should be able to separate out the two components (absorption and reflection) to arrive at a proper understanding of the defect. So far this has only been reduced to practice in a limited way.

Surface defects that essentially have no depth at all, such as discolorations, are detected as differences in reflectivity of the surface material. Stains, for example, do not change the light path but will change the amount of absorption and reflection in relation to the nominally good surrounding area.

Data Analysis
Techniques depend on the relationship of the signal from one pixel to its neighbors and to the whole image. The analysis to process gray scale data is computationally intensive. This factor in combination with the processing rates required to handle the typical throughput requirements and the size defects one wants to detect for a given span puts severe requirements on any data processing scheme.

Early approaches relied on fixed thresholds. That is, defect detection was a function of the value of the analog signal from the detector. If a flaw caused a change in the value of the signal greater than a certain pre-set value, then it was characterized as a flaw condition. By using registers and other techniques to count the number of times the flaw condition was detected in a given area, a threshold could also be set for flaw size.
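A minimal sketch of such a fixed-threshold scheme, with illustrative threshold and size values:

```python
def fixed_threshold_flaw(image, threshold=128, min_size=3):
    """Flag a flaw when enough pixels deviate beyond a fixed, pre-set value.

    image is a 2-D list of gray values; threshold and min_size stand in for
    the pre-set limits described above (values here are illustrative only).
    """
    flaw_pixels = sum(1 for row in image for value in row if value > threshold)
    return flaw_pixels >= min_size

# A mostly dark scene containing a small bright anomaly:
scene = [[10, 12, 11],
         [11, 200, 210],
         [12, 13, 205]]
print(fixed_threshold_flaw(scene))  # True (3 pixels exceed the threshold)
```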

Today various adaptive thresholding techniques are in widespread use, which reduce the incidence of false rejects experienced with fixed-threshold systems. They are designed to compensate for variations in illumination, sensor sensitivity, surface variables, etc. Significantly, different wavelengths may be influenced differently by films, oils, etc. Similarly, in systems using detectors with an array of photosites, photosite-to-photosite sensitivity compensation is performed. These types of corrections have made such systems more reliable, experiencing fewer false alarms.
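One common form of adaptive thresholding compares each pixel with its local neighborhood mean rather than with a global constant. This sketch (window and delta chosen purely for illustration) tolerates a slow illumination ramp that would trip a fixed threshold:

```python
def adaptive_flags(row, window=3, delta=50):
    """Flag pixels that deviate from their local mean by more than delta."""
    flags = []
    for i, value in enumerate(row):
        lo = max(0, i - window)
        hi = min(len(row), i + window + 1)
        local_mean = sum(row[lo:hi]) / (hi - lo)
        flags.append(abs(value - local_mean) > delta)
    return flags

# A slow illumination ramp with one genuine bright defect at index 4;
# only the defect is flagged, not the ramp:
line = [100, 110, 120, 130, 255, 150, 160, 170]
print(adaptive_flags(line))
# [False, False, False, False, True, False, False, False]
```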

When it comes to analysis, simple techniques are frequently used: looking for light/dark changes, run-length encoding to establish the duration of a change in the direction of scan, and segmenting with connectivity analysis to assess the length of a flaw, e.g., perpendicular to the direction of scan where there is motion.
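Run-length encoding of a thresholded scanline, as described above, can be sketched as:

```python
def run_length_encode(binary_scanline):
    """Encode a thresholded scanline as (value, run_length) pairs; the run
    lengths measure the extent of a change along the scan direction."""
    runs = []
    for value in binary_scanline:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return [tuple(run) for run in runs]

# A scanline where a flaw (the 1s) spans four pixels in the scan direction:
print(run_length_encode([0, 0, 1, 1, 1, 1, 0]))  # [(0, 2), (1, 4), (0, 1)]
```

Linking runs on successive scanlines (connectivity) then gives the flaw's extent perpendicular to the scan.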

Data based on arbitrary thresholds or the particulars of a specific inspection device may essentially be subjective in nature and could prove to be difficult to calibrate. Consequently, data processing must be on the gray scale content. More sophisticated processing involves operations on the signal to remove noise. Where two-dimensional images are used, the opportunity exists to perform neighborhood operations (either convolution or morphological) to enhance images and segment regions based on their boundaries or edges. Edges can be characterized by gradient vectors (slope of the curve or first derivative of the curve describing the gray scale change across a boundary). These vectors can include magnitude and direction or angle.

Image processing with these techniques often uses two-dimensional filtering operations. Typical filters may take a frequency-dependent derivative of the image in one or two dimensions. In others, one might take a vertical derivative and integrate horizontally. In these cases one capitalizes on features of the defect that exhibit distinctive intensity gradients. Still other filters might correlate to the one- or two-dimensional intensity profile of the defects.
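A gradient vector of the kind described above (magnitude plus direction) can be computed with simple central differences; this is a generic sketch, not any particular system's filter:

```python
import math

def gradient_vector(image, y, x):
    """Central-difference gradient (magnitude, direction in degrees) at an
    interior pixel of a 2-D list-of-lists image."""
    gx = (image[y][x + 1] - image[y][x - 1]) / 2.0
    gy = (image[y + 1][x] - image[y - 1][x]) / 2.0
    return math.hypot(gx, gy), math.degrees(math.atan2(gy, gx))

# A vertical step edge yields a strong, horizontally directed gradient:
step = [[0, 0, 100, 100],
        [0, 0, 100, 100],
        [0, 0, 100, 100]]
print(gradient_vector(step, 1, 1))  # (50.0, 0.0)
```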

In terms of detecting and classifying surface defects, one would like to have as much resolution as possible. However, for economic reasons, one establishes the resolution of the system to provide just enough data to classify a defect. In general, a number of sensor photosites must be projected onto the defect to adequately determine its dimensions.

The actual resolution required for defect detection depends on the characteristics of the defect and background. It is generally agreed that sub-pixel resolution is of no consequence to the detection of flaws but is related to the ability to measure flaw size. The size of flaw one can detect is a function of the contrast developed and the 'quietness' of the background. Under special conditions it may be possible to detect a flaw smaller than a pixel, but one cannot measure its size. The Nyquist sampling theorem, however, suggests that reliable detection requires a flaw to be greater than two photosites across in each direction. Using this 'rule of thumb' alone usually results in unsatisfactory performance. The problem is that flaws do not conveniently fall across two contiguous pixels but may partially cover several neighboring pixels while completely covering only one. Hence the above-mentioned rule of thumb that a flaw cover an area of 3 x 3 pixels.
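Working backwards from the 3 x 3 rule gives a quick way to size a sensor for an application; the field of view and flaw size below are hypothetical examples:

```python
import math

def required_pixels_across(field_of_view_mils, smallest_flaw_mils,
                           pixels_across_flaw=3):
    """Pixels needed across the field of view so that the smallest flaw of
    interest spans pixels_across_flaw pixels (the 3 x 3 rule of thumb)."""
    return math.ceil(pixels_across_flaw * field_of_view_mils / smallest_flaw_mils)

# Detecting a 10 mil flaw across a 15 inch (15,000 mil) field of view:
print(required_pixels_across(15000, 10))  # 4500
```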

Resolution may be a function of optics. For most surface inspection the requirement for resolution, coverage and speed are in conflict with each other. Defect characterization may be possible using convolution techniques. It may be possible to develop specific convolution filters for individual defects. It may also be possible to characterize some defects based on their gradient vectors: magnitude, direction and angle.

Other specific features may become the basis for classification - signal change in a specific channel (specular or diffuse) versus the width of the pulse. Parameters to be used could include the shape of the analog signal waveform in the region of the defect, the position of the defect on the object, and its light-scattering characteristics.

Good, repeatable signals are required, however, for reliable classification of defects - hence the need for an optimized data acquisition system. Experience with data acquisition approaches based on characterizing the specular/diffuse properties of a scene has shown them to be sensitive to lighting variations, reflectivity variations of the product, oils, dirt, and markings. In other words, these approaches have produced data that are not reliable. These approaches also suffer because they are not equally sensitive to flaws regardless of the direction of the flaw.

Detecting Defects In Products
In terms of the requirement to detect flaws in discrete products, there are five factors that strongly interact: size of anomalous condition to be detected, the field of view, contrast associated with the anomalous condition, consistency of the normal background condition and speed or throughput.

Where contrast is high (e.g., a black spot on a white 'quiet' background, either naturally or made to appear so through lighting tricks), the anomalous condition is relatively large (greater than 1/8 inch), the field of view is on the order of 15 inches, and, in the case of continuously moving parts, throughputs are 400 FPM or less, techniques are available that in general are very successful. While in principle the same techniques should be extendible to conditions with lower contrast, smaller anomalies, larger fields of view, and higher throughputs, the extension is not trivial.

In general these techniques are not generic but rather application specific. For example, today machine vision systems routinely detect flaws on high-speed paper-making lines where throughputs are up to 5000 FPM. Lighting, data acquisition, and data processing are dedicated specifically to the paper product being produced, however, and may not be easily transported to other web scanning applications.

There are several issues that strongly interact: ability to uniformly illuminate wide areas, sensor/data acquisition, compute power and requirements in terms of being able to measure and classify in addition to detect. To see smaller detail in the same sized product requires sensors with more pixels. To handle higher speeds requires faster data acquisition. This in turn effectively reduces the photosite signal which, especially when low contrast changes have to be detected, results in poorer signal-to-noise. In other words, the quality of the input going into the computer is less reliable.
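The speed/resolution conflict can be made concrete with a back-of-the-envelope pixel-rate calculation for a line-scan arrangement; all the numbers below are hypothetical:

```python
def pixel_rate(web_speed_fpm, pixels_per_line, pixel_size_mils):
    """Approximate pixels per second for a line-scan camera over a moving
    web, assuming square pixels and full coverage."""
    inches_per_second = web_speed_fpm * 12.0 / 60.0
    lines_per_second = inches_per_second / (pixel_size_mils / 1000.0)
    return lines_per_second * pixels_per_line

# A 400 FPM web, 4096 pixels across, 10 mil square pixels:
rate = pixel_rate(400, 4096, 10)
print(f"{rate / 1e6:.1f} Mpixels/s")  # 32.8 Mpixels/s
```

Halving the pixel size to see smaller detail quadruples this rate (twice the lines per second and twice the pixels per line), which is the crux of the trade-off described above.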

Such less reliable data requires that more processing be done on the signal. Where contrast is high - say, where black and white exist - simple thresholding techniques can be employed to ignore all data except those with a value below the threshold, i.e., the data associated with the flaw. Significant data compression is the natural effect. Similarly, the analysis can be simple - a count of the number of pixels can be correlated to size.

Where contrast is low or background 'busy', on the other hand, simple thresholding is virtually ineffective. Some form of gradient processing is required. That is, tactics that detect small gray shade changes corresponding to the edges of the anomaly. Since edges typically fall across several pixels, this means that the anomaly must cover an area of at least 25 pixels to have a reasonable confidence it will be detected and classified reliably. In other words, cameras with a larger number of pixels are typically required to detect lower contrast anomalies.

Again, this can exacerbate the data acquisition rate. Furthermore, gradient processing is far more compute intensive, and the higher data rate means even faster compute power is required. Significantly, all of the above discussion has revolved around detection.

Another issue is the requirement to recognize the anomaly for purposes of reliable classification. Such classification by the system would differentiate between holes, dirt, bubbles, etc. The ability to so classify generally requires more pixels than simple detection, as well as more image analysis or compute power.

The ability to uniformly illuminate a product may also be a factor. Another issue associated with lighting is that it must be sufficient to provide good signal-to-noise properties out of the camera. This is especially critical in applications where wide variation in opaqueness is experienced, which manifests itself as noise in the signal. This will be exacerbated if high-speed data acquisition is required.

Lighting 'tricks' that may be able to mitigate some of these issues include the use of directional lighting to exploit the fact that flaws like bubbles and dents are geometric. Such a lighting arrangement takes advantage of light scattering off geometric surface perturbations. Higher-contrast conditions, such as holes, can be easily detected based on photometric changes - light level changes.

In a transmissive mode, for example, a hole will result in more light - a contrast change going from dark to lighter - and dirt will result in less light - a contrast change going from light to darker. In both cases a delta threshold can be set. Pixels beyond this threshold can be counted, and if enough contiguous pixels exceed the threshold corresponding to a hole or dirt, the condition is flagged.
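A sketch of this delta-threshold classification in the transmissive mode, with an illustrative reference level and tolerance:

```python
def classify_transmissive(value, reference=128, delta=40):
    """More light than the reference suggests a hole; less suggests dirt.

    reference and delta stand in for the pre-set values described above
    and are illustrative only."""
    if value > reference + delta:
        return "hole"
    if value < reference - delta:
        return "dirt"
    return "ok"

scanline = [128, 130, 250, 252, 127, 30, 28, 129]
print([classify_transmissive(v) for v in scanline])
# ['ok', 'ok', 'hole', 'hole', 'ok', 'dirt', 'dirt', 'ok']
```

Counting contiguous 'hole' or 'dirt' pixels against a size threshold then decides whether to flag the condition.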

Conclusion
Adoption of these systems is being driven by increased customer pressure to improve product quality as well as competitive pressure to keep costs down with improved process controls. Significantly, the average price of these systems is not likely to decline over the next five years, but the systems should improve in performance. In some industries, systems with improved performance, especially with the addition of defect classification, may command a higher price and be purchased as a substitute for an already existing system.

While machine vision systems can be effective in detecting flaws, it is important to be able to define and quantify what constitutes a flaw to assure satisfaction with a machine vision implementation. It is also important to understand that there is no one universal approach and that it is important to find the most suitable approach for your specific application. Significantly, lighting design can often make or break an application. Consequently, where one is not purchasing an off-the-shelf solution that has been widely proliferated because it addresses an industry-wide application, it may make sense to consider a phased procurement. In this case, you would finance a demonstration project before proceeding with plant floor deployment.