Industry Insights
Update on Machine Vision-Based Web Scanners for Sheet Product Inspection
POSTED 08/29/2002 | By: Nello Zuech, Contributing Editor
The capability to inspect products produced in continuous sheet form has been around for a long time. As far back as the mid-60s several companies had developed this capability internally, and not long after, several companies were offering commercial products. These early systems used lasers in a flying-spot scanning mode. A multi-faceted spinning mirror arrangement, most likely originally designed for 'spy-in-the-sky' military objectives, swept the focused laser spot across the sheet product as it moved continuously past the inspection station. A photomultiplier served as the detector, capturing time-dependent data that could be interpreted as a reflection of the condition of the sheet surface.
Certain surface concerns manifested themselves by increasing the amount of light reflected back to the photomultiplier and some by decreasing the light level. Today one finds line scan-based systems competing with laser scanner-based systems. The litany of conditions detected by these systems includes holes, spots, scratches, stains, streaks, inclusions, bubbles, pitting, inhomogeneities, indentations, coating bubbles, coating omissions, etc.
Concerns detected can be described as either geometric flaws (scratches, bubbles, etc.) or reflectance flaws (stains, etc.). The capabilities of systems vary. Some arrangements are only able to detect high-contrast flaws, such as holes in the material with back lighting or very dark blemishes in a white or clear base material. Some arrangements are transmissive, others reflective, others a combination. Some arrangements can only detect flaws in a reflective mode that are geometric in nature (scratches, porosity, bubbles, blisters, etc.); others only those based on reflectance changes. Systems are often based on either light-field or dark-field lighting arrangements.
Techniques to detect 'high frequency' or local geometric deviations by capitalizing on their specular properties have been studied more than any other approach. Complementing these techniques are those that operate on the diffusely reflective properties of an object. These have been successful in extracting texture data - color, shadows, etc. Using this approach, 'shape from shading' (low-frequency or global geometric deviations) can detect bulges. The diffuse and specular components can be distinguished because, as one scans light across a surface, the specular component has a cosine-to-the-nth-power signature while the diffuse component has a cosine signature.
In other words, scanning a light beam across the surface of an object creates specularly reflected and diffusely scattered signatures. When light passes over a flaw having a deformation or a protrusion, the reflected light at the sensor position 'dissolves' in amplitude as a result of the changing reflection angle. The energy density within the light bundle collapses. The flaw geometry redirects much or most of the reflected light away from the specular sensor, which therefore detects a lack or reduction of light.
All this is compounded by the fact that, in addition to a change in the angle of reflection, these flaws also cause a change in the absorption of the material at the defect. In metals, for example, this is particularly so for scratches, though less so for dents. In theory, then, one should be able to separate the two components (absorption and reflection) to arrive at a proper understanding of the defect.
Surface defects that essentially have no depth at all, such as discolorations, are detected as differences in reflectivity of the surface material. Stains, for example, do not change the light path but will change the amount of absorption and reflection in relation to the nominally good surrounding area.
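As a rough illustration of the specular/diffuse distinction described above, the minimal Python sketch below models the specular return as a cosine-to-the-nth-power term and the diffuse return as a cosine term. It is not any vendor's algorithm; the coefficients and the exponent are assumed values chosen only to show how a small local tilt at a flaw collapses the specular signal while barely disturbing the diffuse one.

    # Illustrative model only: assumed coefficients kd, ks and exponent n.
    import math

    def reflected_intensity(theta_deg, kd=0.3, ks=0.7, n=40):
        """Return (diffuse, specular) signal at angle theta (degrees)
        measured from the specular direction."""
        c = math.cos(math.radians(theta_deg))
        diffuse = kd * max(c, 0.0)           # cosine signature
        specular = ks * max(c, 0.0) ** n     # cosine-to-the-nth-power signature
        return diffuse, specular

    # A flat surface returns light straight into the specular sensor (theta = 0);
    # a flaw that tilts the local surface redirects the beam by twice the tilt.
    for tilt in (0, 2, 5, 10):
        d, s = reflected_intensity(2 * tilt)
        print(f"surface tilt {tilt:2d} deg -> diffuse {d:.3f}, specular {s:.3f}")

Running the loop shows the specular term falling by an order of magnitude for a ten-degree tilt while the diffuse term changes only a few percent, which is exactly the 'dissolving' signature the text describes.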
Data based on arbitrary thresholds or on the particulars of a specific inspection device can be essentially subjective in nature and difficult to calibrate. Consequently, data processing must operate on the gray-scale content. More sophisticated processing involves operations on the signal to remove noise. Where two-dimensional images are used (either captured from 2D scanners or built from successive 1D sensor scans), the opportunity exists to perform neighborhood operations (either convolution or morphological) to enhance images and segment regions based on their boundaries or edges. Gradient vectors can characterize edges (the slope, or first derivative, of the curve describing the gray-scale change across a boundary). These vectors can include magnitude and direction or angle.
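As a concrete sketch of such a neighborhood operation, the short Python fragment below computes a gradient vector - magnitude and direction - at each pixel of a gray-scale frame using standard Sobel convolution kernels and flags likely boundary pixels. The synthetic frame and the threshold are assumed values for illustration only.

    import numpy as np
    from scipy.ndimage import convolve

    def gradient_map(gray):
        """gray: 2-D float array of gray-scale values."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal derivative
        ky = kx.T                                                    # vertical derivative
        gx = convolve(gray, kx)
        gy = convolve(gray, ky)
        magnitude = np.hypot(gx, gy)
        direction = np.arctan2(gy, gx)       # edge angle in radians
        return magnitude, direction

    frame = np.full((64, 64), 0.5) + 0.02 * np.random.rand(64, 64)  # quiet background
    frame[30:34, 30:34] -= 0.3                                      # synthetic dark blemish
    mag, ang = gradient_map(frame)
    edges = mag > 0.5                                               # illustrative threshold
    print(edges.sum(), "edge pixels flagged")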
Image processing with these techniques often uses two-dimensional filtering operations. Typical filters may take a frequency-dependent derivative of the image in one or two dimensions, or act as low-pass filters. In other cases, one might take a vertical derivative and integrate horizontally. Here one capitalizes on features of the defect that exhibit distinctive intensity gradients. Still other filters might correlate to the one- or two-dimensional intensity profile of the defects.
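The 'vertical derivative, integrate horizontally' idea can be sketched in a few lines of Python. This is illustrative only: the window length is an assumed parameter, and the synthetic frame merely stands in for real image data of a streak-like defect extended along one axis.

    import numpy as np
    from scipy.ndimage import uniform_filter1d

    def streak_response(gray, window=32):
        d_vertical = np.diff(gray, axis=0)                              # derivative down the columns
        integrated = uniform_filter1d(d_vertical, size=window, axis=1)  # average along the rows
        return integrated

    frame = np.random.rand(128, 512) * 0.05        # low-amplitude background noise
    frame[60:63, :] += 0.2                         # faint streak running across the frame
    resp = streak_response(frame)
    row, col = np.unravel_index(np.abs(resp).argmax(), resp.shape)
    print("strongest streak response near row", row)

The derivative makes the filter blind to slow background variation, while the horizontal averaging reinforces anything long and thin in that direction and suppresses isolated noise.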
Some systems have only detection capability; others have the ability to operate on the flaw image, develop descriptors and, therefore, classify the flaw. These latter systems lend themselves to understanding and interpreting process variables and, ultimately, to automatic feedback and control. As machine vision techniques improve in image processing and analysis speeds, there will be opportunity to substitute these more intelligent systems for earlier installations that offered only flaw detection.
The actual resolution required for defect detection depends on the characteristics of the defect and the background. It is generally agreed that sub-pixel resolution is of no consequence to the detection of flaws but relates to the ability to measure flaw size. The size of flaw one can detect is a function of the contrast developed and the 'quietness' of the background. Under special conditions it may be possible to detect a flaw smaller than a pixel, but one cannot measure its size. The Nyquist sampling theorem, however, suggests that reliable detection requires a flaw to be greater than two photosites across in each direction. Using this 'rule of thumb' alone usually results in unsatisfactory performance. The problem is that flaws do not conveniently fall across two contiguous pixels but may partially occlude several neighboring pixels and, in fact, completely cover only one.
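By way of a worked example, the arithmetic below applies the two-photosite rule of thumb with an extra margin for flaws that straddle pixel boundaries. The web width, minimum flaw size and margin factor are assumed numbers, not drawn from any particular application.

    web_width_mm = 2000.0        # 2 m wide web (assumed)
    min_flaw_mm = 0.5            # smallest flaw to be detected reliably (assumed)
    photosites_across_flaw = 2   # Nyquist-style rule of thumb from the text
    margin = 2                   # allowance for flaws straddling pixel boundaries (assumed)

    pixel_size_mm = min_flaw_mm / (photosites_across_flaw * margin)
    pixels_per_line = web_width_mm / pixel_size_mm
    print(f"object pixel size: {pixel_size_mm:.3f} mm")
    print(f"photosites needed across the web: {pixels_per_line:,.0f}")
    # With these assumptions: 0.125 mm pixels and 16,000 photosites,
    # i.e. roughly two 8k line-scan cameras across the web.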
In terms of the requirement to detect flaws in discrete products, there are five factors that strongly interact: size of anomalous condition to be detected, the field of view, contrast/optical perturbations associated with the anomalous condition, consistency of the normal background condition and speed or throughput.
There are several issues that strongly interact: the ability to uniformly illuminate wide areas, sensor/data acquisition, compute power, and requirements in terms of being able to measure and classify in addition to detect. To see smaller detail in the same-sized product requires sensors with more pixels. To handle higher speeds requires faster data acquisition. This in turn effectively reduces the photosite signal which, especially when low-contrast changes have to be detected, results in poorer signal-to-noise. In other words, the quality of the input going into the computer is less reliable.
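The interaction can be made concrete with some simple arithmetic for a line-scan arrangement: web speed and object pixel size fix the line rate, which in turn fixes the data rate and bounds the per-line exposure time. The web speed, pixel size and line length below are assumed example values.

    web_speed_m_per_min = 300.0
    pixel_size_mm = 0.125
    pixels_per_line = 16000

    web_speed_mm_per_s = web_speed_m_per_min * 1000.0 / 60.0
    line_rate_hz = web_speed_mm_per_s / pixel_size_mm      # lines needed per second
    data_rate_mpix_s = line_rate_hz * pixels_per_line / 1e6
    max_exposure_us = 1e6 / line_rate_hz                   # upper bound on integration time

    print(f"line rate:    {line_rate_hz:,.0f} lines/s")
    print(f"data rate:    {data_rate_mpix_s:,.0f} Mpixel/s")
    print(f"max exposure: {max_exposure_us:.1f} microseconds per line")
    # Doubling web speed or halving pixel size doubles the line rate and halves
    # the integration time, which is why signal-to-noise degrades at higher throughput.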
Input for this article was solicited from all the companies listed in the table at the end of the article. The table reflects insight on type of system offered (laser or line scan) and specific industries addressed by the respective vendors as reflected in their websites. The following did respond to our questions and their input is reflected.
- Bob Chiricosta and Todd Clapp - Cognex
- Graham Luckhurst - Dalsa
- Tim Potts - DarkField Technologies/SIS
- Glen Ahearn - Datacube
- John Claridge - Image Automation
- Werner Goeckel - Lasor Sytronics
- Kari Hilden - Papertech
- Vic Wintriss - Wintriss Engineering
1. Any thoughts on the early history of these products? Commercially available products like these were introduced around 1971.
Tim Potts reminded us of Sick's (now the Surface Inspection Systems group of JenOptik) early laser-based systems targeted at photographic film inspection applications as well as the early work at Philco-Ford addressing applications in float glass manufacturing for automotive glass applications. Interestingly, the Philco-Ford activity subsequently became Ford Aeroneutronics, then Estek and that group was acquired by Kodak and then sold to ADE where the techniques have been adapted to unpatterned wafer inspection in the semiconductor industry. There was also some early work conducted in the mid-60s within PPG, the plate glass supplier. It appears that the early laser-based technology emerged concurrently in the U.S., Germany and the U.K.
By the early 70s the paper industry was using arrangements of light-detecting sensors mounted transverse to the direction of travel to look for gross defects. As line scan cameras evolved, these early systems were replaced with line scan-based approaches.
John Claridge made the following observation about these early systems - 'In my direct experience they were quite poor approximations to the customer requirement, and mostly ended up gathering dust quite quickly. Actually, the parent company of Image Automation, Sira Ltd, first proposed a laser scanner for glass inspection in about 1966. At about that time we were developing scanners that comprised a drum of lenses, each scanning an image of the surface over a slit, with a photomultiplier behind. Applications included punched computer tape paper and metals. We also had another scanner that worked by having a rotating pinhole, again scanning the image over a photomultiplier. Applications included finding black specks in PVC powder and black grains in rice. Actually, they worked, and were in use for many years. But they were simple black-on-white applications.'
Glen Ahearn makes the following observation about these early systems: 'Quite a bit has changed since the introduction of the first web scanning systems. The early systems provided minimal information in terms of the defects they would detect. The systems could tell you that 'something' was there, but they could not tell you its size, shape, location or classification, or even provide you with a picture of the defect. Today most systems available in the market provide all of that and more.'
Todd Clapp provides a comprehensive review: 'In the 70s one had to go to laser scanners for high-performance web inspection capability. Some of these systems were impressive with their massive optics and analog signal-processing systems. Many served well. Some camera and photo-detector array systems became available, but they were essentially low-resolution hole detectors. The web inspection camera-based system technologies of the day, and many of the applications using them, are so far removed from today's that experience from that far back is irrelevant to systems and decision-making today.
The 80's were the decade for video cameras and frame-based processing. Driven by CCTV technologies and low-cost TV cameras and formats, video cameras (area array cameras) were pressed into many applications. Anyone could buy a camera and a frame grabber and build a system. They became a practical option for web inspection, but not always a good one.'
2. What are the drivers for adoption of machine vision-based web scanners? What is different today than say five years ago when the technology also held much promise?
From the customer's perspective Tim Potts observes that both customers and applications are becoming more demanding - '…need to see smaller defects over wider products, particularly in the case of films used in the display world.' Tanja Kannisto echoes this, reflecting the increasing demands of the paper industry: 'The market is demanding better systems all the time. The speeds of the paper machines are increasing and the system also needs to work on these new machines. Requiring smaller and smaller details to be detected is also a trend in the industry. The users are also demanding more user-friendly software and user interfaces. Most users are also requiring more interactivity and intuitive systems. The trend is towards a 'full service' system that offers the users all the features they need, analyzes the defect and even suggests fixes for the problem.'
She also observes from the customer's perspective - 'Today, the users are looking for new systems and additional features to maximize the efficiency and automate their production. This is where the strength of machine vision based systems lies. They provide the users an early warning of defects and process upsets. Based on the analysis of the defect or process upset, the user can repair the problem and make sure it does not occur again. The main driving force for machine vision systems is the ever-increasing need for faster process speeds and increasing efficiencies. These issues have become even more important in the constant push for higher operating efficiencies in pulp and paper mills around the world. Also, most mills have become more aware of the bottom line and the money they are losing due to process stoppages and defects.'
And Glen observes 'The two big drivers we see are 1) the buyers of the rolls and coils of material produced by the web process have become more savvy about vision systems themselves and, having a better appreciation of what is possible, demand more quality from their suppliers. Suppliers who don't show a commitment to improving their information and process will be replaced by suppliers who do; 2) Liability. Some materials have liabilities associated with them if their defects get out into the field.'
Bob comments 'Yield management and process optimization are now the driving forces. In today's economy production lines are no longer full and competition is tougher than ever. Customers are more demanding on producers for quality, price and service. The ability to make better decisions is key. Those can be decisions to ship or not ship products, convert material one way or another, grade a product, or how to change and improve a process.'
Graham offers, 'The driving force for adopting web inspection is to ensure a high level of product quality is delivered to the end users. There still appears to be reluctance to use web inspection data for process control. This may be due to the customer having insufficient confidence in the system's performance to perform this task reliably, particularly in real time, and the reluctance of the vendor to take on this added responsibility. There are fewer standards in web inspection than in many other sectors of industrial machine vision. A related issue is the range of expectations about the performance of web inspection systems among users. There is considerable diversity of materials, defects and applications. Limited standards, variations in customer expectations and inconsistent mission criticality in process control all contribute to the adoption challenges of vision-based web scanners.'
3. What are the barriers to the adoption of machine vision-based web scanner systems?
Vic Wintriss suggests 'Apprehension. All of our customers are very skeptical at first...having been stung before.' Todd provided the following insights: 'Some companies had a poor experience in the past with web inspection systems. And until recently, high-end systems were prohibitively expensive. Others do not yet realize the many benefits or how they can use the valuable data obtained. The combined value of online process improvement and the ability to know where your defects are leads to far fewer defects and the ability to grade finished product. But these savings are often hard to calculate, which makes it hard to justify a system. It is easier to calculate the cost of shipping a large quantity of defective product to your best customer, and then taking it back.'
Werner suggests, 'The major barrier to adoption of vision scanners is still economics. Most operations have limited capital to spend, and they usually find it easier to justify payback on a new piece of production equipment than on a vision system, which they see primarily as a quality control tool. Also, initially, many production people are skeptical of vision systems because they feel it will reduce throughput since they will now see more defects that previously went unseen. Since production people often get rated on material efficiencies, they often are skeptical about the application of vision inspection. Once they see the system actually helps them increase production, over time they 'buy in' to the technology.'
Tim notes 'Cost is still an issue - as the defect requirements become tighter, higher resolution is required to achieve the better performance, but that comes with a price.'
4. What has happened over the years to make these products more consistent with the performance the market requires? Lighting - Optics - Cameras - Man Machine Interfaces - Graphic User Interfaces - Software - precision encoders - etc.
From the technology's perspective Graham Luckhurst observes 'Greater PC, DSP and programmable logic processing power at lower costs is probably one of the major factors allowing web inspection systems to meet the performance and ROI requirements of this market segment. Similarly, higher resolution, faster and lower cost cameras are having the same effect.' While Vic Wintriss observes '…the breakthrough came with smart cameras.... Data rates for fast processes such as paper are just too large and processing at a central computer requires too much computing power. With smart cameras, one can expand indefinitely. Main computer overload only occurs when defect rates get very high...a catastrophic situation where recording all data is usually unnecessary.'
Tanja observes, 'The improvement of high-speed cameras has allowed us to develop our system to better meet the customer needs. The camera development has also enabled us to capture clearer and better quality images. This in turn has made it possible to add software-based analysis features into our system. By being able to capture images of consistent quality, we have developed our system software to provide real-time analysis, a very important feature that constantly monitors the production. Another hurdle to overcome has been the illumination needed for these systems to provide flicker-free, high-quality images. We had to develop our own high-efficiency, high-output, flicker-free light to meet this specific need in our system. There were no lights available on the market to meet our need, so we went through product development and developed our own light, WebLite™. Computer hardware has also posed a hurdle to developing our system. The increasing processor speeds and other developments have allowed us to better utilize our whole system. There's no delay in the video download time and viewing of the videos, whereas a couple of years ago you had to wait a couple of minutes for a video to download. The web running through a process at high speed (60 mph) also poses a hurdle that we had to overcome.'
Tim suggests 'Knowledge associated with lighting has led to better arrangements to cover webs up to 4 meters wide. In the case of laser systems, telecentric optical arrangements provide resolutions of 10 microns and have yielded more reliable and repeatable parametric data for classification.' Vic also notes 'Lighting has made it possible to pick up defects that were invisible in the past. …. Any size defect, any web width and any web speed can be accommodated by merely adding more cameras...reducing the resolution of each camera and increasing its speed.'
Todd observes, 'Both hardware (lighting - Optics - Cameras - precision encoders) and software have greatly improved but the key has been the combination of these and improved graphical interfaces. The system manufacturers who understand their customer's processes and needs have become the most successful. Application based technology using application specific graphical user interfaces with trend charts and spatially-correct defect maps have made it easier for all personnel in industrial environments to derive knowledge from the displays. In the past, oscilloscopes, blinking lights, simple counters, and text table displays limited the conveyance of useful information from machine to user. Faster scanning cameras have also enabled detection of the defects humans can see (the main ones they object to) at very high web speeds.'
And Werner Goeckel notes 'The difference between now and five years ago is that the speed of the technology has increased and features such as defect imaging and special analysis modules have been added to the systems. Five years ago the standard was a 5 MHz processor for line scan, whereas today the standard is 40 MHz with better resolution. The fact that a system today costs half of what it did five years ago is also a relevant factor.'
John adds 'Probably the increasingly sophisticated software, allowing defects not only to be detected, but also classified, thereby providing the ingredients for feedback, and process correction.'
And Werner also observes 'Consistency is one of the major reasons many clients purchase vision inspection. The camera sees the defects the same way every time; as a result, much of the subjectivity is eliminated. Computers today are faster, lighting is better and interfaces are very intuitive to use. Windows platforms have been a significant factor in making these systems more acceptable to users. Customers want information, and these systems, with their higher speeds and new technology such as 'smart cameras', allow manufacturers to get more quantified data more quickly. Also, the ability for remote monitoring and information access through databases such as SQL Server has made these systems more acceptable to the market.'
5. What performance is the market demanding from these systems: defect size sensitivity, defect types, repeatability, speed, etc.?
Tim notes, 'These requirements vary from application to application. In the case of films used in display or storage media applications, requirements call for the detection of concerns on the order of 5 - 10 microns across a web 1.6 meters wide running at speeds of 3 - 4 meters/minute. In the case of paper, sensitivity is significantly more relaxed, but the web can run at 1000 meters/minute and be 6 meters wide while the system looks for defects on the order of 0.020 inch. In the case of flat glass, typical specs include 0.2 mm diameter defects in a 4 m wide ribbon running at 25 m per minute, and in display glass it's 20 micron defects in 1-2mm wide sheets.'
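For a rough feel of what the paper figures imply, the arithmetic below combines Tim's numbers (1000 meters/minute, 6 meters wide, defects around 0.020 inch) with the two-photosite rule of thumb discussed earlier. The assumption of 8k-pixel line-scan cameras is ours, purely for illustration.

    defect_mm = 0.020 * 25.4                # ~0.51 mm defect
    pixel_mm = defect_mm / 2                # two photosites across the defect
    web_width_mm = 6000.0
    web_speed_mm_s = 1000.0 * 1000.0 / 60.0

    pixels_across = web_width_mm / pixel_mm
    cameras_8k = pixels_across / 8192       # assumed 8k-pixel line-scan cameras
    line_rate = web_speed_mm_s / pixel_mm
    print(f"pixel size {pixel_mm:.2f} mm, {pixels_across:,.0f} pixels across the web")
    print(f"about {cameras_8k:.1f} 8k cameras, each scanning {line_rate:,.0f} lines/s")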
Werner observes 'Defects really haven't changed in five years, but the ability of the equipment to detect them has increased significantly. In many operations, line speeds have doubled, placing pressure on the systems to find these defects under more challenging conditions. Also certain types and sizes of defect that were acceptable five years ago are no longer acceptable to end-users. Therefore, systems today have to find smaller defects at much higher speeds.'
Graham suggests 'Object pixel sizes of less than 15 µm with web speeds often above 30 meters/minute and 1.5-3 meter wide webs appear to be the norm for many applications. Thus, systems with high resolution and speed capabilities appear to be in great demand. Repeatability in defect reporting is also a common requirement. However, this becomes a difficult issue when the defect size approaches the object pixel resolution of the inspection system and the vendor is forced to manage the customer expectation appropriately.'
Todd cites 'Everyone has their criteria, and it varies greatly across industries, processes, and end products. Performance requirements, wishes, and expectations do not always agree. Most producers have moved beyond simple light spot, dark spot, and hole detection. For example, most paper makers want a system to tell the difference between holes, slime holes, caliper tears, thin spots, and oil spots. They demand that very similar looking defects from different sources be clearly detected and identified as different.'
Tanja adds, 'The users are also demanding more user-friendly software and user interfaces. Most users are also requiring more interactivity and intuitive systems. The trend is towards a 'full service' system that offers the users all the features they need, analyzes the defect and even suggests fixes for the problem.'
6. What are the challenges presented by web scanner applications?
Werner observes, 'There are always new challenges facing web scanners. Manufacturers not only want to know that they have a defect, they want to know its exact location and as much about the defect as possible to determine the cause. Five years ago defects were classified as small, large, dark or bright. Today customers want to know much more about the defect and demand that the system classify them by characteristics. Some defects may be acceptable to ship whereas others are not. The manufacturer wants to know the difference. They also want more statistical information about the defects, such as how many are on a roll, whether they occur in clusters, whether they fall in certain lanes, or whether they are repetitive. Many manufacturers classify the quality of a roll by the types or number of defects based on a standard. Therefore, the system needs to provide this information in the format required by the customer. Future systems will be required to handle even smaller defects, at faster speeds, and process the information more efficiently.
Systems will also be required to have special analysis capability for certain difficult to detect defects or defects that are specific to their process. This will require high resolution, better lighting, custom software and high speed processing capability of volumes of data.'
Todd suggests 'Other than understanding the specific application and the specific needs of that application, there are also spatial and environmental challenges. Existing process lines often do not have space, especially in the machine direction, to install new equipment. Web inspection systems must have the mechanical and optical flexibility to fit. Dust and vapors, as well as water hose spray in some plants, require component protection.' Vic agrees, 'Because of the unique installation configurations and limitations, mechanical installation is always a challenge.'
Tim succinctly suggests that the markets are demanding the ability to handle wider webs at faster speeds while at the same time detecting smaller defects. John refines this somewhat by suggesting 'Emulation of the human ability to distinguish defect types, where the differences are subtle. Also achieving high detection rates coupled with very low false positives.'
7. What does the market require in the way of defect classification capabilities? Why? If you offer classification capability, what is the basis of your classification routines?
Tim notes that defect classification is more often a 'want' rather than a 'must.' He also observed that rigorous defect classification requires even more resolution than defect detection, which would in turn require more expensive systems. He also suggested that 'The beauty of a telecentric laser-based scanner is that the system generates multiple optical channels that are spatially coincident, essentially generating an ROI for each optical channel, with combinations of optical channels leading to parametric data - length, width, etc.'
In the case of glass, John suggests 'Our systems typically employ laser scanners, which offer the benefit of using many different channels: bright field, dark field, reflection, etc. Again, in flat glass a solid particulate has a different size spec to a gas bubble. First you must understand the defect type, then size accurately, then decide if this defect is rejectable.'
Todd observes, 'All defects must be detected, identified and visualized in the terms that the specific industry and even the specific mill recognize. No longer can systems simply identify light spots, dark spots and holes. Users demand that the software identify and label all defects with their true identity and probable cause, such as 'oil drop', 'thin spot', 'caliper tear', 'edge crack', or repeating defect with a specific length. The SmartView® web inspection system uses numerous defect features, each calculated for every defect detected. The real-time classification engine then takes over and compares each defect and its features to the list of defect classes included for the particular product being inspected. An important aspect of any inspection system is the ability to intelligently handle the inevitable new defect occurrence, or even a variant of a known type. Even if a defect cannot be properly classified, it should be reported along with all of its features and its image so that the inspection system can be adjusted to report the defect correctly. Discarding an 'unknown' defect is never the right answer, and reporting one without a full complement of features is not helpful for correcting the situation.'
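In the same spirit, though greatly simplified and not representative of the SmartView implementation, feature-based classification can be sketched as a set of rules over a defect's feature vector, with anything unmatched still reported as 'unknown' together with its features. All class names and feature ranges below are assumed, purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class Defect:
        length_mm: float
        width_mm: float
        mean_contrast: float      # negative = darker than the background

    CLASS_RULES = {               # assumed, illustrative ranges only
        "hole":      lambda d: d.mean_contrast > 0.5 and d.width_mm < 5,
        "oil drop":  lambda d: d.mean_contrast < -0.2 and d.length_mm < 3,
        "thin spot": lambda d: -0.2 <= d.mean_contrast < 0 and d.width_mm > 5,
    }

    def classify(defect):
        for name, rule in CLASS_RULES.items():
            if rule(defect):
                return name
        return "unknown"          # never discarded: reported with its full feature set

    for d in [Defect(2.0, 2.0, -0.4), Defect(8.0, 7.0, -0.1), Defect(1.0, 1.0, 0.9)]:
        print(classify(d), d)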
Werner comments, 'Different customers have different needs for classification. Some are very basic, some very advanced. Some customers have a few specific defects, others a great number with different characteristics. Some customers want many classes, some very few. Customers who want to better understand their process may want many classifications in order to differentiate the defects and determine root causes. A resin or film manufacturer may want to know the difference between a carbon gel and a cross-linked gel, for example. These gels may be caused by different factors in