Color Vision: Luck Has Nothing to Do With It
| By: Winn Hardin, Contributing Editor
When solving color machine vision applications, a few end users are very lucky. The object under test is uniform in color. The application takes place in a controlled environment with no outside influences. And the customer is smart (or lucky) enough to buy the best hardware and software from a tried-and-true machine vision supplier. They turn on the system and train it to recognize the heretofore unnamed widget, which it does with greater than 99.9% repeatability. The manufacturing line revs up, and suddenly, the customer is producing the next iPod with near 100% quality assurance. They’re lucky. They slept at a Holiday Inn Express.
Such customers are few and far between.
Machine vision solutions for color applications have indeed become much ‘smarter,’ that is, more user-proof. But to make a living solving color machine vision applications, you need an in-depth knowledge of the physics involved, as well as of the intricate, often subtle, strengths and weaknesses of both human vision and your machine vision system.
The Impossible Problem
“A core model for a color machine vision system is: multiply the illumination spectrum by the reflectance spectrum of the object and by the band-passes of the red, green and blue sensor types, then integrate over wavelength,” explains Ben Dawson, Teledyne DALSA’s (Waterloo, Ontario, Canada) Director of Strategic Development and presenter of the AIA Color Vision tutorial at Automate 2011. “It’s a simple model, but fairly accurate. To use color to help determine what an object is, you need to solve this model for the object’s spectrum. But with imperfect knowledge of the illumination and the sensors’ responses, and a limited number of sensor types, this is an underconstrained problem: there is no unique solution. It would appear that you’re out of luck, but people solve color vision problems every day, so there must be a way.”
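Dawson’s core model can be sketched numerically. The sketch below discretizes wavelength and multiplies an illumination spectrum, an object reflectance, and three sensor band-passes, then sums over wavelength; the Gaussian spectra and filter centers are hypothetical stand-ins, not data for any real camera.

```python
import numpy as np

# Discretized forward color model:
# response_k = sum over wavelengths of illumination * reflectance * filter_k
wavelengths = np.arange(400, 701, 5)  # visible range, nm

def gaussian(center, width):
    """A smooth, made-up spectral curve centered at `center` nm."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

illumination = np.ones_like(wavelengths, dtype=float)  # idealized flat illuminant
reflectance = gaussian(600, 40)  # a hypothetical reddish-orange object
filters = {                      # crude stand-ins for RGB band-passes
    "R": gaussian(610, 30),
    "G": gaussian(540, 30),
    "B": gaussian(460, 30),
}

def sensor_response(illum, refl):
    """Integrate illumination x reflectance x filter over wavelength."""
    return {name: float(np.sum(illum * refl * f)) for name, f in filters.items()}

print(sensor_response(illumination, reflectance))
```

Solving this model in reverse, recovering the full reflectance curve from just three numbers, is exactly the underconstrained inversion Dawson describes.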
Color applications fall into two categories: color recognition, which is relative and somewhat subjective; and color measurement, which is best performed by Aristotle’s ghost operating a NIST-calibrated spectrophotometer. Humans and machines can recognize colors, but only machines can measure them.
“Color measurement depends on the physical properties of an object, based on material science and the many materials and interfaces between and including the illuminator and the receiver,” explains Robert McConnell, President of WAY-2C Color Machine Vision (Arlington, Massachusetts). “The question becomes: How do we overcome our limitations when it comes to color-based recognition and machine vision?”
Knowing What You Know…and What You Don’t
Humans have built-in circuits that allow us to calibrate our eyes for different conditions, allowing us to recognize and track objects based on color and other criteria. But for the machine vision designer, instinct isn’t an option.
Human and machine vision alike are “fooled” by metamers: colors that look the same to us or to a machine vision system, but have different spectra. “A metamer spoofs a sensor so that, for example, it can’t tell monochromatic yellow light from a mixture of red and green that makes a visually identical yellow,” explains Teledyne DALSA’s Dawson. “To solve this problem you can do one of two things: You can build a sensor that has a huge number of narrow band-pass filters versus the three sensors used in today’s best color cameras. These devices are called spectrometers or hyperspectral sensors. Or you can do what humans do: We use other information and certain assumptions about the real world to help us overcome unusual conditions like metamers.”
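Why metamers are unavoidable with only three sensor types can be shown with a little linear algebra. A three-filter camera is a 3×N sensing matrix; any spectral change lying in that matrix’s null space is invisible to the camera. The sketch below, using the same hypothetical Gaussian band-passes as before, constructs two different spectra with identical RGB responses (a real metamer must also stay non-negative, which the small perturbation here respects only approximately).

```python
import numpy as np

wavelengths = np.arange(400, 701, 5)

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Hypothetical RGB band-passes stacked into a 3 x N sensing matrix.
M = np.vstack([gaussian(610, 30), gaussian(540, 30), gaussian(460, 30)])

spectrum_a = gaussian(575, 20)  # a made-up "yellow" spectrum

# Any direction in the null space of M changes the spectrum without
# changing the three sensor responses -- by definition, a metamer.
_, _, vt = np.linalg.svd(M)
null_vec = vt[-1]                         # a direction the sensors cannot see
spectrum_b = spectrum_a + 0.1 * null_vec  # physically different spectrum

resp_a, resp_b = M @ spectrum_a, M @ spectrum_b
print(np.allclose(resp_a, resp_b))  # True: same RGB, different spectra
```

A hyperspectral sensor shrinks that null space by adding rows to the matrix; Dawson’s alternative is to disambiguate with outside knowledge instead.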
For instance, Dawson continues, a vision system may not be able to tell what a brown object on a plate is by itself, but if the plate is in a kitchen, it can assume the object is more likely to be chocolate than wood. In machine vision, these expectations, or prior knowledge, can be combined with the color information using Bayesian methods. For example, if you are sorting fruit and know beforehand that 80% of the fruit will be oranges and the rest apples, the vision system can use that knowledge to say “orange” when the color evidence is less certain. And to improve your chances at color recognition, you must try to control or estimate all the optical elements in the system, such as the lighting and the way the object is viewed.
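The fruit-sorting example reduces to a one-line application of Bayes’ rule. In the sketch below, the 80/20 prior comes from the article; the likelihood numbers are invented to represent an ambiguous hue that the color evidence alone would (weakly) call an apple.

```python
# Hypothetical Bayesian fruit sorter: the prior tips the decision
# when the color evidence alone is ambiguous.
priors = {"orange": 0.80, "apple": 0.20}

# Made-up likelihoods P(observed color | fruit) for an ambiguous hue
# that, on its own, slightly favors "apple".
likelihoods = {"orange": 0.45, "apple": 0.55}

def posterior(priors, likelihoods):
    """Bayes' rule: P(class | color) is proportional to P(color | class) * P(class)."""
    unnorm = {c: priors[c] * likelihoods[c] for c in priors}
    total = sum(unnorm.values())
    return {c: p / total for c, p in unnorm.items()}

post = posterior(priors, likelihoods)
print(max(post, key=post.get))  # "orange": the 80% prior outweighs weak evidence
```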
“Machine vision tries to have constant lighting with broad spectral band illumination, while mitigating thermal drift in cameras and other electronics,” adds McConnell. “You limit contamination from ambient lighting by using enclosures. In one recent application of ours, a customer developed a color-based sorting system in an open shed in Switzerland, where the temperature swings can be very large. Although the customer was able to control the lighting, thermal effects resulted in color drift across all the electronics. The customer added temperature-controlled enclosures, among other mitigating systems, and retrained the system before each shift.”
Colorful Continuing Education
For those wanting to learn about the fundamentals of color machine vision, Teledyne DALSA’s Dawson will teach this year’s “Advanced Color Theory and Applications” tutorial during the Automate 2011 Conference, on Monday, March 21, at Chicago’s McCormick Place.
The tutorial begins with a brief introduction, followed by an in-depth discussion of the physics behind color, which is really a discussion of the physics behind the interaction of light and matter. In the third part of the tutorial, Dawson focuses on real world, color machine vision applications and techniques.
“In Part Three of the tutorial, we look at a catalog of machine vision tools that are available today,” explains Dawson. “Not brands, but sensors: hyperspectral and three-sensor cameras, issues with color aliasing, chromatic aberration of lenses…the nuts and bolts of what you have to deal with when designing a color machine vision system using commercial products available today. Then we progress to a combination of algorithms and applications. We look at the key concept of the classifier: algorithms that determine what type of object you are looking at, based on values from the sensor types and prior or external knowledge, so whether you’re looking at an apple or an orange. We delve further into additional factors in making a decision, such as the cost of making a mistake. Then we illustrate the use of classifiers and other algorithms for color tasks such as measuring the right color of brown for a muffin, classifying mouthwash, or picking out rocks from potatoes.”
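Folding the cost of a mistake into the classifier can be sketched as a minimum-risk decision. In the example below, a rock-versus-potato sorter (the class names echo Dawson’s example; the posterior and cost numbers are invented) decides against the more probable class because calling a rock a potato is assumed to be far more expensive than discarding a potato.

```python
import numpy as np

# Hypothetical minimum-risk classifier: combine class posteriors with a
# cost matrix so expensive mistakes are avoided even when unlikely.
classes = ["potato", "rock"]
posteriors = np.array([0.90, 0.10])  # made-up posteriors from color evidence

# cost[i][j] = cost of deciding class i when the truth is class j
cost = np.array([
    [0.0, 100.0],  # calling a rock a potato is assumed very expensive
    [1.0,   0.0],  # discarding a potato costs little
])

expected_risk = cost @ posteriors  # expected cost of each possible decision
decision = classes[int(np.argmin(expected_risk))]
print(decision)  # "rock": the high cost outweighs the 90% potato posterior
```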