Industry Insights
Choosing an Area Array Camera for Machine Vision
POSTED 04/27/2011 | By: Winn Hardin, Contributing Editor
In the consumer camera market, megapixel counts dominate the marketing. Not so for the machine vision market. While spatial resolution is certainly important, machines running image-processing algorithms need a lot more from an image than the human eye does: not just megapixels, but also wide dynamic range and low dark noise for a high signal-to-noise ratio. And, of course, machines need that data as fast as they can get it, so that the products they inspect can be manufactured as quickly and cheaply as possible.
Important Specifications
When customers choose a two-dimensional, area-array camera for machine vision, they typically weigh several important specifications: array and pixel size; dynamic range, including the read-out electronics and signal-to-noise ratio (SNR); camera controls; the housing; and input/output (I/O).
Optics and pixel size together determine both the practical sensor size and the effective spatial resolution. For example, most vision cameras use C-mount optics, which typically limit the size of the sensor behind the lens to 1 inch, or F-mount lenses for larger sensors. The larger the optic, the more expensive the camera and sensor, says Ingo Lewerendt, Director of Product Management at Allied Vision Technologies GmbH (Ahrensburg, Germany). “If you go to 1-inch sensors and use C-mount lenses, you have to take care of distortion at the edges of the image. Some C-mount lenses can handle a 1.2-inch sensor, but only a few. Sometimes, it’s easier to go to F-mount or M42 optics. A 2/3-inch sensor is the standard for C-mount optics.”
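A quick way to sanity-check the sensor-versus-optics question is to compare the sensor diagonal (pixel pitch times pixel count) with the lens's specified image circle. The sketch below is illustrative only: the ICX285 figures are its published 1392 x 1040 resolution at 6.45 microns, while the 2448 x 2048 array assumed for a 5 MP, 3.45-micron sensor is an assumption, not a number from the article.

```python
import math

def sensor_diagonal_mm(h_pixels, v_pixels, pixel_um):
    """Sensor diagonal in mm from resolution and a square pixel pitch."""
    width_mm = h_pixels * pixel_um / 1000.0
    height_mm = v_pixels * pixel_um / 1000.0
    return math.hypot(width_mm, height_mm)

# Sony ICX285: 1392 x 1040 active pixels at 6.45 um pitch.
print(f"ICX285 diagonal: {sensor_diagonal_mm(1392, 1040, 6.45):.1f} mm")          # ~11.2 mm

# Hypothetical 5 MP sensor with 3.45 um pixels, assuming a 2448 x 2048 array
# (the resolution is an assumption for illustration).
print(f"5 MP @ 3.45 um diagonal: {sensor_diagonal_mm(2448, 2048, 3.45):.1f} mm")  # ~11.0 mm

# For comparison, nominal format diagonals: a 2/3-inch type sensor is roughly 11 mm,
# a 1-inch type roughly 16 mm; the lens data sheet's image circle is the real limit.
```

Both examples land near the 11 mm diagonal of the 2/3-inch class that Lewerendt calls the C-mount standard, which is why larger arrays push designers toward F-mount or M42 optics.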
Consumer cameras boost resolution by shrinking the size of each pixel. However, a smaller pixel captures fewer photons, and it can become more difficult to know exactly which pixel collected a given photon, which is critical when a machine vision system is making physical measurements from image data.
“Pixels shouldn’t be smaller than 5 microns,” says Allied Vision’s Lewerendt. “Any smaller than that, and the effective sensitivity goes down, SNR goes down, and everything gets worse in terms of image performance. If you do go smaller than 5 microns, you know that you’ll have to do more processing on the back end. The trick is to get a good image in front of the camera, and the lens and sensor have to be the best to do that. Electronics simply translate the data. And of course, you need a square pixel for measurement, although that’s less of an issue for line scan cameras. Without a square pixel, you have to correct for distortion from the sensor during backend processing.”
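A rough shot-noise calculation illustrates why shrinking pixels costs image quality. Assuming full-well capacity scales with pixel area and that photon shot noise dominates, peak SNR is the square root of the electrons collected, so it drops as the pixel shrinks. The 18,000-electron reference full well below is an illustrative figure, not a data-sheet value.

```python
import math

def peak_snr_db(full_well_electrons):
    """Best-case SNR in dB when photon shot noise dominates: SNR = sqrt(N_e)."""
    return 20.0 * math.log10(math.sqrt(full_well_electrons))

# Illustrative assumption: full-well capacity scales with pixel area.
reference_pitch_um = 6.45        # ICX285-class pixel pitch cited above
reference_full_well = 18_000     # electrons -- illustrative, not a data-sheet figure

for pitch_um in (6.45, 5.0, 3.45, 2.2):
    full_well = reference_full_well * (pitch_um / reference_pitch_um) ** 2
    print(f"{pitch_um:4.2f} um pixel: ~{full_well:6.0f} e- full well, "
          f"peak SNR ~{peak_snr_db(full_well):4.1f} dB")
```

Under these assumptions, moving from a 6.45-micron to a 2.2-micron pixel costs roughly 9 dB of peak SNR, which is the kind of loss Lewerendt says has to be made up with back-end processing.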
In the past, machine vision camera manufacturers have mainly chosen from a limited number of sensors, such as the 1.4 MP Sony ICX285 with 6.45-micron square pixels, but that number is increasing to a half dozen as the machine vision customer market expands and segments, according to Henning Tiarks, Head of Product Management at Basler Vision Technologies (Ahrensburg, Germany).
“We see camera customers falling into one of three categories,” explains Tiarks. “The largest segment of customers still use the ICX285, or the ICX267 at 1.3 MP, but many will go up to 2 MP cameras. A 5 MP camera with 3.45-micron pixels is about 2,000 Euros, while an Aptina sensor at 5 MP will only be about 350 Euros. That’s where we see the sweet spot for the majority of machine vision cameras: between 3.45 and 5.5 microns. The next group is concerned with speed, so they want a camera built on this Sony or that Kodak sensor, in either a multitap format or CMOS for high-speed applications. You see this in medical, semiconductor or electronics manufacturing applications where you need a lot of dynamic range but also need high throughput. In the early days, you either paid a lot for a multitap CCD sensor, or sacrificed image quality to go with high-speed CMOS, but that is changing. The third group is solely focused on price. They can live with a rolling shutter, rather than a global shutter, and can use strobe lights to inspect slow-moving objects.”
“We’re beginning to see the start of a revolution,” continues Tiarks. “And it’s taking place in the second group, in [automated optical inspection] applications in electronics, for example. The image quality of CMOS sensors is now very similar to that of CCD sensors, at 5 to 10 times the speed of CCD transfers. Camera prices have dropped by half. But CMOS sensors still vary from unit to unit, so the camera maker has to know what they’re doing and compensate for artifacts and other CMOS-related issues right on the chip. Basler Vision has been doing this for 10 years, and now we can take all that we’ve learned and add it to a new CMOS sensor to guarantee good quality right away at a lower price.”
Older CMOS sensors didn’t always offer the best on-chip analog-to-digital (A/D) converters, although many of the newer models have overcome this limitation, acknowledges Allied Vision’s Lewerendt. “Some customers insist on a 12-bit pixel while many CMOS A/D converters only offer 10 bits,” he says.
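The bit-depth question comes down to how finely the A/D converter quantizes the signal: the ideal quantization-limited dynamic range of an N-bit output is 20·log10(2^N) dB. The short sketch below works those numbers; whether the sensor's own noise floor makes the extra bits meaningful is a separate question.

```python
import math

def adc_dynamic_range_db(bits):
    """Ideal quantization-limited dynamic range of an N-bit output: 20*log10(2^N)."""
    return 20.0 * math.log10(2 ** bits)

for bits in (8, 10, 12):
    print(f"{bits:2d}-bit: {2 ** bits:5d} grey levels, ~{adc_dynamic_range_db(bits):.1f} dB")

# 10-bit gives 1024 levels (~60 dB); 12-bit gives 4096 levels (~72 dB).
# The sensor's full-well-to-noise ratio still caps how much of that range carries real signal.
```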
Camera Controls and Housing
Despite the growth of “smart cameras,” the majority of vision camera customers simply want a camera that is easy to use and comes with automated calibration. Most advanced camera controls and analysis come from software running on a nearby PC. “Especially with the advent of Gigabit Ethernet [GigE], because the higher bandwidth makes it less of an issue to pass camera commands back and forth from the PC,” notes Basler Vision’s Tiarks.
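In practice, “camera control” over GigE mostly amounts to the PC writing standard GenICam feature values, with names such as ExposureTime, Gain and TriggerMode, to the camera. The sketch below uses a stand-in class rather than any vendor’s real SDK, purely to show the shape of those parameter writes.

```python
from dataclasses import dataclass, field

@dataclass
class GigECameraStub:
    """Stand-in for a vendor SDK's remote-device node map (illustration only)."""
    features: dict = field(default_factory=dict)

    def set_feature(self, name, value):
        # A real SDK would serialize this write into a GigE Vision control packet.
        self.features[name] = value
        print(f"-> {name} = {value}")

cam = GigECameraStub()
cam.set_feature("ExposureTime", 2000.0)    # microseconds
cam.set_feature("Gain", 6.0)               # dB
cam.set_feature("TriggerMode", "On")       # wait for an external trigger
cam.set_feature("TriggerSource", "Line1")  # hardware input line
```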
While this represents the mainstream of machine vision camera customers, there are OEMs that want specialized cameras. Security and traffic cameras are one customer segment where processing needs to be done on the camera to avoid high cabling costs back to a PC.
“There’s a group of leading-edge technology users that want to develop systems or provide cameras to customers with unique selling points,” explains Allied Vision’s Lewerendt. “With these customers, we can talk in depth about their application and about post-processing, memory, triggering, and other advanced features. A camera essentially just converts light into data; it’s not very smart. But with some customers, you can show them how adding features to the camera itself can save them time and money during integration.”
The housing is often overlooked when it comes to camera marketing, but it can affect system integration. All cameras must meet FCC and CE requirements when it comes to electromagnetic interference (EMI), but ruggedness and I/O are left up to the camera maker.
“All cameras are designed to survive 1G, but to survive being dropped on the floor, Allied Vision designs its cameras to withstand more than 10G of force. Special designs may survive 100G or more,” explains Lewerendt.
Handling thermal drift is another critical consideration. “A camera may start the production day in a 20C environment, and by noon, the temperature inside production equipment can be 50C, and then drop back to 0C during the night when the plant is closed,” explains Lewerendt. “You have to design the camera to handle the thermal changes or image quality degrades. We put our equipment through accelerated lifecycle testing to make sure it can handle these temperature cycles.”
Finally, camera customers will always face a full menu of options when it comes to I/O, and specifically the camera output. While Camera Link is expected to keep its place as a dedicated machine vision interface, GigE clearly benefits from consumer adoption, which lowers network-related component costs, as 10 GigE and USB 3.0 will, too. “There’s not a single interface that will eventually win,” says Lewerendt. “Instead, you should carefully look at your application and find the right interface for you based on data bandwidth, cable length, etc.”
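A simple bandwidth check makes that interface choice concrete: compare the uncompressed data rate the application produces with what each interface can realistically sustain. The throughput figures below are ballpark assumptions rather than specification guarantees, and the 5 MP, 8-bit, 30 frame/s case is purely illustrative.

```python
def required_mb_per_s(width, height, bits_per_pixel, fps):
    """Uncompressed video data rate in MB/s (1 MB = 1e6 bytes)."""
    return width * height * (bits_per_pixel / 8.0) * fps / 1e6

# Ballpark sustained throughputs in MB/s -- rough assumptions, not spec figures;
# protocol overhead and cable length eat into the raw link rates.
interfaces = {
    "GigE":             110,
    "USB 3.0":          380,
    "Camera Link Full": 680,
    "10 GigE":         1100,
}

# Illustrative case: a 5 MP, 8-bit mono camera at 30 frames/s.
need = required_mb_per_s(2448, 2048, 8, 30)
print(f"Required: ~{need:.0f} MB/s")
for name, capacity in interfaces.items():
    verdict = "enough headroom" if capacity >= need else "too slow"
    print(f"  {name:16s} ~{capacity:4d} MB/s -> {verdict}")
```

In this example the camera needs roughly 150 MB/s, which rules out standard GigE but leaves USB 3.0, Camera Link Full and 10 GigE in play, exactly the kind of bandwidth-versus-cable-length trade-off Lewerendt describes.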