Industry Insights
High-Speed Vision Finds New Ways to Manage High Data Rates
POSTED 11/14/2012
| By: Winn Hardin, Contributing Editor
Challenging the Sensor
While the number of products passing along a manufacturing line can contribute to high-speed requirements for machine vision equipment and systems, the more pressing challenge on today’s manufacturing lines is the ability to handle high data rates and bandwidths.
“For us, high speed is really about data density per scan, or number of 3D points per second,” explains LMI Technologies’ (British Columbia, Canada) R&D Manager, Mark Radford. “For example, our Gocator displacement sensors report single range values at very high speeds of up to 32 kHz using a CMOS linear sensor, while Gocator line profilers acquire 3D using a megapixel area-array CMOS sensor at up to 5 kHz, or 6.5 million 3D points per second. We consider that to be pretty high speed.”
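The point-rate figures Radford quotes imply a particular profile density. A quick sketch of that arithmetic (the points-per-profile count is derived from the article's numbers, not quoted directly):

```python
# Relating profile rate to 3D point throughput for a line profiler.
# The 5 kHz profile rate and 6.5 million points/s are quoted in the article.
profile_rate_hz = 5_000          # profiles per second (Gocator line profiler)
points_per_second = 6_500_000    # 3D points per second

points_per_profile = points_per_second / profile_rate_hz
print(points_per_profile)  # 1300.0 points along each laser line
```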
For high-speed camera manufacturer Adimec (Eindhoven, The Netherlands), it comes down to bandwidths of 1 to 2 gigapixels per second. “The bottom end for machine vision cameras is about 1 megapixel, while the majority are 4 megapixel,” explains Jochem Herrmann, Chief Scientist at Adimec. “But camera resolutions are going up, from a high end of 12 megapixels to 25 megapixels. The bottom line is that Camera Link Base runs around 3 gigabits per second, while Full goes up to around 7 gigabits. That speed is not sufficient for today's high-speed cameras. CoaXPress, which Adimec helped develop, can go up to 6 gigabits per second per coaxial cable, and we can use a quad-link configuration to increase that to around 24 gigabits. That’s how you get to 2 gigapixels per second [for a 12-bit image].”
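Herrmann's quad-link figure checks out against the pixel rate he cites. A minimal sketch of the arithmetic, using the article's round numbers (actual CoaXPress link rates and protocol overhead vary):

```python
# Checking the CoaXPress bandwidth arithmetic quoted in the article.
# Four coax links at ~6 Gb/s each should carry ~2 gigapixels/s of 12-bit data.
links = 4
gbps_per_link = 6        # per-cable rate quoted in the article (approximate)
bits_per_pixel = 12      # 12-bit image data

total_gbps = links * gbps_per_link          # ~24 Gb/s aggregate
gigapixels_per_s = total_gbps / bits_per_pixel
print(total_gbps, gigapixels_per_s)  # 24 2.0
```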
Higher-resolution, higher-bandwidth sensor arrays pose multiple challenges for camera manufacturers. Larger 25 MP sensors mean that an electronics manufacturer can use fewer cameras to inspect densely populated PCBs, for example. However, larger arrays mean either smaller pixels or larger, more expensive optics, and they increase the need for flat-field correction, compensation for roll-off, and similar measures. “The quality of sensors is going up in general, which means better read noise and pixel uniformity, for example,” explains Herrmann. “At the same time, the newer, bigger cameras are getting better at pixel correction and correcting for optical distortions too. Our customers in electronics and related industries expect the highest-quality images. We do the processing on the camera so they don't have to spend engineering time doing it post camera. A sensor expert knows how to do this. Our customers do not.”
Modern CMOS sensors with windowing, multitap, and high-speed readout electronics are helping to reduce the sensor-side bottleneck for high-speed imaging applications, adds Herrmann, achieving speeds of up to 2 GP/s. But faster sensor speeds put additional burdens on in-camera processing and thermal budgets.
“At faster pixel speeds, you have to be able to correct for defective pixels, fixed-pattern noise, gain, and dark current at higher speeds, too,” says Herrmann. “If you want to have a more or less perfect image, you have to do these things in real time. And even though new CMOS sensors run at ever lower power, adding all this processing at high speed can increase the power demands of the camera, which leads to questions about how you keep the whole unit cool. Running a camera at high temperatures will adversely affect the quality of the images.”
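The corrections Herrmann lists are conceptually simple per-pixel operations; the challenge is doing them at gigapixel rates in camera hardware. An illustrative sketch (not Adimec's implementation) of the three basic steps on a frame:

```python
import numpy as np

def correct_frame(raw, dark, gain, defect_mask):
    """Dark-frame subtraction, per-pixel gain correction, and
    defective-pixel replacement. raw, dark, gain: 2D float arrays;
    defect_mask: 2D bool array marking known-bad pixels."""
    img = (raw - dark) * gain  # remove dark current / fixed pattern, equalize gain
    # Replace defective pixels with the mean of their horizontal neighbours
    left = np.roll(img, 1, axis=1)
    right = np.roll(img, -1, axis=1)
    img[defect_mask] = 0.5 * (left + right)[defect_mask]
    return np.clip(img, 0, None)

# Tiny demo: one row with a hot pixel at column 2
raw = np.array([[10.0, 12.0, 200.0, 11.0]])
dark = np.full((1, 4), 2.0)
gain = np.ones((1, 4))
mask = np.array([[False, False, True, False]])
print(correct_frame(raw, dark, gain, mask))  # hot pixel replaced by neighbour average
```

In a real camera these steps run in an FPGA or ASIC pipeline rather than in floating-point software, but the data dependencies are the same.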
And once the data gets outside of the camera, it still has to be passed across a cable and fed into a PC or other processing unit to make a decision. New Full Camera Link and CoaXPress cables help alleviate the bandwidth bottleneck, although running multiple cables from a camera to a PC to achieve high-speed data throughput can overwhelm even new PCI Express 2.0 internal PC buses.
New Ways to Manage High-Speed Data
LMI built its reputation as a 3D imaging powerhouse in the lumber industry, developing specialized high-speed scanners for lumber grading and processing. Today, the company has packaged that know-how into a compact family of Gocator 3D flexible smart sensors that use single-point, single-line, and (in the future) full-field 3D technology. According to Radford, this approach allows LMI to match data density and acquisition speed to a specific sensor type for a given class of applications.
“LMI has long supported road inspection systems that use networks of either displacement or line profiler sensors that acquire terabytes of data from the road surface to determine micro and macro textures or find ruts and cracks using height information,” Radford says. “Road inspection systems are mounted on vehicles moving at 100 km per hour. The same Gocator line profilers are used in tire inspection, where manufacturers want to scan the entire sidewall of a tire for defects in the range of 10 to 50 microns in less than 1 second. It’s the same technology we developed to inspect wood boards passing under a 3D inspection system at 20 meters per second, which requires profile scan speeds of 3 to 4 kHz.”
“We achieve these speeds through a combination of windowing and multitap sensors,” adds Radford. “With the Gocator family, not only can you window down the CMOS sensor to achieve higher frame rates, but the camera can dynamically move the window to keep the laser line or dot in the image, adjusting to changing conditions while maintaining high speed. We also do significant FPGA processing to reduce the data at the camera by several hundred times compared to transmitting raw images. Furthermore, a multi-sensor configuration is fully synchronized to within a microsecond using dedicated LMI hardware to ensure CMOS exposures are performed at the same time.”
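The dynamic windowing Radford describes amounts to tracking the laser line between frames and recentering a small readout window on it, so only a fraction of the sensor's rows are read out. A hypothetical sketch of that logic (not LMI's firmware):

```python
import numpy as np

def recenter_window(frame, window_height, sensor_rows):
    """Find the laser line's row in the current frame and return the
    (top, bottom) rows of the next readout window centered on it."""
    row_intensity = frame.sum(axis=1)            # laser line = brightest rows
    line_row = int(np.argmax(row_intensity))
    top = max(0, line_row - window_height // 2)
    top = min(top, sensor_rows - window_height)  # keep window on the sensor
    return top, top + window_height

# Demo: a 100-row frame with a bright "laser line" at row 70
frame = np.zeros((100, 16))
frame[70, :] = 255.0
print(recenter_window(frame, 20, 100))  # (60, 80)
```

Reading out a 20-row window instead of all 100 rows is what buys the higher frame rate; the tracking step keeps the line from drifting out of the window as the part height changes.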
LMI’s dynamic windowing and synchronization is one way manufacturers are helping customers achieve high-speed imaging on demand while living within the constraints of standard networks, such as GigE and Camera Link. Recently, camera maker JAI (San Jose, California) introduced a new method to manage high-bandwidth imaging applications called resolution proportional digital zoom (RPDZ).
RPDZ is a patent-pending algorithm that maintains a constant data rate between the camera and the host computer while digitally modifying (zooming) the camera’s field of view (FOV). This is accomplished by sub-sampling pixels in the image in a manner proportional to the digital zoom level. This feature is particularly attractive for military applications such as unmanned aerial vehicles (UAVs) and semiconductor and electronics manufacturing where throughput is critical.
Unlike binning, which reads all pixels and combines nearby pixel values into a single averaged value, RPDZ uses 1x, 2x, 4x, and 8x decimation, which means it reads every pixel or every 2nd, 4th, or 8th pixel and row on the sensor. This reduces both the effective resolution and the bandwidth requirements while allowing the system to identify important regions of interest, such as a car in a surveillance image or a defect on a PCB. Then, the RPDZ algorithm allows the camera to dynamically switch to full resolution for a given ROI while remaining within bandwidth allowances. This means UAVs can have high- and low-resolution images without exceeding standard RF bandwidths for wireless connections, while the electronics manufacturer can be sure to find a defect, then analyze that defect without concerns about lost frames caused by exceeding maximum bandwidth between the camera and processing unit.
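The binning-versus-decimation distinction can be made concrete with a small array example (a simplified model; JAI's actual patent-pending algorithm is not published in detail):

```python
import numpy as np

# An 8x8 "image" of distinct pixel values
img = np.arange(64, dtype=float).reshape(8, 8)

# 2x2 binning: every pixel is read, then each 2x2 neighbourhood is averaged
binned = img.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# 2x decimation: only every 2nd pixel on every 2nd row is read at all
decimated = img[::2, ::2]

print(binned.shape, decimated.shape)  # (4, 4) (4, 4)
```

Both produce a quarter-resolution image, but decimation skips the readout of the dropped pixels entirely, which is what lets RPDZ hold the camera-to-host data rate constant as the zoom level changes.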
JAI rolled out the RPDZ feature on a 4 MP camera this year and will be demonstrating a 20 MP model at Stuttgart’s VISION 2012 show, with shipments expected in Q1 2013. “RPDZ allows the user to use a high-speed megapixel camera in a way that’s the most meaningful for their application,” explains Steve Kinney, Director, Technical Pre-Sales and Support at JAI. “When operating at wide field of view, many applications don’t need full-resolution data rates. They want to scan at some decimated resolution to detect the object and have the pixel density available to do a digital zoom when they find something important. RPDZ allows the customer to seamlessly go from 1x to 8x digital zoom and have the camera do the work to maintain the ratios between bandwidth, frame rate, and resolution.”
As the methods and product examples above clearly demonstrate, machine vision equipment manufacturers are leveraging semiconductor advances to inject intelligence into virtually all their cameras and components in new and inventive ways. The result is an increasingly diverse ecosystem of machine vision products and capabilities designed to meet an increasingly diverse customer and application base.
Smart no longer just applies to “smart cameras.” Today, machine vision companies are using silicon intelligence to dynamically control all aspects of the image chain, from the camera, to the data cable, and finally to the image processing unit.