Industry Insights
High Speed or High Bandwidth? Machine Vision Tackles Both
POSTED 11/24/2015 | By: Winn Hardin, Contributing Editor
High-speed machine vision really includes two types of applications. In one case, the product or process is physically moving fast, which brings timing, data control, hardware selection, and many other factors to the fore when designing the system.
High-bandwidth applications, such as large flat-panel display and printed-circuit-board (PCB) inspection, may not involve fast physical motion. But these applications push the boundaries of high-resolution cameras, producing data streams that shift the bottleneck from the camera head to the transfer of data from camera to processing unit. Luckily, machine vision hardware and software providers are constantly introducing new tools to help integrators overcome both high-speed and high-bandwidth challenges.
No Time for Errors
“High-speed and high-bandwidth installations are really two separate challenges, though they often come hand-in-hand,” says Brian Durand, owner of machine vision integrator i4 Solutions, LLC (St. Paul, Minnesota). “With high-speed lines, the main issue is variable latency. The time required to complete many image-processing algorithms varies with image content. This is especially true of pattern-finding algorithms. The challenge is how to either minimize or accommodate this variable latency when parts are moving very quickly.
“The faster parts are moving relative to the camera’s field of view, the more critical the camera trigger becomes,” Durand continues. “Even on medium-speed production lines, we wire a sensor directly to the camera to trigger image capture. Triggering a camera either through computer software or a PLC [programmable logic controller] often results in unacceptable variability. For example, we installed a vision system on a new carton machine running at just 220 cartons per minute. Our camera required a larger field of view because of timing variations associated with the PLC scan time. Triggering the camera directly could have reduced this variability by several orders of magnitude.”
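A quick back-of-the-envelope sketch shows why the trigger path matters so much at these speeds. The 220 cartons-per-minute rate comes from the example above; the carton pitch, PLC scan-time variation, and hardware-trigger jitter below are illustrative assumptions, not figures from i4 Solutions.

# Rough estimate of how trigger jitter translates into extra field-of-view
# margin. Only the carton rate is from the article; the other numbers are
# illustrative assumptions.

cartons_per_minute = 220
carton_pitch_m = 0.30          # assumed spacing between cartons on the line
plc_scan_jitter_s = 10e-3      # assumed worst-case PLC scan-time variation
hw_trigger_jitter_s = 10e-6    # assumed jitter of a sensor wired to the camera

line_speed_m_s = cartons_per_minute * carton_pitch_m / 60.0   # ~1.1 m/s

for label, jitter in [("PLC-triggered", plc_scan_jitter_s),
                      ("hardware-triggered", hw_trigger_jitter_s)]:
    # Positional uncertainty = distance the part travels during the jitter window.
    margin_mm = line_speed_m_s * jitter * 1000.0
    print(f"{label:>18}: ~{margin_mm:.3f} mm of extra field-of-view margin")

With these assumed numbers, a PLC-triggered capture needs roughly 11 mm of extra field of view, while a directly wired trigger needs only about 0.01 mm, which is the "several orders of magnitude" Durand describes.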
Odos Imaging Ltd. (Edinburgh, Scotland) has made headlines in recent years with its 3D time of flight (ToF) cameras, which use nanosecond pulses from an infrared (IR) laser with high-precision triggering and shutter control to provide 3D depth information for every pixel from a single image. Odos recently released the SE-1000 high-speed recording camera, which uses the same camera platform as the 3D ToF camera.
“The ability and requirements of our particular flavor of time of flight actually lends itself very well to high-speed imaging,” explains Ritchie Logan, Odos Imaging’s vice president of business development. “One of the biggest challenges with high-speed, or high-frame-rate, imaging is the illumination. With high speed, you need very short exposure times and very intense illumination, which we have already developed for our time-of-flight camera. Our high-speed camera operates at 450 frames per second at max resolution, or up to 35,000 frames if you window down the sensor. But, by using a 1-kW laser pulse that lasts a few nanoseconds, in conjunction with our fast trigger and timing capabilities, we can capture a frame that, if we would string them together, would equal 5 million frames per second. The exposure length is longer than the light pulse, but since we’re using a very short IR pulse, we get an effective exposure length that is essentially only a few nanoseconds — thus ‘freezing time’ in a very simple manner for the user.” Additionally, the short illumination pulse within the exposure remains consistent across all frame rates, meaning the same amount of light is available from 450 fps right up to 35,000 fps.
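The "freezing time" effect can be illustrated with simple arithmetic: motion blur is just the distance an object travels during the effective exposure. The 450 fps figure is from the article; the object speed and the 200 ns effective exposure (which is what a 5-million-frames-per-second equivalence implies) are assumptions for the sake of the sketch.

# Illustration of pulse-limited exposure. Only the 450 fps figure is from
# the article; the object speed and pulse length are assumptions.

object_speed_m_s = 20.0        # assumed: a fast-moving part at 20 m/s
pulse_length_s = 200e-9        # assumed effective exposure; 1 / 200 ns = 5 Mfps-equivalent
full_exposure_s = 1 / 450      # exposure budget if it spanned a full-resolution frame

blur_pulse_um = object_speed_m_s * pulse_length_s * 1e6
blur_full_um = object_speed_m_s * full_exposure_s * 1e6

print(f"Blur with pulse-limited exposure: ~{blur_pulse_um:.1f} µm")
print(f"Blur if exposure spanned a full 450 fps frame: ~{blur_full_um:.0f} µm")

Under these assumptions the pulse limits blur to a few micrometers, versus several centimeters if the exposure ran the full frame period.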
The short pulse also allows the Odos laser illuminator to be classified as a Class 1 device, which requires no additional safety measures or enclosures to protect nearby workers from the light. However, while an FPGA and careful electronics design allow the SE-1000 to be triggered with less than 1 ns of jitter and less than 1 µs of latency, the camera requires a minimum of 28 µs before the next frame can be captured. With 2 GB of on-board memory, the SE-1000 is ideally suited to triggered stroboscopic applications rather than continuous process analysis, which can require continuous high-frame-rate video streams rather than sequences of still images, Logan says.
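The burst-mode character of the camera follows directly from those numbers. The 28 µs minimum inter-frame time and 2 GB of on-board memory are from the article; the windowed frame size used to estimate recording depth is an illustrative assumption.

# Quick consistency check on the SE-1000 burst figures quoted above.

min_interframe_s = 28e-6
onboard_memory_bytes = 2 * 1024**3

max_burst_fps = 1 / min_interframe_s            # ~35,700 fps, consistent with the
                                                # ~35,000 fps quoted for a windowed sensor
assumed_frame_bytes = 256 * 256 * 1             # assumed 256 x 256 window, 8 bits per pixel
frames_in_memory = onboard_memory_bytes // assumed_frame_bytes
record_time_s = frames_in_memory * min_interframe_s

print(f"Max burst rate: ~{max_burst_fps:,.0f} fps")
print(f"Frames that fit in 2 GB: ~{frames_in_memory:,}")
print(f"Burst duration at that rate: ~{record_time_s:.2f} s")

With those assumptions the on-board memory holds well under a second of windowed video, which is why Logan positions the camera for triggered stroboscopic work rather than continuous streaming.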
No Time for the Jitters
Traditional high-speed (web inspection) and high-resolution/high-bandwidth (flat panel/PCB) inspections typically turn to line-scan cameras, which read a single row of image data at a time and create continuous images by stitching one line of image data after the next. These cameras, which can run at line rates well over 100,000 lines per second, typically use high-brightness, continuous illumination because the line exposure times are too short for practical pulsed lighting. However, matching the camera acquisition speed to the underlying conveyor or material-handling system is critical to acquiring an accurate spatial image.
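The matching requirement can be made concrete with a short calculation: the camera must read one line each time the web advances by one object-side pixel, which is why line-scan cameras are normally clocked from the conveyor encoder. The web speed and pixel size below are illustrative assumptions, not figures from the article.

# Line-rate matching sketch for a line-scan setup.

web_speed_m_s = 2.0            # assumed web/conveyor speed
object_pixel_size_m = 20e-6    # assumed object-side pixel size (20 µm)

required_line_rate_hz = web_speed_m_s / object_pixel_size_m
print(f"Required line rate: {required_line_rate_hz:,.0f} lines/s")   # 100,000 lines/s here

# If the encoder feeding the camera jitters, the effective line rate drifts
# and lines are dropped or duplicated, distorting the stitched image.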
“When a customer chooses a high-speed line-scan camera, like our new Piranha XL PX-16k, they’re looking at high-end applications,” says Dr. Xing-Fei He, senior product manager for Teledyne DALSA (Waterloo, Ontario, Canada). “The quality of the encoder is very important in these applications. If the encoder has jitter problems and the customer pairs a high-end camera with a low-end encoder while running the camera near its maximum speed of 125,000 lines per second, it can result in dropped lines.”
Although the jitter problems are not the fault of the camera, Teledyne DALSA has added new features to the camera that “allow it to recognize encoder jitter and ignore irregularities from the encoder because we know speed can’t change that fast,” He says. A color version of the 12-row-selectable time delay and integration (TDI) PX-16k is expected sometime in 2016.
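The following is a conceptual sketch of the “speed can’t change that fast” idea, not Teledyne DALSA’s actual implementation: encoder periods that imply a physically implausible change in conveyor speed are treated as jitter and clamped toward the running estimate.

# Conceptual sketch only. The threshold and sample data are assumptions.

def filter_encoder_periods(periods_us, max_step_frac=0.05):
    """Clamp sudden jumps in encoder period; a real conveyor cannot change
    speed by more than a few percent between adjacent lines."""
    filtered = [periods_us[0]]
    for p in periods_us[1:]:
        prev = filtered[-1]
        limit = prev * max_step_frac
        # Keep the new period within +/- max_step_frac of the previous one.
        filtered.append(min(max(p, prev - limit), prev + limit))
    return filtered

# Example: one jittery tick (4.0 µs) in an otherwise steady ~8 µs stream
print(filter_encoder_periods([8.0, 8.0, 4.0, 8.1, 8.0]))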
Frame grabbers are also critical to high-speed imaging applications. While Odos’ SE-1000 can use Gigabit Ethernet because the focus is on exposure accuracy and timing, a standard 5-MP camera running at 253 frames per second will generate upwards of 10-11 Gb/s at 8 bits per pixel, according to Rich Dickerson, manager of marketing communications at JAI Inc. (San Jose, California). “Ten years ago, an application like that would have necessitated a line-scan camera, but that’s not the case anymore. If the application fits, you can window a CMOS area-array sensor down and run it at thousands of frames per second. It just depends on the application.”
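Dickerson’s figure follows directly from the sensor parameters, all of which come from the article:

pixels = 5_000_000        # 5 MP sensor
frame_rate = 253          # frames per second
bits_per_pixel = 8

bandwidth_gbps = pixels * frame_rate * bits_per_pixel / 1e9
print(f"Raw data rate: ~{bandwidth_gbps:.1f} Gb/s")   # ~10.1 Gb/s, far beyond Gigabit Ethernet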
Assuming the high-speed system designer has overcome the challenges of camera selection, lighting, and material-handling integration, they still face the daunting task of getting all that data into a computer for image processing.
According to Michael Chee, product manager at Matrox Imaging (Dorval, Quebec, Canada), the company’s frame grabbers use a number of design methods to help with high-speed and high-bandwidth applications. “Matrox Imaging hardware and software offer real-time OS support for time- or response-critical applications, not only in terms of image capture and analysis but also for acting on the results of the latter,” says Chee. “Frame grabbers in the Matrox Radient eV series not only have efficient direct-memory-access engines to transfer image data directly to host memory without CPU involvement, they also have an even more robust hardware-assisted image-capture mode, which minimizes the interrupt load on the host’s CPU, guaranteeing reliable acquisition at frame rates in the tens of thousands per second.”
Meanwhile, the Matrox Radient Pro family of vision processor boards has an onboard FPGA that can be programmed to offload intense and repetitive pre-processing tasks, giving the host CPU more bandwidth to deal with other time-critical tasks, Chee adds.
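As a vendor-neutral illustration of the buffering pattern such frame grabbers enable, the sketch below cycles frames through pre-allocated host buffers shared between an acquisition thread and a processing thread, so no per-frame allocation or copying is needed. The grab_into() function stands in for the hardware DMA write and is hypothetical; nothing here is Matrox’s actual API.

import queue, threading

NUM_BUFFERS = 8
FRAME_BYTES = 1024 * 1280          # assumed window size, 8 bits per pixel

# Pre-allocated host buffers: in hardware the grabber writes into these via
# DMA, so the host never allocates or copies frame data per frame.
buffers = [bytearray(FRAME_BYTES) for _ in range(NUM_BUFFERS)]
free_bufs = queue.Queue()
ready_bufs = queue.Queue()
for i in range(NUM_BUFFERS):
    free_bufs.put(i)

def grab_into(buf):
    """Hypothetical stand-in for the grabber's DMA write into host memory."""
    buf[0] = 0xFF                  # pretend a frame arrived

def acquisition_loop(num_frames):
    # In hardware this loop is the grabber's DMA engine; the host CPU is
    # only notified when a completed buffer index appears on ready_bufs.
    for _ in range(num_frames):
        idx = free_bufs.get()      # reuse a pre-allocated buffer
        grab_into(buffers[idx])
        ready_bufs.put(idx)
    ready_bufs.put(None)           # signal end of acquisition

def processing_loop():
    while (idx := ready_bufs.get()) is not None:
        frame = buffers[idx]
        _ = sum(frame[:16])        # stand-in for real image analysis
        free_bufs.put(idx)         # hand the buffer back for reuse

threading.Thread(target=acquisition_loop, args=(100,)).start()
processing_loop()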
When it comes to high-speed machine vision systems, it’s clear that system designers have no time to mess around. In fact, the most important component of the system may be the integrator.
“The trend for machine vision component suppliers is to make everything as easy as possible,” says Odos’ Logan. “That’s certainly the case with many machine vision software packages that are designed to help people who don’t know their way around machine vision build a system. But that’s not the case with high-speed machine vision.”
Adds i4 Solutions’ Durand: “When it comes to these specialized systems, customers need to leverage the expertise of a machine vision integrator that has experience with high-speed applications if they want to avoid unpleasant surprises.”