
New Functions, Guided Programming Drive Tomorrow’s Image Processing

POSTED 11/04/2008  | By: Winn Hardin, Contributing Editor

Color, 3D, search, guided programming and distributed processing will reshape the machine vision solutions of tomorrow.

Image processing algorithms and the computational platforms that power them are the twin hearts of machine vision. Despite the growth and success of machine vision, however, standard image processing tools for general-purpose systems have changed slowly, cautiously, and rightly so. Machine vision rarely drives major computational design improvements, but it has become exceedingly good at leveraging them. At the same time, industry is reluctant to risk financial ruin by trusting in the ‘bleeding edge’ of technology. This is even more apparent when it comes to ‘new’ and little-understood technologies like machine vision.

Today, leading machine vision companies are leveraging the growing number of computational engines, including multicore processors, field-programmable gate arrays (FPGAs), graphics processing units (GPUs) and specialized application-specific integrated circuits (ASICs), to drive new color, 3D, and search algorithms – all the while making the growing complexity of programming these systems easier for industry.

Software Evolution, not Revolution
If you’re looking for new image processing tools, look to color, search and advanced 3D measurement. This fall, both DALSA's Sapera Processing 6.10 and Matrox Imaging’s MIL 9.0 will offer important advances in color processing.

DALSA's Color Tool, in the upcoming release of Sapera Processing 6.10 scheduled for this fall, allows machine vision designers to choose among different color spaces, or even work in two color spaces simultaneously.

“In addition to RGB [color space], we support HSV, LAB and YUV. Working in HSV lets designers extract the hue [H] component, for instance, and isolate a range of color,” explains Bruno Ménard, Group Leader – Image Processing at DALSA Digital Imaging (Waterloo, Ontario). “This is something you wouldn’t be able to do in RGB because you have to choose one plane.”

Applying image processing convolutions, such as edge detection, in different color spaces can yield different results, potentially inject artifacts into an image, and place different processing burdens on the computational platform. “HSV is often a good choice in applications because you don’t have to stick to primary colors, or use three times the processing power to solve the problem because you have to apply the algorithm to three separate color planes,” says Robert Howison, Project Leader, OEM Applications group at DALSA. “Applying edge enhancement filters to each RGB plane can also introduce artifacts. With HSV and LAB you can apply the edge enhancement to the luminance over a range of colors without introducing artifacts.”
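The single-plane advantage Ménard and Howison describe can be sketched with Python’s standard-library `colorsys` module. This is a minimal, illustrative example (not DALSA’s API): converting a pixel to HSV lets one comparison on the hue channel stand in for three per-plane tests in RGB.

```python
# Minimal sketch, stdlib only: classify pixels by a hue range in HSV
# rather than matching RGB triplets across three separate color planes.
import colorsys

def in_hue_range(rgb, lo, hi):
    """Return True if the pixel's hue (0.0-1.0) falls within [lo, hi]."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return lo <= h <= hi

# Any 'greenish' pixel passes, regardless of its brightness or saturation:
print(in_hue_range((0, 200, 30), 0.25, 0.42))   # bright green
print(in_hue_range((200, 10, 10), 0.25, 0.42))  # red
```

Production libraries perform the same conversion over whole images in one pass; the point here is that the range test touches only the H plane.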

A next step in color image processing development would be adding ‘intelligence’ to the program by having it guide designers to the best color space and metric for their application. “Users are telling us that the tools are nice, but you have to help us with tool selection. That’s the challenge, that’s the Holy Grail,” notes Arnaud Lina, Imaging Software Manager at Matrox Imaging (Dorval, Quebec). “That’s why training is so important to bridge the gap while we develop tools like our String Expert, which guide the user through string properties, font, and other important [OCR] criteria.”

DALSA’s “Special Edition” of Xcelera frame grabbers allows vision designers to output two color spaces, typically RGB for the computer display because RGB closely mimics how the human eye perceives color, and a second color space (LAB, HSV) for image processing purposes. “This takes the pressure off the host processor to convert RGB into another color space for processing,” adds DALSA’s Howison.

DALSA uses an FPGA on the Xcelera board for color space conversion, illustrating another trend in image processing software designs – utilization of multiple processing engines to speed up convolutions and tools.

Computing Cluster in a Box
“In machine vision, there’s a lot of interest in new processing architectures and platforms,” explains Matrox’s Lina. “They’re not necessarily designed to run completely new algorithms, but rather make existing algorithms go faster. That’s the #1 request from customers: make it run faster.”
 
One of the most impressive improvements in Matrox’s new MIL 9.0 release is the ability to take advantage of distributed computing architectures. “The key is to link a multicore processor that can accommodate high-level algorithms with a GPU [graphics processing unit] that is a big number cruncher, high-bandwidth memory, and an FPGA carrying a customized algorithm optimized for a customer’s specific needs. Together, they create a powerful processing cluster capable of very complex real-time tasks,” explains Matrox’s Software Development Director, Stephane Maurice. “The key is to provide a single programming environment that can handle it all.”


According to Maurice, MIL 9.0 guides developers toward load sharing among different processors, cores, FPGAs, ASICs, etc., but also allows more experienced programmers to pick and choose. To accomplish this, Matrox software engineers redesigned the basic image processing primitives while keeping the API relatively unchanged. The result should be a comfortable environment that gives designers significantly more flexibility to control computational platforms that offer a “cluster” of processors, whether they run high-level convolutions on multicore microprocessors or specific algorithms on ASICs, FPGAs, or GPUs.
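The load-sharing idea Maurice describes can be illustrated in miniature: split an image into bands and process each band on a separate worker. The sketch below uses Python’s standard-library thread pool as a stand-in for a heterogeneous processor cluster; the function names and the simple box-blur kernel are illustrative, not Matrox’s API.

```python
# Illustrative sketch: divide an image into row bands and process the
# bands on parallel workers, analogous in spirit to load sharing
# across multiple processing engines.
from concurrent.futures import ThreadPoolExecutor

def box_blur_rows(image, row_range):
    """Horizontal 3-tap box blur over the given rows of a 2D list image."""
    lo, hi = row_range
    out = []
    for y in range(lo, hi):
        row = image[y]
        blurred = []
        for x in range(len(row)):
            window = row[max(0, x - 1):x + 2]   # clamp at the row edges
            blurred.append(sum(window) // len(window))
        out.append(blurred)
    return out

def parallel_blur(image, workers=4):
    """Split the image into row bands and blur each band on its own worker."""
    n = len(image)
    step = max(1, n // workers)
    bands = [(i, min(i + step, n)) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda band: box_blur_rows(image, band), bands)
    return [row for band in results for row in band]
```

A real framework would also dispatch kernels to FPGAs or GPUs where they run best; the principle of partitioning work behind one API is the same.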

“I’m seeing more and more machine vision companies use GPUs,” notes Perry West, President of Automated Vision Systems Inc. (San Jose, California). “Heuristic searches and training tasks that might have been impractical on a single processor become feasible, opening up a whole new class of algorithms.”

Descriptor Based Matching
New 3D algorithms based on photogrammetric approaches using advanced “sheet of light” or defocus methods determine the exact position of overlapping fine features, according to Dr. Wolfgang Eckstein, Managing Director at MVTec Software GmbH (Munich, Germany). As microprocessor I/O continues to grow, the need to connect dies to frames becomes more complicated. “Currently, the big companies are developing these defocus methods to determine where these fine wires are and whether they might be touching, which would short the device,” says Eckstein. “I think you’ll see new 3D methods that use photogrammetric methods in the semiconductor industry within the next two years. Companies can do it today, but the issue is to make the algorithms fast and robust.”
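The sheet-of-light principle Eckstein mentions rests on simple triangulation: a laser line viewed at an angle shifts sideways in the image in proportion to surface height. A minimal sketch of that geometry, with hypothetical parameter names (not MVTec's API):

```python
# Minimal sketch of sheet-of-light triangulation geometry: the lateral
# shift of the laser line in the image, scaled to millimeters, divided
# by the tangent of the triangulation angle, gives the surface height.
import math

def height_from_shift(shift_px, mm_per_px, theta_deg):
    """Surface height = lateral line shift / tan(triangulation angle)."""
    return (shift_px * mm_per_px) / math.tan(math.radians(theta_deg))

# A 10-pixel shift at 0.05 mm/pixel and a 45-degree viewing angle:
print(height_from_shift(10, 0.05, 45))  # 0.5 (mm)
```

The engineering difficulty Eckstein points to is not this formula but locating the line to sub-pixel accuracy, quickly and robustly, on reflective fine wires.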

MVTec will release Halcon 9.0, which will offer a new type of image search algorithm. Called Descriptor Based Matching, the algorithm uses relationships between dominant points of an image to quickly find features in textured images. “Unlike normalized correlation, which virtually moves a model around an image to search for a match, and geometric searches that downsample the resolution to find shapes and edges, Descriptor Based Matching uses a more statistical approach without downsampling,” explains Eckstein. “It’s very stable and fast, and well suited to textured images that do not have shapes and edges. Since it doesn’t require downsampling of the image to speed the search, for certain applications, we estimate that Descriptor Based Matching is up to three times faster than geometric searches.”
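The general idea behind descriptor matching can be shown with a deliberately toy sketch: describe each interest point by the pixels around it, then match by smallest descriptor distance. This is a gross simplification of MVTec’s statistical, full-resolution approach (real systems use descriptors invariant to rotation, scale, and lighting, not raw patches), and all names here are illustrative.

```python
# Toy, stdlib-only analogue of descriptor-based matching: describe a
# keypoint by its surrounding patch, then find the scene point whose
# descriptor is nearest under L1 distance.
def patch_descriptor(image, x, y, r=1):
    """Flatten the (2r+1) x (2r+1) patch around (x, y) into a tuple."""
    return tuple(image[yy][xx]
                 for yy in range(y - r, y + r + 1)
                 for xx in range(x - r, x + r + 1))

def match(desc, candidates):
    """Return the candidate point whose descriptor is closest to desc."""
    def dist(d):
        return sum(abs(a - b) for a, b in zip(desc, d))
    return min(candidates, key=lambda point_desc: dist(point_desc[1]))[0]
```

Because matching compares compact descriptors rather than sliding a full model over the image, it avoids both the exhaustive search of normalized correlation and the downsampling of geometric searches.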

The need for speed will continue to drive future advances in machine vision technology. Just a few years ago, image processing software was the bottleneck of the vision system. Then, high-frequency and multicore microprocessors solved that problem. Today, the bottleneck has moved away from the microprocessor to the spaces between processor chips, memory, and buses. We’ll just have to wait and see what new technology will come along that machine vision can turn to its own productive purposes.
