
Vision Systems on Chip (VSoCs) Offer Maximum Performance for Dedicated Tasks

POSTED 07/25/2011  | By: Winn Hardin, Contributing Editor

VSoCs may represent the clearest signal yet that the mainstreaming of machine vision is finally taking place.

Location, location, location. It’s not just a real estate mantra – it also has relevancy for machine vision and other electronic applications.

When it comes to a machine vision system, the closer a sensor is to a memory buffer, the memory buffer to a processor, the processor to I/O – the faster the overall system. If you can put all these elements on the same silicon die, you can build the fastest, most optimized system on the market.

In the past, few machine vision systems had the volume and market momentum to warrant a vision system on a chip (VSoC) where most – if not all – of the data collection and processing units were on the same chip. The NRE and fab costs were too high and machine vision volumes too low to amortize the cost over a long enough product lifecycle. That’s changing with the mainstreaming of machine vision, as more image processing systems and features find their way into consumer products, offering the machine vision industry unprecedented opportunities to effect change and benefit from the world outside of the industrial marketplace.

Single-Board to VSoC

The advent of the “smart camera” presaged the world’s first VSoCs by bringing all the building blocks for a vision system onto the same board, avoiding backplane bottlenecks. In most – if not all – of these systems, the sensor is separate from the image processing cores, but they reside next to each other on the board, greatly improving I/O speeds and overall processing speeds.

“You can get the higher performance by integrating VSoC image processing and video analytics on a chip – either a custom IC, ASIC, or FPGA,” explains Kerry Van Iseghem, Co-Founder of Imaging Solutions Group (Fairport, New York). “DSPs cannot meet the high-speed or real-time performance requirements of machine vision applications. They offer great flexibility, but not performance. Most manufacturers who offer VSoC capabilities incorporate a DSP or GPU alongside the VSoC for added flexibility, offering the best of all worlds. [However,] ASIC and FPGA solutions will always outperform any processor or DSP. Most VSoC solutions today utilize FPGAs. Hard-wired ASIC and custom IC solutions are reserved for very high-volume applications that can amortize the NRE costs over many units. The lowest possible unit cost has to be weighed against NRE costs. It usually is a pretty clear calculation. The beauty of FPGA solutions is that the performance and cost of today’s FPGAs are better than those of hard-wired ASIC and custom IC solutions four years ago.”
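To make the calculation Van Iseghem refers to concrete, the short sketch below amortizes a one-time NRE charge over production volume and compares it with a recurring per-part price. The figures are hypothetical placeholders chosen only to show where the crossover falls, not actual vendor pricing.

```python
# Illustrative sketch of the NRE-versus-unit-cost trade-off Van Iseghem describes.
# All figures below are hypothetical placeholders, not vendor pricing.

def per_unit_cost(nre, recurring_cost, volume):
    """Effective cost per unit once the one-time NRE is amortized over the volume."""
    return nre / volume + recurring_cost

# Hypothetical example: an ASIC carries heavy NRE but cheap silicon,
# while an FPGA has modest NRE but a higher recurring part cost.
ASIC = {"nre": 2_000_000.0, "unit": 8.0}
FPGA = {"nre": 50_000.0, "unit": 45.0}

for volume in (1_000, 10_000, 100_000, 1_000_000):
    asic = per_unit_cost(ASIC["nre"], ASIC["unit"], volume)
    fpga = per_unit_cost(FPGA["nre"], FPGA["unit"], volume)
    cheaper = "ASIC" if asic < fpga else "FPGA"
    print(f"{volume:>9,} units: ASIC ${asic:>8,.2f}  FPGA ${fpga:>8,.2f}  -> {cheaper}")
```

With these example numbers the FPGA wins at low volumes on the strength of its negligible NRE, while the ASIC’s cheaper silicon takes over once volume crosses the break-even point.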

A few pioneering vision companies are moving from single-board solutions to very close approximations of VSoCs to solve specialized high-volume industrial and consumer applications. The caveat: While sensors and processors can be on the same chip, commercial entities are not incorporating all system elements on a single die. What gets designed on the chip and what doesn’t depends on the application.

[Image caption: Cognex Corp.’s DataMan 500 barcode reader uses vision system on a chip (VSoC) technology to increase performance while restraining costs for a specialized machine vision solution.]

For example, Cognex’s (Natick, Massachusetts) DataMan 500, a high-speed, imager-based data code reader, is one of the first industrial vision products to combine the sensor and processing elements on the same die.

“[The DataMan 500’s VSoC is] one piece of silicon that has both an imager and a vision-optimized microprocessor on it, and the combination of those two on one piece of silicon creates a very tight feedback loop where the system can acquire images very quickly, at 1,000 frames a second,” explains DataMan 500 Product Marketing Manager Matt Engle. “At that rate it can do two vision tasks. First, it runs an auto-exposure routine that varies the exposure, or light brightness, of the system in order to get an optimal image. Second, it runs a finder algorithm that locates barcodes within the field of view. This tight feedback loop operates at a very high speed and only passes images that meet certain criteria on to the DSP for decoding. This offloads the DSP by providing a well-exposed image and the location of a barcode. All it has to do is look in that one specific place, run the decoding algorithm, and then handle communication tasks such as formatting the data output.”
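One rough way to picture the loop Engle describes is as a two-stage pipeline: an on-chip stage running at full sensor speed, and a DSP stage that sees only the frames the first stage approves. The sketch below illustrates that structure; the function names and thresholds are invented for the example and do not reflect the Cognex implementation.

```python
# Simplified illustration of a two-stage VSoC loop: an on-chip stage keeps
# exposure tuned and looks for candidate codes at full sensor speed, and only
# qualifying frames are handed to the DSP for decoding. Function names and
# thresholds are invented for this sketch, not the Cognex implementation.

def adjust_exposure(frame, exposure, target=128.0):
    """Nudge exposure toward a target mean brightness (simple proportional step)."""
    mean = frame.mean()
    return exposure * (target / mean) if mean > 0 else exposure * 2.0

def find_barcode(frame, contrast_threshold=40.0):
    """Toy 'finder': report a full-frame region only if there is enough contrast."""
    if frame.std() > contrast_threshold:
        return (0, 0, frame.shape[1], frame.shape[0])   # x, y, width, height
    return None

def vsoc_loop(acquire, decode_on_dsp, n_frames=1_000):
    """acquire(exposure) -> image array; decode_on_dsp(frame, region) -> None."""
    exposure = 1.0
    for _ in range(n_frames):                  # runs at sensor speed (~1,000 fps)
        frame = acquire(exposure)
        exposure = adjust_exposure(frame, exposure)
        region = find_barcode(frame)
        if region is not None:                 # gate: only well-exposed frames with
            decode_on_dsp(frame, region)       # a likely code reach the DSP stage
```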



More than just processing data codes faster, the customization of the VSoC includes a sensor with larger pixels that allow the system to operate with a smaller aperture, increasing the depth of field – another benefit specific to the high-volume package and carton-sorting applications targeted by the DataMan 500. “Both speed and this increased depth of field feed into the end game for the system, which is improved reading and robustness,” says Engle, adding that the same performance couldn’t be achieved using a traditional multicore processor solution because those cores also have to handle communication and other functions, while the DataMan’s VSoC focuses on specific tasks to speed the overall system.
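The optics behind that remark can be illustrated with the standard thin-lens depth-of-field approximation, DOF ≈ 2·N·c·s²/f², where N is the f-number, c the circle of confusion, s the working distance, and f the focal length. The numbers in the sketch below are made-up example values rather than DataMan 500 specifications; they simply show how stopping down the aperture (raising N) deepens the in-focus range.

```python
# Rough illustration of why a smaller aperture (larger f-number N) deepens the
# in-focus range, using the thin-lens approximation DOF ~ 2*N*c*s**2 / f**2.
# Focal length, working distance, and circle of confusion are made-up example
# values, not DataMan 500 specifications.

def depth_of_field_mm(f_number, focal_mm=16.0, subject_mm=500.0, coc_mm=0.011):
    return 2.0 * f_number * coc_mm * subject_mm ** 2 / focal_mm ** 2

for n in (2.8, 4.0, 8.0):
    print(f"f/{n}: ~{depth_of_field_mm(n):.0f} mm of depth of field")
```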

Critical Mass for VSoC 3D

3D vision for consumer electronics is another hot area of development right now as gesture controls for personal computers, and soon TVs, take hold. TYZX, Inc. (Menlo Park, California) targets these applications with a stereovision VSoC that includes a pair of imaging sensors with a specialized 3D embedded processor.

“For industrial machine vision, you can accomplish some amazing things with GPUs and multicore processors that offer a phenomenal amount of processing, but from TYZX’s perspective, mobile processors don’t have the horsepower to do real-time 3D, flow, or background modeling by themselves,” explains Ron Buck, President and CEO of TYZX, Inc. “That’s where we see vision-related silicon coming to the fore.”

TYZX’s G3 EVS platform includes two sensors and the DS3 stereovision processing chip, along with a Linux processor and communication chips for interfacing with other systems.

The DeepSea 3 custom 3D chip processes pixels as they are read off the imager, generating dense 3D data clouds immediately and eliminating the need for storing raw pixel data in memory buffers. “You get the final 3D data set just a few clock cycles after the last pixel has been read from the imager,” explains Buck.
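A generic way to picture that kind of streaming computation is to match each rectified scanline pair as it arrives and convert disparity to depth on the spot, so no complete raw frame ever needs to be buffered. The sketch below is a simplified software illustration of the idea, not the DeepSea 3 pipeline; the focal length and baseline are arbitrary example values.

```python
import numpy as np

# Generic software illustration of stream-oriented stereo: disparity (and hence
# depth) is computed per rectified scanline pair as it arrives, so no full raw
# frame is ever buffered. A simplified sketch, not the DeepSea 3 architecture.

def scanline_disparity(left_row, right_row, max_disp=32, window=5):
    """Block-matching disparity for one rectified scanline pair (SAD cost)."""
    width, half = len(left_row), window // 2
    disp = np.zeros(width, dtype=np.int32)
    for x in range(half, width - half):
        patch = left_row[x - half:x + half + 1]
        best, best_cost = 0, np.inf
        for d in range(min(max_disp, x - half) + 1):
            cost = np.abs(patch - right_row[x - d - half:x - d + half + 1]).sum()
            if cost < best_cost:
                best, best_cost = d, cost
        disp[x] = best
    return disp

def stream_depth(left_rows, right_rows, focal_px=700.0, baseline_mm=60.0):
    """Yield one row of depth values (mm) per incoming scanline pair: Z = f*B/d."""
    for left_row, right_row in zip(left_rows, right_rows):
        d = scanline_disparity(np.asarray(left_row, dtype=np.float32),
                               np.asarray(right_row, dtype=np.float32))
        depth = np.zeros(len(d), dtype=np.float64)
        valid = d > 0
        depth[valid] = focal_px * baseline_mm / d[valid]
        yield depth
```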

“A 3D point cloud is a computationally expensive data structure to analyze,” Buck continues. “[The DeepSea 3] takes the 3D point cloud, quantizes it into cells, applies a 6-degree-of-freedom translation and rotation of the data to the user-specified frame of reference, and projects that 3D data onto a 2D plane to make it convenient for many different types of applications. The result is 3D data that is easy to process by the current generation of mobile processors….We couldn’t achieve such low latency on such a low power budget without dedicated hardware. As a general rule, a GPU gives you an order of magnitude better performance efficiency than a standard CPU architecture, while dedicated hardware gives another order of magnitude improvement in efficient processing.”
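Stripped to its essentials, the post-processing chain Buck outlines is a rigid 6-degree-of-freedom transform, a spatial quantization, and a projection onto a plane. The sketch below shows one generic way to express those three steps; the cell size, grid dimensions, and the choice of a top-down occupancy count are assumptions made for illustration, not TYZX’s implementation.

```python
import numpy as np

# Minimal sketch of the post-processing Buck describes: apply a 6-DOF rigid
# transform to the point cloud, quantize it into cells, and project onto a 2D
# grid (here, a top-down occupancy count). The cell size, grid dimensions, and
# choice of projection are assumptions for illustration, not TYZX's design.

def project_cloud(points, rotation, translation, cell_mm=50.0, grid=(128, 128)):
    """points: (N, 3) array in sensor frame; rotation: (3, 3); translation: (3,)."""
    # 1. 6-DOF transform into the user-specified frame of reference
    pts = points @ rotation.T + translation
    # 2. Quantize x/y coordinates into grid cells of size cell_mm
    ij = np.floor(pts[:, :2] / cell_mm).astype(int)
    # 3. Project onto a 2D plane: count the points that land in each cell
    occupancy = np.zeros(grid, dtype=np.int32)
    in_bounds = (ij[:, 0] >= 0) & (ij[:, 0] < grid[0]) & \
                (ij[:, 1] >= 0) & (ij[:, 1] < grid[1])
    np.add.at(occupancy, (ij[in_bounds, 0], ij[in_bounds, 1]), 1)
    return occupancy
```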

As machine vision technology segues to more consumer and large-volume applications, the reduction in component costs will make it easier for more suppliers to combine multiple functions in a single piece of silicon real estate. And it’s not likely to take very long. TYZX’s Buck believes that the first TVs using 3D vision for gesture control will hit the market by the end of next year.

As Cognex is proving today, these economies of scale will also help industrial applications as the machine vision industry gains greater purchasing power and influence at the creator’s table, also known as the “fab.”
