The Fundamentals of Embedded Vision System Design

Embedded vision was only recently made possible by the miniaturization of vision and processing components, but its potential to disrupt a wide range of industries is enormous. Robotics, consumer electronics, augmented reality, aerospace, and automotive, among many other sectors, are adopting embedded vision and seeing impressive results.

While there are many different types of applications that use embedded vision technology, the underlying technology for these systems is relatively similar.

4 Fundamental Areas of Embedded Vision System Design

Embedded vision systems can vary greatly in function, but nearly all of them are built from a few main components that deliver power, performance, and accuracy in a compact size.

  1. Processing Architectures. Highly compact processors are the true enablers of embedded vision technology. They come in different configurations, including system on chip (SoC), system on module (SoM), single board computer (SBC), and fully custom designs. These architectures differ mainly in how many of a complete computer's components are integrated onto a single chip or board.
  2. Types of Vision Processors. The two main types of vision processors are graphics processing units (GPUs) and field programmable gate arrays (FPGAs). GPUs are easily reprogrammed in software, while FPGAs offer high throughput with low latency. Application-specific integrated circuits (ASICs) and digital signal processors (DSPs) are also used, but they have lost favor over the years.
  3. Embedded Vision Interfaces. The most common interfaces for embedded vision systems are USB 3.0, MIPI CSI-2, and low voltage differential signaling (LVDS). LVDS is often paired with FPGA-based designs, MIPI CSI-2 is the standard interface on most SoC platforms, and USB 3.0 offers simple, plug-and-play integration.
  4. Imaging Modules. The last piece of an embedded vision system is the imaging module. Typically, these are either sensor modules or camera modules. The distinction between the two is that camera modules have on-board processing capabilities, while sensor modules do not. Camera modules are typically easier to integrate and take some of the workload off of the primary processor.
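When choosing an interface, a useful first check is whether the sensor's raw data rate fits within the link's capacity. The sketch below is a minimal, hedged example: the link rates listed are nominal signaling rates (USB 3.0 at 5 Gbit/s; a hypothetical 4-lane MIPI CSI-2 configuration at 1 Gbit/s per lane), and real-world throughput is lower once protocol overhead is accounted for.

```python
def sensor_data_rate_mbps(width, height, bit_depth, fps):
    """Raw pixel data rate of a sensor, in megabits per second."""
    return width * height * bit_depth * fps / 1e6

# Nominal raw link rates in Mbit/s; effective throughput is lower
# after protocol overhead, so treat these as upper bounds.
INTERFACE_MBPS = {
    "USB 3.0": 5000,                      # 5 Gbit/s SuperSpeed signaling
    "MIPI CSI-2 (4 lanes @ 1 Gbps)": 4000,  # assumed lane count and rate
}

def fits(width, height, bit_depth, fps, interface):
    """True if the raw sensor stream fits the interface's nominal rate."""
    return sensor_data_rate_mbps(width, height, bit_depth, fps) <= INTERFACE_MBPS[interface]

# Example: a 1080p, 8-bit, 60 fps stream needs ~995 Mbit/s,
# well within USB 3.0's nominal rate.
print(fits(1920, 1080, 8, 60, "USB 3.0"))
```

A calculation like this is only a screening step; serious designs must also budget for blanking intervals, protocol overhead, and host-side memory bandwidth.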
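The sensor-module versus camera-module distinction can be pictured as a question of where processing runs. The toy sketch below is purely illustrative (the class and field names are hypothetical, not a real driver API): a sensor module hands raw frames to the host, while a camera module processes frames on-board first.

```python
class SensorModule:
    """Outputs raw frames; all processing is left to the host processor."""
    def read_frame(self):
        # e.g. an unprocessed Bayer-pattern frame straight off the sensor
        return {"pixels": "raw_bayer", "processed_on_board": False}

class CameraModule(SensorModule):
    """Processes frames on-board before handing them to the host."""
    def read_frame(self):
        frame = super().read_frame()
        frame["pixels"] = "rgb"            # e.g. on-board demosaicing
        frame["processed_on_board"] = True  # host workload is reduced
        return frame

print(SensorModule().read_frame()["processed_on_board"])  # False
print(CameraModule().read_frame()["processed_on_board"])  # True
```

In a real design, the on-board step might include demosaicing, white balance, or compression; what matters for integration is that the host receives ready-to-use frames instead of raw sensor data.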

Embedded vision systems are used for a wide variety of applications, but most designs include the above processing architectures, processors, embedded vision interfaces, and imaging modules.

Embedded vision is a revolutionary technology with potential applications in nearly every industry on the planet, even though the technology is still in its infancy.

 

