Industry Insights
Embedded Vision: Compact in Design, Expansive in Possibilities
POSTED 02/06/2018
| By: Winn Hardin, Contributing Editor
Throughout the history of the electronics industry, the old refrain that systems will continuously become faster, simpler, and cheaper has remained true. In the early days of computer vision, a frame grabber capable of capturing a single 640 x 480 x 8-bit image consisted of multiple boards in a rack and cost tens of thousands of dollars. Back then, few people could imagine a portable device no bigger than a deck of cards having the ability to capture images and video, store gigabytes of data, and act as a radio, GPS device, and telephone — all costing less than $500.
The cell phone is the most common example of combining multiple technologies into optimized, highly compact modules, often referred to as “embedded systems.” And when machine vision is added to the mix, the module becomes an embedded vision system. Embedded vision incorporates a small camera or image sensor, a powerful processor, and often I/O and display capability into an application-specific system with low per-unit cost and energy consumption. Examples in the machine vision, medical, automotive, and consumer markets include smart cameras, ultrasound scanners, autonomous vehicles, and personal digital assistants (PDAs), respectively.
The Evolution of Embedded Platforms
The introduction of the PC and the increasing functionality of integrated circuits created a new market for PC-based single-board computers, frame grabbers, I/O peripherals, graphics, and communications boards. This allowed systems integrators to build custom systems tailored for specific applications such as data acquisition, communications, computer graphics, and vision systems. Today, designers can choose from a wide range of boards, form factors, and functionality, including products based on the OpenVPX, VME, CompactPCI, CompactPCI Express, PC/104, PC/104-Plus, EPIC, EBX, and COM Express standards.
Just as frame grabbers and smart cameras may incorporate FPGAs, designers building systems from board-level products such as off-the-shelf CPUs, frame grabbers, and I/O peripherals face an even wider range of products from which to choose. Boards based on these standards can be combined to build vision systems with different camera interfaces and I/O options. Standards organizations such as VITA, PICMG, and the PC/104 Consortium detail these open standards and many of the products available for building a machine vision or image processing system.
Embedded vision can take one of two tracks: open designs built from small-form-factor vision and image processing boards and peripherals based on the computing platforms and standards mentioned above, or custom designs that combine cameras, processors, frame grabbers, I/O peripherals, and software. While the hardware of open embedded vision systems may be relatively easy to reverse engineer, custom embedded vision designs are more complex, highly proprietary, and may use custom-designed CMOS imagers and custom Verilog hardware description language (HDL) code embedded in FPGAs and ASICs.
Intellectual Property
In embedded vision design, many of the image processing functions that lend themselves to a parallel dataflow are implemented in FPGAs. Altera (now part of Intel) and Xilinx offer libraries that can be used with their FPGAs to speed these functions. Intel’s FPGA Video and Image Processing Suite, for example, is a collection of Intel FPGA intellectual property (IP) functions for the development of custom video and image processing (VIP) designs, ranging from simple building-block functions, such as color space conversion, to video scaling. Likewise, Xilinx offers many IP cores for image processing functions such as color filter interpolation, gamma correction, and color space conversion.
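To give a sense of the kind of building-block function these IP libraries accelerate, the sketch below performs a BT.601 RGB-to-YCbCr color space conversion in software. It is a minimal host-side illustration of the math such an FPGA core implements, not the vendors’ IP itself, and the function name is chosen only for this example.

```python
import numpy as np

# BT.601 RGB -> YCbCr conversion matrix (full-range coefficients).
RGB_TO_YCBCR = np.array([
    [ 0.299,     0.587,     0.114    ],  # Y  (luma)
    [-0.168736, -0.331264,  0.5      ],  # Cb (blue-difference chroma)
    [ 0.5,      -0.418688, -0.081312 ],  # Cr (red-difference chroma)
])

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) uint8 RGB image to YCbCr."""
    ycbcr = rgb.astype(np.float32) @ RGB_TO_YCBCR.T
    ycbcr[..., 1:] += 128.0  # offset the chroma channels into the 0-255 range
    return np.clip(ycbcr, 0, 255).astype(np.uint8)
```

In an FPGA implementation, the same multiply-accumulate structure is typically pipelined so that one pixel is converted per clock cycle.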
Both Intel and Xilinx offer third-party IP as part of their partnership programs. In its Xilinx Alliance Program, Xilinx includes products from companies such as Crucial IP, iWave Systems Technologies, and Xylon, which offer IP for noise reduction, video encoding, and video-to-RGB conversion, respectively.
Camera companies have been quick to recognize the need for FPGA-powered peripherals that can be used in open embedded systems. Indeed, companies such as Allied Vision and Basler have already introduced camera modules to meet such demands.
“Many of today’s embedded systems rely on a sensor module connected to a processor board via a MIPI Camera Serial Interface 2 (MIPI CSI-2) that is used in mobile devices and automotive applications,” says Francis Obidimalor, Marketing Manager at Allied Vision in his video, “Sensor Module Vs. Camera Module.” “However, these sensor modules have limited processing capability. Functions such as noise reduction and color debayering, as well as application-specific software such as facial recognition, must be performed on the host processor.”
To reduce the host processing required, camera modules with on-board processing capability can be used to off-load functions such as noise reduction and color debayering, allowing the developer to concentrate on the application software. “Using such modules employed in the Allied Vision ‘1’ platform, camera vendors can also provide the necessary drivers, alleviating the need for designers to write new camera drivers should a system need to be upgraded with, for example, a camera with a higher performance image sensor,” Obidimalor says.
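As a rough sketch of the work that would otherwise land on the host processor, the snippet below debayers and denoises a raw frame on the host CPU using OpenCV. The file name, resolution, and Bayer pattern are assumptions made for illustration; a camera module with on-board processing would deliver the processed image directly.

```python
import cv2
import numpy as np

# Hypothetical 8-bit raw Bayer frame from a sensor module (RGGB pattern assumed).
raw = np.fromfile("frame.raw", dtype=np.uint8).reshape(1080, 1920)

# Debayering (demosaicing) performed on the host CPU...
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)

# ...followed by noise reduction, also on the host.
denoised = cv2.fastNlMeansDenoisingColored(bgr, None, 5, 5, 7, 21)
```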
For this reason, Basler offers a board-level camera, the dart, which measures 27 x 27 mm, weighs 15 g, and offers two interfaces: USB 3.0 and BCON, Basler’s proprietary interface based on low-voltage differential signaling (LVDS). Basler will also offer an extension module that lets users operate the camera via a MIPI CSI-2 camera interface. “The result is that instead of using a sensor module, the designer can integrate a finished camera module with much less effort,” says Matthew Breit, Senior Consulting Engineer & Market Analyst at Basler.
Embedded vision components are being incorporated into a myriad of applications. Even so, a handful of industrial sectors are receiving most of the attention, largely due to economies of scale. These include automotive, medical, security, and consumer applications. Taken together, they spotlight key trends: developers are working to drive out cost and reduce system size while offering enhanced flexibility.
Automotive and Security
Advanced driver assistance systems (ADAS) capabilities such as mirror replacement, driver drowsiness detection, and pedestrian protection systems are pushing the need for enhanced image processing within automobiles. According to the research firm Strategy Analytics, most high-end mass-market vehicles are expected to contain up to 12 cameras within the next few years.
“In automotive applications, high-speed computing with low energy consumption is important,” says Ingo Lewerendt, Strategic Business Development Manager at Basler. For now, Basler intends to focus on embedded vision systems installed inside the vehicle. However, custom solutions seem almost inevitable as automakers offer up their own branded cabin configurations of entertainment and information systems.
FLIR Systems is also targeting the automotive market with its Automotive Development Kit (ADK) based on the company’s Boson thermal imaging camera core. Designed for developers of automotive thermal vision and ADAS, the uncooled VOx microbolometer detector-based camera cores are already employed on vehicles offered by GM, Mercedes-Benz, Audi, and BMW.
Systems built around such camera modules must process and analyze images quickly under the most extreme conditions, and do so while meeting stringent automotive safety standards. To address these challenges, Arm has developed the Mali-C71, a custom image signal processor (ISP) capable of processing data from up to four cameras and handling 24 stops of dynamic range to capture detail in images taken in bright sunlight or deep shadow. Reference software controls the ISP, sensor, auto-white balance, and auto-exposure. To further leverage the device into the automotive market, the company plans to develop Automotive Safety Integrity Level (ASIL)–compliant automotive software.
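For a sense of scale, each photographic stop doubles the representable intensity range, so 24 stops corresponds to a linear contrast ratio of $2^{24}$ (roughly 16.7 million to 1), or about

\[
20 \log_{10}\!\left(2^{24}\right) \approx 24 \times 6.02 \approx 144~\text{dB}
\]

of dynamic range.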
Embedded vision systems are finding their way not only into automobiles but also into the automatic number plate recognition (ANPR) systems that monitor them. While such systems have in the past used low-cost Internet-enabled cameras that rely on lossy compression standards such as H.264/H.265, these are gradually being replaced by digital systems that need no such compression. Systems such as Optasia Systems’ IMPS ANPR Model AIO incorporate a GigE Vision camera from Basler interfaced to an off-the-shelf embedded computer housed in a single unit. These types of cameras are especially suited for low-light applications such as ANPR since they offer high dynamic range and are relatively tolerant of exposure variations.
Medical Imaging
Two major applications of medical embedded systems are endoscopy imaging and X-ray imaging, which in turn enhance diagnosis and treatment. Use of embedded vision within the medical imaging market is growing rapidly, driven by a call for minimally invasive diagnostic and therapeutic procedures, the need to accommodate aging populations, and rising medical costs.
To develop portable products for this market, developers often turn to third-party companies for help. Zibra Corp. turned to NET USA for assistance in the design of its coreVIEW series of borescopes and endoscopes. NET developed a remote camera with a 250 x 250-pixel NanEye imager from AWAIBA and a camera main board that incorporates an FPGA to perform color adjustment and dead pixel correction. An HDMI output on the controller board allows images captured by the camera to be displayed at distances of up to 25 feet.
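Dead pixel correction of the kind handled by such an FPGA can be illustrated with a simple software equivalent: replace each pixel flagged in a defect map with the median of its valid neighbors. The sketch below is a minimal illustration of the general technique, not NET’s implementation, and the function name and defect map are assumptions for the example.

```python
import numpy as np

def correct_dead_pixels(img, dead_mask):
    """Replace pixels flagged in dead_mask (boolean array) with the median of
    their valid 3x3 neighbors; img is a 2-D raw or grayscale frame."""
    out = img.copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(dead_mask)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        window = img[y0:y1, x0:x1]
        valid = window[~dead_mask[y0:y1, x0:x1]]  # ignore neighboring dead pixels
        if valid.size:
            out[y, x] = np.median(valid)
    return out
```

An FPGA would perform the same neighborhood operation in a streaming fashion using line buffers rather than iterating over a stored frame.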
Studying the skeletal changes of lizards posed an interesting problem for Yoel Stuart, then a graduate student at Harvard University, who needed a portable X-ray system to use in the field. He worked with Rad-icon Imaging (now part of Teledyne DALSA) and Kodex to produce the final system. Rad-icon developed the Remote RadEye200, a 14-bit Shad-o-Box camera module with a GigE Vision adapter that connects to a portable host PC over Ethernet. Kodex integrated this X-ray camera with a 50 kVp portable X-ray source from Source-Ray.
Consumer Demands
“While machine vision integrators can pay $10,000 for a system designed for industrial machine vision applications, to break into consumer markets, the vision system can’t cost more than $500,” Basler’s Lewerendt says. “New markets want vision without the PC, the GPU, or a hard drive. They want the system reduced to the minimum.”
Reducing system cost, however, poses a conundrum for companies traditionally involved in the machine vision market, where high-resolution, high-speed cameras can cost thousands of dollars. In a teardown of the iPhone X by iFixit, researchers estimated that the TrueDepth sensor cluster used in the device costs Apple $16.70. Apple declined to comment on the price of these components, but such low costs are not unusual in high-volume consumer products.
While traditional machine vision camera vendors might not want to compete in the consumer market, there are other opportunities for vendors of smart camera modules. These include prosumer drones that can be used for industrial applications such as thermography to analyze heat loss from buildings. Then there’s BIKI from Robosea, an underwater drone in the form of a fish that employs a 3840 x 2160-pixel camera, 32 GB of memory, and on-board features such as automated balance and obstacle avoidance.
As embedded vision proliferates in automobiles, medical imaging, remote inspection, and consumer electronics, opportunities will continue to grow for both traditional and nontraditional vision vendors.
Highlights
Embedded vision system: A system that incorporates image capture, processing, and (often) I/O and display capability into a single unit.
Design and market considerations, high-volume products (e.g., PDAs): Sophisticated, expensive custom system-on-a-chip designs; custom operating systems and software; ultra-competitive pricing of commercial off-the-shelf (COTS) components and packaging; high marketing costs.
Design and market considerations, medium-volume products (e.g., ultrasound systems): COTS processor and I/O boards, and open bus interface standards to lower costs; some customization in I/O and display capability and FPGA design to interface to custom sensors and displays; COTS operating systems; custom software, often developed with COTS or open source code; less sensitive component and software costs; medium marketing costs.
Design and market considerations, low-volume products (e.g., R&D systems): COTS PC-based processor, I/O boards, open bus interface and software standards; PC-based COTS operating systems and often COTS customizable software; no marketing costs.
Embedded Vision Competitive Considerations
- Embedded vision systems are appearing in diverse products such as drones, automobiles, portable dental scanners, consumer robots, and virtual reality systems. These demand low-cost components to reach prosumer and consumer markets.
- Companies that offer camera, lighting, and PC-based camera interface products for machine vision systems will find it difficult to compete. While lower-cost camera modules and camera interface/processing modules can be used for such applications, vendor margins will be substantially reduced, making it likely that only a handful of established machine vision vendors will enter the market.
- Established software vendors will need to lower the cost of their products to compete in these markets due to the proliferation of easy-to-configure (if unsupported) open source code.
- Cloud-based computing will reduce the need for the on-board processing hardware currently used in embedded systems for applications that do not require deterministic, low-latency performance, such as automatic number plate recognition.