Industry Insights
Advances in Embedded Vision Pave Profitable Path
POSTED 06/26/2018 | By: Winn Hardin, Contributing Editor
It’s no secret that today’s iPhone has several thousand times the processing power of NASA’s Apollo command module computers, although that statement tells only half the story about the strengths of 1960s computing. No matter how you look at it, the power of modern computing compared with that of just a few decades ago is startling. Even more surprising than carrying an ocean of teraflops in our pockets is what we do with it. In the 1960s, computers were a novelty. Today, many people in the developed world carry at least one computing device.
The same relationship among power, size, and mass adoption applies to machine vision, and more specifically to embedded machine vision. An embedded vision system combines an image sensor, a powerful processor, and I/O in an application-specific package that is low in weight, energy consumption, and per-unit cost. Advances in embedded vision hardware and software have expanded opportunities in industrial machine vision as well as in medical imaging, autonomous vehicles, and consumer electronics, opening the world of machine vision to a universe of new applications.
Smart Cameras Lead the Way
Bridging traditional machine vision systems and embedded functionality, smart cameras continue to satisfy the appetite for compact, all-in-one vision capabilities. The North American smart camera market grew 25 percent year over year to $408 million in 2017, making it the fastest-growing category in the vision industry. Adoption of smart camera-based systems is projected to grow 8.9 percent annually from 2017 to 2025, but component cameras and imaging boards are also seeing their share of the embedded vision pie increase, to $189 million and $39 million, respectively.
Today’s smart cameras can handle a range of applications — from recognizing traffic signs to performing in-machine inspection — that require high performance and flexibility in a small form factor. Measuring 29 mm x 29 mm x 10 mm and weighing 10 g, the board-level version of FLIR’s Blackfly S camera is optimized for embedded systems such as handheld devices and unmanned aerial vehicles. The 5.0 MP USB3 Vision camera, scheduled for release in Q3 2018, provides a feature set including automatic and manual control over image capture and on-camera preprocessing.
Meanwhile, Basler offers the dart, a board-level camera that measures 27 mm x 27 mm, weighs 15 g, and offers two interfaces: USB 3.0 and the camera maker’s proprietary BCON for MIPI interface, which is compatible with the GenICam machine vision standard. With the latter interface, “the result is that instead of using a sensor module, the designer can integrate a finished camera module with much less effort,” says Matthew Breit, Senior Consulting Engineer & Market Analyst at Basler.
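As a rough illustration of what GenICam compliance buys a developer, the sketch below grabs a frame from any GenICam-compliant camera using the open-source harvesters library, a generic GenTL consumer. The .cti producer path is hypothetical (it normally ships with the camera vendor’s SDK), and harvesters method names have shifted across releases, so treat the exact calls as assumptions rather than copy-paste code.

```python
# Minimal sketch: frame acquisition from a GenICam-compliant camera via the
# open-source "harvesters" library (a GenTL consumer). The .cti producer path
# below is hypothetical, and method names have varied across harvesters
# releases; this follows the current-style API.
from harvesters.core import Harvester

h = Harvester()
h.add_file('/opt/vendor_sdk/lib/producer.cti')   # vendor GenTL producer (assumed path)
h.update()                                       # enumerate connected cameras

ia = h.create(0)                                 # open the first camera found
# Manual exposure control through the standard GenICam node map; ExposureTime
# is a standard SFNC feature name, expressed in microseconds.
ia.remote_device.node_map.ExposureTime.value = 5000.0

ia.start()
with ia.fetch() as buffer:                       # grab one frame
    component = buffer.payload.components[0]
    frame = component.data.reshape(component.height, component.width)
    print('mean intensity:', frame.mean())       # e.g., a quick brightness check
ia.stop()
ia.destroy()
h.reset()
```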
Developing the Complete Package
Achieving small, fast, and power-efficient smart cameras and other embedded vision systems relies on advances in hardware, software, and the technologies that support them. When it comes to image processing, many applications call for heterogeneous platforms that combine the power of a central processing unit (CPU) with a field-programmable gate array (FPGA), graphics processing unit (GPU), or low-power ARM core.
Combining CPUs with a GPU can significantly reduce the processing time for an image set. For example, Qtechnology A/S uses an accelerated processing unit (APU) in its smart camera platforms that combines the GPU and CPU on the same die. The GPU is a massively parallel engine that applies the same instructions across large data sets (in this case, pixels) at the same time. Performance can be further increased by pairing the APU with an external, discrete GPU, adding processing resources to support even more intensive vision tasks.
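As a concrete illustration of this data-parallel pattern, the sketch below uses OpenCV’s transparent API in Python: wrapping an image in cv2.UMat lets OpenCV dispatch supported operations to an OpenCL-capable GPU when one is present, falling back to the CPU otherwise. This is a generic example, not Qtechnology’s actual pipeline, and the input file name is hypothetical.

```python
# Sketch: heterogeneous CPU/GPU processing with OpenCV's transparent API.
# Wrapping an image in cv2.UMat lets OpenCV offload supported operations to
# an OpenCL-capable GPU (integrated or discrete) and fall back to the CPU.
import cv2

cv2.ocl.setUseOpenCL(True)
print('OpenCL available:', cv2.ocl.haveOpenCL())

img = cv2.imread('part.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input image
u = cv2.UMat(img)                     # move the data toward the GPU

# The same instructions applied across all pixels at once -- the data-parallel
# pattern GPUs excel at.
blurred = cv2.GaussianBlur(u, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

result = edges.get()                  # copy the result back to host memory
cv2.imwrite('edges.png', result)
```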
When compared with GPUs, FPGAs produce less heat for compact applications because they run at slower clock speeds, but they also demand significant programming knowledge. The VisualApplets product line from Silicon Software aims to simplify the chip configuration process by streamlining the development environment for FPGAs. Recently, Silicon Software reported a deep learning implementation on FPGA architecture capable of classifying six different defects on images of metallic surfaces with 99.4% accuracy and an image throughput rate of more than 220 Mbps.
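Silicon Software’s network is mapped onto FPGA fabric through VisualApplets, but the classification task itself is easy to picture in software. The sketch below defines a small six-class convolutional classifier in Keras; the input size, depth, and layer widths are assumptions chosen purely for illustration, not the hardware-mapped design described above.

```python
# Generic sketch of a six-class surface-defect classifier (Keras). Input size
# and layer widths are illustrative assumptions; the FPGA implementation
# described above is a different, hardware-mapped design.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),          # grayscale patches of a metal surface
    layers.Conv2D(16, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(6, activation='softmax'),      # one output per defect class
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```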
FPGAs are used extensively in embedded cameras for medical imaging because they reduce component costs and power consumption while allowing rapid development of camera interfaces such as CoaXPress and Camera Link, says AIA Vice President Alex Shikany. Embedded vision is becoming ubiquitous in endoscopy, surgical microscopy, dermatology, ophthalmology, and dentistry. In fact, research firm MarketsandMarkets expects the global medical camera market to reach $3.69 billion by 2021, up from $2.43 billion in 2016, at a compound annual growth rate of 8.7 percent.
A pair of new product releases from OmniVision Technologies illustrates the demand for embedded vision technology in medical imaging, particularly for point-of-care diagnostics and treatment. Designed for disposable and reusable endoscopes and catheters, the OH01A medical image sensor provides 1280 x 800 resolution at 60 frames per second in a 2.5 mm x 1.5 mm package.
Meanwhile, the OVMed image signal processor (ISP) for medical, veterinary, and industrial endoscopy applications integrates with OmniVision’s compact CMOS image sensors and features a short system delay of less than 100 ms. The ISP’s video processing unit comes in two versions: one that fits inside an endoscope handle and an advanced option that resides in the camera control unit.
The automotive industry is following a similar trajectory with advanced driver assistance systems (ADAS). ADAS components include cameras, image processors, system processors, ultrasonic sensors, lidar, radar, and IR sensors, and they are responsible for a number of complex tasks — among them driver drowsiness detection, lane-change assistance, pedestrian identification, and traffic sign recognition.
Not only do these applications require high-performance image processing, but that processing must also happen under extreme conditions and within stringent automotive safety standards. To address these challenges, ARM has developed the Mali-C71, a custom ISP capable of processing data from up to four cameras and handling 24 stops of dynamic range to capture detail in images taken in bright sunlight or deep shadow. Reference software controls the ISP, sensor, auto white balance, and autoexposure. To push the device further into the automotive market, the company plans to develop software compliant with Automotive Safety Integrity Level (ASIL) requirements. The Mali-C71 represents just one ADAS component in a market expected to reach $89.3 billion in annual revenue by 2025.
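The auto white balance an ISP performs can be illustrated with the textbook gray-world algorithm: assume the scene averages to gray and scale each color channel so its mean matches the global mean. The NumPy sketch below shows the idea; the Mali-C71’s hardware pipeline is, of course, far more sophisticated than this conceptual version.

```python
# Conceptual sketch: gray-world auto white balance in NumPy. Production ISPs
# implement far more sophisticated, hardware-pipelined versions; this only
# illustrates the idea of per-channel gain correction.
import numpy as np

def gray_world_awb(img: np.ndarray) -> np.ndarray:
    """Scale each color channel so its mean matches the global mean."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # mean of each channel
    gains = channel_means.mean() / channel_means      # per-channel gain
    balanced = img * gains                            # broadcast over all pixels
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Usage with a random test image standing in for a raw frame:
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
corrected = gray_world_awb(frame)
```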
Simplifying the Complex
With global tech giants staking their claim in embedded vision’s future, developers can expect simpler deployment and management of the technology. In May 2018, Microsoft announced a vision AI developer kit based on the Qualcomm Vision Intelligence Platform. The kit brings together the hardware and software required to develop camera-based IoT solutions using Azure IoT Edge and Azure Machine Learning. The goal is to deliver “real-time AI on devices without the need for constant connectivity to the cloud or expensive machines,” according to a Microsoft blog post.
Another recently announced toolkit is Intel’s OpenVINO, which stands for Open Visual Inference & Neural Network Optimization. OpenVINO provides common software tools and optimization libraries to enable write-once, deploy-everywhere software that is attractive to embedded system developers. The toolkit ports vision and deep learning inference capabilities from popular frameworks such as TensorFlow, MXNet, and Caffe, as well as OpenCV, to Intel FPGAs and Movidius vision processing units.
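To give a flavor of that deploy-everywhere workflow, the sketch below loads a model already converted by OpenVINO’s Model Optimizer and runs inference through the Inference Engine Python API. The file names are hypothetical, and module and attribute names have shifted across OpenVINO releases, so the exact calls should be read as assumptions.

```python
# Sketch: running a converted model with the OpenVINO Inference Engine Python
# API (IECore style; module and attribute names vary by release). The model
# files are hypothetical outputs of OpenVINO's Model Optimizer.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='model.xml', weights='model.bin')
input_name = next(iter(net.input_info))      # on older releases: next(iter(net.inputs))

# Swapping 'CPU' for 'MYRIAD' (Movidius VPU) or 'FPGA' retargets the same
# network to different Intel hardware -- the write-once, deploy-everywhere idea.
exec_net = ie.load_network(network=net, device_name='CPU')

image = np.zeros((1, 3, 224, 224), dtype=np.float32)  # NCHW placeholder input
result = exec_net.infer(inputs={input_name: image})
```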
For many traditional machine vision companies, the PC- or frame grabber–based industrial inspection system remains the bread and butter of their business. But visionary stakeholders are already capitalizing on new opportunities far beyond the factory floor, empowered by advances in embedded vision.