Industry Insights
Embedded Vision Advances Come to Fruition in 2019
POSTED 02/25/2019
When the first camera appeared on a phone in 2000, it marked the introduction of embedded vision to mainstream consumer markets. That camera took photos at 0.11 MP resolution. Now, nearly two decades later, embedded vision is ubiquitous in mobile devices, with Sony planning to release its 48 MP IMX586 sensor this year. But it’s not just the consumer market that hungers for embedded vision; the technology has proliferated into markets ranging from autonomous vehicles to warehouse logistics. In fact, experts in embedded vision see 2019 as a major turning point, thanks to technological advances that are enabling new products and expanding market opportunities.
The continued miniaturization of silicon and chips is perhaps the biggest factor that has allowed embedded vision to extend its reach. This technological advance “has significantly increased the processing power per (chip) area and has decreased the relative energy demand of those chips,” says Gion-Pitschen Gross, Product Manager for Allied Vision.
Gross also notes a strong rise in energy-efficient design, championed especially by ARM. “All of this has increased processing power in a small footprint, allowing us to have powerful embedded boards that are able to process the image data, which is increasing as the resolution of the cameras increases,” he says. “And it has only now become possible to do the required advanced processing on an embedded board.”
On the software side, Gross credits the rising adoption of the Linux operating system and the OpenCV computer vision library, both of which are free, open source, and stable. Designed with real-time and embedded systems in mind, OpenCV contains more than 2,500 algorithms for tasks such as object detection, generating 3D point clouds from stereo cameras, image stitching, and facial recognition.
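To make one of those tasks concrete, here is a minimal Python sketch (not from the article) of the stereo-to-point-cloud workflow Gross mentions. The image files and the reprojection matrix Q are placeholders; a real system would obtain Q from stereo calibration.

```python
# Minimal sketch: stereo disparity to a 3D point cloud with OpenCV.
import cv2
import numpy as np

# Placeholder rectified stereo pair; real input comes from calibrated cameras.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching over 64 disparity levels with a 15x15 window.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Hypothetical reprojection matrix (principal point at 320x240, f = 700 px,
# 60 mm baseline); cv2.stereoRectify would supply this in practice.
Q = np.float32([[1, 0, 0, -320],
                [0, 1, 0, -240],
                [0, 0, 0,  700],
                [0, 0, 1 / 0.06, 0]])
points_3d = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3 array of XYZ points
```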
When it comes to industrial use, companies are choosing embedded vision because of its simplicity and compactness, says Yvon Bouchard, Asia Pacific Technology Director, Vision Solutions, for Teledyne DALSA. Applications include robotic assembly, pick and place, and automated optical inspection.
The key is to make these systems user-friendly, so that a user does not need extensive programming knowledge to set up an embedded vision system. “Industry needs to consider maintenance and deployment of the vision solution with the greatest simplicity possible,” Bouchard says.
Straddling Two Worlds
Embedded vision technology’s transition from consumer to industrial markets represents another fundamental shift. “Advancements in CMOS sensor technology on the sensor side, coupled with advanced development in embedded PCs that make them much faster, smaller, and more powerful, have established a new area from an industrial perspective,” says Tim Coggins, Head of Sales & FAE Module Business at Basler Inc.
This shift between two worlds creates what Coggins calls a middle space. “Companies seeking embedded vision want to enjoy the price point of what the consumer market offers, but they need the reliability and robustness of what machine vision has provided over the years,” he says. Applications in this space are broad, spanning iris and facial recognition in retail, IoT edge devices, logistics, security, and intelligent traffic systems.
The pull between the two worlds is also evident in companies’ vision purchasing habits. “When someone wants to design a traditional vision system, they typically start with the camera and figure out what resolution they need or what kind of optical challenges they’re going to have,” Coggins says. “In the embedded vision market, they shop for the processor first and the vision sensors second.”
Basler offers three levels of embedded vision systems, depending on a customer’s application needs. At the entry level is an off-the-shelf, single-board camera with a USB 3.0 interface. The next step toward a fully embedded system is the BCON for LVDS interface, geared primarily toward FPGA-based, system-on-module applications.
Finally, the dart camera module with Basler’s BCON for MIPI (Mobile Industry Processor Interface) taps the image signal processor (ISP) on Qualcomm’s Snapdragon system-on-a-chip (SoC). “By utilizing the tools in that ISP, we are making the camera module as lean as possible and therefore bringing down the overall cost of the entire vision system,” Coggins says.
SoC Advances
Embedded vision owes much of its success to the SoC, a single chip that integrates the core components of a computer and delivers the three primary benefits of embedded vision system design: small form factor, low energy consumption, and powerful processing. Furthermore, SoC architectures are starting to take advantage of advances in deep learning.
Allied Vision’s recently released ALVIUM technology combines an SoC designed for embedded computer vision with a large image-processing library. “Traditional embedded vision systems had some sort of sensor module connected to an embedded board,” Gross says. “All of the image processing takes place on the embedded board. That can pose a problem because the embedded boards have to be small, and there is not enough power to do all of the required image processing on them.”
With ALVIUM technology, which appears in every camera in the series, onboard preprocessing corrects and optimizes images inside the camera, freeing the embedded board to run the image-processing algorithms. “That some of the image preprocessing can now be done on the camera itself is a big change in thinking,” Gross says. The technology enables functions such as sharpness adjustment, debayering, and vignetting correction.
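For readers unfamiliar with those terms, the hedged Python sketch below shows host-side equivalents of two of the steps that such cameras can instead perform on board; the Bayer pattern, file name, and vignetting falloff model are all assumptions, not Allied Vision’s implementation.

```python
# Illustrative sketch (not a vendor API): host-side debayering and
# vignetting correction, the kind of work in-camera preprocessing offloads.
import cv2
import numpy as np

raw = cv2.imread("raw_bayer.png", cv2.IMREAD_GRAYSCALE)  # placeholder Bayer frame

# Debayering: interpolate the assumed RGGB mosaic into a full-color image.
rgb = cv2.cvtColor(raw, cv2.COLOR_BayerRG2RGB)

# Vignetting correction: multiply by a radial gain map that brightens the
# corners. The cos^4 falloff model here is an assumption.
h, w = raw.shape
y, x = np.indices((h, w))
r = np.hypot(x - w / 2, y - h / 2) / np.hypot(w / 2, h / 2)  # normalized radius
gain = 1.0 / np.cos(np.arctan(r * 0.7)) ** 4                 # inverse cos^4 falloff
corrected = np.clip(rgb * gain[..., None], 0, 255).astype(np.uint8)
```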
The Alvium 1500 and Alvium 1800 series cameras include four sensor options ranging from 0.5 to 5 MP, with plans to add sensors of up to 21 MP. The cameras, which support USB3 Vision and MIPI CSI-2 interfaces, are well suited to applications such as drones, vending machines, ATMs, and robotics, all of which require small, energy-efficient systems.
Another significant entry in the SoC field is Nvidia’s Jetson AGX Xavier. The module brings vision processing to the edge, making it well suited to industrial embedded applications such as pick and place, robotic assembly, and automated optical inspection. Delivering 32 TeraOPS (32 trillion operations per second), Jetson AGX Xavier offers the performance of a PC host system in a 100 mm x 87 mm form factor, with selectable low-power modes at 10 W, 15 W, and 30 W.
All Systems Go
With all the developments taking place in embedded vision, industry insiders expect 2019 to be a turning point. Coggins sees this momentum on several fronts, most notably the arrival of more powerful embedded processors. “Three or four years ago, we were looking at an embedded system as a self-contained device with one camera,” Coggins says. “But with powerful embedded processors on the way, we are seeing multi-camera system capability.”
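As an illustration of that shift (a sketch under assumed device indices and frame counts, not any vendor’s API), a common pattern on an embedded host is one capture thread per camera feeding a shared queue:

```python
# Hedged sketch of a multi-camera pipeline: one capture thread per camera
# feeding a shared, bounded queue on the embedded host.
import queue
import threading
import cv2

frames = queue.Queue(maxsize=8)  # bounded so slow processing backpressures capture

def grab(device_index):
    cap = cv2.VideoCapture(device_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frames.put((device_index, frame))  # blocks when the queue is full
    cap.release()

for idx in (0, 1):  # two cameras assumed; extend the tuple for more
    threading.Thread(target=grab, args=(idx,), daemon=True).start()

# Consumer: handle frames from either camera as they arrive.
for _ in range(100):  # process a fixed number of frames in this sketch
    device_index, frame = frames.get()
    # ... run detection or inspection on `frame` here ...
```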
Coggins also expects 2019 to be the year that more companies pull the trigger on embedded vision projects. “Up until now, it’s been an educational process,” Coggins says. “It’s no longer just a discussion or a theoretical idea of what’s to come. We are now fully engaged with numerous projects.”
Another factor impacting embedded vision is the role that deep learning will play. “End-user-configurable applications that exploit AI to quickly train and deploy vision solutions will have a great impact on the industry,” Bouchard says. Products could range from portable medical imaging devices to unmanned aerial vehicles identifying crop disease.
But perhaps an even bigger opportunity awaits embedded vision technology: the unknown. “In general, we will see vision problems that were never considered with a traditional offering, and we will need to look at them with an AI approach involving deep learning,” Bouchard says.