Embedded Vision Champions Design Flexibility, Ease of Use
| By: Winn Hardin, Contributing Editor
Call it the trifecta of machine vision. Customers across a wide range of industries want vision systems that are smaller, cheaper, and faster and more powerful. Embedded vision heeds the call with small cameras and application-specific processing, resulting in a compact system that is big on processing power but low on per-unit cost and energy consumption.
“Embedded vision expands the reach of robust computer vision from industrial applications to areas outside the factory where smaller processors are a better fit than classic PCs,” says Matthew Breit, Senior Consulting Engineer & Market Analyst at Basler Inc. (Exton, Pennsylvania). Application examples include portable medical devices, mobile robotics, and passport kiosks.
For these applications to come to fruition, manufacturers of embedded vision components are providing the embedded system designer or integrator with flexibility and the necessary resources.
“Today if someone is building an embedded vision system, they would buy only the image sensor and task their engineering team with designing the supporting electronics and firmware,” Breit says.
That process can be time-consuming, taking weeks, months, or even longer, depending on the project's requirements. “Our job is to look at that embedded designer's situation and find ways to make their life easier, such as taking care of the entire image sensor integration inside the camera itself,” Breit says.
The Basler dart board-level camera, measuring 27 mm x 27 mm and weighing 15 g, aims to make a wide range of embedded vision systems possible. The camera offers two interfaces: USB 3.0 and BCON, Basler’s proprietary interface based on LVDS (low-voltage differential signaling).
“Besides taking on the sensor integration, BCON allows the embedded system designer to access the camera in a very direct way without the overhead of a PC-style interface like USB 3.0 or Gigabit Ethernet,” Breit says. “The result is that instead of handling a raw sensor, the designer can integrate a completely finished camera module as they would do with any other electrical device. That means adding vision with much less effort than before.”
Starting in the fall, Basler will offer an extension module that lets users operate the dart via MIPI CSI-2, the most common camera interface for embedded systems.
The dart is used in a broad range of applications, including automated license plate reading. “We’re also seeing a larger demand for handheld medical devices,” Breit says. “It’s much easier on the patient to sit next to a doctor with a handheld scanner versus sitting in a large noisy machine.”
Furthermore, embedded vision opens the door to applications that once seemed unachievable. “We've all encountered 'dream' applications over the years, whether it's a camera in a refrigerator, or the idea of a prosthetic eyeball. But now some of those dreams are getting closer to becoming reality. Of course, 'bionic eyes' are still down the line, but you can buy the refrigerator today.”
A Hub for Developers
Embedded system designers rely on multiple vendors to develop an embedded vision system, but Basler Inc. identified a critical gap within this group. “Whether you’re talking about lenses or image processing algorithms, there are hundreds of forums online about individual topics and tools, but we didn’t see one place which brings the entire embedded vision community together,” says Basler’s Matthew Breit.
So Basler took the initiative and launched the website ImagingHub.com. Breit calls the agnostic networking platform a one-stop shop without the shop. “The site is designed for embedded engineers specifically looking to ‘get vision on board,’ so to speak, a place where they can see all the different companies supplying components, find project examples that the community has posted or start their own, and ask a question in the forum,” Breit says.
The website, which has been live for almost a year, receives 10,000 visitors a month. “We think it will help make embedded vision easier and more accessible than it currently is,” Breit says.
He cites the prospect of interactive digital signage. “When a company pays for advertising space at the bus stop, they usually put up a poster and that’s the end of it,” says Breit. “But imagine the potential of actually engaging the person while they're waiting.”
In this scenario, a camera would identify who is looking at the ad and gauge their reactions. Even a few years ago, an application like that would require a PC, cables, and lighting. “But now, since the power of the processors has improved, and the size and the cost of everything has gotten smaller, we're seeing signage like this today in our shopping malls,” Breit says.
Flexing the Embedded Muscle
For the embedded vision systems and smart cameras it develops, Teledyne DALSA (Waterloo, Ontario) emphasizes flexibility, ease of use, and a small footprint. The GEVA 3000 embedded vision system provides an alternative to standard PC systems for inspection tasks in harsh industrial environments. GEVA, which accommodates Teledyne DALSA’s Genie Nano GigE Vision CMOS area scan camera, offers customers a choice between two application software suites: the wizard-based iNspect for users requiring easy setup, and Sherlock for end users who need flexibility in creating a graphical application.
Meanwhile, BOA Spot vision sensors combine Teledyne DALSA’s BOA vision system with integrated LED lighting, lens cover, and software. The resulting system is low cost, quick to set up, and easy to integrate with equipment on the factory floor. Accessible through a simple point-and-click interface, BOA Spot's embedded vision tools enable automated inspection and identification applications.
“Barcode reading is one of the biggest sellers for us, and we have an improved version that runs on BOA Spot,” says Bruno Menard, Software Program Manager, Smart Products Division at Teledyne DALSA. “It is much faster, more robust, and has high read rates.”
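The article doesn't describe how Teledyne DALSA's decoder works. As a generic illustration of what any 1-D barcode reader must do at its core, the toy sketch below (function names and the sample scanline are invented for this example) thresholds a grayscale scanline, collapses it into bar/space runs, and classifies each run as narrow or wide; a real decoder would then map that symbol stream to characters according to a symbology such as Code 39.

```python
# Toy sketch of the first stage of 1-D barcode reading; not any
# vendor's algorithm. Dark pixels are bars, light pixels are spaces.

def runs(scanline, threshold=128):
    """Collapse a grayscale scanline into (is_bar, width) runs."""
    bits = [pixel < threshold for pixel in scanline]  # dark = bar
    out = []
    for bit in bits:
        if out and out[-1][0] == bit:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([bit, 1])     # start a new run
    return [(bit, width) for bit, width in out]

def classify(run_list):
    """Label each run narrow ('N') or wide ('W') relative to the
    smallest run, which estimates the barcode's module width."""
    unit = min(width for _, width in run_list)
    return ['W' if width >= 2 * unit else 'N' for _, width in run_list]

# Synthetic scanline: wide bar, narrow space, narrow bar,
# wide space, narrow bar (0 = black, 255 = white).
scanline = [0]*4 + [255]*2 + [0]*2 + [255]*4 + [0]*2
print(classify(runs(scanline)))  # -> ['W', 'N', 'N', 'W', 'N']
```

In practice the hard part is exactly what the quote alludes to: staying robust when lighting, blur, and perspective distort the run widths, which is where a production decoder earns its read rate.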
Deployment of embedded vision systems that use infrared imaging, whether for inspection in a tight industrial space or on a surveillance drone, is also on the rise. In response, Teledyne DALSA introduced the Calibir uncooled long-wave infrared (LWIR) camera, which measures 29 mm x 29 mm. While Calibir supports GigE Vision output, customers who want a different interface, such as analog or USB3 Vision, can use Teledyne’s engine as the front-end architecture and plug in their own back end.
In addition to traditional machine vision applications, Calibir is being used in outdoor applications that demand a small form factor, including solar panel inspection by an IR camera-equipped drone and night-vision hunting, where the camera is mounted on the firearm.
Teledyne is showing customers the opportunities offered by embedded vision at its Imaging Possibility hub. “Smart cameras coupled with IoT [Internet of Things] will bring many possibilities,” Menard says. “For homeowners, it might start with remote home surveillance and control.”
Diving into Deep Learning
While many embedded vision installations are taking place outside the factory, the technology is finding its stride in some manufacturing applications — key among them robotics. As a turnkey machine vision integrator in 2D and 3D applications, Integro Technologies (Salisbury, North Carolina) integrates embedded industrial vision that acts as the robot’s eyes for tasks such as pick and place, load and unload, and vision-based quality inspection.
“Robot-mounted vision is not embedded to the degree that a driverless car is, but it serves the same purpose with many of the same software tools and can greatly enhance automation capability,” says Scott Holbert, Sales Engineer at Integro Technologies.
Holbert sees another trend that could enhance industrial inspection: deep learning. “The computer is using smarter and smarter algorithms and interfaces to discern what it is looking at,” says Holbert. “Machine vision and embedded vision systems, without being programmed like a traditional computer, are learning by example and increasing their accuracy over time.”
Unlike traditional machine learning, which relies on manual feature extraction, deep learning in machine and embedded vision learns features directly from images. “Whenever you have a human making a decision based on what they see, it opens you up to a lot of variability,” Holbert says. “We have cameras that can see better than the human eye, computers that make decisions very quickly, and systems that continue to learn and outperform humans doing certain tasks.”
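To make the contrast concrete, the sketch below shows the hand-crafted feature extraction step that deep learning replaces: convolving an image with a fixed, human-designed kernel (here a Sobel-style vertical-edge filter). A deep network performs the same convolution operation, but learns the kernel values from labeled examples instead of having an engineer choose them. The kernel, toy image, and function names are illustrative only, not any vendor's implementation.

```python
# Manual feature extraction, the step deep learning automates.
# Pure Python, no dependencies.

def convolve2d(image, kernel):
    """Valid-mode, correlation-style 2D convolution (the variant
    CNN frameworks actually compute)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A fixed vertical-edge kernel: a feature a human engineer designed.
# In a CNN, these nine numbers would be learned from training data.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

# Toy 4x4 image with a hard vertical edge down the middle.
image = [[0, 0, 9, 9],
        [0, 0, 9, 9],
        [0, 0, 9, 9],
        [0, 0, 9, 9]]

features = convolve2d(image, SOBEL_X)  # strong response at the edge
```

The quoted point about "learning by example" is precisely that a network trained on labeled images discovers its own stack of such filters, often ones no engineer would have designed by hand.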
As cameras continue to shrink in size and cost without sacrificing power, the Embedded Vision Alliance projects a “rapid proliferation of embedded vision technology into many kinds of systems.” Because of economies of scale, the automotive, medical, and retail sectors will continue to drive embedded vision development.
The potential for embedded vision is vast, and embedded designers will develop creative and unique solutions as time goes on. “This is the most exciting aspect for me,” says Basler’s Breit. “Imagine if you put the best brushes and paints in the hands of these artists. What masterpieces will we see?”