Embedded Vision Touts Image Quality, Heterogeneous Processing Platforms as Key Elements to Medical Application Success
| By: Winn Hardin, Contributing Editor
Life sciences and medical applications are becoming a bigger part of the machine vision market. In 2017 during the A3 Business Forum, manufacturing economist Alan Beaulieu identified the medical industry as one of the fastest-growing markets driving manufacturing. Skyrocketing demand for services, driven by an aging populace and coupled with the medical market’s location on the upside of the technology adoption curve, bodes very well for makers of advanced, automated medical solutions.
While machine vision has helped to automate blood screening and similar microscope-based point-of-care solutions, there is also a growing demand for highly optimized medical instruments — for example, embedded systems — that can deliver a solution regardless of location or user expertise. Choosing between a traditional machine vision system and a fully embedded system typically depends on volume, scalability, and system requirements.
How Optimized Do You Want It?
“When you design a one-off system, the cost of components matters less,” says Darcy Bachert, CEO of Prolucid Technologies Inc., an embedded and vision systems integrator. “Using a PC means that if you need more power, you have more flexibility, including adding GPUs [graphics processing units] for high-speed analysis based on what the inspection requirements are. But the general guidance we give to our customers is that there’s a tradeoff between cost and performance that comes with each processing type, from PC-based to embedded or system-on-module [SoM] devices. We help our customers choose the best performance at the lowest cost while avoiding decisions that will add undue constraints to what the solution can do.”
For example, says Bachert, if you have enough power available to run GPUs, that will likely be the most cost-effective solution for handling more intensive image processing. But when power, space, and thermal constraints point to embedded — for instance, can all those GPUs fit into a device smaller than a laptop? — you take that route.
Embedded solutions inevitably lead to higher design costs — not just because of the tight constraints and tradeoffs, but because miniaturization, embedded development, and debugging almost always take longer and cost more. Also, embedded systems typically need to be more rugged, which takes more engineering resources.
“Hardening can mean operating in both dark and bright environments,” Bachert says. “It can be high and low temperatures. It can also mean working with a technician who has been trained, and another who hasn’t. When it comes to higher-volume embedded solutions, if there is an issue or a bug, it will be found. One-off systems don’t have to be as hardened because you know everyone who operates them will be properly trained. That’s not always the case when it comes to high-volume embedded solutions. It’s only when you have the highest volumes that a fully customized embedded solution makes sense, because saving a few dollars per device will really matter.”
Making Babies the New-Fashioned Way
The machine vision market abounds with case studies of one-off microscopy solutions. French company i2S - Innovative Imaging Solutions’ recent solution for monitoring fertilized embryos illustrates medium-level embedded integration to solve a higher-volume, but not mass-market, medical application.
“When it comes to in vitro fertilization, eggs are fertilized outside the womb,” says Xavier Datin, CEO of i2S, a company specializing in image capture and processing, applied in particular to embedded systems for the medical segment. “Biologists periodically load the cells into slides, load the slides into microscopes, and count the number of cells versus fragments in the embryo as a way to gauge the embryo’s development and viability. Each time the cells are transferred from the storage unit to the microscope, they are put under stress.”
i2S has developed an automated solution that leverages a custom 71-megapixel camera the company originally developed for high-speed and high-quality document scanning. The system documents the embryo’s growth over time without having to manipulate the cell.
“There are a couple of systems on the market for providing this sort of service, but our ability to design high-quality image acquisition systems using specialized lighting and 3D machine vision allows us to provide much better data on embryo growth than what’s on the market today and eventually increase the chance of fertility,” Datin says.
This is just one of several automated cellular inspection systems that i2S has created for cancer, blood analysis, and related applications. Dental imaging is another growth area for embedded machine vision solutions, as more corporations offer fast 3D scanning for prosthetics and other applications.
Portable Medical Machine Vision
For high-volume applications such as portable blood analysis, market scale lends itself to a fully optimized embedded solution.
“We’ve essentially miniaturized blood analysis to a handheld portable device form factor for one of our customers,” says Prolucid’s Bachert. “The solution uses a Zynq processor with dual-ARM process plus FPGA array. We were able to port the sensor output directly to the FPGA for preprocessing, and then move the filtered data to one of the ARM cores for high-level image processing. The final ARM core provides application logic.”
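The division of labor Bachert describes — per-pixel preprocessing in the FPGA fabric, higher-level analysis on one ARM core, and device logic on the other — can be sketched in software. The following Python sketch is purely illustrative (it is not Prolucid's firmware); the function names, thresholds, and the toy "cell count" analysis are all hypothetical stand-ins for the real stages.

```python
# Illustrative model of a Zynq-style split pipeline: sensor data flows
# through an FPGA preprocessing stage, then an ARM image-processing
# stage, then an ARM application-logic stage. All names and numbers
# here are hypothetical.

def fpga_preprocess(raw_frame, dark_level=8):
    """Stage 1 (FPGA fabric): fixed-function, per-pixel filtering --
    here, subtract a dark offset and clip at zero."""
    return [max(p - dark_level, 0) for p in raw_frame]

def arm_image_processing(frame, cell_threshold=100):
    """Stage 2 (first ARM core): high-level analysis of the filtered
    frame -- a toy count of above-threshold pixels standing in for
    cell detection."""
    return sum(1 for p in frame if p >= cell_threshold)

def application_logic(cell_count, min_cells=3):
    """Stage 3 (second ARM core): turn the analysis result into a
    device-level decision."""
    return "sample OK" if cell_count >= min_cells else "re-scan"

if __name__ == "__main__":
    raw = [5, 120, 200, 90, 250, 30, 180, 110]   # mock sensor line
    filtered = fpga_preprocess(raw)
    count = arm_image_processing(filtered)
    print(application_logic(count))              # -> sample OK
```

The point of the structure is that each stage can live on the hardware best suited to it: the stateless per-pixel math parallelizes naturally in programmable logic, while the branching decision code stays on a general-purpose core.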
In cases like this, Prolucid typically works with the customer to develop the prototype, but for the highest-volume applications, it often makes sense to take 100 percent of the product development in-house.
To help differentiate between when it makes sense for customers to codevelop versus depend exclusively on the integrator, Bachert mentions optical coherence tomography (OCT) or magnetic resonance imaging (MRI) systems as examples.
“The customer understands their application really well in the clinical environment,” Bachert says. “But they’re not always experts in device and data security or communication protocols, and a lot of times they’re not strong on the core-application development side, either. Customers need to know what their core IP is and decide whether they have the skills to supervise specialists in other disciplines. Many times, we’ll develop the platform, but the customer keeps the core IP and integrates that last step themselves.”