Cloud, Fog, Edge, and What They Mean for Machine Vision
By Winn Hardin, Contributing Editor
[Photo credit: Winsystems.com]
Machine vision traditionally resides at the edge of the industrial network. In general, this means that machine vision solutions operate independently. They may receive data from a plant network to accomplish a task and share results with the network or downstream devices, but otherwise, these systems stand alone. They could do their job even if the plant network disappeared.
The cloud, of course, refers to the remote storage or processing of data (e.g., on a server farm). It requires high-bandwidth, secure data pipelines to transfer data from the edge to the cloud and vice versa.
As you might expect, the fog lies in between the edge and the cloud. Sometimes called a hybrid solution, the fog often refers to a DIY “cloud” server farm housed inside the manufacturing plant, providing the opportunity for centralized processing and storage of images from machine vision systems as well as other production data.
But the fog also can refer to the uncertainty that manufacturers face when trying to adopt machine vision — a technology they don’t fully understand; a technology further obfuscated in public awareness by the hype surrounding technologies such as cloud, deep learning (DL), and artificial intelligence. By illustrating the pros and cons of keeping data close to home, versus sending it away as part of a cloud integration, this article hopes to cut through the fog surrounding today’s hottest manufacturing technologies and give machine vision customers a solid foundation for choosing the best vision and data storage solutions.
Three Paths to the Cloud
Machine vision users have long been proponents of cloud technology, especially when the manufactured products pose a danger to the public (e.g., food and medicine) or represent significant investment (e.g., electronics and aerospace components). In these cases, manufacturers regularly use automated inspection to improve product quality and provide visual records in the event of a recall or other liability issue. Storing data in the cloud provides a simple way for a company to protect itself without incurring additional IT personnel costs or hardware and software outlays.
Today, the relationship between the cloud and machine vision is less clear cut. With the advent of DL, where computationally intensive analysis is performed in the cloud rather than at the edge of the network, cloud discussions have turned to what the cloud can do, not just what it can store. So today, when machine vision designers ask the same question they have asked themselves for the past 10 years, there is a lot more interest in the correct answer. In short, the primary question remains: Is the network that connects the cloud to the edge capable of real-time data transfers, and is the cloud secure enough to protect my business data and, therefore, my business in general?
Just five years ago, the answer to that question was a solid “no.” Today, the answer is “it depends.” And while the answer is still unsatisfactory to engineers and executives, it does describe several important opportunities.
“Basically, there are three reasons why companies are coming to us asking to integrate cloud technology with distributed vision systems,” says Andy Long, CEO of systems integrator Cyth Systems. “The first is because the inspection takes place at a remote site, which already poses a security risk, so centralizing data collection and processing can help eliminate the need to send people to the remote location to maintain the system. The second reason is to use the cloud as a workspace for improving models and algorithm performance; this is where deep learning resides. The third reason is for cloud integration in general. By collecting data in the cloud instead of a dedicated SCADA system, the cloud becomes a useful place to locate supervisory and control functions.”
While introducing the cloud to a plant presents a new security risk, it also provides better ways to monitor and respond to security threats. “Security is always a concern, especially if it’s in a regulated industry,” explains Darcy Bachert, CEO of embedded systems integrator Prolucid. The issue of how to deploy a secure cloud solution with high availability and an easy management interface comes up so often at Prolucid that Bachert’s team developed devicestream™, a packaged cloud solution that tackles these very issues. “Having a system in the cloud doesn’t mean it’s scalable. It does not mean that it’s redundant. It’s still hardware and computers that can fail. We see a lot of do-it-yourself applications where people have overlooked key elements and simply put an application on a server, which is not really using the true principles of what cloud computing can provide,” Bachert says.
Big Application? Big Benefits
For the majority of discrete machine vision applications, the benefits of cloud computing — cheaper processing; centralized monitoring, maintenance, and security; redundancy and scalability — offer little advantage. The computational load of most machine vision solutions can easily be handled by today’s PC technology, and images can be archived locally or to a remote server in the plant.
“Today, the local server farm is definitely more preferred for archiving images than the cloud,” says David Dechow, Principal Vision Systems Architect at machine vision integrator Integro Technologies. “I’m not saying it will always stay that way, but that’s how it is today. The main reason is security. While everyone wants remote technical support these days, our customers would prefer to use on-demand support access rather than an open process connection to the cloud. However, as our customers learn more about Industry 4.0, they’re learning that machine vision is a tool that generates valuable data that shouldn’t just be collected but used to improve processes.”
DL is the poster child for machine vision’s role in Industry 4.0, which is based on the premise of machine-to-machine communications and improved processes through automated data mining and analysis. Unlike traditional machine vision, which analyzes images based on mathematically defined features and image data, DL learns what a good or bad part is based on the statistical analysis of large image sets. Expert programmers or technicians create the training image sets by labeling each image as good or bad and locating the defect. Using statistical analysis, the DL software program can learn much as people learn — by “looking over the shoulder” of an expert. In this way, DL programs build a weighted model of good and bad parts based on the statistical analysis of the training image set.
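The labeled-training-set workflow described above can be illustrated with a minimal statistical learner. This is a sketch only, not any vendor’s actual DL pipeline: it assumes each inspection image has already been reduced to a small feature vector (a real system would learn features from pixels), and it fits a simple logistic model that builds exactly the kind of weighted good-vs.-bad model the article describes.

```python
import numpy as np

# Hypothetical training set: each row stands in for a feature vector
# extracted from one inspection image. Labels: 0 = good part, 1 = defect.
rng = np.random.default_rng(0)
good = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
bad = rng.normal(loc=3.0, scale=1.0, size=(200, 4))
X = np.vstack([good, bad])
y = np.concatenate([np.zeros(200), np.ones(200)])

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit a weighted model of good vs. bad parts by gradient descent on log loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = np.clip(X @ w + b, -30, 30)
        p = 1.0 / (1.0 + np.exp(-z))      # predicted defect probability
        grad_w = X.T @ (p - y) / len(y)   # gradient of the log loss w.r.t. weights
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_logistic(X, y)
p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
accuracy = np.mean((p > 0.5) == y)
```

The point of the sketch is the workflow, not the model: labeled examples go in, a weighted decision model comes out, and adding more expertly labeled examples retrains the same loop.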
“It’s a black box, and people still aren’t always comfortable with black-box solutions,” explains Cyth’s Long.
The great benefit of DL applications, however, is that the system can always get better at what it does by evaluating more expertly labeled images. By storing archived images in the cloud, a DL system designer can tweak the DL model by feeding the software more images, chosen specifically to accommodate a newly identified defect, for example, or to push the false reject rate down or up based on the manufacturer’s needs.
“As part of one of the applications we are working on, reducing false rejects by 1% adds more than $200,000 to the company through eliminated or reduced waste,” explains Prolucid’s Bachert. “So it can be quite a significant and very valuable problem, and the only way that we can get that additional optimization is through having a much larger data set that we can then do tuning against. And that is where the cloud, from a storage point of view and a computing point of view, can become quite valuable.”
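The tuning Bachert describes largely comes down to choosing a decision threshold on the model’s defect scores: raising it pushes false rejects down but lets more true defects escape, and a larger archived data set lets you measure that trade-off precisely. A minimal sketch, with synthetic scores standing in for real model output:

```python
import numpy as np

# Hypothetical defect scores (0 = surely good, 1 = surely defective) produced
# by a trained model for parts whose true condition is known from the archive.
rng = np.random.default_rng(1)
scores_good = rng.normal(0.2, 0.1, size=1000).clip(0.0, 1.0)  # 1,000 good parts
scores_bad = rng.normal(0.8, 0.1, size=50).clip(0.0, 1.0)     # 50 defective parts

def false_reject_rate(scores_good, threshold):
    """Fraction of good parts wrongly rejected (scored above the threshold)."""
    return float(np.mean(scores_good > threshold))

def escape_rate(scores_bad, threshold):
    """Fraction of defective parts wrongly accepted (scored at or below it)."""
    return float(np.mean(scores_bad <= threshold))

# Sweep candidate thresholds: higher thresholds trade false rejects for escapes.
tradeoff = {
    t: (false_reject_rate(scores_good, t), escape_rate(scores_bad, t))
    for t in (0.4, 0.5, 0.6)
}
```

With only a small local image archive, these rates are noisy estimates; the cloud-scale data set Bachert mentions is what makes a 1% false-reject improvement measurable before it is deployed.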
Considering that most cloud providers allow clients to transfer data to the cloud for free, but charge for the bandwidth to extract data from the cloud, moving “to the cloud” will never be an end goal by itself. Efficiency is the end goal. And for companies with products and margins that justify continuous improvement in efficiency and quality, the combination of machine vision and the cloud provides a competitive edge for today and tomorrow as companies find new ways to monetize their production data.