Industry Insights
Machine Vision Sticks Its Head in the Cloud, And Likes What It Sees
POSTED 03/07/2018 | By: Winn Hardin, Contributing Editor
Machine vision has always been in the business of big data, acquiring and processing countless gigabytes of images and then extracting the information needed to make a decision about a given object or task. Gigabytes of data per minute quickly turn into terabytes, or even petabytes, in applications such as remote sensing and web inspection. As the data flows grow in size, they also multiply in number, prompting many industries to look for off-site computing and storage. Enter the cloud. But can cloud computing respond fast enough for machine vision? Is the quality of service (QoS) sufficient? As the reach of machine vision extends beyond the factory floor, the answer, increasingly, is yes, even for industrial applications.
Life in the Cloud
“Without having some form of cloud or Internet of Things (IoT) integration, it becomes challenging to collect and manage large amounts of image data in a very disciplined manner,” says Darcy Bachert, Founder and CEO of Prolucid Technologies (Mississauga, Ontario). “In the last five years, there have been significant advancements in core cloud technologies that can make all of this happen.”
Bachert cites companies such as Google, Microsoft, and Amazon Web Services, which have developed technologies for storage and analytics at massive scale, all while keeping the information secure. “One of IBM’s protocols, MQTT, is specifically designed to interface with low-power distributed devices in order to implement QoS and assure that any type of data transmission is guaranteed.”
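To make the QoS idea concrete, here is a minimal sketch of a vision station publishing an inspection result over MQTT with QoS 2 ("exactly once" delivery). It assumes the open-source paho-mqtt Python client; the broker address, topic name, and payload fields are hypothetical, not from any system described in this article.

```python
# Minimal sketch: publishing an inspection result over MQTT with QoS 2.
# Assumes the paho-mqtt client (1.x API); broker and topic are hypothetical.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"      # hypothetical cloud/IoT broker
TOPIC = "plant1/line3/inspection"  # hypothetical topic name

client = mqtt.Client(client_id="vision-station-07")
client.connect(BROKER, port=1883, keepalive=60)
client.loop_start()  # background network thread handles retries and acknowledgments

result = {"part_id": "A1234", "pass": False, "defect": "scratch"}
info = client.publish(TOPIC, json.dumps(result), qos=2)  # QoS 2: exactly-once delivery
info.wait_for_publish()  # block until the broker confirms the message

client.loop_stop()
client.disconnect()
```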
Beyond huge storage and computational power, public cloud providers are also offering machine learning and deep learning services. One example is Google’s TensorFlow, an open-source framework for deep learning research and application development that is commonly used in machine vision. Deep learning is showing potential in everything from advancing disease detection to handling greater product variation on the production line.
These open-source tools, along with advances in imaging and image-based models, mean that “rather than having to hire a team of PhDs and data scientists, you can now train these models against emerging data sets in a much simpler, easier way to drive value,” Bachert says.
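As a rough illustration of how approachable those tools have become, the sketch below trains a small TensorFlow/Keras classifier to separate good parts from bad ones. The folder layout (data/train/good, data/train/bad, plus matching validation folders), image size, and layer sizes are illustrative assumptions, not any particular customer's setup.

```python
# Minimal sketch: training a good/bad part classifier with TensorFlow/Keras.
# Folder names, image size, and network size are assumptions for illustration.
import tensorflow as tf

# Labeled images gathered (for example, via the cloud) into class subfolders:
#   data/train/good, data/train/bad, data/val/good, data/val/bad
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", label_mode="binary", image_size=(128, 128), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", label_mode="binary", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "good" (class 1)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.save("defect_classifier.keras")  # artifact that can later be deployed at the edge
```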
Bachert estimates that half of the machine vision projects his company develops have a cloud component. Among the biggest implementers of vision and imaging in the cloud is the medical device industry. Prolucid is working with a customer using an ultrasound-based imaging device to acquire images and provide classified values such as demographic factors and general location.
“This provides researchers with enough information to give context to the ultrasonic images so they can be used for clinical research in diagnosis or biopsies,” Bachert says.
To protect a patient’s privacy when collecting and analyzing data from medical imaging devices, Prolucid employs several security strategies. One procedure is to “de-identify” or eliminate personal information such as first and last name, date of birth, and postal code.
Second, Prolucid has a policy of securing data in transit and at rest, detecting breaches at both the data center and the device level, alerting the customer, applying a fix, identifying other vulnerabilities, and recovering data in the event of a catastrophic breach.
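As a rough illustration of the de-identification step, the sketch below strips the direct identifiers from an image's metadata record and replaces them with a one-way pseudonymous ID before anything leaves the device. The field names and salt are hypothetical, not Prolucid's actual schema.

```python
# Minimal sketch: de-identifying an image's metadata record before cloud upload.
# Field names and the salt value are hypothetical placeholders.
import hashlib

IDENTIFYING_FIELDS = {"first_name", "last_name", "date_of_birth", "postal_code"}

def deidentify(record: dict, salt: str = "site-secret") -> dict:
    """Return a copy of the metadata with personal identifiers removed."""
    clean = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    # Stable pseudonym so repeat scans of the same patient can still be linked.
    raw_id = "|".join(str(record.get(k, "")) for k in ("first_name", "last_name", "date_of_birth"))
    clean["subject_id"] = hashlib.sha256((salt + raw_id).encode()).hexdigest()[:16]
    return clean

record = {"first_name": "Jane", "last_name": "Doe", "date_of_birth": "1980-01-01",
          "postal_code": "M5V 2T6", "scan_region": "thyroid", "device": "US-01"}
print(deidentify(record))  # identifiers gone, clinical context retained
```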
From the Cloud to the Ground
In a manufacturing environment, machine vision in the cloud generates some concern over Internet bandwidth and latency issues, which can slow down inspection processes and result in data loss and possibly safety issues for equipment and workers. “With a machine learning application, you still have the real-time inspection process,” Bachert says. “What changes is how you tackle it.”
For example, in a defect classification application, the manufacturer uses the cloud to collect a classified validation data set and develop a machine learning model. The trained model is then pulled down from the cloud and deployed into a real-time process running at the edge of the network, near the source of the data, i.e., at the manufacturing location.
“Because of this, we don’t have to worry about latency,” Bachert says. “In every system we design, the real-time process component needs to be able to operate with and without the cloud being connected.”
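A minimal sketch of the deployment side of that hybrid pattern is below, reusing the hypothetical defect_classifier.keras artifact from the training sketch above: the model file is copied down from the cloud once, and inference then runs entirely on the edge device, with or without a live cloud connection.

```python
# Minimal sketch: running a cloud-trained model locally at the edge.
# The model file name and the stand-in camera frame are assumptions.
import numpy as np
import tensorflow as tf

# Downloaded once from the cloud; inference afterwards needs no connectivity.
model = tf.keras.models.load_model("defect_classifier.keras")

def classify(frame: np.ndarray) -> bool:
    """Return True if the part passes. `frame` is a (128, 128, 3) uint8 image."""
    x = np.expand_dims(frame.astype("float32"), axis=0)  # add batch dimension
    p_good = float(model.predict(x, verbose=0)[0][0])    # sigmoid output, class 1 = "good"
    return p_good >= 0.5

# Stand-in for a frame grabbed from the line's camera.
frame = np.random.randint(0, 255, size=(128, 128, 3), dtype=np.uint8)
print("PASS" if classify(frame) else "REJECT")
```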
This hybrid approach of cloud and edge computing represents significant growth for machine vision integrator Cyth Systems (San Diego, California). “In the next 12 to 18 months, we expect that some of our clients will be doing analysis in the cloud, where the cycle times are higher and they don’t need an instantaneous response, just in case there’s any lag in the network,” says CEO Andy Long.
Long attributes the increased interest among manufacturers to the successful use of cloud computing in other areas. “Ten years ago, no one would have foreseen a self-driving car, but now there is a huge awareness about the data that is gathered and processed in the cloud to operate these vehicles,” Long says. “We have conversations with clients who say, ‘We don’t know what we want to do yet, but our executive team is telling us that we have to find a way to invest in this disruptive technology.’”
The Deep Reach of IoT
As manufacturers look to automate more of their processes, cloud-based machine vision systems will play a big role in production. “We are doing a lot of assembly verification projects for our customers, where the goal is to provide a system that doesn’t require any programming and instead uses AI and a cloud-based processor to do all the work,” Long says. “The people who used to manually inspect parts are now responsible for training the system how to identify a good part or a bad part. It doesn’t require any machine vision knowledge per se.”
Using the cloud to simplify machine vision implementation also gives manufacturers unprecedented freedom to take the technology for a test drive. “The speed of analysis on the front end is quicker than all of the programming you would have to do in a traditional machine vision system,” Long says. “You can now experiment a lot more quickly and frequently to decide whether the technology solves a certain problem.”
Even if manufacturers are reluctant to analyze their imaging data over the Internet, they’re using the cloud in other ways — most notably in remote monitoring. Omron Microscan Systems, Inc. (Renton, Washington), offers an interface called CloudLink, which allows users to monitor live machine vision inspections via the Web. Meanwhile, with its Machine Vision Cloud product, ImpactVision Technologies (Maricopa, Arizona) can remotely monitor a customer’s vision system performance, change inspection criteria, and perform maintenance.
The Triniti light controller from Gardasoft Vision (Cambridge, UK) is another example of how IoT is reaching not only into every corner of the factory but also into every corner of the machine vision system itself. The Web-enabled Triniti controller provides intelligent, precise control of lighting systems and operations, exposing both fixed and variable data about each light, including model information, usage information, and optical and electrical characteristics. Compliance with the GenICam and GigE Vision standards allows easy integration with other system components and facilitates downloading part-number recipes from factory host computers.
“Parameters like highest operating temperature, duty cycle, and hours of usage become important to perform proper maintenance or repurpose the light,” says John Merva, Gardasoft’s Vice President, North America. “Triniti allows users to easily access information to make the best decisions possible about their lighting, and the overall performance of the machine vision system itself.”
Beyond Medical and Manufacturing
In the digital age, libraries and academic institutions may be the last bastions of the printed word, but even they know the importance of going paperless. i2s (Pessac, France) manufactures several types of book scanners to preserve historical publications while allowing numerous users to access the digital assets through the cloud.
The company’s CopiBook series uses an area scan camera that can capture an A2-size page (420 x 594 mm) in 0.3 seconds, compared with 4 seconds for a line scan camera, increasing overall productivity of the scanning process by more than 30 percent.
File sizes vary depending on the length of the book and the type of content within, but with i2s’s ability to place the digitized books in the cloud, storage capacity is effectively unlimited. For example, a 500-page book can generate between 200 and 500 MB of data, and i2s’s customers often need entire collections scanned.
“Standard collections can have 10,000 books, which is 2 to 5 TB for a project as a whole,” says i2s CEO Xavier Datin. “We also have some customers with 500,000 books, which takes the project up to 250 TB.”
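A quick back-of-envelope check of those figures, assuming roughly 0.5 GB per scanned book (the upper end of the range quoted above):

```python
# Rough storage estimate for scanned book collections (0.5 GB per book assumed).
GB_PER_BOOK = 0.5

for books in (10_000, 500_000):
    terabytes = books * GB_PER_BOOK / 1000  # 1 TB = 1,000 GB
    print(f"{books:,} books -> about {terabytes:,.0f} TB")
# 10,000 books -> about 5 TB; 500,000 books -> about 250 TB
```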
As the IoT delivers on its promise of connecting data, devices, and people, machine vision is finding new ways to use data analysis, deep learning, and all that cloud computing has to offer.