Industry Insights
Deep Learning Dominates at Automate
POSTED 04/23/2019
| By: Winn Hardin, Contributing Editor
Machine vision experts and customers have categorized deep learning in many ways: as an industry disruptor, as a step in the evolutionary chain, even as the industry’s next frontier. Wherever you land on the scale of the topic’s importance, deep learning generated considerable interest from attendees at Automate, held April 8-11 in Chicago.
The major theme among this year’s announcements? Software developers are making the technology more accessible for more users.
Higher Education
For Cognex, Automate provided an opportunity to educate customers on VisionPro ViDi, a deep learning–based image analysis software package designed specifically for factory automation. With the ability to tolerate deviation and unpredictable defects, the software is designed to tackle hard-to-solve applications that are too difficult or expensive for traditional machine vision systems. These include surface inspection, defect classification and reading even heavily distorted print.
Education at the show focused on ways to help customers integrate deep learning into production, including a new Best Practices guide on how to deploy the technology in a factory environment. "The validation and integration process is different from traditional vision, so a lot of our education centered around questions to ask, such as what resources are needed for starting a deep learning project, how do you pick a good project, and how do you qualify it for production," says John Petry, Director of Marketing for Vision Software at Cognex.
In May, Cognex will release new features to further simplify deep learning integration. These will let customers more easily optimize their solutions against different data sets in order to qualify them for production.
Another feature will simplify packing and assembly checking. "Many deep learning applications focus on defect inspection, but there are plenty of cases where the user wants to quickly confirm that all elements in an assembly are present and correct," Petry says. Examples include ensuring the right wheels, lights, and mirrors are on a new pickup truck; confirming that snack packs have the correct type and amount of food; or validating that the right components are on a PCB.
Although traditional vision approaches could solve these applications, they would typically require a vision developer to determine the correct tools for each application, and those tools may vary with lighting, part presentation, or inconsistent product appearance. Cognex’s new deep learning interface is flexible enough to handle a wide range of factory conditions without requiring a vision expert.
"It makes it much easier to add new parts to the component library or quickly create a different layout using those components,” Petry says.
Less Effort, More Precision
The newest version of HALCON, 18.11, from MVTec Software GmbH prominently features functions boosted by deep learning. “By using deep learning methods, algorithms automatically find and extract the relevant patterns to differentiate between the classes,” says Johannes Hiltner, Product Manager for HALCON at MVTec Software GmbH. “This is in contrast to ‘traditional’ machine vision approaches, where users have to cumbersomely program each feature manually.”
HALCON allows users to train their own convolutional neural networks (CNNs) for machine vision applications such as classification, semantic segmentation, and object detection. The software analyzes the images and automatically learns which features can be used to identify the given classes. Typical applications amenable to deep learning technology include defect detection on circuit boards or pills; object classification, such as identifying a plant species from a single image; or object counting.
This HALCON version adds pixel-accurate localization of trained object, feature, or error classes within an image, i.e., semantic segmentation. In contrast to this pixel-precise segmentation, HALCON’s object detection tool marks each object in an image with a surrounding rectangle known as a bounding box. This functionality is especially useful for counting objects that touch each other or partially overlap.
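The contrast between the two output types can be sketched with generic, pretrained torchvision models rather than HALCON’s own operators (the example assumes torchvision 0.13 or later and uses a random tensor as a stand-in image): segmentation yields one class label per pixel, while detection yields one bounding box per object instance.

```python
import torch
import torchvision

# Generic torchvision models as stand-ins (assumes torchvision >= 0.13);
# a random tensor takes the place of a real, normalized input image.
image = torch.rand(3, 480, 640)

# Semantic segmentation: a class label for every pixel.
seg_model = torchvision.models.segmentation.fcn_resnet50(weights="DEFAULT").eval()
with torch.no_grad():
    logits = seg_model(image.unsqueeze(0))["out"]  # shape [1, classes, H, W]
mask = logits.argmax(dim=1)                        # pixel-accurate class map

# Object detection: one bounding box, label, and score per object instance,
# which makes counting touching or partially overlapping objects straightforward.
det_model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
with torch.no_grad():
    result = det_model([image])[0]                 # dict of boxes, labels, scores
boxes = result["boxes"][result["scores"] > 0.5]
print(mask.shape, len(boxes))
```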
Deep learning allows a range of complex machine vision tasks to be reevaluated and performed with far less effort than with conventional methods. Inference, the application of a trained network to new input images, runs on GPUs as well as on CPUs. The latter is especially useful for cost-sensitive applications that need to avoid additional expensive hardware. Pretrained networks also make it easier to set up a new deep learning network with only a few hundred images instead of several thousand.
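As a rough illustration of that workflow, the sketch below fine-tunes a pretrained network on a hypothetical folder of a few hundred labeled images. It uses generic PyTorch rather than HALCON’s own training interface, and it runs on a GPU when one is available, falling back to the CPU otherwise.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Generic PyTorch transfer-learning sketch (assumes torchvision >= 0.13 and a
# hypothetical "defect_images/" folder laid out with one subfolder per class).
device = "cuda" if torch.cuda.is_available() else "cpu"  # GPU if present, else CPU

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("defect_images/", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

# Start from a pretrained backbone and retrain only the final classification layer,
# which is why a few hundred images can be enough instead of several thousand.
model = models.resnet18(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, len(data.classes))
model = model.to(device)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```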
Meanwhile, MVTec’s MERLIC 4 software update continues to use deep learning to improve its optical character recognition (OCR) classifier, offering enhanced detection rates for number and character combinations for reliable identification, as well as more robust reading of dot-print fonts.
Through deep learning, companies can reduce the amount of programming needed and save time and money. Another benefit is that users do not need any in-depth artificial intelligence (AI) knowledge. “Thus, companies can also use their staff to train the network without having to increase headcount,” Hiltner says.
Self-Sufficient Vision
At Automate, Cyth Systems told customers that they don’t need machine vision experience to inspect and classify products. “Deep learning is opening up new inspection abilities, along with new ways for automated inspection to become something companies can manage and oversee themselves without the dependence on highly technical vision experts,” says Scott Frame, Sales Engineer with Cyth Systems.
To accomplish this goal, Cyth developed NeuralVision, a cloud-based deep learning platform for image handling, tagging, statistical analysis, simulation, and solution generation. With the help of LabVIEW systems engineering software, the company has programmed the platform so that users simply click an item of interest and decide what will pass or fail. The customer chooses which product images are included in the learning models but doesn’t need to know any programming.
"NeuralVision gives line operators the tools to easily tag images and train solutions to be run in production," says Frame. "Operators can easily review and test tags, as well as production results, and data reviews can allow for the evaluation of manufacturing methods for preventive or correction action."
Frame emphasizes that deep learning isn’t a replacement for traditional machine vision but rather an added capability alongside the inspections that can be done now. Still, the technology is important because it enables "processes that could previously never compete with the human ability to identify items, defects, variation, and so on," Frame says. "For the first time, deep learning is allowing automated processes to be trained by those people to operate reliably and consistently at their level."
Making Deep Learning Easier
As part of updates to its Matrox Design Assistant X, which uses a flowchart-based integrated development environment (IDE), Matrox Imaging gave an on-site demo showcasing the use of deep learning technology for machine vision applications.
“The version X update leverages deep learning technology to perform inspection through classification that would otherwise be very difficult, if not impossible, to do using conventional imaging techniques,” says Fabio Perelli, Product Manager, Matrox Imaging.
The software uses CNNs, deep learning algorithms that categorize images or image regions into preestablished classes. The CNN is trained through a set of images that are representative of the application and categorized into the desired classes. Matrox Design Assistant X’s deep learning classification is ideal for analyzing images of highly textured, naturally varying, and acceptably deformed goods, such as produce.
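A minimal sketch of that classification step follows, written in generic PyTorch rather than with Matrox Design Assistant X’s actual tools; the class names and weights file are hypothetical. The trained CNN assigns each image to the preestablished class with the highest confidence.

```python
import torch
from torchvision import models

# Hypothetical class names and weights file; generic PyTorch, not Matrox's tools.
CLASSES = ["ripe", "unripe", "bruised"]

model = models.resnet18(num_classes=len(CLASSES))
model.load_state_dict(torch.load("produce_classifier_weights.pt"))
model.eval()

def classify(image_tensor):
    """image_tensor: a preprocessed [3, H, W] tensor for a single image."""
    with torch.no_grad():
        logits = model(image_tensor.unsqueeze(0))
    probs = torch.softmax(logits, dim=1).squeeze(0)
    idx = int(probs.argmax())
    return CLASSES[idx], float(probs[idx])  # predicted class and its confidence
```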
Democratizing Deep Learning
Teledyne DALSA is releasing a new version of Sherlock, its machine vision software interface designed for automated inspection applications. Sherlock 8 leverages a generic deep learning plug-in that allows customers to create specific applications by training with their own data set.
Teledyne DALSA demonstrated a beta version of the deep learning platform using two cameras: one camera captures a reference object, and a second camera then locates all matching objects in a broader field of view. "For instance, if you put a washer under the first camera, the second camera will identify, locate, and classify all of the washers in the field of view of the second camera, no matter the type of background, orientation, or size," says Bruno Menard, Software Program Manager for Teledyne DALSA.
The Sherlock 8 deep learning platform is ideal for tasks such as finding a defect on a flat panel display, detecting irregular patterns on a surface, and facial recognition, because of its ability to differentiate among characteristics such as shape, brightness, or color. "Performing these tasks would be extremely difficult, if not impossible, with classic machine vision algorithms," says Stephane Dalton, Technical Manager for AI at Teledyne DALSA.
Designed initially for Windows PCs, the deep learning plug-in allows users to easily collect and label data. "Labeling can be a tedious task because you need a certain number of images to train a model," Menard says. "Using Sherlock 8’s deep learning plug-in, customers can use a bounding box around objects. It’s already drawn for you, and you can adjust the shape of the box, relabel it, or get rid of it if it’s not an object you want to recognize."
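The record below is purely illustrative, not Sherlock 8’s internal label format, but it shows the kinds of operations Menard describes: resizing a pre-drawn box, relabeling it, or discarding it before training.

```python
from dataclasses import dataclass

# Illustrative label record, not Sherlock 8's internal format.
@dataclass
class BoxLabel:
    x: float        # top-left corner, in pixels
    y: float
    width: float
    height: float
    label: str
    keep: bool = True

labels = [BoxLabel(40, 60, 120, 80, "washer"),    # boxes proposed automatically
          BoxLabel(300, 210, 90, 90, "washer")]

labels[0].width = 130    # adjust the shape of a box
labels[1].label = "nut"  # relabel a misidentified object
labels[1].keep = False   # discard an object you don't want to recognize
training_set = [b for b in labels if b.keep]
```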
Flexibility and ease of use are the primary drivers of Teledyne’s deep learning platform. "We give users the ability to change machine learning hyper-parameters so they can influence the end result and determine the best network for their application, with a simple press of the ‘train’ button," Dalton says.
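One way to picture that capability, as a hypothetical sketch rather than the plug-in’s actual API: expose a few hyperparameters, train one candidate per setting, and keep the best-scoring network.

```python
from itertools import product

# Hypothetical hyperparameter search, not the Sherlock plug-in's actual API.
search_space = {"learning_rate": [1e-3, 1e-4], "batch_size": [8, 16], "epochs": [10]}

def train_and_score(learning_rate, batch_size, epochs):
    """Stand-in for the 'train' button; a real version would train a network
    with these settings and return its accuracy on held-out images."""
    return 0.0  # placeholder score

candidates = [dict(zip(search_space, combo)) for combo in product(*search_space.values())]
best = max(candidates, key=lambda params: train_and_score(**params))
print(best)
```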
Deep Learning: Hype or Reality?
Even with the interest in deep learning shown by Automate attendees, many hesitate to deploy it until they understand it better. As a result, it is falling to software companies to show their customers that deep learning is a complement to, rather than a replacement for, machine vision. And that’s exactly what they did at Automate.
To persuade customers to adopt this technology, "we need to determine the correct applications for deep learning and build the right products to solve them," Cognex’s Petry says. "And users need to figure out how to integrate deep learning into production, how it should interact with manual inspection, and what they need to budget in terms of personnel, time, and equipment."