Industry Insights
As Software and AI Develop, New Machine Vision Applications Emerge
POSTED 10/04/2022 | By: John Lewis, A3 Contributing Editor, TechB2B
Automation and machine vision have played a leading role in filling the labor gap left by the pandemic and the slow, uneven return of workers to industry. As companies struggle to perform the many sight-dependent tasks formerly handled by operators, they have started to allocate capital to replace unskilled labor with automation enabled by machine vision.
Emerging technologies such as non-visible, computational, and 3D imaging, combined with AI, machine learning, and deep learning, are enabling machine vision to address an ever-expanding range of new applications. While these and other emerging vision technologies, such as contact image sensors (CIS) and polarization cameras, are not necessarily new, they are gaining acceptance in manufacturing thanks to lower costs and fewer pain points in programming and maintenance.
Hardware and software advances including sensor, lighting, and processing technologies — as well as simpler user interfaces — have improved the image quality, economic efficiency, and reliability of machine vision systems. In every case, the process starts with a clear problem to which machine vision can be applied to make significant improvements to current methods.
“Machine vision technology has stepped up and stepped in,” says Keith Bowling, machine vision specialist with Edgewater Automation. “Robot guidance and product or component verification lead the pack in terms of growth for our company, with a noted increase in quality-related inspections.”
UV and Non-visible Imaging
In addition to robotic guidance, component verification, and quality inspection, non-visible machine vision applications using ultraviolet (UV), infrared (IR), short wave infrared (SWIR), and hyperspectral SWIR illumination are on the rise.
“More end users appreciate the return on investment of making their products more machine vision–friendly by using materials that maximize contrast,” explains Mark Kolvites, sales manager at Metaphase Technologies Inc. “For example, adding a UV pigment to adhesives and plastics so the usually clear material becomes visible when exposed to UV illumination is useful in applications such as detecting voids in water pump gaskets or detecting the presence of clear, tamper-evident safety seals on bottles.”
Steve Kinney, director of training, compliance, and technical solutions at Smart Vision Lights, agrees that UV is a hot emerging vision technology and adds SWIR to the list. “I see more and more applications for UV and SWIR every day,” says Kinney. “As a machine vision lighting supplier, we see new applications all the time. Logistics and agriculture are two other emerging application areas that are really coming on strong.”
Agricultural and food applications such as fruit, nut, and meat processing are increasingly taking advantage of hyperspectral imaging (HSI) techniques. As HSI cameras have become more affordable, food production, sorting, and quality inspection are growing markets as industries seek to reduce waste, increase automation, and protect their brands from quality failures, according to Mathieu Marmion, lead application specialist at Specim, Spectral Imaging Ltd.
Black Plastic Recycling
HSI is also being deployed in emerging waste sorting and recycling applications, which have become more profitable as oil prices have increased. But as the use of black plastic grows in the packaging and automotive industries, so does the popular misconception that carbon black plastics cannot be separated during recycling.
“This is not true,” says Marmion. “The Specim FX50 hyperspectral camera enables black plastic sorting as well as textile recycling, which is also expected to grow as the European Union adopts new guidelines for combating textile waste.”
One major bottleneck preventing the widespread adoption of spectral cameras for in-field and inline agricultural and industrial applications is the difficulty of managing and processing the huge amount of data contained in hyperspectral data cubes. Robust and easy-to-use data processing solutions are key to the growth of the spectral imaging market. Hyperspectral imaging will also benefit from the development of artificial intelligence: in the food industry, for example, AI could bring more flexibility to a processing machine, allowing the same machine to adapt to detect diverse types of material.
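To make the data problem concrete, the short Python sketch below shows the basic shape of a per-pixel classification pipeline on a hyperspectral cube: flatten the spatial dimensions, compress the spectral dimension, and label each pixel by material. It is a generic illustration rather than Specim’s pipeline, and the cube size, band count, and class labels are placeholders.

```python
# Generic sketch of per-pixel material classification on a hyperspectral cube.
# Not Specim's pipeline; cube dimensions, labels, and model are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

# One modest cube; production line-scan cubes are often far larger.
LINES, SAMPLES, BANDS = 256, 320, 224
rng = np.random.default_rng(0)
cube = rng.random((LINES, SAMPLES, BANDS), dtype=np.float32)

# Flatten the spatial dimensions: each pixel becomes one spectrum of BANDS values.
pixels = cube.reshape(-1, BANDS)

# Compress the spectral dimension to a handful of components to cut data volume.
features = PCA(n_components=10).fit_transform(pixels)

# Train on a small labeled subset (random labels here stand in for annotated
# material classes such as different plastics or background).
train_idx = rng.choice(len(features), size=2000, replace=False)
train_labels = rng.integers(0, 3, size=2000)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[train_idx], train_labels)

# Classify every pixel and restore the spatial layout as a material map.
material_map = clf.predict(features).reshape(LINES, SAMPLES)
print(material_map.shape, np.unique(material_map))
```

Even at this toy size, a single cube holds millions of spectral values, which is why easier data handling and AI-assisted flexibility matter for inline use.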
“Overall, we believe, and the market research shows, that the growth of the spectral imaging market in the years to come will be driven by the necessity to provide solutions to major industrial, societal, and economic challenges,” concludes Marmion.
Web-Based Material Inspection
Web-based material inspection applications for machine vision span a huge and expanding variety of materials and markets, including labels, Li-ion battery components, textiles, document inspection, and sorting, according to Scott Bradley, Northeast regional sales manager for sensing systems products at Mitsubishi Electric Automation, Inc.
“Web-based processes and digital print applications have always been the dominant application for CIS cameras,” explains Lou Fetch, business development manager of CIS and MICMO products at Mitsubishi Electric. “We are now seeing more customers with web-based processes seeking to simplify their vision systems by using one CIS camera instead of multiple line scan cameras, specifically where 1:1 imaging is required for barcode grading, OCR, and print inspection applications.”
The US Postal Service’s Informed Delivery rollout is another emerging application likely using scanning technologies such as CIS or line-scan cameras. Customers who sign up for the free service receive daily emails with images of mail that will be delivered to them each day. While currency validation and fraud detection have been around for a long time, these applications require increased sophistication as counterfeiters become more skilled. Advancements using specific wavelengths of illumination, combined with tape, hologram, and metal detection, augment vision inspection systems.
AI and Manufacturing
Another emerging area generating a very high level of interest is AI in manufacturing, where there is a great deal of education happening right now. According to Ed Goffin, marketing manager at Pleora Technologies, many manufacturers are deploying AI solutions for inline and manual inspection applications. Many more companies are learning how they can use technology to solve their unique problems.
“As we approached AI,” Goffin explains, “our first step was developing technologies for automated inspection applications. However, as we worked with more manufacturers, we realized one significant gap in their approach to quality was in manual inspection. Across the board, there are a number of manufacturing processes that still rely heavily on human visual inspection. That prompted a bit of a development pivot for us, where we’ve adapted edge processing tools initially developed for inline applications and really focused on simplifying their deployment as more of an operator-based tool.”
One of the lead customers for Pleora’s AI technology is a distillery that uses the initial system as an operator assistance tool for a manual labeling process. Electronics assembly manufacturer DICA Electronics Ltd. has also deployed the AI visual inspection system to reduce manufacturing quality escapes and to gather key data from manual processes to speed root cause analysis.
Interest in no-code or low-code tools for computer vision and AI development is growing. Where these were once the domain of an “expert,” manufacturers now seem more interested in owning the entire development-to-deployment process. There are obviously a number of complex applications where expertise is still required, but in many areas, manufacturers prefer a more DIY approach, according to Goffin.
“Based on our experience, we’re seeing a number of smaller manufacturers evaluating AI as the tools become simpler to use and more user-friendly,” explains Goffin. “One large area of interest is around the opportunity for AI technologies in manual processes. This includes providing tools that can help guide an operator’s decision, so they are consistent and objective, as well as gathering data from these steps to help guide process improvements. The real key is ensuring that these tools are intuitive and user-friendly.”
3D and Machine Learning
In the more traditional machine vision market, there is more focus on 3D and edge processing, and the two trends are related. With 3D, the goal is to simplify interoperability between 3D sensors and more traditional machine vision processing tools. With edge processing, there is a great deal of interest in using machine vision standards to simplify interoperability between different types of IoT devices and processing.
“We see the most growth potential for the integration of 3D and machine learning in conjunction with traditional machine vision,” says Sam Lopez, senior vice president at Matrox Imaging/Zebra Technologies. “All of this is core to our business, where we already have a wide portfolio of products. We are also seeing image-based scanners and OCR readers increasingly deployed to streamline supply chain workflows. On the manufacturing side, there is increased demand for solutions that will help improve product quality and increase visibility — all while meeting regulatory compliance. In addition, the market is seeking versatile software packages that enable businesses to create custom machine vision applications and user interfaces.”
A case in point is robotics and industrial automation specialist Mosaic S.r.l., which recently deployed Matrox software as part of a nine-camera machine vision system that inspects brake discs. In the system, flow-chart-based software eased the developers’ coding burden and let them focus on achieving the desired accuracy, performance, and algorithmic logic, while also making it easy to manage protocols and separate parameter sets for each camera and to create efficient image analysis algorithms. In another example, Marexi Marine Technology Co. used high-fidelity AltiZ 3D profile sensors along with machine learning algorithms in its TUNASCAN system to classify and sort tuna by species with accuracy rates of more than 95%.
Deep Learning
Deep learning combines the advantages of traditional rules-based machine vision systems with the judgment that human inspectors bring, all in a continuously optimized and updated fashion. The technology is helping machine vision expand into new industrial applications. For machine vision users and integrators, deep learning is ideal for applications that call for subjective decisions, as with human inspection, particularly where features are hard to identify because of the complexity or variability of the image.
“There are also several graphical user interface–based software options for neural network training,” Lopez explains. “For example, our software offerings — Matrox Imaging Library (MIL) X and Matrox Design Assistant X — include vision tools for classifying or segmenting images for inspection using deep learning.”
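Graphical tools like these hide the code entirely, but one common pattern behind this style of inspection classifier is transfer learning: start from a network pretrained on generic images and retrain only a small classification head on labeled inspection images. The sketch below illustrates that pattern in generic PyTorch; it is not based on MIL X or Design Assistant X, and the class count, image batch, and labels are placeholders.

```python
# Generic sketch of a deep-learning inspection classifier (good vs. defect)
# built by transfer learning. Illustration only; not Matrox MIL X or
# Design Assistant X, which are configured through their own interfaces.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # hypothetical labels: "good" and "defect"

# Start from a network pretrained on generic images and replace its final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Freeze the backbone so only the new head trains; this is what lets
# relatively small sets of labeled inspection images suffice.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc.")

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch standing in for real labeled inspection images.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step complete, loss = {loss.item():.3f}")
```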
Hardware and software optimized for machine vision are becoming increasingly popular, as they save time and resources in deployment and beyond. For example, Matrox offers industrial PCs with machine vision software already integrated, along with a portfolio of industrial vision controllers fully supported by its machine vision software development kit, which provides systems integrators and developers a complete foundation for building machine vision solutions.
“Overall, machine learning and 3D, along with easy-to-use software, have a strong potential to revolutionize the machine vision field,” concludes Lopez. “In automation, robotics and specifically vision-guided robotics are areas where we could see the market expand significantly.”
Vision-Guided Robotics
Steve Wardell, manager of vision engineering at ATS Automation, agrees, noting that in distribution centers, warehouses, and palletizing, vision-guided robotics seems to be an area of significant expansion for machine vision in logistics. “From a vision perspective, it’s not just having robots in there, but vision-guided robots,” Wardell explains. “Formerly, taught-position robots were common, but now almost all the systems that we’re putting through these days that have robots on them are vision-guided. It’s all about efficiency and cost savings.”
Other emerging markets include medical technology, nuclear, and energy, according to Darcy Bachert, CEO of Prolucid Technologies Inc. “We see a tremendous opportunity within these industries to utilize a wide variety of automation technologies and custom software development to help solve real-world problems, including energy conservation, optimized asset inspection and maintenance, and enhanced diagnostics.”
Prolucid’s applications range from energy and water conservation in commercial and residential buildings to extending the life of nuclear reactors through advanced inspection and diagnostics to automating medical image analysis to improve decision making and inputs into complex medical procedures. In each case, combinations of data collection, automated processing, and data visualization or controls are the core components in solving these problems.
“The continued advancement of machine learning models, tools, and frameworks has been a critical aspect of these applications, combined with a better understanding of how to implement them and of the specific problems they are best suited to solving,” explains Bachert. “Improvements in processing capabilities in the cloud and on GPUs have been helpful for some of the more data-intensive and complex algorithms being implemented.”
Edge Learning
Manufacturers want to solve ever more complicated problems. For decades, many have solved existing production problems with traditional machine vision. Now they want solutions for applications that have yet to be addressed, such as the cosmetic inspection of surfaces to detect stains or scratches, and they want solutions that are easier than ever to use.
Deep learning at the edge, also known as edge learning, uses a set of pretrained algorithms to process images directly on-device. Compared with more traditional deep learning solutions, edge learning takes less time to set up and train and needs fewer images for proof of concept.
“With edge learning, only five to 10 sample images are needed per object. So customers can quickly develop solutions to many applications,” explains Reto Wyss, vice president of AI technologies at Cognex Corp. “Edge learning is much faster because you don’t need a PC, you don’t need a GPU, and you don’t need a data scientist, which significantly decreases the total cost of ownership when compared to deep learning applications.”
The Cognex edge learning solution integrates high-quality vision hardware, machine vision tools that preprocess images to reduce computational load, deep learning networks pretrained to solve factory automation problems, and a straightforward user interface designed for industrial applications.
“By embedding efficient, rules-based machine vision within a set of pretrained deep learning algorithms, edge learning devices provide the best of both worlds, with an integrated tool set optimized for factory automation applications,” Wyss explains. “With a single smart camera–based solution, edge learning can be deployed on any line within minutes.”
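For a feel of how a few-image, no-GPU workflow can operate in principle, the sketch below shows one common few-shot approach: a frozen, pretrained backbone turns each sample image into an embedding, each part class is summarized by the mean of its handful of embeddings, and a new image is assigned to the nearest class. This is a generic illustration, not Cognex’s implementation, and the class names and image tensors are placeholders.

```python
# Generic few-shot sketch in the spirit of edge learning: a frozen, pretrained
# backbone produces embeddings, and new part classes are "trained" from a
# handful of examples by storing class-mean embeddings.
# Illustration only; not Cognex's implementation. Class names are hypothetical.
import torch
from torchvision import models

# Small pretrained feature extractor, run without gradient tracking (CPU is enough).
backbone = models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()  # keep features, drop the ImageNet head
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    return backbone(images)

# "Training": a few sample images per class, reduced to one mean embedding each.
samples = {
    "good_label":  torch.rand(6, 3, 224, 224),  # placeholder images
    "missing_cap": torch.rand(6, 3, 224, 224),
}
prototypes = {name: embed(imgs).mean(dim=0) for name, imgs in samples.items()}

# Inference: assign a new image to the class with the most similar prototype.
query = torch.rand(1, 3, 224, 224)
q = embed(query)[0]
scores = {name: torch.nn.functional.cosine_similarity(q, proto, dim=0).item()
          for name, proto in prototypes.items()}
print(max(scores, key=scores.get), scores)
```

Because nothing in the backbone is retrained, adding a new part class is just a matter of capturing a few more sample images, which is the kind of fast, operator-level retraining the edge learning approach aims for.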
The Future Is Software
Any emerging applications, markets, and technologies poised to propel the machine vision market forward will most likely be software-focused. While there will always be sensor, camera, lighting, and compute advancements, as software and AI continue to develop, new applications will emerge. Software that eases integration, programming suites for cross-functional products, OPC data availability, and plug-ins for lighting control are on the short list of emerging technologies that may help increase machine vision adoption.