System Integrators Tackle New Challenges in Machine Vision Design

POSTED 08/21/2018 | By: Winn Hardin, Contributing Editor

Today’s vision system integrators can choose from a plethora of products: off-the-shelf lenses, lighting, cameras, camera interface boards, and software. And the choices keep evolving as vendors introduce more affordable products with added functionality to replace systems that once would have cost thousands of dollars. 

“One of the biggest drivers for this change is the commoditization of 2D machine vision offerings,” says Markus Tarin, President and CEO of MoviMed and MoviTHERM. “Smart sensors and smart cameras, as well as configurable vision systems, have largely eliminated the need for machine vision system development, with most common applications now being accomplished with off-the-shelf plug-and-play technology. 

“Sophisticated and capable machine vision integrators now find themselves in a position where it becomes increasingly difficult to add value to a common 2D machine vision system,” Tarin continues, “and some vendors of these systems are underlining this situation by marketing their configurable machine vision systems directly to the end customer.”

John Salls, President of Vision ICS, Inc., also recognizes the progress over the last decade, with smart cameras becoming more functional and lighting companies offering a broader range of products. However, even as software becomes more powerful and prices keep falling, Salls sees an issue with the interconnection and standardization of software packages.

“Different companies use different terminology for the same things,” he says. “Even standardized communications like Ethernet have huge variation from company to company, and there is not really a push for open [software] standards in the vision industry.” 
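
As a rough illustration of the terminology problem Salls describes, the sketch below (Python, with hypothetical vendor SDK stubs rather than any real camera API) shows the kind of adapter layer an integrator ends up writing when two vendors expose the same exposure setting under different names.

```python
# A minimal sketch of the interoperability problem Salls describes: the same
# concept (sensor exposure) goes by different parameter names in different
# vendors' SDKs. Vendor names and methods here are hypothetical placeholders.

class VendorACamera:
    def set_feature(self, name, value):        # hypothetical SDK call
        print(f"VendorA: {name} = {value}")

class VendorBCamera:
    def write_param(self, key, value):         # hypothetical SDK call
        print(f"VendorB: {key} = {value}")

# Integrator-written adapter layer that hides the terminology differences.
EXPOSURE_KEYS = {
    VendorACamera: ("set_feature", "ExposureTime"),   # microseconds
    VendorBCamera: ("write_param", "ShutterSpeed"),   # microseconds
}

def set_exposure_us(camera, microseconds):
    """Set exposure time regardless of which vendor's camera is attached."""
    method_name, key = EXPOSURE_KEYS[type(camera)]
    getattr(camera, method_name)(key, microseconds)

if __name__ == "__main__":
    for cam in (VendorACamera(), VendorBCamera()):
        set_exposure_us(cam, 5000)   # same intent, two different vocabularies
```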

Just like other products in the machine vision lineup, lighting needs careful attention from the integrator. “In every machine vision system, it is critical to realize the best possible image, and this is where lighting is key to creating strong contrast that defines the features being inspected,” says Earl Yardley, Director of Industrial Vision Systems.

“There is a vast range of generic off-the-shelf machine vision lights that are adequate for the majority of applications, but these standard offerings should not be the limiting factor,” Yardley says.

To address more complex applications, Industrial Vision Systems develops lighting specifically for a project. Recent examples include high-intensity UV lighting, segmented ring-lights integrated into a compact robot inspection head, and custom form-factor backlights.

"Producing these custom solutions creates extra design time at the front end of the project, but this is outweighed by the time saving when it comes to programming the system with a fully optimized image,” Yardley says. 

Delivering on 3D and Deep Learning
While many of today’s vision products can meet the needs of most applications, system integrators must stay a step ahead as technologies and customer demand evolve. In the 3D imaging market, for instance, MoviMed’s Tarin points out that hardware innovation precedes software innovation. 

“Although there are a number of 3D sensors and cameras available, such as laser triangulation, time of flight, stereoscopic sensors with pseudo-random pattern generators, and others, there is a large gap in the development tool chain to allow for rapid system development,” says Tarin.

OEMs currently use open-standard 3D sensors or cameras and program their application from scratch, or use “closed” systems with configurable tools that often come with a cost-prohibitive price tag, Tarin explains. 

“Perhaps what is required is a 3D sensor or camera with programmable FPGA for high-speed onboard image processing to enable a non-FPGA programmer to deploy 3D image processing algorithms all in one package,” Tarin says.
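
To illustrate the kind of from-scratch work Tarin is pointing at, here is a minimal Python sketch that converts raw laser-triangulation profiles into a height map; the triangulation model and calibration constants are simplified, illustrative assumptions rather than any particular sensor's pipeline.

```python
# A minimal sketch of the sort of low-level 3D work Tarin describes: turning
# raw laser-triangulation profiles into a height map without a vendor toolchain.
# Geometry and calibration constants are illustrative, not from any real sensor.
import numpy as np

def profile_to_heights(row_of_peaks_px, baseline_mm=50.0, focal_px=1200.0,
                       laser_angle_rad=np.deg2rad(30.0)):
    """Convert per-column laser-peak row positions (pixels) to heights (mm)
    using a simplified triangulation model: height ~ displacement * scale."""
    displacement_px = row_of_peaks_px - row_of_peaks_px.mean()
    scale_mm_per_px = baseline_mm / (focal_px * np.tan(laser_angle_rad))
    return displacement_px * scale_mm_per_px

def build_height_map(profiles_px):
    """Stack N scanned profiles (N x width array of peak rows) into a height map."""
    return np.vstack([profile_to_heights(p) for p in profiles_px])

if __name__ == "__main__":
    # Simulated scan: 100 profiles across a 640-pixel-wide laser line.
    rng = np.random.default_rng(0)
    profiles = (240 + 5 * np.sin(np.linspace(0, 3 * np.pi, 640))
                + rng.normal(0, 0.2, (100, 640)))
    height_map = build_height_map(profiles)
    print("Height map shape:", height_map.shape, "range (mm):",
          round(height_map.min(), 2), "to", round(height_map.max(), 2))
```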

Another technology poking its head into the machine vision market is artificial intelligence, and more specifically, deep learning — the ability of computers to learn from experience. At this early stage, the biggest challenge is separating hype from reality. “AI and deep learning algorithms sometimes overpromise a solution to every difficult-to-solve machine vision problem,” Tarin says.

He points to the Gartner Hype Cycle for Innovation (see figure), which places deep learning at the Peak of Inflated Expectations.

Figure: The Gartner Hype Cycle for Innovation places deep learning at the Peak of Inflated Expectations.

“While it is true that machine vision applications are already benefiting from the deployment of deep learning algorithms, these are far from providing a silver bullet,” Tarin says. “This is especially apparent when one compares the effort necessary to achieve greater than 99% accuracy with that of traditional programming efforts. Nonetheless, this technology definitely has its place and will continue to gain importance over the next few years.”
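
The contrast Tarin draws can be seen in code: a traditional rule-based check is a few lines of thresholding, while a deep learning approach starts from a network skeleton like the one below and then demands the data collection, labeling, and training effort that actually determines whether it reaches 99%-plus accuracy. The sketch is illustrative only; the network is untrained.

```python
# A minimal sketch contrasting the two approaches Tarin compares: a traditional
# rule-based check versus a small convolutional classifier. The network is
# untrained and illustrative; the >99% accuracy Tarin mentions comes from data
# collection, labeling, and training effort, not from the code skeleton itself.
import numpy as np
import torch
import torch.nn as nn

def rule_based_defect_check(gray_patch, dark_fraction_limit=0.05):
    """Traditional approach: flag a part if too many pixels fall below a threshold."""
    dark_fraction = (gray_patch < 60).mean()
    return dark_fraction > dark_fraction_limit

class TinyDefectNet(nn.Module):
    """Deep learning approach: a small CNN emitting good/defect logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    patch = np.random.randint(0, 255, (64, 64), dtype=np.uint8)  # simulated image patch
    print("Rule-based verdict:", rule_based_defect_check(patch))
    net = TinyDefectNet()
    tensor = torch.from_numpy(patch).float().unsqueeze(0).unsqueeze(0) / 255.0
    print("CNN logits (untrained):", net(tensor).detach().numpy())
```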

Eyeing the Emerging
While many systems integration challenges have been drastically reduced by the availability and increasing affordability of smart cameras that embed lighting, software, and I/O interfaces, emerging technologies will pose more puzzles for machine vision to solve. In developing multispectral imaging systems, for example, specialized lighting will be required to illuminate products at specific wavelengths. For hyperspectral imaging, broadband LED illumination will replace the halogen-based systems currently in use. 
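
One concrete example of the extra work hyperspectral systems bring is reflectance calibration against white and dark reference frames; the Python sketch below shows that step with simulated data and arbitrary cube dimensions.

```python
# A minimal sketch of one preprocessing step hyperspectral systems typically
# need: converting raw sensor counts to reflectance using white and dark
# reference frames. Cube dimensions and counts are arbitrary placeholders.
import numpy as np

def to_reflectance(raw_cube, white_ref, dark_ref, eps=1e-6):
    """raw_cube: (rows, cols, bands); white_ref/dark_ref: (bands,) mean reference spectra."""
    return (raw_cube - dark_ref) / (white_ref - dark_ref + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    bands = 64
    raw = rng.uniform(200, 3000, (120, 160, bands))      # simulated raw counts
    dark = np.full(bands, 100.0)                          # dark current reference
    white = np.full(bands, 3500.0)                        # white tile reference
    refl = to_reflectance(raw, white, dark)
    print("Reflectance range:", refl.min().round(3), "to", refl.max().round(3))
```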

In data fusion applications, where a number of different sensors — ultrasound, visible, infrared, and lidar — are used, sophisticated imaging software will need to be tailored to run efficiently on high-performance graphics processors. With the advent of ever-faster CMOS-based high-speed cameras, system integrators will have to support optical networks to transfer data from cameras to computers. Tying this together, edge-based vision systems will need to work in conjunction with cloud-based computing so that analyzed image data can feed factory management and robotics systems and fully automate manufacturing processes.
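
As a small, simplified piece of the data fusion problem, the sketch below pairs each camera frame with the nearest-in-time lidar sample before any joint processing; timestamps are simulated, and a real system would also handle clock synchronization and dropped samples.

```python
# A minimal sketch of one small piece of the data-fusion problem: pairing each
# camera frame with the nearest-in-time lidar sample before any joint processing.
# Timestamps and rates are simulated placeholders.
import numpy as np

def nearest_timestamp_pairs(camera_ts, lidar_ts, max_skew_s=0.01):
    """Return (camera_index, lidar_index) pairs whose timestamps differ by < max_skew_s."""
    lidar_ts = np.asarray(lidar_ts)
    pairs = []
    for i, t in enumerate(camera_ts):
        j = int(np.argmin(np.abs(lidar_ts - t)))
        if abs(lidar_ts[j] - t) < max_skew_s:
            pairs.append((i, j))
    return pairs

if __name__ == "__main__":
    camera_ts = np.arange(0, 1.0, 1 / 30)          # 30 fps camera
    lidar_ts = np.arange(0, 1.0, 1 / 10) + 0.002   # 10 Hz lidar, slight clock offset
    matched = nearest_timestamp_pairs(camera_ts, lidar_ts)
    print(f"Matched {len(matched)} of {len(camera_ts)} camera frames to lidar samples.")
```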

But since machine vision system integrators are problem solvers by nature, they’ll do what they’ve always done: overcome any obstacles in order to find the best solution for the application.