

AIA - Advancing Vision + Imaging has transformed into the Association for Advancing Automation, the leading global automation trade association of the vision + imaging, robotics, motion control, and industrial AI industries.


Custom or Commodity: What Camera Will You Choose?

POSTED 02/26/2016 | By: Winn Hardin, Contributing Editor

The more things change, the more they stay the same when it comes to the love affair between the machine vision industry and the digital camera. That was one of the messages delivered by Michael DeLuca of ON Semiconductor during the recent A3 business conference in Orlando.

DeLuca quickly narrated the history of the digital sensor to a full hall of machine vision specialists. Born in the 1960s, image sensors were first integrated into a prototype digital camera by Kodak in 1975. Kodak followed this up with the release of the first professional digital camera, the DCS-100, in the early 1990s. It used a Nikon camera front end with the digital sensor, connected to a backpack of electronics that controlled the sensor, stored the images, and transferred the images via telephone modem. Apple brought out the first consumer digital camera in 1994, and 2000 saw the first camera in a cell phone – which led to an explosion in the availability of image sensors.

Today, imaging is seemingly ubiquitous – but since the machine vision industry represents as little as 0.5% of the overall digital image sensor market, it has limited influence with the makers of digital camera sensors. “Machine vision is a drop in the bucket compared to the scale of camera phones, consumer devices, security cameras, and automotive imaging,” DeLuca said. The answer for machine vision camera makers and their customers? Choose between commoditization and customization.

Adimec’s (Eindhoven, The Netherlands) co-founder and chief scientist, Jochem Herrmann, clarifies the challenge for today’s machine vision camera companies. “Today, camera makers are asking, what exactly is our product?” asks Herrmann. “In the past, sensors were analog and difficult to design around. It was not easy to make a camera. You had to correct for all kinds of things, especially in CMOS. And the people that were best at that made the best cameras. But the latest CMOS sensors today from Sony come out of the box offering a beautiful image. In a few years, you’ll be able to combine a sensor out of the box with a simple interface and have a camera. So today, camera companies are asking: What exactly is our product? And the answer is, at least for Adimec, that our business isn’t really about sensors, but about adding value; making image sensors suitable to perform in demanding environments.”

This isn’t to say sensors and their evolution aren’t important to machine vision camera makers. But these companies are looking beyond the sensor itself to how they optimize a camera for specific applications.

For example, the latest sensors offer more flexibility than ever before, giving designers the ability to set multiple regions of interest with different dynamic range settings, to switch between video and still imagery on the fly, and much more. Optimizing these sensors for end users in the electronics production industry, for example, is how Adimec serves its customers. “Customers know their applications, not sensors,” adds Adimec’s Herrmann. “Our job is to give them the best tools to solve their applications: high-speed, real-time, optical metrology. We only succeed if we can explain to our customers how they can use the camera to achieve a given goal.”
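The appeal of multiple regions of interest is easy to see with a back-of-the-envelope calculation. The sketch below uses hypothetical numbers (the sensor resolution, ROI sizes, and exposure values are illustrative, not figures from any specific camera) to show how reading out two small ROIs instead of a full frame shrinks the data each acquisition must move:

```python
# Hypothetical illustration: reading out two regions of interest (ROIs)
# instead of the full frame reduces the data a 12-megapixel sensor must
# transfer per acquisition. All numbers are assumed for illustration.

FULL_W, FULL_H = 4096, 3072          # assumed full-frame sensor, ~12 MP
BYTES_PER_PIXEL = 1                  # 8-bit monochrome output

# Two ROIs, e.g. around solder pads on a PCB, each with its own
# exposure setting (exposure values are purely illustrative).
rois = [
    {"w": 640,  "h": 480, "exposure_us": 100},
    {"w": 1024, "h": 256, "exposure_us": 500},
]

full_frame_bytes = FULL_W * FULL_H * BYTES_PER_PIXEL
roi_bytes = sum(r["w"] * r["h"] * BYTES_PER_PIXEL for r in rois)

print(f"Full frame: {full_frame_bytes / 1e6:.1f} MB")
print(f"Two ROIs:   {roi_bytes / 1e6:.2f} MB")
print(f"Reduction:  {full_frame_bytes / roi_bytes:.0f}x")
```

With these assumed numbers, the two ROIs carry roughly 22 times less data than the full frame – bandwidth and processing headroom that an inspection system can spend on higher frame rates instead.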

Teledyne DALSA’s (Waterloo, Canada) latest line scan cameras, for example, further illustrate the drive to add intelligence and value around the latest machine vision cameras, according to Ghislain Beaupré, vice president of operations and R&D for the OEM group.

As Beaupré explains, Teledyne DALSA’s line scan cameras combine multiple linear sensors in the Piranha XL PX-16 to offer both the highest line scan resolution and the highest sensitivity, allowing customers to inspect larger surface areas at higher speeds and with fewer cameras. At the same time, the company has expanded the appeal of both low- and high-end line scan cameras by adding a GigE Vision interface, and then adding TurboDrive compression to make the most of the Gigabit data transfer limitation without giving up the trigger-to-image features common to Teledyne DALSA products. All of those features depend on the sensor, but none ship with the sensor; that’s where the camera manufacturer comes in.
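A rough calculation shows why compression matters for a high-resolution line scan camera on a GigE Vision link. In the sketch below, the usable payload rate and the 2:1 compression ratio are assumptions for illustration (real figures depend on packet overhead and image content; TurboDrive's actual ratio varies scene by scene):

```python
# Rough sketch of the Gigabit Ethernet bottleneck for a 16k-pixel
# line scan camera. The usable payload rate and the compression ratio
# are assumptions; real-world figures vary with overhead and content.

GIGE_PAYLOAD_BPS = 115e6             # ~115 MB/s usable on GigE (assumed)
LINE_PIXELS = 16 * 1024              # 16k-pixel line (Piranha XL class)
BYTES_PER_PIXEL = 1                  # 8-bit output

bytes_per_line = LINE_PIXELS * BYTES_PER_PIXEL

# Maximum sustainable line rate without compression:
raw_line_rate_khz = GIGE_PAYLOAD_BPS / bytes_per_line / 1e3

# With a hypothetical 2:1 lossless compression ratio, twice as many
# lines fit through the same link:
compressed_line_rate_khz = raw_line_rate_khz * 2.0

print(f"Uncompressed: ~{raw_line_rate_khz:.1f} kHz line rate")
print(f"Compressed:   ~{compressed_line_rate_khz:.1f} kHz line rate")
```

Under these assumptions the raw link caps out around 7 kHz for a 16k-pixel line, so even a modest lossless compression ratio meaningfully raises the inspection speed a single GigE cable can support.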

“10 years ago, the camera was a black box,” adds Adimec’s Herrmann. “Then we went to board level, which was actually two boards: one for the sensor, one for the interface. Then one board. Now, camera makers have to ask themselves, exactly what value we’re bringing? If you just package a sensor, you’re making embedded vision systems, which is why we’ll see more of those systems going forward. But for our customers, as they push for higher resolution cameras – from 12, to 25, and even 50 megapixels – people will move away from one camera to one PC architecture. We’ll have to move the intelligence closer to the sensor to manage all that data. And to change data into intelligence, you have to have knowledge of the application. That’s what we do at Adimec. A custom solution using commodity components. It’s a choice.”

Given the massive amounts of data from the newest 25-megapixel (and larger) cameras, and given the thermal constraints of high-speed computing, it seems unlikely that Herrmann envisions new ‘smart cameras’ that attempt to put the entire PC inside the camera.

It seems more likely that machine vision camera makers will continue along the path of today’s industrial networks and the PC itself, using DSPs, FPGAs, GPUs, and even the cloud to truly distribute the processing demands of increasingly complex computation systems. As ‘machine vision in the cloud’ discussions continue to gain momentum, machine vision designers may be on the cusp of major changes in how machine vision systems are designed, and in the roles of individual components – including the camera.
This content is part of the Embedded Vision curated collection.