The Changing World of Vision Sensors and Smart Cameras
| By: Winn Hardin, Contributing Editor
As microprocessor and memory advances drive machine vision capability, the lines between vision sensors, smart cameras, and PC-host systems blur as each climbs the functionality curve.
The smart camera has undergone significant changes since the 1990s thanks to multicore processors, the integration of greater Internet and remote-access functionality, the prevalence of higher resolution sensors, the growth of specialty smart camera systems for specific applications, and the growing functionality of image processing libraries.
Today, the offspring of the first smart cameras are generally referred to as vision sensors, while the latest generation of smart cameras closely resembles the functionality of what once could only be accomplished with a dedicated PC. Unlike the PC solution, however, both vision sensors and smart cameras include an imaging sensor, light, computational elements, software, and I/O: all the basics of a machine vision solution in a single housing. However, to select the right solution for your particular application, you need to understand the differences in performance and capability between vision sensors and smart cameras.
Kitchen vs. Garage
Although answers will change based on whom you ask, most would agree that machine vision systems fall into the following four categories in order of complexity: vision sensors, smart cameras, PC-based vision systems, and application-specific/turnkey systems.
“When you look at vision sensors, the difference between them and smart cameras is that with the vision sensor, the user just plugs in the device and configures it for a certain application rather than developing code from scratch,” explains Steven Tirsway, Product Creation Specialist at Keyence Corporation of America (Itasca, Illinois).
“It’s like the difference between the tools in your kitchen drawer versus your tool bench in the garage,” adds Jim Anderson, Vision Sensor Product Manager at SICK, Inc. (Minneapolis, Minnesota). “You can do a lot with the tools in the kitchen, the vision sensor. You can do blob analysis and find a shape using a pattern-matching tool. But if you need to do more image analysis or different types of filters, or need to break down the image further, that’s when the smart camera comes into play. Smart cameras are still pretty much plug in and configure to operate, but you have a lot more parameters available to you with a smart camera than with a vision sensor. That being said, the line between the two systems isn’t static. It’s the same for the line between smart camera and PC-host system. It’s constantly changing as these systems become more powerful.”
While lines between product categories are interesting to debate, when it comes to industrial solutions, a specific system’s capabilities are what matter. Today, one OEM’s vision sensor may compete directly with another OEM’s smart camera, depending on the functions, sensor sizes, I/O, and other considerations. That said, most vision sensors on the market today use VGA or WVGA resolution sensors, leaving the 2+ megapixel (MP) sensors for smart camera lines. Vision sensors come with a limited, predefined set of operations configured through sliders and automated wizards rather than by inputting specific values or stringing features together into macros as a user would with a smart camera. And vision sensors typically come with limited I/O, although they can still make decisions.
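To make the blob-analysis capability mentioned here concrete, the sketch below labels 4-connected blobs in a tiny binary image using pure Python. The function name and result format are invented for illustration; commercial sensors implement this kind of tool in firmware, typically reporting each blob's area and bounding box much as shown.

```python
from collections import deque

def find_blobs(image):
    """Label 4-connected blobs in a binary image (list of lists of 0/1).

    Returns one dict per blob with its pixel count and bounding box,
    roughly the kind of result a vision sensor's blob tool reports.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                # Breadth-first flood fill from an unvisited foreground pixel
                queue = deque([(r, c)])
                seen[r][c] = True
                pixels = []
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                blobs.append({
                    "area": len(pixels),
                    "bbox": (min(ys), min(xs), max(ys), max(xs)),
                })
    return blobs

# Two separated parts in a tiny 5x6 "image"
frame = [
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 0],
]
blobs = find_blobs(frame)
print(len(blobs))                        # → 2
print(sorted(b["area"] for b in blobs))  # → [4, 4]
```

Counting and sizing connected regions like this is what lets a vision sensor handle overlapping or organically shaped products rather than just presence/absence.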
“For the majority of vision sensor applications, discrete I/O is sufficient,” explains Tirsway. “Avoiding autoID, positioning, and measurement applications eliminates the need for data transfers, which makes the vision sensor more stable and easy to use. Smart cameras offer more functionality…in the form of an enhanced toolset, higher resolution CCDs/CMOS and more peripherals. However, adding greater flexibility adds complexity…too.”
Not Just Presence/Absence Anymore
In past years, vision sensors were described as very limited machine vision systems that could do little more than tell you whether an object was in the field of view. That is still the number one application for vision sensor devices. However, that functionality is expanding to include more machine vision tools, decision-making capability for downstream actuation, and even higher throughput than some smart camera systems.
“In addition to the standard vision sensor tools, such as pattern matching, rotation, and traditional contrast tools, SICK’s Inspector line includes the ability to determine position and do some blob analysis, which means this product can now work with organic products, for example, or overlapping products,” notes Anderson.
SICK’s Inspector has also added more memory in recent years, expanding the number of jobs that can be stored on the vision sensor from 16 to 32. “Inspector has five basic tools, which you can configure 32 times,” says Anderson. “With our smart camera line, IVC, you have a few more tools, and you can develop programs with hundreds of steps.”
Another example of changing vision sensor designs can be found in Keyence’s IV series vision sensors, which include multiple lenses on a mechanical wheel inside the camera. “We aim to take a lot of the guesswork out of the initial setup by providing easy ‘Autofocus’ and ‘AutoBrightness’ adjustments,” says Tirsway. “The camera cycles through to find the optimal focus for the work piece but still gives the operator a slide feature in case they want to focus on a feature that isn’t on the top of the part. This helps take the guesswork out of initial setup, allowing customers who have limited experience with vision technology to easily get off the ground and move forward.”
Who’s the Fastest?
While most newer smart camera models have some variety of high-clock-speed, multicore, or heterogeneous microprocessor that pairs GPU cores with a CPU core, you might be surprised to learn that many vision sensors can handle more throughput than their smart camera competitors.
“Because vision sensors use a smaller number of tools and are limited in how the tools can interact, SICK’s Inspector can process up to 250 images per second, compared to about 40 images per second for the IVC smart camera line,” adds Anderson. “When you’re running a web server and using a couple of different tools per image, operation for Inspector will be around 40 images per second compared to 10 for IVC, for example. And that’s not just SICK. It is pretty much the same for everyone across the industry.”
The glass-is-half-empty crowd may say throughput means little when you have a limited number of available functions, but vision sensors can do more than go head to head with other vision systems. “One of the great things about a vision sensor is its ability to do multiple operations all at once or in a short period of time,” explains Keyence’s Tirsway. “Having such flexibility means it can take the place of multiple sensor installations that are being used on a single target. With our vision sensor we are able to do up to 16 inspections per program, and it allows for up to 32 programs/recipes to be configured and stored internally. During operation each of these programs can then be easily switched without the need for reconfiguration.”
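The program/recipe scheme Tirsway describes can be sketched as a small data structure: a fixed number of stored program slots, each holding a bounded list of inspections, with switching reduced to a pointer change rather than reconfiguration. Class and method names below are hypothetical, not Keyence's actual API; the 32/16 limits come from the figures quoted above.

```python
MAX_PROGRAMS = 32      # stored programs/recipes (per the figures quoted above)
MAX_INSPECTIONS = 16   # inspections per program

class RecipeStore:
    """Toy model of on-device program storage with instant switching."""

    def __init__(self):
        self._programs = {}   # slot number -> list of inspection configs
        self._active = None

    def store(self, slot, inspections):
        if not 0 <= slot < MAX_PROGRAMS:
            raise ValueError(f"slot must be 0..{MAX_PROGRAMS - 1}")
        if len(inspections) > MAX_INSPECTIONS:
            raise ValueError(f"at most {MAX_INSPECTIONS} inspections per program")
        self._programs[slot] = list(inspections)

    def switch(self, slot):
        # Switching the active program is just a reference change:
        # no reconfiguration step is needed
        self._active = slot

    def run(self):
        # Each inspection here is a (name, threshold) pair; a real device
        # would execute its configured tools against the live image
        return [name for name, _ in self._programs[self._active]]

store = RecipeStore()
store.store(0, [("presence", 0.8), ("blob_count", 3)])
store.store(1, [("pattern_match", 0.9)])
store.switch(0)
print(store.run())   # → ['presence', 'blob_count']
store.switch(1)
print(store.run())   # → ['pattern_match']
```

Keeping all recipes resident in device memory is what makes the switch effectively free, which is how one vision sensor can replace several single-purpose sensors aimed at the same target.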
So will the trend to increase the intelligence of vision sensors, smart cameras, and PC-based vision systems continue? All parts of the machine vision product universe will undoubtedly benefit from faster processors, networks, and smarter image processing algorithms. But that may not be the most important consideration when evaluating a vision sensor application.
“I don’t think we should be considering how much more a vision sensor would be able to do in the future, but how much easier and more stable they are for tomorrow’s industrial tasks,” concludes Keyence’s Tirsway. “Just as important as new microprocessors are trends to move from incandescent lighting to LED lighting. They both light the target, but the newer LEDs are more efficient and more stable with better life expectancy. Manufacturers will develop vision sensors to achieve what can already be achieved but in a package that is easier to use and more stable. That should be the future for vision sensors.”