Technologies Impacting Machine Vision Lighting and Cameras

POSTED 01/06/2004  | By: Nello Zeuch, Contributing Editor

The question I am most often asked is, "Where is machine vision technology going?" Because machine vision is itself a multidisciplinary technology, assessing where it is going requires assessing the direction of each of its major disciplines. As it turns out, each discipline is seeing changes that affect both price and performance, and in every case the changes favor machine vision applications.

While improvements continue in lamps such as the xenon sources often used for strobes and high-speed applications, the biggest impact in lighting is coming from advances in LEDs. In addition to LEDs with wavelength-specific emissions that can be advantageous in some machine vision applications, there have been significant strides in generating white light. Brightness and efficiency are both improving, and manufacturers are learning how to handle heat dissipation. As LEDs address high-volume commercial applications, their cost can be expected to keep declining, which will ultimately make custom lighting arrangements a cost-effective option.

In the case of cameras there are several advances taking place that impact machine vision. First, CMOS-based cameras continue to gain in performance, and for many machine vision applications their performance is now adequate. Second, higher-resolution cameras (1000 x 1000 and above) are emerging at price points nearly equal to those of 640 x 480 cameras; again, CMOS technology is making this possible. Third, lower-cost color cameras are emerging. Fourth, there is a definite migration to digital cameras. Fifth, these digital cameras offer more on-board functionality; essentially they are becoming more "intelligent."

Overall, the improvements in lighting and cameras are leading to more catalog products designed with specific generic machine vision applications in mind, making application engineering and system integration easier, faster and cheaper.

To gain insight into what is happening in each discipline, input for this article was canvassed from suppliers of lighting and cameras who target the machine vision market.

Lighting Contributors

  • Peter Niedzielski, Applications Engineer – PerkinElmer
  • Steven R. Giamundo, Marketing Manager, FTI - Fiberoptics Technology Inc.
  • Mike Muehlemann, President – Illumination Technologies
  • J. Marcel LaFlamme, VP - RVSI – NER
  • Matt Pinter, Design Engineer – Spectrum Illumination

Strobe Lighting: Peter Niedzielski
What is happening in the underlying technology used by lighting designers? 
At PerkinElmer we are developing new pulsed and continuous xenon light sources that are more efficient and offer higher light output, longer lamp life and better output stability.  These newer light sources are being applied to many of the traditional machine vision applications as well as newer applications such as color recognition and UV fluorescence.

How will this affect lighting design performance?
With the advent of more cost-effective color cameras, applications for color inspection and recognition are growing.  For color cameras to perform effectively, designers seek lighting with the best possible color rendition.  This is why pulsed and CW xenon is so effective: its color temperature and spectrum are very similar to those of natural sunlight.

Improved light output stability will allow the lighting designer to tackle vision applications that require very tight threshold settings for accurate measurement and detection. The increase in light output obtained from our strobes will allow images to be acquired at higher resolution and magnification. Longer-life lamps will allow the design of vision systems that have less downtime and require fewer recalibrations.
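To make the point about tight thresholds concrete, here is a minimal sketch (the edge profile and digital numbers are invented for illustration and are not tied to any PerkinElmer product) of how a fixed gray-level threshold shifts a measured edge position when illumination drifts by a few percent:

```python
import numpy as np

# Toy edge profile: intensity ramps from 20 to 220 digital numbers (DN)
# over 10 pixels. Positions are in pixels; all values are hypothetical.
x = np.linspace(0, 20, 2001)                 # sub-pixel sample grid
profile = np.clip(20 + (220 - 20) * (x - 5) / 10, 20, 220)

def edge_position(illumination_scale, threshold_dn=120.0):
    """Return where the scaled profile first crosses a fixed DN threshold."""
    scaled = profile * illumination_scale
    idx = np.argmax(scaled >= threshold_dn)
    return x[idx]

for scale in (1.00, 0.95, 0.90):             # 0%, 5%, 10% light loss
    print(f"illumination x{scale:.2f}: edge measured at {edge_position(scale):.2f} px")
# With a fixed threshold, a 10% drop in lamp output moves the apparent edge
# by a sizeable fraction of a pixel -- enough to matter in gauging applications.
```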

How will these advances ultimately impact the machine vision industry?
These advances in light output, life and stability will ultimately make possible certain machine vision applications that were once very difficult or cost-prohibitive.

Fiber Optic Lighting: Steven Giamundo
What is happening in the underlying technology used by lighting designers? 
To keep pace with a rapidly developing vision technology, fiber optic lighting designers are focused on tailoring systems to be "digitally linked", "smart", and "niche focused".  Lighting is still an important part of the system, and today's faster, integrated and more intuitive vision technology allows more alternative lighting technologies to be used, broadening the lighting equipment portfolio.

To handle resolution, color, and speed requirements, fiber optic lighting designers are combining UV, IR, and HID lamp and diode technologies with fiber optic delivery systems to help vision systems see what could not be seen in the past. 

The ongoing challenge for all lighting designers will be to keep pace with system evolution. Today's fiber optic lighting designer is a hybrid physicist, electronic engineer, mechanical engineer and software engineer with a second degree in marketing.

How will this affect lighting design performance?
Changes in lighting technology give the fiber optic designer a broader palette from which to create, but (as mentioned previously) designers' knowledge base requires an extension into electronics and software. The best fiber optic designers will understand the benefits of most major light generating technologies, and develop additional expertise or partnering relationships to recommend what's best, rather than what they have to offer.

Because fiber optic lighting is really a light transmission and shaping system, the opportunity and challenge for fiber optic designers is to create hybrid systems which leverage the benefits of various light sources, while managing each source’s shortcomings or undesirable effects.

Regardless of the lighting technology employed, the application of an effective lighting solution in a cost-effective package (delivered on time) will continue to differentiate all designers and what they offer. In addition, all lighting designers must be committed to the value-add equation, lest they be usurped by electronic and software developers, or OEMs who invest to develop lighting competence.

How will these advances ultimately impact the machine vision industry?
Through the lighting interface, improved lighting component integration allows the vision system to become more "intelligent" and "intuitive". Therefore, more applications will become "mainstream", served by plug-and-play, off-the-shelf, bundled solutions. In general, OEM integration will continue to accelerate, and more full color, UV, IR and 3D applications will develop.

Vision engineers will move to the next level of application, where systems can be fully integrated into the process, rather than just installed at the back end. Presented as a "linked" package, smaller vision systems will talk to and be controlled by a larger full system interface.

Stable Lighting: Mike Muehlemann
What is happening in the underlying technology used by lighting designers? 
Our new product for this year is the 4900 Auto-Cal light source.  Released in April, this technology allows the user to calibrate the photonic output with NIST-traceable accuracy, to within very tight photonic specifications, throughout the lifetime of the individual system (relamp after relamp) and across multiple platforms (installations around the world).
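As a generic illustration of what closed-loop output calibration involves, the sketch below adjusts a simulated lamp's drive level until a reference photodiode reading hits a target value. It is not a description of the 4900's actual mechanism; the sensor model, gain and tolerance are invented for the example.

```python
# Minimal sketch of closed-loop intensity calibration. Generic illustration
# only; NOT the Illumination Technologies 4900 mechanism.

def calibrate(read_sensor, set_drive, target, tol=0.005, gain=0.5, max_iter=50):
    """Adjust lamp drive until measured output is within +/-tol of target.

    read_sensor(): measured output, e.g. from a reference photodiode whose
                   calibration is traceable to a NIST standard.
    set_drive(d):  sets the lamp drive level (0..1).
    """
    drive = 0.5
    for _ in range(max_iter):
        set_drive(drive)
        error = (target - read_sensor()) / target
        if abs(error) <= tol:
            return drive                          # within spec
        drive = min(1.0, max(0.0, drive + gain * drive * error))
    raise RuntimeError("lamp cannot reach target output -- time to relamp")

# Simulated lamp for demonstration: output proportional to drive, with an
# "aging" factor that a fresh lamp would not have. Units are arbitrary.
aging = 0.82
state = {"drive": 0.0}
set_drive = lambda d: state.update(drive=d)
read_sensor = lambda: 1000.0 * aging * state["drive"]

print("drive after calibration:", round(calibrate(read_sensor, set_drive, target=600.0), 3))
```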

How will this affect lighting design performance?
We have always been able to deliver rock-solid stability over the life of a single lamp, but once a maintenance cycle was performed, neither the OEM, the integrator, the end user, nor we had control of the system's front end. The 4900 now upgrades the performance of a wide range of machine vision systems to make them more reliable and repeatable over the entire life of the system, a period which could span 3 to 5 years. This makes software design simpler, and of course a more reliable and repeatable vision front end translates to more robust vision installations (long-term success).

How will these advances ultimately impact the machine vision industry?
We see an entirely new class of applications mandated by compliance issues, to which machine vision inspection is becoming more and more integral. In almost all of these compliance applications, the requirement that the process be repeatable and reproducible becomes a critical element in process validation. When you look at machine vision as a technology tool, the subcomponent most likely to change is the front-end lighting. With these new, easy-to-use photonic compliance technologies, we believe that machine vision tools will begin to make their way en masse into many more mission-critical manufacturing sectors, where compliance and validation are mandatory.

LED Lighting: Marcel LaFlamme
What is happening in the underlying technology used by lighting designers?
LED technology has been used effectively in machine vision lighting designs for ten years and has already displaced fluorescent or halogen lights in many applications.  However, this is only the beginning.  Several major semiconductor companies are engaged in research toward a new LED "light bulb".  This research has already been introduced into the MV lighting market with white LEDs, super-bright LEDs and COB (chip-on-board) products.  However, these new technologies bring along side effects that are not yet completely understood.  By necessity, these new components will force us to develop new designs for thermal management and some very creative electronic controls.

How will this affect lighting design performance?
In general, we will see an increase in the light intensity produced by these new technologies.  We will also see a new range of complex lighting configurations, which will become possible with more sophisticated lighting controls.  There will be a new "look and feel" to the lighting products that emerge: the familiar rings and backlights will take on a new "hi-tech" look to accommodate the thermal management designs, and electronic controls will come equipped with sophisticated communications protocols.

How will these advances ultimately impact the machine vision industry?
LED-based lighting systems have always been difficult to use when a very large field of view must be lit, because of the lack of brightness.  These new technologies will open up the possibility of using LEDs in robot guidance, large part assemblies, web inspection, etc.  In the short run, these LED-based products will be fairly expensive, but as the general commodity market for LEDs grows there will be a large drop in cost, which will be passed on to the machine vision industry.  There is a future in light!

LED Lighting: Matt Pinter
Spectrum has been involved with many new products for new applications over the last year.  We have designed many new lights to solve applications that could not be solved with standard LEDs.  Some of our new lights include washdown lights for harsh environments; the food industry has been waiting for LED lights it could use in washdown housings.

Spectrum Illumination's new Monster Lights are the first in the industry to use high-flux LEDs, which are 30x to 50x brighter than standard LEDs.  We have been working with LED manufacturers to bring the latest LED technology to the machine vision market, and all the top LED and lighting companies are rushing to bring the new high-power LEDs to market.  The LED is coming of age to compete with conventional lighting, and LEDs are making their way into many new markets.  The prize waiting for the major lighting suppliers is getting LED power high enough to replace standard fluorescent and incandescent lamps; saving energy is a big incentive for the manufacturers.  Why is this important to the machine vision industry?  The technology is advancing with brighter, cheaper LEDs, which helps the machine vision industry mature as well.

We have been overwhelmed with orders from robot integrators for our new Monster Lights.  The higher-power lights solve applications that standard LEDs could not.  Many robotic applications require the light source to be positioned a long distance from the part, and standard LEDs cannot provide enough light for these applications.  The new high-flux, high-power LEDs solve these applications, where the light has to be 5, 10 or even 15 feet away.
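The distance figures above matter because, for a compact source, irradiance on the part falls off roughly with the square of the working distance. A back-of-the-envelope sketch (the numbers are illustrative, not Spectrum Illumination specifications):

```python
# Rough inverse-square estimate of why working distance is so punishing
# for lighting. Values are illustrative only.

reference_distance_ft = 1.0     # distance at which a given light is adequate
for distance_ft in (5, 10, 15):
    falloff = (distance_ft / reference_distance_ft) ** 2
    print(f"{distance_ft:>2} ft: needs ~{falloff:.0f}x the output of a light "
          f"that works at {reference_distance_ft:.0f} ft")
# 5 ft -> ~25x, 10 ft -> ~100x, 15 ft -> ~225x -- which is why a 30x-50x
# brighter LED changes what is feasible at robot-guidance distances.
```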

The new Monster Lights are solving high-speed applications where short exposures are needed while inspecting thousands of parts per minute.  Short, fast strobe pulses are required to provide illumination in these applications, and the output of the new LEDs is high enough to handle them.  Millisecond and microsecond strobe pulse widths are being used to inspect manufacturing lines.  To illuminate the product at these short exposures, the light needs to be intense and fast.  High-flux LEDs solve these issues.
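As a rough illustration of why pulses this short are needed, the sketch below estimates the maximum strobe duration that keeps motion blur under half a pixel; the line speed and optical setup are made-up example numbers, not figures from Spectrum Illumination:

```python
# How short must a strobe pulse be to freeze motion on a moving line?
# Blur during the pulse = line speed x pulse duration. Numbers are examples.

line_speed_mm_s = 1000.0          # 1 m/s conveyor
pixel_size_on_part_mm = 0.1       # 100 mm field of view across 1000 pixels
max_blur_pixels = 0.5             # allow half a pixel of smear

max_pulse_s = max_blur_pixels * pixel_size_on_part_mm / line_speed_mm_s
print(f"max strobe pulse: {max_pulse_s * 1e6:.0f} microseconds")
# -> 50 microseconds: far shorter than a typical continuous-light exposure,
# which is why pulse intensity has to be high to collect enough signal.
```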

Where is LED technology heading?  Nichia, Cree, Osram and Lumileds are all continuing to develop higher-power LEDs.  New LEDs are fitted with heat sinks and directly coupled to either an aluminum circuit board or a ceramic substrate.  Larger LED dies are under development and being rushed to market every quarter.  Output is no longer rated in millicandela (mcd); manufacturers now rate LEDs in watts, and 1-watt and now 5-watt LEDs are available.

Camera Contributors

  • William Mandl, President – Amain Electronics Company
  • Dave Gilblom, President - Alternative Vision
  • David Lane, Sr. Product Specialist – Cohu Electronics
  • Kerry Van Iseghem, Founder – Imaging Solutions Group of New York, Inc.
  • Greg Bell, Director of Business Development - Lumenera
  • Carlo Sabetti, National Sales Manager, Narragansett Imaging
  • Marty Furse, CEO – Prosilica
  • Steve Nordhauser, Vice President Product Development – Silicon Imaging
  • Jerry Fife, Sales Manager – Sony
  • Jari Loytomaki, Technical Director – TVI Vision

MOSAD-based Cameras: William Mandl
What is happening in the underlying technology used by cameras?
Delta Sigma is well known to be the most accurate A/D technology available.  Applying this technology at the pixel level in a camera focal plane can provide improved dynamic range and increased well capacity.  MOSAD, Multiplexed OverSample A/D, is an improvement in delta sigma A/D that allows a very simple circuit to be placed at each pixel, yet still achieve the known Delta Sigma performance.  Pixel size and fill factor are the same as with the classic integrate-and-dump approach, yet performance is massively improved.  As semiconductor technology shrinks, MOSAD will continue to improve proportionately due to its all-digital nature.
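For readers unfamiliar with oversampled conversion, the sketch below shows a textbook first-order delta-sigma modulator followed by decimation. It illustrates the general principle Mandl describes, not MOSAD itself or any Amain circuit; the test signal and oversampling ratio are arbitrary.

```python
import numpy as np

def first_order_delta_sigma(signal, osr):
    """Textbook 1-bit, first-order delta-sigma: integrate the error between
    the input and the fed-back 1-bit output, then quantize the integrator."""
    bits = []
    integrator = 0.0
    for x in np.repeat(signal, osr):          # oversample by holding each sample
        integrator += x - (bits[-1] if bits else 0.0)
        bits.append(1.0 if integrator >= 0.0 else -1.0)
    return np.array(bits)

def decimate(bits, osr):
    """Average each block of osr one-bit outputs back to one multi-bit sample."""
    return bits.reshape(-1, osr).mean(axis=1)

# A slowly varying "pixel intensity" between -1 and 1, heavily oversampled.
t = np.linspace(0, 1, 64)
signal = 0.6 * np.sin(2 * np.pi * t)
osr = 64
recovered = decimate(first_order_delta_sigma(signal, osr), osr)
print("max reconstruction error:", float(np.max(np.abs(recovered - signal))))
# The per-sample hardware is just an integrator, a comparator and 1-bit
# feedback -- simple enough, in principle, to repeat at every pixel.
```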


How will this affect camera performance?
The development and perfection of oversample A/D at each pixel has led to massive improvement in camera dynamic range potential.  Amain recently announced an infrared camera with quantization error at the 33-bit level (198 dB) at 30 frames per second.  This large dynamic range is accompanied by a well capacity of 10E10 electrons (10 billion), and the same camera has a noise floor below 40 electrons.  The ability to put such capability at each pixel of a focal plane array is brought about by modern approaches in sampled-data theory: oversampling techniques, when properly applied at the pixel level, provide simple solutions for increased performance.
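For reference, the dB figure quoted above follows from the standard conversion of quantization bits to dynamic range, roughly 6.02 dB per bit:

```python
import math

# Dynamic range in dB for an N-bit quantizer: 20 * log10(2**N) ~= 6.02 dB per bit.
bits = 33
print(f"{bits} bits -> {20 * math.log10(2 ** bits):.1f} dB")   # ~198.7 dB
```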

How will these advances ultimately impact the Machine Vision Industry?
Today's cameras severely limit machine vision dynamic range with limited well capacity.  A simple application such as spotting rust patches on a lemon is beyond these cameras; sorting for this blemish must be done by hand or not done at all.  With more dynamic range and larger well capacity, many inspection problems can be solved.

Foveon Development: Dave Gilblom
What is happening in the underlying technology used by cameras?
The new CMOS color image sensor from Foveon incorporating their X3™ layered photodiode technology is now beginning to be incorporated in industrial cameras.  We sell the sensors to companies who want to design their own cameras, and we sell the first industrial camera with the Foveon sensor, the HVDUO-10m made by HanVision of Daejeon, Korea.  This technology is a major departure from all previous color sensors because it does not require color filters; it relies instead on the differences in the absorption characteristics of silicon across the visible band.  Previously, color cameras used either one sensor with a color filter array (CFA) or three sensors on a prism.  The color filter array scheme introduces errors in dimensional location because the colors are not lined up geometrically.  The prism scheme is bulky, relatively fragile and puts limits on the optics that can be used.  The Foveon X3 technology has neither of these problems.  In addition, since the sensor is CMOS, it has very low power consumption and highly flexible scan controls.
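The "dimensional location" errors mentioned above arise because, in a color filter array, the three colors are sampled on grids that are physically offset from one another, whereas a layered sensor measures all three at every site. A minimal sketch of the sampling positions in a standard RGGB Bayer pattern (generic Bayer geometry, not a description of any specific sensor):

```python
# Sample positions of each color in an RGGB Bayer color filter array.
# Red and blue photosites sit on grids offset from each other by a full
# pixel in both axes; values in between must be interpolated, which is
# what introduces small positional errors at edges.

def bayer_sites(color, rows=4, cols=4):
    offsets = {"R": [(0, 0)], "G": [(0, 1), (1, 0)], "B": [(1, 1)]}
    return [(r, c) for r in range(rows) for c in range(cols)
            if (r % 2, c % 2) in offsets[color]]

for color in ("R", "G", "B"):
    print(color, bayer_sites(color))
# A layered (Foveon-style) sensor measures R, G and B at every (r, c),
# so no spatial interpolation between color planes is required.
```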

How will this affect camera performance?
Cameras using this sensor technology show excellent color performance, perfect color registration and identical resolution in color and monochrome.  The scan control functions enable software control of ROI, pixel binning and triggered scanning.  The first sensor model was designed for high-resolution acquisitions but other models will be introduced that address other market segments.  These will enable the design of very small, low-power cameras with a wealth of programmable operating modes.  In addition, since the spectral response of these sensors extends from below 300 nm to almost 1100 nm, they will find use in a variety of UV and IR applications as well.  In fact, it is possible to switch between full-color and infrared modes with these sensors by just flipping a filter in front of the lens.

How will these advances ultimately impact the Machine Vision Industry?
Color vision has suffered from very detrimental compromises in performance due to the limitations in color sensors.  The Foveon X3 technology removes these compromises to enable full use of the color resolution of the sensor and to permit color images to be processed in the same way as monochrome images.  In addition, since organic filters are not needed in the Foveon sensors, the well-known degradation problems that occur in these filters with accumulated exposure, especially to stray UV, will disappear.  These sensors are only silicon.  This should improve the stability of color measurements in machine vision applications.  Improvements in both ease of use and performance will make vision users less reluctant to attack problems that can be best solved using color.  Vision systems suppliers reluctant to embrace color will be more likely to use this simpler technology.  This will increase the penetration of the available color market segment and help the vision industry grow as a whole.

High Resolution Cameras: Dave Lane
What is happening in the underlying technology used by cameras?
The increasing desire for higher and higher resolutions is driving the development of cameras with megapixel sensors.  Cohu currently has cameras with greater than 1000 x 1000 pixel arrays and is developing a 2K x 2K design.

How will this affect camera performance?
The increase in pixel counts and bit depth vastly increases the amount of data that needs to be processed.  More data means more bandwidth is required to move that data around.  As higher bus speeds are developed, larger camera arrays will become more and more popular.  With the advent of DSP chipsets, many functions are now, or soon will be, available for control based on the application requirements.
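To put rough numbers on that bandwidth growth (the frame rates and bit depths below are typical round figures, not Cohu specifications):

```python
# Raw data rate = width x height x bytes per pixel x frames per second.
def data_rate_mb_s(width, height, bits_per_pixel, fps):
    return width * height * (bits_per_pixel / 8) * fps / 1e6

for name, w, h, bpp, fps in [("640 x 480, 8-bit, 30 fps",  640,  480,  8, 30),
                             ("1K x 1K, 10-bit, 30 fps",  1024, 1024, 10, 30),
                             ("2K x 2K, 12-bit, 30 fps",  2048, 2048, 12, 30)]:
    print(f"{name}: ~{data_rate_mb_s(w, h, bpp, fps):.0f} MB/s")
# 640x480 -> ~9 MB/s, 1Kx1K -> ~39 MB/s, 2Kx2K -> ~189 MB/s: each resolution
# step demands a correspondingly faster bus or more in-camera processing.
```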

How will these advances ultimately impact the Machine Vision Industry?
The smaller size and higher resolution will increase the applications and uses of machine vision as well as the accuracy and reliability of today's systems.  The new "smart cameras" will include these advantages as well as those from the embedded computer industry.  Systems will become smaller, lighter and easier to use, while at the same time increasing in performance and reliability.

Digital Cameras & Smart Cameras: Kerry Van Iseghem
What is happening in the underlying technology used by cameras?
The biggest underlying technology shift we see at this time is the move towards 1394 adoption and the embedding of image processing inside the camera. ISG's customers really like the idea of taking their proprietary image processing algorithms and running them in hardware in the on-board Xilinx FPGAs. In many cases, the cameras can do 100% of the processing and trigger external events and PLCs.  This eliminates the need for a PC, frame grabber, cabling, etc., and lowers the system cost dramatically; the only cable needed is the one to the external trigger or PLC. In the cases where a PC is required, customers can live with 1394 bandwidth because the on-board processing reduces bandwidth by sending only pertinent information rather than the entire image video data.
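A rough sketch of the bandwidth argument follows; the image size and result format are invented for illustration and are not a description of ISG's firmware:

```python
# Compare streaming full frames over IEEE 1394 with sending only results
# computed on-camera (e.g. pass/fail plus a few measurements per part).

frame_bytes   = 1280 * 1024          # hypothetical 1.3-megapixel, 8-bit image
parts_per_sec = 30

full_stream_mb_s   = frame_bytes * parts_per_sec / 1e6
result_bytes       = 64              # hypothetical: status word + a few floats
result_stream_kb_s = result_bytes * parts_per_sec / 1e3

print(f"full images : ~{full_stream_mb_s:.1f} MB/s (a large share of 1394a's 400 Mb/s)")
print(f"results only: ~{result_stream_kb_s:.1f} KB/s")
# Sending decisions instead of pixels cuts the link load by roughly four
# orders of magnitude, which is what makes a PC or frame grabber optional.
```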

How will this affect camera performance?
This allows the camera to operate much faster than before. If you can process images in real time, on the fly, and have the camera make the decisions, overall system performance improves.

How will these advances ultimately impact the machine vision industry?
These advances will dramatically reduce costs and allow machine vision to enter new markets where cost was prohibitive in the past. It spells trouble for the frame grabber companies but I suspect they will migrate to this model and focus on the software for the applications where appropriate.

CMOS & Digital Interfaces: Greg Bell
What is happening in the underlying technology used by cameras?
Image sensors – more resolution, better quality, standard digital interfaces, lower system costs. 

Camera manufacturers are able to leverage the technology and volumes of CMOS and CCD image sensors being utilized by consumer digital still cameras. Quality and unit prices are bettered every few months with fierce competition among sensor manufacturers. Perhaps the greatest advances of resolution and speed have emerged with CMOS based sensors. 

Digital interfaces such as FireWire, Ethernet and USB2 are gaining rapid acceptance in industrial applications. OEMs are looking for ways to lower costs and simplify their imaging components. Yes, there are high-speed or complex imaging applications that require high-speed Camera Link cameras and frame grabbers, but a surprising percentage of integrated vision systems require no more than basic trigger and image capture. FireWire and USB2 (being nearly identical) provide a simple means to do this. USB2 has an advantage in that most new PCs and embedded computers have a native USB2 port, eliminating an add-in card and lowering cost of goods.

How will this affect camera performance?
Higher resolution sensors provide more resolving power. Where a complex network of six NTSC cameras and capture cards was once used to image a large PCB, a single 2-megapixel digital camera can replace them, often at less than 50% of the original system cost. As an example, Lumenera has just introduced a USB2, 20-megapixel camera, at $3,500, that provides the ultimate in resolution for off-line inspection.
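The replacement arithmetic is straightforward: six NTSC-class sensors supply roughly the same total pixel count as one 2-megapixel sensor (the NTSC capture resolution below is a typical figure, and the 2-megapixel format is one common example):

```python
# Total pixels from six NTSC-class cameras vs. one 2-megapixel camera.
ntsc_pixels = 640 * 480                  # typical digitized NTSC frame
print(f"6 x NTSC    : {6 * ntsc_pixels / 1e6:.2f} MP")
print(f"1 x 2 MP cam: {1600 * 1200 / 1e6:.2f} MP")   # a common 2 MP format
# ~1.84 MP vs ~1.92 MP -- one modern sensor covers the same board with a
# single lens, one cable and no multi-camera stitching or calibration.
```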

CMOS sensors will not satisfy every machine vision application. CCD-based cameras are required for low-light environments and applications requiring detailed image analysis. However, for general inspection, component placement, OCR/barcode, etc., CMOS is the better choice, providing many benefits such as those listed below (a rough sketch of the frame-rate gain from sub-windowing follows the list):

  • Higher frame rates
  • Sub-windowing
  • Extended dynamic range
  • Lower cost per pixel
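As noted above, here is a rough sketch of how sub-windowing (ROI readout) trades resolution for frame rate, using the simple row-limited approximation; the base figures are illustrative, not Lumenera specifications:

```python
# In many CMOS sensors, readout time scales roughly with the number of
# rows read, so a smaller region of interest gives a higher frame rate.
full_rows, full_fps = 1024, 30           # illustrative full-frame figures

for roi_rows in (1024, 512, 256, 128):
    approx_fps = full_fps * full_rows / roi_rows
    print(f"ROI height {roi_rows:>4} rows -> ~{approx_fps:.0f} fps")
# Halving the window height roughly doubles the rate -- useful when only a
# narrow band of the field of view actually contains the feature of interest.
```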

The price/performance and additional benefits of CMOS are being demanded by OEMs and consumers.  Many camera manufacturers such as Dalsa, DVT, Cognex, Pulnix, Cohu and Lumenera have industrial grade CMOS based cameras in their line-up.  

Now with drivers and standard interfaces to popular machine vision software packages, USB2 will continue to be adopted for industrial inspection applications. With many computers providing up to 6 USB2 ports, multiple cameras and even synchronized capture can be provided without the need for frame-grabbers. There are limitations to PCI bandwidth, yet new PCs can manage several cameras streaming at VGA resolutions or 1-2 cameras capturing real-time megapixel images.        
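The multi-camera claim follows from simple bus arithmetic; the usable-throughput figures below are commonly cited ballpark values, not measurements of any particular camera:

```python
# How many uncompressed VGA streams fit on USB2 and on a 32-bit, 33 MHz PCI bus?
vga_stream_mb_s  = 640 * 480 * 1 * 30 / 1e6     # 8-bit pixels at 30 fps
usb2_usable_mb_s = 35.0                         # ballpark usable share of 480 Mb/s
pci_usable_mb_s  = 100.0                        # ballpark usable share of 133 MB/s

print(f"one VGA stream    : ~{vga_stream_mb_s:.1f} MB/s")
print(f"per USB2 port     : ~{usb2_usable_mb_s / vga_stream_mb_s:.0f} VGA streams")
print(f"per shared PCI bus: ~{pci_usable_mb_s / vga_stream_mb_s:.0f} VGA streams")
# A megapixel camera at 30 fps needs ~30-40 MB/s on its own, which is why
# PCI, rather than the USB2 ports, becomes the limit with more than one or two.
```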

How will these advances ultimately impact the machine vision industry?
CMOS sensors are nearing CCD quality. Machine vision applications are the ideal environment for CMOS as controlled lighting often makes up for the lower sensitivity of CMOS. With the inherent benefits of CMOS, you will find more and more camera manufacturers offering this as an alternative to CCD-based technology. 

Digital Bus interfaces such as FireWire, Ethernet and USB2 will continue to gain ground for general inspection applications. The impact of lower costs, ease of installation and integration with more and more standard software packages will help lower the barrier for use of vision systems for automated industrial applications.

CMOS: Carlo Sabetti
What is happening in the underlying technology used by cameras?
Cameras are being designed with low-cost CMOS sensors, and they are also including PC platforms in the camera for faster image handling. Even though the cost of the sensor will decrease, the effect of this will be to increase the cost of the camera while also providing increased functionality for the camera's end use.  There will always be this conflict between sensor cost and the overall cost of a camera with additional functionality.  This functionality will come not only from the PC platform but also from special software or firmware designed for particular machine vision applications.  The idea of an all-singing, all-dancing camera is not realistic in this marketplace; the cost barriers for such a camera are high.

How will this affect camera performance?
These software and firmware features will increase the performance of cameras for machine vision by allowing the end user to match the camera to the type of application.  This will improve the overall performance of the system.

Whatever the cameras may lack in performance because of the CMOS sensor will improve in time.  In the meantime, the camera's functionality will outweigh the shortfall in image performance, which will still be more than satisfactory for most machine vision applications.

How will these advances ultimately impact the Machine Vision Industry?
In the longer term these changes will make process equipment more flexible and repeatable and, in effect, lower the overall cost of the system thanks to application-specific functions.

Firewire: Marty Furse
There are a number of technological issues that are affecting the machine vision camera market.  For one, Firewire (IEEE-1394) is maturing in terms of acceptance and compliance to software interfaces.  For the system integrator or end-user, this means camera systems that are less costly and easier to integrate than those of the past.  Eliminating the need for a frame grabber can save up to a thousand dollars of system cost.  Add to that the time savings due to the truly plug-and-play nature of DCAM compliant Firewire cameras and the cost savings begin to add up.

Another significant factor is the maturation of CMOS image sensor technology.  Image quality improvements, increased imaging speeds, snapshot shuttering, and extended dynamic range functions are features of the latest CMOS image sensors.  Considering the relatively low cost of these sensors, such features make CMOS the technology of choice for many machine vision applications.

In fact, Prosilica combines the latest CMOS sensor technology with truly plug-and-play Firewire interfaces and a suite of features and accessories to produce machine vision cameras that are very compelling for system integrators and end-users.

Regarding "snapshot shuttering":
Traditional CMOS devices typically use a rolling shutter.  A rolling shutter operates like the curtain shutter on a film camera: all pixels receive the same exposure time, but not at the same moment.  These cameras are thus not suitable for capturing high-speed events.  Improvements came when global (snapshot) shuttering became available in CMOS sensors, allowing all pixels to be exposed at the same instant.
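A rough sense of the geometric distortion a rolling shutter introduces on a moving target, using illustrative readout timing and object speed (not Prosilica figures):

```python
# With a rolling shutter, each row starts its exposure slightly later than
# the one above it, so a moving object is sampled at different times down
# the frame and appears sheared. Numbers below are illustrative.

rows = 1024
row_readout_time_s = 15e-6                 # time offset between adjacent rows
object_speed_mm_s = 500.0                  # object moving across the scene

frame_skew_s = rows * row_readout_time_s
apparent_shear_mm = object_speed_mm_s * frame_skew_s
print(f"top-to-bottom time skew : {frame_skew_s * 1e3:.1f} ms")
print(f"apparent shear of object: {apparent_shear_mm:.1f} mm")
# A global ("snapshot") shutter exposes every row at the same instant, so the
# skew term disappears and fast-moving parts are imaged without shear.
```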
