Industry Insights
Global Vision Standards Keep the Pace in 2017
POSTED 03/10/2017
| By: Winn Hardin, Contributing Editor
Machine vision industry standards inject compatibility into an industry that develops some of the most complex technology in the automation world, helping OEMs, system integrators, and end users do their jobs more efficiently. From the outside, new standards and iterative versions may seem to pop up at a furious pace, but behind the scenes, volunteers know it is a marathon, rather than a sprint, to keep these interfaces updated for and usable by a constantly changing marketplace.
For example, thanks to ever-increasing computational power and faster CMOS sensors, 3D vision adoption continues to climb. In response, GenICam 3.0.1, the newest version of the plug-and-play protocol, has provided easier integration and access to 3D machine vision technologies.
Previously, the standard’s image stream supported only one type of data, typically intensity data. “Our push is to allow the camera to send distance information or some kind of geometrical information together with intensity data and make it possible for the receiver to understand what’s coming,” says Mattias Johannesson, expert in 3D Vision Core Design at SICK IVP AB (Linköping, Sweden) and a contributor to the GenICam committee.
One of the challenges in supporting multipart data acquisition is configuring the sensor. The GenICam committee is addressing this through the most recent Standard Features Naming Convention (SFNC) module. “The SFNC defines how to set up the sensor to generate multiple types of data so it’s possible to configure it without breaking the backward compatibility and destroying anything for standard cameras,” Johannesson says.
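As a minimal sketch of that setup, assuming a generic GenApi node map exposed by a GenTL consumer or vendor SDK (the nodemap handle below is a placeholder, while ComponentSelector and ComponentEnable are the SFNC features used for multipart selection):

```python
# Minimal sketch: enabling several data components for multipart
# streaming via SFNC features. "nodemap" is a placeholder for the
# GenApi node-map handle your SDK or GenTL consumer exposes.

def enable_components(nodemap, components=("Intensity", "Range", "Confidence")):
    """Ask the camera to stream each listed component."""
    for name in components:
        nodemap.ComponentSelector.value = name  # pick a component
        nodemap.ComponentEnable.value = True    # include it in the stream
```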
Another component of SFNC involves adding new features necessary to set up the coordinate system. “This helps users understand not only what the coordinate information is but also whether it’s metric or imperial units, where the coordinate system is aligned relative to the camera, and whether it’s spherical coordinates or Cartesian coordinates,” Johannesson says.
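For illustration, the coordinate-system metadata Johannesson mentions is carried by the SFNC Scan3d feature group; a hedged sketch of reading it through the same placeholder node-map handle might look like this:

```python
# Hedged sketch: querying the SFNC Scan3d* features that describe a
# 3D camera's coordinate system. Feature names follow the SFNC; the
# "nodemap" handle is again a placeholder for your SDK's node map.

def describe_coordinate_system(nodemap):
    print("Output mode:", nodemap.Scan3dOutputMode.value)        # e.g. CalibratedC
    print("System:     ", nodemap.Scan3dCoordinateSystem.value)  # Cartesian or Spherical
    print("Unit:       ", nodemap.Scan3dDistanceUnit.value)      # e.g. Millimeter
    # Per-axis scale and offset map raw pixel values to real-world units:
    for axis in ("CoordinateA", "CoordinateB", "CoordinateC"):
        nodemap.Scan3dCoordinateSelector.value = axis
        print(axis, nodemap.Scan3dCoordinateScale.value,
              nodemap.Scan3dCoordinateOffset.value)
```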
In addition, the GenICam 3.0 standard is expanding to include a new high-dynamic range (HDR) feature, as well as improvements to user-set parameters and standardized device firmware upload. These last two improvements are aimed at deployment and field upgrades.
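As a rough sketch of the user-set workflow those improvements target (UserSetSelector, UserSetSave, and UserSetDefault are SFNC feature names; the node-map handle remains a placeholder):

```python
# Rough sketch: persisting the current configuration on-camera so it
# is restored at power-up, using the SFNC user-set features.

def save_as_power_up_default(nodemap):
    nodemap.UserSetSelector.value = "UserSet1"  # choose a storage slot
    nodemap.UserSetSave.execute()               # store current settings
    nodemap.UserSetDefault.value = "UserSet1"   # load this set at boot
```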
GigE Vision Goes for Precision
As in GenICam 3.0.1, multipart image transfer will play a significant role in revision 2.1 of GigE Vision. In response to vision companies’ need for 3D image support, the standard’s revision aims to ease transmission of multipart 3D images over GigE Vision. The GigE Vision committee has been working over the last few years to add a payload type that enables that support.
“Essentially it’s just splitting the various types of information that one can find in the 3D image and transmitting it as part of the same block because all of this information belongs to the same image,” explains Eric Carey, program director of area cameras and frame grabbers at Teledyne DALSA and chairman of the GigE Vision committee.
The GigE Vision committee is working in parallel with the GenICam committee to ensure support for transmitting those multipart 3D images, with the goal of releasing the revision by mid-2017 so that companies can offer 3D products based on the technology.
Multipart image transfer isn’t the only subject receiving attention. For the first time, GigE Vision will standardize the locking connector at the back of the camera. The committee will release a supplement that features formal mechanical drawings for the dimensions used in the locking connectors.
Revision 2.1 also will leverage the optional IEEE 1588 Precision Time Protocol (PTP), which uses a very high-precision clock to synchronize all cameras on a network. Synchronizing multiple camera systems typically requires dedicated trigger signals going to the frame grabber or camera, depending on the technology. That means a cable has to run from a common trigger source to all the cameras that need to synchronize.
The advantages of adding IEEE 1588 to GigE Vision are twofold. First, GigE Vision has an internal time stamp, which is a 64-bit number that increments at a specific time interval — usually 1 µs or faster. With IEEE 1588, this time stamp will use the PTP common time stamp as reference, meaning that all PTP-enabled cameras on the same network will share the same time stamp.
"Even though the cameras aren’t organized in terms of image acquisition, you can still create a timeline of what happens in the system as images arrive from each one,” Carey says. “You can accurately measure frame rate, distance, or speed depending on how the system is set. It could be pretty powerful, especially in multicamera systems, when you need to measure a time difference between the various image acquisitions.”
The second benefit of the common time stamp is the addition of a feature called “action command.” The action command allows the user to specify a time in the future for the synchronized, PTP-enabled cameras to execute the trigger operation without having to run the trigger wire. This gives integrators the ability to remove an I/O cable and use only the Ethernet cable to provide the data, power, and trigger to a GigE Vision camera.
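A hedged sketch of what scheduling such a trigger could look like from the application side; the helper below is hypothetical, but the four fields it carries (device key, group key, group mask, and scheduled time) are the ones the GigE Vision action command transports:

```python
# Hypothetical sketch: scheduling a trigger on all PTP-synchronized
# cameras in a group. A real SDK call would serialize these fields
# into a GigE Vision ACTION_CMD packet and broadcast it.

def send_scheduled_action(device_key: int, group_key: int,
                          group_mask: int, action_time_ns: int) -> None:
    print(f"ACTION_CMD key={device_key:#x} group={group_key:#x} "
          f"mask={group_mask:#x} at t={action_time_ns} ns")

# Fire every camera in group 1 exactly 10 ms from a known PTP time,
# with no trigger wiring: Ethernet carries data, power, and trigger.
ptp_now_ns = 1_000_000_000_000  # placeholder for the current PTP time
send_scheduled_action(device_key=0x1234, group_key=0x1,
                      group_mask=0xFFFFFFFF,
                      action_time_ns=ptp_now_ns + 10_000_000)
```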
In addition to supporting 10 GigE, GigE Vision stands to benefit from the higher data rates of NBASE-T™. Released in 2016, NBASE-T technology goes beyond standard Gigabit Ethernet’s 1 Gbps limit, reaching 2.5 and 5 Gbps using the large installed base of Cat5e and Cat6 cabling.
“Machine vision is benefiting from this work as NBASE-T becomes mainstream for applications outside our market,” Carey says. “We’re interested in applying this technology to increase frame rates that can be sustained by GigE Vision cameras.”
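A rough, overhead-free calculation suggests what those rates buy; the 5-megapixel, 8-bit camera below is purely illustrative:

```python
# Illustrative only: sustainable frame rates for a hypothetical
# 5-megapixel, 8-bit camera at each link rate, ignoring protocol overhead.

FRAME_MB = 5.0  # 5 MP x 1 byte per pixel

for gbps in (1.0, 2.5, 5.0):
    mb_per_s = gbps * 1e9 / 8 / 1e6   # link rate in MB/s
    print(f"{gbps} Gbps -> ~{mb_per_s / FRAME_MB:.0f} fps")
# 1 Gbps -> ~25 fps, 2.5 Gbps -> ~62 fps, 5 Gbps -> ~125 fps
```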
Camera Link HS Leverages Fiber’s Speed and Cost Advantages
Camera Link HS (CLHS) has undergone several changes as the committee prepares its next revision of the standard. True to the 3D vision trends addressed by other standards, CLHS has introduced 3D data types. The revision also supports multiple areas of interest (AOIs) and multiple data types within one frame.
“That really changes the video structure, so we modified our video packet to enable changing AOIs and the pixel definition within each AOI on the fly,” says Michael Miethig, R&D camera development manager for Teledyne DALSA and chairman of the CLHS standards technical subcommittee.
Although CLHS can run over copper or fiber-optic cables, the cost and usability of the latter are making the standard’s X protocol a popular solution. The X protocol physical layer runs at 10.3 Gbps with 64b/66b encoding and forward error correction, achieving 1,200 MB/s of image data throughput per fiber.
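Those figures are easy to sanity-check, assuming the quoted 10.3 Gbps is the standard 10GBASE-R line rate of 10.3125 Gbps:

```python
# Back-of-envelope check of the X protocol's quoted throughput,
# assuming a 10.3125 Gbps line rate (the standard 10GBASE-R rate).

line_rate_bps = 10.3125e9
payload_bps = line_rate_bps * 64 / 66  # 64b/66b strips 2 coding bits per 66
print(payload_bps / 8 / 1e6)           # -> 1250.0 MB/s raw payload
# Packet framing and forward-error-correction overhead consume the
# remaining margin, leaving roughly 1200 MB/s of image data per fiber.
```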
Robust, lightweight, and reliable, fiber offers very high bandwidth because losses within the cable are much lower than in copper. Because the cable is thinner, its bend radius is smaller. Fiber also has a flex-life rating of up to 50 million cycles and is immune to electromagnetic interference.
CLHS has also fixed a bug that surfaced with longer fiber-optic cables. Although the original CLHS specification was designed for 300 m, users were exceeding that limit. “That was causing some data resend on our command channel, so in the next revision you can handle any cable length and measure the propagation delays,” Miethig says.
CLHS’s use of extremely low-cost IP cores available from the AIA has proven to be another attractive feature. AIA believes that providing low-cost IP cores will help promote the technology, enabling companies to save over $60,000 in development costs and six months of development time.
Additionally, “the fact that we all use the same IP cores means that it’s a very robust design,” Miethig says. “It reduces development effort and risk and ensures device interoperability.”
A New Standard on the Horizon?
“In order for industry associations to establish a new machine vision standard, a critical mass of companies must want the standard and be willing to share the technology with other companies in the industry,” says Bob McCurrach, AIA’s director of standards development.
An area currently attracting attention is smaller processor formats that field-programmable gate arrays (FPGAs) and systems on a chip (SoCs) make possible. “There’s certainly a place for smaller-than-PC systems or embedded systems to be able to connect directly to the factory automation system,” says McCurrach.
Furthermore, these smaller processor formats “allow integration of vision into systems that don’t need the full power of a PC or that can’t package the larger, full-system architecture,” says McCurrach. “This is definitely an area that is helping drive vision into even more applications than was previously possible.”
The standards experts will gather for the spring 2017 International Vision Standards Meeting (IVSM), May 8-12, 2017 in Natick, Massachusetts, hosted by The MathWorks, Inc. and AIA. JIIA will host the fall IVSM in October in Hiroshima, Japan.
After all, a standards committee’s job is never done. “There’s always ongoing work within all the standards in terms of adding and revising functionality and making sure bugs are removed,” SICK’s Johannesson says.