Industry Insights
Integrating Motion Control with Machine Vision
POSTED 05/24/2018 | By: Kristin Lewotsky, Contributing Editor
Factories today leverage increasing levels of intelligence to improve the flexibility, throughput, and quality of manufacturing. For an example, look no further than the growing level of integration between machine vision and motion control in the manufacturing world. Combining the two technologies creates a more flexible, dynamic environment. The machine vision system becomes more effective, the motion system becomes more adaptable, and the production environment as a whole demonstrates greater functionality.
In the words of Nate Holmes, R&D group manager for motion and vision product lines at National Instruments (Austin, Texas), the vision system provides information, while the motion system takes action. Integration of the two technologies can be split into three classes: motion-assisted machine vision (synergistic integration), vision-assisted motion control (synchronized integration), and vision-guided motion. They are listed in order of increasing integration, which also implies increasing complexity. There is a payoff for that complexity, though: higher levels of integration increase product quality and production efficiency while reducing production cost.
“Combining vision with motion allows you to have a more flexible, dynamic environment,” says Darrell Paul, marketing supervisor for product marketing at Omron Industrial Automation (Hoffman Estates, Illinois). “The motion system is able to provide more functionality inside the motion than ever before. So, at the same time you might be doing motion-guided vision, you could also be doing inspection and code reading.”
In a palletizing operation, for example, the vision system could detect the layers of packaging already on the pallet. This would enable it to stack additional items in the correct orientation and sequence. And that is just the start. “It can then go back to the line and be able to see a package running on the conveyor,” says Paul. “It can actively lock in the position of that package and guide the motion control system to pick up the package without having to use any additional encoders on the system to do it.”
Before we launch into a discussion of integrated machine vision and motion control systems, it is important to note that these systems vary widely in hardware, software, and architecture. Some use a PLC and a dedicated motion controller, while others use a PC-based soft-motion platform. The colors of light, the camera resolutions, and the types of cameras themselves differ from system to system. This discussion includes some basic examples, but be aware that they are not, by any means, comprehensive.
Motion-assisted vision
For decades, machine vision has been a fixture in the industrial environment. The technology is used for a variety of inspection tasks, from determining presence or absence of an object to checking dimensions to assessing quality. Is a part flawed? Is the bottle label on straight? Is the plastic the correct color? Machine vision systems can answer these questions very rapidly, over extended durations, and often very accurately. How accurately depends upon a number of factors, including system design, lighting, and how well the imager can see the parts. This is where motion systems can help.
Motion control technology can improve the performance of vision systems by optimizing positioning for best results. Motion axes can be used to place parts in front of the image sensor or transfer the image sensor over the parts. The technology can reposition parts to better illuminate the area of interest or change lighting conditions. It can change the orientation of the part or the camera to capture a sequence of images from different angles.
The classic example of motion-assisted vision is a web- or conveyor-based system that moves the product or web in front of a camera at constant velocity. The camera may be free-running, capturing images continually, or it may be triggered by a hardware sensor or a software trigger. The image-processing algorithm may need to know the speed of the conveyor, and the two systems may need to operate on a common time base. The level of integration is minimal, however.
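As a rough illustration of the bookkeeping involved, the Python sketch below derives a camera trigger interval from the conveyor speed. All of the names and values are hypothetical rather than drawn from any particular system, and the same trigger could be expressed in encoder counts so that it tracks the belt even if the speed drifts.

```python
# Minimal sketch: deriving a camera trigger interval for a constant-velocity
# conveyor. The names and numbers below are illustrative assumptions, not
# values from any specific system.

CONVEYOR_SPEED_MM_S = 500.0      # assumed conveyor speed
FIELD_OF_VIEW_MM = 120.0         # assumed length of belt seen per frame
OVERLAP_FRACTION = 0.10          # assumed overlap so nothing falls between frames
ENCODER_COUNTS_PER_MM = 40.0     # assumed conveyor encoder resolution

def trigger_plan(speed_mm_s, fov_mm, overlap, counts_per_mm):
    """Return (trigger period in s, encoder counts between triggers)."""
    advance_mm = fov_mm * (1.0 - overlap)   # belt travel between exposures
    period_s = advance_mm / speed_mm_s      # time-based trigger period
    counts = advance_mm * counts_per_mm     # or trigger off the encoder instead
    return period_s, counts

period, counts = trigger_plan(CONVEYOR_SPEED_MM_S, FIELD_OF_VIEW_MM,
                              OVERLAP_FRACTION, ENCODER_COUNTS_PER_MM)
print(f"Trigger every {period*1000:.1f} ms or every {counts:.0f} encoder counts")
```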
Vision-assisted motion
In vision-assisted motion, the vision system provides the input that enables the motion system to perform a task such as ejecting bad parts. These tasks require synchronized operation between the vision system and the motion system. A sensor triggers the camera to capture an image, which is stored in a buffer for processing. The same trigger is sent to the motion system so that the motion controller can register the position of the part. The vision system analyzes the image. When it detects a faulty part, it sends a signal to the motion system, which uses its knowledge of the part location to enable the actuator to eject the failed product.
For example, consider a system for separating green tomatoes from ripe tomatoes in a ketchup processing line. The tomatoes ride up a conveyor and cascade off the end in a single line (see figure 1). A line-scan camera captures the image and triggers the motion system to capture the position of a given tomato. If analysis determines that the tomato is green, the vision system sends a signal to the motion system. The controller on the motion system tracks the motion of the green tomato, and when the green tomato is in front of the actuator, commands a move to direct the tomato to a different conveyor.
Figure 1: In this vision-based sorting application, the vision system sends a trigger to the camera to capture an image and to the motion system to monitor the location of the tomatoes. When the vision system detects an unripe tomato, it sends a signal to the motion system. The motion controller tracks the position of the item in question so that an actuator can be commanded to reject the part. (Image courtesy of Kingstar)
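For readers who want to see the handshake in code form, here is a minimal Python sketch of the synchronization logic: the trigger latches the conveyor position for each part, the vision verdict flags rejects, and the motion task fires the ejector once a flagged part has traveled the camera-to-ejector distance. The function names, distance, and callbacks are illustrative assumptions.

```python
# Minimal sketch of the synchronization described above. All names and
# values are illustrative assumptions, not a vendor API.

from collections import deque

CAMERA_TO_EJECTOR_MM = 350.0   # assumed distance along the conveyor

tracked_rejects = deque()      # conveyor positions latched for bad parts

def on_camera_trigger(conveyor_position_mm, part_is_bad):
    """Called once per trigger with the latched position and vision verdict."""
    if part_is_bad:
        tracked_rejects.append(conveyor_position_mm)

def motion_task(conveyor_position_mm, fire_ejector):
    """Called every motion cycle; fires the ejector as flagged parts arrive."""
    while tracked_rejects and \
            conveyor_position_mm - tracked_rejects[0] >= CAMERA_TO_EJECTOR_MM:
        tracked_rejects.popleft()
        fire_ejector()

# Example: a bad part latched at 100 mm is ejected once the belt reaches 450 mm.
on_camera_trigger(100.0, part_is_bad=True)
motion_task(450.0, fire_ejector=lambda: print("eject!"))
```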
These sorting applications require tight synchronization between the vision and motion systems. In particular, the analysis and the decision to reject the part need to happen quickly enough that the motion system can be commanded to act before the part reaches the actuator. The image resolution and the complexity of the algorithm need to be low enough that analysis can be completed in time. The performance of the communication system also needs to be analyzed to confirm that latency and jitter do not interfere with image processing.
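A back-of-the-envelope check like the following Python sketch can establish whether the timing budget closes. All of the latency figures are assumed for illustration and would need to be measured on real hardware.

```python
# Rough timing check for the constraint above: capture, processing,
# communication, and actuation must all complete before the part travels
# from the camera to the ejector. Values are illustrative assumptions.

CONVEYOR_SPEED_MM_S = 800.0
CAMERA_TO_EJECTOR_MM = 350.0
CAPTURE_MS = 5.0              # assumed exposure + readout
PROCESSING_MS = 30.0          # assumed inspection algorithm time
COMMS_WORST_CASE_MS = 8.0     # assumed worst-case latency + jitter
ACTUATOR_MS = 20.0            # assumed ejector response time

travel_ms = CAMERA_TO_EJECTOR_MM / CONVEYOR_SPEED_MM_S * 1000.0
budget_used = CAPTURE_MS + PROCESSING_MS + COMMS_WORST_CASE_MS + ACTUATOR_MS
print(f"Travel time {travel_ms:.0f} ms, pipeline worst case {budget_used:.0f} ms")
assert budget_used < travel_ms, "pipeline too slow for this line speed"
```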
Vision-guided motion
At the next level of integration, the vision system provides feedback to the motion system, helping it to close the control loop (see figure 2). As in the case of synchronized motion, a sensor in the vision system triggers the camera to capture an image of the part. The vision system (frequently the camera itself) calculates the position of the part in pixel space, then converts it to a physical location in real-world coordinates. The motion controller uses this information to generate a trajectory for the actuator. The approach dramatically increases the functionality of the motion system.
“Vision-guided motion compensates for non-rigid tooling,” says Paul. “You could have a part appear anywhere on a conveyor, and the system with the vision will be able to find that part and pick it up or inspect it or perform whatever operation it is supposed to do. You don't necessarily need to be presenting that part in a rigid fixture as you would in the past. And you don’t need to pay for that tooling either. So it’s an engineering cost reduction as well as an increase in flexibility.”
Figure 2: Block diagram of a vision-guided motion system shows the role of the vision system in delivering data to the trajectory generator at cycle times on the order of 0.5 to 1 s. The trajectory generator delivers setpoints about every millisecond, while the control loop for the actuator operates on 50 µs cycle times. (Image courtesy of National Instruments)
Vision-guided motion is distinct from synchronized motion. In synchronized integration, the vision system helps the motion system decide whether to move. In vision-guided motion, the vision system helps the motion system decide where to move. Using vision-guided motion, a system can locate and assemble parts fed to it in random orientations. This eliminates the need for fixtures or special positioning equipment, reducing cost and complexity as well as speeding changeovers. Switching to a new product involves a software change rather than retooling.
Consider an application in which a silicone bead is applied by an extruder head to the edge of pieces of fabric. The system consists of a conveyor moving at a fixed speed, an overhead-mounted vision system, and an extrusion head mounted downstream on a linear slide controlled by a rotary servo motor (see figure 3). The fabric pieces are placed on the conveyor in random orientations. As they pass under the vision system, it captures an image and processes the data to calculate position and angle of the piece. Coordinate transformation software built into the smart camera transforms the image location from pixel space to its physical location in the real-world coordinate system. The motion controller uses this information to calculate the tool trajectory required to apply an even bead of silicone to the edge of the fabric.
Figure 3: This freeform edge-tracking system uses vision-guided motion to apply a bead of silicone to the edge of a series of cloth parts placed on a conveyor in random orientation. The overhead camera captures the image and performs a coordinate transformation before passing the data to the motion controller. The controller commands an extrusion head mounted on a servo-motor-driven linear slide to apply the silicone. The camera system also performs an inspection step simultaneously with part acquisition to enable the system to reject any defective blanks. (Image courtesy of ORMEC)
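The pixel-to-world conversion at the heart of these systems can be sketched in a few lines of Python. The version below fits an affine map from a handful of calibration correspondences and applies it to a detected part; a real smart camera performs this calibration internally, and the fiducial coordinates here are invented for illustration.

```python
# Minimal sketch of the pixel-to-world coordinate transformation step.
# The calibration points are illustrative assumptions.

import numpy as np

# Calibration: pixel coordinates of fiducials with known world positions (mm).
pixels = np.array([[100, 80], [1180, 95], [1160, 900], [120, 910]], float)
world  = np.array([[0, 0], [400, 0], [400, 300], [0, 300]], float)

# Solve world = [px, py, 1] @ A in the least-squares sense.
ones = np.ones((len(pixels), 1))
A, *_ = np.linalg.lstsq(np.hstack([pixels, ones]), world, rcond=None)

def pixel_to_world(px, py):
    """Map a pixel location to real-world conveyor coordinates (mm)."""
    return np.array([px, py, 1.0]) @ A

# Detected part centroid in pixel space -> world coordinates.
print(pixel_to_world(640, 480))
```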
Another example involves CNC table cutters used to cut parts out of 4’ x 8’ metal blanks. In conventional systems, the blanks are placed on the work surface using a crane. As a result, they cannot be precisely aligned. Instead, the machine must acquire the edge mechanically before it begins cutting parts from the blank in a relative coordinate system. To maximize yield, the machine is programmed to cut a pattern of nested parts. Because the orientation of the blanks varies, parts along the perimeter may project over the edge and be incomplete. Vision-guided motion provides a solution to this problem.
With the introduction of image-based feedback, the machine no longer needs to acquire the edge of the blank mechanically. Instead, the vision system captures an image of the blank and converts its position from pixel space to real-world coordinates. The motion system modifies the trajectory of the tool head so that it cuts the maximum number of complete parts from the blank. This approach enables the system to compensate for a blank that is cocked, offset, or otherwise misaligned, increasing both yield and throughput.
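Conceptually, the compensation is a rigid transform: the nested cut pattern is defined in blank coordinates, and the measured position and rotation of the blank map every toolpath point into machine coordinates. The Python sketch below shows the idea with assumed pose values.

```python
# Minimal sketch of the compensation step. The pose values are
# illustrative assumptions, not measurements from a real system.

import numpy as np

def blank_to_machine(path_xy, blank_origin_xy, blank_angle_rad):
    """Rotate and translate a toolpath from the blank frame to the machine frame."""
    c, s = np.cos(blank_angle_rad), np.sin(blank_angle_rad)
    rotation = np.array([[c, -s], [s, c]])
    return path_xy @ rotation.T + np.asarray(blank_origin_xy)

# Rectangle to be cut, in blank coordinates (mm).
part_outline = np.array([[0, 0], [200, 0], [200, 100], [0, 100]], float)

# Pose of the blank as measured by the vision system (assumed values).
measured_origin = (152.4, 78.0)    # mm offset on the table
measured_angle = np.deg2rad(1.8)   # blank is slightly cocked

print(blank_to_machine(part_outline, measured_origin, measured_angle))
```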
Vision-guided motion is an effective approach but does present some challenges. As with synchronized integration, timing is essential. The processing frame rate needs to be synchronized to the motion loop rate. Communications latency should be carefully scrutinized to ensure that it is acceptable within the constraints of the position loop of the equipment.
The biggest issue is that the system determines the trajectory only at the beginning of the move and receives no further input to modify performance. If there is an error in the coordinate transformation, the system cannot correct it. As a result, performance depends both on the accuracy of the transformation and on the precision, accuracy, and repeatability of the motion system.
Visual servoing
Visual servoing, or visual servo control, provides a way to avoid the possible errors introduced by vision-guided motion systems. In visual servo control, the system uses vision input both as guidance and as closed-loop feedback (see figure 4). Instead of providing input only for initial trajectory planning, the vision system provides continuous images to enable the motion system to target the part. The approach can be used to obtain highly accurate performance from less expensive equipment.
Figure 4: In the visual servo control scheme known as dynamic look-and-move, the vision system generates position setpoints directly, eliminating the need for a separate trajectory generator. This cuts the cycle time significantly. (Image courtesy of National Instruments)
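A minimal Python sketch of a dynamic look-and-move loop might look like the following: each vision cycle measures the remaining tool-to-target error and issues the next setpoint directly, so the loop closes on what the camera actually sees. The measurement and drive interfaces are stand-ins, and the gain and tolerance are assumed values.

```python
# Minimal sketch of a dynamic look-and-move loop. measure_error_mm() and
# send_setpoint() stand in for real camera and drive interfaces and are
# assumptions, as are the gain and tolerance.

GAIN = 0.5            # proportional gain on the measured error
TOLERANCE_MM = 0.05   # stop when the camera sees the tool on target

def visual_servo(measure_error_mm, send_setpoint, current_position_mm):
    position = current_position_mm
    while True:
        error = measure_error_mm()    # fresh image each cycle
        if abs(error) < TOLERANCE_MM:
            return position           # target reached as seen by the camera
        position += GAIN * error      # next setpoint comes from vision,
        send_setpoint(position)       # not from a precomputed trajectory

# Tiny simulation: tool at 0 mm, target at 10 mm, camera reports the gap.
state = {"pos": 0.0}
final = visual_servo(
    measure_error_mm=lambda: 10.0 - state["pos"],
    send_setpoint=lambda p: state.update(pos=p),
    current_position_mm=0.0,
)
print(f"converged at {final:.3f} mm")
```

Because every setpoint is based on a fresh image, a calibration error shrinks along with the measured error instead of being baked into the trajectory, which is why the approach can extract accurate performance from less expensive mechanics.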
A more sophisticated approach to visual servo control is known as direct servo. In this type of system, machine vision replaces conventional motion feedback devices such as encoders. The standard configuration features two cameras: an overhead camera that images both the actuator and the target to generate position setpoints, and a camera mounted on the actuator itself that delivers position feedback (see figure 5).
Figure 5: Direct servo control uses two cameras to deliver guidance and feedback. The first camera (bottom) images both actuator and target, using the data to generate position setpoints, replacing the trajectory generator. The second camera (top), mounted on the actuator, closes the position feedback loop. This camera replaces the conventional motion feedback system of encoder or similar. (Image courtesy of National Instruments)
Challenges of integrating motion and vision
The integration of vision and motion brings big benefits, but it is not always easy. Both systems are complex. They frequently are provided by different vendors. This can make interoperability a challenge because the two systems need to be synchronized. The higher the level of integration, the tighter the synchronization required.
As previously discussed, adding vision means that the vision and motion control tasks need to be completed in the same cycle. Achieving this goal requires a comprehensive evaluation of the task. Start by determining timing: How fast is the part moving? How much time is available to process the image and determine an action? Can the vision system deliver this response rate before the part reaches the actuator?
A number of factors can present timing issues. Adding digital filters can impact data analysis. “Vision tools can slow you down if you are not careful,” says Sam Rubin, senior systems engineer at ORMEC Systems Group (Rochester, New York). “The camera may need to use digital filtering to improve image contrast. Because these filters process the image pixel by pixel, they take time. All of a sudden, an inspection that would run at 1 ms starts running at 50 ms and then it may end up being too slow for the machine rate required.” Reducing the area being imaged and analyzed to the minimum possible region-of-interest can help speed the process.
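The payoff of a tight region of interest is easy to demonstrate. The Python sketch below, which assumes OpenCV and a synthetic frame, times the same filter on a full sensor image and on a small ROI; the sizes are arbitrary, but the proportional savings are the point.

```python
# Minimal sketch of the region-of-interest speedup: the same filter costs
# far less when applied only to a window around the inspected feature.
# Frame size and ROI are illustrative assumptions.

import time
import cv2
import numpy as np

frame = np.random.randint(0, 256, (2048, 2048), dtype=np.uint8)  # full sensor
roi = frame[900:1100, 900:1100]                                  # 200 x 200 window

for label, image in (("full frame", frame), ("ROI only", roi)):
    start = time.perf_counter()
    cv2.GaussianBlur(image, (5, 5), 0)    # stand-in for a contrast filter
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    print(f"{label}: {elapsed_ms:.2f} ms")
```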
Particularly in the case of vision-guided motion and visual servoing, uninterrupted, high-speed communication is essential. There was a time when the two systems would be kept separate and communicate over a high-speed data link. This has the potential to introduce latencies as a result of fixed offset and jitter. “If I'm going to be counting bottles, I can only count them so fast because I have to allow for non-deterministic latencies,” says Matt Edwards, solutions architect/director of consulting at IntervalZero (Waltham, Massachusetts). “I have to schedule for worst case.” As a result of the latency issue, machine builders and integrators increasingly run both machine vision and motion control on the same computing platform using a real-time operating system (RTOS) and shared memory. “Now, I can shorten those latencies and make things more deterministic, and the system operates quicker because it’s all in one box and there are no sporadic latencies that I have to account for,” says Edwards. “Now, the whole process can go faster.”
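The one-box pattern Edwards describes can be approximated in Python, with a shared-memory buffer standing in for an RTOS shared-memory region: the vision task publishes its latest result and the motion task polls it, with no network hop in between. The slot layout and sequence-counter convention are illustrative assumptions.

```python
# Minimal sketch of vision and motion tasks exchanging results through
# shared memory instead of a network link. Python's multiprocessing
# shared_memory stands in for an RTOS shared-memory region; the slot
# layout is an illustrative assumption.

import numpy as np
from multiprocessing import shared_memory

# One slot: [sequence counter, part x (mm), part y (mm)].
shm = shared_memory.SharedMemory(create=True, size=3 * 8)
slot = np.ndarray((3,), dtype=np.float64, buffer=shm.buf)

def vision_task_publish(seq, x_mm, y_mm):
    slot[1], slot[2] = x_mm, y_mm
    slot[0] = seq            # write the counter last so readers see a full update

def motion_task_read(last_seq):
    if slot[0] > last_seq:   # new result available, no network hop involved
        return slot[0], (slot[1], slot[2])
    return last_seq, None

vision_task_publish(1, 152.4, 78.0)
print(motion_task_read(0))

shm.close()
shm.unlink()
```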
A number of pitfalls exist in the integration of motion and vision. The first is mission creep. A vision system can perform multiple tasks while guiding motion, including inspection and barcode reading. It is important to ensure that these additions do not cause the vision system processing time to exceed the cycle time of the equipment.
It is important to remember that the vision system will be tuned and integrated to address a specific set of tasks and conditions. Boosting the line speed later may shrink the cycle time beyond the ability of the vision system to keep up.
Paths to success
More and more OEM machine builders are integrating machine vision into their motion systems. In part, that is due to a concerted push across the industry toward usability. Smart cameras incorporate automatic coordinate transformation and calibration routines. Many feature autofocus and integrated lighting. For OEMs and end users trying to put together a system with synergistic integration, it has never been easier to add machine vision. Even at higher levels of integration, a new generation of products combining motion and vision in one device simplifies commissioning.
“You're finding higher success in applications,” says Nick Tebeau, director of sales for Leoni Engineering Products and Services (Lake Orion, Michigan). “And what that means is that companies who may not have wanted to get involved in the past with integrating these technology platforms, out of fear of being unsuccessful, are now finding themselves much more comfortable. Support is more readily available because the number of trusted integrators has grown throughout North America. And there’s better training, such as the AIA CVP training, that really enables these companies to become educated in the proper way of getting started.”
Despite the array of modern tools, organizations pursuing higher levels of integration should still consider working with an integrator, or at the very least tapping their vendors for help. “Systems are becoming a lot easier, but understanding the basics of the motion and the vision system and marrying the two together can still be challenging,” says Bill Catalano, vice president of marketing and sales at ORMEC Systems Group (Rochester, New York). “In this resource-restricted time that we live in, working with an integrator or vendor can give organizations an edge.”
Acknowledgments
Thanks go to Andy Long of CYTH Systems and Nate Holmes of National Instruments for useful conversations and background material.