
Machine Vision Moves Industrial, Collaborative Robot Applications Forward

POSTED 01/19/2015 | By: Winn Hardin, Contributing Editor

Vision-guided robot (VGR) applications have always been among the most challenging machine vision applications. The robot has to reconcile three separate coordinate systems: the robot's, the machine vision system's, and the real world's. To provide the robot with accurate coordinates for each movement, VGR often requires multiple cameras (stereo vision, overhead, end-of-arm mounted, etc.) that in some cases must image moving parts, and may do so from a robot-mounted camera that itself moves with up to six degrees of freedom through a large 3D space. This much relative movement between camera, robot, and part makes lighting, field of view, and other optics-related challenges even more difficult.
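To make the coordinate-system problem concrete, the sketch below maps a part position reported in camera coordinates into the robot's base frame with a 4x4 homogeneous transform. It is a minimal illustration, not any vendor's implementation; the transform values and the detected point are hypothetical stand-ins for what a real hand-eye calibration would produce.

```python
# Minimal sketch (not vendor code): mapping a part position detected in the
# camera frame into the robot's base frame with 4x4 homogeneous transforms.
# The numbers below are hypothetical placeholders; in practice they come from
# a hand-eye / robot-to-camera calibration.
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical calibration result: the camera frame expressed in the robot base frame.
T_base_camera = make_transform(
    rotation=np.array([[0, -1, 0],
                       [-1, 0, 0],
                       [0, 0, -1]]),            # camera looking straight down
    translation=np.array([0.50, 0.10, 0.80]))   # metres

# Part position reported by the vision system, in camera coordinates (homogeneous point).
p_camera = np.array([0.02, -0.05, 0.60, 1.0])

# The same point in the robot's base frame, ready to use as a move target.
p_base = T_base_camera @ p_camera
print("Pick target in robot base frame:", p_base[:3])
```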

And now, with the advent of collaborative and mobile robots, even the robot wants to unbolt itself and move around.

Luckily, the power of machine vision technology, including new algorithms and the underlying computing power that supports it all, is giving machine vision designers the tools they need to keep up with the fast-moving robotics industry.

Traditional vs. Collaborative VGR
Today, traditional industrial robot applications, in which the robot is secured to the floor, far outnumber collaborative robot installations. In collaborative applications, robots and humans work together in the same workspace without the traditional “cage,” light curtains, or other barriers to separate the two. With this in mind, traditional robot manufacturers such as FANUC, Adept Technology, and Yaskawa Motoman are taking different paths to leverage the increasing power of machine vision for traditional VGR applications.

Photo: A new LR Mate 200iD/4SC clean room robot equipped with FANUC’s new iRVision® 3DA/400 Area Sensor locates and picks randomly oriented bottle caps from a bin.

For example, FANUC America Corporation (Rochester Hills, Michigan) recently released No Cal iRVision (No Calibration), which greatly simplifies one of the hardest jobs for a robot user: aligning the robot, machine vision, and real-world coordinate systems. Taking a cue from the programming methods popularized by collaborative robots such as Rethink Robotics’ Baxter and Universal Robots’ UR5 and UR10, No Cal iRVision lets the user position the robot and camera over a conveyor and simply show the robot the part and where the part should go; No Cal iRVision then programs the robot path itself. Calibration, a regular task in VGR applications to confirm that all three coordinate systems are in alignment, takes place behind the scenes, making it easy for a user to re-task the robot after a changeover, for example.
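Even when calibration is hidden from the user, something like the following still has to happen behind the scenes: fitting a best-fit rigid transform between points the camera reports and the same points expressed in robot coordinates. The sketch below uses a generic Kabsch/SVD fit on made-up 2D correspondences; it is illustrative only and is not FANUC's actual No Cal iRVision algorithm.

```python
# Generic least-squares fit of a rotation + translation that maps camera-reported
# points onto the corresponding robot coordinates (Kabsch/SVD method).
# All point values are made up for illustration.
import numpy as np

def fit_rigid_2d(cam_pts, robot_pts):
    """Best-fit rotation R and translation t so that R @ cam + t ~= robot."""
    cam_c = cam_pts - cam_pts.mean(axis=0)
    rob_c = robot_pts - robot_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(cam_c.T @ rob_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = robot_pts.mean(axis=0) - R @ cam_pts.mean(axis=0)
    return R, t

# Hypothetical correspondences: where the camera saw a taught part (mm) and where
# the robot was told to pick the same part (mm, robot base frame).
cam_pts = np.array([[10.0, 20.0], [110.0, 22.0], [60.0, 120.0]])
robot_pts = np.array([[412.3, -55.1], [512.1, -51.9], [460.0, 46.0]])

R, t = fit_rigid_2d(cam_pts, robot_pts)
print("Rotation:\n", R, "\nTranslation:", t)
```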

FANUC has also wrapped its full iRVision program in a “bin picking” wrapper, which simplifies the setup for one of VGR’s most common and challenging tasks. “We project light stripes down into the bin or tote and take 16 pictures very quickly to guide the robot to the part,” explains Ed Roney, national account manager for FANUC. “We also have tools that model the tooling to avoid making contact with the bin, which becomes a bigger challenge as you go deeper into the bin.”
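As a rough illustration of the kind of geometric check a tooling model performs, the sketch below asks whether a simple cylindrical tool can reach a detected part on a straight vertical approach without touching the bin walls. All dimensions and candidate part positions are hypothetical, and real bin-picking software models the actual gripper, wrist, and approach path, which is why clearance gets harder deeper in the bin.

```python
# Simplified wall-clearance check for bin picking with a cylindrical tool model.
# Values are hypothetical; this is not FANUC's tooling-interference model.
import numpy as np

BIN_SIZE = np.array([0.60, 0.40, 0.30])    # inner length, width, depth in metres
TOOL_RADIUS = 0.035                        # metres, simplified cylindrical tool

def reachable_without_collision(part_xyz, tool_radius=TOOL_RADIUS, bin_size=BIN_SIZE):
    """True if a vertical descent to part_xyz keeps the cylinder off the bin walls.

    part_xyz is measured from the bin's inner corner: x, y across the opening,
    z as depth below the rim. A plain cylinder needs the same wall clearance at
    any depth; modelling an angled approach or the wrist above the tool is what
    makes the deeper picks fail first in practice.
    """
    x, y, z = part_xyz
    if not 0.0 <= z <= bin_size[2]:
        return False                       # not inside the bin at all
    return (tool_radius <= x <= bin_size[0] - tool_radius and
            tool_radius <= y <= bin_size[1] - tool_radius)

# Hypothetical part candidates returned by the 3D sensor.
candidates = [np.array([0.30, 0.20, 0.05]),   # middle of the bin, near the top
              np.array([0.02, 0.20, 0.25])]   # pressed against a wall, near the bottom

for part in candidates:
    verdict = "pickable" if reachable_without_collision(part) else "skip (collision risk)"
    print(part, "->", verdict)
```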

Adept Technology (Pleasanton, California) is leveraging additional computer power, USB 3.0, and Power over Ethernet (PoE) in its new SmartVision MX vision processor for vision guidance. “The move to Power over Ethernet eliminates most of the cabling and simplifies installation while allowing us to use up to eight cameras for robot guidance applications,” says Terry Hannon, chief business development and strategy officer.

While Adept and FANUC have both taken machine vision hardware development in-house, Yaskawa Motoman (Miamisburg, Ohio) has partnered with machine vision industry leader Cognex (Natick, Massachusetts). “In the past year, we moved to MotoSight, a private-label vision product from Cognex, for our 2D vision-guided robot applications, which means that every time Cognex improves one of their tools, it automatically works with our robots,” explains Erik Nieves, technology director at Yaskawa Motoman. “When it comes to 3D, we recently introduced a palletizing system that uses Universal Robotics’ Neocortex, a template-based software program that really simplifies part training and setting up a 3D application.”

Photo: Yaskawa Motoman features the new MH12 robot with MotoFit, an integrated force sensing solution, and a MotoSight™ 2D vision system.

Universal Robots (East Setauket, New York) has taken a similar path. “Our UR5 and UR10 let the vision experts like Cognex and DALSA do what they do best, namely machine vision,” says Ed Mullen, national sales manager at Universal Robots. “Any smart camera that has an Ethernet output can connect to our system.” Universal Robots’ products are easily moved, programmed, and re-tasked for any compatible application. “We started primarily in machine tending,” adds Mullen. “But our application base has exploded since 2009, driven by the needs of small to medium companies wanting to automate simple processes. We have more than 4,000 collaborative robots out there working in companies all around the world. We’re not trying to go head to head with traditional robots, which will always have more capability with their logic, speed, accuracy, and repeatability – although we’ve done a good job with repeatability. But there are a lot of industrial applications out there that haven’t been served by traditional robots that require safeguards. Our robots are perfectly safe to work in between two workers on a production line, and with the ability to plug our robot into a 120-V outlet and program it by moving the arm from one location to the next, we’re opening up new applications to vision-guided robotics.”

Photo: Yaskawa Motoman demonstrates a machine tending application with MotoRail, a MH50 II robot and MotoSight 2D vision system.
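The “any smart camera with an Ethernet output” model typically comes down to a small piece of glue code that reads a result string from the camera and turns it into a pick target. The sketch below shows that pattern in its simplest form; the IP address, port, and comma-separated message format are assumptions for illustration, not the actual protocol of any Cognex, DALSA, or Universal Robots product.

```python
# Hypothetical glue code between an Ethernet smart camera and a robot controller:
# read one ASCII "x,y,angle" result line over TCP and parse it as a pick target.
import socket

CAMERA_ADDR = ("192.168.0.10", 3000)   # assumed smart-camera result port

def read_part_pose(addr=CAMERA_ADDR, timeout=2.0):
    """Fetch one result line from the camera and parse it as (x_mm, y_mm, angle_deg)."""
    with socket.create_connection(addr, timeout=timeout) as sock:
        line = sock.makefile("r").readline().strip()   # e.g. "132.5,48.2,12.7"
    x_mm, y_mm, angle_deg = (float(v) for v in line.split(","))
    return x_mm, y_mm, angle_deg

if __name__ == "__main__":
    x, y, a = read_part_pose()
    # A real deployment would map this into robot coordinates (see the calibration
    # sketch above) and send a move command through the robot's own interface.
    print(f"Part at x={x} mm, y={y} mm, rotated {a} deg")
```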

Robot Collaboration Is Here Now
As the collaborative robot market takes off, new open-source tools may help expedite the development of this new robot class. “I’m very bullish on robot vision for mobile robots, mobile manipulation and some of the developments coming out of the ROS Industrial project,” says Yaskawa’s Nieves.

ROS, or the Robot Operating System, is open-source, hardware-independent robot programming and operation software that originated in research robotics rather than on the factory floor. ROS incorporates elements of OpenCV, the open-source machine vision library. A more recent effort, ROS Industrial (ROSindustrial.org), aims to bring that open-source work to industrial robot applications. Robot OEMs don’t have to worry about losing market share to ROS Industrial, however. “Major manufacturers the world over are very excited by ROS Industrial, but many are not comfortable running open-source software on their factory floor,” says Yaskawa’s Nieves. “But think of ROS as a rapid prototyping machine. We can use ROS to develop new software and functionality, and then turn that over to our core coding group and say, ‘Okay, this works. Now make it a product.’”
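In that rapid-prototyping spirit, a vision result can be dropped into a ROS system with very little code. The sketch below is a minimal rospy node that publishes a detected part pose on a topic a motion-planning node could consume; it assumes a ROS 1 installation, and the topic name and pose values are illustrative rather than part of any ROS Industrial package.

```python
# Minimal ROS 1 (rospy) node publishing a hypothetical detected part pose.
# Topic name, frame id, and pose values are illustrative assumptions.
import rospy
from geometry_msgs.msg import PoseStamped

def publish_demo_pose():
    rospy.init_node("vision_pose_publisher")
    pub = rospy.Publisher("/detected_part/pose", PoseStamped, queue_size=1)
    rate = rospy.Rate(10)                      # publish at 10 Hz
    while not rospy.is_shutdown():
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "camera_frame"
        msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = 0.12, -0.03, 0.45
        msg.pose.orientation.w = 1.0           # identity orientation
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    try:
        publish_demo_pose()
    except rospy.ROSInterruptException:
        pass
```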

In the last few years, Universal Robots, Yaskawa, Adept, and others have demonstrated the need for collaborative, mobile robots. As a proof of principle, Yaskawa recently demonstrated a dual-arm robot on a mobile base tending multiple CNC machines. “The mobile base would navigate to a CNC machine, communicate with the machine over wireless EtherNet/IP to open the door and control the chuck, while a Cognex camera would image a fiducial in the machine to make sure the robot was at the correct machine and in the right place to load a new part,” Nieves explains. “While a robot has repeatability in the sub-millimeter range, the mobile base was only good to around an inch. That’s where the vision comes in.”

Photo: Universal Robots’ UR5 robot arm has been integrated with the Flexvision image processing system at the German plant of automotive seating supplier Lear Corporation.
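The correction Nieves describes can be pictured as a small transform problem: the base parks within roughly an inch of its nominal spot, the camera measures where the known fiducial actually appears, and the difference between the measured and taught fiducial poses is applied to the taught machine-tending waypoints. The sketch below works this through in 2D (x, y, yaw); every number in it is hypothetical, and it is not Yaskawa's or Cognex's implementation.

```python
# Fiducial-based correction of a parked mobile base, sketched in 2D (x, y, yaw).
# All values are hypothetical.
import numpy as np

def pose_to_matrix(x, y, yaw):
    """2D rigid transform as a 3x3 homogeneous matrix."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Fiducial pose taught when the waypoints were recorded, in robot coordinates (m, rad).
T_taught = pose_to_matrix(0.800, 0.000, 0.0)
# Fiducial pose measured by the camera after the base parks this time.
T_seen = pose_to_matrix(0.818, -0.012, 0.02)

# Correction mapping the taught situation onto the situation the camera sees now.
T_correction = T_seen @ np.linalg.inv(T_taught)

# Apply it to a taught waypoint (e.g., the chuck load position), given as a
# homogeneous 2D point in robot coordinates.
waypoint_taught = np.array([0.650, 0.100, 1.0])
waypoint_corrected = T_correction @ waypoint_taught
print("Corrected waypoint:", waypoint_corrected[:2])
```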

Adept has been selling its Lynx Handler, a variation on the Lynx self-navigating autonomous intelligent vehicle (AIV), including a fleet of 40 that has been working in a 200-mm semiconductor fab for more than a year. “Our product has a 4-axis robot arm mounted on top of the AIV for moving SMIF (Standard Mechanical InterFace) pods around the fab,” Hannon says. “Older-generation fabs don’t have all the automation, like overhead transport systems, that the 300-mm fabs have. Right now, those pods are moved manually, which means a lot of people moving around, generating dust, occasionally putting a pod in the wrong machine, or dropping the pod, all of which lower yields and slow down the process. This frees up people for higher value-added jobs than transporting and placing pods.”

Like Yaskawa’s machine-tending application, the SMIF pod application requires sub-millimeter placement of the pod into the production equipment, which is where the machine vision system comes into play.

Safety is a big issue here. Today, each of these collaborative robots is power and force limited (PFL), which generally means it does not have the power to lift an object weighing more than 15 kg. By limiting the power of these robots, customers can be sure that they’re not putting human workers in harm’s way. The industry is actively working on safety standards for collaborative robots to help everyone understand and mitigate risks.