Reducing the Variables: Getting Accurate Motion From Inaccurate Robots
Blog post by Senior Application Engineer David McMillan. David has been in the robotics industry for more than 20 years, and is one of AMT’s go-to people for new or unusual technologies. For more information on AMT’s expertise in robotic applications, visit www.appliedmfg.com.
Indulge me, if you will, in a quick “Old Fart” story. When I got my first aerospace industry assignment, I was a highly experienced robotic application engineer with extensive experience in automotive industry production. I could honestly claim that I knew what I was doing.
I had no idea what I didn’t know. (The “unknown unknowns,” to steal a phrase.)
In automotive, repeatability is everything. The industry has evolved its production practices over decades to play to the strengths of the robots they use, and avoid the weaknesses. But aerospace runs on accuracy, and has no such historical relationship with robotics.
In order to discuss the topic intelligently, we first need to define our terms (as they relate to robotics):
Accuracy: The ability to arrive at any random point in space, from any random starting point, under any conditions, within a defined tolerance.
Repeatability (sometimes called precision): The ability to keep arriving at the same point in space, from the same starting point, under same/similar conditions, within a defined tolerance, even if that point is not accurate.
Robots are very good at doing the same thing the same way, every time, basically forever. Even if they’re doing it wrong, they’ll do it wrong the same way every time. This means that, historically, robot programming has consisted of tweaking programmed positions over multiple test runs until the robot arrives at the correct point in space. The position the robot thinks it’s at may not agree with the blueprints, but as long as the weld is in the right spot, who cares?
Industrial robots, historically, tend towards the top-left corner of the illustration in Figure 1. In examining the posted manufacturer specifications for the various robots I used over the next several years (as defined in ISO 9283), I found that the repeatability value would range from 50 to 150 µm, but the accuracy values for the same robots would range from 500 to 1500 µm. In my experience, this is a reasonable rule of thumb: for any given standard industrial 6-axis robot, the relationship between repeatability and accuracy will be roughly 10x -- one order of magnitude.
More recently, there has been a strong push to generate robot programs “offline,” meaning to use a simulation to create your programs, download them to the robots, and just hit GO. There are no messy, prolonged try-fail cycles in the production facility. This push is strongest from non-traditional customers who are new to using industrial robots in their production, with aerospace being a particularly strong example. It was in my first foray into aerospace, using robots made for automotive use cases, where the major differences between accuracy and repeatability in industrial robots first became a real issue for me (as opposed to a theoretical issue). And the waters were much, much deeper than I ever imagined.
I had one assignment that seemed simplicity itself: guide a countersinking cutter into a hole that had already been drilled, using a machine vision system. I needed to achieve accuracy, relative to the hole, of 50 µm (0.05 mm, or about 0.002”) or better. This was far tighter than anything I’d ever had to do before, but the vision system could measure down to less than 1 µm, which should have given me a luxurious margin. I would simply move, measure, repeat, until I hit the target.
Instead, I found that reaching that 50 µm tolerance became a matter of luck; the robot would keep trying until the vision system reported we were within tolerance, but it might take five tries… or five hundred. As I dug into the problem, I found that I was attempting to make corrections smaller than the “noise” in the robot’s motion control.
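The naive loop can be sketched in a few lines of Python. This is a minimal sketch, not the actual production code; `move_relative` and `measure_offset` are hypothetical stand-ins for the robot motion and vision interfaces, which the post does not name:

```python
def correct_until_in_tolerance(move_relative, measure_offset,
                               tol_um=50.0, max_tries=500):
    """Naive move-measure-correct loop (units: micrometers).

    move_relative(dx, dy): command a small relative robot motion.
    measure_offset(): vision-reported (dx, dy) error to the target.

    Once the commanded correction falls below the robot's lost motion,
    the actual motion becomes unpredictable, so convergence is a matter
    of luck -- this loop might take five tries, or five hundred.
    """
    for attempt in range(1, max_tries + 1):
        dx, dy = measure_offset()
        if (dx * dx + dy * dy) ** 0.5 <= tol_um:
            return attempt          # within tolerance
        move_relative(-dx, -dy)     # command the corrective move
    raise RuntimeError("never converged within max_tries")
```

With an ideal robot this converges in two iterations; the whole problem described below is that a real robot stops behaving ideally exactly when the corrections get small.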
Anyone who has driven a car whose steering needs work has an intuitive grasp of what “backlash” is. Industrial robots (despite being touted as using “zero backlash” drive mechanisms) have plenty of backlash, of multiple types. The formal term for the combination of all these factors is “lost motion.” This lost motion is not simple to model: it changes drastically depending on the robot’s position, orientation, payload, and even ambient temperature. This makes it very hard to predict or correct for mathematically.
In this case, as my robot approached the target, it had to keep making smaller and smaller corrections -- a sort of Zeno’s Paradox, if you will. Eventually, the corrections became smaller than the robot’s lost motion. The result was that, when commanded to make a tiny motion, the robot might not move at all, or move in some random direction.
The Search for a Solution
While trying to resolve this problem, I tried a simple test. I set the robot in a position over a target, and measured its location with the vision system. Then I moved the robot in 10 µm increments in a fixed direction, stopping to take a measurement after every move. What I discovered was enlightening: for the first few increments, the robot moved randomly (or not at all), but movement in a consistent direction would bring me to a point where the robot would reliably move in 10 µm increments, as verified by the vision system. It eventually became apparent that, after moving in a fixed direction for enough distance, the robot would “take up the slack” in all axes, and the robot’s motion would (at least for a short distance) become much more precise. But, the moment I tried moving in any other direction, all the “slack” of the lost motion would be back in play, until I moved far enough along the new direction to “take it up” again.
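That experiment can be sketched as a short probing routine. Again a hedged sketch, assuming single-axis motion and illustrative interface names (`move_relative`, `measure_position`), not the actual test code:

```python
def probe_lost_motion(move_relative, measure_position,
                      step_um=10.0, steps=30, tol_um=2.0):
    """Reproduce the slack take-up experiment: command equal steps in
    one fixed direction and record how far the robot *actually* moved
    each time, per the vision system.

    measure_position(): vision-reported position along the test
    direction, in micrometers.

    Returns (slack_estimate_um, deltas): the commanded distance it took
    before measured motion reliably matched the command, plus the raw
    per-step measurements.
    """
    deltas = []
    prev = measure_position()
    for _ in range(steps):
        move_relative(step_um)      # always the same direction
        now = measure_position()
        deltas.append(now - prev)
        prev = now
    # first index after which every measured step matches the command
    for i, d in enumerate(deltas):
        if all(abs(x - step_um) <= tol_um for x in deltas[i:]):
            return i * step_um, deltas
    return None, deltas             # slack never fully taken up
```

Run along one direction, the measured deltas start out near zero (or random) and then settle onto the commanded 10 µm; reverse direction, and the whole settling process starts over.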
Back to the Drawing Board!
So I needed to find a way to “take up the slack” reliably. For this application, we didn’t need to make the robot accurate, as much as we needed a way to keep all the lost motion consistent. If we could find a way to keep every axis “pulled” to one side or the other, instead of floating randomly around the dead band in the middle of the lost motion range, that would be enough.
The brainstorming session would probably be comical to an outside observer; options like attaching large springs to every axis, or mounting the robot on a diagonal wedge to “bias” gravity, were seriously proposed. There was even a suggestion to add a second robot, whose only job would be to pull on a bungee cord attached to the main robot’s end effector in a consistent direction (don’t laugh, it probably would have worked). But none of the ideas were very practical. And, fortunately, none of them were necessary.
Surprisingly Simple Solution?
Like most good engineering solutions, the final one was almost painfully simple: add a deliberate extra motion (well, two, technically) to each corrective motion, in a fixed direction. We called this the “anti-backlash” motion.
It worked like this: the Z-axis of our tool was aligned with the axis of the drill we were using. We made our final approach to the programmed target from a standoff position calculated along the Z-axis. Then, once we reached the programmed target position, we would take the vision measurement, which provided corrective values in Tool axes X, Y, Z, Rx, and Ry (Rz being obviously irrelevant, for a drill). But instead of simply moving by those corrective values, the program instead performed a “pull back” along the Z axis by several millimeters, enough to completely exceed any of the potential lost motion effects. This was the first step of the anti-backlash motion. The second step was to calculate and execute a motion that included the vision corrections and the exact inverse of the first anti-backlash motion.
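The two-step construction can be written down directly. This is a sketch under assumed conventions (tool-frame corrections ordered [X, Y, Z, Rx, Ry, Rz], millimeters and degrees); the actual controller code and frame definitions are not given in the post:

```python
import numpy as np

def anti_backlash_moves(correction, pullback_mm=5.0):
    """Build the two-step 'anti-backlash' motion described above.

    correction: vision correction in tool coordinates
                [X, Y, Z, Rx, Ry, Rz]; Rz is irrelevant for an
                axisymmetric tool like a drill.
    pullback_mm: retreat distance along tool Z, chosen large enough
                 to exceed all potential lost-motion effects.

    Returns (step1, step2): step1 pulls straight back along tool Z;
    step2 re-approaches along that same direction while folding in the
    correction and the exact inverse of step1, so every axis takes up
    its slack in a consistent direction.
    """
    correction = np.asarray(correction, dtype=float)
    step1 = np.array([0.0, 0.0, -pullback_mm, 0.0, 0.0, 0.0])
    step2 = correction - step1   # correction plus the inverse of step1
    return step1, step2
```

Note that step1 + step2 sums to exactly the commanded correction: the net displacement is unchanged, but the final approach always arrives from the same direction.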
In practice, it looked a bit comical (like a “drinking bird”), but it worked amazingly well. Our move-measure-correct iterations went from dozens to a reasonably consistent three (generally ranging between two and five). Experimenting with the magnitude of the anti-backlash offset quickly showed us where the minimum was -- below a certain point, the number of corrective iterations would sharply increase.
This means of overcoming lost motion is hardly a secret; the products from New River Kinematics that use metrology devices to guide robots (with frankly astounding accuracy) make use of a very similar approach. And any old-school machinist who ever had to work with a clapped-out manual lathe or mill learns how to compensate for the backlash in the hand cranks, often by deliberately making cuts from just one direction, when they need to squeeze precision out of an imprecise machine. It’s one of the most basic elements of overcoming lost motion effects in articulated robots, and probably a critical “toolbox item” for anyone who has to struggle with this problem.
Of course, this application was low-hanging fruit in many ways. Our target was a hole in a surface, easily visible to the robot’s vision, and achieving sufficient accuracy relative to the target was a simple matter of measuring and correcting until we eventually achieved tolerance. But what about more complicated situations? What if, for example, we needed to accurately drill a number of holes in a surface with no such easy targets? That… was part of a project which sent my team down yet another rabbit hole.
Navigating Between Landmarks (“Shouldn’t have taken that left turn at Albuquerque”)
In this case, the task was to drill several thousand holes, distributed over the entire surface of an airliner fuselage, with high accuracy.
Whenever anyone asks you to achieve a certain accuracy, one of your first questions should be “accuracy relative to what?” This is a lesson we learned rather painfully, but I’ll spare you the gory details.
In automotive, accuracy is achieved by the simple expedient of eliminating almost all variables -- the parts, and the tooling that holds them, are made to be even more repeatable than the robot. Uncontrolled effects, like thermal expansion, are generally negligible.
For large aerostructures, however, this does not hold true. The critical tolerances of aerostructure parts are not how repeatable the parts are, but how well they fit together. This might seem like a distinction without a difference, but it is not.
In addition, a very large structure like a fuselage is simply too big to make everything as repeatable as automotive parts are. Even if the parts were 100% repeatable and perfect to the CAD models, simple effects like gravity making the structure sag slightly between support points mean that you cannot simply export the CAD model locations to the robot and hit GO.
Instead, we relied on what we came to call “local relative accuracy.” Recall that my previous project example revealed that once lost motion was taken care of, the robot could execute corrective motions with a precision that was better than its spec repeatability… at least for short distances.
In the case of this large airliner fuselage, the “skin” panels of the aircraft had already been attached to the “bones” of the airframe -- by the time the fuselage reached the robotic work area, roughly 10% of the 20,000 or so fasteners had already been installed. And each “tack” fastener had been placed very carefully to meet the customer’s tolerance spec. So, we could finish the fuselage by simply “connecting the dots.”
Almost. Of course, it was more complicated than that.
As we carried out our R&D efforts, we discovered that with our backlash compensation techniques, the robot could maintain sufficient accuracy for some distance. However, this distance was measured in joint space (that is, the motion of the robot’s axes), not simply in Cartesian space. This meant that drilling two points only 20 mm apart, with very different end effector orientations, would give very poor accuracy, compared to two points the same distance apart with similar orientation. We also found certain “no go” zones, where certain robot axes were too close to an inflection point to maintain accuracy. For example, imagine the “shoulder” joint on most 6-axis robots -- when that arm link is vertical, gravity might “tip” it either forwards or backwards over center. That axis also usually has a counterbalance of some kind. This introduces a random factor that needs to be avoided.
Our solution was straightforward in concept, although it was rather complex in application. The basic idea was to divide the entire fuselage skin into a chessboard-style grid of work zones. Each zone was defined using several criteria: overall size, degree of flatness or curvature, and the availability of landmarks (the pre-existing tack fasteners).
The size and curvature were interdependent. The tighter the curve, the smaller the zone, and vice versa. If a zone included tight areas that required the robot to make extra contortions to reach, those tight areas might become their own individual zones. This meant that the zones were often oddly shaped, and overlapped in strange ways (the term “gerrymandering” was thrown around semi-ironically).
The landmark availability was the most critical factor. The landmarks had to be visible, accurately placed, and locatable by machine vision. For a zone to be viable, it had to have enough landmarks that met these criteria, and the landmarks had to sufficiently encompass the zone.
For zones that were simply “connecting the dots,” drilling holes on a straight line between two pre-existing fasteners, two landmarks could be enough (though more were always preferred for redundancy). But for a simple two-landmark zone like that to work, we could only drill holes on the line between those landmarks -- as we moved away from that line, accuracy would drop off dramatically.
For two-dimensional zones, technically three landmarks would have been sufficient (per grade-school geometry, three points define a plane the way two points define a line). But that would have created an “accuracy triangle” -- moving outside the triangle defined by the three landmarks would have resulted in a fast drop-off in accuracy. Similarly, if two points of the triangle were too close together, their accuracy zone in Cartesian space would look more like a line.
For our vaguely square work zones, we generally insisted on at least four good targets at the corners, preferably more. As a general rule, we could usually achieve accuracy inside the polygon defined by the landmarks, relative to those landmarks.
The trick to making this work was to, again, eliminate as many variables as possible. To accurately drill holes in a work zone, the robot would first move to each landmark, carry out a move-measure-correct cycle (with backlash compensation) until almost perfectly aligned with the landmark, and record the robot’s position at that moment. This would be repeated for all landmarks in the zone. Then a six-degree-of-freedom (6DOF) calculation would be carried out using the differences between where each landmark had been expected, and where it actually was. This resulted in a 6DOF transformation that could be applied to all the target points inside the zone.
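The per-zone fit is a classic rigid-transform problem. The sketch below uses the standard Kabsch/SVD method, which is an assumption on my part; the post does not specify the exact math the team used, and function names here are illustrative:

```python
import numpy as np

def fit_zone_transform(expected, measured):
    """Least-squares rigid (6DOF) transform mapping expected landmark
    positions to their measured positions (Kabsch/SVD method).

    expected, measured: (N, 3) arrays of landmark XYZ, N >= 3,
    ideally spread around the zone's perimeter.
    Returns (R, t) such that measured ~= expected @ R.T + t.
    """
    expected = np.asarray(expected, dtype=float)
    measured = np.asarray(measured, dtype=float)
    ce, cm = expected.mean(axis=0), measured.mean(axis=0)
    H = (expected - ce).T @ (measured - cm)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ ce
    return R, t

def apply_zone_transform(R, t, targets):
    """Shift every nominal drill target in the zone by the fitted
    transform before drilling."""
    return np.asarray(targets, dtype=float) @ R.T + t
```

The same transform is then applied to all nominal hole locations inside the zone, which is why accuracy holds up inside the polygon of landmarks and degrades outside it.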
To make this work, it was critical that every landmark be measured and every hole be drilled, in the same orientation of the end effector (recall what I said earlier about motion in joint space vs. Cartesian space). On a curved fuselage, this was obviously not 100% possible, which resulted in a tradeoff between the curvature and the zone size. Often, we would see zones that became rectangular, with the long axes parallel to the axis of curvature. As the process moved into fuselage areas with compound curvatures or tighter radii, the work zones would have to shrink to compensate.
Sometimes, we would encounter geometries where certain targets in the same zone required quite different end effector orientations -- they simply were not reachable otherwise. In situations like that, we would often split the same physical zone into separate virtual zones, carrying out vision measurements on the same targets from different orientations, chosen to match the orientations of the target set. We sometimes ended up with “virtual” zones that occupied the same physical zone, or overlapped other zones. The process of dividing an entire fuselage into workable zones was an involved process carried out in simulation environments (like Process Simulate or Catia V5) -- fully detailed 3D models of everything involved are critical.
Shelf Life (“Ask me for anything but time!”)
Now, so far I’ve talked about overcoming variables in space -- both position and orientation. And those are the first “speed bumps” anyone trying to squeeze accuracy out of 6-axis industrial robots will encounter. But there’s another, less obvious variable: time.
Put simply, the more time that elapses between when you measure your workpiece, and when you apply that measurement as a positional correction to your robot, the more opportunity there is for something to change, without being noticed. Now, hopefully, the controls will detect if someone has opened/closed retaining clamps, or if a part has been removed and replaced. The tooling should be robust enough that the workpiece will not slip in its grip without human intervention or some kind of mechanical collision. But at the levels of mechanical accuracy we’re talking about, there is another, more insidious factor that can push your tolerance limits: thermal expansion.
If modelling out the volumetric error of a robot is extremely difficult, modelling out the thermal expansion effects on the robot’s positional accuracy is… well, let’s just say I see no solution to this in the near future. Tests carried out by my team showed that even in an environment with good control of ambient temperature, the robot’s own servo waste heat would slowly propagate through the body elements of the arm over time. There was a pronounced lag, resulting in an S-curve when one plotted accuracy errors over time -- during the time the heat was “creeping” through the robot’s shell, the errors would increase, but once the thermal distribution stabilized, so would the accuracy errors.
Various means of dealing with this issue were tried, the most basic being adding ambient-temperature sensors to the automation to detect temperature swings large enough to require re-measurement. Adding thermal sensors to the robot was considered, but abandoned as being impractical. In the end, we resorted to the simplest solution: minimizing the time between our measurement cycle and our corrective application of the measurement results. So, for every zone that the fuselage had been divided into, the measurement cycle was carried out immediately before the actual drilling processes in that zone. And if the process cycle was interrupted for any reason, the measurement cycle would be repeated before the remainder of the process cycle could be resumed. This had a cost in cycle time, but it was fairly minimal, and the application valued accuracy above speed in any event.
Another option we explored, but did not need to implement in this case, was to put an expiration time on each measurement cycle, and forcibly interrupt the process cycle for a repeat of the measurement cycle if that time expired for any reason while the process cycle was in progress. In practice, our zones were generally small enough that drift over time would not be a factor without some other kind of process interruption, but some production processes may have other issues (human operator interactions, for example) that can unpredictably introduce delays long enough for the drift over time to become significant.
The Wrap Up (or TL;DR)
Bottom line: industrial robots are much better at repeatability than accuracy. But if you can reduce the number of variables involved in the robot’s positioning, you can “cheat” your way to greater effective accuracy.
It’s important to fully grasp all the different factors which may be influencing your accuracy, especially those whose effects are non-obvious or inconsistent.
In summary, I’ve tried to pass along some general lessons learned (the hard way) over the course of several robotic projects that required accuracy approaching, or in some cases beating, the robot’s inherent repeatability. It’s a very deep subject, and very counterintuitive to those of us accustomed to “classic” industrial robotics in sectors like automotive that have used robotic automation for decades. But with the proper interaction between product design and automation design, and a good grasp of what is achievable and what is required to achieve it, we can achieve aerospace tolerances with automotive robots.
I hope anyone else diving into these waters finds my recollections useful. If you have questions about a specific application, please contact us.