Industry Insights
Shrinking Drives Propel Motion Control Forward
POSTED 09/17/2012 | By: Kristin Lewotsky, Contributing Editor
Digital designs, shrinking processors, and more efficient switching amplifiers deliver drives boasting substantially improved performance in a fraction of the previous size.
When it comes to electronics, progress is not just a matter of performance but of delivering that performance in a smaller, lighter, cheaper device. It’s a trend that’s manifested in the motion arena, allowing OEMs to shrink their designs while delivering ever more powerful results. Drives have gotten smaller, lighter, and altogether more capable than they were even 10 or 15 years ago (see figure 1). Let's take a closer look at the state of the art to discover what it means for machine builders.
Digital drives
Three factors primarily account for the size reduction: the change to digital drives with integrated processors, size reductions in the processors themselves, and the use of high-efficiency PWM switching amplifiers.
The shift from analog to digital drives is an important contributing factor to the size reduction, for multiple reasons. First, digital drives replace the resistors, capacitors, and op amps of analog designs with fewer, more highly functional elements. “In the old days, you had complete circuit boards filled with hundreds of components,” says John Chandler, vice president of sales, North America, for Technosoft Inc. (Bevaix, Switzerland). “Today, you have one component that can do it all in software.” Implementing functionality in software instead of hardware keeps the physical package small while capabilities grow. Power debugging techniques allow developers to optimize code for efficiency.
“Most digital drives have some sort of an FPGA or, in our case, a DSP,” agrees John McLaughlin, manager, North America, for Elmo Motion Control (Westford, Massachusetts). “A DSP allows the manufacturer to utilize that device as a core for programming and doing all the motion.”
Not only are digital drives cutting size by consolidating many discrete components into fewer digital devices, but the processors themselves are shrinking as part of the semiconductor industry’s march along the Moore’s Law curve. With each generation, processors become more powerful, with more peripherals and degrees of design freedom, while getting physically smaller. As a result, drives can accomplish more in a far more compact form factor (see figure 2).
The third factor is efficiency. Some 30 years ago, most amplifiers were linear, and strikingly inefficient. More recently, amplifier manufacturers have moved to pulse-width-modulated (PWM) drives, which use MOSFETs for switching. Generally speaking, a PWM drive converts a square-wave voltage to current, adjusting the duty cycle and frequency of the square wave to deliver the current required to run the motor as desired. Heat dissipation, meanwhile, scales with surface area. That means that all things being equal, a smaller device will run hotter than a larger one. Given that heat tends to degrade the lifetime of electronic components, heat becomes a limiting factor in shrinking drive size. That’s where new developments in semiconductor design come in.
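To make the duty-cycle relationship concrete, the short sketch below computes the duty cycle needed to push a target average current through a resistive winding. It is a minimal illustration only: the bus voltage, winding resistance, and current values are assumptions, and a real drive would also account for inductance, back-EMF, and current feedback.

```python
# Minimal sketch of the duty-cycle relationship described above.
# All names and numbers are illustrative assumptions, not values from
# any particular drive: a PWM stage across a purely resistive winding,
# ignoring inductance, back-EMF, dead time, and switching dynamics.

def duty_cycle_for_current(target_current_a, bus_voltage_v, winding_resistance_ohm):
    """Return the PWM duty cycle (0..1) that yields the target average current."""
    # Average applied voltage is duty * bus voltage; average current is
    # that voltage divided by the winding resistance.
    required_voltage = target_current_a * winding_resistance_ohm
    duty = required_voltage / bus_voltage_v
    return min(max(duty, 0.0), 1.0)  # clamp to the physically possible range

# Example: 2 A through a 1.5-ohm winding from a 24 V bus.
print(duty_cycle_for_current(2.0, 24.0, 1.5))  # -> 0.125, i.e. 12.5% duty
```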
Modern semiconductors are able to run hotter than before, primarily as a result of better processing and packaging. Better heat tolerance allows designers to build smaller devices. That’s only part of the story, however. In terms of reducing size, increasing efficiency pays off in a big way. Consider a 100 W amplifier operating at 95% efficiency, which means it dissipates 5 W of power as heat. “Say we want to make an amplifier half the size and keep everything else like the operating temperature the same,” says Chandler. “If we can increase efficiency to 97.5%, we’re only going to lose 2.5 W as heat, so we can cut the size of the amplifier by half.”
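Chandler’s arithmetic is easy to reproduce. The sketch below simply restates it, treating the 100 W figure as the power handled by the amplifier, as the quote implies; packaging it as a function is the only addition.

```python
# Worked version of the efficiency example quoted above.

def heat_loss_w(power_w, efficiency):
    """Power dissipated as heat for a given handled power and efficiency (0..1)."""
    return power_w * (1.0 - efficiency)

baseline = heat_loss_w(100.0, 0.95)   # 5.0 W of heat at 95% efficiency
improved = heat_loss_w(100.0, 0.975)  # 2.5 W of heat at 97.5% efficiency

# If allowable dissipation scales with device area (the argument above),
# halving the heat loss lets the amplifier shrink to roughly half the size
# at the same operating temperature.
print(improved / baseline)  # -> 0.5
```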
MOSFETs generally have two loss mechanisms: on-state conduction loss and switching loss. We can think of the device as a switch that retains a small nominal resistance even when fully on. This is reflected by a quantity known as on-state resistance, or RDS(on), frequently at the milliohm level; current flowing through that resistance dissipates power as heat.
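For a sense of scale, conduction loss follows directly from RDS(on) as I² · RDS(on). The values in the sketch below are illustrative assumptions, not figures from any particular device.

```python
# Conduction-loss estimate from the on-state resistance discussed above.
# The 10-milliohm RDS(on) and 10 A current are illustrative values only.

def conduction_loss_w(current_a, rds_on_ohm):
    """I^2 * R loss while the MOSFET is fully on."""
    return current_a ** 2 * rds_on_ohm

print(conduction_loss_w(10.0, 0.010))  # 10 A through 10 milliohms -> 1.0 W
```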
Switching losses are introduced by the switching process itself. For a square-wave voltage producing a varying current, we can write power as P = IV = I²R, where I is current, V is voltage, and R is resistance. When the device switches, with the voltage going from zero to high, for example, the current simultaneously drops from some nominal level down to zero over the same time span; the opposite holds as well. That means that at the extremes, we have a lot of voltage but no current, and hence no power, or a lot of current but no voltage. The problem arises during the transition, when both voltage and current are nonzero. Now we have a lot of power, and a lot of power loss. “If you multiply voltage times current, all of a sudden you get some really high power,” says Chandler. “The reason the device doesn’t explode is because the transition between those two states happens quickly, and in fact devices are switching faster and faster.” The faster the switch time, the greater the efficiency.
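A common first-order way to see why faster transitions help is to approximate the energy lost in each switching event as the overlap of voltage and current during the transition, roughly a triangle in time. The sketch below uses that textbook approximation; the formula and all numeric values are assumptions for illustration, not figures from the article.

```python
# First-order switching-loss estimate: during each transition, voltage and
# current overlap, and the lost energy is roughly the area of that overlap
# (a standard triangular approximation). All numbers here are illustrative.

def switching_loss_w(bus_voltage_v, current_a, transition_time_s, switching_freq_hz):
    """Approximate loss from one turn-on plus one turn-off per PWM cycle."""
    energy_per_event = 0.5 * bus_voltage_v * current_a * transition_time_s
    return 2 * energy_per_event * switching_freq_hz

# 48 V bus, 10 A, 20 kHz PWM:
slow = switching_loss_w(48.0, 10.0, 50e-9, 20e3)  # 50 ns transitions -> ~0.48 W
fast = switching_loss_w(48.0, 10.0, 25e-9, 20e3)  # 25 ns transitions -> ~0.24 W
# Halving the transition time halves the switching loss, which is why faster
# devices translate directly into higher efficiency.
```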
Greater efficiency pays off in more than one way. Lower power loss means less heat to dissipate, which reduces demand for thermal management components like heat sinks and fans. That, in turn, yields a smaller, lighter, and cheaper device. “You start with the heat sink and design the product in such a way that you produce less heat,” says McLaughlin. “It’s how you deal with the power and how you deal with the efficiencies or inefficiencies of the electronics.”
Designing with the new drives
Distributed control architectures simplify design and ease hardware management. Depending on the application, though, working with single-axis, high-efficiency components may be a better choice than their multi-axis counterparts. Single-axis modules designed with a common pinout give customers the ability to easily integrate drives with varying capabilities. “A common pin-out platform is a big advantage for someone who makes different machines but has the same kind of control boards,” says Karl Meier, marketing manager at Advanced Motion Controls (Camarillo, California). “They can save cost across many different platforms. That’s one advantage of having individual modules as opposed to having one platform all together. It exposes us to many more applications and it helps people actually create products and market opportunities that weren’t even possible before.”
He sees the new, higher-efficiency drives as supporting another trend: a move toward portable equipment. “The market is shifting away from traditional factory floor stuff to more mobile, portable equipment,” he says. “Because small drives have the ability to deliver a high-power-density solution, they are primed for portable power solutions because of efficiencies, because of the size of the equipment they’re going into. Most market studies have totally ignored that side of the servo motion control business.”
Today's more compact, high-efficiency drives allow machine builders and OEMs to build more effective solutions with smaller footprints. By using the proper levels of integration, designers can bring benefits to their customers while speeding the manufacturing process and cutting cost. That makes compact, high-efficiency smart drives a, well, pretty smart idea.