Industry Insights
Motion Control Builds Semiconductors, Part II: Assembly
POSTED 03/04/2008 | By: Kristin Lewotsky, Contributing Editor
Once the integrated circuits are fabricated and the silicon wafers diced and packaged, the job of building solid state electronics is just beginning. For those chips to be useful, they have to be assembled on circuit boards, aided by an automated process known as pick-and-place.
Pick-and-place operations can be divided into three tasks: feeding the parts to the machine, orienting those parts and installing them on the circuit board. Small parts, such as ICs, are generally fed to the pick-and-place assembly machine on tape-and-reel feeders that present the parts one at a time, held on a tape in a fixed orientation. Larger parts with significant three-dimensional profiles are picked up from trays or fed by vibratory bowl feeders. A vibratory bowl feeder consists of a large bowl with a spiral ramp up the side. As the bowl jiggles, the parts work their way singly up the ramp, where they’re collected by the arm performing the pick-and-place.
Next, the parts must be moved into the correct orientation for installation. Tape-and-reel feeders may present the parts already oriented, but that is less the case for tray-loaded items or those fed by vibratory bowls. This is where machine vision can help.
Vision and Motion
“The components are on these reels or in these trays but there’s no precise registration of their placement within that XY space,” says Ron Rekowski, director of product marketing, Laser and Medical Group, Aerotech Inc. (Pittsburgh, Pennsylvania). “You use a camera to acquire an image of the part to figure out what its orientation is and then make adjustments.”
It’s not simply a matter of turning the part in a different direction, but sometimes even of adjusting to a different location. “You have a vision sensor that takes a picture of what it is you're picking up and you get a command to move to some ideal position where you think the part is. While the actuator is moving there, you're acquiring where the part actually is and then modifying the endpoints of your motion command on the fly,” he says. An approach like this can take some of the hand work out of assembly, for example the loading of trays, which are often filled manually from bags of parts.
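The endpoint correction Rekowski describes can be sketched in a few lines. This is an illustrative toy, not any vendor's API: the nominal position, the measured offsets and the update loop are all invented for the example.

```python
# Hypothetical sketch of modifying a motion endpoint on the fly as new
# vision measurements arrive; all names and numbers are illustrative.

def corrected_target(nominal_xy, offset_xy):
    """Blend the commanded endpoint with the latest vision measurement."""
    return (nominal_xy[0] + offset_xy[0], nominal_xy[1] + offset_xy[1])

# Nominal pick position from the feeder layout, in mm.
nominal = (120.0, 45.0)

# While the actuator is in motion, each new camera frame refines the
# measured part offset; the controller re-targets on every frame.
for frame_offset in [(0.9, -0.4), (0.85, -0.38), (0.84, -0.37)]:
    target = corrected_target(nominal, frame_offset)

print(target)  # the final endpoint reflects the last measured offset
```

The key point is that the move command is not frozen when motion begins; each camera frame updates the destination while the axis is still traveling.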
“There is an opportunity to combine sensing with motion control to make systems that don’t have to rely on fixtures to locate everything and don't have to rely on trays to feed parts,” says Brian Carlisle, president of Precise Automation Inc. (Morgan Hill, California). “They can build one kind of product one day and a different kind of product a day later.”
In parts feeding, for example, machine vision can dramatically simplify changeover. Vibratory bowl feeders must be custom designed for each application, so switching from assembling board A to assembling board B can mean swapping out a feeder that’s 6 in. to 3 ft in diameter, storing it, and installing a new one. At a time when flexibility is the mantra of manufacturing, an alternative is needed.
“Something I think is important is the idea of a flexible part feeder, which would use a combination of machine vision and motion control to separate out the 3-D parts that might be in a bag,” Carlisle says. He calls the approach visual servoing. “You could drop a bag of parts into a recirculatory conveyor that would jiggle and separate them, use machine vision to locate them and determine their orientation, and then use a robot to pick them up.” One vision-enabled feeder could accommodate a variety of different parts, making the switch from board A to board B in seconds rather than minutes or hours.
Visual servoing could also help with assembly, systematically guiding two parts into alignment or assisting with more complex processes such as laser welding. “The benefit is that you don’t have to have a super accurate machine, you just have to have a machine that has pretty good resolution,” Carlisle says. There’s no need for granite tables and air bearings, he suggests. If you have a proper vision system, a lower-cost design can do the job.
That “if,” of course, means among other things a camera with sufficient depth of field to keep both parts in focus. Lens distortion and parallax in the objective can compromise off-axis imaging, so optical design is key. Proper lighting is an absolute must -- clear, consistent imaging is impossible without it.
Another, less obvious pitfall is that of communications. If a high-throughput system is guiding parts into place, you need sufficiently fast communications to support it. “If people don't understand or think about it when they're implementing a vision system, [communication delays] can create problems, especially if they're trying to go fast,” Carlisle says. “The way Ethernet stacks and PCs and PLCs work, there are these queues and delays in the message processing. You wind up with these 100 ms to 200 ms delays, which is okay if all you're trying to do is locate a part, but not if you're trying to do real-time guidance.”
Multivendor systems, in particular, can introduce delays without careful attention to design. Being aware of the different flavors of Ethernet -- standard office networking versus the deterministic, real-time industrial variants -- is important for best results.
Putting It Together
Once the parts have been picked up and oriented, they need to be transported to the designated position on the board, rapidly, accurately and without overshoot or ringing. High-end pick-and-place machines tend to use direct-drive linear motors, which eliminate the backlash and errors that can arise with some mechanical actuators. Smart components, such as intelligent drives, also play important roles.
“It makes for good division of labor,” says Rekowski. “If you have a processor per axis on your intelligent drives, then that processor is only concerned with the overhead associated with operating one axis. You don't get a degradation in system performance when you go from a one-axis system to a two-axis or three-axis system, as opposed to a centralized control system.”
The distributed approach doesn’t necessarily rule out a centralized controller to perform high-level coordination between axes, of course. In such a scheme, low-level trajectory commands take place on the drive while tasks like path planning and program execution occur on a separate processor. Here, again, communication becomes key.
“The most common form of feedback in these types of devices is usually a 20-um-pitch optical encoder,” Rekowski says. “Obviously 20 um is not sufficient for the pick-and-place applications, so you have to get a device that converts the feedback to a micron- or submicron-level resolution outside of your controller, or you have to take the sine and cosine analog feedback and interpolate that down to 0.1-um or 10-nm resolution inside the controller.”
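As a rough illustration of the interpolation Rekowski describes, the phase of the sine and cosine signals within one 20-um cycle can be recovered with an arctangent, subdividing the pitch electronically. The numbers below are assumptions for the sketch, not measured encoder data.

```python
import math

# Illustrative sketch of sine/cosine encoder interpolation. A 20-um-pitch
# encoder outputs one sine/cosine cycle per 20 um of travel; the phase
# angle within the cycle subdivides that pitch.

PITCH_UM = 20.0  # signal period of the encoder, in micrometers

def interpolated_position_um(cycle_count, sin_v, cos_v):
    """Whole cycles plus the fractional phase recovered with atan2."""
    phase = math.atan2(sin_v, cos_v) % (2 * math.pi)  # 0 .. 2*pi
    return (cycle_count + phase / (2 * math.pi)) * PITCH_UM

# A quarter of the way into the cycle after 5 full cycles: 90 deg of phase.
pos = interpolated_position_um(5, math.sin(math.pi / 2), math.cos(math.pi / 2))
print(pos)  # ~105.0 um: 5 full cycles (100 um) plus a quarter pitch (5 um)
```

Resolving the phase to one part in 200 turns the 20-um pitch into the 0.1-um feedback the controller needs; finer interpolation reaches the 10-nm level.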
Those processing demands mean that the data rate of the devices is important. “If you're going to position at 1 m/s or 1.5 m/s with 0.1-um resolution and you are using external interpolation electronics, you’re going to need to be able to support 10-15 MHz data rates in your control system,” he adds.
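The arithmetic behind those figures is simple: the feedback count rate is the traverse velocity divided by the feedback resolution. A quick check of the numbers in the quote:

```python
# The controller must process position counts at velocity / resolution.

def count_rate_hz(velocity_m_per_s, resolution_m):
    return velocity_m_per_s / resolution_m

for v in (1.0, 1.5):
    rate = count_rate_hz(v, 0.1e-6)  # 0.1-um feedback resolution
    print(f"{v} m/s -> {rate / 1e6:.0f} MHz")
```

At 1 m/s the control system sees 10 million counts per second, and at 1.5 m/s it sees 15 million, matching the 10-15 MHz range Rekowski cites.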
The positioning requirements vary with the size of the components. For larger components, throughput matters more than accuracy; very small components impose stringent accuracy and repeatability requirements. Ultimately, it all comes down to the throughput of the machine.
“You’ve got to design for sufficient servo bandwidth to be able to close the loop fast enough to get the maximum performance out of your mechanical actuator,” says Rekowski. “You also have to look at it from a complete system level performance -- you have to design a control system that can either adequately compensate for the shortcomings of your mechanics or adequately draw all the capability out of that mechanical structure that you're designing.”
Taking Control
Part of the optimization process involves developing a control system that achieves fast move and settle times. From a design standpoint, that means ensuring adequate CPU resources or sufficient DSP muscle in the drive, then working with sophisticated techniques ranging from feed-forward algorithms to observer control.
The interconnected masses or inertias in a given system create resonances. “When you look at settling time, you're usually limited by your machine resonance,” Rekowski says. “You’re going to see some oscillations as this thing settles in that have to do with the rigidity of the system.” Such behavior requires a control approach that mixes algorithms and feedback -- observer control, or adaptive learning, as Rekowski characterizes it. “We run this acceleration profile, whatever it may be. We observe the system response and look at the position and how it settles, and then use that information to modify the acceleration or velocity profile slightly.”
It’s an iterative process that can dramatically minimize overshoot and ringing. Ultimately the command process becomes tailored to the characteristics of a specific machine.
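That run-observe-adjust loop can be caricatured in a few lines. The plant stand-in and the gains below are invented purely to show the iteration converging; a real machine's response would come from its actual position feedback, not a formula.

```python
# Toy sketch of adaptive-learning settling: run a move, measure the
# residual settling error, trim the feed-forward for the next run.

def run_move(feedforward):
    """Stand-in for the real machine: the residual error shrinks as the
    feed-forward approaches a (hidden) ideal value of 1.0."""
    ideal = 1.0
    return (feedforward - ideal) * 0.8  # observed settling error

feedforward = 0.5      # initial guess at the command correction
learning_gain = 0.9    # how aggressively each run's error is applied

for _ in range(20):
    error = run_move(feedforward)
    feedforward -= learning_gain * error  # adapt the profile for next move

print(round(feedforward, 4))  # converges toward the ideal value, 1.0
```

Each pass shrinks the observed error, so after a handful of moves the command profile is effectively tuned to that specific machine's dynamics.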
The market places continually increasing demands for speed and processing power on the electronics industry. Fabrication and assembly become ever more challenging but with the continual advances in motion control, manufacturers have the tools to keep pace.