Understanding Motion Control Networks
| By: Kristin Lewotsky, Contributing Editor
There was a time when automation took place on a strictly point-to-point basis, with each drive wired directly to the controller. The long runs of cabling cost money, took time to install, increased noise, and failed frequently - or worse, produced maddening intermittent faults. Enter the industrial fieldbus, which permitted devices to communicate digitally with one another over a network. The change dramatically simplified machine integration, use, and maintenance. Instead of crawling inside a machine to diagnose a drive error, OEMs and maintenance staff could simply plug into any port on the network to access everything. A bevy of protocols has been established, offering a variety of attributes for motion-control applications. Let’s consider a few.
The term fieldbus is a concatenation of field device output over an electrical bus. Early devices linked machine programmable logic controllers (PLCs) with the various components. The early versions were a huge advance but were still limited in terms of bandwidth, network size, number of nodes, and the amount of data that could be exchanged. Then Ethernet emerged, which presented both benefits and challenges for industrial networking.
On the plus side, industrial Ethernet operated orders of magnitude faster than the early fieldbuses. Speeds went from a few kilobits per second to hundreds of megabits or even gigabits per second. The amount of data that could be transferred rose from a few bytes to levels high enough to support big data analytics. It also held out the promise of piggybacking on improvements and price cuts in standard Ethernet physical media, switches, etc.
The problem is that standard Ethernet is not deterministic enough for motion control. To understand why, we should start with the Open Systems Interconnection (OSI) structure.
Modeling communications by layers
The OSI model presents network communications as a layered architecture as follows:
Layer 1: the physical layer, e.g., RS 232, RS 485, IEEE 802.3 (Ethernet), CAN bus, IEEE 802.11 (wireless Ethernet), etc.
Layer 2: the data link layer, which governs node-to-node communications
Layer 3: the network layer, which enables messages to travel from node to node across the network (e.g., IP)
Layer 4: the transport layer, which packs up the data to send to the next layer (e.g., transmission control protocol (TCP), user datagram protocol (UDP), etc.)
Layer 5: the session layer, which manages communication sessions to exchange data
Layer 6: the presentation layer, which involves data translation between the network and the application
Layer 7: the application layer, where many industrial Ethernet protocols reside
You can think of the model in action as a set of Russian nesting dolls. The data from a device starts at the application layer, with additional routing information like headers being added at each layer until by the time it reaches the physical layer, the transmission carries not just data but full information about where it came from, where it should go, and how it should be handed off en route. This information enables it to be sent to the proper destination and unpacked layer by layer until the receiving device is able to harvest the information. In this way, a PC, for example, can send a job to a printer.
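The nesting-doll behavior can be sketched in a few lines of Python. This is purely illustrative: the layer names are real OSI layers, but the bracketed header strings are stand-ins for the binary headers each layer actually prepends.

```python
# Illustrative sketch of OSI-style encapsulation: each layer wraps the payload
# with its own header on the way down the stack, and the receiver strips the
# headers in reverse order on the way back up.
LAYERS = ["transport", "network", "data-link", "physical"]

def encapsulate(payload: str) -> str:
    frame = payload
    for layer in LAYERS:                      # data descends the stack
        frame = f"[{layer}-hdr]{frame}"
    return frame

def decapsulate(frame: str) -> str:
    for layer in reversed(LAYERS):            # receiver peels headers outermost-first
        prefix = f"[{layer}-hdr]"
        assert frame.startswith(prefix), f"missing {layer} header"
        frame = frame[len(prefix):]
    return frame

wire = encapsulate("print job")
assert wire.startswith("[physical-hdr]")      # outermost header is the physical layer's
assert decapsulate(wire) == "print job"       # receiver recovers the original data
```

Each wrap adds the routing and hand-off information the next layer down needs; the receiver recovers the original payload only after every layer has removed its own header.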
In theory, a PLC ought to be able to use the same technique to harvest data from an encoder or send commands to a drive. Unfortunately, it’s not quite that easy.
The Ethernet protocol (IEEE 802.3) is defined for layer 1 and layer 2 of the OSI model. Although many people associate Ethernet with TCP/IP communications, they are not actually part of the Ethernet standard. They are layer 3 and layer 4 protocols, as discussed above; moreover, they are not the only solutions to perform these tasks, just the most widely known.
TCP/IP over Ethernet works well for the Internet but presents problems for highly synchronized motion. For starters, standard Ethernet is built on the premise of best-effort delivery: each node on the network has the right to transmit at any time. To prevent packets from interfering on the shared medium, the standard includes carrier sense multiple access with collision detection (CSMA/CD). If two nodes attempt to send a packet simultaneously, the collision is detected and each node backs off for a random interval before retrying. This may add a few hundred milliseconds of latency to the transmission. That’s not a problem when it comes to downloading cat videos on YouTube, but for highly synchronized motion in a 300-bottle-per-minute capping machine, it could spell disaster.
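The random back-off at the heart of CSMA/CD can be sketched directly. The truncated binary exponential back-off below follows the IEEE 802.3 rule (wait a random number of slot times in the range 0 to 2^k − 1, with k capped at 10); the 51.2 µs slot time is the classic 10 Mbps figure and is used here only for illustration.

```python
import random

def backoff_delay_us(attempt: int, slot_time_us: float = 51.2) -> float:
    """Delay before retransmission after the given collision attempt
    (truncated binary exponential back-off, per IEEE 802.3)."""
    k = min(attempt, 10)                   # back-off window is capped at 2^10 slots
    slots = random.randint(0, 2**k - 1)    # random wait: the source of the jitter
    return slots * slot_time_us

# Successive collisions widen the window, so delivery time grows unpredictable.
delays = [backoff_delay_us(n) for n in range(1, 6)]
```

Because the wait is random, two otherwise identical transmissions can arrive at very different times - exactly the kind of jitter that deterministic motion protocols are designed to eliminate.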
Variation in message delivery time is known as jitter. For hard-real-time applications, jitter should be below 1 µs. To achieve this level of determinism, the industry has developed a number of different approaches to Ethernet.
The Controller Area Network (CAN) is a multi-master serial bus system with multi-drop capabilities. First developed by Bosch to link controllers in automotive applications, the CAN fieldbus was released to CAN in Automation (CiA) and standardized as ISO 11898. The CAN fieldbus (now called Classical CAN) offers data rates of up to 1 Mbps, nominally over distances as long as 40 meters (actual implementations would be shorter). Cutting the data rate to 125 kbps stretches out the length to 500 meters. The data field is restricted to 8 bytes. The standard primarily covers layers 1 and 2 (the physical layer and the data-link layer) in the OSI model.
CAN is an event-driven rather than clock-driven protocol, so there is no specified cycle time. Bit-level jitter is ±1 CAN time quantum (network time interval). Message-level jitter is the maximum length of the message minus one bit. The number of nodes the network supports varies with device and network topology. Practically speaking, a linear topology can easily support 64 nodes; with careful design, it can go higher.
More recently, the CiA upgraded the original standard to create the CAN flexible data-rate (CAN FD) protocol. The data-link layer now supports data rates faster than 1 Mbps and a 64-byte data field. When only a single node is transmitting, there is no need for synchronization between nodes, which enables a temporary speed boost. The trade-off is that nodes need to be resynchronized before transmission of the acknowledgment (ACK) slot bit.
To add a higher level of connectivity, the CiA developed CANopen. The CANopen protocol occupies layer 7 of the OSI model, leveraging the CAN datalink specification for data transport. It is a hybrid master-slave and peer-to-peer architecture in which any node on the network can communicate with and control other nodes, while itself acting as a slave to a different master. Each node in the network can be assigned a priority so that in the event of data collision, the most important node prevails.
CANopen is based on an object-oriented data model that features standardized communication objects (COBs). Each CANopen device incorporates three elements: a protocol stack to handle CAN communications over the network, application software for internal control and interfacing to process hardware, and the CANopen object dictionary. The object dictionary defines the datatypes it uses and contains all relevant data about the device, including application-oriented parameters and communications parameters.
The object dictionary does not simply contain static values. Entries can be written to the object dictionary of a device to tell it to turn on, for example. Conversely, the object dictionary can be updated with the latest operating parameters.
To access the object dictionary, CANopen uses a pair of protocols from its software stack: service-data objects (SDOs) and process-data objects (PDOs). An SDO enables other nodes to read or write data to a single node at a time. Although it can be used for dynamic data, it is best suited to configuration. For reading and writing high-priority control and status data, PDOs provide a better solution. Other components in the protocol stack include network management tools, error-control functions, and special functions such as synchronization and time stamping.
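A toy model of a CANopen node can make the object-dictionary idea concrete. The 16-bit index/8-bit sub-index addressing is genuine CANopen; the specific entries and values below are illustrative (0x6040 is the controlword defined by the CiA 402 drive profile), and real devices describe their dictionaries in EDS files.

```python
class CanOpenNode:
    """Toy CANopen node: an object dictionary keyed by (index, sub-index)."""
    def __init__(self):
        self.od = {
            (0x1000, 0x00): 0x00020192,  # device type (illustrative value)
            (0x6040, 0x00): 0x0000,      # controlword (CiA 402 drive profile)
        }

    def sdo_read(self, index, sub):
        # SDO: confirmed access to one entry of one node at a time
        return self.od[(index, sub)]

    def sdo_write(self, index, sub, value):
        self.od[(index, sub)] = value

drive = CanOpenNode()
drive.sdo_write(0x6040, 0x00, 0x000F)    # e.g., command the drive to enable
assert drive.sdo_read(0x6040, 0x00) == 0x000F
```

PDOs, by contrast, map selected dictionary entries into small unconfirmed frames exchanged cyclically; this sketch models only the SDO path.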
CANopen operates at data rates as high as 1 Mbps over a nominal bus length of up to 25 meters. Dropping the speed to 10 kbps stretches length to 5000 meters. CANopen supports up to 127 nodes; if each node becomes a master, the total number of nodes possible is 10,000.
CC-Link is a fieldbus developed by Mitsubishi Electric in 1997 and released to the CC-Link Partner Association (CLPA) in 2000. It is based on a modified version of the RS 485 physical layer and can connect up to 64 nodes on a network over a bus as long as 100 meters. It delivers a top speed of 10 Mbps and supports cycle times as low as 10 ms.
In 2007, CLPA released a deterministic industrial Ethernet protocol called CC-Link IE, with a synchronous motion variant called CC-Link IE Field Motion. Based on a full-duplex copper or fiber bus, CC-Link IE Field Motion delivers synchronous communication over a master-slave architecture. CC-Link IE Field is a layer 7 protocol. Although it uses Gigabit Ethernet for layer 1 and layer 2, it does not use TCP/IP or UDP/IP for the layer 3 and layer 4 tasks. To prevent data collisions, it uses token passing, in which the master sends the “token” to each node serially, in a user-defined order. Only the node possessing the token at any one time can put data onto the network.
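Token passing is straightforward to sketch. In the toy cycle below, the node names and data are hypothetical; the point is that only the token holder transmits, so traffic order on the bus is fixed by the master's list and collisions cannot occur.

```python
def token_cycle(token_order, pending):
    """Run one cycle: the master hands the token to each node in turn,
    and a node transmits only while it holds the token."""
    bus_traffic = []
    for node in token_order:
        data = pending.get(node)
        if data is not None:
            bus_traffic.append((node, data))  # deterministic, collision-free
    return bus_traffic

cycle = token_cycle(
    token_order=["drive1", "io1", "drive2"],   # user-defined token order
    pending={"drive2": "pos=120", "drive1": "pos=45"},
)
assert cycle == [("drive1", "pos=45"), ("drive2", "pos=120")]
```

Because the order is user-defined and every transmission window is known in advance, the cycle time is predictable - the property that makes the approach suitable for synchronized motion.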
CC-Link IE Field Motion supports hard-real-time motion with cyclic communication over dedicated bandwidth. It separately allocates bandwidth for acyclic transmission (“transient” messaging) of on-demand data.
As the physical layer suggests, the protocol operates at a maximum data rate of 1 Gbps. It can connect up to 254 nodes per network with cable runs of up to 100 meters over copper and 550 meters over fiber. It is compatible with star, line, mixed star and line, and ring networks.
Common Industrial Protocol (CIP)
Originally developed by Allen-Bradley, now Rockwell Automation, the Common Industrial Protocol (CIP) is a media-independent family of standards for industrial automation. It defines comprehensive communications services for automation, including controls, motion, safety, synchronization, etc. It constitutes an application layer that can be implemented over a variety of physical and transport layers, as in DeviceNet, the Ethernet Industrial Protocol (EtherNet/IP), etc.
CIP is based on a producer-consumer model rather than a source-destination approach. Producers are the field devices that generate data while the consumers are the devices that use that data; some devices can be both. The chief benefit is that if multiple consumers need the same data, the producer only needs to transmit one time, no matter how many consumers are on the network. This improves bandwidth efficiency and enables the use of a variety of hierarchies such as master-slave, slave-multi-master, target-to-originator, and peer-to-peer.
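The produce-once, consume-many idea can be modeled with a simple fan-out, shown below with hypothetical tag names. On a real CIP network the single transmission is a multicast on the wire; here the loop over subscribers stands in for that delivery.

```python
class Producer:
    """Toy producer: publishes one sample that every subscriber receives."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, deliver):
        self._subscribers.append(deliver)

    def publish(self, sample):
        # one logical transmission, however many consumers are listening
        for deliver in self._subscribers:
            deliver(sample)

received = []
encoder = Producer()                                       # field device producing data
encoder.subscribe(lambda s: received.append(("plc", s)))   # controller consumes it
encoder.subscribe(lambda s: received.append(("hmi", s)))   # display consumes it too
encoder.publish({"axis1_position": 42.0})
assert len(received) == 2 and received[0][1] == received[1][1]
```

Adding a third consumer costs the producer nothing extra, which is the bandwidth advantage over a source-destination model that would transmit once per recipient.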
CIP is an object-oriented protocol in which each device is associated with a collection of objects that details its characteristics, performance, and how it communicates across the network. The protocol encompasses three types of objects: required objects, application objects, and vendor-specific objects. Among the required objects are the identity object (sort of an electronic nameplate for each device), the message-router object (which directs object-to-object communications), and the network object (which describes the network details required to connect an object in the network).
Application objects describe the way a device encapsulates data. The application object for a motor might include size, current rating, and speed. A device might require multiple objects that combine to create an object profile. To simplify implementation, the CIP protocol includes a library of predefined application objects and profiles. Objects can be grouped into assemblies that make it possible to establish rules for different behaviors; for example, one assembly might be created to monitor the output of a current meter for a drive at 100-ms intervals while another might be established to only capture the values when they exceed some trigger point. The user just picks the assembly that best suits their needs.
Vendor-specific objects enable manufacturers to customize objects beyond what is available in the libraries.
CIP includes two key extensions that make it suitable for motion control: CIP Sync and CIP Motion. Based on the IEEE 1588 Precision Time Protocol standard, CIP Sync is a distributed clock that works with the CIP object model. CIP Motion features motion profiles encompassing drive control. Examples include drive configuration, status, and diagnostic attributes and services; unicast control-to-drive communications; multicast peer-to-peer communications to synchronize position and velocity in drives slaved to multiple distributed controllers; and more.
Developed in the 1990s by Allen-Bradley, DeviceNet is a digital fieldbus designed for connecting industrial controllers and I/O devices. The protocol can support master-slave architectures for conventional centralized control or distributed control over peer-to-peer connections. It can link up to 64 nodes with a top speed of 500 kbps. Although it is not designed for motion, it can be an economical solution for tasks like connecting encoders in simple motion systems.
DeviceNet implements CIP on top of layer 1 and layer 2 from the CAN specification. CIP handles layers 5 through 7 while DeviceNet takes care of the network- and transport-layer tasks. The physical layer delivers both communications and up to 24 VDC, 8 A of power on the same bus.
Ethernet for Control Automation Technology (EtherCAT) is a real-time industrial Ethernet protocol designed specifically for high-speed, highly synchronized operation. Initially developed by Beckhoff Automation in 2002, it was transferred to the EtherCAT Technology Group in 2003. It is an open standard covered by IEC 61158. It delivers cycle times of 100 µs or better and jitter of less than or equal to 1 µs.
EtherCAT is a layer 1 and layer 2 protocol. In layer 2, it uses only the MAC layer to identify node locations and transmit to them. Designed for real-time applications, it allows exact scheduling of node-to-node communications. At the same time, it can carry standard Ethernet traffic to, for example, communicate with supervisory control and data acquisition (SCADA) systems or manufacturing execution systems (MESs). To accomplish standard Ethernet communications, the protocol embeds the Ethernet frame, headers and all, inside an EtherCAT frame, where it gets unpacked at the receiving device. A dedicated hardware device known as a switchport embeds the data into the EtherCAT segment.
EtherCAT is based on a master-slave architecture. The master transmits a telegram that goes to each slave node in sequence. Each slave device features a dedicated controller able to process the frames in hardware. When the telegram arrives, the slave device reads the applicable data, inserts data of its own in real time, and passes it to the next node. At the end of the bus, the signal returns, courtesy of the full-duplex bus.
Having each data packet pass through each node does introduce a small amount of latency at every hop, which must be compensated for in the scheduling. Timing is based on a distributed clock: the first slave node acts as the master clock, distributing its time to the other nodes in the network. The protocol corrects for propagation delay to maintain the jitter specification.
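The read-insert-forward behavior can be sketched as a single telegram visiting each slave in turn. Everything here is illustrative - a dict stands in for the real EtherCAT datagram layout, and the node names are made up - but it captures the key idea that one frame serves every node in the cycle.

```python
def run_telegram(slaves, frame):
    """One EtherCAT-style cycle: the frame passes through each slave, which
    reads its command and inserts its feedback on the fly."""
    for slave in slaves:
        cmd = frame["commands"].get(slave["name"])
        if cmd is not None:
            slave["target"] = cmd                        # slave consumes its command
        frame["feedback"][slave["name"]] = slave["pos"]  # and inserts its own data
    return frame  # the full-duplex ring carries the frame back to the master

slaves = [{"name": "axis1", "pos": 10.0}, {"name": "axis2", "pos": 20.0}]
frame = {"commands": {"axis1": 11.0}, "feedback": {}}
result = run_telegram(slaves, frame)
assert slaves[0]["target"] == 11.0                        # axis1 received its setpoint
assert result["feedback"] == {"axis1": 10.0, "axis2": 20.0}
```

In real hardware the insert happens in the slave's dedicated controller as the bits stream through, which is why the per-node latency is so small.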
The protocol supports nominal data rates of 100 Mbps with 90% utilization. Each master can support 65,535 slaves. Each slave can also control axes and handle I/O, leading to even larger network implementations.
EtherCAT requires a specialized processor in slave devices. The protocol is interoperable with CANopen and SERCOS III, making it possible for an EtherCAT network to interface and communicate with CANopen and SERCOS-III-enabled devices.
EtherNet/IP is an industrial networking protocol that adapts CIP to standard Ethernet. EtherNet/IP uses CIP for the session, presentation, and application layers (layers 5 through 7) while leveraging Ethernet for layer 1 and layer 2. It also takes advantage of TCP/IP and UDP/IP for layers 3 and 4. This enables it to not only use standard Ethernet equipment but to carry Internet traffic alongside machine data. To achieve the determinism required for hard-real-time motion, EtherNet/IP uses CIP Motion.
EtherNet/IP divides communications into explicit messaging and implicit messaging. Explicit messaging involves request/reply for non-real-time communications and takes place over TCP/IP. Examples of this type of communication include configuration parameters and acquisition of slowly varying data. Implicit messaging involves real-time I/O like control data transferring from a remote I/O device. This type of communication transmits over UDP/IP.
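The split can be summarized in a few lines. The explicit/implicit categories are the real EtherNet/IP message classes; the dispatch function itself is just an illustrative sketch.

```python
def pick_transport(kind: str) -> str:
    """Map an EtherNet/IP message class to the transport that carries it."""
    if kind == "explicit":   # request/reply, e.g., reading a config parameter
        return "TCP/IP"      # reliable, connection-oriented
    if kind == "implicit":   # cyclic real-time I/O data
        return "UDP/IP"      # lightweight, no retransmission delays
    raise ValueError(f"unknown message kind: {kind}")

assert pick_transport("explicit") == "TCP/IP"
assert pick_transport("implicit") == "UDP/IP"
```

The design choice mirrors the traffic: configuration reads can tolerate TCP's retransmission delays, while cyclic I/O would rather drop a stale sample than wait for it.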
By leveraging CIP Motion and CIP Sync, EtherNet/IP can achieve millisecond update rates for as many as 100 axes, and jitter of less than 100 ns.
Ethernet POWERLINK is a software-based industrial Ethernet protocol. Developed by B&R Industrial Automation in 2001 and handed off to the Ethernet POWERLINK Standardization Group (EPSG) in 2003, Ethernet POWERLINK is a Layer 7 protocol that delivers hard-real-time performance suitable for motion-control applications. It builds on both Ethernet and the CANopen protocol to simultaneously be both deterministic enough for highly synchronized motion and to send non-real-time data like user information in standard TCP/IP frames.
Functionally speaking, Ethernet POWERLINK presents a hybrid master-slave and peer-to-peer architecture. The network has a managing node that controls the clock and communication scheduling, plus a number of slave nodes known as controlled nodes. The managing node sends out a clock signal to synchronize communications. Next, on a set time schedule, it sends each node a polling message that can include commands and requests for information. In response, each node sends out a frame of its own. That transmission goes not only to the managing node but to all other nodes in the network.
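One cycle can be sketched as follows. SoC (start of cycle), PReq (poll request), and PRes (poll response) are the real POWERLINK frame names; the node names are hypothetical, and the broadcast nature of each response is what lets every node observe the others' data.

```python
def powerlink_cycle(controlled_nodes):
    """One isochronous cycle as a list of (sender, frame) events."""
    events = [("MN", "SoC")]                     # start-of-cycle sync broadcast
    for cn in controlled_nodes:
        events.append(("MN", f"PReq->{cn}"))     # managing node polls one node
        events.append((cn, "PRes (broadcast)"))  # reply is seen by every node
    return events

log = powerlink_cycle(["axis1", "axis2"])
assert log[0] == ("MN", "SoC")
assert log[1] == ("MN", "PReq->axis1") and log[2] == ("axis1", "PRes (broadcast)")
```

Because only one station transmits at any instant within the schedule, the cycle is collision-free even though it runs over standard Ethernet hardware.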
An important attribute of Ethernet POWERLINK is that each node can itself act as a managing node. That enables it to send commands/requests to other slave nodes to create an architecture of nested networks.
Each frame consists of an isochronous portion for real-time messages and an asynchronous portion that transmits standard TCP/IP Ethernet. Thus, the system can simultaneously transmit path commands in the isochronous segment and non-time-sensitive data like camera images in the asynchronous section over several frames.
Ethernet POWERLINK operates at standard Ethernet speeds of 100 Mbps, with a 200 µs minimum cycle time and jitter of about 20 ns. Each managing node can control up to 239 controlled nodes. Because each of those controlled nodes can act as a managing node, the actual number of devices an Ethernet POWERLINK network can support is much higher. It is an open-source standard; because the protocol is software-based, controlled-node functionality can be implemented in software, on an FPGA, or via a dedicated chip.
Launched in 1987 by a collaboration of German companies and universities, PROFIBUS uses a master-slave architecture to send data at speeds as fast as 12 Mbps. It is based on the RS-485 physical layer. The protocol is split into two classes: PROFIBUS DP, for decentralized peripherals, and PROFIBUS PA, for process automation. We will restrict this discussion to PROFIBUS DP, which is used for motion control. Interestingly, PROFIBUS DP can also be used for process control. This is particularly useful in manufacturing, where discrete and process-based automation frequently take place side-by-side.
PROFIBUS encompasses layer 1 and layer 2. In operation, the PROFIBUS master polls the slave nodes, sending data and requesting data from one node after another. The cycle time for the bus is defined by the duration of that process. Because these messages are the only signals on the bus and communication is scheduled by the master node, it is a completely deterministic solution.
The number of nodes supported is constrained by the drivers on RS-485 chips: 32 nodes per segment, 128 nodes per network. For practical purposes, most networks stick with a few segments and couple multiple networks together to get to higher node counts.
PROFINET is a switched-packet networking protocol based on a producer-consumer architecture. The producer nodes poll the consumer nodes but, unlike PROFIBUS, are not constrained to do so at constant intervals. It is important to note that PROFINET does not use the PROFIBUS fieldbus or protocol. PROFINET is based on the Ethernet layer 1 and layer 2 solutions and occupies layer 7. It exists in a number of variations; PROFINET IRT is of most interest to the motion control community.
To avoid latencies introduced by the network and transport layers, PROFINET has its own Ethertype. When the equipment detects the PROFINET Ethertype in a frame, it bypasses the network layer and transport layer, sending the data directly to the application layer. In this way, it avoids the delays and jitter introduced by protocols such as TCP/IP. Traffic without that Ethertype goes through the standard flow. This enables the network to support both hard-real-time motion control communications and less time-critical data such as configuration data or diagnostics. Jitter is less than 1 µs.
To achieve synchronized motion, PROFINET IRT uses IEEE 1588-2008. Unlike the original IEEE 1588, which distributes the clock through a nested, cascaded control structure, IEEE 1588-2008 adds transparent clocks, maintaining a single control loop between the master clock and the drives. The transparent clock reduces the latency that the more complex architecture can introduce. The protocol also uses scheduling to equalize the arrival time of packets from the various nodes: the master sends packets to the most distant nodes first to reduce jitter.
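The farthest-first idea can be checked with a small calculation. The propagation delays and the fixed gap between transmissions below are made-up numbers; the point is that launching toward the most distant node first compresses the spread of arrival times.

```python
def arrival_spread(nodes, gap_us=1.0, farthest_first=True):
    """Spread (max - min) of arrival times when packets are launched gap_us
    apart, given (name, propagation_delay_us) pairs."""
    order = sorted(nodes, key=lambda n: n[1], reverse=farthest_first)
    # packet i leaves at i * gap_us and arrives after its propagation delay
    arrivals = [i * gap_us + delay for i, (_, delay) in enumerate(order)]
    return max(arrivals) - min(arrivals)

nodes = [("near", 1.0), ("mid", 3.0), ("far", 5.0)]
# sending to the farthest node first yields a tighter cluster of arrivals
assert arrival_spread(nodes, farthest_first=True) < arrival_spread(nodes, farthest_first=False)
```

With these example numbers, farthest-first transmission means the later packets are also the shorter trips, so arrival times bunch together instead of fanning out.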
Because they don’t require the header information necessary for TCP/IP transport, PROFINET real-time frames tend to be small. The protocol leverages bandwidth reservation to guarantee availability to the real-time frames compared to the larger “best-effort” TCP/IP frames. That enables deterministic communication with the motion axes while remaining open to what is considered classic Ethernet traffic up to and including print jobs sent from the HMI.
Developed in the 1980s by a German machine-tool consortium, the serial real-time communication system (SERCOS) is a fiber-optic automation bus for hard-real-time motion applications. In 1995, it was standardized as IEC 61491. Based on a ring topology, SERCOS operates at speeds of up to 16 Mbps. The network can support up to 254 devices per ring with cycle times as low as 62.5 µs and jitter of less than 1 µs.
SERCOS uses a master-slave architecture. The master sends out the master synchronization telegram to all of the nodes. Next, in a user-defined sequence, each node in turn transmits its data. When this finishes, the master sends out the master data telegram, which contains data for each node in an assigned location.
To help ensure interoperability, the standard specifies more than 500 data blocks and motion-control functions. Network segments can be as long as 50 meters over plastic optical fiber and 250 meters over glass fiber.
SERCOS III is a fiber-based industrial Ethernet protocol capable of linking up to 511 devices with a jitter of ±10 ns. In addition, users can have multiple networks within a given installation. SERCOS III can operate in both slave-to-slave and controller-to-controller (machine-to-machine) modes. It supports a variety of topologies, including line or ring. The protocol uses Fast Ethernet for layer 1 and layer 2, giving users a choice of twisted-pair or fiber for the bus.
The master in the SERCOS network sends the synchronization telegram and launches the cycle for real-time communications, which can range from 31.25 µs to 65 ms. The longer the cycle, the greater the number of nodes that can be interrogated. Once the transmission completes, the master releases the remainder of the cycle time for the bus to be used as a unified communications channel. During this period, other types of data such as TCP/IP and even EtherNet/IP can use the bus. When the cycle completes, the master resumes control.
Today’s industrial networking solutions provide a number of options to OEMs and end-users. Start by determining the size of the network and the requirements for type and frequency of communications. Careful investigation into the fieldbuses listed above will reveal the best solution.
Thanks go to the following individuals for helpful conversations: Michael Bowne, PI North America; Sari Germanos, business development manager, B&R Industrial Automation; Daron Underwood, CTO, Kingstar; and John Wozniak, manager, CLPA.