Industrial Switches Combat Security Dangers in Industrial Artificial Intelligence Systems

POSTED 08/04/2023 | By: Henry Martel, Antaira Technologies



The tools and software underpinning industrial Artificial Intelligence (AI) systems are vulnerable to cyber-attacks that may enable bad actors to hijack those systems and manipulate them to serve malicious ends. As AI and its subfields of machine learning and deep learning rapidly make their way into industrial networks, cyberattacks represent an escalating threat that needs to be addressed. A compromised AI system holds the potential to seriously damage a company’s reputation and financial standing, and even endanger the safety of employees.

AI is a double-edged sword. While it is opening doors to limitless possibilities to further human intelligence, solve problems and improve industrial processes, it is simultaneously exposing businesses and users to new cyber threats. In this blog post, we explore the security dangers to industrial AI systems, and specifically, the role industrial Ethernet switches play in preventing attacks on artificial intelligence technology.

What Makes Artificial Intelligence Vulnerable?

The reason for AI vulnerability is fundamental: each day, thousands of IoT devices are connected to networks around the globe to collect, store, and analyze the big data that feeds AI systems, handing hackers an increasingly broad attack surface. Every sensor, actuator, edge computer, industrial switch and server you deploy is a potential gateway to the network.

Moreover, the very way AI learns leaves these systems open to attack. AI learns continuously by using machine learning and deep learning algorithms to identify and recognize patterns in massive historical datasets, emulating the human brain in decision making. Given enough data, the learned patterns can be of such high quality that they outperform humans at many tasks. For instance, an AI system may ingest thousands of images of the components used in the assembly of a complex product. It learns by identifying patterns in each component’s shape, markings, weight, depth, color and size, growing more accurate with each image consumed. The upshot of this deep learning is vastly improved quality of the assembled product, with faster throughput and reduced waste compared to a manual assembly line staffed by human workers.

But there’s a catch: a hacker can alter the image dataset to trick the deep learning system into making mistakes. What if a disgruntled employee alters the dataset images to contain the wrong component size or color? A change of a few pixels can throw an automated assembly line into chaos, where it rejects good parts while installing non-compliant ones.
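The label-tampering scenario above can be sketched in a few lines. Here a toy nearest-centroid classifier stands in for a real deep learning model, and all numbers, feature names, and labels are invented for illustration; the point is only that identical code produces opposite decisions once its training data is corrupted.

```python
# Toy sketch of training-data poisoning. A nearest-centroid classifier
# stands in for a real deep learning model; all values are invented.

def centroid(points):
    """Mean of a list of (x, y) feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(sample, centroids):
    """Return the label whose class centroid lies nearest to the sample."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Clean training data: two features (say, size and color score) per part type.
good_parts = [(1.0, 1.1), (1.2, 0.9), (0.9, 1.0)]
bad_parts = [(5.0, 5.2), (5.1, 4.9), (4.8, 5.0)]

clean = {"good": centroid(good_parts), "bad": centroid(bad_parts)}
print(classify((1.1, 1.0), clean))      # -> good: genuine part accepted

# An insider swaps the labels in the training set. The code is unchanged,
# yet the model now rejects good parts and accepts non-compliant ones.
poisoned = {"good": centroid(bad_parts), "bad": centroid(good_parts)}
print(classify((1.1, 1.0), poisoned))   # -> bad: genuine part rejected
```

Nothing in the classifier itself changed between the two runs; only the training data did, which is exactly why poisoned data is so hard to catch by reviewing code.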

Another example of an AI attack would be an autonomous forklift in a warehouse that is “taught” to incorrectly recognize objects in its path, perhaps causing it to crash into storage racks or employees. An AI-powered video surveillance system can be maliciously trained to misclassify events as non-threatening so that true dangers evade detection. Data manipulation, alteration of a system’s decision-making process, injection of malicious commands, or total system control are all real consequences of an industrial AI hack.

We should stress here that AI attacks of this type are not due to the data scientist who developed the deep learning software making a critical mistake in the code. Rather, it is the very nature of the learning algorithm to extract patterns by generalizing from the training data it ingests. Corrupt the training data, and you corrupt the system’s output.

AI as a Hacker Tool

When in the wrong hands, generative AI programs themselves can be used to uncover weaknesses in networks. Imagine a remote sensor in an oil field that was accidentally deployed without a firewall. A hacker using AI can automatically scan the network’s attack surface and flag the sensor as vulnerable to exploitation. Once inside the network via the sensor’s connection, the hacker can install AI-powered malware to collect intellectual property or alter data with relatively low risk of detection.

Another way cyber criminals are leveraging AI is to automate phishing attacks. Unfortunately, AI-generated phishing emails are more likely to be opened than ones created by humans, because AI can scrape personal information from social media accounts to make messages look authentic to the recipient. According to a 2021 Cybersecurity Threat Trends report, about 90% of data breaches involve phishing and human error.

Providing Security to AI Networks

Industrial networks, whether or not they have adopted AI, typically rely on Zero Trust and Defense in Depth cybersecurity strategies to help reduce vulnerabilities, contain threats, and mitigate risk, increasingly with the aid of AI algorithms. Below is a brief summary of each approach.

Zero trust architecture is based on the assumption that every connection and endpoint represents a security threat to your network. Zero trust is implemented on a framework where:

• All traffic is logged and inspected

• Network access is strictly limited and controlled based on roles

• Every connection accessing data and resources must be authorized

• All network resources are continuously secured and verified

Essentially, zero trust grants users only the minimum access privileges needed to perform a specific task, while blocking access to the parts of the network they do not need.
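The least-privilege rule above can be expressed as a default-deny check: every request must be authenticated and must match a permission explicitly granted to the requester's role. This is a minimal sketch with invented role and permission names, not a production access-control system.

```python
# Minimal sketch of a zero-trust, role-based access check.
# Deny by default: nothing is trusted until authenticated and authorized.
# Role names, actions, and device names are invented for illustration.

ROLE_PERMISSIONS = {
    "line-operator": {"read:conveyor-plc"},
    "maintenance": {"read:conveyor-plc", "write:conveyor-plc"},
}

def authorize(role, action, authenticated):
    """Allow only authenticated requests whose role explicitly grants the action."""
    if not authenticated:
        return False
    return action in ROLE_PERMISSIONS.get(role, set())

print(authorize("line-operator", "read:conveyor-plc", authenticated=True))   # True
print(authorize("line-operator", "write:conveyor-plc", authenticated=True))  # False
print(authorize("maintenance", "write:conveyor-plc", authenticated=False))   # False
```

Note the design choice: an unknown role or action falls through to denial rather than raising an error, so anything not explicitly granted stays blocked.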

Defense in depth is based on the IEC 62443 cybersecurity standard. It calls for multiple layers of security to be installed on the network — physical, technical, and administrative — both in the form of security devices and best practices. At the core of this layered approach is redundancy: if one security measure fails to stop an attack, the next may help limit and mitigate the damage before the attack reaches the entire network. Another key tenet is network segmentation. IEC 62443 uses the term “zone” for segments. All communication devices within a zone are assigned the same security level and similar protection. Zones can be placed inside larger zones if necessary.

Combining zero trust and defense in depth creates a cybersecurity program built on layer after layer of protection, fortified by restricted access to specific areas of the network.

The Role of Industrial Ethernet Switches

Industrial Ethernet switches are one of the cornerstones of an AI-enabled industrial network. Ethernet switches connect the IIoT devices that collect vital real-time information and transmit that data to AI software for analysis. Both Zero Trust and Defense in Depth approaches take into account the vulnerability of industrial switches.

A company’s industrial switch infrastructure is essentially a huge web of interconnected data highways, which makes an unprotected networking switch an ideal gateway for a hacker. Despite industrial switches increasingly being the target of cyber-attacks, networking switches are often overlooked by security teams. This is a serious danger: if an attacker were to take control of a network switch via ARP spoofing, for instance, they could potentially exploit it to move laterally to additional devices and change their behavior. In addition, controlling the industrial switch could aid in denial-of-service or man-in-the-middle attacks, as well as provide the means for the theft or unauthorized movement of data from devices connected to the switch. This is only a partial list of potentially damaging outcomes.
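One classic symptom of the ARP spoofing mentioned above is the same IP address suddenly advertising a different MAC address. The sketch below shows how a monitoring script might flag that; the addresses are invented, and real deployments would use dedicated tooling or switch features such as dynamic ARP inspection.

```python
# Minimal sketch of detecting one ARP-spoofing symptom: an IP address
# that changes its advertised MAC. All addresses are invented.

def watch_arp(replies):
    """Yield an alert whenever an IP's advertised MAC changes."""
    seen = {}
    for ip, mac in replies:
        if ip in seen and seen[ip] != mac:
            yield f"ALERT: {ip} changed from {seen[ip]} to {mac} (possible ARP spoofing)"
        seen[ip] = mac

replies = [
    ("192.168.1.10", "aa:bb:cc:00:00:01"),  # legitimate device answers
    ("192.168.1.10", "aa:bb:cc:00:00:01"),  # normal refresh, same MAC
    ("192.168.1.10", "de:ad:be:ef:00:99"),  # attacker claims the same IP
]
for alert in watch_arp(replies):
    print(alert)
```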

Industrial switch attack resilience varies widely from vendor to vendor. In general, managed industrial switches will have more security than unmanaged industrial switches. At Antaira, we integrate a number of features into our industrial Ethernet switches to help improve the security of your industrial network, including:

• Port Security: Limits the number of MAC addresses on each port to help prevent unknown or unauthorized devices from forwarding packets.

• Access Control Lists: Known as “ACLs,” these are lists of rules that help you control and monitor traffic on your network. ACLs act as filters, permitting or denying traffic as it enters the network.

• 802.1X Authentication: This feature requires devices connecting to a network to authenticate before being able to access resources.

• VLANs: A virtual LAN or “VLAN” segments a network to keep sensitive data isolated from less secure parts of the LAN.

• SNMP v1/v2c/v3: Simple Network Management Protocol lets switches be monitored and managed remotely. Only SNMPv3 adds authentication and encryption, reducing the risk of unauthorized parties gaining access to management data, so the older v1 and v2c should be disabled where possible.
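The port-security feature in the list above can be illustrated with a toy simulation: each port learns a limited number of source MAC addresses and drops frames from any address beyond that limit. This is purely illustrative; real switches implement this in hardware and are configured through the vendor's CLI or web interface.

```python
# Toy simulation of port security: a port learns at most max_macs source
# MAC addresses; frames from any further MAC are dropped as violations.
# Ports and MAC addresses below are invented for illustration.

def port_security(frames, max_macs=1):
    learned = {}    # port -> set of MACs learned on that port
    decisions = []
    for port, src_mac in frames:
        allowed = learned.setdefault(port, set())
        if src_mac in allowed:
            decisions.append((port, src_mac, "forward"))
        elif len(allowed) < max_macs:
            allowed.add(src_mac)                      # learn the new MAC
            decisions.append((port, src_mac, "forward"))
        else:
            decisions.append((port, src_mac, "drop"))  # security violation
    return decisions

frames = [
    (1, "aa:aa:aa:aa:aa:01"),  # first device on port 1 -> learned, forwarded
    (1, "aa:aa:aa:aa:aa:01"),  # same device -> forwarded
    (1, "bb:bb:bb:bb:bb:02"),  # second MAC on port 1 -> dropped
]
for decision in port_security(frames):
    print(decision)
```

On a real switch the response to a violation is usually configurable, for example shutting the port down or raising an alarm rather than silently dropping the frame.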

Of course, it is good practice to disable all unused ports when you configure a networking switch out of the box, enabling each port only when a device needs to come online. Even a simple step like locking Ethernet switches in a closet or cabinet can prevent trouble.


As industries continue to embrace AI-driven technologies, the implementation of secure industrial Ethernet switches becomes imperative to protect critical infrastructure and valuable data from ever-evolving cyber threats. Industrial switches facilitate secure communication between AI-enabled devices, such as AI-powered robots and other intelligent systems and sensors, yet can serve as a gateway to your network if not properly safeguarded. To learn how your organization can leverage the benefits of AI without compromising on security, contact the Antaira technical team at 1-714-671-9000.