The Alarming Rise Of Adversarial Machine Learning Attacks

By Emily Newton

Artificial intelligence (AI) and machine learning (ML) have quickly become staples of the communications industry. While these technologies have enabled impressive gains, they also introduce unique risks. The threat of adversarial machine learning is one of the most significant.
What Is Adversarial Machine Learning?
Adversarial machine learning attacks manipulate ML models, causing them to produce unreliable outputs. That could be as minor as making a chatbot offer incorrect information or as severe as stopping a self-driving car from recognizing pedestrians in its path.
A recent research paper from the National Institute of Standards and Technology (NIST) divides these attacks into two main categories. The first covers those targeting an ML model’s training, where cybercriminals interfere with data to undermine the algorithm’s accuracy. Data poisoning — where criminals insert false or misleading information into training datasets — is a prominent example.
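To make the idea concrete, the sketch below shows how silently flipping a portion of the training labels degrades a model's accuracy. It is only an illustration of the concept, assuming scikit-learn, a synthetic dataset, and a hypothetical 20% flip rate, not a reconstruction of any real attack.

```python
# Minimal label-flipping data poisoning sketch (illustrative only).
# Assumes scikit-learn; the dataset and 20% flip rate are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker flips the labels of 20% of the training examples.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```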
The second category deals with attacks targeting the model at test time. Instead of tampering with an algorithm's development, these methods work by supplying misleading inputs. In NIST's example, applying digital noise over an image can cause a machine vision model to categorize it incorrectly, even though the change is imperceptible to humans.
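The sketch below illustrates that idea with a simple FGSM-style perturbation in PyTorch. The tiny stand-in model, random input, and epsilon value are assumptions made for demonstration, not the setup NIST describes.

```python
# Minimal FGSM-style evasion sketch (illustrative, not NIST's code).
# `epsilon` controls how much noise is added to the input.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x plus a small adversarial perturbation."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Nudge each pixel in the direction that increases the loss, then clamp.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Tiny stand-in classifier and a random "image" (assumptions for illustration).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
x_adv = fgsm_perturb(model, x, label)
print("prediction before:", model(x).argmax(dim=1).item())
print("prediction after: ", model(x_adv).argmax(dim=1).item())
```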
How Adversarial ML Affects The Telecom Industry
Adversarial attacks have become a bigger threat as AI and ML are increasingly common in the telecommunications sector. While the industry hasn’t experienced any significant incidents so far, a successful one could cause considerable damage.
Some device manufacturers use ML models to predict when equipment will fail, enabling quick, low-cost repairs. A cybercriminal could target such systems through data poisoning, causing the algorithm to miss critical signs of wear. Consequently, the equipment it monitors could experience a significant breakdown without warning, leading to damaged goods and high repair costs.
In a more dramatic scenario, adversarial ML could target material-handling robots' navigation systems. Undermining the AI's reliability could cause the machines to collide with and injure employees. Attacks could also cause back-office AI to divulge sensitive customer information or fail to spot incoming cyberattacks against a company's network.
Defending Against Adversarial Machine Learning Attacks
The more an organization relies on machine learning, the more dangerous adversarial ML becomes. Thankfully, protection is possible. Telecom businesses can defend against adversarial machine learning through a few key strategies.
Limiting Access To ML Data
One of the most important steps in preventing adversarial machine learning attacks is restricting access to training datasets. It’s best to follow the principle of least privilege, which states that only users who need access to certain data to perform their roles should have those privileges.
Any access restrictions require strong authentication measures to work as intended. A simple username and password combination is not enough, as 46% of surveyed Americans have had their passwords stolen. Even a strong password is susceptible to leaks, so multifactor authentication (MFA) is necessary for accessing sensitive ML data.
Even network administrators may not need access to training data. Only employees actively involved in the AI model’s development and oversight should have these permissions. Fewer authorized accounts mean fewer potential entry points for attackers.
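As a minimal illustration of the principle, the sketch below uses a hypothetical role-to-dataset allow-list that denies access by default. Real deployments would enforce this through identity and access management tooling rather than application code.

```python
# Minimal least-privilege sketch: a hypothetical allow-list maps roles to the
# datasets they may touch; anything not listed is denied by default.
ALLOWED_DATASETS = {
    "ml_engineer": {"training_data", "validation_data"},
    "model_auditor": {"validation_data"},
    # Network admins get no training-data access by default.
    "network_admin": set(),
}

def can_access(role: str, dataset: str) -> bool:
    """Deny unless the role is explicitly granted the dataset."""
    return dataset in ALLOWED_DATASETS.get(role, set())

print(can_access("ml_engineer", "training_data"))    # True
print(can_access("network_admin", "training_data"))  # False
```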
Adversarial Training
Businesses can also develop their machine learning models to resist adversarial ML. A technique called adversarial training is one of the most popular and effective ways to do so.
Adversarial training exposes a model to known attack methods during training to teach it to identify and ignore them. The simplest approach is to add malicious examples to the training data while keeping their correct labels. Alternatively, teams can build a binary classification algorithm into the model to determine whether an input is trustworthy before analyzing it.
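A minimal sketch of the first approach is shown below: each training batch is augmented with FGSM-style perturbed copies that keep their correct labels. The model, random stand-in data, and perturbation size are illustrative assumptions, not a production recipe.

```python
# Minimal adversarial-training sketch in PyTorch (illustrative assumptions:
# a tiny model, random stand-in data, and an FGSM-style perturbation).
import torch
import torch.nn as nn

def perturb(model, x, y, eps=0.03):
    """FGSM-style perturbation, same idea as the earlier sketch."""
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    # Craft adversarial variants of the batch but keep the correct labels.
    x_adv = perturb(model, x, y)
    optimizer.zero_grad()
    # Learn from clean and adversarial examples together.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# One illustrative step on random stand-in data.
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print("training loss:", train_step(x, y))
```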
While this defense does harden ML, it's not a perfect solution on its own. One study found that achieving a 70% recognition rate entailed a 40% false positive rate, and sophisticated attack methods often work around it.
Ensemble Learning
Another way to strengthen a model against adversarial machine learning is to use a more robust algorithm. Ensemble learning, which combines multiple ML models to increase confidence in the output, can make adversarial attacks less likely to influence AI’s decision-making.
Research finds that model diversity leads to higher accuracy and fewer issues related to overfitting or bias. Those same advantages make ensemble learning resistant to malicious inputs. An attack may lead to an incorrect output in one algorithm, but different results from the others can counteract it.
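A minimal sketch of the idea, assuming scikit-learn and a synthetic dataset: three dissimilar models vote on each prediction, so a single fooled model cannot decide the output on its own.

```python
# Minimal ensemble sketch (scikit-learn; dataset and models are illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three dissimilar models; a hard vote takes the majority prediction,
# so one fooled model alone cannot change the final output.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```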
Like adversarial training, ensemble learning is imperfect. Cybercriminals could trick every algorithm within the model, though this is far harder than influencing a simpler system. The biggest downside is that ensemble methods involve higher development costs and complexity.
Model Explainability
Organizations may be unable to stop all adversarial ML attacks, especially as methods grow increasingly sophisticated. Consequently, it’s also crucial to implement steps to catch them before they can cause damage. Enabling model explainability is among the most critical steps here.
Many algorithms work in a “black box,” meaning it’s difficult to determine how they arrive at their conclusions. By contrast, explainable AI is transparent and shows how it analyzes each factor to produce a given output.
Building and training an explainable ML model is time-consuming and challenging. However, tracing the technology’s decision-making means professionals can identify cases of data poisoning or malicious inputs. Teams can then correct the model to harden it against similar situations in the future.
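One lightweight way to approximate this transparency is permutation importance, which measures how strongly each feature drives a model's predictions. The sketch below uses scikit-learn on synthetic data; in practice, teams would compare these scores across model versions and investigate unexpected shifts.

```python
# Minimal explainability sketch: permutation importance shows which features
# drive the model's output. Illustrative data and model (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Large, unexpected changes in these scores between model versions are a
# signal worth investigating for data poisoning or malicious inputs.
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```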
Limiting AI Reliance
Businesses must recognize that AI is not always as trustworthy as it seems. Minimizing reliance on machine learning will make adversarial attacks far less effective.
Experts should verify AI suggestions before making significant decisions based on them. This ensures a compromised model won’t lead to dramatic outcomes. Designing workflows to have backup options if automation solutions like machine vision fail will likewise make ML’s vulnerabilities less impactful.
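A minimal sketch of such a safeguard: predictions below a confidence threshold are routed to human review instead of being acted on automatically. The threshold and labels here are hypothetical placeholders.

```python
# Minimal confidence-gated workflow sketch: low-confidence outputs fall back
# to human review. The threshold and labels are hypothetical placeholders.
CONFIDENCE_THRESHOLD = 0.9

def handle_prediction(label: str, confidence: float) -> str:
    """Act on high-confidence outputs; escalate everything else."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-approved: {label}"
    return f"sent to human review: {label} (confidence {confidence:.2f})"

print(handle_prediction("equipment OK", 0.97))
print(handle_prediction("equipment OK", 0.62))
```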
Studies find that simply knowing a suggestion came from AI causes people to over-rely on it, leading to inefficient outcomes and other negative consequences. Workflow safeguards, employee training, and explainability will help counteract this trend and improve resilience.
Adversarial Machine Learning Attacks Demand Attention
Adversarial machine learning will become a larger threat to the telecom industry as the sector’s AI usage grows. Companies must recognize and address the issue today to remain safe.
While this threat does not mean ML is inherently unsafe, it does mean businesses should approach it carefully. Once organizations learn of the risk and understand how these attacks work, they can develop a more resilient AI strategy.