Jan 11 2024
Software

Unpacking AI Data Poisoning

As technology evolves, so do threats.

Machine learning and artificial intelligence are making their way to the public sector, whether agencies are ready or not.

Generative AI made waves last year, with ChatGPT boasting the fastest-growing user base in history. Meanwhile, Microsoft launched a generative AI service for the government in June, the Department of Defense announced a generative AI task force in August, and more initiatives are sure to come.

The list of possible use cases for AI is long: It can streamline cumbersome workflows, help agencies more effectively detect fraud and even support law enforcement efforts. Regardless of use case, one thing holds true: The more data a model ingests, the more accurate and impactful it will be. This assumes, of course, that the data isn’t being edited or added maliciously.

Data poisoning — the manipulation of algorithms through incorrect or compromised data — represents a new threat vector, particularly as more agencies embrace AI. While data poisoning attacks are not new, they have become one of the most critical vulnerabilities in ML and AI as bad actors gain access to greater computing power and new tools.

Watch Out for Data Poisoning Tactics

Data poisoning attacks can be categorized in two ways: by how much knowledge the attacker has and by which tactic they employ. When a bad actor has no knowledge of the model they seek to manipulate, such as its architecture or training parameters, it’s known as a black-box attack.

The other side of the spectrum is a white-box attack, in which the adversary has full knowledge of the model and its training parameters. These attacks, as you might suspect, have the highest success rate.

There are also grey-box attacks, which fall in the middle.

The amount of knowledge a bad actor has may also affect which tactic they choose. Data poisoning attacks, generally speaking, can be broken into four broad buckets: availability attacks, targeted attacks, subpopulation attacks and backdoor attacks. Let’s take a look at each.

Availability attack: With this breed of attack, the entire model is corrupted. As a result, model accuracy will be considerably reduced: the model will produce false positives, false negatives and misclassified test samples. One type of availability attack is label flipping, in which an attacker swaps the labels on training samples so the model learns incorrect associations (illustrated in the sketch that follows these descriptions).

Targeted attack: While an availability attack compromises the whole model, a targeted attack affects only a subset. The model will still perform well for most samples, which makes targeted attacks challenging to detect.

Subpopulation attack: Much like a targeted attack, a subpopulation attack doesn’t affect the whole model. Instead, it influences subsets that have similar features.

Backdoor attack: As the name suggests, this type of attack takes place when an adversary introduces a back door — such as a set of pixels in the corner of an image — into training examples. This triggers the model to misclassify items.
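
To make these tactics concrete, here is a minimal, hypothetical Python sketch of how label flipping and a pixel-patch backdoor could be injected into an image training set. The dataset, flip rate, trigger size and target class are illustrative assumptions, not details drawn from any real incident.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 1,000 28x28 grayscale images with labels 0-9.
# (Illustrative data only; a real attack would target an agency's actual corpus.)
images = rng.random((1000, 28, 28), dtype=np.float32)
labels = rng.integers(0, 10, size=1000)

def flip_labels(labels, flip_rate=0.2, num_classes=10):
    """Label-flipping availability attack: silently reassign a fraction of labels."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(flip_rate * len(labels)), replace=False)
    # Shift each chosen label by a random nonzero offset so it is always wrong.
    poisoned[idx] = (poisoned[idx] + rng.integers(1, num_classes, size=len(idx))) % num_classes
    return poisoned

def add_backdoor(images, labels, target_class=7, poison_rate=0.05):
    """Backdoor attack: stamp a small bright patch in a corner of a few images
    and relabel them as the attacker's chosen target class."""
    imgs, lbls = images.copy(), labels.copy()
    idx = rng.choice(len(imgs), size=int(poison_rate * len(imgs)), replace=False)
    imgs[idx, -3:, -3:] = 1.0   # 3x3 trigger patch in the bottom-right corner
    lbls[idx] = target_class    # the model learns: patch present -> target class
    return imgs, lbls

poisoned_labels = flip_labels(labels)
backdoored_images, backdoored_labels = add_backdoor(images, labels)
```

In the label-flipping case, accuracy degrades across the board; in the backdoor case, the model behaves normally until an input carrying the trigger patch arrives, which is what makes backdoors so difficult to spot.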

How to Fight Back Against Data Poisoning

In the private sector, Google’s anti-spam filter has been attacked multiple times. By poisoning the data used to train the filter, bad actors have been able to shift how spam was defined, causing malicious emails to bypass the filter.

Now, imagine if something similar happened to an agency. Undoubtedly, the impact would be far worse.

“Proactive measures are critical because data poisoning is extremely difficult to remedy.”
Audra Simons, Senior Director of Global Products, Forcepoint G2CI

The question, then, is how agencies can prevent data poisoning from taking place.

To start, proactive measures must be put in place. Agencies need to be extremely diligent about which data sets they use to train a given model and who is granted access to them.

When a model is being trained, it’s crucial to keep its operating information secret. This high level of diligence can be enhanced by high-speed verifiers and zero-trust content disarm and reconstruction, tools that help ensure the data being transferred is clean.

Additionally, statistical models can be used to detect anomalies in the data, while tools such as Microsoft Azure Monitor and Amazon SageMaker can detect shifts in accuracy.
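
As a rough illustration of that statistical approach (and not a recreation of Azure Monitor or SageMaker functionality), the following Python sketch flags training samples whose feature values sit far from the population mean and raises an alert when held-out accuracy drifts below a baseline. The z-score threshold, the accuracy tolerance and the helper names are assumed values an agency would tune and adapt to its own pipeline.

```python
import numpy as np

def flag_outliers(features, z_threshold=6.0):
    """Flag samples whose per-feature z-scores are unusually large.
    Flagged rows are candidates for review before the data is used for training."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-9          # avoid division by zero
    z = np.abs((features - mean) / std)
    return np.where(z.max(axis=1) > z_threshold)[0]

def accuracy_drift_alert(current_accuracy, baseline_accuracy, tolerance=0.02):
    """Alert when held-out accuracy falls noticeably below an established baseline,
    one possible symptom of a poisoned retraining run."""
    return (baseline_accuracy - current_accuracy) > tolerance

# Illustrative usage with synthetic data.
rng = np.random.default_rng(1)
features = rng.normal(size=(500, 20))
features[3] += 10.0                            # plant one obvious outlier
print(flag_outliers(features))                 # -> [3], the planted outlier
print(accuracy_drift_alert(0.91, 0.95))        # -> True: accuracy fell 4 points
```

Simple checks like these do not replace managed monitoring services, but they show the kind of signal those tools look for: data that no longer resembles the population the model was trained on, or accuracy that quietly slips after a retraining run.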

Proactive measures are critical because data poisoning is extremely difficult to remedy. To correct a poisoned model, the agency would have to conduct a detailed analysis of its training inputs to detect and remove fraudulent ones.

As data sets grow, that analysis becomes more difficult, if not impossible. In such cases, the only option is to retrain the model completely, a time-consuming and expensive process.

Training GPT-3, for instance, carried a price tag of more than $17 million. Most agencies simply do not have the budget for that kind of correction.

As agencies embrace ML and new forms of AI, they must be aware of the threats that accompany them. There are numerous ways for an adversary to disrupt a model, from inserting malicious data to modifying existing training samples.

Preventing data poisoning attacks is particularly crucial as more agencies rely on AI to deliver essential services. For AI to truly live up to its potential, agencies must take the steps necessary to maintain model integrity across the board.
