Federal AI Data Security Guidance Sounds Warnings About Data Drift, Well Poisoning

May 29, 2025

New joint guidance issued by US government agencies addresses the assortment of threats to AI data security that firms need to be aware of as they tie models into their systems. These include the ways stored data can “drift” out of alignment with current sources, and the approaches active threat actors take to deliberately introduce malicious elements or commands.

AI data security includes updating, storage, and defense against attackers

AI data security remains something of an underdeveloped area. That is concerning, as the pace of development and adoption is badly outpacing the understanding of the ramifications and the creation of robust management plans, a situation that at least somewhat parallels the rapid onboarding of internet-connected smart devices over the last decade.

The security considerations span a broad range of elements, from safe storage of data to defending against active attackers. On the “safe storage” front, one important consideration is the early implementation of a quantum-resistant solution. But organizations have to consider not just the data they are holding, but the potential vulnerabilities of the sources they draw from.
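On the quantum-resistance point, the guidance stops short of prescribing specific tooling, but a minimal sketch of the idea might look like the following: encrypting stored training data with AES-256-GCM, since 256-bit symmetric keys are generally considered resistant to known quantum attacks (Grover’s algorithm only halves their effective strength). Key management is elided here and would belong in a KMS or HSM in practice.

```python
# Minimal sketch: encrypting an AI training-data blob at rest with AES-256-GCM.
# 256-bit symmetric keys are generally considered resistant to known quantum
# attacks, making this a reasonable stopgap while post-quantum key exchange
# standards mature. Key storage/rotation is deliberately out of scope.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_dataset(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt a dataset blob; the 12-byte nonce is prepended to the ciphertext."""
    nonce = os.urandom(12)                      # must be unique per encryption
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext

def decrypt_dataset(blob: bytes, key: bytes) -> bytes:
    """Split off the nonce and decrypt; raises InvalidTag if tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)       # in practice, store in a KMS/HSM
blob = encrypt_dataset(b"training records ...", key)
assert decrypt_dataset(blob, key) == b"training records ..."
```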

The AI data security guidance breaks this down into three main areas: “data poisoning” by attackers with malicious intent, “data drift” in which stored data gradually goes out of alignment with more current sources, and potential vulnerabilities that lie in wait in the supply chain.

Guidance urges caution in connecting AI to sensitive functions

While an organization’s model may or may not be targeted by attackers, data sourcing and quality issues are something everyone will have to deal with. The guidance suggests elements such as “content credentials” that help identify and attribute sources of training data, and digitally signed databases that provide access to older data so it can be removed or updated as necessary. Certificates of data integrity are likely to become important in the market as retail customers look for a simple and reliable assurance that these management policies are in place.
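The guidance does not specify an implementation for signed databases or content credentials, but the underlying mechanics are straightforward. A hedged sketch, with Ed25519 standing in for whatever signing scheme a real content-credential system would mandate:

```python
# Hypothetical sketch of a signed dataset snapshot: a data steward signs a
# hash of each snapshot so downstream consumers can verify provenance before
# training on it.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()      # held by the data steward
verify_key = signing_key.public_key()           # distributed to consumers

def sign_snapshot(data: bytes) -> tuple[bytes, bytes]:
    """Return (digest, signature) for a dataset snapshot."""
    digest = hashlib.sha256(data).digest()
    return digest, signing_key.sign(digest)

def verify_snapshot(data: bytes, digest: bytes, signature: bytes) -> bool:
    """Recompute the hash and check the steward's signature before ingesting."""
    if hashlib.sha256(data).digest() != digest:
        return False                            # data changed since signing
    try:
        verify_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

snapshot = b"row1,row2,row3"
digest, sig = sign_snapshot(snapshot)
assert verify_snapshot(snapshot, digest, sig)
assert not verify_snapshot(b"tampered", digest, sig)
```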

For some, AI data security will also mean active defense against attackers. Some of the threats the guidance highlights include “frontrunning,” or attempting to maliciously edit public sources of data just ahead of a known scheduled “snapshot” that will be added to training models, and “split-view poisoning” in which an attacker takes control of a domain or other resource and essentially turns it into a poisoned well for models to draw from.
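A mitigation commonly proposed for split-view poisoning, though not spelled out in code by the guidance itself, is hash pinning: record a cryptographic hash of each remote resource when the dataset is curated, then refuse any later download whose content no longer matches. An attacker who subsequently takes over a domain in the dataset cannot swap in poisoned data without breaking the pinned hash. The URL and hash below are purely illustrative:

```python
# Sketch of hash pinning against split-view poisoning: verify each downloaded
# resource against the SHA-256 hash recorded at dataset curation time.
import hashlib

# Recorded at curation time (illustrative values, not real resources).
pinned_hashes = {
    "https://example.com/corpus/part-0001.txt":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_resource(url: str, content: bytes) -> bool:
    """Accept downloaded content only if it matches the curation-time hash."""
    expected = pinned_hashes.get(url)
    if expected is None:
        return False                            # unknown source: reject
    return hashlib.sha256(content).hexdigest() == expected

# A resource whose domain changed hands serves different bytes and is rejected.
assert not verify_resource("https://example.com/corpus/part-0001.txt",
                           b"poisoned content")
```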

There is also the matter of “data drift” to consider, in which AI data stores gradually fall out of alignment with current real-world data. The guidance provides an array of management suggestions to help ensure that models stay properly updated, but organizations will also likely want to invest in data quality assessment tools to monitor for this phenomenon over time.
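As one illustration of what such tooling does under the hood, a simple drift check might compare the distribution of a stored feature against a fresh sample with a two-sample Kolmogorov-Smirnov test. The feature, threshold, and data below are all invented for the example:

```python
# Minimal sketch of drift monitoring: compare a numeric feature's distribution
# in the stored training data against a fresh sample using a two-sample
# Kolmogorov-Smirnov test. Production tools typically track many features
# across regular monitoring windows.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
stored_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time data
fresh_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)    # shifted live data

statistic, p_value = ks_2samp(stored_feature, fresh_feature)
if p_value < 0.05:                              # illustrative threshold
    print(f"Drift suspected: KS statistic={statistic:.3f}, p={p_value:.3g}")
else:
    print("No significant drift detected.")
```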

While the guidance is good reading filled with solid advice, there is one suggestion that may meet with some controversy. It advises using AI to train AI, a practice about which analysts remain notably skeptical.
