AI Developers Face an Onslaught of Cyber Threats From Low-Tech Attackers

Jan 16, 2024

AI models rely heavily on huge troves of training data, and according to the National Institute of Standards and Technology (NIST) that presents a serious challenge for the industry, one with no easy solutions. Relatively low-skill attackers can find multiple ways to poison data and insert backdoors into these systems, both before and after they are deployed, leaving AI developers with a major problem to address.

NIST: No guaranteed solutions for broad assortment of cyber threats to AI

Researchers have been devising theoretical attacks on AI systems since the initial boom in expert systems in the 1980s. Some of these cyber threats are variants of techniques that have already appeared in the wild, used primarily by spammers and malware developers to defeat automated pattern-recognition systems.

Today’s AI is even more vulnerable than some of those past systems, however, primarily because of the oceans of data it must take in to function. Systems like ChatGPT absorb tens of trillions of “tokens” of training data to deliver their apparent expertise across a wide variety of topics, and they must be fueled by free-ranging scraping of the internet. This opens up numerous possibilities for attackers. AI developers can control which URLs they point models at, but they cannot feasibly screen all of the content those pages contain, control user-generated content, or ensure that the URLs will not be taken over by a bad actor at some point.
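To make the screening problem concrete, here is a minimal Python sketch of a scraped-data intake filter; the domain allowlist, blocked phrase, and size cap are made-up assumptions for illustration, not controls from the NIST report. Even a filter like this cannot vet user-generated sections of allowed domains or notice that a domain has lapsed and been re-registered by an attacker.

```python
# Minimal sketch of a scraped-data intake filter. The allowlist, blocked
# phrase, and size cap below are made-up examples, not recommended values.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"en.wikipedia.org", "arxiv.org"}   # hypothetical allowlist
BLOCKED_PHRASES = ("ignore previous instructions",)   # crude poisoning check
MAX_DOC_BYTES = 10 * 1024 * 1024

def accept_document(url: str, text: str) -> bool:
    """Return True if a scraped page passes the coarse intake checks."""
    domain = urlparse(url).netloc.lower()
    if domain not in ALLOWED_DOMAINS:
        return False                                  # unknown host: skip it
    if len(text.encode("utf-8")) > MAX_DOC_BYTES:
        return False                                  # drop oversized pages
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(accept_document("https://en.wikipedia.org/wiki/Example", "Example text."))
```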

And all of this is happening in the early stages of AI flooding the internet with synthetic content. The more AI-generated content replaces human-created work, the more AI developers end up training models on what AI has already generated. If these feedback loops become severe enough, they could render the models useless.
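As a rough intuition for that feedback loop, the toy simulation below repeatedly refits a Gaussian distribution to samples drawn from the previous fit. The sample size and generation count are arbitrary, and the shrinking spread is only a loose stand-in for how retraining on model-generated data can narrow what a model captures.

```python
# Toy simulation of the synthetic-data feedback loop: repeatedly refit a
# Gaussian to a handful of samples drawn from the previous fit. Parameters
# are arbitrary; the point is only that the fitted spread tends to drift
# away from the original distribution over many generations.
import random
import statistics

mean, stdev = 0.0, 1.0                      # the "real" data distribution
for generation in range(1, 51):
    samples = [random.gauss(mean, stdev) for _ in range(5)]
    mean = statistics.fmean(samples)        # refit on model-generated samples
    stdev = statistics.stdev(samples)
    if generation % 10 == 0:
        print(f"generation {generation}: stdev = {stdev:.4f}")
```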

Little technical skill is required to pull off many of the attack types NIST documents. Data poisoning is a matter of simply putting bad data in an AI’s path, and privileged access to models can also be readily obtained via open source components or third parties. NIST says there are no easy answers for shutting down this tremendous range of readily available vulnerabilities, suggesting that AI developers may have to reduce functionality and specialize in the interest of preserving security and long-term stability.
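Label flipping is one concrete example of how little skill data poisoning can require: the attacker only needs mislabeled examples to end up in the training set. The sketch below uses a made-up sentiment dataset and an arbitrary poison rate purely for illustration.

```python
# Minimal sketch of label-flipping data poisoning: flip the label on a
# fraction of training examples, as an attacker seeding scraped sources
# with mislabeled content might. Dataset and rate are made up.
import random

clean_data = [
    ("great product, works perfectly", "positive"),
    ("terrible, broke after a day", "negative"),
    ("exceeded my expectations", "positive"),
    ("waste of money", "negative"),
    ("does exactly what it says", "positive"),
]

def poison(dataset, rate=0.2, seed=0):
    """Return a copy of the dataset with labels flipped at the given rate."""
    rng = random.Random(seed)
    flipped = []
    for text, label in dataset:
        if rng.random() < rate:
            label = "negative" if label == "positive" else "positive"
        flipped.append((text, label))
    return flipped

for text, label in poison(clean_data):
    print(label, "-", text)
```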

Can AI developers keep up with threats?

AI developers may already have serious issues with “poisoned” training sets, given how heavily their pipelines have relied on indiscriminate scraping of internet data, and they are certainly already getting a taste of attackers who relentlessly try to jailbreak their safety mechanisms.

AI cyber threats go far beyond misinformation and offensive speech, however. Some systems are already being granted access to sensitive confidential and personal information. AI developers have to think about what kinds of files their systems will be able to reach, how attackers will attempt to manipulate their way to that information, and what can be done about it.
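One simple defense on the file-access side is to resolve every path a model-driven tool asks for and refuse anything outside an approved directory. The sketch below assumes a hypothetical sandbox directory and tool wrapper; it illustrates the idea rather than any control prescribed by the NIST paper.

```python
# Minimal sketch of a file-access guard for a model-driven tool. The sandbox
# directory is a hypothetical example; real deployments would pair this with
# OS-level permissions.
from pathlib import Path

ALLOWED_ROOT = Path("/srv/ai-workspace").resolve()    # hypothetical sandbox

def read_for_model(requested_path: str) -> str:
    """Return file contents only if the resolved path stays in the sandbox."""
    target = Path(requested_path).resolve()
    if not target.is_relative_to(ALLOWED_ROOT):       # Python 3.9+
        raise PermissionError(f"{requested_path} is outside the allowed root")
    return target.read_text(encoding="utf-8")

# A prompt-injected traversal attempt resolves outside the sandbox and is
# rejected before the model ever sees the contents.
try:
    read_for_model("/srv/ai-workspace/../../etc/passwd")
except PermissionError as exc:
    print("blocked:", exc)
```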

The full 100-page NIST paper goes into impressive detail about all of the cyber threats that can be anticipated at this point, and it should be required reading for AI developers. But even a quick read is of great benefit to anyone in an organization that may be implementing AI tools in the future, as a summary of the risks to expect.
