AI Security Guidelines Adopted by 18 Countries Show a Potential Path Forward for Regulations

Dec 5, 2023

A new set of AI security guidelines promoted by the US and UK offers some insight into coming regulation of the industry, covering everything from initial design to ongoing maintenance once systems are in the hands of customers.

Though the guidelines are not binding in any way, 18 countries have signed on and indicated that this is the direction they intend to move in. The “Guidelines for Secure AI System Development” offer a look at how much of the world plans to address risks and threats across design, development, deployment and ongoing operation.

A preview of things to come from US-UK AI security guidelines

AI regulation is clearly forthcoming, though each country is moving at its own pace. While the AI security guidelines are not binding, they do reveal something about what new regulations may contain. Even though existing AI tools are still in their infancy, they are already exhibiting problems that organizations will be grappling with for a long time to come: “hallucinations” of fabricated information that looks real, dangerous inaccuracies stated with confidence, bias, the security and storage of personal information, and weaponization by hackers, among other elements the guidelines cover.

The new AI security guidelines also mention supply chain security repeatedly. However, they are not a comprehensive source: some important elements (open source software and SaaS evaluation, and insider threats, among them) are covered thinly or not at all. While the guidelines delineate areas where regulators will almost certainly act and some of the rules they are likely to adopt, individual countries will still have work to do (and possibly contentious political arguments) over the details and penalty amounts.

AI security guidelines build on existing CISA and NCSC guidance

One thing that is clear from the AI security guidelines is that signatory nations intend to hold AI developers responsible for their products from “cradle to grave,” or from initial design principles to ongoing security once products are in the wild.

The guidelines are broken into four sections along this general timeline, and secure design principles are the first thing they address. Much of this section covers how to select the right model while weighing the appropriate trade-offs, but the document indicates it will also be an organization’s responsibility to evaluate any external libraries it incorporates and the security posture of any external providers it selects.
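The guidelines do not prescribe tooling for this kind of dependency vetting, but a minimal sketch of what such a check might look like in practice follows, querying the public OSV.dev vulnerability database for known advisories against pinned package versions. The endpoint and request schema follow OSV’s documented API; the pinned package list and the helper name are hypothetical illustrations, not anything the guidelines specify.

```python
# Minimal sketch: check pinned Python dependencies against the public
# OSV.dev vulnerability database before incorporating them into a build.
# (Illustrative only; the guidelines do not mandate any particular tool.)
import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV.dev endpoint


def known_vulnerabilities(package: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return any known advisories for a pinned dependency version."""
    response = requests.post(
        OSV_QUERY_URL,
        json={"package": {"name": package, "ecosystem": ecosystem}, "version": version},
        timeout=10,
    )
    response.raise_for_status()
    # OSV returns {"vulns": [...]} when advisories exist, or {} when none do.
    return response.json().get("vulns", [])


# Hypothetical pinned requirements list audited before a release.
pinned = [("requests", "2.19.0"), ("numpy", "1.26.4")]
for name, version in pinned:
    advisories = known_vulnerabilities(name, version)
    if advisories:
        print(f"{name}=={version}: {len(advisories)} known advisories")
```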

The theme of vendor security continues into the section covering secure development. The existing NCSC Supply Chain Guidance contains strong indications of what individual responsibilities will likely be under the law, but it appears organizations will also be asked to track and account for the “technical debt” they accrue by prioritizing short-term functionality over long-term stability and security.

For deployment, organizations will be asked to evaluate whether each release of a new tool is responsible and to have an incident response plan in place that anticipates potential problems. For ongoing operation and maintenance, organizations can expect substantial monitoring obligations and may be asked to participate in information-sharing programs to address shared issues that develop in AI models.
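What that monitoring might look like is left open by the guidelines, but one common building block is an auditable record of every model interaction that downstream tooling can review for drift, abuse, or incidents. The sketch below is a hypothetical illustration of that idea, assuming a made-up log_interaction helper and a JSON-lines audit file; nothing here comes from the guidelines themselves.

```python
# Minimal sketch (hypothetical): structured logging of model interactions
# so operators can monitor deployed systems and feed incident response.
import json
import logging
import time
import uuid

audit_log = logging.getLogger("model_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("model_audit.jsonl"))


def log_interaction(model_id: str, prompt: str, response: str, flagged: bool) -> None:
    """Append one auditable JSON record per model call; monitoring jobs
    can later alert on spikes in flagged interactions."""
    audit_log.info(json.dumps({
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,
        "prompt": prompt,
        "response": response,
        "flagged": flagged,  # e.g., tripped a content or anomaly filter
    }))


# Example call after serving a model response:
log_interaction("assistant-v1", "What is your refund policy?",
                "Refunds are available within 30 days.", flagged=False)
```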
