The broad direction of AI cybersecurity was set by the Biden administration’s executive order last month, and CISA was tasked with fleshing out most of the details. The path forward is now somewhat clearer with a new “Roadmap for Artificial Intelligence” issued by the agency, which at least outlines the broad strokes of its plans for securing critical infrastructure and government agencies against AI threats.
Those threats are both internal and external: the roadmap notes that the government must develop its own AI tools with caution to avoid bias or trampling on civil rights. But much of the focus is on critical infrastructure, which is besieged around the world as ransomware gangs treat it as a lucrative target and lose their fear of reprisal for causing real-world harm.
Critical infrastructure protection may come down to the ability to staff AI experts
The AI cybersecurity plan clarifies the concrete steps that federal agencies intend to take in the coming months. Critical infrastructure defense appears to be very high on the priority list, if not the top item.
Critical infrastructure is far from the only concern, however. The strategy flags potential bias and rights violations in AI tools, even as it notes the urgent need for defensive measures against the AI capabilities threat actors already have. The success of all of this will likely hinge on the country’s ability to fill expert AI cybersecurity positions, and the administration is already forming plans to attract foreign talent for these roles. The plan also names internal training and promotion within federal agencies as a key component of the ongoing strategy.
AI cybersecurity plan may also establish software bill of materials requirements
Among the “lines of effort” that make up the more concrete details of the plan is the announcement of the NIST AI Risk Management Framework (RMF), along with the creation of an AI Use Case Inventory for federal deployment of new AI systems.
The creation of a Software Bill of Materials for AI systems was also previously mentioned in the executive order, and CISA’s paper reiterates that it is forthcoming (though at this point it remains in an evaluation phase, with very little detail available). In terms of near-term developments for technology manufacturers, CISA is looking to add AI material to its existing Secure-by-Design guidance for software.
The government will also collaborate with the private sector on AI cybersecurity by way of the Information Technology Sector Coordinating Council’s AI Working Group, and some of the major players in the cybersecurity and critical infrastructure industries will also be involved in the Joint Cyber Defense Collaborative. The latter is establishing a public webpage meant to catalog AI risks and share insights.
CISA wraps up the paper by noting that it will need to expand its own AI cybersecurity team, though on that front it will likely be competing with some of the very critical infrastructure companies it seeks to protect. Given a very tight job market for high-level IT specialists, particularly in AI, federal agencies will likely lean on training and promoting from within.