The first days of the second Trump administration have been characterized by the rapid issuance of executive orders. But the new president has also found time to repeal some of those issued by the Biden administration, such as its 2023 order on AI risks.
Executive Order 14110, issued on Oct. 30, 2023, applied to the country’s leading AI developers, requiring them to share the results of internal safety tests with the federal government and to keep it apprised of any AI risks bearing on national security, the country’s economy, or general public safety and health.
Trump administration takes developer-friendly position on AI risks
As the appearance of figures like Elon Musk, Jeff Bezos and Mark Zuckerberg at his events indicates, Trump has been forging an alliance with “big tech” and has repeatedly signalled a permissive approach to AI development. The balance of AI risks vs. innovation was thrown back into the headlines recently when Chinese company DeepSeek debuted its R1 model, which seemingly came out of nowhere to contend directly with the best available generative AI models.
However, projected AI risks remain far from contained. The concerns first raised in late 2022 with the emergence of ChatGPT have only grown with time. And it remains difficult to hold public discussions about this vital balance between progress and danger while the models are developed behind closed doors, with outsiders still knowing little about how they actually work.
While Trump did rescind the Biden order on AI risks, the administration does not appear to be ignoring the issue. The repeal was accompanied by a new executive order directing the creation of an Artificial Intelligence Action Plan within 180 days.
AI risks, continuing limits on functionality contribute to public “trust gap”
The Trump administration sees loosening AI regulations as an economic boon and a vital requirement for outcompeting rivals, chiefly China. However, Trump has also previously signalled that he wants the federal government’s own use and deployment of AI rolled out cautiously: an order from his prior administration, issued just weeks before he left office, required federal agencies to consider privacy and citizen rights in their use of AI, and it served as a building block for later Biden orders.
While political and tech leaders want to go all-out in AI development, they have to account for public sentiment that is not particularly pro-AI. A significant “trust gap” remains due to an assortment of factors: underperformance by existing models, lack of transparency about the storage and use of personal information, hallucinations, fears of weaponization, and so on. These AI risks directly threaten some of the technology’s greatest possibilities, such as its use in diagnosing medical conditions and tailoring treatments to individual patients.
While the Trump administration has clearly signalled friendliness toward AI development, the picture of a united front between it and the tech industry may not be accurate. Splinters are already showing among the major players: Elon Musk has broken ranks to criticize the “Stargate” proposal, which draws together OpenAI, Oracle and SoftBank to position the US as the dominant force in AI development by the end of this decade. Though the direct cause is almost certainly longtime rancor between Musk and OpenAI CEO Sam Altman, the SpaceX and Tesla mogul has also been sounding alarm bells about AI risks for some years now.