State-sponsored hacking groups see the same potential in OpenAI tools that ransomware gangs and scammers do, but a new report from Microsoft says that thus far they are deriving only “limited” benefit.
That does not mean the threat should be written off; at this point attackers appear to be getting more out of available AI tools overall than defenders are. But the state-sponsored hacking groups seem to be using ChatGPT and similar LLMs mostly for basic scripting and translation tasks at present, rather than for advanced malware development or penetration purposes.
LLMs mostly provide convenience services to state-sponsored hacking groups
The new report is a joint effort from Microsoft and OpenAI, the latter of which says it bans the accounts of state-sponsored hacking groups upon discovery. The report also says that these groups are getting limited utility from OpenAI tools, but that most of the major players have nevertheless been using them.
The most important takeaway is that these groups are not really using LLMs to develop malware, at least not yet. The big fear with AI tools is the ability to quickly generate custom malware in endless iterations that slip past automated defenses that rely on recognizing particular signatures or programming traits. At least one group appears to be dabbling with snippets of generated malware code, but for the most part these groups are using OpenAI tools to create basic scripts for tasks performed after a break-in, such as searching for particular types of files.
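To give a sense of how rudimentary these post-intrusion scripts are, the sketch below shows the kind of file-search helper the report describes attackers asking an LLM to write. It is a purely illustrative example, not code from any observed tool; the file extensions and function name are invented for this sketch.

```python
import os

# Hypothetical example of the trivial "find interesting files" task the
# report describes. The extension list here is invented for illustration.
TARGET_EXTENSIONS = {".docx", ".pdf", ".xlsx"}

def find_files(root):
    """Return paths under `root` whose extension is in TARGET_EXTENSIONS."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in TARGET_EXTENSIONS:
                matches.append(os.path.join(dirpath, name))
    return matches

if __name__ == "__main__":
    for path in find_files("."):
        print(path)
```

A few lines of boilerplate any introductory programming tutorial covers, which is the report's point: the LLM is saving attackers typing time, not granting them new capabilities.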
Instead, OpenAI tools seem to be most helpful for crafting phishing messages and social engineering communications in other languages, a task for which for-profit criminals also heavily use LLMs. State-sponsored hacking groups from China, Russia, Iran and North Korea have all been observed using ChatGPT for this purpose.
OpenAI tools popular with hackers, but not yet a game-changer
The report names teams from Russia and China that are among the most active and dangerous of the state-sponsored hacking groups. Russia’s group, which became best known in the mainstream media as “Fancy Bear” around the 2016 election season, is bringing OpenAI tools to bear on its targets in Ukraine as part of Russia’s continuing invasion. China actually has two separate groups that have been spotted using these tools, one of which (Charcoal Typhoon, also tracked as RedHotel) has been among its most active in recent months. In all of these cases the tools are used for reconnaissance against targets, and for scripting that helps hackers find items of interest and cover their tracks once they break through a network’s defenses.
North Korea and Iran also have state-sponsored hacking groups exploring the possibilities of OpenAI tools. In North Korea’s case, the interest seems to be as much about polishing communications in other languages as about direct hacking. Thus far this has been used to target foreign policy experts for intelligence purposes, but it is likely the country’s financially-minded teams have explored LLMs as well. Iran is also showing a strong interest in using ChatGPT and other LLMs to improve its phishing messages.