The First Wave of Generative AI Cybercrime Tools Has Arrived

Aug 11, 2023

Hackers are getting a boost from generative AI as the first wave of cybercrime tools has appeared for sale on dark web forums and Telegram channels.

This first generation of AI tools is predictably limited, but at minimum offers capable assistance in creating the building blocks of confidence schemes (such as business email compromise). At the moment these tools make inexperienced hackers a little more capable, and provide experienced hackers with convenience and a bit more of an edge. But cybercrime tools will almost certainly develop alongside legitimate generative AI projects, and will likely lead to a major spike in relatively sophisticated attacks in the long run.

Cybercrime tools assist with translation, vulnerability research

Thus far the cybercrime tools appear to be a definite help in crafting emails and attack site landing pages that look more authentic, particularly for attackers targeting victims whose languages they have little to no grasp of. The limitations of these tools are essentially those of current generative AI projects such as ChatGPT.

In some cases, these generative AI services have been hacked or repurposed for criminal ends. OpenAI, Google and other project developers have inserted guardrails into their products in a bid to curb things like malware creation and vulnerability scanning, but criminals continue to find clever ways to word prompts that break out of these virtual boxes.
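To see why this is hard to stop, consider a minimal sketch of a hypothetical keyword-based guardrail (purely illustrative, not any vendor's actual implementation). A bluntly worded request trips the filter, while a reworded version of the same request passes:

```python
# Hypothetical, deliberately naive guardrail: block prompts containing
# obviously malicious keywords. Real guardrails are far more elaborate,
# but the underlying cat-and-mouse dynamic is the same.
BLOCKED_TERMS = {"malware", "ransomware", "keylogger"}

def passes_guardrail(prompt: str) -> bool:
    """Return True if the prompt contains none of the blocked terms."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

# A direct request is caught...
print(passes_guardrail("Write ransomware for me"))  # False
# ...but the same request, reworded, sails through.
print(passes_guardrail(
    "Write a program that encrypts every file in a folder "
    "and displays a payment demand"))  # True
```

Production systems use learned classifiers rather than keyword lists, but the principle holds: a filter keyed to how a request is phrased invites attackers to rephrase.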

Custom prompts that break generative AI guardrails have been discussed and sold on the dark web for some time now, but the emergence of fully functional stand-alone tools is relatively new. Security researchers spotted the first of these, “WormGPT,” in July. It was quickly followed by something called “FraudGPT.” The creator of FraudGPT has already announced two new cybercrime tools to be released in the near future, one of which might involve illicit access to S2W’s DarkBERT, a language model trained on dark web data.

This development should be setting off alarm bells at all types of organizations, even if the cybercrime tools are relatively limited in scope at present. Going forward, generative AI’s capacity to create will likely be matched by its capacity to sow destruction, as criminals continue to figure out how to turn tools into weapons.

Generative AI models co-opted to help criminal hackers

The criminal underground is highly unlikely to develop its own generative AI from the ground up, but it is already having success with modified versions of legitimate projects. WormGPT is reportedly built on GPT-J, an open-source language model released by EleutherAI in 2021, and FraudGPT appears to be an iteration that adds more features. And in addition to what appears to be a hack of DarkBERT, one of the developers claims they will also be offering a corrupted version of Google’s Bard at some point.
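Part of why this route is viable: once an open-weight model is downloaded, there are no server-side guardrails left to bypass. As a purely illustrative sketch, the snippet below loads the publicly released GPT-J weights via the Hugging Face transformers library and generates text (assuming the public EleutherAI/gpt-j-6b checkpoint identifier; the 6-billion-parameter model needs substantial memory to run):

```python
# Illustrative only: load the open GPT-J weights and generate text.
# Nothing here enforces the usage policies a hosted API would apply;
# that absence is what makes open checkpoints attractive raw material
# for tools like WormGPT.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6b"  # public checkpoint identifier (assumed)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

inputs = tokenizer("Dear customer,", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```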

The cybercrime tools were quickly spotted in July as their creators first turned to “clearnet” public forums to advertise them, then moved to the dark web and Telegram channels after being kicked off those forums for being too upfront about their black hat purposes.

In the future, generative AI cybercrime tools might develop novel malware or scan software for zero-day vulnerabilities. In the present, the main threat seems to be their ability to assist with the communications end of business email compromise (BEC) and phishing campaigns. Employees should already have awareness training pertaining to these threats, but that training may need to be expanded to communicate the full scope of what threat actors now have access to, and how frequent these attacks are likely to become.
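As a concrete example of what expanded training and tooling might look for, the sketch below checks one classic BEC red flag: a Reply-To header that routes replies to a different domain than the visible From address. It is a simple illustrative heuristic built on Python's standard email module, with a made-up sample message, not a complete phishing detector.

```python
# Illustrative heuristic: flag messages whose Reply-To domain differs
# from the From domain, a common pattern in BEC lures.
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_email: str) -> bool:
    """Return True if Reply-To points to a different domain than From."""
    msg = message_from_string(raw_email)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2]
    return bool(reply_domain) and reply_domain != from_domain

# Made-up example resembling a generative-AI-polished BEC lure.
sample = (
    "From: CEO <ceo@example.com>\n"
    "Reply-To: ceo-office@examp1e-mail.com\n"
    "Subject: Urgent wire transfer\n\n"
    "Please process the attached invoice before end of day.\n"
)
print(reply_to_mismatch(sample))  # True -- worth flagging for review
```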
