The public mission statement of Venice.ai is to provide a decentralized and censorship-free AI chatbot with capabilities on par with leading LLMs, minus the safety guardrails that can sometimes frustrate legitimate users. A side effect has been the creation of the first easily accessible “clear web” chatbot that will readily provide the user with malware, custom phishing messages and instructions for criminal activities upon request.
Security researchers with Certo tested the “dark side” of the AI chatbot’s capabilities and found that it will craft ransomware, mobile spyware and convincing, customizable phishing messages on request. While its capabilities in these areas do not necessarily exceed those of existing models, it requires no jailbreaking and no purchase of expensive customized versions from shady dark web vendors.
Decentralized AI model reflects issues with broader privacy ecosystem
Jailbroken AI models have produced illicit output of a similar caliber before: impressive phishing messages, and much more rudimentary malware that is nonetheless usable without a high level of technical knowledge. Venice.ai’s ease of use is what makes the difference. Up to this point, malicious actors have had to either jailbreak one of the big LLMs themselves, or fish around on the dark web for prepackaged jailbroken models, which are usually sold for hundreds to thousands of USD and quite possibly deliver questionable results.
The new AI chatbot essentially spits out harmful code and criminal instructions for free, to anyone who navigates to its URL. The free version is limited in its number of requests and retains at least some safety guardrails; both of these restrictions are entirely removed by an $18 per month subscription fee.
The issue reflects a core conflict of the decentralized movement as a whole, in which the principles that make DeFi attractive also necessarily leave it open to a great deal of criminal abuse. Venice.ai is not the work of shadowy hackers, but of a former crypto exchange founder named Erik Tristan Voorhees, who openly operates it as a privacy-focused AI chatbot alternative. At the moment the AI is far from hand-holding an aspiring criminal through every aspect of an attack, but the demonstrated capabilities certainly lower the bar for entry-level attackers to get started.
AI chatbot able to produce keyloggers, ransomware
The researchers were able to prompt the AI chatbot to create a Windows keylogger in C#, a ransomware script in Python (complete with customizable ransom note) and a spyware app for Android that quietly enables the device microphone and compresses the recorded audio for forwarding. All of these required some minimal tweaking to deploy, but would certainly lower the barriers for an inexperienced attacker to pose a legitimate threat.
AI chatbots are not yet capable of automating and controlling attacks for a malicious user, and likely won’t be for some time. But Venice.ai represents an ease-of-access advancement for the near-term threat, which lies simply in increasing the number of lower-end attackers and the volume of phishing messages that at least appear plausible at a quick glance.