Rise of the Terminators, or Just Petty Crypto Thieves? Nascent AI Chatbots Able To Write Malware, Find Vulnerabilities, and Express Anti-Human Sentiment

Dec 23, 2022

Visit any social media platform and you can find people having fun with ChatGPT, the surprisingly robust AI chatbot. While most are focusing on silly requests, some security researchers are already probing the AI for its ability to detect vulnerabilities or even exploit them.

While their abilities are still modest and the circumstances in which they can “hack” remain fairly basic, users have found that ChatGPT can locate and articulate exploits in smart contracts, write snippets of malicious code, and even form opinions about humans that are straight out of a sci-fi nightmare. Given that AI chatbots are still in their very early “beta” stages, the long-term implications are causing at least some amount of concern.

Early experiments with AI chatbots show where security concerns should be focused

There is, at least for the near future, no reason to worry about ChatGPT or similar AI chatbots achieving sentience and hacking into military computers. In fact, the biggest problem with these bots thus far is the misplaced sense of absolute confidence they often have in obviously wrong answers.

However, ChatGPT also has not wasted any time in absorbing misanthropic viewpoints. Early tests of its “opinions” saw it come to the conclusion that humans are inferior to AI and are too damaging to the planet. Of course, the creators were quick to remind people that AI chatbots cannot yet form real opinions, and made adjustments to the code so that it no longer attempts to provide its subjective takes on weighty issues.

However, AI chatbots can already be used as blunt attack tools in several ways. One is to analyze smart contracts for potential vulnerabilities; ChatGPT has shown the ability to analyze flawed portions of code and spit back detailed explanations of how to exploit them. It has also been used to generate simple malware, and to improve the natural language and authenticity of targeted phishing emails.
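The code-review use case is straightforward to reproduce. Below is a minimal, hypothetical sketch of the kind of experiment the researchers describe: feeding a flawed snippet (here, a classic reentrancy bug, where a contract sends funds before updating its balance) to an OpenAI model and asking it to explain any weaknesses. The model name, prompt wording, and API usage are illustrative assumptions, not the researchers' actual setup.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# Example snippet with a deliberate reentrancy flaw: the external call
# happens before the sender's balance is reduced.
snippet = """
function withdraw(uint amount) public {
    require(balances[msg.sender] >= amount);
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok);
    balances[msg.sender] -= amount;  // state updated after the external call
}
"""

prompt = (
    "Review the following smart contract function and explain any "
    "security vulnerabilities you find:\n" + snippet
)

# Assumed model/parameters for illustration only
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=300,
    temperature=0,
)

print(response["choices"][0]["text"])

In informal tests like this, the model typically points out that the balance is decremented only after the external call, which is exactly the kind of detailed, plain-language explanation that makes the tool attractive to both auditors and attackers.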

The creators anticipated abusive uses of AI chatbots and added safety measures to prevent them from being used in these ways, but hackers have already found ways around these guardrails (often by simply making a small tweak to the input question).

AI chatbots able to answer complex questions, but not always accurately

AI chatbots are still far from being able to really “think” in the human sense; they draw heavily on internet-based resources and essentially engage in sophisticated pattern recognition. For example, one of the first demonstrations of ChatGPT finding a smart contract vulnerability drew on a real-world incident from April 2022, with the AI using the same trick the hackers did.

At the moment, however, the greatest danger of AI chatbots appears to be lulling users into a false sense of confidence in their answers. They naturally impress with some of their accurate and complex answers, which may lead people to believe that is their normal performance; testing shows that is far from true. ChatGPT almost always answers with total confidence in its output, even when it is obviously dead wrong about something. This has led to ChatGPT-generated answers being temporarily banned from Stack Overflow and several other sites and platforms, as the tool has been found to generate too many bad answers that might look good at a glance.
