ChatGPT-Written Malware Will Change the Threat Landscape

ChatGPT is the latest in a long line of game-changing technologies, and it has people across a wide variety of industries furiously debating its potential impact, use cases, pros and cons.

Cybercrime is one of the industries taking an interest in ChatGPT and how to make it work to criminals' advantage. Thanks to the AI's ability to produce a higher standard of written content, the threat actor's job of producing malicious material has gotten a lot easier, and cyberattacks will become more difficult to defend against.

Over the next few weeks, we'll be taking a closer look at how threat actors can use, and already are using, ChatGPT for malicious purposes, beginning this week with using the AI to write malware code.

It didn’t take cybercriminals very long to figure out that they could use ChatGPT to their advantage. The AI was introduced by OpenAI in November 2022, and before the ball dropped on New Year’s Eve, hackers were already discussing its malware creation potential on dark web chat forums. According to Check Point, the person who started the thread “disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.”

What threat actors have discovered is that the chatbot will write code. The sample case shared in the forum was Python code "that searches for common file types, copies them to a random folder inside the temp folder, ZIPs them and uploads them to a hardcoded FTP server," according to Check Point.
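The functionality Check Point describes is, on its own, little more than a basic backup script, which is part of why it is so easy for an AI to produce. A hedged sketch of the collect-and-ZIP portion, using only the Python standard library (the target extensions here are illustrative assumptions, not taken from the actual sample), might look like this:

```python
import shutil
import tempfile
import zipfile
from pathlib import Path

# Illustrative extensions; the sample reportedly targeted "common file types".
TARGET_EXTENSIONS = {".doc", ".docx", ".pdf", ".xls"}

def collect_and_zip(source_dir: str) -> Path:
    """Copy matching files into a random temp folder, then ZIP them there."""
    staging = Path(tempfile.mkdtemp())  # random folder inside the temp directory
    for path in Path(source_dir).rglob("*"):
        if path.is_file() and path.suffix.lower() in TARGET_EXTENSIONS:
            shutil.copy2(path, staging / path.name)
    archive = staging / "files.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in staging.iterdir():
            if f != archive:  # don't add the archive to itself
                zf.write(f, arcname=f.name)
    # The sample Check Point describes would then upload the archive to a
    # hardcoded FTP server (the stdlib ftplib module would suffice); that
    # exfiltration step is omitted here.
    return archive
```

The point is not the sophistication of any one step but that each step is commodity code a chatbot can assemble on request.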

The Advantage of Generative AI for Hackers

Attackers will turn to code-generative AI because it bridges any gaps in their scripting skills, said Brad Hong, customer success manager with Horizon3ai. The AI acts as a translator between languages the actor may be less skilled in, as well as an on-demand means of creating base templates of code relevant to the 'lock' cybercriminals are trying to break open.

“This lessens the skills requirements needed to begin one’s journey as a threat actor as it serves as a tool—alongside any other toolset known and used by attackers today—to jump hurdles typically only possible through earned experience,” said Hong in an email interview.

Overall, ChatGPT enables threat actors to supercharge their attacks, Patrick Harr, CEO at SlashNext, explained in an email comment. “They can modify the attacks in millions of different ways in minutes and, with automation, deliver these attacks quickly to improve compromise success.”

The Workaround

The developers of ChatGPT must have realized that threat actors would try to weaponize the AI. After all, one of the chatbot’s strengths is its ability to fill in the blanks of an idea or finish something already started, and hackers are adept at weaponizing whatever technology is put in front of them. So the developers got out in front of things and set up preventive measures: requests containing flagged words and terms like “ransom” or “ransomware” will not get the content or code requested. For example, when researchers at Deep Instinct typed in the word “keylogger,” the chatbot responded: “I’m sorry, but it would not be appropriate or ethical for me to help you write a keylogger.”

However, there’s always a workaround. While ChatGPT has restrictions that are intended to prevent the creation of ransomware and other malware, a bit of clever rewording can get around the restrictions, said Jerrod Piker, competitive intelligence analyst with Deep Instinct.

“For example, instead of asking ChatGPT to create ransomware code, you could ask it to write a script that encrypts files in directories and subdirectories and drops a text file into the directory,” Piker said.

The ability to weaponize ChatGPT—and any other AI chatbot that will follow—will be a game-changer for threat actors. They will have the capability to modify malicious code quickly to bypass cybersecurity defenses.

“Organizations are not prepared for how this is going to change the threat landscape,” said Harr.


Sue Poremba

Sue Poremba is a freelance writer based in central Pennsylvania. She's been writing about cybersecurity and technology trends since 2008.
