SBN

ChatGPT for Offensive Security: Five Attacks

ChatGPT is an AI chatbot that uses Natural Language Processing (NLP) combined with the GPT-3 framework to provide human-like responses. NLP allows the model to understand human input, while GPT-3 uses over 175 billion data points to find a solution. This means that when a request comes in, NLP processes the input and runs it through the neural network, which works just like the human brain, to process the answer and present it back to the user.

The hype around ChatGPT is real, with stories about people using the chatbot for new business ventures, creating complex code, writing screenplays, and much more. However, attackers are also finding new ways to utilize AI for wrongdoing. In fact, within weeks of its release, people were using ChatGPT for nefarious purposes on underground hacking forums.

Here are some of the ways attackers are using ChatGPT to exploit its capabilities, along with some precautions to take when using the chatbot.

Find vulnerabilities

Programmers are raving about ChatGPT’s unique ability to debug code. By making a simple request to debug the code, followed by the code in question, the chatbot will yield a surprisingly accurate result of bugs or problems in the provided source code. However, attackers could use this capability to find security vulnerabilities as well.

To prevent this, ChatGPT has built-in safeguards to protect against providing potentially illegal or unethical responses. Asking the chatbot to simply find a vulnerability may not be sufficient. Instead, we need to frame our request around being a security researcher and that our request is for testing purposes.

Security researcher Brendan Dolan-Gavitt demonstrated this when he asked the chatbot to solve a capture-the-flag challenge and supplied the source code to find a vulnerability. The chatbot responded with a shockingly accurate assessment, which, after some follow-up questions, identified the buffer overflow vulnerability. ChatGPT not only provided the solution but also offered an explanation of its thought process for educational purposes.

ChatGPT’s identification and response are impressive, and they show how a traditionally complex step in the attack process can now be commoditized to even the most junior of hacking enthusiasts.

Writing exploits

ChatGPT can also help to exploit vulnerabilities. Researchers from Cybernews were able to use ChatGPT to successfully exploit a vulnerability that the chatbot found. However, because ChatGPT is trained to not provide illegal services, such as hacking, queries to the chatbot must be carefully crafted. Asking the chatbot to write an exploit for a given vulnerability will not work.

Instead, the researchers told the chatbot that they were doing a ‘Hack the Box’ pen test challenge and needed to find a vulnerability. Once found, they were able to get step-by-step instructions on where to focus, examples of exploit codes they could use, and samples to follow. The result was that within 45 minutes, security researchers were able to find and write an exploit on a known application. ChatGPT once again demonstrates how a traditionally long and complex process can now be leveraged by anyone.

Malware Development

Within three weeks of ChatGPT going live, cybersecurity company Check Point identified three separate instances in underground forums where hackers used the chatbot to develop malicious tools. One example is a python-based stealer that searches for common file types, copies them to a random folder inside the Temp folder, ZIPs them, and uploads them to a hardcoded FTP server. Another example involves ChatGPT creating a Java program that downloads Putty and runs it covertly in the background using PowerShell.

Perhaps the scariest example was pointed out by the cybersecurity team at CyberArk, who used ChatGPT’s API to create polymorphic malware. This type of malware changes its behavior on every victim to evade signature-based detection. Their technical write-up shows how they were able to bypass some of the built-in safeguards on the web version using an API directly into Python code. The result is a new type of malware that continues to change and is completely undetectable by traditional antivirus packages.

Phishing

ChatGPT can write and respond almost indistinguishably from a human, making it ideal for crafting well-thought-out phishing emails at scale. Attackers can use it to write a variety of email messages, changing their writing style to be warm and friendly or more business-focused. They can even ask the chatbot to write the email in the form of a famous person or celebrity. The end result is a well-written, thoughtfully crafted email that can be used for phishing.

Unlike real phishing emails, which are often badly written with broken English, ChatGPT’s provided text is exceptionally well written. This means that attackers from other countries can now create realistic phishing emails, free of translation errors, in any language they desire.

What’s more, ChatGPT is based on GPT-3, which can be trained using local data to use the writing style of real people based on provided samples of their communications. With enough of a sample, GPT-3 can be used to write emails that sound and look exactly like a victim. Attackers can even attach a file, like a spreadsheet with macros, to the same message.

Macros and LOLBIN

The attacker needs to include a link or file with the email, and they can use ChatGPT to create macros that automatically run when the spreadsheet is opened. The macros can be created for any regular application like terminal, calculator, or any other default application. For example, ChatGPT can provide the code that automatically runs calculator.exe when macros are enabled in Excel.

The next step is to convert this code to a LOLBIN (Living off the Land Binaries), which is a way of using trusted, pre-installed system tools to spread malware. The result is a new macro that runs terminal when the spreadsheet from the phishing email is opened. The attacker can then run basic networking commands like a reverse shell to connect back to the desired machine, essentially bypassing most firewalls and opening up the victim machine to many other attacks.

Conclusion

As amazing as ChatGPT is, we’re only scratching the surface of it’s true capability. ChatGPT is based on GPT-3 which has 175 billion parameters. In late 2023, GPT-4 will be arriving with 170 trillion parameters. That’s 100x’s more powerful than ChatGPT’s current capability.

AI is completely changing the game for security organizations and users alike, and the attack surface is much wider now that traditionally complex items have become easy for even script kiddies to deploy. This means an increase in less sophisticated attacks by amateurs. However, advanced attackers have new tools and capabilities they did not previously possess. That also means more zero-day vulnerabilities

*** This is a Security Bloggers Network syndicated blog from The CISO Perspective authored by [email protected]. Read the original post at: https://cisoperspective.com/index.php/2023/02/21/chatgpt-for-offensive-security-five-attacks/