
Your account is being suspended – ChatGPT vs Humans

Social engineering is a method that cybercriminals use to manipulate people into revealing confidential information or performing actions that could harm their company’s cybersecurity. Social engineering techniques are constantly evolving, and cybercriminals are always looking for new ways to exploit people’s trust. One of the most effective tools at their disposal is ChatGPT, an AI-powered chatbot that can mimic human interaction.

Introduction

ChatGPT is an advanced chatbot that uses natural language processing to simulate human conversation. It can be prompted to sound like a real person, making it an effective tool for social engineering. Cybercriminals can use ChatGPT to impersonate a trusted individual, such as a colleague, friend, or family member, to obtain sensitive information. ChatGPT is highly sophisticated and can adapt to new scenarios, making it a formidable tool for cybercriminals.

Is this the “Rise of the Machines”?

[Image: Terminator 3 movie poster]

One of the ways ChatGPT can be used for social engineering is to impersonate customer service, human resources, account protection, and other “services” in phishing emails. You commonly see subject lines such as:

  • Password Expiration Notice
  • HR: Vacation Policy Update
  • Your payment was declined
  • Delayed Shipping Update
  • and more

Cybercriminals can prompt ChatGPT to sound like a legitimate representative of a company and use it to trick people into revealing sensitive information. For instance, ChatGPT could be used to extract credit card information, bank account details, or Social Security numbers from unsuspecting individuals.

So… Who’s Better?

A recent study conducted by Hoxhunt compared the effectiveness of ChatGPT and human operators in carrying out phishing attacks. The study, which sampled more than 53,000 email users across over 100 countries, found that phishing emails written by both humans and ChatGPT successfully tricked employees into clicking links and entering their login credentials. But who was better? Thankfully, it was still the humans, for now 😬, with a click rate of 4.2% compared to ChatGPT’s 2.9%.

[Image: smashing a printer]

Should I be freaking out? 😳

The results of the study have significant implications for organizations and individuals alike. The use of AI-powered chatbots for social engineering attacks is becoming increasingly common, and their effectiveness will only grow as the technology advances. While this may be cause for concern, don’t freak out yet; humans are still the superior beings. But the experiment does highlight a critical countermeasure that every organization should employ: end-user training.

Educating employees about the risks of social engineering attacks and implementing security measures such as two-factor authentication and email filters can greatly reduce the number of phishing emails and phone calls that succeed. Organizations should also perform regular security audits and internal phishing campaigns to gauge employee awareness and caution.
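
To make the “email filters” idea concrete, here is a minimal sketch (not from the original post) of a keyword-based subject-line check that could run alongside a mail gateway’s existing controls. The patterns and function names are illustrative assumptions based on the example subject lines listed earlier, not a production rule set.

  import re

  # Hypothetical subject-line patterns, modeled on the examples above.
  # A real mail gateway would combine many more signals (sender reputation,
  # SPF/DKIM/DMARC results, link analysis), not just subject keywords.
  SUSPICIOUS_SUBJECT_PATTERNS = [
      r"password\s+expiration",
      r"vacation\s+policy\s+update",
      r"payment\s+was\s+declined",
      r"delayed\s+shipping",
      r"account\s+(is\s+being\s+)?suspended",
  ]

  def flag_suspicious_subject(subject: str) -> bool:
      """Return True if the subject line matches a known phishing-style pattern."""
      normalized = subject.lower()
      return any(re.search(pattern, normalized) for pattern in SUSPICIOUS_SUBJECT_PATTERNS)

  if __name__ == "__main__":
      samples = [
          "Password Expiration Notice",
          "HR: Vacation Policy Update",
          "Q3 roadmap review notes",
      ]
      for subject in samples:
          status = "FLAG FOR REVIEW" if flag_suspicious_subject(subject) else "ok"
          print(f"{status:15} {subject}")

A check like this is only one layer; its real value in a training program is routing flagged messages for review so employees see concrete examples of what phishing lures look like.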

Conclusion

The Hoxhunt study highlights the growing threat of AI-powered chatbots in carrying out social engineering attacks and shows that ChatGPT is a formidable tool in the hands of cybercriminals.

It is essential for organizations and individuals to take steps to protect themselves from these attacks, including educating employees, implementing security measures, and being cautious when interacting with unsolicited emails. By doing so, we can help safeguard our personal information and prevent it from falling into the wrong hands.

ModernCyber’s Zero Trust Assessment

If you are looking for help with policies, training, or security tool implementation to protect your environment from phishing threats, we would love to help you!

Schedule some time to speak with one of our cybersecurity experts.
