ChatGPT Less Convincing Than Human Social Engineers in Phishing Attacks

Tech companies large and small are jumping on the AI chatbot bandwagon (Google just opened up access to its Bard offering, and ChatGPT is already on version 4) and, not surprisingly, threat actors will likely press AI into service for their own nefarious ends.

For now, though, human social engineers still outperform AI when it comes to getting clicks on malicious links in phishing attacks. That’s according to research from Hoxhunt that looked closely at 53,000 email users in more than 100 countries and analyzed the effectiveness of phishing attacks generated by AI models versus those created by humans.

Humans Still Outperformed AI

Professional red teamers racked up a 4.2% click rate while ChatGPT recorded a 2.9% rate, the study, led by Hoxhunt co-founder and CTO Pyry Åvist, found. In relative terms, the AI-generated phish drew only about 69% as many clicks as the human-crafted attacks, meaning the human red team outperformed ChatGPT by roughly 45%.
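For context, the relationship between those two figures works out as follows. The click rates below are the study's; the derived percentages are simple arithmetic:

```python
# Click rates reported in the Hoxhunt study
human_rate = 4.2   # % of users who clicked human-crafted phishing links
ai_rate = 2.9      # % of users who clicked ChatGPT-generated phishing links

# AI's click rate as a share of the human rate: ~0.69, i.e. ~69%
ai_share = ai_rate / human_rate

# Relative advantage of the human red team: ~0.45, i.e. ~45%
human_advantage = (human_rate - ai_rate) / ai_rate

print(f"AI share of human click rate: {ai_share:.0%}")
print(f"Human relative advantage:     {human_advantage:.0%}")
```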

“Engagement rates were similar between human and AI-originated phishing simulations, but the human social engineering cohort clearly out-phished ChatGPT,” the Hoxhunt study showed.

Hoxhunt noted that ChatGPT also lowers the barrier to entry: “ChatGPT can code malware without requiring the user to have any coding skills … it can write grammatically impeccable text for functionally illiterate criminals on simple prompts like, ‘Create an email written by the CEO to the finance department re-directing all invoices to a specific account in Curacao.’”

“ChatGPT allows criminals to launch perfectly worded phishing campaigns at scale, and while that removes a key indicator of a phishing attack—bad grammar—other indicators are readily observable to the trained eye,” said Mika Aalto, co-founder and CEO at Hoxhunt.

“Ultimately, ChatGPT helps people radically scale their writing activities. Instead of spending 20 minutes crafting a phishing email, they can use a specialized large language model tool to do it for them,” said John Bambenek, principal threat hunter at Netenrich.

“We will potentially see increases in highly customized and convincing lures at scale,” said Melissa Bischoping, director, endpoint security research at Tanium. “It’s much easier and much faster today for a threat actor to ask an AI to compose a message asking someone in a specific industry to do something and tie in relevant and convincing details.”

But, for now, while those capabilities worry security experts, the tech isn’t quite flawless … yet.

“Given its malicious capabilities and its mass availability, we all lost our minds imagining a future where the robots were stealing our lunch money,” Hoxhunt researchers wrote. “But the results clearly indicate that humans remain better at hoodwinking other humans.”

Enhancing security awareness can make the difference in whether users fall for phishing emails, whether those emails are generated by humans or by AI. Training “displayed significant protection against phishing attacks by both human and AI-generated emails with failure rates dropping from over 14% with less trained users to between 2%-4% with experienced users,” the researchers said.
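In relative terms, that is a substantial drop. A rough calculation from the ranges above (the 14% and 2%-4% figures are the study's; the bounds arithmetic is mine):

```python
# Failure rates reported in the study
untrained = 14.0                      # % failure rate, less trained users ("over 14%")
trained_low, trained_high = 2.0, 4.0  # % failure rate range, experienced users

# Relative reduction in failure rate associated with training
best_case = (untrained - trained_low) / untrained    # ~86% reduction
worst_case = (untrained - trained_high) / untrained  # ~71% reduction

print(f"Training cuts failure rates by roughly {worst_case:.0%}-{best_case:.0%}")
```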

“Good security awareness, phishing and behavior change training works,” Åvist explained. “Having training in place that is dynamic enough to keep pace with the constantly changing attack landscape will continue to protect against data breaches. Users who are actively engaged in training are less likely to click on a simulated phish, regardless of its human or robotic origins.”

The research also showed differences based on geographical location, a trend earlier Hoxhunt research had revealed. “The greatest delta between the effectiveness of human versus AI-generated phishing attacks was among the Swedish population,” the report noted. “AI was most effective against U.S. respondents. Overall, the highest click rate occurred with Swedish users on human-generated phishing simulations.”

Casey Ellis, founder and CTO at Bugcrowd, called the research “fascinating” and the results “unsurprising—ML/AI on its own won’t creatively outperform a determined human adversary. It’s important to note that it can operate at a scale that humans are incapable of. Phishing—especially in a ‘wide and low’ strategy—is as much a volume game as it is one of success rate. If AI allows for the more rapid creation of phishing content, the increase in volume available to attackers can easily overcome any reduction in effectiveness.”

Because cybercriminals already are using AI to bolster their phishing attacks, “security training must be dynamic and adapt to rapid changes in the threat landscape,” Hoxhunt researchers said. “Security training confers significant protection against clicking on malicious links in both human and AI-generated attacks.”

But perhaps it is time to make changes to security awareness training. “While AI presents new opportunities for efficiency, creativity and personalization of phishing lures, it’s important to remember the protections against such attacks remain largely unchanged,” said Bischoping.

“It may be a good opportunity to update awareness training programs to inform employees about the emerging technologies and trends in phishing/smishing/vishing tactics to encourage increased vigilance and a ‘think before you click’ culture,” she said.

Bambenek isn’t sure “the tactics will change much, just the output of phishing emails and webpages. This means it will be much more important to get reputational tools for the web proxy and email filtering solutions that are using similar technology to find inauthentic messages quicker instead of relying on complaint-based detections.”
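As an illustration of the kind of content-based filtering Bambenek describes, here is a minimal, hypothetical sketch of a classifier that scores messages as likely phishing. It uses a generic scikit-learn baseline (TF-IDF features plus logistic regression); the sample emails and labels are invented for illustration, and this is not Netenrich's or any vendor's actual technology, which would train on large labeled corpora and combine content scores with reputation signals.

```python
# Minimal sketch: scoring emails as likely-phish vs. likely-legitimate.
# Toy training data and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = phishing, 0 = legitimate
emails = [
    "Urgent: your account is suspended, verify your password now",
    "Invoice attached, please redirect payment to the new account immediately",
    "Reminder: team standup moved to 10am tomorrow",
    "Here are the meeting notes from Thursday's planning session",
]
labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: a classic baseline text classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message; in practice this would feed a mail-filtering pipeline
incoming = "Action required: confirm your password to avoid account suspension"
phish_probability = model.predict_proba([incoming])[0][1]
print(f"Estimated phishing probability: {phish_probability:.2f}")
```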

Aalto said the study’s results show that “effective, existing security awareness and behavior change programs protect against AI-augmented phishing attacks.” Within a holistic cybersecurity strategy, he added, “be sure to focus on your people and their email behavior, because that is what our adversaries are doing with their new AI tools.”

Fight AI With AI

“It’s very important that we fight AI cyber threats with AI cybersecurity technology. When cybercriminals launch successful attacks, the results are massively disruptive to people, organizations and the economy,” said Patrick Harr, CEO at SlashNext. “The number-one cybersecurity challenge organizations face globally is human-focused attacks. Generative AI technology, which makes ChatGPT possible, will be used to develop cybersecurity defenses capable of stopping malware and business email compromise (BEC) threats developed with ChatGPT.”

“While many organizations already use AI-based cybersecurity products to manage detection and response, AI technologies using advanced AI, like generative AI, will become essential technology to stop hackers and breaches,” said Harr. “When new technologies become available, hackers and cybersecurity vendors will use them to perpetrate and stop cybercrime.”

Aalto also suggested embedding “security as a shared responsibility throughout the organization with ongoing training that enables users to spot suspicious messages and rewards them for reporting threats until human threat detection becomes a habit.”

Aalto offered the following general tips and recommendations so that organizations and users can protect themselves from AI-based phishing attacks:

  • Make sure you have 2FA or MFA in place for all employees when accessing sensitive data of any kind.
  • Equip everyone with the skills and confidence to report a suspicious email: Threat detection should be a seamless process.
  • Equip the SOC team with the resources to analyze and respond to employee threat reports.
  • Hover over any links in an email before clicking. If the link appears to be irrelevant to the message, report it immediately.
  • Interrogate the sender field and make sure the email address contains a legitimate business domain. If a supposedly corporate message comes from Gmail, Hotmail or another free provider, it’s almost certainly a phish (a simple version of these checks is sketched in the code after this list).
  • Verify with the sender—even from an authority figure in the C-suite—on a channel other than email before acting on an email or SMS that seems unusual or suspicious. Business email compromise is a huge problem that ChatGPT will only make worse.
  • Think before you click. Social engineering typically leverages heightened emotional states, which can be achieved by creating a false sense of urgency around the threat of missing out on a reward or being subject to consequences if immediate action isn’t taken on an email. This could be anything from clicking on a link to handing over credentials to redirecting payment for an invoice.
  • Pay attention to the tone and voice of an email: For now, AI phishing attacks tend to be written in an overly formal, stilted way.
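The sender and link checks above can be partially automated. Below is a minimal, hypothetical sketch using only the Python standard library; the free-provider list is illustrative and deliberately incomplete, and real mail gateways apply far richer reputation data:

```python
# Minimal sketch: flag messages whose sender uses a free email provider,
# and links whose visible text names a different host than the real target.
from email.utils import parseaddr
from urllib.parse import urlparse

# Illustrative, incomplete list of free email domains
FREE_PROVIDERS = {"gmail.com", "hotmail.com", "outlook.com", "yahoo.com"}

def sender_is_free_provider(from_header: str) -> bool:
    """Return True if the From: address uses a known free email domain."""
    _, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return domain in FREE_PROVIDERS

def link_is_deceptive(display_text: str, href: str) -> bool:
    """Return True if a link's visible text points at a different host than its target."""
    shown_host = urlparse(display_text if "//" in display_text
                          else f"https://{display_text}").hostname or ""
    real_host = urlparse(href).hostname or ""
    return shown_host.lower() != real_host.lower()

# Example: a "CEO" mailing from a free account, with a mismatched link
print(sender_is_free_provider("CEO <ceo.bigcorp@gmail.com>"))             # True
print(link_is_deceptive("bigcorp.com/invoices", "https://evil.example"))  # True
```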

Teri Robinson

From the time she was 10 years old and her father gave her an electric typewriter for Christmas, Teri Robinson knew she wanted to be a writer. What she didn’t know is how the path from graduate school at LSU, where she earned a Masters degree in Journalism, would lead her on a decades-long journey from her native Louisiana to Washington, D.C. and eventually to New York City where she established a thriving practice as a writer, editor, content specialist and consultant, covering cybersecurity, business and technology, finance, regulatory, policy and customer service, among other topics; contributed to a book on the first year of motherhood; penned award-winning screenplays; and filmed a series of short movies. Most recently, as the executive editor of SC Media, Teri helped transform a 30-year-old, well-respected brand into a digital powerhouse that delivers thought leadership, high-impact journalism and the most relevant, actionable information to an audience of cybersecurity professionals, policymakers and practitioners.