How Chatbots Will Change Phishing Attacks

It was only a matter of time until threat actors turned to chatbots like ChatGPT to create phishing attacks. In fact, threat actors were already using chatbots for phishing before the world was first captivated by the power of ChatGPT just a couple of months ago.

Back in May 2022, Trustwave announced it had discovered hackers using chatbots the way most of us encounter them: initiating a conversation in an interactive chat box, only on a phishing website. Later in the year, Check Point discovered that ChatGPT was able to write a phishing email with an attached document embedded with malicious code. As chatbots become more commonplace, expect threat actors to find new ways to create even more dangerous phishing emails.

During a recent interview, Josh Shaul, CEO of Allure Security, answered some questions about what the future may hold when it comes to AI and phishing attacks, and what security teams can do about them. And, perhaps not surprisingly, the best way to outsmart chatbot-based phishing is with AI technology.

Security Boulevard: How do you think chatbots like ChatGPT will change or impact phishing attacks?

Josh Shaul: For me and most people I ask, the first signal that a message may be phishing is the often poor word choice, grammar and other signs that the email, SMS, call or whatever you’ve encountered is a scam. Those little reactions inside your head, like, ‘My bank would never say this,’ or ‘This looks like it was written by someone who speaks another language’—that’s usually the big tell. ChatGPT just eliminated that line of defense. Now anyone can easily write perfect phishing emails, with no need for any writing or language skills, and keep coming up with new ones forever, just by asking ChatGPT to do the work for them.

And to make matters worse, ChatGPT won’t just write up your phishing emails. It’ll even build your phishing website for you. We’ve already seen examples of researchers getting ChatGPT to build a fully functional site. Ask the system to give you a site that looks like a particular bank or fashion brand, with some specific instructions about where to put the data people enter into the login form, and off you go.

SB: Why will it become more difficult to detect a phishing attempt with chatbots?

JS: Evading detection is the most critical focus area for phishers and fraudsters. Generative AI gives them a powerful new tool to use against organizations. Simply stated, putting the power of generative AI in the hands of scammers and con artists is probably going to be regrettable.

SB: How should security teams shift their approach to defending from phishing attacks as they become even more difficult to detect?

JS: Organizations have been losing the fight against phishing for 25 years or more. The old ways of doing things—primarily trying to teach people not to be fooled by other people—don’t work. It’s time to accept that.

Some email security vendors do a great job of spotting phishing in the inboxes they protect—but that doesn’t solve the problem either. Attackers simply shift from email to another vector, such as SMS or social media outreach.

Where organizations need to focus going forward is externally: finding the spots on the web, social media, etc., that are impersonating their businesses for evil purposes before people get lured in and turned into victims. And with the volume of new websites, social media profiles, mobile apps, etc. published every day, it’s simply impossible to do this without automation and AI. New technologies that scour the web for impersonations are available now. For those organizations that have adopted this proactive approach, the results are stunning, with massive reductions in account takeover fraud and associated costs.
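As one illustration of the kind of automation Shaul describes, a defender might screen newly observed domain names for labels within a small edit distance of a protected brand. This is a minimal sketch under stated assumptions, not any vendor's actual method: real services combine many more signals (page content, logos, certificates, registration metadata), and the brand list, threshold, and domain feed below are hypothetical.

```python
# Minimal sketch: flag domains whose first label is suspiciously close
# to a protected brand name. Distance threshold is illustrative.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def flag_lookalikes(domains, brands, max_distance=2):
    """Return (domain, brand) pairs where the domain's first label is
    within max_distance edits of a brand name but not identical."""
    flagged = []
    for domain in domains:
        label = domain.split(".")[0].lower()
        for brand in brands:
            d = edit_distance(label, brand.lower())
            if 0 < d <= max_distance:
                flagged.append((domain, brand))
    return flagged

if __name__ == "__main__":
    new_domains = ["examp1ebank.com", "example-bank.net", "weather.org"]
    print(flag_lookalikes(new_domains, ["examplebank"]))
```

A production system would pull candidate domains from certificate-transparency logs or new-registration feeds and treat a flag as a signal to inspect, not as proof of abuse.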


Sue Poremba

Sue Poremba is a freelance writer based in central Pennsylvania. She's been writing about cybersecurity and technology trends since 2008.
