When ChatGPT Goes Phishing

ChatGPT has become a powerful tool for security professionals seeking to enrich their work. However, its widespread use has raised concerns about the potential for bad actors to misuse the technology. Experts are worried that ChatGPT’s ability to source recent data about an organization could make social engineering and phishing attacks more effective than ever before.

Potential Dangers of ChatGPT in the Hands of Cybercriminals

1. Developing Highly Targeted Campaigns

In today’s world, cybercriminals are constantly developing new tactics to deceive unsuspecting individuals and organizations. Social engineering and phishing attacks are becoming increasingly sophisticated, and the availability of data on the internet makes it easier for attackers to craft convincing and targeted campaigns.

With the help of tools like ChatGPT, attackers can quickly gather recent data about an organization and use it to craft a highly efficient, targeted campaign. For example, an attacker could ask ChatGPT to summarize all public disclosures or press releases from a corporation in the last 30 days and then use that information to target specific divisions of the organization. Through this approach, an attacker could create a campaign that uses current organization-specific buzzwords, project names or acronyms to appear more legitimate.

In addition to ChatGPT, cybercriminals can leverage publicly available information on the web and social media to gather personal details about their targets. That information can then be fed into the chatbot to help craft campaigns that more convincingly lure those targets.

ChatGPT's availability in more than 90 languages is another major cause for concern. This localization opens the door for bad actors to launch the same campaign against multiple targets, including global corporations.

2. Breaking the Mold on Security Awareness Training

Most canned security awareness training solutions constantly hammer home the standard tells of a potential phishing email: spelling mistakes, poor grammar and erroneous buzzwords. Grammar and spelling errors have been a common cybersecurity tip-off for some time. With the advancement of AI, however, cybercriminals can now use language processing tools like ChatGPT to eliminate the spelling and grammatical errors that would otherwise give away the fraudulent nature of a message.

ChatGPT's ability to produce phishing emails with company-specific buzzwords and clean grammar and spelling makes it even more challenging for users to distinguish genuine messages from fake ones. The polish of these AI-assisted campaigns creates a sense of trust, leading recipients to believe a message comes from a legitimate source. Cybercriminals can now more easily lure users into clicking malicious links, downloading malware or revealing sensitive information.

Changing the Approach to Security Monitoring and Protection

To stay ahead of constantly evolving threats, security practitioners should provide social engineering training for employees. It's crucial that this training is updated at least annually, and ideally every six months.

Organizations should also leverage AI-assisted anti-spam and phishing detection solutions that monitor URLs. The good news is that most dedicated anti-spam solutions already incorporate AI, which enables them to counteract AI-enabled attacks. These tools monitor and analyze behavior and relationship interactions to identify suspicious activity, such as financial requests or human resources impersonations, so that the content is flagged and blocked no matter the source.
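
To make that relationship-and-content analysis concrete, here is a minimal sketch of this kind of flagging logic in Python. The message fields, keyword lists and known-senders check are illustrative assumptions, not any particular vendor's API.

```python
# Minimal sketch of behavior-based email flagging. Field names and
# keyword lists are illustrative assumptions, not a real vendor API.
from dataclasses import dataclass

FINANCIAL_KEYWORDS = {"wire transfer", "gift card", "payment update", "invoice"}
HR_KEYWORDS = {"payroll", "direct deposit", "w-2", "benefits enrollment"}

@dataclass
class Message:
    sender: str
    subject: str
    body: str

def flag_suspicious(msg: Message, known_senders: set[str]) -> list[str]:
    """Return the reasons this message should be held for review."""
    reasons = []
    text = f"{msg.subject} {msg.body}".lower()
    if any(kw in text for kw in FINANCIAL_KEYWORDS):
        reasons.append("financial request language")
    if any(kw in text for kw in HR_KEYWORDS):
        reasons.append("possible HR impersonation")
    # A sensitive request from a sender with no prior relationship is
    # higher risk even when spelling and grammar are flawless.
    if reasons and msg.sender.lower() not in known_senders:
        reasons.append("no prior relationship with sender")
    return reasons

if __name__ == "__main__":
    msg = Message(
        sender="ceo@examp1e-corp.com",  # look-alike domain
        subject="Urgent wire transfer",
        body="Please process this payment update before 5 p.m.",
    )
    print(flag_suspicious(msg, known_senders={"ceo@example-corp.com"}))
```

The point is the behavioral signal: a well-written message is no longer a trust indicator, so the sender relationship and the nature of the request have to carry more weight.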

Other proactive measures include always requiring multi-factor authentication (MFA) and monitoring accounts to catch any changes to MFA settings. Implement solutions that monitor at the individual user level to detect whenever a significant change is made to MFA settings. For example, if your MFA still uses text messages to verify identity and a user changes their phone number, that activity should raise flags. Also, some MFA solutions allow hundreds of prompt attempts per hour by default. Double-check these settings; there should be a limit on how many MFA prompts a user can be sent.
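
As a rough illustration of user-level MFA monitoring, the sketch below scans identity-provider audit events for sensitive MFA changes and for prompt flooding. The event names, field layout and per-hour cap are assumptions for the example; real IdP log schemas differ.

```python
# Minimal sketch of flagging MFA setting changes and MFA prompt
# flooding from audit events. Event names and fields are assumed
# for illustration; real identity providers use different schemas.
from collections import Counter

SENSITIVE_MFA_EVENTS = {
    "mfa.phone_number_changed",
    "mfa.factor_removed",
    "mfa.factor_enrolled",
}
MAX_PROMPTS_PER_HOUR = 5  # illustrative cap on MFA prompts per user

def review_events(events: list[dict]) -> list[str]:
    """Return alerts for MFA changes and possible prompt fatigue attacks."""
    alerts = []
    prompts = Counter()
    for event in events:
        if event["type"] in SENSITIVE_MFA_EVENTS:
            alerts.append(f"{event['user']}: {event['type']} from {event['ip']}")
        elif event["type"] == "mfa.prompt_sent":
            prompts[event["user"]] += 1
    for user, count in prompts.items():
        if count > MAX_PROMPTS_PER_HOUR:
            alerts.append(f"{user}: {count} MFA prompts in one hour "
                          "(possible MFA fatigue attack)")
    return alerts

if __name__ == "__main__":
    events = [{"type": "mfa.phone_number_changed", "user": "alice",
               "ip": "203.0.113.7"}]
    events += [{"type": "mfa.prompt_sent", "user": "bob",
                "ip": "198.51.100.2"}] * 6
    for alert in review_events(events):
        print(alert)
```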

Finally, as many successful phishing attacks occur after hours, it’s important to have 24/7 monitoring. While most of these recommendations are preventative, there is always a chance that a threat actor can get through, so make sure to conduct due diligence and monitor around the clock.
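
Since that recommendation hinges on catching activity outside business hours, here is one last tiny sketch: flagging authentication events outside an assumed business window. The hours and event shape are hypothetical and would need tuning per organization and time zone.

```python
# Minimal sketch of after-hours alerting on authentication events.
# The business window and event fields are illustrative assumptions.
from datetime import datetime

BUSINESS_START, BUSINESS_END = 8, 18  # 8 a.m. to 6 p.m. local time

def is_after_hours(ts: datetime) -> bool:
    """Weekends, or weekday times outside the business window."""
    return ts.weekday() >= 5 or not (BUSINESS_START <= ts.hour < BUSINESS_END)

def triage(event: dict) -> str | None:
    """Return an alert for after-hours activity, else None."""
    ts = datetime.fromisoformat(event["timestamp"])
    if is_after_hours(ts):
        return f"After-hours activity by {event['user']} at {ts.isoformat()}"
    return None

if __name__ == "__main__":
    # 2023-04-15 is a Saturday, well outside the assumed window.
    print(triage({"user": "alice", "timestamp": "2023-04-15T02:30:00"}))
```

In practice this rule belongs in a SIEM or a managed detection service rather than a standalone script, but the logic itself really is this simple.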

Convenience Comes at the Cost of Security

I always say that convenience comes at the cost of security, and it's extremely rare that the two meet.

While AI tools like ChatGPT offer breakthrough functionality with many real benefits, they also create greater security vulnerabilities within organizations. As ever, improving security awareness training for employees, implementing anti-spam solutions and enacting round-the-clock monitoring remain excellent ways to defend against the risks ChatGPT presents.

While it may be convenient to embrace ChatGPT and assume no harm can come from a highly intelligent tool, that scenario is simply not reality. In the face of AI, the best defense is taking the necessary steps toward security resilience.

Jim Broome

Jim Broome is a seasoned IT/IS veteran with more than 20 years of information security experience in both consultative and operational roles. Jim leads DirectDefense, where he is responsible for the day-to-day management of the company, as well as providing guidance and direction for its service offerings. Previously, Jim was a Director with AccuvantLABS, where he managed, developed and performed information security assessments for organizations across multiple industries while also developing and growing a team of consultants in his charge. Prior to AccuvantLABS, Jim was a Principal Security Consultant with Internet Security Systems (ISS) and their X-Force penetration testing team. Jim has also developed and delivered training courses on several security products, including serving as a primary author of the Check Point Software CCSA/CCSE/CCSI training program, as well as creating and delivering numerous client-focused training programs and events.
