How to Protect Your Company in a ChatGPT World

With the potential to be as game-changing as the internet, smartphones and cloud computing, the emergence of generative AI tools like ChatGPT and GitHub Copilot will open up new possibilities and challenges for companies. The swift and sweeping advancement of AI has raised the stakes for those looking to use this technology responsibly while also preparing for its adoption by cybercriminals. Because AI can write code that helps identify and exploit vulnerable systems, generate hyper-personalized phishing emails and even mimic executives’ voices to authorize fraudulent transactions, organizations must reevaluate their risk calculus around AI and consider both defensive and offensive strategies.

Here are three key strategies that IT and security executives should consider when evaluating their cybersecurity posture in the age of AI.

1. Fight AI With AI

Some people fear the worst with AI and imagine a future where an all-knowing thinking machine becomes a superweapon threatening humanity in an “AI arms race.” That vision is hyperbolic, and the reality of AI is much less ominous.

Bad actors will undoubtedly use AI tools for nefarious purposes. But existing AI tools are generally limited to basic coding, and they have safeguards in place to prevent them from writing truly malicious code.

On the bright side, AI can augment the skills of cybersecurity defense teams, which matters in a field facing a shortage of skilled professionals. With AI tools, entry-level analysts can get assistance with routine duties and security engineers can extend their coding and scripting capabilities.
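
To make this concrete, here is a minimal sketch of AI-assisted alert triage, assuming the OpenAI Python client and a hypothetical triage_alert helper; any real deployment would route alerts through approved, access-controlled tooling rather than ad hoc scripts.

```python
# A minimal sketch of AI-assisted alert triage, assuming the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY environment variable.
# The model name and the triage_alert helper are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_alert(raw_alert: str) -> str:
    """Summarize a raw security alert and suggest next steps for a junior analyst."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model your organization approves
        messages=[
            {"role": "system",
             "content": ("You are a SOC assistant. Summarize the alert, rate its "
                         "likely severity (low/medium/high) and suggest next steps.")},
            {"role": "user", "content": raw_alert},
        ],
    )
    return response.choices[0].message.content
```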

The key to success is investing in AI tools and training to up-level expertise rather than matching every offensive AI threat with an AI countermeasure.

2. Safeguard Your Email

Most cyberattacks begin in our inboxes. Bad actors send fraudulent emails, leveraging phishing and social engineering tactics to harvest credentials that will let them into an organization. Recent advancements in AI will make these emails increasingly sophisticated and realistic, and integrating AI-powered chatbots into social engineering toolkits will broaden the scope and reach of these attacks.

Cybersecurity professionals must acknowledge the potential for AI-powered phishing and social engineering attacks and prepare users to detect and respond to them. That means continuing to train users to identify phishing attempts, providing a platform to rapidly report suspicious activity and enlisting their assistance in the overall cyber defense strategy.

However, human error is inevitable, and we must also protect users with technical defenses. Unfortunately, the basic filtering built into leading email services is often not enough. Companies should seek out advanced email security tools that comprehensively block attacks across different vectors, even those coming from trusted senders or domains.
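
As one illustration of a layered technical defense, the sketch below uses only Python’s standard library to flag inbound messages whose Authentication-Results header does not show passing SPF, DKIM and DMARC checks; real email security gateways perform far deeper analysis than this.

```python
# A minimal sketch that flags mail failing basic sender-authentication checks.
# Illustrative only: production email security goes well beyond header parsing.
from email import message_from_string

def auth_failures(raw_message: str) -> list[str]:
    """Return the authentication mechanisms that did not report a pass."""
    msg = message_from_string(raw_message)
    results = msg.get("Authentication-Results", "").lower()
    # The header contains fragments like "spf=pass", "dkim=fail", "dmarc=none".
    return [m for m in ("spf", "dkim", "dmarc") if f"{m}=pass" not in results]

# Usage: tag or quarantine any message where auth_failures(...) is non-empty.
```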

3. Defend the Data

While phishing and credential harvesting are often the first steps in any attack, they are not the whole picture. When considering the risks posed by AI to company data and applications, it is important to acknowledge the multi-faceted nature of potential attacks.

Organizations should move beyond protecting networks with a traditional castle-and-moat perimeter and focus instead on where data lives and how users and applications access it. For many companies, this means adopting a zero-trust architecture with a secure access service edge (SASE) solution fortified with phishing-resistant multifactor authentication (MFA).
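
As a small illustration of the zero-trust principle of authenticating every request rather than trusting network location, here is a sketch assuming the PyJWT library and an identity provider that issues signed tokens; PUBLIC_KEY and EXPECTED_AUDIENCE are placeholders.

```python
# A minimal zero-trust sketch: validate an identity token on every request,
# assuming the PyJWT library (pip install pyjwt). PUBLIC_KEY and
# EXPECTED_AUDIENCE are placeholders for your identity provider's values.
import jwt

PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"
EXPECTED_AUDIENCE = "https://internal-app.example.com"

def authorize_request(bearer_token: str) -> dict:
    """Reject any request without a valid, unexpired token minted for this app."""
    # Raises jwt.InvalidTokenError on a bad signature, expiry or wrong audience.
    return jwt.decode(
        bearer_token,
        PUBLIC_KEY,
        algorithms=["RS256"],        # never accept unexpected algorithms
        audience=EXPECTED_AUDIENCE,  # token must be issued for this application
    )
```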

Internet-facing applications and APIs are vulnerable to a range of threats, including automated attacks carried out by bots. These apps should have appropriate protections like encryption, a web application firewall (WAF), input validation and rate-limiting to mitigate today’s bot traffic and tomorrow’s AI-driven attacks.
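
Rate-limiting, one of the mitigations named above, is often implemented as a token bucket; the minimal sketch below shows the idea, though in practice a WAF or API gateway would typically enforce it.

```python
# A minimal token-bucket rate limiter, kept per client IP or API key.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if the request fits within the rate limit."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should reject the request, e.g. with HTTP 429
```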

As companies embrace AI tools, they must also ensure their users are not misusing them or leaking company data. Some companies, like JPMorgan Chase, have decided to restrict employees from using ChatGPT; at a minimum, companies should implement acceptable use policies, technical controls and data loss prevention (DLP) measures to keep sensitive data from flowing into external AI services.
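
A data loss prevention check can be as simple as scanning outbound prompts before they reach an external AI service. The sketch below uses illustrative placeholder patterns; commercial DLP products use far richer detection.

```python
# A minimal DLP sketch: scan an outbound prompt for sensitive patterns
# before it leaves the organization. Patterns are illustrative placeholders.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

# Usage: block or redact the prompt if scan_prompt(...) returns anything.
```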

The Path Forward With AI

It is essential for cybersecurity professionals to remain curious and experiment with AI tools to better understand their potential uses, both good and malicious. To protect against attacks, organizations should look for ways to amplify their own capabilities, whether by using ChatGPT or by exploring cybersecurity tools and platforms that have access to extensive training data and leverage threat intelligence across various defense dimensions.

The bottom line is that businesses must implement comprehensive security measures that evolve with the changing world. While AI has emerged as a potential threat, the technology can deliver powerful benefits as well; we just need to know how to use it safely.

John Engates

John Engates joined Cloudflare in September of 2021 as Field Chief Technology Officer and is responsible for leading the Field CTO organization globally. Prior to Cloudflare, John was Client CTO at NTT Global Networks and Global CTO at Rackspace Technology, Inc. Earlier in his career, John helped launch one of the first Internet service providers in his hometown of San Antonio, Texas. John is a graduate of the University of Texas at San Antonio and lives in Texas with his wife and two daughters. He is passionate about technology and enjoys mountain biking, snowboarding, and spending time traveling with his family.
