Generative AI: Adopted on Both Sides of the Data Security Battle
Generative AI made waves across nearly every industry in 2023, and both business and technology experts believe the technology will become increasingly mainstream as we move into 2024. In cybersecurity, generative AI and large language models (LLMs) have sparked optimism and concern in almost equal measure. On the one hand, security professionals are excited at the prospect of enhancing their threat detection and response capabilities while automating routine and repetitive tasks. On the other hand, attackers are already leveraging generative AI to identify vulnerabilities, evade security measures and accelerate the development of malicious bots to compromise data and disrupt operations.
As technology evolves, the ongoing cat-and-mouse game between attackers and defenders is gaining new elements as well. With generative AI already being used on both sides of the data security battle, it is increasingly critical for organizations to understand both the threat the technology may pose to their data and its potential to help them secure it.
How Cybercriminals Are Using Generative AI
Content creation is one of the most common uses for generative AI, and it didn’t take attackers long to recognize that they could use the technology to craft more convincing phishing emails. Most savvy internet users have long since learned to spot the telltale signs of phishing emails, including spelling mistakes, grammatical errors, strange formatting choices and other indicators. But generative AI is making it easier for attackers to avoid those simple mistakes, making it much more difficult to spot phishing attempts. That can spell real trouble for today’s businesses, as phishing scams are one of the leading ways hackers gain credentials to establish a foothold within the network to enable a larger attack. The easier it is for attackers to trick an employee into giving away their credentials, the greater the risk to the organization — and its data.
Generative AI is particularly good at iterating on existing content, and attackers are capitalizing on that ability to modify malware strains to evade detection by known security solutions. As soon as those solutions learn to recognize the attack signatures associated with a new type of malware, attackers can spin up a fresh, unrecognizable variant. Similarly, AI-powered tools are helping attackers probe the capabilities of existing security tools, especially open-source solutions and off-the-shelf products. This allows attackers to identify potential exposures more quickly and has contributed to a rise in zero-day exploits. Unfortunately, that problem is only likely to worsen as more attackers gain access to generative AI tools, and organizations that want to protect their data will need to track newly disclosed vulnerabilities and move quickly to apply patches as they become available.
Generative AI is also exacerbating the longstanding problem of automated web traffic. An astonishing 30% of all web traffic now comes from “bad bots,” which include a wide range of automated processes that attackers use to carry out attacks and otherwise misuse and abuse online resources. Generative AI is helping to increase the sophistication of those bots, allowing them to more accurately mimic human behavior and making them significantly more difficult to detect. Common bot detection solutions like CAPTCHA and reCAPTCHA will no longer be enough to identify and stop malicious automated traffic, which means attackers will have an easier time using bots to engage in distributed denial of service (DDoS) attacks, business logic attacks (BLAs), and other dangerous and damaging practices that target organizations and their data.
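On the defensive side, teams increasingly supplement challenges like CAPTCHA with behavioral heuristics. As a rough illustration (the function, thresholds, and scoring here are invented for this sketch, not drawn from any product), the snippet below flags sessions whose request timing is suspiciously uniform, a cadence human browsing rarely produces:

```python
import statistics

def timing_bot_score(request_times, min_requests=5):
    """Score a session from 0.0 to 1.0 on how machine-like its cadence is.

    Human browsing tends to produce irregular gaps between requests;
    scripted clients often fire at near-constant intervals.
    All thresholds are illustrative, not tuned values.
    """
    if len(request_times) < min_requests:
        return 0.0  # too little data to judge
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return 1.0  # instantaneous bursts: almost certainly automated
    # Coefficient of variation: low variance relative to the mean
    # means a uniform, machine-like cadence.
    cv = statistics.pstdev(gaps) / mean_gap
    return max(0.0, 1.0 - cv)

# A scripted client hitting the site every 0.5 s scores 1.0;
# a human-like session with irregular gaps scores 0.0.
bot_like = timing_bot_score([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
human_like = timing_bot_score([0.0, 2.1, 2.4, 9.8, 10.2, 31.0])
```

The catch, of course, is that AI-driven bots can randomize their timing to defeat exactly this kind of static check, which is why layered and adaptive detection is becoming essential.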
How Security Teams Are Leveraging Generative AI
Fortunately, generative AI solutions are also allowing security professionals to improve both their productivity and their threat recognition capabilities, helping them rebuff these new tactics and keep their data secure. Security teams are often burdened by repetitive and time-consuming tasks, such as monitoring network traffic and reviewing security logs, and generative AI solutions are already being used to automate many of these activities, freeing security professionals up for more engaging work. This doesn’t just improve security outcomes. It also improves job satisfaction and retention — which is particularly important amid the ongoing cybersecurity skills shortage.
Generative AI can also enhance threat detection and response capabilities, thanks to its ability to quickly identify patterns and anomalies that human operators might overlook. A small anomaly might not trigger a security alert on its own, but if that same behavior is identified in multiple areas throughout the digital environment, it may be indicative of an attack in progress. Generative AI tools are very good at this type of detection, and their ability to monitor the entire digital ecosystem more holistically means security teams can detect a potential breach more quickly. Even better, many AI-driven solutions are also capable of engaging in automated remediation efforts, allowing them to stop an attacker without the need for human intervention. That sort of real-time response can mean the difference between a minor incident and a major data breach.
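The correlation idea above can be made concrete with a toy example. In the sketch below (the event names, sources, and three-source threshold are all hypothetical), a host is escalated only when low-severity anomalies appear across several independent data sources, even though each signal is harmless on its own:

```python
# Hypothetical low-severity signals, each unremarkable in isolation.
# Tuples are (source, host, anomaly); an attack in progress tends to
# touch several sources on the same host within a short window.
events = [
    ("auth_log", "host-7", "off_hours_login"),
    ("net_flow", "host-7", "unusual_egress"),
    ("endpoint", "host-7", "new_scheduled_task"),
    ("auth_log", "host-3", "off_hours_login"),
]

def correlated_hosts(events, min_sources=3):
    """Flag hosts where anomalies span at least min_sources distinct sources."""
    sources_per_host = {}
    for source, host, _anomaly in events:
        sources_per_host.setdefault(host, set()).add(source)
    return [h for h, s in sources_per_host.items() if len(s) >= min_sources]

# host-7 trips three independent sensors and gets escalated;
# host-3's single odd login stays below the correlation threshold.
flagged = correlated_hosts(events)  # → ["host-7"]
```

Production systems obviously weigh far richer signals than this, but the principle is the same: breadth of correlation, not the severity of any single alert, is what surfaces an attack in progress.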
Another way generative AI is enhancing productivity is by reducing false alarms. Today’s organizations use a wide range of security solutions, each with its own parameters and thresholds for what requires operator intervention. Because generative AI systems are constantly learning and adapting, they are increasingly capable of recognizing which alerts require immediate attention and which can be safely ignored or deprioritized. This keeps security professionals focused on real threats to the organization and its data, helping them streamline their workflows and avoid needless, time-consuming investigations. At a time when attackers are leveraging their own generative AI tools to improve the sophistication and efficacy of their attacks, this is increasingly critical.
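As a deliberately simplified stand-in for that adaptive triage (real systems learn continuously rather than applying a fixed rule), the sketch below deprioritizes alert rules that analysts have historically closed as false positives. The rule names and thresholds are invented for illustration:

```python
def triage_priorities(history, fp_threshold=0.9, min_samples=20):
    """Deprioritize alert rules that analysts have overwhelmingly
    dismissed as false positives.

    history maps rule name -> (true_positives, total_verdicts).
    Thresholds are illustrative assumptions, not recommended settings.
    """
    low_priority = set()
    for rule, (true_pos, total) in history.items():
        if total < min_samples:
            continue  # too few analyst verdicts: keep full priority
        fp_rate = 1 - true_pos / total
        if fp_rate >= fp_threshold:
            low_priority.add(rule)
    return low_priority

history = {
    "impossible_travel": (45, 60),   # usually real: keep priority
    "dev_port_scan":     (2, 400),   # 99.5% false positives: deprioritize
    "new_rule":          (0, 5),     # too few samples to judge yet
}
noisy_rules = triage_priorities(history)  # → {"dev_port_scan"}
```

Even this crude feedback loop illustrates the payoff: analyst attention flows toward the alerts that have actually predicted incidents, rather than being spread evenly across every rule that fires.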
The Next Steps for Generative AI
Successfully navigating the impact that generative AI will have on the cybersecurity industry means embracing the benefits of AI while remaining aware of its potential drawbacks and dangers. By prioritizing awareness and education while ensuring that security professionals have the resources they need to combat today’s AI-driven threats, organizations can avoid becoming easy prey for today’s attackers.
Yes, adversaries will continue to leverage generative AI to evade security protocols, identify vulnerabilities and make malicious automated traffic appear more convincingly human. However, organizations will be able to counter those advancements by streamlining workflows, empowering security teams, and improving their threat detection and response capabilities. As the technology grows more advanced — and more mainstream — it is imperative that today’s organizations recognize how generative AI will continue to shift both sides of the data security battlefield.