CISOs Brace for LLM-Powered Attacks: Key Strategies to Stay Ahead
Large language models (LLMs) have taken the tech world by storm, emerging as a powerful technology that can transform industries with their ability to analyze complex data and generate human-like text. Yet, as these models proliferate at breakneck speed, they present an increasingly compelling target for malicious actors. For chief information security officers (CISOs), understanding and mitigating the security risks associated with these LLMs is paramount.
Unpacking the Threats Posed by LLMs
LLMs and their associated infrastructure present a range of potential attack surfaces. Most vulnerable are public LLM services, where a web application interface could be exploited to collect sensitive data entered into prompts. Deploying LLMs internally allows organizations to maintain stricter security boundaries, but there are still risks of malicious actors infiltrating the supply chain to steal data or compromise outputs.
While the foundational principles of cybersecurity, like strong data governance and data security policies, are crucial for LLM protection, the unique nature of AI systems demands additional layers of defense. Below are three of the biggest security vulnerabilities created by the rapid emergence of LLMs, followed by strategies to defend against them.
Supply Chain Attacks and Data Poisoning
Just as the SolarWinds breach revealed critical vulnerabilities in traditional software supply chains, LLMs face similar risks of attack through their AI development pipelines. Both cases demonstrate how compromising even one component of a complex system can have far-reaching consequences. For example, by poisoning datasets used to train an AI model, attackers can manipulate that model to produce flawed outputs. The consequences can be dire, such as models producing inaccurate predictions about the business, or chatbots spewing false information to customers. Moreover, if attackers manage to insert backdoors into the model code, they could capture or reveal sensitive company data as it’s being processed by the model.
Public LLM Services vs. Internal LLM Deployment
The decision to use public LLM services or deploy models internally carries distinct security implications. Enterprises have no direct control over the security of public LLM services, so they are entrusting the provider to protect any data their employees may enter. A similar concern applies to third-party services that use generative AI for tasks like transcribing audio and summarizing conference calls. These services can lead to “shadow AI” use, where employees are sharing sensitive data with unsanctioned third-party services. This growing trend of AI democratization is both empowering and concerning — while it puts powerful tools in everyone’s hands, it also creates new security blind spots that organizations need to actively monitor and control.
Undermining AI-Powered Security Solutions
Another concern is the potential for hackers to compromise the security tools that bolster threat detection capabilities. Security tools increasingly use AI and machine learning to analyze network traffic, log files and other data types to identify patterns and anomalies that might indicate a security breach. Adversaries can use AI to bypass these security measures, exploiting the same technologies intended to protect organizations. For instance, attackers could compromise an LLM so that it no longer detects particular types of events or patterns, effectively creating a backdoor in an organization’s defenses. Malicious actors could also train an LLM to evade detection by generating traffic patterns that appear benign but in fact mask an attack.
Strategies to Stay Ahead of LLM Threats
Generative AI may still be relatively new, but the basic concerns — data loss, reputational risk and legal liability — are well understood. As a result, security leaders must establish a rigorous, formal approach to guard against these emerging threats. This means implementing comprehensive security frameworks that address both traditional vulnerabilities and AI-specific risks while ensuring that strategies evolve alongside the rapid advancement of these technologies. When establishing guardrails, CISOs should pay particular attention to the following areas:
Governance and Compliance: Robust governance frameworks are vital in managing the risks associated with LLMs. The recently introduced ISO 42001 standard provides guidance for AI governance, emphasizing the need for continuous human oversight, especially in critical areas like financial reporting and code validation. CISOs must ensure that governance practices keep pace with rapid advancements in AI.
Comprehensive protection measures should be implemented across the entire AI model lifecycle to prevent adversaries from infiltrating the supply chain and compromising LLMs. This includes protecting the CI/CD pipeline, from code repositories to production environments.
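One lightweight control at this stage is to pin approved datasets and model weights to known-good hashes and verify them before each build or deployment. The sketch below is a minimal illustration rather than a prescribed tool: the manifest file name, its JSON layout and the artifact paths are hypothetical, and in practice the manifest itself would be signed and stored somewhere the pipeline trusts but attackers cannot easily reach.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: Path) -> bool:
    """Compare each artifact's hash with the value recorded in a trusted manifest.

    The manifest is assumed (hypothetically) to have been produced and signed
    earlier in the pipeline, when the training data and model weights were approved.
    """
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for entry in manifest["artifacts"]:
        actual = sha256_of(Path(entry["path"]))
        if actual != entry["sha256"]:
            print(f"MISMATCH: {entry['path']} has been modified")
            ok = False
    return ok

if __name__ == "__main__":
    # Fail the deployment step if any dataset or model file was tampered with.
    if not verify_artifacts(Path("model_manifest.json")):
        raise SystemExit(1)
```

A check like this will not catch every poisoning scenario, but it makes silent substitution of training data or weights between pipeline stages much harder to pull off unnoticed.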
Governance must also extend to access and permissions. Much of the risk from LLMs and generative AI comes down to data protection. It’s essential to implement best practices that ensure employees can access only the data they need, with a limited group of administrators able to adjust these permissions. This should already be standard practice, but the rise of LLMs and generative AI makes it even more imperative.
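To make the least-privilege idea concrete, the sketch below shows one way an internal LLM integration layer might check per-role data-source permissions before any document is added to a prompt. The role names, data-source labels and helper function are illustrative assumptions, not a specific product’s API; in a real deployment the mapping would come from the organization’s identity and access management system.

```python
from dataclasses import dataclass

# Illustrative mapping of roles to the data sources they may expose to the model.
# In practice this would be sourced from the IAM / identity provider, not hard-coded.
ROLE_PERMISSIONS = {
    "analyst": {"security_logs", "threat_intel"},
    "finance": {"financial_reports"},
    "admin": {"security_logs", "threat_intel", "financial_reports", "hr_records"},
}

@dataclass
class User:
    name: str
    role: str

def authorized_sources(user: User, requested: set[str]) -> set[str]:
    """Return only the data sources this user's role may expose to the LLM."""
    allowed = ROLE_PERMISSIONS.get(user.role, set())
    denied = requested - allowed
    if denied:
        print(f"Denied for {user.name}: {sorted(denied)}")
    return requested & allowed

# Example: an analyst's question that touches HR data has that source stripped
# before any retrieval or prompt construction takes place.
sources = authorized_sources(User("jdoe", "analyst"), {"security_logs", "hr_records"})
```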
Control Unapproved Generative AI Use: Organizations will need to ratchet up monitoring for the unapproved use of generative AI technology to guard against data leakage. CISOs must understand the data-handling policies of these services and should limit access to a small set that aligns with the organization’s requirements for data privacy and security. More broadly, employees should never have access to sensitive data they don’t need. Where access is required, employees must be educated not to share sensitive information with public services.
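One practical starting point is reviewing egress proxy logs for traffic to public generative AI endpoints that have not been sanctioned. The sketch below assumes a simple CSV proxy log with user and URL columns and uses an illustrative, non-exhaustive domain list; both would need to be adapted to the organization’s own logging format and approved-vendor list.

```python
import csv
import re
from collections import Counter

# Illustrative, non-exhaustive list of public generative AI domains to flag.
UNSANCTIONED_AI_DOMAINS = [
    r"api\.openai\.com",
    r"chat\.openai\.com",
    r"claude\.ai",
    r"gemini\.google\.com",
]
PATTERN = re.compile("|".join(UNSANCTIONED_AI_DOMAINS))

def flag_shadow_ai(proxy_log_path: str) -> Counter:
    """Count requests per user to unsanctioned generative AI services.

    Assumes a CSV proxy log with 'user' and 'url' columns; adjust to the
    organization's actual log schema.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if PATTERN.search(row.get("url", "")):
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    for user, count in flag_shadow_ai("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to unapproved AI services")
```

Findings like these are best treated as prompts for education and policy enforcement rather than punishment, since shadow AI use usually signals an unmet business need.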
Engage Stakeholders: Security teams must communicate clearly with all parties involved in the development and use of AI, including employees, end users, investors and partners, to ensure that expectations and security best practices are met. By educating each group about AI’s functions, its applications and the expected advantages and disadvantages of using it, organizations can promote transparency and build trust with those harnessing AI in their day-to-day work. Creating formal policies for stakeholders engaging with AI helps define how communication will be managed and ensures full alignment with the organization’s policies.
Upskill Security Analysts: Despite the risk of AI-powered tools being targeted, generative AI still provides a powerful way to upskill security analysts and ease the shortage of experts. LLMs can democratize data access, empowering less experienced analysts to extract meaningful insights. By simplifying the process of querying and analyzing data, AI tools can also help security teams identify vulnerabilities and respond to incidents more quickly.
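As a rough illustration of how an LLM can lower the barrier to querying security data, the sketch below asks a model to translate a plain-English question into a read-only SQL query that a junior analyst can review before running. The log table schema and model name are hypothetical, and the OpenAI Python SDK is used only as one example of an interface; an internally hosted model with a compatible API would serve the same purpose.

```python
from openai import OpenAI  # assumes an OpenAI-compatible endpoint; an internal model could be substituted

client = OpenAI()  # reads OPENAI_API_KEY from the environment by default

# Hypothetical authentication-log schema the analyst is querying.
SCHEMA = "Table auth_events(timestamp TIMESTAMP, username TEXT, src_ip TEXT, action TEXT, success BOOLEAN)"

def question_to_sql(question: str) -> str:
    """Ask the model to translate an analyst's question into a single SELECT statement."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You translate analyst questions into one read-only SELECT statement "
                        f"against this schema. Return only SQL.\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

# A less experienced analyst can ask in plain language, then review the generated
# query before executing it, rather than writing SQL from scratch.
print(question_to_sql("Show failed logins per source IP in the last 24 hours"))
```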
Balancing Security With Innovation
As LLMs continue to evolve, they present a dual challenge for CISOs: balancing the potential benefits of AI with the need to mitigate security risks. By embracing AI and implementing robust guardrails and governance, organizations can harness the power of LLMs while protecting against potential threats. Vigilance and adaptation are crucial as AI technologies advance, ensuring that security teams remain ahead of the curve in safeguarding their organizations.