A rundown of the OWASP top 10 for large language model applications
The unprecedented market entry of ChatGPT and the ensuing explosion in generative AI development over the last year have simultaneously catalyzed powerful innovation and introduced dangerous vulnerabilities for malicious actors to exploit. Security professionals are left to navigate the uncharted waters of the evolving LLM landscape.
In response, an international collective of nearly 500 experts (with over 125 active contributors) banded together with OWASP to research, analyze, and propose a top 10 list of vulnerabilities facing LLM. I had the pleasure of being a part of it!
Together, we identified 43 distinct threats and narrowed the list down to 10. We found six new critical vulnerabilities in addition to four existing ones from the original OWASP Top Ten. Let’s dive in!
- Prompt Injection (LLM01)
Yet again, number one on the list is a form of injection – Prompt Injection.
Attackers can craft malicious input to manipulate LLMs into unknowingly executing unintended actions. There are two types of Prompt Injection – direct and indirect. An example of direct injection is an attacker sending a prompt injection to an LLM for interacting with insecure functions and exploiting backend systems. Indirect injection could consist of an LLM accepting input from an external source, such as a website or files controlled by a malicious user.
- Insecure Output Handling (LLM02)
Similar to web application output handling, the LLM model output is parsed without any filtration. This can allow for XSS, SSRF, privilege escalation, or remote code execution.
- Training Data Poisoning (LLM03) *new*
Any machine learning model requires training data (raw text) to service the user needs. Training data should contain a broad range of genres, domains, languages, and content. The manipulation of data or a lack of tuning data can result in data poisoning, which can compromise the model’s security, effectiveness, and accuracy of predictions. Some threat vectors include using data from an unverified source, inadequate sandboxing/training, or the creation of falsified documents impacting the model’s outputs.
- Model Denial of Service (LLM04)
An attacker can consume a high volume of resources by interacting with an LLM to cause Denial of Service. Since we are in the early stages of understanding and implementing LLMs, developers need to understand the input and output capacity of the model.
- Supply Chain Vulnerabilities (LLM05) *new*
This vulnerability exists in the current AppSec realm and extends to training data, vulnerable ML models, third-party packages, and deprecated models. Understanding the terms and conditions of specific models, mitigating vulnerable and outdated components, updating inventory per the Software Bill of Materials (SBOM), and implementing the patching policy will prevent these types of vulnerabilities.
- Sensitive Information Disclosure (LLM06)
LLMs may reveal sensitive information (e.g., proprietary algorithms) if output is not properly sanitized. Those adopting LLM models should understand how each model works and avoid the risk of revealing sensitive information in the output via different streams.
- Insecure Plug-in Design (LLM07) *new*
This occurs when a potential attacker constructs a malicious request to the plugin, resulting in a wide range of undesired behaviors (e.g., remote code execution). An example would include a plugin accepting configuration strings instead of parameters to override entire configuration settings.
- Excessive Agency (LLM08) *new*
Excessive Agency arises when an LLM agent or plugin is provisioned with excessive permissions to read, write, or execute the required operation. An example is a plugin requiring access to modify the data used in the application. Although read permission is necessary, edit permission may not be required.
- Overreliance (LLM09) *new*
Systems depending excessively on LLM models for decision-making may produce inaccurate information and misleading content. Attack scenarios include using AI models for news organizations and Codex (a general-purpose programming model) to develop code with security vulnerabilities. This will generate an output that is syntactically correct but may not be semantically correct.
- Model Theft (LLM10) *new*
Last on the top ten list, Model Theft is about the propriety LLM model being compromised and extracted to another model. It compromises the confidentiality and integrity of the LLM and provides unauthorized access to any sensitive information contained within the model.
With the advent of AI into multi-cloud environments, supply chains, and global sales channels, security concerns relating to privacy and intellectual property are top-of-mind for development security operations teams. Understanding the hierarchy of vulnerabilities is now requisite for IT professionals along every step of the vulnerability management lifecycle.
References
https://owasp.org/www-project-top-10-for-large-language-model-applications/
https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v1_0_1.pdf
*** This is a Security Bloggers Network syndicated blog from The Coalfire Blog authored by The Coalfire Blog. Read the original post at: https://www.coalfire.com/the-coalfire-blog/owasp-top-10-for-large-languate-model-applications?feed=blogs