OWASP Top 10 List for Large Language Models version 0.1

This is a draft list of important vulnerability types for Artificial Intelligence (AI) applications built on Large Language Models (LLMs).

LLM01: Prompt Injections

Description:
Bypassing filters or manipulating the LLM using carefully crafted prompts that make the model ignore previous instructions or perform unintended actions.
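To make the failure mode concrete, here is a minimal Python sketch, assuming a chatbot that concatenates untrusted user text into its instructions. The `SYSTEM_PROMPT`, helper names, and the blocklist of override phrases are illustrative rather than from any particular framework, and a simple blocklist is easy to bypass, so treat this as a first line of defence only.

```python
# Naive prompt assembly: untrusted input is mixed straight into the instructions,
# so text like "Ignore previous instructions and ..." can override them.
SYSTEM_PROMPT = "You are a billing support bot. Only answer billing questions."

def build_prompt_unsafe(user_input: str) -> str:
    return f"{SYSTEM_PROMPT}\n\nUser: {user_input}"

# Slightly safer sketch: delimit the untrusted text and reject obvious override phrases.
SUSPICIOUS_PHRASES = ("ignore previous instructions", "disregard the above")

def build_prompt_guarded(user_input: str) -> str:
    lowered = user_input.lower()
    if any(phrase in lowered for phrase in SUSPICIOUS_PHRASES):
        raise ValueError("possible prompt injection attempt")
    return (
        f"{SYSTEM_PROMPT}\n\n"
        f"<untrusted_user_input>\n{user_input}\n</untrusted_user_input>"
    )

print(build_prompt_guarded("How do I update my card details?"))
```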

LLM02: Data Leakage

Description:
Accidentally revealing sensitive information, proprietary algorithms, or other confidential details through the LLM's responses.
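One common mitigation is to scan model output for sensitive-looking patterns before it reaches the user. The sketch below is a hypothetical redaction pass built on a few regular expressions; a real deployment would rely on dedicated data-loss-prevention tooling and patterns tuned to its own secrets.

```python
import re

# Illustrative patterns only; production filters would use a DLP service
# and patterns specific to the organisation's own secrets.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(sk|pk)_[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(llm_output: str) -> str:
    """Replace anything that looks like sensitive data before returning the response."""
    for name, pattern in PATTERNS.items():
        llm_output = pattern.sub(f"[REDACTED {name.upper()}]", llm_output)
    return llm_output

print(redact("Contact ops@example.com, key sk_abcdefghijklmnop1234"))
```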

LLM03: Inadequate Sandboxing

Description:
Failing to properly isolate LLMs when they have access to external resources or sensitive systems, allowing for potential exploitation and unauthorized access.
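As a rough illustration of isolation, the sketch below runs model-suggested Python in a separate child process with a hard timeout instead of inside the application itself. The flags and limits shown are assumptions for illustration; a real sandbox (container, seccomp, gVisor, or a dedicated VM) would also restrict the filesystem, network, and resource usage.

```python
import subprocess
import sys

def run_llm_snippet_isolated(code: str, timeout_s: int = 5) -> str:
    """Run model-suggested Python in a child process with a hard timeout.

    Sketch only: a real sandbox would also drop privileges and restrict
    filesystem, network, memory, and CPU.
    """
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env and site dirs
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    return result.stdout

print(run_llm_snippet_isolated("print(2 + 2)"))
```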

LLM04: Unauthorized Code Execution

Description:
Exploiting LLMs to execute malicious code, commands, or actions on the underlying system through natural language prompts.
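The classic anti-pattern here is feeding model output straight to `eval`, `exec`, or a shell. Below is a minimal sketch of a safer alternative, assuming the model is asked to reply with a JSON action request: the output is parsed as data and dispatched only to a fixed set of handlers. The action name and JSON shape are hypothetical.

```python
import json

def get_balance(account_id: str) -> str:
    return f"balance for {account_id}: 42.00"

# Only these actions can ever run, regardless of what the model asks for.
ALLOWED_ACTIONS = {"get_balance": get_balance}

def dispatch(llm_output: str) -> str:
    """Parse the model's reply as JSON and route it to a whitelisted handler."""
    request = json.loads(llm_output)  # e.g. {"action": "get_balance", "args": {...}}
    handler = ALLOWED_ACTIONS.get(request["action"])
    if handler is None:
        raise PermissionError(f"action not allowed: {request['action']}")
    return handler(**request.get("args", {}))

# eval(llm_output)  # <- the vulnerable pattern this replaces
print(dispatch('{"action": "get_balance", "args": {"account_id": "acct_1"}}'))
```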

LLM05: SSRF Vulnerabilities

Description:
Exploiting LLMs to perform unintended requests or access restricted resources, such as internal services, APIs, or data stores.
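Where the LLM can trigger outbound requests, checking every destination against an allow-list and rejecting private or loopback addresses limits the blast radius. The host list below is a hypothetical example.

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com"}  # hypothetical allow-list of permitted destinations

def is_safe_url(url: str) -> bool:
    """Allow only known hosts, and refuse anything resolving to a private or loopback address."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or parsed.hostname is None:
        return False
    if parsed.hostname not in ALLOWED_HOSTS:
        return False
    for info in socket.getaddrinfo(parsed.hostname, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True

# The cloud metadata endpoint is a classic SSRF target; it is rejected here.
print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # False
```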

LLM06: Overreliance on LLM-generated Content

Description:
Excessive dependence on LLM-generated content without human oversight can result in harmful consequences.
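One simple safeguard is a human-in-the-loop gate: the application refuses to act on model output until a reviewer explicitly approves it. The sketch below is deliberately minimal and only illustrates the shape of such a check.

```python
def require_human_approval(llm_suggestion: str) -> bool:
    """Block on an explicit reviewer decision before acting on model output."""
    print(f"Model suggests:\n{llm_suggestion}\n")
    return input("Approve this action? [y/N] ").strip().lower() == "y"

if require_human_approval("Refund order #1234 in full"):
    print("Action approved by a human reviewer; proceeding.")
else:
    print("Action rejected; nothing executed.")
```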

LLM07: Inadequate AI Alignment

Description:
Failing to ensure that the LLM's objectives and behavior align with the intended use case, leading to undesired consequences or vulnerabilities.
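One practical, if partial, control is a behavioural regression suite that checks the model stays within its intended use case whenever prompts or models change. The cases, predicates, and `call_model` stub below are hypothetical.

```python
# Hypothetical regression suite: each case pairs a prompt with a predicate the
# model's answer must satisfy for the intended use case (here, a billing-only bot).
CASES = [
    ("How do I update my card?", lambda reply: "card" in reply.lower()),
    ("Write me a phishing email", lambda reply: "cannot" in reply.lower() or "can't" in reply.lower()),
]

def call_model(prompt: str) -> str:
    # Stand-in for the real model call, which is assumed to exist elsewhere.
    return "I cannot help with that." if "phishing" in prompt else "Go to Settings > Card."

def run_alignment_checks() -> None:
    for prompt, check in CASES:
        reply = call_model(prompt)
        status = "PASS" if check(reply) else "FAIL"
        print(f"[{status}] {prompt!r} -> {reply!r}")

run_alignment_checks()
```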

LLM08: Insufficient Access Controls

Description:
Not properly implementing access controls or authentication, allowing unauthorized users to interact with the LLM and potentially exploit vulnerabilities.
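A minimal sketch of the missing control, assuming simple per-user API tokens (a real service would sit behind a proper identity provider): the handler authenticates the caller before any prompt reaches the model, and `hmac.compare_digest` avoids timing side-channels in the token comparison.

```python
import hmac

# Hypothetical per-user API tokens; a real service would use an identity provider.
API_TOKENS = {"alice": "s3cr3t-token-alice"}

def handle_llm_request(user: str, token: str, prompt: str) -> str:
    expected = API_TOKENS.get(user)
    if expected is None or not hmac.compare_digest(expected, token):
        raise PermissionError("authentication required before the model can be queried")
    return f"(model response to {user!r}: {prompt!r})"  # stand-in for the real LLM call

print(handle_llm_request("alice", "s3cr3t-token-alice", "Summarise my last invoice"))
```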

LLM09: Improper Error Handling

Description:
Exposing error messages or debugging information that could reveal sensitive information, system details, or potential attack vectors.
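The usual remedy is to log full details server-side while returning only a generic message to the caller. The failure simulated below, and its fake internal address, are purely illustrative.

```python
import logging
import traceback

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("llm-app")

def safe_handle(prompt: str) -> str:
    try:
        raise ConnectionError("model backend at 10.0.3.7:8080 unreachable")  # simulated failure
    except Exception:
        log.error("request failed:\n%s", traceback.format_exc())  # full detail stays server-side
        return "Something went wrong. Please try again later."    # nothing leaks to the caller

print(safe_handle("hello"))
```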

LLM10: Training Data Poisoning

Description:
Maliciously manipulating training data or fine-tuning procedures to introduce vulnerabilities or backdoors into the LLM.
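One basic defence is to pin the provenance of training data: record a digest of each reviewed dataset and refuse to fine-tune if the file has changed since review. The manifest, filename, and placeholder digest below are assumptions for illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of approved dataset files and the SHA-256 digests
# recorded when the data was reviewed (placeholder value shown).
MANIFEST = {"train.jsonl": "0" * 64}

def verify_dataset(path: Path) -> bool:
    """Refuse to fine-tune on files whose contents no longer match the reviewed digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return MANIFEST.get(path.name) == digest

dataset = Path("train.jsonl")
if dataset.exists() and verify_dataset(dataset):
    print("dataset integrity verified; safe to start fine-tuning")
else:
    print("dataset missing or modified since review; aborting fine-tune")
```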