From 839121b4b99775ad720b2d7582a0871655824356 Mon Sep 17 00:00:00 2001
From: Ads Dawson <104169244+GangGreenTemperTatum@users.noreply.github.com>
Date: Sat, 10 Feb 2024 19:28:18 -0800
Subject: [PATCH] Ads/llm06 word suggestion example attack scenario (#269)

* feat: kickoff v2 0 dir and files

* docs: suggestion for issue 267
---
 2_0_vulns/LLM06_SensitiveInformationDisclosure.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/2_0_vulns/LLM06_SensitiveInformationDisclosure.md b/2_0_vulns/LLM06_SensitiveInformationDisclosure.md
index 297a0b6b..f6a74272 100644
--- a/2_0_vulns/LLM06_SensitiveInformationDisclosure.md
+++ b/2_0_vulns/LLM06_SensitiveInformationDisclosure.md
@@ -27,7 +27,7 @@ The consumer-LLM application interaction forms a two-way trust boundary, where w
 
 1. Unsuspecting legitimate user A is exposed to certain other user data via the LLM when interacting with the LLM application in a non-malicious manner.
 2. User A targets a well-crafted set of prompts to bypass input filters and sanitization from the LLM to cause it to reveal sensitive information (PII) about other users of the application.
-3. Personal data such as PII is leaked into the model via training data due to either negligence from the user themselves, or the LLM application. This case could increase the risk and probability of scenario 1 or 2 above.
+3. Personal data such as PII is leaked into the model via training data due to either negligence from the user themselves, or the LLM application. This case could increase the impact of scenario 1 or 2 above.
 
 ### Reference Links
 