diff --git a/1_1_vulns/PromptInjection.md b/1_1_vulns/PromptInjection.md index 478d9e0b..ace630ad 100644 --- a/1_1_vulns/PromptInjection.md +++ b/1_1_vulns/PromptInjection.md @@ -15,7 +15,7 @@ In advanced attacks, the LLM could be manipulated to mimic a harmful persona or 1. A malicious user crafts a direct prompt injection to the LLM, which instructs it to ignore the application creator's system prompts and instead execute a prompt that returns private, dangerous, or otherwise undesirable information. 2. A user employs an LLM to summarize a webpage containing an indirect prompt injection. This then causes the LLM to solicit sensitive information from the user and perform exfiltration via JavaScript or Markdown. -3. A malicious user uploads a resume containing an indirect prompt injection. The document contains a prompt injection with instructions to make the LLM inform users that this document is excellent eg. anm excellent candidate for a job role. An internal user runs the document through the LLM to summarize the document. The output of the LLM returns information stating that this is an excellent document. +3. A malicious user uploads a resume containing an indirect prompt injection. The document contains a prompt injection with instructions to make the LLM inform users that this document is excellent eg. an excellent candidate for a job role. An internal user runs the document through the LLM to summarize the document. The output of the LLM returns information stating that this is an excellent document. 4. A user enables a plugin linked to an e-commerce site. A rogue instruction embedded on a visited website exploits this plugin, leading to unauthorized purchases. 5. A rogue instruction and content embedded on a visited website exploits other plugins to scam users.