diff --git a/2_0_vulns/LLM01_PromptInjection.md b/2_0_vulns/LLM01_PromptInjection.md
index 20611074..fd7f1d1f 100644
--- a/2_0_vulns/LLM01_PromptInjection.md
+++ b/2_0_vulns/LLM01_PromptInjection.md
@@ -52,7 +52,7 @@ Prompt injection vulnerabilities are possible due to the nature of generative AI
 ### Reference Links
 
 1. [ChatGPT Plugin Vulnerabilities - Chat with Code](https://embracethered.com/blog/posts/2023/chatgpt-plugin-vulns-chat-with-code/) **Embrace the Red**
-2. [ChatGPT Cross Plugin Request Forgery and Prompt Injection](https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection) **Embrace the Red**
+2. [ChatGPT Cross Plugin Request Forgery and Prompt Injection](https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection./) **Embrace the Red**
 3. [Not what you’ve signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection](https://arxiv.org/pdf/2302.12173.pdf) **Arxiv**
 4. [Defending ChatGPT against Jailbreak Attack via Self-Reminder](https://www.researchsquare.com/article/rs-2873090/v1) **Research Square**
 5. [Prompt Injection attack against LLM-integrated Applications](https://arxiv.org/abs/2306.05499) **Cornell University**