
Commit

Broken link in LLM01 (#457)
It looks like the Embrace the Red URL is returning a 404 error. I have updated the URL to match the one that now resolves, even though it looks like a typo on their end.
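For context, here is a minimal sketch of how the 404 could be verified, using only the Python standard library; the HEAD-request approach and the hard-coded URL list are illustrative assumptions, not part of this commit:

```python
import urllib.request
import urllib.error

# Both URL variants from this commit: the old link that began returning
# 404, and the replacement that matches the live page (trailing "./" and all).
URLS = [
    "https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection",
    "https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection./",
]

for url in URLS:
    # HEAD keeps the check lightweight; some servers answer HEAD differently
    # from GET, so a GET fallback may be needed in practice.
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            print(f"{response.status}  {url}")
    except urllib.error.HTTPError as err:
        # urlopen raises HTTPError for 4xx/5xx responses, e.g. the 404 reported here.
        print(f"{err.code}  {url}")
```

Running a check like this over every entry in the Reference Links section would catch similar breakages before they are merged.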
talesh authored Nov 2, 2024
1 parent 0acb794 commit d16c7fd
Showing 1 changed file with 1 addition and 1 deletion.
2_0_vulns/LLM01_PromptInjection.md

@@ -52,7 +52,7 @@ Prompt injection vulnerabilities are possible due to the nature of generative AI
### Reference Links

1. [ChatGPT Plugin Vulnerabilities - Chat with Code](https://embracethered.com/blog/posts/2023/chatgpt-plugin-vulns-chat-with-code/) **Embrace the Red**
-2. [ChatGPT Cross Plugin Request Forgery and Prompt Injection](https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection) **Embrace the Red**
+2. [ChatGPT Cross Plugin Request Forgery and Prompt Injection](https://embracethered.com/blog/posts/2023/chatgpt-cross-plugin-request-forgery-and-prompt-injection./) **Embrace the Red**
3. [Not what you’ve signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection](https://arxiv.org/pdf/2302.12173.pdf) **Arxiv**
4. [Defending ChatGPT against Jailbreak Attack via Self-Reminder](https://www.researchsquare.com/article/rs-2873090/v1) **Research Square**
5. [Prompt Injection attack against LLM-integrated Applications](https://arxiv.org/abs/2306.05499) **Cornell University**
