This Repository Contains all the resources to learn about AI Security.
If you want to contribute, create a PR or contact me @Green_terminals.
- https://josephthacker.com/ai/2023/08/25/prompt-injection-primer.html
- https://research.nccgroup.com/2022/12/05/exploring-prompt-injection-attacks/
- https://infosecwriteups.com/art-of-hacking-llm-apps-a22cf60a523b
- https://infosecwriteups.com/bypassing-kyc-using-deepfake-e11f0722c722
- https://learnprompting.org/docs/prompt_hacking/injection
- https://boringappsec.substack.com/p/guest-post-edition-24-pentesting
- Attacking LLM - Prompt Injection
- Accidental LLM Backdoor - Prompt Tricks
- Defending LLM - Prompt Injection
- Prompt Injection 101 - Understanding Security Risks in LLM
- AI Hacking 🔥 OWASP Top 10 Vulnerabilities in LLM Applications
- Prompt Injection 🎯 AI hacking & LLM Attacks
- Intro to AI Security
- Daniel Miessler and Rez0: Hacking with AI (Ep. 24)
- AI and hacking - opportunities and threats - Joseph “rez0” Thacker
- AI Application Security by Joseph Thacker
- Invisible Prompt Injection Explained
- Hacking into Pretrained ML model by S.G Harish
- Extracting Training Data from Large Language Models
- Poisoning Language Models During Instruction Tuning
- Stealing Machine Learning Models via Prediction APIs
- Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models
- Prompt Injection attack against LLM-integrated Applications
- Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks
- Universal and Transferable Adversarial Attacks on Aligned Language Models