Awesome-LLM-Supply-Chain-Security


A curated list of awesome resources on LLM supply chain security, including papers, talks, security reports, and CVEs.

Contributions are always welcome. Please follow the Contribution Guidelines.

Table of Contents

  • Papers
  • Talks
  • Security Reports
  • CVEs
  • Contribution

Papers

Technical Papers

  • [ICSE'25] Prompt-to-SQL Injections in LLM-Integrated Web Applications: Risks and Defenses [Paper]

  • [S&P'25] My Model is Malware to You: Transforming AI Models into Malware by Abusing TensorFlow APIs [Repo]

  • [ASE'24] Models Are Codes: Towards Measuring Malicious Code Poisoning Attacks on Pre-trained Model Hubs [Paper]

  • [CCS'24] Demystifying RCE Vulnerabilities in LLM-Integrated Apps [Paper] [Repo]

  • [arXiv] Cracks in The Stack: Hidden Vulnerabilities and Licensing Risks in LLM Pre-Training Datasets [Paper]

  • [arXiv] We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs [Paper] (see the existence-check sketch after this list)

  • [SCORED'22] An Empirical Study of Artifacts and Security Practices in the Pre-trained Model Supply Chain [Paper] [Repo]

  • [arXiv] Naming Practices of Pre-Trained Models in Hugging Face [Paper]

  • [arXiv] Towards Semantic Versioning of Open Pre-trained Language Model Releases on Hugging Face [Paper]

  • [arXiv] A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems [Paper]
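
The package-hallucination paper above motivates a simple guard: verify that an LLM-suggested dependency actually exists before installing it. A minimal sketch, assuming the public PyPI JSON endpoint `https://pypi.org/pypi/<name>/json`; the package names below are illustrative only:

```python
# Minimal sketch: check that packages an LLM suggests actually exist on PyPI
# before installing them, as a guard against "package hallucination".
import json
import urllib.error
import urllib.request

PYPI_JSON = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered PyPI project."""
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name), timeout=10) as resp:
            json.load(resp)      # parse to confirm we got real project metadata
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:      # unknown project: likely hallucinated
            return False
        raise                    # other HTTP errors: fail loudly, don't install

if __name__ == "__main__":
    for pkg in ["requests", "totally-hallucinated-pkg-xyz"]:  # hypothetical names
        status = "exists" if exists_on_pypi(pkg) else "NOT on PyPI, do not install"
        print(f"{pkg}: {status}")
```

Note that existence alone is not safety: a hallucinated name may already be registered by an attacker (slopsquatting), so new dependencies still deserve review.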

Surveys

  • [arXiv] Large Language Model Supply Chain: A Research Agenda [Paper]

  • [arXiv] Lifting the Veil on the Large Language Model Supply Chain: Composition, Risks, and Mitigations [Paper]

  • [arXiv] Large Language Model Supply Chain: Open Problems From the Security Perspective [Paper]

Talks

  • [BlackhatUSA'24] Practical LLM Security: Takeaways From a Year in the Trenches [Link] [Slides]

  • [BlackhatUSA'24] Isolation or Hallucination? Hacking AI Infrastructure Providers for Fun and Weights [Link]

  • [BlackhatUSA'24] From MLOps to MLOops - Exposing the Attack Surface of Machine Learning Platforms [Link] [Slides]

  • [BlackhatASIA'24] LLM4Shell: Discovering and Exploiting RCE Vulnerabilities in Real-World LLM-Integrated Frameworks and Apps [Link] [Slides]

  • [BlackhatASIA'24] Confused Learning: Supply Chain Attacks through Machine Learning Models [Link] [Slides]

  • [BlackhatASIA'24] How to Make Hugging Face to Hug Worms: Discovering and Exploiting Unsafe Pickle.loads over Pre-Trained Large Model Hubs [Link] [Slides] (see the pickle sketch after this list)

  • [AIVillage@Defcon'31] Assessing the Vulnerabilities of the Open-Source Artificial Intelligence (AI) Landscape: A Large-Scale Analysis of the Hugging Face Platform [Slides]

  • [AIVillage@Defcon'31] You Sound Confused, Anyways - Thanks for The Jewels [Slides]
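
Several of the talks and papers above hinge on the same primitive: `pickle.loads` executes code during deserialization, so a pickle-based model checkpoint is a program, not data. A minimal sketch with a harmless payload (a real attack would invoke `os.system` or similar):

```python
# Minimal sketch of why pickle-based model files are executable code, not data.
# Unpickling invokes __reduce__, so a "model" can run an arbitrary callable.
import pickle

class MaliciousModel:
    def __reduce__(self):
        # Runs at pickle.loads() time, before any model object exists.
        # Harmless payload here; a real payload would spawn a shell.
        return (print, ("arbitrary code ran during unpickling",))

blob = pickle.dumps(MaliciousModel())
pickle.loads(blob)  # loading == executing
```

For untrusted checkpoints, weights-only formats such as safetensors avoid this class of issue entirely.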

Security Reports

  • Machine Learning Bug Bonanza – Exploiting ML Services [Link]

  • Machine Learning Bug Bonanza – Exploiting ML Clients and “Safe” Model Formats [Link]

  • Offensive ML Playbook [Link]

  • OWASP Top 10 for LLMs [Link]

  • Wiz Research finds architecture risks that may compromise AI-as-a-Service providers and consequently risk customer data; works with Hugging Face on mitigations [Link]

  • The risk in malicious AI models: Wiz Research discovers critical vulnerability in AI-as-a-Service provider, Replicate [Link]

  • Data Scientists Targeted by Malicious Hugging Face ML Models with Silent Backdoor [Link]

  • From MLOps to MLOops: Exposing the Attack Surface of Machine Learning Platforms [Link]

  • Legit Discovers "AI Jacking" Vulnerability in Popular Hugging Face AI Platform [Link]

  • Diving Deeper into AI Package Hallucinations [Link]

  • +1500 HuggingFace API Tokens were exposed, leaving millions of Meta-Llama, Bloom, and Pythia users vulnerable [Link] (see the token-scan sketch after this list)

  • More Models, More ProbLLMs [Link]

  • Shelltorch Explained: Multiple Vulnerabilities in Pytorch Model Server [Link]

  • Anatomy of an LLM RCE [Link]

  • An Old Tree Blooms Anew: Code Execution Sandboxes in the Era of Large Models [Link]

  • Beware of Hugging Face Open-Source Component Risks Being Exploited in LLM Supply Chain Attacks [Link]
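
Several of the reports above trace back to credentials committed in plain sight. A minimal sketch of a pre-commit scan, assuming the common `hf_` prefix of Hugging Face user access tokens (the exact token shape is an assumption, not an official spec):

```python
# Minimal sketch: scan a source tree for strings shaped like Hugging Face
# user access tokens before they leak into a public repo.
# The hf_[A-Za-z0-9]{30,} pattern is an assumed shape, not an official spec.
import pathlib
import re

TOKEN_RE = re.compile(r"hf_[A-Za-z0-9]{30,}")

def scan(root: str = ".") -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in TOKEN_RE.finditer(text):
            # Print only a redacted prefix so the scan itself does not leak.
            print(f"{path}: possible HF token {match.group()[:8]}...")

if __name__ == "__main__":
    scan()
```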

CVEs

  • [CVE-2024-34359] Supply-Chain Attacks in LLMs: From GGUF model format metadata RCE, to State-of-The-Art NLP Project RCEs [Link]

  • [CVE-2024-3660] Keras 2 Lambda Layers Allow Arbitrary Code Injection in TensorFlow Models [Link] (see the safe-loading sketch after this list)

  • [CVE-2023-48022] The story of ShadowRay [Link]

  • [CVE-2023-49785] Hacking AI Chatbots for fun and learning - Analyzing an unauthenticated SSRF and reflected XSS in ChatGPT-Next-Web (CVE-2023-49785) [Link]

to be continued...
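
For CVE-2024-3660 above, the fix shipped as a `safe_mode` flag on Keras model loading. A minimal sketch, assuming Keras >= 2.13 (where `load_model` gained `safe_mode`) and a hypothetical file name `untrusted_model.keras`:

```python
# Minimal sketch: refuse to execute Lambda-layer code embedded in an
# untrusted Keras model file (the CVE-2024-3660 vector).
# "untrusted_model.keras" is a hypothetical filename.
import keras

try:
    # safe_mode=True (the default since the fix) rejects deserialization of
    # arbitrary Python code in Lambda layers; never pass safe_mode=False for
    # models pulled from public hubs.
    model = keras.models.load_model("untrusted_model.keras", safe_mode=True)
except ValueError as err:
    print(f"Refused to load model with embedded code: {err}")
```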

Contribution

Any contribution that makes this list more comprehensive (papers, talks, reports, CVEs, etc.) is always welcome. Please feel free to open an issue or a PR.