From a84b3a2992ba9bec1daf105c5de3fc41d36319c4 Mon Sep 17 00:00:00 2001
From: Ads Dawson <104169244+GangGreenTemperTatum@users.noreply.github.com>
Date: Mon, 6 Jan 2025 08:25:46 -0500
Subject: [PATCH] docs: update reference

---
 2_0_vulns/LLM10_UnboundedConsumption.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/2_0_vulns/LLM10_UnboundedConsumption.md b/2_0_vulns/LLM10_UnboundedConsumption.md
index 46c093c3..b4271368 100644
--- a/2_0_vulns/LLM10_UnboundedConsumption.md
+++ b/2_0_vulns/LLM10_UnboundedConsumption.md
@@ -77,7 +77,7 @@ Attacks designed to disrupt service, deplete the target's financial resources, o
 1. [Proof Pudding (CVE-2019-20634)](https://avidml.org/database/avid-2023-v009/) **AVID** (`moohax` & `monoxgas`)
 2. [arXiv:2403.06634 Stealing Part of a Production Language Model](https://arxiv.org/abs/2403.06634) **arXiv**
 3. [Runaway LLaMA | How Meta's LLaMA NLP model leaked](https://www.deeplearning.ai/the-batch/how-metas-llama-nlp-model-leaked/): **Deep Learning Blog**
-4. [I Know What You See:](https://arxiv.org/pdf/1803.05847.pdf): **Arxiv White Paper**
+4. [You wouldn't download an AI, Extracting AI models from mobile apps](https://altayakkus.substack.com/p/you-wouldnt-download-an-ai): **Substack blog**
 5. [A Comprehensive Defense Framework Against Model Extraction Attacks](https://ieeexplore.ieee.org/document/10080996): **IEEE**
 6. [Alpaca: A Strong, Replicable Instruction-Following Model](https://crfm.stanford.edu/2023/03/13/alpaca.html): **Stanford Center on Research for Foundation Models (CRFM)**
 7. [How Watermarking Can Help Mitigate The Potential Risks Of LLMs?](https://www.kdnuggets.com/2023/03/watermarking-help-mitigate-potential-risks-llms.html): **KD Nuggets**