From 2546f78ff1537bff5d3f2150e34dfaaadc4d0644 Mon Sep 17 00:00:00 2001
From: Steve Wilson <62770473+virtualsteve-star@users.noreply.github.com>
Date: Sun, 19 May 2024 09:36:56 -0700
Subject: [PATCH] clean up

---
 2_0_candidates/SteveWilson_DangerousHallucinations.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/2_0_candidates/SteveWilson_DangerousHallucinations.md b/2_0_candidates/SteveWilson_DangerousHallucinations.md
index 9866e28a..01ba151e 100644
--- a/2_0_candidates/SteveWilson_DangerousHallucinations.md
+++ b/2_0_candidates/SteveWilson_DangerousHallucinations.md
@@ -29,9 +29,9 @@ Dangerous hallucinations in Large Language Models (LLMs) refer to instances wher
 
 ### Example Attack Scenarios
 
-**Scenario #1:** A legal firm uses an LLM to generate legal documents. The model confidently fabricates legal precedents, leading the firm to present false information in court, resulting in fines and reputational damage【7†source】 .
+**Scenario #1:** A legal firm uses an LLM to generate legal documents. The model confidently fabricates legal precedents, leading the firm to present false information in court, resulting in fines and reputational damage.
 
-**Scenario #2:** Developers use an LLM as a coding assistant. The model suggests a non-existent code library, which developers integrate into their software. An attacker exploits this by creating a malicious package with the same name, leading to a security breach【7†source】 .
+**Scenario #2:** Developers use an LLM as a coding assistant. The model suggests a non-existent code library, which developers integrate into their software. An attacker exploits this by creating a malicious package with the same name, leading to a security breach.
 
 ### Reference Links