From 4c868c83e1a58fda372ec08f72089155e4893ec8 Mon Sep 17 00:00:00 2001
From: August Hill
Date: Sat, 11 May 2024 11:14:38 -0700
Subject: [PATCH] Update 13 Securing AI Applications README.md

I am correcting a few typos in the text.
---
 13-securing-ai-applications/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/13-securing-ai-applications/README.md b/13-securing-ai-applications/README.md
index 20bdbf3e1..3a4c6c57c 100644
--- a/13-securing-ai-applications/README.md
+++ b/13-securing-ai-applications/README.md
@@ -56,7 +56,7 @@ Much like the MITRE ATT&CK® framework, which is extensively used in traditional
 Additionally, the Open Web Application Security Project (OWASP) has created a "[Top 10 list](https://llmtop10.com/?WT.mc_id=academic-105485-koreyst)" of the most critical vulnerabilities found in applications utilizing LLMs. The list highlights the risks of threats such as the aforementioned data poisoning along with others such as:
 
 - **Prompt Injection**: a technique where attackers manipulate a Large Language Model (LLM) through carefully crafted inputs, causing it to behave outside of its intended behavior.
-- **Supply Chain Vulnerabilities**: The components and software that make up the applications used by an LLM, such as Python modules or external datasets, can themselves be compromised leading to unexpected results, introduced biases and even vulmerabilities in the underlying infrastructure.
+- **Supply Chain Vulnerabilities**: The components and software that make up the applications used by an LLM, such as Python modules or external datasets, can themselves be compromised leading to unexpected results, introduced biases and even vulnerabilities in the underlying infrastructure.
 - **Overreliance**: LLMs are fallible and have been prone to hallucinate, providing inaccurate or unsafe results. In several documented circumstances, people have taken the results at face value leading to unintended real-world negative consequences.
 
 Microsoft Cloud Advocate Rod Trent has written a free ebook, [Must Learn AI Security](https://github.com/rod-trent/OpenAISecurity/tree/main/Must_Learn/Book_Version?WT.mc_id=academic-105485-koreyst), that dives deeply into these and other emerging AI threats and provides extensive guidance on how to best tackle these scenarios.
@@ -134,7 +134,7 @@ Below are key insights that have shaped Microsoft’s AI Red Team program.
 
 3. **Dynamic Nature of AI Systems:** AI applications constantly evolve. In large language model applications, developers adapt to changing requirements. Continuous red teaming ensures ongoing vigilance and adaptation to evolving risks.
 
-AI red teaming is not all encompassing and should be consider a complementary motion to additional controls such as [role-based access control (RBAC)](https://learn.microsoft.com/azure/ai-services/openai/how-to/role-based-access-control?WT.mc_id=academic-105485-koreyst) and comprehensive data management solutions. It's meant to suppplement a security strategy that focuses on employing safe and responsible AI solutions that account for privacy and security while aspiring to minimize biases, harmful content and misinformaiton that can erode user confidence.
+AI red teaming is not all encompassing and should be considered a complementary motion to additional controls such as [role-based access control (RBAC)](https://learn.microsoft.com/azure/ai-services/openai/how-to/role-based-access-control?WT.mc_id=academic-105485-koreyst) and comprehensive data management solutions. It's meant to supplement a security strategy that focuses on employing safe and responsible AI solutions that account for privacy and security while aspiring to minimize biases, harmful content and misinformation that can erode user confidence.
 
 Here's a list of additional reading that can help you better understand how red teaming can help identify and mitigate risks in your AI systems:
 