Skip to content

Commit

Permalink
Update 13 Securing AI Applications README.md
Browse files Browse the repository at this point in the history
I am correcting a few typos in the text.
  • Loading branch information
august-hill authored May 11, 2024
1 parent 0cd80f1 commit 4c868c8
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions 13-securing-ai-applications/README.md
Original file line number Diff line number Diff line change
Expand Up @@ -56,7 +56,7 @@ Much like the MITRE ATT&CK® framework, which is extensively used in traditional
Additionally, the Open Web Application Security Project (OWASP) has created a "[Top 10 list](https://llmtop10.com/?WT.mc_id=academic-105485-koreyst)" of the most critical vulnerabilities found in applications utilizing LLMs. The list highlights the risks of threats such as the aforementioned data poisoning along with others such as:

- **Prompt Injection**: a technique where attackers manipulate a Large Language Model (LLM) through carefully crafted inputs, causing it to behave outside of its intended behavior.
- **Supply Chain Vulnerabilities**: The components and software that make up the applications used by an LLM, such as Python modules or external datasets, can themselves be compromised leading to unexpected results, introduced biases and even vulmerabilities in the underlying infrastructure.
- **Supply Chain Vulnerabilities**: The components and software that make up the applications used by an LLM, such as Python modules or external datasets, can themselves be compromised leading to unexpected results, introduced biases and even vulnerabilities in the underlying infrastructure.
- **Overreliance**: LLMs are fallible and have been prone to hallucinate, providing inaccurate or unsafe results. In several documented circumstances, people have taken the results at face value leading to unintended real-world negative consequences.

Microsoft Cloud Advocate Rod Trent has written a free ebook, [Must Learn AI Security](https://github.com/rod-trent/OpenAISecurity/tree/main/Must_Learn/Book_Version?WT.mc_id=academic-105485-koreyst), that dives deeply into these and other emerging AI threats and provides extensive guidance on how to best tackle these scenarios.
Expand Down Expand Up @@ -134,7 +134,7 @@ Below are key insights that have shaped Microsoft’s AI Red Team program.
3. **Dynamic Nature of AI Systems:**
AI applications constantly evolve. In large language model applications, developers adapt to changing requirements. Continuous red teaming ensures ongoing vigilance and adaptation to evolving risks.

AI red teaming is not all encompassing and should be consider a complementary motion to additional controls such as [role-based access control (RBAC)](https://learn.microsoft.com/azure/ai-services/openai/how-to/role-based-access-control?WT.mc_id=academic-105485-koreyst) and comprehensive data management solutions. It's meant to suppplement a security strategy that focuses on employing safe and responsible AI solutions that account for privacy and security while aspiring to minimize biases, harmful content and misinformaiton that can erode user confidence.
AI red teaming is not all encompassing and should be considered a complementary motion to additional controls such as [role-based access control (RBAC)](https://learn.microsoft.com/azure/ai-services/openai/how-to/role-based-access-control?WT.mc_id=academic-105485-koreyst) and comprehensive data management solutions. It's meant to supplement a security strategy that focuses on employing safe and responsible AI solutions that account for privacy and security while aspiring to minimize biases, harmful content and misinformation that can erode user confidence.

Here's a list of additional reading that can help you better understand how red teaming can help identify and mitigate risks in your AI systems:

Expand Down

0 comments on commit 4c868c8

Please sign in to comment.