From dc791d4942f16bc2513f4f7364d0af2b1131cdd3 Mon Sep 17 00:00:00 2001
From: John Aziz
Date: Tue, 5 Mar 2024 23:57:21 +0200
Subject: [PATCH] fix remove locale

---
 13-securing-ai-applications/README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/13-securing-ai-applications/README.md b/13-securing-ai-applications/README.md
index 21ec4e6f5..b0292495a 100644
--- a/13-securing-ai-applications/README.md
+++ b/13-securing-ai-applications/README.md
@@ -134,11 +134,11 @@ Below are key insights that have shaped Microsoft’s AI Red Team program.
 
 3. **Dynamic Nature of AI Systems:** AI applications constantly evolve. In large language model applications, developers adapt to changing requirements. Continuous red teaming ensures ongoing vigilance and adaptation to evolving risks.
 
-AI red teaming is not all encompassing and should be consider a complementary motion to additonal controls such as [role-based access control (RBAC)](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control?WT.mc_id=academic-105485-koreyst) and comprehensive data management solutions. It's meant to suppplement a security strategy that focuses on employing safe and responsible AI solutions that account for privacy and security while aspiring to minimize biases, harmful content and misinformaiton that can erode user confidence.
+AI red teaming is not all-encompassing and should be considered a complementary motion to additional controls such as [role-based access control (RBAC)](https://learn.microsoft.com/azure/ai-services/openai/how-to/role-based-access-control?WT.mc_id=academic-105485-koreyst) and comprehensive data management solutions. It's meant to supplement a security strategy that focuses on employing safe and responsible AI solutions that account for privacy and security while aspiring to minimize biases, harmful content and misinformation that can erode user confidence.
 
 Here's a list of additional reading that can help you better understand how red teaming can help identify and mitigate risks in your AI systems:
 
-- [Planning red teaming for large language models (LLMs) and their applications](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/red-teaming?WT.mc_id=academic-105485-koreyst)
+- [Planning red teaming for large language models (LLMs) and their applications](https://learn.microsoft.com/azure/ai-services/openai/concepts/red-teaming?WT.mc_id=academic-105485-koreyst)
 - [What is the OpenAI Red Teaming Network?](https://openai.com/blog/red-teaming-network?WT.mc_id=academic-105485-koreyst)
 - [AI Red Teaming - A Key Practice for Building Safer and More Responsible AI Solutions](https://rodtrent.substack.com/p/ai-red-teaming?WT.mc_id=academic-105485-koreyst)
 - MITRE [ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)](https://atlas.mitre.org/?WT.mc_id=academic-105485-koreyst), a knowledgebase of tactics and techniques employed by adversaries in real-world attacks on AI systems.
@@ -155,7 +155,7 @@ A:1, While all three are great recommendations, ensuring that you're assigning t
 
 ## 🚀 Challenge
 
-Read up more on how you can [govern and protect sensitive information](https://learn.microsoft.com/en-us/training/paths/purview-protect-govern-ai/?WT.mc_id=academic-105485-koreyst) in the age of AI.
+Read up more on how you can [govern and protect sensitive information](https://learn.microsoft.com/training/paths/purview-protect-govern-ai/?WT.mc_id=academic-105485-koreyst) in the age of AI.
 
 ## Great Work, Continue Your Learning
 
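
For anyone applying the same cleanup across other lessons, below is a minimal Python sketch of the edit this patch makes by hand: stripping the locale segment (e.g. `/en-us/`) from `learn.microsoft.com` links in Markdown files. The script is not part of the commit; the `strip_locale` helper, the regex, and the file-name arguments are illustrative assumptions.

```python
# Illustrative helper (not part of the commit above): removes the locale
# segment, e.g. /en-us/, from learn.microsoft.com URLs so links resolve
# in each reader's own locale -- the same change the patch makes by hand.
import re
import sys
from pathlib import Path

# A Learn URL whose first path segment is a locale such as en-us or fr-fr.
LOCALE_RE = re.compile(r"(https://learn\.microsoft\.com)/[a-z]{2}-[a-z]{2}(/)")

def strip_locale(text: str) -> str:
    """Return the text with locale segments dropped from Learn URLs."""
    return LOCALE_RE.sub(r"\1\2", text)

if __name__ == "__main__":
    # Hypothetical usage: python strip_locale.py path/to/README.md ...
    for name in sys.argv[1:]:
        path = Path(name)
        original = path.read_text(encoding="utf-8")
        updated = strip_locale(original)
        if updated != original:
            path.write_text(updated, encoding="utf-8")
            print(f"updated {name}")
```

Running it over a README before committing would produce exactly the kind of `-`/`+` pairs shown in the hunks above, with only the locale segment removed and the `WT.mc_id` query string left intact.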