From d94c8e73d12d1c01f39bfb9e08d28ff467534e6a Mon Sep 17 00:00:00 2001 From: Lee Stott Date: Mon, 22 Apr 2024 19:38:13 +0100 Subject: [PATCH] Fixes --- 05-advanced-prompts/translations/ko/README.md | 2 +- 08-building-search-applications/translations/ko/README.md | 2 +- 11-integrating-with-function-calling/translations/ko/README.md | 2 +- 13-securing-ai-applications/README.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/05-advanced-prompts/translations/ko/README.md b/05-advanced-prompts/translations/ko/README.md index 2f3b6fdf6..1b89bc47a 100644 --- a/05-advanced-prompts/translations/ko/README.md +++ b/05-advanced-prompts/translations/ko/README.md @@ -599,7 +599,7 @@ GitHub Copilot이나 ChatGPT와 같은 AI 어시스턴트를 사용하여 "자 > [!TIP] > 개선 사항을 요청하는 프롬프트를 작성하는 것이 좋습니다. 개선 사항의 수를 제한하는 것도 좋은 아이디어입니다. 또한 아키텍처, 성능, 보안 등 특정한 방식으로 개선을 요청할 수도 있습니다. -[해답](../../solution.py?WT.mc_id=academic-105485-koreyst) +[해답](../../python/aoai-solution.py?WT.mc_id=academic-105485-koreyst) ## 지식 확인 diff --git a/08-building-search-applications/translations/ko/README.md b/08-building-search-applications/translations/ko/README.md index d69c621f2..7001b99dc 100644 --- a/08-building-search-applications/translations/ko/README.md +++ b/08-building-search-applications/translations/ko/README.md @@ -149,7 +149,7 @@ az cognitiveservices account deployment create \ ## 해답 -GitHub Codespaces에서 [해답 노트북](../../solution.ipynb?WT.mc_id=academic-105485-koreyst)을 열고 Jupyter Notebook의 지침을 따르세요. +GitHub Codespaces에서 [해답 노트북](../../python/oai-solution.ipynb?WT.mc_id=academic-105485-koreyst)을 열고 Jupyter Notebook의 지침을 따르세요. 노트북을 실행할 때, 쿼리를 입력하라는 메시지가 표시됩니다. 입력 상자는 다음과 같이 보일 것입니다: diff --git a/11-integrating-with-function-calling/translations/ko/README.md b/11-integrating-with-function-calling/translations/ko/README.md index 4cf0be855..e30688d52 100644 --- a/11-integrating-with-function-calling/translations/ko/README.md +++ b/11-integrating-with-function-calling/translations/ko/README.md @@ -49,7 +49,7 @@ function calling은 다음과 같은 제한 사항을 극복하기 위한 Azure ## 시나리오를 통해 문제 설명하기 -> 아래 시나리오를 실행하려면 [제공된 노트북](../../Lesson11-FunctionCalling.ipynb?WT.mc_id=academic-105485-koreyst)을 사용하는 것을 권장합니다. 문제를 설명하기 위해 함수가 문제를 해결하는 데 도움이 되는 시나리오를 보여주려고 하므로 읽기만 해도 됩니다. +> 아래 시나리오를 실행하려면 [제공된 노트북](../../aoai-assignment.ipynb?WT.mc_id=academic-105485-koreyst)을 사용하는 것을 권장합니다. 문제를 설명하기 위해 함수가 문제를 해결하는 데 도움이 되는 시나리오를 보여주려고 하므로 읽기만 해도 됩니다. 응답 형식 문제를 보여주는 예제를 살펴보겠습니다: diff --git a/13-securing-ai-applications/README.md b/13-securing-ai-applications/README.md index ee018d8d5..20bdbf3e1 100644 --- a/13-securing-ai-applications/README.md +++ b/13-securing-ai-applications/README.md @@ -121,7 +121,7 @@ Data security, governance, and compliance are critical for any organization that Emulating real-world threats is now considered a standard practice in building resilient AI systems by employing similar tools, tactics, procedures to identify the risks to systems and test the response of defenders. -> The practice of AI red teaming has evolved to take on a more expanded meaning: it not only covers probing for security vulnerabilities, but also includes probing for other system failures, such as the generation of potentially harmful content. AI systems come with new risks, and red teaming is core to understanding those novel risks, such as prompt injection and producing ungrounded content. 
- [Microsoft AI Red Team building future of safer AI](https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/?WT.mc_id=academic-105485-koreyst)
+> The practice of AI red teaming has evolved to take on a more expanded meaning: it not only covers probing for security vulnerabilities, but also includes probing for other system failures, such as the generation of potentially harmful content. AI systems come with new risks, and red teaming is core to understanding those novel risks, such as prompt injection and producing ungrounded content. - [Microsoft AI Red Team building future of safer AI](https://www.microsoft.com/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/?WT.mc_id=academic-105485-koreyst)
 [![Guidance and resources for red teaming](./images/13-AI-red-team.png?WT.mc_id=academic-105485-koreyst)]()