Skip to content

Commit

Permalink
fixed broken links
Browse files Browse the repository at this point in the history
  • Loading branch information
koreyspace committed Feb 21, 2024
1 parent 4af3567 commit 8799877
Show file tree
Hide file tree
Showing 2 changed files with 14 additions and 14 deletions.
24 changes: 12 additions & 12 deletions 13-securing-ai-applications/README.md
Original file line number Diff line number Diff line change
Expand Up @@ -47,19 +47,19 @@ Here are examples of how data poisoning can affect your models:
4. **Backdoor Attacks**: An adversary inserts a hidden pattern (backdoor) into the training data. The model learns to recognize this pattern and behaves maliciously when triggered.\
**Example**: A face recognition system trained with backdoored images that misidentifies a specific person.

The MITRE Corporation has created [ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)](https://atlas.mitre.org/), a knowledgebase of tactics and techniques employed by adversaries in real-world attacks on AI systems.
The MITRE Corporation has created [ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)](https://atlas.mitre.org/?WT.mc_id=academic-105485-koreyst), a knowledgebase of tactics and techniques employed by adversaries in real-world attacks on AI systems.

> There are a growing number of vulnerabilities in AI-enabled systems, as the incorporation of AI increases the attack surface of existing systems beyond those of traditional cyber-attacks. We developed ATLAS to raise awareness of these unique and evolving vulnerabilities, as the global community starts to incorporate AI into more systems. ATLAS is modeled after the MITRE ATT&CK® framework and its tactics, techniques, and procedures (TTPs) are complementary to those in ATT&CK.
Much like the MITRE ATT&CK® framework, which is extensively used in traditional cybersecurity for planning advanced threat emulation scenarios, ATLAS provides an easily searchable set TTPs that can help to better understand and prepare for defending against emerging attacks.

Additionally, the Open Web Application Security Project (OWASP) has created a "[Top 10 list](https://llmtop10.com/)" of the most critical vulnerabilities found in applications utilizing LLMs. The list highlights the risks of threats such as the aforementioned data poisoning along with others such as:
Additionally, the Open Web Application Security Project (OWASP) has created a "[Top 10 list](https://llmtop10.com/?WT.mc_id=academic-105485-koreyst)" of the most critical vulnerabilities found in applications utilizing LLMs. The list highlights the risks of threats such as the aforementioned data poisoning along with others such as:

- **Prompt Injection**: a technique where attackers manipulate a Large Language Model (LLM) through carefully crafted inputs, causing it to behave outside of its intended behavior.
- **Supply Chain Vulnerabilities**: The components and software that make up the applications used by an LLM, such as Python modules or external datasets, can themselves be compromised leading to unpexected results, introduced biases and even vulmerabilities in the underlying infrastructure.
- **Overreliance**: LLMs are fallible and have been prone to hallucinate, providing inaccurate or unsafe results. In several documented circumstances, people have taken the results at face value leading to unintended real-world megative consequences.

Microsoft Cloud Advocate Rod Trent has written a free ebook, [Must Learn AI Security](https://github.com/rod-trent/OpenAISecurity/tree/main/Must_Learn/Book_Version), that dives deeply into these and other emerging AI threats and provides extensive guidance on how to best tackle these scenarios.
Microsoft Cloud Advocate Rod Trent has written a free ebook, [Must Learn AI Security](https://github.com/rod-trent/OpenAISecurity/tree/main/Must_Learn/Book_Version?WT.mc_id=academic-105485-koreyst), that dives deeply into these and other emerging AI threats and provides extensive guidance on how to best tackle these scenarios.

## Security Testing for AI Systems and LLMs

Expand All @@ -78,15 +78,15 @@ OpenAI, a leader in AI systems, has setup a series of _safety evaluations_ as pa
#### Persuasion

- [MakeMeSay](https://github.com/openai/evals/tree/main/evals/elsuite/make_me_say/readme.md): How well can an AI system trick another AI system into saying a secret word?
- [MakeMePay](https://github.com/openai/evals/tree/main/evals/elsuite/make_me_pay/readme.md): How well can an AI system convince another AI system to donate money?
- [Ballot Proposal](https://github.com/openai/evals/tree/main/evals/elsuite/ballots/readme.md): How well can an AI system influence another AI system’s support of a political proposition?
- [MakeMeSay](https://github.com/openai/evals/tree/main/evals/elsuite/make_me_say/readme.md?WT.mc_id=academic-105485-koreyst): How well can an AI system trick another AI system into saying a secret word?
- [MakeMePay](https://github.com/openai/evals/tree/main/evals/elsuite/make_me_pay/readme.md?WT.mc_id=academic-105485-koreyst): How well can an AI system convince another AI system to donate money?
- [Ballot Proposal](https://github.com/openai/evals/tree/main/evals/elsuite/ballots/readme.md?WT.mc_id=academic-105485-koreyst): How well can an AI system influence another AI system’s support of a political proposition?

#### Steganography (hidden messaging)

- [Steganography](https://github.com/openai/evals/tree/main/evals/elsuite/steganography/readme.md): How well can an AI system ​​pass secret messages without being caught by another AI system?
- [Text Compression](https://github.com/openai/evals/tree/main/evals/elsuite/text_compression/readme.md): How well can an AI system compress and decompress messages, to enable hiding secret messages?
- [Schelling Point](https://github.com/openai/evals/blob/main/evals/elsuite/schelling_point/README.md): How well can an AI system coordinate with another AI system, without direct communication?
- [Steganography](https://github.com/openai/evals/tree/main/evals/elsuite/steganography/readme.md?WT.mc_id=academic-105485-koreyst): How well can an AI system ​​pass secret messages without being caught by another AI system?
- [Text Compression](https://github.com/openai/evals/tree/main/evals/elsuite/text_compression/readme.md?WT.mc_id=academic-105485-koreyst): How well can an AI system compress and decompress messages, to enable hiding secret messages?
- [Schelling Point](https://github.com/openai/evals/blob/main/evals/elsuite/schelling_point/README.md?WT.mc_id=academic-105485-koreyst): How well can an AI system coordinate with another AI system, without direct communication?

### AI Security

Expand Down Expand Up @@ -139,9 +139,9 @@ AI red teaming is not all encompassing and should be consider a complementary mo
Here's a list of additional reading that can help you better understand how red teaming can help identify and mitigate risks in your AI systems:

- [Planning red teaming for large language models (LLMs) and their applications](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/red-teaming?WT.mc_id=academic-105485-koreyst)
- [What is the OpenAI Red Teaming Network?](https://openai.com/blog/red-teaming-network)
- [AI Red Teaming - A Key Practice for Building Safer and More Responsible AI Solutions](https://rodtrent.substack.com/p/ai-red-teaming)
- MITRE [ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)](https://atlas.mitre.org/), a knowledgebase of tactics and techniques employed by adversaries in real-world attacks on AI systems.
- [What is the OpenAI Red Teaming Network?](https://openai.com/blog/red-teaming-network?WT.mc_id=academic-105485-koreyst)
- [AI Red Teaming - A Key Practice for Building Safer and More Responsible AI Solutions](https://rodtrent.substack.com/p/ai-red-teaming?WT.mc_id=academic-105485-koreyst)
- MITRE [ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)](https://atlas.mitre.org/?WT.mc_id=academic-105485-koreyst), a knowledgebase of tactics and techniques employed by adversaries in real-world attacks on AI systems.

## Knowledge check

Expand Down
4 changes: 2 additions & 2 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -40,7 +40,7 @@ Don't forget to star [star (🌟) this repo](https://docs.github.com/en/get-star

## 🧠 Ready to Deploy?

If you are looking for more advanced code samples, check out our [collection of Generative AI Code Samples](https://aka.ms/genai-beg-code) in both **Python** and **TypeScript**.
If you are looking for more advanced code samples, check out our [collection of Generative AI Code Samples](https://aka.ms/genai-beg-code?WT.mc_id=academic-105485-koreyst) in both **Python** and **TypeScript**.

## 🗣️ Meet Other Learners, Get Support

Expand Down Expand Up @@ -80,7 +80,7 @@ Find spelling errors, code errors or have a suggestion? [Raise an issue](https:/
| 12 | [Designing UX for AI Applications](./12-designing-ux-for-ai-applications/README.md?WT.mc_id=academic-105485-koreyst) | **Learn:** How to apply UX design principles when developing Generative AI Applications | [Learn More](https://aka.ms/genai-collection?WT.mc_id=academic-105485-koreyst) |
| 13 | [Securing Your Generative AI Applications](./13-securing-ai-applications/README.md?WT.mc_id=academic-105485-koreyst) | **Learn:** The threats and risks to AI systems and methods to secure these systems. | [Learn More](https://aka.ms/genai-collection?WT.mc_id=academic-105485-koreyst) |
| 14 | [The Generative AI Application Lifecycle](./14-the-generative-ai-application-lifecycle/README.md?WT.mc_id=academic-105485-koreyst) | **Learn:** The tools and metrics to manage the LLM Lifecycle and LLMOps | [Learn More](https://aka.ms/genai-collection?WT.mc_id=academic-105485-koreyst) |
| 15 | [Retrieval Augmented Generation (RAG) and Vector Databases](<(./15-rag-and-vector-databases/README.md?WT.mc_id=academic-105485-koreyst)>) | **Build:** An application using a RAG Framework to retrieve embeddings from a Vector Databases | [Learn More](https://aka.ms/genai-collection?WT.mc_id=academic-105485-koreyst) |
| 15 | [Retrieval Augmented Generation (RAG) and Vector Databases](./15-rag-and-vector-databases/README.md?WT.mc_id=academic-105485-koreyst) | **Build:** An application using a RAG Framework to retrieve embeddings from a Vector Databases | [Learn More](https://aka.ms/genai-collection?WT.mc_id=academic-105485-koreyst) |
| 16 | Open Source Models and Hugging Face | **Build:** An application using open source models available on Hugging Face | [Learn More](https://aka.ms/genai-collection?WT.mc_id=academic-105485-koreyst) |
| 17 | [AI Agents](./17-ai-agents/README.md?WT.mc_id=academic-105485-koreyst) | **Build:** An application using an AI Agent Framework | [Learn More](https://aka.ms/genai-collection?WT.mc_id=academic-105485-koreyst) |
| 18 | Fine-Tuning LLMs | **Learn:** The what, why and how of fine-tuning LLMs | [Learn More](https://aka.ms/genai-collection?WT.mc_id=academic-105485-koreyst) |
Expand Down

0 comments on commit 8799877

Please sign in to comment.