Minor Editorial style correction (#453)

Signed-off-by: Krishna Sankar <[email protected]>
xsankar authored Oct 29, 2024
1 parent db8aee4 commit 4b7924a
Showing 1 changed file with 1 addition and 1 deletion.
2_0_vulns/LLM08_VectorAndEmbeddingWeaknesses.md: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ Vectors and embeddings vulnerabilities present significant security risks in sys

Retrieval Augmented Generation (RAG) is a model adaptation technique that enhances the performance and contextual relevance of responses from LLM Applications by combining pre-trained language models with external knowledge sources. Retrieval Augmentation uses vector mechanisms and embeddings. (Ref #1)

-### Common Examples of Risk
+### Common Examples of Risks

1. **Unauthorized Access & Data Leakage:** Inadequate or misaligned access controls can lead to unauthorized access to embeddings containing sensitive information. If not properly managed, the model could retrieve and disclose personal data, proprietary information, or other sensitive content. Unauthorized use of copyrighted material or non-compliance with data usage policies during augmentation can lead to legal repercussions.
2. **Cross-Context Information Leaks and Federation Knowledge Conflict:** In multi-tenant environments where multiple classes of users or applications share the same vector database, there's a risk of context leakage between users or queries. Data federation knowledge conflict errors can occur when data from multiple sources contradict each other (Ref #2). This can also happen when new data from Retrieval Augmentation fails to supersede old knowledge the LLM learned during training.
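To make the retrieval mechanism in the diff context above concrete, here is a minimal, illustrative sketch of the RAG retrieval step with per-tenant isolation. None of this comes from the OWASP document: `Chunk`, `embed()`, `retrieve()`, and the `tenant_id` field are hypothetical names, and the character-frequency `embed()` is only a stand-in for a real embedding model. The hard `tenant_id` filter is the access-control boundary whose absence enables the cross-context leaks described in risk #2.

```python
# Minimal sketch of RAG retrieval with tenant isolation (illustrative only).
import math
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    vector: list[float]
    tenant_id: str  # owner of this document chunk

def embed(text: str) -> list[float]:
    # Stand-in embedding: normalized character-frequency vector over 'a'..'z'.
    # A real system would call an embedding model here.
    v = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            v[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def retrieve(store: list[Chunk], query: str, tenant_id: str, k: int = 3) -> list[Chunk]:
    """Return the top-k chunks for `query`, restricted to the caller's tenant.

    Dropping the tenant_id check is exactly the multi-tenant
    misconfiguration that risk #2 (cross-context information leaks) warns about.
    """
    q = embed(query)
    candidates = [c for c in store if c.tenant_id == tenant_id]  # hard isolation
    return sorted(candidates, key=lambda c: cosine(q, c.vector), reverse=True)[:k]

# Usage: two tenants share one store; each query only sees its own data.
store = [
    Chunk("alpha corp salary bands", embed("alpha corp salary bands"), "alpha"),
    Chunk("beta inc product roadmap", embed("beta inc product roadmap"), "beta"),
]
print([c.text for c in retrieve(store, "salary", tenant_id="alpha")])
# -> ['alpha corp salary bands']  (beta's data is never a candidate)
```

Filtering before similarity scoring, rather than after, keeps other tenants' chunks out of the candidate set entirely, so a high-scoring foreign chunk can never leak into the retrieved context.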
