Skip to content
#

llm-red-teaming

Here are 9 public repositories matching this topic...

PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.

  • Updated Jun 29, 2025
  • Python

LLM Sentinel Red Teaming Platform is an enterprise-grade framework for automated security testing of Large Language Models, detecting vulnerabilities such as jailbreaks, prompt injection, and system prompt leakage across multiple providers, with structured attack orchestration, risk scoring, and security reporting to harden models before production

  • Updated Mar 4, 2026
  • Python

Improve this page

Add a description, image, and links to the llm-red-teaming topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the llm-red-teaming topic, visit your repo's landing page and select "manage topics."

Learn more