DeepTeam is a framework to red team LLMs and LLM systems.
-
Updated
Mar 16, 2026 - Python
DeepTeam is a framework to red team LLMs and LLM systems.
PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.
A comprehensive guide to adversarial testing and security evaluation of AI systems, helping organizations identify vulnerabilities before attackers exploit them.
Semantic Stealth Attacks & Symbolic Prompt Red Teaming on GPT and other LLMs.
RAG Poisoning Lab — Educational AI Security Exercise
Test and evaluate Large Language Models against prompt injections, jailbreaks, and adversarial attacks with a web-based interactive lab.
Damn Vulnerable AI Application - For LLM Red Team Training. LLM testing, RAG testing, Multimodal testing, Agent testing, LLM paload generation
LLM Sentinel Red Teaming Platform is an enterprise-grade framework for automated security testing of Large Language Models, detecting vulnerabilities such as jailbreaks, prompt injection, and system prompt leakage across multiple providers, with structured attack orchestration, risk scoring, and security reporting to harden models before production
Multi‑agent AI security testing framework that orchestrates red‑team analyses, consolidates findings with an arbiter, and records an immutable audit ledger—plus a deterministic demo mode for repeatable results.
Add a description, image, and links to the llm-red-teaming topic page so that developers can more easily learn about it.
To associate your repository with the llm-red-teaming topic, visit your repo's landing page and select "manage topics."