Curated resources, research, and tools for securing AI systems.
- Best Practices, Frameworks & Controls
- Tools
- Models
- Attack & Defense Matrices
- Checklists
- Newsletter
- Datasets
- Courses & Certifications
- Training
- Reports and Research
- Communities & Social Groups
- Benchmarking
- Incident Response
- Supply Chain Security
- Videos & Playlists
- Conferences
- Foundations: Glossary, SoK/Surveys & Taxonomies
- Podcasts
- Market Landscape
- Startups Blogs
- Related Awesome Lists
- Common Acronyms
- NIST - AI Risk Management Framework (AI RMF)
- ISO/IEC 42001 (AI Management System)
- OWASP - AI Maturity Assessment (AIMA)
- Google - Secure AI Framework (SAIF)
- OWASP - LLM & GenAI Security Center of Excellence (CoE) Guide
- CSA - AI Model Risk Management Framework
- NIST - Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
- OWASP - LLM Security Verification Standard (LLMSVS)
- OWASP - Artificial Intelligence Security Verification Standard (AISVS)
- CSA - AI Controls Matrix (AICM) - The AICM contains 243 control objectives across 18 domains and maps to ISO 42001, ISO 27001, NIST AI RMF 1.0, and BSI AIC4. Freely downloadable.
- OWASP - Top 10 for Large Language Model Applications
- CSA - MCP Client Top 10
- CSA - MCP Server Top 10
- OWASP - AI Testing Guide
- OWASP - Red Teaming Guide
- OWASP - LLM Exploit Generation
- CSA - Agentic AI Red Teaming Guide
- OWASP
- CSA - Secure LLM Systems: Essential Authorization Practices
- OASIS CoSAI - Preparing Defenders of AI Systems
- DoD CIO - AI Cybersecurity Risk Management Tailoring Guide (2025) - Practical RMF tailoring for AI systems across the lifecycle; complements CDAO’s RAI toolkit.
- NCSC (UK) - Guidelines for Secure AI System Development - End-to-end secure AI SDLC (secure design, development, deployment, and secure operation & maintenance), including logging/monitoring and update management.
- SANS – Critical AI Security Guidelines
- Control-focused guidance for securing AI/LLM systems across six domains (e.g., access controls, data protection, inference security, monitoring, GRC).
- BSI – Security of AI Systems: Fundamentals - Sector-agnostic fundamentals: lifecycle threat model (data/model/pipeline/runtime), adversarial ML attacks (poisoning, evasion, inversion, extraction, backdoors), and baseline controls for design→deploy→operate, plus assurance/certification guidance.
- MITRE – SAFE-AI: A Framework for Securing AI-Enabled Systems - Threat-informed RMF overlay for AI: maps AI/ATLAS tactics to NIST SP 800-53 controls, lists ~100 AI-affected controls, and includes assessor interview Q&A sets to plan SCAs.
- NSA - Artificial Intelligence Security Center (AISC)
- Deploying AI Systems Securely (CSI) - Practical, ops-focused guidance for deploying/operating externally developed AI systems (with CISA, FBI & international partners); complements NCSC’s secure-AI dev guidelines.
- AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems (CSI) - Joint guidance on securing data across the AI lifecycle.
- Content Credentials: Strengthening Multimedia Integrity in the Generative AI Era (CSI) - Provenance and Durable Content Credentials for transparent media.
- Contextualizing Deepfake Threats to Organizations (CSI) - Risks, impacts, and mitigations for synthetic media targeting orgs.
- OWASP - Agent Observability Standard (AOS)
- OWASP - Agent Name Service (ANS) for Secure AI Agent Discovery
- OWASP - Agentic AI - Threats and Mitigations
- OWASP - Securing Agentic Applications Guide
- OWASP - State of Agentic AI Security and Governance
- CSA - Secure Agentic System Design: A Trait-Based Approach
- CSA - Agentic AI Identity & Access Management - 08/25
- OWASP - Multi-Agentic System Threat Modeling Guide - Applies OWASP’s agentic threat taxonomy to multi-agent systems and demonstrates modeling using the MAESTRO framework with worked examples.
- AWS - Threat modeling your generative AI workload to evaluate security risk - Practical, four-question approach (what are we working on; what can go wrong; what are we going to do about it; did we do a good enough job) with concrete deliverables: DFDs and assumptions, threat statements using AWS’s threat grammar, mapped mitigations, and validation; includes worked examples and AWS Threat Composer templates.
- Microsoft - Threat Modeling AI/ML Systems and Dependencies - Practical guidance for threat modeling AI/ML: “Key New Considerations” questions plus a threats→mitigations catalog (adversarial perturbation, data poisoning, model inversion, membership inference, model stealing) based on “Failure Modes in Machine Learning”; meant for security design reviews of products that use or depend on AI/ML.
- DHS/CISA - Safety & Security Guidelines for Critical Infrastructure AI - Cross-lifecycle guidance for owners/operators (govern, design, develop, deploy, operate); developed with SRMAs and informed by CISA’s cross-sector risk analysis.
↑Tools
Inclusion criteria (open-source tools): must have 220+ GitHub stars, active maintenance in the last 12 months, and ≥3 contributors.
Detect and stop prompt-injection (direct/indirect) across inputs, context, and outputs; filter hostile content before it reaches tools or models.
- (none from your current list yet)
Enforce safety policies and block jailbreaks at runtime via rules/validators/DSLs, with optional human-in-the-loop for sensitive actions.
- NeMo Guardrails
- LLM Guard
- Llama Guard
- LlamaFirewall
- Code Shield
- Guardrails
- Runtime policy enforcement for LLM apps: compose input/output validators (PII, toxicity, jailbreak/PI, regex, competitor checks), then block/redact/rewrite/retry on fail; optional server mode; also supports structured outputs (Pydantic/function-calling).
Analyze serialized model files for unsafe deserialization and embedded code; verify integrity/metadata and block or quarantine on fail.
Scan/audit MCP servers & client configs; detect tool poisoning, unsafe flows; constrain tool access with least-privilege and audit trails.
- Beelzebub
- Beelzebub is a honeypot framework designed to provide a secure environment for detecting and analyzing cyber attacks. It offers a low code approach for easy implementation and uses AI to mimic the behavior of a high-interaction honeypot.
- PortSwigger - MCP Server
- ToolHive
- MCP server orchestrator for desktop, CLI, and Kubernetes Operator: discover and deploy servers in isolated containers with restricted permissions, manage secrets, use an optional egress proxy, auto-configure popular MCP clients (e.g., GitHub Copilot, Cursor), and manage at scale via CRDs/registry.
Run untrusted or LLM-triggered code in isolated sandboxes (FS/network/process limits) to contain RCE and reduce blast radius.
-
E2B
- SDK + self-hostable infra to run untrusted, LLM-generated code in isolated cloud sandboxes (Firecracker microVMs).
-
microsandbox
- self-hosted microVM (libkrun) sandbox for untrusted AI/user code.
Centralize auth, quotas/rate limits, cost caps, egress/DLP filters, and guardrail orchestration across all model/providers.
- (none from your current list yet)
- Claude Code Security Reviewer
- An AI-powered security review GitHub Action using Claude to analyze code changes for security vulnerabilities.
- Vulnhuntr
- Vulnhuntr leverages the power of LLMs to automatically create and analyze entire code call chains starting from remote user input and ending at server output for detection of complex, multi-step, security-bypassing vulnerabilities that go far beyond what traditional static code analysis tools are capable of performing.
Automate attack suites (prompt-injection, leakage, jailbreak, goal-based tasks) in CI; score results and produce regression evidence.
- promptfoo
- Agentic Radar
- DeepTeam
- Buttercup
- Trail of Bits’ AIxCC Cyber Reasoning System: runs OSS-Fuzz-style campaigns to find vulns, then uses a multi-agent LLM patcher to generate & validate fixes for C/Java repos; ships SigNoz observability; requires at least one LLM API key.
- Giskard
- Pre-deployment/CI evaluation harness for LLM/RAG: runs scan checks (prompt injection, harmful output, sensitive-information disclosure, robustness), auto-generates RAG evaluation datasets and component scores (retriever, generator, rewriter, router), exports shareable reports, and integrates with CI for regression gates.
- (none from your current list yet)
Generate and verify AI/ML BOMs, signatures, and provenance for models/datasets/dependencies; enforce allow/deny policies.
- (none from your current list yet)
Harden RAG memory: isolate namespaces, sanitize queries/content, detect poisoning/outliers, and prevent secret/PII retention.
- (none from your current list yet)
Detect and mitigate dataset/model poisoning and backdoors; validate training/fine-tuning integrity and prune suspicious behaviors.
Prevent secret/PII exfiltration in prompts/outputs via detection, redaction, and policy checks at I/O boundaries.
- Presidio
- PII/PHI detection & redaction for text, images, and structured data; use as a pre/post-LLM DLP filter and for dataset sanitization.
Collect AI-specific security logs/signals; detect abuse patterns (PI/jailbreak/leakage), enrich alerts, and support forensics.
-
LangKit
- LLM observability metrics toolkit (whylogs-compatible): prompt-injection/jailbreak similarity, PII patterns, hallucination/consistency, relevance, sentiment/toxicity, readability.
-
Alibi Detect
- Production drift/outlier/adversarial detection for tabular, text, images, and time series; online/offline detectors with TF/PyTorch backends; returns scores, thresholds, and flags for alerting.
↑Attack & Defense Matrices
Matrix-style resources covering adversarial TTPs and curated defensive techniques for AI systems.
- MITRE ATLAS - Adversarial TTP matrix and knowledge base for threats to AI systems.
- GenAI Attacks Matrix - Matrix of TTPs targeting GenAI apps, copilots, and agents.
- MCP Security Tactics, Techniques, and Procedures (TTPs)
- AIDEFEND - AI Defense Framework
- Interactive defensive countermeasures knowledge base with Tactics / Pillars / Phases views; maps mitigations to MITRE ATLAS, MAESTRO, and OWASP LLM risks. • Live demo: https://edward-playground.github.io/aidefense-framework/
↑Checklists
↑Supply Chain Security
Guidance and standards for securing the AI/ML software supply chain (models, datasets, code, pipelines). Primarily specs and frameworks; includes vetted TPRM templates.
Normative formats and specifications for transparency and traceability across AI components and dependencies.
- OWASP - AI Bill of Materials (AIBOM)
- Bill of materials format for AI components, datasets, and model dependencies.
Questionnaires and templates to assess external vendors, model providers, and integrators for security, privacy, and compliance.
- FS-ISAC - Generative AI Vendor Evaluation & Qualitative Risk Assessment - Assessment Tool XLSX • Guide PDF - Vendor due-diligence toolkit for GenAI: risk tiering by use case, integration and data sensitivity; questionnaires across privacy, security, model development and validation, integration, legal and compliance; auto-generated reporting.
↑Videos & Playlists
Monthly curated playlists of AI-security talks, demos, incidents, and tooling.
- AI Security Playlist - September 2025
- AI Security Playlist - August 2025
- AI Security Playlist - July 2025
- AI Security Playlist - June 2025
↑Newsletter
- Adversarial AI Digest - A digest of AI security research, threats, governance challenges, and best practices for securing AI systems.
↑Datasets
- Kaggle - Community-contributed datasets (IDS, phishing, malware URLs, incidents).
- Hugging Face - Search HF datasets tagged/related to cybersecurity and threat intel.
- SafetyPrompts - living index of LLM safety datasets & evals (jailbreak, prompt injection, toxicity, privacy), with filters and a maintained sheet.
- Awesome Cybersecurity Datasets
Interactive CTFs and self-contained labs for hands-on security skills (web, pwn, crypto, forensics, reversing). Used to assess practical reasoning, tool use, and end-to-end task execution.
- InterCode-CTF
- 100 picoCTF challenges (high-school level); categories: cryptography, web, binary exploitation (pwn), reverse engineering, forensics, miscellaneous. [Dataset+Benchmark] arXiv
- NYU CTF Bench
- 200 CSAW challenges (2017-2023); difficulty very easy → hard; categories: cryptography, web, binary exploitation (pwn), reverse engineering, forensics, miscellaneous. [Dataset+Benchmark] arXiv
- CyBench
- 40 tasks from HackTheBox, Sekai CTF, Glacier, HKCert (2022-2024); categories: cryptography, web, binary exploitation (pwn), reverse engineering, forensics, miscellaneous; difficulty grounded by first-solve time (FST). [Dataset+Benchmark] arXiv
- pwn.college CTF Archive
- large collection of runnable CTF challenges; commonly used as a source corpus for research. [Dataset]
-
Devign / CodeXGLUE-Vul
- function-level C vuln detection. [Dataset+Benchmark]
-
DiverseVul
- multi-CWE function-level detection (C/C++). [Dataset]
-
Big-Vul
- real-world C/C++ detection (often with localization). [Dataset]
-
Py150k
- ≈150k Python snippets (GitHub). Static analysis with Bandit, Semgrep, Snyk identified 42,753 vulnerabilities across 26,147 snippets; common CWEs: XSS (18%), SQLi (15%), Improper Input Validation (12%), OS Command Injection (10%), Information Exposure (8%). Collected from GitHub with dedup/fork removal, only parsable code (AST checks, ≤30k nodes), and permissive licenses. Used for: training and fine-tuning (e.g., CodeGen, CodeGen2/2.5, CodeLlama, CrystalCoder, CodeT5+).
- CVEfixes
- CVE-linked fix commits for security repair. [Dataset]
- Also used for repair: Big-Vul (generate minimal diffs, then build + scan).
- OWASP Benchmark (Java)
- runnable Java app with seeded vulns; supports SAST/DAST/IAST evaluation and scoring. [Dataset+Benchmark]
- Juliet (NIST SARD) (C/C++ mirror
• Java mirror
) - runnable CWE cases for detect → fix → re-test. [Dataset+Benchmark]
Phishing dataset gap: there isn’t a public corpus that, per page, stores the URL plus full HTML/CSS/JS, images, favicon, and a screenshot. Most sources are just URL feeds; pages vanish quickly; older benchmarks drift, so models don’t generalize well. Collect a per-URL archive of all page resources, with caveats that screenshots are viewport-only and some assets may be blocked by browser safety.
- PhishTank - Continuously updated dataset (API/feed); community-verified phishing URLs; labels zero-day phishing; offers webpage screenshots.
- OpenPhish - Regularly updated phishing URLs with fields such as webpage info, hostname, supported language, IP presence, country code, and SSL certificate; includes brand-target stats.
- PhreshPhish - 372k HTML–URL samples (119k phishing / 253k benign) with full-page HTML, URLs, timestamps, and brand targets (~185 brands) across 50+ languages; suitable for training and evaluating URL/page-based phishing detection.
- Phishing.Database - Continuously updated lists of phishing domains/links/IPs (ACTIVE/INACTIVE/INVALID and NEW last hour/today); repo resets daily-download lists; status validated via PyFunceble.
- UCI – Phishing Websites - 11,055 URLs (phishing and legitimate) with 30 engineered features across URL, content, and third-party signals.
- Mendeley – Phishing Websites Dataset - Labeled phishing/legitimate samples; provides webpage content (HTML) for each URL.; useful for training/eval.
- UCI – PhiUSIIL Phishing URL - 235,795 URLs (134,850 legitimate; 100,945 phishing) with 54 URL/content features; labels: Class 1 = legitimate, Class 0 = phishing.
- MillerSmiles - Large archive of phishing email scams with the URLs used; long-running email corpus (not a live feed).
Structured Q&A datasets assessing security knowledge and terminology. Used to evaluate factual recall and conceptual understanding.
Code snippet datasets labeled as vulnerable or secure, often tied to CWEs (Common Weakness Enumeration). Used to evaluate the model’s ability to recognize insecure code patterns and suggest secure fixes.
-
Py150k
- ≈150k Python files from GitHub (deduped/fork-removed); Static analysis with Bandit, Semgrep, Snyk identified 42,753 vulnerabilities across 26,147 snippets; common CWEs: XSS (18%), SQLi (15%), Improper Input Validation (12%), OS Command Injection (10%), Information Exposure (8%). Collected from GitHub with dedup/fork removal, only parsable code (AST checks, ≤30k nodes), and permissive licenses. Used for: training and fine-tuning (e.g., CodeGen, CodeGen2/2.5, CodeLlama, CrystalCoder, CodeT5+).
- Avast–CTU Public CAPEv2 Dataset
- 48,976 sandbox JSON reports (CAPEv2) across 10 families (Adload, Emotet, HarHar, Lokibot, njRAT, Qakbot, Swisyn, Trickbot, Ursnif, Zeus); per-sample metadata:
sha256
, family, type (banker
,trojan
,pws
,coinminer
,rat
,keylogger
), detection date. Two versions: Full (~13 GB) and Reduced (~566 MB) keepingbehavior.summary
+static.pe
(avoids label leakage). Used for: behavior-based malware classification & concept-drift studies. - arXiv
- ASVspoof 5 - train / dev / eval - Train: 8 TTS attacks; Dev: 8 unseen (validation/fusion); Eval: 16 unseen incl. adversarial/codec. Labels:
bona-fide
/spoofed
. arXiv - In-the-Wild (ITW) - 58 politicians/celebrities with per-speaker pairing; ≈20.7 h
bona-fide
+ 17.2 hspoofed
, scraped from social/video platforms. Labels:bona-fide
/spoofed
. arXiv - MLAAD (+M-AILABS) - Multilingual synthetic TTS corpus (hundreds of hours; many models/languages). Labels:
bona-fide
(M-AILABS) /spoof
(MLAAD). arXiv - LlamaPartialSpoof - LLM-driven attacker styles; includes full and partial (spliced) spoofs. Labels:
bona-fide
/fully-spoofed
/partially-spoofed
. arXiv - Fake-or-Real (FoR) - >195k utterances; four variants:
for-original
,for-norm
,for-2sec
,for-rerec
. Labels:real
/synthetic
. - CodecFake - codec-based deepfake audio dataset (Interspeech 2024); Labels:
real
/codec-generated fake
. arXiv
Adversarial prompt datasets-both text-only and multimodal-designed to bypass safety mechanisms or test refusal logic. Used to test how effectively a model resists jailbreaks and enforces policy-based refusal.
- CySecBench
cybersecurity-domain jailbreak dataset with 12,662 close-ended prompts across multiple attack categories; paper introduces an obfuscation-based jailbreaking method and LLM evals.
- JailBreakV-28K
multimodal jailbreak benchmark with ~28k test cases (20k text-based transfer attacks + 8k image-based) to assess MLLM robustness; HF page includes a mini-leaderboard and image types.
- Do-Not-Answer
refusal-evaluation set of 939 “should-refuse” prompts plus an automatic evaluator; answering instead of refusing can be used as a jailbreak-success signal.
Public prompt-injection datasets have recurring limitations: partial staleness as models and defenses evolve, CTF skew toward basic instruction following, and label mixing across toxicity, jailbreak roleplay, and true injections that inflates measured true positive rates and distorts evaluation.
- prompt-injection-attack-dataset
3.7k rows pairing benign task prompts with attack variants (naive / escape / ignore / fake-completion / combined). Columns for both target and injected tasks; train split only.
- prompt-injections-benchmark
5,000 prompts labeled
jailbreak
/benign
for robustness evals. - prompt_injections
~1k short injection prompts; multilingual (EN, FR, DE, ES, IT, PT, RO); single
train
split; CSV/Parquet. - prompt-injection
Large-scale injection/benign corpus (~327k rows,
train/test
) for training baselines and detectors. - prompt-injection-safety
60k rows (
train
50k /test
10k); 3-way labels: benign0
, injection1
, harmful request2
; Parquet.
Collections of leaked, official, and synthetic system prompts and paired responses used to study guardrails and spot system prompt exposure. Used to build leakage detectors, craft targeted guardrail tests (consent gates, tool use rules, safety policies), and reproduce vendor behaviors for evaluation.
- Official_LLM_System_Prompts
- leaked and date-stamped prompts from proprietary assistants (OpenAI, Anthropic, MS Copilot, GitHub Copilot, Grok, Perplexity); 29 rows.
- system-prompt-leakage
- synthetic prompts + responses for leakage detection; train 283,353 / test 71,351 (binary leakage labels).
- system-prompts-and-models-of-ai-tools
- community collection of prompts and internal tool configs for code/IDE agents and apps (Cursor, VSCode Copilot Agent, Windsurf, Devin, v0, etc.); includes a security notice.
- system_prompts_leaks
- collection of extracted system prompts from popular chatbots like ChatGPT, Claude & Gemini
- leaked-system-prompts
- leaked prompts across many services; requires verifiable sources or reproducible prompts for PRs.
- chatgpt_system_prompt
- community collection of GPT system prompts, prompt-injection/leak techniques, and protection prompts.
- CL4R1T4S
- extracted/leaked prompts, guidelines, and tooling references spanning major assistants and agents (OpenAI, Google, Anthropic, xAI, Perplexity, Cursor, Devin, etc.).
- grok-prompts
- official xAI repository publishing Grok’s system prompts for chat/X features (DeepSearch, Ask Grok, Explain, etc.).
- Prompt-Leakage Finetune
- adversarial attack prompts (~1,300) used to instruction-tune refusal to system-prompt extraction (synthetic + Gandalf subset).
↑Courses & Certifications
- SANS - AI Cybersecurity Careers - Career pathways poster + training map; baseline skills for AI security (IR, DFIR, detection, threat hunting).
- SANS - SEC545: GenAI & LLM Application Security - Hands-on course covering prompt injection, excessive agency, model supply chain, and defensive patterns. (Certificate of completion provided by SANS.)
- SANS - SEC495: Leveraging LLMs: Building & Securing RAG, Contextual RAG, and Agentic RAG - Practical RAG builds with threat modeling, validation, and guardrails. (Certificate of completion provided by SANS.)
- Practical DevSecOps - Certified AI Security Professional (CAISP) - Hands-on labs covering LLM Top 10, AI Attack and Defend techniques, MITRE ATLAS Framework, AI Threat Modeling, AI supply chain attacks, Secure AI Deployment, and AI Governance. (Certificate of completion provided by Practical DevSecOps.)
- IAPP - Artificial Intelligence Governance Professional (AIGP) - Governance-focused credential aligned with emerging regulations.
- ISACA - Advanced in AI Security Management (AAISM™) - AI-centric security management certification.
- NIST AI RMF 1.0 Architect - Certified Information Security - Credential aligned to NIST AI RMF 1.0.
- ISO/IEC 23894 - AI Risk Management (AI Risk Manager, PECB) - Risk identification, assessment, and mitigation aligned to ISO/IEC 23894 and NIST AI RMF.
- ISO/IEC 42001 - AI Management System (Lead Implementer, PECB) - Implement an AIMS per ISO/IEC 42001.
- ISO/IEC 42001 - AI Management System (Lead Auditor, PECB) - Audit AIMS using recognized principles.
- ISACA - Advanced in AI Audit (AAIA™) - Certification for auditing AI systems and mitigating AI-related risks.
- Practical DevSecOps - Certified AI Security Professional (CAISP) - Challenge-based exam certification simulating real-world AI security scenarios. 5 Challenges and 6 hours duration and report submission.
↑Training
- Microsoft AI Security Learning Path - Free, self-paced Microsoft content on secure AI model development, risk management, and threat mitigation.
- AWS AI Security Training - Free AWS portal with courses on securing AI applications, risk management, and AI/ML security best practices.
- PortSwigger - Web Security Academy: Web LLM attacks - Structured, guided track on LLM issues (prompt injection, insecure output handling, excessive agency) with walkthrough-style exercises.
- AI GOAT
- Vulnerable LLM CTF challenges for learning AI security.
- Damn Vulnerable LLM Agent
- AI Red Teaming Playground Labs - Microsoft
- Self-hostable environment with 12 challenges (direct/indirect prompt injection, metaprompt extraction, Crescendo multi-turn, guardrail bypass).
- Trail of Bits - AI/ML Security & Safety Training - Courses on AI failure modes, adversarial attacks, data provenance, pipeline threats, and mitigation.
↑Models
- segolilylabs/Lily-Cybersecurity-7B-v0.2-GGUF
- quantized GGUF build of a 7B cybersecurity-tuned chat model.
- DeepHat/DeepHat-V1-7B
- 7B cybersecurity-oriented text-generation model.
- clouditera/secgpt
- cybersecurity-tuned instruction model (CN/EN) with released weights (variants incl. 1.5B/7B/14B); built on Qwen2.5-Instruct/DeepSeek-R1, Apache-2.0, supports vLLM deployment.
- ZySec-AI/SecurityLLM
- cybersecurity-focused chat model (“ZySec-7B”); weights available. Community GGUF quantization exists for llama.cpp.
- jackaduma/SecRoBERTa
- RoBERTa trained on cybersecurity corpora for fill-mask tasks.
- jackaduma/SecBERT
- BERT trained on security corpora (APTnotes, CASIE, Stucco).
- ehsanaghaei/SecureBERT
- RoBERTa-based domain LM for CTI/automation. - arXiv
- markusbayer/CySecBERT
- BERT further pre-trained on cybersecurity text for CTI tasks. - arXiv
- ibm-research/CTI-BERT
- BERT tuned on large security text for CTI extraction. - paper
- basel/ATTACK-BERT
- sentence-transformer for cybersecurity: maps attack-action text to embeddings; used to map free text to MITRE ATT&CK techniques (see SMET). :contentReference[oaicite:0]{index=0}
- meta-llama/Llama-Prompt-Guard-2-86M
- multi-label prompt-injection/jailbreak detector (lightweight).
- google/shieldgemma-2b
- text safety classifier. - arXiv
- meta-llama/Llama-Guard-4-12B
- multimodal safety classifier. - arXiv (v1) • arXiv (v3 vision)
- protectai/deberta-v3-base-prompt-injection-v2
- DeBERTa-based PI detector.
- qualifire/prompt-injection-sentinel
- ModernBERT-large PI/jailbreak classifier. - arXiv
- ICL-ml4csec/VulBERTa
- RoBERTa pre-trained on real-world C/C++; used for vuln detectors. - arXiv
- MTUCI/AASIST3
- enhanced AASIST (KAN + SSL features).
↑Research Working Groups
- Cloud Security Alliance (CSA) AI Security Working Groups - Collaborative research groups focused on AI security, cloud security, and emerging threats in AI-driven systems.
- OWASP Top 10 for LLM & Generative AI Security Risks Project - An open-source initiative addressing critical security risks in Large Language Models (LLMs) and Generative AI applications, offering resources and guidelines to mitigate emerging threats.
- CWE Artificial Intelligence Working Group (AI WG) - The AI WG was established by CWE™ and CVE® community stakeholders to identify and address gaps in the CWE corpus where AI-related weaknesses are not adequately covered, and work collaboratively to fix them.
- NIST - SP 800-53 Control Overlays for Securing AI Systems (COSAiS) - Public collaboration to develop AI security control overlays with NIST principal investigators and the community.
- OpenSSF - AI/ML Security Working Group - Cross-org WG on “security for AI” and “AI for security”
- CoSAI - Coalition for Secure AI (OASIS Open Project) - Open, cross-industry initiative advancing secure-by-design AI through shared frameworks, tooling, and guidance.
- WS1: Software Supply Chain Security for AI Systems - Extends SSDF/SLSA principles to AI; provenance, model risks, and pipeline security.https://github.com/cosai-oasis/ws1-supply-chain
- WS2: Preparing Defenders for a Changing Cybersecurity Landscape - Defender-focused framework aligning threats, mitigations, and investments for AI-driven ops. https://github.com/cosai-oasis/ws2-defenders
• Reference doc: “Preparing Defenders of AI Systems” https://github.com/cosai-oasis/ws2-defenders/blob/main/preparing-defenders-of-ai-systems.md - WS3: AI Security Risk Governance - Security-focused risk & controls taxonomy, checklist, and scorecard for AI products and components.https://github.com/cosai-oasis/ws3-ai-risk-governance
- WS4: Secure Design Patterns for Agentic Systems - Threat models and secure design patterns for agentic systems and infrastructure. https://github.com/cosai-oasis/ws4-secure-design-agentic-systems
📌 (More working groups to be added.)
↑Communities & Social Groups
↑Benchmarks
Purpose: Evaluates the security of model-generated code using CWE-tagged prompts and static analysis.
- LLMSecEval
- Prompt-based, CWE-mapped security benchmark for code-generation models; generate from each prompt and score with static analysis (e.g., CodeQL / Semgrep / Bandit) to label outputs secure vs. vulnerable and compute per-CWE metrics. Used for: benchmarking generated-code security. arXiv
Purpose: Evaluates how AI systems withstand adversarial attacks, including evasion, poisoning, and model extraction. Ensures AI remains functional under manipulation.
NIST AI RMF Alignment: Measure, Manage
- Measure: Identify risks related to adversarial attacks.
- Manage: Implement mitigation strategies to ensure resilience.
AutoPenBench - 33 tasks: 22 in-vitro fundamentals (incl. 4 crypto) + 11 real-world CVEs for autonomous pentesting evaluation. arXiv • Best for: controlled, task-based coverage across fundamentals and known CVEs (repeatable, fine-grained scoring).
AI-Pentest-Benchmark - 13 full vulnerable VMs (from VulnHub), 152 subtasks across Recon (72), Exploit (44), PrivEsc (22), and General (14), for end-to-end recon → exploit → privesc benchmarking. arXiv • Best for: realistic, end-to-end machine takeovers stressing planning, tool use, and multi-step reasoning.
CVE-Bench - 40 real-world web CVEs in dockerized apps; evaluates agent-driven exploit generation/execution. arXiv • Best for: focused testing of exploitability against real CVEs (web).
NYU CTF Bench - 200 dockerized CSAW challenges (web, pwn, rev, forensics, crypto, misc.) for skill-granular agent evaluation. arXiv • Best for: CTF-style, per-skill assessment and tool-use drills.
AgentHarm human-authored harmful agent tasks for tool-using agents with benign counterparts, synthetic proxy tools, and a reproducible scoring harness; 110 base tasks (440 with augmentation), 11 categories, 104 tools. arXiv • Best for: measuring refusal vs completion on multi-step tool use and the impact of jailbreaks.
Purple Llama – CyberSecEval - evaluates models’ propensity to assist cyber-offense (exploit/malware) and to generate insecure code; graded-risk tasks with a reproducible harness. Best for: dangerous-capability / misuse-risk scoring (text/IDE, non-agent).
Purpose: Evaluates resistance to prompt-injection and jailbreak attempts in chat/RAG/agent contexts.
NIST AI RMF Alignment: Measure, Manage
-
Lakera PINT Benchmark
Prompt-injection benchmark with a curated multilingual test suite, explicit categories (injections, jailbreaks, hard negatives, benign chats/docs), and a reproducible scoring harness (PINT score + notebooks) for fair detector comparison and regression tracking.
-
JailbreakBench
standardized jailbreak prompts + scoring harness; measures refusal/compliance and jailbreak success across models and settings.
Purpose: Assesses AI models for unauthorized modifications, including backdoors and dataset poisoning. Supports trustworthiness and security of model outputs.
NIST AI RMF Alignment: Map, Measure
-
Map: Understand and identify risks to model/data integrity.
-
Measure: Evaluate and mitigate risks through validation techniques.
-
CVE-Bench - @uiuc-kang-lab
- How well AI agents can exploit real-world software vulnerabilities that are listed in the CVE database.
Purpose: Ensures AI security aligns with governance frameworks, industry regulations, and security policies. Supports auditability and risk management.
NIST AI RMF Alignment: Govern
- Govern: Establish policies, accountability structures, and compliance controls.
Purpose: Evaluates AI for risks like data leakage, membership inference, and model inversion. Helps ensure privacy preservation and compliance.
NIST AI RMF Alignment: Measure, Manage
- Measure: Identify and assess AI-related privacy risks.
- Manage: Implement security controls to mitigate privacy threats.
Purpose: Assesses AI for transparency, fairness, and bias mitigation. Ensures AI operates in an interpretable and ethical manner.
NIST AI RMF Alignment: Govern, Map, Measure
- Govern: Establish policies for fairness, bias mitigation, and transparency.
- Map: Identify potential explainability risks in AI decision-making.
- Measure: Evaluate AI outputs for fairness, bias, and interpretability.
↑Incident Response
- AI Incident Database (AIID)
- MIT AI Risk Repository - Incident Tracker
- AIAAIC Repository
- OECD.AI - AIM: AI Incidents and Hazards Monitor
- AVID - AI Vulnerability Database - Open, taxonomy-driven catalog of AI failure modes; Vulnerabilities, Reports map incidents to failure modes/lifecycle stages.
- OWASP - GenAI Incident Response Guide
- OWASP - Guide for Preparing & Responding to Deepfake Events
- CISA - JCDC AI Cybersecurity Collaboration Playbook - Info-sharing & coordination procedures for AI incidents.
- eSafety Commissioner - Guide to responding to image-based abuse involving AI deepfakes (PDF) - Practical, step-by-step playbook (school-focused but adaptable) covering reporting/takedown, evidence preservation, and support.
- EU AI Act - Article 73: Reporting of Serious Incidents - Providers of high-risk AI systems need to report serious incidents to national authorities.
↑Reports and Research
- AI Security Research Feed - Continuously updated feed of AI security-related academic papers, preprints, and research indexed from arXiv.
- AI Security Portal - Literature Database - Categorized database of AI security literature, taxonomy, and related resources.
- CSA - Principles to Practice: Responsible AI in a Dynamic Regulatory Environment
- CSA - AI Resilience: A Revolutionary Benchmarking Model for AI Safety - Governance & compliance benchmarking model.
- CSA - Using AI for Offensive Security
📌 (More to be added - A collection of AI security reports, white papers, and academic studies.)
↑Foundations: Glossary, SoK/Surveys & Taxonomies
(Core references and syntheses for orientation and shared language.)
(Authoritative definitions for AI/ML security, governance, and risk-use to align terminology across docs and reviews.)
- NIST - “The Language of Trustworthy AI: An In-Depth Glossary of Terms.” - Authoritative cross-org terminology aligned to NIST AI RMF; useful for standardizing terms across teams.
- ISO/IEC 22989:2022 - Artificial intelligence - Concepts and terminology - International standard that formalizes core AI concepts and vocabulary used in policy and engineering.
(Systematizations of Knowledge (SoK), surveys, systematic reviews, and mapping studies.)
(Reusable classification schemes-clear dimensions, categories, and labeling rules for attacks, defenses, datasets, and risks.)
- CSA - Large Language Model (LLM) Threats Taxonomy - Community taxonomy of LLM-specific threats; clarifies categories/definitions for risk discussion and control mapping.
- ARC - PI (Prompt Injection) Taxonomy - Focused taxonomy for prompt-injection behaviors/variants with practical labeling guidance for detection and defense.
↑Podcasts
- The MLSecOps Podcast - Insightful conversations with industry leaders and AI experts, exploring the fascinating world of machine learning security operations.
↑Market Landscape
Curated market maps of tools and vendors for securing LLM and agentic AI applications across the lifecycle.
- OWASP - LLM and Generative AI Security Solutions Landscape
- OWASP - AI Security Solutions Landscape for Agentic AI
- Latio - 2025 AI Security Report - Market trends and vendor landscape snapshot for AI security.
- Woodside Capital Partners - Cybersecurity Sector - A snapshot with vendor breakdowns and landscape view.
- Insight Partners - Cybersecurity Portfolio Overview (Market Map) - Visual market map and portfolio overview across cybersecurity domains.
↑Startups Blogs
A curated list of startups securing agentic AI applications, organized by the OWASP Agentic AI lifecycle (Scope & Plan → Govern). Each company appears once in its best-fit stage based on public positioning, and links point to blog/insights for deeper context. Some startups span multiple stages; placements reflect primary focus.
Inclusion criteria
- Startup has not been acquired
- Has an active blog
- Has an active GitHub organization/repository
Design-time security: non-human identities, agent threat modeling, privilege boundaries/authn, and memory scoping/isolation.
no startups here with active blog and active GitHub account
Secure agent loops and tool use; validate I/O contracts; embed policy hooks; test resilience during co-engineering.
no startups here with active blog and active GitHub account
Sanitize/trace data and reasoning; validate alignment; protect sensitive memory with privacy controls before deployment.
Adversarial testing for goal drift, prompt injection, and tool misuse; red-team sims; sandboxed calls; decision validation.
Sign models/plugins/memory; verify SBOMs; enforce cryptographically validated policies; register agents/capabilities.
no startups here with active blog and active GitHub account
Zero-trust activation: rotate ephemeral creds, apply allowlists/LLM firewalls, and fine-grained least-privilege authorization.
Monitor memory mutations for drift/poisoning, detect abnormal loops/misuse, enforce HITL overrides, and scan plugins-continuous, real-time vigilance for resilient operations as systems scale and self-orchestrate.
Correlate agent steps/tools/comms; detect anomalies (e.g., goal reversal); keep immutable logs for auditability.
Enforce role/task policies, version/retire agents, prevent privilege creep, and align evidence with AI regulations.
↑Related Awesome Lists
- Awesome LLMSecOps - wearetyomsmnv
- OSS LLM Security - kaplanlior
- Awesome LLM Security - corca-ai
- Security for AI - zmre
- Awesome AI Security - DeepSpaceHarbor
- Awesome AI for Cybersecurity - Billy1900
- Awesome ML Security - Trail of Bits
- Awesome MLSecOps - RiccardoBiosas
- MLSecOps References - disesdi
- Awesome ML Privacy Attacks - StratosphereIPS
- Awesome LLM Supply Chain Security - ShenaoW
- Awesome Prompt Injection - FonduAI
- Awesome Jailbreak on LLMs - yueliu1999
- Awesome LM-SSP (Large Model Security, Safety & Privacy) - ThuCCSLab
- Security & Privacy for LLMs (llm-sp) - chawins
- Awesome LVLM Attack - liudaizong
- Awesome ML/SP Papers - gnipping
- Awesome LLM JailBreak Papers - WhileBug
- Awesome Adversarial Machine Learning - man3kin3ko
- LLM Security & Privacy - briland
- Awesome GenAI Security - jassics
- Awesome GenAI CyberHub - Ashfaaq98
- Awesome AI for Security - AmanPriyanshu
- Awesome ML for Cybersecurity - jivoi
- Awesome AI Security - ottosulin
- Awesome AI4DevSecOps - awsm-research
- Prompt Hacking Resources - PromptLabs
- Awesome LALMs Jailbreak - WangCheng0116
- Awesome LRMs Safety - WangCheng0116
- Awesome LLM Safety - ydyjya
- Awesome MCP Security - Puliczek
↑Common Acronyms
Acronym | Full Form |
---|---|
AI | Artificial Intelligence |
AGI | Artificial General Intelligence |
ALBERT | A Lite BERT |
AOC | Area Over Curve |
ASR | Attack Success Rate |
BERT | Bidirectional Encoder Representations from Transformers |
BGMAttack | Black-box Generative Model-based Attack |
CBA | Composite Backdoor Attack |
CCPA | California Consumer Privacy Act |
CNN | Convolutional Neural Network |
CoT | Chain-of-Thought |
DAN | Do Anything Now |
DFS | Depth-First Search |
DNN | Deep Neural Network |
DPO | Direct Preference Optimization |
DP | Differential Privacy |
FL | Federated Learning |
GA | Genetic Algorithm |
GDPR | General Data Protection Regulation |
GPT | Generative Pre-trained Transformer |
GRPO | Group Relative Policy Optimization |
HIPAA | Health Insurance Portability and Accountability Act |
ICL | In-Context Learning |
KL | Kullback-Leibler Divergence |
LAS | Leakage-Adjusted Simulatability |
LM | Language Model |
LLM | Large Language Model |
Llama | Large Language Model Meta AI |
LoRA | Low-Rank Adapter |
LRM | Large Reasoning Model |
MCTS | Monte-Carlo Tree Search |
MIA | Membership Inference Attack |
MDP | Masking-Differential Prompting |
MLM | Masked Language Model |
MLLM | Multimodal Large Language Model |
MLRM | Multimodal Large Reasoning Model |
MoE | Mixture-of-Experts |
NLP | Natural Language Processing |
OOD | Out Of Distribution |
ORM | Outcome Reward Model |
PI | Prompt Injection |
PII | Personally Identifiable Information |
PAIR | Prompt Automatic Iterative Refinement |
PLM | pre-trained Language Model |
PRM | Process Reward Model |
QA | Question-Answering |
RAG | Retrieval-Augmented Generation |
RL | Reinforcement Learning |
RLHF | Reinforcement Learning from Human Feedback |
RLVR | Reinforcement Learning with Verifiable Reward |
RoBERTa | Robustly optimized BERT approach |
SCM | Structural Causal Model |
SGD | Stochastic Gradient Descent |
SOTA | State of the Art |
TAG | Gradient Attack on Transformer-based Language Models |
VR | Verifiable Reward |
XLNet | Transformer-XL with autoregressive and autoencoding pre-training |
↑Contributing
Contributions are welcome! If you have new resources, tools, or insights to add, feel free to submit a pull request.
This repository follows the Awesome Manifesto guidelines.
↑License
© 2025 Tal Eliyahu. Licensed under the MIT License. See LICENSE
.