AI-powered security scanner for source code. Uses LLMs to identify vulnerabilities, generate actionable recommendations, and output industry-standard SARIF reports.
- Multi-provider -- supports Anthropic (Claude) and OpenAI (GPT) backends
- SARIF 2.1.0 output -- integrates with GitHub Code Scanning and other SARIF-compatible tools
- GitHub Actions support -- reusable composite action and PR annotations
- Config file defaults -- `.ai-sec-scan.yaml` for per-project scan settings
- Pre-commit ready -- hook manifest for changed-file scanning
- Rich terminal UI -- color-coded severity badges, structured findings, progress indicators
- Flexible filtering -- include/exclude glob patterns, severity thresholds, file size limits
- CI-ready -- non-zero exit code when findings are detected
```bash
pip install ai-sec-scan
```

Or install from source:
```bash
git clone https://github.com/frankentini/ai-sec-scan.git
cd ai-sec-scan
pip install -e .
```

Set your API key:
```bash
export ANTHROPIC_API_KEY="your-key-here"
# or
export OPENAI_API_KEY="your-key-here"
```

Scan a file or directory:
```bash
# Scan a single file
ai-sec-scan scan app.py

# Scan a directory
ai-sec-scan scan ./src

# Use OpenAI instead of Anthropic
ai-sec-scan scan ./src -p openai

# Filter by minimum severity
ai-sec-scan scan ./src -s high

# Output as SARIF
ai-sec-scan scan ./src -o sarif -f results.sarif

# Include only Python files
ai-sec-scan scan ./src -i "*.py"
```

Rich terminal output with color-coded severity, file locations, descriptions, and fix recommendations.
```bash
ai-sec-scan scan ./src -o json
```

Machine-readable JSON with all finding details, scan metadata, and timing.
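As a sketch of consuming the JSON report downstream, assuming a top-level `findings` list whose items carry `severity`, `title`, and location fields (hypothetical names -- inspect a real report for the actual schema):

```python
import json

# Illustrative report shape only -- the real field names come from
# ai-sec-scan's JSON output and may differ.
report = json.loads("""
{
  "findings": [
    {"file": "app.py", "line": 12, "severity": "high",
     "title": "SQL injection", "recommendation": "Use parameterized queries"},
    {"file": "app.py", "line": 40, "severity": "low",
     "title": "Verbose error message", "recommendation": "Log details server-side"}
  ]
}
""")

# Severity ladder matching the CLI's -s choices.
ORDER = ["info", "low", "medium", "high", "critical"]

def at_least(findings, threshold):
    """Keep findings at or above the given severity."""
    cut = ORDER.index(threshold)
    return [f for f in findings if ORDER.index(f["severity"]) >= cut]

high_or_worse = at_least(report["findings"], "high")
```

This mirrors what `-s high` does inside the tool: anything below the threshold is dropped before reporting.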
```bash
ai-sec-scan scan ./src -o sarif -f results.sarif
```

SARIF 2.1.0 for integration with GitHub Code Scanning, VS Code SARIF Viewer, and other static analysis tools.
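Because the SARIF 2.1.0 structure is standardized by OASIS, any consumer can walk `runs[].results[]` the same way regardless of which scanner produced the file. A minimal sketch (the sample result below is illustrative, not actual tool output):

```python
import json

# Minimal SARIF 2.1.0 shape: a run with a tool driver and a list of
# results, each carrying a ruleId, level, message, and location.
sarif = json.loads("""
{
  "version": "2.1.0",
  "runs": [{
    "tool": {"driver": {"name": "ai-sec-scan"}},
    "results": [{
      "ruleId": "sql-injection",
      "level": "error",
      "message": {"text": "Unsanitized input reaches a SQL query"},
      "locations": [{
        "physicalLocation": {
          "artifactLocation": {"uri": "src/app.py"},
          "region": {"startLine": 12}
        }
      }]
    }]
  }]
}
""")

rows = []
for run in sarif["runs"]:
    for result in run["results"]:
        loc = result["locations"][0]["physicalLocation"]
        rows.append((loc["artifactLocation"]["uri"],
                     loc["region"]["startLine"],
                     result["level"],
                     result["message"]["text"]))
```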
```bash
ai-sec-scan scan ./src --github-annotations
# or
ai-sec-scan scan ./src -o github
```

Emits `::warning` / `::error` workflow command annotations for pull request diffs in GitHub Actions.
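Workflow commands are just lines printed to stdout in a format GitHub Actions parses. A hypothetical helper showing the shape of annotation the tool emits:

```python
def annotation(level: str, path: str, line: int, message: str) -> str:
    """Build a GitHub Actions workflow command string.

    level is "warning" or "error"; GitHub attaches the message to the
    given file and line in the PR diff view.
    """
    return f"::{level} file={path},line={line}::{message}"
```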
Use the included composite action:
```yaml
- name: Run ai-sec-scan
  uses: ./.github/actions/ai-sec-scan
  with:
    path: .
    provider: anthropic
    output-format: sarif
    api-key: ${{ secrets.ANTHROPIC_API_KEY }}
```

An example workflow is provided at `.github/workflows/security-scan.yml`.
If you prefer running the CLI directly in CI:
```yaml
- name: Run ai-sec-scan
  run: ai-sec-scan scan ./src -o sarif -f results.sarif
  continue-on-error: true
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}

- name: Upload SARIF
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: results.sarif
```

Add ai-sec-scan to `.pre-commit-config.yaml`:
```yaml
- repo: https://github.com/frankentini/ai-sec-scan
  rev: v0.2.0
  hooks:
    - id: ai-sec-scan
```

The hook manifest is published in `.pre-commit-hooks.yaml` and scans changed Python files.
Create `.ai-sec-scan.yaml` in your scan target directory (or the current working directory) to set default options:
```yaml
provider: anthropic
model: claude-sonnet-4-20250514
severity: medium
output: sarif
max_file_size: 100
include:
  - "**/*.py"
exclude:
  - "tests/**"
```

CLI flags always override config file values.
```text
Usage: ai-sec-scan scan [OPTIONS] PATH

Options:
  -p, --provider [anthropic|openai]
                                  LLM provider (default: anthropic)
  -m, --model TEXT                Model name override
  -o, --output [text|json|sarif|github]
                                  Output format (default: text)
  -s, --severity [info|low|medium|high|critical]
                                  Minimum severity to report
  -f, --output-file TEXT          Write output to file
  --max-file-size INTEGER         Max file size in KB (default: 100)
  -i, --include TEXT              Glob patterns to include (repeatable)
  -e, --exclude TEXT              Glob patterns to exclude (repeatable)
  --github-annotations            Emit GitHub annotation commands
```
| Environment Variable | Required | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | For Anthropic provider | Get a key at console.anthropic.com |
| `OPENAI_API_KEY` | For OpenAI provider | Get a key at platform.openai.com |
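A hypothetical pre-flight check in the spirit of the table above -- pick whichever provider has a key set. The tool's actual selection logic may differ; this is only a sketch:

```python
import os

def detect_provider() -> str:
    """Return a provider name based on which API key is set.

    Prefers Anthropic when both keys are present, matching the
    tool's default provider.
    """
    if os.environ.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"
    raise SystemExit("Set ANTHROPIC_API_KEY or OPENAI_API_KEY")
```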
```bash
git clone https://github.com/frankentini/ai-sec-scan.git
cd ai-sec-scan
pip install -e ".[dev]"

# Run tests and linters
pytest
ruff check src/ tests/
mypy src/
```