NeuroCAT is an AI-powered security administrator agent that leverages Ollama models and CIS (Center for Internet Security) benchmarks to automatically generate audit scripts and scan remote Linux servers for compliance.
- 🤖 AI-Driven Script Generation: Uses local Ollama LLMs to generate robust audit scripts from PDF benchmarks
- 🎯 Targeted Auditing: CIS controls drive what to check; no blind scanning
- 🔒 Secure Remote Access: SSH-based auditing with password or key authentication
- 📊 Beautiful Reports: Generate JSON and HTML compliance reports
- ⚡ Optimized Workflow: Ingest benchmarks, generate scripts, and scan at scale
For a detailed view of the system architecture, please see ARCHITECTURE.md.
- Python: 3.10 or higher
- LLM Provider: Ollama (local) or OpenRouter (cloud)
- SSH Access: Remote server with SSH enabled
- CIS Benchmarks: PDF documents from CIS Benchmarks
```bash
# Clone the repository
git clone https://github.com/neur0cat/neurocat.git
cd neurocat

# Create virtual environment
python3 -m venv venv
source venv/bin/activate   # On Linux/Mac
# or
venv\Scripts\activate      # On Windows

# Install NeuroCAT
pip install -e .
```
```bash
# For development with testing tools
pip install -e ".[dev]"
```

**Option A: Ollama (Local)**
```bash
ollama serve
ollama pull llama3.2
```

**Option B: OpenRouter (Cloud)**
```bash
export OPENROUTER_API_KEY="your-api-key"
# Edit config.yaml: llm.provider = "openrouter"
```

```bash
# Copy example configuration
cp config/config.example.yaml config/config.yaml

# Edit configuration (optional - defaults work for most cases)
nano config/config.yaml
```

Verify the setup:

```bash
neurocat check
```

First, ingest a CIS benchmark PDF to create the vector embeddings needed for script generation.
```bash
# Ingest CIS benchmark into vector database
neurocat ingest -b benchmarks/CIS_Ubuntu_Linux_22.04.pdf
```

Use the LLM to generate audit scripts for a specific profile (e.g., L1, L2).
```bash
# Generate from vectorstore (after running 'neurocat ingest')
neurocat generate -p L1

# Generate directly from PDF (will ingest first)
neurocat generate -b benchmarks/CIS_Ubuntu_Linux_22.04.pdf
```

Execute the generated scripts against a remote target.
```bash
# Scan with SSH key
neurocat scan \
  -h 192.168.1.100 \
  -u admin \
  -k ~/.ssh/id_rsa

# Scan with custom scripts directory
neurocat scan \
  -h 192.168.1.100 \
  -u admin \
  -k ~/.ssh/id_rsa \
  -s ./scripts
```

```bash
# View benchmark structure and controls
neurocat parse -b benchmarks/CIS_Ubuntu_Linux_22.04.pdf
```

1. Ingest → Parse CIS PDF and create vector embeddings
2. Generate → Create sanitized audit scripts using LLM
3. Scan → Execute scripts on remote system and generate report
```
neurocat/
├── src/neurocat/
│   ├── __init__.py
│   ├── __main__.py           # Entry point
│   ├── cli.py                # CLI commands (scan, generate, ingest, parse)
│   ├── script_generator.py   # Script generation with LLM repair
│   ├── llm_provider.py       # LLM provider factory
│   ├── config.py             # Configuration management
│   ├── errors.py             # Exception hierarchy and error handling
│   ├── logging_config.py     # Logging setup
│   ├── ollama_client.py      # Ollama client
│   ├── openrouter_client.py  # OpenRouter client
│   ├── vector_store.py       # ChromaDB vector store for RAG
│   ├── ssh_manager.py        # SSH connection and command execution
│   └── cis_parser.py         # CIS benchmark PDF parser
├── tests/
│   ├── conftest.py           # Pytest configuration
│   ├── test_*.py             # Unit tests
│   └── test_integration.py   # Integration tests
├── config/
│   ├── config.yaml           # Active configuration
│   └── config.example.yaml   # Example configuration
├── benchmarks/               # Place CIS PDFs here
├── scripts/                  # Generated audit scripts
├── reports/                  # Generated reports
├── data/
│   ├── vectorstore/          # ChromaDB embeddings
│   └── benchmarks/           # Cached benchmark data
├── .github/
│   └── workflows/
│       └── ci.yml            # CI/CD pipeline
├── pyproject.toml            # Project metadata and dependencies
├── README.md
└── .gitignore
```
NeuroCAT uses a hierarchical configuration system:
Priority Order: CLI flags → Environment variables → Config file → Defaults
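That precedence can be sketched as a first-non-None lookup. This is a simplified model; the real resolution logic lives in `config.py` and may differ:

```python
import os

def resolve(cli_value, env_var: str, file_value, default):
    """Return the highest-priority setting: CLI > environment > config file > default."""
    if cli_value is not None:
        return cli_value
    env_value = os.environ.get(env_var)
    if env_value is not None:
        return env_value
    if file_value is not None:
        return file_value
    return default

# Example: no CLI flag given, env var set, config file also has a value
os.environ["NEUROCAT_OLLAMA_MODEL"] = "llama3.2"
model = resolve(None, "NEUROCAT_OLLAMA_MODEL", "mistral", "llama3")
```

Here the environment variable wins over the config-file value because no CLI flag was passed.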
```yaml
# config/config.yaml
ollama:
  url: "http://localhost:11434"
  model: "llama3.2"
  timeout: 300

ssh:
  port: 22
  timeout: 30
  key_filename: ~/.ssh/id_rsa
```

```bash
export NEUROCAT_OLLAMA_URL="http://localhost:11434"
export NEUROCAT_OLLAMA_MODEL="llama3.2"
export NEUROCAT_SSH_PORT=22
```

- ✅ CIS benchmarks contain no sensitive data
- ✅ Audit outputs stored locally only
- ✅ No data sent to external services (Ollama runs locally)
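Since audit outputs stay local, a scan report is ultimately just a file written under `reports/`. A rough sketch of assembling a JSON summary from per-control pass/fail results; the field names here are illustrative, not NeuroCAT's actual report schema:

```python
import json
from datetime import datetime, timezone

def build_report(host: str, results: dict[str, bool]) -> dict:
    """Summarize per-control results into a report dict (illustrative schema)."""
    passed = sum(results.values())
    return {
        "host": host,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "total_controls": len(results),
        "passed": passed,
        "failed": len(results) - passed,
        "results": results,  # control ID -> pass/fail
    }

# Example: three controls audited, one failure
report = build_report("192.168.1.100", {"1.1.1": True, "5.2.4": False, "5.2.5": True})
report_json = json.dumps(report, indent=2)
```

An HTML report would render the same dict through a template instead of `json.dumps`.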
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=neurocat --cov-report=html
```

Contributions are welcome! Please follow these steps:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
```bash
# Install with dev dependencies
pip install -e ".[dev]"

# Run linters
ruff check src/ tests/
black --check src/ tests/

# Run type checker
mypy src/
```

- CIS Benchmarks - Official benchmark downloads
- Ollama Documentation - Ollama setup and models
- ChromaDB Docs - Vector database documentation
```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama
ollama serve
```

```bash
# Test SSH connection manually
ssh -i ~/.ssh/id_rsa user@host

# Check SSH key permissions
chmod 600 ~/.ssh/id_rsa
```

```bash
# List available models
ollama list

# Pull required model
ollama pull llama3.2
```

This project is licensed under the MIT License - see the LICENSE file for details.
- Center for Internet Security for comprehensive security benchmarks
- Ollama for local LLM inference
- ChromaDB for vector embeddings storage
- Create an issue on GitHub
- Email: kkzone@gmail.com