# NeuroCAT - AI-Powered Security Hardening Agent

*Intelligent CIS Benchmark Analysis and Remediation using AI*

Python 3.10+ | License: MIT

## 🎯 Overview

NeuroCAT is an AI-powered security administrator agent that leverages Ollama models and CIS (Center for Internet Security) benchmarks to automatically generate audit scripts and scan remote Linux servers for compliance.

### Key Features

- 🤖 **AI-Driven Script Generation**: Uses local Ollama LLMs to generate robust audit scripts from PDF benchmarks
- 🎯 **Targeted Auditing**: CIS controls drive what to check - no blind scanning
- 🔒 **Secure Remote Access**: SSH-based auditing with password or key authentication
- 📊 **Compliance Reports**: Generates reports in both JSON and HTML
- ⚡ **Optimized Workflow**: Ingest benchmarks, generate scripts, and scan at scale

πŸ—οΈ Architecture

For a detailed view of the system architecture, please see ARCHITECTURE.md.

Architecture Preview

## 📋 Prerequisites

- **Python**: 3.10 or higher
- **LLM Provider**: Ollama (local) or OpenRouter (cloud)
- **SSH Access**: Remote server with SSH enabled
- **CIS Benchmarks**: PDF documents from CIS Benchmarks

## 🚀 Quick Start

### 1. Installation

```bash
# Clone the repository
git clone https://github.com/neur0cat/neurocat.git
cd neurocat

# Create a virtual environment
python3 -m venv venv
source venv/bin/activate  # On Linux/Mac
# or
venv\Scripts\activate     # On Windows

# Install NeuroCAT
pip install -e .

# For development with testing tools
pip install -e ".[dev]"
```

### 2. Set Up an LLM Provider

**Option A: Ollama (Local)**

```bash
ollama serve
ollama pull llama3.2
```

**Option B: OpenRouter (Cloud)**

```bash
export OPENROUTER_API_KEY="your-api-key"
# Edit config.yaml: llm.provider = "openrouter"
```
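To confirm the local provider is reachable before generating scripts, Ollama exposes a `/api/tags` endpoint (also used in the Troubleshooting section) that lists installed models as JSON. A minimal sketch of checking its response, assuming Ollama's public response shape of `{"models": [{"name": ...}]}` (the `model_available` helper is illustrative, not part of NeuroCAT):

```python
import json

def model_available(tags_response: str, model: str) -> bool:
    """Check whether a model appears in Ollama's /api/tags response.

    Assumes the response shape {"models": [{"name": "llama3.2:latest"}, ...]}.
    """
    data = json.loads(tags_response)
    names = [m.get("name", "") for m in data.get("models", [])]
    # Ollama model names carry a tag suffix (e.g. ":latest"), so match the prefix too.
    return any(n == model or n.startswith(model + ":") for n in names)

# Example response as returned by: curl http://localhost:11434/api/tags
sample = json.dumps({"models": [{"name": "llama3.2:latest"}]})
ok = model_available(sample, "llama3.2")
```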

### 3. Configure NeuroCAT

```bash
# Copy the example configuration
cp config/config.example.yaml config/config.yaml

# Edit the configuration (optional - defaults work for most cases)
nano config/config.yaml
```

### 4. Verify the Installation

```bash
neurocat check
```

## 📖 Usage

### 1. Ingest Benchmarks for RAG

First, ingest a CIS benchmark PDF to create the vector embeddings needed for script generation.

```bash
# Ingest a CIS benchmark into the vector database
neurocat ingest -b benchmarks/CIS_Ubuntu_Linux_22.04.pdf
```
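Under the hood, ingestion follows the usual RAG recipe: extract text from the PDF, split it into overlapping chunks, embed each chunk, and store the embeddings in ChromaDB. The chunking step can be sketched roughly as follows (a simplified illustration; `chunk_text` and its parameters are not NeuroCAT's actual API):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split extracted benchmark text into overlapping chunks for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk.strip():  # skip whitespace-only tails
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

# Overlap keeps a control's audit text intact even when it straddles a chunk boundary.
pieces = chunk_text("x" * 1200)
```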

### 2. Generate Audit Scripts

Use the LLM to generate audit scripts for a specific profile (e.g., L1, L2).

```bash
# Generate from the vector store (after running 'neurocat ingest')
neurocat generate -p L1

# Generate directly from a PDF (ingests it first)
neurocat generate -b benchmarks/CIS_Ubuntu_Linux_22.04.pdf
```

### 3. Scan a Remote System

Execute the generated scripts against a remote target.

```bash
# Scan with an SSH key
neurocat scan \
  -h 192.168.1.100 \
  -u admin \
  -k ~/.ssh/id_rsa

# Scan with a custom scripts directory
neurocat scan \
  -h 192.168.1.100 \
  -u admin \
  -k ~/.ssh/id_rsa \
  -s ./scripts
```
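A common convention for CIS audit scripts (assumed here for illustration, not a documented NeuroCAT contract) is that each script signals its verdict through its exit code, so interpreting results on the controller side reduces to something like:

```python
def interpret_result(exit_code: int) -> str:
    """Map an audit script's exit code to a compliance status.

    Assumed convention: 0 = compliant, 1 = non-compliant,
    anything else = the check itself failed to run.
    """
    if exit_code == 0:
        return "PASS"
    if exit_code == 1:
        return "FAIL"
    return "ERROR"

# e.g. exit codes collected from three remote script executions
statuses = [interpret_result(code) for code in (0, 1, 127)]
```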

### 4. Parse and View Benchmark Info

```bash
# View the benchmark structure and controls
neurocat parse -b benchmarks/CIS_Ubuntu_Linux_22.04.pdf
```

πŸ—οΈ Workflow

1. Ingest β†’ Parse CIS PDF and create vector embeddings
2. Generate β†’ Create sanitized audit scripts using LLM
3. Scan β†’ Execute scripts on remote system and generate report
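The scan stage ends with a report written to `reports/`. Assuming a simple JSON shape with one entry per control (the schema below is illustrative, not NeuroCAT's actual report format), an overall compliance summary can be computed like this:

```python
import json
from collections import Counter

def summarize(report_json: str) -> dict:
    """Tally per-status counts and an overall pass rate from a scan report.

    Assumes an illustrative schema: {"results": [{"id": ..., "status": ...}]}.
    """
    report = json.loads(report_json)
    counts = Counter(r["status"] for r in report["results"])
    total = sum(counts.values())
    return {
        "counts": dict(counts),
        "pass_rate": counts.get("PASS", 0) / total if total else 0.0,
    }

sample = json.dumps({"results": [
    {"id": "1.1.1", "status": "PASS"},
    {"id": "1.1.2", "status": "FAIL"},
    {"id": "1.1.3", "status": "PASS"},
    {"id": "1.1.4", "status": "PASS"},
]})
summary = summarize(sample)
```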

πŸ“ Project Structure

neurocat/
β”œβ”€β”€ src/neurocat/
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ __main__.py          # Entry point
β”‚   β”œβ”€β”€ cli.py               # CLI commands (scan, generate, ingest, parse)
β”‚   β”œβ”€β”€ script_generator.py  # Script generation with LLM repair
β”‚   β”œβ”€β”€ llm_provider.py      # LLM provider factory
β”‚   β”œβ”€β”€ config.py            # Configuration management
β”‚   β”œβ”€β”€ errors.py            # Exception hierarchy and error handling
β”‚   β”œβ”€β”€ logging_config.py    # Logging setup
β”‚   β”œβ”€β”€ ollama_client.py     # Ollama client
β”‚   β”œβ”€β”€ openrouter_client.py # OpenRouter client
β”‚   β”œβ”€β”€ vector_store.py      # ChromaDB vector store for RAG
β”‚   β”œβ”€β”€ ssh_manager.py       # SSH connection and command execution
β”‚   └── cis_parser.py        # CIS benchmark PDF parser
β”œβ”€β”€ tests/
β”‚   β”œβ”€β”€ conftest.py          # Pytest configuration
β”‚   β”œβ”€β”€ test_*.py            # Unit tests
β”‚   └── test_integration.py  # Integration tests
β”œβ”€β”€ config/
β”‚   β”œβ”€β”€ config.yaml          # Active configuration
β”‚   └── config.example.yaml  # Example configuration
β”œβ”€β”€ benchmarks/              # Place CIS PDFs here
β”œβ”€β”€ scripts/                 # Generated audit scripts
β”œβ”€β”€ reports/                 # Generated reports
β”œβ”€β”€ data/
β”‚   β”œβ”€β”€ vectorstore/         # ChromaDB embeddings
β”‚   └── benchmarks/          # Cached benchmark data
β”œβ”€β”€ .github/
β”‚   └── workflows/
β”‚       └── ci.yml           # CI/CD pipeline
β”œβ”€β”€ pyproject.toml           # Project metadata and dependencies
β”œβ”€β”€ README.md
└── .gitignore

βš™οΈ Configuration

NeuroCAT uses a hierarchical configuration system:

Priority Order: CLI flags β†’ Environment variables β†’ Config file β†’ Defaults

Configuration File

# config/config.yaml
ollama:
  url: "http://localhost:11434"
  model: "llama3.2"
  timeout: 300

ssh:
  port: 22
  timeout: 30
  key_filename: ~/.ssh/id_rsa

Environment Variables

export NEUROCAT_OLLAMA_URL="http://localhost:11434"
export NEUROCAT_OLLAMA_MODEL="llama3.2"
export NEUROCAT_SSH_PORT=22

### Data Privacy

- ✅ CIS benchmarks contain no sensitive data
- ✅ Audit outputs are stored locally only
- ✅ No data is sent to external services when using the local Ollama provider

## 🧪 Testing

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=neurocat --cov-report=html
```

## 🤝 Contributing

Contributions are welcome! Please follow these steps:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

### Development Setup

```bash
# Install with dev dependencies
pip install -e ".[dev]"

# Run the linters
ruff check src/ tests/
black --check src/ tests/

# Run the type checker
mypy src/
```


πŸ› Troubleshooting

Ollama Connection Error

# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama
ollama serve

SSH Authentication Failed

# Test SSH connection manually
ssh -i ~/.ssh/id_rsa user@host

# Check SSH key permissions
chmod 600 ~/.ssh/id_rsa

Model Not Found

# List available models
ollama list

# Pull required model
ollama pull llama3.2

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

πŸ“§ Contact

