
LLM-Powered Log Insight Engine

A minimal prototype that ingests logs, computes embeddings (placeholder), runs anomaly scoring, and uses an LLM to generate insights.

Features

  • Log Ingestion: Single and bulk log entry ingestion
  • Embedding Computation: Placeholder embedding service (replace with production model)
  • Anomaly Detection: Isolation Forest-based anomaly scoring (a minimal sketch follows this list)
  • LLM Insights: AI-powered insight generation (placeholder; replace with a production LLM)
  • AI Evaluation: Braintrust integration for evaluating and optimizing AI components
  • Modern UI: React-based frontend with intuitive interface
  • RESTful API: FastAPI backend with comprehensive endpoints
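
As a rough illustration of the Isolation Forest approach named above, the sketch below fits a forest over embedding vectors and derives a per-entry score. It is a standalone example, not the repository's actual anomaly service; the variable names and the score normalization are assumptions.

    # Minimal sketch of Isolation Forest scoring over log embeddings.
    # Standalone illustration; not the repository's anomaly service code.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Stand-in embeddings for five log entries (dimension is arbitrary here).
    embeddings = np.random.rand(5, 384)

    forest = IsolationForest(contamination=0.1, random_state=42)
    forest.fit(embeddings)

    # score_samples returns higher values for "normal" points, so negate it
    # to get a score where larger means more anomalous.
    anomaly_scores = -forest.score_samples(embeddings)
    for i, score in enumerate(anomaly_scores):
        print(f"log {i}: anomaly score {score:.3f}")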

Project Structure

.
├── backend/
│   ├── app/
│   │   ├── api/          # API routes
│   │   ├── models.py     # Database models
│   │   ├── schemas.py    # Pydantic schemas
│   │   ├── services/     # Business logic (embedding, anomaly, LLM)
│   │   ├── database.py   # Database configuration
│   │   ├── config.py     # Settings
│   │   └── main.py       # FastAPI app
│   ├── requirements.txt
│   ├── render.yaml       # Render deployment config
│   └── .env.example
├── frontend/
│   ├── src/
│   │   ├── components/   # React components
│   │   ├── api/          # API client
│   │   └── App.jsx
│   ├── package.json
│   └── vercel.json       # Vercel deployment config
└── README.md

Setup

Backend

  1. Create a virtual environment:

    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  2. Navigate to backend directory:

    cd backend
  3. Install dependencies:

    pip install -r requirements.txt
  4. Create .env file:

    cp .env.example .env
  5. Edit .env and set the following (an example file appears after these steps):

    • DATABASE_URL: Database connection string (SQLite for local, PostgreSQL for production)
    • OPENAI_API_KEY: (Optional) OpenAI API key for LLM insights
    • BRAINTRUST_API_KEY: (Optional) Braintrust API key for AI evaluation
    • ENABLE_BRAINTRUST: (Optional) Set to true to enable Braintrust evaluation
  6. Run the server:

    uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

The API will be available at http://localhost:8000, and interactive API documentation at http://localhost:8000/docs
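
For reference, a local .env might look like the following. The values are placeholders (the SQLite URL shown is one common form); only the variable names come from the step above.

    DATABASE_URL=sqlite:///./app.db
    OPENAI_API_KEY=sk-your-key-here
    BRAINTRUST_API_KEY=your-braintrust-key
    ENABLE_BRAINTRUST=true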

Frontend

  1. Navigate to frontend directory:

    cd frontend
  2. Install dependencies:

    npm install
  3. (Optional) Create .env file for custom API URL:

    cp .env.example .env

    Edit .env and set VITE_API_URL if your backend is not at http://localhost:8000

  4. Run the development server:

    npm run dev

The frontend will be available at http://localhost:3000

API Endpoints

Logs

  • POST /api/logs/ - Create a single log entry
  • POST /api/logs/bulk - Create multiple log entries
  • GET /api/logs/ - Get log entries (with filters: level, source, min_anomaly_score)
  • GET /api/logs/{log_id} - Get a specific log entry
  • DELETE /api/logs/{log_id} - Delete a log entry
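
A quick way to exercise the log endpoints from Python. The payload fields are assumptions; check the Pydantic schemas in backend/app/schemas.py (or http://localhost:8000/docs) for the exact shape.

    # Sketch: create a log entry, then list entries the anomaly scorer
    # flagged as unusual. Payload field names are assumptions.
    import requests

    BASE = "http://localhost:8000"

    resp = requests.post(f"{BASE}/api/logs/", json={
        "message": "Disk usage at 95% on /dev/sda1",
        "level": "WARNING",
        "source": "disk-monitor",
    })
    print("created:", resp.json())

    anomalous = requests.get(f"{BASE}/api/logs/", params={"min_anomaly_score": 0.5})
    print("anomalous entries:", anomalous.json())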

Insights

  • POST /api/insights/generate - Generate an LLM insight for a log entry
  • GET /api/insights/log/{log_id} - Get all insights for a log entry
  • GET /api/insights/log/{log_id}/with-insights - Get log entry with all insights
  • GET /api/insights/{insight_id} - Get a specific insight
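
Generating and fetching insights follows the same pattern. Whether /generate takes the log ID in the body or as a query parameter is an assumption here; consult /docs for the actual contract.

    # Sketch: generate an insight for an existing log entry, then fetch
    # all insights for it. The request shape is an assumption.
    import requests

    BASE = "http://localhost:8000"
    log_id = 1  # ID returned when the log entry was created

    generated = requests.post(f"{BASE}/api/insights/generate", json={"log_id": log_id})
    print(generated.json())

    insights = requests.get(f"{BASE}/api/insights/log/{log_id}")
    print(insights.json())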

Evaluation (Braintrust)

  • GET /api/evaluation/status - Get Braintrust evaluation status
  • POST /api/evaluation/test-insight - Test insight evaluation
  • GET /api/evaluation/metrics - Get evaluation metrics summary
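
Both evaluation read endpoints are simple GETs, e.g.:

    # Sketch: check Braintrust evaluation status and metrics.
    import requests

    BASE = "http://localhost:8000"
    print(requests.get(f"{BASE}/api/evaluation/status").json())
    print(requests.get(f"{BASE}/api/evaluation/metrics").json())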

Deployment

Backend (Render)

  1. Push your code to a Git repository
  2. In Render dashboard, create a new Web Service
  3. Connect your repository
  4. Use the backend/render.yaml configuration or manually set:
    • Build Command: pip install -r requirements.txt
    • Start Command: uvicorn app.main:app --host 0.0.0.0 --port $PORT
    • Environment Variables: Set DATABASE_URL, OPENAI_API_KEY, and optionally BRAINTRUST_API_KEY and ENABLE_BRAINTRUST=true

Frontend (Vercel)

  1. Push your code to a Git repository
  2. In Vercel dashboard, import your repository
  3. Set build settings:
    • Framework Preset: Vite
    • Build Command: npm run build
    • Output Directory: dist
  4. Set environment variable:
    • VITE_API_URL: Your backend API URL (e.g., https://your-backend.onrender.com)

Notes

Placeholder Services

The following services use placeholder implementations and should be replaced with production models (a hedged sketch follows the list):

  1. Embedding Service (backend/app/services/embedding_service.py):

    • Currently returns random vectors
    • Replace with actual embedding model (OpenAI, Cohere, or local model)
  2. LLM Service (backend/app/services/llm_service.py):

    • Currently returns placeholder insights
    • Replace with actual LLM integration (OpenAI, Anthropic, etc.)
  3. Braintrust Service (backend/app/services/braintrust_service.py):

    • Evaluates AI component quality (insights, embeddings, anomaly detection)
    • Automatically logs evaluations when enabled
    • Provides scoring for relevance, actionability, and anomaly alignment
    • To enable: Set BRAINTRUST_API_KEY and ENABLE_BRAINTRUST=true in .env
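
As a starting point for replacing the first two placeholders, the sketch below shows the general shape of OpenAI calls for embeddings and insight text. The function names, model choices, and prompt are illustrative, not the interfaces actually used in the repository's service modules.

    # Hedged sketch of production replacements for the placeholder services.
    # Function names and the prompt are illustrative; adapt them to the
    # actual interfaces in embedding_service.py and llm_service.py.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def embed_logs(messages: list[str]) -> list[list[float]]:
        # Real embeddings instead of random vectors.
        resp = client.embeddings.create(
            model="text-embedding-3-small",
            input=messages,
        )
        return [item.embedding for item in resp.data]

    def generate_insight(log_message: str) -> str:
        # Real LLM output instead of placeholder insight text.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "You analyze server logs and suggest likely causes and fixes."},
                {"role": "user", "content": f"Explain this log entry:\n{log_message}"},
            ],
        )
        return resp.choices[0].message.content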

Security

  • Do not commit secrets (.env files, API keys)
  • Update CORS settings in production (all origins are currently allowed; see the sketch after this list)
  • Use environment variables for all sensitive configuration
  • Consider adding authentication/authorization for production use
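
Restricting CORS in FastAPI is a one-middleware change; a minimal sketch, assuming the frontend is served from known origins (the URLs below are placeholders):

    # Sketch: allow only known origins instead of all origins.
    # The origin URLs are placeholders; use your real frontend domains.
    from fastapi import FastAPI
    from fastapi.middleware.cors import CORSMiddleware

    app = FastAPI()
    app.add_middleware(
        CORSMiddleware,
        allow_origins=[
            "http://localhost:3000",
            "https://your-frontend.vercel.app",
        ],
        allow_methods=["GET", "POST", "DELETE"],
        allow_headers=["*"],
    )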

Database

  • Default: SQLite (for local development)
  • Production: Use PostgreSQL (update DATABASE_URL in .env)
  • Database migrations: Consider using Alembic for schema management

Development

Running Tests

# Backend tests (when implemented)
cd backend
pytest

# Frontend tests (when implemented)
cd frontend
npm test
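
When backend tests are added, FastAPI's TestClient makes endpoint tests straightforward. A minimal sketch, assuming app.main:app is importable and the payload fields match the actual schemas:

    # Sketch: a first backend test using FastAPI's TestClient.
    # Field names and the "id" key are assumptions; see app/schemas.py.
    from fastapi.testclient import TestClient
    from app.main import app

    client = TestClient(app)

    def test_create_and_fetch_log():
        created = client.post("/api/logs/", json={
            "message": "test entry",
            "level": "INFO",
            "source": "pytest",
        })
        assert created.status_code in (200, 201)
        log_id = created.json()["id"]

        fetched = client.get(f"/api/logs/{log_id}")
        assert fetched.status_code == 200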

Code Style

  • Backend: Follow PEP 8, use Black formatter
  • Frontend: Follow ESLint rules

License

MIT
