A minimal prototype that ingests logs, computes embeddings (placeholder), runs anomaly scoring, and uses an LLM to generate insights.
- Log Ingestion: Single and bulk log entry ingestion
- Embedding Computation: Placeholder embedding service (replace with production model)
- Anomaly Detection: Isolation Forest-based anomaly scoring (see the sketch after this list)
- LLM Insights: AI-powered insights generation (placeholder - replace with production LLM)
- AI Evaluation: Braintrust integration for evaluating and optimizing AI components
- Modern UI: React-based frontend with an intuitive interface
- RESTful API: FastAPI backend with comprehensive endpoints
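The anomaly scoring mentioned above is Isolation Forest-based. Below is a minimal, self-contained sketch of that kind of scoring with scikit-learn; it is illustrative only, not the code in `backend/app/services/`, and the embedding matrix is random stand-in data:

```python
# Illustrative Isolation Forest scoring -- not the repository's anomaly service.
# Assumes scikit-learn and numpy are installed.
import numpy as np
from sklearn.ensemble import IsolationForest

# Stand-in for embeddings of recent log entries (one row per log entry).
embeddings = np.random.rand(200, 32)

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(embeddings)

# decision_function returns higher values for normal points, so negate it
# to get an anomaly score where larger means "more anomalous".
anomaly_scores = -model.decision_function(embeddings)
print(anomaly_scores[:5])
```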
```
.
├── backend/
│   ├── app/
│   │   ├── api/              # API routes
│   │   ├── models.py         # Database models
│   │   ├── schemas.py        # Pydantic schemas
│   │   ├── services/         # Business logic (embedding, anomaly, LLM)
│   │   ├── database.py       # Database configuration
│   │   ├── config.py         # Settings
│   │   └── main.py           # FastAPI app
│   ├── requirements.txt
│   ├── render.yaml           # Render deployment config
│   └── .env.example
├── frontend/
│   ├── src/
│   │   ├── components/       # React components
│   │   ├── api/              # API client
│   │   └── App.jsx
│   ├── package.json
│   └── vercel.json           # Vercel deployment config
└── README.md
```
- Create a virtual environment:

  ```bash
  python -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```

- Navigate to the backend directory:

  ```bash
  cd backend
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Create a `.env` file:

  ```bash
  cp .env.example .env
  ```
- Edit `.env` and set:

  - `DATABASE_URL`: Database connection string (SQLite for local development, PostgreSQL for production)
  - `OPENAI_API_KEY`: (Optional) OpenAI API key for LLM insights
  - `BRAINTRUST_API_KEY`: (Optional) Braintrust API key for AI evaluation
  - `ENABLE_BRAINTRUST`: (Optional) Set to `true` to enable Braintrust evaluation

  A sample local `.env` is sketched after these steps.
- Run the server:

  ```bash
  uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
  ```

  The API will be available at http://localhost:8000, with interactive API documentation at http://localhost:8000/docs.
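A minimal local `.env` sketch to get started; the SQLite path and key values below are placeholders rather than values from this repository, so substitute your own (the Braintrust lines can be omitted entirely):

```
# Illustrative values only -- replace with your own
DATABASE_URL=sqlite:///./logs.db
# Optional:
OPENAI_API_KEY=sk-your-key-here
BRAINTRUST_API_KEY=your-braintrust-key
ENABLE_BRAINTRUST=true
```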
- Navigate to the frontend directory:

  ```bash
  cd frontend
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- (Optional) Create a `.env` file for a custom API URL:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` and set `VITE_API_URL` if your backend is not at http://localhost:8000 (a one-line example follows these steps).

- Run the development server:

  ```bash
  npm run dev
  ```

  The frontend will be available at http://localhost:3000.
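For example, a one-line frontend `.env` pointing at a locally running backend (swap in your deployed backend URL when needed):

```
VITE_API_URL=http://localhost:8000
```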
- `POST /api/logs/` - Create a single log entry
- `POST /api/logs/bulk` - Create multiple log entries
- `GET /api/logs/` - Get log entries (with filters: level, source, min_anomaly_score)
- `GET /api/logs/{log_id}` - Get a specific log entry
- `DELETE /api/logs/{log_id}` - Delete a log entry
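A quick smoke test against the ingestion endpoint. The request body fields shown here (`level`, `source`, `message`) are an assumption based on the filters above, not a confirmed schema, so check `backend/app/schemas.py` or the interactive docs first:

```bash
# Field names are assumed -- verify against backend/app/schemas.py
curl -X POST http://localhost:8000/api/logs/ \
  -H "Content-Type: application/json" \
  -d '{"level": "ERROR", "source": "auth-service", "message": "Login failed for user 42"}'
```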
- `POST /api/insights/generate` - Generate an LLM insight for a log entry
- `GET /api/insights/log/{log_id}` - Get all insights for a log entry
- `GET /api/insights/log/{log_id}/with-insights` - Get log entry with all insights
- `GET /api/insights/{insight_id}` - Get a specific insight
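Generating an insight for an existing log entry might look like the following; the `log_id` payload shape is an assumption, so confirm it in the docs at http://localhost:8000/docs:

```bash
# Assumed request shape -- confirm in the interactive API docs
curl -X POST http://localhost:8000/api/insights/generate \
  -H "Content-Type: application/json" \
  -d '{"log_id": 1}'
```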
- `GET /api/evaluation/status` - Get Braintrust evaluation status
- `POST /api/evaluation/test-insight` - Test insight evaluation
- `GET /api/evaluation/metrics` - Get evaluation metrics summary
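Checking whether Braintrust evaluation is active is a plain GET with no body:

```bash
curl http://localhost:8000/api/evaluation/status
```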
- Push your code to a Git repository
- In the Render dashboard, create a new Web Service
- Connect your repository
- Use the `backend/render.yaml` configuration (sketched below) or manually set:
  - Build Command: `pip install -r requirements.txt`
  - Start Command: `uvicorn app.main:app --host 0.0.0.0 --port $PORT`
  - Environment Variables: Set `DATABASE_URL`, `OPENAI_API_KEY`, and optionally `BRAINTRUST_API_KEY` and `ENABLE_BRAINTRUST=true`
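A minimal Render blueprint sketch of what `backend/render.yaml` might contain, assuming a single Python web service; this is illustrative rather than the repository's actual file, the service name is hypothetical, and secrets are left to the dashboard (`sync: false`):

```yaml
# Illustrative sketch -- compare with the repository's backend/render.yaml
services:
  - type: web
    name: log-insights-backend   # hypothetical service name
    env: python
    rootDir: backend
    buildCommand: pip install -r requirements.txt
    startCommand: uvicorn app.main:app --host 0.0.0.0 --port $PORT
    envVars:
      - key: DATABASE_URL
        sync: false              # set the value in the Render dashboard
      - key: OPENAI_API_KEY
        sync: false
      - key: ENABLE_BRAINTRUST
        value: "true"
```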
- Push your code to a Git repository
- In the Vercel dashboard, import your repository
- Set build settings:
  - Framework Preset: Vite
  - Build Command: `npm run build`
  - Output Directory: `dist`
- Set environment variable:
  - `VITE_API_URL`: Your backend API URL (e.g., https://your-backend.onrender.com)
The following services use placeholder implementations and should be replaced with production models:
- Embedding Service (`backend/app/services/embedding_service.py`):
  - Currently returns random vectors
  - Replace with an actual embedding model (OpenAI, Cohere, or a local model); see the sketch after this list
- LLM Service (`backend/app/services/llm_service.py`):
  - Currently returns placeholder insights
  - Replace with an actual LLM integration (OpenAI, Anthropic, etc.); see the sketch after this list
- Braintrust Service (`backend/app/services/braintrust_service.py`):
  - Evaluates AI component quality (insights, embeddings, anomaly detection)
  - Automatically logs evaluations when enabled
  - Provides scoring for relevance, actionability, and anomaly alignment
  - To enable: set `BRAINTRUST_API_KEY` and `ENABLE_BRAINTRUST=true` in `.env`
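A minimal sketch of what production replacements for the embedding and LLM placeholders could look like, assuming the OpenAI Python SDK (v1 client); the function names, prompt, and model choices are illustrative and do not mirror the repository's actual service interfaces:

```python
# Illustrative sketch only -- adapt to the interfaces in
# backend/app/services/embedding_service.py and llm_service.py.
from openai import OpenAI  # assumes the openai>=1.0 SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def compute_embedding(text: str) -> list[float]:
    """Replace the random-vector placeholder with a real embedding model."""
    response = client.embeddings.create(
        model="text-embedding-3-small",  # example model choice
        input=text,
    )
    return response.data[0].embedding


def generate_insight(log_message: str) -> str:
    """Replace the placeholder insight text with an LLM-generated analysis."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model choice
        messages=[
            {"role": "system",
             "content": "You analyze application logs and suggest likely causes and next steps."},
            {"role": "user", "content": f"Log entry:\n{log_message}"},
        ],
    )
    return response.choices[0].message.content
```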
- Do not commit secrets (`.env` files, API keys)
- Update CORS settings in production (currently all origins are allowed; see the sketch below)
- Use environment variables for all sensitive configuration
- Consider adding authentication/authorization for production use
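A standalone sketch of restricting CORS with FastAPI's `CORSMiddleware`; the origin list is an assumption, so substitute your actual frontend URLs and fold the middleware into the existing app setup in `backend/app/main.py`:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Restrict allowed origins instead of "*" in production.
app.add_middleware(
    CORSMiddleware,
    allow_origins=[
        "http://localhost:3000",             # local frontend dev server
        "https://your-frontend.vercel.app",  # hypothetical deployed frontend
    ],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```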
- Default: SQLite (for local development)
- Production: Use PostgreSQL (update `DATABASE_URL` in `.env`)
- Database migrations: Consider using Alembic for schema management (example commands below)
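A typical Alembic workflow sketch, assuming you add `alembic` to the backend dependencies and point its generated `env.py` at the app's SQLAlchemy models; the revision message is illustrative:

```bash
cd backend
pip install alembic
alembic init alembic                                     # one-time scaffold
alembic revision --autogenerate -m "create log tables"   # illustrative message
alembic upgrade head                                     # apply migrations
```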
```bash
# Backend tests (when implemented)
cd backend
pytest

# Frontend tests (when implemented)
cd frontend
npm test
```

- Backend: Follow PEP 8, use the Black formatter
- Frontend: Follow ESLint rules
MIT