Split deployment: FastAPI backend on Render, Vite React frontend on Vercel.
- Create a new Web Service on render.com, pointing at this repo.
- Render auto-detects `render.yaml` — no manual settings needed.
- Add the required environment variable in the Render dashboard: `OPENAI_API_KEY` — your OpenAI API key.
- The service exposes a `/health` endpoint used by Render's health check.
- WebSocket connections are served at `/ws` on the same service.
Start command (from `render.yaml`):

```
uvicorn src.api.main:app --host 0.0.0.0 --port $PORT
```
Note: `stable-baselines3`, `torch`, and `torch-geometric` increase build time significantly. Use the Standard plan or above for sufficient build memory.
- Import this repo on vercel.com.
- In project settings, set Root Directory to `src/ui/dashboard`.
- Framework preset: Vite (auto-detected).
- Add environment variables:
  - `VITE_API_URL` — your Render service URL, e.g. `https://cybercypher-api.onrender.com`
  - `VITE_WS_URL` — (optional) WebSocket URL. If omitted, derived automatically from `VITE_API_URL` (`https` → `wss`, appends `/ws`).
- `vercel.json` in `src/ui/dashboard/` handles SPA rewrites — no extra config needed.
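The fallback derivation (`https` → `wss`, append `/ws`) is simple enough to express directly. A sketch of the rule in Python, assuming the frontend implements something equivalent (the function name here is hypothetical):

```python
def derive_ws_url(api_url: str) -> str:
    """Derive a WebSocket URL from an HTTP(S) API URL.

    Mirrors the documented fallback: swap the scheme
    (https -> wss, http -> ws) and append the /ws path.
    """
    if api_url.startswith("https://"):
        ws = "wss://" + api_url[len("https://"):]
    elif api_url.startswith("http://"):
        ws = "ws://" + api_url[len("http://"):]
    else:
        raise ValueError(f"unsupported scheme in {api_url!r}")
    # Strip any trailing slash so we never emit a double "//ws".
    return ws.rstrip("/") + "/ws"
```

For example, `derive_ws_url("https://cybercypher-api.onrender.com")` yields `wss://cybercypher-api.onrender.com/ws`, so `VITE_WS_URL` is only needed when the WebSocket host differs from the API host.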
Backend:

```
cp .env.example .env
# fill in OPENAI_API_KEY
pip install -r requirements.txt
uvicorn src.api.main:app --reload
```

Frontend:
```
cd src/ui/dashboard
cp .env.example .env.local
# set VITE_API_URL=http://localhost:8000 (or leave blank; vite.config.js proxies /api and /ws)
npm install
npm run dev
```

To train the RL traffic-engineering model:

```
python train_rl_synthetic.py --timesteps 50000 --output models/rl_traffic_engineer
```

The model is auto-loaded by DeciderAgent at startup from `models/rl_traffic_engineer.zip`.
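Since the trained policy may not exist on a fresh checkout, the startup load is worth guarding. A sketch of how DeciderAgent might load the model, assuming stable-baselines3's `PPO` (the loader function and fallback behavior here are illustrative, not the repo's actual agent code):

```python
from pathlib import Path

MODEL_PATH = Path("models/rl_traffic_engineer.zip")

def load_policy(path: Path = MODEL_PATH):
    """Load the trained PPO policy, or return None if it is missing.

    Returning None lets the agent fall back to a non-RL policy
    instead of crashing at startup on a fresh checkout.
    """
    if not path.exists():
        return None
    # Deferred import: stable-baselines3 pulls in torch, which is heavy.
    from stable_baselines3 import PPO
    return PPO.load(str(path))
```

`PPO.load` restores both the network weights and hyperparameters from the `.zip` archive produced by training, so no extra config is needed at inference time.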
```
# Step 1 — generate synthetic incident dataset
python src/models/llm_finetune/synthetic_incident_generator.py --count 500 --output data/incidents.jsonl

# Step 2 — fine-tune with LoRA
python src/models/llm_finetune/train_lora.py --dataset data/incidents.jsonl
```
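The dataset passed between the two steps is JSONL: one JSON object per line, no enclosing array. A stdlib-only sketch of reading and writing that format (the field names below are hypothetical; the generator script defines the real schema):

```python
import json

def write_incidents(records: list[dict], path: str) -> None:
    # JSONL: serialize each record to a single line.
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

def read_incidents(path: str) -> list[dict]:
    # Skip blank lines so trailing newlines do not raise errors.
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]
```

JSONL is convenient here because the generator can append records incrementally and the trainer can stream the file line by line without loading all 500 incidents into memory at once.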