FastAPI backend service that uses Llamastack to interact with language models.
- Install dependencies:

  ```shell
  cd app
  pip install -r requirements.txt
  ```

- Run the server:

  ```shell
  python main.py
  ```

  The server will start on http://localhost:8000.
Build the container image:

```shell
cd app
podman build -t canopy-backend .
```

Deploy using Helm:

```shell
helm install canopy-backend ./helm-chart/canopy-backend
```

Summarize text using the language model, with a streaming response.
Request body:

```json
{
  "prompt": "Your text to summarize here"
}
```

Response: a server-sent events stream with delta chunks.
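A client can reassemble the summary from the streamed delta chunks. The sketch below is illustrative only: it assumes each SSE event is a `data:` line carrying a JSON payload with a `delta` field and that the stream ends with a `data: [DONE]` sentinel — both the field name and the sentinel are assumptions, so adjust them to match the backend's actual event format.

```python
import json

def collect_deltas(sse_lines):
    """Reassemble text from an SSE stream of delta chunks.

    Hypothetical event shape (not confirmed by the API docs):
        data: {"delta": "some text"}
    terminated by a `data: [DONE]` sentinel.
    """
    parts = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # assumed end-of-stream sentinel
        parts.append(json.loads(payload).get("delta", ""))
    return "".join(parts)

# Example with a simulated stream:
stream = [
    'data: {"delta": "The report "}',
    '',
    'data: {"delta": "covers Q3 results."}',
    '',
    'data: [DONE]',
]
print(collect_deltas(stream))  # The report covers Q3 results.
```

In practice the lines would come from an HTTP client reading the response body incrementally (e.g. `httpx`'s `iter_lines()` on a streamed request) rather than from a list.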
Once the server is running, access:

- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc