# canopy-be

FastAPI backend service that uses Llama Stack to interact with language models.

## Local Development

1. Install dependencies:

   ```bash
   cd app
   pip install -r requirements.txt
   ```

2. Run the server:

   ```bash
   python main.py
   ```

The server starts on http://localhost:8000.

## Container Build

Build the container image:

```bash
cd app
podman build -t canopy-backend .
```
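To try the image locally before deploying, it can be run with the API port published. A sketch, assuming the service listens on port 8000 as noted in the Local Development section:

```bash
# Run the freshly built image and publish the API port (8000, per Local Development)
podman run --rm -p 8000:8000 canopy-backend
```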

## OpenShift Deployment

Deploy using Helm:

```bash
helm install canopy-backend ./helm-chart/canopy-backend
```

## API Endpoints

### POST /summarize

Summarizes the submitted text using the language model and streams the result back.

Request body:

```json
{
    "prompt": "Your text to summarize here"
}
```

Response: a server-sent events (SSE) stream of delta chunks.
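A minimal sketch of assembling the streamed deltas on the client side, assuming each SSE `data:` line carries a JSON payload with a `delta` field and that the stream ends with a `[DONE]` sentinel (both are assumptions about this backend's wire format, not confirmed by the README):

```python
import json

def accumulate_deltas(sse_lines):
    """Join the text deltas from raw SSE 'data:' lines into one string."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and SSE comment lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":  # hypothetical end-of-stream sentinel
            break
        parts.append(json.loads(payload)["delta"])  # "delta" field is assumed
    return "".join(parts)

# Hypothetical chunks shaped like {"delta": "..."}
sample = [
    'data: {"delta": "The text "}',
    'data: {"delta": "summarized."}',
    "data: [DONE]",
]
print(accumulate_deltas(sample))  # -> The text summarized.
```

In a real client the same loop would iterate over the response body of a streaming HTTP request to `POST /summarize` rather than a fixed list.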

## API Documentation

Once the server is running, access:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc
