Merged

Commits (23)
de9facc
initial file structure created. Populated with unimplemented files
madison-evans May 20, 2025
2763712
added relevant code to files within comps/router/deployment
madison-evans May 20, 2025
358e027
added Dockerfile, opea_router_microservice.py, README.md, and require…
madison-evans May 20, 2025
4fc9690
added controller components for router instances
madison-evans May 20, 2025
de5cfee
added initial routellm controller test script in router directory
madison-evans May 20, 2025
56b8b2d
fixed requirements.txt issue
madison-evans May 20, 2025
23aeeeb
added HUGGINGFACEHUB_API_TOKEN as an env variable
madison-evans May 20, 2025
5aab6bc
removed hard OPENAI dependency and made OPENAI_API_KEY default to emp…
madison-evans May 21, 2025
b4c86b0
removed empty str fallback for OPENAI_API_KEY var
madison-evans May 21, 2025
9262b05
target localhost in RouteLLM E2E test to avoid Docker network issues
madison-evans May 21, 2025
64c8507
fixed e2e test issue for routellm test
madison-evans May 21, 2025
cf1622c
changed the checkpoint path for the custom mf model weights. Now usin…
madison-evans May 28, 2025
efdd653
moved RouteEndpointDoc class into 'api_protocol.py' under cores/proto
madison-evans May 28, 2025
2d8e71e
added 'router-compose.yaml' to workflows/docker/compose
madison-evans May 28, 2025
9eb977a
pre commit format updates
madison-evans May 28, 2025
8db8aa2
removed the forked version of RouteLLM from requirements.txt dependen…
madison-evans May 28, 2025
beadac5
updated README to reflect the patch usage for modified RouteLLM repo
madison-evans May 28, 2025
febad4f
Merge branch 'opea-project:main' into routing-service
madison-evans May 30, 2025
5316753
added H1 title to README
madison-evans May 30, 2025
a9bf90a
Merge branch 'opea-project:main' into routing-service
madison-evans May 30, 2025
cbae17f
Merge branch 'opea-project:main' into routing-service
madison-evans Jun 3, 2025
96fcda3
comply with formatting requests.
Jun 9, 2025
357ff33
fix pre-commit issues: remove trailing whitespace and add newline
Jun 9, 2025
9 changes: 9 additions & 0 deletions .github/workflows/docker/compose/router-compose.yaml
@@ -0,0 +1,9 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

# This file should be run from the root of the repo
services:
  router:
    build:
      dockerfile: comps/router/src/Dockerfile
    image: ${REGISTRY:-opea}/opea-router:${TAG:-latest}
4 changes: 4 additions & 0 deletions comps/cores/proto/api_protocol.py
@@ -1014,3 +1014,7 @@ class FineTuningJobCheckpoint(BaseModel):

step_number: Optional[int] = None
"""The step number that the checkpoint was created at."""


class RouteEndpointDoc(BaseModel):
url: str = Field(..., description="URL of the chosen inference endpoint")
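For context, a minimal sketch of how this response document serializes (assuming pydantic v2; with pydantic v1 the call would be `.json()` instead):

```python
from pydantic import BaseModel, Field

class RouteEndpointDoc(BaseModel):
    url: str = Field(..., description="URL of the chosen inference endpoint")

# The router returns exactly this shape from POST /v1/route.
doc = RouteEndpointDoc(url="http://opea_router:8000/strong")
print(doc.model_dump_json())  # {"url":"http://opea_router:8000/strong"}
```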
35 changes: 35 additions & 0 deletions comps/router/deployment/docker_compose/compose.yaml
@@ -0,0 +1,35 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

services:
  router_service:
    build:
      context: ../../../..
      dockerfile: comps/router/src/Dockerfile

    image: "${REGISTRY_AND_REPO:-opea/router}:${TAG:-latest}"
    container_name: opea_router

    volumes:
      - ./configs:/app/configs

    environment:
      CONFIG_PATH: /app/configs/router.yaml

      WEAK_ENDPOINT: ${WEAK_ENDPOINT:-http://opea_router:8000/weak}
      STRONG_ENDPOINT: ${STRONG_ENDPOINT:-http://opea_router:8000/strong}
      WEAK_MODEL_ID: ${WEAK_MODEL_ID:-openai/gpt-3.5-turbo}
      STRONG_MODEL_ID: ${STRONG_MODEL_ID:-openai/gpt-4}

      HF_TOKEN: ${HF_TOKEN:?set HF_TOKEN}
      OPENAI_API_KEY: ${OPENAI_API_KEY:-""}

      CONTROLLER_TYPE: ${CONTROLLER_TYPE:-routellm}

    ports:
      - "${ROUTER_PORT:-6000}:6000"
    restart: unless-stopped

networks:
  default:
    driver: bridge
29 changes: 29 additions & 0 deletions comps/router/deployment/docker_compose/configs/routellm_config.yaml
@@ -0,0 +1,29 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

# which embedder backend to use ("huggingface" or "openai")
embedding_provider: "huggingface"

embedding_model_name: "intfloat/e5-base-v2"

routing_algorithm: "mf"
threshold: 0.3

config:
  sw_ranking:
    arena_battle_datasets:
      - "lmsys/lmsys-arena-human-preference-55k"
      - "routellm/gpt4_judge_battles"
    arena_embedding_datasets:
      - "routellm/arena_battles_embeddings"
      - "routellm/gpt4_judge_battles_embeddings"

  causal_llm:
    checkpoint_path: "routellm/causal_llm_gpt4_augmented"

  bert:
    checkpoint_path: "routellm/bert_gpt4_augmented"

  mf:
    checkpoint_path: "OPEA/routellm-e5-base-v2"
    use_openai_embeddings: false
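To show how these settings fit together, here is a minimal sketch of a loader that resolves the active controller's checkpoint from this file (PyYAML and a relative path assumed; the actual service code may differ):

```python
import yaml

# Read the RouteLLM controller config and pull out the settings for the
# configured routing algorithm ("mf" above).
with open("configs/routellm_config.yaml") as f:
    cfg = yaml.safe_load(f)

algo = cfg["routing_algorithm"]                      # "mf"
threshold = cfg["threshold"]                         # 0.3, the routing cutoff; higher scores favor the strong model
checkpoint = cfg["config"][algo]["checkpoint_path"]  # "OPEA/routellm-e5-base-v2"
print(f"routing with {algo} (threshold={threshold}) using {checkpoint}")
```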
14 changes: 14 additions & 0 deletions comps/router/deployment/docker_compose/configs/router.yaml
@@ -0,0 +1,14 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

model_map:
  weak:
    endpoint: "${WEAK_ENDPOINT:-http://opea_router:8000/weak}"
    model_id: "${WEAK_MODEL_ID}"
  strong:
    endpoint: "${STRONG_ENDPOINT:-http://opea_router:8000/strong}"
    model_id: "${STRONG_MODEL_ID}"

controller_config_paths:
  routellm: "/app/configs/routellm_config.yaml"
  semantic_router: "/app/configs/semantic_router_config.yaml"
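Note that the `${VAR:-default}` placeholders above are not standard YAML, so the service must expand them before parsing. A minimal sketch of such an expander (an illustration, not necessarily the actual implementation):

```python
import os
import re

# Matches ${NAME} and ${NAME:-default}.
_PLACEHOLDER = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

def expand_env(text: str) -> str:
    """Replace ${VAR:-default} placeholders with environment values."""
    return _PLACEHOLDER.sub(lambda m: os.environ.get(m.group(1), m.group(2) or ""), text)

raw = 'endpoint: "${WEAK_ENDPOINT:-http://opea_router:8000/weak}"'
print(expand_env(raw))  # falls back to the default when WEAK_ENDPOINT is unset
```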
20 changes: 20 additions & 0 deletions comps/router/deployment/docker_compose/configs/semantic_router_config.yaml
@@ -0,0 +1,20 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

embedding_provider: "huggingface"

embedding_models:
  huggingface: "BAAI/bge-base-en-v1.5"
  openai: "text-embedding-ada-002"

routes:
  - name: "strong"
    utterances:
      - "Prove the Pythagorean theorem using geometric arguments..."
      - "Explain the Calvin cycle..."
      - "Discuss the ethical implications of deploying AI..."
  - name: "weak"
    utterances:
      - "Hello, how are you?"
      - "What's 2 + 2?"
      - "Can you tell me a funny joke?"
52 changes: 52 additions & 0 deletions comps/router/deployment/docker_compose/deploy_router.sh
@@ -0,0 +1,52 @@
#!/bin/bash

# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

# ========================
# OPEA Router Deploy Script
# ========================

# Load environment variables from a .env file if present
if [ -f .env ]; then
  echo "[INFO] Loading environment variables from .env"
  export $(grep -v '^#' .env | xargs)
fi

# Required variables
REQUIRED_VARS=("HF_TOKEN")

# Validate that all required variables are set
for VAR in "${REQUIRED_VARS[@]}"; do
  if [ -z "${!VAR}" ]; then
    echo "[ERROR] $VAR is not set. Please set it in your environment or .env file."
    exit 1
  fi
done

export HUGGINGFACEHUB_API_TOKEN="$HF_TOKEN"

# Default values for Docker image
REGISTRY_AND_REPO=${REGISTRY_AND_REPO:-opea/router}
TAG=${TAG:-latest}

# Export them so Docker Compose can see them
export REGISTRY_AND_REPO
export TAG

# Print summary
echo "[INFO] Starting deployment with the following config:"
echo " Image: ${REGISTRY_AND_REPO}:${TAG}"
echo " HF_TOKEN: ***${HF_TOKEN: -4}"
echo " OPENAI_API_KEY: ***${OPENAI_API_KEY: -4}"
echo ""

# Compose up (detached, so the status check below runs while the service starts)
echo "[INFO] Launching Docker Compose service..."
docker compose -f compose.yaml up --build -d

# Wait a moment then check status
sleep 2
docker ps --filter "name=opea_router"

echo "[SUCCESS] Router service deployed and running on http://localhost:6000"
30 changes: 30 additions & 0 deletions comps/router/src/Dockerfile
@@ -0,0 +1,30 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

FROM python:3.10-slim

# Install git
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*

# Add a non-root user
RUN useradd -m -s /bin/bash user && chown -R user /home/user

# Copy the *entire* comps/ package
WORKDIR /home/user
COPY comps /home/user/comps

# Install deps from the router's requirements.txt, then clone, patch, and install RouteLLM
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r /home/user/comps/router/src/requirements.txt && \
    git clone --depth 1 https://github.com/lm-sys/RouteLLM.git /tmp/RouteLLM && \
    patch -p1 -d /tmp/RouteLLM < /home/user/comps/router/src/hf_compatibility.patch && \
    pip install --no-cache-dir /tmp/RouteLLM && \
    rm -rf /tmp/RouteLLM

# Make imports work
ENV PYTHONPATH=/home/user

# Switch to non-root
USER user

# Expose the port
EXPOSE 6000

# Run the microservice
WORKDIR /home/user/comps/router/src
CMD ["python", "opea_router_microservice.py"]
120 changes: 120 additions & 0 deletions comps/router/src/README.md
@@ -0,0 +1,120 @@
# Router Microservice

> Location: comps/router/src/README.md

A lightweight HTTP service that routes incoming text prompts to the most appropriate LLM back-end (e.g. strong vs. weak) and returns the target inference endpoint. It is built on the OPEA micro-service SDK and can switch between two controller back-ends:

- RouteLLM (matrix-factorisation, dataset-driven)
- Semantic-Router (encoder-based semantic similarity)

The router is stateless; it inspects the prompt, consults the configured controller, and replies with a single URL such as `http://opea_router:8000/strong`.

## Build

```bash
# From repo root 📂
# Build the container image directly
$ docker build -t opea/router:latest -f comps/router/src/Dockerfile .
```

Alternatively, the Docker Compose workflow below will build the image for you.

```bash
# Navigate to the compose bundle
$ cd comps/router/deployment/docker_compose

# Populate required secrets (or create a .env file)
$ export HF_TOKEN="<your hf token>"
$ export OPENAI_API_KEY="<your openai key>"

# Optional: point to custom inference endpoints / models
$ export WEAK_ENDPOINT=http://my-llm-gateway:8000/weak
$ export STRONG_ENDPOINT=http://my-llm-gateway:8000/strong
$ export CONTROLLER_TYPE=routellm # or semantic_router

# Launch (using the helper script)
$ ./deploy_router.sh
```

_The service listens on http://localhost:6000 (host-mapped from container port 6000). The helper script starts the stack detached; use `docker compose logs -f router_service` to stream logs and `docker compose down` to clean up._

## RouteLLM compatibility patch

The upstream **RouteLLM** project is geared toward OpenAI embeddings and GPT-4-augmented checkpoints. We ship a small patch, `hf_compatibility.patch`, that:

- adds a `hf_token` plumb-through,
- switches the Matrix-Factorisation router to Hugging Face sentence embeddings,
- removes hard-coded GPT-4 “golden-label” defaults.

**Container users:**
The Dockerfile applies the patch automatically during `docker build`, so you don’t have to do anything.

**Local development:**

```bash
# 1. Clone upstream RouteLLM
git clone https://github.com/lm-sys/RouteLLM.git
cd RouteLLM

# 2. Apply the patch shipped with this repo
patch -p1 < ../comps/router/src/hf_compatibility.patch

# 3. Install the patched library
pip install -e .
```

## API Usage

| Method | URL | Body schema | Success response |
| ------ | ----------- | ----------------------------- | ---------------------------------------------- |
| `POST` | `/v1/route` | `{ "text": "<user prompt>" }` | `200 OK` → `{ "url": "<inference endpoint>" }` |

**Example**

```bash
curl -X POST http://localhost:6000/v1/route \
-H "Content-Type: application/json" \
-d '{"text": "Explain the Calvin cycle in photosynthesis."}'
```

Expected JSON _(assuming the strong model wins the routing decision)_:

```json
{
"url": "http://opea_router:8000/strong"
}
```
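The same call from Python, as a quick sketch (assumes the `requests` package is installed):

```python
import requests

# Ask the router which inference endpoint should serve this prompt.
resp = requests.post(
    "http://localhost:6000/v1/route",
    json={"text": "Explain the Calvin cycle in photosynthesis."},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["url"])  # e.g. "http://opea_router:8000/strong"
```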

## Configuration Reference

| Variable / file | Purpose | Default | Where set |
| ----------------------------------- | ------------------------------------------------- | -------------------------------------- | ------------------- |
| `HF_TOKEN` | Hugging Face auth token for encoder models | — | `.env` / shell |
| `OPENAI_API_KEY` | OpenAI key (only if `embedding_provider: openai`) | — | `.env` / shell |
| `CONTROLLER_TYPE` | `routellm` or `semantic_router` | `routellm` | env / `router.yaml` |
| `CONFIG_PATH` | Path to global router YAML | `/app/configs/router.yaml` | Compose env |
| `WEAK_ENDPOINT` / `STRONG_ENDPOINT` | Final inference URLs | container DNS | Compose env |
| `WEAK_MODEL_ID` / `STRONG_MODEL_ID` | Model IDs forwarded to controllers | `openai/gpt-3.5-turbo`, `openai/gpt-4` | Compose env |

## Troubleshooting

- **`HF_TOKEN` is not set** – export the token or place it in a `.env` file next to `compose.yaml`.
- **Unknown controller type** – `CONTROLLER_TYPE` must be either `routellm` or `semantic_router`, and a matching entry must exist in `controller_config_paths`.
- **Routed model `<name>` not in `model_map`** – make sure `model_map` in `router.yaml` lists both `strong` and `weak` with the correct `model_id` values.

Use `docker compose logs -f router_service` for real-time debugging.

## Testing

The repository includes an end-to-end test script for the RouteLLM controller:

```bash
chmod +x tests/router/test_router_routellm.sh
export HF_TOKEN="<your HF token>"
export OPENAI_API_KEY="<your OpenAI key>"
tests/router/test_router_routellm.sh
```