
Commit d37062b

update secrets token name for CodeGen and CodeTrans (opea-project#2031)

Signed-off-by: ZePan110 <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

1 parent: 35e0ae4

26 files changed: +55 -55 lines changed

CodeGen/benchmark/accuracy/run_acc.sh
Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-
+#!/bin/bash
 
 # Copyright (C) 2024 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0

CodeGen/docker_compose/amd/gpu/rocm/README.md
Lines changed: 4 additions & 4 deletions

@@ -109,7 +109,7 @@ Key parameters are configured via environment variables set before running `docker compose`
 | Environment Variable | Description | Default (Set Externally) |
 | :-------------------------------------- | :------------------------------------------------------------------------------------------------------------------ | :----------------------------------------------------------------------------------------------- |
 | `HOST_IP` | External IP address of the host machine. **Required.** | `your_external_ip_address` |
-| `HUGGINGFACEHUB_API_TOKEN` | Your Hugging Face Hub token for model access. **Required.** | `your_huggingface_token` |
+| `HF_TOKEN` | Your Hugging Face Hub token for model access. **Required.** | `your_huggingface_token` |
 | `LLM_MODEL_ID` | Hugging Face model ID for the CodeGen LLM (used by TGI/vLLM service). Configured within `compose.yaml` environment. | `Qwen/Qwen2.5-Coder-7B-Instruct` |
 | `EMBEDDING_MODEL_ID` | Hugging Face model ID for the embedding model (used by TEI service). Configured within `compose.yaml` environment. | `BAAI/bge-base-en-v1.5` |
 | `LLM_ENDPOINT` | Internal URL for the LLM serving endpoint (used by `codegen-llm-server`). Configured in `compose.yaml`. | `http://codegen-tgi-server:80/generate` or `http://codegen-vllm-server:8000/v1/chat/completions` |

@@ -125,7 +125,7 @@ For TGI
 
 ```bash
 export host_ip="External_Public_IP" #ip address of the node
-export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
+export HF_TOKEN="Your_Huggingface_API_Token"
 export http_proxy="Your_HTTP_Proxy" #http proxy if any
 export https_proxy="Your_HTTPs_Proxy" #https proxy if any
 export no_proxy=localhost,127.0.0.1,$host_ip #additional no proxies if needed

@@ -137,7 +137,7 @@ For vLLM
 
 ```bash
 export host_ip="External_Public_IP" #ip address of the node
-export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
+export HF_TOKEN="Your_Huggingface_API_Token"
 export http_proxy="Your_HTTP_Proxy" #http proxy if any
 export https_proxy="Your_HTTPs_Proxy" #https proxy if any
 export no_proxy=localhost,127.0.0.1,$host_ip #additional no proxies if needed

@@ -422,7 +422,7 @@ Users can interact with the backend service using the `Neural Copilot` VS Code extension
 
 ## Troubleshooting
 
-- **Model Download Issues:** Check `HUGGINGFACEHUB_API_TOKEN`. Ensure internet connectivity or correct proxy settings. Check logs of `tgi-service`/`vllm-service` and `tei-embedding-server`. Gated models need prior Hugging Face access.
+- **Model Download Issues:** Check `HF_TOKEN`. Ensure internet connectivity or correct proxy settings. Check logs of `tgi-service`/`vllm-service` and `tei-embedding-server`. Gated models need prior Hugging Face access.
 - **Connection Errors:** Verify `HOST_IP` is correct and accessible. Check `docker ps` for port mappings. Ensure `no_proxy` includes `HOST_IP` if using a proxy. Check logs of the service failing to connect (e.g., `codegen-backend-server` logs if it can't reach `codegen-llm-server`).
 - **"Container name is in use"**: Stop existing containers (`docker compose down`) or change `container_name` in `compose.yaml`.
 - **Resource Issues:** CodeGen models can be memory-intensive. Monitor host RAM usage. Increase Docker resources if needed.
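
Since every deployment path in this commit now reads `HF_TOKEN`, it can help to sanity-check the token before bringing the stack up. A minimal sketch, assuming the `huggingface_hub` CLI is installed on the host (it is not part of this deployment):

```bash
# Export the renamed variable, then confirm the token actually authenticates.
# huggingface-cli reads HF_TOKEN from the environment.
export HF_TOKEN="your_huggingface_token"
huggingface-cli whoami || echo "HF_TOKEN is missing or invalid"
```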

CodeGen/docker_compose/amd/gpu/rocm/set_env.sh
Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ export EXTERNAL_HOST_IP=${ip_address}
 export CODEGEN_TGI_SERVICE_PORT=8028
 
 ### A token for accessing repositories with models
-export CODEGEN_HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
+export CODEGEN_HUGGINGFACEHUB_API_TOKEN=${HF_TOKEN}
 
 ### Model ID
 export CODEGEN_LLM_MODEL_ID="Qwen/Qwen2.5-Coder-7B-Instruct"
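
Renames like this silently break environments that still export only the old variable. A hypothetical transition shim (not part of this commit) would accept either name:

```bash
# Hypothetical compatibility shim: prefer the new HF_TOKEN, but fall back
# to the legacy HUGGINGFACEHUB_API_TOKEN if only that one is exported.
export HF_TOKEN=${HF_TOKEN:-${HUGGINGFACEHUB_API_TOKEN}}
export CODEGEN_HUGGINGFACEHUB_API_TOKEN=${HF_TOKEN}
```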

CodeGen/docker_compose/amd/gpu/rocm/set_env_vllm.sh
Lines changed: 1 addition & 1 deletion

@@ -13,7 +13,7 @@ export CODEGEN_VLLM_SERVICE_PORT=8028
 export CODEGEN_VLLM_ENDPOINT="http://${HOST_IP}:${CODEGEN_VLLM_SERVICE_PORT}"
 
 ### A token for accessing repositories with models
-export CODEGEN_HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
+export CODEGEN_HUGGINGFACEHUB_API_TOKEN=${HF_TOKEN}
 
 ### Model ID
 export CODEGEN_LLM_MODEL_ID="Qwen/Qwen2.5-Coder-7B-Instruct"

CodeGen/docker_compose/intel/cpu/xeon/README.md
Lines changed: 4 additions & 4 deletions

@@ -42,7 +42,7 @@ This uses the default vLLM-based deployment profile (`codegen-xeon-vllm`).
 # Replace with your host's external IP address (do not use localhost or 127.0.0.1)
 export HOST_IP="your_external_ip_address"
 # Replace with your Hugging Face Hub API token
-export HUGGINGFACEHUB_API_TOKEN="your_huggingface_token"
+export HF_TOKEN="your_huggingface_token"
 
 # Optional: Configure proxy if needed
 # export http_proxy="your_http_proxy"

@@ -90,7 +90,7 @@ The `compose.yaml` file uses Docker Compose profiles to select the LLM serving backend
 - **Services Deployed:** `codegen-tgi-server`, `codegen-llm-server`, `codegen-tei-embedding-server`, `codegen-retriever-server`, `redis-vector-db`, `codegen-dataprep-server`, `codegen-backend-server`, `codegen-gradio-ui-server`.
 - **To Run:**
   ```bash
-  # Ensure environment variables (HOST_IP, HUGGINGFACEHUB_API_TOKEN) are set
+  # Ensure environment variables (HOST_IP, HF_TOKEN) are set
   docker compose --profile codegen-xeon-tgi up -d
   ```
 

@@ -103,7 +103,7 @@ Key parameters are configured via environment variables set before running `docker compose`
 | Environment Variable | Description | Default (Set Externally) |
 | :-------------------------------------- | :------------------------------------------------------------------------------------------------------------------ | :--------------------------------------------- | ------------------------------------ |
 | `HOST_IP` | External IP address of the host machine. **Required.** | `your_external_ip_address` |
-| `HUGGINGFACEHUB_API_TOKEN` | Your Hugging Face Hub token for model access. **Required.** | `your_huggingface_token` |
+| `HF_TOKEN` | Your Hugging Face Hub token for model access. **Required.** | `your_huggingface_token` |
 | `LLM_MODEL_ID` | Hugging Face model ID for the CodeGen LLM (used by TGI/vLLM service). Configured within `compose.yaml` environment. | `Qwen/Qwen2.5-Coder-7B-Instruct` |
 | `EMBEDDING_MODEL_ID` | Hugging Face model ID for the embedding model (used by TEI service). Configured within `compose.yaml` environment. | `BAAI/bge-base-en-v1.5` |
 | `LLM_ENDPOINT` | Internal URL for the LLM serving endpoint (used by `codegen-llm-server`). Configured in `compose.yaml`. | `http://codegen-vllm | tgi-server:9000/v1/chat/completions` |

@@ -216,7 +216,7 @@ Users can interact with the backend service using the `Neural Copilot` VS Code extension
 
 ## Troubleshooting
 
-- **Model Download Issues:** Check `HUGGINGFACEHUB_API_TOKEN`. Ensure internet connectivity or correct proxy settings. Check logs of `tgi-service`/`vllm-service` and `tei-embedding-server`. Gated models need prior Hugging Face access.
+- **Model Download Issues:** Check `HF_TOKEN`. Ensure internet connectivity or correct proxy settings. Check logs of `tgi-service`/`vllm-service` and `tei-embedding-server`. Gated models need prior Hugging Face access.
 - **Connection Errors:** Verify `HOST_IP` is correct and accessible. Check `docker ps` for port mappings. Ensure `no_proxy` includes `HOST_IP` if using a proxy. Check logs of the service failing to connect (e.g., `codegen-backend-server` logs if it can't reach `codegen-llm-server`).
 - **"Container name is in use"**: Stop existing containers (`docker compose down`) or change `container_name` in `compose.yaml`.
 - **Resource Issues:** CodeGen models can be memory-intensive. Monitor host RAM usage. Increase Docker resources if needed.
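
Putting the rename together with the profile-based launch this README describes, a minimal end-to-end sketch (profile names `codegen-xeon-vllm`/`codegen-xeon-tgi` as documented above):

```bash
# Set the required variables with the new token name, then start the
# default vLLM-based profile (swap in codegen-xeon-tgi for TGI).
export HOST_IP="your_external_ip_address"   # not localhost or 127.0.0.1
export HF_TOKEN="your_huggingface_token"
docker compose --profile codegen-xeon-vllm up -d
```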

CodeGen/docker_compose/intel/cpu/xeon/compose.yaml
Lines changed: 6 additions & 6 deletions

@@ -17,7 +17,7 @@ services:
       no_proxy: ${no_proxy}
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
-      HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HF_TOKEN: ${HF_TOKEN}
       host_ip: ${host_ip}
     healthcheck:
       test: ["CMD-SHELL", "curl -f http://localhost:80/health || exit 1"]

@@ -39,7 +39,7 @@ services:
       no_proxy: ${no_proxy}
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
-      HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HF_TOKEN: ${HF_TOKEN}
       host_ip: ${host_ip}
     healthcheck:
       test: ["CMD-SHELL", "curl -f http://localhost:80/health || exit 1"]

@@ -56,7 +56,7 @@ services:
       https_proxy: ${https_proxy}
       LLM_ENDPOINT: ${LLM_ENDPOINT}
       LLM_MODEL_ID: ${LLM_MODEL_ID}
-      HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HUGGINGFACEHUB_API_TOKEN: ${HF_TOKEN}
     restart: unless-stopped
   llm-tgi-service:
     extends: llm-base

@@ -140,7 +140,7 @@ services:
       REDIS_URL: ${REDIS_URL}
       REDIS_HOST: ${host_ip}
       INDEX_NAME: ${INDEX_NAME}
-      HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HUGGINGFACEHUB_API_TOKEN: ${HF_TOKEN}
       LOGFLAG: true
     healthcheck:
       test: ["CMD-SHELL", "curl -f http://localhost:5000/v1/health_check || exit 1"]

@@ -162,7 +162,7 @@ services:
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
       host_ip: ${host_ip}
-      HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HF_TOKEN: ${HF_TOKEN}
     healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost:80/health"]
       interval: 10s

@@ -202,7 +202,7 @@ services:
       REDIS_RETRIEVER_PORT: ${REDIS_RETRIEVER_PORT}
       INDEX_NAME: ${INDEX_NAME}
       TEI_EMBEDDING_ENDPOINT: ${TEI_EMBEDDING_ENDPOINT}
-      HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HUGGINGFACEHUB_API_TOKEN: ${HF_TOKEN}
       LOGFLAG: ${LOGFLAG}
       RETRIEVER_COMPONENT_NAME: ${RETRIEVER_COMPONENT_NAME:-OPEA_RETRIEVER_REDIS}
     restart: unless-stopped
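
To check that a running container actually received the token after this change, one hedged approach (service name `codegen-llm-server` taken from the README above; assumes the image ships `printenv`):

```bash
# An empty or missing value here means the host-side HF_TOKEN export
# was absent when docker compose interpolated the environment.
docker compose exec codegen-llm-server printenv HUGGINGFACEHUB_API_TOKEN
```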

CodeGen/docker_compose/intel/cpu/xeon/compose_remote.yaml
Lines changed: 3 additions & 3 deletions

@@ -59,7 +59,7 @@ services:
       REDIS_URL: ${REDIS_URL}
       REDIS_HOST: ${host_ip}
       INDEX_NAME: ${INDEX_NAME}
-      HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HUGGINGFACEHUB_API_TOKEN: ${HF_TOKEN}
       LOGFLAG: true
     restart: unless-stopped
   tei-embedding-serving:

@@ -76,7 +76,7 @@ services:
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
       host_ip: ${host_ip}
-      HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HF_TOKEN: ${HF_TOKEN}
     healthcheck:
       test: ["CMD", "curl", "-f", "http://${host_ip}:${TEI_EMBEDDER_PORT}/health"]
       interval: 10s

@@ -116,7 +116,7 @@ services:
       REDIS_RETRIEVER_PORT: ${REDIS_RETRIEVER_PORT}
       INDEX_NAME: ${INDEX_NAME}
       TEI_EMBEDDING_ENDPOINT: ${TEI_EMBEDDING_ENDPOINT}
-      HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HUGGINGFACEHUB_API_TOKEN: ${HF_TOKEN}
       LOGFLAG: ${LOGFLAG}
       RETRIEVER_COMPONENT_NAME: ${RETRIEVER_COMPONENT_NAME:-OPEA_RETRIEVER_REDIS}
     restart: unless-stopped

CodeGen/docker_compose/intel/hpu/gaudi/README.md
Lines changed: 4 additions & 4 deletions

@@ -42,7 +42,7 @@ This uses the default vLLM-based deployment profile (`codegen-gaudi-vllm`).
 # Replace with your host's external IP address (do not use localhost or 127.0.0.1)
 export HOST_IP="your_external_ip_address"
 # Replace with your Hugging Face Hub API token
-export HUGGINGFACEHUB_API_TOKEN="your_huggingface_token"
+export HF_TOKEN="your_huggingface_token"
 
 # Optional: Configure proxy if needed
 # export http_proxy="your_http_proxy"

@@ -93,7 +93,7 @@ The `compose.yaml` file uses Docker Compose profiles to select the LLM serving backend
 - **Other Services:** Same CPU-based services as the vLLM profile.
 - **To Run:**
   ```bash
-  # Ensure environment variables (HOST_IP, HUGGINGFACEHUB_API_TOKEN) are set
+  # Ensure environment variables (HOST_IP, HF_TOKEN) are set
   docker compose --profile codegen-gaudi-tgi up -d
   ```
 

@@ -106,7 +106,7 @@ Key parameters are configured via environment variables set before running `docker compose`
 | Environment Variable | Description | Default (Set Externally) |
 | :-------------------------------------- | :------------------------------------------------------------------------------------------------------------------ | :--------------------------------------------- | ------------------------------------ |
 | `HOST_IP` | External IP address of the host machine. **Required.** | `your_external_ip_address` |
-| `HUGGINGFACEHUB_API_TOKEN` | Your Hugging Face Hub token for model access. **Required.** | `your_huggingface_token` |
+| `HF_TOKEN` | Your Hugging Face Hub token for model access. **Required.** | `your_huggingface_token` |
 | `LLM_MODEL_ID` | Hugging Face model ID for the CodeGen LLM (used by TGI/vLLM service). Configured within `compose.yaml` environment. | `Qwen/Qwen2.5-Coder-7B-Instruct` |
 | `EMBEDDING_MODEL_ID` | Hugging Face model ID for the embedding model (used by TEI service). Configured within `compose.yaml` environment. | `BAAI/bge-base-en-v1.5` |
 | `LLM_ENDPOINT` | Internal URL for the LLM serving endpoint (used by `llm-codegen-vllm-server`). Configured in `compose.yaml`. | http://codegen-vllm | tgi-server:9000/v1/chat/completions` |

@@ -224,7 +224,7 @@ Use the `Neural Copilot` extension configured with the CodeGen backend URL
 - Ensure host drivers and Habana Docker runtime are installed and working (`habana-container-runtime`).
 - Verify `runtime: habana` and volume mounts in `compose.yaml`.
 - Gaudi initialization can take significant time and memory. Monitor resource usage.
-- **Model Download Issues:** Check `HUGGINGFACEHUB_API_TOKEN`, internet access, proxy settings. Check LLM service logs.
+- **Model Download Issues:** Check `HF_TOKEN`, internet access, proxy settings. Check LLM service logs.
 - **Connection Errors:** Verify `HOST_IP`, ports, and proxy settings. Use `docker ps` and check service logs.
 
 ## Stopping the Application
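
For the Gaudi profiles, a small pre-flight sketch combining the rename with the runtime checks in the troubleshooting list above (the `grep` against `docker info` is an assumption about how the Habana runtime registers itself):

```bash
# Confirm the habana container runtime is registered with Docker,
# then launch the default Gaudi vLLM profile with the renamed token.
docker info 2>/dev/null | grep -i habana
export HF_TOKEN="your_huggingface_token"
docker compose --profile codegen-gaudi-vllm up -d
```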

CodeGen/docker_compose/intel/hpu/gaudi/compose.yaml
Lines changed: 6 additions & 6 deletions

@@ -17,7 +17,7 @@ services:
       https_proxy: ${https_proxy}
       HABANA_VISIBLE_DEVICES: all
       OMPI_MCA_btl_vader_single_copy_mechanism: none
-      HUGGING_FACE_HUB_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HUGGING_FACE_HUB_TOKEN: ${HF_TOKEN}
       ENABLE_HPU_GRAPH: true
       LIMIT_HPU_GRAPH: true
       USE_FLASH_ATTENTION: true

@@ -46,7 +46,7 @@ services:
       no_proxy: ${no_proxy}
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
-      HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HF_TOKEN: ${HF_TOKEN}
       HABANA_VISIBLE_DEVICES: all
       OMPI_MCA_btl_vader_single_copy_mechanism: none
       VLLM_SKIP_WARMUP: ${VLLM_SKIP_WARMUP:-false}

@@ -71,7 +71,7 @@ services:
       https_proxy: ${https_proxy}
       LLM_ENDPOINT: ${LLM_ENDPOINT}
       LLM_MODEL_ID: ${LLM_MODEL_ID}
-      HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HUGGINGFACEHUB_API_TOKEN: ${HF_TOKEN}
     restart: unless-stopped
   llm-tgi-service:
     extends: llm-base

@@ -156,7 +156,7 @@ services:
       REDIS_URL: ${REDIS_URL}
       REDIS_HOST: ${host_ip}
       INDEX_NAME: ${INDEX_NAME}
-      HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HUGGINGFACEHUB_API_TOKEN: ${HF_TOKEN}
       LOGFLAG: true
     healthcheck:
       test: ["CMD-SHELL", "curl -f http://localhost:5000/v1/health_check || exit 1"]

@@ -178,7 +178,7 @@ services:
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
       host_ip: ${host_ip}
-      HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HF_TOKEN: ${HF_TOKEN}
     healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost:80/health"]
       interval: 10s

@@ -218,7 +218,7 @@ services:
       REDIS_RETRIEVER_PORT: ${REDIS_RETRIEVER_PORT}
       INDEX_NAME: ${INDEX_NAME}
       TEI_EMBEDDING_ENDPOINT: ${TEI_EMBEDDING_ENDPOINT}
-      HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
+      HUGGINGFACEHUB_API_TOKEN: ${HF_TOKEN}
       LOGFLAG: ${LOGFLAG}
       RETRIEVER_COMPONENT_NAME: ${RETRIEVER_COMPONENT_NAME:-OPEA_RETRIEVER_REDIS}
     restart: unless-stopped

CodeGen/docker_compose/intel/set_env.sh
Lines changed: 3 additions & 3 deletions

@@ -7,9 +7,9 @@ source .set_env.sh
 popd > /dev/null
 
 export HOST_IP=$(hostname -I | awk '{print $1}')
-export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
-if [ -z "${HUGGINGFACEHUB_API_TOKEN}" ]; then
-  echo "Error: HUGGINGFACEHUB_API_TOKEN is not set. Please set HUGGINGFACEHUB_API_TOKEN"
+export HF_TOKEN=${HF_TOKEN}
+if [ -z "${HF_TOKEN}" ]; then
+  echo "Error: HF_TOKEN is not set. Please set HF_TOKEN"
 fi
 
 if [ -z "${HOST_IP}" ]; then
