
Commit 81b02bb

Revert "HUGGINGFACEHUB_API_TOKEN environment is change to HF_TOKEN (#… (opea-project#1521)
Revert this PR since its test was not triggered properly: the accidental merge of a WIP CI PR, opea-project@44a689b, blocked the CI test. The change will be resubmitted in another PR.
1 parent 47069ac commit 81b02bb
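
This commit makes `HUGGINGFACEHUB_API_TOKEN` the canonical variable name again. As a rough sketch only (not part of this commit, and the helper name `resolve_hf_token` is hypothetical), a shell snippet that keeps both names populated during such a rename transition might look like:

```shell
# Hypothetical helper (not from this commit): return whichever of the two
# token variables is set, preferring the canonical HUGGINGFACEHUB_API_TOKEN.
resolve_hf_token() {
  if [ -n "${HUGGINGFACEHUB_API_TOKEN:-}" ]; then
    printf '%s' "$HUGGINGFACEHUB_API_TOKEN"
  else
    printf '%s' "${HF_TOKEN:-}"
  fi
}

# Keep both names populated so scripts written against either keep working.
HF_TOKEN="$(resolve_hf_token)"
HUGGINGFACEHUB_API_TOKEN="$(resolve_hf_token)"
export HF_TOKEN HUGGINGFACEHUB_API_TOKEN
```

A shim like this can ease the migration window, but the commit itself simply reverts to the single `HUGGINGFACEHUB_API_TOKEN` name.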

File tree

69 files changed: +113 additions, −263 deletions (large commit; only some of the changed files are shown below)

AudioQnA/docker_compose/intel/cpu/xeon/README.md
Lines changed: 1 addition & 1 deletion

````diff
@@ -49,7 +49,7 @@ Before starting the services with `docker compose`, you have to recheck the foll
 
 ```bash
 export host_ip=<your External Public IP> # export host_ip=$(hostname -I | awk '{print $1}')
-export HF_TOKEN=<your HF token>
+export HUGGINGFACEHUB_API_TOKEN=<your HF token>
 
 export LLM_MODEL_ID=Intel/neural-chat-7b-v3-3
 
````

AudioQnA/docker_compose/intel/cpu/xeon/set_env.sh
Lines changed: 1 addition & 5 deletions

```diff
@@ -5,11 +5,7 @@
 
 # export host_ip=<your External Public IP>
 export host_ip=$(hostname -I | awk '{print $1}')
-
-if [ -z "$HF_TOKEN" ]; then
-    echo "Error: The HF_TOKEN environment variable is **NOT** set. Please set it"
-    return -1
-fi
+export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
 # <token>
 
 export LLM_MODEL_ID=Intel/neural-chat-7b-v3-3
```
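
The revert drops the early-exit token check from `set_env.sh`. If a similar guard is still wanted locally, a hedged sketch (names assumed, not part of this commit) that accepts either variable name could be:

```shell
# Sketch only (not from this commit): fail early if neither token
# variable is set, mirroring the guard this revert removes.
require_hf_token() {
  if [ -z "${HUGGINGFACEHUB_API_TOKEN:-}" ] && [ -z "${HF_TOKEN:-}" ]; then
    echo "Error: set HUGGINGFACEHUB_API_TOKEN (or HF_TOKEN) before sourcing this file" >&2
    return 1
  fi
}
```

Using `return` rather than `exit` keeps the guard safe to run from a sourced file, which is how `set_env.sh` is typically consumed.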

AudioQnA/docker_compose/intel/hpu/gaudi/README.md
Lines changed: 1 addition & 1 deletion

````diff
@@ -49,7 +49,7 @@ Before starting the services with `docker compose`, you have to recheck the foll
 
 ```bash
 export host_ip=<your External Public IP> # export host_ip=$(hostname -I | awk '{print $1}')
-export HF_TOKEN=<your HF token>
+export HUGGINGFACEHUB_API_TOKEN=<your HF token>
 
 export LLM_MODEL_ID=Intel/neural-chat-7b-v3-3
 
````

AudioQnA/docker_compose/intel/hpu/gaudi/compose.yaml
Lines changed: 1 addition & 2 deletions

```diff
@@ -45,8 +45,7 @@ services:
       no_proxy: ${no_proxy}
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
-      HUGGING_FACE_HUB_TOKEN: ${HF_TOKEN}
-      HF_TOKEN: ${HF_TOKEN}
+      HUGGING_FACE_HUB_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
       HF_HUB_DISABLE_PROGRESS_BARS: 1
       HF_HUB_ENABLE_HF_TRANSFER: 0
       HABANA_VISIBLE_DEVICES: all
```

AudioQnA/docker_compose/intel/hpu/gaudi/set_env.sh
Lines changed: 1 addition & 7 deletions

```diff
@@ -5,13 +5,7 @@
 
 # export host_ip=<your External Public IP>
 export host_ip=$(hostname -I | awk '{print $1}')
-
-if [ -z "$HF_TOKEN" ]; then
-    echo "Error: The HF_TOKEN environment variable is **NOT** set. Please set it"
-    return -1
-fi
-
-export HF_TOKEN=${HF_TOKEN}
+export HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
 # <token>
 
 export LLM_MODEL_ID=Intel/neural-chat-7b-v3-3
```

AvatarChatbot/docker_compose/intel/cpu/xeon/README.md
Lines changed: 2 additions & 2 deletions

````diff
@@ -58,7 +58,7 @@ Then run the command `docker images`, you will have following images ready:
 Before starting the services with `docker compose`, you have to recheck the following environment variables.
 
 ```bash
-export HF_TOKEN=<your_hf_token>
+export HUGGINGFACEHUB_API_TOKEN=<your_hf_token>
 export host_ip=$(hostname -I | awk '{print $1}')
 
 export LLM_MODEL_ID=Intel/neural-chat-7b-v3-3
@@ -173,7 +173,7 @@ In the current version v1.0, you need to set the avatar figure image/video and t
 cd GenAIExamples/AvatarChatbot/tests
 export IMAGE_REPO="opea"
 export IMAGE_TAG="latest"
-export HF_TOKEN=<your_hf_token>
+export HUGGINGFACEHUB_API_TOKEN=<your_hf_token>
 
 test_avatarchatbot_on_xeon.sh
 ```
````

AvatarChatbot/docker_compose/intel/cpu/xeon/compose.yaml
Lines changed: 1 addition & 1 deletion

```diff
@@ -37,7 +37,7 @@ services:
       no_proxy: ${no_proxy}
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
-      HF_TOKEN: ${HF_TOKEN}
+      HF_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://${host_ip}:3006/health || exit 1"]
      interval: 10s
```
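
With this revert, `compose.yaml` substitutes the host-side `HUGGINGFACEHUB_API_TOKEN` into the container's `HF_TOKEN`; if the host variable is unset, `docker compose` substitutes an empty string (with only a warning). As a hedged pre-flight sketch (the helper name `check_compose_env` is hypothetical, not part of this commit):

```shell
# Sketch: verify a host-side variable that docker compose will substitute
# is non-empty before starting services, instead of passing an empty
# value into the container.
check_compose_env() {
  var_name="$1"
  eval "val=\${$var_name:-}"
  if [ -z "$val" ]; then
    echo "warning: $var_name is unset or empty" >&2
    return 1
  fi
}
```

Example usage: `check_compose_env HUGGINGFACEHUB_API_TOKEN && docker compose up -d`.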

AvatarChatbot/docker_compose/intel/hpu/gaudi/README.md
Lines changed: 2 additions & 2 deletions

````diff
@@ -58,7 +58,7 @@ Then run the command `docker images`, you will have following images ready:
 Before starting the services with `docker compose`, you have to recheck the following environment variables.
 
 ```bash
-export HF_TOKEN=<your_hf_token>
+export HUGGINGFACEHUB_API_TOKEN=<your_hf_token>
 export host_ip=$(hostname -I | awk '{print $1}')
 
 export LLM_MODEL_ID=Intel/neural-chat-7b-v3-3
@@ -183,7 +183,7 @@ In the current version v1.0, you need to set the avatar figure image/video and t
 cd GenAIExamples/AvatarChatbot/tests
 export IMAGE_REPO="opea"
 export IMAGE_TAG="latest"
-export HF_TOKEN=<your_hf_token>
+export HUGGINGFACEHUB_API_TOKEN=<your_hf_token>
 
 test_avatarchatbot_on_gaudi.sh
 ```
````

AvatarChatbot/docker_compose/intel/hpu/gaudi/compose.yaml
Lines changed: 2 additions & 3 deletions

```diff
@@ -38,7 +38,7 @@ services:
       - SYS_NICE
     restart: unless-stopped
   tgi-service:
-    image: ghcr.io/huggingface/tgi-gaudi:2.3.1
+    image: ghcr.io/huggingface/tgi-gaudi:2.0.6
     container_name: tgi-gaudi-server
     ports:
       - "3006:80"
@@ -48,8 +48,7 @@ services:
       no_proxy: ${no_proxy}
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
-      HUGGING_FACE_HUB_TOKEN: ${HF_TOKEN}
-      HF_TOKEN: ${HF_TOKEN}
+      HUGGING_FACE_HUB_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
      HF_HUB_DISABLE_PROGRESS_BARS: 1
      HF_HUB_ENABLE_HF_TRANSFER: 0
      HABANA_VISIBLE_DEVICES: all
```

ChatQnA/docker_compose/intel/cpu/aipc/README.md
Lines changed: 2 additions & 2 deletions

````diff
@@ -105,7 +105,7 @@ export https_proxy=${your_http_proxy}
 export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
 export RERANK_MODEL_ID="BAAI/bge-reranker-base"
 export INDEX_NAME="rag-redis"
-export HF_TOKEN=${your_hf_api_token}
+export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
 export OLLAMA_HOST=${host_ip}
 export OLLAMA_MODEL="llama3.2"
 ```
@@ -116,7 +116,7 @@ export OLLAMA_MODEL="llama3.2"
 set EMBEDDING_MODEL_ID=BAAI/bge-base-en-v1.5
 set RERANK_MODEL_ID=BAAI/bge-reranker-base
 set INDEX_NAME=rag-redis
-set HF_TOKEN=%your_hf_api_token%
+set HUGGINGFACEHUB_API_TOKEN=%your_hf_api_token%
 set OLLAMA_HOST=host.docker.internal
 set OLLAMA_MODEL="llama3.2"
 ```
````
