|`HOST_IP`| External IP address of the host machine. **Required.**|`your_external_ip_address`|
-|`HUGGINGFACEHUB_API_TOKEN`| Your Hugging Face Hub token for model access. **Required.**|`your_huggingface_token`|
+|`HF_TOKEN`| Your Hugging Face Hub token for model access. **Required.**|`your_huggingface_token`|
|`LLM_MODEL_ID`| Hugging Face model ID for the CodeGen LLM (used by TGI/vLLM service). Configured within `compose.yaml` environment. |`Qwen/Qwen2.5-Coder-7B-Instruct`|
|`EMBEDDING_MODEL_ID`| Hugging Face model ID for the embedding model (used by TEI service). Configured within `compose.yaml` environment. |`BAAI/bge-base-en-v1.5`|
|`LLM_ENDPOINT`| Internal URL for the LLM serving endpoint (used by `codegen-llm-server`). Configured in `compose.yaml`. |`http://codegen-tgi-server:80/generate` or `http://codegen-vllm-server:8000/v1/chat/completions`|
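As a quick sanity check, the TGI endpoint from the table can be queried directly. This is a sketch, not part of the upstream docs: the service name `codegen-tgi-server:80/generate` is taken from the table above, and the request must be run from a container on the same compose network (for host-side access, substitute the published port shown by `docker ps`):

```shell
# Smoke-test the internal TGI endpoint listed in the table above.
payload='{"inputs": "def fibonacci(n):", "parameters": {"max_new_tokens": 64}}'
curl -s -X POST "http://codegen-tgi-server:80/generate" \
  -H "Content-Type: application/json" \
  -d "$payload" \
  || echo "request failed: is the stack up and are you on the compose network?"
```

For the vLLM variant, the same check would go to `/v1/chat/completions` with an OpenAI-style `messages` payload instead of `inputs`.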
@@ -125,7 +125,7 @@ For TGI

```bash
export host_ip="External_Public_IP" # IP address of the node
export http_proxy="Your_HTTP_Proxy" # HTTP proxy, if any
export https_proxy="Your_HTTPs_Proxy" # HTTPS proxy, if any
export no_proxy=localhost,127.0.0.1,$host_ip # additional no-proxy entries if needed
```
@@ -422,7 +422,7 @@ Users can interact with the backend service using the `Neural Copilot` VS Code e

## Troubleshooting

-- **Model Download Issues:** Check `HUGGINGFACEHUB_API_TOKEN`. Ensure internet connectivity or correct proxy settings. Check logs of `tgi-service`/`vllm-service` and `tei-embedding-server`. Gated models need prior Hugging Face access.
+- **Model Download Issues:** Check `HF_TOKEN`. Ensure internet connectivity or correct proxy settings. Check logs of `tgi-service`/`vllm-service` and `tei-embedding-server`. Gated models need prior Hugging Face access.
- **Connection Errors:** Verify `HOST_IP` is correct and accessible. Check `docker ps` for port mappings. Ensure `no_proxy` includes `HOST_IP` if using a proxy. Check logs of the service failing to connect (e.g., `codegen-backend-server` logs if it can't reach `codegen-llm-server`).
- **"Container name is in use"**: Stop existing containers (`docker compose down`) or change `container_name` in `compose.yaml`.
- **Resource Issues:** CodeGen models can be memory-intensive. Monitor host RAM usage. Increase Docker resources if needed.
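The troubleshooting checks above can be combined into a small diagnostic script. A minimal sketch; the container name `codegen-tgi-server` is an assumption based on the endpoints above and may differ from the `container_name` values in your `compose.yaml`:

```shell
#!/usr/bin/env bash
# Quick diagnostics for the troubleshooting items above.

# Model download issues: the token must be set, especially for gated models.
missing=""
[ -n "${HOST_IP:-}" ] || missing="$missing HOST_IP"
[ -n "${HF_TOKEN:-}" ] || missing="$missing HF_TOKEN"
[ -n "$missing" ] && echo "WARN: unset variables:$missing"

# Connection errors: confirm containers are up and ports are mapped.
docker ps --format 'table {{.Names}}\t{{.Ports}}' 2>/dev/null \
  || echo "WARN: docker not available"

# Inspect logs of the LLM serving container (name assumed; see compose.yaml).
docker logs --tail 50 codegen-tgi-server 2>/dev/null \
  || echo "WARN: cannot read logs for codegen-tgi-server"
```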
|`HOST_IP`| External IP address of the host machine. **Required.**|`your_external_ip_address`|
-|`HUGGINGFACEHUB_API_TOKEN`| Your Hugging Face Hub token for model access. **Required.**|`your_huggingface_token`|
+|`HF_TOKEN`| Your Hugging Face Hub token for model access. **Required.**|`your_huggingface_token`|
|`LLM_MODEL_ID`| Hugging Face model ID for the CodeGen LLM (used by TGI/vLLM service). Configured within `compose.yaml` environment. |`Qwen/Qwen2.5-Coder-7B-Instruct`|
|`EMBEDDING_MODEL_ID`| Hugging Face model ID for the embedding model (used by TEI service). Configured within `compose.yaml` environment. |`BAAI/bge-base-en-v1.5`|
@@ -216,7 +216,7 @@ Users can interact with the backend service using the `Neural Copilot` VS Code e

## Troubleshooting

-- **Model Download Issues:** Check `HUGGINGFACEHUB_API_TOKEN`. Ensure internet connectivity or correct proxy settings. Check logs of `tgi-service`/`vllm-service` and `tei-embedding-server`. Gated models need prior Hugging Face access.
+- **Model Download Issues:** Check `HF_TOKEN`. Ensure internet connectivity or correct proxy settings. Check logs of `tgi-service`/`vllm-service` and `tei-embedding-server`. Gated models need prior Hugging Face access.
- **Connection Errors:** Verify `HOST_IP` is correct and accessible. Check `docker ps` for port mappings. Ensure `no_proxy` includes `HOST_IP` if using a proxy. Check logs of the service failing to connect (e.g., `codegen-backend-server` logs if it can't reach `codegen-llm-server`).
- **"Container name is in use"**: Stop existing containers (`docker compose down`) or change `container_name` in `compose.yaml`.
- **Resource Issues:** CodeGen models can be memory-intensive. Monitor host RAM usage. Increase Docker resources if needed.
|`HOST_IP`| External IP address of the host machine. **Required.**|`your_external_ip_address`|
-|`HUGGINGFACEHUB_API_TOKEN`| Your Hugging Face Hub token for model access. **Required.**|`your_huggingface_token`|
+|`HF_TOKEN`| Your Hugging Face Hub token for model access. **Required.**|`your_huggingface_token`|
|`LLM_MODEL_ID`| Hugging Face model ID for the CodeGen LLM (used by TGI/vLLM service). Configured within `compose.yaml` environment. |`Qwen/Qwen2.5-Coder-7B-Instruct`|
|`EMBEDDING_MODEL_ID`| Hugging Face model ID for the embedding model (used by TEI service). Configured within `compose.yaml` environment. |`BAAI/bge-base-en-v1.5`|