
Commit f391695

Merge branch 'main' into agents/finance_compose

2 parents: f8dbef7 + 162f5a8
File tree: 9 files changed (+23 −20 lines)

CodeGen/docker_compose/intel/cpu/xeon/README.md

Lines changed: 3 additions & 2 deletions

@@ -28,7 +28,7 @@ This guide focuses on running the pre-configured CodeGen service using Docker Co
 - Clone the `GenAIExamples` repository:
 ```bash
 git clone https://github.com/opea-project/GenAIExamples.git
-cd GenAIExamples/CodeGen/docker_compose/intel/cpu/xeon
+cd GenAIExamples/CodeGen/docker_compose
 ```

 ## Quick Start Deployment
@@ -48,7 +48,8 @@ This uses the default vLLM-based deployment profile (`codegen-xeon-vllm`).
 # export http_proxy="your_http_proxy"
 # export https_proxy="your_https_proxy"
 # export no_proxy="localhost,127.0.0.1,${HOST_IP}" # Add other hosts if necessary
-source ../../set_env.sh
+source intel/set_env.sh
+cd /intel/cpu/xeon
 ```

 _Note: The compose file might read additional variables from set_env.sh. Ensure all required variables like ports (`LLM_SERVICE_PORT`, `MEGA_SERVICE_PORT`, etc.) are set if not using defaults from the compose file._
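The note in the diff above warns that set_env.sh may not cover every variable the compose file reads. As an illustrative sketch only (this helper is hypothetical and not part of set_env.sh or any of the repository's scripts), a POSIX-shell pre-flight check for required variables could look like:

```shell
#!/bin/sh
# check_env: print "ok" if every named environment variable is set and
# non-empty; otherwise list the missing names and return non-zero.
# Hypothetical helper for illustration -- not part of the OPEA repositories.
check_env() {
  missing=""
  for name in "$@"; do
    # Indirect expansion: read the value of the variable whose name is $name.
    eval "value=\${$name:-}"
    if [ -z "$value" ]; then
      missing="$missing $name"
    fi
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}
```

Running something like `check_env LLM_SERVICE_PORT MEGA_SERVICE_PORT` after sourcing set_env.sh and before `docker compose up` would fail fast on missing configuration instead of surfacing it later as a container startup error.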

CodeGen/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 3 additions & 2 deletions

@@ -28,7 +28,7 @@ This guide focuses on running the pre-configured CodeGen service using Docker Co
 - Clone the `GenAIExamples` repository:
 ```bash
 git clone https://github.com/opea-project/GenAIExamples.git
-cd GenAIExamples/CodeGen/docker_compose/intel/hpu/gaudi
+cd GenAIExamples/CodeGen/docker_compose
 ```

 ## Quick Start Deployment
@@ -48,7 +48,8 @@ This uses the default vLLM-based deployment profile (`codegen-gaudi-vllm`).
 # export http_proxy="your_http_proxy"
 # export https_proxy="your_https_proxy"
 # export no_proxy="localhost,127.0.0.1,${HOST_IP}" # Add other hosts if necessary
-source ../../set_env.sh
+source intel/set_env.sh
+cd /intel/hpu/gaudi
 ```

 _Note: The compose file might read additional variables from set_env.sh. Ensure all required variables like ports (`LLM_SERVICE_PORT`, `MEGA_SERVICE_PORT`, etc.) are set if not using defaults from the compose file._

CodeTrans/docker_compose/intel/cpu/xeon/README.md

Lines changed: 3 additions & 2 deletions

@@ -46,7 +46,8 @@ export http_proxy="Your_HTTP_Proxy" # http proxy if any
 export https_proxy="Your_HTTPs_Proxy" # https proxy if any
 export no_proxy=localhost,127.0.0.1,$host_ip # additional no proxies if needed
 export NGINX_PORT=${your_nginx_port} # your usable port for nginx, 80 for example
-source docker_compose/intel/set_env.sh
+cd docker_compose/intel/
+source set_env.sh
 ```

 Consult the section on [CodeTrans Service configuration](#codetrans-configuration) for information on how service specific configuration parameters affect deployments.
@@ -56,7 +57,7 @@ Consult the section on [CodeTrans Service configuratio
 To deploy the CodeTrans services, execute the `docker compose up` command with the appropriate arguments. For a default deployment, execute the command below. It uses the 'compose.yaml' file.

 ```bash
-cd docker_compose/intel/cpu/xeon
+cd cpu/xeon
 docker compose -f compose.yaml up -d
 ```

CodeTrans/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 3 additions & 2 deletions

@@ -46,7 +46,8 @@ export http_proxy="Your_HTTP_Proxy" # http proxy if any
 export https_proxy="Your_HTTPs_Proxy" # https proxy if any
 export no_proxy=localhost,127.0.0.1,$host_ip # additional no proxies if needed
 export NGINX_PORT=${your_nginx_port} # your usable port for nginx, 80 for example
-source docker_compose/intel/set_env.sh
+cd docker_compose/intel
+source set_env.sh
 ```

 Consult the section on [CodeTrans Service configuration](#codetrans-configuration) for information on how service specific configuration parameters affect deployments.
@@ -56,7 +57,7 @@ Consult the section on [CodeTrans Service configuratio
 To deploy the CodeTrans services, execute the `docker compose up` command with the appropriate arguments. For a default deployment, execute the command below. It uses the 'compose.yaml' file.

 ```bash
-cd docker_compose/intel/hpu/gaudi
+cd hpu/gaudi
 docker compose -f compose.yaml up -d
 ```

DBQnA/docker_compose/intel/cpu/xeon/README.md

Lines changed: 1 addition & 0 deletions

@@ -73,6 +73,7 @@ or
 edit the file set_env.sh to set those environment variables,

 ```bash
+cd GenAIExamples/DBQnA/docker_compose/intel/cpu/xeon/
 source set_env.sh
 ```

DocSum/docker_compose/intel/cpu/xeon/README.md

Lines changed: 3 additions & 4 deletions

@@ -27,9 +27,8 @@ Clone the GenAIExample repository and access the ChatQnA Intel Xeon platform Doc

 ```bash
 git clone https://github.com/opea-project/GenAIExamples.git
-cd GenAIExamples/DocSum/docker_compose/intel
-source set_env.sh
-cd cpu/xeon/
+cd GenAIExamples/DocSum/docker_compose
+source intel/set_env.sh
 ```

 NOTE: by default vLLM does "warmup" at start, to optimize its performance for the specified model and the underlying platform, which can take long time. For development (and e.g. autoscaling) it can be skipped with `export VLLM_SKIP_WARMUP=true`.
@@ -49,7 +48,7 @@ Some HuggingFace resources, such as some models, are only accessible if you have
 To deploy the DocSum services, execute the `docker compose up` command with the appropriate arguments. For a default deployment, execute:

 ```bash
-cd cpu/xeon/
+cd intel/cpu/xeon/
 docker compose up -d
 ```

DocSum/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 3 additions & 4 deletions

@@ -29,9 +29,8 @@ Clone the GenAIExample repository and access the DocSum Intel® Gaudi® platform

 ```bash
 git clone https://github.com/opea-project/GenAIExamples.git
-cd GenAIExamples/DocSum/docker_compose/intel
-source set_env.sh
-cd hpu/gaudi/
+cd GenAIExamples/DocSum/docker_compose
+source intel/set_env.sh
 ```

 NOTE: by default vLLM does "warmup" at start, to optimize its performance for the specified model and the underlying platform, which can take long time. For development (and e.g. autoscaling) it can be skipped with `export VLLM_SKIP_WARMUP=true`.
@@ -51,7 +50,7 @@ Some HuggingFace resources, such as some models, are only accessible if you have
 To deploy the DocSum services, execute the `docker compose up` command with the appropriate arguments. For a default deployment, execute:

 ```bash
-cd hpu/gaudi/
+cd intel/hpu/gaudi/
 docker compose up -d
 ```

SearchQnA/docker_compose/intel/cpu/xeon/README.md

Lines changed: 2 additions & 2 deletions

@@ -26,7 +26,7 @@ Clone the GenAIExample repository and access the SearchQnA Intel® Xeon® platfo

 ```bash
 git clone https://github.com/opea-project/GenAIExamples.git
-cd GenAIExamples/SearchQnA
+cd GenAIExamples/SearchQnA/docker_compose/intel
 ```

 Then checkout a released version, such as v1.3:
@@ -58,7 +58,7 @@ Consult the section on [SearchQnA Service configuratio
 To deploy the SearchQnA services, execute the `docker compose up` command with the appropriate arguments. For a default deployment, execute the command below. It uses the 'compose.yaml' file.

 ```bash
-cd docker_compose/intel/cpu/xeon
+cd cpu/xeon
 docker compose -f compose.yaml up -d
 ```

SearchQnA/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 2 additions & 2 deletions

@@ -26,7 +26,7 @@ Clone the GenAIExample repository and access the searchqna Intel® Gaudi® platf

 ```bash
 git clone https://github.com/opea-project/GenAIExamples.git
-cd GenAIExamples/SearchQnA
+cd GenAIExamples/SearchQnA/docker_compose/intel
 ```

 Then checkout a released version, such as v1.3:
@@ -58,7 +58,7 @@ Consult the section on [SearchQnA Service configuratio
 To deploy the SearchQnA services, execute the `docker compose up` command with the appropriate arguments. For a default deployment, execute the command below. It uses the 'compose.yaml' file.

 ```bash
-cd docker_compose/intel/hpu/gaudi
+cd hpu/gaudi
 docker compose -f compose.yaml up -d
 ```
