Commit ad93ec1

Merge branch 'main' into codegen/rag_agents
2 parents: 127b235 + d4952d1

File tree

19 files changed: +113 −107 lines


.github/workflows/_run-docker-compose.yml

Lines changed: 6 additions & 0 deletions

````diff
@@ -141,6 +141,12 @@ jobs:
           bash ${{ github.workspace }}/.github/workflows/scripts/docker_compose_clean_up.sh "ports"
           docker ps
+      - name: Log in DockerHub
+        uses: docker/[email protected]
+        with:
+          username: ${{ secrets.DOCKERHUB_USER }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+
       - name: Run test
         shell: bash
         env:
````
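For debugging outside CI, the same login can be exercised from a local shell. A minimal sketch, assuming `DOCKERHUB_USER` and `DOCKERHUB_TOKEN` are exported in the environment (the helper name is hypothetical):

```shell
# Sketch of the workflow's DockerHub login as a local CLI step.
# Assumes DOCKERHUB_USER and DOCKERHUB_TOKEN are exported in the shell.
dockerhub_login() {
  # --password-stdin keeps the token out of argv and shell history
  printf '%s' "$DOCKERHUB_TOKEN" | docker login --username "$DOCKERHUB_USER" --password-stdin
}
```

Logging in before the test run avoids anonymous pull-rate limits when the compose files pull images from Docker Hub.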

.github/workflows/dockerhub-description.yml

Lines changed: 9 additions & 9 deletions

````diff
@@ -517,7 +517,7 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
           repository: opea/gpt-sovits
           short-description: "The docker image exposed the OPEA GPT-SoVITS service for GenAI application use."
-          readme-filepath: GenAIComps/comps/tts/src/integrations/dependency/gpt-sovits/README.md
+          readme-filepath: GenAIComps/comps/third_parties/gpt-sovits/src/README.md
           enable-url-completion: false
 
       - name: Description for
@@ -697,7 +697,7 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
           repository: opea/lvm-llava
           short-description: "The docker image exposed the OPEA microservice running LLaVA as a large visual model (LVM) server for GenAI application use."
-          readme-filepath: GenAIComps/comps/lvms/src/integrations/dependency/llava/README.md
+          readme-filepath: GenAIComps/comps/third_parties/llava/src/README.md
           enable-url-completion: false
 
       - name: Description for
@@ -707,7 +707,7 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
           repository: opea/lvm-video-llama
           short-description: "The docker image exposed the OPEA microservice running Video-Llama as a large visual model (LVM) for GenAI application use."
-          readme-filepath: GenAIComps/comps/lvms/src/integrations/dependency/video-llama/README.md
+          readme-filepath: GenAIComps/comps/third_parties/video-llama/src/README.md
           enable-url-completion: false
 
       - name: Description for
@@ -717,7 +717,7 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
           repository: opea/lvm-predictionguard
           short-description: "The docker image exposed the OPEA microservice running predictionguard as a large visual model (LVM) server for GenAI application use."
-          readme-filepath: GenAIComps/comps/lvms/src/integrations/dependency/predictionguard/README.md
+          readme-filepath: GenAIComps/comps/third_parties/predictionguard/src/README.md
           enable-url-completion: false
 
       - name: Description for
@@ -727,7 +727,7 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
           repository: opea/llava-gaudi
           short-description: "The docker image exposed the OPEA microservice running LLaVA as a large visual model (LVM) service for GenAI application use on the Gaudi2."
-          readme-filepath: GenAIComps/comps/lvms/src/integrations/dependency/llava/README.md
+          readme-filepath: GenAIComps/comps/third_parties/llava/src/README.md
           enable-url-completion: false
 
       - name: Description for
@@ -737,7 +737,7 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
           repository: opea/lvm-llama-vision
           short-description: "The docker image exposed the OPEA microservice running Llama Vision as the base large visual model service for GenAI application use."
-          readme-filepath: GenAIComps/comps/lvms/src/integrations/dependency/llama-vision/README.md
+          readme-filepath: GenAIComps/comps/third_parties/llama-vision/src/README.md
           enable-url-completion: false
 
       - name: Description for
@@ -747,7 +747,7 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
           repository: opea/lvm-llama-vision-tp
           short-description: "The docker image exposed the OPEA microservice running Llama Vision with deepspeed as the base large visual model service for GenAI application use."
-          readme-filepath: GenAIComps/comps/lvms/src/integrations/dependency/llama-vision/README.md
+          readme-filepath: GenAIComps/comps/third_parties/llama-vision/src/README.md
           enable-url-completion: false
 
       - name: Description for lvm-llama-vision-guard
@@ -757,7 +757,7 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
           repository: opea/lvm-llama-vision-guard
           short-description: "The docker image exposed the OPEA microservice running Llama Vision Guard as the base large visual model service for GenAI application use."
-          readme-filepath: GenAIComps/comps/lvms/src/integrations/dependency/llama-vision/README.md
+          readme-filepath: GenAIComps/comps/third_parties/llama-vision/src/README.md
           enable-url-completion: false
 
       - name: Description for promptregistry-mongo
@@ -857,7 +857,7 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
           repository: opea/gpt-sovits
           short-description: "The docker image exposed the OPEA gpt-sovits service for GenAI application use."
-          readme-filepath: GenAIComps/comps/tts/src/integrations/dependency/gpt-sovits/README.md
+          readme-filepath: GenAIComps/comps/third_parties/gpt-sovits/src/README.md
           enable-url-completion: false
 
       - name: Description for nginx
````
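Every `readme-filepath` change above follows one mechanical pattern: `comps/<category>/src/integrations/dependency/<name>/…` becomes `comps/third_parties/<name>/src/…`. A hypothetical sed helper sketching that rewrite (the character classes in the pattern are assumptions about valid component names):

```shell
# Rewrite an old integrations/dependency path to the new third_parties layout.
migrate_path() {
  printf '%s\n' "$1" |
    sed -E 's#comps/[a-z]+/src/integrations/dependency/([A-Za-z0-9_-]+)/#comps/third_parties/\1/src/#'
}

migrate_path "GenAIComps/comps/tts/src/integrations/dependency/gpt-sovits/README.md"
# → GenAIComps/comps/third_parties/gpt-sovits/src/README.md
```

The same transformation applies to the Dockerfile paths in the README and build.yaml changes below.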

AudioQnA/docker_compose/intel/cpu/xeon/README.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -18,7 +18,7 @@ cd GenAIComps
 ### 2. Build ASR Image
 
 ```bash
-docker build -t opea/whisper:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/asr/src/integrations/dependency/whisper/Dockerfile .
+docker build -t opea/whisper:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/whisper/src/Dockerfile .
 ```
 
 ### 3. Build vLLM Image
@@ -34,10 +34,10 @@ docker build --no-cache --build-arg https_proxy=$https_proxy --build-arg http_pr
 ### 4. Build TTS Image
 
 ```bash
-docker build -t opea/speecht5:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/tts/src/integrations/dependency/speecht5/Dockerfile .
+docker build -t opea/speecht5:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/speecht5/src/Dockerfile .
 
 # multilang tts (optional)
-docker build -t opea/gpt-sovits:latest --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy -f comps/tts/src/integrations/dependency/gpt-sovits/Dockerfile .
+docker build -t opea/gpt-sovits:latest --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy -f comps/third_parties/gpt-sovits/src/Dockerfile .
 ```
 
 ### 5. Build MegaService Docker Image
@@ -177,7 +177,7 @@ to the response, decode the base64 string and save it as a .wav file.
 
 ```bash
 # if you are using speecht5 as the tts service, voice can be "default" or "male"
-# if you are using gpt-sovits for the tts service, you can set the reference audio following https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/integrations/dependency/gpt-sovits/README.md
+# if you are using gpt-sovits for the tts service, you can set the reference audio following https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/gpt-sovits/src/README.md
 curl http://${host_ip}:3008/v1/audioqna \
   -X POST \
   -d '{"audio": "UklGRigAAABXQVZFZm10IBIAAAABAAEARKwAAIhYAQACABAAAABkYXRhAgAAAAEA", "max_tokens":64, "voice":"default"}' \
````
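As the README context notes, the `/v1/audioqna` response carries the synthesized speech as base64, which must be decoded into a `.wav` file. A sketch using the sample audio string from the curl example (the output filename is arbitrary):

```shell
# Decode a base64 audio string into a .wav file; a well-formed
# RIFF/WAV container starts with the four bytes "RIFF".
b64='UklGRigAAABXQVZFZm10IBIAAAABAAEARKwAAIhYAQACABAAAABkYXRhAgAAAAEA'
printf '%s' "$b64" | base64 -d > output.wav
head -c 4 output.wav
```

In practice the base64 field would be extracted from the JSON response (e.g. with `jq`) before decoding.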

AudioQnA/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -18,7 +18,7 @@ cd GenAIComps
 ### 2. Build ASR Image
 
 ```bash
-docker build -t opea/whisper-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/asr/src/integrations/dependency/whisper/Dockerfile.intel_hpu .
+docker build -t opea/whisper-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/whisper/src/Dockerfile.intel_hpu .
 ```
 
 ### 3. Build vLLM Image
@@ -32,7 +32,7 @@ docker build --no-cache --build-arg https_proxy=$https_proxy --build-arg http_pr
 ### 4. Build TTS Image
 
 ```bash
-docker build -t opea/speecht5-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/tts/src/integrations/dependency/speecht5/Dockerfile.intel_hpu .
+docker build -t opea/speecht5-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/speecht5/src/Dockerfile.intel_hpu .
 ```
 
 ### 5. Build MegaService Docker Image
````
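The build commands in these READMEs pass `http_proxy`/`https_proxy` unconditionally, which produces empty build args on proxy-less hosts. A small sketch that assembles the flags only when the variables are set (the `build_cmd` helper is hypothetical; it prints the command rather than running it):

```shell
# Compose a docker build command line, including proxy build args only
# when the corresponding environment variables are non-empty.
build_cmd() {
  proxy_args=""
  [ -z "${http_proxy:-}" ]  || proxy_args="$proxy_args --build-arg http_proxy=$http_proxy"
  [ -z "${https_proxy:-}" ] || proxy_args="$proxy_args --build-arg https_proxy=$https_proxy"
  printf '%s\n' "docker build -t $1$proxy_args -f $2 ."
}

build_cmd opea/whisper-gaudi:latest comps/third_parties/whisper/src/Dockerfile.intel_hpu
```

Piping the printed command to `sh` (or wrapping it in `eval`) would execute the build.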

AudioQnA/docker_image_build/build.yaml

Lines changed: 5 additions & 5 deletions

````diff
@@ -26,13 +26,13 @@ services:
   whisper-gaudi:
     build:
       context: GenAIComps
-      dockerfile: comps/asr/src/integrations/dependency/whisper/Dockerfile.intel_hpu
+      dockerfile: comps/third_parties/whisper/src/Dockerfile.intel_hpu
     extends: audioqna
     image: ${REGISTRY:-opea}/whisper-gaudi:${TAG:-latest}
   whisper:
     build:
       context: GenAIComps
-      dockerfile: comps/asr/src/integrations/dependency/whisper/Dockerfile
+      dockerfile: comps/third_parties/whisper/src/Dockerfile
     extends: audioqna
     image: ${REGISTRY:-opea}/whisper:${TAG:-latest}
   asr:
@@ -50,13 +50,13 @@ services:
   speecht5-gaudi:
     build:
       context: GenAIComps
-      dockerfile: comps/tts/src/integrations/dependency/speecht5/Dockerfile.intel_hpu
+      dockerfile: comps/third_parties/speecht5/src/Dockerfile.intel_hpu
     extends: audioqna
     image: ${REGISTRY:-opea}/speecht5-gaudi:${TAG:-latest}
   speecht5:
     build:
       context: GenAIComps
-      dockerfile: comps/tts/src/integrations/dependency/speecht5/Dockerfile
+      dockerfile: comps/third_parties/speecht5/src/Dockerfile
     extends: audioqna
     image: ${REGISTRY:-opea}/speecht5:${TAG:-latest}
   tts:
@@ -68,7 +68,7 @@ services:
   gpt-sovits:
     build:
       context: GenAIComps
-      dockerfile: comps/tts/src/integrations/dependency/gpt-sovits/Dockerfile
+      dockerfile: comps/third_parties/gpt-sovits/src/Dockerfile
     extends: audioqna
     image: ${REGISTRY:-opea}/gpt-sovits:${TAG:-latest}
   vllm:
````
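After a build file like this is migrated, it is easy to check that no `dockerfile:` entry still points at the removed layout. A hypothetical audit sketch:

```shell
# Count dockerfile entries in a compose build file that still reference
# the old integrations/dependency layout; 0 means fully migrated.
stale_paths() {
  grep -c 'integrations/dependency' "$1" || true
}

# Example against a minimal fragment of the migrated build.yaml:
printf 'dockerfile: comps/third_parties/whisper/src/Dockerfile\n' > build_check.yaml
stale_paths build_check.yaml   # prints 0 once every path is migrated
```

Running the same check across the repo (`grep -rn 'integrations/dependency' .`) would surface any file this commit missed.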

AvatarChatbot/docker_compose/amd/gpu/rocm/README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -14,7 +14,7 @@ cd GenAIComps
 ### 2. Build ASR Image
 
 ```bash
-docker build -t opea/whisper:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/asr/src/integrations/dependency/whisper/Dockerfile .
+docker build -t opea/whisper:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/whisper/src/Dockerfile .
 
 
 docker build -t opea/asr:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/asr/src/Dockerfile .
@@ -29,7 +29,7 @@ docker build --no-cache -t opea/llm-textgen:latest --build-arg https_proxy=$http
 ### 4. Build TTS Image
 
 ```bash
-docker build -t opea/speecht5:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/tts/src/integrations/dependency/speecht5/Dockerfile .
+docker build -t opea/speecht5:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/speecht5/src/Dockerfile .
 
 docker build -t opea/tts:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/tts/src/Dockerfile .
 ```
````

AvatarChatbot/docker_compose/intel/cpu/xeon/README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -14,7 +14,7 @@ cd GenAIComps
 ### 2. Build ASR Image
 
 ```bash
-docker build -t opea/whisper:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/asr/src/integrations/dependency/whisper/Dockerfile .
+docker build -t opea/whisper:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/whisper/src/Dockerfile .
 ```
 
 ### 3. Build LLM Image
@@ -24,7 +24,7 @@ Intel Xeon optimized image hosted in huggingface repo will be used for TGI servi
 ### 4. Build TTS Image
 
 ```bash
-docker build -t opea/speecht5:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/tts/src/integrations/dependency/speecht5/Dockerfile .
+docker build -t opea/speecht5:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/speecht5/src/Dockerfile .
 ```
 
 ### 5. Build Animation Image
````

AvatarChatbot/docker_compose/intel/hpu/gaudi/README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -14,7 +14,7 @@ cd GenAIComps
 ### 2. Build ASR Image
 
 ```bash
-docker build -t opea/whisper-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/asr/src/integrations/dependency/whisper/Dockerfile.intel_hpu .
+docker build -t opea/whisper-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/whisper/src/Dockerfile.intel_hpu .
 ```
 
 ### 3. Build LLM Image
@@ -24,7 +24,7 @@ Intel Gaudi optimized image hosted in huggingface repo will be used for TGI serv
 ### 4. Build TTS Image
 
 ```bash
-docker build -t opea/speecht5-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/tts/src/integrations/dependency/speecht5/Dockerfile.intel_hpu .
+docker build -t opea/speecht5-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/speecht5/src/Dockerfile.intel_hpu .
 ```
 
 ### 5. Build Animation Image
````

AvatarChatbot/docker_image_build/build.yaml

Lines changed: 4 additions & 4 deletions

````diff
@@ -14,13 +14,13 @@ services:
   whisper-gaudi:
     build:
       context: GenAIComps
-      dockerfile: comps/asr/src/integrations/dependency/whisper/Dockerfile.intel_hpu
+      dockerfile: comps/third_parties/whisper/src/Dockerfile.intel_hpu
     extends: avatarchatbot
     image: ${REGISTRY:-opea}/whisper-gaudi:${TAG:-latest}
   whisper:
     build:
       context: GenAIComps
-      dockerfile: comps/asr/src/integrations/dependency/whisper/Dockerfile
+      dockerfile: comps/third_parties/whisper/src/Dockerfile
     extends: avatarchatbot
     image: ${REGISTRY:-opea}/whisper:${TAG:-latest}
   asr:
@@ -38,13 +38,13 @@ services:
   speecht5-gaudi:
     build:
       context: GenAIComps
-      dockerfile: comps/tts/src/integrations/dependency/speecht5/Dockerfile.intel_hpu
+      dockerfile: comps/third_parties/speecht5/src/Dockerfile.intel_hpu
     extends: avatarchatbot
     image: ${REGISTRY:-opea}/speecht5-gaudi:${TAG:-latest}
   speecht5:
     build:
       context: GenAIComps
-      dockerfile: comps/tts/src/integrations/dependency/speecht5/Dockerfile
+      dockerfile: comps/third_parties/speecht5/src/Dockerfile
     extends: avatarchatbot
     image: ${REGISTRY:-opea}/speecht5:${TAG:-latest}
   tts:
````

DocSum/docker_compose/intel/cpu/xeon/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -28,7 +28,7 @@ cd GenAIComps
 The Whisper Service converts audio files to text. Follow these steps to build and run the service:
 
 ```bash
-docker build -t opea/whisper:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/asr/src/integrations/dependency/whisper/Dockerfile .
+docker build -t opea/whisper:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/whisper/src/Dockerfile .
 ```
 
 ### 2. Build MegaService Docker Image
````
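The Whisper service built above consumes audio; requests in these pipelines wrap the file as base64 inside a JSON body. A sketch that builds such a payload (the `"audio"` field name mirrors the AudioQnA curl example; treat the exact request schema and route as assumptions and confirm them in the README at the new path):

```shell
# Build a JSON body embedding a local audio file as base64 (GNU base64;
# -w0 disables line wrapping so the string stays on one line).
payload() {
  printf '{"audio": "%s"}' "$(base64 -w0 "$1")"
}

printf 'RIFF' > sample.wav   # stand-in bytes, not a real recording
payload sample.wav
# → {"audio": "UklGRg=="}
```

The resulting string can be passed directly to `curl -d` when invoking the service.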
