|
1 | 1 | # MultimodalQnA Application |
2 | 2 |
|
3 | | -Suppose you possess a set of videos, images, audio files, PDFs, or some combination thereof and wish to perform question-answering to extract insights from these documents. To respond to your questions, the system needs to comprehend a mix of textual, visual, and audio facts drawn from the document contents. The MultimodalQnA framework offers an optimal solution for this purpose. |
| 3 | +Multimodal question answering is the process of extracting insights from documents that contain a mix of text, images, videos, audio, and PDFs. It involves reasoning over both textual and non-textual content to answer user queries. |
4 | 4 |
|
5 | | -`MultimodalQnA` addresses your questions by dynamically fetching the most pertinent multimodal information (e.g. images, transcripts, and captions) from your collection of video, image, audio, and PDF files. For this purpose, MultimodalQnA utilizes [BridgeTower model](https://huggingface.co/BridgeTower/bridgetower-large-itm-mlm-gaudi), a multimodal encoding transformer model which merges visual and textual data into a unified semantic space. During the ingestion phase, the BridgeTower model embeds both visual cues and auditory facts as texts, and those embeddings are then stored in a vector database. When it comes to answering a question, the MultimodalQnA will fetch its most relevant multimodal content from the vector store and feed it into a downstream Large Vision-Language Model (LVM) as input context to generate a response for the user, which can be text or audio. |
| 5 | +The MultimodalQnA framework enables this by leveraging the BridgeTower model, which encodes visual and textual data into a shared semantic space. During ingestion, it processes content and stores embeddings in a vector database. At query time, relevant multimodal segments are retrieved and passed to a vision-language model to generate responses in text or audio form. |
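
For reference, once the pipeline is deployed, a text question can be sent to the MultimodalQnA gateway with a single request. This is a minimal sketch: the port (8888) and the `/v1/multimodalqna` route are assumptions based on the default compose settings, so verify them in the `compose.yaml` for your hardware.

```bash
# Hypothetical query to the MultimodalQnA gateway; port and route are assumptions,
# confirm them in the compose.yaml for your deployment.
curl -X POST "http://${host_ip}:8888/v1/multimodalqna" \
  -H "Content-Type: application/json" \
  -d '{"messages": "What is shown in the ingested video?"}'
```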
6 | 6 |
|
7 | | -The MultimodalQnA architecture shows below: |
| 7 | +## Table of Contents |
| 8 | + |
| 9 | +1. [Architecture](#architecture) |
| 10 | +2. [Deployment Options](#deployment-options) |
| 11 | +3. [Monitoring and Tracing](./README_miscellaneous.md) |
| 12 | + |
| 13 | +## Architecture |
| 14 | + |
| 15 | +The MultimodalQnA application is an end-to-end workflow designed for multimodal question answering across video, image, audio, and PDF inputs. The architecture is illustrated below: |
8 | 16 |
|
9 | 17 |  |
10 | 18 |
|
11 | | -MultimodalQnA is implemented on top of [GenAIComps](https://github.com/opea-project/GenAIComps), the MultimodalQnA Flow Chart shows below: |
| 19 | +The MultimodalQnA example is implemented using the component-level microservices defined in [GenAIComps](https://github.com/opea-project/GenAIComps). The MultimodalQnA flow chart is shown below: |
12 | 20 |
|
13 | 21 | ```mermaid |
14 | 22 | --- |
@@ -86,182 +94,9 @@ flowchart LR |
86 | 94 |
|
87 | 95 | This MultimodalQnA use case performs Multimodal-RAG using LangChain, Redis VectorDB and Text Generation Inference on [Intel Gaudi2](https://www.intel.com/content/www/us/en/products/details/processors/ai-accelerators/gaudi-overview.html) and [Intel Xeon Scalable Processors](https://www.intel.com/content/www/us/en/products/details/processors/xeon.html), and we invite contributions from other hardware vendors to expand the example. |
88 | 96 |
|
89 | | -The [Whisper Service](https://github.com/opea-project/GenAIComps/blob/main/comps/asr/src/README.md) |
90 | | -is used by MultimodalQnA for converting audio queries to text. If a spoken response is requested, the |
91 | | -[SpeechT5 Service](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/README.md) translates the text |
92 | | -response from the LVM to a speech audio file. |
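
As a rough illustration, both speech services can be exercised directly with `curl` using the ports listed in the table below. The payload fields shown here (a base64-encoded audio string for ASR and plain text for TTS) are assumptions; consult each service's README for the exact request schema.

```bash
# Sketch only: field names are assumptions, see the Whisper and SpeechT5 service READMEs.
# Transcribe a short audio clip (base64-encoded) with the Whisper service.
curl -X POST "http://${host_ip}:7066/v1/asr" \
  -H "Content-Type: application/json" \
  -d '{"byte_str": "'"$(base64 -w 0 sample.wav)"'"}'

# Synthesize a spoken response from text with the SpeechT5 service.
curl -X POST "http://${host_ip}:7055/v1/tts" \
  -H "Content-Type: application/json" \
  -d '{"text": "Hello from MultimodalQnA"}'
```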
93 | | - |
94 | | -The Intel Gaudi2 accelerator supports both training and inference for deep learning models, in particular LLMs. Visit [Habana AI products](https://habana.ai/products) for more details. |
95 | | - |
96 | | -The table below describes, for each microservice component in the MultimodalQnA architecture, the default open source project, hardware, port, and endpoint. |
97 | | - |
98 | | -<details> |
99 | | -<summary><b>Gaudi and Xeon default compose.yaml settings</b></summary> |
100 | | - |
101 | | -| MicroService | Open Source Project | HW | Port | Endpoint | |
102 | | -| ------------ | ----------------------- | ----- | ---- | ----------------------------------------------------------- | |
103 | | -| Dataprep | Redis, Langchain, TGI | Xeon | 6007 | /v1/generate_transcripts, /v1/generate_captions, /v1/ingest | |
104 | | -| Embedding | Langchain | Xeon | 6000 | /v1/embeddings | |
105 | | -| LVM | Langchain, Transformers | Xeon | 9399 | /v1/lvm | |
106 | | -| Retriever | Langchain, Redis | Xeon | 7000 | /v1/retrieval | |
107 | | -| SpeechT5 | Transformers | Xeon | 7055 | /v1/tts | |
108 | | -| Whisper | Transformers | Xeon | 7066 | /v1/asr | |
109 | | -| Dataprep | Redis, Langchain, TGI | Gaudi | 6007 | /v1/generate_transcripts, /v1/generate_captions, /v1/ingest | |
110 | | -| Embedding | Langchain | Gaudi | 6000 | /v1/embeddings | |
111 | | -| LVM | Langchain, TGI | Gaudi | 9399 | /v1/lvm | |
112 | | -| Retriever | Langchain, Redis | Gaudi | 7000 | /v1/retrieval | |
113 | | -| SpeechT5 | Transformers | Gaudi | 7055 | /v1/tts | |
114 | | -| Whisper | Transformers | Gaudi | 7066 | /v1/asr | |
115 | | - |
116 | | -</details> |
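
For example, a video could be ingested by posting it to the dataprep service on port 6007, which generates transcripts and stores the resulting embeddings. This is a hedged sketch assuming the service accepts multipart file uploads under a `files` field; the exact parameters are documented in the dataprep component's README.

```bash
# Assumed multipart upload; confirm the field name and options in the dataprep README.
curl -X POST "http://${host_ip}:6007/v1/generate_transcripts" \
  -F "files=@./sample_video.mp4"
```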
117 | | - |
118 | | -## Required Models |
119 | | - |
120 | | -The default embedding, LVM, SpeechT5, and Whisper models for each hardware type are listed below: |
121 | | - |
122 | | -| Service | HW | Model | |
123 | | -| --------- | ----- | ----------------------------------------- | |
124 | | -| embedding | Xeon | BridgeTower/bridgetower-large-itm-mlm-itc | |
125 | | -| LVM | Xeon | llava-hf/llava-1.5-7b-hf | |
126 | | -| SpeechT5 | Xeon | microsoft/speecht5_tts | |
127 | | -| Whisper | Xeon | openai/whisper-small | |
128 | | -| embedding | Gaudi | BridgeTower/bridgetower-large-itm-mlm-itc | |
129 | | -| LVM | Gaudi | llava-hf/llava-v1.6-vicuna-13b-hf | |
130 | | -| SpeechT5 | Gaudi | microsoft/speecht5_tts | |
131 | | -| Whisper | Gaudi | openai/whisper-small | |
132 | | - |
133 | | -You can choose other LVM models, such as `llava-hf/llava-1.5-7b-hf` and `llava-hf/llava-1.5-13b-hf`, as needed. |
134 | | - |
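
If a different LVM is preferred, the model ID can typically be exported before bringing up the services. The variable name below (`LVM_MODEL_ID`) is an assumption; the authoritative names are defined in the `set_env.sh` script for your hardware.

```bash
# Assumed variable name; check set_env.sh for the exact variable read by compose.yaml.
export LVM_MODEL_ID="llava-hf/llava-1.5-13b-hf"
```
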
135 | | -## Deploy MultimodalQnA Service |
136 | | - |
137 | | -The MultimodalQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors. |
138 | | - |
139 | | -Currently we support deploying MultimodalQnA services with docker compose. The [`docker_compose`](docker_compose) |
140 | | -directory has folders which include `compose.yaml` files for different hardware types: |
141 | | - |
142 | | -``` |
143 | | -📂 docker_compose |
144 | | -├── 📂 amd |
145 | | -│ └── 📂 gpu |
146 | | -│ └── 📂 rocm |
147 | | -│ ├── 📄 compose.yaml |
148 | | -│ └── ... |
149 | | -└── 📂 intel |
150 | | - ├── 📂 cpu |
151 | | - │ └── 📂 xeon |
152 | | - │ ├── 📄 compose.yaml |
153 | | - │ └── ... |
154 | | - └── 📂 hpu |
155 | | - └── 📂 gaudi |
156 | | - ├── 📄 compose.yaml |
157 | | - └── ... |
158 | | -``` |
159 | | - |
160 | | -### Setup Environment Variables |
161 | | - |
162 | | -To set up environment variables for deploying MultimodalQnA services, follow these steps: |
163 | | - |
164 | | -1. Set the required environment variables: |
165 | | - |
166 | | - ```bash |
167 | | - # Example: export host_ip=$(hostname -I | awk '{print $1}') |
168 | | - export host_ip="External_Public_IP" |
169 | | - |
170 | | - # Append the host_ip to the no_proxy list to allow container communication |
171 | | - # Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1" |
172 | | - export no_proxy="${no_proxy},${host_ip}" |
173 | | - ``` |
174 | | - |
175 | | -2. If you are in a proxy environment, also set the proxy-related environment variables: |
176 | | - |
177 | | - ```bash |
178 | | - export http_proxy="Your_HTTP_Proxy" |
179 | | - export https_proxy="Your_HTTPs_Proxy" |
180 | | - ``` |
181 | | - |
182 | | -3. Set up other environment variables: |
183 | | - |
184 | | - > Choose **one** command below to set env vars according to your hardware. Otherwise, the port numbers may be set incorrectly. |
185 | | -
|
186 | | - ```bash |
187 | | - # on Gaudi |
188 | | - cd docker_compose/intel/hpu/gaudi |
189 | | - source ./set_env.sh |
190 | | - |
191 | | - # on Xeon |
192 | | - cd docker_compose/intel/cpu/xeon |
193 | | - source ./set_env.sh |
194 | | - ``` |
195 | | - |
196 | | -### Deploy MultimodalQnA on Gaudi |
197 | | - |
198 | | -Refer to the [Gaudi Guide](./docker_compose/intel/hpu/gaudi/README.md) if you would like to build docker images from |
199 | | -source; otherwise, images will be pulled from Docker Hub. |
200 | | - |
201 | | -Find the corresponding [compose.yaml](./docker_compose/intel/hpu/gaudi/compose.yaml). |
202 | | - |
203 | | -```bash |
204 | | -# While still in the docker_compose/intel/hpu/gaudi directory, use docker compose to bring up the services |
205 | | -docker compose -f compose.yaml up -d |
206 | | -``` |
207 | | - |
208 | | -> Notice: Currently only the **Habana Driver 1.18.x** is supported for Gaudi. |
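
Once the containers are up, a quick sanity check helps confirm that every microservice started cleanly before issuing queries. The service name in the log command is illustrative; use `docker compose ps` to see the actual names.

```bash
# List the MultimodalQnA containers and their status
docker compose -f compose.yaml ps

# Tail the logs of a specific service if something looks unhealthy (service name is illustrative)
docker compose -f compose.yaml logs -f lvm
```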
209 | | -
|
210 | | -### Deploy MultimodalQnA on Xeon |
211 | | - |
212 | | -Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) if you would like to build docker images from |
213 | | -source; otherwise, images will be pulled from Docker Hub. |
214 | | - |
215 | | -Find the corresponding [compose.yaml](./docker_compose/intel/cpu/xeon/compose.yaml). |
216 | | - |
217 | | -```bash |
218 | | -# While still in the docker_compose/intel/cpu/xeon directory, use docker compose to bring up the services |
219 | | -docker compose -f compose.yaml up -d |
220 | | -``` |
221 | | - |
222 | | -## MultimodalQnA Demo on Gaudi2 |
223 | | - |
224 | | -### Multimodal QnA UI |
225 | | - |
226 | | - |
227 | | - |
228 | | -### Video Ingestion |
229 | | - |
230 | | - |
231 | | - |
232 | | -### Text Query following the ingestion of a Video |
233 | | - |
234 | | - |
235 | | - |
236 | | -### Image Ingestion |
237 | | - |
238 | | - |
239 | | - |
240 | | -### Text Query following the ingestion of an image |
241 | | - |
242 | | - |
243 | | - |
244 | | -### Text Query following the ingestion of an image using text-to-speech |
245 | | - |
246 | | - |
247 | | - |
248 | | -### Audio Ingestion |
249 | | - |
250 | | - |
251 | | - |
252 | | -### Text Query following the ingestion of an Audio Podcast |
253 | | - |
254 | | - |
255 | | - |
256 | | -### PDF Ingestion |
257 | | - |
258 | | - |
259 | | - |
260 | | -### Text query following the ingestion of a PDF |
261 | | - |
262 | | - |
| 97 | +## Deployment Options |
263 | 98 |
|
264 | | -### View, Refresh, and Delete ingested media in the Vector Store |
| 99 | +The table below lists the currently available deployment options. Each option describes in detail how this example is implemented on the selected hardware. |
265 | 100 |
|
266 | 101 |  |
267 | 102 |
|
|