CodeGen/README.md
# Code Generation Application
Code Generation (CodeGen) Large Language Models (LLMs) are specialized AI models designed for the task of generating computer code. Such models are trained on datasets that encompass repositories, specialized documentation, programming code, relevant web content, and other related data, giving them a deep understanding of various programming languages, coding patterns, and software development concepts. CodeGen LLMs are engineered to assist developers and programmers. When these LLMs are seamlessly integrated into the developer's Integrated Development Environment (IDE), they have a comprehensive understanding of the coding context, including elements such as comments, function names, and variable names. This contextual awareness empowers them to provide more refined and contextually relevant coding suggestions. Additionally, Retrieval-Augmented Generation (RAG) and Agents are part of the CodeGen example, adding a layer of intelligence and adaptability that ensures the generated code is not only relevant but also accurate, efficient, and tailored to the specific needs of developers and programmers.
The capabilities of CodeGen LLMs include:
The CodeGen example is implemented using the component-level microservices defined in [GenAIComps](https://github.com/opea-project/GenAIComps). The flow chart below shows the information flow between different microservices for this example.
Find the corresponding [compose.yaml](./docker_compose/intel/cpu/xeon/compose.yaml).

Start CodeGen based on the TGI service:

```bash
cd GenAIExamples/CodeGen/docker_compose
source set_env.sh
cd intel/cpu/xeon
docker compose --profile codegen-xeon-tgi up -d
```
Start CodeGen based on the vLLM service:

```bash
cd GenAIExamples/CodeGen/docker_compose
source set_env.sh
cd intel/cpu/xeon
docker compose --profile codegen-xeon-vllm up -d
```
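
With either profile, a quick sanity check can confirm the deployment before sending requests. The commands below are a minimal sketch; container names and health output vary by release:

```bash
# List the services started by the chosen profile and their state.
docker compose ps
# Follow the serving logs until the model has finished loading.
docker compose logs -f --tail=50
```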
Refer to the [Xeon Guide](./docker_compose/intel/cpu/xeon/README.md) for more instructions on building docker images from source.
### Deploy CodeGen on Kubernetes using Helm Chart
Two ways of consuming CodeGen Service:

1. Use cURL command on terminal

```bash
curl http://localhost:7778/v1/codegen \
  -H "Content-Type: application/json" \
  -d '{"messages": "Implement a high-level API for a TODO list application. The API takes as input an operation request and updates the TODO list in place. If the request is invalid, raise an exception."}'
```
If the user wants a CodeGen service with RAG and Agents based on dedicated documentation, set `"agents_flag": "True"` and reference the documentation's `index_name`:

```bash
curl http://localhost:7778/v1/codegen \
-H "Content-Type: application/json" \
-d '{"agents_flag": "True", "index_name": "my_API_document", "messages": "Implement a high-level API for a TODO list application. The API takes as input an operation request and updates the TODO list in place. If the request is invalid, raise an exception."}'
```
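
Before issuing such a request, the referenced index must already contain the documentation. A hedged sketch of ingesting a file through the Data Preparation service follows; the `6007` port, the `/v1/dataprep/ingest` endpoint, and the file name are assumptions here, so verify them against the dataprep service in your `compose.yaml`:

```bash
# Assumed dataprep port and endpoint; confirm both in compose.yaml.
curl -X POST "http://localhost:6007/v1/dataprep/ingest" \
  -H "Content-Type: multipart/form-data" \
  -F "files=@./my_API_document.pdf" \
  -F "index_name=my_API_document"
```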
2. Access via frontend
To access the frontend, open the following URL in your browser: http://{host_ip}:5173.
CodeGen/docker_compose/intel/cpu/xeon/README.md
This document outlines the deployment process for a CodeGen application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on an Intel Xeon server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `llm`. We will publish the Docker images to Docker Hub soon, further simplifying the deployment process for this service.
The default pipeline deploys with vLLM as the LLM serving component. It also provides the option of using the TGI backend for the LLM microservice.
## 🚀 Create an AWS Xeon Instance
To run the example on an AWS Xeon instance, start by creating an AWS account if you don't have one already. Then, get started with the [EC2 Console](https://console.aws.amazon.com/ec2/v2/home). AWS EC2 M7i, C7i, C7i-flex, and M7i-flex instances are built on 4th Generation Intel Xeon Scalable processors and are suitable for the task.
For detailed information about these instance types, you can refer to [m7i](https://aws.amazon.com/ec2/instance-types/m7i/). Once you've chosen the appropriate instance type, proceed with configuring your instance settings, including network configurations, security groups, and storage options.
After launching your instance, you can connect to it using SSH (for Linux instances) or Remote Desktop Protocol (RDP) (for Windows instances). From there, you'll have full access to your Xeon server, allowing you to install, configure, and manage your applications as needed.
## 🚀 Start Microservices and MegaService
The CodeGen megaservice manages several microservices, including the Embedding, Retrieval, and LLM microservices, within a Directed Acyclic Graph (DAG). In the diagram below, the LLM microservice is a language model microservice that generates code snippets based on the user's input query. The TGI service serves as a text generation interface, providing a RESTful API for the LLM microservice. Data Preparation allows users to save or update documents and online resources in the vector database; users can upload files or provide URLs and manage their saved resources. The CodeGen Gateway acts as the entry point for the CodeGen application, invoking the megaservice to generate code snippets in response to the user's input query.

The mega flow of the CodeGen application, from the user's input query to the application's output response, is as follows:

```mermaid
---
config:
  flowchart:
    nodeSpacing: 400
    rankSpacing: 100
    curve: linear
  themeVariables:
    fontSize: 25px
---
flowchart LR
    %% Colors %%
    classDef blue fill:#ADD8E6,stroke:#ADD8E6,stroke-width:2px,fill-opacity:0.5

    subgraph CodeGen-MegaService
        direction LR
        EM([Embedding<br>MicroService]):::blue
        RET([Retrieval<br>MicroService]):::blue
        RER([Agents]):::blue
        LLM([LLM<br>MicroService]):::blue
    end
    subgraph User Interface
        direction LR
        a([Submit Query Tab]):::orchid
        UI([UI server]):::orchid
        Ingest([Manage Resources]):::orchid
    end

    CLIP_EM{{Embedding<br>service}}
    VDB{{Vector DB}}
    V_RET{{Retriever<br>service}}
    Ingest{{Ingest data}}
    DP([Data Preparation]):::blue
    LLM_gen{{TGI Service}}
    GW([CodeGen GateWay]):::orange

    %% Data Preparation flow
    %% Ingest data flow
    direction LR
    Ingest[Ingest data] --> UI
    UI --> DP
    DP <-.-> CLIP_EM

    %% Questions interaction
    direction LR
    a[User Input Query] --> UI
    UI --> GW
    GW <==> CodeGen-MegaService
    EM ==> RET
    RET ==> RER
    RER ==> LLM

    %% Embedding service flow
    direction LR
    EM <-.-> CLIP_EM
    RET <-.-> V_RET
    LLM <-.-> LLM_gen

    direction TB
    %% Vector DB interaction
    V_RET <-.-> VDB
    DP <-.-> VDB
```

Since the `compose.yaml` consumes some environment variables, you need to set them up in advance, as below.
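
As a minimal sketch of that setup, the variables below are typical examples rather than the authoritative list, which lives in `set_env.sh`:

```bash
# External IP of the host running the containers (example variable).
export host_ip=$(hostname -I | awk '{print $1}')
# Token for pulling gated models from Hugging Face (example variable).
export HUGGINGFACEHUB_API_TOKEN="<your-hf-token>"
# Source the shared script, which defines the remaining variables.
cd GenAIExamples/CodeGen/docker_compose
source set_env.sh
```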

After the services are up, consume the CodeGen service from the terminal:

```bash
curl http://localhost:7778/v1/codegen \
  -H "Content-Type: application/json" \
  -d '{
    "messages": "Implement a high-level API for a TODO list application. The API takes as input an operation request and updates the TODO list in place. If the request is invalid, raise an exception."
  }'
```

If the user wants a CodeGen service with RAG and Agents based on dedicated documentation, set `"agents_flag": "True"` and reference the documentation's `index_name`:

```bash
curl http://localhost:7778/v1/codegen \
-H "Content-Type: application/json" \
-d '{"agents_flag": "True", "index_name": "my_API_document", "messages": "Implement a high-level API for a TODO list application. The API takes as input an operation request and updates the TODO list in place. If the request is invalid, raise an exception."}'
```
## 🚀 Launch the UI
To access the frontend, open the following URL in your browser: `http://{host_ip}:5173`. By default, the UI runs on port 5173 internally. If you prefer to use a different host port to access the frontend, you can modify the port mapping in the `compose.yaml` file as shown below:
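
A sketch of that change follows; the `codegen-xeon-ui-server` service name is an assumption here, so match it to the UI service actually defined in your `compose.yaml`:

```yaml
  codegen-xeon-ui-server:
    ports:
      - "80:5173" # Map host port 80 to the UI's internal port 5173.
```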
CodeGen/docker_compose/intel/hpu/gaudi/README.md
The default pipeline deploys with vLLM as the LLM serving component. It also provides the option of using the TGI backend for the LLM microservice.

## 🚀 Start MicroServices and MegaService
The CodeGen megaservice manages several microservices, including the Embedding, Retrieval, and LLM microservices, within a Directed Acyclic Graph (DAG). In the diagram below, the LLM microservice is a language model microservice that generates code snippets based on the user's input query. The TGI service serves as a text generation interface, providing a RESTful API for the LLM microservice. Data Preparation allows users to save or update documents and online resources in the vector database; users can upload files or provide URLs and manage their saved resources. The CodeGen Gateway acts as the entry point for the CodeGen application, invoking the megaservice to generate code snippets in response to the user's input query.

The mega flow of the CodeGen application, from the user's input query to the application's output response, is as follows:

```mermaid
---
config:
  flowchart:
    nodeSpacing: 400
    rankSpacing: 100
    curve: linear
  themeVariables:
    fontSize: 25px
---
flowchart LR
    %% Colors %%
    classDef blue fill:#ADD8E6,stroke:#ADD8E6,stroke-width:2px,fill-opacity:0.5

    subgraph CodeGen-MegaService
        direction LR
        EM([Embedding<br>MicroService]):::blue
        RET([Retrieval<br>MicroService]):::blue
        RER([Agents]):::blue
        LLM([LLM<br>MicroService]):::blue
    end
    subgraph User Interface
        direction LR
        a([Submit Query Tab]):::orchid
        UI([UI server]):::orchid
        Ingest([Manage Resources]):::orchid
    end

    CLIP_EM{{Embedding<br>service}}
    VDB{{Vector DB}}
    V_RET{{Retriever<br>service}}
    Ingest{{Ingest data}}
    DP([Data Preparation]):::blue
    LLM_gen{{TGI Service}}
    GW([CodeGen GateWay]):::orange

    %% Data Preparation flow
    %% Ingest data flow
    direction LR
    Ingest[Ingest data] --> UI
    UI --> DP
    DP <-.-> CLIP_EM

    %% Questions interaction
    direction LR
    a[User Input Query] --> UI
    UI --> GW
    GW <==> CodeGen-MegaService
    EM ==> RET
    RET ==> RER
    RER ==> LLM

    %% Embedding service flow
    direction LR
    EM <-.-> CLIP_EM
    RET <-.-> V_RET
    LLM <-.-> LLM_gen

    direction TB
    %% Vector DB interaction
    V_RET <-.-> VDB
    DP <-.-> VDB
```
### Setup Environment Variables
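
The flow mirrors the Xeon deployment; the commands below are a minimal sketch, assuming the repository layout used above (`set_env.sh` defines the authoritative variable list):

```bash
cd GenAIExamples/CodeGen/docker_compose
source set_env.sh
cd intel/hpu/gaudi
# Start the services with the vLLM profile; see compose.yaml for other profiles.
docker compose --profile codegen-gaudi-vllm up -d
```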

After the services are up, consume the CodeGen service from the terminal:

```bash
curl http://localhost:7778/v1/codegen \
  -H "Content-Type: application/json" \
  -d '{
    "messages": "Implement a high-level API for a TODO list application. The API takes as input an operation request and updates the TODO list in place. If the request is invalid, raise an exception."
  }'
```

If the user wants a CodeGen service with RAG and Agents based on dedicated documentation, set `"agents_flag": "True"` and reference the documentation's `index_name`:

```bash
curl http://localhost:7778/v1/codegen \
-H "Content-Type: application/json" \
-d '{"agents_flag": "True", "index_name": "my_API_document", "messages": "Implement a high-level API for a TODO list application. The API takes as input an operation request and updates the TODO list in place. If the request is invalid, raise an exception."}'
```
## 🚀 Launch the Svelte Based UI
To access the frontend, open the following URL in your browser: `http://{host_ip}:5173`. By default, the UI runs on port 5173 internally. If you prefer to use a different host port to access the frontend, you can modify the port mapping in the `compose.yaml` file as shown below:
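
A sketch of that change follows; the `codegen-gaudi-ui-server` service name is an assumption here, so match it to the UI service actually defined in your `compose.yaml`:

```yaml
  codegen-gaudi-ui-server:
    ports:
      - "80:5173" # Map host port 80 to the UI's internal port 5173.
```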