Commit bf1123e

Merge pull request #285 from univerone/add-docker-build

[MRG] :whale: Update modelci docker image (CPU & GPU). docker-compose is done.

2 parents 8ba8605 + b99c2aa · commit bf1123e

13 files changed: +517 −114 lines

.dockerignore (+2 −2)
```diff
@@ -5,7 +5,6 @@ build
 dist
 *.egg-info
 *.egg/
-*.pyc
 *.swp

 .tox
@@ -15,7 +14,8 @@ html/*
 __pycache__

 ### Front end ###
-frontend
+node_modules
+npm-debug.log

 ### Build cache ###
 */**/*.cache
```

.github/workflows/run_test.yml (+1 −1)
```diff
@@ -11,7 +11,7 @@ on:
       - 'frontend/**'
 jobs:
   test:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-18.04
     services:
       mongodb:
         image: mongo
```

README.md (+106 −34)
````diff
@@ -47,79 +47,151 @@ Several features are in beta testing and will be available in the next release s

 *If you want to join our development team, please contact huaizhen001 @ e.ntu.edu.sg*

-## Installation
+## Demo

-### using pip
+The figures below illustrate the web interface of our system and the overall workflow.
+
+| Web frontend | Workflow |
+|:------------:|:--------------:|
+| <img src="https://i.loli.net/2020/12/10/4FsfciXjtPO12BQ.png" alt="drawing" width="500"/> | <img src="https://i.loli.net/2020/12/10/8IaeW9mS2NjQEYB.png" alt="drawing" width="500"/> |
+
+## Installation Guide
+
+### Prerequisites
+
+- A GNU/Linux environment (Ubuntu preferred)
+- [Docker](https://docs.docker.com/engine/install/)
+- [Docker Compose](https://docs.docker.com/compose/) (optional, for installation via Docker)
+- [TVM](https://github.com/apache/incubator-tvm) and the `tvm` Python module (optional)
+- [TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html) and its Python API (optional)
+- Python >= 3.7
+
+### Installation using pip

 ```shell script
-# upgrade installation packages first
-pip install -U setuptools requests
 # install modelci from GitHub
 pip install git+https://github.com/cap-ntu/ML-Model-CI.git@master
 ```

-### create conda workspace
+Once installed, make sure the Docker daemon is running; you can then start the MLModelCI service on a leader server by:
+
+```bash
+modelci service init
+```

-**Note**
-- Conda and Docker are required to run this installation script.
-- To use TensorRT, you have to manually install TensorRT (`sudo` is required). See instruction
-  [here](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html).
+![CLI start service](https://i.loli.net/2021/04/15/rLiMoxkqRO67Tyg.gif)

-```shell script
-bash scripts/install.sh
+Or stop the service by:
+
+```bash
+modelci service stop
 ```

-### Docker
+![CLI stop service](https://i.loli.net/2021/04/16/jo1ZnWsqrmxFvlU.gif)
+
+### Installation using Docker

-![](https://img.shields.io/docker/pulls/mlmodelci/mlmodelci.svg) ![](https://img.shields.io/docker/image-size/mlmodelci/mlmodelci)
+![](https://img.shields.io/docker/pulls/mlmodelci/mlmodelci.svg)
+
+#### For CPU-only Machines
+
+![](https://img.shields.io/docker/v/mlmodelci/mlmodelci/cpu) ![](https://img.shields.io/docker/image-size/mlmodelci/mlmodelci/cpu)

 ```shell script
-docker pull mlmodelci/mlmodelci
+docker pull mlmodelci/mlmodelci:cpu
 ```

-<!-- Please refer to [here](/integration/README.md) for more information. -->
+Start the basic services with Docker Compose:

+```bash
+docker-compose -f ML-Model-CI/docker/docker-compose-cpu-modelhub.yml up -d
+```

-## Quick Start
+Stop the services by:

-The below figurs illusrates the
-| Web frontend | Workflow |
-|:------------:|:--------------:|
-| <img src="https://i.loli.net/2020/12/10/4FsfciXjtPO12BQ.png" alt="drawing" width="500"/> | <img src="https://i.loli.net/2020/12/10/8IaeW9mS2NjQEYB.png" alt="drawing" width="500"/> |
+```bash
+docker-compose -f ML-Model-CI/docker/docker-compose-cpu-modelhub.yml down
+```
+
+#### For CUDA 10.2 Machines

-### 0. Start the ModelCI service
+![](https://img.shields.io/docker/v/mlmodelci/mlmodelci/cuda10.2-cudnn8) ![](https://img.shields.io/docker/image-size/mlmodelci/mlmodelci/cuda10.2-cudnn8)

-Once you have installed, start MLModelCI service on a leader server by:
 ```shell script
-modelci service init
+docker pull mlmodelci/mlmodelci:cuda10.2-cudnn8
+```
+
+Start the basic services with Docker Compose:
+
+```bash
+docker-compose -f ML-Model-CI/docker/docker-compose-gpu-modelhub.yml up -d
+```
+
+![docker-compose start service](https://i.loli.net/2021/04/15/65oYIBurfhPRK3U.gif)
+
+Stop the services by:
+
+```bash
+docker-compose -f ML-Model-CI/docker/docker-compose-gpu-modelhub.yml down
 ```

-**We provide three options for users to use MLModelCI: CLI, Web interface and import it as a python package**
+![docker-compose stop service](https://i.loli.net/2021/04/15/CyNzo4uhXkSrQRE.gif)
+
+<!-- Please refer to [here](/integration/README.md) for more information. -->
+
+## Usage
+
+**We provide three ways to use MLModelCI: the CLI, running it programmatically, and the web interface.**

 ### 1. CLI

-```python
+```console
 # publish a model to the system
-modelci modelhub publish registration_example.yml
+modelci@modelci-PC:~$ modelci modelhub publish -f example/resnet50.yml
+{'data': {'id': ['60746e4bc3d5598e0e7a786d']}, 'status': True}
 ```

-Please refer to [WIKI](https://github.com/cap-ntu/ML-Model-CI/wiki) for more CLIs.
+Please refer to the [WIKI](https://github.com/cap-ntu/ML-Model-CI/wiki) for more CLI options.

-### 2. Python Package
+### 2. Running Programmatically

 ```python
 # utilize the convert function
-from modelci.hub.converter import ONNXConverter
+from modelci.hub.converter import convert
 from modelci.types.bo import IOShape

 # the system can trigger the function automatically
 # users can call the function individually
-ONNXConverter.from_torch_module(
-    '<path to torch model>',
-    '<path to export onnx model>',
-    inputs=[IOShape([-1, 3, 224, 224], float)],
-)
+convert(
+    '<torch model>',
+    src_framework='pytorch',
+    dst_framework='onnx',
+    save_path='<path to export onnx model>',
+    inputs=[IOShape([-1, 3, 224, 224], dtype=float)],
+    outputs=[IOShape([-1, 1000], dtype=float)],
+    opset=11)
 ```

+### 3. Web Interface
+
+If you have installed MLModelCI via pip, you should start the frontend service manually:
+
+```bash
+# navigate to the frontend folder
+cd frontend
+# install dependencies
+yarn install
+# start the frontend
+yarn start
+```
+
+The frontend will start on <http://localhost:3333>.
+
 ## Quickstart with Notebook

 - [Publish an image classification model](./example/notebook/image_classification_model_deployment.ipynb) [![nbviewer](https://raw.githubusercontent.com/jupyter/design/master/logos/Badges/nbviewer_badge.svg)](https://nbviewer.jupyter.org/github/cap-ntu/ML-Model-CI/blob/master/example/notebook/image_classification_model_deployment.ipynb)
````
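The reworked README replaces the framework-specific `ONNXConverter.from_torch_module` call with the unified `convert` function, but its snippet leaves a `'<torch model>'` placeholder as the first argument. For readers who want to try the new API end to end, here is a minimal, hypothetical usage sketch: it assumes `torchvision` is installed and that `convert` accepts an in-memory `torch.nn.Module` as the source model (which the snippet implies but the diff does not state); the model choice and export path are illustrative, not part of the commit.

```python
# Hedged sketch (not from this commit): convert a torchvision ResNet-50 to
# ONNX with the convert() API introduced in the README diff above.
import torchvision.models as models

from modelci.hub.converter import convert
from modelci.types.bo import IOShape

# Illustrative source model; assumed to be accepted as a torch.nn.Module.
model = models.resnet50(pretrained=True)

convert(
    model,
    src_framework='pytorch',
    dst_framework='onnx',
    save_path='./resnet50.onnx',  # illustrative export path
    inputs=[IOShape([-1, 3, 224, 224], dtype=float)],  # dynamic batch, NCHW input
    outputs=[IOShape([-1, 1000], dtype=float)],        # 1000-class ImageNet logits
    opset=11,
)
```

The `IOShape` arguments mirror the README snippet exactly; only the first positional argument and `save_path` are filled in with concrete values.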
