Commit 5b2c4d7 (parent: ce53d1d)

readme

Signed-off-by: Can-Zhao <[email protected]>

File tree

4 files changed (+42, −50 lines)


generation/maisi/README.md

Lines changed: 25 additions & 19 deletions
@@ -12,10 +12,13 @@ More details can be found in our WACV 2025 paper:

 [Guo, P., Zhao, C., Yang, D., Xu, Z., Nath, V., Tang, Y., ... & Xu, D. (2024). MAISI: Medical AI for Synthetic Imaging. WACV 2025](https://arxiv.org/pdf/2409.11169)

-**Release Note (March 2025):** We are excited to announce the new MAISI Version `maisi3d-rflow`. Compared with the previous version `maisi3d-ddpm`, it accelerated latent diffusion model inference by 33x. The differences are:
+🎉🎉🎉🎉🎉🎉**Release Note (March 2025):** 🎉🎉🎉🎉🎉🎉
+
+We are excited to announce the new MAISI version `maisi3d-rflow`. Compared with the previous version `maisi3d-ddpm`, **it accelerates latent diffusion model inference by 33x**. The MAISI VAE is unchanged. The differences are:
 - The MAISI version `maisi3d-ddpm` uses the basic DDPM noise scheduler, while `maisi3d-rflow` uses the Rectified Flow scheduler, which makes diffusion model inference 33 times faster.
 - The MAISI version `maisi3d-ddpm` requires training images to be labeled with body regions (`"top_region_index"` and `"bottom_region_index"`), while `maisi3d-rflow` has no such requirement. In other words, it is easier to prepare training data for `maisi3d-rflow`.
 - For the released model weights, `maisi3d-rflow` can generate images with better quality for the head region and for small output volumes, and comparable quality in other cases, compared with `maisi3d-ddpm`.
+- `maisi3d-rflow` adds a diffusion model input, `modality`, which gives it the flexibility to extend to other modalities. Currently it is always set to 1, since this version only supports CT generation. We predefined some modalities in [./configs/modality_mapping.json](./configs/modality_mapping.json).

 **GUI demo:** Welcome to try our GUI demo at [https://build.nvidia.com/nvidia/maisi](https://build.nvidia.com/nvidia/maisi).
 The GUI is only a demo for toy examples. This GitHub repo is the full version.
@@ -67,22 +70,20 @@ We retrained several state-of-the-art diffusion model-based methods using our da

 ## Time Cost and GPU Memory Usage

 ### Inference Time Cost and GPU Memory Usage
-### Inference Time Cost and GPU Memory Usage
-
-| `output_size` | latent size |`autoencoder_sliding_window_infer_size` | `autoencoder_tp_num_splits` | Peak Memory | VAE Time | DM Time (`maisi3d-ddpm`) | DM Time (`maisi3d-rflow`) | VAE Time + DM Time (`maisi3d-ddpm`) | VAE Time + DM Time (`maisi3d-rflow`) |
-|---------------|:--------------------------------------:|:--------------------------------------:|:---------------------------:|:-----------:|:--------:|:---------------:|:---------------:|:------------------------:|:------------------------:|
-| [256x256x128](./configs/config_infer_16g_256x256x128.json) |4x64x64x32| >=[64,64,32], not used | 2 | 15.0G | 1s | 57s | 2s | 58s | 3s |
-| [256x256x256](./configs/config_infer_16g_256x256x256.json) |4x64x64x64| [48,48,64], 4 patches | 4 | 15.4G | 5s | 81s | 3s | 86s | 8s |
-| [512x512x128](./configs/config_infer_16g_512x512x128.json) |4x128x128x32| [64,64,32], 9 patches | 2 | 15.7G | 8s | 138s | 5s | 146s | 13s |
-| | | | | | | | | | |
-| [256x256x256](./configs/config_infer_24g_256x256x256.json) |4x64x64x64| >=[64,64,64], not used | 4 | 22.7G | 2s | 81s | 3s | 83s | 5s |
-| [512x512x128](./configs/config_infer_24g_512x512x128.json) |4x128x128x32| [80,80,32], 4 patches | 2 | 21.0G | 6s | 138s | 5s | 144s | 11s |
-| [512x512x512](./configs/config_infer_24g_512x512x512.json) |4x128x128x128| [64,64,48], 36 patches | 2 | 22.8G | 29s | 569s | 19s | 598s | 48s |
-| | | | | | | | | | |
-| [512x512x512](./configs/config_infer_32g_512x512x512.json) |4x128x128x128| [80,80,48], 16 patches | 4 | 28.4G | 30s | 569s | 19s | 599s | 49s |
-| | | | | | | | | | |
-| [512x512x128](./configs/config_infer_80g_512x512x128.json) |4x128x128x32| >=[128,128,32], not used | 4 | 37.7G | 127s | 138s | 5s | 265s | 132s |
-| [512x512x512](./configs/config_infer_80g_512x512x512.json) |4x128x128x128| [80,80,80], 8 patches | 2 | 45.3G | 32s | 569s | 19s | 601s | 51s |
-| [512x512x768](./configs/config_infer_80g_512x512x768.json) |4x128x128x192| [80,80,112], 8 patches | 4 | 56.2G | 50s | 904s | 30s | 954s | 80s |
+| `output_size` | Peak Memory | VAE Time + DM Time (`maisi3d-ddpm`) | VAE Time + DM Time (`maisi3d-rflow`) | latent size | `autoencoder_sliding_window_infer_size` | `autoencoder_tp_num_splits` | VAE Time | DM Time (`maisi3d-ddpm`) | DM Time (`maisi3d-rflow`) |
+|---------------|:-----------:|:------------------------:|:------------------------:|:--------------------------------------:|:--------------------------------------:|:---------------------------:|:--------:|:---------------:|:---------------:|
+| [256x256x128](./configs/config_infer_16g_256x256x128.json) | 15.0G | 58s | 3s | 4x64x64x32 | >=[64,64,32], not used | 2 | 1s | 57s | 2s |
+| [256x256x256](./configs/config_infer_16g_256x256x256.json) | 15.4G | 86s | 8s | 4x64x64x64 | [48,48,64], 4 patches | 4 | 5s | 81s | 3s |
+| [512x512x128](./configs/config_infer_16g_512x512x128.json) | 15.7G | 146s | 13s | 4x128x128x32 | [64,64,32], 9 patches | 2 | 8s | 138s | 5s |
+| | | | | | | | | | |
+| [256x256x256](./configs/config_infer_24g_256x256x256.json) | 22.7G | 83s | 5s | 4x64x64x64 | >=[64,64,64], not used | 4 | 2s | 81s | 3s |
+| [512x512x128](./configs/config_infer_24g_512x512x128.json) | 21.0G | 144s | 11s | 4x128x128x32 | [80,80,32], 4 patches | 2 | 6s | 138s | 5s |
+| [512x512x512](./configs/config_infer_24g_512x512x512.json) | 22.8G | 598s | 48s | 4x128x128x128 | [64,64,48], 36 patches | 2 | 29s | 569s | 19s |
+| | | | | | | | | | |
+| [512x512x512](./configs/config_infer_32g_512x512x512.json) | 28.4G | 599s | 49s | 4x128x128x128 | [80,80,48], 16 patches | 4 | 30s | 569s | 19s |
+| | | | | | | | | | |
+| [512x512x512](./configs/config_infer_80g_512x512x512.json) | 45.3G | 601s | 51s | 4x128x128x128 | [80,80,80], 8 patches | 2 | 32s | 569s | 19s |
+| [512x512x768](./configs/config_infer_80g_512x512x768.json) | 49.7G | 961s | 87s | 4x128x128x192 | [80,80,96], 12 patches | 4 | 57s | 904s | 30s |

 **Table 3:** Inference Time Cost and GPU Memory Usage. `DM Time` refers to the time required for diffusion model inference. `VAE Time` refers to the time required for VAE decoder inference. The total inference time is the sum of `DM Time` and `VAE Time`. The experiment was conducted on an A100 80G GPU.
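The patch counts quoted in the `autoencoder_sliding_window_infer_size` column follow from the latent spatial size, the window size, and the `autoencoder_sliding_window_infer_overlap` value in the corresponding config (0.25 in most configs, 0.4 in the 512x512x768 one). A minimal sketch of the arithmetic, not MONAI's actual sliding-window implementation:

```python
import math

def num_patches(latent_spatial, window, overlap):
    """Approximate sliding-window patch count (illustrative sketch)."""
    count = 1
    for size, win in zip(latent_spatial, window):
        if win >= size:
            continue  # a single window covers this dimension
        stride = win * (1 - overlap)  # step between window starts
        count *= math.ceil((size - win) / stride) + 1
    return count

print(num_patches([128, 128, 32], [64, 64, 32], 0.25))   # 9, matches the 16G 512x512x128 row
print(num_patches([64, 64, 64], [48, 48, 64], 0.25))     # 4, matches the 16G 256x256x256 row
print(num_patches([128, 128, 192], [80, 80, 96], 0.4))   # 12, matches the 80G 512x512x768 row
```

The counts reproduce every "N patches" entry in Table 3, which is why shrinking the window (or raising the overlap) trades lower peak memory for more VAE decode time.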

@@ -96,7 +97,7 @@ When `autoencoder_sliding_window_infer_size` is equal to or larger than the late

 ### Training GPU Memory Usage
 The VAE is trained on patches and can be trained using a 16G GPU if the patch size is set to a small value, such as [64, 64, 64]. Users can adjust the patch size to fit the available GPU memory. For the released model, we initially trained the autoencoder on 16G V100 GPUs with a small patch size of [64, 64, 64], and then continued training on 32G V100 GPUs with a larger patch size of [128, 128, 128].

-The DM and ControlNet are trained on whole images rather than patches. The GPU memory usage during training depends on the size of the input images.
+The DM and ControlNet are trained on whole images rather than patches. The GPU memory usage during training depends on the size of the input images. There is no significant difference in memory usage between `maisi3d-ddpm` and `maisi3d-rflow`.

 | image size | latent size | Peak Memory |
 |--------------|:------------- |:-----------:|
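The image-size/latent-size pairs in these tables all follow one pattern: 4 latent channels, with each spatial dimension downsampled 4x by the VAE (an assumption inferred from the table rows, e.g. 256x256x128 maps to 4x64x64x32). A sketch of that mapping:

```python
def latent_size(image_size):
    # Assumption from the tables: 4 latent channels, 4x spatial downsampling.
    return [4] + [s // 4 for s in image_size]

print(latent_size([512, 512, 128]))  # [4, 128, 128, 32]
```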
@@ -198,7 +199,12 @@ Please refer to [maisi_inference_tutorial.ipynb](maisi_inference_tutorial.ipynb)
 To run the inference script with TensorRT acceleration, please run:
 ```bash
 export MONAI_DATA_DIRECTORY=<dir_you_will_download_data>
-python -m scripts.inference -c ./configs/config_maisi.json -i ./configs/config_infer.json -e ./configs/environment.json -x ./configs/config_trt.json --random-seed 0
+python -m scripts.inference -c ./configs/config_maisi3d-ddpm.json -i ./configs/config_infer.json -e ./configs/environment_maisi3d-ddpm.json -x ./configs/config_trt.json --random-seed 0 --version maisi3d-ddpm
+```
+
+```bash
+export MONAI_DATA_DIRECTORY=<dir_you_will_download_data>
+python -m scripts.inference -c ./configs/config_maisi3d-rflow.json -i ./configs/config_infer.json -e ./configs/environment_maisi3d-rflow.json -x ./configs/config_trt.json --random-seed 0 --version maisi3d-rflow
 ```
 The extra config file [./configs/config_trt.json](./configs/config_trt.json) uses the `trt_compile()` utility from MONAI to convert selected modules to TensorRT by overriding their definitions from [./configs/config_infer.json](./configs/config_infer.json).

generation/maisi/configs/config_infer_80g_512x512x128.json

Lines changed: 0 additions & 29 deletions
This file was deleted.

generation/maisi/configs/config_infer_80g_512x512x768.json

Lines changed: 2 additions & 2 deletions
@@ -17,8 +17,8 @@
 0.75,
 0.66667
 ],
-"autoencoder_sliding_window_infer_size": [80,80,112],
-"autoencoder_sliding_window_infer_overlap": 0.25,
+"autoencoder_sliding_window_infer_size": [80,80,96],
+"autoencoder_sliding_window_infer_overlap": 0.4,
 "autoencoder_tp_num_splits": 4,
 "controlnet": "$@controlnet_def",
 "diffusion_unet": "$@diffusion_unet_def",
generation/maisi/configs/modality_mapping.json

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
+{
+"unknown":0,
+"ct":1,
+"ct_wo_contrast":2,
+"ct_contrast":3,
+"mri":8,
+"mri_t1":9,
+"mri_t2":10,
+"mri_flair":11,
+"mri_pd":12,
+"mri_dwi":13,
+"mri_adc":14,
+"mri_ssfp":15,
+"mri_mra":16
+}
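This new mapping backs the `modality` input mentioned in the README release note. A minimal sketch of looking up the integer code a modality name resolves to (the mapping is reproduced inline so the snippet is self-contained; variable names are illustrative, not the repo's actual code):

```python
import json

# Contents of configs/modality_mapping.json, inlined for the sketch.
MODALITY_MAPPING_JSON = """
{
"unknown":0, "ct":1, "ct_wo_contrast":2, "ct_contrast":3,
"mri":8, "mri_t1":9, "mri_t2":10, "mri_flair":11, "mri_pd":12,
"mri_dwi":13, "mri_adc":14, "mri_ssfp":15, "mri_mra":16
}
"""

modality_mapping = json.loads(MODALITY_MAPPING_JSON)

# The current maisi3d-rflow release only supports CT, so the diffusion
# model's `modality` input is always the code for "ct".
modality_code = modality_mapping["ct"]
print(modality_code)  # 1
```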
