Commit 0d2cf00

add details in readme
Signed-off-by: Can-Zhao <[email protected]>
1 parent 504a14b commit 0d2cf00

File tree: 1 file changed (+2/-3 lines)


generation/maisi/README.md

Lines changed: 2 additions & 3 deletions
@@ -67,10 +67,9 @@ We retrained several state-of-the-art diffusion model-based methods using our da
 | [512x512x512](./configs/config_infer_80g_512x512x512.json) |128x128x128| [80,80,80], 8 patches | 2 | 44G | 569s | 30s |
 | [512x512x768](./configs/config_infer_24g_512x512x768.json) |128x128x192| [80,80,112], 8 patches | 4 | 55G | 904s | 48s |

-When `autoencoder_sliding_window_infer_size` is equal or larger than the latent feature size, sliding window will not be used,
-and the time and memory cost remain the same.
+**Table 3:** Inference Time Cost and GPU Memory Usage. `DM Time` refers to the time cost of diffusion model inference. `VAE Time` refers to the time cost of VAE decoder inference. When `autoencoder_sliding_window_infer_size` is equal to or larger than the latent feature size, the sliding window will not be used,
+and the time and memory cost remain the same. The experiment was tested on an A100 80G GPU.

-The experiment was tested on A100 80G GPU.

 During inference, the peak GPU memory usage happens while the autoencoder decodes the latent features.
 To reduce GPU memory usage, we can either increase `autoencoder_tp_num_splits` or reduce `autoencoder_sliding_window_infer_size`.
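
As a sketch of that last tuning note, assuming the two parameters are plain top-level keys in the linked inference config JSON (the specific values below are hypothetical, chosen only to illustrate the memory-saving direction, not copied from the repository):

```python
import json

# Hypothetical edit of one of the configs linked in the table above;
# the parameter values here are illustrative assumptions, not the
# repository's recommended settings.
path = "configs/config_infer_80g_512x512x512.json"
with open(path) as f:
    cfg = json.load(f)

# A smaller sliding window makes the VAE decoder process smaller latent
# patches at a time, lowering peak GPU memory (more patches, more time).
cfg["autoencoder_sliding_window_infer_size"] = [64, 64, 64]
# More tensor splits in the autoencoder likewise trades speed for memory.
cfg["autoencoder_tp_num_splits"] = 8

with open(path, "w") as f:
    json.dump(cfg, f, indent=4)
```

Per the table, shrinking the window or raising the split count should trade longer `VAE Time` for lower peak GPU memory during decoding.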
