
Commit 047fb28

Merge branch 'maisi_readme' of https://github.com/Can-Zhao/tutorials into maisi_readme

2 parents: 5b5b01e + c044c17

File tree

1 file changed: +3 −3 lines changed

generation/maisi/README.md (3 additions, 3 deletions)

```diff
@@ -76,11 +76,11 @@ We retrained several state-of-the-art diffusion model-based methods using our da
 
 **Table 3:** Inference Time Cost and GPU Memory Usage. `DM Time` refers to the time required for diffusion model inference. `VAE Time` refers to the time required for VAE decoder inference. The total inference time is the sum of `DM Time` and `VAE Time`. The experiment was conducted on an A100 80G GPU.
 
-During inference, the peak GPU memory usage occurs during the VAE's decoding of latent features.
-To reduce GPU memory usage, we can either increase `autoencoder_tp_num_splits` or reduce `autoencoder_sliding_window_infer_size`.
+During inference, the peak GPU memory usage occurs during the VAE's decoding of latent features.
+To reduce GPU memory usage, we can either increase `autoencoder_tp_num_splits` or reduce `autoencoder_sliding_window_infer_size`.
 Increasing `autoencoder_tp_num_splits` has a smaller impact on the generated image quality, while reducing `autoencoder_sliding_window_infer_size` may introduce stitching artifacts and has a larger impact on the generated image quality.
 
-When `autoencoder_sliding_window_infer_size` is equal to or larger than the latent feature size, the sliding window will not be used, and the time and memory costs remain the same.
+When `autoencoder_sliding_window_infer_size` is equal to or larger than the latent feature size, the sliding window will not be used, and the time and memory costs remain the same.
 
 
 ### Training GPU Memory Usage
```
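The rule described in the changed README lines — sliding-window inference is skipped whenever the window covers the full latent feature size — can be sketched as a small check. This is a hedged illustration only: the helper name `uses_sliding_window`, the tuple-based shapes, and the example sizes are assumptions for demonstration, not MAISI's actual implementation or API.

```python
def uses_sliding_window(latent_size, infer_size):
    """Per the README: if the sliding-window size is equal to or larger than
    the latent feature size in every dimension, sliding-window decoding is
    not used, and time and memory costs remain the same.

    Both arguments are (D, H, W) tuples; the function and shapes are
    illustrative assumptions, not the real MAISI code.
    """
    return any(window < latent for window, latent in zip(infer_size, latent_size))


# A 48^3 window on a 64^3 latent is smaller in every dimension, so the
# sliding window is used (lower memory, possible stitching artifacts).
print(uses_sliding_window((64, 64, 64), (48, 48, 48)))  # True

# A window that covers the latent disables sliding-window inference.
print(uses_sliding_window((64, 64, 64), (64, 64, 64)))  # False
```

Reducing `autoencoder_sliding_window_infer_size` below the latent size trades image quality (stitching artifacts) for memory, while increasing `autoencoder_tp_num_splits` reduces memory with less quality impact, per the README.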
