Commit 41dabe4

grammar
Signed-off-by: Can-Zhao <[email protected]>
1 parent f43d3d5 commit 41dabe4

File tree

1 file changed: +1 −1 lines changed


generation/maisi/README.md

Lines changed: 1 addition & 1 deletion
@@ -76,7 +76,7 @@ We retrained several state-of-the-art diffusion model-based methods using our da
 
 **Table 3:** Inference Time Cost and GPU Memory Usage. `DM Time` refers to the time required for diffusion model inference. `VAE Time` refers to the time required for VAE decoder inference. The total inference time is the sum of `DM Time` and `VAE Time`. The experiment was conducted on an A100 80G GPU.
 
-During inference, the peak GPU memory usage occurs during the autoencoder's decoding of latent features.
+During inference, the peak GPU memory usage occurs during the VAE's decoding of latent features.
 To reduce GPU memory usage, we can either increase `autoencoder_tp_num_splits` or reduce `autoencoder_sliding_window_infer_size`.
 Increasing `autoencoder_tp_num_splits` has a smaller impact on the generated image quality, while reducing `autoencoder_sliding_window_infer_size` may introduce stitching artifacts and has a larger impact on the generated image quality.
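The tradeoff the changed paragraph describes — more decoder splits lower the peak activation memory without changing the output when each chunk decodes independently — can be sketched with a toy stand-in. `decode_in_splits` and the `upsample` factor here are hypothetical illustrations, not the MAISI VAE API:

```python
import numpy as np

def decode_in_splits(latent, num_splits, upsample=4):
    """Toy stand-in for a VAE decoder: each latent chunk is "decoded"
    (upsampled) independently, so the peak activation size scales with
    the chunk size rather than the whole volume. Hypothetical sketch,
    not the MAISI implementation."""
    chunks = np.array_split(latent, num_splits, axis=0)
    peak = 0
    out = []
    for c in chunks:
        decoded = np.repeat(c, upsample, axis=0)  # fake per-chunk "decode"
        peak = max(peak, decoded.nbytes)          # track peak activation bytes
        out.append(decoded)
    return np.concatenate(out, axis=0), peak

latent = np.zeros(64, dtype=np.float32)
full, peak_1 = decode_in_splits(latent, num_splits=1)
split, peak_4 = decode_in_splits(latent, num_splits=4)
assert np.array_equal(full, split)  # identical output for a pointwise decode
assert peak_4 < peak_1              # more splits -> lower peak memory
```

A real convolutional decoder has receptive-field overlap between chunks, which is why reducing `autoencoder_sliding_window_infer_size` too far can introduce the stitching artifacts the README warns about, while splitting channels or batch-style work via `autoencoder_tp_num_splits` leaves the computed values unchanged.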
