
Commit 01374e5

[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
1 parent c78bf7b commit 01374e5

2 files changed, +2 -2 lines changed

generation/maisi/README.md

Lines changed: 1 addition & 1 deletion
@@ -50,7 +50,7 @@ We retrained several state-of-the-art diffusion model-based methods using our da
 | 512x512x768 | [80,80,112], 8 patches | 4 | 55G | 904s | 48s |


-The experiment was tested on A100 80G GPU.
+The experiment was tested on A100 80G GPU.

 During inference, the peak GPU memory usage happens during the autoencoder decoding latent features.
 To reduce GPU memory usage, we can either increase `autoencoder_tp_num_splits` or reduce `autoencoder_sliding_window_infer_size`.
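
The two knobs named in this README hunk are configuration values for the autoencoder decoding step. As a minimal sketch of how one might lower them before running inference, the snippet below edits a JSON config; the path `configs/config_infer.json` and the concrete values are assumptions for illustration, only the two key names come from the README text.

```python
# Sketch: tighten the autoencoder memory knobs in an inference config.
# The config path and the example values below are assumptions, not taken
# from the repository; only the two key names appear in the README.
import json

config_path = "configs/config_infer.json"  # hypothetical location

with open(config_path) as f:
    config = json.load(f)

# More tensor-parallel splits in the autoencoder -> smaller per-split activations.
config["autoencoder_tp_num_splits"] = 16  # example value (assumption)

# A smaller sliding-window size -> fewer latent voxels decoded at once.
config["autoencoder_sliding_window_infer_size"] = [48, 48, 48]  # example value (assumption)

with open(config_path, "w") as f:
    json.dump(config, f, indent=4)
```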

generation/maisi/scripts/inference.py

Lines changed: 1 addition & 1 deletion
@@ -231,5 +231,5 @@ def main():
     )
     torch.cuda.reset_peak_memory_stats()
     main()
-    peak_memory_gb = torch.cuda.max_memory_allocated() / (1024 ** 3)  # Convert to GB
+    peak_memory_gb = torch.cuda.max_memory_allocated() / (1024**3)  # Convert to GB
     print(f"Peak GPU memory usage: {peak_memory_gb:.2f} GB")
