Commit e00a69e

Merge branch 'main' into 3d_ddpm

2 parents: 0782fa2 + fc15cdd

File tree: 97 files changed (+19950 / -1081 lines)

3d_classification/densenet_training_array.ipynb
Lines changed: 1 addition & 1 deletion

@@ -200,7 +200,7 @@
 ],
 "source": [
 "if not os.path.isfile(images[0]):\n",
-" resource = \"https://drive.google.com/file/d/1f5odq9smadgeJmDeyEy_UOjEtE_pkKc0/view?usp=sharing\"\n",
+" resource = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/IXI-T1.tar\"\n",
 " md5 = \"34901a0593b41dd19c1a1f746eac2d58\"\n",
 "\n",
 " dataset_dir = os.path.join(root_dir, \"ixi\")\n",

3d_registration/learn2reg_nlst_paired_lung_ct.ipynb
Lines changed: 107 additions & 117 deletions (large diff not rendered by default)

3d_regression/densenet_training_array.ipynb
Lines changed: 1 addition & 1 deletion

@@ -205,7 +205,7 @@
 "outputs": [],
 "source": [
 "if not os.path.isfile(images[0]):\n",
-" resource = \"https://drive.google.com/file/d/1f5odq9smadgeJmDeyEy_UOjEtE_pkKc0/view?usp=sharing\"\n",
+" resource = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/IXI-T1.tar\"\n",
 " md5 = \"34901a0593b41dd19c1a1f746eac2d58\"\n",
 "\n",
 " dataset_dir = os.path.join(root_dir, \"ixi\")\n",

3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb
Lines changed: 2 additions & 2 deletions

@@ -45,7 +45,7 @@
 "\n",
 "https://www.synapse.org/#!Synapse:syn27046444/wiki/616992\n",
 "\n",
-"The JSON file containing training and validation sets (internal split) needs to be downloaded from this [link](https://drive.google.com/file/d/1i-BXYe-wZ8R9Vp3GXoajGyqaJ65Jybg1/view?usp=sharing) and placed in the same folder as the dataset. As discussed in the following, this tutorial uses fold 1 for training a Swin UNETR model on the BraTS 21 challenge.\n",
+"The JSON file containing training and validation sets (internal split) needs to be downloaded from this [link](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/brats21_folds.json) and placed in the same folder as the dataset. As discussed in the following, this tutorial uses fold 1 for training a Swin UNETR model on the BraTS 21 challenge.\n",
 "\n",
 "### Tumor Characteristics\n",
 "\n",
@@ -114,7 +114,7 @@
 " \"TrainingData/BraTS2021_01146/BraTS2021_01146_flair.nii.gz\"\n",
 " \n",
 "\n",
-"- Download the json file from this [link](https://drive.google.com/file/d/1i-BXYe-wZ8R9Vp3GXoajGyqaJ65Jybg1/view?usp=sharing) and placed in the same folder as the dataset.\n"
+"- Download the json file from this [link](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/brats21_folds.json) and placed in the same folder as the dataset.\n"
 ]
 },
 {
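
The relocated folds file can also be fetched programmatically; a minimal sketch, assuming monai.apps.download_url and an illustrative destination path next to the dataset:

    from monai.apps import download_url

    url = "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/brats21_folds.json"
    # illustrative destination: the same folder as the BraTS 21 training data
    download_url(url, filepath="./TrainingData/brats21_folds.json")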

3d_segmentation/swin_unetr_btcv_segmentation_3d.ipynb
Lines changed: 3 additions & 5 deletions

@@ -33,7 +33,7 @@
 "\n",
 "For this tutorial, the dataset needs to be downloaded from: https://www.synapse.org/#!Synapse:syn3193805/wiki/217752. More details are provided in the \"Download dataset\" section below.\n",
 "\n",
-"In addition, the json file for data splits needs to be downloaded from this [link](https://drive.google.com/file/d/1qcGh41p-rI3H_sQ0JwOAhNiQSXriQqGi/view?usp=sharing). Once downloaded, place the json file in the same folder as the dataset. \n",
+"In addition, the json file for data splits needs to be downloaded from this [link](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/swin_unetr_btcv_dataset_0.json). Once downloaded, place the json file in the same folder as the dataset. \n",
 "\n",
 "For BTCV dataset, under Institutional Review Board (IRB) supervision, 50 abdomen CT scans of were randomly selected from a combination of an ongoing colorectal cancer chemotherapy trial, and a retrospective ventral hernia study. The 50 scans were captured during portal venous contrast phase with variable volume sizes (512 x 512 x 85 - 512 x 512 x 198) and field of views (approx. 280 x 280 x 280 mm3 - 500 x 500 x 650 mm3). The in-plane resolution varies from 0.54 x 0.54 mm2 to 0.98 x 0.98 mm2, while the slice thickness ranges from 2.5 mm to 5.0 mm. \n",
 "\n",
@@ -98,8 +98,6 @@
 "\n",
 "We use weights from self-supervised pre-training of Swin UNETR encoder (3D Swin Tranformer) on a cohort of 5050 CT scans from publicly available datasets. The encoder is pre-trained using reconstructin, rotation prediction and contrastive learning pre-text tasks as shown below. For more details, please refer to [1] (CVPR paper) and see this [repository](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/Pretrain). \n",
 "\n",
-"![image](https://lh3.googleusercontent.com/pw/AM-JKLVLgduGZ9naCSasWg09U665NBdd3UD4eLTy15wJiwbmKLS_p5WSZ2MBcRePEJO2tv9X3TkC52MsbnomuPy5JT3vSVeCji1MOEuAzcsxily88TdbHuAt6PzccefwKupbXyOCumK5hzz5Ul38kZnlEQ84=w397-h410-no?authuser=2)\n",
-"\n",
 "Please download the pre-trained weights from this [link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/model_swinvit.pt) and place it in the root directory of this tutorial. \n",
 "\n",
 "If training from scratch is desired, please skip the step for initializing from pre-trained weights. "
@@ -321,7 +319,7 @@
 "\n",
 "3. Make a JSON file to define train/val split and other relevant parameters. Place the JSON file at `./data/dataset_0.json`.\n",
 "\n",
-" You can download an example of the JSON file [here](https://drive.google.com/file/d/1qcGh41p-rI3H_sQ0JwOAhNiQSXriQqGi/view?usp=sharing), or, equivalently, use the following `wget` command. If you would like to use this directly, please move it into the `./data` folder."
+" You can download an example of the JSON file [here](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/swin_unetr_btcv_dataset_0.json), or, equivalently, use the following `wget` command. If you would like to use this directly, please move it into the `./data` folder."
 ]
 },
 {
@@ -331,7 +329,7 @@
 "outputs": [],
 "source": [
 "# uncomment this command to download the JSON file directly\n",
-"# wget -O data/dataset_0.json 'https://drive.google.com/uc?export=download&id=1qcGh41p-rI3H_sQ0JwOAhNiQSXriQqGi'"
+"# wget -O data/dataset_0.json 'https://developer.download.nvidia.com/assets/Clara/monai/tutorials/swin_unetr_btcv_dataset_0.json'"
 ]
 },
 {
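
Once the split file is saved as ./data/dataset_0.json, it is typically read back with MONAI's Decathlon-style datalist loader; a minimal sketch, assuming monai.data.load_decathlon_datalist and an illustrative ./data layout:

    from monai.data import load_decathlon_datalist

    split_json = "./data/dataset_0.json"
    # resolves relative image/label paths in the JSON against base_dir
    train_files = load_decathlon_datalist(split_json, True, "training", base_dir="./data")
    val_files = load_decathlon_datalist(split_json, True, "validation", base_dir="./data")
    print(len(train_files), "training /", len(val_files), "validation cases")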

3d_segmentation/unetr_btcv_segmentation_3d.ipynb
Lines changed: 2 additions & 3 deletions

@@ -19,7 +19,6 @@
 "\n",
 "\n",
 "This tutorial demonstrates how to construct a training workflow of UNETR [1] on multi-organ segmentation task using the BTCV challenge dataset.\n",
-"![image](https://lh3.googleusercontent.com/pw/AM-JKLU2eTW17rYtCmiZP3WWC-U1HCPOHwLe6pxOfJXwv2W-00aHfsNy7jeGV1dwUq0PXFOtkqasQ2Vyhcu6xkKsPzy3wx7O6yGOTJ7ZzA01S6LSh8szbjNLfpbuGgMe6ClpiS61KGvqu71xXFnNcyvJNFjN=w1448-h496-no?authuser=0)\n",
 "\n",
 "And it contains the following features:\n",
 "1. Transforms for dictionary format data.\n",
@@ -51,7 +50,7 @@
 "3. [Efficient multi-atlas abdominal segmentation on clinically acquired CT with SIMPLE context learning (MIA)](https://www.sciencedirect.com/science/article/abs/pii/S1361841515000766?via%3Dihub)\n",
 "\n",
 "\n",
-"![image](https://lh3.googleusercontent.com/pw/AM-JKLX0svvlMdcrchGAgiWWNkg40lgXYjSHsAAuRc5Frakmz2pWzSzf87JQCRgYpqFR0qAjJWPzMQLc_mmvzNjfF9QWl_1OHZ8j4c9qrbR6zQaDJWaCLArRFh0uPvk97qAa11HtYbD6HpJ-wwTCUsaPcYvM=w1724-h522-no?authuser=0)\n",
+"![image](../figures/BTCV_organs.png)\n",
 "\n",
 "\n",
 "\n",
@@ -586,7 +585,7 @@
 " hidden_size=768,\n",
 " mlp_dim=3072,\n",
 " num_heads=12,\n",
-" pos_embed=\"perceptron\",\n",
+" proj_type=\"perceptron\",\n",
 " norm_name=\"instance\",\n",
 " res_block=True,\n",
 " dropout_rate=0.0,\n",

3d_segmentation/unetr_btcv_segmentation_3d_lightning.ipynb
Lines changed: 1 addition & 1 deletion

@@ -423,7 +423,7 @@
 " hidden_size=768,\n",
 " mlp_dim=3072,\n",
 " num_heads=12,\n",
-" pos_embed=\"perceptron\",\n",
+" proj_type=\"perceptron\",\n",
 " norm_name=\"instance\",\n",
 " res_block=True,\n",
 " conv_block=True,\n",

README.md
Lines changed: 23 additions & 0 deletions

@@ -113,6 +113,8 @@ This folder provides a simple baseline method for training, validation, and infe
 This notebook demonstrates how to construct a training workflow of UNETR on multi-organ segmentation task using the BTCV challenge dataset.
 ##### [unetr_btcv_segmentation_3d_lightning](./3d_segmentation/unetr_btcv_segmentation_3d_lightning.ipynb)
 This tutorial demonstrates how MONAI can be used in conjunction with [PyTorch Lightning](https://www.pytorchlightning.ai/) framework to construct a training workflow of UNETR on multi-organ segmentation task using the BTCV challenge dataset.
+##### [vista3d](./3d_segmentation/vista3d)
+This tutorial showcases the process of fine-tuning VISTA3D on [MSD Spleen dataset](http://medicaldecathlon.com) using MONAI. For an in-depth exploration, please visit the [VISTA](https://github.com/Project-MONAI/VISTA) repository.
 
 #### <ins>**2D registration**</ins>
 ##### [registration using mednist](./2d_registration/registration_mednist.ipynb)
@@ -313,3 +315,24 @@ This tutorial shows the use cases of training and validating a 3D Latent Diffusi
 
 ##### [2D latent diffusion model](./generative/2d_ldm)
 This tutorial shows the use cases of training and validating a 2D Latent Diffusion Model.
+
+##### [Brats 3D latent diffusion model](./3d_ldm/README.md)
+Example shows the use cases of training and validating a 3D Latent Diffusion Model on Brats 2016&2017 data, expanding on the above notebook.
+
+##### [MAISI 3D latent diffusion model](./maisi/README.md)
+Example shows the use cases of training and validating Nvidia MAISI (Medical AI for Synthetic Imaging) model, a 3D Latent Diffusion Model that can generate large CT images with paired segmentation masks, variable volume size and voxel size, as well as controllable organ/tumor size.
+
+##### [SPADE in VAE-GAN for Semantic Image Synthesis on 2D BraTS Data](./spade_gen)
+Example shows the use cases of applying SPADE, a VAE-GAN-based neural network for semantic image synthesis, to a subset of BraTS that was registered to MNI space and resampled to 2mm isotropic space, with segmentations obtained using Geodesic Information Flows (GIF).
+
+##### [Applying Latent Diffusion Models to 2D BraTS Data for Semantic Image Synthesis](./spade_ldm)
+Example shows the use cases of applying SPADE normalization to a latent diffusion model, following the methodology by Wang et al., for semantic image synthesis on a subset of BraTS registered to MNI space and resampled to 2mm isotropic space, with segmentations obtained using Geodesic Information Flows (GIF).
+
+##### [Diffusion Models for Implicit Image Segmentation Ensembles](./image_to_image_translation)
+Example shows the use cases of how to use MONAI for 2D segmentation of images using DDPMs. The same structure can also be used for conditional image generation, or image-to-image translation.
+
+##### [Evaluate Realism and Diversity of the generated images](./realism_diversity_metrics)
+Example shows the use cases of using MONAI to evaluate the performance of a generative model by computing metrics such as Frechet Inception Distance (FID) and Maximum Mean Discrepancy (MMD) for assessing realism, as well as MS-SSIM and SSIM for evaluating image diversity.
+
+#### [VISTA2D](./vista_2d)
+This tutorial demonstrates how to train a cell segmentation model using the [MONAI](https://monai.io/) framework and the [Segment Anything Model (SAM)](https://github.com/facebookresearch/segment-anything) on the [Cellpose dataset](https://www.cellpose.org/).
active_learning/liver_tumor_al/active_learning.py
Lines changed: 10 additions & 10 deletions

@@ -54,7 +54,7 @@
 parser = argparse.ArgumentParser(description="Active Learning Setting")
 
 # Directory & Json & Seed
-parser.add_argument("--base_dir", default="/home/vishwesh/experiments/al_sanity_test_apr27_2023", type=str)
+parser.add_argument("--base_dir", default="./experiments/al_sanity_test_apr27_2023", type=str)
 parser.add_argument("--data_root", default="/scratch_2/data_2021/68111", type=str)
 parser.add_argument("--json_path", default="/scratch_2/data_2021/68111/dataset_val_test_0_debug.json", type=str)
 parser.add_argument("--seed", default=102, type=int)
@@ -155,7 +155,7 @@ def main():
     # Model Definition
     device = torch.device("cuda:0")
     network = UNet(
-        dimensions=3,
+        spatial_dims=3,
         in_channels=1,
         out_channels=3,
         channels=(16, 32, 64, 128, 256),
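
The dimensions -> spatial_dims substitution follows the UNet argument rename in MONAI; a minimal sketch of the updated call, in which strides and num_res_units are assumed because the hunk does not show them:

    from monai.networks.nets import UNet

    network = UNet(
        spatial_dims=3,  # formerly dimensions=3
        in_channels=1,
        out_channels=3,
        channels=(16, 32, 64, 128, 256),
        strides=(2, 2, 2, 2),  # assumed; not visible in this hunk
        num_res_units=2,  # assumed; not visible in this hunk
    )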
@@ -187,7 +187,7 @@
             b_max=1.0,
             clip=True,
         ),
-        CropForegroundd(keys=["image", "label"], source_key="image"),
+        CropForegroundd(keys=["image", "label"], source_key="image", allow_smaller=True),
         SpatialPadd(keys=["image", "label"], spatial_size=(96, 96, 96)),
         RandCropByPosNegLabeld(
             keys=["image", "label"],
@@ -225,7 +225,7 @@
             b_max=1.0,
             clip=True,
         ),
-        CropForegroundd(keys=["image", "label"], source_key="image"),
+        CropForegroundd(keys=["image", "label"], source_key="image", allow_smaller=True),
         EnsureTyped(keys=["image", "label"]),
     ]
 )
@@ -240,7 +240,7 @@
             mode=("bilinear"),
         ),
         ScaleIntensityRanged(keys="image", a_min=-21, a_max=189, b_min=0.0, b_max=1.0, clip=True),
-        CropForegroundd(keys=("image"), source_key="image"),
+        CropForegroundd(keys=("image"), source_key="image", allow_smaller=True),
         EnsureTyped(keys=["image"]),
     ]
 )
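
Each CropForegroundd call gains allow_smaller=True, which appears to pin the cropping behaviour explicitly rather than rely on a default that newer MONAI versions warn about; a minimal sketch on an illustrative array:

    import numpy as np

    from monai.transforms import CropForegroundd

    # an 8x8x8 volume whose central 4x4x4 block is the only foreground
    image = np.pad(np.ones((1, 4, 4, 4)), ((0, 0), (2, 2), (2, 2), (2, 2)))
    crop = CropForegroundd(keys=["image"], source_key="image", allow_smaller=True)
    print(crop({"image": image})["image"].shape)  # (1, 4, 4, 4)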
@@ -315,7 +315,7 @@
     unl_loader = DataLoader(unl_ds, batch_size=1)
 
     # Calculation of Epochs based on steps
-    max_epochs = np.int(args.steps / (np.ceil(len(train_d) / args.batch_size)))
+    max_epochs = int(args.steps / (np.ceil(len(train_d) / args.batch_size)))
     print("Epochs Estimated are {} for Active Iter {} with {} Vols".format(max_epochs, active_iter, len(train_d)))
 
     # Model Training begins for one active iteration
@@ -393,7 +393,7 @@
     prev_best_ckpt = os.path.join(active_model_dir, "model.pt")
 
     device = torch.device("cuda:0")
-    ckpt = torch.load(prev_best_ckpt)
+    ckpt = torch.load(prev_best_ckpt, weights_only=True)
     network.load_state_dict(ckpt)
     network.to(device=device)
 
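weights_only=True restricts torch.load to deserializing tensors and plain containers instead of arbitrary pickled objects (available since PyTorch 1.13); a minimal round-trip sketch with an illustrative path:

    import torch

    torch.save({"weight": torch.ones(3)}, "model.pt")  # illustrative checkpoint
    state = torch.load("model.pt", weights_only=True)  # rejects arbitrary pickles
    print(state["weight"])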
@@ -487,16 +487,16 @@
 
     variance_dims = np.shape(variance)
     score_list.append(np.nanmean(variance))
-    name_list.append(unl_data["image_meta_dict"]["filename_or_obj"][0])
+    name_list.append(unl_data["image"].meta["filename_or_obj"][0])
     print(
         "Variance for image: {} is: {}".format(
-            unl_data["image_meta_dict"]["filename_or_obj"][0], np.nanmean(variance)
+            unl_data["image"].meta["filename_or_obj"][0], np.nanmean(variance)
         )
     )
 
     # Plot with matplotlib and save all slices
     plt.figure(1)
-    plt.imshow(np.squeeze(variance[:, :, np.int(variance_dims[2] / 2)]))
+    plt.imshow(np.squeeze(variance[:, :, int(variance_dims[2] / 2)]))
     plt.colorbar()
     plt.title("Dropout Uncertainty")
     fig_path = os.path.join(fig_base_dir, "active_{}_file_{}.png".format(active_iter, counter))
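
The remaining substitutions track two deprecations: MONAI dict transforms now attach metadata to the MetaTensor itself rather than to a separate image_meta_dict entry, and NumPy 1.24 removed the np.int alias. A minimal sketch with illustrative data:

    import torch

    from monai.data import MetaTensor

    image = MetaTensor(torch.zeros(1, 8, 8), meta={"filename_or_obj": "case_000.nii.gz"})
    print(image.meta["filename_or_obj"])  # replaces unl_data["image_meta_dict"][...]

    variance_dims = (64, 64, 32)  # illustrative shape
    mid_slice = int(variance_dims[2] / 2)  # builtin int replaces the removed np.int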

active_learning/liver_tumor_al/results_uncertainty_analysis.ipynb
Lines changed: 10 additions & 20 deletions (large diff not rendered by default)
