
Commit 4a30e1b

Merge branch 'Project-MONAI:main' into maisi

2 parents: febeae7 + d6da454

27 files changed: +1565 −155 lines

.github/workflows/test-modified.yml

Lines changed: 23 additions & 22 deletions
@@ -9,39 +9,40 @@ on:

 concurrency:
   # automatically cancel the previously triggered workflows when there's a newer version
-  group: build-gpu-${{ github.event.pull_request.number || github.ref }}
+  group: build-${{ github.event.pull_request.number || github.ref }}
   cancel-in-progress: true

 jobs:
   build:
-    if: github.repository == 'Project-MONAI/tutorials'
-    container:
-      image: nvcr.io/nvidia/pytorch:24.02-py3
-      options: --gpus all --ipc host
-    runs-on: [self-hosted, linux, x64]
+    runs-on: ubuntu-latest
     steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python 3.10
+        uses: actions/setup-python@v3
+        with:
+          python-version: '3.10'
       - name: Install MONAI
         id: monai-install
         run: |
+          find /opt/hostedtoolcache/* -maxdepth 0 ! -name 'Python' -exec rm -rf {} \;
          which python
-          nvidia-smi
-          rm -rf ../../MONAI/MONAI
-          python -m pip install --upgrade pip wheel
-          pip uninstall -y monai
-          pip uninstall -y monai
-          pip uninstall -y monai-weekly
-          pip uninstall -y monai-weekly # make sure there's no existing installation
-          BUILD_MONAI=0 python -m pip install git+https://github.com/Project-MONAI/MONAI#egg=MONAI
+          python -m pip install -U pip wheel
+          python -m pip install torch torchvision torchaudio
+
          python -m pip install -r https://raw.githubusercontent.com/Project-MONAI/MONAI/dev/requirements-dev.txt
-          python -m pip install -U torch torchvision torchaudio
-      - uses: actions/checkout@v3
+          python -m pip install -r requirements.txt
+
+          BUILD_MONAI=0 python -m pip install git+https://github.com/Project-MONAI/MONAI#egg=MONAI
+          python -m pip list
       - name: Notebook quick check
         shell: bash
         run: |
-          git config --global --add safe.directory /__w/tutorials/tutorials
-          git fetch origin main
-          python -m pip install -r requirements.txt; python -m pip list
          python -c "import monai; monai.config.print_debug_info()"
-          export CUDA_VISIBLE_DEVICES=0
-          git diff --name-only origin/main | while read line; do if [[ $line == *.ipynb ]]; then ./runner.sh -p " -and -wholename './${line}'"; fi; done;
-          # [[ $line == *.ipynb ]] && ./runner.sh --file "$line"
+          git fetch origin main
+          git diff --name-only origin/main | while read line
+          do
+            if [[ $line == *.ipynb ]]
+            then
+              ./runner.sh -p " -and -wholename './${line}'"
+            fi
+          done

.pre-commit-config.yaml

Lines changed: 4 additions & 4 deletions
@@ -1,5 +1,5 @@
 default_language_version:
-  python: python3.9
+  python: python3

 ci:
   autofix_prs: true
@@ -9,7 +9,7 @@ ci:

 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v5.0.0
+    rev: v6.0.0
     hooks:
       - id: end-of-file-fixer
       - id: trailing-whitespace
@@ -22,8 +22,8 @@ repos:
        args: ['--maxkb=1024']
      - id: detect-private-key

-  - repo: https://github.com/psf/black
-    rev: "25.1.0"
+  - repo: https://github.com/psf/black-pre-commit-mirror
+    rev: "25.9.0"
     hooks:
       - id: black
       - id: black-jupyter

2d_classification/monai_101.ipynb

Lines changed: 5 additions & 12 deletions
@@ -141,15 +141,7 @@
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "/workspace/data\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
    "directory = os.environ.get(\"MONAI_DATA_DIRECTORY\")\n",
    "if directory is not None:\n",
@@ -250,11 +242,12 @@
    "outputs": [],
    "source": [
    "max_epochs = 5\n",
-   "model = densenet121(spatial_dims=2, in_channels=1, out_channels=6).to(\"cuda:0\")\n",
+   "device = torch.device(\"cuda:0\" if torch.cuda.device_count() > 0 else \"cpu\")\n",
+   "model = densenet121(spatial_dims=2, in_channels=1, out_channels=6).to(device)\n",
    "\n",
    "logging.basicConfig(stream=sys.stdout, level=logging.INFO)\n",
    "trainer = SupervisedTrainer(\n",
-   "    device=torch.device(\"cuda:0\"),\n",
+   "    device=device,\n",
    "    max_epochs=max_epochs,\n",
    "    train_data_loader=DataLoader(dataset, batch_size=512, shuffle=True, num_workers=4),\n",
    "    network=model,\n",
@@ -320,7 +313,7 @@
    "max_items_to_print = 10\n",
    "with eval_mode(model):\n",
    "    for item in DataLoader(testdata, batch_size=1, num_workers=0):\n",
-   "        prob = np.array(model(item[\"image\"].to(\"cuda:0\")).detach().to(\"cpu\"))[0]\n",
+   "        prob = np.array(model(item[\"image\"].to(device)).detach().to(\"cpu\"))[0]\n",
    "        pred = class_names[prob.argmax()]\n",
    "        gt = item[\"class_name\"][0]\n",
    "        print(f\"Class prediction is {pred}. Ground-truth: {gt}\")\n",

3d_segmentation/spleen_segmentation_3d_visualization_basic.ipynb

Lines changed: 3 additions & 3 deletions
@@ -484,7 +484,7 @@
    "# standard PyTorch program style: create UNet, DiceLoss and Adam optimizer\n",
    "device = torch.device(\"cuda:0\")\n",
    "\n",
-   "UNet_meatdata = {\n",
+   "UNet_metadata = {\n",
    "    \"spatial_dims\": 3,\n",
    "    \"in_channels\": 1,\n",
    "    \"out_channels\": 2,\n",
@@ -494,7 +494,7 @@
    "    \"norm\": Norm.BATCH,\n",
    "}\n",
    "\n",
-   "model = UNet(**UNet_meatdata).to(device)\n",
+   "model = UNet(**UNet_metadata).to(device)\n",
    "loss_function = DiceLoss(to_onehot_y=True, softmax=True)\n",
    "loss_type = \"DiceLoss\"\n",
    "optimizer = torch.optim.Adam(model.parameters(), 1e-4)\n",
@@ -539,7 +539,7 @@
    "# initialize a new Aim Run\n",
    "aim_run = aim.Run()\n",
    "# log model metadata\n",
-   "aim_run[\"UNet_meatdata\"] = UNet_meatdata\n",
+   "aim_run[\"UNet_metadata\"] = UNet_metadata\n",
    "# log optimizer metadata\n",
    "aim_run[\"Optimizer_metadata\"] = Optimizer_metadata\n",
    "\n",

README.md

Lines changed: 45 additions & 0 deletions
@@ -41,6 +41,50 @@ Running:

 in a cell will verify this has worked and show you what kind of hardware you have access to.

+#### Google Colab Setup (CUDA 12.x, PyTorch 2.6, MONAI 1.5)
+
+In Google Colab, the default environment may cause version conflicts with MONAI.
+To ensure compatibility, install PyTorch and MONAI explicitly as follows:
+
+    # Install PyTorch 2.6.0 with CUDA 12.4
+    pip install --index-url https://download.pytorch.org/whl/cu124 \
+        torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0
+
+    # Install MONAI and common dependencies
+    pip install "monai[all]" nibabel pydicom ipywidgets==8.1.2
+
+
+#### Known issues and fixes
+
+- Torchaudio mismatch
+  Colab may come with torchaudio 2.8.0, which is incompatible with torch 2.6.0.
+  Installing the versions above resolves this issue.
+
+- filelock conflicts with nni
+  Some preinstalled packages (such as pytensor with a newer filelock) may conflict.
+  Use the following commands to fix this:
+
+      pip uninstall -y pytensor
+      pip install -U filelock
+
+- Too many workers warning
+  Colab has limited CPU resources, and high num_workers settings may freeze execution.
+  It is recommended to use --num_workers=2 when running tutorials and to adjust the `num_workers` parameter where it is used in notebooks (e.g. for data loaders).
+
+
+#### Quick smoke test
+
+After installation, verify the environment by running:
+
+    git clone https://github.com/Project-MONAI/tutorials.git
+    cd tutorials/3d_segmentation/torch
+    python -u unet_training_array.py --max_epochs 2 --batch_size 1 --num_workers 2
+
+If the logs show decreasing training loss and a Dice score, the setup is correct.
+
+**Note:** In most cases, users can run MONAI tutorials directly in Colab notebooks without additional installation.
+The steps above are mainly for resolving dependency conflicts when installing extra packages.
+
 #### Data

 Some notebooks will require additional data.
@@ -342,3 +386,4 @@ Example shows the use cases of using MONAI to evaluate the performance of a generative model

 #### [VISTA2D](./vista_2d)
 This tutorial demonstrates how to train a cell segmentation model using the [MONAI](https://monai.io/) framework and the [Segment Anything Model (SAM)](https://github.com/facebookresearch/segment-anything) on the [Cellpose dataset](https://www.cellpose.org/).
+ECHO가 설정되어 있습니다.
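
As a quick way to confirm a Colab environment set up per the README addition above, a minimal sketch could look like the block below. It is illustrative only and not part of this commit; the dummy dataset and the `num_workers=2` choice are assumptions taken from the README note.

```python
# Illustrative environment check for the Colab setup described above (not part of this commit).
import torch
import monai
from monai.data import Dataset, DataLoader

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("monai:", monai.__version__)
monai.config.print_debug_info()  # same check the CI workflow above runs

# Colab has few CPU cores, so keep num_workers low, as the README note recommends.
dummy = Dataset(data=[{"image": float(i)} for i in range(8)])
loader = DataLoader(dummy, batch_size=2, num_workers=2)
print("number of batches:", len(list(loader)))
```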

auto3dseg/README.md

Lines changed: 32 additions & 1 deletion
@@ -56,13 +56,44 @@ We provide [a two-minute example](notebooks/auto3dseg_hello_world.ipynb) for use

 To further demonstrate the capabilities of **Auto3DSeg**, [here](./tasks/instance22/README.md) is the detailed performance of the algorithm in **Auto3DSeg**, which won 2nd place in the MICCAI 2022 challenge **[INSTANCE22: The 2022 Intracranial Hemorrhage Segmentation Challenge on Non-Contrast Head CT (NCCT)](https://instance.grand-challenge.org/)**

+## Running With Your Own Data
+
+To run Auto3DSeg on your own dataset, you need to build a `datalist.json` file and pass it to the AutoRunner.
+
+The datalist format is based on the datasets released by the [Medical Segmentation Decathlon](http://medicaldecathlon.com).
+See the function `load_decathlon_datalist` in `monai/data/decathlon_datalist.py` for a description of the format.
+
+For the AutoRunner, only the `training` list in the JSON is needed; it does not use any other fields.
+The `fold` key for each image is not required, as the AutoRunner will automatically create cross-validation folds (the number of folds is hard-coded to 5).
+If you do add the cross-validation folds beforehand, the AutoRunner will use these by default.
+You can also choose to include a `validation` list in the JSON file, in which case the AutoRunner will disable cross-validation and use the specified validation set.
+Any other metadata, such as `modality`, `numTraining`, `name`, etc., will not be used by the AutoRunner, but we do recommend using metadata fields to keep track of names and versions of your dataset. If you are using multi-modal scans, it is possible to enter lists of image paths for both the `image` and `label` keys; MONAI will stack them into channels.
+In short, your `datalist.json` file should look like this:
+
+```
+{
+    "name": "Example datalist.json",
+    "training":
+    [
+        {"image": "/path/to/image_1.nii.gz", "label": "/path/to/label_1.nii.gz"},
+        {"image": "/path/to/image_2.nii.gz", "label": "/path/to/label_2.nii.gz"},
+        ...
+    ]
+}
+
+```
+
+The AutoRunner will create a `work_dir` folder in the directory from which it is run, which will contain the resulting models and the copied datalist file _with_ cross-validation folds. This allows you to keep track of which datalist file the models are trained on.
+
+See the description below or the file [run_with_minimal_input.md](docs/run_with_minimal_input.md) to use your datalist with the AutoRunner.
+
 ## Reference Python APIs for Auto3DSeg

 **Auto3DSeg** offers users different levels of APIs to run pipelines that suit their needs.

 ### 1. Run with Minimal Input using ```AutoRunner```

-The user needs to provide a data list (".json" file) for the new task and data root. A typical data list is as this [example](tasks/msd/Task05_Prostate/msd_task05_prostate_folds.json). A sample datalist for an existing MSD formatted dataset can be created using [this notebook](notebooks/msd_datalist_generator.ipynb). After creating the data list, the user can create a simple "task.yaml" file (shown below) as the minimum input for **Auto3DSeg**.
+The user needs to provide a data list (".json" file) for the new task and data root. A typical data list is shown in this [example](tasks/msd/Task05_Prostate/msd_task05_prostate_folds.json). [This notebook](notebooks/msd_crossval_datalist_generator.ipynb) features an example of creating a datalist with cross-validation folds from an existing MSD dataset. After creating the data list, the user can create a simple "task.yaml" file (shown below) as the minimum input for **Auto3DSeg**.

 ```
 modality: CT
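
For context on how such a datalist is consumed, below is a minimal sketch of the AutoRunner call that the added README text describes. It is illustrative only; the paths and the `work_dir` value are placeholders and not part of this commit.

```python
# Illustrative sketch: feed a custom datalist to Auto3DSeg (paths are placeholders).
from monai.apps.auto3dseg import AutoRunner

runner = AutoRunner(
    work_dir="./work_dir",  # models and the copied datalist (with folds) end up here
    input={
        "modality": "CT",
        "datalist": "./datalist.json",
        "dataroot": "/path/to/your/data",
    },
)
runner.run()
```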

auto3dseg/docs/run_with_minimal_input.md

Lines changed: 24 additions & 40 deletions
@@ -18,55 +18,39 @@ if os.path.exists(root):
     download_and_extract(resource, compressed_file, root)
 ```

-**Step 1.** Provide the following data list (a ".json" file) for a new task and the data root. The typical data list is shown as follows.
+**Step 1.** Provide a `datalist.json` file.
+See the documentation under the `load_decathlon_datalist` function in `monai.data.decathlon_datalist` for details on the file format.

+For the AutoRunner, you only need the `training` field with its list of training files:
 ```
 {
-    "training": [
-        {
-            "fold": 0,
-            "image": "image_001.nii.gz",
-            "label": "label_001.nii.gz"
-        },
-        {
-            "fold": 0,
-            "image": "image_002.nii.gz",
-            "label": "label_002.nii.gz"
-        },
-        {
-            "fold": 1,
-            "image": "image_003.nii.gz",
-            "label": "label_001.nii.gz"
-        },
-        {
-            "fold": 2,
-            "image": "image_004.nii.gz",
-            "label": "label_002.nii.gz"
-        },
-        {
-            "fold": 3,
-            "image": "image_005.nii.gz",
-            "label": "label_003.nii.gz"
-        },
-        {
-            "fold": 4,
-            "image": "image_006.nii.gz",
-            "label": "label_004.nii.gz"
-        }
-    ],
-    "testing": [
-        {
-            "image": "image_010.nii.gz"
-        }
-    ]
+    "training":
+    [
+        {"image": "/path/to/image_1.nii.gz", "label": "/path/to/label_1.nii.gz"},
+        {"image": "/path/to/image_2.nii.gz", "label": "/path/to/label_2.nii.gz"},
+        ...
+    ],
+    "testing":
+    [
+        "/path/to/test_image_1.nii.gz",
+        "/path/to/test_image_2.nii.gz",
+        ...
+    ]
 }
+
 ```
+In each training item, you can add a `fold` field (an integer starting at 0) to pre-specify the cross-validation folds; otherwise the AutoRunner will generate its own folds (always 5). All trained algorithms will use the same generated or pre-specified folds; the file can be found in the `work_dir` folder that the AutoRunner generates.
+If you have a validation set, you can include it under a `validation` key with the same format as the `training` list. This will disable cross-validation.
+A "testing" list can also be added, which only requires the image files, not the labels. If it is included, the AutoRunner will output predictions on the testing set after training.
+It is recommended to add a `name` field and any other metadata fields that allow you to track which version of your dataset the models are trained on.
+
+Save the file to `./datalist.json`.

 **Step 2.** Prepare "task.yaml" with the necessary information as follows.

 ```
-modality: CT
-datalist: "./task.json"
+modality: CT # or MRI
+datalist: "./datalist.json"
 dataroot: "/workspace/data/task"
 ```
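
A `datalist.json` in the format above can be assembled with a short script. The sketch below is illustrative only and assumes paired files under `imagesTr`/`imagesTs`/`labelsTr` folders with matching names, which may not match your layout.

```python
# Illustrative sketch: write ./datalist.json from paired image/label folders.
# Assumes matching file names under imagesTr/ and labelsTr/; adjust to your data layout.
import json
from pathlib import Path

dataroot = Path("/workspace/data/task")
training = [
    {"image": str(img), "label": str(dataroot / "labelsTr" / img.name)}
    for img in sorted((dataroot / "imagesTr").glob("*.nii.gz"))
]
testing = [str(p) for p in sorted((dataroot / "imagesTs").glob("*.nii.gz"))]

with open("datalist.json", "w") as f:
    json.dump({"name": "my_task", "training": training, "testing": testing}, f, indent=4)
```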

auto3dseg/notebooks/auto_runner.ipynb

Lines changed: 2 additions & 6 deletions
@@ -273,13 +273,9 @@
    "\n",
    "`set_training_params` in `AutoRunner` provides an interface to change all algorithms' training parameters in one line. \n",
    "\n",
-   "NOTE: \n",
-   "**Auto3DSeg** uses MONAI bundle templates to perform training, validation, and inference.\n",
-   "The number of epochs/iterations of training is specified by the config files in each template.\n",
-   "Users can override these these values in the bundle templates.\n",
-   "But users should consider that some bundle templates may use `num_iterations` and other may use `num_epochs` to iterate.\n",
+   "As an example, see the code block below, which specifies e.g. the number of epochs used for training. Note that some algorithms may treat this as a maximum number of epochs.\n",
    "\n",
-   "For demo purposes, below is a code block to convert num_epoch to iteration style and override all algorithms with the same training parameters.\n",
+   "NOTE: \n",
    "The setup works fine for a machine that has GPUs less than or equal to 8.\n",
    "The datalist in this example is only using a subset of the original dataset.\n",
    "Users need to ensure the number of GPUs is not greater than the number that the training dataset can be partitioned.\n",

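The markdown cell above refers to a code block that overrides the training length via `set_training_params`; a rough sketch of such a call is shown below. It is illustrative only: the `num_epochs` key and the `task.yaml` path are assumptions, and the keys accepted may vary between algorithm templates.

```python
# Illustrative sketch: shorten training for a quick demo run.
# The "num_epochs" key is an assumption; accepted keys may differ per algorithm template.
from monai.apps.auto3dseg import AutoRunner

runner = AutoRunner(input="./task.yaml")
runner.set_training_params({"num_epochs": 2})  # some algorithms treat this as a maximum
runner.run()
```
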
auto3dseg/notebooks/msd_datalist_generator.ipynb renamed to auto3dseg/notebooks/msd_crossval_datalist_generator.ipynb

Lines changed: 9 additions & 1 deletion
@@ -19,7 +19,15 @@
    "See the License for the specific language governing permissions and \n",
    "limitations under the License. \n",
    "\n",
-   "# Datalist Generator"
+   "# Datalist Cross-Validation Folds Generator"
+  ]
+ },
+ {
+  "cell_type": "markdown",
+  "metadata": {},
+  "source": [
+   "This notebook contains an example to add cross-validation folds to an existing Medical Segmentation Decathlon datalist, in this case the one of Task09_Spleen. \n",
+   "When running repeated experiments, it can be beneficial to create cross-validation folds beforehand."
   ]
  },
 {
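
The fold-assignment step that the renamed notebook demonstrates can be outlined roughly as below. This is an illustrative sketch, not the notebook's code; the `datalist.json` file name and the 5-fold split follow the surrounding documentation.

```python
# Illustrative sketch: assign a "fold" index to every training entry of a Decathlon-style datalist.
import json

from monai.data import partition_dataset

with open("datalist.json") as f:
    datalist = json.load(f)

# Split the training list into 5 random partitions and record the fold index on each item.
partitions = partition_dataset(datalist["training"], num_partitions=5, shuffle=True, seed=0)
for fold, items in enumerate(partitions):
    for item in items:
        item["fold"] = fold  # items are references into datalist["training"], so this updates it

with open("datalist_with_folds.json", "w") as f:
    json.dump(datalist, f, indent=4)
```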
