
PSAT Logo

Pediatric Segmentation Approaches via Adult Augmentations and Transfer Learning

Table of Contents

  • Overview
  • Features
  • Citation
  • Checkpoints & Pretrained Models
  • Quickstart
  • Usage
  • Documentation
  • Contributing
  • License

Overview

PSAT addresses pediatric segmentation challenges by leveraging adult, pediatric, and mixed datasets, advanced augmentation strategies, and transfer learning. It is designed for researchers and practitioners working on medical image segmentation, especially in pediatric contexts.

PSAT Overview

Features

  • Flexible Training Plans: Use adult, pediatric, or mixed data ($P_a$, $P_p$, $P_m$)
  • Customizable Learning Sets: Adult-only, pediatric-only, or mixed ($S_a$, $S_p$, $S_m$)
  • Augmentation Strategies: Default ($A_d$) and contraction-based ($A_c$)
  • Transfer Learning: Direct inference ($T_o$), fine-tuning ($T_p$), continual learning ($T_m$)
  • Pretrained Models: Ready-to-use checkpoints for nnU-Net
  • Evaluation Scripts: For fast metrics computation
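
These axes form a small factorial design. The sketch below shows one way they could be enumerated; the variable names and string tags are hypothetical illustrations of the paper's notation and are not part of any PSAT or nnU-Net API.

    # Hypothetical illustration of the PSAT experiment axes listed above;
    # not part of the package API.
    from itertools import product

    training_plans = ["P_a", "P_p", "P_m"]  # plan derived from adult / pediatric / mixed data
    learning_sets  = ["S_a", "S_p", "S_m"]  # adult-only / pediatric-only / mixed training set
    augmentations  = ["A_d", "A_c"]         # default / contraction-based augmentation
    transfer_modes = ["T_o", "T_p", "T_m"]  # direct inference / fine-tuning / continual learning

    for plan, learning_set, augmentation, transfer in product(
        training_plans, learning_sets, augmentations, transfer_modes
    ):
        print(f"{plan} + {learning_set} + {augmentation} + {transfer}")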

Citation

If you use this code, please cite our paper:

@InProceedings{10.1007/978-3-032-04981-0_45,
author="Kirscher, Tristan
and Faisan, Sylvain
and Coubez, Xavier
and Barrier, Loris
and Meyer, Philippe",
title="PSAT: Pediatric Segmentation Approaches via Adult Augmentations and Transfer Learning",
booktitle="Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025",
year="2026",
publisher="Springer Nature Switzerland",
address="Cham",
pages="474--483",
isbn="978-3-032-04981-0"
}

This repository includes a CITATION.cff file for standardized citation metadata. You can also use the "Cite this repository" button on GitHub to obtain citation formats automatically.

Checkpoints & Pretrained Models

We provide two model checkpoints for nnU-Net:

  • mixed_model_continual_learning.zip
  • pure_pediatric_model.zip

Installing Pretrained Models

  1. Download Pretrained Weights:

    • Go to the GitHub Releases page.
    • Download mixed_model_continual_learning.zip and pure_pediatric_model.zip.
    • Place them in resources/checkpoints/.
  2. Install the Checkpoints Using nnU-Net:

    nnUNetv2_install_pretrained_model_from_zip resources/checkpoints/mixed_model_continual_learning.zip
    nnUNetv2_install_pretrained_model_from_zip resources/checkpoints/pure_pediatric_model.zip
  3. Run Inference: After installing a checkpoint, run inference on your images:

    nnUNetv2_predict -i <input_images_dir> -o <output_dir> -d <dataset_id> -c <configuration> -f 0 -tr <trainer_name>

    Replace <input_images_dir>, <output_dir>, <dataset_id>, <configuration> (e.g., 3d_fullres), and <trainer_name> with values matching the installed checkpoint; a hypothetical example is sketched below. See the nnU-Net documentation for details.
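
For illustration only, the same command can be wrapped in Python as sketched below. The dataset ID, configuration, and trainer name are hypothetical placeholders and must be replaced with the values belonging to the checkpoint you installed; the command assumes nnUNetv2_predict is on your PATH.

    # Hypothetical example of calling nnUNetv2_predict from Python.
    # The dataset ID (501), configuration (3d_fullres), and trainer
    # (nnUNetTrainer) are placeholders: use the values that match the
    # installed PSAT checkpoint.
    import subprocess

    subprocess.run(
        [
            "nnUNetv2_predict",
            "-i", "path/to/input_images",   # images named per nnU-Net convention (e.g. *_0000.nii.gz)
            "-o", "path/to/predictions",    # output folder for predicted segmentations
            "-d", "501",
            "-c", "3d_fullres",
            "-f", "0",
            "-tr", "nnUNetTrainer",
        ],
        check=True,
    )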

For more details, see the Resources section.

Quickstart

  1. Install dependencies:

    pip install -r requirements.txt
  2. Evaluate Metrics (Example):

    python scripts/compute_metrics.py <ground_truth_dir> <predictions_dir>

    Replace <ground_truth_dir> and <predictions_dir> with the paths to your folders of ground-truth and predicted NIfTI files; a minimal illustration of the underlying metric is sketched below.
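
As a point of reference, the core of a per-case Dice computation can be sketched with the listed dependencies. This is an independent illustration with hypothetical file names, not the implementation of scripts/compute_metrics.py.

    # Minimal per-case Dice sketch using nibabel + numpy (illustration only;
    # use scripts/compute_metrics.py for the repository's evaluation).
    import nibabel as nib
    import numpy as np

    def dice(gt_path, pred_path, label=1):
        """Dice coefficient for one label of a ground-truth / prediction pair."""
        gt = nib.load(gt_path).get_fdata() == label
        pred = nib.load(pred_path).get_fdata() == label
        denom = gt.sum() + pred.sum()
        return 2.0 * np.logical_and(gt, pred).sum() / denom if denom > 0 else 1.0

    # Hypothetical file names; any matching ground-truth / prediction pair works.
    print(dice("ground_truth/case_001.nii.gz", "predictions/case_001.nii.gz"))

The surface-distance dependency provides surface-based metrics (e.g., robust Hausdorff distance) in the same spirit.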

Usage

Dependencies

  • nibabel
  • numpy
  • pandas
  • p_tqdm
  • scipy
  • surface-distance

(See requirements.txt for the full list.)

Documentation

Running Tests

Install dependencies listed in requirements.txt and run:

pytest -q

Contributing

Contributions are welcome! Please open issues or pull requests for bug fixes, improvements, or new features.

License

This project is licensed under the MIT License. See LICENSE for details.