Official PyTorch implementation of Multi-Band Brain Network (MBBN), published in Communications Biology. [paper link will be added soon]
MBBN is a self-supervised, pre-trainable transformer for functional MRI (fMRI) that decomposes BOLD signals into three frequency bands and learns multi-scale brain dynamics through band-specific temporal–spatial modules.
Standard fMRI models treat the BOLD time series as a single signal, ignoring the rich multi-scale temporal structure of neural activity. MBBN addresses this by:
- Frequency decomposition — splitting each ROI's time series into ultralow, low, and high frequency bands using data-driven Lorentzian fitting (f₁) and spline multifractal analysis (f₂)
- Band-specific modules — each band is independently processed by a BERT-style temporal encoder followed by a multi-head spatial attention module
- Self-supervised pretraining — spatiotemporal masking of structurally central (high-communicability) hub ROIs and random time windows, trained with a mask reconstruction loss + spatial difference loss
- Interpretability — GradCAM-style spatial attention analysis reveals band-specific brain network patterns associated with each phenotype
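As a rough sketch of the decomposition step: the paper derives the two cutoff frequencies per dataset (f₁ from Lorentzian fitting, f₂ from spline multifractal analysis), but the splitting itself can be illustrated with fixed cutoffs and standard zero-phase filters. The values 0.01/0.1 Hz below are illustrative assumptions, not the repo's data-driven cutoffs:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def decompose_bands(ts, fs, f1=0.01, f2=0.1):
    """Split a 1-D BOLD time series into ultralow / low / high bands.

    f1 and f2 are fixed here for illustration; MBBN estimates them
    per dataset (Lorentzian fit and spline multifractal analysis).
    """
    nyq = fs / 2.0
    sos_ultra = butter(4, f1 / nyq, btype="lowpass", output="sos")
    sos_low = butter(4, [f1 / nyq, f2 / nyq], btype="bandpass", output="sos")
    sos_high = butter(4, f2 / nyq, btype="highpass", output="sos")
    # zero-phase filtering so band signals stay aligned with the input
    ultralow = sosfiltfilt(sos_ultra, ts)
    low = sosfiltfilt(sos_low, ts)
    high = sosfiltfilt(sos_high, ts)
    return ultralow, low, high

# Example: one ROI time series at TR = 0.735 s (UKB sampling rate)
fs = 1.0 / 0.735
ts = np.random.randn(490)
ultralow, low, high = decompose_bands(ts, fs)
```

Each band keeps the original length, so the three signals can be fed to the three band-specific modules in parallel.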
For full quantitative results, ablation studies, and interpretability analyses, please refer to the paper.
```bash
conda create -n mbbn python=3.10 -y
conda activate mbbn
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118  # pick the cuXXX tag matching your CUDA toolkit
pip install nibabel nilearn nitime timm tensorboard numpy pandas \
    wandb weightwatcher tqdm scikit-learn scikit-image \
    matplotlib transformers lmfit
```

Or use the provided script:
```bash
bash environment.sh
```

MBBN was pretrained and evaluated on the following datasets:

| Dataset | N | TR (s) | Task | Access |
|---|---|---|---|---|
| UK Biobank (UKB) | 40,699 | 0.735 | Pretraining | Application required |
| ABCD | 8,833 | 0.8 | Sex / Fluid intelligence / Depression / ADHD | NDA required |
| ABIDE I+II | 141 | varies | ASD | Public |
Supported atlases and the matching `--intermediate_vec` values:

| Atlas | ROIs | `--intermediate_vec` |
|---|---|---|
| HCP-MMP1 (asymmetric) | 360 | 360 |
| Schaefer 2018 (400 parcels, 7 networks) | 400 | 400 |
Use the scripts in `data_preprocess_and_load/`:

```bash
python data_preprocess_and_load/ROI_EXTRACT_UKB.py    # UKB
python data_preprocess_and_load/ROI_EXTRACT_ABIDE.py  # ABIDE
```

Metadata CSVs are expected under `data/metadata/`; see `data_preprocess_and_load/dataloaders.py` for the exact filenames.
Structural communicability is used to identify hub ROIs for pretraining masking. Run once per atlas:
```bash
python communicability.py \
    --base_path /path/to/MBBN \
    --intermediate_vec 360   # or 400
```

Or use the SLURM script:

```bash
sbatch scripts/main_experiments/02_pretraining/compute_communicability.slurm
```

To pretrain MBBN (example: UKB with the Schaefer atlas):

```bash
python main.py \
    --base_path /path/to/MBBN \
    --step 3 \
    --dataset_name UKB \
    --target reconstruction \
    --intermediate_vec 400 \
    --num_hub_ROIs 380 \
    --spatial_loss_factor 1.0 \
    --nEpochs 1000 \
    --exp_name pretrain_UKB_Schaefer
```

Or via SLURM:

```bash
sbatch scripts/main_experiments/02_pretraining/pretrain_MBBN.slurm
```

Key arguments:
| Argument | Description |
|---|---|
| `--intermediate_vec` | Atlas size: 360 (HCP-MMP1) or 400 (Schaefer) |
| `--num_hub_ROIs` | Number of high-communicability hub ROIs to mask |
| `--spatial_loss_factor` | λ for the spatial difference loss (UKB default: 1.0) |
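How `--num_hub_ROIs` feeds into masking can be sketched in two steps: rank ROIs by structural communicability (walks of all lengths, i.e. the matrix exponential of the adjacency matrix), then zero out the top-ranked hub rows of each band signal. This is an illustration of the idea; `communicability.py` and the trainer define the repo's exact normalization and masking ratios:

```python
import numpy as np
from scipy.linalg import expm

def hub_rois(A, num_hubs):
    """Rank ROIs by structural communicability G = expm(A) and return the
    indices of the `num_hubs` most central ones (sketch only)."""
    G = expm(A)                    # weighs walks of length k by 1/k!
    score = G.sum(axis=1)          # total communicability per ROI
    return np.argsort(score)[::-1][:num_hubs]

def mask_hubs(x, hubs):
    """Zero out hub-ROI rows of a (ROIs, time) band signal; the boolean
    mask marks entries the reconstruction loss is computed on."""
    mask = np.zeros(x.shape, dtype=bool)
    mask[hubs, :] = True
    return np.where(mask, 0.0, x), mask

# Toy symmetric connectivity matrix for 8 ROIs
rng = np.random.default_rng(0)
A = rng.random((8, 8))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)

hubs = hub_rois(A, 3)
x_masked, mask = mask_hubs(rng.standard_normal((8, 100)), hubs)
```

Random time windows are masked in the same spirit along the time axis; both masked sets feed the reconstruction objective.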
```bash
python main.py \
    --base_path /path/to/MBBN \
    --step 2 \
    --finetune \
    --pretraining_model_path /path/to/pretrained_model.pth \
    --dataset_name ABIDE \
    --target ASD \
    --intermediate_vec 360 \
    --spatial_loss_factor 100 \
    --exp_name finetune_ABIDE_ASD
```

Or via SLURM:

```bash
sbatch scripts/main_experiments/03_finetuning/finetune_MBBN.slurm
```

Recommended `--spatial_loss_factor` per dataset:
| Dataset | λ |
|---|---|
| ABIDE | 100 |
| ABCD | 10 |
| UKB | 1 |
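Conceptually, fine-tuning reuses the pretrained encoder weights while the new task head starts from random initialization, which is what loading with `strict=False` allows. A minimal sketch: `TinyEncoder` and the checkpoint key `"model_state_dict"` are illustrative assumptions, not the repo's actual class or format (see `trainer.py` and `--pretraining_model_path`):

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for the MBBN model: a pretrained part plus a task head."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(8, 8)   # weights present in the checkpoint
        self.head = nn.Linear(8, 2)      # task head, absent from it

model = TinyEncoder()

# Fake "pretrained" checkpoint containing only encoder weights
ckpt = {"model_state_dict": {"encoder.weight": torch.zeros(8, 8),
                             "encoder.bias": torch.zeros(8)}}

# strict=False loads what matches and reports what is missing/unexpected
missing, unexpected = model.load_state_dict(ckpt["model_state_dict"],
                                            strict=False)
```

Only `head.*` should appear in `missing`, confirming the encoder was restored and the task head remains freshly initialized.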
To train MBBN from scratch (without pretraining):

```bash
python main.py \
    --base_path /path/to/MBBN \
    --step 2 \
    --dataset_name ABCD \
    --target sex \
    --intermediate_vec 360 \
    --spatial_loss_factor 10 \
    --exp_name from_scratch_ABCD_sex
```

Or via SLURM:

```bash
sbatch scripts/main_experiments/01_from_scratch/train_MBBN_from_scratch.slurm
```

For interpretability analysis of a trained model:

```bash
python visualization.py \
    --base_path /path/to/MBBN \
    --dataset_name ABIDE \
    --target ASD \
    --intermediate_vec 360 \
    --model_path /path/to/finetuned_model.pth \
    --save_dir /path/to/save_dir
```

Or via SLURM:

```bash
sbatch scripts/main_experiments/04_interpretability/interpretability_MBBN.slurm
```

This outputs per-band spatial attention maps (high / low / ultralow) for each subject, enabling GradCAM-style interpretation of which ROIs drove the model's predictions.
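The GradCAM-style idea is to weight each band's spatial attention map by its gradient with respect to the prediction score, keep the positive evidence, and reduce to a per-ROI importance vector. A sketch of the general technique (not the exact procedure in `visualization.py`), with a toy stand-in for the model score:

```python
import torch

def gradcam_relevance(attn, score):
    """GradCAM-style relevance for one band's spatial attention map.

    attn:  (heads, ROIs, ROIs) attention map with requires_grad=True
    score: scalar prediction the relevance is computed against
    """
    grad, = torch.autograd.grad(score, attn)
    relevance = (grad * attn).clamp(min=0).mean(dim=0)  # (ROIs, ROIs)
    return relevance.sum(dim=0)                          # per-ROI importance

# Toy example: 4 heads over 6 ROIs, and a fake scalar "score" that depends
# on the attention (a real score would come from the model's output head)
attn = torch.rand(4, 6, 6, requires_grad=True)
score = (attn * torch.rand(4, 6, 6)).sum()
roi_importance = gradcam_relevance(attn, score)
```

Running this per band (high / low / ultralow) yields the band-specific ROI importance maps discussed in the paper.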
```
MBBN/
├── main.py                      # Training entry point
├── model.py                     # Model definitions
│   ├── Transformer_Finetune                       # Step 1: vanilla BERT baseline
│   ├── Transformer_Finetune_Three_Channels        # Step 2: MBBN
│   └── Transformer_Reconstruction_Three_Channels  # Step 3: MBBN pretraining
├── trainer.py                   # Training / evaluation loop
├── losses.py                    # Mask loss + spatial difference loss
├── loss_writer.py               # Loss orchestration
├── visualization.py             # Interpretability (GradCAM-style)
├── communicability.py           # Structural communicability computation
├── data_preprocess_and_load/
│   ├── dataloaders.py           # DataLoader factory
│   ├── datasets.py              # Dataset classes (UKB / ABCD / ABIDE)
│   ├── ROI_EXTRACT_UKB.py
│   └── ROI_EXTRACT_ABIDE.py
├── data/
│   ├── atlas/                   # Atlas NIfTI files
│   ├── communicability/         # Precomputed hub ROI orderings
│   ├── coordinates/             # ROI coordinate CSVs
│   └── metadata/                # Phenotype CSVs
├── scripts/
│   └── main_experiments/
│       ├── 01_from_scratch/     # SLURM: from-scratch training
│       ├── 02_pretraining/      # SLURM: communicability + pretrain
│       ├── 03_finetuning/       # SLURM: fine-tuning
│       └── 04_interpretability/ # SLURM: interpretability + WeightWatcher
├── docs/MBBN_procedure.png      # Architecture figure
└── docs/MBBN_code_flow.png      # Pretrain → finetune flow diagram
```
If you find this work useful, please cite:
```bibtex
@article{MBBN_CommsBio,
  title   = {Multi-Band Brain Network for Multi-Scale Temporal Analysis of Functional MRI},
  author  = {Bae, Sangyoon and others},
  journal = {Communications Biology},
  year    = {2025},
  doi     = {TBD}
}
```

Note: the DOI and full citation will be updated upon publication.
This project is licensed under the terms of the LICENSE file in this repository.

