Deep learning using computed tomography to identify high-risk patients for acute small bowel obstruction: development and validation of a prediction model
PyTorch implementation and pretrained models for DBA+DRP. For details, see the paper: Deep learning using computed tomography to identify high-risk patients for acute small bowel obstruction: development and validation of a prediction model
You can download the weights of the backbones used in our experiments. Detailed arguments and training/evaluation settings can be found in the configs folder. The weights should be placed at the top level of the project folder (DBADRP_Classifier/weights).
| Backbone | Method | Accuracy (%) | AUROC | Download |
|---|---|---|---|---|
| ResNet | Naive | 66.15 | 0.848 ± 0.05 | Link |
| ResNet | DBA+DRP | 70.26 | 0.876 ± 0.02 | Link |
| ResNeXt | Naive | 68.46 | 0.874 ± 0.02 | Link |
| ResNeXt | DBA+DRP | 66.41 | 0.883 ± 0.01 | Link |
| WideResNet | Naive | 65.13 | 0.861 ± 0.02 | Link |
| WideResNet | DBA+DRP | 72.56 | 0.896 ± 0.01 | Link |
| DenseNet | Naive | 75.12 | 0.868 ± 0.05 | Link |
| DenseNet | DBA+DRP | 72.68 | 0.873 ± 0.07 | Link |
| EfficientNet | Naive | 71.22 | 0.841 ± 0.04 | Link |
| EfficientNet | DBA+DRP | 71.71 | 0.868 ± 0.03 | Link |
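As a minimal sketch, a downloaded checkpoint can be loaded with plain PyTorch; the file name below follows the model_{n}k_best.pt convention described later, and the model constructor is a placeholder for the repo's actual builder.

```python
# Minimal sketch: load a downloaded fold-0 ResNet checkpoint.
# The file name follows the model_{n}k_best.pt convention; `build_model` is hypothetical.
import torch

state_dict = torch.load("DBADRP_Classifier/weights/resnet0k_best.pt", map_location="cpu")
# model = build_model("resnet-dbadrp")   # hypothetical constructor from this repo
# model.load_state_dict(state_dict)
```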
Please install the requirements, including PyTorch, for stable execution. This code has been developed with Python 3.8, PyTorch 2.0.1, torch 0.19, and CUDA 11.8.
pip install -r requirements.txt
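After installation, a quick sanity check of the environment (version numbers taken from the README above) can be run in Python:

```python
# Verify the installed PyTorch build and CUDA availability (expected: PyTorch 2.0.1, CUDA 11.8).
import torch

print(torch.__version__)          # e.g. 2.0.1
print(torch.version.cuda)         # e.g. 11.8
print(torch.cuda.is_available())  # True if a GPU is visible
```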
Run on the general train/valid dataset with a single GPU for 100 epochs using the following command. You can choose a specific backbone and method with the backbone= option (e.g., backbone=resnet or backbone=resnet-dbadrp). For other configs and config structures, please check the configs folder.
python main.py dataset=asbo gpus=[0] train.epochs=100 backbone=resnet-dbadrp
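The key=value override syntax above suggests a Hydra/OmegaConf-style entry point. The following is only a rough sketch of how such overrides are typically consumed, assuming a configs/ layout with a top-level config.yaml; it is not the repo's actual main.py.

```python
# Rough sketch of a Hydra-style entry point consuming overrides such as
# `dataset=asbo gpus=[0] train.epochs=100 backbone=resnet-dbadrp`.
# This is an assumption about the config mechanism, not the repo's actual main.py.
import hydra
from omegaconf import DictConfig

@hydra.main(config_path="configs", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    print(cfg.backbone, cfg.dataset, cfg.gpus, cfg.train.epochs)

if __name__ == "__main__":
    main()
```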
By default, we train using 5-fold cross-validation and evaluate the model with the averaged results. For 5-fold training, you only need to change the dataset option and the execution file.
python cross_validation_train.py dataset=asbo-k gpus=[0] backbone=resnet-dbadrp
Before checking model performance, all weight files should be located in the weights folder in the format 'model_{n}k_best.pt' (resnet0k_best.pt, resnet1k_best.pt, ..., etc.). Once the weight files are in place, simply run the following command; it automatically evaluates all 5 folds.
python cross_inference.py
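A quick check that all five fold checkpoints are present can save a failed run; the "resnet" prefix below is just an example, use the backbone you trained.

```python
# Check that all five fold checkpoints exist in weights/ before running cross_inference.py
# (the "resnet" prefix is an example; adjust to the backbone you trained).
from pathlib import Path

weights_dir = Path("weights")
missing = [f"resnet{k}k_best.pt" for k in range(5)
           if not (weights_dir / f"resnet{k}k_best.pt").exists()]
print("missing checkpoints:", missing or "none")
```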
To generate the Grad-CAM of the last layer, run the following command. As with inference, the model weights must be in the correct directory before running the command. You can modify the method, gpus, k, etc.
python gradcam.py
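For reference, the core Grad-CAM computation boils down to weighting the last feature maps by their pooled gradients. The sketch below uses a plain torchvision ResNet-50 and a random input; gradcam.py in this repo may differ in model, layer, and preprocessing.

```python
# Generic Grad-CAM sketch on a torchvision ResNet-50; the repo's gradcam.py may differ.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()
feats, grads = {}, {}
layer = model.layer4[-1]  # last convolutional block
layer.register_forward_hook(lambda m, i, o: feats.update(value=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(value=go[0]))

x = torch.randn(1, 3, 224, 224)                 # stand-in for a preprocessed CT slice
logits = model(x)
logits[0, logits[0].argmax()].backward()        # backprop the top predicted class

w = grads["value"].mean(dim=(2, 3), keepdim=True)            # pooled gradients as channel weights
cam = F.relu((w * feats["value"]).sum(dim=1, keepdim=True))  # weighted sum of activations
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize to [0, 1]
```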
The data has three classes, HGSBO, LGSBO, and NORMAL, located in the top-level folder. Each class folder is organized by case series id, and each case folder contains .dcm files of CT images. The k-fold data splits are stored in the splits folder as .json files, and the corrupted data for the robustness test is organized under the distort folder by distortion type, intensity, and class.
./data
├── HGSBO
│   └── case_series_id
│       ├── 0001.dcm
│       └── ...
├── LGSBO
│   └── case_series_id
│       ├── 0001.dcm
│       └── ...
├── NORMAL
│   └── case_series_id
│       ├── 0001.dcm
│       └── ...
├── splits
│   ├── asbo_k_classification.json
│   └── asbo_classification.json
└── distort
    └── affine          # dist_type
        └── 0           # intensity
            ├── HGSBO   # class
            ├── LGSBO   # class
            └── NORMAL  # class
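Each case folder holds standard DICOM slices. A hypothetical example of reading a single slice is shown below; pydicom is an assumption and not necessarily the loader used by this repo, and case_series_id is a placeholder folder name from the layout above.

```python
# Hypothetical example: read one CT slice with pydicom (this repo's data loader may differ).
import pydicom

ds = pydicom.dcmread("data/HGSBO/case_series_id/0001.dcm")   # case_series_id is a placeholder
pixels = ds.pixel_array                                       # raw stored pixel values
hu = pixels * float(ds.RescaleSlope) + float(ds.RescaleIntercept)  # convert to Hounsfield units
print(hu.shape, hu.min(), hu.max())
```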
The *k*.json file is a list of length 5, and each item of the list is a dict with the keys train, train_label, valid, and valid_label. A .json file without k is a single dict with the same keys.
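For example, the k-fold split file can be inspected like this (path relative to the data layout above; key names as described):

```python
# Inspect the 5-fold split file; keys per the README: train, train_label, valid, valid_label.
import json

with open("data/splits/asbo_k_classification.json") as f:
    folds = json.load(f)        # list of length 5, one dict per fold

fold0 = folds[0]
print(len(fold0["train"]), len(fold0["valid"]))
```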
This repository is released under the Apache 2.0 license as found in the LICENSE file.
If our project is helpful for your research, please consider citing it :paperclip: and giving it a star ⭐
@article{oh2023deep,
title={Deep learning using computed tomography to identify high-risk patients for acute small bowel obstruction: development and validation of a prediction model: A retrospective cohort study},
author={Oh, Seungmin and Ryu, Jongbin and Shin, Ho-Jung and Song, Jeong Ho and Son, Sang-Yong and Hur, Hoon and Han, Sang-Uk},
journal={International Journal of Surgery},
pages={10--1097},
year={2023},
publisher={LWW}
}