This is a fork of the imatge-upc liverseg-2017-nipsws repository, which is itself derived from OSVOS.
The Liver Detection Model is a preprocessing method that sorts the images of a CT scan into liver and non-liver slices.
The Liver Slice Classification model was trained on 10,728 PNGs, with a validation set of 6,041 PNGs; training took 2:29:13.6 (h:mm:ss).
The dataset used for training was sampled from the LiTS database, with PNGs generated from 22 of the 131 patients.
Requirements to replicate this process include Python 3.8.x, with the required packages listed in requirements.txt. The script slice_classification.py is used to generate and test the Liver Slice Classification model. The model is applied in file_sorter.py, which generates a sub-dataset of the LiTS database by attempting to remove PNGs that do not contain liver.
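As a rough illustration of that sorting step (not the actual file_sorter.py interface), the sketch below loads a hypothetical trained classifier and copies each PNG into a liver or non-liver folder; the model file name, input size, and output convention are all assumptions.

```python
# Illustrative sketch only: the real file_sorter.py interface may differ.
# Assumption: a binary slice classifier saved as "liver_slice_classifier.h5"
# that takes a 512x512 grayscale slice and outputs P(liver).
import shutil
from pathlib import Path

import numpy as np
from PIL import Image
import tensorflow as tf

MODEL_PATH = "liver_slice_classifier.h5"   # assumed file name
SRC_DIR = Path("png_slices")               # PNGs exported from the LiTS volumes
LIVER_DIR = Path("sorted/liver")
NON_LIVER_DIR = Path("sorted/non_liver")

model = tf.keras.models.load_model(MODEL_PATH)
LIVER_DIR.mkdir(parents=True, exist_ok=True)
NON_LIVER_DIR.mkdir(parents=True, exist_ok=True)

for png in sorted(SRC_DIR.glob("*.png")):
    # Load the slice, normalize to [0, 1], add batch and channel dimensions.
    img = np.asarray(Image.open(png).convert("L").resize((512, 512)), dtype=np.float32) / 255.0
    prob_liver = float(model.predict(img[None, ..., None], verbose=0)[0, 0])
    dest = LIVER_DIR if prob_liver >= 0.5 else NON_LIVER_DIR
    shutil.copy2(png, dest / png.name)
```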
LiTS dataset --> https://competitions.codalab.org/competitions/17094#learn_the_details
You will need to convert the LiTS dataset to .mat files and PNGs using the script below, or download the preprocessed dataset --> here
Download the dataset from here -->
- Clone this repository
git clone https://github.com/imatge-upc/liverseg-2017-nipsws.git
- If necessary, install the required dependencies:
- Python 2.7
- Tensorflow r1.0 or higher
- Python dependencies: PIL, numpy, scipy
If you want to test our models, download the different weights. Extract the contents of this folder in the root of the repository, so that there is a train_files folder with the following checkpoints:
- Liver segmentation checkpoint
- Lesion segmentation checkpoint
- Lesion detection checkpoint
If you want to train the models yourself, we also provide the following pretrained models:
- VGG-16 weights
- ResNet-50 weights
This code was developed to participate in the liver lesion segmentation challenge (LiTS), but it can also be used for other segmentation tasks. The LiTS database consists of 131 CT scans for training and 70 CT scans for testing, provided in compressed NIfTI format. We made our own partition of the training set: we used volumes 0-104 to train and 105-130 to test. This code is prepared to run experiments with our partition.
The code expects that the database is inside the LiTS_database
folder. Inside there should be the following folders:
- images_volumes: a folder for each CT volume; inside each of these folders, a .mat file for each CT slice of the volume. The required preprocessing consists of clipping the values outside the range (-150, 250) and applying max-min normalization.
- liver_seg: the same structure as the previous, but with a .png for each CT slice containing the liver mask.
- item_seg: the same structure as the previous, but with a .png for each CT slice containing the lesion mask.
An example of the structure for a single slice of a CT volume is the following:
LiTS_database/images_volumes/31/100.mat
LiTS_database/liver_seg/31/100.png
LiTS_database/item_seg/31/100.png
We provide a MATLAB script to convert the NIfTI files into this structure. In our case we used this matlab library. You can use whatever library you decide as long as the file structure and the preprocessing are the same.
cd utils/matlab_utils
matlab process_database_liver.m
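If you prefer Python over MATLAB, the following is a minimal sketch of the same conversion and preprocessing (clipping to (-150, 250) and max-min normalization), assuming nibabel and scipy are installed; the .mat variable name ("section") is an assumption and must match whatever the data loader expects.

```python
# Python alternative sketch for the preprocessing described above (the repository
# provides a MATLAB script for this step).
import os

import nibabel as nib
import numpy as np
from scipy.io import savemat

def preprocess_volume(nifti_path, out_dir):
    volume = nib.load(nifti_path).get_fdata()           # H x W x num_slices
    volume = np.clip(volume, -150.0, 250.0)             # clip HU values to (-150, 250)
    volume = (volume - volume.min()) / (volume.max() - volume.min())  # max-min normalization
    os.makedirs(out_dir, exist_ok=True)
    for i in range(volume.shape[2]):
        # One .mat file per CT slice, e.g. LiTS_database/images_volumes/31/100.mat
        savemat(os.path.join(out_dir, f"{i + 1}.mat"), {"section": volume[:, :, i]})

preprocess_volume("volume-31.nii", "LiTS_database/images_volumes/31")
```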
1. Train the liver model
In seg_liver_train.py you should indicate a dataset list file. An example is seg_DatasetList/training_volume_3.txt. Each line has:
img1 seg_lesion1 seg_liver1 img2 seg_lesion2 seg_liver2 img3 seg_lesion3 seg_liver3
If you only have liver segmentations, repeat the liver mask as the lesion mask (seg_lesionX = seg_liverX). If you used the folder structure explained in the previous point, you can use the provided training_volume_3.txt and testing_volume_3.txt files.
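If you need to build such a list yourself, the sketch below shows one way to do it under the folder structure above. The exact path convention (with or without file extensions) is an assumption and should be checked against the provided seg_DatasetList examples; my_training_volume_3.txt is a hypothetical output name.

```python
# Sketch of building a training list in the triplet format above, assuming the
# LiTS_database folder structure from the Database section.
import os

def write_list(db_root, volumes, out_file):
    with open(out_file, "w") as f:
        for vol in volumes:
            slice_dir = os.path.join(db_root, "images_volumes", str(vol))
            n_slices = len(os.listdir(slice_dir))
            # Group slices in windows of 3 consecutive slices per line.
            for s in range(1, n_slices - 1):
                cols = []
                for k in (s, s + 1, s + 2):
                    cols += [f"images_volumes/{vol}/{k}.mat",
                             f"item_seg/{vol}/{k}.png",
                             f"liver_seg/{vol}/{k}.png"]
                f.write(" ".join(cols) + "\n")

write_list("LiTS_database", range(0, 105), "seg_DatasetList/my_training_volume_3.txt")
```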
python seg_liver_train.py
2. Test the liver model
A dataset list with the same format but with the test images is required here. If you don't have annotations, simply put a dummy annotation X.png. There is also an example in seg_DatasetList/testing_volume_3.txt.
python seg_liver_test.py
This network samples locations around the liver and detects whether they contain a lesion or not.
1. Crop slices around the liver
In order to train the lesion detector and the lesion segmentation network, we need to crop the CT scans around the liver region. First, we need to obtain liver predictions for the whole dataset and move them to the LiTS_database folder.
cp -rf ./results/seg_liver_ck ./LiTS_database/seg_liver_ck
The following commands will crop the images from the database, the ground truth, and the liver predictions.
cd utils/crops_methods
python compute_3D_bbs_from_gt_liver.py
This will generate the following folders:
LiTS_database/bb_liver_seg_alldatabase3_gt_nozoom_common_bb
LiTS_database/bb_liver_lesion_seg_alldatabase3_gt_nozoom_common_bb
LiTS_database/bb_images_volumes_alldatabase3_gt_nozoom_common_bb
LiTS_database/liver_results
and also a ./utils/crops_list/crops_LiTS_gt.txt file with the coordinates of the crop.
The default version will crop the images, ground truth, and liver predictions, considering the liver ground truth masks instead of the predictions. You can change this option in the same script.
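For intuition, a minimal sketch of the idea behind this cropping step follows: a single bounding box per volume computed from the liver ground-truth masks, which could then be used to crop images, masks, and predictions. It is illustrative only and not the actual compute_3D_bbs_from_gt_liver.py implementation.

```python
# Illustrative sketch: one bounding box per volume covering the liver masks of all slices.
from pathlib import Path

import numpy as np
from PIL import Image

def liver_bounding_box(liver_seg_dir):
    ys, xs = [], []
    for png in Path(liver_seg_dir).glob("*.png"):
        mask = np.asarray(Image.open(png)) > 0
        if mask.any():
            rows, cols = np.nonzero(mask)
            ys += [rows.min(), rows.max()]
            xs += [cols.min(), cols.max()]
    if not ys:
        return None  # no liver in this volume's ground truth
    # (min_row, min_col, max_row, max_col) for the whole volume
    return min(ys), min(xs), max(ys), max(xs)

print(liver_bounding_box("LiTS_database/liver_seg/31"))
```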
2. Sample locations around liver
Now we need to sample locations around the liver region, in order to train and test the lesion detector. We need a .txt with the following format:
img1 x1 x2 id
Example:
images_volumes/97/444 385.0 277.0 1
where x1 and x2 are the coordinates of the upper-left vertex of the bounding box and id is the data augmentation option. There are two options in this script: sampling locations for slices with ground truth, or without. In the first case, two separate lists are generated, one for positive locations (with lesion) and another for negative locations (without lesion), in order to train the detector with balanced batches. These lists are already generated so you can use them; they are inside det_DatasetList (for instance, training_positive_det_patches_data_aug.txt for the positive patches of the training set).
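As a hedged illustration of this format, the snippet below parses one list line and loads the corresponding slice. The .mat variable name ("section"), the 80x80 patch size, and the row/column order of x1/x2 are assumptions, not taken from the repository.

```python
# Sketch of reading one line of the sampling list and cropping the sampled patch.
from scipy.io import loadmat

line = "images_volumes/97/444 385.0 277.0 1"
img_path, x1, x2, aug_id = line.split()
slice_arr = loadmat(f"LiTS_database/{img_path}.mat")["section"]  # assumed variable name
# x1, x2 give the upper-left vertex of the sampled bounding box (row/col order assumed).
y0, x0 = int(float(x1)), int(float(x2))
patch = slice_arr[y0:y0 + 80, x0:x0 + 80]  # assumed 80x80 patch size
print(patch.shape, "augmentation option:", aug_id)
```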
In case you want to generate other lists, use the following script:
cd utils/sampling_bb
python sample_bbs.py
3. Train lesion detector
Once you sample the positive and negative locations, or decide to use the default lists, you can use the following command to train the detector.
python det_lesion_train.py
4. Test lesion detector
In order to test the detector, you can use the following command:
python det_lesion_test.py
This will create a folder inside detection_results with the task_name given to the experiment, containing two .txt files: one with the hard results (using a threshold of 0.5) and another with the soft results, i.e. the probability predicted by the detector that a location is unhealthy.
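If you want to re-threshold the soft results yourself, a minimal sketch follows; the file names and the one-probability-per-line layout are assumptions about what det_lesion_test.py writes, not confirmed by the repository.

```python
# Hedged sketch: deriving hard detections from the soft results file, assuming each
# line holds a patch identifier followed by the predicted lesion probability.
THRESHOLD = 0.5

with open("detection_results/my_task/soft_results.txt") as soft, \
     open("detection_results/my_task/hard_results_rethresholded.txt", "w") as hard:
    for line in soft:
        *patch_id, prob = line.split()
        label = 1 if float(prob) >= THRESHOLD else 0
        hard.write(" ".join(patch_id + [str(label)]) + "\n")
```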
This is the network that segments the lesion. It is trained by backpropagating gradients only through the liver region.
1. Train the lesion model
In order to train the algorithm that does not backpropagate through pixels outside the liver, each line of the .txt list file in this case should have the following format:
img1 seg_lesion1 seg_liver1 result_liver1 img2 seg_lesion2 seg_liver2 result_liver2 img3 seg_lesion3 seg_liver3 result_liver3
An example list file is seg_DatasetList/training_lesion_commonbb_nobackprop_3.txt. If you used the folder structure proposed in the Database section, and you have named the folders of the cropped slices as proposed in compute_3D_bbs_from_gt_liver.py, you can use these files for training and testing the algorithm with the following command:
python seg_lesion_train.py
2. Test the lesion model
The command to test the network is the following:
python seg_lesion_test.py
In this case, observe that the script does 4 different steps:
- Does inference with the lesion segmentation network
- Returns results to the original size (from cropped slices to 512x512)
- Masks the results with the liver segmentation masks
- Checks for positive lesion detections in the liver, and removes false positives of the segmentation network using the detection results (a minimal sketch of steps 3 and 4 follows below).
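The sketch below illustrates steps 3 and 4 on a single slice, assuming the lesion probability map and the liver mask are arrays of the same size and that a boolean detector decision is available for the slice; the real script operates on whole result folders, so this is only a conceptual example.

```python
# Minimal sketch of masking lesion predictions with the liver mask and discarding
# segmentations that the lesion detector rejects.
import numpy as np

def postprocess_slice(lesion_prob, liver_mask, det_positive, th=0.5):
    lesion = (lesion_prob >= th) & (liver_mask > 0)   # step 3: keep lesions inside the liver only
    if not det_positive:                              # step 4: drop slices the detector marks healthy
        lesion[:] = False
    return lesion.astype(np.uint8)

# Example with random data just to show the call signature.
pred = postprocess_slice(np.random.rand(512, 512), np.ones((512, 512)), det_positive=True)
```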
For questions, e-mail [email protected] or post in the issues.
**List of research papers**
Detection-aided liver lesion segmentation using deep learning, with the associated GitHub project, the arXiv preprint, and related slides here.
@misc{1711.11069,
Author = {Miriam Bellver and Kevis-Kokitsi Maninis and Jordi Pont-Tuset and Xavier Giro-i-Nieto and Jordi Torres and Luc Van Gool},
Title = {Detection-aided liver lesion segmentation using deep learning},
Year = {2017},
Eprint = {arXiv:1711.11069},
}