# VLFATRollOut

This repository contains the source code for the following paper: *VLFATRollout: Fully Transformer-based Classifier for Retinal OCT Volumes*, Marzieh Oghbaie, Teresa Araujo, Ursula Schmidt-Erfurth, Hrvoje Bogunovic.

The proposed network deploys Transformers for volume classification and can handle variable volume resolutions at both development and inference time.
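As a rough, generic illustration of this property (not the repository's actual implementation), a plain Transformer encoder can consume a variable number of per-slice tokens, so volumes with different numbers of B-scans simply become sequences of different lengths:

```python
# Minimal sketch, NOT the repository's code: a plain Transformer encoder applied to
# per-slice feature tokens, where the number of slices (the sequence length)
# differs from volume to volume.
import torch
import torch.nn as nn

embed_dim = 256                                   # hypothetical token dimension
encoder_layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
cls_head = nn.Linear(embed_dim, 3)                # hypothetical number of classes

for n_slices in (19, 25, 49):                     # volumes with different resolutions
    slice_tokens = torch.randn(1, n_slices, embed_dim)  # (batch, n_slices, dim)
    encoded = encoder(slice_tokens)               # same module, any sequence length
    logits = cls_head(encoded.mean(dim=1))        # pool over slices, then classify
    print(n_slices, logits.shape)                 # -> torch.Size([1, 3]) each time
```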

## Proposed Approach for 3D Volume Classification

*(Architecture overview figure.)*

The main models are available at `model_zoo/feature_extrc/models.py`.

## Installation

Please check `INSTALL.md` for installation instructions.

## Training

For the OLIVES dataset, the list of samples should be provided in a `.csv` file referenced by the `annotation_path_test` field under `dataset` in the config file. The file should include at least the columns `sample_path`, `FileSetId`, `label`, `label_int`, and `n_frames`; a sketch of building such a file is shown below. For the Duke dataset, however, it is sufficient to give the dataloader the directory of the samples arranged as `subset/class`.
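A minimal sketch of building such an annotation file with pandas; the column names follow the list above, while the paths, IDs, and label values are placeholders:

```python
# Sketch only: writes a hypothetical OLIVES-style annotation .csv containing the
# required columns. The paths, FileSetIds, and label values are placeholders.
import pandas as pd

rows = [
    {"sample_path": "data/OLIVES/vol_0001", "FileSetId": "0001",
     "label": "healthy",  "label_int": 0, "n_frames": 49},
    {"sample_path": "data/OLIVES/vol_0002", "FileSetId": "0002",
     "label": "diseased", "label_int": 1, "n_frames": 25},
]

pd.DataFrame(rows).to_csv("annotations_test.csv", index=False)
# Then point the `annotation_path_test` field under `dataset` in the config file
# to annotations_test.csv.
```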

```bash
python main/Smain.py --config_path config/YML_files/VLFATRollout.yaml
```

## Evaluation

- **Simple test with confusion matrix:** set `train: false` and `allow_size_mismatch: false` under `train_config` in the corresponding config file (a sketch of doing this programmatically follows below), then run:

  ```bash
  python main/Smain.py --config_path config/YML_files/FATRollOut.yaml
  ```
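A small sketch of flipping these flags programmatically with PyYAML; the only assumptions are the `train` and `allow_size_mismatch` keys under `train_config` named above and the config path from the command:

```python
# Sketch: switch the config to evaluation mode before running Smain.py.
# Assumes only the `train` and `allow_size_mismatch` keys under `train_config`
# mentioned above; everything else in the file is left untouched.
import yaml

config_path = "config/YML_files/FATRollOut.yaml"

with open(config_path) as f:
    cfg = yaml.safe_load(f)

cfg["train_config"]["train"] = False
cfg["train_config"]["allow_size_mismatch"] = False

with open(config_path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```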

## Acknowledgement

This repository is built using the timm library, PyTorch, and Meta Research repositories.

## License

This project is released under the MIT license. Please see the LICENSE file for more information.

## Citation