# ECAMP

The official implementation of "ECAMP: Entity-centered Context-aware Medical Vision Language Pre-training".
Our paper can be found here.

*Framework overview (figure)*

## Installation

Clone this repository:

```bash
git clone https://github.com/ToniChopp/ECAMP.git
```

Install Python dependencies:

```bash
conda env create -f environment.yml
```

## Resource fetching

At present, we only release the pre-training code, so this section only describes how to obtain the MIMIC-CXR data.

- **MIMIC-CXR**: We use the MIMIC-CXR-JPG dataset for the radiographs. The paired radiology reports can be downloaded from MIMIC-CXR (see the download sketch below).

You can download the ViT-B/16 checkpoint here for pre-training.
Our pre-trained model can be found here.

## Pre-training

The distilled reports and attention weights will be released once our paper is accepted; until then, you can pre-train with the original radiographs and reports.
We pre-train ECAMP on MIMIC-CXR with the following commands:

```bash
cd ECAMP/ECAMP/Pre-training
chmod a+x run.sh
./run.sh
```

Note that other pre-training models can be flexibly developed under this framework.
Hope you enjoy!