Running ColabFold in Docker
- Docker or Singularity/Apptainer should be installed on your machine.
- NVIDIA GPU drivers are required. According to the JAX documentation: "We recommend installing the newest driver available from NVIDIA, but the driver must be version >= 525.60.13 for CUDA 12 and >= 450.80.02 for CUDA 11". In general, we recommend always using the latest driver.
- For Docker, install the NVIDIA Docker toolkit for GPU support. This is not needed for Singularity/Apptainer.
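Before pulling the image, it can help to confirm that the driver and (for Docker) the NVIDIA container toolkit are working. The sketch below assumes `nvidia-smi` is installed on the host and uses a public CUDA base image purely as a test container; it is not part of ColabFold:

```bash
# Show the host NVIDIA driver version and the maximum CUDA version it supports
nvidia-smi

# Docker only: verify that containers can access the GPU via the NVIDIA container toolkit
# (the CUDA base image tag below is just an example for this check)
docker run --rm --gpus all nvidia/cuda:12.2.2-base-ubuntu22.04 nvidia-smi
```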
- Pull the Docker Image:
- Check for Latest Versions: The latest versions of the ColabFold Docker images, including updates to ColabFold and CUDA runtime versions, can be found at the sokrypton/ColabFold Container Registry. The registry provides a list of available tags, indicating different versions of ColabFold and their corresponding CUDA runtime environments.
- Selecting the Right Tag: Each Docker image tag follows the format `colabfold:<colabfold_version>-cuda<cuda_version>`. Make sure to select the tag that matches the version of ColabFold you wish to use and is compatible with your CUDA runtime environment. In the following, we will use CUDA version 12.2.2 and ColabFold version 1.5.5. Pull the image with the following command:
```bash
# Docker
docker pull ghcr.io/sokrypton/colabfold:1.5.5-cuda12.2.2

# Singularity (this will create a `colabfold_1.5.5-cuda12.2.2.sif` file)
singularity pull docker://ghcr.io/sokrypton/colabfold:1.5.5-cuda12.2.2
```
- Compatibility: Ensure that the NVIDIA GPU driver and CUDA version installed on your host machine are compatible with the CUDA version of the Docker image. Mismatched versions may lead to errors or suboptimal performance.
- Set Up Cache Directory: Create a directory to store the AlphaFold2 weights. This directory, referred to as the cache directory, will be reused to avoid downloading the weights each time the container is run.
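For example, assuming the host location `/local/path/to/cache` used in the commands below:

```bash
# Create the cache directory on the host (any writable path works; this matches the mounts below)
mkdir -p /local/path/to/cache
```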
- Run Docker Image to Download Weights: Execute the following command to start the container and download the AlphaFold2 weights into your cache directory:

```bash
# Docker
docker run --user $(id -u) -ti --rm \
  -v /local/path/to/cache:/cache:rw \
  ghcr.io/sokrypton/colabfold:1.5.5-cuda12.2.2 \
  python -m colabfold.download

# Singularity
singularity run -B /local/path/to/cache:/cache \
  colabfold_1.5.5-cuda12.2.2.sif \
  python -m colabfold.download
```

This command mounts the cache directory to `/cache` inside the container for storing the downloaded weights. The weights will be stored in the `/local/path/to/cache` directory on your host machine. Once the weights are downloaded, you can run a prediction using `colabfold_batch`.
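To confirm the download succeeded, you can inspect the cache directory on the host; the exact subdirectory layout depends on the ColabFold version, so the listing below is only a sanity check:

```bash
# List the contents and total size of the weights cache (path is the example used above)
ls -R /local/path/to/cache | head
du -sh /local/path/to/cache
```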
- Prepare Your Input Data: Place your input sequence file (e.g., `input_sequence.fasta`) in a directory on your host machine. This directory will be mounted into the Docker container.
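For illustration, a minimal single-chain FASTA can be created in the current working directory (which is mounted as `/work` in the prediction command below); the header and amino-acid sequence here are placeholders, not a real target:

```bash
# Write a placeholder FASTA file into the current working directory
# (replace the header and sequence with your actual query)
cat > input_sequence.fasta << 'EOF'
>query
MAHHHHHHSSGLVPRGSHMASMTGGQQMGRGS
EOF
```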
- Run the Prediction: Use the command below to run the prediction. The input data should be accessible inside the container, and the results will be written to the output directory:

```bash
# Docker
docker run --user $(id -u) -ti --rm --runtime=nvidia --gpus 1 \
  -v /local/path/to/cache:/cache:rw \
  -v $(pwd):/work:rw \
  ghcr.io/sokrypton/colabfold:1.5.5-cuda12.2.2 \
  colabfold_batch /work/input_sequence.fasta /work/output

# Singularity
singularity run --nv \
  -B /local/path/to/cache:/cache -B $(pwd):/work \
  colabfold_1.5.5-cuda12.2.2.sif \
  colabfold_batch /work/input_sequence.fasta /work/output
```
- `--runtime=nvidia --gpus 1` (Docker) or `--nv` (Singularity) enables GPU access inside the container.
- Replace `input_sequence.fasta` and `output` with your actual input file and desired output directory name.
- The results will be available in the `output` directory within your current working directory on your host machine after the run is complete.
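If you want to adjust prediction settings (for example the number of recycles, template usage, or relaxation), the available flags can be listed from inside the container. The command below only prints the help text, so it is safe to run without a GPU; adapt it for Singularity with `singularity run` on the `.sif` file:

```bash
# Print the colabfold_batch help text from the container image
docker run --rm ghcr.io/sokrypton/colabfold:1.5.5-cuda12.2.2 colabfold_batch --help
```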