- **Install Docker (with GPU Support)**

  Ensure that Docker is installed and configured with GPU support:

  - Install Docker if it is not already installed.
  - Install the NVIDIA Container Toolkit to enable GPU support.
  - Verify the setup (using a CUDA version close to our environment):

    ```bash
    docker run --rm --gpus all nvidia/cuda:12.6.3-base-ubuntu22.04 nvidia-smi
    ```
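The verification step above can also be scripted. Below is a minimal Python sketch, not part of Protenix; the helper names are illustrative, and it simply wraps the same `docker run ... nvidia-smi` command:

```python
import shutil
import subprocess

# Same CUDA image as in the manual verification step above;
# adjust the tag to match your driver/environment.
CUDA_IMAGE = "nvidia/cuda:12.6.3-base-ubuntu22.04"

def gpu_check_command(image=CUDA_IMAGE):
    """Return the `docker run ... nvidia-smi` verification command as an argument list."""
    return ["docker", "run", "--rm", "--gpus", "all", image, "nvidia-smi"]

def docker_sees_gpu(image=CUDA_IMAGE):
    """True if Docker can run nvidia-smi inside the CUDA image."""
    if shutil.which("docker") is None:
        return False  # Docker is not installed or not on PATH
    result = subprocess.run(gpu_check_command(image), capture_output=True)
    return result.returncode == 0
```

A `False` return distinguishes nothing between a missing Docker install and a missing GPU; for diagnosis, run the command manually and read its stderr.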
- **Pull the Docker image**

  The image contains all necessary dependencies (PyTorch, HMMER, Kalign, CUTLASS, etc.), but does not include the Protenix source code by default.

  ```bash
  docker pull ai4s-share-public-cn-beijing.cr.volces.com/release/protenix:1.0.0.4
  ```
- **Clone this repository**

  ```bash
  git clone https://github.com/bytedance/protenix.git
  cd ./protenix
  ```
- **Run Docker with an interactive shell**

  Mount the current directory to `/app` inside the container. If you have external data or weights (e.g., in `/root/protenix`), consider mounting them as well.

  ```bash
  docker run --gpus all -it \
    -v "$(pwd)":/app \
    -v /dev/shm:/dev/shm \
    ai4s-share-public-cn-beijing.cr.volces.com/release/protenix:1.0.0.4 \
    /bin/bash
  ```
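If you launch containers frequently with varying mounts, assembling the invocation programmatically can help. The following Python sketch is illustrative only (the helper name is not part of Protenix); it builds the same `docker run` command as above, plus optional extra volume mounts:

```python
from pathlib import Path

# Image from the pull step above.
IMAGE = "ai4s-share-public-cn-beijing.cr.volces.com/release/protenix:1.0.0.4"

def docker_run_command(workdir=".", extra_mounts=None):
    """Return the interactive `docker run` invocation as an argument list.

    extra_mounts maps host paths to container paths, e.g. for external
    data or model weights.
    """
    cmd = ["docker", "run", "--gpus", "all", "-it",
           "-v", f"{Path(workdir).resolve()}:/app",
           "-v", "/dev/shm:/dev/shm"]
    for host_path, container_path in (extra_mounts or {}).items():
        cmd += ["-v", f"{host_path}:{container_path}"]
    cmd += [IMAGE, "/bin/bash"]
    return cmd

# Example: also mount local weights at /root/protenix inside the container.
cmd = docker_run_command(".", {"/root/protenix": "/root/protenix"})
```

Pass the resulting list to `subprocess.run` (or print and copy it) rather than joining it into a string, so paths with spaces stay intact.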
- **Install Protenix and Verify**

  Once inside the container, install Protenix in editable mode and verify the installation:

  ```bash
  cd /app
  pip install -e .

  # Verify the installation by checking the help message
  protenix --help
  ```
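The same verification can be performed from Python, e.g. in a setup script. This is an illustrative sketch, not part of Protenix; it only checks that the `protenix` entry point is on PATH and that `--help` exits cleanly:

```python
import shutil
import subprocess

def protenix_cli_ok():
    """True if the `protenix` CLI is installed and `protenix --help` succeeds."""
    if shutil.which("protenix") is None:
        return False  # package not installed, or PATH not updated
    result = subprocess.run(["protenix", "--help"], capture_output=True)
    return result.returncode == 0
```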
After completing these steps, you can proceed with inference or training. See the Inference Guide for more details.