Bench-PRS Dock is a curated collection of ten Dockerized Polygenic Risk Score (PRS) tools built to make genetic risk prediction analyses reproducible, portable, and easy to benchmark across populations. Each container is versioned, metadata-tracked, and verified for runtime consistency.
Figure 1. Bench-PRS Dock Architecture.
Pull and run any tool (example: PRS-CSx):

```bash
docker pull chiomab/prscsx:v1.2

docker run --rm \
    -v ld_ref:/ld_ref \
    -v $(pwd)/test_data:/test_data \
    -v $(pwd)/results:/results \
    chiomab/prscsx:v1.2 \
    python PRScsx.py \
        --ref_dir=/ld_ref \
        --bim_prefix=/test_data/test \
        --sst_file=/test_data/EUR_sumstats.txt,/test_data/EAS_sumstats.txt \
        --n_gwas=200000,100000 \
        --pop=EUR,EAS \
        --chrom=22 \
        --phi=1e-2 \
        --out_dir=/results \
        --out_name=test \
        --seed=1234
```

See all tool pages in docs/tool_pages/.
See the full technical inventory in Bench-PRS Dock Inventory
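After a run, the mounted results/ directory on the host should hold one output file per population. A minimal sketch of a sanity check, assuming outputs are named with the `test_<POP>_` prefix implied by `--out_name=test` and `--pop=EUR,EAS` above (the exact suffix PRS-CSx appends is an assumption here):

```shell
# Hedged sketch (not part of Bench-PRS Dock): verify that each requested
# population produced at least one output file in the results directory.
# Usage: check_pop_outputs <results_dir> <POP> [<POP> ...]
check_pop_outputs() {
  dir=$1; shift
  status=0
  for pop in "$@"; do
    # The "test_<POP>_*" glob assumes --out_name=test; adjust for other runs.
    if ls "$dir"/test_"${pop}"_* >/dev/null 2>&1; then
      echo "OK: $pop output present"
    else
      echo "MISSING: $pop output" >&2
      status=1
    fi
  done
  return $status
}
```

For the example above: `check_pop_outputs results EUR EAS`.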
| Tool | Docker Image / Tag | Image Size | Citation |
|---|---|---|---|
| BridgePRS | chiomab/bridgeprs:v1.5 | 588.9 MB | Hoggart et al. (2024) |
| PRS-CSx | chiomab/prscsx:v1.2 | 118.3 MB | Ruan et al. (2022) |
| SDPRX | chiomab/sdprx:v1.0 | 555.8 MB | Zhou et al. (2023) |
| XPASS / XPASS+ | chiomab/xpass:v1.2 | 689.1 MB | Cai et al. (2021) |
| XP-BLUP | chiomab/xpblup:v1.0 | 51.4 MB | Coram et al. (2017) |
| GAUDI | chiomab/gaudi-prs:v1.0 | 577.3 MB | Sun et al. (2024) |
| PolyFun | chiomab/polyfun:v1.3 | 2.6 GB | Weissbrod et al. (2022) |
| SNPNET | chiomab/snpnet:v1.1 | 811.3 MB | Qian et al. (2020) |
| TL-PRS | chiomab/tl-prs:v1.2 | 635.2 MB | Zhao et al. (2022) |
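To mirror the whole collection locally, the images in the table above can be pulled in one loop. A sketch; the `DOCKER` variable is overridable (e.g. `DOCKER=echo`) for a dry run:

```shell
# Hedged sketch: pull every Bench-PRS Dock image listed in the inventory table.
# Override DOCKER (e.g. DOCKER=echo pull_all_images) to dry-run without Docker.
pull_all_images() {
  DOCKER=${DOCKER:-docker}
  for image in \
      chiomab/bridgeprs:v1.5 chiomab/prscsx:v1.2 chiomab/sdprx:v1.0 \
      chiomab/xpass:v1.2 chiomab/xpblup:v1.0 chiomab/gaudi-prs:v1.0 \
      chiomab/polyfun:v1.3 chiomab/snpnet:v1.1 chiomab/tl-prs:v1.2; do
    "$DOCKER" pull "$image" || return 1
  done
}
```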
```
Bench-PRS-Dock/
├─ README.md
├─ CITATION.cff
├─ docs/
│  └─ tool_pages/
│     └─ <tool_name>.md
├─ manifest/
│  ├─ prsdock_inventory.csv
│  └─ verification_log.csv
├─ benchmarking/
│  ├─ scripts/
│  │  ├─ capture_system_info.sh
│  │  ├─ setup/
│  │  │  ├─ measure_time_setup.sh
│  │  │  ├─ run_manual_setup.sh
│  │  │  └─ run_setup_docker.sh
│  │  ├─ execution/
│  │  │  ├─ measure_time_execution.sh
│  │  │  ├─ run_manual_execution.sh
│  │  │  └─ run_docker_execution.sh
│  │  └─ visualization/
│  │     ├─ plot_eff_ratio.R
│  │     └─ plot_error_barchart.R
│  ├─ results/
│  │  ├─ setup_benchmarks/
│  │  │  ├─ docker/
│  │  │  │  └─ <tool>/
│  │  │  └─ manual/
│  │  │     └─ <tool>/
│  │  └─ execution_benchmarks/
│  │     ├─ docker/
│  │     │  └─ <tool>/
│  │     └─ manual/
│  │        └─ <tool>/
│  └─ figures/
```

Description:
- Essential: README.md
- Documentation: docs/ - Docker image inventory and tool-specific pages
- Benchmarking scripts: benchmarking/scripts/ - to reproduce the benchmarking results reported in this repository, run the scripts located in:
  - setup/ - setup-time benchmarking scripts
  - execution/ - execution-time benchmarking scripts
  - visualization/ - R scripts for plotting the figures in the manuscript
  - capture_system_info.sh - environment metadata capture
- Benchmark results: benchmarking/results/ - organized setup and execution benchmarks for all tools (Docker vs. manual)
- Figures: figures/ - visuals used in the README and the manuscript
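The measure_time_* scripts record how long each setup or execution phase takes. Their general shape can be sketched as a wall-clock wrapper around an arbitrary command; the CSV column layout and function name below are assumptions for illustration, not the repository's actual format:

```shell
# Hedged sketch in the spirit of measure_time_setup.sh / measure_time_execution.sh;
# the real scripts and their output format may differ.
# Usage: time_command <label> <command> [args...]
time_command() {
  label=$1; shift
  start=$(date +%s)
  "$@" >&2            # command's own output goes to stderr; stdout carries only the CSV row
  rc=$?
  end=$(date +%s)
  echo "${label},$((end - start)),${rc}"   # columns: label,elapsed_seconds,exit_code
  return $rc
}
```

For example, a setup-time row might be appended with something like `time_command prscsx_setup docker pull chiomab/prscsx:v1.2 >> times.csv` (hypothetical file name).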
For citation and attribution information, please refer to CITATION.cff.