# benchmark.deeplabcut.org
Welcome to the DeepLabCut benchmark! This repo hosts all submitted results, which are available at benchmark.deeplabcut.org. If you are interested in submitting to the benchmark, you can find detailed instructions at benchmark.deeplabcut.org/submission.
The mandatory requirements for building the benchmark page can be installed via
$ pip install -r requirements.txt

The (non-public) ground truth data needs to be present in data/. Check that this is the case by running
$ find data -type f
benchmark/data/CollectedData_Mackenzie.h5
benchmark/data/CollectedData_Daniel.h5
benchmark/data/CollectedData_Valentina.h5
benchmark/data/CollectedData_Mostafizur.h5

To use all functionalities of this package and to re-run evaluations, a DeepLabCut installation is additionally required.
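The `find` check above can also be scripted, which is convenient in CI. A minimal sketch, assuming the four ground truth file names listed above (the helper name `missing_ground_truth` is illustrative, not part of this package):

```python
from pathlib import Path

# Expected (non-public) ground truth files, as listed in this README.
EXPECTED = [
    "CollectedData_Mackenzie.h5",
    "CollectedData_Daniel.h5",
    "CollectedData_Valentina.h5",
    "CollectedData_Mostafizur.h5",
]


def missing_ground_truth(data_dir):
    """Return the expected ground truth files absent from data_dir."""
    root = Path(data_dir)
    return [name for name in EXPECTED if not (root / name).is_file()]
```

If the returned list is non-empty, the benchmark cannot be evaluated against the ground truth and the missing files should be obtained first.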
Check that the package works as expected by running
$ python -m pytest tests

which should finish without errors or warnings.
To re-evaluate all available models, run
$ python -m benchmark

or, if you want to run in debugging mode,
$ python -m benchmark --nocache --onerror raise

from the repository root.
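The intent of the two debugging flags above can be sketched with a small argument parser. This is a hypothetical illustration of their semantics, not the actual CLI definition in `benchmark.__main__`:

```python
import argparse


def build_parser():
    # Hypothetical sketch: --nocache forces a fresh run instead of reusing
    # cached results; --onerror controls whether an evaluation error aborts
    # the whole run ("raise") or is skipped ("ignore").
    parser = argparse.ArgumentParser(prog="benchmark")
    parser.add_argument("--nocache", action="store_true",
                        help="ignore cached results and re-evaluate every model")
    parser.add_argument("--onerror", choices=("ignore", "raise"),
                        default="ignore",
                        help="what to do when evaluating a model fails")
    return parser


args = build_parser().parse_args(["--nocache", "--onerror", "raise"])
```

With both flags set, every model is re-evaluated from scratch and the first failure surfaces as an exception, which is what makes this mode useful for debugging.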
To manually build the documentation, run
$ make deploy