
Experiments

This repository contains the experimental code and data for the PyJobShop paper submission.

Installation

Before using this repository, ensure you have the following installed:

  • uv (version 0.5.4 or higher)
  • CP Optimizer (version 22.1.1.0 or higher)

Install PyJobShop and all required packages by running:

uv sync

This command installs PyJobShop at commit 3ad1a02, which has been modified to support permutation constraints. See this branch for more details.
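
To verify the installation, a quick import check can help; this snippet is only a sanity check and is not part of the repository:

# Hedged sanity check (not part of the repository): confirm the pinned
# PyJobShop commit is importable in the synced environment.
# Run it via: uv run python, then paste the lines below.
import pyjobshop

print(pyjobshop.Model)  # should print the Model class without an ImportError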

Repository structure

The repository is organized as follows.

The data/ directory:

  • bks/: Contains all best-known solutions.
  • instances/: Contains all problem instances.
  • bks.csv: Parsed best-known solutions for result analysis.
  • stats.csv: Parsed instance data for result analysis.
  • results.csv: Comprehensive CSV overview of all results.
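
A minimal sketch of loading the parsed CSV files listed above with pandas (pandas is assumed to be available in the environment; the exact columns depend on the parsing notebooks):

import pandas as pd

# Paths follow the repository layout described above.
bks = pd.read_csv("data/bks.csv")
stats = pd.read_csv("data/stats.csv")
results = pd.read_csv("data/results.csv")

print(results.head())  # inspect the result overview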

The notebooks/ directory:

  • parse_bks.ipynb: Notebook for parsing best-known solutions.
  • parse_results.ipynb: Notebook for parsing benchmark results.
  • analysis.ipynb: Notebook for performing results analysis.

Additional utilities:

  • read/read.py: Helper functions to read various instance formats.
  • benchmark.py: Script for running benchmarks.

Results:

  • results/: Contains all raw benchmark results (including full solutions).
    Note: This folder is not included in the repository but can be downloaded separately from Zenodo.

Reproducing results

To reproduce all benchmark results (i.e., the results/ folder), use the benchmark.py script, which reads the instance files from the data/ directory. For example, to solve all FJSP instances using OR-Tools with a 10-second time limit and 8 cores per instance, run:

uv run benchmark.py data/instances/FJSP/*.txt \
  --problem_variant FJSP \
  --solver ortools \
  --time_limit 10 \
  --num_workers_per_instance 8 \
  --display

For more configuration options, you can view the help documentation:

uv run benchmark.py --help

SLURM job script

For running all experiments on a SLURM cluster, use the jobscript.py file. This script submits independent SLURM jobs, each running the benchmark.py script for a specific problem variant. Note: This script is tailored to our cluster hardware, so adjustments may be required for your system.

We ran the experiments with the following commands:

uv run jobscript.py --solver ortools --time_limit 900
uv run jobscript.py --solver cpoptimizer --time_limit 900
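
As a rough illustration only (not the actual contents of jobscript.py), submitting one SLURM job per problem variant could look like the sketch below; the variant list, sbatch options, and benchmark flags are assumptions:

import subprocess

# Illustrative sketch, not the real jobscript.py: submit one SLURM job per
# problem variant, each wrapping a benchmark.py invocation.
VARIANTS = ["FJSP", "JSP"]  # example variants; adjust to the available instances

for variant in VARIANTS:
    command = (
        f"uv run benchmark.py data/instances/{variant}/*.txt "
        f"--problem_variant {variant} --solver ortools --time_limit 900"
    )
    subprocess.run(
        ["sbatch", f"--job-name={variant}", f"--wrap={command}"],
        check=True,
    )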

Post-processing results

After running the experiments, execute the following scripts to generate parsed files for data analysis:

  • notebooks/parse_bks.py
  • notebooks/parse_results.py
  • parse_stats.py
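
As one example of what the analysis step might involve, the sketch below computes gaps against the best-known solutions; the column names ("instance", "solver", "objective", "bks") are assumptions and may differ from the actual schema:

import pandas as pd

# Hedged sketch: merge results with best-known solutions and compute gaps.
# Column names are assumptions, not necessarily the repository's actual schema.
results = pd.read_csv("data/results.csv")
bks = pd.read_csv("data/bks.csv")

merged = results.merge(bks, on="instance", suffixes=("", "_bks"))
merged["gap"] = (merged["objective"] - merged["bks"]) / merged["bks"]
print(merged.groupby("solver")["gap"].mean())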

Other

  • fjsp_naderi.py: Replicates the FJSP model from Naderi et al. (2023).
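
For context, a toy flexible job shop model in PyJobShop looks roughly like the sketch below; it is based on the public Model API, the exact signatures at the pinned commit may differ, and it is not the Naderi et al. replication itself:

from pyjobshop import Model

# Toy FJSP sketch: one job with two tasks, each processable on either of two
# machines with machine-dependent durations. All numbers are made up.
model = Model()
machines = [model.add_machine() for _ in range(2)]
job = model.add_job()
tasks = [model.add_task(job=job) for _ in range(2)]

durations = [[3, 5], [4, 2]]  # durations[task][machine]
for task, task_durations in zip(tasks, durations):
    for machine, duration in zip(machines, task_durations):
        model.add_mode(task, machine, duration)

result = model.solve(time_limit=10)
print(result.objective)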