Authors: Roel van Os, Marc Geilen, Martijn Hendriks, Twan Basten
Link to release: https://github.com/TUE-EE-ES/APSP-toolset/releases/tag/1.0.0
For any questions feel free to contact Roel van Os (r.w.m.v.os@tue.nl).

This repository includes software and benchmark assets for solving the Allocation and Periodic Scheduling Problem (APSP) as described in [APSP-2026].
This repository includes:
- Precomputed benchmark results as seen in the paper
- Solution models discussed in [APSP-2026]
- Solution models discussed in [QUINTON-2020]
- Scripts to run and verify solutions
- Scripts to generate figures
There are no specific system requirements for running this software. The implementation is platform-independent and can be executed on any operating system (e.g., Linux, macOS, Windows) and hardware architecture, including both x86 and ARM-based systems.
- `solve_instance.py`: run selected models on a single graph instance.
- `solve_benchmark.py`: run all models on all instances in `benchmark/graphs/`.
- `verifiy_instance.py`: verify one saved solution instance against APSP constraints.
- `verify_benchmark.py`: verify all benchmark entries and write `benchmark/verification_results.csv`.
- `models/`: formulations and shared configuration classes.
- `models/configuration/apsp_parameters.py`: solver parameter defaults (applied to all models).
- `models/configuration/apsp_configure.py`: naming/visualization configuration used by model output.
- `benchmark/`: precomputed results as seen in the paper.
- `benchmark_full/`: skeleton folder for a full benchmark run.
- `benchmark_shortened/`: skeleton folder for a shortened benchmark run.
- `combinatorial_optimization_tools/`: code for constructing and solving combinatorial optimization problems such as the APSP.
- `sdf3_python_utilities/`: code for parsing and converting SDF3 files to usable Python data structures.
All default file settings correspond to those used for finding the results as reported in [APSP-2026].
Required: Python 3.10+ environment.
Core packages to run the optimization models:
- `docplex`: v2.29.241
- `cplex`: v22.1.1.0
- `pandas`: v2.2.3
- `matplotlib`: v3.10.1
- `interruptingcow`: v0.8
Additional packages used for results evaluation and plotting:
- `numpy`: v2.2.4
- `natsort`: v8.4.0
- `lxml`: v5.3.1
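To catch version drift early, you can compare the installed packages against the pinned versions above. A minimal sketch using only the standard library (`importlib.metadata`); it reports, rather than enforces, mismatches:

```python
from importlib import metadata

# Pinned versions from the package lists above
REQUIRED = {
    "docplex": "2.29.241",
    "cplex": "22.1.1.0",
    "pandas": "2.2.3",
    "matplotlib": "3.10.1",
    "interruptingcow": "0.8",
    "numpy": "2.2.4",
    "natsort": "8.4.0",
    "lxml": "5.3.1",
}

def check_versions(required):
    """Return (missing, mismatched) lists versus the pinned versions."""
    missing, mismatched = [], []
    for name, want in required.items():
        try:
            have = metadata.version(name)
        except metadata.PackageNotFoundError:
            missing.append(name)
            continue
        if have != want:
            mismatched.append((name, have, want))
    return missing, mismatched

if __name__ == "__main__":
    missing, mismatched = check_versions(REQUIRED)
    for name in missing:
        print(f"MISSING: {name}")
    for name, have, want in mismatched:
        print(f"VERSION: {name} installed {have}, expected {want}")
```

Newer versions of these packages may also work; the pins are simply the versions the results were produced with.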
Install with:

```shell
python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install pandas matplotlib interruptingcow numpy natsort lxml
```

The models are built with DOcplex (`docplex`) and require IBM CPLEX (`cplex`) for local solves.
- Download IBM ILOG CPLEX Optimization Studio from IBM.
- If you are a student/researcher, use the IBM academic initiative for access/licensing: https://www.ibm.com/support/pages/ibm-ilog-optimization-academic-initiative
NOTE: When installing CPLEX via pip, you obtain the community edition of the package, which can only solve small problem instances. The benchmark instances provided in this repository require the full version of the CPLEX package. The setup files for the full package are provided after installing IBM ILOG CPLEX Optimization Studio.
- In your terminal application, browse to the CPLEX package install directory (i.e., `C:\Program Files\IBM\ILOG\CPLEX_StudioXXXX\python`)
- Run the `setup.py` script:
  - Windows: `py setup.py install`
  - Linux (Ubuntu): `python3 setup.py install`
Instructions for obtaining the full CPLEX package after installing IBM ILOG CPLEX Optimization Studio can also be found here: https://www.ibm.com/docs/en/icos/20.1.0?topic=cplex-setting-up-python-api
```shell
python -c "import cplex, docplex; print('CPLEX + DOcplex OK')"
```

If this import fails, the solver will not run.
The main results from the paper can be found in the `/benchmark` folder. Additional scalability experiments mentioned in the paper can be found in the `/benchmark_scaling` folder.
NOTE: To regenerate paper figures you need a valid LaTeX installation.
The scripts in `/benchmark` can be used to generate the figures from the paper.
- `results_optimality_count.py`: Bar chart of the optimality count for all modeling techniques (fig. 7 in paper)
- `results_accuracy_issues.py`: Bar chart of the accuracy issues per benchmark set for all modeling techniques (fig. 8 in paper)
- `results_solve_time.py`: Boxplot of solve time for all modeling techniques (fig. 9 in paper)
NOTE: To run the benchmark used in [APSP-2026], the full version of CPLEX is required.
The following steps allow a full replication of the results. No changes to the files are needed; all default file settings correspond to those needed to replicate the results as seen in the paper.
- Run `solve_benchmark.py`:

  ```shell
  python solve_benchmark.py
  ```

- Run `verify_benchmark.py`:

  ```shell
  python verify_benchmark.py
  ```

NOTE: The `mode` specified in `models/configuration/apsp_configure.py` needs to be the same for both the solving and verification of the benchmark in order to successfully verify the results. E.g., the existing benchmarks in `\benchmark` and `\benchmark_scaling` were run with `SDF3` mode, so to reverify all results, `SDF3` mode needs to be configured in `models/configuration/apsp_configure.py`.
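One way to guard against a mode mismatch is to check the configured mode before starting verification. A hypothetical sketch; it assumes the config file contains a plain `mode = "SDF3"`-style assignment, which may not match the actual layout of `apsp_configure.py`:

```python
import re
from pathlib import Path

def configured_mode(config_path):
    """Extract the value of a `mode = "..."` assignment from a config file.
    (Assumed format; the real apsp_configure.py may define mode differently.)"""
    text = Path(config_path).read_text()
    match = re.search(r'^\s*mode\s*=\s*["\'](\w+)["\']', text, re.MULTILINE)
    if match is None:
        raise ValueError(f"no mode assignment found in {config_path}")
    return match.group(1)

def assert_mode(config_path, expected="SDF3"):
    """Fail fast if the configured mode differs from the mode used for solving."""
    mode = configured_mode(config_path)
    if mode != expected:
        raise RuntimeError(f"configured mode {mode!r} != solving mode {expected!r}")
```

Calling `assert_mode("models/configuration/apsp_configure.py")` before a re-verification run would surface the mismatch immediately instead of after hours of processing.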
- Open `benchmark_full`:

  ```shell
  cd benchmark_full
  ```

- Run the figure generation scripts:
  - `results_optimality_count.py`: Bar chart of the optimality count for all modeling techniques (fig. 7 in paper)
  - `results_accuracy_issues.py`: Bar chart of the accuracy issues per benchmark set for all modeling techniques (fig. 8 in paper)
  - `results_solve_time.py`: Boxplot of solve time for all modeling techniques (fig. 9 in paper)
`solve_instance.py` is intended for manual experiments on one graph.
- Set:
  - `filepath` and `filename`
  - `timeout`
  - `iterations` (for Benders methods)
- Enable/disable model blocks in the script.
- Run:

  ```shell
  python solve_instance.py
  ```

`solve_benchmark.py` loops over all files in the selected folder and writes per-instance outputs.
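The per-instance loop can be sketched as follows. This is a conceptual illustration only: the `graphs/*.xml` pattern and the `results/<instance>/` output layout are assumptions based on the folder descriptions in this README, not the script's exact behavior:

```python
from pathlib import Path

def iter_benchmark(experiments_folder, results_root="results"):
    """Loop over all graph instances in a benchmark folder and prepare a
    per-instance output directory, mirroring what solve_benchmark.py does
    conceptually. Paths and suffixes are illustrative assumptions."""
    folder = Path(experiments_folder)
    out_dirs = []
    for graph_file in sorted(folder.glob("graphs/*.xml")):  # SDF3 graphs are XML
        instance = graph_file.stem
        out_dir = folder / results_root / instance
        out_dir.mkdir(parents=True, exist_ok=True)
        out_dirs.append(out_dir)
        # here the actual script would solve the instance with each enabled model
    return out_dirs
```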
Two folders are provided:
- `benchmark_full/`: contains the full set of benchmark graphs as reported in [APSP-2026]
- `benchmark_shortened/`: contains a quarter of the set of benchmark graphs
To select either benchmark, change the `experiments_folder` variable.
A full benchmark run takes roughly 8 days to complete. For the artifact evaluation we also provide a shortened benchmark with an expected runtime of approximately two days, which gives an indication of the trends reported in the paper.
- Set:
  - `timeout` (optional)
  - `iterations` (optional)
  - `experiments_folder`: to `"/benchmark_full/"` or `"/benchmark_shortened/"`
- Run:

  ```shell
  python solve_benchmark.py
  ```

Outputs are written to:

- `[benchmark_name]/results/<instance>/`
- `[benchmark_name]/figures/<instance>/`
- results table: `[benchmark_name]/results.csv`
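The results table can be inspected directly for quick checks before running the figure scripts. A minimal sketch using only the standard library; the column names (`model`, `status`) and the status value `"optimal"` are illustrative assumptions, so check them against the actual header of your `results.csv`:

```python
import csv
from collections import defaultdict

def optimal_counts(results_csv):
    """Count per model how many instances were solved to optimality.
    Column and status names are assumed, not taken from the actual CSV schema."""
    counts = defaultdict(int)
    with open(results_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"] == "optimal":
                counts[row["model"]] += 1
    return dict(counts)
```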
Upon completion of a benchmark run, use `verify_benchmark.py` to check the benchmark outputs for correctness and accuracy issues; it generates statistics about the stability of each model.

```shell
python verify_benchmark.py
```

After a full benchmark run and completed verification, the scripts in `[benchmark_name]/` can be used to generate figures.
- `results_optimality_count.py`: Bar chart of the optimality count for all modeling techniques (fig. 7 in paper)
- `results_accuracy_issues.py`: Bar chart of the accuracy issues per benchmark set for all modeling techniques (fig. 8 in paper)
- `results_solve_time.py`: Boxplot of solve time for all modeling techniques (fig. 9 in paper)
This class applies CPLEX defaults each time a model is built. Current defaults:
- `mdl.context.cplex_parameters.emphasis.mip = 2` (focus on optimality)
- `mdl.parameters.emphasis.numerical = 1` (numerical robustness)
Edit these values to tune solver behavior globally for all formulations.
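Applied to a DOcplex model, the defaults above amount to the following configuration fragment (it requires `docplex` and a full CPLEX installation to actually solve anything; the function name is illustrative, not the class's real API):

```python
from docplex.mp.model import Model

def apply_defaults(mdl: Model) -> Model:
    # Focus the MIP search on proving optimality
    mdl.context.cplex_parameters.emphasis.mip = 2
    # Emphasize numerical robustness
    mdl.parameters.emphasis.numerical = 1
    return mdl
```

Any other CPLEX parameter can be tuned the same way through the `mdl.parameters` tree before solving.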
This class controls naming and figure labeling:
- `mode`:
  - `SDF3`: default mode, uses the SDF3 naming configurations. This mode is also used for obtaining benchmark results.
  - `PAPER`: applies the naming configuration as in the paper. Additionally, the figures have improved styling that matches that of the illustrative example in Section II of the paper.
  - `CUSTOM`: apply a custom naming configuration.
- `time_units`: x-axis label suffix
- `figure_repetitions`: visualization setting
- `custom_task_name`, `custom_resource_name`: used in `CUSTOM` mode
This affects model naming in parsed data and generated plots/text outputs.
See the dedicated benchmark guide at `BENCHMARK_README.md`.
Use either the number or the tag when referencing:
- [APSP-2026] (Conditionally Accepted at RTAS 2026) van Os, R., Geilen, M., Hendriks, M., Basten, T., 2026. Optimal Resource Allocation and Periodic Scheduling.
- [QUINTON-2020] Quinton, F., Hamaz, I., Houssin, L., 2020. A mixed integer linear programming modelling for the flexible cyclic jobshop problem. Ann Oper Res 285, 335–352. https://doi.org/10.1007/s10479-019-03387-9