Hi,
in our project we run a benchmark that fails for very old revisions, simply because the functionality we are testing did not exist back then. Unfortunately, every time we run all the benchmarks, vbench tries to re-run this benchmark for the revisions where it has already failed. Is there any way to prevent this behaviour? Or would it be possible to blacklist revisions on a per-benchmark basis?
We could of course wrap the benchmarked code in a try/except, but then the failing revisions would record very short run times, somewhat spoiling the data. A rough sketch of what I mean follows below.
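Just to illustrate the workaround I have in mind (not a real benchmark from our suite; new_feature_op and the setup data are made-up placeholders, and I am assuming the usual Benchmark(code, setup, ...) definition style):

from vbench.benchmark import Benchmark

# Shared setup executed for every revision before timing.
common_setup = """
import numpy as np
import pandas as pd
s = pd.Series(np.random.randn(100000))
"""

# The statement under test, wrapped so it never raises on old revisions.
stmt = """
try:
    s.new_feature_op()    # hypothetical method; missing in old revisions
except AttributeError:
    pass                  # benchmark "passes", but only times the failed lookup
"""

bm_new_feature = Benchmark(stmt, common_setup, name='new_feature_op')

The problem is that on revisions predating the feature the except branch is hit almost immediately, so the recorded timings are near zero and distort the plots.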
Thanks