Consider this:
```cpp
bench::Bench b;
b.relative(true).unit("item");

b.batch(100'000).run("baseline", [] {
    // single-threaded, so batch size has little to no impact on
    // ns/item, but the run is too slow with the larger batch size (5-10 s).
});

b.batch(1'000'000).run("relative", [] {
    // multi-threaded; a larger batch size is needed to peak out.
    // Duration is less affected by batch size.
});
```
The documentation states:
$100\% * \frac{baseline}{runtime}$
which I assume means the batch size is ignored. If so, I think it should be changed to something like:
$100\% * \frac{baselinePerItem}{runtimePerItem}$
In theory this gives the same result when no batch() size is assigned. Without the change, the relative percentage is skewed whenever the batch sizes differ, and one has to choose between a skewed percentage (the other statistics are of course correct) and a very slow-running benchmark.