nanobench documentation suggestion #123

@melg8

Description

When running the minimal example from the readme:

#define ANKERL_NANOBENCH_IMPLEMENT
#include <nanobench.h>

int main() {
    double d = 1.0;
    ankerl::nanobench::Bench().run("some double ops", [&] {
        d += 1.0 / d;
        if (d > 5.0) {
            d -= 5.0;
        }
        ankerl::nanobench::doNotOptimizeAway(d);
    });
}

you get output like this:

|   ns/op |           op/s | err% | total | benchmark
|--------:|---------------:|-----:|------:|:----------
|    7.92 | 126,322,335.20 | 2.5% |  0.01 | `some double ops`

This shows less information than the documentation presents for the same example:

| ns/op |           op/s | err% | ins/op | cyc/op |   IPC | bra/op | miss% | total | benchmark
|------:|---------------:|-----:|-------:|-------:|------:|-------:|------:|------:|:----------
|  7.52 | 132,948,239.79 | 1.1% |   6.65 |  24.07 | 0.276 |   1.00 |  8.9% |  0.00 | `some double ops`

Reproducible: https://godbolt.org/z/o7nEEP3P9

Only after reading #99 did I find out that Linux and the perf tooling are required to get the same extended output.

I would suggest presenting the "default" behavior first in the readme/docs, and only then the extended version, together with an explanation of the environment/tools needed for it to work. Right now, at least a few people have been confused about why the "hello world" example on Windows, in a Linux Docker container (which doesn't ship perf by default), and even on godbolt all show different output from what the readme presents as the default.
