This repository was archived by the owner on Apr 8, 2024. It is now read-only.

Add option to report confidence intervals for benchmark results #162

@perezbecker

Description

I was thinking we might want to add an option to run the same benchmark several times, so we can quantify the natural variance in LightGBM training/inference times. This would allow us to report confidence intervals alongside benchmark results.

I have measured training times that varied by about 20% when running distributed LightGBM repeatedly under identical conditions. In some cases, this intrinsic variance might be larger than the time differences measured across the variants we are benchmarking. Confidence intervals in the reported results would be especially valuable in those cases.
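
For illustration, here is a minimal sketch of how repeated runs could be aggregated into a confidence interval. `run_benchmark` is a hypothetical stand-in for whatever callable executes a single training run and returns its wall-clock time; the t-based interval via `scipy.stats` is just one possible choice, not something the benchmark currently provides.

```python
# Minimal sketch: aggregate repeated benchmark timings into a confidence
# interval for the mean run time. `run_benchmark` is a hypothetical
# placeholder, not part of this repo.
import math
import statistics
import time

from scipy import stats


def run_benchmark() -> float:
    """Hypothetical single benchmark run; returns elapsed seconds."""
    start = time.perf_counter()
    # ... train/inference with LightGBM here ...
    return time.perf_counter() - start


def confidence_interval(samples, confidence=0.95):
    """Two-sided t-based confidence interval for the mean of `samples`."""
    n = len(samples)
    mean = statistics.mean(samples)
    # Standard error of the mean, using the sample standard deviation (n-1).
    sem = statistics.stdev(samples) / math.sqrt(n)
    # Critical t value for n-1 degrees of freedom.
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return mean - t_crit * sem, mean + t_crit * sem


timings = [run_benchmark() for _ in range(10)]
low, high = confidence_interval(timings)
print(f"mean={statistics.mean(timings):.2f}s, 95% CI=[{low:.2f}s, {high:.2f}s]")
```

The number of repetitions could be a CLI/config option; even 5-10 runs would let us report an interval instead of a single point estimate.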
