I was thinking that we might want to consider adding an option to run the same benchmark several times, so we can quantify the natural variance in LightGBM training/inference times. This would allow us to report confidence intervals in benchmark results.
I have measured training times that varied by about 20% when running distributed LightGBM repeatedly under identical conditions. In some cases, this intrinsic variance might be larger than the time differences measured across the variants we are benchmarking. Having confidence intervals in benchmark results would be especially beneficial in these cases.
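As a rough illustration of what I have in mind, here is a minimal sketch: it times repeated, identical LightGBM training runs and reports a t-based 95% confidence interval. The synthetic workload and the `time_training` helper are hypothetical stand-ins, not part of this repo's benchmark API.

```python
import time
import statistics

import lightgbm as lgb
import numpy as np
from scipy import stats


def time_training(X, y, params, num_boost_round=100):
    """Run one training job and return its wall-clock time in seconds.

    Dataset construction happens lazily inside lgb.train, so it is
    included in the timed region for every repetition.
    """
    train_set = lgb.Dataset(X, label=y)
    start = time.perf_counter()
    lgb.train(params, train_set, num_boost_round=num_boost_round)
    return time.perf_counter() - start


def confidence_interval(samples, confidence=0.95):
    """Mean and two-sided Student-t confidence interval (small n)."""
    n = len(samples)
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / n ** 0.5
    half_width = stats.t.ppf((1 + confidence) / 2, df=n - 1) * sem
    return mean, mean - half_width, mean + half_width


if __name__ == "__main__":
    # Hypothetical synthetic workload; a real run would use the
    # benchmark's actual data and parameters.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(100_000, 50)).astype(np.float32)
    y = rng.normal(size=100_000).astype(np.float32)
    params = {"objective": "regression", "verbosity": -1}

    # Repeat the identical benchmark to capture run-to-run variance.
    times = [time_training(X, y, params) for _ in range(10)]
    mean, lo, hi = confidence_interval(times)
    print(f"training time: {mean:.2f}s  95% CI [{lo:.2f}s, {hi:.2f}s]")
```

If timing distributions turn out to be skewed, reporting percentiles or a bootstrap interval instead of a t-interval might be more appropriate, but the repeated-run plumbing would be the same.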