Description
I think it would be nice to have a little benchmark script that checks how long object creation and the evaluate call take for common use cases (say, one to a few axes, 10 to 100 points per axis, and evaluate calls for 1, 10, 100, 1000, and 1M points).
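A benchmark along those lines could be sketched roughly as follows, timing `scipy.interpolate.RegularGridInterpolator` as one candidate. The grid shape (two axes, 100 points each) and the repetition counts are arbitrary choices for illustration:

```python
# Sketch of the proposed benchmark: time instantiation and evaluation of
# scipy.interpolate.RegularGridInterpolator for a 2-axis grid with 100
# points per axis and a range of query-point counts.
import timeit

import numpy as np
from scipy.interpolate import RegularGridInterpolator

x = np.linspace(0, 1, 100)
y = np.linspace(0, 1, 100)
values = np.random.default_rng(0).random((100, 100))

# Instantiation time (expected to be negligible: no precomputation happens)
t_init = timeit.timeit(lambda: RegularGridInterpolator((x, y), values), number=100) / 100
print(f"instantiation: {t_init * 1e6:.1f} us")

# Evaluation time for increasing numbers of query points
interp = RegularGridInterpolator((x, y), values)
for n in (1, 10, 100, 1000, 1_000_000):
    points = np.random.default_rng(1).random((n, 2))
    t = timeit.timeit(lambda: interp(points), number=10) / 10
    print(f"evaluate {n:>9d} points: {t * 1e3:.3f} ms")
```

The same loop could be repeated for other interpolator choices to compare them on equal footing.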
I did something like this here:
https://github.com/gammapy/gammapy-extra/blob/master/checks/interpolate_spectral_cube.ipynb
The conclusion was that scipy.interpolate.RegularGridInterpolator is very fast to instantiate (basically no time; no computation happens there, so it could even be instantiated inside the IRF evaluate call, although doing it in __init__ is probably also fine because the values don't change during use), and it is fastest to evaluate (it just does straight nearest-neighbour or bilinear interpolation).
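For reference, a minimal example of the bilinear case on a toy 2x2 grid (the grid and values are made up here):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

x = np.array([0.0, 1.0])
y = np.array([0.0, 1.0])
values = np.array([[0.0, 1.0],
                   [1.0, 2.0]])  # samples of f(x, y) = x + y on the grid

interp = RegularGridInterpolator((x, y), values)  # default method="linear"
print(interp([[0.5, 0.5]]))  # bilinear reproduces x + y -> [1.]
```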
Another good option is scipy.ndimage.map_coordinates, which I think is also very fast in the linear case and can additionally do spline interpolation. So maybe using it from the start is even the better option.
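A small sketch of how map_coordinates is called (the array and query points are invented for illustration). Note that it works in fractional pixel coordinates, not axis values, and the `order` argument selects linear (1) or spline (2, 3, ...) interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

values = np.arange(16, dtype=float).reshape(4, 4)  # values[i, j] = 4*i + j

# One row per array axis, one column per query point (pixel coordinates)
coords = np.array([[1.5, 0.0],   # i coordinates
                   [2.5, 3.0]])  # j coordinates

print(map_coordinates(values, coords, order=1))  # linear -> [8.5, 3.]
print(map_coordinates(values, coords, order=3))  # cubic spline variant
```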
Karl used it in his fitshistogram class:
https://github.com/cta-observatory/ctapipe/blob/master/ctapipe/utils/fitshistogram.py#L349
Probably either choice is good; maybe it can even be made a generic option, so that we can later do real-world testing with different IRFs and see which one is faster / more precise when using order-2 or order-3 splines.
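Such a generic option might look roughly like the wrapper below. The class name, the `method` parameter, and the uniform-axis assumption are all hypothetical here, just to show how the two scipy backends could be swapped behind one interface:

```python
# Hypothetical sketch: one wrapper with a selectable interpolation backend,
# so the two candidates can be compared on real IRFs later.
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.ndimage import map_coordinates


class GridInterpolator:
    def __init__(self, axes, values, method="regular_grid", order=1):
        self.axes = [np.asarray(a) for a in axes]
        self.values = np.asarray(values)
        self.method = method
        self.order = order
        if method == "regular_grid":
            self._interp = RegularGridInterpolator(self.axes, self.values)

    def __call__(self, points):
        points = np.asarray(points)
        if self.method == "regular_grid":
            return self._interp(points)
        # map_coordinates wants fractional pixel indices, so convert axis
        # values to indices (this assumes uniformly spaced axes)
        coords = np.array([
            (points[:, k] - ax[0]) / (ax[1] - ax[0])
            for k, ax in enumerate(self.axes)
        ])
        return map_coordinates(self.values, coords, order=self.order)


# Both backends should agree for linear interpolation on a uniform grid:
axes = (np.array([0.0, 1.0]), np.array([0.0, 1.0]))
values = np.array([[0.0, 1.0], [1.0, 2.0]])  # f(x, y) = x + y
pts = np.array([[0.5, 0.5]])
a = GridInterpolator(axes, values, method="regular_grid")(pts)
b = GridInterpolator(axes, values, method="map_coordinates", order=1)(pts)
print(a, b)  # -> [1.] [1.]
```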