It would be good to have a set of test models of varying complexity that can serve as benchmarks for speed tests. Then, as development progresses, we can run the benchmarks on old and new versions and see whether the code is getting faster. We could also see just how much slower PySD is than SDEverywhere. (I dread to think.)
I don't know much about benchmarking, but I expect there are some well-understood best practices we could adopt. I imagine we want a dozen or so models of varying complexity and size that exercise the range of behaviors we see in SD models. Ideally, they would be models the community already knows, so they would be easy to communicate about. World3, the Market Growth model, the Beer Game, the C-ROADS model, etc. might be good candidates. Tom Fiddaman has some of these on his blog - https://metasd.com/model-library/ - and we could ask his permission to use them.
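For what it's worth, here is a minimal sketch of the measurement side, assuming PySD's `read_vensim`/`run` API; the script name and the warmup/trial defaults are just placeholders. The two best practices it tries to encode are warmup runs and reporting a median over repeated trials rather than a single noisy measurement:

```python
# bench.py -- a minimal sketch, not an existing PySD tool.
# Assumes each benchmark model is a Vensim .mdl file that PySD can load.
import statistics
import sys
import time

import pysd


def median_run_time(mdl_path: str, warmups: int = 2, trials: int = 10) -> float:
    """Return the median wall-clock time (seconds) of one model run."""
    # Load once up front so translation cost is excluded from the measurement.
    model = pysd.read_vensim(mdl_path)
    # Warmup runs so caching effects don't distort the measured trials.
    for _ in range(warmups):
        model.run()
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        model.run()
        samples.append(time.perf_counter() - start)
    # The median is robust to one-off outliers caused by other processes.
    return statistics.median(samples)


if __name__ == "__main__":
    # Usage: python bench.py benchmarks/world3/world3.mdl
    print(f"{median_run_time(sys.argv[1]):.3f} s (median)")
```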
I'd propose putting these models together in a /benchmarks/ directory, and possibly listing the number of variables, subscript dimensionality, etc. associated with each one.
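To make that bookkeeping concrete, the directory could carry a small manifest; the shape below is only a suggestion (the field names are mine, and the World3 entry is a placeholder rather than real counts):

```python
# benchmarks/manifest.py -- hypothetical per-model metadata, so benchmark
# results can later be plotted against model size and complexity.
from dataclasses import dataclass


@dataclass
class BenchmarkModel:
    name: str                # directory name under /benchmarks/
    source: str              # provenance of the model (used with permission)
    num_variables: int       # total number of model variables
    max_subscript_dims: int  # highest subscript dimensionality used


MODELS = [
    # Placeholder entry; real counts would be filled in from the model itself.
    BenchmarkModel(
        name="world3",
        source="https://metasd.com/model-library/",
        num_variables=0,
        max_subscript_dims=0,
    ),
]
```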
@enekomartinmartinez, @alexprey, @julienmalard, @ivan-perl, @travisfranck, what are your thoughts?