Description
Right now I've basically got a benchmark for each significant method. That's been helpful while I'm at a stage where I want to see how a change directly affects a method's performance, but down the road this model doesn't really make sense. The current suite is noise-prone, and it takes forever to run.
Instead of 10-15 benchmarks, there should be 3-5 that each push the system to some extreme from a different angle. These should be computationally expensive benchmarks that each ask for 100+ frames of unique color generation. While one might focus on mapping techniques, another might make heavier use of angular/directional setters, or of filters.
With better benchmarks like these, regressions/improvements should be more distinguishable from the noise.
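A minimal sketch of what one of these heavier benchmarks could look like, assuming nothing about the library's actual API: `run_benchmark` and `render_frame` are hypothetical names, and the lambda workload is a toy stand-in for a real draw pipeline.

```python
import time

def run_benchmark(name, render_frame, frames=120):
    """Time a frame-generation workload end to end.

    `render_frame` is a stand-in for whatever draw pipeline the
    benchmark exercises; each call should produce one unique frame.
    """
    start = time.perf_counter()
    for i in range(frames):
        render_frame(i)
    elapsed = time.perf_counter() - start
    print(f"{name}: {frames} frames in {elapsed:.3f}s")
    return elapsed

# Toy stand-in workload; a real benchmark would call the library's
# draw/filter methods here instead.
elapsed = run_benchmark("angles", lambda i: [(i * j) % 255 for j in range(1000)])
```

The point of the 100+ frame requirement is that a single long measurement per benchmark averages out per-frame jitter, so run-to-run variance comes mostly from the code under test.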
- [ ] quirky_trail: heavy usage of angle-based draw methods
- [ ] raindrops: heavy usage of distance-based draw methods
- [ ] something focusing on filters
- [ ] something complex that focuses on index/range operations. Maybe 1D Conway's game of life?
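"1D Conway's game of life" is usually read as an elementary cellular automaton; a sketch under that assumption, using Rule 110 as one example rule (the rule choice is mine, not from the issue). Every cell update reads a 3-cell neighborhood, so the loop is dominated by exactly the index/range operations that benchmark is meant to stress.

```python
def step_rule110(cells):
    """One generation of the Rule 110 elementary cellular automaton.

    Heavy on index/range work: every cell reads a wrapped 3-cell
    neighborhood from the previous generation.
    """
    n = len(cells)
    rule = 0b01101110  # Rule 110 lookup table, bit k = output for neighborhood k
    out = [0] * n
    for i in range(n):
        # Pack left/center/right cells into a 3-bit neighborhood index.
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out[i] = (rule >> neighborhood) & 1
    return out

# A benchmark would run 100+ generations, treating each as one "frame".
state = [0] * 64
state[32] = 1  # single live cell seed
for _ in range(100):
    state = step_rule110(state)
```

Each generation depends on the previous one, so the work can't be optimized away, which keeps the measurement honest.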