Better benchmarks #21

@DavJCosby

Description

Right now I've basically got a benchmark for each significant method. That's been helpful while I'm at a stage where I want to see how a change directly affects a method's performance, but further down the road this model doesn't make much sense. The current suite is noise-prone, and it takes forever to run.

Instead of 10-15 benchmarks, there should be 3-5 that each push the system to some extreme from a different angle. These should be computationally expensive benchmarks that each ask for 100+ frames of unique color generation. While one might focus on mapping techniques, another could lean more heavily on angular/directional setters, or on filters.
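As a loose sketch of the shape one of these could take, assuming Criterion as the harness and a hypothetical `draw_frame` function standing in for the real drawing code (the 100-frame loop of unique color generation is the point, not the placeholder effect):

```rust
// Loose sketch of one "extreme" benchmark. `draw_frame` is a stand-in
// for the real effect code; black_box keeps the work from being
// optimized away between iterations.
use std::hint::black_box;

use criterion::{criterion_group, criterion_main, Criterion};

// Hypothetical per-frame effect: every frame produces different colors,
// so nothing can be cached from one frame to the next.
fn draw_frame(t: f32) -> Vec<(u8, u8, u8)> {
    (0..512)
        .map(|i| {
            let v = ((i as f32 * 0.1 + t).sin() * 0.5 + 0.5) * 255.0;
            (v as u8, 255 - v as u8, (t * 60.0) as u8)
        })
        .collect()
}

fn quirky_trail(c: &mut Criterion) {
    c.bench_function("quirky_trail", |b| {
        b.iter(|| {
            // 100+ frames of unique color generation per measurement.
            for frame in 0..100 {
                black_box(draw_frame(frame as f32 / 60.0));
            }
        })
    });
}

criterion_group!(benches, quirky_trail);
criterion_main!(benches);
```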

With better benchmarks like these, regressions and improvements should stand out more clearly from the noise.

- [ ] quirky_trail: heavy usage of angle-based draw methods
- [ ] raindrops: heavy usage of distance-based draw methods
- [ ] something focusing on filters
- [ ] something complex that focuses on index/range operations; maybe a 1D Conway's Game of Life? (see the sketch after this list)
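For that last item, a 1D life-like update is just an index-heavy pass over a ring of cells. A rough sketch of one step (the rule here, alive iff exactly one neighbor is alive, is an arbitrary choice for illustration, not a spec for the actual benchmark):

```rust
// Rough sketch of a 1D cellular-automaton step over a ring of cells,
// the kind of index/range-heavy work this benchmark could hammer on.
fn life_step(cells: &[bool]) -> Vec<bool> {
    let n = cells.len();
    (0..n)
        .map(|i| {
            let left = cells[(i + n - 1) % n]; // wrap around the strip
            let right = cells[(i + 1) % n];
            // Alive next frame iff exactly one neighbor is alive (rule 90).
            left ^ right
        })
        .collect()
}
```

Each generation's live cells would then be written out through whatever index/range setters the benchmark is meant to stress.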


Labels

e/performance: Changes intended to improve performance
needs_thought: Concepts that need to be explored more before they're ready to be worked on

Status

Backlog
