Benchmark flexibility design discussion #85

Description

@mmilenkoski

As the number of things we want to modify in the benchmarks increases, we need to discuss how to handle this situation. On the one hand, it is important that we have a consistent benchmark that is exactly the same every time. On the other hand, we would like to allow users to try out and compare different backends, optimizers, etc. Introducing this flexibility raises design questions not only for the benchmarks themselves, but also for the dashboard, CLI, and other components. I am opening this issue to serve as a place for discussing these design choices.

Some of the questions we need to address are:

  1. Will we create separate images: one for the official benchmark and one intended for experimentation?
  2. For the experimentation image, how do we handle parameter selection in the dashboard and the CLI?
  3. How do we pass the selected parameters to the experimentation image? (One possible mechanism is sketched below.)

Feel free to propose solutions and add new questions as they come up.
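
As a starting point for question 3, here is a minimal sketch of one possible mechanism, assuming the selected parameters are passed to the experimentation image as container environment variables. The parameter names (`BACKEND`, `OPTIMIZER`, `BATCH_SIZE`) and their defaults are hypothetical placeholders, not our actual benchmark configuration; the point is only that the same entrypoint could serve both images by falling back to the official defaults when no overrides are set.

```python
# Hypothetical sketch: the run entrypoint reads overrides from environment
# variables and falls back to the official benchmark defaults, so the same
# code can back both the official image (no overrides) and the
# experimentation image (overrides set by the dashboard/CLI).
import os

# Placeholder parameter names and defaults, chosen only for illustration.
OFFICIAL_DEFAULTS = {
    "BACKEND": "mpi",
    "OPTIMIZER": "sgd",
    "BATCH_SIZE": "128",
}


def load_run_config():
    """Return the run configuration, with environment variables overriding defaults."""
    return {name: os.environ.get(name, default) for name, default in OFFICIAL_DEFAULTS.items()}


if __name__ == "__main__":
    print(load_run_config())
```

Under this scheme, the dashboard and CLI would only need to translate the user's selections into environment variables on the experimentation container, while the official image would ignore or refuse overrides to stay reproducible. Whether that is preferable to, say, mounting a config file into the container is exactly the kind of trade-off this issue should settle.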
