Conversation

@abhaasgoyal
Contributor

@abhaasgoyal abhaasgoyal commented May 20, 2025

  • Add endpoint support for the following CLI operations:
    • meorg output update [OPTIONS] MODEL_OUTPUT_ID - Update a model output, with default values for any fields not specified.
    • meorg output create [OPTIONS] MOD_PROF_ID NAME - Create a model output, with support for the same optional field parameters as the update operation.
    • meorg benchmark list MODEL_OUTPUT_ID EXP_ID - List the available benchmarks.
    • meorg benchmark update MODEL_OUTPUT_ID EXP_ID [BENCHMARK_IDS] - Change the benchmarks associated with a model output and experiment.
    • meorg experiment update MODEL_OUTPUT_ID [EXP_IDS] - Add new experiments to a given model output ID.
    • meorg experiment delete MODEL_OUTPUT_ID EXP_ID - Delete a specific experiment associated with a given model output ID.

Optional arguments are specified with square brackets [].

  • During testing for client/cli, add pre- and post-test fixtures as wrappers so that test definitions are independent of each other. Some examples:
    • Benchmark tests require at least 2 model output IDs associated with the same experiment.
    • Starting an analysis requires files to be uploaded to an existing model output ID.
    • Most tests have a prerequisite of automatically creating a fresh model output ID (with a name specific to the test), which is automatically deleted if the test passes. If the test fails, the model output ID is not deleted, so it remains available for further debugging in the web app or manually via other CLI options.
  • Group the tests for different endpoints as classes.
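The create-then-conditionally-delete fixture pattern described above can be sketched as follows. This is a minimal illustration, not the PR's actual test code: FakeClient and its method names (create_model_output, delete_model_output) are hypothetical stand-ins for the real meorg client, and the real tests use pytest fixtures rather than a context manager.

```python
from contextlib import contextmanager

class FakeClient:
    """Hypothetical stand-in for the real meorg client."""
    def __init__(self):
        self.model_outputs = {}
        self._next_id = 0

    def create_model_output(self, name):
        self._next_id += 1
        mo_id = f"mo-{self._next_id}"
        self.model_outputs[mo_id] = name
        return mo_id

    def delete_model_output(self, mo_id):
        self.model_outputs.pop(mo_id, None)

@contextmanager
def fresh_model_output(client, test_name):
    """Create a model output named after the test; delete it only if the
    test body succeeds, keeping it around for debugging when it fails."""
    mo_id = client.create_model_output(f"test-{test_name}")
    try:
        yield mo_id
    except Exception:
        raise  # test failed: leave the model output in place for inspection
    else:
        client.delete_model_output(mo_id)  # test passed: clean up

client = FakeClient()

# Passing test: the model output is cleaned up afterwards.
with fresh_model_output(client, "update_ok") as mo_id:
    pass
print(mo_id in client.model_outputs)   # False

# Failing test: the model output is kept for debugging.
try:
    with fresh_model_output(client, "update_bad") as mo_id2:
        raise AssertionError("simulated failure")
except AssertionError:
    pass
print(mo_id2 in client.model_outputs)  # True
```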

@bschroeter
Contributor

What's going on with this PR?

@abhaasgoyal
Contributor Author

abhaasgoyal commented Jul 1, 2025

For now,

  1. Achieved compatibility with me.org API v3.0.0.
  2. Implemented all available endpoints, including the additional endpoints for benchmark and experiment operations, and wrote tests.

Working on

  1. Implementing the workflow in benchcab
  2. Writing tests which can be run standalone in pytest (currently, tests related to model output must be run in a specific order).

@abhaasgoyal abhaasgoyal marked this pull request as ready for review August 5, 2025 23:03
@abhaasgoyal abhaasgoyal requested a review from bschroeter August 5, 2025 23:04
@bschroeter
Contributor

@abhaasgoyal, would you mind putting some more detail into this PR as to what has been done and why there are such sweeping changes to existing functionality? The diffs are quite long so some additional context would be very welcome. Cheers, B

@abhaasgoyal
Contributor Author

@bschroeter, I have provided the list of the significant changes related to the updated testing infrastructure, and the list of new endpoints in the PR description. Happy to provide further clarifications on the same.

Contributor

@bschroeter bschroeter left a comment


Update docstrings with more details about the functions. Otherwise LGTM.

json=dict(benchmarks=updated_benchmarks),
)

def model_output_experiments_extend(
Contributor


extend? Should this be "update"?

Contributor Author


Here, if an experiment already exists, it is not replaced by the new set of experiments; the new experiments are simply appended to the model output ID's existing list (unlike updating benchmarks, which replaces the benchmark list).
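The distinction being described can be sketched in two small functions. This is only an illustration of the semantics, not the actual endpoint code; the function names and list-of-ID representation are assumptions for the sake of the example.

```python
def update_benchmarks(current, new):
    """'update' semantics: the new list replaces the old one entirely."""
    return list(new)

def extend_experiments(current, new):
    """'extend' semantics: new experiment IDs are appended; existing
    ones are kept, and duplicates are not added."""
    merged = list(current)
    for exp_id in new:
        if exp_id not in merged:
            merged.append(exp_id)
    return merged

print(update_benchmarks(["b1", "b2"], ["b3"]))         # ['b3']
print(extend_experiments(["e1", "e2"], ["e2", "e3"]))  # ['e1', 'e2', 'e3']
```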

Contributor

@SeanBryan51 SeanBryan51 left a comment


LGTM. I'm not able to comment on anything specific since I'm not familiar with the codebase, so I'm relying on Ben's previous comments. Just a small suggestion for one of the docstrings, but other than that all good!

@abhaasgoyal abhaasgoyal force-pushed the 3-model-output-endpoint branch from fffbdf0 to 532ae0c Compare September 11, 2025 05:37
@abhaasgoyal abhaasgoyal merged commit 6311596 into main Sep 12, 2025
2 checks passed
abhaasgoyal added a commit to CABLE-LSM/benchcab that referenced this pull request Oct 15, 2025
Create a workflow for benchcab that no longer requires the user to enter a model_output_id. Instead, the user specifies which branch should be used as the model_output_name, from which benchcab performs the necessary workflow.

Uses the new API endpoints from me.org API v3.0.0 (see CABLE-LSM/meorg_client#67 for more details)
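The branch-name-based workflow described in the commit message could look roughly like the sketch below. This is a hypothetical illustration, not benchcab's actual implementation: FakeClient and the method names list_model_outputs/create_model_output are placeholders for whatever the real client exposes.

```python
class FakeClient:
    """Hypothetical stand-in for the meorg client used by benchcab."""
    def __init__(self, outputs=None):
        self.outputs = dict(outputs or {})
        self._n = len(self.outputs)

    def list_model_outputs(self):
        return dict(self.outputs)  # {model_output_id: model_output_name}

    def create_model_output(self, name):
        self._n += 1
        mo_id = f"mo-{self._n}"
        self.outputs[mo_id] = name
        return mo_id

def resolve_model_output_id(client, branch_name):
    """Find (or create) the model output whose name matches the branch,
    so the user never has to supply a raw model_output_id."""
    for mo_id, name in client.list_model_outputs().items():
        if name == branch_name:
            return mo_id
    return client.create_model_output(branch_name)

client = FakeClient({"mo-1": "main"})
print(resolve_model_output_id(client, "main"))        # mo-1 (existing)
print(resolve_model_output_id(client, "my-feature"))  # mo-2 (newly created)
```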