Model output endpoint #67
Conversation
What's going on with this PR?

For now,
Working on

@abhaasgoyal, would you mind putting some more detail into this PR as to what has been done and why there are such sweeping changes to existing functionality? The diffs are quite long, so some additional context would be very welcome. Cheers, B

@bschroeter, I have provided the list of significant changes related to the updated testing infrastructure, and the list of new endpoints, in the PR description. Happy to provide further clarification on any of it.
bschroeter
left a comment
Update docstrings with more details about the functions. Otherwise LGTM.
json=dict(benchmarks=updated_benchmarks),
)

def model_output_experiments_extend(
extend? Should this be "update"?
Here, if an experiment already exists it is not replaced by the new set of experiments; the new experiments are simply added to the model output ID (unlike `benchmark update`, which replaces the benchmarks outright).
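A minimal sketch of the extend-vs-update distinction described above, using plain dicts as a stand-in for the API's state (hypothetical data shapes, not the actual meorg_client code):

```python
def experiments_extend(record: dict, new_exp_ids: list) -> dict:
    """Add experiments to a model output record; existing ones are kept."""
    existing = record.setdefault("experiments", [])
    for exp_id in new_exp_ids:
        if exp_id not in existing:
            existing.append(exp_id)
    return record


def benchmarks_update(record: dict, new_benchmark_ids: list) -> dict:
    """Replace the benchmark list outright."""
    record["benchmarks"] = list(new_benchmark_ids)
    return record


record = {"experiments": ["exp-1"], "benchmarks": ["bm-1"]}
experiments_extend(record, ["exp-1", "exp-2"])  # exp-1 kept, exp-2 added
benchmarks_update(record, ["bm-9"])             # bm-1 discarded, bm-9 set
print(record)
# {'experiments': ['exp-1', 'exp-2'], 'benchmarks': ['bm-9']}
```

The ID values here are placeholders; the point is only that one operation accumulates while the other replaces.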
SeanBryan51
left a comment
LGTM. I'm not able to comment on anything specific since I'm not familiar with this area, so I'm relying on Ben's previous comments. Just a small suggestion for one of the docstrings, but other than that all good!
Force-pushed from fffbdf0 to 532ae0c
Create a workflow for benchcab that no longer requires the user to enter a model_output_id. Instead, the user specifies which branch should be used as the model_output_name, from which benchcab runs the necessary workflow. Uses the new API endpoints from me.org API v3.0.0 (see CABLE-LSM/meorg_client#67 for more details).
- `meorg output update [OPTIONS] MODEL_OUTPUT_ID` - Update operation related to model output, with default values for fields.
- `meorg output create [OPTIONS] MOD_PROF_ID NAME` - Add support for optional parameters for model output fields (similar to the update operation) while creating model output.
- `meorg benchmark list MODEL_OUTPUT_ID EXP_ID` - List available benchmarks.
- `meorg benchmark update MODEL_OUTPUT_ID EXP_ID [BENCHMARK_IDS]` - Change benchmarks associated with a model output and experiment.
- `meorg experiment update MODEL_OUTPUT_ID [EXP_IDS]` - Add new experiments to a given model output ID.
- `meorg experiment delete MODEL_OUTPUT_ID EXP_ID` - Delete a specific experiment associated with a given model output ID.

Optional arguments are shown in square brackets `[]`.
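The name-based workflow described above hinges on resolving a model_output_name (a branch name) to its model_output_id before calling the endpoints. A sketch of that lookup step in plain Python (hypothetical data shapes and function name, not the actual benchcab or meorg_client code):

```python
def resolve_model_output_id(outputs: list, name: str) -> str:
    """Return the ID of the single model output whose name matches `name`.

    `outputs` is assumed to be a list of dicts with "id" and "name" keys,
    e.g. as returned by a listing endpoint. Raises if the name is missing
    or ambiguous, since the downstream endpoints need exactly one ID.
    """
    matches = [o["id"] for o in outputs if o["name"] == name]
    if len(matches) != 1:
        raise ValueError(
            f"Expected exactly one model output named {name!r}, "
            f"found {len(matches)}"
        )
    return matches[0]


# Placeholder records standing in for a listing response.
outputs = [
    {"id": "mo-101", "name": "main"},
    {"id": "mo-102", "name": "my-feature-branch"},
]
print(resolve_model_output_id(outputs, "my-feature-branch"))
# mo-102
```

Failing loudly on zero or multiple matches keeps the user-facing behaviour predictable when branch names collide.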