Feat: implement fetch and save outputs for modules #78
Merged
The `fetch_outputs` commands will load a module's outputs into memory as dataclasses and similar types (for the most part: exess exports are a bit complicated, so we leave them as raw JSON dicts or HDF5 bytes for now), and `save_outputs` will save them to the workspace as before. Implement both for `exess.exess`, and rename auto3d's function to `fetch_outputs`, as it should now be named. Co-authored-by: OpenAI Codex (GPT-5.4 High) <codex@openai.com>
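As a rough sketch of the pattern (every name below is hypothetical, not the real rush API): `fetch_outputs` turns a run's raw output payload into a typed dataclass held in memory, while the complicated exess exports stay as raw JSON dicts.

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical shapes -- the real rush dataclasses and field names differ.
@dataclass
class ExessResult:
    energies: list[float]
    export: dict[str, Any]  # exess exports stay as raw JSON dicts for now

def fetch_outputs(raw: dict[str, Any]) -> ExessResult:
    """Load a module's raw outputs into a typed, in-memory dataclass."""
    return ExessResult(energies=list(raw["energies"]), export=raw["export"])
```

The point of the dataclass is ergonomics: callers get attribute access and editor completion instead of digging through nested dicts.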
Also, rename some exess dataclasses for clarity. Co-authored-by: OpenAI Codex (GPT-5.4 High) <codex@openai.com>
Move the EXESS geo-opt and QM/MM wrappers into dedicated top-level modules, keep the energy-facing helpers in rush.exess, and update the surrounding docs/examples/tests to use the new import structure. This also cleans up the EXESS tutorial and reference snippets so they match the current output shapes and helper APIs. Co-authored-by: OpenAI Codex (GPT-5.4 High) <codex@openai.com>
Add typed fetch_outputs and save_outputs helpers for the EXESS geometry optimization and QM/MM modules, update the examples and tutorials to use them, and add focused helper coverage. Co-authored-by: OpenAI Codex (GPT-5.4 High) <codex@openai.com>
- rename RushError to RushRunError and raise it when it happens
- use a distinct str-like RunID type (via NewType)
- use a distinct str-like Auto3DError type (via NewType)
- change collect_run to behave more generically, so that output validation rightfully happens in each module
- overload module functions so that `collect=...` typing is automatic
- don't allow run ID pass-throughs in fetch / save outputs; it complicates the typing too much and mixes concerns
- add tons of comments and typing, esp. for fetch / save outputs
- remove unnecessary optionals from some module result types
Also, remove schema_version from the exess result dataclasses.
In the process of doing this, I've moved to raising exceptions for run failures. We'll add an alternative path that avoids raising exceptions for module function calls, but for now, this greatly simplifies things.
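The raise-on-failure pattern could look roughly like this (a hedged sketch: the real RushRunError may carry different fields, and `collect_run`'s actual signature differs):

```python
# Hypothetical sketch of the run-failure exception and collection path.
class RushRunError(Exception):
    def __init__(self, run_id: str, message: str) -> None:
        super().__init__(f"run {run_id} failed: {message}")
        self.run_id = run_id

def collect_run(run_id: str, status: str, outputs: dict) -> dict:
    """Return outputs on success; raise instead of returning an error value."""
    if status != "done":
        raise RushRunError(run_id, outputs.get("error", "unknown failure"))
    return outputs
```

Raising keeps the success path's return type clean, which is what makes the typed `fetch_outputs` helpers simple: they only ever see successful outputs.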
Also, move module output helper tests into their own folder. Co-authored-by: OpenAI Codex (GPT-5.4 High) <codex@openai.com>
Merging because the changes here were reviewed in the 7b2 PR.
Current status is that this is done for `exess.exess`, and we will continue with this pattern moving forward. Will update docs to demonstrate a usage pattern with `fetch_outputs`, which should be a lot more ergonomic.