Conversation

@jo-mueller
Collaborator

@clbarnes I think some of the testing functionality got lost along the way at some point when I made you rebase #26 a bunch of times. I noticed that the JSON files in the attributes/zarr directories under tests were no longer executed; this PR should fix that.

@github-actions

Automated Review URLs

@jo-mueller
Collaborator Author

jo-mueller commented Jan 21, 2026

@clbarnes can you check whether the state of this PR matches how the testing scheme was originally intended? Or should it rather have superseded the suite tests? I have a vague feeling that we are overtesting here a bit :)

Edit: Can see myself that tests are now duplicated. Could I just remove the ...suite.json files and keep the rest?

@clbarnes
Contributor

Yes, the new single-file tests were intended to supersede the suite tests (as they're just the exact same tests, split out). I preferred to infer the validity of an example through its path (whether it's in the valid/ or invalid/ directory) rather than store it in the JSON - fewer things to change when you copy and paste a valid example and change something to make it invalid.
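A minimal sketch of that path-based scheme (the helper name, directory layout, and the assumption that only the first component under tests/ decides validity are illustrative, not the repo's actual code):

```python
from pathlib import Path


def expected_valid(example: Path, root: Path = Path("tests")) -> bool:
    """Infer whether an example JSON file is expected to validate,
    based on whether it sits under the top-level valid/ directory."""
    rel = example.relative_to(root)
    # Only the first path component decides validity, so a file named
    # e.g. 'invalid_axes.json' inside valid/ still counts as valid.
    return rel.parts[0] == "valid"
```

With this convention, flipping an example from passing to failing is just a file move, with no JSON edits needed.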

@jo-mueller jo-mueller moved this to In progress in RSE-Unit Jan 22, 2026
@jo-mueller jo-mueller self-assigned this Jan 22, 2026
@jo-mueller
Collaborator Author

jo-mueller commented Jan 26, 2026

@clbarnes could you give this a short look for a thumbs up or down? The RFC5 PR #67 makes some changes on top of this, but since those are unrelated, I thought merging this separately first could make the review there a bit easier.

In #67, I ended up using the valid field in the test metadata after all, since just looking for valid or invalid in the path turned out messier than expected (e.g., an example under valid/spec/invalid_axes.json should check out as valid, but its path contains both valid and invalid). So I added this information back in as a more direct annotation of the examples. Maybe the description of each example could even be expanded a bit to make clearer why a test should pass/fail, but I couldn't be bothered with that right now.

@clbarnes
Contributor

Sorry I didn't catch this earlier - the idea was for the testing to be handled by https://github.com/jo-mueller/ngff-spec/blob/expand-tests/conformance/ome_zarr_conformance.py#L109 , which looks specifically at the hierarchy under the root tests directory. That one script handles both the attributes-mode and zarr-mode tests, gives the option to include/exclude the strict tests, and generally filters on patterns. Using the conformance tester internally gives us some confidence that it's doing what it's supposed to, for other developers' usage.

However, the conformance tests could (should?) include types of validation which can't be caught by JSON Schema, which means that it may not be the best tool for testing the JSON Schema itself. We could maintain a list of JSON Schema expected failures and compare it to the conformance tester output.
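The expected-failures idea could look something like this (a hypothetical sketch; the function and key names are made up for illustration): keep a plain set of example names the JSON Schema is known to miss, and diff it against the conformance tester's actual failures.

```python
def check_expected_failures(
    actual_failures: set[str], expected_failures: set[str]
) -> dict[str, set[str]]:
    """Compare conformance-tester failures against a curated allowlist.

    'unexpected_failures' are regressions in the schema; 'unexpected_passes'
    mean an allowlist entry is stale and the schema now catches that case.
    """
    return {
        "unexpected_failures": actual_failures - expected_failures,
        "unexpected_passes": expected_failures - actual_failures,
    }
```

A CI job could then fail only when either set is non-empty, so known JSON-Schema blind spots don't block the build but silently fixed ones prompt an allowlist update.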
