Conversation
This is great, cheers! I'd suggest that we parametrize those tests such that they run for all tasks, i.e., something along the lines of:

```python
@pytest.mark.parametrize(
    "task_name",
    [task_name for task_name in sbibm.get_available_tasks()],
)
```

What do you think?
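For context, a complete test built on that decorator might look like the sketch below; the test body and function name are illustrative assumptions following the basic prior/simulator API from the README, not code from this PR:

```python
import pytest
import sbibm


@pytest.mark.parametrize(
    "task_name",
    [task_name for task_name in sbibm.get_available_tasks()],
)
def test_task_api(task_name):
    # Exercise the basic task API for every available task.
    task = sbibm.get_task(task_name)
    prior = task.get_prior()
    simulator = task.get_simulator()

    # Draw a couple of parameter sets from the prior and simulate them.
    thetas = prior(num_samples=2)
    xs = simulator(thetas)
    assert xs.shape[0] == 2
```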
I think that this makes a lot of sense. I can add that if you want.
That would be great!
That's an excellent suggestion. I agree that, depending on the task, we will probably want specialized tests as well -- probably mostly marked as slow for CI execution.
- 3 types of test suites added:
  - `test_task_interface.py` for mere API tests
  - `test_task_rej_abc_demo.py` for testing the API demonstrated on the landing page (README.md)
  - `test_task_benchmark.py` to see/document whether the benchmarks work
- added some "noref" sentinels for tasks which do not have a reference posterior
- using sets to better work with the list of tasks to run tests for (see the sketch after this list)
- as of now, Julia-based tests are excluded
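To make the "noref" sentinel and the set-based selection concrete, here is a minimal sketch; the set names and memberships are hypothetical placeholders, not the identifiers actually used in this PR:

```python
import sbibm

# Hypothetical sets; actual membership depends on the tasks sbibm ships.
NOREF_TASKS = {"some_noref_task"}  # tasks without a reference posterior
JULIA_TASKS = {"some_julia_task"}  # tasks requiring a Julia backend

ALL_TASKS = set(sbibm.get_available_tasks())

# Set arithmetic keeps the selection readable: reference-posterior tests
# should run only for tasks that provide a reference posterior, and
# Julia-based tasks are excluded for now.
TASKS_WITH_REFERENCE = sorted(ALL_TASKS - NOREF_TASKS - JULIA_TASKS)
```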
@jan-matthis done for now. I am happy to adapt #18 if this can be merged earlier.
jan-matthis left a comment
Great, I think this provides a good foundation for testing existing and future tasks. Thanks a lot for your work on this, much appreciated!
I only left small comments
- include noref tasks, as we don't use the reference posterior
- add a TODO for later (sketched below)
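Acting on these two comments might look roughly like the following; the set names and the TODO wording are assumptions for illustration, continuing the hypothetical sets from the earlier sketch:

```python
import sbibm

ALL_TASKS = set(sbibm.get_available_tasks())
JULIA_TASKS = {"some_julia_task"}  # hypothetical, as in the earlier sketch

# Interface tests never consult the reference posterior, so the noref
# tasks are deliberately *not* subtracted here.
INTERFACE_TASKS = sorted(ALL_TASKS - JULIA_TASKS)

# TODO: add specialized per-task tests against the reference posterior,
# probably marked as slow for CI execution.
```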
Thanks for the review. I hope I addressed those comments alright.
jan-matthis left a comment
Except for a single comment this looks good to go from my side, cheers!
Cheers!
As discussed in #19, this PR isolates the unit test for `two_moons` to serve as a starting point for establishing tests for all tasks.