Add AI-generated unit testing from SECQUOIA fork #10
base: main
Conversation
Ickaser left a comment
Note to self: still need to reorganize tests across files and DRY them out a bit. Some tests are still not passing; so far that sometimes means the test itself is nonsense, but in a couple of cases it meant there was an actual bug.
Note to self: some tests are skipped because they pertain to cases where decisions must still be made (e.g. how to gracefully handle cases where the optimization has no feasible solution). All others should be passing now, but opt_Pch_Tsh and opt_Tsh still haven't been reviewed, and in general the tests should be reorganized logically across test modules (rather than ..._coverage.py, etc.). It's annoying to wait for some slow tests to run when iterating frequently, but I come down against the "only run slow tests after merge" philosophy. Either separate the tests logically so that only the "relevant" ones run on a PR, or run all of them; at most, let a developer run the "slow" tests separately on their local machine with
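The comment above is cut off, so the intended mechanism isn't stated. As a hypothetical sketch only (the marker name, file names, and tests below are assumptions, not part of this PR), pytest markers are one common way to both skip undecided cases with a recorded reason and deselect slow tests during local iteration:

```python
# conftest.py (hypothetical) -- register a custom "slow" marker so pytest
# does not warn about an unknown mark.
def pytest_configure(config):
    config.addinivalue_line("markers", "slow: marks tests as slow to run")
```

```python
# test_optimization.py (hypothetical module and test names)
import pytest

@pytest.mark.skip(reason="behavior for infeasible optimization not yet decided")
def test_infeasible_solution_handling():
    ...

@pytest.mark.slow
def test_full_parameter_sweep():
    ...  # long-running case
```

With this in place, `pytest -m "not slow"` runs only the fast tests locally, while a plain `pytest` invocation (e.g. in CI) runs everything.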
This takes just the unit testing scripts, etc. from #6 and turns them into a separate PR.
Ping: @bernalde. I may request your feedback on some of the structure and patterns of these unit tests, but there are some things I already know I would like to change about them. I intend to merge this before #9 to ensure that I don't break the API in that refactor.