scheduler: Stop scheduling from the imperative queue when asked to quit #2005
Conversation
```diff
 # Pull elements forward through queues
 elements = []
-for queue in queues:
+for queue in self.queues:
```
This line looks very dubious.

The first part of the function's job is to select the queues to process in the latter half of the function, and that seems to make sense. And except for this line you've changed, the rest of the function only operates on the selected queues.

I wonder if this change is really needed, or if only the previous change is needed. Given this is such a sensitive code block, I think this change needs further clarity.
> The first part of the function's job is to select the queues to process in the latter half of the function, and that seems to make sense. And except for this line you've changed, the rest of the function only operates on the selected queues.
Correct, and that's the root cause of the issue I'm trying to fix.
There are two operations in this function ("pull elements forward through queues", and "kick off whatever jobs can be processed at this time"), and they shouldn't operate on the same list of queues.
Let's consider a simple example where we have three queues: fetch, build and push. Once we receive the quit signal, we only want to:
- pull elements forward from build to push
- kick off new jobs in push
What the current code does is:
- pull elements forward from build to push
- kick off new jobs in build (<- not desired, and is the issue I'm trying to fix)
- kick off new jobs in push
What the new code does is:
- pull elements forward from fetch to build (<- undesired but harmless; this is the issue you're pointing out)
- pull elements forward from build to push
- kick off new jobs in push
I hope this explains it. So while this line is indeed "dubious", it needs to be different from the other one. And since it's harmless to pull jobs into the next queues even when asked to quit, I think it's fine to do that unconditionally.
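To make the two phases concrete, here is a minimal sketch of the pass described above. The names (`Scheduler`, `queues_to_process`, `imperative_queue`, `harvest_jobs`) are illustrative assumptions, not BuildStream's actual internals:

```python
class Scheduler:
    def __init__(self, queues, imperative_queue):
        self.queues = queues                      # ordered, e.g. [fetch, build, push]
        self.imperative_queue = imperative_queue  # e.g. the build queue
        self.terminated = False                   # set once the user asks to quit

    def _schedule_queue_jobs(self):
        # After a quit request, only queues *after* the imperative one
        # (e.g. push) may start new jobs: a finished build can still be
        # pushed, but no new builds start.
        if self.terminated:
            start = self.queues.index(self.imperative_queue) + 1
            queues_to_process = self.queues[start:]
        else:
            queues_to_process = self.queues

        # Pull elements forward through ALL queues, regardless of
        # whether we are processing those queues: harmless for the
        # skipped queues, and it ensures that already-built elements
        # still reach the push queue.
        elements = []
        for queue in self.queues:
            queue.enqueue(elements)
            elements = list(queue.dequeue())

        # Only kick off new jobs in the selected queues.
        for queue in queues_to_process:
            queue.harvest_jobs()
```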
Ok... I've done some mental gymnastics and I agree with your statement :)
What I would like to see here is:
- Rename the local `queues` variable to something more explicit, perhaps `queues_to_process`
- At least slightly elaborate on the comment "Pull elements forward through queues"
  - Perhaps something like "Pull elements forward through all queues, regardless of whether we are processing those queues"
  - Maybe even a little explanation, like "We need to propagate elements through all queues even if we are not processing post-imperative queues, so that we do end up processing the jobs we need to"
Can we add a test here? I think that we can potentially test correct behavior, as described in the comments of #1787, by constructing a test which:
I think we already have a similar test in [...]
I think there is a misunderstanding here. What this test does (and what makes me confident that my changes don't break anything) is test for correctness: "buildstream pushes the artifacts it has built before quitting". My change is about interactivity: "buildstream quits ASAP when asked to".

I don't think I can reliably test the interactivity part, as buildstream isn't deterministic (or at least isn't guaranteed to be deterministic) in the order in which it schedules ready jobs. How I would test this manually is to have two build jobs that would run in parallel, set `--builders` to 1, then interrupt the build and request quit during the build of the first element that gets scheduled.

Talking this through, I think we might be able to do the same thing with failed builds: have two elements that fail, and launch the build with `--on-error quit`.
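A hypothetical regression test along those lines might look like the sketch below. The `cli` fixture and assertion helpers are modelled on BuildStream's test suite (exact import paths vary between versions), and the element names are made up:

```python
import os

import pytest

# In BuildStream 2 this lives in buildstream.exceptions; older
# versions used buildstream._exceptions.
from buildstream.exceptions import ErrorDomain

DATA_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), "project")


@pytest.mark.datafiles(DATA_DIR)
def test_no_new_builds_after_failure(cli, datafiles):
    project = str(datafiles)

    # Two failing elements, a single builder, and quit on first error.
    result = cli.run(
        project=project,
        args=["--builders", "1", "--on-error", "quit",
              "build", "fail-1.bst", "fail-2.bst"],
    )
    result.assert_main_error(ErrorDomain.STREAM, None)

    # Exactly one element should have failed: once we were asked to
    # quit, the second build must never have been scheduled.
    states = [cli.get_element_state(project, e)
              for e in ("fail-1.bst", "fail-2.bst")]
    assert states.count("failed") == 1
```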
Sorry, I read the first and last part of what you wrote, and then reading the middle part I can see you thought of all of that already ;-)
Force-pushed 5ec3901 to d15a8be
[...]
I've rebased your branch, and added:
I've verified that these new tests indeed fail with your changes reverted.
Force-pushed d15a8be to 70c163e
Force-pushed 70c163e to b6c34dc
So the update here is that this fails in CI, and we're confident that the tests are correct. The original patch to the scheduler appears to be working with python 3.13, but not in the other python versions which are covered in CI. The patch needs to be better understood, clarified, and made to work.

Note: one thing we know has changed in python 3.13 is the default list sorting algorithm; this has affected stage ordering in buildstream, which we have worked around. It is unclear how this could have affected this scheduler patch, if indeed it is related at all.
I think it's not fair to say that this is dependent on the python version. It's not fair to say it's because of this patch either: it could be a race condition somewhere else in the scheduler that we only see now with this test (and God knows how many race conditions there are in the scheduler; I remember two manifestations of such race conditions in everyday usage of buildstream). As a data point, I just did 10 back-to-back runs of just these two tests (…).

I know this failure exists, and could reproduce it on my laptop at least once, but this is definitely some non-deterministic behaviour rather than a real issue with the patch.

I would recommend landing this patch now: it's definitely better than what we currently have, and we can keep investigating the test failure and land the test later (along with any fix to make it always pass).
I just reproduced the failure locally (by running the whole file rather than just the two tests), and looking at the log I can see that the issue is that the cache isn't correctly cleaned up between the tests.

At the pipeline summary when buildstream starts, one of the two elements is already "failed" (and base is already "cached"). BuildStream then builds the other one and has two failures at the end. In the second test, both files were already cached failures before the build started.
You can also see this in CI (with python 3.13, no less): https://github.com/apache/buildstream/actions/runs/15327164427/job/43124550203?pr=2005#step:3:8187
Force-pushed b6c34dc to 7f91257
We use remove_artifact_from_cache() in integration tests where the cache is shared and we just want to clean up some artifacts regardless of whether they have already been built or not.
… project

This approach has been applied mostly elsewhere, but not yet in the integration tests. Since we are modifying integration tests, it is our responsibility to remodel the tests so that multiple tests do not share the same project directory data.
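One way to achieve that isolation, sketched here under assumptions (the fixture name and `DATA_DIR` layout are hypothetical, not the actual test code), is to copy the project into a per-test temporary directory:

```python
import os
import shutil

import pytest

DATA_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), "project")


@pytest.fixture
def isolated_project(tmp_path):
    # Each test gets a fresh copy of the project, so cached failures
    # from one test cannot leak into the pipeline summary of the next.
    project = tmp_path / "project"
    shutil.copytree(DATA_DIR, str(project))
    return str(project)
```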
… builds

Test that builds are no longer scheduled after a build fails with `--on-error quit`.

Also test that failed builds are *still* pushed after a build fails, even if no other builds are scheduled. This is a regression test for #1787.
Force-pushed 7f91257 to 8078dd7
For instance, we shouldn't schedule new build jobs when asked to quit. However, not considering the build queue at all would miss pushing newly built elements.

What this commit does is:

* Pull elements forward through all queues (including the ones we no longer need)
* Only schedule jobs after the imperative queue

This means that elements in the build queue (or any imperative queue) will still get passed to the following queues, but no new build jobs get scheduled.

Fixes #1787