Description
Starting with Python 3.9.0, using a ProcessPoolExecutor has a good chance of deadlocking on a terra task. It almost always happens with 10 workers, and is practically guaranteed with 16 workers.
I've managed to put together a piece of code to reproduce the error:
```python
#!/usr/bin/env python
from concurrent.futures import ProcessPoolExecutor, as_completed
from celery import shared_task

@shared_task
def foo(x):
    return x * x

if __name__ == '__main__':
    futures = {}
    with ProcessPoolExecutor(max_workers=10) as executor:
        for x in range(10):
            futures[executor.submit(foo, x)] = x
    results = {}
    for future in as_completed(futures):
        task_id = futures[future]
        results[task_id] = future.result()
    print(len(results))
```

As you can see, the bug is not actually part of terra: it can be reproduced with nothing but a celery task object. Something about the difference between a celery "task" and a plain function causes the workers to hang before they ever process a single job.
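As a point of comparison, the same pattern with a plain module-level function does not hang for me: submitting an ordinary function means the worker processes only have to unpickle a function reference rather than a celery Task object. This is a minimal sketch, not a fix for terra itself, and `foo_impl`/`run` are hypothetical names I made up for the example:

```python
#!/usr/bin/env python
from concurrent.futures import ProcessPoolExecutor, as_completed

def foo_impl(x):
    # plain module-level function: pickled by qualified name,
    # so no celery Task object crosses the process boundary
    return x * x

def run(n=10, workers=4):
    futures = {}
    with ProcessPoolExecutor(max_workers=workers) as executor:
        for x in range(n):
            futures[executor.submit(foo_impl, x)] = x
    results = {}
    for future in as_completed(futures):
        task_id = futures[future]
        results[task_id] = future.result()
    return results

if __name__ == '__main__':
    print(len(run()))
```

If that holds up on other machines, it would point at how the `@shared_task` wrapper pickles (or fails to pickle cleanly) under 3.9, rather than at ProcessPoolExecutor itself.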