Replace the _cache_key classmethod convention with a key keyword argument passed at class definition time. The metaclass stores the callable and invokes it in the injected __new__, making the cache key requirement explicit at the declaration site.
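A minimal sketch of this pattern, using hypothetical names (`CachedMeta`, `Connection`) and assuming the `key` callable receives the constructor arguments:

```python
import threading

class CachedMeta(type):
    """Sketch: accept a `key` callable at class definition time and inject
    a caching __new__ built on it, making the requirement explicit at the
    declaration site."""

    def __new__(mcls, name, bases, namespace, key=None, **kwargs):
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        if key is not None:
            cls._key_fn = staticmethod(key)
            cls._cache = {}
            cls._cache_lock = threading.Lock()

            def caching_new(inner_cls, *args, **kw):
                # Compute the cache key from the callable declared at the
                # class definition site.
                cache_key = inner_cls._key_fn(*args, **kw)
                with inner_cls._cache_lock:
                    if cache_key not in inner_cls._cache:
                        inner_cls._cache[cache_key] = object.__new__(inner_cls)
                    return inner_cls._cache[cache_key]

            cls.__new__ = caching_new
        return cls

    def __init__(cls, name, bases, namespace, key=None, **kwargs):
        super().__init__(name, bases, namespace, **kwargs)


class Connection(metaclass=CachedMeta, key=lambda host, port: (host, port)):
    def __init__(self, host, port):
        # Note: __init__ re-runs on cache hits; idempotent here.
        self.host, self.port = host, port
```

One caveat of this simplified sketch: `__init__` still runs on every construction, including cache hits, so it should be idempotent.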
Replace the single run-tests job with separate unit-tests and integration-tests jobs. Each job filters by pytest marker and enforces its own coverage threshold via --cov-fail-under (98% for unit, 70% for integration). The integration job depends on unit-tests so it only runs after unit tests pass. Move the fail_under setting out of .coveragerc into per-job CLI args to allow independent thresholds. Extract the shared checkout, uv setup, install, and pytest steps into a run-tests composite action to avoid duplicating the setup sequence across both jobs.
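The resulting workflow layout might look like the following sketch (job names, action path, and marker names are illustrative, not the repository's actual files; note that a local composite action only becomes available after checkout, so the checkout step stays in each job):

```yaml
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/run-tests   # shared composite action
        with:
          pytest-args: "-m unit --cov --cov-fail-under=98"

  integration-tests:
    needs: unit-tests        # runs only after unit tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/run-tests
        with:
          pytest-args: "-m integration --cov --cov-fail-under=70"
```

Passing `--cov-fail-under` on the command line overrides any `fail_under` in `.coveragerc`, which is why removing it from the shared config lets each job enforce its own threshold.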
WorkerProxy now accepts a lazy flag (default True) that defers discovery subscription and sentinel setup until dispatch() is first called. Double-checked locking ensures start() runs exactly once under concurrent dispatch. Lazy proxies treat start() as a no-op, and stop() is likewise a no-op when the proxy was never started. The flag is threaded through WorkerPool and preserved across pickle via __reduce__. The proxy factory calls start() indiscriminately since lazy proxies handle it as a no-op.
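The double-checked locking pattern described above can be sketched as follows (a simplified stand-in for illustration, not the real WorkerProxy internals):

```python
import threading

class WorkerProxy:
    """Simplified sketch of the lazy-start contract: start work runs at
    most once, on first dispatch(), guarded by double-checked locking."""

    def __init__(self, lazy=True):
        self.lazy = lazy
        self._started = False
        self._start_lock = threading.Lock()
        if not lazy:
            self._start()

    def _start(self):
        # First check without the lock keeps the hot dispatch path cheap.
        if not self._started:
            with self._start_lock:
                # Second check: another thread may have won the race while
                # we were waiting on the lock.
                if not self._started:
                    # ... discovery subscription and sentinel setup here ...
                    self._started = True

    def start(self):
        # No-op for lazy proxies: the factory may call this unconditionally.
        if not self.lazy:
            self._start()

    def dispatch(self, task):
        self._start()      # auto-start on first dispatch
        return task        # placeholder for the real dispatch logic

    def __reduce__(self):
        # Carry the lazy flag across pickling; started state is rebuilt
        # lazily on the other side.
        return (type(self), (self.lazy,))
```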
Add unit tests for the lazy property, start() no-op, dispatch auto-start, concurrent dispatch locking, stop no-op, and pickle roundtrip. Update existing proxy tests to use lazy=False where they test eager start behavior. Add LazyMode dimension to the integration test pairwise covering array and Hypothesis strategy so both LAZY and EAGER paths are exercised end-to-end.
Cover the lazy parameter in three places: the main README's worker pools section, the worker README's nested routines section, and a new "Lazy startup" subsection under connections. All three explain that WorkerPool propagates lazy to WorkerProxy, and that task serialization carries the flag to worker subprocesses.
The old start() conflated context-variable setup with resource acquisition, making lazy behavior hard to reason about. Split it into enter()/exit() for context management and start()/stop() for the actual discovery, load-balancer, and sentinel lifecycle.
Implement a generic @noreentry descriptor class that prevents instance methods from being invoked more than once. Guard state is tracked via a weakref.WeakSet on the descriptor instance — when a method is called on an instance, that instance is added to the set, and any subsequent call raises RuntimeError. This approach keeps instance namespaces clean and auto-cleans references when instances are garbage collected. The descriptor works with both sync and async methods. Attempting to use it on a bare function, or to call the descriptor directly, raises TypeError at runtime and is flagged as a static error via the Never return annotation on __call__.
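Based on that description, the guard might be sketched as below (assumed semantics; `NoReturn` stands in for the `Never` annotation mentioned above so the sketch also runs on Python < 3.11):

```python
import weakref
from typing import NoReturn  # the described code uses Never (3.11+)

class noreentry:
    """Sketch of a single-use method guard: each instance may invoke the
    wrapped method at most once."""

    def __init__(self, func):
        if not callable(func):
            raise TypeError("@noreentry must wrap a callable")
        self._func = func
        self._name = getattr(func, "__name__", "<method>")
        # WeakSet: guarded instances are dropped automatically when they
        # are garbage collected, and instance namespaces stay clean.
        self._seen = weakref.WeakSet()

    def __set_name__(self, owner, name):
        self._name = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self

        def bound(*args, **kwargs):
            if instance in self._seen:
                raise RuntimeError(f"{self._name}() may only be called once")
            self._seen.add(instance)
            # Works for async methods too: calling an async function here
            # just returns the coroutine; the guard already fired.
            return self._func(instance, *args, **kwargs)

        return bound

    def __call__(self, *args, **kwargs) -> NoReturn:
        # Reached only when the descriptor is applied to a bare function
        # or invoked directly rather than through an instance attribute.
        raise TypeError("@noreentry guards instance methods only")
```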
Apply the @noreentry decorator to WorkerProxy.enter() and WorkerPool.__aenter__() to enforce single-use semantics. Both context managers now reject any attempt to re-enter (whether reentrant or after a full enter/exit cycle) with RuntimeError. Update docstrings to document the single-use contract and explain that users must create a new instance for a new context.
Add and update tests to verify single-use enforcement on WorkerPool and WorkerProxy. New test cases cover:

- Reentrant entry (async with pool within pool context)
- Post-exit re-entry (entering pool again after a full cycle)
- RuntimeError raised with an appropriate message

These tests validate that the noreentry guard works correctly across both context managers and complement the comprehensive unit tests for the noreentry descriptor itself.
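The two failure modes listed above can be exercised with a minimal stand-in context manager (an illustrative class, not the real WorkerPool):

```python
import asyncio

class SingleUsePool:
    """Minimal stand-in for a single-use async context manager."""

    def __init__(self):
        self._entered = False

    async def __aenter__(self):
        if self._entered:
            raise RuntimeError("pool context may only be entered once")
        self._entered = True
        return self

    async def __aexit__(self, *exc_info):
        return False

async def check_single_use():
    pool = SingleUsePool()
    reentrant_blocked = post_exit_blocked = False

    # Reentrant entry: entering the pool inside its own context must fail.
    async with pool:
        try:
            async with pool:
                pass
        except RuntimeError:
            reentrant_blocked = True

    # Post-exit re-entry: a fresh enter after a full cycle must also fail.
    try:
        async with pool:
            pass
    except RuntimeError:
        post_exit_blocked = True

    return reentrant_blocked and post_exit_blocked

assert asyncio.run(check_single_use())
```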
Add documentation to the worker README explaining the single-use contract for WorkerPool and WorkerProxy. Include examples of correct usage patterns and clarify that both context managers must be re-created for each context. This complements the updated docstrings in pool.py and proxy.py and ensures users understand the single-use semantics enforced by the @noreentry guard.
Conditionally set the asyncio.coroutines._is_coroutine marker when the Python version is 3.11 or earlier. This ensures that asyncio.iscoroutinefunction() returns True for async noreentry methods on those versions.
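A version-conditional sketch of that marker logic (helper names are hypothetical); on 3.12+ the public `inspect.markcoroutinefunction` API replaces the private attribute:

```python
import asyncio
import sys

def mark_as_coroutine(wrapper):
    """Make asyncio.iscoroutinefunction() recognize a plain wrapper that
    forwards to an async callable."""
    if sys.version_info >= (3, 12):
        import inspect
        # Python 3.12+ exposes a public API for exactly this purpose.
        inspect.markcoroutinefunction(wrapper)
    else:
        # On 3.11 and earlier, set the private marker attribute that
        # asyncio.iscoroutinefunction() checks.
        wrapper._is_coroutine = asyncio.coroutines._is_coroutine
    return wrapper

def wrap(coro_func):
    def wrapper(*args, **kwargs):
        # Calling an async function returns a coroutine; the wrapper itself
        # is a plain function, hence the marker above.
        return coro_func(*args, **kwargs)
    return mark_as_coroutine(wrapper)
```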
Auto-generated by the cut release workflow.