diff --git a/README.md b/README.md
index c248a9b..71d44e0 100644
--- a/README.md
+++ b/README.md
@@ -24,6 +24,8 @@ Install Genesis using `pip`:
pip install genesis
```
+For better asyncio performance on Unix (Linux and macOS), use the optional uvloop extra: `pip install genesis[uvloop]`. See the [Installation Guide](https://otoru.github.io/Genesis/docs/installation/) for details.
+
## Quickstart
### Inbound Socket Mode
diff --git a/docker/freeswitch/conf/autoload_configs/local_stream.conf.xml b/docker/freeswitch/conf/autoload_configs/local_stream.conf.xml
new file mode 100644
index 0000000..ff60a23
--- /dev/null
+++ b/docker/freeswitch/conf/autoload_configs/local_stream.conf.xml
@@ -0,0 +1,4 @@
+<configuration name="local_stream.conf" description="Local Streams">
+  <directory name="moh" path="$${sounds_dir}/music/8000">
+  </directory>
+</configuration>
diff --git a/docs/content/docs/Examples/_index.md b/docs/content/docs/Examples/_index.md
index 9071979..fab12b4 100644
--- a/docs/content/docs/Examples/_index.md
+++ b/docs/content/docs/Examples/_index.md
@@ -48,10 +48,37 @@ docker-compose down
For more details about the Docker setup, see the [docker/freeswitch/README.md](https://github.com/Otoru/Genesis/blob/main/docker/freeswitch/README.md) file.
+## Testing with a SIP Client
+
+You can test the examples using a SIP client (e.g. Linphone, Zoiper, or X-Lite):
+
+1. Configure your SIP client to connect to FreeSWITCH:
+ - **Server:** `127.0.0.1:5060`
+ - **Username:** `1000` or `1001`
+ - **Password:** `1000` or `1001` (same as username)
+ - **Domain:** `127.0.0.1`
+
+2. Register the SIP client. Once registered, you can place calls to the example extensions (e.g. `9999`).
+
+## Dialplan Configuration
+
+The Docker environment includes a dialplan entry that routes calls to `9999` to the outbound socket:
+
+```xml
+<extension name="outbound_socket">
+  <condition field="destination_number" expression="^9999$">
+    <action application="socket" data="127.0.0.1:9696 async full"/>
+  </condition>
+</extension>
+```
+
+Calls to `9999` trigger FreeSWITCH to connect to your application at `127.0.0.1:9696`.
+
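The "application at `127.0.0.1:9696`" is simply a TCP server that speaks the Event Socket protocol. As a rough illustration only (a real handler would use `genesis.Outbound`, not raw sockets), the listening side looks like this:

```python
import asyncio

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # FreeSWITCH opens one TCP connection per call routed to the socket app.
    # A real application (e.g. genesis.Outbound) would speak the full Event
    # Socket protocol here; this sketch only sends the initial "connect" command.
    writer.write(b"connect\n\n")
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    server = await asyncio.start_server(handle, "127.0.0.1", 9696)
    async with server:
        await server.serve_forever()
```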
## Available Examples
{{< cards cols="1" >}}
{{< card link="fastapi-click2call/" title="Click2Call API" icon="code" subtitle="REST API endpoint for click2call functionality using FastAPI." >}}
{{< card link="ivr/" title="IVR" icon="phone" subtitle="Simple IVR system using Outbound mode with DTMF interaction." >}}
{{< card link="group-call/" title="Group Call" icon="users" subtitle="Simultaneous originate that calls multiple destinations and bridges with the first to answer." >}}
+ {{< card link="queue/" title="Queue" icon="view-list" subtitle="Outbound with a queue: one call at a time; others wait in line (FIFO)." >}}
{{< /cards >}}
diff --git a/docs/content/docs/Examples/fastapi-click2call.md b/docs/content/docs/Examples/fastapi-click2call.md
index 6ef8b8e..ab87e1d 100644
--- a/docs/content/docs/Examples/fastapi-click2call.md
+++ b/docs/content/docs/Examples/fastapi-click2call.md
@@ -64,20 +64,20 @@ The example uses a **per-request connection**, opening a new connection to FreeS
{{% steps %}}
-### 1. Clone the Repository
+### Clone the Repository
```bash
git clone https://github.com/Otoru/Genesis.git
cd Genesis
```
-### 2. Install Dependencies
+### Install Dependencies
```bash
poetry install --with examples
```
-### 3. Configure FreeSWITCH Connection
+### Configure FreeSWITCH Connection
Set environment variables for your FreeSWITCH connection:
@@ -87,7 +87,7 @@ export FS_PORT=8021
export FS_PASSWORD=ClueCon
```
-### 4. Run the Server
+### Run the Server
```bash
uvicorn examples.click2call:app --reload
@@ -95,7 +95,7 @@ uvicorn examples.click2call:app --reload
The API will be available at `http://localhost:8000`.
-### 5. Test the Endpoint
+### Test the Endpoint
```bash
curl -X POST "http://localhost:8000/" \
diff --git a/docs/content/docs/Examples/group-call.md b/docs/content/docs/Examples/group-call.md
index c4048a3..11d2ef7 100644
--- a/docs/content/docs/Examples/group-call.md
+++ b/docs/content/docs/Examples/group-call.md
@@ -91,12 +91,30 @@ sequenceDiagram
## Running the Example
-Start FreeSWITCH (see [Examples environment]({{< relref "../Examples/_index.md" >}})) and run:
+{{% steps %}}
+
+### Start FreeSWITCH
+
+Make sure FreeSWITCH is running (see [Examples environment]({{< relref "../Examples/_index.md" >}})).
+
+### Run the example
```bash
python examples/group_call.py
```
-The example will ring the group `["user/1001", "user/1002", "user/1003"]` in parallel mode, wait for the first callee to answer, create and bridge the caller (`user/1000`) with the answered callee, then hang up all channels after 5 seconds.
+### Make test calls
+
+- Register multiple SIP clients: users `1000`, `1001`, `1002`, and `1003`.
+- Run the example; the first callee to answer is connected to the caller.
+
+### View Logs
+
+To see what's happening in FreeSWITCH:
+
+```bash
+docker exec -it genesis-freeswitch fs_cli -x "show channels"
+docker logs genesis-freeswitch -f
+```
-To test this properly, you'll need multiple SIP clients registered: user `1000` (caller) and users `1001`, `1002`, `1003` (callees). The first callee to answer will be connected to the caller.
+{{% /steps %}}
diff --git a/docs/content/docs/Examples/ivr.md b/docs/content/docs/Examples/ivr.md
index 7137664..ef4aa28 100644
--- a/docs/content/docs/Examples/ivr.md
+++ b/docs/content/docs/Examples/ivr.md
@@ -72,11 +72,11 @@ This example demonstrates Outbound Socket mode, where FreeSWITCH connects to you
{{% steps %}}
-### 1. Start FreeSWITCH
+### Start FreeSWITCH
Make sure FreeSWITCH is running in Docker (see [Examples environment]({{< relref "../Examples/_index.md" >}})).
-### 2. Start the IVR Server
+### Start the IVR Server
In a terminal, run the IVR example:
@@ -86,7 +86,7 @@ python examples/ivr.py
The server will start listening on `0.0.0.0:9696` and wait for FreeSWITCH to connect.
-### 3. Make a Test Call
+### Make a Test Call
In another terminal, use FreeSWITCH CLI to originate a call to the IVR:
@@ -98,14 +98,14 @@ This command:
- Creates a call from user `1000` (a test user configured in the Docker environment)
- Routes it to number `9999` (configured in the dialplan to connect to your outbound socket)
-### 4. Interact with the IVR
+### Interact with the IVR
Once the call is connected:
- You'll hear the welcome message
- Press `1`, `2`, or `3` to select an option
- The IVR will respond to your selection
-### 5. View Logs
+### View Logs
To see what's happening in FreeSWITCH:
@@ -115,33 +115,3 @@ docker logs genesis-freeswitch -f
```
{{% /steps %}}
-
-## Testing with a SIP Client
-
-You can also test using a SIP client (like Linphone, Zoiper, or X-Lite):
-
-1. Configure your SIP client to connect to FreeSWITCH:
- - **Server:** `127.0.0.1:5060`
- - **Username:** `1000` or `1001`
- - **Password:** `1000` or `1001` (same as username)
- - **Domain:** `127.0.0.1`
-
-2. Register the SIP client
-
-3. Make a call to `9999`
-
-4. The call will be routed to your IVR application
-
-## Dialplan Configuration
-
-The Docker environment includes a dialplan entry that routes calls to `9999` to your outbound socket:
-
-```xml
-<extension name="outbound_socket">
-  <condition field="destination_number" expression="^9999$">
-    <action application="socket" data="127.0.0.1:9696 async full"/>
-  </condition>
-</extension>
-```
-
-This means any call to `9999` will trigger FreeSWITCH to connect to your application at `127.0.0.1:9696`.
diff --git a/docs/content/docs/Examples/queue.md b/docs/content/docs/Examples/queue.md
new file mode 100644
index 0000000..5935b6c
--- /dev/null
+++ b/docs/content/docs/Examples/queue.md
@@ -0,0 +1,116 @@
+---
+title: Queue
+weight: 25
+parent: Examples
+---
+
+Outbound example: one extension calls another through the app. The caller hears hold music (or a message) until a queue slot is free, then we bridge them to the callee. Only one bridge at a time, so you keep control (e.g. one agent per queue).
+
+## Example Code
+
+```python {filename="examples/queue.py" base_url="https://github.com/Otoru/Genesis/blob/main"}
+"""
+Queue example.
+
+One extension calls another via the app: the caller is held (music or message)
+until a queue slot is free, then we bridge them to the callee. Only one
+bridge at a time so you keep control (e.g. one agent per queue).
+"""
+
+import asyncio
+import os
+
+from genesis import Outbound, Session, Queue, Channel
+from genesis.types import ChannelState
+
+HOST = os.getenv("HOST", "0.0.0.0")
+PORT = int(os.getenv("PORT", "9696"))
+CALLEE = "user/1001"
+HOLD_SOUND = os.getenv("HOLD_SOUND", "local_stream://moh")
+
+queue = Queue() # in-memory by default
+
+
+async def handler(session: Session) -> None:
+ if session.channel is None:
+ return
+ await session.channel.answer()
+ await session.channel.playback(HOLD_SOUND, block=False)
+
+ async with queue.slot("support"):
+ callee = await Channel.create(session, CALLEE)
+ await callee.wait(ChannelState.EXECUTE, timeout=30.0)
+ await session.channel.bridge(callee)
+
+
+async def main() -> None:
+ server = Outbound(handler=handler, host=HOST, port=PORT)
+ await server.start()
+
+
+if __name__ == "__main__":
+ asyncio.run(main())
+```
+
+## Flow
+
+{{% steps %}}
+
+### FreeSWITCH sends the call
+
+FreeSWITCH sends the call to your app (outbound socket).
+
+### Answer and play hold sound
+
+We answer and start playing a hold sound (`playback(..., block=False)`), so the caller hears it while waiting.
+
+### Wait for a queue slot
+
+The handler waits for a slot in the `"support"` queue (`async with queue.slot("support")`). If another call is already in the slot, this call waits (caller keeps hearing the hold sound).
+
+### Originate callee and bridge
+
+When we get the slot, we originate the callee (`Channel.create(session, CALLEE)`), wait for the callee channel to come up (`ChannelState.EXECUTE`), then bridge the caller to the callee. The bridge replaces the hold playback.
+
+### Release the slot
+
+When the handler leaves the `async with` block, the slot is released and the next waiting caller can be served.
+
+{{% /steps %}}
+
+## Running the Example
+
+{{% steps %}}
+
+### Start FreeSWITCH
+
+Make sure FreeSWITCH is running (see [Examples environment]({{< relref "../Examples/_index.md" >}})).
+
+### Run the queue example
+
+```bash
+python examples/queue.py
+```
+
+### Make test calls
+
+- You need two SIP clients: caller and callee (`user/1001`). See [Examples environment]({{< relref "../Examples/_index.md" >}}) (Docker includes MOH).
+- Call `9999` (the Docker dialplan entry that routes to the outbound socket). You hear hold music until your turn, then you're bridged to the callee.
+- Place a second call while the first is still connected: the second caller hears hold music until the first call ends.
+
+### View Logs
+
+To see what's happening in FreeSWITCH:
+
+```bash
+docker exec -it genesis-freeswitch fs_cli -x "show channels"
+docker logs genesis-freeswitch -f
+```
+
+{{% /steps %}}
+
+## Related
+
+- [Queue]({{< relref "../Tools/queue/_index.md" >}}) - Queue API and backends
+- [Outbound Socket]({{< relref "../Quickstart/outbound.md" >}}) - Outbound basics
+- [Channel]({{< relref "../Tools/channel.md" >}}) - Creating channels and bridge
diff --git a/docs/content/docs/Observability/_index.md b/docs/content/docs/Observability/_index.md
index 33bedac..47a9052 100644
--- a/docs/content/docs/Observability/_index.md
+++ b/docs/content/docs/Observability/_index.md
@@ -12,4 +12,5 @@ Genesis ships with **OpenTelemetry** for tracing, logging, and metrics. You get
{{< card link="logging/" title="Logging" icon="terminal" subtitle="Structured logs with trace correlation and optional JSON output." >}}
{{< card link="server/" title="Server" icon="server" subtitle="Health, readiness, and metrics over HTTP." >}}
{{< card link="metrics/" title="Metrics" icon="chart-bar" subtitle="Counters and histograms for commands, events, channels, and ring groups." >}}
+  {{< card link="otel-config/" title="OTEL configuration" icon="cog" subtitle="Configure OpenTelemetry via OTEL_SDK_DISABLED, OTEL_SERVICE_NAME, OTEL_RESOURCE_ATTRIBUTES, and OTLP exporter endpoints." >}}
{{< /cards >}}
diff --git a/docs/content/docs/Observability/otel-config.md b/docs/content/docs/Observability/otel-config.md
new file mode 100644
index 0000000..5ad305c
--- /dev/null
+++ b/docs/content/docs/Observability/otel-config.md
@@ -0,0 +1,56 @@
+---
+title: Configuration
+weight: 70
+---
+
+Genesis supports configuring OpenTelemetry via standard environment variables. When you run the CLI (`genesis consumer` or `genesis outbound`), these variables control the telemetry resource, the OTLP exporters, and whether the SDK is enabled.
+
+## Supported variables
+
+- **`OTEL_SDK_DISABLED`**
+ - Disables the OpenTelemetry SDK when set to `true` (case-insensitive).
+ - When disabled, the CLI does not set a meter provider; metrics are no-ops.
+ - Default: not set (SDK enabled).
+
+- **`OTEL_SERVICE_NAME`**
+ - Sets the `service.name` resource attribute for metrics (and traces if you configure a tracer provider).
+ - Default: `genesis`.
+
+- **`OTEL_RESOURCE_ATTRIBUTES`**
+ - Extra resource attributes as comma-separated key-value pairs: `key1=value1,key2=value2`.
+ - If `service.name` is present here, it is overridden by `OTEL_SERVICE_NAME` when that variable is set.
+ - Example: `deployment.environment=production,service.version=1.0.0`.
+
+- **`OTEL_EXPORTER_OTLP_ENDPOINT`**
+ - Base URL for OTLP/HTTP export (traces and metrics). When set, the CLI configures an OTLP HTTP exporter so telemetry is sent to this endpoint (e.g. an OpenTelemetry Collector).
+ - Default for HTTP per spec: `http://localhost:4318` (collector OTLP HTTP receiver).
+
+- **`OTEL_EXPORTER_OTLP_METRICS_ENDPOINT`**
+ - Overrides the metrics endpoint (if unset, `OTEL_EXPORTER_OTLP_ENDPOINT` is used).
+
+- **`OTEL_EXPORTER_OTLP_TRACES_ENDPOINT`**
+  - Overrides the traces endpoint (if unset, `OTEL_EXPORTER_OTLP_ENDPOINT` is used). When either variable is set, the CLI also configures a `TracerProvider` with an OTLP HTTP span exporter.
+
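The precedence rules above — `OTEL_SERVICE_NAME` overriding any `service.name` in `OTEL_RESOURCE_ATTRIBUTES`, and signal-specific endpoints falling back to the base endpoint — can be sketched as plain dictionary logic. This is an illustration of the rules, not Genesis's actual implementation:

```python
from typing import Optional


def parse_resource_attributes(raw: str) -> dict:
    """Parse "key1=value1,key2=value2" into a dict, skipping malformed pairs."""
    attrs: dict = {}
    for pair in raw.split(","):
        key, sep, value = pair.partition("=")
        if sep and key.strip():
            attrs[key.strip()] = value.strip()
    return attrs


def resolve_resource(env: dict) -> dict:
    """OTEL_SERVICE_NAME wins over service.name from OTEL_RESOURCE_ATTRIBUTES."""
    attrs = parse_resource_attributes(env.get("OTEL_RESOURCE_ATTRIBUTES", ""))
    attrs["service.name"] = (
        env.get("OTEL_SERVICE_NAME") or attrs.get("service.name") or "genesis"
    )
    return attrs


def resolve_metrics_endpoint(env: dict) -> Optional[str]:
    """Signal-specific endpoint wins; otherwise fall back to the base endpoint."""
    return env.get("OTEL_EXPORTER_OTLP_METRICS_ENDPOINT") or env.get(
        "OTEL_EXPORTER_OTLP_ENDPOINT"
    )
```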
+## Examples
+
+Disable OpenTelemetry (e.g. in tests or when using another instrumentation):
+
+```bash
+export OTEL_SDK_DISABLED=true
+genesis consumer ...
+```
+
+Set a custom service name and environment:
+
+```bash
+export OTEL_SERVICE_NAME=my-call-center
+export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=production
+genesis outbound ...
+```
+
+Send metrics and traces to an OTLP collector over HTTP:
+
+```bash
+export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
+genesis consumer ...
+```
diff --git a/docs/content/docs/Tools/_index.md b/docs/content/docs/Tools/_index.md
index 15e28fd..d46b2c5 100644
--- a/docs/content/docs/Tools/_index.md
+++ b/docs/content/docs/Tools/_index.md
@@ -6,7 +6,9 @@ weight: 50
Useful utilities and patterns to streamline your development with Genesis.
{{< cards cols="1" >}}
+ {{< card link="uvloop/" title="uvloop" icon="lightning-bolt" subtitle="Optional fast asyncio event loop on Unix; install with genesis[uvloop]." >}}
{{< card link="filtrate/" title="Filtrate" icon="code" subtitle="Filter events based on key-value pairs using decorators." >}}
{{< card link="channel/" title="Channel Abstraction" icon="phone" subtitle="Create and manage outbound channels for call origination and bridging." >}}
+ {{< card link="queue/" title="Queue" icon="view-list" subtitle="FIFO queue with concurrency limit; slot and semaphore API; in-memory or Redis backend." >}}
{{< card link="ring-group/" title="Ring Group" icon="users" subtitle="Call multiple destinations simultaneously or sequentially, connect to first answer." >}}
{{< /cards >}}
diff --git a/docs/content/docs/Tools/queue/_index.md b/docs/content/docs/Tools/queue/_index.md
new file mode 100644
index 0000000..45394e5
--- /dev/null
+++ b/docs/content/docs/Tools/queue/_index.md
@@ -0,0 +1,171 @@
+---
+title: Queue
+weight: 40
+---
+
+The queue abstraction lets you limit concurrency per logical queue and process callers in FIFO order. It uses a **context manager** and **semaphore-like** API: you enter a slot, do your work, and release on exit.
+
+## Overview
+
+Use the queue when you want to:
+
+- Enqueue calls and process them one (or N) at a time per queue
+- Control how many calls are "inside" a given flow at once (e.g. one at a time for a ring group)
+- Share queue state across app instances via Redis when scaling out
+
+The queue is **generic**: it does not know about ring groups or FreeSWITCH. You use it to acquire a "slot" (FIFO + concurrency limit); what you do inside the slot (ring group, IVR, etc.) is up to you.
+
+## Flow
+
+With `max_concurrent=1`, only one caller holds the slot at a time. Others wait in FIFO order and acquire when the slot is released:
+
+```mermaid
+sequenceDiagram
+ box Callers
+ participant A as Caller A
+ participant B as Caller B
+ participant C as Caller C
+ end
+ box Queue
+ participant Q as Queue
+ end
+
+ A->>Q: enqueue
+ Q-->>A: slot acquired
+ A->>A: work (ring, bridge…)
+
+ B->>Q: enqueue
+ C->>Q: enqueue
+ Note over Q: B and C waiting (FIFO)
+
+ A->>Q: release
+ Q-->>B: slot acquired
+ B->>B: work
+ B->>Q: release
+ Q-->>C: slot acquired
+ C->>C: work
+ C->>Q: release
+```
+
+- **Enqueue**: On `slot(...)` enter, you join the queue (FIFO). You block until you are at the head and a slot is free.
+- **Slot acquired**: You run your code (e.g. ring group, bridge). With `max_concurrent=1`, only one caller is in this phase at a time.
+- **Release**: On exit, you free the slot; the next caller in line acquires it.
+
+With `max_concurrent=2`, two callers can hold a slot at once; the rest wait in line.
+
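The acquire/release behavior described above can be sketched with plain asyncio primitives. This is a simplified in-memory illustration of the semantics (FIFO order plus a concurrency cap), not Genesis's actual backend:

```python
import asyncio
from collections import deque


class SlotQueue:
    """FIFO queue with a concurrency limit: the head of the line acquires first."""

    def __init__(self, max_concurrent: int = 1) -> None:
        self._waiters = deque()  # futures of callers waiting in FIFO order
        self._active = 0
        self._max = max_concurrent

    async def acquire(self) -> None:
        fut = asyncio.get_running_loop().create_future()
        self._waiters.append(fut)
        self._wake()
        await fut  # resolves when we are at the head and a slot is free

    def release(self) -> None:
        self._active -= 1
        self._wake()

    def _wake(self) -> None:
        # Grant slots strictly in FIFO order while capacity remains.
        while self._waiters and self._active < self._max:
            self._active += 1
            self._waiters.popleft().set_result(None)
```

Entering `queue.slot(...)` corresponds to `acquire()` and leaving the `async with` block to `release()`; `max_concurrent` bounds how many callers hold a slot at once.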
+## Basic Example
+
+```python
+import asyncio
+from genesis import Outbound, Session, Queue, RingGroup, RingMode
+
+queue = Queue() # in-memory by default
+
+async def handler(session: Session):
+ # Only one call at a time in "sales" queue; others wait in line
+ async with queue.slot("sales"):
+ answered = await RingGroup.ring(
+ session,
+ ["user/1001", "user/1002"],
+ RingMode.PARALLEL,
+ timeout=30.0,
+ )
+ if answered:
+ await session.channel.bridge(answered)
+
+app = Outbound(handler, "127.0.0.1", 9000)
+asyncio.run(app.start())
+```
+
+## API
+
+### `queue.slot()`
+
+Use `async with queue.slot(...)` to acquire a slot and release it when the block ends.
+
+- **Enter**: enqueues this call, then blocks until it is at the head of the queue and a slot is free
+- **Exit**: releases the slot so the next caller can proceed
+
+```python
+async with queue.slot("sales"):
+ # do work
+ pass
+
+# With explicit item_id (e.g. for Redis / tracing)
+async with queue.slot("sales", item_id=session.uuid):
+ pass
+
+# Allow 2 concurrent
+async with queue.slot("support", max_concurrent=2):
+ pass
+
+# Optional timeout: raises QueueTimeoutError if not acquired in 30s
+# (available as: from genesis import QueueTimeoutError)
+try:
+ async with queue.slot("sales", timeout=30.0):
+ # do work
+ pass
+except QueueTimeoutError:
+ # Caller waited too long; item was removed from queue
+ pass
+```
+
+### `queue.semaphore()`
+
+Returns a reusable object you can use as a context manager. Same semantics as a slot, but you can keep a reference and use it in several places.
+
+```python
+sem = queue.semaphore("support", max_concurrent=2)
+
+async with sem:
+ # do work
+ pass
+
+# Optional: pass item_id when used as callable
+async with sem(item_id=session.uuid):
+ pass
+```
+
+For in-memory and Redis, see [Backends]({{< relref "backends.md" >}}).
+
+## Parameters
+
+**`queue.slot(queue_id, *, item_id=None, max_concurrent=1, timeout=None)`**
+
+- `queue_id`: Logical queue name (e.g. `"sales"`, `"support"`)
+- `item_id`: Optional identifier for this entry (e.g. `session.uuid`). If omitted, a UUID is generated. Useful with Redis for tracing
+- `max_concurrent`: Maximum number of callers allowed inside this queue at once. First use for a given `queue_id` sets this for that queue in the backend
+- `timeout`: Optional seconds to wait for a slot. If the wait exceeds this, the item is removed from the queue and `genesis.exceptions.QueueTimeoutError` is raised
+
+**`queue.semaphore(queue_id, max_concurrent=1, timeout=None)`**
+
+- `queue_id`: Name of the queue
+- `max_concurrent`: Max concurrent slots when using this semaphore
+- `timeout`: Optional seconds per acquire
+
+## Timeout
+
+You can pass **`timeout`** (seconds) to `queue.slot()` or `queue.semaphore()` so that if the caller does not get a slot within that time, the wait is aborted instead of blocking indefinitely.
+
+- **Behavior**: The timeout covers both (1) waiting for your turn in the FIFO queue and (2) waiting for a free concurrency slot. If the time is exceeded, your entry is removed from the queue and the next caller can proceed.
+- **Exception**: `genesis.exceptions.QueueTimeoutError` is raised. Handle it to, for example, play a message and hang up.
+- **Use case**: Avoid callers waiting forever when the queue is congested; after a limit (e.g. 60 seconds), play "all agents are busy" and disconnect.
+
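The timeout behavior can be approximated with `asyncio.wait_for` around an acquire. This is a rough sketch of the mechanism, not Genesis's implementation; `QueueTimeoutError` below is a local stand-in for `genesis.exceptions.QueueTimeoutError`:

```python
import asyncio


class QueueTimeoutError(TimeoutError):
    """Stand-in for genesis.exceptions.QueueTimeoutError."""


async def acquire_with_timeout(sem: asyncio.Semaphore, timeout: float) -> None:
    try:
        await asyncio.wait_for(sem.acquire(), timeout=timeout)
    except asyncio.TimeoutError as exc:
        # wait_for cancels the pending acquire, so the caller leaves the line
        # instead of blocking indefinitely.
        raise QueueTimeoutError("timed out waiting for a queue slot") from exc
```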
+## Use Cases
+
+- **Ring group with one call at a time**: use a single slot per queue so only one caller is "in" the ring group at once; others wait in line
+- **Bounded concurrency**: use `max_concurrent > 1` to allow N calls in the same flow (e.g. support pool)
+- **Scaling**: use `RedisBackend` so several app instances share the same queue and respect global order and limits
+
+## Observability
+
+The queue reports:
+
+- **Metrics**: `genesis.queue.operations` (acquire/release counts), `genesis.queue.wait_duration` (time waiting for a slot)
+- **Tracing**: spans for `queue.wait_and_acquire` with attributes `queue.id` and `queue.item_id`
+
+## Related
+
+- [Backends]({{< relref "backends.md" >}}) - In-memory and Redis
+- [Ring Group]({{< relref "../ring-group/_index.md" >}}) - Often used inside a queue slot to ring agents
+- [Outbound Socket]({{< relref "../../Quickstart/outbound.md" >}}) - Typical place to use the queue in the session handler
+- [Queue Example]({{< relref "../../Examples/queue.md" >}}) - Full runnable example
diff --git a/docs/content/docs/Tools/queue/backends.md b/docs/content/docs/Tools/queue/backends.md
new file mode 100644
index 0000000..7f558fb
--- /dev/null
+++ b/docs/content/docs/Tools/queue/backends.md
@@ -0,0 +1,78 @@
+---
+title: Backends
+weight: 41
+---
+
+Backends store queue state (FIFO order and concurrency). Choose the backend based on whether you run a single instance or multiple instances of your application.
+
+## Single Instance
+
+If you run a single process, use the default in-memory backend:
+
+```python
+from genesis import Queue
+
+queue = Queue() # InMemoryBackend by default
+```
+
+- State lives in process memory
+- No extra dependencies
+- Omit the backend for simplicity
+
+## Multiple Instances
+
+If you run multiple instances (horizontal scaling), pass `RedisBackend` so all instances share the same queue state:
+
+```python
+from genesis import Queue
+from genesis.queue import RedisBackend
+
+queue = Queue(RedisBackend(url="redis://localhost:6379"))
+
+async with queue.slot("sales", item_id=session.uuid):
+ # ...
+```
+
+- State lives in Redis (list + counter per queue, pub/sub to wake waiters)
+- Each instance enqueues its own call and waits until it is that call's turn; the **process that holds the ESL session** must be the one that runs the handler. Redis only stores order and concurrency.
+- Optional **timeout** on `queue.slot()` / `queue.semaphore()` is supported by both backends; when it expires, the item is removed and `genesis.exceptions.QueueTimeoutError` is raised.
+
+## Custom Redis Key Prefix
+
+To avoid key collisions in Redis, set a custom prefix:
+
+```python
+backend = RedisBackend(
+ url="redis://localhost:6379",
+ key_prefix="myapp:queue:"
+)
+queue = Queue(backend)
+```
+
+## Parameters
+
+**`Queue(backend=None)`**
+
+- `backend`: Backend to use (FIFO + concurrency state). Default: `InMemoryBackend`, so `Queue()` is enough for single-process use.
+
+**`InMemoryBackend()`**
+
+- No arguments
+
+**`RedisBackend(url="redis://localhost:6379", key_prefix="genesis:queue:")`**
+
+- `url`: Redis connection URL
+- `key_prefix`: Prefix for Redis keys (default: `"genesis:queue:"`)
+
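The role of `key_prefix` is plain namespacing: every key the backend creates for a queue starts with the prefix, so two applications sharing one Redis instance don't collide. A hypothetical key layout for illustration (the real layout is an internal detail of `RedisBackend` and may differ):

```python
def queue_keys(queue_id: str, key_prefix: str = "genesis:queue:") -> dict:
    """Illustrative (hypothetical) Redis key layout for one logical queue."""
    base = f"{key_prefix}{queue_id}"
    return {
        "items": f"{base}:items",    # FIFO list of waiting item ids
        "active": f"{base}:active",  # counter of currently held slots
    }
```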
+## Best Practices
+
+1. Create the backend once and reuse the same `Queue` instance (e.g. at app startup)
+2. Use `InMemoryBackend` for single-instance deployments, `RedisBackend` when running multiple instances
+3. With Redis, pass `item_id=session.uuid` (or similar) when acquiring a slot so you can correlate metrics and traces across instances
+4. If Redis becomes unavailable, `RedisBackend` will raise; ensure your application handles these errors
+
+## Related
+
+- [Queue]({{< relref "_index.md" >}}) - API and usage
+- [Ring Group]({{< relref "../ring-group/_index.md" >}}) - Often used inside a queue slot
+- [Observability / Metrics]({{< relref "../../Observability/metrics.md" >}}) - Queue metrics
diff --git a/docs/content/docs/Tools/uvloop.md b/docs/content/docs/Tools/uvloop.md
new file mode 100644
index 0000000..f7af6fc
--- /dev/null
+++ b/docs/content/docs/Tools/uvloop.md
@@ -0,0 +1,29 @@
+---
+title: uvloop
+weight: 15
+---
+
+[uvloop](https://github.com/MagicStack/uvloop) is a fast, drop-in replacement for the default asyncio event loop, built on libuv.
+
+On **Unix** (Linux and macOS), using uvloop can improve asyncio performance.
+
+{{< callout type="warning" >}}
+uvloop is **not supported on Windows**.
+{{< /callout >}}
+
+## Installation
+
+Install Genesis with the uvloop extra:
+
+```bash
+pip install genesis[uvloop]
+```
+
+## Usage
+
+When the extra is installed, the Genesis CLI uses uvloop automatically.
+
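If you use Genesis as a library (without the CLI), you can opt in yourself via `genesis.use_uvloop()`. Its try-import pattern looks roughly like this generic sketch:

```python
import asyncio


def enable_uvloop() -> bool:
    """Install uvloop's event loop policy when available; no-op otherwise."""
    try:
        import uvloop  # optional dependency; not available on Windows
    except ImportError:
        return False
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
    return True


# Call once at startup, before asyncio.run() creates the event loop.
```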
+## See also
+
+- [uvloop on GitHub](https://github.com/MagicStack/uvloop)
+- [Installation Guide]({{< relref "../installation.md" >}}) — base Genesis installation
diff --git a/examples/queue.py b/examples/queue.py
new file mode 100644
index 0000000..540b6da
--- /dev/null
+++ b/examples/queue.py
@@ -0,0 +1,45 @@
+"""
+Queue example.
+
+One extension calls another via the app: the caller is held (music or message)
+until a queue slot is free, then we bridge them to the callee. Only one
+bridge at a time so you keep control (e.g. one agent per queue).
+"""
+
+import asyncio
+import os
+
+from genesis import Outbound, Session, Queue, Channel
+from genesis.types import ChannelState
+
+HOST = os.getenv("HOST", "0.0.0.0")
+PORT = int(os.getenv("PORT", "9696"))
+CALLEE = "user/1001"
+# Sound to play while caller waits. Default: music on hold (project Docker has MOH).
+# Alternative: ivr/8000/ivr-one_moment_please.wav (Callie voice)
+HOLD_SOUND = os.getenv("HOLD_SOUND", "local_stream://moh")
+
+queue = Queue() # in-memory by default
+
+
+async def handler(session: Session) -> None:
+ if session.channel is None:
+ return
+ await session.channel.answer()
+
+ # Start hold sound (block=False so it plays while we wait for a slot)
+ await session.channel.playback(HOLD_SOUND, block=False)
+
+ async with queue.slot("support"):
+ callee = await Channel.create(session, CALLEE)
+ await callee.wait(ChannelState.EXECUTE, timeout=30.0)
+ await session.channel.bridge(callee)
+
+
+async def main() -> None:
+ server = Outbound(handler=handler, host=HOST, port=PORT)
+ await server.start()
+
+
+if __name__ == "__main__":
+ asyncio.run(main())
diff --git a/genesis/__init__.py b/genesis/__init__.py
index f14b6ce..c1585d8 100644
--- a/genesis/__init__.py
+++ b/genesis/__init__.py
@@ -6,12 +6,22 @@
from .protocol.parser import ESLEvent
from .inbound import Inbound
from .channel import Channel
+from .exceptions import QueueTimeoutError
from .group import (
RingGroup,
RingMode,
InMemoryLoadBalancer,
RedisLoadBalancer,
)
+from .queue import (
+ Queue,
+ QueueBackend,
+ QueueSemaphore,
+ QueueSlot,
+ InMemoryBackend,
+ RedisBackend,
+)
+from .loop import use_uvloop
__all__ = [
"Inbound",
@@ -25,5 +35,13 @@
"RingMode",
"InMemoryLoadBalancer",
"RedisLoadBalancer",
+ "Queue",
+ "QueueBackend",
+ "QueueSemaphore",
+ "QueueSlot",
+ "QueueTimeoutError",
+ "InMemoryBackend",
+ "RedisBackend",
+ "use_uvloop",
]
__version__ = importlib.metadata.version("genesis")
diff --git a/genesis/cli/__init__.py b/genesis/cli/__init__.py
index 4104866..c84d3a9 100644
--- a/genesis/cli/__init__.py
+++ b/genesis/cli/__init__.py
@@ -11,15 +11,24 @@
import typer
from rich import print
-from opentelemetry import metrics
+from opentelemetry import metrics, trace
from opentelemetry.sdk.metrics import MeterProvider
-from opentelemetry.sdk.resources import Resource
+from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
+from opentelemetry.sdk.trace import TracerProvider
+from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.prometheus import PrometheusMetricReader
-from prometheus_client import start_http_server
+from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
+from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from genesis.cli.consumer import consumer
from genesis.cli.outbound import outbound
from genesis.observability import reconfigure_logger, logger
+from genesis.observability.otel_config import (
+ create_resource,
+ get_otel_exporter_otlp_metrics_endpoint,
+ get_otel_exporter_otlp_traces_endpoint,
+ is_otel_sdk_disabled,
+)
app = typer.Typer(rich_markup_mode="rich")
app.add_typer(consumer, name="consumer", short_help="Run you ESL events consumer.")
@@ -45,20 +54,6 @@ def callback(
typer.Option("--json", help="Output logs in JSON format."),
] = False,
) -> None:
- reconfigure_logger(json)
-
- try:
- # Setup OpenTelemetry
- metric_reader = PrometheusMetricReader()
- provider = MeterProvider(
- resource=Resource.create({"service.name": "genesis"}),
- metric_readers=[metric_reader],
- )
- metrics.set_meter_provider(provider)
-
- except Exception as e:
- logger.warning(f"Failed to setup OpenTelemetry: {e}")
-
"""
Genesis - [blue]FreeSWITCH Event Socket protocol[/blue] implementation with [bold]asyncio[/bold].
@@ -66,3 +61,28 @@ def callback(
ℹ️ Read more in the docs: [link]https://otoru.github.io/Genesis/[/link].
"""
+ reconfigure_logger(json)
+
+ try:
+ # Setup OpenTelemetry (honors OTEL_SDK_DISABLED, OTEL_* env vars)
+ if not is_otel_sdk_disabled():
+ resource = create_resource()
+ metric_readers: list = [PrometheusMetricReader()]
+ if get_otel_exporter_otlp_metrics_endpoint():
+ metric_readers.append(
+ PeriodicExportingMetricReader(
+ OTLPMetricExporter(),
+ export_interval_millis=60_000,
+ )
+ )
+ metrics.set_meter_provider(
+ MeterProvider(resource=resource, metric_readers=metric_readers)
+ )
+ if get_otel_exporter_otlp_traces_endpoint():
+ tracer_provider = TracerProvider(resource=resource)
+ tracer_provider.add_span_processor(
+ BatchSpanProcessor(OTLPSpanExporter())
+ )
+ trace.set_tracer_provider(tracer_provider)
+ except Exception as e:
+ logger.warning(f"Failed to setup OpenTelemetry: {e}")
diff --git a/genesis/cli/consumer.py b/genesis/cli/consumer.py
index d37719b..fa09a2d 100644
--- a/genesis/cli/consumer.py
+++ b/genesis/cli/consumer.py
@@ -7,6 +7,7 @@
import typer
from genesis.cli import watcher
+from genesis.loop import use_uvloop
from genesis.observability import logger
from genesis.consumer import Consumer
from genesis.cli.exceptions import CLIExcpetion
@@ -79,6 +80,8 @@ def _run(
levels = get_log_level_map()
logger.setLevel(levels.get(loglevel.upper(), logging.INFO))
+ use_uvloop()
+
if reload:
asyncio.run(_run_with_reload(consumer_app, path))
else:
diff --git a/genesis/cli/outbound.py b/genesis/cli/outbound.py
index 3294f5d..dad57d7 100644
--- a/genesis/cli/outbound.py
+++ b/genesis/cli/outbound.py
@@ -7,6 +7,7 @@
import typer
from genesis.cli import watcher
+from genesis.loop import use_uvloop
from genesis.observability import logger
from genesis.outbound import Outbound
from genesis.cli.exceptions import CLIExcpetion
@@ -173,6 +174,8 @@ def _run(
levels = get_log_level_map()
logger.setLevel(levels.get(loglevel.upper(), logging.INFO))
+ use_uvloop()
+
if reload:
asyncio.run(_run_with_reload(outbound_app, path))
else:
diff --git a/genesis/exceptions.py b/genesis/exceptions.py
index 2d35050..2e6a8ef 100644
--- a/genesis/exceptions.py
+++ b/genesis/exceptions.py
@@ -50,3 +50,9 @@ class TimeoutError(GenesisError):
"""Occurs when an operation times out (e.g., waiting for an event)."""
...
+
+
+class QueueTimeoutError(TimeoutError):
+ """Occurs when waiting for a queue slot exceeds the optional timeout."""
+
+ ...
diff --git a/genesis/loop.py b/genesis/loop.py
new file mode 100644
index 0000000..6580b05
--- /dev/null
+++ b/genesis/loop.py
@@ -0,0 +1,38 @@
+"""
+Event loop utilities.
+
+Provides optional uvloop integration for improved asyncio performance on Unix.
+"""
+
+import asyncio
+
+
+def use_uvloop() -> bool:
+ """
+ Set the current event loop policy to use uvloop, when available.
+
+ uvloop is a fast, drop-in replacement for the default asyncio event loop,
+ built on libuv. It is only supported on Unix (Linux and macOS); on Windows
+ or when uvloop is not installed, this function does nothing.
+
+ Call this once at application startup, before creating any event loop
+ (e.g. before asyncio.run()).
+
+ Returns:
+ True if uvloop was successfully installed as the event loop policy,
+ False otherwise (uvloop not installed or not supported on this platform).
+
+ Example:
+ >>> from genesis import use_uvloop
+ >>> use_uvloop()
+ True
+ >>> import asyncio
+ >>> asyncio.run(my_main())
+ """
+ try:
+ import uvloop
+
+ asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
+ return True
+ except (ImportError, OSError, AttributeError):
+ return False
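
The intended call pattern, sketched with a local copy of the helper (the real one lives in `genesis.loop`). The return value is simply `False` on Windows or when uvloop is not installed, so the caller never has to branch:

```python
import asyncio

def use_uvloop() -> bool:
    # Local mirror of genesis.loop.use_uvloop: swap in uvloop's policy if we can.
    try:
        import uvloop
        asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
        return True
    except (ImportError, OSError, AttributeError):
        return False

async def main() -> str:
    return "ok"

installed = use_uvloop()      # call once, before any event loop exists
result = asyncio.run(main())  # runs under uvloop only when installed is True
```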
diff --git a/genesis/observability/__init__.py b/genesis/observability/__init__.py
index 00d9703..5ee4fd8 100644
--- a/genesis/observability/__init__.py
+++ b/genesis/observability/__init__.py
@@ -7,6 +7,15 @@
JSONFormatter,
get_log_level,
)
+from .otel_config import (
+ create_resource,
+ get_otel_exporter_otlp_endpoint,
+ get_otel_exporter_otlp_metrics_endpoint,
+ get_otel_exporter_otlp_traces_endpoint,
+ get_otel_resource_attributes,
+ get_otel_service_name,
+ is_otel_sdk_disabled,
+)
from .server import Observability, AppType, observability
__all__ = [
@@ -20,4 +29,11 @@
"Observability",
"AppType",
"observability",
+ "create_resource",
+ "get_otel_exporter_otlp_endpoint",
+ "get_otel_exporter_otlp_metrics_endpoint",
+ "get_otel_exporter_otlp_traces_endpoint",
+ "get_otel_resource_attributes",
+ "get_otel_service_name",
+ "is_otel_sdk_disabled",
]
diff --git a/genesis/observability/otel_config.py b/genesis/observability/otel_config.py
new file mode 100644
index 0000000..07459fa
--- /dev/null
+++ b/genesis/observability/otel_config.py
@@ -0,0 +1,118 @@
+"""
+OpenTelemetry configuration via environment variables.
+------------------------------------------------------
+
+Supports the standard OTEL environment variables for configuring
+tracing and metrics in Genesis.
+"""
+
+import os
+from typing import Dict, Optional
+
+from opentelemetry.sdk.resources import Resource
+
+
+def _parse_boolean(value: str) -> bool:
+ """Parse OTEL boolean env: only 'true' (case-insensitive) is True."""
+ return value.strip().lower() == "true"
+
+
+def _parse_resource_attributes(value: str) -> Dict[str, str]:
+ """
+ Parse OTEL_RESOURCE_ATTRIBUTES string into a dict.
+
+ Format: key1=value1,key2=value2
+ Values may contain equals signs; only the first '=' splits key and value.
+ """
+ attrs: Dict[str, str] = {}
+ for item in value.split(","):
+ item = item.strip()
+ if not item:
+ continue
+ idx = item.find("=")
+ if idx < 0:
+ continue
+ key = item[:idx].strip()
+ val = item[idx + 1 :].strip()
+ if key:
+ attrs[key] = val
+ return attrs
+
+
+def is_otel_sdk_disabled() -> bool:
+ """
+ Return whether the OpenTelemetry SDK is disabled via environment.
+
+ Reads OTEL_SDK_DISABLED. Only the case-insensitive value "true"
+ disables the SDK (per OpenTelemetry spec).
+ """
+ raw = os.getenv("OTEL_SDK_DISABLED", "").strip()
+ if not raw:
+ return False
+ return _parse_boolean(raw)
+
+
+def get_otel_service_name() -> str:
+ """
+ Return the service name for the OTEL resource.
+
+ Reads OTEL_SERVICE_NAME. Defaults to "genesis" when unset.
+ """
+ return os.getenv("OTEL_SERVICE_NAME", "genesis").strip() or "genesis"
+
+
+def get_otel_resource_attributes() -> Dict[str, str]:
+ """
+ Return resource attributes from OTEL_RESOURCE_ATTRIBUTES.
+
+ Format: key1=value1,key2=value2. OTEL_SERVICE_NAME is applied
+ separately and takes precedence over service.name here.
+ """
+ raw = os.getenv("OTEL_RESOURCE_ATTRIBUTES", "").strip()
+ if not raw:
+ return {}
+ return _parse_resource_attributes(raw)
+
+
+def get_otel_exporter_otlp_endpoint() -> Optional[str]:
+ """
+ Return the OTLP exporter endpoint for HTTP (all signals).
+
+ Reads OTEL_EXPORTER_OTLP_ENDPOINT. When set, metrics (and traces if
+ configured) can be sent to this endpoint via OTLP/HTTP.
+    When unset, returns None; the OTLP/HTTP spec default is http://localhost:4318.
+ """
+ raw = os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT", "").strip()
+ return raw if raw else None
+
+
+def get_otel_exporter_otlp_metrics_endpoint() -> Optional[str]:
+ """
+ Return the OTLP metrics exporter endpoint (overrides OTEL_EXPORTER_OTLP_ENDPOINT for metrics).
+ """
+ raw = os.getenv("OTEL_EXPORTER_OTLP_METRICS_ENDPOINT", "").strip()
+ return raw if raw else get_otel_exporter_otlp_endpoint()
+
+
+def get_otel_exporter_otlp_traces_endpoint() -> Optional[str]:
+ """
+ Return the OTLP traces exporter endpoint (overrides OTEL_EXPORTER_OTLP_ENDPOINT for traces).
+ """
+ raw = os.getenv("OTEL_EXPORTER_OTLP_TRACES_ENDPOINT", "").strip()
+ return raw if raw else get_otel_exporter_otlp_endpoint()
+
+
+def create_resource() -> Resource:
+ """
+ Create an OpenTelemetry Resource from OTEL environment variables.
+
+ Uses:
+ - OTEL_SERVICE_NAME for service.name (default: "genesis")
+ - OTEL_RESOURCE_ATTRIBUTES for additional key=value pairs
+
+ OTEL_SERVICE_NAME takes precedence over service.name in
+ OTEL_RESOURCE_ATTRIBUTES (per OpenTelemetry spec).
+ """
+ attrs: Dict[str, str] = dict(get_otel_resource_attributes())
+ attrs["service.name"] = get_otel_service_name()
+ return Resource.create(attrs)
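
The parsing and precedence rules can be seen end to end in a small sketch: a local re-implementation of `_parse_resource_attributes` plus the `OTEL_SERVICE_NAME` override, kept independent of the OpenTelemetry SDK:

```python
import os
from typing import Dict

def parse_resource_attributes(value: str) -> Dict[str, str]:
    # key1=value1,key2=value2 -- only the first '=' splits key from value
    attrs: Dict[str, str] = {}
    for item in value.split(","):
        item = item.strip()
        if not item:
            continue
        key, sep, val = item.partition("=")
        if sep and key.strip():
            attrs[key.strip()] = val.strip()
    return attrs

os.environ["OTEL_RESOURCE_ATTRIBUTES"] = "service.name=ignored,env=prod,expr=a=b"
os.environ["OTEL_SERVICE_NAME"] = "genesis-demo"

attrs = parse_resource_attributes(os.environ["OTEL_RESOURCE_ATTRIBUTES"])
# OTEL_SERVICE_NAME wins over service.name from the attribute list.
attrs["service.name"] = os.environ["OTEL_SERVICE_NAME"]
```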
diff --git a/genesis/queue/__init__.py b/genesis/queue/__init__.py
new file mode 100644
index 0000000..9869b3e
--- /dev/null
+++ b/genesis/queue/__init__.py
@@ -0,0 +1,22 @@
+"""
+Genesis queue
+-------------
+
+FIFO queue with a concurrency limit per queue_id. Offers a context-manager and
+semaphore-like API; backends: in-memory (single process) or Redis (scalable).
+"""
+
+from genesis.exceptions import QueueTimeoutError
+from genesis.queue.backends import InMemoryBackend, QueueBackend
+from genesis.queue.core import Queue, QueueSemaphore, QueueSlot
+from genesis.queue.redis_backend import RedisBackend
+
+__all__ = [
+ "Queue",
+ "QueueBackend",
+ "QueueSemaphore",
+ "QueueSlot",
+ "QueueTimeoutError",
+ "InMemoryBackend",
+ "RedisBackend",
+]
diff --git a/genesis/queue/backends.py b/genesis/queue/backends.py
new file mode 100644
index 0000000..e172bb6
--- /dev/null
+++ b/genesis/queue/backends.py
@@ -0,0 +1,177 @@
+"""
+Genesis queue backends
+----------------------
+
+Backend protocol and in-memory implementation for the queue abstraction.
+"""
+
+from __future__ import annotations
+
+import asyncio
+import time
+from collections import deque
+from typing import Optional, Protocol, runtime_checkable
+
+from genesis.exceptions import QueueTimeoutError
+
+
+class _QueueState:
+ """Per-queue state: FIFO deque, lock, condition; semaphore created on first acquire."""
+
+ __slots__ = ("deque", "lock", "condition", "semaphore", "max_concurrent")
+
+ def __init__(self) -> None:
+ self.deque: deque[str] = deque()
+ self.lock = asyncio.Lock()
+ self.condition = asyncio.Condition(self.lock)
+ self.semaphore: asyncio.Semaphore | None = None
+ self.max_concurrent: int | None = None
+
+
+@runtime_checkable
+class QueueBackend(Protocol):
+ """
+ Protocol for queue backends.
+
+ Implementations provide FIFO-ordered, concurrency-limited slots per queue_id.
+ """
+
+ async def enqueue(self, queue_id: str, item_id: str) -> None:
+ """Add item_id to the tail of the queue."""
+ ...
+
+ async def wait_and_acquire(
+ self,
+ queue_id: str,
+ item_id: str,
+ max_concurrent: int,
+ timeout: Optional[float] = None,
+ ) -> None:
+ """
+ Block until this item is at the head of the queue and a slot is free,
+ then consume the head and hold one slot.
+
+ If ``timeout`` is set (seconds) and expires before acquiring,
+ remove this item from the queue and raise :exc:`QueueTimeoutError`.
+ """
+ ...
+
+ async def release(self, queue_id: str) -> None:
+ """Release one slot for the queue."""
+ ...
+
+
+class InMemoryBackend:
+ """
+ In-memory queue backend.
+
+ Uses a deque and a semaphore per queue_id. Suitable for single-process use.
+ """
+
+ def __init__(self) -> None:
+ """Initialize in-memory backend."""
+ self._states: dict[str, _QueueState] = {}
+
+ def _get_or_create_state(self, queue_id: str) -> _QueueState:
+ """Get or create queue state (deque, lock, condition). Semaphore set in wait_and_acquire."""
+ if queue_id not in self._states:
+ self._states[queue_id] = _QueueState()
+ return self._states[queue_id]
+
+ async def enqueue(self, queue_id: str, item_id: str) -> None:
+ """Add item_id to the tail of the queue."""
+ state = self._get_or_create_state(queue_id)
+ async with state.lock:
+ state.deque.append(item_id)
+ state.condition.notify_all()
+
+ def _remove_item_and_raise_timeout(
+ self,
+ state: _QueueState,
+ item_id: str,
+ ) -> None:
+ """
+ Remove item_id from deque if present, notify waiters and raise QueueTimeoutError.
+ """
+ try:
+ state.deque.remove(item_id)
+ except ValueError:
+ pass
+ state.condition.notify_all()
+        raise QueueTimeoutError("timed out waiting to reach the head of the queue")
+
+ async def _wait_until_at_head(
+ self,
+ state: _QueueState,
+ item_id: str,
+ deadline: Optional[float],
+ ) -> None:
+ """Wait until item_id is at head of deque; on timeout remove item and raise."""
+ while True:
+ if state.deque and state.deque[0] == item_id:
+ state.deque.popleft()
+ state.condition.notify_all()
+ return
+ remaining = None
+ if deadline is not None:
+ remaining = deadline - time.monotonic()
+ if remaining <= 0:
+ self._remove_item_and_raise_timeout(state, item_id)
+ try:
+ if remaining is not None:
+ await asyncio.wait_for(state.condition.wait(), timeout=remaining)
+ else:
+ await state.condition.wait()
+ except asyncio.TimeoutError:
+ self._remove_item_and_raise_timeout(state, item_id)
+
+ async def _acquire_semaphore(
+ self, state: _QueueState, deadline: Optional[float]
+ ) -> None:
+ """Acquire one semaphore slot; raise QueueTimeoutError if deadline exceeded."""
+ assert state.semaphore is not None # set in wait_and_acquire before calling
+ if deadline is not None:
+ remaining = deadline - time.monotonic()
+ if remaining <= 0:
+ raise QueueTimeoutError()
+ try:
+ await asyncio.wait_for(state.semaphore.acquire(), timeout=remaining)
+ except asyncio.TimeoutError:
+                raise QueueTimeoutError("timed out acquiring a queue slot") from None
+ else:
+ await state.semaphore.acquire()
+
+ async def wait_and_acquire(
+ self,
+ queue_id: str,
+ item_id: str,
+ max_concurrent: int,
+ timeout: Optional[float] = None,
+ ) -> None:
+ """
+ Block until this item is at the head and a slot is free, then pop head and acquire.
+ First call for a queue_id sets max_concurrent for that queue.
+ If timeout (seconds) expires, remove item from queue and raise QueueTimeoutError.
+ """
+ state = self._get_or_create_state(queue_id)
+ deadline = time.monotonic() + timeout if timeout is not None else None
+ async with state.lock:
+ if state.semaphore is None:
+ state.semaphore = asyncio.Semaphore(max_concurrent)
+ state.max_concurrent = max_concurrent
+ elif state.max_concurrent != max_concurrent:
+ raise ValueError(
+ f"Queue '{queue_id}' was initialized with max_concurrent={state.max_concurrent}, "
+ f"got {max_concurrent}."
+ )
+ await self._wait_until_at_head(state, item_id, deadline)
+ await self._acquire_semaphore(state, deadline)
+
+ async def release(self, queue_id: str) -> None:
+ """Release one slot for the queue."""
+ if queue_id in self._states:
+ state = self._states[queue_id]
+ if state.semaphore is not None:
+ state.semaphore.release()
+ async with state.lock:
+ state.condition.notify_all()
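
The timeout handling in `_acquire_semaphore` follows a common asyncio pattern: race the acquire against `asyncio.wait_for` and convert the builtin timeout into the domain error. A standalone sketch of that pattern, with a stand-in exception class:

```python
import asyncio

class QueueTimeoutError(TimeoutError):
    """Stand-in for genesis.exceptions.QueueTimeoutError."""

async def acquire_with_deadline(sem: asyncio.Semaphore, timeout: float) -> None:
    # Mirrors _acquire_semaphore: wrap acquire() and convert the timeout.
    try:
        await asyncio.wait_for(sem.acquire(), timeout=timeout)
    except asyncio.TimeoutError:
        raise QueueTimeoutError() from None

async def main() -> bool:
    sem = asyncio.Semaphore(1)
    await sem.acquire()  # exhaust the only slot
    try:
        await acquire_with_deadline(sem, timeout=0.01)
    except QueueTimeoutError:
        return True
    return False

timed_out = asyncio.run(main())
```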
diff --git a/genesis/queue/core.py b/genesis/queue/core.py
new file mode 100644
index 0000000..5fe2c59
--- /dev/null
+++ b/genesis/queue/core.py
@@ -0,0 +1,217 @@
+"""
+Genesis queue core
+------------------
+
+Queue abstraction with context-manager and semaphore-like API.
+"""
+
+from __future__ import annotations
+
+import time
+from contextlib import asynccontextmanager
+from typing import Optional
+from uuid import uuid4
+
+from opentelemetry import metrics, trace
+
+from genesis.queue.backends import InMemoryBackend, QueueBackend
+
+tracer = trace.get_tracer(__name__)
+meter = metrics.get_meter(__name__)
+
+queue_operations_counter = meter.create_counter(
+ "genesis.queue.operations",
+ description="Queue slot acquire/release operations",
+ unit="1",
+)
+queue_wait_duration = meter.create_histogram(
+ "genesis.queue.wait_duration",
+ description="Time spent waiting for a slot",
+ unit="s",
+)
+
+ATTR_QUEUE_ID = "queue.id"
+ATTR_QUEUE_ITEM_ID = "queue.item_id"
+
+
+class QueueSlot:
+ """
+ Async context manager for a single slot acquisition.
+
+ Use via ``async with queue.slot(queue_id):``. On enter, enqueues and blocks
+ until this item is at the head and a slot is free; on exit, releases the slot.
+ Optional ``timeout`` (seconds): raise :exc:`~genesis.exceptions.QueueTimeoutError` if not acquired in time.
+ """
+
+ __slots__ = (
+ "_queue",
+ "_queue_id",
+ "_item_id",
+ "_max_concurrent",
+ "_timeout",
+ "_acquired",
+ "_released",
+ )
+
+ def __init__(
+ self,
+ queue: "Queue",
+ queue_id: str,
+ *,
+ item_id: Optional[str] = None,
+ max_concurrent: int = 1,
+ timeout: Optional[float] = None,
+ ) -> None:
+ self._queue = queue
+ self._queue_id = queue_id
+ self._item_id = item_id or str(uuid4())
+ self._max_concurrent = max_concurrent
+ self._timeout = timeout
+ self._acquired = False
+ self._released = False
+
+ async def __aenter__(self) -> "QueueSlot":
+ await self._queue._enqueue(self._queue_id, self._item_id)
+ start = time.monotonic()
+ with tracer.start_as_current_span(
+ "queue.wait_and_acquire",
+ attributes={
+ ATTR_QUEUE_ID: self._queue_id,
+ ATTR_QUEUE_ITEM_ID: self._item_id,
+ },
+ ):
+ await self._queue._backend.wait_and_acquire(
+ self._queue_id,
+ self._item_id,
+ self._max_concurrent,
+ timeout=self._timeout,
+ )
+ self._acquired = True
+ elapsed = time.monotonic() - start
+ queue_wait_duration.record(elapsed, attributes={ATTR_QUEUE_ID: self._queue_id})
+ queue_operations_counter.add(
+ 1, attributes={ATTR_QUEUE_ID: self._queue_id, "op": "acquire"}
+ )
+ return self
+
+ async def __aexit__(self, *args: object) -> None:
+ if self._acquired and not self._released:
+ self._released = True
+ await self._queue._release(self._queue_id)
+ queue_operations_counter.add(
+ 1, attributes={ATTR_QUEUE_ID: self._queue_id, "op": "release"}
+ )
+
+
+class QueueSemaphore:
+ """
+ Semaphore-like handle for a queue: reusable context manager for the same queue_id.
+
+ Use via ``async with queue.semaphore(queue_id):`` or store and reuse:
+ ``sem = queue.semaphore("sales", max_concurrent=2); async with sem: ...``
+ Optional ``timeout`` (seconds) applies to each acquire.
+ """
+
+ __slots__ = ("_queue", "_queue_id", "_max_concurrent", "_timeout", "_slot")
+
+ def __init__(
+ self,
+ queue: "Queue",
+ queue_id: str,
+ max_concurrent: int = 1,
+ timeout: Optional[float] = None,
+ ) -> None:
+ self._queue = queue
+ self._queue_id = queue_id
+ self._max_concurrent = max_concurrent
+ self._timeout = timeout
+ self._slot: Optional[QueueSlot] = None
+
+ @asynccontextmanager
+ async def __call__(self, *, item_id: Optional[str] = None):
+ """Acquire a slot with optional item_id (e.g. session uuid)."""
+ slot = QueueSlot(
+ self._queue,
+ self._queue_id,
+ item_id=item_id,
+ max_concurrent=self._max_concurrent,
+ timeout=self._timeout,
+ )
+ async with slot:
+ yield
+
+ async def __aenter__(self) -> "QueueSemaphore":
+ self._slot = QueueSlot(
+ self._queue,
+ self._queue_id,
+ max_concurrent=self._max_concurrent,
+ timeout=self._timeout,
+ )
+ await self._slot.__aenter__()
+ return self
+
+ async def __aexit__(self, *args: object) -> None:
+ if self._slot is not None:
+ await self._slot.__aexit__(*args)
+ self._slot = None
+
+
+class Queue:
+ """
+ FIFO queue with concurrency limit per queue_id.
+
+ Uses an in-memory backend by default; pass a backend for Redis or custom
+ storage. API is context-manager and semaphore-like:
+ ``async with queue.slot("sales"):`` or
+ ``sem = queue.semaphore("sales", max_concurrent=2); async with sem: ...``
+ """
+
+ __slots__ = ("_backend",)
+
+ def __init__(self, backend: Optional[QueueBackend] = None) -> None:
+ self._backend = backend if backend is not None else InMemoryBackend()
+
+ def slot(
+ self,
+ queue_id: str,
+ *,
+ item_id: Optional[str] = None,
+ max_concurrent: int = 1,
+ timeout: Optional[float] = None,
+ ) -> QueueSlot:
+ """
+ Return a context manager that acquires a slot in the given queue.
+
+ On enter: enqueue (with optional item_id), then block until at head and
+ a slot is free. On exit: release the slot.
+ If ``timeout`` (seconds) is set and expires before acquiring,
+ the item is removed from the queue and :exc:`~genesis.exceptions.QueueTimeoutError` is raised.
+ """
+ return QueueSlot(
+ self,
+ queue_id,
+ item_id=item_id,
+ max_concurrent=max_concurrent,
+ timeout=timeout,
+ )
+
+ def semaphore(
+ self,
+ queue_id: str,
+ max_concurrent: int = 1,
+ timeout: Optional[float] = None,
+ ) -> QueueSemaphore:
+ """
+ Return a semaphore-like handle for the queue. Reusable for multiple
+ ``async with sem:`` calls with the same concurrency limit.
+ Optional ``timeout`` (seconds) applies to each acquire.
+ """
+ return QueueSemaphore(
+ self, queue_id, max_concurrent=max_concurrent, timeout=timeout
+ )
+
+ async def _enqueue(self, queue_id: str, item_id: str) -> None:
+ await self._backend.enqueue(queue_id, item_id)
+
+ async def _release(self, queue_id: str) -> None:
+ await self._backend.release(queue_id)
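
Call sites are expected to look like `async with queue.slot("sales", max_concurrent=2): ...`. The ordering guarantee can be sketched with a plain `asyncio.Semaphore` standing in for the backend (asyncio wakes waiters in FIFO order of arrival, which is the behavior the deque enforces across the full API):

```python
import asyncio

async def main() -> list[str]:
    entered: list[str] = []
    # Stand-in for queue.semaphore("sales", max_concurrent=2).
    sem = asyncio.Semaphore(2)

    async def call(name: str) -> None:
        async with sem:                # like: async with queue.slot("sales", ...)
            entered.append(name)
            await asyncio.sleep(0.01)  # hold the slot briefly

    # Four concurrent calls, only two slots: entry order stays FIFO.
    await asyncio.gather(*(call(f"call-{i}") for i in range(4)))
    return entered

entered = asyncio.run(main())
```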
diff --git a/genesis/queue/redis_backend.py b/genesis/queue/redis_backend.py
new file mode 100644
index 0000000..30d7896
--- /dev/null
+++ b/genesis/queue/redis_backend.py
@@ -0,0 +1,171 @@
+"""
+Genesis queue Redis backend
+---------------------------
+
+Redis-backed queue for multi-process / horizontal scaling.
+"""
+
+from __future__ import annotations
+
+import asyncio
+import time
+from typing import Any, Optional
+
+import redis.asyncio as redis
+
+from genesis.exceptions import QueueTimeoutError
+
+# Lua: try to acquire if we're at head and slots available. Keys: waiting_list, in_use_key. Args: item_id, max_concurrent.
+SCRIPT_ACQUIRE = """
+local head = redis.call('LINDEX', KEYS[1], 0)
+if head == ARGV[1] then
+ local in_use = tonumber(redis.call('GET', KEYS[2]) or '0')
+ if in_use < tonumber(ARGV[2]) then
+ redis.call('LPOP', KEYS[1])
+ redis.call('INCR', KEYS[2])
+ return 1
+ end
+end
+return 0
+"""
+
+
+class RedisBackend:
+ """
+ Redis-backed queue backend.
+
+ Uses a list for FIFO order and a counter for in-use slots per queue_id.
+ Suitable for horizontal scaling (multiple app instances).
+ """
+
+ def __init__(
+ self,
+ url: str = "redis://localhost:6379",
+ key_prefix: str = "genesis:queue:",
+ ) -> None:
+ self._url = url
+ self._prefix = key_prefix
+ self._client: Any = None
+
+ async def _get_client(self) -> Any:
+ if self._client is None:
+ self._client = await redis.from_url(self._url)
+ return self._client
+
+ def _waiting_key(self, queue_id: str) -> str:
+ return f"{self._prefix}{queue_id}:waiting"
+
+ def _in_use_key(self, queue_id: str) -> str:
+ return f"{self._prefix}{queue_id}:in_use"
+
+ def _channel(self, queue_id: str) -> str:
+ return f"{self._prefix}{queue_id}:release"
+
+ async def enqueue(self, queue_id: str, item_id: str) -> None:
+ """Add item_id to the tail of the queue."""
+ client = await self._get_client()
+ key = self._waiting_key(queue_id)
+ await client.rpush(key, item_id)
+
+ async def _try_acquire(
+ self,
+ client: Any,
+ waiting_key: str,
+ in_use_key: str,
+ item_id: str,
+ max_concurrent: int,
+ ) -> bool:
+ """Try to acquire a slot; returns True if acquired."""
+ try:
+ script = client.register_script(SCRIPT_ACQUIRE)
+ got = await script(
+ keys=[waiting_key, in_use_key],
+ args=[item_id, str(max_concurrent)],
+ )
+ return bool(got)
+ except AttributeError:
+ pass
+ head = await client.lindex(waiting_key, 0)
+ if head is not None:
+ head = head.decode("utf-8") if isinstance(head, bytes) else head
+ in_use = int(await client.get(in_use_key) or 0)
+ if head == item_id and in_use < max_concurrent:
+ await client.lpop(waiting_key)
+ await client.incr(in_use_key)
+ return True
+ return False
+
+ async def _wait_for_release_signal(self, client: Any, channel: str) -> None:
+ """Block until a message is published on channel or timeout."""
+ sub = client.pubsub()
+ await sub.subscribe(channel)
+ try:
+ async for msg in sub.listen():
+ if msg.get("type") == "message":
+ return
+ await asyncio.sleep(0)
+ except asyncio.CancelledError:
+ raise
+ finally:
+ await sub.unsubscribe(channel)
+ await sub.close()
+
+ async def wait_and_acquire(
+ self,
+ queue_id: str,
+ item_id: str,
+ max_concurrent: int,
+ timeout: Optional[float] = None,
+ ) -> None:
+ """
+ Block until this item is at the head and a slot is free, then pop head and acquire.
+ If timeout (seconds) expires, remove item from waiting list and raise QueueTimeoutError.
+ """
+ client = await self._get_client()
+ waiting_key = self._waiting_key(queue_id)
+ in_use_key = self._in_use_key(queue_id)
+ channel = self._channel(queue_id)
+ deadline = time.monotonic() + timeout if timeout is not None else None
+
+ while True:
+ if await self._try_acquire(
+ client, waiting_key, in_use_key, item_id, max_concurrent
+ ):
+ return
+ wait_timeout = 1.0
+ if deadline is not None:
+ remaining = deadline - time.monotonic()
+ if remaining <= 0:
+ await client.lrem(waiting_key, 1, item_id)
+                    raise QueueTimeoutError("timed out waiting for a queue slot")
+ wait_timeout = min(1.0, remaining)
+ try:
+ await asyncio.wait_for(
+ self._wait_for_release_signal(client, channel),
+ timeout=wait_timeout,
+ )
+ except asyncio.TimeoutError:
+ if deadline is not None and time.monotonic() >= deadline:
+ await client.lrem(waiting_key, 1, item_id)
+                    raise QueueTimeoutError("timed out waiting for a queue slot") from None
+ # retry loop
+
+ async def release(self, queue_id: str) -> None:
+ """Release one slot for the queue."""
+ client = await self._get_client()
+ in_use_key = self._in_use_key(queue_id)
+ channel = self._channel(queue_id)
+ await client.decr(in_use_key)
+ await client.publish(channel, "1")
+
+ async def close(self) -> None:
+ """Close the Redis connection."""
+ if self._client is not None:
+ await self._client.aclose()
+ self._client = None
+
+ async def __aenter__(self) -> "RedisBackend":
+ return self
+
+ async def __aexit__(self, *args: object) -> None:
+ await self.close()
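
The atomicity lives in `SCRIPT_ACQUIRE`; the decision itself is small. A pure-Python rendering of the same head-of-queue plus free-slot check, with a list and an int standing in for the Redis list and counter:

```python
from typing import List, Tuple

def try_acquire(
    waiting: List[str], in_use: int, item_id: str, max_concurrent: int
) -> Tuple[int, bool]:
    # Same branch as the Lua script: acquire only when we are the head
    # of the waiting list AND a concurrency slot is free.
    if waiting and waiting[0] == item_id and in_use < max_concurrent:
        waiting.pop(0)           # LPOP
        return in_use + 1, True  # INCR
    return in_use, False

waiting = ["a", "b"]
res_not_head = try_acquire(waiting, 0, "b", 2)  # "b" is not at the head
res_no_slot = try_acquire(waiting, 1, "a", 1)   # head, but all slots in use
res_ok = try_acquire(waiting, 0, "a", 1)        # head and a slot free
```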
diff --git a/poetry.lock b/poetry.lock
index ca677b0..373ce3f 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -48,10 +48,10 @@ trio = ["trio (>=0.31.0) ; python_version < \"3.10\"", "trio (>=0.32.0) ; python
name = "async-timeout"
version = "5.0.1"
description = "Timeout context manager for asyncio programs"
-optional = true
+optional = false
python-versions = ">=3.8"
groups = ["main"]
-markers = "extra == \"redis\" and python_full_version < \"3.11.3\""
+markers = "python_full_version < \"3.11.3\""
files = [
{file = "async_timeout-5.0.1-py3-none-any.whl", hash = "sha256:39e3809566ff85354557ec2398b55e096c8364bacac9405a7a1fa429e77fe76c"},
{file = "async_timeout-5.0.1.tar.gz", hash = "sha256:d9321a7a3d5a6a5e187e824d2fa0793ce379a202935782d555d6e9d2735677d3"},
@@ -116,12 +116,151 @@ version = "2026.1.4"
description = "Python package for providing Mozilla's CA Bundle."
optional = false
python-versions = ">=3.7"
-groups = ["dev"]
+groups = ["main", "dev"]
files = [
{file = "certifi-2026.1.4-py3-none-any.whl", hash = "sha256:9943707519e4add1115f44c2bc244f782c0249876bf51b6599fee1ffbedd685c"},
{file = "certifi-2026.1.4.tar.gz", hash = "sha256:ac726dd470482006e014ad384921ed6438c457018f4b3d204aea4281258b2120"},
]
+[[package]]
+name = "charset-normalizer"
+version = "3.4.6"
+description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet."
+optional = false
+python-versions = ">=3.7"
+groups = ["main"]
+files = [
+ {file = "charset_normalizer-3.4.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:2e1d8ca8611099001949d1cdfaefc510cf0f212484fe7c565f735b68c78c3c95"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:e25369dc110d58ddf29b949377a93e0716d72a24f62bad72b2b39f155949c1fd"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:259695e2ccc253feb2a016303543d691825e920917e31f894ca1a687982b1de4"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:dda86aba335c902b6149a02a55b38e96287157e609200811837678214ba2b1db"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:51fb3c322c81d20567019778cb5a4a6f2dc1c200b886bc0d636238e364848c89"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-manylinux_2_31_armv7l.whl", hash = "sha256:4482481cb0572180b6fd976a4d5c72a30263e98564da68b86ec91f0fe35e8565"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:39f5068d35621da2881271e5c3205125cc456f54e9030d3f723288c873a71bf9"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:8bea55c4eef25b0b19a0337dc4e3f9a15b00d569c77211fa8cde38684f234fb7"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:f0cdaecd4c953bfae0b6bb64910aaaca5a424ad9c72d85cb88417bb9814f7550"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:150b8ce8e830eb7ccb029ec9ca36022f756986aaaa7956aad6d9ec90089338c0"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:e68c14b04827dd76dcbd1aeea9e604e3e4b78322d8faf2f8132c7138efa340a8"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:3778fd7d7cd04ae8f54651f4a7a0bd6e39a0cf20f801720a4c21d80e9b7ad6b0"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:dad6e0f2e481fffdcf776d10ebee25e0ef89f16d691f1e5dee4b586375fdc64b"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-win32.whl", hash = "sha256:74a2e659c7ecbc73562e2a15e05039f1e22c75b7c7618b4b574a3ea9118d1557"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-win_amd64.whl", hash = "sha256:aa9cccf4a44b9b62d8ba8b4dd06c649ba683e4bf04eea606d2e94cfc2d6ff4d6"},
+ {file = "charset_normalizer-3.4.6-cp310-cp310-win_arm64.whl", hash = "sha256:e985a16ff513596f217cee86c21371b8cd011c0f6f056d0920aa2d926c544058"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:82060f995ab5003a2d6e0f4ad29065b7672b6593c8c63559beefe5b443242c3e"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:60c74963d8350241a79cb8feea80e54d518f72c26db618862a8f53e5023deaf9"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f6e4333fb15c83f7d1482a76d45a0818897b3d33f00efd215528ff7c51b8e35d"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:bc72863f4d9aba2e8fd9085e63548a324ba706d2ea2c83b260da08a59b9482de"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9cc4fc6c196d6a8b76629a70ddfcd4635a6898756e2d9cac5565cf0654605d73"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-manylinux_2_31_armv7l.whl", hash = "sha256:0c173ce3a681f309f31b87125fecec7a5d1347261ea11ebbb856fa6006b23c8c"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:c907cdc8109f6c619e6254212e794d6548373cc40e1ec75e6e3823d9135d29cc"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:404a1e552cf5b675a87f0651f8b79f5f1e6fd100ee88dc612f89aa16abd4486f"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:e3c701e954abf6fc03a49f7c579cc80c2c6cc52525340ca3186c41d3f33482ef"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:7a6967aaf043bceabab5412ed6bd6bd26603dae84d5cb75bf8d9a74a4959d398"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:5feb91325bbceade6afab43eb3b508c63ee53579fe896c77137ded51c6b6958e"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:f820f24b09e3e779fe84c3c456cb4108a7aa639b0d1f02c28046e11bfcd088ed"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:b35b200d6a71b9839a46b9b7fff66b6638bb52fc9658aa58796b0326595d3021"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-win32.whl", hash = "sha256:9ca4c0b502ab399ef89248a2c84c54954f77a070f28e546a85e91da627d1301e"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-win_amd64.whl", hash = "sha256:a9e68c9d88823b274cf1e72f28cb5dc89c990edf430b0bfd3e2fb0785bfeabf4"},
+ {file = "charset_normalizer-3.4.6-cp311-cp311-win_arm64.whl", hash = "sha256:97d0235baafca5f2b09cf332cc275f021e694e8362c6bb9c96fc9a0eb74fc316"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:2ef7fedc7a6ecbe99969cd09632516738a97eeb8bd7258bf8a0f23114c057dab"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a4ea868bc28109052790eb2b52a9ab33f3aa7adc02f96673526ff47419490e21"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:836ab36280f21fc1a03c99cd05c6b7af70d2697e374c7af0b61ed271401a72a2"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:f1ce721c8a7dfec21fcbdfe04e8f68174183cf4e8188e0645e92aa23985c57ff"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0e28d62a8fc7a1fa411c43bd65e346f3bce9716dc51b897fbe930c5987b402d5"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-manylinux_2_31_armv7l.whl", hash = "sha256:530d548084c4a9f7a16ed4a294d459b4f229db50df689bfe92027452452943a0"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:30f445ae60aad5e1f8bdbb3108e39f6fbc09f4ea16c815c66578878325f8f15a"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:ac2393c73378fea4e52aa56285a3d64be50f1a12395afef9cce47772f60334c2"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:90ca27cd8da8118b18a52d5f547859cc1f8354a00cd1e8e5120df3e30d6279e5"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:8e5a94886bedca0f9b78fecd6afb6629142fd2605aa70a125d49f4edc6037ee6"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:695f5c2823691a25f17bc5d5ffe79fa90972cc34b002ac6c843bb8a1720e950d"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:231d4da14bcd9301310faf492051bee27df11f2bc7549bc0bb41fef11b82daa2"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:a056d1ad2633548ca18ffa2f85c202cfb48b68615129143915b8dc72a806a923"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-win32.whl", hash = "sha256:c2274ca724536f173122f36c98ce188fd24ce3dad886ec2b7af859518ce008a4"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-win_amd64.whl", hash = "sha256:c8ae56368f8cc97c7e40a7ee18e1cedaf8e780cd8bc5ed5ac8b81f238614facb"},
+ {file = "charset_normalizer-3.4.6-cp312-cp312-win_arm64.whl", hash = "sha256:899d28f422116b08be5118ef350c292b36fc15ec2daeb9ea987c89281c7bb5c4"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:11afb56037cbc4b1555a34dd69151e8e069bee82e613a73bef6e714ce733585f"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:423fb7e748a08f854a08a222b983f4df1912b1daedce51a72bd24fe8f26a1843"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:d73beaac5e90173ac3deb9928a74763a6d230f494e4bfb422c217a0ad8e629bf"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d60377dce4511655582e300dc1e5a5f24ba0cb229005a1d5c8d0cb72bb758ab8"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:530e8cebeea0d76bdcf93357aa5e41336f48c3dc709ac52da2bb167c5b8271d9"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-manylinux_2_31_armv7l.whl", hash = "sha256:a26611d9987b230566f24a0a125f17fe0de6a6aff9f25c9f564aaa2721a5fb88"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:34315ff4fc374b285ad7f4a0bf7dcbfe769e1b104230d40f49f700d4ab6bbd84"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:5f8ddd609f9e1af8c7bd6e2aca279c931aefecd148a14402d4e368f3171769fd"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:80d0a5615143c0b3225e5e3ef22c8d5d51f3f72ce0ea6fb84c943546c7b25b6c"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:92734d4d8d187a354a556626c221cd1a892a4e0802ccb2af432a1d85ec012194"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:613f19aa6e082cf96e17e3ffd89383343d0d589abda756b7764cf78361fd41dc"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:2b1a63e8224e401cafe7739f77efd3f9e7f5f2026bda4aead8e59afab537784f"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6cceb5473417d28edd20c6c984ab6fee6c6267d38d906823ebfe20b03d607dc2"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-win32.whl", hash = "sha256:d7de2637729c67d67cf87614b566626057e95c303bc0a55ffe391f5205e7003d"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-win_amd64.whl", hash = "sha256:572d7c822caf521f0525ba1bce1a622a0b85cf47ffbdae6c9c19e3b5ac3c4389"},
+ {file = "charset_normalizer-3.4.6-cp313-cp313-win_arm64.whl", hash = "sha256:a4474d924a47185a06411e0064b803c68be044be2d60e50e8bddcc2649957c1f"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-macosx_10_15_universal2.whl", hash = "sha256:9cc6e6d9e571d2f863fa77700701dae73ed5f78881efc8b3f9a4398772ff53e8"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ef5960d965e67165d75b7c7ffc60a83ec5abfc5c11b764ec13ea54fbef8b4421"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b3694e3f87f8ac7ce279d4355645b3c878d24d1424581b46282f24b92f5a4ae2"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5d11595abf8dd942a77883a39d81433739b287b6aa71620f15164f8096221b30"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7bda6eebafd42133efdca535b04ccb338ab29467b3f7bf79569883676fc628db"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-manylinux_2_31_armv7l.whl", hash = "sha256:bbc8c8650c6e51041ad1be191742b8b421d05bbd3410f43fa2a00c8db87678e8"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:22c6f0c2fbc31e76c3b8a86fba1a56eda6166e238c29cdd3d14befdb4a4e4815"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7edbed096e4a4798710ed6bc75dcaa2a21b68b6c356553ac4823c3658d53743a"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:7f9019c9cb613f084481bd6a100b12e1547cf2efe362d873c2e31e4035a6fa43"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:58c948d0d086229efc484fe2f30c2d382c86720f55cd9bc33591774348ad44e0"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:419a9d91bd238052642a51938af8ac05da5b3343becde08d5cdeab9046df9ee1"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:5273b9f0b5835ff0350c0828faea623c68bfa65b792720c453e22b25cc72930f"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:0e901eb1049fdb80f5bd11ed5ea1e498ec423102f7a9b9e4645d5b8204ff2815"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-win32.whl", hash = "sha256:b4ff1d35e8c5bd078be89349b6f3a845128e685e751b6ea1169cf2160b344c4d"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-win_amd64.whl", hash = "sha256:74119174722c4349af9708993118581686f343adc1c8c9c007d59be90d077f3f"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314-win_arm64.whl", hash = "sha256:e5bcc1a1ae744e0bb59641171ae53743760130600da8db48cbb6e4918e186e4e"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-macosx_10_15_universal2.whl", hash = "sha256:ad8faf8df23f0378c6d527d8b0b15ea4a2e23c89376877c598c4870d1b2c7866"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f5ea69428fa1b49573eef0cc44a1d43bebd45ad0c611eb7d7eac760c7ae771bc"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:06a7e86163334edfc5d20fe104db92fcd666e5a5df0977cb5680a506fe26cc8e"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:e1f6e2f00a6b8edb562826e4632e26d063ac10307e80f7461f7de3ad8ef3f077"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:95b52c68d64c1878818687a473a10547b3292e82b6f6fe483808fb1468e2f52f"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:7504e9b7dc05f99a9bbb4525c67a2c155073b44d720470a148b34166a69c054e"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:172985e4ff804a7ad08eebec0a1640ece87ba5041d565fff23c8f99c1f389484"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:4be9f4830ba8741527693848403e2c457c16e499100963ec711b1c6f2049b7c7"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-musllinux_1_2_armv7l.whl", hash = "sha256:79090741d842f564b1b2827c0b82d846405b744d31e84f18d7a7b41c20e473ff"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:87725cfb1a4f1f8c2fc9890ae2f42094120f4b44db9360be5d99a4c6b0e03a9e"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:fcce033e4021347d80ed9c66dcf1e7b1546319834b74445f561d2e2221de5659"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-musllinux_1_2_s390x.whl", hash = "sha256:ca0276464d148c72defa8bb4390cce01b4a0e425f3b50d1435aa6d7a18107602"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:197c1a244a274bb016dd8b79204850144ef77fe81c5b797dc389327adb552407"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-win32.whl", hash = "sha256:2a24157fa36980478dd1770b585c0f30d19e18f4fb0c47c13aa568f871718579"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-win_amd64.whl", hash = "sha256:cd5e2801c89992ed8c0a3f0293ae83c159a60d9a5d685005383ef4caca77f2c4"},
+ {file = "charset_normalizer-3.4.6-cp314-cp314t-win_arm64.whl", hash = "sha256:47955475ac79cc504ef2704b192364e51d0d473ad452caedd0002605f780101c"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:659a1e1b500fac8f2779dd9e1570464e012f43e580371470b45277a27baa7532"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f61aa92e4aad0be58eb6eb4e0c21acf32cf8065f4b2cae5665da756c4ceef982"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:f50498891691e0864dc3da965f340fada0771f6142a378083dc4608f4ea513e2"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:bf625105bb9eef28a56a943fec8c8a98aeb80e7d7db99bd3c388137e6eb2d237"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2bd9d128ef93637a5d7a6af25363cf5dec3fa21cf80e68055aad627f280e8afa"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-manylinux_2_31_armv7l.whl", hash = "sha256:d08ec48f0a1c48d75d0356cea971921848fb620fdeba805b28f937e90691209f"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:1ed80ff870ca6de33f4d953fda4d55654b9a2b340ff39ab32fa3adbcd718f264"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:f98059e4fcd3e3e4e2d632b7cf81c2faae96c43c60b569e9c621468082f1d104"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-musllinux_1_2_armv7l.whl", hash = "sha256:ab30e5e3e706e3063bc6de96b118688cb10396b70bb9864a430f67df98c61ecc"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:d5f5d1e9def3405f60e3ca8232d56f35c98fb7bf581efcc60051ebf53cb8b611"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-musllinux_1_2_riscv64.whl", hash = "sha256:461598cd852bfa5a61b09cae2b1c02e2efcd166ee5516e243d540ac24bfa68a7"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:71be7e0e01753a89cf024abf7ecb6bca2c81738ead80d43004d9b5e3f1244e64"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:df01808ee470038c3f8dc4f48620df7225c49c2d6639e38f96e6d6ac6e6f7b0e"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-win32.whl", hash = "sha256:69dd852c2f0ad631b8b60cfbe25a28c0058a894de5abb566619c205ce0550eae"},
+ {file = "charset_normalizer-3.4.6-cp38-cp38-win_amd64.whl", hash = "sha256:517ad0e93394ac532745129ceabdf2696b609ec9f87863d337140317ebce1c14"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:31215157227939b4fb3d740cd23fe27be0439afef67b785a1eb78a3ae69cba9e"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ecbbd45615a6885fe3240eb9db73b9e62518b611850fdf8ab08bd56de7ad2b17"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:c45a03a4c69820a399f1dda9e1d8fbf3562eda46e7720458180302021b08f778"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:e8aeb10fcbe92767f0fa69ad5a72deca50d0dca07fbde97848997d778a50c9fe"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:54fae94be3d75f3e573c9a1b5402dc593de19377013c9a0e4285e3d402dd3a2a"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-manylinux_2_31_armv7l.whl", hash = "sha256:2f7fdd9b6e6c529d6a2501a2d36b240109e78a8ceaef5687cfcfa2bbe671d297"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:4d1d02209e06550bdaef34af58e041ad71b88e624f5d825519da3a3308e22687"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:8bc5f0687d796c05b1e28ab0d38a50e6309906ee09375dd3aff6a9c09dd6e8f4"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-musllinux_1_2_armv7l.whl", hash = "sha256:ee4ec14bc1680d6b0afab9aea2ef27e26d2024f18b24a2d7155a52b60da7e833"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:d1a2ee9c1499fc8f86f4521f27a973c914b211ffa87322f4ee33bb35392da2c5"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-musllinux_1_2_riscv64.whl", hash = "sha256:48696db7f18afb80a068821504296eb0787d9ce239b91ca15059d1d3eaacf13b"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:4f41da960b196ea355357285ad1316a00099f22d0929fe168343b99b254729c9"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:802168e03fba8bbc5ce0d866d589e4b1ca751d06edee69f7f3a19c5a9fe6b597"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-win32.whl", hash = "sha256:8761ac29b6c81574724322a554605608a9960769ea83d2c73e396f3df896ad54"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-win_amd64.whl", hash = "sha256:1cf0a70018692f85172348fe06d3a4b63f94ecb055e13a00c644d368eb82e5b8"},
+ {file = "charset_normalizer-3.4.6-cp39-cp39-win_arm64.whl", hash = "sha256:3516bbb8d42169de9e61b8520cbeeeb716f12f4ecfe3fd30a9919aa16c806ca8"},
+ {file = "charset_normalizer-3.4.6-py3-none-any.whl", hash = "sha256:947cf925bc916d90adba35a64c82aace04fa39b46b52d4630ece166655905a69"},
+ {file = "charset_normalizer-3.4.6.tar.gz", hash = "sha256:1ae6b62897110aa7c79ea2f5dd38d1abca6db663687c0b1ad9aed6f6bae3d9d6"},
+]
+
[[package]]
name = "click"
version = "8.1.8"
@@ -321,6 +460,24 @@ files = [
{file = "filelock-3.20.3.tar.gz", hash = "sha256:18c57ee915c7ec61cff0ecf7f0f869936c7c30191bb0cf406f1341778d0834e1"},
]
+[[package]]
+name = "googleapis-common-protos"
+version = "1.73.0"
+description = "Common protobufs used in Google APIs"
+optional = false
+python-versions = ">=3.7"
+groups = ["main"]
+files = [
+ {file = "googleapis_common_protos-1.73.0-py3-none-any.whl", hash = "sha256:dfdaaa2e860f242046be561e6d6cb5c5f1541ae02cfbcb034371aadb2942b4e8"},
+ {file = "googleapis_common_protos-1.73.0.tar.gz", hash = "sha256:778d07cd4fbeff84c6f7c72102f0daf98fa2bfd3fa8bea426edc545588da0b5a"},
+]
+
+[package.dependencies]
+protobuf = ">=3.20.2,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<7.0.0"
+
+[package.extras]
+grpc = ["grpcio (>=1.44.0,<2.0.0)"]
+
[[package]]
name = "h11"
version = "0.16.0"
@@ -716,6 +873,45 @@ files = [
importlib-metadata = ">=6.0,<8.8.0"
typing-extensions = ">=4.5.0"
+[[package]]
+name = "opentelemetry-exporter-otlp-proto-common"
+version = "1.39.1"
+description = "OpenTelemetry Protobuf encoding"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_exporter_otlp_proto_common-1.39.1-py3-none-any.whl", hash = "sha256:08f8a5862d64cc3435105686d0216c1365dc5701f86844a8cd56597d0c764fde"},
+ {file = "opentelemetry_exporter_otlp_proto_common-1.39.1.tar.gz", hash = "sha256:763370d4737a59741c89a67b50f9e39271639ee4afc999dadfe768541c027464"},
+]
+
+[package.dependencies]
+opentelemetry-proto = "1.39.1"
+
+[[package]]
+name = "opentelemetry-exporter-otlp-proto-http"
+version = "1.39.1"
+description = "OpenTelemetry Collector Protobuf over HTTP Exporter"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_exporter_otlp_proto_http-1.39.1-py3-none-any.whl", hash = "sha256:d9f5207183dd752a412c4cd564ca8875ececba13be6e9c6c370ffb752fd59985"},
+ {file = "opentelemetry_exporter_otlp_proto_http-1.39.1.tar.gz", hash = "sha256:31bdab9745c709ce90a49a0624c2bd445d31a28ba34275951a6a362d16a0b9cb"},
+]
+
+[package.dependencies]
+googleapis-common-protos = ">=1.52,<2.0"
+opentelemetry-api = ">=1.15,<2.0"
+opentelemetry-exporter-otlp-proto-common = "1.39.1"
+opentelemetry-proto = "1.39.1"
+opentelemetry-sdk = ">=1.39.1,<1.40.0"
+requests = ">=2.7,<3.0"
+typing-extensions = ">=4.5.0"
+
+[package.extras]
+gcp-auth = ["opentelemetry-exporter-credential-provider-gcp (>=0.59b0)"]
+
[[package]]
name = "opentelemetry-exporter-prometheus"
version = "0.60b1"
@@ -733,6 +929,21 @@ opentelemetry-api = ">=1.12,<2.0"
opentelemetry-sdk = ">=1.39.1,<1.40.0"
prometheus-client = ">=0.5.0,<1.0.0"
+[[package]]
+name = "opentelemetry-proto"
+version = "1.39.1"
+description = "OpenTelemetry Python Proto"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "opentelemetry_proto-1.39.1-py3-none-any.whl", hash = "sha256:22cdc78efd3b3765d09e68bfbd010d4fc254c9818afd0b6b423387d9dee46007"},
+ {file = "opentelemetry_proto-1.39.1.tar.gz", hash = "sha256:6c8e05144fc0d3ed4d22c2289c6b126e03bcd0e6a7da0f16cedd2e1c2772e2c8"},
+]
+
+[package.dependencies]
+protobuf = ">=5.0,<7.0"
+
[[package]]
name = "opentelemetry-sdk"
version = "1.39.1"
@@ -846,6 +1057,26 @@ aiohttp = ["aiohttp"]
django = ["django"]
twisted = ["twisted"]
+[[package]]
+name = "protobuf"
+version = "6.33.6"
+description = ""
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "protobuf-6.33.6-cp310-abi3-win32.whl", hash = "sha256:7d29d9b65f8afef196f8334e80d6bc1d5d4adedb449971fefd3723824e6e77d3"},
+ {file = "protobuf-6.33.6-cp310-abi3-win_amd64.whl", hash = "sha256:0cd27b587afca21b7cfa59a74dcbd48a50f0a6400cfb59391340ad729d91d326"},
+ {file = "protobuf-6.33.6-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:9720e6961b251bde64edfdab7d500725a2af5280f3f4c87e57c0208376aa8c3a"},
+ {file = "protobuf-6.33.6-cp39-abi3-manylinux2014_aarch64.whl", hash = "sha256:e2afbae9b8e1825e3529f88d514754e094278bb95eadc0e199751cdd9a2e82a2"},
+ {file = "protobuf-6.33.6-cp39-abi3-manylinux2014_s390x.whl", hash = "sha256:c96c37eec15086b79762ed265d59ab204dabc53056e3443e702d2681f4b39ce3"},
+ {file = "protobuf-6.33.6-cp39-abi3-manylinux2014_x86_64.whl", hash = "sha256:e9db7e292e0ab79dd108d7f1a94fe31601ce1ee3f7b79e0692043423020b0593"},
+ {file = "protobuf-6.33.6-cp39-cp39-win32.whl", hash = "sha256:bd56799fb262994b2c2faa1799693c95cc2e22c62f56fb43af311cae45d26f0e"},
+ {file = "protobuf-6.33.6-cp39-cp39-win_amd64.whl", hash = "sha256:f443a394af5ed23672bc6c486be138628fbe5c651ccbc536873d7da23d1868cf"},
+ {file = "protobuf-6.33.6-py3-none-any.whl", hash = "sha256:77179e006c476e69bf8e8ce866640091ec42e1beb80b213c3900006ecfba6901"},
+ {file = "protobuf-6.33.6.tar.gz", hash = "sha256:a6768d25248312c297558af96a9f9c929e8c4cee0659cb07e780731095f38135"},
+]
+
[[package]]
name = "py"
version = "1.11.0"
@@ -1033,10 +1264,9 @@ windows-terminal = ["colorama (>=0.4.6)"]
name = "pyjwt"
version = "2.12.1"
description = "JSON Web Token implementation in Python"
-optional = true
+optional = false
python-versions = ">=3.9"
groups = ["main"]
-markers = "extra == \"redis\""
files = [
{file = "pyjwt-2.12.1-py3-none-any.whl", hash = "sha256:28ca37c070cad8ba8cd9790cd940535d40274d22f80ab87f3ac6a713e6e8454c"},
{file = "pyjwt-2.12.1.tar.gz", hash = "sha256:c74a7a2adf861c04d002db713dd85f84beb242228e671280bf709d765b03672b"},
@@ -1285,10 +1515,9 @@ files = [
name = "redis"
version = "5.3.1"
description = "Python client for Redis database and key-value store"
-optional = true
+optional = false
python-versions = ">=3.8"
groups = ["main"]
-markers = "extra == \"redis\""
files = [
{file = "redis-5.3.1-py3-none-any.whl", hash = "sha256:dc1909bd24669cc31b5f67a039700b16ec30571096c5f1f0d9d2324bff31af97"},
{file = "redis-5.3.1.tar.gz", hash = "sha256:ca49577a531ea64039b5a36db3d6cd1a0c7a60c34124d46924a45b956e8cf14c"},
@@ -1302,6 +1531,28 @@ PyJWT = ">=2.9.0"
hiredis = ["hiredis (>=3.0.0)"]
ocsp = ["cryptography (>=36.0.1)", "pyopenssl (==23.2.1)", "requests (>=2.31.0)"]
+[[package]]
+name = "requests"
+version = "2.32.5"
+description = "Python HTTP for Humans."
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6"},
+ {file = "requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf"},
+]
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+charset_normalizer = ">=2,<4"
+idna = ">=2.5,<4"
+urllib3 = ">=1.21.1,<3"
+
+[package.extras]
+socks = ["PySocks (>=1.5.6,!=1.5.7)"]
+use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
+
[[package]]
name = "rich"
version = "13.9.4"
@@ -1503,6 +1754,24 @@ files = [
[package.dependencies]
typing-extensions = ">=4.12.0"
+[[package]]
+name = "urllib3"
+version = "2.6.3"
+description = "HTTP library with thread-safe connection pooling, file post, and more."
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "urllib3-2.6.3-py3-none-any.whl", hash = "sha256:bf272323e553dfb2e87d9bfd225ca7b0f467b919d7bbd355436d3fd37cb0acd4"},
+ {file = "urllib3-2.6.3.tar.gz", hash = "sha256:1b62b6884944a57dbe321509ab94fd4d3b307075e0c2eae991ac71ee15ad38ed"},
+]
+
+[package.extras]
+brotli = ["brotli (>=1.2.0) ; platform_python_implementation == \"CPython\"", "brotlicffi (>=1.2.0.0) ; platform_python_implementation != \"CPython\""]
+h2 = ["h2 (>=4,<5)"]
+socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
+zstd = ["backports-zstd (>=1.0.0) ; python_version < \"3.14\""]
+
[[package]]
name = "uvicorn"
version = "0.32.1"
@@ -1532,68 +1801,49 @@ standard = ["colorama (>=0.4) ; sys_platform == \"win32\"", "httptools (>=0.6.3)
[[package]]
name = "uvloop"
-version = "0.22.1"
+version = "0.19.0"
description = "Fast implementation of asyncio event loop on top of libuv"
optional = false
-python-versions = ">=3.8.1"
+python-versions = ">=3.8.0"
groups = ["main"]
-markers = "sys_platform != \"win32\" and sys_platform != \"cygwin\" and platform_python_implementation != \"PyPy\""
-files = [
- {file = "uvloop-0.22.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:ef6f0d4cc8a9fa1f6a910230cd53545d9a14479311e87e3cb225495952eb672c"},
- {file = "uvloop-0.22.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7cd375a12b71d33d46af85a3343b35d98e8116134ba404bd657b3b1d15988792"},
- {file = "uvloop-0.22.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ac33ed96229b7790eb729702751c0e93ac5bc3bcf52ae9eccbff30da09194b86"},
- {file = "uvloop-0.22.1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:481c990a7abe2c6f4fc3d98781cc9426ebd7f03a9aaa7eb03d3bfc68ac2a46bd"},
- {file = "uvloop-0.22.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:a592b043a47ad17911add5fbd087c76716d7c9ccc1d64ec9249ceafd735f03c2"},
- {file = "uvloop-0.22.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:1489cf791aa7b6e8c8be1c5a080bae3a672791fcb4e9e12249b05862a2ca9cec"},
- {file = "uvloop-0.22.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:c60ebcd36f7b240b30788554b6f0782454826a0ed765d8430652621b5de674b9"},
- {file = "uvloop-0.22.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3b7f102bf3cb1995cfeaee9321105e8f5da76fdb104cdad8986f85461a1b7b77"},
- {file = "uvloop-0.22.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:53c85520781d84a4b8b230e24a5af5b0778efdb39142b424990ff1ef7c48ba21"},
- {file = "uvloop-0.22.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:56a2d1fae65fd82197cb8c53c367310b3eabe1bbb9fb5a04d28e3e3520e4f702"},
- {file = "uvloop-0.22.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:40631b049d5972c6755b06d0bfe8233b1bd9a8a6392d9d1c45c10b6f9e9b2733"},
- {file = "uvloop-0.22.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:535cc37b3a04f6cd2c1ef65fa1d370c9a35b6695df735fcff5427323f2cd5473"},
- {file = "uvloop-0.22.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:fe94b4564e865d968414598eea1a6de60adba0c040ba4ed05ac1300de402cd42"},
- {file = "uvloop-0.22.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:51eb9bd88391483410daad430813d982010f9c9c89512321f5b60e2cddbdddd6"},
- {file = "uvloop-0.22.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:700e674a166ca5778255e0e1dc4e9d79ab2acc57b9171b79e65feba7184b3370"},
- {file = "uvloop-0.22.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7b5b1ac819a3f946d3b2ee07f09149578ae76066d70b44df3fa990add49a82e4"},
- {file = "uvloop-0.22.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:e047cc068570bac9866237739607d1313b9253c3051ad84738cbb095be0537b2"},
- {file = "uvloop-0.22.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:512fec6815e2dd45161054592441ef76c830eddaad55c8aa30952e6fe1ed07c0"},
- {file = "uvloop-0.22.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:561577354eb94200d75aca23fbde86ee11be36b00e52a4eaf8f50fb0c86b7705"},
- {file = "uvloop-0.22.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:1cdf5192ab3e674ca26da2eada35b288d2fa49fdd0f357a19f0e7c4e7d5077c8"},
- {file = "uvloop-0.22.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6e2ea3d6190a2968f4a14a23019d3b16870dd2190cd69c8180f7c632d21de68d"},
- {file = "uvloop-0.22.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0530a5fbad9c9e4ee3f2b33b148c6a64d47bbad8000ea63704fa8260f4cf728e"},
- {file = "uvloop-0.22.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:bc5ef13bbc10b5335792360623cc378d52d7e62c2de64660616478c32cd0598e"},
- {file = "uvloop-0.22.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:1f38ec5e3f18c8a10ded09742f7fb8de0108796eb673f30ce7762ce1b8550cad"},
- {file = "uvloop-0.22.1-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:3879b88423ec7e97cd4eba2a443aa26ed4e59b45e6b76aabf13fe2f27023a142"},
- {file = "uvloop-0.22.1-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:4baa86acedf1d62115c1dc6ad1e17134476688f08c6efd8a2ab076e815665c74"},
- {file = "uvloop-0.22.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:297c27d8003520596236bdb2335e6b3f649480bd09e00d1e3a99144b691d2a35"},
- {file = "uvloop-0.22.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c1955d5a1dd43198244d47664a5858082a3239766a839b2102a269aaff7a4e25"},
- {file = "uvloop-0.22.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:b31dc2fccbd42adc73bc4e7cdbae4fc5086cf378979e53ca5d0301838c5682c6"},
- {file = "uvloop-0.22.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:93f617675b2d03af4e72a5333ef89450dfaa5321303ede6e67ba9c9d26878079"},
- {file = "uvloop-0.22.1-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:37554f70528f60cad66945b885eb01f1bb514f132d92b6eeed1c90fd54ed6289"},
- {file = "uvloop-0.22.1-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:b76324e2dc033a0b2f435f33eb88ff9913c156ef78e153fb210e03c13da746b3"},
- {file = "uvloop-0.22.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:badb4d8e58ee08dad957002027830d5c3b06aea446a6a3744483c2b3b745345c"},
- {file = "uvloop-0.22.1-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b91328c72635f6f9e0282e4a57da7470c7350ab1c9f48546c0f2866205349d21"},
- {file = "uvloop-0.22.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:daf620c2995d193449393d6c62131b3fbd40a63bf7b307a1527856ace637fe88"},
- {file = "uvloop-0.22.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6cde23eeda1a25c75b2e07d39970f3374105d5eafbaab2a4482be82f272d5a5e"},
- {file = "uvloop-0.22.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:80eee091fe128e425177fbd82f8635769e2f32ec9daf6468286ec57ec0313efa"},
- {file = "uvloop-0.22.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:017bd46f9e7b78e81606329d07141d3da446f8798c6baeec124260e22c262772"},
- {file = "uvloop-0.22.1-cp38-cp38-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c3e5c6727a57cb6558592a95019e504f605d1c54eb86463ee9f7a2dbd411c820"},
- {file = "uvloop-0.22.1-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:57df59d8b48feb0e613d9b1f5e57b7532e97cbaf0d61f7aa9aa32221e84bc4b6"},
- {file = "uvloop-0.22.1-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:55502bc2c653ed2e9692e8c55cb95b397d33f9f2911e929dc97c4d6b26d04242"},
- {file = "uvloop-0.22.1-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:4a968a72422a097b09042d5fa2c5c590251ad484acf910a651b4b620acd7f193"},
- {file = "uvloop-0.22.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:b45649628d816c030dba3c80f8e2689bab1c89518ed10d426036cdc47874dfc4"},
- {file = "uvloop-0.22.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ea721dd3203b809039fcc2983f14608dae82b212288b346e0bfe46ec2fab0b7c"},
- {file = "uvloop-0.22.1-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0ae676de143db2b2f60a9696d7eca5bb9d0dd6cc3ac3dad59a8ae7e95f9e1b54"},
- {file = "uvloop-0.22.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:17d4e97258b0172dfa107b89aa1eeba3016f4b1974ce85ca3ef6a66b35cbf659"},
- {file = "uvloop-0.22.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:05e4b5f86e621cf3927631789999e697e58f0d2d32675b67d9ca9eb0bca55743"},
- {file = "uvloop-0.22.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:286322a90bea1f9422a470d5d2ad82d38080be0a29c4dd9b3e6384320a4d11e7"},
- {file = "uvloop-0.22.1.tar.gz", hash = "sha256:6c84bae345b9147082b17371e3dd5d42775bddce91f885499017f4607fdaf39f"},
+markers = "sys_platform != \"win32\" and sys_platform != \"cygwin\" and platform_python_implementation != \"PyPy\" or extra == \"uvloop\""
+files = [
+ {file = "uvloop-0.19.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:de4313d7f575474c8f5a12e163f6d89c0a878bc49219641d49e6f1444369a90e"},
+ {file = "uvloop-0.19.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5588bd21cf1fcf06bded085f37e43ce0e00424197e7c10e77afd4bbefffef428"},
+ {file = "uvloop-0.19.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7b1fd71c3843327f3bbc3237bedcdb6504fd50368ab3e04d0410e52ec293f5b8"},
+ {file = "uvloop-0.19.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5a05128d315e2912791de6088c34136bfcdd0c7cbc1cf85fd6fd1bb321b7c849"},
+ {file = "uvloop-0.19.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:cd81bdc2b8219cb4b2556eea39d2e36bfa375a2dd021404f90a62e44efaaf957"},
+ {file = "uvloop-0.19.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5f17766fb6da94135526273080f3455a112f82570b2ee5daa64d682387fe0dcd"},
+ {file = "uvloop-0.19.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:4ce6b0af8f2729a02a5d1575feacb2a94fc7b2e983868b009d51c9a9d2149bef"},
+ {file = "uvloop-0.19.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:31e672bb38b45abc4f26e273be83b72a0d28d074d5b370fc4dcf4c4eb15417d2"},
+ {file = "uvloop-0.19.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:570fc0ed613883d8d30ee40397b79207eedd2624891692471808a95069a007c1"},
+ {file = "uvloop-0.19.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5138821e40b0c3e6c9478643b4660bd44372ae1e16a322b8fc07478f92684e24"},
+ {file = "uvloop-0.19.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:91ab01c6cd00e39cde50173ba4ec68a1e578fee9279ba64f5221810a9e786533"},
+ {file = "uvloop-0.19.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:47bf3e9312f63684efe283f7342afb414eea4d3011542155c7e625cd799c3b12"},
+ {file = "uvloop-0.19.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:da8435a3bd498419ee8c13c34b89b5005130a476bda1d6ca8cfdde3de35cd650"},
+ {file = "uvloop-0.19.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:02506dc23a5d90e04d4f65c7791e65cf44bd91b37f24cfc3ef6cf2aff05dc7ec"},
+ {file = "uvloop-0.19.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2693049be9d36fef81741fddb3f441673ba12a34a704e7b4361efb75cf30befc"},
+ {file = "uvloop-0.19.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7010271303961c6f0fe37731004335401eb9075a12680738731e9c92ddd96ad6"},
+ {file = "uvloop-0.19.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:5daa304d2161d2918fa9a17d5635099a2f78ae5b5960e742b2fcfbb7aefaa593"},
+ {file = "uvloop-0.19.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:7207272c9520203fea9b93843bb775d03e1cf88a80a936ce760f60bb5add92f3"},
+ {file = "uvloop-0.19.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:78ab247f0b5671cc887c31d33f9b3abfb88d2614b84e4303f1a63b46c046c8bd"},
+ {file = "uvloop-0.19.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:472d61143059c84947aa8bb74eabbace30d577a03a1805b77933d6bd13ddebbd"},
+ {file = "uvloop-0.19.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45bf4c24c19fb8a50902ae37c5de50da81de4922af65baf760f7c0c42e1088be"},
+ {file = "uvloop-0.19.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:271718e26b3e17906b28b67314c45d19106112067205119dddbd834c2b7ce797"},
+ {file = "uvloop-0.19.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:34175c9fd2a4bc3adc1380e1261f60306344e3407c20a4d684fd5f3be010fa3d"},
+ {file = "uvloop-0.19.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:e27f100e1ff17f6feeb1f33968bc185bf8ce41ca557deee9d9bbbffeb72030b7"},
+ {file = "uvloop-0.19.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:13dfdf492af0aa0a0edf66807d2b465607d11c4fa48f4a1fd41cbea5b18e8e8b"},
+ {file = "uvloop-0.19.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6e3d4e85ac060e2342ff85e90d0c04157acb210b9ce508e784a944f852a40e67"},
+ {file = "uvloop-0.19.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8ca4956c9ab567d87d59d49fa3704cf29e37109ad348f2d5223c9bf761a332e7"},
+ {file = "uvloop-0.19.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f467a5fd23b4fc43ed86342641f3936a68ded707f4627622fa3f82a120e18256"},
+ {file = "uvloop-0.19.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:492e2c32c2af3f971473bc22f086513cedfc66a130756145a931a90c3958cb17"},
+ {file = "uvloop-0.19.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:2df95fca285a9f5bfe730e51945ffe2fa71ccbfdde3b0da5772b4ee4f2e770d5"},
+ {file = "uvloop-0.19.0.tar.gz", hash = "sha256:0246f4fd1bf2bf702e06b0d45ee91677ee5c31242f39aab4ea6fe0c51aedd0fd"},
]
[package.extras]
-dev = ["Cython (>=3.0,<4.0)", "setuptools (>=60)"]
-docs = ["Sphinx (>=4.1.2,<4.2.0)", "sphinx_rtd_theme (>=0.5.2,<0.6.0)", "sphinxcontrib-asyncio (>=0.3.0,<0.4.0)"]
-test = ["aiohttp (>=3.10.5)", "flake8 (>=6.1,<7.0)", "mypy (>=0.800)", "psutil", "pyOpenSSL (>=25.3.0,<25.4.0)", "pycodestyle (>=2.11.0,<2.12.0)"]
+docs = ["Sphinx (>=4.1.2,<4.2.0)", "sphinx-rtd-theme (>=0.5.2,<0.6.0)", "sphinxcontrib-asyncio (>=0.3.0,<0.4.0)"]
+test = ["Cython (>=0.29.36,<0.30.0)", "aiohttp (==3.9.0b0) ; python_version >= \"3.12\"", "aiohttp (>=3.8.1) ; python_version < \"3.12\"", "flake8 (>=5.0,<6.0)", "mypy (>=0.800)", "psutil", "pyOpenSSL (>=23.0.0,<23.1.0)", "pycodestyle (>=2.9.0,<2.10.0)"]
[[package]]
name = "virtualenv"
@@ -1879,9 +2129,9 @@ test = ["big-O", "jaraco.functools", "jaraco.itertools", "jaraco.test", "more_it
type = ["pytest-mypy"]
[extras]
-redis = ["redis"]
+uvloop = ["uvloop"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<3.13"
-content-hash = "c5ed6efc110a665b3a16b60601cc893c617dc5b90b25de8048bc0d8fa9458e7f"
+content-hash = "be2783376c102dfca18b48694dc2515dc397f4a84796d7f8909d7833df2c5722"
diff --git a/pyproject.toml b/pyproject.toml
index ec89da0..a6e28cc 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -36,10 +36,12 @@ opentelemetry-api = "^1.39.1"
opentelemetry-exporter-prometheus = "^0.60b1"
fastapi = "^0.128.0"
uvicorn = {extras = ["standard"], version = "^0.32.0"}
-redis = {version = "^5.0.0", optional = true}
+redis = "^5.0.0"
+uvloop = {version = "^0.19.0", optional = true}
+opentelemetry-exporter-otlp-proto-http = "^1.39.1"
[tool.poetry.extras]
-redis = ["redis"]
+uvloop = ["uvloop"]
[tool.poetry.group.dev.dependencies]
pytest = "^8.3.4"
@@ -92,3 +94,7 @@ warn_unused_ignores = true
[[tool.mypy.overrides]]
module = "redis.*"
ignore_missing_imports = true
+
+[[tool.mypy.overrides]]
+module = "genesis.loop"
+ignore_missing_imports = true
diff --git a/tests/test_cli.py b/tests/test_cli.py
index 0a38671..5416426 100644
--- a/tests/test_cli.py
+++ b/tests/test_cli.py
@@ -34,10 +34,7 @@ def simple_info(msg, *args, **kwargs):
monkeypatch.setattr(genesis.cli.logger, "info", simple_info)
monkeypatch.setattr(genesis.cli.logger, "warning", lambda *args, **kwargs: None)
- # We still keep the patch to avoid actual network binding if possible,
- # but the env var ensures if it DOES bind, it won't conflict.
- with patch("genesis.cli.start_http_server"):
- yield
+ yield
runner = CliRunner()
diff --git a/tests/test_loop.py b/tests/test_loop.py
new file mode 100644
index 0000000..ec27efa
--- /dev/null
+++ b/tests/test_loop.py
@@ -0,0 +1,11 @@
+"""Tests for genesis loop utilities (uvloop support)."""
+
+import pytest
+
+from genesis.loop import use_uvloop
+
+
+def test_use_uvloop_returns_bool() -> None:
+ """use_uvloop() returns True if uvloop was set, False otherwise."""
+ result = use_uvloop()
+ assert isinstance(result, bool)
diff --git a/tests/test_otel_config.py b/tests/test_otel_config.py
new file mode 100644
index 0000000..7ea2b5d
--- /dev/null
+++ b/tests/test_otel_config.py
@@ -0,0 +1,148 @@
+import pytest
+
+from genesis.observability.otel_config import (
+ create_resource,
+ get_otel_exporter_otlp_endpoint,
+ get_otel_exporter_otlp_metrics_endpoint,
+ get_otel_exporter_otlp_traces_endpoint,
+ get_otel_resource_attributes,
+ get_otel_service_name,
+ is_otel_sdk_disabled,
+)
+
+
+def test_is_otel_sdk_disabled_unset(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.delenv("OTEL_SDK_DISABLED", raising=False)
+ assert is_otel_sdk_disabled() is False
+
+
+def test_is_otel_sdk_disabled_true(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.setenv("OTEL_SDK_DISABLED", "true")
+ assert is_otel_sdk_disabled() is True
+
+
+def test_is_otel_sdk_disabled_true_case_insensitive(
+ monkeypatch: pytest.MonkeyPatch,
+) -> None:
+ monkeypatch.setenv("OTEL_SDK_DISABLED", "TRUE")
+ assert is_otel_sdk_disabled() is True
+
+
+def test_is_otel_sdk_disabled_false(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.setenv("OTEL_SDK_DISABLED", "false")
+ assert is_otel_sdk_disabled() is False
+
+
+def test_get_otel_service_name_default(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.delenv("OTEL_SERVICE_NAME", raising=False)
+ assert get_otel_service_name() == "genesis"
+
+
+def test_get_otel_service_name_custom(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.setenv("OTEL_SERVICE_NAME", "my-service")
+ assert get_otel_service_name() == "my-service"
+
+
+def test_get_otel_resource_attributes_empty(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.delenv("OTEL_RESOURCE_ATTRIBUTES", raising=False)
+ assert get_otel_resource_attributes() == {}
+
+
+def test_get_otel_resource_attributes_single(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.setenv("OTEL_RESOURCE_ATTRIBUTES", "deployment.environment=prod")
+ assert get_otel_resource_attributes() == {"deployment.environment": "prod"}
+
+
+def test_get_otel_resource_attributes_multiple(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.setenv(
+ "OTEL_RESOURCE_ATTRIBUTES",
+ "deployment.environment=prod,service.version=1.0.0",
+ )
+ assert get_otel_resource_attributes() == {
+ "deployment.environment": "prod",
+ "service.version": "1.0.0",
+ }
+
+
+def test_get_otel_resource_attributes_value_with_equals(
+ monkeypatch: pytest.MonkeyPatch,
+) -> None:
+ monkeypatch.setenv("OTEL_RESOURCE_ATTRIBUTES", "key=value=with=equals")
+ assert get_otel_resource_attributes() == {"key": "value=with=equals"}
+
+
+def test_create_resource_default(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.delenv("OTEL_SERVICE_NAME", raising=False)
+ monkeypatch.delenv("OTEL_RESOURCE_ATTRIBUTES", raising=False)
+ resource = create_resource()
+ assert resource.attributes["service.name"] == "genesis"
+
+
+def test_create_resource_custom_service_name(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.setenv("OTEL_SERVICE_NAME", "custom-app")
+ monkeypatch.delenv("OTEL_RESOURCE_ATTRIBUTES", raising=False)
+ resource = create_resource()
+ assert resource.attributes["service.name"] == "custom-app"
+
+
+def test_create_resource_service_name_overrides_attributes(
+ monkeypatch: pytest.MonkeyPatch,
+) -> None:
+ monkeypatch.setenv("OTEL_SERVICE_NAME", "from-env")
+ monkeypatch.setenv("OTEL_RESOURCE_ATTRIBUTES", "service.name=from-attributes")
+ resource = create_resource()
+ assert resource.attributes["service.name"] == "from-env"
+
+
+def test_create_resource_with_attributes(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.delenv("OTEL_SERVICE_NAME", raising=False)
+ monkeypatch.setenv(
+ "OTEL_RESOURCE_ATTRIBUTES",
+ "deployment.environment=staging,service.version=2.0",
+ )
+ resource = create_resource()
+ assert resource.attributes["service.name"] == "genesis"
+ assert resource.attributes["deployment.environment"] == "staging"
+ assert resource.attributes["service.version"] == "2.0"
+
+
+def test_get_otel_exporter_otlp_endpoint_unset(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.delenv("OTEL_EXPORTER_OTLP_ENDPOINT", raising=False)
+ assert get_otel_exporter_otlp_endpoint() is None
+
+
+def test_get_otel_exporter_otlp_endpoint_set(monkeypatch: pytest.MonkeyPatch) -> None:
+ monkeypatch.setenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318")
+ assert get_otel_exporter_otlp_endpoint() == "http://localhost:4318"
+
+
+def test_get_otel_exporter_otlp_metrics_endpoint_fallback(
+ monkeypatch: pytest.MonkeyPatch,
+) -> None:
+ monkeypatch.delenv("OTEL_EXPORTER_OTLP_METRICS_ENDPOINT", raising=False)
+ monkeypatch.setenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://collector:4318")
+ assert get_otel_exporter_otlp_metrics_endpoint() == "http://collector:4318"
+
+
+def test_get_otel_exporter_otlp_metrics_endpoint_override(
+ monkeypatch: pytest.MonkeyPatch,
+) -> None:
+ monkeypatch.setenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://default:4318")
+ monkeypatch.setenv("OTEL_EXPORTER_OTLP_METRICS_ENDPOINT", "http://metrics:4318")
+ assert get_otel_exporter_otlp_metrics_endpoint() == "http://metrics:4318"
+
+
+def test_get_otel_exporter_otlp_traces_endpoint_fallback(
+ monkeypatch: pytest.MonkeyPatch,
+) -> None:
+ monkeypatch.delenv("OTEL_EXPORTER_OTLP_TRACES_ENDPOINT", raising=False)
+ monkeypatch.setenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://collector:4318")
+ assert get_otel_exporter_otlp_traces_endpoint() == "http://collector:4318"
+
+
+def test_get_otel_exporter_otlp_traces_endpoint_override(
+ monkeypatch: pytest.MonkeyPatch,
+) -> None:
+ monkeypatch.setenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://default:4318")
+ monkeypatch.setenv("OTEL_EXPORTER_OTLP_TRACES_ENDPOINT", "http://traces:4318")
+ assert get_otel_exporter_otlp_traces_endpoint() == "http://traces:4318"
diff --git a/tests/test_queue.py b/tests/test_queue.py
new file mode 100644
index 0000000..8ece9ff
--- /dev/null
+++ b/tests/test_queue.py
@@ -0,0 +1,348 @@
+"""Tests for genesis queue (slot, semaphore, backends)."""
+
+import asyncio
+from unittest.mock import AsyncMock, MagicMock
+
+import pytest
+
+from genesis.queue import InMemoryBackend, Queue, QueueSemaphore, QueueTimeoutError
+from genesis.queue.redis_backend import RedisBackend
+
+
+@pytest.mark.asyncio
+async def test_in_memory_backend_enqueue_wait_and_acquire_release():
+ """Backend: enqueue, wait_and_acquire, release."""
+ backend = InMemoryBackend()
+ await backend.enqueue("q1", "item1")
+ await backend.wait_and_acquire("q1", "item1", max_concurrent=1)
+ await backend.release("q1")
+
+
+@pytest.mark.asyncio
+async def test_in_memory_backend_fifo_order():
+ """Backend: first enqueued acquires first when slot free."""
+ backend = InMemoryBackend()
+ await backend.enqueue("q1", "first")
+ await backend.enqueue("q1", "second")
+
+ # First acquires
+ await backend.wait_and_acquire("q1", "first", max_concurrent=1)
+ # Second must wait until first releases
+ entered = asyncio.Event()
+ released = asyncio.Event()
+
+ async def second_acquires():
+ await backend.wait_and_acquire("q1", "second", max_concurrent=1)
+ entered.set()
+ await released.wait()
+ await backend.release("q1")
+
+ async def first_releases():
+ await backend.release("q1")
+ released.set()
+
+ t2 = asyncio.create_task(second_acquires())
+ t1 = asyncio.create_task(first_releases())
+ await asyncio.wait_for(entered.wait(), timeout=2.0)
+ await asyncio.wait_for(t2, timeout=2.0)
+ await t1
+
+
+@pytest.mark.asyncio
+async def test_queue_slot_context_manager():
+ """Queue.slot() is an async context manager; release on exit."""
+ backend = InMemoryBackend()
+ queue = Queue(backend)
+ entered = asyncio.Event()
+
+ async def use_slot():
+ async with queue.slot("sales"):
+ entered.set()
+
+ t = asyncio.create_task(use_slot())
+ await asyncio.wait_for(entered.wait(), timeout=2.0)
+ await asyncio.wait_for(t, timeout=2.0)
+
+
+@pytest.mark.asyncio
+async def test_queue_slot_with_item_id():
+ """Queue.slot(queue_id, item_id=...) uses that item_id for ordering."""
+ backend = InMemoryBackend()
+ queue = Queue(backend)
+ order: list[str] = []
+
+ async def first():
+ async with queue.slot("q", item_id="a"):
+ order.append("a-in")
+ order.append("a-out")
+
+ async def second():
+ async with queue.slot("q", item_id="b"):
+ order.append("b-in")
+ order.append("b-out")
+
+ t1 = asyncio.create_task(first())
+ t2 = asyncio.create_task(second())
+ await asyncio.gather(t1, t2)
+ assert order == ["a-in", "a-out", "b-in", "b-out"]
+
+
+@pytest.mark.asyncio
+async def test_queue_semaphore_context_manager():
+ """Queue.semaphore() returns a reusable context manager."""
+ backend = InMemoryBackend()
+ queue = Queue(backend)
+ sem = queue.semaphore("support", max_concurrent=1)
+ entered = asyncio.Event()
+
+ async def use_sem():
+ async with sem:
+ entered.set()
+
+ t = asyncio.create_task(use_sem())
+ await asyncio.wait_for(entered.wait(), timeout=2.0)
+ await asyncio.wait_for(t, timeout=2.0)
+
+
+@pytest.mark.asyncio
+async def test_queue_semaphore_max_concurrent_two():
+ """With max_concurrent=2, two can be inside at once."""
+ backend = InMemoryBackend()
+ queue = Queue(backend)
+ sem = queue.semaphore("pool", max_concurrent=2)
+ both_inside = asyncio.Event()
+
+ async def enter(ready: asyncio.Event):
+ async with sem:
+ ready.set()
+ await both_inside.wait()
+
+ r1 = asyncio.Event()
+ r2 = asyncio.Event()
+ t1 = asyncio.create_task(enter(r1))
+ t2 = asyncio.create_task(enter(r2))
+ await asyncio.wait_for(r1.wait(), timeout=2.0)
+ await asyncio.wait_for(r2.wait(), timeout=2.0)
+ both_inside.set()
+ await asyncio.gather(t1, t2)
+
+
+@pytest.mark.asyncio
+async def test_queue_slot_semaphore_like():
+ """slot() behaves like a semaphore: one in, one out, then next."""
+ backend = InMemoryBackend()
+ queue = Queue(backend)
+ log: list[str] = []
+
+ async def worker(name: str):
+ async with queue.slot("single", item_id=name):
+ log.append(f"{name}-in")
+ log.append(f"{name}-out")
+
+ await asyncio.gather(
+ worker("a"),
+ worker("b"),
+ worker("c"),
+ )
+ assert log == ["a-in", "a-out", "b-in", "b-out", "c-in", "c-out"]
+
+
+@pytest.mark.asyncio
+async def test_queue_slot_timeout_raises_and_removes_from_queue():
+ """With timeout, wait_and_acquire raises QueueTimeoutError and item is removed."""
+ backend = InMemoryBackend()
+ queue = Queue(backend)
+ entered = asyncio.Event()
+
+ async def holder():
+ async with queue.slot("q", item_id="first"):
+ entered.set()
+ await asyncio.Event().wait()
+
+ async def waiter():
+ await entered.wait()
+ with pytest.raises(QueueTimeoutError):
+ async with queue.slot("q", item_id="second", timeout=0.2):
+ pass
+
+ t_holder = asyncio.create_task(holder())
+ t_waiter = asyncio.create_task(waiter())
+ await asyncio.wait_for(t_waiter, timeout=2.0)
+ t_holder.cancel()
+ with pytest.raises(asyncio.CancelledError):
+ await t_holder
+
+
+@pytest.mark.asyncio
+async def test_queue_slot_timeout_next_in_line_can_acquire():
+ """After one item times out, the next in line can acquire."""
+ backend = InMemoryBackend()
+ queue = Queue(backend)
+ order: list[str] = []
+ first_holding = asyncio.Event()
+ release_first = asyncio.Event()
+
+ async def first_acquires():
+ async with queue.slot("q", item_id="a"):
+ order.append("a-in")
+ first_holding.set()
+ await release_first.wait()
+ order.append("a-out")
+
+ async def second_times_out():
+ await first_holding.wait()
+ try:
+ async with queue.slot("q", item_id="b", timeout=0.2):
+ order.append("b-in")
+ except QueueTimeoutError:
+ order.append("b-timeout")
+ order.append("b-done")
+ release_first.set()
+
+ async def third_acquires():
+ await first_holding.wait()
+ async with queue.slot("q", item_id="c"):
+ order.append("c-in")
+ order.append("c-out")
+
+ await asyncio.gather(
+ first_acquires(),
+ second_times_out(),
+ third_acquires(),
+ )
+ assert "a-in" in order and "a-out" in order
+ assert "b-timeout" in order and "b-done" in order
+ assert "c-in" in order and "c-out" in order
+ assert order.index("b-timeout") < order.index("c-in")
+
+
+@pytest.mark.asyncio
+async def test_queue_default_in_memory_backend():
+ """Queue() without backend uses InMemoryBackend by default."""
+ queue = Queue()
+ entered = asyncio.Event()
+
+ async def use_slot():
+ async with queue.slot("default"):
+ entered.set()
+
+ t = asyncio.create_task(use_slot())
+ await asyncio.wait_for(entered.wait(), timeout=2.0)
+ await asyncio.wait_for(t, timeout=2.0)
+
+
+@pytest.mark.asyncio
+async def test_in_memory_backend_max_concurrent_mismatch_raises():
+ """Backend: different max_concurrent for the same queue_id raises ValueError."""
+ backend = InMemoryBackend()
+ await backend.enqueue("q1", "item1")
+ await backend.wait_and_acquire("q1", "item1", max_concurrent=2)
+ await backend.release("q1")
+
+ await backend.enqueue("q1", "item2")
+ with pytest.raises(ValueError, match="max_concurrent"):
+ await backend.wait_and_acquire("q1", "item2", max_concurrent=5)
+
+
+# ---------------------------------------------------------------------------
+# RedisBackend (mock-based)
+# ---------------------------------------------------------------------------
+
+
+def _make_redis_mock(script_result: int = 1) -> MagicMock:
+ """Return a mock redis client where the Lua script returns script_result."""
+ mock_script = AsyncMock(return_value=script_result)
+ client = AsyncMock()
+ client.register_script = MagicMock(return_value=mock_script)
+ return client
+
+
+@pytest.mark.asyncio
+async def test_redis_backend_enqueue():
+ """RedisBackend.enqueue pushes to the correct Redis key."""
+ client = _make_redis_mock()
+ backend = RedisBackend()
+ backend._client = client
+
+ await backend.enqueue("q1", "item1")
+ client.rpush.assert_called_once_with("genesis:queue:q1:waiting", "item1")
+
+
+@pytest.mark.asyncio
+async def test_redis_backend_wait_and_acquire_success():
+ """RedisBackend: acquire succeeds immediately when Lua script returns 1."""
+ client = _make_redis_mock(script_result=1)
+ backend = RedisBackend()
+ backend._client = client
+
+ await backend.wait_and_acquire("q1", "item1", max_concurrent=1)
+ client.register_script.assert_called_once()
+
+
+@pytest.mark.asyncio
+async def test_redis_backend_release():
+ """RedisBackend.release decrements counter and publishes to release channel."""
+ client = _make_redis_mock()
+ backend = RedisBackend()
+ backend._client = client
+
+ await backend.release("q1")
+ client.decr.assert_called_once_with("genesis:queue:q1:in_use")
+ client.publish.assert_called_once_with("genesis:queue:q1:release", "1")
+
+
+@pytest.mark.asyncio
+async def test_redis_backend_wait_and_acquire_timeout():
+ """RedisBackend: raises QueueTimeoutError when deadline passes without acquiring."""
+ client = _make_redis_mock(script_result=0) # never acquires
+
+ # pubsub() is synchronous; listen() must hang so wait_for can time it out
+ async def _never_yields():
+ await asyncio.Future()
+ yield # pragma: no cover
+
+ pubsub = MagicMock()
+ pubsub.subscribe = AsyncMock()
+ pubsub.unsubscribe = AsyncMock()
+ pubsub.close = AsyncMock()
+ pubsub.listen = _never_yields
+ client.pubsub = MagicMock(return_value=pubsub)
+
+ backend = RedisBackend()
+ backend._client = client
+
+ with pytest.raises(QueueTimeoutError):
+ await backend.wait_and_acquire("q1", "item1", max_concurrent=1, timeout=0.1)
+
+
+@pytest.mark.asyncio
+async def test_redis_backend_close():
+ """RedisBackend.close() calls aclose on the client and clears the reference."""
+ client = AsyncMock()
+ backend = RedisBackend()
+ backend._client = client
+
+ await backend.close()
+ client.aclose.assert_called_once()
+ assert backend._client is None
+
+
+@pytest.mark.asyncio
+async def test_redis_backend_close_when_no_client():
+ """RedisBackend.close() is a no-op when no client exists."""
+ backend = RedisBackend()
+ await backend.close() # must not raise
+
+
+@pytest.mark.asyncio
+async def test_redis_backend_context_manager():
+ """RedisBackend used as async context manager closes on exit."""
+ client = AsyncMock()
+ backend = RedisBackend()
+ backend._client = client
+
+ async with backend:
+ pass
+
+ client.aclose.assert_called_once()
+ assert backend._client is None