
Commit 97a3fde

davidamacey and claude committed
fix: Add celery queue specification to enable task processing
Critical fix for the production celery-worker: adds a queue specification so tasks are processed.

Problem: The celery-worker only listened to the default 'celery' queue, but transcription tasks are sent to the 'gpu' queue, so uploaded files stayed 'pending' forever.

Solution: Add `-Q gpu,nlp,utility,celery` to match the dev docker-compose.yml configuration.

Queues:
- gpu: WhisperX transcription and PyAnnote diarization
- nlp: LLM summarization and speaker identification
- utility: File recovery and maintenance
- celery: Default queue

Architecture Note: GPU runtime config is handled separately via the docker-compose.nvidia.yml override, which opentranscribe.sh merges automatically when USE_NVIDIA_RUNTIME=true. This keeps the base prod config GPU-agnostic for CPU-only deployments.

Verified: Both docker-compose.prod.yml and docker-compose.offline.yml now match all critical configurations from docker-compose.yml (dev/ground truth).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
1 parent 047d2cb commit 97a3fde

File tree

1 file changed: +1 −1 lines changed


docker-compose.prod.yml

Lines changed: 1 addition & 1 deletion
@@ -161,7 +161,7 @@ services:
     image: davidamacey/opentranscribe-backend:latest
     pull_policy: always
     restart: always
-    command: celery -A app.core.celery worker --loglevel=info --concurrency=1
+    command: celery -A app.core.celery worker --loglevel=info -Q gpu,nlp,utility,celery --concurrency=1
     volumes:
       - ${MODEL_CACHE_DIR:-./models}/huggingface:/root/.cache/huggingface
       - ${MODEL_CACHE_DIR:-./models}/torch:/root/.cache/torch
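The failure mode is easy to see with a toy model of queue subscription: the broker holds named queues, and a worker drains only the queues it subscribed to via `-Q` (defaulting to just `celery`). A minimal sketch, with plain Python standing in for Celery/RabbitMQ — all class and task names here are hypothetical:

```python
from collections import defaultdict, deque


class Broker:
    """Toy broker: named FIFO queues, as RabbitMQ holds Celery queues."""

    def __init__(self):
        self.queues = defaultdict(deque)

    def publish(self, queue, task):
        self.queues[queue].append(task)


class Worker:
    """Toy worker: drains only the queues it was started with (its -Q list)."""

    def __init__(self, broker, queues=("celery",)):  # Celery's default queue
        self.broker = broker
        self.subscribed = queues

    def drain(self):
        done = []
        for q in self.subscribed:
            while self.broker.queues[q]:
                done.append(self.broker.queues[q].popleft())
        return done


broker = Broker()
broker.publish("gpu", "transcribe file.mp3")  # transcription tasks go to 'gpu'

# Old config: worker listens only on the default 'celery' queue -> task stuck.
assert Worker(broker, ("celery",)).drain() == []

# Fixed config: -Q gpu,nlp,utility,celery -> the 'gpu' task is processed.
assert Worker(broker, ("gpu", "nlp", "utility", "celery")).drain() == ["transcribe file.mp3"]
```

The same asymmetry exists in real Celery: publishing to a queue no worker consumes raises no error, which is why the symptom was silent 'pending' uploads rather than a failure.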

0 commit comments
