A demonstration monorepo showcasing how to build resilient distributed systems using Cloudflare Queues. This project implements a producer-consumer architecture with AI-powered message processing and R2 event notifications.
This project demonstrates key concepts of message queues:
- Producers send data to a queue and respond immediately (see the sketch after this list)
- Consumers process data asynchronously with retry logic
- Event Subscriptions trigger processing from Cloudflare services (R2, D1, etc.)
- Dead Letter Queues catch failed messages for later reprocessing
- AI Message Processing: User submits text → Queue → AI generates response → Display in UI
- Image Caption Generation: User uploads image to R2 → Event notification → Queue → AI generates caption → Display in UI
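As a sketch of the first flow's producer side (the request shape is an assumption; the `QUEUE` binding name matches the `env.QUEUE.send` example later in this README):

```typescript
// Minimal producer sketch: accept a message, enqueue it, respond immediately.
interface Env {
  QUEUE: Queue;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { message } = (await request.json()) as { message: string };
    const id = crypto.randomUUID();
    await env.QUEUE.send({ type: "EXAMPLE_MESSAGE", id, message });
    // Respond right away; the consumer processes the message asynchronously
    return Response.json({ id, queued: true });
  },
};
```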
This monorepo contains two applications:
A TanStack Start frontend application (producer) that:
- Provides an input form for sending messages to the queue
- Uploads images to Cloudflare R2 storage
- Polls KV store to display processed results (see the polling sketch after this list)
- Runs on http://localhost:3000
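For example, client-side polling might look like this; the `/api/result` route and response shape are hypothetical:

```typescript
// Poll until the consumer has written a result to KV, surfaced via an API route
async function pollResult(id: string, intervalMs = 1000): Promise<string> {
  for (;;) {
    const res = await fetch(`/api/result?id=${encodeURIComponent(id)}`);
    if (res.ok) {
      const { result } = (await res.json()) as { result: string | null };
      if (result !== null) return result; // consumer has stored the result
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```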
A Cloudflare Worker backend (consumer) that:
- Consumes messages from the queue in batches
- Routes events to appropriate handlers using type-safe Zod schemas
- Processes messages with Cloudflare AI (Llama 2, LLaVA models)
- Stores results in Cloudflare KV for retrieval
`data-ops`: Contains shared Zod schemas for type-safe queue messages
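A sketch of what those shared schemas could look like, mirroring the message shapes shown later in this README (the exported names are illustrative, not the repo's actual identifiers):

```typescript
import { z } from "zod";

export const exampleMessageSchema = z.object({
  type: z.literal("EXAMPLE_MESSAGE"),
  id: z.string(),
  message: z.string(),
});

export const r2EventSchema = z.object({
  type: z.literal("R2_EVENT"),
  account: z.string(),
  bucket: z.string(),
  action: z.literal("PutObject"),
  object: z.object({
    key: z.string(),
    size: z.number(),
    eTag: z.string(),
  }),
});

// A discriminated union lets the consumer narrow on `type` after parsing
export const queueMessageSchema = z.discriminatedUnion("type", [
  exampleMessageSchema,
  r2EventSchema,
]);
export type QueueMessage = z.infer<typeof queueMessageSchema>;
```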
```bash
# Install dependencies and build shared packages
pnpm run setup
```

You'll need to create these Cloudflare resources:
- Queue: `queue-example` (main processing queue)
- Dead Letter Queue: `catch-all-queue` (catches failed messages)
- R2 Bucket: `queue-example` with event notifications configured
- KV Namespace: `CACHE` (stores processed results)
Configure R2 event notifications to send `PutObject` events to the `queue-example` queue.
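Assuming a recent wrangler version, the resources and the event notification can be created roughly like this (flag names can vary between wrangler releases):

```bash
# Create the queues, bucket, and KV namespace (names match the list above)
npx wrangler queues create queue-example
npx wrangler queues create catch-all-queue
npx wrangler r2 bucket create queue-example
npx wrangler kv namespace create CACHE

# Send object-create (PutObject) events from the bucket to the queue
npx wrangler r2 bucket notification create queue-example \
  --event-type object-create \
  --queue queue-example
```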
`pnpm run dev:user-application` starts the TanStack Start application on port 3000.
`pnpm run dev:data-service` starts the Cloudflare Worker locally (connects to the production queue via `remote: true`).
apps/user-application/wrangler.jsonc:
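The producer config isn't reproduced here; a minimal sketch of the producer binding, with the binding name `QUEUE` taken from the `env.QUEUE.send` example below:

```jsonc
// Sketch of the producer side; the exact file contents in the repo may differ
"queues": {
  "producers": [
    {
      "queue": "queue-example",
      "binding": "QUEUE"
    }
  ]
}
```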
apps/data-service/wrangler.jsonc:
"queues": {
"consumers": [
{
"queue": "queue-example",
"max_retries": 3, // Retry failed messages 3 times
"retry_delay": 2, // Wait 2 seconds between retries
"dead_letter_queue": "catch-all-queue",
"max_batch_size": 4, // Process up to 4 messages at once
"max_batch_timeout": 1 // Wait 1 second to fill batch
}
]
}The project uses discriminated unions for type-safe message routing:
```json
{
  "type": "EXAMPLE_MESSAGE",
  "id": "uuid",
  "message": "What is the meaning of life?"
}
```

Processed by the Llama 2 AI model; the response is stored in KV.
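A sketch of how the text handler might call Workers AI and store the result, assuming an `AI` binding and the `@cf/meta/llama-2-7b-chat-int8` model id (the repo's actual handler may differ):

```typescript
// Hypothetical handler for EXAMPLE_MESSAGE; binding and key naming are assumptions
export async function handleExampleMessage(
  body: { id: string; message: string },
  env: { AI: Ai; CACHE: KVNamespace },
) {
  const result = await env.AI.run("@cf/meta/llama-2-7b-chat-int8", {
    prompt: body.message,
  });
  // Store the response under the message id so the frontend can poll for it
  await env.CACHE.put(body.id, result.response ?? "");
}
```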
```json
{
  "type": "R2_EVENT",
  "account": "...",
  "bucket": "queue-example",
  "action": "PutObject",
  "object": {
    "key": "image.jpg",
    "size": 12345,
    "eTag": "..."
  }
}
```

Processed by the LLaVA image-to-text model; the caption is stored in KV.
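Putting both shapes together, the consumer's routing might look roughly like this; the `@repo/data-ops` import path, the `BUCKET` binding, and the LLaVA model id are assumptions:

```typescript
import { queueMessageSchema } from "@repo/data-ops"; // hypothetical package name
import { handleExampleMessage } from "./handlers";   // the text-handler sketch above

interface Env {
  AI: Ai;
  CACHE: KVNamespace;
  BUCKET: R2Bucket;
}

export default {
  async queue(batch: MessageBatch<unknown>, env: Env) {
    for (const msg of batch.messages) {
      const parsed = queueMessageSchema.safeParse(msg.body);
      if (!parsed.success) {
        msg.ack(); // Drop malformed messages rather than retrying them forever
        continue;
      }
      switch (parsed.data.type) {
        case "EXAMPLE_MESSAGE":
          await handleExampleMessage(parsed.data, env);
          break;
        case "R2_EVENT": {
          const obj = await env.BUCKET.get(parsed.data.object.key);
          if (obj) {
            const caption = await env.AI.run("@cf/llava-hf/llava-1.5-7b-hf", {
              image: [...new Uint8Array(await obj.arrayBuffer())],
              prompt: "Describe this image",
            });
            await env.CACHE.put(parsed.data.object.key, caption.description ?? "");
          }
          break;
        }
      }
      msg.ack();
    }
  },
};
```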
```typescript
await env.QUEUE.send(message, { delaySeconds: 300 }); // 5 minute delay
```

Messages can be acknowledged or retried individually:

```typescript
async queue(batch: MessageBatch) {
  for (const message of batch.messages) {
    try {
      await processMessage(message.body);
      message.ack(); // Acknowledge successful processing
    } catch (error) {
      message.retry(); // Retry this specific message
    }
  }
}
```

Or operate on the whole batch at once:

```typescript
async queue(batch: MessageBatch) {
  console.log(batch.queue); // Queue name
  batch.ackAll(); // Acknowledge all messages
  batch.retryAll(); // Retry all messages
}
```

Deploy each application with:

```bash
pnpm run deploy:user-application
pnpm run deploy:data-service
```

- Cloudflare Queues only support one consumer per queue (but unlimited producers)
- Use `remote: true` in wrangler config to test with production queues locally
- Run `pnpm wrangler types` to generate TypeScript bindings for Cloudflare resources
- Event subscriptions eliminate the need to update application code for common events
- Dead letter queues are essential for handling persistent failures
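One hedged option for the reprocessing step (not part of this repo) is to attach a consumer to the DLQ itself:

```jsonc
// Hypothetical: a separate consumer on the DLQ for inspection or replay
"queues": {
  "consumers": [
    {
      "queue": "catch-all-queue",
      "max_retries": 0 // don't bounce persistent failures around again
    }
  ]
}
```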
