A distributed task queue system built in Go
A high-performance, scalable task queue system built in Go featuring concurrent processing, graceful shutdown, and multiple deployment options. This project demonstrates production-ready Go development practices including channels, context management, Docker containerization, and Kubernetes deployment.
- Concurrent Processing: Worker pools with configurable concurrency using Go channels and goroutines
- In-Memory Queue: High-performance task queuing with priority support
- Redis Integration: 🚧 In Progress - Distributed queuing for multi-instance deployment
- Priority Queuing: Support for high-priority task processing
- Graceful Shutdown: Context-based cancellation and clean shutdown handling
- REST API: HTTP endpoints for task submission and status monitoring
- Health Checks: Built-in health monitoring and metrics endpoints
- Retry Logic: Configurable task retry with exponential backoff
- Docker Support: Multi-stage builds and production-ready containers
- Kubernetes Ready: Complete K8s manifests with auto-scaling and load balancing
- Production Features: Rate limiting, CORS, logging middleware
```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│   HTTP Client   │────▶│    API Server    │────▶│   Task Queue    │
└─────────────────┘     └──────────────────┘     └─────────────────┘
                                 │                        │
                                 ▼                        ▼
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│   Monitoring    │◀────│   Worker Pool    │◀────│  Redis/Memory   │
└─────────────────┘     └──────────────────┘     └─────────────────┘
                                 │
                                 ▼
                        ┌──────────────────┐
                        │ Task Processors  │
                        └──────────────────┘
```
- Email: Send emails with SMTP simulation
- Image Processing: Resize images with configurable dimensions
- Data Processing: Process CSV files and data transformations
- Webhooks: Make HTTP calls to external APIs
- Custom: Extensible processor interface for new task types
- Go 1.23+
- Docker & Docker Compose (for containerized deployment)
- Kubernetes cluster (for K8s deployment)
- Redis (coming soon for distributed queues)
- Clone the repository:

```bash
git clone <repository-url>
cd go-task-queue
```

- Install dependencies:

```bash
go mod tidy
```

- Build the application:

```bash
go build -o bin/task-queue cmd/server/main.go
```

- Run the application:

```bash
./bin/task-queue
```

- Redis Integration (Coming Soon!):

```bash
# Redis-backed distributed queuing is under development
# Currently uses high-performance in-memory queues
REDIS_URL=redis://localhost:6379 ./bin/task-queue  # Will be supported soon!
```

The application is configured via environment variables:
| Variable | Default | Description |
|---|---|---|
| `PORT` | `8080` | HTTP server port |
| `WORKER_COUNT` | `5` | Number of worker goroutines |
| `QUEUE_SIZE` | `1000` | In-memory queue buffer size |
| `REDIS_URL` | `""` | Redis connection URL (🚧 coming soon!) |
```
POST /api/v1/tasks
Content-Type: application/json

{
  "type": "email",
  "payload": {
    "recipient": "user@example.com",
    "subject": "Hello!",
    "body": "Task queue is working!"
  },
  "priority": 0,
  "max_retries": 3
}
```

Other endpoints:

- `GET /api/v1/tasks/{task_id}`
- `GET /api/v1/health`
- `GET /api/v1/stats`
- `GET /api/v1/workers`

```bash
# Start the server
go run cmd/server/main.go
```
```bash
# Create an email task
curl -X POST http://localhost:8080/api/v1/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "type": "email",
    "payload": {
      "recipient": "test@example.com",
      "subject": "Welcome!",
      "body": "Hello from task queue!"
    }
  }'
```

```bash
# Create a high-priority image resize task
curl -X POST http://localhost:8080/api/v1/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "type": "image_resize",
    "payload": {
      "image_url": "https://example.com/image.jpg",
      "width": 800,
      "height": 600
    },
    "priority": 5
  }'
```

```bash
# Make sure the server is running, then:
chmod +x examples/demo.sh
./examples/demo.sh
```

```bash
# Build image
docker build -f deployments/docker/Dockerfile -t task-queue .

# Run container
docker run -p 8080:8080 \
  -e WORKER_COUNT=10 \
  -e QUEUE_SIZE=2000 \
  task-queue
```

```bash
cd deployments/docker
docker-compose up -d
```

This starts:
- Two task queue instances (in-memory queues)
- Nginx load balancer
- 🚧 Redis integration coming soon for distributed queuing
Access points:
- Load Balancer: http://localhost
- Direct Instance 1: http://localhost:8080
- Direct Instance 2: http://localhost:8081
```bash
# Apply all manifests
kubectl apply -f deployments/k8s/

# Check status
kubectl get pods -n task-queue-system
```

```bash
# 1. Create namespace and configs
kubectl apply -f deployments/k8s/01-namespace-configmap.yaml

# 2. Deploy Redis
kubectl apply -f deployments/k8s/02-redis.yaml

# 3. Deploy application
kubectl apply -f deployments/k8s/03-task-queue.yaml

# 4. Setup ingress
kubectl apply -f deployments/k8s/04-ingress.yaml
```

```bash
# Manual scaling
kubectl scale deployment task-queue --replicas=5 -n task-queue-system

# The HPA will automatically scale between 2-10 replicas based on CPU/memory
kubectl get hpa -n task-queue-system
```

- HTTP: `GET /api/v1/health`
- Kubernetes: Built-in liveness and readiness probes
- Docker: `HEALTHCHECK` instruction included
- Queue size and worker status via `/api/v1/stats`
- Task completion rates and error rates
- System resource utilization
```bash
# Local development
tail -f logs/task-queue.log

# Docker
docker logs -f task-queue-container

# Kubernetes
kubectl logs -f deployment/task-queue -n task-queue-system
```

```bash
go test ./...
```

```bash
# Install hey (HTTP load testing tool)
go install github.com/rakyll/hey@latest

# Load test task creation
hey -n 1000 -c 10 -m POST \
  -H "Content-Type: application/json" \
  -d '{"type":"email","payload":{"recipient":"test@example.com","subject":"Load test"}}' \
  http://localhost:8080/api/v1/tasks
```

```bash
# Run demo script for end-to-end testing
./examples/demo.sh
```

- Non-root container execution
- Read-only root filesystem
- Network policies for pod-to-pod communication
- Secret management for Redis credentials
- Horizontal pod autoscaling based on CPU/memory
- Redis clustering for high availability
- Connection pooling and circuit breakers
- Optimized Docker multi-stage builds
- Graceful shutdown with 30s timeout
- Task retry logic with configurable limits
- Health checks and automatic restarts
- Persistent volumes for Redis data
```
go-task-queue/
├── cmd/server/          # Application entry point
├── internal/
│   ├── api/             # HTTP API handlers
│   ├── queue/           # Queue implementations
│   └── worker/          # Worker pool and task processors
├── pkg/models/          # Shared data models
├── deployments/
│   ├── docker/          # Docker and compose files
│   └── k8s/             # Kubernetes manifests
├── examples/            # Demo scripts and examples
└── README.md
```
- Define task type in `pkg/models/task.go`:

```go
const TaskTypeCustom TaskType = "custom_task"
```

- Implement processor in `internal/worker/types.go`:

```go
func (p *DefaultTaskProcessor) processCustomTask(ctx context.Context, task *models.Task) *models.TaskResult {
    // Your implementation here
}
```

- Add to switch statement in `ProcessTask` method
Implement the Queue interface in `internal/queue/queue.go`:

```go
type Queue interface {
    Enqueue(ctx context.Context, task *models.Task) error
    Dequeue(ctx context.Context) (*models.Task, error)
    Size() int
    Close() error
    GetTask(ctx context.Context, taskID string) (*models.Task, error)
    UpdateTask(ctx context.Context, task *models.Task) error
}
```