An HTTP request replay and comparison tool written in Go. Perfect for testing API changes, comparing environments, load testing, validating migrations, and generating detailed reports.
- Replay HTTP requests from JSON log files
- Multi-target support - test multiple environments simultaneously
- Concurrent execution with configurable limits
- Filtering by HTTP method, path substring, and request count
- Ignore rules for skipping noisy or irrelevant fields during diffing
- Regression rules to automatically fail runs when behavioral or performance regressions are detected
- Rate limiting - control requests per second
- Configurable timeouts and delays
- Real-time progress tracking with ETA
- Detailed latency statistics (p50, p90, p95, p99, min, max, avg)
- Automatic diff detection between targets
- Status code mismatch reporting
- Response body comparison
- Latency comparison across targets
- Per-target statistics breakdown
- Ignore fields during comparison
- Bearer token authentication
- Custom API headers (repeatable)
- Supports multiple headers simultaneously
- Colorized console output for easy reading
- JSON output for programmatic use and CI/CD
- HTML reports with executive summary, latency charts, per-target breakdown, and difference highlighting
- Summary-only mode for quick overview
- Nginx log conversion to JSON Lines format (combined/common)
- Supports filtering and replay directly from raw logs
- Fully replayable: captured logs can be replayed or compared after the fact
Replayer returns specific exit codes to allow CI/CD pipelines and scripts to react programmatically:
| Exit Code | Meaning |
|---|---|
| 0 | Run completed successfully, no differences or errors |
| 1 | Differences detected between targets (used with --compare) |
| 2 | One or more regression rules were violated |
| 3 | Invalid arguments or command-line usage |
| 4 | Runtime error occurred (network, file I/O, or unexpected failure) |
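For example, a CI step can branch on the exit code of a plain --compare run (a minimal bash sketch; the log file and target names are placeholders):

./replayer --input-file smoke_tests.json --compare staging.api production.api
case $? in
  0) echo "Targets match" ;;
  1) echo "Differences detected between targets"; exit 1 ;;
  *) echo "Usage or runtime error"; exit 1 ;;
esac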
# Clone the repository
git clone <repo-url>
cd replayer
# Build all components
make build
make demo

Once it finishes, demo.html opens in your browser.
Replay requests against a single target:
./replayer --input-file test_logs.json --concurrency 5 localhost:8080

The killer feature - compare two environments side-by-side:
./replayer \
--input-file prod_logs.json \
--compare \
--concurrency 10 \
staging.example.com \
production.example.com

Simulate realistic load patterns:
./replayer \
--input-file logs.json \
--rate-limit 1000 \
--concurrency 50 \
--timeout 10000 \
localhost:8080

Provide an auth token or custom headers:
# Bearer token
./replayer --input-file logs.json --auth "Bearer token123" api.example.com
# Custom headers
./replayer --input-file logs.json --header "X-API-Key: abc" --header "X-Env: staging" api.example.com

# Single target
./replayer --input-file logs.json --html-report report.html localhost:8080
# Comparison mode
./replayer --input-file logs.json --compare --html-report comparison_report.html staging.api production.api

# Convert nginx logs to JSON Lines
./replayer --input-file /var/log/nginx/access.log --parse-nginx traffic.json --nginx-format combined
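# The same conversion should work for nginx's "common" log format (paths illustrative):
./replayer --input-file /var/log/nginx/access.log --parse-nginx traffic.json --nginx-format common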
# Replay converted logs
./replayer --input-file traffic.json --concurrency 10 staging.api.com

Test only certain endpoints:
# Only replay POST requests to /checkout
./replayer \
--input-file test_logs.json \
--filter-method POST \
--filter-path /checkout \
--limit 100 \
localhost:8080

Ignore specific JSON fields when comparing responses:
| Type | Example |
|---|---|
| Exact field | --ignore status.updated_at |
| Wildcard | --ignore '*.timestamp' |
| Multiple fields | --ignore x --ignore y --ignore z |
# Ignore timestamps, request IDs, metadata
./replayer \
--input-file logs.json \
--compare \
--ignore "*.timestamp" \
--ignore "request_id" \
--ignore "metadata.*" \
staging.api prod.api
# Ignore an entire object subtree
--ignore "debug_info"Perfect for CI/CD pipelines:
./replayer \
--input-file test_logs.json \
--output-json \
--compare \
staging.api \
production.api > results.json
cat results.json | jq '.summary.succeeded'

Capture requests in real time from a running service or proxy and replay or compare them on the fly.
# HTTP capture
./replayer --capture \
--listen :8080 \
--upstream http://staging.api \
--output traffic.json \
--stream
# HTTPS capture
./replayer --capture \
--listen :8080 \
--upstream https://staging.api \
--output traffic.json \
--stream \
--tls-cert proxy.crt \
--tls-key proxy.key
# Replay captured traffic
./replayer --input-file traffic.json staging.api
# Compare captured traffic between two environments
./replayer --input-file traffic.json --compare staging.api production.api

When you finish capturing, you can replay or compare the generated traffic.json file as usual.
Declare regression rules in a YAML file to fail runs automatically when behavioral or performance regressions are detected:
./replayer \
--input-file traffic.json \
--compare \
--rules rules.yaml \
staging.api \
production.api

If any rule is violated, the run fails and the violations are reported.
rules.yaml example
rules:
  status_mismatch:
    max: 0
  body_diff:
    allowed: false
    ignore:
      - "*.timestamp"
      - "request_id"
  latency:
    metric: p95
    regression_percent: 20
endpoint_rules:
  - path: /users
    method: GET
    status_mismatch:
      max: 0
  - path: /slow
    latency:
      metric: p95
      regression_percent: 10

- Status: fails if response statuses differ
- Body: exact fields, or prefix/suffix wildcards
- Latency: requires a baseline (available metrics: min, max, avg, p50, p90, p95, p99)
Example:
./replayer \
--input-file traffic.json \
--compare \
--output-json \
staging.api production.api > baseline.json
./replayer \
--input-file traffic.json \
--compare \
--rules rules.yaml \
--baseline baseline.json \
staging.api production.api

Preview what will be replayed without sending requests:
./replayer --input-file test_logs.json --dry-run

| Flag | Type | Default | Description |
|---|---|---|---|
| --input-file | string | required | Path to the input log file |
| --concurrency | int | 1 | Number of concurrent requests |
| --timeout | int | 5000 | Request timeout in milliseconds |
| --delay | int | 0 | Delay between requests in milliseconds |
| --rate-limit | int | 0 | Maximum requests per second (0 = unlimited) |
| --limit | int | 0 | Limit number of requests to replay (0 = all) |
| --filter-method | string | "" | Filter by HTTP method (GET, POST, etc.) |
| --filter-path | string | "" | Filter by path substring |
| --compare | bool | false | Compare responses between targets |
| --output-json | bool | false | Output results as JSON |
| --progress | bool | true | Show progress bar |
| --dry-run | bool | false | Preview mode - don't send requests |
| --summary-only | bool | false | Output summary only |
| --auth | string | "" | Authorization header value |
| --header | string | "" | Custom header (repeatable) |
| --html-report | string | "" | Generate HTML report |
| --parse-nginx | string | "" | Convert nginx log to JSON Lines |
| --nginx-format | string | "combined" | Nginx format: combined/common |
| --ignore | string | "" | Ignore fields during diff (repeatable) |
| --capture | bool | false | Enable live capture mode |
| --listen | string | "" | Address to listen on for incoming requests |
| --upstream | string | "" | URL of the real service to forward requests to |
| --output | string | "" | Path to save captured requests in JSON format |
| --stream | bool | false | Stream captured requests to stdout as they happen |
| --tls-cert | string | "" | Path to TLS certificate file |
| --tls-key | string | "" | Path to TLS key file |
| --rules | string | "" | Path to rules.yaml file for regression testing |
| --baseline | string | "" | Path to baseline results JSON for comparison |
- Each line is a single JSON object (JSON Lines)
- Request/response bodies are base64-encoded
- Headers are arrays to support multiple values per key
{"timestamp":"2025-12-10T17:12:48.377+02:00","method":"POST","path":"/test","headers":{"Content-Type":["application/json"]},"body":"SGVsbG8gd29ybGQ=","status":200,"response_headers":{"Content-Type":["application/json"]},"response_body":"eyJzdWNjZXNzIjp0cnVlfQ==","latency_ms":12}[████████████████████████░░░░░░░░░░░░] 150/200 (75.0%) | Elapsed: 15s | ETA: 5s
[████████████████████████░░░░░░░░░░░░] 150/200 (75.0%) | Elapsed: 15s | ETA: 5s
[0][localhost:8080] 200 -> 45ms
[0][localhost:8081] 200 -> 47ms
[12][localhost:8080] 200 -> 5ms
[12][localhost:8081] 200 -> 6ms
[DIFF] Request 12 - GET /users/42:
Response bodies differ:
localhost:8080: {"id":42,"name":"Liakos koulaxis"}
localhost:8081: {"id":42,"name":"Liakos Koulaxis Jr.","version":"v2"}
[45][localhost:8080] 200 -> 3ms
[45][localhost:8081] 404 -> 2ms
[DIFF] Request 45 - GET /users/678:
Status codes differ: localhost:8080=200 localhost:8081=404
==== Summary ====
Overall Statistics:
Total Requests: 200
Succeeded: 195
Failed: 5
Differences: 23
Latency (ms):
min: 2
avg: 45
p50: 42
p90: 78
p95: 95
p99: 124
max: 2001
Per-Target Statistics:
localhost:8080:
Succeeded: 98
Failed: 2
Latency:
min: 2
avg: 43
p50: 40
p90: 75
p95: 92
p99: 120
max: 2001
localhost:8081:
Succeeded: 97
Failed: 3
Latency:
min: 2
avg: 47
p50: 44
p90: 81
p95: 98
p99: 128
max: 3002
{
"results": [
{
"index": 0,
"request": {
"method": "GET",
"path": "/users/123",
"headers": {"Content-Type": "application/json"},
"body": null
},
"responses": {
"localhost:8080": {
"index": 0,
"status": 200,
"latency_ms": 45,
"body": "{\"id\":123,\"name\":\"Liakos koulaxis\"}"
},
"localhost:8081": {
"index": 0,
"status": 200,
"latency_ms": 47,
"body": "{\"id\":123,\"name\":\"Liakos koulaxis\",\"version\":\"v2\"}"
}
},
"diff": {
"status_mismatch": false,
"body_mismatch": true,
"body_diffs": {
"localhost:8080": "{\"id\":123,\"name\":\"Liakos koulaxis\"}",
"localhost:8081": "{\"id\":123,\"name\":\"Liakos koulaxis\",\"version\":\"v2\"}"
}
}
}
],
"summary": {
"total_requests": 200,
"succeeded": 195,
"failed": 5,
"latency": {
"p50": 42,
"p90": 78,
"p95": 95,
"p99": 124,
"min": 2,
"max": 2001,
"avg": 45
},
"by_target": {
"localhost:8080": {
"succeeded": 98,
"failed": 2,
"latency": {...}
}
}
}
}

Problem: Is staging behaving exactly like production?
# Parse logs
./replayer --input-file prod_traffic.log --parse-nginx prod_traffic.json
# Replay and compare with auth
./replayer \
--input-file prod_traffic.json \
--auth "Bearer ${STAGING_TOKEN}" \
--compare \
--html-report staging_validation.html \
--rate-limit 100 \
staging.api.example.com \
production.api.example.com

What you get: Instant visibility into any behavioral differences between environments.
Problem: Did the new version slow down any endpoints?
./replayer --input-file baseline_traffic.json --compare old-api.com new-api.com

What you get: Side-by-side latency comparison for every endpoint.
Problem: Can the new infrastructure handle production load?
./replayer --input-file prod_logs.json --rate-limit 1000 --concurrency 50 new-infra.com

What you get: Confidence that your new infrastructure can handle real traffic patterns.
Problem: Did the API response format change?
./replayer --input-file api_calls.json --compare --output-json v1.api v2.api > diff.json

What you get: Automated detection of breaking changes.
Problem: Synthetic load tests don't match real usage.
./replayer --input-file peak_hour_traffic.json --rate-limit 500 api.example.com

What you get: Load testing based on actual production traffic patterns.
# Only test authentication endpoints
./replayer --input-file logs.json --filter-path /auth localhost:8080
# Only test write operations
./replayer --input-file logs.json --filter-method POST localhost:8080
# Test just the first 50 requests
./replayer --input-file logs.json --limit 50 localhost:8080

# Gentle ramp-up: 10 req/s
./replayer --input-file logs.json --rate-limit 10 --concurrency 5 api.com
# Stress test: 1000 req/s
./replayer --input-file logs.json --rate-limit 1000 --concurrency 100 api.com
# Sustained load test with unlimited rate
./replayer --input-file logs.json --concurrency 50 api.com

#!/bin/bash
# compare staging to production and fail if differences found
./replayer --input-file smoke_tests.json --compare --output-json \
staging.api production.api > results.json
DIFFS=$(cat results.json | jq '[.results[] | select(.diff != null)] | length')
if [ "$DIFFS" -gt 0 ]; then
echo "Found $DIFFS differences between staging and production"
exit 1
else
echo "Staging matches production"
fi
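The script above counts differences by parsing the JSON output; an alternative is to let the exit status do the gating, since differences exit with code 1 and violated rules with code 2 (a sketch, reusing the rules.yaml from the regression rules section):

./replayer --input-file smoke_tests.json --compare --rules rules.yaml \
  staging.api production.api || { echo "Differences or regressions detected"; exit 1; }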
MIT

Contributions are welcome! Please feel free to submit a Pull Request.
If this tool helped you catch bugs or validate deployments, give it a star! ⭐