Global, serverless network probe endpoints—latency, jitter, speed test, and edge metadata, with OpenTelemetry support. Designed for Cloudflare Workers and portable to other edge/serverless clouds.
- Latency Measurement: Measure network latency to the edge
- Jitter Analysis: Calculate network jitter with multiple measurements
- Speed Test: Test download speeds with configurable file sizes
- Edge Metadata: Get detailed information about the edge location and client
- Rate Limiting: Built-in rate limiting for API protection
- OpenTelemetry Header Support: Echoes the `traceparent` header for distributed tracing. Spans are not exported unless you wire up an OpenTelemetry exporter yourself.
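For instance, a client can derive latency and jitter on its own side by sampling `/ping` repeatedly. The snippet below is only an illustrative client-side sketch, not part of the worker; the hostname and sample count are placeholders.

```js
// Client-side sketch (illustrative): sample /ping a few times, take the
// mean round-trip time as latency and the mean absolute difference
// between consecutive samples as jitter. BASE is a placeholder hostname.
const BASE = "https://<your-worker>.workers.dev";

async function probe(samples = 5) {
  const rtts = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await fetch(`${BASE}/ping`);
    rtts.push(performance.now() - start);
  }
  const latency = rtts.reduce((a, b) => a + b, 0) / rtts.length;
  let jitter = 0;
  for (let i = 1; i < rtts.length; i++) jitter += Math.abs(rtts[i] - rtts[i - 1]);
  jitter /= Math.max(1, rtts.length - 1);
  return { latency, jitter };
}

probe().then(({ latency, jitter }) =>
  console.log(`latency=${latency.toFixed(1)}ms jitter=${jitter.toFixed(1)}ms`)
);
```

This runs as-is in a browser console or Node 18+, where `fetch` and `performance` are global.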
Basic information about the API and available endpoints.
| Endpoint | Method | Authentication | Description |
|---|---|---|---|
| `/ping` | GET | 🔄 Rate-limited (30/min/IP) | Get latency/jitter + edge information |
| `/speed` | GET | 🔑 API token | Download speed test (max 100MB). Query params: `size` (bytes), `pattern` (`zero`/`rand`/`asterisk`), `meta` (flag to return JSON metadata instead of data) |
| `/upload` | POST | 🔑 API token | Upload speed test (max 100MB) |
| `/info` | GET | 🔄 Rate-limited (30/min/IP) | Detailed edge POP and geo information |
| `/headers` | GET | 🔄 Rate-limited (30/min/IP) | Returns all request headers |
| `/version` | GET | 🔄 Rate-limited (30/min/IP) | Worker version and build information |
| `/echo` | POST | 🔑 API token | Echo back the request body and headers |
| `/healthz` | GET | 🔄 Rate-limited (30/min/IP) | Health check endpoint |
| Any other path | ANY | - | Returns 200 OK with "ok" |
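In a Cloudflare Worker these routes are dispatched inside the `fetch` handler. The snippet below is only a routing sketch, not the project's actual `src/index.js`; it mainly illustrates the catch-all row above (any unknown path answers 200 OK with "ok").

```js
// Minimal routing sketch (illustrative, not the project's actual code):
// dispatch on the URL pathname and fall through to a plain 200 "ok"
// for any other path, matching the catch-all row in the table above.
export default {
  async fetch(request) {
    const { pathname } = new URL(request.url);
    if (pathname === "/healthz") {
      return Response.json({ status: "ok" });
    }
    if (pathname === "/ping") {
      return Response.json({ pong: true, ts: Date.now() }); // placeholder payload
    }
    // ...remaining routes (/speed, /upload, /info, ...) would be handled here...
    return new Response("ok", { status: 200 }); // any other path
  },
};
```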
All responses include the following security headers:
```
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
Referrer-Policy: no-referrer
X-XSS-Protection: 0
Cache-Control: no-store, no-cache, must-revalidate, proxy-revalidate
Permissions-Policy: camera=(), microphone=(), geolocation=()
```
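One common way to guarantee this in a Worker is to merge the headers onto every outgoing response. The helper below is a minimal sketch; the function name and structure are assumptions, not the project's actual implementation, but the header values are the ones listed above.

```js
// Illustrative helper: copy a response and attach the security headers
// listed above. Name and structure are assumptions, not src/index.js.
const SECURITY_HEADERS = {
  "Strict-Transport-Security": "max-age=63072000; includeSubDomains; preload",
  "X-Content-Type-Options": "nosniff",
  "X-Frame-Options": "DENY",
  "Referrer-Policy": "no-referrer",
  "X-XSS-Protection": "0",
  "Cache-Control": "no-store, no-cache, must-revalidate, proxy-revalidate",
  "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
};

function withSecurityHeaders(response) {
  const headers = new Headers(response.headers);
  for (const [name, value] of Object.entries(SECURITY_HEADERS)) {
    headers.set(name, value);
  }
  return new Response(response.body, { status: response.status, headers });
}
```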
- 🔑 API Token Required for sensitive endpoints (`/speed`, `/upload`, `/echo`)
  - Set the `x-api-probe-token` header with a valid token
- 🔄 Rate Limited (30 requests/minute per IP)
  - Applies to public endpoints: `/ping`, `/info`, `/healthz`, `/headers`, `/version`
  - Returns `429 Too Many Requests` when the limit is exceeded
  - Uses Cloudflare Workers KV for distributed rate limiting (see the sketch below)
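A fixed-window limiter over Workers KV, plus the token check, might look roughly like the sketch below. The binding name `RATE_LIMIT_KV`, the 30/minute limit, and the `x-api-probe-token` header come from this README; everything else is an assumption about how `src/index.js` could implement it.

```js
// Illustrative sketch of the two checks described above; the actual
// logic in src/index.js may differ.
async function checkRateLimit(request, env) {
  const ip = request.headers.get("CF-Connecting-IP") || "unknown";
  // One KV key per client IP per minute (fixed window). KV reads are
  // eventually consistent, so the count is approximate.
  const windowKey = `rl:${ip}:${Math.floor(Date.now() / 60000)}`;
  const count = parseInt((await env.RATE_LIMIT_KV.get(windowKey)) || "0", 10);
  if (count >= 30) {
    return new Response(
      JSON.stringify({ error: "Too many requests", code: "RATE_LIMITED", details: {} }),
      { status: 429, headers: { "Content-Type": "application/json" } }
    );
  }
  // expirationTtl keeps stale per-minute counters from accumulating in KV.
  await env.RATE_LIMIT_KV.put(windowKey, String(count + 1), { expirationTtl: 120 });
  return null; // within limit
}

function checkApiToken(request, env) {
  const token = request.headers.get("x-api-probe-token");
  return token && token === env.API_PROBE_TOKEN; // secret set via `wrangler secret put`
}
```

Keying on a per-minute window with a short `expirationTtl` avoids any separate cleanup job, at the cost of an approximate (eventually consistent) count.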
All endpoints support distributed tracing through the `traceparent` header:

- Echoes back any received `traceparent` header in both the response header and the JSON body
- Follows the W3C Trace Context specification
- Enables end-to-end request tracing across services
This worker does not export spans by default. To send trace data to a backend, integrate an OpenTelemetry exporter or other tracing library in `src/index.js`.
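For reference, echoing the header only takes a few lines of Worker code. The sketch below mirrors the behaviour described above; the function name and payload shape are illustrative, and an exporter, if you add one, would hook in around the same place.

```js
// Illustrative sketch: copy an incoming W3C traceparent header onto the
// response and into the JSON body. No spans are created or exported here.
function withTraceContext(request, payload) {
  const traceparent = request.headers.get("traceparent");
  const headers = { "Content-Type": "application/json" };
  if (traceparent) {
    headers["traceparent"] = traceparent; // echo on the response header
    payload.traceparent = traceparent;    // echo in the JSON body
  }
  return new Response(JSON.stringify(payload), { headers });
}
```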
All successful responses include:
- Standard HTTP status codes
- Consistent JSON format for structured data
- Security headers (see above)
- Request tracing information when available
Error responses follow the format:
```json
{
  "error": "Error message",
  "code": "ERROR_CODE",
  "details": {}
}
```
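A small helper can keep every error response in that shape. The sketch below is illustrative; the helper name and the example code are assumptions, not necessarily what `src/index.js` uses.

```js
// Illustrative helper producing the error format documented above.
function errorResponse(status, code, message, details = {}) {
  return new Response(JSON.stringify({ error: message, code, details }), {
    status,
    headers: { "Content-Type": "application/json" },
  });
}

// Example (hypothetical code value):
// return errorResponse(401, "UNAUTHORIZED", "Missing or invalid x-api-probe-token");
```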
To set up and deploy the worker:

- Create KV Namespace: In the Cloudflare dashboard, create a namespace (e.g., `RATE_LIMIT_KV`) and add its ID to `wrangler.toml`.
- Set API Token Secret: Generate a secure token, then:
  ```sh
  wrangler secret put API_PROBE_TOKEN
  ```
- Test Locally: Run the worker in development mode:
  ```sh
  wrangler dev
  ```
- Deploy: Publish the worker to Cloudflare:
  ```sh
  wrangler publish
  ```
Below are ready-to-copy curl snippets for every endpoint. Replace `<your-worker>` with your deployed hostname and set `API_PROBE_TOKEN` in your shell.

`/healthz`:

```sh
curl -s https://<your-worker>.workers.dev/healthz
```

`/ping`:

```sh
curl -s https://<your-worker>.workers.dev/ping | jq
```

`/speed`:

- Metadata only (no large transfer):

  ```sh
  curl -s "https://<your-worker>.workers.dev/speed?size=5000000&pattern=rand&meta" | jq
  ```

- Actual speed test (requires API token; measures Mbps locally):

  ```sh
  speed_bps=$(curl -s -H "x-api-probe-token:$API_PROBE_TOKEN" \
    "https://<your-worker>.workers.dev/speed?size=5000000&pattern=rand" \
    -o /dev/null -w "%{speed_download}")
  echo "Download: $(echo "scale=2; $speed_bps*8/1000000" | bc) Mb/s"
  ```

Parameters:

- `size` – bytes (1-104857600)
- `pattern` – `zero`/`rand`/`asterisk`
- `meta` flag – if present, returns JSON instead of data

`/upload`:

- Zero-filled 10 MB buffer:

  ```sh
  dd if=/dev/zero bs=1m count=10 2>/dev/null | \
    curl -s -X POST "https://<your-worker>.workers.dev/upload" \
      -H "x-api-probe-token:$API_PROBE_TOKEN" \
      --data-binary @- -o /dev/null -w "Upload: %{speed_upload}B/s\n"
  ```

- Random 5 MB buffer:

  ```sh
  dd if=/dev/urandom bs=1m count=5 2>/dev/null | \
    curl -s -X POST "https://<your-worker>.workers.dev/upload" \
      -H "x-api-probe-token:$API_PROBE_TOKEN" \
      --data-binary @- -o /dev/null -w "Upload: %{speed_upload}B/s\n"
  ```

`/info`:

```sh
curl -s https://<your-worker>.workers.dev/info | jq
```

`/headers`:

```sh
curl -s https://<your-worker>.workers.dev/headers | jq
```

`/version`:

```sh
curl -s https://<your-worker>.workers.dev/version | jq
```

`/echo`:

```sh
curl -s -X POST https://<your-worker>.workers.dev/echo \
  -H "x-api-probe-token:$API_PROBE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"hello":"world"}' | jq
```

After deployment, access the endpoints via your worker URL. For example:
```sh
# Get metadata (no large download)
curl -s "https://<your-worker>.workers.dev/speed?size=5000000&pattern=rand&meta" | jq

# Run speed test and measure Mbps (requires bc)
speed_bps=$(curl -s -H "x-api-probe-token:$API_PROBE_TOKEN" \
  "https://<your-worker>.workers.dev/speed?size=5000000&pattern=rand" \
  -o /dev/null -w "%{speed_download}")
echo "Mbps: $(echo "scale=2; $speed_bps*8/1000000" | bc)"
```

```sh
dd if=/dev/zero bs=1m count=10 2>/dev/null | \
  curl -s -X POST "https://<your-worker>.workers.dev/upload" \
    -H "x-api-probe-token:$API_PROBE_TOKEN" \
    --data-binary @- -o /dev/null -w "upload=%{speed_upload}B/s\n"
```