An MVP tool for visualizing API latency using percentile analysis (p50, p95, p99) instead of averages. Built with Node.js, TypeScript, PostgreSQL, and React.
The Problem with Averages:
- Averages hide outliers. An API with 99 requests at 10ms and 1 request at 10 seconds has an average of ~110ms, which doesn't reflect the user experience.
- Real-world latency follows a long-tail distribution where a small percentage of requests take much longer.
Why p95 and p99 Matter:
- p50 (median): Half of your requests are faster than this. Good baseline, but doesn't show worst-case.
- p95: 95% of requests are faster than this; 1 in 20 requests is slower. Critical for SLA monitoring.
- p99: 99% of requests are faster than this; shows near-worst-case performance, hit by 1 in 100 requests.
Example: If your API has:
- Average: 50ms
- p95: 500ms
- p99: 2000ms
This tells you that while most requests are fast (average 50ms), 5% of requests take 500ms or longer, and 1% take 2 seconds or longer. This is critical information that averages hide.
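For concreteness, here is a minimal sketch of how such percentiles can be computed from recorded latencies using the nearest-rank method (the project's `src/analytics.ts` may use a different interpolation or compute them in SQL):

```typescript
// Nearest-rank percentile: the value below which roughly p% of samples fall.
function percentile(latenciesMs: number[], p: number): number {
  if (latenciesMs.length === 0) return NaN;
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // 1-based rank
  return sorted[Math.min(rank, sorted.length) - 1];
}

// A long-tail example: the average (~272ms) sits nowhere near typical or worst-case latency.
const samples = [12, 14, 15, 16, 18, 20, 22, 25, 480, 2100]; // ms
console.log(percentile(samples, 50)); // 18   (typical request)
console.log(percentile(samples, 95)); // 2100 (tail)
console.log(percentile(samples, 99)); // 2100 (tail)
```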
Clear separation of concerns:
- Runner (`src/runner.ts`): Sends 50 sequential requests per endpoint and measures latency with high-resolution timers (see the sketch below)
- Storage (`src/storage.ts`): Saves raw request data to Postgres
- Analytics (`src/analytics.ts`): Computes percentiles and payload bucket analysis
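As a rough illustration of the measurement step (a sketch only, assuming Node 18's global fetch; the real `src/runner.ts` may issue requests differently), latency can be captured with `process.hrtime.bigint()`:

```typescript
// Sketch: time one HTTP request with nanosecond-resolution timers.
async function measureOnce(url: string, method: string, payload?: unknown) {
  const body = payload !== undefined ? JSON.stringify(payload) : undefined;
  const start = process.hrtime.bigint();
  const res = await fetch(url, {
    method,
    body,
    headers: body ? { "Content-Type": "application/json" } : undefined,
    signal: AbortSignal.timeout(30_000), // 30-second timeout, matching the MVP spec
  });
  const text = await res.text();
  const end = process.hrtime.bigint();
  return {
    latency_ms: Number(end - start) / 1e6, // nanoseconds -> milliseconds
    request_size_bytes: body ? Buffer.byteLength(body) : 0,
    response_size_bytes: Buffer.byteLength(text),
    status_code: res.status,
  };
}
```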
Prerequisites:
- Node.js 18+
- PostgreSQL 12+
- npm or yarn
Installation:
- Clone the repository:

      git clone https://github.com/devleo10/delayt.git
      cd delayt

- Install backend dependencies:

      npm install

- Install frontend dependencies:

      cd client
      npm install
      cd ..

- Set up PostgreSQL:

      CREATE DATABASE latency_visualizer;

- Configure environment variables (optional) by creating a `.env` file in the root directory (a connection sketch follows these steps):

      DB_HOST=localhost
      DB_PORT=5432
      DB_NAME=latency_visualizer
      DB_USER=postgres
      DB_PASSWORD=postgres
      PORT=3001

- Run database migration:

      npm run build
      npm run migrate

- Start the backend server:

      npm run dev

- Start the frontend (in a new terminal):

      npm run client
      # or
      cd client && npm start

- Open your browser and navigate to http://localhost:3000
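As a hedged sketch of how these settings might be consumed when opening the Postgres connection (assuming the `pg` library; the real `src/storage.ts` may differ), the defaults below mirror the example `.env` values above:

```typescript
import { Pool } from "pg";

// Build the connection pool from the .env values shown above.
const pool = new Pool({
  host: process.env.DB_HOST ?? "localhost",
  port: Number(process.env.DB_PORT ?? 5432),
  database: process.env.DB_NAME ?? "latency_visualizer",
  user: process.env.DB_USER ?? "postgres",
  password: process.env.DB_PASSWORD ?? "postgres",
});

export default pool;
```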
Usage:
- Add API endpoints:
  - Enter endpoints in the text area, one per line
  - Format: `METHOD URL [PAYLOAD for POST]` (a parsing sketch follows this list)
  - Examples:

        GET https://api.github.com/users/octocat
        POST https://jsonplaceholder.typicode.com/posts {"title": "test"}

- Run tests:
  - Click "Run Tests" - this sends exactly 50 sequential requests per endpoint
  - Results are stored in Postgres
- View analytics:
  - Table shows: endpoint, method, p50, p95, p99, avg payload size
  - Endpoints are ranked by p95 (slowest first)
  - Slow endpoints (p95 > 1000ms) are highlighted
  - Chart shows payload size vs latency for POST requests
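As a sketch of the endpoint line format described above (an illustration only; the actual parsing in the client or server may differ), each line can be split into method, URL, and optional JSON payload:

```typescript
interface EndpointSpec {
  method: string;
  url: string;
  payload?: unknown;
}

// Parse one "METHOD URL [PAYLOAD]" line, e.g.
//   POST https://jsonplaceholder.typicode.com/posts {"title": "test"}
function parseEndpointLine(line: string): EndpointSpec {
  const [method, url, ...rest] = line.trim().split(/\s+/);
  const payloadText = rest.join(" ");
  return {
    method: method.toUpperCase(),
    url,
    payload: payloadText ? JSON.parse(payloadText) : undefined,
  };
}

console.log(parseEndpointLine("GET https://api.github.com/users/octocat"));
// { method: "GET", url: "https://api.github.com/users/octocat", payload: undefined }
```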
MVP features:
- ✅ Exactly 50 sequential requests per endpoint
- ✅ High-resolution latency measurement (nanosecond precision using `process.hrtime.bigint()`)
- ✅ Records: endpoint, method, latency_ms, request_size_bytes, response_size_bytes, status_code
- ✅ Percentile analysis (p50, p95, p99) - no averages as primary metric
- ✅ Payload size bucketing for POST requests (see the sketch after this list)
- ✅ Request timeouts (30 seconds)
- ✅ No retries (failures are logged and recorded)
- ✅ Single-page React UI with table and chart visualization
- ✅ Slow endpoint highlighting
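To illustrate the bucketing idea (a sketch with an assumed fixed bucket width; the real `src/analytics.ts` may choose bucket boundaries differently), POST requests can be grouped by request size and a p95 computed per bucket:

```typescript
interface RequestRecord {
  request_size_bytes: number;
  latency_ms: number;
}

interface SizeBucket {
  bucket_min: number;
  bucket_max: number;
  p95: number;
  count: number;
}

// Group requests into fixed-width payload-size buckets and compute p95 latency per bucket.
// A 100-byte bucket width is assumed here purely for illustration.
function bucketBySize(records: RequestRecord[], width = 100): SizeBucket[] {
  const groups = new Map<number, number[]>();
  for (const r of records) {
    const key = Math.floor(r.request_size_bytes / width);
    const latencies = groups.get(key) ?? [];
    latencies.push(r.latency_ms);
    groups.set(key, latencies);
  }
  return [...groups.entries()]
    .sort(([a], [b]) => a - b)
    .map(([key, latencies]) => {
      const sorted = [...latencies].sort((a, b) => a - b);
      const rank = Math.ceil(0.95 * sorted.length); // nearest-rank p95
      return {
        bucket_min: key * width,
        bucket_max: (key + 1) * width,
        p95: sorted[Math.min(rank, sorted.length) - 1],
        count: sorted.length,
      };
    });
}
```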
API:

Submit endpoints to test.
Request:
    {
      "endpoints": [
        {"url": "https://api.example.com", "method": "GET"},
        {"url": "https://api.example.com/data", "method": "POST", "payload": {"key": "value"}}
      ]
    }

Response:
    {
      "success": true,
      "message": "Tests completed"
    }

Get percentile statistics and payload buckets.
Response:
    {
      "percentileStats": [
        {
          "endpoint": "https://api.example.com",
          "method": "GET",
          "p50": 45.2,
          "p95": 120.5,
          "p99": 250.8,
          "avg_payload_size": 0
        }
      ],
      "payloadBuckets": [
        {
          "bucket_min": 0,
          "bucket_max": 100,
          "p95": 85.3,
          "count": 150
        }
      ]
    }
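For reference, the stats response above maps naturally onto TypeScript shapes like these (field names are taken from the example; the project's `client/src/types.ts` may define them differently, and the route path in the fetch call is hypothetical):

```typescript
interface PercentileStat {
  endpoint: string;
  method: string;
  p50: number;
  p95: number;
  p99: number;
  avg_payload_size: number;
}

interface PayloadBucketStat {
  bucket_min: number;
  bucket_max: number;
  p95: number;
  count: number;
}

interface StatsResponse {
  percentileStats: PercentileStat[];
  payloadBuckets: PayloadBucketStat[];
}

// Example client call; "/api/stats" is a placeholder path, not necessarily the real route.
async function loadStats(): Promise<StatsResponse> {
  const res = await fetch("http://localhost:3001/api/stats");
  return res.json() as Promise<StatsResponse>;
}
```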
Project structure:

    .
    ├── src/
    │   ├── runner.ts          # Request execution and latency measurement
    │   ├── storage.ts         # Postgres data persistence
    │   ├── analytics.ts       # Percentile and bucket calculations
    │   ├── server.ts          # Express API server
    │   ├── types.ts           # TypeScript type definitions
    │   └── migrations/
    │       └── migrate.ts     # Database schema
    ├── client/
    │   ├── src/
    │   │   ├── App.tsx        # React frontend
    │   │   ├── App.css        # Styles
    │   │   └── types.ts       # Frontend type definitions
    │   └── package.json
    ├── package.json
    ├── tsconfig.json
    ├── .gitignore
    └── README.md
Development:

    npm run build
    npm run migrate

    # Backend (with hot reload)
    npm run dev

    # Frontend (with hot reload)
    npm run client

Database schema:

    CREATE TABLE api_requests (
      id SERIAL PRIMARY KEY,
      endpoint VARCHAR(500) NOT NULL,
      method VARCHAR(10) NOT NULL,
      latency_ms NUMERIC(10, 2) NOT NULL,
      request_size_bytes INTEGER NOT NULL,
      response_size_bytes INTEGER NOT NULL,
      status_code INTEGER NOT NULL,
      created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
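As a sketch of how percentile stats could be derived from this table (an assumption for illustration; `src/analytics.ts` may instead compute percentiles in application code), PostgreSQL's `percentile_cont` aggregate can do the work in a single query:

```typescript
import { Pool } from "pg";

const pool = new Pool({ /* connection settings, e.g. the .env values above */ });

// One possible way to compute p50/p95/p99 per endpoint directly in Postgres.
async function percentileStats() {
  const { rows } = await pool.query(`
    SELECT endpoint,
           method,
           percentile_cont(0.50) WITHIN GROUP (ORDER BY latency_ms) AS p50,
           percentile_cont(0.95) WITHIN GROUP (ORDER BY latency_ms) AS p95,
           percentile_cont(0.99) WITHIN GROUP (ORDER BY latency_ms) AS p99,
           AVG(request_size_bytes) AS avg_payload_size
    FROM api_requests
    GROUP BY endpoint, method
    ORDER BY p95 DESC
  `);
  return rows;
}
```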
Not included in the MVP:
- No authentication
- No live monitoring
- No distributed tracing
- No background agents
- Sequential requests only (not parallel)
- Fixed 50 requests per endpoint
- No historical comparison
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see LICENSE file for details
Author: devleo10
- GitHub: @devleo10
Acknowledgements:
- Built with React
- Backend built with Express
- Database: PostgreSQL