Load testing is the most fundamental performance test type: it verifies system behavior under expected, normal conditions.
Purpose:
- Validate system performance under expected production load
- Verify system meets performance SLAs
- Baseline performance metrics
Key Characteristics:
- Constant number of virtual users (VUs)
- Steady, predictable load
- Usually runs for 5-30 minutes
Example Configuration:
```javascript
export const options = {
  vus: 50,
  duration: '15m',
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests within 500ms
    http_req_failed: ['rate<0.01'],   // Less than 1% error rate
  },
};
```
When to Use:
- Regular performance validation
- Baseline performance measurements
- Pre-production verification
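The two threshold expressions above assert a latency percentile and an error rate. A minimal sketch of what those checks compute, written in plain JavaScript rather than k6 itself (the `percentile` helper and the sample results are invented for illustration):

```javascript
// Compute p95 latency and error rate from raw request results,
// mirroring what the k6 thresholds above assert.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Made-up sample of request outcomes.
const results = [
  { durationMs: 120, failed: false },
  { durationMs: 340, failed: false },
  { durationMs: 610, failed: true },
  { durationMs: 95,  failed: false },
];

const p95 = percentile(results.map(r => r.durationMs), 95);
const errorRate = results.filter(r => r.failed).length / results.length;

console.log(p95 <= 500 ? 'p95 OK' : 'p95 breached');          // this sample breaches: p95 = 610
console.log(errorRate < 0.01 ? 'errors OK' : 'error rate breached');
```

This is the same pass/fail logic a threshold applies, just computed by hand over a tiny data set.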
A stress test gradually increases load until the system breaks or degrades significantly.
Purpose:
- Find system breaking points
- Determine maximum capacity
- Identify performance bottlenecks
- Verify system recovery
Key Characteristics:
- Gradually increasing load
- Runs until failure or performance degradation
- Monitors system recovery
Example Configuration:
```javascript
export const options = {
  stages: [
    { duration: '2m', target: 100 },  // Normal load
    { duration: '5m', target: 500 },  // Ramp up
    { duration: '2m', target: 1000 }, // Breaking point
    { duration: '1m', target: 0 },    // Recovery
  ],
  thresholds: {
    http_req_failed: ['rate<0.05'],
  },
};
```
When to Use:
- Capacity planning
- System limit identification
- Scaling decisions
- Failover testing
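k6 ramps linearly between successive stage targets, so the VU count at any moment can be derived from the stage list. A hypothetical helper (`vusAt` is not a k6 API, just a sketch of the ramping arithmetic; it assumes all durations are given in minutes, as above):

```javascript
// Hypothetical helper: number of VUs at elapsed time tSec,
// assuming a linear ramp between successive stage targets.
function vusAt(stages, tSec, startVus = 0) {
  let from = startVus;
  let elapsed = 0;
  for (const { duration, target } of stages) {
    const len = parseInt(duration, 10) * 60; // durations like '2m', '5m'
    if (tSec <= elapsed + len) {
      const frac = (tSec - elapsed) / len;
      return Math.round(from + (target - from) * frac);
    }
    elapsed += len;
    from = target;
  }
  return from; // after the last stage
}

const stages = [
  { duration: '2m', target: 100 },
  { duration: '5m', target: 500 },
  { duration: '2m', target: 1000 },
  { duration: '1m', target: 0 },
];

console.log(vusAt(stages, 60));  // halfway through stage 1 → 50
console.log(vusAt(stages, 420)); // end of stage 2 → 500
```

Sketching the ramp like this is useful for predicting when the load will cross a capacity estimate.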
A spike test checks the system's ability to handle sudden, large increases in load.
Purpose:
- Verify system behavior during traffic spikes
- Test auto-scaling effectiveness
- Validate system recovery
- Identify performance under sudden load
Key Characteristics:
- Sudden increase in load
- Short duration at peak
- Monitors recovery time
Example Configuration:
```javascript
export const options = {
  stages: [
    { duration: '1m', target: 10 },  // Baseline
    { duration: '1m', target: 500 }, // Spike
    { duration: '3m', target: 500 }, // Sustained spike
    { duration: '1m', target: 10 },  // Recovery
  ],
};
```
When to Use:
- Sale events (Black Friday)
- Event-triggered traffic spikes
- Marketing campaign launches
- Emergency scenario planning
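Recovery time after the spike can be quantified by finding how long latency stays above its baseline once the load drops. A rough sketch (the `recoverySeconds` helper, baseline threshold, and per-second samples are all invented for illustration):

```javascript
// Given per-second p95 samples taken after the spike ends, find the
// time (in seconds) until latency first returns to the baseline.
function recoverySeconds(samples, baselineMs) {
  const idx = samples.findIndex(s => s.p95Ms <= baselineMs);
  return idx === -1 ? Infinity : samples[idx].tSec;
}

// Made-up post-spike latency samples.
const afterSpike = [
  { tSec: 0,  p95Ms: 1800 },
  { tSec: 10, p95Ms: 900 },
  { tSec: 20, p95Ms: 420 },
  { tSec: 30, p95Ms: 210 },
];

console.log(recoverySeconds(afterSpike, 500)); // → 20
```

A result of `Infinity` would mean the system never recovered within the observed window, which is itself a useful finding.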
A ramp-up (scalability) test gradually increases load to verify system scaling capabilities.
Purpose:
- Validate auto-scaling configurations
- Test gradual system scaling
- Verify resource allocation
- Monitor performance during scaling
Key Characteristics:
- Gradual load increase
- Step-wise progression
- Monitoring of scaling events
Example Configuration:
```javascript
export const options = {
  stages: [
    { duration: '3m', target: 10 },
    { duration: '5m', target: 50 },
    { duration: '7m', target: 100 },
    { duration: '2m', target: 0 },
  ],
};
```
When to Use:
- Auto-scaling validation
- Cloud deployment testing
- Resource allocation testing
- Gradual system warm-up
A soak (endurance) test is a long-running test that identifies performance degradation over time.
Purpose:
- Find memory leaks
- Identify resource depletion
- Verify system stability
- Test data handling over time
Key Characteristics:
- Extended duration (hours/days)
- Consistent moderate load
- Resource monitoring
Example Configuration:
```javascript
export const options = {
  vus: 50,
  duration: '6h',
  thresholds: {
    http_req_duration: ['p(95)<1000'],
    http_req_failed: ['rate<0.01'],
  },
};
```
When to Use:
- Pre-release validation
- Memory leak detection
- Database performance testing
- Long-term stability verification
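One common soak-test analysis is to fit a trend line to memory samples: a persistently positive slope suggests a leak. A sketch using ordinary least squares (the `slope` helper, the 0.1 MB/min cutoff, and the samples are invented for illustration):

```javascript
// Fit a least-squares line to memory samples taken during a soak test;
// the returned slope is in MB per minute.
function slope(points) { // points: [{ tMin, rssMb }]
  const n = points.length;
  const sx  = points.reduce((s, p) => s + p.tMin, 0);
  const sy  = points.reduce((s, p) => s + p.rssMb, 0);
  const sxy = points.reduce((s, p) => s + p.tMin * p.rssMb, 0);
  const sxx = points.reduce((s, p) => s + p.tMin * p.tMin, 0);
  return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}

// Made-up RSS samples over three hours of soak.
const samples = [
  { tMin: 0,   rssMb: 512 },
  { tMin: 60,  rssMb: 530 },
  { tMin: 120, rssMb: 551 },
  { tMin: 180, rssMb: 570 },
];

const mbPerMin = slope(samples);
console.log(mbPerMin > 0.1 ? 'possible leak' : 'stable'); // → possible leak
```

Over hours or days even a small positive slope compounds, which is exactly what a short load test cannot reveal.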
- Breakpoint Test - Identifies exact point where system fails under load
- Capacity Test - Determines maximum user load while maintaining SLAs
- Volume Test - Tests system with large amounts of data
- Isolation Test - Verifies performance of system components in isolation
- Configuration Test - Tests performance across different system configurations
- Failover Test - Validates system behavior during component failures
- Compliance Test - Ensures performance meets regulatory requirements
- Baseline Test - Establishes reference point for future comparisons
- Smoke Test - Quick test to verify basic performance functionality
- Recovery Test - Tests system recovery after failures under load
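As an illustration, a smoke test in the same k6 configuration style as the earlier examples can be very small; the exact values here are only a suggestion:

```javascript
// Minimal k6 smoke-test configuration: a single VU for one minute,
// just to confirm the script and system respond at all.
export const options = {
  vus: 1,
  duration: '1m',
  thresholds: {
    http_req_failed: ['rate<0.01'],
  },
};
```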
- Test Environment:
  - Use production-like environment
  - Clear test data between runs
  - Monitor system resources
- Test Data:
  - Use realistic data sets
  - Maintain data consistency
  - Clean up test data
- Monitoring:
  - Track all relevant metrics
  - Monitor system resources
  - Log unusual behavior
- Thresholds:
  - Set realistic SLAs
  - Monitor error rates
  - Track response times
Key Metrics:
- Response Time (p95, p99)
- Error Rate
- Throughput (RPS)
- CPU Usage
- Memory Usage
- Network I/O
- Database Connections
- Cache Hit Ratio
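Some of these metrics are derived rather than measured directly. A small sketch of deriving throughput and cache hit ratio from raw counters (all numbers invented):

```javascript
// Derive two of the metrics above from raw counters.
const windowSec = 60;      // measurement window
const requests = 9000;     // requests completed in the window
const cacheHits = 8400;    // cache lookups that hit
const cacheLookups = 9000; // total cache lookups

const rps = requests / windowSec;          // throughput (requests per second)
const hitRatio = cacheHits / cacheLookups; // cache effectiveness

console.log(`RPS: ${rps}`);                       // → RPS: 150
console.log(`Hit ratio: ${hitRatio.toFixed(2)}`); // → Hit ratio: 0.93
```

Tracking derived metrics over a test run, not just at the end, is what exposes degradation trends.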