This document provides a comprehensive analysis of the performance and load tests conducted on the target URL https://coretaxdjp.pajak.go.id. The tests were performed at three different request rates: 100 requests/s, 500 requests/s, and 1000 requests/s, each running for 30 seconds. Additionally, a comparison is provided using the go-wrk package with 1000 goroutines for the same duration.
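For reference, the fixed-rate portion of such a test can be reproduced with a short open-loop load generator in Go. The sketch below is illustrative only; the rate, client timeout, and success criterion are assumptions, not the harness actually used to produce these results:

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
	"time"
)

func main() {
	const (
		target   = "https://coretaxdjp.pajak.go.id" // target URL from this report
		rate     = 100                              // requests per second (100/500/1000 in the tests)
		duration = 30 * time.Second
	)

	client := &http.Client{Timeout: 20 * time.Second} // assumed client timeout
	var ok, failed int64

	// Open-loop: fire requests on a fixed schedule regardless of how
	// quickly (or whether) earlier requests complete.
	ticker := time.NewTicker(time.Second / rate)
	defer ticker.Stop()
	stop := time.After(duration)

loop:
	for {
		select {
		case <-stop:
			break loop
		case <-ticker.C:
			go func() {
				resp, err := client.Get(target)
				if err != nil {
					atomic.AddInt64(&failed, 1)
					return
				}
				resp.Body.Close()
				atomic.AddInt64(&ok, 1)
			}()
		}
	}

	// Give in-flight requests time to finish or time out before reporting.
	time.Sleep(client.Timeout)
	fmt.Printf("success=%d failed=%d\n",
		atomic.LoadInt64(&ok), atomic.LoadInt64(&failed))
}
```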
| Metric (100 req/s, 30 s) | Value |
|---|---|
| 📤 Total Requests | 3000 |
| ✅ Success Rate | 66.00% |
| ❌ Failed Requests | 1020 |
| ⚡ Throughput | 38.80 req/s |
| ⏳ Avg. Latency | 9.72 s |
| 🐢 Max. Latency | 22.12 s |
| ⚠️ Errors | Connection timeout |
**Initial Performance:**
- The system handled the first three segments (900 requests) with a 100% success rate, indicating it can manage low to moderate traffic effectively.
- Latency was lower during this phase than in later segments, though the run-wide average of 9.72 s already indicates slow responses.
**Performance Degradation:**
- After the initial phase, the success rate dropped significantly, with 58 failures in the fourth segment and 290 failures in the tenth segment.
- This suggests that the system exhausts its resources (e.g., CPU, memory, or database connections) over time when subjected to sustained load.
**Failures and Errors:**
- The primary error was connection timeouts, indicating the server was unable to respond within the expected time frame.
- The overall failure rate of 34% came within a point of the configured 35% fail threshold.
| Metric (500 req/s, 30 s) | Value |
|---|---|
| 📤 Total Requests | 14589 |
| ✅ Success Rate | 13.52% |
| ❌ Failed Requests | 12617 |
| ⚡ Throughput | 35.70 req/s |
| ⏳ Avg. Latency | 20.32 s |
| 🐢 Max. Latency | 29.79 s |
| ⚠️ Errors | Connection timeout |
**Initial Performance:**
- The first segment had a 100% success rate, but this dropped sharply in subsequent segments.
- By the third segment, all requests failed, indicating the system was overwhelmed by the load.
**Performance Bottlenecks:**
- The average latency increased to 20.32s, and the 95th percentile latency reached 26.29s, showing severe delays in processing requests.
- The system likely hit hardware or software limits, such as maxed-out CPU usage, memory exhaustion, or database connection limits.
**Failures and Errors:**
- The majority of errors were connection timeouts, suggesting the server was unable to handle the incoming traffic volume.
- The success rate of 13.52% is far below acceptable levels for a production system.
| Metric (1000 req/s, 30 s) | Value |
|---|---|
| 📤 Total Requests | 29976 |
| ✅ Success Rate | 6.55% |
| ❌ Failed Requests | 28014 |
| ⚡ Throughput | 32.59 req/s |
| ⏳ Avg. Latency | 29.02 s |
| 🐢 Max. Latency | 46.67 s |
| ⚠️ Errors | Connection timeout, context deadline exceeded |
**System Collapse:**
- The system completely failed under this load, with a success rate of only 6.55%.
- After the first segment, all requests failed, indicating the system was unable to scale to handle the traffic.
**Latency and Throughput:**
- The average latency skyrocketed to 29.02s, with the 99th percentile reaching 41.49s.
- Throughput dropped to 32.59 requests/s, far below the target of 1000 requests/s.
**Failures and Errors:**
- The primary errors were connection timeouts and context deadline exceeded, indicating the server was unresponsive under high load.
- The system is not designed to handle such high traffic volumes.
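Both error strings are standard client-side failure modes for Go-based tools: a request the server cannot answer within the client's timeout is aborted and reported as a timeout, surfacing as "context deadline exceeded" when the deadline comes from a context. A minimal sketch of how such failures are detected on the client side, with an assumed 20 s timeout:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"net/url"
	"time"
)

// Client-side view of the failure modes reported above.
// The 20 s timeout is an assumption for illustration only.
func main() {
	client := &http.Client{Timeout: 20 * time.Second}

	resp, err := client.Get("https://coretaxdjp.pajak.go.id")
	if err != nil {
		var urlErr *url.Error
		if errors.As(err, &urlErr) && urlErr.Timeout() {
			// Reported as a timeout / "context deadline exceeded" in the tests.
			fmt.Println("request timed out:", err)
		} else {
			fmt.Println("request failed:", err)
		}
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```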
**go-wrk results (1000 goroutines, 30 s):**

- Total Requests: 7 (extremely low) ❌
- Requests/sec: 49.30
- Error Count: 1993
- Latency (Average): 20.28 s ⏳
- Fastest Request: 20.28 s
- Slowest Request: 20.30 s
**Severe Performance Issues:**
- The go-wrk test revealed that the system is unresponsive under high concurrency, with only 7 successful requests out of thousands attempted.
- The error rate was extremely high, with 1993 errors due to connection timeouts.
**Latency and Throughput:**
- The average latency was 20.28s, and the throughput was only 49.30 requests/s, far below the expected performance.
- This confirms the system’s inability to scale under high concurrency.
Summary:

**At 100 requests/s:**
- The system performs moderately but begins to fail as the test progresses, indicating resource exhaustion.

**At 500 requests/s:**
- The system struggles significantly, with a high failure rate and increased latency, showing it is not scalable for this level of traffic.

**At 1000 requests/s:**
- The system fails catastrophically, unable to handle the load, with a success rate of only 6.55%.

**go-wrk test:**
- Confirms the system's inability to scale under high concurrency, with only 7 successful requests and 1993 errors.
Recommendations:

**Optimize Server Resources:**
- Increase server capacity (e.g., CPU, memory, and database connections) to handle higher loads.
- Implement auto-scaling to dynamically adjust resources based on traffic.
**Implement Rate Limiting:**
- Prevent overloading by limiting the number of requests per client or IP address.
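As a sketch of this recommendation, a per-client token bucket can be added as HTTP middleware with golang.org/x/time/rate; the 10 req/s rate and burst of 20 below are illustrative values, not figures derived from the test data:

```go
package main

import (
	"net"
	"net/http"
	"sync"

	"golang.org/x/time/rate"
)

var (
	mu       sync.Mutex
	limiters = make(map[string]*rate.Limiter)
)

// limiterFor returns (creating if needed) the token bucket for one client IP.
func limiterFor(ip string) *rate.Limiter {
	mu.Lock()
	defer mu.Unlock()
	l, ok := limiters[ip]
	if !ok {
		l = rate.NewLimiter(rate.Limit(10), 20) // 10 req/s, burst 20 (illustrative)
		limiters[ip] = l
	}
	return l
}

// rateLimit rejects requests above a client's allowance with HTTP 429.
func rateLimit(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			ip = r.RemoteAddr
		}
		if !limiterFor(ip).Allow() {
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", rateLimit(mux))
}
```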
**Improve Error Handling:**
- Ensure the system gracefully handles timeouts and connection failures, providing meaningful error messages to users.
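A sketch of what this can look like on a Go HTTP server: explicit server timeouts plus http.TimeoutHandler turn a hung handler into a fast, explicit 503 response instead of a silent connection timeout. All timeout values below are assumptions for illustration:

```go
package main

import (
	"net/http"
	"time"
)

func main() {
	app := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// ... real application logic ...
		w.Write([]byte("ok"))
	})

	srv := &http.Server{
		Addr:              ":8080",
		ReadHeaderTimeout: 5 * time.Second,
		ReadTimeout:       10 * time.Second,
		WriteTimeout:      15 * time.Second,
		IdleTimeout:       60 * time.Second,
		// TimeoutHandler replies 503 with a meaningful message once the
		// handler exceeds its budget, rather than leaving the client hanging.
		Handler: http.TimeoutHandler(app, 10*time.Second, "server busy, please retry"),
	}
	srv.ListenAndServe()
}
```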
**Conduct Further Analysis:**
- Investigate bottlenecks (e.g., database queries, network latency, or application logic) causing performance degradation.
- Use profiling tools to identify and optimize slow components.
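If the backend is a Go service (an assumption; the report does not state what the server runs), the built-in pprof profiler is the usual starting point; other stacks have equivalents such as async-profiler or py-spy:

```go
package main

import (
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

// Exposing pprof on an internal port makes it possible to capture CPU,
// heap, and goroutine profiles while a load test is running, e.g.:
//   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
func main() {
	go func() {
		// Serve pprof on a port that is not exposed publicly.
		http.ListenAndServe("localhost:6060", nil)
	}()

	// ... application server runs here ...
	select {}
}
```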
**Load Balancing:**
- Distribute traffic across multiple servers using a load balancer to improve scalability and reliability.
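As a minimal illustration of the idea, the sketch below round-robins requests across backend instances using Go's httputil.ReverseProxy; the backend addresses are placeholders, and a production deployment would more likely use nginx, HAProxy, or a cloud load balancer with health checks:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Placeholder backend instances.
	backends := []*url.URL{
		{Scheme: "http", Host: "10.0.0.1:8080"},
		{Scheme: "http", Host: "10.0.0.2:8080"},
		{Scheme: "http", Host: "10.0.0.3:8080"},
	}

	var next uint64
	proxy := &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			// Pick the next backend in round-robin order.
			b := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
			r.URL.Scheme = b.Scheme
			r.URL.Host = b.Host
		},
	}
	http.ListenAndServe(":80", proxy)
}
```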
**Caching:**
- Implement caching mechanisms (e.g., Redis or CDN) to reduce the load on the server and improve response times.
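A sketch of the cache-aside pattern with Redis via the go-redis client; the key scheme, 5-minute TTL, and fetchFromBackend helper are illustrative assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// fetchFromBackend is a hypothetical stand-in for the expensive lookup
// (database query, upstream API call) the cache is meant to shield.
func fetchFromBackend(key string) (string, error) {
	return "value-for-" + key, nil
}

// get checks Redis first and only hits the backend on a miss.
func get(ctx context.Context, rdb *redis.Client, key string) (string, error) {
	val, err := rdb.Get(ctx, "cache:"+key).Result()
	if err == nil {
		return val, nil // cache hit
	}
	if err != redis.Nil {
		return "", err // real Redis error, not just a miss
	}
	val, err = fetchFromBackend(key)
	if err != nil {
		return "", err
	}
	// Store with a TTL so stale entries expire on their own; cache write
	// failures are non-fatal here, since the fresh value is served anyway.
	_ = rdb.Set(ctx, "cache:"+key, val, 5*time.Minute).Err()
	return val, nil
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	v, err := get(ctx, rdb, "user:42")
	fmt.Println(v, err)
}
```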
**Conclusion:** The system is not ready for high traffic loads. Immediate improvements are required to ensure reliability and scalability. 🚨🔧