Performance and Load Test Summary for https://coretaxdjp.pajak.go.id 🚀📊

This document provides a comprehensive analysis of the performance and load tests conducted against the target URL https://coretaxdjp.pajak.go.id. The tests were performed at three request rates: 100 requests/s, 500 requests/s, and 1000 requests/s, each running for 30 seconds. Additionally, a comparison is provided using the go-wrk tool with 1000 goroutines over the same duration.

Graph and table visualizations of the results are included in the repository.

Visualizations and Media 🎥📷

To better understand the test results, you can view the following media files:

  • Test 1: 100 Requests/s — Test Result · Watch Video
  • Test 2: 500 Requests/s — Test Result · Watch Video
  • Test 3: 1000 Requests/s — Test Result · Watch Video
  • Test 4: go-wrk, 30 s — Test Result · Watch Video

Test 1: 100 Requests/s 📈

| Metric | Value |
| --- | --- |
| 📤 Total Requests | 3000 |
| Success Rate | 66.00% |
| Failed Requests | 1020 |
| Throughput | 38.80 req/s |
| Avg. Latency | 9.72 s |
| 🐢 Max. Latency | 22.12 s |
| ⚠️ Errors | Connection timeout |

Detailed Analysis:

  1. Initial Performance:

    • The system handled the first three segments (900 requests) with a 100% success rate, indicating it can manage low to moderate traffic effectively.
    • Latency was comparatively low during this phase; the average across the full run was 9.72 s.
  2. Performance Degradation:

    • After the initial phase, the success rate dropped significantly, with 58 failures in the fourth segment and 290 failures in the tenth segment.
    • This suggests that the system exhausts its resources (e.g., CPU, memory, or database connections) over time when subjected to sustained load.
  3. Failures and Errors:

    • The primary error was connection timeouts, indicating the server was unable to respond within the expected time frame.
    • The overall failure rate of 34% approached the configured fail threshold of 35%.

Test 2: 500 Requests/s 📈📈

| Metric | Value |
| --- | --- |
| 📤 Total Requests | 14589 |
| Success Rate | 13.52% |
| Failed Requests | 12617 |
| Throughput | 35.70 req/s |
| Avg. Latency | 20.32 s |
| 🐢 Max. Latency | 29.79 s |
| ⚠️ Errors | Connection timeout |

Detailed Analysis:

  1. Initial Performance:

    • The first segment had a 100% success rate, but this dropped sharply in subsequent segments.
    • By the third segment, all requests failed, indicating the system was overwhelmed by the load.
  2. Performance Bottlenecks:

    • The average latency increased to 20.32s, and the 95th percentile latency reached 26.29s, showing severe delays in processing requests.
    • The system likely hit hardware or software limits, such as maxed-out CPU usage, memory exhaustion, or database connection limits.
  3. Failures and Errors:

    • The majority of errors were connection timeouts, suggesting the server was unable to handle the incoming traffic volume.
    • The success rate of 13.52% is far below acceptable levels for a production system.

Test 3: 1000 Requests/s 📈📈📈

| Metric | Value |
| --- | --- |
| 📤 Total Requests | 29976 |
| Success Rate | 6.55% |
| Failed Requests | 28014 |
| Throughput | 32.59 req/s |
| Avg. Latency | 29.02 s |
| 🐢 Max. Latency | 46.67 s |
| ⚠️ Errors | Connection timeout, Context deadline exceeded |

Detailed Analysis:

  1. System Collapse:

    • The system completely failed under this load, with a success rate of only 6.55%.
    • After the first segment, all requests failed, indicating the system was unable to scale to handle the traffic.
  2. Latency and Throughput:

    • The average latency skyrocketed to 29.02s, with the 99th percentile reaching 41.49s.
    • Throughput dropped to 32.59 requests/s, far below the target of 1000 requests/s.
  3. Failures and Errors:

    • The primary errors were connection timeouts and context deadline exceeded, indicating the server was unresponsive under high load.
    • The system is not designed to handle such high traffic volumes.

Comparison with go-wrk 🛠️

Results:

  • Successful Requests: 7 (extremely low) ❌
  • Requests/sec: 49.30
  • Error Count: 1993
  • Avg. Latency: 20.28 s ⏳
  • Fastest Request: 20.28 s
  • Slowest Request: 20.30 s

Detailed Analysis:

  1. Severe Performance Issues:

    • The go-wrk test revealed that the system is unresponsive under high concurrency, with only 7 successful requests out of thousands attempted.
    • The error rate was extremely high, with 1993 errors due to connection timeouts.
  2. Latency and Throughput:

    • The average latency was 20.28s, and the throughput was only 49.30 requests/s, far below the expected performance.
    • This confirms the system’s inability to scale under high concurrency.

Overall Summary 📝

Key Findings:

  1. At 100 requests/s:
    • The system performs moderately but begins to fail as the test progresses, indicating resource exhaustion.
  2. At 500 requests/s:
    • The system struggles significantly, with a high failure rate and increased latency, showing it is not scalable for this level of traffic.
  3. At 1000 requests/s:
    • The system fails catastrophically, unable to handle the load, with a success rate of only 6.55%.
  4. go-wrk Test:
    • Confirms the system’s inability to scale under high concurrency, with only 7 successful requests and 1993 errors.

Recommendations 🛠️:

  1. Optimize Server Resources:
    • Increase server capacity (e.g., CPU, memory, and database connections) to handle higher loads.
    • Implement auto-scaling to dynamically adjust resources based on traffic.
  2. Implement Rate Limiting:
    • Prevent overloading by limiting the number of requests per client or IP address.
  3. Improve Error Handling:
    • Ensure the system gracefully handles timeouts and connection failures, providing meaningful error messages to users.
  4. Conduct Further Analysis:
    • Investigate bottlenecks (e.g., database queries, network latency, or application logic) causing performance degradation.
    • Use profiling tools to identify and optimize slow components.
  5. Load Balancing:
    • Distribute traffic across multiple servers using a load balancer to improve scalability and reliability.
  6. Caching:
    • Implement caching mechanisms (e.g., Redis or CDN) to reduce the load on the server and improve response times.

Conclusion: The system is not ready for high traffic loads. Immediate improvements are required to ensure reliability and scalability. 🚨🔧
