
[Perf]: Establish baseline benchmarks and validate PRD latency targets #232

@Vraj1234

Description

Describe the performance problem

The PRD defines latency SLAs that have never been measured:

  • Standard API queries: < 1s (95th percentile)
  • Complex spatial queries: < 2s (95th percentile)
  • Cloud Run cold start: < 1s

No benchmark suite exists, so there is no way to know whether these targets are currently met or violated.

Steps / workload to reproduce

  1. Stand up the staging stack
  2. Run a load test (e.g., Locust or k6) against representative endpoints:
    • GET /v1/feedstocks/usda/census/crops/{crop}/geoid/{geoid}/parameters/{parameter}
    • GET /v1/feedstocks/analysis/resources/{resource}/geoid/{geoid}/parameters
    • All discovery endpoints
  3. Record p50/p95/p99 latency and error rate
  4. Test cold-start latency by letting the Cloud Run instance scale to zero, then sending a request
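A minimal harness for steps 2-3 could look like the sketch below. The base URL and the concrete `{crop}`/`{geoid}`/`{parameter}` values are assumptions for illustration only; an actual run would use Locust or k6 against the staging stack:

```python
# Minimal latency-measurement sketch for the workload above.
# BASE_URL and the concrete path parameters are placeholders,
# not values taken from this issue.
import time
import urllib.request
from statistics import quantiles

BASE_URL = "https://staging.example.com"  # assumption: staging host

def timed_get(path):
    """Issue one GET and return (latency_seconds, status_code)."""
    start = time.perf_counter()
    with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
        resp.read()
        return time.perf_counter() - start, resp.status

def summarize(latencies):
    """Step 3: p50/p95/p99 over a list of latencies in seconds."""
    cuts = quantiles(sorted(latencies), n=100)  # 99 percentile cut points
    return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}

# Demonstration on synthetic latencies (no network needed); a real run would
# collect them via timed_get() in a worker pool against each endpoint.
synthetic = [i / 100 for i in range(1, 101)]  # 0.01s .. 1.00s
print(summarize(synthetic))
```

The `summarize()` output maps directly onto the targets above, and the cold-start check in step 4 reduces to a single `timed_get()` issued after the Cloud Run instance has scaled to zero.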

Software version

Current staging HEAD (main branch)

Labels

enhancement (New feature or request)
