Add test coverage for agents router endpoints#370
reo0603 wants to merge 3 commits into Light-Heart-Labs:main
Conversation
Lightheartdevs left a comment
Review: Add test coverage for agents router endpoints
The auth-enforcement tests (401 without token) are solid and follow the established pattern — no issues there.
However, the authenticated endpoint tests have a significant gap: they only exercise the default/zeroed state of the global singletons (agent_metrics, cluster_status, throughput in agent_monitor.py), not any real behavior.
Specific issues
1. test_agents_cluster_authenticated tests nothing beyond defaults (test_routers.py, new lines ~340-347)
The /api/agents/cluster endpoint calls await cluster_status.refresh(), which shells out to curl to hit a cluster proxy. In the test environment, curl either fails or the proxy isn't running, so every exception is silently caught, and to_dict() returns the initial {"nodes": [], "total_gpus": 0, "active_gpus": 0, "failover_ready": False}. The test asserts those keys exist — which they always will, even if the endpoint were completely broken.
Suggestion: mock asyncio.create_subprocess_exec to simulate a healthy cluster response (e.g., two nodes, one healthy) and assert the parsed values (active_gpus == 1, failover_ready == False). Then add a second case with two healthy nodes and assert failover_ready == True. That actually exercises the parsing logic in ClusterStatus.refresh().
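A minimal sketch of the mocking pattern suggested above. Since `ClusterStatus.refresh()` isn't shown in this PR, `refresh_via_curl` below is a hypothetical stand-in for it (shelling out to curl and parsing JSON, per the review's description); the proxy URL and JSON shape are assumptions for illustration only.

```python
# Sketch: patch asyncio.create_subprocess_exec so the refresh logic sees a
# canned "healthy cluster" response instead of actually running curl.
# refresh_via_curl and the payload shape are stand-ins, not the real
# agent_monitor.py API.
import asyncio
import json
from unittest import mock

CANNED = json.dumps({
    "nodes": [
        {"name": "node-a", "healthy": True, "gpus": 1},
        {"name": "node-b", "healthy": False, "gpus": 1},
    ]
}).encode()

class FakeProc:
    """Mimics the object returned by asyncio.create_subprocess_exec."""
    returncode = 0
    async def communicate(self):
        return CANNED, b""

async def fake_exec(*args, **kwargs):
    return FakeProc()

async def refresh_via_curl():
    # Hypothetical stand-in for ClusterStatus.refresh(): shell out to curl,
    # parse the JSON, and derive the summary fields.
    proc = await asyncio.create_subprocess_exec(
        "curl", "-s", "http://cluster-proxy/status",
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, _ = await proc.communicate()
    data = json.loads(out)
    active = sum(n["gpus"] for n in data["nodes"] if n["healthy"])
    healthy = sum(1 for n in data["nodes"] if n["healthy"])
    return {"active_gpus": active, "failover_ready": healthy >= 2}

with mock.patch("asyncio.create_subprocess_exec", side_effect=fake_exec):
    result = asyncio.run(refresh_via_curl())
```

With one healthy node, `result` carries `active_gpus == 1` and `failover_ready == False`, so the assertion fails if the parsing is removed — unlike a keys-only check.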
2. test_agents_throughput_authenticated only sees empty history (test_routers.py, new lines ~350-358)
throughput.get_stats() returns {"current": 0, "average": 0, "peak": 0, "history": []} when data_points is empty. The test asserts those keys exist but never verifies the endpoint works with actual data. Since ThroughputMetrics is a global singleton, you could call throughput.add_sample(42.0) before the request and assert current == 42.0, peak == 42.0, etc. This would test real behavior with minimal setup.
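The seed-then-assert pattern can be sketched like this. The `ThroughputMetrics` class here is a minimal stand-in reconstructed from the behavior described above (the real one lives in agent_monitor.py and its internals are assumptions); in the actual test you would seed the existing global singleton before issuing the request.

```python
# Minimal stand-in for the ThroughputMetrics singleton described in the
# review; only the add_sample/get_stats surface is assumed to match.
class ThroughputMetrics:
    def __init__(self):
        self.data_points = []

    def add_sample(self, value):
        self.data_points.append(value)

    def get_stats(self):
        if not self.data_points:
            return {"current": 0, "average": 0, "peak": 0, "history": []}
        return {
            "current": self.data_points[-1],
            "average": sum(self.data_points) / len(self.data_points),
            "peak": max(self.data_points),
            "history": list(self.data_points),
        }

throughput = ThroughputMetrics()
throughput.add_sample(42.0)  # seed before the request in the real test
stats = throughput.get_stats()
```

Asserting `stats["current"] == 42.0` and `stats["peak"] == 42.0` verifies real aggregation, whereas the current test passes on the all-zero defaults.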
3. test_agents_metrics_authenticated — same default-state issue (test_routers.py, new lines ~334-339)
get_full_agent_metrics() composes the other two plus agent_metrics.to_dict(). Since all are at defaults, this test only confirms the response shape, not any meaningful aggregation. At minimum, seeding throughput and agent_metrics with non-default values and checking the composed output would add value.
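A sketch of what checking the composed output could look like. The composition below is hypothetical (the real `get_full_agent_metrics()` is only described, not shown here); the point is to assert that seeded, non-default values survive into the merged response.

```python
# Hypothetical composition mirroring what the review says
# get_full_agent_metrics() does: merge agent_metrics.to_dict() with the
# cluster and throughput data. All names and shapes are stand-ins.
def get_full_agent_metrics(agent_dict, cluster_dict, throughput_stats):
    return {**agent_dict, "cluster": cluster_dict, "throughput": throughput_stats}

composed = get_full_agent_metrics(
    {"tasks_completed": 3},           # seeded agent_metrics, non-default
    {"active_gpus": 1},               # mocked cluster state
    {"current": 42.0, "peak": 42.0},  # seeded throughput
)

# Assert the seeded values survive composition, not just that keys exist.
assert composed["tasks_completed"] == 3
assert composed["cluster"]["active_gpus"] == 1
assert composed["throughput"]["current"] == 42.0
```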
4. Missing coverage for /api/agents/metrics.html
The router has 4 endpoints but only 3 are tested. The metrics.html endpoint renders an HTML fragment with interpolated values and html.escape() for XSS safety — that escaping logic is worth testing (e.g., inject a <script> tag into a metric value and assert it's escaped in the response).
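The escaping check can be sketched with the stdlib alone. `render_metric_row` is a stand-in for the fragment builder behind metrics.html (its real name and markup are not shown in this PR); in the actual test you would seed a metric with the payload and assert on the endpoint's response body.

```python
# Sketch of the XSS check: inject a <script> payload into a metric value
# and confirm html.escape() neutralizes it. render_metric_row is a
# hypothetical stand-in for the router's HTML fragment rendering.
import html

def render_metric_row(name, value):
    return (
        f"<tr><td>{html.escape(name)}</td>"
        f"<td>{html.escape(str(value))}</td></tr>"
    )

payload = "<script>alert(1)</script>"
fragment = render_metric_row(payload, 7)

assert "<script>" not in fragment           # raw tag must not appear
assert "&lt;script&gt;" in fragment          # escaped form must appear
```

If someone later drops the `html.escape()` call, this test fails immediately, which is exactly the regression the review wants guarded.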
Summary
The auth tests are good. The authenticated tests need to move beyond asserting default response shapes and actually exercise the endpoint logic with non-trivial inputs. As written, these tests would still pass even if the parsing, aggregation, or refresh logic were completely removed.
What's needed to get this merged:
Updated to address the review feedback:
1. Seeded real data in authenticated tests
2. Mocked cluster responses to test parsing logic
3. Added XSS escaping coverage

All tests now exercise real endpoint logic with non-trivial inputs.
Summary
Adds comprehensive test coverage for the agents router which was previously untested in test_routers.py.
Changes
Auth enforcement tests (401 without token):
Authenticated endpoint tests (200 with token):
Consistency
This brings agents router coverage in line with other routers (workflows, privacy, setup, features) which all have similar auth + authenticated test pairs.
The agents router had 4 endpoints but zero tests in test_routers.py. All other routers have test coverage.