@major (Contributor) commented Jan 27, 2026

Description

Add end-to-end test coverage for the rlsapi v1 /infer endpoint, which serves RHEL Lightspeed Command Line Assistant (CLA) clients.

This PR adds 7 high-value test scenarios covering:

  • Basic inference with minimal request (question only)
  • Inference with full context (systeminfo populated)
  • Authentication enforcement (401 for missing/empty auth)
  • Input validation (422 for empty/whitespace question)
  • Response structure validation (data.text, data.request_id)
  • Statelessness validation (unique request_ids)
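
For illustration, a minimal sketch of the kind of checks these scenarios make, assuming a plain HTTP client; the base URL and token below are hypothetical placeholders, while the endpoint path, expected status codes, and response fields come from the list above:

```python
import requests

BASE_URL = "http://localhost:8080"  # hypothetical test deployment
TOKEN = "test-token"                # hypothetical noop auth token

# Authentication enforcement: a request without credentials is rejected.
resp = requests.post(f"{BASE_URL}/v1/infer", json={"question": "How do I list files?"})
assert resp.status_code == 401

# Input validation: a whitespace-only question fails validation.
resp = requests.post(
    f"{BASE_URL}/v1/infer",
    json={"question": "   "},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
assert resp.status_code == 422

# Basic inference and response structure: data.text and data.request_id are set.
resp = requests.post(
    f"{BASE_URL}/v1/infer",
    json={"question": "How do I list files?"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
assert resp.status_code == 200
data = resp.json()["data"]
assert data["text"] and data["request_id"]
```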

Type of change

  • End-to-end test improvement

Tools used to create PR

  • Assisted-by: Claude
  • Generated by: N/A

Related Tickets & Documents

  • Implements LCORE-1223

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Summary by CodeRabbit

  • New Features

    • Added inference configuration with OpenAI as the default provider and gpt-4o-mini as the default model.
  • Tests

    • Expanded end-to-end coverage for the inference API (/infer): new feature scenarios for auth, request validation, response structure, and request_id uniqueness; added test step implementations and updated test suite listing.


coderabbitai bot commented Jan 27, 2026

Walkthrough

Adds an inference top-level config (defaults: default_provider: openai, default_model: gpt-4o-mini) to four e2e YAML configs and introduces new end-to-end BDD tests and step definitions for the rlsapi v1 /infer endpoint; also registers the new feature in the test list.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **E2E Configuration Files**<br>`tests/e2e/configuration/library-mode/lightspeed-stack-auth-noop-token.yaml`, `tests/e2e/configuration/library-mode/lightspeed-stack.yaml`, `tests/e2e/configuration/server-mode/lightspeed-stack-auth-noop-token.yaml`, `tests/e2e/configuration/server-mode/lightspeed-stack.yaml` | Added a top-level `inference` block with `default_provider: openai` and `default_model: gpt-4o-mini` (+3 lines each; see the sketch after this table). |
| **RLS API v1 Feature Tests**<br>`tests/e2e/features/rlsapi_v1.feature` | Added BDD scenarios exercising `/v1/infer`: auth checks (401), validation (422 for empty questions), response structure assertions, request_id uniqueness, and full-context requests (+89 lines). |
| **RLS API v1 Step Definitions**<br>`tests/e2e/features/steps/rlsapi_v1.py` | Added step implementations `check_rlsapi_response_structure(context)`, `store_rlsapi_request_id(context)`, and `check_rlsapi_request_id_different(context)` for response validation and request_id storage/comparison (+65 lines). |
| **Test List**<br>`tests/e2e/test_list.txt` | Registered `features/rlsapi_v1.feature` in the test list (+1 line). |
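
Based on the change summary, the three-line block added to each configuration file presumably looks like this (a sketch inferred from the summary, not quoted from the diff):

```yaml
inference:
  default_provider: openai
  default_model: gpt-4o-mini
```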

Sequence Diagram(s)

(omitted — changes are configuration and tests; no new multi-component control flow implemented)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested labels

ok-to-test

Suggested reviewers

  • tisnik
  • radofuchs
🚥 Pre-merge checks: ✅ 3 passed

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title clearly and specifically describes the main change: adding end-to-end tests for the rlsapi v1 /infer endpoint, which aligns with all file modifications in the changeset. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is above the required threshold of 80.00%. |



@major force-pushed the LCORE-1223-rlsapi-v1-e2e-tests branch 2 times, most recently from 50ac720 to 19e699c (January 27, 2026 14:49)
@major marked this pull request as ready for review (January 27, 2026 15:13)
coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `tests/e2e/features/steps/rlsapi_v1.py` (around lines 30-56): in both `store_rlsapi_request_id` and `check_rlsapi_request_id_different`, tighten validation so that `request_id` is a non-empty string. In `store_rlsapi_request_id`, after extracting `response_json` and before assigning `context.stored_request_id`, assert that `response_json["data"]["request_id"]` is an instance of `str` and is not empty; in `check_rlsapi_request_id_different`, assert the same for `current_request_id` before comparing it to `context.stored_request_id`. Use clear assertion messages.
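
A minimal sketch of the suggested tightening, assuming the steps read the parsed response JSON from the behave context (step decorators and the exact response plumbing are omitted here; those details are hypothetical):

```python
def store_rlsapi_request_id(context):
    """Store the request_id from the current response for later comparison."""
    response_json = context.response.json()
    request_id = response_json["data"]["request_id"]
    assert isinstance(request_id, str), f"request_id is not a string: {request_id!r}"
    assert request_id, "request_id is empty"
    context.stored_request_id = request_id


def check_rlsapi_request_id_different(context):
    """Assert the current request_id differs from the stored one (statelessness)."""
    response_json = context.response.json()
    current_request_id = response_json["data"]["request_id"]
    assert isinstance(
        current_request_id, str
    ), f"request_id is not a string: {current_request_id!r}"
    assert current_request_id, "request_id is empty"
    assert current_request_id != context.stored_request_id, (
        f"Expected a unique request_id per request, got a repeat: {current_request_id}"
    )
```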
🧹 Nitpick comments (1)
tests/e2e/features/rlsapi_v1.feature (1)

65-73: Consider consolidating with the basic inference scenario.

This scenario validates the same assertions as "Basic inference with minimal request" (lines 8-16): both check for a 200 status code and valid response structure. Unless there's a specific reason to test with different question content, consider removing this scenario to reduce test duplication.

@major force-pushed the LCORE-1223-rlsapi-v1-e2e-tests branch from 19e699c to 1d14117 (January 27, 2026 15:26)
- Add rlsapi_v1.feature with 7 test scenarios
- Add rlsapi_v1.py step definitions for response validation
- Update test_list.txt to include new feature file

Implements LCORE-1223

Signed-off-by: Major Hayden <major@redhat.com>