
Conversation

@enyst enyst commented Sep 1, 2025

I am OpenHands-GPT-5, an AI agent. This PR streamlines the ported config tests per review feedback in OpenHands#49. It focuses on behavior (defaults, env side effects, essential validations, merge concatenation), removes redundant/brittle scenarios and the tests of V0 precedence/overwrite semantics, and documents desired stricter MCP validation via xfail (so we don’t enshrine current gaps).

Highlights:

  • LLM: defaults, OpenRouter env wiring, Azure api_version, AWS env, reasoning_effort default, SecretStr safety, extra fields forbidden, minimal round-trip, small boolean/model spot-checks (see the sketch after this list).
  • MCP: lean merge tests (concat + originals preserved), validate that merged duplicates error, xfail tests recording desired stricter validation for sHTTP and cross-type duplicates.
  • Tooling: pre-commit (ruff + pyright) passes. Config tests run fast.
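
For illustration, the behavior-focused style in the LLM bullet looks roughly like this. This is a minimal sketch, assuming a pydantic-based LLMConfig exported from openhands.core.config with extra fields forbidden; the import path, field names, and test names are illustrative, not the exact code in this PR:

```python
# Minimal sketch; LLMConfig import path and field names are assumptions.
import pytest
from pydantic import SecretStr, ValidationError

from openhands.core.config import LLMConfig  # assumed import path


def test_api_key_does_not_leak_in_repr():
    config = LLMConfig(model="gpt-4o", api_key=SecretStr("sk-secret"))
    # Behavior over field-by-field assignment: the secret must stay masked.
    assert "sk-secret" not in repr(config)
    assert config.model == "gpt-4o"


def test_extra_fields_forbidden():
    with pytest.raises(ValidationError):
        LLMConfig(model="gpt-4o", not_a_real_field=True)
```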

Notes:

  • Config-only suite: 51 passed, 3 xfailed in ~1.5s. The full-repo run hit a MemoryError in pytest teardown/capture in this environment; it looks resource-bound and unrelated to these changes. CI should be fine; if not, we can trim coverage/plugins or split jobs.

Next:

  • If/when we implement stricter MCP validation (unifying duplicate checks across sse + shttp and cross-type), we will remove the xfail markers and convert those tests to normal passing ones in a separate PR.
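
For reference, the xfail pattern is roughly the following. This is a minimal sketch: the MCPConfig import path, field names, and the exception type are illustrative assumptions; strict=True is what keeps the test from silently XPASSing once stricter validation lands.

```python
# Hypothetical sketch of the xfail pattern; MCPConfig, its fields, and the
# raised exception type are assumptions.
import pytest

from openhands.core.config import MCPConfig  # assumed import path


@pytest.mark.xfail(
    reason="desired: duplicate server URLs across sse and shttp should be rejected",
    strict=True,  # an unexpected pass fails the run, prompting removal of the marker
)
def test_cross_type_duplicates_rejected():
    with pytest.raises(ValueError):
        MCPConfig(
            sse_servers=[{"url": "http://localhost:3000"}],
            shttp_servers=[{"url": "http://localhost:3000"}],
        )
```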

enyst and others added 3 commits August 31, 2025 23:57
…and merge behavior; add xfails to document desired MCP validation

- Trim LLM tests (remove optional-None boilerplate)
- Keep defaults and side-effect coverage (OpenRouter, Azure, AWS, reasoning)
- Reduce MCP merge tests to minimal concatenation + preservation
- Add xfail tests for desired stricter duplicate detection (SHTTP, cross-type, post-merge)

Co-authored-by: openhands <openhands@all-hands.dev>
…on’t XPASS if implementation improves\n\nCo-authored-by: openhands <openhands@all-hands.dev>
…eset setup.sh mode to 644)

Co-authored-by: openhands <openhands@all-hands.dev>

coderabbitai bot commented Sep 1, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@enyst enyst marked this pull request as ready for review September 1, 2025 23:57
@enyst enyst requested a review from Copilot September 1, 2025 23:57

Copilot AI left a comment


Pull Request Overview

This PR streamlines the ported config tests per review feedback by focusing on behavior-driven coverage while removing redundant and brittle scenarios. It documents desired stricter MCP validation through xfail markers to avoid enshrining current gaps in validation logic.

  • Simplified test suites to focus on essential behavior validation (defaults, environment wiring, merge functionality)
  • Removed verbose test cases that provide minimal additional coverage over core scenarios
  • Added xfail markers to document desired MCP validation improvements for future implementation

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

File | Description
openhands/core/tests/config/test_mcp_settings_merge.py | Streamlined MCP merge tests to focus on concatenation behavior and preserve original configs; added an xfail test for desired duplicate validation
openhands/core/tests/config/test_mcp_config.py | Consolidated MCP config tests using parametrization, simplified validation tests, and added xfail markers for desired cross-type and shttp duplicate detection
openhands/core/tests/config/test_llm_config.py | Reduced LLM config tests to essential behavior validation, removed redundant field assignment tests, and kept minimal round-trip and validation coverage
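
For test_mcp_settings_merge.py, the concatenation-and-preservation check is roughly of the following shape. This is a minimal sketch: the merge helper and field names are hypothetical illustrations, not the actual API under test.

```python
# Hypothetical sketch; the merge helper and field names are assumptions.
from openhands.core.config import MCPConfig  # assumed import path


def test_merge_concatenates_and_preserves_originals():
    base = MCPConfig(sse_servers=[{"url": "http://a:3000"}])
    extra = MCPConfig(sse_servers=[{"url": "http://b:3000"}])

    merged = base.merge(extra)  # hypothetical merge helper

    assert len(merged.sse_servers) == 2  # concatenation
    assert len(base.sse_servers) == 1    # originals preserved
    assert len(extra.sse_servers) == 1
```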


enyst and others added 2 commits September 2, 2025 01:59
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
… validation-on-merge test

Co-authored-by: openhands <openhands@all-hands.dev>