
Conversation


@enyst enyst commented Dec 29, 2025

Updates the VS Code client (agent-sdk-ts RemoteConversation) to support runtime LLM switching against the Python agent-server.

  • On profile changes, POSTs to /api/conversations/{id}/llm with an inline llm payload (TS profiles are local-only); a sketch of this call appears below.
  • Remote agent-server E2E: creates two profiles and asserts server-side LLM updates + subsequent LLM request hits the mock server.
  • E2E runner builds @openhands/agent-sdk-ts before compiling the extension to avoid stale dist artifacts.
  • Docs updated to reflect the new server API.

Restore across restart is covered by "[agent-sdk-j2s] agent-server runtime LLM switching" (OpenHands/software-agent-sdk#1544, wsproto tests + remote example).
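
For orientation, here is a minimal client-side sketch of what that POST could look like. The endpoint path comes from this PR; the payload shape, field names, and helper function are illustrative assumptions, not the actual RemoteConversation implementation.

    // Hypothetical sketch (TypeScript): pushing a profile change to the agent-server.
    // Assumes Node 18+ (global fetch); the LlmConfig fields are illustrative.
    interface LlmConfig {
      model: string;
      base_url?: string;
      api_key?: string;
    }

    async function updateConversationLlm(
      serverUrl: string,
      conversationId: string,
      llm: LlmConfig,
    ): Promise<void> {
      const res = await fetch(`${serverUrl}/api/conversations/${conversationId}/llm`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        // Inline llm payload: profiles stay local to the TS client; only the
        // expanded LLM settings are sent to the server.
        body: JSON.stringify({ llm }),
      });
      if (!res.ok) {
        throw new Error(`LLM update failed: ${res.status} ${res.statusText}`);
      }
    }

A call along these lines would be issued whenever the active profile changes, with the server applying the new LLM to the running conversation.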

Pairs with OpenHands/software-agent-sdk#1544.

- RemoteConversation posts /api/conversations/{id}/llm on profile changes
- Add remote agent-server E2E coverage for switching + restore
- Update docs to reflect new server API

coderabbitai bot commented Dec 29, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.



@gemini-code-assist
Contributor

Summary of Changes

Hello @enyst, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the VS Code client's ability to manage Large Language Model (LLM) configurations for remote conversations. It introduces a mechanism where changes to the user's selected LLM profile are communicated to the remote agent-server, ensuring that the active conversation utilizes the desired LLM settings. This improves flexibility and control over the LLM used during remote interactions.

Highlights

  • Remote LLM Switching: The VS Code client (agent-sdk-ts) now supports runtime LLM switching for remote conversations, allowing users to change LLM profiles dynamically.
  • API Integration: When an LLM profile is changed in the client, a POST /api/conversations/{id}/llm request is sent to the Python agent-server with the expanded LLM payload to update the active conversation's LLM configuration.
  • Documentation Updates: Relevant documentation (PRD.md, llm_profiles.md, settings_prd.md) has been updated to reflect the new behavior and API for remote LLM profile management.
  • End-to-End Testing: New end-to-end tests have been added to verify the functionality of LLM switching and conversation restore in a remote agent-server environment.
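
To make the E2E expectation concrete, here is an illustrative sketch of the kind of assertion such a test might make against the mock LLM server. The harness names (switchProfile, sendMessage, the mock server's request log, the model names) are assumptions for illustration, not the actual test code in this PR.

    // Hypothetical E2E-style check (TypeScript): after switching profiles, the next
    // LLM request should reach the mock server with the newly selected model.
    import { strict as assert } from 'node:assert';

    async function verifyLlmSwitch(
      mock: { requests: Array<{ model: string }> },    // requests captured by the mock LLM server (assumed)
      switchProfile: (name: string) => Promise<void>,  // triggers POST /api/conversations/{id}/llm (assumed)
      sendMessage: (text: string) => Promise<void>,    // drives one agent step (assumed)
    ): Promise<void> {
      await switchProfile('profile-b');
      await sendMessage('hello');

      const last = mock.requests.at(-1);
      assert.ok(last, 'mock LLM server received no requests');
      assert.equal(last.model, 'mock-model-b'); // model configured in profile-b
    }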


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for runtime LLM switching for remote conversations, which is a great enhancement. The implementation is robust, with a well-designed queuing mechanism to handle rapid setting changes and ensure state is synchronized before agent actions. The addition of comprehensive unit and end-to-end tests is excellent and covers the new functionality and its edge cases thoroughly. The documentation has also been updated to reflect these new capabilities.

I've found one high-severity issue in the error handling of the LLM update logic that could lead to an infinite loop on network failure. I've provided suggestions to fix this. Besides that, the changes are of high quality.

      }
      this.lastAppliedLlmSignature = signature;
    }).catch((err) => {
      this.emit('error', err instanceof Error ? err : new Error(String(err)));

high

The current implementation of the .catch block can lead to an infinite loop of failing requests. When fetchWithTimeout fails, the error is emitted, but the promise chain resolves. This causes await this.llmUpdateInFlight to complete successfully. The subsequent check this.desiredLlmSignature !== this.lastAppliedLlmSignature will still be true (since lastAppliedLlmSignature wasn't updated), triggering a recursive call to flushRemoteLlmUpdate, which will fail again, and so on.

To fix this, the promise chain should propagate the failure. You can achieve this by re-throwing the error in the .catch block. This will cause await this.llmUpdateInFlight to throw, breaking the loop.

Suggested change (diff):

    - this.emit('error', err instanceof Error ? err : new Error(String(err)));
    + this.emit('error', err instanceof Error ? err : new Error(String(err)));
    + throw err;

    if (!signature) return;
    if (signature === this.lastAppliedLlmSignature) return;
    if (this.llmUpdateInFlight) return;
    void this.flushRemoteLlmUpdate();

high

Following the recommended change to propagate errors from flushRemoteLlmUpdate, this fire-and-forget call will result in an unhandled promise rejection if the flush fails. To prevent this, you should add a .catch() to the promise chain to swallow the already-emitted error.

Suggested change (diff):

    - void this.flushRemoteLlmUpdate();
    + this.flushRemoteLlmUpdate().catch(() => { /* error is emitted by flush, swallow to prevent unhandled rejection */ });
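
Taken together, the two suggestions amount to: propagate the failure out of the flush so awaiting callers stop retrying, and swallow it only at the outer fire-and-forget call site. Below is a hypothetical sketch of that pattern; only the names visible in the excerpts above (desiredLlmSignature, lastAppliedLlmSignature, llmUpdateInFlight, flushRemoteLlmUpdate) come from the PR, everything else is an illustrative assumption, not the PR's actual code.

    // Hypothetical sketch (TypeScript) of the queue/flush pattern with both fixes applied.
    class LlmUpdateQueue {
      private desiredLlmSignature: string | null = null;
      private lastAppliedLlmSignature: string | null = null;
      private llmUpdateInFlight: Promise<void> | null = null;

      constructor(
        private postLlmUpdate: (signature: string) => Promise<void>, // e.g. a fetch-with-timeout wrapper (assumed)
        private emitError: (err: Error) => void,                     // stand-in for this.emit('error', ...)
      ) {}

      requestUpdate(signature: string): void {
        this.desiredLlmSignature = signature;
        if (!signature) return;
        if (signature === this.lastAppliedLlmSignature) return;
        if (this.llmUpdateInFlight) return;
        // Second suggestion: swallow here, because flushRemoteLlmUpdate already
        // emitted the error and now re-throws it (first suggestion); without this
        // catch the fire-and-forget call would surface an unhandled rejection.
        this.flushRemoteLlmUpdate().catch(() => {});
      }

      private async flushRemoteLlmUpdate(): Promise<void> {
        const signature = this.desiredLlmSignature;
        if (!signature) return;

        this.llmUpdateInFlight = this.postLlmUpdate(signature)
          .then(() => {
            this.lastAppliedLlmSignature = signature;
          })
          .catch((err) => {
            this.emitError(err instanceof Error ? err : new Error(String(err)));
            throw err; // first suggestion: propagate so the retry loop below breaks
          });

        try {
          await this.llmUpdateInFlight;
        } finally {
          this.llmUpdateInFlight = null;
        }

        // Re-flush only when a newer signature arrived while the request was in
        // flight; on failure the throw above skips this check and the loop ends.
        if (this.desiredLlmSignature !== this.lastAppliedLlmSignature) {
          await this.flushRemoteLlmUpdate();
        }
      }
    }

The key property is that a failed POST surfaces exactly one error event and does not retry on its own; a later profile change simply queues a new signature and triggers a fresh flush.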

@github-actions

🔧 VSCode Extension Built Successfully

• File: openhands-tab-0.0.4.vsix (2262 KB)
• Download: https://github.com/enyst/OpenHands-Tab/actions/runs/20585606424

To install:

  1. Download the artifact from the run page above
  2. VS Code → Command Palette → "Extensions: Install from VSIX..."
  3. Select the downloaded .vsix

Built with Node 22. Commit 81fac5b.


openhands-ai bot commented Dec 30, 2025

Looks like there are a few issues preventing this PR from being merged!

  • GitHub Actions are failing:
    • E2E (Agent-Server)

If you'd like me to help, just leave a comment, like

@OpenHands please fix the failing actions on PR #625 at branch `agent-server-llm-switch`

Feel free to include any additional details that might help me get this PR into a better state.
