Fix context window displaying as 0 when resuming conversation #1590
base: main
Conversation
Hi! I started running the integration tests on your PR. You will receive a comment with the results shortly.
🧪 Integration Tests Results

Overall Success Rate: 96.0%

📁 Detailed Logs & Artifacts
Click the links below to access detailed agent/LLM logs showing the complete reasoning process for each model. On the GitHub Actions page, scroll down to the 'Artifacts' section to download the logs.

📊 Summary

📋 Detailed Results
- litellm_proxy_deepseek_deepseek_chat
  Skipped Tests:
- litellm_proxy_claude_sonnet_4_5_20250929
- litellm_proxy_vertex_ai_gemini_3_pro_preview
- litellm_proxy_moonshot_kimi_k2_thinking
  Skipped Tests:
- litellm_proxy_mistral_devstral_2512
  Skipped Tests:
  Failed Tests:
- litellm_proxy_gpt_5.1_codex_max
  Skipped Tests:
  Failed Tests:
Help me download the artifact, and can you make sure the context window displaying 0 is fixed?
I'm on it! xingyaoww can track my progress at all-hands.dev |
Summary

I downloaded the integration test artifacts from PR #1590 and identified the root cause of the issue.

Root Cause Analysis

The integration test results showed context_window being reported as 0.
Fix Applied

Modified the context construction so that context_window is always included:

```python
# Before: log_ctx was None when logging disabled
log_ctx = None
if self._telemetry.log_enabled:
    log_ctx = {..., "context_window": self.max_input_tokens or 0}

# After: Always include context_window for metrics tracking
log_ctx: dict[str, Any] = {"context_window": self.max_input_tokens or 0}
if self._telemetry.log_enabled:
    log_ctx.update({...})  # Add logging-specific fields
```

This change was applied in both affected locations.

Testing
Changes Pushed

The fix has been pushed to the branch.
The context_window was only being passed to telemetry when log_enabled was True. This caused context_window to always be 0 in production when logging was disabled.

Changes:
- Always pass context_window to telemetry regardless of log_enabled
- Add context_window field to integration test TokenUsageData schema

Co-authored-by: openhands <openhands@all-hands.dev>
208dd7a to 6ecbc5f
@OpenHands Do a /codereview-roasted on this PR diff. Understand the goal is actually:
Is this correct? Then read the review threads (inline threads) and answer the question there.
I'm on it! enyst can track my progress at all-hands.dev |
Final summary

What I did: I reviewed the PR diff.

Restated answer to the main question ("Is this correct?")
Yes. The statement is correct: in the old code, context_window was only passed to telemetry when log_enabled was True.

Answer to the inline thread question
The reviewer asked why we only pass context_window. My answer: token counts aren't sourced from this context.

Checklist against your request

Notes on pushing / changes
No code changes were made in this session, so nothing was pushed.
🟡 Acceptable taste — fixes the real issue with minimal churn, but the naming and data flow are still muddled.

[IMPROVEMENT OPPORTUNITIES]

1) Misleading variable name: log_ctx now carries baseline metrics, not just logging fields. Concrete suggestion: rename it.

2) The dict is built as a metrics baseline with logging extras layered on top. That's okay, but it reads like "logging is special, everything else is baseline" — which is correct — but the naming (log_ctx) obscures it.

3) The real architectural smell: telemetry depends on logging structure. This PR patches it by splitting the payload, which is good, but the fact that it's still the same variable passed into telemetry suggests the telemetry API itself might be too logging-shaped.
Looks like there are a few issues preventing this PR from being merged!
If you'd like me to help, just leave a comment. Feel free to include any additional details that might help me get this PR into a better state.
Summary
This fixes issue #1228 where the context_window (and other stats) were being reset to zero when sending ConversationStatusUpdateEvent(full_state) after resuming a conversation.

Root cause: In ConversationState.create(), when resuming a conversation, the stats were correctly deserialized from base_state.json but then immediately overwritten with an empty ConversationStats() object at line 216.

Changes:
- Removed the stats overwrite in the state.py resume path (the stats are already correctly deserialized from base_state.json)
- Added a context_window field to TokenUsageData in integration test schemas
- Flag cases where context_window is 0 despite LLM usage (to catch similar issues in the future)
- Added test_conversation_state_stats_preserved_on_resume for stats preservation on resume
- Updated test_conversation_state_flags_persistence to reflect correct behavior

Fixes #1228
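The resume-path bug can be sketched with simplified stand-ins. The dataclasses below are illustrative only, not the real ConversationState/ConversationStats (which track much more than two counters):

```python
import json
from dataclasses import dataclass, field

@dataclass
class ConversationStats:
    # Illustrative stand-in for the real stats object.
    context_window: int = 0
    total_tokens: int = 0

@dataclass
class ConversationState:
    stats: ConversationStats = field(default_factory=ConversationStats)

def resume_buggy(serialized: str) -> ConversationState:
    data = json.loads(serialized)
    state = ConversationState(stats=ConversationStats(**data["stats"]))
    # Bug: the freshly deserialized stats are immediately overwritten
    # with an empty object, resetting context_window to 0.
    state.stats = ConversationStats()
    return state

def resume_fixed(serialized: str) -> ConversationState:
    data = json.loads(serialized)
    # Fix: keep the stats deserialized from base_state.json.
    return ConversationState(stats=ConversationStats(**data["stats"]))

base_state = json.dumps({"stats": {"context_window": 200000, "total_tokens": 1234}})
print(resume_buggy(base_state).stats.context_window)  # 0
print(resume_fixed(base_state).stats.context_window)  # 200000
```

The new regression test in the PR asserts exactly the second behavior: stats survive a resume round-trip instead of being replaced by defaults.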
Checklist
@xingyaoww can click here to continue refining the PR
Agent Server images for this PR
• GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server
Variants & Base Images
- eclipse-temurin:17-jdk
- nikolaik/python-nodejs:python3.12-nodejs22
- golang:1.21-bookworm

Pull (multi-arch manifest)
```shell
# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:70c1405-python
```

Run
All tags pushed for this build
About Multi-Architecture Support
- Each variant tag (e.g. 70c1405-python) is a multi-arch manifest supporting both amd64 and arm64
- Architecture-specific tags (e.g. 70c1405-python-amd64) are also available if needed