
fix: reasoning_content is missing in assistant tool call message#259

Open
Malpl3naInk wants to merge 7 commits into Gitlawb:main from Malpl3naInk:fix/reasoning-content-missing

Conversation

@Malpl3naInk

Summary

Fix compatibility with Moonshot AI's kimi-k2.5 model when reasoning/thinking is enabled. The model returns content in
a separate reasoning_content field and requires this field to be preserved in subsequent requests with tool calls.

Fixes the error: thinking is enabled but reasoning_content is missing in assistant tool call message

Changes

  • OpenAIMessage interface: Added optional reasoning_content field for outgoing requests
  • convertMessages(): Extracts thinking content blocks from assistant messages and maps them to reasoning_content for API compatibility
  • OpenAIStreamChunk interface: Added reasoning_content to delta type for streaming responses
  • openaiStreamToAnthropic(): Handles streaming reasoning_content by converting it to Anthropic thinking_delta events; properly closes thinking blocks on finish
  • _convertNonStreamingResponse(): Extracts reasoning_content from non-streaming responses and converts to Anthropic thinking content blocks
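The outgoing half of this mapping can be sketched as follows. This is a hedged illustration only: the `ContentBlock` shape and the `convertAssistantMessage` helper are assumptions for the sketch, while the PR's real logic lives in `convertMessages()`:

```typescript
// Illustrative only: lift `thinking` blocks out of an assistant message
// into the OpenAI-compatible `reasoning_content` field, keeping plain
// text blocks as the regular `content` string.
interface OpenAIMessage {
  role: string
  content: string
  reasoning_content?: string // extension used by DeepSeek, Moonshot, etc.
}

type ContentBlock =
  | { type: 'thinking'; thinking: string }
  | { type: 'text'; text: string }
  | { type: 'redacted_thinking'; data: string } // dropped, nothing to forward

function convertAssistantMessage(blocks: ContentBlock[]): OpenAIMessage {
  const thinking = blocks
    .filter((b): b is Extract<ContentBlock, { type: 'thinking' }> => b.type === 'thinking')
    .map((b) => b.thinking)
    .join('\n')
  const text = blocks
    .filter((b): b is Extract<ContentBlock, { type: 'text' }> => b.type === 'text')
    .map((b) => b.text)
    .join('')
  const msg: OpenAIMessage = { role: 'assistant', content: text }
  if (thinking) msg.reasoning_content = thinking
  return msg
}
```

The point of the sketch is the invariant: `reasoning_content` is only set when thinking blocks exist, so providers that reject unknown empty fields are not affected.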

Testing

bun test src/services/api/openaiShim.test.ts - passed
Verified thinking with the Moonshot API (kimi-k2.5) - passed
Verified thinking with the DeepSeek API (deepseek-reasoner) - passed

…easoning content support

- Added `reasoning_content` field to `OpenAIMessage` and `OpenAIStreamChunk` interfaces for improved API compatibility.
- Implemented extraction of reasoning content from thinking blocks in `convertMessages` function.
- Updated `openaiStreamToAnthropic` to handle reasoning content during streaming, converting it to thinking blocks.
- Ensured reasoning content is included in the final message structure for better interaction with the API.
Collaborator

@auriti auriti left a comment


Good addition — reasoning_content is a legitimate extension in the OpenAI-compat ecosystem (DeepSeek, Moonshot, and others use it), and the bidirectional mapping (outgoing: thinking → reasoning_content, incoming: reasoning_content → thinking) is the right approach. The streaming path handles block lifecycle correctly.

Two things to address:

1. redacted_thinking blocks are not filtered

The content filter only excludes thinking:

const thinkingContent = content.filter((b: { type?: string }) => b.type === 'thinking')
const textContent = content.filter(
  (b: { type?: string }) => b.type !== 'tool_use' && b.type !== 'thinking',
)

redacted_thinking blocks will fall through to textContent and get serialized as text. These should be excluded from textContent (and can be safely dropped — there's no useful content to forward):

const textContent = content.filter(
  (b: { type?: string }) =>
    b.type !== 'tool_use' && b.type !== 'thinking' && b.type !== 'redacted_thinking',
)

2. Missing tests

The changes are testable with the existing test infrastructure in openaiShim.test.ts. At minimum:

  • Non-streaming: verify reasoning_content in response → thinking content block in output
  • Streaming: verify delta.reasoning_content → thinking_delta events
  • Outgoing: verify thinking blocks in assistant messages → reasoning_content field in request body
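The non-streaming case in particular needs only a small conversion to assert against. A sketch, under the assumption that the shim exposes something equivalent to this hypothetical `toAnthropicBlocks` helper (not the PR's actual function name):

```typescript
// Illustrative helper: turn an OpenAI-compatible response message into
// Anthropic-style content blocks, emitting the thinking block first so
// reasoning precedes the visible text.
interface OpenAICompatMessage {
  content: string | null
  reasoning_content?: string
}

type AnthropicBlock =
  | { type: 'thinking'; thinking: string }
  | { type: 'text'; text: string }

function toAnthropicBlocks(msg: OpenAICompatMessage): AnthropicBlock[] {
  const blocks: AnthropicBlock[] = []
  if (msg.reasoning_content) {
    blocks.push({ type: 'thinking', thinking: msg.reasoning_content })
  }
  if (msg.content) {
    blocks.push({ type: 'text', text: msg.content })
  }
  return blocks
}
```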

Note: this PR is complementary to #258 (which strips residual thinking blocks in convertContentBlocks()). No conflicts — they operate at different layers.

@devmuhnnad

How can I use the kimi-k2.5 with openclaude? I have the api key already

@Malpl3naInk
Author

> How can I use the kimi-k2.5 with openclaude? I have the api key already

You can run the /provider command and select 3. OpenAI-compatible, configure the API key and the Moonshot base URL https://api.moonshot.ai/v1, and enter kimi-k2.5 as the model name. Relaunch the program and the startup page will display:

Provider  OpenAI
Model     kimi-k2.5
Endpoint  https://api.moonshot.ai/v1

- Added tests for non-streaming and streaming scenarios to verify conversion of reasoning_content to thinking blocks and vice versa.
- Implemented checks for filtering out redacted_thinking blocks from text content.
- Updated the convertMessages function to exclude redacted_thinking types from text content extraction.
@Malpl3naInk
Author

Updated. I'm not sure whether my tests are correct; let me know if there's anything else I need to do.

@Vasanthdev2004
Collaborator

@Malpl3naInk check all conflicts

Collaborator

@gnanam1990 gnanam1990 left a comment


The non-streaming reasoning-content handling looks like a step in the right direction, but I think the streaming behavior is still incorrect.

In the streaming path, a thinking block can be started for reasoning_content and then later text or tool blocks are emitted without clearly closing that thinking block first. That risks emitting invalid or overlapping Anthropic-format content blocks.

Please fix the streaming block lifecycle and add a test for a single assistant message that contains both reasoning content and a tool call.
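The lifecycle rule requested above can be sketched as a small state machine. Event names follow the Anthropic streaming format, but `streamEvents` and its delta shape are illustrative assumptions, not the shim's real API:

```typescript
// Illustrative state machine: a thinking block opened for
// reasoning_content must be closed (content_block_stop) before any
// other block starts, so blocks never overlap.
type StreamEvent =
  | { type: 'content_block_start'; index: number; block: 'thinking' | 'text' }
  | { type: 'content_block_delta'; index: number; delta: string }
  | { type: 'content_block_stop'; index: number }

function streamEvents(
  deltas: Array<{ reasoning_content?: string; content?: string }>,
): StreamEvent[] {
  const events: StreamEvent[] = []
  let index = -1
  let open: 'thinking' | 'text' | null = null
  // Open a block of the given kind, closing the current one first.
  const ensure = (kind: 'thinking' | 'text') => {
    if (open === kind) return
    if (open !== null) events.push({ type: 'content_block_stop', index })
    index += 1
    open = kind
    events.push({ type: 'content_block_start', index, block: kind })
  }
  for (const d of deltas) {
    if (d.reasoning_content) {
      ensure('thinking')
      events.push({ type: 'content_block_delta', index, delta: d.reasoning_content })
    }
    if (d.content) {
      ensure('text')
      events.push({ type: 'content_block_delta', index, delta: d.content })
    }
  }
  if (open !== null) events.push({ type: 'content_block_stop', index })
  return events
}
```

The key invariant is that `ensure()` always emits a `content_block_stop` for the open block before starting a new one; a tool_use branch would follow the same pattern.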

…ent handling

- Implemented tests to verify the lifecycle of thinking and tool use blocks during streaming.
- Enhanced the openaiStreamToAnthropic function to handle null and empty content from DeepSeek, ensuring proper block closure and event emission.
- Added checks for reasoning content and tool call references in the streaming response.
@Vasanthdev2004
Collaborator

fix conflict

@kevincodex1 kevincodex1 requested a review from gnanam1990 April 7, 2026 10:20
@AxDSan

AxDSan commented Apr 10, 2026

Please fix conflicts
