Support enabling/disabling deep reasoning in Xorbits Inference (similar to Ollama) #2340

@Alysondao

Description

Self Checks

  • I have read the Contributing Guide and Language Policy.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report, otherwise it will be closed.
  • Please do not modify this template :) and fill in all the required fields.

1. Is this request related to a challenge you're experiencing? Tell me about your story.

I am using Xorbits Inference to serve LLMs for different scenarios, including real-time chat and automated tasks.

In some cases, I want the model to perform deep reasoning (e.g. complex problem solving), but in many production scenarios (such as latency-sensitive chat, tool calling, or simple Q&A), deep reasoning is unnecessary and even undesirable due to increased latency and token usage.

Currently, Xorbits Inference does not seem to provide an explicit option to enable or disable deep reasoning behavior. This makes it harder to balance performance, cost, and response quality across different use cases.

By contrast, tools like Ollama expose a clear switch or configuration to control whether “deep thinking / reasoning” is enabled, which is very helpful in practice.

2. Additional context or comments

I suggest adding a configurable option (API parameter, model config, or runtime flag) to explicitly control deep reasoning behavior, for example:

enable / disable deep reasoning
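To make the suggestion concrete, here is a minimal client-side sketch of what such a switch could look like, assuming a per-request flag forwarded through the OpenAI-compatible request body. The names `chat_template_kwargs` and `enable_thinking` are assumptions for illustration (borrowed from the Qwen3 convention), not confirmed Xorbits Inference parameters.

```python
def build_chat_request(model: str, messages: list, deep_reasoning: bool) -> dict:
    """Build an OpenAI-compatible chat completion payload with a
    hypothetical per-request deep-reasoning toggle."""
    return {
        "model": model,
        "messages": messages,
        # Hypothetical parameter: the backend would pass this to the
        # model's chat template so it can skip the "thinking" phase
        # when deep_reasoning is False.
        "chat_template_kwargs": {"enable_thinking": deep_reasoning},
    }

# Latency-sensitive chat: reasoning disabled.
fast = build_chat_request(
    "qwen3", [{"role": "user", "content": "Hi"}], deep_reasoning=False
)

# Complex problem solving: reasoning enabled.
slow = build_chat_request(
    "qwen3", [{"role": "user", "content": "Prove this step by step."}],
    deep_reasoning=True,
)

print(fast["chat_template_kwargs"])  # {'enable_thinking': False}
```

A per-request parameter like this would let the same deployed model serve both modes, rather than requiring a launch-time flag that fixes the behavior for all callers.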

3. Can you help us with this feature?

  • I am interested in contributing to this feature.


Labels: enhancement (New feature or request)
