
Conversation

@guapisolo (Collaborator) commented Jan 22, 2026

This PR supports several things (sorry, I found it hard to split it into separate PRs; will do better in later PRs).

Under the /chat/completions endpoint with the Miles router enabled:

  • Support token consistency between training & inference by adding a trajectory manager middleware that stores history samples.
  • Support token-in for SGLang. Miles can enable token input to SGLang by setting miles_router_enable_token_input_for_chat_completions (text models only).
  • Refine token-out in the SGLang engine, including token-out for the prompt part and the token IDs (convert_ids_to_tokens sometimes misbehaves when the output token text includes whitespace; see the first sketch after this list).
  • Refine the agentic tool-call implementation.
  • Refine some logic in the router to decouple it from the radix tree.
  • Add docs for the refactored multi-turn & agentic flows.
  • Cross-turn token consistency is not supported, which means that in each turn we retokenize the OAI messages in the trajectory (see the second sketch after this list).
    Why? The chat template is not append-only: e.g., different models cut the think part at very different times and in different scenarios. This makes cross-turn token inheritance tightly coupled with the chat template, and it is very hard to find a general solution.
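
A note on the third bullet (token-out): convert_ids_to_tokens in Hugging Face tokenizers returns raw vocabulary strings, in which whitespace is encoded as marker characters (Ġ for byte-level BPE, ▁ for SentencePiece), so reconstructing output text from it can drop or mangle spaces. Below is a minimal sketch of the failure mode and the decode-based alternative; it is illustrative only, not the Miles code, and gpt2 is an arbitrary example model.

# Sketch: why convert_ids_to_tokens is unsafe for producing output text.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
ids = tok.encode("hello world")

print(tok.convert_ids_to_tokens(ids))   # ['hello', 'Ġworld']: raw token strings, 'Ġ' marks a space
print(tok.decode(ids))                  # 'hello world': decode restores real whitespace

# Per-token text (e.g. for token-out) is safer to derive by decoding id slices:
pieces = [tok.decode([i]) for i in ids]
print(pieces)                            # ['hello', ' world']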
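
And on the last bullet, a sketch of what "retokenize the OAI messages in the trajectory" means in practice. This uses the Hugging Face chat-template API as an assumed illustration, not the actual Miles code, and the model name is only an example.

# Sketch: per-turn retokenization. Each turn re-applies the chat template to the
# full message list instead of appending new ids to the previous turn's ids,
# because templates are not append-only (e.g. some templates strip the think
# part of earlier assistant turns, so the old ids would no longer be a prefix).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

messages = [{"role": "user", "content": "What is 2 + 2?"}]
turn1_ids = tok.apply_chat_template(messages, add_generation_prompt=True)

messages.append({"role": "assistant", "content": "4"})
messages.append({"role": "user", "content": "And times 3?"})

# Next turn: tokenize from scratch; do not assume turn2_ids starts with turn1_ids.
turn2_ids = tok.apply_chat_template(messages, add_generation_prompt=True)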

TODO:

  • Fireworks wrapper.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @guapisolo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly overhauls the system's generation and rollout mechanisms. It introduces a new, more modular interface for defining generation functions, moving away from a monolithic approach. This refactor facilitates the integration of advanced features like multi-turn conversations and OpenAI-style tool calling, ensuring token consistency during generation. The changes also extend to the Miles router, enabling it to manage sessions and proxy these new generation types, laying the groundwork for more complex agentic behaviors.

Highlights

  • Refactored Rollout Function Interface: Introduced an experimental refactor of the rollout function interface, enabling more modular and flexible generation functions, activated by the MILES_EXPERIMENTAL_ROLLOUT_REFACTOR environment variable (a sketch of this gating follows this list).
  • New Generate Function Hub: Added a new generate_hub directory containing specialized generation functions for single_turn, multi_turn, and agentic_tool_call scenarios.
  • OpenAI-style Tool Calling and Session Support: Implemented support for OpenAI-style chat completions with tool calling, including session management within the Miles router to maintain token consistency during generation.
  • Enhanced Router Functionality: The Miles router now includes session management capabilities and can proxy OpenAI-style requests, handling tokenization and record keeping for improved trajectory tracking.
  • Expanded Test Infrastructure: Added new test utilities (mock_sglang_server, mock_tools, uvicorn_thread_server) and comprehensive unit/integration tests to validate the new rollout and generation functionalities.
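
To make the first highlight concrete, here is a hypothetical sketch of how the flag-gated dispatch over the generate_hub functions could look. MILES_EXPERIMENTAL_ROLLOUT_REFACTOR and the single_turn/multi_turn/agentic_tool_call names come from this PR, but the import paths, function names, and flag semantics below are assumptions, not the actual implementation.

# Hypothetical sketch (not the PR's code): gate the refactored rollout path
# behind the MILES_EXPERIMENTAL_ROLLOUT_REFACTOR environment variable and pick
# a specialized generate function from the generate hub by scenario name.
import os

def select_generate_fn(scenario: str):
    if os.environ.get("MILES_EXPERIMENTAL_ROLLOUT_REFACTOR") == "1":
        from generate_hub import agentic_tool_call, multi_turn, single_turn  # assumed paths
        hub = {
            "single_turn": single_turn.generate,
            "multi_turn": multi_turn.generate,
            "agentic_tool_call": agentic_tool_call.generate,
        }
        return hub[scenario]
    from legacy_rollout import generate  # assumed name for the old monolithic path
    return generate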
Ignored Files
  • Ignored by pattern: .github/workflows/** (2)
    • .github/workflows/pr-test.yml
    • .github/workflows/pr-test.yml.j2

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces significant refactoring to support naive OpenAI Token-In-Token-Out (TITO) for agentic workflows, particularly for multi-turn conversations with tool calls. The changes are extensive, adding new modules for generation hubs, inference rollouts, and session management, along with numerous tests. A key addition is the MILES_EXPERIMENTAL_ROLLOUT_REFACTOR feature flag to control the new logic. While the overall direction is a great improvement in modularity and functionality, I've identified a critical bug in the multi-turn logic and a few areas for improvement in the new shell script and utility functions.

Comment on lines 56 to 63
if args.generate_multi_samples:
    sample = deepcopy(input.sample)

output = await post(url, payload)
await update_sample_from_response(args, sample, payload=payload, output=output, update_loss_mask=True)

if args.generate_multi_samples:
    multi_samples.append(deepcopy(sample))

critical

There's a critical bug in the logic for generate_multi_samples. When args.generate_multi_samples is true, the sample variable is rebound to a fresh copy of input.sample on line 57. This causes the conversation history, which is maintained in the sample object, to be lost. As a result, subsequent turns in the loop will not have the correct context, as update_sample_with_tool_responses on line 75 will be updating a sample that doesn't contain the full conversation history.

To fix this, you should use a separate variable for the turn-specific sample that gets appended to multi_samples, while the main sample variable continues to hold the running state of the entire conversation.

Suggested change
-if args.generate_multi_samples:
-    sample = deepcopy(input.sample)
-
-output = await post(url, payload)
-await update_sample_from_response(args, sample, payload=payload, output=output, update_loss_mask=True)
-
-if args.generate_multi_samples:
-    multi_samples.append(deepcopy(sample))
+output = await post(url, payload)
+if args.generate_multi_samples:
+    turn_sample = deepcopy(input.sample)
+    await update_sample_from_response(args, turn_sample, payload=payload, output=output, update_loss_mask=True)
+    multi_samples.append(turn_sample)
+await update_sample_from_response(args, sample, payload=payload, output=output, update_loss_mask=True)

Comment on lines 9 to 11
sleep 3
pkill -9 ray
pkill -9 python

medium

The commands to kill ray and python processes on lines 10-11 are redundant, as they are already executed on lines 7-8. The ray stop --force command should handle stopping Ray processes, and the subsequent pkill commands serve as a fallback. The second set of pkill commands is unnecessary and can be removed to simplify the script.


-async def post(url, payload, max_retries=60):
+# TODO may generalize the name since it now contains http DELETE/GET etc (with retries and remote-execution)
+async def post(url, payload, max_retries=60, action="post"):

medium

The function post now handles more than just POST requests (e.g., GET, DELETE). The function name should be generalized to reflect its broader capability, for instance, to request_with_retry or a similar name that better describes its function of making various types of HTTP requests with retry logic.
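
One possible shape for the suggested rename, sketched with aiohttp: the signature mirrors the post(url, payload, max_retries, action) shown above, but the body is an assumption for illustration, not the PR's implementation.

import asyncio
import aiohttp

# Sketch: generalized HTTP helper (POST/GET/DELETE etc.) with simple retry logic.
async def request_with_retry(url, payload=None, max_retries=60, method="POST"):
    for attempt in range(max_retries):
        try:
            async with aiohttp.ClientSession() as session:
                async with session.request(method, url, json=payload) as resp:
                    resp.raise_for_status()
                    return await resp.json()
        except aiohttp.ClientError:
            if attempt == max_retries - 1:
                raise
            await asyncio.sleep(1)  # fixed backoff for the sketch; the real code may differ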



# TODO may rename to `load_object` since it can be used to load things like tool_specs
def load_function(path):

medium

The function load_function is now used to load objects other than functions, such as tool_specs. Renaming it to load_object would more accurately reflect its usage and prevent potential confusion.
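
For reference, a minimal sketch of what the suggested load_object could look like: resolve a dotted path to any module-level attribute. This is illustrative only and assumes the existing load_function resolves paths roughly this way.

import importlib

def load_object(path: str):
    # "pkg.module.attr" -> the attribute itself (a function, a tool_specs list, etc.)
    module_path, _, attr_name = path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, attr_name)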

@guapisolo guapisolo marked this pull request as draft January 23, 2026 01:09
@guapisolo guapisolo force-pushed the feat/lazy_oai_tito branch 2 times, most recently from d4655d4 to 8efb357 on January 23, 2026 at 18:55
@guapisolo guapisolo marked this pull request as ready for review January 23, 2026 19:04
@guapisolo (Collaborator, Author) commented Jan 24, 2026

@yueming-yuan Do you have some time to review this PR? I think it's almost ready. Thanks!

@guapisolo guapisolo changed the title from "Support naive OAI TITO" to "feat: Support OAI TITO v1" on Jan 24, 2026