
[feature] Support tau2 bench #192

Open
SJTUyh wants to merge 1 commit into AISBench:master from SJTUyh:tau2_dev

Conversation

@SJTUyh (Collaborator) commented Mar 16, 2026

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry; just open the pull request and ask the maintainers for help.

PR Type

  • Feature
  • Bugfix
  • Docs
  • CI/CD
  • Refactor
  • Perf
  • Dependency
  • Test-Cases
  • Other

Related Issue
Fixes #(issue ID) / Relates to #(issue ID)

🔍 Motivation

Please describe the motivation behind this PR and the goal you want to achieve with it.

📝 Modification

Please briefly describe the modifications made in this PR.

📐 Associated Test Results

Please provide links to the related test results, such as CI pipelines and test reports.

⚠️ BC-breaking (Optional)

Does the modification break backward compatibility for downstream repositories? If so, please describe how it breaks compatibility and how downstream projects should modify their code to stay compatible with this PR.

⚠️ Performance degradation (Optional)

If the modification introduces a performance degradation, please describe its impact and the expected performance improvement.

🌟 Use cases (Optional)

If this PR introduces a new feature, it is best to list some use cases here and update the documentation.

✅ Checklist

Before PR:

  • Pre-commit or other linting tools have been used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests, and the case that caused the bug has been added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • All relevant documentation (API docs, docstrings, example tutorials) has been updated to reflect these changes.

After PR:

  • If the modification could affect downstream or other related projects, this PR should be tested with those projects.
  • The CLA has been signed by all committers on this PR.

👥 Collaboration Info

  • Suggested Reviewers: @xxx
  • Relevant Module Owners: @xxx
  • Other Collaboration Notes:

🌟 Useful CI Commands

Command             Introduction
/gemini review      Performs a code review of the current pull request in its current state.
/gemini summary     Provides a summary of the current pull request in its current state.
/gemini help        Displays a list of available Gemini commands.
/readthedocs build  Triggers a Read the Docs build of the documentation for the current pull request in its current state.

@SJTUyh SJTUyh deployed to smoke-test-approval March 16, 2026 06:37 — with GitHub Actions Active
@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly!

This pull request significantly enhances the benchmarking capabilities by integrating the tau2 bench framework. It provides a dedicated task for running tau2 simulations and refines the configuration system to offer greater flexibility in defining and executing benchmark workflows. Key changes include simplifying configuration requirements and introducing the ability to conditionally skip inference or evaluation stages, which is crucial for tailored benchmarking scenarios.

Highlights

  • TAU2 Bench Integration: Introduced a new TAU2BenchTask to support the tau2 bench framework, enabling the execution and evaluation of tau2 benchmarks within the system.
  • Flexible Task Execution: Implemented mechanisms to allow skipping inference or evaluation steps by introducing an EmptyTask and modifying worker classes to check for and handle this task type.
  • Configuration Simplification: Streamlined the required fields for model and dataset configurations, reducing them to only the 'abbr' field, and improved handling of missing configuration elements.
  • Dependency Update: Updated the scikit-learn dependency to allow for newer versions, ensuring broader compatibility.


Changelog
  • ais_bench/benchmark/cli/config_manager.py
    • Reduced the list of required fields for model and dataset configurations to only 'abbr'.
    • Added a check to skip dataset configuration filling if 'infer_cfg' is not present.
  • ais_bench/benchmark/cli/utils.py
    • Modified get_config_type to return None when the input object is None.
  • ais_bench/benchmark/cli/workers.py
    • Imported EmptyTask for conditional task skipping.
    • Added a skip attribute to BaseWorker to manage task execution flow.
    • Updated Infer and Eval worker classes to detect EmptyTask in their update_cfg methods, allowing them to skip their respective do_work operations.
    • Changed the return type hint for update_cfg methods in Infer and Eval to ConfigDict.
  • ais_bench/benchmark/partitioners/base.py
    • Adjusted _check_task_cfg to safely retrieve 'type' for dataset and model configurations using .get().
  • ais_bench/benchmark/partitioners/naive.py
    • Provided default empty string values for out_dir in the NaivePartitioner constructor and partition method.
  • ais_bench/benchmark/registry.py
    • Extended the TASKS registry to include custom task locations, specifically tasks.custom_tasks.
  • ais_bench/benchmark/tasks/base.py
    • Added a new EmptyTask class that inherits from BaseTask and provides no-op run and get_command methods.
  • ais_bench/benchmark/tasks/custom_tasks/tau2_bench_task.py
    • Added a new file defining the TAU2BenchTask for integrating tau2 benchmarks.
    • Included patches for litellm and tau2's logging and cost calculation to handle unmapped models gracefully.
    • Implemented methods for API key management, output directory preparation, configuration refreshing, tau2 run configuration, and result dumping.
    • Integrated tqdm and TaskStateManager for progress monitoring during tau2 simulations.
  • ais_bench/benchmark/utils/prompt/prompt.py
    • Added a condition in get_prompt_hash to return '/' if infer_cfg is not present in the dataset configuration.
  • ais_bench/configs/agent_example/tau2_bench_task.py
    • Added a new configuration example file for tau2_bench_task.
    • Defined model and dataset configurations for tau2 benchmarks across 'airline', 'retail', and 'telecom' sub-tasks.
    • Configured the infer stage to use EmptyTask and the eval stage to use TAU2BenchTask.
  • requirements/runtime.txt
    • Updated the scikit-learn dependency from a fixed version ==1.5.0 to a minimum version >=1.5.0.
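The EmptyTask mechanism described in the changelog can be sketched roughly as follows. This is a minimal illustration, assuming a BaseTask interface with run and get_command methods; the names follow the changelog, but the actual implementation in ais_bench/benchmark/tasks/base.py may differ:

```python
# Minimal sketch of the no-op task pattern (assumed BaseTask interface;
# illustrative only, not the repository's actual code).
class BaseTask:
    def __init__(self, cfg=None):
        self.cfg = cfg or {}

    def run(self):
        raise NotImplementedError

    def get_command(self):
        raise NotImplementedError


class EmptyTask(BaseTask):
    """Placeholder task: a worker that detects this type skips its stage."""

    def run(self):
        # Deliberately do nothing; the infer (or eval) stage is skipped.
        return None

    def get_command(self):
        # No shell command is needed for a skipped stage.
        return ""
```

A worker can then check `isinstance(task, EmptyTask)` (or the configured task type) in update_cfg and bypass do_work, which is how the tau2 config runs evaluation only.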

@gemini-code-assist bot left a comment

Code Review

This pull request introduces support for the tau2 benchmark by adding a new custom task, updating worker logic to accommodate custom tasks and skippable steps, and relaxing some configuration requirements. The overall structure of the changes is sound. However, I have identified several critical issues that need to be addressed. These include potential NameError exceptions in the new task due to incorrect variable scoping, and KeyError exceptions from unsafe dictionary access in the worker classes. Additionally, there is a bug in the new tau2 benchmark configuration file, and the new task employs module-level monkey-patching, which is a risky practice. I have provided specific comments and suggestions for fixing these issues.

total_tasks = self._get_task_count(self.run_config) * self.run_config.num_trials
save_to = f"{self.run_config.save_to}.json"
pbar = tqdm(total=total_tasks, desc="Running TAU2 Bench", unit="task")
task_state_manager.update_task_state(

critical

The task_state_manager variable is used here but it is not defined in the current scope. It seems you intended to use self.task_state_manager, which was assigned in the run method. This will cause a NameError at runtime.

Suggested change:
- task_state_manager.update_task_state(
+ self.task_state_manager.update_task_state(

new_completed = len(data.get('simulations', []))
if new_completed > completed:
    pbar.update(new_completed - completed)
    task_state_manager.update_task_state(

critical

The task_state_manager variable is not defined in this scope, which will lead to a NameError. You should use the instance attribute self.task_state_manager instead.

Suggested change:
- task_state_manager.update_task_state(
+ self.task_state_manager.update_task_state(

    monitor_thread.join()
finally:
    pbar.update(total_tasks - pbar.n)
    task_state_manager.update_task_state(

critical

The task_state_manager variable is not defined in this scope, which will cause a NameError. Please use self.task_state_manager which is available as an instance attribute.

Suggested change:
- task_state_manager.update_task_state(
+ self.task_state_manager.update_task_state(

custom_infer = cfg.get("infer")
custom_task = None
if custom_infer:
    custom_task = custom_infer["runner"]["task"].get("type")

high

Accessing nested dictionary keys directly without checking for their existence can lead to a KeyError. custom_infer['runner'] or custom_infer['runner']['task'] could fail if these keys are not present in the configuration. You should use .get() with default values for safer access.

Suggested change:
- custom_task = custom_infer["runner"]["task"].get("type")
+ custom_task = custom_infer.get("runner", {}).get("task", {}).get("type")

custom_eval = cfg.get("eval")
custom_task = None
if custom_eval:
    custom_task = custom_eval["runner"]["task"].get("type")

high

Similar to the Infer class, accessing nested dictionary keys directly can cause a KeyError if runner or task keys are missing in the eval configuration. Please use .get() for safe access to prevent potential crashes.

Suggested change:
- custom_task = custom_eval["runner"]["task"].get("type")
+ custom_task = custom_eval.get("runner", {}).get("task", {}).get("type")

Comment on lines +28 to +82
# ================= Patch the litellm cost-calculation function =================
import litellm
import logging

litellm_logger = logging.getLogger("litellm")
litellm_logger.setLevel(logging.CRITICAL)

try:
    from litellm.utils import get_response_cost as litellm_get_response_cost
except ImportError:
    try:
        from litellm.cost_calculator import get_response_cost as litellm_get_response_cost
    except ImportError:
        litellm_get_response_cost = None

def patched_get_response_cost(*args, **kwargs):
    if litellm_get_response_cost is None:
        return 0.0
    try:
        return litellm_get_response_cost(*args, **kwargs)
    except Exception as e:
        if "This model isn't mapped yet" in str(e):
            return 0.0
        raise e

try:
    litellm.utils.get_response_cost = patched_get_response_cost
except AttributeError:
    pass
try:
    litellm.cost_calculator.get_response_cost = patched_get_response_cost
except AttributeError:
    pass
# ================= End of litellm cost-calculation patch =================

DEFAULT_FAKE_API_KEY = "fake_api_key"

from tau2.data_model.simulation import RunConfig
from tau2.run import run_domain, get_tasks
from tau2.metrics.agent_metrics import compute_metrics

# ================= Patch the tau2 cost-calculation function =================
import tau2.utils.llm_utils as tau2_llm_utils
import loguru

_original_tau2_get_response_cost = tau2_llm_utils.get_response_cost
_original_tau2_logger_error = tau2_llm_utils.logger.error

def _patched_logger_error(message, *args, **kwargs):
    if "This model isn't mapped yet" in str(message):
        return
    _original_tau2_logger_error(message, *args, **kwargs)

tau2_llm_utils.logger.error = _patched_logger_error
# ================= End of tau2 cost-calculation patch =================

high

Monkey-patching libraries at the module level is a risky practice. It globally alters the behavior of litellm and tau2 and can lead to unexpected side effects, making the code harder to debug and maintain. Consider if there's a way to apply these patches more locally, for example using a context manager around the calls to run_domain. If this global patch is absolutely necessary, it should be accompanied by detailed comments explaining why it's needed and what risks are involved.

dict(
    abbr=f'tau2_bench_{task}',
    args = dict(
        domain = "airline",  # -d, the simulation domain to run; valid values are get_options().domains ["airline", "retail", "telecom"]

high

The domain is hardcoded to "airline" inside a loop that iterates over sub_tasks. This means that all generated dataset configurations (tau2_bench_airline, tau2_bench_retail, tau2_bench_telecom) will incorrectly point to the 'airline' domain. You probably intended to use the loop variable task for the domain.

Suggested change:
- domain = "airline",  # -d, the simulation domain to run; valid values are get_options().domains ["airline", "retail", "telecom"]
+ domain = task,  # -d, the simulation domain to run; valid values are get_options().domains ["airline", "retail", "telecom"]
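With the loop variable used for the domain, each sub-task gets its own configuration. A standalone sketch of the intended loop (field names follow the excerpt above; the real config carries more fields):

```python
# Generate one dataset config per tau2 domain, using the loop variable
# for both the abbreviation and the domain argument.
sub_tasks = ["airline", "retail", "telecom"]

datasets = [
    dict(
        abbr=f"tau2_bench_{task}",
        args=dict(
            domain=task,  # -d: the simulation domain to run
        ),
    )
    for task in sub_tasks
]

for d in datasets:
    print(d["abbr"], "->", d["args"]["domain"])
```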

Comment on lines +19 to +20
if obj is None:
    return None

medium

While adding a None check is a good improvement for robustness, the function's return type hint -> str on line 18 is now incorrect because the function can return None. Please update the signature to -> Optional[str] to accurately reflect its behavior. You will also need to add from typing import Optional at the top of the file.
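The corrected signature might look like the sketch below. The body of get_config_type is abbreviated here to the None guard under discussion; the `__name__` fallback is an assumption for illustration, not the repository's actual derivation logic:

```python
from typing import Optional

def get_config_type(obj) -> Optional[str]:
    """Return the config type name for obj, or None when obj is None."""
    if obj is None:
        return None
    # Abbreviated/assumed behavior: derive a type name from the object.
    return obj.__name__ if hasattr(obj, "__name__") else type(obj).__name__
```

Callers that previously assumed a str return now need to handle the None case explicitly.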

new_cfg = dict(
infer=dict(
partitioner=dict(type=get_config_type(NaivePartitioner)),
partitioner= dict(type=get_config_type(NaivePartitioner)),

medium

There is an extra space after partitioner=. Please remove it for consistent formatting.

Suggested change:
- partitioner= dict(type=get_config_type(NaivePartitioner)),
+ partitioner=dict(type=get_config_type(NaivePartitioner)),
