Conversation
llama_stack/apis/agents/agents.py
Outdated
  session_id: str
  turn_id: str
- tool_responses: List[ToolResponseMessage]
+ tool_responses: List[ToolResponse | ToolResponseMessage]
What's the difference between ToolResponse and ToolResponseMessage? Maybe we should just keep one?
or
class ToolResponseMessage(ToolResponse)
ToolResponse represents the response from a tool.
ToolResponseMessage represents an input message to the model whose content comes from a tool response.
ToolResponseMessage here is kept for backward compatibility and will be removed in a future version.
Hmm, it feels like these 2 types are kind of redundant and could be unified into 1?

class ToolResponseMessage(BaseModel):
    role: Literal["tool"] = "tool"
    call_id: str
    tool_name: Union[BuiltinTool, str]
    content: InterleavedContent

class ToolResponse(BaseModel):
    call_id: str
    tool_name: Union[BuiltinTool, str]
    content: InterleavedContent
    metadata: Optional[Dict[str, Any]] = None

    @field_validator("tool_name", mode="before")
    @classmethod
    def validate_field(cls, v):
        if isinstance(v, str):
            try:
                return BuiltinTool(v)
            except ValueError:
                return v
        return v

Or is the idea that we could be augmenting the content from ToolResponse into ToolResponseMessage using metadata?
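As an aside, here is a minimal runnable sketch of what that tool_name validator does, using a toy BuiltinTool enum and a plain string in place of InterleavedContent (both are simplified stand-ins, not the real llama_stack types): known builtin names get coerced to the enum, while custom tool names pass through as strings.

```python
from enum import Enum
from typing import Any, Dict, Optional, Union

from pydantic import BaseModel, field_validator


class BuiltinTool(Enum):
    # toy stand-in for llama_stack's real BuiltinTool enum
    web_search = "web_search"
    code_interpreter = "code_interpreter"


class ToolResponse(BaseModel):
    call_id: str
    tool_name: Union[BuiltinTool, str]
    content: str  # simplified stand-in for InterleavedContent
    metadata: Optional[Dict[str, Any]] = None

    @field_validator("tool_name", mode="before")
    @classmethod
    def validate_field(cls, v):
        # coerce known builtin names to the enum; pass unknown strings through
        if isinstance(v, str):
            try:
                return BuiltinTool(v)
            except ValueError:
                return v
        return v


builtin = ToolResponse(call_id="c1", tool_name="web_search", content="ok")
custom = ToolResponse(call_id="c2", tool_name="my_custom_tool", content="ok")
```

After validation, builtin.tool_name is the BuiltinTool.web_search enum member, while custom.tool_name stays the string "my_custom_tool".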
Yeah, we could unify them since they contain mostly the same information, but I think having them separate keeps the interface cleaner. ToolResponseMessage is one of the model inference API inputs. Having metadata there that's never used by the model makes the inference API less clean and could confuse users. From an inference API user's POV, metadata doesn't make sense.
Makes sense! So ToolExecutionStep would contain the metadata from ToolResponse, which users can query, while inference only sees ToolResponseMessage.
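That split could be sketched roughly as follows, using plain dataclasses as simplified stand-ins for the pydantic models; to_inference_message is a hypothetical helper for illustration, not actual llama-stack code:

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass
class ToolResponse:
    call_id: str
    tool_name: str
    content: str
    metadata: Optional[Dict[str, Any]] = None


@dataclass
class ToolResponseMessage:
    call_id: str
    tool_name: str
    content: str
    role: str = "tool"


def to_inference_message(resp: ToolResponse) -> ToolResponseMessage:
    # metadata is deliberately dropped: it stays on the ToolExecutionStep
    # for users to query, while the model only sees this message
    return ToolResponseMessage(
        call_id=resp.call_id,
        tool_name=resp.tool_name,
        content=resp.content,
    )


resp = ToolResponse("call-1", "web_search", "search results...",
                    metadata={"cache_hit": True})
msg = to_inference_message(resp)
```

The message handed to inference carries no metadata field at all, so the inference API surface stays clean while the metadata remains queryable on the step.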
yanxi0830
left a comment
Maybe add a task to deprecate the ToolResponseMessage for 0.1.7? Also do we need OpenAPI updates for this PR?
Summary:
Corresponding change on the python client for llamastack/llama-stack#1426

Test Plan:
Tested in llamastack/llama-stack#1426

Co-authored-by: Eric Huang (AI Platform) <erichuang@fb.com>
Summary:
Client side change in llamastack/llama-stack-client-python#180
Changes the resume_turn API to accept ToolResponse instead of ToolResponseMessage:
- ToolResponse contains metadata
- ToolResponseMessage is a concept for model inputs. Here we are just submitting the outputs of tool execution.

Test Plan:
Ran integration tests with newly added test using client tool with metadata
LLAMA_STACK_CONFIG=fireworks pytest -s -v tests/integration/agents/test_agents.py --safety-shield meta-llama/Llama-Guard-3-8B --record-responses