Background and motivation
Problem
FunctionInvokingChatClient currently provides a way to validate function calls via the FunctionInvoker pipeline (i.e., you can validate arguments before invoking, and validate outputs after invocation).
However, the approval flow (FunctionApprovalRequestContent / FunctionApprovalResponseContent) does not provide a comparable hook to run the same validation logic before presenting/processing an approval request.
This creates a gap:
- For normal tool calls, I can validate input/output as part of the invocation pipeline.
- For approval tool calls, I cannot run the same validation before approval is requested/handled.
- If validation fails during approval, there’s no good built-in way to:
  - automatically re-prompt the LLM with validation errors (before the user is asked to approve), and/or
  - retry the approval request with corrected arguments.
Result: consumers have to manually manipulate chat contents / approval contents and build custom retry flows, instead of relying on FunctionInvokingChatClient.
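For reference, this is roughly how validation can be wired up today for normal tool calls via the existing FunctionInvoker delegate; the ValidateInput/ValidateOutput helpers below are placeholders for app-specific logic:
var client = GetChatClient().AsBuilder().UseFunctionInvocation(configure: c =>
{
    c.FunctionInvoker = async (ctx, ct) =>
    {
        // App-specific input validation before the function runs.
        ValidateInput(ctx.Function, ctx.Arguments);

        object? result = await ctx.Function.InvokeAsync(ctx.Arguments, ct);

        // App-specific output validation after the function runs.
        ValidateOutput(ctx.Function, result);
        return result;
    };
}).Build();
No comparable hook exists on the approval path.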
Desired behavior
Validation should be invoked for both:
- normal function calls (existing behavior)
- approval-based function calls (missing)
Additionally, there should be an option to resubmit to the LLM when validation fails before approval, similarly to how invocation retry flows can work.
Example scenario
- LLM proposes a tool call with a large argument object.
- The app requires schema validation / business rules validation.
- Validation fails (e.g., missing required field, invalid enum, violates constraints).
- Before asking the user to approve, the client should be able to:
  - send the validation error back to the model (if configured),
  - let the model revise the tool call arguments,
  - then proceed to approval only once validation passes.
Today, this is difficult because FunctionApprovalRequestContent has no place to plug in validation/retry logic.
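A rough sketch of the manual plumbing this forces on consumers today (member names are approximate; ValidateArguments is app-specific, and client/messages/options/ct come from the surrounding code):
var response = await client.GetResponseAsync(messages, options, ct);

foreach (var approval in response.Messages.SelectMany(m => m.Contents).OfType<FunctionApprovalRequestContent>())
{
    var errors = ValidateArguments(approval.FunctionCall); // app-specific validation
    if (errors.Count > 0)
    {
        // Hand the validation errors back to the model as the tool result and
        // re-issue the request ourselves, since FunctionInvokingChatClient
        // offers no hook to do this before the approval is surfaced.
        messages.AddRange(response.Messages);
        messages.Add(new ChatMessage(ChatRole.Tool,
            [new FunctionResultContent(approval.FunctionCall.CallId,
                $"Arguments failed validation: {string.Join("; ", errors)}")]));
        response = await client.GetResponseAsync(messages, options, ct);
    }
}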
I am not certain what the best behavior should be when output validation fails.
Some open questions:
- Should output validation failures be reported back to the model and allow regeneration?
- Should they surface as hard errors to the caller?
- Should retry behavior for output validation be configurable and separate from input validation retries?
I wanted to explicitly call this out, as input validation before approval is the primary pain point, but output validation behavior likely needs a clear and consistent policy as well.
API Proposal
Add validator hooks to FunctionInvokingChatClient so they are applied consistently across both normal invocation and approval flows, along with an option to specify the validation retry count separately.
public partial class FunctionInvokingChatClient
{
    public Func<FunctionInvocationContext, CancellationToken, ValueTask>? FunctionInputValidator { get; set; }
    public Func<FunctionInvocationContext, object?, CancellationToken, ValueTask>? FunctionOutputValidator { get; set; }
    public int FunctionInputValidationRetryCount { get; set; } = 1;
}
API Usage
var client = GetChatClient().AsBuilder().UseFunctionInvocation(configure: c =>
{
    c.FunctionInputValidator = (ctx, ct) =>
    {
        // Validate the model-proposed arguments against the function's JSON schema.
        var schema = Json.Schema.JsonSchema.FromText(ctx.Function.JsonSchema.GetRawText());
        var arguments = JsonSerializer.SerializeToNode(ctx.Arguments, AIJsonUtilities.DefaultOptions);
        var result = schema.Evaluate(arguments);
        if (!result.IsValid)
            throw new JsonValidationExceptionForLLM(result);
        return default;
    };
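    // Hypothetical use of the second proposed hook, shown for symmetry:
    // app-specific checks on the function's return value before it is
    // handed back to the model.
    c.FunctionOutputValidator = (ctx, result, ct) =>
    {
        if (result is string s && string.IsNullOrWhiteSpace(s))
            throw new InvalidOperationException($"{ctx.Function.Name} returned an empty result.");
        return default;
    };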
}).Build();
Alternative Designs
No response
Risks
No response