EPIC: Support generative UI capabilities #80

@tomkis

Description

Problem

Currently, agents can only render UI using predefined static components via the A2A metadata-driven extension system. Agent builders are limited to composing from a fixed component library; agents cannot dynamically generate custom UI based on context or user interaction.

Proposed solution

Allow agents to produce generative UI, where the agent constructs the interface dynamically rather than selecting from a predefined component library. This would let agents generate rich, context-aware interfaces on the fly (e.g., custom dashboards, interactive visualizations, bespoke layouts) without requiring those components to exist in the SDK upfront.

A generative UI approach would need to work alongside or extend the existing A2A metadata-driven extension system (A2AUiExtension), potentially introducing a new extension type that accepts LLM-generated UI definitions and safely renders them in the frontend.
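One way such an extension could "safely render" LLM-generated UI is to treat the generated definition as untrusted input and validate it against an allowlist before handing it to the renderer. The sketch below is purely illustrative: `UiNode`, `sanitizeUiTree`, and the allowed/blocked sets are hypothetical names, not part of the existing A2AUiExtension API.

```typescript
// Hypothetical shape of an LLM-generated UI definition (illustrative only).
type UiNode = {
  type: string;
  props?: Record<string, unknown>;
  children?: UiNode[];
};

// Example allowlist of element types the frontend knows how to render,
// and props that must never pass through from generated definitions.
const ALLOWED_TYPES = new Set(["container", "text", "button", "chart"]);
const BLOCKED_PROPS = new Set(["dangerouslySetInnerHTML", "href", "src"]);

// Recursively validate an untrusted UI tree before rendering:
// reject unknown element types, and strip event-handler or
// HTML-injection props so generated UI cannot execute arbitrary code.
function sanitizeUiTree(node: UiNode): UiNode {
  if (!ALLOWED_TYPES.has(node.type)) {
    throw new Error(`Disallowed element type: ${node.type}`);
  }
  const props: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(node.props ?? {})) {
    // Drop blocked props and anything that looks like an event handler.
    if (!BLOCKED_PROPS.has(key) && !key.startsWith("on")) {
      props[key] = value;
    }
  }
  return {
    type: node.type,
    props,
    children: (node.children ?? []).map(sanitizeUiTree),
  };
}
```

A renderer would then map each sanitized `type` to an actual component, which keeps the extension's safety guarantees independent of whatever the model emits.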

Alternatives considered

  • Expanding the static component library to cover more use cases. This doesn't scale, since every new use case requires shipping a new SDK component, and it limits agent creativity.
  • Rendering markdown/HTML within text messages. This lacks interactivity and structured data binding.
