Conversation

@gspencergoog
Collaborator

Adds a `LoadingScrim` widget to the `flutter_genui` package to
provide a visual indicator and disable UI interaction while an LLM
inference is in flight.

The `travel_app` example has been updated to use this new widget,
wrapping the main UI surface and the prompt input field. The
`loadingStream` from the `GenUiManager` is used to control the
visibility of the scrim.

A new widget test has been added to verify that the scrim is
displayed correctly when an inference is in progress.
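The wiring described above might look roughly like the following sketch. The `LoadingScrim` constructor parameters (`isLoading`, `child`) and the exact shape of `GenUiManager.loadingStream` are assumptions for illustration; only the names come from this PR.

```dart
import 'package:flutter/material.dart';
// Sketch only: LoadingScrim and GenUiManager come from flutter_genui.
import 'package:flutter_genui/flutter_genui.dart';

/// Wraps a surface in a LoadingScrim driven by the manager's loadingStream.
class ScrimmedSurface extends StatelessWidget {
  const ScrimmedSurface({
    super.key,
    required this.manager,
    required this.child,
  });

  final GenUiManager manager;
  final Widget child;

  @override
  Widget build(BuildContext context) {
    // Rebuild whenever the loading state changes; the scrim's parameter
    // names here are assumptions, not the package's confirmed API.
    return StreamBuilder<bool>(
      stream: manager.loadingStream,
      initialData: false,
      builder: (context, snapshot) {
        return LoadingScrim(
          isLoading: snapshot.data ?? false,
          child: child,
        );
      },
    );
  }
}
```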

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a LoadingScrim widget to provide visual feedback during LLM inference, which is a great user experience enhancement. The implementation is clean, wrapping the main UI and controlled by a stream from the GenUiManager.

The refactoring of the event handling system from a time-based EventDebouncer to an action-based UiEventManager is a significant architectural improvement, making the event flow more robust and predictable. The introduction of UiActionEvent and UiChangeEvent clarifies the intent of each UI interaction.

The tests have been updated accordingly, and a new test for the loading scrim has been added, which is excellent. I've found one critical issue and a couple of medium-severity suggestions for improving maintainability.

@jacobsimionato
Collaborator

I personally think we should avoid this approach of blocking the entire UI, because it prevents the user from properly consuming existing content while they're waiting for new content to load. Instead, can we make the FilterChipGroup show a loading scrim when the "Submit" button is pressed, and then ideally stop showing it when the surface gets regenerated, e.g. in didUpdateWidget if necessary?

I'm worried that if we switch to the loading scrim temporarily, it will get in the way of us prioritising a better solution to loading, which is necessary to fulfill our first milestone of supporting chat-oriented UIs.

Some other options we could explore:

  • Switch the agent to be a loop that can run multiple inferences for a single UI: e.g. relatively quickly show a loading state or create a new surface, then fill in that surface in the next inference, possibly after a database fetch tool call.
  • Create an even more locked down conversation mode that only allows appending new surfaces. That way, when an event is triggered, we can show a new surface with a loading spinner immediately so that the user can perceive cause and effect.
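The per-widget loading idea suggested above (scrim on Submit, cleared when the surface regenerates) could be sketched like this. `FilterChipGroup` is named in the comment, but its fields (`items`, `onSubmit`) and the regeneration check are assumptions for illustration.

```dart
import 'package:flutter/material.dart';

/// Sketch: a chip group that shows its own scrim after Submit and clears
/// it when the parent rebuilds it with regenerated data. All fields here
/// are hypothetical, not the package's confirmed API.
class FilterChipGroup extends StatefulWidget {
  const FilterChipGroup({
    super.key,
    required this.items,
    required this.onSubmit,
  });

  final List<String> items;
  final VoidCallback onSubmit;

  @override
  State<FilterChipGroup> createState() => _FilterChipGroupState();
}

class _FilterChipGroupState extends State<FilterChipGroup> {
  var _loading = false;

  @override
  void didUpdateWidget(FilterChipGroup oldWidget) {
    super.didUpdateWidget(oldWidget);
    // When the surface is regenerated with new items, stop loading.
    if (!identical(oldWidget.items, widget.items)) {
      _loading = false;
    }
  }

  @override
  Widget build(BuildContext context) {
    return Stack(
      children: [
        // Block interaction with just this group while loading.
        IgnorePointer(
          ignoring: _loading,
          child: Column(
            children: [
              Wrap(
                children: [
                  for (final item in widget.items) Chip(label: Text(item)),
                ],
              ),
              FilledButton(
                onPressed: () {
                  setState(() => _loading = true);
                  widget.onSubmit();
                },
                child: const Text('Submit'),
              ),
            ],
          ),
        ),
        if (_loading)
          const Positioned.fill(
            child: ColoredBox(
              color: Colors.white54,
              child: Center(child: CircularProgressIndicator()),
            ),
          ),
      ],
    );
  }
}
```

This keeps the rest of the page interactive while only the submitted widget is locked, which is the behavior the comment is asking for.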

@gspencergoog
Collaborator Author

gspencergoog commented Aug 5, 2025

Okay, makes sense. The scrim is pretty transparent so you can see what's still there, but I get it: I was kind of doing this as a test to see how it looked/acted, and I'm not super pleased with it, although it does do what I set out to do.

The problem I'm trying to solve here is that while the LLM is working on an answer, the user can send more events its way before it's ready.

I thought about just letting the events accumulate until the LLM responded and then send all the accumulated interactions its way. I suspect that would end in chaos though, and the user would just click a bunch of times on things and make a mess.

You could lock down the conversation mode, and just disable anything that's not the last thing in. But you'd still have to disable the last thing during inferences.

How about a wrapper around catalog widgets that puts a tiny scrim over each of them while the LLM is thinking?

Or maybe we can segment the chat into the separate surfaces for each "turn" and just let the LLM update an existing turn, and disable the UI for that turn while it thinks?

In any case, we should probably disable the prompt input field while the LLM thinks.

@gspencergoog
Collaborator Author

gspencergoog commented Aug 6, 2025

@jacobsimionato and I spoke, and decided that the best choice would probably be to separate the chat into "turns", each with its own surface; when the user interacts with a turn's surface, that surface becomes disabled, but other turns remain interactive. We also thought it would be good to look at an internal project that has some similar UI challenges.
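The agreed per-turn behavior could be sketched roughly as below. `TurnSurface` and its fields are hypothetical names, not anything from the package; it shows the key property that disabling one turn leaves earlier turns untouched.

```dart
import 'package:flutter/material.dart';

/// Sketch: a per-turn surface that disables itself on first interaction.
/// Each turn in the chat gets its own TurnSurface, so locking one turn
/// leaves the others interactive. The names here are assumptions.
class TurnSurface extends StatefulWidget {
  const TurnSurface({
    super.key,
    required this.child,
    required this.onInteraction,
  });

  final Widget child;
  final VoidCallback onInteraction;

  @override
  State<TurnSurface> createState() => _TurnSurfaceState();
}

class _TurnSurfaceState extends State<TurnSurface> {
  var _disabled = false;

  @override
  Widget build(BuildContext context) {
    return AbsorbPointer(
      // Once disabled, swallow further pointer events for this turn only.
      absorbing: _disabled,
      child: Opacity(
        opacity: _disabled ? 0.5 : 1.0,
        child: Listener(
          // The first interaction is delivered to the child, then the
          // turn locks itself and notifies the manager.
          onPointerDown: (_) {
            if (!_disabled) {
              setState(() => _disabled = true);
              widget.onInteraction();
            }
          },
          child: widget.child,
        ),
      ),
    );
  }
}
```

A real implementation would presumably re-enable or replace the turn when the LLM regenerates its surface, along the lines of the didUpdateWidget approach discussed earlier.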

@gspencergoog
Collaborator Author

I'm going to close this and make another PR later with the solution we want.
