@nforro nforro commented Dec 15, 2025

Signed-off-by: Nikola Forró <nforro@redhat.com>
Trying to implement a VertexAI API client demo agent and an agent built on top of LiteLLM without BeeAI, and evaluate the pros and cons of each approach. We also need to consider the previous points (tool constraints and reasoning).

## How complicated would it be to get a more searchable solution when debugging agentic runs? Phoenix is great at visualizing runs, but the lack of an "easy search in a text file" option makes debugging longer. Also, the default BeeAI middleware prints everything, which makes the output hard to consume.

@nforro so Loki was not usable for this purpose? You can also integrate with it directly ...

Member Author

I believe this is more about what is passed to Loki (and what is directly consumable during local runs). The logs generated by the default middleware are extensive and really hard to navigate, because they contain everything, including the ever-growing history of messages exchanged between the model and the framework.
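One generic way to tame such output, independent of any particular framework, is to attach a filter to the logger that the middleware writes to and truncate oversized records before they reach the console or Loki. This is only a sketch of the idea; the logger name `agent` and the 200-character cap are assumptions, not part of the BeeAI API.

```python
import logging

MAX_LEN = 200  # hypothetical cap on a single log record


class TruncatingFilter(logging.Filter):
    """Shorten oversized log records so a growing message history
    does not flood the output (generic sketch, not a BeeAI API)."""

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        if len(msg) > MAX_LEN:
            # Replace the record's message with a truncated version;
            # clear args since the message is already interpolated.
            record.msg = msg[:MAX_LEN] + "... [truncated]"
            record.args = ()
        return True  # keep the record, just shortened


handler = logging.StreamHandler()
handler.addFilter(TruncatingFilter())
logger = logging.getLogger("agent")  # assumed logger name
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

# A very long record gets cut down to MAX_LEN plus a marker.
logger.debug("x" * 1000)
```

The same filter can be attached to whatever handler ships logs to Loki, so both local runs and the aggregated logs stay searchable without the full message history repeated on every turn.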
