
Conversation

@tucnak (Collaborator) commented Apr 17, 2025

This change includes quite a bit of everything, really. Most importantly, it handles cached content for both normal and batch inference via the cache_ttl and cached_content metadata. This is great news for those of us who are running many, many nasty batches.
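
A minimal sketch of the caching flow these fields map onto, using the public Vertex AI REST API directly; the endpoint paths, field names, and the PROJECT/LOCATION/MODEL placeholders are assumptions about the upstream API, not code from this repository:

```python
# Create a cache entry with a TTL, then reference it on later requests.
import requests

PROJECT = "my-project"          # hypothetical
LOCATION = "us-central1"        # hypothetical
MODEL = "gemini-1.5-pro-002"    # hypothetical
TOKEN = "..."                   # e.g. `gcloud auth print-access-token`
BASE = f"https://{LOCATION}-aiplatform.googleapis.com/v1beta1"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1) Create the cache with a TTL -- this is what cache_ttl maps to.
cache = requests.post(
    f"{BASE}/projects/{PROJECT}/locations/{LOCATION}/cachedContents",
    headers=HEADERS,
    json={
        "model": f"projects/{PROJECT}/locations/{LOCATION}/publishers/google/models/{MODEL}",
        "contents": [{"role": "user", "parts": [{"text": "<large shared context>"}]}],
        "ttl": "3600s",
    },
).json()

# 2) Reference the cache by resource name on subsequent generateContent calls --
#    this is what the cached_content metadata carries -- so the shared prefix
#    is reused instead of being resent with every request.
reply = requests.post(
    f"{BASE}/projects/{PROJECT}/locations/{LOCATION}/publishers/google/models/{MODEL}:generateContent",
    headers=HEADERS,
    json={
        "cachedContent": cache["name"],
        "contents": [{"role": "user", "parts": [{"text": "Question about the cached context"}]}],
    },
).json()
```

The same cachedContent name can be attached to batch requests as well, which is where the savings add up.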

It also enables structured output by decoding against a Vertex-supported schema.
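
A hedged sketch of what that looks like at the API level: generateContent accepts an OpenAPI-style responseSchema in generationConfig, and the JSON reply is decoded locally. The endpoint constants mirror the caching sketch above, and the invoice schema is an invented example rather than anything from this repo:

```python
import json
import requests

PROJECT, LOCATION, MODEL = "my-project", "us-central1", "gemini-1.5-pro-002"  # hypothetical
TOKEN = "..."  # e.g. `gcloud auth print-access-token`
URL = (f"https://{LOCATION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT}"
       f"/locations/{LOCATION}/publishers/google/models/{MODEL}:generateContent")

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "contents": [{"role": "user", "parts": [{"text": "Extract the invoice fields."}]}],
        "generationConfig": {
            "responseMimeType": "application/json",
            "responseSchema": {
                "type": "OBJECT",
                "properties": {"vendor": {"type": "STRING"}, "total": {"type": "NUMBER"}},
                "required": ["vendor", "total"],
            },
        },
    },
).json()

# The model is constrained to the schema, so the text part parses directly.
invoice = json.loads(resp["candidates"][0]["content"]["parts"][0]["text"])
```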

@tucnak tucnak merged commit 0cc8d7e into master Apr 17, 2025
2 checks passed
@tucnak tucnak deleted the vertexmaxxing branch April 17, 2025 19:04