
Conversation

marckraw (Owner) commented Nov 8, 2025

No description provided.

changeset-bot commented Nov 8, 2025

🦋 Changeset detected

Latest commit: 43a1c38

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 7 packages
Name                         Type
@mrck-labs/grid-core         Minor
@mrck-labs/grid-agents       Patch
@mrck-labs/grid-workflows    Major
express-test                 Patch
hono-test                    Patch
nextjs-test                  Patch
terminal-agent               Patch


vercel bot commented Nov 8, 2025

The latest updates on your projects.

Project      Deployment    Preview    Comments    Updated (UTC)
grid-docs    Ready         Preview    Comment     Nov 8, 2025 11:19am

marckraw requested a review from Copilot on November 8, 2025 at 11:19
marckraw merged commit af9fd22 into develop on Nov 8, 2025
3 checks passed
Copilot AI (Contributor) left a comment


Pull Request Overview

This PR adds streaming support to the LLM service, enabling real-time text generation from language models. The implementation adds two new optional methods that cover streaming with and without tool support; a brief usage sketch follows the bullet list below.

  • Added runStreamedLLM and runStreamedLLMWithTools methods for streaming text generation
  • Updated type definitions to include optional streaming method signatures
  • Bumped package version from 0.35.0 to 0.36.0
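
For orientation, a minimal usage sketch. Hedged: runStreamedLLM and the textStream/generation return values come from this PR, but the llmService instance and any option shape beyond messages are assumptions.

    // Hypothetical consumer; the option shape is assumed, not confirmed by the diff.
    const { textStream, generation } = await llmService.runStreamedLLM!({
      messages: [{ role: "user", content: "Explain streaming in one paragraph." }],
    });

    for await (const chunk of textStream) {
      process.stdout.write(chunk); // render tokens as they arrive
    }
    // Per the review comments below, the caller may also need to end the
    // Langfuse generation once the stream is drained.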

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 14 comments.

Reviewed files:

packages/core/src/types/llm.types.ts: Added optional method signatures for runStreamedLLM and runStreamedLLMWithTools to the LLMService interface.
packages/core/src/services/base.llm.service.ts: Implemented two streaming methods using the AI SDK's streamText function, with support for multiple providers (OpenAI, Anthropic, OpenRouter) and tool execution tracking.
packages/core/package.json: Version bump to 0.36.0 to reflect the new streaming functionality.
.changeset/cold-dancers-repair.md: Added changeset documenting the minor version update with streaming response support.


Comment on lines +452 to +457:

    if (sendUpdate) {
      sendUpdate({
        type: "tool_response",
        content: JSON.stringify(content),
      });
    }
Copilot AI, Nov 8, 2025:

Same inconsistency: sendUpdate is checked for existence but is defined as required in the type definition.

    tools = [],
    temperature = 0.1,
    maxOutputTokens,
    traceContext,
Copilot AI, Nov 8, 2025:

The traceContext variable is destructured from options but never used in this function. Unlike runLLM which uses it for telemetry metadata (lines 241-242), this streaming function doesn't include it. Either remove the unused variable or add it to the metadata for consistency with runLLM.

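A hedged sketch of the consistency fix, threading traceContext into the generation metadata the way the comment says runLLM does. The Langfuse call shapes here are assumptions; the real code may create the generation differently.

    import { Langfuse } from "langfuse";

    const langfuse = new Langfuse();
    const trace = langfuse.trace({ name: "runStreamedLLMWithTools" });
    const generation = trace.generation({
      name: "runStreamedLLMWithTools",
      // Assumed: fold traceContext into metadata, mirroring what the comment
      // says runLLM does at lines 241-242. traceContext's shape is whatever
      // LLMServiceOptions defines.
      metadata: { ...traceContext },
    });
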
      options: LLMServiceOptions & { tools?: any[] }
    ): Promise<{
      textStream: AsyncIterable<string>;
      generation: any;
Copilot AI, Nov 8, 2025:

The generation return type should be more specific. Consider using a proper type from the Langfuse SDK instead of any to improve type safety.

      maxOutputTokens,
    });

    return { textStream: result.textStream, generation };
Copilot AI, Nov 8, 2025:

The streaming methods don't complete the Langfuse generation tracking. Unlike runLLM which calls generation.end() with usage and cost data (lines 291-299), these streaming methods return the generation object without finalizing it. This could lead to incomplete telemetry data. Consider adding completion logic after the stream finishes, or documenting that the caller is responsible for ending the generation.

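One way the completion logic could look, using the AI SDK's onFinish callback on streamText. This is a sketch: the usage field names vary across AI SDK versions, and Langfuse's generation.end() payload shape should be checked against the installed SDK.

    import { streamText } from "ai";

    const result = streamText({
      model: aiModel,
      messages,
      // Close the Langfuse generation once the stream has fully drained.
      onFinish: ({ text, usage }) => {
        generation.end({
          output: text,
          // AI SDK v4 reports promptTokens/completionTokens; v5 reports
          // inputTokens/outputTokens. Adjust to the installed version.
          usage: {
            input: (usage as any).inputTokens ?? (usage as any).promptTokens,
            output: (usage as any).outputTokens ?? (usage as any).completionTokens,
          },
        });
      },
    });

    return { textStream: result.textStream, generation };
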
Comment on lines +435 to +440:

    if (sendUpdate) {
      sendUpdate({
        type: "tool_execution",
        content: JSON.stringify(content),
      });
    }
Copilot AI, Nov 8, 2025:

The sendUpdate function is checked for existence before use (lines 435, 452), but according to LLMServiceOptions interface (llm.types.ts line 70), sendUpdate is a required field, not optional. Either make sendUpdate optional in the type definition by adding a ?, or remove these null checks since the field is guaranteed to exist.

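If the optional-field route is chosen, the type change is small. A sketch, with the interface body abbreviated and the update payload shape inferred from the excerpts above:

    export interface LLMServiceOptions {
      // ...existing fields unchanged...
      // Optional now, matching the runtime `if (sendUpdate)` guards.
      sendUpdate?: (update: { type: string; content: string }) => void;
    }
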
Comment on lines +370 to +375:

    const runStreamedLLMWithTools = async (
      options: LLMServiceOptions & { tools?: any[] }
    ): Promise<{
      textStream: AsyncIterable<string>;
      generation: any;
    }> => {
Copilot AI, Nov 8, 2025:

This streaming method also lacks documentation. Consider adding JSDoc comments similar to the suggestion for runStreamedLLM.

    messages,
    temperature = 0.7,
    maxOutputTokens,
    traceContext,
Copilot AI, Nov 8, 2025:

The traceContext variable is destructured from options but never used in this function. Unlike runLLM which uses it for telemetry metadata (lines 241-242), this streaming function doesn't include it. Either remove the unused variable or add it to the metadata for consistency with runLLM.

Comment on lines +321 to +324:

    const runStreamedLLM = async (options: LLMServiceOptions): Promise<{
      textStream: AsyncIterable<string>;
      generation: any;
    }> => {
Copilot AI, Nov 8, 2025:

The new streaming methods lack documentation. Consider adding JSDoc comments to explain:

  • When to use streaming vs. non-streaming methods
  • The purpose and usage of the returned generation object
  • Whether the caller is responsible for calling generation.end()
  • Example usage of consuming the textStream

This would help API consumers understand how to properly use these methods; one possible JSDoc shape is sketched below.

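As referenced above, one possible shape for that JSDoc. The caller-ends-the-generation contract is taken from the telemetry comment earlier in this review, not confirmed by the diff:

    /**
     * Streams a completion token-by-token instead of returning the full text.
     *
     * Prefer this over runLLM when the consumer renders output incrementally
     * (e.g. a chat UI). Returns the text stream plus the Langfuse generation
     * handle; the caller is responsible for calling generation.end() once the
     * stream has been fully consumed.
     *
     * @example
     * const { textStream, generation } = await runStreamedLLM({ messages });
     * for await (const chunk of textStream) process.stdout.write(chunk);
     * generation.end();
     */
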
    // Streaming methods (optional)
    runStreamedLLM?(options: LLMServiceOptions): Promise<{
      textStream: AsyncIterable<string>;
      generation: any;
Copilot AI, Nov 8, 2025:

The generation return type should be more specific. Consider using a proper type from the Langfuse SDK instead of any to improve type safety. This applies to both streaming methods.

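A hedged version of what that could look like, assuming the Langfuse JS SDK exports LangfuseGenerationClient (verify the export name against the installed langfuse version):

    import type { LangfuseGenerationClient } from "langfuse";

    runStreamedLLM?(options: LLMServiceOptions): Promise<{
      textStream: AsyncIterable<string>;
      generation: LangfuseGenerationClient; // instead of any
    }>;
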
Comment on lines +401 to +412:

    // Select AI model based on provider
    let aiModel;
    if (provider === "anthropic") {
      aiModel = anthropic(model);
    } else if (provider === "openrouter") {
      const openrouter = createOpenRouter({
        apiKey: process.env.OPENROUTER_API_KEY,
      });
      aiModel = openrouter.chat(model);
    } else {
      aiModel = openai(model);
    }
Copilot AI, Nov 8, 2025:

The provider selection logic is duplicated. This is the third instance of the same code pattern. Consider extracting this into a shared helper function as suggested in the earlier comment.

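A possible extraction, reusing exactly the calls from the excerpt above. The import paths are assumptions based on the standard AI SDK provider packages:

    import { anthropic } from "@ai-sdk/anthropic";
    import { openai } from "@ai-sdk/openai";
    import { createOpenRouter } from "@openrouter/ai-sdk-provider";

    // Shared helper replacing the three duplicated provider-selection blocks.
    function resolveAiModel(provider: string, model: string) {
      if (provider === "anthropic") return anthropic(model);
      if (provider === "openrouter") {
        const openrouter = createOpenRouter({
          apiKey: process.env.OPENROUTER_API_KEY,
        });
        return openrouter.chat(model);
      }
      return openai(model); // OpenAI as the default, matching the excerpt
    }

    // Call sites then reduce to:
    // const aiModel = resolveAiModel(provider, model);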