Add Venice.ai #335
Conversation
Reviewer's Guide by Sourcery

This pull request introduces support for Venice.ai as a new provider. It includes changes to the API, configuration, UI components, and environment variables to accommodate the new provider.

Sequence diagram for Venice.ai chat completion flow:

```mermaid
sequenceDiagram
    participant C as Client
    participant A as API Route
    participant V as Venice.ai API
    C->>A: POST /api/chat/messages
    Note over A: Create OpenAI client<br/>with Venice.ai config
    A->>V: Create chat completion
    V-->>A: Stream response
    A-->>C: Stream text response
```
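The flow above can be sketched in TypeScript. This is a minimal, hedged sketch, not the PR's actual implementation: the base URL, payload shape, and helper names (`buildChatRequest`, `createChatCompletion`) are assumptions based on Venice.ai exposing an OpenAI-compatible `/chat/completions` endpoint.

```typescript
// Assumed default; the PR makes the endpoint configurable via VeniceSettings.
const VENICE_BASE_URL = "https://api.venice.ai/api/v1";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the request body. Kept pure so it is easy to test in isolation.
export function buildChatRequest(
  modelId: string,
  messages: ChatMessage[],
  maxTokens = 4096,
) {
  return {
    model: modelId,
    stream: true,
    max_tokens: maxTokens,
    messages,
  };
}

// Forward the request to Venice.ai and return the streaming HTTP response
// (hypothetical usage inside the /api/chat/messages route handler).
export async function createChatCompletion(
  apiKey: string,
  body: ReturnType<typeof buildChatRequest>,
) {
  return fetch(`${VENICE_BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
}
```

In the actual PR the route wraps the OpenAI SDK client rather than raw `fetch`, but the request shape is the same.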
Class diagram for Venice provider settings:

```mermaid
classDiagram
    class ProviderSetting {
        +Amazon AmazonSettings
        +Anthropic AnthropicSettings
        +Azure AzureSettings
        +Claude ClaudeSettings
        +Cohere CohereSettings
        +Google GoogleSettings
        +HuggingFace HuggingFaceSettings
        +Mistral MistralSettings
        +OpenAI OpenAISettings
        +Perplexity PerplexitySettings
        +Venice VeniceSettings
        +Custom CustomSettings
    }
    class VeniceSettings {
        +String apiKey
        +String endpoint
    }
    ProviderSetting *-- VeniceSettings
```
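In TypeScript, the settings shape implied by the class diagram might look like the sketch below. Only the `Venice` branch and its two fields come from the diagram; the optionality of the fields and the `isVeniceConfigured` helper are assumptions for illustration.

```typescript
// Venice provider settings, per the class diagram.
interface VeniceSettings {
  apiKey: string;
  endpoint: string;
}

// Other providers (Amazon, Anthropic, Azure, etc.) omitted for brevity.
interface ProviderSetting {
  Venice?: VeniceSettings;
}

// Hypothetical helper: a provider counts as configured once an API key is set.
export function isVeniceConfigured(settings: ProviderSetting): boolean {
  return Boolean(settings.Venice?.apiKey);
}
```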
Hey @mariodian - I've reviewed your changes - here's some feedback:
Overall Comments:
- There appears to be a duplicate Venice API implementation in both `route.ts` and `venice/route.ts`. Consider consolidating these into a single implementation to avoid maintenance issues.
Here's what I looked at during the review
- 🟡 General issues: 1 issue found
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
```ts
const response = await venice.chat.completions.create({
  model: config.model.model_id,
  stream: true,
  max_tokens: 4096,
```
suggestion: max_tokens should be configurable through the config object
Consider making max_tokens configurable through the config object rather than hardcoding it. This value is duplicated in venice/route.ts as well.
Suggested implementation:
```ts
const response = await venice.chat.completions.create({
  model: config.model.model_id,
  stream: true,
  max_tokens: config.model.max_tokens,
  messages,
});
```

You'll also need to:
- Update the config type definition to include max_tokens in the model configuration
- Update any config files or objects to include the max_tokens value
- Update `venice/route.ts` to use the same `config.model.max_tokens` instead of its hardcoded value
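One way to apply this suggestion without breaking existing configs is to resolve `max_tokens` with a fallback to the previously hardcoded value. This is a sketch under assumed config field names (`ModelConfig`, `resolveMaxTokens` are illustrative, not from the PR):

```typescript
// Assumed shape of the model section of the config object.
interface ModelConfig {
  model_id: string;
  max_tokens?: number; // newly configurable, optional for back-compat
}

// Fall back to the value that was previously hardcoded in both routes.
const DEFAULT_MAX_TOKENS = 4096;

export function resolveMaxTokens(model: ModelConfig): number {
  return model.max_tokens ?? DEFAULT_MAX_TOKENS;
}
```

Both `route.ts` and `venice/route.ts` could then pass `max_tokens: resolveMaxTokens(config.model)`, keeping the two call sites consistent.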
The PR adds support for https://venice.ai
Summary by Sourcery
New Features: