
Conversation


@mariodian mariodian commented Feb 1, 2025

This PR adds support for https://venice.ai as a provider.

Summary by Sourcery

New Features:

  • Added Venice.ai as a provider.


sourcery-ai bot commented Feb 1, 2025

Reviewer's Guide by Sourcery

This pull request introduces Venice.ai as a new provider, touching the API routes, provider configuration, UI components, and environment variables.

Sequence diagram for Venice.ai chat completion flow

sequenceDiagram
    participant C as Client
    participant A as API Route
    participant V as Venice.ai API

    C->>A: POST /api/chat/messages
    Note over A: Create OpenAI client<br/>with Venice.ai config
    A->>V: Create chat completion
    V-->>A: Stream response
    A-->>C: Stream text response
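For concreteness, the flow above roughly corresponds to the TypeScript sketch below. It assumes the route uses the openai npm package pointed at Venice.ai's OpenAI-compatible endpoint via baseURL; the endpoint value and the helper name are illustrative, not taken from the PR.

    import OpenAI from "openai";

    // Venice.ai exposes an OpenAI-compatible API, so the standard OpenAI SDK
    // can target it by overriding baseURL. Both values come from the new
    // environment variables introduced in this PR (assumed to be set).
    const venice = new OpenAI({
      apiKey: process.env.VENICE_API_KEY,
      baseURL: process.env.VENICE_ENDPOINT, // e.g. "https://api.venice.ai/api/v1" (assumed)
    });

    // Illustrative helper: create a streaming chat completion, mirroring the
    // "Create chat completion" / "Stream response" steps in the diagram.
    export async function streamVeniceChat(
      modelId: string,
      messages: OpenAI.Chat.ChatCompletionMessageParam[]
    ) {
      return venice.chat.completions.create({
        model: modelId,
        stream: true,
        messages,
      });
    }

The API route would then iterate over the returned stream and forward the text chunks to the client as they arrive.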

Class diagram for Venice provider settings

classDiagram
    class ProviderSetting {
        +Amazon AmazonSettings
        +Anthropic AnthropicSettings
        +Azure AzureSettings
        +Claude ClaudeSettings
        +Cohere CohereSettings
        +Google GoogleSettings
        +HuggingFace HuggingFaceSettings
        +Mistral MistralSettings
        +OpenAI OpenAISettings
        +Perplexity PerplexitySettings
        +Venice VeniceSettings
        +Custom CustomSettings
    }

    class VeniceSettings {
        +String apiKey
        +String endpoint
    }

    ProviderSetting *-- VeniceSettings
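In TypeScript terms, the diagram corresponds roughly to the following shape; the actual definitions live in types/settings.ts and may differ in detail.

    // Rough rendering of the class diagram above; field names follow the
    // diagram, but the real interfaces in types/settings.ts are authoritative.
    interface VeniceSettings {
      apiKey: string;
      endpoint: string;
    }

    interface ProviderSetting {
      // ...existing providers (Amazon, Anthropic, Azure, Claude, Cohere, Google,
      // HuggingFace, Mistral, OpenAI, Perplexity, Custom)
      Venice: VeniceSettings;
    }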

File-Level Changes

Added Venice.ai as a new provider in the API.
  • Added a new case for the Venice provider in the POST function to handle chat completions.
  • Created a new route file for Venice.ai API calls.
  Files: app/api/chat/messages/route.ts, app/api/chat/messages/venice/route.ts

Updated configuration files to include Venice.ai (a rough sketch of these additions follows this list).
  • Added Venice to the Provider enum.
  • Added VeniceModelId and VeniceModelName types.
  • Added Venice to the AllModelId and AllModelName types.
  • Added the VeniceSettings interface.
  • Added Venice to the ProviderSetting interface.
  Files: config/provider/index.ts, types/settings.ts

Added Venice.ai to the model selection UI.
  • Added a new ModelSelector component for Venice.ai.
  Files: components/layout/model-select.tsx

Added Venice.ai settings to the settings UI.
  • Added a new VeniceProvider component for settings.
  • Added Venice settings to the ProviderSettings component.
  • Added Venice settings to the SettingsDialog component.
  • Added Venice settings to the SettingsDrawer component.
  Files: components/layout/settings/provider.tsx, components/layout/settings-dialog.tsx, components/layout/settings-drawer.tsx, components/layout/settings/provider/venice.tsx

Added Venice.ai environment variables.
  • Added VENICE_API_KEY and VENICE_ENDPOINT to the Dockerfile.
  • Added VENICE_API_KEY and VENICE_ENDPOINT to docker-compose.yml.
  • Added VENICE_API_KEY and VENICE_ENDPOINT to .env.example.
  Files: Dockerfile, docker-compose.yml, .env.example
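To make the configuration and environment changes more concrete, here is a rough sketch; the enum member, model id values, and default endpoint are assumptions for illustration, not the actual contents of config/provider/index.ts.

    // Illustrative only: names and literal values below are assumptions,
    // not copied from config/provider/index.ts.
    export enum Provider {
      // ...existing providers
      Venice = "Venice",
    }

    // Hypothetical model id/name unions added alongside the other providers.
    export type VeniceModelId = "llama-3.3-70b" | "deepseek-r1-671b";
    export type VeniceModelName = "Llama 3.3 70B" | "DeepSeek R1 671B";

    // Server-side defaults read from the new environment variables
    // (VENICE_API_KEY, VENICE_ENDPOINT) added to the Dockerfile,
    // docker-compose.yml, and .env.example.
    export const veniceDefaults = {
      apiKey: process.env.VENICE_API_KEY ?? "",
      endpoint: process.env.VENICE_ENDPOINT ?? "https://api.venice.ai/api/v1", // assumed default
    };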

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!
  • Generate a plan of action for an issue: Comment @sourcery-ai plan on
    an issue to generate a plan of action for it.

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help


vercel bot commented Feb 1, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

Name: chat-chat
Status: ❌ Failed (Inspect)
Updated (UTC): Feb 1, 2025 7:43am


@sourcery-ai sourcery-ai bot left a comment


Hey @mariodian - I've reviewed your changes - here's some feedback:

Overall Comments:

  • The Venice API implementation appears to be duplicated across route.ts and venice/route.ts. Consider consolidating these into a single implementation to avoid maintenance issues.
Here's what I looked at during the review
  • 🟡 General issues: 1 issue found
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

const response = await venice.chat.completions.create({
    model: config.model.model_id,
    stream: true,
    max_tokens: 4096,
    messages,
});

suggestion: max_tokens should be configurable through the config object

Consider making max_tokens configurable through the config object rather than hardcoding it. This value is duplicated in venice/route.ts as well.

Suggested implementation:

            const response = await venice.chat.completions.create({
                model: config.model.model_id,
                stream: true,
                max_tokens: config.model.max_tokens,
                messages,
            });

You'll also need to:

  1. Update the config type definition to include max_tokens in the model configuration
  2. Update any config files or objects to include the max_tokens value
  3. Update venice/route.ts to use the same config.model.max_tokens instead of its hardcoded value
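As a rough illustration of point 1, the model config type could carry the limit; the interface names below are assumptions about the project's config shape rather than its actual types.

    // Sketch of carrying max_tokens in the model config instead of hardcoding 4096.
    // Interface names are hypothetical; the real types live in the project's config.
    interface ModelConfig {
      model_id: string;
      max_tokens: number; // e.g. default to 4096 to preserve current behaviour
    }

    interface ChatRequestConfig {
      model: ModelConfig;
    }

    // Both route.ts and venice/route.ts would then pass the same value:
    //   max_tokens: config.model.max_tokens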
