Conversation

@HavenDV
Contributor

@HavenDV HavenDV commented Sep 8, 2025

Summary by CodeRabbit

  • New Features

    • Added an optional prompt_cache_key to the chat/completions input to enable caching and reuse of prompts across requests. The field is a nullable string, the addition is backward compatible, and reusing the same key with the same prompt can improve performance and consistency.
  • Documentation

    • Updated the public API schema so the prompt_cache_key field carries a title and description that clarify its usage and behavior.

@coderabbitai

coderabbitai bot commented Sep 8, 2025

Walkthrough

Added an optional string field prompt_cache_key (nullable) to the OpenAPI input schema for chat/completions in src/libs/DeepInfra/openapi.yaml. No required properties changed, and no other schema or control-flow modifications are indicated.

Changes

Cohort / File(s): OpenAPI schema update (src/libs/DeepInfra/openapi.yaml)
Summary: Added optional field prompt_cache_key (string, nullable) to the public input schema alongside reasoning_effort; includes a title and description for identifying and reusing the prompt cache across requests.
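
For a concrete picture of the change, a request exercising the new field might look like the sketch below (Python with requests; the endpoint URL, model name, and key value are illustrative assumptions, and only the prompt_cache_key field itself comes from this PR):

import os
import requests

response = requests.post(
    "https://api.deepinfra.com/v1/openai/chat/completions",  # assumed OpenAI-compatible endpoint
    headers={"Authorization": f"Bearer {os.environ['DEEPINFRA_API_KEY']}"},
    json={
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",  # example model name
        "messages": [{"role": "user", "content": "Summarize this document."}],
        "prompt_cache_key": "docs-summary-v1",  # new optional field from this PR
    },
    timeout=60,
)
print(response.json())

Sending a later request with the same prompt_cache_key and the same prompt is what lets the server reuse the cached prompt, per the field's description.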

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Client
  participant API as DeepInfra API
  participant Cache as Prompt Cache
  participant Model as Model Runtime

  Client->>API: POST /chat/completions {prompt, prompt_cache_key?}
  alt prompt_cache_key provided
    API->>Cache: Lookup(key)
    alt Cache hit
      Cache-->>API: Cached prompt/context
    else Cache miss
      API->>Model: Generate with prompt
      Model-->>API: Output
      API->>Cache: Store(key, prompt/context)
    end
  else no key
    API->>Model: Generate with prompt
    Model-->>API: Output
  end
  API-->>Client: Response
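
Server-side, the diagram amounts to a cache-aside pattern. A minimal sketch in Python, assuming an in-memory dict as the cache and a stubbed generate() as the model runtime (both hypothetical; actual scope, TTL, and eviction behavior are server-defined and not specified by this schema change):

def generate(prompt: str, cached_context: str | None = None) -> str:
    # Stand-in for the model runtime; a real server would run inference here.
    return ("(warm) " if cached_context else "") + f"completion for: {prompt}"

prompt_cache: dict[str, str] = {}  # stand-in for the server-side prompt cache

def chat_completion(prompt: str, prompt_cache_key: str | None = None) -> str:
    if prompt_cache_key is None:
        return generate(prompt)                         # no key: plain generation
    cached = prompt_cache.get(prompt_cache_key)         # Lookup(key)
    if cached is not None:
        return generate(prompt, cached_context=cached)  # cache hit: reuse context
    output = generate(prompt)                           # cache miss: full generation
    prompt_cache[prompt_cache_key] = prompt             # Store(key, prompt/context)
    return output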

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks (1 passed, 1 warning, 1 inconclusive)

❌ Failed Checks (1 warning, 1 inconclusive)

Description Check: ⚠️ Warning
The pull request description includes only a placeholder line and lacks any context, rationale, or details about the changes, leaving reviewers without the information needed to understand the purpose and impact of the addition. Resolution: expand the description to explain the motivation for adding prompt_cache_key, how it will be used, and any backward-compatibility considerations, and consider adding a pull request template to ensure consistent, informative descriptions in the future.

Title Check: ❓ Inconclusive
The title "feat:@coderabbitai" is too vague to convey the primary change and does not describe the addition of the new prompt_cache_key field to the OpenAPI schema. Resolution: rename the title to briefly summarize the main change, for example "feat: add prompt_cache_key to OpenAPI input schema".

✅ Passed Checks (1 passed)

Docstring Coverage: ✅ Passed
No functions found in the changes; docstring coverage check skipped.

Poem

I stash my prompts like clover in spring,
A cache-key tune that I quietly sing.
Hop, store, reuse—what a clever trick!
Next request comes, and it’s lightning-quick.
Ears up, whiskers twitch—merge bells ring! 🐇✨


@HavenDV HavenDV enabled auto-merge (squash) September 8, 2025 21:17
@HavenDV HavenDV merged commit 6f7c835 into main Sep 8, 2025
3 of 4 checks passed
@HavenDV HavenDV deleted the bot/update-openapi_202509082116 branch September 8, 2025 21:18
@coderabbitai coderabbitai bot changed the title from "feat:@coderabbitai" to "feat:Add optional prompt_cache_key to chat/completions OpenAPI schema" Sep 8, 2025

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
src/libs/DeepInfra/openapi.yaml (3)

7290-7294: Clarify semantics: scope, TTL, and precedence vs message-level cache_control

Please document:

  • Scope (per user/account/team) and cross-tenant isolation
  • TTL/eviction behavior
  • Precedence with ChatCompletion[User|System|Assistant|Tool]Message.cache_control if both are set

7290-7294: Parity: consider adding prompt_cache_key to OpenAICompletionsIn

If caching is supported for non-chat completions too, expose the same field there for consistency.

YAML patch (outside this hunk, under components.schemas.OpenAICompletionsIn.properties):

+        prompt_cache_key:
+          title: Prompt Cache Key
+          type: string
+          description: 'A key to identify the prompt cache for reuse across requests. Scoped to the authenticated account/team. Case-sensitive.'
+          minLength: 1
+          maxLength: 256
+          pattern: '^[A-Za-z0-9._:-]+$'
+          nullable: true

7290-7294: Wording nit in description

“identify prompt cache” → “identify the prompt cache”.

-          description: 'A key to identify prompt cache for reuse across requests. If provided, the prompt will be cached and can be reused in subsequent requests with the same key.'
+          description: 'A key to identify the prompt cache for reuse across requests. If provided, the prompt will be cached and can be reused in subsequent requests with the same key.'
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9288694 and 563372e.

⛔ Files ignored due to path filters (5)
  • src/libs/DeepInfra/Generated/DeepInfra.DeepInfraClient.OpenaiChatCompletions.g.cs is excluded by !**/generated/**
  • src/libs/DeepInfra/Generated/DeepInfra.DeepInfraClient.OpenaiChatCompletions2.g.cs is excluded by !**/generated/**
  • src/libs/DeepInfra/Generated/DeepInfra.IDeepInfraClient.OpenaiChatCompletions.g.cs is excluded by !**/generated/**
  • src/libs/DeepInfra/Generated/DeepInfra.IDeepInfraClient.OpenaiChatCompletions2.g.cs is excluded by !**/generated/**
  • src/libs/DeepInfra/Generated/DeepInfra.Models.OpenAIChatCompletionsIn.g.cs is excluded by !**/generated/**
📒 Files selected for processing (1)
  • src/libs/DeepInfra/openapi.yaml (1 hunks)
🔇 Additional comments (1)
src/libs/DeepInfra/openapi.yaml (1)

7290-7294: LGTM: non-breaking optional field added

Optional, nullable string fits existing schema style. Safe addition.

Comment on lines +7290 to +7294
        prompt_cache_key:
          title: Prompt Cache Key
          type: string
          description: 'A key to identify prompt cache for reuse across requests. If provided, the prompt will be cached and can be reused in subsequent requests with the same key.'
          nullable: true

🛠️ Refactor suggestion

Add length/charset constraints to prevent abuse and collisions

Constrain the key to a sane length and allowed characters; this avoids accidental PII, log injection, and oversized headers/bodies.

         prompt_cache_key:
           title: Prompt Cache Key
           type: string
-          description: 'A key to identify prompt cache for reuse across requests. If provided, the prompt will be cached and can be reused in subsequent requests with the same key.'
+          description: 'A key to identify the prompt cache for reuse across requests. Scoped to the authenticated account/team. Case-sensitive.'
+          minLength: 1
+          maxLength: 256
+          pattern: '^[A-Za-z0-9._:-]+$'
           nullable: true
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (resulting YAML):

        prompt_cache_key:
          title: Prompt Cache Key
          type: string
          description: 'A key to identify the prompt cache for reuse across requests. Scoped to the authenticated account/team. Case-sensitive.'
          minLength: 1
          maxLength: 256
          pattern: '^[A-Za-z0-9._:-]+$'
          nullable: true
🤖 Prompt for AI Agents
In src/libs/DeepInfra/openapi.yaml around lines 7290-7294, the prompt_cache_key
schema lacks length and charset constraints; add validation to limit size and
allowed characters to prevent PII, log injection, and oversized payloads. Update
the schema to include minLength (e.g. 1), maxLength (e.g. 128), and a
restrictive pattern that only permits safe characters (for example alphanumeric
and a small set of separators like _ - . :), and adjust the description to note
these limits; ensure the regex disallows spaces and control characters.
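
If the suggested constraints are adopted, clients can pre-validate keys before sending them. A minimal sketch in Python, mirroring the minLength/maxLength/pattern from the committable suggestion above (suggested, not part of the merged schema):

import re

KEY_PATTERN = re.compile(r"^[A-Za-z0-9._:-]+$")

def is_valid_prompt_cache_key(key: str) -> bool:
    # Suggested (not merged) constraints: 1-256 chars,
    # alphanumerics plus the separators . _ : - only.
    return 1 <= len(key) <= 256 and KEY_PATTERN.fullmatch(key) is not None

assert is_valid_prompt_cache_key("docs-summary-v1")
assert not is_valid_prompt_cache_key("has spaces")  # spaces are disallowed
assert not is_valid_prompt_cache_key("")            # below minLength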
