
Conversation

@HavenDV (Contributor) commented Jul 8, 2025

Summary by CodeRabbit

  • Documentation
    • Clarified the maximum limit for transcript input to 100 hours.
    • Updated the description of input text size to reflect the context limit of the selected model without specifying a default value.

@coderabbitai bot commented Jul 8, 2025

Walkthrough

The documentation for the LemurBaseParams schema in the OpenAPI specification was updated. The transcript_ids property description now only references a 100-hour maximum, and the input_text property description no longer specifies a default context limit value, instead referring generically to the model's context limit.
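For illustration, the two affected properties would now read roughly as sketched below. The transcript_ids block matches the snippet quoted later in this review; the input_text block is approximated from the summary above (its x-label and exact wording are assumptions, and only the description text changed in this PR), so consult src/libs/AssemblyAI/openapi.yaml for the authoritative text.

transcript_ids:
  x-label: Transcript IDs
  description: |
    A list of completed transcripts with text. Up to a maximum of 100 hours.
    Use either transcript_ids or input_text as input into LeMUR.
  type: array
  items:
    x-label: Transcript ID
    type: string
    format: uuid
input_text:
  x-label: Input text        # label approximated, not taken from the spec
  description: |
    Custom formatted transcript data. Maximum size is the context limit of the selected model.
    Use either transcript_ids or input_text as input into LeMUR.
  type: string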

Changes

File: src/libs/AssemblyAI/openapi.yaml
Change Summary: Updated descriptions for transcript_ids and input_text properties in LemurBaseParams schema

Poem

In YAML fields, the limits shift,
Hours now guide the transcript list.
The token count takes flight,
Context limits set the height.
With clearer words, the docs are bright—
A rabbit’s hop toward making things right! 🐇✨

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai auto-generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository (a minimal sketch follows this list).
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
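Building on the bullets above, a minimal .coderabbit.yaml might look like the sketch below. The schema comment is the one quoted in the last bullet; the keys shown (language, reviews.profile, reviews.auto_review.enabled) are common options rather than a complete or guaranteed set, so verify them against the configuration documentation.

# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
language: "en-US"        # language used for review comments
reviews:
  profile: "chill"       # review tone; this PR was reviewed with the CHILL profile
  auto_review:
    enabled: true        # automatically review new pull requests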

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@HavenDV enabled auto-merge (squash) July 8, 2025 01:42
@HavenDV merged commit 36666e1 into main Jul 8, 2025 (3 of 4 checks passed)
@HavenDV deleted the bot/update-openapi_202507080142 branch July 8, 2025 01:43
@coderabbitai bot changed the title from "feat:@coderabbitai" to "feat:Update LemurBaseParams schema documentation for transcript_ids and input_text" Jul 8, 2025
@coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 016994a and 72a40ac.

⛔ Files ignored due to path filters (1)
  • src/libs/AssemblyAI/Generated/AssemblyAI.Models.LemurBaseParams.g.cs is excluded by !**/generated/**
📒 Files selected for processing (1)
  • src/libs/AssemblyAI/openapi.yaml (2 hunks)
🔇 Additional comments (1)
src/libs/AssemblyAI/openapi.yaml (1)

5588-5592: Verify that all remaining references to the “100 000 tokens” default were removed

The default context-limit figure was dropped here, but earlier examples or docs might still cite the 100 000 value.
Please grep the spec and markdown docs to ensure no stale numbers persist; otherwise users will see conflicting guidance.
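
One way to run that check is sketched below; it is not part of the PR, and the docs/ path is an assumption that may not exist in this repository.

#!/bin/bash
# Look for leftover mentions of the old 100000-token default
# (also matches the "100,000" and "100 000" spellings).
grep -RInE '100[ ,]?000' --include='*.yaml' --include='*.md' src/ docs/ 2>/dev/null \
  || echo "no stale references found"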

Comment on lines 5577 to 5582

  transcript_ids:
    x-label: Transcript IDs
    description: |
-     A list of completed transcripts with text. Up to a maximum of 100 files or 100 hours, whichever is lower.
+     A list of completed transcripts with text. Up to a maximum of 100 hours.
      Use either transcript_ids or input_text as input into LeMUR.
    type: array

💡 Verification agent

🧩 Analysis chain

Description change leaves the array unconstrained – consider adding a formal maxItems or clarifying docs

The previous wording (“100 files or 100 hours”) communicated two independent limits.
After the edit the hours-based limit remains, but the array length is now unspecified.
Because OpenAPI cannot express a duration constraint, clients will infer the only enforced limit is hours and might push thousands of transcript_ids in one call.

If the backend still restricts the list size (e.g. 100 transcripts), add an explicit schema constraint:

         items:
           x-label: Transcript ID
           type: string
           format: uuid
+        maxItems: 100        # hard cap retained from the original spec

At minimum, extend the sentence to state whether the number of transcripts is unlimited.
Without this, SDKs and generated docs lose an important contract.


🏁 Script executed:

#!/bin/bash
# Locate the transcript_ids definition
grep -R "transcript_ids:" -n src/libs/AssemblyAI/openapi.yaml

# Show the surrounding context of that block
sed -n '5560,5600p' src/libs/AssemblyAI/openapi.yaml

# Check for any existing maxItems constraints in the file
grep -R "maxItems" -n src/libs/AssemblyAI/openapi.yaml

Length of output: 1958


Add explicit maxItems to the transcript_ids array

We confirmed that the transcript_ids schema in src/libs/AssemblyAI/openapi.yaml (around lines 5577–5582) lacks a maxItems constraint, even though the description still mentions “100 hours” and the original spec enforced a 100-item cap. Without this, generated clients may assume unlimited IDs.

Please update the schema as follows:

   properties:
     transcript_ids:
       x-label: Transcript IDs
       description: |
         A list of completed transcripts with text. Up to a maximum of 100 hours.
         Use either transcript_ids or input_text as input into LeMUR.
       type: array
       items:
         x-label: Transcript ID
         type: string
         format: uuid
+      maxItems: 100        # enforce the original 100-transcript limit

If the service truly allows unlimited transcript IDs regardless of count, please clarify in the description instead.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/libs/AssemblyAI/openapi.yaml around lines 5577 to 5582, the
transcript_ids array schema is missing a maxItems constraint, which should limit
the number of items to 100 as per the original specification. Add a maxItems:
100 property to the transcript_ids array definition to explicitly enforce this
limit and align with the description. If the service actually supports unlimited
transcript IDs, update the description to clarify this instead.
