
Changed prompt to create more randomness in the quiz #22

Open
matteomekhail wants to merge 2 commits into T3-Content:main from matteomekhail:feature/more-randomness-in-the-quip

Conversation

@matteomekhail
Contributor

@matteomekhail matteomekhail commented Feb 23, 2026

Changed the prompt, implemented a blacklist of recent prompts, added a random casual "style", and raised the temperature.

Summary by CodeRabbit

  • New Features
    • Improved prompt generation with diverse style options for greater creativity
    • Intelligent system that learns from recent prompt history to prevent repetition and similar patterns
    • Optimized generation parameters for enhanced output quality and consistency

@coderabbitai

coderabbitai bot commented Feb 23, 2026

Warning

Rate limit exceeded

@matteomekhail has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 17 minutes and 25 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📝 Walkthrough


The changes introduce a prompt diversification system that retrieves recent prompts from the database and integrates them into the generation pipeline, enabling style-based prompt generation and automatic avoidance of repeated patterns.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Database Integration**<br>`db.ts` | New exported function `getRecentPrompts(limit: number = 50): string[]` retrieves the most recent prompts from the database with safe parsing and filtering, ordering by `id` descending in SQL. |
| **Dynamic Prompt System**<br>`game.ts` | Integrates recent prompts into the generation pipeline: imports `getRecentPrompts`, updates the `buildPromptSystem` signature to accept a `recentPrompts` parameter, adds a `PROMPT_STYLES` array for style selection, and raises the temperature to 1.2 for greater variability. |
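As a rough illustration of the "safe parsing and filtering" the summary attributes to `getRecentPrompts`, the snippet below sketches only the row-cleaning step over an in-memory array. The `PromptRow` shape, field name, and function name here are assumptions for illustration, not the actual `db.ts` code, which queries SQLite ordered by `id` descending.

```typescript
// Hypothetical sketch of the filtering logic: take the newest rows up to
// `limit`, coerce each to a trimmed string, and drop anything empty or null.
type PromptRow = { prompt_text: string | null };

function parseRecentPrompts(rows: PromptRow[], limit: number = 50): string[] {
  return rows
    .slice(0, limit) // rows are assumed newest-first, matching ORDER BY id DESC
    .map((r) => (typeof r.prompt_text === "string" ? r.prompt_text.trim() : ""))
    .filter((text) => text.length > 0); // discard null, empty, or whitespace-only rows
}

const rows: PromptRow[] = [
  { prompt_text: "Write a haiku about cats" },
  { prompt_text: null },
  { prompt_text: "   " },
  { prompt_text: "Explain recursion to a pirate" },
];
console.log(parseRecentPrompts(rows, 50));
```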

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Game as Game Logic
    participant DB as Database
    participant Prompt as Prompt Builder
    participant API as Text Generation API

    Game->>DB: getRecentPrompts(30)
    DB-->>Game: recent prompts array

    Game->>Prompt: buildPromptSystem(recentPrompts)
    Prompt->>Prompt: Generate system prompt<br/>with style guidance and<br/>avoidance patterns
    Prompt-->>Game: system prompt string

    Game->>Game: Select random style<br/>from PROMPT_STYLES
    Game->>API: generateText(systemPrompt,<br/>userPrompt, temp=1.2)
    API-->>Game: generated prompt
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 Recent prompts now flow, through databases deep,
Styles spin round like carrots in a creative sweep,
No patterns repeat when our system's awake,
Temperature high, fresh prompts we make! 🎲✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped because CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title "Changed prompt to create more randomness in the quiz" accurately reflects the main objective of the pull request: increasing randomness in quiz generation through prompt-system enhancements. |


@macroscopeapp

macroscopeapp bot commented Feb 23, 2026

Bias quiz prompt generation toward randomness by using recent prompts via db.getRecentPrompts(30) and injecting a random style in game.callGeneratePrompt

Adds db.getRecentPrompts(limit=50) to fetch recent prompt texts; updates game.buildPromptSystem to include recent prompts as exclusions; updates game.callGeneratePrompt to pass recent prompts, inject a random style into the user prompt, and set temperature to 1.0 for anthropic/* or 1.2 otherwise.

📍Where to Start

Start with callGeneratePrompt in game.ts, then review buildPromptSystem, and finally getRecentPrompts in db.ts.
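The summary above says `callGeneratePrompt` injects a random style into the user prompt. A minimal sketch of that injection step follows; the `PROMPT_STYLES` entries, the prompt wording, and the injectable `rng` parameter are assumptions for illustration, not the actual `game.ts` code.

```typescript
// Hypothetical style-injection sketch: pick one style uniformly at random and
// splice it into the user prompt. An rng parameter is added so the choice is
// testable; production code would just use Math.random().
const PROMPT_STYLES = ["absurdist", "deadpan", "surreal", "wholesome"];

function buildUserPrompt(rng: () => number = Math.random): string {
  const style = PROMPT_STYLES[Math.floor(rng() * PROMPT_STYLES.length)];
  return `Generate one quiz prompt in ${style} style.`;
}

console.log(buildUserPrompt(() => 0)); // deterministic rng picks the first style
```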


Macroscope summarized 895c592.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
game.ts (2)

212-215: Remove the dead style variable inside buildPromptSystem.

Line 214 computes a random style but never references it within the function body — the string goes nowhere. The actual style selection and use happen separately in callGeneratePrompt at lines 239 and 245. This is a leftover from refactoring that wastes a Math.random() call and misleads the reader into expecting ${style} to appear in the returned system string.

♻️ Proposed fix

```diff
 function buildPromptSystem(recentPrompts: string[]): string {
   const examples = shuffle([...ALL_PROMPTS]).slice(0, 80);
-  const style = PROMPT_STYLES[Math.floor(Math.random() * PROMPT_STYLES.length)];
-
   let system = `You are a comedy writer...
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@game.ts` around lines 212 - 215, The buildPromptSystem function declares a
local variable style that is never used; remove the dead declaration (the const
style = PROMPT_STYLES[...] line) so you don't consume a pointless Math.random()
and avoid confusion with the actual style selection performed in
callGeneratePrompt; update buildPromptSystem to only compute examples and the
system string, leaving style handling to callGeneratePrompt and any other
callers.

187-187: Inconsistent import extension for "./db".

Line 187 imports from "./db" (no extension) while line 302 imports from "./db.ts" (with extension). Both resolve correctly in Bun, but pick one convention and stick with it throughout the file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@game.ts` at line 187, The import for getRecentPrompts uses "./db" while
elsewhere the file imports "./db.ts" — make imports consistent by choosing one
convention and updating the other(s). Locate the import statements referencing
the db module (e.g., the getRecentPrompts import and the later import that uses
"./db.ts") and change them all to the same path form (either all "./db" or all
"./db.ts") so the file consistently uses one extension convention.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@game.ts`:
- Line 243: The temperature is hard-coded to 1.2 which violates Anthropic's
0.0–1.0 limit and causes OpenRouter requests to be rejected when an Anthropic
model from the MODELS array (e.g., "claude-opus-4.6" or "claude-sonnet-4.6") is
selected; update the request-building logic to enforce a valid temperature per
model: detect the chosen model from MODELS at selection time and clamp or
override the temperature to Math.min(requestedTemperature, 1.0) (or set to 1.0)
for Anthropic models before sending the request (the code path that calls
withRetry and builds the model request payload must apply this check). Ensure
the fix preserves the original temperature for non-Anthropic models and prevents
withRetry from exhausting retries due to API rejections.
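The clamp the reviewer describes can be sketched in a few lines. The `"anthropic/"` prefix check is an assumption about how entries in the `MODELS` array are named; the actual detection logic in `game.ts` may differ.

```typescript
// Minimal sketch of per-model temperature clamping: Anthropic's API accepts
// temperature in [0.0, 1.0], so cap the requested value there for Anthropic
// models and pass it through unchanged for everything else.
function clampTemperature(model: string, requested: number): number {
  return model.startsWith("anthropic/") ? Math.min(requested, 1.0) : requested;
}

console.log(clampTemperature("anthropic/claude-opus-4.6", 1.2)); // 1
console.log(clampTemperature("openai/gpt-4o", 1.2));             // 1.2
```

Applying this where the request payload is built keeps the intended 1.2 for other providers while preventing `withRetry` from burning retries on requests Anthropic will always reject.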


