perf: less deterministic AI — add temperature + split reasoning effort #23
EtanHey wants to merge 1 commit into T3-Content:main
Conversation
Set higher randomness and split reasoning effort by configuring temperature and `reasoning.effort` per call instead of globally.
The problem
Same models + same prompt patterns + no temperature = same jokes every round. The game feels repetitive because all AI calls use default sampling parameters and `reasoning.effort: "medium"` globally.

The fix (6 lines)

- `temperature: 1.2` on prompt generation — more diverse prompts
- `temperature: 1.3` on answer generation — wilder, less predictable jokes
- `temperature: 0.3` on voting — judges stay decisive, not random
- `reasoning.effort: "high"` for creative calls — models think harder about comedy
- `reasoning.effort: "low"` for voting — judges don't need deep reasoning to pick A or B

Why these values
What changed
- `callGeneratePrompt`
- `callGenerateAnswer`
- `callVote`
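The per-call split described above could be expressed as a small settings table that each call site reads from. This is a minimal sketch, not the PR's actual diff: the `SAMPLING` table, the `SamplingConfig` type, and the key names are illustrative assumptions; only the temperatures, effort levels, and the three call-site names come from the PR.

```typescript
// Hypothetical per-call sampling table; values mirror the PR description,
// everything else (names, shape) is illustrative.
type ReasoningEffort = "low" | "medium" | "high";

interface SamplingConfig {
  temperature: number;
  reasoningEffort: ReasoningEffort;
}

const SAMPLING: { [call: string]: SamplingConfig } = {
  // Creative calls: high temperature, high effort — more diverse, wilder output.
  generatePrompt: { temperature: 1.2, reasoningEffort: "high" },
  generateAnswer: { temperature: 1.3, reasoningEffort: "high" },
  // Voting: low temperature, low effort — decisive A-or-B picks, no deep reasoning.
  vote: { temperature: 0.3, reasoningEffort: "low" },
};
```

Each wrapper (`callGeneratePrompt`, `callGenerateAnswer`, `callVote`) would then pass its entry from this table to the model call instead of relying on the global defaults.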