Bounty Bot has two distinct rule systems that work together:
| Type | Directory | Execution | Purpose |
|---|---|---|---|
| Code Rules | `rules/code/` | Run by the engine at pipeline time | Programmatic pass/fail checks |
| LLM Rules | `rules/llm/` | Injected into the LLM system prompt | Natural-language instructions for the model |
Both are loaded at startup and can be hot-reloaded via POST /api/v1/rules/reload.
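A reload is a plain HTTP POST to that endpoint; a minimal sketch, assuming the service listens on `localhost:3000` (host and port are illustrative, not specified in this doc):

```shell
# Hot-reload both rule systems without restarting the service.
# Host and port are assumptions; adjust to your deployment.
curl -X POST http://localhost:3000/api/v1/rules/reload
```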
```
rules/
  code/                 # Deterministic, code-verified checks
    validity.ts         # Body length, title quality, structure
    media.ts            # Evidence requirements
    spam.ts             # Template detection, generic titles
    content.ts          # Profanity, length, context
    scoring.ts          # Penalty weight adjustments
  llm/                  # Instructions the LLM must follow
    evaluation.ts       # Evidence priority, reproducibility
    tone.ts             # Professional tone, no sympathy
    spam-detection.ts   # Template farming, AI filler
    output-format.ts    # Tool usage, reasoning structure
```
Code Rules (rules/code/)
Code rules are TypeScript functions that evaluate an issue context and return true (pass) or false (fail). They execute deterministically in the pipeline before the LLM gate.
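A minimal sketch of what such a function might look like. The exact `CodeRule` interface is not shown in this doc, so the shape below is an assumption modeled on the `LLMRule` interface further down:

```typescript
// Hypothetical shapes -- the real types live in src/rules/types.ts.
interface RuleContext {
  title: string;
  body: string;
}

interface CodeRule {
  id: string;
  description: string;
  check: (ctx: RuleContext) => boolean; // true = pass, false = fail
}

// Example deterministic check: reject bodies that are too short to be useful.
const minBodyLength: CodeRule = {
  id: 'code.validity.min-body-length',
  description: 'Body must be at least 50 characters',
  check: (ctx) => ctx.body.trim().length >= 50,
};

const ok = minBodyLength.check({ title: 'Bug', body: 'x'.repeat(60) });
const fail = minBodyLength.check({ title: 'Bug', body: 'too short' });
```

Because these checks run before the LLM gate, they never consume model tokens on issues that fail a hard requirement.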
LLM Rules (rules/llm/)
LLM rules are natural-language instructions injected into the model's system prompt. They tell the LLM how to reason, what to prioritize, and how to format its output. They don't execute code — they shape the model's behavior.
Interface
```typescript
interface LLMRule {
  id: string;                  // e.g. "llm.eval.evidence-first"
  description: string;         // Short label for logs
  category: string;            // evaluation | tone | spam | output-format | ...
  priority: LLMRulePriority;   // critical | high | normal | low
  enabled?: boolean;           // default: true
  instruction: string;         // The actual text injected into the prompt
  condition?: (ctx: RuleContext) => boolean; // Optional: only inject when true
}
```
Priority Ordering
Instructions are injected in priority order — critical first, low last:
| Priority | When to use |
|---|---|
| `critical` | Core evaluation criteria that must never be violated |
| `high` | Important constraints (confidence calibration, duplicate rules) |
| `normal` | Tone and formatting preferences |
| `low` | Nice-to-have guidelines |
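The ordering above can be sketched as a simple sort before prompt assembly. This is an assumed implementation, not the engine's actual code:

```typescript
// Assumed sketch: sort enabled rules critical -> low, then number them
// for injection into the system prompt.
type LLMRulePriority = 'critical' | 'high' | 'normal' | 'low';

const PRIORITY_ORDER: Record<LLMRulePriority, number> = {
  critical: 0,
  high: 1,
  normal: 2,
  low: 3,
};

interface InjectableRule {
  priority: LLMRulePriority;
  instruction: string;
}

function buildInstructionList(rules: InjectableRule[]): string {
  return [...rules]
    .sort((a, b) => PRIORITY_ORDER[a.priority] - PRIORITY_ORDER[b.priority])
    .map((r, i) => `${i + 1}. ${r.instruction}`)
    .join('\n');
}

const out = buildInstructionList([
  { priority: 'low', instruction: 'Prefer concise wording.' },
  { priority: 'critical', instruction: 'Prioritize concrete evidence.' },
]);
// The critical rule is numbered first regardless of input order.
```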
Conditional Rules
LLM rules can have a condition function. The instruction is only injected when the condition returns true:
```typescript
{
  id: 'llm.spam.burst-awareness',
  description: 'Extra scrutiny for high spam scores',
  category: 'spam',
  priority: 'normal',
  instruction: 'The pre-computed spam score is high. Pay extra attention to quality.',
  condition: (ctx) => ctx.spamScore > 0.5,
}
```
Writing an LLM Rule
```typescript
// rules/llm/custom.ts
import type { LLMRule } from '../../src/rules/types.js';

const rules: LLMRule[] = [
  {
    id: 'llm.custom.platform-specific',
    description: 'Platform-specific evaluation context',
    category: 'evaluation',
    priority: 'normal',
    instruction:
      'This bounty program is for a web application. Mobile-only bugs reported ' +
      'without specifying the responsive viewport are still VALID if the screenshots ' +
      'clearly show the issue in a browser.',
  },
];

export default rules;
```
How They Work Together
Pipeline execution:
1. Media check → pass/fail
2. Spam detection → score
3. Duplicate detection → score
4. Edit history → score
5. Code rules evaluated → reject/require/penalize/flag results
6. LLM gate receives:
a. The issue content
b. Code rule results (pass/fail report)
c. LLM rule instructions (numbered list)
→ LLM calls deliver_verdict
The LLM prompt includes both:
## Code Rule Results
12/15 programmatic checks passed.
### Failed Checks
- [PENALIZE] code.content.has-context: No page/URL reference found.
## Evaluation Instructions
You MUST follow these rules when making your verdict:
1. Always prioritize concrete evidence over narrative quality...
2. A valid bug report must contain enough information for reproduction...
3. Never soften a verdict out of sympathy...