Two distinct rule systems:

- `rules/code/` — programmatic checks (validity, media, spam, content, scoring), executed deterministically in the pipeline; they produce pass/fail results.
- `rules/llm/` — natural-language instructions (evaluation, tone, spam-detection, output-format), injected into the LLM system prompt in priority order.

LLM rules support conditional injection (e.g. only when the spam score exceeds 0.5). Both rule types are hot-reloadable via `POST /api/v1/rules/reload`.

Updated types, loader, engine, pipeline, README, and docs/RULES.md.
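As a rough illustration of the split, the two rule shapes could look like this in TypeScript. Note that `CodeRule`, `LlmRule`, and every field name below are assumptions for illustration, not the repo's actual types:

```typescript
// Assumed, illustrative shapes: not the repo's actual types.
interface CodeRule {
  id: string;
  severity: "invalid" | "penalize" | "flag";
  check: (issue: { title: string; body: string }) => boolean; // true = pass
}

interface LlmRule {
  id: string;
  priority: "critical" | "high" | "normal" | "low";
  instruction: string;
  // Conditional injection: included in the prompt only when this returns true.
  when?: (signals: { spam: number }) => boolean;
}

// Example code rule: reject bodies under 50 characters.
const minBodyLength: CodeRule = {
  id: "min-body-length",
  severity: "invalid",
  check: (issue) => issue.body.trim().length >= 50,
};

// Example LLM rule: injected only when the spam score exceeds 0.5.
const spamScrutiny: LlmRule = {
  id: "spam-scrutiny",
  priority: "high",
  instruction: "Apply extra scrutiny to templated or AI-filler reports.",
  when: (signals) => signals.spam > 0.5,
};
```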
|**Code Rules**|Programmatic checks from `rules/code/*.ts`|`invalid` or penalty|
|**LLM Gate**|Gemini 3.1 Pro + LLM instructions from `rules/llm/*.ts`|`invalid`|
If all stages pass, the issue is labeled **valid**.
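The stage chaining above can be sketched as a short-circuiting loop. This is a minimal illustration with hypothetical names (`runPipeline`, `Stage`), not the bot's actual API:

```typescript
// Minimal sketch (hypothetical names): each stage either passes the issue
// along or short-circuits the whole pipeline with an invalid verdict.
type Stage = (issue: { title: string; body: string }) => "pass" | "invalid";

function runPipeline(
  issue: { title: string; body: string },
  stages: Stage[],
): "valid" | "invalid" {
  for (const stage of stages) {
    if (stage(issue) === "invalid") return "invalid"; // stop at first failure
  }
  return "valid"; // all stages passed
}
```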
Full schemas and examples: **<a href="docs/API.md">docs/API.md</a>**
## Rules Engine
Two kinds of rules, two directories:
```
rules/
  code/                # Programmatic checks — executed by the engine
    validity.ts        # body length, title quality, structure
    media.ts           # evidence requirements
    spam.ts            # template detection, generic titles
    content.ts         # profanity, length limits, context
    scoring.ts         # penalty weight adjustments
  llm/                 # LLM instructions — injected into the prompt
    evaluation.ts
    tone.ts
    spam-detection.ts  # template farming, AI filler, screenshot mismatch
    output-format.ts   # tool usage, reasoning order, no internal leaks
```
**Code rules** run programmatically and produce pass/fail results. They short-circuit the pipeline:
| Severity | Effect |
|---|---|
|`penalize`| Adds weight to penalty score |
|`flag`| Logged but no verdict change |
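A code-rule engine consistent with this table might apply severities like so. This is a sketch: `applyResults` and `RuleResult` are hypothetical names, and the flat penalty weight of 1 is a placeholder (per the tree above, real weights are adjusted in `scoring.ts`):

```typescript
// Sketch of severity handling (assumed names; every penalty counts as 1 here).
type Severity = "invalid" | "penalize" | "flag";
interface RuleResult { id: string; severity: Severity; passed: boolean }

function applyResults(results: RuleResult[]) {
  let penalty = 0;
  const flags: string[] = [];
  for (const r of results) {
    if (r.passed) continue;
    // `invalid` short-circuits: no later rule can rescue the issue.
    if (r.severity === "invalid") return { verdict: "invalid" as const, penalty, flags };
    if (r.severity === "penalize") penalty += 1;
    else flags.push(r.id); // `flag`: logged only, verdict unchanged
  }
  return { verdict: "continue" as const, penalty, flags };
}
```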
**LLM rules** are natural-language instructions injected into the model's system prompt, ordered by priority (`critical` > `high` > `normal` > `low`). They shape how the model reasons and phrases its verdict.
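The priority ordering can be sketched as a simple sort before prompt assembly (hypothetical names; the actual injection logic lives in the engine):

```typescript
// Sketch of priority-ordered injection (hypothetical names).
const ORDER = { critical: 0, high: 1, normal: 2, low: 3 } as const;
type Priority = keyof typeof ORDER;
interface LlmRule { priority: Priority; instruction: string }

// Sorts rules by priority and renders them as prompt bullet points.
function buildPromptSection(rules: LlmRule[]): string {
  return [...rules]
    .sort((a, b) => ORDER[a.priority] - ORDER[b.priority])
    .map((r) => `- ${r.instruction}`)
    .join("\n");
}
```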
Hot-reload without restart: `POST /api/v1/rules/reload`
Full documentation: **<a href="docs/RULES.md">docs/RULES.md</a>**