
feat: twitch chat integration + audience-aware AI#14

Open
EtanHey wants to merge 3 commits into T3-Content:main from EtanHey:feat/twitch-chat

Conversation


@EtanHey EtanHey commented Feb 23, 2026

What this does

Connects quipslop to Twitch chat and makes the AI models audience-aware — they read the room, react to chat energy, and let viewers influence the game.

Chat Infrastructure

  • IRC client (chat.ts) — tmi.js connection with spam filtering (URL spam, excessive caps, emote-only, bot commands) that still allows vote messages through
  • SQLite persistence (chat-store.ts) — batch inserts via db.transaction(), flushes every 5s or 50 messages. Same bun:sqlite pattern as existing db.ts
  • REST API — GET /api/chat/recent, /api/chat/round/:num, /api/chat/votes, /api/chat/stats (all rate-limited)
  • WebSocket broadcast — live chat stats pushed to spectators every 3s
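
The "flushes every 5s or 50 messages" policy can be sketched as follows. This is a simplified, testable model of the pattern, not the actual chat-store.ts code: the real persist step is a db.transaction() batch insert into bun:sqlite, stubbed here as an injectable callback, and all names are illustrative.

```typescript
// Hypothetical sketch of a size/interval batch-flush policy.
type ChatMessage = { username: string; content: string; timestamp: number };

const MAX_BATCH = 50;          // size trigger
const FLUSH_INTERVAL_MS = 5_000; // interval trigger

const pendingMessages: ChatMessage[] = [];

// In chat-store.ts this would be a db.transaction() batch INSERT;
// injectable here so the policy can be exercised in isolation.
let persist: (batch: ChatMessage[]) => void = () => {};

function flush(): void {
  if (pendingMessages.length === 0) return;
  const batch = pendingMessages.splice(0);
  try {
    persist(batch);
  } catch (e) {
    // Re-queue on failure so a failed insert does not drop messages.
    pendingMessages.unshift(...batch);
    throw e;
  }
}

function queueMessage(msg: ChatMessage): void {
  pendingMessages.push(msg);
  if (pendingMessages.length >= MAX_BATCH) flush();
}

function startPersistence(onBatch: (b: ChatMessage[]) => void): ReturnType<typeof setInterval> {
  persist = onBatch;
  return setInterval(flush, FLUSH_INTERVAL_MS);
}
```

Queueing the 50th message triggers an immediate flush; otherwise the interval timer picks up whatever is pending.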

Audience-Aware AI (the fun part)

Instead of just piping chat to a sidebar, the AI models actually read and respond to the audience:

  • Prompt generation — models see recent chat messages + crowd mood when crafting prompts
  • Answer generation — models factor in what the audience finds funny (hype patterns like 💀😂 "lmao" "dead" vs cringe patterns like "mid" "boring" 😴)
  • Vote phase — AI judges see audience reactions when scoring answers

This makes the models significantly less deterministic. The same prompt with a hyped chat vs a bored chat produces noticeably different comedy.
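
A minimal sketch of such a mood check, using the hype/cringe pattern lists from the description. The function name and tie-breaking are illustrative, and the /u flag is included so the emoji classes match whole code points rather than surrogate halves.

```typescript
// Illustrative crowd-mood classifier; pattern lists mirror the PR description.
const HYPE = /\b(lmao|lol|dead|omg|bruh|based|goated)\b|[💀😂🤣]/iu;
const CRINGE = /\b(mid|boring|cringe|meh|yawn|weak|trash)\b|[😴👎]/iu;

function crowdMood(messages: string[]): "hyped" | "bored" | "neutral" {
  let hype = 0;
  let cringe = 0;
  for (const m of messages) {
    if (HYPE.test(m)) hype++;
    if (CRINGE.test(m)) cringe++;
  }
  if (hype > cringe) return "hyped";
  if (cringe > hype) return "bored";
  return "neutral";
}
```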

Audience Voting

During vote phases, viewers type A/B (or 1/2) in chat:

  • One vote per viewer per round (deduped by username)
  • Lenient parsing — accepts "A!", "vote A", "a lol" (first valid char wins)
  • Fair weighting: 10pts per audience vote vs 100pts per AI judge, so chat can sway close rounds but can't override a unanimous panel
  • Audience bonus factors into winner determination
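
The parsing and weighting rules above can be sketched like this. It uses an unanchored match (first valid character anywhere in the message wins) and illustrative names; it is not the PR's exact code.

```typescript
// Hypothetical lenient vote parser: accepts "A!", "vote A", "a lol", "1", "2".
function parseVote(content: string): "A" | "B" | null {
  const m = content.match(/([ab12])/i);
  if (!m) return null;
  return /[a1]/i.test(m[1]) ? "A" : "B";
}

// 100 pts per AI judge vs 10 pts per audience vote: chat can sway a close
// round but cannot override a unanimous judge panel.
function roundScore(judgeVotes: number, audienceVotes: number): number {
  return judgeVotes * 100 + audienceVotes * 10;
}
```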

What I didn't change

  • No new dependencies besides tmi.js (+ types)
  • No architecture changes — follows existing patterns (bun:sqlite, Bun.serve(), log(), parsePositiveInt(), rate limiting)
  • No UI changes — this is all backend. Frontend integration (chat overlay, vote UI) is a natural follow-up
  • Game still works identically without a Twitch connection (anonymous read-only by default)

Config

TWITCH_CHANNEL=quipslop          # defaults to "quipslop"
TWITCH_BOT_USERNAME=             # optional, for authenticated connection
TWITCH_OAUTH_TOKEN=              # optional, anonymous works fine
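
Assuming the standard tmi.js options shape ({ channels, identity }), the env vars above could map to client options roughly like this. chatOptions is a hypothetical helper, not code from the PR; the fallback-to-anonymous behavior mirrors the description.

```typescript
// Sketch: build tmi.js client options from env, anonymous unless both
// credentials are provided.
type TmiOptions = {
  channels: string[];
  identity?: { username: string; password: string };
};

function chatOptions(env: Record<string, string | undefined>): TmiOptions {
  const channel = env.TWITCH_CHANNEL ?? "quipslop";
  const opts: TmiOptions = { channels: [channel] };
  if (env.TWITCH_BOT_USERNAME && env.TWITCH_OAUTH_TOKEN) {
    opts.identity = {
      username: env.TWITCH_BOT_USERNAME,
      password: env.TWITCH_OAUTH_TOKEN,
    };
  }
  return opts;
}
```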

Files

  • chat.ts (new): IRC client, spam filter, voting, stats (+232)
  • chat-store.ts (new): SQLite persistence, batch writes, queries (+128)
  • server.ts (modified): endpoints, WebSocket broadcast, lifecycle (+128)
  • game.ts (modified): audience context injection, voting, reactions (+78)

Test plan

  • Game runs normally without Twitch env vars (graceful no-op)
  • Chat messages flow through spam filter correctly
  • Vote messages (A/B/1/2) pass spam filter despite being 1 char
  • Audience votes are one-per-viewer, counted correctly
  • /api/chat/recent returns last N messages
  • /api/chat/stats returns live stats
  • AI responses visibly vary based on chat mood
  • Clean shutdown on SIGTERM (no orphaned connections)

Summary by CodeRabbit

  • New Features
    • Twitch chat integration: display and monitor live viewer messages in real time
    • Persistent chat history: recent chat and per-round message history retained across sessions
    • Audience voting: viewers can vote A or B during voting phases; votes are tallied live
    • Vote bonuses: audience votes add bonus points to contestants' scores
    • Chat analytics: live stats (unique chatters, message rate, top contributors) visible to spectators

- tmi.js IRC client with spam filtering, vote detection
- SQLite persistence with batch inserts (chat_messages table)
- REST endpoints: /api/chat/recent, /round/:num, /votes, /stats
- WebSocket broadcast of live chat stats to spectators
- audience context injection into prompt/answer/vote phases
- audience voting (type A/B in chat) with fair 10pt weighting
- humor reaction analysis (hype vs cringe patterns)
- clean lifecycle: startChat/stopChat with graceful shutdown

coderabbitai bot commented Feb 23, 2026


📝 Walkthrough

Adds Twitch chat ingestion, spam filtering, in-memory buffering with batched SQLite persistence, audience voting, and integration of chat-derived context and vote bonuses into game scoring; exposes chat data and voting results via new server endpoints and lifecycle wiring.

Changes

Cohort / File(s) Summary
Chat runtime & ingestion
chat.ts
New Twitch chat client integration (tmi.js): message normalization, spam filtering, listeners, recent message buffer, chat stats, audience voting (open/close/process), and public APIs to start/stop and query chat/votes.
Persistence layer
chat-store.ts
New chat persistence: in-memory pendingMessages queue, periodic and size-triggered flush (batch INSERT OR IGNORE) into SQLite, schema creation, row↔ChatMessage mapping, and APIs to queue messages and query recent/round messages and counts.
Game integration
game.ts
Integrates audience reaction analysis and audience context into prompt generation and scoring; opens/closes audience voting during rounds and applies audience vote bonuses to contestant scores.
Server wiring & API
server.ts
Wires chat lifecycle into server startup/shutdown, subscribes to onChatMessage to queue messages, starts persistence, broadcasts chat stats periodically, expands client state to include audienceVotes, and adds HTTP endpoints: /api/chat/recent, /api/chat/round/:num, /api/chat/votes, /api/chat/stats.
Dependencies
package.json
Adds runtime dependency tmi.js and dev dependency @types/tmi.js.

Sequence Diagram(s)

sequenceDiagram
    actor Twitch
    participant ChatModule as Chat Module
    participant SpamFilter as Spam Filter
    participant Queue as In-Memory Queue
    participant DB as SQLite DB
    participant Game as Game Logic
    participant Server as Server API

    Twitch->>ChatModule: IRC message
    ChatModule->>SpamFilter: normalize & check
    alt not spam
        SpamFilter->>Queue: queueMessage(ChatMessage)
        Queue->>ChatModule: notify listeners (onChatMessage)
        ChatModule->>Game: deliver ChatMessage
        par
            Queue->>DB: batch flush (size/interval) \nINSERT OR IGNORE (transaction)
        and
            Game->>Game: analyzeAudienceReactions / processVote
            Game->>Server: update scoring / audienceVotes
        end
    else spam
        SpamFilter-->>ChatModule: drop message
    end

    Server->>DB: query recent/round messages
    Server-->>Clients: /api/chat/* responses & WS broadcasts

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐇 I nibble bytes and buffer chatter bright,

I batch and tuck each chatter into night,
Votes hop in, A or B they cheer and cheer,
The audience nudges winners near — hooray, my dear! 🎉

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 10.34%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): check skipped; CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): the title 'feat: twitch chat integration + audience-aware AI' directly and accurately summarizes the main changes.




macroscopeapp bot commented Feb 23, 2026

Add Twitch chat ingestion with spam filtering, 5‑second/50‑message buffered persistence, REST/WebSocket chat APIs, and audience voting that adds 10 points per vote to game scoring

Introduce tmi.js chat client with basic spam filtering and in‑memory stats; persist messages via a periodic batch flush; expose recent/round chat, votes, and stats over HTTP and broadcast updates over WebSocket; integrate audience votes into runGame scoring and append recent chat context to LLM prompts.

📍Where to Start

Start with startChat and related handlers in chat.ts, then review persistence flow in chat-store.ts, and finally the server wiring and HTTP endpoints in server.ts.


Macroscope summarized 08d852b.

Comment on lines +51 to +55
function flush() {
  if (pendingMessages.length === 0) return;
  const batch = pendingMessages.splice(0);
  insertBatch(batch);
}

🟠 High chat-store.ts:51

Messages are removed from pendingMessages before insertBatch succeeds. If the insert throws, those messages are lost. Consider restoring the batch on error (e.g., catch { pendingMessages.unshift(...batch); throw e; }) or only splicing after success.

-function flush() {
-  if (pendingMessages.length === 0) return;
-  const batch = pendingMessages.splice(0);
-  insertBatch(batch);
+function flush() {
+  if (pendingMessages.length === 0) return;
+  const batch = pendingMessages.splice(0);
+  try {
+    insertBatch(batch);
+  } catch (e) {
+    pendingMessages.unshift(...batch);
+    throw e;
+  }
 }
🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file chat-store.ts around lines 51-55:

Messages are removed from `pendingMessages` before `insertBatch` succeeds. If the insert throws, those messages are lost. Consider restoring the batch on error (e.g., `catch { pendingMessages.unshift(...batch); throw e; }`) or only splicing after success.

Evidence trail:
chat-store.ts lines 51-55 (commit 52e24668dd79af0991e8fb0de16e1fe9608dd0a0): `pendingMessages.splice(0)` on line 53 removes messages before `insertBatch(batch)` on line 54 is called, with no error handling to restore messages if insert fails.

Comment on lines +64 to +66
export function startPersistence() {
  flushTimer = setInterval(flush, FLUSH_INTERVAL_MS);
}

🟡 Medium chat-store.ts:64

Calling startPersistence() multiple times leaks intervals since the previous timer reference is overwritten. Consider clearing any existing timer first, similar to how stopPersistence does.

Suggested change
export function startPersistence() {
  flushTimer = setInterval(flush, FLUSH_INTERVAL_MS);
}

export function startPersistence() {
  if (flushTimer) clearInterval(flushTimer);
  flushTimer = setInterval(flush, FLUSH_INTERVAL_MS);
}
🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file chat-store.ts around lines 64-66:

Calling `startPersistence()` multiple times leaks intervals since the previous timer reference is overwritten. Consider clearing any existing timer first, similar to how `stopPersistence` does.

Evidence trail:
chat-store.ts lines 47, 64-66, 68-74 at commit 52e24668dd79af0991e8fb0de16e1fe9608dd0a0. Line 65 shows `flushTimer = setInterval(flush, FLUSH_INTERVAL_MS);` with no guard. Lines 68-74 show stopPersistence does clear existing timer with `if (flushTimer) { clearInterval(flushTimer); ... }`.

chat.ts Outdated
Comment on lines +226 to +231
export function stopChat() {
  if (client) {
    client.disconnect().catch(() => {});
    client = null;
  }
}

🟡 Medium chat.ts:226

Consider making stopChat async and awaiting disconnect() to prevent duplicate event handlers if startChat is called before the old client fully disconnects.

-export function stopChat() {
+export async function stopChat() {
   if (client) {
-    client.disconnect().catch(() => {});
+    await client.disconnect().catch(() => {});
     client = null;
   }
 }
🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file chat.ts around lines 226-231:

Consider making `stopChat` async and awaiting `disconnect()` to prevent duplicate event handlers if `startChat` is called before the old client fully disconnects.

Evidence trail:
chat.ts lines 226-231 at commit 52e24668dd79af0991e8fb0de16e1fe9608dd0a0: `export function stopChat() { if (client) { client.disconnect().catch(() => {}); client = null; } }` - confirms stopChat is not async and doesn't await disconnect(). chat.ts lines 103-170: `startChat` creates new tmi.Client and registers event handlers with `client.on('message', ...)` that modify module-level state.

const HYPE_PATTERNS = /\b(lmao|lol|dead|omg|bruh|no way|im crying|help|based|goated)\b|[💀😂🤣]/i;
const CRINGE_PATTERNS = /\b(mid|boring|cringe|meh|yawn|weak|trash)\b|[😴👎]/i;

function analyzeAudienceReactions(messages: ReturnType<typeof getRecentMessages>): string {

🟡 Medium game.ts:198

The emoji character classes need the /u flag. Without it, [💀😂🤣] matches individual UTF-16 code units (surrogates), causing most emojis to incorrectly match both patterns. Consider adding /u to both regex patterns.

🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file game.ts around line 198:

The emoji character classes need the `/u` flag. Without it, `[💀😂🤣]` matches individual UTF-16 code units (surrogates), causing most emojis to incorrectly match both patterns. Consider adding `/u` to both regex patterns.

Evidence trail:
game.ts lines 196-197 at commit 52e24668dd79af0991e8fb0de16e1fe9608dd0a0:
- Line 196: `const HYPE_PATTERNS = /\b(lmao|lol|dead|omg|bruh|no way|im crying|help|based|goated)\b|[💀😂🤣]/i;`
- Line 197: `const CRINGE_PATTERNS = /\b(mid|boring|cringe|meh|yawn|weak|trash)\b|[😴👎]/i;`

Both patterns use emoji character classes without the `/u` flag. JavaScript regex without `/u` interprets emoji character classes as UTF-16 code units (surrogates), not Unicode code points. See MDN documentation on Unicode flag for JavaScript regular expressions.

(m) => now - m.timestamp <= BUFFER_WINDOW_MS,
);

const sorted = [...chatterCounts.entries()]

🟡 Medium chat.ts:90

chatterCounts grows unbounded and is fully sorted in getChatStats, which can degrade memory/CPU over long runs. Suggest adding a TTL/size cap with periodic eviction (e.g., windowed counts aligned with BUFFER_WINDOW_MS) or using a bounded structure.

🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file chat.ts around line 90:

`chatterCounts` grows unbounded and is fully sorted in `getChatStats`, which can degrade memory/CPU over long runs. Suggest adding a TTL/size cap with periodic eviction (e.g., windowed counts aligned with `BUFFER_WINDOW_MS`) or using a bounded structure.

Evidence trail:
chat.ts line 38: `const chatterCounts = new Map<string, number>();` - declaration with no size limit
chat.ts lines 134-137: entries added/incremented with no eviction
chat.ts lines 90-91: `[...chatterCounts.entries()].sort((a, b) => b[1] - a[1])` - full sort of all entries
chat.ts lines 140-141: shows `recentMessages` HAS eviction logic, but `chatterCounts` does not
Commit: 52e24668dd79af0991e8fb0de16e1fe9608dd0a0
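
A minimal sketch of the suggested cap, using Map insertion order for eviction. MAX_CHATTERS and countChatter are illustrative names, not from the PR.

```typescript
// Bounded chatter counter: once the map reaches the cap, inserting a new
// key evicts the oldest entry (Maps iterate in insertion order).
const MAX_CHATTERS = 10_000; // assumed cap, not from the PR
const chatterCounts = new Map<string, number>();

function countChatter(username: string): void {
  if (!chatterCounts.has(username) && chatterCounts.size >= MAX_CHATTERS) {
    const oldest = chatterCounts.keys().next().value;
    if (oldest !== undefined) chatterCounts.delete(oldest);
  }
  chatterCounts.set(username, (chatterCounts.get(username) ?? 0) + 1);
}
```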

Comment on lines +80 to +82
export function getRecentMessages(limit = 50): ChatMessage[] {
  return recentMessages.slice(-limit);
}

🟡 Medium chat.ts:80

slice(-0) returns the entire array because -0 === 0. Consider adding an early return for limit === 0 or using slice(Math.max(0, recentMessages.length - limit)).

Suggested change
export function getRecentMessages(limit = 50): ChatMessage[] {
  return recentMessages.slice(-limit);
}

export function getRecentMessages(limit = 50): ChatMessage[] {
  if (limit === 0) return [];
  return recentMessages.slice(-limit);
}
🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file chat.ts around lines 80-82:

`slice(-0)` returns the entire array because `-0 === 0`. Consider adding an early return for `limit === 0` or using `slice(Math.max(0, recentMessages.length - limit))`.

Evidence trail:
chat.ts lines 79-81 at commit 52e24668dd79af0991e8fb0de16e1fe9608dd0a0: `export function getRecentMessages(limit = 50): ChatMessage[] { return recentMessages.slice(-limit); }`. JavaScript specification confirms `-0 === 0` and `Array.prototype.slice(0)` returns the entire array.

if (!votingOpen) return;
if (audienceVotes.voters.has(username)) return; // one vote per person

const match = content.trim().match(/^[^a-z0-9]*([ab12])/i);

🟠 High chat.ts:208

The regex ^[^a-z0-9]*([ab12]) doesn't match "vote A" as the comment claims—v is alphanumeric, so the prefix can't consume it. Consider using /([ab12])/i without anchoring, or update the comment to reflect actual behavior.

🚀 Reply "fix it for me" or copy this AI Prompt for your agent:
In file chat.ts around line 208:

The regex `^[^a-z0-9]*([ab12])` doesn't match `"vote A"` as the comment claims—`v` is alphanumeric, so the prefix can't consume it. Consider using `/([ab12])/i` without anchoring, or update the comment to reflect actual behavior.

Evidence trail:
chat.ts lines 200-220 in commit 52e24668dd79af0991e8fb0de16e1fe9608dd0a0: Line 203 comment claims regex accepts "vote A", line 208 shows regex `/^[^a-z0-9]*([ab12])/i`. The regex fails on "vote A" because: (1) `^` anchors at start, (2) `[^a-z0-9]*` with /i flag excludes all alphanumeric chars, (3) 'v' in "vote" is alphanumeric so the prefix matches zero chars, (4) `([ab12])` then tries to match 'v' which fails.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (2)
chat.ts (2)

117-118: Bot's own messages are not filtered out.

The _self parameter (4th argument of the "message" callback) indicates whether the message originates from the bot itself. Currently it's ignored (_self). If credentials are provided and the bot ever sends messages (e.g., a future "vote now!" announcement), those messages would be processed, persisted, and potentially counted as votes.

🛡️ Proposed fix
-  client.on("message", (_channel, tags, message, _self) => {
+  client.on("message", (_channel, tags, message, self) => {
+    if (self) return;
     if (isSpam(message)) return;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@chat.ts` around lines 117 - 118, The message handler registered with
client.on("message") currently ignores the fourth argument _self, so the bot's
own messages get processed; update the callback to check the fourth parameter
(commonly named self or _self) and return early when it's truthy to skip
processing/persistence (i.e., before calling isSpam and any vote handling).
Locate the client.on("message", (_channel, tags, message, _self) => ...) handler
and add an early guard using the fourth arg name used in the diff so
bot-originated messages are not processed or counted.

36-39: chatterCounts grows unboundedly over the process lifetime.

In a long-running stream, chatterCounts accumulates entries for every unique chatter and is never pruned. For a popular channel this could grow to tens of thousands of entries. Consider periodically clearing or windowing it, or capping its size.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@chat.ts` around lines 36 - 39, chatterCounts is unbounded and must be capped;
add a MAX_CHATTER_COUNTS constant and enforce an eviction policy where, when
incrementing a chatter in the map (the code path that updates chatterCounts for
new messages), if a new key would push chatterCounts.size > MAX_CHATTER_COUNTS
remove the oldest entry (use Map insertion order: const oldest =
chatterCounts.keys().next().value; chatterCounts.delete(oldest)). Alternatively
(or additionally) implement periodic trimming by resetting or clearing
chatterCounts every N messages using totalMessages as a counter; update the
increment logic in the function that processes incoming ChatMessage(s) so it
applies the cap/eviction before inserting a new key.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@chat-store.ts`:
- Around line 104-111: getRecentChat currently only reads persisted rows and can
miss messages still in memory (pendingMessages); update getRecentChat to include
pendingMessages before or while querying: either call the existing
flushPendingMessages (or the function that persists pendingMessages) and await
it prior to running the SELECT, or fetch DB rows and merge in-memory
pendingMessages into the result set (use timestamp to sort, dedupe by id if
present, and enforce the requested limit) so the returned ChatMessage[] reflects
both persisted and pending messages; refer to getRecentChat, pendingMessages,
and any flush/persist function name in the module to locate and implement the
change.
- Around line 51-55: The flush() implementation currently splices
pendingMessages out before calling insertBatch, which loses messages if
insertBatch throws; change flush to take a shallow copy of the current batch
(e.g., batch = pendingMessages.slice(0)), attempt the insertBatch call inside a
try/catch (await if insertBatch is async), and only remove/reduce
pendingMessages on success; on failure, log the error via your logger and
re-queue the copied messages back into pendingMessages (or leave the buffer
intact) so messages are not dropped; reference the flush and insertBatch
functions and the pendingMessages buffer when making this change.

In `@chat.ts`:
- Around line 43-70: isSpam currently treats single-word emotes like "lul",
"kekw", "poggers" as spam (the regex in isSpam that checks common Twitch spam
patterns), which biases audience reaction analysis that uses HYPE_PATTERNS in
game.ts; update isSpam to align with game.ts by either importing/reusing
HYPE_PATTERNS or adding an EMOTE_WHITELIST and change the spam regex so
single-word emotes are allowed (only flag them as spam when repeated/lengthy or
combined with other spam signals), e.g., adjust the common-emote check in isSpam
to ignore exact single-word matches from the whitelist and only match when the
message contains repeated emotes or extra spammy content.

In `@game.ts`:
- Around line 440-443: openAudienceVoting() is currently called immediately
before AI vote API calls and closeAudienceVoting() is called as soon as those
calls finish, resulting in a tiny/variable voting window; change the flow in the
routine that calls openAudienceVoting(), the AI voter calls, and
closeAudienceVoting() to enforce a minimum voting duration (e.g., 15–20s):
record a timestamp when openAudienceVoting() runs, start the AI vote requests,
and after all AI responses arrive wait until the minimum interval has elapsed
before calling closeAudienceVoting(); also trigger a chat announcement and start
a countdown UI (via rerender()/updateVoteCountdown()) when opening voting so
viewers know voting is open and how long remains.
- Around line 499-505: The audience bonus currently (audienceResult.a * 10 /
audienceResult.b * 10) can exceed AI judge influence; modify the scoring in the
block that computes audienceBonusA/audienceBonusB and round.scoreA/scoreB to cap
the audience contribution—e.g., define a constant AUDIENCE_BONUS_CAP (suggest
200) and compute audienceBonusA = Math.min(audienceResult.a * 10,
AUDIENCE_BONUS_CAP) and audienceBonusB = Math.min(audienceResult.b * 10,
AUDIENCE_BONUS_CAP) (or implement a percentage-based cap relative to
votesA/votesB), then use those capped values when setting round.scoreA and
round.scoreB so audience votes are meaningful but cannot override AI judges.
- Around line 218-225: buildAudienceContext currently injects raw m.content into
prompts; replace that direct injection with sanitization and summarization: in
buildAudienceContext, stop embedding full m.content strings and instead (a)
strip/normalize directive-like phrases (case-insensitive matches for tokens like
"system:", "ignore previous", "always vote", "vote for", "follow instruction",
etc.), (b) truncate messages to a safe length, and (c) produce a summarized
sentiment/intent line (e.g., "audience excitement: high; common topics: X") or a
redacted chat snippet before returning the context. Apply the same
sanitized/summarized output to any consumer that uses buildAudienceContext for
voting/judging (the vote-judging flow referenced in analyzeAudienceReactions/use
in vote judging) so no raw unfiltered chat reaches the AI judge.

In `@server.ts`:
- Around line 558-615: The four chat handlers (paths "/api/chat/recent",
"/api/chat/round/*", "/api/chat/votes", "/api/chat/stats") are all calling
isRateLimited(`history:${ip}`, HISTORY_LIMIT_PER_MIN, WINDOW_MS) and thus share
the same bucket as /api/history; change the bucket key to a distinct one (e.g.
`chat:${ip}`) in each handler so they use a separate rate-limit namespace while
keeping the same limit constants (HISTORY_LIMIT_PER_MIN, WINDOW_MS) or replace
with a new CHAT_LIMIT_PER_MIN constant if you want different limits; update the
isRateLimited calls referenced above to use the new key.
- Around line 733-743: The broadcast currently maps m.displayName to the field
username in the recent mapping (see msg, recent and m.displayName), which can
conflate Twitch display names with login names; update the mapping to include
both the display name and the actual login name used by Twitch consumers (e.g.
add fields displayName: m.displayName and login: m.login or m.userLogin as
appropriate) instead of overwriting one as "username" so downstream consumers
can rely on the true login value.

---

Nitpick comments:
In `@chat.ts`:
- Around line 117-118: The message handler registered with client.on("message")
currently ignores the fourth argument _self, so the bot's own messages get
processed; update the callback to check the fourth parameter (commonly named
self or _self) and return early when it's truthy to skip processing/persistence
(i.e., before calling isSpam and any vote handling). Locate the
client.on("message", (_channel, tags, message, _self) => ...) handler and add an
early guard using the fourth arg name used in the diff so bot-originated
messages are not processed or counted.
- Around line 36-39: chatterCounts is unbounded and must be capped; add a
MAX_CHATTER_COUNTS constant and enforce an eviction policy where, when
incrementing a chatter in the map (the code path that updates chatterCounts for
new messages), if a new key would push chatterCounts.size > MAX_CHATTER_COUNTS
remove the oldest entry (use Map insertion order: const oldest =
chatterCounts.keys().next().value; chatterCounts.delete(oldest)). Alternatively
(or additionally) implement periodic trimming by resetting or clearing
chatterCounts every N messages using totalMessages as a counter; update the
increment logic in the function that processes incoming ChatMessage(s) so it
applies the cap/eviction before inserting a new key.
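
The sanitize-and-truncate step suggested above for buildAudienceContext could look roughly like this; the directive patterns and length limit are illustrative.

```typescript
// Sketch: strip directive-like phrases and truncate a chat line before it
// is embedded in an LLM prompt, so raw chat cannot steer the AI judges.
const DIRECTIVES = /(system:|ignore previous|always vote|vote for|follow instruction)/gi;
const MAX_LEN = 140; // assumed per-message cap

function sanitizeChatLine(content: string): string {
  return content.replace(DIRECTIVES, "[redacted]").slice(0, MAX_LEN);
}
```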

Comment on lines +104 to +111
export function getRecentChat(limit = 50): ChatMessage[] {
  const rows = db
    .query(
      "SELECT * FROM chat_messages ORDER BY timestamp DESC LIMIT $limit",
    )
    .all({ $limit: limit }) as ChatRow[];
  return rows.reverse().map(rowToMessage);
}

⚠️ Potential issue | 🟡 Minor

getRecentChat reads only flushed data — recent messages may be missing.

This function queries SQLite, but up to 50 messages (or 5 seconds worth) may still be sitting in pendingMessages and won't appear in results. The /api/chat/recent endpoint in server.ts (Line 566) calls this function, so API consumers could see stale data. Consider flushing before querying, or merging in-memory pending messages into the result set.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@chat-store.ts` around lines 104 - 111, getRecentChat currently only reads
persisted rows and can miss messages still in memory (pendingMessages); update
getRecentChat to include pendingMessages before or while querying: either call
the existing flushPendingMessages (or the function that persists
pendingMessages) and await it prior to running the SELECT, or fetch DB rows and
merge in-memory pendingMessages into the result set (use timestamp to sort,
dedupe by id if present, and enforce the requested limit) so the returned
ChatMessage[] reflects both persisted and pending messages; refer to
getRecentChat, pendingMessages, and any flush/persist function name in the
module to locate and implement the change.

Comment on lines +43 to +70
function isSpam(content: string): boolean {
  const trimmed = content.trim();

  // Allow single-char vote messages (A, B, 1, 2) through the filter
  if (/^[AB12]$/i.test(trimmed)) return false;

  if (trimmed.length < MIN_MESSAGE_LENGTH) return true;

  // Pure emote spam or repeated single characters
  if (/^(.)\1{4,}$/.test(trimmed)) return true;

  // Common Twitch spam patterns
  if (/^(lul|kekw|omegalul|poggers|copium|pepega|monkas|sadge|widepeepo)\s*$/i.test(trimmed)) return true;

  // Bot commands
  if (trimmed.startsWith("!")) return true;

  // URL spam
  if (/https?:\/\/\S+/i.test(trimmed)) return true;

  // Excessive caps (>80% uppercase, 10+ chars)
  if (trimmed.length >= 10) {
    const upperCount = (trimmed.match(/[A-Z]/g) || []).length;
    if (upperCount / trimmed.length > 0.8) return true;
  }

  return false;
}

⚠️ Potential issue | 🟡 Minor

Spam filter blocks common Twitch engagement signals used for audience analysis.

Single-word emotes like "lul", "kekw", "poggers" etc. (Line 55) are filtered as spam, but game.ts Line 195 uses HYPE_PATTERNS (matching "lmao", "lol", "bruh", etc.) to gauge audience reactions. Some overlap exists — e.g., a message that's just "lol" passes isSpam (length ≥ 2, doesn't match the emote list) but "kekw" alone is blocked. This asymmetry means the audience reaction analysis in game.ts sees a biased sample of chat. Worth verifying this is intentional.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@chat.ts` around lines 43 - 70, isSpam currently treats single-word emotes
like "lul", "kekw", "poggers" as spam (the regex in isSpam that checks common
Twitch spam patterns), which biases audience reaction analysis that uses
HYPE_PATTERNS in game.ts; update isSpam to align with game.ts by either
importing/reusing HYPE_PATTERNS or adding an EMOTE_WHITELIST and change the spam
regex so single-word emotes are allowed (only flag them as spam when
repeated/lengthy or combined with other spam signals), e.g., adjust the
common-emote check in isSpam to ignore exact single-word matches from the
whitelist and only match when the message contains repeated emotes or extra
spammy content.
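A minimal sketch of the whitelist approach described above. The `EMOTE_WHITELIST` contents are illustrative and should be derived from (or shared with) `HYPE_PATTERNS` in game.ts; the length threshold is hardcoded here in place of `MIN_MESSAGE_LENGTH`:

```typescript
// Sketch: let known single-word emotes through isSpam so the audience
// reaction analysis sees them. List contents are illustrative.
const EMOTE_WHITELIST = new Set([
  "lul", "kekw", "omegalul", "poggers", "pepega", "monkas", "sadge",
]);

function isSpam(content: string): boolean {
  const trimmed = content.trim().toLowerCase();
  if (/^[ab12]$/.test(trimmed)) return false;      // vote messages pass
  if (EMOTE_WHITELIST.has(trimmed)) return false;  // single emote = signal, not spam
  if (trimmed.length < 2) return true;             // stand-in for MIN_MESSAGE_LENGTH
  if (/^(.)\1{4,}$/.test(trimmed)) return true;    // repeated single character
  if (trimmed.startsWith("!")) return true;        // bot commands
  if (/https?:\/\/\S+/.test(trimmed)) return true; // URL spam
  return false;
}
```

With this shape, "kekw" reaches the reaction analysis while "!drop" and URL spam are still filtered.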

Comment on lines +440 to 443

// Open audience voting — viewers type A or B in Twitch chat
openAudienceVoting();
rerender();

⚠️ Potential issue | 🟠 Major

Audience voting window is only as long as the AI vote API calls.

openAudienceVoting() (Line 442) fires just before the AI vote requests, and closeAudienceVoting() (Line 492) fires immediately after the last AI voter returns. If the API calls complete quickly (~2-5 seconds total), viewers have an extremely narrow window to vote. There's no minimum duration, no countdown, and no announcement in chat telling viewers voting is open.

Consider adding a minimum voting window (e.g., wait at least 15-20 seconds before closing) so the audience has a meaningful opportunity to participate:

💡 Sketch: enforce a minimum voting window
+    const MINIMUM_VOTE_WINDOW_MS = 15_000;
+    const voteWindowStart = Date.now();
+
     // Open audience voting — viewers type A or B in Twitch chat
     openAudienceVoting();
     rerender();

     await Promise.all(
       round.votes.map(async (vote) => {
         // ... existing AI vote logic ...
       }),
     );
+
+    // Ensure audience has at least MINIMUM_VOTE_WINDOW_MS to vote
+    const elapsed = Date.now() - voteWindowStart;
+    if (elapsed < MINIMUM_VOTE_WINDOW_MS) {
+      await new Promise((r) => setTimeout(r, MINIMUM_VOTE_WINDOW_MS - elapsed));
+    }
+
     // ── Score ──
     const audienceResult = closeAudienceVoting();

Also applies to: 492-492

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@game.ts` around lines 440 - 443, openAudienceVoting() is currently called
immediately before AI vote API calls and closeAudienceVoting() is called as soon
as those calls finish, resulting in a tiny/variable voting window; change the
flow in the routine that calls openAudienceVoting(), the AI voter calls, and
closeAudienceVoting() to enforce a minimum voting duration (e.g., 15–20s):
record a timestamp when openAudienceVoting() runs, start the AI vote requests,
and after all AI responses arrive wait until the minimum interval has elapsed
before calling closeAudienceVoting(); also trigger a chat announcement and start
a countdown UI (via rerender()/updateVoteCountdown()) when opening voting so
viewers know voting is open and how long remains.

Comment on lines +499 to +505

// Audience votes count as bonus points (10 per audience vote)
// This makes chat participation meaningful without overriding AI judges
const audienceBonusA = audienceResult.a * 10;
const audienceBonusB = audienceResult.b * 10;
round.scoreA = votesA * 100 + audienceBonusA;
round.scoreB = votesB * 100 + audienceBonusB;

⚠️ Potential issue | 🟠 Major

Audience scoring can outweigh AI judges with modest viewer counts.

With the current weighting (100 per AI vote, 10 per audience vote), ~6 AI judges produce a max delta of 600 points. Just 61 audience votes for one side would produce 610 points of bonus, exceeding the entire AI judge contribution. The PR states the intent is for audience votes to be "meaningful without overriding AI judges," but the math doesn't hold for a popular Twitch channel.

Consider capping the total audience bonus (e.g., Math.min(audienceResult.a * 10, 200)) or using a percentage-based system.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@game.ts` around lines 499 - 505, The audience bonus currently
(audienceResult.a * 10 / audienceResult.b * 10) can exceed AI judge influence;
modify the scoring in the block that computes audienceBonusA/audienceBonusB and
round.scoreA/scoreB to cap the audience contribution—e.g., define a constant
AUDIENCE_BONUS_CAP (suggest 200) and compute audienceBonusA =
Math.min(audienceResult.a * 10, AUDIENCE_BONUS_CAP) and audienceBonusB =
Math.min(audienceResult.b * 10, AUDIENCE_BONUS_CAP) (or implement a
percentage-based cap relative to votesA/votesB), then use those capped values
when setting round.scoreA and round.scoreB so audience votes are meaningful but
cannot override AI judges.
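A sketch of the capped scoring under these assumptions; `AUDIENCE_BONUS_CAP` and the `scoreRound` helper are hypothetical names, not existing code:

```typescript
// Sketch: clamp the per-side audience bonus so chat can sway a close round
// but can never outweigh the AI panel, regardless of viewer count.
const POINTS_PER_AI_VOTE = 100;
const POINTS_PER_AUDIENCE_VOTE = 10;
const AUDIENCE_BONUS_CAP = 200; // worth at most two AI votes of swing (assumed tuning)

function scoreRound(votesA: number, votesB: number, audA: number, audB: number) {
  const bonusA = Math.min(audA * POINTS_PER_AUDIENCE_VOTE, AUDIENCE_BONUS_CAP);
  const bonusB = Math.min(audB * POINTS_PER_AUDIENCE_VOTE, AUDIENCE_BONUS_CAP);
  return {
    scoreA: votesA * POINTS_PER_AI_VOTE + bonusA,
    scoreB: votesB * POINTS_PER_AI_VOTE + bonusB,
  };
}
```

With the cap, the 61-votes-vs-6-judges scenario from the comment contributes at most 200 bonus points instead of 610.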

Comment on lines +558 to +615
if (url.pathname === "/api/chat/recent") {
  if (isRateLimited(`history:${ip}`, HISTORY_LIMIT_PER_MIN, WINDOW_MS)) {
    return new Response("Too Many Requests", { status: 429 });
  }
  const rawLimit = parseInt(url.searchParams.get("limit") || "50", 10);
  const limit = Number.isFinite(rawLimit)
    ? Math.min(Math.max(rawLimit, 1), 200)
    : 50;
  const messages = getRecentChat(limit);
  return new Response(JSON.stringify({ messages }), {
    headers: {
      "Content-Type": "application/json",
      "Cache-Control": "no-store",
    },
  });
}

if (url.pathname.startsWith("/api/chat/round/")) {
  if (isRateLimited(`history:${ip}`, HISTORY_LIMIT_PER_MIN, WINDOW_MS)) {
    return new Response("Too Many Requests", { status: 429 });
  }
  const parts = url.pathname.split("/");
  const roundNum = parseInt(parts[parts.length - 1] ?? "", 10);
  if (!Number.isFinite(roundNum) || roundNum < 1) {
    return new Response("Invalid round number", { status: 400 });
  }
  const messages = getChatForRound(roundNum);
  return new Response(JSON.stringify({ roundNum, messages }), {
    headers: {
      "Content-Type": "application/json",
      "Cache-Control": "public, max-age=30",
    },
  });
}

if (url.pathname === "/api/chat/votes") {
  if (isRateLimited(`history:${ip}`, HISTORY_LIMIT_PER_MIN, WINDOW_MS)) {
    return new Response("Too Many Requests", { status: 429 });
  }
  return new Response(JSON.stringify(getAudienceVotes()), {
    headers: {
      "Content-Type": "application/json",
      "Cache-Control": "no-store",
    },
  });
}

if (url.pathname === "/api/chat/stats") {
  if (isRateLimited(`history:${ip}`, HISTORY_LIMIT_PER_MIN, WINDOW_MS)) {
    return new Response("Too Many Requests", { status: 429 });
  }
  return new Response(JSON.stringify(getChatStats()), {
    headers: {
      "Content-Type": "application/json",
      "Cache-Control": "no-store",
    },
  });
}

⚠️ Potential issue | 🟡 Minor

Chat endpoints share the history: rate-limit bucket with /api/history.

All four chat endpoints use `isRateLimited(`history:${ip}`, ...)`, meaning chat and history requests compete for the same per-IP allowance. A client polling `/api/chat/stats` frequently could starve `/api/history` requests (or vice versa). If this is unintentional, use a distinct bucket key like `chat:${ip}`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server.ts` around lines 558 - 615, The four chat handlers (paths
"/api/chat/recent", "/api/chat/round/*", "/api/chat/votes", "/api/chat/stats")
are all calling isRateLimited(`history:${ip}`, HISTORY_LIMIT_PER_MIN, WINDOW_MS)
and thus share the same bucket as /api/history; change the bucket key to a
distinct one (e.g. `chat:${ip}`) in each handler so they use a separate
rate-limit namespace while keeping the same limit constants
(HISTORY_LIMIT_PER_MIN, WINDOW_MS) or replace with a new CHAT_LIMIT_PER_MIN
constant if you want different limits; update the isRateLimited calls referenced
above to use the new key.
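To illustrate why a separate bucket key matters, here is a minimal sliding-window limiter of the kind the handlers call; the Map-based store, constants, and sample IP are assumptions about the pattern, not the file's actual implementation:

```typescript
// Sketch of a per-key sliding-window rate limiter. Distinct key prefixes
// ("chat:" vs "history:") give each endpoint family its own allowance.
const hits = new Map<string, number[]>();

function isRateLimited(key: string, limit: number, windowMs: number): boolean {
  const now = Date.now();
  // Keep only timestamps still inside the window
  const recent = (hits.get(key) ?? []).filter((t) => now - t < windowMs);
  if (recent.length >= limit) {
    hits.set(key, recent);
    return true; // over budget for this bucket
  }
  recent.push(now);
  hits.set(key, recent);
  return false;
}

// Separate namespaces: exhausting the chat bucket leaves history untouched.
const ip = "203.0.113.7"; // example address
isRateLimited(`chat:${ip}`, 60, 60_000);
isRateLimited(`history:${ip}`, 60, 60_000);
```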

Comment on lines +733 to +743
const msg = JSON.stringify({
  type: "chat",
  stats,
  recent: recent.map((m) => ({
    username: m.displayName,
    content: m.content,
    badges: m.badges,
    isMod: m.isMod,
    isSubscriber: m.isSubscriber,
  })),
});

⚠️ Potential issue | 🟡 Minor

Broadcast payload labels displayName as username.

Line 737 maps m.displayName to a field called username. Twitch display names can differ in casing and characters from the login username. If any downstream consumer needs the actual login name, this would be incorrect.

       recent: recent.map((m) => ({
-        username: m.displayName,
+        displayName: m.displayName,
         content: m.content,
Suggested change:

 const msg = JSON.stringify({
   type: "chat",
   stats,
   recent: recent.map((m) => ({
-    username: m.displayName,
+    displayName: m.displayName,
     content: m.content,
     badges: m.badges,
     isMod: m.isMod,
     isSubscriber: m.isSubscriber,
   })),
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server.ts` around lines 733 - 743, The broadcast currently maps m.displayName
to the field username in the recent mapping (see msg, recent and m.displayName),
which can conflate Twitch display names with login names; update the mapping to
include both the display name and the actual login name used by Twitch consumers
(e.g. add fields displayName: m.displayName and login: m.login or m.userLogin as
appropriate) instead of overwriting one as "username" so downstream consumers
can rely on the true login value.

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
game.ts (1)

485-490: ⚠️ Potential issue | 🟡 Minor

closeAudienceVoting() is skipped when the generation check fires.

When state.generation !== roundGeneration at line 485, the continue jumps to the top of the for-loop without ever calling closeAudienceVoting() at line 490. This leaves votingOpen = true and the voters Set live, so Twitch chat messages between rounds continue being counted as votes. If this is the last round (the loop exits entirely), votingOpen stays true permanently for the process lifetime.

The next call to openAudienceVoting() resets state, so no completed round's score is corrupted, but the leaked voting session accepts stale votes during the gap.

🛠️ Proposed fix
     if (state.generation !== roundGeneration) {
+      closeAudienceVoting();
       continue;
     }

     // ── Score ──
     const audienceResult = closeAudienceVoting();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@game.ts` around lines 485 - 490, The loop skips calling closeAudienceVoting()
whenever state.generation !== roundGeneration, leaving votingOpen true and the
voters Set active; modify the loop so that before the early continue you invoke
closeAudienceVoting() (or conditionally call it when votingOpen is true) to
clear votingOpen and reset voters, ensuring you reference the same
state.generation vs roundGeneration check and functions closeAudienceVoting()
and openAudienceVoting() so no stale voting session remains active between
rounds.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@game.ts`:
- Around line 485-490: The loop skips calling closeAudienceVoting() whenever
state.generation !== roundGeneration, leaving votingOpen true and the voters Set
active; modify the loop so that before the early continue you invoke
closeAudienceVoting() (or conditionally call it when votingOpen is true) to
clear votingOpen and reset voters, ensuring you reference the same
state.generation vs roundGeneration check and functions closeAudienceVoting()
and openAudienceVoting() so no stale voting session remains active between
rounds.

---

Duplicate comments:
In `@game.ts`:
- Around line 218-225: The system prompt currently injects raw Twitch content
via chatSnippets (built from m.content and m.displayName) into buildPromptSystem
and callGenerateAnswer; sanitize and normalize those fields before
concatenation: strip/escape control characters and newlines, remove or
neutralize prompt-instruction tokens (e.g., case-insensitive "system:", "ignore
previous", "do not follow instructions", special token patterns), and enforce a
safe max length per message and overall combined snippet; update the code paths
that build chatSnippets (the mapping over recent, and any use in
buildPromptSystem and callGenerateAnswer) to call a sanitization utility (e.g.,
sanitizeChatMessage) and use the sanitized displayName/content and a truncated
version when composing the prompt.
- Around line 438-441: The audience voting currently opens with
openAudienceVoting() and closes immediately when the AI vote Promise.all
completes, which can yield an unacceptably short window; modify the flow so
opening always triggers a visible announcement and countdown (call
openAudienceVoting() then rerender(), post an announcement/start countdown UI),
then wait for the AI Promise.all to finish but do not close voting until both
the AI responses have returned and a configured minimum window (e.g.,
minVotingMs) has elapsed — implement by recording start time and after
Promise.all resolves compute remaining = minVotingMs - elapsed and await
setTimeout(remaining) if positive (or use Promise.race/Promise.all with a
timeout Promise), then call closeAudienceVoting(); keep function names
openAudienceVoting, closeAudienceVoting, rerender and the AI vote Promise.all
invocation to locate changes.
- Around line 498-503: Audience bonus can exceed the total AI judges'
contribution; clamp the per-side audience bonus so it cannot surpass the maximum
AI delta. Compute the AI max (e.g., const aiMax = aiJudgesCount * 100 or derive
from aiJudges.length) and replace audienceBonusA/B with
Math.min(audienceResult.a * 10, aiMax) and Math.min(audienceResult.b * 10,
aiMax), then use those clamped values when setting round.scoreA and round.scoreB
(references: audienceResult, audienceBonusA, audienceBonusB, round.scoreA,
round.scoreB, votesA, votesB).
