Purpose: a 20-minute, in-room exercise where 30 people submit one short line each, watch a live "brain" form on screen, and see an AI summary of themes, tensions, and outliers. Optional: show each person a most-similar match to spark conversations later.
AI is the facilitator/connector, not a replacement.
- Participants (phones): scan QR → pick table (A/B) → submit one short line.
- Big screen: the graph grows live; then a summary appears (themes, contradictions, outliers).
- Optional: each phone shows a suggested match across tables.
Time budget (suggested): Join 1’ • Submit 2’ • Watch graph 1’ • Summary 1’ • (Optional) Match 1’.
- SvelteKit (Node adapter) — single runtime (no Docker, no Python), easy on hotel Wi‑Fi.
- SQLite + Drizzle — local, fast, WAL-enabled.
- SSE (Server-Sent Events) — push graph + summary (and matches) to clients.
- LLM — one JSON-only call for summary; embeddings for links & matching.
phones → /api/join + /api/submit ─┐
├─ DB (SQLite) → build embeddings → broadcast via SSE → visualiser
facilitator → /api/summary ───────┘
- participants: id, name, tableSide ('A'|'B'), createdAt
- runs: id, createdAt, clustersJson?, optionsJson?, pairsJson?
- submissions: id, runId, participantId, kind (default 'triad'), payloadJson, createdAt — unique on (runId, participantId, kind)
- normalised: submissionId (PK), dataJson, embeddingJson?
For the sprint we only need participants, submissions, and normalised (with embeddingJson); runs keeps sessions separate.
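The four tables above could be declared with Drizzle roughly like this (a sketch: column types, SQL names, and the index name are assumptions, not the repo's actual schema):

```typescript
import { sqliteTable, text, integer, uniqueIndex } from 'drizzle-orm/sqlite-core';

export const participants = sqliteTable('participants', {
  id: integer('id').primaryKey({ autoIncrement: true }),
  name: text('name').notNull(),
  tableSide: text('table_side', { enum: ['A', 'B'] }).notNull(),
  createdAt: integer('created_at', { mode: 'timestamp' }).notNull(),
});

export const runs = sqliteTable('runs', {
  id: integer('id').primaryKey({ autoIncrement: true }),
  createdAt: integer('created_at', { mode: 'timestamp' }).notNull(),
  clustersJson: text('clusters_json'),   // optional (?)
  optionsJson: text('options_json'),
  pairsJson: text('pairs_json'),
});

export const submissions = sqliteTable('submissions', {
  id: integer('id').primaryKey({ autoIncrement: true }),
  runId: integer('run_id').notNull(),
  participantId: integer('participant_id').notNull(),
  kind: text('kind').notNull().default('triad'),
  payloadJson: text('payload_json').notNull(),
  createdAt: integer('created_at', { mode: 'timestamp' }).notNull(),
}, (t) => ({
  // one submission per participant per run per kind
  uq: uniqueIndex('submissions_run_participant_kind').on(t.runId, t.participantId, t.kind),
}));

export const normalised = sqliteTable('normalised', {
  submissionId: integer('submission_id').primaryKey(),
  dataJson: text('data_json').notNull(),
  embeddingJson: text('embedding_json'),  // filled once embedded
});
```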
Already in repo (MVP):
- POST /api/join → { participantId }
- POST /api/submit → stores submission, normalises, (now) embeds, then broadcasts counts/graph
- GET /api/stream (SSE) → emits:
  - event: submission_count → { count }
  - event: graph → { nodes: [{ id, label, table }], links: [{ source, target, value }] }
  - (later) event: summary, event: matches
- POST /api/vote + GET /api/vote/medians (optional, unused in this sprint)
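The SSE fan-out can be as small as an in-memory set of subscribers. A minimal sketch (the names `subscribe`/`broadcast`/`sseFrame` are illustrative, not the repo's actual API):

```typescript
// Each connected /api/stream client registers a send callback.
type Send = (frame: string) => void;

const subscribers = new Set<Send>();

// Register a client; returns an unsubscribe function for connection close.
export function subscribe(send: Send): () => void {
  subscribers.add(send);
  return () => subscribers.delete(send);
}

// Format one event as a wire-level SSE frame: "event: <name>\ndata: <json>\n\n".
export function sseFrame(event: string, data: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}

// Push one event to every connected client.
export function broadcast(event: string, data: unknown): void {
  const frame = sseFrame(event, data);
  for (const send of subscribers) send(frame);
}
```

In SvelteKit the stream endpoint would hand each connection's `ReadableStream` controller a `send` that enqueues the frame, and `/api/submit` would call `broadcast('graph', …)` after writing to SQLite.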
To add (short tasks):
- POST /api/summary → runs one LLM call across all submissions → broadcasts summary
- POST /api/match (optional) → cross-table nearest-neighbour pairs → broadcasts matches
Input (server assembles):
[
{ "id": 12, "text": "..." },
{ "id": 7, "text": "..." }
]

System prompt (essence):
Cluster 30 short lines about "AI & logistics" into 3–6 themes, list clear contradictions and 1–2 outliers. Return ONLY valid JSON:
{
  "themes": [{ "label": "...", "why": "...", "members": [12, 7, ...] }],
  "contradictions": [{ "a": 12, "b": 7, "explain": "..." }],
  "outliers": [{ "participantId": 3, "explain": "..." }],
  "stats": { "count": 30 }
}

On parse failure: retry once, then fall back to a heuristic (k-means on embeddings for themes; farthest points for outliers).
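The parse-then-fallback step could look like this (a sketch; `parseSummary`, `summarise`, and the validation depth are assumptions made for illustration):

```typescript
// Expected shape, mirroring the JSON schema in the prompt above.
interface Summary {
  themes: { label: string; why: string; members: number[] }[];
  contradictions: { a: number; b: number; explain: string }[];
  outliers: { participantId: number; explain: string }[];
  stats: { count: number };
}

// Parse the raw LLM reply; null on invalid JSON or missing arrays.
export function parseSummary(raw: string): Summary | null {
  try {
    const obj = JSON.parse(raw);
    if (!Array.isArray(obj.themes) || !Array.isArray(obj.outliers)) return null;
    return obj as Summary;
  } catch {
    return null;
  }
}

// Try the LLM twice (one retry), then hand off to the heuristic fallback.
export async function summarise(
  callLLM: () => Promise<string>,
  heuristic: () => Summary,
): Promise<Summary> {
  for (let attempt = 0; attempt < 2; attempt++) {
    const parsed = parseSummary(await callLLM());
    if (parsed) return parsed;
  }
  return heuristic();
}
```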
- Use cosine similarity on embeddings.
- Prefer A↔B matches (cross-table).
- Greedy pairing; allow "no match" if below threshold.
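The three rules above fit in a few lines. A sketch (the `Node` shape and function names are assumptions):

```typescript
// Cosine similarity between two embedding vectors.
export function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

interface Node { id: number; table: 'A' | 'B'; embedding: number[] }

// Greedy cross-table pairing: score every A↔B pair, take the strongest first,
// skip already-matched nodes, and stop below the threshold ("no match").
export function greedyPairs(nodes: Node[], threshold = 0.78): [number, number][] {
  const as = nodes.filter((n) => n.table === 'A');
  const bs = nodes.filter((n) => n.table === 'B');
  const candidates: { a: number; b: number; sim: number }[] = [];
  for (const a of as)
    for (const b of bs)
      candidates.push({ a: a.id, b: b.id, sim: cosine(a.embedding, b.embedding) });
  candidates.sort((x, y) => y.sim - x.sim);
  const used = new Set<number>();
  const pairs: [number, number][] = [];
  for (const c of candidates) {
    if (c.sim < threshold) break;            // everyone left is "no match"
    if (used.has(c.a) || used.has(c.b)) continue;
    used.add(c.a); used.add(c.b);
    pairs.push([c.a, c.b]);
  }
  return pairs;
}
```

Greedy is not optimal matching, but at 30 participants the difference is invisible and the code stays debuggable at the venue.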
- Graph: force-graph (2D). Nodes coloured by table. Links when cosine >= threshold (start ~0.78, tune live). Limit each node to its top-3 links to avoid hairballs.
- Summary view: right panel lists themes and tensions. (Nice-to-have: draw soft hulls around theme members.)
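Link building for the visualiser is threshold-then-cap. A sketch, assuming a precomputed pairwise similarity lookup (names invented; one interpretation of the cap is "union of each node's top-K strongest links"):

```typescript
interface Link { source: number; target: number; value: number }

// Build the link list force-graph consumes: drop pairs below the cosine
// threshold, then keep only each node's top-K strongest links.
export function buildLinks(
  ids: number[],
  sim: (a: number, b: number) => number,
  threshold = 0.78,
  topK = 3,
): Link[] {
  const candidates = new Map<number, Link[]>(ids.map((id) => [id, []]));
  for (let i = 0; i < ids.length; i++) {
    for (let j = i + 1; j < ids.length; j++) {
      const value = sim(ids[i], ids[j]);
      if (value < threshold) continue;
      const link = { source: ids[i], target: ids[j], value };
      candidates.get(ids[i])!.push(link);
      candidates.get(ids[j])!.push(link);
    }
  }
  // Union of each node's top-K links; the Set dedupes links shared by both ends.
  const kept = new Set<Link>();
  for (const list of candidates.values()) {
    list.sort((a, b) => b.value - a.value);
    for (const link of list.slice(0, topK)) kept.add(link);
  }
  return [...kept];
}
```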
- Node.js ≥ 20
- SQLite (bundled with better-sqlite3)
- An LLM key (for summary + embeddings) — e.g. OPENAI_API_KEY
cp .env.example .env
# edit:
DATABASE_URL=/data/app.db # or ./local.db
LLM_API_KEY=sk-...
npm install
npm run db:push
npm run dev

Open:
- Participants: http://localhost:5173/join
- Facilitator: http://localhost:5173/visualiser and http://localhost:5173/control
- ✅ Laptop on mains power; Do Not Sleep.
- ✅ Local network stable (hotspot fallback ready).
- ✅ .env set; app running (npm run dev or node build).
- ✅ Big screen on /visualiser.
- ✅ Print two QR codes:
  - Table A → /join?table=A
  - Table B → /join?table=B
- ✅ Threshold set (start 0.78); test with 3 dummy submissions.
- Join (1 min)
  "Scan the QR, type your first name, select your table."
- Submit (2 mins)
  Prompt on screen: "In one short line, share a fact, fear, or hope about AI in logistics."
- Graph (1 min)
  Narrate what they see: similar ideas linking, two tables mixing.
- Summary (1–2 mins)
  Tap Summarise on /control. Read out 3–6 themes, key tension(s), 1–2 outliers.
- (Optional) Match (1 min)
  Tap Match on /control. Each phone shows a cross-table match. Invite them to connect after dessert.
- If LLM is slow: continue speaking over the live graph; trigger summary once.
- If SSE drops: refresh the visualiser; clients keep submitting (HTTP still works).
- If Wi‑Fi fails: use your phone’s hotspot (no external internet needed if embeddings are pre-enabled or stubbed).
- /visualiser — big screen.
- /control — buttons: Show Graph (default), Summarise, Show Matches (optional).
- /join — phone page (query param ?table=A|B preselects side).
- /input — phone submission page (1 text field).
- /vote — not used for this sprint (kept for future A/B).
- SIM_THRESHOLD (cosine) — start 0.78, adjust to control density.
- TOP_K_LINKS per node — start 3.
- MAX_INPUT_LEN — 120 chars (guard rail).
- RUN_RESET — optional /api/admin/reset if you want to wipe mid-rehearsal.
- Store first name and one short line.
- No emails, no phone numbers.
- Data is ephemeral; a JSON dump is kept locally for post-event notes. Delete after.
- Join / Submit
- SSE bus
- Graph from embeddings
- /api/summary (LLM JSON + heuristic fallback)
- /api/match (cross-table)
- Visualiser: summary panel + (optional) hulls
- Control page buttons
- QR param preselect (?table=A|B)
- JSON data dump on exit
- Participants can join, submit a line, and see the graph react live.
- Facilitator can press Summarise and a valid JSON summary is shown on screen within ~10s.
- (Optional) Everyone sees a cross-table match on their phone.
- The whole flow runs on a single laptop with flaky hotel Wi‑Fi.
Private, event use. © You.