🏆 Winner of BSEC 2026 Hackathon (Brno University of Technology)
We built this system in a 12-hour sprint with a clear goal: to create a live-demo-ready workflow that helps creators decide what to film, where to film it, and how to publish it across platforms instantly.
The result is a functional editorial workflow that takes a short brief and produces high-quality outputs for:
- YouTube (video script)
- Instagram (post)
- TikTok (reel)
Contributors, sorted alphabetically:
| Contributor | GitHub |
|---|---|
| LufyCZ | LufyCZ |
| olexamatej | olexamatej |
| Padi42 | Padi42 |
| TheRamsay | TheRamsay |
Based on the assignment in `important_files/Multi-Agentná AI Redakcia pre správu obsahu.pdf`, the goal was to design and implement a working prototype of an editorial content generator for YouTube, Instagram, and TikTok.
Key constraints from the assignment:
- architecture and logic matter more than tooling choice,
- AI should be used efficiently (not for everything),
- clear multi-agent responsibilities in the editorial process:
  - Author: content proposal,
  - Editor: platform adaptation,
  - QA: final quality decision.
Required functionality:
- 2 required use-cases:
  - generate topic options,
  - take a short brief and generate outputs for all platforms;
- required brief fields: `tema`, `cil`, `cilova_skupina`, `ton`, `hlavni_myslenka`, `cta`;
- required outputs for all 3 platforms;
- use of creator history + web trend enrichment;
- structured QA approve/revise decision;
- token/cost optimization and avoidance of unnecessary AI calls;
- end-to-end completion in a few minutes;
- frontend showing complete workflow status;
- automatic email send before judging.
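The six required brief fields can be modeled as a simple TypeScript type. The field names come from the assignment (kept in the original Czech); the validation helper is an illustrative sketch, not the project's actual API:

```typescript
// The six required brief fields from the assignment, as a typed record.
// Field names are kept as specified (Czech identifiers).
interface Brief {
  tema: string;            // topic
  cil: string;             // goal
  cilova_skupina: string;  // target audience
  ton: string;             // tone
  hlavni_myslenka: string; // main idea
  cta: string;             // call to action
}

// Minimal validation sketch: every required field must be a non-empty string.
function validateBrief(brief: Partial<Brief>): brief is Brief {
  const fields: (keyof Brief)[] = [
    "tema", "cil", "cilova_skupina", "ton", "hlavni_myslenka", "cta",
  ];
  return fields.every((f) => {
    const value = brief[f];
    return typeof value === "string" && value.length > 0;
  });
}
```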
Kamil is the central dashboard helper (inspired by Jarvis):
- understands user intent in plain language,
- calls tools and routes requests into research/generation flows,
- can be used both in the web dashboard and as a Telegram bot.
Kamil chat with tool-calling in action: the user asks a question, and Kamil decides whether to trigger a research or generation workflow.
Research results are returned in a readable assistant format, ready for immediate post creation.
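A minimal sketch of the routing idea behind Kamil. In the real system the model itself selects tools via tool-calling; the keyword matching below is only a hypothetical stand-in to show the shape of the decision:

```typescript
// Hypothetical tool registry: Kamil routes user intent into one of these flows.
type Tool = "research" | "generate_post" | "chat";

interface RoutedIntent {
  tool: Tool;
  query: string;
}

// Naive keyword-based routing, standing in for model-driven tool selection.
function routeIntent(message: string): RoutedIntent {
  const lower = message.toLowerCase();
  if (/trend|research|places|what .*film/.test(lower)) {
    return { tool: "research", query: message };
  }
  if (/write|draft|post|script|reel/.test(lower)) {
    return { tool: "generate_post", query: message };
  }
  return { tool: "chat", query: message };
}
```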
Our product intent was practical: help an influencer quickly discover what to show while traveling in a specific city (places, hooks, trends, and content angles) and turn that into platform-ready content.
Our research flow is designed to reduce hallucinations:
- fetches best matching creator-history examples,
- runs trends + location search in parallel,
- verifies locations in a second pass,
- geocodes and returns structured context,
- forwards validated context into downstream generation.
The anti-hallucination flow combines history retrieval, web search, verification, and structured output.
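The steps above can be sketched as a pipeline. All function names and data shapes here are illustrative placeholders for the real retrieval, search, and geocoding services:

```typescript
// Illustrative types for the research pipeline stages.
interface HistoryExample { title: string; score: number; }
interface Place { name: string; lat?: number; lng?: number; verified: boolean; }
interface ResearchContext {
  history: HistoryExample[];
  trends: string[];
  places: Place[];
}

// Stand-ins for the real creator-history retrieval and web search services.
async function fetchHistory(topic: string): Promise<HistoryExample[]> {
  return [{ title: `past post about ${topic}`, score: 0.9 }];
}
async function searchTrends(topic: string): Promise<string[]> {
  return [`${topic} challenge`];
}
async function searchPlaces(city: string): Promise<Place[]> {
  return [{ name: `${city} old town`, verified: false }];
}
// Second pass: keep only places that pass verification, attach coordinates.
async function verifyAndGeocode(places: Place[]): Promise<Place[]> {
  return places.map((p) => ({ ...p, verified: true, lat: 0, lng: 0 }));
}

// The anti-hallucination flow: history retrieval plus trends and location
// search in parallel, a verification pass, then structured context that is
// forwarded into downstream generation.
async function research(topic: string, city: string): Promise<ResearchContext> {
  const [history, trends, rawPlaces] = await Promise.all([
    fetchHistory(topic),
    searchTrends(topic),
    searchPlaces(city),
  ]);
  const places = await verifyAndGeocode(rawPlaces);
  return { history, trends, places };
}
```

The key design point is that trends and location search run concurrently via `Promise.all`, while verification is a deliberate second sequential pass over the search results.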
Topic-related places are visualized on a map so creators can quickly plan what to shoot in that city.
Trend signals and content angles are surfaced as practical ideas, not generic brainstorming.
One-click transition from research to a post draft workflow.
We use a democracy-style swarm with 4 role-specialized AI personalities.
In the current implementation, the swarm runs on a single strong base model (Gemini 2.5 Flash), but each agent has a distinct editorial personality and set of responsibilities:
- Zoey (Gen-Z trend scout): focuses on hooks, viral formats, and platform-native energy.
- Marcus (Producer): focuses on structure, pacing, retention, and execution quality.
- Dr. Chen (Analyst): focuses on originality, depth, and non-obvious angles.
- Leo (Reviewer/QA): challenges ideas and scores platform fit, clarity, and performance potential.
Each round combines generation/revision with peer rating; score aggregation then picks the strongest direction. This produces better ideas than a single-shot prompt.
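The score-aggregation step can be sketched as follows. Personality names come from the list above; the flat-sum scoring shown is a simplified stand-in for the actual rating scheme:

```typescript
// One proposal per personality per round, rated by the other personalities.
interface Proposal {
  author: string;                   // e.g. "Zoey", "Marcus", "Dr. Chen"
  idea: string;
  ratings: Record<string, number>;  // peer name -> score, self excluded
}

// Aggregate peer scores and pick the strongest direction for the next round.
function pickWinner(proposals: Proposal[]): Proposal {
  const total = (p: Proposal) =>
    Object.values(p.ratings).reduce((sum, score) => sum + score, 0);
  return proposals.reduce((best, p) => (total(p) > total(best) ? p : best));
}
```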
Multi-agent voting/revision loop where multiple perspectives compete before final selection.
Project detail view showing swarm status, generated outputs, and next actions in one place.
The app supports full project lifecycle:
- create projects,
- create/manage posts,
- attach media and context,
- track generation/QA status per platform,
- inspect final outputs.
Global project list and progress overview.
Simple project creation flow to start a new campaign quickly.
Per-project post management with clear state tracking.
Brief, media, and generated content are kept together for fast editorial iteration.
The system performs automatic image understanding to ground outputs in visual evidence:
- vision model describes the image content,
- semantic text + vector embeddings are stored,
- relevant images are ranked by similarity and injected into prompts.
This helps avoid hallucinating visuals and improves relevance.
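The similarity-ranking step can be sketched with cosine similarity over stored embeddings. The vectors below are toy values; in the real system they come from an embedding model:

```typescript
interface ImageRecord {
  url: string;
  description: string;  // produced by the vision model
  embedding: number[];  // semantic embedding of the description
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored images against the query embedding, best matches first,
// so the top-K can be injected into generation prompts.
function rankImages(query: number[], images: ImageRecord[], topK = 3): ImageRecord[] {
  return [...images]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, topK);
}
```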
Beyond text generation, we also support:
- automatic YouTube thumbnail generation,
- a Remotion-based video-production pipeline (script/speech/video composition).
Preview of automated video generation flow (script → speech → composited short-form video).
Each platform output receives a structured QA decision (approve / revise) with reason and recommended edits.
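The structured QA decision could be modeled as a discriminated union like the one below. Field names are illustrative; the idea is that the QA agent must return exactly this shape rather than free-form text:

```typescript
// Structured QA verdict returned for each platform output.
type QADecision =
  | { decision: "approve"; reason: string }
  | {
      decision: "revise";
      reason: string;
      recommendedEdits: string[]; // concrete edits for the editor agent
    };

// Hypothetical helper: decide whether the workflow can publish
// or must loop back into revision.
function canPublish(qa: QADecision): boolean {
  return qa.decision === "approve";
}
```

A discriminated union makes the approve/revise branch explicit at the type level: `recommendedEdits` only exists on the `revise` variant, so the revision loop cannot be skipped by accident.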
The final output can be sent by email for judging.
Final output packaged into email-ready format for judges/comparison.
- Next.js (App Router), React, TypeScript
- Gemini/Vertex AI (`@google/genai`, `@ai-sdk/google-vertex`)
- Drizzle ORM + Turso/libSQL
- UploadThing
- Remotion
- Nodemailer + React Email