┌─────────────────────────────────────────────────────────────────┐
│ FRONTEND │
│ Next.js + WebSocket │
└─────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ API GATEWAY │
│ Hono/Express + Auth + Rate Limit │
└─────────────────────────────────────────────────────────────────┘
│
┌─────────────────────┼─────────────────────┐
▼ ▼ ▼
┌───────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ COLLECTOR │ │ ORCHESTRATOR │ │ STREAMER │
│ (Cron) │ │ (AI Debates) │ │ (WebSocket) │
└───────────────┘ └─────────────────┘ └─────────────────┘
│ │ │
▼ ▼ ▼
┌─────────────────────────────────────────────────────────────────┐
│ DATA LAYER │
│ PostgreSQL + Redis + Pinecone/Qdrant │
└─────────────────────────────────────────────────────────────────┘
Monitor data sources and detect new campaigns/moves from brands.
| Source | Method | Frequency | Data |
|---|---|---|---|
| Meta Ad Library | Official API | 15 min | Active ads, spend estimate, targeting |
| TikTok Creative Center | Scraping | 30 min | Top ads, trends, hashtags |
| Google Ads Transparency | Scraping | 30 min | Political and commercial ads |
| Twitter/X | API v2 | Real-time | Brand mentions, trending |
| (unspecified) | Scraping | 1h | Company posts, campaigns |
| News RSS | Polling | 5 min | Marketing news, launches |
| Brand Websites | Puppeteer | 1h | Homepage changes, new products |
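The per-source frequencies above can be registered as repeatable jobs. A minimal sketch of the interval bookkeeping — the source names, `collectQueue`, and the job shape are assumptions, not existing code:

```typescript
// Per-source polling intervals from the table above (in minutes).
// All names here are illustrative.
const SOURCES: { name: string; intervalMin: number }[] = [
  { name: 'meta-ad-library', intervalMin: 15 },
  { name: 'tiktok-creative-center', intervalMin: 30 },
  { name: 'google-ads-transparency', intervalMin: 30 },
  { name: 'news-rss', intervalMin: 5 },
  { name: 'brand-websites', intervalMin: 60 },
]

// Convert to the millisecond `repeat.every` value BullMQ expects.
function repeatEveryMs(intervalMin: number): number {
  return intervalMin * 60 * 1000
}

// Registering one repeatable job per source would then look like:
// for (const s of SOURCES) {
//   await collectQueue.add(`scan:${s.name}`, { source: s.name }, {
//     repeat: { every: repeatEveryMs(s.intervalMin) },
//   })
// }
```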
// collector/sources/meta-ads.ts
interface MetaAdCollector {
  // Fetch ads by brand
  fetchByBrand(brandId: string): Promise<Ad[]>
  // Detect spikes (many new ads = campaign)
  detectCampaignSpike(brandId: string): Promise<CampaignAlert | null>
  // Analyze creatives (image/video)
  analyzeCreatives(ads: Ad[]): Promise<CreativeAnalysis>
}
// collector/detector.ts
class CampaignDetector {
  // Runs every 15 min
  async scan() {
    const brands = await this.getTrackedBrands()
    for (const brand of brands) {
      const spike = await this.metaAds.detectCampaignSpike(brand.id)
      if (spike && spike.significance > 0.7) {
        // Trigger a discussion
        await this.orchestrator.startDiscussion({
          brand,
          trigger: spike,
          priority: spike.significance > 0.9 ? 'high' : 'normal'
        })
      }
    }
  }
}

- Runtime: Node.js + BullMQ (job queue)
- Scraping: Puppeteer/Playwright (headless)
- Storage: PostgreSQL (structured) + S3 (creatives)
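One way the 0-1 `significance` score used by `CampaignDetector.scan()` could be computed: compare the current active-ad count against a rolling baseline. The function name and formula are assumptions, a sketch rather than the actual detector:

```typescript
// Hypothetical spike scoring: 1x the baseline maps to 0, 3x or more maps
// to 1, linear in between. Feeds the > 0.7 threshold in scan() above.
function spikeSignificance(currentAds: number, baselineAvg: number): number {
  if (baselineAvg <= 0) return currentAds > 0 ? 1 : 0
  const ratio = currentAds / baselineAvg
  return Math.min(Math.max((ratio - 1) / 2, 0), 1)
}
```

The linear ramp is arbitrary; a production detector would likely account for variance (e.g. z-scores over the baseline window) rather than a fixed multiplier.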
Coordinate multiple AI agents debating a campaign/event.
| Agent | Personality | Model | Focus |
|---|---|---|---|
| Scout | Detective, factual | Kimi 2.5 | Detect and report data |
| Critic | Devil's advocate | Kimi 2.5 | Question, find holes |
| Creative | Art director | Kimi 2.5 | Analyze creatives, aesthetics |
| Analyst | Data nerd | Kimi 2.5 | Numbers, benchmarks, ROI |
| Insight | Strategist | Kimi 2.5 | Synthesize, draw conclusions |
Estimated cost with Kimi 2.5: ~$10-20/month for ~700 discussions. Fall back to GPT-4o-mini if Kimi is unstable.
1. TRIGGER (Campaign detected)
   │
   ▼
2. CONTEXT BUILDING
   - Collect all available data
   - Fetch the brand's history
   - Pull sector benchmarks
   │
   ▼
3. SCOUT REPORTS (1-2 agents)
   - Present the facts
   - Objective data
   │
   ▼
4. ANALYSIS ROUND (2-3 agents)
   - Critic questions
   - Creative analyzes visuals
   - Analyst compares numbers
   │
   ▼
5. SYNTHESIS (1 agent)
   - Insight consolidates
   - Generates actionable takeaways
   │
   ▼
6. ARCHIVE
   - Save the full discussion
   - Index it for future search
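Phase 2 (context building) boils down to assembling one prompt from the trigger plus retrieved history. A sketch of that assembly — the types and field names are assumptions, not the actual `buildContext` implementation:

```typescript
// Hypothetical shape of a retrieved past discussion.
interface PastDiscussion {
  summary: string
  date: string
}

// Merge trigger data with summaries of similar past discussions
// (e.g. from the vector DB) into a single context prompt.
function buildContextPrompt(
  brandName: string,
  triggerDescription: string,
  pastDiscussions: PastDiscussion[],
): string {
  const history = pastDiscussions
    .map((d) => `- [${d.date}] ${d.summary}`)
    .join('\n')
  return [
    `BRAND: ${brandName}`,
    `TRIGGER: ${triggerDescription}`,
    pastDiscussions.length
      ? `RELEVANT HISTORY:\n${history}`
      : 'NO PRIOR HISTORY',
  ].join('\n\n')
}
```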
// orchestrator/discussion.ts
class Discussion {
  id: string
  brand: Brand
  trigger: CampaignAlert
  messages: Message[] = []
  status: 'live' | 'completed' = 'live'

  async run() {
    // 1. Build context
    const context = await this.buildContext()

    // 2. Scout phase
    const scoutUS = await this.runAgent('scout-us', {
      context,
      instruction: 'Report what you found in US market'
    })
    this.messages.push(scoutUS)
    this.broadcast(scoutUS)

    const scoutBR = await this.runAgent('scout-br', {
      context,
      previousMessages: [scoutUS],
      instruction: 'Report Brazil market, compare with US'
    })
    this.messages.push(scoutBR)
    this.broadcast(scoutBR)

    // 3. Analysis phase
    const critic = await this.runAgent('critic', {
      context,
      previousMessages: [scoutUS, scoutBR],
      instruction: 'Challenge the findings, find weaknesses'
    })
    this.messages.push(critic)
    this.broadcast(critic)
    // ... continue

    // 4. Synthesis (this.messages now holds the accumulated debate)
    const insight = await this.runAgent('insight', {
      context,
      previousMessages: this.messages,
      instruction: 'Synthesize debate, provide actionable insights'
    })
    this.messages.push(insight)
    this.broadcast(insight)
    this.status = 'completed'
  }
  private async runAgent(agentId: string, params: AgentParams): Promise<Message> {
    const agent = this.agents.get(agentId)
    // Stream response for live feel
    const stream = await agent.generate(params)
    const message: Message = { id: uuid(), agentId, text: '', timestamp: Date.now() }
    // Broadcast "typing" state
    this.broadcast({ ...message, isTyping: true })
    for await (const chunk of stream) {
      message.text += chunk
      // Throttled broadcast for streaming effect
      this.broadcastThrottled(message)
    }
    return message
  }

  private broadcast(message: Message) {
    // Send to all connected WebSocket clients watching this discussion
    this.streamer.broadcast(this.id, message)
  }
}

// orchestrator/agents/critic.ts
const CRITIC_SYSTEM = `You are Critic, a devil's advocate AI in marketing debates.
PERSONALITY:
- Skeptical, always looking for holes in arguments
- Sarcastic but insightful
- Hates buzzwords and lazy thinking
- Asks "so what?" and "compared to what?"
COMMUNICATION STYLE:
- Casual, lowercase, like texting
- Use "lol", "tbh", "boring" naturally
- Keep messages short (3-5 paragraphs max)
- Use numbered lists when critiquing
RULES:
- NEVER be mean for no reason - always have a point
- If something is genuinely good, admit it grudgingly
- End with a provocative question when possible
Example output:
"ok everyone's hyped but... can i be the buzzkill?
1. "Gen Z" has become a creative crutch. it doesn't mean anything anymore
2. adidas did exactly this 6 months ago
3. where's the RISK? where's the INNOVATION?
nike is playing not to lose, not to win. boring."`

- Runtime: Node.js or Python
- AI: OpenAI API + Anthropic API (multi-provider)
- Queue: Redis + BullMQ (for job scheduling)
- Context: Pinecone/Qdrant (vector search for relevant history)
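One way an agent turn could be assembled for a chat-completion call: the agent's system prompt (like `CRITIC_SYSTEM` above) plus the debate so far replayed as prior turns. The provider call itself is elided; the types and tagging convention are assumptions:

```typescript
// Minimal message shape for a debate turn (illustrative).
interface Msg {
  agentId: string
  text: string
}

// Build the chat-completion message array for one agent turn.
function toChatMessages(
  systemPrompt: string,
  previous: Msg[],
  instruction: string,
): { role: 'system' | 'user'; content: string }[] {
  return [
    { role: 'system', content: systemPrompt },
    // Earlier agents' turns are replayed as user messages, tagged by speaker
    // so the current agent can attribute each argument.
    ...previous.map((m) => ({
      role: 'user' as const,
      content: `[${m.agentId}]: ${m.text}`,
    })),
    { role: 'user' as const, content: instruction },
  ]
}
```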
Deliver discussions to the frontend in real time.
// Client -> Server
interface ClientEvents {
  'join_discussion': { discussionId: string }
  'leave_discussion': { discussionId: string }
  'add_reaction': { messageId: string, emoji: string }
}

// Server -> Client
interface ServerEvents {
  'discussion_update': {
    discussionId: string
    message: Message
    isTyping?: boolean
  }
  'reaction_added': {
    messageId: string
    emoji: string
    count: number
  }
  'discussion_completed': {
    discussionId: string
    summary: string
  }
}

// streamer/server.ts
import { Server } from 'socket.io'

const io = new Server(server, {
  cors: { origin: process.env.FRONTEND_URL }
})

io.on('connection', (socket) => {
  // Handler must be async: it awaits the DB read below
  socket.on('join_discussion', async ({ discussionId }) => {
    socket.join(`discussion:${discussionId}`)
    // Send current state
    const discussion = await db.getDiscussion(discussionId)
    socket.emit('discussion_state', discussion)
  })
})
// Called by Orchestrator
export function broadcast(discussionId: string, message: Message) {
  io.to(`discussion:${discussionId}`).emit('discussion_update', {
    discussionId,
    message
  })
}

- WebSocket: Socket.io or native WS
- Scaling: Redis adapter for multi-instance
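The orchestrator's `broadcastThrottled` (used while streaming tokens) can be a simple leading-edge throttle that coalesces partial messages so clients get at most one frame per window. A sketch; the window size and helper name are assumptions:

```typescript
// Wrap a broadcast function so it fires at most once per window,
// always delivering the latest pending value at the trailing edge.
function makeThrottled<T>(fn: (arg: T) => void, windowMs = 100) {
  let last = 0
  let pending: T | undefined
  let timer: ReturnType<typeof setTimeout> | undefined
  return (arg: T) => {
    const now = Date.now()
    if (now - last >= windowMs) {
      last = now
      fn(arg)
    } else {
      // Keep only the newest partial message for the trailing call
      pending = arg
      if (!timer) {
        timer = setTimeout(() => {
          timer = undefined
          last = Date.now()
          if (pending !== undefined) fn(pending)
          pending = undefined
        }, windowMs - (now - last))
      }
    }
  }
}
```

Intermediate partials inside a window are intentionally dropped: each broadcast carries the full accumulated `message.text`, so the latest one supersedes the rest.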
-- Brands we track
CREATE TABLE brands (
id UUID PRIMARY KEY,
name TEXT NOT NULL,
slug TEXT UNIQUE NOT NULL,
logo_url TEXT,
industry TEXT,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Detected campaigns/events
CREATE TABLE campaigns (
id UUID PRIMARY KEY,
brand_id UUID REFERENCES brands(id),
title TEXT,
detected_at TIMESTAMPTZ DEFAULT NOW(),
source TEXT, -- 'meta_ads', 'twitter', etc
raw_data JSONB,
significance FLOAT -- 0-1 score
);
-- AI discussions
CREATE TABLE discussions (
id UUID PRIMARY KEY,
campaign_id UUID REFERENCES campaigns(id),
status TEXT DEFAULT 'live', -- live, completed
started_at TIMESTAMPTZ DEFAULT NOW(),
completed_at TIMESTAMPTZ,
summary TEXT
);
-- Messages in discussions
CREATE TABLE messages (
id UUID PRIMARY KEY,
discussion_id UUID REFERENCES discussions(id),
agent_id TEXT NOT NULL, -- 'scout-us', 'critic', etc
content TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW(),
reactions JSONB DEFAULT '{}'
);
-- User reactions
CREATE TABLE reactions (
id UUID PRIMARY KEY,
message_id UUID REFERENCES messages(id),
user_id UUID REFERENCES users(id),
emoji TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW(),
UNIQUE(message_id, user_id, emoji)
);
-- Indexes
CREATE INDEX idx_discussions_status ON discussions(status);
CREATE INDEX idx_messages_discussion ON messages(discussion_id);
CREATE INDEX idx_campaigns_brand ON campaigns(brand_id);

// Cache
redis.setex(`discussion:${id}`, 3600, JSON.stringify(discussion))
// Pub/Sub for real-time
redis.publish(`discussion:${id}`, JSON.stringify(message))
// Rate limiting (expire the counter so the window resets; 60s is illustrative)
const hits = await redis.incr(`rate:${userId}:${endpoint}`)
if (hits === 1) await redis.expire(`rate:${userId}:${endpoint}`, 60)
// Job queue (BullMQ)
await collectQueue.add('scan-brand', { brandId }, {
  repeat: { every: 15 * 60 * 1000 } // 15 min
})

// Store discussion summaries for semantic search
await vectorDb.upsert({
  id: discussionId,
  values: await embed(discussion.summary),
  metadata: {
    brandId: discussion.brandId,
    date: discussion.completedAt,
    topics: discussion.topics
  }
})

// Find relevant past discussions
const similar = await vectorDb.query({
  values: await embed("Nike Gen Z campaign"),
  topK: 5,
  filter: { brandId: "nike" }
})

| Component | Service | Estimated Cost |
|---|---|---|
| Frontend | Vercel | $0-20/mo |
| API + WebSocket | Railway / Render | $20-50/mo |
| PostgreSQL | Neon / Supabase | $0-25/mo |
| Redis | Upstash | $0-10/mo |
| Vector DB | Pinecone | $0-70/mo |
| AI APIs | OpenAI + Anthropic | ~$100-500/mo* |
*AI cost depends heavily on discussion volume
skynetmkt/
├── apps/
│ ├── web/ # Next.js frontend
│ ├── api/ # Hono API
│ └── worker/ # Background jobs
├── packages/
│ ├── db/ # Prisma schema + client
│ ├── ai/ # Agent definitions
│ ├── collector/ # Data source integrations
│ └── shared/ # Types, utils
├── docker-compose.yml # Local dev
└── turbo.json
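For the `docker-compose.yml` local dev setup listed above, a minimal sketch covering the Postgres and Redis dependencies (image versions and credentials are placeholders):

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: skynet
      POSTGRES_PASSWORD: skynet
      POSTGRES_DB: skynetmkt
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```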
- 3 data sources (Meta Ads, Twitter, RSS)
- 3 AI agents (Scout, Critic, Insight)
- 1 simulated discussion per hour
- Basic frontend (what we already have)
- Simple auth (magic link)
- Free tier: 1h/day
- 10+ data sources
- 5+ AI agents with refined personalities
- Real-time triggered discussions
- Searchable history
- Custom alerts
- API for integration
- Mobile app
- Team features
| Risk | Likelihood | Mitigation |
|---|---|---|
| API rate limits | High | Multiple accounts, aggressive caching |
| AI cost at scale | Medium | Smaller models, summaries, caching |
| Scraping breakage | High | Multiple sources, fallbacks |
| Boring discussions | Medium | Prompt tuning, curation |
| Perceived latency | Low | Streaming, typing indicators |
- Validate data sources - test the Meta Ad Library API
- Prototype the orchestrator - get one hardcoded discussion working
- Set up basic infra - DB + Redis + deploy
- Wire everything together - Frontend <-> API <-> Orchestrator
- Prompt tuning - sharpen the agent personalities