Merged
61 changes: 61 additions & 0 deletions app/page.tsx
@@ -1,4 +1,5 @@
import { Button } from '@/components/ui/button';
import { MermaidDiagram } from '@/components/marketing/mermaid-diagram';
import Image from 'next/image';
import Link from 'next/link';

@@ -38,6 +39,22 @@ const iterativeLoop = [
'Return the answer with visible framing, then capture user feedback.',
];

const marketingThinkingDiagram = String.raw`
flowchart TD
request["User asks for help"]
classify["Classify the task<br/>simple chat, analysis, planning,<br/>decision, or troubleshooting"]
gate{"Use meta mode?"}
direct["Direct answer path<br/>Respond normally when the task is easy"]
plan["Planner pass<br/>Extract goal, constraints, plan,<br/>and response strategy"]
draft["Draft pass<br/>Create a compact draft response"]
critique["Critic pass<br/>Check assumptions, limitations,<br/>and context gaps"]
final["Final answer<br/>Show the response with compact framing<br/>and trust signals"]

request --> classify --> gate
gate -- "No" --> direct --> final
gate -- "Yes" --> plan --> draft --> critique --> final
`;

const overview = [
'A meta layer on top of chat that makes task interpretation visible.',
'Structured planner and critic passes running behind the assistant.',
@@ -258,6 +275,50 @@ export default function Home() {
</div>
</section>

<section className="mx-auto max-w-6xl px-6 py-16 sm:px-10">
<div className="rounded-[2rem] border border-ink/10 bg-white/70 p-8 shadow-soft-edge sm:p-12">
<div className="grid gap-10 lg:grid-cols-[0.42fr_0.58fr]">
<div>
<p className="text-sm uppercase tracking-[0.4em] text-ink/60">
Meta-thinking path
</p>
<h2 className="mt-4 font-display text-4xl">
A visible diagram of how SecondOrder thinks.
</h2>
<p className="mt-4 text-ink/70">
The system does not run the full planner and critic stack for
every prompt. It first classifies the request, then chooses the
light path or the structured path based on how much reasoning
the task needs.
</p>
<div className="mt-8 rounded-3xl border border-ink/10 bg-bone p-5">
<p className="text-xs uppercase tracking-[0.3em] text-ink/55">
Why this matters
</p>
<p className="mt-3 text-sm text-ink/75">
Friends trying the product can understand it as a chat system
with a decision gate: answer directly when the task is easy,
or switch into a planning and critique loop when the task is
ambiguous, risky, or multi-step.
</p>
</div>
</div>

<div className="rounded-[2rem] border border-ink/10 bg-bone p-5 sm:p-6">
<MermaidDiagram
chart={marketingThinkingDiagram}
className="bg-white/90"
/>
<p className="mt-4 text-sm text-ink/65">
The key product idea is the decision gate in the middle:
simple prompts stay fast, while complex prompts trigger the
planning and critique loop.
</p>
</div>
</div>
</div>
</section>

<section className="mx-auto max-w-6xl px-6 py-16 sm:px-10">
<div className="grid gap-10 lg:grid-cols-2">
<div className="rounded-3xl border border-ink/10 bg-white/70 p-8">
112 changes: 112 additions & 0 deletions components/marketing/mermaid-diagram.tsx
@@ -0,0 +1,112 @@
'use client';

import { useEffect, useId, useState } from 'react';
import { cn } from '@/lib/utils';

interface MermaidDiagramProps {
chart: string;
className?: string;
}

// Module-scope guard so mermaid.initialize runs only once per page load,
// no matter how many diagrams mount.
let hasInitializedMermaid = false;

export function MermaidDiagram({
chart,
className,
}: MermaidDiagramProps) {
const [svg, setSvg] = useState<string>('');
const [hasError, setHasError] = useState(false);
const id = useId().replace(/:/g, '');

useEffect(() => {
let isActive = true;

async function renderDiagram() {
try {
const mermaid = (await import('mermaid')).default;

if (!hasInitializedMermaid) {
mermaid.initialize({
startOnLoad: false,
securityLevel: 'loose', // safe here: charts are static in-repo strings, not user input
theme: 'base',
flowchart: {
htmlLabels: true,
useMaxWidth: true,
curve: 'stepBefore',
},
themeVariables: {
background: '#F7F7F5',
primaryColor: '#FFFFFF',
primaryTextColor: '#0B0B0B',
primaryBorderColor: '#0B0B0B',
lineColor: '#0B0B0B',
secondaryColor: '#E6E6E1',
tertiaryColor: '#F7F7F5',
fontFamily: 'Manrope, sans-serif',
},
});

hasInitializedMermaid = true;
}

const { svg: nextSvg } = await mermaid.render(`secondorder-${id}`, chart);

if (!isActive) {
return;
}

setSvg(nextSvg);
setHasError(false);
} catch {
if (!isActive) {
return;
}

setHasError(true);
}
}

void renderDiagram();

return () => {
isActive = false;
};
}, [chart, id]);

if (hasError) {
return (
<div
className={cn(
'rounded-3xl border border-ink/10 bg-white/70 p-6 text-sm text-ink/60',
className,
)}
>
Diagram unavailable.
</div>
);
}

return (
<div
className={cn(
'rounded-3xl border border-ink/10 bg-white/80 p-4 shadow-soft-edge',
className,
)}
>
{svg ? (
<div
className={cn(
'[&_foreignObject_div]:font-sans [&_svg]:h-auto [&_svg]:w-full [&_svg]:max-w-none',
'[&_svg_.edgeLabel]:bg-bone [&_svg_.edgeLabel]:px-1',
)}
dangerouslySetInnerHTML={{ __html: svg }}
/>
) : (
<div className="flex min-h-[420px] items-center justify-center text-sm text-ink/50">
Rendering diagram...
</div>
)}
</div>
);
}
93 changes: 93 additions & 0 deletions docs/architecture-overview.md
@@ -0,0 +1,93 @@
# SecondOrder Architecture Overview

This document gives two shareable views of the current system:

- the product and system architecture
- the meta-thinking flow used for harder chat requests

## High-Level System

```mermaid
flowchart LR
user["User"]
marketing["Marketing Site<br/>`/`"]
chatUi["Chat UI<br/>`/chat` and `/chat/[threadId]`"]
chatApi["Chat API<br/>`app/api/chat/route.ts`"]
workflow["Meta Chat Workflow<br/>classify -> plan -> draft -> critique"]
agent["SecondOrder Agent<br/>final streamed response"]
memory["Mastra Memory Store<br/>thread history + titles"]
events["Chat Events Store<br/>feedback + analytics events"]
models["LLM Models<br/>main agent + planner + critic"]
storage["Storage Backend<br/>Postgres or LibSQL/local file"]

user --> marketing
user --> chatUi
chatUi --> chatApi
chatApi --> memory
chatApi --> workflow
workflow --> models
workflow --> agent
agent --> models
agent --> memory
chatApi --> events
memory --> storage
events --> storage
chatApi --> chatUi
chatUi --> user
```

### What this means

- The marketing site explains the product and sends people into the chat experience.
- The chat UI talks only to the Next.js chat API route.
- The API route is the orchestration boundary: it validates the request, loads thread history, runs the meta workflow, and streams the final answer back to the UI.
- Mastra provides the workflow engine, agent runtime, memory, and storage integration.
- Thread history and titles live in Mastra memory storage.
- Product analytics events like `thread_started`, `meta_mode_used`, and `feedback_submitted` are recorded separately for measurement.
- Storage can run on Postgres in production or LibSQL/file-backed storage locally.
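The orchestration order described above can be sketched as a plain TypeScript sequence. This is a minimal illustration with hypothetical function names; the real route lives in `app/api/chat/route.ts` and delegates to Mastra's workflow, memory, and streaming APIs.

```typescript
// Illustrative sketch of the chat API route's orchestration boundary.
// All names here are placeholders, not the real implementation.

type ChatRequest = { threadId: string; message: string };

const trace: string[] = []; // records the call order for inspection

function validate(req: ChatRequest): ChatRequest {
  trace.push('validate');
  if (!req.message.trim()) throw new Error('empty message');
  return req;
}

function loadHistory(threadId: string): string[] {
  trace.push('loadHistory');
  return []; // stand-in for the Mastra memory lookup
}

function runMetaWorkflow(message: string, history: string[]): string {
  trace.push('runMetaWorkflow');
  return `answer to: ${message}`; // stand-in for classify -> plan -> draft -> critique
}

function streamToUi(answer: string): void {
  trace.push('stream');
}

function recordEvent(name: string): void {
  trace.push(`event:${name}`);
}

function handleChat(req: ChatRequest): string {
  const valid = validate(req);
  const history = loadHistory(valid.threadId);
  const answer = runMetaWorkflow(valid.message, history);
  streamToUi(answer);
  recordEvent('meta_mode_used');
  return answer;
}
```

Calling `handleChat({ threadId: 't1', message: 'help me plan' })` leaves `trace` as `['validate', 'loadHistory', 'runMetaWorkflow', 'stream', 'event:meta_mode_used']`, which mirrors the order the route enforces: validate, load history, run the workflow, stream, then record events.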

## Meta-Thinking Process

```mermaid
flowchart TD
request["User message arrives"]
classify["1. Classify task<br/>simple chat, analysis, planning,<br/>decision, or troubleshooting"]
gate{"Use meta mode?"}
direct["Respond directly"]
plan["2. Planner pass<br/>goal, constraints, plan,<br/>response strategy"]
draft["3. Draft pass<br/>short draft answer"]
critique["4. Critic pass<br/>confidence, limitations,<br/>context gaps, adjustments"]
context["5. Build request context<br/>task type + meta guidance"]
final["6. SecondOrder agent writes<br/>the final user-visible reply"]
stream["7. Stream answer to UI<br/>with compact metadata"]
feedback["8. Capture product events<br/>and user feedback"]

request --> classify
classify --> gate
gate -- "No" --> direct
direct --> final
gate -- "Yes" --> plan
plan --> draft
draft --> critique
critique --> context
context --> final
final --> stream
stream --> feedback
```

### How to explain this to a friend

- SecondOrder does not always run the heavy workflow. It first decides whether the request is simple or needs a more structured pass.
- For harder requests, it produces a compact plan and a short draft answer before responding.
- It then critiques that draft so the final response can include better framing, clearer limitations, and missing-context signals.
- The user does not see the full hidden chain-of-thought. They get a normal answer plus compact, useful signals about how the system approached the task.
- That makes the product feel like a chat assistant with visible reasoning discipline, not just raw text generation.
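The decision gate in the flowchart can be sketched as a tiny classifier. Note this is an assumption-laden illustration: the real classifier is an LLM pass inside `mastra/workflows/meta-chat-workflow.ts`, and the keyword heuristics below are placeholders only.

```typescript
// Illustrative decision gate. The real system classifies with an LLM
// pass; these regex heuristics are stand-ins for readability.

type TaskType =
  | 'simple_chat'
  | 'analysis'
  | 'planning'
  | 'decision'
  | 'troubleshooting';

function classifyTask(message: string): TaskType {
  const text = message.toLowerCase();
  if (/\b(plan|roadmap|steps)\b/.test(text)) return 'planning';
  if (/\b(should i|choose|decide)\b/.test(text)) return 'decision';
  if (/\b(error|broken|fails?)\b/.test(text)) return 'troubleshooting';
  if (/\b(compare|analy[sz]e|tradeoffs?)\b/.test(text)) return 'analysis';
  return 'simple_chat';
}

function useMetaMode(task: TaskType): boolean {
  // Simple prompts stay on the fast path; everything else triggers the
  // planner -> draft -> critic loop before the agent writes the reply.
  return task !== 'simple_chat';
}
```

For example, `classifyTask('help me plan a launch')` falls on the structured path, while a plain greeting stays on the fast path.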

## Current Building Blocks

- UI: [`app/page.tsx`](/Users/henry/workspace/secondorder-web/app/page.tsx), [`app/chat/page.tsx`](/Users/henry/workspace/secondorder-web/app/chat/page.tsx), [`app/chat/[threadId]/page.tsx`](/Users/henry/workspace/secondorder-web/app/chat/[threadId]/page.tsx)
- API: [`app/api/chat/route.ts`](/Users/henry/workspace/secondorder-web/app/api/chat/route.ts)
- Workflow: [`mastra/workflows/meta-chat-workflow.ts`](/Users/henry/workspace/secondorder-web/mastra/workflows/meta-chat-workflow.ts)
- Agents and memory: [`mastra/agents.ts`](/Users/henry/workspace/secondorder-web/mastra/agents.ts)
- Runtime wiring: [`mastra/index.ts`](/Users/henry/workspace/secondorder-web/mastra/index.ts)
- Storage and events: [`lib/chat/history.ts`](/Users/henry/workspace/secondorder-web/lib/chat/history.ts), [`lib/chat/events.ts`](/Users/henry/workspace/secondorder-web/lib/chat/events.ts), [`lib/chat/storage.ts`](/Users/henry/workspace/secondorder-web/lib/chat/storage.ts)