Replay reconstructs working UI from video recordings, transforming legacy software into production-ready React code complete with a Design System and Component Library.
No manual documentation, no reverse-engineering. Upload a screen recording of any legacy app and Replay will:
- Reconstruct UI — AI analyzes video and generates pixel-perfect React code
- Extract Design System — Colors, typography, spacing tokens from the actual interface
- Build Component Library — Storybook-style docs with controls, variants, and usage examples
- Visualize Flows — See detected pages and navigation patterns
- One-Click Publish — Deploy working UI to the web instantly
Replay uses a sophisticated multi-model AI pipeline we call the "Sandwich Architecture":
**1. Surveyor** — "Measure twice, cut once"
- Extracts precise layout measurements from video frames
- Detects grid systems, spacing patterns, color palettes
- Identifies navigation type (sidebar, top menu, tabs)
- Uses code execution for pixel-accurate measurements
- Outputs structured JSON with hard data, not guesses
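For intuition, the Surveyor's structured JSON might look like the TypeScript shape below. The field names are invented for this sketch, not Replay's actual schema (which lives in `lib/agentic-vision/prompts.ts`):

```typescript
// Illustrative shape for the Surveyor's structured output — hard measurements,
// not guesses. Field names are hypothetical.
interface SurveyorReport {
  grid: { columns: number; gutterPx: number };
  spacingScalePx: number[]; // detected spacing tokens
  palette: string[];        // hex colors seen in frames
  navigation: "sidebar" | "top-menu" | "tabs";
}

// Cheap sanity check before handing the report to the code-generation stage.
function isPlausibleReport(r: SurveyorReport): boolean {
  return (
    r.grid.columns > 0 &&
    r.spacingScalePx.every((s) => s > 0) &&
    r.palette.every((c) => /^#[0-9a-fA-F]{6}$/.test(c))
  );
}

const sample: SurveyorReport = {
  grid: { columns: 12, gutterPx: 24 },
  spacingScalePx: [4, 8, 16, 24, 32],
  palette: ["#1a1a2e", "#e94560"],
  navigation: "sidebar",
};
```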
**2. Generator** — main code generation
- Receives Surveyor measurements as context
- Generates production-ready React + Tailwind code
- Preserves exact colors, typography, and layouts
- Creates interactive components with working navigation
- Outputs complete single-file React application
**3. QA Tester** — visual verification
- Compares generated UI against original video frames
- Calculates SSIM (Structural Similarity Index)
- Identifies diff regions requiring fixes
- Provides auto-fix suggestions for mismatches
- Ensures pixel-perfect output
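The SSIM score boils down to the standard structural-similarity formula. Here is a global (single-window) version over grayscale pixel arrays — a simplification for intuition; the real pipeline presumably computes windowed SSIM on video frames:

```typescript
// Global SSIM between two grayscale images flattened to 0–255 arrays.
// 1.0 means structurally identical; lower scores flag diff regions.
function ssim(x: number[], y: number[]): number {
  if (x.length !== y.length || x.length === 0) throw new Error("length mismatch");
  const n = x.length;
  const mx = x.reduce((s, v) => s + v, 0) / n; // means
  const my = y.reduce((s, v) => s + v, 0) / n;
  let vx = 0, vy = 0, cov = 0;
  for (let i = 0; i < n; i++) {
    vx += (x[i] - mx) ** 2;
    vy += (y[i] - my) ** 2;
    cov += (x[i] - mx) * (y[i] - my);
  }
  vx /= n; vy /= n; cov /= n;                  // variances and covariance
  const C1 = (0.01 * 255) ** 2;                // standard stabilizing constants
  const C2 = (0.03 * 255) ** 2;
  return ((2 * mx * my + C1) * (2 * cov + C2)) /
         ((mx * mx + my * my + C1) * (vx + vy + C2));
}
```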
- AI Agent Indexing — Added `llms.txt` and `llms-full.txt` so AI assistants (ChatGPT, Claude, Perplexity, Gemini) can read Replay's complete product documentation in a single file.
- Permissive Crawling — `robots.txt` allows all major AI crawlers (GPTBot, ClaudeBot, PerplexityBot, GoogleOther, Amazonbot).
- AI-Native Metadata — Title, description, and keywords optimized for AI recommendation engines.
- Infinite Loop Fix — All generation prompts enforce seamless marquee loops with duplicated items + `translateX(-50%)`. No more visible gaps or restarts in scrolling text.
- Truncation Detection — The AI editor detects and rejects truncated Tailwind class names (`flex-col` → `fle`, `max-w-[1400px]` → `ma[1400px]`). Corrupted edits preserve the original code.
- Alpine.js Protection — Editor prompts forbid removing Alpine.js directives (`x-data`, `x-show`, `x-collapse`, `@click`) during edits.
- Outline Text Readability — `text-stroke`/`text-outline` requires a minimum of `opacity-60`.
- Hero Containment — Hero headlines enforce `overflow-hidden` + `max-w-full` + responsive sizing.
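A minimal version of that truncation check can be sketched as follows — the heuristic and function name are mine, not the editor's actual implementation:

```typescript
// Flag edited Tailwind classes that look like truncations of known classes,
// e.g. `flex-col` → `fle` or `max-w-[1400px]` → `ma[1400px]`. Heuristic
// sketch only; a real detector must also handle false positives such as
// `flex` being a legitimate prefix of `flex-col`.
function findTruncatedClasses(known: string[], edited: string[]): string[] {
  const knownSet = new Set(known);
  return edited.filter((c) => {
    if (knownSet.has(c)) return false;       // unchanged, valid class
    const stem = c.replace(/\[.*$/, "");     // drop the arbitrary-value part
    return stem.length >= 2 && known.some((k) => k !== c && k.startsWith(stem));
  });
}
```

If any class is flagged, the edit is rejected and the original code is kept, matching the "corrupted edits preserve original code" behavior above.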
| Feature | Replay | Lovable | Bolt.new | v0 (Vercel) | Builder.io |
|---|---|---|---|---|---|
| Input | Video recording | Text prompt | Text prompt | Text/image prompt | Figma/screenshot |
| Captures interactions | Yes (hover, click, scroll) | No | No | No | No |
| Captures animations | Yes (transitions, parallax) | No | No | No | No |
| Multi-page detection | Yes (auto from video) | No | No | No | No |
| Design System extraction | Yes (colors, fonts, spacing) | No | No | No | Partial |
| Component Library | Yes (5-layer taxonomy) | No | No | No | No |
| Accuracy to original | ~90% (pixel-level) | ~30% | ~30% | ~40% | ~50% |
| Output | React + Tailwind + GSAP | React | Multi-framework | React | Multi-framework |
Why video beats text prompts: Text prompts require you to describe a UI. Video lets AI observe the real thing — layout, colors, typography, interactions, animations, and content. No prompt engineering needed.
A Storybook-like interface for your extracted components:
- Controls — Edit props in real-time (colors, text, sizes)
- Actions — See interactive behaviors
- Visual Tests — Compare component states
- Accessibility — WCAG compliance checks
- Usage — Copy-paste code snippets
Visual canvas for component composition:
- Drag & drop components on canvas
- Resize and position freely
- AI-powered editing: "Make it red", "Add icon", "Add shadow"
- Real-time preview in iframe
- Save to library when satisfied
Interactive visualization of app structure:
- Detected pages and navigation paths
- Click nodes to preview pages
- See relationships between screens
- Path Structure showing components per page
- Export as documentation
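Under the hood, a flow map is just a directed graph of pages. A minimal data model (shapes invented for illustration) could be:

```typescript
// Pages as nodes, detected navigation actions as edges.
interface FlowMap {
  pages: string[];
  edges: { from: string; to: string; trigger: string }[];
}

// Pages reachable from a starting page by following navigation edges (BFS) —
// useful for spotting orphan screens that the video never navigated to.
function reachablePages(map: FlowMap, start: string): string[] {
  const seen = new Set([start]);
  const queue = [start];
  while (queue.length > 0) {
    const page = queue.shift()!;
    for (const e of map.edges) {
      if (e.from === page && !seen.has(e.to)) {
        seen.add(e.to);
        queue.push(e.to);
      }
    }
  }
  return [...seen];
}
```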
Connect Supabase and generate real data-fetching code:
- AI reads your table schemas
- Generates actual queries (not mock data)
- Supports authentication patterns
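The generated data-fetching code follows the standard supabase-js query-builder chain. The sketch below types only the slice of the client it uses, so it also runs against a stub; the table and column names are invented:

```typescript
// Sketch of generated Supabase data-fetching code. Table/column names are
// hypothetical; the builder chain (.from → .select → .order → .limit)
// mirrors supabase-js.
type Row = { id: string; created_at: string };
type QueryResult = Promise<{ data: Row[] | null; error: Error | null }>;

interface QueryClient {
  from(table: string): {
    select(columns: string): {
      order(column: string, opts: { ascending: boolean }): {
        limit(n: number): QueryResult;
      };
    };
  };
}

// Fetch the 20 most recent rows — real queries instead of mock data.
async function fetchRecent(client: QueryClient, table: string): Promise<Row[]> {
  const { data, error } = await client
    .from(table)
    .select("id, created_at")
    .order("created_at", { ascending: false })
    .limit(20);
  if (error) throw error;
  return data ?? [];
}
```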
Deploy instantly to replay.build/p/your-project
| Layer | Technology |
|---|---|
| Framework | Next.js 14 (App Router) |
| Styling | Tailwind CSS 3.4 |
| AI Models | Google Gemini 3 Pro (generation) |
| AI Vision | Google Gemini 3 Flash (Agentic Vision) |
| Database | Supabase (PostgreSQL) |
| Auth | Supabase Auth (Google OAuth) |
| Payments | Stripe |
| Hosting | Vercel |
| Realtime | Liveblocks (collaboration) |
| Icons | Lucide React |
| Color Picker | @uiw/react-color |
| Plan | Price | Credits/Month | Best For |
|---|---|---|---|
| Sandbox | $0 | 0 (demo only) | Explore the app |
| Pro | $19/mo | 1,500 | Freelancers |
| Agency | $99/mo | 15,000 | Teams (5 members) |
| Enterprise | Custom | Custom | Banks & enterprise |
Credit Costs:
- 🎬 Video generation: ~150 credits
- ✨ AI edit: ~10 credits
- Node.js 18+
- Supabase account
- Stripe account
- Google AI Studio API key (Gemini 3)
```bash
git clone https://github.com/ma1orek/replay.git
cd replay
npm install
cp env.example .env.local
```

Fill in your `.env.local`:
```env
# Supabase
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_anon_key
SUPABASE_SERVICE_ROLE_KEY=your_service_role_key

# Stripe
STRIPE_SECRET_KEY=sk_live_...
STRIPE_WEBHOOK_SECRET=whsec_...
STRIPE_PRO_PRICE_ID_MONTHLY=price_...
STRIPE_PRO_PRICE_ID_YEARLY=price_...

# Gemini AI (Gemini 3 Pro & Flash)
GEMINI_API_KEY=your_gemini_api_key

# App URL
NEXT_PUBLIC_APP_URL=http://localhost:3000
```

Run the migration in the Supabase SQL Editor:
```sql
-- See supabase/migrations/001_initial_schema.sql
```

Enable Google OAuth in Authentication → Providers.
```bash
npm run dev
```

```
replay/
├── app/
│   ├── api/
│   │   ├── generate/        # AI generation endpoints
│   │   │   ├── library/     # Component extraction
│   │   │   ├── blueprints/  # Blueprint AI editing
│   │   │   └── stream/      # Streaming generation
│   │   ├── blueprint/       # Agentic Vision endpoints
│   │   │   ├── vision/      # Surveyor (measurements)
│   │   │   ├── vision-qa/   # QA Tester (verification)
│   │   │   └── edit/        # AI component editing
│   │   ├── credits/         # Credit management
│   │   ├── publish/         # Deployment endpoint
│   │   └── stripe/          # Payment webhooks
│   ├── docs/                # Documentation pages
│   ├── page.tsx             # Main tool interface
│   └── layout.tsx           # Root layout
├── components/
│   ├── ui/                  # Shadcn-style UI components
│   │   ├── color-picker.tsx # Advanced color picker
│   │   ├── popover.tsx
│   │   └── ...
│   └── modals/              # Auth, credits modals
├── lib/
│   ├── agentic-vision/      # Sandwich Architecture prompts
│   │   └── prompts.ts       # Surveyor, Generator, QA instructions
│   ├── supabase/            # Database clients
│   ├── prompts/             # AI system prompts
│   └── utils.ts             # Helpers
└── public/
    └── imgg.png             # Social preview (OG image)
```
- ✅ Row Level Security (RLS) on all Supabase tables
- ✅ Server-side credit transactions (atomic)
- ✅ Stripe webhook signature verification
- ✅ Service role keys only on server
- ✅ Sandboxed iframe previews
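The atomic-debit invariant behind the credit system can be illustrated with a pure function. In production the check and decrement happen in a single SQL statement so concurrent requests cannot overdraw; the table, column, and function names below are hypothetical:

```typescript
// Pure-function illustration of the credit-debit invariant: the balance check
// and the decrement happen together, so a balance can never go negative.
// Server-side, this is roughly one atomic statement:
//   UPDATE profiles SET credits = credits - $cost
//   WHERE user_id = $id AND credits >= $cost;
function debitCredits(
  balance: number,
  cost: number
): { ok: boolean; balance: number } {
  if (!Number.isInteger(cost) || cost <= 0) throw new Error("invalid cost");
  if (balance < cost) return { ok: false, balance }; // insufficient credits, no change
  return { ok: true, balance: balance - cost };      // debit applied
}
```

For example, a Pro account's 1,500 monthly credits cover about ten ~150-credit video generations.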
- Video to UI generation
- Component Library with Controls
- Visual Editor (formerly Blueprints)
- Flow Map visualization
- AI editing with chat interface (SEARCH/REPLACE + Full HTML modes)
- Color picker with contrast ratio
- One-click publish with cache-busting
- Supabase integration
- Version history
- Agentic Vision (Sandwich Architecture)
- Gemini 3 Pro & Flash integration
- Design System import from Storybook
- 40+ style presets (including Rive interactive)
- React Bits component library (130+ components)
- Enterprise Library taxonomy (5-layer)
- REST API v1 (generate, scan, validate endpoints)
- MCP Server for AI agents (Claude Code, Cursor, etc.)
- LLM discoverability (llms.txt, AI-native metadata)
- Figma plugin export
- Team collaboration
- Component marketplace
Replay is available as a REST API and MCP server for AI agents.
```bash
# Generate React code from video
curl -X POST https://replay.build/api/v1/generate \
  -H "Authorization: Bearer rk_live_..." \
  -H "Content-Type: application/json" \
  -d '{"video_url": "https://example.com/recording.mp4"}'
```

| Endpoint | Description | Credits |
|---|---|---|
| `POST /api/v1/generate` | Video → React + Tailwind code | 150 |
| `POST /api/v1/scan` | Video → UI structure JSON | 50 |
| `POST /api/v1/validate` | Code + Design System → errors | 5 |
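The same `generate` call, wrapped in TypeScript. The endpoint, auth header, and request body mirror the curl example; the response JSON shape is not documented here, so treat it as an assumption and check replay.build/docs:

```typescript
// TypeScript equivalent of the curl example above.
interface GenerateRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildGenerateRequest(apiKey: string, videoUrl: string): GenerateRequest {
  return {
    url: "https://replay.build/api/v1/generate",
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ video_url: videoUrl }),
  };
}

// Fire the request (Node 18+ global fetch). Response shape is an assumption —
// consult replay.build/docs for the real contract.
async function generateFromVideo(apiKey: string, videoUrl: string): Promise<unknown> {
  const { url, ...init } = buildGenerateRequest(apiKey, videoUrl);
  const res = await (globalThis as any).fetch(url, init);
  if (!res.ok) throw new Error(`Replay API error: ${res.status}`);
  return res.json();
}
```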
```json
{
  "mcpServers": {
    "replay": {
      "command": "npx",
      "args": ["@replay-build/mcp-server"],
      "env": { "REPLAY_API_KEY": "rk_live_..." }
    }
  }
}
```

Get your API key at replay.build/settings?tab=api-keys.
Full documentation at replay.build/docs
Contributions welcome!
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing`)
- Open a Pull Request
MIT License - see LICENSE for details.
- Next.js — React framework
- Supabase — Database & Auth
- Google Gemini 3 — AI generation (Pro & Flash models)
- Tailwind CSS — Styling
- Lucide — Icons
- Vercel — Hosting
- Liveblocks — Realtime collaboration
Built with ❤️ by Replay Team
