Multimodal AI chat application built with Next.js 16 (App Router), React 19, Prisma & PostgreSQL. It features chat branching and message versioning, a tier-aware model registry, runtime model capability introspection, and granular administrative controls.
Archive · Models · Auth · Admin · Tech Stack · Running locally
- Multimodal Conversations: Text plus model-dependent support for image/file/audio (auto-derived from model capabilities)
- Live Streaming: Incremental token + tool call streaming via AI SDK (`ai` v5)
- Conversation Lineage: Fork chats from any message to create new independent conversation trees (parent/fork metadata persisted in the `Chat` table)
- Message Branching & Versioning: Seamlessly switch between message edits and regenerations within the same chat. The system tracks a tree of message versions (`ltree`), allowing you to explore different conversation paths without losing context.
- Auto-Resume: Recent context & pinned archive memory automatically reattached on reload
- Token Bucket Rate Limiting: Per-tier configurable capacity/refill stored in `Tier` + per-user runtime state in `UserRateLimit`
- Guest & Auth Modes: Seamless anonymous upgrade path without losing context
- Encrypted Client-Side Caching: Securely caches chat history and data on the client using IndexedDB and the Web Crypto API. This provides a significant performance boost and enables a near-instant experience when revisiting chats.
- Unified Provider Layer: OpenRouter, OpenAI, Google (Gemini) exposed through a single abstraction
- Dynamic Entitlements: Runtime tier lookup (guest/regular) resolves allowed model IDs
- Capability Introspection: Model capabilities (tool support, formats) persisted & auto-synced (OpenRouter catalog fetch with fallback defaults)
- Selective Tool Access: Agents/chats may restrict tool allow-lists (see `agent-settings` serialization)
- Context Preservation: Persisted `lastContext`, pinned archive entries, and agent settings
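The token-bucket rate limiting listed above (per-tier capacity/refill in `Tier`, per-user runtime state in `UserRateLimit`) could be sketched roughly as follows. Field and function names here are illustrative, not the repo's actual implementation:

```typescript
// Minimal token-bucket sketch (hypothetical names; the real logic lives in the
// rate-limit layer and persists state in PostgreSQL).
interface Bucket {
  tokens: number;        // current tokens available
  capacity: number;      // max tokens for the user's tier
  refillAmount: number;  // tokens credited per interval
  intervalMs: number;    // refill interval in milliseconds
  lastRefill: number;    // epoch ms of last refill
}

function tryConsume(bucket: Bucket, now: number, cost = 1): boolean {
  // Credit any whole refill intervals that elapsed since the last update.
  const intervals = Math.floor((now - bucket.lastRefill) / bucket.intervalMs);
  if (intervals > 0) {
    bucket.tokens = Math.min(bucket.capacity, bucket.tokens + intervals * bucket.refillAmount);
    bucket.lastRefill += intervals * bucket.intervalMs;
  }
  if (bucket.tokens < cost) return false; // rate limited
  bucket.tokens -= cost;
  return true;
}
```

Lazy refill (crediting elapsed intervals on read) avoids a background timer and maps cleanly onto a single database row per user.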
Persistent, per-user, queryable knowledge base powering long‑term conversational memory:
- Entries: `ArchiveEntry` entities (slug, tags, body) with automatic timestamps
- Semantic Links: Directed edges (`ArchiveLink`) with optional bidirectionality
- Pinned Context: `ChatPinnedArchiveEntry` join table injects selected memories into prompt scaffolding
- AI Tools: Tool layer can create/update/link archive entries on demand
- Query & Filter: Tag & (planned) full-text search over entries (implementation hooks prepared)
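A rough sketch of how the directed, optionally bidirectional `ArchiveLink` edges could be traversed; the shape and helper name are assumptions for illustration, not the actual schema:

```typescript
// Hypothetical edge shape mirroring ArchiveLink; field names are illustrative.
interface ArchiveLink {
  from: string;          // source entry slug
  to: string;            // target entry slug
  bidirectional: boolean; // if true, the edge is traversable in both directions
}

// Resolve the neighbors of an entry, honoring the bidirectional flag per edge.
function neighbors(slug: string, links: ArchiveLink[]): string[] {
  const out = new Set<string>();
  for (const l of links) {
    if (l.from === slug) out.add(l.to);
    if (l.bidirectional && l.to === slug) out.add(l.from);
  }
  return [...out];
}
```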
- Clerk: OAuth / SSO & user management (regular users)
- Guest Mode: Cookie-scoped ephemeral identity (pattern `guest-\d+`) with restricted models
- Admin Assertion: First-class admin via `ADMIN_USER_ID` (preferred) or fallback `ADMIN_EMAIL`
- Tier Enforcement: Model & rate limit entitlements resolved per user type
Operational management interface (in progress / evolving):
- Provider API Keys: Database overrides trump environment variables (`Provider` table)
- Tier Management: Adjust model allow-lists + bucket config (backed by `Tier` rows, with fallbacks if missing)
- Model Capabilities: View persisted model capability matrix & usage (referenced by tiers)
- System Agents: Configure platform-level AI agents for special tasks (see below)
- Housekeeping Tasks: Planned actions (sync OpenRouter models, prune unused capabilities)
System agents are platform-level AI agents that perform special automated tasks. Unlike user agents, they:
- Admin-only: Can only be viewed and configured by administrators
- Cannot be deleted: They're integral to platform functionality
- Resettable: Each can be reset to its default configuration at any time
- Isolated: Don't have associated chats or user ownership
| Agent | Slug | Purpose |
|---|---|---|
| Default Chat | `default-chat` | Workspace default chat behavior/model used when no user agent is selected |
| Title Generation | `title-generation` | Automatically generates concise titles for new chat conversations |
Access system agents via Settings → Admin → System Agents. For each agent you can:
- Change Model: Select which AI model the agent uses (e.g., set the platform default for chats)
- Customize Prompt: Modify the system prompt blocks to change behavior and defaults
- Reset to Defaults: Restore the original configuration
To enhance performance and provide a more fluid user experience, Vero Chat implements an encrypted client-side caching mechanism.
- Fast Initial Load: Chat history and messages are loaded from a local IndexedDB cache, making navigation between chats nearly instantaneous.
- Cache Encryption: All cached data is encrypted using AES-GCM via the Web Crypto API. The encryption key is derived from a server-side secret and a stable user session identifier, ensuring that each user's data is secure and private.
- Hybrid Edge Architecture: Key derivation logic is "hoisted" to
@vero/sharedand runs on a Cloudflare Worker at the Edge for minimum latency. If the Worker is unavailable, the Next.js backend serves as a seamless fallback using the exact same logic. - Cache Synchronization: The client-side cache is kept in sync with the server. The application intelligently refreshes the cache in the background to ensure data consistency.
- Optimistic Updates: The UI updates optimistically when new messages are sent or chats are created, providing a responsive feel.
- Storage: Dexie.js is used as a wrapper around IndexedDB for convenient and robust database operations.
- Encryption: The Web Crypto API is used for all cryptographic operations, ensuring a high level of security.
- State Management: The cache is managed through a React Context provider (`EncryptedCacheProvider`) and a custom hook (`useEncryptedCache`), which integrates seamlessly with `react-query`.
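The key-derivation step described above (server-side secret + stable session identifier) could be sketched like this. The real implementation lives in `@vero/shared` and uses the Web Crypto API; this sketch substitutes Node's synchronous `hkdfSync` so it runs as plain code, and the function name is hypothetical:

```typescript
import { hkdfSync } from "node:crypto";

// Derive a per-user AES-256 key from a server secret and a stable session id.
// HKDF-SHA256 makes the derivation deterministic per user while keeping the
// raw server secret out of the client entirely.
function deriveCacheKey(serverSecret: Buffer, sessionId: string): Buffer {
  return Buffer.from(
    hkdfSync("sha256", serverSecret, Buffer.alloc(0), Buffer.from(sessionId), 32),
  );
}
```

The derived key can then be imported on the client via `crypto.subtle.importKey` and used with AES-GCM, as the caching section describes.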
The archive serves as a persistent memory system for both users and AI:
- Entries: Individual knowledge units with titles, content, and tags
- Links: Semantic relationships between entries (related, parent/child, etc.)
- Search: Full-text search across all entries with tag filtering
- Ownership: Per-user isolation with secure access controls
The archive provides tools for AI assistants to:
- Create new memories from conversations
- Retrieve relevant information for context
- Link related concepts automatically
- Update existing knowledge as new information emerges
- Explorer View: Browse and search archive entries
- Detail View: Read and edit individual entries
- Link Visualization: See relationships between entries
- Bulk Operations: Import/export and batch management
Provider-agnostic model registry supporting multiple AI providers.
Curated list included at build time (see lib/ai/models.ts). Capabilities may be further enriched automatically.
The default set includes Google Gemini variants (gemini-2.5-flash-image-preview, gemini-2.5-flash, gemini-2.5-pro).
Additional models can be configured via environment variables or added to the database.
- OpenRouter: Primary provider with model aggregation
- Direct OpenAI: Native OpenAI API integration
- Google Gemini: Direct Google AI API access
- Configurable Registry: Easy addition of new providers
Default tier definitions (fallback when DB rows absent):
| Tier | Models | Capacity | Refill | Interval |
|---|---|---|---|---|
| guest | Configured via `GUEST_MODELS` (defaults to auto-detected `DEFAULT_CHAT_MODEL`) | 60 | 20 | 3600s |
| regular | Configured via `REGULAR_MODELS` (defaults to auto-detected `DEFAULT_CHAT_MODEL`) | 300 | 100 | 3600s |
If DEFAULT_CHAT_MODEL is not explicitly set, the system selects a default model based on available API keys (Google > OpenAI > OpenRouter).
All values can be overridden by inserting/updating Tier rows.
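The fallback behavior described above can be sketched as follows; `detectDefaultProvider` and `resolveTierModels` are hypothetical helper names, not the repo's actual functions:

```typescript
type Provider = "google" | "openai" | "openrouter";

// Provider preference order (Google > OpenAI > OpenRouter) applied when
// DEFAULT_CHAT_MODEL is not set explicitly.
function detectDefaultProvider(env: Record<string, string | undefined>): Provider | null {
  if (env.GOOGLE_GENERATIVE_AI_API_KEY || env.GOOGLE_API_KEY) return "google";
  if (env.OPENAI_API_KEY) return "openai";
  if (env.OPENROUTER_API_KEY) return "openrouter";
  return null;
}

// Tier model lists come from a comma-separated env var, falling back to the
// auto-detected default model when the variable is unset or empty.
function resolveTierModels(envList: string | undefined, defaultModel: string): string[] {
  const models = (envList ?? "").split(",").map((s) => s.trim()).filter(Boolean);
  return models.length > 0 ? models : [defaultModel];
}
```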
Flexible authentication system with enterprise and guest support:
- SSO Support: Google, GitHub, and enterprise providers
- User Management: Profile management and session handling
- Admin Controls: User administration and access management
- Webhook Integration: Real-time user event processing
- Cookie session identity; upgrade path to Clerk user without losing chats
- Rate & model limitations inherited from `guest` tier
- Data stored under synthetic `User` row keyed by guest id (no email requirement)
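Given the `guest-\d+` identity pattern above, tier resolution could be as simple as the following sketch; `tierFor` is a hypothetical helper name:

```typescript
// Guest ids follow the guest-\d+ pattern described above.
const GUEST_ID = /^guest-\d+$/;

function isGuestId(id: string): boolean {
  return GUEST_ID.test(id);
}

// Map an identity to its rate-limit/model tier (illustrative only).
function tierFor(userId: string): "guest" | "regular" {
  return isGuestId(userId) ? "guest" : "regular";
}
```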
Administrative interface for system management:
The admin dashboard is accessible at /admin. Access is restricted to users who match the configured admin credentials:
- `ADMIN_USER_ID`: The specific User ID (e.g. from Clerk or database) granted admin privileges. Recommended for production.
- `ADMIN_EMAIL`: Fallback email address for admin access (useful for bootstrapping).
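A minimal sketch of this precedence rule, assuming a simple identity shape (`isAdmin` and `UserIdentity` are illustrative names, not the repo's actual code):

```typescript
interface UserIdentity {
  id: string;
  email?: string;
}

// ADMIN_USER_ID wins outright; ADMIN_EMAIL is only consulted when no
// user id is configured, matching the bootstrap fallback described above.
function isAdmin(user: UserIdentity, env: Record<string, string | undefined>): boolean {
  if (env.ADMIN_USER_ID) return user.id === env.ADMIN_USER_ID;
  if (env.ADMIN_EMAIL && user.email) {
    return user.email.toLowerCase() === env.ADMIN_EMAIL.toLowerCase();
  }
  return false;
}
```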
A comprehensive operational dashboard providing real-time insights:
- Usage Statistics: View key performance indicators (KPIs) such as Total Messages, Active Users, and Active Models.
- Data Visualization: Interactive charts (Line, Pie, Bar) powered by `recharts` to visualize:
  - Message and User growth trends over time (24h, 7d, 30d, 90d).
  - Model usage distribution (e.g., GPT-4 vs. Claude 3.5).
  - Provider usage breakdown.
- Recent Activity: Detailed log of the latest user interactions.
- Report Export: Download usage reports as CSV files for offline analysis.
- Provider Management: Manage API keys via database overrides (`Provider` table) and monitor provider status.
- Tier Management: Configure token buckets (rate limiting) and model allow-lists per user tier (`Tier` table).
- Model Capabilities: Introspect and manage the persisted capabilities of available models.
- Next.js 16 (App Router, React 19)
- TypeScript + strict type surfaces
- Tailwind CSS v4 + shadcn/ui + Radix primitives
- Framer Motion for transitions
- Progressive streaming UI using `@ai-sdk/react`
- AI SDK (`ai` v5) provider unification + streaming handlers
- Prisma ORM with modular schema (model capabilities, archive, rate limit)
- PostgreSQL primary storage (Neon friendly)
- Redis (optional) future caching / ephemeral coordination; current rate limiting uses PostgreSQL
- Vercel Blob for file attachments
- Cloudflare Workers (Edge caching & auth logic)
- Bun (package manager + fast scripts)
- ESLint + Prettier (configured)
- Playwright (E2E) harness ready (browser specs live in `tests/e2e`)
- OpenTelemetry instrumentation hooks (`instrumentation.ts`, `@vercel/otel`)
- Deploy-first design for Vercel (Edge/Node hybrid)
- `ai`, `@ai-sdk/react` (multimodal streaming + tool calls)
- `@clerk/nextjs` (auth), `@tanstack/react-query`, `react-hook-form`, `zod`
- `sonner` (toasts), `lucide-react` (icons), `framer-motion` (animation)
- `diff-match-patch` + custom diff view components
- `dexie` (a wrapper for IndexedDB)
- Node.js 18+ (or Bun runtime) — Bun v1.3.0 recommended
- PostgreSQL database (local, Docker, or Neon)
- (Optional) Redis if extending caching strategies (not required for baseline)
The project is structured as a monorepo with:
- `apps/web` - Main Next.js application
- `apps/realtime-gateway` - WebSocket gateway for chat notifications
- `apps/cache-worker` - Cloudflare Worker for edge encryption
- `packages/db` - Shared database package (`@vero/db`)
- `packages/shared` - Shared isomorphic logic (`@vero/shared`)
The root package.json provides convenience scripts to run commands across packages.
# 1. Install dependencies
bun install
# 2. Set up environment variables
# Web app: copy apps/web/.env.example to apps/web/.env.local (or .env) and fill in values.
# Realtime gateway (optional): copy apps/realtime-gateway/.env.example to apps/realtime-gateway/.env.
# Database tools: create packages/db/.env with DATABASE_URL for Prisma CLI.
# Worker secrets: create apps/cache-worker/.dev.vars
# 3. (First time) Push schema & generate client
bun run db:push # Applies schema without creating a migration (dev convenience)
bun run db:generate # Generates Prisma client to packages/db/generated/client
# Alternatively create an initial migration (idempotent if already created)
bun run db:migrate # prisma migrate dev --name init
# 4. Start dev server (Next.js + streaming)
bun run dev
# 5. (Optional) Start Realtime Gateway
# Enables live chat updates across tabs/devices. Without it, the app still works but
# relies on page refreshes/polling for new messages.
# Run in a separate terminal:
cd apps/realtime-gateway
bun install
bun run start:dev
# 6. (Optional) Start Edge Gateway
# Runs the edge gateway locally on port 8787
cd apps/edge-gateway
bunx wrangler dev

Navigate to http://localhost:3000.
Create apps/web/.env.local (or .env) for the Next app and ensure DATABASE_URL is present when invoking Prisma CLI. The Next app loads env vars from its own directory even when started via the monorepo root scripts.
| Variable | Purpose |
|---|---|
| `AUTH_SECRET` | Guest session encryption key |
| `NEXT_PUBLIC_APP_BASE_URL` | Base URL for metadata / OAuth redirects |
| `NEXT_PUBLIC_APP_URL` | Alias used in some code paths; keep in sync with `NEXT_PUBLIC_APP_BASE_URL` |
| `DATABASE_URL` | PostgreSQL connection string |
| `OPENROUTER_API_KEY` | OpenRouter API key (model catalog + routing) |
| `CACHE_ENCRYPTION_SECRET` | 32-byte, base64-encoded secret used to derive encryption keys for the client-side cache |
| Variable | Purpose |
|---|---|
| `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY` | Clerk publishable key (frontend) |
| `CLERK_SECRET_KEY` | Clerk secret key (backend) |
| Variable | Purpose |
|---|---|
| `GUEST_SECRET` | Dedicated guest cookie signing secret; falls back to `AUTH_SECRET` if omitted |
| `COOKIE_DOMAIN` | Set to `.yourdomain.com` to share guest cookies with subdomains (required for separate worker subdomains) |
| `OPENAI_API_KEY` | Direct OpenAI API access (bypassing OpenRouter) |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Direct Gemini API access |
| `GOOGLE_API_KEY` | Alternate env name for direct Gemini access (either works) |
| `DEFAULT_CHAT_MODEL` | Default model for new chats and fallback |
| `ARTIFACT_GENERATION_MODEL` | Override model for artifact generation flows |
| `GUEST_MODELS` | Comma-separated fallback guest tier model list |
| `REGULAR_MODELS` | Comma-separated fallback regular tier model list |
| `REDIS_URL` | (Pluggable) Redis caching / future rate control |
| `BLOB_READ_WRITE_TOKEN` | Vercel Blob storage token |
| `ADMIN_USER_ID` | Hard admin (takes precedence over email) |
| `ADMIN_EMAIL` | Fallback admin identity (bootstrap) |
| `NEXT_PUBLIC_DISABLE_SOCIAL_AUTH` | Set to `1` to hide social auth buttons; omit or set `0` to allow |
| `NEXT_PUBLIC_REALTIME_GATEWAY_URL` | WebSocket URL for realtime updates (enable only when the gateway is running) |
| `NEXT_PUBLIC_CACHE_ENCRYPTION_URL` | URL of the Cloudflare Cache Worker (e.g., `http://localhost:8787`); if omitted, falls back to the Next.js API |
| Variable | Purpose |
|---|---|
| `PORT` | Gateway port (default: 3001) |
| `DATABASE_URL_UNPOOLED` | Unpooled PostgreSQL connection string for LISTEN/NOTIFY |
| `CLERK_SECRET_KEY` | Clerk Backend API key for token verification |
| `CORS_ORIGINS` | Allowed origins (e.g. `http://localhost:3000`) |
| Variable | Purpose |
|---|---|
| `CACHE_ENCRYPTION_SECRET` | Must match the web app's secret for valid decryption |
| `GUEST_SECRET` | Must match the web app for guest cookie verification |
| `CLERK_SECRET_KEY` | For verifying Clerk sessions at the edge |
| `CLERK_PUBLISHABLE_KEY` | Required for Clerk client initialization |
| `ALLOWED_ORIGINS` | Comma-separated list of origins (e.g., `http://localhost:3000`) for CORS |
Because this worker relies on authentication cookies (guest_session and __session) which are set with SameSite=Lax, you cannot use the default *.workers.dev domain in production if your app is hosted elsewhere (e.g., Vercel). Browsers will block the cookies, resulting in 401 Unauthorized errors.
Required Production Setup:
- Custom Domain: Assign a subdomain to the worker (e.g., `cache.yourdomain.com`) that shares the same root as your app.
  - Deploy with the domain flag: `cd apps/cache-worker && bunx wrangler deploy --domain cache.yourdomain.com`
  - Update Auth Config:
    - Guest: Set `COOKIE_DOMAIN=.yourdomain.com` in your Vercel env vars.
    - Clerk: Go to Clerk Dashboard > Configure > Paths & Domains and set Cookie Domain to `.yourdomain.com`.
- Cloudflare Routes (Same-Origin): If your main domain is proxied by Cloudflare (Orange Cloud), use a Route. This avoids all CORS/cookie configuration.
  - Dashboard: Go to Cloudflare Dashboard > Workers Routes.
  - Add route: `yourdomain.com/api/cache/encryption-key`
  - Web App: Unset `NEXT_PUBLIC_CACHE_ENCRYPTION_URL` so it defaults to the relative path.
# Apply schema (development convenience) OR create a migration:
bun run db:push # Fast, no migration file
# or
bun run db:migrate # Creates/updates migration history
# Generate client (usually triggered by build as well):
bun run db:generate
# (Optional) Inspect / edit data:
bun run db:studio

- `bun run build` builds in dependency order: shared db package → web app → realtime gateway.
- `bun run build:web` builds only the web app (Vercel-friendly); it runs the db build first via the web `prebuild` hook.
- `bun run build:gateway` builds only the realtime gateway (also runs the db build first).
- `bun run build:db` builds the shared db package and runs `prisma generate` so generated clients stay in sync.
If you plan to enforce tier overrides or seed model capabilities manually, insert rows into Tier and Model tables (Prisma Studio or SQL). Missing rows fall back to hardcoded safe defaults so the app can boot cold.
- `bun test` / `bun run test:unit` – Bun runner executes fast unit tests under `tests/unit` (JSDOM env, shared setup in `tests/unit/setup.ts`).
- `bun run test:e2e` – Playwright spins up the dev server and runs Chromium tests from `tests/e2e`.
- `bun run lint` – ESLint with the repo configuration.
- `bunx tsc --noEmit` – Type check the Next.js app and test utilities.
| Directory | Runner | Notes |
|---|---|---|
| `tests/unit` | Bun | Uses `bunfig.toml` preload for mocks and DOM stubs. |
| `tests/unit/mocks` | Bun | Shared mocks consumed during unit tests. |
| `tests/e2e` | Playwright | Browser automation; requires the dev server (managed automatically by the config). |
Tip: append `--watch` to `bun test` for watch mode, or run `bun run test:e2e -- --headed` when debugging Playwright.
- Connect GitHub repository to Vercel
- Configure environment variables
- Enable Vercel integrations:
- Neon for PostgreSQL
- Upstash for Redis
- Vercel Blob for file storage
- Use `bun run build:web` as the Vercel build command so only the web app (and its db dependency) is built
- Deploy automatically on push
The application is designed to run on any platform supporting Node.js:
# Production build (generates Prisma client first)
bun run build
# Launch server
bun run start

- Fork the repository
- Create a feature branch
- Make your changes with proper TypeScript types
- Add tests for new functionality
- Submit a pull request
MIT