A production-quality personal web application for AI-assisted CV analysis and tailoring. This application allows users to manage their professional profile, analyze job descriptions, and generate tailored CVs using AI.
## Features
- User Authentication: Secure login/register with Supabase Auth
- Profile Management: Complete professional profile with work experience, skills, education, languages, and profile images
- CV Import: Import existing CVs from PDF or Markdown files using AI extraction
- Job Description Analysis: AI-powered analysis comparing your profile against job descriptions with match scores, strengths, gaps, and recommendations
- CV Generation: Generate tailored CVs with structured content based on job descriptions
- CV Editor: Interactive CV editor with customizable themes, typography, spacing, and layout
- PDF Export: Download generated CVs as PDF files
- Real-time Updates: Background job processing with status tracking
## Tech Stack
- Frontend: Next.js 16 (App Router), TypeScript, TailwindCSS, shadcn/ui
- Backend/Async: Trigger.dev v4 for background jobs
- AI: Mastra framework with OpenAI
- Database: Supabase (PostgreSQL) with Prisma ORM
- Deployment: Vercel (frontend), Trigger.dev Cloud (workflows)
## Prerequisites
- Node.js 20.9+ or Bun
- Supabase account
- Trigger.dev account
- OpenAI API key
## Getting Started
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd cv-ai-enancher
  ```

- Install dependencies:

  ```bash
  bun install
  ```

- Set up environment variables:

  ```bash
  cp .env.example .env
  ```

  Fill in your environment variables:
  Required:
  - `DATABASE_URL`: Supabase PostgreSQL connection string
  - `NEXT_PUBLIC_SUPABASE_URL`: Your Supabase project URL
  - `NEXT_PUBLIC_SUPABASE_ANON_KEY`: Your Supabase anon key
  - `SUPABASE_SERVICE_ROLE_KEY`: Your Supabase service role key (for server-side operations)
  - `TRIGGER_SECRET_KEY`: Your Trigger.dev secret key
  - `TRIGGER_PROJECT_ID`: Your Trigger.dev project ID (found in the Trigger.dev dashboard)
  - `OPENAI_API_KEY`: Your OpenAI API key

  Optional:
  - `NEXT_PUBLIC_TRIGGER_PUBLIC_API_KEY`: Trigger.dev public API key (if using the public API)
  - `PUPPETEER_EXECUTABLE_PATH`: Path to a Chromium executable for PDF generation (only needed in serverless environments)
  - `LANGFUSE_PUBLIC_KEY`: Langfuse public API key (for AI observability and tracing)
  - `LANGFUSE_SECRET_KEY`: Langfuse secret key (for AI observability and tracing)
  - `LANGFUSE_BASE_URL`: Langfuse instance URL (defaults to https://cloud.langfuse.com if not provided)
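A common pattern is to fail fast at startup if any required variable is missing. The sketch below illustrates that idea; the helper names (`missingEnv`, `assertEnv`) are illustrative and not part of this app's actual code.

```typescript
// Names of the required variables from the list above.
const REQUIRED_ENV = [
  "DATABASE_URL",
  "NEXT_PUBLIC_SUPABASE_URL",
  "NEXT_PUBLIC_SUPABASE_ANON_KEY",
  "SUPABASE_SERVICE_ROLE_KEY",
  "TRIGGER_SECRET_KEY",
  "TRIGGER_PROJECT_ID",
  "OPENAI_API_KEY",
] as const;

// Return the names of required variables that are unset or empty.
function missingEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED_ENV.filter((name) => !env[name]);
}

// Throw with a clear message if anything required is missing
// (e.g. call assertEnv(process.env) during startup).
function assertEnv(env: Record<string, string | undefined>): void {
  const missing = missingEnv(env);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}
```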
- Set up the database:

  ```bash
  # Step 1: Create tables using Prisma (this creates the schema)
  bun run db:push

  # Step 2: Apply Supabase migrations (RLS policies and storage buckets)
  # If using the Supabase CLI locally:
  supabase db reset

  # Or manually run the migrations in order:
  # psql $DATABASE_URL -f supabase/migrations/001_rls_policies.sql
  # psql $DATABASE_URL -f supabase/migrations/002_create_storage_bucket.sql
  # psql $DATABASE_URL -f supabase/migrations/003_add_job_requirements_to_analysis.sql
  # psql $DATABASE_URL -f supabase/migrations/004_update_storage_bucket_for_cv_imports.sql
  ```

  Important: Always run Prisma migrations/push FIRST to create tables, then apply the Supabase migrations for RLS policies and storage setup.
- Set up Supabase Storage:
  - See `docs/STORAGE_SETUP.md` for detailed instructions
  - The storage bucket `profile-images` is created automatically via migration 002
  - Supports profile images, PDF CVs, and Markdown files
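Objects in the `profile-images` bucket are typically keyed by user so RLS policies can match on the user-id prefix. The helper below is a hypothetical sketch of such a key layout; the function name and exact layout are assumptions, not this app's actual code (the real bucket setup lives in migration 002).

```typescript
// Build a per-user storage key like "user-123/1712345678-avatar.png".
// Prefixing with the user id lets RLS policies restrict access per user.
function storageKey(userId: string, filename: string): string {
  // Strip any path components from the filename to avoid key traversal.
  const safeName = filename.split("/").pop() ?? filename;
  // A timestamp prefix keeps repeated uploads of the same file distinct.
  return `${userId}/${Date.now()}-${safeName}`;
}
```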
- Start the development server:

  ```bash
  bun run dev
  ```

- In another terminal, start Trigger.dev:

  ```bash
  bun run trigger:dev
  ```

## Project Structure

```
app/
├── (auth)/               # Authentication pages (login, register)
├── (protected)/          # Protected routes requiring authentication
│   ├── profile/          # Profile management (personal info, skills, experience, etc.)
│   ├── jobs/             # Job descriptions management and analysis
│   │   ├── [id]/         # Individual job view with analysis
│   │   └── new/          # Create new job description
│   └── cv/               # Generated CVs
│       └── [id]/         # CV editor with customization options
├── api/                  # Next.js API routes
│   ├── profile/          # Profile CRUD operations
│   ├── jobs/             # Job description endpoints
│   ├── analysis/         # Job analysis endpoints
│   └── cv/               # CV generation and management
└── lib/                  # Utilities and clients
    ├── prisma/           # Prisma client
    ├── supabase/         # Supabase clients (auth, server, admin)
    └── trigger/          # Trigger.dev client

trigger/
└── src/
    ├── tasks/            # Trigger.dev background tasks
    │   ├── analyzeJobDescription.ts
    │   ├── generateTailoredCV.ts
    │   └── importCV.ts
    └── lib/
        ├── mastra/       # Mastra AI framework integration
        │   ├── agents/   # AI agents (analysis, content, CV generation, import)
        │   ├── tools/    # AI tools (validation, extraction, scoring)
        │   └── workflows/ # AI workflows
        ├── prisma/       # Prisma client for tasks
        ├── types/        # TypeScript types and schemas
        └── utils/        # Utility functions (PDF extraction, caching, etc.)

prisma/
└── schema.prisma         # Database schema (Prisma ORM)

supabase/
└── migrations/           # Supabase SQL migrations
    ├── 001_rls_policies.sql  # Row Level Security policies
    ├── 002_create_storage_bucket.sql
    ├── 003_add_job_requirements_to_analysis.sql
    └── 004_update_storage_bucket_for_cv_imports.sql
```
## Development

Running locally:
- Start the Next.js dev server: `bun run dev`
- Start the Trigger.dev dev server: `bun run trigger:dev`
- Open http://localhost:3000

Database commands:
- Generate the Prisma client: `bun run db:generate`
- Push schema changes: `bun run db:push` (creates/updates tables)
- Create a migration: `bun run db:migrate` (creates Prisma migration files)
- Open Prisma Studio: `bun run db:studio`
Database setup order:
1. `bun run db:push`: creates tables from the Prisma schema
2. Apply the Supabase migrations: adds RLS policies and storage buckets (run manually or via the Supabase CLI)
Local Supabase:
- Start local Supabase: `bun run supabase:start`
- Stop local Supabase: `bun run supabase:stop`
- Reset the database: `bun run supabase:reset` (applies all migrations)
Trigger.dev:
- Deploy workflows: `bun run trigger:deploy`
- View the dashboard at https://cloud.trigger.dev
- Tasks run asynchronously in the background
- Check task status via the Trigger.dev dashboard or API
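Since tasks complete asynchronously, clients typically poll for a terminal status. The sketch below shows that pattern in isolation; the status values and the injected `fetchStatus` function are illustrative, while the real app reads run state from the Trigger.dev dashboard or API.

```typescript
// Possible states of a background job (illustrative naming).
type JobStatus = "PENDING" | "RUNNING" | "COMPLETED" | "FAILED";

// Poll until the job reaches a terminal state or we give up.
// fetchStatus is injected so this sketch has no network dependency.
async function waitForJob(
  fetchStatus: () => Promise<JobStatus>,
  { intervalMs = 1000, maxAttempts = 30 } = {},
): Promise<JobStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (status === "COMPLETED" || status === "FAILED") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for background job");
}
```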
## Architecture

The application follows a clean architecture with:
- Frontend: Next.js 16 App Router with Server Components and Client Components
- API Layer: Next.js API routes for CRUD operations and task triggering
- Background Jobs: Trigger.dev v4 tasks for async AI processing
- AI Layer: Mastra framework with agents, tools, and workflows
- Database: Prisma ORM on top of Supabase PostgreSQL
- Storage: Supabase Storage for profile images and CV imports
- Security: Supabase RLS policies for row-level security
- Observability: Langfuse integration for AI tracing and LLM observability (optional)
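To make the API-layer role concrete, here is a minimal sketch of the shape of a route handler that validates input and would hand the work to a background job. The request/response types are local stand-ins so the sketch is self-contained; real handlers also check the Supabase session and trigger the task via Trigger.dev, which is stubbed out here.

```typescript
// Minimal local stand-ins for the web Request/Response types; the real
// App Router handlers use the standard ones.
interface Req { json(): Promise<any> }
interface Res { status: number; body: unknown }

export async function POST(req: Req): Promise<Res> {
  // Reject malformed or missing JSON bodies with a 400.
  const body = await req.json().catch(() => null);
  if (!body || typeof body.jobDescriptionId !== "string") {
    return { status: 400, body: { error: "jobDescriptionId is required" } };
  }
  // In the real app this is where the analyzeJobDescription task would be
  // triggered via Trigger.dev and its run id returned to the client.
  return { status: 202, body: { status: "queued", jobDescriptionId: body.jobDescriptionId } };
}
```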
## Background Jobs

The application uses Trigger.dev for asynchronous processing:
- `analyzeJobDescription`: Analyzes a job description against the user profile
  - Extracts job requirements
  - Calculates a match score
  - Identifies strengths and gaps
  - Suggests focus areas
- `generateTailoredCV`: Generates a tailored CV based on a job description
  - Uses analysis results
  - Restructures profile data
  - Creates structured CV content
  - Applies best practices for CV formatting
- `importCV`: Imports a CV from a PDF or Markdown file
  - Extracts text from PDFs
  - Parses structured data
  - Validates the data and imports it into the profile
  - Handles bulk imports
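The match score produced by `analyzeJobDescription` can be pictured as a comparison of required skills against the profile. The real scoring is produced by the Mastra AI agents; the naive keyword-overlap version below only illustrates the shape of the result (score, strengths, gaps), and all names here are assumptions.

```typescript
interface MatchResult {
  score: number;       // 0-100
  strengths: string[]; // required skills the profile has
  gaps: string[];      // required skills the profile lacks
}

// Compare case-insensitively and score by the fraction of required skills met.
function matchProfile(profileSkills: string[], requiredSkills: string[]): MatchResult {
  const have = new Set(profileSkills.map((s) => s.toLowerCase()));
  const strengths = requiredSkills.filter((s) => have.has(s.toLowerCase()));
  const gaps = requiredSkills.filter((s) => !have.has(s.toLowerCase()));
  const score = requiredSkills.length === 0
    ? 0
    : Math.round((strengths.length / requiredSkills.length) * 100);
  return { score, strengths, gaps };
}
```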
## Database Migrations

Why both Prisma and Supabase migrations?
- Prisma: Handles table creation, schema changes, and data migrations
- Supabase: Handles Row Level Security (RLS) policies and storage buckets, which Prisma cannot manage

Migration workflow:
1. Define the schema in `prisma/schema.prisma`
2. Run `bun run db:push` to create/update tables
3. Apply the Supabase migrations from `supabase/migrations/` in order:
   - `001_rls_policies.sql`: Row Level Security policies
   - `002_create_storage_bucket.sql`: Storage bucket for profile images
   - `003_add_job_requirements_to_analysis.sql`: Schema updates
   - `004_update_storage_bucket_for_cv_imports.sql`: Storage bucket updates for CV imports
## AI Data Integrity

Critical: The user profile is the single source of truth. AI operations:
- Cannot invent experience, skills, or qualifications
- Can only rephrase and restructure existing data
- Must explicitly state when data is missing
- Must validate generated content against profile data
- Must preserve accuracy and truthfulness of all information
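The "cannot invent" rule can be enforced mechanically: anything in generated output that is absent from the profile is rejected. The sketch below shows that check for skills; the function names and shapes are illustrative, as the real validation lives in the Mastra tools.

```typescript
// Return generated skills that do not exist in the profile (case-insensitive).
function findInventedSkills(profileSkills: string[], cvSkills: string[]): string[] {
  const allowed = new Set(profileSkills.map((s) => s.trim().toLowerCase()));
  return cvSkills.filter((s) => !allowed.has(s.trim().toLowerCase()));
}

// Reject a generated CV outright if it contains any invented skills.
function assertTruthful(profileSkills: string[], cvSkills: string[]): void {
  const invented = findInventedSkills(profileSkills, cvSkills);
  if (invented.length > 0) {
    throw new Error(`Generated CV contains skills not in the profile: ${invented.join(", ")}`);
  }
}
```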
## API Endpoints

Profile:
- `GET /api/profile`: Get user profile
- `PUT /api/profile`: Update profile
- `POST /api/profile/upload-image`: Upload profile image
- `POST /api/profile/import`: Import CV from file
- `POST /api/profile/work-experiences`: Add work experience
- `PUT /api/profile/work-experiences`: Update work experience
- `POST /api/profile/education`: Add education
- `PUT /api/profile/education`: Update education
- `POST /api/profile/skills/add`: Add skill
- `PUT /api/profile/skills/update`: Update skill
- `PUT /api/profile/languages`: Update languages

Jobs:
- `GET /api/jobs`: List all job descriptions
- `POST /api/jobs`: Create job description
- `GET /api/jobs/[id]`: Get job description
- `PUT /api/jobs/[id]`: Update job description

Analysis:
- `POST /api/analysis`: Trigger job analysis
- `GET /api/analysis/[id]`: Get analysis result

CV:
- `POST /api/cv`: Generate tailored CV
- `GET /api/cv/[id]`: Get CV
- `PUT /api/cv/[id]`: Update CV
- `PUT /api/cv/[id]/styles`: Update CV styles
- `GET /api/cv/[id]/pdf`: Download CV as PDF
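A typical client flow pairs the trigger endpoint with the result endpoint. The sketch below assumes the trigger response contains an `id` field, which is an assumption about the API's response shape; the fetch function is injected so the flow can be exercised without a running server.

```typescript
// Minimal fetch-like signature so the sketch has no runtime dependencies.
type Fetch = (
  url: string,
  init?: { method?: string; body?: string },
) => Promise<{ json(): Promise<any> }>;

// Trigger an analysis for a job description, then fetch its result.
async function runAnalysis(fetchImpl: Fetch, jobDescriptionId: string): Promise<any> {
  const triggered = await fetchImpl("/api/analysis", {
    method: "POST",
    body: JSON.stringify({ jobDescriptionId }),
  });
  const { id } = await triggered.json();
  // In practice the result may still be processing; see the polling pattern
  // in the Background Jobs notes.
  const result = await fetchImpl(`/api/analysis/${id}`);
  return result.json();
}
```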
## Documentation

See the `docs/` directory for detailed documentation:
- Storage Setup - Supabase Storage configuration
- Architecture decisions
- Database schema
- API endpoints
- Deployment guide
- Feature documentation
## License

This project is licensed under the MIT License - see the LICENSE file for details.