divyansharma001/Realtime-AI-Inteview
AI Interviewer

End-to-end demo with a Next.js frontend and an Express/Drizzle backend that runs realtime OpenAI audio interviews and queues evaluations.

Prerequisites

  • Node.js 20+
  • npm (bundled with Node)
  • Docker + Docker Compose (for Postgres + Redis)
  • OpenAI API key with realtime access
  • S3-compatible storage (AWS S3 or MinIO) for recordings

Quick Start

  1. Start infrastructure
    cd backend
    docker compose up -d
  2. Configure environment
    • Copy backend/.env.example to backend/.env and fill secrets.
    • Copy frontend/.env.local.example to frontend/.env.local and set the API base.
  3. Install deps
    cd backend && npm install
    cd ../frontend && npm install
  4. Push schema (run from backend after Postgres is up)
    npm run db:push
  5. Run services
    • Backend (port 4000):
      cd backend
      npm run dev
    • Frontend (port 3000):
      cd frontend
      npm run dev
  6. Use the app
    Open http://localhost:3000 in a browser, sign in, and start an interview from a Job template.

Environment Variables

Backend (backend/.env)

DATABASE_URL=postgres://user:password@localhost:5432/interviewer_db
PORT=4000
FRONTEND_URL=http://localhost:3000
OPENAI_API_KEY=sk-...
REDIS_URL=redis://localhost:6379

S3_REGION=us-east-1
S3_BUCKET_NAME=ai-interviewer-recordings
S3_ACCESS_KEY_ID=your-key
S3_SECRET_ACCESS_KEY=your-secret
# Optional for MinIO/local stacks
S3_ENDPOINT=http://localhost:9000

EMAIL_ENABLED=false
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_SECURE=false
SMTP_USER=your-smtp-user
SMTP_PASS=your-smtp-pass
EMAIL_FROM_NAME=AI Interviewer
EMAIL_FROM=no-reply@example.com

Frontend (frontend/.env.local)

NEXT_PUBLIC_API_URL=http://localhost:4000/api
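A small sketch of how the frontend might derive endpoint URLs from NEXT_PUBLIC_API_URL. The apiUrl helper is illustrative, not a function from the repo; the fallback mirrors the default documented above.

```typescript
// Base API URL from the environment, falling back to the local default
// shown in this README when the variable is unset.
const API_BASE: string =
  process.env.NEXT_PUBLIC_API_URL ?? "http://localhost:4000/api";

// Join the base with a path, tolerating a leading slash either way.
function apiUrl(path: string): string {
  const suffix = path.startsWith("/") ? path : `/${path}`;
  return `${API_BASE.replace(/\/$/, "")}${suffix}`;
}

console.log(apiUrl("/auth"));      // http://localhost:4000/api/auth
console.log(apiUrl("interviews")); // http://localhost:4000/api/interviews
```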

Auth Flow (working)

  • Uses better-auth with Drizzle adapter and cookie sessions.
  • Frontend talks to the backend auth endpoints at ${NEXT_PUBLIC_API_URL}/auth.
  • CORS is restricted to http://localhost:3000; keep frontend origin aligned.
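The CORS restriction can be pictured as an exact-match origin check; this is a sketch of the idea, not the repo's actual middleware, and the isAllowedOrigin name is hypothetical.

```typescript
// Allowed frontend origin, matching FRONTEND_URL in backend/.env.
const ALLOWED_ORIGIN: string =
  process.env.FRONTEND_URL ?? "http://localhost:3000";

// Browsers send an Origin header on cross-site requests; anything other
// than the exact configured origin is rejected.
function isAllowedOrigin(origin: string | undefined): boolean {
  return origin === ALLOWED_ORIGIN;
}

console.log(isAllowedOrigin("http://localhost:3000")); // true
console.log(isAllowedOrigin("http://localhost:3001")); // false
```

If the frontend runs on a different port or host, both FRONTEND_URL on the backend and NEXT_PUBLIC_API_URL on the frontend must be updated together.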

Realtime Interview (working)

  • Starts with a Job template; hitting Start Interview calls /api/interviews/{id}/session to create an OpenAI realtime session and an S3 presigned upload URL.
  • Frontend captures mic/camera, opens a WebRTC connection to OpenAI, and streams audio both ways. Feedback updates live via tool calls.
  • Recording is uploaded to S3/compatible storage after the interview finishes.
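The session call above can be sketched as follows. The SessionResponse field names are assumptions for illustration (the repo's actual response shape may differ); only the endpoint path comes from this README.

```typescript
// Hypothetical shape of the /api/interviews/{id}/session response;
// field names are illustrative, not taken from the repo.
interface SessionResponse {
  clientSecret: string; // ephemeral OpenAI realtime credential
  uploadUrl: string;    // S3 presigned URL for the recording
}

// Build the session endpoint path for a given interview id.
function sessionPath(interviewId: string): string {
  return `/api/interviews/${encodeURIComponent(interviewId)}/session`;
}

console.log(sessionPath("42")); // /api/interviews/42/session
```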

Evaluation Pipeline (working)

  • Finalize call posts the transcript and sets status to PROCESSING, then enqueues a BullMQ job on Redis.
  • evaluation.worker runs in-process with the API server, calls OpenAI for a JSON evaluation, updates the interview to COMPLETED, and emails the candidate when enabled.
  • Results poll on /interview/{id}/result until the evaluation is ready; video is fetched via a presigned URL when available.
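The polling step can be sketched like this. The fetchStatus function is injected to keep the example self-contained; in the app it would GET the result endpoint. PROCESSING and COMPLETED are the statuses named in this README; pollResult itself is a hypothetical helper.

```typescript
type Status = "PROCESSING" | "COMPLETED";

// Repeatedly check the status until it leaves PROCESSING, waiting
// intervalMs between attempts and giving up after maxAttempts.
async function pollResult(
  fetchStatus: () => Promise<Status>,
  intervalMs = 10,
  maxAttempts = 20,
): Promise<Status> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (status !== "PROCESSING") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("evaluation did not finish in time");
}

// Demo: a fake backend reports PROCESSING twice, then COMPLETED.
const responses: Status[] = ["PROCESSING", "PROCESSING", "COMPLETED"];
pollResult(async () => responses.shift() ?? "COMPLETED").then((status) =>
  console.log(status), // COMPLETED
);
```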

Common Checks

  • Ensure NEXT_PUBLIC_API_URL matches the backend (e.g., http://localhost:4000/api).
  • Postgres + Redis from docker compose up -d must be running before starting the backend.
  • S3 bucket must exist and allow the provided credentials; for MinIO, set S3_ENDPOINT and enable path-style access.
  • OpenAI key must have realtime + chat access for the configured models.
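A quick way to automate these checks is to verify the required backend variables are set before boot. This missingEnv helper is a sketch (not part of the repo); the variable names mirror the Environment Variables section above.

```typescript
// Return the keys from `required` that are absent or blank in `env`.
function missingEnv(
  env: Record<string, string | undefined>,
  required: string[],
): string[] {
  return required.filter((key) => !env[key] || env[key]!.trim() === "");
}

const required = [
  "DATABASE_URL",
  "OPENAI_API_KEY",
  "REDIS_URL",
  "S3_BUCKET_NAME",
];

// With only DATABASE_URL set, the other three are reported missing.
console.log(missingEnv({ DATABASE_URL: "postgres://..." }, required));
// → ["OPENAI_API_KEY", "REDIS_URL", "S3_BUCKET_NAME"]
```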

About

A full-stack TypeScript application that conducts live voice interviews using OpenAI's Realtime API, automatically records sessions to S3, and generates AI-powered candidate evaluations through an intelligent processing pipeline.
