# A Mini AI Support Agent for Live Chat

Built for the Spur Founding Full-Stack Engineer Take-Home Assignment.
A full-stack web application that simulates a customer support chat where an AI agent answers user questions using Google's Gemini AI API. The application features conversation persistence, context-aware responses, and a modern chat widget UI.
## Table of Contents

- [Features](#features)
- [Tech Stack](#tech-stack)
- [Project Structure](#project-structure)
- [Getting Started](#getting-started)
- [API Documentation](#api-documentation)
- [Architecture Overview](#architecture-overview)
- [LLM Integration](#llm-integration)
- [Database Schema](#database-schema)
- [Deployment](#deployment)
- [Trade-offs & Future Improvements](#trade-offs--future-improvements)
## Features

- Real-time AI Chat: Interactive chat interface with AI-powered responses using Google Gemini
- Conversation Persistence: All messages are stored in PostgreSQL and can be retrieved across sessions
- Session Management: Automatic session creation and tracking using localStorage
- Context-Aware Responses: The AI maintains conversation history for contextual replies
- Modern UI/UX:
  - Floating chat widget with smooth animations
  - Auto-scroll to the latest messages
  - Typing indicator while the AI is responding
  - Disabled send button during message processing
  - Clear visual distinction between user and AI messages
- Input Validation: Empty-message prevention and error handling for network failures
- Domain Knowledge: Pre-seeded with e-commerce store policies (shipping, returns, support hours)
- Robust Error Handling: Graceful degradation with user-friendly error messages
## Tech Stack

### Backend

- Runtime: Node.js with TypeScript
- Framework: Express.js
- Database: PostgreSQL with Prisma ORM
- LLM Provider: Google Gemini 2.5 Flash via `@google/genai`
- Environment Management: dotenv

### Frontend

- Framework: React 19 with TypeScript
- Build Tool: Vite
- Styling: Tailwind CSS v4
- State Management: React Hooks (`useState`, `useEffect`)

### Development Tools

- TypeScript: Type safety across the stack
- Nodemon: Backend hot-reloading
- ESLint: Code linting
- Prisma: Database migrations and type-safe ORM
## Project Structure

```
Spur/
├── client/                      # React frontend
│   ├── src/
│   │   ├── App.tsx              # Main chat widget component
│   │   ├── api.ts               # API client for backend communication
│   │   ├── main.tsx             # React entry point
│   │   └── index.css            # Global styles
│   ├── package.json
│   └── vite.config.ts
│
├── server/                      # Node.js backend
│   ├── src/
│   │   ├── controllers/
│   │   │   └── chatController.ts   # Request handlers
│   │   ├── routes/
│   │   │   └── chatRoutes.ts       # API route definitions
│   │   ├── services/
│   │   │   └── chatService.ts      # LLM integration & business logic
│   │   ├── index.ts             # Express app & server setup
│   │   └── prisma.ts            # Prisma client instance
│   ├── prisma/
│   │   ├── schema.prisma        # Database schema
│   │   └── migrations/          # Database migrations
│   └── package.json
│
└── README.md
```
## Getting Started

### Prerequisites

Ensure you have the following installed:

- Node.js (v18 or higher)
- npm or yarn
- PostgreSQL (v14 or higher)
- A Google Gemini API key (available from Google AI Studio)
### Installation

1. Clone the repository

   ```bash
   git clone https://github.com/immohitsen/LiveChatBot
   cd Spur
   ```

2. Install backend dependencies

   ```bash
   cd server
   npm install
   ```

3. Install frontend dependencies

   ```bash
   cd ../client
   npm install
   ```
### Database Setup

1. Create a PostgreSQL database

   ```bash
   # Using psql or your preferred PostgreSQL client
   createdb spur_chat
   ```

2. Configure the database connection

   Create a `.env` file in the `server/` directory:

   ```env
   DATABASE_URL="postgresql://username:password@localhost:5432/spur_chat"
   GEMINI_API_KEY="your_gemini_api_key_here"
   PORT=3000
   ```

   Replace `username`, `password`, and the database name as needed.

3. Run database migrations

   ```bash
   cd server
   npx prisma migrate dev --name init
   ```

   This will:
   - Create the database tables (`conversations`, `messages`)
   - Generate the Prisma Client for type-safe database queries

4. Verify the database setup (optional)

   ```bash
   npx prisma studio
   ```

   This opens a GUI to browse your database at `http://localhost:5555`.
### Environment Variables

Backend (`server/.env`):

| Variable | Description | Example |
|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | `postgresql://user:pass@localhost:5432/spur_chat` |
| `GEMINI_API_KEY` | Google Gemini API key | `AIza...` |
| `PORT` | Server port (optional) | `3000` |

Frontend (`client/.env`):

| Variable | Description | Example |
|---|---|---|
| `VITE_API_URL` | Backend API base URL | `http://localhost:3000/api/chat` |

For production deployment, update `VITE_API_URL` to your deployed backend URL.
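As a sketch of how the frontend might consume this variable (Vite only exposes env vars prefixed with `VITE_`; the fallback URL here is an assumption, not the actual `api.ts` code):

```ts
// client/src/api.ts (sketch) — Vite injects VITE_-prefixed vars at build time
const API_URL: string =
  import.meta.env.VITE_API_URL ?? "http://localhost:3000/api/chat";

export default API_URL;
```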
### Running the Application

1. Start the backend server

   ```bash
   cd server
   npm run dev
   ```

   The server runs at `http://localhost:3000`.

2. Start the frontend (in a new terminal)

   ```bash
   cd client
   npm run dev
   ```

   The frontend runs at `http://localhost:5173`.

3. Open your browser

   Navigate to `http://localhost:5173` and click the blue chat button to start chatting!
## API Documentation

Base URL: `http://localhost:3000/api/chat`
### `POST /message`

Send a user message and receive an AI response.

Request body:

```json
{
  "message": "What's your return policy?",
  "sessionId": "optional-uuid-string"
}
```

Response:

```json
{
  "reply": "We offer a 30-day return policy. Customers are responsible for return shipping costs.",
  "sessionId": "550e8400-e29b-41d4-a716-446655440000"
}
```

Error responses:

- `400 Bad Request` - empty message
- `500 Internal Server Error` - LLM API failure or database error
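For illustration, calling this endpoint from TypeScript might look like the following (the real client logic lives in `client/src/api.ts`; this is only a sketch):

```ts
// Inside an async function: send a message and keep the returned session ID
const res = await fetch("http://localhost:3000/api/chat/message", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "What's your return policy?" }),
});
const { reply, sessionId } = await res.json();
// Persist the session ID so follow-up messages share one conversation
localStorage.setItem("chat_session_id", sessionId);
```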
### `GET /history/:sessionId`

Retrieve all messages from a specific conversation.

Response:

```json
[
  {
    "id": "msg-uuid-1",
    "text": "Do you ship to USA?",
    "sender": "user"
  },
  {
    "id": "msg-uuid-2",
    "text": "Yes! We ship to USA, Canada, and UK. Free shipping on orders over $50.",
    "sender": "ai"
  }
]
```
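For example, the widget could restore a conversation on page load with a call along these lines (sketch only; error handling omitted):

```ts
// Inside an async init function: restore history for a stored session
const sessionId = localStorage.getItem("chat_session_id");
if (sessionId) {
  const res = await fetch(`http://localhost:3000/api/chat/history/${sessionId}`);
  const history: { id: string; text: string; sender: "user" | "ai" }[] =
    await res.json();
  // render `history` into the chat widget
}
```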
## Architecture Overview

### Backend

The backend follows a layered architecture with clear separation of concerns (a minimal wiring sketch follows the layer list):
```
Routes → Controllers → Services → Database
```
1. Routes Layer (`chatRoutes.ts`)
   - Defines API endpoints
   - Maps HTTP requests to controller functions

2. Controllers Layer (`chatController.ts`)
   - Handles request/response parsing
   - Orchestrates service calls
   - Error handling and HTTP status codes

3. Services Layer (`chatService.ts`)
   - Core business logic
   - LLM API integration
   - Message validation
   - Session management
   - Database operations

4. Data Layer (Prisma ORM)
   - Type-safe database queries
   - Schema management
   - Migrations
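As a rough illustration of how thin the routing layer is, a hypothetical `chatRoutes.ts` might wire endpoints to controllers like this (handler names are assumptions, not the actual exports):

```ts
// chatRoutes.ts (sketch) — maps HTTP endpoints to controller functions
import { Router } from "express";
import { postMessage, getHistory } from "../controllers/chatController";

const router = Router();

router.post("/message", postMessage);
router.get("/history/:sessionId", getHistory);

export default router;
```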
Design Decisions:

- Prisma ORM: Chosen for type safety, excellent TypeScript support, and simplified migrations
- Service Encapsulation: LLM integration is abstracted into `handleChat()`, making it easy to swap providers (OpenAI, Claude, etc.) by modifying a single function
- UUID Session IDs: Harder to guess and more scalable than auto-incrementing integers
- Context Window Limiting: Only the last 10 messages are sent to the LLM to control costs and token usage (see the sketch after this list)
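The context-window decision is a one-query affair in Prisma. A minimal sketch, assuming the `Message` model shown in the schema section below (`prisma` and `conversationId` come from the surrounding service code):

```ts
// Inside the chat service (sketch): fetch the 10 newest messages,
// then restore chronological order before sending them to the LLM.
const recent = await prisma.message.findMany({
  where: { conversationId },
  orderBy: { createdAt: "desc" },
  take: 10,
});
const context = recent.reverse();
```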
### Frontend

The frontend is a component-based React application:

- State Management: React hooks for local state (`useState`, `useEffect`)
- API Communication: Centralized in `api.ts` for reusability
- Session Persistence: Uses `localStorage` to maintain sessions across page reloads (see the sketch after this list)
- Auto-scroll: Implemented with `useRef` and `scrollIntoView`
- Loading States: Prevents double-sends and shows a typing indicator
## LLM Integration

- Model: `gemini-2.5-flash`
- SDK: `@google/genai` (v1.34.0)

Why Gemini 2.5 Flash:

- Fast Response Times: Ideal for real-time chat
- Cost-Effective: Generous free tier
- Strong Context Understanding: Handles multi-turn conversations well
System Instruction:

```
You are a helpful customer support agent for "SpurStore".
- Shipping: We ship to USA, Canada, and UK. Free shipping over $50.
- Returns: 30-day return policy. Customer pays return shipping.
- Support Hours: Mon-Fri, 9am - 5pm EST.
- Tone: Professional, concise, and friendly. Do not use markdown formatting.
```
Context Handling:

- The last 10 messages from the conversation are included in each API call (see the call sketch below)
- Messages are formatted as alternating `user` and `model` roles
- The system instruction ensures a consistent brand voice
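Putting the pieces together, a minimal sketch of the Gemini call, assuming `history` holds the last 10 stored messages and `SYSTEM_INSTRUCTION` holds the prompt above (names are illustrative, not the actual `chatService.ts` code):

```ts
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Inside handleChat() (sketch): map stored messages to Gemini's role format
const response = await ai.models.generateContent({
  model: "gemini-2.5-flash",
  contents: history.map((m) => ({
    role: m.role === "USER" ? "user" : "model",
    parts: [{ text: m.content }],
  })),
  config: { systemInstruction: SYSTEM_INSTRUCTION },
});

const reply = response.text; // plain text, per the no-markdown tone instruction
```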
Error Handling:

- Invalid API Key: Returns a user-friendly error message
- Rate Limiting: Caught and surfaced to the user
- Timeouts: Handled by the backend, which returns a fallback message
- Network Errors: Frontend displays an alert prompting the user to check the backend (see the controller sketch below)
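A sketch of how a controller might map these failure modes onto the documented status codes (function names and message strings are assumptions):

```ts
import type { Request, Response } from "express";
import { handleChat } from "../services/chatService";

export async function postMessage(req: Request, res: Response) {
  const { message, sessionId } = req.body;

  // 400 for empty messages, matching the API documentation above
  if (typeof message !== "string" || message.trim() === "") {
    return res.status(400).json({ error: "Message cannot be empty" });
  }

  try {
    const result = await handleChat(message, sessionId);
    return res.json(result);
  } catch (err) {
    // LLM or database failure: log it and return a friendly fallback
    console.error(err);
    return res
      .status(500)
      .json({ error: "Something went wrong. Please try again." });
  }
}
```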
## Database Schema

### `conversations` table

| Column | Type | Description |
|---|---|---|
| `id` | UUID (PK) | Unique conversation ID |
| `createdAt` | DateTime | Timestamp of creation |

### `messages` table

| Column | Type | Description |
|---|---|---|
| `id` | UUID (PK) | Unique message ID |
| `content` | String | Message text |
| `role` | Enum (`USER` \| `ASSISTANT`) | Sender type |
| `conversationId` | UUID (FK) | References `Conversation.id` |
| `createdAt` | DateTime | Timestamp of message |

Relationship:

```
Conversation (1) ──────< (Many) Message
  id                       id
  createdAt                content
                           role
                           conversationId
                           createdAt
```
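A `schema.prisma` consistent with these tables might look like the following sketch (defaults and relation attributes are assumptions based on the columns above, not the repository's exact schema):

```prisma
enum Role {
  USER
  ASSISTANT
}

model Conversation {
  id        String    @id @default(uuid())
  createdAt DateTime  @default(now())
  messages  Message[]
}

model Message {
  id             String       @id @default(uuid())
  content        String
  role           Role
  conversationId String
  conversation   Conversation @relation(fields: [conversationId], references: [id])
  createdAt      DateTime     @default(now())
}
```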
## Deployment

### Backend

1. Build the TypeScript code

   ```bash
   npm run build
   ```

2. Set environment variables on your hosting platform:
   - `DATABASE_URL`
   - `GEMINI_API_KEY`
   - `PORT` (usually auto-assigned)

3. Run migrations (one-time setup)

   ```bash
   npx prisma migrate deploy
   ```

4. Start the production server

   ```bash
   npm start
   ```
### Frontend

1. Update the environment variable

   ```env
   VITE_API_URL=https://your-backend-url.com/api/chat
   ```

2. Build for production

   ```bash
   npm run build
   ```

3. Deploy the `dist/` folder to your hosting platform
## Trade-offs & Future Improvements

### Trade-offs

- No Authentication
  - Simplified for demo purposes
  - In production: add user auth (JWT, OAuth) to prevent abuse
- localStorage for Sessions
  - Quick and simple
  - Better approach: server-managed sessions with cookies/tokens
- Limited Context Window (10 messages)
  - Reduces API costs
  - Could be increased for better context in long conversations
- No Rate Limiting
  - The current implementation allows unlimited requests
  - Production needs: IP-based rate limiting and per-user quotas
- Single LLM Provider
  - Currently only supports Gemini
  - Future: a provider abstraction layer to support OpenAI, Claude, etc.
### Future Improvements

- Rate Limiting: Implement `express-rate-limit` to prevent abuse (see the sketch after this list)
- Caching: Use Redis to cache frequent queries (e.g., FAQ responses)
- Streaming Responses: Implement Server-Sent Events (SSE) for token-by-token streaming
- Multi-Provider Support: Abstract LLM calls to easily switch between OpenAI, Claude, and Gemini
- Better Logging: Structured logging with Winston/Pino for debugging and monitoring
- Unit Tests: Jest/Vitest for the service layer and API endpoints
- Markdown Support: Render formatted AI responses with code blocks and lists
- File Uploads: Allow users to send images/documents
- Voice Input: Speech-to-text for accessibility
- Internationalization: Multi-language support
- Dark Mode: Theme toggle for better UX
- Message Reactions: Quick feedback (thumbs up/down)
- Offline Support: Queue messages when the backend is unreachable
- Docker Compose: One-command local setup with the database
- CI/CD Pipeline: Automated testing and deployment
- Monitoring: Add Sentry for error tracking and Prometheus for metrics
- Database Backups: Automated daily backups for PostgreSQL
- Human Handoff: Escalate to a live agent when the AI can't help
- Analytics Dashboard: Track conversation metrics and common queries
- A/B Testing: Test different prompts to improve AI responses
- Sentiment Analysis: Detect frustrated users and prioritize their support
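As a quick illustration of the rate-limiting item at the top of this list, a minimal `express-rate-limit` setup might look like this (the window and request cap are placeholder values, not a tested configuration):

```ts
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

// Cap each IP at 20 chat requests per minute; excess requests receive a 429.
const chatLimiter = rateLimit({
  windowMs: 60 * 1000,
  max: 20,
  standardHeaders: true,
  legacyHeaders: false,
});

app.use("/api/chat", chatLimiter);
```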
This project is part of a take-home assignment for Spur and is for demonstration purposes only.
Built with care by Mohit Sen for the Spur Founding Engineer Take-Home Assignment
For questions or issues, please contact [senmohit9005@gmail.com](mailto:senmohit9005@gmail.com).