Interactive, WhatsApp‑inspired chat UI with selectable educator personas, powered by Google Gemini (OpenAI-compatible API).

## Features
- Modern Next.js App Router (React 19 / Server + Client Components)
- Persona selection landing page with animated cards & glassmorphism
- WhatsApp‑like chat interface (bubble layout, auto-resize composer, scroll to bottom)
- Two educator personas: Hitesh Choudhary & Piyush Garg
- LLM integration via an OpenAI-compatible endpoint (Gemini `gemini-2.0-flash` model)
- Conversation history passed to the API for contextual answers
- Simple link auto-detection in messages
- Dark mode–friendly theming using CSS variables
> **Note:** Streaming was prototyped; the current committed version returns full responses (no SSE). You can extend it (see "Extending / Streaming" below).
## Project Structure

```
app/
  page.tsx                  # Persona selection screen
  chat/page.tsx             # Chat UI (client component, Suspense-wrapped search params)
  api/chat/hitesh/route.ts
  api/chat/piyush/route.ts
prompts/
  systemPrompts.*           # System prompts (imported by generator)
scripts/
  llmResponseGenerator.ts   # Central LLM call helper
public/                     # Static assets (avatars, svgs)
```
## Getting Started

- Install dependencies:

  ```bash
  npm install
  ```

- Create an `.env.local` file:

  ```bash
  API_KEY=YOUR_GEMINI_API_KEY
  ```

  The code currently reads `process.env.API_KEY` in `scripts/llmResponseGenerator.ts`. The key must have access to the Gemini OpenAI-compatible endpoint.

- Run the dev server:

  ```bash
  npm run dev
  ```

- Open http://localhost:3000.
- Select a persona → you are navigated to `/chat?n=hitesh` (or `piyush`).
## LLM Integration

Central helper: `scripts/llmResponseGenerator.ts`. It builds the messages array:

```
system -> persona system prompt
...history (role: user | assistant, content)
user   -> latest question
```

and returns `response.choices[0].message` (OpenAI chat-completion shape).
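A minimal sketch of that flow, assuming the official `openai` SDK pointed at Gemini's OpenAI-compatible base URL (the function and parameter names here are illustrative, not the exact committed code):

```ts
// scripts/llmResponseGenerator.ts (illustrative sketch, not the exact source)
import OpenAI from "openai";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

const openai = new OpenAI({
  apiKey: process.env.API_KEY,
  // Gemini's OpenAI-compatible endpoint:
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});

export async function generateLLMResponse(
  systemPrompt: string,
  history: ChatCompletionMessageParam[],
  message: string
) {
  // system -> history -> latest user question
  const messages: ChatCompletionMessageParam[] = [
    { role: "system", content: systemPrompt },
    ...history,
    { role: "user", content: message },
  ];

  const response = await openai.chat.completions.create({
    model: "gemini-2.0-flash",
    messages,
  });

  // OpenAI chat-completion shape: the reply lives in choices[0].message
  return response.choices[0].message;
}
```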
## API

Both endpoints accept a POST JSON body:

```json
{
  "message": "Explain closures in JS",
  "history": [
    { "role": "user", "content": "Hi" },
    { "role": "assistant", "content": "Hello!" }
  ]
}
```

Endpoints:

- `POST /api/chat/hitesh`
- `POST /api/chat/piyush`

Response (success):

```json
{
  "persona": "hitesh",
  "reply": "Closures are...",
  "timestamp": 1734300000000,
  "model": "gemini-2.0-flash"
}
```

Response (error):

```json
{ "error": "Message required" }
```
## Chat UI Behavior

- Keeps messages in local state only (no persistence).
- Auto-scrolls on new messages.
- Composer auto-resizes up to a 160px height cap (see the sketch below).
- Timestamp shown per bubble (local time, HH:MM).
- Initial assistant greeting depends on the persona query param.
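The auto-resize behavior can be implemented roughly like this (a sketch; the handler name and markup are illustrative):

```tsx
import * as React from "react";

// Illustrative composer auto-resize handler with the 160px cap.
function handleComposerInput(e: React.FormEvent<HTMLTextAreaElement>) {
  const el = e.currentTarget;
  el.style.height = "auto";                                // reset so it can shrink
  el.style.height = `${Math.min(el.scrollHeight, 160)}px`; // grow up to the cap
}

// Usage: <textarea onInput={handleComposerInput} ... />
```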
## Testing the API

With the server running:

```bash
curl -X POST http://localhost:3000/api/chat/hitesh \
  -H 'Content-Type: application/json' \
  -d '{"message":"Hello!","history":[]}'
```

## Production Build

```bash
npm run build
npm start
```

## Extending / Streaming

To re-enable streaming:

- Convert the route to the Edge runtime: `export const runtime = 'edge'`.
- Use `openai.chat.completions.create({ stream: true, ... })` and iterate the async chunks.
- Emit Server-Sent Events (SSE): `data: {"delta":"..."}\n\n` chunks and a final `data: {"done":true}`.
- In the client, replace the existing JSON fetch with an event-stream parser that updates a provisional assistant message (see the sketch below).
## Security Notes

- Never hardcode API keys; keep them in `.env.local` (not committed).
- Consider adding rate limiting (e.g., via middleware) before exposing this publicly.
- Sanitize / limit the history length to stay within model token limits (a sketch follows this list).
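One simple way to cap history before sending it upstream; the limits below are arbitrary assumptions, not tuned values:

```ts
// Hypothetical history trimming; the caps are illustrative.
type HistoryMessage = { role: "user" | "assistant"; content: string };

const MAX_TURNS = 20;                 // keep only the most recent turns
const MAX_CHARS_PER_MESSAGE = 4_000;  // rough guard against oversized messages

function trimHistory(history: HistoryMessage[]): HistoryMessage[] {
  return history
    .slice(-MAX_TURNS)
    .map((m) => ({ ...m, content: m.content.slice(0, MAX_CHARS_PER_MESSAGE) }));
}
```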
## Roadmap / Improvements

- Move shared TypeScript types into a `types/` folder instead of cross-importing route types.
- Add streaming UX (progressive tokens).
- Add abort/cancel for in-flight requests.
- Persist conversations (DB or localStorage).
- Add persona management (dynamic metadata & avatars).
- Add unit tests for the message reducer & API normalization.
## Troubleshooting

| Issue | Cause | Fix |
|---|---|---|
| `Message required` | Empty `message` | Send a non-empty string |
| 500 from endpoint | Upstream model / key invalid | Check `API_KEY` & quota |
| Build error about `useSearchParams` | Missing Suspense wrap | Already fixed by wrapping in `<Suspense>` (pattern sketched below) |
| Empty reply string | Model returned no choices | Log the full response & inspect quota/errors |
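For reference, the Suspense-wrapping pattern looks roughly like this (component names and fallback are illustrative, not the exact committed code):

```tsx
"use client";

// app/chat/page.tsx (illustrative shape of the Suspense wrap)
import { Suspense } from "react";
import { useSearchParams } from "next/navigation";

function ChatInner() {
  // useSearchParams() must render inside a <Suspense> boundary,
  // otherwise `next build` errors on this page.
  const persona = useSearchParams().get("n") ?? "hitesh";
  return <main>Chatting with {persona}</main>;
}

export default function ChatPage() {
  return (
    <Suspense fallback={<div>Loading chat…</div>}>
      <ChatInner />
    </Suspense>
  );
}
```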
## License

MIT (adjust as needed).
## Acknowledgements

- Google Gemini (OpenAI compatibility layer)
- Next.js team for the App Router
- Inspiration from WhatsApp UI patterns
## Contributing

Feel free to open issues or propose improvements.