An AI assistant that combines OpenAI GPT or Perplexity models with real-time Tavily web search.
- Conversational agent – generate contextual answers through OpenAI or Perplexity (the provider is configured in `.env`).
- Token streaming – display responses as they stream; Perplexity also supports SSE streaming.
- Search grounding – enrich answers with Tavily results when fresh information is needed (requires Tavily API key).
- Quick prompts – fire commonly used prompts via one-click shortcuts.
- Conversation history – automatically persist chats and export them as JSON when needed.
- Keyboard shortcuts – accelerate new chats, exports, and resets via hotkeys.
- React 18 – modern React hooks and Suspense
- TypeScript – type-safe UI development
- Tailwind CSS – utility-first styling
- Lucide React – icon set
- OpenAI / Perplexity API – conversational models
- Tavily Search API – real-time web context
- LocalStorage – lightweight persistence
- Create React App – project bootstrap
- ESLint & Prettier – code quality
- PostCSS – CSS post-processing
- Type a request and press Enter (Shift+Enter inserts a new line).
- Use a quick prompt when you want a ready-made instruction.
- Export the conversation as JSON and share it with your backend or teammates.
💡 Streaming mode: when the model supports streaming, tokens appear in real time instead of arriving as a single block.
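To illustrate what streaming mode involves on the client, here is a minimal sketch of parsing tokens out of a raw SSE chunk. The `data: {"token": "..."}` payload shape and the `[DONE]` terminator are assumptions (OpenAI-style conventions), not the exact wire format this project uses:

```typescript
// Hypothetical sketch: extract token strings from a raw SSE chunk.
// Assumes each event line looks like `data: {"token": "..."}` and the
// stream ends with `data: [DONE]` — adjust to your provider's format.
export function parseSseChunk(chunk: string): string[] {
  const tokens: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") break;
    try {
      const parsed = JSON.parse(payload);
      if (typeof parsed.token === "string") tokens.push(parsed.token);
    } catch {
      // ignore keep-alives and partially received lines
    }
  }
  return tokens;
}
```

Appending each returned token to the current message as chunks arrive is what produces the real-time typing effect.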
- Enter: send message
- Shift + Enter: new line
- Ctrl/Cmd + E: export conversation
- Ctrl/Cmd + Backspace: clear conversation
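The mapping behind these shortcuts can be expressed as a small pure function. This is an illustrative sketch, not the repo's actual `useKeyboardShortcuts.ts`:

```typescript
// Illustrative sketch: map a keyboard event to one of the documented actions.
type ShortcutAction = "send" | "newline" | "export" | "clear" | null;

export function matchShortcut(e: {
  key: string;
  shiftKey: boolean;
  ctrlKey: boolean;
  metaKey: boolean;
}): ShortcutAction {
  const mod = e.ctrlKey || e.metaKey; // Ctrl on Windows/Linux, Cmd on macOS
  if (mod && e.key.toLowerCase() === "e") return "export";
  if (mod && e.key === "Backspace") return "clear";
  if (e.key === "Enter" && e.shiftKey) return "newline";
  if (e.key === "Enter") return "send";
  return null;
}
```

Checking the modifier combinations before the bare `Enter` case keeps the more specific shortcuts from being shadowed.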
- Keep API keys in environment variables only (`frontend/.env`); `.env` files are ignored by Git by default.
- Be deliberate about exposing keys when calling APIs from the browser.
- Avoid putting sensitive information into chat transcripts.
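One way to handle the environment-variable rule in code is to fail fast when a key is missing. This helper is a sketch, not part of this repo; note that Create React App only exposes variables prefixed with `REACT_APP_`, and inlines them into the bundle at build time, so anything placed there is visible to every visitor's browser:

```typescript
// Sketch: fail fast when a required env variable is missing.
// Pass `process.env` (or any record) explicitly to keep it testable.
export function requireEnv(
  name: string,
  env: Record<string, string | undefined>
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

In a CRA app you would call it as `requireEnv("REACT_APP_OPENAI_API_KEY", process.env)` during startup.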
- Fork this repository.
- Create a feature branch (`git checkout -b feature/amazing-feature`).
- Commit your changes (`git commit -m 'Add some amazing feature'`).
- Push to the branch (`git push origin feature/amazing-feature`).
- Open a pull request.
- Bug reports: GitHub Issues
- Feature ideas: GitHub Discussions
- Email: ingu627@gmail.com
- Integrated OpenAI and Perplexity models
- Added Tavily web search integration
- Introduced a Notion-style UI with glassmorphism accents
- Persisted conversations in local storage
- Added keyboard shortcuts and responsive design
- Enabled Docker and GHCR deployment workflow
- Delivered FastAPI backend support
- Filtered `<think>` tags from reasoning models
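The `<think>` filtering mentioned above amounts to stripping the reasoning block some models emit before their final answer. A minimal sketch of the idea (the repo's actual implementation may differ):

```typescript
// Sketch: remove <think>…</think> reasoning blocks from model output.
// [\s\S]*? matches across newlines, non-greedily, so multiple blocks
// in one response are each removed.
export function stripThinkTags(text: string): string {
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}
```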
See the Docker deployment guide for full instructions.
```bash
# Local run
docker-compose up -d

# Production (GHCR images)
docker-compose -f docker-compose.prod.yml up -d
```

⭐ If this project helped you, please star the repository!
- Single-card dark layout keeps input and history visible together.
- Message bubbles and typing indicator stay minimal to match the console theme.
- Only essential quick actions remain: quick prompts, export, and new chat.
```
ai-agent-service/
├── README.md
├── .gitignore
├── frontend/                     # React frontend
│   ├── package.json
│   ├── package-lock.json
│   ├── tsconfig.json
│   ├── tailwind.config.js
│   ├── postcss.config.js
│   ├── .env                      # Local env variables (gitignored)
│   ├── .env.example              # Env variable template
│   ├── public/
│   │   └── index.html
│   └── src/
│       ├── components/           # React components
│       │   ├── ChatBot.tsx       # Chat interface
│       │   ├── MessageBubble.tsx
│       │   ├── TypingIndicator.tsx
│       │   ├── WelcomeMessage.tsx
│       │   └── AIAgent.ts
│       ├── hooks/
│       │   └── useKeyboardShortcuts.ts
│       ├── services/             # API + storage helpers
│       ├── types.ts
│       ├── index.css
│       ├── index.tsx
│       └── App.tsx
├── backend/                      # FastAPI backend
│   ├── __init__.py
│   ├── __main__.py
│   ├── api.py
│   ├── app_logging.py
│   ├── config.py
│   ├── schemas.py
│   ├── search.py
│   ├── services.py
│   ├── requirements.txt
│   ├── pyproject.toml
│   ├── .env.example
│   └── README.md
└── docs/                         # Additional guides and assets
    ├── DOCKER.md
    ├── DOCKER_TROUBLESHOOTING.md
    ├── INTEGRATION.md
    ├── CHANGES.md
    └── imgs/
        └── ui_agent.png
```
The frontend can talk to OpenAI/Perplexity directly. To proxy requests through the FastAPI backend instead:
```bash
cd backend
cp .env.example .env
# fill in API keys inside .env
pip install -r requirements.txt
python -m __main__
# or
uvicorn api:app --host 0.0.0.0 --port 8000 --reload
```

Add the following to `frontend/.env`:

```
REACT_APP_BACKEND_URL=http://localhost:8000
```

Restart the frontend and all API calls will flow through the backend.
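The switch between direct provider calls and the FastAPI proxy can be a single URL-resolution step. This is a sketch; `DIRECT_API_URL` and the `/chat` path are illustrative placeholders, not values taken from this repo:

```typescript
// Sketch: route chat requests through the FastAPI proxy when a backend
// URL is configured, otherwise call the provider directly.
const DIRECT_API_URL = "https://api.openai.com/v1/chat/completions"; // placeholder

export function resolveChatEndpoint(
  backendUrl: string | undefined,
  path = "/chat"
): string {
  if (backendUrl) {
    // strip trailing slashes so we don't produce "http://host//chat"
    return backendUrl.replace(/\/+$/, "") + path;
  }
  return DIRECT_API_URL;
}
```

In the frontend this would be called with `process.env.REACT_APP_BACKEND_URL`, so setting or unsetting that one variable flips the routing.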
