A powerful, fast, and feature-rich AI chatbot application with a Python FastAPI backend and a React frontend, featuring multi-provider LLM support, real-time messaging, conversation memory, and a beautiful, responsive interface.
- Multi-Provider LLM Support: Google Gemini and more
- Real-time Chat: WebSocket-based instant messaging
- Conversation Memory: Context-aware conversations with persistent memory
- Fast Performance: Optimized for speed and responsiveness
- Beautiful UI: Modern, responsive design with Tailwind CSS
- Responsive Design: Works perfectly on desktop, tablet, and mobile
- Dark/Light Theme: Customizable appearance
- Smooth Animations: Framer Motion powered interactions
- Real-time Typing Indicators: See when AI is responding
- Message Actions: Copy, like, regenerate, and more
- Context Awareness: Remembers conversation history
- Provider Switching: Switch between different AI models
- Fallback Support: Automatic fallback if primary provider fails
- Response Streaming: Real-time response generation
- Memory Management: Intelligent conversation summarization (see the sketch after this feature list)
- Conversation History: Browse and search past conversations
- Export/Import: Backup and restore conversations
- Bulk Operations: Delete multiple conversations
- Search & Filter: Find conversations quickly
- Auto-save: Automatic conversation persistence
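To make the memory and context-awareness features above more concrete, here is a minimal, hypothetical Python sketch of a trimming-plus-summary strategy. It is an illustration only, not the logic in the project's `memory_manager.py`; the class and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationMemory:
    """Illustrative memory: keep the last `max_turns` messages verbatim and
    fold older turns into a running summary string."""
    max_turns: int = 10
    summary: str = ""
    messages: list = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        # When the history grows too long, fold the oldest turns into the summary.
        while len(self.messages) > self.max_turns:
            oldest = self.messages.pop(0)
            self.summary += f"\n{oldest['role']}: {oldest['content'][:200]}"

    def context(self) -> list:
        """Context sent to the LLM: the summary (if any) plus recent messages."""
        prefix = []
        if self.summary:
            prefix.append({"role": "system",
                           "content": "Summary of earlier conversation:" + self.summary})
        return prefix + self.messages
```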
- React 18: Modern React with hooks and context
- TypeScript: Type-safe JavaScript development
- Node.js: JavaScript runtime for development
- Vite: Fast build tool and dev server
- Tailwind CSS: Utility-first CSS framework
- Framer Motion: Smooth animations and transitions
- Socket.io Client: Real-time communication
- Axios: HTTP client for API calls
- React Router: Client-side routing
- React Hot Toast: Beautiful notifications
- FastAPI: Modern web framework for building APIs (see the sketch after this list)
- Uvicorn: ASGI server for FastAPI
- WebSockets: Real-time communication
- Google API: Gemini models integration
- Pydantic: Data validation and serialization
- Python-dotenv: Environment variable management
- SlowAPI: Rate limiting
- Socket.IO: Additional WebSocket support
- MongoDB: NoSQL database with localhost-only access
- SQLite: Fallback database for development
- Dual Support: Automatic database detection and switching
- Local Storage: All data stored locally, no external connections
- Docker Compose: Multi-container orchestration
- MongoDB Container: Local database with initialization
- Nginx: Reverse proxy for production
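As a rough illustration of how the FastAPI and Pydantic pieces in the backend stack fit together, here is a minimal, self-contained sketch. The endpoint path, model fields, and echo reply are assumptions for demonstration, not the project's actual API.

```python
# Minimal, hypothetical sketch of a FastAPI endpoint with Pydantic validation.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="HyperLogic AI (sketch)")

class ChatRequest(BaseModel):
    conversation_id: str
    message: str
    provider: str = "google"   # which LLM provider to use

class ChatResponse(BaseModel):
    conversation_id: str
    reply: str

@app.post("/api/chat", response_model=ChatResponse)
async def chat(req: ChatRequest) -> ChatResponse:
    # A real implementation would call the configured LLM provider here.
    return ChatResponse(conversation_id=req.conversation_id,
                        reply=f"(echo) {req.message}")
```

Run it with `uvicorn <module_name>:app --reload` to see the request validation that Pydantic provides out of the box.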
- Python v3.14+ and pip (for backend)
- Node.js v25+ and npm (for frontend development)
- TypeScript knowledge (recommended)
- Docker Desktop (for containerized deployment)
- API keys for your preferred LLM providers
1. Clone the repository

   ```bash
   git clone https://github.com/sudo-de/HyperLogic_AI_ChatBot.git
   cd HyperLogic_AI_ChatBot
   ```

2. Install dependencies

   ```bash
   npm run install-all
   ```

   This will:
   - Install frontend dependencies (React)
   - Create a Python virtual environment
   - Install Python backend dependencies

3. Environment Setup

   ```bash
   cp env.example .env
   ```

   Edit `.env` and add your API keys:

   ```env
   GOOGLE_API_KEY=your_google_api_key_here
   PORT=5000
   CORS_ORIGIN=http://localhost:3000
   ```

4. Start with Docker (Recommended)

   ```bash
   # Start MongoDB and the backend
   docker compose up -d mongodb hyperlogic-ai

   # Start the frontend locally
   cd web && npm run dev
   ```

5. Start without Docker (Development)

   ```bash
   ./start.sh
   ```

   This starts both the Python backend server (port 5000) and the React frontend (port 3000).
```bash
# Docker Commands
docker compose up -d                 # Start all services
docker compose up -d mongodb         # Start only MongoDB
docker compose up -d hyperlogic-ai   # Start only backend
docker compose down                  # Stop all services
docker compose logs                  # View logs
```

```bash
# Frontend Development
cd web
npm install       # Install dependencies
npm run dev       # Start development server
npm run build     # Build for production
npm run preview   # Preview production build
```

```bash
# Backend Development
cd server
python -m venv venv               # Create virtual environment
source venv/bin/activate          # Activate virtual environment
pip install -r requirements.txt   # Install dependencies
python run.py                     # Start backend server
```

```bash
# Full Stack Development
./start.sh   # Start both frontend and backend
```

| Variable | Description | Default |
|---|---|---|
| `PORT` | Server port | `5000` |
| `NODE_ENV` | Environment | `development` |
| `GOOGLE_API_KEY` | Google API key | - |
| `CORS_ORIGIN` | Frontend URL | `http://localhost:3000` |
| `MONGODB_URL` | MongoDB connection | `mongodb://127.0.0.1:27017/hyperlogic_ai` |
| `DATABASE_URL` | SQLite fallback | `sqlite:///./hyperlogic_ai.db` |
MongoDB (Production):

```env
MONGODB_URL=mongodb://hyperlogic:password@127.0.0.1:27017/hyperlogic_ai?authSource=admin
```

SQLite (Development):

```env
DATABASE_URL=sqlite:///./hyperlogic_ai.db
```

Automatic Detection:
- If `MONGODB_URL` is set → uses MongoDB
- If not set → falls back to SQLite
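A minimal sketch of how this automatic detection could be implemented with `python-dotenv` and the variable names from the table above. This is illustrative only and not the project's actual startup code.

```python
import os
from dotenv import load_dotenv  # python-dotenv, listed in the backend stack

load_dotenv()  # read .env from the working directory

MONGODB_URL = os.getenv("MONGODB_URL")
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./hyperlogic_ai.db")

if MONGODB_URL:
    # MONGODB_URL is set -> use MongoDB
    backend = ("mongodb", MONGODB_URL)
else:
    # Not set -> fall back to the local SQLite database
    backend = ("sqlite", DATABASE_URL)

print(f"Using {backend[0]} at {backend[1]}")
```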
1. Install the provider SDK

   ```bash
   npm install provider-sdk
   ```

2. Add it to the LLMProvider service

   ```js
   // In server/services/llmProvider.js
   addProvider('new-provider', newProviderInstance);
   ```

3. Update the frontend provider list

   ```js
   // In web/src/components/Settings.js
   const providers = ['google', 'new-provider'];
   ```
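The snippets above reference a JavaScript `LLMProvider` service, while the documented backend file is `server/services/llm_provider.py`. Purely as an illustration of the registry-with-fallback idea, a Python version might look like the following; the class and method names are assumptions, not the project's actual code.

```python
from typing import Callable, Dict, List

class LLMProviderRegistry:
    """Illustrative registry: each provider is a callable that takes a prompt
    and returns a reply. Falls back to the next provider on failure."""

    def __init__(self) -> None:
        self._providers: Dict[str, Callable[[str], str]] = {}
        self._order: List[str] = []

    def add_provider(self, name: str, generate: Callable[[str], str]) -> None:
        self._providers[name] = generate
        self._order.append(name)

    def generate(self, prompt: str, preferred: str = "google") -> str:
        # Try the preferred provider first, then fall back in registration order.
        for name in [preferred] + [n for n in self._order if n != preferred]:
            provider = self._providers.get(name)
            if provider is None:
                continue
            try:
                return provider(prompt)
            except Exception:
                continue  # try the next provider
        raise RuntimeError("All providers failed")

# Usage with a dummy provider:
registry = LLMProviderRegistry()
registry.add_provider("google", lambda prompt: f"Gemini says: {prompt}")
print(registry.generate("Hello"))
```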
- Click "New Conversation" in the sidebar
- Type your message in the input field
- Press Enter or click Send
- Watch the AI respond in real-time
- Search: Use the search bar to find specific conversations
- Filter: Sort by date, title, or message count
- Export: Download conversations as JSON files
- Delete: Remove individual or multiple conversations
- AI Provider: Switch between different AI models
- Theme: Choose light, dark, or auto theme
- Font Size: Adjust text size for better readability
- Notifications: Enable/disable notifications
```
server/
├── main.py                     # FastAPI application
├── run.py                      # Application runner
├── requirements.txt            # Python dependencies
├── models/
│   └── schemas.py              # Pydantic models
└── services/
    ├── llm_provider.py         # Multi-provider LLM integration
    ├── conversation_manager.py # Conversation management
    ├── memory_manager.py       # Memory and context management
    └── websocket_manager.py    # WebSocket handling
```

```
web/src/
├── components/                 # React components
│   ├── ChatInterface.js        # Main chat interface
│   ├── MessageBubble.js        # Individual message component
│   ├── Sidebar.js              # Conversation sidebar
│   ├── Settings.js             # Settings panel
│   └── Header.js               # App header
├── context/
│   └── AppContext.js           # Global state management
├── services/
│   ├── socketService.js        # WebSocket communication
│   └── apiService.js           # HTTP API calls
└── App.js                      # Main app component
```
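As a rough sketch of the kind of logic `websocket_manager.py` is responsible for, here is the standard FastAPI connection-manager pattern. The route path and echo behaviour are assumptions for illustration, not the project's actual implementation.

```python
from typing import Dict
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

class ConnectionManager:
    """Tracks open WebSocket connections per conversation and relays messages."""

    def __init__(self) -> None:
        self.active: Dict[str, WebSocket] = {}

    async def connect(self, conversation_id: str, websocket: WebSocket) -> None:
        await websocket.accept()
        self.active[conversation_id] = websocket

    def disconnect(self, conversation_id: str) -> None:
        self.active.pop(conversation_id, None)

    async def send(self, conversation_id: str, message: str) -> None:
        ws = self.active.get(conversation_id)
        if ws is not None:
            await ws.send_text(message)

manager = ConnectionManager()

@app.websocket("/ws/{conversation_id}")
async def chat_socket(websocket: WebSocket, conversation_id: str):
    await manager.connect(conversation_id, websocket)
    try:
        while True:
            text = await websocket.receive_text()
            # A real handler would forward `text` to the LLM provider and
            # stream the response back; here we simply echo it.
            await manager.send(conversation_id, f"(echo) {text}")
    except WebSocketDisconnect:
        manager.disconnect(conversation_id)
```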
- Rate Limiting: Prevents API abuse
- CORS Protection: Secure cross-origin requests
- Helmet: Security headers
- Input Validation: Sanitized user inputs
- Error Handling: Graceful error management
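Since the backend stack lists SlowAPI for rate limiting alongside CORS protection, here is a minimal sketch of how the two are typically wired into a FastAPI app. The specific limit (`30/minute`), route, and allowed origin are assumptions, not the project's configured values.

```python
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

# Rate limiting keyed by client IP address.
limiter = Limiter(key_func=get_remote_address)

app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

# Restrict cross-origin requests to the frontend URL.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/api/health")
@limiter.limit("30/minute")   # requests per client IP
async def health(request: Request):
    return {"status": "ok"}
```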
```bash
# Start all services with localhost-only access
docker compose up -d

# Start specific services
docker compose up -d mongodb hyperlogic-ai

# View logs
docker compose logs -f

# Stop all services
docker compose down
```

```env
NODE_ENV=production
PORT=5000
GOOGLE_API_KEY=your_production_key
CORS_ORIGIN=https://yourdomain.com
```

The application is ready for deployment on:
- Vercel: Frontend deployment
- Railway: Full-stack deployment
- Heroku: Backend deployment
- AWS or GCP: Scalable cloud deployment
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
For support and questions:
- Create an issue on GitHub
- Check the documentation
- Review the troubleshooting guide
- Voice input/output support
- File upload and analysis
- Plugin system for custom providers
- Advanced conversation analytics
- Multi-language support
- Mobile app development
Built with ❤️ by the HyperLogic AI Team