SBCSP/chatbox
🤖 Chatbox - AI Assistant

A modern chatbot application built with LlamaIndex, FastAPI, MongoDB, and OpenAI. Features a beautiful web interface with real-time streaming chat capabilities.

✨ Features

  • 🚀 Fast & Modern: Built with FastAPI for high performance
  • 🧠 AI-Powered: Uses OpenAI GPT models via LlamaIndex
  • 💬 Real-time Chat: Streaming responses for instant interaction
  • 📚 Memory: Persistent chat history with MongoDB
  • 🎨 Beautiful UI: Modern, responsive web interface
  • 🐳 Docker Ready: Easy deployment with Docker Compose
  • 🗂️ Session Management: Multiple concurrent chat sessions
  • 🔍 Health Monitoring: Built-in health checks and status monitoring

🛠️ Tech Stack

  • Backend: FastAPI, Python 3.11
  • AI/LLM: LlamaIndex, OpenAI GPT-4o-mini
  • Database: MongoDB
  • Frontend: HTML5, CSS3, JavaScript (Vanilla)
  • Deployment: Docker, Docker Compose

🚀 Quick Start

Prerequisites

  • Docker and Docker Compose installed
  • OpenAI API key

Setup

  1. Clone and navigate to the project:

    git clone https://github.com/SBCSP/chatbox.git
    cd chatbox
  2. Set up environment variables:

    cp .env.example .env
    # Edit .env and add your OpenAI API key
  3. Start the application:

    docker-compose up -d
  4. Access the application in your browser (by default at http://localhost:8000).

Environment Variables

Create a .env file with:

OPENAI_API_KEY=your_openai_api_key_here

📡 API Endpoints

Chat Endpoints

  • POST /api/chat - Send a message (non-streaming)
  • POST /api/chat/stream - Send a message (streaming)
  • GET /api/sessions/{session_id} - Get chat history
  • DELETE /api/sessions/{session_id} - Delete chat session
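
As a quick smoke test, the non-streaming chat endpoint can be called from Python using only the standard library. Note the request field names (`message`, `session_id`) are assumptions here; check the auto-generated `/docs` for the actual schema.

```python
import json
import urllib.request
from typing import Optional

BASE_URL = "http://localhost:8000"  # assumed default; matches the dev server port

def build_chat_payload(message: str, session_id: Optional[str] = None) -> dict:
    """Build the JSON body for POST /api/chat.

    The field names are assumptions; consult /docs for the real schema.
    """
    payload = {"message": message}
    if session_id is not None:
        payload["session_id"] = session_id
    return payload

def send_message(message: str, session_id: Optional[str] = None) -> dict:
    """POST a message to the non-streaming endpoint and return the JSON reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/chat",
        data=json.dumps(build_chat_payload(message, session_id)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Requires the server to be running:
# reply = send_message("Hello!", session_id="demo-session")
```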

Utility Endpoints

  • GET / - Web interface
  • GET /health - Health check
  • GET /docs - API documentation
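
For scripted monitoring, the health endpoint can be polled and its JSON inspected. This sketch checks the response shape shown under Monitoring; adjust the field checks if your `main.py` reports different keys.

```python
import json
import urllib.request

def is_healthy(payload: dict) -> bool:
    """Return True if a /health response reports all dependencies connected."""
    return (
        payload.get("status") == "healthy"
        and payload.get("mongodb") == "connected"
        and payload.get("openai") == "connected"
    )

def check_health(base_url: str = "http://localhost:8000") -> bool:
    """Fetch /health from a running instance and evaluate it."""
    with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
        return is_healthy(json.load(resp))

# Requires the server to be running:
# print(check_health())
```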

🐳 Docker Commands

# Start all services
docker-compose up -d

# View logs
docker-compose logs -f chatbox

# Stop all services
docker-compose down

# Rebuild and restart
docker-compose up --build -d

# Remove everything (including data)
docker-compose down -v

🔧 Development

Local Development Setup

  1. Install Python dependencies:

    pip install -r requirements.txt
  2. Start MongoDB:

    docker-compose up -d mongodb
  3. Set environment variables:

    export OPENAI_API_KEY="your_key_here"
    export MONGODB_URL="mongodb://admin:password123@localhost:27017/chatbox?authSource=admin"
  4. Run the application:

    uvicorn main:app --reload --host 0.0.0.0 --port 8000
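
A small helper for step 3 shows how the app might read these settings, failing fast when the API key is missing. The default connection string mirrors the value above; the function name and structure are illustrative, not necessarily what `main.py` does.

```python
import os

# Default mirrors the MONGODB_URL used in the dev setup above.
DEFAULT_MONGODB_URL = (
    "mongodb://admin:password123@localhost:27017/chatbox?authSource=admin"
)

def load_config() -> dict:
    """Read required settings from the environment, with a sane Mongo default."""
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return {
        "openai_api_key": api_key,
        "mongodb_url": os.environ.get("MONGODB_URL", DEFAULT_MONGODB_URL),
    }
```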

Project Structure

chatbox/
├── main.py                 # FastAPI application
├── requirements.txt        # Python dependencies
├── Dockerfile             # Container configuration
├── docker-compose.yml     # Multi-service setup
├── init-mongo.js          # MongoDB initialization
├── .env.example           # Environment template
├── templates/
│   └── index.html         # Web interface
└── README.md             # This file

🔍 Monitoring

Health Check

Visit /health to check service status:

{
  "status": "healthy",
  "mongodb": "connected",
  "openai": "connected",
  "version": "1.0.0"
}

MongoDB Administration

Access MongoDB Express at http://localhost:8081:

  • Username: admin
  • Password: admin123

🎨 Features Overview

Web Interface

  • Modern Design: Beautiful gradient UI with smooth animations
  • Responsive: Works on desktop and mobile devices
  • Real-time: Streaming chat with typing indicators
  • Session Persistence: Chat history saved automatically
  • Status Indicators: Connection status and health monitoring

Backend Features

  • Streaming Chat: Real-time response streaming
  • Session Management: Unique session IDs for each conversation
  • Error Handling: Comprehensive error handling and logging
  • Health Monitoring: Built-in health checks
  • API Documentation: Auto-generated OpenAPI docs
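
The exact wire format of /api/chat/stream depends on `main.py`. Assuming an SSE-style stream where each line carries a `data:` prefix and a `[DONE]` sentinel marks the end, a client could extract text chunks like this:

```python
from typing import Optional

def parse_stream_line(line: str) -> Optional[str]:
    """Extract the text chunk from one line of a streaming response.

    Assumes SSE-style "data: <chunk>" lines with a "[DONE]" terminator;
    check main.py for the actual format used by /api/chat/stream.
    """
    line = line.strip()
    if not line.startswith("data:"):
        return None  # blank keep-alive lines or unknown fields
    chunk = line[len("data:"):].strip()
    return None if chunk == "[DONE]" else chunk
```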

Database Features

  • Persistent Storage: Chat history stored in MongoDB
  • Indexing: Optimized queries with proper indexing
  • Session Tracking: Efficient session management
  • Scalable: Ready for horizontal scaling

🛡️ Security Considerations

  • Environment variables for sensitive data
  • Input validation and sanitization
  • Rate limiting not enabled by default (straightforward to add)
  • CORS configuration available
  • Database authentication enabled

🚀 Production Deployment

For production deployment:

  1. Use environment-specific configurations
  2. Set up proper logging and monitoring
  3. Configure SSL/TLS certificates
  4. Set up database backups
  5. Use production-grade web server (nginx)
  6. Implement rate limiting
  7. Set up health checks and alerts
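
For step 6, a minimal sliding-window limiter sketch is shown below. It is in-memory and per-process only, so it suits a single instance; a multi-instance deployment would need a shared store such as Redis. The class and parameter names are illustrative.

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds for each client key."""

    def __init__(self, limit: int = 20, window: float = 60.0):
        self.limit = limit
        self.window = window
        self._hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key, now=None) -> bool:
        """Record a request for `key`; return False if it exceeds the limit."""
        now = time.monotonic() if now is None else now
        hits = self._hits[key]
        # Drop timestamps that have aged out of the window.
        while hits and now - hits[0] >= self.window:
            hits.popleft()
        if len(hits) >= self.limit:
            return False
        hits.append(now)
        return True
```

In a FastAPI app this would typically be wired up as middleware or a dependency keyed on the client IP or session ID.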

📄 License

This project is open source and available under the MIT License.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📞 Support

If you have any questions or issues, please open an issue on the repository.


Built with ❤️ using LlamaIndex, FastAPI, and OpenAI
