AI Chatbot with DeepSeek R1 and Local RAG


A full-stack, Retrieval-Augmented Generation (RAG) powered chatbot application for campus information. This AI system answers campus-specific questions using a local Q&A dataset and the deepseek-r1:1.5b model via Ollama. It also works as a general-purpose chatbot for unrelated queries.


📑 Table of Contents

  • ✨ Features
  • 🏗️ Project Structure
  • ⚙️ Installation
  • 🚀 Running the App
  • 🔄 How RAG Works
  • 🛠️ Customization
  • 🧪 Testing
  • ❓ FAQ
  • 🤝 Contributing
  • 📄 License

✨ Features

  • 🎯 Accurate Campus Information: Uses campus_qa.json with RAG for precise answers about your campus
  • 🤖 Dual-Mode Chat: Switches between campus-specific and general knowledge
  • ⚡ Local & Private: Runs completely offline with Ollama models
  • 🎨 Modern Web Interface: Clean design with real-time chat updates
  • 🔄 RAG Integration: Smart context retrieval for accurate campus-related responses
  • 🛠️ Modular Architecture: Easy to customize and extend with utils/ helpers
  • 🧪 Test Suite: Includes test_chatbot.py for reliability

🏗️ Project Structure

AI-ChatBot/
├── app.py                   # Flask backend with RAG integration
├── requirements.txt         # Python dependencies
├── test_chatbot.py          # Test suite for chatbot functionality
├── campus_qa.json           # Q&A dataset for RAG
├── frontend/                # Frontend assets
│   ├── index.html           # Main UI template
│   ├── app.js               # Frontend logic and API calls
│   ├── style.css            # UI styling
│   └── brain.png            # Logo asset
└── utils/                   # Utility modules
    └── time_helper.py       # Time-related utilities
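
For orientation, the routing layer of app.py likely resembles the minimal sketch below. This is an illustration only, not the repo's actual code: the answer_question helper is a hypothetical stand-in for the RAG pipeline described later, and the real file may be organized differently.

# Minimal sketch of a Flask routing layer (illustrative; real app.py may differ)
from flask import Flask, request, jsonify

app = Flask(__name__, static_folder="frontend", static_url_path="")


def answer_question(message: str) -> str:
    # Placeholder: in the real app this would call the RAG pipeline via Ollama
    return f"You said: {message}"


@app.route("/health")
def health():
    # Liveness probe used by the curl example in the Testing section
    return jsonify({"status": "ok"})


@app.route("/chat/simple", methods=["POST"])
def chat_simple():
    # Expects {"message": "..."} and returns {"reply": "..."}
    message = request.get_json(force=True).get("message", "")
    return jsonify({"reply": answer_question(message)})


if __name__ == "__main__":
    app.run(port=5000, debug=True)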

⚙️ Installation

Install Ollama

Choose your platform:

📦 Windows
  1. Download from ollama.com/download
  2. Run the installer
  3. Start Ollama from Start Menu
🐧 Linux
curl -fsSL https://ollama.com/install.sh | sh
🍎 macOS
Download from ollama.com/download and move Ollama to Applications, or install with Homebrew:
brew install ollama

Pull AI Models

# Pull required models
ollama pull deepseek-r1:1.5b
ollama pull mxbai-embed-large

# Verify installation
ollama list

Install Python Dependencies

# Create virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # Linux/macOS
venv\Scripts\activate   # Windows

# Install dependencies
pip install -r requirements.txt

🚀 Running the App

  1. Start Ollama Server

    ollama serve
  2. Launch Backend

    python app.py
  3. Access the UI

    Open http://localhost:5000 in your browser (the default Flask port used by the backend).

🔄 How Retrieval-Augmented Generation (RAG) Works

  1. Question Analysis 🔍

    • Detects campus-specific queries
    • Routes general queries to standard chat
  2. Vector Search 📊

    • Embeds questions using mxbai-embed-large
    • Performs similarity search in campus Q&A
  3. Context Injection 🎯

    • Enriches prompts with relevant campus info
    • Ensures accurate, grounded responses
  4. Response Generation 💬

    • Uses the configured chat model (deepseek-r1:1.5b by default) for natural language
    • Maintains conversation context
    • A minimal Python sketch of how these steps fit together follows below
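
The sketch below is an illustrative outline of this pipeline, not the repo's exact code. It assumes the ollama Python package, a campus_qa.json shaped like the example in the Customization section, and hypothetical helper names (embed, cosine, retrieve, answer). Cosine similarity is computed by hand to avoid extra dependencies, and every stored question is re-embedded per query for simplicity; a real implementation would cache embeddings.

# Illustrative RAG retrieval sketch (not the repo's actual implementation)
import json
import math
import ollama

EMBED_MODEL = "mxbai-embed-large"
CHAT_MODEL = "deepseek-r1:1.5b"


def embed(text):
    # Embedding vector for a piece of text via Ollama
    return ollama.embeddings(model=EMBED_MODEL, prompt=text)["embedding"]


def cosine(a, b):
    # Plain cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def retrieve(question, qa_pairs, top_k=3):
    # Rank stored Q&A pairs by similarity to the incoming question
    # (re-embeds every stored question per call; cache these in practice)
    q_vec = embed(question)
    scored = sorted(qa_pairs, key=lambda p: cosine(q_vec, embed(p["q"])), reverse=True)
    return scored[:top_k]


def answer(question):
    with open("campus_qa.json", encoding="utf-8") as f:
        qa_pairs = json.load(f)["questions"]
    context = "\n".join(f"Q: {p['q']}\nA: {p['a']}" for p in retrieve(question, qa_pairs))
    prompt = f"Use this campus context to answer:\n{context}\n\nQuestion: {question}"
    reply = ollama.chat(model=CHAT_MODEL, messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"]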

🛠️ Customization

Adding Campus Knowledge

// campus_qa.json
{
  "questions": [
    {
      "q": "What are the campus timings?",
      "a": "Your custom answer here"
    }
  ]
}
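
Before restarting the app, it can help to sanity-check newly added entries. This optional snippet (illustrative, not part of the repo) assumes the file layout shown above:

# Quick sanity check that new campus_qa.json entries parse and have both keys
import json

with open("campus_qa.json", encoding="utf-8") as f:
    data = json.load(f)

for i, pair in enumerate(data["questions"]):
    # Each entry must provide both a question and an answer
    assert "q" in pair and "a" in pair, f"entry {i} is missing 'q' or 'a'"

print(f"{len(data['questions'])} Q&A pairs look well-formed")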

Model Configuration

  • Swap models in app.py:
CHAT_MODEL = "deepseek-r1:1.5b"  # Or any Ollama model
EMBED_MODEL = "mxbai-embed-large"

🧪 Testing

Run the test suite:

python test_chatbot.py

API Testing:

# Health check
curl http://localhost:5000/health

# Chat endpoint
curl -X POST http://localhost:5000/chat/simple \
     -H "Content-Type: application/json" \
     -d '{"message": "What are the campus timings?"}'
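
If you prefer Python over curl, the same checks can be made with the requests library (assuming requests is installed and the backend is running locally on port 5000):

# Python equivalent of the curl checks above
import requests

BASE = "http://localhost:5000"

# Health check
print(requests.get(f"{BASE}/health").json())

# Chat endpoint
resp = requests.post(f"{BASE}/chat/simple", json={"message": "What are the campus timings?"})
print(resp.json())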

❓ FAQ

Q: How is data privacy maintained?
A: All processing happens locally; no data leaves your system.

Q: Can I extend the campus knowledge?
A: Yes! Add Q&A pairs to campus_qa.json.

Q: What's the recommended hardware?
A: RAM: 8GB minimum, 16GB recommended. GPU: optional (CPU works fine). Storage: ~10GB for models.

🤝 Contributing

We welcome contributions! See our Contributing Guide for details.

  1. Fork the repository
  2. Create your feature branch
  3. Commit your changes
  4. Push to the branch
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Built with ❤️ using DeepSeek R1, Ollama, and Flask
