Transform your documents into an intelligent, conversational knowledge base
Features • Quick Start • Documentation • API • Contributing
Glassmorphism UI with gradient animations and typing effects
Document management with real-time vector indexing
- Redis Caching: 490x faster responses (14.7s → 0.03s)
- Dual Redis Architecture: local instance with cloud fallback
- Smart Cache: 24-hour TTL with auto-invalidation
- Optimized RAG: hybrid search with query expansion
- Hybrid Intelligence: auto-routes between the knowledge base and general AI
- Semantic Search: advanced retrieval with RRF ranking
- Multi-Query: query expansion for better accuracy
- OCR Support: processes scanned PDFs (Tesseract/EasyOCR)
- Glassmorphism: frosted-glass design elements
- Animated Gradients: flowing blue-cyan-purple colors
- Typing Effect: smooth "Mira" branding animation
- Smooth Transitions: hover effects and scale animations
- Responsive: works on desktop, tablet, and mobile
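Semantic search with RRF ranking, as listed above, merges the result lists produced by query expansion. A minimal sketch of Reciprocal Rank Fusion (illustrative, not Mira's actual code; `k=60` is just the commonly used default constant):

```python
def rrf_merge(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1/(k + rank(d)).
    Documents ranked well in several lists float to the top."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest combined score first
    return sorted(scores, key=scores.get, reverse=True)

# Two expanded queries return overlapping rankings; "a" tops both lists
# and wins, and "c" (present in both) beats the single-list documents.
merged = rrf_merge([["a", "b", "c"], ["a", "c", "d"]])  # ['a', 'c', 'b', 'd']
```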
| Component | Version | Check Command |
|---|---|---|
| Python | 3.10+ | `python --version` or `python3 --version` |
| Node.js | 18+ | `node --version` |
| npm | 9+ | `npm --version` |
| Redis | 7.0+ | `redis-server --version` or `redis-cli --version` |
| Git | Any | `git --version` |
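The checks in the table can also be scripted; a small sketch (the version parsing is naive, and the minimum versions simply mirror the table above):

```python
import re
import subprocess
import sys

# Minimums from the prerequisites table above
MINIMUMS = {"node": (18,), "npm": (9,), "redis-server": (7, 0)}

def tool_version(cmd):
    """Parse the first x.y[.z] number printed by `cmd --version`;
    return None if the tool is not on PATH."""
    try:
        proc = subprocess.run([cmd, "--version"], capture_output=True, text=True)
    except (FileNotFoundError, OSError):
        return None
    match = re.search(r"(\d+)\.(\d+)(?:\.(\d+))?", proc.stdout + proc.stderr)
    return tuple(int(g) for g in match.groups() if g) if match else None

# Python itself can be checked without a subprocess
python_ok = sys.version_info >= (3, 10)
```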
Windows
```powershell
# Install Chocolatey (if not installed)
# Run PowerShell as Administrator
Set-ExecutionPolicy Bypass -Scope Process -Force
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072
iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))

# Install Python
choco install python --version=3.10.0 -y

# Install Node.js
choco install nodejs-lts -y

# Install Redis (Memurai is a Redis-compatible server for Windows)
choco install memurai-developer -y
# OR download Redis for Windows from:
# https://github.com/microsoftarchive/redis/releases
# Extract and run redis-server.exe

# Install Tesseract OCR (optional, for scanned PDFs)
choco install tesseract -y

# Restart PowerShell and verify
python --version
node --version
npm --version
redis-server --version
```

Alternative: Manual Installation
- Python: Download from python.org
- Node.js: Download from nodejs.org
- Redis: Download Memurai or Redis for Windows
- Tesseract: Download from UB-Mannheim
Linux (Ubuntu/Debian)
```bash
# Update package list
sudo apt update

# Install Python 3.10+
sudo apt install python3 python3-pip python3-venv -y

# Install Node.js 18+
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt install nodejs -y

# Install Redis
sudo apt install redis-server -y
sudo systemctl enable redis-server
sudo systemctl start redis-server

# Install Tesseract OCR (optional)
sudo apt install tesseract-ocr -y

# Verify installations
python3 --version
node --version
npm --version
redis-server --version
```

macOS
```bash
# Install Homebrew (if not installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install Python
brew install python@3.10

# Install Node.js
brew install node@18

# Install Redis
brew install redis
brew services start redis

# Install Tesseract OCR (optional)
brew install tesseract

# Verify installations
python3 --version
node --version
npm --version
redis-server --version
```

```bash
git clone https://github.com/gaurav-163/Mira.git
cd Mira
```

Linux/macOS:
```bash
cp .env.example .env
nano .env  # or vim, code, etc.
```

Windows (PowerShell):

```powershell
copy .env.example .env
notepad .env  # or code .env
```

Edit .env with your API keys:
```bash
# Choose your LLM provider
LLM_PROVIDER=cohere  # Options: cohere, groq, openai

# Get free API keys from:
# Cohere: https://dashboard.cohere.com/api-keys (RECOMMENDED - free tier)
# Groq: https://console.groq.com/keys (fast inference)
# OpenAI: https://platform.openai.com/api-keys (paid)
COHERE_API_KEY=your-cohere-key-here
GROQ_API_KEY=your-groq-key-here
OPENAI_API_KEY=your-openai-key-here

# Redis configuration (defaults work for local Redis)
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
# REDIS_PASSWORD=  # Optional

# Optional: Self-reflection (improves quality, adds 2-3s per query)
ENABLE_REFLECTION=false
```

Linux/macOS:
```bash
# Create virtual environment (recommended)
python3 -m venv .venv
source .venv/bin/activate

# Install Python dependencies
pip install -r requirements.txt

# Install frontend dependencies
cd frontend
npm install
cd ..
```

Windows (PowerShell):
```powershell
# Create virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1
# If you get an execution policy error, run:
# Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

# Install Python dependencies
pip install -r requirements.txt

# Install frontend dependencies
cd frontend
npm install
cd ..
```

Windows (Command Prompt):
```cmd
# Create virtual environment
python -m venv .venv
.venv\Scripts\activate.bat

# Install Python dependencies
pip install -r requirements.txt

# Install frontend dependencies
cd frontend
npm install
cd ..
```

Linux/macOS:
```bash
# Create knowledge base directory
mkdir -p data/knowledge_base

# Copy your PDF files
cp ~/Documents/*.pdf data/knowledge_base/
```

Windows (PowerShell):
```powershell
# Create knowledge base directory
New-Item -ItemType Directory -Force -Path data\knowledge_base

# Copy your PDF files
Copy-Item "C:\Users\YourName\Documents\*.pdf" -Destination "data\knowledge_base\"
```

Linux:
```bash
# Check if Redis is running
redis-cli ping
# Should return: PONG

# If not running, start it
sudo systemctl start redis

# Enable auto-start on boot
sudo systemctl enable redis
```

macOS:
```bash
# Check if Redis is running
redis-cli ping

# If not running
brew services start redis

# Or run in the foreground
redis-server
```

Windows:
```powershell
# If using Memurai
net start Memurai

# OR if using Redis for Windows,
# navigate to the Redis directory and run:
redis-server.exe

# In another terminal, verify
redis-cli.exe ping
# Should return: PONG
```

Terminal 1 - Backend:
```bash
chmod +x start.sh  # First time only
./start.sh
```

Terminal 2 - Frontend:

```bash
cd frontend
chmod +x start-frontend.sh  # First time only
./start-frontend.sh
```

Terminal 1 - Backend:
Linux/macOS:

```bash
source .venv/bin/activate
uvicorn api:app --host 0.0.0.0 --port 8000 --reload
```

Windows (PowerShell):

```powershell
.\.venv\Scripts\Activate.ps1
uvicorn api:app --host 0.0.0.0 --port 8000 --reload
```

Windows (Command Prompt):

```cmd
.venv\Scripts\activate.bat
uvicorn api:app --host 0.0.0.0 --port 8000 --reload
```

Terminal 2 - Frontend:
All Platforms:

```bash
cd frontend
npm run dev
```

Terminal 1 - Backend:

```bash
# Activate the virtual environment first
python api.py
# OR
python3 api.py
```

Terminal 2 - Frontend:

```bash
cd frontend
npm install
npm run dev
```

Once both servers are running, open your browser:
| Service | URL | Description |
|---|---|---|
| Frontend | http://localhost:3000 | Main user interface |
| Backend API | http://localhost:8000 | REST API server |
| API Docs | http://localhost:8000/docs | Interactive Swagger docs |
| Cache Stats | http://localhost:8000/api/cache/stats | Redis cache statistics |
- Open http://localhost:3000 in your browser
- Wait for green "Online" indicator in header (10-15 seconds)
- Start asking questions!
- "What is data warehousing?"
- "Explain the main concepts in chapter 3"
- "Summarize the key findings"
- "What does the document say about machine learning?"
- "What is Python?"
- "Explain quantum computing"
- "How does Redis caching work?"
- "What's the difference between SQL and NoSQL?"
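A question is answered from the knowledge base when retrieval finds sufficiently similar chunks, and falls through to the general LLM otherwise. A toy sketch of that routing decision (illustrative; `SIMILARITY_THRESHOLD` appears in config.py further down, and the retriever here is a stub):

```python
def route(question, retrieve, threshold=0.3):
    """Send the question to the knowledge base when the best retrieved
    chunk clears the similarity threshold, else to the general LLM."""
    hits = retrieve(question)  # list of (chunk, similarity) pairs
    if hits and max(score for _, score in hits) >= threshold:
        return "knowledge_base"
    return "general_llm"

# Stub retriever: pretend only data-warehousing content is indexed
def stub_retrieve(q):
    if "warehous" in q.lower():
        return [("DW chapter 1", 0.82)]
    return [("unrelated chunk", 0.08)]

route("What is data warehousing?", stub_retrieve)  # -> "knowledge_base"
route("Explain quantum computing", stub_retrieve)  # -> "general_llm"
```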
First Time (Cache MISS):
- Question: "What is data warehousing?"
- Response Time: ~14.7 seconds
- Cached: No

Second Time (Cache HIT):
- Same Question: "What is data warehousing?"
- Response Time: ~0.03 seconds
- Cached: Yes
- 490x faster!
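Mira stores answers in Redis; the mechanics can be sketched in memory with key normalization plus a TTL (Redis would handle expiry itself via an expiring SET; all names here are illustrative, not Mira's actual code):

```python
import hashlib
import time

class QACache:
    """Toy QA cache: one key per normalized question, entries expire
    after `ttl` seconds (86400 = the 24-hour TTL mentioned above)."""

    def __init__(self, ttl=86400):
        self.ttl = ttl
        self.store = {}

    def _key(self, question):
        # Normalize so trivially different phrasings share one entry
        normalized = " ".join(question.lower().split())
        return "mira:qa:" + hashlib.sha256(normalized.encode()).hexdigest()[:16]

    def get(self, question):
        entry = self.store.get(self._key(question))
        if entry and time.monotonic() < entry[1]:
            return entry[0]   # cache HIT: the ~0.03 s path
        return None           # cache MISS: the full RAG pipeline runs

    def set(self, question, answer):
        self.store[self._key(question)] = (answer, time.monotonic() + self.ttl)

cache = QACache()
assert cache.get("What is data warehousing?") is None        # MISS
cache.set("What is data warehousing?", "A central repository ...")
assert cache.get(" what IS data warehousing? ") is not None  # HIT
```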
Linux/macOS:

```bash
# Use the stop script
./stop.sh
# OR press Ctrl+C in both terminals
```

Windows (PowerShell/CMD):

```powershell
# Press Ctrl+C in both terminals
# OR manually kill the processes

# Find processes
Get-Process python,node

# Kill specific processes
Stop-Process -Name "python" -Force
Stop-Process -Name "node" -Force
```

Stop Redis:

Linux:

```bash
sudo systemctl stop redis
```

macOS:

```bash
brew services stop redis
```

Windows:

```powershell
# If using Memurai
net stop Memurai
# OR close the redis-server window
```

```bash
curl -X POST http://localhost:8000/api/initialize
```

Response:
```json
{
  "status": "initialized",
  "stats": {
    "documents": 1819,
    "pdfs": 3,
    "cache_enabled": true
  }
}
```

```bash
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is a data warehouse?"}'
```

```bash
curl http://localhost:8000/api/cache/stats
```

Response:
```json
{
  "cache_enabled": true,
  "redis_host": "127.0.0.1:6379",
  "total_cached": 45,
  "cache_hits": 89,
  "cache_misses": 12,
  "hit_rate": "88.12%"
}
```

```bash
curl -X POST http://localhost:8000/api/cache/clear
curl -X POST http://localhost:8000/api/clear
```

Linux/macOS:
```bash
# Find the process using port 8000
lsof -i :8000

# Kill it
kill -9 <PID>

# For port 3000
lsof -i :3000
kill -9 <PID>
```

Windows (PowerShell):
```powershell
# Find the process using port 8000
netstat -ano | findstr :8000

# Kill it
taskkill /PID <PID> /F

# For port 3000
netstat -ano | findstr :3000
taskkill /PID <PID> /F
```

Linux:
```bash
sudo systemctl status redis
sudo systemctl start redis
```

macOS:

```bash
brew services list
brew services start redis
```

Windows:

```powershell
# Check if Memurai is running
Get-Service Memurai

# Start it
net start Memurai
# OR launch redis-server.exe manually
```

Windows - Add to PATH:
- Search "Environment Variables" in Windows search
- Click "Environment Variables"
- Edit "Path" in System Variables
- Add Python and Node.js installation directories
- Restart terminal
Typical paths:
- Python: `C:\Python310\` and `C:\Python310\Scripts\`
- Node.js: `C:\Program Files\nodejs\`
```powershell
# If you get an execution policy error
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

# Then retry activation
.\.venv\Scripts\Activate.ps1
```

```bash
# Make sure the virtual environment is activated, then reinstall
pip install --upgrade -r requirements.txt
```

```bash
cd frontend

# Clean install
rm -rf node_modules package-lock.json  # Linux/macOS
# OR (Windows PowerShell):
# Remove-Item -Recurse -Force node_modules, package-lock.json

# Reinstall
npm install --legacy-peer-deps
```

Linux/macOS:
```bash
tail -f api.log
tail -f api.log | grep -i cache
tail -f api.log | grep -E "ERROR|Cache"
```

Windows (PowerShell):

```powershell
Get-Content api.log -Wait -Tail 50
Get-Content api.log -Wait | Select-String "cache"
```

All Platforms:
```bash
# Connect to the Redis CLI
redis-cli

# View all Mira keys
> KEYS mira:qa:*

# Get database size
> DBSIZE

# Monitor commands in real time
> MONITOR

# View statistics
> INFO stats

# View memory usage
> INFO memory
```

Edit config.py:
```python
# Vector search
CHUNK_SIZE = 500          # Smaller = faster
CHUNK_OVERLAP = 50
TOP_K_RESULTS = 5
SIMILARITY_THRESHOLD = 0.3

# Redis cache
CACHE_TTL = 86400         # 24 hours
CACHE_MAX_SIZE = 10000

# LLM settings
TEMPERATURE = 0.1
MAX_TOKENS = 512
```

```bash
# Edit .env
LLM_PROVIDER=groq  # Change to groq, openai, or cohere
```

Then restart the backend.
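To see what `CHUNK_SIZE` and `CHUNK_OVERLAP` control, here is a character-level sliding-window chunker (a simplification for illustration; real splitters usually respect sentence or token boundaries):

```python
def chunk(text, size=500, overlap=50):
    """Cut text into windows of `size` characters; each window starts
    `size - overlap` characters after the previous one, so adjacent
    chunks share `overlap` characters of context."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

pieces = chunk("x" * 1200)
# Windows start at offsets 0, 450, 900 -> 3 chunks,
# and neighbouring chunks share 50 characters.
```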
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
MIT License - see LICENSE file for details.
- Cohere - Powerful LLM API
- Redis - Lightning-fast caching
- LangChain - RAG framework
- ChromaDB - Vector database
- Next.js - React framework
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: see the `Docs` folder
Made with ❤️ by Gaurav
Star this repo if you find it helpful!