LocalMind is a production-grade, open-source AI platform designed to bridge the gap between private local LLMs and powerful cloud intelligence.
✨ Features • 🚀 Quick Start • 🏗️ Architecture • 🧩 API Docs • 🤝 Contributing
LocalMind empowers you to move away from restrictive, subscription-based AI models. Run LLaMA, Mistral, or Gemini through a single, unified interface while keeping your data 100% private.
- 🔥 Overview
- ✨ Features
- 🚀 Quick Start
- 📦 Installation Guide
- ⚙️ Configuration
- 📁 Project Structure
- 🧩 API Documentation
- 💡 Usage Examples
- 🛠️ Tech Stack
- 🔧 Troubleshooting
- 🗺️ Roadmap
- 🤝 Contributing
- 📄 License
- 🙏 Acknowledgments
- 👤 Author
LocalMind is a free, open-source, self-hosted AI platform designed for students, developers, researchers, and creators who demand powerful AI capabilities without the constraints of subscriptions, usage limits, or privacy compromises.
Traditional AI platforms lock you in with:
- 💸 Monthly subscription fees
- 🚫 Message and usage limits
- 🕵️ Privacy concerns with data collection
- ☁️ Dependency on cloud services
- 🔒 Vendor lock-in
LocalMind sets you free with:
- ✅ 100% Free & Open Source – No hidden costs, ever
- ✅ Unlimited Usage – No message caps or rate limits
- ✅ Full Privacy – Your data never leaves your machine
- ✅ Hybrid Architecture – Mix local and cloud models seamlessly
- ✅ Custom Training – Teach AI with your own datasets
- ✅ Global Sharing – Expose your AI to the world instantly
- ✅ Developer-Friendly – RESTful API for easy integration
- 🎓 Students learning AI and machine learning
- 👨‍💻 Developers building AI-powered applications
- 🔬 Researchers conducting experiments with LLMs
- 🚀 Startups needing custom AI solutions without enterprise costs
- 🏢 Organizations requiring private AI infrastructure
- 🎨 Creators experimenting with AI-assisted content generation
LocalMind provides a unified interface to interact with both local and cloud-based AI models:
Run powerful open-source models completely offline:
| Model Family | Description | Use Cases |
|---|---|---|
| LLaMA | Meta's flagship open model | General chat, reasoning, coding |
| Mistral | High-performance 7B model | Fast responses, efficiency |
| Phi | Microsoft's compact model | Edge devices, quick tasks |
| Gemma | Google's open model | Balanced performance |
| Custom Models | Any Ollama-compatible model | Specialized tasks |
Integrate premium AI services when needed:
- Google Gemini – Advanced reasoning and multimodal
- OpenAI GPT – Industry-leading language models
- Groq – Ultra-fast inference speeds
- RouterAI – Intelligent model routing
- Coming Soon: Anthropic Claude, Cohere, AI21 Labs

Switch between models instantly – no code changes required!
Transform LocalMind into your personal AI expert using Retrieval-Augmented Generation (RAG):
- 📊 Excel Files (.xlsx, .xls) – Import spreadsheets directly
- 📄 CSV Files – Parse comma-separated datasets
- ❓ Q&A Datasets – Upload question-answer pairs for fine-tuning
- 🔜 Coming Soon: PDF, TXT, JSON, and more
- Upload – Add your documents through the UI
- Processing – Automatic text extraction and chunking
- Vectorization – Converts data to embeddings
- Storage – Creates a private vector database
- Querying – AI retrieves relevant context for responses
- 🤖 Build a chatbot trained on your company's documentation
- 📚 Create a study assistant with your course materials
- 🔬 Analyze research papers and datasets
- 💼 Build internal knowledge bases
- 📊 Query business data using natural language

Your data stays 100% local – no cloud uploads, no external storage.
Share your LocalMind instance with anyone, anywhere:
| Method | Speed | Custom Domain | Security |
|---|---|---|---|
| LocalTunnel | Fast | ❌ | Basic |
| Ngrok | Fast | ✅ Pro | Advanced |
| Cloudflared | Fast | ❌ Random | Advanced |
- 🚀 Instant Deployment – No server setup required
- 🔗 Shareable URLs – Send links to teammates or clients
- 🎯 Perfect for Demos – Showcase your AI projects
- 👥 Collaborative Testing – Get feedback from users
- 📱 Access Anywhere – Use your AI from any device
- 🔑 API key authentication
- 🚦 Rate limiting
- 🔒 HTTPS encryption
- 📊 Usage monitoring
Your data is yours – always.
- 🏠 Local Processing – RAG data never leaves your machine
- 🔐 Encrypted Storage – API keys stored securely
- 🚫 No Telemetry – Zero analytics or tracking
- 👁️ Open Source – Audit every line of code
- 🔓 No Vendor Lock-In – Export data anytime
- 🛡️ JWT-based authentication
- 🔑 Bcrypt password hashing
- 🌐 CORS protection
- 🚦 Rate limiting
- ✅ Request validation
- 🧱 SQL injection prevention
Get LocalMind running in under 5 minutes:
# Clone the repository
git clone https://github.com/NexGenStudioDev/LocalMind.git
cd LocalMind
# Install dependencies
cd LocalMind-Backend && npm install
cd ../LocalMind-Frontend && npm install
# Start the backend
cd LocalMind-Backend && npm run dev
# Start the frontend (in a new terminal)
cd LocalMind-Frontend && npm run dev
# Open http://localhost:5173

That's it! You're ready to chat with AI. 🎉
For detailed setup instructions, see the Installation Guide below.
Ensure you have the following installed:
| Software | Version | Download |
|---|---|---|
| Node.js | 18.x or higher | nodejs.org |
| npm | 9.x or higher | Included with Node.js |
| Git | Latest | git-scm.com |
| Ollama (optional) | Latest | ollama.ai |
node --version # Should show v18.x.x or higher
npm --version # Should show 9.x.x or higher
git --version # Should show git version 2.x.x

# Navigate to server directory
cd LocalMind-Backend
# Install dependencies
npm install
# Create environment file
cp .env.example .env
# Edit .env with your preferred editor
nano .env
# Start development server
npm run dev

The backend will be available at http://localhost:3000
npm run dev # Start development server with hot reload
npm run build # Compile TypeScript to JavaScript
npm run start # Run production build
npm run lint # Check code quality with ESLint
npm run lint:fix # Fix ESLint errors automatically
npm run format # Format code with Prettier
npm run format:check # Check code formatting
npm run type-check # Check TypeScript types without building
npm run test # Run test suite

# Navigate to client directory
cd LocalMind-Frontend
# Install dependencies
npm install
# Start development server
npm run dev

The frontend will be available at http://localhost:5173
npm run dev # Start Vite dev server
npm run build # Build for production
npm run preview # Preview production build
npm run lint # Check code quality with ESLint
npm run lint:fix # Fix ESLint errors automatically
npm run format # Format code with Prettier
npm run format:check # Check code formatting
npm run type-check # Check TypeScript types without building

Run LocalMind with Docker for simplified deployment and consistent environments.
- Docker (v20.10 or higher) - Install Docker
- Docker Compose (v2.0 or higher) - Usually included with Docker Desktop
Verify installation:
docker --version
docker compose version

1. Configure environment variables:

   cp env.example .env
   # Edit .env with your preferred editor
   nano .env

   Required variables:
   - LOCALMIND_SECRET – Generate with: openssl rand -base64 32
   - JWT_SECRET – Same as LOCALMIND_SECRET or generate separately
   - Your_Name, YOUR_EMAIL, YOUR_PASSWORD – Admin credentials
   - DB_CONNECTION_STRING – MongoDB connection string
   - API keys for cloud providers (optional)

2. Build and start the application:

   # Build and run (combined backend + frontend)
   docker compose up -d

   # View logs
   docker compose logs -f localmind

   # Check container status
   docker compose ps

3. Access the application:
   - Frontend & API: http://localhost:3000
   - API endpoints: http://localhost:3000/api/v1
For independent scaling of backend and frontend:
# Use separate services configuration
docker compose -f docker-compose.separate.yml up -d
# Access:
# - Frontend: http://localhost:80
# - Backend API: http://localhost:3000

# Build the image
docker build -t localmind:latest .
# Run container manually
docker run -d \
--name localmind-app \
-p 3000:3000 \
--env-file .env \
-v localmind-uploads:/app/uploads \
-v localmind-data:/app/data \
localmind:latest
# Stop services
docker compose down
# Stop and remove volumes (β οΈ deletes data)
docker compose down -v
# Rebuild after code changes
docker compose up -d --build
# View logs
docker compose logs -f
# Execute commands in container
docker compose exec localmind sh

- ✅ Multi-stage builds – Optimized image size (~300MB)
- ✅ Non-root user – Enhanced security
- ✅ Health checks – Automatic container monitoring
- ✅ Volume persistence – Data survives container restarts
- ✅ Environment variables – Easy configuration
- ✅ Resource limits – Prevent resource exhaustion
Container won't start:
# Check logs
docker compose logs localmind
# Verify environment variables
docker compose exec localmind env

Port already in use:
# Change port in docker-compose.yml
ports:
- '8080:3000' # Access via localhost:8080

Permission errors:
# Fix volume permissions
docker compose exec localmind chown -R localmind:localmind /app/uploads /app/data

For more Docker details, see the Docker Deployment Guide section below.
Create a .env file in the server directory:
# Server Configuration
PORT=3000
NODE_ENV=development
ENVIRONMENT=development
# Database
DATABASE_URL=postgresql://user:password@localhost:5432/localmind
MONGO_URI=mongodb://localhost:27017/localmind
# Authentication
LOCALMIND_SECRET=your-super-secret-jwt-key-change-this
JWT_EXPIRATION=7d
REFRESH_TOKEN_EXPIRATION=30d
# AI Configuration
DEFAULT_MODEL=gemini-pro
OLLAMA_HOST=http://localhost:11434
# Cloud AI Provider Keys
GEMINI_API_KEY=your-gemini-api-key-here
OPENAI_API_KEY=your-openai-api-key-here
GROQ_API_KEY=your-groq-api-key-here
ROUTERAI_API_KEY=your-routerai-api-key-here
# RAG Configuration
VECTOR_DB_PATH=./data/vectordb
MAX_FILE_SIZE=50MB
SUPPORTED_FORMATS=.xlsx,.csv,.xls
# Tunnel Configuration
LOCALTUNNEL_SUBDOMAIN=my-localmind
NGROK_AUTHTOKEN=your-ngrok-token-here
# Security
CORS_ORIGIN=http://localhost:5173
RATE_LIMIT_WINDOW=15m
RATE_LIMIT_MAX=100
# Logging
LOG_LEVEL=info
LOG_FILE=./logs/app.log
⚠️ Security Warning: Never commit `.env` files to version control. Add `.env` to your `.gitignore`.
Create a .env file in the client directory:
VITE_API_URL=http://localhost:3000
VITE_APP_NAME=LocalMind
VITE_ENABLE_ANALYTICS=false

LocalMind uses ESLint and Prettier to maintain consistent code style and catch errors early.
1. Install dependencies (if not already installed):

   # Root directory
   pnpm install

   # Backend
   cd LocalMind-Backend && pnpm install

   # Frontend
   cd LocalMind-Frontend && pnpm install

2. Install Husky (for pre-commit hooks):

   # From root directory
   pnpm install
   pnpm prepare
cd LocalMind-Backend
pnpm lint # Check for linting errors
pnpm lint:fix # Automatically fix linting errors
pnpm format # Format code with Prettier
pnpm format:check # Check code formatting without changing files
pnpm type-check # Check TypeScript types

cd LocalMind-Frontend
pnpm lint # Check for linting errors
pnpm lint:fix # Automatically fix linting errors
pnpm format # Format code with Prettier
pnpm format:check # Check code formatting without changing files
pnpm type-check # Check TypeScript types

# From project root
pnpm lint # Lint both backend and frontend
pnpm lint:fix # Fix linting errors in both
pnpm format # Format both backend and frontend
pnpm format:check # Check formatting in both

Husky automatically runs linting and formatting on staged files before each commit:

- ✅ Automatically formats code with Prettier
- ✅ Fixes ESLint errors when possible
- ✅ Prevents commits with linting errors
To bypass hooks (not recommended):
git commit --no-verify

1. Install recommended extensions
2. Settings are already configured in `.vscode/settings.json`:
   - Format on save enabled
   - ESLint auto-fix on save enabled
   - Prettier as default formatter
3. Reload VS Code after installing extensions

- `.prettierrc` – Shared Prettier configuration
- `.prettierignore` – Files to ignore when formatting
- `LocalMind-Backend/eslint.config.js` – Backend ESLint config
- `LocalMind-Frontend/eslint.config.js` – Frontend ESLint config
- TypeScript: Strict mode enabled, no `any` types (warnings)
- Code Style: Single quotes, no semicolons, 2-space indentation
- Unused Variables: Allowed if prefixed with `_`
- Console: Only `console.warn` and `console.error` allowed
LocalMind/
│
├── assets/
│   └── Banner_LocalMind.png
│
├── LocalMind-Backend/
│   ├── src/
│   │   └── ... (backend source code)
│   │
│   ├── types/
│   │   └── ... (TypeScript types)
│   │
│   ├── .env.example
│   ├── .gitignore
│   ├── .prettierignore
│   ├── .prettierrc
│   ├── a.md
│   ├── jest.config.ts
│   ├── package.json
│   ├── pnpm-lock.yaml
│   ├── setup-cloudflare.sh
│   └── tsconfig.json
│
├── LocalMind-Frontend/
│   ├── public/
│   │   └── ... (static assets)
│   │
│   ├── src/
│   │   └── ... (React code)
│   │
│   ├── .gitignore
│   ├── eslint.config.js
│   ├── index.html
│   ├── package.json
│   ├── pnpm-lock.yaml
│   ├── tsconfig.app.json
│   ├── tsconfig.json
│   ├── tsconfig.node.json
│   └── vite.config.ts
│
├── Contributing.md
├── LICENSE
└── README.md
http://localhost:3000/api/v1
All protected endpoints require a JWT token in the Authorization header:
Authorization: Bearer YOUR_JWT_TOKEN

POST /api/v1/user/register
Content-Type: application/json
{
"username": "john_doe",
"email": "john@example.com",
"password": "SecurePassword123!"
}

Response:
{
"success": true,
"message": "User registered successfully",
"data": {
"userId": "abc123",
"username": "john_doe",
"email": "john@example.com",
"token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
}
}

POST /api/v1/user/login
Content-Type: application/json
{
"email": "john@example.com",
"password": "SecurePassword123!"
}

GET /api/v1/user/profile
Authorization: Bearer YOUR_JWT_TOKEN

PUT /api/v1/user/profile
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json
{
"username": "john_updated",
"preferences": {
"defaultModel": "gemini-pro",
"theme": "dark"
}
}

POST /api/v1/user/local-mind-api-key-generator
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json
{
"name": "Production API Key",
"permissions": ["chat", "upload", "train"]
}

Response:
{
"success": true,
"data": {
"apiKey": "lm_1234567890abcdef",
"name": "Production API Key",
"createdAt": "2024-01-15T10:30:00Z"
}
}

GET /api/v1/user/local-mind-api-keys
Authorization: Bearer YOUR_JWT_TOKEN

DELETE /api/v1/user/local-mind-api-keys/:keyId
Authorization: Bearer YOUR_JWT_TOKEN

GET /api/v1/user/ai-config
Authorization: Bearer YOUR_JWT_TOKEN

PUT /api/v1/user/ai-config
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json
{
"providers": {
"gemini": {
"enabled": true,
"apiKey": "your-gemini-key"
},
"ollama": {
"enabled": true,
"host": "http://localhost:11434"
}
},
"defaultModel": "gemini-pro"
}

POST /api/v1/chat/send-message
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json
{
"message": "What is quantum computing?",
"model": "gemini-pro",
"conversationId": "conv_123",
"useRAG": true
}

Response:
{
"success": true,
"data": {
"messageId": "msg_456",
"response": "Quantum computing is...",
"model": "gemini-pro",
"timestamp": "2024-01-15T10:30:00Z",
"tokensUsed": 245
}
}

POST /api/v1/chat/stream
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json
{
"message": "Write a poem about AI",
"model": "gpt-4"
}

GET /api/v1/chat/history?conversationId=conv_123&limit=50
Authorization: Bearer YOUR_JWT_TOKEN

POST /api/v1/chat/conversation
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json
{
"title": "Project Discussion",
"model": "gemini-pro"
}

DELETE /api/v1/chat/conversation/:conversationId
Authorization: Bearer YOUR_JWT_TOKEN

POST /api/v1/upload/excel
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: multipart/form-data
file: [your-file.xlsx]
name: "Sales Data Q4"
description: "Quarterly sales figures"

Response:
{
"success": true,
"data": {
"fileId": "file_789",
"name": "Sales Data Q4",
"size": 2048576,
"rowCount": 1500,
"status": "processing"
}
}

POST /api/v1/upload/dataSet
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json
{
"name": "FAQ Dataset",
"questions": [
{
"question": "What is LocalMind?",
"answer": "LocalMind is an open-source AI platform..."
}
]
}

POST /api/v1/train/upload
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json
{
"fileId": "file_789",
"chunkSize": 500,
"overlapSize": 50
}

GET /api/v1/upload/status/:fileId
Authorization: Bearer YOUR_JWT_TOKEN

GET /api/v1/upload/files
Authorization: Bearer YOUR_JWT_TOKEN

DELETE /api/v1/upload/files/:fileId
Authorization: Bearer YOUR_JWT_TOKEN

POST /api/v1/expose/localtunnel
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json
{
"subdomain": "my-awesome-ai",
"port": 3000
}

Response:
{
"success": true,
"data": {
"url": "https://my-awesome-ai.loca.lt",
"status": "active"
}
}

POST /api/v1/expose/ngrok
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json
{
"authToken": "your-ngrok-token",
"domain": "myapp.ngrok.io"
}

POST /api/v1/expose/cloudflared
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json
{
"port": 3000
}

Response:
{
"success": true,
"message": "Cloudflared tunnel started successfully",
"data": {
"url": "https://random-subdomain.trycloudflare.com",
"port": 3000,
"status": "active"
}
}

GET /api/v1/expose/cloudflared/status
Authorization: Bearer YOUR_JWT_TOKEN

Response (when active):
{
"success": true,
"message": "Tunnel status retrieved successfully",
"data": {
"active": true,
"url": "https://random-subdomain.trycloudflare.com",
"port": 3000,
"startedAt": "2024-01-15T10:30:00Z"
}
}

Response (when inactive):
{
"success": true,
"message": "Tunnel status retrieved successfully",
"data": {
"active": false
}
}

DELETE /api/v1/expose/cloudflared/stop
Authorization: Bearer YOUR_JWT_TOKEN

Response:
{
"success": true,
"message": "Cloudflared tunnel stopped successfully",
"data": {
"previousUrl": "https://random-subdomain.trycloudflare.com"
}
}

GET /api/v1/expose/status
Authorization: Bearer YOUR_JWT_TOKEN

DELETE /api/v1/expose/stop
Authorization: Bearer YOUR_JWT_TOKEN

GET /api/v1/analytics/usage
Authorization: Bearer YOUR_JWT_TOKEN

GET /api/v1/analytics/models
Authorization: Bearer YOUR_JWT_TOKEN

// Initialize client
const API_URL = 'http://localhost:3000/api/v1'
const token = 'your-jwt-token'
// Send message
const response = await fetch(`${API_URL}/chat/send-message`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${token}`,
},
body: JSON.stringify({
message: 'Explain machine learning in simple terms',
model: 'gemini-pro',
}),
})
const data = await response.json()
console.log(data.data.response)

// Upload Excel file
const formData = new FormData()
formData.append('file', fileInput.files[0])
formData.append('name', 'Company Knowledge Base')
const uploadResponse = await fetch(`${API_URL}/upload/excel`, {
method: 'POST',
headers: {
Authorization: `Bearer ${token}`,
},
body: formData,
})
const {
data: { fileId },
} = await uploadResponse.json()
// Train model with uploaded data
const trainResponse = await fetch(`${API_URL}/train/upload`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${token}`,
},
body: JSON.stringify({
fileId,
chunkSize: 500,
}),
})
// Use RAG-enhanced chat
const chatResponse = await fetch(`${API_URL}/chat/send-message`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${token}`,
},
body: JSON.stringify({
message: 'What does our policy say about remote work?',
useRAG: true,
}),
})

const eventSource = new EventSource(
`${API_URL}/chat/stream?token=${token}&message=Write a story about AI`
)
eventSource.onmessage = event => {
const chunk = JSON.parse(event.data)
console.log(chunk.content) // Display chunk in real-time
}
eventSource.onerror = () => {
eventSource.close()
}

// Start LocalTunnel
const exposeResponse = await fetch(`${API_URL}/expose/localtunnel`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${token}`,
},
body: JSON.stringify({
subdomain: 'my-ai-demo',
port: 3000,
}),
})
const {
data: { url },
} = await exposeResponse.json()
console.log(`Your AI is now accessible at: ${url}`)
// Check status
const statusResponse = await fetch(`${API_URL}/expose/localtunnel/status`, {
headers: { Authorization: `Bearer ${token}` },
})
const { data: status } = await statusResponse.json()
// Stop when done
await fetch(`${API_URL}/expose/localtunnel/stop`, {
method: 'DELETE',
headers: { Authorization: `Bearer ${token}` },
})

// Start Cloudflared tunnel
const tunnelResponse = await fetch(`${API_URL}/expose/cloudflared`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${token}`,
},
body: JSON.stringify({
port: 3000,
}),
})
const {
data: { url: tunnelUrl },
} = await tunnelResponse.json()
console.log(`Cloudflared tunnel active at: ${tunnelUrl}`)
// Check status later
const statusResponse = await fetch(`${API_URL}/expose/cloudflared/status`, {
headers: {
Authorization: `Bearer ${token}`,
},
})
const { data: status } = await statusResponse.json()
if (status.active) {
console.log(`Tunnel is running: ${status.url}`)
}
// Stop when done
await fetch(`${API_URL}/expose/cloudflared/stop`, {
method: 'DELETE',
headers: {
Authorization: `Bearer ${token}`,
},
})

| Technology | Purpose | Version |
|---|---|---|
| Node.js | Runtime environment | 18+ |
| Express | Web framework | 4.x |
| TypeScript | Type safety | 5.x |
| Prisma / MongoDB | Database ORM | Latest |
| JWT | Authentication | Latest |
| Multer | File uploads | Latest |
| LangChain | RAG implementation | Latest |
| Ollama SDK | Local LLM integration | Latest |
| Technology | Purpose | Version |
|---|---|---|
| React | UI framework | 18+ |
| TypeScript | Type safety | 5.x |
| Vite | Build tool | 5.x |
| TailwindCSS | Styling | 3.x |
| Zustand | State management | Latest |
| React Query | Data fetching | Latest |
| React Router | Navigation | 6.x |
| Axios | HTTP client | Latest |
- Ollama – Local LLM runtime
- LangChain – RAG framework
- Vector Databases – Embeddings storage
- Google Gemini SDK
- OpenAI SDK
- Groq SDK
Problem: Error: Cannot find module 'express'
Solution:
cd server
rm -rf node_modules package-lock.json
npm install
npm run dev

Problem: Error: ECONNREFUSED localhost:11434
Solution:
- Ensure Ollama is installed and running:
ollama serve - Check Ollama status:
ollama list - Verify OLLAMA_HOST in
.env
Problem: Error: File size exceeds limit
Solution:
- Check MAX_FILE_SIZE in
.env - Increase the limit if needed
- Compress large files before uploading
Problem: AI doesn't use uploaded data
Solution:
- Verify file was processed:
GET /api/v1/upload/status/:fileId - Ensure
useRAG: truein chat request - Check vector database path in
.env
Problem: Access-Control-Allow-Origin error
Solution:
- Update CORS_ORIGIN in server
.env - Restart backend server
- Check frontend URL matches CORS_ORIGIN
- PDF and TXT file support for RAG
- Multi-language support
- Dark/Light theme toggle
- Voice input/output
- Mobile-responsive design improvements
- Anthropic Claude integration
- Image generation support
- Code execution sandbox
- Collaborative chat sessions
- Advanced analytics dashboard
- Plugin system for extensions
- Marketplace for custom models
- Enterprise features (SSO, RBAC)
- Kubernetes deployment support
- Multi-user workspaces
- WhatsApp/Telegram bot integration
- Markdown export for conversations
- Custom model fine-tuning UI
- Blockchain-based API key management
Want to suggest a feature? Open an issue or join our Discord community!
We ❤️ contributions! Here's how you can help:
- 🐛 Report bugs – Found a bug? Open an issue
- 💡 Suggest features – Have ideas? Share them!
- 📝 Improve docs – Help others understand LocalMind
- 🔧 Submit PRs – Fix bugs or add features
- 🌍 Translate – Make LocalMind accessible worldwide
- ⭐ Star the repo – Show your support!
1. Fork the repository

   # Click "Fork" on GitHub, then:
   git clone https://github.com/YOUR_USERNAME/LocalMind.git
   cd LocalMind

2. Create a feature branch

   git checkout -b feature/amazing-feature

3. Make your changes
   - Follow TypeScript best practices
   - Write clean, documented code
   - Add tests for new features
   - Update documentation

4. Test your changes

   npm run test
   npm run lint
   npm run type-check

5. Commit with conventional commits

   git commit -m "feat: add amazing feature"
   git commit -m "fix: resolve bug in chat"
   git commit -m "docs: update API documentation"

6. Push and create PR

   git push origin feature/amazing-feature
   # Then open a Pull Request on GitHub
We follow Conventional Commits:
- `feat:` – New feature
- `fix:` – Bug fix
- `docs:` – Documentation changes
- `style:` – Code style changes (formatting, etc.)
- `refactor:` – Code refactoring
- `test:` – Adding or updating tests
- `chore:` – Build process or auxiliary tool changes
- TypeScript – Use strict typing, avoid `any`
- ESLint – Follow configured rules
- Prettier – Auto-format on save
- Naming – Use camelCase for variables, PascalCase for components
- Comments – Document complex logic
- Update README.md with details of changes if needed
- Update the documentation with new API endpoints
- Add tests for new functionality
- Ensure all tests pass
- Request review from maintainers
- Address review feedback
- Squash commits before merging
- Be respectful and inclusive
- Provide constructive feedback
- Help newcomers get started
- Follow our Code of Conduct
This project is licensed under the MIT License β see the LICENSE file for details.
- ✅ Commercial use – Use LocalMind in commercial projects
- ✅ Modification – Modify the code as you see fit
- ✅ Distribution – Share LocalMind with others
- ✅ Private use – Use it privately in your organization
Attribution appreciated but not required! If you build something cool with LocalMind, let us know β we'd love to feature it!
LocalMind stands on the shoulders of giants. Huge thanks to:
- Ollama – Making local LLMs accessible
- LangChain – Powering our RAG implementation
- React – Building amazing UIs
- Vite – Lightning-fast build tool
- Express – Reliable backend framework
- Google – Gemini API
- OpenAI – GPT models
- Meta – LLaMA models
- Mistral AI – Open models
- Groq – Fast inference
- All our contributors
- Everyone who reported bugs and suggested features
- The open-source community for inspiration
- Students and educators using LocalMind for learning
- Developers building amazing apps with our API
- Contributors who helped improve the codebase
- You for choosing LocalMind! 🙌
NexGenStudioDev
- 🌐 Website: [Coming Soon]
- 💼 GitHub: @NexGenStudioDev
- 🐦 Twitter: [Coming Soon]
- 💬 Discord: Join our community
- 📧 Email: support@localmind.ai
If LocalMind has been helpful to you:
- ⭐ Star this repository on GitHub
- 🐦 Share it on social media
- 📝 Write about it on your blog
- 💰 Sponsor development (Coming Soon)
- 🤝 Contribute code or documentation
- 📖 Documentation: Read our full docs
- 💬 Discord: Join our community server
- 🐛 Bug Reports: Open an issue
- 💡 Feature Requests: Suggest features
- 📧 Email: support@localmind.ai
Q: Is LocalMind really free?
A: Yes! 100% free and open-source. No hidden costs, no premium tiers, no subscriptions.
Q: Can I use LocalMind commercially?
A: Absolutely! The MIT license allows commercial use.
Q: Do I need a GPU for local models?
A: Recommended but not required. Ollama works on CPU, but GPU speeds things up significantly.
Q: How much disk space do I need?
A: Base installation: ~500MB. Each Ollama model: 2-7GB depending on size.
Q: Can I deploy LocalMind to production?
A: Yes! Use Docker for easy deployment. See our deployment guide.
Q: Is my data secure?
A: Yes. RAG data stays on your machine. API keys are encrypted. No telemetry or tracking.
Q: Can I contribute without coding?
A: Yes! Help with documentation, translations, bug reports, or spread the word.
Built with ❤️ by the open-source community
Get Started • Documentation • Join Community • Report Bug
If you find LocalMind useful, please consider giving it a ⭐ on GitHub!
This guide will help you deploy LocalMind using Docker for a consistent, portable, and production-ready setup.
Before you begin, ensure you have installed:
- Docker (v20.10 or higher) - Install Docker
- Docker Compose (v2.0 or higher) - Usually included with Docker Desktop
Verify installation:
docker --version
docker compose version

1. Clone the repository:

   git clone https://github.com/your-username/LocalMind.git
   cd LocalMind

2. Configure environment variables:

   cp .env.example .env
   nano .env  # Edit with your preferred editor

   Required variables to set:
   - LOCALMIND_SECRET – Generate with: openssl rand -base64 32
   - Add API keys for cloud providers (optional)

3. Start the application:

   docker compose up -d

4. Access LocalMind:
   - Open your browser: http://localhost:3000
   - The application serves both the backend API and the frontend

5. View logs:

   docker compose logs -f localmind

To build and run the image manually instead:

1. Build the image:

   docker build -t localmind:latest .

2. Run the container:

   docker run -d \
     --name localmind-app \
     -p 3000:3000 \
     -e LOCALMIND_SECRET="your-secret-key" \
     -e API_KEY="your-api-key" \
     -e OLLAMA_HOST="http://host.docker.internal:11434" \
     -v localmind-uploads:/app/uploads \
     -v localmind-data:/app/data \
     localmind:latest

3. Access the application:
Create a .env file in the project root with the following variables:
| Variable | Description | Required | Default |
|---|---|---|---|
| NODE_ENV | Environment mode | No | production |
| PORT | Application port | No | 3000 |
| LOCALMIND_SECRET | JWT secret key | Yes | - |
| API_KEY | Generic API key | No | - |
| OPENAI_API_KEY | OpenAI API key | No | - |
| GEMINI_API_KEY | Google Gemini key | No | - |
| GROQ_API_KEY | Groq API key | No | - |
| OLLAMA_HOST | Ollama server URL | No | http://host.docker.internal:11434 |
Generate a secure secret:
openssl rand -base64 32

If Ollama runs on your host machine:

OLLAMA_HOST=http://host.docker.internal:11434

If Ollama runs in Docker (uncomment the ollama service in docker-compose.yml):

OLLAMA_HOST=http://ollama:11434

# Build the image
docker build -t localmind:latest .
# Run container (basic)
docker run -d -p 3000:3000 --name localmind-app localmind:latest
# Run with environment variables
docker run -d -p 3000:3000 \
--env-file .env \
--name localmind-app \
localmind:latest
# Run with volumes (persist data)
docker run -d -p 3000:3000 \
-v localmind-uploads:/app/uploads \
-v localmind-data:/app/data \
--name localmind-app \
localmind:latest

# Start container
docker start localmind-app
# Stop container
docker stop localmind-app
# Restart container
docker restart localmind-app
# View logs
docker logs localmind-app
docker logs -f localmind-app # Follow logs
# Check container status
docker ps -a
# Execute commands in running container
docker exec -it localmind-app sh

# Start services
docker compose up -d
# Stop services
docker compose down
# Stop and remove volumes (β οΈ deletes data)
docker compose down -v
# View logs
docker compose logs -f
# Rebuild and restart
docker compose up -d --build
# Scale services (if needed)
docker compose up -d --scale localmind=3

# Remove container
docker rm -f localmind-app
# Remove image
docker rmi localmind:latest
# Remove volumes (β οΈ permanent data loss)
docker volume rm localmind-uploads localmind-data
# Clean up all unused resources
docker system prune -a --volumes

Check logs:
docker logs localmind-appCommon issues:
- Missing required environment variables
- Port 3000 already in use
- Insufficient permissions
# Find process using port 3000
lsof -i :3000 # macOS/Linux
netstat -ano | findstr :3000 # Windows
# Change port in docker-compose.yml
ports:
  - "8080:3000" # Access via localhost:8080

1. Verify Ollama is running:

   curl http://localhost:11434/api/version

2. Check Docker network:

   docker network inspect localmind-network

3. Use the correct host:
   - Host machine: http://host.docker.internal:11434
   - Docker container: http://ollama:11434
# Fix volume permissions
docker exec -it localmind-app chown -R localmind:localmind /app/uploads /app/data

Increase Docker resources:
- Docker Desktop → Settings → Resources → Memory (increase to 4GB+)
Or limit container memory:
docker run -d -p 3000:3000 \
--memory="2g" \
--name localmind-app \
localmind:latest

1. Never commit `.env` files:

   # Ensure .env is in .gitignore
   echo ".env" >> .gitignore

2. Use strong secrets:

   # Generate a secure random secret
   openssl rand -base64 32

3. Run as non-root user:
   - The Dockerfile already implements this
   - User `localmind` (UID 1001) is used

4. Keep images updated:

   docker pull node:20-alpine
   docker compose build --no-cache

5. Scan for vulnerabilities:

   docker scan localmind:latest
1. Create a production docker-compose file:

   cp docker-compose.yml docker-compose.prod.yml

2. Update production settings:

   environment:
     - NODE_ENV=production
     - LOG_LEVEL=error

3. Deploy:

   docker compose -f docker-compose.prod.yml up -d
Example Nginx configuration:
server {
listen 80;
server_name yourdomain.com;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}

The container includes a health check endpoint:
# Check container health
docker inspect --format='{{.State.Health.Status}}' localmind-app
# Manual health check
curl http://localhost:3000/health

The Dockerfile uses multi-stage builds to:
- Reduce final image size by ~60%
- Separate build and runtime dependencies
- Improve build caching
- Without optimization: ~800MB
- With multi-stage build: ~300MB
DOCKER_BUILDKIT=1 docker build -t localmind:latest .

If you encounter issues:
- Check logs: `docker logs localmind-app`
- Verify environment: `docker exec localmind-app env`
- Open an issue: GitHub Issues
- Community support: [Discord/Forum link]
🎉 You're all set! Your LocalMind instance is now running in Docker.
- 🎉 Initial release of LocalMind
- 🧠 Support for Ollama local models
- ☁️ Cloud AI integrations (Gemini, OpenAI, Groq, RouterAI)
- 📊 RAG with Excel/CSV uploads
- 🌐 LocalTunnel and Ngrok support
- 🔐 JWT authentication
- 💬 Real-time chat interface
- 📈 Usage analytics
- 🎨 Modern React UI with Tailwind CSS
- Implemented bcrypt password hashing
- Added CORS protection
- Rate limiting for API endpoints
- Input validation and sanitization