
LocalMind Banner

LocalMind — AI Without Limits

A free, open-source AI platform that lets you run local LLMs, connect cloud AI providers, teach your AI with your own data, and share your AI instance globally — all with full privacy and unlimited usage.





Quick Start • Features • Installation • API Docs • Contributing




🔥 Overview

LocalMind is a free, open-source, self-hosted AI platform designed for students, developers, researchers, and creators who demand powerful AI capabilities without the constraints of subscriptions, usage limits, or privacy compromises.

Why LocalMind?

Traditional AI platforms lock you in with:

  • 💸 Monthly subscription fees
  • 🚫 Message and usage limits
  • 🔍 Privacy concerns with data collection
  • ☁️ Dependency on cloud services
  • 🔒 Vendor lock-in

LocalMind sets you free with:

  • ✅ 100% Free & Open Source — No hidden costs, ever
  • ✅ Unlimited Usage — No message caps or rate limits
  • ✅ Full Privacy — Your data never leaves your machine
  • ✅ Hybrid Architecture — Mix local and cloud models seamlessly
  • ✅ Custom Training — Teach AI with your own datasets
  • ✅ Global Sharing — Expose your AI to the world instantly
  • ✅ Developer-Friendly — RESTful API for easy integration

Perfect For

  • 🎓 Students learning AI and machine learning
  • 👨‍💻 Developers building AI-powered applications
  • 🔬 Researchers conducting experiments with LLMs
  • 🚀 Startups needing custom AI solutions without enterprise costs
  • 🏒 Organizations requiring private AI infrastructure
  • 🎨 Creators experimenting with AI-assisted content generation

✨ Features

🧠 AI Model Support

LocalMind provides a unified interface to interact with both local and cloud-based AI models:

🖥️ Local Models (via Ollama)

Run powerful open-source models completely offline:

| Model Family  | Description                 | Use Cases                       |
| ------------- | --------------------------- | ------------------------------- |
| LLaMA         | Meta's flagship open model  | General chat, reasoning, coding |
| Mistral       | High-performance 7B model   | Fast responses, efficiency      |
| Phi           | Microsoft's compact model   | Edge devices, quick tasks       |
| Gemma         | Google's open model         | Balanced performance            |
| Custom Models | Any Ollama-compatible model | Specialized tasks               |

☁️ Cloud Models

Integrate premium AI services when needed:

  • Google Gemini — Advanced reasoning and multimodal capabilities
  • OpenAI GPT — Industry-leading language models
  • Groq — Ultra-fast inference speeds
  • RouterAI — Intelligent model routing
  • Coming Soon: Anthropic Claude, Cohere, AI21 Labs

Switch between models instantly — No code changes required!
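
The same request shape works for every provider: only the model field changes. A minimal TypeScript sketch against the send-message endpoint documented below (the model names here are placeholders for whatever you have configured):

const API_URL = 'http://localhost:3000/api/v1'
const token = 'your-jwt-token'

async function ask(message: string, model: string): Promise<string> {
  const res = await fetch(`${API_URL}/chat/send-message`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ message, model }),
  })
  const json = await res.json()
  return json.data.response
}

// Same prompt, different backends: only the model name changes
await ask('Summarize RAG in one line', 'mistral')     // local, served by Ollama
await ask('Summarize RAG in one line', 'gemini-pro')  // cloud, served by Gemini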


📚 RAG: Train with Your Own Data

Transform LocalMind into your personal AI expert using Retrieval-Augmented Generation (RAG):

Supported Formats

  • 📊 Excel Files (.xlsx, .xls) — Import spreadsheets directly
  • 📄 CSV Files — Parse comma-separated datasets
  • ❓ Q&A Datasets — Upload question-answer pairs to ground responses
  • 🔜 Coming Soon: PDF, TXT, JSON, and more

How It Works

  1. Upload — Add your documents through the UI
  2. Processing — Automatic text extraction and chunking
  3. Vectorization — Converts data to embeddings
  4. Storage — Creates a private vector database
  5. Querying — AI retrieves relevant context for responses
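
To make steps 2 and 5 concrete, here is an illustrative TypeScript sketch of chunking and similarity-based retrieval (the Tech Stack lists LangChain for the real implementation; this only shows the idea):

type Chunk = { text: string; vector: number[] }

// Step 2: slide a fixed-size window with overlap so context isn't cut mid-sentence
function chunkText(text: string, size = 500, overlap = 50): string[] {
  const chunks: string[] = []
  for (let i = 0; i < text.length; i += size - overlap) {
    chunks.push(text.slice(i, i + size))
  }
  return chunks
}

// Step 5: rank stored chunks by cosine similarity to the query embedding
function cosine(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// The top-k chunks become the context prepended to the prompt
function topK(query: number[], store: Chunk[], k = 3): Chunk[] {
  return [...store]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k)
}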

Use Cases

  • 📖 Build a chatbot trained on your company's documentation
  • 🎓 Create a study assistant with your course materials
  • 🔬 Analyze research papers and datasets
  • 💼 Build internal knowledge bases
  • 📊 Query business data using natural language

Your data stays 100% local — No cloud uploads, no external storage.


🌐 Global AI Sharing

Share your LocalMind instance with anyone, anywhere:

Exposure Methods

| Method      | Speed | Custom Domain | Security |
| ----------- | ----- | ------------- | -------- |
| LocalTunnel | Fast  | ✅            | Basic    |
| Ngrok       | Fast  | ✅ (Pro plan) | Advanced |

Benefits

  • 🌍 Instant Deployment — No server setup required
  • 🔗 Shareable URLs — Send links to teammates or clients
  • 🚀 Perfect for Demos — Showcase your AI projects
  • 👥 Collaborative Testing — Get feedback from users
  • 📱 Access Anywhere — Use your AI from any device

Security Features

  • πŸ” API key authentication
  • 🚦 Rate limiting
  • πŸ”’ HTTPS encryption
  • πŸ“Š Usage monitoring

🔒 Privacy & Security

Your data is yours — always.

Privacy Guarantees

  • 🏠 Local Processing — RAG data never leaves your machine
  • 🔑 Encrypted Storage — API keys stored securely
  • 🚫 No Telemetry — Zero analytics or tracking
  • 👁️ Open Source — Audit every line of code
  • 🔓 No Vendor Lock-In — Export data anytime

Security Features

  • πŸ›‘οΈ JWT-based authentication
  • πŸ” Bcrypt password hashing
  • πŸ”’ CORS protection
  • 🚦 Rate limiting
  • πŸ“ Request validation
  • πŸ” SQL injection prevention

🚀 Quick Start

Get LocalMind running in under 5 minutes:

# Clone the repository
git clone https://github.com/NexGenStudioDev/LocalMind.git
cd LocalMind

# Install dependencies
cd server && npm install
cd ../client && npm install

# Start the backend
cd server && npm run dev

# Start the frontend (in a new terminal)
cd client && npm run dev

# Open http://localhost:5173

That's it! You're ready to chat with AI. 🎉

For detailed setup instructions, see the Installation Guide below.


📦 Installation Guide

Prerequisites

Ensure you have the following installed:

| Software          | Version        | Download              |
| ----------------- | -------------- | --------------------- |
| Node.js           | 18.x or higher | nodejs.org            |
| npm               | 9.x or higher  | Included with Node.js |
| Git               | Latest         | git-scm.com           |
| Ollama (optional) | Latest         | ollama.ai             |

Verify Installation

node --version  # Should show v18.x.x or higher
npm --version   # Should show 9.x.x or higher
git --version   # Should show git version 2.x.x

1. Backend Setup

# Navigate to server directory
cd server

# Install dependencies
npm install

# Create environment file
cp .env.example .env

# Edit .env with your preferred editor
nano .env

# Start development server
npm run dev

The backend will be available at http://localhost:3000

Available Scripts

npm run dev          # Start development server with hot reload
npm run build        # Compile TypeScript to JavaScript
npm run start        # Run production build
npm run lint         # Check code quality with ESLint
npm run lint:fix     # Fix ESLint errors automatically
npm run format       # Format code with Prettier
npm run format:check # Check code formatting
npm run type-check   # Check TypeScript types without building
npm run test         # Run test suite

2. Frontend Setup

# Navigate to client directory
cd client

# Install dependencies
npm install

# Start development server
npm run dev

The frontend will be available at http://localhost:5173

Available Scripts

npm run dev          # Start Vite dev server
npm run build        # Build for production
npm run preview      # Preview production build
npm run lint         # Check code quality with ESLint
npm run lint:fix     # Fix ESLint errors automatically
npm run format       # Format code with Prettier
npm run format:check # Check code formatting
npm run type-check   # Check TypeScript types without building

3. Docker (Recommended for Production)

Run LocalMind with Docker for simplified deployment and consistent environments.

Prerequisites

  • Docker (v20.10 or higher) - Install Docker
  • Docker Compose (v2.0 or higher) - Usually included with Docker Desktop

Verify installation:

docker --version
docker compose version

Quick Start with Docker Compose

  1. Configure environment variables:

    cp env.example .env
    # Edit .env with your preferred editor
    nano .env

    Required variables:

    • LOCALMIND_SECRET - Generate with: openssl rand -base64 32
    • JWT_SECRET - Same as LOCALMIND_SECRET or generate separately
    • Your_Name, YOUR_EMAIL, YOUR_PASSWORD - Admin credentials
    • DB_CONNECTION_STRING - MongoDB connection string
    • API keys for cloud providers (optional)
  2. Build and start the application:

    # Build and run (combined backend + frontend)
    docker compose up -d
    
    # View logs
    docker compose logs -f localmind
    
    # Check container status
    docker compose ps
  3. Access the application:

    Open http://localhost:3000 in your browser.

Using Separate Services (Advanced)

For independent scaling of backend and frontend:

# Use separate services configuration
docker compose -f docker-compose.separate.yml up -d

# Access:
# - Frontend: http://localhost:80
# - Backend API: http://localhost:3000

Docker Commands Reference

# Build the image
docker build -t localmind:latest .

# Run container manually
docker run -d \
  --name localmind-app \
  -p 3000:3000 \
  --env-file .env \
  -v localmind-uploads:/app/uploads \
  -v localmind-data:/app/data \
  localmind:latest

# Stop services
docker compose down

# Stop and remove volumes (⚠️ deletes data)
docker compose down -v

# Rebuild after code changes
docker compose up -d --build

# View logs
docker compose logs -f

# Execute commands in container
docker compose exec localmind sh

Docker Features

  • ✅ Multi-stage builds - Optimized image size (~300MB)
  • ✅ Non-root user - Enhanced security
  • ✅ Health checks - Automatic container monitoring
  • ✅ Volume persistence - Data survives container restarts
  • ✅ Environment variables - Easy configuration
  • ✅ Resource limits - Prevent resource exhaustion

Troubleshooting Docker

Container won't start:

# Check logs
docker compose logs localmind

# Verify environment variables
docker compose exec localmind env

Port already in use:

# Change port in docker-compose.yml
ports:
  - '8080:3000'  # Access via localhost:8080

Permission errors:

# Fix volume permissions
docker compose exec localmind chown -R localmind:localmind /app/uploads /app/data

For more Docker details, see the Docker Deployment Guide section below.


βš™οΈ Configuration

Environment Variables

Create a .env file in the server directory:

# Server Configuration
PORT=3000
NODE_ENV=development
ENVIRONMENT=development

# Database
DATABASE_URL=postgresql://user:password@localhost:5432/localmind
MONGO_URI=mongodb://localhost:27017/localmind

# Authentication
LOCALMIND_SECRET=your-super-secret-jwt-key-change-this
JWT_EXPIRATION=7d
REFRESH_TOKEN_EXPIRATION=30d

# AI Configuration
DEFAULT_MODEL=gemini-pro
OLLAMA_HOST=http://localhost:11434

# Cloud AI Provider Keys
GEMINI_API_KEY=your-gemini-api-key-here
OPENAI_API_KEY=your-openai-api-key-here
GROQ_API_KEY=your-groq-api-key-here
ROUTERAI_API_KEY=your-routerai-api-key-here

# RAG Configuration
VECTOR_DB_PATH=./data/vectordb
MAX_FILE_SIZE=50MB
SUPPORTED_FORMATS=.xlsx,.csv,.xls

# Tunnel Configuration
LOCALTUNNEL_SUBDOMAIN=my-localmind
NGROK_AUTHTOKEN=your-ngrok-token-here

# Security
CORS_ORIGIN=http://localhost:5173
RATE_LIMIT_WINDOW=15m
RATE_LIMIT_MAX=100

# Logging
LOG_LEVEL=info
LOG_FILE=./logs/app.log

⚠️ Security Warning: Never commit .env files to version control. Add .env to your .gitignore.
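
A related habit worth adopting: validate required variables at startup so the server fails fast with a clear message instead of erroring on the first request. A minimal sketch (assuming dotenv; adjust the list to your setup):

import 'dotenv/config'

const required = ['LOCALMIND_SECRET', 'MONGO_URI']
for (const key of required) {
  if (!process.env[key]) {
    throw new Error(`Missing required environment variable: ${key}`)
  }
}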

Frontend Configuration

Create a .env file in the client directory:

VITE_API_URL=http://localhost:3000
VITE_APP_NAME=LocalMind
VITE_ENABLE_ANALYTICS=false
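
Vite exposes variables prefixed with VITE_ to client code through import.meta.env. For example (the file path is illustrative):

// client/src/services/api.ts
const API_URL: string = import.meta.env.VITE_API_URL ?? 'http://localhost:3000'

export async function apiGet<T>(path: string): Promise<T> {
  const res = await fetch(`${API_URL}${path}`)
  return res.json() as Promise<T>
}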

🔧 Code Quality & Linting

LocalMind uses ESLint and Prettier to maintain consistent code style and catch errors early.

Setup

  1. Install dependencies (if not already installed):

    # Root directory
    pnpm install
    
    # Backend
    cd LocalMind-Backend && pnpm install
    
    # Frontend
    cd LocalMind-Frontend && pnpm install
  2. Install Husky (for pre-commit hooks):

    # From root directory
    pnpm install
    pnpm prepare

Available Commands

Backend

cd LocalMind-Backend

pnpm lint          # Check for linting errors
pnpm lint:fix      # Automatically fix linting errors
pnpm format        # Format code with Prettier
pnpm format:check  # Check code formatting without changing files
pnpm type-check    # Check TypeScript types

Frontend

cd LocalMind-Frontend

pnpm lint          # Check for linting errors
pnpm lint:fix      # Automatically fix linting errors
pnpm format        # Format code with Prettier
pnpm format:check  # Check code formatting without changing files
pnpm type-check    # Check TypeScript types

Root (Run for both)

# From project root
pnpm lint          # Lint both backend and frontend
pnpm lint:fix      # Fix linting errors in both
pnpm format        # Format both backend and frontend
pnpm format:check  # Check formatting in both

Pre-commit Hooks

Husky automatically runs linting and formatting on staged files before each commit:

  • βœ… Automatically formats code with Prettier
  • βœ… Fixes ESLint errors when possible
  • βœ… Prevents commits with linting errors

To bypass hooks (not recommended):

git commit --no-verify

Editor Integration (VS Code)

  1. Install recommended extensions:

    • ESLint (dbaeumer.vscode-eslint)
    • Prettier (esbenp.prettier-vscode)

  2. Settings are already configured in .vscode/settings.json:

    • Format on save enabled
    • ESLint auto-fix on save enabled
    • Prettier as default formatter
  3. Reload VS Code after installing extensions

Configuration Files

  • .prettierrc - Shared Prettier configuration
  • .prettierignore - Files to ignore when formatting
  • LocalMind-Backend/eslint.config.js - Backend ESLint config
  • LocalMind-Frontend/eslint.config.js - Frontend ESLint config

Rules & Standards

  • TypeScript: Strict mode enabled; any types are flagged as warnings
  • Code Style: Single quotes, no semicolons, 2-space indentation
  • Unused Variables: Allowed if prefixed with _
  • Console: Only console.warn and console.error allowed

πŸ“ Project Structure

LocalMind/
│
├── server/                      # Backend application
│   ├── src/
│   │   ├── config/             # Configuration files
│   │   ├── controllers/        # Request handlers
│   │   ├── middleware/         # Express middleware
│   │   ├── models/             # Database models
│   │   ├── routes/             # API routes
│   │   ├── services/           # Business logic
│   │   │   ├── ai/            # AI provider integrations
│   │   │   ├── rag/           # RAG implementation
│   │   │   └── tunnel/        # Tunnel services
│   │   ├── utils/              # Helper functions
│   │   ├── validators/         # Input validation
│   │   └── index.ts           # Entry point
│   ├── tests/                  # Test files
│   ├── .env.example           # Environment template
│   ├── package.json
│   └── tsconfig.json
│
├── client/                      # Frontend application
│   ├── public/                 # Static assets
│   ├── src/
│   │   ├── assets/            # Images, fonts, etc.
│   │   ├── components/        # React components
│   │   │   ├── chat/
│   │   │   ├── upload/
│   │   │   └── settings/
│   │   ├── hooks/             # Custom React hooks
│   │   ├── pages/             # Page components
│   │   ├── services/          # API client
│   │   ├── store/             # State management
│   │   ├── styles/            # CSS/SCSS files
│   │   ├── types/             # TypeScript types
│   │   ├── utils/             # Helper functions
│   │   ├── App.tsx            # Root component
│   │   └── main.tsx           # Entry point
│   ├── .env.example
│   ├── package.json
│   ├── tsconfig.json
│   └── vite.config.ts
│
├── docs/                        # Documentation
├── scripts/                     # Utility scripts
├── docker-compose.yml
├── .gitignore
├── LICENSE
└── README.md



🧩 API Documentation

Base URL

http://localhost:3000/api/v1

Authentication

All protected endpoints require a JWT token in the Authorization header:

Authorization: Bearer YOUR_JWT_TOKEN
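
In client code, a small wrapper that attaches this header to every request keeps call sites tidy. A sketch:

const API_URL = 'http://localhost:3000/api/v1'

async function authedFetch(
  path: string,
  token: string,
  init: RequestInit & { headers?: Record<string, string> } = {}
) {
  return fetch(`${API_URL}${path}`, {
    ...init,
    headers: { ...init.headers, Authorization: `Bearer ${token}` },
  })
}

// Usage
const profile = await authedFetch('/user/profile', 'YOUR_JWT_TOKEN')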

πŸ” Authentication & User Management

Register User

POST /api/v1/user/register
Content-Type: application/json

{
  "username": "john_doe",
  "email": "john@example.com",
  "password": "SecurePassword123!"
}

Response:

{
  "success": true,
  "message": "User registered successfully",
  "data": {
    "userId": "abc123",
    "username": "john_doe",
    "email": "john@example.com",
    "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
  }
}
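
All endpoints return this same success/message/data envelope, so it is worth typing once on the client. A TypeScript sketch (field names follow the examples in this section):

const API_URL = 'http://localhost:3000/api/v1'

interface ApiResponse<T> {
  success: boolean
  message?: string
  data: T
}

interface RegisteredUser {
  userId: string
  username: string
  email: string
  token: string
}

const res = await fetch(`${API_URL}/user/register`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    username: 'john_doe',
    email: 'john@example.com',
    password: 'SecurePassword123!',
  }),
})
const body = (await res.json()) as ApiResponse<RegisteredUser>
if (body.success) {
  localStorage.setItem('token', body.data.token) // keep the JWT for later requests
}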

Login

POST /api/v1/user/login
Content-Type: application/json

{
  "email": "john@example.com",
  "password": "SecurePassword123!"
}

Get User Profile

GET /api/v1/user/profile
Authorization: Bearer YOUR_JWT_TOKEN

Update Profile

PUT /api/v1/user/profile
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json

{
  "username": "john_updated",
  "preferences": {
    "defaultModel": "gemini-pro",
    "theme": "dark"
  }
}

βš™οΈ AI Configuration & API Keys

Generate LocalMind API Key

POST /api/v1/user/local-mind-api-key-generator
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json

{
  "name": "Production API Key",
  "permissions": ["chat", "upload", "train"]
}

Response:

{
  "success": true,
  "data": {
    "apiKey": "lm_1234567890abcdef",
    "name": "Production API Key",
    "createdAt": "2024-01-15T10:30:00Z"
  }
}

List API Keys

GET /api/v1/user/local-mind-api-keys
Authorization: Bearer YOUR_JWT_TOKEN

Delete API Key

DELETE /api/v1/user/local-mind-api-keys/:keyId
Authorization: Bearer YOUR_JWT_TOKEN

Get AI Configuration

GET /api/v1/user/ai-config
Authorization: Bearer YOUR_JWT_TOKEN

Update AI Configuration

PUT /api/v1/user/ai-config
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json

{
  "providers": {
    "gemini": {
      "enabled": true,
      "apiKey": "your-gemini-key"
    },
    "ollama": {
      "enabled": true,
      "host": "http://localhost:11434"
    }
  },
  "defaultModel": "gemini-pro"
}

💬 Chat & Messaging

Send Message

POST /api/v1/chat/send-message
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json

{
  "message": "What is quantum computing?",
  "model": "gemini-pro",
  "conversationId": "conv_123",
  "useRAG": true
}

Response:

{
  "success": true,
  "data": {
    "messageId": "msg_456",
    "response": "Quantum computing is...",
    "model": "gemini-pro",
    "timestamp": "2024-01-15T10:30:00Z",
    "tokensUsed": 245
  }
}

Stream Message (SSE)

POST /api/v1/chat/stream
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json

{
  "message": "Write a poem about AI",
  "model": "gpt-4"
}

Get Chat History

GET /api/v1/chat/history?conversationId=conv_123&limit=50
Authorization: Bearer YOUR_JWT_TOKEN

Create New Conversation

POST /api/v1/chat/conversation
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json

{
  "title": "Project Discussion",
  "model": "gemini-pro"
}

Delete Conversation

DELETE /api/v1/chat/conversation/:conversationId
Authorization: Bearer YOUR_JWT_TOKEN

📚 File Upload & RAG Training

Upload Excel/CSV

POST /api/v1/upload/excel
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: multipart/form-data

file: [your-file.xlsx]
name: "Sales Data Q4"
description: "Quarterly sales figures"

Response:

{
  "success": true,
  "data": {
    "fileId": "file_789",
    "name": "Sales Data Q4",
    "size": 2048576,
    "rowCount": 1500,
    "status": "processing"
  }
}

Upload Q&A Dataset

POST /api/v1/upload/dataSet
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json

{
  "name": "FAQ Dataset",
  "questions": [
    {
      "question": "What is LocalMind?",
      "answer": "LocalMind is an open-source AI platform..."
    }
  ]
}

Train Model with Uploaded Data

POST /api/v1/train/upload
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json

{
  "fileId": "file_789",
  "chunkSize": 500,
  "overlapSize": 50
}

Get Upload Status

GET /api/v1/upload/status/:fileId
Authorization: Bearer YOUR_JWT_TOKEN

List Uploaded Files

GET /api/v1/upload/files
Authorization: Bearer YOUR_JWT_TOKEN

Delete Uploaded File

DELETE /api/v1/upload/files/:fileId
Authorization: Bearer YOUR_JWT_TOKEN

🌐 Public Exposure

Expose via LocalTunnel

POST /api/v1/expose/localtunnel
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json

{
  "subdomain": "my-awesome-ai",
  "port": 3000
}

Response:

{
  "success": true,
  "data": {
    "url": "https://my-awesome-ai.loca.lt",
    "status": "active"
  }
}

Expose via Ngrok

POST /api/v1/expose/ngrok
Authorization: Bearer YOUR_JWT_TOKEN
Content-Type: application/json

{
  "authToken": "your-ngrok-token",
  "domain": "myapp.ngrok.io"
}

Get Exposure Status

GET /api/v1/expose/status
Authorization: Bearer YOUR_JWT_TOKEN

Stop Exposure

DELETE /api/v1/expose/stop
Authorization: Bearer YOUR_JWT_TOKEN

📊 Analytics & Monitoring

Get Usage Statistics

GET /api/v1/analytics/usage
Authorization: Bearer YOUR_JWT_TOKEN

Get Model Performance

GET /api/v1/analytics/models
Authorization: Bearer YOUR_JWT_TOKEN

💡 Usage Examples

Example 1: Basic Chat

// Initialize client
const API_URL = 'http://localhost:3000/api/v1'
const token = 'your-jwt-token'

// Send message
const response = await fetch(`${API_URL}/chat/send-message`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${token}`,
  },
  body: JSON.stringify({
    message: 'Explain machine learning in simple terms',
    model: 'gemini-pro',
  }),
})

const data = await response.json()
console.log(data.data.response)

Example 2: Upload and Train with Custom Data

// Upload Excel file
const formData = new FormData()
formData.append('file', fileInput.files[0])
formData.append('name', 'Company Knowledge Base')

const uploadResponse = await fetch(`${API_URL}/upload/excel`, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${token}`,
  },
  body: formData,
})

const {
  data: { fileId },
} = await uploadResponse.json()

// Train model with uploaded data
const trainResponse = await fetch(`${API_URL}/train/upload`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${token}`,
  },
  body: JSON.stringify({
    fileId,
    chunkSize: 500,
  }),
})

// Use RAG-enhanced chat
const chatResponse = await fetch(`${API_URL}/chat/send-message`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${token}`,
  },
  body: JSON.stringify({
    message: 'What does our policy say about remote work?',
    useRAG: true,
  }),
})

Example 3: Streaming Responses

// Note: EventSource always issues GET requests, so the token and message
// travel as query parameters here (remember to URL-encode the message)
const eventSource = new EventSource(
  `${API_URL}/chat/stream?token=${token}&message=${encodeURIComponent('Write a story about AI')}`
)

eventSource.onmessage = event => {
  const chunk = JSON.parse(event.data)
  console.log(chunk.content) // Display chunk in real-time
}

eventSource.onerror = () => {
  eventSource.close()
}
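
Because EventSource is limited to GET, the documented POST /chat/stream endpoint can instead be consumed by reading the fetch response body as a stream. A sketch (the exact chunk framing depends on the server):

const streamRes = await fetch(`${API_URL}/chat/stream`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${token}`,
  },
  body: JSON.stringify({ message: 'Write a story about AI', model: 'gpt-4' }),
})

const reader = streamRes.body!.getReader()
const decoder = new TextDecoder()
while (true) {
  const { done, value } = await reader.read()
  if (done) break
  console.log(decoder.decode(value, { stream: true })) // render each chunk as it arrives
}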

Example 4: Expose Your AI Globally

// Start LocalTunnel
const exposeResponse = await fetch(`${API_URL}/expose/localtunnel`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${token}`,
  },
  body: JSON.stringify({
    subdomain: 'my-ai-demo',
  }),
})

const {
  data: { url },
} = await exposeResponse.json()
console.log(`Your AI is now accessible at: ${url}`)

πŸ› οΈ Tech Stack

Backend

| Technology       | Purpose               | Version |
| ---------------- | --------------------- | ------- |
| Node.js          | Runtime environment   | 18+     |
| Express          | Web framework         | 4.x     |
| TypeScript       | Type safety           | 5.x     |
| Prisma / MongoDB | Database ORM          | Latest  |
| JWT              | Authentication        | Latest  |
| Multer           | File uploads          | Latest  |
| LangChain        | RAG implementation    | Latest  |
| Ollama SDK       | Local LLM integration | Latest  |

Frontend

| Technology   | Purpose          | Version |
| ------------ | ---------------- | ------- |
| React        | UI framework     | 18+     |
| TypeScript   | Type safety      | 5.x     |
| Vite         | Build tool       | 5.x     |
| TailwindCSS  | Styling          | 3.x     |
| Zustand      | State management | Latest  |
| React Query  | Data fetching    | Latest  |
| React Router | Navigation       | 6.x     |
| Axios        | HTTP client      | Latest  |

AI & ML

  • Ollama — Local LLM runtime
  • LangChain — RAG framework
  • Vector Databases — Embeddings storage
  • Google Gemini SDK
  • OpenAI SDK
  • Groq SDK

🔧 Troubleshooting

Common Issues

1. Backend Won't Start

Problem: Error: Cannot find module 'express'

Solution:

cd server
rm -rf node_modules package-lock.json
npm install
npm run dev

2. Ollama Connection Failed

Problem: Error: ECONNREFUSED localhost:11434

Solution:

  • Ensure Ollama is installed and running: ollama serve
  • Check Ollama status: ollama list
  • Verify OLLAMA_HOST in .env
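
A quick connectivity probe from Node against Ollama's version endpoint (the same endpoint the Docker guide checks with curl) can confirm which side is failing:

const host = process.env.OLLAMA_HOST ?? 'http://localhost:11434'

try {
  const res = await fetch(`${host}/api/version`)
  console.log('Ollama reachable:', await res.json())
} catch {
  console.error(`Cannot reach Ollama at ${host}. Is "ollama serve" running?`)
}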

3. File Upload Fails

Problem: Error: File size exceeds limit

Solution:

  • Check MAX_FILE_SIZE in .env
  • Increase the limit if needed
  • Compress large files before uploading

4. RAG Not Working

Problem: AI doesn't use uploaded data

Solution:

  • Verify file was processed: GET /api/v1/upload/status/:fileId
  • Ensure useRAG: true in chat request
  • Check vector database path in .env

5. CORS Errors

Problem: Access-Control-Allow-Origin error

Solution:

  • Update CORS_ORIGIN in server .env
  • Restart backend server
  • Check frontend URL matches CORS_ORIGIN

πŸ—ΊοΈ Roadmap

Version 1.1 (Q2 2024)

  • PDF and TXT file support for RAG
  • Multi-language support
  • Dark/Light theme toggle
  • Voice input/output
  • Mobile-responsive design improvements

Version 1.2 (Q3 2024)

  • Anthropic Claude integration
  • Image generation support
  • Code execution sandbox
  • Collaborative chat sessions
  • Advanced analytics dashboard

Version 2.0 (Q4 2024)

  • Plugin system for extensions
  • Marketplace for custom models
  • Enterprise features (SSO, RBAC)
  • Kubernetes deployment support
  • Multi-user workspaces

Community Requests

  • WhatsApp/Telegram bot integration
  • Markdown export for conversations
  • Custom model fine-tuning UI
  • Blockchain-based API key management

Want to suggest a feature? Open an issue or join our Discord community!


🤝 Contributing

We ❤️ contributions! Here's how you can help:

Ways to Contribute

  • πŸ› Report bugs β€” Found a bug? Open an issue
  • πŸ’‘ Suggest features β€” Have ideas? Share them!
  • πŸ“ Improve docs β€” Help others understand LocalMind
  • πŸ”§ Submit PRs β€” Fix bugs or add features
  • 🌍 Translate β€” Make LocalMind accessible worldwide
  • ⭐ Star the repo β€” Show your support!

Development Workflow

  1. Fork the repository

    # Click "Fork" on GitHub, then:
    git clone https://github.com/YOUR_USERNAME/LocalMind.git
    cd LocalMind
  2. Create a feature branch

    git checkout -b feature/amazing-feature
  3. Make your changes

    • Follow TypeScript best practices
    • Write clean, documented code
    • Add tests for new features
    • Update documentation
  4. Test your changes

    npm run test
    npm run lint
    npm run type-check
  5. Commit with conventional commits

    git commit -m "feat: add amazing feature"
    git commit -m "fix: resolve bug in chat"
    git commit -m "docs: update API documentation"
  6. Push and create PR

    git push origin feature/amazing-feature
    # Then open a Pull Request on GitHub

Commit Message Guidelines

We follow Conventional Commits:

  • feat: — New feature
  • fix: — Bug fix
  • docs: — Documentation changes
  • style: — Code style changes (formatting, etc.)
  • refactor: — Code refactoring
  • test: — Adding or updating tests
  • chore: — Build process or auxiliary tool changes

Code Style

  • TypeScript — Use strict typing, avoid any
  • ESLint — Follow configured rules
  • Prettier — Auto-format on save
  • Naming — Use camelCase for variables, PascalCase for components
  • Comments — Document complex logic

Pull Request Process

  1. Update README.md with details of changes if needed
  2. Update the documentation with new API endpoints
  3. Add tests for new functionality
  4. Ensure all tests pass
  5. Request review from maintainers
  6. Address review feedback
  7. Squash commits before merging

Community Guidelines

  • Be respectful and inclusive
  • Provide constructive feedback
  • Help newcomers get started
  • Follow our Code of Conduct

📄 License

This project is licensed under the MIT License — see the LICENSE file for details.

What This Means

✅ Commercial use — Use LocalMind in commercial projects
✅ Modification — Modify the code as you see fit
✅ Distribution — Share LocalMind with others
✅ Private use — Use it privately in your organization

⚠️ Limitation of liability — Use at your own risk
⚠️ No warranty — Provided "as is"

Attribution appreciated but not required! If you build something cool with LocalMind, let us know — we'd love to feature it!


πŸ™ Acknowledgments

LocalMind stands on the shoulders of giants. Huge thanks to:

Open Source Projects

  • Ollama — Making local LLMs accessible
  • LangChain — Powering our RAG implementation
  • React — Building amazing UIs
  • Vite — Lightning-fast build tool
  • Express — Reliable backend framework

AI Providers

  • Google — Gemini API
  • OpenAI — GPT models
  • Meta — LLaMA models
  • Mistral AI — Open models
  • Groq — Fast inference

Community

  • All our contributors
  • Everyone who reported bugs and suggested features
  • The open-source community for inspiration

Special Thanks

  • Students and educators using LocalMind for learning
  • Developers building amazing apps with our API
  • Contributors who helped improve the codebase
  • You for choosing LocalMind! 🎉

👤 Author

NexGenStudioDev

Support the Project

If LocalMind has been helpful to you:

  • ⭐ Star this repository on GitHub
  • 🐦 Share it on social media
  • πŸ“ Write about it on your blog
  • πŸ’° Sponsor development (Coming Soon)
  • 🀝 Contribute code or documentation



🎯 Support

Getting Help

FAQ

Q: Is LocalMind really free?
A: Yes! 100% free and open-source. No hidden costs, no premium tiers, no subscriptions.

Q: Can I use LocalMind commercially?
A: Absolutely! The MIT license allows commercial use.

Q: Do I need a GPU for local models?
A: Recommended but not required. Ollama works on CPU, but GPU speeds things up significantly.

Q: How much disk space do I need?
A: Base installation: ~500MB. Each Ollama model: 2-7GB depending on size.

Q: Can I deploy LocalMind to production?
A: Yes! Use Docker for easy deployment. See our deployment guide.

Q: Is my data secure?
A: Yes. RAG data stays on your machine. API keys are encrypted. No telemetry or tracking.

Q: Can I contribute without coding?
A: Yes! Help with documentation, translations, bug reports, or spread the word.



🚀 LocalMind — Free, Private, Limitless AI for Everyone

Built with ❤️ by the open-source community


Get Started • Documentation • Join Community • Report Bug


If you find LocalMind useful, please consider giving it a ⭐️ on GitHub!


🐳 Docker Deployment Guide

This guide will help you deploy LocalMind using Docker for a consistent, portable, and production-ready setup.


📋 Prerequisites

Before you begin, ensure you have installed:

  • Docker (v20.10 or higher) - Install Docker
  • Docker Compose (v2.0 or higher) - Usually included with Docker Desktop

Verify installation:

docker --version
docker compose version

🚀 Quick Start

Option 1: Using Docker Compose (Recommended)

  1. Clone the repository:

    git clone https://github.com/your-username/LocalMind.git
    cd LocalMind
  2. Configure environment variables:

    cp .env.example .env
    nano .env  # Edit with your preferred editor

    Required variables to set:

    • LOCALMIND_SECRET - Generate with: openssl rand -base64 32
    • Add API keys for cloud providers (optional)
  3. Start the application:

    docker compose up -d
  4. Access LocalMind:

    Open http://localhost:3000 in your browser.

  5. View logs:

    docker compose logs -f localmind

Option 2: Using Docker CLI

  1. Build the image:

    docker build -t localmind:latest .
  2. Run the container:

    docker run -d \
      --name localmind-app \
      -p 3000:3000 \
      -e LOCALMIND_SECRET="your-secret-key" \
      -e API_KEY="your-api-key" \
      -e OLLAMA_HOST="http://host.docker.internal:11434" \
      -v localmind-uploads:/app/uploads \
      -v localmind-data:/app/data \
      localmind:latest
  3. Access the application:

    Open http://localhost:3000 in your browser.


βš™οΈ Configuration

Environment Variables

Create a .env file in the project root with the following variables:

| Variable         | Description       | Required | Default                           |
| ---------------- | ----------------- | -------- | --------------------------------- |
| NODE_ENV         | Environment mode  | No       | production                        |
| PORT             | Application port  | No       | 3000                              |
| LOCALMIND_SECRET | JWT secret key    | Yes      | -                                 |
| API_KEY          | Generic API key   | No       | -                                 |
| OPENAI_API_KEY   | OpenAI API key    | No       | -                                 |
| GEMINI_API_KEY   | Google Gemini key | No       | -                                 |
| GROQ_API_KEY     | Groq API key      | No       | -                                 |
| OLLAMA_HOST      | Ollama server URL | No       | http://host.docker.internal:11434 |

Generate a secure secret:

openssl rand -base64 32

Connecting to Ollama

If Ollama runs on your host machine:

OLLAMA_HOST=http://host.docker.internal:11434

If Ollama runs in Docker (uncomment the ollama service in docker-compose.yml):

OLLAMA_HOST=http://ollama:11434

📦 Docker Commands Reference

Building & Running

# Build the image
docker build -t localmind:latest .

# Run container (basic)
docker run -d -p 3000:3000 --name localmind-app localmind:latest

# Run with environment variables
docker run -d -p 3000:3000 \
  --env-file .env \
  --name localmind-app \
  localmind:latest

# Run with volumes (persist data)
docker run -d -p 3000:3000 \
  -v localmind-uploads:/app/uploads \
  -v localmind-data:/app/data \
  --name localmind-app \
  localmind:latest

Managing Containers

# Start container
docker start localmind-app

# Stop container
docker stop localmind-app

# Restart container
docker restart localmind-app

# View logs
docker logs localmind-app
docker logs -f localmind-app  # Follow logs

# Check container status
docker ps -a

# Execute commands in running container
docker exec -it localmind-app sh

Docker Compose Commands

# Start services
docker compose up -d

# Stop services
docker compose down

# Stop and remove volumes (⚠️ deletes data)
docker compose down -v

# View logs
docker compose logs -f

# Rebuild and restart
docker compose up -d --build

# Scale services (if needed)
docker compose up -d --scale localmind=3

Cleanup

# Remove container
docker rm -f localmind-app

# Remove image
docker rmi localmind:latest

# Remove volumes (⚠️ permanent data loss)
docker volume rm localmind-uploads localmind-data

# Clean up all unused resources
docker system prune -a --volumes

πŸ” Troubleshooting

Container won't start

Check logs:

docker logs localmind-app

Common issues:

  • Missing required environment variables
  • Port 3000 already in use
  • Insufficient permissions

Port already in use

# Find process using port 3000
lsof -i :3000  # macOS/Linux
netstat -ano | findstr :3000  # Windows

# Change port in docker-compose.yml
ports:
  - "8080:3000"  # Access via localhost:8080

Can't connect to Ollama

  1. Verify Ollama is running:

    curl http://localhost:11434/api/version
  2. Check Docker network:

    docker network inspect localmind-network
  3. Use correct host:

    • Host machine: http://host.docker.internal:11434
    • Docker container: http://ollama:11434

Permission denied errors

# Fix volume permissions
docker exec -it localmind-app chown -R localmind:localmind /app/uploads /app/data

Out of memory

Increase Docker resources:

  • Docker Desktop β†’ Settings β†’ Resources β†’ Memory (increase to 4GB+)

Or limit container memory:

docker run -d -p 3000:3000 \
  --memory="2g" \
  --name localmind-app \
  localmind:latest

🔒 Security Best Practices

  1. Never commit .env files:

    # Ensure .env is in .gitignore
    echo ".env" >> .gitignore
  2. Use strong secrets:

    # Generate secure random secret
    openssl rand -base64 32
  3. Run as non-root user:

    • The Dockerfile already implements this
    • User localmind (UID 1001) is used
  4. Keep images updated:

    docker pull node:20-alpine
    docker compose build --no-cache
  5. Scan for vulnerabilities:

    docker scout cves localmind:latest  # replaces the deprecated 'docker scan'

🚢 Production Deployment

Using Docker Compose (Production)

  1. Create production docker-compose:

    cp docker-compose.yml docker-compose.prod.yml
  2. Update production settings:

    environment:
      - NODE_ENV=production
      - LOG_LEVEL=error
  3. Deploy:

    docker compose -f docker-compose.prod.yml up -d

Behind a Reverse Proxy (Nginx/Traefik)

Example Nginx configuration:

server {
    listen 80;
    server_name yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

📊 Health Checks

The container includes a health check endpoint:

# Check container health
docker inspect --format='{{.State.Health.Status}}' localmind-app

# Manual health check
curl http://localhost:3000/health
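
For reference, a minimal Express health route looks like the sketch below (the real endpoint ships with the server and may report more detail):

import express from 'express'

const app = express()

// Lightweight liveness probe: cheap to call, no auth, no database access
app.get('/health', (_req, res) => {
  res.status(200).json({ status: 'ok', uptime: process.uptime() })
})

app.listen(3000)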

🎯 Performance Optimization

Multi-stage Build Benefits

The Dockerfile uses multi-stage builds to:

  • Reduce final image size by ~60%
  • Separate build and runtime dependencies
  • Improve build caching

Image Size Comparison

  • Without optimization: ~800MB
  • With multi-stage build: ~300MB

Build with BuildKit (faster builds)

DOCKER_BUILDKIT=1 docker build -t localmind:latest .

🆘 Getting Help

If you encounter issues:

  1. Check logs: docker logs localmind-app
  2. Verify environment: docker exec localmind-app env
  3. Open an issue: GitHub Issues
  4. Community support: [Discord/Forum link]

πŸ“ Additional Resources


🎉 You're all set! Your LocalMind instance is now running in Docker.

πŸ“ Changelog

[v1.0.0] - 2024-01-15

Added

  • 🎉 Initial release of LocalMind
  • 🧠 Support for Ollama local models
  • ☁️ Cloud AI integrations (Gemini, OpenAI, Groq, RouterAI)
  • 📚 RAG with Excel/CSV uploads
  • 🌐 LocalTunnel and Ngrok support
  • 🔐 JWT authentication
  • 💬 Real-time chat interface
  • 📊 Usage analytics
  • 🎨 Modern React UI with Tailwind CSS

Security

  • Implemented bcrypt password hashing
  • Added CORS protection
  • Rate limiting for API endpoints
  • Input validation and sanitization


Made with ⚑ by NexGenStudioDev
