
💬 Local LLM Chatbot

A modular chatbot powered by a local Large Language Model (LLM), served via Docker, and designed for privacy, portability, and edge deployment.
Ideal for Raspberry Pi or cloud hosting (e.g., AWS), this project runs a Mistral-based LLM backend with a lightweight HTML/JS frontend, all behind an Apache2 reverse proxy.


▶️ Watch the demo video on LinkedIn


🚀 Features

  • 🧠 Local LLM Inference: Runs Mistral or similar models via Ollama, no external APIs or internet access required.
  • 🐳 Dockerized Backend: Isolated Python-based LLM backend, easily portable across environments.
  • 🌐 Simple Web UI: Minimalist HTML/CSS/JS interface for chatting in any browser.
  • 🔄 Reverse Proxy Ready: Integrated with Apache2 for secure access via HTTPS.
  • 🍓 Raspberry Pi Optimized: Designed for ARM devices with low memory footprint.
  • ☁️ Cloud Deployable: Tested on AWS EC2 and scalable to VPS or local networks.
  • 🔐 Privacy First: All processing happens locally; your data never leaves your machine.
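As a minimal sketch of the local-inference path, the snippet below talks to Ollama's default REST API (`/api/generate` on port 11434). The helper names and the one-shot `ask` call are illustrative assumptions, not code from this repository:

```python
import json
import urllib.request

# Ollama's default local endpoint (assumed; no external API involved)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def collect_stream(ndjson_lines) -> str:
    """Join the token fragments of a streamed (NDJSON) Ollama response."""
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

def ask(model: str, prompt: str) -> str:
    """One-shot, non-streaming call to the local Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running Ollama server with the model pulled
    print(ask("mistral", "Why is local inference good for privacy?"))
```

Streaming (the default in Ollama) returns one JSON object per line; `collect_stream` shows how those fragments reassemble into the full reply.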

🧱 Tech Stack

  • Python 3 + FastAPI
  • Ollama for local LLMs (e.g., mistral, llama2)
  • Docker & Docker Compose
  • Apache2 reverse proxy with SSL
  • HTML/CSS/JS frontend
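For the reverse-proxy layer, a vhost along these lines forwards HTTPS traffic to the backend container. This is a hedged sketch: the domain, certificate paths, and backend port 8000 are placeholders, not values from this repository, and it assumes `mod_proxy`, `mod_proxy_http`, and `mod_ssl` are enabled (`a2enmod proxy proxy_http ssl`):

```apache
<VirtualHost *:443>
    ServerName chat.example.com                        # placeholder domain
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/chat.pem      # placeholder paths
    SSLCertificateKeyFile /etc/ssl/private/chat.key

    # Forward all traffic to the Dockerized backend (port is an assumption)
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>
```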

📂 Project Structure

```
chatBot/
├── backend/              # Python API with LLM interaction
│   ├── app.py
│   └── Dockerfile
├── frontend/             # HTML/JS chat interface
│   └── index.html
├── notebooks/            # Documentation
├── docker-compose.yml    # Service orchestration
└── README.md
```
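The orchestration in `docker-compose.yml` can be sketched roughly as below. This is an assumed layout for illustration only (service names, ports, and volumes are not taken from the repository's actual file):

```yaml
# Illustrative sketch, not the repository's docker-compose.yml
services:
  ollama:
    image: ollama/ollama            # official Ollama image
    volumes:
      - ollama_data:/root/.ollama   # persist downloaded models
  backend:
    build: ./backend                # builds backend/Dockerfile
    ports:
      - "8000:8000"                 # assumed API port
    environment:
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      - ollama
volumes:
  ollama_data:
```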

📌 Documentation

| Section | Link |
| --- | --- |
| 🦙 Useful Ollama commands | ollama_commands_cheat_sheet.md |
| ⚙️ Deployment guide on MacBook | ollama_setup_on_MacBook.md |
| ⚙️ Deployment guide on Raspberry Pi | tinyllama_setup_on_raspberry_pi.md |
| 🧪 Testing & benchmarks | (coming soon) |
| 🗂️ Useful Git commands | git_cheat_sheet.md |

🧭 Getting Started

```bash
git clone https://github.com/Janos11/Local-LLM-Backend-Container.git
cd Local-LLM-Backend-Container
docker compose up
```
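The backend can take a while to answer on first start (Ollama has to load the model), so it can help to poll it before opening the web UI. A minimal sketch, assuming the backend listens on a hypothetical `http://localhost:8000/` (the `probe` parameter exists so the function can be tested without a running server):

```python
import time
import urllib.error
import urllib.request

def wait_for_backend(url, timeout=60.0, interval=2.0, probe=None):
    """Poll `url` until it answers or `timeout` seconds elapse.

    `probe` defaults to an HTTP GET via urllib; it can be swapped out
    for testing. Returns True once the probe succeeds, False on timeout.
    """
    if probe is None:
        def probe(u):
            try:
                with urllib.request.urlopen(u, timeout=5) as resp:
                    return resp.status < 500
            except (urllib.error.URLError, OSError):
                return False
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe(url):
            return True
        time.sleep(interval)
    return False

if __name__ == "__main__":
    # Hypothetical backend address; adjust to your compose setup
    print("backend up:", wait_for_backend("http://localhost:8000/"))
```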

🌍 Use Cases

  • Personal assistant without giving away your data
  • Offline chatbot for travel, remote sites, or IoT
  • Embedded interface for smart devices or terminals
  • Private family or team chat interface

📎 Related Projects

Automated Apache IP Update: a script that keeps the reverse proxy's configuration in sync with a dynamic IP.

🧠 Why This Matters

Modern AI projects often depend on external APIs, raising privacy, latency, and cost concerns. This project is built for local-first, edge-compatible deployment, a skill set highly relevant in DevOps, MLOps, and systems engineering roles, including quant firms, infrastructure teams, and R&D environments.


🤝 Contributors

**János Rostás** · 👨‍💻 Electronic & Computer Engineer (final-year student)

  • 🧠 Passionate about AI, LLMs, and RAG systems
  • 🐳 Docker & Linux power user
  • 🔧 Raspberry Pi builder | automation fanatic
  • 💻 Git & GitHub DevOps explorer
  • 📦 Loves tinkering with Ollama, containerized models, and APIs
  • 🌐 janosrostas.co.uk · 🔗 LinkedIn · 🐙 GitHub · 🐋 Docker Hub

**ChatGPT** · 🤖 AI Pair Programmer by OpenAI

  • 💡 Collaborates on brainstorming, prototyping, and debugging
  • 📚 Built on a foundation of global programming knowledge
  • 🔍 Assists with everything from low-level scripting to high-level LLM orchestration
