
LLM Chat

A simple web-based AI chat application that uses a local LLM. It's built with:

  • Docker Model Runner for local LLM serving
  • FastAPI for the backend API
  • Vanilla JavaScript for the frontend UI

This is a demo project intended for learning and exploration; see the related blog post: Building a chat application with a local LLM from scratch.
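
The pieces fit together like this: the frontend posts a chat message to the FastAPI backend, and the backend forwards it to the Model Runner's OpenAI-compatible API. Below is a minimal sketch of such a backend route. It is an illustration, not this repo's actual code: the Model Runner base URL, the /api/chat route, and the payload shape are all assumptions.

# Minimal sketch, not the repo's actual backend; the base URL, route,
# and payload shape are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()

# Docker Model Runner exposes an OpenAI-compatible API; this is the
# address typically reachable from inside other containers.
client = OpenAI(
    base_url="http://model-runner.docker.internal/engines/v1",
    api_key="not-needed",  # Model Runner ignores the key, but the client requires one
)

class ChatRequest(BaseModel):
    message: str

@app.post("/api/chat")  # hypothetical route name
def chat(req: ChatRequest):
    completion = client.chat.completions.create(
        model="hf.co/bartowski/Qwen2.5-0.5B-Instruct-GGUF",  # see Model configuration
        messages=[{"role": "user", "content": req.message}],
    )
    return {"reply": completion.choices[0].message.content}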

Requirements

  • Docker (Desktop or Engine) with Docker Model Runner enabled, for serving the model locally

Quick start

  1. Start the application:

    $ docker compose up
  2. Open your browser and navigate to http://localhost:8000

The FastAPI backend will be available at http://localhost:8001.
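
To sanity-check the backend on its own, you can call it directly from Python. The /api/chat route and JSON shape below are hypothetical; check the backend code for the real ones:

# Hypothetical smoke test; the route and payload shape are assumptions.
import httpx

resp = httpx.post(
    "http://localhost:8001/api/chat",
    json={"message": "Hello!"},
    timeout=60.0,  # even small local models can take a moment to respond
)
print(resp.json())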

Model configuration

You can configure which model is served by changing the model field in docker-compose.yml:

models:
  llm:
    model: hf.co/bartowski/Qwen2.5-0.5B-Instruct-GGUF
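
An hf.co/... reference tells Docker Model Runner to pull a GGUF build straight from Hugging Face; models from Docker Hub's ai/ namespace work the same way. Compose can also hand the model's endpoint and name to the backend as environment variables. A sketch of that wiring, with assumed variable names (check the repo's compose file for the real setup):

services:
  backend:
    build: ./backend
    models:
      llm:
        endpoint_var: LLM_URL        # backend reads the Model Runner URL here
        model_var: LLM_MODEL_NAME    # ...and the configured model name here

models:
  llm:
    model: hf.co/bartowski/Qwen2.5-0.5B-Instruct-GGUF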

Project structure

llm-chat/
├── backend/          # FastAPI backend application
├── frontend/         # Static HTML/CSS/JS frontend
└── playground/       # Code examples for different LLM backends
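
On the theme the playground explores: many local LLM servers expose an OpenAI-compatible API, so switching backends is largely a base-URL change. A standalone illustration (not the repo's playground code; the Model Runner entry assumes host-side TCP access is enabled on its default port 12434):

# Standalone sketch: the same client code against different local backends.
from openai import OpenAI

BACKENDS = {
    # Docker Model Runner with host-side TCP enabled (default port 12434)
    "model-runner": "http://localhost:12434/engines/v1",
    # Ollama's OpenAI-compatible endpoint
    "ollama": "http://localhost:11434/v1",
}

def ask(backend: str, model: str, prompt: str) -> str:
    client = OpenAI(base_url=BACKENDS[backend], api_key="not-needed")
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

print(ask("model-runner", "hf.co/bartowski/Qwen2.5-0.5B-Instruct-GGUF", "Hi!"))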
