KateChat - Universal AI Chat Interface

KateChat is a universal chat bot platform similar to chat.openai.com that can be used as a base for customized chat bots. It supports multiple LLM models from various providers, with on-the-fly model switching within a chat session, external MCP servers (with OAuth), and RAG with semantic search backed by SQLite, PostgreSQL, or MS SQL.

logo

🚀 Live Demo

Experience KateChat in action with our live demo:

Try KateChat Demo →

Getting Started with Demo

To interact with all supported AI models in the demo, you'll need to provide your own API keys for:

  • AWS Bedrock - Access to Claude, Llama, and other models
  • OpenAI - GPT-4, GPT-5, and other OpenAI models
  • Yandex Foundation Models - YandexGPT and other Yandex models

📋 Note: API keys are stored locally in your browser by default and sent securely to our backend. See the Getting Started section below for detailed instructions on obtaining API keys.

Features

  • Creation of multiple chats, including a pristine (new, empty) chat
  • Chat history storage and management, message editing/deletion
  • Rich markdown formatting: code blocks, images, MathJax formulas etc.
  • Localization
  • "Switch model"/"Call other model" logic to process current chat messages with another model
  • Request cancellation to stop reasoning or web search
  • Parallel calls to other models for an assistant message to compare results
  • Image input support (drag & drop, copy-paste, etc.); images are stored on S3-compatible storage (LocalStack in the local dev environment)
  • Client-side Python code execution with Pyodide
  • Reusable @katechat/ui package that includes basic chatbot controls.
    • Usage examples are available in examples.
    • Voice-to-voice demo for OpenAI realtime WebRTC API.
  • Distributed messages processing using external queue (Redis), full-fledged production-like dev environment with docker-compose
  • User authentication (email/password, Google OAuth, GitHub OAuth)
  • Real-time communication with GraphQL subscriptions
  • Support for multiple LLM providers: AWS Bedrock, OpenAI, Yandex Foundation Models, and custom OpenAI-compatible REST API endpoints (Deepseek, local Ollama, etc.)
  • External MCP servers support (could be tested with https://github.com/github/github-mcp-server)
  • LLM tools (Web Search, Code Interpreter, Reasoning) support, custom WebSearch tool implemented using Yandex Search API
  • RAG implementation with document (PDF, DOCX, TXT) parsing by Docling and vector embeddings stored in PostgreSQL/SQLite/MS SQL Server
  • CI/CD pipeline with GitHub Actions to deploy the app to AWS
  • Demo mode: when no LLM providers are configured on the backend, AWS_BEDROCK_... or OPENAI_API_... settings are stored in local storage and sent to the backend as "x-aws-region", "x-aws-access-key-id", "x-aws-secret-access-key", and "x-openai-api-key" headers
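
The demo-mode credential forwarding described above can be sketched on the client side roughly as follows. This is an illustrative sketch only: the `buildDemoHeaders` helper and the settings shape are assumptions, and only the header names come from this README.

```typescript
// Sketch of demo-mode credential forwarding: provider settings kept in
// localStorage are attached to each backend request as custom headers.
// Illustrative helper, not the actual KateChat client code.

interface DemoSettings {
  awsRegion?: string;
  awsAccessKeyId?: string;
  awsSecretAccessKey?: string;
  openaiApiKey?: string;
}

export function buildDemoHeaders(settings: DemoSettings): Record<string, string> {
  const headers: Record<string, string> = {};
  if (settings.awsRegion) headers["x-aws-region"] = settings.awsRegion;
  if (settings.awsAccessKeyId) headers["x-aws-access-key-id"] = settings.awsAccessKeyId;
  if (settings.awsSecretAccessKey) headers["x-aws-secret-access-key"] = settings.awsSecretAccessKey;
  if (settings.openaiApiKey) headers["x-openai-api-key"] = settings.openaiApiKey;
  return headers;
}
```

Only the keys that are actually set in local storage are forwarded, so the backend can fall back to its own configuration when a header is absent.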

TODO

  • Introduce a chat folders hierarchy (with customized color/icon) under pinned folders; finalize paging for pinned chats
  • Add status update timestamps to document processing; load page count and show it together with total processing time and average processing speed
  • Add voice-to-voice interaction for OpenAI realtime models, put basic controls to katechat/ui and extend OpenAI protocol in main API.
  • Rust API sync: add images generation support, Library, admin API. Migrate to OpenAI protocol for OpenAI, Yandex and Custom models (https://github.com/YanceyOfficial/rs-openai).
  • Switch OpenAI "gpt-image..." models to the Responses API; use an image placeholder and, instead of waiting for the response in a loop, use a new request queue with setTimeout and publishMessage with the result
  • Google Vertex AI provider support
  • Finish "Forgot password?" logic for local login
  • @katechat/ui chat bot demo with animated UI and custom actions buttons (plugins={[Actions]}) in chat to ask weather report tool or fill some form
  • SerpApi for Web Search (new setting in UI)
  • Python API (FastAPI)
  • MySQL: check whether https://github.com/stephenc222/mysql_vss/ could be used for RAG

Tech Stack

Frontend

  • React with TypeScript
  • Mantine UI library
  • Apollo Client for GraphQL
  • GraphQL code generation
  • Real-time updates with GraphQL subscriptions (WebSockets)

Backend

  • Node.js with TypeScript
  • TypeORM for persistence
  • Express.js for API server
  • GraphQL with Apollo Server
  • AWS Bedrock for AI model integrations
  • OpenAI API for AI model integrations
  • Jest for testing

Project Structure

The project consists of several parts:

  1. API - Node.js GraphQL API server. An alternative backend implementation in Rust also exists; a Python one is planned.
  2. Client - Universal web interface
  3. Database - any TypeORM compatible RDBMS (PostgreSQL, MySQL, SQLite, etc.)
  4. Redis - for message queue and caching (optional, but recommended for production)

Customization

  • The API configuration is centralized in api/src/global-config.ts. Defaults are merged with an optional customization.json placed in the API folder or its parent (use the provided api/customization.example.json as a template).
  • Supported overrides: demo limits, enabled AI providers, feature flags (images generation, RAG, MCP), AI defaults (temperature, max tokens, top_p, context and summarization limits), admin emails, app defaults, and optional initial custom models/MCP servers (API keys pulled from env vars such as DEEPSEEK_API_KEY).
  • Only deployment/security values are loaded from .env: PORT, NODE_ENV, ALLOWED_ORIGINS, LOG_LEVEL, CALLBACK_URL_BASE, FRONTEND_URL, JWT_SECRET, SESSION_SECRET, RECAPTCHA_SECRET_KEY, OAuth client secrets, DB/S3/SQS/AWS Bedrock/OpenAI/Yandex credentials.
  • The client reads customization.json (current or parent folder; see client/customization.example.json) to override brand colors, font family, app title, footer links, and the chat AI-usage notice without changing code.
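
To make the override mechanism concrete, a customization file might look like the sketch below. All key names and values here are illustrative guesses derived from the override list above; the authoritative field names are in the provided api/customization.example.json template.

```json
{
  "demoLimits": { "maxChats": 5, "maxMessagesPerChat": 100 },
  "enabledProviders": ["aws_bedrock", "open_ai"],
  "features": { "imagesGeneration": true, "rag": true, "mcp": false },
  "aiDefaults": { "temperature": 0.7, "maxTokens": 2048, "topP": 0.9 },
  "adminEmails": ["admin@example.com"]
}
```

Defaults from api/src/global-config.ts are merged with this file, so only the keys you want to override need to be present.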

Getting Started

Prerequisites

  • Node.js (v20+)
  • Access to at least one LLM provider (AWS Bedrock, OpenAI, Yandex Foundation Models, or a custom OpenAI-compatible endpoint)
  • Docker and Docker Compose (optional, for development environment)

Quick Start

  1. Clone the repository
git clone https://github.com/artiz/kate-chat.git
cd kate-chat
npm install
npm run dev

The app will be available at http://localhost:3000. There you can use your own OpenAI API key, AWS Bedrock credentials, or Yandex Foundation Models credentials to connect to cloud models. Local Ollama-like models can be added as Custom models.

Production-like environment using Docker

Add the following to your /etc/hosts file:

127.0.0.1       katechat.dev.com

Then run the following commands:

export COMPOSE_BAKE=true
npm install
npm run build:client
docker compose up --build

The app will be available at http://katechat.dev.com

Development Mode

To run the projects in development mode:

Default Node.js API/Client

npm install
docker compose up redis localstack postgres mysql mssql -d
npm run dev

Documents processor (Python)

python -m venv document-processor/.venv
source document-processor/.venv/bin/activate
pip install -r document-processor/requirements.txt
npm run dev:document_processor

Rust API (experiment)

  1. Server
cd api-rust
diesel migration run
cargo build
cargo run
  2. Client
APP_API_URL=http://localhost:4001  APP_WS_URL=http://localhost:4002 npm run dev:client

API DB Migrations

  • Create new migration
docker compose up redis localstack postgres mysql mssql -d
npm run migration:generate <migration name>
  • Apply migrations (run automatically at app start, but can be invoked manually to test)
npm run migration:run

NOTE: do not update more than one table definition at once; SQLite sometimes applies migrations incorrectly due to "temporary_xxx" table creation.

NOTE: do not use more than one foreign key with ON DELETE CASCADE in a single table for MS SQL, or use NO ACTION as a fallback:

@ManyToOne(() => Message, { onDelete: DB_TYPE == "mssql" ? "NO ACTION" : "CASCADE" })

Production Build

npm run install:all
npm run build

Docker Build

docker build -t katechat-api ./ -f api/Dockerfile  
docker run --env-file=./api/.env  -p4000:4000 katechat-api 
docker build -t katechat-client --build-arg APP_API_URL=http://localhost:4000 --build-arg APP_WS_URL=http://localhost:4000 ./ -f client/Dockerfile  
docker run -p3000:80 katechat-client

All-in-one service

docker build -t katechat-app ./ -f infrastructure/services/katechat-app/Dockerfile

docker run -it --rm --pid=host --env-file=./api/.env \
 --env PORT=80 \
 --env NODE_ENV=production \
 --env ALLOWED_ORIGINS="*" \
 --env REDIS_URL="redis://host.docker.internal:6379" \
 --env S3_ENDPOINT="http://host.docker.internal:4566" \
 --env SQS_ENDPOINT="http://host.docker.internal:4566" \
 --env DB_URL="postgres://katechat:katechat@host.docker.internal:5432/katechat" \
 --env CALLBACK_URL_BASE="http://localhost" \
 --env FRONTEND_URL="http://localhost" \
 --env DB_MIGRATIONS_PATH="./db-migrations/*-*.js" \
 -p80:80 katechat-app

Document processor

DOCKER_BUILDKIT=1 docker build -t katechat-document-processor ./ -f infrastructure/services/katechat-document-processor/Dockerfile

docker run -it --rm --pid=host --env-file=./document-processor/.env \
 --env PORT=8080 \
 --env NODE_ENV=production \
 --env REDIS_URL="redis://host.docker.internal:6379" \
 --env S3_ENDPOINT="http://host.docker.internal:4566" \
 --env SQS_ENDPOINT="http://host.docker.internal:4566" \
 -p8080:8080 katechat-document-processor

Environment setup

The app can be tuned to your needs with environment variables:

cp api/.env.example api/.env
cp api-rust/.env.example api-rust/.env
cp client/.env.example client/.env

Edit the .env files with your configuration settings.

Admin Dashboard

KateChat includes an admin dashboard for managing users and viewing system statistics. Admin access is controlled by email addresses specified in the DEFAULT_ADMIN_EMAILS environment variable.

Admin Features

  • User Management: View all registered users with pagination and search
  • System Statistics: Monitor total users, chats, and models
  • Role-based Access: Automatic admin role assignment for specified email addresses

Configuring Admin Access

  1. Set the DEFAULT_ADMIN_EMAILS environment variable in your .env file:
    DEFAULT_ADMIN_EMAILS=admin@example.com,another-admin@example.com
  2. Users with these email addresses will automatically receive admin privileges upon:
    • Registration
    • Login (existing users)
    • OAuth authentication (Google/GitHub)
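
The admin assignment rule above amounts to a membership check against the comma-separated DEFAULT_ADMIN_EMAILS list. A minimal sketch (the function name and case-insensitive normalization are assumptions, not the actual API code):

```typescript
// Sketch: decide whether a user receives the admin role based on the
// DEFAULT_ADMIN_EMAILS environment variable (comma-separated list).
// Illustrative only; the real logic lives in the API server.

export function isDefaultAdmin(email: string, adminEmailsEnv = ""): boolean {
  const admins = adminEmailsEnv
    .split(",")
    .map(e => e.trim().toLowerCase())
    .filter(e => e.length > 0);
  return admins.includes(email.trim().toLowerCase());
}
```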

AI Setup

AWS Bedrock API connection

  1. Create an AWS Account

    • Visit AWS Sign-up
    • Follow the instructions to create a new AWS account
    • You'll need to provide a credit card and phone number for verification
  2. Enable AWS Bedrock Access

    • Log in to the AWS Management Console
    • Search for "Bedrock" in the services search bar
    • Click on "Amazon Bedrock"
    • Click on "Model access" in the left navigation
    • Select the models you want to use (e.g., Claude, Llama 2)
    • Click "Request model access" and follow the approval process
  3. Create an IAM User for API Access

    • Go to the IAM Console
    • Click "Users" in the left navigation and then "Create user"
    • Enter a user name (e.g., "bedrock-api-user")
    • For permissions, select "Attach policies directly"
    • Search for and select "AmazonBedrockFullAccess"
    • Complete the user creation process
  4. Generate Access Keys

    • From the user details page, navigate to the "Security credentials" tab
    • Under "Access keys", click "Create access key"
    • Select "Command Line Interface (CLI)" as the use case
    • Click through the confirmation and create the access key
    • IMPORTANT: Download the CSV file or copy the "Access key ID" and "Secret access key" values immediately. You won't be able to view the secret key again.
  5. Configure Your Environment

    • Open the .env file in the api directory
    • Add your AWS credentials:
      AWS_BEDROCK_REGION=us-east-1  # or your preferred region
      AWS_BEDROCK_ACCESS_KEY_ID=your_access_key_id
      AWS_BEDROCK_SECRET_ACCESS_KEY=your_secret_access_key
  6. Verify AWS Region Availability

    • Not all Bedrock models are available in every AWS region
    • Check the AWS Bedrock documentation for model availability by region
    • Make sure to set the AWS_BEDROCK_REGION to a region that supports your desired models
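
As a sketch of how the three variables from step 5 map onto an AWS SDK v3 client configuration, consider the pure helper below. The helper itself is illustrative (only the env var names and the us-east-1 default come from this README), and being a pure function it can be unit-tested without network access.

```typescript
// Sketch: translate the AWS_BEDROCK_* env vars into an AWS SDK v3-style
// client configuration object. Illustrative, not the API's actual code.

interface BedrockClientConfig {
  region: string;
  credentials: { accessKeyId: string; secretAccessKey: string };
}

export function bedrockConfigFromEnv(
  env: Record<string, string | undefined>
): BedrockClientConfig {
  return {
    region: env.AWS_BEDROCK_REGION ?? "us-east-1", // default region from step 5
    credentials: {
      accessKeyId: env.AWS_BEDROCK_ACCESS_KEY_ID ?? "",
      secretAccessKey: env.AWS_BEDROCK_SECRET_ACCESS_KEY ?? "",
    },
  };
}
```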

OpenAI API connection

  1. Create an OpenAI Account

    • Visit OpenAI's website
    • Click "Sign Up" and create an account
    • Complete the verification process
  2. Generate API Key

    • Log in to your OpenAI account
    • Navigate to the API keys page
    • Click "Create new secret key"
    • Name your API key (e.g., "KateChat")
    • Copy the API key immediately - it won't be shown again
  3. Configure Your Environment

    • Open the .env file in the api directory
    • Add your OpenAI API key:
      OPENAI_API_KEY=your_openai_api_key
      OPENAI_API_URL=https://api.openai.com/v1  # Default OpenAI API URL
  4. Note on API Usage Costs

    • OpenAI charges for API usage based on the number of tokens processed
    • Different models have different pricing tiers
    • Monitor your usage through the OpenAI dashboard
    • Consider setting up usage limits to prevent unexpected charges
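
Because per-token pricing differs by model and changes over time, a cost estimate is best parameterized rather than hard-coded. A small sketch (the per-million-token prices are caller-supplied inputs, not real figures; check the OpenAI pricing page for current numbers):

```typescript
// Sketch: estimate the USD cost of an API call from token counts and
// per-million-token prices. No real prices are assumed here.

export function estimateCostUSD(
  inputTokens: number,
  outputTokens: number,
  pricePerMInputUSD: number,
  pricePerMOutputUSD: number
): number {
  return (
    (inputTokens / 1_000_000) * pricePerMInputUSD +
    (outputTokens / 1_000_000) * pricePerMOutputUSD
  );
}
```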

Custom REST API Models (Deepseek, Local Models, etc.)

KateChat supports connecting to any OpenAI-compatible REST API endpoint, allowing you to use services like Deepseek, local models running on Ollama, or other third-party providers.

Setting Up Custom Models

Custom models are configured per-model through the GraphQL API or database. Each custom model requires the following settings:

  1. Endpoint URL: The base URL of the API (e.g., https://api.deepseek.com/v1)
  2. API Key: Your authentication key for the API
  3. Model Name: The specific model identifier (e.g., deepseek-chat, llama-3-70b)
  4. Protocol: Choose between:
    • OPENAI_CHAT_COMPLETIONS - Standard OpenAI chat completions API
    • OPENAI_RESPONSES - OpenAI Responses API (for advanced features)
  5. Description: Human-readable description of the model

Example: Deepseek Configuration

Deepseek is a powerful AI model provider with an OpenAI-compatible API:

  1. Get API Key

  2. Configure Custom Model

    • Create a new Model entry with:
      apiProvider: CUSTOM_REST_API
      customSettings: {
        endpoint: "https://api.deepseek.com/v1"
        apiKey: "your_deepseek_api_key"
        modelName: "deepseek-chat"
        protocol: OPENAI_CHAT_COMPLETIONS
        description: "Deepseek Chat Model"
      }
      

Example: Local Ollama Models

For running local models with Ollama:

  1. Install Ollama

    • Visit Ollama website
    • Download and install Ollama
    • Pull a model: ollama pull llama3
    • Run the model: ollama run llama3
  2. Configure Custom Model

    • Create a new Model entry with:
      apiProvider: CUSTOM_REST_API
      customSettings: {
        endpoint: "http://localhost:11434/v1"
        apiKey: "ollama"  # Ollama doesn't require real auth
        modelName: "llama3"
        protocol: OPENAI_CHAT_COMPLETIONS
        description: "Local Llama 3 via Ollama"
      }
      

Supported Protocols

  • OPENAI_CHAT_COMPLETIONS: Standard chat completions endpoint (/chat/completions)

    • Best for most OpenAI-compatible APIs
    • Supports streaming responses
    • Tool/function calling support (if the underlying API supports it)
  • OPENAI_RESPONSES: Advanced Responses API

    • Support for complex multi-modal interactions
    • Request cancellation support
    • Web search and code interpreter tools

Screenshots

Rich Formatting

image

Images Generation

image

Call Other Model

image

RAG (Retrieval-Augmented Generation)

image

Python Code Run in browser

image

Contributing

  1. Fork the repository
  2. Create your feature branch: git checkout -b feature/my-new-feature
  3. Commit your changes: git commit -am 'Add some feature'
  4. Push to the branch: git push origin feature/my-new-feature
  5. Submit a pull request
