KateChat is a universal chat bot platform, similar to chat.openai.com, that can be used as a base for customized chat bots. The platform supports multiple LLMs from various providers and allows switching between them on the fly within a chat session.
Experience KateChat in action with our live demo:
To interact with all supported AI models in the demo, you'll need to provide your own API keys for:
- AWS Bedrock - Access to Claude, Llama, and other models
- OpenAI - GPT-4, GPT-5, and other OpenAI models
- Yandex Foundation Models - YandexGPT and other Yandex models
📋 Note: API keys are stored locally in your browser by default and sent securely to our backend. See the Getting Started section below for detailed instructions on obtaining API keys.
- Creation of multiple chats with pristine chat functionality
- Chat history storage and management, message editing/deletion
- Rich Markdown formatting: code blocks, images, MathJax formulas, etc.
- Localization
- "Switch model"/"Call other model" logic to process current chat messages with another model
- Request cancellation to stop reasoning or web search
- Parallel calls of an assistant message against other models to compare results
- Image input support (drag & drop, copy-paste, etc.); images are stored in S3-compatible storage (`localstack` on the local dev env)
- Client-side Python code execution with Pyodide
- Reusable `@katechat/ui` package that includes basic chatbot controls.
- Usage examples are available in `examples`.
- Voice-to-voice demo for the OpenAI Realtime WebRTC API
- Distributed message processing using an external queue (Redis); full-fledged, production-like dev environment with docker-compose
- User authentication (email/password, Google OAuth, GitHub OAuth)
- Real-time communication with GraphQL subscriptions
- Support for various LLM providers:
- AWS Bedrock (Amazon, Anthropic, Meta, Mistral, AI21, Cohere...)
- OpenAI
- Yandex Foundation Models with OpenAI protocol
- Custom OpenAI-compatible REST API endpoint (Deepseek, local Ollama, etc.).
- External MCP server support (can be tested with https://github.com/github/github-mcp-server)
- LLM tools support (Web Search, Code Interpreter, Reasoning); a custom Web Search tool is implemented using the Yandex Search API
- RAG implementation with document parsing (PDF, DOCX, TXT) by Docling and vector embeddings stored in PostgreSQL/SQLite/MS SQL Server
- CI/CD pipeline with GitHub Actions to deploy the app to AWS
- Demo mode when no LLM providers are configured on the backend: `AWS_BEDROCK_...` or `OPENAI_API_...` settings are stored in local storage and sent to the backend as `x-aws-region`, `x-aws-access-key-id`, `x-aws-secret-access-key`, and `x-openai-api-key` headers (see the sketch below)
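For illustration, a minimal TypeScript sketch of how demo-mode credentials kept in the browser could be forwarded with each request (the header names are the ones listed above; the storage keys and helper function are hypothetical):

```typescript
// Hypothetical helper: copy demo-mode credentials from localStorage into
// the request headers listed above before calling the backend.
function withDemoCredentials(init: RequestInit = {}): RequestInit {
  const headers = new Headers(init.headers);
  const mapping: Array<[header: string, storageKey: string]> = [
    ["x-aws-region", "AWS_BEDROCK_REGION"],
    ["x-aws-access-key-id", "AWS_BEDROCK_ACCESS_KEY_ID"],
    ["x-aws-secret-access-key", "AWS_BEDROCK_SECRET_ACCESS_KEY"],
    ["x-openai-api-key", "OPENAI_API_KEY"],
  ];
  for (const [header, storageKey] of mapping) {
    const value = localStorage.getItem(storageKey);
    if (value) headers.set(header, value);
  }
  return { ...init, headers };
}

// Usage: fetch("/graphql", withDemoCredentials({ method: "POST", body: payload }));
```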
- Introduce a chat folder hierarchy (with customized color/icon) under pinned folders; finalize paging for pinned chats
- Add status update timestamps to document processing; load the page count and show it together with the full processing time and average processing speed
- Add voice-to-voice interaction for OpenAI realtime models; put basic controls into `@katechat/ui` and extend the OpenAI protocol in the main API
- Rust API sync: add image generation support, Library, admin API. Migrate to the OpenAI protocol for OpenAI, Yandex, and Custom models (https://github.com/YanceyOfficial/rs-openai).
- Switch OpenAI "gpt-image..." models to the Responses API: use an image placeholder and, instead of waiting for the response in a loop, use a new `requests` queue with `setTimeout` and `publishMessage` with the result
- Google Vertex AI provider support
- Finish "Forgot password?" logic for local login
- `@katechat/ui` chat bot demo with animated UI and custom action buttons (`plugins={[Actions]}`) in chat to ask a weather report tool or fill in a form
- SerpApi for Web Search (new setting in UI)
- Python API (FastAPI)
- MySQL: check whether https://github.com/stephenc222/mysql_vss/ could be used for RAG
- React with TypeScript
- Mantine UI library
- Apollo Client for GraphQL
- GraphQL code generation
- Real-time updates with GraphQL subscriptions (WebSockets)
- Node.js with TypeScript
- TypeORM for persistence
- Express.js for API server
- GraphQL with Apollo Server
- AWS Bedrock for AI model integrations
- OpenAI API for AI model integrations
- Jest for testing
The project consists of several parts:
- API - Node.js GraphQL API server. An alternative backend API implementation in Rust is also available; a Python one is planned.
- Client - Universal web interface
- Database - any TypeORM compatible RDBMS (PostgreSQL, MySQL, SQLite, etc.)
- Redis - for message queue and caching (optional, but recommended for production)
- The API configuration is centralized in `api/src/global-config.ts`. Defaults are merged with an optional `customization.json` placed in the API folder or its parent (use the provided `api/customization.example.json` as a template).
- Supported overrides: demo limits, enabled AI providers, feature flags (image generation, RAG, MCP), AI defaults (temperature, max tokens, top_p, context and summarization limits), admin emails, app defaults, and optional initial custom models/MCP servers (API keys pulled from env vars such as `DEEPSEEK_API_KEY`).
- Only deployment/security values are loaded from `.env`: `PORT`, `NODE_ENV`, `ALLOWED_ORIGINS`, `LOG_LEVEL`, `CALLBACK_URL_BASE`, `FRONTEND_URL`, `JWT_SECRET`, `SESSION_SECRET`, `RECAPTCHA_SECRET_KEY`, OAuth client secrets, and DB/S3/SQS/AWS Bedrock/OpenAI/Yandex credentials.
- The client reads `customization.json` (current or parent folder; see `client/customization.example.json`) to override brand colors, font family, app title, footer links, and the chat AI-usage notice without changing code.
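For illustration, a minimal sketch of the merge behavior described above (the real logic lives in `api/src/global-config.ts`; the default values and file lookup shown here are simplified assumptions):

```typescript
import fs from "node:fs";
import path from "node:path";

// Stand-in defaults; the real ones are defined in api/src/global-config.ts.
const defaults = {
  demo: { maxChats: 5, maxMessages: 100 },
  features: { imageGeneration: true, rag: true, mcp: true },
};

// Look for customization.json in the given folder, then in its parent.
function loadCustomization(dir: string): Partial<typeof defaults> {
  for (const candidate of [dir, path.dirname(dir)]) {
    const file = path.join(candidate, "customization.json");
    if (fs.existsSync(file)) return JSON.parse(fs.readFileSync(file, "utf-8"));
  }
  return {};
}

// Values from customization.json win over the built-in defaults.
export const config = { ...defaults, ...loadCustomization(__dirname) };
```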
- Node.js (v20+)
- Connection to an LLM, any of:
- AWS Account with Bedrock access
- OpenAI API Account
- Yandex Foundation Models API key.
- Local Ollama model
- Docker and Docker Compose (optional, for development environment)
- Clone the repository
git clone https://github.com/artiz/kate-chat.git
cd kate-chat
npm install
npm run dev
App will be available at http://localhost:3000
There you can use your own OpenAI API key, AWS Bedrock credentials, or Yandex FM key to connect to cloud models.
Local Ollama-like models can be added as Custom models.
Add the following to your /etc/hosts file:
127.0.0.1 katechat.dev.com
Then run the following commands:
export COMPOSE_BAKE=true
npm install
npm run build:client
docker compose up --build

App will be available at http://katechat.dev.com
To run the projects in development mode:
npm install
docker compose up redis localstack postgres mysql mssql -d
npm run dev

Document processor:

python -m venv document-processor/.venv
source document-processor/.venv/bin/activate
pip install -r document-processor/requirements.txt
npm run dev:document_processor

- Server
cd api-rust
diesel migration run
cargo build
cargo run

- Client

APP_API_URL=http://localhost:4001 APP_WS_URL=http://localhost:4002 npm run dev:client

- Create new migration
docker compose up redis localstack postgres mysql mssql -d
npm run migration:generate <migration name>

- Apply migrations (automated at app start, but can be used for testing)

npm run migration:run

NOTE: do not update more than one table definition at once; SQLite sometimes applies migrations incorrectly due to "temporary_xxx" table creation.
NOTE: do not use more than one foreign key with ON DELETE CASCADE in one table for MS SQL, or use NO ACTION as a fallback:
@ManyToOne(() => Message, { onDelete: DB_TYPE == "mssql" ? "NO ACTION" : "CASCADE" })
npm run install:all
npm run build

docker build -t katechat-api ./ -f api/Dockerfile
docker run --env-file=./api/.env -p4000:4000 katechat-api

docker build -t katechat-client --build-arg APP_API_URL=http://localhost:4000 --build-arg APP_WS_URL=http://localhost:4000 ./ -f client/Dockerfile
docker run -p3000:80 katechat-client

All-in-one service
docker build -t katechat-app ./ -f infrastructure/services/katechat-app/Dockerfile
docker run -it --rm --pid=host --env-file=./api/.env \
--env PORT=80 \
--env NODE_ENV=production \
--env ALLOWED_ORIGINS="*" \
--env REDIS_URL="redis://host.docker.internal:6379" \
--env S3_ENDPOINT="http://host.docker.internal:4566" \
--env SQS_ENDPOINT="http://host.docker.internal:4566" \
--env DB_URL="postgres://katechat:katechat@host.docker.internal:5432/katechat" \
--env CALLBACK_URL_BASE="http://localhost" \
--env FRONTEND_URL="http://localhost" \
--env DB_MIGRATIONS_PATH="./db-migrations/*-*.js" \
-p80:80 katechat-app

Document processor
DOCKER_BUILDKIT=1 docker build -t katechat-document-processor ./ -f infrastructure/services/katechat-document-processor/Dockerfile
docker run -it --rm --pid=host --env-file=./document-processor/.env \
--env PORT=8080 \
--env NODE_ENV=production \
--env REDIS_URL="redis://host.docker.internal:6379" \
--env S3_ENDPOINT="http://host.docker.internal:4566" \
--env SQS_ENDPOINT="http://host.docker.internal:4566" \
-p8080:8080 katechat-document-processor

The app can be tuned for your needs with environment variables:
cp api/.env.example api/.env
cp api-rust/.env.example api-rust/.env
cp client/.env.example client/.env

Edit the .env files with your configuration settings.
KateChat includes an admin dashboard for managing users and viewing system statistics. Admin access is controlled by email addresses specified in the DEFAULT_ADMIN_EMAILS environment variable.
- User Management: View all registered users with pagination and search
- System Statistics: Monitor total users, chats, and models
- Role-based Access: Automatic admin role assignment for specified email addresses
- Set the `DEFAULT_ADMIN_EMAILS` environment variable in your `.env` file:

  DEFAULT_ADMIN_EMAILS=admin@example.com,another-admin@example.com
- Users with these email addresses will automatically receive admin privileges upon:
- Registration
- Login (existing users)
- OAuth authentication (Google/GitHub)
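For illustration, a rough sketch of what that check could look like (the actual assignment happens inside the API's auth flows; this helper is hypothetical):

```typescript
// Parse the comma-separated DEFAULT_ADMIN_EMAILS list once at startup.
const adminEmails = new Set(
  (process.env.DEFAULT_ADMIN_EMAILS ?? "")
    .split(",")
    .map((email) => email.trim().toLowerCase())
    .filter(Boolean)
);

// Called on registration, login, and OAuth sign-in.
function resolveRole(email: string): "admin" | "user" {
  return adminEmails.has(email.toLowerCase()) ? "admin" : "user";
}
```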
- Create an AWS Account
- Visit AWS Sign-up
- Follow the instructions to create a new AWS account
- You'll need to provide a credit card and phone number for verification
- Enable AWS Bedrock Access
- Log in to the AWS Management Console
- Search for "Bedrock" in the services search bar
- Click on "Amazon Bedrock"
- Click on "Model access" in the left navigation
- Select the models you want to use (e.g., Claude, Llama 2)
- Click "Request model access" and follow the approval process
- Create an IAM User for API Access
- Go to the IAM Console
- Click "Users" in the left navigation and then "Create user"
- Enter a user name (e.g., "bedrock-api-user")
- For permissions, select "Attach policies directly"
- Search for and select "AmazonBedrockFullAccess"
- Complete the user creation process
- Generate Access Keys
- From the user details page, navigate to the "Security credentials" tab
- Under "Access keys", click "Create access key"
- Select "Command Line Interface (CLI)" as the use case
- Click through the confirmation and create the access key
- IMPORTANT: Download the CSV file or copy the "Access key ID" and "Secret access key" values immediately. You won't be able to view the secret key again.
- Configure Your Environment
  - Open the `.env` file in the `api` directory
  - Add your AWS credentials:

    AWS_BEDROCK_REGION=us-east-1 # or your preferred region
    AWS_BEDROCK_ACCESS_KEY_ID=your_access_key_id
    AWS_BEDROCK_SECRET_ACCESS_KEY=your_secret_access_key
- Verify AWS Region Availability
- Not all Bedrock models are available in every AWS region
- Check the AWS Bedrock documentation for model availability by region
- Make sure to set `AWS_BEDROCK_REGION` to a region that supports your desired models
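To sanity-check the credentials and region before starting the app, you can list the foundation models visible to your IAM user with the AWS SDK (a standalone sketch, not part of KateChat; it reads the same env vars as `api/.env`):

```typescript
import { BedrockClient, ListFoundationModelsCommand } from "@aws-sdk/client-bedrock";

// Uses the same variables you put into api/.env.
const client = new BedrockClient({
  region: process.env.AWS_BEDROCK_REGION,
  credentials: {
    accessKeyId: process.env.AWS_BEDROCK_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_BEDROCK_SECRET_ACCESS_KEY!,
  },
});

// An AccessDenied error here usually means missing Bedrock permissions
// or a region where Bedrock is not available.
client
  .send(new ListFoundationModelsCommand({}))
  .then((res) => console.log(res.modelSummaries?.map((m) => m.modelId)));
```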
- Create an OpenAI Account
- Visit OpenAI's website
- Click "Sign Up" and create an account
- Complete the verification process
- Generate API Key
- Log in to your OpenAI account
- Navigate to the API keys page
- Click "Create new secret key"
- Name your API key (e.g., "KateChat")
- Copy the API key immediately - it won't be shown again
- Configure Your Environment
  - Open the `.env` file in the `api` directory
  - Add your OpenAI API key:

    OPENAI_API_KEY=your_openai_api_key
    OPENAI_API_URL=https://api.openai.com/v1 # Default OpenAI API URL
- Note on API Usage Costs
- OpenAI charges for API usage based on the number of tokens processed
- Different models have different pricing tiers
- Monitor your usage through the OpenAI dashboard
- Consider setting up usage limits to prevent unexpected charges
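Before wiring the key into KateChat, you can verify it with a direct call to the models endpoint (a standalone sketch; any HTTP client works):

```typescript
// Lists the models available to your key; HTTP 401 means the key is invalid.
async function checkOpenAiKey(): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  });
  const body = await res.json();
  console.log(res.status, body.data?.map((m: { id: string }) => m.id));
}

checkOpenAiKey();
```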
KateChat supports connecting to any OpenAI-compatible REST API endpoint, allowing you to use services like Deepseek, local models running on Ollama, or other third-party providers.
Custom models are configured per-model through the GraphQL API or database. Each custom model requires the following settings:
- Endpoint URL: The base URL of the API (e.g., `https://api.deepseek.com/v1`)
- API Key: Your authentication key for the API
- Model Name: The specific model identifier (e.g., `deepseek-chat`, `llama-3-70b`)
- Protocol: Choose between:
  - `OPENAI_CHAT_COMPLETIONS` - Standard OpenAI chat completions API
  - `OPENAI_RESPONSES` - OpenAI Responses API (for advanced features)
- Description: Human-readable description of the model
Deepseek is a powerful AI model provider with an OpenAI-compatible API:
- Get API Key
- Visit Deepseek Platform
- Sign up and create an API key
- Copy your API key
- Configure Custom Model
  - Create a new Model entry with:

    apiProvider: CUSTOM_REST_API
    customSettings: {
      endpoint: "https://api.deepseek.com/v1"
      apiKey: "your_deepseek_api_key"
      modelName: "deepseek-chat"
      protocol: OPENAI_CHAT_COMPLETIONS
      description: "Deepseek Chat Model"
    }
For running local models with Ollama:
- Install Ollama
- Visit Ollama website
- Download and install Ollama
- Pull a model: `ollama pull llama3`
- Run the model to verify it works: `ollama run llama3`
- Configure Custom Model
  - Create a new Model entry with:

    apiProvider: CUSTOM_REST_API
    customSettings: {
      endpoint: "http://localhost:11434/v1"
      apiKey: "ollama" # Ollama doesn't require real auth
      modelName: "llama3"
      protocol: OPENAI_CHAT_COMPLETIONS
      description: "Local Llama 3 via Ollama"
    }
- `OPENAI_CHAT_COMPLETIONS`: Standard chat completions endpoint (`/chat/completions`)
  - Best for most OpenAI-compatible APIs
  - Supports streaming responses
  - Tool/function calling support (if the underlying API supports it)
- `OPENAI_RESPONSES`: Advanced Responses API
  - Support for complex multi-modal interactions
  - Request cancellation support
  - Web search and code interpreter tools
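For reference, a minimal request in the `OPENAI_CHAT_COMPLETIONS` style that any compatible endpoint (OpenAI, Deepseek, a local Ollama server, ...) should accept; the endpoint, key, and model below are placeholders:

```typescript
const endpoint = "http://localhost:11434/v1"; // e.g. a local Ollama server
const apiKey = "ollama"; // placeholder; hosted providers need a real key

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${endpoint}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: prompt }],
      stream: false,
    }),
  });
  const data = await res.json();
  // Standard chat-completions response shape: choices[0].message.content
  return data.choices[0].message.content;
}

chat("Say hello").then(console.log);
```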
- Fork the repository
- Create your feature branch: `git checkout -b feature/my-new-feature`
- Commit your changes: `git commit -am 'Add some feature'`
- Push to the branch: `git push origin feature/my-new-feature`
- Submit a pull request





