Make Discord your LLM frontend - Supports any OpenAI compatible API (Ollama, xAI, Gemini, OpenRouter and more)
EttyKitty/llmcord
llmcord (Etty's fork⭐)

Talk to LLMs with your friends!

llmcord transforms Discord into a collaborative LLM frontend. It works with practically any LLM, remote or locally hosted.

New things added in this fork are marked with ⭐

Features

Chat System:

1. Reply Chains (Default)

The classic, organized way to chat. The bot only sees the specific chain of messages you reply to.

  • Start: @ the bot to start a new conversation.
  • Continue: Reply to the bot's message to continue the chat.
  • Branch: You can reply to the same message multiple times to create different conversation branches.
  • Thread: You can branch conversations into Discord Threads.

2. Direct Messages

Enable allow_dms in the config to make the bot respond to Direct Messages.

  • No Replies or Mentions Needed: Just talk normally.
  • Context: The bot reads the entire DM history (up to the token limit).
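Enabling DM chat is a one-line config change. A minimal config.yaml fragment might look like this (the key name comes from the feature description above; its exact placement in the file is an assumption, check config.default.yaml):

```yaml
# Respond to Direct Messages without requiring a mention or reply.
allow_dms: true
```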

3. ⭐Channel Context

Enable use_channel_context in the config to make the bot behave like a standard chatbot.

  • Context: The bot reads the entire channel history (up to the token limit).
  • Hybrid Mode: If you enable force_reply_chains, the bot reads the whole channel unless you reply to a specific message, allowing you to isolate conversations when needed.
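The two channel-context options above combine as a config.yaml fragment along these lines (key names from the feature description; placement in the file is an assumption):

```yaml
# Read the whole channel history as context, like a standard chatbot.
use_channel_context: true
# Hybrid mode: replying to a specific message falls back to an
# isolated reply chain, so you can branch off when needed.
force_reply_chains: true
```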

API support:

llmcord supports remote providers such as xAI, Gemini, and OpenRouter, local servers such as Ollama, or any other OpenAI-compatible API server.
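As a sketch of what provider entries might look like in config.yaml (this layout follows upstream llmcord's config.default.yaml conventions; the API key values are placeholders and your fork's exact keys may differ):

```yaml
providers:
  ollama:
    base_url: http://localhost:11434/v1
    api_key: ollama  # local servers typically accept any placeholder key
  openrouter:
    base_url: https://openrouter.ai/api/v1
    api_key: sk-or-your-key-here  # placeholder, use your real key
```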


And more:

  • Multi-Modal Support: Handles images (Vision models) and text file attachments (.txt, .py, .c, etc.).
  • Customizable Personality: Pre-history prompt support with ⭐dynamic placeholders (like {guild_name} or {user_roles}).
  • Identity Aware: Natively uses the name API parameter for OpenAI/xAI. ⭐For other providers, the prefix_users option automatically prepends user IDs and Display Names to messages so the bot knows who is speaking.
  • Flexible Model Switching: Change the global model with /config model, or ⭐assign specific models to specific channels (e.g., a coding model for #dev) using /config channelmodel.
  • Efficient Caching: Caches message data in a size-managed (no memory leaks) and mutex-protected (no race conditions) global dictionary to maximize efficiency and minimize Discord API calls.
  • Fully Asynchronous
  • Zero-Hassle Launcher: Included starter.bat automatically creates a virtual environment, installs/updates dependencies, and handles auto-restarts.
  • Smart Context Management: Uses litellm to enforce max_input_tokens, automatically dropping older messages to ensure you never hit API limits.
  • Advanced Prompting: Supports a post_history_prompt to inject instructions at the very end of the context, perfect for reinforcing formatting rules or jailbreaks.
  • Clean Output: Automatically strips <think> tags from reasoning models (like DeepSeek R1) and includes a sanitize_response option to convert smart typography to ASCII and collapse excessive whitespace.
  • Multi-Modal Output Fix: Mistral's magistral model notably responds with a multi-modal list containing separate reasoning and text outputs. llmcord now accepts these responses without errors.
  • Hot Reloading: Use /config reload to reload config.yaml settings without restarting the bot.
  • LLM Tools: The LLM has access to tools for enhanced capabilities:
    • web_search: Perform internet searches using DuckDuckGo for real-time information.
    • open_link: Fetch and extract the main content of web pages, with security measures to prevent access to localhost or private networks.
    • read_message_link: Read a specific Discord message via its link, including surrounding context.
    • ignore_message: Allow the LLM to ignore messages that don't require a response or violate instructions.
  • Request Logging: All LLM API requests are logged to logs/llm_requests.json with sensitive information redacted for debugging and monitoring purposes.
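The output-cleanup features above (stripping <think> tags, converting smart typography to ASCII, collapsing excessive whitespace) can be sketched in a few lines of Python. This is an illustrative sketch under the behavior described in the list, not the bot's actual implementation:

```python
import re

def sanitize_response(text: str) -> str:
    """Illustrative cleanup similar to llmcord's sanitize_response option."""
    # Drop hidden reasoning emitted by models like DeepSeek R1.
    text = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    # Map common smart typography to plain ASCII equivalents.
    replacements = {
        "\u2018": "'", "\u2019": "'",   # curly single quotes
        "\u201c": '"', "\u201d": '"',   # curly double quotes
        "\u2013": "-", "\u2014": "-",   # en/em dashes
        "\u2026": "...",                # ellipsis
    }
    for smart, plain in replacements.items():
        text = text.replace(smart, plain)
    # Collapse runs of three or more newlines into a single blank line.
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()
```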

Setting up and Running

1. Clone the repo:

git clone https://github.com/jakobdylanc/llmcord

2. Create a copy of config.default.yaml named config.yaml and set it up.

3. Run the bot.

Using the Starter (Recommended for Windows):

Simply launch `starter.bat`. It will:
1. Create a secure virtual environment.
2. Install/Update all dependencies automatically.
3. Launch the bot (and auto-restart it if you reload configs).

Additionally, while the bot is running you can control it via console commands: type `reload` to reload the project without restarting, or `exit`, `stop`, or `quit` to stop the bot gracefully.

Using Docker:

docker compose up

Using Python manually:

python -m pip install -e .
python -m llmcord

Notes

  • If you're having issues, try jakobdylanc's suggestions here
