llmcord transforms Discord into a collaborative LLM frontend. It works with practically any LLM, remote or locally hosted.
### 1. Reply Chains (Default)
The classic, organized way to chat. The bot only sees the specific chain of messages you reply to.
- Start: @ the bot to start a new conversation.
- Continue: Reply to the bot's message to continue the chat.
- Branch: You can reply to the same message multiple times to create different conversation branches.
- Thread: You can branch conversations into Discord Threads.
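Conceptually, the bot builds context by walking the reply chain backwards from the triggering message, which is also what makes branching work: two replies to the same message produce two independent chains. A minimal sketch of that traversal (illustrative only, not llmcord's actual code; plain dicts stand in for Discord message objects):

```python
def build_chain(messages, start_id):
    """Walk reply references backwards, collecting one conversation branch."""
    by_id = {m["id"]: m for m in messages}
    chain = []
    current = by_id.get(start_id)
    while current is not None:
        chain.append(current["content"])
        current = by_id.get(current.get("reply_to"))  # follow the reply link
    return list(reversed(chain))  # oldest message first

# Two branches can share a common ancestor:
history = [
    {"id": 1, "content": "@bot hello", "reply_to": None},
    {"id": 2, "content": "Hi! How can I help?", "reply_to": 1},
    {"id": 3, "content": "Tell me a joke", "reply_to": 2},
    {"id": 4, "content": "Summarize our chat", "reply_to": 2},  # second branch
]
print(build_chain(history, 3))  # ['@bot hello', 'Hi! How can I help?', 'Tell me a joke']
```

Message 4 yields a different chain than message 3, even though both descend from message 2 — that is the branching behavior described above.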
### 2. Direct Messages
Enable `allow_dms` in the config to make the bot respond to Direct Messages.
- No Replies or Mentions Needed: Just talk normally.
- Context: The bot reads the entire DM history (up to the token limit).
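Capping history at a token limit amounts to dropping the oldest messages until the rest fit the budget. A rough sketch of that idea, with word count as a stand-in for a real tokenizer (llmcord itself uses litellm for token counting, per the features below):

```python
def trim_history(messages, max_tokens, count_tokens=lambda s: len(s.split())):
    """Drop the oldest messages until the remaining history fits the budget."""
    total = sum(count_tokens(m) for m in messages)
    trimmed = list(messages)
    while trimmed and total > max_tokens:
        total -= count_tokens(trimmed.pop(0))  # drop the oldest first
    return trimmed

msgs = ["one two three", "four five", "six seven eight nine"]
print(trim_history(msgs, 7))  # ['four five', 'six seven eight nine']
```

Newer messages always survive, so the bot keeps the most recent context when a long DM history overflows the limit.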
### 3. ⭐Channel Context
Enable `use_channel_context` in the config to make the bot behave like a standard chatbot.
- Context: The bot reads the entire channel history (up to the token limit).
- Hybrid Mode: If you enable `force_reply_chains`, the bot reads the whole channel unless you reply to a specific message, allowing you to isolate conversations when needed.
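Putting the three modes together, the relevant `config.yaml` flags might look like this (the key names `allow_dms`, `use_channel_context`, and `force_reply_chains` come from the sections above; check `config.default.yaml` for the authoritative names and defaults):

```yaml
allow_dms: true            # respond to Direct Messages
use_channel_context: true  # read whole-channel history like a standard chatbot
force_reply_chains: true   # hybrid mode: replying still isolates a conversation
```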
llmcord supports remote model providers, locally hosted models, and any other OpenAI compatible API server.
- Multi-Modal Support: Handles images (Vision models) and text file attachments (`.txt`, `.py`, `.c`, etc.).
- Customizable Personality: Pre-history prompt support with ⭐dynamic placeholders (like `{guild_name}` or `{user_roles}`).
- Identity Aware: Natively uses the `name` API parameter for OpenAI/xAI. ⭐For other providers, the `prefix_users` option automatically prepends user IDs and Display Names to messages so the bot knows who is speaking.
- Flexible Model Switching: Change the global model with `/config model`, or ⭐assign specific models to specific channels (e.g., a coding model for #dev) using `/config channelmodel`.
- Efficient Caching: Caches message data in a size-managed (no memory leaks) and mutex-protected (no race conditions) global dictionary to maximize efficiency and minimize Discord API calls.
- Fully Asynchronous
- ⭐Zero-Hassle Launcher: The included `starter.bat` automatically creates a virtual environment, installs/updates dependencies, and handles auto-restarts.
- ⭐Smart Context Management: Uses `litellm` to enforce `max_input_tokens`, automatically dropping older messages to ensure you never hit API limits.
- ⭐Advanced Prompting: Supports a `post_history_prompt` to inject instructions at the very end of the context, perfect for reinforcing formatting rules or jailbreaks.
- ⭐Clean Output: Automatically strips `<think>` tags from reasoning models (like DeepSeek R1) and includes a `sanitize_response` option to convert smart typography to ASCII and collapse excessive whitespace.
- ⭐Multi-Modal Output Fix: Mistral's `magistral` model notably responds with a multi-modal list that includes reasoning and text outputs. These responses are now accepted by llmcord without errors.
- ⭐Hot Reloading: Use `/config reload` to reload `config.yaml` settings without restarting the bot.
- ⭐LLM Tools: The LLM has access to tools for enhanced capabilities:
  - `web_search`: Perform internet searches using DuckDuckGo for real-time information.
  - `open_link`: Fetch and extract the main content of web pages, with security measures to prevent access to localhost or private networks.
  - `read_message_link`: Read a specific Discord message via its link, including surrounding context.
  - `ignore_message`: Allow the LLM to ignore messages that don't require a response or that violate instructions.
- ⭐Request Logging: All LLM API requests are logged to `logs/llm_requests.json` with sensitive information redacted, for debugging and monitoring purposes.
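The idea behind the `<think>`-stripping and `sanitize_response` features can be sketched in a few lines of Python (illustrative only; llmcord's real implementation may differ in its regexes and edge-case handling):

```python
import re

# Smart-typography characters mapped to plain ASCII (a representative subset)
SMART_TO_ASCII = {
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u2014": "-", "\u2026": "...",  # em-dash, ellipsis
}

def clean_output(text: str) -> str:
    """Strip reasoning tags, then normalize typography and whitespace."""
    # Remove <think>...</think> blocks emitted by reasoning models
    text = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    # Convert smart typography to plain ASCII
    for smart, plain in SMART_TO_ASCII.items():
        text = text.replace(smart, plain)
    # Collapse runs of 3+ newlines into a single blank line
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()

print(clean_output("<think>planning...</think>\u201cHello!\u201d"))  # prints: "Hello!"
```

The non-greedy `.*?` with `re.DOTALL` keeps a multi-line reasoning block from swallowing text after the closing tag.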
1. Clone the repo:

   ```
   git clone https://github.com/jakobdylanc/llmcord
   ```

2. Create a copy of `config.default.yaml` named `config.yaml` and set it up.

3. Run the bot.
⭐Using the Starter (Recommended for Windows):
Simply launch `starter.bat`. It will:
1. Create a secure virtual environment.
2. Install/Update all dependencies automatically.
3. Launch the bot (and auto-restart it if you reload configs).
Additionally, when the bot is running you can control it via console commands: type `reload` to reload the project without restarting, or `exit`, `stop`, or `quit` to stop the bot gracefully.
Using Docker:

```
docker compose up
```

Using Python manually:

```
python -m pip install -e .
python -m llmcord
```

- If you're having issues, try jakobdylanc's suggestions here
