- Local LLM Integration: Powered by OpenHermes-2.5-Mistral-7B via llama-cpp-python
- Real-time Responses: Asynchronous message handling with concurrent processing
- Message Filtering: Responds to the `!ask` command prefix
- Detailed Logging: Comprehensive debug logging for troubleshooting
- Environment Variables: Secure token management with python-dotenv
- Performance Optimized: Thread pool executor for efficient LLM inference
- Python 3.12 or higher
- Poetry for dependency management
- Discord Bot Token
- OpenHermes-2.5-Mistral-7B model file (Q5_K_M quantized)
```
git clone https://github.com/yourusername/SophiesRocket.git
cd SophiesRocket
poetry install
```

Create a `.env` file in the root directory:

```
DISCORD_TOKEN=your_discord_bot_token
```

Place the OpenHermes model file at `models/openhermes-2.5-mistral-7b.Q5_K_M.gguf`, then start the bot:

```
poetry run python -m sophiesrocket.bot
```

The bot responds to messages starting with `!ask`. For example:

```
!ask What is the capital of France?
```
| Variable | Description | Required | Default |
|---|---|---|---|
| `DISCORD_TOKEN` | Discord bot token | Yes | - |
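As a rough illustration, the token could be loaded with python-dotenv like this (a minimal sketch; the variable name comes from the table above, everything else is boilerplate):

```python
import os
from dotenv import load_dotenv

# Load variables from the .env file in the project root.
load_dotenv()

token = os.getenv("DISCORD_TOKEN")
if token is None:
    raise RuntimeError("DISCORD_TOKEN is not set; add it to your .env file.")
```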
The bot uses the following model settings:
- Context window: 2048 tokens
- Max output tokens: 500
- Model: OpenHermes-2.5-Mistral-7B (Q5_K_M quantized)
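Loading the model with these settings via llama-cpp-python might look roughly like the following sketch (the model path matches the installation step above; this is an illustration, not the project's exact code):

```python
from llama_cpp import Llama

# Load the quantized GGUF model with a 2048-token context window.
llm = Llama(
    model_path="models/openhermes-2.5-mistral-7b.Q5_K_M.gguf",
    n_ctx=2048,
)

# Generate a completion capped at 500 output tokens.
result = llm("What is the capital of France?", max_tokens=500)
print(result["choices"][0]["text"])
```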
```
src/sophiesrocket/
├── __init__.py
├── bot.py          # Main Discord bot implementation
│   ├── Discord client setup
│   ├── Message handling
│   └── LLM integration
└── llm.py          # LLM query implementation
    └── Health check functionality
```
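The health check in `llm.py` is not shown here; a minimal sketch of what such a check could look like (the function name `health_check` is a hypothetical, not the project's actual API) runs a trivial prompt through the model and reports success:

```python
from llama_cpp import Llama

def health_check(llm: Llama) -> bool:
    """Hypothetical helper: verify the loaded model can produce output."""
    try:
        # A one-token generation is enough to confirm the model responds.
        result = llm("ping", max_tokens=1)
        return bool(result["choices"])
    except Exception:
        return False
```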
- Initialization:
  - Load environment variables
  - Initialize the Llama model
  - Configure Discord intents
  - Set up the thread pool executor
- Message Processing:
  - Listen for messages
  - Filter for the `!ask` prefix
  - Extract the prompt from the message
  - Show a typing indicator
- LLM Integration (see the sketch after this list):
  - Run inference in a separate thread
  - Process model output
  - Handle response formatting
  - Send the response to Discord
- Error Handling:
  - Log all operations
  - Handle model errors
  - Manage thread execution
  - Provide user feedback
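Taken together, the flow above might look roughly like this sketch. It is a simplified illustration, not the project's actual `bot.py`; py-cord's `discord` module and llama-cpp-python's `Llama` are the real APIs, while the handler structure and helper names are assumptions:

```python
import asyncio
import logging
import os
from concurrent.futures import ThreadPoolExecutor

import discord
from dotenv import load_dotenv
from llama_cpp import Llama

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("sophiesrocket")

# Initialization: env vars, model, intents, thread pool executor.
load_dotenv()
llm = Llama(model_path="models/openhermes-2.5-mistral-7b.Q5_K_M.gguf", n_ctx=2048)
executor = ThreadPoolExecutor(max_workers=1)

intents = discord.Intents.default()
intents.message_content = True  # required to read message text
client = discord.Client(intents=intents)


def run_inference(prompt: str) -> str:
    # Blocking llama.cpp call; runs inside the thread pool.
    result = llm(prompt, max_tokens=500)
    return result["choices"][0]["text"].strip()


@client.event
async def on_message(message: discord.Message):
    # Message processing: ignore bots, filter for the !ask prefix.
    if message.author.bot or not message.content.startswith("!ask"):
        return
    prompt = message.content[len("!ask"):].strip()
    log.debug("Prompt received: %r", prompt)
    async with message.channel.typing():  # typing indicator while generating
        try:
            loop = asyncio.get_running_loop()
            # LLM integration: run inference off the event loop.
            reply = await loop.run_in_executor(executor, run_inference, prompt)
        except Exception:
            # Error handling: log the failure and give the user feedback.
            log.exception("Inference failed")
            reply = "Sorry, something went wrong while generating a response."
    await message.channel.send(reply)


client.run(os.getenv("DISCORD_TOKEN"))
```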
To build and run with Docker:

```
docker build -t sophiesrocket .
docker run --rm --env-file .env sophiesrocket
```

Dependencies:

- py-cord (≥2.3.0)
- python-dotenv (≥1.0.0)
- requests (≥2.32.3)
- aiohttp
- llama-cpp-python
- numpy (≥1.26.0)
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE.md file for details.
- py-cord - Discord API wrapper
- OpenHermes-2.5-Mistral-7B - LLM model
- llama-cpp-python - LLM inference