A sophisticated Telegram bot that uses AI personas, memory management, and multiple AI services to create engaging, context-aware conversations in group chats.
- AI-Powered Personas: Multiple distinct AI personalities with unique voices and expertise
- Memory Integration: Remembers past conversations using Mem0 for contextual responses
- Multi-Service AI: Combines OpenAI GPT, Grok, and other AI services
- Real-time Query Handling: Processes fact-based queries with up-to-date information
- Intelligent Reactions: Context-aware responses to group messages
- Topic Initiation: Proactively starts engaging conversations
- Persona Stickiness: Maintains character consistency across conversations
- Firestore Integration: Persistent message storage and retrieval
```
src/
├── core_logic/
│   ├── response_logic.py   # Main response handling logic
│   ├── memory.py           # Memory management with Mem0
│   └── llm_personas.py     # Persona management system
├── services/
│   ├── openai_chat.py      # OpenAI API integration
│   ├── grok_chat.py        # Grok API integration
│   ├── fetch_db.py         # Database operations
│   └── state_manager.py    # State management
├── workers/
│   ├── brain.py            # Core processing worker
│   └── sender.py           # Message sending worker
└── config/
    └── settings.py         # Configuration management
```
- Python 3.8+
- Telegram Bot Token
- OpenAI API Key
- Grok API Key
- Mem0 API Key
- Google Cloud Firestore credentials
- Clone the repository

```bash
git clone <repository-url>
cd telegram_bot_tg
```

- Install dependencies

```bash
pip install -r requirements.txt
```

- Environment Setup

Create a `.env` file in the root directory:

```env
# Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_GROUP_ID=your_group_id

# AI Services
OPENAI_API_KEY=your_openai_api_key
GROK_API_KEY=your_grok_api_key
MEM0_API_KEY=your_mem0_api_key

# Google Cloud (for Firestore)
GOOGLE_APPLICATION_CREDENTIALS=path/to/your/service-account.json
```

- Configure Settings

Update `config/settings.py` with your specific configuration:

```python
APP_CONFIG = {
    'telegram_group_id': 'your_group_id',
    'sender_bot_users': ['bot_username'],
    'response_context_messages': 10,
    # ... other settings
}
```

- Setup Personas

Create persona embeddings for better matching:

```bash
python -c "from src.core_logic.llm_personas import PersonaManager; pm = PersonaManager(); pm.generate_embeddings()"
```

The bot includes multiple AI personas, each with unique characteristics:
- Crypto OG: Experienced blockchain enthusiast
- DeFi Degen: High-risk DeFi trader
- NFT Collector: Digital art and collectibles expert
- Tech Minimalist: Privacy and security focused
- Community Builder: Engagement and networking specialist
- Meme Lord: Humor and viral content creator
- Data Analyst: Numbers and analytics focused
- Venture Capitalist: Investment and funding expert
- Developer: Technical implementation specialist
- Enthusiast: General crypto enthusiast
Each persona has:
- Unique voice and tone
- Specialized expertise areas
- Distinct communication patterns
- Emoji usage preferences
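Persona matching against message embeddings can be sketched as a nearest-neighbor lookup by cosine similarity. This is only an illustration: the real vectors come from `PersonaManager.generate_embeddings()`, and the three-dimensional embeddings below are made up for the example.

```python
from math import sqrt

# Hypothetical toy embeddings; real ones are produced by
# PersonaManager.generate_embeddings() and are much higher-dimensional.
PERSONA_EMBEDDINGS = {
    "Crypto OG": [0.9, 0.1, 0.0],
    "Data Analyst": [0.1, 0.9, 0.2],
    "Meme Lord": [0.0, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def best_persona(message_embedding):
    # Pick the persona whose embedding is closest to the message's.
    return max(
        PERSONA_EMBEDDINGS,
        key=lambda name: cosine_similarity(PERSONA_EMBEDDINGS[name], message_embedding),
    )
```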
The bot uses Mem0 for advanced memory management:
- `get_memory_context(query)`: Retrieves relevant past interactions
- `add_to_memory(content, role)`: Stores new interactions
- `handle_memory(query, type)`: Handles both query and response storage
- Query Memory: Searches for relevant context before responding
- Response Memory: Stores bot responses for future reference
- Context Integration: Automatically adds memory context to all AI prompts
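The shape of these helpers can be illustrated with a minimal in-memory stand-in. This is a sketch of the interface only: the real implementation in `src/core_logic/memory.py` calls the Mem0 service, which does semantic search rather than the naive word overlap used here.

```python
# In-memory stand-in for the Mem0-backed helpers -- illustrative only.
_MEMORY = []  # list of {"role": ..., "content": ...} dicts

def add_to_memory(content, role):
    # Store a new interaction for later retrieval.
    _MEMORY.append({"role": role, "content": content})

def get_memory_context(query, limit=5):
    # Naive relevance: keep entries sharing a word with the query.
    # (Mem0 performs semantic search; this is only a placeholder.)
    words = set(query.lower().split())
    hits = [m for m in _MEMORY if words & set(m["content"].lower().split())]
    return "\n".join(f'{m["role"]}: {m["content"]}' for m in hits[-limit:])
```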
Messages are categorized into:

- `PERSONA_OPINION`: Regular group interactions
- `REALTIME_QUERY`: Fact-based queries
- `TOPIC_INITIATION`: Conversation starters

Three main handlers:

- `handle_reaction()`: Persona-based responses
- `handle_realtime_query()`: Fact gathering + humanization
- `handle_initiation()`: Topic generation

Before each AI call:

```python
memory_context = get_memory_context(message)
prompt = f"##0. Previous chat Context: {memory_context}\n{main_prompt}"
```

After response generation:

```python
add_to_memory(user_message, "user")
add_to_memory(bot_response, "assistant")
```

Run the bot:

```bash
python -m src.main
```

Run the memory tests:

```bash
python memory_test.py
```

The bot provides detailed logging for:
- Message processing
- Persona selection
- Memory operations
- AI service calls
- Error handling
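The message categorization described above can be sketched as a simple router. The keyword rules here are hypothetical, chosen only to make the three categories concrete; the real classification in `src/core_logic/response_logic.py` may use different logic.

```python
import re

# Hypothetical keyword router illustrating the three message categories.
def categorize(message):
    if not message.strip():
        return "TOPIC_INITIATION"  # nothing to react to: start a topic
    if re.search(r"\b(price|when|how much|latest|news)\b", message.lower()):
        return "REALTIME_QUERY"    # fact-based, needs fresh data
    return "PERSONA_OPINION"       # default: in-character reaction
```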
```python
'response_context_messages': 10,    # Number of messages for context
'max_tokens': 60,                   # AI response length limit
'persona_stickiness_duration': 180, # Persona consistency window (seconds)
'memory_search_limit': 5,           # Number of relevant memories to retrieve
'memory_user_id': 'telegram_bot',   # Unique identifier for bot memory
```

- Maintains character consistency for 3 minutes
- 15% bonus to recently used personas
- Prevents rapid personality switching
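The stickiness rule can be sketched as a score adjustment: the most recently used persona gets a 15% bonus for the duration of the window. The function name and signature below are illustrative, not the project's actual API.

```python
import time

STICKINESS_WINDOW = 180  # seconds, matches persona_stickiness_duration
STICKINESS_BONUS = 1.15  # 15% score bonus for the recently used persona

# Hypothetical sketch of the stickiness rule described above.
def adjusted_score(base_score, persona, last_persona, last_used_at, now=None):
    now = time.time() if now is None else now
    if persona == last_persona and now - last_used_at < STICKINESS_WINDOW:
        return base_score * STICKINESS_BONUS
    return base_score
```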
- Raw AI responses → Grok fact gathering → OpenAI humanization
- Removes formal language and AI-speak
- Adds natural conversation patterns
- Automatically removes surrounding quotes from responses
- Regex pattern: `r'^"(.*)"$'`
- Preserves internal quotations
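The quote-stripping rule above can be implemented in a few lines (the function name is illustrative):

```python
import re

# Strip one pair of surrounding double quotes; leave everything else intact.
_QUOTE_RE = re.compile(r'^"(.*)"$', re.DOTALL)

def strip_surrounding_quotes(text):
    match = _QUOTE_RE.match(text)
    return match.group(1) if match else text
```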
- Graceful fallbacks for AI service failures
- Memory operation error recovery
- Comprehensive logging for debugging
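The fallback pattern for AI service failures can be sketched as a wrapper that logs the error and returns a canned reply instead of crashing. The function and fallback text below are hypothetical.

```python
import logging

logger = logging.getLogger(__name__)

# Hypothetical sketch of the fallback pattern: try the primary AI call,
# log any failure with a traceback, and degrade to a canned reply.
def safe_generate(primary, fallback_text="Hmm, give me a second..."):
    try:
        return primary()
    except Exception as exc:
        logger.exception("AI service call failed: %s", exc)
        return fallback_text
```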
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
Memory API Errors

```bash
# Check API key configuration
echo $MEM0_API_KEY

# Verify memory client initialization
python -c "from src.core_logic.memory import memory_client; print('Memory client OK')"
```

Persona Embeddings Missing

```bash
# Generate embeddings
python -c "from src.core_logic.llm_personas import PersonaManager; PersonaManager().generate_embeddings()"
```

Firestore Connection Issues

```bash
# Verify credentials
export GOOGLE_APPLICATION_CREDENTIALS=path/to/service-account.json
```

Enable detailed logging by setting:

```python
import logging
logging.basicConfig(level=logging.DEBUG)
```

For issues and questions:
- Check the troubleshooting section
- Review logs for error details
- Create an issue with detailed information
- Include relevant configuration (without API keys)
Note: This bot is designed for educational and experimental purposes. Ensure compliance with Telegram's Terms of Service and applicable regulations when deploying.