Build powerful, production-ready chatbots in minutes with built-in infrastructure, function calling, smart routing, and Docker support.
A FastAPI-based intelligent chatbot platform powered by Google's Gemini AI, designed to make creating specialized chatbots incredibly easy while handling all the complex infrastructure for you.
This isn't just another chatbot project - it's a complete chatbot framework that handles all the heavy lifting:
- ✅ Zero Infrastructure Setup - All the complex parts are already built
- ✅ Just Add Your Content - Define your bot's knowledge and you're done
- ✅ Production Ready - Rate limiting, CORS, error handling, and Docker included
- ✅ Smart AI Routing - Automatically switches between AI models to overcome rate limits
- ✅ Function Calling Built-in - Your bots can take actions, not just answer questions
- ✅ Static Response Caching - Lightning-fast responses for common questions
### 📦 Complete Infrastructure Out of the Box
- Rate limiting with `slowapi`
- CORS configuration for web apps
- Redis-backed state management
- Docker containerization
- Automatic error handling
- Request/response validation with Pydantic
### 🧠 Intelligent Model Routing
- Automatically tracks API usage across multiple Gemini models
- Seamlessly switches between models when rate limits are reached
- Supports five Gemini models with automatic fallback
- Persistent state management across restarts
- Never worry about hitting API limits again!
### ⚡ Static Bot for Instant Responses
- Pre-defined answers for common questions
- Zero API calls for frequently asked questions
- Blazing fast response times
- Perfect for FAQs and standard information
### 🔧 Function Calling Made Easy
- Built-in support for AI-powered actions
- Example: User registration system for events
- Easy to extend with your own functions
- Type-safe function declarations
### 🎨 Multiple Specialized Bots
- Each bot has its own personality and knowledge base
- Shared infrastructure, different purposes
- Easy to add new bots by following the pattern
```
┌───────────────────────────────────────────────────────────────┐
│                      FastAPI Application                      │
│  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐       │
│  │   SDG Bot    │   │  Event Bot   │   │ Podcast Bot  │       │
│  │              │   │              │   │              │       │
│  │  + Static    │   │  + Static    │   │  + Static    │       │
│  │  + AI        │   │  + AI        │   │  + AI        │       │
│  │              │   │  + Functions │   │              │       │
│  └──────┬───────┘   └──────┬───────┘   └──────┬───────┘       │
│         │                  │                  │               │
│         └──────────────────┼──────────────────┘               │
│                            │                                  │
│         ┌──────────────────▼──────────────────┐               │
│         │   Models Manager (Smart Routing)    │               │
│         │  ┌───────────────────────────────┐  │               │
│         │  │  gemini-2.5-flash-lite        │  │               │
│         │  │  gemini-2.0-flash-lite        │  │               │
│         │  │  gemini-2.5-flash             │  │               │
│         │  │  gemini-2.0-flash             │  │               │
│         │  │  gemini-1.5-flash             │  │               │
│         │  └───────────────────────────────┘  │               │
│         └──────────────────┬──────────────────┘               │
│                            │                                  │
│         ┌──────────────────▼──────────────────┐               │
│         │       Redis State Persistence       │               │
│         └─────────────────────────────────────┘               │
└───────────────────────────────────────────────────────────────┘
```
The `ModelsManager` is the brain of the operation:
- Automatic Rate Limit Tracking: Monitors both per-minute and per-day limits for each model
- Intelligent Fallback: Automatically switches to the next available model when limits are reached
- Persistent State: Uses Redis to maintain usage counts across server restarts
- Timezone Aware: Tracks limits in Pacific Time (Google's timezone for API quotas)
- Priority-Based: Uses faster, higher-limit models first, falls back to others as needed
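That fallback loop can be sketched in a few lines. This is a simplified illustration, not the actual `models_manager.py`: the per-minute/per-day counter resets and Redis persistence are omitted, only the priority scan is shown, and everything except the model names and limits (taken from the list below) is assumed:

```python
# Simplified sketch of priority-based fallback -- NOT the real models_manager.py.
# Counter resets (per-minute / per-day) and Redis persistence are omitted.
MODELS = [  # (name, per-day limit, per-minute limit), in priority order
    ("gemini-2.5-flash-lite", 1000, 30),
    ("gemini-2.0-flash-lite", 200, 30),
    ("gemini-2.5-flash", 500, 10),
    ("gemini-2.0-flash", 200, 15),
    ("gemini-1.5-flash", 50, 15),
]

class ModelsManagerSketch:
    def __init__(self):
        # usage[name] = [day_count, minute_count]
        self.usage = {name: [0, 0] for name, _, _ in MODELS}

    def pick_model(self):
        """Return the highest-priority model with remaining quota, or None."""
        for name, day_limit, minute_limit in MODELS:
            day_count, minute_count = self.usage[name]
            if day_count < day_limit and minute_count < minute_limit:
                self.usage[name][0] += 1
                self.usage[name][1] += 1
                return name
        return None  # every model is currently rate-limited
```

The real manager additionally resets the minute counters every 60 seconds and the day counters on Google's Pacific Time quota boundary, per the notes above.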
Models Supported (in priority order):

1. `gemini-2.5-flash-lite` - 1000/day, 30/min
2. `gemini-2.0-flash-lite` - 200/day, 30/min
3. `gemini-2.5-flash` - 500/day, 10/min
4. `gemini-2.0-flash` - 200/day, 15/min
5. `gemini-1.5-flash` - 50/day, 15/min (fallback)
Before hitting the AI, every message is checked against:
- Static Responses: Pre-defined Q&A pairs
- Info Dictionary: Structured information lookup
- Instant Return: Zero latency, zero API costs
Perfect for:
- FAQs
- Contact information
- Operating hours
- Standard greetings
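In other words, the static layer is little more than a normalized dictionary lookup that runs before any model call. A rough sketch (illustrative only; the real `static_bot.py` may normalize and match differently):

```python
# Illustrative static-response check -- the real static_bot.py may differ.
def static_reply(message, static_replies, info):
    """Return a canned answer if one matches, else None (fall through to AI)."""
    key = message.strip().lower().rstrip("?!.")
    if key in static_replies:      # exact pre-defined Q&A pairs
        return static_replies[key]
    return info.get(key)           # structured info lookup, or None

# Example data in the style of infos.py
faq = {"what is your name": "I'm CustomBot, nice to meet you!"}
info = {"hello": "Hi! I'm your custom bot!"}
```

A hit returns instantly with zero API calls; a miss (`None`) hands the message to the AI.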
The Event Bot demonstrates the power of function calling:
Example: Workshop Registration

```python
# The AI can automatically:
#  1. Understand user intent to register
#  2. Collect required information through conversation
#  3. Call the registration function with proper parameters
#  4. Confirm successful registration
#
# You just define the function and its declaration!
```

Built-in Features:
- Type-safe function declarations
- Automatic parameter validation
- Error handling and user feedback
- Easy to extend with new functions
Rate limiting (automatic on all endpoints):
- Root endpoint: 10 requests/hour
- Chatbot endpoints: 100 requests/hour + 10 requests/minute

CORS (pre-configured for common scenarios):
- Production domains
- Development localhost
- Credentials support

Error handling:
- Model unavailability gracefully handled
- Function call errors caught and reported
- User-friendly error messages
- Detailed logging for debugging
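The pattern behind those error-handling bullets is a guard around the model call that logs the detail and returns a friendly message. A minimal sketch (the function and names here are invented for illustration; the project's actual handlers live in `chat_bot.py` and `sdg_exceptions.py`):

```python
# Hypothetical illustration of the error-handling pattern -- names are invented.
async def safe_generate(generate, user_message):
    try:
        return await generate(user_message)
    except Exception as exc:                  # model down, bad function call, ...
        print(f"chatbot error: {exc!r}")      # detailed log for debugging
        return "Sorry, something went wrong. Please try again in a moment."
```

The user sees only the friendly message; the raw exception stays in the server logs.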
One-Command Deployment:

```shell
docker-compose up --build
```

Features:
- Lightweight Alpine-based image
- Gunicorn + Uvicorn for production
- Environment variable support
- Port 8000 exposed by default
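For orientation, a compose file for this kind of setup might look like the following sketch. This is hypothetical, not copied from the repo: the service name is invented, and the repository's actual `docker-compose.yaml` is authoritative.

```yaml
# Hypothetical sketch only -- see the repository's docker-compose.yaml.
services:
  api:
    build: .
    ports:
      - "8000:8000"   # default exposed port
    env_file:
      - .env          # environment variable support
```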
```python
# In infos.py
my_bot_info = {
    "hello": "Hi! I'm your custom bot!",
    "help": "I can help you with..."
}

my_bot_static_replay = {
    "what is your name": "I'm CustomBot, nice to meet you!"
}
```

```python
# In system_instructions.py
MY_BOT_INSTRUCTION = """You are a helpful assistant for...
Keep responses short and focused.
Only answer based on provided information."""
```

```python
# In chat_bots.py
async def MyCustom_chatbot(msg: Massage) -> Massage:
    return await chatbot(
        msg,
        MY_BOT_INSTRUCTION,
        'MyBot',
        my_bot_info,
        my_bot_static_replay,
        None  # Or add function declarations here
    )
```

```python
# In main.py
@app.post("/api/chatbot/custom")
@limiter.limit("100/hour")
@limiter.limit("10/minute")
async def custom_chatbot(request: Request, msg: Massage) -> Massage:
    return await MyCustom_chatbot(msg)
```

That's it! Your bot is ready! 🎉
- Python 3.13+
- Redis instance (for state management)
- Google Gemini API key
- Docker (optional, for containerized deployment)
```shell
git clone https://github.com/setif-developers-group/SDG-chat-bots.git
cd SDG-chat-bots
```

```shell
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```

```shell
pip install -r requirements.txt
```

Create a `.env` file in the root directory:

```shell
# Google Gemini API
GOOGLE_API_KEY=your_gemini_api_key_here

# Redis Configuration
REDIS_HOST=your_redis_host
REDIS_PORT=6379
REDIS_USERNAME=default
REDIS_PASSWORD=your_redis_password

# API Configuration (for function calling)
API_URL=https://your-api-url.com
CHATBOT_API_KEY=your_api_key_here
```

Run the development server:

```shell
uvicorn main:app --reload
```

Or run everything with Docker:

```shell
docker-compose up --build
```

The application will be available at http://localhost:8000
`GET /`

`POST /api/chatbot/sdg`
Content-Type: application/json

```json
{
    "message": "What is SDG?",
    "history": [],
    "summary": ""
}
```

`POST /api/chatbot/event`
Content-Type: application/json

```json
{
    "message": "I want to register for the AI workshop",
    "history": [],
    "summary": ""
}
```

`POST /api/chatbot/podcast`
Content-Type: application/json

```json
{
    "message": "Tell me about the latest episode",
    "history": [],
    "summary": ""
}
```

Request schema:

```typescript
{
    message: string;       // User's message
    history: Array<{       // Conversation history
        role: "user" | "model";
        content: string;
    }>;
    summary?: string;      // Optional conversation summary
}
```

Response schema:

```typescript
{
    message: string;       // Bot's response
    history: Array<{       // Updated conversation history
        role: "user" | "model";
        content: string;
    }>;
    summary?: string;      // Updated conversation summary
}
```

To add your own function:

```python
# In tools.py
async def my_custom_function(param1: str, param2: int) -> str:
    # Your logic here
    result = do_something(param1, param2)
    return result
```

```python
# In tools_declarations.py
my_function_declaration = {
    "name": "my_custom_function",
    "description": "What this function does...",
    "parameters": {
        "type": "object",
        "properties": {
            "param1": {
                "type": "string",
                "description": "Description of param1"
            },
            "param2": {
                "type": "integer",
                "description": "Description of param2"
            }
        },
        "required": ["param1", "param2"]
    }
}
```

```python
# In chat_bot.py - function_call_execution()
if function_call.name == "my_custom_function":
    args = function_call.args
    result = await my_custom_function(**args)
    return f"Function executed successfully: {result}"
```

```python
# In chat_bots.py
async def MyBot_chatbot(msg: Massage) -> Massage:
    return await chatbot(
        msg,
        MY_INSTRUCTION,
        'MyBot',
        my_info,
        my_static,
        [my_function_declaration]  # Enable function calling
    )
```

Docker commands:

```shell
# Build the image
docker build -t sdg-chatbots .

# Run with docker-compose
docker-compose up -d

# View logs
docker-compose logs -f

# Stop
docker-compose down
```

The `docker-compose.yaml` automatically loads your `.env` file.
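One detail worth noting from the request/response schemas above: each turn appends a user entry and a model entry to `history`, and the client sends the updated history back with the next message. A small client-side sketch of that bookkeeping (these helper functions are illustrative, not part of the repo; only the field names come from the schemas):

```python
# Illustrative helpers -- only the field names come from the documented schemas.
def next_request(previous_response, new_message):
    """Build the next request body from the previous response."""
    return {
        "message": new_message,
        "history": previous_response.get("history", []),
        "summary": previous_response.get("summary", ""),
    }

def apply_turn(request_body, bot_reply):
    """Mimic how a response extends history with the new user/model turn."""
    history = request_body["history"] + [
        {"role": "user", "content": request_body["message"]},
        {"role": "model", "content": bot_reply},
    ]
    return {"message": bot_reply, "history": history,
            "summary": request_body["summary"]}
```

Carrying `history` (and optionally `summary`) back and forth is what keeps the stateless HTTP API conversational.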
```
SDG-chat-bots/
├── main.py                 # FastAPI app & routes
├── chat_bots.py            # Bot implementations
├── chat_bot.py             # Core AI response generation
├── models_manager.py       # Smart model routing system
├── static_bot.py           # Static response handler
├── tools.py                # Function implementations
├── tools_declarations.py   # Function declarations for AI
├── system_instructions.py  # Bot personalities & rules
├── infos.py                # Knowledge bases & static data
├── schemas.py              # Pydantic models
├── util.py                 # Utility functions
├── sdg_exceptions.py       # Custom exceptions
├── requirements.txt        # Python dependencies
├── Dockerfile              # Docker configuration
├── docker-compose.yaml     # Docker Compose setup
└── README.md               # This file
```
SDG Bot (`/api/chatbot/sdg`):
- Provides information about Setif Development Group
- Answers questions about mission, activities, membership
- Uses static responses for common queries

Event Bot (`/api/chatbot/event`):
- Information about SDG Skills Lab events
- Function Calling: Can register users for workshops
- Collects: name, email, phone, attendance type, workshop selection
- Integrates with external registration API

Podcast Bot (`/api/chatbot/podcast`):
- Information about SDG Podcast
- Details about episodes, guests, topics
- Links to podcast platforms
- Rate Limiting: Prevents abuse with configurable limits
- CORS Protection: Whitelist-based origin control
- Input Validation: Pydantic schemas validate all requests
- Error Sanitization: User-friendly errors without exposing internals
- API Key Protection: Secure function calling with API keys
Model usage is logged automatically:

```python
# Automatic logging of model usage
print(f"Using model: {current_model}")
print(f"Model usage: day_count={day_count}/{day_limit}, minute_count={minute_count}/{minute_limit}")
```

Function calls are logged in detail:

```python
# Detailed logging of function calls
print(f"Function call detected: {function_name} with args {args}")
print(f"Function response: {response}")
```

State persistence:
- Usage counts saved to Redis on shutdown
- Automatically loaded on startup
- Survives server restarts
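That save/load cycle amounts to serializing the counters on shutdown and reading them back on startup. A sketch (a plain dict stands in for the Redis client here, and the key name and JSON format are invented, not taken from `models_manager.py`):

```python
import json

# Sketch of persisting usage counts across restarts. A plain dict stands in
# for the Redis client; the key name and JSON format are invented.
STATE_KEY = "models_usage_state"

def save_state(redis_like, usage_counts):
    """Called on shutdown: serialize counters into the store."""
    redis_like[STATE_KEY] = json.dumps(usage_counts)

def load_state(redis_like):
    """Called on startup: restore counters, or start fresh."""
    raw = redis_like.get(STATE_KEY)
    return json.loads(raw) if raw else {}
```

With a real Redis client the same idea uses `set`/`get` on a key, so the counters survive server restarts.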
- Static Response Cache: Instant responses for common questions
- Async/Await: Non-blocking I/O for better concurrency
- Model Prioritization: Uses fastest models first
- Redis State: Fast in-memory state management
- Lightweight Docker: Alpine-based image for smaller footprint
Once running, access interactive API documentation:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
We welcome contributions! Here's how:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests if applicable
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Bot not responding:
- Check your Gemini API key in `.env`
- Verify you haven't exceeded all model limits
- Check Redis connection

Function calling not working:
- Verify function declarations match function signatures
- Check `API_URL` and `CHATBOT_API_KEY` in `.env`
- Review function execution logs

Rate limit problems:
- Adjust limits in `main.py` if needed
- Check if Redis is properly persisting state
- Review model usage logs
This project is open source and available under the MIT License.
- ✅ 5 Minutes to First Bot: Seriously, that's all it takes
- ✅ Production Ready: Used in real production environments
- ✅ Scales Automatically: Smart routing handles growth
- ✅ Extensible: Add features without touching core code
- ✅ Well Documented: Clear examples and patterns
- ✅ Active Development: Continuously improved
- GitHub Issues: Report bugs or request features
- Organization: Setif Development Group
```shell
git clone https://github.com/setif-developers-group/SDG-chat-bots.git
cd SDG-chat-bots
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
# Add your .env file
uvicorn main:app --reload
```

Your intelligent chatbot platform is ready in under 2 minutes! 🚀
Making AI chatbots accessible to everyone