A FastAPI-based application that deploys AI agents for email management and research tasks using LangChain and LangGraph frameworks. The application features a supervisor agent that coordinates between specialized agents to handle complex multi-step tasks.
This project implements a multi-agent system with the following components:
- Supervisor Agent: Orchestrates work between specialized agents
- Email Agent: Manages email operations (send, receive, manage inbox)
- Research Agent: Handles research tasks and data preparation
- FastAPI Backend: RESTful API for agent interactions
- PostgreSQL Database: Persistent storage for chat messages
- Docker Compose: Container orchestration for development
```
ai-agent-deploy/
├── server/                  # Backend FastAPI application
│   ├── src/
│   │   ├── api/
│   │   │   ├── ai/          # AI agents and tools
│   │   │   │   ├── agents.py    # Agent definitions
│   │   │   │   ├── llms.py      # Language model configurations
│   │   │   │   ├── tools.py     # Agent tools and functions
│   │   │   │   └── schemas.py   # Pydantic schemas
│   │   │   ├── chat/        # Chat functionality
│   │   │   │   ├── models.py    # Database models
│   │   │   │   └── routing.py   # API endpoints
│   │   │   └── db.py        # Database configuration
│   │   └── main.py          # FastAPI application entry point
│   ├── Dockerfile           # Server container configuration
│   ├── requirements.txt     # Python dependencies
│   └── railway.json         # Railway deployment config
├── client/                  # Frontend client (currently commented out)
│   ├── src/
│   └── Dockerfile
├── compose.yaml             # Docker Compose configuration
└── README.md                # This file
```
| Agent | Purpose | Tools | Description |
|---|---|---|---|
| Supervisor | Orchestration | N/A | Manages and coordinates work between specialized agents |
| Email Agent | Email Management | `send_me_email`, `get_unread_emails` | Handles email operations and inbox management |
| Research Agent | Research Tasks | `research_email` | Prepares and researches email-related data |
- User sends a message through the API
- Supervisor agent analyzes the request
- Supervisor delegates tasks to appropriate specialized agents
- Agents execute their tasks using available tools
- Results are coordinated and returned to the user
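The delegation flow above can be illustrated with a toy router in plain Python. This is only a sketch of the pattern, not the actual LangGraph implementation; the function names and keyword-based routing are made up for illustration:

```python
def email_agent(task: str) -> str:
    """Stand-in for the Email Agent (in the real app this would use
    tools like send_me_email / get_unread_emails)."""
    return f"email agent handled: {task}"


def research_agent(task: str) -> str:
    """Stand-in for the Research Agent (would use the research_email tool)."""
    return f"research agent handled: {task}"


def supervisor(message: str) -> list[str]:
    """Toy supervisor: inspect the request, delegate to matching agents,
    then collect and return their results."""
    results = []
    text = message.lower()
    if "research" in text or "find out" in text:
        results.append(research_agent(message))
    if "email" in text:
        results.append(email_agent(message))
    # If no specialized agent matches, the supervisor answers directly.
    return results or [f"supervisor answered directly: {message}"]


print(supervisor("Find out how to create a latte then email me the results"))
```

In the real application, routing decisions are made by the language model rather than keyword matching, and LangGraph manages the handoffs between agents.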
- FastAPI: Modern Python web framework
- SQLModel: SQL database modeling with Pydantic integration
- PostgreSQL: Primary database for persistent storage
- LangChain: Framework for building language model applications
- LangGraph: Graph-based agent orchestration
- OpenAI: Language model provider
- Uvicorn: ASGI server for production
- Docker: Containerization
- Docker Compose: Multi-container orchestration
- Railway: Cloud deployment platform
| Column | Type | Description |
|---|---|---|
| `id` | Integer (PK) | Unique message identifier |
| `message` | String | User message content |
| `created_at` | DateTime (UTC) | Message creation timestamp |
- Docker and Docker Compose
- OpenAI API key
- Environment variables configured
- Create a `.env` file in the root directory:

```env
OPENAI_API_KEY=your_openai_api_key_here
DATABASE_URL=postgresql://admin:admin@db:5432/app_db
PORT=8000
MY_PROJECT=AI Agent Deploy
```

- Clone the repository:

```bash
git clone <repository-url>
cd ai-agent-deploy
```

- Start the development environment:

```bash
docker compose up --build
```

- Access the application:
- API: http://localhost:8080
- API Documentation: http://localhost:8080/docs
- Database: localhost:5432
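On the application side, these settings are read from the environment (Docker Compose injects them from `.env`). A minimal standard-library sketch, where the variable names follow the `.env` example above and the fallback values are only illustrative:

```python
import os

# Configuration comes from environment variables populated from .env by
# Docker Compose. The defaults below are illustrative fallbacks only.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")
DATABASE_URL = os.environ.get(
    "DATABASE_URL", "postgresql://admin:admin@db:5432/app_db"
)
PORT = int(os.environ.get("PORT", "8000"))

print(f"Starting on port {PORT}")
```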
The application is configured for Railway deployment:
```bash
# Deploy to Railway
railway deploy
```

| Method | Endpoint | Description | Request Body | Response |
|---|---|---|---|---|
| GET | `/api/chats/` | Health check | None | `{"status": "ok"}` |
| GET | `/api/chats/recent/` | Get recent messages | None | Array of `ChatMessageListItem` |
| POST | `/api/chats/` | Send message to agents | `ChatMessagePayload` | `SupervisorMessageSchema` |
Send a message to the AI agents:

```bash
curl -X POST http://localhost:8080/api/chats/ \
  -H "Content-Type: application/json" \
  -d '{"message": "Find out how to create a latte then email me the results"}'
```

Get recent chat messages:

```bash
curl http://localhost:8080/api/chats/recent/
```

| Service | Purpose | Port | Dependencies |
|---|---|---|---|
| `server` | FastAPI application | 8080:8000 | PostgreSQL |
| `db` | PostgreSQL database | 5432:5432 | None |
- Hot Reload: Automatic restart on code changes
- Volume Mounting: Live code updates without rebuilds
- Database Persistence: Data persists across container restarts
- API Health: `GET /api/chats/`
- Application Root: `GET /`
- Container logs: `docker compose logs -f server`
- Database logs: `docker compose logs -f db`
The compose file includes watch configurations for:
- Requirements changes (triggers rebuild)
- Dockerfile changes (triggers rebuild)
- Source code changes (triggers restart)
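Those watch rules might be expressed in `compose.yaml` roughly as follows, using Compose's `develop.watch` syntax. The exact paths and sync target are assumptions based on the project layout above:

```yaml
services:
  server:
    build: ./server
    ports:
      - "8080:8000"
    develop:
      watch:
        # Requirements changes trigger a full image rebuild
        - action: rebuild
          path: ./server/requirements.txt
        # Dockerfile changes also trigger a rebuild
        - action: rebuild
          path: ./server/Dockerfile
        # Source changes are synced into the container, restarting the app
        - action: sync+restart
          path: ./server/src
          target: /app/src
```

Run it with `docker compose watch` (or `docker compose up --watch`) to activate these rules during development.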
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
This project is part of a learning exercise for Docker and AI agent deployment.
- OpenAI API for language model functionality
- PostgreSQL for data persistence
- Docker for containerization
- Railway for cloud deployment
For more detailed information about specific components, refer to the inline documentation in the respective source files.