An AI-powered chat API with Model Context Protocol (MCP) integration, built with FastAPI, LangChain, and LangGraph.
- Backend: FastAPI with Python 3.12
- Conversational AI: LangChain with LangGraph for stateful, multi-step agents
- LLM: Google Gemini (configurable model)
- Real-time Communication: Server-Sent Events (SSE) for streaming LLM responses
- MCP Integration: Support for Model Context Protocol tools
- Search Capabilities: Integrated Tavily search
- Configuration: Pydantic-based settings management
- Dependency Management: Poetry
The project follows a clean architecture pattern with clear separation of concerns:
- src/config/: Configuration management using Pydantic
- src/ai/: AI agents and tools
- src/mcp/: MCP server implementation
- src/api/: FastAPI application and routes
- Python 3.12 or higher
- Poetry for dependency management
- Install Poetry: If you don't have Poetry installed, follow the instructions on the official website.
- Clone the repository:
  ```bash
  git clone <repository-url>
  cd py-ai
  ```
- Install dependencies:
  ```bash
  poetry install
  ```
- Configure environment: Edit `.env` and add your API keys. If none are provided, the variables will be interpolated from the system environment variables. Keys to be set up:
- Get a Google API key from Google AI for Developers
- Get a Tavily API key from Tavily
- Get a Weather API key from WeatherAPI
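A minimal `.env` might look like the following (placeholder values; the variable names come from the configuration section below, and the `MCP_WS_URL` value is an assumption based on the ports used elsewhere in this README):

```env
GOOGLE_API_KEY=your-google-api-key
TAVILY_API_KEY=your-tavily-api-key
WEATHER_API_KEY=your-weather-api-key
# Assumed address; must match wherever your MCP server actually listens
MCP_WS_URL=ws://localhost:8001
```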
A code formatter is included in the project. Run the following command to format the code base:
```bash
poetry run black .
```
To check for formatting issues without making changes, run:
```bash
poetry run black --check .
```
You need to run both the MCP server and the API server for full functionality.
In separate terminal windows:

Terminal 1 - Run the MCP server:
```bash
poetry run python main.py mcp
```
Terminal 2 - Run the API server:
```bash
poetry run python main.py api
```
Alternatively, run the components directly:

Terminal 1 - Run the MCP server:
```bash
poetry run python -m src.mcp.server
```
Terminal 2 - Run the API server:
```bash
poetry run uvicorn src.api.app:app --reload --host localhost --port 8000
```
Terminal 3 - Run the Streamlit UI app:
```bash
poetry run python main.py streamlit
```
Once the servers are running, the assistant has access to:
- Weather information (through MCP tools)
- Web search capabilities (through Tavily)
- General knowledge from the LLM
- `POST /chat/stream` - Stream chat responses (SSE)
- `GET /mcp?city={city}` - Test the MCP weather tool directly
- `GET /health` - Health check endpoint
- `GET /docs` - Swagger UI documentation
- `GET /redoc` - ReDoc documentation
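On the client side, consuming the `POST /chat/stream` endpoint amounts to reading the response body line by line and extracting the `data:` payloads. A dependency-free sketch of that parsing step (the sample chunk contents are illustrative; the actual payload shape depends on the server):

```python
def parse_sse(stream_lines):
    """Yield the payload of each `data:` line from an SSE text stream."""
    for line in stream_lines:
        if line.startswith("data: "):
            yield line[len("data: "):]

# Simulated stream, e.g. the lines of an HTTP response body:
sample = ["data: Hello", "", "data: world", ""]
tokens = list(parse_sse(sample))
```

With a real HTTP client you would feed the response's line iterator into `parse_sse` instead of the `sample` list.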
Format code with Black:
```bash
poetry run black src tests
```
Sort imports with isort:
```bash
poetry run isort src tests
```
Lint with flake8:
```bash
poetry run flake8 src tests
```
Type check with mypy:
```bash
poetry run mypy src
```
Run all checks:
```bash
poetry run black src tests && poetry run isort src tests && poetry run flake8 src tests && poetry run mypy src
```
Run the tests:
```bash
poetry run pytest
```
Run with coverage:
```bash
poetry run pytest --cov=src
```
Run a specific test file:
```bash
poetry run pytest tests/test_api.py
```
All configuration is managed through environment variables and the src/config/settings.py file. Key settings include:
- GOOGLE_API_KEY: Google API key for the Gemini LLM
- TAVILY_API_KEY: Tavily API key for search functionality
- WEATHER_API_KEY: Weather API key for weather data
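The actual settings class lives in src/config/settings.py and is Pydantic-based. As a rough, dependency-free stand-in, it behaves like this (field names come from the list above; everything else is illustrative):

```python
import os
from dataclasses import dataclass

@dataclass
class Settings:
    """Dependency-free stand-in for the Pydantic settings class."""
    google_api_key: str
    tavily_api_key: str
    weather_api_key: str

    @classmethod
    def from_env(cls) -> "Settings":
        # Pydantic reads these automatically; here we read them by hand.
        return cls(
            google_api_key=os.environ.get("GOOGLE_API_KEY", ""),
            tavily_api_key=os.environ.get("TAVILY_API_KEY", ""),
            weather_api_key=os.environ.get("WEATHER_API_KEY", ""),
        )

os.environ.setdefault("GOOGLE_API_KEY", "demo-key")
settings = Settings.from_env()
```

The real Pydantic version additionally validates types and can load values from the `.env` file described above.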
- Dependency conflicts during installation:
  ```bash
  poetry lock --no-update
  poetry install
  ```
- MCP server connection errors:
  - Ensure the MCP server is running before starting the API server
  - Check that the MCP_WS_URL in your .env matches the MCP server address
- API key errors:
  - Verify all required API keys are set in your .env file
  - Ensure there are no extra spaces or quotes around the API keys
- Port already in use:
  - Change the port in your .env file or use different ports:
    ```bash
    poetry run uvicorn src.api.app:app --port 8001
    ```
Build and run:
```bash
docker build -t py-ai .
docker run -p 8000:8000 -p 8001:8001 --env-file .env py-ai
```
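The repository's actual Dockerfile may differ; a typical Poetry-based image matching the commands above could look like this (base image, install steps, and entrypoint are all assumptions):

```dockerfile
# Assumed base image matching the Python 3.12 requirement
FROM python:3.12-slim
WORKDIR /app
RUN pip install --no-cache-dir poetry
COPY pyproject.toml poetry.lock ./
RUN poetry config virtualenvs.create false && poetry install --no-root --no-interaction
COPY . .
# 8000: API server, 8001: MCP server (matching the `docker run` ports above)
EXPOSE 8000 8001
# Assumed entrypoint; swap for the MCP server or Streamlit app as needed
CMD ["python", "main.py", "api"]
```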