An AI-powered DevOps assistant that helps with operations tasks using an LLM and MCP tools.
- Python 3.8 or higher
- pip
pip install -r requirements.txt

# Set environment variables
export MCP_SERVER_URL="http://localhost/mcp"
export OPENAI_API_KEY="sk-your-key"
export OPENAI_API_HOST="https://api.openai.com/v1"
export OPENAI_API_MODEL="gpt-4o-mini"
# Run copilot
python main.py
# Or with verbose logging
python main.py --verbose

Configuration can be provided through:
- Command-line arguments (highest priority)
- Environment variables (medium priority)
- config.yaml file (lowest priority)
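The three-layer precedence can be illustrated with a small sketch (the key names and merge helper are illustrative; the real loader may differ):

```python
def resolve_config(cli_args: dict, env_vars: dict, file_cfg: dict) -> dict:
    """Merge the three sources: CLI flags beat environment variables,
    which beat values from config.yaml. None means 'not provided'."""
    merged = dict(file_cfg)                                          # lowest priority
    merged.update({k: v for k, v in env_vars.items() if v is not None})
    merged.update({k: v for k, v in cli_args.items() if v is not None})
    return merged

cfg = resolve_config(
    cli_args={"model": "gpt-4o-mini", "api_key": None},
    env_vars={"api_key": "sk-from-env", "endpoint": None},
    file_cfg={"model": "gpt-4", "api_key": "sk-from-file",
              "endpoint": "https://api.openai.com/v1"},
)
# CLI model wins, env api_key wins, the file-only endpoint survives.
```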
Create a configs/config.yaml file:
# MCP Server Configuration
mcp:
  server_url: "http://localhost/mcp"
  timeout: "600s"
  token: ""

# OpenAI Configuration
openai:
  endpoint: "https://api.openai.com/v1"
  api_key: "your-api-key-here"
  model: "gpt-4o-mini"

# Chat Configuration
chat:
  max_history: 8
  verbose: false

Environment variables follow the config file structure: SECTION_KEY (all uppercase, one underscore per level).
These will override values in config.yaml:
MCP Configuration:
- MCP_SERVER_URL: MCP server URL (e.g., http://localhost/mcp)
- MCP_TIMEOUT: MCP request timeout (default: 600s)
- MCP_TOKEN: MCP server token (if authentication is enabled)
Note: The tools list is cached for 5 minutes (300 seconds) to reduce API calls.
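The 5-minute cache can be sketched as simple time-based memoization (`fetch_tools` is a hypothetical stand-in for the real MCP call):

```python
import time

_TOOLS_TTL = 300  # seconds, matching the documented 5-minute cache
_cache = {"tools": None, "fetched_at": 0.0}

def get_tools(fetch_tools, now=time.monotonic):
    """Return the cached tools list, refreshing it only after the TTL expires."""
    if _cache["tools"] is None or now() - _cache["fetched_at"] >= _TOOLS_TTL:
        _cache["tools"] = fetch_tools()
        _cache["fetched_at"] = now()
    return _cache["tools"]
```

Repeated calls within the TTL reuse the cached list instead of hitting the MCP server again.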
OpenAI Configuration:
- OPENAI_ENDPOINT: OpenAI API endpoint (default: https://api.openai.com/v1)
- OPENAI_API_KEY: OpenAI API key
- OPENAI_MODEL: OpenAI model (default: gpt-4o-mini)
Chat Configuration:
- CHAT_MAX_HISTORY: Maximum chat history length (default: 8)
- CHAT_VERBOSE: Enable verbose logging (true/false, default: false)
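`CHAT_MAX_HISTORY` presumably bounds how many past messages are replayed to the model each turn; a minimal sketch of that trimming (hypothetical policy — the real chat module may also pin a system prompt):

```python
def trim_history(messages: list, max_history: int = 8) -> list:
    """Keep only the most recent max_history messages."""
    return messages[-max_history:] if max_history > 0 else []

recent = trim_history(list(range(10)), max_history=8)
```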
Backward Compatibility:
- OPENAI_API_HOST or OPENAI_API_BASE: Same as OPENAI_ENDPOINT
- OPENAI_API_MODEL: Same as OPENAI_MODEL
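The SECTION_KEY convention and the backward-compatible aliases can be sketched together (assumed lookup order — canonical name first, then legacy aliases; verify against the actual loader):

```python
import os

# Canonical variable -> legacy aliases kept for backward compatibility.
ALIASES = {
    "OPENAI_ENDPOINT": ["OPENAI_API_HOST", "OPENAI_API_BASE"],
    "OPENAI_MODEL": ["OPENAI_API_MODEL"],
}

def env_name(section: str, key: str) -> str:
    """config.yaml path -> env var name, e.g. ('mcp', 'server_url') -> 'MCP_SERVER_URL'."""
    return f"{section}_{key}".upper()

def lookup(section: str, key: str, environ=os.environ):
    """Return the value of the canonical variable, falling back to its aliases."""
    name = env_name(section, key)
    for candidate in [name] + ALIASES.get(name, []):
        if candidate in environ:
            return environ[candidate]
    return None
```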
- --endpoint, -e: OpenAI API endpoint (overrides OPENAI_API_HOST)
- --model, -m: OpenAI model (overrides OPENAI_API_MODEL)
- --key, -k: OpenAI API key (overrides OPENAI_API_KEY)
- --mcp-server: MCP server URL (overrides MCP_SERVER_URL)
- --mcp-token: MCP server token (overrides MCP_TOKEN)
- --mcp-timeout: MCP timeout (overrides config/env)
- --verbose, -v: Enable verbose/debug logging
- --history: Chat history length (default: from config or 8)
- --config, -c: Path to config.yaml file (default: ./configs/config.yaml)
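The flags above map naturally onto `argparse`; a hedged sketch (option help strings and defaults here are illustrative, not copied from the actual `main.py`):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="AI-powered DevOps assistant")
    p.add_argument("--endpoint", "-e", help="OpenAI API endpoint")
    p.add_argument("--model", "-m", help="OpenAI model")
    p.add_argument("--key", "-k", help="OpenAI API key")
    p.add_argument("--mcp-server", help="MCP server URL")
    p.add_argument("--mcp-token", help="MCP server token")
    p.add_argument("--mcp-timeout", help="MCP timeout, e.g. 600s")
    p.add_argument("--verbose", "-v", action="store_true",
                   help="Enable verbose/debug logging")
    p.add_argument("--history", type=int, default=None,
                   help="Chat history length")
    p.add_argument("--config", "-c", default="./configs/config.yaml",
                   help="Path to config.yaml file")
    return p

args = build_parser().parse_args(["-m", "gpt-4o-mini", "--verbose"])
```

Parsed values that are not None would then feed the highest-priority layer of the config merge.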
# 1. Create config file (if not exists)
# Edit configs/config.yaml with your values
# 2. Run
python main.py
# Or with verbose logging
python main.py --verbose

# Set environment variables
export MCP_SERVER_URL="http://localhost/mcp"
export OPENAI_API_KEY="sk-your-key"
export OPENAI_API_HOST="https://api.openai.com/v1"
export OPENAI_API_MODEL="gpt-4o-mini"
# Run
python main.py --verbose

# Run with flags (highest priority)
python main.py \
--endpoint https://api.openai.com/v1 \
--model gpt-4o-mini \
--key sk-your-key \
--mcp-server http://localhost/mcp \
--verbose
# Or specify custom config file
python main.py --config /path/to/config.yaml

- AI-powered assistance: Uses OpenAI's GPT models for intelligent DevOps assistance
- MCP tool integration: Automatically calls MCP tools (SOPS, logs, events, metrics) based on conversation
- Interactive chat: Terminal-based interactive chat interface
- Multi-language support: Supports both English and Chinese
- Configurable: Flexible configuration through flags and environment variables
- Automatic tool selection: LLM automatically selects and executes appropriate MCP tools
- Verbose logging: Detailed logging of tool calls and LLM interactions
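At a high level, "automatic tool selection" follows the standard function-calling loop: the MCP tool schemas are sent to the model, any tool calls it returns are executed, and the results are fed back until the model answers in plain text. A dependency-free sketch of the dispatch step (the message shapes mirror the OpenAI chat API, but the helper and registry names are illustrative):

```python
import json

def dispatch_tool_calls(assistant_message: dict, tool_registry: dict) -> list:
    """Execute each tool call the model requested and return the
    'tool' role messages to append to the conversation."""
    results = []
    for call in assistant_message.get("tool_calls", []):
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        output = tool_registry[name](**args)          # invoke the wrapped MCP tool
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(output),
        })
    return results

# Usage with a fake registry standing in for real MCP tools:
registry = {"get_logs": lambda pod: {"pod": pod, "lines": ["ok"]}}
msg = {"tool_calls": [{"id": "1", "function": {
    "name": "get_logs", "arguments": '{"pod": "web-1"}'}}]}
replies = dispatch_tool_calls(msg, registry)
```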
This project contains:
- ops_copilot/: Core package
  - core/: Core modules (OpenAI client, Chat)
  - tools/: MCP tool integration
  - utils/: Utility modules (logging)
- main.py: Command-line entry point
ops-copilot/
├── main.py                  # Command-line entry point
├── requirements.txt         # Python dependencies
├── README.md                # This file
├── env.example              # Environment variables example
└── ops_copilot/             # Core package
    ├── __init__.py
    ├── core/                # Core modules
    │   ├── __init__.py
    │   ├── openai_client.py # OpenAI API client
    │   └── chat.py          # Chat with MCP tool calling
    ├── tools/               # MCP tools
    │   ├── __init__.py
    │   └── mcp_tool.py      # MCP tool wrapper
    └── utils/               # Utilities
        ├── __init__.py
        └── logging.py       # Logging utilities
# Install dependencies
pip install -r requirements.txt
# Run with verbose logging
python main.py --verbose
# Run with custom configuration
python main.py \
--endpoint https://api.openai.com/v1 \
--model gpt-4o-mini \
--key sk-your-key \
--mcp-server http://localhost/mcp \
--verbose

docker build -t shaowenchen/ops-copilot:latest .

# Run interactively with environment variables
docker run -it --rm \
-e MCP_SERVER_URL="http://your-mcp-server/mcp" \
-e MCP_TOKEN="your-token" \
-e MCP_TIMEOUT="600s" \
-e OPENAI_API_KEY="sk-your-key" \
-e OPENAI_API_HOST="https://api.openai.com/v1" \
-e OPENAI_API_MODEL="gpt-4o-mini" \
shaowenchen/ops-copilot:latest
# With verbose mode
docker run -it --rm \
-e MCP_SERVER_URL="http://your-mcp-server/mcp" \
-e OPENAI_API_KEY="sk-your-key" \
-e OPENAI_API_HOST="https://api.openai.com/v1" \
shaowenchen/ops-copilot:latest --verbose
# With custom config file (mount configs directory)
docker run -it --rm \
-v $(pwd)/configs:/app/configs \
-e MCP_SERVER_URL="http://your-mcp-server/mcp" \
-e OPENAI_API_KEY="sk-your-key" \
shaowenchen/ops-copilot:latest --config /app/configs/config.yaml

# Create .env file with your configuration
cat > .env << EOF
MCP_SERVER_URL=http://your-mcp-server/mcp
MCP_TOKEN=your-token
OPENAI_API_KEY=sk-your-key
OPENAI_API_HOST=https://api.openai.com/v1
OPENAI_API_MODEL=gpt-4o-mini
EOF
# Run with .env file
docker run -it --rm \
--env-file .env \
shaowenchen/ops-copilot:latest

docker run -it --rm \
-e MCP_SERVER_URL="http://your-mcp-server/mcp" \
-e OPENAI_API_KEY="sk-your-key" \
shaowenchen/ops-copilot:latest \
--endpoint https://api.openai.com/v1 \
--model gpt-4o-mini \
--mcp-server http://your-mcp-server/mcp \
--verbose

Note: The -it flags are required for interactive mode. Use --rm to automatically remove the container when it exits.
Pre-built Docker images are available on Docker Hub: shaowenchen/ops-copilot:latest
# Pull and run from Docker Hub
docker run -it --rm \
-e MCP_SERVER_URL="http://your-mcp-server/mcp" \
-e OPENAI_API_KEY="sk-your-key" \
shaowenchen/ops-copilot:latest

This project uses GitHub Actions to automatically build and push Docker images to Docker Hub on every push to the master branch.
Configure these secrets in GitHub repository settings:
- DOCKERHUB_USERNAME: Your Docker Hub username
- DOCKERHUB_TOKEN: Your Docker Hub access token
Same as the original project.