Thank you for your interest in contributing to Zen MCP Server! This guide will help you understand our development process, coding standards, and how to submit high-quality contributions.
- Fork the repository on GitHub
- Clone your fork locally
- Set up the development environment:
  ```bash
  ./run-server.sh
  ```
- Create a feature branch from `main`:
  ```bash
  git checkout -b feat/your-feature-name
  ```
We maintain high code quality standards. All contributions must pass our automated checks.
Option 1 - Automated (Recommended):

```bash
# Install pre-commit hooks (one-time setup)
pre-commit install

# Now linting runs automatically on every commit
# Includes: ruff (with auto-fix), black, isort
```

Option 2 - Manual:

```bash
# Run the comprehensive quality checks script
./code_quality_checks.sh
```

This script automatically runs:
- Ruff linting with auto-fix
- Black code formatting
- Import sorting with isort
- Complete unit test suite (361 tests)
- Verification that all checks pass 100%
Manual commands (if you prefer to run individually):

```bash
# Run all linting checks (MUST pass 100%)
ruff check .
black --check .
isort --check-only .

# Auto-fix issues if needed
ruff check . --fix
black .
isort .
```
```bash
# Run complete unit test suite (MUST pass 100%)
python -m pytest -xvs

# Run simulator tests for tool changes
python communication_simulator_test.py
```

Important:
- Every single test must pass - we have zero tolerance for failing tests in CI
- All linting must pass cleanly (ruff, black, isort)
- Import sorting must be correct
- Tests failing in GitHub Actions will result in PR rejection
- New features MUST include tests:
  - Add unit tests in `tests/` for new functions or classes
  - Test both success and error cases
- Tool changes require simulator tests:
  - Add simulator tests in `simulator_tests/` for new or modified tools
  - Use realistic prompts that demonstrate the feature
  - Validate output through server logs
- Bug fixes require regression tests:
  - Add a test that would have caught the bug
  - Ensure the bug cannot reoccur
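As an illustration, a minimal pytest-style unit test covering both the success and the error path might look like this. The `truncate_words` helper and the test names are invented for the example; they are not actual Zen MCP Server code:

```python
# Hypothetical example of a new-feature unit test covering both the
# success path and the error path. `truncate_words` stands in for the
# function under test; in a real contribution this would live in the
# project source and be imported into tests/test_<feature>_<scenario>.py.


def truncate_words(text: str, max_words: int) -> str:
    """Return at most `max_words` whitespace-separated words from text."""
    if max_words <= 0:
        raise ValueError("max_words must be positive")
    return " ".join(text.split()[:max_words])


def test_truncate_words_success():
    # Success case: input within the limit is returned unchanged
    assert truncate_words("hello world", max_words=5) == "hello world"


def test_truncate_words_truncates_long_input():
    # Input over the limit is cut down to the word budget
    assert truncate_words("a b c d", max_words=2) == "a b"


def test_truncate_words_rejects_invalid_limit():
    # Error case: a non-positive limit must raise ValueError
    try:
        truncate_words("hello", max_words=0)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

pytest discovers any `test_*` function in files matching `test_*.py`, so no extra registration is needed.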
- Unit tests: `test_<feature>_<scenario>.py`
- Simulator tests: `test_<tool>_<behavior>.py`
Your PR title MUST follow one of these formats:
Version Bumping Prefixes (trigger version bump):
- `feat: <description>` - New features (MINOR version bump)
- `fix: <description>` - Bug fixes (PATCH version bump)
- `breaking: <description>` or `BREAKING CHANGE: <description>` - Breaking changes (MAJOR version bump)
- `perf: <description>` - Performance improvements (PATCH version bump)
- `refactor: <description>` - Code refactoring (PATCH version bump)
Non-Version Prefixes (no version bump):
- `docs: <description>` - Documentation only
- `chore: <description>` - Maintenance tasks
- `test: <description>` - Test additions/changes
- `ci: <description>` - CI/CD changes
- `style: <description>` - Code style changes
Use our PR template and ensure:
- PR title follows the format guidelines above
- Activated venv and ran `./code_quality_checks.sh` (all checks passed 100%)
- Self-review completed
- Tests added for ALL changes
- Documentation updated as needed
- All unit tests passing
- Relevant simulator tests passing (if tool changes)
- Ready for review
- Follow PEP 8 with Black formatting
- Use type hints for function parameters and returns
- Add docstrings to all public functions and classes
- Keep functions focused and under 50 lines when possible
- Use descriptive variable names
```python
def process_model_response(
    response: ModelResponse,
    max_tokens: Optional[int] = None,
) -> ProcessedResult:
    """Process and validate model response.

    Args:
        response: Raw response from the model provider
        max_tokens: Optional token limit for truncation

    Returns:
        ProcessedResult with validated and formatted content

    Raises:
        ValueError: If response is invalid or exceeds limits
    """
    # Implementation here
```

Imports must be organized by isort into these groups:
- Standard library imports
- Third-party imports
- Local application imports
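For illustration, a module header organized the way isort expects might look like the sketch below. The third-party and local module names are placeholders chosen for the example, not actual project imports:

```python
# Standard library imports
import json
import os
from pathlib import Path

# Third-party imports
import pytest
from pydantic import BaseModel

# Local application imports (hypothetical module paths)
from tools.base import BaseTool
from utils.token_counter import count_tokens
```

isort inserts a blank line between each group and sorts alphabetically within a group, so running `isort .` will produce this layout automatically.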
See our detailed guide: Adding a New Provider
See our detailed guide: Adding a New Tool
- Ensure backward compatibility unless explicitly breaking
- Update all affected tests
- Update documentation if behavior changes
- Add simulator tests for new functionality
- Update README.md for user-facing changes
- Add docstrings to all new code
- Update relevant docs/ files
- Include examples for new features
- Keep documentation concise and clear
Write clear, descriptive commit messages:
- First line: Brief summary (50 chars or less)
- Blank line
- Detailed explanation if needed
- Reference issues: "Fixes #123"
Example:
```
feat: Add retry logic to Gemini provider

Implements exponential backoff for transient errors
in Gemini API calls. Retries up to 2 times with
configurable delays.

Fixes #45
```
```bash
# Auto-fix most issues
ruff check . --fix
black .
isort .
```

- Check test output for specific errors
- Run individual tests for debugging:
  ```bash
  pytest tests/test_specific.py -xvs
  ```
- Ensure server environment is set up for simulator tests
- Verify virtual environment is activated
- Check all dependencies are installed:
  ```bash
  pip install -r requirements.txt
  ```
- Questions: Open a GitHub issue with the "question" label
- Bug Reports: Use the bug report template
- Feature Requests: Use the feature request template
- Discussions: Use GitHub Discussions for general topics
- Be respectful and inclusive
- Welcome newcomers and help them get started
- Focus on constructive feedback
- Assume good intentions
Contributors are recognized in:
- GitHub contributors page
- Release notes for significant contributions
- Special mentions for exceptional work
Thank you for contributing to Zen MCP Server! Your efforts help make this tool better for everyone.