A Model Context Protocol (MCP) interface for the Nipoppy neuroimaging dataset framework. This server exposes tools through MCP that allow AI agents to interact with Nipoppy studies in a deliberate, structured way.
Nipoppy is a lightweight framework for standardized organization and processing of neuroimaging-clinical datasets. It follows the Brain Imaging Data Structure (BIDS) standard and provides tools for managing datasets and processing pipelines.
The Model Context Protocol (MCP) is a standardized protocol that allows AI applications (LLMs) to access external tools and resources through a consistent interface. This server exposes tools for summarizing the current processing status of a Nipoppy study.
This MCP server provides comprehensive access to Nipoppy neuroimaging datasets through both tools and resources:
Context information automatically available to AI agents without function calls:
- `nipoppy://config` - Global dataset configuration and metadata
- `nipoppy://manifest` - Dataset structure manifest (participants/sessions/datatypes)
- `nipoppy://status/curation` - Data availability at different curation stages
- `nipoppy://status/processing` - Pipeline completion status across participants/sessions
- `nipoppy://pipelines/{pipeline_name}/{version}/config` - Individual pipeline configuration
- `nipoppy://pipelines/{pipeline_name}/{version}/descriptor` - Boutiques pipeline descriptor
- `nipoppy://demographics` - De-identified participant demographic information
- `nipoppy://bids/description` - BIDS dataset description and metadata
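As a concrete illustration, an MCP client fetches one of these resources with a standard `resources/read` request (the message shape follows the MCP specification; the `id` value is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "resources/read",
  "params": { "uri": "nipoppy://config" }
}
```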
- `get_participants_sessions` - Unified participant/session query with filtering by data stage
- `get_dataset_info` - Enhanced dataset overview with configurable detail levels
- `navigate_dataset` - File path and configuration access with smart path resolution
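Tools are invoked explicitly by the client via `tools/call` (message shape per the MCP specification; the argument set shown here is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_participants_sessions",
    "arguments": { "data_stage": "imaging" }
  }
}
```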
- `list_manifest_participants_sessions` - Use `get_participants_sessions(data_stage="all")` instead
- `list_manifest_imaging_participants_sessions` - Use `get_participants_sessions(data_stage="imaging")` instead
- `get_pre_reorg_participants_sessions` - Use `get_participants_sessions(data_stage="downloaded")` instead
- `get_post_reorg_participants_sessions` - Use `get_participants_sessions(data_stage="organized")` instead
- `get_bids_participants_sessions` - Use `get_participants_sessions(data_stage="bidsified")` instead
- `list_processed_participants_sessions` - Use `get_participants_sessions(data_stage="processed", ...)` instead
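Each deprecated tool is a thin wrapper that forwards to the unified call while emitting a deprecation warning. A minimal sketch of the pattern (the function bodies below are illustrative stand-ins, not the actual server code):

```python
import warnings

def get_participants_sessions(dataset_root, data_stage="all", **pipeline_filters):
    """Unified query (stand-in body for illustration)."""
    return {"dataset_root": dataset_root, "data_stage": data_stage, **pipeline_filters}

def list_manifest_imaging_participants_sessions(dataset_root):
    """Deprecated: use get_participants_sessions(data_stage="imaging") instead."""
    warnings.warn(
        'list_manifest_imaging_participants_sessions is deprecated; '
        'use get_participants_sessions(data_stage="imaging")',
        DeprecationWarning,
        stacklevel=2,
    )
    # Forward to the unified tool so old callers get identical results
    return get_participants_sessions(dataset_root, data_stage="imaging")
```

Existing callers keep working and see a `DeprecationWarning` pointing them at the replacement.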
- Python 3.10 or higher
```bash
# Clone the repository
git clone https://github.com/nipoppy/mcp.git
cd mcp

# Install dependencies
pip install -e .
```

You can also use the pre-built Docker container from GitHub Container Registry:
```bash
# Pull the latest version
docker pull ghcr.io/bcmcpher/nipoppy-mcp:latest

# Pull a specific version
docker pull ghcr.io/bcmcpher/nipoppy-mcp:v0.1.0
```

The server can be run in different modes depending on your use case:
```bash
# Set the dataset root (optional, defaults to current directory)
export NIPOPPY_DATASET_ROOT=/path/to/your/nipoppy/dataset

# Run the server
python -m nipoppy_mcp.server
```

```bash
# Run with local dataset mounted
docker run -v /path/to/your/nipoppy/dataset:/data ghcr.io/bcmcpher/nipoppy-mcp:latest

# Run with specific version and custom dataset path
docker run \
  -v /path/to/dataset:/data \
  -e NIPOPPY_DATASET_ROOT=/data \
  ghcr.io/bcmcpher/nipoppy-mcp:v0.1.0
```

Add to your Claude Desktop configuration file (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
  "mcpServers": {
    "nipoppy": {
      "command": "python",
      "args": ["-m", "nipoppy_mcp.server"]
    }
  }
}
```

Once connected to an MCP-compatible client, you can access Nipoppy dataset information through both automatic context and explicit tool calls.
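If you run the server from the Docker image instead of a local Python install, the Claude Desktop entry would look something like this (the `-i` flag keeps stdio open for MCP; the mount path is illustrative, so adjust it to your dataset location):

```json
{
  "mcpServers": {
    "nipoppy": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-v", "/path/to/your/nipoppy/dataset:/data",
        "-e", "NIPOPPY_DATASET_ROOT=/data",
        "ghcr.io/bcmcpher/nipoppy-mcp:latest"
      ]
    }
  }
}
```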
Setting the Dataset Root:
The server requires the `NIPOPPY_DATASET_ROOT` environment variable to be set for resources to work:

```bash
export NIPOPPY_DATASET_ROOT=/path/to/your/nipoppy/dataset
```

Example Queries:
- "Get information about this dataset"
- "How many participants and sessions are in this dataset?"
- "What pipelines are installed and what's their status?"
- "List all participants and sessions in the dataset"
- "Show me participants with imaging data"
- "Which participants have completed fMRIPrep processing?"
- "Get participants with BIDS-converted data"
- "Navigate to the fMRIPrep output directory"
- "Show me the configuration for the latest MRIQC pipeline"
- "Get the path to the derivatives directory"
The server automatically provides context through resources, so you can ask about:
- "What's the global configuration of this dataset?"
- "Show me the dataset manifest"
- "What's the curation status of the data?"
- "Get the BIDS dataset description"
- "Get the configuration for fMRIPrep version 23.2.0"
- "Show me the Boutiques descriptor for the MRIQC pipeline"
```bash
# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Or run basic import tests
python -c "from nipoppy_mcp.server import mcp; print('✅ MCP server imports successfully')"
```

The refactored implementation includes comprehensive error handling and validation. Test with:
```bash
# Test basic functionality
python -c "
from nipoppy_mcp.server import get_participants_sessions, get_dataset_info, navigate_dataset
print('✅ Refactored tools imported successfully')

# Test validation (these should raise appropriate errors)
try:
    get_participants_sessions('/fake/path', data_stage='invalid_stage')
except Exception as e:
    print(f'✅ Caught expected error: {e}')
try:
    navigate_dataset('/fake/path', path_type='invalid_type')
except Exception as e:
    print(f'✅ Caught expected error: {e}')
"
```
```bash
# Test resource functions
python -c "
from nipoppy_mcp.server import get_dataset_config, get_dataset_manifest
print('✅ Resource functions imported successfully')
"
```

```text
nipoppy-mcp/
├── nipoppy_mcp/
│   ├── __init__.py
│   └── server.py      # Main MCP server implementation
│                      #   - 8 MCP resources (automatic context)
│                      #   - 3 refactored tools (unified interface)
│                      #   - 7 deprecated tools (backward compatibility)
├── tests/             # Test files
├── pyproject.toml     # Project configuration
└── README.md
```
The refactored server provides:
- 8 MCP Resources: Automatic context loading of dataset metadata, configuration, and status
- 3 Unified Tools: Replace 7 previous specialized tools with a unified interface
- 7 Deprecated Tools: Maintained for backward compatibility with deprecation warnings
- Strict Validation: Comprehensive error handling and parameter validation
- Type Safety: Full type hints and structured data returns
For existing users migrating from the old tools:
| Old Tool | New Tool Call |
|---|---|
| `list_manifest_participants_sessions()` | `get_participants_sessions(data_stage="all")` |
| `list_manifest_imaging_participants_sessions()` | `get_participants_sessions(data_stage="imaging")` |
| `get_pre_reorg_participants_sessions()` | `get_participants_sessions(data_stage="downloaded")` |
| `get_post_reorg_participants_sessions()` | `get_participants_sessions(data_stage="organized")` |
| `get_bids_participants_sessions()` | `get_participants_sessions(data_stage="bidsified")` |
| `list_processed_participants_sessions(name, ver, step)` | `get_participants_sessions(data_stage="processed", pipeline_name=name, pipeline_version=ver, pipeline_step=step)` |
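If you have scripts that call the old tool names, the table above can also be expressed programmatically. This mapping is a convenience sketch, not part of the package; it pairs each deprecated tool with the `data_stage` value that replaces it:

```python
# Old tool name -> data_stage argument for get_participants_sessions
DEPRECATED_TOOL_STAGES = {
    "list_manifest_participants_sessions": "all",
    "list_manifest_imaging_participants_sessions": "imaging",
    "get_pre_reorg_participants_sessions": "downloaded",
    "get_post_reorg_participants_sessions": "organized",
    "get_bids_participants_sessions": "bidsified",
    # Also pass pipeline_name/pipeline_version/pipeline_step for this one:
    "list_processed_participants_sessions": "processed",
}

def migrate_call(old_tool_name, **extra_kwargs):
    """Return the keyword arguments for the equivalent unified call."""
    return {"data_stage": DEPRECATED_TOOL_STAGES[old_tool_name], **extra_kwargs}
```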
The repository includes `example_usage.py` to demonstrate the refactored functionality:

```bash
# Set your dataset path
export NIPOPPY_DATASET_ROOT=/path/to/your/nipoppy/dataset

# Run the example
python example_usage.py
```

This script demonstrates:
- Enhanced dataset information retrieval
- Unified participant/session filtering by data stage
- Dataset navigation and path resolution
- Error handling and validation
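The validation follows a simple pattern: reject unknown parameter values before touching the filesystem. A stand-alone sketch of that pattern (the stage names come from the tool documentation above; the function body is illustrative, not the server's actual implementation):

```python
# Data stages accepted by get_participants_sessions, per the tool docs above
VALID_DATA_STAGES = {"all", "imaging", "downloaded", "organized", "bidsified", "processed"}

def validate_data_stage(data_stage):
    """Raise ValueError for unknown stages, mirroring the server's strict validation."""
    if data_stage not in VALID_DATA_STAGES:
        raise ValueError(
            f"Invalid data_stage {data_stage!r}; "
            f"expected one of {sorted(VALID_DATA_STAGES)}"
        )
    return data_stage
```

Failing fast with a message that lists the accepted values gives an LLM client enough context to retry with a corrected argument.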
For immediate access to dataset information (requires `NIPOPPY_DATASET_ROOT`):

```python
import os
os.environ['NIPOPPY_DATASET_ROOT'] = '/path/to/dataset'

from nipoppy_mcp.server import get_dataset_config

# Resources are available as direct function calls
config = get_dataset_config()  # Auto-loads from environment variable
print(f"Dataset has {len(config['installed_pipelines'])} pipelines")
```

Contributions are welcome! This is a Brainhack 2026 project. Please feel free to submit issues and pull requests.
MIT License - see LICENSE file for details.
The Docker container is automatically built and published to GitHub Container Registry (GHCR) when a new release is tagged:
- Registry: `ghcr.io/bcmcpher/nipoppy-mcp`
- Architecture: Multi-platform (linux/amd64, linux/arm64)
- Tags:
  - `latest` - Points to the most recent release
  - `v0.1.0` - Full semantic version
  - `v0.1` - Minor version
  - `v0` - Major version
```bash
# Build the Docker image locally
docker build -t nipoppy-mcp .

# Run the locally built image
docker run -v /path/to/dataset:/data nipoppy-mcp
```