Production-grade MCP server toolkit with minimal boilerplate
NextMCP is a Python SDK built on top of FastMCP that provides a developer-friendly experience for building MCP (Model Context Protocol) servers. Inspired by Next.js, it offers minimal setup, powerful middleware, and a rich CLI for rapid development.
- Full MCP Specification - Complete support for Tools, Prompts, and Resources primitives
- Convention-Based Structure - Next.js-inspired file-based organization with auto-discovery
- Zero-Config Setup - Single-line `NextMCP.from_config()` for instant project setup
- Auto-Discovery - Automatically discover and register primitives from directory structure
- Production Deployment - One-command deploy to Docker, Railway, Render, Fly.io with health checks
- Health Checks - Built-in liveness and readiness probes for Kubernetes and cloud platforms
- Graceful Shutdown - Clean termination with SIGTERM/SIGINT handling and resource cleanup
- Minimal Boilerplate - Get started with just a few lines of code
- Decorator-based API - Register tools, prompts, and resources with simple decorators
- Async Support - Full support for async/await across all primitives
- Argument Completion - Smart suggestions for prompt arguments and resource templates
- Resource Subscriptions - Real-time notifications when resources change
- WebSocket Transport - Real-time bidirectional communication for interactive applications
- Authentication & Authorization - Built-in support for API keys, JWT, sessions, OAuth 2.0, and RBAC
- OAuth 2.0 Support - Complete OAuth implementation with GitHub and Google providers, PKCE, session management
- Global & Primitive-specific Middleware - Add logging, auth, rate limiting, caching, and more
- Rich CLI - Scaffold projects, run servers, deploy, and generate docs with `mcp` commands
- Configuration Management - Support for `.env`, YAML config files, and environment variables
- Schema Validation - Optional Pydantic integration for type-safe inputs
- Metrics & Monitoring - Prometheus-compatible metrics with custom metric support
- Production Ready - Built-in error handling, logging, and comprehensive testing
pip install nextmcp

# CLI tools (recommended)
pip install nextmcp[cli]
# Configuration support
pip install nextmcp[config]
# Schema validation with Pydantic
pip install nextmcp[schema]
# WebSocket transport
pip install nextmcp[websocket]
# OAuth authentication (included in base by default via aiohttp)
pip install nextmcp[oauth]
# Everything
pip install nextmcp[all]
# Development dependencies
pip install nextmcp[dev]

NextMCP offers two approaches: Convention-Based (recommended) for scalable projects, and Manual for simple use cases.
Perfect for projects with multiple tools, prompts, and resources. Uses file-based organization for automatic discovery.
my-blog-server/
├── nextmcp.config.yaml
├── server.py
├── tools/
│ ├── __init__.py
│ └── posts.py
├── prompts/
│ ├── __init__.py
│ └── workflows.py
└── resources/
├── __init__.py
└── blog_resources.py

# nextmcp.config.yaml
name: blog-server
version: 1.0.0
description: A blog management MCP server
auto_discover: true
discovery:
  tools: tools/
  prompts: prompts/
  resources: resources/

# tools/posts.py
from nextmcp import NextMCP
app = NextMCP.from_config()
@app.tool()
def create_post(title: str, content: str) -> dict:
    """Create a new blog post"""
    return {"id": 1, "title": title, "content": content}

@app.tool()
def list_posts() -> list:
    """List all blog posts"""
    return [{"id": 1, "title": "First Post"}]

# server.py
from nextmcp import NextMCP
# Auto-discovers all tools, prompts, and resources
app = NextMCP.from_config()
if __name__ == "__main__":
    app.run()

python server.py

That's it! All tools, prompts, and resources are automatically discovered and registered.
For simple projects with just a few tools.
mcp init my-bot
cd my-bot

# app.py
from nextmcp import NextMCP
app = NextMCP("my-bot")
@app.tool()
def greet(name: str) -> str:
    """Greet someone by name"""
    return f"Hello, {name}!"

if __name__ == "__main__":
    app.run()

mcp run app.py

Your MCP server is now running with the greet tool available.
NextMCP v0.3.0 introduces a powerful convention-based architecture inspired by Next.js's file-based routing. This approach enables automatic discovery and registration of tools, prompts, and resources from your directory structure, eliminating boilerplate and improving project organization.
Before (Manual Registration):
# app.py - 200+ lines of boilerplate
from nextmcp import NextMCP
app = NextMCP("my-server")
@app.tool()
def create_post(...):
    ...

@app.tool()
def update_post(...):
    ...

@app.tool()
def delete_post(...):
    ...

@app.prompt()
def writing_workflow(...):
    ...

@app.resource("blog://posts/recent")
def recent_posts():
    ...

# ... 20+ more primitives mixed together

After (Convention-Based):
# server.py - Just 3 lines!
from nextmcp import NextMCP
app = NextMCP.from_config()
if __name__ == "__main__":
    app.run()

Organize your primitives in standard directories:
my-mcp-server/
├── nextmcp.config.yaml # Project configuration
├── server.py # Entry point
├── tools/ # Tool definitions
│ ├── __init__.py
│ ├── posts.py # Post management tools
│ └── comments.py # Comment management tools
├── prompts/ # Prompt templates
│ ├── __init__.py
│ └── workflows.py # Workflow prompts
└── resources/ # Resource providers
├── __init__.py
└── blog_resources.py # Blog data resources
NextMCP scans your directory structure and automatically discovers decorated functions:
# tools/posts.py
from nextmcp import NextMCP
app = NextMCP.from_config()
@app.tool()
def create_post(title: str, content: str) -> dict:
    """Create a new blog post"""
    return {"id": 1, "title": title}

@app.tool()
def list_posts(limit: int = 10) -> list:
    """List recent blog posts"""
    return []

The discovery engine:
- Recursively scans `tools/`, `prompts/`, and `resources/` directories
- Imports Python modules and inspects decorated functions
- Automatically registers all discovered primitives
- Skips `__init__.py` and `test_*.py` files
Control discovery behavior with `nextmcp.config.yaml`:
name: my-mcp-server
version: 1.0.0
description: My awesome MCP server
# Enable/disable auto-discovery
auto_discover: true
# Customize directory paths
discovery:
  tools: tools/
  prompts: prompts/
  resources: resources/

# Server configuration
server:
  host: 0.0.0.0
  port: 8000
  transport: stdio

# Middleware pipeline
middleware:
  - nextmcp.middleware.log_calls
  - nextmcp.middleware.error_handler

Use the `from_config()` class method for automatic setup:
from nextmcp import NextMCP
# Load configuration and auto-discover primitives
app = NextMCP.from_config()
# Optional: specify custom config file or base path
app = NextMCP.from_config(
    config_file="custom.yaml",
    base_path="/path/to/project"
)

The auto-discovery engine follows these rules:
- Directory Scanning: Recursively searches configured directories
- Module Importing: Dynamically imports all `.py` files
- Decorator Detection: Finds functions with MCP decorator markers
- Automatic Registration: Registers discovered primitives with the app
- File Exclusions: Skips `__init__.py` and `test_*.py` files
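As an illustration of the scanning and exclusion rules above (a sketch, not NextMCP's actual implementation), a minimal discovery pass over a project directory might look like:

```python
from pathlib import Path

def find_candidate_modules(base: Path) -> list[Path]:
    """Collect .py files under base, skipping __init__.py and test_*.py,
    mirroring the discovery rules described above (illustrative only)."""
    modules = []
    for path in sorted(base.rglob("*.py")):
        if path.name == "__init__.py" or path.name.startswith("test_"):
            continue
        modules.append(path)
    return modules
```

Each returned module would then be imported (e.g. via `importlib`) and inspected for decorated functions before registration.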
For large projects, use subdirectories and modules:
tools/
├── __init__.py
├── posts/
│ ├── __init__.py
│ ├── create.py
│ ├── update.py
│ └── delete.py
├── comments/
│ ├── __init__.py
│ └── moderate.py
└── users/
├── __init__.py
└── manage.py
All tools in subdirectories are automatically discovered.
Validate your project structure:
from nextmcp import validate_project_structure
# Check if project follows conventions
results = validate_project_structure()
if results["valid"]:
    print(f"✓ Found {results['stats']['tools']} tool files")
    print(f"✓ Found {results['stats']['prompts']} prompt files")
    print(f"✓ Found {results['stats']['resources']} resource files")
else:
    print("Errors:", results["errors"])
    print("Warnings:", results["warnings"])

- Separation of Concerns - Tools, prompts, and resources in dedicated directories
- Scalability - Add new primitives by creating files, no registration needed
- Team Collaboration - Clear structure for multiple developers
- Zero Boilerplate - No manual registration code
- Type Safety - Full IDE support with organized modules
- Testing - Easy to test individual modules in isolation
Existing manual projects work unchanged. To migrate gradually:
# You can mix both approaches!
from nextmcp import NextMCP
# Start with auto-discovery
app = NextMCP.from_config()
# Add manual tools as needed
@app.tool()
def legacy_tool():
    """This still works!"""
    return "result"

See examples/blog_server/ for a complete convention-based project.
NextMCP provides a comprehensive authentication and authorization system with support for API keys, JWT tokens, sessions, OAuth 2.0, and fine-grained RBAC. The system is production-ready with extensive documentation and examples.
MCP servers often need to:
- Protect sensitive tools from unauthorized access
- Implement role-based access (admin, user, viewer)
- Track who performed actions for audit logs
- Integrate with existing auth systems (API keys, JWT, OAuth 2.0)
- Authenticate users with external providers (GitHub, Google, etc.)
The simplest way to protect your tools:
from nextmcp import NextMCP
from nextmcp.auth import APIKeyProvider, AuthContext, requires_auth_async
app = NextMCP("secure-server")
# Configure API key provider
api_key_provider = APIKeyProvider(
    valid_keys={
        "admin-key-123": {
            "user_id": "admin1",
            "username": "admin",
            "roles": ["admin"],
            "permissions": ["read:*", "write:*"],
        },
        "user-key-456": {
            "user_id": "user1",
            "username": "alice",
            "roles": ["user"],
            "permissions": ["read:posts"],
        }
    }
)

# Protected tool - requires authentication
@app.tool()
@requires_auth_async(provider=api_key_provider)
async def protected_tool(auth: AuthContext, data: str) -> dict:
    """Only authenticated users can access this."""
    return {
        "message": f"Hello {auth.username}",
        "data": data,
        "user_id": auth.user_id
    }

For stateless token-based auth:
from nextmcp.auth import JWTProvider
# Configure JWT provider
jwt_provider = JWTProvider(
    secret_key="your-secret-key",
    algorithm="HS256",
    verify_exp=True
)

# Login endpoint that generates tokens
@app.tool()
async def login(username: str, password: str) -> dict:
    """Login and receive a JWT token."""
    # Validate credentials (check database, etc.)
    # Generate token
    token = jwt_provider.create_token(
        user_id=f"user_{username}",
        roles=["user"],
        permissions=["read:posts", "write:posts"],
        username=username,
        expires_in=3600  # 1 hour
    )
    return {"token": token, "expires_in": 3600}

# Use the token for authentication
@app.tool()
@requires_auth_async(provider=jwt_provider)
async def secure_action(auth: AuthContext) -> dict:
    """Requires valid JWT token."""
    return {"user": auth.username, "action": "performed"}

NextMCP includes three production-ready authentication providers:
| Provider | Use Case | Features |
|---|---|---|
| APIKeyProvider | Simple API key auth | Pre-configured keys, custom validators, secure generation |
| JWTProvider | Token-based auth | Automatic expiration, signature verification, stateless |
| SessionProvider | Session-based auth | In-memory sessions, automatic cleanup, session management |
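For intuition about what an HS256 JWT provider does under the hood (a standard-library sketch of the JWT mechanics, not NextMCP's code): the signature is an HMAC-SHA256 over the base64url-encoded `header.payload` pair.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: str) -> str:
    """Build a minimal HS256 JWT (illustrative, not a library replacement)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_hs256(token: str, secret: str) -> bool:
    """Recompute the HMAC and compare in constant time."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)
```

In practice a real provider also checks the `exp` claim and other registered claims; use a maintained library rather than hand-rolled code in production.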
Control access based on user roles:
from nextmcp.auth import requires_role_async
# Only admins can access this tool
@app.tool()
@requires_auth_async(provider=api_key_provider)
@requires_role_async("admin")
async def admin_tool(auth: AuthContext) -> dict:
    """Admin-only functionality."""
    return {"action": "admin action performed"}

# Users or admins can access
@app.tool()
@requires_auth_async(provider=api_key_provider)
@requires_role_async("user", "admin")  # Either role works
async def user_tool(auth: AuthContext) -> dict:
    """User or admin can access."""
    return {"action": "user action"}

Fine-grained control with specific permissions:
from nextmcp.auth import RBAC, requires_permission_async
# Set up RBAC system
rbac = RBAC()
# Define permissions
rbac.define_permission("read:posts", "Read blog posts")
rbac.define_permission("write:posts", "Create and edit posts")
rbac.define_permission("delete:posts", "Delete posts")
# Define roles with permissions
rbac.define_role("viewer", "Read-only access")
rbac.assign_permission_to_role("viewer", "read:posts")
rbac.define_role("editor", "Full content management")
rbac.assign_permission_to_role("editor", "read:posts")
rbac.assign_permission_to_role("editor", "write:posts")
rbac.assign_permission_to_role("editor", "delete:posts")
# Require specific permission
@app.tool()
@requires_auth_async(provider=api_key_provider)
@requires_permission_async("write:posts")
async def create_post(auth: AuthContext, title: str) -> dict:
    """Requires write:posts permission."""
    return {"status": "created", "title": title}

# Multiple permissions (user needs at least one)
@app.tool()
@requires_auth_async(provider=api_key_provider)
@requires_permission_async("admin:posts", "delete:posts")
async def delete_post(auth: AuthContext, post_id: int) -> dict:
    """Requires admin:posts OR delete:posts permission."""
    return {"status": "deleted", "post_id": post_id}

Support for wildcard permissions:
# Admin with wildcard - matches ALL permissions
rbac.define_role("admin", "Full access")
rbac.assign_permission_to_role("admin", "*")
# Namespace wildcard - matches all admin permissions
rbac.assign_permission_to_role("moderator", "admin:*")
# moderator has: admin:users, admin:posts, admin:settings, etc.

The AuthContext object is injected as the first parameter to protected tools:
@app.tool()
@requires_auth_async(provider=api_key_provider)
async def my_tool(auth: AuthContext, param: str) -> dict:
    # Access user information
    user_id = auth.user_id    # Unique user ID
    username = auth.username  # Human-readable name
    # Check roles and permissions
    is_admin = auth.has_role("admin")
    can_write = auth.has_permission("write:posts")
    # Access metadata
    department = auth.metadata.get("department")
    return {
        "user": username,
        "is_admin": is_admin,
        "can_write": can_write
    }

Stack authentication and authorization decorators:
@app.tool() # 4. Register as tool
@requires_auth_async(provider=api_key_provider) # 3. Authenticate user
@requires_role_async("admin") # 2. Check role
@requires_permission_async("delete:users") # 1. Check permission (executes first)
async def delete_user(auth: AuthContext, user_id: int) -> dict:
    """Requires authentication, admin role, AND delete:users permission."""
    return {"status": "deleted", "user_id": user_id}

Using the SessionProvider for session-based authentication:
from nextmcp.auth import SessionProvider
session_provider = SessionProvider(session_timeout=3600) # 1 hour
@app.tool()
async def login(username: str, password: str) -> dict:
    """Create a new session."""
    # Validate credentials...
    # Create session
    session_id = session_provider.create_session(
        user_id=f"user_{username}",
        username=username,
        roles=["user"],
        permissions=["read:posts"]
    )
    return {"session_id": session_id, "expires_in": 3600}

@app.tool()
async def logout(session_id: str) -> dict:
    """Destroy a session."""
    success = session_provider.destroy_session(session_id)
    return {"logged_out": success}

# Use session for authentication
@app.tool()
@requires_auth_async(provider=session_provider)
async def protected_tool(auth: AuthContext) -> dict:
    """Requires valid session."""
    return {"user": auth.username}

Define roles and permissions in configuration:
from nextmcp.auth import RBAC
rbac = RBAC()
config = {
    "permissions": [
        {"name": "read:posts", "description": "Read posts"},
        {"name": "write:posts", "description": "Write posts"},
        {"name": "delete:posts", "description": "Delete posts"},
    ],
    "roles": [
        {
            "name": "viewer",
            "description": "Read-only",
            "permissions": ["read:posts"]
        },
        {
            "name": "editor",
            "description": "Full content management",
            "permissions": ["read:posts", "write:posts", "delete:posts"]
        }
    ]
}
rbac.load_from_config(config)

Create your own authentication provider:
from nextmcp.auth import AuthProvider, AuthResult, AuthContext
class CustomAuthProvider(AuthProvider):
    """Custom authentication using external service."""

    async def authenticate(self, credentials: dict) -> AuthResult:
        """Validate credentials against external service."""
        token = credentials.get("token")
        # Call your external auth service
        user_data = await external_auth_service.validate(token)
        if not user_data:
            return AuthResult.failure("Invalid token")
        # Build auth context
        context = AuthContext(
            authenticated=True,
            user_id=user_data["id"],
            username=user_data["name"],
        )
        # Add roles from external service
        for role in user_data.get("roles", []):
            context.add_role(role)
        return AuthResult.success_result(context)

Authentication errors are raised as exceptions:
from nextmcp.auth import PermissionDeniedError
from nextmcp.auth.middleware import AuthenticationError
try:
    # Call protected tool without credentials
    result = await protected_tool(data="test")
except AuthenticationError as e:
    print(f"Auth failed: {e}")

try:
    # Call tool without required permission
    result = await admin_tool()
except PermissionDeniedError as e:
    print(f"Permission denied: {e}")
    print(f"Required: {e.required}")
    print(f"User: {e.user_id}")

- Never commit secrets: Use environment variables for keys/secrets
- Use HTTPS/TLS: Always encrypt traffic in production
- Rotate keys regularly: Implement key rotation policies
- Short token expiration: Balance security and UX (1-24 hours)
- Log auth attempts: Track successful and failed authentication
- Validate all inputs: Never trust client-provided data
- Use strong secrets: Generate with `secrets.token_urlsafe(32)`
- Implement rate limiting: Prevent brute force attacks
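The strong-secrets point is easy to follow with the standard library: generate keys with `secrets` and compare them in constant time (a general sketch, independent of NextMCP):

```python
import hmac
import secrets

# Generate a strong API key (~256 bits of entropy, URL-safe)
api_key = secrets.token_urlsafe(32)

def check_key(supplied: str, expected: str) -> bool:
    """Compare keys in constant time to avoid timing side channels."""
    return hmac.compare_digest(supplied, expected)
```

Plain `==` on secrets can leak timing information; `hmac.compare_digest` does not short-circuit on the first differing character.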
NextMCP includes a complete OAuth 2.0 implementation with PKCE support, ready-to-use providers for GitHub and Google, and a flexible session management system.
- OAuth 2.0 with PKCE: Full Authorization Code Flow with PKCE for secure authentication
- Ready-to-use Providers: GitHub and Google OAuth providers with sensible defaults
- Session Management: Persistent token storage with memory and file-based backends
- Auth Metadata Protocol: Servers can announce authentication requirements to MCP hosts
- Request Enforcement: Runtime middleware validates tokens, scopes, and permissions
- Automatic Token Refresh: Built-in token refresh handling
- Production-Ready: Comprehensive testing and error handling
import os

from fastmcp import FastMCP, Context
from nextmcp.auth import GoogleOAuthProvider, create_auth_middleware
from nextmcp.session import FileSessionStore
from nextmcp.protocol import AuthRequirement, AuthMetadata, AuthFlowType
mcp = FastMCP("My Secure Server")
# Set up Google OAuth provider
google = GoogleOAuthProvider(
    client_id=os.getenv("GOOGLE_CLIENT_ID"),
    client_secret=os.getenv("GOOGLE_CLIENT_SECRET"),
    scopes=["openid", "email", "profile"]
)

# Create auth middleware with session storage
auth_middleware = create_auth_middleware(
    provider=google,
    requirement=AuthRequirement.REQUIRED,
    session_store=FileSessionStore("./sessions")
)

# Apply middleware to server
mcp.use(auth_middleware)

# Expose auth metadata so hosts know this server requires OAuth
auth_metadata = AuthMetadata(
    requirement=AuthRequirement.REQUIRED,
    flow_type=AuthFlowType.OAUTH,
    provider_name="google",
    scopes=["openid", "email", "profile"]
)
mcp.set_auth_metadata(auth_metadata)

# Tools now require authentication
@mcp.tool()
async def get_user_data(ctx: Context) -> str:
    return f"Hello {ctx.auth.username}!"

from nextmcp.auth import GitHubOAuthProvider
github = GitHubOAuthProvider(
    client_id=os.getenv("GITHUB_CLIENT_ID"),
    client_secret=os.getenv("GITHUB_CLIENT_SECRET"),
    scopes=["user:email", "read:user"]
)

auth_middleware = create_auth_middleware(
    provider=github,
    requirement=AuthRequirement.REQUIRED,
    session_store=FileSessionStore("./sessions")
)
mcp.use(auth_middleware)

Use multiple OAuth providers in the same application:
from nextmcp.auth import GitHubOAuthProvider, GoogleOAuthProvider
# Set up both providers
github = GitHubOAuthProvider(...)
google = GoogleOAuthProvider(...)
# Different tools can use different providers
@mcp.tool()
@requires_auth_async(provider=github)
async def github_tool(ctx: Context) -> dict:
    return {"provider": "github", "user": ctx.auth.username}

@mcp.tool()
@requires_auth_async(provider=google)
async def google_tool(ctx: Context) -> dict:
    return {"provider": "google", "user": ctx.auth.username}

Session stores persist authentication state and handle token refresh:
from nextmcp.session import FileSessionStore, MemorySessionStore
# File-based session storage (persists across restarts)
file_store = FileSessionStore(
    directory="./sessions",
    auto_cleanup=True,
    cleanup_interval=3600  # Clean up expired sessions every hour
)

# In-memory session storage (for development)
memory_store = MemorySessionStore()

# Sessions automatically handle token refresh
auth_middleware = create_auth_middleware(
    provider=google,
    session_store=file_store,
    auto_refresh=True  # Automatically refresh expired tokens
)

Validate OAuth scopes at the tool level:
from nextmcp.auth import requires_scope_async
@mcp.tool()
@requires_auth_async(provider=google)
@requires_scope_async("https://www.googleapis.com/auth/calendar")
async def access_calendar(ctx: Context) -> dict:
    """Requires Google Calendar scope."""
    return {"calendar": "data"}

The auth metadata protocol allows MCP servers to announce their authentication requirements to host applications (like Claude Desktop, Cursor, etc.):
from nextmcp.protocol import AuthMetadata, AuthRequirement, AuthFlowType
# Define authentication requirements
auth_metadata = AuthMetadata(
    requirement=AuthRequirement.REQUIRED,  # or OPTIONAL, NONE
    flow_type=AuthFlowType.OAUTH,          # or API_KEY, JWT
    provider_name="google",
    scopes=["openid", "email", "profile"],
    authorization_url="https://accounts.google.com/o/oauth2/v2/auth",
    token_url="https://oauth2.googleapis.com/token"
)

# Expose to hosts via server metadata
mcp.set_auth_metadata(auth_metadata)

Hosts can query this metadata to:
- Determine if authentication is required
- Identify the OAuth provider and flow type
- Obtain authorization and token URLs
- Request appropriate scopes
- Guide users through authentication
Create custom OAuth providers for other services:
import aiohttp

from nextmcp.auth import OAuthProvider, OAuthConfig

class CustomOAuthProvider(OAuthProvider):
    """Custom OAuth provider for your service."""

    def __init__(self, client_id: str, client_secret: str):
        config = OAuthConfig(
            client_id=client_id,
            client_secret=client_secret,
            authorization_url="https://your-service.com/oauth/authorize",
            token_url="https://your-service.com/oauth/token",
            scopes=["read", "write"],
            redirect_uri="http://localhost:8000/callback"
        )
        super().__init__(config)

    async def get_user_info(self, access_token: str) -> dict:
        """Fetch user information from your service."""
        async with aiohttp.ClientSession() as session:
            async with session.get(
                "https://your-service.com/api/user",
                headers={"Authorization": f"Bearer {access_token}"}
            ) as response:
                return await response.json()

Use the provided helper to test OAuth flows interactively:
# Run the OAuth token helper
python examples/auth/oauth_token_helper.py
# Follow the interactive prompts to:
# 1. Choose provider (GitHub or Google)
# 2. Get authorization URL
# 3. Complete OAuth flow
# 4. Receive access token
# 5. Test API calls with token

See the comprehensive OAuth documentation:
- ARCHITECTURE.md - System design and data flow
- OAUTH_TESTING_SETUP.md - Setting up OAuth credentials
- MIGRATION_GUIDE.md - Adding auth to existing servers
- HOST_INTEGRATION.md - Guide for MCP host developers
Check out complete authentication examples:
- examples/auth/complete_oauth_server.py - Production-ready Google OAuth with session management
- examples/auth/github_oauth_server.py - GitHub OAuth authentication example
- examples/auth/google_oauth_server.py - Google OAuth authentication example
- examples/auth/multi_provider_server.py - Multiple OAuth providers in one server
- examples/auth/session_management_example.py - Advanced session management workflows
- examples/auth/combined_auth_server.py - OAuth combined with RBAC and permissions
- examples/auth/manifest_server.py - Permission manifests with OAuth
- examples/auth/oauth_token_helper.py - Interactive OAuth testing tool
- examples/auth_api_key/ - API key authentication with role-based access
- examples/auth_jwt/ - JWT token authentication with login endpoint
- examples/auth_rbac/ - Advanced RBAC with fine-grained permissions
from nextmcp import NextMCP
app = NextMCP(
    name="my-mcp-server",
    description="A custom MCP server"
)

@app.tool()
def calculate(x: int, y: int) -> int:
    """Add two numbers"""
    return x + y

# With custom name and description
@app.tool(name="custom_name", description="A custom tool")
def my_function(data: str) -> dict:
    return {"result": data}

Middleware wraps your tools to add cross-cutting functionality.
from nextmcp import log_calls, error_handler
# Add middleware that applies to all tools
app.add_middleware(log_calls)
app.add_middleware(error_handler)
@app.tool()
def my_tool(x: int) -> int:
    return x * 2  # This will be logged and error-handled automatically

from nextmcp import cache_results, require_auth
@app.tool()
@cache_results(ttl_seconds=300) # Cache for 5 minutes
def expensive_operation(param: str) -> dict:
    # Expensive computation here
    return {"result": perform_calculation(param)}

@app.tool()
@require_auth(valid_keys={"secret-key-123"})
def protected_tool(auth_key: str, data: str) -> str:
    return f"Protected: {data}"

NextMCP includes several production-ready middleware:
- `log_calls` - Log all tool invocations with timing
- `error_handler` - Catch exceptions and return structured errors
- `require_auth(valid_keys)` - API key authentication
- `rate_limit(max_calls, time_window)` - Rate limiting
- `cache_results(ttl_seconds)` - Response caching
- `validate_inputs(**validators)` - Custom input validation
- `timeout(seconds)` - Execution timeout
All middleware also have async variants (e.g., log_calls_async, error_handler_async, etc.) for use with async tools.
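Conceptually, each middleware is just a function that takes a tool and returns a wrapped callable. A homemade `log_calls`-style wrapper illustrating the pattern (not NextMCP's actual implementation):

```python
import functools
import time

def log_calls(tool):
    """Wrap a tool, timing each invocation (sketch of the middleware pattern)."""
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = tool(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{tool.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@log_calls
def double(x: int) -> int:
    return x * 2
```

`functools.wraps` preserves the tool's name and docstring, which matters when a framework introspects the wrapped function for registration.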
NextMCP has full support for async/await patterns, allowing you to build high-performance tools that can handle concurrent I/O operations.
from nextmcp import NextMCP
import asyncio
app = NextMCP("async-app")
@app.tool()
async def fetch_data(url: str) -> dict:
    """Fetch data from an API asynchronously"""
    # Use async libraries like httpx, aiohttp, etc.
    await asyncio.sleep(0.1)  # Simulate API call
    return {"url": url, "data": "fetched"}

Use async middleware variants for async tools:
from nextmcp import log_calls_async, error_handler_async, cache_results_async
app.add_middleware(log_calls_async)
app.add_middleware(error_handler_async)
@app.tool()
@cache_results_async(ttl_seconds=300)
async def expensive_async_operation(param: str) -> dict:
    await asyncio.sleep(1)  # Simulate expensive operation
    return {"result": param}

The real power of async is handling multiple operations concurrently:
@app.tool()
async def fetch_multiple_sources(sources: list) -> dict:
    """Fetch data from multiple sources concurrently"""
    async def fetch_one(source: str):
        # Each fetch happens concurrently, not sequentially
        await asyncio.sleep(0.1)
        return {"source": source, "data": "..."}

    # Gather results concurrently - much faster than sequential!
    results = await asyncio.gather(*[fetch_one(s) for s in sources])
    return {"sources": results}

Performance Comparison:
- Sequential: 4 sources × 0.1s = 0.4s
- Concurrent (async): ~0.1s (all at once!)
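The numbers above are easy to reproduce with plain `asyncio` (no NextMCP required):

```python
import asyncio
import time

async def fetch_one(source: str) -> dict:
    await asyncio.sleep(0.1)  # Simulated I/O
    return {"source": source}

async def fetch_all(sources: list) -> list:
    # All four sleeps overlap, so total time stays near 0.1s
    return await asyncio.gather(*[fetch_one(s) for s in sources])

start = time.perf_counter()
results = asyncio.run(fetch_all(["a", "b", "c", "d"]))
elapsed = time.perf_counter() - start
print(f"{len(results)} sources in {elapsed:.2f}s")  # ~0.1s, not 0.4s
```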
You can have both sync and async tools in the same application:
@app.tool()
def sync_tool(x: int) -> int:
    """Regular synchronous tool"""
    return x * 2

@app.tool()
async def async_tool(x: int) -> int:
    """Async tool for I/O operations"""
    await asyncio.sleep(0.1)
    return x * 3

Use async for:
- HTTP API calls (with `httpx`, `aiohttp`)
- Database queries (with `asyncpg`, `motor`)
- File I/O operations
- Multiple concurrent operations
- WebSocket connections
Stick with sync for:
- CPU-bound operations (heavy computations)
- Simple operations with no I/O
- When third-party libraries don't support async
See examples/async_weather_bot/ for a complete async example.
from nextmcp import NextMCP
from pydantic import BaseModel
app = NextMCP("my-server")
class WeatherInput(BaseModel):
    city: str
    units: str = "fahrenheit"

@app.tool()
def get_weather(city: str, units: str = "fahrenheit") -> dict:
    # Input automatically validated against WeatherInput schema
    return {"city": city, "temp": 72, "units": units}

Prompts are user-driven workflow templates that guide AI interactions. They're explicitly invoked by users (not automatically by the AI) and can reference available tools and resources.
from nextmcp import NextMCP
app = NextMCP("my-server")
@app.prompt()
def vacation_planner(destination: str, budget: int) -> str:
    """Plan a vacation itinerary."""
    return f"""
    Plan a vacation to {destination} with a budget of ${budget}.
    Use these tools:
    - flight_search: Find flights
    - hotel_search: Find accommodations
    Check these resources:
    - resource://user/preferences
    - resource://calendar/availability
    """

from nextmcp import argument
@app.prompt(description="Research a topic", tags=["research"])
@argument("topic", description="What to research", suggestions=["Python", "MCP", "FastMCP"])
@argument("depth", suggestions=["basic", "detailed", "comprehensive"])
def research_prompt(topic: str, depth: str = "basic") -> str:
    """Generate a research prompt with the specified depth."""
    return f"Research {topic} at {depth} level..."

# Dynamic completion
@app.prompt_completion("research_prompt", "topic")
async def complete_topics(partial: str) -> list[str]:
    """Provide dynamic topic suggestions."""
    topics = await fetch_available_topics()
    return [t for t in topics if partial.lower() in t.lower()]

@app.prompt(tags=["analysis"])
async def analyze_prompt(data_source: str) -> str:
    """Generate analysis prompt with real-time data."""
    data = await fetch_data(data_source)
    return f"Analyze this data: {data}"

When to use prompts:
- Guide complex multi-step workflows
- Provide templates for common tasks
- Structure AI interactions
- Reference available tools and resources
See examples/knowledge_base/ for a complete example using prompts.
Resources provide read-only access to contextual data through unique URIs. They're application-driven and give the AI access to information without triggering actions.
from nextmcp import NextMCP
app = NextMCP("my-server")
@app.resource("file:///logs/app.log", description="Application logs")
def app_logs() -> str:
    """Provide access to application logs."""
    with open("/var/logs/app.log") as f:
        return f.read()

@app.resource("config://app/settings", mime_type="application/json")
def app_settings() -> dict:
    """Provide application configuration."""
    return {
        "theme": "dark",
        "language": "en",
        "max_results": 100
    }

Templates allow parameterized access to dynamic resources:
@app.resource_template("weather://forecast/{city}/{date}")
async def weather_forecast(city: str, date: str) -> dict:
    """Get weather forecast for a specific city and date."""
    return await fetch_weather(city, date)

@app.resource_template("file:///docs/{category}/(unknown)")
def documentation(category: str, filename: str) -> str:
    """Access documentation files."""
    return Path(f"/docs/{category}/(unknown)").read_text()

# Template parameter completion
@app.template_completion("weather_forecast", "city")
def complete_cities(partial: str) -> list[str]:
    """Suggest city names."""
    return ["London", "Paris", "Tokyo", "New York"]

Resources can notify subscribers when they change:
@app.resource(
    "config://live/settings",
    subscribable=True,
    max_subscribers=50
)
async def live_settings() -> dict:
    """Provide live configuration that can change."""
    return await load_live_config()

# Notify subscribers when config changes
app.notify_resource_changed("config://live/settings")

# Manage subscriptions
app.subscribe_to_resource("config://live/settings", "subscriber_id")
app.unsubscribe_from_resource("config://live/settings", "subscriber_id")

@app.resource("db://users/recent")
async def recent_users() -> list[dict]:
    """Get recently active users from database."""
    return await db.query("SELECT * FROM users ORDER BY last_active DESC LIMIT 10")

When to use resources:
- Provide read-only data access
- Expose configuration and settings
- Share application state
- Offer real-time data feeds (with subscriptions)
Resource URIs can use any scheme:
- `file://` - File system access
- `config://` - Configuration data
- `db://` - Database queries
- `api://` - External API data
- Custom schemes for your use case
See examples/knowledge_base/ for a complete example using resources and templates.
NextMCP supports multiple configuration sources with automatic merging:
from nextmcp import load_config
# Load from config.yaml and .env
config = load_config(config_file="config.yaml")
# Access configuration
host = config.get_host()
port = config.get_port()
debug = config.is_debug()
# Custom config values
api_key = config.get("api_key", default="default-key")

config.yaml:
host: "0.0.0.0"
port: 8080
log_level: "DEBUG"
api_key: "my-secret-key"

.env:
MCP_HOST=0.0.0.0
MCP_PORT=8080
API_KEY=my-secret-key
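A minimal sketch of how these sources could be merged. The precedence shown here is an assumption for illustration (environment variables override the YAML file, which overrides defaults); NextMCP's actual merge order may differ:

```python
import os

def merge_config(defaults: dict, yaml_config: dict, env_prefix: str = "MCP_") -> dict:
    """Merge configuration sources; later sources win: defaults < YAML file < environment."""
    merged = {**defaults, **yaml_config}
    for key, value in os.environ.items():
        if key.startswith(env_prefix):
            # e.g. MCP_PORT overrides the "port" key, MCP_HOST overrides "host"
            merged[key[len(env_prefix):].lower()] = value
    return merged
```

For example, with `MCP_PORT=9090` in the environment, a YAML `port: 8080`, and a default of `8000`, the merged config ends up with the environment's value.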
NextMCP supports WebSocket transport for real-time, bidirectional communication - perfect for chat applications, live updates, and interactive tools.
from nextmcp import NextMCP
from nextmcp.transport import WebSocketTransport
app = NextMCP("websocket-server")
@app.tool()
async def send_message(username: str, message: str) -> dict:
return {
"status": "sent",
"username": username,
"message": message
}
# Create WebSocket transport
transport = WebSocketTransport(app)
# Run on ws://localhost:8765
transport.run(host="0.0.0.0", port=8765)

from nextmcp.transport import WebSocketClient
async def main():
async with WebSocketClient("ws://localhost:8765") as client:
# List available tools
tools = await client.list_tools()
print(f"Available tools: {tools}")
# Invoke a tool
result = await client.invoke_tool(
"send_message",
{"username": "Alice", "message": "Hello!"}
)
        print(f"Result: {result}")

- Real-time Communication: Persistent connections with low latency
- Bidirectional: Server can push updates to clients
- JSON-RPC Protocol: Clean message format for tool invocation
- Multiple Clients: Handle multiple concurrent connections
- Async Native: Built on Python's async/await for high performance
| Feature | HTTP (FastMCP) | WebSocket |
|---|---|---|
| Connection type | One per request | Persistent |
| Latency | Higher overhead | Lower latency |
| Bidirectional | No | Yes |
| Use case | Traditional APIs | Real-time apps |
| Best for | Request/response | Chat, notifications, live data |
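To illustrate the JSON-RPC message format mentioned above, here is a stdlib sketch of framing a tool invocation and unpacking the reply. The envelope follows JSON-RPC 2.0; the `tools/call` method name and payload field names are assumptions for illustration, not necessarily the exact wire format NextMCP uses:

```python
import json
from itertools import count

_next_id = count(1)  # JSON-RPC requests need unique ids

def make_invoke_request(tool: str, arguments: dict) -> str:
    """Frame a tool invocation as a JSON-RPC 2.0 request string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_next_id),
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def parse_response(raw: str) -> dict:
    """Return the result payload, raising on a JSON-RPC error response."""
    msg = json.loads(raw)
    if "error" in msg:
        raise RuntimeError(msg["error"].get("message", "unknown error"))
    return msg["result"]
```

Each request/response pair travels over the persistent WebSocket connection, which is what makes server-initiated pushes possible on the same channel.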
See examples/websocket_chat/ for a complete WebSocket application.
NextMCP features a powerful plugin system that allows you to extend functionality through modular, reusable components.
Plugins are self-contained modules that can:
- Register new tools with your application
- Add middleware for cross-cutting concerns
- Extend core functionality
- Be easily shared and reused across projects
from nextmcp import Plugin
class MathPlugin(Plugin):
name = "math-plugin"
version = "1.0.0"
description = "Mathematical operations"
author = "Your Name"
def on_load(self, app):
@app.tool()
def add(a: float, b: float) -> float:
"""Add two numbers"""
return a + b
@app.tool()
def multiply(a: float, b: float) -> float:
"""Multiply two numbers"""
            return a * b

from nextmcp import NextMCP
app = NextMCP("my-app")
# Discover all plugins in a directory
app.discover_plugins("./plugins")
# Load all discovered plugins
app.load_plugins()

from nextmcp import NextMCP
from my_plugins import MathPlugin
app = NextMCP("my-app")
# Load a specific plugin
app.use_plugin(MathPlugin)

Plugins have three lifecycle hooks:
- `on_init()` - Called during plugin initialization
- `on_load(app)` - Called when the plugin is loaded (register tools here)
- `on_unload()` - Called when the plugin is unloaded (cleanup)
class LifecyclePlugin(Plugin):
name = "lifecycle-example"
version = "1.0.0"
def on_init(self):
# Early initialization
self.config = {}
def on_load(self, app):
# Register tools and middleware
@app.tool()
def my_tool():
return "result"
def on_unload(self):
# Cleanup resources
        self.config.clear()

class TimingPlugin(Plugin):
name = "timing"
version = "1.0.0"
def on_load(self, app):
import time
def timing_middleware(fn):
def wrapper(*args, **kwargs):
start = time.time()
result = fn(*args, **kwargs)
elapsed = (time.time() - start) * 1000
print(f"⏱️ {fn.__name__} took {elapsed:.2f}ms")
return result
return wrapper
        app.add_middleware(timing_middleware)

Plugins can declare dependencies on other plugins:
class DependentPlugin(Plugin):
name = "advanced-math"
version = "1.0.0"
dependencies = ["math-plugin"] # Loads math-plugin first
def on_load(self, app):
@app.tool()
def factorial(n: int) -> int:
# Can use tools from math-plugin
            return 1 if n <= 1 else n * factorial(n - 1)

# List all loaded plugins
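Resolving "load dependencies first" amounts to a topological sort over the dependency graph. A small sketch of such a resolver (an illustration, not NextMCP's internal algorithm):

```python
def resolve_load_order(plugins: dict) -> list:
    """Return a load order in which every plugin comes after its dependencies.

    plugins maps plugin name -> list of dependency names.
    """
    order, seen = [], set()

    def visit(name, stack=()):
        if name in seen:
            return
        if name in stack:
            raise ValueError(f"circular dependency involving {name!r}")
        for dep in plugins.get(name, []):
            visit(dep, stack + (name,))  # load dependencies before dependents
        seen.add(name)
        order.append(name)

    for name in plugins:
        visit(name)
    return order
```

Given `{"advanced-math": ["math-plugin"], "math-plugin": []}`, this yields an order with `math-plugin` first, and raises on a dependency cycle instead of looping forever.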
for plugin in app.plugins.list_plugins():
print(f"{plugin['name']} v{plugin['version']} - {plugin['loaded']}")
# Get a specific plugin
plugin = app.plugins.get_plugin("math-plugin")
# Unload a plugin
app.plugins.unload_plugin("math-plugin")
# Check if plugin is loaded
if "math-plugin" in app.plugins:
    print("Math plugin is available")

- Use descriptive names - Make plugin names clear and unique
- Version semantically - Follow semver (major.minor.patch)
- Document thoroughly - Add descriptions and docstrings
- Handle errors gracefully - Catch exceptions in lifecycle hooks
- Declare dependencies - List required plugins explicitly
- Implement cleanup - Use `on_unload()` to release resources
See examples/plugin_example/ for a complete plugin demonstration with multiple plugin types.
NextMCP includes a built-in metrics system for monitoring your MCP applications in production.
from nextmcp import NextMCP
app = NextMCP("my-app")
app.enable_metrics() # That's it! Automatic metrics collection
@app.tool()
def my_tool():
    return "result"

When metrics are enabled, NextMCP automatically tracks:
- `tool_invocations_total` - Total number of tool invocations
- `tool_duration_seconds` - Histogram of tool execution times
- `tool_completed_total` - Completed invocations by status (success/error)
- `tool_errors_total` - Errors by error type
- `tool_active_invocations` - Currently executing tools
All metrics include labels for the tool name and any global labels you configure.
Add your own metrics for business logic:
@app.tool()
def process_order(order_id: int):
# Custom counter
app.metrics.inc_counter("orders_processed")
# Custom gauge
app.metrics.set_gauge("current_queue_size", get_queue_size())
# Custom histogram with timer
with app.metrics.time_histogram("processing_duration"):
result = process(order_id)
    return result

Monotonically increasing value. Use for: counts, totals.
counter = app.metrics.counter("requests_total")
counter.inc() # Increment by 1
counter.inc(5)  # Increment by 5

Value that can go up or down. Use for: current values, temperatures, queue sizes.
gauge = app.metrics.gauge("active_connections")
gauge.set(10) # Set to specific value
gauge.inc() # Increment
gauge.dec()  # Decrement

Distribution of values. Use for: durations, sizes.
histogram = app.metrics.histogram("request_duration_seconds")
histogram.observe(0.25)
# Or use as timer
with app.metrics.time_histogram("duration"):
# Code to time
    pass

# Get metrics in Prometheus format
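Prometheus-style histograms record each observation into cumulative buckets keyed by an upper bound (`le`), plus a running sum and count. A self-contained sketch of that bookkeeping (illustrative only; NextMCP's histogram class and default bucket bounds may differ):

```python
import bisect

class SketchHistogram:
    """Minimal Prometheus-style histogram: cumulative buckets, sum, and count."""

    def __init__(self, buckets=(0.005, 0.01, 0.05, 0.1, 0.5, 1.0)):
        self.bounds = sorted(buckets)
        self.counts = [0] * (len(self.bounds) + 1)  # final slot is the +Inf bucket
        self.sum = 0.0
        self.count = 0

    def observe(self, value: float) -> None:
        # bisect_left finds the first bound >= value, matching `le` semantics
        self.counts[bisect.bisect_left(self.bounds, value)] += 1
        self.sum += value
        self.count += 1

    def cumulative(self) -> list:
        """Bucket counts as Prometheus exports them: each includes all smaller buckets."""
        total, out = 0, []
        for c in self.counts:
            total += c
            out.append(total)
        return out
```

Observing `0.003`, `0.02`, and `2.0` gives a `le="0.005"` cumulative count of 1 and a final (+Inf) count of 3, mirroring the `_bucket`, `_sum`, and `_count` lines in the Prometheus output shown below.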
prometheus_data = app.get_metrics_prometheus()
print(prometheus_data)

Output:
# HELP my-app_tool_invocations_total Total tool invocations
# TYPE my-app_tool_invocations_total counter
my-app_tool_invocations_total{tool="my_tool"} 42.0
# HELP my-app_tool_duration_seconds Tool execution duration
# TYPE my-app_tool_duration_seconds histogram
my-app_tool_duration_seconds_bucket{tool="my_tool",le="0.005"} 10
my-app_tool_duration_seconds_bucket{tool="my_tool",le="0.01"} 25
my-app_tool_duration_seconds_sum{tool="my_tool"} 1.234
my-app_tool_duration_seconds_count{tool="my_tool"} 42
# Get metrics as JSON
json_data = app.get_metrics_json(pretty=True)

app.enable_metrics(
collect_tool_metrics=True, # Track tool invocations
collect_system_metrics=False, # Track CPU/memory (future)
collect_transport_metrics=False, # Track WebSocket/HTTP (future)
labels={"env": "prod", "region": "us-west"} # Global labels
)

Labels allow you to slice and dice your metrics:
counter = app.metrics.counter(
"api_requests",
labels={"method": "GET", "endpoint": "/users"}
)
counter.inc()

The Prometheus format is compatible with:
- Prometheus for scraping and storage
- Grafana for visualization
- AlertManager for alerting
- Any Prometheus-compatible system
See examples/metrics_example/ for a complete metrics demonstration.
NextMCP provides comprehensive tools for deploying your MCP servers to production with Docker, cloud platforms, and Kubernetes.
Generate Docker deployment files for your project:
cd my-mcp-project
mcp init --docker --with-database

This creates:
- `Dockerfile` - Optimized multi-stage build (<100MB)
- `docker-compose.yml` - Complete local development environment
- `.dockerignore` - Minimal Docker context
Deploy with one command:
mcp deploy --platform docker- Health Checks - Kubernetes-compatible liveness and readiness probes
- Graceful Shutdown - Clean termination with SIGTERM/SIGINT handling
- Multi-Stage Builds - Optimized Docker images with minimal size
- Platform Support - Docker, Railway, Render, Fly.io, Kubernetes
- Auto-Detection - Automatically detects and configures dependencies
- Security Hardening - Non-root user, minimal attack surface
Built-in health check system for monitoring application health:
from nextmcp import NextMCP
from nextmcp.deployment import HealthCheck
app = NextMCP("my-app")
health = HealthCheck()
# Add custom readiness check
def check_database():
return db.is_connected()
health.add_readiness_check("database", check_database)
# Health endpoints automatically available:
# GET /health - Liveness probe
# GET /health/ready - Readiness probe

Health Check Types:
- Liveness: Is the application running? (restart if fails)
- Readiness: Is the application ready to serve traffic? (remove from load balancer if fails)
Status Types:
- `healthy` - All checks passing
- `unhealthy` - One or more checks failing
- `degraded` - Partial functionality available
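Deriving an overall status from a set of named checks can be sketched as follows. This is an illustration of the aggregation logic, not the `HealthCheck` class's actual implementation; note that a check that raises is treated the same as one that returns false:

```python
def readiness_status(checks: dict) -> tuple:
    """Run named readiness checks and derive an overall status string."""
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False  # a crashing check counts as failing
    passing = sum(results.values())
    if passing == len(results):
        return "healthy", results
    if passing == 0:
        return "unhealthy", results
    return "degraded", results  # some checks pass, some fail
```

For example, a working database check alongside a failing cache check yields `"degraded"`, which a readiness endpoint could report while the load balancer decides whether to route traffic.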
Handle shutdown signals cleanly to prevent data loss:
from nextmcp.deployment import GracefulShutdown
shutdown = GracefulShutdown(timeout=30.0)
# Add cleanup handlers
def cleanup_resources():
db.close()
cache.flush()
shutdown.add_cleanup_handler(cleanup_resources)
shutdown.register()
# Handles SIGTERM/SIGINT automatically
# Waits for in-flight requests to complete
# Runs cleanup handlers in order

# Basic Docker setup
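The signal-handling core of the pattern above can be sketched with the stdlib `signal` module. This toy version only covers handler registration and ordered cleanup; NextMCP's `GracefulShutdown` additionally enforces a timeout and waits for in-flight requests:

```python
import signal

class MiniGracefulShutdown:
    """Sketch: run registered cleanup handlers, in order, on SIGTERM/SIGINT."""

    def __init__(self):
        self._handlers = []
        self.shutting_down = False

    def add_cleanup_handler(self, fn):
        self._handlers.append(fn)

    def register(self):
        # Route both termination signals to the same handler.
        signal.signal(signal.SIGTERM, self._on_signal)
        signal.signal(signal.SIGINT, self._on_signal)

    def _on_signal(self, signum, frame):
        self.shutting_down = True
        for fn in self._handlers:  # cleanup runs in registration order
            fn()
```

The ordering matters: register the database close before the cache flush if the flush depends on the connection, since handlers fire in the order they were added.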
mcp init --docker
# With PostgreSQL
mcp init --docker --with-database
# With PostgreSQL and Redis
mcp init --docker --with-database --with-redis
# Custom port
mcp init --docker --port 9000

# Build and start
docker compose up --build
# View logs
docker compose logs -f
# Stop
docker compose down

The generated Dockerfile includes:
- Multi-stage build - Separate builder and runtime stages
- Python 3.10 slim - Minimal base image (~100MB total)
- Non-root user - Runs as user `nextmcp` (UID 1000)
- Health check - Built-in HTTP health check
- Optimized layers - Cached dependencies for faster builds
Deploy to popular cloud platforms with one command. Note: These integrations are in beta and use the platform's CLI tools. Community testing and feedback welcome!
mcp deploy --platform railway

Features:
- Automatic HTTPS
- Environment variables
- Automatic scaling
- Built-in monitoring
Requirements: Install Railway CLI (npm install -g @railway/cli)
mcp deploy --platform render

Features:
- Git-based deployment
- Automatic SSL
- Free tier available
- Persistent volumes
Requirements: Install Render CLI
mcp deploy --platform fly

Features:
- Edge deployment
- Global distribution
- WebSocket support
- Custom domains
Requirements: Install Fly CLI (flyctl)
Beta Notice: Cloud platform deployments shell out to their respective CLI tools. We verify the commands are called correctly, but haven't tested actual deployments on these platforms. If you use these successfully (or encounter issues), please open an issue to help improve the integration!
Health checks are Kubernetes-ready:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextmcp-app
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: nextmcp
          image: my-nextmcp-app:latest
          ports:
            - containerPort: 8000
          # Liveness probe
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 10
            periodSeconds: 30
          # Readiness probe
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"

Configure your application for different environments:
# .env file (development)
PORT=8000
LOG_LEVEL=DEBUG
DATABASE_URL=postgresql://localhost:5432/dev
# .env.production
PORT=8000
LOG_LEVEL=INFO
DATABASE_URL=postgresql://prod-db:5432/app

Load configuration in your app:
import os
app = NextMCP("my-app")
# Environment-specific configuration
port = int(os.getenv("PORT", "8000"))
log_level = os.getenv("LOG_LEVEL", "INFO")
app.run(host="0.0.0.0", port=port)

Before deploying to production:
- Health checks configured and tested
- Graceful shutdown enabled
- Environment variables secured (use secrets management)
- Database migrations run
- SSL/TLS certificates configured
- Monitoring and alerting setup
- Log aggregation configured
- Backup strategy implemented
- Load testing performed
- Resource limits set (CPU, memory)
- Security scanning completed
- Documentation updated
See complete deployment examples:
- `examples/deployment_simple/` - Basic deployment with health checks
  - Simple health and readiness checks
  - Graceful shutdown
  - Production logging
  - Docker deployment ready
- `examples/deployment_docker/` - Production-ready deployment
  - Database integration with health checks
  - Metrics collection
  - Advanced health checks (disk space, database)
  - Multi-service Docker Compose
  - Environment configuration
  - Production best practices
| Platform | Status | CLI Required | Testing | Notes |
|---|---|---|---|---|
| Docker | ✅ Full | docker, docker compose | ✅ Automated CI | Fully tested, production-ready |
| Kubernetes | ✅ Ready | kubectl | ✅ Manifests validated | Health checks tested |
| Railway | 🧪 Beta | railway | - | CLI integration, needs testing |
| Render | 🧪 Beta | render | - | CLI integration, needs testing |
| Fly.io | 🧪 Beta | flyctl | - | CLI integration, needs testing |
| AWS Lambda | 🔄 Planned | aws-cli | - | Serverless support |
| Google Cloud Run | 🔄 Planned | gcloud | - | Managed containers |
Testing Status:
- ✅ Full Support: Comprehensive automated tests in CI, production-ready
- 🧪 Beta: CLI integration works, but not tested on actual platforms - community testing needed
- 🔄 Planned: Not yet implemented
Note on Beta Platforms: Railway, Render, and Fly.io deployments use their respective CLI tools. We test that commands are invoked correctly, but full platform integration requires manual verification. If you successfully deploy to these platforms, please share your experience in GitHub Issues!
Container won't start:
# Check logs
docker compose logs nextmcp-app
# Verify configuration
docker compose config
# Check port conflicts
lsof -i :8000

Health check failing:
# Manual health check
curl http://localhost:8000/health
# Check container health
docker inspect nextmcp-app | grep Health -A 10
# View detailed logs
docker compose logs -f --tail=100

Database connection issues:
# Test database connectivity
docker compose exec postgres pg_isready
# Check database logs
docker compose logs postgres

For more deployment guides and troubleshooting, see the example READMEs in examples/deployment_simple/ and examples/deployment_docker/.
NextMCP provides a rich CLI for common development tasks.
# Create from template
mcp init my-project
mcp init my-project --template weather_bot
mcp init my-project --path /custom/path
# Generate Docker deployment files
mcp init --docker
mcp init --docker --with-database
mcp init --docker --with-database --with-redis
mcp init --docker --port 9000

# Auto-detect platform and deploy
mcp deploy
# Deploy to specific platform
mcp deploy --platform docker
mcp deploy --platform railway
mcp deploy --platform render
mcp deploy --platform fly
# Deploy without building
mcp deploy --platform docker --no-build

mcp run app.py
mcp run app.py --host 0.0.0.0 --port 8080
mcp run app.py --reload  # Auto-reload on changes

mcp docs app.py
mcp docs app.py --output docs.md
mcp docs app.py --format json

Generate a manifest file describing your server's capabilities, tools, prompts, and resources:
# Print manifest to stdout
mcp manifest app.py
# Save to file
mcp manifest app.py --output manifest.json
mcp manifest app.py --save # Shorthand for --output manifest.json
# Validate manifest
mcp manifest app.py --validate

The generated manifest includes:
- Server metadata (name, version, description)
- Capabilities declaration (tools, prompts, resources, logging, completions)
- Complete tool listings with JSON Schema for parameters
- Prompt templates with argument specifications
- Resources and resource templates with URI patterns
- Generation metadata (timestamp, auto-discovery info, middleware, deployment settings)
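To show the shape such a manifest might take, here is a small builder sketch. The field names (`capabilities`, `inputSchema`, etc.) are assumptions for illustration and may not match NextMCP's exact manifest schema:

```python
import json

def build_manifest(name: str, version: str, tools: list) -> dict:
    """Assemble a minimal manifest dict.

    tools is a list of (name, description, params_json_schema) triples.
    """
    return {
        "name": name,
        "version": version,
        # Capabilities announce which primitive types the server exposes.
        "capabilities": {"tools": bool(tools), "prompts": False, "resources": False},
        "tools": [
            {"name": t, "description": d, "inputSchema": schema}
            for t, d, schema in tools
        ],
    }
```

Because the manifest is plain JSON, it survives a `json.dumps`/`json.loads` round trip unchanged, which is what makes it useful as a shareable, machine-readable description of the server.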
You can also generate manifests programmatically:
from nextmcp import NextMCP
app = NextMCP.from_config()
# Generate and save
manifest = app.generate_manifest("manifest.json")
# Or just generate without saving
manifest = app.generate_manifest()

Validate a manifest for security issues using static analysis:
# Validate a manifest file
mcp validate manifest.json
# Generate and validate from app
mcp validate --app app.py
# Fail on different risk levels
mcp validate manifest.json --fail-on high # Blocks HIGH and CRITICAL
mcp validate manifest.json --fail-on medium # Blocks MEDIUM, HIGH, and CRITICAL
# JSON output for CI/CD integration
mcp validate manifest.json --json

Manifest validation is NOT sufficient for security!
The validator performs static analysis to catch obvious issues but CANNOT:
- ❌ Detect malicious code in server implementation
- ❌ Verify authentication/authorization is properly implemented
- ❌ Detect runtime vulnerabilities or business logic flaws
- ❌ Prevent sophisticated attacks from determined adversaries
- ❌ Guarantee your server is secure even if validation passes
Manifests can be fabricated or broken:
- Attackers can create fake manifests that look safe but hide malicious operations
- Manifest can claim strict validation that doesn't exist in code
- Schema in manifest may not match actual server behavior
- Tools can be hidden from manifest entirely
What the validator DOES check:
- ✅ Dangerous operation patterns (delete, execute, admin commands)
- ✅ Missing input validation (unbounded strings, unconstrained objects)
- ✅ Common injection risks (SQL, command, path traversal, SSRF)
- ✅ Sensitive data exposure indicators
- ✅ Large attack surface (many exposed tools)
- ✅ Missing authentication indicators for dangerous operations
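A toy version of two of these checks (dangerous operation patterns and unbounded string parameters) could look like the sketch below. It is hypothetical and far simpler than the real validator's rule set:

```python
import re

# Hypothetical keyword list; the real validator's patterns are more extensive.
DANGEROUS = re.compile(r"delete|drop|exec|shell|admin", re.IGNORECASE)

def scan_manifest(manifest: dict) -> list:
    """Flag dangerous-looking tool names and unconstrained string inputs."""
    findings = []
    for tool in manifest.get("tools", []):
        name = tool.get("name", "")
        # Check 1: dangerous operation pattern in the name or description
        if DANGEROUS.search(f"{name} {tool.get('description', '')}"):
            findings.append((name, "high", "dangerous operation pattern"))
        # Check 2: string parameters with no maxLength constraint
        props = tool.get("inputSchema", {}).get("properties", {})
        for prop, spec in props.items():
            if spec.get("type") == "string" and "maxLength" not in spec:
                findings.append((name, "medium", f"unbounded string '{prop}'"))
    return findings
```

A tool named `delete_user` with an unconstrained `username` string would produce one high and one medium finding, mirroring how `--fail-on high` versus `--fail-on medium` would treat it differently.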
Use validation as ONE LAYER in defense-in-depth:
Security Layer 1: Manifest Validation (this tool) ← Catches obvious issues
Security Layer 2: Static Code Analysis (Bandit, Semgrep) ← Finds vulnerabilities in code
Security Layer 3: Dependency Scanning (Snyk, Safety) ← Detects known CVEs
Security Layer 4: Manual Code Review ← Human security review
Security Layer 5: Penetration Testing ← Test for exploits
Security Layer 6: Runtime Monitoring ← Detect anomalies in production
Best practices:
- Never trust manifest alone - Always review server code
- Defense in depth - Use multiple security layers
- Principle of least privilege - Only expose necessary operations
- Assume breach - Add audit logging, rate limiting, monitoring
- Regular updates - Re-validate on every change
See examples/security_validation/ for detailed examples of secure vs insecure servers.
mcp version

Check out the examples/ directory for complete working examples:
- blog_server - Convention-based project structure with auto-discovery (5 tools, 3 prompts, 4 resources)
- security_validation - Manifest validation examples showing secure vs insecure servers
- auth/ - OAuth 2.0 authentication examples:
- complete_oauth_server.py - Production-ready Google OAuth with session management
- github_oauth_server.py - GitHub OAuth authentication
- google_oauth_server.py - Google OAuth authentication
- multi_provider_server.py - Multiple OAuth providers in one server
- session_management_example.py - Advanced session management
- combined_auth_server.py - OAuth with RBAC and permissions
- manifest_server.py - Permission manifests with OAuth
- oauth_token_helper.py - Interactive OAuth testing tool
- auth_api_key - API key authentication with role-based access control
- auth_jwt - JWT token authentication with login endpoint and token generation
- auth_rbac - Advanced RBAC with fine-grained permissions and wildcards
- deployment_simple - Basic production deployment with health checks and graceful shutdown
- deployment_docker - Production-ready deployment with database, metrics, and Docker Compose
- weather_bot - A weather information server with multiple tools
- async_weather_bot - Async version demonstrating concurrent operations and async middleware
- websocket_chat - Real-time chat server using WebSocket transport
- plugin_example - Plugin system demonstration with multiple plugin types
- metrics_example - Metrics and monitoring demonstration with automatic and custom metrics
# Clone the repository
git clone https://github.com/KeshavVarad/NextMCP.git
cd nextmcp
# Install in editable mode with dev dependencies
pip install -e ".[dev]"
# Install git pre-commit hooks (recommended)
./scripts/install-hooks.sh
# Run tests
pytest
# Run tests with coverage
pytest --cov=nextmcp --cov-report=html
# Format code
black nextmcp tests
# Lint code
ruff check nextmcp tests
# Type check
mypy nextmcp

The repository includes a pre-commit hook that automatically runs before each commit to:
- Check and auto-fix code with ruff
- Format code with black
- Run all tests
Install the hook with:
./scripts/install-hooks.sh

The hook ensures all commits pass linting and tests, preventing CI failures. To bypass the hook (not recommended), use:
git commit --no-verify

# Run all tests
pytest
# Run specific test file
pytest tests/test_core.py
# Run with verbose output
pytest -v
# Run with coverage
pytest --cov=nextmcp

NextMCP is organized into several modules:
- `core.py` - Main `NextMCP` class, application lifecycle, and `from_config()` method
- `discovery.py` - Auto-discovery engine for convention-based project structure
- `tools.py` - Tool registration, metadata, and documentation generation
- `middleware.py` - Built-in middleware for common use cases
- `config.py` - Configuration management (YAML, .env, environment variables)
- `cli.py` - Typer-based CLI commands
- `logging.py` - Centralized logging setup and utilities
NextMCP builds on FastMCP to provide:
| Feature | FastMCP | NextMCP |
|---|---|---|
| Basic MCP server | ✅ | ✅ |
| Tool registration | Manual | Decorator-based + auto-discovery |
| Convention-based structure | ❌ | ✅ File-based organization |
| Auto-discovery | ❌ | ✅ Automatic primitive registration |
| Zero-config setup | ❌ | ✅ NextMCP.from_config() |
| Authentication & Authorization | ❌ | ✅ Built-in auth system |
| API key auth | ❌ | ✅ APIKeyProvider |
| JWT auth | ❌ | ✅ JWTProvider |
| Session auth | ❌ | ✅ SessionProvider |
| OAuth 2.0 | ❌ | ✅ GitHub, Google, custom providers |
| OAuth PKCE | ❌ | ✅ Built-in PKCE support |
| Session management | ❌ | ✅ File and memory stores |
| Auth metadata protocol | ❌ | ✅ Server auth announcements |
| RBAC | ❌ | ✅ Full RBAC system |
| Permission-based access | ❌ | ✅ Fine-grained permissions |
| Async/await support | ❌ | ✅ Full support |
| WebSocket transport | ❌ | ✅ Built-in |
| Middleware | ❌ | Global + tool-specific |
| Plugin system | ❌ | ✅ Full-featured |
| Metrics & monitoring | ❌ | ✅ Built-in |
| CLI commands | ❌ | init, run, docs |
| Project scaffolding | ❌ | Templates & examples |
| Configuration management | ❌ | YAML + .env support |
| Built-in logging | Basic | Colored, structured |
| Schema validation | ❌ | Pydantic integration |
| Testing utilities | ❌ | Included |
- v0.1.0 - Core MCP server with Tools primitive
- v0.2.0 - Full MCP Primitives (Prompts, Resources, Resource Templates, Subscriptions)
- v0.3.0 - Convention-Based Architecture (Auto-discovery, `from_config()`, Project structure)
- v0.4.0 - Authentication & Authorization (API keys, JWT, Sessions, RBAC)
- v0.5.0 - Production Deployment (Health checks, Graceful shutdown, Docker, Cloud platforms)
- v0.6.0 - OAuth 2.0 Authentication (GitHub/Google providers, PKCE, Session management, Auth metadata protocol)
- Async tool support
- WebSocket transport
- Plugin system
- Built-in monitoring and metrics
- MCP server registry and discovery
- More example projects
- Documentation site
- Hot Reload: Development mode with automatic file watching
- Enhanced CLI: `mcp dev`, `mcp validate`, `mcp test` commands
- Interactive Debugging: Built-in debugger for MCP tools
- Serverless Support: AWS Lambda, Google Cloud Functions, Azure Functions
- Kubernetes Helm Charts: Production-ready K8s deployments
- Package Distribution: `mcp package` for Docker, PyPI, and serverless
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- GitHub Issues: https://github.com/KeshavVarad/NextMCP/issues
- Documentation: [Coming soon]
Made with ❤️ by the NextMCP community