Framework for building and running MCP servers as HTTP services. Define tools as pure Python functions, wire up with two lines, run with one command.
FastMCP is great for quickly spinning up a local tool. But as soon as you want to productize it, share it with others, or use it across multiple identities, you end up building auth, user management, admin endpoints, and deployment config into each app. They all invariably get done a little differently.
When you're moving at the speed of agents — building and releasing impactful tools quickly — you need those tools to be consistent and secure without repeating boilerplate in each one and maintaining every divergent implementation. You want to scale impact instead of adding to the cognitive load of each deployment, trusting that you can come back and refresh a token or add a user months later.
mcp-app gives you:
- Identity enforced by default. JWT auth runs automatically. Tools can't execute without an established user. You can't accidentally ship a wide-open service.
- User management built in. Admin endpoints, CLI for local and remote user management, typed profile per user — identical across every app.
- Both transports, same code. `serve` (HTTP) and `stdio` (local) from one `App` object.
- Free tests for your app. `mcp_app.testing` checks auth, admin, wiring, and tool coverage against your specific app. Import the tests, run pytest, confirm everything works.
- Deployment-ready. Container, bare metal, Cloud Run, or gapp.
The consistency is the point. User management, token rotation, auth enforcement, admin CLI — these work the same way across all your solutions. Learn it once, the tests confirm it works, and when you need to update a token or revoke a user six months later, the workflow is the same regardless of which app you're touching.
```shell
pip install git+https://github.com/echomodel/mcp-app.git
```

Create your tools module — pure async functions, no framework imports:
```python
# my_app/mcp/tools.py
from my_app.sdk.core import MySDK

sdk = MySDK()

async def do_thing(param: str) -> dict:
    """Tool description shown to agents."""
    return sdk.do_thing(param)
```

Wire up in `__init__.py`:
```python
# my_app/__init__.py
from mcp_app import App
from my_app.mcp import tools

app = App(name="my-app", tools_module=tools)
```

For API-proxy apps with per-user credentials:
```python
# my_app/__init__.py
from pydantic import BaseModel, Field
from mcp_app import App
from my_app.mcp import tools

class Profile(BaseModel):
    token: str = Field(description="API token from https://example.com/settings")

app = App(
    name="my-app",
    tools_module=tools,
    profile_model=Profile,
    profile_expand=True,
)
```

`profile_expand=True` generates typed CLI flags (`--token`) on the admin CLI. `profile_expand=False` (the default) accepts the profile as JSON or `@file`.
The `Field(description=...)` is important — it appears in `--help` output for both `users add` and `users update-profile`. An operator or agent managing a deployed instance discovers what credentials the app needs by running `my-app-admin users add --help`. The description should say what the credential is, where to get it, and what system it connects to. The field name itself (`token`, `api_key`, `github_pat`, etc.) is the app author's choice — mcp-app does not enforce or assume any naming convention.
Add entry points to pyproject.toml:
```toml
[project.scripts]
my-app-mcp = "my_app:app.mcp_cli"
my-app-admin = "my_app:app.admin_cli"

[project.entry-points."mcp_app.apps"]
my-app = "my_app:app"
```

The `mcp_app.apps` entry point lets the test suite and tooling discover your app automatically.
Run:
```shell
my-app-mcp serve               # HTTP, multi-user
my-app-mcp stdio --user local  # stdio, single user
```

No config files. Tool discovery, identity middleware, admin endpoints, and store wiring are all handled by the framework from the arguments passed to `App`.
Default store is filesystem — per-user directories under `~/.local/share/{name}/users/`. Override with the `APP_USERS_PATH` env var. Custom store backends can be passed to `App`.
Identity middleware runs automatically in HTTP mode. It validates
JWTs, loads the full user record from the store, and sets the
current_user ContextVar. No configuration needed.
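As a loose illustration of that pattern — validate a signed token, load the full user record from a store, and set a default-less `ContextVar` — here is a stdlib-only sketch. It is not mcp-app's implementation: the names `verify_token` and `USER_STORE`, and the token format, are invented for the example (the framework uses real JWTs).

```python
import base64, hashlib, hmac, json
from contextvars import ContextVar

SIGNING_KEY = b"demo-key"  # stands in for the SIGNING_KEY env var
# No default: reading it before identity is established raises LookupError
current_user: ContextVar[dict] = ContextVar("current_user")
USER_STORE = {"alice@example.com": {"email": "alice@example.com", "profile": {"token": "t"}}}

def sign(payload: dict) -> str:
    """Mint a signed token (illustrative format, not a real JWT)."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token: str) -> dict:
    """Check the signature, then load the FULL user record from the store."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    return USER_STORE[claims["sub"]]

token = sign({"sub": "alice@example.com"})
current_user.set(verify_token(token))
print(current_user.get()["email"])  # alice@example.com
```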
See docs/custom-middleware.md for advanced middleware configuration.
Both data-owning and API-proxy apps use the same framework. The difference is what the SDK reads from the user context.
Data-owning app (owns user data — food logs, notes, etc.):
```python
# my_data_app/sdk/core.py
from mcp_app.context import current_user
from mcp_app import get_store

class MySDK:
    def save_entry(self, data):
        user = current_user.get()
        store = get_store()
        store.save(user.email, "entries/today", data)
```

The SDK reads `current_user.get().email` to scope data. The store holds per-user app data.
API-proxy app (wraps an external API — financial data, Google Workspace, etc.):
```python
# my_proxy/sdk/core.py
from mcp_app.context import current_user
import httpx

class MySDK:
    def list_items(self):
        user = current_user.get()
        token = user.profile["token"]
        resp = httpx.get(
            "https://api.example.com/items",
            headers={"Authorization": f"Bearer {token}"},
        )
        return resp.json()
```

The SDK reads `current_user.get().profile` for the backend credential. The profile was saved at registration time and loaded in one read with the auth record.
What's identical: store setup, admin endpoints, tool discovery, deployment. The middleware is the same. The SDK decides what to read from the user context.
The tools module is imported and all public async functions (not starting with _) are registered as MCP tools. Function names become tool names. Docstrings become descriptions. Type hints become schemas.
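The convention can be approximated in a few lines — an illustrative sketch of the discovery rule, not mcp-app's actual registration code (`discover` and the inline stand-in module are invented for the example):

```python
import inspect
import types

# Stand-in for my_app.mcp.tools, built inline so the sketch is self-contained
tools = types.ModuleType("tools")
exec(
    'async def do_thing(param: str) -> dict:\n'
    '    """Tool description shown to agents."""\n'
    '    return {"param": param}\n'
    '\n'
    'async def _helper():  # leading underscore: not registered\n'
    '    pass\n',
    tools.__dict__,
)

def discover(module):
    """Public async functions -> tool entries (name, docstring, annotations)."""
    registry = {}
    for name, fn in vars(module).items():
        if name.startswith("_") or not inspect.iscoroutinefunction(fn):
            continue
        registry[name] = {
            "description": inspect.getdoc(fn),  # docstring becomes the description
            "params": {k: v for k, v in fn.__annotations__.items() if k != "return"},
        }
    return registry

print(list(discover(tools)))  # ['do_thing'] — _helper is skipped
```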
| Variable | Required | If Missing | Purpose |
|---|---|---|---|
| `SIGNING_KEY` | For HTTP | Startup fails | JWT signing key |
| `JWT_AUD` | No | Audience not validated | Expected JWT `aud` claim |
| `APP_USERS_PATH` | No | `~/.local/share/{name}/users/` | Per-user data directory |
| `TOKEN_DURATION_SECONDS` | No | `315360000` (~10 yr) | Token lifetime in seconds |
`SIGNING_KEY` is a secret. Never commit it to the repo. Generate a strong random value:

```shell
python3 -c 'import secrets; print(secrets.token_urlsafe(32))'
```

How it gets into the environment depends on your deployment: CI/CD secrets (e.g., GitHub Actions), cloud secret managers (e.g., GCP Secret Manager), or deployment tools that generate and manage secrets directly.
`JWT_AUD` — if unset, audience is not validated. Apps sharing the same signing key without distinct `JWT_AUD` values will accept each other's user tokens. If each app has a unique signing key, audience validation is less critical.
`APP_USERS_PATH` — the default writes to the local filesystem, which works for development. In a container, this path is ephemeral — the app starts, users get registered, tools execute, and then user data is silently lost on container restart. No error, no warning. For any persistent deployment, set `APP_USERS_PATH` to a mounted volume or persistent storage path.

`TOKEN_DURATION_SECONDS` — the default (~10 years) effectively means tokens are permanent. Set a shorter value if tokens should expire. Applies to newly issued tokens only.
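To make the default concrete — an illustrative calculation, not framework code:

```python
from datetime import datetime, timedelta, timezone

DEFAULT_DURATION = 315_360_000  # seconds — the framework default
issued = datetime(2025, 1, 1, tzinfo=timezone.utc)
expires = issued + timedelta(seconds=DEFAULT_DURATION)
print(expires.date())  # 2034-12-30 — roughly ten years out

# A 30-day lifetime, as a shorter alternative:
print(int(timedelta(days=30).total_seconds()))  # 2592000
```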
Every mcp-app solution has a current_user ContextVar set before tools execute. No default — tools that run without an established identity return an error.
| Transport | How it's set |
|---|---|
| HTTP (`my-app-mcp serve`) | Identity middleware validates the JWT, loads the full user record from the store |
| stdio (`my-app-mcp stdio`) | CLI loads the user record from the store using the `--user` flag |
The SDK reads it:
```python
from mcp_app.context import current_user

user = current_user.get()
user.email    # "alice@example.com" (HTTP) or "local" (stdio)
user.profile  # dict or typed Pydantic model — whatever was saved at registration
```

The user record includes an optional `profile` field — app-specific data saved at registration time (backend credentials, preferences, config). mcp-app stores it and loads it but does not interpret it.
For typed profile access, the app declares a Pydantic model on
the App object:
```python
# my_app/__init__.py
from pydantic import BaseModel, Field
from mcp_app import App
from my_app.mcp import tools

class Profile(BaseModel):
    token: str = Field(description="Personal access token from https://example.com/settings")

app = App(name="my-app", tools_module=tools, profile_model=Profile, profile_expand=True)
```

Now `user.profile.token` is typed and validated. If no model is registered, `user.profile` is a raw dict.
Field descriptions are how the app tells operators (and agents)
what credentials it needs. When profile_expand=True, the admin
CLI generates typed flags from the model — the field name becomes
the flag, the description becomes the help text. An operator
running my-app-admin users add --help sees exactly what to
provide and where to get it, without reading the source code.
This is the re-discovery mechanism: months later, when a token
needs rotating, the CLI tells you what each field is for.
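The mechanism can be sketched with argparse — an illustration of the idea, not mcp-app's generator; `PROFILE_FIELDS` stands in for what the framework derives from the Pydantic model:

```python
import argparse

# Stands in for what profile_expand=True derives from the Pydantic model:
# field name -> Field(description=...)
PROFILE_FIELDS = {"token": "API token from https://example.com/settings"}

parser = argparse.ArgumentParser(prog="my-app-admin users add")
parser.add_argument("email")
for name, description in PROFILE_FIELDS.items():
    # the field name becomes the flag, the description becomes the help text
    parser.add_argument(f"--{name}", help=description)

args = parser.parse_args(["alice@example.com", "--token", "api-key-xxx"])
print(args.email, args.token)  # alice@example.com api-key-xxx
```

Running the resulting parser with `--help` prints the field description next to the `--token` flag, which is exactly the re-discovery behavior described above.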
```shell
# Data-owning app — no profile needed
my-app-admin users add alice@example.com

# API-proxy app — profile set at registration via typed flags
my-app-admin users add alice@example.com --token api-key-xxx

# Update a single profile field later (e.g., rotate a credential)
my-app-admin users update-profile alice@example.com token new-api-key
```

`users add` rejects existing users — use `users update-profile` to change credentials for a user that's already registered.
stdio user identity is always specified via the `--user` flag:

```shell
my-app-mcp stdio --user local
my-app-mcp stdio --user alice@example.com
```

The CLI loads the user record from the store and sets `current_user`. It refuses to start without `--user`.
REST admin endpoints are mounted at /admin in HTTP mode:
- `POST /admin/users` — register user (with optional profile), returns JWT
- `GET /admin/users` — list users
- `DELETE /admin/users/{email}` — revoke user
- `POST /admin/tokens` — issue new token for existing user

Gated by an admin-scoped JWT (`scope: "admin"`, same signing key).
Validate the full stack in-memory — no server, no Docker, no cloud:
```python
from my_app import app
import httpx

transport = httpx.ASGITransport(app=app)
client = httpx.AsyncClient(transport=transport, base_url="http://test")
```

`App` is directly ASGI-callable, so any ASGI host — httpx in-process, uvicorn, hypercorn, granian, Mangum for Lambda — treats it as the server callable without wrapping. If it works here, it works in Docker. httpx is already a dependency of mcp-app.
See CONTRIBUTING.md for full test examples.
No auth, no signing key, no server process. The MCP client launches the process directly:
```shell
my-app-mcp stdio --user local
```

`--user` is required — it specifies which user record to load from the store. The server refuses to start without it.
```shell
SIGNING_KEY=your-key my-app-mcp serve
```

With persistent storage and all options:

```shell
SIGNING_KEY=your-key \
APP_USERS_PATH=/data/my-app/users \
JWT_AUD=my-app \
TOKEN_DURATION_SECONDS=2592000 \
my-app-mcp serve --host 0.0.0.0 --port 8080
```

Runs uvicorn on `0.0.0.0:8080` by default. Override with `--host` and `--port`.
mcp-app is a standard Python app. Deploy it however you deploy Python — as a process, in a container, on any platform. The app does not know or care how it was deployed.
This posture is inherited. Apps built on mcp-app are
deployment-agnostic by default. When authoring your app's own
README, describe what the app needs from any environment — env
vars, start command, endpoint paths, auth model — and let the
reader's deployment tooling map to it. Docker is a useful
universal illustration; specific platforms (Cloud Run, ECS,
Kubernetes) should only appear in your docs if the app is
deliberately coupled to one. Concrete values tied to a
deployment (signing-key secret names, APP_USERS_PATH paths,
orchestration details) belong in the deployment tooling's
domain, not the app's README. This is how the same app can be
picked up and deployed anywhere without its docs arguing with
the operator's choice.
When the app is deployment-agnostic, the deployment decisions and configuration live separately from the app repo — in CI/CD workflows, ops repos, infrastructure-as-code modules, or wherever environment-specific (but non-secret) settings, build scripts, and deployment tooling belong. The app repo stays focused on the app. Operators bring their own deployment tooling, and agentic workflows operating on a deployment environment will typically have additional skills or plugins loaded for that tooling, separate from the app itself.
Some of the connective tissue is retained across sessions by
the mcp-app admin CLI. Per-app connect config persists the
deployed URL and signing-key access for each app, so returning
to administer an app months later doesn't require
re-discovering how or where it was deployed. This state lives
in XDG config paths (~/.config/{app-name}/setup.json) —
always outside the solution app repo — where it can be managed
and versioned by a dotfile manager or lifted into a separate,
private operator-owned repo if durability beyond the
workstation is needed. Either way, it stays external to the
solution app repo. Capabilities here may expand over time — additional metadata about a deployment (environments, aliases, deployment tool hints) could reasonably accrue to this per-app config, still stored in external locations and never versioned with or exposed in the repo that is itself published as a reusable product.
A secondary route — not required by mcp-app — is to ship
opinionated build and deployment tooling inside the app repo:
Dockerfiles beyond a minimal illustration, CI workflow
templates, Terraform modules (.tf files) or other
infrastructure-as-code, platform-specific manifests, or configs
for a particular deployment tool. Done well, this tooling is
still
operator-agnostic: environment specifics (project IDs,
secret names, domains) and secrets stay out of the repo; the
configs describe how to build and deploy without dictating
where. The goal is to give operators an easy, opinionated path
— batteries included rather than assembly required.
This route trades some portability for convenience. Apps published as reusable public products commonly avoid it, or include only a minimal Docker example, to maximize audience and adoption — any in-repo tooling assumption is one more thing a would-be user has to agree with or work around. Apps that are internal, personal, or have a narrower audience may reasonably include more opinionated tooling, on the theory that the authors and operators are closely aligned and the convenience is worth it. Both are valid — the choice is the author's.
Any deployment environment must provide:
- Start command: `my-app-mcp serve` (optionally `--host`/`--port`, default `0.0.0.0:8080`)
- `SIGNING_KEY` env var — required for HTTP. A secret — must not be committed to the repo or hardcoded in config files. Source it from a secrets store, CI/CD secrets, or have the deployment tool generate it (see Environment Variables above)
- `APP_USERS_PATH` env var — must point to persistent storage for any durable deployment. The default writes to the local filesystem, which is ephemeral in containers (see Environment Variables above)
- MCP endpoint: `/` (root path). MCP clients connect to `https://host:port/`, not `/mcp`
- Health check: `GET /health` — no auth, returns `{"status": "ok"}`
- Admin API: `/admin/users` (POST, GET), `/admin/users/{email}` (DELETE), `/admin/tokens` (POST)
- Auth model: mcp-app handles its own auth via JWT. If the platform has an auth gate (IAM, API gateway, etc.), configure it to allow unauthenticated traffic through to the app
- Build root: the repo root where `pyproject.toml` lives
```shell
pip install -e .
SIGNING_KEY=your-key my-app-mcp serve
```

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY . /app
RUN pip install -e .
EXPOSE 8080
CMD ["my-app-mcp", "serve"]
```

```shell
docker build -t my-app .
docker run -p 8080:8080 \
  -e SIGNING_KEY=your-key \
  -v /persistent/path:/data \
  -e APP_USERS_PATH=/data/users \
  my-app
```

The Dockerfile works on any container platform. The volume mount ensures user data survives container restarts.
Deploy from source or a container image using your platform's
tooling. Set SIGNING_KEY via the platform's secret manager and
APP_USERS_PATH to a persistent volume. Ensure the platform
allows unauthenticated HTTP traffic through to the app.
Deployment tools like gapp can automate infrastructure, secrets, and container builds.
1. Connect the admin CLI:

   ```shell
   my-app-admin connect https://your-service --signing-key xxx
   ```

2. Register a user (if none exist yet):

   ```shell
   my-app-admin users add alice@example.com
   ```

3. Probe — single-command end-to-end verification:

   ```shell
   my-app-admin probe
   ```

   Output:

   ```
   URL: https://your-service
   Health: healthy
   MCP: ok (probed as alice@example.com)
   Tools (3):
     do_thing
     list_items
     get_status
   ```

   Probe hits `/health` for liveness, then does an MCP `tools/list` round-trip using a short-lived token minted for an existing user. If it reports all tools, the app is fully operational — health, admin auth, user auth, MCP layer, and tool wiring all work.

4. Generate MCP client registration commands:

   ```shell
   my-app-admin register --user alice@example.com
   ```

   This outputs ready-to-paste commands for Claude Code, Gemini CLI, and the Claude.ai URL form.
Prefer the per-app admin CLI (my-app-admin) over the
generic CLI (mcp-app) whenever possible. The per-app CLI
stores connection config per app — each app remembers its own
target (local or remote) and signing key independently in
~/.config/{name}/setup.json. This means you can switch between
administering different apps without losing connection state,
and return to an app months later without re-discovering how or
where it was deployed.
The generic CLI stores one connection at a time in
~/.config/mcp-app/setup.json. Connecting to a different
service overwrites the previous connection. It exists for cases
where the per-app admin CLI isn't installed locally.
```shell
# Per-app admin CLI (preferred) — local or remote
my-app-admin connect local
my-app-admin connect https://your-service --signing-key xxx

# Generic CLI — remote only, single connection
mcp-app connect https://your-service --signing-key xxx
```

`connect local` is only available on the per-app admin CLI because it needs the app name to locate the filesystem store (`~/.local/share/{name}/users/`). The generic CLI doesn't know which app it's managing, so it only supports remote targets.
Connection config is set once and never repeated. No other
command accepts --url or --signing-key.
Note: the framework currently tracks one connection per app
— a single deployment environment (local or remote), not
multiple environments. If you deploy the same app to staging
and production, connect switches between them but only
remembers the last one configured.
```shell
# Register users
my-app-admin users add alice@example.com
my-app-admin users add bob@example.com --profile '{"token": "api-key-xxx"}'

# List users
my-app-admin users list

# Revoke a user (invalidates all their tokens)
my-app-admin users revoke alice@example.com

# Issue a new token for an existing user
my-app-admin tokens create alice@example.com

# Health check (remote only)
my-app-admin health
```

The token returned from `users add` or `tokens create` is what the user puts in their MCP client configuration.
No signing key needed — stdio has no JWT auth.
CLI registration:

```shell
claude mcp add my-app -- my-app-mcp stdio --user local
gemini mcp add my-app -- my-app-mcp stdio --user local
```

Manual config (`~/.claude.json` or `~/.gemini/settings.json`):

```json
{
  "mcpServers": {
    "my-app": {
      "command": "my-app-mcp",
      "args": ["stdio", "--user", "local"]
    }
  }
}
```

CLI registration:
```shell
claude mcp add --transport http my-app \
  https://your-service/ \
  --header "Authorization: Bearer USER_TOKEN"
```

Manual config (`~/.claude.json` or `~/.gemini/settings.json`):

```json
{
  "mcpServers": {
    "my-app": {
      "url": "https://your-service/",
      "headers": {
        "Authorization": "Bearer ${MY_APP_TOKEN}"
      }
    }
  }
}
```

Both Claude Code and Gemini CLI support `${VAR}` expansion in config files — reference a host environment variable instead of pasting the token directly.
Claude.ai / Claude mobile (remote via URL):

```
https://your-service/?token=USER_TOKEN
```
Remote MCP servers added through Claude.ai are available across all Claude clients — web, mobile, and Claude Code.
mcp-app wraps FastMCP (the official MCP Python SDK) and Starlette (ASGI framework). Solutions never import these directly — mcp-app handles all wiring.
```
App(name="my-app", tools_module=tools)
  → discovers async functions in tools module
  → registers each as FastMCP tool (with identity enforcement)
  → creates data store from app name
  → HTTP (serve): wraps with identity middleware + admin endpoints → uvicorn
  → stdio (--user): loads user record from store → FastMCP over stdin/stdout
```
mcp-app ships reusable test modules that check auth, user admin,
JWT enforcement, CLI wiring, and tool protocol compliance against
your specific app. Import them in two files, provide your App
object as a fixture, and get 25+ tests for free.
```python
# tests/conftest.py
import pytest
from my_app import app as my_app  # rename so the fixture below doesn't shadow it

@pytest.fixture(scope="session")
def app():
    return my_app
```

```python
from mcp_app.testing.iam import *
from mcp_app.testing.wiring import *
from mcp_app.testing.tools import *
from mcp_app.testing.health import *
```

This file is identical across all mcp-app solutions. The `conftest.py` is the only file that changes — it points the tests at your specific `App` object.

```shell
pytest tests/
```

Zero failures means: auth works, admin works, tools are wired, identity is enforced, and the SDK has test coverage for every tool. Your app is correctly built on mcp-app.
Two agent skills ship with this repo under skills/:
- `author-mcp-app` — guides authoring, migration, review, and framework-upgrade work on mcp-app solutions.
- `mcp-app-admin` — guides operators and agents managing deployed mcp-app instances (connect, verify, users, tokens, MCP client registration).
Install as symlinks from a local clone so edits in the repo go live immediately:
```shell
# Claude Code — user scope
ln -s $(pwd)/skills/author-mcp-app ~/.claude/skills/author-mcp-app
ln -s $(pwd)/skills/mcp-app-admin ~/.claude/skills/mcp-app-admin

# Gemini CLI — link from local clone
gemini skills link ./skills/author-mcp-app
gemini skills link ./skills/mcp-app-admin
```

Install method may vary by agent platform; follow the established pattern in your environment.
author-mcp-app is for lifecycle events on a solution
repo, not for steady-state work:
- Initial authoring of a new mcp-app solution (greenfield).
- Periodic review of an existing solution against the current framework — produces a compliance dashboard.
- Framework upgrades or migrations to adopt new features or replace deprecated patterns.
mcp-app-admin is for operational work on a deployed
instance — connecting the admin CLI, verifying the
deployment, managing users, rotating credentials, registering
MCP clients. Invoke it alongside whatever deployment-tool
skill (if any) is in use.
The processes these skills describe — authoring, reviewing, upgrading, deploying, redeploying, administering — are all inherently recurring. The admin process in particular runs continuously across the lifetime of a deployed app (redeploys, user additions, credential rotations, client registrations). That work never becomes obsolete.
What can become obsolete, per solution repo, are the
skills as agent-guidance artifacts. author-mcp-app is
designed — when mcp-app-admin and any other relevant
accelerator skills are available in the environment at
authoring time — to absorb their guidance into the solution
repo's own README.md, CONTRIBUTING.md, and agent context
files (CLAUDE.md, .gemini/settings.json) in app-specific
and often more concrete terms than the skills themselves can
offer. The solution repo's docs end up carrying the complete
end-to-end process — authoring AND operating — expressed in
the app's real CLI names, real profile fields, real
deployment posture.
Once the author skill has completed that pass, a future agent opening the solution repo with neither skill loaded must be able to install, run, deploy, redeploy, connect the admin CLI, manage users, rotate credentials, register MCP clients, add or modify tools, and run tests, entirely from the repo's own files. Neither skill is needed for ongoing work on that specific repo.
The skills remain broadly useful:
- `author-mcp-app` — for lifecycle events on any repo (initial authoring, periodic review, framework upgrade), or on repos that haven't been brought under this discipline yet.
- `mcp-app-admin` — for operational work on instances whose repos don't have the admin journey fully documented (legacy apps, third-party apps, or any solution that skipped the author skill's pass), and as a cross-cutting reference that tracks framework evolution before any one repo's docs catch up.
The bar the author skill holds itself to: if it ran on this repo successfully, neither skill should be required the next time someone (human or agent) opens the repo to do normal work on it.
author-mcp-app today has only loose handoff mechanics for
deployment: it describes what the app needs from any
environment (the runtime contract) and leaves concrete
deployment to whatever tooling the user has paired with it.
As the framework grows, author-mcp-app may gain the ability
to agnostically trigger externally-configured build and
deploy workflows — coordinating the "app is ready" → "app is
deployed and reachable" handoff without owning the concrete
environment configs, which continue to live outside the
solution repo. Until then, that handoff lives in whatever
deployment skill or tool the user pairs with the author skill.
- docs/custom-middleware.md — advanced middleware configuration
- CONTRIBUTING.md — architecture, design decisions, testing