echomodel/mcp

EchoModel MCP

Opinionated patterns for building AI-native MCP solutions. The AI agent is the interface. Everything else is infrastructure.

AI-Agent-First

EchoModel solutions are MCP servers where the AI agent IS the primary interface. They are not CLI tools that happen to also expose MCP. This is a deliberate design choice, not an accident.

The MCP server exposes tools via tools/list. The AI agent reads the tool docstrings, understands the domain, and drives the interaction. No system prompts, no plugins, no skills — just well-written tool functions. This works across Claude.ai, Claude mobile, Claude Code, Gemini CLI, and any future MCP client.

Where CLI fits

CLI exists for operations that don't belong in an agent conversation:

  • Security-related settings — configuring auth, identity, and secrets that affect the agent's own behavior. The agent shouldn't configure its own trust boundaries (separation of concerns).
  • Scripted deterministic behavior — CI/CD pipelines, automation, operations that need exact reproducibility.
  • MCP server installation and management — registering, health-checking, configuring servers.
  • Infrequently used admin operations — user management, deployment config, things that don't justify tool surface area.

These are admin/ops concerns. The core product is the MCP server and its tool docstrings.

SDK-First Architecture

All business logic lives in the SDK. MCP tools are thin async wrappers that call SDK methods. CLI commands (where they exist) do the same.

```
SDK (business logic, testable, reusable)
 ├── MCP tools (async functions, return dicts)
 └── CLI commands (admin/ops only, where needed)
```

Rules:

  • SDK methods return JSON-serializable dicts. MCP tools return them directly.
  • SDK never imports MCP or CLI code.
  • If you're writing logic in an MCP tool or CLI command, stop and move it to the SDK.

Reference: echofit, aicfg

Transport Selection: FastMCP vs mcp-app

Solutions must choose one MCP framework — never import both.

| Solution type | Use | Why |
| --- | --- | --- |
| stdio-only (local tools, no cloud) | FastMCP directly | No auth, no multi-user, no overhead |
| HTTP multi-user (cloud-hosted, data-owning) | mcp-app | JWT auth, user identity, per-user data stores |
| Both stdio and HTTP | mcp-app | One codebase, two transports (once mcp-app supports stdio) |

mcp-app wraps FastMCP internally. A solution using mcp-app never imports FastMCP directly — mcp-app is the abstraction layer. Tool functions are plain async functions discovered from a config file (mcp-app.yaml), not decorated with @mcp.tool().
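
A config-driven setup might look something like the sketch below. This is purely illustrative — the actual mcp-app.yaml schema, keys, and module paths are assumptions, not documented mcp-app behavior:

```yaml
# Illustrative sketch only; the real mcp-app.yaml schema may differ.
# Tools are plain async functions discovered from module paths,
# not decorated with @mcp.tool().
name: echofit
tools:
  - module: echofit_mcp.tools.diet      # hypothetical module path
  - module: echofit_mcp.tools.workout   # hypothetical module path
auth:
  jwt:
    issuer: https://auth.example.com    # placeholder issuer
```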

FastMCP is simpler for stdio-only. Tools use @mcp.tool() decorators directly. No config file, no middleware, no indirection. Better upstream documentation. The natural choice when you don't need auth or multi-user.

Don't mix them. A solution that uses mcp-app for HTTP must use mcp-app for stdio too. You don't have one code path with FastMCP decorators and another with mcp-app config — that's two frameworks doing the same thing.

Current limitation

mcp-app only supports HTTP (mcp-app serve). Stdio support is planned but not implemented. This means solutions that want both transports (like echofit) currently lack a local stdio mode.

Reference implementations:

  • FastMCP direct: aicfg (stdio-only, local tool)
  • mcp-app: echofit (HTTP multi-user, cloud-hosted)

Docstring-Only MCP Pattern

MCP tool docstrings are the only context the end-user AI agent receives. No README, no CONTRIBUTING.md, no system prompts, no skills — just the docstrings returned by tools/list.

This means:

  • Every tool must be fully self-describing — inputs, outputs, behavior, edge cases
  • The AI figures out the domain from the docstrings alone — no custom instructions needed
  • It works on every MCP client — Claude.ai, Claude mobile, Claude Code, Gemini CLI, any future client

When this pattern fits

  • Tool names and docstrings are sufficient for the AI to understand the domain
  • No complex multi-step workflows requiring orchestration beyond the AI's natural behavior
  • The AI's general knowledge covers the domain (nutrition, exercise, task management)
  • The solution is AI-native — the AI IS the interface

When it doesn't fit

  • Solutions that need custom system prompts or behavioral rules beyond docstrings
  • Solutions that need client-side hooks (pre/post tool call logic)
  • Solutions that need client-side UI beyond conversation

Verified across

  • Claude.ai (registered via URL, works immediately)
  • Claude mobile app (same account, auto-appears)
  • Claude Code (local stdio or remote HTTP)
  • Gemini CLI

Reference: echofit

Data-Owning vs Proxy Servers

A fundamental branching point for cloud-hosted HTTP MCP servers.

Data-owning servers store user data internally. The server decides what users can do with their data. Examples: echofit (fitness tracking), any app where users create and own content.

Proxy servers mediate access to an external service with its own API. The scope is determined by the upstream service. Examples: solutions wrapping Google Workspace, TickTick, Monarch, or any third-party API.

| Concern | Data-owning | Proxy |
| --- | --- | --- |
| Auth model | Own user store (app-user, JWT) | Pass-through to external OAuth |
| Data path | Own filesystem / GCS FUSE | External API calls |
| Module/feature gating | Meaningful — gate access to data you own | Limited — upstream API defines scope |
| Pricing/tiering | Natural | Awkward |
| Multi-tenancy | First-class | Irrelevant |

Module Configuration (Multi-Product from One Server)

Applies to data-owning servers with multiple feature modules (e.g., diet, workout, health tracking in echofit).

Module toggling is runtime config, not packaging

Every deployment includes all modules. gapp deploy produces the same artifact regardless of product surface. What varies is runtime configuration:

  • Module breadth — which feature sets (diet, workout, health) respond to tools/list
  • Module depth — how much of a module is exposed (calorie-only vs full nutrition)
  • Per-user access — different users get different modules via JWT claims or tier

Control mechanisms

  • Server-level config — the deployment determines which modules are active
  • JWT claims — user tokens include module entitlements
  • Client credentials — a client ID registered for "Calorie Counter" only gets diet tools

This enables publishing multiple products to different connector stores, each pointing at the same server with different module configurations. The AI agent never sees tools for disabled modules and never mentions them.
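
The gating logic reduces to an intersection of server config and user entitlements. A minimal sketch, assuming a hypothetical `modules` JWT claim and made-up tool-to-module mapping:

```python
# Illustrative per-user module gating at tools/list time.
# Tool names, module names, and the claim shape are assumptions,
# not the real echofit schema.
ALL_TOOLS = {
    "log_meal": "diet",
    "log_workout": "workout",
    "get_heart_rate": "health",
}

def visible_tools(jwt_claims: dict, server_modules: set[str]) -> list[str]:
    """Intersect server-level module config with the user's JWT entitlements.

    A tool appears in tools/list only if its module is both active on
    this deployment AND granted to this user.
    """
    user_modules = set(jwt_claims.get("modules", []))
    allowed = server_modules & user_modules
    return [name for name, module in ALL_TOOLS.items() if module in allowed]
```

A "Calorie Counter" deployment would pass `server_modules={"diet"}`, so even a user entitled to workout tools elsewhere sees only diet tools here.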

Packaging implication

No per-module packages. All modules ship together in one SDK package. Module selection is a server concern, not an install concern.

Reference: echofit README — Module configuration

Tool Permissions Policy

When adding MCP tools to any solution:

  • Safe tools (read-only or append-only) — can be auto-approved in client configurations
  • Destructive tools (modify or delete existing data) — must never be auto-approved; require explicit user confirmation per invocation
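
One way to make this policy explicit in code is a per-tool safety table, sketched below with hypothetical tool names. (The MCP spec's optional tool annotations, such as `readOnlyHint` and `destructiveHint`, serve a similar signaling purpose; this sketch is not that mechanism.)

```python
# Sketch of a tool-permissions policy table; tool names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolPolicy:
    name: str
    destructive: bool  # True => never auto-approve; confirm every invocation

POLICIES = [
    ToolPolicy("log_meal", destructive=False),     # append-only: safe
    ToolPolicy("get_summary", destructive=False),  # read-only: safe
    ToolPolicy("delete_meal", destructive=True),   # deletes existing data
]

def auto_approvable(policies: list[ToolPolicy]) -> list[str]:
    """Only non-destructive tools are candidates for client auto-approval."""
    return [p.name for p in policies if not p.destructive]
```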

Distribution Packages

For solutions with separate concerns (SDK, MCP server, CLI), use separate PyPI packages:

```
echofit-sdk  ←  echofit-mcp  (MCP server, depends on SDK)
             ←  echofit      (CLI, depends on SDK)
```

Why: A Claude plugin user shouldn't install Click. A developer using the SDK shouldn't install the MCP framework. A CLI user shouldn't install MCP.

When to split: When the solution has distinct audiences with distinct dependency needs. Don't split prematurely — one package is fine until marketplace requirements demand separation.
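
The dependency direction from the diagram above might look like this in packaging metadata. These are illustrative fragments only — the actual package metadata may differ:

```toml
# Illustrative pyproject.toml fragments, one per package (not one file).

# echofit-sdk/pyproject.toml — no CLI or MCP framework dependencies
[project]
name = "echofit-sdk"
dependencies = []

# echofit-mcp/pyproject.toml — MCP server depends only on the SDK + framework
[project]
name = "echofit-mcp"
dependencies = ["echofit-sdk"]

# echofit/pyproject.toml — CLI depends on the SDK and pulls in Click
[project]
name = "echofit"
dependencies = ["echofit-sdk", "click"]
```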

Ecosystem Libraries

| Library | Purpose | Used by |
| --- | --- | --- |
| mcp-app | MCP server framework: config-driven tool discovery, JWT auth, user-identity middleware, ASGI composition | HTTP multi-user solutions |
| app-user | User management: registration, token generation, per-user data stores | Data-owning solutions |
| gapp | Deployment: Cloud Run, secrets, GCS FUSE data volumes, CI/CD | Any solution deploying to GCP |
| FastMCP | MCP protocol SDK with @mcp.tool() decorators | stdio-only solutions |

Prior Art

This repo supersedes experiment-cli-mcp-sdk, which explored the SDK-first pattern before the echomodel ecosystem existed. The interface parity testing concepts from that repo remain relevant but are not yet implemented.
