# Examples

Runnable examples demonstrating different features of Casual MCP. Each subfolder contains its own `config.json` and one or more scripts.

## Prerequisites

Install the project for development from the repo root:

```shell
uv sync --group dev
```

Set the required API key for your provider (e.g. `export OPENAI_API_KEY=your-key`).

## Running Examples

Change into a subfolder and run a script with `uv run`:

```shell
cd examples/tool_calling
uv run python chat_weather.py
```

Most examples default to `gpt-4.1-nano`. Override the model with the `MODEL_NAME` environment variable:

```shell
MODEL_NAME=gpt-4.1 uv run python chat_weather.py
```

## Folders

Core tool-calling examples using `McpToolChat.from_config()`.

| Script | Description |
| --- | --- |
| `chat_weather.py` | Asks the LLM to compare the weather in two cities using weather tools |
| `chat_fetch.py` | Asks the LLM to fetch and summarise a webpage |
| `multiturn.py` | Multi-turn conversation that asks about Sydney's weather, then compares it to Tokyo, demonstrating persistent connections and per-turn stats |
| `manual_construction.py` | Builds `McpToolChat` manually from individual components (`ToolCache`, `ModelFactory`, `load_mcp_client`) instead of using `from_config()` |

Demonstrates deferred tool loading with the `search-tools` meta-tool.

| Script | Description |
| --- | --- |
| `single_server_discovery.py` | Shows how tools are partitioned into loaded and deferred sets, then lets the LLM discover and load deferred tools on demand |
| `multi_server_discovery.py` | Asks a question that requires tools from both the time and weather servers, forcing the LLM to search across multiple deferred servers |

The config in this folder has `tool_discovery` enabled and at least one server marked with `defer_loading: true`.
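As a rough illustration only, a config enabling discovery with one deferred server might look like the sketch below. Beyond the `tool_discovery` and `defer_loading` keys named above, the structure (the `mcp_servers` key, the server name, and its launch fields) is an assumption; check the `config.json` in this folder for the real shape.

```json
{
  "tool_discovery": true,
  "mcp_servers": {
    "weather": {
      "defer_loading": true
    }
  }
}
```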

Demonstrates restricting which tools are available to the LLM per request.

| Script | Description |
| --- | --- |
| `chat_with_toolset.py` | Shows three ways to use toolsets: from config, programmatic creation, and exclusion-based filtering |
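For orientation, a config-defined toolset restricting a request to a couple of tools might be declared roughly as below. The `toolsets` key, the toolset name, and the tool names are all assumptions inferred from the script's description, not confirmed API; refer to this folder's `config.json` for the actual format.

```json
{
  "toolsets": {
    "weather_only": {
      "tools": ["get_forecast", "get_current_weather"]
    }
  }
}
```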