A Python package that integrates Streamlit's intuitive web interface with LangGraph's advanced multi-agent orchestration. Build interactive AI applications featuring multiple specialized agents collaborating in customizable workflows.
If you're using Streamlit with a single agent, consider streamlit-openai instead. This project is inspired by that work, especially its integration with the OpenAI Responses API.
streamlit-langgraph is designed for multi-agent systems where multiple specialized agents collaborate to solve complex tasks.
- Main Goal
- Status
- Installation
- API Key Configuration
- Quick Start
- Examples
- Package Structure
- Core Logic
- Core Concepts
- API Reference
- License
To build successful multi-agent systems, defining agent instructions, tasks, and context matters more than the actual orchestration logic, as the following guidance illustrates:

LangChain - Customizing agent context:

> At the heart of multi-agent design is context engineering - deciding what information each agent sees... The quality of your system heavily depends on context engineering.

> 80% of your effort should go into designing tasks, and only 20% into defining agents... well-designed tasks can elevate even a simple agent.
With that in mind, this package is designed so users can focus on defining agents and tasks, rather than worrying about agent orchestration or UI implementation details.
Key Features:

- Seamless Integration of Streamlit and LangGraph: Combine Streamlit's rapid UI development, which turns simple Python scripts into interactive web applications, with LangGraph's flexible agent orchestration for real-time interaction.
- Lowering the Barrier to Multi-Agent Orchestration: Simplify multi-agent development with easy-to-use interfaces that abstract away LangGraph's complexity.
- Ready-to-Use Multi-Agent Architectures: Standard patterns (supervisor, hierarchical, network) work out of the box.
- Full OpenAI Responses API Support: Unlike LangChain's partial support, this package automatically configures OpenAI's Responses API when native tools are enabled. LangChain's ChatOpenAI supports only basic native tool features and lacks partial image generation, real-time code interpreter output, and several other advanced capabilities. To provide a true live experience, I integrated the Responses API separately while maintaining compatibility with other LangChain features.
- Extensibility to Other LLMs: Not limited to OpenAI - the framework is designed to support additional LLM providers such as Gemini and Claude through LangChain, with manual adaptations as needed, similar to the approach used for OpenAI's Responses API.
This project is in alpha. Features and APIs are subject to change.
Note: Uses langchain/langgraph version 1.0.1.
| Provider | Support | Notes |
|---|---|---|
| OpenAI | ✅ | Uses ResponseAPIExecutor (Responses API) when native tools enabled and HITL disabled. Uses CreateAgentExecutor (ChatCompletion API) for HITL or when native tools disabled. |
| Anthropic (Claude) | ❓ | May work but not explicitly tested. |
| Google (Gemini) | ✅ | Full support via LangChain's init_chat_model |
| Other LangChain Providers | ❓ | May work but not explicitly tested. |

Legend:
- ✅ = Fully supported and tested
- ❌ = Not supported
- ❓ = May work but not explicitly tested
Notes:
- OpenAI: Automatically selects ResponseAPIExecutor (Responses API) or CreateAgentExecutor (ChatCompletion API) based on native tool configuration and HITL settings
- ResponseAPIExecutor: Used when native tools enabled and HITL disabled
- CreateAgentExecutor: Used for HITL scenarios or when native tools are disabled
- Support depends on LangChain's provider compatibility
Using pip:

```bash
pip install streamlit-langgraph
```

Using UV:

UV is a fast Python package installer and resolver:

```bash
uv pip install streamlit-langgraph
```

Or if you're using UV for project management:

```bash
uv add streamlit-langgraph
```

Before running your application, you need to configure your API keys. Create a `.streamlit/config.toml` file in your project root directory:

```toml
OPENAI_API_KEY = "your-openai-api-key-here"
```

File structure:

```
your-project/
├── .streamlit/
│   └── config.toml
├── your_app.py
└── ...
```

Run with: `streamlit run your_app.py`
Single Agent (Simple):

```python
# your_app.py
import streamlit as st
import streamlit_langgraph as slg

# Define your agent
assistant = slg.Agent(
    name="assistant",
    role="AI Assistant",
    instructions="You are a helpful AI assistant.",
    provider="openai",
    model="gpt-4.1-mini"
)

# Configure UI
config = slg.UIConfig(
    title="My AI Assistant",
    welcome_message="Hello! How can I help you today?"
)

# Create and run chat interface
if "chat" not in st.session_state:
    st.session_state.chat = slg.LangGraphChat(
        agents=[assistant],
        config=config
    )
st.session_state.chat.run()
```

Multi-Agent Workflow:
```python
# your_app.py
import streamlit as st
import streamlit_langgraph as slg

# Load agents from YAML
agents = slg.AgentManager.load_from_yaml("configs/my_agents.yaml")

# Create workflow
supervisor = agents[0]
workers = agents[1:]
builder = slg.WorkflowBuilder()
workflow = builder.create_supervisor_workflow(
    supervisor=supervisor,
    workers=workers,
    execution_mode="sequential",
    delegation_mode="handoff"
)

# Create chat with workflow
if "chat" not in st.session_state:
    st.session_state.chat = slg.LangGraphChat(
        workflow=workflow,
        agents=agents
    )
st.session_state.chat.run()
```

All examples are in the `examples/` directory.
File: `examples/01_basic_simple_example.py`

Basic chat interface with a single agent. No workflow orchestration.

```bash
streamlit run examples/01_basic_simple_example.py
```

File: `examples/06_feature_file_callback_example.py`
Demonstrates how to use the file_callback parameter to preprocess uploaded files before they are sent to OpenAI. The callback receives the file path and returns a processed file path, or optionally a tuple with additional files.
```bash
streamlit run examples/06_feature_file_callback_example.py
```

Features:
- Preprocess files (e.g., filter CSV columns) before upload
- Works with single agent and multi-agent workflows
- Support for returning additional files generated during preprocessing
- Automatically uploads additional CSV files to code_interpreter container
Example - Simple Preprocessing:
```python
import pandas as pd

import streamlit_langgraph as slg

def filter_columns(file_path: str) -> str:
    """Filter CSV to keep only columns starting with 'num_'."""
    if not file_path.endswith('.csv'):
        return file_path
    df = pd.read_csv(file_path)
    num_cols = [col for col in df.columns if col.startswith('num_')]
    df_filtered = df[num_cols] if num_cols else df
    processed_path = file_path.replace('.csv', '_filtered.csv')
    df_filtered.to_csv(processed_path, index=False)
    return processed_path

config = slg.UIConfig(
    title="File Preprocessing Example",
    file_callback=filter_columns,
)
```

Example - Preprocessing with Additional Files:
```python
from pathlib import Path

import streamlit_langgraph as slg

def preprocess_with_additional_files(file_path: str):
    """
    Preprocess file and generate additional CSV files.

    Returns:
        Tuple of (main_file_path, additional_files_directory) or
        Tuple of (main_file_path, [list_of_file_paths])
    """
    # Process the main file
    processed_path = process_main_file(file_path)

    # Generate additional CSV files in a directory
    output_dir = Path("outputs") / "generated_csvs"
    output_dir.mkdir(parents=True, exist_ok=True)

    # Generate multiple CSV files
    generate_csv_files(output_dir)

    # Return tuple: (main file, directory with additional files)
    # All CSV files in the directory will be automatically uploaded
    return (processed_path, output_dir)

config = slg.UIConfig(
    title="Multi-File Preprocessing Example",
    file_callback=preprocess_with_additional_files,
)
```

File: `examples/02_workflow_supervisor_sequential_example.py`
Supervisor coordinates workers sequentially. Workers execute one at a time with full context.
Config: `examples/configs/supervisor_sequential.yaml`

```bash
streamlit run examples/02_workflow_supervisor_sequential_example.py
```

File: `examples/03_workflow_supervisor_parallel_example.py`
Supervisor delegates tasks to multiple workers who can work in parallel.
Config: `examples/configs/supervisor_parallel.yaml`

```bash
streamlit run examples/03_workflow_supervisor_parallel_example.py
```

File: `examples/04_workflow_hierarchical_example.py`
Multi-level organization with top supervisor managing sub-supervisor teams.
Config: `examples/configs/hierarchical.yaml`

```bash
streamlit run examples/04_workflow_hierarchical_example.py
```

File: `examples/05_workflow_network_example.py`
Peer-to-peer network pattern where agents can communicate directly with any other agent. No central supervisor - agents form a mesh topology and can hand off work to any peer.
Config: `examples/configs/network.yaml`

```bash
streamlit run examples/05_workflow_network_example.py
```

Features:
- True peer-to-peer collaboration
- Agents can hand work back and forth dynamically
- No central coordinator - all agents are peers
- First agent in the list serves as the entry point
- Best for: Complex scenarios with interdependent concerns
Use Case: Strategic consulting teams where specialists need to collaborate dynamically, with work flowing back and forth as issues are identified and re-evaluated.
File: `examples/07_feature_human_in_the_loop_example.py`

Demonstrates HITL with tool execution approval. Users can approve, reject, or edit tool calls before execution.

Config: `examples/configs/human_in_the_loop.yaml`

```bash
streamlit run examples/07_feature_human_in_the_loop_example.py
```

Features:
- Custom tools with approval workflow
- Sentiment analysis example
- Review escalation with edit capability
File: `examples/08_feature_mcp_example.py`

Demonstrates integration with MCP (Model Context Protocol) servers to access external tools and resources.

```bash
streamlit run examples/08_feature_mcp_example.py
```

Prerequisites:

```bash
pip install fastmcp langchain-mcp-adapters
```

Features:
- Connect to MCP servers via stdio or HTTP transport
- Access tools from external MCP servers
- Works with both ResponseAPIExecutor and CreateAgentExecutor
- Example MCP servers included (math, weather)
MCP Server Examples:
- `examples/mcp_servers/math_server.py` - Math operations (add, multiply, subtract, divide)
- `examples/mcp_servers/weather_server.py` - Weather information
This section provides an overview of the package's internal organization and module structure.
- `agent.py`: `Agent` class and `AgentManager` for agent configuration and management
- `chat.py`: `LangGraphChat` main interface and `UIConfig` for UI settings
- `workflow/`: Workflow builders and patterns (supervisor, hierarchical, network)

Executor (`core/executor/`):

- `response_api.py`: `ResponseAPIExecutor` for OpenAI Responses API
- `create_agent.py`: `CreateAgentExecutor` for LangChain agents with HITL support
- `registry.py`: `ExecutorRegistry` for automatic executor selection
- `workflow.py`: `WorkflowExecutor` for workflow execution
- `conversation_history.py`: Conversation history management mixin

State (`core/state/`):

- `state_schema.py`: `WorkflowState` TypedDict and `WorkflowStateManager`
- `state_sync.py`: `StateSynchronizer` for syncing workflow state

Middleware (`core/middleware/`):

- `hitl.py`: `HITLHandler` and `HITLUtils` for human-in-the-loop
- `interrupts.py`: `InterruptManager` for interrupt handling

- `display_manager.py`: `DisplayManager`, `Section`, and `Block` for UI rendering
- `stream_processor.py`: `StreamProcessor` for handling streaming responses

- `file_handler.py`: `FileHandler` for file upload and processing
- `custom_tool.py`: `CustomTool` registry for custom tools
- `mcp_tool.py`: `MCPToolManager` for MCP server integration

- `builder.py`: `WorkflowBuilder` for creating workflows
- `patterns/`: Workflow pattern implementations (supervisor, hierarchical, network)
- `agent_nodes/`: Agent node factories and delegation patterns
This section explains the internal architecture for rendering messages and managing state.
All chat messages are rendered through a Section/Block architecture:
- Section: Represents a single chat message (user or assistant). Contains multiple blocks.
- Block: Individual content units within a section:
  - `text`: Plain text content
  - `code`: Code blocks (collapsible)
  - `reasoning`: Reasoning/thinking blocks (collapsible)
  - `image`: Image content
  - `download`: Downloadable files
Flow:

- User input → Creates a `Section` with a `text` block
- Agent response → Creates a `Section` with blocks based on content type
- Streaming → Updates existing blocks or creates new ones as content arrives
- All sections/blocks are saved to `workflow_state` for persistence
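To make the architecture concrete, here is an illustrative sketch of how one assistant message might decompose into a section with typed blocks. The field names below are assumptions for illustration - the actual serialized shape is defined in `display_manager.py` and may differ:

```python
# Hypothetical serialized section - field names are assumptions, not the actual schema
assistant_section = {
    "role": "assistant",
    "blocks": [
        {"type": "reasoning", "content": "First, inspect the CSV schema..."},  # collapsible
        {"type": "code", "content": "df.describe()"},                          # collapsible
        {"type": "text", "content": "The dataset has three numeric columns."},
        {"type": "download", "content": "report.csv"},
    ],
}

# Sections like this are persisted under
# workflow_state["metadata"]["display_sections"] and re-rendered on each rerun.
```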
`workflow_state` is the single source of truth for all chat history and application state.

Structure:

```python
workflow_state = {
    "messages": [...],               # Conversation messages (user/assistant)
    "metadata": {
        "display_sections": [...],   # UI sections/blocks for rendering
        "pending_interrupts": {...}, # HITL state
        "executors": {...},          # Executor metadata
        ...
    },
    "agent_outputs": {...},          # Agent responses by agent name
    "current_agent": "...",          # Currently active agent
    "files": [...]                   # File metadata
}
```

Key Points:
- All messages (user and assistant) are stored in `workflow_state["messages"]`
- All UI sections/blocks are stored in `workflow_state["metadata"]["display_sections"]`
- State persistence: Workflow state persists across Streamlit reruns
- Workflow execution: LangGraph workflows read from and write to `workflow_state`
- State synchronization: `StateSynchronizer` manages updates to `workflow_state`
- No fallbacks: All state operations require `state_manager` - no direct `session_state` access for display sections
`st.session_state` is used for display management and runtime state:

Display Management:

- `workflow_state`: The single source of truth (stored in session state for Streamlit persistence)
- `display_sections`: Deprecated - now stored in `workflow_state.metadata.display_sections`
- `agent_executors`: Runtime executor instances (not persisted in workflow_state)
- `uploaded_files`: File objects for current session (metadata stored in workflow_state)

Key Separation:

- `workflow_state`: Persistent, single source of truth for all chat data
- `st.session_state`: Streamlit-specific runtime state and references to workflow_state
State Flow:

```
User Input
    ↓
StateSynchronizer.add_user_message()
    ↓
workflow_state["messages"] updated
    ↓
DisplayManager creates Section/Block
    ↓
Section._save_to_session_state()
    ↓
workflow_state["metadata"]["display_sections"] updated
    ↓
render_message_history() reads from workflow_state
    ↓
Streamlit renders UI
```
Benefits:

- Consistency: All state in one place (`workflow_state`)
- Persistence: State survives Streamlit reruns
- Workflow compatibility: LangGraph workflows can read/write state directly
- UI synchronization: Display always reflects `workflow_state`
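As an illustration, here is how you might inspect that state for debugging, assuming `workflow_state` is stored under a session-state key of the same name (an assumption - the actual key is internal). Writes should always go through `StateSynchronizer`/`state_manager` rather than touching the dictionary directly:

```python
import streamlit as st

# Read-only debugging sketch - the "workflow_state" key name is an assumption
state = st.session_state.get("workflow_state", {})

# Conversation messages: the single source of truth
for msg in state.get("messages", []):
    print(msg)

# UI sections/blocks used to re-render chat history on each rerun
sections = state.get("metadata", {}).get("display_sections", [])
print(f"{len(sections)} sections to render")
```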
Agents can be configured in two ways:
Python Configuration:
```python
import streamlit_langgraph as slg

agent = slg.Agent(
    name="analyst",              # Unique identifier
    role="Data Analyst",         # Agent's role description
    instructions="...",          # Detailed task instructions
    provider="openai",           # LLM provider
    model="gpt-4.1-mini",        # Model name
    temperature=0.0,             # Response randomness
    tools=["tool1", "tool2"],    # Available tools
    mcp_servers={...},           # MCP server configurations
    context="full",              # Context mode
    human_in_loop=True,          # Enable HITL
    interrupt_on={...}           # HITL configuration
)
```

YAML File Configuration:
Agents can be configured using YAML files for easier management:
```yaml
- name: supervisor
  role: Project Manager
  instructions: |
    You coordinate tasks and delegate to specialists.
    Analyze user requests and assign work appropriately.
  provider: openai
  model: gpt-4.1-mini
  temperature: 0.0
  tools:
    - tool_name
  context: full

- name: worker
  role: Specialist
  instructions: |
    You handle specific tasks delegated by the supervisor.
  provider: openai
  model: gpt-4.1-mini
  temperature: 0.0
```

Load the above YAML in Python:
```python
import streamlit_langgraph as slg

# Load agents from YAML file
agents = slg.AgentManager.load_from_yaml("configs/agents.yaml")
supervisor = agents[0]
workers = agents[1:]
```

For complete parameter reference, see Agent API Reference.
Configure the Streamlit interface using UIConfig:
```python
import streamlit as st
import streamlit_langgraph as slg

config = slg.UIConfig(
    title="My Multiagent App",
    welcome_message="Welcome! Ask me anything.",
    user_avatar="👤",
    assistant_avatar="🤖",
    page_icon="🤖",
    page_layout="wide",
    enable_file_upload="multiple",
    show_sidebar=True,
    stream=True,
    file_callback=None
)

if "chat" not in st.session_state:
    st.session_state.chat = slg.LangGraphChat(workflow=workflow, agents=agents, config=config)
st.session_state.chat.run()
```

Custom Sidebar:
```python
import streamlit as st
import streamlit_langgraph as slg

config = slg.UIConfig(show_sidebar=False)  # Disable default sidebar

# Define your own sidebar
with st.sidebar:
    st.header("Custom Sidebar")
    option = st.selectbox("Choose option", ["A", "B", "C"])
    # Your custom controls

if "chat" not in st.session_state:
    st.session_state.chat = slg.LangGraphChat(
        workflow=workflow,
        agents=agents,
        config=config
    )
st.session_state.chat.run()
```

For complete parameter reference, see UIConfig API Reference.
A supervisor agent coordinates worker agents (a configuration sketch follows the list below):
- Sequential: Workers execute one at a time
- Parallel: Workers can execute simultaneously
- Handoff: Full context transfer between agents (works with both ResponseAPIExecutor and CreateAgentExecutor)
- Tool Calling: Workers called as tools
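For example, a minimal sketch of a parallel, tool-calling supervisor (agent definitions assumed; see the WorkflowBuilder API reference for full parameter details):

```python
import streamlit_langgraph as slg

# supervisor_agent, data_worker, and report_worker are assumed
# to be defined elsewhere as slg.Agent instances
builder = slg.WorkflowBuilder()
workflow = builder.create_supervisor_workflow(
    supervisor=supervisor_agent,
    workers=[data_worker, report_worker],
    execution_mode="parallel",      # workers may run simultaneously
    delegation_mode="tool_calling"  # workers are invoked as tools
)
```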
Multiple supervisor teams coordinated by a top supervisor (a sketch follows the list below):
- Top supervisor delegates to sub-supervisors
- Each sub-supervisor manages their own team
- Multi-level organizational structure
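A minimal sketch (agents assumed to be defined elsewhere; a complete example appears in the WorkflowBuilder API reference):

```python
import streamlit_langgraph as slg

# One SupervisorTeam per sub-supervisor; agents assumed defined elsewhere
research_team = slg.WorkflowBuilder.SupervisorTeam(
    supervisor=research_lead,
    workers=[researcher1, researcher2],
    team_name="research_team"
)

builder = slg.WorkflowBuilder()
workflow = builder.create_hierarchical_workflow(
    top_supervisor=project_manager,
    supervisor_teams=[research_team]
)
```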
Peer-to-peer mesh topology where agents can communicate directly (a sketch follows the list below):
- No central supervisor - all agents are peers
- Any agent can hand off to any other agent
- First agent in the list serves as the entry point
- Best for: Complex scenarios with interdependent concerns where work needs to flow back and forth
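A minimal sketch (peer agents assumed to be defined elsewhere; the first agent in the list is the entry point):

```python
import streamlit_langgraph as slg

# Peer agents assumed defined elsewhere as slg.Agent instances
builder = slg.WorkflowBuilder()
workflow = builder.create_network_workflow(
    agents=[tech_strategist, business_analyst, risk_strategist]
)
```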
| Pattern | Use Case | Execution | Best For |
|---|---|---|---|
| Supervisor Sequential | Tasks need full context from previous steps | Sequential | Research, analysis pipelines |
| Supervisor Parallel | Independent tasks can run simultaneously | Parallel | Data processing, multi-source queries |
| Hierarchical | Complex multi-level organization | Sequential | Large teams, department structure |
| Network | Interdependent concerns, dynamic collaboration | Peer-to-peer | Strategic consulting, complex problem-solving with back-and-forth |
The system uses two executors that are automatically selected based on agent configuration:
- When Used: Native OpenAI tools enabled (`allow_code_interpreter`, `allow_web_search`, `allow_file_search`, `allow_image_generation`) AND HITL disabled
- API: Uses OpenAI's native Responses API directly
- Features:
  - Native tool support (code_interpreter, file_search, web_search, image_generation)
  - Custom tools support (converts LangChain tools to OpenAI function format)
  - MCP tools support (via OpenAI's MCP integration)
  - Streaming support
- Limitations: Does not support HITL (human-in-the-loop)
- When Used: HITL enabled OR native tools disabled
- API: Uses ChatCompletion API via LangChain's `create_agent`
- Features:
  - Full HITL support with approval workflows
  - Multi-provider support (OpenAI, Anthropic, Google, etc.)
  - Custom tools support (LangChain StructuredTool)
  - MCP tools support (via LangChain MCP adapters)
  - Streaming support
- Note: When HITL is enabled, native OpenAI tools are automatically disabled (HITL requires CreateAgentExecutor)
The `ExecutorRegistry` automatically selects the appropriate executor:

```python
# Selection logic:
# - If HITL enabled → CreateAgentExecutor (native tools disabled)
# - If native tools enabled AND HITL disabled → ResponseAPIExecutor
# - Otherwise → CreateAgentExecutor
```

Example:
```python
# Uses ResponseAPIExecutor (native tools, no HITL)
agent = slg.Agent(
    name="assistant",
    allow_code_interpreter=True,
    allow_web_search=True
)

# Uses CreateAgentExecutor (HITL enabled)
agent = slg.Agent(
    name="assistant",
    human_in_loop=True,
    interrupt_on={"tool_name": {"allowed_decisions": ["approve", "reject"]}}
)
```

Both executors work seamlessly with handoff delegation patterns:
- ResponseAPIExecutor: Uses OpenAI ChatCompletion API with function calling for delegation
- CreateAgentExecutor: Uses LangChain tool calling for delegation
- The delegation system automatically routes to the appropriate execution method based on executor type
Control how conversation history is managed for agents:
`"full"`:

- Agent sees all previous messages in the conversation
- Best for: Tasks requiring complete conversation context
- Use case: Long-running conversations, context-dependent tasks

`"filtered"` (default):

- Agent sees filtered conversation history (system messages and relevant context)
- Best for: Most use cases, balances context with efficiency
- Use case: General purpose agents, standard workflows

`"disable"`:

- Agent sees no conversation history (only current turn)
- Best for: Stateless operations, independent tasks
- Use case: One-off computations, API calls, isolated operations
```python
import streamlit_langgraph as slg

agent = slg.Agent(
    name="analyst",
    role="Data Analyst",
    instructions="Analyze data",
    conversation_history_mode="filtered"  # Default
)
```

Control how much context each agent receives from workflow execution:
`"full"`:

- Agent sees all messages and previous worker outputs
- Best for: Tasks requiring complete conversation history
- Use case: Analysis, synthesis, decision-making

`"summary"`:

- Agent sees summarized context from previous steps
- Best for: Tasks that need overview but not details
- Use case: High-level coordination, routing decisions

`"least"` (default):

- Agent sees only supervisor instructions for their task
- Best for: Focused, independent tasks
- Use case: Specialized computations, API calls
```python
import streamlit_langgraph as slg

analyst = slg.Agent(
    name="analyst",
    role="Data Analyst",
    instructions="Analyze the provided data",
    context="least",                      # Default: sees only task instructions
    conversation_history_mode="filtered"  # Default: filtered conversation history
)
```

Enable human approval for critical agent actions:
- Tool Execution Approval: Human reviews tool calls before execution
- Decision Types: Approve, Reject, or Edit tool inputs
- Interrupt-Based: Workflow pauses until human decision

Use cases:

- Sensitive operations (data deletion, API calls)
- Financial transactions
- Content moderation
- Compliance requirements
```python
import streamlit_langgraph as slg

executor = slg.Agent(
    name="executor",
    role="Action Executor",
    instructions="Execute approved actions",
    tools=["delete_data", "send_email"],
    human_in_loop=True,  # Enable HITL
    interrupt_on={
        "delete_data": {
            "allowed_decisions": ["approve", "reject"]
        },
        "send_email": {
            "allowed_decisions": ["approve", "reject", "edit"]
        }
    },
    hitl_description_prefix="Action requires approval"
)
```

- Approve: Execute tool with provided inputs
- Reject: Skip tool execution, continue workflow
- Edit: Modify tool inputs before execution
Extend agent capabilities by registering custom functions as tools:
```python
import streamlit_langgraph as slg

def analyze_data(data: str, method: str = "standard") -> str:
    """
    Analyze data using specified method.

    This docstring is shown to the LLM, so be descriptive about:
    - What the tool does
    - When to use it
    - What each parameter means

    Args:
        data: The data to analyze (JSON string, CSV, etc.)
        method: Analysis method - "standard", "advanced", or "quick"

    Returns:
        Analysis results with insights and recommendations
    """
    # Your tool logic here
    result = f"Analyzed {len(data)} characters using {method} method"
    return result

# Register the tool
slg.CustomTool.register_tool(
    name="analyze_data",
    description=(
        "Analyze structured data using various methods. "
        "Use this when you need to process and extract insights from data. "
        "Supports JSON, CSV, and plain text formats."
    ),
    function=analyze_data
)
```

Then reference the registered tool by name in an agent definition:

```python
import streamlit_langgraph as slg

agent = slg.Agent(
    name="analyst",
    role="Data Analyst",
    instructions="Use analyze_data tool to process user data",
    tools=["analyze_data"]  # Tool name from registration
)
```

Best practices (an error-handling sketch follows this list):

- Descriptive Docstrings: LLM uses these to understand when/how to use the tool
- Type Hints: Help with parameter validation and documentation
- Clear Names: Use descriptive names that indicate purpose
- Error Handling: Return error messages as strings, don't raise exceptions
- Return Strings: Always return string results for LLM consumption
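For instance, the error-handling guideline means catching failures inside the tool and returning a message the LLM can act on. A small illustrative sketch (the `parse_config` tool is hypothetical):

```python
import json

def parse_config(raw_json: str) -> str:
    """
    Parse a JSON configuration string and summarize its top-level keys.

    Args:
        raw_json: Configuration as a JSON string

    Returns:
        A summary of the parsed keys, or an error message on invalid input
    """
    try:
        config = json.loads(raw_json)
    except json.JSONDecodeError as exc:
        # Return the error as a string instead of raising, so the LLM
        # can see what went wrong and correct its next tool call
        return f"Error: invalid JSON ({exc}). Please fix the input and retry."
    if not isinstance(config, dict):
        return "Error: expected a JSON object at the top level."
    return f"Parsed config with keys: {', '.join(config)}"
```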
```python
import streamlit_langgraph as slg

def delete_records(record_ids: str, reason: str) -> str:
    """
    Delete records from database. REQUIRES APPROVAL.

    Args:
        record_ids: Comma-separated list of record IDs
        reason: Justification for deletion

    Returns:
        Confirmation message with deleted record count
    """
    ids = record_ids.split(",")
    return f"Deleted {len(ids)} records. Reason: {reason}"

slg.CustomTool.register_tool(
    name="delete_records",
    description="Delete database records (requires human approval)",
    function=delete_records
)

# Agent with HITL for this tool
agent = slg.Agent(
    name="admin",
    role="Database Administrator",
    instructions="Manage database operations",
    tools=["delete_records"],
    human_in_loop=True,
    interrupt_on={
        "delete_records": {
            "allowed_decisions": ["approve", "reject", "edit"]
        }
    }
)
```

MCP (Model Context Protocol) is an open protocol for standardizing how applications provide tools and context to LLMs. This package supports connecting to MCP servers to access external tools and resources.
MCP enables LLMs to interact with external systems through a standardized interface. MCP servers expose tools, resources, and prompts that agents can use, making it easy to integrate with databases, APIs, file systems, and other services.
MCP servers can communicate via different transport protocols:

- STDIO Transport (Default)
  - Communicates through standard input/output
  - Perfect for local development and command-line tools
  - Each client spawns a new server process
  - Works with all agents (unified executor)
- HTTP Transport (streamable_http)
  - Network-accessible web service
  - Supports multiple concurrent clients
  - Works with all agents (unified executor)
  - When using native OpenAI tools with Responses API: Server must be publicly accessible (not localhost)
- SSE Transport (Legacy)
  - Server-Sent Events transport
  - Backward compatibility only
  - Use HTTP transport for new projects
Configure MCP servers in your agent:
```python
import os

import streamlit_langgraph as slg

# STDIO transport (for local development)
mcp_servers = {
    "math": {
        "transport": "stdio",
        "command": "python",
        "args": [os.path.join("mcp_servers", "math_server.py")]
    }
}

# HTTP transport (for network-accessible servers)
# Note: When using native OpenAI tools with Responses API, server must be publicly accessible
mcp_servers = {
    "math": {
        "transport": "http",  # or "streamable_http" (both accepted)
        "url": "http://your-server.com:8000/mcp"  # Public URL required when using Responses API
    }
}

agent = slg.Agent(
    name="calculator",
    role="Calculator",
    instructions="Use MCP tools to perform calculations",
    provider="openai",
    model="gpt-4o-mini",
    mcp_servers=mcp_servers
)
```

Use FastMCP to create MCP servers:
```python
# math_server.py
from fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run()  # STDIO transport (default)
    # Or: mcp.run(transport="http", port=8000)  # HTTP transport
```

Running MCP Servers:
```bash
# Using FastMCP CLI
fastmcp run math_server.py

# Using FastMCP CLI with HTTP transport
fastmcp run math_server.py --transport http
```

| Transport | Support | Notes |
|---|---|---|
| stdio | ✅ Supported | Local only, perfect for development |
| http | ✅ Supported | Network-accessible, supports multiple clients |
| sse | ✅ Supported | Legacy, use HTTP instead |
Important Notes:
- Executor selection is automatic based on agent configuration (ResponseAPIExecutor or CreateAgentExecutor)
- When using native OpenAI tools (code_interpreter, web_search, etc.) without HITL, ResponseAPIExecutor is used
- For ResponseAPIExecutor with MCP tools: MCP servers must be publicly accessible (not localhost)
- OpenAI's servers connect to your MCP server when using the Responses API, so `localhost` won't work
- For local development with native tools, use stdio transport or deploy MCP servers publicly
- For local development without native tools, stdio or localhost HTTP works fine
- CreateAgentExecutor supports all MCP transport types (stdio, HTTP, localhost)
```python
# Use stdio transport for local development
mcp_servers = {
    "math": {
        "transport": "stdio",
        "command": "python",
        "args": ["math_server.py"]
    }
}

agent = slg.Agent(
    name="calculator",
    mcp_servers=mcp_servers
)
```

```python
# Use HTTP transport with public URL
mcp_servers = {
    "math": {
        "transport": "http",
        "url": "https://your-mcp-server.com/mcp"  # Public URL
    }
}

agent = slg.Agent(
    name="calculator",
    mcp_servers=mcp_servers
)
```

For agents using native OpenAI tools (Responses API) with HTTP transport:
- MCP server must be publicly accessible (not localhost)
- Server should bind to `0.0.0.0` (not `127.0.0.1`) to accept external connections
- Security groups/firewalls must allow inbound traffic
- Use HTTPS for production deployments
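A sketch of a production-style startup for the FastMCP math server above. Note that passing `host`/`port` to `mcp.run()` is an assumption about your FastMCP version - check its documentation:

```python
# math_server.py - production-style startup (HTTPS termination handled by a
# reverse proxy or load balancer in front of this process)
from fastmcp import FastMCP

mcp = FastMCP("Math")

if __name__ == "__main__":
    # Bind to 0.0.0.0 so external clients (e.g., OpenAI's servers using the
    # Responses API) can reach the server; 127.0.0.1 would reject them
    mcp.run(transport="http", host="0.0.0.0", port=8000)
```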
Description: Core class for defining individual agents with their configurations.
Constructor Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | Required | Unique identifier for the agent |
| `role` | `str` | Required | Brief description of the agent's role |
| `instructions` | `str` | Required | Detailed instructions guiding agent behavior |
| `provider` | `str` | `"openai"` | LLM provider: "openai", "anthropic", "google", etc. |
| `model` | `str` | `"gpt-4.1-mini"` | Model name (e.g., "gpt-4o", "claude-3-5-sonnet-20241022") |
| `system_message` | `str` | `None` | Custom system message (auto-generated from role and instructions if None) |
| `temperature` | `float` | `0.0` | Sampling temperature (0.0 to 2.0) |
| `tools` | `List[str]` | `[]` | List of tool names available to the agent |
| `mcp_servers` | `Dict[str, Dict]` | `None` | MCP server configurations (see MCP Tools) |
| `context` | `str` | `"least"` | Context mode: "full", "summary", or "least" |
| `human_in_loop` | `bool` | `False` | Enable human-in-the-loop approval for tool execution |
| `interrupt_on` | `Dict` | `{}` | HITL configuration per tool |
| `hitl_description_prefix` | `str` | `"Tool execution pending approval"` | Prefix for HITL approval messages |
| `allow_code_interpreter` | `bool` | `False` | Enable code interpreter (Responses API only) |
| `container_id` | `str` | `None` | OpenAI container ID for code interpreter (auto-created if not provided) |
| `allow_file_search` | `bool` | `False` | Enable file search (Responses API only) |
| `allow_web_search` | `bool` | `False` | Enable web search (Responses API only) |
| `allow_image_generation` | `bool` | `False` | Enable image generation (Responses API only) |
| `conversation_history_mode` | `str` | `"filtered"` | Conversation history mode: "full", "filtered", or "disable" |
Example:
```python
import streamlit_langgraph as slg

agent = slg.Agent(
    name="analyst",
    role="Data Analyst",
    instructions="Analyze data and provide insights",
    provider="openai",
    model="gpt-4o-mini",
    temperature=0.0,
    tools=["analyze_data", "visualize"],
    context="full",                        # See all messages and previous outputs
    conversation_history_mode="filtered",  # Use filtered conversation history
    human_in_loop=True,
    interrupt_on={
        "analyze_data": {
            "allowed_decisions": ["approve", "reject", "edit"]
        }
    }
)
```

Description: Manages multiple agents and handles agent loading/retrieval.
Class Methods:
| Method | Parameters | Returns | Description |
|---|---|---|---|
| `load_from_yaml(path)` | `path: str` | `List[Agent]` | Load agents from YAML configuration file |
| `get_llm_client(agent)` | `agent: Agent` | LLM client | Get configured LLM client for an agent |
Instance Methods:
| Method | Parameters | Returns | Description |
|---|---|---|---|
| `add_agent(agent)` | `agent: Agent` | `None` | Add agent to the manager |
| `remove_agent(name)` | `name: str` | `None` | Remove agent by name |
Properties:
| Property | Type | Description |
|---|---|---|
| `agents` | `Dict[str, Agent]` | Dictionary of agents keyed by name |
| `active_agent` | `str` | Name of the currently active agent |
Example:
```python
import streamlit_langgraph as slg

# Load from YAML
agents = slg.AgentManager.load_from_yaml("config/agents.yaml")

# Or create manager and add agents
manager = slg.AgentManager()
manager.add_agent(my_agent)
agent = manager.agents["analyst"]  # Access via agents dictionary
```

Description: Configuration for Streamlit UI customization.
Constructor Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `title` | `str` | Required | Application title shown in browser tab and header |
| `page_icon` | `str` | `"🤖"` | Favicon emoji or path to image file |
| `page_layout` | `str` | `"wide"` | Page layout mode: "wide" or "centered" |
| `stream` | `bool` | `True` | Enable streaming responses |
| `enable_file_upload` | `bool` or `str` | `"multiple"` | File upload configuration: False, True, "multiple", or "directory" |
| `show_sidebar` | `bool` | `True` | Show default sidebar (set False for custom) |
| `user_avatar` | `str` | `"👤"` | Avatar for user messages (emoji or image path) |
| `assistant_avatar` | `str` | `"🤖"` | Avatar for assistant messages (emoji or image path) |
| `placeholder` | `str` | `"Type your message here..."` | Placeholder text for chat input |
| `welcome_message` | `str` | `None` | Welcome message shown at start (supports Markdown) |
| `file_callback` | `Callable[[str], str \| tuple]` | `None` | Optional callback to preprocess files before upload. Can return a single file path or a tuple `(main_file_path, additional_files)` where additional_files can be a directory path or list of file paths |
Example:
```python
import streamlit_langgraph as slg

config = slg.UIConfig(
    title="My AI Team",
    page_icon="🚀",
    welcome_message="Welcome to **My AI Team**!",
    user_avatar="👨‍💼",
    assistant_avatar="🤖",
    stream=True,
    enable_file_upload="multiple",  # Allow multiple file uploads
    file_callback=None,             # Optional: preprocess files before upload
    show_sidebar=True,
    placeholder="Ask me anything..."
)
```

Description: Main interface for running chat applications with single or multiple agents.
Constructor Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `workflow` | `StateGraph` | `None` | Compiled LangGraph workflow (for multi-agent) |
| `agents` | `List[Agent]` | Required | List of agents in the application |
| `config` | `UIConfig` | `UIConfig()` | UI configuration |
| `custom_tools` | `List[CustomTool]` | `None` | List of custom tools to register |
Methods:
| Method | Parameters | Returns | Description |
|---|---|---|---|
| `run()` | None | `None` | Start the Streamlit chat interface |
Example:
```python
import streamlit as st
import streamlit_langgraph as slg

# Single agent
if "chat" not in st.session_state:
    st.session_state.chat = slg.LangGraphChat(
        agents=[assistant],
        config=config
    )
st.session_state.chat.run()

# Multi-agent with workflow
if "chat" not in st.session_state:
    st.session_state.chat = slg.LangGraphChat(
        workflow=compiled_workflow,
        agents=all_agents,
        config=config
    )
st.session_state.chat.run()
```

Description: Builder for creating multi-agent workflows with different patterns.
Methods:
`create_supervisor_workflow()`

Creates a supervisor pattern where one agent coordinates multiple workers.
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `supervisor` | `Agent` | Required | Supervisor agent that coordinates |
| `workers` | `List[Agent]` | Required | Worker agents to be coordinated |
| `execution_mode` | `str` | `"sequential"` | "sequential" or "parallel" |
| `delegation_mode` | `str` | `"handoff"` | "handoff" or "tool_calling" |
| `checkpointer` | `Any` | `None` | Optional checkpointer for workflow state persistence |
Returns: StateGraph - Compiled workflow
Example:
```python
import streamlit_langgraph as slg

builder = slg.WorkflowBuilder()
workflow = builder.create_supervisor_workflow(
    supervisor=supervisor_agent,
    workers=[worker1, worker2, worker3],
    execution_mode="sequential",  # or "parallel"
    delegation_mode="handoff"     # or "tool_calling"
)
```

`create_hierarchical_workflow()`

Creates a hierarchical pattern with a top supervisor managing sub-supervisor teams.
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `top_supervisor` | `Agent` | Required | Top-level supervisor |
| `supervisor_teams` | `List[SupervisorTeam]` | Required | List of sub-supervisor teams |
| `execution_mode` | `str` | `"sequential"` | Currently only "sequential" supported |
| `checkpointer` | `Any` | `None` | Optional checkpointer for workflow state persistence |
Returns: StateGraph - Compiled workflow
Example:
```python
import streamlit_langgraph as slg

# Create teams
research_team = slg.WorkflowBuilder.SupervisorTeam(
    supervisor=research_lead,
    workers=[researcher1, researcher2],
    team_name="research_team"
)
content_team = slg.WorkflowBuilder.SupervisorTeam(
    supervisor=content_lead,
    workers=[writer, editor],
    team_name="content_team"
)

# Create hierarchical workflow
builder = slg.WorkflowBuilder()
workflow = builder.create_hierarchical_workflow(
    top_supervisor=project_manager,
    supervisor_teams=[research_team, content_team],
    execution_mode="sequential"
)
```

Description: Dataclass representing a sub-supervisor and their team for hierarchical workflows.
Constructor Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `supervisor` | `Agent` | Required | Sub-supervisor agent |
| `workers` | `List[Agent]` | Required | Worker agents in this team |
| `team_name` | `str` | Auto-generated | Team identifier |
Example:
```python
import streamlit_langgraph as slg

team = slg.WorkflowBuilder.SupervisorTeam(
    supervisor=team_lead_agent,
    workers=[worker1, worker2, worker3],
    team_name="engineering_team"
)
```

`create_network_workflow()`

Creates a network pattern where agents can communicate peer-to-peer in a mesh topology.
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `agents` | `List[Agent]` | Required | List of peer agents. First agent is the entry point |
| `checkpointer` | `Any` | `None` | Optional checkpointer for workflow state persistence |
Returns: StateGraph - Compiled workflow
Example:
```python
import streamlit_langgraph as slg

# Create network of peer agents
agents = [
    tech_strategist,
    business_analyst,
    risk_strategist,
    delivery_lead
]

# Create network workflow
builder = slg.WorkflowBuilder()
workflow = builder.create_network_workflow(agents=agents)
```

Note: In network workflows, any agent can hand off to any other agent. There is no central supervisor - all agents are peers.
Description: Registry for custom tools that agents can use.
Class Methods:
`register_tool()`

Register a custom function as a tool available to agents.
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | Required | Unique tool name |
| `description` | `str` | Required | Description shown to LLM |
| `function` | `Callable` | Required | Python function to execute |
| `parameters` | `Dict` | Auto-extracted | Tool parameters schema (extracted from function signature if not provided) |
| `return_direct` | `bool` | `False` | Return tool output directly to user |
Returns: CustomTool instance
Example:
```python
import streamlit_langgraph as slg

def calculate_sum(a: float, b: float) -> str:
    """
    Add two numbers together.

    Args:
        a: First number
        b: Second number

    Returns:
        The sum as a string
    """
    return str(a + b)

slg.CustomTool.register_tool(
    name="calculate_sum",
    description="Add two numbers and return the sum",
    function=calculate_sum
)

# Use in agent
agent = slg.Agent(
    name="calculator",
    role="Calculator",
    instructions="Use calculate_sum to add numbers",
    tools=["calculate_sum"]
)
```

`tool()` decorator

Decorator for registering functions as tools.
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | Required | Unique tool name |
| `description` | `str` | Required | Description shown to LLM |
| `**kwargs` | `Any` | - | Additional parameters (e.g., return_direct, parameters) |
Returns: Decorator function
Example:
```python
import streamlit_langgraph as slg

@slg.CustomTool.tool("calculator", "Performs basic arithmetic")
def calculate(expression: str) -> float:
    """Evaluate a mathematical expression."""
    # Note: eval() executes arbitrary code; restrict or sandbox inputs in production
    return eval(expression)

# Use in agent
agent = slg.Agent(
    name="calculator",
    role="Calculator",
    instructions="Use calculator to evaluate expressions",
    tools=["calculator"]
)
```

MIT License - see LICENSE file for details.
Status: Alpha | Python: 3.10+ | LangGraph: 1.0.1
For issues and feature requests, please open an issue on GitHub.