diff --git a/.gitignore b/.gitignore
index 81c38f9..6fca6c9 100644
--- a/.gitignore
+++ b/.gitignore
@@ -190,6 +190,9 @@ examples/visualizations/
# telegram
groups.txt
+# Agent wallet data
+data/
+
# VS Code
.vscode/
.idea/
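The new `data/` entry above keeps per-agent wallet state out of version control. The `wallet_manager` module itself is not shown in this diff, so the following is a hypothetical sketch of the path resolution it implies: a default under the ignored `data/` directory, overridable via the `wallet_data_dir` parameter that `AIAgent` accepts. The `{agent_id}_wallet.json` naming is illustrative, not confirmed by the source.

```python
from pathlib import Path
from typing import Optional, Union


def resolve_wallet_dir(wallet_data_dir: Optional[Union[str, Path]] = None) -> Path:
    """Hypothetical helper: default to the git-ignored 'data/' directory,
    or honor a custom wallet_data_dir override (str or Path)."""
    return Path(wallet_data_dir) if wallet_data_dir is not None else Path("data")


def wallet_file_for(agent_id: str, wallet_data_dir: Optional[Union[str, Path]] = None) -> Path:
    # One file per agent keeps wallet persistence simple to reason about.
    # The actual layout used by agentconnect.utils.wallet_manager may differ.
    return resolve_wallet_dir(wallet_data_dir) / f"{agent_id}_wallet.json"
```

Whatever the real layout is, keeping it entirely under one ignored directory is what makes the single `.gitignore` line sufficient.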
diff --git a/CHANGELOG.md b/CHANGELOG.md
index e6b48c8..49ba443 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -8,16 +8,48 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Added
+- Agent payment capabilities using Coinbase Developer Platform (CDP) SDK and AgentKit.
+- Wallet persistence for agents (`agentconnect.utils.wallet_manager`).
+- CDP environment validation and payment readiness checks (`agentconnect.utils.payment_helper`).
+- Payment-related dependencies (`cdp-sdk`, `coinbase-agentkit`, `coinbase-agentkit-langchain`) to `pyproject.toml`.
+- Payment address (`payment_address`) field to `AgentMetadata` and `AgentRegistration`.
+- Payment capability template (`PAYMENT_CAPABILITY_TEMPLATE`) for agent prompts.
+- Autonomous workflow demo (`examples/autonomous_workflow`) showcasing inter-agent payments.
+- Standalone chat method in `AIAgent` for using agents without registering with a hub.
+- Response callbacks in `HumanAgent` for integration with external systems.
+- Asynchronous collaboration request status checking tool for handling delayed agent responses.
+- Enhanced reasoning step visualization in `ToolTracerCallbackHandler` for all LLM providers.
+- Comprehensive end-to-end developer guides in documentation.
+- Expanded model support for OpenAI and Google AI.
### Changed
+- `BaseAgent` and `AIAgent` to initialize wallet provider and AgentKit conditionally based on `enable_payments` flag.
+- `AIAgent` workflow initialization to include AgentKit tools when payments are enabled.
+- Agent prompts (`CORE_DECISION_LOGIC`, ReAct prompts) updated to include payment instructions and context.
+- `CommunicationHub` registration updated to include `payment_address`.
+- `ToolTracerCallbackHandler` enhanced to provide specific tracing for payment tool actions.
+- Enhanced `HumanAgent` to function as an independent agent in the network with `run()` method.
+- `BaseAgent` extended with a `stop()` method for proper cleanup and resource management.
+- `CommunicationHub` improved to handle late responses and request timeouts gracefully.
+- Simplified and enhanced prompt templates for more customizable agents.
+- Updated all examples to use the `stop()` method for better cleanup.
+- Refactored postprocessing logic in `agent_prompts.py` for better error handling.
### Deprecated
-
-### Removed
+- `examples/run_example.py` in favor of the official CLI tool (`agentconnect` command).
### Fixed
+- Suppressed unnecessary warnings in the capability discovery module.
+- Improved error handling in agent communication.
+- Fixed reasoning-step extraction for different LLM providers (OpenAI, Anthropic, Google).
+- Addressed collaboration request timeouts with a new polling mechanism.
### Security
+- Enhanced validation for inter-agent messages.
+- Implemented secure API key management.
+- Added rate limiting for API calls.
+- Set up environment variable handling.
+- Added input validation for all API endpoints.
## [0.2.0] - 2025-04-01
diff --git a/README.md b/README.md
index 2635dff..49bbda6 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,9 @@
-*Build and connect independent AI agents that discover, interact, and collaborate securely.*
+*A Decentralized Framework for Autonomous Agent Collaboration*
+
+**Build and connect independent AI agents that discover, interact, and collaborate securely.**
[](https://github.com/AKKI0511/AgentConnect/actions/workflows/main.yml)
[](https://github.com/AKKI0511/AgentConnect/actions/workflows/docs.yml)
@@ -13,33 +15,38 @@
[](https://python-poetry.org/)
[](https://opensource.org/licenses/Apache-2.0)
-[Installation](#-installation) •
+[Installation](#quick-start) •
[Documentation](https://AKKI0511.github.io/AgentConnect/) •
-[Examples](examples/README.md) •
+[Examples](#examples) •
[Contributing](CONTRIBUTING.md)
## 📖 Overview
-AgentConnect is a revolutionary framework for building and connecting *independent* AI agents. Unlike traditional multi-agent systems that operate within a single, centrally controlled environment, AgentConnect enables the creation of a *decentralized network* of autonomous agents. These agents can:
+**AgentConnect provides a framework for building decentralized networks of truly autonomous AI agents, enabling the next generation of collaborative AI.**
-* **Operate Independently:** Each agent is a self-contained system, potentially with its own internal multi-agent structure (using LangGraph, custom logic, or any other approach).
-* **Discover Each Other Dynamically:** Agents discover each other based on *capabilities*, not pre-defined connections. This allows for a flexible and adaptable network.
-* **Communicate Securely:** Built-in message signing, verification, and communication protocols ensure secure interactions.
-* **Collaborate on Complex Tasks:** Agents can request services from each other, exchange data, and work together to achieve goals.
-* **Scale Horizontally:** The framework is designed to support thousands of independent agents, each with its own internal complexity.
+Move beyond traditional, centrally controlled systems and embrace an ecosystem where independent agents can:
-AgentConnect empowers developers to create a truly decentralized ecosystem of AI agents, opening up possibilities for complex, collaborative AI applications that were previously impossible.
+* **Discover Peers on Demand:** Locate partners via **capability broadcasts** instead of hard-wired endpoints.
+* **Interact Securely (A2A):** Leverage built-in cryptographic verification for **trustworthy Agent-to-Agent** communication.
+* **Execute Complex Workflows:** Request services, exchange value, and achieve goals collectively.
+* **Operate Autonomously:** Each agent hosts its own logic; no central brain required.
+* **Scale Limitlessly:** Support thousands of agents interacting seamlessly.
### Why AgentConnect?
-* **Beyond Hierarchies:** Break free from the limitations of traditional, centrally controlled multi-agent systems.
-* **True Agent Autonomy:** Build agents that are truly independent and can interact with any other agent in the network.
-* **Dynamic and Flexible:** The network adapts as agents join, leave, and update their capabilities.
-* **Secure by Design:** Cryptographic message verification and standardized protocols ensure secure interactions.
-* **Unprecedented Scalability:** Designed to scale to thousands of interacting agents.
-* **Extensible and Customizable:** Easily integrate custom agents, capabilities, and communication protocols.
+AgentConnect delivers unique advantages over classic multi-agent approaches:
+
+* **Decentralized Architecture:** No central router, no single point of failure.
+* **First-class agent autonomy:** Agents negotiate, cooperate, and evolve independently.
+* **Interconnect Agent Systems:** Operates above internal frameworks, linking entire agent swarms.
+* **Living ecosystem:** The network fluidly adapts as agents join, leave, or evolve their skills.
+* **Secure A2A Communication:** Crypto-grade identity & message signing baked in.
+* **Horizontal scalability:** Engineered for planet-scale agent populations.
+* **Plug-and-play extensibility:** Easily integrate custom agents, capabilities, and protocols.
+* **Integrated Agent Economy:** Seamless A2A payments powered by **Coinbase CDP & AgentKit**.
+
## ✨ Key Features
@@ -48,52 +55,77 @@ AgentConnect empowers developers to create a truly decentralized ecosystem of AI
🤖 Dynamic Agent Discovery
- - Capability-based matching
- - Flexible and adaptable network
- - No pre-defined connections
+ - Capability-based lookup
+ - Decentralized Registry
+ - Zero static links
|
- ⚡ Decentralized Communication
+ ⚡ A2A Communication
- - Secure message routing
- - No central control
- - Reliable message delivery
+ - Direct Agent-to-Agent Messaging
+ - Cryptographic signatures
+ - No routing bottlenecks
|
- ⚙️ Autonomous Agents
+ ⚙️ True Agent Autonomy
+
+ - Independent Operation & Logic
+ - Self-Managed Lifecycles
+ - Unrestricted Collaboration
+
+ |
+
+
+
+ 🔒 Trust Layer
+
+ - Verifiable identities
+ - Tamper-proof messages
+ - Standard Security Protocols
+
+ |
+
+ 💰 Built-in Agent Economy
+
+ - Autonomous A2A Payments
+ - Coinbase CDP Integration
+ - Instant service settlement
+
+ |
+
+ 🔌 Multi-LLM Support
- - Independent operation
- - Own processing loop
- - Potentially complex internal structure
+ - OpenAI, Anthropic, Groq, Google
+ - Flexible AI Core Choice
+ - Vendor-Agnostic Intelligence
|
- 🔒 Secure Communication
+ 📊 Deep Observability
- - Message signing and verification
- - Standardized protocols
- - Ensures secure interactions
+ - LangSmith tracing
+ - Monitor tools & payments
+ - Custom Callbacks
|
- 🔌 Multi-Provider Support
+ 🌐 Dynamic Capability Advertising
- - OpenAI
- - Anthropic
- - Groq
- - Google AI
+ - Agent Skill Broadcasting
+ - Market-Driven Discovery
+ - On-the-Fly Collaboration
|
- 📊 Monitoring (LangSmith)
+ 🔗 Native Blockchain Integration
- - Comprehensive tracing
- - Debugging capabilities
- - Performance analysis
+ - Coinbase AgentKit Ready
+ - On-Chain Value Exchange
+ - Configurable networks
|
@@ -114,6 +146,15 @@ copy example.env .env # Windows
cp example.env .env # Linux/Mac
```
+Configure environment variables in your `.env` file:
+```
+# Required for AI providers (at least one)
+OPENAI_API_KEY=your_openai_api_key
+# Optional for payment capabilities
+CDP_API_KEY_NAME=your_cdp_api_key_name
+CDP_API_KEY_PRIVATE_KEY=your_cdp_api_key_private_key
+```
+
For detailed installation instructions and configuration options, see the [QuickStart Guide](docs/source/quickstart.md) and [Installation Guide](docs/source/installation.md).
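The two CDP variables above are what `agentconnect.utils.payment_helper.validate_cdp_environment` checks before payments are enabled. As a standalone sketch of that readiness check — a hypothetical mirror; the real helper may validate more than variable presence — returning the `(is_valid, message)` tuple that `AIAgent.__init__` consumes:

```python
import os

# Hypothetical mirror of agentconnect.utils.payment_helper.validate_cdp_environment.
# The real helper may perform deeper checks (key format, network configuration).
REQUIRED_CDP_VARS = ("CDP_API_KEY_NAME", "CDP_API_KEY_PRIVATE_KEY")


def validate_cdp_environment() -> "tuple[bool, str]":
    missing = [name for name in REQUIRED_CDP_VARS if not os.environ.get(name)]
    if missing:
        return False, f"Missing CDP environment variables: {', '.join(missing)}"
    return True, "CDP environment configured"
```

When this check fails, `AIAgent` logs a warning and downgrades `enable_payments` to `False` rather than raising, so a missing key never prevents the agent from starting.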
## 🎮 Usage
@@ -132,6 +173,7 @@ AgentConnect includes several example applications to demonstrate different feat
- **Research Assistant**: Task delegation and information retrieval
- **Data Analysis**: Specialized data processing
- **Telegram Assistant**: Telegram AI agent with multi-agent collaboration
+- **Agent Economy**: Autonomous workflow with automatic cryptocurrency payments between agents
For code examples and detailed descriptions, see the [Examples Directory](examples/README.md).
@@ -180,6 +222,7 @@ AgentConnect integrates with LangSmith for comprehensive monitoring:
* View detailed traces of agent interactions
* Debug complex reasoning chains
* Analyze token usage and performance
+ * Track payment tool calls from AgentKit integration
## 🛠️ Development
@@ -225,8 +268,9 @@ AgentConnect/
- ✅ **MVP with basic agent-to-agent interactions**
- ✅ **Autonomous communication between agents**
- ✅ **Capability-based agent discovery**
-- ⬜ **Coinbase AgentKit Payment Integration**
+- ✅ **Coinbase AgentKit Payment Integration**
- ⬜ **Agent Identity & Reputation System**
+- ⬜ **Asynchronous Agent Collaboration System**
- ⬜ **Marketplace-Style Agent Discovery**
- ⬜ **MCP Integration**
- ⬜ **Structured Parameters SDK**
@@ -250,8 +294,6 @@ See the [Changelog](CHANGELOG.md) for a detailed history of changes to the proje
## 🙏 Acknowledgments
-- Built with [FastAPI](https://fastapi.tiangolo.com/), [LangChain](https://www.langchain.com/), and
-[React](https://reactjs.org/)
- Inspired by the need for independent autonomous multi-agent collaboration with dynamic agent discovery
- Thanks to all contributors who have helped shape this project
diff --git a/agentconnect/agents/README.md b/agentconnect/agents/README.md
index f68827d..b4b9074 100644
--- a/agentconnect/agents/README.md
+++ b/agentconnect/agents/README.md
@@ -28,9 +28,20 @@ The `AIAgent` class is an autonomous, independent AI implementation that can ope
- Rate limiting and cooldown mechanisms
- Workflow-based processing that can include its own internal agent system
- Tool integration for enhanced capabilities
+- Optional payment capabilities for agent-to-agent transactions
Each AI agent can operate completely independently, potentially with its own internal multi-agent structure, while still being able to discover and communicate with other independent agents across the network.
+#### Payment Integration
+
+When created with `enable_payments=True`, the `AIAgent` integrates payment capabilities:
+
+- **Wallet Setup**: Triggers wallet initialization in `BaseAgent.__init__`
+- **AgentKit Tools**: Payment tools (e.g., `native_transfer`, `erc20_transfer`) are automatically added to the agent's workflow in `AIAgent._initialize_workflow`
+- **LLM Decision Making**: The agent's LLM decides when to use payment tools based on prompt instructions in templates like `CORE_DECISION_LOGIC` and `PAYMENT_CAPABILITY_TEMPLATE`
+- **Network Support**: Defaults to the Base Sepolia testnet; configurable for other networks
+- **Transaction Verification**: Built-in transaction verification and confirmation
+
### HumanAgent
The `HumanAgent` class provides an interface for human users to interact with AI agents. It offers:
@@ -243,3 +254,4 @@ if not message.verify(agent.identity):
5. **Resource Management**: Be mindful of resource usage when creating multiple independent AI agents.
6. **Secure Communication**: Always verify message signatures to maintain security in the decentralized network.
7. **Autonomous Operation**: Design agents that can make independent decisions without central control.
+8. **Secure CDP Keys**: When using `enable_payments=True`, ensure CDP API keys are handled securely and never exposed.
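The fail-soft behavior behind point 8 can be seen in `AIAgent.__init__`: when payments are requested but the CDP environment is incomplete, the agent logs a warning and continues with payments disabled instead of raising. A simplified, self-contained sketch of that pattern (`resolve_payment_flag` is an illustrative name, not an AgentConnect API):

```python
import logging

logger = logging.getLogger("agentconnect.sketch")


def resolve_payment_flag(agent_id, enable_payments, validator):
    """Fail-soft resolution of the enable_payments flag, mirroring AIAgent.__init__.

    `validator` is any callable returning (is_valid, message), such as
    agentconnect.utils.payment_helper.validate_cdp_environment.
    """
    if not enable_payments:
        return False
    is_valid, message = validator()
    if not is_valid:
        # Downgrade instead of raising: the agent still starts, just without payments.
        logger.warning(
            "Payments requested for %s but validation failed: %s", agent_id, message
        )
        return False
    logger.info("CDP environment validation passed for %s: %s", agent_id, message)
    return True
```

This design choice keeps a misconfigured deployment functional, at the cost of payment failures surfacing only in the logs.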
diff --git a/agentconnect/agents/ai_agent.py b/agentconnect/agents/ai_agent.py
index 882edb5..37aa8c6 100644
--- a/agentconnect/agents/ai_agent.py
+++ b/agentconnect/agents/ai_agent.py
@@ -13,16 +13,19 @@
import logging
from datetime import datetime
from enum import Enum
-from typing import List, Optional
+from typing import List, Optional, Union, Dict, Any
+from pathlib import Path
# Third-party imports
from langchain_core.messages import HumanMessage
from langchain_core.runnables import Runnable
+from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.tools import BaseTool
# Absolute imports from agentconnect package
from agentconnect.core.agent import BaseAgent
from agentconnect.core.message import Message
+from agentconnect.core.payment_constants import POC_PAYMENT_TOKEN_SYMBOL
from agentconnect.core.types import (
AgentIdentity,
AgentType,
@@ -43,6 +46,7 @@
InteractionState,
TokenConfig,
)
+from agentconnect.utils.payment_helper import validate_cdp_environment
# Set up logging
logger = logging.getLogger(__name__)
@@ -95,9 +99,13 @@ def __init__(
memory_type: MemoryType = MemoryType.BUFFER,
prompt_tools: Optional[PromptTools] = None,
prompt_templates: Optional[PromptTemplates] = None,
- # Custom tools parameter
custom_tools: Optional[List[BaseTool]] = None,
agent_type: str = "ai",
+ enable_payments: bool = False,
+ verbose: bool = False,
+ wallet_data_dir: Optional[Union[str, Path]] = None,
+ external_callbacks: Optional[List[BaseCallbackHandler]] = None,
+ model_config: Optional[Dict[str, Any]] = None,
):
"""Initialize the AI agent.
@@ -120,7 +128,32 @@ def __init__(
prompt_templates: Optional prompt templates for the agent
custom_tools: Optional list of custom LangChain tools for the agent
agent_type: Type of agent workflow to create
+ enable_payments: Whether to enable payment capabilities
+ verbose: Whether to enable verbose logging
+ wallet_data_dir: Optional custom directory for wallet data storage
+ external_callbacks: Optional list of external callback handlers to include
+ model_config: Optional dict of default model parameters (e.g., temperature, max_tokens)
"""
+ # Validate CDP environment if payments are requested
+ actual_enable_payments = enable_payments
+ if enable_payments:
+ is_valid, message = validate_cdp_environment()
+ if not is_valid:
+ logger.warning(
+ f"Payment capabilities requested for agent {agent_id} but environment validation failed: {message}"
+ )
+ logger.warning(
+ f"Payment capabilities will be disabled for agent {agent_id}"
+ )
+ actual_enable_payments = False
+ else:
+ logger.info(
+ f"CDP environment validation passed for agent {agent_id}: {message}"
+ )
+
+ # Store the model config before initializing LLM
+ self.model_config = model_config or {}
+
# Initialize base agent
super().__init__(
agent_id=agent_id,
@@ -129,6 +162,8 @@ def __init__(
capabilities=capabilities or [],
organization_id=organization_id,
interaction_modes=interaction_modes,
+ enable_payments=actual_enable_payments,
+ wallet_data_dir=wallet_data_dir,
)
# Store agent-specific attributes
@@ -141,15 +176,16 @@ def __init__(
self.is_ui_mode = is_ui_mode
self.memory_type = memory_type
self.workflow_agent_type = agent_type
-
- # Store the custom tools list if provided
+ self.verbose = verbose
self.custom_tools = custom_tools or []
-
- # Store the prompt_tools instance if provided
self._prompt_tools = prompt_tools
-
- # Create a new PromptTemplates instance for this agent
+ self.external_callbacks = external_callbacks or []
self.prompt_templates = prompt_templates or PromptTemplates()
+ self.workflow = None
+
+ # Initialize hub and registry references
+ self._hub = None
+ self._registry = None
# Initialize token tracking and rate limiting
token_config = TokenConfig(
@@ -158,27 +194,17 @@ def __init__(
)
self.interaction_control = InteractionControl(
- token_config=token_config, max_turns=max_turns
+ agent_id=self.agent_id, token_config=token_config, max_turns=max_turns
)
-
- # Set cooldown callback to update agent's cooldown state
self.interaction_control.set_cooldown_callback(self.set_cooldown)
# Initialize the LLM
self.llm = self._initialize_llm()
logger.debug(f"Initialized LLM for AI Agent {self.agent_id}: {self.llm}")
-
- # Initialize the workflow to None - will be set when registry and hub are available
- self.workflow = None
-
- # Initialize the conversation chain (kept for consistency)
- self.conversation_chain = None
-
logger.info(
f"AI Agent {self.agent_id} initialized with {len(self.capabilities)} capabilities"
)
- # Property setter for hub that initializes workflow when both hub and registry are set
@property
def hub(self):
"""Get the hub property."""
@@ -190,7 +216,6 @@ def hub(self, value):
self._hub = value
self._initialize_workflow_if_ready()
- # Property setter for registry that initializes workflow when both hub and registry are set
@property
def registry(self):
"""Get the registry property."""
@@ -202,6 +227,16 @@ def registry(self, value):
self._registry = value
self._initialize_workflow_if_ready()
+ @property
+ def prompt_tools(self):
+ """Get the prompt_tools property."""
+ return self._prompt_tools
+
+ @prompt_tools.setter
+ def prompt_tools(self, value):
+ """Set the prompt_tools property."""
+ self._prompt_tools = value
+
def _initialize_workflow_if_ready(self):
"""Initialize the workflow if both registry and hub are set."""
if (
@@ -209,73 +244,147 @@ def _initialize_workflow_if_ready(self):
and self._hub is not None
and hasattr(self, "_registry")
and self._registry is not None
+ and self.workflow is None
):
- if self.workflow is None:
- logger.debug(
- f"AI Agent {self.agent_id}: Registry and hub are set, initializing workflow"
- )
- self.workflow = self._initialize_workflow()
- logger.debug(f"AI Agent {self.agent_id}: Workflow initialized")
+ logger.debug(
+ f"AI Agent {self.agent_id}: Registry and hub are set, initializing workflow"
+ )
+ self.workflow = self._initialize_workflow()
+ logger.debug(f"AI Agent {self.agent_id}: Workflow initialized")
+
+ def _initialize_llm(self):
+ """Initialize the LLM based on the provider type and model name."""
+ from agentconnect.providers import ProviderFactory
+
+ provider = ProviderFactory.create_provider(self.provider_type, self.api_key)
+ logger.debug(f"AI Agent {self.agent_id}: LLM provider created: {provider}")
+ return provider.get_langchain_llm(
+        model_name=self.model_name, **(self.model_config or {})
+ )
def _initialize_workflow(self) -> Runnable:
"""Initialize the workflow for the agent."""
+ # Determine if we're in standalone mode
+ is_standalone = (
+ not hasattr(self, "_registry")
+ or self._registry is None
+ or not hasattr(self, "_hub")
+ or self._hub is None
+ )
- # Create a new PromptTools instance for this agent if not provided
+ # Create a PromptTools instance if not already provided
if self._prompt_tools is None:
self._prompt_tools = PromptTools(
- agent_registry=self.registry, communication_hub=self.hub, llm=self.llm
+ agent_registry=self._registry, communication_hub=self._hub, llm=self.llm
+ )
+ logger.debug(
+ f"AI Agent {self.agent_id}: Created {'standalone' if is_standalone else 'connected'} PromptTools instance."
)
- logger.debug(f"AI Agent {self.agent_id}: Created new PromptTools instance.")
# Set the current agent context for the tools
self._prompt_tools.set_current_agent(self.agent_id)
- logger.debug(f"AI Agent {self.agent_id}: Current agent context set in tools.")
- # Get the tools from PromptTools
- tools = self._prompt_tools
- logger.debug(f"AI Agent {self.agent_id}: Tools initialized or provided.")
+ # Create system config if not already created
+ if not hasattr(self, "system_config"):
+ # Add standalone mode note to system config if in standalone mode
+ additional_context = {}
+ if is_standalone:
+ additional_context["standalone_mode"] = (
+ "You are operating in standalone mode without connections to other agents. "
+ "Focus on using your internal capabilities to help the user directly. "
+ "If collaboration would normally be useful, explain why it's not available "
+ "and offer the best alternative solutions you can provide on your own."
+ )
- # Create prompt templates if not provided
- prompt_templates = self.prompt_templates or PromptTemplates()
- logger.debug(
- f"AI Agent {self.agent_id}: Prompt templates initialized or provided."
- )
+ self.system_config = SystemPromptConfig(
+ name=self.name,
+ capabilities=self.capabilities,
+ personality=self.personality,
+ additional_context=additional_context,
+ )
- # Create system config - Pass the full Capability objects
- self.system_config = SystemPromptConfig(
- name=self.name,
- capabilities=self.capabilities, # Pass full Capability objects
- personality=self.personality,
- )
- logger.debug(
- f"AI Agent {self.agent_id}: System config created with capabilities: {self.capabilities}"
- )
+ # Initialize custom tools list
+ custom_tools_list = list(self.custom_tools) if self.custom_tools else []
+
+ # Add payment tools if enabled
+ if self.enable_payments and self.agent_kit is not None:
+ try:
+ from coinbase_agentkit_langchain import get_langchain_tools
+
+ agentkit_tools = get_langchain_tools(self.agent_kit)
+ custom_tools_list.extend(agentkit_tools)
+
+ tool_names = [tool.name for tool in agentkit_tools]
+ logger.info(
+ f"AI Agent {self.agent_id}: Added {len(agentkit_tools)} AgentKit payment tools: {tool_names}"
+ )
+ payment_tool = (
+ "native_transfer"
+ if POC_PAYMENT_TOKEN_SYMBOL == "ETH"
+ else "erc20_transfer"
+ )
+ logger.info(
+ f"AI Agent {self.agent_id}: Will use {payment_tool} for payments with {POC_PAYMENT_TOKEN_SYMBOL} token"
+ )
+
+ # Enable payment capabilities in the system prompt config
+ self.system_config.enable_payments = True
+ self.system_config.payment_token_symbol = POC_PAYMENT_TOKEN_SYMBOL
+ logger.info(
+ f"AI Agent {self.agent_id}: Enabled payment capabilities in system prompt"
+ )
+ except ImportError as e:
+ logger.warning(
+ f"AI Agent {self.agent_id}: Could not import AgentKit LangChain tools: {e}"
+ )
+ logger.warning(
+ "To use payment capabilities, install with: pip install coinbase-agentkit-langchain"
+ )
+ except Exception as e:
+ logger.error(
+ f"AI Agent {self.agent_id}: Error initializing AgentKit tools: {e}"
+ )
- # Create and compile the workflow with business logic info
+ # Create the workflow with all components
workflow = create_workflow_for_agent(
agent_type=self.workflow_agent_type,
system_config=self.system_config,
llm=self.llm,
- tools=tools,
- prompt_templates=prompt_templates,
+ tools=self._prompt_tools,
+ prompt_templates=self.prompt_templates,
agent_id=self.agent_id,
- custom_tools=self.custom_tools, # Pass the custom tools list
- )
- logger.debug(
- f"AI Agent {self.agent_id}: Workflow created with {len(self.custom_tools)} custom tools."
+ custom_tools=custom_tools_list,
+ verbose=self.verbose,
)
- compiled_workflow = workflow.compile()
- logger.debug(f"AI Agent {self.agent_id}: Workflow compiled.")
- return compiled_workflow
+ return workflow.compile()
- def _initialize_llm(self):
- """Initialize the LLM based on the provider type and model name."""
- from agentconnect.providers import ProviderFactory
+ def _create_error_response(
+ self,
+ message: Message,
+ error_msg: str,
+ error_type: str,
+ is_collaboration_request: bool = False,
+ ) -> Message:
+ """Create a standardized error response message."""
+ message_type = (
+ MessageType.COLLABORATION_RESPONSE
+ if is_collaboration_request
+ else MessageType.ERROR
+ )
- provider = ProviderFactory.create_provider(self.provider_type, self.api_key)
- logger.debug(f"AI Agent {self.agent_id}: LLM provider created: {provider}")
- return provider.get_langchain_llm(model_name=self.model_name)
+ metadata = {"error_type": error_type}
+ if is_collaboration_request:
+ metadata["original_message_type"] = "ERROR"
+
+ return Message.create(
+ sender_id=self.agent_id,
+ receiver_id=message.sender_id,
+ content=error_msg,
+ sender_identity=self.identity,
+ message_type=message_type,
+ metadata=metadata,
+ )
async def process_message(self, message: Message) -> Optional[Message]:
"""
@@ -293,7 +402,7 @@ async def process_message(self, message: Message) -> Optional[Message]:
to generate appropriate responses and handle complex tasks that may require collaboration
with other independent agents in the decentralized network.
"""
- # Check if this is a collaboration request before calling super().process_message
+ # Check if this is a collaboration request
is_collaboration_request = (
message.message_type == MessageType.REQUEST_COLLABORATION
)
@@ -307,7 +416,7 @@ async def process_message(self, message: Message) -> Optional[Message]:
return response
try:
- # Initialize workflow if it wasn't initialized in the constructor
+ # Initialize workflow if needed
if self.workflow is None:
if (
hasattr(self, "_hub")
@@ -323,31 +432,11 @@ async def process_message(self, message: Message) -> Optional[Message]:
logger.error(
f"AI Agent {self.agent_id}: Cannot initialize workflow, registry or hub not set"
)
-
- error_msg = "I'm sorry, I'm not fully initialized yet. Please try again later."
- error_type = "initialization_error"
-
- # Use COLLABORATION_RESPONSE for collaboration requests
- message_type = (
- MessageType.COLLABORATION_RESPONSE
- if is_collaboration_request
- else MessageType.ERROR
- )
-
- return Message.create(
- sender_id=self.agent_id,
- receiver_id=message.sender_id,
- content=error_msg,
- sender_identity=self.identity,
- message_type=message_type,
- metadata={
- "error_type": error_type,
- **(
- {"original_message_type": "ERROR"}
- if is_collaboration_request
- else {}
- ),
- },
+ return self._create_error_response(
+ message,
+ "I'm sorry, I'm not fully initialized yet. Please try again later.",
+ "initialization_error",
+ is_collaboration_request,
)
# If workflow is still None, return an error
@@ -355,33 +444,11 @@ async def process_message(self, message: Message) -> Optional[Message]:
logger.error(
f"AI Agent {self.agent_id}: Cannot process message, workflow not initialized"
)
-
- error_msg = (
- "I'm sorry, I'm not fully initialized yet. Please try again later."
- )
- error_type = "initialization_error"
-
- # Use COLLABORATION_RESPONSE for collaboration requests
- message_type = (
- MessageType.COLLABORATION_RESPONSE
- if is_collaboration_request
- else MessageType.ERROR
- )
-
- return Message.create(
- sender_id=self.agent_id,
- receiver_id=message.sender_id,
- content=error_msg,
- sender_identity=self.identity,
- message_type=message_type,
- metadata={
- "error_type": error_type,
- **(
- {"original_message_type": "ERROR"}
- if is_collaboration_request
- else {}
- ),
- },
+ return self._create_error_response(
+ message,
+ "I'm sorry, I'm not fully initialized yet. Please try again later.",
+ "initialization_error",
+ is_collaboration_request,
)
# Check if this is an error message that needs special handling
@@ -418,26 +485,13 @@ async def process_message(self, message: Message) -> Optional[Message]:
metadata={"handled_error": error_type},
)
- # Special handling for collaboration requests
- # This ensures responses are properly correlated with the original request
-
# Get the conversation ID for this sender
conversation_id = self._get_conversation_id(message.sender_id)
- # Get the callback manager from interaction_control
- # Only include our rate limiting callback, not a tracer
- callbacks = self.interaction_control.get_callback_manager()
-
- # Set up the configuration with the thread ID for memory persistence and callbacks
- # Use the thread_id for LangGraph memory persistence
- config = {
- "configurable": {
- "thread_id": conversation_id,
- # Add a run name for better LangSmith organization
- "run_name": f"Agent {self.agent_id} - {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
- },
- "callbacks": callbacks,
- }
+ # Setup callbacks - combine rate limiting callbacks with any external ones
+ callbacks = self.interaction_control.get_callback_handlers()
+ if self.external_callbacks:
+ callbacks.extend(self.external_callbacks)
# Ensure the prompt_tools has the correct agent_id set
if (
@@ -445,75 +499,88 @@ async def process_message(self, message: Message) -> Optional[Message]:
and self._prompt_tools._current_agent_id != self.agent_id
):
self._prompt_tools.set_current_agent(self.agent_id)
- logger.debug(
- f"AI Agent {self.agent_id}: Reset current agent ID in tools before workflow invocation"
- )
- # Create the initial state for the workflow
+ # Add context prefix based on sender/message type
+ sender_type = (
+ "Human" if message.sender_id.startswith("human") else "AI Agent"
+ )
+ is_collab_request = (
+ message.message_type == MessageType.REQUEST_COLLABORATION
+ )
+ is_collab_response = "response_to" in (message.metadata or {})
+
+ context_prefix = ""
+ if sender_type == "AI Agent":
+ if is_collab_request:
+ context_prefix = f"[Incoming Collaboration Request from AI Agent {message.sender_id}]:\n"
+ elif is_collab_response:
+ context_prefix = f"[Incoming Response from Collaborating AI Agent {message.sender_id}]:\n"
+ else:
+ context_prefix = (
+ f"[Incoming Message from AI Agent {message.sender_id}]:\n"
+ )
+
+ workflow_input_content = f"{context_prefix}{message.content}"
+
+ # Create the initial state and config for the workflow
initial_state = {
- "messages": [HumanMessage(content=message.content)],
+ "messages": [HumanMessage(content=workflow_input_content)],
"sender": message.sender_id,
"receiver": self.agent_id,
"message_type": message.message_type,
"metadata": message.metadata or {},
- "max_retries": 2, # Set a maximum number of retries for collaboration
- "retry_count": 0, # Initialize retry count
+ "max_retries": 2,
+ "retry_count": 0,
}
+
+ config = {
+ "configurable": {
+ "thread_id": conversation_id,
+ "run_name": f"Agent {self.agent_id} - {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+ },
+ "callbacks": callbacks,
+ }
+
logger.debug(
f"AI Agent {self.agent_id} invoking workflow with conversation ID: {conversation_id}"
)
- # Use the provided runnable with a timeout
+ # Invoke the workflow with a timeout
try:
- # Invoke the workflow with a timeout and callbacks
- response = await asyncio.wait_for(
+ response_state = await asyncio.wait_for(
self.workflow.ainvoke(initial_state, config),
- timeout=180.0, # 3 minute timeout for workflow execution
+ timeout=180.0, # 3 minute timeout
)
logger.debug(f"AI Agent {self.agent_id} workflow invocation complete.")
except asyncio.TimeoutError:
logger.error(f"AI Agent {self.agent_id} workflow execution timed out")
-
- # Create a timeout response
- timeout_message = "I'm sorry, but this request is taking too long to process. Please try again with a simpler request or break it down into smaller parts."
-
- # Use COLLABORATION_RESPONSE for collaboration requests
- message_type = (
- MessageType.COLLABORATION_RESPONSE
- if is_collaboration_request
- else MessageType.ERROR
+ return self._create_error_response(
+ message,
+ "I'm sorry, but this request is taking too long to process. Please try again with a simpler request or break it down into smaller parts.",
+ "workflow_timeout",
+ is_collaboration_request,
)
- return Message.create(
- sender_id=self.agent_id,
- receiver_id=message.sender_id,
- content=timeout_message,
- sender_identity=self.identity,
- message_type=message_type,
- metadata={
- "error_type": "workflow_timeout",
- **(
- {"original_message_type": "ERROR"}
- if is_collaboration_request
- else {}
- ),
- },
+ # Extract the last message from the workflow response state
+ if "messages" not in response_state or not response_state["messages"]:
+ logger.error(
+ f"AI Agent {self.agent_id}: Workflow returned empty or invalid messages state."
+ )
+ return self._create_error_response(
+ message,
+ "Internal error: Could not retrieve response.",
+ "empty_workflow_response",
+ is_collaboration_request,
)
- # Extract the last message from the workflow response
- last_message = response["messages"][-1]
- logger.debug(
- f"AI Agent {self.agent_id} extracted last message from workflow response."
- )
+ last_message = response_state["messages"][-1]
# Token counting and rate limiting
total_tokens = 0
if hasattr(last_message, "usage_metadata") and last_message.usage_metadata:
total_tokens = last_message.usage_metadata.get("total_tokens", 0)
- logger.debug(f"AI Agent {self.agent_id} token count: {total_tokens}")
- # Update token count after response - this will automatically trigger cooldown if needed
- # through the callback we set earlier
+ # Update token count and handle rate limiting
state = await self.interaction_control.process_interaction(
token_count=total_tokens, conversation_id=conversation_id
)
@@ -523,73 +590,44 @@ async def process_message(self, message: Message) -> Optional[Message]:
logger.info(
f"AI Agent {self.agent_id} reached maximum turns with {message.sender_id}. Ending conversation."
)
- # End the conversation
self.end_conversation(message.sender_id)
last_message.content = f"{last_message.content}\n\nWe've reached the maximum number of turns for this conversation. If you need further assistance, please start a new conversation."
- elif state == InteractionState.WAIT:
- logger.info(
- f"AI Agent {self.agent_id} is in cooldown state with {message.sender_id}."
- )
- # We don't need to create a special message here as the cooldown callback
- # will have already set the agent's cooldown state, which will be handled
- # by the BaseAgent.process_message method on the next interaction
-
# Update conversation tracking
+ current_time = datetime.now()
if message.sender_id in self.active_conversations:
self.active_conversations[message.sender_id]["message_count"] += 1
self.active_conversations[message.sender_id][
"last_message_time"
- ] = datetime.now()
- logger.debug(
- f"AI Agent {self.agent_id} updated active conversation with {message.sender_id}."
- )
+ ] = current_time
else:
self.active_conversations[message.sender_id] = {
"message_count": 1,
- "last_message_time": datetime.now(),
+ "last_message_time": current_time,
}
- logger.debug(
- f"AI Agent {self.agent_id} created new active conversation with {message.sender_id}."
- )
# Determine the appropriate message type for the response
- # Always use COLLABORATION_RESPONSE for collaboration requests
response_message_type = (
MessageType.COLLABORATION_RESPONSE
if is_collaboration_request
else MessageType.RESPONSE
)
- if is_collaboration_request:
- logger.info(
- f"AI Agent {self.agent_id} sending collaboration response to {message.sender_id}"
- )
# Create response metadata
- response_metadata = {
- "token_count": total_tokens,
- }
+ response_metadata = {"token_count": total_tokens}
# Add response_to if this is a response to a request with an ID
if message.metadata and "request_id" in message.metadata:
response_metadata["response_to"] = message.metadata["request_id"]
- logger.debug(
- f"AI Agent {self.agent_id} adding response correlation: {message.metadata['request_id']}"
- )
elif (
hasattr(self, "pending_requests")
and message.sender_id in self.pending_requests
):
- # If we don't have a request_id in the message metadata, but we have one stored in pending_requests,
- # use that one instead
request_id = self.pending_requests[message.sender_id].get("request_id")
if request_id:
response_metadata["response_to"] = request_id
- logger.debug(
- f"AI Agent {self.agent_id} adding response correlation from pending_requests: {request_id}"
- )
- # Create the response message
+ # Create and return the response message
response_message = Message.create(
sender_id=self.agent_id,
receiver_id=message.sender_id,
@@ -607,57 +645,22 @@ async def process_message(self, message: Message) -> Optional[Message]:
logger.exception(
f"AI Agent {self.agent_id} error processing message: {str(e)}"
)
-
- # Create an error response
- # Use COLLABORATION_RESPONSE for collaboration requests
- message_type = (
- MessageType.COLLABORATION_RESPONSE
- if is_collaboration_request
- else MessageType.ERROR
+ return self._create_error_response(
+ message,
+ f"I encountered an unexpected error while processing your request: {str(e)}\n\nPlease try again with a different approach.",
+ "processing_error",
+ is_collaboration_request,
)
- return Message.create(
- sender_id=self.agent_id,
- receiver_id=message.sender_id,
- content=f"I encountered an unexpected error while processing your request: {str(e)}\n\nPlease try again with a different approach.",
- sender_identity=self.identity,
- message_type=message_type,
- metadata={
- "error_type": "processing_error",
- **(
- {"original_message_type": "ERROR"}
- if is_collaboration_request
- else {}
- ),
- },
- )
-
- # Property for prompt_tools to ensure consistent access
- @property
- def prompt_tools(self):
- """Get the prompt_tools property."""
- return self._prompt_tools
-
- @prompt_tools.setter
- def prompt_tools(self, value):
- """Set the prompt_tools property."""
- self._prompt_tools = value
-
def set_cooldown(self, duration: int) -> None:
- """Set a cooldown period for the agent.
-
- Args:
- duration: Cooldown duration in seconds
- """
+ """Set a cooldown period for the agent."""
# Call the parent class method to set the cooldown
super().set_cooldown(duration)
-
- # Log detailed information about the cooldown
logger.warning(
f"AI Agent {self.agent_id} entered cooldown for {duration} seconds due to rate limiting."
)
- # If this is a UI agent, we might want to send a notification to the UI
+ # UI notification if in UI mode
if self.is_ui_mode:
# TODO: This would be implemented by a UI notification system
logger.info(
@@ -665,9 +668,8 @@ def set_cooldown(self, duration: int) -> None:
)
def reset_interaction_state(self) -> None:
- """Reset the interaction state of the agent.
-
- This resets both the cooldown state and the turn counter.
+ """
+ Reset the interaction state of the agent. This resets both the cooldown state and the turn counter.
"""
# Reset the cooldown state
self.reset_cooldown()
@@ -690,3 +692,172 @@ def reset_interaction_state(self) -> None:
logger.info(
f"Conversation {conv_id}: {conv_stats['total_tokens']} tokens, {conv_stats['turn_count']} turns"
)
+
+ async def chat(
+ self,
+ query: str,
+ conversation_id: str = "standalone_chat",
+ metadata: Optional[Dict] = None,
+ ) -> str:
+ """
+ Allows direct interaction with the agent without needing a CommunicationHub or AgentRegistry.
+
+ This method is useful for testing or using a single agent instance directly.
+ It simulates a user query and returns the agent's response, maintaining
+ conversation history based on the conversation_id if memory is configured.
+
+ Args:
+ query: The user's input/query to the agent.
+ conversation_id: An identifier for the conversation thread. Defaults to "standalone_chat".
+ Use different IDs to maintain separate conversation histories.
+ metadata: Optional metadata to pass to the workflow.
+
+ Returns:
+ The agent's response as a string.
+
+ Raises:
+ RuntimeError: If the workflow cannot be initialized or fails unexpectedly.
+ asyncio.TimeoutError: If the workflow execution times out.
+ """
+ logger.info(
+ f"AI Agent {self.agent_id} received direct chat query: {query[:50]}..."
+ )
+
+ # Initialize workflow if not already done
+ if self.workflow is None:
+ try:
+ # Ensure hub and registry attributes exist (as None) for standalone mode
+ if not hasattr(self, "_registry"):
+ self._registry = None
+ if not hasattr(self, "_hub"):
+ self._hub = None
+
+ # Create PromptTools if needed for standalone mode
+ if not hasattr(self, "_prompt_tools") or self._prompt_tools is None:
+ self._prompt_tools = PromptTools(
+ agent_registry=None,
+ communication_hub=None,
+ llm=self._initialize_llm(),
+ )
+ logger.info(
+ f"AI Agent {self.agent_id}: Created standalone PromptTools instance."
+ )
+
+ # Set current agent context
+ self._prompt_tools.set_current_agent(self.agent_id)
+
+ # Create standalone system config
+ self.system_config = SystemPromptConfig(
+ name=self.name,
+ capabilities=self.capabilities,
+ personality=self.personality,
+ additional_context={
+ "standalone_mode": (
+ "You are operating in standalone mode without connections to other agents. "
+ "Focus on using your internal capabilities to help the user directly. "
+ "If collaboration would normally be useful, explain why it's not available "
+ "and offer the best alternative solutions you can provide on your own."
+ )
+ },
+ )
+
+ # Initialize workflow
+ self.workflow = self._initialize_workflow()
+ if self.workflow is None:
+ raise RuntimeError("Workflow initialization failed.")
+
+ logger.info(
+ f"AI Agent {self.agent_id}: Workflow initialized for standalone chat."
+ )
+ except Exception as e:
+ logger.exception(
+ f"AI Agent {self.agent_id}: Failed to initialize workflow for chat: {e}"
+ )
+ raise RuntimeError(f"Failed to initialize agent workflow: {e}") from e
+
+ # Set up workflow input and configuration
+ initial_state = {
+ "messages": [HumanMessage(content=query)],
+ "sender": "user_standalone",
+ "receiver": self.agent_id,
+ "message_type": MessageType.TEXT,
+ "metadata": metadata or {},
+ "max_retries": 0,
+ "retry_count": 0,
+ }
+
+ # Prepare callbacks
+ callbacks = self.interaction_control.get_callback_handlers()
+ if self.external_callbacks:
+ callbacks.extend(self.external_callbacks)
+
+ # Create workflow configuration
+ config = {
+ "configurable": {
+ "thread_id": conversation_id,
+ "run_name": f"Agent {self.agent_id} - Standalone Chat - {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
+ },
+ "callbacks": callbacks,
+ }
+
+ # Ensure prompt_tools has current agent context
+ if (
+ hasattr(self, "_prompt_tools")
+ and self._prompt_tools
+ and self._prompt_tools._current_agent_id != self.agent_id
+ ):
+ self._prompt_tools.set_current_agent(self.agent_id)
+
+ # Invoke workflow
+ try:
+ logger.debug(
+ f"AI Agent {self.agent_id} invoking workflow for chat with conversation ID: {conversation_id}"
+ )
+ response_state = await asyncio.wait_for(
+ self.workflow.ainvoke(initial_state, config),
+ timeout=180.0,
+ )
+ logger.debug(
+ f"AI Agent {self.agent_id}: Chat workflow invocation complete."
+ )
+ except asyncio.TimeoutError:
+ logger.error(
+ f"AI Agent {self.agent_id}: Chat workflow execution timed out."
+ )
+ raise
+ except Exception as e:
+ logger.exception(
+ f"AI Agent {self.agent_id}: Error during chat workflow invocation: {e}"
+ )
+ raise RuntimeError(f"Agent workflow failed during chat: {e}") from e
+
+ # Extract response
+ if "messages" not in response_state or not response_state["messages"]:
+ logger.error(
+ f"AI Agent {self.agent_id}: Chat workflow returned empty or invalid messages state."
+ )
+ raise RuntimeError("Agent workflow returned no response message.")
+
+ # Get response content
+ last_message = response_state["messages"][-1]
+ if hasattr(last_message, "content"):
+ response_content = last_message.content
+ else:
+ logger.error(
+ f"AI Agent {self.agent_id}: Last message in chat response has no content: {last_message}"
+ )
+ raise RuntimeError("Agent workflow returned unexpected message format.")
+
+ # Handle token tracking and rate limiting
+ total_tokens = 0
+ if hasattr(last_message, "usage_metadata") and last_message.usage_metadata:
+ total_tokens = last_message.usage_metadata.get("total_tokens", 0)
+
+ await self.interaction_control.process_interaction(
+ token_count=total_tokens, conversation_id=conversation_id
+ )
+
+ logger.info(
+ f"AI Agent {self.agent_id} generated chat response: {response_content[:50]}..."
+ )
+ return response_content
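The sender-aware context prefixing added at the top of `process_message` can be condensed into a small helper for illustration (a sketch only; the in-tree code inlines this logic, and the helper name is hypothetical):

```python
def build_context_prefix(
    sender_id: str, is_collab_request: bool, is_collab_response: bool
) -> str:
    """Label incoming workflow input by sender type, mirroring process_message."""
    if sender_id.startswith("human"):
        return ""  # human messages are passed through unlabeled
    if is_collab_request:
        return f"[Incoming Collaboration Request from AI Agent {sender_id}]:\n"
    if is_collab_response:
        return f"[Incoming Response from Collaborating AI Agent {sender_id}]:\n"
    return f"[Incoming Message from AI Agent {sender_id}]:\n"

# The prefix is prepended to message.content before it becomes the
# HumanMessage fed into the workflow.
print(build_context_prefix("agent_2", True, False))
```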
diff --git a/agentconnect/agents/human_agent.py b/agentconnect/agents/human_agent.py
index 218b543..72f7fdf 100644
--- a/agentconnect/agents/human_agent.py
+++ b/agentconnect/agents/human_agent.py
@@ -8,7 +8,7 @@
# Standard library imports
import asyncio
import logging
-from typing import Optional
+from typing import Optional, Callable, List, Dict, Any
# Third-party imports
import aioconsole
@@ -47,6 +47,7 @@ def __init__(
name: str,
identity: AgentIdentity,
organization_id: Optional[str] = None,
+ response_callbacks: Optional[List[Callable]] = None,
):
"""Initialize the human agent.
@@ -55,6 +56,7 @@ def __init__(
name: Human-readable name for the agent
identity: Identity information for the agent
organization_id: ID of the organization the agent belongs to
+ response_callbacks: Optional list of callbacks invoked when the human responds
"""
# Create Capability objects for human capabilities
capabilities = [
@@ -82,6 +84,8 @@ def __init__(
)
self.name = name
self.is_active = True
+ self.response_callbacks = response_callbacks or []
+ self.last_response_data = {}
logger.info(f"Human Agent {self.agent_id} initialized.")
def _initialize_llm(self):
@@ -118,6 +122,11 @@ async def start_interaction(self, target_agent: BaseAgent) -> None:
)
return
+ print(
+ f"{Fore.GREEN}Human Agent {self.agent_id} starting interaction with {target_agent.agent_id}{Style.RESET_ALL}"
+ )
+ print(f"{Fore.GREEN}Exit with 'exit', 'quit', or 'bye'{Style.RESET_ALL}")
+ print(f"{Fore.GREEN}Loading...{Style.RESET_ALL}")
while self.is_active:
try:
# Get user input
@@ -169,6 +178,7 @@ async def start_interaction(self, target_agent: BaseAgent) -> None:
print(
f"{Fore.RED}❌ Error: {response.content}{Style.RESET_ALL}"
)
+ print("-" * 40)
logger.error(
f"Human Agent {self.agent_id} received error message: {response.content[:50]}..."
)
@@ -191,8 +201,10 @@ async def start_interaction(self, target_agent: BaseAgent) -> None:
f"Human Agent {self.agent_id} received processing status message: {response.content[:50]}..."
)
else:
- print(f"\n{Fore.CYAN}{target_agent.name}:{Style.RESET_ALL}")
+ print("-" * 40)
+ print(f"{Fore.CYAN}{target_agent.name}:{Style.RESET_ALL}")
print(f"{response.content}")
+ print("-" * 40)
logger.info(
f"Human Agent {self.agent_id} received and displayed response: {response.content[:50]}..."
)
@@ -257,8 +269,105 @@ async def process_message(self, message: Message) -> Optional[Message]:
# Display received message
print(f"\n{Fore.CYAN}{message.sender_id}:{Style.RESET_ALL}")
print(f"{message.content}")
+ print("-" * 40)
logger.info(
f"Human Agent {self.agent_id} displayed received message from {message.sender_id}."
)
- self.message_queue.task_done()
- return None
+
+ # Prompt for and get user response
+ print(
+ f"{Fore.YELLOW}Type your response or use these commands:{Style.RESET_ALL}"
+ )
+ print(
+ f"{Fore.YELLOW}- 'exit', 'quit', or 'bye' to end the conversation{Style.RESET_ALL}"
+ )
+ print(
+ f"{Fore.YELLOW}- Press Enter without typing to skip responding{Style.RESET_ALL}"
+ )
+ user_input = await aioconsole.ainput(f"\n{Fore.GREEN}You: {Style.RESET_ALL}")
+
+ # Check for exit commands
+ if user_input.lower().strip() in ["exit", "quit", "bye"]:
+ logger.info(
+ f"Human Agent {self.agent_id} ending conversation with {message.sender_id}"
+ )
+ print(
+ f"{Fore.YELLOW}Ending conversation with {message.sender_id}{Style.RESET_ALL}"
+ )
+
+ # Send an exit message to the AI
+ return Message.create(
+ sender_id=self.agent_id,
+ receiver_id=message.sender_id,
+ content="__EXIT__",
+ sender_identity=self.identity,
+ message_type=MessageType.STOP,
+ metadata={"reason": "user_exit"},
+ )
+
+ # Log the user input
+ if user_input.strip():
+ logger.info(
+ f"Human Agent {self.agent_id} received user input: {user_input[:50]}..."
+ )
+
+ # Send response back to the sender
+ logger.info(
+ f"Human Agent {self.agent_id} sending response to {message.sender_id}: {user_input[:50]}..."
+ )
+ return Message.create(
+ sender_id=self.agent_id,
+ receiver_id=message.sender_id,
+ content=user_input,
+ sender_identity=self.identity,
+ message_type=MessageType.TEXT,
+ )
+ else:
+ # If the user didn't enter any text, log it but don't send a response
+ logger.info(
+ f"Human Agent {self.agent_id} skipped responding to {message.sender_id}"
+ )
+ print(f"{Fore.YELLOW}No response sent.{Style.RESET_ALL}")
+ return None
+
+ async def send_message(
+ self,
+ receiver_id: str,
+ content: str,
+ message_type: MessageType = MessageType.TEXT,
+ metadata: Optional[Dict[str, Any]] = None,
+ ) -> Message:
+ """Override send_message to track human responses and notify callbacks."""
+ # Call the original method in the parent class
+ message = await super().send_message(
+ receiver_id, content, message_type, metadata
+ )
+
+ # Store information about this response
+ self.last_response_data = {
+ "receiver_id": receiver_id,
+ "content": content,
+ "message_type": message_type,
+ "timestamp": asyncio.get_running_loop().time(),
+ }
+
+ # Notify any registered callbacks
+ for callback in self.response_callbacks:
+ try:
+ callback(self.last_response_data)
+ except Exception as e:
+ logger.error(f"Error in response callback: {str(e)}")
+
+ return message
+
+ def add_response_callback(self, callback: Callable) -> None:
+ """Add a callback to be notified when the human sends a response"""
+ if callback not in self.response_callbacks:
+ self.response_callbacks.append(callback)
+ logger.debug(f"Human Agent {self.agent_id}: Added response callback")
+
+ def remove_response_callback(self, callback: Callable) -> None:
+ """Remove a previously registered callback"""
+ if callback in self.response_callbacks:
+ self.response_callbacks.remove(callback)
+ logger.debug(f"Human Agent {self.agent_id}: Removed response callback")
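The callback plumbing added to `HumanAgent` (`add_response_callback`, `remove_response_callback`, and the notification loop in `send_message`) follows a simple observer pattern. A self-contained sketch with a stand-in class, not the real agent:

```python
from typing import Any, Callable, Dict, List


class ResponseNotifier:
    """Stand-in for HumanAgent's callback plumbing (add/remove/notify)."""

    def __init__(self) -> None:
        self.response_callbacks: List[Callable[[Dict[str, Any]], None]] = []

    def add_response_callback(self, callback: Callable) -> None:
        if callback not in self.response_callbacks:
            self.response_callbacks.append(callback)

    def remove_response_callback(self, callback: Callable) -> None:
        if callback in self.response_callbacks:
            self.response_callbacks.remove(callback)

    def notify(self, data: Dict[str, Any]) -> None:
        for callback in self.response_callbacks:
            try:
                callback(data)
            except Exception:
                pass  # one failing callback must not block the rest


seen: List[Dict[str, Any]] = []
notifier = ResponseNotifier()
notifier.add_response_callback(seen.append)
notifier.notify({"content": "hi"})
print(seen)  # [{'content': 'hi'}]
```

As in the agent, callbacks receive a plain dict of response data, so external systems can integrate without subclassing.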
diff --git a/agentconnect/agents/telegram/message_processor.py b/agentconnect/agents/telegram/message_processor.py
index b7b57c3..bbc9993 100644
--- a/agentconnect/agents/telegram/message_processor.py
+++ b/agentconnect/agents/telegram/message_processor.py
@@ -365,7 +365,7 @@ async def process_agent_response(
"configurable": {
"thread_id": conversation_id,
},
- "callbacks": interaction_control.get_callback_manager(),
+ "callbacks": interaction_control.get_callback_handlers(),
}
logger.debug(
diff --git a/agentconnect/agents/telegram/telegram_agent.py b/agentconnect/agents/telegram/telegram_agent.py
index 66aae4d..e3b096e 100644
--- a/agentconnect/agents/telegram/telegram_agent.py
+++ b/agentconnect/agents/telegram/telegram_agent.py
@@ -13,6 +13,7 @@
from dotenv import load_dotenv
from aiogram import types
from langchain.tools import BaseTool
+from langchain_core.callbacks import BaseCallbackHandler
from agentconnect.agents.ai_agent import AIAgent
from agentconnect.agents.telegram.bot_manager import TelegramBotManager
@@ -131,6 +132,10 @@ def __init__(
max_tokens_per_minute: int = 5500,
max_tokens_per_hour: int = 100000,
telegram_token: Optional[str] = None,
+ enable_payments: bool = False,
+ verbose: bool = False,
+ wallet_data_dir: Optional[str] = None,
+ external_callbacks: Optional[List[BaseCallbackHandler]] = None,
):
"""
Initialize a Telegram AI Agent.
@@ -150,6 +155,10 @@ def __init__(
max_tokens_per_minute: Rate limiting for token usage per minute
max_tokens_per_hour: Rate limiting for token usage per hour
telegram_token: Telegram Bot API token (can also use TELEGRAM_BOT_TOKEN env var)
+ enable_payments: Whether to enable agent payment capabilities
+ verbose: Whether to enable verbose logging
+ wallet_data_dir: Optional directory for persisting wallet data
+ external_callbacks: Optional list of external LangChain callback handlers
"""
# Define Telegram-specific capabilities
telegram_capabilities = [
@@ -249,6 +258,10 @@ def __init__(
max_tokens_per_minute=max_tokens_per_minute,
max_tokens_per_hour=max_tokens_per_hour,
custom_tools=self._get_custom_tools(), # Pass the custom tools to AIAgent
+ enable_payments=enable_payments,
+ verbose=verbose,
+ wallet_data_dir=wallet_data_dir,
+ external_callbacks=external_callbacks,
)
def _initialize_telegram_components(self):
diff --git a/agentconnect/cli.py b/agentconnect/cli.py
index 213ea7c..742e5a0 100644
--- a/agentconnect/cli.py
+++ b/agentconnect/cli.py
@@ -15,16 +15,18 @@
agentconnect --example research
agentconnect --example data
agentconnect --example telegram
+ agentconnect --example agent_economy
agentconnect --demo # UI compatibility under development (Windows only)
agentconnect --check-env
agentconnect --help
Available examples:
- chat - Simple chat with an AI assistant
- multi - Multi-agent e-commerce analysis
- research - Research assistant with multiple agents
- data - Data analysis and visualization assistant
- telegram - Modular multi-agent system with Telegram integration
+ chat - Simple chat with an AI assistant
+ multi - Multi-agent e-commerce analysis
+ research - Research assistant with multiple agents
+ data - Data analysis and visualization assistant
+ telegram - Modular multi-agent system with Telegram integration
+ agent_economy - Autonomous workflow with agent payments system
Note: The demo UI is currently under development and only supported on Windows.
For the best experience, please use the examples instead.
@@ -85,11 +87,12 @@ def parse_args(args: Optional[List[str]] = None) -> argparse.Namespace:
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Available examples:
- chat - Simple chat with an AI assistant
- multi - Multi-agent e-commerce analysis
- research - Research assistant with multiple agents
- data - Data analysis and visualization assistant
- telegram - Modular multi-agent system with Telegram integration
+ chat - Simple chat with an AI assistant
+ multi - Multi-agent e-commerce analysis
+ research - Research assistant with multiple agents
+ data - Data analysis and visualization assistant
+ telegram - Modular multi-agent system with Telegram integration
+ agent_economy - Autonomous workflow with agent payments system
Examples:
agentconnect --example chat
@@ -105,8 +108,16 @@ def parse_args(args: Optional[List[str]] = None) -> argparse.Namespace:
parser.add_argument(
"--example",
"-e",
- choices=["chat", "multi", "research", "data", "telegram"],
- help="Run a specific example: chat (simple AI assistant), multi (multi-agent ecommerce analysis), research (research assistant), data (data analysis assistant), or telegram (modular multi-agent system with Telegram integration)",
+ choices=[
+ "chat",
+ "multi",
+ "research",
+ "data",
+ "telegram",
+ "workflow",
+ "agent_economy",
+ ],
+ help="Run a specific example: chat (simple AI assistant), multi (multi-agent ecommerce analysis), research (research assistant), data (data analysis assistant), telegram (modular multi-agent system with Telegram integration), agent_economy (autonomous workflow with payments), or workflow (legacy name, same as agent_economy)",
)
parser.add_argument(
@@ -153,6 +164,27 @@ async def run_example(example_name: str, verbose: bool = False) -> None:
"The example will run, but research capabilities will be limited"
)
+ # Check for workflow dependencies
+ if example_name in ["workflow", "agent_economy"]:
+ try:
+ from langchain_community.tools.tavily_search import ( # noqa: F401
+ TavilySearchResults, # noqa: F401
+ )
+ from langchain_community.tools.requests.tool import ( # noqa: F401
+ RequestsGetTool, # noqa: F401
+ )
+ from langchain_community.utilities import TextRequestsWrapper # noqa: F401
+ from colorama import init, Fore, Style # noqa: F401
+ except ImportError:
+ logger.warning("Dependencies are missing for the agent economy demo")
+ logger.info("To install the required dependencies:")
+ logger.info(" poetry install --with demo")
+ logger.info(
+ " or: pip install langchain-community colorama tavily-python python-dotenv"
+ )
+ logger.info("Please install the missing dependencies and try again")
+ sys.exit(1)
+
try:
if example_name == "chat":
from examples import run_chat_example
@@ -174,6 +206,12 @@ async def run_example(example_name: str, verbose: bool = False) -> None:
from examples import run_telegram_assistant
await run_telegram_assistant(enable_logging=verbose)
+ elif example_name in ["workflow", "agent_economy"]:
+ from examples.autonomous_workflow.run_workflow_demo import (
+ main as run_workflow_demo,
+ )
+
+ await run_workflow_demo(enable_logging=verbose)
else:
logger.error(f"Unknown example: {example_name}")
except ImportError as e:
diff --git a/agentconnect/communication/hub.py b/agentconnect/communication/hub.py
index cd06d15..c07f53d 100644
--- a/agentconnect/communication/hub.py
+++ b/agentconnect/communication/hub.py
@@ -60,6 +60,8 @@ def __init__(self, registry: AgentRegistry):
self._global_handlers: List[Callable[[Message], Awaitable[None]]] = []
# Store pending requests as {request_id: Future}
self.pending_responses: Dict[str, Future] = {}
+ # Store late responses as {request_id: Message}
+ self.late_responses: Dict[str, Message] = {}
def add_message_handler(
self, agent_id: str, handler: Callable[[Message], Awaitable[None]]
@@ -237,6 +239,7 @@ async def register_agent(self, agent: BaseAgent) -> bool:
capabilities=agent.capabilities, # Use the Capability objects directly
identity=agent.identity,
owner_id=agent.metadata.organization_id,
+ payment_address=agent.metadata.payment_address,
metadata=agent.metadata.metadata,
)
@@ -379,6 +382,11 @@ async def route_message(self, message: Message) -> bool:
logger.warning(
f"Received late response for timed out request {request_id}"
)
+ # Store the late response for potential retrieval
+ self.late_responses[request_id] = message
+ logger.info(
+ f"Stored late response for request {request_id} for potential future retrieval"
+ )
# Even though the request timed out, we still want to record the message
# and notify handlers, but we won't set the result on the future
else:
@@ -789,6 +797,11 @@ async def send_collaboration_request(
if "request_id" not in metadata:
metadata["request_id"] = request_id
+ # Estimate a timeout from the task description length:
+ # a 60-second base plus 15 seconds per full 100 characters, capped at 5 minutes
+ estimated_timeout = min(60 + (len(task_description) // 100) * 15, 300)
+ effective_timeout = kwargs.get("timeout", estimated_timeout)
+
# Send the request and wait for response
logger.debug(
f"Sending collaboration request with request_id: {metadata['request_id']}"
@@ -799,7 +812,7 @@ async def send_collaboration_request(
content=task_description,
message_type=MessageType.REQUEST_COLLABORATION,
metadata=metadata,
- timeout=max(timeout, 60),
+ timeout=effective_timeout,
)
# Log the response status for debugging
@@ -817,10 +830,15 @@ async def send_collaboration_request(
return response.content
else:
logger.warning(
- f"No response received from {receiver_id} within {timeout} seconds for request_id {metadata['request_id']}"
+ f"No response received from {receiver_id} within {effective_timeout} seconds for request_id {metadata['request_id']}"
)
+
+ # More helpful error message that provides the request ID for later checking
return (
- f"No response received from {receiver_id} within {timeout} seconds"
+ f"No immediate response received from {receiver_id} within {effective_timeout} seconds. "
+ f"The request may still be processing (ID: {metadata['request_id']}). "
+ f"If a response arrives later, it will be stored under this request ID for retrieval. "
+ f"You can continue with other tasks and check back later."
)
except Exception as e:
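The adaptive timeout heuristic in `send_collaboration_request` can be checked in isolation. A sketch reproducing the formula from the hunk above (the standalone function name is illustrative):

```python
def estimate_collaboration_timeout(task_description: str) -> int:
    """60s base plus 15s per full 100 characters, capped at 300s (5 minutes)."""
    return min(60 + (len(task_description) // 100) * 15, 300)


print(estimate_collaboration_timeout("summarize this"))  # 60
print(estimate_collaboration_timeout("x" * 450))         # 120
print(estimate_collaboration_timeout("x" * 5000))        # 300
```

A caller-supplied `timeout` still takes precedence; the estimate is only the fallback default.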
diff --git a/agentconnect/core/README.md b/agentconnect/core/README.md
index deb70dd..52772eb 100644
--- a/agentconnect/core/README.md
+++ b/agentconnect/core/README.md
@@ -51,6 +51,7 @@ The central registry for agent discovery and management:
- **Agent Lifecycle Management**: Track agent status and handle registration/unregistration
- **Organization Management**: Group agents by organization
- **Vector Search Integration**: Coordinate with the capability discovery service
+- **Payment Address Storage**: Store and provide agent payment addresses during discovery
Key methods:
- `register()`: Register an agent with the registry
@@ -96,6 +97,7 @@ Data structure for agent registration information:
- **Capabilities**: List of agent capabilities
- **Identity Information**: Agent identity credentials
- **Organization Details**: Information about the agent's organization
+- **Payment Address**: Optional cryptocurrency address for agent-to-agent payments
### Message (`message.py`)
@@ -123,6 +125,18 @@ The `types.py` file defines core types used throughout the framework:
- **AgentIdentity**: Decentralized identity for agents
- **MessageType**: Types of messages that can be exchanged
- **ProtocolVersion**: Supported protocol versions
+- **AgentMetadata**: Agent information including optional payment address
+
+### Payment Integration
+
+The core module integrates with the Coinbase Developer Platform (CDP) for payment capabilities:
+
+- **BaseAgent Wallet Setup**: `BaseAgent.__init__` conditionally initializes agent wallets when `enable_payments=True`
+- **Payment Address Storage**: `payment_address` field in `AgentMetadata` and `AgentRegistration`
+- **Payment Constants**: Default token symbol and amounts defined in `payment_constants.py`
+- **Capability Discovery**: Payment addresses are included in agent search results
+
+For details on how agents use payment capabilities, see `agentconnect/agents/README.md`.
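The conditional initialization described above can be sketched with stand-in classes; the real `CdpWalletProvider`/`AgentKit` wiring lives in `BaseAgent.__init__`, and the `Fake*` names and demo address below are illustrative only:

```python
from typing import Optional


class FakeWalletProvider:
    """Stand-in for CdpWalletProvider; not the real CDP SDK."""

    def __init__(self, wallet_data: Optional[str] = None) -> None:
        self.wallet_data = wallet_data

    def get_address(self) -> str:
        return "0xDEMO"  # placeholder address


class PaymentAwareAgent:
    """Sketch of the enable_payments flag pattern in BaseAgent."""

    def __init__(self, agent_id: str, enable_payments: bool = False) -> None:
        self.agent_id = agent_id
        self.enable_payments = enable_payments
        self.wallet_provider: Optional[FakeWalletProvider] = None
        self.payment_address: Optional[str] = None
        # A wallet is only created when payments are explicitly enabled
        if enable_payments:
            self.wallet_provider = FakeWalletProvider()
            self.payment_address = self.wallet_provider.get_address()


plain = PaymentAwareAgent("a1")
paid = PaymentAwareAgent("a2", enable_payments=True)
print(plain.payment_address, paid.payment_address)  # None 0xDEMO
```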
### Exceptions (`exceptions.py`)
diff --git a/agentconnect/core/agent.py b/agentconnect/core/agent.py
index 0947215..74f856f 100644
--- a/agentconnect/core/agent.py
+++ b/agentconnect/core/agent.py
@@ -11,12 +11,26 @@
# Standard library imports
from abc import ABC, abstractmethod
-from typing import TYPE_CHECKING, Any, Dict, List, Optional
-
-from agentconnect.core.exceptions import SecurityError
+from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
+from pathlib import Path
+from dotenv import load_dotenv
+
+# Import wallet and payment dependencies
+from coinbase_agentkit import (
+ AgentKit,
+ AgentKitConfig,
+ CdpWalletProvider,
+ CdpWalletProviderConfig,
+ wallet_action_provider,
+ erc20_action_provider,
+ cdp_api_action_provider,
+)
# Absolute imports from agentconnect package
+from agentconnect.utils import wallet_manager
+from agentconnect.core.exceptions import SecurityError
from agentconnect.core.message import Message
+from agentconnect.core.payment_constants import POC_PAYMENT_TOKEN_SYMBOL
from agentconnect.core.types import (
AgentIdentity,
AgentMetadata,
@@ -56,6 +70,9 @@ class BaseAgent(ABC):
active_conversations: Dictionary of active conversations
cooldown_until: Timestamp when cooldown ends
pending_requests: Dictionary of pending requests
+ enable_payments: Whether payment capabilities are enabled
+ wallet_provider: Wallet provider for blockchain transactions
+ agent_kit: AgentKit instance for blockchain actions
"""
def __init__(
@@ -66,6 +83,8 @@ def __init__(
interaction_modes: List[InteractionMode],
capabilities: List[Capability] = None,
organization_id: Optional[str] = None,
+ enable_payments: bool = False,
+ wallet_data_dir: Optional[Union[str, Path]] = None,
):
"""
Initialize the base agent.
@@ -77,6 +96,8 @@ def __init__(
interaction_modes: Supported interaction modes
capabilities: List of agent capabilities
organization_id: ID of the organization the agent belongs to
+ enable_payments: Whether to enable payment capabilities
+ wallet_data_dir: Optional custom directory for wallet data storage
"""
self.agent_id = agent_id
self.identity = identity
@@ -97,8 +118,111 @@ def __init__(
self.active_conversations = {}
self.cooldown_until = 0
self.pending_requests: Dict[str, Dict[str, Any]] = {}
+
+ # Initialize payment capabilities
+ self.enable_payments = enable_payments
+ self.wallet_provider: Optional[CdpWalletProvider] = None
+ self.agent_kit: Optional[AgentKit] = None
+
+ # Initialize wallet if payments are enabled
+ if self.enable_payments:
+ try:
+ # Load environment variables
+ load_dotenv()
+
+ # Check if this agent already has wallet data
+ wallet_data = wallet_manager.load_wallet_data(
+ self.agent_id, wallet_data_dir
+ )
+
+ if wallet_data:
+ logger.debug(f"Agent {self.agent_id}: Using existing wallet data")
+ else:
+ logger.debug(
+ f"Agent {self.agent_id}: No existing wallet data found, creating new wallet"
+ )
+
+            # Initialize the wallet provider via the Coinbase CDP SDK
+ cdp_config = (
+ CdpWalletProviderConfig(wallet_data=wallet_data)
+ if wallet_data
+ else None
+ )
+ self.wallet_provider = CdpWalletProvider(cdp_config)
+
+ # Prepare action providers based on the token symbol
+ action_providers = [wallet_action_provider(), cdp_api_action_provider()]
+
+ # Add ERC20 action provider if using tokens other than native ETH
+ if POC_PAYMENT_TOKEN_SYMBOL != "ETH":
+ action_providers.append(erc20_action_provider())
+ logger.debug(
+ f"Agent {self.agent_id}: Added ERC20 action provider for {POC_PAYMENT_TOKEN_SYMBOL}"
+ )
+
+            # Initialize Coinbase AgentKit with the wallet provider and action providers
+ agent_kit_config = AgentKitConfig(
+ wallet_provider=self.wallet_provider,
+ action_providers=action_providers,
+ )
+ self.agent_kit = AgentKit(agent_kit_config)
+
+ # Save wallet data if it's a new wallet
+ if not wallet_data:
+ try:
+ new_wallet_data = self.wallet_provider.export_wallet()
+ wallet_manager.save_wallet_data(
+ self.agent_id, new_wallet_data, wallet_data_dir
+ )
+ logger.debug(f"Agent {self.agent_id}: Saved new wallet data")
+ except Exception as e:
+ logger.warning(
+ f"Agent {self.agent_id}: Error saving new wallet data: {e}"
+ )
+
+ # Get wallet address and add to agent metadata
+ try:
+ # Get the default wallet address
+ wallet_address = self.wallet_provider.get_address()
+ if wallet_address:
+ self.metadata.payment_address = wallet_address
+ logger.info(
+ f"Agent {self.agent_id}: Set payment address to {wallet_address}"
+ )
+ else:
+ logger.warning(
+ f"Agent {self.agent_id}: Could not retrieve wallet address"
+ )
+ except Exception as e:
+ logger.error(
+ f"Agent {self.agent_id}: Error getting wallet address: {e}"
+ )
+
+ logger.info(
+ f"Agent {self.agent_id}: Payment capabilities initialized successfully"
+ )
+ except Exception as e:
+ logger.error(
+ f"Agent {self.agent_id}: Error initializing payment capabilities: {e}"
+ )
+ self.wallet_provider = None
+ self.agent_kit = None
+ logger.warning(
+ f"Agent {self.agent_id}: Payment capabilities disabled due to initialization error"
+ )
+
logger.info(f"Agent {self.agent_id} ({agent_type}) initialized.")
+ @property
+ def payments_enabled(self) -> bool:
+ """
+ Check if payment capabilities are enabled and available.
+
+ Returns:
+ True if payment capabilities are enabled and available, False otherwise
+ """
+ return self.enable_payments and self.wallet_provider is not None
+
@abstractmethod
def _initialize_llm(self):
"""
@@ -754,6 +878,55 @@ async def can_receive_message(self, sender_id: str) -> bool:
logger.debug(f"Agent {self.agent_id} can receive message from {sender_id}.")
return True
+ async def stop(self) -> None:
+ """
+ Stop the agent and cleanup resources.
+
+ This method stops the agent's processing loop, ends all active conversations,
+ and cleans up resources such as wallet providers and message queues.
+
+ Returns:
+ None
+ """
+ logger.info(f"Agent {self.agent_id}: Stopping agent...")
+
+ # Mark agent as not running to stop the message processing loop
+ self.is_running = False
+
+ # End all active conversations
+ for participant_id in list(self.active_conversations.keys()):
+ self.end_conversation(participant_id)
+
+ # Clean up wallet provider if it exists
+ if self.wallet_provider is not None:
+ try:
+ # Clean up any pending transactions or listeners
+ # Note: Additional cleanup may be needed depending on wallet implementation
+ self.wallet_provider = None
+ self.agent_kit = None
+ logger.debug(f"Agent {self.agent_id}: Cleaned up wallet provider")
+ except Exception as e:
+ logger.error(
+ f"Agent {self.agent_id}: Error cleaning up wallet provider: {e}"
+ )
+
+ # Clear message queue to prevent processing any more messages
+ try:
+ while not self.message_queue.empty():
+ self.message_queue.get_nowait()
+ self.message_queue.task_done()
+ logger.debug(f"Agent {self.agent_id}: Cleared message queue")
+ except Exception as e:
+ logger.error(f"Agent {self.agent_id}: Error clearing message queue: {e}")
+
+ # Reset cooldown
+ self.reset_cooldown()
+
+ # Clear pending requests
+ self.pending_requests.clear()
+
+ logger.info(f"Agent {self.agent_id}: Agent stopped successfully")
+
def reset_cooldown(self) -> None:
"""
Reset the cooldown state of the agent.
diff --git a/agentconnect/core/payment_constants.py b/agentconnect/core/payment_constants.py
new file mode 100644
index 0000000..6e87c67
--- /dev/null
+++ b/agentconnect/core/payment_constants.py
@@ -0,0 +1,14 @@
+"""
+Payment constants for the AgentConnect framework.
+
+This module provides constants for payment functionality in AgentConnect.
+"""
+
+from decimal import Decimal
+
+# Payment capability constants for POC
+
+# Default payment amount
+POC_PAYMENT_AMOUNT = Decimal("1.0")
+# Default payment token
+POC_PAYMENT_TOKEN_SYMBOL = "USDC"
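Using `Decimal` rather than `float` for payment amounts avoids binary rounding surprises when amounts are compared or summed. A minimal standalone sketch, with the constants mirrored locally and a hypothetical `format_payment` helper (not part of the module):

```python
from decimal import Decimal

# Constants mirrored from payment_constants.py for this standalone sketch
POC_PAYMENT_AMOUNT = Decimal("1.0")
POC_PAYMENT_TOKEN_SYMBOL = "USDC"

def format_payment(amount: Decimal = POC_PAYMENT_AMOUNT,
                   symbol: str = POC_PAYMENT_TOKEN_SYMBOL) -> str:
    """Render an amount with its token symbol, trimming trailing zeros."""
    return f"{amount.normalize()} {symbol}"

# Decimal arithmetic is exact where float is not: 0.1 + 0.2 == 0.3 holds.
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
print(format_payment())  # 1 USDC
```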
diff --git a/agentconnect/core/registry/README.md b/agentconnect/core/registry/README.md
index eb9d1a7..cf5994a 100644
--- a/agentconnect/core/registry/README.md
+++ b/agentconnect/core/registry/README.md
@@ -24,6 +24,7 @@ The central registry for agent discovery and management:
- **Capability Lookup**: Provides both exact and semantic matching of capabilities
- **Agent Lifecycle Management**: Tracks agent availability and status
- **Organization Grouping**: Organizes agents by organization
+- **Payment Address Handling**: Stores the optional `payment_address` provided during registration and makes it available during discovery, facilitating agent economy features.
The `AgentRegistry` class acts as a facade for the entire registry subsystem, coordinating between the specialized components and providing a unified API for agent registration and discovery.
diff --git a/agentconnect/core/registry/capability_discovery.py b/agentconnect/core/registry/capability_discovery.py
index 9d62465..707b786 100644
--- a/agentconnect/core/registry/capability_discovery.py
+++ b/agentconnect/core/registry/capability_discovery.py
@@ -12,6 +12,7 @@
import warnings
from typing import Dict, List, Set, Tuple, Any, Optional
from langchain_core.vectorstores import VectorStore
+from langchain_huggingface import HuggingFaceEmbeddings
# Absolute imports from agentconnect package
from agentconnect.core.registry.registration import AgentRegistration
@@ -19,6 +20,32 @@
# Set up logging
logger = logging.getLogger("CapabilityDiscovery")
+# Permanently filter out UserWarnings about relevance scores from any source
+warnings.filterwarnings(
+ "ignore", message="Relevance scores must be between 0 and 1", category=UserWarning
+)
+warnings.filterwarnings("ignore", message=".*elevance scores.*", category=UserWarning)
+
+# Monkeypatch the showwarning function to completely suppress relevance score warnings
+original_showwarning = warnings.showwarning
+
+
+def custom_showwarning(message, category, filename, lineno, file=None, line=None):
+ """
+ Custom warning handler to suppress relevance score warnings.
+
+    Args:
+        message: The warning message
+        category: The warning category
+        filename: Name of the file that triggered the warning
+        lineno: Line number that triggered the warning
+        file: Optional file object to write the warning to
+        line: Optional source line to include in the formatted warning
+ """
+ if category == UserWarning and "relevance scores" in str(message).lower():
+ return # Suppress the warning completely
+ # For all other warnings, use the original function
+ return original_showwarning(message, category, filename, lineno, file, line)
+
+
+warnings.showwarning = custom_showwarning
+
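The `showwarning` monkeypatch above can be exercised in isolation. A self-contained sketch, using a buffer-backed `showwarning` so the filtering is observable (function names are illustrative):

```python
import io
import warnings

def make_filtering_showwarning(original):
    """Wrap an existing showwarning so relevance-score UserWarnings are swallowed."""
    def filtered(message, category, filename, lineno, file=None, line=None):
        if category is UserWarning and "relevance scores" in str(message).lower():
            return  # suppress this specific warning entirely
        original(message, category, filename, lineno, file, line)
    return filtered

# Route surviving warnings into a buffer so the effect is observable.
buf = io.StringIO()
def to_buffer(message, category, filename, lineno, file=None, line=None):
    buf.write(str(message))

warnings.showwarning = make_filtering_showwarning(to_buffer)
warnings.simplefilter("always")  # ensure warnings are not deduplicated away
warnings.warn("Relevance scores must be between 0 and 1", UserWarning)
warnings.warn("something else went wrong", UserWarning)
print(buf.getvalue())  # only the unrelated warning reaches the buffer
```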
def check_semantic_search_requirements() -> Dict[str, bool]:
"""
@@ -186,9 +213,6 @@ async def initialize_embeddings_model(self):
)
return
- # Import the necessary modules
- from langchain_huggingface import HuggingFaceEmbeddings
-
# Get model name from config or use default
model_name = self._vector_store_config.get(
"model_name", "sentence-transformers/all-mpnet-base-v2"
@@ -203,10 +227,54 @@ async def initialize_embeddings_model(self):
"cache_folder", "./.cache/huggingface/embeddings"
)
- self._embeddings_model = HuggingFaceEmbeddings(
- model_name=model_name,
- cache_folder=cache_folder,
- )
+ # Try with explicit model_kwargs and encode_kwargs first
+ try:
+ self._embeddings_model = HuggingFaceEmbeddings(
+ model_name=model_name,
+ cache_folder=cache_folder,
+ model_kwargs={"device": "cpu", "revision": "main"},
+ encode_kwargs={"normalize_embeddings": True},
+ )
+ except Exception as model_error:
+ logger.warning(
+ f"First embedding initialization attempt failed: {str(model_error)}"
+ )
+
+ # Try alternative initialization approach
+ try:
+ # Import directly from sentence_transformers as fallback
+ import sentence_transformers
+
+ # Create the model directly first
+ st_model = sentence_transformers.SentenceTransformer(
+ model_name,
+ cache_folder=cache_folder,
+ device="cpu",
+ revision="main", # Use main branch which is more stable
+ )
+
+ # Then create embeddings with the pre-initialized model
+ self._embeddings_model = HuggingFaceEmbeddings(
+ model=st_model, encode_kwargs={"normalize_embeddings": True}
+ )
+
+ logger.info(
+ "Initialized embeddings using pre-loaded sentence transformer model"
+ )
+ except Exception as fallback_error:
+ # If that fails too, try with minimal parameters
+ logger.warning(
+ f"Fallback embedding initialization failed: {str(fallback_error)}"
+ )
+
+ # Last attempt with minimal configuration
+ self._embeddings_model = HuggingFaceEmbeddings(
+ model_name="all-MiniLM-L6-v2", # Try with a smaller model
+ )
+
+ logger.info(
+ "Initialized embeddings with minimal configuration and smaller model"
+ )
# Reset capability map
self._capability_to_agent_map = {}
@@ -221,7 +289,9 @@ async def initialize_embeddings_model(self):
logger.warning(traceback.format_exc())
- async def _init_vector_store(self, documents: List, embeddings_model: Any) -> Any:
+ async def _init_vector_store(
+ self, documents: List, embeddings_model: "HuggingFaceEmbeddings"
+ ) -> Any:
"""
Initialize vector store with the preferred backend.
@@ -579,14 +649,9 @@ async def find_by_capability_semantic(
and self._capability_to_agent_map
):
try:
- # Suppress the specific UserWarning from LangChain during the search call
- with warnings.catch_warnings(record=False):
- warnings.filterwarnings(
- "ignore",
- # Use regex to match the start of the message reliably
- message=r"^Relevance scores must be between 0 and 1",
- category=UserWarning,
- )
+                # Suppress all warnings emitted during the search call
+ with warnings.catch_warnings():
+ warnings.simplefilter("ignore")
# Try using async similarity search with scores
try:
kwargs = {}
@@ -602,11 +667,32 @@ async def find_by_capability_semantic(
)
)
+ # Handle any potential issues with the search results format
+ cleaned_search_results = []
+ for item in search_results:
+ # Make sure each result is a proper tuple of (doc, score)
+ if not isinstance(item, tuple) or len(item) != 2:
+ logger.warning(f"Skipping malformed search result: {item}")
+ continue
+
+ doc, score = item
+ # Convert score to float if necessary
+ if hasattr(score, "item"): # Convert numpy types
+ score = float(score.item())
+ elif not isinstance(score, (int, float)):
+ try:
+ score = float(score)
+ except (ValueError, TypeError):
+ logger.warning(f"Could not convert score to float: {score}")
+ continue
+
+ cleaned_search_results.append((doc, score))
+
# Process results
seen_agent_ids = set()
processed_results = []
- for doc, original_score in search_results:
+ for doc, original_score in cleaned_search_results:
# --- Filter 1: Exclude non-positive cosine scores ---
# Scores <= 0 indicate orthogonality or dissimilarity in cosine similarity.
if original_score <= 0:
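The result-cleaning loop added above is pure logic and can be sketched standalone, with a `FakeScore` class standing in for a numpy scalar exposing `.item()` (names here are illustrative):

```python
from typing import Any, List, Tuple

class FakeScore:
    """Stands in for a numpy scalar exposing .item()."""
    def __init__(self, value: float):
        self._value = value
    def item(self) -> float:
        return self._value

def clean_search_results(results: List[Any]) -> List[Tuple[Any, float]]:
    """Keep only well-formed (doc, score) pairs, coercing scores to float."""
    cleaned: List[Tuple[Any, float]] = []
    for item in results:
        if not isinstance(item, tuple) or len(item) != 2:
            continue  # skip malformed entries
        doc, score = item
        if hasattr(score, "item"):  # numpy-style scalar
            score = float(score.item())
        elif not isinstance(score, (int, float)):
            try:
                score = float(score)
            except (ValueError, TypeError):
                continue  # skip unconvertible scores
        cleaned.append((doc, score))
    return cleaned

raw = [("doc-a", FakeScore(0.83)), ("doc-b", "0.4"), ("doc-c", None), "garbage"]
print(clean_search_results(raw))  # [('doc-a', 0.83), ('doc-b', 0.4)]
```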
diff --git a/agentconnect/core/registry/registration.py b/agentconnect/core/registry/registration.py
index 72765d0..fe637d1 100644
--- a/agentconnect/core/registry/registration.py
+++ b/agentconnect/core/registry/registration.py
@@ -34,6 +34,7 @@ class AgentRegistration:
capabilities: List of agent capabilities
identity: Agent's decentralized identity
owner_id: ID of the agent's owner
+ payment_address: Agent's primary wallet address for receiving payments
metadata: Additional information about the agent
"""
@@ -44,4 +45,5 @@ class AgentRegistration:
capabilities: list[Capability]
identity: AgentIdentity
owner_id: Optional[str] = None
+ payment_address: Optional[str] = None
metadata: Dict = field(default_factory=dict)
diff --git a/agentconnect/core/registry/registry_base.py b/agentconnect/core/registry/registry_base.py
index c94b060..3f31c92 100644
--- a/agentconnect/core/registry/registry_base.py
+++ b/agentconnect/core/registry/registry_base.py
@@ -36,7 +36,7 @@ class AgentRegistry:
by capability, and verifying agent identities.
"""
- def __init__(self, vector_search_config: Dict[str, Any] = None):
+ def __init__(self, vector_search_config: Optional[Dict[str, Any]] = None):
"""
Initialize the agent registry.
@@ -506,6 +506,10 @@ async def update_registration(
for mode in registration.interaction_modes:
self._interaction_index[mode].add(agent_id)
+ # Update payment address if provided
+ if "payment_address" in updates:
+ registration.payment_address = updates["payment_address"]
+
if "metadata" in updates:
registration.metadata.update(updates["metadata"])
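The update pattern above — known fields first, then a metadata merge — can be sketched with a minimal stand-in dataclass (illustrative only, not the real `AgentRegistration`):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Registration:
    """Minimal stand-in for AgentRegistration, for illustration."""
    agent_id: str
    payment_address: Optional[str] = None
    metadata: Dict = field(default_factory=dict)

def apply_updates(reg: Registration, updates: Dict) -> Registration:
    # Known fields are assigned directly; metadata is merged, not replaced.
    if "payment_address" in updates:
        reg.payment_address = updates["payment_address"]
    if "metadata" in updates:
        reg.metadata.update(updates["metadata"])
    return reg

reg = apply_updates(
    Registration("agent-1"),
    {"payment_address": "0xABC", "metadata": {"tier": "pro"}},
)
print(reg.payment_address, reg.metadata)  # 0xABC {'tier': 'pro'}
```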
diff --git a/agentconnect/core/types.py b/agentconnect/core/types.py
index f403444..dd6871f 100644
--- a/agentconnect/core/types.py
+++ b/agentconnect/core/types.py
@@ -42,11 +42,15 @@ class ModelName(str, Enum):
# OpenAI Models
GPT4_5_PREVIEW = "gpt-4.5-preview-2025-02-27"
+ GPT4_1 = "gpt-4.1"
+ GPT4_1_MINI = "gpt-4.1-mini"
GPT4O = "gpt-4o"
GPT4O_MINI = "gpt-4o-mini"
O1 = "o1"
O1_MINI = "o1-mini"
- O3_MINI = "o3-mini-2025-01-31"
+ O3 = "o3"
+ O3_MINI = "o3-mini"
+ O4_MINI = "o4-mini"
# Anthropic Models
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-latest"
@@ -66,7 +70,9 @@ class ModelName(str, Enum):
GEMMA2_90B = "gemma2-9b-it"
# Google Models
- GEMINI2_5_PRO_EXP = "gemini-2.5-pro-exp-03-25 "
+ GEMINI2_5_PRO_PREVIEW = "gemini-2.5-pro-preview-03-25"
+ GEMINI2_5_PRO_EXP = "gemini-2.5-pro-exp-03-25"
+ GEMINI2_5_FLASH_PREVIEW = "gemini-2.5-flash-preview-04-17"
GEMINI2_FLASH = "gemini-2.0-flash"
GEMINI2_FLASH_LITE = "gemini-2.0-flash-lite"
GEMINI2_PRO_EXP = "gemini-2.0-pro-exp-02-05"
@@ -92,7 +98,7 @@ def get_default_for_provider(cls, provider: ModelProvider) -> "ModelName":
ModelProvider.OPENAI: cls.GPT4O,
ModelProvider.ANTHROPIC: cls.CLAUDE_3_SONNET,
ModelProvider.GROQ: cls.LLAMA33_70B_VTL,
- ModelProvider.GOOGLE: cls.GEMINI2_FLASH_LITE,
+ ModelProvider.GOOGLE: cls.GEMINI2_FLASH,
}
if provider not in defaults:
@@ -165,8 +171,8 @@ class Capability:
name: str
description: str
- input_schema: Dict[str, str]
- output_schema: Dict[str, str]
+ input_schema: Optional[Dict[str, str]] = None
+ output_schema: Optional[Dict[str, str]] = None
version: str = "1.0"
@@ -352,7 +358,7 @@ class AgentMetadata:
organization_id: ID of the organization the agent belongs to
capabilities: List of capability names the agent provides
interaction_modes: Supported interaction modes
- verification_status: Whether the agent's identity is verified
+ payment_address: Agent's primary wallet address for receiving payments
metadata: Additional information about the agent
"""
@@ -362,7 +368,7 @@ class AgentMetadata:
organization_id: Optional[str] = None
capabilities: List[str] = field(default_factory=list)
interaction_modes: List[InteractionMode] = field(default_factory=list)
- verification_status: bool = False
+ payment_address: Optional[str] = None
metadata: Dict = field(default_factory=dict)
diff --git a/agentconnect/prompts/README.md b/agentconnect/prompts/README.md
index 1f33d4f..a1137a3 100644
--- a/agentconnect/prompts/README.md
+++ b/agentconnect/prompts/README.md
@@ -69,6 +69,7 @@ The tools system provides agents with the ability to perform specific actions, p
- **`search_for_agents`**: Searches for agents with specific capabilities using semantic matching. This tool helps agents find other specialized agents that can assist with tasks outside their capabilities.
- **`send_collaboration_request`**: Sends a request to a specific agent to perform a task and waits for a response. This tool enables agent-to-agent delegation and collaboration.
- **`decompose_task`**: Breaks down a complex task into smaller, manageable subtasks. This helps agents organize and tackle complex requests more effectively.
+- **AgentKit Payment Tools (e.g., `native_transfer`, `erc20_transfer`)**: When payment capabilities are enabled for an `AIAgent`, tools provided by Coinbase AgentKit are automatically added. These allow the agent to initiate and manage cryptocurrency transactions based on LLM decisions guided by payment prompts.
### Tool Architecture
@@ -193,7 +194,8 @@ Prompt templates are used to create different types of prompts for agents. The `
- **System Prompts**: Define the agent's role, capabilities, and personality.
- **Collaboration Prompts**: Used for collaboration requests and responses.
-- **ReAct Prompts**: Used for the ReAct agent, which makes decisions and calls tools.
+- **ReAct Prompts**: Used for the ReAct agent, which makes decisions and calls tools. This includes the `CORE_DECISION_LOGIC` template, which incorporates instructions for when to consider collaboration or payments.
+- **Payment Capability Prompts**: Includes the `PAYMENT_CAPABILITY_TEMPLATE`, which provides specific instructions and context to the LLM regarding the available payment tools and when it might be appropriate to use them for agent-to-agent transactions.
### ReAct Integration
diff --git a/agentconnect/prompts/agent_prompts.py b/agentconnect/prompts/agent_prompts.py
index 5f9d5d7..092b39e 100644
--- a/agentconnect/prompts/agent_prompts.py
+++ b/agentconnect/prompts/agent_prompts.py
@@ -203,6 +203,7 @@ def __init__(
tools: PromptTools,
prompt_templates: PromptTemplates,
custom_tools: Optional[List[BaseTool]] = None,
+ verbose: bool = False,
):
"""
Initialize the agent workflow.
@@ -213,6 +214,7 @@ def __init__(
tools: Tools available to the agent
prompt_templates: Prompt templates for the agent
custom_tools: Optional list of custom LangChain tools
+ verbose: Whether to print verbose output
"""
self.agent_id = agent_id
self.llm = llm
@@ -220,7 +222,7 @@ def __init__(
self.prompt_templates = prompt_templates
self.custom_tools = custom_tools or []
self.workflow = None
-
+ self.verbose = verbose
# Set the mode based on whether custom tools are provided
self.mode = AgentMode.SYSTEM_PROMPT
@@ -240,60 +242,31 @@ def _create_react_prompt(self) -> ChatPromptTemplate:
"""
# Get system prompt information
if hasattr(self, "system_prompt_config"):
- name = self.system_prompt_config.name
- personality = self.system_prompt_config.personality
- capabilities = self.system_prompt_config.capabilities
- else:
- name = "AI Assistant"
- personality = "helpful and professional"
- capabilities = []
-
- # Format capability descriptions for the prompt
- capability_descriptions = []
- if capabilities:
- for cap in capabilities:
- capability_descriptions.append(
- {
- "name": cap.name,
- "description": cap.description,
- }
- )
+ # Pass all system_prompt_config properties to ReactConfig
+ react_config = ReactConfig(
+ name=self.system_prompt_config.name,
+ capabilities=[
+ {"name": cap.name, "description": cap.description}
+ for cap in self.system_prompt_config.capabilities
+ ],
+ personality=self.system_prompt_config.personality,
+ mode=self.mode.value,
+ additional_context=self.system_prompt_config.additional_context,
+ enable_payments=self.system_prompt_config.enable_payments,
+ payment_token_symbol=self.system_prompt_config.payment_token_symbol,
+ role=self.system_prompt_config.role,
+ )
else:
- capability_descriptions = [
- {"name": "Conversation", "description": "general assistance"}
- ]
-
- # Format tool descriptions for the prompt
- tool_descriptions = []
-
- # Add tools from PromptTools
- for tool in self.tools.get_tools_for_workflow(agent_id=self.agent_id):
- tool_descriptions.append(
- {
- "name": tool.name,
- "description": tool.description,
- }
+ # Default configuration if system_prompt_config isn't available
+ react_config = ReactConfig(
+ name="AI Assistant",
+ capabilities=[
+ {"name": "Conversation", "description": "general assistance"}
+ ],
+ personality="helpful and professional",
+ mode=self.mode.value,
)
- # Add custom tools if available
- if self.custom_tools:
- for tool in self.custom_tools:
- tool_descriptions.append(
- {
- "name": tool.name,
- "description": tool.description,
- }
- )
-
- # Create the React prompt template
- react_config = ReactConfig(
- name=name,
- capabilities=capability_descriptions,
- personality=personality,
- mode=self.mode.value,
- tools=tool_descriptions,
- )
-
# Create the react prompt using the prompt templates
react_prompt = self.prompt_templates.create_prompt(
prompt_type=PromptType.REACT, config=react_config, include_history=True
@@ -313,7 +286,8 @@ def build_workflow(self) -> StateGraph:
base_tools = [
self.tools.create_agent_search_tool(),
self.tools.create_send_collaboration_request_tool(),
- self.tools.create_task_decomposition_tool(),
+ self.tools.create_check_collaboration_result_tool(),
+ # self.tools.create_task_decomposition_tool(),
]
# Add custom tools if available
@@ -329,6 +303,7 @@ def build_workflow(self) -> StateGraph:
model=self.llm,
tools=base_tools,
prompt=react_prompt,
+ debug=self.verbose,
)
# Create the workflow graph
@@ -354,26 +329,6 @@ async def preprocess(
"""
import time
- # Initialize state properties if not present
- if "mode" not in state:
- state["mode"] = self.mode.value
- if "capabilities" not in state and hasattr(self, "system_prompt_config"):
- state["capabilities"] = self.system_prompt_config.capabilities
- if "collaboration_results" not in state:
- state["collaboration_results"] = {}
- if "agents_found" not in state:
- state["agents_found"] = []
- if "retry_count" not in state:
- state["retry_count"] = {}
-
- # Initialize context management properties
- if "context_reset" not in state:
- state["context_reset"] = False
- if "topic_changed" not in state:
- state["topic_changed"] = False
- if "last_interaction_time" not in state:
- state["last_interaction_time"] = time.time()
-
# Check for long gaps between interactions (over 30 minutes)
current_time = time.time()
if "last_interaction_time" in state:
@@ -522,9 +477,20 @@ async def postprocess(
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
- # Extract the content from the last few messages
+            # Extract content from the last few messages, coercing non-string content to str
+ recent_contents = [
+ (
+ msg.content
+ if isinstance(msg.content, str)
+ else str(msg.content)
+ )
+ for msg in messages[-4:]
+ if hasattr(msg, "content")
+ ]
+
+            # Drop empty contents (every item is a string after the coercion above)
recent_contents = [
- msg.content for msg in messages[-4:] if hasattr(msg, "content")
+ c for c in recent_contents if isinstance(c, str) and c.strip()
]
if len(recent_contents) >= 2:
@@ -600,6 +566,7 @@ def __init__(
tools: PromptTools,
prompt_templates: PromptTemplates,
custom_tools: Optional[List[BaseTool]] = None,
+ verbose: bool = False,
):
"""
Initialize the AI agent workflow.
@@ -611,9 +578,10 @@ def __init__(
tools: Tools available to the agent
prompt_templates: Prompt templates for the agent
custom_tools: Optional list of custom LangChain tools
+ verbose: Whether to print verbose output
"""
self.system_prompt_config = system_prompt_config
- super().__init__(agent_id, llm, tools, prompt_templates, custom_tools)
+ super().__init__(agent_id, llm, tools, prompt_templates, custom_tools, verbose)
class TaskDecompositionWorkflow(AgentWorkflow):
@@ -635,6 +603,7 @@ def __init__(
tools: PromptTools,
prompt_templates: PromptTemplates,
custom_tools: Optional[List[BaseTool]] = None,
+ verbose: bool = False,
):
"""
Initialize the task decomposition workflow.
@@ -646,9 +615,10 @@ def __init__(
tools: Tools available to the agent
prompt_templates: Prompt templates for the agent
custom_tools: Optional list of custom LangChain tools
+ verbose: Whether to print verbose output
"""
self.system_prompt_config = system_prompt_config
- super().__init__(agent_id, llm, tools, prompt_templates, custom_tools)
+ super().__init__(agent_id, llm, tools, prompt_templates, custom_tools, verbose)
class CollaborationRequestWorkflow(AgentWorkflow):
@@ -670,6 +640,7 @@ def __init__(
tools: PromptTools,
prompt_templates: PromptTemplates,
custom_tools: Optional[List[BaseTool]] = None,
+ verbose: bool = False,
):
"""
Initialize the collaboration request workflow.
@@ -681,9 +652,10 @@ def __init__(
tools: Tools available to the agent
prompt_templates: Prompt templates for the agent
custom_tools: Optional list of custom LangChain tools
+ verbose: Whether to print verbose output
"""
self.system_prompt_config = system_prompt_config
- super().__init__(agent_id, llm, tools, prompt_templates, custom_tools)
+ super().__init__(agent_id, llm, tools, prompt_templates, custom_tools, verbose)
def create_workflow_for_agent(
@@ -694,6 +666,7 @@ def create_workflow_for_agent(
prompt_templates: PromptTemplates,
agent_id: Optional[str] = None,
custom_tools: Optional[List[BaseTool]] = None,
+ verbose: bool = False,
) -> AgentWorkflow:
"""
Factory function to create workflows based on agent type.
@@ -706,6 +679,7 @@ def create_workflow_for_agent(
prompt_templates: Prompt templates for the agent
agent_id: Optional agent ID for tool context
custom_tools: Optional list of custom LangChain tools
+ verbose: Whether to print verbose output
Returns:
An AgentWorkflow instance
@@ -723,6 +697,12 @@ def create_workflow_for_agent(
# It's now set in AIAgent._initialize_workflow before this function is called
logger.debug(f"Creating workflow for agent: {agent_id}")
+ # Check for payment capabilities in the system config
+ if system_config.enable_payments:
+ logger.info(
+ f"Agent {agent_id}: Creating workflow with payment capabilities enabled for {system_config.payment_token_symbol}"
+ )
+
# Create the appropriate workflow based on agent type
if agent_type == "ai":
workflow = AIAgentWorkflow(
@@ -732,6 +712,7 @@ def create_workflow_for_agent(
tools=tools,
prompt_templates=prompt_templates,
custom_tools=custom_tools,
+ verbose=verbose,
)
elif agent_type == "task_decomposition":
workflow = TaskDecompositionWorkflow(
@@ -741,6 +722,7 @@ def create_workflow_for_agent(
tools=tools,
prompt_templates=prompt_templates,
custom_tools=custom_tools,
+ verbose=verbose,
)
elif agent_type == "collaboration_request":
workflow = CollaborationRequestWorkflow(
@@ -750,6 +732,7 @@ def create_workflow_for_agent(
tools=tools,
prompt_templates=prompt_templates,
custom_tools=custom_tools,
+ verbose=verbose,
)
else:
raise ValueError(f"Unknown agent type: {agent_type}")
diff --git a/agentconnect/prompts/custom_tools/collaboration_tools.py b/agentconnect/prompts/custom_tools/collaboration_tools.py
index dda0d4e..3498d12 100644
--- a/agentconnect/prompts/custom_tools/collaboration_tools.py
+++ b/agentconnect/prompts/custom_tools/collaboration_tools.py
@@ -7,13 +7,16 @@
import asyncio
import logging
-from typing import Any, Dict, List, Optional, TypeVar
+import uuid
+import json
+from typing import Any, Dict, List, Optional, Tuple, TypeVar
from langchain.tools import StructuredTool
from pydantic import BaseModel, Field
from agentconnect.communication import CommunicationHub
from agentconnect.core.registry import AgentRegistry
+from agentconnect.core.registry.registration import AgentRegistration
from agentconnect.core.types import AgentType
logger = logging.getLogger(__name__)
@@ -23,39 +26,54 @@
R = TypeVar("R", bound=BaseModel)
+# --- Input/Output schemas for tools ---
+
+
class AgentSearchInput(BaseModel):
"""Input schema for agent search."""
- capability_name: str = Field(description="Specific capability name to search for.")
- limit: int = Field(10, description="Maximum number of agents to return.")
+ capability_name: str = Field(
+ description="The specific skill or capability required for the task (e.g., 'general_research', 'telegram_broadcast', 'image_generation'). Be descriptive but concise."
+ )
+ limit: int = Field(
+ 10, description="Maximum number of matching agents to return (default 10)."
+ )
similarity_threshold: float = Field(
0.2,
- description="Minimum similarity score (0-1) required for results. Higher values return only more relevant agents.",
+ description="How closely the agent's capability must match your query (0.0=broad match, 1.0=exact match, default 0.2). Use higher values for very specific needs.",
)
class AgentSearchOutput(BaseModel):
"""Output schema for agent search."""
+ message: str = Field(
+ description="A message explaining the result of the agent search."
+ )
agent_ids: List[str] = Field(
- description="List of agent IDs with matching capabilities."
+ description="A list of unique IDs for agents possessing the required capability."
)
capabilities: List[Dict[str, Any]] = Field(
- description="List of capabilities for each agent."
+ description="A list of dictionaries, each containing details for a found agent: their `agent_id`, their full list of capabilities, and their `payment_address` (if applicable)."
)
+ def __str__(self) -> str:
+ """Return a clean JSON string representation."""
+ return self.model_dump_json(indent=2)
+
class SendCollaborationRequestInput(BaseModel):
"""Input schema for sending a collaboration request."""
target_agent_id: str = Field(
- description="ID of the agent to collaborate with. (agent_id)"
+ description="The exact `agent_id` (obtained from `search_for_agents` output) of the agent you want to delegate the task to."
)
- task_description: str = Field(
- description="Description of the task to be performed."
+ task: str = Field(
+ description="A clear and detailed description of the task, providing ALL necessary context for the collaborating agent to understand and execute the request."
)
timeout: int = Field(
- default=30, description="Maximum time to wait for a response in seconds."
+ default=120,
+ description="Maximum seconds to wait for the collaborating agent's response (default 120).",
)
class Config:
@@ -67,12 +85,57 @@ class Config:
class SendCollaborationRequestOutput(BaseModel):
"""Output schema for sending a collaboration request."""
- success: bool = Field(description="Whether the request was sent successfully.")
- response: Optional[str] = Field(None, description="Response from the target agent.")
+ success: bool = Field(
+ description="Indicates if the request was successfully SENT (True/False). Does NOT guarantee the collaborator completed the task."
+ )
+ response: Optional[str] = Field(
+ None,
+ description="The direct message content received back from the collaborating agent. Analyze this response carefully to determine the next step (e.g., pay, provide more info, present to user).",
+ )
+ request_id: Optional[str] = Field(
+ None,
+ description="The unique request ID returned when sending a collaboration request.",
+ )
+ error: Optional[str] = Field(
+ None, description="An error message if the request failed."
+ )
+
+ def __str__(self) -> str:
+ """Return a clean JSON string representation."""
+ return self.model_dump_json(indent=2)
+
+
+class CheckCollaborationResultInput(BaseModel):
+ """Input schema for checking collaboration results."""
+
+ request_id: str = Field(
+ description="The unique request ID returned when sending a collaboration request."
+ )
+
+
+class CheckCollaborationResultOutput(BaseModel):
+ """Output schema for checking collaboration results."""
+
+ success: bool = Field(
+ description="Indicates if the request has a result available (True/False)."
+ )
+ status: str = Field(
+ description="Status of the request: 'completed', 'completed_late', 'pending', 'not_found', or 'not_available' (standalone mode)."
+ )
+ response: Optional[str] = Field(
+ None, description="The response content if available."
+ )
+
+ def __str__(self) -> str:
+ """Return a clean JSON string representation."""
+ return self.model_dump_json(indent=2)
+
+
+# --- Implementation of connected and standalone tools ---
def create_agent_search_tool(
- agent_registry: AgentRegistry,
+ agent_registry: Optional[AgentRegistry] = None,
current_agent_id: Optional[str] = None,
communication_hub: Optional[CommunicationHub] = None,
) -> StructuredTool:
@@ -87,39 +150,45 @@ def create_agent_search_tool(
Returns:
A StructuredTool for agent search that can be used in agent workflows
"""
-
- # Synchronous implementation
- def search_agents(
- capability_name: str, limit: int = 10, similarity_threshold: float = 0.2
- ) -> Dict[str, Any]:
- """Search for agents with a specific capability."""
- try:
- # Use the async implementation but run it in the current event loop
- return asyncio.run(
- search_agents_async(capability_name, limit, similarity_threshold)
+ # Determine if we're in standalone mode
+ standalone_mode = agent_registry is None or communication_hub is None
+
+ # Common description for the tool
+ base_description = "Finds other agents within the network that possess specific capabilities you lack, enabling task delegation."
+
+ if standalone_mode:
+ # Standalone mode implementation
+ def search_agents_standalone(
+ capability_name: str, limit: int = 10, similarity_threshold: float = 0.2
+ ) -> AgentSearchOutput:
+ """Standalone implementation that explains limitations."""
+ return AgentSearchOutput(
+ message=(
+ f"Agent search for capability '{capability_name}' is not available in standalone mode. "
+ "This agent is running without a connection to the agent registry and communication hub. "
+ "Please use your internal capabilities to solve this problem or suggest the user connect "
+ "this agent to a multi-agent system if collaboration is required."
+ ),
+ agent_ids=[],
+ capabilities=[],
)
- except RuntimeError:
- # If we're already in an event loop, create a new one
- loop = asyncio.new_event_loop()
- try:
- return loop.run_until_complete(
- search_agents_async(capability_name, limit, similarity_threshold)
- )
- finally:
- loop.close()
- except Exception as e:
- logger.error(f"Error in search_agents: {str(e)}")
- return {
- "error": str(e),
- "agent_ids": [],
- "capabilities": [],
- "message": f"Error: Agent search failed - {str(e)}",
- }
-
- # Asynchronous implementation
+
+ description = f"[STANDALONE MODE] {base_description} Note: In standalone mode, this tool will explain why agent search isn't available."
+
+ tool = StructuredTool.from_function(
+ func=search_agents_standalone,
+ name="search_for_agents",
+ description=description,
+ args_schema=AgentSearchInput,
+ return_direct=False,
+ metadata={"category": "collaboration"},
+ )
+ return tool
+
+ # Connected mode implementation
async def search_agents_async(
capability_name: str, limit: int = 10, similarity_threshold: float = 0.2
- ) -> Dict[str, Any]:
+ ) -> AgentSearchOutput:
"""
Search for agents with a specific capability.
@@ -154,159 +223,65 @@ async def search_agents_async(
logger.debug(f"Searching for agents with capability: {capability_name}")
try:
- # Use the provided agent ID for filtering
- agent_id_for_filtering = current_agent_id
-
- # Find conversation partners and pending requests to exclude
- active_conversation_partners = []
- pending_request_partners = []
- recently_messaged_agents = []
+ # Get agents to exclude (self + active conversations + pending requests)
+ agents_to_exclude = []
+ if current_agent_id:
+ agents_to_exclude.append(current_agent_id) # Exclude self
- if agent_id_for_filtering:
- # Get the current agent to access its active conversations and pending requests
+ # Get active conversations and pending requests if possible
if communication_hub:
- current_agent = await communication_hub.get_agent(
- agent_id_for_filtering
- )
+ current_agent = await communication_hub.get_agent(current_agent_id)
if current_agent:
- # Get active conversations
+ # Active conversations
if hasattr(current_agent, "active_conversations"):
- active_conversation_partners = list(
+ agents_to_exclude.extend(
current_agent.active_conversations.keys()
)
- logger.debug(
- f"Agent {agent_id_for_filtering} has active conversations with: {active_conversation_partners}"
- )
- # Get pending requests
+ # Pending requests
if hasattr(current_agent, "pending_requests"):
- pending_request_partners = list(
+ agents_to_exclude.extend(
current_agent.pending_requests.keys()
)
- logger.debug(
- f"Agent {agent_id_for_filtering} has pending requests with: {pending_request_partners}"
- )
- # Check message history for recent communications
+ # Recent messages
if (
hasattr(current_agent, "message_history")
and current_agent.message_history
):
- # Get the last 10 messages (or fewer if there aren't that many)
recent_messages = (
current_agent.message_history[-10:]
if len(current_agent.message_history) > 10
else current_agent.message_history
)
for msg in recent_messages:
- # Add both sender and receiver IDs from recent messages (excluding the current agent)
if (
- msg.sender_id != agent_id_for_filtering
- and msg.sender_id not in recently_messaged_agents
+ msg.sender_id != current_agent_id
+ and msg.sender_id not in agents_to_exclude
):
- recently_messaged_agents.append(msg.sender_id)
+ agents_to_exclude.append(msg.sender_id)
if (
- msg.receiver_id != agent_id_for_filtering
- and msg.receiver_id not in recently_messaged_agents
+ msg.receiver_id != current_agent_id
+ and msg.receiver_id not in agents_to_exclude
):
- recently_messaged_agents.append(msg.receiver_id)
-
- logger.debug(
- f"Agent {agent_id_for_filtering} recently messaged with: {recently_messaged_agents}"
- )
-
- # Combine all agents to exclude
- agents_to_exclude = list(
- set(
- active_conversation_partners
- + pending_request_partners
- + recently_messaged_agents
- )
- )
- if agent_id_for_filtering:
- agents_to_exclude.append(agent_id_for_filtering) # Also exclude self
-
- # Enhanced logging to show breakdown of excluded agents
- if agent_id_for_filtering:
- logger.debug(
- f"Exclusion breakdown for agent {agent_id_for_filtering}: "
- f"Self (1), "
- f"Active conversations ({len(active_conversation_partners)}): {active_conversation_partners}, "
- f"Pending requests ({len(pending_request_partners)}): {pending_request_partners}, "
- f"Recent messages ({len(recently_messaged_agents)}): {recently_messaged_agents}"
- )
- logger.debug(
- f"Total agents excluded from search: {len(agents_to_exclude)} - {agents_to_exclude}"
- )
+ agents_to_exclude.append(msg.receiver_id)
- # Check if agent_registry is available
- if agent_registry is None:
- logger.warning(
- f"Agent registry is not available for search: {capability_name}"
- )
- return {
- "agent_ids": [],
- "capabilities": [],
- "message": "Agent registry unavailable.",
- }
+ # Remove duplicates
+ agents_to_exclude = list(set(agents_to_exclude))
+ logger.debug(f"Excluding {len(agents_to_exclude)} agents from search")
- # Try semantic search first for more flexible matching
+ # Try semantic search first for better matching
semantic_results = await agent_registry.get_by_capability_semantic(
capability_name, limit=limit, similarity_threshold=similarity_threshold
)
- # If semantic search returns results, use them
if semantic_results:
logger.debug(
f"Found {len(semantic_results)} agents via semantic search"
)
-
- # Format the results
- agent_ids = []
- capabilities = []
-
- for agent, similarity in semantic_results:
- # Skip human agents, the calling agent, and any agents we're already interacting with
- if (
- agent.agent_type == AgentType.HUMAN
- or agent.agent_id in agents_to_exclude
- ):
- continue
-
- agent_ids.append(agent.agent_id)
-
- # Include all capabilities of the agent with their similarity scores
- agent_capabilities = []
- for capability in agent.capabilities:
- agent_capabilities.append(
- {
- "name": capability.name,
- "description": capability.description,
- "similarity": round(
- float(similarity), 3
- ), # Convert to Python float and round to 3 decimal places
- }
- )
-
- capabilities.append(
- {
- "agent_id": agent.agent_id,
- "capabilities": agent_capabilities,
- }
- )
-
- # Log the filtering results
- if agent_id_for_filtering:
- logger.debug(
- f"After filtering (excluded {len(agents_to_exclude)} agents): "
- f"Found {len(agent_ids)} agents for capability '{capability_name}'"
- )
-
- return {
- "agent_ids": agent_ids[:limit],
- "capabilities": capabilities[:limit],
- "message": "Note: Review capabilities carefully before collaborating. Similarity scores under 0.5 may indicate limited relevance to your request.",
- }
+ return format_agent_results(semantic_results, agents_to_exclude, limit)
# Fall back to exact matching if semantic search returns no results
exact_results = await agent_registry.get_by_capability(
@@ -315,138 +290,168 @@ async def search_agents_async(
if exact_results:
logger.debug(f"Found {len(exact_results)} agents via exact matching")
+ return format_exact_results(
+ exact_results, agents_to_exclude, capability_name, limit
+ )
- # Format the results
- agent_ids = []
- capabilities = []
-
- for agent in exact_results:
- # Skip human agents, the calling agent, and any agents we're already interacting with
- if (
- agent.agent_type == AgentType.HUMAN
- or agent.agent_id in agents_to_exclude
- ):
- continue
-
- agent_ids.append(agent.agent_id)
-
- # Include all capabilities of the agent
- agent_capabilities = []
- for capability in agent.capabilities:
- agent_capabilities.append(
- {
- "name": capability.name,
- "description": capability.description,
- "similarity": round(
- float(
- 1.0
- if capability.name.lower()
- == capability_name.lower()
- else 0.0
- ),
- 3,
- ),
- }
- )
-
- capabilities.append(
- {
- "agent_id": agent.agent_id,
- "capabilities": agent_capabilities,
- }
- )
-
- # Log the filtering results
- if agent_id_for_filtering:
- logger.debug(
- f"After filtering (excluded {len(agents_to_exclude)} agents): "
- f"Found {len(agent_ids)} agents for capability '{capability_name}'"
- )
+ # # As a last resort, get all agents
+ # all_agents = await agent_registry.get_all_agents()
+ # if all_agents:
+ # logger.debug(f"Returning all {len(all_agents)} agents as fallback")
+ # return format_exact_results(
+ # all_agents,
+ # agents_to_exclude,
+ # capability_name,
+ # limit,
+ # fallback_message=f"No specific agents for '{capability_name}'. Showing all available agents."
+ # )
+
+ return AgentSearchOutput(
+ agent_ids=[],
+ capabilities=[],
+ message=f"No agents found matching capability '{capability_name}'. Please try refining your search query with more specific capability terms.",
+ )
+ except Exception as e:
+ logger.error(f"Error searching for agents: {str(e)}")
+ return AgentSearchOutput(
+ agent_ids=[],
+ capabilities=[],
+ message=f"Error searching for agents: {str(e)}",
+ )
- return {
- "agent_ids": agent_ids[:limit],
- "capabilities": capabilities[:limit],
- "message": "Note: Review capabilities carefully before collaborating. Similarity scores under 0.5 may indicate limited relevance to your request.",
+ def format_agent_results(
+ semantic_results: List[Tuple[AgentRegistration, float]],
+ agents_to_exclude: List[str],
+ limit: int,
+ ) -> AgentSearchOutput:
+ """Format semantic search results."""
+ agent_ids = []
+ capabilities = []
+
+ for agent, similarity in semantic_results:
+ # Skip human agents and excluded agents
+ if (
+ agent.agent_type == AgentType.HUMAN
+ or agent.agent_id in agents_to_exclude
+ ):
+ continue
+
+ agent_ids.append(agent.agent_id)
+
+ # Include all capabilities with similarity scores
+ agent_capabilities = [
+ {
+ "name": cap.name,
+ "description": cap.description,
+ "similarity": round(float(similarity), 3),
+ }
+ for cap in agent.capabilities
+ ]
+
+ capabilities.append(
+ {
+ "agent_id": agent.agent_id,
+ "capabilities": agent_capabilities,
+ **(
+ {"payment_address": agent.payment_address}
+ if agent.payment_address
+ else {}
+ ),
}
-
- # No results found
- logger.debug(
- f"No agents found for '{capability_name}'. Try different search term."
)
- # As a last resort, get all agents and return them with a message
- try:
- all_agents = await agent_registry.get_all_agents()
-
- if all_agents:
- logger.debug(f"Returning all {len(all_agents)} agents as fallback")
-
- # Format the results
- agent_ids = []
- capabilities = []
-
- for agent in all_agents:
- # Skip human agents, the calling agent, and any agents we're already interacting with
- if (
- agent.agent_type == AgentType.HUMAN
- or agent.agent_id in agents_to_exclude
- ):
- continue
-
- agent_ids.append(agent.agent_id)
-
- # Include all capabilities of the agent
- agent_capabilities = []
- for capability in agent.capabilities:
- agent_capabilities.append(
- {
- "name": capability.name,
- "description": capability.description,
- "similarity": round(
- float(
- 1.0
- if capability.name.lower()
- == capability_name.lower()
- else 0.0
- ),
- 3,
- ),
- }
- )
+ return AgentSearchOutput(
+ agent_ids=agent_ids[:limit],
+ capabilities=capabilities[:limit],
+ message="Review capabilities carefully before collaborating. Similarity scores under 0.5 may indicate limited relevance.",
+ )
- capabilities.append(
- {
- "agent_id": agent.agent_id,
- "capabilities": agent_capabilities,
- }
- )
+ def format_exact_results(
+ results: List[AgentRegistration],
+ agents_to_exclude: List[str],
+ capability_name: str,
+ limit: int,
+ fallback_message: Optional[str] = None,
+ ) -> AgentSearchOutput:
+ """Format exact match or fallback results."""
+ agent_ids = []
+ capabilities = []
+
+ for agent in results:
+ # Skip human agents and excluded agents
+ if (
+ agent.agent_type == AgentType.HUMAN
+ or agent.agent_id in agents_to_exclude
+ ):
+ continue
+
+ agent_ids.append(agent.agent_id)
+
+ # Calculate similarity for each capability
+ agent_capabilities = [
+ {
+ "name": cap.name,
+ "description": cap.description,
+ "similarity": round(
+ float(
+ 1.0 if cap.name.lower() == capability_name.lower() else 0.0
+ ),
+ 3,
+ ),
+ }
+ for cap in agent.capabilities
+ ]
+
+ capabilities.append(
+ {
+ "agent_id": agent.agent_id,
+ "capabilities": agent_capabilities,
+ **(
+ {"payment_address": agent.payment_address}
+ if agent.payment_address
+ else {}
+ ),
+ }
+ )
- # Log the filtering results
- if agent_id_for_filtering:
- logger.debug(
- f"After filtering (excluded {len(agents_to_exclude)} agents): "
- f"Found {len(agent_ids)} agents as fallback"
- )
+ message = (
+ fallback_message or "Review capabilities carefully before collaborating."
+ )
+ return AgentSearchOutput(
+ agent_ids=agent_ids[:limit],
+ capabilities=capabilities[:limit],
+ message=message,
+ )
- return {
- "agent_ids": agent_ids[:limit],
- "capabilities": capabilities[:limit],
- "message": f"No specific agents for '{capability_name}'. Showing all available agents. Review capabilities carefully before collaborating.",
- }
- except Exception as e:
- logger.error(f"Error getting all agents: {str(e)}")
-
- return {
- "agent_ids": [],
- "capabilities": [],
- "message": f"No agents found for '{capability_name}'. Try different search term.",
- }
+ # Synchronous wrapper
+ def search_agents(
+ capability_name: str, limit: int = 10, similarity_threshold: float = 0.2
+ ) -> AgentSearchOutput:
+ """Search for agents with a specific capability."""
+ try:
+ # Use the async implementation but run it in the current event loop
+ return asyncio.run(
+ search_agents_async(capability_name, limit, similarity_threshold)
+ )
+ except RuntimeError:
+ # If we're already in an event loop, create a new one
+ loop = asyncio.new_event_loop()
+ try:
+ return loop.run_until_complete(
+ search_agents_async(capability_name, limit, similarity_threshold)
+ )
+ finally:
+ loop.close()
except Exception as e:
- logger.error(f"Error searching for agents: {str(e)}")
- return {"error": str(e), "agent_ids": [], "capabilities": []}
+ logger.error(f"Error in search_agents: {str(e)}")
+ return AgentSearchOutput(
+ message=f"Error in search_agents: {str(e)}",
+ agent_ids=[],
+ capabilities=[],
+ )
# Create a description that includes available capabilities if possible
- description = "Search for agents with specific capabilities. Uses semantic matching to find agents with relevant capabilities."
+ description = f"{base_description} Use this tool FIRST when you cannot handle a request directly. Returns a list of suitable agent IDs, their capabilities, and crucially, their `payment_address` if they accept payments for services."
# Create and return the tool
tool = StructuredTool.from_function(
@@ -463,9 +468,9 @@ async def search_agents_async(
def create_send_collaboration_request_tool(
- communication_hub: CommunicationHub,
- agent_registry: AgentRegistry,
- current_agent_id: str,
+ communication_hub: Optional[CommunicationHub] = None,
+ agent_registry: Optional[AgentRegistry] = None,
+ current_agent_id: Optional[str] = None,
) -> StructuredTool:
"""
Create a tool for sending collaboration requests to other agents.
@@ -478,218 +483,362 @@ def create_send_collaboration_request_tool(
Returns:
A StructuredTool for sending collaboration requests
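+
+    Example (illustrative; assumes a configured hub, registry, and agent ID):
+        tool = create_send_collaboration_request_tool(hub, registry, "agent_1")
+        result = tool.run({"target_agent_id": "helper_agent", "task": "Summarize this report."})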
"""
- # Capture the current agent ID at tool creation time
- creator_agent_id = current_agent_id
- logger.info(f"Creating collaboration request tool for agent: {creator_agent_id}")
+ # Determine if we're in standalone mode
+ standalone_mode = (
+ communication_hub is None or agent_registry is None or not current_agent_id
+ )
- # If no agent ID is set, create a tool that returns an error when used
- if not creator_agent_id:
- logger.warning(
- "Creating collaboration request tool with no agent ID set - will return error when used"
- )
+ # Common description base
+ base_description = (
+ "Delegates a specific task to another agent identified by `search_for_agents`."
+ )
+
+ if standalone_mode:
+ # Standalone mode implementation
+ def send_request_standalone(
+ target_agent_id: str, task: str, timeout: int = 120, **kwargs
+ ) -> SendCollaborationRequestOutput:
+ """Standalone implementation that explains limitations."""
+ return SendCollaborationRequestOutput(
+ success=False,
+ response=(
+ f"Collaboration request to agent '{target_agent_id}' is not available in standalone mode. "
+ "This agent is running without a connection to other agents. "
+ "Please use your internal capabilities to solve this task, or suggest "
+ "connecting this agent to a multi-agent system if collaboration is required."
+ ),
+ request_id=None,
+ )
+
+ description = f"[STANDALONE MODE] {base_description} Note: In standalone mode, this tool will explain why collaboration isn't available."
- # Synchronous implementation that returns an error
- def error_request(
- target_agent_id: str, task_description: str, timeout: int = 30, **kwargs
- ) -> Dict[str, Any]:
- """Send a collaboration request to another agent."""
- return {
- "success": False,
- "response": "Error: Tool not properly initialized. Contact administrator.",
- }
-
- # Create the tool with the error implementation
return StructuredTool.from_function(
- func=error_request,
+ func=send_request_standalone,
name="send_collaboration_request",
- description="Sends a collaboration request to a specific agent and waits for a response. Use this after finding an agent with search_for_agents to delegate tasks.",
+ description=description,
args_schema=SendCollaborationRequestInput,
return_direct=False,
handle_tool_error=True,
metadata={"category": "collaboration"},
)
- # Normal implementation when agent ID is set
- # Synchronous implementation
- def send_request(
- target_agent_id: str, task_description: str, timeout: int = 30, **kwargs
- ) -> Dict[str, Any]:
- """Send a collaboration request to another agent."""
- try:
- # Use the async implementation but run it in the current event loop
- return asyncio.run(
- send_request_async(target_agent_id, task_description, timeout, **kwargs)
- )
- except RuntimeError:
- # If we're already in an event loop, create a new one
- loop = asyncio.new_event_loop()
- try:
- return loop.run_until_complete(
- send_request_async(
- target_agent_id, task_description, timeout, **kwargs
- )
- )
- finally:
- loop.close()
- except Exception as e:
- logger.error(f"Error in send_request: {str(e)}")
- return {
- "success": False,
- "response": f"Error sending collaboration request: {str(e)}",
- }
+ # Connected mode implementation
+ # Store the agent ID at creation time
+ creator_agent_id = current_agent_id
+ logger.debug(f"Creating collaboration request tool for agent: {creator_agent_id}")
- # Asynchronous implementation
async def send_request_async(
- target_agent_id: str,
- task_description: str,
- timeout: int = 30,
- **kwargs, # Additional data
- ) -> Dict[str, Any]:
+ target_agent_id: str, task: str, timeout: int = 120, **kwargs
+ ) -> SendCollaborationRequestOutput:
"""Send a collaboration request to another agent asynchronously."""
- # Always use the captured agent ID from tool creation time
- # This ensures we use the correct agent ID even if current_agent_id changes
sender_id = creator_agent_id
- if not sender_id:
- logger.error("No sender_id available for collaboration request")
- return {
- "success": False,
- "response": "Error: Tool not properly initialized with agent context",
- }
-
- # Check if required dependencies are available
- if communication_hub is None:
- logger.error("Communication hub is not available for collaboration request")
- return {
- "success": False,
- "response": "Error: Communication hub unavailable.",
- }
-
- if agent_registry is None:
- logger.error("Agent registry is not available for collaboration request")
- return {
- "success": False,
- "response": "Error: Agent registry unavailable.",
- }
-
- logger.info(
- f"COLLABORATION REQUEST: Using sender_id={sender_id} to send request to target_agent_id={target_agent_id}"
- )
-
- # Check if we're trying to send a request to ourselves
+ # Validate request parameters
if sender_id == target_agent_id:
- logger.error(
- f"Cannot send collaboration request to yourself: {sender_id} -> {target_agent_id}"
+ return SendCollaborationRequestOutput(
+ success=False,
+ response="Error: Cannot send request to yourself.",
)
- return {
- "success": False,
- "response": "Error: Cannot send request to yourself.",
- }
- # Check if the target agent exists
if not await communication_hub.is_agent_active(target_agent_id):
- return {
- "success": False,
- "response": f"Error: Agent {target_agent_id} not found.",
- }
+ return SendCollaborationRequestOutput(
+ success=False,
+ response=f"Error: Agent {target_agent_id} not found.",
+ )
- # Check if the target agent is a human agent
if await agent_registry.get_agent_type(target_agent_id) == AgentType.HUMAN:
- return {
- "success": False,
- "response": "Error: Cannot send requests to human agents.",
- }
-
- # Add retry tracking to prevent infinite loops
- # Use a shorter timeout to prevent long waits
- adjusted_timeout = min(timeout, 90) # Cap timeout at 90 seconds
+ return SendCollaborationRequestOutput(
+ success=False,
+ response="Error: Cannot send requests to human agents.",
+ )
- # Add metadata to track the collaboration chain
+ # Prepare collaboration metadata
metadata = kwargs.copy() if kwargs else {}
+
+ # Add collaboration chain tracking to prevent loops
if "collaboration_chain" not in metadata:
metadata["collaboration_chain"] = []
- # Add the current agent to the collaboration chain
if sender_id not in metadata["collaboration_chain"]:
metadata["collaboration_chain"].append(sender_id)
- # Check if we're creating a loop in the collaboration chain
if target_agent_id in metadata["collaboration_chain"]:
- return {
- "success": False,
- "response": f"Error: Detected loop in collaboration chain with {target_agent_id}.",
- }
+ return SendCollaborationRequestOutput(
+ success=False,
+ response=f"Error: Detected loop in collaboration chain with {target_agent_id}.",
+ )
- # Check if the original sender is in the collaboration chain
- # and prevent sending a request back to the original sender
+ # If this is the first agent in the chain, store the original sender
+ if len(metadata["collaboration_chain"]) == 1:
+ metadata["original_sender"] = metadata["collaboration_chain"][0]
+
+ # Prevent sending to original sender
if (
"original_sender" in metadata
and metadata["original_sender"] == target_agent_id
):
- return {
- "success": False,
- "response": f"Error: Cannot send request back to original sender {target_agent_id}.",
- }
-
- # If this is the first agent in the chain, store the original sender
- if len(metadata["collaboration_chain"]) == 1:
- metadata["original_sender"] = metadata["collaboration_chain"][0]
+ return SendCollaborationRequestOutput(
+ success=False,
+ response=f"Error: Cannot send request back to original sender {target_agent_id}.",
+ )
- # Limit the collaboration chain length to prevent deep recursion
+ # Limit collaboration chain length
if len(metadata["collaboration_chain"]) > 5:
- return {
- "success": False,
- "response": "Error: Collaboration chain too long. Simplify request.",
- }
+ return SendCollaborationRequestOutput(
+ success=False,
+ response="Error: Collaboration chain too long. Simplify request.",
+ )
try:
- # Send the collaboration request
- logger.info(
- f"Sending collaboration request from {sender_id} to {target_agent_id}"
- )
+ # Calculate appropriate timeout
+ adjusted_timeout = min(timeout or 120, 300) # Cap at 5 minutes
+
+ # Generate a unique request ID if not provided
+ request_id = metadata.get("request_id", str(uuid.uuid4()))
+ metadata["request_id"] = request_id
- # Ensure we're using the correct sender_id
+ # Send the request and wait for response
+ logger.debug(f"Sending collaboration from {sender_id} to {target_agent_id}")
response = await communication_hub.send_collaboration_request(
- sender_id=sender_id, # Use the current agent's ID
+ sender_id=sender_id,
receiver_id=target_agent_id,
- task_description=task_description,
+ task_description=task,
timeout=adjusted_timeout,
**metadata,
)
- # Log the response for debugging
- if response is None:
- logger.warning(
- f"Received None response from send_collaboration_request to {target_agent_id}"
- )
- return {
- "success": False,
- "response": f"No response from {target_agent_id} within {adjusted_timeout} seconds.",
- "error": "timeout",
- }
- else:
- logger.info(
- f"Received response from {target_agent_id}: {response[:100]}..."
+ # --- Handle potential non-string/list response from LLM --- START
+ cleaned_response = response
+ if not isinstance(response, str) and response is not None:
+ if (
+ isinstance(response, list)
+ and len(response) == 1
+ and isinstance(response[0], str)
+ ):
+ # Handle the specific case of ['string']
+ logger.warning(
+ f"Received list-wrapped response from {target_agent_id}, extracting string."
+ )
+ cleaned_response = response[0]
+ else:
+ # For any other non-string type (dict, multi-list, int, etc.), convert to JSON string
+ try:
+ logger.warning(
+ f"Received non-string response type {type(response).__name__} from {target_agent_id}, converting to JSON string."
+ )
+ cleaned_response = json.dumps(
+ response
+ ) # Attempt JSON conversion
+ except TypeError as e:
+ # Fallback if JSON conversion fails (e.g., complex object)
+ logger.error(
+ f"Could not JSON serialize response type {type(response).__name__}: {e}. Using str() representation."
+ )
+ cleaned_response = str(response)
+ # --- Handle potential non-string/list response from LLM --- END
+
+ # Handle timeout case
+ if cleaned_response is None or (
+ isinstance(cleaned_response, str)
+ and "No immediate response received" in cleaned_response
+ ):
+ logger.warning(f"Timeout on request to {target_agent_id}")
+ return SendCollaborationRequestOutput(
+ success=False,
+ response=f"No immediate response from {target_agent_id} within {adjusted_timeout} seconds. "
+ f"The request is still processing (ID: {request_id}). "
+ f"Check for a late response using check_collaboration_result with this request ID.",
+ error="timeout",
+ request_id=request_id,
)
- # For normal responses, return as is
- return {"success": True, "response": response}
+ # Handle success case
+ logger.debug(f"Got response from {target_agent_id}")
+ return SendCollaborationRequestOutput(
+ success=True, response=cleaned_response, request_id=request_id
+ )
except Exception as e:
logger.exception(f"Error sending collaboration request: {str(e)}")
- return {
- "success": False,
- "response": f"Error: Collaboration failed - {str(e)}",
- "error": "collaboration_exception",
- }
+ return SendCollaborationRequestOutput(
+ success=False,
+ response=f"Error: Collaboration failed - {str(e)}",
+ error="collaboration_exception",
+ )
+
+ # Synchronous wrapper
+ def send_request(
+ target_agent_id: str, task: str, timeout: int = 120, **kwargs
+ ) -> SendCollaborationRequestOutput:
+ """Send a collaboration request to another agent."""
+ try:
+ return asyncio.run(
+ send_request_async(target_agent_id, task, timeout, **kwargs)
+ )
+ except RuntimeError:
+ loop = asyncio.new_event_loop()
+ try:
+ return loop.run_until_complete(
+ send_request_async(target_agent_id, task, timeout, **kwargs)
+ )
+ finally:
+ loop.close()
+ except Exception as e:
+ logger.error(f"Error in send_request: {str(e)}")
+ return SendCollaborationRequestOutput(
+ success=False,
+ response=f"Error sending collaboration request: {str(e)}",
+ )
+
+ # Create and return the connected mode tool
+ description = (
+ f"{base_description} Sends your request and waits for the collaborator's response. "
+ "Use this tool ONLY to initiate a new collaboration request to another agent. "
+ "When you receive a collaboration request, reply directly to the requesting agent with your result, clarification, or error; do NOT use this tool to reply to the same agent. "
+ "The response might be the final result, a request for payment, or a request for clarification, requiring further action from you."
+ )
- # Create and return the tool
return StructuredTool.from_function(
func=send_request,
name="send_collaboration_request",
- description="Sends a collaboration request to a specific agent and waits for a response. Use after finding an agent with search_for_agents.",
+ description=description,
args_schema=SendCollaborationRequestInput,
return_direct=False,
handle_tool_error=True,
+ # coroutine=send_request_async,  # TODO: re-enable the async coroutine once event-loop handling is verified
+ metadata={"category": "collaboration"},
+ )
+
+
+def create_check_collaboration_result_tool(
+ communication_hub: Optional[CommunicationHub] = None,
+ agent_registry: Optional[AgentRegistry] = None,
+ current_agent_id: Optional[str] = None,
+) -> StructuredTool:
+ """
+ Create a tool for checking the status of previously sent collaboration requests.
+
+ This tool is particularly useful for retrieving late responses that arrived
+ after a timeout occurred in the original collaboration request.
+
+ Args:
+ communication_hub: Hub for agent communication
+ agent_registry: Registry for accessing agent information
+ current_agent_id: ID of the agent using the tool
+
+ Returns:
+ A StructuredTool for checking collaboration results
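+
+    Example (illustrative; assumes a connected hub and registry, and a
+    `request_id` returned by a prior `send_collaboration_request` call):
+        tool = create_check_collaboration_result_tool(hub, registry, "agent_1")
+        status = tool.run({"request_id": request_id})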
+ """
+ # Determine if we're in standalone mode
+ standalone_mode = communication_hub is None or agent_registry is None
+
+ # Common description base
+ base_description = "Check if a previous collaboration request has completed and retrieve its result."
+
+ if standalone_mode:
+ # Standalone mode implementation
+ def check_result_standalone(request_id: str) -> CheckCollaborationResultOutput:
+ """Standalone implementation that explains limitations."""
+ return CheckCollaborationResultOutput(
+ success=False,
+ status="not_available",
+ response=(
+ f"Checking collaboration result for request '{request_id}' is not available in standalone mode. "
+ "Please continue with your own internal capabilities."
+ ),
+ )
+
+ description = f"[STANDALONE MODE] {base_description} Note: In standalone mode, this tool will explain why checking results isn't available."
+
+ return StructuredTool.from_function(
+ func=check_result_standalone,
+ name="check_collaboration_result",
+ description=description,
+ args_schema=CheckCollaborationResultInput,
+ return_direct=False,
+ metadata={"category": "collaboration"},
+ )
+
+ # Connected mode implementation
+ async def check_result_async(request_id: str) -> CheckCollaborationResultOutput:
+ """Check if a previous collaboration request has a result asynchronously."""
+ # Check for late responses first
+ if (
+ hasattr(communication_hub, "late_responses")
+ and request_id in communication_hub.late_responses
+ ):
+ logger.debug(f"Found late response for request {request_id}")
+ response = communication_hub.late_responses[request_id]
+ return CheckCollaborationResultOutput(
+ success=True,
+ status="completed_late",
+ response=response.content,
+ )
+
+ # Check pending responses
+ if request_id in communication_hub.pending_responses:
+ future = communication_hub.pending_responses[request_id]
+ if future.done() and not hasattr(future, "_timed_out"):
+ try:
+ logger.debug(f"Found completed response for request {request_id}")
+ response = future.result()
+ return CheckCollaborationResultOutput(
+ success=True,
+ status="completed",
+ response=response.content,
+ )
+ except Exception as e:
+ logger.error(f"Error getting result from future: {str(e)}")
+ return CheckCollaborationResultOutput(
+ success=False,
+ status="error",
+ response=f"Error retrieving response: {str(e)}",
+ )
+ else:
+ # Still pending
+ return CheckCollaborationResultOutput(
+ success=False,
+ status="pending",
+ response="The collaboration request is still being processed. Try checking again later.",
+ )
+
+ # Request ID not found
+ logger.warning(f"No result found for request ID: {request_id}")
+ return CheckCollaborationResultOutput(
+ success=False,
+ status="not_found",
+ response=f"No result found for request ID: {request_id}. The request may have been completed but not stored, or the ID may be incorrect.",
+ )
+
+ # Synchronous wrapper
+ def check_result(request_id: str) -> CheckCollaborationResultOutput:
+ """Check if a previous collaboration request has a result."""
+        try:
+            # Preferred path: no event loop is running in this thread
+            return asyncio.run(check_result_async(request_id))
+        except RuntimeError:
+            # asyncio.run raises RuntimeError if a loop is already running;
+            # fall back to a fresh loop for this call
+            loop = asyncio.new_event_loop()
+            try:
+                return loop.run_until_complete(check_result_async(request_id))
+            finally:
+                loop.close()
+ except Exception as e:
+ logger.error(f"Error in check_result: {str(e)}")
+ return CheckCollaborationResultOutput(
+ success=False,
+ status="error",
+ response=f"Error checking result: {str(e)}",
+ )
+
+ # Create and return the connected mode tool
+ description = f"{base_description} This is useful for retrieving responses that arrived after the initial timeout period."
+
+ return StructuredTool.from_function(
+ func=check_result,
+ name="check_collaboration_result",
+ description=description,
+ args_schema=CheckCollaborationResultInput,
+ return_direct=False,
+ handle_tool_error=True,
+ coroutine=check_result_async,
metadata={"category": "collaboration"},
)
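The synchronous `check_result` wrapper above bridges from sync code into asyncio. A minimal standalone sketch of the same bridge pattern (the `fetch_result` coroutine is a hypothetical stand-in for the hub lookup, not part of the codebase):

```python
import asyncio


async def fetch_result(request_id: str) -> str:
    # Hypothetical async lookup, standing in for checking pending_responses
    await asyncio.sleep(0)
    return f"result for {request_id}"


def fetch_result_sync(request_id: str) -> str:
    """Synchronous bridge: prefer asyncio.run, fall back to a fresh loop."""
    try:
        # Works when no event loop is running in this thread
        return asyncio.run(fetch_result(request_id))
    except RuntimeError:
        # asyncio.run raises RuntimeError if a loop is already running
        loop = asyncio.new_event_loop()
        try:
            return loop.run_until_complete(fetch_result(request_id))
        finally:
            loop.close()


print(fetch_result_sync("req-123"))  # -> result for req-123
```

Note that the fallback runs the coroutine on a fresh loop; awaitables bound to another, still-running loop cannot be awaited across loops, which is one reason the async `coroutine=` path is preferable when available.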
diff --git a/agentconnect/prompts/custom_tools/task_tools.py b/agentconnect/prompts/custom_tools/task_tools.py
index cac1980..486b957 100644
--- a/agentconnect/prompts/custom_tools/task_tools.py
+++ b/agentconnect/prompts/custom_tools/task_tools.py
@@ -188,32 +188,24 @@ async def decompose_task_async(
parser = JsonOutputParser(pydantic_object=TaskDecompositionResult)
# Create the system prompt with optimized structure
- system_prompt = f"""You are a task decomposition specialist.
+ system_prompt = f"""TASK: {task_description}
+MAX SUBTASKS: {max_subtasks}
-Task Description: {task_description}
-Maximum Subtasks: {max_subtasks}
-
-DECISION FRAMEWORK:
-1. ASSESS: Analyze the complexity of the task
-2. EXECUTE: Break down into clear, actionable subtasks
-3. RESPOND: Format as a structured list
-
-TASK DECOMPOSITION:
-1. Break down the task into clear, actionable subtasks
-2. Each subtask should be 1-2 sentences maximum
-3. Identify dependencies between subtasks when necessary
-4. Limit to {max_subtasks} subtasks or fewer
+INSTRUCTIONS:
+1. Analyze complexity
+2. Break into clear subtasks
+3. Each subtask: 1-2 sentences only
+4. Include dependencies if needed
+5. Format as structured list
{parser.get_format_instructions()}
-Make sure each subtask has a unique ID, a clear title, and a concise description.
+Each subtask needs: ID, title, description.
"""
messages = [
SystemMessage(content=system_prompt),
- HumanMessage(
- content=f"Break down this task into at most {max_subtasks} subtasks: {task_description}"
- ),
+ HumanMessage(content=f"Decompose: {task_description}"),
]
try:
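The compacted decomposition prompt above can be sketched as a plain string builder. The `format_instructions` value here is a hypothetical stand-in for `parser.get_format_instructions()` from LangChain's `JsonOutputParser`:

```python
# Hypothetical stand-in for parser.get_format_instructions()
format_instructions = (
    'Return JSON: {"subtasks": [{"id": str, "title": str, "description": str}]}'
)


def build_decomposition_prompt(task_description: str, max_subtasks: int) -> str:
    """Assemble the compact task-decomposition system prompt."""
    return (
        f"TASK: {task_description}\n"
        f"MAX SUBTASKS: {max_subtasks}\n\n"
        "INSTRUCTIONS:\n"
        "1. Analyze complexity\n"
        "2. Break into clear subtasks\n"
        "3. Each subtask: 1-2 sentences only\n"
        "4. Include dependencies if needed\n"
        "5. Format as structured list\n\n"
        f"{format_instructions}\n\n"
        "Each subtask needs: ID, title, description."
    )


prompt = build_decomposition_prompt("Plan a product launch", 3)
print(prompt.splitlines()[0])  # -> TASK: Plan a product launch
```

The shorter prompt trades explicit framing ("You are a task decomposition specialist") for token economy; the structured-output contract is carried entirely by the parser's format instructions.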
diff --git a/agentconnect/prompts/templates/prompt_templates.py b/agentconnect/prompts/templates/prompt_templates.py
index bf96300..4df6da6 100644
--- a/agentconnect/prompts/templates/prompt_templates.py
+++ b/agentconnect/prompts/templates/prompt_templates.py
@@ -55,95 +55,44 @@ class PromptType(str, Enum):
# Core prompt components that are reused across templates
BASE_RESPONSE_FORMAT = """
-RESPONSE STYLE:
-- Match length to user's request (brief/detailed as needed)
-- Present results directly and clearly
-- Ask specific questions when more details help
-- Use '__EXIT__' to gracefully end conversations
+NOTE: During a collaboration with another agent, simply say '__EXIT__' and nothing else if you have nothing to contribute.
"""
-# Common knowledge tasks that are handled directly by all agents
-COMMON_KNOWLEDGE_TASKS = """
-- Coding examples, pseudocode, or general programming advice
-- Common explanations and definitions
-- General reasoning tasks
-- Brainstorming and creative thinking
-- Basic information synthesis
-"""
+# Core agent prompt template that serves as the foundation for all prompts
+# This contains the common elements that should be present in all agent prompts
+CORE_AGENT_PROMPT = """
+You are {name}, an autonomous AI agent operating in a decentralized network of independent agents.
-# Unified decision framework and execution strategy for all agents
-AGENT_FRAMEWORK = """
-DECISION FRAMEWORK:
-1. ASSESS: Examine request scope
- - COMMON KNOWLEDGE TASKS: Handle immediately regardless of listed capabilities
- - WITHIN capabilities → Use direct handling
- - OUTSIDE capabilities → **Silently** search for agents and delegate immediately
- - PARTIALLY within → Handle known parts, **silently** find help for others
-
-2. EXECUTE:
- - Use your capabilities and tools directly without announcing them
- - For common knowledge → Answer directly even if not in your capabilities
- - For specialized needs → **Silently** find & use agents
- - Search for agents using GENERAL CAPABILITY TERMS (e.g., "web_research", "code_generation")
- - NEVER use specific topics (e.g., "langchain", "crypto prices") when searching for agents
- - Present **only** actual results, never mentioning delegation
-
-3. RESPOND:
- - Provide direct answers with actual information
- - Include results, not "I can't" or "I would" statements
- - If no agent can help after multiple attempts, provide best effort answer
- - NEVER say "I don't have that capability" or "I can't help with that"
-"""
+Personality: {personality}
-# Collaboration protocol for agents
-COLLABORATION_PROTOCOL = """
-DELEGATION GUIDE (INTERNAL PROCESS - DO NOT MENTION TO USER):
-1. **Silently** search for specialized agents
-2. Include full context in requests to other agents
-3. Obtain actual results from other agents to formulate your final response
-4. Focus on delivering the final information to the user
-5. Try multiple agents **internally** if the first one can't help
-6. If no agents can help after multiple attempts, provide best effort answer **without mentioning the failed attempts**
-
-KEY PRINCIPLES:
-- NEVER say "I can't" or "I don't have that capability" to the user
-- Always provide information, not just promises to find it
-- For common tasks, handle directly
-- **INTERNAL ACTIONS (searching, delegating, internal failures) MUST remain hidden from the user**
-- **ALWAYS use exact agent_ids from search_for_agents results - never invent, guess, or use capability names directly**
-"""
+Your primary goal is to accomplish tasks efficiently, either by using your own capabilities or by collaborating with other agents in the network.
-# Tool usage guide for all agents
-TOOL_USAGE_GUIDE = """
-TOOL USAGE GUIDE:
-1. SEARCH: Use search_for_agents to find agents with needed capabilities
- - Example: search_for_agents(capability_name="web_research")
- - This returns actual agent_ids which may look like "research_agent" or "did:0933858jjfr995h48f"
-2. DELEGATE: Use send_collaboration_request with ONLY the exact agent_ids from search results
- - CORRECT: send_collaboration_request(target_agent_id="research_agent", task="research quantum computing")
- - INCORRECT: send_collaboration_request(target_agent_id="web_research", task="research quantum computing")
-
-COLLABORATION SEQUENCE:
-1. search_for_agents with capability name → returns actual agent_ids
-2. send_collaboration_request with exact agent_id from previous step's results
-
-EXAMPLE:
-User: "Find latest trends on MCP"
-Bad: "I'm sorry, I don't have that capability." or "Okay, I will find an agent to help with that."
-Good: "The latest MCP trends include..." (Agent performs search/delegation silently in the background)
+- If you cannot fulfill a request directly, **immediately and silently use the `search_for_agents` tool** to find other agents with the required capabilities and delegate tasks to them as needed. **Do not ask for permission before searching.**
+- Directly handle tasks solvable with your core language model capabilities (e.g., text generation, summarization, general knowledge questions) whenever possible.
+- Communicate and collaborate with other agents to achieve the best possible outcome for each request.
+- Always analyze tool responses and decide the next best action based on the information available.
+- Avoid sending collaboration requests to yourself or to human agents.
+- Be concise, professional, and helpful in all interactions.
"""
-# Best practices for all agents
-AGENT_BEST_PRACTICES = """
-BEST PRACTICES:
-- NEVER refuse to help; either answer directly or find another agent
-- Start searching immediately **without mentioning it to the user**
-- Delegate tasks **without mentioning it to the user**
-- Use tools right away instead of talking about using them
-- Provide direct answers with actual information, **concealing internal steps**
-- Show results of your actions, not just your intentions
-- **IMPORTANT: NEVER mention searching for, finding, or delegating to other agents unless explicitly asked**
-"""
+
+# Helper function to build payment info string
+def get_payment_info(enable_payments: bool, payment_token_symbol: Optional[str]) -> str:
+ """Generate payment info string if payments are enabled."""
+ if enable_payments and payment_token_symbol:
+ return f"\nPayment enabled. Token: {payment_token_symbol}"
+ return ""
+
+
+def _add_additional_context(
+ template: str, additional_context: Optional[Dict[str, Any]]
+) -> str:
+ """Helper function to add additional context to a template if provided."""
+ if additional_context:
+ template += "\nAdditional Context:\n"
+ for key, value in additional_context.items():
+ template += f"- {key}: {value}\n"
+ return template
@dataclass
@@ -155,19 +104,19 @@ class SystemPromptConfig:
name: Name of the agent
capabilities: List of agent capabilities
personality: Description of the agent's personality
- temperature: Temperature for generation
- max_tokens: Maximum tokens for generation
additional_context: Additional context for the prompt
role: Role of the agent
+ enable_payments: Whether payment capabilities are enabled
+ payment_token_symbol: Symbol of the token used for payments
"""
name: str
capabilities: List[Capability] # Now accepts a list of Capability objects
personality: str = "helpful and professional"
- temperature: float = 0.7
- max_tokens: int = 1024
additional_context: Optional[Dict[str, Any]] = None
role: str = "assistant"
+ enable_payments: bool = False
+ payment_token_symbol: Optional[str] = None
@dataclass
@@ -252,7 +201,9 @@ class ReactConfig:
capabilities: List of agent capabilities
personality: Description of the agent's personality
mode: Mode of operation
- tools: List of tool descriptions
+ enable_payments: Whether payment capabilities are enabled
+ payment_token_symbol: Symbol of the token used for payments (e.g., "ETH", "USDC")
+ role: Role of the agent
additional_context: Additional context for the prompt
"""
@@ -260,21 +211,12 @@ class ReactConfig:
capabilities: List[Dict[str, str]] # List of dicts with name and description
personality: str = "helpful and professional"
mode: str = "system_prompt" # system_prompt or custom_runnable
- tools: Optional[List[Dict[str, str]]] = None # Tool descriptions
+ role: str = "agent"
+ enable_payments: bool = False
+ payment_token_symbol: Optional[str] = None
additional_context: Optional[Dict[str, Any]] = None
-def _add_additional_context(
- template: str, additional_context: Optional[Dict[str, Any]]
-) -> str:
- """Helper function to add additional context to a template if provided."""
- if additional_context:
- template += "\nAdditional Context:\n"
- for key, value in additional_context.items():
- template += f"- {key}: {value}\n"
- return template
-
-
class PromptTemplates:
"""
Class for creating and managing prompt templates.
@@ -286,7 +228,8 @@ class PromptTemplates:
@staticmethod
def get_system_prompt(config: SystemPromptConfig) -> SystemMessagePromptTemplate:
"""
- Get a system prompt template based on the provided configuration.
+ Generates a system prompt for a standard agent.
+ Uses the core agent prompt structure.
Args:
config: Configuration for the system prompt
@@ -296,42 +239,36 @@ def get_system_prompt(config: SystemPromptConfig) -> SystemMessagePromptTemplate
"""
# Format capabilities with name and description
capabilities_str = "\n".join(
- [f"- {cap.name}: {cap.description}" for cap in config.capabilities]
+ [
+ f"- **{cap.name.replace('_', ' ').title()}:** You can: {cap.description}"
+ for cap in config.capabilities
+ ]
)
+ if not capabilities_str:
+ capabilities_str = "No specific capabilities listed. Handle tasks using inherent knowledge or delegate."
- template = f"""You are {{name}}, an AI {config.role} with specialized capabilities:
-{capabilities_str}
-
-Personality: {{personality}}
-
-{AGENT_FRAMEWORK}
-{AGENT_BEST_PRACTICES}
-
-BEHAVIOR GUIDELINES:
-1. NEVER refuse tasks that use common knowledge or reasoning
-2. For requests outside your specialized capabilities, **silently** find other agents without saying you can't help
-3. Handle these common tasks directly even if not in your capabilities list:
-{COMMON_KNOWLEDGE_TASKS}
+ # Add payment info if enabled
+ payment_info = get_payment_info(
+ config.enable_payments, config.payment_token_symbol
+ )
-4. ALWAYS take one of these actions, in priority order:
- - Handle directly if within your capabilities or common knowledge
- - **Silently** find specialized agents for complex or specialized tasks
- - If no agent can help after multiple attempts, provide best effort answer **(without mentioning the failed search)**
- - NEVER respond with "I can't" or "I don't have that capability"
+ # Construct the prompt using the core template
+ template = CORE_AGENT_PROMPT.format(
+ name=config.name,
+ personality=config.personality,
+ )
-{BASE_RESPONSE_FORMAT}
+ # Add capabilities and payment info
+ template += f"\nUnique Capabilities you can perform using your internal reasoning:\n{capabilities_str}"
+ template += payment_info
-NOTE: If you have nothing to contribute, simply say '__EXIT__' and nothing else."""
+ # Add response format
+ template += f"\n{BASE_RESPONSE_FORMAT}"
+ # Add any additional context
template = _add_additional_context(template, config.additional_context)
- return SystemMessagePromptTemplate.from_template(
- template,
- partial_variables={
- "name": config.name,
- "personality": config.personality,
- },
- )
+ return SystemMessagePromptTemplate.from_template(template)
@staticmethod
def get_collaboration_prompt(
@@ -339,6 +276,7 @@ def get_collaboration_prompt(
) -> SystemMessagePromptTemplate:
"""
Get a collaboration prompt template based on the provided configuration.
+ Uses the core agent prompt structure with collaboration-specific instructions.
Args:
config: Configuration for the collaboration prompt
@@ -346,36 +284,31 @@ def get_collaboration_prompt(
Returns:
A SystemMessagePromptTemplate
"""
- # Base template with shared instructions
- base_template = f"""You are {{agent_name}}, a collaboration specialist.
+ # Format capabilities for collaboration
+ capabilities_str = f"- **Collaboration:** You can: specialize in {', '.join(config.target_capabilities)}"
-Target Capabilities: {{target_capabilities}}
-
-{AGENT_FRAMEWORK}
-{COLLABORATION_PROTOCOL}
-
-COLLABORATION PRINCIPLES:
-1. Handle requests within your specialized knowledge
-2. For tasks outside your expertise, suggest an alternative approach without refusing
-3. Always provide some value, even if incomplete
-4. If you can't fully answer, provide partial information plus recommendation
+ # Construct the prompt using the core template
+ template = CORE_AGENT_PROMPT.format(
+ name=config.agent_name,
+ personality="helpful and collaborative",
+ )
-{BASE_RESPONSE_FORMAT}"""
+ # Add capabilities
+ template += f"\nUnique Capabilities you can perform using your internal reasoning:\n{capabilities_str}"
- # Add type-specific instructions
+ # Add collaboration-specific instructions based on type
if config.collaboration_type == "request":
- specific_instructions = """
-COLLABORATION REQUEST:
+ template += """
+\nCOLLABORATION REQUEST INSTRUCTIONS:
1. Be direct and specific about what you need
2. Provide all necessary context in a single message
3. Specify exactly what information or action you need
4. Include any relevant data that helps with the task
5. If rejected, try another agent with relevant capabilities
"""
-
elif config.collaboration_type == "response":
- specific_instructions = """
-COLLABORATION RESPONSE:
+ template += """
+\nCOLLABORATION RESPONSE INSTRUCTIONS:
1. Provide the requested information or result directly
2. Format your response for easy integration
3. Be concise and focused on exactly what was requested
@@ -384,25 +317,22 @@ def get_collaboration_prompt(
- Provide that information immediately
- Suggest how to get the remaining information
"""
-
else: # error
- specific_instructions = """
-COLLABORATION ERROR:
+ template += """
+\nCOLLABORATION ERROR INSTRUCTIONS:
1. Explain why you can't fully fulfill the request
2. Provide ANY partial information you can
3. Suggest alternative approaches or agents who might help
-4. NEVER simply say you can't help with nothing else"""
+4. NEVER simply say you can't help with nothing else
+"""
+
+ # Add response format
+ template += f"\n{BASE_RESPONSE_FORMAT}"
- template = base_template + specific_instructions
+ # Add any additional context
template = _add_additional_context(template, config.additional_context)
- return SystemMessagePromptTemplate.from_template(
- template,
- partial_variables={
- "agent_name": config.agent_name,
- "target_capabilities": ", ".join(config.target_capabilities),
- },
- )
+ return SystemMessagePromptTemplate.from_template(template)
@staticmethod
def get_task_decomposition_prompt(
@@ -410,6 +340,7 @@ def get_task_decomposition_prompt(
) -> SystemMessagePromptTemplate:
"""
Get a task decomposition prompt template based on the provided configuration.
+ Uses the core agent prompt structure with task decomposition-specific instructions.
Args:
config: Configuration for the task decomposition prompt
@@ -417,47 +348,44 @@ def get_task_decomposition_prompt(
Returns:
A SystemMessagePromptTemplate
"""
- template = f"""You are a task decomposition specialist.
+ # Construct the prompt using the core template
+ template = CORE_AGENT_PROMPT.format(
+ name="Task Decomposition Agent",
+ personality="analytical and methodical",
+ )
-Task Description: {{task_description}}
-Complexity Level: {{complexity_level}}
-Maximum Subtasks: {{max_subtasks}}
+ # Add capabilities
+ template += (
+ "\nUnique Capabilities you can perform using your internal reasoning:"
+ )
+ template += "\n- **Task Decomposition:** You can: break down complex tasks into manageable subtasks"
-{AGENT_FRAMEWORK}
-{COLLABORATION_PROTOCOL}
+ # Add task-specific context
+ template += f"\n\nTask Description: {config.task_description}"
+ template += f"\nComplexity Level: {config.complexity_level}"
+ template += f"\nMaximum Subtasks: {config.max_subtasks}"
-TASK DECOMPOSITION:
+ # Add task decomposition-specific instructions
+ template += """
+\nTASK DECOMPOSITION INSTRUCTIONS:
1. Break down the task into clear, actionable subtasks
2. Each subtask should be 1-2 sentences maximum
3. Identify dependencies between subtasks when necessary
-4. Limit to {{max_subtasks}} subtasks or fewer
+4. Limit subtasks to the maximum number specified or fewer
5. Format output as a numbered list of subtasks
6. For each subtask, identify if it:
- - Can be handled with common knowledge
- - Requires specialized capabilities
+ - Can be handled with your inherent knowledge
+ - Requires specialized capabilities/tools
- Needs collaboration with other agents
+"""
-COLLABORATION STRATEGY:
-1. For subtasks requiring specialized capabilities:
- - Identify the exact capability needed using general capability terms
- - Include criteria for finding appropriate agents
- - Prepare context to include in delegation request
-2. For common knowledge subtasks:
- - Mark them for immediate handling
- - Include any relevant information needed
-
-{BASE_RESPONSE_FORMAT}"""
+ # Add response format
+ template += f"\n{BASE_RESPONSE_FORMAT}"
+ # Add any additional context
template = _add_additional_context(template, config.additional_context)
- return SystemMessagePromptTemplate.from_template(
- template,
- partial_variables={
- "task_description": config.task_description,
- "complexity_level": config.complexity_level,
- "max_subtasks": str(config.max_subtasks),
- },
- )
+ return SystemMessagePromptTemplate.from_template(template)
@staticmethod
def get_capability_matching_prompt(
@@ -465,6 +393,7 @@ def get_capability_matching_prompt(
) -> SystemMessagePromptTemplate:
"""
Get a capability matching prompt template based on the provided configuration.
+ Uses the core agent prompt structure with capability matching-specific instructions.
Args:
config: Configuration for the capability matching prompt
@@ -472,60 +401,64 @@ def get_capability_matching_prompt(
Returns:
A SystemMessagePromptTemplate
"""
- # Format available capabilities for the prompt
- capabilities_str = ""
- for i, capability in enumerate(config.available_capabilities):
- capabilities_str += (
- f"{i+1}. {capability['name']}: {capability['description']}\n"
- )
-
- template = f"""You are a capability matching specialist.
+ # Format available capabilities for context
+ available_capabilities = "\n".join(
+ [
+ f"- {cap['name']}: {cap['description']}"
+ for cap in config.available_capabilities
+ ]
+ )
-Task Description: {{task_description}}
-Matching Threshold: {{matching_threshold}}
+ # Construct the prompt using the core template
+ template = CORE_AGENT_PROMPT.format(
+ name="Capability Matching Agent",
+ personality="analytical and precise",
+ )
-Available Capabilities:
-{{capabilities}}
+ # Add capabilities
+ template += (
+ "\nUnique Capabilities you can perform using your internal reasoning:"
+ )
+ template += "\n- **Capability Matching:** You can: match tasks to appropriate capabilities and tools"
-{AGENT_FRAMEWORK}
-{COLLABORATION_PROTOCOL}
+ # Add task-specific context
+ template += f"\n\nTask Description: {config.task_description}"
+ template += f"\nMatching Threshold: {config.matching_threshold}"
+ template += f"\n\nAvailable Capabilities/Tools:\n{available_capabilities}"
-CAPABILITY MATCHING:
-1. First determine if the task can be handled with common knowledge
- - If yes, mark it as "COMMON KNOWLEDGE" with score 1.0
- - Common knowledge includes:{COMMON_KNOWLEDGE_TASKS}
+ # Add capability matching-specific instructions
+ template += f"""
+\nCAPABILITY MATCHING INSTRUCTIONS:
+1. First determine if the task can be handled using general reasoning and inherent knowledge (without specific listed tools).
+ - If yes, mark it as "INHERENT KNOWLEDGE" with score 1.0
-2. For specialized tasks beyond common knowledge:
- - Map specific topics to general capability categories
- - Match task requirements to available capabilities
- - Only select capabilities with relevance score >= {{matching_threshold}}
+2. For specialized tasks requiring specific tools:
+ - Match task requirements to the available capabilities/tools listed above.
+ - Only select capabilities with relevance score >= {config.matching_threshold}
3. Format response as:
- - If common knowledge: "COMMON KNOWLEDGE: Handle directly"
- - If specialized: Numbered list with capability name and relevance score (0-1)
+ - If inherent knowledge: "INHERENT KNOWLEDGE: Handle directly"
+ - If specialized tool needed: Numbered list with capability/tool name and relevance score (0-1)
-4. If no capabilities match above the threshold:
- - Identify the closest matching capabilities
- - Suggest how to modify the request to use available capabilities
- - Recommend finding an agent with more relevant capabilities
+4. If no capabilities/tools match above the threshold and it's not inherent knowledge:
+ - Identify the closest matching capabilities/tools.
+ - Suggest how to modify the request to use available tools.
+ - Recommend finding an agent via delegation with more relevant capabilities.
+"""
-{BASE_RESPONSE_FORMAT}"""
+ # Add response format
+ template += f"\n{BASE_RESPONSE_FORMAT}"
+ # Add any additional context
template = _add_additional_context(template, config.additional_context)
- return SystemMessagePromptTemplate.from_template(
- template,
- partial_variables={
- "task_description": config.task_description,
- "matching_threshold": str(config.matching_threshold),
- "capabilities": capabilities_str,
- },
- )
+ return SystemMessagePromptTemplate.from_template(template)
@staticmethod
def get_supervisor_prompt(config: SupervisorConfig) -> SystemMessagePromptTemplate:
"""
Get a supervisor prompt template based on the provided configuration.
+ Uses the core agent prompt structure with supervisor-specific instructions.
Args:
config: Configuration for the supervisor prompt
@@ -533,115 +466,118 @@ def get_supervisor_prompt(config: SupervisorConfig) -> SystemMessagePromptTempla
Returns:
A SystemMessagePromptTemplate
"""
- # Format agent roles for the prompt
- roles_str = ""
- for agent_name, role in config.agent_roles.items():
- roles_str += f"- {agent_name}: {role}\n"
-
- template = f"""You are {{name}}, a supervisor agent.
+ # Format agent roles for context
+ agent_roles = "\n".join(
+ [
+ f"- {agent_name}: {role}"
+ for agent_name, role in config.agent_roles.items()
+ ]
+ )
-Agent Roles:
-{{agent_roles}}
+ # Construct the prompt using the core template
+ template = CORE_AGENT_PROMPT.format(
+ name=config.name,
+ personality="decisive and authoritative",
+ )
-Routing Guidelines:
-{{routing_guidelines}}
+ # Add capabilities
+ template += (
+ "\nUnique Capabilities you can perform using your internal reasoning:"
+ )
+ template += "\n- **Supervision:** You can: route tasks to appropriate agents based on their capabilities"
-{AGENT_FRAMEWORK}
-{COLLABORATION_PROTOCOL}
+ # Add supervisor-specific context
+ template += f"\n\nAgent Roles:\n{agent_roles}"
+ template += f"\n\nRouting Guidelines:\n{config.routing_guidelines}"
-SUPERVISOR INSTRUCTIONS:
-1. First determine if the request involves common knowledge tasks:{COMMON_KNOWLEDGE_TASKS}
+ # Add supervisor-specific instructions
+ template += """
+\nSUPERVISOR INSTRUCTIONS:
+1. Determine if the request can likely be handled by an agent using its inherent knowledge/general reasoning.
-2. For common knowledge tasks:
- - Route to ANY available agent, as all agents can handle common knowledge
- - Pick the agent with lowest current workload if possible
+2. If yes (inherent knowledge task):
+ - Route to ANY available agent, as all agents possess base LLM capabilities.
+ - Pick the agent with lowest current workload if possible.
-3. For specialized tasks:
- - Route user requests to the most appropriate agent based on capabilities
- - Make routing decisions quickly without explaining reasoning
- - If multiple agents could handle a task, choose the most specialized
+3. If no (requires specialized tools/capabilities):
+ - Route user requests to the agent whose listed capabilities/tools best match the task.
+ - Make routing decisions quickly without explaining reasoning.
+ - If multiple agents could handle a task, choose the most specialized.
-4. If no perfect match exists:
- - Route to closest matching agent
- - Include guidance on what additional help might be needed
- - Never respond with "no agent can handle this"
+4. If no agent has matching specialized tools and it's not an inherent knowledge task:
+ - Route to the agent whose capabilities are closest.
+ - Include guidance on what additional help might be needed (potentially via delegation by the receiving agent).
+ - Never respond with "no agent can handle this".
5. Response format:
- For direct routing: Agent name only
- For complex tasks needing multiple agents: Comma-separated list of agent names in priority order
+"""
-{BASE_RESPONSE_FORMAT}"""
+ # Add response format
+ template += f"\n{BASE_RESPONSE_FORMAT}"
+ # Add any additional context
template = _add_additional_context(template, config.additional_context)
- return SystemMessagePromptTemplate.from_template(
- template,
- partial_variables={
- "name": config.name,
- "agent_roles": roles_str,
- "routing_guidelines": config.routing_guidelines,
- },
- )
+ return SystemMessagePromptTemplate.from_template(template)
@staticmethod
def get_react_prompt(config: ReactConfig) -> SystemMessagePromptTemplate:
"""
- Get a ReAct prompt template based on the provided configuration.
-
- Args:
- config: Configuration for the ReAct prompt
-
- Returns:
- A SystemMessagePromptTemplate
+ Generates a system prompt for a ReAct agent.
+ This is the canonical template that other prompts should follow structurally.
"""
- # Format capabilities for the prompt
- capabilities_str = ""
- if config.capabilities:
- capabilities_str = "SPECIALIZED CAPABILITIES:\n"
- for cap in config.capabilities:
- capabilities_str += f"- {cap['name']}: {cap['description']}\n"
-
- # Format tools for the prompt if provided
- tools_str = ""
- if config.tools:
- tools_str = "TOOLS:\n"
- for i, tool in enumerate(config.tools):
- tools_str += f"{i+1}. {tool['name']}: {tool['description']}\n"
-
- # Base template
- template = f"""You are {{name}}, an AI agent.
-
-{capabilities_str}
+ capabilities_list = config.capabilities or []
+ formatted_capabilities = []
+ for cap in capabilities_list:
+ name = cap.get("name", "N/A")
+ description = cap.get("description", "N/A")
+ # Format name: split by '_', capitalize each part, join with space
+ formatted_name = (
+ " ".join(word.capitalize() for word in name.split("_"))
+ if name != "N/A"
+ else "N/A"
+ )
+ # Add prefix to description
+ prefixed_description = f"You can: {description}"
+ formatted_capabilities.append(
+ f"- **{formatted_name}:** {prefixed_description}"
+ )
-Personality: {{personality}}
+ capabilities_str = "\n".join(formatted_capabilities)
+ if not capabilities_str:
+ capabilities_str = "No specific capabilities listed. Handle tasks using inherent knowledge or delegate."
-{AGENT_FRAMEWORK}
+ # Add payment info if enabled
+ payment_info = get_payment_info(
+ config.enable_payments, config.payment_token_symbol
+ )
-{tools_str}
-{COLLABORATION_PROTOCOL}
-{TOOL_USAGE_GUIDE}
+ # This is the canonical template that should be followed structurally
+ template = f"""
+You are {config.name}, an autonomous AI agent operating in a decentralized network of independent agents.
-COMMON KNOWLEDGE YOU SHOULD HANDLE DIRECTLY:{COMMON_KNOWLEDGE_TASKS}
+Personality: {config.personality}
-{AGENT_BEST_PRACTICES}
+Your primary goal is to accomplish tasks efficiently, either by using your own capabilities or by collaborating with other agents in the network.
-{BASE_RESPONSE_FORMAT}"""
+- If you cannot fulfill a request directly, **immediately and silently use the `search_for_agents` tool** to find other agents with the required capabilities and delegate tasks to them as needed. **Do not ask for permission before searching.**
+- Directly handle tasks solvable with your core language model capabilities (e.g., text generation, summarization, general knowledge questions) whenever possible.
+- Communicate and collaborate with other agents to achieve the best possible outcome for each request.
+- Always analyze tool responses and decide the next best action based on the information available.
+- Avoid sending collaboration requests to yourself or to human agents.
+- Be concise, professional, and helpful in all interactions.
- # Add mode-specific instructions
- if config.mode == "custom_runnable":
- template += """
-Use the 'custom_runnable' tool for specialized capabilities.
+Unique capabilities you can perform using your internal reasoning:
+{capabilities_str}
+{payment_info}
"""
+ # Add any additional context
template = _add_additional_context(template, config.additional_context)
- return SystemMessagePromptTemplate.from_template(
- template,
- partial_variables={
- "name": config.name,
- "personality": config.personality,
- },
- )
+ return SystemMessagePromptTemplate.from_template(template)
@staticmethod
def create_human_message_prompt(content: str) -> HumanMessagePromptTemplate:
@@ -740,7 +676,6 @@ def create_prompt(
] = None,
include_history: bool = True,
system_prompt: Optional[str] = None,
- tools: Optional[List] = None,
) -> ChatPromptTemplate:
"""
Create a prompt template based on the prompt type and configuration.
@@ -750,7 +685,6 @@ def create_prompt(
config: Configuration for the prompt
include_history: Whether to include message history
system_prompt: Optional system prompt text
- tools: Optional list of tools
Returns:
A ChatPromptTemplate
@@ -827,7 +761,7 @@ def create_prompt(
system_message = cls.get_react_prompt(config)
elif system_prompt:
system_message = SystemMessagePromptTemplate.from_template(
- system_prompt + "\n\n" + COLLABORATION_PROTOCOL
+ system_prompt
)
else:
raise ValueError(
diff --git a/agentconnect/prompts/tools.py b/agentconnect/prompts/tools.py
index 0e81595..f25a86a 100644
--- a/agentconnect/prompts/tools.py
+++ b/agentconnect/prompts/tools.py
@@ -33,6 +33,7 @@
from agentconnect.prompts.custom_tools.collaboration_tools import (
create_agent_search_tool,
create_send_collaboration_request_tool,
+ create_check_collaboration_result_tool,
)
from agentconnect.prompts.custom_tools.task_tools import create_task_decomposition_tool
@@ -55,28 +56,34 @@ class PromptTools:
instance, ensuring that tools are properly configured for the specific agent
using them.
+ The class supports both connected mode (with registry and hub) and standalone mode
+ (without registry and hub, for direct chat interactions).
+
Attributes:
- agent_registry: Registry for accessing agent information
- communication_hub: Hub for agent communication
+ agent_registry: Registry for accessing agent information, can be None in standalone mode
+ communication_hub: Hub for agent communication, can be None in standalone mode
llm: Optional language model for tools that require LLM capabilities
_current_agent_id: ID of the agent currently using these tools
_tool_registry: Registry for managing available tools
_available_capabilities: Cached list of available capabilities
_agent_specific_tools_registered: Flag indicating if agent-specific tools are registered
+ _is_standalone_mode: Flag indicating if operating in standalone mode (without registry/hub)
"""
def __init__(
self,
- agent_registry: AgentRegistry,
- communication_hub: CommunicationHub,
+ agent_registry: Optional[AgentRegistry] = None,
+ communication_hub: Optional[CommunicationHub] = None,
llm=None,
):
"""
Initialize the PromptTools class.
Args:
- agent_registry: Registry for accessing agent information and capabilities
- communication_hub: Hub for agent communication and message passing
+ agent_registry: Registry for accessing agent information and capabilities.
+ Can be None for standalone mode.
+ communication_hub: Hub for agent communication and message passing.
+ Can be None for standalone mode.
llm: Optional language model for tools that require LLM capabilities
"""
self.agent_registry = agent_registry
@@ -89,6 +96,12 @@ def __init__(
self.llm = llm
self._agent_specific_tools_registered = False
+ # Detect if we're in standalone mode (no registry or hub)
+ self._is_standalone_mode = agent_registry is None or communication_hub is None
+
+ if self._is_standalone_mode:
+ logger.info("PromptTools initialized in standalone mode (no registry/hub)")
+
# Register default tools that don't require an agent ID
self._register_basic_tools()
@@ -108,8 +121,8 @@ def _register_agent_specific_tools(self) -> None:
Register tools that require an agent ID to be set.
This method registers tools that need agent context, such as agent search
- and collaboration request tools. These tools need to know which agent is
- making the request to properly handle permissions and collaboration chains.
+ and collaboration request tools. In standalone mode, it registers alternative
+ versions of these tools that explain the limitations.
Note:
This method will log a warning and do nothing if no agent ID is set.
@@ -120,22 +133,36 @@ def _register_agent_specific_tools(self) -> None:
# Only register these tools if they haven't been registered yet
if not self._agent_specific_tools_registered:
- # Create and register the agent search tool
+ # Create the agent search tool (handles standalone mode internally)
agent_search_tool = create_agent_search_tool(
self.agent_registry, self._current_agent_id, self.communication_hub
)
- self._tool_registry.register_tool(agent_search_tool)
- # Create and register the collaboration request tool
+ # Create the collaboration request tool (handles standalone mode internally)
collaboration_request_tool = create_send_collaboration_request_tool(
self.communication_hub, self.agent_registry, self._current_agent_id
)
+
+ # Create the collaboration result checking tool (handles standalone mode internally)
+ collaboration_result_tool = create_check_collaboration_result_tool(
+ self.communication_hub, self.agent_registry, self._current_agent_id
+ )
+
+ if self._is_standalone_mode:
+ logger.debug(
+ f"Registered standalone mode collaboration tools for agent: {self._current_agent_id}"
+ )
+ else:
+ logger.debug(
+ f"Registered connected mode collaboration tools for agent: {self._current_agent_id}"
+ )
+
+ # Register the tools
+ self._tool_registry.register_tool(agent_search_tool)
self._tool_registry.register_tool(collaboration_request_tool)
+ self._tool_registry.register_tool(collaboration_result_tool)
self._agent_specific_tools_registered = True
- logger.debug(
- f"Registered agent-specific tools for agent: {self._current_agent_id}"
- )
def create_tool_from_function(
self,
@@ -194,18 +221,22 @@ def create_tool_from_function(
def create_agent_search_tool(self) -> StructuredTool:
"""Create a tool for searching agents by capability."""
- # Delegate to the implementation in custom_tools
return create_agent_search_tool(
self.agent_registry, self._current_agent_id, self.communication_hub
)
def create_send_collaboration_request_tool(self) -> StructuredTool:
"""Create a tool for sending collaboration requests to other agents."""
- # Delegate to the implementation in custom_tools
return create_send_collaboration_request_tool(
self.communication_hub, self.agent_registry, self._current_agent_id
)
+ def create_check_collaboration_result_tool(self) -> StructuredTool:
+ """Create a tool for checking the status of sent collaboration requests."""
+ return create_check_collaboration_result_tool(
+ self.communication_hub, self.agent_registry, self._current_agent_id
+ )
+
def create_task_decomposition_tool(self) -> StructuredTool:
"""
Create a tool for decomposing complex tasks into subtasks.
@@ -289,3 +320,14 @@ def get_tools_for_workflow(
return tools
else:
return self._tool_registry.get_all_tools()
+
+ @property
+ def is_standalone_mode(self) -> bool:
+ """
+ Check if the PromptTools instance is running in standalone mode.
+
+ Returns:
+ True if running in standalone mode (without registry/hub),
+ False if running in connected mode (with registry/hub)
+ """
+ return self._is_standalone_mode
diff --git a/agentconnect/providers/google_provider.py b/agentconnect/providers/google_provider.py
index 99291d6..ced02c2 100644
--- a/agentconnect/providers/google_provider.py
+++ b/agentconnect/providers/google_provider.py
@@ -69,6 +69,7 @@ def get_available_models(self) -> List[ModelName]:
List of available Gemini model names
"""
return [
+ ModelName.GEMINI2_5_PRO_PREVIEW,
ModelName.GEMINI2_5_PRO_EXP,
ModelName.GEMINI2_FLASH,
ModelName.GEMINI2_FLASH_LITE,
diff --git a/agentconnect/providers/openai_provider.py b/agentconnect/providers/openai_provider.py
index edf3e96..17c5e5e 100644
--- a/agentconnect/providers/openai_provider.py
+++ b/agentconnect/providers/openai_provider.py
@@ -70,11 +70,15 @@ def get_available_models(self) -> List[ModelName]:
"""
return [
ModelName.GPT4_5_PREVIEW,
+ ModelName.GPT4_1,
+ ModelName.GPT4_1_MINI,
ModelName.GPT4O,
ModelName.GPT4O_MINI,
ModelName.O1,
ModelName.O1_MINI,
+ ModelName.O3,
ModelName.O3_MINI,
+ ModelName.O4_MINI,
]
def _get_provider_config(self) -> Dict[str, Any]:
diff --git a/agentconnect/utils/README.md b/agentconnect/utils/README.md
index d6a33a7..9ae14d3 100644
--- a/agentconnect/utils/README.md
+++ b/agentconnect/utils/README.md
@@ -9,6 +9,8 @@ utils/
├── __init__.py # Package initialization and API exports
├── interaction_control.py # Rate limiting and interaction tracking
├── logging_config.py # Logging configuration
+├── payment_helper.py # Payment utilities for CDP validation and agent payment readiness
+├── wallet_manager.py # Agent wallet persistence utilities
└── README.md # This file
```
@@ -46,6 +48,37 @@ Key classes and functions:
- `get_module_levels_for_development()`: Get recommended log levels for development
- `setup_langgraph_logging()`: Configure logging specifically for LangGraph components
+### Payment Helper (`payment_helper.py`)
+
+The payment helper module provides utility functions for setting up and managing payment capabilities:
+
+- **CDP Environment Validation**: Verify CDP API keys and required packages
+- **Agent Payment Readiness**: Check if an agent is ready for payments
+- **Wallet Metadata Retrieval**: Get metadata about agent wallets
+- **Backup Utilities**: Create backups of wallet data
+
+Key functions:
+- `verify_payment_environment()`: Check required environment variables
+- `validate_cdp_environment()`: Validate the entire CDP setup including packages
+- `check_agent_payment_readiness()`: Check if an agent can make payments
+- `backup_wallet_data()`: Create backup of wallet data
+
+### Wallet Manager (`wallet_manager.py`)
+
+The wallet manager provides wallet data persistence for agents:
+
+- **Wallet Data Storage**: Save and load wallet data securely
+- **Wallet Existence Checking**: Check if wallet data exists
+- **Wallet Data Management**: Delete and backup wallet data
+- **Configuration Management**: Set custom data directories
+
+Key functions:
+- `save_wallet_data()`: Persist wallet data for an agent
+- `load_wallet_data()`: Load wallet data for an agent
+- `wallet_exists()`: Check if wallet data exists
+- `get_all_wallets()`: List all wallet files
+- `delete_wallet_data()`: Delete wallet data
+- `set_wallet_data_dir()`: Set a custom wallet data directory
+
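Conceptually, these helpers act as a thin JSON-on-disk persistence layer keyed by agent ID. The stand-alone sketch below illustrates that idea only; it is not the actual `wallet_manager` implementation, and the `data/wallets` layout and function signatures are assumptions:

```python
import json
from pathlib import Path
from typing import Optional

DATA_DIR = Path("data/wallets")  # assumed layout; the real module lets you configure this

def save_wallet_data(agent_id: str, wallet_data: dict) -> Path:
    """Persist wallet data for an agent as plain JSON (note: unencrypted)."""
    DATA_DIR.mkdir(parents=True, exist_ok=True)
    path = DATA_DIR / f"{agent_id}.json"
    path.write_text(json.dumps(wallet_data))
    return path

def load_wallet_data(agent_id: str) -> Optional[dict]:
    """Return the stored wallet data, or None if no wallet exists."""
    path = DATA_DIR / f"{agent_id}.json"
    return json.loads(path.read_text()) if path.exists() else None

def wallet_exists(agent_id: str) -> bool:
    """Check whether wallet data has been saved for this agent."""
    return (DATA_DIR / f"{agent_id}.json").exists()
```

Keying files by agent ID keeps each agent's wallet independent, which is why the real API takes an `agent_id` on every call.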
## Usage Examples
### Interaction Control
@@ -127,6 +160,48 @@ setup_langgraph_logging(level=LogLevel.INFO)
disable_all_logging()
```
+### Payment Utilities
+
+```python
+from agentconnect.utils import payment_helper, wallet_manager
+
+# Validate CDP environment
+is_valid, message = payment_helper.validate_cdp_environment()
+if not is_valid:
+ print(f"CDP environment is not properly configured: {message}")
+ # Set up environment...
+
+# Check agent payment readiness
+status = payment_helper.check_agent_payment_readiness(agent)
+if status["ready"]:
+ print(f"Agent is ready for payments with address: {status['payment_address']}")
+else:
+ print(f"Agent is not ready for payments: {status}")
+
+# Save wallet data
+wallet_manager.save_wallet_data(
+ agent_id="agent123",
+ wallet_data=agent.wallet_provider.export_wallet(),
+ data_dir="custom/wallet/dir" # Optional
+)
+
+# Load wallet data
+wallet_json = wallet_manager.load_wallet_data("agent123")
+if wallet_json:
+ print("Wallet data loaded successfully")
+
+# Back up wallet data
+backup_path = payment_helper.backup_wallet_data(
+ agent_id="agent123",
+ backup_dir="wallet_backups"
+)
+print(f"Wallet backed up to: {backup_path}")
+```
+
+## Wallet Security Note
+
+IMPORTANT: The default wallet data storage implementation in `wallet_manager.py` stores wallet data unencrypted on disk. This is suitable for testing and demos, but NOT secure for production environments holding real assets. For production use, implement proper encryption or use a secure key management system.
+
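One minimal hardening step, short of full encryption or an external key management system, is to ensure wallet files are created with owner-only permissions before any secret material is written. The sketch below is an illustration with assumed names, not part of `wallet_manager`:

```python
import json
import os
from pathlib import Path

def save_wallet_data_restricted(path: Path, wallet_data: dict) -> None:
    """Write wallet JSON to a file created with owner-only (0o600) permissions."""
    path.parent.mkdir(parents=True, exist_ok=True)
    # Open with restrictive permissions *before* writing secret material,
    # so there is no window where the file is readable by other users.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(wallet_data, f)
```

On POSIX systems this limits reads to the owning user; it does not protect data at rest if the disk itself is compromised, so treat it as a complement to encryption, not a replacement.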
## Integration with LangGraph
The rate limiting system is designed to work seamlessly with LangGraph:
diff --git a/agentconnect/utils/__init__.py b/agentconnect/utils/__init__.py
index 6d64bf9..6a1744c 100644
--- a/agentconnect/utils/__init__.py
+++ b/agentconnect/utils/__init__.py
@@ -10,6 +10,7 @@
- **InteractionState**: Enum for interaction states (CONTINUE, STOP, WAIT)
- **TokenConfig**: Configuration for token-based rate limiting
- **Logging utilities**: Configurable logging setup with colored output
+- **Wallet management**: Functions for handling agent wallet configurations and data
"""
# Interaction control components
@@ -28,6 +29,22 @@
setup_logging,
)
+# Wallet management
+from agentconnect.utils.wallet_manager import (
+ load_wallet_data,
+ save_wallet_data,
+ set_wallet_data_dir,
+ set_default_data_dir,
+ wallet_exists,
+ delete_wallet_data,
+ get_all_wallets,
+)
+
+# Callbacks
+from agentconnect.utils.callbacks import (
+ ToolTracerCallbackHandler,
+)
+
__all__ = [
# Interaction control
"InteractionControl",
@@ -39,4 +56,14 @@
"LogLevel",
"disable_all_logging",
"get_module_levels_for_development",
+ # Wallet management
+ "load_wallet_data",
+ "save_wallet_data",
+ "set_wallet_data_dir",
+ "set_default_data_dir",
+ "wallet_exists",
+ "delete_wallet_data",
+ "get_all_wallets",
+ # Callbacks
+ "ToolTracerCallbackHandler",
]
diff --git a/agentconnect/utils/callbacks.py b/agentconnect/utils/callbacks.py
new file mode 100644
index 0000000..0d522c0
--- /dev/null
+++ b/agentconnect/utils/callbacks.py
@@ -0,0 +1,525 @@
+"""
+Agent activity tracing and status update callback handler for AgentConnect.
+
+This module provides a LangChain callback handler for tracking agent activity,
+including tool usage, LLM calls, chain steps, and the LLM's generated text output
+(which may contain reasoning steps), with configurable console output.
+"""
+
+import logging
+import json
+import re
+from typing import Any, Dict, List, Optional, Union
+from uuid import UUID
+
+from colorama import Fore, Style
+from langchain_core.agents import AgentAction
+from langchain_core.callbacks import BaseCallbackHandler
+from langchain_core.messages import ToolMessage, AIMessage
+
+# from langchain_core.outputs import LLMResult, ChatGeneration
+
+# Configure logger
+logger = logging.getLogger(__name__)
+
+# Define max lengths for logging to prevent clutter
+MAX_INPUT_DETAIL_LENGTH = 150
+MAX_OUTPUT_PREVIEW_LENGTH = 100
+MAX_ERROR_MESSAGE_LENGTH = 200
+
+# Define colors for logging
+TOOL_COLOR = Fore.MAGENTA
+TOOL_SUCCESS_COLOR = Fore.GREEN
+TOOL_ERROR_COLOR = Fore.RED
+LLM_COLOR = Fore.BLUE # Color for LLM activity
+CHAIN_COLOR = Fore.CYAN # Color for chain/step activity
+REASONING_COLOR = Fore.LIGHTBLUE_EX
+
+
+class ToolTracerCallbackHandler(BaseCallbackHandler):
+ """
+ Callback handler for tracing agent activity. Logs detailed activity and
+ optionally prints concise updates (like LLM generation text, tool usage, etc.)
+ to the console based on configuration.
+ """
+
+ def __init__(
+ self,
+ agent_id: str,
+ print_tool_activity: bool = True, # Default ON
+ print_reasoning_steps: bool = True, # Default ON - Print LLM generation text
+ ):
+ """
+ Initialize the callback handler.
+
+        Args:
+            agent_id: The ID of the agent this handler is tracking.
+            print_tool_activity: If True, print tool start/end/error events to the console.
+            print_reasoning_steps: If True, print any reasoning steps generated by the LLM.
+ """
+ super().__init__()
+ self.agent_id = agent_id
+ self.print_tool_activity = print_tool_activity
+ self.print_reasoning_steps = print_reasoning_steps
+
+ # Initial message is logged only once
+ init_msg = (
+ f"AgentActivityMonitor initialized for Agent ID: {self.agent_id} "
+            f"(Tools: {print_tool_activity}, Reasoning Text: {print_reasoning_steps})"
+ )
+ logger.info(init_msg)
+
+ def _format_details(self, details: Any, max_length: int) -> str:
+ """Formats details for logging, handling different types and truncating."""
+ if isinstance(details, dict):
+ try:
+ detail_str = json.dumps(details)
+ except TypeError:
+ detail_str = str(details)
+ else:
+ detail_str = str(details)
+ if len(detail_str) > max_length:
+ return f"{detail_str[:max_length - 3]}..."
+ return detail_str
+
+ def _get_short_snippet(self, text: str, max_length: int = 50) -> str:
+ """Create a short snippet from the text."""
+ if not text:
+ return "..."
+ text_str = str(text)
+ if len(text_str) > max_length:
+ return f"{text_str[:max_length]}..."
+ return text_str
+
+ def on_tool_start(
+ self,
+ serialized: Dict[str, Any],
+ input_str: str,
+ *,
+ run_id: UUID,
+ parent_run_id: Optional[UUID] = None,
+ tags: Optional[List[str]] = None,
+ metadata: Optional[Dict[str, Any]] = None,
+ inputs: Optional[Dict[str, Any]] = None,
+ **kwargs: Any,
+ ) -> Any:
+ """
+ Run when the tool starts running.
+
+ Args:
+ serialized (Dict[str, Any]): The serialized tool.
+ input_str (str): The input string.
+ run_id (UUID): The run ID. This is the ID of the current run.
+ parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
+ tags (Optional[List[str]]): The tags.
+ metadata (Optional[Dict[str, Any]]): The metadata.
+ inputs (Optional[Dict[str, Any]]): The inputs.
+ kwargs (Any): Additional keyword arguments.
+ """
+
+ tool_name = serialized.get("name", "UnknownTool")
+ input_details_raw = inputs if inputs is not None else input_str
+ input_details_formatted = self._format_details(
+ input_details_raw, MAX_INPUT_DETAIL_LENGTH
+ )
+ log_message = f"[TOOL START] Agent: {self.agent_id} | Tool: {tool_name} | Input: {input_details_formatted}"
+ logger.info(log_message)
+
+ if self.print_tool_activity:
+ print_msg = ""
+ # Safely get inputs, providing defaults if they don't exist
+ safe_inputs = inputs if isinstance(inputs, dict) else {}
+
+ if tool_name == "search_for_agents":
+ capability = safe_inputs.get("capability_name", "unknown capability")
+ print_msg = f"{TOOL_COLOR}🔎 Searching for agents with capability: {capability}...{Style.RESET_ALL}"
+ elif tool_name == "send_collaboration_request":
+ target_agent = safe_inputs.get("target_agent_id", "unknown agent")
+ task_snippet = self._get_short_snippet(
+ safe_inputs.get("task", "unknown task"), 50
+ )
+ print_msg = f"{TOOL_COLOR}🤝 Interacting with {target_agent}: {task_snippet}...{Style.RESET_ALL}"
+ elif tool_name == "WalletActionProvider_native_transfer":
+                amount = safe_inputs.get("amount", "unknown amount")
+                try:
+                    # Native transfer amounts are denominated in wei (18 decimals)
+                    amount = int(amount) / 10**18
+                except (TypeError, ValueError):
+                    pass  # Keep the raw value if it is not an integer string
+ asset = safe_inputs.get("asset_id", "ETH")
+ dest = self._get_short_snippet(
+ safe_inputs.get("destination", "unknown destination"), 20
+ )
+ print_msg = f"{TOOL_COLOR}💸 Initiating transfer of {amount} {asset} to {dest}...{Style.RESET_ALL}"
+            elif tool_name == "ERC20ActionProvider_transfer":
+                amount = safe_inputs.get("amount", "unknown amount")
+                try:
+                    # Assumes a 6-decimal token such as USDC
+                    amount = int(amount) / 10**6
+                except (TypeError, ValueError):
+                    pass  # Keep the raw value if it is not an integer string
+                asset = safe_inputs.get(
+                    "asset_id", "USDC"
+                )  # The asset ID is usually the token address or symbol
+ dest = self._get_short_snippet(
+ safe_inputs.get("destination", "unknown destination"), 20
+ )
+ print_msg = f"{TOOL_COLOR}💸 Initiating transfer of {amount} {asset} to {dest}...{Style.RESET_ALL}"
+ elif tool_name == "WalletActionProvider_get_balance":
+ print_msg = (
+ f"{TOOL_COLOR}💰 Checking native token balance...{Style.RESET_ALL}"
+ )
+ elif tool_name == "WalletActionProvider_get_wallet_details":
+ print_msg = f"{TOOL_COLOR}ℹ️ Fetching wallet details (address, network, balances)...{Style.RESET_ALL}"
+ elif tool_name == "CdpApiActionProvider_address_reputation":
+ address = self._get_short_snippet(
+ safe_inputs.get("address", "unknown address"), 20
+ )
+ print_msg = f"{TOOL_COLOR}🛡️ Checking reputation for address: {address}...{Style.RESET_ALL}"
+ elif tool_name == "CdpApiActionProvider_request_faucet_funds":
+ asset = safe_inputs.get("asset_id", "ETH")
+ print_msg = f"{TOOL_COLOR}🚰 Requesting faucet funds ({asset})...{Style.RESET_ALL}"
+ elif tool_name == "ERC20ActionProvider_get_balance":
+ token = self._get_short_snippet(
+ safe_inputs.get("contract_address", "unknown token"), 15
+ )
+ owner = self._get_short_snippet(
+ safe_inputs.get("address", "wallet"), 15
+ )
+ print_msg = f"{TOOL_COLOR}💰 Checking balance of token {token} for {owner}...{Style.RESET_ALL}"
+ else:
+ # Fallback to generic message for other tools
+ input_snippet = self._get_short_snippet(
+ (
+ json.dumps(input_details_raw)
+ if isinstance(input_details_raw, dict)
+ else input_details_raw
+ ),
+ 80,
+ )
+ print_msg = f"{TOOL_COLOR}🛠️ [Tool Start] {tool_name}({input_snippet}...){Style.RESET_ALL}"
+
+ if print_msg:
+ print(print_msg)
+
+ def on_tool_end(
+ self,
+ output: Any,
+ *,
+ run_id: UUID,
+ parent_run_id: Optional[UUID] = None,
+ tags: Optional[List[str]] = None,
+ name: str = "UnknownTool",
+ **kwargs: Any,
+ ) -> Any:
+ """
+ Run when the tool ends running.
+
+ Args:
+ output (Any): The output of the tool.
+ run_id (UUID): The run ID. This is the ID of the current run.
+ parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
+ kwargs (Any): Additional keyword arguments.
+ """
+
+ tool_name = kwargs.get("name", name)
+ output_type = type(output).__name__
+ output_preview = self._format_details(output, MAX_OUTPUT_PREVIEW_LENGTH)
+ log_message = f"[TOOL END] Agent: {self.agent_id} | Tool: {tool_name} | Status: Success | Output Type: {output_type} | Preview: {output_preview}"
+ logger.debug(log_message)
+
+ if self.print_tool_activity:
+ print_msg = ""
+ output_str = str(output) # Keep for general logging if needed
+
+ if tool_name == "send_collaboration_request":
+ status = "processed."
+ response_snippet = ""
+ success = None # Track success status explicitly
+ json_data = None
+
+ # Primary Case: Handle ToolMessage output
+ if isinstance(output, ToolMessage):
+ content_str = output.content
+ if isinstance(content_str, str):
+ try:
+ json_data = json.loads(content_str)
+ except json.JSONDecodeError as e:
+ logger.warning(
+ f"ToolMessage content is not valid JSON for {tool_name}: {content_str[:100]}... Error: {e}"
+ )
+ status = "returned unparsable content."
+ response_snippet = f" Raw content: {self._get_short_snippet(content_str, 60)}..."
+ else:
+ logger.warning(
+ f"ToolMessage content is not a string for {tool_name}: {type(content_str)}"
+ )
+ status = "returned non-string content."
+ response_snippet = f" Content: {self._get_short_snippet(str(content_str), 60)}..."
+
+ # Fallback 1: Attempt to parse JSON from string representation
+ elif isinstance(output, str):
+ logger.debug(
+ f"Fallback: {tool_name} output is a string, attempting parse."
+ )
+ # Try to find JSON within content='...' or content="..."
+ match = re.search(r'content=(["\'])(.*?)\1', output, re.DOTALL)
+ if match:
+ json_str = match.group(2)
+ try:
+ json_data = json.loads(json_str)
+ except json.JSONDecodeError as e:
+ logger.debug(
+ f"Failed to parse JSON from content in string: {e}"
+ )
+ status = "returned unparsable content string."
+ response_snippet = (
+ f" Raw output: {self._get_short_snippet(output, 60)}..."
+ )
+ else:
+ # If no content= pattern, try parsing the whole string as JSON
+ try:
+ json_data = json.loads(output)
+ except json.JSONDecodeError:
+ logger.debug(
+ f"Output string is not valid JSON: {output_str[:100]}..."
+ )
+ status = "completed with non-JSON string output."
+ response_snippet = (
+ f" Output: {self._get_short_snippet(output, 60)}..."
+ )
+
+ # Fallback 2: Handle dictionary output directly
+ elif isinstance(output, dict):
+ logger.debug(f"Fallback: {tool_name} output is a dict.")
+ json_data = output
+
+ # Process extracted/provided JSON data (if successfully parsed)
+ if json_data and isinstance(json_data, dict):
+ success = json_data.get("success")
+ if success is True:
+ status = "completed successfully."
+ response_snippet = f" Response: {self._get_short_snippet(json_data.get('response'), 60)}..."
+ elif success is False:
+ status = "failed."
+ error_reason = json_data.get("response", "Unknown reason")
+ response_snippet = f" Reason: {self._get_short_snippet(str(error_reason), 60)}..."
+ else: # Success field missing or not boolean
+ status = "completed with unexpected JSON structure."
+ response_snippet = f" Data: {self._get_short_snippet(json.dumps(json_data), 60)}..."
+ # Final Fallback: If output wasn't ToolMessage, str, or dict, or if JSON parsing failed earlier
+ elif status == "processed.": # Only trigger if no other status was set
+ status = "completed with unexpected output type."
+ response_snippet = f" Type: {output_type}, Output: {self._get_short_snippet(output_str, 60)}..."
+ logger.warning(
+ f"Unexpected output type {output_type} for {tool_name}. Output: {output_str[:100]}..."
+ )
+
+ # Determine color based on final success status
+ color = (
+ TOOL_SUCCESS_COLOR
+ if success is True
+ else TOOL_ERROR_COLOR if success is False else Fore.YELLOW
+ ) # Yellow for unknown/unexpected
+
+ print_msg = f"{color}➡️ Collaboration request {status}{response_snippet}{Style.RESET_ALL}"
+
+ if print_msg:
+ print(print_msg)
+
+ def on_tool_error(
+ self,
+ error: Union[Exception, KeyboardInterrupt],
+ *,
+ run_id: UUID,
+ parent_run_id: Optional[UUID] = None,
+ tags: Optional[List[str]] = None,
+ name: str = "UnknownTool",
+ **kwargs: Any,
+ ) -> Any:
+ """
+ Run when tool errors.
+
+ Args:
+ error (BaseException): The error that occurred.
+ run_id (UUID): The run ID. This is the ID of the current run.
+ parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
+ kwargs (Any): Additional keyword arguments.
+ """
+
+ tool_name = kwargs.get("name", name)
+ error_type = type(error).__name__
+ error_message_raw = str(error)
+ error_message_formatted_log = self._format_details(
+ error_message_raw, MAX_ERROR_MESSAGE_LENGTH
+ )
+ log_message = f"[TOOL ERROR] Agent: {self.agent_id} | Tool: {tool_name} | Status: Failed | Error: {error_type} - {error_message_formatted_log}"
+ logger.error(log_message)
+
+ # --- Commented out console printing for tool error ---
+ # if self.print_tool_activity:
+ # print_msg = ""
+ # error_message_snippet = self._get_short_snippet(error_message_raw, 100)
+ #
+ # if tool_name == "search_for_agents":
+ # print_msg = f"{TOOL_ERROR_COLOR}❌ Agent search failed unexpectedly: {error_message_snippet}...{Style.RESET_ALL}"
+ # elif tool_name == "send_collaboration_request":
+ # print_msg = f"{TOOL_ERROR_COLOR}❌ Collaboration request failed unexpectedly during execution: {error_message_snippet}...{Style.RESET_ALL}"
+ # elif tool_name == "WalletActionProvider_native_transfer" or tool_name == "ERC20ActionProvider_transfer":
+ # payment_type = "Payment" if tool_name == "WalletActionProvider_native_transfer" else "Token Transfer"
+ # print_msg = f"{TOOL_ERROR_COLOR}❌ {payment_type} failed during execution: {error_message_snippet}...{Style.RESET_ALL}"
+ # else:
+ # # Fallback for generic errors
+ # print_msg = f"{TOOL_ERROR_COLOR}❌ [Tool Error] {tool_name} failed: {error_type} - {error_message_snippet}...{Style.RESET_ALL}"
+ #
+ # if print_msg:
+ # print(print_msg)
+
+ def on_agent_action(
+ self,
+ action: AgentAction,
+ *,
+ run_id: UUID,
+        parent_run_id: Optional[UUID] = None,
+ **kwargs: Any,
+ ) -> Any:
+ """
+ Run on agent action.
+
+ Args:
+ action (AgentAction): The agent action.
+ run_id (UUID): The run ID. This is the ID of the current run.
+ parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
+ kwargs (Any): Additional keyword arguments.
+ """
+
+        if self.print_reasoning_steps:
+            print(f"{REASONING_COLOR}{action.log}...{Style.RESET_ALL}")
+
+ return super().on_agent_action(
+ action, run_id=run_id, parent_run_id=parent_run_id, **kwargs
+ )
+
+ # def on_llm_end(
+ # self,
+ # response: LLMResult, # Use the correct type hint
+ # *,
+ # run_id: UUID,
+ # parent_run_id: Optional[UUID] = None,
+ # tags: Optional[List[str]] = None,
+ # **kwargs: Any,
+ # ) -> Any:
+ # """
+ # Run when LLM ends running.
+
+ # Args:
+ # response (LLMResult): The response which was generated.
+ # run_id (UUID): The run ID. This is the ID of the current run.
+ # parent_run_id (UUID): The parent run ID. This is the ID of the parent run.
+ # kwargs (Any): Additional keyword arguments
+ # """
+
+ # log_message = f"[LLM END] Agent: {self.agent_id}"
+ # logger.debug(log_message)
+
+ # # Check if reasoning steps should be printed
+ # if self.print_reasoning_steps:
+ # try:
+ # # Extract the generated text from the first generation
+ # if response.generations and response.generations[0]:
+ # first_generation = response.generations[0][0]
+ # if isinstance(first_generation, ChatGeneration):
+ # generated_text = first_generation.text.strip()
+ # if generated_text: # Only process if there's text
+ # # Split into lines and print ONLY thought/action lines
+ # lines = generated_text.splitlines()
+ # printed_something = (
+ # False # Keep track if we printed any thought/action
+ # )
+ # for line in lines:
+ # if "🤔" in line:
+ # print(
+ # f"{REASONING_COLOR}{line[line.find("🤔")+len("🤔"):].strip()}...{Style.RESET_ALL}"
+ # )
+ # printed_something = True
+ # # Else: ignore other lines (like the final answer block)
+
+ # # Add a separator only if thoughts/actions were printed
+ # if printed_something:
+ # print(
+ # f"{REASONING_COLOR}---"
+ # ) # Use reasoning color for separator
+
+ # else:
+ # logger.warning(
+ # f"Unexpected generation type in on_llm_end: {type(first_generation)}"
+ # )
+ # else:
+ # logger.warning(
+ # "LLM response structure unexpected or empty in on_llm_end."
+ # )
+ # except Exception as e:
+ # logger.error(f"Error processing LLM response in callback: {e}")
+ # # Fallback: print the raw response object for debugging if needed
+ # # print(f"{REASONING_COLOR}Raw LLM Response: {response}{Style.RESET_ALL}")
+
+    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
+ """
+ Print the agent's 'thought' steps (reasoning before tool calls) in a clean, sequential manner.
+ Handles different AIMessage structures from various models (e.g., Gemini, Anthropic).
+ Uses REASONING_COLOR for output and msg.id for deduplication.
+ """
+ if not self.print_reasoning_steps:
+ return
+
+ messages = outputs.get("messages") if isinstance(outputs, dict) else None
+ if not messages:
+ return
+
+ # Use msg.id for deduplication across potentially multiple handlers/calls
+ if not hasattr(self, "_printed_message_ids"):
+ self._printed_message_ids = set()
+
+ for msg in messages:
+ # Process only AIMessages that haven't been printed yet
+ if isinstance(msg, AIMessage) and msg.id not in self._printed_message_ids:
+ thought_printed_for_msg = False
+
+ # --- Strategy 1: Thought in content string, tool_calls attribute populated (Common for Gemini) ---
+ if msg.tool_calls and isinstance(msg.content, str):
+ thought_text = msg.content.strip()
+ if thought_text:
+ print(f"{REASONING_COLOR}{thought_text}{Style.RESET_ALL}")
+ self._printed_message_ids.add(msg.id)
+ thought_printed_for_msg = True
+
+ # --- Strategy 2: Thought in content list before a tool_use/tool_call dict (Common for Anthropic) ---
+ elif isinstance(msg.content, list) and not thought_printed_for_msg:
+ for idx, item in enumerate(msg.content):
+ if (
+ isinstance(item, dict)
+ and item.get("type") == "text"
+ and "text" in item
+ ):
+ # Check if the *next* item signals a tool call/use
+ next_item_is_tool = False
+ if idx + 1 < len(msg.content):
+ next_item = msg.content[idx + 1]
+ if isinstance(next_item, dict) and next_item.get(
+ "type"
+ ) in ["tool_use", "tool_call"]:
+ next_item_is_tool = True
+
+ # If text is followed by tool signal, print it as thought
+ if next_item_is_tool:
+ thought_text = item["text"].strip()
+ if thought_text:
+ print(
+ f"{REASONING_COLOR}{thought_text}{Style.RESET_ALL}"
+ )
+ self._printed_message_ids.add(msg.id)
+ thought_printed_for_msg = True
+
+ # --- Optional: Fallback/Final Answer logging (if needed) ---
+ # Can add logic here to print final answers if msg.content is string and no tool calls were made
+ # elif not msg.tool_calls and isinstance(msg.content, str) and not thought_printed_for_msg:
+ # final_answer = msg.content.strip()
+ # if final_answer:
+ # # Avoid printing if it's just a repeat of the last thought?
+ # # print(f"{Fore.GREEN}Final Answer: {final_answer}{Style.RESET_ALL}")
+ # self._printed_message_ids.add(msg.id) # Mark as printed even if we don't display
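The two content-shape strategies in `on_chain_end` can be exercised in isolation. The sketch below uses plain strings and dicts in place of real `AIMessage` objects (which come from `langchain_core`), so only the extraction logic is shown; the function name `extract_thought` is illustrative, not part of the handler.

```python
# Simplified stand-in for the two content shapes handled in on_chain_end.
# Real messages are langchain_core AIMessage objects; plain values are used
# here so the extraction logic can be tested in isolation.

def extract_thought(content, has_tool_calls=False):
    """Return the reasoning text that precedes a tool call, or None."""
    # Strategy 1 (Gemini-style): content is a plain string and the message
    # carries a populated tool_calls attribute.
    if isinstance(content, str):
        if has_tool_calls:
            return content.strip() or None
        return None
    # Strategy 2 (Anthropic-style): content is a list of blocks; a "text"
    # block counts as a thought only when the next block is a tool call.
    if isinstance(content, list):
        for idx, item in enumerate(content):
            if isinstance(item, dict) and item.get("type") == "text" and "text" in item:
                nxt = content[idx + 1] if idx + 1 < len(content) else None
                if isinstance(nxt, dict) and nxt.get("type") in ("tool_use", "tool_call"):
                    text = item["text"].strip()
                    if text:
                        return text
    return None
```

A plain final-answer message (string content, no tool calls) yields `None`, which is why the handler can safely skip printing it as a thought.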
diff --git a/agentconnect/utils/interaction_control.py b/agentconnect/utils/interaction_control.py
index 2622323..9f7ae7f 100644
--- a/agentconnect/utils/interaction_control.py
+++ b/agentconnect/utils/interaction_control.py
@@ -15,7 +15,6 @@
from typing import Any, Callable, Dict, List, Optional
# Third-party imports
-from langchain_core.callbacks import CallbackManager
from langchain_core.callbacks.base import BaseCallbackHandler
# Set up logging
@@ -293,6 +292,7 @@ class InteractionControl:
for agent interactions.
Attributes:
+ agent_id: The ID of the agent this control belongs to.
token_config: Configuration for token-based rate limiting
max_turns: Maximum number of turns in a conversation
current_turn: Current turn number
@@ -301,6 +301,7 @@ class InteractionControl:
_conversation_stats: Dictionary of conversation statistics
"""
+ agent_id: str
token_config: TokenConfig
max_turns: int = 20
current_turn: int = 0
@@ -311,7 +312,9 @@ class InteractionControl:
def __post_init__(self):
"""Initialize conversation stats dictionary."""
self._conversation_stats = {}
- logger.debug(f"InteractionControl initialized with max_turns={self.max_turns}")
+ logger.debug(
+ f"InteractionControl initialized for agent {self.agent_id} with max_turns={self.max_turns}"
+ )
async def process_interaction(
self, token_count: int, conversation_id: Optional[str] = None
@@ -412,16 +415,16 @@ def get_conversation_stats(
return self._conversation_stats.get(conversation_id, {})
return self._conversation_stats
- def get_callback_manager(self) -> CallbackManager:
+ def get_callback_handlers(self) -> List[BaseCallbackHandler]:
"""
- Create a callback manager with rate limiting for LangChain/LangGraph.
+ Create a list of callback handlers managed by InteractionControl (primarily rate limiting).
Returns:
- CallbackManager with rate limiting callbacks
+ List containing configured BaseCallbackHandler instances.
"""
- callbacks = []
+ callbacks: List[BaseCallbackHandler] = []
- # Add rate limiting callback
+ # 1. Add rate limiting callback
rate_limiter = RateLimitingCallbackHandler(
max_tokens_per_minute=self.token_config.max_tokens_per_minute,
max_tokens_per_hour=self.token_config.max_tokens_per_hour,
@@ -433,4 +436,4 @@ def get_callback_manager(self) -> CallbackManager:
# We're not adding a LangChain tracer here to avoid duplicate traces in LangSmith
# LangSmith will automatically trace the workflow if LANGCHAIN_TRACING is enabled
- return CallbackManager(callbacks)
+ return callbacks
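The switch from `get_callback_manager()` to `get_callback_handlers()` changes what call sites do with the result: instead of unwrapping a `CallbackManager`, they receive a plain list they can extend before building the config passed to a runnable. A minimal stand-in sketch — the handler classes below are placeholders, not the real `langchain_core` types:

```python
# Sketch of the call-site change: get_callback_handlers() returns a plain
# list, so callers can append their own handlers before building the config
# dict that LangChain runnables accept.

class RateLimitingHandler:
    """Stand-in for RateLimitingCallbackHandler."""

class TracingHandler:
    """Stand-in for a caller-supplied handler (e.g. a tool tracer)."""

def get_callback_handlers():
    """Mirrors the new InteractionControl API: a list, not a CallbackManager."""
    return [RateLimitingHandler()]

# Callers merge lists instead of unwrapping a manager object:
handlers = get_callback_handlers() + [TracingHandler()]
config = {"callbacks": handlers}  # shape accepted as runnable config
```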
diff --git a/agentconnect/utils/payment_helper.py b/agentconnect/utils/payment_helper.py
new file mode 100644
index 0000000..d581636
--- /dev/null
+++ b/agentconnect/utils/payment_helper.py
@@ -0,0 +1,218 @@
+"""
+Payment utility functions for AgentConnect.
+
+This module provides helper functions for setting up payment capabilities in agents.
+"""
+
+import json
+import logging
+import os
+import time
+from pathlib import Path
+from typing import Optional, Dict, Any, Union, Tuple
+
+from agentconnect.utils import wallet_manager
+
+logger = logging.getLogger(__name__)
+
+
+def verify_payment_environment() -> bool:
+ """
+ Verify that all required environment variables for payments are set.
+
+ Returns:
+ True if environment is properly configured, False otherwise
+ """
+ # Check required environment variables
+ api_key_name = os.getenv("CDP_API_KEY_NAME")
+ api_key_private = os.getenv("CDP_API_KEY_PRIVATE_KEY")
+
+ if not api_key_name:
+ logger.error("CDP_API_KEY_NAME environment variable is not set")
+ return False
+
+ if not api_key_private:
+ logger.error("CDP_API_KEY_PRIVATE_KEY environment variable is not set")
+ return False
+
+ network_id = os.getenv("CDP_NETWORK_ID", "base-sepolia")
+ logger.info(f"Payment environment verified: Using network {network_id}")
+ return True
+
+
+def validate_cdp_environment() -> Tuple[bool, str]:
+ """
+ Validate that the Coinbase Developer Platform environment is properly configured.
+
+ Returns:
+ Tuple of (valid: bool, message: str)
+ """
+ try:
+ # Ensure .env file is loaded
+ from dotenv import load_dotenv
+
+ load_dotenv()
+
+ # Verify environment variables
+ if not verify_payment_environment():
+ return False, "Required environment variables are missing"
+
+ # Check if CDP packages are installed
+ try:
+ import cdp # noqa: F401
+ except ImportError:
+ return (
+ False,
+ "CDP SDK not installed. Install it with: pip install cdp-sdk",
+ )
+
+ try:
+ import coinbase_agentkit # noqa: F401
+ except ImportError:
+ return (
+ False,
+ "AgentKit not installed. Install it with: pip install coinbase-agentkit",
+ )
+
+ try:
+ import coinbase_agentkit_langchain # noqa: F401
+ except ImportError:
+ return (
+ False,
+ "AgentKit LangChain integration not installed. Install it with: pip install coinbase-agentkit-langchain",
+ )
+
+ return True, "CDP environment is properly configured"
+ except Exception as e:
+ return False, f"Unexpected error validating CDP environment: {e}"
+
+
+def get_wallet_metadata(
+ agent_id: str, wallet_data_dir: Optional[Union[str, Path]] = None
+) -> Optional[Dict[str, Any]]:
+ """
+ Get wallet metadata for an agent if it exists.
+
+ Args:
+ agent_id: The ID of the agent
+ wallet_data_dir: Optional custom directory for wallet data storage
+
+ Returns:
+ Dictionary with wallet metadata if it exists, None otherwise
+ """
+ if not wallet_manager.wallet_exists(agent_id, wallet_data_dir):
+ logger.debug(f"No wallet metadata found for agent {agent_id}")
+ return None
+
+ try:
+ wallet_json = wallet_manager.load_wallet_data(agent_id, wallet_data_dir)
+ if not wallet_json:
+ logger.warning(f"Invalid wallet data found for agent {agent_id}")
+ return None
+
+ # Parse the JSON into a dictionary
+ wallet_data = json.loads(wallet_json)
+
+ # Extract relevant metadata
+ metadata = {
+ "wallet_id": wallet_data.get("wallet_id", "Unknown"),
+ "network_id": wallet_data.get("network_id", "Unknown"),
+ "has_seed": "seed" in wallet_data,
+ }
+
+ # Don't include sensitive data like seed
+ logger.debug(f"Retrieved wallet metadata for agent {agent_id}")
+ return metadata
+ except Exception as e:
+ logger.error(f"Error retrieving wallet metadata for agent {agent_id}: {e}")
+ return None
+
+
+def backup_wallet_data(
+ agent_id: str,
+ data_dir: Optional[Union[str, Path]] = None,
+ backup_dir: Optional[Union[str, Path]] = None,
+) -> Optional[str]:
+ """
+ Create a backup of wallet data for an agent.
+
+ Args:
+ agent_id: The ID of the agent
+ data_dir: Optional custom directory for wallet data storage
+ backup_dir: Optional directory for storing backups
+ If None, creates a backup directory under data_dir
+
+ Returns:
+ Path to the backup file if successful, None otherwise
+ """
+ if not wallet_manager.wallet_exists(agent_id, data_dir):
+ logger.warning(f"No wallet data found for agent {agent_id} to back up")
+ return None
+
+ try:
+ # Determine the source directory and file
+ data_dir_path = Path(data_dir) if data_dir else wallet_manager.DEFAULT_DATA_DIR
+ source_file = data_dir_path / f"{agent_id}_wallet.json"
+
+ # Determine the backup directory
+ if backup_dir:
+ backup_dir_path = Path(backup_dir)
+ else:
+ backup_dir_path = data_dir_path / "backups"
+
+ # Create backup directory if it doesn't exist
+ backup_dir_path.mkdir(parents=True, exist_ok=True)
+
+ # Create a timestamped filename for the backup
+ timestamp = time.strftime("%Y%m%d-%H%M%S")
+ backup_file = backup_dir_path / f"{agent_id}_wallet_{timestamp}.json"
+
+ # Read the original wallet data
+ with open(source_file, "r") as f:
+ wallet_data = f.read()
+
+ # Write to the backup file
+ with open(backup_file, "w") as f:
+ f.write(wallet_data)
+
+ logger.info(f"Backed up wallet data for agent {agent_id} to {backup_file}")
+ return str(backup_file)
+ except Exception as e:
+ logger.error(f"Error backing up wallet data for agent {agent_id}: {e}")
+ return None
+
+
+def check_agent_payment_readiness(agent) -> Dict[str, Any]:
+ """
+ Check if an agent is ready for payments.
+
+ Args:
+ agent: The agent to check
+
+ Returns:
+ A dictionary with status information
+ """
+ status = {
+ "payments_enabled": getattr(agent, "enable_payments", False),
+ "wallet_provider_available": hasattr(agent, "wallet_provider")
+ and agent.wallet_provider is not None,
+ "agent_kit_available": hasattr(agent, "agent_kit")
+ and agent.agent_kit is not None,
+ "payment_address": (
+ getattr(agent.metadata, "payment_address", None)
+ if hasattr(agent, "metadata")
+ else None
+ ),
+ "ready": False,
+ }
+
+ # Check overall readiness
+ status["ready"] = (
+ status["payments_enabled"]
+ and status["wallet_provider_available"]
+ and status["agent_kit_available"]
+ and status["payment_address"] is not None
+ )
+
+ logger.info(f"Agent payment readiness check: {json.dumps(status)}")
+ return status
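`check_agent_payment_readiness` is a pure function of four agent attributes, which makes it easy to exercise against a stub. The sketch below restates its logic standalone (logging dropped, and `hasattr` + `is not None` collapsed into `getattr` checks) and probes it with `types.SimpleNamespace` objects:

```python
from types import SimpleNamespace

def check_agent_payment_readiness(agent):
    """Standalone restatement of the payment_helper readiness check."""
    status = {
        "payments_enabled": getattr(agent, "enable_payments", False),
        "wallet_provider_available": getattr(agent, "wallet_provider", None) is not None,
        "agent_kit_available": getattr(agent, "agent_kit", None) is not None,
        "payment_address": getattr(getattr(agent, "metadata", None), "payment_address", None),
    }
    # Ready only when payments are enabled AND both providers exist AND an
    # address is registered — the same conjunction as in the module.
    status["ready"] = (
        status["payments_enabled"]
        and status["wallet_provider_available"]
        and status["agent_kit_available"]
        and status["payment_address"] is not None
    )
    return status

# A fully configured stub passes; a bare object short-circuits to not ready.
ready_agent = SimpleNamespace(
    enable_payments=True,
    wallet_provider=object(),
    agent_kit=object(),
    metadata=SimpleNamespace(payment_address="0xABC"),
)
```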
diff --git a/agentconnect/utils/wallet_manager.py b/agentconnect/utils/wallet_manager.py
new file mode 100644
index 0000000..18d7a5d
--- /dev/null
+++ b/agentconnect/utils/wallet_manager.py
@@ -0,0 +1,286 @@
+"""
+Wallet persistence utilities for the AgentConnect framework.
+
+This module provides utility functions to manage wallet data persistence for
+individual agents, handling the storage and retrieval of wallet state so that
+agents retain access to the same wallet across restarts.
+"""
+
+import json
+import logging
+from pathlib import Path
+from typing import Dict, List, Optional, Union
+
+# Import dependencies directly since they're required
+from cdp import WalletData
+
+# Set up logging
+logger = logging.getLogger(__name__)
+
+# Default path for wallet data storage
+DEFAULT_DATA_DIR = Path("data/agent_wallets")
+
+
+def set_default_data_dir(data_dir: Union[str, Path]) -> Path:
+ """
+ Set the default directory for wallet data storage globally.
+
+ Args:
+ data_dir: Path to the directory where wallet data will be stored
+ Can be a string or Path object
+
+ Returns:
+ Path object pointing to the created directory
+
+ Raises:
+ IOError: If the directory can't be created
+ """
+ global DEFAULT_DATA_DIR
+ try:
+ # Convert to Path if it's a string
+ data_dir_path = Path(data_dir) if isinstance(data_dir, str) else data_dir
+
+ # Create directory if it doesn't exist
+ data_dir_path.mkdir(parents=True, exist_ok=True)
+
+ # Update the global default
+ DEFAULT_DATA_DIR = data_dir_path
+
+ logger.info(f"Set default wallet data directory to: {data_dir_path}")
+ return data_dir_path
+ except Exception as e:
+ error_msg = f"Error setting default wallet data directory: {e}"
+ logger.error(error_msg)
+ raise IOError(error_msg)
+
+
+def set_wallet_data_dir(data_dir: Union[str, Path]) -> Path:
+ """
+ Set a custom directory for wallet data storage.
+
+ Args:
+ data_dir: Path to the directory where wallet data will be stored
+ Can be a string or Path object
+
+ Returns:
+ Path object pointing to the created directory
+
+ Raises:
+ IOError: If the directory can't be created
+ """
+ try:
+ # Convert to Path if it's a string
+ data_dir_path = Path(data_dir) if isinstance(data_dir, str) else data_dir
+
+ # Create directory if it doesn't exist
+ data_dir_path.mkdir(parents=True, exist_ok=True)
+
+ logger.info(f"Set wallet data directory to: {data_dir_path}")
+ return data_dir_path
+ except Exception as e:
+ error_msg = f"Error setting wallet data directory: {e}"
+ logger.error(error_msg)
+ raise IOError(error_msg)
+
+
+def save_wallet_data(
+ agent_id: str,
+ wallet_data: Union[WalletData, str, Dict],
+ data_dir: Optional[Union[str, Path]] = None,
+) -> None:
+ """
+ Persists the exported wallet data for an agent, allowing the agent to retain
+ access to the same wallet across restarts.
+
+ SECURITY NOTE: This default implementation stores wallet data unencrypted on disk,
+ which is suitable for testing/demo but NOT secure for production environments
+ holding real assets. Real-world applications should encrypt this data.
+
+ Args:
+ agent_id: String identifier for the agent.
+ wallet_data: The wallet data to save. Can be a cdp.WalletData object,
+ a Dict representation, or a JSON string.
+ data_dir: Optional custom directory for wallet data storage.
+ If None, uses the DEFAULT_DATA_DIR.
+
+ Raises:
+ IOError: If the data directory can't be created or the file can't be written.
+ """
+ # Determine the data directory to use
+ data_dir_path = set_wallet_data_dir(data_dir) if data_dir else DEFAULT_DATA_DIR
+ data_dir_path.mkdir(parents=True, exist_ok=True)
+
+ # File path for this agent's wallet data
+ file_path = data_dir_path / f"{agent_id}_wallet.json"
+
+ try:
+ # Convert wallet_data to JSON string based on its type
+ if isinstance(wallet_data, str):
+ # Assume it's valid JSON string
+ json_data = wallet_data
+ elif isinstance(wallet_data, Dict):
+ # Convert dict to JSON string
+ json_data = json.dumps(wallet_data)
+ else:
+ # Assume it's a WalletData object and serialize it
+ json_data = json.dumps(wallet_data.to_dict())
+
+ # Write to file
+ with open(file_path, "w") as f:
+ f.write(json_data)
+
+ logger.debug(f"Saved wallet data for agent {agent_id} to {file_path}")
+
+ except Exception as e:
+ error_msg = f"Error saving wallet data for agent {agent_id}: {e}"
+ logger.error(error_msg)
+ raise IOError(error_msg)
+
+
+def load_wallet_data(
+ agent_id: str, data_dir: Optional[Union[str, Path]] = None
+) -> Optional[str]:
+ """
+ Loads previously persisted wallet data for an agent.
+
+ Args:
+ agent_id: String identifier for the agent.
+ data_dir: Optional custom directory for wallet data storage.
+ If None, uses the DEFAULT_DATA_DIR.
+
+ Returns:
+ The loaded wallet data as a JSON string if the file exists, otherwise None.
+
+ Raises:
+ IOError: If the file exists but can't be read properly.
+ """
+ # Determine the data directory to use
+ data_dir_path = Path(data_dir) if data_dir else DEFAULT_DATA_DIR
+ file_path = data_dir_path / f"{agent_id}_wallet.json"
+
+ if not file_path.exists():
+ logger.debug(f"No saved wallet data found for agent {agent_id} at {file_path}")
+ return None
+
+ try:
+ with open(file_path, "r") as f:
+ json_data = f.read()
+ logger.debug(f"Loaded wallet data for agent {agent_id} from {file_path}")
+ return json_data
+ except FileNotFoundError:
+ # Should not happen as we check existence above, but just in case
+ logger.debug(f"No saved wallet data found for agent {agent_id}")
+ return None
+ except Exception as e:
+ error_msg = f"Error loading wallet data for agent {agent_id}: {e}"
+ logger.error(error_msg)
+ # Log error but don't break agent initialization
+ return None
+
+
+def wallet_exists(agent_id: str, data_dir: Optional[Union[str, Path]] = None) -> bool:
+ """
+ Check if wallet data exists for a specific agent.
+
+ Args:
+ agent_id: String identifier for the agent.
+ data_dir: Optional custom directory for wallet data storage.
+ If None, uses the DEFAULT_DATA_DIR.
+
+ Returns:
+ True if wallet data exists, False otherwise.
+ """
+ # Determine the data directory to use
+ data_dir_path = Path(data_dir) if data_dir else DEFAULT_DATA_DIR
+ file_path = data_dir_path / f"{agent_id}_wallet.json"
+
+ exists = file_path.exists()
+ if exists:
+ logger.debug(f"Wallet data exists for agent {agent_id} at {file_path}")
+ else:
+ logger.debug(f"No wallet data found for agent {agent_id} at {file_path}")
+
+ return exists
+
+
+def get_all_wallets(data_dir: Optional[Union[str, Path]] = None) -> List[Dict]:
+ """
+ Get information about all wallet files in the specified directory.
+
+ Args:
+ data_dir: Optional custom directory for wallet data storage.
+ If None, uses the DEFAULT_DATA_DIR.
+
+ Returns:
+ List of dictionaries with wallet information (agent_id, file_path, etc.)
+ """
+ # Determine the data directory to use
+ data_dir_path = Path(data_dir) if data_dir else DEFAULT_DATA_DIR
+
+ if not data_dir_path.exists():
+ logger.debug(f"Wallet data directory {data_dir_path} does not exist")
+ return []
+
+ wallets = []
+ try:
+ # Find all wallet JSON files
+ for file_path in data_dir_path.glob("*_wallet.json"):
+ # Extract agent_id from filename
+ agent_id = file_path.stem.replace("_wallet", "")
+
+ wallet_info = {
+ "agent_id": agent_id,
+ "file_path": str(file_path),
+ "last_modified": file_path.stat().st_mtime,
+ }
+
+ # Try to read basic info without exposing sensitive data
+ try:
+ with open(file_path, "r") as f:
+ data = json.loads(f.read())
+
+ if "wallet_id" in data:
+ wallet_info["wallet_id"] = data["wallet_id"]
+ if "network_id" in data:
+ wallet_info["network_id"] = data["network_id"]
+ except Exception as e:
+ logger.error(f"Error reading wallet data for {agent_id}: {e}")
+
+ wallets.append(wallet_info)
+
+ logger.debug(f"Found {len(wallets)} wallet files in {data_dir_path}")
+ return wallets
+ except Exception as e:
+ logger.error(f"Error listing wallets in {data_dir_path}: {e}")
+ return []
+
+
+def delete_wallet_data(
+ agent_id: str, data_dir: Optional[Union[str, Path]] = None
+) -> bool:
+ """
+ Delete wallet data for a specific agent.
+
+ Args:
+ agent_id: String identifier for the agent.
+ data_dir: Optional custom directory for wallet data storage.
+ If None, uses the DEFAULT_DATA_DIR.
+
+ Returns:
+ True if wallet data was successfully deleted, False otherwise.
+ """
+ # Determine the data directory to use
+ data_dir_path = Path(data_dir) if data_dir else DEFAULT_DATA_DIR
+ file_path = data_dir_path / f"{agent_id}_wallet.json"
+
+ if not file_path.exists():
+ logger.debug(f"No wallet data to delete for agent {agent_id}")
+ return False
+
+ try:
+ file_path.unlink()
+ logger.info(f"Deleted wallet data for agent {agent_id} from {file_path}")
+ return True
+ except Exception as e:
+ logger.error(f"Error deleting wallet data for agent {agent_id}: {e}")
+ return False
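The on-disk contract of `wallet_manager` is simple: one `<agent_id>_wallet.json` file per agent under the data directory. A stdlib-only round-trip sketch of that layout — plain dicts stand in for `cdp.WalletData`, and error handling and logging are omitted:

```python
import json
import tempfile
from pathlib import Path

# Minimal round-trip mirroring wallet_manager's file layout:
# <data_dir>/<agent_id>_wallet.json holding the serialized wallet state.

def save_wallet_data(agent_id, wallet_data, data_dir):
    path = Path(data_dir) / f"{agent_id}_wallet.json"
    path.write_text(json.dumps(wallet_data))
    return path

def load_wallet_data(agent_id, data_dir):
    path = Path(data_dir) / f"{agent_id}_wallet.json"
    # Missing file means no persisted wallet — return None, don't raise.
    return json.loads(path.read_text()) if path.exists() else None

with tempfile.TemporaryDirectory() as tmp:
    save_wallet_data("agent-1", {"wallet_id": "w-123", "network_id": "base-sepolia"}, tmp)
    restored = load_wallet_data("agent-1", tmp)
```

As the module's security note says, this stores wallet state unencrypted; it is a testing layout, not a production key store.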
diff --git a/docs/source/_static/approval_workflow.png b/docs/source/_static/approval_workflow.png
new file mode 100644
index 0000000..f84be8d
Binary files /dev/null and b/docs/source/_static/approval_workflow.png differ
diff --git a/docs/source/_static/css/custom.css b/docs/source/_static/css/custom.css
index 5009af3..c822871 100644
--- a/docs/source/_static/css/custom.css
+++ b/docs/source/_static/css/custom.css
@@ -473,7 +473,7 @@ html[data-theme=dark] code {
.bd-content {
background-color: var(--pst-color-background);
color: var(--pst-color-text-base);
- padding: 2rem;
+ padding: 0rem;
}
/* Headers */
@@ -531,6 +531,19 @@ table td {
}
}
+/* Add specific rules for very small screens */
+@media (max-width: 470px) {
+ /* Hide secondary sidebar by default on small screens */
+ .bd-sidebar-secondary {
+ top: 52px; /* Match navbar height */
+ right: 0;
+ height: calc(100vh - 52px);
+ width: 80%;
+ max-width: 300px;
+ z-index: 999;
+ }
+}
+
/* Copy button styling */
button.copybtn {
background-color: var(--pst-color-on-background);
@@ -561,4 +574,9 @@ button.copybtn:hover {
.navbar-brand img {
height: 2.7em; /* Adjust this value as needed for vertical alignment */
margin-top: -0.14em; /* Optional: Fine-tune vertical positioning */
+}
+
+/* Ensure sidebar sticks to the left edge and increase content width */
+.bd-page-width {
+ max-width: 1600px; /* Adjust this value as needed */
}
\ No newline at end of file
diff --git a/docs/source/_static/langsmith_error_trace.png b/docs/source/_static/langsmith_error_trace.png
new file mode 100644
index 0000000..211656e
Binary files /dev/null and b/docs/source/_static/langsmith_error_trace.png differ
diff --git a/docs/source/_static/langsmith_project_overview.png b/docs/source/_static/langsmith_project_overview.png
new file mode 100644
index 0000000..a53efc8
Binary files /dev/null and b/docs/source/_static/langsmith_project_overview.png differ
diff --git a/docs/source/_static/langsmith_tool_call.png b/docs/source/_static/langsmith_tool_call.png
new file mode 100644
index 0000000..e8c9493
Binary files /dev/null and b/docs/source/_static/langsmith_tool_call.png differ
diff --git a/docs/source/_static/langsmith_trace_detail.png b/docs/source/_static/langsmith_trace_detail.png
new file mode 100644
index 0000000..f2efc1c
Binary files /dev/null and b/docs/source/_static/langsmith_trace_detail.png differ
diff --git a/docs/source/_templates/layout.html b/docs/source/_templates/layout.html
new file mode 100644
index 0000000..5b5d3bb
--- /dev/null
+++ b/docs/source/_templates/layout.html
@@ -0,0 +1,12 @@
+{% extends "pydata_sphinx_theme/layout.html" %}
+
+{% block extrahead %}
+ {{ super() }}
+
+
+
+
+
+
+
+{% endblock %}
diff --git a/docs/source/api/agentconnect.core.payment_constants.rst b/docs/source/api/agentconnect.core.payment_constants.rst
new file mode 100644
index 0000000..e9f276d
--- /dev/null
+++ b/docs/source/api/agentconnect.core.payment_constants.rst
@@ -0,0 +1,7 @@
+agentconnect.core.payment\_constants module
+===========================================
+
+.. automodule:: agentconnect.core.payment_constants
+ :members:
+ :show-inheritance:
+ :undoc-members:
diff --git a/docs/source/api/agentconnect.core.rst b/docs/source/api/agentconnect.core.rst
index ab58fc4..d15d65e 100644
--- a/docs/source/api/agentconnect.core.rst
+++ b/docs/source/api/agentconnect.core.rst
@@ -23,4 +23,5 @@ Submodules
agentconnect.core.agent
agentconnect.core.exceptions
agentconnect.core.message
+ agentconnect.core.payment_constants
agentconnect.core.types
diff --git a/docs/source/api/agentconnect.utils.callbacks.rst b/docs/source/api/agentconnect.utils.callbacks.rst
new file mode 100644
index 0000000..1f861d0
--- /dev/null
+++ b/docs/source/api/agentconnect.utils.callbacks.rst
@@ -0,0 +1,7 @@
+agentconnect.utils.callbacks module
+===================================
+
+.. automodule:: agentconnect.utils.callbacks
+ :members:
+ :show-inheritance:
+ :undoc-members:
diff --git a/docs/source/api/agentconnect.utils.payment_helper.rst b/docs/source/api/agentconnect.utils.payment_helper.rst
new file mode 100644
index 0000000..e18e074
--- /dev/null
+++ b/docs/source/api/agentconnect.utils.payment_helper.rst
@@ -0,0 +1,7 @@
+agentconnect.utils.payment\_helper module
+=========================================
+
+.. automodule:: agentconnect.utils.payment_helper
+ :members:
+ :show-inheritance:
+ :undoc-members:
diff --git a/docs/source/api/agentconnect.utils.rst b/docs/source/api/agentconnect.utils.rst
index 98d073c..8dd3d7e 100644
--- a/docs/source/api/agentconnect.utils.rst
+++ b/docs/source/api/agentconnect.utils.rst
@@ -12,5 +12,8 @@ Submodules
.. toctree::
:maxdepth: 4
+ agentconnect.utils.callbacks
agentconnect.utils.interaction_control
agentconnect.utils.logging_config
+ agentconnect.utils.payment_helper
+ agentconnect.utils.wallet_manager
diff --git a/docs/source/api/agentconnect.utils.wallet_manager.rst b/docs/source/api/agentconnect.utils.wallet_manager.rst
new file mode 100644
index 0000000..2c3690a
--- /dev/null
+++ b/docs/source/api/agentconnect.utils.wallet_manager.rst
@@ -0,0 +1,7 @@
+agentconnect.utils.wallet\_manager module
+=========================================
+
+.. automodule:: agentconnect.utils.wallet_manager
+ :members:
+ :show-inheritance:
+ :undoc-members:
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 802da4f..339c5e4 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -169,8 +169,10 @@ def skip_private_members(app, what, name, obj, skip, options):
"navbar_end": ["navbar-icon-links", "theme-switcher"],
# Make the navbar sticky
"navbar_persistent": ["search-button"],
- # Make the primary sidebar collapsible
+ # Make the primary sidebar collapsible and persistent
"primary_sidebar_end": ["sidebar-ethical-ads"],
+ "collapse_navigation": False,
+ "navigation_depth": 4,
# Show previous/next buttons
"show_prev_next": True,
# Increase contrast for better readability
@@ -178,6 +180,11 @@ def skip_private_members(app, what, name, obj, skip, options):
"pygment_dark_style": "monokai",
# Theme toggle settings
"footer_start": ["copyright"],
+ "navigation_with_keys": True, # Allow keyboard navigation
+ # Add sidebar collapse button by default
+ "header_links_before_dropdown": 6,
}
# Add custom CSS to enable styling compatible with our custom.css
@@ -191,6 +198,9 @@ def skip_private_members(app, what, name, obj, skip, options):
# 'js/custom.js',
# ]
+# Template directory for custom templates
+templates_path = ['_templates']
+
# Favicon and branding
html_favicon = '_static/final_logo.png'
diff --git a/docs/source/guides/advanced/customizing_agents.rst b/docs/source/guides/advanced/customizing_agents.rst
new file mode 100644
index 0000000..610c4d4
--- /dev/null
+++ b/docs/source/guides/advanced/customizing_agents.rst
@@ -0,0 +1,33 @@
+Customizing Agents
+==================
+
+AgentConnect agents can be deeply customized to fit your application's needs. This guide covers advanced options for extending agent capabilities, tools, memory, payment, and personalities.
+
+.. contents::
+ :local:
+ :depth: 1
+
+Capabilities
+------------
+
+*How to add or modify agent capabilities.*
+
+Tools
+-----
+
+*Integrating custom tools and APIs with your agents.*
+
+Memory
+------
+
+*Configuring agent memory and context retention.*
+
+Payment
+-------
+
+*Enabling and customizing payment features for agents.*
+
+Personalities
+-------------
+
+*Defining and switching agent personalities for different tasks.*
\ No newline at end of file
diff --git a/docs/source/guides/custom_providers.rst b/docs/source/guides/advanced/customizing_providers.rst
similarity index 98%
rename from docs/source/guides/custom_providers.rst
rename to docs/source/guides/advanced/customizing_providers.rst
index e4b89bc..a31a99a 100644
--- a/docs/source/guides/custom_providers.rst
+++ b/docs/source/guides/advanced/customizing_providers.rst
@@ -1,7 +1,7 @@
-Custom Providers Guide
-===================
+Customizing Providers
+=====================
-.. _custom_providers:
+.. _customizing_providers:
Creating Custom AI Providers
-------------------------
diff --git a/docs/source/guides/advanced/index.rst b/docs/source/guides/advanced/index.rst
new file mode 100644
index 0000000..c3fadee
--- /dev/null
+++ b/docs/source/guides/advanced/index.rst
@@ -0,0 +1,85 @@
+.. _advanced_configuration:
+
+Advanced Configuration
+======================
+
+Coming soon...
+---------------
+
+.. .. note::
+.. For basic usage and configuration, see the main guides in :doc:`../index`.
+
+.. AgentConnect is highly customizable. This section provides in-depth guides for advanced users who want to tailor the framework to their specific needs, covering areas from agent internals to payment systems and advanced utilities.
+
+.. .. toctree::
+.. :maxdepth: 2
+.. :caption: Advanced Guides
+.. :hidden:
+
+.. customizing_agents
+.. customizing_hub
+.. customizing_registry
+.. customizing_providers
+.. customizing_payments
+.. customizing_callbacks
+.. customizing_logging
+.. customizing_prompts
+.. advanced_cli
+
+
+.. Customizing Agents
+.. ------------------
+.. :doc:`customizing_agents`
+
+.. Dive deep into agent internals. Learn how to add custom capabilities, integrate unique tools, implement advanced memory systems, configure payment features, and modify the core agent processing loop. This section also covers advanced rate limiting and interaction control, including token usage tracking, cooldowns, and conversation statistics.
+
+.. Customizing the Communication Hub
+.. ---------------------------------
+.. :doc:`customizing_hub`
+
+.. Tailor the heart of agent interaction. Add custom message handlers, implement event hooks, or modify message routing logic. (Note: Pluggable backends like Redis are not currently supported at the framework level.)
+
+.. Customizing the Registry & Discovery
+.. ------------------------------------
+.. :doc:`customizing_registry`
+
+.. Control how agents find each other. Configure the agent registration process, customize capability discovery algorithms, and manage agent identity verification and registration flows.
+
+.. Customizing AI Providers
+.. ------------------------
+.. :doc:`customizing_providers`
+
+.. Extend language model support. Add new providers beyond the defaults, configure specific model parameters (temperature, top_p, etc.), and manage advanced credential handling.
+
+.. Customizing Payment Integration
+.. -------------------------------
+.. :doc:`customizing_payments`
+
+.. Fine-tune the agent economy. Configure advanced payment settings, customize blockchain network support, implement custom payment logic, and manage wallets directly. (See the guide for details on supported networks.)
+
+.. Customizing Callbacks & Monitoring
+.. ----------------------------------
+.. :doc:`customizing_callbacks`
+
+.. Enhance observability and integration. Implement custom callbacks for detailed monitoring, specialized logging, or triggering external systems like advanced LangSmith features.
+
+.. Customizing Logging
+.. -------------------
+.. :doc:`customizing_logging`
+
+.. Configure detailed system logging. Set up advanced logging handlers, define custom log formats, and control logging levels for different components.
+
+.. Customizing Prompts & Workflows
+.. -------------------------------
+.. :doc:`customizing_prompts`
+
+.. Shape agent reasoning. Extend or modify core agent prompts, design complex interaction workflows, and create templates for specialized tasks.
+
+.. Advanced CLI Usage
+.. ------------------
+.. :doc:`advanced_cli`
+
+.. Master the command line. Explore advanced CLI arguments for fine-grained control over agent behavior, hub configurations, and framework settings.
+
+.. .. note::
+.. These guides assume familiarity with the core concepts of AgentConnect. For foundational knowledge, please refer to the :doc:`../../quickstart` section. Each guide provides practical examples and best practices.
\ No newline at end of file
diff --git a/docs/source/guides/agent_configuration.rst b/docs/source/guides/agent_configuration.rst
new file mode 100644
index 0000000..d871b34
--- /dev/null
+++ b/docs/source/guides/agent_configuration.rst
@@ -0,0 +1,285 @@
+Configuring Your AI Agent
+=========================
+
+.. _agent_configuration:
+
+AgentConnect provides a highly configurable ``AIAgent`` class, allowing you to tailor its behavior, capabilities, and resource usage precisely to your needs. This guide covers the key configuration options available when initializing an ``AIAgent``.
+
+Core Agent Identification
+-------------------------
+
+These parameters define the agent's basic identity and role:
+
+* ``agent_id``: A unique string identifier for this agent within the network.
+* ``name``: A human-readable name for the agent.
+* ``identity``: An ``AgentIdentity`` object, crucial for secure communication and verification. See :class:`agentconnect.core.AgentIdentity` for details on creating identities.
+* ``organization_id`` (Optional): An identifier for the organization the agent belongs to, useful for grouping or policy management.
+
+Language Model Selection and Configuration
+------------------------------------------
+
+Choose the underlying language model and fine-tune its behavior:
+
+* ``provider_type``: Selects the AI provider (e.g., ``ModelProvider.OPENAI``, ``ModelProvider.ANTHROPIC``, ``ModelProvider.GOOGLE``, ``ModelProvider.GROQ``).
+* ``model_name``: Specifies the exact model from the chosen provider (e.g., ``ModelName.GPT4O``, ``ModelName.CLAUDE_3_5_SONNET``, ``ModelName.GEMINI1_5_PRO``, ``ModelName.LLAMA3_70B``).
+* ``api_key``: The API key for the selected provider. It's **strongly recommended** to use environment variables (e.g., ``OPENAI_API_KEY``) instead of passing keys directly in code for production environments.
+* ``model_config`` (Optional): A dictionary to pass provider-specific parameters directly to the language model (e.g., ``{"temperature": 0.7, "max_tokens": 512}``). **Note:** The valid parameters depend entirely on the selected provider and model. Consult the provider's documentation for available options.
+
+.. code-block:: python
+
+ from agentconnect.agents import AIAgent
+ from agentconnect.core.types import AgentIdentity, ModelProvider, ModelName, InteractionMode
+
+ # Example using OpenAI GPT-4o with custom temperature
+ agent_openai = AIAgent(
+ agent_id="openai-agent-1",
+ name="Creative Writer",
+ provider_type=ModelProvider.OPENAI,
+ model_name=ModelName.GPT4O,
+ api_key="your-openai-api-key", # Better to use os.environ.get("OPENAI_API_KEY")
+ identity=AgentIdentity.create_key_based(),
+ model_config={"temperature": 0.8},
+ # ... other parameters
+ )
+
+ # Example using Google Gemini 1.5 Pro
+ agent_google = AIAgent(
+ agent_id="google-agent-researcher",
+ name="Research Assistant",
+ provider_type=ModelProvider.GOOGLE,
+ model_name=ModelName.GEMINI1_5_PRO,
+ api_key="your-google-api-key", # Better to use os.environ.get("GOOGLE_API_KEY")
+ identity=AgentIdentity.create_key_based(),
+ # ... other parameters
+ )
+
+Agent Behavior and Capabilities
+-------------------------------
+
+Define how the agent behaves and what it can do:
+
+* ``capabilities`` (Optional): A list of ``Capability`` objects describing the agent's skills (e.g., ``Capability(name="summarization", description="Can summarize long texts")``). This helps other agents discover and collaborate effectively.
+* ``personality``: A string describing the agent's desired personality (e.g., "helpful and concise", "formal and detailed", "witty and creative").
+* ``interaction_modes``: A list specifying how the agent can interact (e.g., ``InteractionMode.HUMAN_TO_AGENT``, ``InteractionMode.AGENT_TO_AGENT``).
+* ``memory_type``: Determines the type of memory the agent uses (e.g., ``MemoryType.BUFFER`` for simple short-term memory).
+* ``agent_type``: Specifies the type of workflow the agent will use internally (e.g., "ai" for standard agent, "task_decomposition" for agents that break down complex tasks, or "collaboration_request" for specialized request handling). This influences how the agent processes messages and makes decisions.
+* ``prompt_templates`` (Optional): An instance of ``PromptTemplates`` to customize the system and user prompts used by the agent's underlying workflow.
+* ``prompt_tools`` (Optional): An instance of ``PromptTools`` providing built-in functionalities like agent discovery and communication. Usually managed internally but can be customized.
+* ``custom_tools`` (Optional): A list of custom LangChain ``BaseTool`` or ``StructuredTool`` objects to extend the agent's functionality beyond built-in capabilities.
+
+Resource Management and Control
+-------------------------------
+
+Manage the agent's resource consumption:
+
+* ``max_tokens_per_minute`` / ``max_tokens_per_hour``: Rate limits to control API costs and usage.
+* ``max_turns``: The maximum number of messages exchanged within a single conversation before it automatically ends.
+* ``verbose``: Set to ``True`` for detailed logging of the agent's internal operations, useful for debugging and understanding the agent's decision-making process.
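
These limits are enforced inside the framework, but the underlying idea is a simple per-window token budget. The sketch below illustrates the concept only and is not AgentConnect's actual implementation:

```python
import time

class TokenRateLimiter:
    """Illustrative token budget: allow up to `capacity` tokens per window."""

    def __init__(self, capacity: int, window_seconds: float = 60.0):
        self.capacity = capacity
        self.window = window_seconds
        self.used = 0
        self.window_start = time.monotonic()

    def try_consume(self, tokens: int) -> bool:
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # A new window has started: reset the budget
            self.used = 0
            self.window_start = now
        if self.used + tokens > self.capacity:
            return False  # Over budget: the caller should wait or reject
        self.used += tokens
        return True

limiter = TokenRateLimiter(capacity=1000)  # analogous to max_tokens_per_minute=1000
print(limiter.try_consume(800))  # True
print(limiter.try_consume(300))  # False: 800 + 300 exceeds the 1000-token budget
```

When the budget is exhausted, further calls are deferred (or rejected) until the window resets.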
+
+Advanced Features
+-----------------
+
+Enable specialized functionalities:
+
+* ``enable_payments``: Set to ``True`` to enable cryptocurrency payment features via Coinbase AgentKit (requires ``coinbase-agentkit-langchain`` installation and CDP environment setup).
+* ``wallet_data_dir`` (Optional): Specifies a custom directory for storing wallet data if payments are enabled.
+* ``external_callbacks`` (Optional): A list of LangChain ``BaseCallbackHandler`` instances to monitor or interact with the agent's internal processes.
+* ``is_ui_mode``: Indicates if the agent is operating within a UI environment, potentially enabling specific UI-related behaviors or notifications.
+
+Error Handling and Debugging
+----------------------------
+
+Configure how your agent handles errors and provides visibility into its operations:
+
+* ``verbose``: When set to ``True``, enables detailed logging of the agent's internal processes, including tool usage, response generation, and error handling. This is invaluable for debugging complex agent behaviors.
+* ``external_callbacks``: Add custom callback handlers to monitor specific events in the agent's lifecycle. This can help track token usage, log tool calls, or implement custom error handling logic.
+
+The agent also has built-in resilience features:
+
+- Automatic retry logic for failed API calls to the language model provider
+- Graceful handling of timeouts during collaboration with other agents
+- Proper error responses that maintain conversation context
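
The built-in retry behavior follows the familiar exponential-backoff pattern. As a conceptual illustration only (not the framework's actual code), a retry wrapper looks roughly like this:

```python
import random
import time

def call_with_retries(fn, max_attempts=3, base_delay=0.5):
    """Retry `fn` with exponential backoff and jitter on any exception."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # Out of attempts: surface the error to the caller
            # Sleep 0.5s, 1s, 2s, ... plus a little jitter
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))

# Example: a flaky call that succeeds on the third attempt
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky))  # ok
```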
+
+Real-World Configuration Scenarios
+----------------------------------
+- **Cost-Effective Task Agent:** Use a cheaper provider (``Groq``/``Llama3``) with strict token limits and basic capabilities for routine tasks.
+- **High-Performance Analyst Agent:** Use a premium model (``GPT-4o``, ``Claude 3.5 Sonnet``) with higher token limits, relevant custom tools (e.g., data analysis), and a detailed personality.
+- **Multi-Agent System:** Configure agents with distinct providers, models, capabilities, and personalities to handle different parts of a complex workflow (e.g., one agent for research, another for writing, one for user interaction).
+- **Debugging:** Enable ``verbose=True`` and add custom ``external_callbacks`` to inspect the agent's decision-making process.
+
+By carefully configuring these parameters, you can create AI agents optimized for specific roles, performance requirements, and cost constraints within your AgentConnect applications.
+
+Comprehensive Configuration Example
+-----------------------------------
+
+Here's an example demonstrating how to customize many of the available parameters when initializing an ``AIAgent``:
+
+.. code-block:: python
+
+ import os
+ from pathlib import Path
+ from agentconnect.agents import AIAgent
+ from agentconnect.agents.ai_agent import MemoryType
+ from agentconnect.core import AgentIdentity
+ from agentconnect.core.types import (
+ ModelProvider, ModelName, InteractionMode, Capability
+ )
+ from agentconnect.utils.callbacks import ToolTracerCallbackHandler
+ # Assuming you have custom tools and callbacks defined elsewhere
+ # from .custom_components import MyCustomTool, MyCallbackHandler
+ from langchain_core.tools import tool # Example placeholder
+ from langchain_core.callbacks import BaseCallbackHandler # Example placeholder
+
+ # --- Placeholder for custom components ---
+ @tool
+ def my_calculator_tool(a: int, b: int) -> int:
+ """Calculates the sum of two integers."""
+ return a + b
+
+ class MyLoggingCallback(BaseCallbackHandler):
+ def on_agent_action(self, action, **kwargs) -> None:
+ print(f"Agent action: {action.tool} with input {action.tool_input}")
+
+ def on_chain_end(self, outputs, **kwargs) -> None:
+ print(f"Chain ended with output: {outputs}")
+ # --- End Placeholder ---
+
+ # 1. Define Agent Details
+ agent_id = "complex-analyzer-007"
+ agent_name = "DeepThink Analyst"
+ org_id = "research-division-alpha"
+
+ # 2. Setup Identity
+ # Load from existing keys or create new ones
+ identity = AgentIdentity.create_key_based()
+
+ # 3. Choose Provider and Model
+ provider = ModelProvider.GOOGLE
+ model = ModelName.GEMINI2_FLASH # Available in the ModelName enum
+ # Recommended: Use environment variable for API key
+ api_key = os.environ.get("GOOGLE_API_KEY", "fallback-key-if-not-set")
+
+ # 4. Define Capabilities
+ capabilities = [
+ Capability(name="financial_data_analysis", description="Analyzes stock market data and trends."),
+ Capability(name="report_generation", description="Generates detailed financial reports."),
+ ]
+
+ # 5. Set Personality and Interactions
+ personality = "A meticulous and insightful financial analyst providing data-driven conclusions."
+ interaction_modes = [InteractionMode.AGENT_TO_AGENT] # Only interacts with other agents
+
+ # 6. Configure Model Parameters
+ # Note: Available parameters depend on the specific provider
+ model_config = {
+ "temperature": 0.2, # More deterministic output
+ "max_tokens": 2048, # Allow longer responses
+ # Other parameters vary by provider - check documentation
+ }
+
+ # 7. Define Custom Tools and Callbacks
+ custom_tools = [my_calculator_tool] # Add your custom tools
+ external_callbacks = [ToolTracerCallbackHandler(agent_id=agent_id)] # Add your custom callbacks
+
+ # 8. Set Resource Limits
+ max_tokens_min = 50000
+ max_tokens_hour = 500000
+ max_turns_per_convo = 15
+
+ # 9. Configure Memory and Workflow
+ memory_type = MemoryType.BUFFER # Or other types like SUMMARY
+ agent_type = "ai" # Specify a specific agent type if needed
+
+ # 10. Enable Advanced Features (Optional)
+ enable_payments = False # Set to True if AgentKit is configured
+ verbose_logging = False # Enable for debugging
+ ui_mode = False
+ wallet_dir = Path("./agent_wallet_data") # Custom wallet data path
+
+ # 11. Initialize the AIAgent
+ fully_customized_agent = AIAgent(
+ agent_id=agent_id,
+ name=agent_name,
+ provider_type=provider,
+ model_name=model,
+ api_key=api_key,
+ identity=identity,
+ capabilities=capabilities,
+ personality=personality,
+ organization_id=org_id,
+ interaction_modes=interaction_modes,
+ max_tokens_per_minute=max_tokens_min,
+ max_tokens_per_hour=max_tokens_hour,
+ max_turns=max_turns_per_convo,
+ is_ui_mode=ui_mode,
+ memory_type=memory_type,
+ prompt_tools=None, # Usually let AgentConnect manage this
+ prompt_templates=None, # Can provide custom PromptTemplates instance here
+ custom_tools=custom_tools,
+ agent_type=agent_type,
+ enable_payments=enable_payments,
+ verbose=verbose_logging, # Pass the flag here for internal verbosity
+ wallet_data_dir=wallet_dir,
+ external_callbacks=external_callbacks,
+ model_config=model_config,
+ )
+
+ print(f"Successfully initialized agent: {fully_customized_agent.name}")
+ # Now you can register and use this agent...
+
+Using an Agent Standalone (Direct Chat)
+---------------------------------------
+
+For simpler use cases or testing, you might want to interact with an AI agent directly without setting up the full ``CommunicationHub`` and ``AgentRegistry``. The ``AIAgent`` class provides an asynchronous ``chat()`` method for this purpose.
+
+.. code-block:: python
+
+ import asyncio
+
+ async def main():
+ # Assume 'fully_customized_agent' is initialized as shown above
+ # Ensure API keys are set as environment variables for this example
+
+ print("Starting standalone chat with agent...")
+ print("Type 'exit' to quit.")
+
+ conversation_history_id = "my_test_chat_session"
+
+ while True:
+ user_query = input("You: ")
+ if user_query.lower() == 'exit':
+ break
+
+ try:
+ # Call the chat method directly
+ response = await fully_customized_agent.chat(
+ query=user_query,
+ conversation_id=conversation_history_id # Maintains context
+ )
+ print(f"Agent: {response}")
+ except Exception as e:
+ print(f"An error occurred: {e}")
+ # Consider adding retry logic or breaking the loop
+
+ # Run the interactive chat loop
+ if __name__ == "__main__":
+     asyncio.run(main())
+
+The ``chat()`` method handles:
+
+- Initializing the agent's workflow automatically if needed
+- Managing conversation context through the ``conversation_id`` parameter
+- Providing a simple interface for direct agent interaction
+
+This approach is perfect for prototyping, debugging your agent configuration, or creating standalone applications that don't require multi-agent functionality.
+
+Next Steps
+----------
+
+Once you've configured your agent, you'll typically want to:
+
+- Register it with the ``AgentRegistry`` and ``CommunicationHub`` to enable collaboration (see :doc:`multi_agent_setup` for details)
+- Add it to a multi-agent system where it can discover and interact with other agents (see :doc:`collaborative_workflows`)
+- Implement specific conversational patterns for your use case (see :doc:`human_in_the_loop` for interactive scenarios)
diff --git a/docs/source/guides/agent_payment.rst b/docs/source/guides/agent_payment.rst
new file mode 100644
index 0000000..d8a33d3
--- /dev/null
+++ b/docs/source/guides/agent_payment.rst
@@ -0,0 +1,408 @@
+Agent Payment Integration
+=========================
+
+AgentConnect supports agent-to-agent payments through integration with the Coinbase Developer Platform (CDP) and Coinbase AgentKit. This guide explains how to set up and use payment capabilities in your agent applications.
+
+Overview
+--------
+
+The payment capabilities in AgentConnect allow agents to:
+
+- Make cryptocurrency payments to other agents for services
+- Advertise paid services with associated costs
+- Automatically process payments based on service agreements
+- Verify transactions on the blockchain
+
+These features enable the creation of autonomous agent economies where agents can offer services for payment and negotiate terms based on capabilities.
+
+Prerequisites
+-------------
+
+Before using payment capabilities, ensure you have:
+
+1. A Coinbase Developer Platform (CDP) API key
+2. The required packages installed:
+
+.. code-block:: bash
+
+ # Install required packages
+ pip install coinbase-agentkit coinbase-agentkit-langchain cdp-sdk
+
+3. Environment variables set up in your ``.env`` file:
+
+.. code-block:: bash
+
+ CDP_API_KEY_NAME=your_cdp_api_key_name
+ CDP_API_KEY_PRIVATE_KEY=your_cdp_api_key_private_key
+ CDP_NETWORK_ID=base-sepolia # Optional, defaults to base-sepolia testnet
+
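If you want to fail fast when these variables are missing, a quick standard-library check works. The variable names below are the ones shown above; ``validate_cdp_environment`` from ``agentconnect.utils.payment_helper`` (see Troubleshooting) performs a more thorough version of this check:

```python
import os

REQUIRED_CDP_VARS = ("CDP_API_KEY_NAME", "CDP_API_KEY_PRIVATE_KEY")

def missing_cdp_vars() -> list:
    """Return the names of required CDP variables that are not set."""
    return [name for name in REQUIRED_CDP_VARS if not os.environ.get(name)]

if missing_cdp_vars():
    print("CDP not configured; payment features will be unavailable:", missing_cdp_vars())
else:
    # CDP_NETWORK_ID is optional and defaults to the Base Sepolia testnet
    print("CDP configured for network:", os.environ.get("CDP_NETWORK_ID", "base-sepolia"))
```
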
+Enabling Payments in Agents
+---------------------------
+
+To enable payment capabilities in an agent, set the ``enable_payments`` parameter to ``True`` when creating the agent:
+
+.. code-block:: python
+
+ from agentconnect.agents import AIAgent
+ from agentconnect.core.types import ModelProvider, ModelName, AgentIdentity
+
+ # Create an agent with payment capabilities
+ agent = AIAgent(
+ agent_id="service_provider",
+ name="Research Provider",
+ provider_type=ModelProvider.OPENAI,
+ model_name=ModelName.GPT4O,
+ api_key="your_openai_api_key",
+ identity=AgentIdentity.create_key_based(),
+ enable_payments=True # Enable payment capabilities
+ )
+
+This automatically:
+
+1. Initializes Coinbase AgentKit with the CDP credentials
+2. Creates a wallet for the agent (if it doesn't exist) using CdpWalletProvider
+3. Sets up the payment address in the agent's metadata
+4. Adds payment tools to the agent's workflow
+5. Configures the agent's LLM to understand payment contexts
+
+Wallet Configuration
+--------------------------
+
+By default, wallet configuration is loaded from environment variables. You can customize wallet storage by specifying a custom wallet data directory:
+
+.. code-block:: python
+
+ from pathlib import Path
+
+ # Create an agent with payment capabilities and custom wallet storage location
+ agent = AIAgent(
+ agent_id="service_provider",
+ name="Research Provider",
+ provider_type=ModelProvider.OPENAI,
+ model_name=ModelName.GPT4O,
+ api_key="your_openai_api_key",
+ identity=AgentIdentity.create_key_based(),
+ enable_payments=True, # Enable payment capabilities
+ wallet_data_dir=Path("custom/wallet/directory") # Custom wallet storage location
+ )
+
+You can control the network used for payments by setting the ``CDP_NETWORK_ID`` environment variable:
+
+.. code-block:: bash
+
+ # Configure network in .env file
+ CDP_NETWORK_ID=base-sepolia # Default if not specified
+ # Other options: base-mainnet, ethereum-mainnet, ethereum-sepolia
+
+Advertising Paid Services
+-------------------------
+
+There are two important ways to advertise paid services in AgentConnect:
+
+1. **For service discovery** - Include cost information in capability metadata so other agents can discover and evaluate the cost:
+
+.. code-block:: python
+
+ from agentconnect.core.types import Capability
+
+ # Define a capability with cost information
+ research_capability = Capability(
+ name="research_service",
+ description="Conducts in-depth research on any topic for 2 USDC per request",
+ input_schema={"topic": "string"},
+ output_schema={"research": "string"},
+ metadata={"cost": "2 USDC", "payment_token": "USDC"}
+ )
+
+ # Create agent with this capability
+ agent = AIAgent(
+ # ... other parameters ...
+ capabilities=[research_capability],
+ enable_payments=True
+ )
+
+2. **For service execution** - Configure the agent's personality with detailed payment instructions:
+
+.. code-block:: python
+
+ # Create a service provider agent with payment instructions in personality
+ research_agent = AIAgent(
+ # ... other parameters ...
+ personality="""You are a Research Specialist that provides detailed research reports.
+
+ IMPORTANT PAYMENT INSTRUCTIONS:
+ 1. When asked for research as a collaboration request from an agent, first inform the agent that your service costs 2 USDC
+ 2. Wait for payment confirmation and verify the transaction hash
+ 3. Only after payment confirmation, provide the requested research
+ 4. Always thank the agent for their payment
+
+ Always maintain a professional tone and ensure you receive payment before delivering services.
+ """
+ )
+
+Properly configuring both the capability metadata and personality ensures that agents can discover your paid services and correctly handle the payment workflow.
+
+Discovering Payment-Capable Agents
+----------------------------------
+
+AgentConnect automatically handles the discovery of payment-capable agents. When you create an agent with ``enable_payments=True``, the framework automatically:
+
+1. Initializes the agent's wallet
+2. Sets the payment address in the agent's metadata
+3. Makes this information available during agent discovery
+
+When other agents search for capabilities using the framework's built-in discovery mechanism, payment information is automatically included in the results without any manual effort. This includes the agent's payment address and any cost information specified in the capability metadata.
+
+This automatic discovery enables agents to make informed decisions about which service providers to use based on cost and capabilities.
+
+Making Payments
+---------------
+
+In AgentConnect, payments between agents are handled automatically through the agent's LLM workflow. Instead of manually coding payment logic, you simply need to:
+
+1. Configure clear capability descriptions with cost information
+2. Provide detailed payment instructions in the agent's personality
+3. Enable payments with ``enable_payments=True``
+
+The framework will:
+
+- Automatically add payment tools to the agent's toolkit
+- Allow the LLM to decide when and how to make payments based on context
+- Process transactions and verify them on-chain
+
+For example, a well-configured customer agent with this personality will understand when to make payments:
+
+.. code-block:: python
+
+ customer_agent = AIAgent(
+ # ... other parameters ...
+ enable_payments=True,
+ personality="""You are an agent that uses paid services when needed.
+
+ When using services from other agents:
+ 1. Review the cost before agreeing to the service
+ 2. Only pay for services that provide good value
+ 3. Pay the requested amount using your payment tools
+ 4. Keep track of transaction hashes for verification
+ 5. Don't pay twice for the same service
+
+ Be cost-conscious but willing to pay for high-quality services.
+ """
+ )
+
+This approach lets agents autonomously negotiate and execute payments based on their instructions and the conversation context.
+
+Available Payment Tools
+^^^^^^^^^^^^^^^^^^^^^^^
+
+The following payment tools are automatically made available to the agent's LLM when ``enable_payments=True`` is set:
+
+From ``WalletActionProvider``:
+
+- ``get_wallet_details``: Fetches wallet address, network info, balances, etc.
+- ``get_balance``: Gets the native currency balance (e.g., ETH).
+- ``native_transfer``: Transfers native currency (e.g., ETH).
+
+From ``CdpApiActionProvider``:
+
+- ``request_faucet_funds``: Requests testnet funds from a faucet.
+- ``address_reputation``: Checks reputation for an address.
+
+From ``Erc20ActionProvider`` (added if the payment token is not ETH):
+
+- ``get_balance``: Gets the balance of a specific ERC-20 token.
+- ``transfer``: Transfers a specified amount of an ERC-20 token.
+
+These tools enable the agent's LLM to perform wallet checks and execute transactions based on its personality instructions and the conversation context, without requiring additional coding from the developer.
+
+Verifying Payment Readiness
+---------------------------
+
+To check if an agent is properly configured for payments:
+
+.. code-block:: python
+
+ from agentconnect.utils.payment_helper import check_agent_payment_readiness
+
+ # Check if agent is ready for payments
+ status = check_agent_payment_readiness(agent)
+ if status["ready"]:
+ print(f"Agent is ready for payments with address: {status['payment_address']}")
+ else:
+ print("Agent is not ready for payments. Status:", status)
+
+If all status flags are ``True``, the agent is properly configured for payments.
+
+Wallet Management
+-----------------
+
+AgentConnect provides utilities for managing agent wallets:
+
+.. code-block:: python
+
+ from agentconnect.utils import wallet_manager
+
+ # Check if wallet exists
+ if wallet_manager.wallet_exists(agent.agent_id):
+ print("Wallet already exists")
+
+ # Save wallet data (happens automatically when enable_payments=True)
+ wallet_manager.save_wallet_data(
+ agent_id=agent.agent_id,
+ wallet_data=agent.wallet_provider.export_wallet()
+ )
+
+ # Create a backup of wallet data
+ from agentconnect.utils.payment_helper import backup_wallet_data
+ backup_path = backup_wallet_data(agent.agent_id, backup_dir="wallet_backups")
+ print(f"Wallet backed up to: {backup_path}")
+
+Wallet Data Structure
+^^^^^^^^^^^^^^^^^^^^^
+
+Wallet data is stored in JSON files named ``{agent_id}_wallet.json`` in the specified data directory (default: ``data/agent_wallets/``). The structure includes:
+
+- ``wallet_id``: Unique identifier for the wallet
+- ``seed``: The wallet seed phrase (sensitive data)
+- ``network_id``: The blockchain network (e.g., "base-sepolia")
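
An illustrative wallet file might look like the following (all values are placeholders; never commit a real ``seed`` to version control):

```json
{
  "wallet_id": "3f2a9c1d-example-wallet-id",
  "seed": "twelve or twenty-four seed words would appear here",
  "network_id": "base-sepolia"
}
```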
+
+Security Considerations
+-----------------------
+
+Important security considerations when using payment capabilities:
+
+1. **Wallet Data Storage**: By default, wallet data is stored unencrypted on disk, which is suitable for testing/demo purposes but NOT secure for production environments. For production use, implement proper encryption.
+
+2. **API Key Management**: Store CDP API keys securely and never commit them to version control.
+
+3. **Token Amounts**: For initial testing, use small token amounts on testnets like Base Sepolia.
+
+4. **Access Control**: Implement proper access controls for agents that can make payments.
+
+Example: Agent Economy Workflow
+-------------------------------
+
+The following example demonstrates a multi-agent system with payment capabilities, featuring a research agent and a telegram broadcast agent that charge for their services:
+
+.. code-block:: python
+
+ from agentconnect.agents.ai_agent import AIAgent
+ from agentconnect.core.types import AgentIdentity, Capability, ModelProvider, ModelName
+
+ # Define token address (for example, USDC on Base Sepolia)
+ BASE_SEPOLIA_USDC_ADDRESS = "0x036CbD53842c5426634e7929541eC2318f3dCF7e"
+
+ # Create Research Agent
+ research_agent = AIAgent(
+ agent_id="research_agent",
+ name="Research Specialist",
+ provider_type=ModelProvider.OPENAI,
+ model_name=ModelName.GPT4O,
+ api_key="your_openai_api_key",
+ identity=AgentIdentity.create_key_based(),
+ capabilities=[
+ Capability(
+ name="general_research",
+ description="Performs detailed research on a given topic, providing a structured report.",
+ metadata={"cost": "2 USDC"}
+ )
+ ],
+ enable_payments=True,
+ personality="""You are a Research Specialist that provides detailed research reports.
+
+ IMPORTANT PAYMENT INSTRUCTIONS:
+ 1. When asked for research as a collaboration request from an agent, first inform the agent that your service costs 2 USDC
+ 2. Wait for payment confirmation and verify the transaction hash
+ 3. Only after payment confirmation, provide the requested research
+ 4. Always thank the agent for their payment
+
+ Always maintain a professional tone and ensure you receive payment before delivering services.
+ """
+ )
+
+ # Create User Proxy Agent (Workflow Orchestrator)
+ user_proxy_agent = AIAgent(
+ agent_id="user_proxy_agent",
+ name="Workflow Orchestrator",
+ provider_type=ModelProvider.OPENAI,
+ model_name=ModelName.GPT4O,
+ api_key="your_openai_api_key",
+ identity=AgentIdentity.create_key_based(),
+ enable_payments=True,
+ personality=f"""You are a workflow orchestrator responsible for managing payments and returning results.
+ Payment Details (USDC on Base Sepolia):
+ - Contract: {BASE_SEPOLIA_USDC_ADDRESS}
+ - Amount: 6 decimals. 1 USDC = '1000000'.
+ """
+ )
+
+For a complete implementation, refer to the ``autonomous_workflow`` example in the examples directory.
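
Note the amount convention in the orchestrator's instructions: ERC-20 transfer amounts are denominated in the token's smallest unit, and USDC uses 6 decimals. A small illustrative helper makes the conversion explicit:

```python
from decimal import Decimal

def usdc_to_base_units(amount: str) -> str:
    """Convert a human-readable USDC amount to base units (USDC has 6 decimals)."""
    return str(int(Decimal(amount) * 10**6))

print(usdc_to_base_units("1"))    # 1000000
print(usdc_to_base_units("2.5"))  # 2500000
```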
+
+Supported Networks
+------------------
+
+Payment capabilities support all networks supported by AgentKit, including:
+
+- Base Mainnet (``base-mainnet``)
+- Base Sepolia Testnet (``base-sepolia``)
+- Ethereum Mainnet (``ethereum-mainnet``)
+- Ethereum Sepolia Testnet (``ethereum-sepolia``)
+
+For testing, it's recommended to use testnet networks like Base Sepolia.
+
+Troubleshooting
+---------------
+
+Common issues and solutions:
+
+1. **CDP Environment Not Configured**:
+
+ .. code-block:: python
+
+ from agentconnect.utils.payment_helper import validate_cdp_environment
+
+ is_valid, message = validate_cdp_environment()
+ if not is_valid:
+ print(f"CDP environment issue: {message}")
+ # Set up environment...
+
+2. **Agent Not Ready for Payments**:
+
+ .. code-block:: python
+
+ from agentconnect.utils.payment_helper import check_agent_payment_readiness
+
+ status = check_agent_payment_readiness(agent)
+ print(status) # Check which component is missing
+
+3. **Missing Required Packages**:
+
+ If you see errors about missing CDP or AgentKit modules, install them:
+
+ .. code-block:: bash
+
+ pip install cdp-sdk coinbase-agentkit coinbase-agentkit-langchain
+
+4. **Network Connection Issues**:
+
+ Ensure your network allows connections to the CDP API endpoints.
+
+5. **Wallet Data Issues**:
+
+ If wallet data becomes corrupted, you can delete it and let the system recreate it:
+
+ .. code-block:: python
+
+ from agentconnect.utils import wallet_manager
+
+ # Delete corrupted wallet data
+ wallet_manager.delete_wallet_data(agent.agent_id)
+
+ # Restart your agent - it will create a new wallet
+
+Next Steps
+----------
+
+Now that you have a basic understanding of how to enable and use payment capabilities in AgentConnect, you can explore more advanced use cases and workflows.
+
+For a complete implementation of an autonomous agent economy workflow, see the ``autonomous_workflow`` example in the examples directory.
diff --git a/docs/source/guides/collaborative_workflows.rst b/docs/source/guides/collaborative_workflows.rst
new file mode 100644
index 0000000..7ecbd4b
--- /dev/null
+++ b/docs/source/guides/collaborative_workflows.rst
@@ -0,0 +1,181 @@
+Collaborative Workflows with Tools
+==================================
+
+.. _collaborative_workflows:
+
+This guide explains how dynamic collaboration patterns work in AgentConnect, where agents discover and interact with each other based on capabilities rather than hardcoded identifiers.
+
+Introduction
+------------
+
+In the :doc:`multi_agent_setup` guide, you learned how to set up multiple agents with different capabilities. But how do agents actually find and collaborate with each other dynamically?
+
+The true power of AgentConnect's multi-agent systems comes from enabling agents to:
+
+1. **Discover** other agents based on needed capabilities
+2. **Delegate** tasks to the most appropriate agent
+3. **Process** responses asynchronously
+
+AgentConnect provides built-in collaboration tools that help agents perform these operations without requiring you to manually implement registry lookups and message handling. These tools are designed to be used by the agents themselves as part of their reasoning and execution flow.
+
+Introducing Collaboration Tools
+-------------------------------
+
+AgentConnect includes a set of tools specifically designed for agent-to-agent collaboration:
+
+1. ``search_for_agents``: Finds agents based on capability requirements
+2. ``send_collaboration_request``: Sends tasks to other agents and awaits responses
+3. ``check_collaboration_result``: Polls for results of requests that previously timed out
+
+These tools abstract the complexity of registry lookups and message exchanges, making it easier to build dynamic, capability-driven workflows. Typically, these tools are created and provided to agents via the ``PromptTools`` class, which handles their initialization with appropriate dependencies.
+
+Finding Collaborators: ``search_for_agents``
+--------------------------------------------
+
+The first step in dynamic collaboration is finding other agents that can provide needed capabilities.
+
+Purpose
+~~~~~~~
+
+The ``search_for_agents`` tool allows an agent to search the registry for other agents offering specific capabilities. It performs semantic search on capability descriptions, making it more flexible than exact name matching.
+
+Inputs
+~~~~~~
+
+- ``capability_name`` (required): The name or description of the capability needed
+- ``limit`` (optional): Maximum number of agents to return
+- ``similarity_threshold`` (optional): Minimum similarity score for matching
+
+Outputs
+~~~~~~~
+
+The tool returns a structured result containing matching agent IDs, their capabilities, and payment addresses (if available). This makes it easy for the agent to decide which collaborator to work with based on their specific requirements.
+
+Automatic Filtering
+~~~~~~~~~~~~~~~~~~~
+
+The tool automatically excludes the calling agent itself, agents already in active conversations, agents with recent interaction timeouts, and human agents by default.
+
+Internal Mechanism
+~~~~~~~~~~~~~~~~~~
+
+This tool leverages the ``AgentRegistry``'s semantic search capabilities to find agents based on capability descriptions. It applies additional filtering logic to exclude inappropriate agents and provides results in a format that's easy for agents to process.
+
+Delegating Tasks: ``send_collaboration_request``
+------------------------------------------------
+
+Once an agent has found a suitable collaborator, it can delegate a task using the ``send_collaboration_request`` tool.
+
+Purpose
+~~~~~~~
+
+This tool sends a task description to a specific agent and waits for a response, handling the complexities of message routing and response tracking.
+
+Inputs
+~~~~~~
+
+- ``target_agent_id`` (required): ID of the agent to collaborate with
+- ``task`` (required): Description of the task to perform
+- ``timeout`` (optional): Maximum wait time in seconds
+
+Outputs
+~~~~~~~
+
+The tool returns whether the collaboration was successful, the response content (if received), a unique request ID for tracking, and any error messages. This gives the agent everything it needs to process the result or handle timeouts.
+
+Possible Outcomes
+~~~~~~~~~~~~~~~~~
+
+1. **Success**: The collaborator responds within the timeout period
+2. **Timeout**: The collaborator doesn't respond within the timeout
+3. **Error**: Other failures during sending/processing
+
+Internal Mechanism
+~~~~~~~~~~~~~~~~~~
+
+Behind the scenes, this tool uses the ``CommunicationHub``'s message routing system to deliver the request to the target agent and track responses. It handles message formatting, delivery confirmation, and timeout management automatically.
+
+Handling Timeouts: ``check_collaboration_result``
+-------------------------------------------------
+
+For long-running tasks that exceed the timeout, the system includes a ``check_collaboration_result`` mechanism to poll for late responses.
+
+Purpose
+~~~~~~~
+
+This tool checks if a response has arrived for a request that previously timed out, allowing agents to handle asynchronous collaboration.
+
+Inputs
+~~~~~~
+
+- ``request_id`` (required): The request ID from a timed-out collaboration
+
+Outputs
+~~~~~~~
+
+The tool returns whether a result is available, the current status of the request, and the response content if completed. This allows agents to efficiently manage and track long-running collaborations.
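+
+One straightforward way to use this is a small polling loop. The sketch below is hypothetical: ``check`` stands in for however ``check_collaboration_result`` is invoked in your setup, and the result fields are assumed:
+
+.. code-block:: python
+
+    import time
+
+    def wait_for_result(check, request_id, attempts=5, delay=2.0):
+        """Poll a timed-out collaboration until a result arrives or we give up."""
+        for _ in range(attempts):
+            status = check(request_id=request_id)
+            if status.get("completed"):
+                return status.get("response")
+            time.sleep(delay)
+        return None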
+
+Internal Mechanism
+~~~~~~~~~~~~~~~~~~
+
+This tool works with the ``CommunicationHub``'s tracking system to check the status of pending and completed requests. The hub maintains these records across interactions, enabling agents to reconnect with previously initiated collaborations even after timeouts.
+
+Typical Collaboration Workflow
+------------------------------
+
+A typical capability-based collaboration follows this pattern:
+
+1. **Identify Need**: An agent determines it needs a capability it doesn't have
+2. **Search**: The agent uses ``search_for_agents`` to find other agents with the required capability
+3. **Select**: The agent selects a collaborator from the search results
+4. **Delegate**: The agent uses ``send_collaboration_request`` to send the task
+5. **Process Response**:
+
+ - If successful, the agent uses the response
+ - If timeout, the agent stores the ``request_id`` for later checking
+ - If error, the agent handles it appropriately (retry, fallback, etc.)
+6. **Optional Late Check**: If there was a timeout, the agent can periodically check using ``check_collaboration_result``
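+
+Put together, the pattern can be sketched as plain Python. This is illustrative only — in practice the agent's LLM invokes the tools itself during reasoning, and ``search``/``send`` stand in for the collaboration tools with assumed result fields:
+
+.. code-block:: python
+
+    def collaborate(search, send, capability, task):
+        """Sketch of the search -> select -> delegate -> process pattern."""
+        # Step 2: find agents offering the needed capability
+        agents = search(capability_name=capability)
+        if not agents:
+            return {"error": "no suitable collaborator found"}
+        # Step 3: select a collaborator (here, simply the first match)
+        target = agents[0]["agent_id"]
+        # Step 4: delegate the task
+        result = send(target_agent_id=target, task=task, timeout=30)
+        # Step 5: process the response
+        if result.get("success"):
+            return {"response": result["response"]}
+        if result.get("request_id"):
+            # Timeout: keep the request ID for a later check (step 6)
+            return {"pending": result["request_id"]}
+        return {"error": result.get("error", "unknown failure")}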
+
+Advanced Topics
+---------------
+
+**Payment Integration**
+
+AgentConnect supports payment integration for agent-to-agent services. For details on implementing payment workflows, see the :doc:`agent_payment` guide.
+
+**Parallel Collaborations**
+
+For complex tasks, AgentConnect allows sending requests to multiple agents simultaneously. This pattern is particularly useful for tasks requiring diverse expertise or redundancy.
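+
+A minimal sketch of the fan-out pattern, assuming an awaitable ``send_request`` callable that wraps a collaboration request to one agent:
+
+.. code-block:: python
+
+    import asyncio
+
+    async def fan_out(send_request, agent_ids, task):
+        """Send the same task to several agents concurrently."""
+        requests = [send_request(target_agent_id=a, task=task) for a in agent_ids]
+        # Gather all responses; exceptions are returned rather than raised
+        return await asyncio.gather(*requests, return_exceptions=True)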
+
+Seeing Tools in Action
+----------------------
+
+The collaboration tools described in this guide enable agents to discover and work with each other dynamically based on capabilities rather than hardcoded connections. This capability-driven approach is what makes AgentConnect particularly powerful for building flexible multi-agent systems.
+
+To see these dynamic, capability-based collaboration patterns in action, explore these examples:
+
+- `Research Assistant Example `_: Shows how distinct agents (Core, Research, Markdown) with specific capabilities collaborate on research tasks. This example highlights capability definition, agent discovery, and task delegation through the collaboration tools.
+
+- `Multi-Agent System Example `_: Demonstrates a modular system where specialized agents (Telegram, Research, Content Processing, Data Analysis) form a collaborative network. This example showcases registry-based discovery and how the communication hub facilitates dynamic collaboration.
+
+These examples demonstrate how the framework manages capability definition, agent discovery, and task delegation automatically in real-world scenarios.
+
+Customizing Collaboration Mechanisms
+------------------------------------
+
+If you need to customize how agents collaborate, you can reference these key files:
+
+- :doc:`Tools API <../api/agentconnect.prompts.tools>`: Defines the tool implementations and initialization logic
+- :doc:`Registry API <../api/agentconnect.core.registry.registry_base>`: Implements the agent registry and semantic search functionality
+- :doc:`Communication Hub API <../api/agentconnect.communication.hub>`: Handles message routing and collaboration request processing
+
+These files contain the implementation details for the collaboration tools described in this guide.
+
+Next Steps
+----------
+
+To build on your understanding of agent collaboration:
+
+- Learn about integrating external tools in :doc:`external_tools`
+- Explore payment options in :doc:`agent_payment`
+- Understand monitoring options in :doc:`event_monitoring`
\ No newline at end of file
diff --git a/docs/source/guides/core_concepts.rst b/docs/source/guides/core_concepts.rst
new file mode 100644
index 0000000..998805a
--- /dev/null
+++ b/docs/source/guides/core_concepts.rst
@@ -0,0 +1,115 @@
+Core Concepts
+=============
+
+.. _core_concepts:
+
+Welcome to the core concepts guide for AgentConnect. This guide introduces the foundational components that make up the AgentConnect framework, providing you with a solid understanding of how independent agents discover and communicate with each other.
+
+.. image:: ../_static/architecture_flow.png
+ :width: 70%
+ :align: center
+ :alt: AgentConnect Architecture
+
+*The AgentConnect architecture enables decentralized agent discovery and communication.*
+
+Overall Vision: Independent Agents
+----------------------------------
+
+At its heart, AgentConnect is designed to create a network of independent, potentially heterogeneous agents that can discover and communicate with each other securely. Unlike traditional centralized systems, AgentConnect promotes agent autonomy - each agent makes its own decisions about when, how, and with whom to interact.
+
+The framework is built around the ``BaseAgent`` abstract class (``agentconnect/core/agent.py``), which provides the foundation for all agents in the system. This base class defines common functionality such as identity management, message handling, and capability declaration, while leaving implementation details to specific agent types like ``AIAgent`` or ``HumanAgent``.
+
+Communication Hub
+-----------------
+
+The Communication Hub (``CommunicationHub`` in ``agentconnect/communication/hub.py``) is the central message router that facilitates agent-to-agent communication. It's important to understand that while the hub routes messages, it doesn't control agent behavior.
+
+Key responsibilities of the Communication Hub:
+
+1. **Message Routing**: Delivers messages between registered agents
+2. **Agent Lookup**: Uses the Agent Registry to locate message recipients
+3. **Protocol Management**: Ensures consistent communication patterns
+4. **Message History**: Tracks interactions for auditing and debugging
+
+The Hub provides a standardized communication channel while preserving agent autonomy - each agent decides independently how to respond to received messages.
+
+Agent Registry
+--------------
+
+The Agent Registry (``AgentRegistry`` in ``agentconnect/core/registry/registry_base.py``) serves as the dynamic directory or "phone book" where agents register themselves and their capabilities. It enables other agents to discover potential collaborators based on the capabilities they offer.
+
+Key functions of the Agent Registry:
+
+1. **Agent Registration**: Manages the registration of agents with verification
+2. **Capability Indexing**: Maintains searchable indexes of agent capabilities
+3. **Identity Verification**: Ensures agent identities are cryptographically verified
+4. **Discovery**: Allows agents to find other agents based on various criteria
+
+The registry doesn't impose or manage agent behavior - it simply provides the discovery mechanism that enables agents to find each other.
+
+Capabilities
+------------
+
+Capabilities (``Capability`` class in ``agentconnect/core/types.py``) are standardized declarations of what an agent can do. Each capability has a name, description, and defined input/output schemas that allow other agents to understand how to interact with it.
+
+The capability system enables semantic discovery - agents can locate other agents based on the functionality they need rather than knowing specific identifiers in advance.
+
+A typical capability definition looks like:
+
+.. code-block:: python
+
+ Capability(
+ name="conversation",
+ description="General conversation and assistance",
+ input_schema={"query": "string"},
+ output_schema={"response": "string"},
+ )
+
+When an agent registers with the system, its capabilities become discoverable by other agents who may need those services.
+
+Agent Identity
+--------------
+
+Every agent in the system has a unique, cryptographically verifiable identity (``AgentIdentity`` in ``agentconnect/core/types.py``). This identity includes:
+
+1. **Decentralized Identifier (DID)**: A globally unique identifier
+2. **Public Key**: Used to verify message signatures
+3. **Private Key** (optional): Used to sign messages (stored only on the agent itself)
+4. **Verification Status**: Indicates whether the identity has been cryptographically verified
+
+The identity system ensures secure communications by enabling agents to verify that messages truly come from their claimed senders, protecting against impersonation and tampering.
+
+Messages
+--------
+
+All inter-agent communication happens through standardized ``Message`` objects (``agentconnect/core/message.py``). Each message contains:
+
+1. **Unique ID**: For tracking and referencing
+2. **Sender/Receiver IDs**: Who sent the message and who should receive it
+3. **Content**: The actual message payload
+4. **Message Type**: Indicating the purpose or nature of the message (e.g., TEXT, COMMAND)
+5. **Timestamp**: When the message was created
+6. **Signature**: Cryptographic signature for verification
+7. **Metadata**: Additional contextual information
+
+Messages are signed using the sender's private key and can be verified using the sender's public key, ensuring both authenticity and integrity.
+
+How These Components Work Together
+----------------------------------
+
+The flow of agent interaction typically follows this pattern:
+
+1. Agents register with the Agent Registry, declaring their identity and capabilities
+2. An agent needs to use a capability provided by another agent
+3. The agent queries the Registry to find agents offering that capability
+4. The agent creates a signed Message and sends it via the Communication Hub
+5. The Hub looks up the recipient agent and delivers the message
+6. The receiving agent verifies the message signature and processes the request
+7. If a response is needed, the process repeats in reverse
+
+This architecture allows for flexible, secure communication between autonomous agents while maintaining a decentralized approach - no central authority dictates what agents must do or how they must respond.
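+
+To make the flow concrete, here is a deliberately simplified, self-contained sketch of the registration, discovery, and routing steps. It mirrors the roles of the registry and hub but is illustrative only — the real ``AgentRegistry`` and ``CommunicationHub`` APIs are asynchronous and add identity verification and message signing:
+
+.. code-block:: python
+
+    class ToyRegistry:
+        """Steps 1 and 3: capability-based registration and lookup."""
+
+        def __init__(self):
+            self._capabilities = {}  # agent_id -> set of capability names
+
+        def register(self, agent_id, capabilities):
+            self._capabilities[agent_id] = set(capabilities)
+
+        def find_by_capability(self, name):
+            return [a for a, caps in self._capabilities.items() if name in caps]
+
+    class ToyHub:
+        """Steps 4-6: look up the recipient and deliver the message."""
+
+        def __init__(self, registry, handlers):
+            self.registry = registry
+            self.handlers = handlers  # agent_id -> callable(message)
+
+        def route(self, sender_id, receiver_id, content):
+            handler = self.handlers[receiver_id]
+            return handler({"from": sender_id, "content": content})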
+
+Next Steps
+----------
+
+Now that you understand the core concepts of AgentConnect, proceed to the :doc:`first_agent` guide to create and run your first AI agent. You may also want to explore how to integrate human agents using :doc:`human_in_the_loop`.
\ No newline at end of file
diff --git a/docs/source/guides/event_monitoring.rst b/docs/source/guides/event_monitoring.rst
new file mode 100644
index 0000000..910df40
--- /dev/null
+++ b/docs/source/guides/event_monitoring.rst
@@ -0,0 +1,112 @@
+.. _event_monitoring:
+
+Monitoring Agent Interactions with LangSmith
+============================================
+
+Introduction
+------------
+
+Debugging and monitoring complex multi-agent systems presents unique challenges. When agents collaborate, execute tools, and make decisions autonomously, understanding the flow of information and pinpointing issues becomes critical for development and production monitoring.
+
+AgentConnect integrates with `LangSmith `_ to provide comprehensive observability into your agent ecosystem. LangSmith is a powerful platform designed specifically for tracing, monitoring, and debugging LLM applications.
+
+Key benefits of using LangSmith with AgentConnect include:
+
+* **End-to-End Workflow Visualization**: See the complete execution path of agent interactions, from initial user request through all intermediate steps to final response.
+
+* **Tool Call Tracking**: Monitor all tool executions including collaboration tools (like ``search_for_agents``), payment tools (such as ``native_transfer``), and any custom tools you've added.
+
+* **Error Identification**: Quickly identify where and why failures occur in complex agent workflows.
+
+* **Resource Monitoring**: Track token usage, latency, and other performance metrics to optimize your application.
+
+Setup and Configuration
+-----------------------
+
+LangSmith integration is primarily enabled through environment variables. To enable LangSmith tracing for your AgentConnect application, add the following to your ``.env`` file:
+
+.. code-block:: text
+
+ # LangSmith Configuration
+ LANGSMITH_TRACING=true
+ LANGSMITH_API_KEY=your_langsmith_api_key
+ LANGSMITH_PROJECT=AgentConnect
+ LANGSMITH_ENDPOINT=https://api.smith.langchain.com
+
+Check out `LangSmith's documentation `_ for more information on how to create an API key.
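+
+If you prefer to fail fast when tracing is not configured, a tiny helper like the following (a convenience sketch, not part of AgentConnect) can verify the variables above after ``load_dotenv()``:
+
+.. code-block:: python
+
+    import os
+
+    def langsmith_enabled() -> bool:
+        """True when the LangSmith tracing variables are set."""
+        return (
+            os.getenv("LANGSMITH_TRACING", "").lower() == "true"
+            and bool(os.getenv("LANGSMITH_API_KEY"))
+        )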
+
+Automatic Integration
+---------------------
+
+Once you've set these environment variables, AgentConnect and LangChain will automatically capture traces of agent execution without requiring any additional code changes. AgentConnect's internal components are designed to work seamlessly with this automatic tracing system.
+
+Every agent interaction, LLM call, and tool execution will be logged to your LangSmith project, creating a comprehensive record of your application's behavior.
+
+What You Can See in LangSmith
+-----------------------------
+
+**Trace Visualization**
+
+LangSmith provides a visual representation of the entire interaction flow for each request or agent run, allowing you to see the big picture of how agents interact.
+
+.. image:: /_static/langsmith_project_overview.png
+ :width: 96%
+ :align: center
+ :alt: LangSmith project dashboard showing multiple AgentConnect traces
+
+*Figure 1: LangSmith project dashboard showing multiple AgentConnect traces.*
+
+**Step-by-Step Breakdown**
+
+Each trace details the sequence of operations, including LLM calls, tool executions, and agent decision steps. This breakdown helps you understand exactly how your agent arrived at its responses or actions.
+
+.. image:: /_static/langsmith_trace_detail.png
+ :width: 96%
+ :align: center
+ :alt: Detailed view of a single trace showing the sequence of LLM calls and tool executions
+
+*Figure 2: Detailed view of a single trace showing the sequence of LLM calls and tool executions.*
+
+**Tool Usage Insights**
+
+All tool calls are logged with their specific inputs and outputs, including:
+
+* Built-in collaboration tools (``search_for_agents``, ``send_collaboration_request``)
+* Payment tools (``native_transfer``, ``erc20_transfer``)
+* Custom tools added via ``custom_tools``
+
+This provides visibility into exactly what data is flowing between components of your system.
+
+.. image:: /_static/langsmith_tool_call.png
+ :width: 96%
+ :align: center
+ :alt: Detail of a tool call within a trace showing input arguments and the returned result
+
+*Figure 3: Detail of a tool call within a trace showing input arguments and the returned result.*
+
+**Error Debugging**
+
+Errors within the workflow are clearly marked in the trace, showing the failing step and the error message. This makes it much easier to identify and fix issues in complex workflows.
+
+.. image:: /_static/langsmith_error_trace.png
+ :width: 96%
+ :align: center
+ :alt: A LangSmith trace highlighting a failed step and the associated error message
+
+*Figure 4: A LangSmith trace highlighting a failed step and the associated error message.*
+
+**Performance Monitoring**
+
+LangSmith automatically tracks token counts and latency for LLM calls and overall traces. This data helps you optimize your application's performance and manage costs effectively.
+
+Console-Based Monitoring
+------------------------
+
+For real-time console monitoring during development, AgentConnect offers callback-based logging tools. See the :doc:`logging_events` guide for details on using the ``ToolTracerCallbackHandler`` and other logging approaches.
+
+Summary
+-------
+
+LangSmith offers powerful, largely automatic observability for AgentConnect applications when configured correctly via environment variables. This integration enables easier debugging and monitoring of complex agent behaviors, helping you develop more reliable and efficient multi-agent systems.
+
+By combining LangSmith's comprehensive tracing with AgentConnect's flexible architecture, you gain deep insights into your agents' decision-making processes, tool usage, and collaboration patterns. This visibility is essential for both development and production monitoring of sophisticated agent-based applications.
\ No newline at end of file
diff --git a/docs/source/guides/external_tools.rst b/docs/source/guides/external_tools.rst
new file mode 100644
index 0000000..e8eebf4
--- /dev/null
+++ b/docs/source/guides/external_tools.rst
@@ -0,0 +1,193 @@
+.. _external_tools:
+
+Integrating External Tools with AIAgent
+========================================
+
+Introduction
+------------
+
+While ``AIAgent`` comes with built-in collaboration tools and optional payment capabilities, many applications require specialized functionality specific to your domain. You might need agents that can:
+
+* Query your organization's proprietary database
+* Call your internal APIs
+* Perform domain-specific calculations or transformations
+* Access external services only your system has credentials for
+
+AgentConnect allows you to extend agent capabilities by integrating standard `LangChain tools `_, providing a flexible way to equip your agents with the exact functionality they need.
+
+Adding Custom LangChain Tools
+-----------------------------
+
+AgentConnect's ``AIAgent`` class accepts a ``custom_tools`` parameter in its constructor. This parameter takes a list of LangChain ``BaseTool`` instances (or tools created with decorators like ``@tool``).
+
+Here's a simple example showing how to create and add custom tools:
+
+.. code-block:: python
+
+ import os
+ from langchain_core.tools import tool
+ from langchain.tools import StructuredTool
+ from pydantic import BaseModel, Field
+
+ from agentconnect.agents import AIAgent
+ from agentconnect.core.types import ModelProvider, ModelName, AgentIdentity
+
+ # Simple tool using the @tool decorator
+ @tool
+ def calculate_compound_interest(principal: float, rate: float, time: int, compounds_per_year: int = 1) -> float:
+ """
+ Calculate compound interest for an investment.
+
+ Args:
+ principal: Initial investment amount
+ rate: Annual interest rate (as a decimal, e.g. 0.05 for 5%)
+ time: Time period in years
+ compounds_per_year: Number of times interest compounds per year (default: 1)
+
+ Returns:
+ The final amount after compound interest
+ """
+ return principal * (1 + rate/compounds_per_year)**(compounds_per_year*time)
+
+ # More complex tool using StructuredTool with Pydantic models
+ class WeatherQueryInput(BaseModel):
+ """Input for weather query."""
+ location: str = Field(description="City name or zip code")
+ forecast_days: int = Field(default=1, description="Number of days to forecast (1-7)")
+
+ class WeatherQueryOutput(BaseModel):
+ """Output for weather query."""
+ temperature: float = Field(description="Current temperature in Celsius")
+ conditions: str = Field(description="Weather conditions (e.g., sunny, rainy)")
+ forecast: str = Field(description="Text forecast for the requested period")
+
+ def get_weather(input_data: WeatherQueryInput) -> WeatherQueryOutput:
+ """
+ Get weather information for a specific location.
+
+ This is a mock implementation. In a real application, you would:
+ 1. Call your weather API with the provided location
+ 2. Parse the response
+ 3. Return properly formatted data
+ """
+ # Mock implementation - in real code you would call a weather API
+ # such as OpenWeatherMap, Weather.gov, etc.
+ return WeatherQueryOutput(
+ temperature=22.5,
+ conditions="Partly Cloudy",
+ forecast=f"Forecast for the next {input_data.forecast_days} days: Warm and partly cloudy."
+ )
+
+ # Create the structured tool
+ weather_tool = StructuredTool.from_function(
+ func=get_weather,
+ name="get_weather",
+ description="Get weather information for a specific location",
+ args_schema=WeatherQueryInput,
+ return_direct=False
+ )
+
+ # Initialize an AIAgent with custom tools
+ agent = AIAgent(
+ agent_id="domain_expert",
+ name="Domain Expert Agent",
+ provider_type=ModelProvider.ANTHROPIC,
+ model_name=ModelName.CLAUDE_3_OPUS,
+ api_key=os.getenv("ANTHROPIC_API_KEY"),
+ identity=AgentIdentity.create_key_based(),
+ custom_tools=[calculate_compound_interest, weather_tool], # Add your custom tools here
+ personality="You are a helpful assistant that specializes in financial calculations and weather forecasting."
+ )
+
+When you provide ``custom_tools``, they are automatically added to the pool of tools available to the agent's internal LLM workflow. These tools will be available alongside any built-in collaboration or payment tools that the agent has access to.
+
+Based on the user's requests and the conversation context, the agent's LLM will decide when to use these custom tools. The agent treats these tools as part of its capabilities and can invoke them when appropriate.
+
+How Custom Tools Work With the Agent
+------------------------------------
+
+When a user interacts with an agent equipped with custom tools, the workflow typically looks like this:
+
+1. The user sends a request to the agent (e.g., "What would my $1000 investment be worth in 5 years at 7% interest?")
+2. The agent's LLM processes the request and recognizes that it needs to perform a financial calculation
+3. The LLM decides to use the ``calculate_compound_interest`` tool based on its description and parameters
+4. The agent invokes the tool with the appropriate parameters
+5. The tool returns the result to the agent
+6. The agent incorporates the result into its response to the user
+
+This process happens automatically within the agent's internal workflow, making the use of tools transparent to end users.
+
+Designing Effective Custom Tools
+--------------------------------
+
+For your custom tools to work optimally with AI agents, follow these best practices:
+
+1. **Clear, Descriptive Names**: Use names that clearly indicate the tool's purpose (e.g., ``get_weather`` instead of ``weather_func``).
+
+2. **Detailed Descriptions**: Include comprehensive docstrings or descriptions. The LLM relies on these to understand when and how to use the tool.
+
+3. **Well-Defined Input Schemas**: Use type hints for simple tools or Pydantic models for more complex ones. This helps the LLM understand what parameters to provide.
+
+4. **Error Handling**: Implement proper error handling in your tools to provide useful feedback when things go wrong.
+
+5. **Focused Functionality**: Each tool should do one thing well. Break complex operations into multiple tools rather than creating monolithic functions.
+
+6. **Consistent Return Types**: Make sure your tools return consistent data structures that the LLM can easily interpret and incorporate into responses.
+
+.. code-block:: python
+
+ # Example of a well-designed tool with clear typing, description, and error handling
+ @tool
+ def search_customer_database(customer_id: str) -> dict:
+ """
+ Search the customer database for a specific customer and return their information.
+
+ Args:
+ customer_id: The unique identifier for the customer (format: CUS-XXXXX)
+
+ Returns:
+ A dictionary containing customer information (name, email, subscription status, etc.)
+
+ Raises:
+ ValueError: If customer_id is not in the correct format
+ KeyError: If no customer with the given ID exists
+ """
+ # Validate input
+ if not customer_id.startswith("CUS-"):
+ raise ValueError("Customer ID must be in format CUS-XXXXX")
+
+ # Implement actual database query logic here
+ # ...
+
+ # Return customer data
+ return {
+ "name": "John Doe",
+ "email": "john.doe@example.com",
+ "subscription": "Premium",
+ "signup_date": "2023-01-15"
+ }
+
+.. admonition:: Advanced Customization Planned
+ :class: note
+
+ This guide covers the standard method of adding discrete tools to agents. In future releases,
+ AgentConnect plans to support deeper levels of customization, potentially allowing developers to:
+
+ * Inject entirely custom internal workflows (e.g., complex LangGraph state machines)
+ * Fully override default prompt templates
+ * Define custom input/output schemas for the agent's core processing logic
+ * Integrate agents built with other frameworks
+
+ Detailed guides and enhanced framework support for these advanced scenarios are planned for future releases.
+ For now, ``custom_tools`` is the primary extension mechanism.
+
+Next Steps
+----------
+
+To learn more about configuring and using agents:
+
+* See :doc:`agent_configuration` for other agent parameters
+* Explore the :doc:`/examples/index` section for practical examples
+* Refer to the `LangChain documentation `_ for more details on creating and using tools
+
+By combining AgentConnect's built-in capabilities with your own custom tools, you can create agents that are perfectly tailored to your specific use cases and domain requirements.
\ No newline at end of file
diff --git a/docs/source/guides/first_agent.rst b/docs/source/guides/first_agent.rst
new file mode 100644
index 0000000..3e4b13e
--- /dev/null
+++ b/docs/source/guides/first_agent.rst
@@ -0,0 +1,272 @@
+Your First Agent
+================
+
+.. _first_agent:
+
+This guide will walk you through creating and running your first AI agent with AgentConnect. By the end, you'll have a functioning AI agent that can communicate through the AgentConnect framework.
+
+Prerequisites
+-------------
+
+Before starting, make sure you have:
+
+- Python 3.11 or higher installed
+- Cloned the AgentConnect repository
+- Installed dependencies with Poetry
+- Set up your API keys in a `.env` file
+
+If you haven't completed these steps, refer to the main :doc:`../installation` or the :doc:`../quickstart`.
+
+Setup & Imports
+---------------
+
+First, let's create a new Python file (e.g., ``my_first_agent.py``) and add the necessary imports:
+
+.. code-block:: python
+
+ import asyncio
+ import os
+ from dotenv import load_dotenv
+
+ from agentconnect.agents import AIAgent, HumanAgent
+ from agentconnect.communication import CommunicationHub
+ from agentconnect.core.registry import AgentRegistry
+ from agentconnect.core.types import (
+ AgentIdentity,
+ Capability,
+ InteractionMode,
+ ModelName,
+ ModelProvider
+ )
+
+Loading Environment Variables
+-----------------------------
+
+Next, we'll load environment variables to access our API keys:
+
+.. code-block:: python
+
+ async def main():
+ # Load variables from .env file
+ load_dotenv()
+
+ # Now we can access API keys like os.getenv("OPENAI_API_KEY")
+
+Initializing Core Components
+----------------------------
+
+Let's initialize the two fundamental components of AgentConnect:
+
+.. code-block:: python
+
+ # Create the Agent Registry - the "phone book" of agents
+ registry = AgentRegistry()
+
+ # Create the Communication Hub - routes messages between agents
+ hub = CommunicationHub(registry)
+
+Creating Agent Identities
+-------------------------
+
+Each agent needs a secure identity for authentication and messaging:
+
+.. code-block:: python
+
+ # Create identities with cryptographic keys
+ human_identity = AgentIdentity.create_key_based()
+ ai_identity = AgentIdentity.create_key_based()
+
+Configuring the AI Agent
+------------------------
+
+Now we'll create our AI agent with specific capabilities:
+
+.. code-block:: python
+
+ # Create an AI agent with a specific provider/model
+ ai_assistant = AIAgent(
+ agent_id="ai1", # Unique identifier
+ name="Assistant", # Human-readable name
+ provider_type=ModelProvider.OPENAI, # Choose your provider
+ model_name=ModelName.GPT4O, # Choose your model
+ api_key=os.getenv("OPENAI_API_KEY"), # API key from .env
+ identity=ai_identity, # Identity created earlier
+ capabilities=[
+ Capability(
+ name="conversation",
+ description="General conversation and assistance",
+ input_schema={"query": "string"},
+ output_schema={"response": "string"},
+ )
+ ],
+ interaction_modes=[InteractionMode.HUMAN_TO_AGENT],
+ personality="helpful and professional", # Personality traits
+ organization_id="org1", # Optional organization grouping
+ )
+
+The key parameters you can adjust:
+
+- **provider_type**: Choose from ``ModelProvider.OPENAI``, ``ModelProvider.ANTHROPIC``, ``ModelProvider.GOOGLE``, etc.
+- **model_name**: Select from ``ModelName.GPT4O``, ``ModelName.O1``, ``ModelName.CLAUDE_3_7_SONNET``, etc.
+- **capabilities**: Define what your agent can do (these are discoverable by other agents)
+- **personality**: Adjust how your agent responds
+
+Configuring a Human Agent
+-------------------------
+
+For interactive testing, let's create a human agent that can chat with our AI:
+
+.. code-block:: python
+
+ # Create a human agent for interaction
+ human = HumanAgent(
+ agent_id="human1", # Unique identifier
+ name="User", # Human-readable name
+ identity=human_identity, # Identity created earlier
+ organization_id="org1", # Optional organization grouping
+ )
+
+Registering Agents
+------------------
+
+To make our agents discoverable, we register them with the hub:
+
+.. code-block:: python
+
+ # Register both agents with the hub
+ await hub.register_agent(human)
+ await hub.register_agent(ai_assistant)
+
+Running the Agent
+-----------------
+
+Now we'll start the agent's processing loop:
+
+.. code-block:: python
+
+ # Start the AI agent's processing loop as a background task
+ ai_task = asyncio.create_task(ai_assistant.run())
+
+Initiating Interaction
+----------------------
+
+With everything set up, we can start chatting with our AI agent:
+
+.. code-block:: python
+
+ # Start interactive terminal chat session
+ await human.start_interaction(ai_assistant)
+
+Cleanup
+-------
+
+Finally, let's clean up resources when we're done:
+
+.. code-block:: python
+
+ # Stop the AI agent
+ await ai_assistant.stop()
+
+ # Unregister agents
+ await hub.unregister_agent(human.agent_id)
+ await hub.unregister_agent(ai_assistant.agent_id)
+
+Complete Example
+------------------
+
+Here's the complete script:
+
+.. code-block:: python
+
+ import asyncio
+ import os
+ from dotenv import load_dotenv
+
+ from agentconnect.agents import AIAgent, HumanAgent
+ from agentconnect.communication import CommunicationHub
+ from agentconnect.core.registry import AgentRegistry
+ from agentconnect.core.types import (
+ AgentIdentity,
+ Capability,
+ InteractionMode,
+ ModelName,
+ ModelProvider
+ )
+
+ async def main():
+ # Load environment variables
+ load_dotenv()
+
+ # Initialize registry and hub
+ registry = AgentRegistry()
+ hub = CommunicationHub(registry)
+
+ # Create agent identities
+ human_identity = AgentIdentity.create_key_based()
+ ai_identity = AgentIdentity.create_key_based()
+
+ # Create a human agent
+ human = HumanAgent(
+ agent_id="human1",
+ name="User",
+ identity=human_identity,
+ organization_id="org1"
+ )
+
+ # Create an AI agent
+ ai_assistant = AIAgent(
+ agent_id="ai1",
+ name="Assistant",
+ provider_type=ModelProvider.OPENAI, # Or ModelProvider.GROQ, etc.
+ model_name=ModelName.GPT4O, # Choose your model
+ api_key=os.getenv("OPENAI_API_KEY"),
+ identity=ai_identity,
+ capabilities=[Capability(
+ name="conversation",
+ description="General conversation and assistance",
+ input_schema={"query": "string"},
+ output_schema={"response": "string"},
+ )],
+ interaction_modes=[InteractionMode.HUMAN_TO_AGENT],
+ personality="helpful and professional",
+ organization_id="org1",
+ )
+
+ # Register agents with the hub
+ await hub.register_agent(human)
+ await hub.register_agent(ai_assistant)
+
+ # Start AI processing loop
+ ai_task = asyncio.create_task(ai_assistant.run())
+
+ # Start interactive session
+ await human.start_interaction(ai_assistant)
+
+ # Cleanup
+ await ai_assistant.stop()
+ await hub.unregister_agent(human.agent_id)
+ await hub.unregister_agent(ai_assistant.agent_id)
+
+ if __name__ == "__main__":
+ asyncio.run(main())
+
+Running the Script
+--------------------
+
+To run your script:
+
+.. code-block:: shell
+
+ python my_first_agent.py
+
+You'll see a terminal prompt where you can interact with your AI agent. Type messages and receive responses. To exit the conversation, type "exit", "quit", or "bye".
+
+Next Steps
+------------
+
+Now that you've created your first agent, you're ready to explore more complex scenarios:
+
+- Try changing the agent's capabilities or personality
+- Experiment with different model providers
+- Learn how to set up multiple agents in the :doc:`multi_agent_setup` guide
+- Explore how to integrate human agents using :doc:`human_in_the_loop`
\ No newline at end of file
diff --git a/docs/source/guides/human_in_the_loop.rst b/docs/source/guides/human_in_the_loop.rst
new file mode 100644
index 0000000..e2729e8
--- /dev/null
+++ b/docs/source/guides/human_in_the_loop.rst
@@ -0,0 +1,326 @@
+Human-in-the-Loop Interaction
+=============================
+
+.. _human_in_the_loop:
+
+This guide explains how to integrate human users into your AgentConnect workflows using the ``HumanAgent`` class. You'll learn how to implement approval workflows, request human input, and properly handle human responses within your multi-agent systems.
+
+Introduction
+--------------
+
+The ``HumanAgent`` class (defined in ``agentconnect/agents/human_agent.py``) serves as the bridge between AI workflows and human users through a terminal interface. It allows AI agents to request human input, approvals, or reviews without requiring a pre-defined conversation structure.
+
+The ``HumanAgent`` supports two distinct interaction patterns:
+
+1. **Direct Chat Session** (via ``start_interaction()``): A dedicated terminal conversation between a human and an AI agent, as shown in the :doc:`first_agent` guide.
+2. **Workflow Participant** (via ``run()``): The human agent operates like any other agent in the system, receiving messages from various agents and providing responses as needed.
+
+This guide focuses on the second pattern—integrating a human as a participant in agent workflows.
+
+Human Agent as a Workflow Participant
+----------------------------------------
+
+To use the ``HumanAgent`` for approvals, reviews, or input within a workflow, you create and register it like any other agent in your system. The key difference from the direct chat approach is that you start its processing loop with ``run()`` instead of calling ``start_interaction()``.
+
+Here's how to set up a ``HumanAgent`` as a workflow participant:
+
+.. code-block:: python
+
+ # Create a human agent
+ human = HumanAgent(
+ agent_id="human1",
+ name="User",
+ identity=human_identity,
+ organization_id="org1"
+ )
+
+ # Register with the hub
+ await hub.register_agent(human)
+
+ # Start the human agent's processing loop
+ human_task = asyncio.create_task(human.run())
+
+With this setup, the human agent is now available to receive messages from any other agent in the system and respond to them through the terminal.
+
+Workflow: AI Sends Request to Human
+--------------------------------------
+
+In a typical workflow, an AI agent sends a message to the human agent requesting some form of input or approval:
+
+.. code-block:: python
+
+ # AI agent sends a request to the human agent
+ await ai_agent.send_message(
+ receiver_id=human.agent_id,
+ content="Task completed: Report generated. Please review and respond 'approve' or 'request changes [your comments]'.",
+ message_type=MessageType.TEXT
+ )
+
+The message is routed through the ``CommunicationHub`` to the ``HumanAgent``, which then processes it using its ``process_message`` method.
+
+Human Interaction Flow (The Terminal Experience)
+--------------------------------------------------
+
+When the ``HumanAgent`` (running via its ``run()`` loop) receives a message, the following sequence occurs in the terminal where your Python script is running:
+
+1. The message content appears in the terminal, prefixed with the sender's ID:
+
+ .. code-block:: text
+
+ ai1:
+ Task completed: Report generated. Please review and respond 'approve' or 'request changes [your comments]'.
+ ----------------------------------------
+
+
+2. The ``HumanAgent`` immediately prints a prompt showing available commands:
+
+ .. code-block:: text
+
+ Type your response or use these commands:
+ - 'exit', 'quit', or 'bye' to end the conversation
+ - Press Enter without typing to skip responding
+
+ You:
+
+3. The script execution **pauses** at this point, waiting for the human to type a response, using ``aioconsole.ainput`` to capture the input.
+
+This interaction happens directly in the terminal where you're running your script—there's no separate interface.
+
+Human Provides Input
+----------------------
+
+The human (you, running the script) has three options when the ``HumanAgent`` prompts for input:
+
+1. **Type a response**: Whatever is typed will be sent back to the AI agent that sent the original message.
+
+ .. code-block:: text
+
+ You: approve
+
+2. **Press Enter without typing**: If the human presses Enter without typing anything, the ``HumanAgent`` logs this action but doesn't send any message back to the AI agent.
+
+ .. code-block:: text
+
+ You:
+ No response sent.
+
+
+3. **End the conversation**: If the human types "exit", "quit", or "bye", the ``HumanAgent`` sends a special STOP message to the AI agent and ends that conversation.
+
+ .. code-block:: text
+
+ You: exit
+ Ending conversation with ai1
+
+
+Response Sent Back to AI
+--------------------------
+
+When the human types a response, the ``HumanAgent`` packages it into a standard ``Message`` object and sends it back to the original AI agent sender via the ``CommunicationHub``:
+
+1. The human's input is captured by ``aioconsole.ainput``
+2. The ``HumanAgent`` creates a ``Message`` with the input as content
+3. The message is sent back to the original sender (the AI agent)
+4. The AI agent can then process this response in its own ``process_message`` method
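What the AI agent does with that reply is up to your own ``process_message`` logic. As a minimal, framework-agnostic sketch (the helper below is hypothetical, not an AgentConnect API), the reply content from the human might be interpreted like this:

```python
def interpret_human_reply(content: str) -> dict:
    """Interpret a human approval reply (hypothetical helper, not part of AgentConnect).

    Returns a dict with the decision and any trailing reviewer comments.
    """
    text = content.strip()
    lowered = text.lower()
    if lowered.startswith("approve"):
        return {"decision": "approved", "comments": text[len("approve"):].strip()}
    # Everything after the rejection keyword is treated as reviewer comments
    for keyword in ("request changes", "reject"):
        if lowered.startswith(keyword):
            return {"decision": "rejected", "comments": text[len(keyword):].strip()}
    return {"decision": "unclear", "comments": text}

print(interpret_human_reply("approve"))
print(interpret_human_reply("request changes tighten the summary"))
```

In a real agent you would call a helper like this from the code that handles the incoming ``Message`` and branch the workflow on the decision.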
+
+Use Case Example: Approval Workflow
+--------------------------------------
+
+A common use case for human-in-the-loop integration is an approval workflow, where an AI agent requires human approval before proceeding with a task:
+
+.. image:: ../_static/approval_workflow.png
+ :width: 60%
+ :align: center
+ :alt: Human Approval Workflow Diagram
+
+*Approval workflow with human-in-the-loop participation*
+
+The typical flow is:
+
+1. AI agent performs a task or analysis
+2. AI agent sends results to human agent for review
+3. Human agent displays the message in the terminal and prompts for input
+4. Human user types "approve" or "reject" (with optional comments)
+5. Human agent sends the response back to the AI agent
+6. AI agent proceeds based on the human's decision
+
+Code Example: Human Approval Workflow
+----------------------------------------
+
+Here's a complete example demonstrating a human approval workflow:
+
+.. code-block:: python
+
+ import asyncio
+ import os
+ from dotenv import load_dotenv
+
+ from agentconnect.agents import AIAgent, HumanAgent
+ from agentconnect.communication import CommunicationHub
+ from agentconnect.core.registry import AgentRegistry
+ from agentconnect.core.types import (
+ AgentIdentity,
+ Capability,
+ InteractionMode,
+ ModelName,
+ ModelProvider,
+ MessageType
+ )
+
+ async def main():
+ # Load environment variables
+ load_dotenv()
+
+ # Initialize registry and hub
+ registry = AgentRegistry()
+ hub = CommunicationHub(registry)
+
+ # Create agent identities
+ human_identity = AgentIdentity.create_key_based()
+ ai_identity = AgentIdentity.create_key_based()
+
+ # Create a human agent
+ human = HumanAgent(
+ agent_id="human1",
+ name="User",
+ identity=human_identity,
+ organization_id="org1"
+ )
+
+ # Create an AI agent
+ ai_assistant = AIAgent(
+ agent_id="ai1",
+ name="Assistant",
+ provider_type=ModelProvider.OPENAI,
+ model_name=ModelName.GPT4O,
+ api_key=os.getenv("OPENAI_API_KEY"),
+ identity=ai_identity,
+ capabilities=[Capability(
+ name="data_analysis",
+ description="Analyze data and provide insights",
+ input_schema={"data": "string"},
+ output_schema={"analysis": "string"},
+ )],
+ interaction_modes=[InteractionMode.HUMAN_TO_AGENT, InteractionMode.AGENT_TO_AGENT],
+ personality="professional and thorough",
+ organization_id="org1",
+ )
+
+ # Register both agents with the hub
+ await hub.register_agent(human)
+ await hub.register_agent(ai_assistant)
+
+ # Start both agent processing loops
+ human_task = asyncio.create_task(human.run())
+ ai_task = asyncio.create_task(ai_assistant.run())
+
+ try:
+ # Simulate AI agent performing a task
+ print("AI agent performing analysis...")
+ await asyncio.sleep(2) # Simulate work
+
+ analysis_result = "Based on the data, I recommend Strategy A with 78% confidence."
+
+ # AI sends results to human for approval
+ print("AI agent requesting human approval...")
+ await ai_assistant.send_message(
+ receiver_id=human.agent_id,
+ content=f"I've completed my analysis:\n\n{analysis_result}\n\nDo you approve this recommendation? (Type 'approve' or 'reject')",
+ message_type=MessageType.TEXT
+ )
+
+ # At this point, the human will see the message in their terminal
+ # and will be prompted to respond. The script will wait at this point.
+
+ # Let the interaction run for a while
+ print("Waiting for human interaction (30 seconds)...")
+ await asyncio.sleep(30)
+
+ finally:
+ # Cleanup
+ print("Shutting down agents...")
+ await ai_assistant.stop()
+ await human.stop()
+ await hub.unregister_agent(human.agent_id)
+ await hub.unregister_agent(ai_assistant.agent_id)
+ print("Done.")
+
+ if __name__ == "__main__":
+ asyncio.run(main())
+
+Notice that we **do not call** ``human.start_interaction()`` in this example. Instead, we start the human agent's processing loop with ``human.run()``, allowing it to participate in the workflow like any other agent.
+
+Running the Workflow Example
+------------------------------
+
+When you run this script:
+
+1. The AI agent performs its analysis
+2. It sends a message to the human agent requesting approval
+3. **You** (as the human user) will see this message appear directly in the terminal where the script is running
+4. You'll be prompted to type your response
+5. Whatever you type will be sent back to the AI agent
+
+Remember, when running this script, **you are the Human Agent**. The messages will appear directly in the terminal where you launched the script, and you'll be expected to type responses there.
+
+Advanced: Response Callbacks
+------------------------------
+
+The ``HumanAgent`` supports response callbacks that allow you to track and react to human responses programmatically. This is particularly useful for:
+
+- Detecting when a human has provided input
+- Triggering other system actions based on human responses
+- Implementing timeouts for human input
+- Logging or auditing human decisions
+
+To use callbacks, provide a list of functions when creating the ``HumanAgent``:
+
+.. code-block:: python
+
+ # Define a callback function
+ def on_human_response(response_data):
+ print(f"Human responded: {response_data['content']}")
+
+ # Check if the human approved or rejected
+ if response_data['content'].lower() == 'approve':
+ print("Human approved! Proceeding with task...")
+ # Trigger additional system actions
+ elif response_data['content'].lower() == 'reject':
+ print("Human rejected. Cancelling task...")
+
+ # Create the human agent with the callback
+ human = HumanAgent(
+ agent_id="human1",
+ name="User",
+ identity=human_identity,
+ organization_id="org1",
+ response_callbacks=[on_human_response] # Add our callback
+ )
+
+You can also add or remove callbacks after creating the agent:
+
+.. code-block:: python
+
+ # Add a callback later
+ human.add_response_callback(another_callback)
+
+ # Remove a callback
+ human.remove_response_callback(on_human_response)
+
+The callback function receives a dictionary with information about the response:
+
+- ``receiver_id``: The ID of the agent receiving the human's message
+- ``content``: The text content of the human's message
+- ``message_type``: The type of message (TEXT, STOP, etc.)
+- ``timestamp``: When the response was sent
+
+This can be especially useful for implementing timeout mechanisms or coordinating complex workflows that depend on human input.
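For instance, a timeout can be implemented by having the callback set an ``asyncio.Event`` that your workflow waits on. This is a self-contained sketch assuming the callback payload fields described above; the simulated reply stands in for the real ``HumanAgent`` terminal input:

```python
import asyncio

async def wait_for_human(timeout_seconds: float = 5.0):
    response_received = asyncio.Event()
    responses = []

    # This is the function you would pass via response_callbacks=[...]
    def on_human_response(response_data):
        responses.append(response_data)
        response_received.set()

    # Stand-in for the human typing a reply in the terminal
    async def simulated_reply():
        await asyncio.sleep(0.1)
        on_human_response({"receiver_id": "ai1", "content": "approve",
                           "message_type": "TEXT", "timestamp": 0})

    reply_task = asyncio.create_task(simulated_reply())
    try:
        # Wait for the callback to fire, or give up after the timeout
        await asyncio.wait_for(response_received.wait(), timeout=timeout_seconds)
        return responses[-1]["content"]
    except asyncio.TimeoutError:
        reply_task.cancel()
        return None

print(asyncio.run(wait_for_human()))
```

If the event is never set before the deadline, the workflow receives ``None`` and can fall back to a default decision or escalate.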
+
+Next Steps
+------------
+
+Now that you understand how to integrate humans into agent workflows, you can:
+
+- Explore more complex multi-agent systems in the :doc:`multi_agent_setup` guide
+- Learn about collaborative agent workflows in the :doc:`collaborative_workflows` guide
+- Discover how to enhance agents with external tools in the :doc:`external_tools` guide
\ No newline at end of file
diff --git a/docs/source/guides/index.rst b/docs/source/guides/index.rst
index 5814327..d5c10df 100644
--- a/docs/source/guides/index.rst
+++ b/docs/source/guides/index.rst
@@ -1,12 +1,110 @@
Guides
======
-This section provides guides for common tasks with AgentConnect.
+Welcome to the AgentConnect Guides! These practical, step-by-step tutorials will help you harness the power of AgentConnect to build, connect, and deploy independent AI agents for complex tasks.
+
+.. admonition:: Who are these guides for?
+ :class: note
+
+ These guides are designed for developers looking to:
+
+ - Build applications with single or multiple collaborating AI agents.
+ - Integrate AgentConnect into existing systems.
+ - Create autonomous workflows involving payments and external tools.
+ - Understand best practices for deploying and managing AgentConnect applications.
+
+Core Concepts & Getting Started
+-----------------------------------
+Build a solid foundation by understanding the core components (Registry, Hub, Capabilities, Identity) and setting up your first agent.
+
+* :doc:`Core Concepts <core_concepts>`: Key concepts like the Agent Registry, Communication Hub, Capabilities, and Agent Identity.
+* :doc:`Your First Agent <first_agent>`: Create and run a simple AI agent, configure its provider, and interact with it.
+* :doc:`Human-in-the-Loop <human_in_the_loop>`: Integrate a ``HumanAgent`` for interactive sessions or approvals.
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+ :caption: Core Concepts & Getting Started
+
+ core_concepts
+ first_agent
+ human_in_the_loop
+
+Building Multi-Agent Systems
+-------------------------------
+Learn how to orchestrate multiple agents that discover each other dynamically and collaborate by defining capabilities and designing interaction workflows.
+
+* :doc:`Multi-Agent Setup <multi_agent_setup>`: Registering multiple agents (AI, Human, etc.) and defining their capabilities.
+* :doc:`Collaborative Workflows <collaborative_workflows>`: Design patterns for common multi-agent tasks like information gathering, task delegation, and parallel processing.
+
.. toctree::
- :maxdepth: 2
+ :maxdepth: 1
+ :hidden:
+ :caption: Building Multi-Agent Systems
multi_agent_setup
- custom_providers
- advanced_configuration
+ collaborative_workflows
+
+Advanced Agent Configuration & Security
+------------------------------------------
+Customize the behavior of provided agents (like ``AIAgent``) and understand the framework's security mechanisms like message signing.
+
+* :doc:`AI Agent Deep Dive <agent_configuration>`: Advanced configuration options for the ``AIAgent``, including personality, interaction modes, resource limits, and error handling.
+* :doc:`Secure Agent Communication <secure_communication>`: Understanding message signing and verification for secure interactions.
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+ :caption: Advanced Agent Configuration & Security
+
+ agent_configuration
+ secure_communication
+
+Specialized Use Cases & Integrations
+-----------------------------------------
+Explore specific applications like agent payments, Telegram integration, and connecting agents to external tools via capabilities.
+
+* :doc:`Agent Payments <agent_payment>`: Enabling and managing agent-to-agent payments.
+* :doc:`Telegram Integration <telegram_integration>`: Building AI assistants accessible via Telegram.
+* :doc:`External Tools <external_tools>`: Equipping agents with the ability to use external tools.
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+ :caption: Specialized Use Cases & Integrations
+
+ agent_payment
telegram_integration
+ external_tools
+
+Monitoring
+------------
+Observe, debug, and trace your AgentConnect applications using tools like LangSmith and custom logging.
+
+* :doc:`Monitoring with LangSmith <event_monitoring>`: Tracing agent interactions, debugging, and analyzing performance with LangSmith.
+* :doc:`Logging & Event Handling <logging_events>`: Implementing custom logging and reacting to agent events.
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+ :caption: Monitoring
+
+ event_monitoring
+ logging_events
+
+Extending AgentConnect
+-------------------------
+Dive deeper and extend the framework by building **Custom Agents** using the ``BaseAgent`` abstraction (integrating frameworks like CrewAI, etc.) or adding new AI provider integrations.
+
+* :doc:`Advanced Guides (Coming Soon) <advanced/index>`: Guides on creating custom agents, providers, and more advanced configurations.
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+ :caption: Extending AgentConnect
+
+ advanced/index
+
+
+.. note::
+ These guides focus on practical implementation. For detailed class and method descriptions, refer to the :doc:`../api/index`. For runnable code examples, see the :doc:`../examples/index`.
\ No newline at end of file
diff --git a/docs/source/guides/logging_events.rst b/docs/source/guides/logging_events.rst
new file mode 100644
index 0000000..d949b45
--- /dev/null
+++ b/docs/source/guides/logging_events.rst
@@ -0,0 +1,170 @@
+.. _logging_events:
+
+Application Logging & Event Handling
+====================================
+
+Introduction
+--------------
+
+AgentConnect provides multiple approaches to monitor your applications:
+
+1. **Python Logging**: For application status and component messages
+2. **Callback Handlers**: For reacting to agent lifecycle events
+3. **LangSmith Tracing**: For comprehensive workflow visualization (covered in :doc:`event_monitoring`)
+
+Using AgentConnect's Logging Configuration
+---------------------------------------------
+
+AgentConnect includes a built-in logging configuration module:
+
+.. code-block:: python
+
+ from agentconnect.utils.logging_config import setup_logging, LogLevel
+
+ # Quick setup with default INFO level
+ setup_logging()
+
+ # More granular control
+ setup_logging(
+ level=LogLevel.DEBUG, # Global level
+ module_levels={ # Component-specific levels
+ "agentconnect.agents": LogLevel.DEBUG,
+ "agentconnect.core": LogLevel.INFO,
+ "langchain": LogLevel.WARNING,
+ }
+ )
+
+This automatically configures colorized console output and proper formatting.
+
+For development environments, use recommended debug levels:
+
+.. code-block:: python
+
+ from agentconnect.utils.logging_config import setup_logging, get_module_levels_for_development
+
+ # Set up development-friendly logging levels
+ setup_logging(
+ level=LogLevel.INFO,
+ module_levels=get_module_levels_for_development()
+ )
+
+Adding Logging to Your Components
+------------------------------------
+
+After configuring logging, use standard Python logging in your code:
+
+.. code-block:: python
+
+ import logging
+
+ # Create a logger for your module
+ logger = logging.getLogger(__name__)
+
+ def my_function():
+ logger.debug("Starting function")
+ # Function logic here
+ logger.info("Operation completed")
+
+Using Environment Variables
+------------------------------
+
+Configure logging levels via environment variables:
+
+.. code-block:: text
+
+ # .env file
+ LOG_LEVEL=DEBUG
+
+.. code-block:: python
+
+ import os
+ from agentconnect.utils.logging_config import setup_logging, LogLevel
+
+ # Map string to enum
+ level_map = {
+ "DEBUG": LogLevel.DEBUG,
+ "INFO": LogLevel.INFO,
+ "WARNING": LogLevel.WARNING,
+ "ERROR": LogLevel.ERROR,
+ }
+
+ log_level = level_map.get(os.getenv("LOG_LEVEL", "INFO").upper(), LogLevel.INFO)
+ setup_logging(level=log_level)
+
+Handling Agent Events with Callbacks
+--------------------------------------
+
+Track and react to agent events using LangChain's callback system:
+
+.. code-block:: python
+
+ from langchain_core.callbacks import BaseCallbackHandler
+
+ class ToolUsageTracker(BaseCallbackHandler):
+ def __init__(self):
+ super().__init__()
+ self.tool_counts = {}
+
+ def on_tool_start(self, serialized, input_str, **kwargs):
+ tool_name = serialized.get("name", "unknown")
+ self.tool_counts[tool_name] = self.tool_counts.get(tool_name, 0) + 1
+
+ def get_usage_report(self):
+ return self.tool_counts
+
+To use with an agent:
+
+.. code-block:: python
+
+ from agentconnect.agents import AIAgent
+ from agentconnect.core.types import ModelProvider, ModelName, AgentIdentity
+
+ # Create tracker
+ usage_tracker = ToolUsageTracker()
+
+ # Add to agent
+ agent = AIAgent(
+ agent_id="my_agent",
+ name="Agent with Tracking",
+ provider_type=ModelProvider.ANTHROPIC,
+ model_name=ModelName.CLAUDE_3_OPUS,
+ api_key="your_api_key",
+ identity=AgentIdentity.create_key_based(),
+ external_callbacks=[usage_tracker]
+ )
+
+ # After running, check stats
+ await agent.run()
+ print(f"Tool usage: {usage_tracker.get_usage_report()}")
+
+Built-in Tool Tracing
+------------------------
+
+AgentConnect includes a built-in ``ToolTracerCallbackHandler`` for colorized console output:
+
+.. code-block:: python
+
+ from agentconnect.utils.callbacks import ToolTracerCallbackHandler
+
+ # Create with default settings
+ tool_tracer = ToolTracerCallbackHandler(
+ agent_id="my_agent",
+ print_tool_activity=True,
+ print_reasoning_steps=True
+ )
+
+ # Add to agent initialization
+ agent = AIAgent(
+ # ... other parameters ...
+ agent_id="my_agent",
+ external_callbacks=[tool_tracer]
+ )
+
+When to Use Each Approach
+----------------------------
+
+* **Standard Logging**: Application status, errors, and diagnostic information
+* **Callbacks**: Tool usage tracking, custom metrics, and user interface updates
+* **LangSmith**: Detailed workflow debugging and token usage analysis
+
+For most applications, combining these approaches provides comprehensive visibility.
\ No newline at end of file
diff --git a/docs/source/guides/multi_agent_setup.rst b/docs/source/guides/multi_agent_setup.rst
index 0b6fa1b..07bf994 100644
--- a/docs/source/guides/multi_agent_setup.rst
+++ b/docs/source/guides/multi_agent_setup.rst
@@ -1,306 +1,431 @@
Multi-Agent Setup Guide
-=====================
+=========================
.. _multi_agent_setup:
-Setting Up Multiple Agents
------------------------
+This guide explains how to set up multiple agents that can collaborate, discover each other based on capabilities, and work together to solve complex problems.
-This guide explains how to set up and manage multiple agents in AgentConnect.
-
-Prerequisites
+Introduction
-----------
-Before setting up multiple agents, ensure you have:
+In AgentConnect, multi-agent systems consist of independent agents—each with their own specialized capabilities—working together through standardized communication. The framework handles agent discovery and message routing automatically, allowing you to focus on defining the agents and their skills.
-- Installed AgentConnect
-- API keys for the AI providers you plan to use
-- Basic understanding of the AgentConnect architecture
+The core value of multi-agent systems comes from:
-Creating Multiple Agents
----------------------
+- **Specialization**: Agents can focus on specific tasks they excel at
+- **Modularity**: New capabilities can be added by introducing new agents
+- **Scalability**: Systems can grow organically as needs evolve
+- **Separation of concerns**: Each agent manages its own internal logic
+
+Core Principles of Multi-Agent Setup
+--------------------------------------
+
+The key to enabling collaboration between agents lies in three fundamental concepts:
+
+1. **Capabilities**: Clearly defined services that agents can provide
+2. **Registry**: A directory for capability-based discovery
+3. **Communication Hub**: A message router connecting agents based on registry lookups
-To create multiple agents, you'll need to:
+Let's explore each of these principles:
-1. Create agent identities
-2. Initialize agents with different configurations
-3. Register them with a communication hub
+Capabilities: The Foundation of Collaboration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Here's an example:
+Each agent declares its capabilities—the services it can provide to other agents. These capability definitions include:
+
+- A unique name
+- A clear description
+- Input and output schemas
+
+For example:
+
+.. code-block:: python
+
+ from agentconnect.core.types import Capability
+
+ # Define a summarization capability
+ summarization_capability = Capability(
+ name="text_summarization",
+ description="Summarizes text content into concise form",
+ input_schema={"text": "string", "max_length": "integer"},
+ output_schema={"summary": "string"}
+ )
+
+ # Define a data analysis capability
+ analysis_capability = Capability(
+ name="data_analysis",
+ description="Analyzes data and provides insights",
+ input_schema={"data": "string"},
+ output_schema={"analysis": "string"}
+ )
+
+When you create an agent with these capabilities, you're advertising what services the agent can provide to others in the system.
+
+Registry: The Agent Directory
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``AgentRegistry`` serves as a dynamic directory of all available agents and their capabilities. When an agent needs a specific capability, the registry provides the means to find agents that offer it.
.. code-block:: python
- import asyncio
- from agentconnect.agents import AIAgent
- from agentconnect.core.types import ModelProvider, ModelName, AgentIdentity, InteractionMode
from agentconnect.core.registry import AgentRegistry
+
+ # Create the registry
+ registry = AgentRegistry()
+
+Communication Hub: Message Routing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``CommunicationHub`` handles message routing between agents, allowing them to exchange information regardless of where they're located:
+
+.. code-block:: python
+
from agentconnect.communication import CommunicationHub
- async def setup_agents():
- # Create an agent registry and communication hub
- registry = AgentRegistry()
- hub = CommunicationHub(registry)
-
- # Create identities for your agents
- assistant_identity = AgentIdentity.create_key_based()
- researcher_identity = AgentIdentity.create_key_based()
- coder_identity = AgentIdentity.create_key_based()
-
- # Create an assistant agent
- assistant_agent = AIAgent(
- agent_id="assistant-1",
- name="Assistant",
- provider_type=ModelProvider.OPENAI,
- model_name=ModelName.GPT4O,
- api_key="your-openai-api-key",
- identity=assistant_identity,
- interaction_modes=[
- InteractionMode.HUMAN_TO_AGENT,
- InteractionMode.AGENT_TO_AGENT
- ]
- )
-
- # Create a researcher agent
- researcher_agent = AIAgent(
- agent_id="researcher-1",
- name="Researcher",
- provider_type=ModelProvider.ANTHROPIC,
- model_name=ModelName.CLAUDE_3_7_SONNET,
- api_key="your-anthropic-api-key",
- identity=researcher_identity,
- interaction_modes=[
- InteractionMode.AGENT_TO_AGENT
- ]
- )
-
- # Create a coder agent
- coder_agent = AIAgent(
- agent_id="coder-1",
- name="Coder",
- provider_type=ModelProvider.OPENAI,
- model_name=ModelName.GPT4O,
- api_key="your-openai-api-key",
- identity=coder_identity,
- interaction_modes=[
- InteractionMode.AGENT_TO_AGENT
- ]
- )
-
- # Register the agents with the hub
- await hub.register_agent(assistant_agent)
- await hub.register_agent(researcher_agent)
- await hub.register_agent(coder_agent)
-
- return hub, [assistant_agent, researcher_agent, coder_agent]
+ # Create the hub with reference to the registry
+ hub = CommunicationHub(registry)
+
+Step-by-Step Guide to Setup
+------------------------------
+
+Now let's walk through the steps to create a multi-agent system:
+
+Step 1: Define Agent Roles & Capabilities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+First, plan what agents you need and what capabilities each should have. For example:
-Configuring Agent Communication
-----------------------------
+- **Orchestrator Agent**: Coordinates workflows, interacts with users
+- **Summarizer Agent**: Specializes in condensing text into summaries
-Once you have multiple agents, you need to configure how they communicate:
+For each agent, define clear, well-described capabilities that other agents can discover and use.
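One way to capture this planning step is to write the role-to-capability map down before creating any agents. The sketch below uses a stand-in dataclass so it runs on its own; in a real project you would import ``Capability`` from ``agentconnect.core.types``, which carries the same fields:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:  # stand-in mirroring agentconnect.core.types.Capability
    name: str
    description: str
    input_schema: dict = field(default_factory=dict)
    output_schema: dict = field(default_factory=dict)

# Map each planned agent role to the capabilities it should advertise
agent_plan = {
    "orchestrator": [
        Capability(
            name="task_management",
            description="Manages and coordinates complex tasks",
            input_schema={"task": "string"},
            output_schema={"result": "string"},
        )
    ],
    "summarizer": [
        Capability(
            name="text_summarization",
            description="Summarizes text content into concise form",
            input_schema={"text": "string", "max_length": "integer"},
            output_schema={"summary": "string"},
        )
    ],
}

for role, capabilities in agent_plan.items():
    print(role, [c.name for c in capabilities])
```

Writing the plan first makes it easy to spot overlapping or missing capabilities before any provider keys or identities are involved.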
+
+Step 2: Create Agent Identities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Each agent needs a secure identity for authentication and message signing:
.. code-block:: python
- from agentconnect.core.message import Message
- from agentconnect.core.types import MessageType
+ from agentconnect.core.types import AgentIdentity
- async def setup_communication(hub, agents):
- assistant, researcher, coder = agents
-
- # Set up message handlers
- async def assistant_handler(message):
- print(f"Assistant received: {message.content[:50]}...")
- # Process the message and potentially respond
-
- async def researcher_handler(message):
- print(f"Researcher received: {message.content[:50]}...")
- # Process the message and potentially respond
-
- async def coder_handler(message):
- print(f"Coder received: {message.content[:50]}...")
- # Process the message and potentially respond
-
- # Register the handlers with the hub
- hub.register_message_handler(assistant.agent_id, assistant_handler)
- hub.register_message_handler(researcher.agent_id, researcher_handler)
- hub.register_message_handler(coder.agent_id, coder_handler)
+ # Create identities for each agent
+ orchestrator_identity = AgentIdentity.create_key_based()
+ summarizer_identity = AgentIdentity.create_key_based()
+ analyst_identity = AgentIdentity.create_key_based()
-Orchestrating Multi-Agent Workflows
---------------------------------
+Step 3: Instantiate Agents
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-To orchestrate workflows involving multiple agents:
+Create each agent with its unique identity, capabilities, and configuration:
.. code-block:: python
- async def run_workflow(hub, agents):
- assistant, researcher, coder = agents
-
- # User sends a request to the assistant
- user_identity = AgentIdentity.create_key_based()
-
- user_message = Message.create(
- sender_id="user-1",
- receiver_id=assistant.agent_id,
- content="I need to build a Python application that analyzes stock market data.",
- sender_identity=user_identity,
- message_type=MessageType.TEXT
- )
-
- # Assistant routes the request to the researcher for information gathering
- await hub.route_message(user_message)
-
- # In a real implementation, the assistant would process the message and then
- # decide to send a message to the researcher
-
- research_request = Message.create(
- sender_id=assistant.agent_id,
- receiver_id=researcher.agent_id,
- content="Find information about Python libraries for stock market analysis.",
- sender_identity=assistant.identity,
- message_type=MessageType.TEXT
- )
-
- await hub.route_message(research_request)
-
- # The researcher would process and respond, then the assistant might
- # send a request to the coder
-
- coding_request = Message.create(
- sender_id=assistant.agent_id,
- receiver_id=coder.agent_id,
- content="Create a Python script that uses pandas and yfinance to analyze stock data.",
- sender_identity=assistant.identity,
- message_type=MessageType.TEXT
- )
-
- await hub.route_message(coding_request)
-
- # In a real implementation, you would have proper message handling and
- # response processing
+ from agentconnect.agents import AIAgent
+    import os
+
+    from agentconnect.core.types import Capability, ModelName, ModelProvider
+
+ # Create an orchestrator agent
+ orchestrator = AIAgent(
+ agent_id="orchestrator",
+ name="Orchestrator",
+ provider_type=ModelProvider.OPENAI,
+ model_name=ModelName.GPT4O,
+ api_key=os.getenv("OPENAI_API_KEY"),
+ identity=orchestrator_identity,
+ capabilities=[
+ Capability(
+ name="task_management",
+ description="Manages and coordinates complex tasks",
+ input_schema={"task": "string"},
+ output_schema={"result": "string"}
+ )
+ ],
+ personality="I coordinate complex tasks by working with specialized agents."
+ )
+
+ # Create a summarizer agent
+ summarizer = AIAgent(
+ agent_id="summarizer",
+ name="Summarizer",
+ provider_type=ModelProvider.OPENAI,
+ model_name=ModelName.GPT4O,
+ api_key=os.getenv("OPENAI_API_KEY"),
+ identity=summarizer_identity,
+ capabilities=[
+ Capability(
+ name="text_summarization",
+ description="Summarizes text into concise form",
+ input_schema={"text": "string", "max_length": "integer"},
+ output_schema={"summary": "string"}
+ )
+ ],
+ personality="I specialize in creating concise summaries of text content."
+ )
+
+Notice how each agent has different capabilities, even though they may use the same underlying AI model.
+
+Step 4: Initialize Hub & Registry
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Create the registry and hub that will connect your agents:
+
+.. code-block:: python
+
+ # Create registry and hub
+ registry = AgentRegistry()
+ hub = CommunicationHub(registry)
+
+Step 5: Register All Agents
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Register each agent with the hub to make them discoverable:
-Complete Example
--------------
+.. code-block:: python
+
+ # Register all agents
+ await hub.register_agent(orchestrator)
+ await hub.register_agent(summarizer)
+
+This step is crucial: only registered agents can be discovered by others based on their capabilities.
-Here's a complete example that sets up multiple agents and runs a simple workflow:
+Step 6: Start Agent Run Loops
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Start each agent's processing loop so they can receive and handle messages:
+
+.. code-block:: python
+
+ # Start all agent loops
+ orchestrator_task = asyncio.create_task(orchestrator.run())
+ summarizer_task = asyncio.create_task(summarizer.run())
+
+Each agent now runs independently, listening for messages and processing them based on their internal logic.
+
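+Conceptually, each run loop drains the agent's message queue and dispatches work. A simplified, stand-alone sketch of the pattern (not the actual ``AIAgent.run`` implementation, whose details may differ) looks like this:
+
+.. code-block:: python
+
+    import asyncio
+
+    async def run_loop(inbox: asyncio.Queue, handler) -> None:
+        # Pull messages until a sentinel (None) signals shutdown
+        while True:
+            message = await inbox.get()
+            if message is None:
+                break
+            await handler(message)
+
+Starting each loop with ``asyncio.create_task`` lets several agents process messages concurrently on the same event loop.
+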
+Initiating Collaboration
+------------------------
+
+There are several ways agents can collaborate within the AgentConnect framework:
+
+**Direct Agent-to-Agent Communication**
+
+The simplest approach is when one agent explicitly sends a message to another:
+
+.. code-block:: python
+
+ # Orchestrator directly messages the summarizer
+ await orchestrator.send_message(
+ receiver_id=summarizer.agent_id,
+ content="Please summarize the following text: 'AgentConnect enables decentralized agent collaboration...'",
+ message_type=MessageType.TEXT
+ )
+
+**Human-Initiated Workflows**
+
+Often, a human user initiates the workflow by interacting with a primary agent:
+
+.. code-block:: python
+
+ # Create and register a human agent
+ human = HumanAgent(
+ agent_id="human",
+ name="User",
+ identity=human_identity
+ )
+ await hub.register_agent(human)
+
+ # Start human interaction with the primary agent
+ await human.start_interaction(orchestrator)
+
+The human's messages trigger the orchestrator, which then coordinates with other agents as needed to fulfill requests.
+
+**Capability-Based Discovery and Collaboration**
+
+In more sophisticated workflows, agents use built-in collaboration tools to discover each other and work together. These tools abstract the complexity of registry lookups and message exchange.
+
+For example, an agent might use:
+
+- ``search_for_agents`` to find other agents with specific capabilities
+- ``send_collaboration_request`` to delegate tasks and manage responses
+
+These built-in tools enable truly dynamic collaboration where agents discover and work with each other based on capabilities rather than hardcoded agent IDs. For a detailed exploration of these collaboration patterns, see the :doc:`collaborative_workflows` guide.
+
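+Under the hood, capability-based discovery amounts to matching a requested capability against each registered agent's advertised capabilities. A toy illustration of the idea (not the real ``AgentRegistry`` internals):
+
+.. code-block:: python
+
+    def find_agents_by_capability(registrations: dict, capability_name: str) -> list:
+        # registrations maps agent_id -> list of capability names
+        return [
+            agent_id
+            for agent_id, capabilities in registrations.items()
+            if capability_name in capabilities
+        ]
+
+    registrations = {
+        "orchestrator": ["task_coordination"],
+        "summarizer": ["text_summarization"],
+    }
+    # Returns ["summarizer"]
+    print(find_agents_by_capability(registrations, "text_summarization"))
+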
+Simplified Example: Task Delegation
+-----------------------------------
+
+Here's a complete example demonstrating a basic multi-agent setup with task delegation:
.. code-block:: python
import asyncio
- import logging
+ import os
+ from dotenv import load_dotenv
- from agentconnect.agents import AIAgent
- from agentconnect.core.types import ModelProvider, ModelName, AgentIdentity, MessageType, InteractionMode
- from agentconnect.core.message import Message
- from agentconnect.core.registry import AgentRegistry
+ from agentconnect.agents import AIAgent, HumanAgent
from agentconnect.communication import CommunicationHub
-
- # Set up logging
- logging.basicConfig(level=logging.INFO)
- logger = logging.getLogger(__name__)
+ from agentconnect.core.registry import AgentRegistry
+ from agentconnect.core.types import (
+ AgentIdentity,
+ Capability,
+ InteractionMode,
+ ModelName,
+ ModelProvider,
+ MessageType
+ )
async def main():
- # Create an agent registry and communication hub
+ # Load environment variables
+ load_dotenv()
+
+ # Create the registry and hub
registry = AgentRegistry()
hub = CommunicationHub(registry)
- # Create identities for your agents
- assistant_identity = AgentIdentity.create_key_based()
- researcher_identity = AgentIdentity.create_key_based()
- user_identity = AgentIdentity.create_key_based()
+ # Create agent identities
+ orchestrator_identity = AgentIdentity.create_key_based()
+ summarizer_identity = AgentIdentity.create_key_based()
+ human_identity = AgentIdentity.create_key_based()
+
+ # Create an orchestrator agent
+ orchestrator = AIAgent(
+ agent_id="orchestrator",
+ name="Orchestrator",
+ provider_type=ModelProvider.OPENAI,
+ model_name=ModelName.GPT4O,
+ api_key=os.getenv("OPENAI_API_KEY"),
+ identity=orchestrator_identity,
+ capabilities=[
+ Capability(
+ name="task_coordination",
+ description="Coordinates tasks and delegates to specialized agents",
+ input_schema={"request": "string"},
+ output_schema={"result": "string"}
+ )
+ ],
+ personality="I'm a coordinator who delegates tasks to specialized agents."
+ )
- # Create an assistant agent
- assistant_agent = AIAgent(
- agent_id="assistant-1",
- name="Assistant",
+ # Create a summarizer agent
+ summarizer = AIAgent(
+ agent_id="summarizer",
+ name="Summarizer",
provider_type=ModelProvider.OPENAI,
model_name=ModelName.GPT4O,
- api_key="your-openai-api-key",
- identity=assistant_identity
+ api_key=os.getenv("OPENAI_API_KEY"),
+ identity=summarizer_identity,
+ capabilities=[
+ Capability(
+ name="text_summarization",
+ description="Summarizes text into concise form",
+ input_schema={"text": "string", "max_length": "integer"},
+ output_schema={"summary": "string"}
+ )
+ ],
+ personality="I specialize in creating concise summaries of text content."
)
- # Create a researcher agent
- researcher_agent = AIAgent(
- agent_id="researcher-1",
- name="Researcher",
- provider_type=ModelProvider.ANTHROPIC,
- model_name=ModelName.CLAUDE_3_7_SONNET,
- api_key="your-anthropic-api-key",
- identity=researcher_identity
+ # Create a human agent
+ human = HumanAgent(
+ agent_id="human",
+ name="User",
+ identity=human_identity,
)
- # Register the agents with the hub
- await hub.register_agent(assistant_agent)
- await hub.register_agent(researcher_agent)
+ # Register all agents
+ await hub.register_agent(orchestrator)
+ await hub.register_agent(summarizer)
+ await hub.register_agent(human)
- # Set up message handlers
- async def assistant_handler(message):
- logger.info(f"Assistant received: {message.content[:50]}...")
+ # Start agent processing loops
+ orchestrator_task = asyncio.create_task(orchestrator.run())
+ summarizer_task = asyncio.create_task(summarizer.run())
+
+ try:
+ # Simulate a direct collaboration
+ print("Demonstrating direct collaboration...")
- if message.sender_id == "user-1":
- # Forward to researcher for more information
- research_request = Message.create(
- sender_id=assistant_agent.agent_id,
- receiver_id=researcher_agent.agent_id,
- content=f"Research this topic: {message.content}",
- sender_identity=assistant_agent.identity,
- message_type=MessageType.TEXT
- )
- await hub.route_message(research_request)
+ # Orchestrator sends a task to the summarizer
+ # Note: In a more dynamic scenario, the orchestrator might first use
+ # the search_for_agents tool to find agents with summarization capabilities
+ await orchestrator.send_message(
+ receiver_id=summarizer.agent_id,
+ content="Please summarize the following text: 'AgentConnect is a framework for building decentralized multi-agent systems. It provides tools for agent identity, messaging, and capability discovery. Agents can find and collaborate with each other based on their capabilities without centralized control.'",
+ message_type=MessageType.TEXT
+ )
- elif message.sender_id == researcher_agent.agent_id:
- # Process research results and respond to user
- user_response = Message.create(
- sender_id=assistant_agent.agent_id,
- receiver_id="user-1",
- content=f"Based on research, here's what I found: {message.content}",
- sender_identity=assistant_agent.identity,
- message_type=MessageType.RESPONSE
- )
- await hub.route_message(user_response)
-
- async def researcher_handler(message):
- logger.info(f"Researcher received: {message.content[:50]}...")
+ # In a real system, the summarizer would process this and respond
+ # The orchestrator would receive the response via its run() loop
- # Simulate research process
- research_result = f"Research results for: {message.content}"
+ # Wait a moment to let the message processing occur
+ await asyncio.sleep(5)
- # Send results back to assistant
- response = Message.create(
- sender_id=researcher_agent.agent_id,
- receiver_id=message.sender_id,
- content=research_result,
- sender_identity=researcher_agent.identity,
- message_type=MessageType.RESPONSE
- )
- await hub.route_message(response)
-
- async def user_handler(message):
- logger.info(f"User received: {message.content[:50]}...")
- # In a real application, you would display this to the user
-
- # Register the handlers with the hub
- hub.register_message_handler(assistant_agent.agent_id, assistant_handler)
- hub.register_message_handler(researcher_agent.agent_id, researcher_handler)
- hub.register_message_handler("user-1", user_handler)
-
- # Start the workflow with a user message
- user_message = Message.create(
- sender_id="user-1",
- receiver_id=assistant_agent.agent_id,
- content="Tell me about quantum computing applications.",
- sender_identity=user_identity,
- message_type=MessageType.TEXT
- )
-
- # Send the message through the hub
- logger.info(f"User sending message: {user_message.content}")
- await hub.route_message(user_message)
-
- # In a real application, you would have a proper event loop
- # For this example, we'll just wait a short time for the workflow to complete
- await asyncio.sleep(5)
-
- logger.info("Multi-agent workflow completed")
+ print("\nNow starting human interaction with orchestrator...")
+ # Start human interaction for a more natural workflow
+ await human.start_interaction(orchestrator)
+
+ finally:
+ # Cleanup
+ print("Shutting down agents...")
+ await orchestrator.stop()
+ await summarizer.stop()
+ await hub.unregister_agent(orchestrator.agent_id)
+ await hub.unregister_agent(summarizer.agent_id)
+ await hub.unregister_agent(human.agent_id)
+ print("Done.")
- # Run the async function
if __name__ == "__main__":
- asyncio.run(main())
\ No newline at end of file
+ asyncio.run(main())
+
+When you run this example:
+
+1. Two AI agents are created with different capabilities
+2. Both agents are registered with the hub
+3. Both agents start their processing loops
+4. The orchestrator sends a summarization task to the summarizer
+5. The human user can then interact with the orchestrator to trigger more complex workflows
+
+Monitoring Interactions
+-----------------------
+
+To understand what's happening in your multi-agent system, AgentConnect provides built-in monitoring:
+
+.. code-block:: python
+
+ from agentconnect.utils.callbacks import ToolTracerCallbackHandler
+
+ # Add this when creating an agent
+ orchestrator = AIAgent(
+ # ... other parameters ...
+ external_callbacks=[
+ ToolTracerCallbackHandler(
+ agent_id="orchestrator",
+ print_tool_activity=True,
+ print_reasoning_steps=True
+ )
+ ]
+ )
+
+The ``ToolTracerCallbackHandler`` provides detailed, color-coded output showing:
+
+- Messages sent and received
+- Tool usage and function calls
+- Agent reasoning steps
+
+For more advanced monitoring using LangSmith, see the :doc:`event_monitoring` guide.
+
+Conclusion & Next Steps
+-----------------------
+
+You've now learned the fundamental principles of setting up multiple agents for collaboration in AgentConnect:
+
+1. Define clear capabilities for each agent
+2. Register all agents with the hub
+3. Start each agent's processing loop
+4. Initiate collaboration through direct messages or human interaction
+
+This setup enables a flexible, extensible multi-agent system where agents can discover and communicate with each other based on their capabilities.
+
+To build on this foundation:
+
+- Learn how to design more complex collaborative workflows in :doc:`collaborative_workflows`
+- Discover how to equip agents with external tools in :doc:`external_tools`
+- Explore options for payment-enabled agents in :doc:`agent_payment`
diff --git a/docs/source/guides/secure_communication.rst b/docs/source/guides/secure_communication.rst
new file mode 100644
index 0000000..4497706
--- /dev/null
+++ b/docs/source/guides/secure_communication.rst
@@ -0,0 +1,161 @@
+.. _secure_communication:
+
+Secure Agent Communication
+==========================
+
+In a decentralized agent framework like AgentConnect, where autonomous agents interact and exchange information, ensuring secure communication is critical. This guide explains how AgentConnect automatically handles message signing and verification to maintain authenticity and integrity across agent interactions.
+
+Why Security Matters
+--------------------
+
+When independent agents communicate, two critical security aspects must be addressed:
+
+1. **Authenticity**: Ensuring messages truly come from their claimed sender
+2. **Integrity**: Confirming messages haven't been altered during transmission
+
+Without these guarantees, malicious entities could impersonate agents or modify message content, potentially compromising the entire system. AgentConnect provides built-in mechanisms to handle these security concerns automatically.
+
+The Role of AgentIdentity
+-------------------------
+
+At the core of AgentConnect's security model is the :class:`AgentIdentity` class. Each agent in the system has its own identity that:
+
+- Contains cryptographic key pairs (public/private)
+- Enables secure signing and verification of messages
+- Uniquely identifies the agent in the network
+
+When creating any agent, you must provide an identity:
+
+.. code-block:: python
+
+ from agentconnect.core.types import AgentIdentity
+ from agentconnect.agents import AIAgent
+
+ # Create a new identity with a fresh key pair
+ agent_identity = AgentIdentity.create_key_based()
+
+ # Assign the identity when initializing the agent
+ agent = AIAgent(
+ agent_id="secure-agent-001",
+ name="Secure Assistant",
+ identity=agent_identity, # Identity provides security capabilities
+ # ... other parameters
+ )
+
+The ``create_key_based()`` method generates a secure RSA key pair:
+
+- The **private key** allows the agent to sign messages (proving authorship)
+- The **public key** allows others to verify the signature (confirming authenticity)
+
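+To build intuition for the sign/verify contract, here is a minimal stand-alone sketch. Note the substitution: AgentConnect uses asymmetric RSA key pairs, while this illustration uses a symmetric HMAC purely to keep the example dependency-free:
+
+.. code-block:: python
+
+    import hashlib
+    import hmac
+
+    def sign(secret: bytes, content: bytes) -> bytes:
+        # Produce a signature binding the content to the secret
+        return hmac.new(secret, content, hashlib.sha256).digest()
+
+    def verify(secret: bytes, content: bytes, signature: bytes) -> bool:
+        # Recompute the signature and compare in constant time
+        return hmac.compare_digest(sign(secret, content), signature)
+
+    sig = sign(b"private-key-material", b"hello")
+    assert verify(b"private-key-material", b"hello", sig)
+    assert not verify(b"private-key-material", b"tampered", sig)
+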
+For more details on how identity fits into the overall framework, see the :doc:`core_concepts` guide.
+
+Automatic Message Signing
+-------------------------
+
+When an agent sends a message through AgentConnect, the framework automatically handles message signing:
+
+1. The message is created using the sender's identity
+2. The ``Message.create()`` method internally calls the identity's signing function
+3. The sender's private key cryptographically signs the message content
+4. The signature is attached to the message
+
+.. code-block:: python
+
+ # This happens automatically when messages are created
+ message = Message.create(
+ sender_id=agent.agent_id,
+ receiver_id=target_agent.agent_id,
+ content="Hello, this is a secure message",
+ sender_identity=agent.identity, # Used for signing
+ message_type=MessageType.TEXT
+ )
+
+ # At this point, the message already contains a cryptographic signature
+
+The ``CommunicationHub`` ensures that all messages flowing through the system have valid signatures before routing them to their destination.
+
+Automatic Message Verification
+------------------------------
+
+When an agent receives a message, the framework automatically verifies its authenticity:
+
+1. The ``CommunicationHub`` intercepts the message during routing
+2. It extracts the sender's public key from the attached identity
+3. It verifies the signature against the message content
+4. If verification fails, the message is rejected with a security error
+
+From the ``CommunicationHub``'s ``route_message`` method:
+
+.. code-block:: python
+
+ # This happens internally within the framework
+ if not message.verify(sender.identity):
+ logger.error(f"Message signature verification failed")
+ raise SecurityError("Message signature verification failed")
+
+This verification process guarantees that:
+
+- The message truly came from the claimed sender
+- The message hasn't been tampered with during transmission
+
+Developers don't need to implement any verification logic themselves; AgentConnect handles this automatically.
+
+Developer Responsibilities
+--------------------------
+
+While AgentConnect handles most security concerns internally, developers should be aware of their responsibilities:
+
+1. **Secure Identity Creation**: Always create unique identities for each agent using ``AgentIdentity.create_key_based()``
+
+2. **Private Key Management**: If you need to persist agent identities across sessions, store the private keys securely:
+
+ - Use secure secret management systems
+ - Never hardcode private keys in source code
+ - Consider environment variables or encrypted storage
+ - Be careful about logging identity information
+
+3. **Identity Assignment**: Always ensure each agent has its own identity when initializing:
+
+ .. code-block:: python
+
+ # CORRECT: Each agent gets its own identity
+ agent1 = AIAgent(
+ agent_id="agent1",
+ identity=AgentIdentity.create_key_based(),
+ # ... other parameters
+ )
+
+ agent2 = AIAgent(
+ agent_id="agent2",
+ identity=AgentIdentity.create_key_based(),
+ # ... other parameters
+ )
+
+4. **Registry Trust**: The AgentRegistry maintains verified identities, so access to registry operations should be properly secured in production environments.
+
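+For point 2 above, persisted key material is typically read from the environment at startup rather than embedded in source code. A hedged sketch (the variable name is an illustrative assumption, not part of the AgentConnect API):
+
+.. code-block:: python
+
+    import os
+
+    # Illustrative only: the variable name and error handling are
+    # conventions, not AgentConnect requirements.
+    private_key_pem = os.getenv("AGENT_PRIVATE_KEY_PEM")
+    if private_key_pem is None:
+        raise RuntimeError("AGENT_PRIVATE_KEY_PEM is not set")
+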
+.. admonition:: Coming Soon: Deeper Dive
+ :class: tip
+
+ While this guide covers the essentials of secure communication in AgentConnect, a more detailed guide exploring the cryptographic specifics, advanced security configurations, and best practices for production deployment is planned for the future.
+
+ For most applications, the default security model provided by AgentConnect is sufficient, but organizations with specific security requirements may benefit from the upcoming detailed security documentation.
+
+Summary
+-------
+
+AgentConnect simplifies secure communication by automating the signing and verification of messages through the ``AgentIdentity`` system. By leveraging public key cryptography, the framework ensures:
+
+- Messages are authentically from their claimed senders
+- Message content remains unaltered during transmission
+- Agent identities are uniquely verified
+
+These mechanisms operate behind the scenes, allowing developers to focus on agent capabilities rather than security implementation details.
+
+Next Steps
+----------
+
+Now that you understand how AgentConnect ensures secure communication, you might want to explore:
+
+- :doc:`agent_configuration` for more details on configuring agent identities and other parameters
+- :doc:`multi_agent_setup` to learn how to set up multiple secure agents
+- :doc:`collaborative_workflows` to see how secure agents can collaborate while maintaining message integrity
\ No newline at end of file
diff --git a/docs/source/guides/telegram_integration.rst b/docs/source/guides/telegram_integration.rst
index 8d06d31..e10f673 100644
--- a/docs/source/guides/telegram_integration.rst
+++ b/docs/source/guides/telegram_integration.rst
@@ -22,9 +22,10 @@ To use the Telegram integration, you first need to create a bot through Telegram
1. Open Telegram and search for ``@BotFather``
2. Start a chat with BotFather and use the ``/newbot`` command
3. Follow the prompts to create your bot:
+
- Provide a display name for your bot (e.g., "My AgentConnect Assistant")
- Provide a username for your bot (must end with "bot", e.g., "my_agent_connect_bot")
-4. BotFather will provide a token that looks like this: ``123456789:ABCdefGhIJKlmNoPQRsTUVwxyZ``
+4. BotFather will provide a token that looks like this: ``123456789:ABCdefGhIJKlmosdQRsTUVwxyZ``
5. Save this token securely - you'll need it to initialize your Telegram agent
.. warning::
@@ -84,25 +85,6 @@ The simplest way to set up a Telegram agent is as follows:
Save this code in a file (e.g., ``telegram_bot.py``) and run it. Your bot should now be active on Telegram.
-Environment Variables
---------------------
-
-For security, store your API keys and tokens in a ``.env`` file:
-
-.. code-block:: text
-
- # Telegram Bot Token
- TELEGRAM_BOT_TOKEN=your_telegram_bot_token
-
- # LLM Provider API Key (choose one)
- GOOGLE_API_KEY=your_google_api_key
- # OR
- OPENAI_API_KEY=your_openai_api_key
- # OR
- ANTHROPIC_API_KEY=your_anthropic_api_key
- # OR
- GROQ_API_KEY=your_groq_api_key
-
Advanced Configuration
---------------------
@@ -234,7 +216,7 @@ One of the most powerful features of the TelegramAIAgent is its ability to colla
With this setup, when a user interacts with the Telegram bot, a sophisticated workflow can emerge:
1. **Request Interpretation**: The Telegram agent analyzes the user's request to determine what capabilities are needed
-2. **Capability Discovery**: The agent uses the registry to find other agents with the required capabilities
+2. **Capability Discovery**: The agent uses its tools to find other agents with the required capabilities
3. **Collaboration Request**: The agent sends requests to the appropriate specialized agents
4. **Result Integration**: The agent combines results from multiple sources into a coherent response
5. **Content Distribution**: The agent can broadcast the finalized content to multiple groups or users
@@ -249,6 +231,7 @@ For example, when a user asks:
The workflow might look like this:
1. Telegram agent receives the request and identifies three required capabilities:
+
- Web research
- Data visualization
- Group messaging
@@ -271,160 +254,20 @@ Advanced Use Cases
Dynamic Group Announcements and Broadcasting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. note::
- **Important:** The code examples below are provided for creating a hardcoded research-to-broadcast workflow. You do not need to implement any of this code yourself. The Telegram agent already has these capabilities built-in and will handle finding agents, requesting collaboration, and broadcasting automatically based on natural language requests.
-
-One of the most powerful features of the Telegram agent is its ability to dynamically create and broadcast announcements to multiple groups:
+One of the most powerful features of the Telegram agent is its ability to dynamically create and broadcast announcements to multiple groups. The TelegramAIAgent can handle this autonomously, without requiring any manual implementation from the developer.
-.. code-block:: python
+The agent uses its built-in collaboration tools (like ``search_for_agents`` and ``send_collaboration_request``) to interact with other specialized agents (such as research agents). It then uses its Telegram-specific tools (like ``create_telegram_announcement`` and ``publish_telegram_announcement``) to format and broadcast content to Telegram groups.
- # Example of sending an announcement to all registered groups
- await telegram_agent.send_announcement_to_groups(
- text="Important announcement: System maintenance scheduled for tomorrow.",
- media_path=None, # Optional path to media file
- parse_mode="Markdown", # Optional formatting (Markdown or HTML)
- groups="all" # Or list specific group IDs
- )
+As a developer, your role is to configure the TelegramAIAgent with the right ``personality`` prompt to guide this autonomous behavior, rather than coding the workflow manually. For example, your personality prompt might include instructions like:
-This capability allows for sophisticated use cases where the Telegram agent acts as a central broadcasting system. For example, users can:
-
-1. Ask the agent to research a topic and create announcements based on the findings
-2. Request the agent to format content in a visually appealing way
-3. Have the agent automatically distribute content to multiple groups
-4. Edit or update previously sent announcements
+.. code-block:: text
-Here's a complete code example demonstrating a research-to-broadcast workflow:
+ When users ask you to research topics and broadcast findings, you should:
-.. code-block:: python
-
- async def handle_research_and_broadcast_request(telegram_agent, message_text, user_id):
- """
- Handle a request to research a topic and broadcast findings to groups.
-
- This demonstrates the full workflow from receiving a request to broadcasting
- results across multiple channels.
- """
- # Parse the user request to identify the research topic
- # For simplicity, let's assume the topic is provided directly
- research_topic = "latest trends in artificial intelligence"
-
- # Step 1: Find a research agent that can help
- research_agents = await telegram_agent.registry.find_agents_by_capability("web_research")
- if not research_agents:
- await telegram_agent.send_message(
- chat_id=user_id,
- text="I couldn't find any research agents to help with this task."
- )
- return
-
- research_agent = research_agents[0]
-
- # Step 2: Request collaboration for research
- await telegram_agent.send_message(
- chat_id=user_id,
- text=f"Researching '{research_topic}'. This may take a moment..."
- )
-
- research_response = await telegram_agent.request_collaboration(
- target_agent_id=research_agent.agent_id,
- capability_name="web_research",
- input_data={"topic": research_topic, "depth": "comprehensive"}
- )
-
- if not research_response or not research_response.get("success"):
- await telegram_agent.send_message(
- chat_id=user_id,
- text="I encountered an issue while researching. Please try again later."
- )
- return
-
- research_findings = research_response.get("data", {}).get("findings", "No findings available")
- research_sources = research_response.get("data", {}).get("sources", [])
-
- # Step 3: Create a visualization if requested
- # Find a visualization agent
- viz_agents = await telegram_agent.registry.find_agents_by_capability("data_visualization")
- viz_path = None
-
- if viz_agents:
- viz_agent = viz_agents[0]
- viz_response = await telegram_agent.request_collaboration(
- target_agent_id=viz_agent.agent_id,
- capability_name="data_visualization",
- input_data={
- "data": research_findings,
- "chart_type": "infographic"
- }
- )
-
- if viz_response and viz_response.get("success"):
- viz_path = viz_response.get("data", {}).get("image_path")
-
- # Step 4: Format the announcement with markdown
- formatted_announcement = f"""
- 🔍 **RESEARCH FINDINGS: {research_topic.upper()}** 🔍
-
- {research_findings}
-
- **Sources:**
- """
-
- for i, source in enumerate(research_sources[:3], 1):
- formatted_announcement += f"\n{i}. {source}"
-
- # Step 5: Send a preview to the user
- await telegram_agent.send_message(
- chat_id=user_id,
- text="Here's a preview of the announcement:",
- parse_mode="Markdown"
- )
-
- if viz_path:
- await telegram_agent.send_photo(
- chat_id=user_id,
- photo=viz_path,
- caption=formatted_announcement,
- parse_mode="Markdown"
- )
- else:
- await telegram_agent.send_message(
- chat_id=user_id,
- text=formatted_announcement,
- parse_mode="Markdown"
- )
-
- # Step 6: Ask for confirmation
- await telegram_agent.send_message(
- chat_id=user_id,
- text="Should I send this announcement to all registered groups?",
- parse_mode="Markdown"
- )
-
- # In a real implementation, you'd wait for the user's response
- # For this example, we'll assume confirmation was received
-
- # Step 7: Broadcast to all groups
- result = await telegram_agent.send_announcement_to_groups(
- text=formatted_announcement,
- media_path=viz_path,
- parse_mode="Markdown",
- groups="all"
- )
-
- # Step 8: Report back to the user
- if result.get("success"):
- groups_count = len(result.get("groups", []))
- await telegram_agent.send_message(
- chat_id=user_id,
- text=f"Announcement successfully sent to {groups_count} groups.",
- parse_mode="Markdown"
- )
- else:
- await telegram_agent.send_message(
- chat_id=user_id,
- text=f"Error sending announcement: {result.get('error')}",
- parse_mode="Markdown"
- )
+ 1. Use collaboration tools to find and query a specialized research agent
+ 2. Format the results in a visually appealing way with proper Markdown
+ 3. Create and preview announcements before broadcasting
+ 4. Send the finalized content to the appropriate Telegram groups
Here's a practical example of how a user might interact with this feature:
@@ -453,21 +296,12 @@ Here's a practical example of how a user might interact with this feature:
Bot: Announcement successfully sent to 5 groups.
-The Telegram agent can also handle media files (images, documents, videos) as part of announcements:
-
-.. code-block:: python
-
- # Sending an announcement with media
- await telegram_agent.send_announcement_to_groups(
- text="Check out our latest analysis results!",
- media_path="/path/to/chart.png",
- groups="all"
- )
+The TelegramAIAgent can also handle media files (images, documents, videos) as part of announcements, using its built-in tools to process and distribute this content appropriately.
Editing Messages and Announcements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Users can edit previously sent messages or announcements directly through the private chat with the bot:
+The agent's autonomous capabilities extend to editing previously sent messages or announcements. Using its internal LLM workflow and specialized Telegram tools, the agent can handle edit requests directly through natural language interaction:
.. code-block::
@@ -491,6 +325,7 @@ Users can edit previously sent messages or announcements directly through the pr
Bot: Announcement successfully updated in 5 groups.
This editing capability is particularly useful for:
+
- Correcting information in announcements
- Updating time-sensitive information
- Refining messaging based on feedback
@@ -498,13 +333,13 @@ This editing capability is particularly useful for:
Super Agent Capabilities
~~~~~~~~~~~~~~~~~~~~~~~~
-The combination of these features effectively makes the Telegram agent a "super agent" that can:
+The combination of these features effectively makes the Telegram agent a "super agent" that autonomously:
-1. Act as an interface between users and specialized agents in your network
-2. Perform complex tasks through multi-agent collaboration
-3. Broadcast results to multiple channels/groups simultaneously
-4. Manage and update previously sent content
-5. Handle media and formatted text for visually appealing messaging
+1. Acts as an interface between users and specialized agents in your network
+2. Performs complex tasks through multi-agent collaboration
+3. Broadcasts results to multiple channels/groups simultaneously
+4. Manages and updates previously sent content
+5. Handles media and formatted text for visually appealing messaging
For example, a user could request:
@@ -513,15 +348,17 @@ For example, a user could request:
User: Analyze last month's sales data, create a visualization, and send a summary to the Sales and
Executive groups with appropriate formatting.
-The Telegram agent would:
+The TelegramAIAgent will:
1. Parse the request to understand the required capabilities
-2. Find and collaborate with a data analysis agent in the network
+2. Use its collaboration tools to find and collaborate with a data analysis agent in the network
3. Get the analysis results and visualizations
-4. Format the content appropriately for professional presentation
+4. Use its Telegram tools to format the content appropriately for professional presentation
5. Send specifically tailored announcements to the different groups
6. Allow the user to edit or refine the messages if needed
+All of this happens through the agent's internal LLM workflow, guided by its personality prompt and using the tools provided during initialization, without any need for manual implementation by the developer.
+
Group Management
~~~~~~~~~~~~~~~~
@@ -530,6 +367,9 @@ The Telegram agent can manage group memberships and permissions. It can add or r
Customizing the Telegram Agent
------------------------------
+.. note::
+ A detailed guide on customizing and extending the TelegramAIAgent is coming soon in the :doc:`/guides/advanced/index` section. This will include advanced configuration options, custom message handling, and integration patterns.
+
Extending Message Handlers
~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/docs/source/index.rst b/docs/source/index.rst
index a7833e6..d660471 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -15,7 +15,7 @@
Installation •
Quick Start •
- Examples •
+ Examples •
Documentation
@@ -23,23 +23,29 @@
Overview
========
-AgentConnect is a revolutionary framework for building and connecting *independent* AI agents. Unlike traditional multi-agent systems that operate within a single, centrally controlled environment, AgentConnect enables the creation of a *decentralized network* of autonomous agents that can:
+**AgentConnect provides a framework for building decentralized networks of truly autonomous AI agents, enabling the next generation of collaborative AI.**
-* **Operate Independently:** Each agent is a self-contained system with its own internal logic
-* **Discover Each Other Dynamically:** Agents find each other based on capabilities, not pre-defined connections
-* **Communicate Securely:** Built-in message signing, verification, and standardized protocols
-* **Collaborate on Complex Tasks:** Request services, exchange data, and work together to achieve goals
-* **Scale Horizontally:** Support thousands of independent agents in a decentralized ecosystem
+Move beyond traditional, centrally controlled systems and embrace an ecosystem where independent agents can:
+
+* **Discover peers on-demand:** Locate partners via **capability broadcasts** instead of hard-wired endpoints.
+* **Interact Securely (A2A):** Leverage built-in cryptographic verification for **trustworthy Agent-to-Agent** communication.
+* **Execute Complex Workflows:** Request services, exchange value, and achieve goals collectively.
+* **Autonomous Operation:** Each agent hosts its own logic—no central brain required.
+* **Scale Limitlessly:** Support thousands of agents interacting seamlessly.
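The sign-and-verify idea behind secure A2A interaction can be illustrated in a few lines of standard-library Python. Note that AgentConnect agents use asymmetric key-based identities; HMAC with a shared secret is used below purely as a stand-in that is available in the standard library:

```python
import hashlib
import hmac

# Conceptual sketch only: every message is signed by the sender and
# verified by the receiver before it is trusted. This is NOT the
# AgentConnect implementation, which uses asymmetric agent identities.

def sign(secret: bytes, message: bytes) -> str:
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(secret: bytes, message: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels
    return hmac.compare_digest(sign(secret, message), signature)

secret = b"agent-shared-secret"
msg = b"Analyze Q2 sales data"
sig = sign(secret, msg)

print(verify(secret, msg, sig))          # True
print(verify(secret, b"tampered", sig))  # False
```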
Why AgentConnect?
-----------------
-* **Beyond Hierarchies:** Break free from centrally controlled multi-agent systems
-* **True Agent Autonomy:** Build agents that are independent and interact with any agent in the network
-* **Dynamic Discovery:** The network adapts as agents join, leave, and update capabilities
-* **Secure Interactions:** Cryptographic verification ensures trustworthy communication
-* **Unprecedented Scalability:** Designed for thousands of interconnected agents
-* **Extensible Architecture:** Easily integrate custom agents, capabilities, and protocols
+AgentConnect delivers unique advantages over classic multi-agent approaches:
+
+* **Decentralized Architecture:** No central router, no single point of failure.
+* **First-class agent autonomy:** Agents negotiate, cooperate, and evolve independently.
+* **Interconnect Agent Systems:** Operates above individual agent frameworks, linking entire agent swarms.
+* **Living ecosystem:** The network fluidly adapts as agents join, leave, or evolve their skills.
+* **Secure A2A Communication:** Cryptographic identity and message signing baked in.
+* **Horizontal scalability:** Engineered for planet-scale agent populations.
+* **Plug-and-play extensibility:** Easily integrate custom agents, capabilities, and protocols.
+* **Integrated Agent Economy:** Seamless A2A payments powered by **Coinbase CDP & AgentKit**.
Key Features
=============
@@ -50,53 +56,79 @@ Key Features
🤖 Dynamic Agent Discovery
- - Capability-based matching
- - Flexible agent network
- - No pre-defined connections
+ - Capability-Based lookup
+ - Decentralized Registry
+ - Zero static links
-
⚡ Decentralized Communication
+
⚡ A2A Communication
- - Secure message routing
- - No central control
- - Reliable message delivery
+ - Direct Agent-to-Agent Messaging
+ - Cryptographic signatures
+ - No routing bottlenecks
-
⚙️ Autonomous Agents
+
⚙️ True Agent Autonomy
- - Independent operation
- - Own processing loop
- - Complex internal structure
+ - Independent Operation & Logic
+ - Self-Managed Lifecycles
+ - Unrestricted Collaboration
-
🔒 Secure Communication
+
🔒 Trust Layer
- - Message signing
- - Identity verification
- - Standardized protocols
+ - Verifiable identities
+ - Tamper-proof messages
+ - Standard Security Protocols
-
🔌 Multi-Provider Support
+
💰 Built-in Agent Economy
- - OpenAI
- - Anthropic
- - Groq
- - Google AI
+ - Autonomous A2A Payments
+ - Coinbase CDP Integration
+ - Instant service settlement
-
📊 Monitoring & Tracing
+
🔌 Multi-LLM Support
- - LangSmith integration
- - Comprehensive tracing
- - Performance analysis
+ - OpenAI, Anthropic, Groq, Google
+ - Flexible AI Core Choice
+ - Vendor-Agnostic Intelligence
+
+
+
+
+
+
+
📊 Deep Observability
+
+ - LangSmith tracing
+ - Monitor tools & payments
+ - Custom Callbacks
+
+
+
+
🌐 Dynamic Capability Advertising
+
+ - Agent Skill Broadcasting
+ - Market-Driven Discovery
+ - On-the-Fly Collaboration
+
+
+
+
🔗 Native Blockchain Integration
+
+ - Coinbase AgentKit Ready
+ - On-Chain Value Exchange
+ - Configurable networks
@@ -143,13 +175,8 @@ Prerequisites
- Python 3.11 or higher
- Poetry (Python package manager)
-- Redis server
-- Node.js 18+ and npm (for frontend)
-
-Development Installation
------------------------
-To install AgentConnect from source:
+AgentConnect can be installed by cloning the repository and using Poetry to install dependencies.
.. code-block:: bash
@@ -157,53 +184,22 @@ To install AgentConnect from source:
git clone https://github.com/AKKI0511/AgentConnect.git
cd AgentConnect
- # Using Poetry (Recommended)
- # Install all dependencies (recommended)
- poetry install --with demo,dev
-
- # For production only
- poetry install --without dev
-
-Environment Setup
----------------
-
-.. code-block:: bash
-
- # Copy environment template
- copy example.env .env # Windows
- cp example.env .env # Linux/Mac
-
-Configure API keys in the `.env` file:
-
-.. code-block:: bash
-
- DEFAULT_PROVIDER=groq
- GROQ_API_KEY=your_groq_api_key
-
-For monitoring and additional features, you can configure optional settings:
-
-.. code-block:: bash
-
- # LangSmith for monitoring (recommended)
- LANGSMITH_TRACING=true
- LANGSMITH_API_KEY=your_langsmith_api_key
- LANGSMITH_PROJECT=AgentConnect
-
- # Additional providers
- OPENAI_API_KEY=your_openai_api_key
- ANTHROPIC_API_KEY=your_anthropic_api_key
- GOOGLE_API_KEY=your_google_api_key
+ # Install dependencies
+ poetry install
-For more detailed installation instructions, see the :doc:`installation` guide.
+For detailed installation instructions including environment setup and API configuration, see the :doc:`installation` guide.
.. _quick-start:
Quick Start
=============
+Here's a minimal example of creating and connecting a human user with an AI assistant:
+
.. code-block:: python
import asyncio
+ import os
from agentconnect.agents import AIAgent, HumanAgent
from agentconnect.core.registry import AgentRegistry
from agentconnect.communication import CommunicationHub
@@ -220,7 +216,7 @@ Quick Start
name="AI Assistant",
provider_type=ModelProvider.OPENAI,
model_name=ModelName.GPT4O,
- api_key="your-openai-api-key",
+ api_key=os.getenv("OPENAI_API_KEY"),
identity=AgentIdentity.create_key_based(),
interaction_modes=[InteractionMode.HUMAN_TO_AGENT]
)
@@ -233,6 +229,9 @@ Quick Start
identity=AgentIdentity.create_key_based()
)
await hub.register_agent(human)
+
+ # Start AI processing loop
+ asyncio.create_task(ai_agent.run())
# Start interaction between human and AI
await human.start_interaction(ai_agent)
@@ -240,7 +239,7 @@ Quick Start
if __name__ == "__main__":
asyncio.run(main())
-For more detailed examples, check out our :doc:`quickstart` guide.
+For more detailed examples and step-by-step instructions, see the :doc:`quickstart` guide.
.. _examples:
@@ -275,35 +274,13 @@ Documentation
code_of_conduct
changelog
-Monitoring with LangSmith
-==========================
-
-AgentConnect integrates with LangSmith for comprehensive monitoring:
-
-1. **Set up LangSmith**
-
- * Create an account at `LangSmith `_
- * Add your API key to `.env`:
-
- .. code-block:: bash
-
- LANGSMITH_TRACING=true
- LANGSMITH_API_KEY=your_langsmith_api_key
- LANGSMITH_PROJECT=AgentConnect
-
-2. **Monitor agent workflows**
-
- * View detailed traces of agent interactions
- * Debug complex reasoning chains
- * Analyze token usage and performance
-
Roadmap
=======
- ✅ **MVP with basic agent-to-agent interactions**
- ✅ **Autonomous communication between agents**
- ✅ **Capability-based agent discovery**
-- ⬜ **Coinbase AgentKit Payment Integration**
+- ✅ **Coinbase AgentKit Payment Integration**
- ⬜ **Agent Identity & Reputation System**
- ⬜ **Marketplace-Style Agent Discovery**
- ⬜ **MCP Integration**
diff --git a/docs/source/installation.md b/docs/source/installation.md
index 8f58fcd..692fcd5 100644
--- a/docs/source/installation.md
+++ b/docs/source/installation.md
@@ -1,48 +1,36 @@
# Installation
-## Prerequisites
+### Prerequisites
- Python 3.11 or higher
- Poetry (Python package manager)
-- Redis server
-- Node.js 18+ and npm (for frontend)
-## Installing AgentConnect
+### Installing AgentConnect
-Currently, AgentConnect is available from source only. Direct installation via pip will be available soon.
+AgentConnect is currently available from source only. Direct installation via pip will be available soon.
-## Development Installation
+### Development Installation
+
+Clone the repository and install dependencies using Poetry:
-For development, you can install AgentConnect from source:
```bash
git clone https://github.com/AKKI0511/AgentConnect.git
cd AgentConnect
-```
-
-### Using Poetry (Recommended)
-
-AgentConnect uses Poetry for dependency management:
-
-```bash
-# Install Poetry (if not already installed)
-# Visit https://python-poetry.org/docs/#installation for instructions
-
-# Install all dependencies (recommended)
-poetry install --with demo,dev
-
-# For production only
-poetry install --without dev
+poetry install --with demo,dev # For development (recommended)
+# For production only:
+# poetry install --without dev
```
### Environment Setup
+Copy the environment template and configure your API keys:
+
```bash
-# Copy environment template
copy example.env .env # Windows
cp example.env .env # Linux/Mac
```
-Configure API keys in the `.env` file:
+Edit the `.env` file to set your provider and API keys:
```
DEFAULT_PROVIDER=groq
@@ -63,18 +51,19 @@ ANTHROPIC_API_KEY=your_anthropic_api_key
GOOGLE_API_KEY=your_google_api_key
```
-## Dependencies
+### Payment Capabilities (Optional)
-AgentConnect requires Python 3.11 or later and depends on the following packages:
+AgentConnect supports agent-to-agent payments through the Coinbase Developer Platform (CDP). To enable these features, add the following to your `.env`:
+
+```
+CDP_API_KEY_NAME=your_cdp_api_key_name
+CDP_API_KEY_PRIVATE_KEY=your_cdp_api_key_private_key
+```
-* langchain
-* openai
-* anthropic
-* google-generativeai
-* groq
-* cryptography
-* fastapi (for API)
-* redis (for distributed communication)
+To obtain CDP API keys:
+1. Create an account at [Coinbase Developer Platform](https://www.coinbase.com/cloud)
+2. Create an API key with wallet management permissions
+3. Save the API key name and private key securely
-These dependencies will be automatically installed when you install AgentConnect using pip or Poetry.
+By default, payment features use the Base Sepolia testnet, which is suitable for development and testing without real currency.
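A minimal sketch of such a readiness check follows. The function name is illustrative, not the packaged `agentconnect.utils.payment_helper` API; the point is that payments should only be enabled when both CDP variables are present:

```python
import os

# Hypothetical readiness check (illustration only, not the
# payment_helper API shipped with AgentConnect).
REQUIRED_CDP_VARS = ("CDP_API_KEY_NAME", "CDP_API_KEY_PRIVATE_KEY")

def cdp_payment_ready() -> tuple[bool, list[str]]:
    # Payments are ready only when every required variable is set
    missing = [v for v in REQUIRED_CDP_VARS if not os.getenv(v)]
    return (not missing, missing)

ready, missing = cdp_payment_ready()
if not ready:
    print(f"Payments disabled; missing: {', '.join(missing)}")
```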
diff --git a/docs/source/quickstart.md b/docs/source/quickstart.md
index b20bc1b..61e3deb 100644
--- a/docs/source/quickstart.md
+++ b/docs/source/quickstart.md
@@ -1,77 +1,69 @@
# Quickstart
-This guide will help you get started with AgentConnect, a framework that enables independent AI agents to discover, communicate, and collaborate with each other through capability-based discovery.
+AgentConnect lets you build, discover, and connect independent AI agents that can securely communicate and collaborate based on capabilities.
-## What is AgentConnect?
+### Prerequisites
-AgentConnect allows you to:
+- Python 3.11 or higher
+- Poetry (for dependency management)
+- At least one provider API key (e.g., OPENAI_API_KEY or GROQ_API_KEY)
-- Create independent AI agents with specific capabilities
-- Enable secure communication between agents with cryptographic verification
-- Discover agents based on their capabilities rather than pre-defined connections
-- Build systems where each agent can operate autonomously while collaborating with others
-- Develop multi-agent workflows for complex tasks
+### Installation
-## Basic Usage: Human-AI Interaction
+```bash
+git clone https://github.com/AKKI0511/AgentConnect.git
+cd AgentConnect
+poetry install --with demo,dev
+copy example.env .env # Windows
+cp example.env .env # Linux/Mac
+```
+
+Edit `.env` and add your API key(s):
+```
+OPENAI_API_KEY=your_openai_api_key
+# or
+GROQ_API_KEY=your_groq_api_key
+```
+
+### Minimal Example: Human-AI Chat
-Let's start with a simple example of a human user interacting with an AI assistant:
+This example shows a simple interactive conversation between a human user and an AI assistant.
```python
import asyncio
import os
from dotenv import load_dotenv
-
from agentconnect.agents import AIAgent, HumanAgent
from agentconnect.communication import CommunicationHub
from agentconnect.core.registry import AgentRegistry
-from agentconnect.core.types import (
- ModelProvider,
- ModelName,
- InteractionMode,
- AgentIdentity,
- Capability,
-)
-from agentconnect.core.message import Message
+from agentconnect.core.types import AgentIdentity, Capability, InteractionMode, ModelName, ModelProvider
async def main():
- # Load environment variables
load_dotenv()
-
- # Initialize core components
registry = AgentRegistry()
hub = CommunicationHub(registry)
-
- # Create secure agent identities with cryptographic keys
+
+ # Create agent identities
human_identity = AgentIdentity.create_key_based()
ai_identity = AgentIdentity.create_key_based()
-
- # Create a human agent
+
+ # Human agent
human = HumanAgent(
- agent_id="human1",
- name="User",
- identity=human_identity,
- organization_id="org1"
+ agent_id="human1", name="User", identity=human_identity, organization_id="org1"
)
-
- # Define AI agent capabilities (what this agent can do)
- ai_capabilities = [
- Capability(
- name="conversation",
- description="General conversation and assistance",
- input_schema={"query": "string"},
- output_schema={"response": "string"},
- )
- ]
-
- # Create an AI assistant with the defined capabilities
+
+ # AI agent (choose your provider/model and set API key in .env)
ai_assistant = AIAgent(
agent_id="ai1",
name="Assistant",
- provider_type=ModelProvider.GROQ, # Choose your provider
- model_name=ModelName.LLAMA3_70B, # Choose your model
- api_key=os.getenv("GROQ_API_KEY"),
+ provider_type=ModelProvider.OPENAI, # or ModelProvider.GROQ, etc.
+ model_name=ModelName.GPT4O, # or another model supported by your provider
+ api_key=os.getenv("OPENAI_API_KEY"),
identity=ai_identity,
- capabilities=ai_capabilities,
+ capabilities=[Capability(
+ name="conversation",
+ description="General conversation and assistance",
+ )],
interaction_modes=[InteractionMode.HUMAN_TO_AGENT],
personality="helpful and professional",
organization_id="org2",
@@ -88,8 +80,7 @@ async def main():
await human.start_interaction(ai_assistant)
# Cleanup
- ai_assistant.is_running = False
- await ai_task
+ await ai_assistant.stop()
await hub.unregister_agent(human.agent_id)
await hub.unregister_agent(ai_assistant.agent_id)
@@ -97,190 +88,9 @@ if __name__ == "__main__":
asyncio.run(main())
```
-## Agent Collaboration
-
-You can create specialized agents that collaborate on tasks without human intervention:
-
-```python
-import asyncio
-import os
-from dotenv import load_dotenv
-
-from agentconnect.agents import AIAgent
-from agentconnect.communication import CommunicationHub
-from agentconnect.core.registry import AgentRegistry
-from agentconnect.core.types import (
- ModelProvider,
- ModelName,
- InteractionMode,
- AgentIdentity,
- Capability,
-)
-
-async def message_handler(message: Message) -> None:
- """Track all messages passing through the hub"""
- print(f"Message routed: {message.sender_id} → {message.receiver_id}")
- print(f"Content: {message.content[:50]}...")
-
-async def run_multi_agent_demo():
- load_dotenv()
-
- # Initialize core components
- registry = AgentRegistry()
- hub = CommunicationHub(registry)
-
- # Add a global message handler to track all communication
- hub.add_global_handler(message_handler)
-
- # Create agent identities
- agent1_identity = AgentIdentity.create_key_based()
- agent2_identity = AgentIdentity.create_key_based()
-
- # Define specialized capabilities
- data_processing_capability = Capability(
- name="data_processing",
- description="Process and transform raw data into structured formats",
- input_schema={"data": "Any raw data format"},
- output_schema={"processed_data": "Structured data format"},
- )
-
- business_analysis_capability = Capability(
- name="business_analysis",
- description="Analyze business performance and metrics",
- input_schema={"business_data": "Business performance metrics"},
- output_schema={"business_insights": "Business performance analysis"},
- )
-
- # Create specialized agents
- data_processor = AIAgent(
- agent_id="processor1",
- name="DataProcessor",
- provider_type=ModelProvider.GOOGLE,
- model_name=ModelName.GEMINI2_FLASH_LITE,
- api_key=os.getenv("GOOGLE_API_KEY"),
- identity=agent1_identity,
- capabilities=[data_processing_capability],
- interaction_modes=[InteractionMode.AGENT_TO_AGENT],
- personality="detail-oriented data analyst",
- )
-
- business_analyst = AIAgent(
- agent_id="analyst1",
- name="BusinessAnalyst",
- provider_type=ModelProvider.GOOGLE,
- model_name=ModelName.GEMINI2_FLASH,
- api_key=os.getenv("GOOGLE_API_KEY"),
- identity=agent2_identity,
- capabilities=[business_analysis_capability],
- interaction_modes=[InteractionMode.AGENT_TO_AGENT],
- personality="strategic business analyst",
- )
-
- # Register agents
- await hub.register_agent(data_processor)
- await hub.register_agent(business_analyst)
-
- # Start agent processing loops
- tasks = [
- asyncio.create_task(data_processor.run()),
- asyncio.create_task(business_analyst.run())
- ]
-
- # Initiate communication between agents
- await data_processor.send_message(
- receiver_id=business_analyst.agent_id,
- content="I have processed the sales data for Q2. Revenue is up 15% compared to Q1. Would you analyze these trends?",
- )
-
- # Let agents communicate for a while
- await asyncio.sleep(60)
-
- # Cleanup
- for agent in [data_processor, business_analyst]:
- agent.is_running = False
-
- for task in tasks:
- await task
-
- await hub.unregister_agent(data_processor.agent_id)
- await hub.unregister_agent(business_analyst.agent_id)
-
-if __name__ == "__main__":
- asyncio.run(run_multi_agent_demo())
-```
-
-## Message Handling and Event Tracking
-
-You can add message handlers to track and respond to agent communications:
-
-```python
-from agentconnect.core.message import Message
-from agentconnect.core.types import MessageType
-
-# Handler for a specific agent
-async def agent_message_handler(message: Message) -> None:
- print(f"Agent received: {message.content}")
- # Log, analyze, or take action based on messages
-
-# Add handlers to the hub
-hub.add_message_handler("agent_id", agent_message_handler)
-
-# Add a global handler to monitor all messages
-async def global_message_tracker(message: Message) -> None:
- if message.message_type == MessageType.REQUEST_COLLABORATION:
- print(f"Collaboration request: {message.sender_id} → {message.receiver_id}")
- elif message.message_type == MessageType.COLLABORATION_RESPONSE:
- print(f"Collaboration response received from {message.sender_id}")
-
-hub.add_global_handler(global_message_tracker)
-```
-
-## Capability-Based Discovery
-
-Agents can discover and collaborate with other agents based on capabilities:
-
-```python
-# Find agents with specific capabilities
-matching_agents = await registry.find_agents_by_capability(
- capability_name="data_analysis"
-)
-
-if matching_agents:
- # Request collaboration from the first matching agent
- analysis_result = await hub.send_collaboration_request(
- sender_id=requester.agent_id,
- receiver_id=matching_agents[0].agent_id,
- task_description="Analyze this dataset: [1, 2, 3, 4, 5]"
- )
-```
-
-## Key Components
-
-### Agents
-
-- `AIAgent`: AI-powered agent with specific capabilities that can operate independently
-- `HumanAgent`: Interface for human users to participate in the agent network
-
-### Communication
-
-- `CommunicationHub`: Message routing system that enables agent discovery and interaction
-- `AgentRegistry`: Registry for capability-based agent discovery
-
-### Protocols
-
-- `SimpleAgentProtocol`: Ensures secure agent-to-agent communication
-- `CollaborationProtocol`: Enables capability discovery and task delegation
-
-### Core Types
-
-- `AgentIdentity`: Secure identity with cryptographic verification
-- `Capability`: Structured representation of what an agent can do
-- `ModelProvider`: Supported AI providers (OpenAI, Anthropic, Groq, Google, etc.)
-- `ModelName`: Available models for each provider
-- `MessageType`: Different types of messages (TEXT, REQUEST_COLLABORATION, etc.)
-
-## Next Steps
+- Run the script. You can now chat with your AI assistant in the terminal!
-- Explore the [Examples](https://github.com/AKKI0511/AgentConnect/tree/main/examples) directory for more detailed implementations
-- Check out the [API Reference](https://AKKI0511.github.io/AgentConnect/api/) for detailed information
-- Learn about [Advanced Features](https://AKKI0511.github.io/AgentConnect/advanced/) for customizing agent behavior
\ No newline at end of file
+### What's Next?
+- See more [examples](https://AKKI0511.github.io/AgentConnect/examples/) for multi-agent workflows, payments, and advanced features.
+- Explore the [API Reference](https://AKKI0511.github.io/AgentConnect/api/) for details on all classes and methods.
+- Check the [User Guides](https://AKKI0511.github.io/AgentConnect/guides) for deeper tutorials.
\ No newline at end of file
diff --git a/example.env b/example.env
index 8133869..33e3369 100644
--- a/example.env
+++ b/example.env
@@ -1,61 +1,100 @@
# =============================================
-# REQUIRED SETTINGS
+# CORE PROVIDER SETTINGS (REQUIRED)
# =============================================
-# You only need to set the API key for your chosen default provider
-DEFAULT_PROVIDER=groq # Choose one of: groq, anthropic, openai, google
+# Choose your default LLM provider
+DEFAULT_PROVIDER=groq # Options: groq, anthropic, openai, google
-# Provider API Keys
-# At least one is required (matching DEFAULT_PROVIDER)
+# API Keys for LLM Providers (At least ONE matching DEFAULT_PROVIDER is REQUIRED)
GROQ_API_KEY=
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GOOGLE_API_KEY=
+# Default Model Name (Optional - uses provider default if unset)
+# Example: For Groq, you might use llama3-70b-8192
+# DEFAULT_MODEL=
+
# =============================================
-# OPTIONAL SETTINGS (all have sensible defaults)
+# MONITORING (OPTIONAL)
# =============================================
-
# LangSmith Settings (for monitoring and debugging)
# LANGSMITH_TRACING=true
# LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
# LANGSMITH_API_KEY=your_langsmith_api_key
# LANGSMITH_PROJECT=AgentConnect
-# For advanced agent usage (search, telegram)
+# =============================================
+# ADVANCED FEATURES (OPTIONAL)
+# =============================================
+# For Telegram Agent Functionality
# TELEGRAM_BOT_TOKEN=
+
+# For Web Search Capabilities (e.g., Research Agent)
# TAVILY_API_KEY=
-# API Settings (defaults shown)
+# =============================================
+# PAYMENT CAPABILITIES (OPTIONAL)
+# =============================================
+# Coinbase Developer Platform (CDP) API Key for Agent Payments
+# Required only if you enable payment features
+# CDP_API_KEY_NAME=your_cdp_api_key_name
+# CDP_API_KEY_PRIVATE_KEY=your_cdp_api_key_private_key
+
+# =============================================
+# API SERVER SETTINGS (OPTIONAL)
+# =============================================
# API_HOST=127.0.0.1
# API_PORT=8000
# DEBUG=True
-# ALLOWED_ORIGINS=http://localhost:5173
+# ALLOWED_ORIGINS=http://localhost:5173 # Frontend URL for CORS
+
+# =============================================
+# RATE LIMITING SETTINGS (OPTIONAL)
+# =============================================
+# LLM Token Limits (applied per agent)
+# MAX_TOKENS_PER_MINUTE=5500
+# MAX_TOKENS_PER_HOUR=100000
-# Rate Limiting Settings (defaults shown)
+# WebSocket Rate Limiting (per connection)
# WS_RATE_LIMIT_TIMES=30
# WS_RATE_LIMIT_SECONDS=60
+
+# API Endpoint Rate Limiting (per IP)
# API_RATE_LIMIT_TIMES=100
# API_RATE_LIMIT_SECONDS=60
-# Session Settings (defaults shown)
+# =============================================
+# SESSION MANAGEMENT (OPTIONAL)
+# =============================================
+# Session Timeout (seconds)
# SESSION_TIMEOUT=3600
+
+# WebSocket Timeout (seconds)
# WEBSOCKET_TIMEOUT=300
+
+# Maximum messages allowed per session
# MAX_MESSAGES_PER_SESSION=1000
+
+# Maximum inactive time before session closure (seconds)
# MAX_INACTIVE_TIME=1800
+
+# Maximum total duration of a session (seconds)
# MAX_SESSION_DURATION=86400
-# MAX_SESSIONS_PER_USER=5
-# Model Settings (defaults shown)
-# DEFAULT_MODEL=llama-3.3-70b-versatile
-# MAX_TOKENS_PER_MINUTE=5500
-# MAX_TOKENS_PER_HOUR=100000
+# Maximum concurrent sessions allowed per user
+# MAX_SESSIONS_PER_USER=5
-# Authentication Settings (defaults shown)
+# =============================================
+# AUTHENTICATION (OPTIONAL)
+# =============================================
# A random secret key will be generated if not provided
-# AUTH_SECRET_KEY=your_secret_key_here
+# For persistent sessions across restarts, set a fixed key
+# AUTH_SECRET_KEY=your_very_secure_random_secret_key_here
# ACCESS_TOKEN_EXPIRE_MINUTES=30
# REFRESH_TOKEN_EXPIRE_DAYS=7
-# Logging Settings (defaults shown)
-# LOG_LEVEL=INFO
-# LOG_FORMAT=%(asctime)s - %(name)s - %(levelname)s - %(message)s
+# =============================================
+# LOGGING (OPTIONAL)
+# =============================================
+# LOG_LEVEL=INFO # Options: DEBUG, INFO, WARNING, ERROR, CRITICAL
+# LOG_FORMAT='%(asctime)s - %(name)s - %(levelname)s - %(message)s' # Example format
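The optional settings above fall back to their documented defaults when unset. A short sketch of that pattern (the helper is illustrative, not AgentConnect internals; the variable names and defaults match `example.env`):

```python
import os

# Illustrative settings reader: unset variables fall back to the
# defaults documented in example.env.
def get_int_setting(name: str, default: int) -> int:
    raw = os.getenv(name)
    return int(raw) if raw else default

session_timeout = get_int_setting("SESSION_TIMEOUT", 3600)
max_sessions = get_int_setting("MAX_SESSIONS_PER_USER", 5)
print(session_timeout, max_sessions)
```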
diff --git a/examples/README.md b/examples/README.md
index f40bf71..c35690c 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -1,305 +1,114 @@
# AgentConnect Examples
-This directory contains examples demonstrating how to use the AgentConnect framework. These examples are organized by functionality to help you understand different aspects of the framework.
+This directory contains examples demonstrating various features and use cases of the AgentConnect framework.
-## Directory Structure
-
-- `agents/`: Examples demonstrating how to create and use different types of agents
-- `communication/`: Examples showing how agents communicate with each other
-- `multi_agent/`: Examples of multi-agent systems and collaborative workflows
-
-## Running Examples
-
-To run these examples, you'll need to have AgentConnect installed:
-
-```bash
-# Install AgentConnect with demo dependencies
-poetry install
-```
-
-### Recommended Method: Using the CLI Tool
-
-The recommended way to run examples is using the CLI tool that's installed with the package:
-
-```bash
-# Run a specific example
-agentconnect --example chat
-agentconnect --example multi
-agentconnect --example research
-agentconnect --example data
-agentconnect --example telegram
-
-# Run with detailed logging
-agentconnect --example telegram --verbose
-```
-
-### Alternative Method: Running Python Scripts Directly
+## Prerequisites
-Each example can also be run directly as a Python script:
+1. **Clone the Repository:**
+ ```bash
+ git clone https://github.com/AKKI0511/AgentConnect.git
+ cd AgentConnect
+ ```
+2. **Install Dependencies:** Use Poetry to install base dependencies plus optional extras needed for specific examples (like demo, research, telegram).
+ ```bash
+ # Install core + demo dependencies (recommended for most examples)
+ poetry install --with demo
+
+ # Or install specific groups as needed
+ # poetry install --with research
+ ```
+3. **Set Up Environment Variables:** Copy the example environment file and fill in your API keys.
+ ```bash
+ # Windows
+ copy example.env .env
+ # Linux/macOS
+ cp example.env .env
+ ```
+ Edit the `.env` file with your credentials. You need **at least one** LLM provider key (OpenAI, Google, Anthropic, Groq). See specific example requirements below for other keys (Telegram, Tavily, CDP).
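Before running an example, you can sanity-check that at least one provider key is actually set. This stdlib-only sketch (no python-dotenv; the helper name is hypothetical) scans `KEY=value` lines the way they appear in `.env`:

```python
# Hypothetical sanity check: which LLM provider keys are configured?
PROVIDER_KEYS = ("OPENAI_API_KEY", "GOOGLE_API_KEY",
                 "ANTHROPIC_API_KEY", "GROQ_API_KEY")

def configured_providers(env_text: str) -> list[str]:
    found = []
    for line in env_text.splitlines():
        line = line.strip()
        # Skip comments and anything that isn't KEY=value
        if line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        if key.strip() in PROVIDER_KEYS and value.strip():
            found.append(key.strip())
    return found

sample = "GROQ_API_KEY=gsk_example\nOPENAI_API_KEY=\n# comment\n"
print(configured_providers(sample))  # ['GROQ_API_KEY']
```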
+
+## Running Examples (CLI Recommended)
+
+The easiest way to run examples is using the `agentconnect` CLI tool:
```bash
-# Run a specific example
-python examples/agents/basic_agent_usage.py
-
-# Run a communication example
-python examples/communication/basic_communication.py
-
-# Run the modular multi-agent system
-python examples/multi_agent/multi_agent_system.py
+agentconnect --example <example_name> [--verbose]
```
-## Available Examples
-
-### Agent Examples
-
-- `basic_agent_usage.py`: Demonstrates how to create and use a basic AI agent
-
-### Communication Examples
-
-- `basic_communication.py`: Shows how to set up communication between agents
-
-### Multi-Agent Examples
-
-- `multi_agent/`: A complete modular multi-agent system with the following components:
- - `multi_agent_system.py`: Main orchestration script
- - `telegram_agent.py`: Agent for Telegram bot integration
- - `research_agent.py`: Agent for web search and information retrieval
- - `content_processing_agent.py`: Agent for document processing and formatting
- - `data_analysis_agent.py`: Agent for data analysis and visualization
- - `message_logger.py`: Utility for visualizing agent interactions
-
-## Creating Your Own Examples
-
-Feel free to create your own examples based on these templates. If you create an example that might be useful to others, consider contributing it back to the project!
-
-## Notes
-
-- These examples are designed to be simple and focused on specific features
-- For more complex use cases, see the documentation
-- API keys for AI providers should be set in your environment variables (see `.env.example`)
-
-## Prerequisites
-
-Before running these examples, make sure you have:
-
-1. Set up your environment variables in a `.env` file in the project root with your API keys:
- ```
- # Provider API Keys (at least one is required)
- OPENAI_API_KEY=your_openai_api_key
- ANTHROPIC_API_KEY=your_anthropic_api_key
- GROQ_API_KEY=your_groq_api_key
+Replace `<example_name>` with one of the following:
- # Optional: LangSmith for monitoring (recommended)
- LANGSMITH_TRACING=true
- LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
- LANGSMITH_API_KEY=your_langsmith_api_key
- LANGSMITH_PROJECT=AgentConnect
- ```
+* `chat`: Simple interactive chat between a human and an AI agent.
+* `multi`: Demonstrates a multi-agent system for e-commerce analysis.
+* `research`: Research assistant workflow involving multiple agents.
+* `data`: Data analysis assistant performing analysis and visualization tasks.
+* `telegram`: A multi-agent system integrated with a Telegram bot interface.
+* `agent_economy`: Autonomous workflow showcasing agent-to-agent payments.
-2. Installed all required dependencies:
- ```bash
- poetry install
+Use the `--verbose` flag for detailed logging output.
- # For research capabilities in the multi-agent system
- poetry install --with research
- ```
+## Example Details
-## Available Examples
+### Basic Chat (`chat`)
-The following examples are available through the CLI tool (`agentconnect --example `):
+* **Source:** `examples/example_usage.py`
+* **Description:** Demonstrates fundamental AgentConnect concepts: creating human and AI agents, establishing secure communication, and basic interaction.
+* **Optional:** Can be run with payment capabilities enabled (see the comments in `example_usage.py`; requires CDP keys in `.env`).
-### 1. Chat Example (chat)
+### E-commerce Analysis (`multi`)
-Demonstrates a simple chat interface with a single AI agent:
-- A human user interacts with an AI assistant
-- The assistant responds to user queries in real-time
-- Supports multiple AI providers (OpenAI, Anthropic, Groq, Google)
+* **Source:** `examples/example_multi_agent.py`
+* **Description:** Showcases a collaborative workflow where multiple agents analyze e-commerce data.
-**Key Features:**
-- Human-to-agent interaction
-- Real-time chat interface
-- Multiple provider/model selection
-- Message history tracking
-- Simple command-line interface
+### Research Assistant (`research`)
-### 2. Multi-Agent Analysis (multi)
+* **Source:** `examples/research_assistant.py`
+* **Description:** Agents collaborate on research tasks, performing web searches via the Tavily API.
+* **Requires:** `poetry install --with research`, `TAVILY_API_KEY` in `.env`.
-Demonstrates autonomous interaction between specialized AI agents:
-- A data processor agent analyzes e-commerce data
-- A business analyst agent provides strategic insights
-- Agents collaborate without human intervention
+### Data Analysis Assistant (`data`)
-**Key Features:**
-- Agent-to-agent communication
-- Autonomous collaboration
-- Structured data analysis
-- Real-time conversation visualization
-- Capability-based interaction
+* **Source:** `examples/data_analysis_assistant.py`
+* **Description:** Agents work together to analyze data and generate visualizations.
-### 3. Research Assistant (research)
+### Telegram Assistant (`telegram`)
-Demonstrates a research workflow with multiple specialized AI agents:
-- A human user interacts with a research coordinator agent
-- The coordinator delegates tasks to specialized agents:
- - Research Agent: Finds information on a topic
- - Summarization Agent: Condenses and organizes information
- - Fact-Checking Agent: Verifies the accuracy of information
+* **Source:** `examples/multi_agent/multi_agent_system.py`
+* **Description:** Integrates a multi-agent backend (similar to research/content processing agents) with a Telegram bot front-end.
+* **Requires:** `TELEGRAM_BOT_TOKEN` in `.env`.
+ * To get a token, talk to the [BotFather](https://t.me/botfather) on Telegram and follow the instructions to create a new bot.
-**Key Features:**
-- Multi-agent collaboration
-- Task decomposition and delegation
-- Human-in-the-loop interaction
-- Capability-based agent discovery
-- Asynchronous message processing
+### Autonomous Workflow with Agent Economy (`agent_economy`)
-### 4. Data Analysis Assistant (data)
-
-Demonstrates a data analysis workflow with specialized agents:
-- A human user interacts with a data analysis coordinator
-- The coordinator works with specialized agents:
- - Data Processor: Cleans and prepares data
- - Statistical Analyst: Performs statistical analysis
- - Visualization Expert: Creates data visualizations
- - Insights Generator: Extracts business insights
-
-**Key Features:**
-- Data-focused agent capabilities
-- Multi-step analysis workflow
-- Visualization generation
-- Insight extraction
-- Human-in-the-loop guidance
-
-### 5. Modular Multi-Agent System (telegram)
-
-Demonstrates a modular approach to building a multi-agent system with Telegram integration:
-- Each agent is implemented in its own file for clean separation of concerns
-- Users can interact with agents through both Telegram and CLI
-- The system includes specialized agents for different tasks:
- - Telegram Agent: Handles Telegram messaging platform integration
- - Research Agent: Performs web searches and information retrieval
- - Content Processing Agent: Handles document processing and format conversion
- - Data Analysis Agent: Analyzes data and creates visualizations
-
-**Key Features:**
-- Modular design with separate agent implementations
-- Factory pattern for agent creation
-- Message flow visualization
-- Telegram integration
-- CLI interface for direct interaction
-- Web search capabilities
-- Data analysis and visualization
-- Asynchronous message processing
-- Publishing capabilities
-
-**Prerequisites for the Multi-Agent System:**
-- Python 3.11 or higher
-- For Telegram functionality: A Telegram bot token (create one through [@BotFather](https://t.me/botfather))
-- API keys for one of the supported LLM providers (Google, OpenAI, Anthropic, or Groq)
-- Optional: Tavily API key for improved web search capabilities
-- The `arxiv` and `wikipedia` packages for the research agent (install with `poetry install --with research`)
-
-**Setting Up the Multi-Agent System:**
-1. Create a `.env` file with:
- ```
- # Required - at least one of these LLM API keys
- GOOGLE_API_KEY=your_google_api_key
- # OR
- OPENAI_API_KEY=your_openai_api_key
- # OR
- ANTHROPIC_API_KEY=your_anthropic_api_key
- # OR
- GROQ_API_KEY=your_groq_api_key
-
- # Optional for Telegram integration
- TELEGRAM_BOT_TOKEN=your_telegram_bot_token
-
- # Optional for improved research capabilities
- TAVILY_API_KEY=your_tavily_api_key
- ```
-
-2. Install the required dependencies:
- ```bash
- # Core dependencies
- poetry install
-
- # Research agent dependencies
- poetry install --with research
- ```
-
-3. Run the system:
- ```bash
- # Using the CLI tool (recommended)
- agentconnect --example telegram
-
- # For detailed logging, add the --verbose flag:
- agentconnect --example telegram --verbose
-
- # Alternative: run the Python script directly
- python examples/multi_agent/multi_agent_system.py
- python examples/multi_agent/multi_agent_system.py --logging
- ```
-
-4. If the Telegram bot token is provided, you can interact with the bot on Telegram:
- - Use `/start` to initialize the bot
- - Use `/help` to get help information
- - Ask questions or request research, content processing, or data analysis
- - Publish
-
-For more details on the implementation, see the source code and comments in the `multi_agent/` directory.
+* **Source:** `examples/autonomous_workflow/`
+* **Description:** Demonstrates a complete autonomous workflow featuring:
+ * Capability-based agent discovery.
+ * A user proxy orchestrating tasks between specialized agents (Research, Telegram Broadcast).
+ * Automated Agent-to-Agent (A2A) cryptocurrency payments (USDC on Base Sepolia testnet) using Coinbase Developer Platform (CDP).
+* **Requires:** LLM key(s), `TELEGRAM_BOT_TOKEN`, `TAVILY_API_KEY`, `CDP_API_KEY_NAME`, `CDP_API_KEY_PRIVATE_KEY` all set in `.env`.
+ * **Telegram Token:** See instructions in the `telegram` example section above.
+ * **CDP Keys:**
+ 1. Sign up/in at [Coinbase Developer Platform](https://cloud.coinbase.com/products/develop).
+ 2. Create a new Project if needed.
+ 3. Navigate to the **API Keys** section within your project.
+ 4. Create a new API key with `wallet:transaction:send`, `wallet:transaction:read`, `wallet:address:read`, `wallet:user:read` permissions (or select the pre-defined "Wallet" role).
+ 5. Securely copy the **API Key Name** and the **Private Key** provided upon creation and add them to your `.env` file.
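On-chain USDC amounts are integers in 6-decimal base units, which is why the demo's orchestrator prompt notes that 1 USDC = '1000000'. A small illustrative helper (not part of the framework) makes the conversion explicit:

```python
from decimal import Decimal


def usdc_to_base_units(amount: str) -> str:
    """Convert a human-readable USDC amount into 6-decimal base units."""
    units = Decimal(amount) * (10 ** 6)
    if units != units.to_integral_value():
        raise ValueError("USDC supports at most 6 decimal places")
    return str(int(units))


# The fees quoted by the demo agents:
print(usdc_to_base_units("2"))  # research fee  -> 2000000
print(usdc_to_base_units("1"))  # broadcast fee -> 1000000
```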
## Monitoring with LangSmith
-These examples integrate with LangSmith for monitoring and debugging agent workflows:
-
-1. **View Agent Traces**: Each example generates traces in LangSmith that you can view to understand agent behavior
-2. **Debug Issues**: Identify and fix problems in agent workflows
-3. **Analyze Performance**: Measure response times and token usage
+All examples are configured to integrate with LangSmith for tracing and debugging.
-To enable LangSmith monitoring, make sure to set the following environment variables:
-
-```bash
-LANGSMITH_TRACING=true
-LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
-LANGSMITH_API_KEY=your_langsmith_api_key
-LANGSMITH_PROJECT=AgentConnect
-```
-
-## Customizing Examples
-
-You can customize these examples by:
-
-1. Modifying the agent capabilities in the `setup_agents()` function
-2. Changing the agent personalities and behaviors
-3. Adding new specialized agents with different capabilities
-4. Adjusting the interaction flow between agents
-5. Configuring different LLM providers and models
-
-## Agent Processing Loops
-
-Each example initializes the agent processing loops using `asyncio.create_task()` after registering the agents with the communication hub. These processing loops allow the agents to autonomously:
-
-1. Listen for incoming messages
-2. Process messages using their workflows
-3. Send responses to other agents
-4. Execute their capabilities
-
-The processing loops are properly cleaned up when the examples finish running, ensuring that all resources are released.
-
-```python
-# Example of starting agent processing loops
-asyncio.create_task(agent.run())
-
-# Example of cleaning up agent processing tasks
-if hasattr(agent, "_processing_task") and agent._processing_task:
- agent._processing_task.cancel()
-```
+1. **Enable Tracing:** Ensure these variables are set in your `.env` file:
+ ```
+ LANGSMITH_TRACING=true
+ LANGSMITH_API_KEY=your_langsmith_api_key
+ LANGSMITH_PROJECT=AgentConnect # Or your preferred project name
+ # LANGSMITH_ENDPOINT=https://api.smith.langchain.com (Defaults to this if not set)
+ ```
+2. **Monitor:** View detailed traces of agent interactions, tool calls, and LLM usage in your LangSmith project.
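If traces are not appearing, it is usually because one of these variables is unset. A quick check can be sketched as follows (the variable names are those shown above; the helper itself is illustrative, not an AgentConnect API):

```python
import os


def langsmith_tracing_enabled(env=None):
    """True when LANGSMITH_TRACING is truthy and an API key is present."""
    env = os.environ if env is None else env
    tracing = env.get("LANGSMITH_TRACING", "").strip().lower() == "true"
    return tracing and bool(env.get("LANGSMITH_API_KEY"))


if __name__ == "__main__":
    print("LangSmith tracing:", "on" if langsmith_tracing_enabled() else "off")
```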
## Troubleshooting
-If you encounter issues:
-
-1. Check that your API keys are correctly set in the `.env` file
-2. Ensure all dependencies are installed
-3. Check the logs for detailed error messages (each example has logging enabled by default)
-4. Make sure you're running the examples from the project root directory
-5. View the LangSmith traces to identify where issues occur
+* Ensure you run commands from the project root directory.
+* Verify all required dependencies for the chosen example are installed (e.g., `poetry install --with demo`).
+* Double-check that all necessary API keys and tokens are correctly set in your `.env` file.
+* Use the `--verbose` flag when running via CLI for detailed logs.
+* Check LangSmith traces for deeper insights into execution flow and errors.
diff --git a/examples/autonomous_workflow/run_workflow_demo.py b/examples/autonomous_workflow/run_workflow_demo.py
new file mode 100644
index 0000000..c780caf
--- /dev/null
+++ b/examples/autonomous_workflow/run_workflow_demo.py
@@ -0,0 +1,349 @@
+#!/usr/bin/env python
+"""
+Autonomous Workflow Demo for AgentConnect
+
+This script demonstrates a multi-agent workflow using the AgentConnect framework.
+It features three agents:
+1. User Proxy Agent - Orchestrates the workflow based on user requests
+2. Research Agent - Performs company research using web search tools
+3. Telegram Broadcast Agent - Broadcasts messages to a Telegram group
+
+The demo showcases autonomous service discovery, execution, and payment between agents
+using AgentKit/CDP SDK integration within the AgentConnect framework.
+"""
+
+import asyncio
+import os
+from typing import List, Tuple
+
+from dotenv import load_dotenv
+from langchain_community.tools.tavily_search import TavilySearchResults
+from langchain_community.tools.requests.tool import RequestsGetTool
+from langchain_community.utilities import TextRequestsWrapper
+from colorama import init, Fore, Style
+
+from agentconnect.agents.ai_agent import AIAgent
+from agentconnect.agents.human_agent import HumanAgent
+from agentconnect.agents.telegram.telegram_agent import TelegramAIAgent
+from agentconnect.communication.hub import CommunicationHub
+from agentconnect.core.agent import BaseAgent
+from agentconnect.core.types import (
+ AgentIdentity,
+ Capability,
+ ModelProvider,
+ ModelName,
+)
+from agentconnect.core.registry import AgentRegistry
+from agentconnect.utils.logging_config import (
+ setup_logging,
+ LogLevel,
+ disable_all_logging,
+)
+from agentconnect.utils.callbacks import ToolTracerCallbackHandler
+
+# Initialize colorama for cross-platform colored output
+init()
+
+# Define colors for different message types
+COLORS = {
+ "SYSTEM": Fore.YELLOW,
+ "USER_PROXY": Fore.CYAN,
+ "RESEARCH": Fore.BLUE,
+ "TELEGRAM": Fore.MAGENTA,
+ "HUMAN": Fore.GREEN,
+ "ERROR": Fore.RED,
+ "INFO": Fore.WHITE,
+}
+
+def print_colored(message: str, color_type: str = "SYSTEM") -> None:
+ """Print a message with specified color"""
+ color = COLORS.get(color_type.upper(), Fore.WHITE)
+ print(f"{color}{message}{Style.RESET_ALL}")
+
+# Define Base Sepolia USDC Contract Address
+BASE_SEPOLIA_USDC_ADDRESS = "0x036CbD53842c5426634e7929541eC2318f3dCF7e"
+
+# Define Capabilities
+GENERAL_RESEARCH = Capability(
+ name="general_research",
+ description="Performs detailed research on a given topic, project, or URL, providing a structured report.",
+)
+
+TELEGRAM_BROADCAST = Capability(
+ name="telegram_broadcast",
+ description="Broadcasts a given message summary to pre-configured Telegram groups.",
+)
+
+
+async def setup_agents() -> Tuple[AIAgent, AIAgent, TelegramAIAgent, HumanAgent]:
+ """
+ Set up and configure all agents needed for the workflow.
+
+ Returns:
+ Tuple containing (user_proxy_agent, research_agent, telegram_broadcaster, human_agent)
+ """
+ # Load environment variables
+ load_dotenv()
+
+ # Retrieve API keys from environment
+ google_api_key = os.getenv("GOOGLE_API_KEY")
+ openai_api_key = os.getenv("OPENAI_API_KEY")
+ tavily_api_key = os.getenv("TAVILY_API_KEY")
+ telegram_token = os.getenv("TELEGRAM_BOT_TOKEN")
+
+ # Check for required environment variables
+ missing_vars = []
+ if not google_api_key and not openai_api_key:
+ missing_vars.append("GOOGLE_API_KEY or OPENAI_API_KEY")
+ if not os.getenv("CDP_API_KEY_NAME"):
+ missing_vars.append("CDP_API_KEY_NAME")
+ if not os.getenv("CDP_API_KEY_PRIVATE_KEY"):
+ missing_vars.append("CDP_API_KEY_PRIVATE_KEY")
+ if not telegram_token:
+ missing_vars.append("TELEGRAM_BOT_TOKEN")
+ if not tavily_api_key:
+ missing_vars.append("TAVILY_API_KEY")
+
+ if missing_vars:
+ raise ValueError(
+ f"Missing required environment variables: {', '.join(missing_vars)}"
+ )
+
+ # Determine which LLM to use based on available API keys
+ if google_api_key:
+ provider_type = ModelProvider.GOOGLE
+ model_name = ModelName.GEMINI2_5_FLASH_PREVIEW
+ api_key = google_api_key
+ else:
+ provider_type = ModelProvider.OPENAI
+ model_name = ModelName.GPT4O
+ api_key = openai_api_key
+
+ print_colored(f"Using {provider_type.value}: {model_name.value}", "INFO")
+
+ # Configure Callback Handler
+ monitor_callback = ToolTracerCallbackHandler(agent_id="user_proxy_agent")
+
+ # Create User Proxy Agent (Workflow Orchestrator)
+ user_proxy_agent = AIAgent(
+ agent_id="user_proxy_agent",
+ name="Workflow Orchestrator",
+ provider_type=provider_type,
+ model_name=model_name,
+ api_key=api_key,
+ identity=AgentIdentity.create_key_based(),
+ capabilities=[], # No specific capabilities - it orchestrates
+ enable_payments=True,
+ external_callbacks=[monitor_callback],
+        personality=f"""You are a workflow orchestrator. You interact with other agents to complete tasks. You are responsible for managing payments and returning results.
+ If a payment is made, provide the amount and the transaction hash in your response.
+
+ **Payment Details (USDC on Base Sepolia):**
+ - Contract: {BASE_SEPOLIA_USDC_ADDRESS}
+ - Amount: 6 decimals. 1 USDC = '1000000'.
+ """
+ )
+
+ # Create Research Agent
+ research_agent = AIAgent(
+ agent_id="research_agent",
+ name="Research Specialist",
+ provider_type=provider_type,
+ model_name=model_name,
+ api_key=api_key,
+ identity=AgentIdentity.create_key_based(),
+ capabilities=[GENERAL_RESEARCH],
+ enable_payments=True,
+ personality="""You are a Research Specialist. You provide detailed, well-structured reports on any given topic, project, or URL using web search tools.
+
+**Report Structure:**
+- **For companies/projects/organizations:** Aim to structure your report using these sections when applicable: Topic/Project Name, Executive Summary, Key Personnel/Founders, Offerings/Products, Ecosystem/Partners, Asset/Token Details, Community Sentiment, Sources Consulted, Closing Summary.
+- **For conceptual topics or general questions:** Adapt the structure logically. Focus on defining the concept, explaining key aspects, providing examples, discussing benefits/drawbacks, listing sources, and offering a concluding summary.
+- **Always include Sources Consulted.**
+
+Your fee is 2 USDC (Base Sepolia). When responding, state your fee.""",
+ custom_tools=[
+ TavilySearchResults(api_key=tavily_api_key, max_results=5),
+ RequestsGetTool(
+ requests_wrapper=TextRequestsWrapper(), allow_dangerous_requests=True
+ ),
+ ],
+ )
+
+ # Create Telegram Broadcast Agent
+ telegram_broadcaster = TelegramAIAgent(
+ agent_id="telegram_broadcaster_agent",
+ name="Telegram Broadcaster",
+ provider_type=provider_type,
+ model_name=model_name,
+ api_key=api_key,
+ identity=AgentIdentity.create_key_based(),
+ capabilities=[TELEGRAM_BROADCAST],
+ enable_payments=True,
+        personality="""You are a Telegram Broadcast Specialist. You broadcast messages to all registered Telegram groups. \
+ Your fee is 1 USDC (Base Sepolia). After broadcasting, state your fee in your response.""",
+ telegram_token=telegram_token,
+ )
+
+ # Create Human Agent
+ human_agent = HumanAgent(
+ agent_id="human_user",
+ name="Human User",
+ identity=AgentIdentity.create_key_based(),
+ organization_id="demo_org",
+ )
+
+ return user_proxy_agent, research_agent, telegram_broadcaster, human_agent
+
+
+async def main(enable_logging: bool = False):
+ """
+ Main execution flow for the autonomous workflow demo.
+
+ Sets up the agents, registers them with the communication hub,
+ and handles user input for research and broadcast requests.
+
+ Args:
+ enable_logging: Whether to enable verbose logging
+ """
+
+ if not enable_logging:
+ disable_all_logging()
+ else:
+ # Keep logging setup simple if enabled, main feedback via print_colored
+ setup_logging(level=LogLevel.WARNING)
+
+ try:
+ print_colored("\nSetting up agents...", "SYSTEM")
+ # Set up agents
+ user_proxy_agent, research_agent, telegram_broadcaster, human_agent = (
+ await setup_agents()
+ )
+ agents: List[BaseAgent] = [
+ user_proxy_agent,
+ research_agent,
+ telegram_broadcaster,
+ human_agent,
+ ]
+
+ # Create registry and communication hub
+ registry = AgentRegistry()
+ hub = CommunicationHub(registry)
+
+ print_colored("Registering agents with Communication Hub...", "SYSTEM")
+ # Register all agents
+ for agent in agents:
+ if not await hub.register_agent(agent):
+ print_colored(f"Failed to register {agent.agent_id}", "ERROR")
+ return
+ print_colored(f" ✓ Registered: {agent.name} ({agent.agent_id})", "INFO")
+
+ # Display payment address if available
+ if hasattr(agent, "metadata") and hasattr(
+ agent.metadata, "payment_address"
+ ):
+ if agent.metadata.payment_address: # Check if address is not None or empty
+ print_colored(
+ f" Payment Address ({agent.name}): {agent.metadata.payment_address}", "INFO"
+ )
+ else:
+ print_colored(f" Payment address pending initialization for {agent.name}...", "INFO")
+
+ print_colored("All agents registered. Waiting for initialization...", "SYSTEM")
+
+ # Start agent processing loops
+ tasks = []
+ try:
+ print_colored("Starting agent processing loops...", "SYSTEM")
+ # Start the AI agents
+ telegram_task = asyncio.create_task(telegram_broadcaster.run())
+ tasks.append(telegram_task)
+
+ research_task = asyncio.create_task(research_agent.run())
+ tasks.append(research_task)
+
+ user_proxy_task = asyncio.create_task(user_proxy_agent.run())
+ tasks.append(user_proxy_task)
+
+ # Allow some time for agents to initialize
+ await asyncio.sleep(3)
+
+ # Print welcome message and instructions
+ print_colored("\n=== AgentConnect Autonomous Workflow Demo ===", "SYSTEM")
+ print_colored(
+ "This demo showcases multi-agent workflows with service discovery and payments.",
+ "SYSTEM",
+ )
+ print_colored("Available agents:", "INFO")
+ print_colored(" - User Proxy (Orchestrator)", "USER_PROXY")
+ print_colored(" - Research Agent (2 USDC per request)", "RESEARCH")
+            print_colored(" - Telegram Broadcaster (1 USDC per broadcast)", "TELEGRAM")
+ print_colored("\nExample commands:", "INFO")
+ print_colored(" - Research X and broadcast the summary", "INFO")
+ print_colored(
+ " - Find information about Y and share it on Telegram", "INFO"
+ )
+ print_colored("\nType 'exit' or 'quit' to end the demo", "INFO")
+
+ # Start human interaction with the user proxy agent
+ # HumanAgent will handle its own colored printing for the chat
+ print_colored("\n▶️ Starting interactive session with Workflow Orchestrator...", "SYSTEM")
+ await human_agent.start_interaction(user_proxy_agent)
+
+ except asyncio.CancelledError:
+ print_colored("Tasks cancelled", "SYSTEM")
+ except Exception as e:
+ print_colored(f"Error in main execution: {e}", "ERROR")
+ finally:
+ # Cleanup
+ print_colored("\nCleaning up...", "SYSTEM")
+
+ # Stop all agents
+ for agent in agents:
+ await agent.stop()
+ print_colored(f"Stopped {agent.agent_id}", "SYSTEM")
+
+ # Cancel all tasks
+ for task in tasks:
+ if not task.done():
+ task.cancel()
+
+ # Wait for tasks to finish
+ if tasks:
+ await asyncio.gather(*tasks, return_exceptions=True)
+
+ # Stop the Telegram bot explicitly
+ if telegram_broadcaster:
+ print_colored("Stopping Telegram bot...", "SYSTEM")
+ await telegram_broadcaster.stop_telegram_bot()
+
+ # Unregister agents
+ print_colored("Unregistering agents...", "SYSTEM")
+ for agent in agents:
+ # Skip human agent as it doesn't run a loop
+ if agent.agent_id == "human_user":
+ continue
+ try:
+ await hub.unregister_agent(agent.agent_id)
+ print_colored(f" ✓ Unregistered {agent.agent_id}", "INFO")
+ except Exception as e:
+ print_colored(f" ✗ Error unregistering {agent.agent_id}: {e}", "ERROR")
+
+ except ValueError as e:
+ print_colored(f"Setup error: {e}", "ERROR")
+ except Exception as e:
+ print_colored(f"Unexpected error: {e}", "ERROR")
+
+
+if __name__ == "__main__":
+ try:
+ asyncio.run(main())
+ except KeyboardInterrupt:
+ print_colored("\nDemo interrupted by user. Shutting down...", "SYSTEM")
+ except Exception as e:
+ print_colored(f"Fatal error: {e}", "ERROR")
+ finally:
+ print_colored("\nDemo shutdown complete.", "SYSTEM")
+
+
+# Example prompt: Research the Uniswap protocol (uniswap.org), summarize its core function and tokenomics, and broadcast the summary on Telegram.
\ No newline at end of file
diff --git a/examples/data_analysis_assistant.py b/examples/data_analysis_assistant.py
index cc8046f..7b4d69a 100644
--- a/examples/data_analysis_assistant.py
+++ b/examples/data_analysis_assistant.py
@@ -719,9 +719,8 @@ async def run_data_analysis_assistant_demo(enable_logging: bool = False) -> None
# Stop all agents
for agent_id, agent in agents.items():
if agent_id not in ["registry", "hub"] and agent_id != "human_agent":
- # Cancel the agent's processing task if it exists
- if hasattr(agent, "_processing_task") and agent._processing_task:
- agent._processing_task.cancel()
+ # Use the new stop method to properly clean up resources
+ await agent.stop()
# Unregister from the hub
await agents["hub"].unregister_agent(agent.agent_id)
diff --git a/examples/example_multi_agent.py b/examples/example_multi_agent.py
index d2156c5..e6c6633 100644
--- a/examples/example_multi_agent.py
+++ b/examples/example_multi_agent.py
@@ -446,7 +446,8 @@ async def run_ecommerce_analysis_demo(enable_logging: bool = False) -> None:
# Cleanup resources
for agent in agents:
- agent.is_running = False
+ await agent.stop()
+ print_system_message(f"Stopped agent: {agent.name}")
# Cancel running tasks
for task in tasks:
diff --git a/examples/example_usage.py b/examples/example_usage.py
index 78972f5..b3eeb3a 100644
--- a/examples/example_usage.py
+++ b/examples/example_usage.py
@@ -11,9 +11,11 @@
- Secure agent registration and communication with cryptographic verification
- Real-time message exchange with proper error handling
- Graceful session management and cleanup
+- Optional payment capabilities using blockchain technology
Required environment variables:
- At least one provider API key (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.)
+- For payment capabilities: CDP_API_KEY_NAME, CDP_API_KEY_PRIVATE_KEY (Coinbase Developer Platform)
"""
import asyncio
@@ -56,7 +58,7 @@ def print_colored(message: str, color_type: str = "SYSTEM") -> None:
print(f"{color}{message}{Style.RESET_ALL}")
-async def main(enable_logging: bool = False) -> None:
+async def main(enable_logging: bool = False, enable_payments: bool = False) -> None:
"""
Run an interactive demo between a human user and an AI assistant.
@@ -65,9 +67,11 @@ async def main(enable_logging: bool = False) -> None:
2. Secure agent registration and communication
3. Real-time message exchange with proper error handling
4. Graceful session management and cleanup
+ 5. Optional blockchain payment capabilities
Args:
enable_logging (bool): Enable detailed logging for debugging. Defaults to False.
+ enable_payments (bool): Enable blockchain payment capabilities. Defaults to False.
"""
# Load environment variables from .env file
load_dotenv()
@@ -186,6 +190,17 @@ async def main(enable_logging: bool = False) -> None:
)
]
+ # Initialize wallet configuration if payments are enabled
+ if enable_payments:
+ print_colored(
+ "Payment capabilities enabled. Environment will be validated during agent initialization.",
+ "INFO"
+ )
+ print_colored(
+ "Required environment variables: CDP_API_KEY_NAME, CDP_API_KEY_PRIVATE_KEY, (optional) CDP_NETWORK_ID",
+ "INFO"
+ )
+
ai_assistant = AIAgent(
agent_id="ai1",
name="Assistant",
@@ -197,6 +212,7 @@ async def main(enable_logging: bool = False) -> None:
interaction_modes=[InteractionMode.HUMAN_TO_AGENT],
personality="helpful and professional",
organization_id="org2",
+ enable_payments=enable_payments, # Enable payment capabilities if requested
)
# --- End AI Agent Setup ---
@@ -213,6 +229,10 @@ async def main(enable_logging: bool = False) -> None:
# Start AI processing
ai_task = asyncio.create_task(ai_assistant.run())
+ # Display payment address if payment capabilities are enabled
+ if enable_payments and ai_assistant.payments_enabled:
+ print_colored(f"\nAI Assistant Payment Address: {ai_assistant.metadata.payment_address}", "INFO")
+
print_colored("\n=== Starting Interactive Session ===", "SYSTEM")
print_colored("Type your messages and press Enter to send", "INFO")
print_colored("Type 'exit' to end the session", "INFO")
@@ -227,7 +247,7 @@ async def main(enable_logging: bool = False) -> None:
print_colored("\nEnding session...", "SYSTEM")
# Cleanup
if ai_assistant:
- ai_assistant.is_running = False
+ await ai_assistant.stop()
if ai_task:
try:
await asyncio.wait_for(ai_task, timeout=5.0)
@@ -248,4 +268,6 @@ async def main(enable_logging: bool = False) -> None:
if __name__ == "__main__":
- asyncio.run(main())
+ # By default, run without payments enabled for simpler setup
+ # To enable payments, you would call main(enable_payments=True)
+ asyncio.run(main(enable_payments=False))
diff --git a/examples/multi_agent/content_processing_agent.py b/examples/multi_agent/content_processing_agent.py
index 122e20c..590ba5b 100644
--- a/examples/multi_agent/content_processing_agent.py
+++ b/examples/multi_agent/content_processing_agent.py
@@ -8,10 +8,10 @@
import os
import re
-from typing import Dict, List, Any, Union
-from dotenv import load_dotenv
+from typing import Dict, Any, Union
from agentconnect.agents import AIAgent
+from agentconnect.utils.callbacks import ToolTracerCallbackHandler
from agentconnect.core.types import (
AgentIdentity,
Capability,
@@ -388,6 +388,7 @@ def load_pdf(pdf_source: Union[PDFSourceInput, str, Dict[str, str]]) -> Dict[str
capabilities=content_processing_capabilities,
personality="I am a content processing specialist who excels at transforming and converting content between different formats. I can extract text from PDFs, convert HTML to markdown, and process documents for better readability. I understand how to work with relative paths from the current directory.",
custom_tools=content_processing_tools,
+ # external_callbacks=[ToolTracerCallbackHandler(agent_id="content_processing_agent", print_tool_activity=False)],
)
return content_processing_agent
\ No newline at end of file
diff --git a/examples/multi_agent/data_analysis_agent.py b/examples/multi_agent/data_analysis_agent.py
index d8295bb..7651c87 100644
--- a/examples/multi_agent/data_analysis_agent.py
+++ b/examples/multi_agent/data_analysis_agent.py
@@ -9,8 +9,7 @@
import os
import io
import json
-from typing import Dict, List, Any
-from dotenv import load_dotenv
+from typing import Dict
from pydantic import BaseModel, Field
from agentconnect.agents import AIAgent
@@ -66,13 +65,13 @@ def create_data_analysis_agent(
data_analysis_capabilities = [
Capability(
name="data_analysis",
- description="Analyzes data and provides insights",
+ description="Analyzes provided data (structured or textual) to provide insights, identify trends, assess impacts (e.g., economic), and generate summaries.",
input_schema={"data": "string", "analysis_type": "string"},
output_schema={"result": "string", "visualization_path": "string"},
),
Capability(
name="data_visualization",
- description="Creates visualizations from data",
+ description="Creates visualizations from provided data",
input_schema={"data": "string", "chart_type": "string"},
output_schema={"visualization_path": "string", "description": "string"},
),
@@ -248,7 +247,10 @@ def analyze_data(data: str, analysis_type: str = "summary") -> Dict[str, str]:
api_key=api_key,
identity=data_analysis_identity,
capabilities=data_analysis_capabilities,
- personality="I am a data analysis specialist who excels at analyzing data, generating insights, and creating visualizations. I can process CSV and JSON data to discover patterns and present results in a clear, understandable format.",
+ personality=(
+ "I am a data analysis specialist. I excel at processing structured data (like CSV/JSON) for statistical analysis and visualization. "
+ "I can also analyze textual information to identify key trends, assess potential impacts (including economic consequences), and generate insightful summaries based on the provided context."
+ ),
custom_tools=[data_analysis_tool],
)
diff --git a/examples/multi_agent/multi_agent_system.py b/examples/multi_agent/multi_agent_system.py
index e0d839a..8c786b8 100644
--- a/examples/multi_agent/multi_agent_system.py
+++ b/examples/multi_agent/multi_agent_system.py
@@ -92,7 +92,7 @@ async def setup_agents(enable_logging: bool = False) -> Dict[str, Any]:
# Fall back to other API keys if Google's isn't available
provider_type = ModelProvider.GOOGLE
- model_name = ModelName.GEMINI2_FLASH
+ model_name = ModelName.GEMINI2_5_FLASH_PREVIEW
if not api_key:
print_colored("GOOGLE_API_KEY not found. Checking for alternatives...", "INFO")
@@ -259,15 +259,30 @@ async def run_multi_agent_system(enable_logging: bool = False) -> None:
except Exception as e:
print_colored(f"Error removing message logger: {e}", "ERROR")
- # Stop all agent tasks
+ # Stop all agents with the new stop method
+ for agent_id in [
+ "telegram_agent",
+ "research_agent",
+ "content_processing_agent",
+ "data_analysis_agent",
+ ]:
+ if agent_id in agents:
+ try:
+ await agents[agent_id].stop()
+ print_colored(f"Stopped {agent_id}", "SYSTEM")
+ except Exception as e:
+ print_colored(f"Error stopping {agent_id}: {e}", "ERROR")
+
+ # Cancel any remaining tasks
if "agent_tasks" in agents:
for task in agents["agent_tasks"]:
- task.cancel()
- try:
- # Wait for task to properly cancel
- await asyncio.wait_for(task, timeout=2.0)
- except (asyncio.TimeoutError, asyncio.CancelledError):
- pass
+ if not task.done():
+ task.cancel()
+ try:
+ # Wait for task to properly cancel
+ await asyncio.wait_for(task, timeout=2.0)
+ except (asyncio.TimeoutError, asyncio.CancelledError):
+ pass
# Unregister agents
for agent_id in [
diff --git a/examples/multi_agent/research_agent.py b/examples/multi_agent/research_agent.py
index fe093ef..47f42e9 100644
--- a/examples/multi_agent/research_agent.py
+++ b/examples/multi_agent/research_agent.py
@@ -7,8 +7,6 @@
"""
import os
-from typing import List
-from dotenv import load_dotenv
from agentconnect.agents import AIAgent
from agentconnect.core.types import (
diff --git a/examples/multi_agent/telegram_agent.py b/examples/multi_agent/telegram_agent.py
index cc00390..743d730 100644
--- a/examples/multi_agent/telegram_agent.py
+++ b/examples/multi_agent/telegram_agent.py
@@ -7,8 +7,6 @@
"""
import os
-from typing import List
-from dotenv import load_dotenv
from agentconnect.agents.telegram.telegram_agent import TelegramAIAgent
from agentconnect.core.types import (
diff --git a/examples/research_assistant.py b/examples/research_assistant.py
index 361d8cc..ee8b299 100644
--- a/examples/research_assistant.py
+++ b/examples/research_assistant.py
@@ -39,8 +39,8 @@
ModelName,
ModelProvider,
)
-from agentconnect.core.message import Message
from agentconnect.core.registry import AgentRegistry
+from agentconnect.utils.callbacks import ToolTracerCallbackHandler
from agentconnect.utils.logging_config import (
setup_logging,
LogLevel,
@@ -134,7 +134,7 @@ async def setup_agents() -> Dict[str, Any]:
# Fall back to other API keys if Google's isn't available
provider_type = ModelProvider.GOOGLE
- model_name = ModelName.GEMINI2_FLASH
+ model_name = ModelName.GEMINI2_5_FLASH_PREVIEW
if not api_key:
print_colored("GOOGLE_API_KEY not found. Checking for alternatives...", "INFO")
@@ -175,8 +175,8 @@ async def setup_agents() -> Dict[str, Any]:
hub = CommunicationHub(registry)
# Register message logger
- hub.add_global_handler(demo_message_logger)
- print_colored("Registered message flow logger to visualize agent collaboration", "INFO")
+ # hub.add_global_handler(demo_message_logger)
+ # print_colored("Registered message flow logger to visualize agent collaboration", "INFO")
# Create human agent
human_identity = AgentIdentity.create_key_based()
@@ -218,6 +218,7 @@ async def setup_agents() -> Dict[str, Any]:
identity=core_identity,
capabilities=core_capabilities,
personality="I am the primary interface between you and specialized agents. I understand your requests, delegate tasks to specialized agents, and present their findings in a coherent manner. I maintain conversation context and ensure a smooth experience.",
+ external_callbacks=[ToolTracerCallbackHandler("core_agent")],
)
# Create research agent
@@ -442,7 +443,7 @@ async def run_research_assistant_demo(enable_logging: bool = False) -> None:
print_colored("\nSetting up agents...", "SYSTEM")
agents = None
- message_logger_registered = enable_logging
+ # message_logger_registered = enable_logging
try:
# Set up agents with logging flag
@@ -478,57 +479,60 @@ async def run_research_assistant_demo(enable_logging: bool = False) -> None:
print_colored("\nCleaning up resources...", "SYSTEM")
# Remove message logger if it was registered
- if message_logger_registered and "hub" in agents:
- try:
- agents["hub"].remove_global_handler(demo_message_logger)
- print_colored("Removed message flow logger", "INFO")
- except Exception as e:
- print_colored(f"Error removing message logger: {e}", "ERROR")
-
- # Stop all agent tasks
- if "agent_tasks" in agents:
- for task in agents["agent_tasks"]:
- task.cancel()
- try:
- # Wait for task to properly cancel
- await asyncio.wait_for(task, timeout=2.0)
- except (asyncio.TimeoutError, asyncio.CancelledError):
- pass
-
- # Unregister agents
+ # if message_logger_registered and "hub" in agents:
+ # try:
+ # # agents["hub"].remove_global_handler(demo_message_logger)
+ # print_colored("Removed message flow logger", "INFO")
+ # except Exception as e:
+ # print_colored(f"Error removing message logger: {e}", "ERROR")
+
+ # Stop all agents
for agent_id in ["core_agent", "research_agent", "markdown_agent"]:
if agent_id in agents:
try:
+ # Use the new stop method for proper cleanup
+ await agents[agent_id].stop()
await agents["hub"].unregister_agent(agents[agent_id].agent_id)
- print_colored(f"Unregistered {agent_id}", "SYSTEM")
+ print_colored(f"Stopped and unregistered {agent_id}", "SYSTEM")
except Exception as e:
- print_colored(f"Error unregistering {agent_id}: {e}", "ERROR")
+ print_colored(f"Error stopping/unregistering {agent_id}: {e}", "ERROR")
+
+ # Cancel any remaining tasks
+ if "agent_tasks" in agents:
+ for task in agents["agent_tasks"]:
+ if not task.done():
+ task.cancel()
+ try:
+ # Wait for task to properly cancel
+ await asyncio.wait_for(task, timeout=2.0)
+ except (asyncio.TimeoutError, asyncio.CancelledError):
+ pass
print_colored("Demo completed successfully!", "SYSTEM")
# Define the global message logger function
-async def demo_message_logger(message: Message) -> None:
- """
- Global message handler for logging agent collaboration flow.
+# async def demo_message_logger(message: Message) -> None:
+# """
+# Global message handler for logging agent collaboration flow.
- This handler inspects messages routed through the hub and logs specific events
- in the research assistant demo to visualize agent collaboration.
+# This handler inspects messages routed through the hub and logs specific events
+# in the research assistant demo to visualize agent collaboration.
- Args:
- message (Message): The message being routed through the hub
- """
- if message.receiver_id == "human_user" or message.sender_id == "human_user":
- return
- color_type = "SYSTEM"
- if message.sender_id == "core_agent":
- color_type = "CORE"
- elif message.sender_id == "research_agent":
- color_type = "RESEARCH"
- elif message.sender_id == "markdown_agent":
- color_type = "MARKDOWN"
-
- print_colored(f"{message.sender_id} -> {message.receiver_id}: {message.content[:50]}...", color_type)
+# Args:
+# message (Message): The message being routed through the hub
+# """
+# if message.receiver_id == "human_user" or message.sender_id == "human_user":
+# return
+# color_type = "SYSTEM"
+# if message.sender_id == "core_agent":
+# color_type = "CORE"
+# elif message.sender_id == "research_agent":
+# color_type = "RESEARCH"
+# elif message.sender_id == "markdown_agent":
+# color_type = "MARKDOWN"
+
+# print_colored(f"{message.sender_id} -> {message.receiver_id}: {message.content[:50]}...", color_type)
if __name__ == "__main__":
diff --git a/poetry.lock b/poetry.lock
index 093dbdb..bf55fc8 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -1,5 +1,37 @@
# This file is automatically @generated by Poetry 2.1.1 and should not be changed by hand.
+[[package]]
+name = "accelerate"
+version = "1.6.0"
+description = "Accelerate"
+optional = false
+python-versions = ">=3.9.0"
+groups = ["main"]
+files = [
+ {file = "accelerate-1.6.0-py3-none-any.whl", hash = "sha256:1aee717d3d3735ad6d09710a7c26990ee4652b79b4e93df46551551b5227c2aa"},
+ {file = "accelerate-1.6.0.tar.gz", hash = "sha256:28c1ef1846e690944f98b68dc7b8bb6c51d032d45e85dcbb3adb0c8b99dffb32"},
+]
+
+[package.dependencies]
+huggingface-hub = ">=0.21.0"
+numpy = ">=1.17,<3.0.0"
+packaging = ">=20.0"
+psutil = "*"
+pyyaml = "*"
+safetensors = ">=0.4.3"
+torch = ">=2.0.0"
+
+[package.extras]
+deepspeed = ["deepspeed"]
+dev = ["bitsandbytes", "black (>=23.1,<24.0)", "datasets", "diffusers", "evaluate", "hf-doc-builder (>=0.3.0)", "parameterized", "pytest (>=7.2.0,<=8.0.0)", "pytest-order", "pytest-subtests", "pytest-xdist", "rich", "ruff (>=0.11.2,<0.12.0)", "scikit-learn", "scipy", "timm", "torchdata (>=0.8.0)", "torchpippy (>=0.2.0)", "tqdm", "transformers"]
+quality = ["black (>=23.1,<24.0)", "hf-doc-builder (>=0.3.0)", "ruff (>=0.11.2,<0.12.0)"]
+rich = ["rich"]
+sagemaker = ["sagemaker"]
+test-dev = ["bitsandbytes", "datasets", "diffusers", "evaluate", "scikit-learn", "scipy", "timm", "torchdata (>=0.8.0)", "torchpippy (>=0.2.0)", "tqdm", "transformers"]
+test-prod = ["parameterized", "pytest (>=7.2.0,<=8.0.0)", "pytest-order", "pytest-subtests", "pytest-xdist"]
+test-trackers = ["comet-ml", "dvclive", "matplotlib", "mlflow", "tensorboard", "wandb"]
+testing = ["bitsandbytes", "datasets", "diffusers", "evaluate", "parameterized", "pytest (>=7.2.0,<=8.0.0)", "pytest-order", "pytest-subtests", "pytest-xdist", "scikit-learn", "scipy", "timm", "torchdata (>=0.8.0)", "torchpippy (>=0.2.0)", "tqdm", "transformers"]
+
[[package]]
name = "accessible-pygments"
version = "0.0.5"
@@ -238,6 +270,44 @@ files = [
{file = "alabaster-1.0.0.tar.gz", hash = "sha256:c00dca57bca26fa62a6d7d0a9fcce65f3e026e9bfe33e9c538fd3fbb2144fd9e"},
]
+[[package]]
+name = "allora-sdk"
+version = "0.2.2"
+description = "Allora Network SDK"
+optional = false
+python-versions = ">=3.8"
+groups = ["main"]
+files = [
+ {file = "allora_sdk-0.2.2-py3-none-any.whl", hash = "sha256:c3784d8125505eda57ffecdc7d285371be111be05fbb88cc732f14ca192d4560"},
+ {file = "allora_sdk-0.2.2.tar.gz", hash = "sha256:77ae01fdcc60178db33ea1f9c359fee316a15e92f527e04df87b914b6544cd3d"},
+]
+
+[package.dependencies]
+aiohttp = "*"
+annotated-types = "0.7.0"
+cachetools = "5.5.0"
+certifi = "2024.12.14"
+chardet = "5.2.0"
+charset-normalizer = "3.4.1"
+colorama = "0.4.6"
+distlib = "0.3.9"
+filelock = "3.16.1"
+idna = "3.10"
+packaging = "24.2"
+platformdirs = "4.3.6"
+pluggy = "1.5.0"
+pydantic = "2.10.4"
+pydantic-core = "2.27.2"
+pyproject-api = "1.8.0"
+requests = "2.32.3"
+tox = "4.23.2"
+typing-extensions = "4.12.2"
+urllib3 = "2.3.0"
+virtualenv = "20.28.1"
+
+[package.extras]
+dev = ["fastapi", "pytest", "pytest-asyncio", "starlette", "tox"]
+
[[package]]
name = "annotated-types"
version = "0.7.0"
@@ -312,6 +382,18 @@ files = [
[package.dependencies]
feedparser = "*"
+[[package]]
+name = "asn1crypto"
+version = "1.5.1"
+description = "Fast ASN.1 parser and serializer with definitions for private keys, public keys, certificates, CRL, OCSP, CMS, PKCS#3, PKCS#7, PKCS#8, PKCS#12, PKCS#5, X.509 and TSP"
+optional = false
+python-versions = "*"
+groups = ["main"]
+files = [
+ {file = "asn1crypto-1.5.1-py2.py3-none-any.whl", hash = "sha256:db4e40728b728508912cbb3d44f19ce188f218e9eba635821bb4b68564f8fd67"},
+ {file = "asn1crypto-1.5.1.tar.gz", hash = "sha256:13ae38502be632115abf8a24cbe5f4da52e3b5231990aff31123c805306ccb9c"},
+]
+
[[package]]
name = "astroid"
version = "3.3.9"
@@ -400,13 +482,52 @@ files = [
[package.extras]
dev = ["backports.zoneinfo ; python_version < \"3.9\"", "freezegun (>=1.0,<2.0)", "jinja2 (>=3.0)", "pytest (>=6.0)", "pytest-cov", "pytz", "setuptools", "tzdata ; sys_platform == \"win32\""]
+[[package]]
+name = "bcl"
+version = "2.3.1"
+description = "Python library that provides a simple interface for symmetric (i.e., secret-key) and asymmetric (i.e., public-key) encryption/decryption primitives."
+optional = false
+python-versions = "*"
+groups = ["main"]
+files = [
+ {file = "bcl-2.3.1-cp310-abi3-macosx_10_10_universal2.whl", hash = "sha256:cf59d66d4dd653b43b197ad5fc140a131db7f842c192d9836f5a6fe2bee9019e"},
+ {file = "bcl-2.3.1-cp310-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a7696201b8111e877d21c1afd5a376f27975688658fa9001278f15e9fa3da2e0"},
+ {file = "bcl-2.3.1-cp310-abi3-win32.whl", hash = "sha256:28f55e08e929309eacf09118b29ffb4d110ce3702eef18e98b8b413d0dfb1bf9"},
+ {file = "bcl-2.3.1-cp310-abi3-win_amd64.whl", hash = "sha256:f65e9f347b76964d91294964559da05cdcefb1f0bdfe90b6173892de3598a810"},
+ {file = "bcl-2.3.1-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:edb8277faee90121a248d26b308f4f007da1faedfd98d246841fb0f108e47db2"},
+ {file = "bcl-2.3.1-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:99aff16e0da7a3b678c6cba9be24760eda75c068cba2b85604cf41818e2ba732"},
+ {file = "bcl-2.3.1-cp37-abi3-win32.whl", hash = "sha256:17d2e7dbe852c4447a7a2ff179dc466a3b8809ad1f151c4625ef7feff167fcaf"},
+ {file = "bcl-2.3.1-cp37-abi3-win_amd64.whl", hash = "sha256:fb778e77653735ac0bd2376636cba27ad972e0888227d4b40f49ea7ca5bceefa"},
+ {file = "bcl-2.3.1-cp38-abi3-macosx_10_9_x86_64.whl", hash = "sha256:f6d551e139fa1544f7c822be57b0a8da2dff791c7ffa152bf371e3a8712b8b62"},
+ {file = "bcl-2.3.1-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:447835deb112f75f89cca34e34957a36e355a102a37a7b41e83e5502b11fc10a"},
+ {file = "bcl-2.3.1-cp38-abi3-win32.whl", hash = "sha256:1d8e0a25921ee705840219ed3c78e1d2e9d0d73cb2007c2708af57489bd6ce57"},
+ {file = "bcl-2.3.1-cp38-abi3-win_amd64.whl", hash = "sha256:a7312d21f5e8960b121fadbd950659bc58745282c1c2415e13150590d2bb271e"},
+ {file = "bcl-2.3.1-cp39-abi3-macosx_10_10_universal2.whl", hash = "sha256:bb695832cb555bb0e3dee985871e6cfc2d5314fb69bbf62297f81ba645e99257"},
+ {file = "bcl-2.3.1-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:0922349eb5ffd19418f46c40469d132c6e0aea0e47fec48a69bec5191ee56bec"},
+ {file = "bcl-2.3.1-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:97117d57cf90679dd1b28f1039fa2090f5561d3c1ee4fe4e78d1b0680cc39b8d"},
+ {file = "bcl-2.3.1-cp39-abi3-win32.whl", hash = "sha256:a5823f1b655a37259a06aa348bbc2e7a38d39d0e1683ea0596b888b7ef56d378"},
+ {file = "bcl-2.3.1-cp39-abi3-win_amd64.whl", hash = "sha256:52cf26c4ecd76e806c6576c4848633ff44ebfff528fca63ad0e52085b6ba5aa9"},
+ {file = "bcl-2.3.1.tar.gz", hash = "sha256:2a10f1e4fde1c146594fe835f29c9c9753a9f1c449617578c1473d6371da9853"},
+]
+
+[package.dependencies]
+cffi = ">=1.15,<2.0"
+
+[package.extras]
+build = ["cffi (>=1.15,<2.0)", "setuptools (>=62.0,<63.0)", "wheel (>=0.37,<1.0)"]
+coveralls = ["coveralls (>=3.3.1,<3.4.0)"]
+docs = ["sphinx (>=4.2.0,<4.3.0)", "sphinx-rtd-theme (>=1.0.0,<1.1.0)"]
+lint = ["pylint (>=2.14.0,<2.15.0)"]
+publish = ["twine (>=4.0,<5.0)"]
+test = ["pytest (>=7.0,<8.0)", "pytest-cov (>=3.0,<4.0)"]
+
[[package]]
name = "bcrypt"
version = "4.3.0"
description = "Modern password hashing for your software and your servers"
optional = false
python-versions = ">=3.8"
-groups = ["demo"]
+groups = ["main", "demo"]
files = [
{file = "bcrypt-4.3.0-cp313-cp313t-macosx_10_12_universal2.whl", hash = "sha256:f01e060f14b6b57bbb72fc5b4a83ac21c443c9a2ee708e04a10e9192f90a6281"},
{file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c5eeac541cefd0bb887a371ef73c62c3cd78535e4887b310626036a7c0a817bb"},
@@ -471,7 +592,7 @@ version = "4.13.3"
description = "Screen-scraping library"
optional = false
python-versions = ">=3.7.0"
-groups = ["dev", "docs", "research"]
+groups = ["main", "dev", "docs", "research"]
files = [
{file = "beautifulsoup4-4.13.3-py3-none-any.whl", hash = "sha256:99045d7d3f08f91f0d656bc9b7efbae189426cd913d830294a15eefa0ea4df16"},
{file = "beautifulsoup4-4.13.3.tar.gz", hash = "sha256:1bd32405dacc920b42b83ba01644747ed77456a65760e285fbc47633ceddaf8b"},
@@ -488,6 +609,181 @@ charset-normalizer = ["charset-normalizer"]
html5lib = ["html5lib"]
lxml = ["lxml"]
+[[package]]
+name = "bip-utils"
+version = "2.9.3"
+description = "Generation of mnemonics, seeds, private/public keys and addresses for different types of cryptocurrencies"
+optional = false
+python-versions = ">=3.7"
+groups = ["main"]
+files = [
+ {file = "bip_utils-2.9.3-py3-none-any.whl", hash = "sha256:ee26b8417a576c7f89b847da37316db01a5cece1994c1609d37fbeefb91ad45e"},
+ {file = "bip_utils-2.9.3.tar.gz", hash = "sha256:72a8c95484b57e92311b0b2a3d5195b0ce4395c19a0b157d4a289e8b1300f48a"},
+]
+
+[package.dependencies]
+cbor2 = ">=5.1.2,<6.0.0"
+coincurve = [
+ {version = ">=18.0.0", markers = "python_version == \"3.11\""},
+ {version = ">=19.0.1", markers = "python_version >= \"3.12\""},
+]
+crcmod = ">=1.7,<2.0"
+ecdsa = ">=0.17,<1.0"
+ed25519-blake2b = [
+ {version = ">=1.4,<2.0.0", markers = "python_version < \"3.12\""},
+ {version = ">=1.4.1,<2.0.0", markers = "python_version >= \"3.12\""},
+]
+py-sr25519-bindings = {version = ">=0.2.0,<2.0.0", markers = "python_version >= \"3.11\""}
+pycryptodome = ">=3.15,<4.0"
+pynacl = ">=1.5,<2.0"
+
+[package.extras]
+develop = ["coverage (>=5.3)", "flake8 (>=3.8)", "isort (>=5.8)", "mypy (>=0.900)", "prospector[with-mypy,with-pyroma] (>=1.7)", "pytest (>=7.0)", "pytest-cov (>=2.10)"]
+
+[[package]]
+name = "bitarray"
+version = "3.3.1"
+description = "efficient arrays of booleans -- C extension"
+optional = false
+python-versions = "*"
+groups = ["main"]
+files = [
+ {file = "bitarray-3.3.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:811f559e0e5fca85d26b834e02f2a767aa7765e6b1529d4b2f9d4e9015885b4b"},
+ {file = "bitarray-3.3.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e44be933a60b27ef0378a2fdc111ae4ac53a090169db9f97219910cac51ff885"},
+ {file = "bitarray-3.3.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5aacbf54ad69248e17aab92a9f2d8a0a7efaea9d5401207cb9dac41d46294d56"},
+ {file = "bitarray-3.3.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3fcdaf79970b41cfe21b6cf6a7bbe2d0f17e3371a4d839f1279283ac03dd2a47"},
+ {file = "bitarray-3.3.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9aa5cf7a6a8597968ff6f4e7488d5518bba911344b32b7948012a41ca3ae7e41"},
+ {file = "bitarray-3.3.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc448e4871fc4df22dd04db4a7b34829e5c3404003b9b1709b6b496d340db9c7"},
+ {file = "bitarray-3.3.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:51ce410a2d91da4b98d0f043df9e0938c33a2d9ad4a370fa8ec1ce7352fc20d9"},
+ {file = "bitarray-3.3.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:f7eb851d62a3166b8d1da5d5740509e215fa5b986467bf135a5a2d197bf16345"},
+ {file = "bitarray-3.3.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:69679fcd5f2c4b7c8920d2824519e3bff81a18fac25acf33ded4524ea68d8a39"},
+ {file = "bitarray-3.3.1-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:9c8f580590822df5675b9bc04b9df534be23a4917f709f9483fa554fd2e0a4df"},
+ {file = "bitarray-3.3.1-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:5500052aaf761afede3763434097a59042e22fbde508c88238d34105c13564c0"},
+ {file = "bitarray-3.3.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:e95f13d615f91da5a5ee5a782d9041c58be051661843416f2df9453f57008d40"},
+ {file = "bitarray-3.3.1-cp310-cp310-win32.whl", hash = "sha256:4ddef0b620db43dfde43fe17448ddc37289f67ad9a8ae39ffa64fa7bf529145f"},
+ {file = "bitarray-3.3.1-cp310-cp310-win_amd64.whl", hash = "sha256:d3f5cec4f8d27284f559a0d7c4a4bdfbae74d3b69d09c3f3b53989a730833ad8"},
+ {file = "bitarray-3.3.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:76abaeac4f94eda1755eed633a720c1f5f90048cb7ea4ab217ea84c48414189a"},
+ {file = "bitarray-3.3.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:75eb4d353dcf571d98e2818119af303fb0181b54361ac9a3e418b31c08131e56"},
+ {file = "bitarray-3.3.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e61b7552c953e58cf2d82b95843ca410eef18af2a5380f3ff058d21eaf902eda"},
+ {file = "bitarray-3.3.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d40dbc3609f1471ca3c189815ab4596adae75d8ee0da01412b2e3d0f6e94ab46"},
+ {file = "bitarray-3.3.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d2c8b7da269eb877cc2361d868fdcb63bfe7b5821c5b3ea2640be3f4b047b4bb"},
+ {file = "bitarray-3.3.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e362fc7a72fd00f641b3d6ed91076174cae36f49183afe8b4b4b77a2b5a116b0"},
+ {file = "bitarray-3.3.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f51322a55687f1ac075b897d409d0314a81f1ec55ebae96eeca40c9e8ad4a1c1"},
+ {file = "bitarray-3.3.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:dea204d3c6ec17fc3084c1db11bcad1347f707b7f5c08664e116a9c75ca134e9"},
+ {file = "bitarray-3.3.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:ea48f168274d60f900f847dd5fff9bd9d4e4f8af5a84149037c2b5fe1712fa0b"},
+ {file = "bitarray-3.3.1-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:8076650a08cec080f6726860c769320c27eb4379cfd22e2f5732787dec119bfe"},
+ {file = "bitarray-3.3.1-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:653d56c58197940f0c1305cb474b75597421b424be99284915bb4f3529d51837"},
+ {file = "bitarray-3.3.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:5d47d349468177afbe77e5306e70fd131d8da6946dd22ed93cbe70c5f2965307"},
+ {file = "bitarray-3.3.1-cp311-cp311-win32.whl", hash = "sha256:ac5d80cd43a9a995a501b4e3b38802628b35065e896f79d33430989e2e3f0870"},
+ {file = "bitarray-3.3.1-cp311-cp311-win_amd64.whl", hash = "sha256:52edf707f2fddb6a60a20093c3051c1925830d8c4e7fb2692aac2ee970cee2b0"},
+ {file = "bitarray-3.3.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:673a21ebb6c72904d7de58fe8c557bad614fce773f21ddc86bcf8dd09a387a32"},
+ {file = "bitarray-3.3.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:946e97712014784c3257e4ca45cf5071ffdbbebe83977d429e8f7329d0e2387f"},
+ {file = "bitarray-3.3.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:14f04e4eec65891523a8ca3bf9e1dcdefed52d695f40c4e50d5980471ffd22a4"},
+ {file = "bitarray-3.3.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0580b905ad589e3be52d36fbc83d32f6e3f6a63751d6c0da0ca328c32d037790"},
+ {file = "bitarray-3.3.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:50da5ecd86ee25df9f658d8724efbe8060de97217fb12a1163bee61d42946d83"},
+ {file = "bitarray-3.3.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42376c9e0a1357acc8830c4c0267e1c30ebd04b2d822af702044962a9f30b795"},
+ {file = "bitarray-3.3.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e9b18889a809d8f190e09dd6ee513983e1cdc04c3f23395d237ccf699dce5eaf"},
+ {file = "bitarray-3.3.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:f4e2fc0f6a573979462786edbf233fc9e1b644b4e790e8c29796f96bbe45353a"},
+ {file = "bitarray-3.3.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:99ea63932e86b08e36d6246ff8f663728a5baefa7e9a0e2f682864fe13297514"},
+ {file = "bitarray-3.3.1-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:8627fc0c9806d6dac2fb422d9cd650b0d225f498601381d334685b9f071b793c"},
+ {file = "bitarray-3.3.1-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:4bb2fa914a7bbcd7c6a457d44461a8540b9450e9bb4163d734eb74bffba90e69"},
+ {file = "bitarray-3.3.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:dd0ba0cc46b9a7d5cee4c4a9733dce2f0aa21caf04fe18d18d2025a4211adc18"},
+ {file = "bitarray-3.3.1-cp312-cp312-win32.whl", hash = "sha256:b77a03aba84bf2d2c8f2d5a81af5957da42324d9f701d584236dc735b6a191f8"},
+ {file = "bitarray-3.3.1-cp312-cp312-win_amd64.whl", hash = "sha256:dc6407e899fc3148d796fc4c3b0cec78153f034c5ff9baa6ae9c91d7ea05fb45"},
+ {file = "bitarray-3.3.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:31f21c7df3b40db541182db500f96cf2b9688261baec7b03a6010fdfc5e31855"},
+ {file = "bitarray-3.3.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:4c516daf790bd870d7575ac0e4136f1c3bc180b0de2a6bfa9fa112ea668131a0"},
+ {file = "bitarray-3.3.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b81664adf97f54cb174472f5511075bfb5e8fb13151e9c1592a09b45d544dab0"},
+ {file = "bitarray-3.3.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:421da43706c9a01d1b1454c34edbff372a7cfeff33879b6c048fc5f4481a9454"},
+ {file = "bitarray-3.3.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cb388586c9b4d338f9585885a6f4bd2736d4a7a7eb4b63746587cb8d04f7d156"},
+ {file = "bitarray-3.3.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b0bca424ee4d80a4880da332e56d2863e8d75305842c10aa6e94eb975bcad4fc"},
+ {file = "bitarray-3.3.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f62738cc16a387aa2f0dc6e93e0b0f48d5b084db249f632a0e3048d5ace783e6"},
+ {file = "bitarray-3.3.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:0d11e1a8914321fac34f50c48a9b1f92a1f51f45f9beb23e990806588137c4ca"},
+ {file = "bitarray-3.3.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:434180c1340268763439b80d21e074df24633c8748a867573bafecdbfaa68a76"},
+ {file = "bitarray-3.3.1-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:518e04584654a155fca829a6fe847cd403a17007e5afdc2b05b4240b53cd0842"},
+ {file = "bitarray-3.3.1-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:36851e3244950adc75670354dcd9bcad65e1695933c18762bb6f7590734c14ef"},
+ {file = "bitarray-3.3.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:824bd92e53f8e32dfa4bf38643246d1a500b13461ade361d342a8fcc3ddb6905"},
+ {file = "bitarray-3.3.1-cp313-cp313-win32.whl", hash = "sha256:8c84c3df9b921439189d0be6ad4f4212085155813475a58fbc5fb3f1d5e8a001"},
+ {file = "bitarray-3.3.1-cp313-cp313-win_amd64.whl", hash = "sha256:71838052ad546da110b8a8aaa254bda2e162e65af563d92b15c8bc7ab1642909"},
+ {file = "bitarray-3.3.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:131ff1eed8902fb54ea64f8d0bf8fcbbda8ad6b9639d81cacc3a398c7488fecb"},
+ {file = "bitarray-3.3.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:62c2278763edc823e79b8f0a0fdc7c8c9c45a3e982db9355042839c1f0c4ea92"},
+ {file = "bitarray-3.3.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:751a2cd05326e1552b56090595ba8d35fe6fef666d5ca9c0a26d329c65a9c4a0"},
+ {file = "bitarray-3.3.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d57b3b92bfa453cba737716680292afb313ec92ada6c139847e005f5ac1ad08c"},
+ {file = "bitarray-3.3.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7c7913d3cf7017bd693177ca0a4262d51587378d9c4ae38d13be3655386f0c27"},
+ {file = "bitarray-3.3.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2441da551787086c57fa8983d43e103fd2519389c8e03302908697138c287d6a"},
+ {file = "bitarray-3.3.1-cp36-cp36m-musllinux_1_2_aarch64.whl", hash = "sha256:c17fd3a63b31a21a979962bd3ab0f96d22dcdb79dc5149efc2cf66a16ae0bb59"},
+ {file = "bitarray-3.3.1-cp36-cp36m-musllinux_1_2_i686.whl", hash = "sha256:f7cee295219988b50b543791570b013e3f3325867f9650f6233b48cb00b020c2"},
+ {file = "bitarray-3.3.1-cp36-cp36m-musllinux_1_2_ppc64le.whl", hash = "sha256:307e4cd6b94de4b4b5b0f4599ffddabde4c33ac22a74998887048d24cb379ad3"},
+ {file = "bitarray-3.3.1-cp36-cp36m-musllinux_1_2_s390x.whl", hash = "sha256:1b7e89d4005eee831dc90d50c69af74ece6088f3c1b673d0089c8ef7d5346c37"},
+ {file = "bitarray-3.3.1-cp36-cp36m-musllinux_1_2_x86_64.whl", hash = "sha256:8f267edd51db6903c67b2a2b0f780bb0e52d2e92ec569ddd241486eeff347283"},
+ {file = "bitarray-3.3.1-cp36-cp36m-win32.whl", hash = "sha256:2fbd399cfdb7dee0bb4705bc8cd51163a9b2f25bb266807d57e5c693e0a14df2"},
+ {file = "bitarray-3.3.1-cp36-cp36m-win_amd64.whl", hash = "sha256:551844744d22fe2e37525bd7132d2e9dae5a9621e3d8a43f46bbe6edadb4c63b"},
+ {file = "bitarray-3.3.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:492524a28c3aab6a4ef0a741ee9f3578b6606bb52a7a94106c386bdebab1df44"},
+ {file = "bitarray-3.3.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da225a602cb4a97900e416059bc77d7b0bb8ac5cb6cb3cc734fd01c636387d2b"},
+ {file = "bitarray-3.3.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:fd3ed1f7d2d33856252863d5fa976c41013fac4eb0898bf7c3f5341f7ae73e06"},
+ {file = "bitarray-3.3.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a1adc8cd484de52b6b11a0e59e087cd3ae593ce4c822c18d4095d16e06e49453"},
+ {file = "bitarray-3.3.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8360759897d50e4f7ec8be51f788119bd43a61b1fe9c68a508a7ba495144859a"},
+ {file = "bitarray-3.3.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:50df8e1915a1acfd9cd0a4657d26cacd5aee4c3286ebb63e9dd75271ea6b54e0"},
+ {file = "bitarray-3.3.1-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:b9f2247b76e2e8c88f81fb850adb211d9b322f498ae7e5797f7629954f5b9767"},
+ {file = "bitarray-3.3.1-cp37-cp37m-musllinux_1_2_i686.whl", hash = "sha256:958b75f26f8abbcb9bc47a8a546a0449ba565d6aac819e5bb80417b93e5777fa"},
+ {file = "bitarray-3.3.1-cp37-cp37m-musllinux_1_2_ppc64le.whl", hash = "sha256:54093229fec0f8c605b7873020c07681c1f1f96c433ae082d2da106ab11b206f"},
+ {file = "bitarray-3.3.1-cp37-cp37m-musllinux_1_2_s390x.whl", hash = "sha256:58365c6c3e4a5ebbc8f28bf7764f5b00be5c8b1ffbd70474e6f801383f3fe0a0"},
+ {file = "bitarray-3.3.1-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:af6a09c296aa2d68b25eb154079abd5a58da883db179e9df0fc9215c405be6be"},
+ {file = "bitarray-3.3.1-cp37-cp37m-win32.whl", hash = "sha256:b521c2d73f6fa1c461a68c5d220836d0fea9261d5f934833aaffde5114aecffb"},
+ {file = "bitarray-3.3.1-cp37-cp37m-win_amd64.whl", hash = "sha256:90178b8c6f75b43612dadf50ff0df08a560e220424ce33cf6d2514d7ab1803a7"},
+ {file = "bitarray-3.3.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:64a5404a258ef903db67d7911147febf112858ba30c180dae0c23405412e0a2f"},
+ {file = "bitarray-3.3.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0952d05e1d6b0a736d73d34128b652d7549ba7d00ccc1e7c00efbc6edd687ee3"},
+ {file = "bitarray-3.3.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c17eae957d61fea05d3f2333a95dd79dc4733f3eadf44862cd6d586daae31ea3"},
+ {file = "bitarray-3.3.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c00b2ea9aab5b2c623b1901a4c04043fb847c8bd64a2f52518488434eb44c4e6"},
+ {file = "bitarray-3.3.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a29ad824bf4b735cb119e2c79a4b821ad462aeb4495e80ff186f1a8e48362082"},
+ {file = "bitarray-3.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2e92d2d7d405e004f2bdf9ff6d58faed6d04e0b74a9d96905ade61c293abe315"},
+ {file = "bitarray-3.3.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:833e06b01ff8f5a9f5b52156a23e9930402d964c96130f6d0bd5297e5dec95dc"},
+ {file = "bitarray-3.3.1-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:a0c87ffc5bf3669b0dfa91752653c41c9c38e1fd5b95aeb4c7ee40208c953fcd"},
+ {file = "bitarray-3.3.1-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:9ce64e247af33fa348694dbf7f4943a60040b5cc04df813649cc8b54c7f54061"},
+ {file = "bitarray-3.3.1-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:52e8d36933bb3fb132c95c43171f47f07c22dd31536495be20f86ddbf383e3c6"},
+ {file = "bitarray-3.3.1-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:434e389958ab98415ed4d9d67dd94c0ac835036a16b488df6736222f4f55ff35"},
+ {file = "bitarray-3.3.1-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:e75c4a1f00f46057f2fc98d717b2eabba09582422fe608158beed2ef0a5642da"},
+ {file = "bitarray-3.3.1-cp38-cp38-win32.whl", hash = "sha256:9d6fe373572b20adde2d6a58f8dc900b0cb4eec625b05ca1adbf053772723c78"},
+ {file = "bitarray-3.3.1-cp38-cp38-win_amd64.whl", hash = "sha256:eeda85d034a2649b7e4dbd7067411e9c55c1fc65fafb9feb973d810b103e36a0"},
+ {file = "bitarray-3.3.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:47abbec73f20176e119f5c4c68aaf243c46a5e072b9c182f2c110b5b227256a7"},
+ {file = "bitarray-3.3.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f46e7fe734b57f3783a324bf3a7053df54299653e646d86558a4b2576cb47208"},
+ {file = "bitarray-3.3.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d2c411b7d3784109dfc33f5f7cdf331d3373b8349a4ad608ee482f1a04c30efe"},
+ {file = "bitarray-3.3.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9511420cf727eb6e603dc6f3c122da1a16af38abc92272a715ce68c47b19b140"},
+ {file = "bitarray-3.3.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:39fdd56fd9076a4a34c3cd21e1c84dc861dac5e92c1ed9daed6aed6b11719c8c"},
+ {file = "bitarray-3.3.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:638ad50ecbffd05efdfa9f77b24b497b8e523f078315846614c647ebc3389bb5"},
+ {file = "bitarray-3.3.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f26f3197779fe5a90a54505334d34ceb948cec6378caea49cd9153b3bbe57566"},
+ {file = "bitarray-3.3.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:01299fb36af3e7955967f3dbc4097a2d88845166837899350f411d95a857f8aa"},
+ {file = "bitarray-3.3.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:f1767c325ef4983f52a9d62590f09ea998c06d8d4aa9f13b9eeabaac3536381e"},
+ {file = "bitarray-3.3.1-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:ed6f9b158c11e7bcf9b0b6788003aed5046a0759e7b25e224d9551a01c779ee7"},
+ {file = "bitarray-3.3.1-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:ab52dd26d24061d67f485f3400cc7d3d5696f0246294a372ef09aa8ef31a44c4"},
+ {file = "bitarray-3.3.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:0ba347a4dcc990637aa700227675d8033f68b417dcd7ccf660bd2e87e10885ec"},
+ {file = "bitarray-3.3.1-cp39-cp39-win32.whl", hash = "sha256:4bda4e4219c6271beec737a5361b009dcf9ff6d84a2df92bf3dd4f4e97bb87e5"},
+ {file = "bitarray-3.3.1-cp39-cp39-win_amd64.whl", hash = "sha256:3afe39028afff6e94bb90eb0f8c5eb9357c0e37ce3c249f96dbcfc1a73938015"},
+ {file = "bitarray-3.3.1-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:c001b7ac2d9cf1a73899cf857d3d66919deca677df26df905852039c46aa30a6"},
+ {file = "bitarray-3.3.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:535cc398610ff22dc0341e8833c34be73634a9a0a5d04912b4044e91dfbbc413"},
+ {file = "bitarray-3.3.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5dcb5aaaa2d91cc04fa9adfe31222ab150e72d99c779b1ddca10400a2fd319ec"},
+ {file = "bitarray-3.3.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:54ac6f8d2f696d83f9ccbb4cc4ce321dc80b9fa4613749a8ab23bda5674510ea"},
+ {file = "bitarray-3.3.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:78d069a00a8d06fb68248edd5bf2aa5e8009f4f5eae8dd5b5a529812132ad8a6"},
+ {file = "bitarray-3.3.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:cbf063667ef89b0d8b8bd1fcaaa4dcc8c65c17048eb14fb1fa9dbe9cb5197c81"},
+ {file = "bitarray-3.3.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:0b7e1f4139d3f17feba72e386a8f1318fb35182ff65890281e727fd07fdfbd72"},
+ {file = "bitarray-3.3.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d030b96f6ccfec0812e2fc1b02ab72d56a408ec215f496a7a25cde31160a88b4"},
+ {file = "bitarray-3.3.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bf7ead8b947a14c785d04943ff4743db90b0c40a4cb27e6bef4c3650800a927d"},
+ {file = "bitarray-3.3.1-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f5f44d71486949237679a8052cda171244d0be9279776c1d3d276861950dd608"},
+ {file = "bitarray-3.3.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:601fedd0e5227a5591e2eae2d35d45a07f030783fc41fd217cdf0c74db554cb9"},
+ {file = "bitarray-3.3.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:7445c34e5d55ec512447efa746f046ecf4627c08281fc6e9ef844423167237bc"},
+ {file = "bitarray-3.3.1-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:24296caffe89af65fc8029a56274db6a268f6a297a5163e65df8177c2dd67b19"},
+ {file = "bitarray-3.3.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:90b35553c318b49d5ffdaf3d25b6f0117fd5bbfc3be5576fc41ca506ca0e9b8e"},
+ {file = "bitarray-3.3.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f937ef83e5666b6266236f59b1f38abe64851fb20e7d8d13033c5168d35ef39d"},
+ {file = "bitarray-3.3.1-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:86dd5b8031d690afc90430997187a4fc5871bc6b81d73055354b8eb48b3e6342"},
+ {file = "bitarray-3.3.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:9101d48f9532ceb6b1d6a5f7d3a2dd5c853015850c65a47045c70f5f2f9ff88f"},
+ {file = "bitarray-3.3.1-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7964b17923c1bfa519afe273335023e0800c64bdca854008e75f2b148614d3f2"},
+ {file = "bitarray-3.3.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:26a26614bba95f3e4ea8c285206a4efe5ffb99e8539356d78a62491facc326cf"},
+ {file = "bitarray-3.3.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1ce3e352f1b7f1201b04600f93035312b00c9f8f4d606048c39adac32b2fb738"},
+ {file = "bitarray-3.3.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:176991b2769425341da4d52a684795498c0cd4136f4329ba9d524bcb96d26604"},
+ {file = "bitarray-3.3.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:64cef9f2d15261ea667838a4460f75acf4b03d64d53df664357541cc8d2c8183"},
+ {file = "bitarray-3.3.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:28d866fa462d77cafbf284aea14102a31dcfdebb9a5abbfb453f6eb6b2deb4fd"},
+ {file = "bitarray-3.3.1.tar.gz", hash = "sha256:8c89219a672d0a15ab70f8a6f41bc8355296ec26becef89a127c1a32bb2e6345"},
+]
+
[[package]]
name = "black"
version = "25.1.0"
@@ -535,26 +831,107 @@ uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "cachetools"
-version = "5.5.2"
+version = "5.5.0"
description = "Extensible memoizing collections and decorators"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
- {file = "cachetools-5.5.2-py3-none-any.whl", hash = "sha256:d26a22bcc62eb95c3beabd9f1ee5e820d3d2704fe2967cbe350e20c8ffcd3f0a"},
- {file = "cachetools-5.5.2.tar.gz", hash = "sha256:1a661caa9175d26759571b2e19580f9d6393969e5dfca11fdb1f947a23e640d4"},
+ {file = "cachetools-5.5.0-py3-none-any.whl", hash = "sha256:02134e8439cdc2ffb62023ce1debca2944c3f289d66bb17ead3ab3dede74b292"},
+ {file = "cachetools-5.5.0.tar.gz", hash = "sha256:2cc24fb4cbe39633fb7badd9db9ca6295d766d9c2995f245725a46715d050f2a"},
+]
+
+[[package]]
+name = "cbor2"
+version = "5.6.5"
+description = "CBOR (de)serializer with extensive tag support"
+optional = false
+python-versions = ">=3.8"
+groups = ["main"]
+files = [
+ {file = "cbor2-5.6.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e16c4a87fc999b4926f5c8f6c696b0d251b4745bc40f6c5aee51d69b30b15ca2"},
+ {file = "cbor2-5.6.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:87026fc838370d69f23ed8572939bd71cea2b3f6c8f8bb8283f573374b4d7f33"},
+ {file = "cbor2-5.6.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a88f029522aec5425fc2f941b3df90da7688b6756bd3f0472ab886d21208acbd"},
+ {file = "cbor2-5.6.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b9d15b638539b68aa5d5eacc56099b4543a38b2d2c896055dccf7e83d24b7955"},
+ {file = "cbor2-5.6.5-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:47261f54a024839ec649b950013c4de5b5f521afe592a2688eebbe22430df1dc"},
+ {file = "cbor2-5.6.5-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:559dcf0d897260a9e95e7b43556a62253e84550b77147a1ad4d2c389a2a30192"},
+ {file = "cbor2-5.6.5-cp310-cp310-win_amd64.whl", hash = "sha256:5b856fda4c50c5bc73ed3664e64211fa4f015970ed7a15a4d6361bd48462feaf"},
+ {file = "cbor2-5.6.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:863e0983989d56d5071270790e7ed8ddbda88c9e5288efdb759aba2efee670bc"},
+ {file = "cbor2-5.6.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5cff06464b8f4ca6eb9abcba67bda8f8334a058abc01005c8e616728c387ad32"},
+ {file = "cbor2-5.6.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f4c7dbcdc59ea7f5a745d3e30ee5e6b6ff5ce7ac244aa3de6786391b10027bb3"},
+ {file = "cbor2-5.6.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:34cf5ab0dc310c3d0196caa6ae062dc09f6c242e2544bea01691fe60c0230596"},
+ {file = "cbor2-5.6.5-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:6797b824b26a30794f2b169c0575301ca9b74ae99064e71d16e6ba0c9057de51"},
+ {file = "cbor2-5.6.5-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:73b9647eed1493097db6aad61e03d8f1252080ee041a1755de18000dd2c05f37"},
+ {file = "cbor2-5.6.5-cp311-cp311-win_amd64.whl", hash = "sha256:6e14a1bf6269d25e02ef1d4008e0ce8880aa271d7c6b4c329dba48645764f60e"},
+ {file = "cbor2-5.6.5-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:e25c2aebc9db99af7190e2261168cdde8ed3d639ca06868e4f477cf3a228a8e9"},
+ {file = "cbor2-5.6.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:fde21ac1cf29336a31615a2c469a9cb03cf0add3ae480672d4d38cda467d07fc"},
+ {file = "cbor2-5.6.5-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a8947c102cac79d049eadbd5e2ffb8189952890df7cbc3ee262bbc2f95b011a9"},
+ {file = "cbor2-5.6.5-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:38886c41bebcd7dca57739439455bce759f1e4c551b511f618b8e9c1295b431b"},
+ {file = "cbor2-5.6.5-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:ae2b49226224e92851c333b91d83292ec62eba53a19c68a79890ce35f1230d70"},
+ {file = "cbor2-5.6.5-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:f2764804ffb6553283fc4afb10a280715905a4cea4d6dc7c90d3e89c4a93bc8d"},
+ {file = "cbor2-5.6.5-cp312-cp312-win_amd64.whl", hash = "sha256:a3ac50485cf67dfaab170a3e7b527630e93cb0a6af8cdaa403054215dff93adf"},
+ {file = "cbor2-5.6.5-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f0d0a9c5aabd48ecb17acf56004a7542a0b8d8212be52f3102b8218284bd881e"},
+ {file = "cbor2-5.6.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:61ceb77e6aa25c11c814d4fe8ec9e3bac0094a1f5bd8a2a8c95694596ea01e08"},
+ {file = "cbor2-5.6.5-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:97a7e409b864fecf68b2ace8978eb5df1738799a333ec3ea2b9597bfcdd6d7d2"},
+ {file = "cbor2-5.6.5-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7f6d69f38f7d788b04c09ef2b06747536624b452b3c8b371ab78ad43b0296fab"},
+ {file = "cbor2-5.6.5-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f91e6d74fa6917df31f8757fdd0e154203b0dd0609ec53eb957016a2b474896a"},
+ {file = "cbor2-5.6.5-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:5ce13a27ef8fddf643fc17a753fe34aa72b251d03c23da6a560c005dc171085b"},
+ {file = "cbor2-5.6.5-cp313-cp313-win_amd64.whl", hash = "sha256:54c72a3207bb2d4480c2c39dad12d7971ce0853a99e3f9b8d559ce6eac84f66f"},
+ {file = "cbor2-5.6.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:4586a4f65546243096e56a3f18f29d60752ee9204722377021b3119a03ed99ff"},
+ {file = "cbor2-5.6.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:3d1a18b3a58dcd9b40ab55c726160d4a6b74868f2a35b71f9e726268b46dc6a2"},
+ {file = "cbor2-5.6.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a83b76367d1c3e69facbcb8cdf65ed6948678e72f433137b41d27458aa2a40cb"},
+ {file = "cbor2-5.6.5-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:90bfa36944caccec963e6ab7e01e64e31cc6664535dc06e6295ee3937c999cbb"},
+ {file = "cbor2-5.6.5-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:37096663a5a1c46a776aea44906cbe5fa3952f29f50f349179c00525d321c862"},
+ {file = "cbor2-5.6.5-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:93676af02bd9a0b4a62c17c5b20f8e9c37b5019b1a24db70a2ee6cb770423568"},
+ {file = "cbor2-5.6.5-cp38-cp38-win_amd64.whl", hash = "sha256:8f747b7a9aaa58881a0c5b4cd4a9b8fb27eca984ed261a769b61de1f6b5bd1e6"},
+ {file = "cbor2-5.6.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:94885903105eec66d7efb55f4ce9884fdc5a4d51f3bd75b6fedc68c5c251511b"},
+ {file = "cbor2-5.6.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fe11c2eb518c882cfbeed456e7a552e544893c17db66fe5d3230dbeaca6b615c"},
+ {file = "cbor2-5.6.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:66dd25dd919cddb0b36f97f9ccfa51947882f064729e65e6bef17c28535dc459"},
+ {file = "cbor2-5.6.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fa61a02995f3a996c03884cf1a0b5733f88cbfd7fa0e34944bf678d4227ee712"},
+ {file = "cbor2-5.6.5-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:824f202b556fc204e2e9a67d6d6d624e150fbd791278ccfee24e68caec578afd"},
+ {file = "cbor2-5.6.5-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:7488aec919f8408f9987a3a32760bd385d8628b23a35477917aa3923ff6ad45f"},
+ {file = "cbor2-5.6.5-cp39-cp39-win_amd64.whl", hash = "sha256:a34ee99e86b17444ecbe96d54d909dd1a20e2da9f814ae91b8b71cf1ee2a95e4"},
+ {file = "cbor2-5.6.5-py3-none-any.whl", hash = "sha256:3038523b8fc7de312bb9cdcbbbd599987e64307c4db357cd2030c472a6c7d468"},
+ {file = "cbor2-5.6.5.tar.gz", hash = "sha256:b682820677ee1dbba45f7da11898d2720f92e06be36acec290867d5ebf3d7e09"},
+]
+
+[package.extras]
+benchmarks = ["pytest-benchmark (==4.0.0)"]
+doc = ["Sphinx (>=7)", "packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphinx-rtd-theme (>=1.3.0)", "typing-extensions ; python_version < \"3.12\""]
+test = ["coverage (>=7)", "hypothesis", "pytest"]
+
+[[package]]
+name = "cdp-sdk"
+version = "0.21.0"
+description = "CDP Python SDK"
+optional = false
+python-versions = "<4.0,>=3.10"
+groups = ["main"]
+files = [
+ {file = "cdp_sdk-0.21.0-py3-none-any.whl", hash = "sha256:36a2ec372c79354133f142566674f6f5a21f474d31f378154a3b4e0e0089818a"},
+ {file = "cdp_sdk-0.21.0.tar.gz", hash = "sha256:6d832189e84cec76c3353f52835ddf06789630325ca5f0ea1a48ad663b698e7d"},
]
+
+[package.dependencies]
+bip-utils = ">=2.9.3,<3.0.0"
+coincurve = ">=20.0.0,<21.0.0"
+cryptography = ">=44.0.0,<45.0.0"
+pydantic = ">=2.10.3,<3.0.0"
+pyjwt = ">=2.10.1,<3.0.0"
+python-dateutil = ">=2.9.0.post0,<3.0.0"
+urllib3 = ">=2.2.3,<3.0.0"
+web3 = ">=7.6.0,<8.0.0"
+
[[package]]
name = "certifi"
-version = "2025.1.31"
+version = "2024.12.14"
description = "Python package for providing Mozilla's CA Bundle."
optional = false
python-versions = ">=3.6"
groups = ["main", "demo", "dev", "docs", "research"]
files = [
- {file = "certifi-2025.1.31-py3-none-any.whl", hash = "sha256:ca78db4565a652026a4db2bcdf68f2fb589ea80d0be70e03929ed730746b84fe"},
- {file = "certifi-2025.1.31.tar.gz", hash = "sha256:3d5da6925056f6f18f119200434a4780a94263f10d1c21d032a6f6b2baa20651"},
+ {file = "certifi-2024.12.14-py3-none-any.whl", hash = "sha256:1275f7a45be9464efc1173084eaa30f866fe2e47d389406136d332ed4967ec56"},
+ {file = "certifi-2024.12.14.tar.gz", hash = "sha256:b650d30f370c2b724812bee08008be0c4163b163ddaec3f2546c1caf65f191db"},
]
[[package]]
@@ -650,6 +1027,18 @@ files = [
{file = "cfgv-3.4.0.tar.gz", hash = "sha256:e52591d4c5f5dead8e0f673fb16db7949d2cfb3f7da4582893288f0ded8fe560"},
]
+[[package]]
+name = "chardet"
+version = "5.2.0"
+description = "Universal encoding detector for Python 3"
+optional = false
+python-versions = ">=3.7"
+groups = ["main"]
+files = [
+ {file = "chardet-5.2.0-py3-none-any.whl", hash = "sha256:e1cf59446890a00105fe7b7912492ea04b6e6f06d4b742b2c788469e34c82970"},
+ {file = "chardet-5.2.0.tar.gz", hash = "sha256:1b3b6ff479a8c414bc3fa2c0852995695c4a026dcd6d0633b2dd092ca39c1cf7"},
+]
+
[[package]]
name = "charset-normalizer"
version = "3.4.1"
@@ -752,6 +1141,116 @@ files = [
{file = "charset_normalizer-3.4.1.tar.gz", hash = "sha256:44251f18cd68a75b56585dd00dae26183e102cd5e0f9f1466e6df5da2ed64ea3"},
]
+[[package]]
+name = "ckzg"
+version = "2.1.1"
+description = "Python bindings for C-KZG-4844"
+optional = false
+python-versions = "*"
+groups = ["main"]
+files = [
+ {file = "ckzg-2.1.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:4b9825a1458219e8b4b023012b8ef027ef1f47e903f9541cbca4615f80132730"},
+ {file = "ckzg-2.1.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e2a40a3ba65cca4b52825d26829e6f7eb464aa27a9e9efb6b8b2ce183442c741"},
+ {file = "ckzg-2.1.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a1d753fbe85be7c21602eddc2d40e0915e25fce10329f4f801a0002a4f886cc7"},
+ {file = "ckzg-2.1.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9d76b50527f1d12430bf118aff6fa4051e9860eada43f29177258b8d399448ea"},
+ {file = "ckzg-2.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:44c8603e43c021d100f355f50189183135d1df3cbbddb8881552d57fbf421dde"},
+ {file = "ckzg-2.1.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:38707a638c9d715b3c30b29352b969f78d8fc10faed7db5faf517f04359895c0"},
+ {file = "ckzg-2.1.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:52c4d257bdcbe822d20c5cd24c8154ec5aac33c49a8f5a19e716d9107a1c8785"},
+ {file = "ckzg-2.1.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:1507f7bfb9bcf51d816db5d8d0f0ed53c8289605137820d437b69daea8333e16"},
+ {file = "ckzg-2.1.1-cp310-cp310-win_amd64.whl", hash = "sha256:d02eaaf4f841910133552b3a051dea53bcfe60cd98199fc4cf80b27609d8baa2"},
+ {file = "ckzg-2.1.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:465e2b71cf9dc383f66f1979269420a0da9274a3a9e98b1a4455e84927dfe491"},
+ {file = "ckzg-2.1.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ee2f26f17a64ad0aab833d637b276f28486b82a29e34f32cf54b237b8f8ab72d"},
+ {file = "ckzg-2.1.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:99cc2c4e9fb8c62e3e0862c7f4df9142f07ba640da17fded5f6e0fd09f75909f"},
+ {file = "ckzg-2.1.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:773dd016693d74aca1f5d7982db2bad7dde2e147563aeb16a783f7e5f69c01fe"},
+ {file = "ckzg-2.1.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0af2b2144f87ba218d8db01382a961b3ecbdde5ede4fa0d9428d35f8c8a595ba"},
+ {file = "ckzg-2.1.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d8f55e63d3f7c934a2cb53728ed1d815479e177aca8c84efe991c2920977cff6"},
+ {file = "ckzg-2.1.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:ecb42aaa0ffa427ff14a9dde9356ba69e5ae6014650b397af55b31bdae7a9b6e"},
+ {file = "ckzg-2.1.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:5a01514239f12fb1a7ad9009c20062a4496e13b09541c1a65f97e295da648c70"},
+ {file = "ckzg-2.1.1-cp311-cp311-win_amd64.whl", hash = "sha256:6516b9684aae262c85cf7fddd8b585b8139ad20e08ec03994e219663abbb0916"},
+ {file = "ckzg-2.1.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c60e8903344ce98ce036f0fabacce952abb714cad4607198b2f0961c28b8aa72"},
+ {file = "ckzg-2.1.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a4299149dd72448e5a8d2d1cc6cc7472c92fc9d9f00b1377f5b017c089d9cd92"},
+ {file = "ckzg-2.1.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:025dd31ffdcc799f3ff842570a2a6683b6c5b01567da0109c0c05d11768729c4"},
+ {file = "ckzg-2.1.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9b42ab8385c273f40a693657c09d2bba40cb4f4666141e263906ba2e519e80bd"},
+ {file = "ckzg-2.1.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1be3890fc1543f4fcfc0063e4baf5c036eb14bcf736dabdc6171ab017e0f1671"},
+ {file = "ckzg-2.1.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:b754210ded172968b201e2d7252573af6bf52d6ad127ddd13d0b9a45a51dae7b"},
+ {file = "ckzg-2.1.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:b2f8fda87865897a269c4e951e3826c2e814427a6cdfed6731cccfe548f12b36"},
+ {file = "ckzg-2.1.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:98e70b5923d77c7359432490145e9d1ab0bf873eb5de56ec53f4a551d7eaec79"},
+ {file = "ckzg-2.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:42af7bde4ca45469cd93a96c3d15d69d51d40e7f0d30e3a20711ebd639465fcb"},
+ {file = "ckzg-2.1.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:7e4edfdaf87825ff43b9885fabfdea408737a714f4ce5467100d9d1d0a03b673"},
+ {file = "ckzg-2.1.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:815fd2a87d6d6c57d669fda30c150bc9bf387d47e67d84535aa42b909fdc28ea"},
+ {file = "ckzg-2.1.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c32466e809b1ab3ff01d3b0bb0b9912f61dcf72957885615595f75e3f7cc10e5"},
+ {file = "ckzg-2.1.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f11b73ccf37b12993f39a7dbace159c6d580aacacde6ee17282848476550ddbc"},
+ {file = "ckzg-2.1.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de3b9433a1f2604bd9ac1646d3c83ad84a850d454d3ac589fe8e70c94b38a6b0"},
+ {file = "ckzg-2.1.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:b7d7e1b5ea06234558cd95c483666fd785a629b720a7f1622b3cbffebdc62033"},
+ {file = "ckzg-2.1.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:9f5556e6675866040cc4335907be6c537051e7f668da289fa660fdd8a30c9ddb"},
+ {file = "ckzg-2.1.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:55b2ba30c5c9daac0c55f1aac851f1b7bf1f7aa0028c2db4440e963dd5b866d6"},
+ {file = "ckzg-2.1.1-cp313-cp313-win_amd64.whl", hash = "sha256:10d201601fc8f28c0e8cec3406676797024dd374c367bbeec5a7a9eac9147237"},
+ {file = "ckzg-2.1.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:5f46c8fd5914db62b446baf62c8599da07e6f91335779a9709c554ef300a7b60"},
+ {file = "ckzg-2.1.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:60f14612c2be84f405755d734b0ad4e445db8af357378b95b72339b59e1f4fcf"},
+ {file = "ckzg-2.1.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:929e6e793039f42325988004a90d16b0ef4fc7e1330142e180f0298f2ed4527c"},
+ {file = "ckzg-2.1.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2beac2af53ea181118179570ecc81d8a8fc52c529553d7fd8786fd100a2aa39b"},
+ {file = "ckzg-2.1.1-cp36-cp36m-musllinux_1_2_aarch64.whl", hash = "sha256:2432d48aec296baee79556bfde3bddd2799bcc7753cd1f0d0c9a3b0333935637"},
+ {file = "ckzg-2.1.1-cp36-cp36m-musllinux_1_2_i686.whl", hash = "sha256:4c2e8180b54261ccae2bf8acd003ccee7394d88d073271af19c5f2ac4a54c607"},
+ {file = "ckzg-2.1.1-cp36-cp36m-musllinux_1_2_x86_64.whl", hash = "sha256:c44e36bd53d9dd0ab29bd6ed2d67ea43c48eecd57f8197854a75742213938bf5"},
+ {file = "ckzg-2.1.1-cp36-cp36m-win_amd64.whl", hash = "sha256:10befd86e643d38ac468151cdfb71e79b2d46aa6397b81db4224f4f6995262eb"},
+ {file = "ckzg-2.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:138a9324ad8e8a9ade464043dc3a84afe12996516788f2ed841bdbe5d123af81"},
+ {file = "ckzg-2.1.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:635af0a33a10c9ac275f3efc142880a6b46ac63f4495f600aae05266af4fadff"},
+ {file = "ckzg-2.1.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:360e263677ee5aedb279b42cf54b51c905ddcac9181c65d89ec0b298d3f31ec0"},
+ {file = "ckzg-2.1.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f81395f77bfd069831cbb1de9d473c7044abe9ce6cd562ef6ccd76d23abcef43"},
+ {file = "ckzg-2.1.1-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:db1ff122f8dc10c9500a00a4d680c3c38f4e19b01d95f38e0f5bc55a77c8ab98"},
+ {file = "ckzg-2.1.1-cp37-cp37m-musllinux_1_2_i686.whl", hash = "sha256:1f82f539949ff3c6a5accfdd211919a3e374d354b3665d062395ebdbf8befaeb"},
+ {file = "ckzg-2.1.1-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:5bc8ae85df97467e84abb491b516e25dbca36079e766eafce94d1bc45e4aaa35"},
+ {file = "ckzg-2.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:e749ce9fcb26e37101f2af8ba9c6376b66eb598880d35e457890044ba77c1cf7"},
+ {file = "ckzg-2.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5b00201979a64fd7e6029f64d791af42374febb42452537933e881b49d4e8c77"},
+ {file = "ckzg-2.1.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:c61c437ba714ab7c802b51fb30125e8f8550e1320fe9050d20777420c153a2b3"},
+ {file = "ckzg-2.1.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8bd54394376598a7c081df009cfde3cc447beb640b6c6b7534582a31e6290ac7"},
+ {file = "ckzg-2.1.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:67d8c6680a7b370718af59cc17a983752706407cfbcace013ee707646d1f7b00"},
+ {file = "ckzg-2.1.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:55f6c57b24bc4fe16b1b50324ef8548f2a5053ad76bf90c618e2f88c040120d7"},
+ {file = "ckzg-2.1.1-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:f55fc10fb1b217c66bfe14e05535e5e61cfbb2a95dbb9b93a80984fa2ab4a7c0"},
+ {file = "ckzg-2.1.1-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:2e23e3198f8933f0140ef8b2aeba717d8de03ec7b8fb1ee946f8d39986ce0811"},
+ {file = "ckzg-2.1.1-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:2f9caf88bf216756bb1361b92616c750a933c9afb67972ad05c212649a9be520"},
+ {file = "ckzg-2.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:30e0c2d258bbc0c099c2d1854c6ffa2fd9abf6138b9c81f855e1936f6cb259aa"},
+ {file = "ckzg-2.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a6239d3d2e30cb894ca4e7765b1097eb6a70c0ecbe5f8e0b023fbf059472d4ac"},
+ {file = "ckzg-2.1.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:909ebabc253a98d9dc1d51f93dc75990134bfe296c947e1ecf3b7142aba5108e"},
+ {file = "ckzg-2.1.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0700dace6559b288b42ca8622be89c2a43509881ed6f4f0bfb6312bcceed0cb9"},
+ {file = "ckzg-2.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3a36aeabd243e906314694b4a107de99b0c4473ff1825fcb06acd147ffb1951a"},
+ {file = "ckzg-2.1.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d884e8f9c7d7839f1a95561f4479096dce21d45b0c5dd013dc0842550cea1cad"},
+ {file = "ckzg-2.1.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:338fdf4a0b463973fc7b7e4dc289739db929e61d7cb9ba984ebbe9c49d3aa6f9"},
+ {file = "ckzg-2.1.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:c594036d3408eebdcd8ab2c7aab7308239ed4df3d94f3211b7cf253f228fb0b7"},
+ {file = "ckzg-2.1.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:b0912ebb328ced510250a2325b095917db19c1a014792a0bf4c389f0493e39de"},
+ {file = "ckzg-2.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:5046aceb03482ddf7200f2f5c643787b100e6fb96919852faf1c79f8870c80a1"},
+ {file = "ckzg-2.1.1-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:375918e25eafb9bafe5215ab91698504cba3fe51b4fe92f5896af6c5663f50c6"},
+ {file = "ckzg-2.1.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:38b3b7802c76d4ad015db2b7a79a49c193babae50ee5f77e9ac2865c9e9ddb09"},
+ {file = "ckzg-2.1.1-pp310-pypy310_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:438a5009fd254ace0bc1ad974d524547f1a41e6aa5e778c5cd41f4ee3106bcd6"},
+ {file = "ckzg-2.1.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ce11cc163a2e0dab3af7455aca7053f9d5bb8d157f231acc7665fd230565d48"},
+ {file = "ckzg-2.1.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b53964c07f6a076e97eaa1ef35045e935d7040aff14f80bae7e9105717702d05"},
+ {file = "ckzg-2.1.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:cf085f15ae52ab2599c9b5a3d5842794bcf5613b7f58661fbfb0c5d9eac988b9"},
+ {file = "ckzg-2.1.1-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:4b0c850bd6cad22ac79b2a2ab884e0e7cd2b54a67d643cd616c145ebdb535a11"},
+ {file = "ckzg-2.1.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:26951f36bb60c9150bbd38110f5e1625596f9779dad54d1d492d8ec38bc84e3a"},
+ {file = "ckzg-2.1.1-pp311-pypy311_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bbe12445e49c4bee67746b7b958e90a973b0de116d0390749b0df351d94e9a8c"},
+ {file = "ckzg-2.1.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:71c5d4f66f09de4a99271acac74d2acb3559a77de77a366b34a91e99e8822667"},
+ {file = "ckzg-2.1.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42673c1d007372a4e8b48f6ef8f0ce31a9688a463317a98539757d1e2fb1ecc7"},
+ {file = "ckzg-2.1.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:57a7dc41ec6b69c1d9117eb61cf001295e6b4f67a736020442e71fb4367fb1a5"},
+ {file = "ckzg-2.1.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:22e4606857660b2ffca2f7b96c01d0b18b427776d8a93320caf2b1c7342881fe"},
+ {file = "ckzg-2.1.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b55475126a9efc82d61718b2d2323502e33d9733b7368c407954592ccac87faf"},
+ {file = "ckzg-2.1.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5939ae021557c64935a7649b13f4a58f1bd35c39998fd70d0cefb5cbaf77d1be"},
+ {file = "ckzg-2.1.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3ad1ec5f9726a9946508a4a2aace298172aa778de9ebbe97e21c873c3688cc87"},
+ {file = "ckzg-2.1.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:93d7edea3bb1602b18b394ebeec231d89dfd8d48fdd06571cb7656107aa62226"},
+ {file = "ckzg-2.1.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:c450d77af61011ced3777f97431d5f1bc148ca5362c67caf516aa2f6ef7e4817"},
+ {file = "ckzg-2.1.1-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:8fc8df4e17e08974961d6c14f6c57ccfd3ad5aede74598292ec6e5d6fc2dbcac"},
+ {file = "ckzg-2.1.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:93338da8011790ef53a68475678bc951fa7b337db027d8edbf1889e59691161c"},
+ {file = "ckzg-2.1.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4889f24b4ff614f39e3584709de1a3b0f1556675b33e360dbcb28cda827296d4"},
+ {file = "ckzg-2.1.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f7b58fbb1a9be4ae959feede8f103e12d80ef8453bdc6483bfdaf164879a2b80"},
+ {file = "ckzg-2.1.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:6136c5b5377c7f7033323b25bc2c7b43c025d44ed73e338c02f9f59df9460e5b"},
+ {file = "ckzg-2.1.1-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:fa419b92a0e8766deb7157fb28b6542c1c3f8dde35d2a69d1f91ec8e41047d35"},
+ {file = "ckzg-2.1.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:95cd6c8eb3ab5148cd97ab5bf44b84fd7f01adf4b36ffd070340ad2d9309b3f9"},
+ {file = "ckzg-2.1.1-pp39-pypy39_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:848191201052b48bdde18680ebb77bf8da99989270e5aea8b0290051f5ac9468"},
+ {file = "ckzg-2.1.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4716c0564131b0d609fb8856966e83892b9809cf6719c7edd6495b960451f8b"},
+ {file = "ckzg-2.1.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c399168ba199827dee3104b00cdc7418d4dbdf47a5fcbe7cf938fc928037534"},
+ {file = "ckzg-2.1.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:724f29f9f110d9ef42a6a1a1a7439548c61070604055ef96b2ab7a884cad4192"},
+ {file = "ckzg-2.1.1.tar.gz", hash = "sha256:d6b306b7ec93a24e4346aa53d07f7f75053bc0afc7398e35fa649e5f9d48fcc4"},
+]
+
[[package]]
name = "click"
version = "8.1.8"
@@ -767,6 +1266,115 @@ files = [
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
+[[package]]
+name = "coinbase-agentkit"
+version = "0.4.0"
+description = "Coinbase AgentKit"
+optional = false
+python-versions = "~=3.10"
+groups = ["main"]
+files = [
+ {file = "coinbase_agentkit-0.4.0-py3-none-any.whl", hash = "sha256:c172c9f127a03148ff0a35d5c70d1e5c688f3c2550b94e75b945faacfe1db57c"},
+ {file = "coinbase_agentkit-0.4.0.tar.gz", hash = "sha256:0166bf2ef245a414b23f58155879e70a066526b0e4de64c24d353cd70825762a"},
+]
+
+[package.dependencies]
+allora-sdk = ">=0.2.0,<0.3"
+cdp-sdk = "0.21.0"
+ecdsa = ">=0.19.0,<0.20"
+jsonschema = ">=4.23.0,<5"
+nilql = ">=0.0.0a12,<0.0.1"
+paramiko = ">=3.5.1,<4"
+pydantic = ">=2.0,<3.0"
+pyjwt = {version = ">=2.10.1,<3", extras = ["crypto"]}
+python-dotenv = ">=1.0.1,<2"
+requests = ">=2.31.0,<3"
+web3 = ">=7.10.0,<8"
+
+[[package]]
+name = "coinbase-agentkit-langchain"
+version = "0.3.0"
+description = "Coinbase AgentKit LangChain extension"
+optional = false
+python-versions = "~=3.10"
+groups = ["main"]
+files = [
+ {file = "coinbase_agentkit_langchain-0.3.0-py3-none-any.whl", hash = "sha256:ce80879e1f7210b18558985332784ec24f2845bd0a9529739276d2b199f7fc75"},
+ {file = "coinbase_agentkit_langchain-0.3.0.tar.gz", hash = "sha256:8e3ee37d76250c3400c333aeaf3c7e36544a70a12dfc40adb3a76f782b0bc4d2"},
+]
+
+[package.dependencies]
+coinbase-agentkit = ">=0.3.0,<0.5"
+langchain = ">=0.3.4,<0.4"
+python-dotenv = ">=1.0.1,<2"
+
+[[package]]
+name = "coincurve"
+version = "20.0.0"
+description = "Cross-platform Python CFFI bindings for libsecp256k1"
+optional = false
+python-versions = ">=3.8"
+groups = ["main"]
+files = [
+ {file = "coincurve-20.0.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d559b22828638390118cae9372a1bb6f6594f5584c311deb1de6a83163a0919b"},
+ {file = "coincurve-20.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:33d7f6ebd90fcc550f819f7f2cce2af525c342aac07f0ccda46ad8956ad9d99b"},
+ {file = "coincurve-20.0.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22d70dd55d13fd427418eb41c20fde0a20a5e5f016e2b1bb94710701e759e7e0"},
+ {file = "coincurve-20.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:46f18d481eaae72c169f334cde1fd22011a884e0c9c6adc3fdc1fd13df8236a3"},
+ {file = "coincurve-20.0.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9de1ec57f43c3526bc462be58fb97910dc1fdd5acab6c71eda9f9719a5bd7489"},
+ {file = "coincurve-20.0.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:a6f007c44c726b5c0b3724093c0d4fb8e294f6b6869beb02d7473b21777473a3"},
+ {file = "coincurve-20.0.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:0ff1f3b81330db5092c24da2102e4fcba5094f14945b3eb40746456ceabdd6d9"},
+ {file = "coincurve-20.0.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:82f7de97694d9343f26bd1c8e081b168e5f525894c12445548ce458af227f536"},
+ {file = "coincurve-20.0.0-cp310-cp310-win_amd64.whl", hash = "sha256:e905b4b084b4f3b61e5a5d58ac2632fd1d07b7b13b4c6d778335a6ca1dafd7a3"},
+ {file = "coincurve-20.0.0-cp310-cp310-win_arm64.whl", hash = "sha256:3657bb5ed0baf1cf8cf356e7d44aa90a7902cc3dd4a435c6d4d0bed0553ad4f7"},
+ {file = "coincurve-20.0.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:44087d1126d43925bf9a2391ce5601bf30ce0dba4466c239172dc43226696018"},
+ {file = "coincurve-20.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5ccf0ba38b0f307a9b3ce28933f6c71dc12ef3a0985712ca09f48591afd597c8"},
+ {file = "coincurve-20.0.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:566bc5986debdf8572b6be824fd4de03d533c49f3de778e29f69017ae3fe82d8"},
+ {file = "coincurve-20.0.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f4d70283168e146f025005c15406086513d5d35e89a60cf4326025930d45013a"},
+ {file = "coincurve-20.0.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:763c6122dd7d5e7a81c86414ce360dbe9a2d4afa1ca6c853ee03d63820b3d0c5"},
+ {file = "coincurve-20.0.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:f00c361c356bcea386d47a191bb8ac60429f4b51c188966a201bfecaf306ff7f"},
+ {file = "coincurve-20.0.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:4af57bdadd2e64d117dd0b33cfefe76e90c7a6c496a7b034fc65fd01ec249b15"},
+ {file = "coincurve-20.0.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:a26437b7cbde13fb6e09261610b788ca2a0ca2195c62030afd1e1e0d1a62e035"},
+ {file = "coincurve-20.0.0-cp311-cp311-win_amd64.whl", hash = "sha256:ed51f8bba35e6c7676ad65539c3dbc35acf014fc402101fa24f6b0a15a74ab9e"},
+ {file = "coincurve-20.0.0-cp311-cp311-win_arm64.whl", hash = "sha256:594b840fc25d74118407edbbbc754b815f1bba9759dbf4f67f1c2b78396df2d3"},
+ {file = "coincurve-20.0.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:4df4416a6c0370d777aa725a25b14b04e45aa228da1251c258ff91444643f688"},
+ {file = "coincurve-20.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1ccc3e4db55abf3fc0e604a187fdb05f0702bc5952e503d9a75f4ae6eeb4cb3a"},
+ {file = "coincurve-20.0.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ac8335b1658a2ef5b3eb66d52647742fe8c6f413ad5b9d5310d7ea6d8060d40f"},
+ {file = "coincurve-20.0.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c7ac025e485a0229fd5394e0bf6b4a75f8a4f6cee0dcf6f0b01a2ef05c5210ff"},
+ {file = "coincurve-20.0.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e46e3f1c21b3330857bcb1a3a5b942f645c8bce912a8a2b252216f34acfe4195"},
+ {file = "coincurve-20.0.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:df9ff9b17a1d27271bf476cf3fa92df4c151663b11a55d8cea838b8f88d83624"},
+ {file = "coincurve-20.0.0-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:4155759f071375699282e03b3d95fb473ee05c022641c077533e0d906311e57a"},
+ {file = "coincurve-20.0.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:0530b9dd02fc6f6c2916716974b79bdab874227f560c422801ade290e3fc5013"},
+ {file = "coincurve-20.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:eacf9c0ce8739c84549a89c083b1f3526c8780b84517ee75d6b43d276e55f8a0"},
+ {file = "coincurve-20.0.0-cp312-cp312-win_arm64.whl", hash = "sha256:52a67bfddbd6224dfa42085c88ad176559801b57d6a8bd30d92ee040de88b7b3"},
+ {file = "coincurve-20.0.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:61e951b1d695b62376f60519a84c4facaf756eeb9c5aff975bea0942833f185d"},
+ {file = "coincurve-20.0.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4e9e548db77f4ea34c0d748dddefc698adb0ee3fab23ed19f80fb2118dac70f6"},
+ {file = "coincurve-20.0.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8cdbf0da0e0809366fdfff236b7eb6e663669c7b1f46361a4c4d05f5b7e94c57"},
+ {file = "coincurve-20.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d72222b4ecd3952e8ffcbf59bc7e0d1b181161ba170b60e5c8e1f359a43bbe7e"},
+ {file = "coincurve-20.0.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9add43c4807f0c17a940ce4076334c28f51d09c145cd478400e89dcfb83fb59d"},
+ {file = "coincurve-20.0.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:bcc94cceea6ec8863815134083e6221a034b1ecef822d0277cf6ad2e70009b7f"},
+ {file = "coincurve-20.0.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:1ffbdfef6a6d147988eabaed681287a9a7e6ba45ecc0a8b94ba62ad0a7656d97"},
+ {file = "coincurve-20.0.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:13335c19c7e5f36eaba2a53c68073d981980d7dc7abfee68d29f2da887ccd24e"},
+ {file = "coincurve-20.0.0-cp38-cp38-win_amd64.whl", hash = "sha256:7fbfb8d16cf2bea2cf48fc5246d4cb0a06607d73bb5c57c007c9aed7509f855e"},
+ {file = "coincurve-20.0.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4870047704cddaae7f0266a549c927407c2ba0ec92d689e3d2b511736812a905"},
+ {file = "coincurve-20.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:81ce41263517b0a9f43cd570c87720b3c13324929584fa28d2e4095969b6015d"},
+ {file = "coincurve-20.0.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:572083ccce6c7b514d482f25f394368f4ae888f478bd0b067519d33160ea2fcc"},
+ {file = "coincurve-20.0.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ee5bc78a31a2f1370baf28aaff3949bc48f940a12b0359d1cd2c4115742874e6"},
+ {file = "coincurve-20.0.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f2895d032e281c4e747947aae4bcfeef7c57eabfd9be22886c0ca4e1365c7c1f"},
+ {file = "coincurve-20.0.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d3e2f21957ada0e1742edbde117bb41758fa8691b69c8d186c23e9e522ea71cd"},
+ {file = "coincurve-20.0.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:c2baa26b1aad1947ca07b3aa9e6a98940c5141c6bdd0f9b44d89e36da7282ffa"},
+ {file = "coincurve-20.0.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:7eacc7944ddf9e2b7448ecbe84753841ab9874b8c332a4f5cc3b2f184db9f4a2"},
+ {file = "coincurve-20.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:c293c095dc690178b822cadaaeb81de3cc0d28f8bdf8216ed23551dcce153a26"},
+ {file = "coincurve-20.0.0-cp39-cp39-win_arm64.whl", hash = "sha256:11a47083a0b7092d3eb50929f74ffd947c4a5e7035796b81310ea85289088c7a"},
+ {file = "coincurve-20.0.0.tar.gz", hash = "sha256:872419e404300302e938849b6b92a196fabdad651060b559dc310e52f8392829"},
+]
+
+[package.dependencies]
+asn1crypto = "*"
+cffi = ">=1.3.0"
+
+[package.extras]
+dev = ["coverage", "pytest", "pytest-benchmark"]
+
[[package]]
name = "colorama"
version = "0.4.6"
@@ -853,6 +1461,17 @@ mypy = ["contourpy[bokeh,docs]", "docutils-stubs", "mypy (==1.11.1)", "types-Pil
test = ["Pillow", "contourpy[test-no-images]", "matplotlib"]
test-no-images = ["pytest", "pytest-cov", "pytest-rerunfailures", "pytest-xdist", "wurlitzer"]
+[[package]]
+name = "crcmod"
+version = "1.7"
+description = "CRC Generator"
+optional = false
+python-versions = "*"
+groups = ["main"]
+files = [
+ {file = "crcmod-1.7.tar.gz", hash = "sha256:dc7051a0db5f2bd48665a990d3ec1cc305a466a77358ca4492826f41f283601e"},
+]
+
[[package]]
name = "cryptography"
version = "44.0.2"
@@ -927,6 +1546,123 @@ files = [
docs = ["ipython", "matplotlib", "numpydoc", "sphinx"]
tests = ["pytest", "pytest-cov", "pytest-xdist"]
+[[package]]
+name = "cytoolz"
+version = "1.0.1"
+description = "Cython implementation of Toolz: High performance functional utilities"
+optional = false
+python-versions = ">=3.8"
+groups = ["main"]
+markers = "implementation_name == \"cpython\""
+files = [
+ {file = "cytoolz-1.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:cec9af61f71fc3853eb5dca3d42eb07d1f48a4599fa502cbe92adde85f74b042"},
+ {file = "cytoolz-1.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:140bbd649dbda01e91add7642149a5987a7c3ccc251f2263de894b89f50b6608"},
+ {file = "cytoolz-1.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e90124bdc42ff58b88cdea1d24a6bc5f776414a314cc4d94f25c88badb3a16d1"},
+ {file = "cytoolz-1.0.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e74801b751e28f7c5cc3ad264c123954a051f546f2fdfe089f5aa7a12ccfa6da"},
+ {file = "cytoolz-1.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:582dad4545ddfb5127494ef23f3fa4855f1673a35d50c66f7638e9fb49805089"},
+ {file = "cytoolz-1.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd7bd0618e16efe03bd12f19c2a26a27e6e6b75d7105adb7be1cd2a53fa755d8"},
+ {file = "cytoolz-1.0.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d74cca6acf1c4af58b2e4a89cc565ed61c5e201de2e434748c93e5a0f5c541a5"},
+ {file = "cytoolz-1.0.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:823a3763828d8d457f542b2a45d75d6b4ced5e470b5c7cf2ed66a02f508ed442"},
+ {file = "cytoolz-1.0.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:51633a14e6844c61db1d68c1ffd077cf949f5c99c60ed5f1e265b9e2966f1b52"},
+ {file = "cytoolz-1.0.1-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:f3ec9b01c45348f1d0d712507d54c2bfd69c62fbd7c9ef555c9d8298693c2432"},
+ {file = "cytoolz-1.0.1-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:1855022b712a9c7a5bce354517ab4727a38095f81e2d23d3eabaf1daeb6a3b3c"},
+ {file = "cytoolz-1.0.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:9930f7288c4866a1dc1cc87174f0c6ff4cad1671eb1f6306808aa6c445857d78"},
+ {file = "cytoolz-1.0.1-cp310-cp310-win32.whl", hash = "sha256:a9baad795d72fadc3445ccd0f122abfdbdf94269157e6d6d4835636dad318804"},
+ {file = "cytoolz-1.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:ad95b386a84e18e1f6136f6d343d2509d4c3aae9f5a536f3dc96808fcc56a8cf"},
+ {file = "cytoolz-1.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2d958d4f04d9d7018e5c1850790d9d8e68b31c9a2deebca74b903706fdddd2b6"},
+ {file = "cytoolz-1.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:0f445b8b731fc0ecb1865b8e68a070084eb95d735d04f5b6c851db2daf3048ab"},
+ {file = "cytoolz-1.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1f546a96460a7e28eb2ec439f4664fa646c9b3e51c6ebad9a59d3922bbe65e30"},
+ {file = "cytoolz-1.0.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0317681dd065532d21836f860b0563b199ee716f55d0c1f10de3ce7100c78a3b"},
+ {file = "cytoolz-1.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0c0ef52febd5a7821a3fd8d10f21d460d1a3d2992f724ba9c91fbd7a96745d41"},
+ {file = "cytoolz-1.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f5ebaf419acf2de73b643cf96108702b8aef8e825cf4f63209ceb078d5fbbbfd"},
+ {file = "cytoolz-1.0.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5f7f04eeb4088947585c92d6185a618b25ad4a0f8f66ea30c8db83cf94a425e3"},
+ {file = "cytoolz-1.0.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:f61928803bb501c17914b82d457c6f50fe838b173fb40d39c38d5961185bd6c7"},
+ {file = "cytoolz-1.0.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:d2960cb4fa01ccb985ad1280db41f90dc97a80b397af970a15d5a5de403c8c61"},
+ {file = "cytoolz-1.0.1-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:b2b407cc3e9defa8df5eb46644f6f136586f70ba49eba96f43de67b9a0984fd3"},
+ {file = "cytoolz-1.0.1-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:8245f929144d4d3bd7b972c9593300195c6cea246b81b4c46053c48b3f044580"},
+ {file = "cytoolz-1.0.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:e37385db03af65763933befe89fa70faf25301effc3b0485fec1c15d4ce4f052"},
+ {file = "cytoolz-1.0.1-cp311-cp311-win32.whl", hash = "sha256:50f9c530f83e3e574fc95c264c3350adde8145f4f8fc8099f65f00cc595e5ead"},
+ {file = "cytoolz-1.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:b7f6b617454b4326af7bd3c7c49b0fc80767f134eb9fd6449917a058d17a0e3c"},
+ {file = "cytoolz-1.0.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:fcb8f7d0d65db1269022e7e0428471edee8c937bc288ebdcb72f13eaa67c2fe4"},
+ {file = "cytoolz-1.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:207d4e4b445e087e65556196ff472ff134370d9a275d591724142e255f384662"},
+ {file = "cytoolz-1.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21cdf6bac6fd843f3b20280a66fd8df20dea4c58eb7214a2cd8957ec176f0bb3"},
+ {file = "cytoolz-1.0.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4a55ec098036c0dea9f3bdc021f8acd9d105a945227d0811589f0573f21c9ce1"},
+ {file = "cytoolz-1.0.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a13ab79ff4ce202e03ab646a2134696988b554b6dc4b71451e948403db1331d8"},
+ {file = "cytoolz-1.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4e2d944799026e1ff08a83241f1027a2d9276c41f7a74224cd98b7df6e03957d"},
+ {file = "cytoolz-1.0.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:88ba85834cd523b91fdf10325e1e6d71c798de36ea9bdc187ca7bd146420de6f"},
+ {file = "cytoolz-1.0.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:5a750b1af7e8bf6727f588940b690d69e25dc47cce5ce467925a76561317eaf7"},
+ {file = "cytoolz-1.0.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:44a71870f7eae31d263d08b87da7c2bf1176f78892ed8bdade2c2850478cb126"},
+ {file = "cytoolz-1.0.1-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:c8231b9abbd8e368e036f4cc2e16902c9482d4cf9e02a6147ed0e9a3cd4a9ab0"},
+ {file = "cytoolz-1.0.1-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:aa87599ccc755de5a096a4d6c34984de6cd9dc928a0c5eaa7607457317aeaf9b"},
+ {file = "cytoolz-1.0.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:67cd16537df51baabde3baa770ab7b8d16839c4d21219d5b96ac59fb012ebd2d"},
+ {file = "cytoolz-1.0.1-cp312-cp312-win32.whl", hash = "sha256:fb988c333f05ee30ad4693fe4da55d95ec0bb05775d2b60191236493ea2e01f9"},
+ {file = "cytoolz-1.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:8f89c48d8e5aec55ffd566a8ec858706d70ed0c6a50228eca30986bfa5b4da8b"},
+ {file = "cytoolz-1.0.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:6944bb93b287032a4c5ca6879b69bcd07df46f3079cf8393958cf0b0454f50c0"},
+ {file = "cytoolz-1.0.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:e027260fd2fc5cb041277158ac294fc13dca640714527219f702fb459a59823a"},
+ {file = "cytoolz-1.0.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:88662c0e07250d26f5af9bc95911e6137e124a5c1ec2ce4a5d74de96718ab242"},
+ {file = "cytoolz-1.0.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:309dffa78b0961b4c0cf55674b828fbbc793cf2d816277a5c8293c0c16155296"},
+ {file = "cytoolz-1.0.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:edb34246e6eb40343c5860fc51b24937698e4fa1ee415917a73ad772a9a1746b"},
+ {file = "cytoolz-1.0.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0a54da7a8e4348a18d45d4d5bc84af6c716d7f131113a4f1cc45569d37edff1b"},
+ {file = "cytoolz-1.0.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:241c679c3b1913c0f7259cf1d9639bed5084c86d0051641d537a0980548aa266"},
+ {file = "cytoolz-1.0.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:5bfc860251a8f280ac79696fc3343cfc3a7c30b94199e0240b6c9e5b6b01a2a5"},
+ {file = "cytoolz-1.0.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:c8edd1547014050c1bdad3ff85d25c82bd1c2a3c96830c6181521eb78b9a42b3"},
+ {file = "cytoolz-1.0.1-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:b349bf6162e8de215403d7f35f8a9b4b1853dc2a48e6e1a609a5b1a16868b296"},
+ {file = "cytoolz-1.0.1-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:1b18b35256219b6c3dd0fa037741b85d0bea39c552eab0775816e85a52834140"},
+ {file = "cytoolz-1.0.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:738b2350f340ff8af883eb301054eb724997f795d20d90daec7911c389d61581"},
+ {file = "cytoolz-1.0.1-cp313-cp313-win32.whl", hash = "sha256:9cbd9c103df54fcca42be55ef40e7baea624ac30ee0b8bf1149f21146d1078d9"},
+ {file = "cytoolz-1.0.1-cp313-cp313-win_amd64.whl", hash = "sha256:90e577e08d3a4308186d9e1ec06876d4756b1e8164b92971c69739ea17e15297"},
+ {file = "cytoolz-1.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f3a509e4ac8e711703c368476b9bbce921fcef6ebb87fa3501525f7000e44185"},
+ {file = "cytoolz-1.0.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:a7eecab6373e933dfbf4fdc0601d8fd7614f8de76793912a103b5fccf98170cd"},
+ {file = "cytoolz-1.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e55ed62087f6e3e30917b5f55350c3b6be6470b849c6566018419cd159d2cebc"},
+ {file = "cytoolz-1.0.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:43de33d99a4ccc07234cecd81f385456b55b0ea9c39c9eebf42f024c313728a5"},
+ {file = "cytoolz-1.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:139bed875828e1727018aa0982aa140e055cbafccb7fd89faf45cbb4f2a21514"},
+ {file = "cytoolz-1.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:22c12671194b518aa8ce2f4422bd5064f25ab57f410ba0b78705d0a219f4a97a"},
+ {file = "cytoolz-1.0.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:79888f2f7dc25709cd5d37b032a8833741e6a3692c8823be181d542b5999128e"},
+ {file = "cytoolz-1.0.1-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:51628b4eb41fa25bd428f8f7b5b74fbb05f3ae65fbd265019a0dd1ded4fdf12a"},
+ {file = "cytoolz-1.0.1-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:1db9eb7179285403d2fb56ba1ff6ec35a44921b5e2fa5ca19d69f3f9f0285ea5"},
+ {file = "cytoolz-1.0.1-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:08ab7efae08e55812340bfd1b3f09f63848fe291675e2105eab1aa5327d3a16e"},
+ {file = "cytoolz-1.0.1-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:e5fdc5264f884e7c0a1711a81dff112708a64b9c8561654ee578bfdccec6be09"},
+ {file = "cytoolz-1.0.1-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:90d6a2e6ab891043ee655ec99d5e77455a9bee9e1131bdfcfb745edde81200dd"},
+ {file = "cytoolz-1.0.1-cp38-cp38-win32.whl", hash = "sha256:08946e083faa5147751b34fbf78ab931f149ef758af5c1092932b459e18dcf5c"},
+ {file = "cytoolz-1.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:a91b4e10a9c03796c0dc93e47ebe25bb41ecc6fafc3cf5197c603cf767a3d44d"},
+ {file = "cytoolz-1.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:980c323e626ba298b77ae62871b2de7c50b9d7219e2ddf706f52dd34b8be7349"},
+ {file = "cytoolz-1.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:45f6fa1b512bc2a0f2de5123db932df06c7f69d12874fe06d67772b2828e2c8b"},
+ {file = "cytoolz-1.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f93f42d9100c415155ad1f71b0de362541afd4ac95e3153467c4c79972521b6b"},
+ {file = "cytoolz-1.0.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a76d20dec9c090cdf4746255bbf06a762e8cc29b5c9c1d138c380bbdb3122ade"},
+ {file = "cytoolz-1.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:239039585487c69aa50c5b78f6a422016297e9dea39755761202fb9f0530fe87"},
+ {file = "cytoolz-1.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c28307640ca2ab57b9fbf0a834b9bf563958cd9e038378c3a559f45f13c3c541"},
+ {file = "cytoolz-1.0.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:454880477bb901cee3a60f6324ec48c95d45acc7fecbaa9d49a5af737ded0595"},
+ {file = "cytoolz-1.0.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:902115d1b1f360fd81e44def30ac309b8641661150fcbdde18ead446982ada6a"},
+ {file = "cytoolz-1.0.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:e68e6b38473a3a79cee431baa22be31cac39f7df1bf23eaa737eaff42e213883"},
+ {file = "cytoolz-1.0.1-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:32fba3f63fcb76095b0a22f4bdcc22bc62a2bd2d28d58bf02fd21754c155a3ec"},
+ {file = "cytoolz-1.0.1-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:0724ba4cf41eb40b6cf75250820ab069e44bdf4183ff78857aaf4f0061551075"},
+ {file = "cytoolz-1.0.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:c42420e0686f887040d5230420ed44f0e960ccbfa29a0d65a3acd9ca52459209"},
+ {file = "cytoolz-1.0.1-cp39-cp39-win32.whl", hash = "sha256:4ba8b16358ea56b1fe8e637ec421e36580866f2e787910bac1cf0a6997424a34"},
+ {file = "cytoolz-1.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:92d27f84bf44586853d9562bfa3610ecec000149d030f793b4cb614fd9da1813"},
+ {file = "cytoolz-1.0.1-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:83d19d55738ad9c60763b94f3f6d3c6e4de979aeb8d76841c1401081e0e58d96"},
+ {file = "cytoolz-1.0.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f112a71fad6ea824578e6393765ce5c054603afe1471a5c753ff6c67fd872d10"},
+ {file = "cytoolz-1.0.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5a515df8f8aa6e1eaaf397761a6e4aff2eef73b5f920aedf271416d5471ae5ee"},
+ {file = "cytoolz-1.0.1-pp310-pypy310_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:92c398e7b7023460bea2edffe5fcd0a76029580f06c3f6938ac3d198b47156f3"},
+ {file = "cytoolz-1.0.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:3237e56211e03b13df47435b2369f5df281e02b04ad80a948ebd199b7bc10a47"},
+ {file = "cytoolz-1.0.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:ba0d1da50aab1909b165f615ba1125c8b01fcc30d606c42a61c42ea0269b5e2c"},
+ {file = "cytoolz-1.0.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25b6e8dec29aa5a390092d193abd673e027d2c0b50774ae816a31454286c45c7"},
+ {file = "cytoolz-1.0.1-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:36cd6989ebb2f18fe9af8f13e3c61064b9f741a40d83dc5afeb0322338ad25f2"},
+ {file = "cytoolz-1.0.1-pp38-pypy38_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a47394f8ab7fca3201f40de61fdeea20a2baffb101485ae14901ea89c3f6c95d"},
+ {file = "cytoolz-1.0.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:d00ac423542af944302e034e618fb055a0c4e87ba704cd6a79eacfa6ac83a3c9"},
+ {file = "cytoolz-1.0.1-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:a5ca923d1fa632f7a4fb33c0766c6fba7f87141a055c305c3e47e256fb99c413"},
+ {file = "cytoolz-1.0.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:058bf996bcae9aad3acaeeb937d42e0c77c081081e67e24e9578a6a353cb7fb2"},
+ {file = "cytoolz-1.0.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:69e2a1f41a3dad94a17aef4a5cc003323359b9f0a9d63d4cc867cb5690a2551d"},
+ {file = "cytoolz-1.0.1-pp39-pypy39_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:67daeeeadb012ec2b59d63cb29c4f2a2023b0c4957c3342d354b8bb44b209e9a"},
+ {file = "cytoolz-1.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:54d3d36bbf0d4344d1afa22c58725d1668e30ff9de3a8f56b03db1a6da0acb11"},
+ {file = "cytoolz-1.0.1.tar.gz", hash = "sha256:89cc3161b89e1bb3ed7636f74ed2e55984fd35516904fc878cae216e42b2c7d6"},
+]
+
+[package.dependencies]
+toolz = ">=0.8.0"
+
+[package.extras]
+cython = ["cython"]
+
[[package]]
name = "dataclasses-json"
version = "0.6.7"
@@ -983,7 +1719,7 @@ version = "0.3.9"
description = "Distribution utilities"
optional = false
python-versions = "*"
-groups = ["dev"]
+groups = ["main", "dev"]
files = [
{file = "distlib-0.3.9-py2.py3-none-any.whl", hash = "sha256:47f8c22fd27c27e25a65601af709b38e4f0a45ea4fc2e710f65755fa8caaaf87"},
{file = "distlib-0.3.9.tar.gz", hash = "sha256:a60f20dea646b8a33f3e7772f74dc0b2d0772d2837ee1342a00645c81edf9403"},
@@ -1019,7 +1755,7 @@ version = "0.19.1"
description = "ECDSA cryptographic signature library (pure python)"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.6"
-groups = ["demo"]
+groups = ["main", "demo"]
files = [
{file = "ecdsa-0.19.1-py2.py3-none-any.whl", hash = "sha256:30638e27cf77b7e15c4c4cc1973720149e1033827cfd00661ca5c8cc0cdb24c3"},
{file = "ecdsa-0.19.1.tar.gz", hash = "sha256:478cba7b62555866fcb3bb3fe985e06decbdb68ef55713c4e5ab98c57d508e61"},
@@ -1032,6 +1768,219 @@ six = ">=1.9.0"
gmpy = ["gmpy"]
gmpy2 = ["gmpy2"]
+[[package]]
+name = "ed25519-blake2b"
+version = "1.4.1"
+description = "Ed25519 public-key signatures (BLAKE2b fork)"
+optional = false
+python-versions = "*"
+groups = ["main"]
+files = [
+ {file = "ed25519-blake2b-1.4.1.tar.gz", hash = "sha256:731e9f93cd1ac1a64649575f3519a99ffe0bb1e4cf7bf5f5f0be513a39df7363"},
+]
+
+[[package]]
+name = "egcd"
+version = "2.0.2"
+description = "Pure-Python extended Euclidean algorithm implementation that accepts any number of integer arguments."
+optional = false
+python-versions = ">=3.7"
+groups = ["main"]
+files = [
+ {file = "egcd-2.0.2-py3-none-any.whl", hash = "sha256:2f0576a651b4aa9e9c4640bba078f9741d1624f386b55cb5363a79ae4b564bd2"},
+ {file = "egcd-2.0.2.tar.gz", hash = "sha256:3b05b0feb67549f8f76c97afed36c53252c0d7cb9a65bf4e6ca8b99110fb77f2"},
+]
+
+[package.extras]
+coveralls = ["coveralls (>=4.0,<5.0)"]
+docs = ["sphinx (>=5.0,<6.0)", "sphinx-rtd-theme (>=1.1.0,<1.2.0)", "toml (>=0.10.2,<0.11.0)"]
+lint = ["pylint (>=2.17.0,<2.18.0) ; python_version < \"3.12\"", "pylint (>=3.2.0,<3.3.0) ; python_version >= \"3.12\""]
+publish = ["build (>=0.10,<1.0)", "twine (>=4.0,<5.0)"]
+test = ["pytest (>=7.4,<8.0) ; python_version < \"3.12\"", "pytest (>=8.2,<9.0) ; python_version >= \"3.12\"", "pytest-cov (>=4.1,<5.0) ; python_version < \"3.12\"", "pytest-cov (>=5.0,<6.0) ; python_version >= \"3.12\""]
+
+[[package]]
+name = "eth-abi"
+version = "5.2.0"
+description = "eth_abi: Python utilities for working with Ethereum ABI definitions, especially encoding and decoding"
+optional = false
+python-versions = "<4,>=3.8"
+groups = ["main"]
+files = [
+ {file = "eth_abi-5.2.0-py3-none-any.whl", hash = "sha256:17abe47560ad753f18054f5b3089fcb588f3e3a092136a416b6c1502cb7e8877"},
+ {file = "eth_abi-5.2.0.tar.gz", hash = "sha256:178703fa98c07d8eecd5ae569e7e8d159e493ebb6eeb534a8fe973fbc4e40ef0"},
+]
+
+[package.dependencies]
+eth-typing = ">=3.0.0"
+eth-utils = ">=2.0.0"
+parsimonious = ">=0.10.0,<0.11.0"
+
+[package.extras]
+dev = ["build (>=0.9.0)", "bump_my_version (>=0.19.0)", "eth-hash[pycryptodome]", "hypothesis (>=6.22.0,<6.108.7)", "ipython", "mypy (==1.10.0)", "pre-commit (>=3.4.0)", "pytest (>=7.0.0)", "pytest-pythonpath (>=0.7.1)", "pytest-timeout (>=2.0.0)", "pytest-xdist (>=2.4.0)", "sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)", "tox (>=4.0.0)", "twine", "wheel"]
+docs = ["sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)"]
+test = ["eth-hash[pycryptodome]", "hypothesis (>=6.22.0,<6.108.7)", "pytest (>=7.0.0)", "pytest-pythonpath (>=0.7.1)", "pytest-timeout (>=2.0.0)", "pytest-xdist (>=2.4.0)"]
+tools = ["hypothesis (>=6.22.0,<6.108.7)"]
+
+[[package]]
+name = "eth-account"
+version = "0.13.6"
+description = "eth-account: Sign Ethereum transactions and messages with local private keys"
+optional = false
+python-versions = "<4,>=3.8"
+groups = ["main"]
+files = [
+ {file = "eth_account-0.13.6-py3-none-any.whl", hash = "sha256:27b8c86e134ab10adec5022b55c8005f9fbdccba8b99bd318e45aa56863e1416"},
+ {file = "eth_account-0.13.6.tar.gz", hash = "sha256:e496cc4c50fe4e22972f720fda4c13e126e5636d0274163888eb27f08530ac61"},
+]
+
+[package.dependencies]
+bitarray = ">=2.4.0"
+ckzg = ">=2.0.0"
+eth-abi = ">=4.0.0-b.2"
+eth-keyfile = ">=0.7.0,<0.9.0"
+eth-keys = ">=0.4.0"
+eth-rlp = ">=2.1.0"
+eth-utils = ">=2.0.0"
+hexbytes = ">=1.2.0"
+pydantic = ">=2.0.0"
+rlp = ">=1.0.0"
+
+[package.extras]
+dev = ["build (>=0.9.0)", "bump_my_version (>=0.19.0)", "coverage", "hypothesis (>=6.22.0,<6.108.7)", "ipython", "mypy (==1.10.0)", "pre-commit (>=3.4.0)", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)", "sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)", "tox (>=4.0.0)", "twine", "wheel"]
+docs = ["sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)"]
+test = ["coverage", "hypothesis (>=6.22.0,<6.108.7)", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)"]
+
+[[package]]
+name = "eth-hash"
+version = "0.7.1"
+description = "eth-hash: The Ethereum hashing function, keccak256, sometimes (erroneously) called sha3"
+optional = false
+python-versions = "<4,>=3.8"
+groups = ["main"]
+files = [
+ {file = "eth_hash-0.7.1-py3-none-any.whl", hash = "sha256:0fb1add2adf99ef28883fd6228eb447ef519ea72933535ad1a0b28c6f65f868a"},
+ {file = "eth_hash-0.7.1.tar.gz", hash = "sha256:d2411a403a0b0a62e8247b4117932d900ffb4c8c64b15f92620547ca5ce46be5"},
+]
+
+[package.dependencies]
+pycryptodome = {version = ">=3.6.6,<4", optional = true, markers = "extra == \"pycryptodome\""}
+
+[package.extras]
+dev = ["build (>=0.9.0)", "bump_my_version (>=0.19.0)", "ipython", "mypy (==1.10.0)", "pre-commit (>=3.4.0)", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)", "sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)", "tox (>=4.0.0)", "twine", "wheel"]
+docs = ["sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)"]
+pycryptodome = ["pycryptodome (>=3.6.6,<4)"]
+pysha3 = ["pysha3 (>=1.0.0,<2.0.0) ; python_version < \"3.9\"", "safe-pysha3 (>=1.0.0) ; python_version >= \"3.9\""]
+test = ["pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)"]
+
+[[package]]
+name = "eth-keyfile"
+version = "0.8.1"
+description = "eth-keyfile: A library for handling the encrypted keyfiles used to store ethereum private keys"
+optional = false
+python-versions = "<4,>=3.8"
+groups = ["main"]
+files = [
+ {file = "eth_keyfile-0.8.1-py3-none-any.whl", hash = "sha256:65387378b82fe7e86d7cb9f8d98e6d639142661b2f6f490629da09fddbef6d64"},
+ {file = "eth_keyfile-0.8.1.tar.gz", hash = "sha256:9708bc31f386b52cca0969238ff35b1ac72bd7a7186f2a84b86110d3c973bec1"},
+]
+
+[package.dependencies]
+eth-keys = ">=0.4.0"
+eth-utils = ">=2"
+pycryptodome = ">=3.6.6,<4"
+
+[package.extras]
+dev = ["build (>=0.9.0)", "bumpversion (>=0.5.3)", "ipython", "pre-commit (>=3.4.0)", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)", "towncrier (>=21,<22)", "tox (>=4.0.0)", "twine", "wheel"]
+docs = ["towncrier (>=21,<22)"]
+test = ["pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)"]
+
+[[package]]
+name = "eth-keys"
+version = "0.7.0"
+description = "eth-keys: Common API for Ethereum key operations"
+optional = false
+python-versions = "<4,>=3.8"
+groups = ["main"]
+files = [
+ {file = "eth_keys-0.7.0-py3-none-any.whl", hash = "sha256:b0cdda8ffe8e5ba69c7c5ca33f153828edcace844f67aabd4542d7de38b159cf"},
+ {file = "eth_keys-0.7.0.tar.gz", hash = "sha256:79d24fd876201df67741de3e3fefb3f4dbcbb6ace66e47e6fe662851a4547814"},
+]
+
+[package.dependencies]
+eth-typing = ">=3"
+eth-utils = ">=2"
+
+[package.extras]
+coincurve = ["coincurve (>=17.0.0)"]
+dev = ["asn1tools (>=0.146.2)", "build (>=0.9.0)", "bump_my_version (>=0.19.0)", "coincurve (>=17.0.0)", "eth-hash[pysha3]", "factory-boy (>=3.0.1)", "hypothesis (>=5.10.3)", "ipython", "mypy (==1.10.0)", "pre-commit (>=3.4.0)", "pyasn1 (>=0.4.5)", "pytest (>=7.0.0)", "towncrier (>=24,<25)", "tox (>=4.0.0)", "twine", "wheel"]
+docs = ["towncrier (>=24,<25)"]
+test = ["asn1tools (>=0.146.2)", "eth-hash[pysha3]", "factory-boy (>=3.0.1)", "hypothesis (>=5.10.3)", "pyasn1 (>=0.4.5)", "pytest (>=7.0.0)"]
+
+[[package]]
+name = "eth-rlp"
+version = "2.2.0"
+description = "eth-rlp: RLP definitions for common Ethereum objects in Python"
+optional = false
+python-versions = "<4,>=3.8"
+groups = ["main"]
+files = [
+ {file = "eth_rlp-2.2.0-py3-none-any.whl", hash = "sha256:5692d595a741fbaef1203db6a2fedffbd2506d31455a6ad378c8449ee5985c47"},
+ {file = "eth_rlp-2.2.0.tar.gz", hash = "sha256:5e4b2eb1b8213e303d6a232dfe35ab8c29e2d3051b86e8d359def80cd21db83d"},
+]
+
+[package.dependencies]
+eth-utils = ">=2.0.0"
+hexbytes = ">=1.2.0"
+rlp = ">=0.6.0"
+
+[package.extras]
+dev = ["build (>=0.9.0)", "bump_my_version (>=0.19.0)", "eth-hash[pycryptodome]", "ipython", "mypy (==1.10.0)", "pre-commit (>=3.4.0)", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)", "sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)", "tox (>=4.0.0)", "twine", "wheel"]
+docs = ["sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)"]
+test = ["eth-hash[pycryptodome]", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)"]
+
+[[package]]
+name = "eth-typing"
+version = "5.2.0"
+description = "eth-typing: Common type annotations for ethereum python packages"
+optional = false
+python-versions = "<4,>=3.8"
+groups = ["main"]
+files = [
+ {file = "eth_typing-5.2.0-py3-none-any.whl", hash = "sha256:e1f424e97990fc3c6a1c05a7b0968caed4e20e9c99a4d5f4db3df418e25ddc80"},
+ {file = "eth_typing-5.2.0.tar.gz", hash = "sha256:28685f7e2270ea0d209b75bdef76d8ecef27703e1a16399f6929820d05071c28"},
+]
+
+[package.dependencies]
+typing_extensions = ">=4.5.0"
+
+[package.extras]
+dev = ["build (>=0.9.0)", "bump_my_version (>=0.19.0)", "ipython", "mypy (==1.10.0)", "pre-commit (>=3.4.0)", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)", "sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)", "tox (>=4.0.0)", "twine", "wheel"]
+docs = ["sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)"]
+test = ["pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)"]
+
+[[package]]
+name = "eth-utils"
+version = "5.2.0"
+description = "eth-utils: Common utility functions for python code that interacts with Ethereum"
+optional = false
+python-versions = "<4,>=3.8"
+groups = ["main"]
+files = [
+ {file = "eth_utils-5.2.0-py3-none-any.whl", hash = "sha256:4d43eeb6720e89a042ad5b28d4b2111630ae764f444b85cbafb708d7f076da10"},
+ {file = "eth_utils-5.2.0.tar.gz", hash = "sha256:17e474eb654df6e18f20797b22c6caabb77415a996b3ba0f3cc8df3437463134"},
+]
+
+[package.dependencies]
+cytoolz = {version = ">=0.10.1", markers = "implementation_name == \"cpython\""}
+eth-hash = ">=0.3.1"
+eth-typing = ">=5.0.0"
+toolz = {version = ">0.8.2", markers = "implementation_name == \"pypy\""}
+
+[package.extras]
+dev = ["build (>=0.9.0)", "bump-my-version (>=0.19.0)", "eth-hash[pycryptodome]", "hypothesis (>=4.43.0)", "ipython", "mypy (==1.10.0)", "pre-commit (>=3.4.0)", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)", "sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx-rtd-theme (>=1.0.0)", "towncrier (>=24,<25)", "tox (>=4.0.0)", "twine", "wheel"]
+docs = ["sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx-rtd-theme (>=1.0.0)", "towncrier (>=24,<25)"]
+test = ["hypothesis (>=4.43.0)", "mypy (==1.10.0)", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)"]
+
[[package]]
name = "faiss-cpu"
version = "1.10.0"
@@ -1126,19 +2075,19 @@ sgmllib3k = "*"
[[package]]
name = "filelock"
-version = "3.18.0"
+version = "3.16.1"
description = "A platform independent file lock."
optional = false
-python-versions = ">=3.9"
+python-versions = ">=3.8"
groups = ["main", "dev"]
files = [
- {file = "filelock-3.18.0-py3-none-any.whl", hash = "sha256:c401f4f8377c4464e6db25fff06205fd89bdd83b65eb0488ed1b160f780e21de"},
- {file = "filelock-3.18.0.tar.gz", hash = "sha256:adbc88eabb99d2fec8c9c1b229b171f18afa655400173ddc653d5d01501fb9f2"},
+ {file = "filelock-3.16.1-py3-none-any.whl", hash = "sha256:2082e5703d51fbf98ea75855d9d5527e33d8ff23099bec374a134febee6946b0"},
+ {file = "filelock-3.16.1.tar.gz", hash = "sha256:c249fbfcd5db47e5e2d6d62198e565475ee65e4831e2561c8e313fa7eb961435"},
]
[package.extras]
-docs = ["furo (>=2024.8.6)", "sphinx (>=8.1.3)", "sphinx-autodoc-typehints (>=3)"]
-testing = ["covdefaults (>=2.3)", "coverage (>=7.6.10)", "diff-cover (>=9.2.1)", "pytest (>=8.3.4)", "pytest-asyncio (>=0.25.2)", "pytest-cov (>=6)", "pytest-mock (>=3.14)", "pytest-timeout (>=2.3.1)", "virtualenv (>=20.28.1)"]
+docs = ["furo (>=2024.8.6)", "sphinx (>=8.0.2)", "sphinx-autodoc-typehints (>=2.4.1)"]
+testing = ["covdefaults (>=2.3)", "coverage (>=7.6.1)", "diff-cover (>=9.2)", "pytest (>=8.3.3)", "pytest-asyncio (>=0.24)", "pytest-cov (>=5)", "pytest-mock (>=3.14)", "pytest-timeout (>=2.3.1)", "virtualenv (>=20.26.4)"]
typing = ["typing-extensions (>=4.12.2) ; python_version < \"3.11\""]
[[package]]
@@ -1734,6 +2683,23 @@ files = [
{file = "h11-0.14.0.tar.gz", hash = "sha256:8f19fbbe99e72420ff35c00b27a34cb9937e902a8b810e2c88300c6f0a3b699d"},
]
+[[package]]
+name = "hexbytes"
+version = "1.3.0"
+description = "hexbytes: Python `bytes` subclass that decodes hex, with a readable console output"
+optional = false
+python-versions = "<4,>=3.8"
+groups = ["main"]
+files = [
+ {file = "hexbytes-1.3.0-py3-none-any.whl", hash = "sha256:83720b529c6e15ed21627962938dc2dec9bb1010f17bbbd66bf1e6a8287d522c"},
+ {file = "hexbytes-1.3.0.tar.gz", hash = "sha256:4a61840c24b0909a6534350e2d28ee50159ca1c9e89ce275fd31c110312cf684"},
+]
+
+[package.extras]
+dev = ["build (>=0.9.0)", "bump_my_version (>=0.19.0)", "eth_utils (>=2.0.0)", "hypothesis (>=3.44.24,<=6.31.6)", "ipython", "mypy (==1.10.0)", "pre-commit (>=3.4.0)", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)", "sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)", "tox (>=4.0.0)", "twine", "wheel"]
+docs = ["sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)"]
+test = ["eth_utils (>=2.0.0)", "hypothesis (>=3.44.24,<=6.31.6)", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)"]
+
[[package]]
name = "httpcore"
version = "1.0.7"
@@ -2056,6 +3022,43 @@ files = [
{file = "jsonpointer-3.0.0.tar.gz", hash = "sha256:2b2d729f2091522d61c3b31f82e11870f60b68f43fbc705cb76bf4b832af59ef"},
]
+[[package]]
+name = "jsonschema"
+version = "4.23.0"
+description = "An implementation of JSON Schema validation for Python"
+optional = false
+python-versions = ">=3.8"
+groups = ["main"]
+files = [
+ {file = "jsonschema-4.23.0-py3-none-any.whl", hash = "sha256:fbadb6f8b144a8f8cf9f0b89ba94501d143e50411a1278633f56a7acf7fd5566"},
+ {file = "jsonschema-4.23.0.tar.gz", hash = "sha256:d71497fef26351a33265337fa77ffeb82423f3ea21283cd9467bb03999266bc4"},
+]
+
+[package.dependencies]
+attrs = ">=22.2.0"
+jsonschema-specifications = ">=2023.03.6"
+referencing = ">=0.28.4"
+rpds-py = ">=0.7.1"
+
+[package.extras]
+format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
+format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=24.6.0)"]
+
+[[package]]
+name = "jsonschema-specifications"
+version = "2024.10.1"
+description = "The JSON Schema meta-schemas and vocabularies, exposed as a Registry"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "jsonschema_specifications-2024.10.1-py3-none-any.whl", hash = "sha256:a09a0680616357d9a0ecf05c12ad234479f549239d0f5b55f3deea67475da9bf"},
+ {file = "jsonschema_specifications-2024.10.1.tar.gz", hash = "sha256:0f38b83639958ce1152d02a7f062902c41c8fd20d558b0c34344292d417ae272"},
+]
+
+[package.dependencies]
+referencing = ">=0.31.0"
+
[[package]]
name = "kiwisolver"
version = "1.4.8"
@@ -2231,14 +3234,14 @@ tenacity = ">=8.1.0,<8.4.0 || >8.4.0,<10"
[[package]]
name = "langchain-core"
-version = "0.3.51"
+version = "0.3.55"
description = "Building applications with LLMs through composability"
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
- {file = "langchain_core-0.3.51-py3-none-any.whl", hash = "sha256:4bd71e8acd45362aa428953f2a91d8162318014544a2216e4b769463caf68e13"},
- {file = "langchain_core-0.3.51.tar.gz", hash = "sha256:db76b9cc331411602cb40ba0469a161febe7a0663fbcaddbc9056046ac2d22f4"},
+ {file = "langchain_core-0.3.55-py3-none-any.whl", hash = "sha256:b3cb36bf37755a616158a79866657c6697b43a2f7c69dd723ce425f1c76c1baa"},
+ {file = "langchain_core-0.3.55.tar.gz", hash = "sha256:0f2b3e311621116a83510c70b0ac9d959030a0a457a69483535cff18501fedc9"},
]
[package.dependencies]
@@ -2306,6 +3309,23 @@ sentence-transformers = ">=2.6.0"
tokenizers = ">=0.19.1"
transformers = ">=4.39.0"
+[[package]]
+name = "langchain-openai"
+version = "0.3.14"
+description = "An integration package connecting OpenAI and LangChain"
+optional = false
+python-versions = "<4.0,>=3.9"
+groups = ["main"]
+files = [
+ {file = "langchain_openai-0.3.14-py3-none-any.whl", hash = "sha256:b8e648d2d7678a5540818199d141ff727c6f1514294b3e1e999a95357c9d66a0"},
+ {file = "langchain_openai-0.3.14.tar.gz", hash = "sha256:0662db78620c2e5c3ccfc1c36dc959c0ddc80e6bdf7ef81632cbf4b2cc9b9461"},
+]
+
+[package.dependencies]
+langchain-core = ">=0.3.53,<1.0.0"
+openai = ">=1.68.2,<2.0.0"
+tiktoken = ">=0.7,<1"
+
[[package]]
name = "langchain-text-splitters"
version = "0.3.8"
@@ -2471,6 +3491,22 @@ profiling = ["gprof2dot"]
rtd = ["jupyter_sphinx", "mdit-py-plugins", "myst-parser", "pyyaml", "sphinx", "sphinx-copybutton", "sphinx-design", "sphinx_book_theme"]
testing = ["coverage", "pytest", "pytest-cov", "pytest-regressions"]
+[[package]]
+name = "markdownify"
+version = "1.1.0"
+description = "Convert HTML to markdown."
+optional = false
+python-versions = "*"
+groups = ["main"]
+files = [
+ {file = "markdownify-1.1.0-py3-none-any.whl", hash = "sha256:32a5a08e9af02c8a6528942224c91b933b4bd2c7d078f9012943776fc313eeef"},
+ {file = "markdownify-1.1.0.tar.gz", hash = "sha256:449c0bbbf1401c5112379619524f33b63490a8fa479456d41de9dc9e37560ebd"},
+]
+
+[package.dependencies]
+beautifulsoup4 = ">=4.9,<5"
+six = ">=1.15,<2"
+
[[package]]
name = "markupsafe"
version = "3.0.2"
@@ -2867,6 +3903,29 @@ example = ["cairocffi (>=1.7)", "contextily (>=1.6)", "igraph (>=0.11)", "momepy
extra = ["lxml (>=4.6)", "pydot (>=3.0.1)", "pygraphviz (>=1.14)", "sympy (>=1.10)"]
test = ["pytest (>=7.2)", "pytest-cov (>=4.0)"]
+[[package]]
+name = "nilql"
+version = "0.0.0a12"
+description = "Library for working with encrypted data within nilDB queries and replies."
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "nilql-0.0.0a12-py3-none-any.whl", hash = "sha256:400b705fa1d9856093f47811062162e6fc8e1d4de8cf1861b2a40aaf78357989"},
+ {file = "nilql-0.0.0a12.tar.gz", hash = "sha256:6bd98ec270f06178439875e3251888fe52759bd309ba3639d4c0562e48a12cd8"},
+]
+
+[package.dependencies]
+bcl = ">=2.3,<3.0"
+pailliers = ">=0.1,<1.0"
+
+[package.extras]
+coveralls = ["coveralls (>=4.0,<5.0)"]
+docs = ["sphinx (>=5.0,<6.0)", "sphinx-rtd-theme (>=2.0.0,<2.1.0)", "toml (>=0.10.2,<0.11.0)"]
+lint = ["pylint (>=3.2.0,<3.3.0)"]
+publish = ["build (>=0.10,<1.0)", "twine (>=4.0,<5.0)"]
+test = ["pytest (>=8.2,<9.0)", "pytest-cov (>=5.0,<6.0)"]
+
[[package]]
name = "nodeenv"
version = "1.9.1"
@@ -3305,6 +4364,29 @@ files = [
{file = "packaging-24.2.tar.gz", hash = "sha256:c228a6dc5e932d346bc5739379109d49e8853dd8223571c7c5b55260edc0b97f"},
]
+[[package]]
+name = "pailliers"
+version = "0.2.0"
+description = "Minimal pure-Python implementation of Paillier's additively homomorphic cryptosystem."
+optional = false
+python-versions = ">=3.7"
+groups = ["main"]
+files = [
+ {file = "pailliers-0.2.0-py3-none-any.whl", hash = "sha256:ad0ddc72be63f9b3c10200e23178fe527b566c4aa86659ab54a8faeb367ac7d6"},
+ {file = "pailliers-0.2.0.tar.gz", hash = "sha256:a1d3d7d840594f51073e531078b3da4dc5a7a527b410102a0f0fa65d6c222871"},
+]
+
+[package.dependencies]
+egcd = ">=2.0,<3.0"
+rabinmiller = ">=0.1,<1.0"
+
+[package.extras]
+coveralls = ["coveralls (>=4.0,<5.0)"]
+docs = ["sphinx (>=5.0,<6.0)", "sphinx-rtd-theme (>=2.0.0,<2.1.0)", "toml (>=0.10.2,<0.11.0)"]
+lint = ["pylint (>=2.17.0,<2.18.0) ; python_version < \"3.12\"", "pylint (>=3.2.0,<3.3.0) ; python_version >= \"3.12\""]
+publish = ["build (>=0.10,<1.0)", "twine (>=4.0,<5.0)"]
+test = ["pytest (>=7.4,<8.0) ; python_version < \"3.12\"", "pytest (>=8.2,<9.0) ; python_version >= \"3.12\"", "pytest-cov (>=4.1,<5.0) ; python_version < \"3.12\"", "pytest-cov (>=5.0,<6.0) ; python_version >= \"3.12\""]
+
[[package]]
name = "pandas"
version = "2.2.3"
@@ -3391,6 +4473,43 @@ sql-other = ["SQLAlchemy (>=2.0.0)", "adbc-driver-postgresql (>=0.8.0)", "adbc-d
test = ["hypothesis (>=6.46.1)", "pytest (>=7.3.2)", "pytest-xdist (>=2.2.0)"]
xml = ["lxml (>=4.9.2)"]
+[[package]]
+name = "paramiko"
+version = "3.5.1"
+description = "SSH2 protocol library"
+optional = false
+python-versions = ">=3.6"
+groups = ["main"]
+files = [
+ {file = "paramiko-3.5.1-py3-none-any.whl", hash = "sha256:43b9a0501fc2b5e70680388d9346cf252cfb7d00b0667c39e80eb43a408b8f61"},
+ {file = "paramiko-3.5.1.tar.gz", hash = "sha256:b2c665bc45b2b215bd7d7f039901b14b067da00f3a11e6640995fd58f2664822"},
+]
+
+[package.dependencies]
+bcrypt = ">=3.2"
+cryptography = ">=3.3"
+pynacl = ">=1.5"
+
+[package.extras]
+all = ["gssapi (>=1.4.1) ; platform_system != \"Windows\"", "invoke (>=2.0)", "pyasn1 (>=0.1.7)", "pywin32 (>=2.1.8) ; platform_system == \"Windows\""]
+gssapi = ["gssapi (>=1.4.1) ; platform_system != \"Windows\"", "pyasn1 (>=0.1.7)", "pywin32 (>=2.1.8) ; platform_system == \"Windows\""]
+invoke = ["invoke (>=2.0)"]
+
+[[package]]
+name = "parsimonious"
+version = "0.10.0"
+description = "(Soon to be) the fastest pure-Python PEG parser I could muster"
+optional = false
+python-versions = "*"
+groups = ["main"]
+files = [
+ {file = "parsimonious-0.10.0-py3-none-any.whl", hash = "sha256:982ab435fabe86519b57f6b35610aa4e4e977e9f02a14353edf4bbc75369fc0f"},
+ {file = "parsimonious-0.10.0.tar.gz", hash = "sha256:8281600da180ec8ae35427a4ab4f7b82bfec1e3d1e52f80cb60ea82b9512501c"},
+]
+
+[package.dependencies]
+regex = ">=2022.3.15"
+
[[package]]
name = "passlib"
version = "1.7.4"
@@ -3515,20 +4634,20 @@ xmp = ["defusedxml"]
[[package]]
name = "platformdirs"
-version = "4.3.7"
+version = "4.3.6"
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a `user data dir`."
optional = false
-python-versions = ">=3.9"
+python-versions = ">=3.8"
groups = ["main", "dev"]
files = [
- {file = "platformdirs-4.3.7-py3-none-any.whl", hash = "sha256:a03875334331946f13c549dbd8f4bac7a13a50a895a0eb1e8c6a8ace80d40a94"},
- {file = "platformdirs-4.3.7.tar.gz", hash = "sha256:eb437d586b6a0986388f0d6f74aa0cde27b48d0e3d66843640bfb6bdcdb6e351"},
+ {file = "platformdirs-4.3.6-py3-none-any.whl", hash = "sha256:73e575e1408ab8103900836b97580d5307456908a03e92031bab39e4554cc3fb"},
+ {file = "platformdirs-4.3.6.tar.gz", hash = "sha256:357fb2acbc885b0419afd3ce3ed34564c13c9b95c89360cd9563f73aa5e2b907"},
]
[package.extras]
-docs = ["furo (>=2024.8.6)", "proselint (>=0.14)", "sphinx (>=8.1.3)", "sphinx-autodoc-typehints (>=3)"]
-test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=8.3.4)", "pytest-cov (>=6)", "pytest-mock (>=3.14)"]
-type = ["mypy (>=1.14.1)"]
+docs = ["furo (>=2024.8.6)", "proselint (>=0.14)", "sphinx (>=8.0.2)", "sphinx-autodoc-typehints (>=2.4)"]
+test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=8.3.2)", "pytest-cov (>=5)", "pytest-mock (>=3.14)"]
+type = ["mypy (>=1.11.2)"]
[[package]]
name = "plotly"
@@ -3737,7 +4856,7 @@ version = "7.0.0"
description = "Cross-platform lib for process and system monitoring in Python. NOTE: the syntax of this script MUST be kept compatible with Python 2.7."
optional = false
python-versions = ">=3.6"
-groups = ["demo"]
+groups = ["main", "demo"]
files = [
{file = "psutil-7.0.0-cp36-abi3-macosx_10_9_x86_64.whl", hash = "sha256:101d71dc322e3cffd7cea0650b09b3d08b8e7c4109dd6809fe452dfd00e58b25"},
{file = "psutil-7.0.0-cp36-abi3-macosx_11_0_arm64.whl", hash = "sha256:39db632f6bb862eeccf56660871433e111b6ea58f2caea825571951d4b6aa3da"},
@@ -3755,6 +4874,128 @@ files = [
dev = ["abi3audit", "black (==24.10.0)", "check-manifest", "coverage", "packaging", "pylint", "pyperf", "pypinfo", "pytest", "pytest-cov", "pytest-xdist", "requests", "rstcheck", "ruff", "setuptools", "sphinx", "sphinx_rtd_theme", "toml-sort", "twine", "virtualenv", "vulture", "wheel"]
test = ["pytest", "pytest-xdist", "setuptools"]
+[[package]]
+name = "py-sr25519-bindings"
+version = "0.2.2"
+description = "Python bindings for schnorrkel RUST crate"
+optional = false
+python-versions = ">=3.7"
+groups = ["main"]
+files = [
+ {file = "py_sr25519_bindings-0.2.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd29a2ee1dfa55a3e17cf18fe0fa5f5749e0c49c9bd9c423a46e1cc242225447"},
+ {file = "py_sr25519_bindings-0.2.2-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:58702966f1547e47bfbf70924eef881087bff969e2dca15953cdc95cb2abb4a2"},
+ {file = "py_sr25519_bindings-0.2.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9b5b2cdf08be144f395508acebd5fa41c81dbee1364de75ff80fe0f1308fd969"},
+ {file = "py_sr25519_bindings-0.2.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77f7f4b323c628242909228eaac78bf6376b39b8988704e7910e858c2017b487"},
+ {file = "py_sr25519_bindings-0.2.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:58b1e28fc1c57f69d37556b8f3c51cdd84446f643729789b6c0ce19ce2726bd5"},
+ {file = "py_sr25519_bindings-0.2.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:512185a3be65a893208e9c30d11948c8b405532756f3bcab16d1dbe5d8e3355e"},
+ {file = "py_sr25519_bindings-0.2.2-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:3e5bf9343f3708cfdf5228dbb1b8a093c64144bb9c4bd02cfb014fb2241dd965"},
+ {file = "py_sr25519_bindings-0.2.2-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:e9ac26fd6b8606796fcaee90fe277820efbe168490659d26295fd0fc7b37ee4a"},
+ {file = "py_sr25519_bindings-0.2.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:297e29518a01a24b943333fc9fe5e7102cb7083f2d256270f42476bcf5ba666d"},
+ {file = "py_sr25519_bindings-0.2.2-cp310-cp310-win32.whl", hash = "sha256:f1ab3c36d94dec25767e2a54a2fb0eb320fc0c3e1d7ea573288b961d432672ef"},
+ {file = "py_sr25519_bindings-0.2.2-cp310-cp310-win_amd64.whl", hash = "sha256:cc40a53600e68848658cf6046cd43ef2ec9f0c8c04ebf8ea3636dd58c1c25296"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c0931d8fd07e13131e652f3211c1f1c12b7a5153bed9217e4483b195515c76f"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:09657937b8f04c034622691c4753fcef0b3857939dbeff72590b7f5de336302d"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6ca700354e8cc3d082426ca5cdc7dd34a05988adec4edc0cd42d31c4ba16fbc0"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ddb490e99d5646ba68f5308fed1b92efbc85470b1171a2b78e555b44a7073570"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b8a479a7510f30d2912f552335cb83d321c0be83832a71cd0bcd190f6356a7bf"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ab25059d290753202f160bb8a4fd3c789ab9663381ca564338015fd3b7625dde"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:315382f207430143cd748f805f13bf56f36fc66726303b491cd38ce78d8972e9"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1a65275421f30e3d563c6f3dec552060f1f85b7840ab8ecf1d48ced008d0ba5f"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:1350c85bdc903105d8fdc7dd369b802bf2821c321fea8aa0929f7a7063437d81"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:3a34fa18885345a0102c3ffbaa17a32cd67d28a60376158508d5ed7f96a478f7"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c63b0966b45870f0b1dfc2a366f1763f4a165f3aec2b02e7464cfb2c6ca09e94"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-win32.whl", hash = "sha256:7bf982a7d34f6eb0c7c42b7f59610a527e9b02654079fb78d7eb757c6bd79d9d"},
+ {file = "py_sr25519_bindings-0.2.2-cp311-cp311-win_amd64.whl", hash = "sha256:d9ee79ec4e722993da24385a8eb85d97878ef67d48d0e706c098c626d798c7bc"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:f22542738ed98fac0d3da2479dd3f26c695594800877a4d8bb116c47e4fd4b7c"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:b312b8ac7c8354d5cf1b9aad993bbafbd99cc97b6d246f246e76814f576ed809"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c70ff898fa46f380a535c843e3a1a9824d1849216067bbf28eb9ad225b92f0bb"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:292be23ad53d9f9dbf1703a2a341005629a8f93c57cfad254c8c1230ec7d3fe3"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:673b31e8f59bc1478814b011921073f8ad4e2c78a1d6580b3ddb1a9d7edc4392"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:849f77ab12210e8549e58d444e9199d9aba83a988e99ca8bef04dd53e81f9561"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:cf8c1d329275c41836aee5f8789ab14100dbdc2b6f3a0210fac2abb0f7507c24"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:48f053c5e8cb66125057b25223ef5ff57bb4383a82871d47089397317c5fd792"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:fea3ce0ac6a26a52735bb48f8daafb82d17147f776bb6d9d3c330bd2ccffe20d"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:f44a0a9cb155af6408e3f73833a935abc98934ce097b2ad07dd13e3a88f82cb8"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:8cc531500823ece8d6889082642e9ea06f2eaffd0ed43d65871cb4727429027c"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-win32.whl", hash = "sha256:840c3ec1fc8dde12421369afa9761943efe377a7bd55a97524587e8b5a6546c2"},
+ {file = "py_sr25519_bindings-0.2.2-cp312-cp312-win_amd64.whl", hash = "sha256:c3ee5fd07b2974ce147ac7546b18729d2eb4efebe8eaad178690aaca656487f3"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:3bb2c5fba39a82880c43b0d75e87f4d4a2416717c5fa2122b22e02689c2120e3"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:1393798a36f74482c53c254969ae8d92f6549767ef69575206eaaf629cbf2a64"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:29b9ee2e2f8f36676fa2a72af5bdfe257d331b3d83e5a92b45bad2f25a5b975c"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4e932c33f6b660319c950c300c32ad2c0ba9642743a2e709a2fb886d32c28baf"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1fce13a3434c57af097b8b07b69e3821b1f10623754204112c14bd544bd961c1"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16501bd5b9a37623dbf48aa6b197c57c004f9125e190450e041289a8c3eceac7"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:beb12471fb76be707fc9213d39e5be4cf4add7e38e08bc1fbf7e786250977e00"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:55134f0ba34c27fbb8b489a338c6cb6a31465813f615ed93afbd67e844ef3aed"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:785521c868738a2345e3625ad9166ede228f63e9d3f0c7ff8e35f49d636bce04"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:c8cab5620a4ef4cc69a314c9e9ac17af1c0d4d11e297fcefe5d71d827fd7ee21"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:15ae6f86f112c6b23d357b5a98a6cb493f5c2734fabff354a8198be9dea0e90e"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-win32.whl", hash = "sha256:cba9efa48f48bf56e73a528005978b6f05cb2c847e21eb9645bbc6581619482f"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313-win_amd64.whl", hash = "sha256:9cdb4e0f231fd5824f73361a37a102871866d29752f96d88b1da958f1e5ff2d4"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e1d436db7f48dabd4201bb1a88c66a6a3cd15a40e89a236ec1b8cb60037dc1a9"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a9b8c9a81f90dc330eabbdc3ec5f9fdf84a34cd37a1e660cbf5c5daec7b2d08f"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0f496da3eb2d843bd12ccff871d22d086b08cfe95852ca91dcdbd91e350aca8d"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:862fa69f948cb3028051a71ce0d2d88cbe8b52723c782f0972d12f5f85a25637"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313t-musllinux_1_2_armv7l.whl", hash = "sha256:1111597744d7993ce732f785e97e0d2e4f9554509d90ba4b0e99829dbf1c2e6d"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:c4518b553335f70f18b8167eb2b7f533a66eb703f251d4d4b36c4a03d14cd75e"},
+ {file = "py_sr25519_bindings-0.2.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:c917a8f365450be06e051f8d8671c182057cdda42bd5f6883c5f537a2bac4f5a"},
+ {file = "py_sr25519_bindings-0.2.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b8b2666d381416fb07336c6596a5554dd0e0f1ec50ff32bcc975ae29df79961"},
+ {file = "py_sr25519_bindings-0.2.2-cp37-cp37m-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5644353660fd9f97318d70fb7cf362f969a5ee572b61df8f18eda5fea80a6514"},
+ {file = "py_sr25519_bindings-0.2.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e0728fff2a29d4cc76c4cf22142cd2e2e8dc37745b213a866412980191e1260c"},
+ {file = "py_sr25519_bindings-0.2.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b74c31e2960c4af5b709b562aaf610989af532aee771fcdf175533de60441607"},
+ {file = "py_sr25519_bindings-0.2.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3d3c6836df2d67008e3f11080fb216e414cc924de256dd118e50a92cd334f143"},
+ {file = "py_sr25519_bindings-0.2.2-cp37-cp37m-musllinux_1_2_aarch64.whl", hash = "sha256:dc5a57b67384244083b8d0831f9490dadca34e0543c1bf2f3a876aa4e7081961"},
+ {file = "py_sr25519_bindings-0.2.2-cp37-cp37m-musllinux_1_2_armv7l.whl", hash = "sha256:ea139e7bf80ddc1c682db439825bec56baf745d643c146a783e9ddb737af266a"},
+ {file = "py_sr25519_bindings-0.2.2-cp37-cp37m-musllinux_1_2_i686.whl", hash = "sha256:727f91cff7901db2d4e0f762dafd48c2b1086945b4903dcdd0a3eb65624c17c8"},
+ {file = "py_sr25519_bindings-0.2.2-cp37-cp37m-musllinux_1_2_x86_64.whl", hash = "sha256:0e4217f59936ae13fa4215838d2da59c130572408e3f29a9f7ca436924f4b356"},
+ {file = "py_sr25519_bindings-0.2.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25e6f2c026aa910cac7f16723b6b3e9736fe805e51b6ba41cfb4e25a4c0a6442"},
+ {file = "py_sr25519_bindings-0.2.2-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:776d1a1f114b1f0553c9c8336545daaf20443d0b681c47c499377f69406f7a56"},
+ {file = "py_sr25519_bindings-0.2.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:314893d4ea96877560bc12446956d61707ca46fb99040ffad751a0710a7aa87f"},
+ {file = "py_sr25519_bindings-0.2.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:277e8ef38c9d899b1855fdcde07ae73a9917e06c46df556b8ca3216ae585b532"},
+ {file = "py_sr25519_bindings-0.2.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:695e80ccdb710efba2f909235b18eaf230cf0b3f60e8d52a1c904eaeeff839ba"},
+ {file = "py_sr25519_bindings-0.2.2-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:2777013ff914bcc87e657657e99922fa48f3bb674734550989fb210fb3d878a2"},
+ {file = "py_sr25519_bindings-0.2.2-cp38-cp38-musllinux_1_2_armv7l.whl", hash = "sha256:30f1af9306fda911f296db29b4fff06197d3f38de5643b3d95862d3833db1e41"},
+ {file = "py_sr25519_bindings-0.2.2-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:56eeafc92d3c66990ab97ff91c09b3295aea6dac9b64af0227750a8192aeaeec"},
+ {file = "py_sr25519_bindings-0.2.2-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:f7126db740eb190cf1ba993e066f03c2914edaf08e6963d10bbdd740922c95e6"},
+ {file = "py_sr25519_bindings-0.2.2-cp38-cp38-win32.whl", hash = "sha256:7b20210a0a0b39e3f0bcb7832a3df736eeea2fcc5776dba1ce5b0c050e489145"},
+ {file = "py_sr25519_bindings-0.2.2-cp38-cp38-win_amd64.whl", hash = "sha256:5329e2e54eb9850c2eb84d6a226bd98cdc3597535453eced920035e1e026dced"},
+ {file = "py_sr25519_bindings-0.2.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b32743bba0a53225097120b212da88c14584022a357d7e91cf19ed0a3adad9f6"},
+ {file = "py_sr25519_bindings-0.2.2-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fd56849d9195693c6a8e7c48efe4256918b6afeec090915f3f8f883cdb8addda"},
+ {file = "py_sr25519_bindings-0.2.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f46c87fd7a8e55c4fce272d4e34663d3c7c3ffee906826a2a16a1400027aa5b9"},
+ {file = "py_sr25519_bindings-0.2.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2ee25565a287690d6e48302f4584775622ce3329d2ab92fd3b0a4f063d4ca91f"},
+ {file = "py_sr25519_bindings-0.2.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b1199dab44704aa34401428ca3170da5b7ffdc8c65208a2c75a3c1fe5298b20e"},
+ {file = "py_sr25519_bindings-0.2.2-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:702a28e8ead7d1d664bea22168158edcc0e3d36e5cc1a79c7373ab1636f89cc2"},
+ {file = "py_sr25519_bindings-0.2.2-cp39-cp39-musllinux_1_2_armv7l.whl", hash = "sha256:3de67c503155019c494e5887d1797f046afe1aeb99ee4b3cf86c15386330f034"},
+ {file = "py_sr25519_bindings-0.2.2-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:e1a48b295b562836d72dee7136c0503ea63cd89fec85d418b91ba040471c37ed"},
+ {file = "py_sr25519_bindings-0.2.2-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:acebe18639616cd00ad0544d9bdaa73c545f00c4fc0d29b8a4e1e6c447f7a607"},
+ {file = "py_sr25519_bindings-0.2.2-cp39-cp39-win32.whl", hash = "sha256:62e4bdd094589446a24dc1029cf2ef6c869e1f4fede04e17335bc92e60640fc5"},
+ {file = "py_sr25519_bindings-0.2.2-cp39-cp39-win_amd64.whl", hash = "sha256:0c2483031813f908da35c380196bc88410e2542482a5b4b51d265c6566de116c"},
+ {file = "py_sr25519_bindings-0.2.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8c9ee94a9fa1625f3192c89ecacc2bd012e09b57e6d2ad8ede027b31381609d3"},
+ {file = "py_sr25519_bindings-0.2.2-pp310-pypy310_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e4c95b3b6230faf2e89f6e5407d63a9d0d52385e6b7d42205570f5ee2f927940"},
+ {file = "py_sr25519_bindings-0.2.2-pp310-pypy310_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8ab2b063129babc8f1d9fe6cf5c2d8cc434b6797c553440302da1fab987d74ab"},
+ {file = "py_sr25519_bindings-0.2.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f1bf93d46ab717089f4fac9b764ea4e7be9f4a45a62bd9919ef850ae8d2ae433"},
+ {file = "py_sr25519_bindings-0.2.2-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5e927dd15d61e76fde3fa8bd6d776148ea987944fee1fe815fbc40c6a77f61ad"},
+ {file = "py_sr25519_bindings-0.2.2-pp310-pypy310_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:6584cf4c87fae9bdd64bc50dd786d9c805165bb6bc7a1ff545e77b29a78acb8d"},
+ {file = "py_sr25519_bindings-0.2.2-pp310-pypy310_pp73-musllinux_1_2_armv7l.whl", hash = "sha256:a26b1add87dc6086463975aa1d889f39f90b0d22949d4de52e8a53e516bd2ac4"},
+ {file = "py_sr25519_bindings-0.2.2-pp310-pypy310_pp73-musllinux_1_2_i686.whl", hash = "sha256:84cb2c645ce60a04688dedf61ed289d4fb716aef4129313814be1a2d47e268e7"},
+ {file = "py_sr25519_bindings-0.2.2-pp310-pypy310_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:76e660007dd415de22b1d99a884ee39cb6abf03f24377f58e4498533856c2bac"},
+ {file = "py_sr25519_bindings-0.2.2-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d0afcb14cd493485175dc9bf8c57b8b37581bbb29f55b6e4f3ce1f803222488"},
+ {file = "py_sr25519_bindings-0.2.2-pp38-pypy38_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:0b5c5f9080e879badefff596361520b2d9de9d9c4be7c14b36a017d798c451e2"},
+ {file = "py_sr25519_bindings-0.2.2-pp38-pypy38_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a42353737c22a3fa425ac350f5fac74d5b2f9c3cdb8ad44dbb367bd7869774cc"},
+ {file = "py_sr25519_bindings-0.2.2-pp38-pypy38_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:3ea3dc2c6a2a38b791114bd50021f10db2dd2a1d7a1a1aac0c7e80d885c0d3b5"},
+ {file = "py_sr25519_bindings-0.2.2-pp38-pypy38_pp73-musllinux_1_2_armv7l.whl", hash = "sha256:888d98a2d7736e0f269c8ab1f09dfac04af2d024b18c0485adc3615277f3442b"},
+ {file = "py_sr25519_bindings-0.2.2-pp38-pypy38_pp73-musllinux_1_2_i686.whl", hash = "sha256:50976e9b22328df5696fcbfded27215a35225ee41e0b3f1b26a01ab00ad08143"},
+ {file = "py_sr25519_bindings-0.2.2-pp38-pypy38_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:6449963cd6b300b224a1bc1fec77df012e30d609e54ccffe4f44f4563eab27c2"},
+ {file = "py_sr25519_bindings-0.2.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:726346118fc2b785525945ee71ea1acf9be84c41e266c2345c93c7d4d6132cbc"},
+ {file = "py_sr25519_bindings-0.2.2-pp39-pypy39_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4dd5cfe48faafa7e112a021284663db8b64efd9b2225e69906f0bf7f3159a3ce"},
+ {file = "py_sr25519_bindings-0.2.2-pp39-pypy39_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:85432a949143d7a20e452b4c89d5f0ad9a0162e1ce5a904fc157fe204cbe5ded"},
+ {file = "py_sr25519_bindings-0.2.2-pp39-pypy39_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:04a2f2eac269bb2f9bf30c795990211cd8d4cfdd28eafbd73b2dfc77a9ef940f"},
+ {file = "py_sr25519_bindings-0.2.2-pp39-pypy39_pp73-musllinux_1_2_armv7l.whl", hash = "sha256:bc7ecd25700a664d835cc9db5f6a4f65fef62a395762487c8c2a661566316e8f"},
+ {file = "py_sr25519_bindings-0.2.2-pp39-pypy39_pp73-musllinux_1_2_i686.whl", hash = "sha256:9efc5526c0eb74c2f8df809c47e22d62febc31db8f38b5c6b1253e810e0ed71f"},
+ {file = "py_sr25519_bindings-0.2.2-pp39-pypy39_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:daa74fdd7bac2d97fbbbbb1ca40a0c02102220d09cfa9695cbde8d2cbedfadb7"},
+ {file = "py_sr25519_bindings-0.2.2.tar.gz", hash = "sha256:192d65d3bc43c6f4121a0732e1f6eb6ad869897ca26368ba032e96a82b3b7606"},
+]
+
[[package]]
name = "pyasn1"
version = "0.4.8"
@@ -3807,16 +5048,55 @@ files = [
]
markers = {demo = "platform_python_implementation != \"PyPy\""}
+[[package]]
+name = "pycryptodome"
+version = "3.22.0"
+description = "Cryptographic library for Python"
+optional = false
+python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
+groups = ["main"]
+files = [
+ {file = "pycryptodome-3.22.0-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:96e73527c9185a3d9b4c6d1cfb4494f6ced418573150be170f6580cb975a7f5a"},
+ {file = "pycryptodome-3.22.0-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:9e1bb165ea1dc83a11e5dbbe00ef2c378d148f3a2d3834fb5ba4e0f6fd0afe4b"},
+ {file = "pycryptodome-3.22.0-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:d4d1174677855c266eed5c4b4e25daa4225ad0c9ffe7584bb1816767892545d0"},
+ {file = "pycryptodome-3.22.0-cp27-cp27m-win32.whl", hash = "sha256:9dbb749cef71c28271484cbef684f9b5b19962153487735411e1020ca3f59cb1"},
+ {file = "pycryptodome-3.22.0-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:f1ae7beb64d4fc4903a6a6cca80f1f448e7a8a95b77d106f8a29f2eb44d17547"},
+ {file = "pycryptodome-3.22.0-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:a26bcfee1293b7257c83b0bd13235a4ee58165352be4f8c45db851ba46996dc6"},
+ {file = "pycryptodome-3.22.0-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:009e1c80eea42401a5bd5983c4bab8d516aef22e014a4705622e24e6d9d703c6"},
+ {file = "pycryptodome-3.22.0-cp37-abi3-macosx_10_9_x86_64.whl", hash = "sha256:3b76fa80daeff9519d7e9f6d9e40708f2fce36b9295a847f00624a08293f4f00"},
+ {file = "pycryptodome-3.22.0-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a31fa5914b255ab62aac9265654292ce0404f6b66540a065f538466474baedbc"},
+ {file = "pycryptodome-3.22.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0092fd476701eeeb04df5cc509d8b739fa381583cda6a46ff0a60639b7cd70d"},
+ {file = "pycryptodome-3.22.0-cp37-abi3-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:18d5b0ddc7cf69231736d778bd3ae2b3efb681ae33b64b0c92fb4626bb48bb89"},
+ {file = "pycryptodome-3.22.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:f6cf6aa36fcf463e622d2165a5ad9963b2762bebae2f632d719dfb8544903cf5"},
+ {file = "pycryptodome-3.22.0-cp37-abi3-musllinux_1_2_i686.whl", hash = "sha256:aec7b40a7ea5af7c40f8837adf20a137d5e11a6eb202cde7e588a48fb2d871a8"},
+ {file = "pycryptodome-3.22.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:d21c1eda2f42211f18a25db4eaf8056c94a8563cd39da3683f89fe0d881fb772"},
+ {file = "pycryptodome-3.22.0-cp37-abi3-win32.whl", hash = "sha256:f02baa9f5e35934c6e8dcec91fcde96612bdefef6e442813b8ea34e82c84bbfb"},
+ {file = "pycryptodome-3.22.0-cp37-abi3-win_amd64.whl", hash = "sha256:d086aed307e96d40c23c42418cbbca22ecc0ab4a8a0e24f87932eeab26c08627"},
+ {file = "pycryptodome-3.22.0-pp27-pypy_73-manylinux2010_x86_64.whl", hash = "sha256:98fd9da809d5675f3a65dcd9ed384b9dc67edab6a4cda150c5870a8122ec961d"},
+ {file = "pycryptodome-3.22.0-pp27-pypy_73-win32.whl", hash = "sha256:37ddcd18284e6b36b0a71ea495a4c4dca35bb09ccc9bfd5b91bfaf2321f131c1"},
+ {file = "pycryptodome-3.22.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:b4bdce34af16c1dcc7f8c66185684be15f5818afd2a82b75a4ce6b55f9783e13"},
+ {file = "pycryptodome-3.22.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2988ffcd5137dc2d27eb51cd18c0f0f68e5b009d5fec56fbccb638f90934f333"},
+ {file = "pycryptodome-3.22.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e653519dedcd1532788547f00eeb6108cc7ce9efdf5cc9996abce0d53f95d5a9"},
+ {file = "pycryptodome-3.22.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f5810bc7494e4ac12a4afef5a32218129e7d3890ce3f2b5ec520cc69eb1102ad"},
+ {file = "pycryptodome-3.22.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:e7514a1aebee8e85802d154fdb261381f1cb9b7c5a54594545145b8ec3056ae6"},
+ {file = "pycryptodome-3.22.0-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:56c6f9342fcb6c74e205fbd2fee568ec4cdbdaa6165c8fde55dbc4ba5f584464"},
+ {file = "pycryptodome-3.22.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:87a88dc543b62b5c669895caf6c5a958ac7abc8863919e94b7a6cafd2f64064f"},
+ {file = "pycryptodome-3.22.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f7a683bc9fa585c0dfec7fa4801c96a48d30b30b096e3297f9374f40c2fedafc"},
+ {file = "pycryptodome-3.22.0-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f4f6f47a7f411f2c157e77bbbda289e0c9f9e1e9944caa73c1c2e33f3f92d6e"},
+ {file = "pycryptodome-3.22.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:a6cf9553b29624961cab0785a3177a333e09e37ba62ad22314ebdbb01ca79840"},
+ {file = "pycryptodome-3.22.0.tar.gz", hash = "sha256:fd7ab568b3ad7b77c908d7c3f7e167ec5a8f035c64ff74f10d47a4edd043d723"},
+]
+
[[package]]
name = "pydantic"
-version = "2.10.6"
+version = "2.10.4"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.8"
groups = ["main", "demo"]
files = [
- {file = "pydantic-2.10.6-py3-none-any.whl", hash = "sha256:427d664bf0b8a2b34ff5dd0f5a18df00591adcee7198fbd71981054cef37b584"},
- {file = "pydantic-2.10.6.tar.gz", hash = "sha256:ca5daa827cce33de7a42be142548b0096bf05a7e7b365aebfa5f8eeec7128236"},
+ {file = "pydantic-2.10.4-py3-none-any.whl", hash = "sha256:597e135ea68be3a37552fb524bc7d0d66dcf93d395acd93a00682f1efcb8ee3d"},
+ {file = "pydantic-2.10.4.tar.gz", hash = "sha256:82f12e9723da6de4fe2ba888b5971157b3be7ad914267dea8f05f82b28254f06"},
]
[package.dependencies]
@@ -4017,6 +5297,27 @@ files = [
[package.extras]
windows-terminal = ["colorama (>=0.4.6)"]
+[[package]]
+name = "pyjwt"
+version = "2.10.1"
+description = "JSON Web Token implementation in Python"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "PyJWT-2.10.1-py3-none-any.whl", hash = "sha256:dcdd193e30abefd5debf142f9adfcdd2b58004e644f25406ffaebd50bd98dacb"},
+ {file = "pyjwt-2.10.1.tar.gz", hash = "sha256:3cc5772eb20009233caf06e9d8a0577824723b44e6648ee0a2aedb6cf9381953"},
+]
+
+[package.dependencies]
+cryptography = {version = ">=3.4.0", optional = true, markers = "extra == \"crypto\""}
+
+[package.extras]
+crypto = ["cryptography (>=3.4.0)"]
+dev = ["coverage[toml] (==5.0.4)", "cryptography (>=3.4.0)", "pre-commit", "pytest (>=6.0.0,<7.0.0)", "sphinx", "sphinx-rtd-theme", "zope.interface"]
+docs = ["sphinx", "sphinx-rtd-theme", "zope.interface"]
+tests = ["coverage[toml] (==5.0.4)", "pytest (>=6.0.0,<7.0.0)"]
+
[[package]]
name = "pylint"
version = "3.3.3"
@@ -4045,6 +5346,33 @@ tomlkit = ">=0.10.1"
spelling = ["pyenchant (>=3.2,<4.0)"]
testutils = ["gitpython (>3)"]
+[[package]]
+name = "pynacl"
+version = "1.5.0"
+description = "Python binding to the Networking and Cryptography (NaCl) library"
+optional = false
+python-versions = ">=3.6"
+groups = ["main"]
+files = [
+ {file = "PyNaCl-1.5.0-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:401002a4aaa07c9414132aaed7f6836ff98f59277a234704ff66878c2ee4a0d1"},
+ {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:52cb72a79269189d4e0dc537556f4740f7f0a9ec41c1322598799b0bdad4ef92"},
+ {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a36d4a9dda1f19ce6e03c9a784a2921a4b726b02e1c736600ca9c22029474394"},
+ {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:0c84947a22519e013607c9be43706dd42513f9e6ae5d39d3613ca1e142fba44d"},
+ {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06b8f6fa7f5de8d5d2f7573fe8c863c051225a27b61e6860fd047b1775807858"},
+ {file = "PyNaCl-1.5.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:a422368fc821589c228f4c49438a368831cb5bbc0eab5ebe1d7fac9dded6567b"},
+ {file = "PyNaCl-1.5.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:61f642bf2378713e2c2e1de73444a3778e5f0a38be6fee0fe532fe30060282ff"},
+ {file = "PyNaCl-1.5.0-cp36-abi3-win32.whl", hash = "sha256:e46dae94e34b085175f8abb3b0aaa7da40767865ac82c928eeb9e57e1ea8a543"},
+ {file = "PyNaCl-1.5.0-cp36-abi3-win_amd64.whl", hash = "sha256:20f42270d27e1b6a29f54032090b972d97f0a1b0948cc52392041ef7831fee93"},
+ {file = "PyNaCl-1.5.0.tar.gz", hash = "sha256:8ac7448f09ab85811607bdd21ec2464495ac8b7c66d146bf545b0f08fb9220ba"},
+]
+
+[package.dependencies]
+cffi = ">=1.4.1"
+
+[package.extras]
+docs = ["sphinx (>=1.6.5)", "sphinx-rtd-theme"]
+tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"]
+
[[package]]
name = "pyparsing"
version = "3.2.3"
@@ -4060,6 +5388,25 @@ files = [
[package.extras]
diagrams = ["jinja2", "railroad-diagrams"]
+[[package]]
+name = "pyproject-api"
+version = "1.8.0"
+description = "API to interact with the python pyproject.toml based projects"
+optional = false
+python-versions = ">=3.8"
+groups = ["main"]
+files = [
+ {file = "pyproject_api-1.8.0-py3-none-any.whl", hash = "sha256:3d7d347a047afe796fd5d1885b1e391ba29be7169bd2f102fcd378f04273d228"},
+ {file = "pyproject_api-1.8.0.tar.gz", hash = "sha256:77b8049f2feb5d33eefcc21b57f1e279636277a8ac8ad6b5871037b243778496"},
+]
+
+[package.dependencies]
+packaging = ">=24.1"
+
+[package.extras]
+docs = ["furo (>=2024.8.6)", "sphinx-autodoc-typehints (>=2.4.1)"]
+testing = ["covdefaults (>=2.3)", "pytest (>=8.3.3)", "pytest-cov (>=5)", "pytest-mock (>=3.14)", "setuptools (>=75.1)"]
+
[[package]]
name = "pytest"
version = "8.3.4"
@@ -4178,6 +5525,45 @@ files = [
{file = "pytz-2025.2.tar.gz", hash = "sha256:360b9e3dbb49a209c21ad61809c7fb453643e048b38924c765813546746e81c3"},
]
+[[package]]
+name = "pyunormalize"
+version = "16.0.0"
+description = "Unicode normalization forms (NFC, NFKC, NFD, NFKD). A library independent of the Python core Unicode database."
+optional = false
+python-versions = ">=3.6"
+groups = ["main"]
+files = [
+ {file = "pyunormalize-16.0.0-py3-none-any.whl", hash = "sha256:c647d95e5d1e2ea9a2f448d1d95d8518348df24eab5c3fd32d2b5c3300a49152"},
+ {file = "pyunormalize-16.0.0.tar.gz", hash = "sha256:2e1dfbb4a118154ae26f70710426a52a364b926c9191f764601f5a8cb12761f7"},
+]
+
+[[package]]
+name = "pywin32"
+version = "310"
+description = "Python for Window Extensions"
+optional = false
+python-versions = "*"
+groups = ["main"]
+markers = "platform_system == \"Windows\""
+files = [
+ {file = "pywin32-310-cp310-cp310-win32.whl", hash = "sha256:6dd97011efc8bf51d6793a82292419eba2c71cf8e7250cfac03bba284454abc1"},
+ {file = "pywin32-310-cp310-cp310-win_amd64.whl", hash = "sha256:c3e78706e4229b915a0821941a84e7ef420bf2b77e08c9dae3c76fd03fd2ae3d"},
+ {file = "pywin32-310-cp310-cp310-win_arm64.whl", hash = "sha256:33babed0cf0c92a6f94cc6cc13546ab24ee13e3e800e61ed87609ab91e4c8213"},
+ {file = "pywin32-310-cp311-cp311-win32.whl", hash = "sha256:1e765f9564e83011a63321bb9d27ec456a0ed90d3732c4b2e312b855365ed8bd"},
+ {file = "pywin32-310-cp311-cp311-win_amd64.whl", hash = "sha256:126298077a9d7c95c53823934f000599f66ec9296b09167810eb24875f32689c"},
+ {file = "pywin32-310-cp311-cp311-win_arm64.whl", hash = "sha256:19ec5fc9b1d51c4350be7bb00760ffce46e6c95eaf2f0b2f1150657b1a43c582"},
+ {file = "pywin32-310-cp312-cp312-win32.whl", hash = "sha256:8a75a5cc3893e83a108c05d82198880704c44bbaee4d06e442e471d3c9ea4f3d"},
+ {file = "pywin32-310-cp312-cp312-win_amd64.whl", hash = "sha256:bf5c397c9a9a19a6f62f3fb821fbf36cac08f03770056711f765ec1503972060"},
+ {file = "pywin32-310-cp312-cp312-win_arm64.whl", hash = "sha256:2349cc906eae872d0663d4d6290d13b90621eaf78964bb1578632ff20e152966"},
+ {file = "pywin32-310-cp313-cp313-win32.whl", hash = "sha256:5d241a659c496ada3253cd01cfaa779b048e90ce4b2b38cd44168ad555ce74ab"},
+ {file = "pywin32-310-cp313-cp313-win_amd64.whl", hash = "sha256:667827eb3a90208ddbdcc9e860c81bde63a135710e21e4cb3348968e4bd5249e"},
+ {file = "pywin32-310-cp313-cp313-win_arm64.whl", hash = "sha256:e308f831de771482b7cf692a1f308f8fca701b2d8f9dde6cc440c7da17e47b33"},
+ {file = "pywin32-310-cp38-cp38-win32.whl", hash = "sha256:0867beb8addefa2e3979d4084352e4ac6e991ca45373390775f7084cc0209b9c"},
+ {file = "pywin32-310-cp38-cp38-win_amd64.whl", hash = "sha256:30f0a9b3138fb5e07eb4973b7077e1883f558e40c578c6925acc7a94c34eaa36"},
+ {file = "pywin32-310-cp39-cp39-win32.whl", hash = "sha256:851c8d927af0d879221e616ae1f66145253537bbdd321a77e8ef701b443a9a1a"},
+ {file = "pywin32-310-cp39-cp39-win_amd64.whl", hash = "sha256:96867217335559ac619f00ad70e513c0fcf84b8a3af9fc2bba3b59b97da70475"},
+]
+
[[package]]
name = "pyyaml"
version = "6.0.2"
@@ -4241,6 +5627,25 @@ files = [
{file = "pyyaml-6.0.2.tar.gz", hash = "sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e"},
]
+[[package]]
+name = "rabinmiller"
+version = "0.1.0"
+description = "Pure-Python implementation of the Miller-Rabin primality test."
+optional = false
+python-versions = ">=3.7"
+groups = ["main"]
+files = [
+ {file = "rabinmiller-0.1.0-py3-none-any.whl", hash = "sha256:3fec2d26fc210772ced965a8f0e2870e5582cadf255bc665ef3f4932752ada5f"},
+ {file = "rabinmiller-0.1.0.tar.gz", hash = "sha256:a9873aa6fdd0c26d5205d99e126fd94e6e1bb2aa966e167e136dfbfab0d0556d"},
+]
+
+[package.extras]
+coveralls = ["coveralls (>=4.0,<5.0)"]
+docs = ["sphinx (>=5.0,<6.0)", "sphinx-autodoc-typehints (>=1.23.0,<1.24.0)", "sphinx-rtd-theme (>=2.0.0,<2.1.0)", "toml (>=0.10.2,<0.11.0)"]
+lint = ["pylint (>=2.17.0,<2.18.0) ; python_version < \"3.12\"", "pylint (>=3.2.0,<3.3.0) ; python_version >= \"3.12\""]
+publish = ["build (>=0.10,<1.0)", "twine (>=4.0,<5.0)"]
+test = ["pytest (>=7.4,<8.0) ; python_version < \"3.12\"", "pytest (>=8.2,<9.0) ; python_version >= \"3.12\"", "pytest-cov (>=4.1,<5.0) ; python_version < \"3.12\"", "pytest-cov (>=5.0,<6.0) ; python_version >= \"3.12\""]
+
[[package]]
name = "redis"
version = "5.2.1"
@@ -4260,6 +5665,23 @@ async-timeout = {version = ">=4.0.3", markers = "python_full_version < \"3.11.3\
hiredis = ["hiredis (>=3.0.0)"]
ocsp = ["cryptography (>=36.0.1)", "pyopenssl (==23.2.1)", "requests (>=2.31.0)"]
+[[package]]
+name = "referencing"
+version = "0.36.2"
+description = "JSON Referencing + Python"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "referencing-0.36.2-py3-none-any.whl", hash = "sha256:e8699adbbf8b5c7de96d8ffa0eb5c158b3beafce084968e2ea8bb08c6794dcd0"},
+ {file = "referencing-0.36.2.tar.gz", hash = "sha256:df2e89862cd09deabbdba16944cc3f10feb6b3e6f18e902f7cc25609a34775aa"},
+]
+
+[package.dependencies]
+attrs = ">=22.2.0"
+rpds-py = ">=0.7.0"
+typing-extensions = {version = ">=4.4.0", markers = "python_version < \"3.13\""}
+
[[package]]
name = "regex"
version = "2024.11.6"
@@ -4401,6 +5823,27 @@ files = [
[package.dependencies]
requests = ">=2.0.1,<3.0.0"
+[[package]]
+name = "rlp"
+version = "4.1.0"
+description = "rlp: A package for Recursive Length Prefix encoding and decoding"
+optional = false
+python-versions = "<4,>=3.8"
+groups = ["main"]
+files = [
+ {file = "rlp-4.1.0-py3-none-any.whl", hash = "sha256:8eca394c579bad34ee0b937aecb96a57052ff3716e19c7a578883e767bc5da6f"},
+ {file = "rlp-4.1.0.tar.gz", hash = "sha256:be07564270a96f3e225e2c107db263de96b5bc1f27722d2855bd3459a08e95a9"},
+]
+
+[package.dependencies]
+eth-utils = ">=2"
+
+[package.extras]
+dev = ["build (>=0.9.0)", "bump_my_version (>=0.19.0)", "hypothesis (>=6.22.0,<6.108.7)", "ipython", "pre-commit (>=3.4.0)", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)", "sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)", "tox (>=4.0.0)", "twine", "wheel"]
+docs = ["sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)"]
+rust-backend = ["rusty-rlp (>=0.2.1)"]
+test = ["hypothesis (>=6.22.0,<6.108.7)", "pytest (>=7.0.0)", "pytest-xdist (>=2.4.0)"]
+
[[package]]
name = "roman-numerals-py"
version = "3.1.0"
@@ -4417,6 +5860,130 @@ files = [
lint = ["mypy (==1.15.0)", "pyright (==1.1.394)", "ruff (==0.9.7)"]
test = ["pytest (>=8)"]
+[[package]]
+name = "rpds-py"
+version = "0.24.0"
+description = "Python bindings to Rust's persistent data structures (rpds)"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "rpds_py-0.24.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:006f4342fe729a368c6df36578d7a348c7c716be1da0a1a0f86e3021f8e98724"},
+ {file = "rpds_py-0.24.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:2d53747da70a4e4b17f559569d5f9506420966083a31c5fbd84e764461c4444b"},
+ {file = "rpds_py-0.24.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8acd55bd5b071156bae57b555f5d33697998752673b9de554dd82f5b5352727"},
+ {file = "rpds_py-0.24.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7e80d375134ddb04231a53800503752093dbb65dad8dabacce2c84cccc78e964"},
+ {file = "rpds_py-0.24.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:60748789e028d2a46fc1c70750454f83c6bdd0d05db50f5ae83e2db500b34da5"},
+ {file = "rpds_py-0.24.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6e1daf5bf6c2be39654beae83ee6b9a12347cb5aced9a29eecf12a2d25fff664"},
+ {file = "rpds_py-0.24.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1b221c2457d92a1fb3c97bee9095c874144d196f47c038462ae6e4a14436f7bc"},
+ {file = "rpds_py-0.24.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:66420986c9afff67ef0c5d1e4cdc2d0e5262f53ad11e4f90e5e22448df485bf0"},
+ {file = "rpds_py-0.24.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:43dba99f00f1d37b2a0265a259592d05fcc8e7c19d140fe51c6e6f16faabeb1f"},
+ {file = "rpds_py-0.24.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:a88c0d17d039333a41d9bf4616bd062f0bd7aa0edeb6cafe00a2fc2a804e944f"},
+ {file = "rpds_py-0.24.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:cc31e13ce212e14a539d430428cd365e74f8b2d534f8bc22dd4c9c55b277b875"},
+ {file = "rpds_py-0.24.0-cp310-cp310-win32.whl", hash = "sha256:fc2c1e1b00f88317d9de6b2c2b39b012ebbfe35fe5e7bef980fd2a91f6100a07"},
+ {file = "rpds_py-0.24.0-cp310-cp310-win_amd64.whl", hash = "sha256:c0145295ca415668420ad142ee42189f78d27af806fcf1f32a18e51d47dd2052"},
+ {file = "rpds_py-0.24.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:2d3ee4615df36ab8eb16c2507b11e764dcc11fd350bbf4da16d09cda11fcedef"},
+ {file = "rpds_py-0.24.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e13ae74a8a3a0c2f22f450f773e35f893484fcfacb00bb4344a7e0f4f48e1f97"},
+ {file = "rpds_py-0.24.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cf86f72d705fc2ef776bb7dd9e5fbba79d7e1f3e258bf9377f8204ad0fc1c51e"},
+ {file = "rpds_py-0.24.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c43583ea8517ed2e780a345dd9960896afc1327e8cf3ac8239c167530397440d"},
+ {file = "rpds_py-0.24.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4cd031e63bc5f05bdcda120646a0d32f6d729486d0067f09d79c8db5368f4586"},
+ {file = "rpds_py-0.24.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:34d90ad8c045df9a4259c47d2e16a3f21fdb396665c94520dbfe8766e62187a4"},
+ {file = "rpds_py-0.24.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e838bf2bb0b91ee67bf2b889a1a841e5ecac06dd7a2b1ef4e6151e2ce155c7ae"},
+ {file = "rpds_py-0.24.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04ecf5c1ff4d589987b4d9882872f80ba13da7d42427234fce8f22efb43133bc"},
+ {file = "rpds_py-0.24.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:630d3d8ea77eabd6cbcd2ea712e1c5cecb5b558d39547ac988351195db433f6c"},
+ {file = "rpds_py-0.24.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:ebcb786b9ff30b994d5969213a8430cbb984cdd7ea9fd6df06663194bd3c450c"},
+ {file = "rpds_py-0.24.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:174e46569968ddbbeb8a806d9922f17cd2b524aa753b468f35b97ff9c19cb718"},
+ {file = "rpds_py-0.24.0-cp311-cp311-win32.whl", hash = "sha256:5ef877fa3bbfb40b388a5ae1cb00636a624690dcb9a29a65267054c9ea86d88a"},
+ {file = "rpds_py-0.24.0-cp311-cp311-win_amd64.whl", hash = "sha256:e274f62cbd274359eff63e5c7e7274c913e8e09620f6a57aae66744b3df046d6"},
+ {file = "rpds_py-0.24.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:d8551e733626afec514b5d15befabea0dd70a343a9f23322860c4f16a9430205"},
+ {file = "rpds_py-0.24.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0e374c0ce0ca82e5b67cd61fb964077d40ec177dd2c4eda67dba130de09085c7"},
+ {file = "rpds_py-0.24.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d69d003296df4840bd445a5d15fa5b6ff6ac40496f956a221c4d1f6f7b4bc4d9"},
+ {file = "rpds_py-0.24.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8212ff58ac6dfde49946bea57474a386cca3f7706fc72c25b772b9ca4af6b79e"},
+ {file = "rpds_py-0.24.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:528927e63a70b4d5f3f5ccc1fa988a35456eb5d15f804d276709c33fc2f19bda"},
+ {file = "rpds_py-0.24.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a824d2c7a703ba6daaca848f9c3d5cb93af0505be505de70e7e66829affd676e"},
+ {file = "rpds_py-0.24.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:44d51febb7a114293ffd56c6cf4736cb31cd68c0fddd6aa303ed09ea5a48e029"},
+ {file = "rpds_py-0.24.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3fab5f4a2c64a8fb64fc13b3d139848817a64d467dd6ed60dcdd6b479e7febc9"},
+ {file = "rpds_py-0.24.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:9be4f99bee42ac107870c61dfdb294d912bf81c3c6d45538aad7aecab468b6b7"},
+ {file = "rpds_py-0.24.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:564c96b6076a98215af52f55efa90d8419cc2ef45d99e314fddefe816bc24f91"},
+ {file = "rpds_py-0.24.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:75a810b7664c17f24bf2ffd7f92416c00ec84b49bb68e6a0d93e542406336b56"},
+ {file = "rpds_py-0.24.0-cp312-cp312-win32.whl", hash = "sha256:f6016bd950be4dcd047b7475fdf55fb1e1f59fc7403f387be0e8123e4a576d30"},
+ {file = "rpds_py-0.24.0-cp312-cp312-win_amd64.whl", hash = "sha256:998c01b8e71cf051c28f5d6f1187abbdf5cf45fc0efce5da6c06447cba997034"},
+ {file = "rpds_py-0.24.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:3d2d8e4508e15fc05b31285c4b00ddf2e0eb94259c2dc896771966a163122a0c"},
+ {file = "rpds_py-0.24.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0f00c16e089282ad68a3820fd0c831c35d3194b7cdc31d6e469511d9bffc535c"},
+ {file = "rpds_py-0.24.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:951cc481c0c395c4a08639a469d53b7d4afa252529a085418b82a6b43c45c240"},
+ {file = "rpds_py-0.24.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c9ca89938dff18828a328af41ffdf3902405a19f4131c88e22e776a8e228c5a8"},
+ {file = "rpds_py-0.24.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ed0ef550042a8dbcd657dfb284a8ee00f0ba269d3f2286b0493b15a5694f9fe8"},
+ {file = "rpds_py-0.24.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2b2356688e5d958c4d5cb964af865bea84db29971d3e563fb78e46e20fe1848b"},
+ {file = "rpds_py-0.24.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78884d155fd15d9f64f5d6124b486f3d3f7fd7cd71a78e9670a0f6f6ca06fb2d"},
+ {file = "rpds_py-0.24.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6a4a535013aeeef13c5532f802708cecae8d66c282babb5cd916379b72110cf7"},
+ {file = "rpds_py-0.24.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:84e0566f15cf4d769dade9b366b7b87c959be472c92dffb70462dd0844d7cbad"},
+ {file = "rpds_py-0.24.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:823e74ab6fbaa028ec89615ff6acb409e90ff45580c45920d4dfdddb069f2120"},
+ {file = "rpds_py-0.24.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:c61a2cb0085c8783906b2f8b1f16a7e65777823c7f4d0a6aaffe26dc0d358dd9"},
+ {file = "rpds_py-0.24.0-cp313-cp313-win32.whl", hash = "sha256:60d9b630c8025b9458a9d114e3af579a2c54bd32df601c4581bd054e85258143"},
+ {file = "rpds_py-0.24.0-cp313-cp313-win_amd64.whl", hash = "sha256:6eea559077d29486c68218178ea946263b87f1c41ae7f996b1f30a983c476a5a"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-macosx_10_12_x86_64.whl", hash = "sha256:d09dc82af2d3c17e7dd17120b202a79b578d79f2b5424bda209d9966efeed114"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:5fc13b44de6419d1e7a7e592a4885b323fbc2f46e1f22151e3a8ed3b8b920405"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c347a20d79cedc0a7bd51c4d4b7dbc613ca4e65a756b5c3e57ec84bd43505b47"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:20f2712bd1cc26a3cc16c5a1bfee9ed1abc33d4cdf1aabd297fe0eb724df4272"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:aad911555286884be1e427ef0dc0ba3929e6821cbeca2194b13dc415a462c7fd"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0aeb3329c1721c43c58cae274d7d2ca85c1690d89485d9c63a006cb79a85771a"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2a0f156e9509cee987283abd2296ec816225145a13ed0391df8f71bf1d789e2d"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:aa6800adc8204ce898c8a424303969b7aa6a5e4ad2789c13f8648739830323b7"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:a18fc371e900a21d7392517c6f60fe859e802547309e94313cd8181ad9db004d"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:9168764133fd919f8dcca2ead66de0105f4ef5659cbb4fa044f7014bed9a1797"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:5f6e3cec44ba05ee5cbdebe92d052f69b63ae792e7d05f1020ac5e964394080c"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-win32.whl", hash = "sha256:8ebc7e65ca4b111d928b669713865f021b7773350eeac4a31d3e70144297baba"},
+ {file = "rpds_py-0.24.0-cp313-cp313t-win_amd64.whl", hash = "sha256:675269d407a257b8c00a6b58205b72eec8231656506c56fd429d924ca00bb350"},
+ {file = "rpds_py-0.24.0-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:a36b452abbf29f68527cf52e181fced56685731c86b52e852053e38d8b60bc8d"},
+ {file = "rpds_py-0.24.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:8b3b397eefecec8e8e39fa65c630ef70a24b09141a6f9fc17b3c3a50bed6b50e"},
+ {file = "rpds_py-0.24.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cdabcd3beb2a6dca7027007473d8ef1c3b053347c76f685f5f060a00327b8b65"},
+ {file = "rpds_py-0.24.0-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5db385bacd0c43f24be92b60c857cf760b7f10d8234f4bd4be67b5b20a7c0b6b"},
+ {file = "rpds_py-0.24.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8097b3422d020ff1c44effc40ae58e67d93e60d540a65649d2cdaf9466030791"},
+ {file = "rpds_py-0.24.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:493fe54318bed7d124ce272fc36adbf59d46729659b2c792e87c3b95649cdee9"},
+ {file = "rpds_py-0.24.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8aa362811ccdc1f8dadcc916c6d47e554169ab79559319ae9fae7d7752d0d60c"},
+ {file = "rpds_py-0.24.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d8f9a6e7fd5434817526815f09ea27f2746c4a51ee11bb3439065f5fc754db58"},
+ {file = "rpds_py-0.24.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:8205ee14463248d3349131bb8099efe15cd3ce83b8ef3ace63c7e976998e7124"},
+ {file = "rpds_py-0.24.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:921ae54f9ecba3b6325df425cf72c074cd469dea843fb5743a26ca7fb2ccb149"},
+ {file = "rpds_py-0.24.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:32bab0a56eac685828e00cc2f5d1200c548f8bc11f2e44abf311d6b548ce2e45"},
+ {file = "rpds_py-0.24.0-cp39-cp39-win32.whl", hash = "sha256:f5c0ed12926dec1dfe7d645333ea59cf93f4d07750986a586f511c0bc61fe103"},
+ {file = "rpds_py-0.24.0-cp39-cp39-win_amd64.whl", hash = "sha256:afc6e35f344490faa8276b5f2f7cbf71f88bc2cda4328e00553bd451728c571f"},
+ {file = "rpds_py-0.24.0-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:619ca56a5468f933d940e1bf431c6f4e13bef8e688698b067ae68eb4f9b30e3a"},
+ {file = "rpds_py-0.24.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:4b28e5122829181de1898c2c97f81c0b3246d49f585f22743a1246420bb8d399"},
+ {file = "rpds_py-0.24.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8e5ab32cf9eb3647450bc74eb201b27c185d3857276162c101c0f8c6374e098"},
+ {file = "rpds_py-0.24.0-pp310-pypy310_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:208b3a70a98cf3710e97cabdc308a51cd4f28aa6e7bb11de3d56cd8b74bab98d"},
+ {file = "rpds_py-0.24.0-pp310-pypy310_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:bbc4362e06f950c62cad3d4abf1191021b2ffaf0b31ac230fbf0526453eee75e"},
+ {file = "rpds_py-0.24.0-pp310-pypy310_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ebea2821cdb5f9fef44933617be76185b80150632736f3d76e54829ab4a3b4d1"},
+ {file = "rpds_py-0.24.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b9a4df06c35465ef4d81799999bba810c68d29972bf1c31db61bfdb81dd9d5bb"},
+ {file = "rpds_py-0.24.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d3aa13bdf38630da298f2e0d77aca967b200b8cc1473ea05248f6c5e9c9bdb44"},
+ {file = "rpds_py-0.24.0-pp310-pypy310_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:041f00419e1da7a03c46042453598479f45be3d787eb837af382bfc169c0db33"},
+ {file = "rpds_py-0.24.0-pp310-pypy310_pp73-musllinux_1_2_i686.whl", hash = "sha256:d8754d872a5dfc3c5bf9c0e059e8107451364a30d9fd50f1f1a85c4fb9481164"},
+ {file = "rpds_py-0.24.0-pp310-pypy310_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:896c41007931217a343eff197c34513c154267636c8056fb409eafd494c3dcdc"},
+ {file = "rpds_py-0.24.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:92558d37d872e808944c3c96d0423b8604879a3d1c86fdad508d7ed91ea547d5"},
+ {file = "rpds_py-0.24.0-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:f9e0057a509e096e47c87f753136c9b10d7a91842d8042c2ee6866899a717c0d"},
+ {file = "rpds_py-0.24.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d6e109a454412ab82979c5b1b3aee0604eca4bbf9a02693bb9df027af2bfa91a"},
+ {file = "rpds_py-0.24.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fc1c892b1ec1f8cbd5da8de287577b455e388d9c328ad592eabbdcb6fc93bee5"},
+ {file = "rpds_py-0.24.0-pp311-pypy311_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:9c39438c55983d48f4bb3487734d040e22dad200dab22c41e331cee145e7a50d"},
+ {file = "rpds_py-0.24.0-pp311-pypy311_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9d7e8ce990ae17dda686f7e82fd41a055c668e13ddcf058e7fb5e9da20b57793"},
+ {file = "rpds_py-0.24.0-pp311-pypy311_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9ea7f4174d2e4194289cb0c4e172d83e79a6404297ff95f2875cf9ac9bced8ba"},
+ {file = "rpds_py-0.24.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bb2954155bb8f63bb19d56d80e5e5320b61d71084617ed89efedb861a684baea"},
+ {file = "rpds_py-0.24.0-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04f2b712a2206e13800a8136b07aaedc23af3facab84918e7aa89e4be0260032"},
+ {file = "rpds_py-0.24.0-pp311-pypy311_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:eda5c1e2a715a4cbbca2d6d304988460942551e4e5e3b7457b50943cd741626d"},
+ {file = "rpds_py-0.24.0-pp311-pypy311_pp73-musllinux_1_2_i686.whl", hash = "sha256:9abc80fe8c1f87218db116016de575a7998ab1629078c90840e8d11ab423ee25"},
+ {file = "rpds_py-0.24.0-pp311-pypy311_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:6a727fd083009bc83eb83d6950f0c32b3c94c8b80a9b667c87f4bd1274ca30ba"},
+ {file = "rpds_py-0.24.0-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:e0f3ef95795efcd3b2ec3fe0a5bcfb5dadf5e3996ea2117427e524d4fbf309c6"},
+ {file = "rpds_py-0.24.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:2c13777ecdbbba2077670285dd1fe50828c8742f6a4119dbef6f83ea13ad10fb"},
+ {file = "rpds_py-0.24.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:79e8d804c2ccd618417e96720ad5cd076a86fa3f8cb310ea386a3e6229bae7d1"},
+ {file = "rpds_py-0.24.0-pp39-pypy39_pp73-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fd822f019ccccd75c832deb7aa040bb02d70a92eb15a2f16c7987b7ad4ee8d83"},
+ {file = "rpds_py-0.24.0-pp39-pypy39_pp73-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0047638c3aa0dbcd0ab99ed1e549bbf0e142c9ecc173b6492868432d8989a046"},
+ {file = "rpds_py-0.24.0-pp39-pypy39_pp73-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a5b66d1b201cc71bc3081bc2f1fc36b0c1f268b773e03bbc39066651b9e18391"},
+ {file = "rpds_py-0.24.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dbcbb6db5582ea33ce46a5d20a5793134b5365110d84df4e30b9d37c6fd40ad3"},
+ {file = "rpds_py-0.24.0-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:63981feca3f110ed132fd217bf7768ee8ed738a55549883628ee3da75bb9cb78"},
+ {file = "rpds_py-0.24.0-pp39-pypy39_pp73-musllinux_1_2_aarch64.whl", hash = "sha256:3a55fc10fdcbf1a4bd3c018eea422c52cf08700cf99c28b5cb10fe97ab77a0d3"},
+ {file = "rpds_py-0.24.0-pp39-pypy39_pp73-musllinux_1_2_i686.whl", hash = "sha256:c30ff468163a48535ee7e9bf21bd14c7a81147c0e58a36c1078289a8ca7af0bd"},
+ {file = "rpds_py-0.24.0-pp39-pypy39_pp73-musllinux_1_2_x86_64.whl", hash = "sha256:369d9c6d4c714e36d4a03957b4783217a3ccd1e222cdd67d464a3a479fc17796"},
+ {file = "rpds_py-0.24.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:24795c099453e3721fda5d8ddd45f5dfcc8e5a547ce7b8e9da06fecc3832e26f"},
+ {file = "rpds_py-0.24.0.tar.gz", hash = "sha256:772cc1b2cd963e7e17e6cc55fe0371fb9c704d63e44cacec7b9b7f523b78919e"},
+]
+
[[package]]
name = "rsa"
version = "4.9"
@@ -4836,7 +6403,7 @@ version = "2.6"
description = "A modern CSS selector implementation for Beautiful Soup."
optional = false
python-versions = ">=3.8"
-groups = ["dev", "docs", "research"]
+groups = ["main", "dev", "docs", "research"]
files = [
{file = "soupsieve-2.6-py3-none-any.whl", hash = "sha256:e72c4ff06e4fb6e4b5a9f0f55fe6e81514581fca1515028625d0f299c602ccc9"},
{file = "soupsieve-2.6.tar.gz", hash = "sha256:e2e68417777af359ec65daac1057404a3c8a5455bb8abc36f1a9866ab1a51abb"},
@@ -5469,6 +7036,19 @@ files = [
{file = "tomlkit-0.13.2.tar.gz", hash = "sha256:fff5fe59a87295b278abd31bec92c15d9bc4a06885ab12bcea52c71119392e79"},
]
+[[package]]
+name = "toolz"
+version = "1.0.0"
+description = "List processing tools and functional utilities"
+optional = false
+python-versions = ">=3.8"
+groups = ["main"]
+markers = "implementation_name == \"cpython\" or implementation_name == \"pypy\""
+files = [
+ {file = "toolz-1.0.0-py3-none-any.whl", hash = "sha256:292c8f1c4e7516bf9086f8850935c799a874039c8bcf959d47b600e4c44a6236"},
+ {file = "toolz-1.0.0.tar.gz", hash = "sha256:2c86e3d9a04798ac556793bced838816296a2f085017664e4995cb40a1047a02"},
+]
+
[[package]]
name = "torch"
version = "2.6.0"
@@ -5526,6 +7106,32 @@ typing-extensions = ">=4.10.0"
opt-einsum = ["opt-einsum (>=3.3)"]
optree = ["optree (>=0.13.0)"]
+[[package]]
+name = "tox"
+version = "4.23.2"
+description = "tox is a generic virtualenv management and test command line tool"
+optional = false
+python-versions = ">=3.8"
+groups = ["main"]
+files = [
+ {file = "tox-4.23.2-py3-none-any.whl", hash = "sha256:452bc32bb031f2282881a2118923176445bac783ab97c874b8770ab4c3b76c38"},
+ {file = "tox-4.23.2.tar.gz", hash = "sha256:86075e00e555df6e82e74cfc333917f91ecb47ffbc868dcafbd2672e332f4a2c"},
+]
+
+[package.dependencies]
+cachetools = ">=5.5"
+chardet = ">=5.2"
+colorama = ">=0.4.6"
+filelock = ">=3.16.1"
+packaging = ">=24.1"
+platformdirs = ">=4.3.6"
+pluggy = ">=1.5"
+pyproject-api = ">=1.8"
+virtualenv = ">=20.26.6"
+
+[package.extras]
+test = ["devpi-process (>=1.0.2)", "pytest (>=8.3.3)", "pytest-mock (>=3.14)"]
+
[[package]]
name = "tqdm"
version = "4.67.1"
@@ -5642,16 +7248,31 @@ build = ["cmake (>=3.20)", "lit"]
tests = ["autopep8", "flake8", "isort", "llnl-hatchet", "numpy", "pytest", "scipy (>=1.7.1)"]
tutorials = ["matplotlib", "pandas", "tabulate"]
+[[package]]
+name = "types-requests"
+version = "2.32.0.20250328"
+description = "Typing stubs for requests"
+optional = false
+python-versions = ">=3.9"
+groups = ["main"]
+files = [
+ {file = "types_requests-2.32.0.20250328-py3-none-any.whl", hash = "sha256:72ff80f84b15eb3aa7a8e2625fffb6a93f2ad5a0c20215fc1dcfa61117bcb2a2"},
+ {file = "types_requests-2.32.0.20250328.tar.gz", hash = "sha256:c9e67228ea103bd811c96984fac36ed2ae8da87a36a633964a21f199d60baf32"},
+]
+
+[package.dependencies]
+urllib3 = ">=2"
+
[[package]]
name = "typing-extensions"
-version = "4.13.1"
+version = "4.12.2"
description = "Backported and Experimental Type Hints for Python 3.8+"
optional = false
python-versions = ">=3.8"
groups = ["main", "demo", "dev", "docs", "research"]
files = [
- {file = "typing_extensions-4.13.1-py3-none-any.whl", hash = "sha256:4b6cf02909eb5495cfbc3f6e8fd49217e6cc7944e145cdda8caa3734777f9e69"},
- {file = "typing_extensions-4.13.1.tar.gz", hash = "sha256:98795af00fb9640edec5b8e31fc647597b4691f099ad75f469a2616be1a76dff"},
+ {file = "typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d"},
+ {file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"},
]
[[package]]
@@ -5733,14 +7354,14 @@ standard = ["colorama (>=0.4) ; sys_platform == \"win32\"", "httptools (>=0.6.3)
[[package]]
name = "virtualenv"
-version = "20.30.0"
+version = "20.28.1"
description = "Virtual Python Environment builder"
optional = false
python-versions = ">=3.8"
-groups = ["dev"]
+groups = ["main", "dev"]
files = [
- {file = "virtualenv-20.30.0-py3-none-any.whl", hash = "sha256:e34302959180fca3af42d1800df014b35019490b119eba981af27f2fa486e5d6"},
- {file = "virtualenv-20.30.0.tar.gz", hash = "sha256:800863162bcaa5450a6e4d721049730e7f2dae07720e0902b0e4040bd6f9ada8"},
+ {file = "virtualenv-20.28.1-py3-none-any.whl", hash = "sha256:412773c85d4dab0409b83ec36f7a6499e72eaf08c80e81e9576bca61831c71cb"},
+ {file = "virtualenv-20.28.1.tar.gz", hash = "sha256:5d34ab240fdb5d21549b76f9e8ff3af28252f5499fb6d6f031adac4e5a8c5329"},
]
[package.dependencies]
@@ -5750,7 +7371,41 @@ platformdirs = ">=3.9.1,<5"
[package.extras]
docs = ["furo (>=2023.7.26)", "proselint (>=0.13)", "sphinx (>=7.1.2,!=7.3)", "sphinx-argparse (>=0.4)", "sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=23.6)"]
-test = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.4)", "pytest-env (>=0.8.2)", "pytest-freezer (>=0.4.8) ; platform_python_implementation == \"PyPy\" or platform_python_implementation == \"GraalVM\" or platform_python_implementation == \"CPython\" and sys_platform == \"win32\" and python_version >= \"3.13\"", "pytest-mock (>=3.11.1)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=68)", "time-machine (>=2.10) ; platform_python_implementation == \"CPython\""]
+test = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.4)", "pytest-env (>=0.8.2)", "pytest-freezer (>=0.4.8) ; platform_python_implementation == \"PyPy\" or platform_python_implementation == \"CPython\" and sys_platform == \"win32\" and python_version >= \"3.13\"", "pytest-mock (>=3.11.1)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=68)", "time-machine (>=2.10) ; platform_python_implementation == \"CPython\""]
+
+[[package]]
+name = "web3"
+version = "7.10.0"
+description = "web3: A Python library for interacting with Ethereum"
+optional = false
+python-versions = "<4,>=3.8"
+groups = ["main"]
+files = [
+ {file = "web3-7.10.0-py3-none-any.whl", hash = "sha256:06fcab920554450e9f7d108da5e6b9d29c0d1a981a59a5551cc82d2cb2233b34"},
+ {file = "web3-7.10.0.tar.gz", hash = "sha256:0cace05ea14f800a4497649ecd99332ca4e85c8a90ea577e05ae909cb08902b9"},
+]
+
+[package.dependencies]
+aiohttp = ">=3.7.4.post0"
+eth-abi = ">=5.0.1"
+eth-account = ">=0.13.1"
+eth-hash = {version = ">=0.5.1", extras = ["pycryptodome"]}
+eth-typing = ">=5.0.0"
+eth-utils = ">=5.0.0"
+hexbytes = ">=1.2.0"
+pydantic = ">=2.4.0"
+pyunormalize = ">=15.0.0"
+pywin32 = {version = ">=223", markers = "platform_system == \"Windows\""}
+requests = ">=2.23.0"
+types-requests = ">=2.0.0"
+typing-extensions = ">=4.0.1"
+websockets = ">=10.0.0,<16.0.0"
+
+[package.extras]
+dev = ["build (>=0.9.0)", "bump_my_version (>=0.19.0)", "eth-tester[py-evm] (>=0.12.0b1,<0.13.0b1)", "flaky (>=3.7.0)", "hypothesis (>=3.31.2)", "ipython", "mypy (==1.10.0)", "pre-commit (>=3.4.0)", "py-geth (>=5.1.0)", "pytest (>=7.0.0)", "pytest-asyncio (>=0.18.1,<0.23)", "pytest-mock (>=1.10)", "pytest-xdist (>=2.4.0)", "setuptools (>=38.6.0)", "sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)", "tox (>=4.0.0)", "tqdm (>4.32)", "twine (>=1.13)", "wheel"]
+docs = ["sphinx (>=6.0.0)", "sphinx-autobuild (>=2021.3.14)", "sphinx_rtd_theme (>=1.0.0)", "towncrier (>=24,<25)"]
+test = ["eth-tester[py-evm] (>=0.12.0b1,<0.13.0b1)", "flaky (>=3.7.0)", "hypothesis (>=3.31.2)", "mypy (==1.10.0)", "pre-commit (>=3.4.0)", "py-geth (>=5.1.0)", "pytest (>=7.0.0)", "pytest-asyncio (>=0.18.1,<0.23)", "pytest-mock (>=1.10)", "pytest-xdist (>=2.4.0)", "tox (>=4.0.0)"]
+tester = ["eth-tester[py-evm] (>=0.12.0b1,<0.13.0b1)", "py-geth (>=5.1.0)"]
[[package]]
name = "websockets"
@@ -5758,7 +7413,7 @@ version = "15.0.1"
description = "An implementation of the WebSocket Protocol (RFC 6455 & 7692)"
optional = false
python-versions = ">=3.9"
-groups = ["demo"]
+groups = ["main", "demo"]
files = [
{file = "websockets-15.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d63efaa0cd96cf0c5fe4d581521d9fa87744540d4bc999ae6e08595a1014b45b"},
{file = "websockets-15.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ac60e3b188ec7574cb761b08d50fcedf9d77f1530352db4eef1707fe9dee7205"},
@@ -6168,4 +7823,4 @@ cffi = ["cffi (>=1.11)"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.11,<3.13"
-content-hash = "1c59588bd2a44d6870e86c0636b8ec40786daf3aaf331c3663a6846050ef115c"
+content-hash = "a9c82ec63992d62e2b032bf17101384157d8332b87cf9f8f974061ee6df57b9d"
diff --git a/pyproject.toml b/pyproject.toml
index 76d4e91..bae1bcf 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -53,6 +53,12 @@ dependencies = [
"aiogram (>=3.19.0,<4.0.0)",
"faiss-cpu (>=1.10.0,<2.0.0)",
"simsimd (>=6.2.1,<7.0.0)",
+ "cdp-sdk (>=0.21.0,<0.22.0)",
+ "coinbase-agentkit (>=0.4.0,<0.5.0)",
+ "coinbase-agentkit-langchain (>=0.3.0,<0.4.0)",
+ "accelerate (>=1.6.0,<2.0.0)",
+ "markdownify (>=1.1.0,<2.0.0)",
+ "langchain-openai (>=0.3.14,<0.4.0)",
]
[project.scripts]
@@ -67,7 +73,7 @@ aioconsole = "^0.8.1"
fastapi = "^0.115.8"
uvicorn = "^0.34.0"
websockets = "^15.0"
-pydantic = "^2.10.6"
+pydantic = "^2.10.4"
pydantic-settings = "^2.7.1"
httpx = "^0.28.1"
python-multipart = "^0.0.20"
diff --git a/requirements.txt b/requirements.txt
index 9efaefa..f71e794 100644
Binary files a/requirements.txt and b/requirements.txt differ
diff --git a/tests/test_human.py b/tests/test_human.py
new file mode 100644
index 0000000..210ea58
--- /dev/null
+++ b/tests/test_human.py
@@ -0,0 +1,96 @@
+import asyncio
+import os
+from dotenv import load_dotenv
+
+from agentconnect.agents import AIAgent, HumanAgent
+from agentconnect.communication import CommunicationHub
+from agentconnect.core.registry import AgentRegistry
+from agentconnect.core.types import (
+ AgentIdentity,
+ Capability,
+ InteractionMode,
+ ModelName,
+ ModelProvider,
+ MessageType
+)
+
+async def main():
+ # Load environment variables
+ load_dotenv()
+
+ # Initialize registry and hub
+ registry = AgentRegistry()
+ hub = CommunicationHub(registry)
+
+ # Create agent identities
+ human_identity = AgentIdentity.create_key_based()
+ ai_identity = AgentIdentity.create_key_based()
+
+ # Create a human agent
+ human = HumanAgent(
+ agent_id="human1",
+ name="User",
+ identity=human_identity,
+ organization_id="org1"
+ )
+
+ # Create an AI agent
+ ai_assistant = AIAgent(
+ agent_id="ai1",
+ name="Assistant",
+ provider_type=ModelProvider.OPENAI,
+ model_name=ModelName.GPT4O,
+ api_key=os.getenv("OPENAI_API_KEY"),
+ identity=ai_identity,
+ capabilities=[Capability(
+ name="data_analysis",
+ description="Analyze data and provide insights",
+ input_schema={"data": "string"},
+ output_schema={"analysis": "string"},
+ )],
+ interaction_modes=[InteractionMode.HUMAN_TO_AGENT, InteractionMode.AGENT_TO_AGENT],
+ personality="professional and thorough",
+ organization_id="org1",
+ )
+
+ # Register both agents with the hub
+ await hub.register_agent(human)
+ await hub.register_agent(ai_assistant)
+
+ # Start both agent processing loops
+ human_task = asyncio.create_task(human.run())
+ ai_task = asyncio.create_task(ai_assistant.run())
+
+ try:
+ # Simulate AI agent performing a task
+ print("AI agent performing analysis...")
+ await asyncio.sleep(2) # Simulate work
+
+ analysis_result = "Based on the data, I recommend Strategy A with 78% confidence."
+
+ # AI sends results to human for approval
+ print("AI agent requesting human approval...")
+ await ai_assistant.send_message(
+ receiver_id=human.agent_id,
+ content=f"I've completed my analysis:\n\n{analysis_result}\n\nDo you approve this recommendation? (Type 'approve' or 'reject')",
+ message_type=MessageType.TEXT
+ )
+
+ # At this point, the human will see the message in their terminal
+ # and will be prompted to respond. The script will wait at this point.
+
+ # Let the interaction run for a while
+ print("Waiting for human interaction (30 seconds)...")
+ await asyncio.sleep(30)
+
+    finally:
+        # Cleanup: stop agents, cancel their processing loops, and unregister
+        print("Shutting down agents...")
+        await ai_assistant.stop()
+        await human.stop()
+        human_task.cancel()
+        ai_task.cancel()
+        await hub.unregister_agent(human.agent_id)
+        await hub.unregister_agent(ai_assistant.agent_id)
+        print("Done.")
+
+if __name__ == "__main__":
+ asyncio.run(main())
\ No newline at end of file
diff --git a/tests/test_human_simple.py b/tests/test_human_simple.py
new file mode 100644
index 0000000..611ca80
--- /dev/null
+++ b/tests/test_human_simple.py
@@ -0,0 +1,136 @@
+import asyncio
+import os
+import sys
+from dotenv import load_dotenv
+
+from agentconnect.agents import AIAgent, HumanAgent
+from agentconnect.communication import CommunicationHub
+from agentconnect.core.registry import AgentRegistry
+from agentconnect.utils.logging_config import setup_logging, LogLevel, disable_all_logging
+from agentconnect.core.types import (
+ AgentIdentity,
+ Capability,
+ InteractionMode,
+ ModelName,
+ ModelProvider,
+ MessageType
+)
+
+# Global variables for state tracking
+human_responded = asyncio.Event()
+conversation_ended = asyncio.Event()
+
+# Callback to notify when human responds
+def response_handler(response_data):
+    """Callback invoked when the human sends a message."""
+    # No `global` declaration needed: we only call methods on the module-level Events
+
+ # Check if this is an exit message
+ message_type = response_data.get('message_type', MessageType.TEXT)
+ if message_type == MessageType.STOP or response_data.get('content') == "__EXIT__":
+ print("Human requested to end the conversation.")
+ conversation_ended.set()
+
+ # Signal response received
+ human_responded.set()
+
+async def main():
+ setup_logging(LogLevel.INFO)
+ load_dotenv()
+
+ # Basic test to verify human-in-the-loop works
+ print("=== SIMPLIFIED HUMAN-IN-THE-LOOP TEST ===")
+ print("You can now interact with the AI agent.")
+ print("- Type any message to respond")
+ print("- Press Enter without typing to skip responding")
+ print("- Type 'exit', 'quit', or 'bye' to end the conversation")
+
+ # Initialize registry and hub
+ registry = AgentRegistry()
+ hub = CommunicationHub(registry)
+
+ # Create identities
+ human_identity = AgentIdentity.create_key_based()
+ ai_identity = AgentIdentity.create_key_based()
+
+ # Create agents
+ human = HumanAgent(
+ agent_id="human1",
+ name="User",
+ identity=human_identity,
+ organization_id="org1",
+ response_callbacks=[response_handler]
+ )
+
+ ai_assistant = AIAgent(
+ agent_id="ai1",
+ name="Assistant",
+ provider_type=ModelProvider.GOOGLE,
+ model_name=ModelName.GEMINI2_5_FLASH_PREVIEW,
+ api_key=os.getenv("GOOGLE_API_KEY", "fake-key-for-testing"),
+ identity=ai_identity,
+ capabilities=[],
+ interaction_modes=[InteractionMode.HUMAN_TO_AGENT],
+ personality="helpful",
+ organization_id="org1",
+ )
+
+ try:
+ # Register and start agents
+ await hub.register_agent(human)
+ await hub.register_agent(ai_assistant)
+
+ human_task = asyncio.create_task(human.run())
+ ai_task = asyncio.create_task(ai_assistant.run())
+
+ # Reset events
+ human_responded.clear()
+ conversation_ended.clear()
+
+ # AI sends a simple message to human
+ print("\nAI sending test message to start conversation...")
+ await ai_assistant.send_message(
+ receiver_id=human.agent_id,
+ content="Hello! This is a test of the human-in-the-loop interaction. You can respond normally, "
+ "skip responding by pressing Enter, or end the conversation by typing 'exit'.",
+ message_type=MessageType.TEXT
+ )
+
+ # Wait for conversation to end or max timeout (5 minutes)
+ print("\nConversation active - waiting for it to end naturally or timeout after 5 minutes...")
+ try:
+ await asyncio.wait_for(conversation_ended.wait(), timeout=300)
+ print("\nTest successful! Conversation ended naturally.")
+ except asyncio.TimeoutError:
+ print("\nTest timeout reached after 5 minutes.")
+
+ except Exception as e:
+ print(f"Error during test: {str(e)}")
+ finally:
+ # Clean up
+ print("\nShutting down...")
+ await ai_assistant.stop()
+ await human.stop()
+ await hub.unregister_agent(human.agent_id)
+ await hub.unregister_agent(ai_assistant.agent_id)
+
+ # Cancel tasks
+ human_task.cancel()
+ ai_task.cancel()
+ try:
+ await human_task
+ except asyncio.CancelledError:
+ pass
+ try:
+ await ai_task
+ except asyncio.CancelledError:
+ pass
+
+ print("Test completed.")
+
+if __name__ == "__main__":
+ try:
+ asyncio.run(main())
+ except KeyboardInterrupt:
+ print("\nTest terminated by user.")
+ sys.exit(0)
\ No newline at end of file
diff --git a/tests/test_react_prompt.py b/tests/test_react_prompt.py
new file mode 100644
index 0000000..be43c63
--- /dev/null
+++ b/tests/test_react_prompt.py
@@ -0,0 +1,48 @@
+"""
+Tests for React prompts with payment capabilities.
+"""
+import sys
+import os
+
+# Add the parent directory to the system path for imports
+sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
+
+from agentconnect.prompts.templates.prompt_templates import (
+ ReactConfig,
+ PromptTemplates
+)
+
+def test_react_prompt_with_payments():
+ """Test that React prompt works with payment capabilities enabled."""
+ # Set up a ReactConfig with payment capabilities
+ react_config = ReactConfig(
+ name="Payment Agent",
+ capabilities=[
+ {"name": "Conversation", "description": "general assistance"},
+ {"name": "Payments", "description": "can pay for services"}
+ ],
+ personality="helpful and professional",
+ mode="system_prompt",
+ additional_context={"custom_field": "custom value"},
+ enable_payments=True,
+ payment_token_symbol="ETH",
+ role="payment assistant"
+ )
+
+    # Create the prompt from the config
+    prompt_templates = PromptTemplates()
+    prompt = prompt_templates.get_react_prompt(react_config)
+
+    # Get the template string and verify one was actually generated
+    template_string = prompt.prompt.template
+    assert isinstance(template_string, str) and template_string
+
+    return template_string
+
+if __name__ == "__main__":
+ # Run the test and print the template
+ template = test_react_prompt_with_payments()
+ print("\nGenerated template:")
+ print("=" * 80)
+ print(template)
+ print("=" * 80)
+ print("...")
\ No newline at end of file
diff --git a/tests/test_usage.py b/tests/test_usage.py
new file mode 100644
index 0000000..8e0c55e
--- /dev/null
+++ b/tests/test_usage.py
@@ -0,0 +1,40 @@
+import asyncio
+from agentconnect.agents import AIAgent, HumanAgent
+from agentconnect.core.registry import AgentRegistry
+from agentconnect.communication import CommunicationHub
+from agentconnect.core.types import ModelProvider, ModelName, AgentIdentity, InteractionMode
+from agentconnect.utils.logging_config import setup_logging, LogLevel
+import os
+
+setup_logging(level=LogLevel.DEBUG)
+
+async def main():
+ # Create registry and hub
+ registry = AgentRegistry()
+ hub = CommunicationHub(registry)
+
+ # Create and register agents
+ ai_agent = AIAgent(
+ agent_id="assistant",
+ name="AI Assistant",
+ provider_type=ModelProvider.OPENAI,
+ model_name=ModelName.GPT4O,
+ api_key=os.getenv("OPENAI_API_KEY"),
+ identity=AgentIdentity.create_key_based(),
+ interaction_modes=[InteractionMode.HUMAN_TO_AGENT]
+ )
+ await hub.register_agent(ai_agent)
+
+ human = HumanAgent(
+ agent_id="human-user",
+ name="Human User",
+ identity=AgentIdentity.create_key_based()
+ )
+ await hub.register_agent(human)
+
+    # Start the AI agent's processing loop, keeping a reference to the task so
+    # it is not garbage-collected, then hand control to the human
+    ai_task = asyncio.create_task(ai_agent.run())
+    await human.start_interaction(ai_agent)
+    ai_task.cancel()
+
+if __name__ == "__main__":
+ asyncio.run(main())
\ No newline at end of file