Releases: SNHuan/EasyAgent
v0.1.4
[0.1.4] - 2026-01-20
Added
- `Message.reasoning_content` field: the `Message` model gains an optional `reasoning_content` field for storing the LLM's reasoning/thinking output (e.g. Claude's `<think>` blocks), so an agent's memory retains everything the model produced.
- `Message.from_response()` class method: a convenience method for creating a `Message` directly from an `LLMResponse`. It automatically maps the response's `content`, `reasoning_content`, and `tool_calls` to the corresponding `Message` fields.
- `Message.to_api_dict()` method: renders a `Message` into the dict format expected by the LLM API. When a message is sent to the LLM, its `reasoning_content` is merged into `content` as a `<think>...</think>` block, preserving the full conversation context.
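The merge performed by `to_api_dict()` can be sketched roughly as follows; the `Message` below is a simplified stand-in for the library's own model, and the exact `<think>` formatting is an assumption based on these notes:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    role: str
    content: str = ""
    reasoning_content: Optional[str] = None

    def to_api_dict(self) -> dict:
        # Merge reasoning into content as a <think> block before sending
        # to the LLM; the separator used here is an assumption.
        content = self.content
        if self.reasoning_content:
            content = f"<think>{self.reasoning_content}</think>\n{self.content}"
        return {"role": self.role, "content": content}

msg = Message(role="assistant", content="42", reasoning_content="Let me add...")
print(msg.to_api_dict()["content"])
```

Messages without `reasoning_content` pass through unchanged, so plain messages are unaffected.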
Changed
- `BaseAgent._build_messages()`: now uses `Message.to_api_dict()` to build API request messages, ensuring `reasoning_content` is rendered correctly into the conversation history.
- `LiteLLMModel` cost calculation: removed the silent fallback (cost set to 0) when cost calculation fails; an exception is now raised instead. This surfaces misconfigured models instead of hiding the problem.
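The fail-fast behavior can be illustrated with a hypothetical cost helper; the names and pricing values below are illustrative, not LiteLLM's actual API:

```python
# Hypothetical sketch of the fail-fast pattern: instead of silently
# returning 0 on failure, surface the configuration problem immediately.
def compute_cost(model_name: str, pricing: dict,
                 prompt_tokens: int, completion_tokens: int) -> float:
    if model_name not in pricing:
        # Previously this path would silently yield 0.0, hiding the issue.
        raise ValueError(f"No pricing configured for model {model_name!r}")
    rates = pricing[model_name]
    # Rates are USD per 1M tokens (illustrative numbers).
    return (prompt_tokens * rates["input"]
            + completion_tokens * rates["output"]) / 1_000_000

pricing = {"gpt-4o": {"input": 2.50, "output": 10.00}}
print(compute_cost("gpt-4o", pricing, 1000, 500))
```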
Why This Matters
Modern LLMs (e.g. Claude, DeepSeek) support a "thinking" mode and return their reasoning in a reasoning_content field. The previous implementation stored only content, which meant:
- the conversation history lost the model's reasoning context
- in multi-turn conversations the model could not "recall" its earlier thinking
- logs and trajectories could not fully capture the model's behavior
This update ensures:
- ✅ Memory stores both content and reasoning_content
- ✅ They are merged into a complete context when sent to the LLM
- ✅ Trajectory exports include the full thinking record
Migration Guide
No breaking changes. Existing code continues to work without modification.
To take advantage of the new features:
```python
# Previous style (still valid)
msg = Message.assistant(content="Hello")

# New style: create directly from an LLMResponse (recommended)
response = await model.call_with_history(msgs)
msg = Message.from_response(response)

# Or set reasoning_content manually
msg = Message.assistant(
    content="The answer is 42",
    reasoning_content="Let me think step by step..."
)
```
v0.1.3
🚀 New Features
Sandbox Module
- BaseSandbox Protocol - Abstract interface for isolated code execution
- DockerSandbox - Run code in Docker containers with resource limits (CPU, memory, network)
- LocalSandbox - Local shell execution for development/testing
- Automatic lifecycle management - Sandbox starts/stops with agent execution
SandboxAgent
- New `SandboxAgent` class extending `ReactAgent` with an integrated sandbox
- Fine-grained configuration: `cpu_limit`, `memory_limit`, `image`, `network`, etc.
- Default tools: `bash`, `write_file`, `read_file`
Code Execution Tools
- bash - Execute shell commands in sandbox
- write_file - Safely write complex content (handles special characters, multi-line code)
- read_file - Read files from sandbox
Async Tool Execution
- Tools now support async `execute()` methods
- `ToolAgent._execute_tool()` auto-detects sync/async tools
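A minimal sketch of how such sync/async auto-detection typically works; the real `ToolAgent._execute_tool()` implementation is not shown in these notes, so the function below is illustrative:

```python
import asyncio
import inspect

# Illustrative dispatcher: await coroutine functions directly, and run
# blocking tools in a thread so they don't stall the event loop.
async def execute_tool(fn, *args, **kwargs):
    if inspect.iscoroutinefunction(fn):
        return await fn(*args, **kwargs)
    return await asyncio.to_thread(fn, *args, **kwargs)

def sync_echo(x):
    return x

async def async_echo(x):
    return x

async def main():
    print(await execute_tool(sync_echo, "a"))
    print(await execute_tool(async_echo, "b"))

asyncio.run(main())
```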
📦 Dependencies
New optional dependency groups:
```shell
pip install easy-agent-sdk[sandbox]   # Docker SDK
pip install easy-agent-sdk[web]       # httpx for SerperSearch
pip install easy-agent-sdk[all]       # All optional deps
```
🔧 Improvements
- Export `ModelConfig`, `AppConfig` from `easyagent.config`
- Auto-discover tools with the `@register_tool` decorator
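A registry decorator along these lines might look like the following sketch; the actual `@register_tool` signature in easyagent may differ:

```python
# Illustrative tool registry: the decorator records each function under
# its name (or an explicit override) so tools can be auto-discovered.
TOOL_REGISTRY: dict = {}

def register_tool(fn=None, *, name=None):
    def wrap(f):
        TOOL_REGISTRY[name or f.__name__] = f
        return f
    # Support both @register_tool and @register_tool(name="...") forms.
    return wrap(fn) if fn else wrap

@register_tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@register_tool(name="shout")
def upper(s: str) -> str:
    return s.upper()

print(sorted(TOOL_REGISTRY))  # → ['add', 'shout']
```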
📖 Usage
```python
from easyagent import SandboxAgent
from easyagent.config import ModelConfig
from easyagent.model.litellm_model import LiteLLMModel

config = ModelConfig.load()
model = LiteLLMModel(**config.get_model("gpt-4o"))

agent = SandboxAgent(
    model=model,
    sandbox_type="docker",
    image="python:3.12-slim",
    cpu_limit=2.0,
    memory_limit="1g",
)

result = await agent.run("Write a fibonacci program and run it")
```
v0.1.2
v0.1.1
What's New
Features
- Model kwargs support - Configure LiteLLM parameters (temperature, top_p, reasoning_effort, etc.) directly in config
- StepWindowMemory - New memory strategy for step-based history trimming with image support
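The step-based trimming idea can be sketched as below; this illustrates the strategy under stated assumptions, not `StepWindowMemory`'s actual implementation:

```python
# Illustrative step-window trimming: keep only the most recent
# text_steps text entries and image_steps image entries.
def trim_steps(steps, text_steps=10, image_steps=3):
    kept_text = kept_images = 0
    kept = []
    for step in reversed(steps):          # walk newest-first
        if step.get("image"):
            if kept_images < image_steps:
                kept.append(step)
                kept_images += 1
        elif kept_text < text_steps:
            kept.append(step)
            kept_text += 1
    kept.reverse()                        # restore chronological order
    return kept

steps = [{"text": f"t{i}"} for i in range(5)] + [{"image": f"img{i}"} for i in range(4)]
print(len(trim_steps(steps, text_steps=2, image_steps=3)))  # → 5
```

Images are budgeted separately because they dominate token cost in multimodal contexts.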
Config Example
```yaml
models:
  o3-mini:
    api_type: openai
    api_key: sk-xxx
    kwargs:
      reasoning_effort: high
      max_completion_tokens: 8192
```
Usage
```python
from easyagent.memory import StepWindowMemory

# Keep last 10 text steps, last 3 image steps
memory = StepWindowMemory(text_steps=10, image_steps=3)
```
v0.1.0
🎉 Initial Release
A lightweight AI Agent framework built on LiteLLM.
Features
- Multi-model support via LiteLLM (OpenAI, Claude, Gemini, etc.)
- Tool calling with the `@register_tool` decorator
- Memory strategies: Sliding Window & Auto-Summary
- ReAct reasoning loop
- DAG Pipeline for workflow orchestration
- Debug logging with token/cost tracking
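The DAG Pipeline idea can be illustrated with a minimal topological-order executor built on the standard library's `graphlib`; this is a sketch of the concept, not easyagent's actual Pipeline API:

```python
from graphlib import TopologicalSorter

# Illustrative DAG executor: run each task after its dependencies,
# feeding it the results of the tasks it depends on.
def run_pipeline(tasks, deps):
    order = TopologicalSorter(deps).static_order()
    results = {}
    for name in order:
        inputs = {d: results[d] for d in deps.get(name, ())}
        results[name] = tasks[name](inputs)
    return results

tasks = {
    "fetch": lambda _: "raw",
    "clean": lambda inp: inp["fetch"].upper(),
    "report": lambda inp: f"report({inp['clean']})",
}
deps = {"clean": {"fetch"}, "report": {"clean"}}
print(run_pipeline(tasks, deps)["report"])  # → report(RAW)
```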
Installation
```shell
pip install easy-agent-sdk
```