Releases: SNHuan/EasyAgent

v0.1.4

20 Jan 07:55


[0.1.4] - 2026-01-20

Added

  • Message.reasoning_content field: the Message model gains an optional reasoning_content field for storing the LLM's reasoning/thinking content (e.g. Claude's <think> blocks). This lets the agent's memory retain everything the model outputs.

  • Message.from_response() class method: a convenience method for creating a Message directly from an LLMResponse. It automatically maps the response's content, reasoning_content, and tool_calls to the corresponding Message fields.

  • Message.to_api_dict() method: renders a Message into the dict format the LLM API expects. When sending to the LLM, reasoning_content is automatically merged into content in <think>...</think> form, so the conversation context stays complete.

Changed

  • BaseAgent._build_messages(): now uses Message.to_api_dict() to build API request messages, ensuring reasoning_content is rendered correctly into the conversation history.

  • LiteLLMModel cost calculation: removed the silent fallback (setting cost to 0) when cost calculation fails; an exception is now raised instead. This surfaces misconfigured models rather than hiding the problem.
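The fail-fast change follows a common pattern; a minimal generic sketch (the price table and function are hypothetical, not LiteLLMModel's actual code):

```python
# Generic fail-fast sketch: raise on a failed cost lookup instead of
# silently falling back to 0. Not the actual LiteLLMModel code; the
# price table below is a made-up example.
PRICES_PER_1K = {"gpt-4o": 0.005}  # hypothetical $/1K-token table

def compute_cost(model_name: str, tokens: int) -> float:
    try:
        price = PRICES_PER_1K[model_name]
    except KeyError:
        # Previously: return 0.0 (silent). Now: surface the misconfiguration.
        raise ValueError(f"no pricing configured for model {model_name!r}")
    return price * tokens / 1000
```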

Why This Matters

Modern LLMs (e.g. Claude, DeepSeek) support a "thinking" mode and return their reasoning process in a reasoning_content field. The previous implementation stored only content, which meant:

  1. The conversation history lost the model's reasoning context
  2. In multi-turn conversations, the model could not "recall" its earlier thinking
  3. Logs and trajectories could not fully record model behavior

This release ensures:

  • ✅ Memory stores content + reasoning_content in full
  • ✅ Both are correctly merged into a complete context when sent to the LLM
  • ✅ Trajectory export captures the full thinking record

Migration Guide

No breaking changes. Existing code continues to work without modification.

To take advantage of the new features:

# Previous style (still works)
msg = Message.assistant(content="Hello")

# New style: create directly from an LLMResponse (recommended)
response = await model.call_with_history(msgs)
msg = Message.from_response(response)

# Or set reasoning_content manually
msg = Message.assistant(
    content="The answer is 42",
    reasoning_content="Let me think step by step..."
)

v0.1.3

15 Jan 11:44


🚀 New Features

Sandbox Module

  • BaseSandbox Protocol - Abstract interface for isolated code execution
  • DockerSandbox - Run code in Docker containers with resource limits (CPU, memory, network)
  • LocalSandbox - Local shell execution for development/testing
  • Automatic lifecycle management - Sandbox starts/stops with agent execution

SandboxAgent

  • New SandboxAgent class extending ReactAgent with integrated sandbox
  • Fine-grained configuration: cpu_limit, memory_limit, image, network, etc.
  • Default tools: bash, write_file, read_file

Code Execution Tools

  • bash - Execute shell commands in sandbox
  • write_file - Safely write complex content (handles special characters, multi-line code)
  • read_file - Read files from sandbox

Async Tool Execution

  • Tools now support async execute() methods
  • ToolAgent._execute_tool() auto-detects sync/async tools
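Sync/async auto-detection of this kind is typically done with the inspect module; a minimal sketch of the assumed mechanism (the dispatcher name and signature are illustrative, not ToolAgent's exact API):

```python
import asyncio
import inspect

# Sketch of the sync/async auto-detection described for
# ToolAgent._execute_tool(). The dispatcher below is an assumed
# illustration, not EasyAgent's actual code.
async def execute_tool(fn, *args, **kwargs):
    if inspect.iscoroutinefunction(fn):
        return await fn(*args, **kwargs)  # async tool: await it
    return fn(*args, **kwargs)            # sync tool: call directly

def sync_echo(x):
    return x

async def async_echo(x):
    return x

async def main():
    return [await execute_tool(sync_echo, 1), await execute_tool(async_echo, 2)]

results = asyncio.run(main())
```

Both tool styles go through the same code path, so tool authors never need to care how the agent invokes them.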

📦 Dependencies

New optional dependency groups:

pip install easy-agent-sdk[sandbox]  # Docker SDK
pip install easy-agent-sdk[web]      # httpx for SerperSearch
pip install easy-agent-sdk[all]      # All optional deps

🔧 Improvements

  • Export ModelConfig, AppConfig from easyagent.config
  • Auto-discover tools with @register_tool decorator

📖 Usage

from easyagent import SandboxAgent
from easyagent.config import ModelConfig
from easyagent.model.litellm_model import LiteLLMModel

config = ModelConfig.load()
model = LiteLLMModel(**config.get_model("gpt-4o"))

agent = SandboxAgent(
    model=model,
    sandbox_type="docker",
    image="python:3.12-slim",
    cpu_limit=2.0,
    memory_limit="1g",
)

result = await agent.run("Write a fibonacci program and run it")

v0.1.2

15 Jan 10:22


What's New

  • Add SerperSearch tool for Google search via Serper API

Usage

from easyagent.tool import SerperSearch  # auto-registered

agent = ReactAgent(model=model, tools=["serper_search"])

v0.1.1

15 Jan 09:09


What's New

Features

  • Model kwargs support - Configure LiteLLM parameters (temperature, top_p, reasoning_effort, etc.) directly in config
  • StepWindowMemory - New memory strategy for step-based history trimming with image support

Config Example

models:
  o3-mini:
    api_type: openai
    api_key: sk-xxx
    kwargs:
      reasoning_effort: high
      max_completion_tokens: 8192

Usage

from easyagent.memory import StepWindowMemory

# Keep last 10 text steps, last 3 image steps
memory = StepWindowMemory(text_steps=10, image_steps=3)

v0.1.0

15 Jan 08:30


🎉 Initial Release

A lightweight AI Agent framework built on LiteLLM.

Features

  • Multi-model support via LiteLLM (OpenAI, Claude, Gemini, etc.)
  • Tool calling with @register_tool decorator
  • Memory strategies: Sliding Window & Auto-Summary
  • ReAct reasoning loop
  • DAG Pipeline for workflow orchestration
  • Debug logging with token/cost tracking
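A decorator-based tool registry like @register_tool commonly looks like the following sketch. This is a hypothetical reimplementation for illustration; EasyAgent's actual decorator signature and registry may differ:

```python
# Hypothetical sketch of a decorator-based tool registry in the spirit
# of @register_tool. NOT EasyAgent's actual implementation.
TOOL_REGISTRY: dict = {}

def register_tool(fn):
    """Register a tool under its function name so agents can look it up."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn

@register_tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b
```

Registering by name is what lets an agent accept tools as strings (e.g. tools=["serper_search"] in the v0.1.2 example above).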

Installation

pip install easy-agent-sdk