Releases: SuanmoSuanyangTechnology/MemoryBear

MemoryBear v0.3.0 Enterprise Release Notes — Daybreak


Release Date: April 15, 2026 | Codename: PoXiao (破晓 · Daybreak)

MemoryBear v0.3.0 builds upon the foundation of previous releases with a broad set of improvements spanning application workflows, memory intelligence, and system robustness. This release introduces versioned API support, multimodal memory perception, and significant workflow enhancements — while resolving a wide range of stability issues across the platform. The result is a more resilient, precise, and developer-friendly MemoryBear.


🚀 I. Core Upgrade Overview

1. Application & API Enhancements

  • Versioned API Support: External service APIs now support specifying a version when making calls, enabling consumers to pin to stable interfaces and facilitating smoother API evolution without breaking existing integrations.
  • Workflow Checklist: A new workflow checklist feature provides structured validation steps, helping users verify workflow configurations before deployment and reducing runtime errors.
  • Deep Thinking Parameter Control: OpenAI deep thinking parameters are now only sent to models that explicitly support deep reasoning, preventing unnecessary parameter injection into incompatible models and avoiding potential errors.
  • Prompt Optimizer Model Return Optimization: Improved the response handling of the prompt optimizer model, delivering more consistent and reliable outputs during prompt engineering workflows.
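
As a sketch of how a consumer might pin an API version, the helper below builds a version-prefixed endpoint URL. The base URL, path, and `v1` path scheme are illustrative assumptions, not the documented MemoryBear API surface:

```python
import urllib.parse

def build_versioned_url(base: str, path: str, version: str = "v1") -> str:
    """Build a version-pinned endpoint URL (hypothetical versioning scheme)."""
    # Normalize slashes so the version segment lands between base and path.
    return urllib.parse.urljoin(base.rstrip("/") + "/",
                                f"{version}/{path.lstrip('/')}")

url = build_versioned_url("https://api.example.com", "/chat-messages", version="v1")
```

Pinning to an explicit version lets a consumer keep calling a stable interface while newer versions evolve independently.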

2. Memory Intelligence 🧠

  • Multimodal Memory Perception Agent: The memory perception agent now supports reading and writing multimodal memory, enabling richer context capture across text, image, and other modalities for more comprehensive user understanding.
  • OpenClaw Built-in Tool: Added a new built-in tool to OpenClaw, expanding the toolkit available for memory-related agent operations and simplifying integration workflows.

3. User Experience 🎨

  • Streaming Render Stabilization: Resolved frequent page redraws during LLM streaming output that caused visible page jitter. The rendering pipeline is now smoother, providing a stable reading experience during model responses.
  • Memory Hub Renaming: The workspace section previously labeled "记忆相关" (Memory Related) has been renamed to "记忆中枢" (Memory Hub), better reflecting its role as the central memory management interface.

4. Workflow Improvements ⚙️

  • Three-Level Variable Template Conversion: Workflow template conversion now supports three-level variables, fixing an issue where single-file-type node selections could not be properly resolved.
  • VL Model Token Tracking: Token usage is now correctly tracked when using VL (Vision-Language) models from model groups, resolving a gap where tokens went unrecorded compared to selecting the same model from the standard model list.
  • Imported Workflow Feature Sync: Workflows imported from external sources now correctly synchronize all feature properties including opening messages, citations, and attribution settings — previously these were silently dropped during import.
  • Session Variable Name Uniqueness: Added uniqueness validation for session variable and start node variable names in workflows, preventing silent conflicts and hard-to-debug runtime issues.
  • File Type Extraction Fix: Workflows can now correctly extract file.type information, resolving a parsing issue that previously returned empty values.
  • Condition Branch Display Fix: Condition branch nodes now render correctly when configured with a value of 0 or when referencing session variables, fixing a display truncation issue.
  • Object/Array Validation Rules: Added validation rules for object and array[object] session variables, preventing JSON serialization errors that previously blocked workflow saves.
  • HTTP Request Body Key Fix: In workflow HTTP request nodes, the body field now uses key instead of name for field identification, aligning with standard HTTP conventions and resolving mapping issues.
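
The name-uniqueness validation described above can be sketched as a single pass over both variable sets; the function name and error shape are hypothetical:

```python
def validate_unique_names(session_vars, start_vars):
    """Reject duplicate names across session and start-node variables."""
    seen, dupes = set(), set()
    for name in list(session_vars) + list(start_vars):
        if name in seen:
            dupes.add(name)
        seen.add(name)
    if dupes:
        # Failing at save time surfaces the conflict instead of a silent
        # runtime collision between two variables with the same name.
        raise ValueError(f"duplicate variable names: {sorted(dupes)}")
    return True
```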

5. Knowledge Base 📚

  • Embedding Token Truncation Safety: Fixed errors caused by embedding model token limits being exceeded. A unified 8,000-token truncation safety boundary is now applied to all embedding models. Additionally, Excel row data now maintains independent chunks without merging, using newline separation for cleaner document processing.
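
A minimal sketch of the two behaviors above, assuming tokens arrive as a list and that each Excel row's cells are joined with newlines (the exact cell separator is an assumption):

```python
MAX_EMBED_TOKENS = 8000  # unified safety boundary from the release notes

def truncate_tokens(tokens, limit=MAX_EMBED_TOKENS):
    """Clamp a token sequence to the embedding model's safe limit."""
    return tokens if len(tokens) <= limit else tokens[:limit]

def excel_rows_to_chunks(rows):
    """Each spreadsheet row becomes its own chunk; rows are never merged."""
    return ["\n".join(str(cell) for cell in row) for row in rows]
```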

6. Robustness & Bug Fixes 🔧

  • Atomic Update & Batch Access Failures: Resolved failures in atomic memory updates and batch access record operations that could cause data inconsistency under concurrent load.
  • Alias Extraction in Conversations: Fixed incorrect alias extraction during application conversations, ensuring user identities are properly resolved from dialogue context.
  • Workflow Alias Extraction: Corrected alias extraction in workflows to properly distinguish between user and AI responses, including consideration of emotion and interest distribution (issue #1006016).
  • RAG Memory Pagination: Fixed data pagination issues in RAG memory retrieval that could return incomplete or duplicated results across pages.
  • Implicit Memory Detail Display: Resolved an issue where implicit memory entries with corresponding numeric values showed empty content in the detail view (observed in workflow v0.2.10 multimodal memory reads).
  • Vector Query Driver Closed: Fixed vector query exceptions in search_graph_by_embedding (statements query) caused by a prematurely closed database driver in the memory storage and memory verification flows.
  • User Management Enable/Disable: Fixed abnormal API request behavior when toggling user enable/disable status in user management.
  • Model List Filter Inconsistency: Resolved an issue where model list filter criteria did not match the displayed results, ensuring accurate filtering behavior.

🧭 Looking Ahead

MemoryBear v0.3.0 marks a meaningful step toward production maturity. The breadth of this release — spanning API versioning, multimodal memory, workflow robustness, and rendering stability — reflects a platform that is rapidly hardening its core while continuing to expand its capabilities. The focus on fixing edge cases and improving developer-facing consistency signals a shift toward reliability-first engineering.

The introduction of multimodal memory perception and enhanced workflow variable handling opens new doors for building sophisticated, context-aware agent applications. These capabilities lay the groundwork for deeper integration between memory systems and application logic, moving MemoryBear closer to its vision of truly intelligent, adaptive memory infrastructure.

In upcoming releases, expect continued investment in workflow expressiveness, memory retrieval precision, and cross-modal understanding. The platform will deepen its support for complex agent orchestration patterns while further stabilizing the foundation for large-scale production deployments.



MemoryBear v0.2.10 Community Release Notes — Forging the Blade


Release Date: April 8, 2026 | Codename: LianJian (炼剑 · Forging the Blade)

MemoryBear v0.2.10 Community Edition delivers a comprehensive set of workflow enhancements, deeper memory capabilities, and critical stability fixes. This release introduces deep thinking mode for Agents, form-based workflow interactions, and multimodal memory reading — sharpening the platform for real-world production use.


🚀 I. Core Upgrade Overview

1. Workflow Engine Enhancements

  • Session Variable File Support: Workflow session variables now support file-type values with configurable defaults, covering both local and remote file sources. This enables richer data passing between workflow nodes.
  • List Operation Node: A new dedicated node for list manipulation operations, allowing workflows to process, filter, and transform array data natively without custom code.
  • Template Conversion HTML Support: The template conversion node now supports HTML output, expanding rendering capabilities for rich-content workflows.
  • Form Return & Submission: Workflows can now return interactive form definitions to the conversation UI, and the frontend supports form submission back into the workflow — enabling structured data collection mid-conversation.
  • HTTP Node XML Response: The workflow HTTP request node now supports XML format responses, broadening integration compatibility with legacy and enterprise APIs.
  • Opening Remarks & File References: Workflow conversations now support configurable opening remarks and the ability to reference attached files, improving the guided conversation experience.
  • Template Conversion Three-Level Variables: Template conversion nodes now support three-level nested variable access, enabling more complex data extraction from deeply structured objects.
  • Node Connection Add Button: Workflow canvas node connections now include an inline add button, making it easier to insert new nodes between existing connections.
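
The list operation node's behavior might look like the declarative dispatcher below; the op schema (`type`, `predicate`, `fn`, `desc`) is invented for illustration, not the node's actual configuration format:

```python
def run_list_operations(items, ops):
    """Apply a sequence of declarative list operations (hypothetical semantics)."""
    for op in ops:
        kind = op["type"]
        if kind == "filter":
            items = [x for x in items if op["predicate"](x)]
        elif kind == "map":
            items = [op["fn"](x) for x in items]
        elif kind == "sort":
            items = sorted(items, reverse=op.get("desc", False))
        else:
            raise ValueError(f"unknown op: {kind}")
    return items
```

Chaining ops this way lets a workflow filter and transform array data without a custom code node.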

2. Agent Intelligence 🧠

  • Agent Deep Thinking Mode: Agents now support a deep thinking mode that enables more thorough reasoning before responding. This produces higher-quality answers for complex queries at the cost of additional processing time.
  • Model Deep Thinking Feature Toggle: A new model-level feature flag indicates whether a model supports deep thinking, along with a toggle to enable or disable it per application. This gives fine-grained control over reasoning behavior.
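
The model-level gating described above can be sketched as follows; the `supports_deep_thinking` flag and the `reasoning` parameter shape are assumptions, not MemoryBear's actual field names:

```python
def build_request_params(model, base_params, deep_thinking):
    """Attach reasoning params only when the model flags deep-thinking support."""
    params = dict(base_params)
    if deep_thinking and model.get("supports_deep_thinking"):
        params["reasoning"] = {"effort": "high"}  # hypothetical parameter shape
    return params
```

This mirrors the v0.3.0 fix above: incompatible models never see the extra parameters, so they cannot reject the request.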

3. Memory System Upgrades 📚

  • User Memory Pagination: The user memory library now supports pagination, improving performance and usability when browsing large memory collections.
  • RAG User Memory Data Structure Refresh: The RAG user memory backend API data structures have been redesigned to align with the latest UI requirements, providing cleaner and more consistent data contracts.
  • Multimodal Memory Reading: The memory system now supports reading multimodal memory entries, enabling retrieval of image, audio, and other non-text memory content.
  • Semantic Pruning Threshold Hints: The memory extraction smart semantic pruning threshold now displays descriptive range labels, making it easier to understand and configure the sensitivity of pruning behavior.
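
Pagination over a memory collection might be sketched like this; the response shape (`items`, `has_more`) and 1-indexed pages are hypothetical:

```python
def paginate(entries, page, page_size=20):
    """Return one page of entries plus paging metadata (1-indexed pages)."""
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be >= 1")
    start = (page - 1) * page_size
    total = len(entries)
    return {
        "items": entries[start:start + page_size],
        "page": page,
        "total": total,
        "has_more": start + page_size < total,
    }
```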

4. Frontend & Usability 🎨

  • Skill Tool Deletion Status Display: The frontend skill library tool list now displays deletion status indicators, providing clear visibility into which tools have been removed.
  • Dashboard Day-over-Day Comparison: Total memory capacity, application count, knowledge base count, and API call metrics now include "compared to yesterday" data, enabling quick trend analysis on the home dashboard.
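
The "compared to yesterday" metric might be computed as below; the percentage field and the zero-yesterday handling are assumptions:

```python
def day_over_day(today, yesterday):
    """Compute the day-over-day delta shown on the dashboard."""
    delta = today - yesterday
    # Avoid division by zero when yesterday had no data.
    pct = None if yesterday == 0 else round(delta / yesterday * 100, 1)
    return {"value": today, "delta": delta, "delta_pct": pct}
```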

5. Robustness & Bug Fixes 🔧

  • Parameter Extraction Null Handling: Fixed an issue where the parameter extraction node would throw exceptions when processing empty or null values. The node now handles missing data gracefully.
  • Token Consumption Display Optimization: Improved the token consumption extraction and display for certain models, ensuring accurate usage reporting across different LLM providers.
  • Model Parameter Negative Value Fix: Fixed unclear parameter range definitions in single-Agent model settings where negative values were ambiguously allowed or disallowed.
  • App Share Deletion Sync: When a user deletes a previously shared application, the is_active field in the app_shares table now correctly updates for all share records, not just the source application entry.
  • Memory Write Task Ordering: Fixed write_message task execution for the same user to respect chronological order. Single end_user tasks now execute sequentially by timestamp while multiple end_user tasks process concurrently.
  • Multimodal Model Missing Graceful Handling: When no matching multimodal model is configured in memory settings, perceptual memory writes from workflow experience sharing no longer break mid-process. The system now handles the missing model gracefully.
  • Custom Tool Number Variable Pass-through: Custom tools with number type parameters now correctly accept and pass through preceding variable values, resolving type coercion issues.
  • Cluster Sub-Agent Display After Save: Fixed a bug where sub-agents in cluster configurations were not reflected in the UI after saving.
  • Memory-Enabled Streaming Output Fix: Resolved a streaming output failure that occurred when memory was enabled, caused by a string serialization issue in the output pipeline.
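
The per-user ordering rule in the write_message fix can be illustrated by grouping tasks by end user and time-sorting each group; the `end_user` and `ts` field names are assumptions:

```python
from collections import defaultdict

def schedule_write_tasks(tasks):
    """Group write_message tasks per end_user, each queue time-ordered.

    Queues for different users may run concurrently; within one user's
    queue, tasks execute sequentially by ascending timestamp.
    """
    queues = defaultdict(list)
    for task in tasks:
        queues[task["end_user"]].append(task)
    for user_tasks in queues.values():
        user_tasks.sort(key=lambda t: t["ts"])
    return dict(queues)
```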

🧭 Looking Ahead

MemoryBear v0.2.10 represents a significant step toward production maturity. The combination of deep thinking capabilities, interactive form workflows, and multimodal memory reading demonstrates the platform's evolution from a memory storage system into a comprehensive cognitive infrastructure. Each fix in this release was driven by real-world production usage, hardening the system against edge cases that only surface at scale.

The introduction of Agent deep thinking mode and the expanded workflow engine capabilities signal a clear trajectory: MemoryBear is becoming not just a memory layer, but an intelligent orchestration platform where memory, reasoning, and interaction converge. The SaaS management backend additions further position the platform for multi-tenant enterprise deployments.

We are excited for the upcoming v0.3.0 release on April 17, which will mark a major milestone for the platform. Expect deeper agent reasoning capabilities, expanded multi-agent collaboration features, and further refinements to the memory intelligence pipeline. The blade has been forged — now it's time to wield it.



MemoryBear v0.2.9 Community Release Notes — Listening to the Wind


Release Date: March 31, 2026 | Codename: TingFeng (听风 · Listening to the Wind)

MemoryBear v0.2.9 Community delivers a deep upgrade across application capabilities and memory intelligence. This release introduces application logging, advanced episodic memory retrieval with community-based search, multimodal memory persistence into Neo4j, and user alias management — bringing the platform closer to its v0.3.0 launch milestone with refined observability and richer cognitive recall.


🚀 I. Core Upgrade Overview

1. Application Framework

  • Model-Feature Binding: Model selection is now bound to application features such as file upload types, ensuring that only compatible models are available for each capability.
  • Application Logging (Message Records): A new application log system captures full message records across Agent, Workflow, and Agent Cluster interactions. Sharers can view experience sharing logs and API Key call logs (using their own key); shared users can view API Key call logs (using their own key).
  • Application Features — Conversation Opener & Document Citation: Applications now support configurable conversation openers (opening remarks) and document citation in replies, enriching the user interaction experience.
  • API Key Search: Added the ability to search applications by API Key, enabling quick lookup and management of key-to-app associations.
  • Agent Memory Defaults: Agents now enable "conversation history memory" by default. When an Agent is shared via experience sharing, the "memory function" is enabled by default and can be toggled off, but only if "conversation history memory" was already enabled; otherwise the memory function is not available on the shared page.
  • Application Copy Fix: Fixed an issue where modifying a copied application incorrectly sent the original app's app_id instead of the copied app's app_id.

2. Workflow Enhancements 🔧

  • Document Extraction Node: A new workflow node for extracting structured content from documents, enabling document-driven automation pipelines.
  • Reply Node Output Continuation: Reply node outputs can now connect to additional downstream nodes, allowing post-reply processing and branching.
  • Workflow Memory Write with File Variables: Workflow memory write nodes now support configuring file-type variables via sys.files, enabling multimodal content to be persisted into Neo4j.
  • Conditional Branch Whitespace Fix: Fixed an issue where conditional branch nodes failed when conditions contained spaces or newline characters.

3. Memory Intelligence 🧠

  • Multimodal Memory into Neo4j: Multimodal memory (voice, image, video) can now be persisted into Neo4j. Memory configuration supports additional model selection for voice, image, and video with on/off toggles. In this release, only Workflow-based multimodal memory write is supported; Agent support is planned for the next version. Perceptual memory must be associated with a multimodal model and a memory node to be written. Neo4j storage adds a new perceptual memory node type linked to Chunk nodes. Memory engine configuration now includes model filtering by capability — visual, audio, and video — requiring selection of models with matching abilities.
  • Episodic Memory Retrieval with Community Search: Episodic memory retrieval now includes community-based search results across three retrieval modes: Quick Reply returns summaries (community summaries); Normal Reply returns summaries + communities (topic structures) + statements (atomic facts/evidence); Deep Thinking returns summaries + communities + statements + expanded statements (graph-extended implicit evidence).
  • User Alias Management: Agent conversations now extract user name/alias information and display it in the "Core Profile." Editing and saving aliases in the Core Profile syncs to the end_user_info table's other_name field. Multiple aliases are supported with time-based extraction priority. Alias data is accessible via API.
  • Community Node Summary Enrichment: Community node summaries are now enriched with entity_name, entity_description, and Statement.statement content, providing more comprehensive community-level context.
  • Pilot Run Progress Callback: In pilot run (draft run) mode, a progress callback event is now sent upon knowledge extraction completion, including total statement count and chunk count in the payload.
  • Knowledge Extraction Complete Event: A dedicated knowledge_extraction_complete event is now emitted after the knowledge_extraction_result phase concludes, providing a clear lifecycle signal for downstream consumers.
  • Memory Cache System: Added caching for memory system operations, improving retrieval performance and reducing redundant computation.
  • Semantic Pruning Optimization: Fixed an issue where semantic pruning during memory extraction incorrectly removed emotion and interest/hobby data. Improved pruning behavior across multiple ontology scenarios to produce differentiated results.
  • Forgetting Cycle Task Fix: Fixed the run_forgetting_cycle_task scheduled task that was failing during forgetting cycle execution.
  • Memory Config ID Type Fix: Fixed an issue where applications using integer-type config_id values failed to use the selected memory configuration, falling back to the workspace default config instead.
  • Memory Write Issues: Fixed structured output failures and corrected clustering sequence to occur before memory write operations.
  • User Memory Config Version Fix: Fixed an issue where user memory configuration was not correctly mapped to the published version's config record.
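
The three retrieval depths in community-based episodic search can be modeled as layered result composition; `fetch` below stands in for whatever backend lookup serves each layer, and the mode and layer names simply mirror the bullet above:

```python
RETRIEVAL_LAYERS = {
    "quick_reply": ["summaries"],
    "normal_reply": ["summaries", "communities", "statements"],
    "deep_thinking": ["summaries", "communities", "statements",
                      "expanded_statements"],
}

def episodic_retrieve(mode, fetch):
    """Compose episodic results layer by layer for the chosen mode."""
    layers = RETRIEVAL_LAYERS.get(mode)
    if layers is None:
        raise ValueError(f"unknown retrieval mode: {mode}")
    return {layer: fetch(layer) for layer in layers}
```

Each deeper mode is a strict superset of the one before it, so the cost of retrieval scales with the reasoning depth requested.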

4. Model Management 🤖

  • Volcengine Model Provider: Added Volcengine (火山引擎) as a new model provider, expanding the range of available LLM backends.
  • Custom Model Capabilities: Custom models now support declaring full-modality, visual, audio, and video capabilities, enabling precise model-feature matching.
  • API Key Masking: Model list API responses now mask API keys, showing only the first and last 4 characters with **** in between, improving security posture.
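
The masking rule above is concrete enough to sketch directly; the fallback for keys too short to mask safely is an assumption:

```python
def mask_api_key(key):
    """Show only the first and last 4 characters, **** in between."""
    if len(key) <= 8:
        return "****"  # too short to reveal any prefix/suffix (assumption)
    return f"{key[:4]}****{key[-4:]}"
```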

5. User & Platform 👥

  • Username Uniqueness Relaxed: Usernames no longer require uniqueness — only email addresses must be unique, simplifying user registration and management.
  • Frontend UI Validation: Comprehensive frontend UI validation pass across application pages (page checklist pending from frontend team).

6. Robustness & Bug Fixes 🔧

  • Remote File Document Type Support: Multimodal remote files now support document types in addition to media files.
  • File Metadata Location: Fixed inconsistency in meta_data where file references were stored under both files and files[] formats.
  • Text-to-Speech Status Feedback: Backend now provides generation status for text-to-speech, keeping the speaker icon in "generating" state until audio is ready for playback.
  • Model Capability Feature Binding: Fixed model capabilities in Agent, Workflow, and Cluster not properly linking to application features.
  • Workflow Loop Node Termination Condition: Fixed loop node "missing right field" error when setting non-empty termination conditions.
  • MCP Market Service Count Reset: Fixed sidebar "ModelScope MCP" showing 9295 available services on first click, then 0 on subsequent clicks.
  • Dashboard App Count with Shared Apps: The dashboard_data endpoint now correctly includes shared applications in the total app count.
  • Agent Web Search Recursion Limit: Fixed Agent web search triggering "Recursion limit of 9 reached without hitting a stop condition."
  • RAG Workspace Workflow Memory Write: Fixed workflows in RAG workspaces unable to write to the memory store.
  • Tool Market Logo Mismatch: Fixed incorrect tool vendor logos in the tool market configuration view.
  • TTS Streaming Failure on Long Text: Fixed text-to-speech streaming failure when conversation text exceeded length thresholds.
  • Shared App Conversation Summary: Fixed shared app conversations with openers always displaying "New conversation" as the sidebar summary.
  • Perceptual Memory Text Preview: Fixed preview and download issues for text-based perceptual memory entries.
  • MCP Market Unauthorized Error: Fixed mcp_market_configs/operational_mcp_servers endpoint returning 401 Unauthorized.
  • Legacy RAG Workspace Agent Issues: Fixed Agent conversation anomalies in legacy RAG workspaces.
  • Shared App Experience Issues: Fixed historical conversation TTS playback and conversation parameter passing in shared app experiences.

🧭 Looking Ahead

By deepening multimodal memory persistence, introducing community-based episodic retrieval, and expanding application observability through logging, this release solidifies the cognitive foundation that MemoryBear is built upon — perception, refinement, association, and forgetting working in concert.

The introduction of community search in episodic memory retrieval marks a qualitative leap in how MemoryBear surfaces knowledge. With three distinct retrieval depths — from quick summaries to graph-expanded implicit evidence — the platform now offers a retrieval experience that mirrors human cognitive recall: fast intuition for simple queries, structured reasoning for complex ones.

v0.3.0 will mark MemoryBear's official product launch event, bringing together all the capabilities refined across the 0.2.x series into a cohesive, production-ready platform. We look forward to unveiling the full vision of MemoryBear as the cognitive memory layer for AI applications at the launch conference.



MemoryBear v0.2.8 Community Release Notes — Jade in Splendor


Release Date: March 20, 2026 | Codename: JingYu (景玉 · Jade in Splendor)

MemoryBear v0.2.8 Community brings major upgrades to application sharing, multimodal interaction, and platform infrastructure. This release introduces voice input/output, multimodal perceptual memory, cloud file storage, and workspace-level identity — delivering a more capable and resilient open platform for AI memory management.


🚀 I. Core Upgrade Overview

1. Application Sharing & Publishing

  • Application Sharing (Agent, Workflow, Agent Cluster): Full sharing support across all application types — Agents, Workflows, and Agent Clusters can now be shared to other workspaces.
  • Memory Enabled by Default on Shared Apps: When an application is published and shared, memory is enabled by default. Users receive a reminder when attempting to disable it, ensuring conversational context continuity.
  • Workflow Memory Sharing Rules: Workflows configured with memory extraction and memory storage have memory enabled by default on the experience sharing page and cannot be disabled; workflows without memory configuration have memory disabled by default and cannot be enabled.
  • Shared Session Web Search Fix: Fixed an issue where web search functionality stopped working after sharing an application session, restoring full internet access for shared apps.
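
The workflow memory sharing rules above reduce to a small decision table; in both branches the toggle is locked, so the sketch below only decides the enabled state. Field names are hypothetical:

```python
def shared_memory_state(has_memory_config):
    """Memory toggle rules on the experience-sharing page.

    Workflows with memory extraction/storage configured: memory on, locked.
    Workflows without memory configuration: memory off, locked.
    """
    if has_memory_config:
        return {"memory_enabled": True, "user_can_toggle": False}
    return {"memory_enabled": False, "user_can_toggle": False}
```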

2. Multimodal & Interaction 💬

  • Voice Input for Models & Apps: Model interfaces and applications now support voice input, enabling hands-free interaction with Agents and Workflows.
  • Rich Reply Capabilities: Applications now support voice reply modality — significantly expanding how AI can deliver results to users.
  • Multimodal Memory — Perceptual Memory: The memory system now supports perceptual memory for multimodal inputs, enabling the platform to perceive and remember visual, audio, image, and file interactions across Agent dry runs, experience sharing, and API calls.
  • File Display in Chat (Debug + Share): Uploaded files, images, videos, and audio now display correctly in both Agent and Workflow dry-run and experience sharing chat interfaces.

3. Platform & Infrastructure ⚙️

  • i18n Internationalization: Full internationalization support, enabling the platform to serve multi-language, multi-region users.
  • Cloud File Storage (OSS + S3): The file system now supports cloud uploads via Alibaba Cloud OSS or S3, ensuring reliability and scalability for all documents.
  • Flower Container for Celery Monitoring: Added a Flower container for monitoring and managing Celery asynchronous and scheduled task execution. Each environment runs 3 workers, and API service restarts have been verified not to cause task interruption or log loss.

4. EndUser Identity Migration 🔐

  • EndUser Migration from app_id to workspace_id: EndUser identity has been migrated from application-level to workspace-level. Dry runs use the logged-in user's user_id as the EndUser other_id (unique per workspace); experience sharing passes other_id from the frontend; API+Key calls use the third-party ID.
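
The channel-to-identity mapping described above can be sketched as a simple dispatch; the channel labels and context field names here are hypothetical stand-ins for the real request context:

```python
def resolve_end_user_other_id(channel, ctx):
    """Map a call channel to the workspace-scoped EndUser other_id."""
    if channel == "dry_run":
        return ctx["user_id"]            # logged-in user's id
    if channel == "experience_share":
        return ctx["frontend_other_id"]  # passed in from the frontend
    if channel == "api_key":
        return ctx["third_party_id"]     # caller-supplied third-party id
    raise ValueError(f"unknown channel: {channel}")
```

Keying the identity to the workspace rather than the application means the same end user is recognized across every app in that workspace.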

5. Episodic Memory 🧠

  • Episodic Memory Clustering Algorithm: Implemented a community-graph-based episodic memory clustering algorithm, including support for generating community graphs for existing users.

6. Robustness & Bug Fixes 🔧

  • MCP Service Deletion Tool 404 Error: Fixed an issue where removing an MCP service from an Agent caused the api/tools endpoint to return a "404, tool does not exist" error.
  • App Export Canvas vs. Saved Config Mismatch: Copying or exporting an Agent/Workflow via the top-left menu now correctly exports the saved configuration rather than the current unsaved canvas state.
  • Workflow Duplicate Node ID Error: Fixed an issue where duplicating a workflow node triggered "Invalid workflow config: node IDs must be unique, duplicate ID: llm_qa".
  • Conditional Branch Wiring Error: Fixed an issue where after adding a conditional branch/question classifier node, IF (case0) and ELSE (case1) connected to different downstream nodes, but after saving and refreshing both downstream nodes connected to IF (case0).
  • Reply Node Content Loss: Fixed an issue where clicking reply node details and then clicking the canvas caused reply node content to disappear.
  • Workflow Port Connection Rules: Optimized workflow node connection ports — left-to-left and right-to-right connections are now prohibited, and the end node option is no longer shown when adding nodes from left-side ports.
  • Knowledge Base List Status Column Width: The status column in the knowledge base list now has a locked or adaptive width, preventing layout issues.
  • Pending Document Preview Support: Documents in "pending" status (parsing enabled but not yet complete) now support preview for all file types.
  • Knowledge Base Association Fixes: Consolidated fixes for multiple issues when associating knowledge bases with applications.
  • Multimodal Conversation Continuity: Fixed an issue where sending multimodal content (images, videos, files, audio) in an Agent or Workflow only allowed one round of conversation, preventing follow-up questions based on uploaded media.
  • Memory Storage Timezone Unification: Database storage time and task execution time now use a unified timezone controlled by environment variables, eliminating cross-environment inconsistencies.
  • Forgetting Strength Decimal Precision: Fixed an issue where memory forgetting strength values displayed excessively long decimals.
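The timezone unification noted above amounts to resolving one timezone from an environment variable at startup and using it everywhere. A minimal sketch, assuming a hypothetical `APP_TIMEZONE` variable name:

```python
import os
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# "APP_TIMEZONE" is a hypothetical variable name standing in for the
# environment variable that controls both DB storage time and task
# execution time.
APP_TZ = ZoneInfo(os.environ.get("APP_TIMEZONE", "UTC"))

def now_for_storage():
    """Current time in the single configured timezone, so timestamps
    written by the API and by scheduled tasks always agree."""
    return datetime.now(timezone.utc).astimezone(APP_TZ)
```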

🧭 Looking Ahead

MemoryBear v0.2.8 represents a significant step toward enterprise production readiness. The introduction of SaaS administration, independent service deployment, and workspace-level identity architecture demonstrates the platform's evolution from a development tool into a governed, scalable infrastructure layer for AI memory management.

The multimodal memory capabilities introduced in this release mark a pivotal expansion of what MemoryBear can perceive and retain. By enabling perceptual memory across visual, audio, and file interactions, the platform moves closer to truly human-like cognitive assistance — where context is not limited to text but encompasses the full spectrum of user interaction.

In upcoming releases, we will deepen multi-agent collaboration patterns, expand the episodic memory clustering framework, and continue hardening the platform for high-availability enterprise deployments. Enhanced analytics, fine-grained permission controls, and broader MCP ecosystem integration are also on the horizon.



MemoryBear v0.2.7 Release Notes

13 Mar 15:01
3c99fb1


Release Date: March 13, 2026 | Codename: WuLing (武陵 · Wuling)

MemoryBear v0.2.7-community marks a significant milestone in platform maturity, bringing powerful capabilities for application portability, tool ecosystem expansion, and memory intelligence refinement. This release introduces application import/export for agents and workflows, integrates the MCP Marketplace for tool discovery, and delivers critical improvements to implicit and emotional memory generation logic. Alongside workflow enhancements, v0.2.7-community strengthens the foundation for production-ready deployments with comprehensive bug fixes addressing RAG space memory generation, semantic pruning accuracy, and end-user management.


🚀 I. Core Upgrade Overview

1. Application Management & Portability

  • Application Import/Export: Full support for importing and exporting applications including both agent configurations and workflow definitions, enabling seamless migration, backup, and sharing of complex application setups across environments.

2. Tool Ecosystem Expansion 🔌

  • MCP Marketplace Integration: Tool management now includes access to the MCP Marketplace, providing a centralized hub for discovering, browsing, and integrating Model Context Protocol tools directly within the platform.

3. Workflow Enhancements 📝

  • Annotation Node: Introduced a new annotation node type in workflow builder, allowing developers to add inline documentation, comments, and contextual notes directly within workflow graphs for improved maintainability and team collaboration.

4. Memory Intelligence Refinement 🧠

  • Implicit & Emotional Memory Generation Logic: Comprehensive optimization of implicit memory and emotional memory generation pipelines, including data existence validation to prevent duplicate processing, timeline-based filtering for scheduled tasks to ensure temporal accuracy, and cache validation for interest distribution to improve consistency and reduce redundant computations.
  • Interest Distribution Generation Logic: Refined the interest distribution generation algorithm to produce more accurate and nuanced user interest profiles, enhancing personalization capabilities across memory-driven features.

5. User Experience Improvements 🎨

  • Knowledge Base Sharing Loading State: Added loading indicators to knowledge base sharing operations, providing clear visual feedback during share link generation and improving perceived responsiveness.

6. Robustness & Bug Fixes 🔧

  • End User Management in App Debugging: Fixed an issue where application debugging sessions incorrectly created or associated end_user records, ensuring proper user context isolation during development workflows.
  • Knowledge Base Dataset Creation Flow: Resolved bug preventing users from proceeding to the next step after creating a dataset in knowledge base setup, restoring smooth onboarding experience.
  • RAG Space Memory Generation Failure: Fixed critical issue where memory generation failed and storage was interrupted in RAG-enabled spaces, ensuring reliable memory persistence across all workspace configurations.
  • Application Character Limit Enforcement: Added conditional validation to application name and description fields, preventing edge cases with excessively long input that could cause UI rendering or database storage issues.
  • Semantic Pruning Emotion/Interest Preservation: Optimized memory extraction semantic pruning logic to prevent incorrect deletion of emotional and interest-related memory fragments, preserving critical affective and preference data during consolidation.
  • Semantic Pruning Effectiveness: Enhanced the overall effectiveness of semantic pruning algorithms, improving the balance between memory compression and information retention for more intelligent long-term memory management.

🧭 Looking Ahead

Version 0.2.7-community represents a maturation of MemoryBear's core infrastructure, moving from foundational capabilities to production-ready features. The introduction of application portability and MCP Marketplace integration signals a shift toward ecosystem thinking—enabling not just powerful individual deployments, but interconnected, shareable, and extensible memory-driven systems. The refinements to implicit and emotional memory generation demonstrate a commitment to measurable intelligence quality as the platform scales.

As we continue to refine memory intelligence algorithms, the next phase will focus on memory retrieval benchmarking and episodic memory clustering algorithms to enhance contextual recall and temporal reasoning capabilities.

Looking forward to v0.2.8 and beyond, we will introduce multimodal memory capabilities with distributed service deployment for knowledge bases and models, enabling voice input for applications and expanding application capabilities with voice responses, BI visualizations, PPT generation, and direct image creation. Application conversation sharing and web search functionality will be restored and enhanced. The journey toward truly intelligent, multimodal, context-aware applications continues.



MemoryBear v0.2.6 Release Notes — Sharpened Edge, Broader Reach

09 Mar 03:25
b9ebe22


Release Date: March 7, 2026 | Codename: TingJian (听剑 · Listening to the Sword)

MemoryBear v0.2.6 builds upon the foundation of v0.2.5 with a sweeping set of enhancements across application capabilities, memory intelligence, and platform resilience. This release introduces workflow import adaptation for Dify, multimodal input/output support including voice and file types, and a new memory pruning module — significantly expanding the system's versatility. With over a dozen feature additions and critical bug fixes spanning memory, applications, and knowledge base modules, v0.2.6 represents a major step toward production-grade robustness and richer interaction paradigms.


🚀 I. Core Upgrade Overview

1. Workflow & Application Framework

  • Workflow Import Adaptation (Dify): Added compatibility layer for importing Dify workflow definitions, enabling seamless migration of existing workflow configurations into MemoryBear's orchestration engine.
  • Field Character Limits & Validation Rules: Introduced configurable character length limits on workflow fields, with validation rules defined by product specifications to ensure data integrity at the input layer.
  • Application Cloning (Agent, Workflow, Cluster): Users can now duplicate Agent, Workflow, and Cluster applications with a single action, preserving all configurations and accelerating iterative development.
  • Conversation Variables (Debug + Share): Added support for conversation-scoped variables in both debug and share modes for Agent and Workflow applications, enabling stateful multi-turn interactions with persistent context.
  • Streaming message_id in Chat API: The streaming output of the Chat API now includes message_id in each response chunk, allowing clients to reliably track and correlate messages in real-time conversations.
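With message_id present in every chunk, a client can reliably reassemble streamed replies. The chunk shape below is an illustrative assumption, not the exact wire format:

```python
import json

# Hypothetical streamed chunks; the exact field layout is an
# assumption, but each chunk now carries a message_id.
raw_chunks = [
    '{"message_id": "msg-1", "delta": "Hello"}',
    '{"message_id": "msg-1", "delta": ", world"}',
    '{"message_id": "msg-2", "delta": "Next answer"}',
]

def assemble_by_message(chunks):
    """Group streamed deltas by message_id so interleaved messages can
    be tracked and correlated in real-time conversations."""
    messages = {}
    for raw in chunks:
        chunk = json.loads(raw)
        messages.setdefault(chunk["message_id"], []).append(chunk["delta"])
    return {mid: "".join(parts) for mid, parts in messages.items()}
```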

2. Multimodal & Interaction 💬

  • Voice Input & Output: Applications now support voice as both an input and output modality, opening the door to conversational AI experiences beyond text.
  • File Type Input Support (Voice, File, Video): The application input layer has been extended to accept voice recordings, generic files, and video uploads, enabling richer multimodal interaction scenarios.

3. Model & Intelligence 🧠

  • Model Vision & Omni Differentiation: The model configuration system now distinguishes between vision-capable models and Omni models, ensuring correct capability routing and preventing mismatched model-task assignments.
  • Education Memory & Companion Toy Scene Presets: Added ontology configuration presets for education memory and companion toy scenarios, providing out-of-the-box domain-specific memory structures for these verticals.
  • Ontology Default Identifier: Ontology configuration presets now support a default identifier flag, making it easier to manage and select baseline configurations across scenes.
  • Memory Configuration Default Identifier: Memory configurations now carry a default flag, allowing the system to automatically apply baseline settings when no custom configuration is specified.

4. Memory Intelligence 🔬

  • Memory Pruning Module: Introduced a dedicated memory pruning engine that intelligently trims redundant or low-value memory entries, optimizing storage efficiency and retrieval relevance over time.
  • RAG Quick Retrieval with Memory (Deep Think & Normal Reply): The RAG workspace now supports quick retrieval in both deep thinking and normal reply modes, with integrated memory functionality under the memory verification module for enhanced contextual accuracy.

5. Robustness & Bug Fixes 🔧

Model Management

  • Custom Model API Key Configuration: Fixed an issue where adding a custom model under a new provider prevented batch API key configuration in the model list, with error "Provider dashscope model configuration not found". Custom models can now properly inherit provider-level key management.

Knowledge Base

  • Download Original Content Error: Resolved frontend API error when attempting to download original content for non-source documents (user-uploaded files) in knowledge base management.
  • Share Disable Prompt Text: Updated the prompt text displayed when knowledge base sharing is disabled to provide clearer user guidance.

User Memory

  • Profile Extraction Accuracy: Improved extraction accuracy for core profile fields including name, occupation, and interests in the "About Me" section. Enhanced interest distribution analysis to provide more relevant categorization.

Long-Term Memory

  • Duplicate Episodic Memory Cards: Fixed an issue where episodic memory cards displayed duplicate content in the explicit memory section.
  • Wrong User Attribution in Episodic Memory: Resolved bug where episodic memory cards contained QA content from incorrect users. Memory attribution now correctly matches the working memory source.

Dashboard

  • Knowledge Base Count Inconsistency: Fixed a discrepancy where the dashboard API (/api/dashboard/dashboard_data) returned 14 knowledge bases while the knowledge management page API (/api/knowledges/knowledges) returned 12. Both endpoints now return consistent counts.
  • Application Count Inconsistency: Corrected application count calculation that included inactive applications (is_active='f'). Dashboard API previously returned 24, now correctly returns 20 matching the application management page.
  • Total Memory Capacity Calculation: Clarified data source and query methodology for total memory capacity metric to ensure accurate reporting.
  • API Call Count Tracking: Improved API call count tracking with clearer data source definition and query methodology.
  • Knowledge Type Distribution Total: Fixed incorrect total count in knowledge type distribution response. Clarified the source of "memory" type in the distribution breakdown.
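The application-count fix above comes down to filtering on the active flag before counting. A minimal sketch, with the row shape as an invented stand-in for the actual query:

```python
def count_active_apps(apps):
    """Count only active applications (is_active == 't'), matching the
    application management page; the dashboard previously also counted
    rows with is_active == 'f'."""
    return sum(1 for app in apps if app.get("is_active") == "t")
```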

Infrastructure

  • Celery Environment Variable Fix: Corrected misconfigured Celery environment variables that could cause task queue connection failures or unexpected worker behavior in containerized deployments.
  • Database Connection Pool Leak: Fixed "idle in transaction" connection leak in Celery tasks caused by improper session management. Added automatic rollback on connection checkout to prevent stale MVCC snapshots.
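The rollback-on-checkout pattern can be modeled with a toy pool: every borrowed connection is rolled back first, so a connection returned mid-transaction cannot hand a stale snapshot to the next borrower. This sqlite3 sketch illustrates the idea only and is not MemoryBear's actual pool implementation:

```python
import sqlite3
from queue import Queue

class RollbackOnCheckoutPool:
    """Minimal illustrative pool: checkout issues a rollback before
    handing out a connection, preventing the "idle in transaction"
    state described above from leaking between tasks."""

    def __init__(self, db_path, size=2):
        self._pool = Queue()
        for _ in range(size):
            self._pool.put(sqlite3.connect(db_path))

    def checkout(self):
        conn = self._pool.get()
        conn.rollback()  # discard any open transaction state
        return conn

    def checkin(self, conn):
        self._pool.put(conn)
```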

🧭 Looking Ahead

With v0.2.6, MemoryBear crosses an important threshold — the system now handles multimodal inputs, supports cross-platform workflow migration, and features intelligent memory pruning. These are not incremental patches but foundational capabilities that redefine what the platform can do. The addition of enterprise licensing and robust validation layers signals a clear move toward production-grade deployments at scale.

The introduction of voice I/O and file-type support opens entirely new interaction paradigms. Combined with the memory pruning module and enhanced RAG retrieval, MemoryBear is evolving from a memory storage system into an active cognitive assistant — one that not only remembers but also knows what to forget and how to retrieve knowledge efficiently across modalities.

In the next release, we will focus on A2A (Agent-to-Agent) protocol support for seamless multi-agent collaboration, multimodal memory capabilities that extend memory extraction beyond text into voice and visual domains, and application import/export functionality for portable deployment and configuration sharing across environments.



MemoryBear v0.2.5 Release Notes — Refined Foundations

28 Feb 10:36
b9578bd


Release Date: February 26, 2026 | Codename: XingYun (行云 · Flowing Clouds)

Happy Lunar New Year! As we return to work with renewed energy, we're excited to share v0.2.5 with you. This release focuses on refining core user experience and system stability. Building upon the foundation of v0.2.4, we've addressed critical internationalization issues, enhanced user account management capabilities, improved workflow visualization, and strengthened the knowledge graph construction pipeline. These improvements ensure smoother multi-tenant operations and more intuitive interaction patterns across the platform.


🚀 I. Core Upgrade Overview

1. User Experience & Internationalization 🎨

  • Language Parameter Fix: Language preferences are now correctly retained.
  • Email Update Support: Users can now modify email addresses directly in the user management system.

2. Workflow Visualization Enhancements 💬

  • Loop & Iteration Node Output Display: Loop and iteration nodes in workflows now display their execution progress and intermediate outputs in real-time, making it easier to debug complex iterative processes and understand workflow execution state.
  • Variable Selection with Enter Key: Improved workflow variable selection UX by enabling Enter key confirmation, streamlining the variable assignment process and reducing friction during workflow configuration.

3. Optimized Model Management ⚙️

  • Custom models have been removed from the Model Marketplace, streamlining model discovery and selection.

4. Robustness & Bug Fixes 🔧

  • Knowledge Graph Construction Fix: Addressed stability issues in the knowledge graph construction pipeline, ensuring more reliable entity extraction and relationship mapping during document processing.

🧭 Looking Ahead

Version 0.2.5 matures MemoryBear's operational foundations by addressing internationalization edge cases and improving workflow transparency. The workflow visualization improvements lay groundwork for sophisticated debugging and monitoring capabilities. Looking forward, we will deepen enterprise readiness by expanding user management features, refining knowledge graph intelligence, and enhancing workflow orchestration with continued improvements in observability, performance optimization, and seamless integration patterns.



Memory Bear v0.2.4 Release Notes — Intelligent Resilience

11 Feb 11:45
3f87c64


Release Date: February 10, 2026 | Codename: ZhiYuan (智远 · Wisdom Reaching Far)

Building upon the foundation laid by v0.2.3, Memory Bear v0.2.4 delivers a comprehensive upgrade across multimodal capabilities, knowledge integration, and memory system robustness. This release introduces Skills support, expands knowledge base connectivity to major platforms, and significantly hardens the memory configuration pipeline with intelligent fallback mechanisms — ensuring stability even in edge-case scenarios.


🚀 I. Core Upgrade Overview

1. Skills Framework

  • Skills Support: Introduced a new Skills system, enabling extensible capability modules that can be dynamically loaded and orchestrated within agents and workflows.

2. Multimodal & Interaction 💬

  • File Multimodal Support: Full multimodal file handling across message input, LLM processing, and output rendering — supporting richer, media-aware conversations.
  • Voice Interaction: Voice-based interaction capabilities are under active development, laying the groundwork for hands-free conversational experiences. (In Progress)

3. Knowledge Base Integration 📚

  • Feishu Knowledge Base: Seamless integration with Feishu (Lark) document repositories for enterprise knowledge retrieval.
  • Yuque Knowledge Base: Native connector for Yuque documentation platforms, expanding coverage of Chinese enterprise tooling.
  • Web Site Knowledge Base: General-purpose web site crawling and indexing for knowledge base construction from public web content.
  • Visual Model Selection: Knowledge base visual model configuration now supports both LLM and Chat model types, removing the previous restriction to Chat-only selection.

4. Memory Intelligence 🧠

  • Ontology Engineering (Phase 2): Advanced memory scene classification and extraction powered by ontology engineering — enabling structured, domain-aware memory organization with improved categorization accuracy.
  • Default Model Configuration: Emotion analysis, reflection, and memory extraction modules now default to the space-level model, ensuring consistent behavior out of the box.
  • Intelligent Model Fallback: If configured emotion or reflection models are empty or unavailable, the system automatically falls back to the space default model — preventing silent failures.
  • Memory Config Fallback for Models: When any memory-configured model is empty or unavailable, the system gracefully degrades to the space default model.
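The fallback behaviour described in the last two bullets reduces to one rule: use the configured model only if it is set and available, otherwise degrade to the space default. An illustrative sketch with invented names:

```python
def resolve_model(configured, available, space_default):
    """Return the configured model if it is set and available;
    otherwise fall back to the space-level default model instead of
    failing silently."""
    if configured and configured in available:
        return configured
    return space_default
```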

5. Performance & Scalability ⚡

  • Model Concurrency (model_api_keys): Support for concurrent model API key management, enabling parallel model invocations and improved throughput for high-load scenarios.

6. Robustness & Bug Fixes 🔧

  • Memory Config Version Pinning: Fixed an issue where user memory configurations were not pinned to application release versions, causing inconsistent behavior across deployments.
  • Space Default Memory Protection: Space-level default memory configurations are now protected from deletion; user-level configurations remain deletable.
  • Agent & Workflow Config Fallback: Resolved edge cases in Agent and Workflow nodes where memory config could be empty or selected but unconfigured — comprehensive fallback handling now prevents runtime errors.
  • Implicit Memory Field Rename: Corrected user_id to end_user_id in JSON responses from implicit memory interfaces, aligning with the canonical data model.
  • Memory Config ID Migration: Renamed memory_content to memory_config_id in Agent and Workflow memory configurations for API consistency.
  • Worker-Memory Alerts: Resolved warning-level alerts in the worker-memory service, improving operational monitoring clarity.
  • Bilingual Interface Fixes: Fixed Chinese/English language inconsistencies across memory-related API interfaces.
  • EndUser Memory Config Auto-Backfill (New Users): When a newly created EndUser has memory_config_id as None, the system automatically fetches the latest release's memory_config_id and backfills it.
  • EndUser Memory Config Auto-Backfill (Existing Users): For existing EndUsers with memory_config_id as None, the system similarly retrieves and backfills from the latest release — ensuring backward compatibility without manual migration.
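Both auto-backfill fixes follow the same rule: when an EndUser's memory_config_id is None, copy it from the latest release. A sketch with dicts standing in for ORM rows (field names are illustrative):

```python
def backfill_memory_config(end_user, latest_release):
    """If an EndUser has no memory_config_id, backfill it from the
    latest release so old and new users behave consistently without
    manual migration."""
    if end_user.get("memory_config_id") is None:
        end_user["memory_config_id"] = latest_release["memory_config_id"]
    return end_user
```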

🧭 Looking Ahead

With v0.2.4, Memory Bear takes a decisive step toward production-grade resilience. The intelligent fallback mechanisms across memory configuration, model selection, and EndUser management reflect a maturing platform that gracefully handles real-world variability rather than requiring perfect configuration upfront.

The introduction of the Skills framework and multimodal file support signals the next phase of Memory Bear's evolution — moving from a memory-centric system to a fully extensible cognitive platform. As voice interaction capabilities come online, users will experience increasingly natural and fluid engagement patterns.

Looking forward, we will continue deepening ontology-driven memory intelligence, expanding knowledge base connectors, and refining the Skills ecosystem to unlock more powerful agent orchestration scenarios.




MemoryBear v0.2.3 Release Notes — Still Waters Run Deep

06 Feb 11:11
79ab929


Release Date: February 4, 2026 | Codename: Settle (归墟)

This release focuses on stability and refinement, addressing critical bugs across memory systems, workflow sharing, and prompt engineering. Like rivers converging into the deep sea, all streams find their place — calm on the surface, powerful beneath.

🚀 I. Core Upgrade Overview

1. Intelligence & Memory 🧠

  • Prompt Engineering Module: New dedicated prompt engineering capabilities for optimizing AI interactions
  • Long-term & Short-term Memory Integration: Enhanced memory lifecycle management between short-term and long-term storage
  • Bilingual Memory Support: Resolved dual-language issues in episodic and explicit memory systems

2. System Architecture ⚙️

  • Reflection Task Worker: Added worker-periodic container for scheduled reflection tasks
  • Model Configuration Fallback: Memory management now properly falls back to workspace model when no specific model is configured

3. Bug Fixes 🔧

  • Workflow Sharing: Fixed issue where multiple conversations were created during multi-turn dialogues after sharing
  • Streaming Output: Resolved missing "end" marker in chat streaming output that blocked continued conversations
  • Entity Details: Removed "unknown type" memories from long-term memory "All" view
  • Prompt Template Paths: Fixed path resolution errors for the conversation_summary_system.jinja2 conversation summary template
  • Knowledge Base Schema: Renamed strategy to retrieve_type in Agent and Workflow knowledge base configurations
  • Workspace Avatar: Optimized frequent model API calls when updating workspace avatars
  • Memory Dashboard: Fixed /dashboard/end_users endpoint returning empty responses
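
The strategy to retrieve_type rename above implies a simple key migration for stored knowledge base configurations; this sketch (field values are hypothetical) shows the shape of the change:

```python
def migrate_kb_config(config):
    """Rename the legacy 'strategy' key to 'retrieve_type' in a knowledge
    base configuration dict, leaving all other keys untouched."""
    if "strategy" in config and "retrieve_type" not in config:
        config["retrieve_type"] = config.pop("strategy")
    return config

old = {"kb_id": "kb-1", "strategy": "semantic", "top_k": 5}
assert migrate_kb_config(old) == {"kb_id": "kb-1", "retrieve_type": "semantic", "top_k": 5}
```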

🧭 Looking Ahead

v0.2.4 will continue enhancing workflow code execution and introduce the ontology engineering and memory configuration portal.

MemoryBear — remember better, work smarter. 🐻✨


MemoryBear v0.2.2 Release Notes — Stability & Performance

02 Feb 05:54

Release Date: January 31, 2026 | Codename: Temper (淬锋)

This release focuses on platform stability and performance optimization — true to its codename "淬锋" (tempered blade), we've refined the system through rigorous testing and fixes. Introducing Python code execution for Agent workflows, improved model concurrency management, and critical fixes across the memory system.


🚀 I. Core Upgrade Overview

1. Agent Platform Enhancements ⚙️

  • Model Concurrency Management: Enhanced Model Plaza with improved concurrent model request handling and resource allocation

2. Memory System Improvements 🧠

  • Celery Queue Fix: Resolved task queue issues for more reliable asynchronous memory processing
  • Memory Agent Optimization: Improved memory Agent performance and efficiency
  • API Response Speed: Optimized memory interface response times for faster operations

3. Emotional Memory & Recognition 💬

  • Emotion Memory Role Recognition Fix: Resolved issues with role/character identification in emotional memory contexts
  • Role Recognition Enhancement: Improved character/role identification accuracy in conversation memory

🧭 Looking Ahead

MemoryBear continues advancing toward human-like memory capabilities for AI applications. This stability-focused release strengthens the foundation for our Perception → Refinement → Association → Forgetting paradigm.

Future releases will build on this solid base with expanded Agent capabilities and deeper memory intelligence features.

