
Releases: aristoteleo/PantheonOS

v0.5.3

22 Apr 21:37


What's New in v0.5.3

Features

  • Gene Panel Selection Pipeline — configurable multi-algorithm gene panel design (HVG, DE, RF, scGeneFit, SpaPROS) with runtime-tunable hyperparameters via GenePanelConfig (#95)
  • Chat Export/Import — portable zip bundles with chat.jsonl, referenced files, and path rewriting for offline replay and sharing (#96)
  • Paper Writing & Graph Maker Teams — new multi-agent team templates for academic paper writing and figure generation
  • Dynamic Memory & Learning System — unified memory context injection with compression support (#86)
  • Native Vision in Tool Results — image content blocks in tool_result for capable providers, with 53+ unit tests (#88)
  • OAuth Split Browser Flow — frontend-driven OAuth login for smoother authentication (#93)
  • OpenAI Responses API — default routing to Responses API with Chat Completions fallback
  • Think Tool Plugin — refactored think tool into plugin-managed leader injection (#87)
  • Task & Notebook Plugins — task toolset converted to plugin with closure-based hooks; list_kernels and setup_kernel actions added
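The Chat Export/Import feature above can be illustrated with a minimal sketch: messages go into a chat.jsonl inside a zip, referenced files are copied into the archive, and absolute paths are rewritten to bundle-relative ones. The message schema (a "file" key holding a path) is an assumption for illustration, not the project's actual format.

```python
import json
import zipfile
from pathlib import Path

def export_chat(messages: list[dict], bundle_path: Path) -> None:
    """Write messages to chat.jsonl inside a zip bundle, copying any
    referenced files into a files/ folder and rewriting their paths to
    bundle-relative ones so the archive replays offline.

    NOTE: the "file" key on messages is a hypothetical schema for this
    sketch, not Pantheon's real one."""
    with zipfile.ZipFile(bundle_path, "w") as zf:
        rewritten = []
        for msg in messages:
            msg = dict(msg)
            path = msg.get("file")
            if path:
                src = Path(path)
                arcname = f"files/{src.name}"
                zf.write(src, arcname)   # bundle the referenced file
                msg["file"] = arcname    # rewrite absolute -> relative path
            rewritten.append(msg)
        zf.writestr("chat.jsonl", "\n".join(json.dumps(m) for m in rewritten))
```

On import, the inverse operation would extract files/ next to chat.jsonl and resolve the relative paths against the extraction directory.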

Fixes

  • Gemini parallel tool calls — fix invalid JSON / 400 responses on parallel tool calls
  • Gemini image generation — surface inlineData images end-to-end (#92)
  • Kernel E2BIG — prevent [Errno 7] on kernel launch in long sessions by stripping large env vars
  • Chat rename race — fix auto-rename race condition, revert-to-"New Chat" bug, and stale-flag guard (#89)
  • Notebook toolset — resolve multiple bugs in notebook operations
  • Sub-agent streams — tag call_agent streams with parent_tool_call_id and child exec_id (#90)
  • Context bloat — prevent context_variables bloat on kernel spawn
  • Factory settings — fix settings.json and template directory scanning
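The kernel E2BIG fix above amounts to filtering oversized environment variables before spawning the kernel process, since `exec()` raises E2BIG (errno 7) when the combined argv+environment block exceeds the OS limit. A minimal sketch, with an assumed size threshold:

```python
import os

# Assumed per-variable threshold for this sketch; the real limit is on the
# combined size of argv + environment passed to exec().
MAX_ENV_VALUE = 32_768

def kernel_env(max_value: int = MAX_ENV_VALUE) -> dict[str, str]:
    """Return a copy of os.environ with oversized values stripped, so a
    long session's accumulated variables cannot overflow the exec()
    argument limit when a new kernel is launched."""
    return {k: v for k, v in os.environ.items() if len(v) <= max_value}
```

The filtered dict would then be passed as `env=` to the kernel's `subprocess.Popen` call.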

Refactors

  • Recursive template directory scanning with subpath preservation
  • Unified LLM provider config fallbacks
  • Simplified delegation fork delivery
  • Content blocks API + proxy-mode capability fix
  • Removed orphan skill_learning templates and dead imports

Full Changelog: v0.5.2...v0.5.3

v0.5.2

14 Apr 15:01


What's New

WSL Support

  • WSL browser fallback for --auto-ui: automatically opens Windows default browser via PowerShell/cmd.exe when running inside WSL (#76, thanks @Rawdream-Xu)
  • URL quoting to prevent shell metacharacter injection in WSL browser commands
  • Warning-level logging for WSL browser fallback failures
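The WSL fallback above can be sketched as: detect WSL from the kernel version string, then open the Windows default browser via PowerShell or cmd.exe with the URL percent-quoted. The safe-character set and command flags here are illustrative assumptions, not the merged implementation.

```python
import shutil
from urllib.parse import quote

def in_wsl() -> bool:
    """Detect WSL via the 'microsoft' tag in the kernel version string."""
    try:
        with open("/proc/version") as f:
            return "microsoft" in f.read().lower()
    except OSError:
        return False

def wsl_browser_command(url: str) -> list[str]:
    """Build an argv list for opening the Windows default browser from
    inside WSL. Percent-quoting strips shell metacharacters (&, ;, |)
    that cmd.exe's `start` would otherwise interpret; the exact safe
    set here is an assumption."""
    safe_url = quote(url, safe=":/?=#%")
    exe = shutil.which("powershell.exe") or "cmd.exe"
    if exe.endswith("powershell.exe"):
        return [exe, "-NoProfile", "-Command", "Start-Process", safe_url]
    return [exe, "/c", "start", safe_url]
```

Passing the result to `subprocess.run` as a list (never `shell=True`) adds a second layer of injection protection on the Linux side.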

Chat Auto-Rename

  • Real-time chat rename notifications: backend now publishes chat_renamed NATS event after auto-generating a chat title, so the frontend updates instantly without polling
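The chat_renamed flow above can be sketched with a pure event handler plus a nats-py subscription. The payload field names (`chat_id`, `title`) and the subject name are assumptions for illustration.

```python
import json

def apply_rename(chats: dict, payload: bytes) -> dict:
    """Pure handler for a chat_renamed event: update the local
    chat-id -> title map. Field names here are assumptions."""
    event = json.loads(payload)
    chats[event["chat_id"]] = event["title"]
    return chats

async def listen_for_renames(chats: dict, url: str = "nats://127.0.0.1:4222"):
    """Subscribe to chat_renamed over NATS using nats-py; imported
    lazily so the pure handler above is testable without a broker."""
    import nats
    nc = await nats.connect(url)

    async def cb(msg):
        apply_rename(chats, msg.data)

    await nc.subscribe("chat_renamed", cb=cb)  # subject name assumed
```

Because the backend pushes the event, the frontend can drop its title-polling loop entirely.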

Full Changelog: v0.5.1...v0.5.2

v0.5.1 — Claw Gateway, Gemini REST API, Reasoning Effort & Custom Models

10 Apr 03:06


Highlights

🔌 Pantheon-Claw: Multi-Channel IM Gateway

Chat with Pantheon from 7 messaging platforms: Telegram, Discord, Slack, WeChat, Feishu, QQ, and iMessage.

  • Image & file support — send/receive images and documents across all channels
  • Real-time progress — live tool execution updates ("✓ shell run command (3.2s)")
  • Multi-user group chat — sender name identification for @mentions in channels
  • Platform-native formatting — Telegram MarkdownV2, Slack mrkdwn, Discord markdown
  • File upload — users can send PDFs, CSVs, etc. to the agent via Telegram, Discord, and Slack
  • Auto bot commands — Telegram auto-registers /menu, Slack uses ! prefix to avoid conflicts
  • Welcome messages — bot posts setup instructions when joining a Slack channel
  • Step-by-step setup guides — collapsible configuration guides in the UI for each platform
  • Workspace isolation — /isolate command to toggle between isolated and shared workspaces
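Platform-native formatting, as listed above, means escaping differs per channel. Telegram's MarkdownV2 dialect, for instance, requires backslash-escaping a fixed set of special characters; a minimal escaper per the Bot API formatting rules:

```python
# Special characters that Telegram MarkdownV2 requires to be escaped
# in plain text, per the Bot API formatting rules.
MDV2_SPECIALS = r"_*[]()~`>#+-=|{}.!"

def escape_markdown_v2(text: str) -> str:
    """Backslash-escape MarkdownV2 specials so arbitrary agent output
    (e.g. '✓ shell run command (3.2s)') renders as literal text."""
    return "".join("\\" + ch if ch in MDV2_SPECIALS else ch for ch in text)
```

Slack mrkdwn and Discord markdown would each need their own, smaller escape sets.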

🧠 Reasoning Effort Control

  • +think model suffix — append +think to any model name to enable deep reasoning (e.g. gemini/gemini-2.5-flash+think)
  • Effort selector UI — lightbulb button next to model selector with Off/Low/Medium/High options
  • Thinking content display — model reasoning shown in Timeline as collapsible "Thinking" sections
  • Anthropic effort mapping — reasoning_effort mapped to budget_tokens (low=5K, medium=10K, high=30K)
  • reasoning_content separation — thinking content kept separate from main response text
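The +think suffix and the Anthropic effort mapping described above can be sketched as two small helpers. The budget numbers come from the notes (low=5K, medium=10K, high=30K); the default effort chosen when only +think is given is an assumption.

```python
ANTHROPIC_BUDGETS = {"low": 5_000, "medium": 10_000, "high": 30_000}

def parse_model(name: str):
    """Split an optional +think suffix off a model name, e.g.
    'gemini/gemini-2.5-flash+think' -> base name plus an effort level.
    Defaulting +think to 'medium' is an assumption of this sketch."""
    if name.endswith("+think"):
        return name[: -len("+think")], "medium"
    return name, None

def anthropic_budget_tokens(effort):
    """Map reasoning_effort to Anthropic thinking budget_tokens
    (low=5K, medium=10K, high=30K per the release notes)."""
    return ANTHROPIC_BUDGETS.get(effort) if effort else None
```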

⚡ Gemini REST API Adapter

  • Rewrote Gemini adapter from google-genai SDK to direct REST API (httpx), matching litellm's approach
  • Fixes tool call parameter mixing issues with parallel function calls
  • SSE streaming, v1alpha/v1beta version selection, UUID-based tool call IDs
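The SSE streaming piece of the REST adapter boils down to parsing `data:` lines from the event stream. A minimal parser (the surrounding httpx request shape and endpoint path are assumptions, shown only in the comment):

```python
import json

def parse_sse_data(line: str):
    """Extract the JSON payload from one SSE 'data:' line; comments,
    blank keep-alives, and terminal [DONE] markers return None.

    Streaming sketch around it (endpoint shape assumed):
      with httpx.stream("POST", f"{base}/v1beta/models/{model}:"
                        "streamGenerateContent?alt=sse", json=body) as r:
          for line in r.iter_lines():
              chunk = parse_sse_data(line)
    """
    if not line.startswith("data:"):
        return None
    body = line[len("data:"):].strip()
    if not body or body == "[DONE]":
        return None
    return json.loads(body)
```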

🔧 Custom Models

  • Custom Models editor — add unlimited custom model endpoints in the config panel
  • Each model has name, base URL, API key, and provider type (OpenAI/Anthropic)
  • Models appear in the selector under a "Custom" provider group
  • Saved to .pantheon/custom_models.json, persists across restarts
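A loader for the persisted file might look like the sketch below. The four fields match the list above (name, base URL, API key, provider type), but the exact JSON key names are assumptions.

```python
import json
from pathlib import Path

def load_custom_models(path: Path = Path(".pantheon/custom_models.json")) -> list[dict]:
    """Read persisted custom model endpoints; returns [] when the file
    does not exist yet. Expected entry shape (key names assumed):
      {"name": "my-llm", "base_url": "http://localhost:8000/v1",
       "api_key": "sk-local", "provider": "openai"}"""
    if not path.exists():
        return []
    return json.loads(path.read_text())
```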

🖥️ Frontend Improvements

  • Questions inline in chat — notify_user questions with radio/checkbox/text options rendered directly in chat, not hidden in timeline
  • Answer restoration — previously submitted answers restored from history on page refresh
  • PDF flicker fix — DocumentFragment off-screen rendering eliminates PDF preview flickering
  • Notify card real-time update — tool response version tracking forces timeline recompute
  • NotifyUserCard collapse sync — cards auto-expand when streaming ends
  • Quota 404 fix — chat message quota increment only in Hub mode
  • No scroll on Proceed — clicking Proceed no longer scrolls to top
  • Empty chat on connect — always opens a fresh empty chat, verifies via message count

🔧 Other Changes

  • Codex OAuth — browser-based login, CLI import, NATS RPC integration
  • Ollama auto-detection — local Ollama models appear in selector automatically
  • Token optimization — autocompact, microcompact, cache-safe runtime params
  • Agent metrics — CPU/memory reporting in ping callback
  • Remove litellm dependency — lightweight provider catalog + native SDK adapters
  • Remove non-functional pro models from OpenAI defaults
  • Custom endpoint detection — reads API keys from settings.json, not just env vars

Breaking Changes

  • Gemini adapter now uses REST API instead of google-genai SDK
  • reasoning_content is no longer merged into content for thinking models

Full Changelog: v0.5.0...v0.5.1

Pantheon Desktop v0.3.0

11 Apr 02:35



Built from pantheon-ui · 90f9806fb35eeeac3583a1b81cb25367a509cd41

Auto-updater endpoint (stable):
https://github.com/aristoteleo/PantheonOS/releases/download/desktop-latest/latest.json

Linux auto-update is temporarily disabled. Please install updates manually via .deb or .rpm.

Pantheon Desktop (Latest)

11 Apr 02:35


Rolling release — always points to the most recent Pantheon Desktop build.

Current version: v0.3.0

Versioned release: https://github.com/aristoteleo/PantheonOS/releases/tag/desktop-v0.3.0

Pantheon Desktop v0.1.5

09 Apr 12:45



Built from pantheon-ui · 1a3580a6b7142dc6535ed3b3788fbf4a8d6db6bb

Auto-updater endpoint (stable):
https://github.com/aristoteleo/PantheonOS/releases/download/desktop-latest/latest.json

Linux auto-update is temporarily disabled. Please install updates manually via .deb or .rpm.

Pantheon Desktop v0.1.4

09 Apr 11:43



Built from pantheon-ui · 6083dfec38d79a7eb3ea9e3e1076d777e3d6246a

Auto-updater endpoint (stable):
https://github.com/aristoteleo/PantheonOS/releases/download/desktop-latest/latest.json

Linux auto-update is temporarily disabled. Please install updates manually via .deb or .rpm.

Pantheon Desktop v0.1.3

09 Apr 09:50



Built from pantheon-ui · ea544e674d77bff0bd145757c6d5c4990cb67f08

Auto-updater endpoint (stable):
https://github.com/aristoteleo/PantheonOS/releases/download/desktop-latest/latest.json

Linux auto-update is temporarily disabled. Please install updates manually via .deb or .rpm.

v0.5.0

22 Mar 08:35


Pantheon v0.5.0

Highlights

Workspace Isolation Mode — Each chat session can now optionally use its own private workspace directory, isolating files, code execution, and shell commands from other sessions.

New Features

  • Two-mode workspace system (project / isolated): Default is project (shared), users can toggle to isolated mode where each chat gets a private directory at .pantheon/workspaces/{session_id}/ (#39, @1-CellBio)
  • Runtime workspace mode switching via new set_chat_workspace_mode() tool — switch between project and isolated mode without creating a new chat
  • All ToolSets are workspace-aware: file_manager, file_transfer, shell, notebook, python_interpreter all respect the active workspace mode
  • Store CLI: Publish, install, search, and manage agent/team/skill packages (pantheon store ...)
  • Store seed system: Batch import packages from factory templates and external repos (LabClaw, OpenClaw Medical, Claude Scientific, ClawBio, OmicClaw)
  • PANTHEON_HUB_URL env var support for Store API client
  • Evolution system improvements: Better evaluator format, real-time visualization, improved error reporting
  • CLI -i flag for initial input
  • Activity-aware idle cleanup via _ping responses
  • Custom endpoints: Support for custom_anthropic and custom_openai providers
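The two-mode workspace resolution described above can be sketched as a single function: shared project root in project mode, a private per-session directory under `.pantheon/workspaces/{session_id}/` in isolated mode (the directory pattern is from the notes; the function name and signature are assumptions).

```python
from pathlib import Path

def resolve_workspace(project_root: Path, mode: str, session_id: str) -> Path:
    """Return the working directory for a chat session: the shared
    project root in 'project' mode, or a private directory at
    .pantheon/workspaces/{session_id}/ in 'isolated' mode."""
    if mode == "isolated":
        ws = project_root / ".pantheon" / "workspaces" / session_id
        ws.mkdir(parents=True, exist_ok=True)
        return ws
    return project_root
```

Every workspace-aware ToolSet (file_manager, shell, notebook, ...) would then chdir or sandbox into whatever path this resolver returns for the active session.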

Bug Fixes

  • Fix /keys command ProviderMenuEntry unpacking error
  • Fix move_file() crash by creating parent directories before move
  • Fix empty file read_file crash (start_line out of range for 0-line files)
  • Fix ws://127.0.0.1 → ws://localhost for Windows IPv6 compatibility
  • Fix LaTeX compilation to resolve relative figure paths
  • Fix shell env over max_size issue
  • Fix endpoint startup timeout (3s → 30s)

Desktop App (PyInstaller)

  • Switch to onedir mode with exclude_binaries=True for correct macOS bundling
  • Fix nats-server binary missing from bundle
  • Fix Windows DLL corruption and read-only install location issues
  • Handle fakeredis/lupa compatibility


v0.4.10

16 Mar 07:05


v0.4.10: Pantheon Store, Custom Endpoints, Docker & Bug Fixes

Pantheon Store

  • Store CLI (pantheon store) for publishing, installing, searching, and managing skill/agent/team packages
  • Store seed system for batch-importing packages from external repos (LabClaw, OmicClaw, ClawBio, Claude Scientific)
  • Install state tracking with local manifest (~/.pantheon/store_installs.json)
  • RPC methods (install_store_package, get_installed_store_packages) for frontend Store dialog integration
  • Skill file bundling with SKILL.md directory format and source attribution
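The install-state tracking above (a local manifest at ~/.pantheon/store_installs.json) can be sketched as a read-modify-write helper; the per-package entry schema is an assumption.

```python
import json
from pathlib import Path

def record_install(manifest_path: Path, package: str, version: str) -> dict:
    """Record a package install in the local manifest (the release notes
    place it at ~/.pantheon/store_installs.json); the entry shape
    {"version": ...} is an assumption of this sketch."""
    manifest = {}
    if manifest_path.exists():
        manifest = json.loads(manifest_path.read_text())
    manifest[package] = {"version": version}
    manifest_path.parent.mkdir(parents=True, exist_ok=True)
    manifest_path.write_text(json.dumps(manifest, indent=2))
    return manifest
```

The RPC methods (`install_store_package`, `get_installed_store_packages`) would wrap reads and writes of exactly this file for the frontend Store dialog.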

Custom Endpoints

  • Flexible custom_models configuration replacing the old unified LLM_API_BASE/LLM_API_KEY proxy
  • Support CUSTOM_ANTHROPIC_* and CUSTOM_OPENAI_* environment variables
  • Interactive /keys command for custom endpoint management
  • Custom model aliases with highest priority in provider detection
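The "highest priority" rule for custom aliases can be shown with a tiny resolver: check the alias table before any built-in prefix matching. The prefix table below is an illustrative subset, not the project's full catalog.

```python
def detect_provider(model: str, custom_aliases: dict[str, str]) -> str:
    """Resolve which provider serves a model name. Custom model aliases
    win over everything else; the prefix table is illustrative only."""
    if model in custom_aliases:
        return custom_aliases[model]
    for prefix, provider in (("gpt-", "openai"),
                             ("claude-", "anthropic"),
                             ("gemini", "gemini")):
        if model.startswith(prefix):
            return provider
    return "unknown"
```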

Docker

  • Dual-mode Dockerfile with GitHub Actions CI/CD workflow
  • Docker entrypoint supporting both CLI and UI modes

CLI

  • -i flag for initial input: pantheon cli -i "analyze my data" starts REPL with an immediate command

Infrastructure

  • Activity-aware idle cleanup — _ping responses now report active threads/tasks so Hub can skip cleanup for busy pods
  • Shell env fix for oversized environment variables in package runtime
  • Setup wizard SKIP_SETUP_WIZARD env var support

Bug Fixes

  • LaTeX PDF compilation — compile in source directory so relative figure paths (../figures.png) resolve correctly
  • Speech-to-text — handle base64-encoded audio over JSON transport; add transcription timeout and spinner UI
  • Context menu z-index — teleport WorkspacePanel context menu to body so it renders above chat cards
  • Remove unused FilesystemTab.vue legacy component
  • Skip Hub API call in local mode to avoid connection errors
  • Fix test failures and Docker Hub repo name