Releases: aristoteleo/PantheonOS
v0.5.3
What's New in v0.5.3
Features
- Gene Panel Selection Pipeline — configurable multi-algorithm gene panel design (HVG, DE, RF, scGeneFit, SpaPROS) with runtime-tunable hyperparameters via `GenePanelConfig` (#95)
- Chat Export/Import — portable zip bundles with `chat.jsonl`, referenced files, and path rewriting for offline replay and sharing (#96)
- Paper Writing & Graph Maker Teams — new multi-agent team templates for academic paper writing and figure generation
- Dynamic Memory & Learning System — unified memory context injection with compression support (#86)
- Native Vision in Tool Results — image content blocks in `tool_result` for capable providers, with 53+ unit tests (#88)
- OAuth Split Browser Flow — frontend-driven OAuth login for smoother authentication (#93)
- OpenAI Responses API — default routing to the Responses API with Chat Completions fallback
- Think Tool Plugin — refactored the think tool into plugin-managed leader injection (#87)
- Task & Notebook Plugins — task toolset converted to a plugin with closure-based hooks; `list_kernels` and `setup_kernel` actions added
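The Chat Export/Import bundle described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: the helper names and the `files/` layout inside the zip are assumptions; only `chat.jsonl` and the zip format come from the release notes.

```python
import json
import zipfile
from pathlib import Path

def export_chat(bundle_path: Path, messages: list[dict], files: list[Path]) -> None:
    """Write a portable chat bundle: chat.jsonl plus referenced files.

    Hypothetical helper -- the real bundle layout may differ.
    """
    with zipfile.ZipFile(bundle_path, "w") as zf:
        # One JSON object per line (JSONL), so the log is easy to stream/replay.
        zf.writestr("chat.jsonl", "\n".join(json.dumps(m) for m in messages))
        for f in files:
            # Rewrite absolute paths to bundle-relative ones for offline sharing.
            zf.write(f, arcname=f"files/{f.name}")

def import_chat(bundle_path: Path) -> list[dict]:
    """Read the message log back out of a bundle."""
    with zipfile.ZipFile(bundle_path) as zf:
        lines = zf.read("chat.jsonl").decode().splitlines()
        return [json.loads(line) for line in lines if line.strip()]
```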
Fixes
- Gemini parallel tool calls — fix invalid JSON / 400 responses on parallel tool calls
- Gemini image generation — surface `inlineData` images end-to-end (#92)
- Kernel E2BIG — prevent `[Errno 7]` on kernel launch in long sessions by stripping large env vars
- Chat rename race — fix auto-rename race condition, revert-to-"New Chat" bug, and stale-flag guard (#89)
- Notebook toolset — resolve multiple bugs in notebook operations
- Sub-agent streams — tag `call_agent` streams with `parent_tool_call_id` and child `exec_id` (#90)
- Context bloat — prevent `context_variables` bloat on kernel spawn
- Factory settings — fix `settings.json` and template directory scanning
Refactors
- Recursive template directory scanning with subpath preservation
- Unified LLM provider config fallbacks
- Simplified delegation fork delivery
- Content blocks API + proxy-mode capability fix
- Removed orphan skill_learning templates and dead imports
Full Changelog: v0.5.2...v0.5.3
v0.5.2
What's New
WSL Support
- WSL browser fallback for `--auto-ui`: automatically opens the Windows default browser via PowerShell/cmd.exe when running inside WSL (#76, thanks @Rawdream-Xu)
- URL quoting to prevent shell metacharacter injection in WSL browser commands
- Warning-level logging for WSL browser fallback failures
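The fallback and quoting described above could look roughly like this. A sketch only: the function name, flag set, and try-order are assumptions; the release notes confirm only PowerShell/cmd.exe fallback, URL quoting, and warning-level logging on failure.

```python
import logging
import shlex
import shutil
import subprocess

log = logging.getLogger(__name__)

def wsl_open_browser(url: str) -> bool:
    """Open the Windows default browser from inside WSL.

    Tries powershell.exe, then cmd.exe; returns False after logging a
    warning instead of raising, mirroring the behaviour described above.
    """
    # Quote the URL so shell metacharacters (&, ;, |) cannot inject commands.
    safe_url = shlex.quote(url)
    candidates = [
        ["powershell.exe", "-NoProfile", "Start-Process", safe_url],
        ["cmd.exe", "/c", "start", "", safe_url],
    ]
    for cmd in candidates:
        if shutil.which(cmd[0]) is None:
            continue  # binary not on PATH (e.g. not actually inside WSL)
        try:
            subprocess.run(cmd, check=True, timeout=10)
            return True
        except (subprocess.SubprocessError, OSError) as exc:
            log.warning("WSL browser fallback via %s failed: %s", cmd[0], exc)
    return False
```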
Chat Auto-Rename
- Real-time chat rename notifications: the backend now publishes a `chat_renamed` NATS event after auto-generating a chat title, so the frontend updates instantly without polling
Full Changelog
v0.5.1 — Claw Gateway, Gemini REST API, Reasoning Effort & Custom Models
Highlights
🔌 Pantheon-Claw: Multi-Channel IM Gateway
Chat with Pantheon from 7 messaging platforms: Telegram, Discord, Slack, WeChat, Feishu, QQ, and iMessage.
- Image & file support — send/receive images and documents across all channels
- Real-time progress — live tool execution updates ("✓ shell run command (3.2s)")
- Multi-user group chat — sender name identification for @mentions in channels
- Platform-native formatting — Telegram MarkdownV2, Slack mrkdwn, Discord markdown
- File upload — users can send PDFs, CSVs, etc. to the agent via Telegram, Discord, Slack
- Auto bot commands — Telegram auto-registers /menu, Slack uses ! prefix to avoid conflicts
- Welcome messages — bot posts setup instructions when joining a Slack channel
- Step-by-step setup guides — collapsible configuration guides in the UI for each platform
- Workspace isolation — `/isolate` command to toggle between isolated and shared workspaces
🧠 Reasoning Effort Control
- `+think` model suffix — append `+think` to any model name to enable deep reasoning (e.g. `gemini/gemini-2.5-flash+think`)
- Effort selector UI — lightbulb button next to the model selector with Off/Low/Medium/High options
- Thinking content display — model reasoning shown in the Timeline as collapsible "Thinking" sections
- Anthropic effort mapping — `reasoning_effort` mapped to `budget_tokens` (low=5K, medium=10K, high=30K)
- `reasoning_content` separation — thinking content kept separate from the main response text
⚡ Gemini REST API Adapter
- Rewrote Gemini adapter from google-genai SDK to direct REST API (httpx), matching litellm's approach
- Fixes tool call parameter mixing issues with parallel function calls
- SSE streaming, v1alpha/v1beta version selection, UUID-based tool call IDs
🔧 Custom Models
- Custom Models editor — add unlimited custom model endpoints in the config panel
- Each model has name, base URL, API key, and provider type (OpenAI/Anthropic)
- Models appear in the selector under a "Custom" provider group
- Saved to `.pantheon/custom_models.json`; persists across restarts
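The persistence described above can be illustrated with a small sketch. The file path comes from the release notes; the JSON schema (field names) and helper names are assumptions for illustration.

```python
import json
from pathlib import Path

def save_custom_models(models: list[dict], root: Path = Path(".pantheon")) -> Path:
    """Persist custom model endpoints so they survive restarts.

    Each entry carries the fields listed above: name, base URL, API key,
    and provider type. The exact schema here is an assumption.
    """
    root.mkdir(parents=True, exist_ok=True)
    path = root / "custom_models.json"
    path.write_text(json.dumps(models, indent=2))
    return path

def load_custom_models(root: Path = Path(".pantheon")) -> list[dict]:
    """Read the saved endpoints back, or return an empty list if absent."""
    path = root / "custom_models.json"
    return json.loads(path.read_text()) if path.exists() else []
```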
🖥️ Frontend Improvements
- Questions inline in chat — notify_user questions with radio/checkbox/text options rendered directly in chat, not hidden in timeline
- Answer restoration — previously submitted answers restored from history on page refresh
- PDF flicker fix — DocumentFragment off-screen rendering eliminates PDF preview flickering
- Notify card real-time update — tool response version tracking forces timeline recompute
- NotifyUserCard collapse sync — cards auto-expand when streaming ends
- Quota 404 fix — chat message quota increment only in Hub mode
- No scroll on Proceed — clicking Proceed no longer scrolls to top
- Empty chat on connect — always opens a fresh empty chat, verifies via message count
🔧 Other Changes
- Codex OAuth — browser-based login, CLI import, NATS RPC integration
- Ollama auto-detection — local Ollama models appear in selector automatically
- Token optimization — autocompact, microcompact, cache-safe runtime params
- Agent metrics — CPU/memory reporting in ping callback
- Remove litellm dependency — lightweight provider catalog + native SDK adapters
- Remove non-functional pro models from OpenAI defaults
- Custom endpoint detection — reads API keys from settings.json, not just env vars
Breaking Changes
- Gemini adapter now uses REST API instead of google-genai SDK
- `reasoning_content` is no longer merged into `content` for thinking models
Full Changelog
Pantheon Desktop v0.3.0
Built from pantheon-ui · 90f9806fb35eeeac3583a1b81cb25367a509cd41
Auto-updater endpoint (stable):
https://github.com/aristoteleo/PantheonOS/releases/download/desktop-latest/latest.json
Linux auto-update is temporarily disabled. Please install updates manually via .deb or .rpm.
Pantheon Desktop (Latest)
Rolling release — always points to the most recent Pantheon Desktop build.
Current version: v0.3.0
Versioned release: https://github.com/aristoteleo/PantheonOS/releases/tag/desktop-v0.3.0
Pantheon Desktop v0.1.5
Built from pantheon-ui · 1a3580a6b7142dc6535ed3b3788fbf4a8d6db6bb
Auto-updater endpoint (stable):
https://github.com/aristoteleo/PantheonOS/releases/download/desktop-latest/latest.json
Linux auto-update is temporarily disabled. Please install updates manually via .deb or .rpm.
Pantheon Desktop v0.1.4
Built from pantheon-ui · 6083dfec38d79a7eb3ea9e3e1076d777e3d6246a
Auto-updater endpoint (stable):
https://github.com/aristoteleo/PantheonOS/releases/download/desktop-latest/latest.json
Linux auto-update is temporarily disabled. Please install updates manually via .deb or .rpm.
Pantheon Desktop v0.1.3
Built from pantheon-ui · ea544e674d77bff0bd145757c6d5c4990cb67f08
Auto-updater endpoint (stable):
https://github.com/aristoteleo/PantheonOS/releases/download/desktop-latest/latest.json
Linux auto-update is temporarily disabled. Please install updates manually via .deb or .rpm.
v0.5.0
Pantheon v0.5.0
Highlights
Workspace Isolation Mode — Each chat session can now optionally use its own private workspace directory, isolating files, code execution, and shell commands from other sessions.
New Features
- Two-mode workspace system (`project`/`isolated`): default is `project` (shared); users can toggle to `isolated` mode, where each chat gets a private directory at `.pantheon/workspaces/{session_id}/` (#39, @1-CellBio)
- Runtime workspace mode switching via the new `set_chat_workspace_mode()` tool — switch between project and isolated mode without creating a new chat
- All ToolSets are workspace-aware: `file_manager`, `file_transfer`, `shell`, `notebook`, and `python_interpreter` all respect the active workspace mode
- Store CLI: publish, install, search, and manage agent/team/skill packages (`pantheon store ...`)
- Store seed system: batch import packages from factory templates and external repos (LabClaw, OpenClaw Medical, Claude Scientific, ClawBio, OmicClaw)
- `PANTHEON_HUB_URL` env var support for the Store API client
- Evolution system improvements: better evaluator format, real-time visualization, improved error reporting
- CLI `-i` flag for initial input
- Activity-aware idle cleanup via `_ping` responses
- Custom endpoints: support for `custom_anthropic` and `custom_openai` providers
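The two-mode workspace resolution above can be sketched as follows. The `.pantheon/workspaces/{session_id}/` path and the mode names come from the release notes; the function name and signature are illustrative.

```python
from pathlib import Path

def resolve_workspace(mode: str, project_root: Path, session_id: str) -> Path:
    """Pick the working directory for a chat session.

    'project' shares project_root across chats; 'isolated' gives each chat a
    private directory under .pantheon/workspaces/{session_id}/, as described
    above. Hypothetical helper, not the actual implementation.
    """
    if mode == "project":
        return project_root
    if mode == "isolated":
        ws = project_root / ".pantheon" / "workspaces" / session_id
        ws.mkdir(parents=True, exist_ok=True)
        return ws
    raise ValueError(f"unknown workspace mode: {mode!r}")
```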
Bug Fixes
- Fix `/keys` command `ProviderMenuEntry` unpacking error
- Fix `move_file()` crash by creating parent directories before the move
- Fix empty-file `read_file` crash (`start_line` out of range for 0-line files)
- Fix `ws://127.0.0.1` → `ws://localhost` for Windows IPv6 compatibility
- Fix LaTeX compilation to resolve relative figure paths
- Fix shell env exceeding `max_size`
- Fix endpoint startup timeout (3s → 30s)
Desktop App (PyInstaller)
- Switch to `onedir` mode with `exclude_binaries=True` for correct macOS bundling
- Fix nats-server binary missing from the bundle
- Fix Windows DLL corruption and read-only install location issues
- Handle fakeredis/lupa compatibility
Contributors
- @Nanguage
- @1-CellBio — session workspace isolation (#39)
- @zqbake
v0.4.10
v0.4.10: Pantheon Store, Custom Endpoints, Docker & Bug Fixes
Pantheon Store
- Store CLI (`pantheon store`) for publishing, installing, searching, and managing skill/agent/team packages
- Store seed system for batch-importing packages from external repos (LabClaw, OmicClaw, ClawBio, Claude Scientific)
- Install state tracking with a local manifest (`~/.pantheon/store_installs.json`)
- RPC methods (`install_store_package`, `get_installed_store_packages`) for frontend Store dialog integration
- Skill file bundling with SKILL.md directory format and source attribution
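The install-state tracking above can be sketched like this. The manifest path comes from the release notes; the schema (a name → metadata mapping) and helper names are assumptions.

```python
import json
from pathlib import Path

def record_install(name: str, version: str, manifest: Path) -> None:
    """Record an installed package in the local manifest.

    Mirrors the install-state tracking described above; the manifest
    schema here is an assumption.
    """
    state = json.loads(manifest.read_text()) if manifest.exists() else {}
    state[name] = {"version": version}
    manifest.parent.mkdir(parents=True, exist_ok=True)
    manifest.write_text(json.dumps(state, indent=2))

def installed_packages(manifest: Path) -> dict:
    """Return the name -> metadata mapping, or {} if nothing is installed."""
    return json.loads(manifest.read_text()) if manifest.exists() else {}
```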
Custom Endpoints
- Flexible `custom_models` configuration replacing the old unified `LLM_API_BASE`/`LLM_API_KEY` proxy
- Support for `CUSTOM_ANTHROPIC_*` and `CUSTOM_OPENAI_*` environment variables
- Interactive `/keys` command for custom endpoint management
- Custom model aliases with highest priority in provider detection
Docker
- Dual-mode Dockerfile with GitHub Actions CI/CD workflow
- Docker entrypoint supporting both CLI and UI modes
CLI
- `-i` flag for initial input: `pantheon cli -i "analyze my data"` starts the REPL with an immediate command
Infrastructure
- Activity-aware idle cleanup — `_ping` responses now report active threads/tasks so the Hub can skip cleanup for busy pods
- Shell env fix for oversized environment variables in the package runtime
- Setup wizard `SKIP_SETUP_WIZARD` env var support
Bug Fixes
- LaTeX PDF compilation — compile in the source directory so relative figure paths (`../figures.png`) resolve correctly
- Speech-to-text — handle base64-encoded audio over JSON transport; add transcription timeout and spinner UI
- Context menu z-index — teleport the WorkspacePanel context menu to body so it renders above chat cards
- Remove unused `FilesystemTab.vue` legacy component
- Skip Hub API call in local mode to avoid connection errors
- Fix test failures and Docker Hub repo name