Releases: RightNow-AI/openfang
v0.4.4
- Wire credential vault into main flows (dashboard save, CLI save, kernel boot) — API keys now stored in an AES-256-GCM vault with dual-write to secrets.env for backward compat
- Fix cron channel delivery that was a no-op — Channel, LastChannel, and Webhook variants all deliver now
- Propagate cron delivery failures to the scheduler (one-shot jobs not removed on failure)
- Add credential resolver (vault → dotenv → env var) to the kernel for unified secret resolution
- Add `remove_from_vault()` to `CredentialResolver`
- Bump to v0.4.4
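The vault → dotenv → env-var order could be sketched like this — a minimal illustration with hypothetical types and function names, not the actual `CredentialResolver` API:

```rust
use std::collections::HashMap;

/// Hypothetical sketch of the resolution order described above:
/// decrypted vault first, then secrets.env entries, then process env vars.
fn resolve_secret(
    vault: &HashMap<String, String>,   // decrypted AES-256-GCM vault contents
    dotenv: &HashMap<String, String>,  // parsed secrets.env (backward compat)
    key: &str,
) -> Option<String> {
    vault
        .get(key)
        .cloned()
        .or_else(|| dotenv.get(key).cloned())
        .or_else(|| std::env::var(key).ok())
}

fn main() {
    let vault = HashMap::from([("OPENAI_API_KEY".to_string(), "sk-vault".to_string())]);
    let dotenv = HashMap::from([("GROQ_API_KEY".to_string(), "gsk-dotenv".to_string())]);

    // Vault wins when present; dotenv is the fallback.
    assert_eq!(resolve_secret(&vault, &dotenv, "OPENAI_API_KEY").as_deref(), Some("sk-vault"));
    assert_eq!(resolve_secret(&vault, &dotenv, "GROQ_API_KEY").as_deref(), Some("gsk-dotenv"));
}
```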
v0.4.3
Bug Fixes
- Open links in new tab (#612): External links in markdown agent responses now open in a new tab with `target="_blank"`.
- Browser hand install (#611): Homebrew "already an App at" message now correctly recognized as success instead of failure.
- "FREE is not a valid model" (#610): Added `free`, `openrouter/free`, and `free-reasoning` aliases for OpenRouter free-tier models.
- WhatsApp drops media (#605): Gateway now handles images, voice notes, videos, documents, and stickers with descriptive placeholders instead of silently dropping them.
- WhatsApp sender metadata (#597): Sender identity (phone, name) now flows end-to-end from API → kernel → system prompt. Agents know who sent each message.
- Config overwrite warning (#578): Web dashboard shows a warning toast when saving an API key triggers an auto-provider-switch.
- Linux libssl error (#582): OpenSSL is now statically compiled via vendored feature — no runtime libssl dependency.
Enhancements
- Minimax global URL (#576): Default URL switched to the global endpoint (`api.minimax.io/v1`). China users can override via `[provider_urls]` in config.toml.
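An override of the kind described would sit in config.toml; the `minimax` key name below is an assumption — only the `[provider_urls]` section name comes from the release note:

```toml
# Hypothetical sketch: point Minimax at a regional endpoint instead of
# the default global one (api.minimax.io/v1). Key name is illustrative.
[provider_urls]
minimax = "https://<regional-endpoint>/v1"
```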
v0.4.2
Bug Fixes
- LoopGuard poll detection (#603): Removed the fragile `cmd.len() < 50` heuristic. Poll detection is now purely keyword-based — long kubectl/docker commands are correctly identified.
- Custom model provider display (#581): Model switcher dropdown now shows `provider:model` format instead of just the display name.
- Telegram typing indicator (#571): Typing indicator now refreshes every 4 seconds continuously during LLM processing, instead of expiring after 5 seconds.
- "No agent selected" error (#569): Agents created via API are now immediately registered in the channel router's name cache.
- Tool calls denied by approval (#537): Denial message now includes guidance to use `auto_approve = true` in config.toml or the `--yolo` flag.
- WhatsApp sender metadata (#597): `MessageRequest` and `PromptContext` now have `sender_id`/`sender_name` fields for identity-aware agents.
- LLM auth boot failure (#572): Already fixed in v0.4.0 (auto-detect fallback); a note was left on the issue for users on older versions.
Enhancements
- NVIDIA NIM provider (#579): Added as an OpenAI-compatible provider with 5 models (nemotron-70b, llama-3.1-405b/70b, mistral-large, nemotron-4-340b). Set `NVIDIA_API_KEY` to use it.
- YOLO mode (#573): `openfang start --yolo` auto-approves all tool calls. Also configurable via `auto_approve = true` in the `[approval]` section.
- Skill output persistence (#596): Tool/skill output cards in the dashboard now default to expanded instead of collapsed, keeping results visible.
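A minimal sketch of the persistent variant from #573, using only the section and key named in the release note:

```toml
# config.toml — persistent auto-approval (equivalent to passing --yolo)
[approval]
auto_approve = true
```

For a one-off run, `openfang start --yolo` achieves the same without editing config.toml.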
v0.4.1
Bug Fixes
- Memory recall loop (#583): `build_memory_section()` no longer tells the model to call `memory_recall` when memories are already injected into the prompt. Models now use the provided memories directly.
- Raw errors in channels (#584): Channel bridge sanitizes LLM error messages before sending them to users. Rate limits, auth errors, and JSON dumps are replaced with clean, user-friendly messages.
- HAND.toml format (#588): Parser now accepts both the flat root-level format and the documented `[hand]` table format.
- Token quota exceeded (#591): Pre-emptive quota-aware compaction triggers before LLM calls when the session token count approaches the remaining hourly quota headroom.
- log_level config (#594): `log_level` in config.toml now takes effect. Priority: `RUST_LOG` env var > config.toml `log_level` > default `"info"`.
- Max iterations error (#599): Error message now includes guidance on configuring `[autonomous] max_iterations` in agent.toml.
- Config backup (#578): config.toml is backed up to config.toml.bak before any auto-rewrite (provider key save, config set/unset).
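A hypothetical illustration of the two layouts accepted after #588 — the field names here are examples, not the actual HAND.toml schema:

```toml
# Documented [hand] table format (illustrative fields only).
# The flat variant is the same keys at the file root, without the
# [hand] header; the parser now accepts either.
[hand]
name = "browser"
description = "Browser automation hand"
```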
Enhancements
- Default model in Web UI (#593): Spawn wizard fetches `default_provider`/`default_model` from `/api/status` instead of hardcoding groq/llama-3.3-70b-versatile.
v0.4.0
Community Issue Batch Fixes:
- Cron jobs can now run workflows directly by ID or name (#485)
- Auto-load workflow definitions from ~/.openfang/workflows/ on startup (#486)
- Matrix adapter: auto-accept invites, mention detection, DM/group detection, skip old messages (#565)
- Cross-channel default recipient via `default_chat_id`/`default_channel_id` config (#374)
- Telegram reply-to-message context forwarded to agents (#564)
- Configurable `lifecycle_reactions` flag per channel (#563)
- Telegram token auto-detection in `bot_token_env` field (#505)
- WhatsApp gateway ESM module fix (#511)
- Slack thread auto-response tracking (#560)
- Workflow CRUD API, CLI commands, and dashboard UI (#512)
- Dashboard username/password authentication with HMAC session tokens (#456)
- Added openrouter/hunter-alpha model (#568)
v0.3.49
Community Issue Batch #2 Fixes:
- WhatsApp Baileys makeWASocket error — ESM static imports (#511)
- Telegram bridge Unauthorized when raw token in config (#505)
- Slack thread auto-response without re-mentioning bot (#560)
- Workflow update/delete API, CLI, and dashboard (#512)
- Dashboard username/password login with HMAC sessions (#456)
- GLM-4.7 already in catalog, documented (#530)
v0.3.48
v0.3.48 — Community Issue Fixes
Bug Fixes
- Fix Chromium detection for Browser hand — now checks env vars, PATH, well-known install paths, and Playwright cache (#549)
- Fix Gemini 2.5 Flash empty responses — handle thinking-model `thought: true` parts correctly (#550)
- Fix Twitter hand textbox not editable — API key inputs are now proper password fields (#551)
- Fix Slack threading — extract thread_ts, implement send_in_thread(), add was_mentioned metadata (#552)
- Fix EOF parse error classified as non-retryable — empty response bodies now retry as Overloaded (#535)
- Fix tool calls still showing as text — add 4 new recovery patterns: plugin blocks, Action/Action Input, bare name+JSON, tool_use tags (#537)
- Fix WhatsApp gateway silent disconnects — all disconnect reasons except loggedOut now auto-reconnect with backoff, auto-connect on startup if credentials exist (#555)
- Fix OPENFANG_API_KEY env var not picked up — works in both kernel and CLI as fallback (#557)
Features
- Add Qwen Code CLI as LLM provider with 3 models, auth detection, and full streaming support (#558)
Stats: 2,103 tests, 0 clippy warnings, 17 files changed
v0.3.47
Fix 11 Community Issues
Bug Fixes
#532 — Agent tool filtering — Only send tools declared in agent.toml to the LLM, not all available tools. Massively reduces token usage for agents with specific tool sets. Backwards compatible (empty tools list = all tools).
#531 — Streaming display bug — "streaming" text no longer leaks into chat responses. Fixed phase event handling in dashboard UI so tool-use loops don't show lifecycle signals as content.
#528 — Telegram image MIME detection — Added magic-byte detection (JPEG/PNG/GIF/WebP) so Telegram's application/octet-stream Content-Type no longer breaks vision. Three-tier detection: trusted image/* header → magic bytes → URL extension.
#527 — Custom provider misidentification — Explicit provider names in config are now preserved instead of being overridden by auto-detection. Fixes Tencent/custom base_url setups.
#506 — Gemini 3.x thoughtSignature — Captured at part level for both text and functionCall parts. Fixes INVALID_ARGUMENT: Function call is missing a thought_signature on Gemini 3.x thinking models.
#507 — Telegram group message drops — Added sender_chat fallback when from is absent, and populated was_mentioned metadata so GroupPolicy::MentionOnly works for Telegram groups. Debug logging on all dropped updates.
#519 — Trigger reassignment after restart — Triggers now survive agent restarts via take/restore mechanism, mirroring the existing cron job reassignment pattern.
#523 — Hand re-activation tool profile — tool_allowlist and profile are now correctly set from hand definition on re-activation, preventing tool surface drift.
#524 — Hand readiness API — Enriched with active, degraded, and per-requirement optional fields. Browser's chromium requirement marked optional since Playwright can auto-install it.
#515 — Claude Code agent permissions — Added skip_permissions config (default: true) so Claude Code agents work in daemon mode without blocking on terminal approval prompts.
#509 — Proactive channel_send — Added thread_id parameter for Telegram topic targeting and file_path for local file attachments with MIME detection.
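The magic-byte tier of the three-tier detection in #528 can be sketched as follows — a rough illustration, not the actual openfang code:

```rust
/// Sketch of magic-byte image sniffing (the middle tier described in #528).
/// Covers the formats Telegram commonly mislabels as application/octet-stream;
/// not the actual openfang implementation.
fn sniff_image_mime(bytes: &[u8]) -> Option<&'static str> {
    match bytes {
        // JPEG: FF D8 FF
        [0xFF, 0xD8, 0xFF, ..] => Some("image/jpeg"),
        // PNG: 89 "PNG" 0D 0A 1A 0A
        [0x89, b'P', b'N', b'G', 0x0D, 0x0A, 0x1A, 0x0A, ..] => Some("image/png"),
        // GIF87a / GIF89a
        [b'G', b'I', b'F', b'8', ..] => Some("image/gif"),
        // WebP: RIFF container with a "WEBP" tag at offset 8
        [b'R', b'I', b'F', b'F', _, _, _, _, b'W', b'E', b'B', b'P', ..] => Some("image/webp"),
        _ => None,
    }
}

fn main() {
    assert_eq!(sniff_image_mime(&[0xFF, 0xD8, 0xFF, 0xE0, 0x00]), Some("image/jpeg"));
    assert_eq!(sniff_image_mime(b"RIFF\x00\x00\x00\x00WEBPVP8 "), Some("image/webp"));
    assert_eq!(sniff_image_mime(b"plain text"), None);
}
```

When sniffing fails, the fix falls back to the URL extension, as the release note describes.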
Stats
- 27 files changed, +1730/-252 lines
- 2074 tests passing, 0 clippy warnings
- 14 crates, all clean
v0.3.46 — Community PR Batch
Community Contributions
11 PRs reviewed, labeled, commented, and implemented from scratch following our architecture. All PR authors credited.
Bug Fixes
- #438 — IME composition guard prevents Enter from sending during CJK input (@pandego)
- #433 — Default `User-Agent: openfang/0.3.46` on all LLM drivers — fixes 403s from Moonshot/Qwen (@ozekimasaki)
- #417 / #392 — Default token quota changed to unlimited (was 1M/hour, which caused immediate quota errors on fresh installs) (@f-liva, @cryptonahue)
- #410 — Safe string slicing in channel bridge (9 instances) + desktop server shutdown race condition (@hobostay)
- #413 / #275 — Embedding driver respects `provider_urls` config and auto-selects the correct model per provider (@castorinop, @woodcoal)
- #464 — Wizard generates proper multi-line TOML strings for system_prompt with escaping (@citadelgrad)
New Models & Providers
- #419 / #480 / #439 — Added Z.AI Coding models (glm-5-coding, glm-4.7-coding), Kimi for Code (Anthropic-compatible), fixed Moonshot base URL, added kimi-k2.5-0711 alias (@shipdocs, @skeltavik, @modship)
Internal
- XML-attribute tool call recovery (Pattern 9) for Groq/Llama models that emit `<function name="..." parameters="..." />`
- 187 models, 40+ providers, 2040+ tests, 0 clippy warnings
Full Changelog: v0.3.45...v0.3.46
v0.3.45 — Trading Hand + Bug Fixes
New: Trading Hand (8th Bundled Hand)
Full autonomous trading agent built from extensive market research on what traders actually need in 2026.
Architecture
- `HAND.toml` (740 lines) — 12 configurable settings, 10 dashboard metrics, 470-line system prompt with an 8-phase pipeline:
- State Recovery — restore position/context on restart
- Portfolio Setup — read config, validate risk parameters
- Market Intelligence Scan — gather data from free sources (Yahoo Finance, FRED, Fear & Greed Index)
- Multi-Factor Analysis — technical (RSI/MACD/Bollinger/VWAP/ATR), fundamental, sentiment, macro
- Adversarial Bull/Bear Debate — structured debate with entry zones, stop-loss, take-profit, R:R ratios
- Risk Management Gate — circuit breakers, position sizing caps, drawdown limits, correlation checks
- Trade Execution — three modes: analysis-only, paper trading, live via Alpaca API
- Analytics & Reporting — structured tables for dashboard metrics
- `SKILL.md` (937 lines) — Expert reference material:
- RSI/MACD/Bollinger/VWAP/ATR formulas with interpretation guides
- 22 candlestick pattern definitions
- Position sizing + Kelly criterion + Value-at-Risk calculations
- Full Alpaca API reference (discovered they have an official MCP server)
- Free financial data sources catalog
- Superforecasting calibration guide
- Cognitive bias mitigations for trading decisions
Settings
`trading_mode` (analysis/paper/live), `risk_per_trade` (default 2%), `max_portfolio_pct` (10%), `watchlist`, `alpaca_api_key`, `preferred_timeframe`, `report_frequency`, `max_daily_trades`, `stop_loss_atr_multiplier`, `take_profit_atr_multiplier`, `circuit_breaker_drawdown`, `rebalance_interval`
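Assembled into a config fragment, the settings above might look like this — the names come from the list, while the values (beyond the stated 2%/10% defaults) are illustrative only:

```toml
# Illustrative Trading Hand settings; not the shipped defaults.
trading_mode = "paper"               # analysis | paper | live
risk_per_trade = 0.02                # default 2%
max_portfolio_pct = 0.10             # 10% cap
watchlist = ["AAPL", "SPY"]
alpaca_api_key = "PK..."
preferred_timeframe = "1h"
report_frequency = "daily"
max_daily_trades = 5
stop_loss_atr_multiplier = 2.0
take_profit_atr_multiplier = 3.0
circuit_breaker_drawdown = 0.10
rebalance_interval = "weekly"
```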
Live-Tested Behaviors
- Refuses to bypass risk management (100% portfolio / no stop-loss = rejected)
- Calculates position sizing correctly including cap violation detection
- Produces adversarial bull/bear debates with concrete price levels
- Generates structured Phase 7 reports with dashboard-ready tables
- Pause/resume/deactivate lifecycle works correctly
Bug Fixes
- #503 — Streaming `<think>` tag leak: Created `StreamingThinkFilter`, which processes text deltas incrementally, buffering reasoning content so it never reaches the client during streaming. Handles partial tags split across chunks. 17 new tests.
- #504 — Cron jobs orphaned after agent deletion: `kill_agent()` now calls `remove_agent_jobs()` on the cron scheduler and persists the change to disk. 3 new tests.
Stats
- 2031 tests, 0 clippy warnings, 8 bundled hands, 184 models, 40 providers
Full Changelog: v0.3.44...v0.3.45