Base Config Guide

config/base_config.json is the central runtime config for MacroAgent.

It controls:

  • default provider and model
  • provider connection details
  • channel defaults and channel credentials
  • maintenance model selection for /compact

It does not define the user-profile memory schema; that now lives in the runtime memory layer and namespace registry.

Precedence

Sensitive values are resolved in this order:

  1. environment variable
  2. fallback value in config/base_config.json

This means a skill only needs to declare:

  • provider
  • model

The runtime resolves the matching API key and base URL automatically.
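As a concrete illustration of the precedence rule (the provider name and key value here are hypothetical, not shipped defaults), a provider entry can carry both an env hook and an inline fallback:

```json
{
  "example_provider": {
    "driver": "openai_compatible",
    "api_key": "sk-fallback-key",
    "api_key_env_name": "EXAMPLE_API_KEY"
  }
}
```

If EXAMPLE_API_KEY is set in the environment, its value wins; otherwise the runtime falls back to the inline api_key string.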

Top-Level Structure

{
  "defaults": {},
  "providers": {},
  "channels": {},
  "maintenance": {},
  "trim": {}
}

Defaults

defaults controls the fallback runtime selection:

  • provider
  • model
  • channel

If the CLI does not explicitly choose a channel, defaults.channel is used.
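A minimal defaults block might look like this (the values are illustrative, not the shipped defaults):

```json
{
  "defaults": {
    "provider": "deepseek",
    "model": "deepseek-chat",
    "channel": "cli"
  }
}
```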

Providers

Each provider entry supports:

  • driver
  • api_key
  • api_base
  • api_key_env_name
  • api_base_env_name
  • optional capabilities

Example:

"deepseek": {
  "driver": "openai_compatible",
  "api_key": "",
  "api_base": "https://api.deepseek.com/v1",
  "api_key_env_name": "DEEPSEEK_API_KEY",
  "api_base_env_name": "DEEPSEEK_BASE_URL"
}

For openai_compatible providers, capabilities can also carry transport hints. The current runtime recognizes:

  • supports_reasoning_effort
  • api_mode

Example for a Responses API provider:

"qwen": {
  "driver": "openai_compatible",
  "capabilities": {
    "supports_reasoning_effort": false,
    "api_mode": "responses"
  },
  "api_key": "",
  "api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1",
  "api_key_env_name": "QWEN_API_KEY",
  "api_base_env_name": "QWEN_BASE_URL"
}

Meaning of api_mode: "responses":

  • configure api_base as the provider root ending at /v1
  • the runtime enables LangChain's Responses API mode
  • the final request path becomes <api_base>/responses, for example .../v1/responses

Supported drivers in the current runtime:

  • anthropic
  • google
  • ollama
  • openai_compatible

The openai_compatible driver lets multiple providers share one transport style without changing any skill config.

Channels

CLI

Minimal local mode:

"cli": {
  "type": "cli",
  "enabled": true
}

Telegram

Configured from a bot token plus an optional user-ID allowlist:

"telegram": {
  "type": "telegram",
  "enabled": false,
  "token": "",
  "token_env_name": "TELEGRAM_TOKEN",
  "allowed_user_ids": [],
  "allowed_user_ids_env_name": "ALLOWED_TG_USER_IDS"
}

Feishu

Feishu now supports two modes:

  • websocket (preferred, simplest)
  • webhook (fallback)

Example:

"feishu": {
  "type": "feishu",
  "enabled": false,
  "mode": "websocket",
  "host": "127.0.0.1",
  "port": 8081,
  "path": "/feishu/events",
  "app_id": "",
  "app_secret": "",
  "app_id_env_name": "FEISHU_APP_ID",
  "app_secret_env_name": "FEISHU_APP_SECRET",
  "verification_token": "",
  "verification_token_env_name": "FEISHU_VERIFICATION_TOKEN",
  "encrypt_key": "",
  "encrypt_key_env_name": "FEISHU_ENCRYPT_KEY",
  "reply_receive_id_type": "chat_id",
  "allowed_user_ids": [],
  "allowed_user_ids_env_name": "FEISHU_ALLOWED_USER_IDS",
  "allowed_chat_ids": [],
  "allowed_chat_ids_env_name": "FEISHU_ALLOWED_CHAT_IDS"
}

For webhook mode, host, port, and path are also used.

enabled is descriptive today: actual startup selection is still driven by --channel ... or defaults.channel, and missing credentials are what block Telegram or Feishu startup.

Maintenance

maintenance.compact defines which provider and model the /compact command uses.

/compact no longer writes directly to confirmed user memory. It generates candidate facts that must be reviewed via /memory.
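A sketch of the compact entry, assuming it mirrors the provider/model shape used by defaults (the values are illustrative):

```json
{
  "maintenance": {
    "compact": {
      "provider": "deepseek",
      "model": "deepseek-chat"
    }
  }
}
```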

Recommended Practice

  • Keep secrets in .env
  • Keep stable endpoints and defaults in base_config.json
  • Keep skill-local routing decisions in each skill's config.yml