`config/base_config.json` is the central runtime config for MacroAgent.
It controls:
- default provider and model
- provider connection details
- channel defaults and channel credentials
- maintenance model selection for `/compact`
It does not define the user profile memory schema; that now lives in the runtime memory layer and namespace registry.
Sensitive values are resolved in this order:
- environment variable
- fallback value in `config/base_config.json`
This means skills only need to declare:
- `provider`
- `model`
The runtime resolves the matching API key and base URL automatically.
```json
{
  "defaults": {},
  "providers": {},
  "channels": {},
  "maintenance": {},
  "trim": {}
}
```

`defaults` controls the fallback runtime selection:
- `provider`
- `model`
- `channel`

If the CLI does not explicitly choose a channel, `defaults.channel` is used.
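A minimal sketch of a filled-in `defaults` block; the values are illustrative and must match providers and channels you actually define:

```json
"defaults": {
  "provider": "deepseek",
  "model": "deepseek-chat",
  "channel": "cli"
}
```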
Each provider entry supports:
- `driver`
- `api_key`
- `api_base`
- `api_key_env_name`
- `api_base_env_name`
- optional `capabilities`
Example:
"deepseek": {
"driver": "openai_compatible",
"api_key": "",
"api_base": "https://api.deepseek.com/v1",
"api_key_env_name": "DEEPSEEK_API_KEY",
"api_base_env_name": "DEEPSEEK_BASE_URL"
}For openai_compatible providers, capabilities can also carry transport hints. The current runtime recognizes:
supports_reasoning_effortapi_mode
Example for a Responses API provider:
"qwen": {
"driver": "openai_compatible",
"capabilities": {
"supports_reasoning_effort": false,
"api_mode": "responses"
},
"api_key": "",
"api_base": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"api_key_env_name": "QWEN_API_KEY",
"api_base_env_name": "QWEN_BASE_URL"
}Meaning of api_mode: "responses":
- configure
api_baseas the provider root ending at/v1 - the runtime enables LangChain's Responses API mode
- the final request path becomes
<api_base>/responses, for example.../v1/responses
Supported drivers in the current runtime:
- `anthropic`
- `google`
- `ollama`
- `openai_compatible`
This lets multiple providers share one transport style without changing skill config.
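For example, another OpenAI-compatible endpoint can sit next to the deepseek and qwen entries above simply by reusing the same driver. The provider name, endpoint, and environment variable names below are hypothetical placeholders:

```json
"moonshot": {
  "driver": "openai_compatible",
  "api_key": "",
  "api_base": "https://api.example.com/v1",
  "api_key_env_name": "MOONSHOT_API_KEY",
  "api_base_env_name": "MOONSHOT_BASE_URL"
}
```

A skill that declares this provider by name gets its API key and base URL resolved the same way as for any other `openai_compatible` provider.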
The CLI channel is the minimal local mode:

```json
"cli": {
  "type": "cli",
  "enabled": true
}
```

The Telegram channel is configured from a token plus an optional allowlist:
"telegram": {
"type": "telegram",
"enabled": false,
"token": "",
"token_env_name": "TELEGRAM_TOKEN",
"allowed_user_ids": [],
"allowed_user_ids_env_name": "ALLOWED_TG_USER_IDS"
}Feishu now supports two modes:
websocket(preferred, simplest)webhook(fallback)
Example:
"feishu": {
"type": "feishu",
"enabled": false,
"mode": "websocket",
"host": "127.0.0.1",
"port": 8081,
"path": "/feishu/events",
"app_id": "",
"app_secret": "",
"app_id_env_name": "FEISHU_APP_ID",
"app_secret_env_name": "FEISHU_APP_SECRET",
"verification_token": "",
"verification_token_env_name": "FEISHU_VERIFICATION_TOKEN",
"encrypt_key": "",
"encrypt_key_env_name": "FEISHU_ENCRYPT_KEY",
"reply_receive_id_type": "chat_id",
"allowed_user_ids": [],
"allowed_user_ids_env_name": "FEISHU_ALLOWED_USER_IDS",
"allowed_chat_ids": [],
"allowed_chat_ids_env_name": "FEISHU_ALLOWED_CHAT_IDS"
}For webhook mode, host, port, and path are also used.
`enabled` is descriptive only for now. Actual startup selection is still driven by `--channel ...` or `defaults.channel`, and missing credentials are what block Telegram or Feishu startup.
`maintenance.compact` defines which provider/model `/compact` uses.
`/compact` no longer writes directly to confirmed user memory. It generates candidate facts that must be reviewed via `/memory`.
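A sketch of the corresponding block, assuming `compact` takes the same `provider`/`model` pair that skills declare; the key names are inferred from the description above and the values are illustrative:

```json
"maintenance": {
  "compact": {
    "provider": "deepseek",
    "model": "deepseek-chat"
  }
}
```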
- Keep secrets in `.env`
- Keep stable endpoints and defaults in `base_config.json`
- Keep skill-local routing decisions in each skill's `config.yml`