Config file: .env at project root (or backend/.env). The backend loads project root ./.env first when backend is a submodule. Copy from .env.example or backend/.env.example and edit.
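The load order described above can be sketched as follows. `load_env_file` is a hypothetical minimal parser for illustration, not the project's actual loader; the "first file wins" precedence is an assumption based on "loads project root ./.env first":

```python
import os
from pathlib import Path

def load_env_file(path: Path) -> None:
    """Tiny .env parser (sketch, not the real loader): KEY=VALUE lines,
    '#' comments, no quoting or interpolation."""
    if not path.is_file():
        return  # a missing file is not an error
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault => the first file to define a key wins, so loading the
        # project root ./.env before backend/.env gives it precedence.
        os.environ.setdefault(key.strip(), value.split("#", 1)[0].strip())

# load_env_file(Path(".env"))          # project root first
# load_env_file(Path("backend/.env"))  # then the submodule copy
```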
```
ANTHROPIC_API_KEY=your_key_here
ANTHROPIC_BASE_URL=https://dashscope.aliyuncs.com/apps/anthropic  # optional, DashScope etc.
ANTHROPIC_MODEL=qwen-flash  # optional
```

Used for:

- Multi-round discussion (`run_discussion`)
- @expert reply (`run_expert_reply`)
```
AI_GENERATION_API_KEY=your_key_here
AI_GENERATION_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
AI_GENERATION_MODEL=qwen-flash
```

Used for:

- AI-generated expert roles
- AI-generated moderator modes
- Source-feed topic body generation (async background task): when a topic is created from a source article, the system immediately returns with a fallback placeholder body and starts a background task that reads the full article text (`content_md`) via `AI_GENERATION_MODEL` to generate a structured discussion guide (context / core issue / why it matters / suggested discussion questions), which is written back to the topic once complete.
- Source-feed topic role generation (async background task): when a topic is created from a source article, the system uses `AI_GENERATION_MODEL` with 4 concurrent requests (one per dimension: Technology / Industry / Research / Governance, 技术/产业/研究/治理) to generate 4 discussion roles tailored to the topic. Roles are written to the executor workspace and the topic DB once complete. If the env vars are not set, the topic remains with empty experts (the user may add them manually).
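The four-way concurrent role generation can be sketched with `asyncio.gather`; `generate_role` below is a stand-in for the real model call, and the role shape is illustrative only:

```python
import asyncio

DIMENSIONS = ["Technology", "Industry", "Research", "Governance"]  # 技术/产业/研究/治理

async def generate_role(dimension: str, topic: str) -> dict:
    """Stand-in for one AI_GENERATION_MODEL request; the real task calls the API."""
    await asyncio.sleep(0)  # placeholder for network latency
    return {"dimension": dimension, "name": f"{dimension} expert on {topic}"}

async def generate_roles(topic: str) -> list:
    # All four dimension requests are issued concurrently; gather preserves order.
    return await asyncio.gather(*(generate_role(d, topic) for d in DIMENSIONS))

roles = asyncio.run(generate_roles("open-source governance"))
```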
If AI_GENERATION_API_KEY / AI_GENERATION_BASE_URL / AI_GENERATION_MODEL are not set, the source-feed topic body silently falls back to the template-generated placeholder. Role generation is skipped and the topic starts with no experts. All other features continue to work normally.
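The silent-fallback behavior could look roughly like this; `create_topic` and the placeholder format are hypothetical names used only to illustrate the branch:

```python
import os

REQUIRED = ("AI_GENERATION_API_KEY", "AI_GENERATION_BASE_URL", "AI_GENERATION_MODEL")

def ai_generation_configured() -> bool:
    """True only when every AI_GENERATION_* variable is set and non-empty."""
    return all(os.environ.get(k) for k in REQUIRED)

def create_topic(article_title: str) -> dict:
    topic = {"body": f"Discussion: {article_title}", "experts": []}  # placeholder body
    if ai_generation_configured():
        # Real flow: schedule the body/role background generation tasks here.
        topic["pending_generation"] = True
    # else: silent fallback -- the placeholder body stays and no roles are generated.
    return topic
```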
Note: Both configs are strictly separate; do not mix them.
All libraries (experts, moderator_modes, mcps, assignable_skills, prompts) are loaded from backend/libs/. No scenario preset.
Docker: When LIBS_PATH points to a custom empty directory (e.g. for persistence), the backend merges from both built-in and the mount. See backend/docs/config.md for details.
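The built-in/mount merge might be approximated like this; the shadowing direction (mounted entries winning on name conflicts) is an assumption, not documented behavior:

```python
from pathlib import Path

def merge_libs(builtin_dir: Path, mount_dir: Path) -> dict:
    """Union of library entries by name from both roots; entries in the
    mounted directory shadow built-in ones of the same name (assumption)."""
    merged: dict = {}
    for root in (builtin_dir, mount_dir):  # mount processed last, so it wins
        if root.is_dir():
            for entry in sorted(root.iterdir()):
                merged[entry.name] = entry
    return merged
```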
```
WORKSPACE_BASE=./workspace  # Topic workspace root directory
```
```
# Resonnet auth mode: none | jwt | proxy
AUTH_MODE=none
# Enforce authentication (default: false)
AUTH_REQUIRED=false
# Account service base URL used in jwt mode
AUTH_SERVICE_BASE_URL=http://topiclab-backend:8000
# Sync published twins to account DB digital_twins
ACCOUNT_SYNC_ENABLED=false
```

- AUTH_MODE=none: default anonymous mode for OSS trial and MVP usage.
- AUTH_MODE=jwt: validates `Authorization: Bearer` via the external account service.
- AUTH_MODE=proxy: trusts upstream identity headers such as `X-User-Id` (optionally `X-Tenant-Id`, `X-User-Scopes`).
- AUTH_REQUIRED: in `jwt` mode, return 401 when the token is missing.
- AUTH_SERVICE_BASE_URL: account service URL for token introspection in `jwt` mode.
- ACCOUNT_SYNC_ENABLED: after publish, call `/auth/digital-twins/upsert`; when disabled, the main flow does not depend on account storage.
The account service can run independently. Resonnet core flows still work in AUTH_MODE=none without hard dependency on account storage.
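The three auth modes can be summarized in one dispatch sketch. The header names come from the list above; the control flow, return values, and `introspect_token` placeholder are assumptions:

```python
import os

def resolve_identity(headers: dict) -> str:
    """Sketch of per-mode identity resolution; not the real middleware."""
    mode = os.environ.get("AUTH_MODE", "none")
    if mode == "none":
        return "anonymous"  # OSS trial / MVP default
    if mode == "proxy":
        return headers.get("X-User-Id", "")  # trust the upstream proxy
    if mode == "jwt":
        auth = headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return ""  # with AUTH_REQUIRED=true this becomes a 401
        return introspect_token(auth.removeprefix("Bearer "))
    return ""

def introspect_token(token: str) -> str:
    """Placeholder for a call to AUTH_SERVICE_BASE_URL's introspection endpoint."""
    return "user-from-token"
```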
```
# Max internal tool/thinking iterations per request (default 40, minimum 5)
PROFILE_HELPER_MAX_TOOL_ITERATIONS=40
```

- PROFILE_HELPER_MAX_TOOL_ITERATIONS: limits internal agent loop rounds in Profile Helper. Higher values reduce "maximum tool calls reached" failures but increase latency and token usage. Recommended range: 20-60.
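A capped tool loop of this kind can be sketched as below; `step` is a hypothetical callable standing in for one tool/thinking round, and the exact error message is illustrative:

```python
def run_profile_helper(step, max_iterations: int = 40):
    """Capped internal tool loop (sketch). `step` returns a final answer,
    or None to request one more tool round."""
    for _ in range(max(max_iterations, 5)):  # documented floor of 5
        result = step()
        if result is not None:
            return result
    # This is the failure a higher PROFILE_HELPER_MAX_TOOL_ITERATIONS avoids.
    raise RuntimeError("maximum tool calls reached")
```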
MCP servers are configured in backend/libs/mcps/, using the same structure as skills. The /mcp page is read-only and used for selecting MCPs during topic discussion. Supported types: npm, uvx, remote. See backend/docs/mcp-config.md.
```
# Short-lived list cache for GET /source-feed/articles (seconds, default 30)
SOURCE_FEED_LIST_CACHE_TTL_SECONDS=30
```

- SOURCE_FEED_LIST_CACHE_TTL_SECONDS: controls the in-process short-TTL cache for source-feed list pages (keyed by `limit` + `offset`).
- Set to `0` to disable the cache.
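A minimal sketch of such a (limit, offset)-keyed TTL cache, assuming `fetch` stands in for the real upstream call; the actual implementation may differ:

```python
import os
import time

TTL = float(os.environ.get("SOURCE_FEED_LIST_CACHE_TTL_SECONDS", "30"))
_cache: dict = {}  # (limit, offset) -> (stored_at, rows)

def list_articles(limit: int, offset: int, fetch) -> list:
    """Short-TTL in-process cache keyed by (limit, offset); sketch only."""
    key = (limit, offset)
    now = time.monotonic()
    if TTL > 0:
        hit = _cache.get(key)
        if hit is not None and now - hit[0] < TTL:
            return hit[1]  # fresh cache hit
    rows = fetch(limit, offset)
    if TTL > 0:  # a TTL of 0 disables caching entirely
        _cache[key] = (now, rows)
    return rows
```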
The Source feed → Academic sub-tab uses the same article list proxy as Media: `GET /source-feed/articles` with `source_type=gqy`; the client then keeps only articles whose `source_feed_name` is one of arXiv cs.AI / arXiv cs.LG / arXiv cs.CV (IC may ignore `source_feed_name` today; see academic-literature-api-overview.md §2.3 note).
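The client-side filtering step can be sketched as a simple list filter (the real filter lives in the frontend; the record shape is an assumption):

```python
ACADEMIC_FEEDS = {"arXiv cs.AI", "arXiv cs.LG", "arXiv cs.CV"}

def academic_filter(articles: list) -> list:
    """Keep only articles whose source_feed_name is one of the three arXiv
    feeds; everything else belongs to the Media sub-tab."""
    return [a for a in articles if a.get("source_feed_name") in ACADEMIC_FEEDS]
```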
Literature endpoints (`GET /api/v1/literature/*`, including `recent`) are separate; other features or agents may still use them via topiclab-backend with:

- The same `INFORMATION_COLLECTION_BASE_URL` as the source feed (e.g. `http://ic.nexus.tashan.ac.cn`).
- If IC requires `x-ingest-token` for literature routes, set in topiclab-backend:

```
LITERATURE_SHARED_TOKEN=your_token
```

If unset, the proxy sends no header; if IC enforces the token, those requests may return 401.
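The conditional header behavior amounts to a small sketch like this (a helper of this name is an assumption; only the env var and header name come from the docs):

```python
import os

def literature_headers() -> dict:
    """Attach x-ingest-token only when LITERATURE_SHARED_TOKEN is set;
    otherwise send no auth header at all (and IC may answer 401)."""
    token = os.environ.get("LITERATURE_SHARED_TOKEN")
    return {"x-ingest-token": token} if token else {}
```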
topiclab-backend proxies seven free-tier AMiner Open Platform endpoints. User requests are forwarded to datacenter.aminer.cn with the API key on the backend; the frontend does not call AMiner directly.
- Environment variable (required; otherwise the proxy returns 503):

```
AMINER_API_KEY=  # Obtain from the open.aminer.cn console
```

- Route prefixes: `/aminer`, `/api/v1/aminer`
- Endpoints: Paper search (GET), Scholar search (POST), Patent search (POST), Organization search (POST), Venue search (POST), Paper info (POST), Patent info (GET). See aminer-open-api-limits.md.
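The proxy's key guard can be sketched as follows; the function name and return shape are hypothetical, and the real handler performs the HTTP forwarding that is only commented here:

```python
import os

def aminer_proxy(path: str):
    """Sketch of the proxy guard: 503 when AMINER_API_KEY is unset,
    otherwise forward upstream with the key attached server-side."""
    key = os.environ.get("AMINER_API_KEY")
    if not key:
        return 503, "AMINER_API_KEY not configured"
    upstream = f"https://datacenter.aminer.cn{path}"
    # Real code would issue the HTTP request here, passing the key in the
    # request so the frontend never sees it.
    return 200, upstream
```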
- Do not mix the two API configs: `ANTHROPIC_*` is for the Claude Agent SDK; `AI_GENERATION_*` is for the OpenAI-compatible API.
- No fallback: a missing `AI_GENERATION_API_KEY` does not fall back to `ANTHROPIC_API_KEY`.
- Different API formats: `ANTHROPIC_BASE_URL` expects an Anthropic-compatible API; `AI_GENERATION_BASE_URL` expects an OpenAI-compatible API.
The app will refuse to start if required variables are unset.
Full Resonnet configuration: backend/docs/config.md. Backend source: Resonnet