
CastClaw


Homepage · GitHub Repo · English · Chinese

Autonomous. Multi-agent. Fully interactive.

Drop in a CSV file and describe what you want to forecast. CastClaw orchestrates three specialized agents across your data—planning task definitions, running parallel model experiments, and generating comparative reports. Built-in reflection learns from each session and grows smarter as you use it.


🗞️ News

[2026-03-31] CastClaw open-sourced with complete documentation and multi-provider LLM support.

What makes CastClaw different

🗂️ It plans before it acts
Before running any models, CastClaw drafts a step-by-step forecasting plan and shows it to you. Reorder steps, add domain constraints, then approve—nothing touches your data without explicit consent.

📊 It runs agents in parallel
Upload multiple tables or ask comparative questions. CastClaw automatically assigns a dedicated Forecaster agent to each dataset, runs them in parallel, then synthesizes findings—highlighting where predictions align ([CONSENSUS]) and where they diverge ([UNCERTAIN]).

🤖 It coordinates three specialized agents

  • Planner — Defines tasks, analyzes data trends & seasonality, generates model recommendations
  • Forecaster — Runs 30+ time-series models in parallel experiments with per-trial reflection
  • Critic — Compares results, builds interactive visualizations, distills final reports

🧠 It learns from every session
After each forecasting task, CastClaw reflects on what worked and encodes the pattern into a reusable custom skill. Next time you ask something similar, it calls that skill directly—your personal forecasting assistant gets smarter every time.

💾 It remembers your preferences
Captures your domain terminology, preferred metrics, output format, and evaluation priorities across sessions. Every conversation builds on what it learned from your previous work.

🛠️ It extends with custom skills
Write your own skills—prompt templates or embedded Python/SQL logic—and the agents will call them just like built-in skills. Combined with session learning, CastClaw builds a library tailored to your forecasting workflows.
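For a flavor of what an embedded-Python skill could contain, here is a hypothetical seasonal-naive baseline. The function name, signature, and docstring convention are assumptions for illustration only, not the actual CastClaw skill format:

```python
def seasonal_naive(series: list, horizon: int, season_length: int = 24) -> list:
    """Hypothetical custom skill: repeat the last observed season.

    A strong baseline for hourly data with daily seasonality: the
    forecast for step h is the value observed one season earlier.
    """
    if len(series) < season_length:
        raise ValueError("need at least one full season of history")
    last_season = series[-season_length:]
    return [last_season[h % season_length] for h in range(horizon)]


# Two-step seasonality, forecast 6 steps ahead: the last season [3, 4] repeats.
print(seasonal_naive([1.0, 2.0, 3.0, 4.0], horizon=6, season_length=2))
# → [3.0, 4.0, 3.0, 4.0, 3.0, 4.0]
```

Once registered as a skill, a function like this would be callable by the agents alongside the built-in models, and session learning can layer further refinements on top of it.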

📦 It manages full experiment lifecycle
From automated data preprocessing → parallel model training → metric evaluation → constraint satisfaction → visual report generation. Everything is tracked: run logs, eval metrics, failure histories, and performance comparisons.

⏸️ It integrates human-in-the-loop feedback
Mid-experiment, pause and inject domain knowledge: "The last 30 days show overfitting—try a smaller look-back window." Forecaster resets counters and adapts the next trial accordingly. No more black-box automation.

Quick Start

Install — Option A: npm (recommended)

npm install -g castclaw

Install — Option B: from source

git clone https://github.com/ustc-time-series/CastClaw.git
cd CastClaw
bun install
cd python && uv sync && cd ..
bun run --cwd packages/castclaw build
bun link --cwd packages/castclaw   # optional: global link

Verify

castclaw --version
cd python && uv run python -c "from castclaw_ml import runner; print('OK')"

LLM configuration

# Anthropic (default)
export ANTHROPIC_API_KEY=sk-ant-...

# Or OpenAI / Google / OpenRouter
export OPENAI_API_KEY=sk-...
export GOOGLE_GENERATIVE_AI_API_KEY=...

Create castclaw.json in your project root (example):

{
  "model": "anthropic/claude-sonnet-4-6"
}

Run

cd /path/to/your/dataset
castclaw

# Or pass a model explicitly
castclaw --model anthropic/claude-sonnet-4-6

After the TUI starts, use Ctrl+1/2/3 to switch agents. In the Planner tab (Ctrl+1), describe your task, for example:

Initialize a forecasting session for data/etth1.csv. Target: OT, time column: date,
horizon: 96 steps, lookback: 336. Use a 70/20/10 train/val/test split. Evaluate with MSE and MAE.
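The lookback/horizon/split numbers in that prompt map onto a sliding-window dataset. Below is a minimal sketch of that windowing, assuming standard chronological splitting; the function names are illustrative, not CastClaw APIs:

```python
def make_windows(series, lookback, horizon):
    """Slide a (lookback, horizon) window over the series.

    Each sample pairs `lookback` past values (the model input) with
    the `horizon` values that follow (the forecast target).
    """
    windows = []
    for start in range(len(series) - lookback - horizon + 1):
        x = series[start:start + lookback]
        y = series[start + lookback:start + lookback + horizon]
        windows.append((x, y))
    return windows


def chronological_split(n, train=0.7, val=0.2):
    """Index boundaries for a chronological 70/20/10 split.

    Time-series splits must preserve order: shuffling would leak
    future information into training.
    """
    n_train = int(n * train)
    return n_train, n_train + int(n * val)


# 1000 points, lookback 336, horizon 96 (matching the prompt above)
series = [float(i) for i in range(1000)]
cut_train, cut_val = chronological_split(len(series))
train_windows = make_windows(series[:cut_train], lookback=336, horizon=96)
print(len(train_windows))  # → 269  (700 - 336 - 96 + 1)
```

Note that a long lookback plus horizon (336 + 96 = 432 points here) eats into the number of usable training samples, which is one reason the Planner analyzes series length before recommending window sizes.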

Sample dataset (datasets.zip) on Google Drive

📋 Requirements

| Dependency | Version | Purpose |
|---|---|---|
| Bun | ≥ 1.3.11 | Runtime & package manager |
| Python | ≥ 3.10 | ML backend for time-series models |
| uv | Latest | Python dependency management |
| GPU (optional) | CUDA 12.8 | Deep learning model acceleration |
| Ascend NPU (optional) | Coming soon | Deep learning acceleration on Huawei Ascend |

🤖 Supported Models (30+)

Statistical: ARIMA, ETS, Theta
Deep Learning: DLinear, NLinear, PatchTST, TimesNet, iTransformer, Autoformer, …
Foundation Models: Chronos (Amazon), TimesFM (Google), Moirai (Salesforce)
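For a taste of the statistical family, simple exponential smoothing (the core of ETS, without trend or seasonal terms) fits in a few lines of plain Python. The shipped models are full library implementations; this is only a sketch:

```python
def ses_forecast(series, horizon, alpha=0.3):
    """Simple exponential smoothing: flat h-step forecast at the last level.

    The level is an exponentially weighted average of past observations;
    alpha controls how fast old observations are forgotten.
    """
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return [level] * horizon


print(ses_forecast([10.0, 12.0, 11.0, 13.0], horizon=3, alpha=0.5))
# → [12.0, 12.0, 12.0]
```

Methods this cheap make useful sanity-check baselines next to the deep learning and foundation models: if PatchTST cannot beat exponential smoothing on your series, that is worth knowing before tuning further.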

🔧 Configuration

Create castclaw.json in your project root:

{
  "model": "anthropic/claude-sonnet-4-6",
  "skills": {
    "paths": ["~/.my-skills/"]
  }
}

LLM Providers: Supports 20+ via Vercel AI SDK (Anthropic, OpenAI, Google, OpenRouter, …). Set the key for your provider, for example:

export ANTHROPIC_API_KEY=sk-ant-...
# or: OPENAI_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENROUTER_API_KEY, etc.

🎯 Five-Stage Workflow

Stage 1 (Planner)    → Task definition & data ingestion
        ↓
Stage 2 (Planner)    → Qualitative & quantitative pre-analysis
        ↓
Stage 3 (Planner)    → Model skill generation & review
        ↓
Stage 4 (Forecaster) → Parallel experiment loops + HITL feedback
        ↓
Stage 5 (Critic)     → Final report, visualizations, comparisons

📚 Documentation

📂 Repository Structure

CastClaw/
├── packages/castclaw/    # TUI & CLI core
├── packages/app/         # Browser web interface
├── packages/sdk/         # SDK & runtime
├── python/               # ML backend (30+ models)
├── docs/                 # Usage guides
└── infra/                # Infrastructure (SST)

🏆 Key Differentiators

| Feature | CastClaw | Traditional Tools |
|---|---|---|
| Pre-experiment planning | ✅ Show plan before execution | ❌ Execute immediately |
| Multi-table parallelism | ✅ Automatic per-table agents | ❌ Sequential analysis |
| Session learning | ✅ Distill skills from interactions | ❌ Stateless |
| Human-in-the-loop | ✅ Pause & inject domain feedback | ❌ Fully automated |
| Constraint management | ✅ CAST.md for rules & limits | ❌ Manual enforcement |
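The CAST.md file mentioned above holds project-level rules the agents must respect during experiments. Its actual schema is not reproduced here; the entries below are purely hypothetical, to give a sense of the kind of constraints it can carry:

```markdown
# CAST.md — example (hypothetical contents)

## Constraints
- Maximum training time per trial: 10 minutes
- Excluded models: Autoformer
- Forecasts must be non-negative (energy demand cannot go below zero)

## Evaluation
- Primary metric: MAE
- Reject any candidate that does not beat the seasonal-naive baseline
```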

🤝 Contributing

Contributions welcome! Please open issues and PRs on GitHub.

📄 License

MIT License — see LICENSE

📫 Contact

Acknowledgments

This project gratefully acknowledges generous support from the industry–university cooperation fund of the University of Science and Technology of China (USTC) and Huawei 2012 Labs Application Scenario Innovation Lab. Computing resources for development and research are provided through Huawei’s Ascend AI Hundred-School Program.


We encourage everyone to try running foundation models on domestically produced Ascend compute.

Made with ❤️ by the CastClaw team
