An analytics platform that collects, analyzes, and ranks top traders from the Hyperliquid exchange leaderboard using quantitative metrics and AI-powered qualitative analysis.
- Leaderboard Collection: Periodically fetches Top N wallets from Hyperliquid trading leaderboards
- Portfolio Metrics: Calculates comprehensive performance metrics (MDD, ROI, CAGR, Calmar, Avg Drawdown, Recovery Factor)
- AI Analysis: Uses Claude AI to provide qualitative evaluation of trading performance
- Weighted Scoring: Combines multiple metrics into a single weighted score with percentile rankings
- Data Quality Monitoring: Tracks data uniformity, intervals, and completeness
- REST API: Exposes wallet scores via FastAPI for frontend consumption
```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│     Prefect     │────▶│  Graph Pipeline  │────▶│   ClickHouse    │
│    Scheduler    │     │   (Spoon-Core)   │     │    (OLAP DB)    │
│    (10-min)     │     │                  │     │                 │
└─────────────────┘     └──────────────────┘     └─────────────────┘
                                 │
                                 ▼
                     ┌──────────────────────┐
                     │ Parallel Processing  │
                     │ • Portfolio Fetch    │
                     │ • Metric Calc        │
                     │ • LLM Analysis       │
                     └──────────────────────┘
```
Pipeline Stages:
- `fetch_leaderboard` - Retrieves Top N wallets from the Hyperliquid API
- `prepare_wallet_list` - Filters and prepares wallet data
- `wallet_analysis` (parallel) - Fetches portfolios, calculates metrics, runs AI analysis
- `aggregate_and_store` - Batch inserts results into ClickHouse
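The parallel `wallet_analysis` stage can be sketched with `asyncio.gather`; the function bodies below are placeholders, not the project's actual implementation:

```python
import asyncio

# Sketch of the parallel wallet_analysis stage: each wallet's portfolio
# fetch, metric calculation, and LLM call run concurrently.
# Function names and the result shape are illustrative assumptions.
async def analyze_wallet(wallet_id: str) -> dict:
    # placeholder for: fetch portfolio -> calculate metrics -> LLM analysis
    await asyncio.sleep(0)
    return {"wallet_id": wallet_id, "weighted_score": 0.0}

async def wallet_analysis(wallet_ids: list[str]) -> list[dict]:
    # run every wallet's analysis concurrently and collect results in order
    return await asyncio.gather(*(analyze_wallet(w) for w in wallet_ids))
```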
| Component | Technology | Description |
|---|---|---|
| Language | Python 3.11 | Async/await, type hints |
| Web Framework | FastAPI | REST API with auto-docs |
| Workflow Orchestration | Prefect 3.x | 10-min interval scheduling |
| LLM Framework | Spoon AI | LLM manager for local dev |
| LLM Provider | Anthropic Claude (Haiku) | Cost-efficient AI analysis |
| Database | ClickHouse 24.3 | OLAP for time-series analytics |
| Containerization | Docker & Docker Compose | Multi-service orchestration |
| HTTP Client | httpx | Async HTTP for Hyperliquid API |
| Data Processing | NumPy | Metric calculations |
The project uses Spoon AI for LLM orchestration in local development:
```python
# src/scorer.py
try:
    from spoon_ai.llm.manager import LLMManager
    from spoon_ai.schema import Message
except ImportError:
    from src.llm_wrapper import LLMManager, Message  # Docker fallback
```

- Local Development: Uses `spoon_ai.llm.manager.LLMManager` for unified LLM access
- Docker Container: Falls back to lightweight `src/llm_wrapper.py` (direct Anthropic SDK)
```
datastore/
├── src/
│   ├── api.py                 # FastAPI endpoint (/wallets)
│   ├── data_fetcher.py        # Hyperliquid API client
│   ├── db.py                  # ClickHouse operations
│   ├── llm_wrapper.py         # Anthropic SDK wrapper (Docker fallback for Spoon AI)
│   ├── metrics.py             # Metric calculations (MDD, ROI, CAGR, Calmar, etc.)
│   └── scorer.py              # AI scoring & ranking logic (Spoon AI / Claude)
├── flows/
│   └── wallet_scoring_flow.py # Prefect workflow
├── scripts/
│   ├── init_db.sql            # ClickHouse schema
│   ├── migrate_scoring.sql    # Migration scripts
│   └── test_*.py              # Test scripts
├── docker-compose.yml         # Multi-service orchestration
├── Dockerfile.scoring         # Application container
├── requirements.txt           # Python dependencies
└── .env.example               # Environment template
```
- Docker & Docker Compose
- Anthropic API key
- Clone the repository:

```bash
git clone https://github.com/yourusername/datastore.git
cd datastore
```

- Configure environment variables:

```bash
cp .env.example .env
# Edit .env with your API keys and configuration
```

- Start the services:

```bash
docker-compose up -d
```

- Initialize the database:

```bash
docker exec -i clickhouse clickhouse-client < scripts/init_db.sql
```

- Access the services:
- Prefect UI: http://localhost:4200
- FastAPI: http://localhost:8000
- ClickHouse: http://localhost:8123
| Variable | Description | Required | Default |
|---|---|---|---|
| `ANTHROPIC_API_KEY` | Claude API key for AI analysis | Yes | - |
| `ANTHROPIC_BASE_URL` | Custom API endpoint (optional proxy) | No | Anthropic default |
| `CLICKHOUSE_HOST` | ClickHouse server hostname | Yes | `clickhouse` |
| `CLICKHOUSE_PORT` | ClickHouse HTTP port | No | `8123` |
| `TOP_N_WALLETS` | Number of wallets to analyze | No | `100` |
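A populated `.env` might look like the following (values are placeholders, not working credentials):

```shell
ANTHROPIC_API_KEY=sk-ant-your-key-here
CLICKHOUSE_HOST=clickhouse
CLICKHOUSE_PORT=8123
TOP_N_WALLETS=100
```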
`GET /wallets`

Query Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `top_n` | int | 100 | Number of wallets (1-200) |
| `save_to_db` | bool | false | Persist results to ClickHouse |
| `skip_analysis` | bool | false | Skip AI analysis for faster response |
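A client can assemble these parameters into a request URL like so (a small sketch; the base URL assumes the default local FastAPI port from this README):

```python
from urllib.parse import urlencode

# Hypothetical client-side helper: builds a /wallets query URL.
def wallets_url(top_n: int = 100, save_to_db: bool = False,
                skip_analysis: bool = False,
                base: str = "http://localhost:8000") -> str:
    params = {
        "top_n": top_n,
        # FastAPI accepts lowercase "true"/"false" for bool query params
        "save_to_db": str(save_to_db).lower(),
        "skip_analysis": str(skip_analysis).lower(),
    }
    return f"{base}/wallets?{urlencode(params)}"
```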
Response:

```json
{
  "wallets": [
    {
      "wallet_id": "0x...",
      "total_pnl": 1234567.89,
      "account_value": 500000.00,
      "roi": 0.45,
      "max_drawdown": -0.15,
      "cagr": 0.82,
      "calmar_ratio": 5.47,
      "weighted_score": 85.5,
      "calculated_rank": 1,
      "agent_analysis_text": "..."
    }
  ],
  "metadata": {
    "count": 100,
    "timestamp": "2024-01-15T10:30:00Z"
  }
}
```

| Metric | Description | Formula |
|---|---|---|
| ROI | Return on Investment | (End Value - Start Value) / Start Value |
| MDD | Maximum Drawdown | Peak-to-trough decline percentage |
| CAGR | Compound Annual Growth Rate | Time-weighted return, annualized |
| Calmar Ratio | Risk-adjusted return | CAGR / \|MDD\| |
| Recovery Factor | Profit vs. drawdown | Total PnL / \|MDD\| |
| Avg Drawdown | Mean of all drawdowns | Average of all drawdown periods |
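The formulas above can be sketched in pure Python (illustrative only; the project's actual implementation lives in `src/metrics.py`, which is not shown here):

```python
# Equity is a list of account-value snapshots in chronological order.
def roi(equity: list[float]) -> float:
    return (equity[-1] - equity[0]) / equity[0]

def max_drawdown(equity: list[float]) -> float:
    # Largest peak-to-trough decline; returned as a negative fraction,
    # e.g. -0.25 for a 25% drawdown.
    peak, mdd = equity[0], 0.0
    for v in equity:
        peak = max(peak, v)
        mdd = min(mdd, (v - peak) / peak)
    return mdd

def cagr(equity: list[float], days: float) -> float:
    # Annualize total return over the observed span.
    return (equity[-1] / equity[0]) ** (365.0 / days) - 1.0

def calmar(equity: list[float], days: float) -> float:
    dd = abs(max_drawdown(equity))
    return cagr(equity, days) / dd if dd else float("inf")
```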
- Normalization: Each metric scaled to 0-100
- Agent Scoring: AI assigns discrete scores (20/40/60/80/100) based on percentile rankings
- Weighted Score: Equal-weighted average of six metric scores:
  - Max Drawdown, ROI, CAGR, Calmar Ratio, Avg Drawdown, Recovery Factor
- Percentile Ranking: Position relative to all analyzed wallets
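The percentile ranking and discrete-score steps can be sketched as follows; note that the 90/70/50/30 percentile cutoffs are illustrative assumptions, not thresholds documented by the project:

```python
def percentile_rank(values: list[float], x: float) -> float:
    # Fraction of analyzed wallets strictly below x, as a percentage.
    return 100.0 * sum(1 for v in values if v < x) / len(values)

def discrete_score(pct: float) -> int:
    # Map a percentile to one of the discrete buckets 20/40/60/80/100.
    # Cutoffs are assumed for illustration.
    for cutoff, score in ((90, 100), (70, 80), (50, 60), (30, 40)):
        if pct >= cutoff:
            return score
    return 20
```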
The system tracks data quality metrics:
| Metric | Description | Threshold |
|---|---|---|
| `data_points` | Number of equity snapshots | Min 30 required |
| `data_span_days` | Time coverage | - |
| `avg_interval_hours` | Average time between points | - |
| `interval_cv_percent` | Coefficient of variation of intervals | <5% = uniform |
| `is_uniform` | Data regularity flag | Boolean |
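The uniformity check boils down to the coefficient of variation of the gaps between snapshots; a minimal sketch (function name is illustrative):

```python
from statistics import mean, pstdev

def interval_stats(timestamps_hours: list[float]) -> tuple[float, float, bool]:
    """Return (avg_interval_hours, interval_cv_percent, is_uniform)."""
    gaps = [b - a for a, b in zip(timestamps_hours, timestamps_hours[1:])]
    avg = mean(gaps)
    # Coefficient of variation: spread of gap sizes relative to the mean gap.
    cv_percent = 100.0 * pstdev(gaps) / avg
    return avg, cv_percent, cv_percent < 5.0
```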
The system uses Claude (the Haiku model, for cost efficiency) to provide qualitative analysis of trader performance. The prompting is implemented in `src/scorer.py`.
```
┌─────────────────────────────────────────────────────────────┐
│ 1. Role Definition                                          │
│    "Senior quant analyst specializing in crypto             │
│    perpetual futures evaluation"                            │
└─────────────────────────────────────────────────────────────┘
                               ↓
┌─────────────────────────────────────────────────────────────┐
│ 2. Structured Input Data                                    │
│    • Wallet ID + Trading Period                             │
│    • Metrics Table with Percentile Rankings                 │
│    • Interpretation labels (Top 10%, Above Median, etc.)    │
└─────────────────────────────────────────────────────────────┘
                               ↓
┌─────────────────────────────────────────────────────────────┐
│ 3. Trading Style Inference Guide                            │
│    Pattern-based rules for style classification:            │
│    • High ROI + High MDD → Aggressive leveraged trader      │
│    • Low MDD + Moderate ROI → Conservative risk manager     │
│    • High Calmar + Consistent DD → Systematic/algorithmic   │
└─────────────────────────────────────────────────────────────┘
                               ↓
┌─────────────────────────────────────────────────────────────┐
│ 4. Explicit Task Definition                                 │
│    • Classify Trading Style                                 │
│    • Identify Unique Characteristics                        │
│    • Assess Sustainability (High/Medium/Low)                │
│    • Assign Discrete Scores (20/40/60/80/100)               │
└─────────────────────────────────────────────────────────────┘
                               ↓
┌─────────────────────────────────────────────────────────────┐
│ 5. Edge Case Handling                                       │
│    • < 30 days data → Reduce confidence                     │
│    • MDD > -5% → Cap score (untested in volatility)         │
│    • CAGR > 500% → Cap score (unsustainable)                │
└─────────────────────────────────────────────────────────────┘
                               ↓
┌─────────────────────────────────────────────────────────────┐
│ 6. Structured JSON Output                                   │
│    { trading_style, reasoning, scores, overall_assessment } │
└─────────────────────────────────────────────────────────────┘
```
| Technique | Implementation | Purpose |
|---|---|---|
| Role Prompting | "Senior quant analyst" | Establishes domain expertise context |
| Structured Data | Markdown table with percentiles | Clear, parseable input format |
| Few-shot Patterns | Style inference guide | Teaches metric combination interpretation |
| Constrained Output | Discrete scores (20/40/60/80/100) | Prevents score inflation/deflation |
| JSON Schema | Explicit output format | Ensures parseable responses |
| Edge Case Rules | Explicit handling instructions | Prevents hallucination on edge cases |
| Anti-generic Prompt | "Create unique analysis based on THIS trader" | Avoids boilerplate responses |
The agent classifies traders into one of 12 predefined styles:
| Style | Metric Pattern |
|---|---|
| Aggressive Momentum | High ROI + High MDD |
| Conservative Value | Low MDD + Moderate ROI |
| Systematic Quant | High Calmar + Consistent DD |
| High-Frequency Scalper | Many small trades, low per-trade risk |
| Swing Trader | Medium-term holds, balanced metrics |
| Trend Follower | High CAGR in trending markets |
| Mean Reversion | Stable returns, quick recovery |
| Concentrated Bets | Few high-conviction trades |
| Diversified Portfolio | Stable metrics across all |
| Risk Parity | Balanced risk allocation |
| Breakout Trader | High volatility capture |
| Counter-Trend | Contrarian positioning |
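Because the output format is constrained (fixed keys, discrete scores), the agent's JSON can be checked with a small validator. This is a sketch, not the project's code; the key names follow the example response shown in this README:

```python
import json

ALLOWED_SCORES = {20, 40, 60, 80, 100}
REQUIRED_KEYS = ("trading_style", "reasoning", "scores", "overall_assessment")

def validate_agent_output(raw: str) -> dict:
    """Parse the agent's JSON reply and reject malformed responses."""
    data = json.loads(raw)
    for key in REQUIRED_KEYS:
        if key not in data:
            raise ValueError(f"missing key: {key}")
    # Enforce the discrete 20/40/60/80/100 scoring scale.
    bad = {k: v for k, v in data["scores"].items() if v not in ALLOWED_SCORES}
    if bad:
        raise ValueError(f"non-discrete scores: {bad}")
    return data
```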
`src/scorer.py:generate_agent_scores()` (lines 173-257)
```json
{
  "trading_style": "Systematic Quant",
  "reasoning": {
    "mdd": "P85 indicates excellent drawdown control",
    "roi": "P70 shows solid but not exceptional returns",
    "calmar": "P90 demonstrates superior risk-adjusted performance"
  },
  "scores": {
    "mdd_score": 80,
    "roi_score": 60,
    "cagr_score": 80,
    "calmar_score": 100,
    "avg_drawdown_score": 80,
    "recovery_factor_score": 80
  },
  "overall_assessment": "[Systematic Quant] Algorithmic approach with disciplined risk management. MDD P85 shows elite drawdown control, Calmar P90 indicates efficient capital utilization. Sustainability: High with maintained risk limits."
}
```

```bash
# Test database connection
python scripts/test_db.py

# Test Hyperliquid API
python scripts/test_portfolio_api.py

# Test scoring logic
python scripts/test_scoring.py

# Run MVP end-to-end test
python scripts/test_mvp.py
```

```bash
# Install dependencies
pip install -r requirements.txt

# Run FastAPI locally
uvicorn src.api:app --reload --port 8000
```

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- Hyperliquid for the trading platform and API
- Spoon AI for LLM orchestration framework
- Anthropic for Claude AI
- Prefect for workflow orchestration
- ClickHouse for blazing-fast analytics