This repository outlines three practical data pipelines (user journeys) for teams that don’t want to query Hubble directly for analytics. It accompanies the Meridian talk “Stellar Analytics on a Shoestring Budget” and is a reference implementation: examples and starting points, not production code.
Each route takes you from raw network data to answers while keeping costs low or zero, often avoiding BigQuery entirely or exporting out of it quickly.
| Route (folder) | What it is | Best for | Cost | Typical setup / run time | Hacker level | Start here |
|---|---|---|---|---|---|---|
| Crypto Analytics Platform (Dune) (dune_platform/) | Use a hosted analytics platform with community content and a credit/warehouse model. | “I just want answers now.” | $ – $$ | Minutes (point-and-query) | Low | ./dune_platform/ |
| Export Hubble to a cheaper DB (export_hubble/) | Use Hubble as a data source, then export to your own low-cost store/DB (e.g., GCS → DuckDB/Postgres) for ad-hoc analytics without ongoing BigQuery spend. | Lightly customized, ad-hoc analytics. | $ | Minutes → Hours (e.g., ~20 min for small slices) | Med | ./export_hubble/ |
| Build your own Local Pipeline (local_pipeline/) | DIY ETL: ingest public ledger data (AWS Open Data S3) or your own indexer, transform (e.g., stellar-etl or the Ingest SDK), and analyze locally in DuckDB. | Full control, fully free local stack. | Free | Hours → Days (depends on data range + transforms) | High | ./local_pipeline/README.md |
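The "export, then analyze locally" step common to the Export Hubble and Local Pipeline routes can be sketched in a few lines. This is a minimal, stdlib-only illustration using Python's sqlite3 as a stand-in for DuckDB/Postgres; the file contents, table name, and column names are hypothetical, not Hubble's actual schema:

```python
import csv
import io
import sqlite3

# Hypothetical slice exported from Hubble (e.g., BigQuery -> GCS -> CSV).
# Column names are illustrative only.
EXPORTED_CSV = """ledger_sequence,closed_at,tx_count
100,2024-01-01T00:00:05Z,42
101,2024-01-01T00:00:10Z,17
102,2024-01-01T00:00:16Z,58
"""

# Load the exported slice into a local, zero-cost database
# (sqlite3 stands in for DuckDB here).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ledgers (ledger_sequence INTEGER, closed_at TEXT, tx_count INTEGER)"
)
rows = list(csv.DictReader(io.StringIO(EXPORTED_CSV)))
conn.executemany(
    "INSERT INTO ledgers VALUES (:ledger_sequence, :closed_at, :tx_count)", rows
)

# Ad-hoc analytics with no ongoing BigQuery spend.
total, avg = conn.execute("SELECT SUM(tx_count), AVG(tx_count) FROM ledgers").fetchone()
print(total, avg)  # 117 39.0
```

With DuckDB the load step collapses further (e.g., querying an exported Parquet or CSV file directly), but the shape of the workflow is the same: export once, then run all subsequent queries against a local file.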
- This repo is reference-only: it shows patterns, trade-offs, and glue code to get started.
- If you’re optimizing for zero infra cost and maximum control, start with Local Pipeline.
- If you need the fastest time-to-answer with minimal setup, the Dune route fits best.
- If you already rely on Hubble as a source but want to avoid BigQuery for analytics, use Export Hubble to a cheaper DB.
```
.
├─ local_pipeline/   # DIY ETL + DuckDB
├─ export_hubble/    # Export from Hubble to a low-cost DB
└─ dune_platform/    # Hosted analytics platform path
```