Status: Future Work (Not Active)
This is a long-term roadmap item for BitQuan's evolution.
Current Status: ON HOLD - waiting for:
- v1.0.0 mainnet stable (6+ months runtime)
- Security audits complete (x2 minimum)
- P0/P1 tasks finished (wallet encryption, TLS, JWT)
Timeline: Earliest start date: 2026 Q3 (depends on prerequisites)
Priority: HIGH - This should be implemented BEFORE smart contracts and parallel execution to prevent state bloat.
Contribute:
- Discuss pruning strategies in comments
- Research state expiry mechanisms (Ethereum research, Solana snapshot sync)
- Prototype epoch-based storage design
- Write security analysis of resurrection proofs
DO NOT start implementation yet; we need mainnet data to tune parameters.
Research Notes
Implementations to Study:
- Ethereum State Expiry research
  - EIP-4444 and EIP-4762 proposals
  - Verkle tree transition (we won't use it, but the tradeoffs are worth understanding)
- Solana Snapshot Sync
  - Fast bootstrapping mechanism
  - Validation without full history
- Bitcoin UTXO set snapshots (assumeutxo)
  - Simple, proven approach
Design Decisions:
| Decision | Options | Recommendation |
|---|---|---|
| Epoch Duration | 50k / 100k / 210k blocks | 210k blocks (aligned with halving) |
| Pruning Mode | Archive vs Pruned | Default pruned, archive opt-in |
| Snapshot Format | Full state vs incremental | Full state per epoch |
| Resurrection | Allowed vs forbidden | Allowed with Merkle proof |
| Storage After Prune | Keep hashes vs delete all | Keep root hashes for verification |
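With the recommended 210k-block epochs, epoch ids fall straight out of block height. A minimal sketch in Rust (the `EPOCH_LENGTH` constant and `epoch_of` helper are illustrative names, not existing BitQuan code):

```rust
/// Illustrative sketch: 210,000-block epochs aligned with halvings.
const EPOCH_LENGTH: u64 = 210_000;

fn epoch_of(height: u64) -> u64 {
    height / EPOCH_LENGTH
}

fn main() {
    // The epoch boundary at height 420,000 coincides with the second halving.
    assert_eq!(epoch_of(419_999), 1);
    assert_eq!(epoch_of(420_000), 2);
}
```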
Architecture:
RocksDB Column Families:

    blocks:             [height]   -> Block (full data)
    headers:            [height]   -> BlockHeader (always kept)
    utxos:              [outpoint] -> UTXO + epoch_id
    state:              [key]      -> value + epoch_id
    epoch_roots:        [epoch]    -> MerkleRoot (pruning anchor)
    resurrection_cache: LRU cache for recent proofs
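A sketch of opening these column families with the `rocksdb` crate; names mirror the schema above, and nothing here is final BitQuan code. Note that `resurrection_cache` is an in-memory LRU rather than a persisted column family:

```rust
use rocksdb::{ColumnFamilyDescriptor, DB, Options};

/// Open the chain database with the column families listed above.
fn open_chain_db(path: &str) -> Result<DB, rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    opts.create_missing_column_families(true);

    // `resurrection_cache` lives in memory (e.g., an LRU map), not in RocksDB.
    let cfs = ["blocks", "headers", "utxos", "state", "epoch_roots"]
        .into_iter()
        .map(|name| ColumnFamilyDescriptor::new(name, Options::default()));

    DB::open_cf_descriptors(&opts, path, cfs)
}
```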
Implementation Phases:
Phase 1: Epoch Tracking (1-2 months)
- Add epoch_id to all state entries
- Track current epoch in chainstate
- No pruning yet, just marking
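A minimal sketch of the tagging step, assuming the epoch id is stored as an 8-byte prefix of the value (the `StateEntry` type and its encoding are illustrative, not a committed format):

```rust
/// Illustrative Phase 1 sketch: every state entry carries the epoch
/// in which it was last written.
#[derive(Debug, Clone, PartialEq)]
struct StateEntry {
    epoch_id: u64, // epoch of the block that last touched this entry
    value: Vec<u8>,
}

impl StateEntry {
    /// Encode as: 8-byte little-endian epoch id, then the raw value.
    fn encode(&self) -> Vec<u8> {
        let mut buf = self.epoch_id.to_le_bytes().to_vec();
        buf.extend_from_slice(&self.value);
        buf
    }

    fn decode(bytes: &[u8]) -> Option<Self> {
        if bytes.len() < 8 {
            return None;
        }
        Some(StateEntry {
            epoch_id: u64::from_le_bytes(bytes[..8].try_into().ok()?),
            value: bytes[8..].to_vec(),
        })
    }
}
```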
Phase 2: Pruning Service (1 month)
- Background task: scan and delete old epochs
- Configurable retention (default: 3 epochs = ~12 years)
- Keep epoch root hashes for verification
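A sketch of one pruning pass, reusing the `StateEntry` layout assumed in Phase 1 (epoch id in the first 8 bytes of the value). A real implementation would likely index entries by epoch instead of scanning everything, but a full scan shows the idea:

```rust
use rocksdb::{IteratorMode, DB};

const RETENTION_EPOCHS: u64 = 3; // plan default: 3 epochs (~12 years)

/// Delete state entries older than the retention window.
/// Root hashes in `epoch_roots` are deliberately left untouched,
/// so pruned data remains verifiable (and resurrectable).
fn prune_old_epochs(db: &DB, current_epoch: u64) -> Result<(), rocksdb::Error> {
    let cf = db.cf_handle("state").expect("state column family exists");
    let oldest_kept = current_epoch.saturating_sub(RETENTION_EPOCHS - 1);

    for item in db.iterator_cf(cf, IteratorMode::Start) {
        let (key, value) = item?;
        if value.len() < 8 {
            continue; // malformed entry; skip rather than panic
        }
        // First 8 bytes of the value hold the little-endian epoch id
        // (matching the StateEntry sketch in Phase 1).
        let epoch = u64::from_le_bytes(value[..8].try_into().unwrap());
        if epoch < oldest_kept {
            db.delete_cf(cf, &key)?;
        }
    }
    Ok(())
}
```

Deleting during iteration is safe here because RocksDB iterators read from a consistent snapshot.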
Phase 3: Snapshot Sync (2 months)
- Generate UTXO snapshot at epoch boundary
- Verify snapshot hash against blockchain
- New nodes: download snapshot + recent blocks
- Reduce sync time from days to hours
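A sketch of the verification step: a downloaded snapshot is only trusted if its hash matches the commitment recorded on-chain. A flat SHA-256 over the snapshot bytes is assumed here for simplicity; the real check would compare against the epoch's root in `epoch_roots`:

```rust
use sha2::{Digest, Sha256};

/// Accept a downloaded snapshot only if it matches the chain's commitment.
/// `expected_hash` would be read from `epoch_roots` / block headers.
fn verify_snapshot(snapshot_bytes: &[u8], expected_hash: &[u8; 32]) -> bool {
    let digest: [u8; 32] = Sha256::digest(snapshot_bytes).into();
    digest == *expected_hash
}
```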
Phase 4: State Resurrection (1-2 months, optional)
- User submits: (key, value, Merkle proof, epoch_id)
- Verify proof against epoch_roots
- Charge resurrection fee (prevent spam)
- Restore UTXO to active set
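A sketch of the proof check, assuming a plain binary SHA-256 Merkle tree; the sparse Merkle tree from the task list would handle paths differently, and fee charging is left out. All types and names are illustrative:

```rust
use sha2::{Digest, Sha256};

struct ResurrectionRequest {
    key: Vec<u8>,
    value: Vec<u8>,
    epoch_id: u64,
    /// Sibling hashes from leaf to root: (hash, sibling_is_left).
    proof: Vec<([u8; 32], bool)>,
}

fn leaf_hash(key: &[u8], value: &[u8]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(key);
    h.update(value);
    h.finalize().into()
}

/// Recompute the root from the proof and compare against the stored root.
/// `lookup_root` abstracts the read from the `epoch_roots` column family.
fn verify_resurrection(
    req: &ResurrectionRequest,
    lookup_root: impl Fn(u64) -> Option<[u8; 32]>,
) -> bool {
    let Some(expected_root) = lookup_root(req.epoch_id) else {
        return false; // unknown or never-finalized epoch
    };
    let mut acc = leaf_hash(&req.key, &req.value);
    for (sibling, sibling_is_left) in &req.proof {
        let mut h = Sha256::new();
        if *sibling_is_left {
            h.update(sibling);
            h.update(acc);
        } else {
            h.update(acc);
            h.update(sibling);
        }
        acc = h.finalize().into();
    }
    acc == expected_root
}
```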
Open Questions:
- How to handle reorgs across epoch boundaries?
  - Keep a buffer of N blocks before pruning?
- Resurrection fee model?
  - Fixed fee vs proportional to proof size?
- Archive node incentives?
  - How to encourage people to run archive nodes?
- Migration path?
  - How to upgrade the existing mainnet without breaking it?
Success Criteria:
- New node syncs in under 4 hours (vs 48+ hours full history)
- Pruned node disk usage: under 50GB after 10 years
- Archive node disk usage: under 2TB after 10 years
- Resurrection: 99%+ success rate for valid proofs
- Zero data loss during pruning (tested via resurrection)
Contributors: Add links, papers, or ideas below
Original Task List
Goal: Prevent chain state bloat due to high parallel throughput without introducing Verkle trees; use a tuned Merkle tree and clever pruning.
Core Concept: State Expiry (automatic deletion of obsolete data)
Tech Stack: RocksDB (column families), Sparse Merkle Tree
Task List (to break into separate issues):
- Implement epoch-based storage: partition stored data in the DB by epochs (e.g., annual cycles).
- Create background service to automatically prune old epoch data from disk, keeping only root hashes.
- (Optional/Advanced) Add a state resurrection mechanism allowing users to attach proofs and restore pruned balances.
- Implement snapshot sync so new nodes can fetch only the latest state instead of syncing from genesis.