A fast Solana indexer powered by the Tide engine
Features • Quick Start • Architecture • Configuration • API Reference
SNI (Solana Network Indexer) is a high-performance blockchain indexer designed specifically for the Solana network. Built with Rust and powered by the innovative Tide engine, SNI provides real-time indexing capabilities with minimal latency and maximum throughput.
- ⚡ Ultra-fast Processing: Built on the Tide engine for maximum performance
- 🔄 Real-time Indexing: Live blockchain data streaming and processing
- 💾 Flexible Storage: SQLite support with planned PostgreSQL integration
- 🛡️ Production Ready: Robust error handling and comprehensive monitoring
- 🧩 Modular Design: Clean architecture with pluggable components
- 📊 Rich Metrics: Detailed performance and health monitoring
- 🏗️ Block Processing: Complete block data indexing with transaction details
- 💰 Account Tracking: Real-time account state changes and updates
- 🔗 Transaction Indexing: Full transaction history with metadata
- 📈 Slot Monitoring: Slot progression and consensus tracking
- ⚡ High Throughput: Optimized for handling Solana's high TPS
- 🔄 Automatic Recovery: Resilient to network interruptions
- 📊 Health Monitoring: Comprehensive network and system health checks
- 🎯 Smart Buffering: Intelligent data buffering and batching
- 🛠️ Easy Setup: Simple configuration and deployment
- 📝 Rich Logging: Detailed tracing and debugging support
- 🔧 CLI Interface: Intuitive command-line tools
- 📖 Comprehensive Docs: Detailed documentation and examples
SNI is built on a modular architecture leveraging the powerful Tide engine:
```mermaid
graph TB
    subgraph "SNI Core"
        NM[Network Monitor]
        IE[Indexer Engine]
        SM[Storage Manager]
        API[API Layer]
    end

    subgraph "Tide Engine"
        TC[tide-core]
        TG[tide-geyser]
        TO[tide-output]
        TCONF[tide-config]
        TCOMM[tide-common]
    end

    subgraph "Solana Network"
        RPC[RPC Nodes]
        VAL[Validators]
        GEY[Geyser Streams]
    end

    subgraph "Storage Layer"
        SQLITE[(SQLite DB)]
        POSTGRES[(PostgreSQL)]
    end

    subgraph "Output Formats"
        JSON5[JSON5 Files]
        PARQUET[Parquet Files]
        METRICS[Metrics & Logs]
    end

    %% Core connections
    NM --> RPC
    NM --> VAL
    IE --> TC
    IE --> TG
    SM --> SQLITE
    SM --> POSTGRES

    %% Tide engine internal
    TC --> TO
    TC --> TCONF
    TC --> TCOMM
    TG --> GEY
    TO --> JSON5
    TO --> PARQUET
    TO --> METRICS

    %% Data flow
    GEY -.->|Stream Data| TG
    RPC -.->|Block Data| NM
    VAL -.->|Validator Info| NM

    %% Processing flow
    TG -->|Processed Data| IE
    IE -->|Indexed Data| SM
    TC -->|Formatted Output| TO

    %% Styling
    classDef coreComponent fill:#e1f5fe,stroke:#01579b,stroke-width:2px
    classDef tideComponent fill:#f3e5f5,stroke:#4a148c,stroke-width:2px
    classDef solanaComponent fill:#e8f5e8,stroke:#1b5e20,stroke-width:2px
    classDef storageComponent fill:#fff3e0,stroke:#e65100,stroke-width:2px
    classDef outputComponent fill:#fce4ec,stroke:#880e4f,stroke-width:2px

    class NM,IE,SM,API coreComponent
    class TC,TG,TO,TCONF,TCOMM tideComponent
    class RPC,VAL,GEY solanaComponent
    class SQLITE,POSTGRES storageComponent
    class JSON5,PARQUET,METRICS outputComponent
```
- 🌊 Tide Engine: Core processing engine providing high-performance data streaming
- 📡 Network Monitor: Real-time network health and validator tracking
- ⚙️ Indexer Engine: Main processing logic for blockchain data
- 💾 Storage Manager: Flexible data persistence layer
- 🔧 Configuration: Comprehensive configuration management
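To make the data flow concrete, here is a purely illustrative Rust sketch of how these components hand data to one another. Every type and method name below is an assumption for the sake of the example and does not correspond to SNI's internal API:

```rust
// Illustrative only: a simplified view of the stream -> index -> store pipeline.
// All names here are assumptions, not SNI's actual internal API.
use anyhow::Result;

/// A block update as it might arrive from the Tide geyser stream.
pub struct BlockUpdate {
    pub slot: u64,
    pub transaction_count: usize,
}

/// Abstract persistence layer (SQLite today, PostgreSQL planned).
pub trait StorageManager {
    fn persist_block(&mut self, update: &BlockUpdate) -> Result<()>;
}

/// Core processing logic: consume updates, enrich them, hand them to storage.
pub struct IndexerEngine<S: StorageManager> {
    storage: S,
}

impl<S: StorageManager> IndexerEngine<S> {
    pub fn new(storage: S) -> Self {
        Self { storage }
    }

    pub fn process(&mut self, updates: impl IntoIterator<Item = BlockUpdate>) -> Result<()> {
        for update in updates {
            // Decoding, buffering, and metrics collection would happen here.
            self.storage.persist_block(&update)?;
        }
        Ok(())
    }
}
```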
To get started you will need:

- Rust 1.70+ - Install from rustup.rs
- Git - For cloning the repository
- Clone the repository

  ```bash
  git clone https://github.com/your-org/sni.git
  cd sni
  ```

- Build the project

  ```bash
  cargo build --release
  ```

- Run health check

  ```bash
  ./target/release/sni health
  ```

- Start indexing (with default config)

  ```bash
  ./target/release/sni start
  ```
For development with the Tide workspace:
```bash
# Navigate to the project directory
cd /path/to/your/windexer/sni
# Build with development dependencies
cargo build
# Run with debug logging
./target/debug/sni start --debug
# Check project status
cargo check
```

SNI uses TOML configuration files for easy customization:
```toml
[network]
rpc_url = "https://api.mainnet-beta.solana.com"
websocket_url = "wss://api.mainnet-beta.solana.com"
commitment = "confirmed"
timeout_seconds = 30
[storage]
database_url = "sqlite:sni.db"
batch_size = 1000
max_connections = 10
[indexer]
start_slot = "latest"
enable_account_indexing = true
enable_transaction_indexing = true
buffer_size = 10000
[logging]
level = "info"
file = "sni.log"For production deployments, see config/production.toml for advanced settings including:
- Custom RPC endpoints
- Performance tuning parameters
- Monitoring and alerting configuration
- Database optimization settings
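For orientation, here is a minimal sketch of a serde-backed config type that would deserialize the TOML shown above. The field grouping mirrors the example file; the actual `SniConfig` layout in `src/config.rs` may differ:

```rust
// Sketch only: struct names and fields mirror the example TOML above and are
// assumptions, not necessarily SNI's real config types.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct SniConfig {
    pub network: NetworkConfig,
    pub storage: StorageConfig,
    pub indexer: IndexerConfig,
    pub logging: LoggingConfig,
}

#[derive(Debug, Deserialize)]
pub struct NetworkConfig {
    pub rpc_url: String,
    pub websocket_url: String,
    pub commitment: String,
    pub timeout_seconds: u64,
}

#[derive(Debug, Deserialize)]
pub struct StorageConfig {
    pub database_url: String,
    pub batch_size: usize,
    pub max_connections: u32,
}

#[derive(Debug, Deserialize)]
pub struct IndexerConfig {
    pub start_slot: String,
    pub enable_account_indexing: bool,
    pub enable_transaction_indexing: bool,
    pub buffer_size: usize,
}

#[derive(Debug, Deserialize)]
pub struct LoggingConfig {
    pub level: String,
    pub file: String,
}

impl SniConfig {
    /// Read and parse a TOML config file from disk.
    pub fn load(path: &str) -> anyhow::Result<Self> {
        let raw = std::fs::read_to_string(path)?;
        Ok(toml::from_str(&raw)?)
    }
}
```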
Common CLI commands:

```bash
# Check network connectivity
sni health
# Start indexing from latest slot
sni start
# Start with custom config
sni start --config custom.toml
# Enable debug logging
sni start --debug
# Show version information
sni version
```

SNI can also be used as a library from Rust:

```rust
use sni::{SolanaIndexer, config::SniConfig};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
// Load configuration
let config = SniConfig::load("sni.toml")?;
// Create and start indexer
let mut indexer = SolanaIndexer::new(config).await?;
indexer.start().await?;
Ok(())
}
```

SNI provides comprehensive monitoring out of the box:
- 📈 Processing Stats: Blocks, transactions, and accounts processed
- ⏱️ Performance: Processing latency and throughput metrics
- 🌐 Network Health: RPC connectivity and validator status
- 💾 Storage: Database size and query performance
- 🔄 System: Memory usage and resource utilization
Example `sni health` output:

```text
✅ Network Status:
   Current Slot: 245123456
   Current Epoch: 567
   Slot in Epoch: 123456/432000
   Solana Version: 1.18.22
   Block Lag: 2s

✅ Network is healthy and reachable
```

Example periodic stats log:

```text
SNI Stats - Uptime: 3600s | Blocks: 1234 | Transactions: 45678 | Accounts: 12345 | Latency: 15ms
```
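As a rough illustration, a stats line like the one above could be assembled from a simple counters struct. The struct and field names here are assumptions, not SNI's actual types:

```rust
// Hypothetical sketch of how the periodic stats line could be formatted.
use std::time::Instant;

struct IndexerStats {
    started_at: Instant,
    blocks: u64,
    transactions: u64,
    accounts: u64,
    avg_latency_ms: u64,
}

impl IndexerStats {
    /// Render the stats in the same shape as the log line shown above.
    fn report(&self) -> String {
        format!(
            "SNI Stats - Uptime: {}s | Blocks: {} | Transactions: {} | Accounts: {} | Latency: {}ms",
            self.started_at.elapsed().as_secs(),
            self.blocks,
            self.transactions,
            self.accounts,
            self.avg_latency_ms,
        )
    }
}
```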
The repository is laid out as follows:

```text
sni/
├── src/
│   ├── main.rs       # CLI interface and application entry
│   ├── config.rs     # Configuration management
│   ├── indexer.rs    # Core indexing engine
│   ├── network.rs    # Network monitoring and RPC client
│   ├── storage.rs    # Data persistence layer
│   └── api.rs        # API endpoints (future)
├── config/           # Configuration templates
├── docs/             # Documentation
└── tests/            # Integration tests
```
```bash
# Development build
cargo build
# Release build with optimizations
cargo build --release
# Run tests
cargo test
# Check code quality
cargo clippy
cargo fmt
# Generate documentation
cargo doc --open
```

```bash
# Unit tests
cargo test --lib
# Integration tests
cargo test --test integration
# All tests with output
cargo test -- --nocapture
```
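As a starting point for new tests, an integration test under `tests/` might look like the following sketch. The config path and the absence of real assertions are placeholders, and the calls simply mirror the library example above:

```rust
// Hypothetical tests/integration.rs; "config/test.toml" is an assumed fixture path.
use sni::{config::SniConfig, SolanaIndexer};

#[tokio::test]
async fn indexer_builds_from_config() -> anyhow::Result<()> {
    // Load a test configuration and make sure the indexer can be constructed.
    let config = SniConfig::load("config/test.toml")?;
    let _indexer = SolanaIndexer::new(config).await?;
    // A real test would drive the indexer and assert on the stored data.
    Ok(())
}
```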
We welcome contributions! Please see our Contributing Guide for details.

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests for new functionality
- Run the test suite (`cargo test`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- ✅ Basic block and transaction indexing
- ✅ SQLite storage backend
- ✅ Network health monitoring
- ✅ CLI interface
- 🔄 v0.2.0: PostgreSQL support, enhanced metrics
- 🔄 v0.3.0: GraphQL API, advanced querying
- 🔄 v0.4.0: Distributed processing, horizontal scaling
- 🔄 v0.5.0: Real-time WebSocket API, pub/sub system
This project is dual-licensed under:
- Apache License 2.0 (LICENSE or http://www.apache.org/licenses/LICENSE-2.0)
You may choose either license at your option.
- Solana Foundation - For the overall support and amazing docs and platform
- Tide Engine - For the high-performance processing framework
- Rust Community - For the incredible tools and ecosystem
- Contributors - For making this project better
- 📧 Email: vivek@windnetwork.ai
- 🐛 Issues: GitHub Issues
- 📖 Docs: Full Documentation
Built with ❤️ by the Wind Network team
⭐ Star this repo if you find it useful! ⭐