lib-analytics-core, analytics, timescaledb, event-tracking, migrations
- Core analytics library for ADI platform
- Event tracking with batching and TimescaleDB persistence
- Non-blocking event collection via async channels
- Owns analytics database schema and migrations
Event tracking library used by all services:
- AnalyticsEvent: Enum of all trackable events
- AnalyticsClient: Non-blocking event tracking
- AnalyticsWorker: Background worker for bulk database inserts
- EnrichedEvent: Events with timestamp and metadata
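To make the relationship between these types concrete, here is a rough sketch of how `AnalyticsEvent` and `EnrichedEvent` might fit together. The field names and variants below are illustrative only, not the library's actual definitions:

```rust
use std::time::SystemTime;

// Illustrative shapes only; the real definitions live in lib-analytics-core.
#[derive(Debug, Clone)]
enum AnalyticsEvent {
    TaskCreated { task_id: u64, command: String },
    TaskCompleted { task_id: u64 },
}

// EnrichedEvent wraps a raw event with the metadata the worker persists.
#[derive(Debug)]
struct EnrichedEvent {
    timestamp: SystemTime,
    service: String,
    event: AnalyticsEvent,
}

impl EnrichedEvent {
    fn new(service: &str, event: AnalyticsEvent) -> Self {
        Self {
            timestamp: SystemTime::now(),
            service: service.to_string(),
            event,
        }
    }
}
```

The client would enrich each tracked event this way before handing it to the worker, so the database row always carries the timestamp and originating service.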
Migration runner for analytics database schema:
- Creates TimescaleDB hypertable for events
- Creates continuous aggregates for fast queries
- Manages compression and retention policies
```rust
use lib_analytics_core::{AnalyticsClient, AnalyticsWorker, AnalyticsEvent};

// Initialize (in main)
let (analytics_client, worker_config) = AnalyticsClient::new(
    100, // batch_size
    10,  // flush_interval_secs
);
let analytics_worker = AnalyticsWorker::new(worker_config, pool.clone());

// Spawn worker
tokio::spawn(async move {
    analytics_worker.run().await;
});

// Track events
analytics_client.track(AnalyticsEvent::TaskCreated {
    task_id: task.id,
    user_id: user.id,
    project_id: Some(project_id),
    cocoon_id: task.cocoon_id,
    command: task.command.clone(),
});
```

```bash
# Run all migrations
cargo run --bin analytics-migrate --features migrate all

# Check migration status
cargo run --bin analytics-migrate --features migrate status

# Dry run (preview pending migrations)
cargo run --bin analytics-migrate --features migrate dry-run

# Pre-deploy only (creates tables)
cargo run --bin analytics-migrate --features migrate pre

# Post-deploy only (creates aggregates)
cargo run --bin analytics-migrate --features migrate post
```

- `AuthLoginAttempt` - User login attempt (success/failure)
- `AuthCodeVerified` - Login code verification
- `AuthTokenRefresh` - Token refresh attempt
- `AuthSessionValidated` - Session validation check
- `TaskCreated` - Task created
- `TaskStarted` - Task execution started
- `TaskCompleted` - Task finished successfully
- `TaskFailed` - Task execution failed
- `TaskCancelled` - Task cancelled by user
- `IntegrationConnected` - Integration connected
- `IntegrationDisconnected` - Integration disconnected
- `IntegrationUsed` - Integration action performed
- `IntegrationError` - Integration error occurred
- `OAuthFlowStarted` - OAuth flow initiated
- `OAuthFlowCompleted` - OAuth flow finished
- `WebhookReceived` - Webhook received from external service
- `WebhookProcessed` - Webhook processing completed
- `CocoonRegistered` - Cocoon registered
- `CocoonConnected` - Cocoon connected to signaling server
- `CocoonDisconnected` - Cocoon disconnected
- `CocoonClaimed` - Cocoon claimed by user
- `CocoonSetupTokenCreated` - Setup token generated
- `CocoonSetupTokenUsed` - Setup token redeemed
- `ProjectCreated` - Project created
- `ProjectUpdated` - Project updated
- `ProjectDeleted` - Project deleted
- `ApiRequest` - HTTP API request (with latency, status code)
- `DatabaseQuery` - Database query executed
- `ApplicationError` - Application error occurred
```sql
CREATE TABLE analytics_events (
    id BIGSERIAL,
    timestamp TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    event_type VARCHAR(100) NOT NULL,
    service VARCHAR(100) NOT NULL,
    user_id UUID,
    data JSONB NOT NULL, -- Full event data
    PRIMARY KEY (timestamp, id)
);

-- Convert to hypertable (time-series partitioning)
SELECT create_hypertable('analytics_events', 'timestamp');
```

Features:
- Automatic partitioning by day
- Compression after 7 days (~90% space savings)
- 90-day retention policy (auto-delete old data)
- Indexed for fast queries
Auto-updating materialized views:
- `analytics_daily_active_users` - DAU/WAU/MAU
- `analytics_task_stats_daily` - Task metrics by day
- `analytics_api_latency_hourly` - API performance (p50/p95/p99)
- `analytics_integration_health_daily` - Integration stats
- `analytics_auth_events_daily` - Auth metrics
- `analytics_cocoon_activity_daily` - Cocoon usage
- `analytics_errors_hourly` - Error tracking
Refresh policy: Every hour for last 3 days
┌─────────────────────────────────────────────────┐
│ Services (Platform, Auth, Signaling, etc.) │
│ - Use AnalyticsClient │
│ - Track events (non-blocking) │
└───────────────┬─────────────────────────────────┘
│ async channel
▼
┌─────────────────────────────────────────────────┐
│ AnalyticsWorker (background task) │
│ - Batches events (100 or 10s) │
│ - Bulk INSERT to database │
└───────────────┬─────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────┐
│ PostgreSQL + TimescaleDB │
│ - analytics_events (hypertable) │
│ - Continuous aggregates (auto-update) │
└─────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────┐
│ adi-analytics (read-only) │
│ - Queries aggregates │
│ - Provides HTTP endpoints │
└─────────────────────────────────────────────────┘
- Events are sent via an unbounded async channel
- Worker flushes a batch at 100 events or after 10 seconds, whichever comes first
- No impact on API response times
- Graceful degradation if the worker fails
- TimescaleDB hypertable (optimized for time-series)
- Automatic compression (~90% space savings)
- Continuous aggregates (pre-computed rollups)
- Smart retention (90d raw, unlimited aggregates)
- Handles billions of events
- Sub-millisecond event tracking
- Sub-second aggregate queries
- Automatic data lifecycle management
Located in migrations/:
001_create_analytics_events.sql:
- Creates analytics_events table
- Converts to TimescaleDB hypertable
- Adds indexes (user_id, event_type, service)
- Configures compression and retention
002_create_analytics_aggregates.sql:
- Creates 7 continuous aggregates
- Adds refresh policies (hourly)
- Adds indexes for fast dashboard queries
For analytics-migrate binary:
- `DATABASE_URL` - PostgreSQL connection string
- `PLATFORM_DATABASE_URL` - Alternative to `DATABASE_URL`
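The fallback between the two variables could be implemented as below. This is a sketch under the assumption that `DATABASE_URL` takes precedence; `resolve_database_url` and the injected lookup are illustrative, not the binary's actual code:

```rust
use std::env;

// Prefer DATABASE_URL and fall back to PLATFORM_DATABASE_URL.
// The lookup is injected as a closure so the logic is easy to test.
fn resolve_database_url(get: impl Fn(&str) -> Option<String>) -> Option<String> {
    get("DATABASE_URL").or_else(|| get("PLATFORM_DATABASE_URL"))
}

// Reads from the real process environment.
fn database_url_from_env() -> Option<String> {
    resolve_database_url(|key| env::var(key).ok())
}
```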
For services using the library:
- No special env vars needed
- Just initialize client and worker with database pool
```bash
# Library only
cargo build --release

# With migration binary
cargo build --release --features migrate --bin analytics-migrate
```

```rust
// In adi-platform/src/main.rs
use lib_analytics_core::{AnalyticsClient, AnalyticsWorker};

let (analytics_client, worker_config) = AnalyticsClient::new(100, 10);
let analytics_worker = AnalyticsWorker::new(worker_config, pool.clone());
tokio::spawn(async move { analytics_worker.run().await });

// Add to AppState
AppState { analytics: analytics_client, ... }

// In handlers
state.analytics.track(AnalyticsEvent::TaskCreated { ... });
```

Same pattern everywhere: initialize the client, spawn the worker, track events.
- ✅ Separation of Concerns: Core owns the data model; the API queries it
- ✅ Non-Blocking: Event tracking never blocks business logic
- ✅ Batching: Efficient bulk inserts reduce DB load
- ✅ Type-Safe: Enum ensures valid event structure
- ✅ Scalable: TimescaleDB handles billions of events
- ✅ Cost-Effective: Compression + retention manage storage
The migrations are in lib-analytics-core because:
- Core defines the `AnalyticsEvent` enum → the schema must match
- Core contains `AnalyticsWorker` → it writes to the database
- Core owns the data model → it owns the schema
- API is read-only → doesn't own the structure
- Multiple services use core → any can run migrations
The binary ensures migrations can be run independently without requiring a full service deployment.