Dripfeed is a blockchain events indexer for Drips. It ingests protocol events—both historical and near real-time—from the Drips smart contracts, along with related IPFS documents, to build a structured database of higher-level entities (like “Drip Lists” and “Projects”). This database powers the Drips GraphQL API, which provides a unified, read-only endpoint for querying decentralized data across the Drips Network.
As a "read-only" service, Dripfeed and Drips GraphQL API function solely as a query layer for on-chain activity. Blockchain and IPFS remain the ultimate sources of truth. In practice, anyone can run their own instance of the service and, after indexing all past and ongoing events, reach the exact same state as the production Drips app.
⚠️ Dripfeed is designed to run as a single instance per blockchain network.
- Copy `.env.example` to `.env` and configure (a sketch of how these values might be consumed follows these steps):

  ```
  DATABASE_URL=postgresql://user:password@localhost:5432/dbname
  DB_SCHEMA=public
  NETWORK=mainnet
  RPC_URL=https://your-rpc-endpoint
  # ... other settings
  ```

- Run database migrations:

  ```bash
  npm run db:migrate
  ```

- Start the indexer:

  ```bash
  npm run dev    # Development mode with watch
  npm run start  # Production mode
  ```
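How the indexer consumes these variables is internal to Dripfeed; as a rough illustration of the values required, a loader might look like this (assuming the `dotenv` package, not Dripfeed's actual configuration code):

```ts
// Minimal sketch only: Dripfeed's real configuration loading may differ.
// Variable names mirror the .env example above.
import 'dotenv/config';

function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

export const config = {
  databaseUrl: requireEnv('DATABASE_URL'),
  dbSchema: process.env.DB_SCHEMA ?? 'public',
  network: requireEnv('NETWORK'),
  rpcUrl: requireEnv('RPC_URL'),
};
```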
To run with Docker instead:

- Configure environment variables in `.env`:

  ```bash
  cp .env.example .env
  # Edit .env with your configuration
  ```

- Start services:

  ```bash
  docker compose up -d
  ```

- Run migrations (first time only):

  ```bash
  docker compose exec dripfeed npm run db:migrate
  ```

The Docker setup includes:
- `dripfeed`: The indexer service (runs in dev mode with hot reload).
- `postgres`: PostgreSQL database (port 54321).
- `pgadmin`: Database admin interface (accessible at http://localhost:5051).
Connecting to pgAdmin:
- Open http://localhost:5051/ in your browser.
- Default credentials (override via env vars):
  - Email: `PGADMIN_EMAIL` (default: admin@admin.com)
  - Password: `PGADMIN_PASSWORD` (default: admin)
- Connect to the database using:
  - Host: `postgres`
  - Port: `5432`
  - Username: `POSTGRES_USER` (default: user)
  - Password: `POSTGRES_PASSWORD` (default: admin)
  - Database: `POSTGRES_DB` (default: dripfeeddb)
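pgAdmin reaches the database over the Docker network, which is why the host is `postgres` and the port is `5432`. From the host machine, the same database is published on port 54321 (see the service list above). As a quick connectivity check with the default credentials, assuming the node-postgres (`pg`) package is installed:

```ts
// Connects from the host machine, where the container's port 5432 is
// published as 54321. Credentials below are the defaults listed above.
import { Client } from 'pg';

async function main(): Promise<void> {
  const client = new Client({
    host: 'localhost',
    port: 54321,
    user: 'user',
    password: 'admin',
    database: 'dripfeeddb',
  });
  await client.connect();
  const { rows } = await client.query('SELECT now() AS server_time');
  console.log(rows[0]);
  await client.end();
}

main().catch(console.error);
```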
See the DEVELOPMENT.md guide.
- Fetch: Retrieves logs from the RPC endpoint, decodes events, and stores them with their block hashes.
- Detect: Compares stored block hashes against the current chain to catch reorgs.
- Process: Routes events to handlers that update entity state.
Events are processed sequentially in block order.
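A rough sketch of one iteration of that loop is shown below, using ethers v6 for RPC access. The storage and handler interfaces are hypothetical stand-ins for Dripfeed's internals, not its actual API:

```ts
import { JsonRpcProvider, type Log } from 'ethers';

// Hypothetical stand-in for Dripfeed's database layer.
interface Store {
  lastIndexedBlock(): Promise<{ number: number; hash: string } | null>;
  saveLogs(logs: Log[]): Promise<void>;           // persists logs with their block hashes
  rollbackTo(blockNumber: number): Promise<void>; // discards state above this block
}

async function indexNextRange(
  provider: JsonRpcProvider,
  store: Store,
  handle: (log: Log) => Promise<void>, // hypothetical per-event handler
): Promise<void> {
  // Detect: compare the stored hash of the last indexed block against the chain.
  const last = await store.lastIndexedBlock();
  if (last) {
    const onChain = await provider.getBlock(last.number);
    if (!onChain || onChain.hash !== last.hash) {
      // Reorg detected: drop state derived from orphaned blocks and re-index.
      await store.rollbackTo(last.number - 1);
      return;
    }
  }

  // Fetch: pull the next batch of logs from the RPC endpoint.
  // (Contract-address filtering, ABI decoding, and clamping `toBlock` to the
  // chain head are omitted for brevity.)
  const fromBlock = (last?.number ?? -1) + 1;
  const toBlock = fromBlock + 99; // example batch size
  const logs = await provider.getLogs({ fromBlock, toBlock });
  await store.saveLogs(logs);

  // Process: route each event to its handler, sequentially, in block order.
  for (const log of [...logs].sort((a, b) => a.blockNumber - b.blockNumber)) {
    await handle(log);
  }
}
```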