A NestJS learning project for building a realistic core banking or digital wallet backend with:
- Event sourcing for the write model
- CQRS for the write/read split
- Snapshotting for faster aggregate loads
- Background projections for query APIs
- PostgreSQL or in-memory infrastructure behind shared interfaces
The app lets you:
- Create, deposit into, withdraw from, and freeze accounts
- Persist account changes as immutable domain events
- Rehydrate aggregates from event history, using snapshots every 100 versions
- Maintain read models for account details, balances, and statement history
- Run background projection updates from the global event stream
- Track projection checkpoints so projection work resumes after restart
- Run lightweight versioned SQL migrations in Postgres mode
This repo uses a modular monolith with clear boundaries:
- `accounts` for account commands, aggregate logic, projections, and queries
- `transfers` for transfer orchestration scaffolding
- `infrastructure` for database access, event store, snapshots, projections, and messaging
High-level flow:
```text
Client
  |
  v
HTTP Controller
  |
  v
Command Handler
  |
  v
Load Aggregate (snapshot + tail events)
  |
  v
Domain Decision
  |
  v
Append Events to Event Store
  |
  +--> Background Projection Runner --> Read Tables
  |
  +--> Future Kafka / outbox integration
```
```text
src/
  common/
    cqrs/
    domain/
  infrastructure/
    db/
    event-store/
    messaging/
    projections/
    snapshots/
  modules/
    accounts/
      application/
      domain/
      query/
    transfers/
```
The write side is event sourced:
- the `AccountAggregate` enforces business rules
- the `AccountRepository` loads the aggregate from its stream
- new events are appended to the `events` table
- optimistic concurrency is enforced with `expectedVersion`
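A minimal in-memory sketch of that append path with an `expectedVersion` check (class and error names here are illustrative, not the repo's actual types):

```typescript
// Minimal in-memory event store illustrating optimistic concurrency:
// the append is rejected if the stream has grown past expectedVersion.
interface StoredEvent {
  streamId: string;
  streamVersion: number;
  eventType: string;
  payload: unknown;
}

class ConcurrencyError extends Error {}

class InMemoryEventStore {
  private streams = new Map<string, StoredEvent[]>();

  append(
    streamId: string,
    events: Omit<StoredEvent, "streamId" | "streamVersion">[],
    expectedVersion: number,
  ): void {
    const stream = this.streams.get(streamId) ?? [];
    // Someone else appended since this aggregate was loaded: refuse the write.
    if (stream.length !== expectedVersion) {
      throw new ConcurrencyError(
        `expected version ${expectedVersion}, stream is at ${stream.length}`,
      );
    }
    events.forEach((e, i) =>
      stream.push({ ...e, streamId, streamVersion: expectedVersion + i + 1 }),
    );
    this.streams.set(streamId, stream);
  }

  read(streamId: string, fromVersion = 0): StoredEvent[] {
    return (this.streams.get(streamId) ?? []).filter(
      (e) => e.streamVersion > fromVersion,
    );
  }
}
```

A conflicting writer that loaded the aggregate at the same version would get a `ConcurrencyError` and would need to reload and retry.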
Example account stream:
1. `AccountCreated`
2. `MoneyDeposited`
3. `MoneyWithdrawn`
4. `AccountFrozen`
The current account state is rebuilt from those facts, not from a mutable accounts row.
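That rebuild is a left fold over the stream's events. A small sketch, with illustrative event and state shapes (not the repo's actual types):

```typescript
// Rehydrate account state by folding over the event stream.
// Event/state shapes are illustrative assumptions for this sketch.
type AccountEvent =
  | { type: "AccountCreated"; currency: string }
  | { type: "MoneyDeposited"; amount: number }
  | { type: "MoneyWithdrawn"; amount: number }
  | { type: "AccountFrozen" };

interface AccountState {
  currency: string;
  balance: number;
  status: "ACTIVE" | "FROZEN";
  version: number;
}

function rehydrate(events: AccountEvent[]): AccountState {
  return events.reduce<AccountState>(
    (state, event) => {
      const next = { ...state, version: state.version + 1 };
      switch (event.type) {
        case "AccountCreated":
          return { ...next, currency: event.currency, status: "ACTIVE" };
        case "MoneyDeposited":
          return { ...next, balance: state.balance + event.amount };
        case "MoneyWithdrawn":
          return { ...next, balance: state.balance - event.amount };
        case "AccountFrozen":
          return { ...next, status: "FROZEN" };
      }
    },
    { currency: "", balance: 0, status: "ACTIVE", version: 0 },
  );
}
```

Folding `AccountCreated`, a 1000 deposit, and a 200 withdrawal yields a balance of 800 at version 3.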
To avoid replaying very long account streams from version 1 every time, the repo stores snapshots:
- snapshot interval is currently `100` versions
- snapshots are a performance optimization only
- the event stream remains the source of truth
Aggregate load flow:
- load the latest snapshot for `account-{id}`
- restore aggregate state from the snapshot
- read only events after the snapshot version
- replay the remaining tail events
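The load flow above can be sketched as a generic helper (the function signature and store interfaces are assumptions, not the repo's actual API):

```typescript
// Snapshot-assisted aggregate load: restore from the latest snapshot,
// then replay only the tail events recorded after the snapshot's version.
interface Snapshot<S> {
  version: number;
  state: S;
}

function loadAggregate<S, E>(
  snapshot: Snapshot<S> | undefined,
  readEventsAfter: (version: number) => E[],
  apply: (state: S, event: E) => S,
  initial: S,
): { state: S; version: number } {
  // With no snapshot, fall back to a full replay from version 0.
  const baseState = snapshot?.state ?? initial;
  const baseVersion = snapshot?.version ?? 0;
  const tail = readEventsAfter(baseVersion);
  return {
    state: tail.reduce(apply, baseState),
    version: baseVersion + tail.length,
  };
}
```

With a snapshot every 100 versions, a stream at version 150 replays at most the ~50 tail events instead of all 150.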
The query side is served from projections, not from aggregate rehydration during reads.
Current projection tables:
- `account_summary`
- `account_statement`
- `projection_checkpoints`
The background projection runner:
- reads new events from the global event stream
- projects account events into read tables
- stores its last processed position
- resumes from checkpoint after restart
This means the query side is eventually consistent with the write side.
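One pass of such a runner might look like this sketch (names are illustrative guesses; the repo's actual runner may differ, though it does persist its position in `projection_checkpoints`):

```typescript
// One checkpointed projection pass: read events after the last processed
// global position, project them into read tables, persist the new checkpoint.
interface GlobalEvent {
  globalPosition: number;
  eventType: string;
  payload: unknown;
}

interface CheckpointStore {
  load(projection: string): number; // last processed position, 0 if none
  save(projection: string, position: number): void;
}

function runProjectionOnce(
  projection: string,
  checkpoints: CheckpointStore,
  readAfter: (position: number, batchSize: number) => GlobalEvent[],
  project: (event: GlobalEvent) => void,
  batchSize = 100,
): number {
  let position = checkpoints.load(projection);
  const batch = readAfter(position, batchSize);
  for (const event of batch) {
    project(event); // update read tables, e.g. account_summary
    position = event.globalPosition;
  }
  checkpoints.save(projection, position); // resume here after restart
  return batch.length; // a poller can loop while this stays > 0
}
```

A poller would call this in a loop; after a restart the first call resumes from the saved checkpoint rather than position 0.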
`EVENT_STORE_KIND` controls which infrastructure implementation is used:
- `in-memory` for fast local learning and tests
- `postgres` for a persistent event store, snapshots, and read models
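A plain-factory sketch of that switch (in the repo this choice would sit behind a Nest provider and a shared interface; class names here are placeholders):

```typescript
// Choose the event store implementation from EVENT_STORE_KIND,
// defaulting to the in-memory variant for local learning.
type EventStoreKind = "in-memory" | "postgres";

interface EventStorePort {
  kind: EventStoreKind;
}

class InMemoryStore implements EventStorePort {
  kind: EventStoreKind = "in-memory";
}

class PostgresStore implements EventStorePort {
  kind: EventStoreKind = "postgres";
}

function createEventStore(env: Record<string, string | undefined>): EventStorePort {
  const kind = (env.EVENT_STORE_KIND ?? "in-memory") as EventStoreKind;
  return kind === "postgres" ? new PostgresStore() : new InMemoryStore();
}
```

Because both implementations satisfy the same interface, command handlers and repositories stay unaware of which backend is active.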
In Postgres mode, the app uses:
- `events` for the append-only event log
- `snapshots` for aggregate snapshots
- `account_summary` for current account state
- `account_statement` for account history
- `projection_checkpoints` for projection progress
- `schema_migrations` for versioned SQL migrations
```bash
docker compose up -d postgres zookeeper kafka
npm install
docker compose exec -T postgres psql -U banking -d banking -f /docker-entrypoint-initdb.d/init.sql
```

`init.sql` is useful for first-time container initialization. After that, incremental schema changes should be added as versioned migrations in `src/infrastructure/db/migrations/migrations.ts`.
```bash
npm run start:dev
```

Base URL: `http://localhost:3000/api`
Health check:
GET /health
Create account:
```bash
curl -X POST http://localhost:3000/api/accounts \
  -H "Content-Type: application/json" \
  -d "{\"accountId\":\"acc-1\",\"ownerId\":\"user-1\",\"currency\":\"USD\"}"
```

Deposit money:
```bash
curl -X POST http://localhost:3000/api/accounts/acc-1/deposits \
  -H "Content-Type: application/json" \
  -d "{\"amount\":1000,\"currency\":\"USD\",\"transactionId\":\"txn-1\"}"
```

Withdraw money:
```bash
curl -X POST http://localhost:3000/api/accounts/acc-1/withdrawals \
  -H "Content-Type: application/json" \
  -d "{\"amount\":200,\"currency\":\"USD\",\"transactionId\":\"txn-2\"}"
```

Freeze account:
```bash
curl -X POST http://localhost:3000/api/accounts/acc-1/freeze \
  -H "Content-Type: application/json" \
  -d "{\"reason\":\"compliance review\"}"
```

Transfer scaffolding:
```bash
curl -X POST http://localhost:3000/api/transfers \
  -H "Content-Type: application/json" \
  -d "{\"sourceAccountId\":\"acc-1\",\"destinationAccountId\":\"acc-2\",\"amount\":150,\"currency\":\"USD\"}"
```

Get account details from `account_summary`:

```bash
curl http://localhost:3000/api/accounts/acc-1
```

Example response:
```json
{
  "accountId": "acc-1",
  "ownerId": "user-1",
  "currency": "USD",
  "status": "ACTIVE",
  "balance": 800,
  "version": 3,
  "createdAt": "2026-03-22T10:00:00.000Z",
  "updatedAt": "2026-03-22T10:05:00.000Z"
}
```

Get current balance:

```bash
curl http://localhost:3000/api/accounts/acc-1/balance
```

Get account history from `account_statement`:

```bash
curl "http://localhost:3000/api/accounts/acc-1/history?limit=50&offset=0"
```

Example response:
```json
{
  "accountId": "100000",
  "entries": [
    {
      "eventId": "ab2db266-6610-4a7f-8ad4-fe10851523fc",
      "accountId": "100000",
      "streamVersion": 1,
      "eventType": "AccountCreated",
      "occurredAt": "2026-03-22T21:22:08.718Z"
    },
    {
      "eventId": "f5eecd0f-2981-4327-9652-83a1772c5424",
      "accountId": "100000",
      "streamVersion": 2,
      "eventType": "MoneyDeposited",
      "amount": 15000,
      "currency": "NGN",
      "transactionId": "txn-1",
      "occurredAt": "2026-03-22T21:26:04.083Z"
    },
    {
      "eventId": "fb917c12-f9e4-4f52-98ae-1dce314a7809",
      "accountId": "100000",
      "streamVersion": 3,
      "eventType": "MoneyDeposited",
      "amount": 25000,
      "currency": "NGN",
      "transactionId": "txn-1",
      "occurredAt": "2026-03-22T21:26:47.177Z"
    },
    {
      "eventId": "b4cbbcab-a5ba-424a-8e4a-cebd4eb3f074",
      "accountId": "100000",
      "streamVersion": 4,
      "eventType": "MoneyWithdrawn",
      "amount": 5000,
      "currency": "NGN",
      "transactionId": "txn-1",
      "occurredAt": "2026-03-22T21:31:35.785Z"
    }
  ]
}
```

Write endpoints return once the event is appended to the event store. Query endpoints read from projections updated by a background worker. Because of that:
- a write can succeed before the read model reflects it
- reads are usually very fast
- reads may lag briefly behind writes
That tradeoff is intentional in CQRS systems.
This repo does not use an ORM.
Instead it uses:
- `pg` for direct SQL access
- a Nest provider called `PG_POOL` for shared connections
- a lightweight migration runner on app startup in Postgres mode
Migration flow:
- app starts
- if `EVENT_STORE_KIND=postgres`, the migration runner checks `schema_migrations`
- pending migrations are applied in order
- projection runner and repositories use the resulting schema
For new schema changes:
- add a new migration object with the next version in `src/infrastructure/db/migrations/migrations.ts`
- do not edit old migrations that may already be applied in real environments
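A sketch of what a versioned migration list and its pending-set calculation could look like (the `Migration` shape and the example SQL are illustrative guesses, not the repo's exact types):

```typescript
// Versioned migration list; the runner applies only versions not yet
// recorded in schema_migrations, in ascending version order.
interface Migration {
  version: number;
  name: string;
  up: string; // SQL applied in Postgres mode
}

const migrations: Migration[] = [
  {
    version: 1,
    name: "create-events-table",
    up: "CREATE TABLE IF NOT EXISTS events (id UUID PRIMARY KEY)", // placeholder SQL
  },
  // Add new schema changes here with the next version; never edit old entries.
];

function pendingMigrations(
  all: Migration[],
  appliedVersions: Set<number>,
): Migration[] {
  return [...all]
    .sort((a, b) => a.version - b.version)
    .filter((m) => !appliedVersions.has(m.version));
}
```

Filtering against the applied set is what makes the runner idempotent: restarting the app re-checks `schema_migrations` and applies nothing that already ran.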
Key environment variables:
- `PORT` (default `3000`)
- `EVENT_STORE_KIND` (either `in-memory` or `postgres`)
- `POSTGRES_HOST`
- `POSTGRES_PORT`
- `POSTGRES_USER`
- `POSTGRES_PASSWORD`
- `POSTGRES_DB`
- `KAFKA_BROKER`
- `KAFKA_CLIENT_ID`
See .env.example.
Known limitations:
- account query projections are currently the only implemented read models
- projections are updated by an in-process poller, not Kafka consumers yet
- transfer flow is still scaffolding rather than a full durable saga
- no idempotency store for commands yet
- no read-your-own-write strategy yet for query-after-command UX
Possible next steps:
- Add replay tooling for rebuilding projections from zero.
- Add transfer status projection and query endpoint.
- Add an outbox pattern for reliable event publication.
- Move projections to dedicated workers or Kafka consumers if needed.
- Add command idempotency keyed by `commandId`.
- Add integration tests covering concurrency conflicts and projection catch-up.