Nommie is a web-based, multiplayer version of Nomination Whist (with our house rules).
It's a full-stack, Docker-first app with a clean split between frontend, backend, and database.
- Prereqs: Node 18+, pnpm 8+, Rust stable, Docker.
- Copy the env file and source it once per shell:

  ```bash
  cp .env.example .env
  set -a; . ./.env; set +a
  ```

- Start Postgres & Redis (run manually with Docker Compose):

  ```bash
  docker compose -f docker/dev-db/docker-compose.yml up -d postgres redis
  ```

- Create/refresh databases (run manually; see the Database & Migrations section below).
- Run backend + frontend:
  - Both: `pnpm start` (starts backend and frontend, logs to `.dev/dev.log`)
  - Or individually: `pnpm be:up` / `pnpm fe:up`
  - Stop: `pnpm stop` (stops both) or `pnpm be:down` / `pnpm fe:down`
- Run backend tests: `pnpm be:test` (plain `cargo test -- --nocapture` for now)
Tip: If a shell is new, re-source the env:

```bash
set -a; . ./.env; set +a
```
We ship standalone Dockerfiles for the backend (docker/prod/backend.Dockerfile) and the frontend (docker/prod/frontend.Dockerfile). Create dedicated env files (e.g. .env.backend.prod, .env.frontend.prod) and pass them via docker run --env-file to avoid baking secrets into images.
```bash
docker build -f docker/prod/backend.Dockerfile -t nommie-backend:prod .
docker run --env-file .env.backend.prod -p 3001:3001 nommie-backend:prod
```

Minimum env:

- `BACKEND_JWT_SECRET`
- Database coordinates (`POSTGRES_HOST`, `POSTGRES_PORT`, `POSTGRES_DB`, `APP_DB_USER`, `APP_DB_PASSWORD`)
- Any telemetry or feature flags your deployment needs
The container listens on port 3001 and logs to stdout/stderr. Run migrations beforehand (e.g. via apps/migration-cli) and point the env vars at the migrated database.
```bash
docker build -f docker/prod/frontend.Dockerfile -t nommie-frontend:prod .
docker run --env-file .env.frontend.prod -p 3000:3000 nommie-frontend:prod
```

Important env:

- `BACKEND_BASE_URL` pointing at your backend API
- NextAuth secrets/providers (`AUTH_SECRET`, `AUTH_GOOGLE_ID`, `AUTH_GOOGLE_SECRET`)
- Any `NEXT_PUBLIC_*` values you expose to the client

The image uses Next.js `output: 'standalone'` and serves the app with `node server.js` on port 3000.
We don't store DATABASE_URL. We store parts in .env and construct URLs in code.
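For illustration, the URL construction looks roughly like the sketch below. This is a hedged sketch, not the backend's actual code; the variable names follow the Database Configuration list later in this file.

```rust
use std::env;

/// Sketch only: assemble a Postgres URL from the parts stored in .env.
/// The backend's real construction code may differ.
fn database_url() -> Result<String, env::VarError> {
    let host = env::var("POSTGRES_HOST").unwrap_or_else(|_| "localhost".into());
    let port = env::var("POSTGRES_PORT").unwrap_or_else(|_| "5432".into());
    let db = env::var("PROD_DB")?;
    let user = env::var("APP_DB_USER")?;
    let password = env::var("APP_DB_PASSWORD")?;
    Ok(format!("postgres://{user}:{password}@{host}:{port}/{db}"))
}
```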
Important: Environment variables must be set by the runtime environment. The application does not automatically load .env files.
For Local Development: Environment variables must be sourced in each new shell session:

```bash
set -a; . ./.env; set +a
```

For Docker Deployments: Environment variables are set via docker-compose.yml `env_file` directives or `docker run --env-file`. See the Docker setup sections below.

For Standalone Docker Containers: Pass environment files when running containers:

```bash
docker run --env-file .env.backend.prod -p 3001:3001 nommie-backend:prod
```

Database Configuration:

- `POSTGRES_HOST`, `POSTGRES_PORT` - Database connection (defaults: localhost, 5432)
- `PROD_DB`, `TEST_DB` - Database names (the test DB must end with `_test`)
- `APP_DB_USER`, `APP_DB_PASSWORD` - App role credentials
- `NOMMIE_OWNER_USER`, `NOMMIE_OWNER_PASSWORD` - Owner role credentials
Backend Configuration:
- `APP_JWT_SECRET` - JWT signing secret (required)
- `CORS_ALLOWED_ORIGINS` - Comma-separated allowed origins (defaults to localhost:3000, 127.0.0.1:3000)
- `REDIS_URL` - Redis connection string for realtime fan-out (e.g. `redis://127.0.0.1:6379/0`)
Frontend Configuration:
- `BACKEND_BASE_URL` - Backend API URL (e.g., `http://localhost:3001`)
- `APP_JWT_SECRET` - NextAuth secret (shared with the backend JWT secret from the root `.env`)
- `NEXT_PUBLIC_BACKEND_WS_URL` - Optional override for the websocket base (falls back to `BACKEND_BASE_URL` with `ws://`)
- See the Authentication Setup section for Google OAuth configuration
- Root environment: Copy `.env.example` to `.env` and update the values
- Frontend environment: See the Authentication Setup section below for detailed frontend configuration
- Shared secrets: The frontend automatically uses `APP_JWT_SECRET` from the root `.env` file
Auto-Migration: Empty databases are automatically migrated on first connection via `build_state()`.
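A minimal sketch of that behaviour, assuming the SeaORM migrator from `apps/backend/migration` (the crate name `migration` and the exact wiring inside `build_state()` are assumptions):

```rust
use migration::{Migrator, MigratorTrait}; // assumed crate name for apps/backend/migration
use sea_orm::{Database, DatabaseConnection, DbErr};

/// Sketch only: connect, then apply any pending migrations.
/// An empty database gets the full schema; an up-to-date one is a no-op.
async fn connect_and_migrate(url: &str) -> Result<DatabaseConnection, DbErr> {
    let db = Database::connect(url).await?;
    Migrator::up(&db, None).await?;
    Ok(db)
}
```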
Docker Compose Commands (run manually):
- Start Postgres: `docker compose -f docker/dev-db/docker-compose.yml up -d postgres`
- Stop Postgres: `docker compose -f docker/dev-db/docker-compose.yml stop postgres`
- Check readiness: `pnpm db:svc:ready`
- View logs: `docker compose -f docker/dev-db/docker-compose.yml logs -f postgres`
- Connect via psql: `docker compose -f docker/dev-db/docker-compose.yml exec postgres psql -U "${POSTGRES_USER}" -d "${POSTGRES_DB}"`
Manual Migration Commands (run with Owner role using migration-cli):
- Migrate prod DB: `cargo run --bin migration-cli -- --env prod --db postgres up`
- Fresh prod DB: `cargo run --bin migration-cli -- --env prod --db postgres fresh`
- Fresh test DB: `cargo run --bin migration-cli -- --env test --db postgres fresh`
We use cargo-nextest as the primary test runner with sensible defaults.
Running tests:
- `pnpm be:test` - Run all tests (quiet by default)
- `pnpm be:test:v` - Verbose, with success output at the end
- `pnpm be:test:q` - Quiet mode with a final failure summary only
- `pnpm be:test:full` - Run all tests, including ignored tests
Targeted runs:
- Substring filter: `pnpm be:test -- login` (runs tests with "login" in the name)
- File/module filter: `pnpm be:test -- --test file_stem` (runs a specific test file)
- Expression filters: `pnpm be:test -- -E 'status(fail)'` (runs only failing tests)
- Preview what will run: `cargo nextest list --manifest-path apps/backend/Cargo.toml`
  - With filters: `cargo nextest list -E 'test(login)' --manifest-path apps/backend/Cargo.toml`
Verbosity knobs:
- `-q` / `-v` - Quiet/verbose output
- `--success-output=final` - Show success output at the end
- `--failure-output=final` - Show failure output at the end
- `--status-level` / `--final-status-level` - Control status display
- `--hide-progress-bar` - Disable the progress bar
- `--no-capture` - Don't capture output (serializes execution)
Opt-in logs:
- Add `test_support::logging::init()` inside tests that need logs
- Enable levels with `TEST_LOG=info|debug|trace`
- Example: `TEST_LOG=info pnpm be:test:v -- some_filter` (a matching test sketch follows)
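A sketch of what such a test looks like, assuming the `test_support::logging::init()` helper above and a tokio test runtime:

```rust
// Sketch only: opting one test into log output.
// Run with e.g. TEST_LOG=debug pnpm be:test -- logs_are_visible
#[tokio::test]
async fn logs_are_visible_when_opted_in() {
    test_support::logging::init(); // respects TEST_LOG=info|debug|trace

    tracing::info!("this line shows up when TEST_LOG is at least info");
    // ... arrange / act / assert ...
}
```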
Tests that hit the DB always use `TEST_DB` (guarded by the `_test` suffix).
`fe:test` is pending; it will be added with Vitest + Testing Library.
The frontend uses NextAuth v5 with Google OAuth for user authentication.
- Copy the frontend env file: `cp apps/frontend/.env.local.example apps/frontend/.env.local`
- Update the required variables:
  - `AUTH_GOOGLE_ID` & `AUTH_GOOGLE_SECRET`: Get from the Google Cloud Console
  - `BACKEND_BASE_URL`: Set to `http://localhost:3001` for local development
  - `APP_JWT_SECRET`: Already configured in the root `.env` (shared with the backend)
- Go to the Google Cloud Console
- Create OAuth 2.0 credentials for a web application
- Add the authorized redirect URI: `http://localhost:3000/api/auth/callback/google`
- Copy the Client ID and Client Secret into `apps/frontend/.env.local` as `AUTH_GOOGLE_ID` and `AUTH_GOOGLE_SECRET`
- Start the app: `pnpm start` (from the root)
- Sign in: Click "Sign in with Google" in the header
- Protected routes: `/dashboard` requires authentication
- Sign out: Click "Sign out" in the header when signed in

Protected route matchers:

- `/dashboard/:path*` - User dashboard (requires auth)
- `/api/private/:path*` - Private API endpoints (requires auth)
During early testing, you can restrict signup and login to a controlled set of email addresses using the ALLOWED_EMAILS environment variable.
Configuration:
- Set `ALLOWED_EMAILS` in your backend environment (not the frontend)
- Format: comma-separated list of email addresses or patterns
- Supports glob patterns: `*@example.com`, `user@*.example.com` (see the matching sketch after this list)
- Examples:
  - `user1@example.com,user2@example.com` - exact emails
  - `*@trusted.com` - all emails at trusted.com
  - `user1@example.com,*@trusted.com` - mix of exact and patterns
- If not set or empty, the allowlist is disabled (all emails allowed)
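As a rough sketch of how such patterns can be matched (hand-rolled single-wildcard matching; the backend's actual matcher may use different code or a crate):

```rust
// Sketch only: single-'*' glob matching for ALLOWED_EMAILS entries.
// The backend's real matcher may support more syntax or handle case differently.
fn email_allowed(allowlist: &str, email: &str) -> bool {
    allowlist
        .split(',')
        .map(str::trim)
        .filter(|p| !p.is_empty())
        .any(|pattern| matches_pattern(pattern, email))
}

fn matches_pattern(pattern: &str, email: &str) -> bool {
    match pattern.split_once('*') {
        // No wildcard: exact (case-insensitive) match.
        None => pattern.eq_ignore_ascii_case(email),
        // One wildcard: match the literal prefix and suffix around it.
        Some((prefix, suffix)) => {
            let email = email.to_ascii_lowercase();
            let (prefix, suffix) = (prefix.to_ascii_lowercase(), suffix.to_ascii_lowercase());
            email.len() >= prefix.len() + suffix.len()
                && email.starts_with(&prefix)
                && email.ends_with(&suffix)
        }
    }
}

// e.g. email_allowed("user1@example.com,*@trusted.com", "anyone@trusted.com") == true
```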
Behavior:
- When enabled, restricts both signup (new account creation) and login
- Existing sessions for non-allowlisted users are invalidated on next request after server restart
- Error messages are generic to prevent information leakage
Storage (backend only):
- Local dev: set in `apps/backend/.env` (or your shell environment)
- Prod Docker: set in `docker/prod/backend.env`
- Do not commit the actual allowlist to the public git repository
- Each `user_credentials.email` is unique and links to at most one Google account.
- On login (sketched below):
  - If `google_sub` is NULL, we set it to the incoming Google sub.
  - If `google_sub` is already set and equals the incoming sub, login succeeds and `last_login` is updated.
  - If `google_sub` is already set and differs from the incoming sub, the request fails with:
    - HTTP 409 CONFLICT
    - Problem Details `code=GOOGLE_SUB_MISMATCH`
- We never silently overwrite `google_sub`. This prevents unintended or malicious account re-linking.
- Logging:
  - INFO on first creation and when setting `google_sub` from NULL.
  - DEBUG on repeat logins.
  - WARN on mismatch.
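Expressed as a pure decision function, the rules above look roughly like this sketch (hypothetical types; the real service and entity code differ):

```rust
// Sketch only: the google_sub linking decision described above,
// with hypothetical types standing in for the real service/entity code.
enum LinkOutcome {
    LinkedFirstTime, // google_sub was NULL and is now set (INFO)
    RepeatLogin,     // google_sub matched the incoming sub (DEBUG, updates last_login)
}

enum LinkError {
    GoogleSubMismatch, // mapped to HTTP 409 + code=GOOGLE_SUB_MISMATCH (WARN)
}

fn resolve_google_link(stored_sub: Option<&str>, incoming_sub: &str) -> Result<LinkOutcome, LinkError> {
    match stored_sub {
        None => Ok(LinkOutcome::LinkedFirstTime),
        Some(s) if s == incoming_sub => Ok(LinkOutcome::RepeatLogin),
        Some(_) => Err(LinkError::GoogleSubMismatch), // never silently overwritten
    }
}
```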
- Frontend: Next.js (App Router) + Tailwind CSS, NextAuth v5 (Google login)
- Backend: Rust (Actix Web) + SeaORM 1.1.x, JWT validation
- Database: PostgreSQL 16 (Docker Compose, schema via SeaORM migrator)
- Workflow: pnpm workspaces, Docker-first, structured logs with trace IDs
See Architecture & Tech Stack for details.
The backend follows a three-layer architecture that separates concerns and keeps domain logic pure:
- Domain Layer (`apps/backend/src/domain/`)
  - Pure game logic with no database or framework dependencies
  - Types: `GameState`, `Phase`, `Card`, `Trump`, `Bid`
  - Functions: `place_bid()`, `play_card()`, `apply_round_scoring()`
  - Rule: No SeaORM, no Actix Web, no database imports
  - Purpose: Testable, reusable game logic independent of infrastructure (see the sketch below the example flow)
- Orchestration Layer (`apps/backend/src/services/`, `apps/backend/src/repos/`)
  - Coordinates domain logic with database operations
  - Repositories handle database access (SeaORM entities)
  - Services orchestrate multi-step operations (e.g., `GameFlowService`, `UserService`)
  - Rule: Can use SeaORM and domain types, but not Actix Web types
  - Purpose: Business workflows that require database state
- Routes Layer (`apps/backend/src/routes/`)
  - Thin HTTP adapters that extract request data and call orchestration
  - Uses extractors (`CurrentUser`, `GameId`, `ValidatedJson<T>`) for validation
  - Converts between HTTP types and domain/service types
  - Rule: Minimal logic; delegate to services/repos
  - Purpose: HTTP request/response handling
Example Flow:
HTTP Request → Route (extract & validate) → Service (orchestrate) → Domain (pure logic) → Repository (persist)
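To make the purity rule concrete, here is a self-contained sketch of the kind of function that lives in the domain layer (illustrative names only; the real types and signatures are in `apps/backend/src/domain/`):

```rust
// Illustrative only: a framework-free function of the kind the domain layer holds.
#[derive(Debug, PartialEq)]
pub struct Bid(pub u8);

#[derive(Debug, PartialEq)]
pub enum BidError {
    TooHigh { max: u8 },
}

/// Validate a bid against the number of cards in the current round.
/// No SeaORM, no Actix Web: values in, values out.
pub fn validate_bid(cards_this_round: u8, bid: Bid) -> Result<Bid, BidError> {
    if bid.0 > cards_this_round {
        Err(BidError::TooHigh { max: cards_this_round })
    } else {
        Ok(bid)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn rejects_bids_above_the_card_count() {
        assert_eq!(validate_bid(7, Bid(9)), Err(BidError::TooHigh { max: 7 }));
    }
}
```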
DTOs are request/response types used at the HTTP boundary. They serve as a contract between the frontend and backend.
When to Use DTOs:
- Request DTOs: For all POST/PATCH/PUT request bodies (e.g., `LoginRequest`, `BidRequest`)
- Response DTOs: For all JSON responses (e.g., `GameSnapshotResponse`, `LoginResponse`)
- Validation: Use the `ValidatedJson<T>` extractor to automatically convert JSON parse errors to Problem Details
DTO Policies:
- Separation from Domain: DTOs are separate from domain types (e.g., `BidRequest` vs the domain `Bid`)
- Optimistic Locking: Mutation request DTOs must include `version: i32` for concurrent update safety
- Serialization: Use `serde` derive macros; prefer `snake_case` for JSON field names
- Validation: Validate in routes using extractors; domain logic validates business rules
- Transformation: Convert between DTOs and domain types in the orchestration layer, not in routes
Example:
```rust
// Request DTO (HTTP boundary)
#[derive(Deserialize)]
struct BidRequest {
    bid: u8,
    version: i32, // Required for optimistic locking
}

// Domain type (pure logic)
pub struct Bid(pub u8);

// Route converts DTO → domain type → service
async fn submit_bid(
    body: ValidatedJson<BidRequest>, // Automatic JSON validation
    // ... extractors ...
) -> Result<HttpResponse, AppError> {
    let domain_bid = Bid(body.bid);
    service.place_bid(txn, game_id, seat, domain_bid, body.version).await?;
    // ...
}
```

Related docs:

- Architecture & Context: `docs/architecture-overview.md` (stack baseline), `docs/architecture-game-context.md` (request-scoped context model)
- Backend Operations: `docs/backend-error-handling.md` (RFC 7807 layers), `docs/backend-testing-guide.md` (DB harness & safeguards), `docs/backend-in-memory-game-engine.md` (AI simulation loop)
- Gameplay & AI: `docs/game-rules.md` (canonical rules), `docs/ai-player-implementation-guide.md` (production AI contract), `docs/game-snapshot-contract.md` (client payload shape)
- Frontend Experience: `docs/frontend-theme-system.md` (semantic theme tokens)
- WIP Scratchpad: `dev-roadmap.md` (UI roadmap & improvement log)
- Delivery & Planning: `docs/project-milestones.md` (milestone tracking & optional enhancements)
The backend uses optimistic locking with version in JSON request bodies for safe concurrent updates. ETags are used separately for HTTP cache validation on GET endpoints.
- Reading Resources: GET endpoints return both:
  - An `ETag` header for HTTP cache validation (`If-None-Match`)
  - A `version` field in the JSON response body for optimistic locking

  Example: `{ "snapshot": {...}, "version": 5 }` alongside `ETag: "game-123-v5"`

- Updating Resources: Mutation endpoints require `version` in the JSON request body, e.g. `{ "bid": 3, "version": 5 }`. The `version` must match the current resource version for the update to succeed.

- Conflict Detection: If the resource has been modified since the client last read it, the server returns `409 Conflict`:

  ```json
  {
    "type": "https://nommie.app/errors/OPTIMISTIC_LOCK",
    "title": "Optimistic Lock",
    "status": 409,
    "code": "OPTIMISTIC_LOCK",
    "detail": "Resource was modified concurrently (expected version 5, actual version 6). Please refresh and retry.",
    "trace_id": "abc123",
    "extensions": { "expected": 5, "actual": 6 }
  }
  ```

- Caching: GET endpoints support `If-None-Match` for HTTP caching (a handling sketch follows this list):
  - Single ETag: `If-None-Match: "game-123-v5"`
  - Multiple ETags: `If-None-Match: "game-123-v4", "game-123-v5"`
  - Wildcard: `If-None-Match: *` (matches any representation)

  If the client's ETag matches the current version, the server returns `304 Not Modified` with no body.
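A minimal sketch of that If-None-Match handling on a GET endpoint (not the project's actual handler; `etag_for` is a hypothetical helper mirroring the "game-123-v5" shape above):

```rust
use actix_web::{http::header, HttpRequest, HttpResponse};

// Hypothetical helper: format an ETag from the resource id and version.
fn etag_for(game_id: i64, version: i32) -> String {
    format!("\"game-{game_id}-v{version}\"")
}

fn respond_with_snapshot(
    req: &HttpRequest,
    game_id: i64,
    version: i32,
    body: serde_json::Value,
) -> HttpResponse {
    let current = etag_for(game_id, version);

    // If any client-supplied ETag matches (or the wildcard is used), skip the body.
    let matches = req
        .headers()
        .get(header::IF_NONE_MATCH)
        .and_then(|v| v.to_str().ok())
        .map(|raw| raw == "*" || raw.split(',').any(|tag| tag.trim() == current))
        .unwrap_or(false);

    if matches {
        HttpResponse::NotModified()
            .insert_header((header::ETAG, current))
            .finish()
    } else {
        HttpResponse::Ok()
            .insert_header((header::ETAG, current))
            .json(body)
    }
}
```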
For Developers Adding Mutation Endpoints:
Request DTOs must include a version field, which is used for optimistic locking:
```rust
#[derive(serde::Deserialize)]
struct UpdateGameRequest {
    // ... other fields ...
    version: i32,
}

async fn update_game(
    game_id: GameId,
    body: ValidatedJson<UpdateGameRequest>,
    // ... other params ...
) -> Result<HttpResponse, AppError> {
    // Use body.version when calling the repository
    let updated_game = game_service::update(
        txn,
        game_id.0,
        body.version,
        // ... other params ...
    ).await?;

    // Return the new ETag in the response (for GET caching only)
    Ok(HttpResponse::Ok()
        .insert_header((ETAG, game_etag(game_id.0, updated_game.version)))
        .json(result))
}
```

For GET endpoints, ETags are generated from the version but are only used for HTTP cache validation (`If-None-Match`).
Observability:
Optimistic lock conflicts are logged with structured fields for monitoring:
```rust
warn!(
    trace_id = %trace_id,
    expected = 5,
    actual = 6,
    "Optimistic lock conflict detected"
);
```

Use these logs to:
- Monitor conflict frequency (high rates may indicate UX issues)
- Correlate with specific operations or game states
- Debug race conditions in concurrent scenarios
Architecture Notes:
- Repositories and services choose connection types based on operation semantics:
  - Single reads and unrelated multi-reads: `ConnectionTrait` (accepts a pool or a transaction)
  - Related multi-reads (consistent snapshot): `DatabaseTransaction`
  - Any mutation: `DatabaseTransaction`
- All database operations go through `with_txn` or `require_db` for automatic transaction management (the sketch at the end of this section shows the underlying pattern)
- Error handling follows the RFC 7807 Problem Details format
- The schema lives in a single init migration under `apps/backend/migration/`
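A minimal sketch of the transaction pattern those helpers wrap, assuming SeaORM's `TransactionTrait` (the real `with_txn` / `require_db` signatures may differ):

```rust
// Illustrative only: the begin/commit pattern a with_txn-style helper manages.
use sea_orm::{DatabaseConnection, DatabaseTransaction, DbErr, TransactionTrait};

async fn place_bid_workflow(db: &DatabaseConnection) -> Result<(), DbErr> {
    // Begin a transaction; every repository call below takes &DatabaseTransaction.
    let txn: DatabaseTransaction = db.begin().await?;

    // ... repository reads/writes against &txn ...

    // Commit on success; dropping the transaction without commit rolls it back.
    txn.commit().await?;
    Ok(())
}
```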
Milestone-driven: setup → core game loop → AI → polish.
See Milestones.
Gameplay house rules.
See Game Rules.
MIT