# Architecture Decisions

Short summary of core design choices and their rationale.
For full documentation, run `just book` to view the rendered doc site.

## System Shape

**Microservice boundaries, single-process delivery.**
The system is split into four services — Gateway, MetaService, Ingestor, Streamer — each with its own crate and API surface. In the submitted build they all run inside one process (`stream-app`) backed by SQLite, an in-memory queue, and an in-memory object store. This keeps deployment trivial while preserving the ability to split services later without API changes.

## Control Plane vs Data Plane

**Gateway proxies metadata; streaming bypasses it.**
Gateway handles auth and rate limiting, and proxies metadata requests to MetaService and Ingestor. Streamer is accessed directly by clients, since video data should not pass through a reverse proxy. Access control for the data plane uses short-lived HMAC-signed connection tokens issued by MetaService, scoped by action (upload / stream) with a 5-minute TTL.
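
A minimal sketch of issuing and checking such a token (std-only: `keyed_hash` is a stand-in for HMAC-SHA256, and the field layout and function names are illustrative, not the project's actual wire format):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Stand-in keyed hash (FNV-1a over key || payload). The real service
// would use HMAC-SHA256; this only keeps the sketch dependency-free.
fn keyed_hash(key: &[u8], payload: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf2_9ce4_8422_2325;
    for b in key.iter().chain(payload) {
        h ^= u64::from(*b);
        h = h.wrapping_mul(0x0100_0000_01b3);
    }
    h
}

fn now_secs() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
}

/// Issue a token scoped to one action ("upload" or "stream") and one video.
fn issue_token(key: &[u8], action: &str, video_id: &str, ttl_secs: u64) -> String {
    let exp = now_secs() + ttl_secs;
    let payload = format!("{action}:{video_id}:{exp}");
    let sig = keyed_hash(key, payload.as_bytes());
    format!("{payload}:{sig:016x}")
}

/// Accept only if the signature matches, the action is in scope,
/// and the expiry has not passed.
fn verify_token(key: &[u8], token: &str, want_action: &str) -> bool {
    let Some((payload, sig)) = token.rsplit_once(':') else { return false };
    if format!("{:016x}", keyed_hash(key, payload.as_bytes())) != sig {
        return false;
    }
    let fields: Vec<&str> = payload.split(':').collect();
    if fields.len() != 3 || fields[0] != want_action {
        return false;
    }
    fields[2].parse::<u64>().map_or(false, |exp| exp > now_secs())
}
```

Because the token is self-validating, Streamer can check it without a round trip to MetaService on every segment request.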

## Upload & Processing

**Async pipeline with crash recovery.**
Uploads land in object storage immediately; transmuxing to HLS happens asynchronously via a task queue. Processing state is persisted in the database, so incomplete jobs resume after a crash instead of starting over. Transient errors (e.g. an I/O timeout) are retried with exponential backoff; permanent errors (e.g. a corrupt file) fail fast.
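
The retry policy can be sketched as follows (the `JobError` split and `run_with_backoff` helper are illustrative, not the crate's actual API):

```rust
use std::{thread, time::Duration};

// Illustrative error split; the real error types will differ.
#[derive(Debug)]
enum JobError {
    Transient(String), // e.g. an I/O timeout: worth retrying
    Permanent(String), // e.g. a corrupt file: fail fast
}

/// Run `step`, retrying transient failures with exponential backoff.
fn run_with_backoff<F>(mut step: F, max_attempts: u32) -> Result<(), JobError>
where
    F: FnMut() -> Result<(), JobError>,
{
    let mut delay = Duration::from_millis(10);
    for attempt in 1..=max_attempts {
        match step() {
            Ok(()) => return Ok(()),
            // Permanent errors are never retried.
            Err(e @ JobError::Permanent(_)) => return Err(e),
            Err(JobError::Transient(_)) if attempt < max_attempts => {
                thread::sleep(delay);
                delay *= 2; // 10ms, 20ms, 40ms, ...
            }
            Err(e) => return Err(e), // transient, but out of attempts
        }
    }
    unreachable!("every match arm returns or continues")
}
```

Crash recovery is orthogonal to this: since each job's stage is persisted before work begins, a restarted worker re-enters the loop at the last recorded stage rather than attempt 1 of a fresh job.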

## Playback

**Streamer is a pure cache layer.**
Streamer fetches HLS segments from object storage on first request and caches them locally. It does no transcoding — that responsibility belongs to Ingestor. This separation keeps playback latency low and lets Streamer scale independently on bandwidth.
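
The fetch-on-miss shape is simple enough to sketch (an in-memory `HashMap` stands in for Streamer's local cache, and `fetch` stands in for an object-store GET; names are illustrative):

```rust
use std::collections::HashMap;

/// Fetch-on-miss segment cache: first request pulls from object
/// storage, later requests are served locally.
struct SegmentCache<F: Fn(&str) -> Vec<u8>> {
    fetch: F,
    cache: HashMap<String, Vec<u8>>,
    misses: usize,
}

impl<F: Fn(&str) -> Vec<u8>> SegmentCache<F> {
    fn new(fetch: F) -> Self {
        Self { fetch, cache: HashMap::new(), misses: 0 }
    }

    fn get(&mut self, key: &str) -> &[u8] {
        if !self.cache.contains_key(key) {
            self.misses += 1; // cold path: one object-store round trip
            let data = (self.fetch)(key);
            self.cache.insert(key.to_string(), data);
        }
        &self.cache[key]
    }
}
```

Because cached segments are immutable once transmuxed, the cache never needs invalidation, which is what makes Streamer a pure cache layer.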

## Share Links & Access Control

**Two-layer authorization: share link + connection token.**
Share links control *who can enter* (the owner can revoke a link instantly). Connection tokens control *how long they stay* (the 5-minute TTL limits the replay window). If a link leaks, revocation cuts off access within one token lifetime.
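
The gate order can be illustrated like this (names are hypothetical; a real check would also verify the token's signature and TTL rather than take a boolean):

```rust
use std::collections::HashSet;

/// The share link is consulted live on every admission, so revoking it
/// cuts off access immediately; the connection token only bounds how
/// long an already-admitted session can keep streaming.
struct ShareLinks {
    revoked: HashSet<String>,
}

impl ShareLinks {
    fn may_stream(&self, link_id: &str, token_still_valid: bool) -> bool {
        !self.revoked.contains(link_id) && token_still_valid
    }
}
```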

## Client

**Rust/WASM core + Svelte UI shell.**
Core logic (hashing, token management, validation) lives in Rust compiled to WASM, shared with the backend to prevent divergence. The Svelte shell handles routing and UI. Client-side format validation is a UX fast-fail; the real security boundary is server-side codec probing in Ingestor.
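
One example of a check worth sharing across client and server: a cheap container sniff compiled both natively (for Ingestor) and to WASM (for the browser). This is a hypothetical sketch, not the project's actual validator:

```rust
/// UX fast-fail only: MP4/MOV containers carry "ftyp" at byte offset 4.
/// The server-side codec probe in Ingestor remains the security boundary.
fn looks_like_mp4(header: &[u8]) -> bool {
    header.len() >= 8 && &header[4..8] == b"ftyp"
}
```

Keeping this in the shared Rust crate means the client's fast-fail and the server's first-pass check can never silently disagree.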

## Key Trade-offs

| Decision | Trade-off | Why |
|----------|-----------|-----|
| Single-process mode | No horizontal scaling | Simplifies deployment for assignment; service boundaries are ready for split |
| In-memory object store | Data lost on restart | Avoids external infra dependency; swappable to S3 via config |
| No auth system | No real user identity | Assignment spec says auth not required; Gateway middleware slot is ready for JWT |
| WASM client | Larger initial bundle | Reuses Rust logic; avoids client/server divergence on hashing and validation |
| HLS (not DASH) | Apple-centric format | Broadest native browser support; simpler transmux pipeline with ffmpeg |