
Commit 7d17e7e

Merge pull request #251 from rararulab/issue-250-arch-summary
docs: add architecture decision summary for interview (#250)
2 parents 6370a32 + e95a025 commit 7d17e7e

1 file changed: +44 -0

ARCHITECTURE_DECISIONS.md

Lines changed: 44 additions & 0 deletions
# Architecture Decisions

Short summary of core design choices and their rationale.
For full documentation, run `just book` to view the rendered doc site.

## System Shape

**Microservice boundaries, single-process delivery.**
The system is split into four services — Gateway, MetaService, Ingestor, Streamer — each with its own crate and API surface. In the submitted build they all run inside one process (`stream-app`) backed by SQLite, an in-memory queue, and an in-memory object store. This keeps deployment trivial while preserving the ability to split services later without API changes.
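As a sketch of how that composition might look (the trait and type names below are hypothetical, not the real crate APIs; the point is that each service keeps its own boundary, so single-process wiring can later become network wiring without touching any service's surface):

```rust
// Hypothetical sketch: each service implements a common trait, so the
// single-process binary wires them together directly.
trait Service {
    fn name(&self) -> &'static str;
}

struct Gateway;
struct MetaService;
struct Ingestor;
struct Streamer;

impl Service for Gateway     { fn name(&self) -> &'static str { "gateway" } }
impl Service for MetaService { fn name(&self) -> &'static str { "meta" } }
impl Service for Ingestor    { fn name(&self) -> &'static str { "ingestor" } }
impl Service for Streamer    { fn name(&self) -> &'static str { "streamer" } }

fn main() {
    // Single-process delivery: all four services run in one binary,
    // backed by SQLite, an in-memory queue, and an in-memory object store.
    let services: Vec<Box<dyn Service>> = vec![
        Box::new(Gateway),
        Box::new(MetaService),
        Box::new(Ingestor),
        Box::new(Streamer),
    ];
    for s in &services {
        println!("starting service: {}", s.name());
    }
}
```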

## Control Plane vs Data Plane

**Gateway proxies metadata; streaming bypasses it.**
Gateway handles auth and rate limiting and proxies metadata requests to MetaService and Ingestor. Clients reach Streamer directly, since video data should not pass through a reverse proxy. Access control for the data plane uses short-lived HMAC-signed connection tokens issued by MetaService, scoped by action (upload / stream) with a 5-minute TTL.
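A minimal sketch of the token scheme, assuming a payload of action, video id, and expiry. The keyed hash below is a toy stand-in for HMAC-SHA256 so the example runs without external crates; a real build would use the `hmac`/`sha2` crates, and all names here are illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy keyed hash standing in for HMAC-SHA256. NOT cryptographically secure.
fn sign(secret: &str, payload: &str) -> u64 {
    let mut h = DefaultHasher::new();
    secret.hash(&mut h);
    payload.hash(&mut h);
    h.finish()
}

pub struct ConnectionToken {
    pub action: String,  // "upload" or "stream"
    pub video_id: String,
    pub expires_at: u64, // unix seconds
    pub sig: u64,
}

/// MetaService issues a token scoped to one action, with a 5-minute TTL.
pub fn issue(secret: &str, action: &str, video_id: &str, now: u64) -> ConnectionToken {
    let expires_at = now + 300; // 5-minute TTL
    let payload = format!("{action}:{video_id}:{expires_at}");
    ConnectionToken {
        action: action.to_string(),
        video_id: video_id.to_string(),
        expires_at,
        sig: sign(secret, &payload),
    }
}

/// The data plane verifies the signature, the action scope, and the TTL.
pub fn verify(secret: &str, t: &ConnectionToken, want_action: &str, now: u64) -> bool {
    let payload = format!("{}:{}:{}", t.action, t.video_id, t.expires_at);
    sign(secret, &payload) == t.sig && t.action == want_action && now < t.expires_at
}

fn main() {
    let t = issue("server-secret", "stream", "video-42", 1_000);
    assert!(verify("server-secret", &t, "stream", 1_100));  // valid
    assert!(!verify("server-secret", &t, "upload", 1_100)); // wrong scope
    assert!(!verify("server-secret", &t, "stream", 1_400)); // past the TTL
    println!("token checks passed");
}
```

Scoping by action means a leaked stream token cannot be replayed to upload, and vice versa.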

## Upload & Processing

**Async pipeline with crash recovery.**
Uploads land in object storage immediately; transmux to HLS happens asynchronously via a task queue. Processing state is persisted in the database so incomplete jobs resume after a crash instead of starting over. Transient errors (e.g. an I/O timeout) retry with exponential backoff; permanent errors (e.g. a corrupt file) fail fast.
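The retry policy can be sketched like this (the backoff cap and attempt limit are illustrative assumptions, not the pipeline's actual configuration):

```rust
use std::time::Duration;

#[derive(Debug, PartialEq)]
pub enum JobError {
    Transient(String), // e.g. an I/O timeout: worth retrying
    Permanent(String), // e.g. a corrupt file: fail fast
}

/// Exponential backoff: 1s, 2s, 4s, ... capped at 64s.
pub fn backoff(attempt: u32) -> Duration {
    Duration::from_secs(1u64 << attempt.min(6))
}

/// Only transient errors retry, and only up to a bounded attempt count.
pub fn should_retry(err: &JobError, attempt: u32, max_attempts: u32) -> bool {
    matches!(err, JobError::Transient(_)) && attempt < max_attempts
}

fn main() {
    assert_eq!(backoff(0), Duration::from_secs(1));
    assert_eq!(backoff(3), Duration::from_secs(8));
    assert_eq!(backoff(99), Duration::from_secs(64)); // capped
    assert!(should_retry(&JobError::Transient("io timeout".into()), 1, 5));
    assert!(!should_retry(&JobError::Permanent("corrupt file".into()), 1, 5));
    println!("retry policy checks passed");
}
```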
20+
21+
## Playback
22+
23+
**Streamer is a pure cache layer.**
24+
Streamer fetches HLS segments from object storage on first request and caches them locally. It does no transcoding — that responsibility belongs to Ingestor. This separation keeps playback latency low and lets Streamer scale independently on bandwidth.
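A read-through cache of this shape might look like the following sketch, with a hypothetical `ObjectStore` trait standing in for the real storage API:

```rust
use std::collections::HashMap;

// Illustrative storage abstraction; production would be S3-like.
pub trait ObjectStore {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
}

/// In-memory store used here only for demonstration.
pub struct InMemStore(pub HashMap<String, Vec<u8>>);

impl ObjectStore for InMemStore {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
}

pub struct SegmentCache<S: ObjectStore> {
    store: S,
    cache: HashMap<String, Vec<u8>>,
}

impl<S: ObjectStore> SegmentCache<S> {
    pub fn new(store: S) -> Self {
        Self { store, cache: HashMap::new() }
    }

    /// On a miss, fetch from object storage and keep a local copy.
    /// No transcoding happens here; that belongs to Ingestor.
    pub fn get(&mut self, key: &str) -> Option<Vec<u8>> {
        if let Some(seg) = self.cache.get(key) {
            return Some(seg.clone()); // hit: no object-store round trip
        }
        let seg = self.store.get(key)?;
        self.cache.insert(key.to_string(), seg.clone());
        Some(seg)
    }
}

fn main() {
    let mut backing = HashMap::new();
    backing.insert("v42/seg0.ts".to_string(), vec![1, 2, 3]);
    let mut cache = SegmentCache::new(InMemStore(backing));
    assert_eq!(cache.get("v42/seg0.ts"), Some(vec![1, 2, 3])); // miss, then cached
    assert_eq!(cache.get("v42/seg0.ts"), Some(vec![1, 2, 3])); // served from cache
    assert_eq!(cache.get("missing.ts"), None);
    println!("cache checks passed");
}
```

Because the cache holds only immutable HLS segments, there is no invalidation problem; eviction policy is the only tuning knob.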

## Share Links & Access Control

**Two-layer authorization: share link + connection token.**
Share links control *who can enter* (owner can revoke instantly). Connection tokens control *how long they stay* (5-min TTL limits replay window). If a link leaks, revocation cuts off access within one token lifetime.
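Put together, the two layers reduce to two cheap checks; a minimal sketch with illustrative names:

```rust
use std::collections::HashSet;

/// Layer 1: the share link must still exist (the owner can revoke it
/// instantly). Layer 2: the connection token must be inside its TTL.
pub fn authorize(
    active_links: &HashSet<String>,
    link_id: &str,
    token_expires_at: u64, // unix seconds
    now: u64,
) -> bool {
    active_links.contains(link_id) && now < token_expires_at
}

fn main() {
    let mut links = HashSet::new();
    links.insert("share-abc".to_string());
    assert!(authorize(&links, "share-abc", 1_300, 1_000));  // both layers pass
    assert!(!authorize(&links, "share-abc", 1_300, 1_400)); // token expired
    links.remove("share-abc"); // owner revokes the link
    assert!(!authorize(&links, "share-abc", 1_300, 1_000)); // cut off
    println!("authorization checks passed");
}
```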
30+
31+
## Client
32+
33+
**Rust/WASM core + Svelte UI shell.**
34+
Core logic (hashing, token management, validation) lives in Rust compiled to WASM, shared with the backend to prevent divergence. The Svelte shell handles routing and UI. Client-side format validation is a UX fast-fail; the real security boundary is server-side codec probing in Ingestor.
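A shared validation helper might look like this sketch; the function name and extension list are illustrative, and on the client the same function would be exported via e.g. wasm-bindgen:

```rust
/// UX fast-fail only: the real security boundary is server-side codec
/// probing in Ingestor. Compiling this once and sharing it between the
/// backend and the WASM client prevents the two checks from diverging.
pub fn has_supported_extension(filename: &str) -> bool {
    const SUPPORTED: [&str; 3] = ["mp4", "mov", "mkv"];
    match filename.rsplit_once('.') {
        Some((_, ext)) => SUPPORTED.contains(&ext.to_ascii_lowercase().as_str()),
        None => false, // no extension at all
    }
}

fn main() {
    assert!(has_supported_extension("clip.MP4"));
    assert!(has_supported_extension("holiday.mov"));
    assert!(!has_supported_extension("notes.txt"));
    assert!(!has_supported_extension("noextension"));
    println!("validation checks passed");
}
```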
35+
36+
## Key Trade-offs
37+
38+
| Decision | Trade-off | Why |
39+
|----------|-----------|-----|
40+
| Single-process mode | No horizontal scaling | Simplifies deployment for assignment; service boundaries are ready for split |
41+
| In-memory object store | Data lost on restart | Avoids external infra dependency; swappable to S3 via config |
42+
| No auth system | No real user identity | Assignment spec says auth not required; Gateway middleware slot is ready for JWT |
43+
| WASM client | Larger initial bundle | Reuses Rust logic; avoids client/server divergence on hashing and validation |
44+
| HLS (not DASH) | Apple-centric format | Broadest native browser support; simpler transmux pipeline with ffmpeg |
