DevPlayground delivers a production-shaped coding playground for interview preparation. It is local-first, driven by docker-compose, and instrumented for observability and performance feedback. It tracks progress from foundations (DSA) through applied systems (Cloud/DevOps), with machine-coding projects that mimic real interviews.
- DSA (`tracks/dsa/`): Master streaming/heap patterns, emit ops-per-second metrics, and achieve the >100k ops/sec benchmark.
- Machine Coding (`tracks/machine-coding/`): Build REST services with persistence, observability, chaos toggles, and perf gates.
- LLD (`tracks/lld/`): Apply Strategy/Factory patterns with config-driven wiring and high coverage.
- Patterns (`tracks/patterns/`): Replace conditional complexity with Strategy chains; provide golden tests and telemetry.
- System Design (`tracks/system-design/`): Produce rubric-aligned design docs with capacity planning, tradeoffs, and diagrams.
- Performance & Observability (`tracks/perf-obs/`): Instrument Prometheus/Grafana, define SLOs, and automate perf regression checks.
- Cloud & DevOps (`tracks/cloud-devops/`): Ensure local Docker spin-up, CI enforcement, and a secure grading environment.
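The streaming/heap pattern central to the DSA track can be sketched with the classic two-heap running median (a minimal illustration only; the `StreamingMedian` name and interface are hypothetical, not the challenge's actual harness):

```python
import heapq

class StreamingMedian:
    """Two-heap streaming median: a max-heap holds the lower half,
    a min-heap the upper half. O(log n) insert, O(1) median query."""

    def __init__(self):
        self._lo = []  # max-heap simulated with negated values
        self._hi = []  # min-heap

    def add(self, x: float) -> None:
        # Push into the lower half, then promote its max to the upper half.
        heapq.heappush(self._lo, -x)
        heapq.heappush(self._hi, -heapq.heappop(self._lo))
        # Rebalance so len(lo) is equal to or one more than len(hi).
        if len(self._hi) > len(self._lo):
            heapq.heappush(self._lo, -heapq.heappop(self._hi))

    def median(self) -> float:
        if len(self._lo) > len(self._hi):
            return float(-self._lo[0])
        return (-self._lo[0] + self._hi[0]) / 2

m = StreamingMedian()
for v in [5, 1, 9, 3]:
    m.add(v)
print(m.median())  # 4.0
```

The same two-heap shape underpins most "top-k / running statistic" challenges; benchmarking it per-op is what the ops-per-second metric measures.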
Promotion requires hitting track-specific gates: correctness ≥ 90%, perf gates met (p95 ≤ 80 ms, error rate ≤ 0.5%), and complete observability artifacts.
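As a rough illustration of how such a gate might be evaluated, here is a sketch using nearest-rank percentiles (the `p95` and `gates_pass` helpers are hypothetical; the real grading scripts own the actual logic and thresholds):

```python
import math

def p95(latencies_ms):
    """Nearest-rank 95th percentile of a latency sample (ms)."""
    s = sorted(latencies_ms)
    idx = math.ceil(0.95 * len(s)) - 1  # 1-indexed rank -> 0-indexed
    return s[idx]

def gates_pass(latencies_ms, errors, total):
    """Mirror the promotion thresholds: p95 <= 80 ms, error rate <= 0.5%."""
    return p95(latencies_ms) <= 80 and (errors / total) <= 0.005

sample = [20] * 94 + [120] * 6   # 6% of requests are slow
print(p95(sample))               # 120 -> the latency gate fails
```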
- `dsa-streaming-median`: Streaming median with memory/time caps and a deterministic benchmark harness.
- `mc-url-shortener`: URL shortener with TTL, rate limiting, idempotency, and observability.
- `patterns-strategy-discount`: Strategy-based discount engine with golden tests.
- `hld-realtime-chat`: System design doc covering presence, ordering, scaling, and risks.
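To give a flavor of the `patterns-strategy-discount` idea, swapping conditionals for a strategy registry, a minimal sketch might look like this (the codes, amounts, and registry shape are invented for illustration, not the challenge contract):

```python
from typing import Callable

# A discount strategy maps an order total to a discounted total.
Strategy = Callable[[float], float]

def percent_off(pct: float) -> Strategy:
    return lambda total: total * (1 - pct / 100)

def flat_off(amount: float) -> Strategy:
    return lambda total: max(0.0, total - amount)

# Config-driven wiring: codes map to strategies, no if/elif chains.
STRATEGIES: dict[str, Strategy] = {
    "SUMMER10": percent_off(10),
    "WELCOME5": flat_off(5.0),
}

def apply_discount(code: str, total: float) -> float:
    # Unknown codes fall through to the identity strategy.
    return STRATEGIES.get(code, lambda t: t)(total)

print(apply_discount("SUMMER10", 200.0))  # 180.0
print(apply_discount("NOPE", 200.0))      # 200.0
```

Golden tests then pin each (code, total) pair to its expected output, so refactoring the registry cannot silently change pricing.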
- Plan: Derive curriculum milestones & challenge specs.
- Scaffold: Generate monorepo structure, infra, and templates.
- Author: Implement challenge contracts, tests, load scripts, dashboards.
- Judge: Execute deterministic tests (`make test`) and load scenarios (`make load`).
- Report: Produce HTML reports with scores, traces, and next steps.
```sh
make run                         # docker compose up -d --build
make test                        # run pytest public suite (per-language harnesses)
make load                        # execute k6 scenarios and parse results
make report CH=mc-url-shortener  # generate scorecard + HTML report
```

- Primary DB: SQLite (WAL). Optional Redis toggle via `.env`.
- Observability: Prometheus, Grafana, Loki, OpenTelemetry traces.
- Ports (defaults): app `8080`, Prometheus `9090`, Grafana `3000`, Loki `3100`.
Set `MODE=coach` for guided hints or `MODE=judge` for strict grading. Reports live under `reports/<challenge>/<timestamp>/`.
- Grading runs sandboxed with no network egress unless explicitly whitelisted.
- Chaos toggles simulate packet loss, DB outages, and rate limit spikes.
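One lightweight way such chaos toggles are commonly wired is with environment flags checked inside a handler wrapper. This is a hedged sketch, not the playground's actual mechanism; the `CHAOS_*` variable names and probabilities are assumptions:

```python
import os
import random
import time

def chaos_enabled(name: str) -> bool:
    """Hypothetical env-driven toggle, e.g. CHAOS_PACKET_LOSS=1."""
    return os.environ.get(f"CHAOS_{name}", "0") == "1"

def with_chaos(handler):
    """Wrap a request handler; inject failures/latency when toggles are on."""
    def wrapped(*args, **kwargs):
        if chaos_enabled("PACKET_LOSS") and random.random() < 0.1:
            raise ConnectionError("chaos: simulated packet loss")
        if chaos_enabled("DB_OUTAGE"):
            time.sleep(0.05)  # simulate a slow, degraded dependency
        return handler(*args, **kwargs)
    return wrapped

@with_chaos
def ping():
    return "pong"

print(ping())  # "pong" when no chaos toggles are set
```

Keeping chaos behind env flags means the same service code runs clean in CI and degraded under load tests, with no code changes between modes.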
- Populate templates and challenge scaffolds.
- Implement grading/reporting scripts with rubric weights.
- Generate synthetic report demonstrating scoring pipeline.
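A weighted-rubric scorer of the kind described could be sketched as follows (the weights and dimension names below are placeholders, not the project's real rubric):

```python
# Hypothetical rubric weights; the real weights live in the grading scripts.
WEIGHTS = {"correctness": 0.5, "performance": 0.3, "observability": 0.2}

def score(card: dict[str, float]) -> float:
    """Weighted rubric score in [0, 100] from per-dimension scores."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(WEIGHTS[k] * card[k] for k in WEIGHTS)

print(score({"correctness": 92, "performance": 85, "observability": 70}))
```

Emitting the per-dimension scores alongside the weighted total keeps the HTML report explainable: a candidate can see exactly which gate cost them points.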