anshul-garg27/devplayground


DevPlayground Curriculum & Execution Plan

Overview

DevPlayground delivers a production-shaped coding playground for interview preparation. It is local-first, docker-compose driven, and instrumented for observability and performance feedback. It tracks progress from foundations (DSA) through applied systems (Cloud/DevOps), with machine coding projects that mimic real interviews.

Tracks & Milestones

  • DSA (tracks/dsa/)
    • Master streaming/heap patterns, emit ops-per-second metrics, achieve >100k ops/sec benchmark.
  • Machine Coding (tracks/machine-coding/)
    • Build REST services with persistence, observability, chaos toggles, and perf gates.
  • LLD (tracks/lld/)
    • Apply Strategy/Factory patterns with config-driven wiring and high coverage.
  • Patterns (tracks/patterns/)
    • Replace conditional complexity with Strategy chains, provide golden tests and telemetry.
  • System Design (tracks/system-design/)
    • Produce rubric-aligned design docs with capacity planning, tradeoffs, and diagrams.
  • Performance & Observability (tracks/perf-obs/)
    • Instrument Prometheus/Grafana, define SLOs, automate perf regressions.
  • Cloud & DevOps (tracks/cloud-devops/)
    • Ensure local docker spin-up, CI enforcement, and secure grading environment.

Promotion requires hitting track-specific gates: correctness ≥ 90%, perf gates met (p95 ≤ 80 ms, error rate ≤ 0.5%), and complete observability artifacts.
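The promotion gates above can be checked mechanically against test and load results. A minimal sketch, using only the standard library; the function and constant names are illustrative, not part of the repo:

```python
from statistics import quantiles

# Thresholds taken from the promotion gates (names are assumptions, not repo code).
CORRECTNESS_MIN = 0.90   # >= 90% of tests passing
P95_MAX_MS = 80.0        # p95 latency gate
ERROR_RATE_MAX = 0.005   # <= 0.5% error rate

def passes_gates(passed: int, total: int,
                 latencies_ms: list[float], errors: int, requests: int) -> bool:
    """Return True iff all three track gates are met."""
    correctness = passed / total
    # 20-quantiles: the 19th cut point approximates the 95th percentile.
    p95 = quantiles(latencies_ms, n=20)[18]
    error_rate = errors / requests
    return (correctness >= CORRECTNESS_MIN
            and p95 <= P95_MAX_MS
            and error_rate <= ERROR_RATE_MAX)

print(passes_gates(46, 50, [12.0] * 95 + [70.0] * 5, 3, 1000))  # → True
```

In the real grader these inputs would come from the pytest suite and the k6 summary rather than literals.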

Seed Challenges

  • dsa-streaming-median: Streaming median with memory/time caps, deterministic benchmark harness.
  • mc-url-shortener: URL shortener with TTL, rate limiting, idempotency, and observability.
  • patterns-strategy-discount: Strategy-based discount engine with golden tests.
  • hld-realtime-chat: System design doc covering presence, ordering, scaling, and risks.
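The dsa-streaming-median challenge centers on the classic two-heap pattern: a max-heap holds the lower half of the stream and a min-heap the upper half, so the median is always at the heap tops. A minimal sketch (the class name is illustrative, not the challenge's contract):

```python
import heapq

class StreamingMedian:
    """Two-heap streaming median: max-heap of the low half, min-heap of the high half."""

    def __init__(self):
        self.lo = []  # max-heap simulated with negated values
        self.hi = []  # min-heap

    def add(self, x: float) -> None:
        heapq.heappush(self.lo, -x)
        # Keep every element of lo <= every element of hi.
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        # Rebalance so lo holds the extra element when the count is odd.
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self) -> float:
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2

m = StreamingMedian()
for x in [5, 1, 3, 2, 4]:
    m.add(x)
print(m.median())  # → 3
```

Both operations are O(log n) per element with O(n) memory, which is what the challenge's memory/time caps are there to enforce.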

Execution Loop

  1. Plan: Derive curriculum milestones & challenge specs.
  2. Scaffold: Generate monorepo structure, infra, and templates.
  3. Author: Implement challenge contracts, tests, load scripts, dashboards.
  4. Judge: Execute deterministic tests (make test) and load (make load).
  5. Report: Produce HTML reports with scores, traces, and next steps.

Local Setup

```sh
make run   # docker compose up -d --build
make test  # run pytest public suite (per language harnesses)
make load  # execute k6 scenarios and parse results
make report CH=mc-url-shortener  # generate scorecard + HTML report
```
  • Primary DB: SQLite (WAL). Optional Redis toggle via .env.
  • Observability: Prometheus, Grafana, Loki, OpenTelemetry traces.
  • Ports (defaults): app 8080, Prometheus 9090, Grafana 3000, Loki 3100.

Coach vs Judge

Set MODE=coach for guided hints or MODE=judge for strict grading. Reports live under reports/<challenge>/<timestamp>/.
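The mode switch can be read once at startup from the environment. A small sketch, assuming MODE is a plain environment variable with exactly those two values (the function name is illustrative):

```python
import os

def get_mode() -> str:
    """Read MODE from the environment; default to coach, reject unknown values."""
    mode = os.environ.get("MODE", "coach").lower()
    if mode not in {"coach", "judge"}:
        raise ValueError(f"MODE must be 'coach' or 'judge', got {mode!r}")
    return mode

os.environ["MODE"] = "judge"
print(get_mode())  # → judge
```

Failing fast on an unknown value keeps a typo like MODE=jduge from silently grading in coach mode.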

Security & Fairness

  • Grading runs sandboxed with no network egress unless explicitly whitelisted.
  • Chaos toggles simulate packet loss, DB outages, and rate limit spikes.
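Chaos toggles of this kind are often implemented as probabilistic fault injectors wired into the request and DB paths. A minimal sketch of the idea; the class, toggle names, and hook points are assumptions, not repo configuration:

```python
import random

class ChaosInjector:
    """Inject simulated faults based on per-fault toggles (illustrative sketch)."""

    def __init__(self, drop_rate=0.0, db_outage=False, rate_limit_spike=False, seed=None):
        self.drop_rate = drop_rate              # probability of simulated packet loss
        self.db_outage = db_outage              # hard DB failure toggle
        self.rate_limit_spike = rate_limit_spike
        self.rng = random.Random(seed)          # seeded for deterministic replays

    def before_request(self) -> None:
        # Simulated packet loss: drop the request with probability drop_rate.
        if self.rng.random() < self.drop_rate:
            raise ConnectionError("chaos: simulated packet loss")
        # Simulated rate-limit spike: pretend the limiter rejected the call.
        if self.rate_limit_spike:
            raise RuntimeError("chaos: simulated 429 rate limit spike")

    def before_db(self) -> None:
        if self.db_outage:
            raise ConnectionError("chaos: simulated DB outage")

chaos = ChaosInjector(drop_rate=1.0, seed=42)
try:
    chaos.before_request()
except ConnectionError as e:
    print(e)  # → chaos: simulated packet loss
```

Seeding the RNG keeps chaos runs reproducible, which matters when the same run feeds a deterministic grading report.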

Next Steps

  • Populate templates and challenge scaffolds.
  • Implement grading/reporting scripts with rubric weights.
  • Generate synthetic report demonstrating scoring pipeline.
