Computer Architecture · GPUs · Accelerators · Systems @ MIT
Optimizing kernels, building accelerators, and designing systems that push hardware to its limits.
Languages: C · C++ · Python · CUDA · JavaScript · TypeScript · Verilog · Assembly
GPU / ML: CUDA · Triton · PyTorch · TensorFlow
Hardware: Verilog / SystemVerilog · RISC-V · FPGA
Systems: Linux · Git · Compilers (MLIR/LLVM) · ROS 2 · Docker
GPU-optimized Transformer inference runtime featuring fused CUDA kernels, auto-tuning, and low-latency compute paths.
RISC-V Vector CPU and quantitative finance accelerator with custom ISA extensions, pipelined execution units, and simulation tooling.
AI-powered Fantasy Premier League (FPL) optimization platform with ML predictions, squad optimization, and an intelligent copilot.
MIT - Electrical Engineering with Computing and Mathematics
- Research: Undergraduate Researcher @ MIT AeroAstro, MIT Siegel Family Quest for Intelligence
- Teaching: Teaching Assistant - Classical Mechanics @ MIT Physics Department
Current Focus:
- Reinforcement learning for swarm dynamics
- GPU kernel optimization with CUDA and Triton
- Compiler infrastructure (MLIR/LLVM)
- Autonomous robotics and sensor fusion
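The kernel-fusion work listed above comes down to one idea: touch memory once instead of twice. A minimal CPU-only sketch in plain NumPy/Python (function names are illustrative stand-ins, not code from any project above):

```python
import numpy as np

def saxpy_unfused(a, x, y):
    # Two separate passes over memory: tmp = a*x is materialized,
    # then read back to compute tmp + y.
    tmp = a * x
    return tmp + y

def saxpy_fused(a, x, y):
    # One pass: each element is read once and written once,
    # the same memory-traffic win a fused GPU kernel targets.
    out = np.empty_like(x)
    for i in range(x.size):
        out[i] = a * x[i] + y[i]
    return out

x = np.arange(4, dtype=np.float32)
y = np.ones(4, dtype=np.float32)
assert np.allclose(saxpy_unfused(2.0, x, y), saxpy_fused(2.0, x, y))
```

On a GPU, the unfused version also pays two kernel launches and a round trip through global memory for `tmp`; fusing eliminates both, which is why it dominates elementwise-heavy inference workloads.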
Always building. Always profiling. Always optimizing.