$ identify dhwanil_mori
NAME Dhwanil Mori
ROLE AI Systems Researcher · Founder @ RAIN
SCHOOL MS Data Science · George Washington University · May 2026
BASE Virginia, USA
STATUS [ ████████████████████░░░░ ] building
$ cat mission.txt
Most people ship a prompt. I ship a fleet.
I build AI systems that coordinate — not just respond. My work sits at the intersection of multi-agent architecture, failure taxonomy, and decision-aware reasoning. If a system can't fail gracefully, it isn't production-ready.
The question I keep asking: how do autonomous agents break, and how do we know before it matters?
$ git log --research --oneline
f9a3c2e proved     → coordination emerges in LLM agents without explicit messaging
b7d1e80 found      → cascade failures spike when 3+ agents share a dependency chain
3c6f912 reproduced → silent hallucination under context overflow — no error thrown
a2b8d47 observed   → role ambiguity in system prompts is the #1 cause of agent drift
9e4c031 confirmed  → distribution shift breaks convergence even in stable agent fleets
7f5a1b3 open       → why do agents over-coordinate after a single failure event?
running experiments in El Farol-style environments — agents coordinate without communication. emergent behaviour is where the real failure modes hide.
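A toy version of that environment (filename and parameters hypothetical, and simplified to one fixed predictor per agent rather than a pool of strategies): every agent extrapolates from its own window of the shared attendance history, nobody exchanges a message, and attendance still settles near capacity.
$ cat el_farol_sketch.py
# el_farol_sketch.py (hypothetical name): a toy El Farol-style run, not the lab code
import random

N_AGENTS, CAPACITY, ROUNDS = 100, 60, 200

class Agent:
    """Predicts next attendance from its own slice of history; no messaging."""
    def __init__(self):
        self.window = random.randint(1, 10)  # each agent keys on a different horizon

    def will_attend(self, history):
        if not history:
            return random.random() < 0.5
        recent = history[-self.window:]
        return sum(recent) / len(recent) <= CAPACITY  # go only if it looks uncrowded

agents = [Agent() for _ in range(N_AGENTS)]
history = []
for _ in range(ROUNDS):
    history.append(sum(a.will_attend(history) for a in agents))

# attendance self-organizes near CAPACITY without any agent-to-agent messages
print(f"mean attendance, last 50 rounds: {sum(history[-50:]) / 50:.1f}")
The heterogeneous windows are doing the work: give every agent the same predictor and they all decide identically, so attendance just oscillates between full and empty.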
$ curl dhwanil.ai/beliefs
{
"on_ai_systems": "Design for failure first. Most breakdowns are predictable.",
"on_agents": "A single model is a tool. A coordinated fleet is a system.",
"on_evals": "If you can't measure it breaking, you can't trust it working.",
"on_open_source": "The best way to learn how something fails is to let everyone use it.",
"on_research": "Theory without deployment is incomplete. Ship the proof."
}
$ cat projects.txt
Deploy a coordinated team of AI agents across 12 LLM providers simultaneously. Each agent has a role, a model, and a job. Watch them think and synthesize — live.
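The shape of it, sketched (the dataclass fields come from the description above; the provider call is a stub, not a real SDK):
$ cat agent_fleet_sketch.py
# agent_fleet_sketch.py (hypothetical): fan-out across providers, synthesis elided
import asyncio
from dataclasses import dataclass

@dataclass
class Agent:
    role: str    # what the agent is responsible for
    model: str   # which provider/model backs it
    job: str     # the concrete task it runs this round

async def call_model(model: str, prompt: str) -> str:
    """Stub standing in for a provider client; swap in a real SDK call here."""
    await asyncio.sleep(0.1)  # simulate network latency
    return f"[{model}] draft for: {prompt}"

async def run_fleet(agents: list[Agent], task: str) -> list[str]:
    # fan out: every agent works the task through its own model, concurrently
    return await asyncio.gather(
        *(call_model(a.model, f"As the {a.role}, {a.job}. Task: {task}") for a in agents)
    )

fleet = [
    Agent("planner", "provider-a/model-x", "decompose the problem"),
    Agent("critic",  "provider-b/model-y", "attack the plan's failure modes"),
]
print(asyncio.run(run_fleet(fleet, "draft a rollout plan")))  # drafts, pre-synthesis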
Decision intelligence for retail — inventory management and supplier risk scoring using hybrid retrieval and structured LLM reasoning on real operational data.
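One standard way to do the hybrid step is reciprocal rank fusion over a keyword ranking and a vector ranking. The doc IDs and the choice of RRF here are illustrative, not the production pipeline:
$ cat hybrid_rrf_sketch.py
# hybrid_rrf_sketch.py (hypothetical): reciprocal rank fusion over two rankings
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked doc-id lists: score(d) = sum over lists of 1 / (k + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["po-118", "supplier-42", "invoice-9"]   # e.g. BM25 over documents
vector_hits  = ["supplier-42", "contract-7", "po-118"]  # e.g. embedding similarity
print(rrf([keyword_hits, vector_hits]))  # docs strong in both lists rise to the top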
Research into why LLM agent systems fail. Using El Farol-style environments where agents must coordinate without communication — emergent behaviour reveals the breakdown points.
GPU-accelerated model orchestration on university HPC infrastructure. Slurm-managed parallel inference, secure pipelines, and reproducible evaluation at scale.
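What one array task of that setup can look like (script name and eval set are placeholders; SLURM_ARRAY_TASK_ID and SLURM_ARRAY_TASK_COUNT are variables Slurm exports to each task):
$ cat shard_worker.py
# shard_worker.py (hypothetical): one task of a Slurm job array, launched via e.g.
#   sbatch --array=0-7 run.sh   (where run.sh just invokes this script)
import os

prompts = [f"eval case {i}" for i in range(1000)]  # stand-in for the real eval set

# Slurm sets these per array task; the defaults let the script run standalone too
task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", 0))
n_tasks = int(os.environ.get("SLURM_ARRAY_TASK_COUNT", 1))

shard = prompts[task_id::n_tasks]  # stride-sharding keeps shard sizes near-equal
for prompt in shard:
    pass  # run inference here; write results keyed by task_id for reproducibility

print(f"task {task_id}/{n_tasks}: processed {len(shard)} prompts")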
MS Data Science @ GWU · building in public · open to research collabs


