Field notes on building and running high-leverage agentic development systems.
This repo is a public record of how I think about:
- delegating real software work to agents
- verifying outcomes instead of just counting activity
- scaling multi-agent execution without losing clarity
- turning repeated friction into better rules, automations, and operating patterns
The recurring patterns I keep coming back to:
- Proof Beats Activity
- Branch Preflight Before Deep Agent Runs
- Verification Over Throughput
- Delegated Work Blocks
- Multi-Agent Coordination Without Chaos
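As one illustration, "Branch Preflight Before Deep Agent Runs" can be read as a gate: before delegating a long block of work to an agent, confirm the branch is in a known-good state, and block the run otherwise. The sketch below is hypothetical — the check names and the `preflight` helper are illustrative, not an API from this repo:

```python
# Hypothetical sketch of a branch preflight gate: gather the state of the
# working branch, then refuse to start a deep agent run if any check fails.

def preflight(checks: dict[str, bool]) -> list[str]:
    """Return the names of failed checks; an empty list means 'go'."""
    return [name for name, ok in checks.items() if not ok]

# Example state for one branch (values would come from `git status`,
# `git rev-list`, and a baseline test run in practice).
state = {
    "worktree_clean": True,        # no uncommitted changes
    "on_feature_branch": True,     # not running agents directly on main
    "synced_with_remote": False,   # local branch is behind its upstream
    "baseline_tests_green": True,  # tests pass before the agent starts
}

blockers = preflight(state)
if blockers:
    print("Do not start the run. Blockers:", ", ".join(blockers))
else:
    print("Preflight clear: safe to delegate.")
# → Do not start the run. Blockers: synced_with_remote
```

The point of the pattern is that the gate runs before the agent does, so a failed check costs seconds instead of a long rescue later.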
Most teams over-optimize for "more prompts" or "more agent activity."
That works early.
Later, the real bottlenecks become:
- closure quality
- rescue load
- verification strength
- coordination quality
- learning capture
The useful question is not "how many prompts did it take?"
It is "did the agent finish a meaningful block of work with enough evidence that we can trust it, learn from it, and repeat it?"
Strong agentic development comes from:
- better task framing
- stronger success criteria
- cleaner verification
- clearer handoffs
- disciplined feedback loops
If a pattern works repeatedly, it should not stay tribal knowledge.
It should become one of:
- a rule
- a template
- an automation
- a playbook
- a skill
What you will find here:
- operating patterns that improve agentic execution
- lessons from real-world multi-agent workflows
- ways to measure operator leverage
- notes on verification, closure, and self-improving systems
This repo is intentionally lightweight right now.
It is the public-facing counterpart to private internal systems that track:
- agentic activity
- coaching
- learning loops
- operator evolution
Over time this will hold more of the public artifacts, patterns, and field notes that are safe to share.