I research runtime architectures for AI-enabled systems.
My work focuses on what happens after inference: how intelligent systems
are executed, constrained, validated, observed, and governed over time.
I do not study intelligence as a model property, a prompt interaction,
or a standalone capability.
I study intelligence as an executable phenomenon embedded in
long-running software systems.
The systems I work on are defined by explicit constraints:
- execution is authoritative
- state transitions are explicit and inspectable
- side effects are governed, not implicit
- inference is separated from control
- responsibility accumulates over time
I am interested in systems that do not merely produce outputs,
but that remain correct, explainable, and governable as they evolve,
scale, and fail.
ICE is the research environment where this work is formalized and tested.
ICE explores a single, central question:
What does it mean to reliably run intelligent systems over time?
Here, intelligence and cognition are not treated as synonyms.
- Intelligent systems act toward goals under constraints.
- Cognitive systems persist behavior across time, accumulate state,
validate decisions, and govern their own evolution during execution.
ICE studies the intersection of these two dimensions.
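
One way to make this distinction concrete is at the interface level. The sketch below is a hypothetical illustration, not an ICE definition: the intelligent dimension is a mapping from observations to goal-directed actions, while the cognitive dimension adds persistence, validation, and self-governance as obligations that exist during execution.

```python
from typing import Protocol

# Hypothetical interfaces; the names and methods are assumptions for illustration.

class IntelligentSystem(Protocol):
    """Acts toward goals under constraints: observation in, action out."""
    def act(self, observation: dict) -> dict: ...

class CognitiveSystem(IntelligentSystem, Protocol):
    """Adds the execution-time dimension on top of goal-directed action."""
    def persist(self) -> dict: ...                         # accumulate state across runs
    def validate(self, decision: dict) -> bool: ...        # check decisions before they take effect
    def govern(self, proposed_change: dict) -> bool: ...   # approve or reject changes to itself
```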
Intelligence is treated as something that is run, not invoked.
Authority does not live in models or agents, but in execution substrates.
As AI systems increasingly operate as infrastructure,
implicit control becomes a liability.
ICE exists to study architectures where control, responsibility,
and observability remain explicit by construction.
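
As a rough sketch of what "explicit by construction" can mean: a runtime that is the only issuer of the capability an effect requires, so every effectful step necessarily passes through an audited path. The names and the capability mechanism below are assumptions for illustration; Python only approximates the guarantee a stricter type system could enforce.

```python
from typing import Callable

class Capability:
    """Issued only by the runtime; governed effects require one as their first argument."""
    def __init__(self, issued_by: "GovernedRuntime"):
        self.issued_by = issued_by

class GovernedRuntime:
    """Routes every effectful step, so the audit trail exists by construction."""
    def __init__(self) -> None:
        self.audit: list[str] = []

    def run(self, step: Callable[..., object], *args: object) -> object:
        cap = Capability(issued_by=self)
        self.audit.append(f"run:{step.__name__}")  # observability is structural, not optional
        return step(cap, *args)

def write_record(cap: Capability, store: dict, key: str, value: str) -> None:
    # An effect that cannot be expressed without a capability from the runtime.
    store[key] = value

runtime = GovernedRuntime()
store: dict = {}
runtime.run(write_record, store, "status", "ready")
print(runtime.audit)  # ['run:write_record']
```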
