Interactive proof of concept for the Informed Connection Doctrine. What ethical human-AI interaction looks like when you actually build it.
Updated Feb 25, 2026 · HTML
Calibrates LLM responses for high-agency power users, prioritizing rigorous analysis and executive control. Overrides the default RLHF-driven tendency toward immediate, ungrounded answers that serve lower-agency "LLM as magic tool" workflows.