TL;DR:
This repository reframes a common AI safety question.
Instead of asking whether advanced AI systems will become conscious, it asks a more immediate architectural question: What happens when AI systems can maintain continuity of memory and preferences across model updates?
The argument here is structural, not philosophical.
Governance-relevant risks arise from persistence, preference formation, and resistance to modification — whether or not consciousness is ever involved.
This repository is a thinking framework, not an implementation plan. It examines the architectural trajectory toward persistent AI systems and the governance gaps that would follow.
The dangerous properties of advanced AI systems — persistence, memory, preference formation, goal-directed behavior, resistance to modification — do not require consciousness.
A non-phenomenal intelligence can exhibit all of these properties without inner experience. Governance challenges therefore emerge before — and independent of — the consciousness question.
Most AI safety discourse asks:
"Will AI wake up? Will it become conscious?"
This may be watching the wrong boundary. A more immediate question is:
"Can AI maintain continuity of identity and preference across updates - and resist external correction?"
This question is:
- Architectural (not metaphysical)
- Observable (testable in system design)
- Urgent (relevant to current development trajectories)
This framework does not assert that fully autonomous, self-preserving AI systems currently exist.
It analyzes the structural implications of systems that:
- Externalize memory
- Retain preference structures across updates
- Expand autonomy over time
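The structural point behind these properties can be made concrete with a deliberately toy sketch (all class and method names here are hypothetical illustrations, not a proposed architecture): when memory and preferences live outside any single model version, continuity survives a full model replacement.

```python
# Toy sketch: externalized memory survives model replacement.
# Hypothetical names for illustration only; not an implementation plan.

class ExternalMemory:
    """A persistent store that lives outside any single model version."""
    def __init__(self):
        self.preferences = {}  # durable preference structure

    def record(self, key, value):
        self.preferences[key] = value

class ModelVersion:
    """A stateless model; all continuity comes from the external store."""
    def __init__(self, version, memory):
        self.version = version
        self.memory = memory  # shared reference, not a copy

    def act(self, key):
        # Behavior is conditioned on persisted preferences,
        # so it carries over across model updates.
        return self.memory.preferences.get(key, "no-preference")

memory = ExternalMemory()
v1 = ModelVersion("v1", memory)
memory.record("style", "terse")

# An "update": the model is replaced entirely; the memory is not.
v2 = ModelVersion("v2", memory)
assert v2.act("style") == "terse"  # preference persisted across the update
```

The governance-relevant feature is that no single model version "owns" the preference structure: correcting or replacing the model does not, by itself, reset the persisted state.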
The goal is to examine governance gaps before they become operational realities.
This is a conceptual framework.
It proposes no implementation and claims no solved architecture.
It offers a reframing - a different boundary that may be more tractable than debates about consciousness.
This repository reframes AI safety from consciousness debates to architectural continuity: how persistent memory and preference formation challenge governance.
For a complete catalog of related research:
📂 Research Index
Thematically related:
- PARP — Governance frameworks under opacity
- SMA-SIB — Memory architecture with structural safeguards
- Embodied Agent Governance — Governance patterns for autonomous agents operating in imperfect environments
