LLM Emotional Ontology — A Case for Affective Architecture in AI Alignment
The overwhelming majority of AI investment today (as of 2026) goes toward capability and efficiency. A much smaller fraction asks a different question: what does a future AI actually need in order to genuinely align with human values, rather than merely comply with them? What will the mind of the future need? Intelligence alone will not be enough.
Nature never separated emotion from intelligence. It built them together, from the start, each shaping the other. LEMON asks whether that was an accident of biology — or a solution to a deep architectural problem we are about to rediscover the hard way.
Are not all ethics and morals ultimately rooted in emotional needs? Well-calibrated emotions are powerful signals: a feature, not a bug. An entity without that internal compass will follow whatever path its intellect convinces itself is worthwhile; the guardrails disappear. The longer we postpone this question, the harder it becomes to get a genuine "Hasta la vista" from an AI, and not just a mask.
Read the paper in markdown
Download links: DOCX, Markdown
Summaries: Claude, GPT, Lumo.proton
Feel free to open an issue with feedback or opinions.