Making AI image evolution visible.
Hi — and welcome 👋
This is a small prototype exploring a simple idea:
What if images could carry a visible, human-readable history of how they’ve been edited?
AEM stands for AI Edit Mark.
AEM Protocol is a prototype for tracking how AI-generated images evolve over time.
Each image carries:
- a visible state (AI·0 → AI·9, EXT, X)
- a hidden watermark
- a signed manifest
Together, these form a lightweight provenance system.
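Concretely, a manifest might look like the sketch below. Every field name here is an illustrative assumption, not the project's actual schema:

```javascript
// Illustrative shape of an AEM manifest (field names are assumptions, not the
// project's real schema). The signature covers the canonical manifest bytes.
const manifest = {
  assetId: "demo-asset-1",                                 // stable identifier
  state: "AI·1",                                           // visible state label
  imageHash: "3a7f…",                                      // SHA-256 linking manifest to pixels
  edits: [{ op: "brighten", at: "2024-01-01T00:00:00Z" }], // verified edit history
  signature: "MEUCIQ…"                                     // over the canonical manifest
};
```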
If you're opening the demo for the first time:
- Click “New generated demo image → AI·0”
- Apply an edit (e.g. Brighten → AI·1)
- Click “Download package”
- Switch to Verifier mode and load the file
You should see:
- Verified: AI·1
Try the tamper test to see it break → X
AEM uses a simple, visible system:
| State | Meaning |
|---|---|
| AI·0–9 | AI origin + number of verified edits |
| EXT | External origin (no AI claim) |
| X | Broken or unverifiable provenance |
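The table above can be read as a small mapping function. This sketch assumes hypothetical manifest fields (`verified`, `origin`, `editCount`); the project's real shape may differ:

```javascript
// Sketch: derive the visible AEM label from a manifest, per the table above.
// Field names are illustrative assumptions, not the real API.
function aemLabel(manifest) {
  if (!manifest || !manifest.verified) return "X";   // broken / unverifiable
  if (manifest.origin === "external") return "EXT";  // external origin, no AI claim
  return "AI·" + Math.min(manifest.editCount, 9);    // AI·0 … AI·9 (cap is an assumption)
}
```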
┌────────────────────────────┐
│ Editor (UI) │
│ (ui.js) │
└────────────┬───────────────┘
│
▼
┌────────────────────────────┐
│ Canonical Manifest Layer │
│ (manifest.js) │
└────────────┬───────────────┘
│
▼
┌────────────────────────────┐
│ Watermark / Image Layer │
│ (watermark.js) │
└────────────┬───────────────┘
│
▼
┌────────────────────────────┐
│ Storage / Transport │
│ │
│ Core: │
│ - image (PNG) │
│ - manifest (JSON) │
│ │
│ Optional bundle: │
│ - aem_package.json │
└────────────┬───────────────┘
│
▼
┌────────────────────────────┐
│ Verifier │
└────────────────────────────┘
The canonical manifest is signed. Everything else is derived.
The manifest is the source of truth.
The image, watermark, and UI are derived from it and must remain outside the signed boundary to avoid circular dependencies.
Trust lives in the manifest, not in the pixels.
The image shows the result. The manifest proves how it got there.
AEM separates:
- what users see
- what systems can verify
The demo lets you:
- generate an image → AI·0
- apply trusted edits → AI·1, AI·2, …
- upload external images → EXT
- simulate broken trust → X
- export a package
- verify it
- test tampering
The demo uses a bundled file:
aem_package.json
This contains:
- the image (as data URL)
- the manifest
- optional metadata
However:
AEM Protocol itself is transport-agnostic.
The core system is:
- image + manifest
The package format is just a convenience layer.
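As a sketch, the bundle might look like this (field names are assumptions based on the contents listed above):

```javascript
// Illustrative shape of the bundled demo package. A convenience layer only;
// field names are assumptions, not the actual export format.
const aemPackage = {
  image: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...", // image as data URL
  manifest: { state: "AI·1" },                              // signed canonical manifest
  meta: { exportedBy: "aem-demo" }                          // optional metadata
};
```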
AEM Protocol is transport-agnostic and aligns naturally with object storage systems (e.g. S3-compatible APIs).
In such setups:
- images are stored as objects (e.g. image.png)
- manifests are stored as separate JSON objects
- a small set of AEM fields may be stored as object metadata
- image ↔ manifest linkage is verified via hashes
Example (conceptual):
images/<asset_id>.png
manifests/<asset_id>.json
Optional metadata on the image object:
- aem-asset-id
- aem-state
- aem-manifest-hash
- aem-manifest-url
The bundled aem_package.json used in this demo is a convenience format for export/import, not a requirement of the protocol.
This project is a prototype and has important limitations.
- signing keys are stored in the browser
- no external trust authority
- users control their own signing identity
- simple pixel encoding (LSB-style)
- not robust against compression, resizing, or adversarial edits
- AI·0 is created locally
- no cryptographic proof from a generator
- detects simple tampering
- not hardened against determined attackers
- recently modularized
- some cross-module dependencies remain fragile
- no automated tests
- no key management
- no identity layer
- no revocation
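The "LSB-style" pixel encoding noted in the limitations above can be sketched in a few lines, which also shows why it is fragile: each payload bit lives in the least significant bit of a channel value, so any operation that changes those values (compression, resizing) destroys it.

```javascript
// Sketch of LSB-style embedding (illustrative, not the actual watermark.js code):
// each bit replaces the least significant bit of one pixel channel value.
function embedBits(channels, bits) {
  return channels.map((v, i) => (i < bits.length ? (v & 0xfe) | bits[i] : v));
}

function extractBits(channels, count) {
  return channels.slice(0, count).map((v) => v & 1);
}
```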
AEM Protocol is intended for:
- experimentation
- design exploration
- discussion
It is not a production-ready provenance system.
AI content currently lacks:
- visible edit history
- clear origin signals
- understandable trust indicators
AEM explores a simple idea:
Can provenance be made visible, not just verifiable?
This prototype currently uses a bundled export format (aem_package.json) for simplicity.
However, AEM Protocol is designed to be transport-agnostic and aligns naturally with object storage systems (e.g. S3-compatible APIs).
In a more realistic integration:
- images are stored as objects (e.g. image.png)
- manifests are stored as separate JSON objects
- a small set of AEM fields may be stored as object metadata
- image ↔ manifest linkage is verified via hashes
This avoids:
- large bundled files
- base64-encoded images
- tight coupling between storage and verification
And enables:
- scalable storage
- API-based workflows
- integration with AI platforms and marketplaces
images/<asset_id>.png
manifests/<asset_id>.json
With optional metadata:
- aem-asset-id
- aem-state
- aem-manifest-hash
- aem-manifest-url
The current package format remains useful for:
- demos
- testing
- single-file export/import
But it is not required by the protocol.
This project is intentionally small, but a few improvements are planned:
- reduce implicit cross-module dependencies
- make module boundaries clearer
- add a simple guided flow (“Generate → Edit → Verify”)
- improve labeling (package vs manifest)
- support image + manifest as separate inputs
- treat aem_package.json as a demo-only format
- clarify roles (generator, editor, verifier)
- expand integration examples
The goal is not to build a full system, but to explore a clear and usable model for AI provenance.
This project started as a small curiosity about how AI images evolve.
If you found it interesting or useful, or it sparked an idea, you're welcome.
No expectations — just appreciated 🙏
MIT — use at your own risk.