Competitor: Directus
Signal: Directus v11.16.1 (March 10, 2026) fixed "AI tool approval flow hanging after approving provider-executed tools." This is a regression in their AI assistant: when a user approved an AI action, the UI hung. It indicates their AI tool approval UX is new, fragile, and still being stabilised.
Context: Directus is shipping an AI assistant with tool use: the AI can perform actions (create items, update fields, trigger flows), and the user approves each action before execution. This is a genuine product investment in agentic CMS workflows. But the approval flow hanging, with bugs across multiple releases, signals they're building this from scratch with all the attendant pain.
What this reveals: Directus's AI model is: AI suggests → human approves → action executes. It's a "human-in-the-loop" assistant model. This is fundamentally different from Numen's pipeline model (autonomous AI with configurable personas), but Directus is clearly targeting the same "AI does CMS work" use case. Their audience: admin users who want to manage content via natural language.
The gap Numen can exploit: Directus has per-action approvals with hanging bugs. Numen has structured pipeline stages with review checkpoints — more predictable, auditable, and reliable. But we don't surface this as a competitive advantage anywhere in the UX. Users don't see "here's every AI decision and why."
Proposed Feature: AI Action Audit Log & Explainability Panel
What it is: A structured, queryable log of every AI decision made in the Numen content pipeline — what action was taken, which persona made it, what model was used, what the input was, what was changed, and why. Surfaced as a per-content timeline and a global AI audit dashboard.
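To make the record concrete, one possible shape for such a log entry, plus the per-content "AI History" view, could look like this. All names here (`AiActionLogEntry`, `buildTimeline`, the individual fields) are illustrative assumptions, not Numen's actual schema:

```typescript
// Sketch of a single AI audit log entry, covering the dimensions described
// above: what action, which persona, which model, input, change, and why.
interface AiActionLogEntry {
  id: string;
  contentId: string;      // the content item this action touched
  personaId: string;      // which configured persona acted
  model: string;          // which model executed the step
  actionType: "generate" | "edit" | "review";
  inputSummary: string;   // what the persona was given
  outputSummary: string;  // what it produced or changed
  rationale: string;      // the "why", surfaced in the explainability panel
  timestamp: number;      // epoch milliseconds
}

// Per-content "AI History" timeline: every entry for one item, oldest first.
function buildTimeline(log: AiActionLogEntry[], contentId: string): AiActionLogEntry[] {
  return log
    .filter((e) => e.contentId === contentId)
    .sort((a, b) => a.timestamp - b.timestamp);
}
```

The same entries can back both the per-content timeline and the global dashboard; only the filtering differs.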
Why it matters:
- Trust & transparency: Content teams hesitate to adopt AI when they can't see what changed and why. An audit trail removes the black-box fear.
- Competitive differentiator: Directus's approval flow is reactive (approve before action). Numen's audit log is retrospective and richer (full provenance of every generated word).
- Compliance-ready: Enterprise customers need AI audit trails for content governance, brand safety, and editorial accountability.
- Debug & improve: When AI-generated content misses the mark, the audit log shows exactly which persona/model/prompt caused it, enabling systematic improvement.
Implementation scope:
- Per-content "AI History" timeline tab in the content editor
- Global AI Audit Dashboard: filterable by persona, model, date range, content type, action type
- "Explain this" button on any AI-generated field, showing the exact prompt used and the model's reasoning
- Export to CSV/JSON for compliance
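The dashboard filtering and CSV export listed above could be sketched roughly as follows; `filterAudit`, `toCsv`, and the row shape are assumptions for illustration, not Numen's actual API:

```typescript
// Hypothetical dashboard-side helpers: filter audit entries, then export them.
type AuditFilter = { personaId?: string; model?: string; from?: number; to?: number };

interface AuditRow {
  personaId: string;
  model: string;
  actionType: string;
  timestamp: number; // epoch milliseconds
}

// Each filter dimension is optional; unset dimensions match everything.
function filterAudit(rows: AuditRow[], f: AuditFilter): AuditRow[] {
  return rows.filter((r) =>
    (f.personaId === undefined || r.personaId === f.personaId) &&
    (f.model === undefined || r.model === f.model) &&
    (f.from === undefined || r.timestamp >= f.from) &&
    (f.to === undefined || r.timestamp <= f.to)
  );
}

// Minimal CSV export for compliance hand-off (no escaping, for brevity;
// a real exporter would quote fields per RFC 4180).
function toCsv(rows: AuditRow[]): string {
  const header = "personaId,model,actionType,timestamp";
  const lines = rows.map((r) => `${r.personaId},${r.model},${r.actionType},${r.timestamp}`);
  return [header, ...lines].join("\n");
}
```

JSON export falls out for free (`JSON.stringify(rows)`), which is why the bullet lists both formats.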
Competitor context: Directus AI tool approval hangs (v11.16.1 bugfix). Strapi/Payload have no AI audit capability. Ghost has no AI at all. Numen can own "enterprise-grade AI transparency" before anyone else even thinks about it.
Priority: HIGH 🔴
As AI content generation scales, audit trails become mandatory — not optional. This is both a competitive weapon and a prerequisite for enterprise sales. The Directus approval-flow bug is a gift: their users are already frustrated with AI UX. We should be the obvious better alternative.
Implementation scope (data model): an `ai_action_logs` table capturing `pipeline_run_id`, `persona_id`, `model_used`, `step`, `action_type`, `input_summary`, `output_summary`, `tokens_used`, `duration_ms`, and `metadata`.
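One way the pipeline could populate such a table is a thin wrapper around each AI step that captures timing and token usage as the step runs. The wrapper (`logAiAction`) and its context shape are hypothetical; the column names follow the `ai_action_logs` sketch above:

```typescript
// Row shape mirroring the ai_action_logs columns listed above.
interface AiActionLogRow {
  pipeline_run_id: string;
  persona_id: string;
  model_used: string;
  step: string;
  action_type: string;
  input_summary: string;
  output_summary: string;
  tokens_used: number;
  duration_ms: number;
  metadata: Record<string, unknown>;
}

// Wrap a pipeline step: execute it, time it, and emit an audit row.
function logAiAction(
  ctx: { pipelineRunId: string; personaId: string; model: string; step: string; actionType: string },
  input: string,
  run: () => { output: string; tokensUsed: number }
): AiActionLogRow {
  const start = Date.now();
  const { output, tokensUsed } = run(); // execute the actual AI step
  return {
    pipeline_run_id: ctx.pipelineRunId,
    persona_id: ctx.personaId,
    model_used: ctx.model,
    step: ctx.step,
    action_type: ctx.actionType,
    input_summary: input.slice(0, 200),   // truncate summaries for storage
    output_summary: output.slice(0, 200),
    tokens_used: tokensUsed,
    duration_ms: Date.now() - start,
    metadata: {},
  };
}
```

Because every step passes through one choke point, the audit trail stays complete even as new personas and models are added.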