
# @aaia/eu-ai-act-classifier

Open-source EU Artificial Intelligence Act risk classification engine.

Most AI teams are building without knowing whether their system is prohibited, high-risk, or merely subject to a transparency obligation. The EU AI Act's first obligations have been enforceable since 2 February 2025; the deadline for high-risk AI systems is 2 August 2026.

This library does one thing: given a description of an AI system, it returns a structured risk classification, the mandatory articles that apply, the compliance gaps that exist, and a prioritised action list — in under 100 milliseconds, with no API calls, no external dependencies, no vendor lock-in.

Built and maintained by the AAIA Trinity STAR Ecosystem — the AI compliance infrastructure layer for DACH enterprises.


## What it covers

| Regulation | Articles | Status |
| --- | --- | --- |
| EU AI Act (Regulation (EU) 2024/1689) | Article 5 — Prohibited practices | ✅ Full coverage (7 prohibited practices) |
| EU AI Act | Article 6 + Annex III — High-risk categories | ✅ All 8 categories, 16 subcategories |
| EU AI Act | Articles 51–55 — GPAI model obligations | ✅ Standard + systemic risk tier |
| EU AI Act | Article 50 — Limited risk / transparency | ✅ Chatbots, deepfakes, synthetic content |
| DACH-specific | BaFin, BSI, FMA, FINMA alignment | ✅ Germany, Austria, Switzerland considerations |

## Installation

```shell
npm install @aaia/eu-ai-act-classifier
```

Or clone and run examples directly:

```shell
git clone https://github.com/cesaranogilbert/aaia-eu-ai-act-classifier.git
cd aaia-eu-ai-act-classifier
npm install
npx ts-node --esm examples/basic.ts
```

## Usage

```typescript
import { classify } from "@aaia/eu-ai-act-classifier";

const result = classify({
  name: "TalentScore Pro",
  description:
    "An AI system that performs automated CV screening, candidate ranking, " +
    "and preliminary interview scoring for corporate recruitment.",
  useCase: "Employment and recruitment automation",
  keywords: ["CV screening", "candidate ranking", "hiring algorithm"],
  jurisdictions: ["DE", "AT"],
});

console.log(result.riskTier);       // "HIGH_RISK"
console.log(result.riskLabel);      // "HIGH RISK — Annex III | Full compliance required before deployment"
console.log(result.confidence);     // 0.88

// Which articles apply?
result.applicableArticles.forEach((a) =>
  console.log(`${a.article}: ${a.title} (deadline: ${a.deadline})`)
);

// What do I need to do?
result.recommendedActions.forEach((action) => console.log(action));

// DACH-specific notes?
result.dachConsiderations?.forEach((c) => console.log(c));
```

## API

```typescript
classify(input: AISystemInput): ClassificationResult
```

### Input

```typescript
interface AISystemInput {
  name: string;           // System name or identifier
  description: string;    // What does this AI system do?
  useCase: string;        // Primary application domain
  targetUsers?: string;   // Who deploys or interacts with it?
  keywords?: string[];    // Optional — adds precision to classification
  isGPAI?: boolean;       // Is this a general-purpose AI / foundation model?
  trainingFLOPs?: number; // Training compute (systemic risk threshold: 1e25)
  jurisdictions?: ("EU" | "DE" | "AT" | "CH")[];
  isPublicAuthority?: boolean;
}
```

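The `trainingFLOPs` field drives the GPAI systemic-risk presumption (Article 51(2): training compute above 10^25 FLOPs). A minimal standalone sketch of that threshold check — the helper name `presumedSystemicRisk` is illustrative, not part of the library's API:

```typescript
// Hypothetical helper mirroring the systemic-risk compute presumption:
// a GPAI model is presumed systemic when training compute exceeds 10^25 FLOPs.
function presumedSystemicRisk(trainingFLOPs?: number): boolean {
  return trainingFLOPs !== undefined && trainingFLOPs > 1e25;
}

console.log(presumedSystemicRisk(3e25)); // true (above the 10^25 threshold)
console.log(presumedSystemicRisk(5e24)); // false
```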
### Output

```typescript
interface ClassificationResult {
  riskTier: "PROHIBITED" | "HIGH_RISK" | "GPAI_SYSTEMIC" | "GPAI_STANDARD" | "LIMITED_RISK" | "MINIMAL_RISK";
  riskLabel: string;            // Human-readable label with article reference
  confidence: number;           // 0–1 classification confidence score
  prohibitionMatched?: { ... }; // Article 5 match detail (if PROHIBITED)
  highRiskCategoryMatched?: { ... }[]; // Annex III match detail (if HIGH_RISK)
  gpaiSystemicRisk?: boolean;   // Whether systemic risk threshold is met
  applicableArticles: { article, title, description, deadline }[];
  complianceGaps: { article, title, severity, deadline, actionRequired }[];
  recommendedActions: string[]; // Prioritised by severity then deadline
  enforcementTimeline: { phase, date, requirement }[];
  dachConsiderations?: string[]; // DE / AT / CH-specific regulatory notes
  matchDetails: { ... };         // Raw keyword evidence (for auditability)
}
```

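The "severity then deadline" ordering used for `recommendedActions` can also be applied to `complianceGaps` directly, e.g. for a custom dashboard. A self-contained sketch with a trimmed local `Gap` type — the type, severity labels, and sample data are illustrative, not the library's actual output shape:

```typescript
// Trimmed, hypothetical stand-in for complianceGaps entries.
interface Gap {
  article: string;
  severity: "CRITICAL" | "HIGH" | "MEDIUM";
  deadline: string; // ISO date
}

const severityRank: Record<Gap["severity"], number> = {
  CRITICAL: 0,
  HIGH: 1,
  MEDIUM: 2,
};

// Sort by severity first, then by earliest deadline.
function prioritise(gaps: Gap[]): Gap[] {
  return [...gaps].sort(
    (a, b) =>
      severityRank[a.severity] - severityRank[b.severity] ||
      a.deadline.localeCompare(b.deadline)
  );
}

const ordered = prioritise([
  { article: "Art. 14", severity: "MEDIUM", deadline: "2026-08-02" },
  { article: "Art. 9", severity: "CRITICAL", deadline: "2026-08-02" },
  { article: "Art. 10", severity: "CRITICAL", deadline: "2026-02-02" },
]);
console.log(ordered.map((g) => g.article)); // ["Art. 10", "Art. 9", "Art. 14"]
```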
## Classification Logic

The engine follows the EU AI Act's own risk hierarchy, evaluating the highest-risk tier first:

```
1. PROHIBITED (Article 5)      — Match on prohibited practice keywords → must not be deployed
2. HIGH_RISK (Annex III)       — Match on ≥2 category-specific keywords across 8 categories
3. GPAI_SYSTEMIC (Chapter V)   — GPAI model + training compute > 10^25 FLOPs
4. GPAI_STANDARD (Chapter V)   — GPAI model without systemic risk indicators
5. LIMITED_RISK (Article 50)   — Chatbots, deepfakes, AI-generated content systems
6. MINIMAL_RISK                — No mandatory EU AI Act requirements (monitor for changes)
```

No API calls. No network requests. Fully offline. Deterministic output.
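The first-match-wins ordering above can be sketched as a standalone function. The `Signals` shape and predicate names are illustrative stand-ins for the engine's internal keyword matching, not its real implementation:

```typescript
type RiskTier =
  | "PROHIBITED"
  | "HIGH_RISK"
  | "GPAI_SYSTEMIC"
  | "GPAI_STANDARD"
  | "LIMITED_RISK"
  | "MINIMAL_RISK";

// Hypothetical signals, standing in for the engine's keyword evidence.
interface Signals {
  prohibitedMatch: boolean;   // Article 5 keyword hit
  annexIIIMatches: number;    // category-specific keyword hits
  isGPAI: boolean;
  trainingFLOPs?: number;
  transparencyMatch: boolean; // Article 50 (chatbot, deepfake, ...)
}

// Highest risk wins: tiers are checked in the order listed above.
function resolveTier(s: Signals): RiskTier {
  if (s.prohibitedMatch) return "PROHIBITED";
  if (s.annexIIIMatches >= 2) return "HIGH_RISK";
  if (s.isGPAI && (s.trainingFLOPs ?? 0) > 1e25) return "GPAI_SYSTEMIC";
  if (s.isGPAI) return "GPAI_STANDARD";
  if (s.transparencyMatch) return "LIMITED_RISK";
  return "MINIMAL_RISK";
}

console.log(
  resolveTier({
    prohibitedMatch: false,
    annexIIIMatches: 3,
    isGPAI: false,
    transparencyMatch: false,
  })
); // "HIGH_RISK"
```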


## Enforcement Timeline

| Phase | Date | What applies |
| --- | --- | --- |
| Phase 1 | 2 February 2025 | Article 5 prohibitions enforceable. Prohibited systems must be withdrawn immediately. |
| Phase 2 | 2 August 2025 | Chapter V GPAI obligations (Articles 53–55) apply to all GPAI model providers. |
| Phase 3 | 2 August 2026 | Full enforcement — Annex III high-risk systems must have completed conformity assessment. |
| Phase 4 | 2 August 2027 | Annex I high-risk systems (AI as safety component in existing EU-regulated products). |

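To check which enforcement phases are in force on a given date, the table above can be queried with a few lines of code. The dates come from the table; the `phasesInForce` helper itself is illustrative, not part of the library:

```typescript
// Enforcement phases from the table above, in chronological order.
const phases = [
  { phase: 1, date: "2025-02-02", requirement: "Article 5 prohibitions" },
  { phase: 2, date: "2025-08-02", requirement: "Chapter V GPAI obligations" },
  { phase: 3, date: "2026-08-02", requirement: "Annex III full enforcement" },
  { phase: 4, date: "2027-08-02", requirement: "Annex I high-risk systems" },
] as const;

// All phases whose start date is on or before the given ISO date.
function phasesInForce(onDate: string) {
  return phases.filter((p) => p.date <= onDate);
}

console.log(phasesInForce("2026-09-01").length); // 3 (phases 1-3 apply)
```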
## Examples

See `examples/basic.ts` for four working examples:

1. HR recruitment AI → `HIGH_RISK` (Annex III, Category 4)
2. Government social scoring → `PROHIBITED` (Article 5(1)(c))
3. Enterprise LLM deployment → `GPAI_STANDARD` (Chapter V)
4. Email spam filter → `MINIMAL_RISK`

## The Open + Proprietary Split

This library is the classification engine — the free layer.

It answers: "What tier is my AI system under the EU AI Act?"

The questions it does not answer — "What exactly do I need to implement, step by step, over the next 90 days, given my specific system architecture, sector, and jurisdictions?" — are answered by AAIA NemoClaw, the enterprise compliance deployment framework built on top of this engine.

| This library (open-source) | AAIA NemoClaw (enterprise) |
| --- | --- |
| Risk tier classification | 90-day compliance roadmap |
| Article mapping | Article-by-article implementation guide |
| Compliance gap list | Gap remediation with templates and evidence packages |
| DACH regulatory notes | DACH authority liaison documentation |
| | Conformity assessment procedure support |
| | Post-market monitoring system design |
| | Ongoing regulatory update alerts |

EU AI Act Readiness Checklist — CHF 97 → Enterprise compliance engagements: github.com/cesaranogilbert


## Contributing

Issues and pull requests are welcome. Priority areas:

  • Corrections to article or Annex III interpretation based on official EU AI Office guidance
  • New DACH-specific regulatory mappings (BaFin circulars, FMA guidance, FINMA updates)
  • Sector-specific keyword improvements for classification accuracy
  • Additional examples and integration patterns

Please open an issue before a large PR to align on scope.


## Disclaimer

This library provides a structured interpretation of the EU Artificial Intelligence Act for informational and compliance preparation purposes. It is not legal advice. Results should be validated by qualified legal counsel before being relied upon for regulatory decisions. The EU AI Act is subject to ongoing delegated acts, implementing acts, and guidance from the EU AI Office that may affect classification.


## License

Apache 2.0 — see LICENSE


Built by the AAIA Trinity STAR Ecosystem — AI Compliance Backbone for DACH Businesses. github.com/cesaranogilbert
