Tagline: Config-driven packs → controls → evidence → audit-ready.
Control panel: TrustStack AI Assurance Hub
Repo: controlforge
TrustStack AI GRC turns AI regulations and security frameworks into a practical, trackable checklist for a specific AI use case — and helps you store evidence and produce an audit-ready report.
This project is being led by Adnan Masood and Heather Dawe for their upcoming book.
- Adnan Masood, PhD - Adnan is a seasoned artificial intelligence and machine learning expert, currently serving as Chief AI Architect leading AI strategy, engineering, and governance at UST. He holds a PhD in AI/ML, has been a Stanford Visiting Scholar and Harvard Business School alum, and is recognized as a thought leader, author, and speaker in responsible and enterprise-grade AI. He has co-authored books on responsible AI and regularly writes and speaks on AI governance, risk management, and ethical deployment of AI systems. He also serves as a Microsoft Regional Director and STEM mentor.
- Heather Dawe, MSc - Heather is an experienced Data and AI leader with over 25 years of industry experience driving innovation in data science, analytics, and AI across healthcare, finance, retail, and government. She has held senior leadership roles such as Chief Data Scientist and Head of Responsible AI, has appeared as an AI thought leader in media outlets, and co-authored the 2023 book Responsible AI in the Enterprise with Adnan Masood. She is known for building multidisciplinary data science teams, advocating for responsible and ethical AI, and championing diversity and skills development in the technology community.
You provide:

- Industry + segment + use case (config-driven taxonomy)
- Scoping answers (questionnaire defined by the use case)
- Which packs to apply (security / safety / governance)

TrustStack then:

- Generates a deterministic checklist of required controls
- Explains why each control applies (rule + triggered context)
- Suggests implementation patterns/tools (config-driven)
- Tracks status/owners/notes
- Stores evidence with hashes + an immutable audit log
- Exports an audit-ready report (HTML/JSON/CSV; PDF scaffold included)
Not legal advice. Packs provide structured obligations/checklists but do not replace legal counsel.
- registry/taxonomy/ → industries/segments/use-cases (discovered by folder conventions)
- registry/packs/ → versioned packs (discovered by folder conventions)
- registry/suggestions/ → patterns/tools catalog (optional)
- workspaces/ → file-based projects (no DB)
- apps/api/ → FastAPI service (pack loader, mapping engine, reporting, file storage)
- apps/web/ → Next.js UI scaffold ("TrustStack AI Assurance Hub")
- docs/ → architecture + authoring guides
- schemas/ → JSON Schemas for packs, taxonomy, projects
API (macOS/Linux):

```shell
cd apps/api
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
export TRUSTSTACK_CONFIG_ROOT=../../registry
export TRUSTSTACK_WORKSPACE_ROOT=../../workspaces
uvicorn truststack_grc.main:app --reload --port 8000
```

API (Windows, PowerShell):

```powershell
cd apps/api
py -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt
$env:TRUSTSTACK_CONFIG_ROOT = "../../registry"
$env:TRUSTSTACK_WORKSPACE_ROOT = "../../workspaces"
uvicorn truststack_grc.main:app --reload --port 8000
```

Web UI (both platforms):

```shell
cd apps/web
npm install
npm run dev
```

Open the web UI (Next.js dev server, http://localhost:3000 by default) and the API (http://localhost:8000):
- Create a project in the web UI (or via API)
- Generate the checklist
- Mark controls complete + upload evidence
- Export a report
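Each evidence upload is stored with a SHA-256 hash so audits can later verify files are unchanged. A minimal sketch of how such a digest can be computed (the helper name is ours, not the project's actual API):

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 65536) -> str:
    """Stream the file in chunks so large evidence files never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

The hex digest is recorded alongside the evidence metadata; re-hashing the file at audit time and comparing digests proves integrity.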
Create new folders and YAML files under registry/taxonomy/industries/…:
registry/taxonomy/industries/<industry_id>/industry.yaml
registry/taxonomy/industries/<industry_id>/segments/<segment_id>/segment.yaml
registry/taxonomy/industries/<industry_id>/segments/<segment_id>/use-cases/<use_case_id>/use_case.yaml
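For orientation, a use-case file might look roughly like this. The field names below are illustrative guesses, not the authoritative schema; check schemas/ for the real JSON Schemas:

```yaml
# registry/taxonomy/industries/<industry_id>/segments/<segment_id>/use-cases/<use_case_id>/use_case.yaml
# Illustrative sketch only — consult schemas/ for the actual fields.
id: triage-chatbot
name: Patient triage chatbot
description: LLM-assisted symptom triage for patients
scoping_questions:
  - id: processes_phi
    prompt: Does the system process personal health information?
    type: boolean
  - id: deployment_region
    prompt: Where will the system be deployed?
    type: enum
    options: [eu, us, uk, other]
```

The scoping questionnaire shown in the UI is driven by these entries, so adding a question here requires no code changes.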
Drop a new pack folder under registry/packs/<domain>/<pack_id>/<version>/:
registry/packs/governance/eu-ai-act/2024-1689/pack.yaml
registry/packs/governance/eu-ai-act/2024-1689/controls/*.yaml
The pack registry discovers it automatically at runtime.
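A pack pairs catalog metadata with per-control files carrying applicability rules. Again, the fields below are an illustrative sketch rather than the project's actual schema:

```yaml
# registry/packs/governance/eu-ai-act/2024-1689/pack.yaml  (illustrative fields)
id: eu-ai-act
domain: governance
version: "2024-1689"
title: EU AI Act (Regulation (EU) 2024/1689)

# registry/packs/governance/eu-ai-act/2024-1689/controls/risk-management.yaml  (illustrative)
id: risk-management-system
title: Establish and maintain a risk management system
applies_when:
  all:
    - context.risk_tier == "high"
```

Because discovery is folder-based, publishing a new framework version is just a new `<version>/` folder; no registry file needs editing.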
- Taxonomy: Industry → Segment → Use Case (all config)
- Packs: versioned catalog(s) of controls with applicability rules
- Context: normalized object derived from scoping answers
- Checklist: generated control instances stored in a project workspace folder
- Evidence: file uploads with SHA-256 hashes + metadata
- Audit log: append-only NDJSON trail of state changes
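The mapping idea behind these concepts can be sketched in a few lines: packs declare applicability rules, a normalized context is derived from scoping answers, and the checklist is the deterministic set of controls whose rules match. The names and rule shape below are our illustration, not the project's actual engine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    control_id: str
    title: str
    # Rule: every key must be present in the context with an allowed value,
    # e.g. {"risk_tier": {"high"}, "region": {"eu"}}
    applies_when: dict

def applies(control: Control, context: dict) -> bool:
    """A control applies when each rule key's context value is in the allowed set."""
    return all(context.get(key) in allowed
               for key, allowed in control.applies_when.items())

def generate_checklist(controls: list[Control], context: dict) -> list[Control]:
    # Sort by id so identical inputs always produce identical output (determinism).
    matched = [c for c in controls if applies(c, context)]
    return sorted(matched, key=lambda c: c.control_id)
```

Matching is pure and order-independent, which is what makes the generated checklist reproducible and explainable: the triggered rule plus the context values that satisfied it are the "why" shown for each control.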
- EU — AI Act (Regulation (EU) 2024/1689)
- NIST — AI Risk Management Framework (AI RMF 1.0)
- ISO/IEC 42001:2023 — AI Management System (AIMS)
- ISO/IEC 23894:2023 — AI risk management guidance
- OECD — AI Principles
- OECD/LEGAL/0449 — Recommendation of the Council on AI
- UNESCO — Recommendation on the Ethics of AI
- Council of Europe — Framework Convention on AI (CETS No. 225)
- G7 — Hiroshima Process International Guiding Principles for Advanced AI
- G7 — Hiroshima Process International Code of Conduct for Advanced AI
- ISO/IEC 38507:2022 — Governance implications of AI use by organizations
- NIST — Generative AI Profile (NIST.AI.600-1)
- ISO/IEC 42005:2025 — AI system impact assessment
- ISO/IEC 42006:2025 — Requirements for AIMS certification bodies
- ISO/IEC 22989:2022 — AI concepts and terminology
- CEN-CENELEC JTC 21 — AI standardization supporting EU legislation
- ISO/IEC JTC 1/SC 42 — AI standards portfolio
- US (Federal) — OMB Memorandum M-25-21 (Accelerating Federal Use of AI...)
- US (Federal) — White House Order: Removing Barriers to American Leadership in AI
- UK — AI regulation: a pro-innovation approach (White Paper)
- Canada — Directive on Automated Decision-Making
- Canada — Algorithmic Impact Assessment (AIA) tool
- Canada — Guide on the scope of the Directive on Automated Decision-Making
- Canada — Artificial Intelligence and Data Act (AIDA) (Bill C-27)
- China — Interim Measures for the Management of Generative AI Services
- China — Provisions on the Administration of Deep Synthesis of Internet-based Information Services
- China — Provisions on the Administration of Algorithmic Recommendation for Internet Information Services
- Singapore — Model AI Governance Framework (2nd Edition)
- Singapore — AI Verify
- Singapore — Model AI Governance Framework for Generative AI
- Singapore — Model AI Governance Framework for Agentic AI
- South Korea — Framework Act on AI development and creation of foundation for trust (AI Basic Act)
- South Korea — Implementing Decree (시행령) for the AI Basic Act
- Japan — AI Guidelines for Business (ver. 1.0)
- Japan — AI Guidelines for Business (ver. 1.01 PDF)
- Australia — AI Ethics Principles
- Australia — Mandatory guardrails for AI in high-risk settings (proposal)
- Australia — National AI Plan
- Australia — AI Plan for the Australian Public Service 2025
- India — Digital Personal Data Protection Act, 2023 (DPDP Act)
- India — MeitY Advisory on AI models/LLMs/GenAI for intermediaries/platforms
- India — National Strategy for AI (NITI Aayog)
- Brazil — AI Regulation Bill (PL 2.338/2023) — Senate matter
- Brazil — PL 2.338/2023 — Chamber of Deputies docket
- South Africa — Protection of Personal Information Act (POPIA)
- South Africa — SA National AI Policy Framework (draft)
- Mexico — Federal Law on Protection of Personal Data Held by Private Parties (LFPDPPP)
- Mexico — Agenda Nacional Mexicana de IA 2030 (AI strategy)
- Mexico — DOF publication index (20 Mar 2025)
- US-CA — AI Transparency Act (SB 942; Business & Professions Code §22757)
- US-CA — Frontier AI model statute (SB 53)
- US-CO — Consumer Protections in Interactions with AI Systems (SB24-205)
- US-UT — Artificial Intelligence Amendments (SB 149)
- US-TX — Texas Responsible AI Governance Act (HB 149 / TRAIGA)
- US-NY — RAISE Act (S6953B)
- US-IL — Artificial Intelligence Video Interview Act (820 ILCS 42)
- Model + Data + System Documentation Pack (v1.0)
- ISO/IEC 5338:2023 — AI system life cycle processes
- ISO/IEC TS 42119-2:2025 — Testing standardization for AI
- ISO/IEC 25059:2023 — Quality model for AI systems
- ISO/IEC 8183:2023 — AI data life cycle framework
- ISO/IEC TR 24028:2020 — Trustworthiness overview
- ISO/IEC TS 8200:2024 — Controllability of automated AI systems
- ISO/IEC TR 24027:2021 — Bias measurement/management guidance
- ISO/IEC TR 24029-1:2021 — Robustness assessment (Part 1)
- ISO/IEC 24029-2:2023 — Robustness assessment (Part 2)
- ISO/IEC 23053:2022 — ML-based AI system framework + terminology
- IEEE 7000-2021 — Ethics-by-design process standard
- IEEE 7001-2021 — Transparency requirements for autonomous systems
- IEEE 7003-2024 — Algorithmic bias processes/methodologies
- EU HLEG — Ethics Guidelines for Trustworthy AI (2019)
- IEC 61508 — Functional safety (E/E/PE safety-related systems)
- ISO 26262 — Road vehicles functional safety
- ISO 21448:2022 — SOTIF (Safety of the Intended Functionality)
- ANSI/UL 4600 — Safety case evaluation for autonomous products
- ISO 10218-1:2025 — Industrial robot safety
- ISO 14971:2019 — Medical device risk management
- IEC 62304:2006+A1:2015 — Medical device software lifecycle
- IMDRF GMLP Guiding Principles (N88 FINAL:2025) — FDA page
- NIST — Cybersecurity Framework (CSF) 2.0
- ISO/IEC 27001:2022 — Information Security Management System (ISMS)
- ISO/IEC 27002:2022 — Information security controls
- NIST SP 800-53 Rev. 5 — Security & privacy controls
- NIST SP 800-37 Rev. 2 — Risk Management Framework (RMF)
- NIST SP 800-218 — Secure Software Development Framework (SSDF) v1.1
- OWASP Top 10 for Large Language Model Applications (v1.1)
- OWASP Machine Learning Security Top 10
- MITRE ATLAS — Adversarial Threat Landscape for AI Systems
- Google Secure AI Framework (SAIF)
- NIST IR 8596 — Cybersecurity Framework Profile for AI (draft)
- ISO/IEC 29147:2018 — Vulnerability disclosure
- ISO/IEC 30111:2019 — Vulnerability handling processes
- Cloud Security Alliance — AI Organizational Responsibilities (Core Security Responsibilities)
- Cloud Security Alliance — AI Resilience: Organizational Responsibilities for AI Resilience
- Cloud Security Alliance — AI Security: Principles to Practice (v1.0)
- ETSI — Securing Artificial Intelligence (TC SAI)
Apache-2.0 (see LICENSE).

