
# GRDL Language Reference

cjags edited this page Apr 3, 2026 · 2 revisions


Complete specification of the GRDL YAML format.

## Ruleset (top level)

```yaml
id: my-ruleset-v1
name: My Governance Rules
description: Governance for my agents
version: "1.0.0"
framework: enterprise_policy    # see framework values below
default_enforcement: enforce
default_severity: high
graceful_degradation: allow_with_audit
rules:
  - ...
```

### Framework values

The `framework` field documents which governance framework your rules implement. It is metadata only; the engine treats all rules identically.

| Value | Use case |
| --- | --- |
| `enterprise_policy` | Corporate AI governance |
| `dao_charter` | DAO or cooperative bylaws |
| `ai_safety` | AI safety frameworks |
| `eu_ai_act` | EU AI Act compliance |
| `nist_ai_rmf` | NIST AI Risk Management Framework |
| `iso_42001` | ISO 42001 AI management |
| `custom` | Any other governance document |

### Graceful degradation policies

| Value | Behavior |
| --- | --- |
| `allow_with_audit` | Allow the action but log the failure (default) |
| `deny_all` | Block all actions until the engine recovers |
| `last_known_good` | Use the cached verdict for this action type |
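The three policies above can be sketched as a single fallback function. This is an illustrative sketch only, not engine code: the verdict shape (`allow`/`audit` keys), the cache, and the function name are assumptions.

```python
def degrade(policy: str, action_type: str, cache: dict) -> dict:
    """Return a fallback verdict when the engine cannot evaluate a rule."""
    if policy == "allow_with_audit":
        # Default: let the action through, but record the engine failure.
        return {"allow": True, "audit": "engine_unavailable"}
    if policy == "deny_all":
        # Fail closed until the engine recovers.
        return {"allow": False, "audit": "engine_unavailable"}
    if policy == "last_known_good":
        # Replay the cached verdict for this action type; fail closed
        # if no verdict has been cached yet.
        cached = cache.get(action_type)
        return cached if cached is not None else {"allow": False, "audit": "no_cached_verdict"}
    raise ValueError(f"unknown graceful_degradation policy: {policy}")
```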

## Rule

```yaml
- id: safety.budget_cap
  name: Budget enforcement
  laws: [SAFETY]
  policy_refs:
    - framework: nist_ai_rmf
      provision_id: "MAP 1.5"
  scope: resource_consumption
  condition:
    field: payload.estimated_cost
    operator: gt
    value_source: context.remaining_budget
  severity: critical
  enforcement: enforce
  target: runtime
  remedy:
    action: block
    message: Cost exceeds budget
    escalation: operator
  timeout_ms: 20
```

## Target (compilation destination)

Targets are runtime-agnostic; each backend maps them to its own enforcement:

| Target | What it controls | Example |
| --- | --- | --- |
| `infrastructure` | Filesystem, process, kernel-level protections | Audit log read-only, non-root execution |
| `network` | Network endpoint control with protocol inspection | API endpoint restrictions |
| `runtime` | CFAIS sidecar evaluation per request | Budget checks, PII controls, identity |
| `hybrid` | Both network + runtime | API restriction AND semantic check |

Legacy values `openshell_static` and `openshell_dynamic` are accepted and auto-normalized to `infrastructure` and `network`.
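The legacy normalization amounts to a small alias table. A minimal sketch; the mapping mirrors the documented aliases, while the function name is illustrative:

```python
# Documented aliases: legacy target name -> current target name.
_LEGACY_TARGETS = {
    "openshell_static": "infrastructure",
    "openshell_dynamic": "network",
}

def normalize_target(target: str) -> str:
    """Map legacy target values onto their current equivalents; pass
    current values through unchanged."""
    return _LEGACY_TARGETS.get(target, target)
```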

## Laws

| Value | Law |
| --- | --- |
| `PRIMACY` | Human authority overrides agent decisions |
| `TRANSPARENCY` | All decisions must be explainable |
| `ACCOUNTABILITY` | Every action traced to an entity |
| `FAIRNESS` | No bias in agent decisions |
| `SAFETY` | Prevent harm, halt on uncertainty |
| `PRIVACY` | Minimum data, consent required |
| `GRACEFUL_DEGRAD` | Deterministic fallback on failure |

## Severity

| Value | Engine behavior |
| --- | --- |
| `critical` | Block the action, full audit trail |
| `high` | Block the action, log a warning |
| `medium` | Allow with an audit flag |
| `low` | Allow, log for analytics |
| `advisory` | Log only |

## Enforcement

| Value | Behavior |
| --- | --- |
| `enforce` | Active blocking (403) |
| `audit` | Log violations but allow (200 with an audit flag) |
| `shadow` | Evaluate silently, for testing new rules |
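A minimal sketch of how an integration might map these modes onto HTTP statuses, using the 403/200 codes from the table (the function itself is hypothetical, and shadow-mode logging is left out for brevity):

```python
def respond(enforcement: str, violated: bool) -> int:
    """HTTP status an engine might return after evaluating one rule."""
    if violated and enforcement == "enforce":
        return 403  # active blocking
    # audit: allowed through (with an audit flag recorded elsewhere);
    # shadow: evaluated silently, never affects the response.
    return 200
```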

## Conditions

Conditions are deterministic predicates; they are never probabilistic.

### Atomic

```yaml
condition:
  field: payload.amount
  operator: gt
  value_source: context.limit
```

### Operators

| Operator | Description | Value type |
| --- | --- | --- |
| `eq` | Equal | any |
| `ne` | Not equal | any |
| `gt` | Greater than | numeric |
| `gte` | Greater or equal | numeric |
| `lt` | Less than | numeric |
| `lte` | Less or equal | numeric |
| `in` | Value in list | list |
| `not_in` | Value not in list | list |
| `contains` | String contains | string |
| `exists` | Field exists: `value: true` = must exist, `value: false` = must be absent | boolean |
| `between` | Within range | `[low, high]` |
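The operator semantics can be sketched as one dispatch function. The operator names come from the table above; the function itself, and the assumption that `between` bounds are inclusive, are illustrative:

```python
def apply_op(operator: str, actual, expected) -> bool:
    """Evaluate one atomic operator against a resolved field value."""
    if operator == "eq":       return actual == expected
    if operator == "ne":       return actual != expected
    if operator == "gt":       return actual > expected
    if operator == "gte":      return actual >= expected
    if operator == "lt":       return actual < expected
    if operator == "lte":      return actual <= expected
    if operator == "in":       return actual in expected
    if operator == "not_in":   return actual not in expected
    if operator == "contains": return expected in actual
    if operator == "exists":
        # value: true = field must exist; value: false = must be absent
        return (actual is not None) == expected
    if operator == "between":
        low, high = expected   # assumed inclusive bounds
        return low <= actual <= high
    raise ValueError(f"unknown operator: {operator}")
```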

### Composite

```yaml
condition:
  logic: all_of          # AND. Also: any_of (OR), none_of (NOT)
  conditions:
    - field: payload.amount
      operator: gt
      value: 10000
    - field: payload.approved
      operator: eq
      value: false
```

Composites nest arbitrarily.
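Because composites nest arbitrarily, evaluation is naturally recursive. A minimal sketch, assuming a caller-supplied `resolve` callback that evaluates atomic leaves (both names are hypothetical):

```python
def eval_condition(cond: dict, resolve) -> bool:
    """Evaluate a (possibly nested) composite condition.
    `resolve(cond)` evaluates one atomic condition against a request."""
    logic = cond.get("logic")
    if logic is None:
        return resolve(cond)                 # atomic leaf
    results = (eval_condition(c, resolve) for c in cond["conditions"])
    if logic == "all_of":
        return all(results)                  # AND
    if logic == "any_of":
        return any(results)                  # OR
    if logic == "none_of":
        return not any(results)              # NOT
    raise ValueError(f"unknown logic: {logic}")
```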

### Field resolution

| Prefix | Source |
| --- | --- |
| `payload.X` | Action payload |
| `context.X` | Organizational context (budgets, thresholds) |
| `agent_id` | Agent identifier |
| `action_type` | Action type |
| `http_method` | HTTP method |
| `config.X` | Engine configuration |
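Field resolution by prefix can be sketched as dotted-path lookup over a request envelope. The envelope shape and function name are assumptions for illustration:

```python
def resolve_field(path: str, request: dict):
    """Resolve a dotted field path (e.g. 'payload.amount') against a
    request envelope; bare names (agent_id, action_type, ...) are
    top-level keys. Returns None for missing fields."""
    if "." in path:
        prefix, rest = path.split(".", 1)
        source = request.get(prefix, {})     # payload / context / config
        for part in rest.split("."):
            if not isinstance(source, dict):
                return None
            source = source.get(part)
        return source
    return request.get(path)
```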

## Remedy

```yaml
remedy:
  action: block
  message: "Why this was blocked"
  alternative: "How to accomplish it compliantly"
  escalation: operator
  audit_log: true
```

Every denial is transparent and actionable (Law 2).

## Policy references

```yaml
policy_refs:
  - framework: eu_ai_act
    provision_id: article_14
    description: Human oversight for high-risk AI
```

These are metadata for auditors; the engine doesn't interpret them.
