Structured multi-criteria decisions, confidence, and explainable tradeoffs.
ADI Decision Engine is an OpenClaw skill bundle for structured multi-criteria decision analysis, turning messy tradeoff problems into ranked, auditable recommendations.
Built for vendor selection, route planning, hiring shortlists, software/tool comparison, procurement, and research method evaluation.
This repository packages adi-decision as a professional OpenClaw skill. The skill accepts either:
- a structured ADI-style decision request with options, criteria, constraints, and evidence
- a freeform decision brief that first needs to be normalized into a request skeleton
The skill then validates the request, runs deterministic ADI locally, and returns:
- ranked options
- overall confidence
- explanation of the winning tradeoff
- constraint effects
- sensitivity and stability signals
- counterfactual guidance when available
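A minimal structured request might look like the sketch below. The exact field names and shapes are illustrative assumptions; the authoritative schema is defined by `adi-decision-engine/SKILL.md` and the upstream ADI package.

```json
{
  "options": [
    {"id": "vendor_a", "values": {"cost": 100, "quality": 8}},
    {"id": "vendor_b", "values": {"cost": 80, "quality": 6}}
  ],
  "criteria": [
    {"id": "cost", "direction": "min", "weight": 0.6},
    {"id": "quality", "direction": "max", "weight": 0.4}
  ],
  "constraints": [{"criterion": "cost", "max": 120}],
  "policy": "balanced"
}
```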
| Structured input | Deterministic output | Explainable rationale |
|---|---|---|
| Convert vague tradeoff problems into explicit options, criteria, weights, and constraints. | Run a reproducible ADI scoring pipeline instead of improvising subjective rankings. | Return confidence, constraint effects, contributions, and counterfactual guidance. |
Many decision-support prompts are underspecified, subjective, or hard to audit after the fact. ADI Decision Engine narrows that ambiguity by enforcing an explicit schema and deterministic scoring flow.
This repository focuses on a public, discoverable OpenClaw packaging layer rather than re-implementing the ADI core itself.
- structured decision support for tradeoff-heavy problems
- multi-criteria decision analysis with auditable weights and constraints
- deterministic ranking with explainable recommendations
- confidence-aware scoring and evidence handling
- policy-driven decisions with `balanced`, `risk_averse`, and `exploratory` policies
- request normalization for vague problem statements
- bundled examples across procurement, routing, hiring, research, and tooling
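To make the deterministic scoring idea concrete, here is a minimal sketch of policy-weighted multi-criteria ranking: min-max normalization per criterion, direction handling, and a policy factor applied to risk-like criteria. The function, data shapes, and policy multipliers are illustrative assumptions, not the actual ADI implementation.

```python
# Hypothetical sketch of deterministic MCDA scoring; the policy multipliers
# and data shapes are illustrative, not the real ADI internals.

POLICIES = {
    "balanced": 1.0,       # use criterion weights as given
    "risk_averse": 1.5,    # amplify weight on risk-flagged criteria
    "exploratory": 0.5,    # dampen risk so novel options can surface
}

def rank_options(options, criteria, policy="balanced"):
    """Return (name, score) pairs sorted best-first, ties broken by name.

    options:  {name: {criterion: raw_value}}
    criteria: {criterion: {"weight": float, "direction": "max"|"min", "risk": bool}}
    """
    factor = POLICIES[policy]
    scores = {}
    for name, values in options.items():
        total = 0.0
        for crit, spec in criteria.items():
            lo = min(o[crit] for o in options.values())
            hi = max(o[crit] for o in options.values())
            span = (hi - lo) or 1.0
            norm = (values[crit] - lo) / span       # min-max normalize to [0, 1]
            if spec["direction"] == "min":          # lower raw value is better
                norm = 1.0 - norm
            weight = spec["weight"] * (factor if spec.get("risk") else 1.0)
            total += weight * norm
        scores[name] = round(total, 6)              # stable, reproducible output
    return sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
```

Because the pipeline is a pure function of the request, re-running it on the same input always reproduces the same ranking, which is what makes the recommendation auditable.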
| Domain | Typical criteria | Why ADI fits |
|---|---|---|
| Vendor selection | cost, quality, lead time, contractual risk | Clear tradeoffs and defensible procurement rationale |
| Route planning | time, cost, walking burden, transfers | Explicit weighted tradeoffs instead of vague preference guessing |
| Hiring shortlists | skill fit, communication, delivery risk, compensation | Transparent scoring and stronger decision hygiene |
| Tool selection | reliability, setup effort, support burden, extensibility | Policy-driven comparisons with confidence-aware ranking |
| Research methods | accuracy, cost, complexity, interpretability | Explorable tradeoffs with auditable assumptions |
```
.
├── README.md
├── LICENSE
├── CITATION.cff
├── docs/
│   └── images/
└── adi-decision-engine/
    ├── SKILL.md
    ├── agents/openai.yaml
    ├── scripts/
    ├── references/
    └── examples/
```
The OpenClaw bundle is isolated inside adi-decision-engine/ so that GitHub presentation and skill publishing remain separate concerns.
The publishable bundle lives in adi-decision-engine/. The primary skill contract is in adi-decision-engine/SKILL.md.
Runtime helper scripts are bundled in adi-decision-engine/scripts:
- `run_adi.py`: validate and execute a full decision request
- `validate_request.py`: confirm a request is schema-valid
- `normalize_problem.py`: convert a plain-language brief into a request skeleton
Recommended workflow:
- Start with a plain-language brief or a partial JSON object.
- Use `normalize_problem.py` to turn it into a request skeleton.
- Use `validate_request.py` once criteria, directions, and values are complete.
- Use `run_adi.py` to produce the final ranked decision output.
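The kind of structural check that `validate_request.py` performs can be sketched as follows; the required field names and error messages here are assumptions for illustration, not the real schema.

```python
# Illustrative sketch of a request schema check; the field names below
# are assumptions, not the authoritative ADI schema.

REQUIRED_TOP_LEVEL = ("options", "criteria")

def validate_request(request: dict) -> list:
    """Return a list of human-readable schema problems (empty means valid)."""
    problems = []
    for key in REQUIRED_TOP_LEVEL:
        if key not in request:
            problems.append(f"missing top-level field: {key}")
    for crit, spec in request.get("criteria", {}).items():
        if spec.get("direction") not in ("max", "min"):
            problems.append(f"criterion '{crit}' needs direction 'max' or 'min'")
        if not isinstance(spec.get("weight"), (int, float)):
            problems.append(f"criterion '{crit}' needs a numeric weight")
    for name, values in request.get("options", {}).items():
        missing = set(request.get("criteria", {})) - set(values)
        if missing:
            problems.append(f"option '{name}' lacks values for: {sorted(missing)}")
    return problems
```

Returning a list of problems rather than raising on the first failure lets the skill report every gap in a skeleton at once, which fits the normalize-then-validate workflow above.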
Runtime requirements:
- `python3`
- either an importable `adi-decision` package or the `adi` CLI on `PATH`
- no API keys
- no network access for core skill execution
If the ADI runtime dependency is missing, the skill fails honestly and asks for package installation instead of fabricating scores.
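A fail-honest dependency check along these lines is straightforward. The import name `adi_decision` and the error wording are assumptions; the README only guarantees an importable `adi-decision` package or an `adi` CLI on `PATH`.

```python
# Sketch of an honest runtime check: refuse to run rather than fabricate
# scores. The import name "adi_decision" is an assumed spelling of the
# "adi-decision" package named in the README.
import importlib.util
import shutil

def adi_runtime_available() -> bool:
    """True if the ADI package is importable or the 'adi' CLI is on PATH."""
    return (
        importlib.util.find_spec("adi_decision") is not None
        or shutil.which("adi") is not None
    )

def require_adi_runtime() -> None:
    """Raise instead of silently degrading when the ADI runtime is absent."""
    if not adi_runtime_available():
        raise RuntimeError(
            "ADI runtime not found: install the adi-decision package or put "
            "the 'adi' CLI on PATH. Refusing to invent scores."
        )
```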
This bundle is structured to support:
- OpenClaw frontmatter parsing
- aligned `agents/openai.yaml` metadata
- schema-aware request validation
- deterministic local execution through the upstream ADI package
- bundled golden examples for smoke testing
Local smoke verification already completed:
- freeform normalization smoke test
- structured validation on bundled examples
- full execution on vendor selection, route planning, and tool selection examples
- Python compilation for bundled scripts
The skill ships with:
- branded icons in `adi-decision-engine/assets/`
- OpenClaw `brand_color`
- clean `display_name`, `short_description`, and `default_prompt`
- implicit invocation enabled through `agents/openai.yaml`
- openclaw
- decision-intelligence
- mcda
- decision-support
- weighted-scoring
- tradeoff-analysis
- option-ranking
- explainable-ai
- agent-tooling
- procurement
This repository includes CITATION.cff so that citation metadata is available once the repository is published.
Suggested positioning:
- cite the repository when you use the OpenClaw skill packaging
- cite `adi-decision` separately when you need to reference the underlying decision engine implementation
This repository is now published on GitHub.
The OpenClaw bundle has also been published to ClawHub as `adi-decision-engine@0.1.0`.
At the time of this update, ClawHub reports the skill as temporarily hidden while its security scan is pending. A local readiness and publication log is available in docs/publish_checklist.md.
Released under the MIT License. See LICENSE.