Rank options, quantify confidence, and explain the decision.
ADI is a Python library for structured multi-criteria decisions. Give it options, weighted criteria, constraints, and evidence confidence; it returns a ranked result with explainability, sensitivity analysis, and feedback-driven learning.
It is designed for two use cases:
- application code that needs auditable decision logic
- LLM agents that should call a deterministic ranking tool instead of improvising tradeoffs
Status: alpha. The core API is usable today, but expect refinement as real-world usage expands.
- Deterministic ranking: scores and ranks options from explicit structured inputs.
- Confidence-aware output: low-confidence evidence can reduce score impact or lower final confidence.
- Explainable decisions: returns criterion contributions, constraint reports, and counterfactual guidance.
- Scenario analysis: supports what-if runs and cross-scenario comparison.
- Learning loop: feedback can update saved preference profiles for future runs.
- Agent-ready: ships OpenAI and Anthropic tool schemas out of the box.
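ADI's actual scoring lives inside the library, but the general technique behind deterministic, confidence-aware ranking (min-max normalization, cost/benefit direction handling, confidence damping) can be sketched in a few lines of plain Python. This is an illustrative toy using the quickstart's numbers, not ADI's implementation:

```python
def rank(options, criteria):
    """Rank options by confidence-weighted, normalized criterion scores.

    options:  {option_name: {criterion: (value, confidence)}}
    criteria: {criterion: (weight, direction)}  # direction: "cost" | "benefit"
    """
    # Min-max bounds per criterion, used to normalize raw values to [0, 1].
    bounds = {
        crit: (min(o[crit][0] for o in options.values()),
               max(o[crit][0] for o in options.values()))
        for crit in criteria
    }
    scores = {}
    for name, values in options.items():
        total = 0.0
        for crit, (weight, direction) in criteria.items():
            value, confidence = values[crit]
            lo, hi = bounds[crit]
            norm = (value - lo) / (hi - lo) if hi > lo else 1.0
            if direction == "cost":  # lower raw value is better
                norm = 1.0 - norm
            # Low-confidence evidence contributes proportionally less.
            total += weight * confidence * norm
        scores[name] = total
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank(
    {
        "Supplier A": {"cost": (100, 1.0), "quality": (0.80, 0.95)},
        "Supplier B": {"cost": (85, 1.0), "quality": (0.68, 0.70)},
    },
    {"cost": (0.45, "cost"), "quality": (0.55, "benefit")},
)
print(ranking[0][0])  # Supplier A: its quality edge outweighs its cost penalty
```

Note how Supplier B's weaker quality evidence (confidence 0.70) is damped; this is one simple way low confidence can "reduce score impact" as described above.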
From source:

```bash
git clone https://github.com/dimgouso/adi-Agent-Decision-Intelligence.git
cd adi-decision
pip install -e .
```

With development tooling:

```bash
pip install -e ".[dev]"
```

After PyPI publishing is enabled:
```bash
pip install adi-decision
```

```python
from adi import DecisionRequest, decide
from adi.schemas.decision_request import Criterion, Option, OptionValue

request = DecisionRequest(
    options=[
        Option(
            name="Supplier A",
            values=[
                OptionValue(criterion_name="cost", value=100),
                OptionValue(criterion_name="quality", value=0.80, confidence=0.95),
            ],
        ),
        Option(
            name="Supplier B",
            values=[
                OptionValue(criterion_name="cost", value=85),
                OptionValue(criterion_name="quality", value=0.68, confidence=0.70),
            ],
        ),
    ],
    criteria=[
        Criterion(name="cost", weight=0.45, direction="cost"),
        Criterion(name="quality", weight=0.55, direction="benefit"),
    ],
    policy_name="balanced",
)

output = decide(request)
print(output.best_option)
for item in output.ranking:
    print(item.option_name, item.score, item.confidence)
```

```python
from adi import call_adi_tool, get_openai_tools

tools = get_openai_tools()
# Pass `tools` to your OpenAI client.
# When the model calls `adi_decide`, route the arguments here:
result = call_adi_tool(tool_arguments)
```
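The routing step can be made concrete with a small dispatcher. The sketch below assumes the OpenAI chat-completions tool-calling shape (each tool call carries an id, a function name, and JSON-encoded arguments); only the `adi_decide` tool name and the idea of handing arguments to `call_adi_tool` come from this README — the rest is illustrative glue, shown here with a stub handler:

```python
import json

def route_adi_tool_calls(tool_calls, handler):
    """Dispatch any `adi_decide` tool calls to a handler
    (e.g. ADI's call_adi_tool) and collect results by call id."""
    results = {}
    for call in tool_calls:
        if call["function"]["name"] != "adi_decide":
            continue  # some other tool; not ours to handle
        arguments = json.loads(call["function"]["arguments"])
        results[call["id"]] = handler(arguments)
    return results

# Demo with a stub handler standing in for call_adi_tool:
demo = route_adi_tool_calls(
    [{"id": "call_1",
      "function": {"name": "adi_decide",
                   "arguments": '{"policy_name": "balanced"}'}}],
    handler=lambda args: args["policy_name"],
)
print(demo)  # {'call_1': 'balanced'}
```

Keying results by call id matters when a model emits several tool calls in one turn, since each tool result message must reference the call it answers.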
CLI:

```bash
adi validate request.json
adi decide --input request.json
adi decide --input request.json --policy risk_averse
```

API:
```bash
uvicorn adi.interfaces.api:app --reload
```

Key endpoints:

- `GET /health`
- `POST /decide`
- `GET /policies`
- `GET /policies/{policy_name}`
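`adi validate request.json` and `POST /decide` presumably accept the JSON form of `DecisionRequest`. A plausible `request.json` mirroring the quickstart might look like the following; the field names are inferred from the Python models shown above, so verify them against the actual schema (or `adi validate`) before relying on this:

```json
{
  "options": [
    {"name": "Supplier A",
     "values": [{"criterion_name": "cost", "value": 100},
                {"criterion_name": "quality", "value": 0.80, "confidence": 0.95}]},
    {"name": "Supplier B",
     "values": [{"criterion_name": "cost", "value": 85},
                {"criterion_name": "quality", "value": 0.68, "confidence": 0.70}]}
  ],
  "criteria": [
    {"name": "cost", "weight": 0.45, "direction": "cost"},
    {"name": "quality", "weight": 0.55, "direction": "benefit"}
  ],
  "policy_name": "balanced"
}
```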
- Policies: `balanced`, `risk_averse`, and `exploratory`
- Constraints: hard elimination or soft penalties
- Sensitivity: robustness checks under weight perturbation
- Confidence handling: explicit per-value confidence or evidence-derived confidence
- Feedback: accept, reject, or override actions update user profiles
- Typed schemas: Pydantic models for request and response contracts
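The sensitivity capability listed above (robustness under weight perturbation) follows a common pattern: re-run the ranking with each weight nudged up and down, and check whether the winner changes. A toy sketch of that pattern — not ADI's implementation, and the `delta` threshold here is an arbitrary illustration:

```python
import itertools

def best(weights, scores):
    """Weighted-sum winner; scores[name][criterion] assumed already in [0, 1]."""
    return max(scores, key=lambda n: sum(weights[c] * scores[n][c] for c in weights))

def is_robust(weights, scores, delta=0.05):
    """Does the baseline winner survive nudging each weight by +/- delta?"""
    baseline = best(weights, scores)
    for crit, sign in itertools.product(weights, (+delta, -delta)):
        perturbed = dict(weights)
        perturbed[crit] = max(0.0, perturbed[crit] + sign)
        if best(perturbed, scores) != baseline:
            return False  # a small weight change flips the decision
    return True

scores = {"Supplier A": {"cost": 0.0, "quality": 1.0},
          "Supplier B": {"cost": 1.0, "quality": 0.0}}
weights = {"cost": 0.45, "quality": 0.55}
print(is_robust(weights, scores, delta=0.05))  # True with this sample data
```

With these numbers the decision survives a 0.05 nudge but flips at 0.15, which is exactly the kind of fragility a robustness report should surface.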
Run the full local verification suite:
```bash
pip install -e ".[dev]"
ruff check adi/
pytest
python -m build
twine check dist/*
```

- Changelog: CHANGELOG.md
- Publishing guide: docs/PUBLISHING.md
- GitHub Releases: github.com/dimgouso/adi-Agent-Decision-Intelligence/releases
License: MIT