55 changes: 55 additions & 0 deletions demos/use_cases/credit_risk_case_copilot/.gitignore
@@ -0,0 +1,55 @@
# Environment variables
.env

# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
.uv/

# Virtual environments
venv/
ENV/
env/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store

# Streamlit
.streamlit/

# Logs
*.log
logs/

# Database
*.db
*.sqlite

# Temporary files
tmp/
temp/
*.tmp
28 changes: 28 additions & 0 deletions demos/use_cases/credit_risk_case_copilot/Dockerfile
@@ -0,0 +1,28 @@
FROM python:3.11-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && \
    apt-get install -y bash curl && \
    rm -rf /var/lib/apt/lists/*

# Install uv package manager
RUN pip install --no-cache-dir uv

# Copy dependency files
COPY pyproject.toml README.md* ./
COPY scenarios/ ./scenarios/

# Install dependencies
RUN uv sync --no-dev || uv pip install --system -e .

# Copy application code
COPY src/ ./src/

# Set environment variables
ENV PYTHONUNBUFFERED=1
ENV PYTHONPATH=/app

# Default command (overridden in docker-compose)
CMD ["uv", "run", "python", "src/credit_risk_demo/risk_crew_agent.py"]
73 changes: 73 additions & 0 deletions demos/use_cases/credit_risk_case_copilot/README.md
@@ -0,0 +1,73 @@
# Credit Risk Case Copilot

A small demo that follows the two-loop model: Plano is the **outer loop** (routing, guardrails, tracing), and each credit-risk step is a focused **inner-loop agent**.

---

## What runs

- **Risk Crew Agent (10530)**: four OpenAI-compatible endpoints (intake, risk, policy, memo).
- **PII Filter (10550)**: redacts PII and flags prompt injection.
- **Streamlit UI (8501)**: single-call client.
- **Jaeger (16686)**: tracing backend.

---

## Quick start

```bash
cp .env.example .env
# add OPENAI_API_KEY
docker compose up --build
uvx planoai up config.yaml
```
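The `.env` file only needs the key referenced by the compose file; a minimal sketch (the value below is a placeholder, not a real key):

```shell
# .env (placeholder value; substitute your own OpenAI API key)
OPENAI_API_KEY=sk-...
```

Docker Compose reads this file automatically and passes the variable into the `risk-crew-agent` container.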

Open:
- Streamlit UI: http://localhost:8501
- Jaeger: http://localhost:16686

---

## How it works

1. The UI sends **one** request to Plano with the application JSON.
2. Plano routes the request across the four agents in order:
intake → risk → policy → memo.
3. Each agent returns JSON with a `step` key.
4. The memo agent returns the final response.

All model calls go through Plano’s LLM gateway, and guardrails run before any agent sees input.
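As a sketch of that single call (the applicant fields here are illustrative, not the demo's actual schema), the UI wraps the application JSON in one OpenAI-compatible chat request; the actual POST would go to Plano's `/v1/chat/completions` on port 8001:

```python
import json

# Illustrative loan application; real field names may differ.
application = {
    "applicant_name": "Jane Example",
    "credit_score": 712,
    "annual_income": 85000,
    "requested_amount": 25000,
}

# One OpenAI-compatible request; Plano routes it across the four agents.
payload = {
    "model": "openai/gpt-4o",
    "messages": [
        {"role": "user", "content": json.dumps(application)},
    ],
}

# Each agent tags its JSON output with a `step` key, in pipeline order:
expected_steps = ["intake", "risk", "policy", "memo"]
```

Sending it is a single `httpx.post("http://localhost:8001/v1/chat/completions", json=payload)`; the final assistant message is the memo agent's JSON.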

---

## Endpoints

Risk Crew Agent (10530):
- `POST /v1/agents/intake/chat/completions`
- `POST /v1/agents/risk/chat/completions`
- `POST /v1/agents/policy/chat/completions`
- `POST /v1/agents/memo/chat/completions`
- `GET /health`

PII Filter (10550):
- `POST /v1/tools/pii_security_filter`
- `GET /health`

Plano (8001):
- `POST /v1/chat/completions`
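A quick way to enumerate the four agent routes (host and ports assumed from the defaults above); the loop only prints the URLs, with the actual health probe left commented for when the stack is up:

```shell
# Defaults from this demo; adjust if you remapped ports.
AGENT_BASE="http://localhost:10530"

for step in intake risk policy memo; do
  echo "${AGENT_BASE}/v1/agents/${step}/chat/completions"
done
# curl -sf "${AGENT_BASE}/health" && echo "agent service healthy"
```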

---

## UI flow

1. Paste or select an application JSON.
2. Click **Assess Risk**.
3. Review the decision memo.
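The memo reviewed in step 3 is JSON; per the `decision_memo_agent` contract in `config.yaml`, it carries `step`, `recommended_action`, and `decision_memo`. A sketch of parsing it (the content values here are invented):

```python
import json

# Invented example of a final memo payload, matching the keys the
# decision_memo_agent is configured to return.
final_content = json.dumps({
    "step": "memo",
    "recommended_action": "CONDITIONAL_APPROVE",
    "decision_memo": "Approve subject to income verification.",
})

memo = json.loads(final_content)
assert memo["step"] == "memo"
print(memo["recommended_action"])  # prints CONDITIONAL_APPROVE
```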

---

## Troubleshooting

- **No response**: confirm Plano is running and ports are free (`8001`, `10530`, `10550`, `8501`).
- **LLM gateway errors**: check `LLM_GATEWAY_ENDPOINT=http://host.docker.internal:12000/v1`.
- **No traces**: check Jaeger and `OTLP_ENDPOINT`.
134 changes: 134 additions & 0 deletions demos/use_cases/credit_risk_case_copilot/config.yaml
@@ -0,0 +1,134 @@
version: v0.3.0

# Define the standalone credit risk agents
agents:
  - id: loan_intake_agent
    #url: http://localhost:10530/v1/agents/intake/chat/completions
    url: http://host.docker.internal:10530/v1/agents/intake/chat/completions
  - id: risk_scoring_agent
    #url: http://localhost:10530/v1/agents/risk/chat/completions
    url: http://host.docker.internal:10530/v1/agents/risk/chat/completions
  - id: policy_compliance_agent
    #url: http://localhost:10530/v1/agents/policy/chat/completions
    url: http://host.docker.internal:10530/v1/agents/policy/chat/completions
  - id: decision_memo_agent
    #url: http://localhost:10530/v1/agents/memo/chat/completions
    url: http://host.docker.internal:10530/v1/agents/memo/chat/completions

# HTTP filter for PII redaction and prompt injection detection
filters:
  - id: pii_security_filter
    #url: http://localhost:10550/v1/tools/pii_security_filter
    url: http://host.docker.internal:10550/v1/tools/pii_security_filter
    type: http

# LLM providers with model routing
model_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
    default: true
  - model: openai/gpt-4o-mini
    access_key: $OPENAI_API_KEY

# ToDo: Debug model aliases
# Model aliases for semantic naming
model_aliases:
  risk_fast:
    target: openai/gpt-4o-mini
  risk_reasoning:
    target: openai/gpt-4o

# Listeners
listeners:
  # Agent listener for routing credit risk requests
  - type: agent
    name: credit_risk_service
    port: 8001
    router: plano_orchestrator_v1
    address: 0.0.0.0
    agents:
      - id: loan_intake_agent
        description: |
          Loan Intake Agent - Step 1 of 4 in the credit risk pipeline. Run first.

          CAPABILITIES:
          * Normalize applicant data and calculate derived fields (e.g., DTI)
          * Identify missing or inconsistent fields
          * Produce structured intake JSON for downstream agents

          USE CASES:
          * "Normalize this loan application"
          * "Extract and validate applicant data"

          OUTPUT REQUIREMENTS:
          * Return JSON with step="intake" and normalized_data/missing_fields
          * Do not provide the final decision memo
          * This output is used by risk_scoring_agent next
        filter_chain:
          - pii_security_filter
      - id: risk_scoring_agent
        description: |
          Risk Scoring Agent - Step 2 of 4. Run after intake.

          CAPABILITIES:
          * Evaluate credit score, DTI, delinquencies, utilization
          * Assign LOW/MEDIUM/HIGH risk bands with confidence
          * Explain top 3 risk drivers with evidence

          USE CASES:
          * "Score the risk for this applicant"
          * "Provide risk band and drivers"

          OUTPUT REQUIREMENTS:
          * Use intake output from prior assistant message
          * Return JSON with step="risk" and risk_band/confidence_score/top_3_risk_drivers
          * This output is used by policy_compliance_agent next
        filter_chain:
          - pii_security_filter
      - id: policy_compliance_agent
        description: |
          Policy Compliance Agent - Step 3 of 4. Run after risk scoring.

          CAPABILITIES:
          * Verify KYC, income, and address checks
          * Flag policy exceptions (DTI, credit score, delinquencies)
          * Determine required documents by risk band

          USE CASES:
          * "Check policy compliance"
          * "List required documents"

          OUTPUT REQUIREMENTS:
          * Use intake + risk outputs from prior assistant messages
          * Return JSON with step="policy" and policy_checks/exceptions/required_documents
          * This output is used by decision_memo_agent next
        filter_chain:
          - pii_security_filter
      - id: decision_memo_agent
        description: |
          Decision Memo Agent - Step 4 of 4. Final response to the user.

          CAPABILITIES:
          * Create concise decision memos
          * Recommend APPROVE/CONDITIONAL_APPROVE/REFER/REJECT

          USE CASES:
          * "Draft a decision memo"
          * "Recommend a credit decision"

          OUTPUT REQUIREMENTS:
          * Use intake + risk + policy outputs from prior assistant messages
          * Return JSON with step="memo", recommended_action, decision_memo
          * Provide the user-facing memo as the final response
        filter_chain:
          - pii_security_filter

  # Model listener for internal LLM gateway (used by agents)
  - type: model
    name: llm_gateway
    address: 0.0.0.0
    port: 12000

# OpenTelemetry tracing
tracing:
  random_sampling: 100
59 changes: 59 additions & 0 deletions demos/use_cases/credit_risk_case_copilot/docker-compose.yaml
@@ -0,0 +1,59 @@
services:
  # Risk Crew Agent - CrewAI-based multi-agent service
  risk-crew-agent:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: risk-crew-agent
    restart: unless-stopped
    ports:
      - "10530:10530"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - LLM_GATEWAY_ENDPOINT=http://host.docker.internal:12000/v1
      - OTLP_ENDPOINT=http://jaeger:4318/v1/traces
    command: ["uv", "run", "python", "src/credit_risk_demo/risk_crew_agent.py"]
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      - jaeger

  # PII Security Filter (MCP)
  pii-filter:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: pii-filter
    restart: unless-stopped
    ports:
      - "10550:10550"
    command: ["uv", "run", "python", "src/credit_risk_demo/pii_filter.py"]

  # Streamlit UI
  streamlit-ui:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: streamlit-ui
    restart: unless-stopped
    ports:
      - "8501:8501"
    environment:
      - PLANO_ENDPOINT=http://host.docker.internal:8001/v1
    command: ["uv", "run", "streamlit", "run", "src/credit_risk_demo/ui_streamlit.py", "--server.port=8501", "--server.address=0.0.0.0"]
    extra_hosts:
      - "host.docker.internal:host-gateway"
    depends_on:
      - risk-crew-agent

  # Jaeger for distributed tracing
  jaeger:
    image: jaegertracing/all-in-one:latest
    container_name: jaeger
    restart: unless-stopped
    ports:
      - "16686:16686" # Jaeger UI
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
    environment:
      - COLLECTOR_OTLP_ENABLED=true
29 changes: 29 additions & 0 deletions demos/use_cases/credit_risk_case_copilot/pyproject.toml
@@ -0,0 +1,29 @@
[project]
name = "credit-risk-case-copilot"
version = "0.1.0"
description = "Multi-agent Credit Risk Assessment System with Plano Orchestration"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "fastapi>=0.115.0",
    "uvicorn>=0.30.0",
    "pydantic>=2.11.7",
    "crewai>=0.80.0",
    "crewai-tools>=0.12.0",
    "openai>=1.0.0",
    "httpx>=0.24.0",
    "streamlit>=1.40.0",
    "opentelemetry-api>=1.20.0",
    "opentelemetry-sdk>=1.20.0",
    "opentelemetry-exporter-otlp>=1.20.0",
    "opentelemetry-instrumentation-fastapi>=0.41b0",
    "python-dotenv>=1.0.0",
    "langchain-openai>=0.1.0",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src/credit_risk_demo"]