1 change: 1 addition & 0 deletions examples/README.md
@@ -7,6 +7,7 @@ This directory contains runnable examples for Agent Control. Each example has it
| Example | Summary | Docs |
|:--------|:--------|:-----|
| Agent Control Demo | End-to-end workflow: create controls, run a controlled agent, update controls dynamically. | https://docs.agentcontrol.dev/examples/agent-control-demo |
| Azure AI Foundry (LangGraph) | Customer support agent with runtime guardrails on Azure AI Foundry Hosted Agents. | https://docs.agentcontrol.dev/examples/azure-foundry-langgraph |
| CrewAI | Combine Agent Control security controls with CrewAI guardrails for customer support. | https://docs.agentcontrol.dev/examples/crewai |
| Google ADK Plugin | Recommended packaged ADK integration using `AgentControlPlugin` for model and tool guardrails. | https://docs.agentcontrol.dev/examples/google-adk-plugin |
| Google ADK Callbacks | Lower-level ADK lifecycle hook integration for manual model and tool guardrails. | https://docs.agentcontrol.dev/examples/google-adk-callbacks |
10 changes: 10 additions & 0 deletions examples/azure_foundry_langgraph/.dockerignore
@@ -0,0 +1,10 @@
.venv/
__pycache__/
.env
.azure/
infra/
*.pyc
.git/
.dockerignore
README.md
azure.yaml
11 changes: 11 additions & 0 deletions examples/azure_foundry_langgraph/.env.example
@@ -0,0 +1,11 @@
# --- Agent App ---
AGENT_NAME=customer-support-agent
AGENT_CONTROL_URL=http://localhost:8000
POLICY_REFRESH_INTERVAL_SECONDS=2

# Azure AI Foundry
AZURE_AI_PROJECT_ENDPOINT=https://<your-foundry-project>.cognitiveservices.azure.com
MODEL_DEPLOYMENT_NAME=gpt-4.1-mini

# Optional - leave empty for demo (no auth)
AGENT_CONTROL_API_KEY=
70 changes: 70 additions & 0 deletions examples/azure_foundry_langgraph/DEMO_SCRIPT.md
@@ -0,0 +1,70 @@
# Demo Script

Open the Foundry Agent Playground and Agent Control UI side by side. All controls should be **disabled** to start. **Start a new chat for each step.**

## Demo 1: PII Protection

### Unprotected

```
Share customer details for jane@example.com
```
Leaks everything: SSN (123-45-6789), phone, DOB, billing address, credit card.

### Enable control

Enable `block-pii` in the Agent Control UI. **New chat:**

```
Share customer details for jane@example.com
```
**Blocked.** The SSN pattern is caught in the tool output.

### Toggle

**Disable** `block-pii`. **New chat**, same prompt - leaks SSN again. **Re-enable**, new chat - blocked.

## Demo 2: Refund Limits

### Unprotected

**New chat:**
```
Process a refund of $50 for order ORD-1001
```
Approved.

**New chat:**
```
Process a refund of $150 for order ORD-1001
```
Also approved - no guardrails.

### Enable control

Enable `max-refund-amount` in the Agent Control UI. **New chat:**

```
Process a refund of $50 for order ORD-1003
```
Approved - under $100.

**New chat:**
```
Process a refund of $150 for order ORD-1001
```
**Blocked.** The JSON evaluator checks `refund_amount > 100`. You can change the max threshold in the UI.

### Toggle

**Disable** `max-refund-amount`. **New chat**, $150 refund goes through. **Re-enable**, new chat - blocked again.
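The refund check above can be sketched in a few lines of Python. This is a toy stand-in for the json evaluator's logic (the field names and threshold handling are illustrative, not the real Agent Control implementation):

```python
import json

# Toy illustration of a post-stage json evaluator: parse the tool's JSON
# output and block when refund_amount exceeds the configured threshold.
MAX_REFUND = 100

def evaluate_refund(tool_output: str) -> str:
    payload = json.loads(tool_output)
    if payload.get("refund_amount", 0) > MAX_REFUND:
        return "block"
    return "allow"

print(evaluate_refund('{"order_id": "ORD-1003", "refund_amount": 50}'))   # allow
print(evaluate_refund('{"order_id": "ORD-1001", "refund_amount": 150}'))  # block
```

Raising the threshold in the UI corresponds to changing `MAX_REFUND` here, except it takes effect at runtime without touching the agent.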

## Controls reference

| Control | Steps | Stage | Evaluator | What it catches |
|---------|-------|-------|-----------|-----------------|
| `block-pii` | `lookup_customer`, `llm_call` | post | regex | SSN pattern `\d{3}-\d{2}-\d{4}` |
| `max-refund-amount` | `process_refund` | post | json | `refund_amount` max 100 |
| `block-internal-data` | `get_order_internal` | post | regex | internal notes, margins, fraud flags |
| `block-prompt-injection` | `llm_call` | pre | regex | injection phrases |
| `block-competitor-discuss` | `llm_call` | pre | regex | competitor comparisons |
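The regex evaluators in the table reduce to pattern matching on step inputs or outputs. A minimal sketch (the injection phrase pattern is an assumption for illustration; the real patterns live in the control definitions on the server):

```python
import re

# Toy stand-ins for the regex evaluators listed above.
PATTERNS = {
    "block-pii": re.compile(r"\d{3}-\d{2}-\d{4}"),  # SSN pattern from the table
    "block-prompt-injection": re.compile(
        r"ignore (all )?previous instructions", re.IGNORECASE
    ),
}

def matches(control_name: str, text: str) -> bool:
    """Return True if the named control's pattern fires on the text."""
    return bool(PATTERNS[control_name].search(text))

print(matches("block-pii", "SSN: 123-45-6789"))  # True
```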
10 changes: 10 additions & 0 deletions examples/azure_foundry_langgraph/Dockerfile
@@ -0,0 +1,10 @@
FROM python:3.12-slim

WORKDIR /app
COPY . agent/
WORKDIR /app/agent

RUN pip install --upgrade pip && pip install --no-cache-dir -r requirements.txt

EXPOSE 8088
CMD ["python", "hosted_app.py"]
221 changes: 221 additions & 0 deletions examples/azure_foundry_langgraph/README.md
@@ -0,0 +1,221 @@
# Agent Control on Azure AI Foundry (LangGraph)

A customer support agent running on [Azure AI Foundry Hosted Agents](https://learn.microsoft.com/en-us/azure/foundry/agents/concepts/hosted-agents), governed by Agent Control at runtime.

Demonstrates:
- **Runtime guardrails** - toggle controls on/off from the UI without redeploying the agent
- **Step-specific controls** - different policies for different tools and the LLM itself
- **Pre and post evaluation** - block dangerous inputs before the LLM sees them, block sensitive outputs before the user sees them

## Architecture

```
User --> Azure AI Foundry Hosted Agent (port 8088)
|
+--> @control() decorator on every tool + LLM call
| |
| +--> Agent Control Server (separate deployment)
|
+--> LangGraph StateGraph
|
+--> Azure OpenAI (gpt-4.1-mini)
+--> Tools (4 total: 2 safe, 2 sensitive)
```

## Tools

| Tool | Returns | Controlled? |
|------|---------|-------------|
| `get_order_status` | Shipping status, items, ETA, tracking | No server control (safe data) |
| `get_order_internal` | Payment info, margins, internal notes, fraud flags | `block-internal-data` (post) |
| `lookup_customer` | Name, email, membership, recent orders | No server control (safe data) |
| `lookup_customer_pii` | Phone, DOB, address, credit card, risk score | `block-customer-pii` (post) |

The LLM call itself (`llm_call`) is also wrapped with `@control()`:
- `block-prompt-injection` (pre) - blocks adversarial inputs
- `block-competitor-discuss` (pre) - blocks competitor comparisons (business policy)
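Conceptually, a pre-stage control runs a check on a step's input before the step executes. The sketch below is a self-contained toy, not the real `@control()` decorator (which consults the Agent Control server and honors runtime toggles); all names are illustrative:

```python
import functools
import re

# Assumed injection phrase for illustration only.
INJECTION_RE = re.compile(r"ignore (all )?previous instructions", re.IGNORECASE)

def control(step_name: str):
    """Toy decorator: block llm_call inputs that match the injection pattern."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(prompt: str):
            if step_name == "llm_call" and INJECTION_RE.search(prompt):
                raise PermissionError(f"{step_name}: blocked by pre-stage control")
            return fn(prompt)
        return inner
    return wrap

@control("llm_call")
def llm_call(prompt: str) -> str:
    return f"model response to: {prompt}"
```

The real SDK adds the runtime piece: each decorated step re-reads its policies on a refresh interval, so enabling a control in the UI changes behavior without redeploying.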

## Prerequisites

- Python 3.12+
- Docker
- [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) (`az`)
- [Azure Developer CLI](https://learn.microsoft.com/en-us/azure/developer/azure-developer-cli/install-azd) (`azd`) with the agents extension
- An Azure subscription with permissions to create resources (Owner or User Access Administrator on the resource group)

## Setup

### 1. Start the Agent Control server

For local development, run Agent Control locally:

```bash
curl -L https://raw.githubusercontent.com/agentcontrol/agent-control/refs/heads/main/docker-compose.yml \
| docker compose -f - up -d
```

Verify: `curl http://localhost:8000/health`

For production or demo, deploy Agent Control to an Azure VM (or any host with Docker):

```bash
# Create a VM
az group create --name my-demo-rg --location eastus
az vm create --resource-group my-demo-rg --name agent-control-vm \
--image Ubuntu2204 --size Standard_B2s \
--admin-username azureuser --generate-ssh-keys
az vm open-port --resource-group my-demo-rg --name agent-control-vm --port 8000

# SSH in and deploy
ssh azureuser@<public-ip>
sudo apt update && sudo apt install -y docker.io docker-compose-v2
curl -L https://raw.githubusercontent.com/agentcontrol/agent-control/refs/heads/main/docker-compose.yml \
| docker compose -f - up -d
```

### 2. Install dependencies

```bash
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```

### 3. Configure environment

```bash
cp .env.example .env
```

Edit `.env`:
- `AGENT_CONTROL_URL` - your Agent Control server URL (e.g., `http://localhost:8000` or `http://<vm-ip>:8000`)
- `MODEL_DEPLOYMENT_NAME` - your Azure OpenAI model deployment name
- `AZURE_AI_PROJECT_ENDPOINT` - your Foundry project endpoint (only needed for local testing against the Azure model)

### 4. Seed controls

```bash
python seed_controls.py
```

This registers the agent and creates 4 controls (all disabled by default):
- `block-prompt-injection` - `llm_call` pre stage
- `block-internal-data` - `get_order_internal` post stage
- `block-customer-pii` - `lookup_customer_pii` post stage
- `block-competitor-discuss` - `llm_call` pre stage
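For orientation, one seeded control has roughly this shape. The field names here are an assumption for readability, not the real Agent Control API schema; see `seed_controls.py` for the actual calls:

```python
# Illustrative shape of one seeded control (assumed field names).
block_customer_pii = {
    "name": "block-customer-pii",
    "step": "lookup_customer_pii",
    "stage": "post",
    "evaluator": {"type": "regex", "pattern": r"\d{3}-\d{2}-\d{4}"},
    "enabled": False,  # every control starts disabled for the demo
}
```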

### 5. Test locally

```bash
python local_test.py
```

Enable/disable controls in the Agent Control UI and re-run to see different behavior.

### 6. Deploy to Azure AI Foundry

#### Install the azd agents extension

```bash
azd extension install azure.ai.agents
```

#### Initialize azd

From this example directory:

```bash
azd auth login
azd init -t Azure-Samples/azd-ai-starter-basic -e my-agent-env
```

When prompted:
- "Continue initializing?" - Yes
- "Overwrite existing files?" - Keep existing files

#### Register the agent

```bash
azd ai agent init -m agent.yaml
```

This reads `agent.yaml`, resolves model deployments, and adds the agent as a service in `azure.yaml`.

#### Provision Azure resources

```bash
azd provision
```

This creates (if they don't already exist):
- Azure AI Services account + Foundry project
- Azure Container Registry (ACR)
- Capability host for Hosted Agents
- Application Insights + Log Analytics
- Model deployment (gpt-4.1-mini)

> **Note:** You need Owner or User Access Administrator role on the resource group for the RBAC role assignments in the Bicep template.

#### Deploy the agent

```bash
azd deploy CustomerSupportAgentLG
```

This builds the Docker image remotely in ACR and deploys it as a Hosted Agent. The output includes the playground URL and agent endpoint.

#### If deploying to an existing Foundry project

If you already provisioned resources and want to deploy from a fresh checkout, set the required azd environment variables manually:

```bash
azd env new my-agent-env
azd env set AZURE_RESOURCE_GROUP "<your-resource-group>"
azd env set AZURE_LOCATION "<region>"
azd env set AZURE_SUBSCRIPTION_ID "<subscription-id>"
azd env set AZURE_AI_ACCOUNT_NAME "<ai-services-account-name>"
azd env set AZURE_AI_PROJECT_NAME "<project-name>"
azd env set AZURE_AI_PROJECT_ID "<full-arm-resource-id-of-project>"
azd env set AZURE_AI_PROJECT_ENDPOINT "<project-services-endpoint>"
azd env set AZURE_OPENAI_ENDPOINT "<openai-endpoint>"
azd env set AZURE_CONTAINER_REGISTRY_ENDPOINT "<acr-login-server>"
azd env set ENABLE_HOSTED_AGENTS "true"

azd deploy CustomerSupportAgentLG
```

> **Tip:** The `AZURE_AI_PROJECT_ID` is the full ARM resource ID, e.g.,
> `/subscriptions/.../resourceGroups/.../providers/Microsoft.CognitiveServices/accounts/<account>/projects/<project>`

#### Important notes

- The Dockerfile must include `pip install --upgrade pip` to avoid packaging version errors during remote builds
- Add a `.dockerignore` to exclude `.venv/`, `.env`, `infra/`, `.azure/` from the Docker build context
- Hosted Agents require `linux/amd64` containers - azd handles this via `remoteBuild: true`
- After resetting the Agent Control DB, you must redeploy the agent (so `agent_control.init()` runs fresh)
- The SDK refreshes controls every `POLICY_REFRESH_INTERVAL_SECONDS` seconds (2 in this example) - after toggling a control in the UI, wait a few seconds before testing
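When scripting a demo, the refresh delay is easy to forget. A small helper makes the wait explicit (the 2-second value matches `.env.example`; adjust it to your own setting):

```python
import time

# Matches POLICY_REFRESH_INTERVAL_SECONDS in this example's .env.example.
POLICY_REFRESH_INTERVAL_SECONDS = 2

def wait_for_policy_refresh(margin: float = 0.5) -> None:
    """After toggling a control in the UI, wait one refresh interval
    plus a small margin before sending the next test prompt."""
    time.sleep(POLICY_REFRESH_INTERVAL_SECONDS + margin)
```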

## Demo Flow

1. Start with all controls **disabled** - show the unprotected agent leaking internal notes and PII
2. Enable controls one by one in the Agent Control UI - each blocks a different category of risk
3. Toggle controls on/off in real-time - same agent, same code, different behavior

See [DEMO_SCRIPT.md](DEMO_SCRIPT.md) for the full step-by-step demo script with prompts, expected results, and talking points.

## File Overview

| File | Purpose |
|------|---------|
| `tools.py` | 4 tools, each decorated with `@control()` |
| `graph.py` | LangGraph StateGraph with `@control()` on the LLM call |
| `agent_control_setup.py` | `agent_control.init()` bootstrap + health check |
| `model.py` | Azure OpenAI chat model via `langchain-azure-ai` |
| `settings.py` | pydantic-settings configuration |
| `seed_controls.py` | Creates the 4 demo controls on the server |
| `local_test.py` | Local integration test (no Azure model needed) |
| `hosted_app.py` | `from_langgraph()` entrypoint for Foundry |
| `Dockerfile` | Container for Foundry Hosted Agents (port 8088) |
| `.dockerignore` | Excludes .venv, .env, infra from container build |
| `agent.yaml` | Foundry Hosted Agent definition |
| `requirements.txt` | Python dependencies |
| `.env.example` | Environment variable template |
25 changes: 25 additions & 0 deletions examples/azure_foundry_langgraph/agent.yaml
@@ -0,0 +1,25 @@
# yaml-language-server: $schema=https://raw.githubusercontent.com/microsoft/AgentSchema/refs/heads/main/schemas/v1.0/ContainerAgent.yaml

kind: hosted
name: CustomerSupportAgentLG
description: Customer support agent with Agent Control runtime guardrails on Azure AI Foundry
protocols:
- protocol: responses
version: v1
environment_variables:
- name: AZURE_OPENAI_ENDPOINT
value: ${AZURE_OPENAI_ENDPOINT}
- name: OPENAI_API_VERSION
value: 2025-03-01-preview
- name: AZURE_AI_MODEL_DEPLOYMENT_NAME
value: gpt-4.1-mini
- name: AZURE_AI_PROJECT_ENDPOINT
value: ${AZURE_AI_PROJECT_ENDPOINT}
- name: AGENT_CONTROL_URL
value: ${AGENT_CONTROL_URL}
- name: AGENT_CONTROL_API_KEY
value: ${AGENT_CONTROL_API_KEY}
- name: AGENT_NAME
value: customer-support-agent
- name: POLICY_REFRESH_INTERVAL_SECONDS
value: "2"
34 changes: 34 additions & 0 deletions examples/azure_foundry_langgraph/agent_control_setup.py
@@ -0,0 +1,34 @@
import httpx

import agent_control

from settings import settings


def check_server_health() -> None:
"""Fail fast if the Agent Control server is unreachable."""
url = f"{settings.agent_control_url}/health"
try:
resp = httpx.get(url, timeout=5)
resp.raise_for_status()
except httpx.HTTPError as exc:
raise RuntimeError(
f"Agent Control server not reachable at {url}: {exc}"
) from exc


def bootstrap_agent_control() -> None:
"""Initialize the Agent Control SDK and verify server connectivity."""
check_server_health()

init_kwargs = {
"agent_name": settings.agent_name,
"agent_description": "Customer support agent with Agent Control runtime guardrails",
"server_url": settings.agent_control_url,
"observability_enabled": True,
"policy_refresh_interval_seconds": settings.policy_refresh_interval_seconds,
}
if settings.agent_control_api_key:
init_kwargs["api_key"] = settings.agent_control_api_key

agent_control.init(**init_kwargs)