Merged
2 changes: 1 addition & 1 deletion a2a/a2a_currency_converter/deployment/k8s.yaml
@@ -83,7 +83,7 @@ spec:
- name: LLM_API_KEY
value: dummy
# For production, use a Secret:
# - name: OPENAI_API_KEY
# - name: LLM_API_KEY
# valueFrom:
# secretKeyRef:
# name: llm-credentials
5 changes: 4 additions & 1 deletion a2a/weather_service/.dockerignore
@@ -1 +1,4 @@
.venv
.venv

deployment

32 changes: 31 additions & 1 deletion a2a/weather_service/README.md
@@ -1,3 +1,33 @@
# Weather Service Agent

The Weather Service Agent is an example of an [A2A](https://a2a-protocol.org/latest/) agent.

This agent depends on the Kagenti [Weather Tool](https://github.com/kagenti/agent-examples/tree/main/mcp/weather_tool). The weather tool should be running before chatting with the weather service agent.

## Run the agent on Kubernetes with Kagenti

You may deploy using Kagenti's UI or through a Kubernetes manifest.

### Deploy using Kagenti's UI

Kagenti's UI is aware of this example agent. To deploy through the UI:

- Browse to http://kagenti-ui.localtest.me:8080/agents/
- Choose "Build from source"
- Select the weather service agent
- Expand "Environment Variables"
- Choose "Import from File/URL", select "URL", and enter https://raw.githubusercontent.com/kagenti/agent-examples/refs/heads/main/a2a/weather_service/.env.openai
  - If using [Ollama](https://ollama.com/), use https://raw.githubusercontent.com/kagenti/agent-examples/refs/heads/main/a2a/weather_service/.env.ollama instead
- Click "Fetch and parse", then "Import"
- Click "Build and deploy agent"
- Chat with the agent, e.g. `What is the weather in New York?`
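
The chat step above ultimately posts an A2A message to the agent's endpoint. As an illustrative sketch (not the UI's exact request), an A2A `message/send` JSON-RPC payload looks roughly like this; the method and field names follow the current A2A spec and may differ in older protocol versions:

```python
import json
import uuid

# Illustrative A2A JSON-RPC payload. Method and field names follow the
# current A2A spec; verify against the spec version your deployment uses.
payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "messageId": str(uuid.uuid4()),
            "parts": [{"kind": "text", "text": "What is the weather in New York?"}],
        }
    },
}
print(json.dumps(payload, indent=2))
```

A client would POST this payload to the agent's A2A endpoint (the `AGENT_ENDPOINT` value in the deployment manifest) with `Content-Type: application/json`.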

### Deploy using a Kubernetes deployment manifest

Deploy the sample manifest:

```bash
kubectl apply -f deployment/k8s.yaml
```
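
After applying the manifest, you can sanity-check the deployment and fetch the Agent Card. This is a sketch: the card is served under `/.well-known/` per A2A convention, but the exact filename can vary by A2A version, so adjust the path if the request 404s:

```bash
# Wait for the agent pod to become ready
kubectl -n team1 rollout status deployment/weather-service

# Forward the Service port locally
kubectl -n team1 port-forward svc/weather-service 8080:8080 &

# Fetch the Agent Card (path per A2A convention; newer spec versions may use agent-card.json)
curl -s http://localhost:8080/.well-known/agent.json
```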
163 changes: 163 additions & 0 deletions a2a/weather_service/deployment/k8s.yaml
@@ -0,0 +1,163 @@
# Best practice: create a ServiceAccount for each agent to allow for more granular permissions if needed.
apiVersion: v1
kind: ServiceAccount
metadata:
name: weather-service
namespace: team1
---
# Expose the A2A endpoint and Agent Card.
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: weather-service
# kagenti.io/type=agent is required for the Agent to be discovered by Kagenti and show up in the UI
kagenti.io/type: agent
protocol.kagenti.io/a2a: ""
name: weather-service
namespace: team1
spec:
ports:
- name: http
port: 8080
protocol: TCP
targetPort: 8000
selector:
app.kubernetes.io/name: weather-service
kagenti.io/type: agent
type: ClusterIP
---
# Deploy a test build of the Agent
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/component: agent
app.kubernetes.io/name: weather-service
kagenti.io/framework: LangGraph
kagenti.io/type: agent
kagenti.io/workload-type: deployment
protocol.kagenti.io/a2a: ""
name: weather-service
namespace: team1
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/name: weather-service
kagenti.io/type: agent
template:
metadata:
labels:
app.kubernetes.io/name: weather-service
kagenti.io/framework: LangGraph
kagenti.io/type: agent
protocol.kagenti.io/a2a: ""
spec:
containers:
- env:
# Port to bind A2A server
- name: PORT
value: "8000"
# Host to bind A2A server (0.0.0.0 means bind to all interfaces)
- name: HOST
value: 0.0.0.0
# AGENT_ENDPOINT is the endpoint that appears on the Agent Card
- name: AGENT_ENDPOINT
value: http://weather-service.team1.svc.cluster.local:8080/
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://otel-collector.kagenti-system.svc.cluster.local:8335
- name: KEYCLOAK_URL
value: http://keycloak.keycloak.svc.cluster.local:8080
- name: UV_CACHE_DIR
value: /app/.cache/uv
# Make direct call to tool.
- name: MCP_URL
value: http://weather-tool-mcp.team1.svc.cluster.local:8000/mcp
# Use this value to make call through MCP Gateway
# value: http://mcp-gateway-istio.gateway-system.svc.cluster.local:8080/mcp
# This is the LLM_API_BASE for ollama running on the host machine. In production, this would point to a real LLM API.
- name: LLM_API_BASE
value: http://host.docker.internal:11434/v1
# This is a dummy key; if using a real LLM API use a real key
- name: LLM_API_KEY
value: dummy
# For production, use a Secret:
# - name: LLM_API_KEY
# valueFrom:
# secretKeyRef:
# name: llm-credentials
# key: openai-api-key
# This is a dummy key; if using OpenAI use a real key
- name: OPENAI_API_KEY
value: dummy
# For production, use a Secret:
# - name: OPENAI_API_KEY
# valueFrom:
# secretKeyRef:
# name: llm-credentials
# key: openai-api-key
- name: LLM_MODEL
value: llama3.2:3b-instruct-fp16
# Kagenti will supply a GITHUB_SECRET_NAME, but the weather service does not need it.
# - name: GITHUB_SECRET_NAME
# value: github-token-secret
# Pin to a specific version
image: ghcr.io/kagenti/agent-examples/weather_service:v0.1.0-alpha.1
# Or use the latest version
# image: ghcr.io/kagenti/agent-examples/weather_service:latest
imagePullPolicy: Always
name: agent
ports:
- containerPort: 8000
name: http
protocol: TCP
readinessProbe:
failureThreshold: 12
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
tcpSocket:
port: 8000
timeoutSeconds: 1
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 100m
memory: 256Mi
# Note that Kagenti's UI doesn't create security contexts for agents, but it's a good practice to set them for defense in depth. This is a fairly restrictive security context that should work for most agents, but you may need to adjust it based on your Agent's needs.
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsUser: 1000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
# This is for agents that use uv and its dependency cache
- mountPath: /app/.cache
name: cache
# Something like this is needed if the Agent has a Marvin dependency
# - mountPath: /.marvin
# name: marvin
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
serviceAccountName: weather-service
terminationGracePeriodSeconds: 30
# This is for agents that use uv and its dependency cache
volumes:
- emptyDir: {}
name: cache
# Something like this is needed if the Agent has a Marvin dependency
# - emptyDir: {}
# name: marvin
5 changes: 4 additions & 1 deletion mcp/weather_tool/.dockerignore
@@ -1 +1,4 @@
.venv
.venv

deployment

36 changes: 36 additions & 0 deletions mcp/weather_tool/README.md
@@ -0,0 +1,36 @@
# MCP Weather tool

This tool demonstrates a small MCP server. The server implements a `get_weather` tool that returns the current weather for a city, resolving the city name to coordinates via the [Open-Meteo geocoding API](https://open-meteo.com/en/docs/geocoding-api).
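
A `get_weather` implementation along these lines typically resolves the city to coordinates first. The response shape below is illustrative of the Open-Meteo geocoding API (see the API docs for the full schema), and the parsing is a sketch, not the tool's actual code:

```python
# Sketch: extract coordinates from an Open-Meteo geocoding response.
# The sample dict mirrors the documented response shape; real code would
# fetch it from https://geocoding-api.open-meteo.com/v1/search?name=<city>
sample_response = {
    "results": [
        {
            "name": "New York",
            "latitude": 40.71427,
            "longitude": -74.00597,
            "country": "United States",
        }
    ]
}

def extract_coordinates(response: dict) -> tuple[float, float]:
    """Return (latitude, longitude) of the best match; raise if no results."""
    results = response.get("results") or []
    if not results:
        raise ValueError("city not found")
    top = results[0]
    return top["latitude"], top["longitude"]

lat, lon = extract_coordinates(sample_response)
print(lat, lon)  # 40.71427 -74.00597
```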

## Test the MCP server locally

Run locally

```bash
cd mcp/weather_tool
uv run --no-sync weather_tool.py
```
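
Once the server is running, you can exercise it with an MCP client. Below is a minimal sketch using the official MCP Python SDK's streamable HTTP client; it assumes the server listens on `http://localhost:8000/mcp` and that `get_weather` takes a `city` argument (check the tool's input schema), so it only runs against a live server:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    # Connect to the locally running weather tool over streamable HTTP
    async with streamablehttp_client("http://localhost:8000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("tools:", [t.name for t in tools.tools])
            # The argument name is an assumption; inspect the tool's input schema
            result = await session.call_tool("get_weather", {"city": "New York"})
            print(result.content)

asyncio.run(main())
```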

## Deploy the MCP server to Kagenti

### Deploy using the Kagenti UI

- Browse to http://kagenti-ui.localtest.me:8080/tools
- Import Tool
- Deploy from Source
- Select weather tool

### Deploy using a Kubernetes deployment descriptor

Alternatively, you can deploy a pre-built image to Kubernetes:

- `kubectl apply -f mcp/weather_tool/deployment/k8s.yaml`

## Test the MCP server using Kagenti

- Visit http://kagenti-ui.localtest.me:8080/tools/team1/weather-tool
- Click "Connect & list tools"
- Expand "get_weather"
- Click "invoke tool"
- Enter the name of a city
- Click "invoke"
124 changes: 124 additions & 0 deletions mcp/weather_tool/deployment/k8s.yaml
@@ -0,0 +1,124 @@
# Best practice: create a ServiceAccount for each tool to allow for more granular permissions if needed.
apiVersion: v1
kind: ServiceAccount
metadata:
name: weather-tool
namespace: team1
---
# Expose the MCP endpoint
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: weather-tool
# kagenti.io/type=tool is required for the MCP server that provides the tool to be discovered by Kagenti and show up in the UI
kagenti.io/type: tool
protocol.kagenti.io/mcp: ""
name: weather-tool-mcp
namespace: team1
spec:
ports:
- name: http
port: 8000
protocol: TCP
targetPort: 8000
selector:
app.kubernetes.io/name: weather-tool
kagenti.io/type: tool
type: ClusterIP
---
# Deploy a test build of the Tool
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: weather-tool
kagenti.io/framework: Python
# Set inject to true for [AuthBridge](https://github.com/kagenti/kagenti-extensions/tree/main/AuthBridge)
kagenti.io/inject: disabled
kagenti.io/type: tool
kagenti.io/workload-type: deployment
protocol.kagenti.io/mcp: ""
name: weather-tool
namespace: team1
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/name: weather-tool
kagenti.io/type: tool
template:
metadata:
labels:
app.kubernetes.io/name: weather-tool
kagenti.io/framework: Python
# Set inject to true for [AuthBridge](https://github.com/kagenti/kagenti-extensions/tree/main/AuthBridge)
kagenti.io/inject: disabled
kagenti.io/transport: streamable_http
kagenti.io/type: tool
protocol.kagenti.io/mcp: ""
spec:
containers:
- env:
# Port to bind MCP server
- name: PORT
value: "8000"
# Host to bind MCP server (0.0.0.0 means bind to all interfaces)
- name: HOST
value: 0.0.0.0
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://otel-collector.kagenti-system.svc.cluster.local:8335
- name: KEYCLOAK_URL
value: http://keycloak.keycloak.svc.cluster.local:8080
- name: UV_CACHE_DIR
value: /app/.cache/uv
# Pin to a specific version
image: ghcr.io/kagenti/agent-examples/weather_tool:v0.1.0-alpha.1
# Or use the latest version
# image: ghcr.io/kagenti/agent-examples/weather_tool:latest
imagePullPolicy: Always
name: mcp
ports:
- containerPort: 8000
name: http
protocol: TCP
# Note that a readinessProbe is recommended. Kagenti doesn't generate one for UI
# deployments, and we follow that pattern in this example, but you may want to add one for your own tools.
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 100m
memory: 256Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsUser: 1000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
# This is for agents that use uv and its dependency cache
- mountPath: /app/.cache
name: cache
- mountPath: /tmp
name: tmp
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
serviceAccountName: weather-tool
terminationGracePeriodSeconds: 30
volumes:
# This is for tools that use uv and its dependency cache
- emptyDir: {}
name: cache
- emptyDir: {}
name: tmp