A Dockerized A2A agent server built with Vapor and Ollama. Ships with a product catalog agent that answers natural language questions about a tech store inventory, with full conversation memory via A2A task continuations.
- Product catalog agent — searches a JSON product database and generates natural language responses via Ollama
- SSE streaming — token-by-token response streaming over Server-Sent Events
- Conversation continuity — maintains chat history across requests using A2A `taskId`
- Agent card discovery — standard `.well-known/agent-card.json` endpoint
- Multiple agent modes — product (default), echo, or general LLM
- Docker Compose — one-command setup with Ollama + model auto-pull
You only need one of these:
| Option | What to install | Best for |
|---|---|---|
| Docker (recommended) | Docker Desktop | One-command setup, no local dependencies |
| Local | Swift 6.0+ (Xcode or swift.org) + Ollama (optional) | Faster iteration, no container overhead |
```bash
docker compose up --build
```

This starts three services:
- `ollama` — Ollama inference server
- `ollama-init` — pulls the `qwen3:0.6b` model (one-time)
- `a2a-server` — the A2A agent on port `8080`
First run takes a few minutes to pull the model. Subsequent starts are instant.
With Ollama (natural language responses):
```bash
# Install and start Ollama (if not already running)
brew install ollama
ollama serve &
ollama pull qwen3:0.6b

# Run the server
OLLAMA_HOST=http://localhost:11434 swift run
```

Without Ollama (returns structured search results, zero setup):

```bash
swift run
```

Without Ollama the agent still works — it returns formatted catalog search results instead of LLM-generated prose.
```bash
curl http://localhost:8080/.well-known/agent-card.json | jq .
```

Returns the agent's capabilities, skills, and metadata.
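The card can also be consumed programmatically. A hedged sketch that decodes a few common agent-card fields in Swift — the field names follow the public A2A agent-card schema and the sample payload is illustrative, not this server's actual output:

```swift
import Foundation

// Minimal slice of an A2A agent card. Field names follow the public
// A2A spec; verify against this server's actual response.
struct AgentCardSummary: Codable {
    struct Capabilities: Codable { let streaming: Bool? }
    let name: String
    let description: String?
    let capabilities: Capabilities?
}

// Sample payload standing in for the real HTTP response body.
let body = #"{"name": "Product Catalog Agent", "description": "Tech store inventory Q&A", "capabilities": {"streaming": true}}"#

let card = try JSONDecoder().decode(AgentCardSummary.self, from: Data(body.utf8))
// card.name and card.capabilities?.streaming are now available to a client
```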
```bash
curl -X POST http://localhost:8080 \
  -H 'Content-Type: application/json' \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "SendMessage",
    "params": {
      "message": {
        "messageId": "msg-1",
        "role": "ROLE_USER",
        "parts": [{"text": "What laptops do you have?"}]
      }
    }
  }'
```

To stream the response over SSE, use `SendStreamingMessage` (curl's `-N` disables output buffering):

```bash
curl -X POST http://localhost:8080 \
  -H 'Content-Type: application/json' \
  -N \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "SendStreamingMessage",
    "params": {
      "message": {
        "messageId": "msg-1",
        "role": "ROLE_USER",
        "parts": [{"text": "Compare your most expensive and cheapest products"}]
      }
    }
  }'
```

Include the `taskId` from a previous response to continue the conversation:
```bash
curl -X POST http://localhost:8080 \
  -H 'Content-Type: application/json' \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "SendStreamingMessage",
    "params": {
      "message": {
        "messageId": "msg-2",
        "role": "ROLE_USER",
        "parts": [{"text": "Which one has better reviews?"}],
        "taskId": "TASK_ID_FROM_PREVIOUS_RESPONSE"
      }
    }
  }'
```

Set the `AGENT_MODE` environment variable to switch agents:
| Mode | Description |
|---|---|
| `product` (default) | Product catalog + Ollama for natural language answers |
| `echo` | Simple streaming echo agent (no LLM required) |
| `llm` | General-purpose Ollama chat agent |
```bash
# Run as echo agent (no Ollama needed)
AGENT_MODE=echo swift run

# Run as general LLM agent
AGENT_MODE=llm OLLAMA_HOST=http://localhost:11434 swift run
```

| Variable | Default | Description |
|---|---|---|
| `AGENT_MODE` | `product` | Agent type: `product`, `echo`, or `llm` |
| `OLLAMA_HOST` | (none) | Ollama server URL (e.g., `http://localhost:11434`) |
| `OLLAMA_MODEL` | `qwen3:0.6b` | Ollama model name |
| `CATALOG_PATH` | `products.json` | Path to product catalog JSON file |
| `LOG_LEVEL` | `info` | Vapor log level |
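Defaults like those in the table above are straightforward to resolve via `ProcessInfo`. A minimal sketch — the `envValue` helper is illustrative, not taken from this repo's sources:

```swift
import Foundation

/// Resolve a configuration value from the environment, falling back to a
/// default. (Illustrative helper; the server's actual config code may differ.)
func envValue(_ key: String,
              default defaultValue: String?,
              environment: [String: String] = ProcessInfo.processInfo.environment) -> String? {
    environment[key] ?? defaultValue
}

// Simulate running with `AGENT_MODE=echo swift run`
let env = ["AGENT_MODE": "echo"]
let mode  = envValue("AGENT_MODE",   default: "product",    environment: env) // set explicitly
let model = envValue("OLLAMA_MODEL", default: "qwen3:0.6b", environment: env) // default applies
let host  = envValue("OLLAMA_HOST",  default: nil,          environment: env) // nil: Ollama disabled
```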
```
A2AServer/
├── Package.swift            # SPM manifest (Vapor + a2a-swift)
├── Dockerfile               # Multi-stage build (swift:6.0 → ubuntu:24.04)
├── docker-compose.yml       # Server + Ollama stack
├── .dockerignore
├── products.json            # Product catalog data (20 tech products)
└── Sources/
    ├── main.swift           # Vapor app setup + A2A route registration
    ├── configure.swift      # Vapor configuration (host, port)
    ├── ProductAgent.swift   # Catalog search + Ollama LLM agent
    ├── ProductCatalog.swift # JSON catalog loading and search
    ├── OllamaClient.swift   # Ollama HTTP client with streaming
    ├── EchoAgent.swift      # Simple streaming echo agent
    └── LLMAgent.swift       # General-purpose Ollama agent
```
```
User: "What laptops do you have?"
│
├─ 1. Search catalog for matching products
│     → SwiftBook Pro 16", SwiftBook Air 13"
│
├─ 2. Build Ollama prompt with:
│     • System prompt (catalog rules + all product data)
│     • Prior conversation turns (from task history)
│     • Current query
│
├─ 3. Stream Ollama response via SSE
│     → Token-by-token artifact updates
│
└─ 4. Store response in task history
      → Enables follow-up questions
```
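The prompt-building step above (system prompt, then prior turns from task history, then the current query) can be sketched as a pure function. The `ChatMessage` shape and role strings here are illustrative assumptions, not the actual a2a-swift or Ollama API types:

```swift
import Foundation

// Illustrative chat message shape for an Ollama /api/chat-style request.
struct ChatMessage {
    let role: String
    let content: String
}

/// Assemble the message list: system prompt first, then prior turns
/// from the task history, then the current user query.
func buildMessages(systemPrompt: String,
                   history: [(role: String, text: String)],
                   query: String) -> [ChatMessage] {
    var messages = [ChatMessage(role: "system", content: systemPrompt)]
    messages += history.map { ChatMessage(role: $0.role, content: $0.text) }
    messages.append(ChatMessage(role: "user", content: query))
    return messages
}

// A follow-up turn: two prior turns come from the stored task history.
let msgs = buildMessages(
    systemPrompt: "You are a product catalog assistant.",
    history: [("user", "What laptops do you have?"),
              ("assistant", "SwiftBook Pro 16\" and SwiftBook Air 13\".")],
    query: "Which one has better reviews?")
// msgs starts with the system prompt and ends with the new user query
```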
The server wires your agent into a2a-swift in four steps:
```swift
// 1. Your agent logic
let executor = ProductAgent(catalog: catalog, ollama: ollama)

// 2. SDK handles task lifecycle, events, streaming
let handler = DefaultRequestHandler(executor: executor, card: agentCard)

// 3. Router dispatches JSON-RPC methods
let router = A2ARouter(handler: handler)

// 4. Register with Vapor
app.get(".well-known", "agent-card.json") { ... }
app.post { ... router.route(body: body) ... }
```

Replace `products.json` with your own catalog. The expected format:
```json
[
  {
    "id": "unique-id",
    "name": "Product Name",
    "description": "Product description",
    "price": 999.99,
    "category": "Category",
    "inStock": true,
    "tags": ["keyword1", "keyword2"]
  }
]
```

Change the Ollama model via environment variable:
```bash
OLLAMA_MODEL=llama3.2:3b docker compose up --build
```

Or modify `docker-compose.yml` to pull a different model in the init container.
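The catalog format shown above maps directly onto a Swift `Codable` struct. A sketch using standard Foundation decoding — the struct mirrors the documented JSON fields, though the server's actual types in `ProductCatalog.swift` may differ:

```swift
import Foundation

// Mirrors one entry of the products.json format documented above.
struct Product: Codable {
    let id: String
    let name: String
    let description: String
    let price: Double
    let category: String
    let inStock: Bool
    let tags: [String]
}

// The sample entry from the expected-format section.
let json = #"""
[{"id": "unique-id", "name": "Product Name", "description": "Product description",
  "price": 999.99, "category": "Category", "inStock": true,
  "tags": ["keyword1", "keyword2"]}]
"""#

let products = try JSONDecoder().decode([Product].self, from: Data(json.utf8))
// A custom catalog decodes the same way from CATALOG_PATH
```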