diff --git a/daprdocs/content/en/developing-ai/dapr-agents/_index.md b/daprdocs/content/en/developing-ai/dapr-agents/_index.md
index e75e9f12c60..4300c2edfc5 100644
--- a/daprdocs/content/en/developing-ai/dapr-agents/_index.md
+++ b/daprdocs/content/en/developing-ai/dapr-agents/_index.md
@@ -10,5 +10,5 @@ aliases:
### What is Dapr Agents?
-Dapr Agents is a framework for building LLM-powered autonomous agentic applications using Dapr's distributed systems capabilities. It provides tools for creating AI agents that can execute tasks, make decisions, and collaborate through workflows, while leveraging Dapr's state management, messaging, and observability features for reliable execution at scale.
+Dapr Agents is a Python framework for building LLM-powered autonomous agentic applications using Dapr's distributed systems capabilities. It provides tools for creating AI agents that can execute durable tasks, make decisions, and collaborate through workflows, while leveraging Dapr's state management, messaging, and observability features for reliable execution at scale.
\ No newline at end of file
diff --git a/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-core-concepts.md b/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-core-concepts.md
index e2515d21743..d960efea055 100644
--- a/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-core-concepts.md
+++ b/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-core-concepts.md
@@ -103,15 +103,15 @@ This example demonstrates creating a workflow-backed agent that runs autonomousl
In Summary:
-| Agent Type | Memory Type | Execution | Interaction Mode |
-|-----------------|-------------------------|---------------------------|------------------------------|
-| `Agent` | In-memory or Persistent | Ephemeral | Synchronous / Conversational |
-| `Durable Agent` | In-memory or Persistent | Durable (Workflow-backed) | Asynchronous / Headless |
+| Agent Type | Memory Type | Execution | Interaction Mode |
+|-----------------|-------------------------|-----------|--------------------------|
+| `Agent` | In-memory or Persistent | Ephemeral | Embedded |
+| `Durable Agent` | Persistent | Durable | PubSub / HTTP / Embedded |
- Regular `Agent`: Interaction is synchronous—you send conversational prompts and receive responses immediately. The conversation can be stored in memory or persisted, but the execution is ephemeral and does not survive restarts.
-- `DurableAgent` (Workflow-backed): Interaction is asynchronous—you trigger the agent once, and it runs autonomously in the background until completion. The conversation state can also be in memory or persisted, but the execution is durable and can resume across failures or restarts.
+- `DurableAgent` (Workflow-backed): Interaction is asynchronous—you trigger the agent once, and it runs autonomously in the background until completion. Both the conversation state and the execution are persisted, so the agent can resume across failures or restarts.
## Core Agent Features
@@ -248,7 +248,7 @@ travel_planner = DurableAgent(
| `ConversationDaprStateMemory` | Dapr State Store | ✅ | Query | Production |
-### Agent Services
+### Agent Runner
`AgentRunner` wires DurableAgents into three complementary hosting modes:
diff --git a/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-getting-started.md b/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-getting-started.md
index 4cd7fba0201..2654e74ef53 100644
--- a/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-getting-started.md
+++ b/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-getting-started.md
@@ -51,162 +51,326 @@ docker ps
Make sure you have Python already installed. `Python >=3.10`. For installation instructions, visit the official [Python installation guide](https://www.python.org/downloads/).
{{% /alert %}}
-## Create Your First Dapr Agent
+## Prepare your environment
-Let's create a weather assistant agent that demonstrates tool calling with Dapr state management used for conversation memory.
+In this getting started guide, you’ll work directly from the [Dapr Agents' quickstarts](https://github.com/dapr/dapr-agents/tree/main/quickstarts). We’ll focus on the **`06_durable_agent_http.py`** example, which implements a durable agent backed by Dapr’s workflow engine and exposed over HTTP.
-### 1. Create the Dapr components
+### 1. Clone the repository and examine its content
-Create a `components` directory and add two files:
+```bash
+git clone https://github.com/dapr/dapr-agents.git
+cd dapr-agents/quickstarts/01-dapr-agents-fundamentals
+```
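+
+To see the quickstart files referenced later in this guide, you can optionally list the folder contents:
+
+```bash
+# Top-level files include the agent scripts and requirements.txt
+ls
+# The resources folder holds the Dapr component definitions used below
+ls resources/
+```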
+
+### 2. Create a virtual environment and install dependencies
+
+From the `01-dapr-agents-fundamentals` folder, run:
+
+```bash
+python3.10 -m venv .venv
+
+# Activate the virtual environment
+# On Windows:
+.venv\Scripts\activate
+# On macOS/Linux:
+source .venv/bin/activate
+
+# Install dependencies from the quickstart
+pip install -r requirements.txt
+```
-`historystore.yaml`:
+This installs `dapr-agents` and any additional libraries needed by the examples.
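+
+Optionally, verify the installation and check which version was installed:
+
+```bash
+# Print the installed dapr-agents package metadata (version, location, dependencies)
+pip show dapr-agents
+```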
+
+## Understand the application
+
+This example creates an agent that assists with weather information and uses Dapr to handle LLM interactions, persist conversation history, and provide reliable, durable execution of the agent’s steps.
+
+For this quickstart you’ll primarily work with:
+
+* `06_durable_agent_http.py` – the main durable weather agent application exposed over HTTP
+* `function_tools.py` – contains `slow_weather_func`, the tool used by the agent
+* `resources/llm-provider.yaml` – Conversation API and LLM configuration
+* `resources/conversation-statestore.yaml` – conversation memory state store
+* `resources/workflow-statestore.yaml` – workflow and durable execution state store
+
+
+Open `06_durable_agent_http.py`:
+
+```python
+from dapr_agents.llm import DaprChatClient
+
+from dapr_agents import DurableAgent
+from dapr_agents.agents.configs import AgentMemoryConfig, AgentStateConfig
+from dapr_agents.memory import ConversationDaprStateMemory
+from dapr_agents.storage.daprstores.stateservice import StateStoreService
+from dapr_agents.workflow.runners import AgentRunner
+from function_tools import slow_weather_func
+
+
+def main() -> None:
+ # This agent is of type durable agent where the execution is durable
+ weather_agent = DurableAgent(
+ name="WeatherAgent",
+ role="Weather Assistant",
+ instructions=["Help users with weather information"],
+ tools=[slow_weather_func],
+ # Configure this agent to use Dapr Conversation API.
+ llm=DaprChatClient(component_name="llm-provider"),
+ # Configure the agent to use Dapr State Store for conversation history.
+ memory=AgentMemoryConfig(
+ store=ConversationDaprStateMemory(
+ store_name="conversation-statestore",
+ session_id="06-durable-agent-http",
+ )
+ ),
+ # This is where the execution state is stored
+ state=AgentStateConfig(
+ store=StateStoreService(store_name="workflow-statestore"),
+ ),
+ )
+
+ # This runner will run the agent and expose it on port 8001
+ runner = AgentRunner()
+ try:
+ runner.serve(weather_agent, port=8001)
+ finally:
+ runner.shutdown()
+
+
+if __name__ == "__main__":
+ try:
+ main()
+ except KeyboardInterrupt:
+ print("\nInterrupted by user. Exiting gracefully...")
+```
+
+This single file is the full application and shows how to create a production-style durable agent with Dapr:
+
+* **`DurableAgent`** wraps the LLM and tools in a workflow-backed execution model. Each reasoning step and tool call is persisted.
+* **`slow_weather_func`** (from `function_tools.py`) represents a slow external call, allowing you to observe how durable workflows resume after interruptions.
+* **`AgentRunner`** exposes the agent over HTTP on port `8001`, so other services (or `curl`) can start and query durable tasks.
+
+The sections below break down the key configuration areas and show how each Python configuration maps to a Dapr component.
+
+### LLM calls via Dapr Conversation API
+
+In the agent definition:
+
+```python
+llm=DaprChatClient(component_name="llm-provider"),
+```
+
+This uses [Dapr Conversation API]({{% ref "conversation-overview" %}}) via the `llm-provider` component. The corresponding Dapr component is defined in `resources/llm-provider.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
- name: historystore
+ name: llm-provider
spec:
- type: state.redis
+ type: conversation.openai
version: v1
metadata:
- - name: redisHost
- value: localhost:6379
- - name: redisPassword
- value: ""
+ - name: key
+ value: "{{OPENAI_API_KEY}}"
+ - name: model
+ value: gpt-4.1-2025-04-14
```
-This component will be used to store the conversation history, as LLMs are stateless and every chat interaction needs to send all the previous conversations to maintain context.
+* The `conversation.openai` component type configures the LLM provider and model.
+* `key` holds the API key used to authenticate with the LLM provider.
+
+Replace `{{OPENAI_API_KEY}}` with your actual API key so the Conversation API can perform chat completion.
+
+With this setup, you can swap models or even providers by editing the component YAML without changing the agent code.
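+
+As a minimal sketch, you could point the same component at Dapr's `echo` conversation component, a development component that simply echoes prompts back and needs no API key (assuming it is available in your Dapr version):
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: llm-provider
+spec:
+  type: conversation.echo
+  version: v1
+```
+
+Because the component name stays `llm-provider`, the agent code does not change.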
+
+### Conversation memory with a Dapr state store
-`openai.yaml`:
+In the agent definition, conversation memory is configured as:
+
+```python
+memory=AgentMemoryConfig(
+ store=ConversationDaprStateMemory(
+ store_name="conversation-statestore",
+ session_id="06-durable-agent-http",
+ )
+),
+```
+
+This tells the agent to store conversation history in a Dapr state store named `conversation-statestore`, under a given `session_id`. The matching Dapr component is `resources/conversation-statestore.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
- name: openai
+ name: conversation-statestore
spec:
- type: conversation.openai
+ type: state.redis
version: v1
metadata:
- - name: key
- value: "{{OPENAI_API_KEY}}"
- - name: model
- value: gpt-5-2025-08-07
- - name: temperature
- value: 1
+ - name: redisHost
+ value: localhost:6379
+ - name: redisPassword
+ value: ""
```
-This component wires the default `DaprChatClient` to OpenAI via the Conversation API. Replace the `{{OPENAI_API_KEY}}` placeholder with your actual OpenAI key by editing the file directly. This API key is essential for agents to communicate with the LLM, as the default chat client talks to OpenAI-compatible endpoints. If you don't have an API key, you can [create one here](https://platform.openai.com/api-keys). You can also tweak metadata (model, temperature, baseUrl, etc.) to point at compatible OpenAI-style providers.
+* The state store uses Redis to persist conversation turns.
+* The agent reads and writes messages here so the LLM can maintain context across multiple HTTP calls.
-### 3. Create the agent with weather tool
+You can browse this state later (for example, with Redis Insight) to see how conversation history is stored.
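+
+For a quick peek from the command line, here is a minimal sketch using the default `dapr_redis` container created by `dapr init` (the exact key names depend on how the memory store prefixes them):
+
+```bash
+# List the keys Dapr has written to the local Redis instance
+docker exec dapr_redis redis-cli KEYS '*'
+```
+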
-Create `weather_agent.py`:
+### Durable execution state with a workflow state store
+
+The agent’s durable execution state is configured as:
```python
-import asyncio
-from dapr_agents import tool, Agent
-from dapr_agents.agents.configs import AgentMemoryConfig
-from dapr_agents.memory import ConversationDaprStateMemory
-from dotenv import load_dotenv
+state=AgentStateConfig(
+ store=StateStoreService(store_name="workflow-statestore"),
+),
+```
-load_dotenv()
+This uses a Dapr state store named `workflow-statestore` to persist workflow and agent execution state. The corresponding component is `resources/workflow-statestore.yaml`:
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: workflow-statestore
+spec:
+ type: state.redis
+ version: v1
+ metadata:
+ - name: redisHost
+ value: localhost:6379
+ - name: redisPassword
+ value: ""
+ - name: actorStateStore
+ value: "true"
+```
-@tool
-def get_weather() -> str:
- """Get current weather."""
- return "It's 72°F and sunny"
+* This is another Redis-backed store; it holds the durable workflow state.
+* `actorStateStore: "true"` is a required setting so the store can back the workflow engine.
+* If the process stops mid-execution, the workflow engine uses this state to resume from the last persisted step instead of starting over, so complex agent workflows don't restart from the beginning and repeat expensive LLM and tool calls.
+
+Together, these features make the agent **durable**, **reliable**, and **provider-agnostic**, while keeping the agent code itself focused on behavior and tools.
-async def main():
- memory_config = AgentMemoryConfig(
- store=ConversationDaprStateMemory(
- store_name="historystore",
- session_id="hello-world",
- )
- )
+## Run the durable agent with Dapr
- agent = Agent(
- name="WeatherAgent",
- role="Weather Assistant",
- instructions=["Help users with weather information"],
- memory=memory_config,
- tools=[get_weather],
- )
+From the `01-dapr-agents-fundamentals` folder, with your virtual environment activated:
- # First interaction
- response1 = await agent.run("Hi! My name is John. What's the weather?")
- print(f"Agent: {response1}")
+```bash
+dapr run --app-id durable-agent --resources-path resources -- python 06_durable_agent_http.py
+```
- # Second interaction - agent should remember the name
- response2 = await agent.run("What's my name?")
- print(f"Agent: {response2}")
+This command:
+* Starts a Dapr sidecar using the components in `resources/`.
+* Runs `06_durable_agent_http.py` with the durable `WeatherAgent`.
+* Exposes the agent’s HTTP API on port `8001`.
-if __name__ == "__main__":
- asyncio.run(main())
-```
+### Trigger the agent with a prompt
+
+In a separate terminal, ask the agent about the weather.
-This code creates an agent with a single weather tool and uses Dapr for memory persistence.
+```bash
+curl -i -X POST http://localhost:8001/run \
+ -H "Content-Type: application/json" \
+ -d '{"task": "What is the weather in London?"}'
+```
-### 4. Set up virtual environment to install dapr-agent
+The response includes a `WORKFLOW_ID` that represents the workflow execution.
-For the latest version of Dapr Agents, check the [PyPI page](https://pypi.org/project/dapr-agents/).
+### Query the workflow status or result
-Create a `requirements.txt` file with the necessary dependencies:
+Use the `WORKFLOW_ID` from the POST response to query progress or final result:
-```txt
-dapr-agents
+```bash
+curl -i -X GET http://localhost:8001/run/WORKFLOW_ID
```
-Create and activate a virtual environment, then install the dependencies:
+Replace `WORKFLOW_ID` with the value you received from the POST request.
-```bash
-# Create a virtual environment
-python3.10 -m venv .venv
+### Expected behavior
-# Activate the virtual environment
-# On Windows:
-.venv\Scripts\activate
-# On macOS/Linux:
-source .venv/bin/activate
+* The agent exposes a REST endpoint at `/run`.
+* A POST to `/run` accepts a prompt, schedules a workflow execution, and returns a workflow ID.
+* You can GET `/run/{WORKFLOW_ID}` at any time (even after stopping and restarting the agent) to check status or retrieve the final answer.
+* The workflow orchestrates:
-# Install dependencies
-pip install -r requirements.txt
+ * An LLM call to interpret the task and decide if a tool is needed.
+ * A tool call (using `slow_weather_func`) to fetch the weather data.
+ * A final LLM step that incorporates the tool result into the response.
+* Every step is durably persisted, so no LLM or tool call is repeated unless it fails.
+
+## Test durability by interrupting the agent
+
+To see durable execution in action:
+
+1. **Start a run**
+ Send the POST request to `/run` as shown above and note the `WORKFLOW_ID`.
+
+2. **Kill the agent process**
+   While the request is being processed (during `slow_weather_func`, which is intentionally delayed by 5 seconds), stop the agent process:
+
+ * Go to the terminal running `dapr run ...`.
+ * Press `Ctrl+C` to stop the app and sidecar.
+
+3. **Restart the agent**
+ Start it again with the same command:
+
+```bash
+ dapr run --app-id durable-agent --resources-path resources -- python 06_durable_agent_http.py
```
-### 5. Run with Dapr
+4. **Query the same workflow**
+ In the other terminal, query the same workflow ID:
+
+ ```bash
+ curl -i -X GET http://localhost:8001/run/WORKFLOW_ID
+ ```
+
+You’ll see that the workflow continues from its last persisted step instead of starting over. Tool and LLM calls that already completed are not re-executed, and you do not need to send a new prompt. Once the workflow completes, the GET request returns the final result.
+
+In summary, the Dapr Workflow engine preserves the execution state of the agent across restarts, enabling reliable long-running interactions that combine LLM calls, tools, and stateful reasoning.
+
+## Inspect workflow executions with Diagrid Dashboard
+
+After starting the durable agent with Dapr, you can use the local [Diagrid Dashboard](https://diagrid.ws/diagrid-dashboard-docs) to visualize and inspect your workflow state, including detailed execution history for each run. The dashboard runs as a container and connects to the same state store used by Dapr workflows (by default, the local Redis instance).
+
+Start the Diagrid Dashboard container using Docker:
```bash
-dapr run --app-id weatheragent --resources-path ./components -- python weather_agent.py
+docker run -p 8080:8080 ghcr.io/diagridio/diagrid-dashboard:latest
```
-This command starts a Dapr sidecar with the conversation component and launches the agent that communicates with the sidecar for state persistence. Notice how in the agent's responses, it remembers the user's name from the first chat interaction, demonstrating the conversation memory in action.
+Open the dashboard in a browser at [http://localhost:8080](http://localhost:8080) to explore your local workflow executions.
+## Inspect Conversation History with Redis Insights
-### 6. Enable Redis Insights (Optional)
+Dapr uses [Redis]({{% ref setup-redis.md %}}) by default for state management and pub/sub messaging, which are fundamental to Dapr Agents’ agentic workflows. To inspect the Redis instance and see both the **conversation** and **workflow** state for this durable agent, you can use Redis Insight.
-Dapr uses [Redis]({{% ref setup-redis.md %}}) by default for state management and pub/sub messaging, which are fundamental to Dapr Agents's agentic workflows. To inspect the Redis instance, a great UI tool to use is Redis Insight, and you can use it to inspect the agent memory populated earlier. To run Redis Insights, run:
+Run Redis Insight:
```bash
docker run --rm -d --name redisinsight -p 5540:5540 redis/redisinsight:latest
```
-Once running, access the Redis Insight interface at `http://localhost:5540/`
-Inside Redis Insight, you can connect to a Redis instance, so let's connect to the one used by the agent:
+Once running, access the Redis Insight interface at `http://localhost:5540/`.
+
+Inside Redis Insight, you can connect to the Redis instance used by Dapr:
* Port: 6379
-* Host (Linux): 172.17.0.1
-* Host (Windows/Mac): host.docker.internal (example `host.docker.internal:6379`)
+* Host (Linux): `172.17.0.1`
+* Host (Windows/Mac): `host.docker.internal` (for example, `host.docker.internal:6379`)
-Redis Insight makes it easy to visualize and manage the data powering your agentic workflows, ensuring efficient debugging, monitoring, and optimization.
+Redis Insight makes it easy to inspect keys and values stored in the state stores (such as `conversation-statestore` and `workflow-statestore`), which is useful for debugging and understanding how your durable agents behave.

-Here you can browse the state store used in the agent and explore its data.
+Here you can browse the state stores used by the agent (`conversation-statestore` and `workflow-statestore`) and explore their data.
## Next Steps
-Now that you have Dapr Agents installed and running, explore more advanced examples and patterns in the [quickstarts]({{% ref dapr-agents-quickstarts.md %}}) section to learn about multi-agent workflows, durable agents, and integration with Dapr's powerful distributed capabilities.
-
+Now that you have Dapr Agents installed via the quickstart, and a durable HTTP agent running end-to-end, explore more examples and patterns in the [quickstarts]({{% ref dapr-agents-quickstarts.md %}}) section to learn about multi-agent workflows, pub/sub-driven agents, tracing, and deeper integration with Dapr’s building blocks.
diff --git a/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-introduction.md b/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-introduction.md
index 40a3bae070f..4809352e36a 100644
--- a/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-introduction.md
+++ b/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-introduction.md
@@ -11,12 +11,13 @@ aliases:

-Dapr Agents is a developer framework for building durable and resilient AI agent systems powered by Large Language Models (LLMs). Built on the battle-tested Dapr project, it enables developers to create autonomous systems that reason through problems, make dynamic decisions, and collaborate seamlessly. It includes built-in observability and stateful workflow execution to ensure agentic workflows complete successfully, regardless of complexity. Whether you're developing single-agent applications or complex multi-agent workflows, Dapr Agents provides the infrastructure for intelligent, adaptive systems that scale across environments.
+Dapr Agents is a developer framework for building durable and resilient AI agent systems powered by Large Language Models (LLMs). Built on the battle-tested Dapr project, it enables developers to create autonomous systems that have identity, reason through problems, make dynamic decisions, and collaborate seamlessly. It includes built-in observability and stateful workflow execution to ensure agentic workflows complete successfully, regardless of complexity. Whether you're developing single-agent applications or complex multi-agent workflows, Dapr Agents provides the infrastructure for intelligent, adaptive systems that scale across environments.
## Core Capabilities
-
+- **Agent Identity**: With Dapr Agents, each agent is assigned a unique cryptographic identity that is used to authenticate agent interactions and enforce authorization across services and infrastructure.
+- **Durable Execution**: Agents created with Dapr Agents are backed by Dapr’s workflow engine, which persists every agent interaction with LLMs and tools into a durable state store, so execution can recover and continue even after the agent restarts.
+- **Resilience**: Dapr Agents can recover from transient failures with automatic retry policies, timeouts, and circuit breakers, and can also apply durable retries backed by workflow state to recover from longer-lasting failures.
- **Scale and Efficiency**: Run thousands of agents efficiently on a single core. Dapr distributes single and multi-agent apps transparently across fleets of machines and handles their lifecycle.
-- **Workflow Resilience**: Automatically retry agentic workflows and to ensure task completion.
- **Data-Driven Agents**: Directly integrate with databases, documents, and unstructured data by connecting to dozens of different data sources.
- **Multi-Agent Systems**: Secure and observable by default, enabling collaboration between agents.
- **Kubernetes-Native**: Easily deploy and manage agents in Kubernetes environments.
@@ -25,20 +26,19 @@ Dapr Agents is a developer framework for building durable and resilient AI agent
## Key Features
-Dapr Agents provides specialized modules designed for creating intelligent, autonomous systems. Each module is designed to work independently, allowing you to use any combination that fits your application needs.
-
+Dapr Agents provides specialized modules for creating intelligent, autonomous systems. Each module works independently, allowing you to use any combination that fits your application needs.
-| Feature | Description |
-|----------------------------------------------------------------------------------------------|-------------|
-| [**LLM Integration**]({{% ref "dapr-agents-core-concepts.md#llm-integration" %}}) | Uses Dapr [Conversation API]({{% ref conversation-overview.md %}}) to abstract LLM inference APIs for chat completion, or provides native clients for other LLM integrations such as embeddings, audio, etc.
-| [**Structured Outputs**]({{% ref "dapr-agents-core-concepts.md#structured-outputs" %}}) | Leverage capabilities like OpenAI's Function Calling to generate predictable, reliable results following JSON Schema and OpenAPI standards for tool integration.
-| [**Tool Selection**]({{% ref "dapr-agents-core-concepts.md#tool-calling" %}}) | Dynamic tool selection based on requirements, best action, and execution through [Function Calling](https://platform.openai.com/docs/guides/function-calling) capabilities.
-| [**MCP Support**]({{% ref "dapr-agents-core-concepts.md#mcp-support" %}}) | Built-in support for [Model Context Protocol](https://modelcontextprotocol.io/) enabling agents to dynamically discover and invoke external tools through standardized interfaces.
-| [**Memory Management**]({{% ref "dapr-agents-core-concepts.md#memory" %}}) | Retain context across interactions with options from simple in-memory lists to vector databases, integrating with [Dapr state stores]({{% ref state-management-overview.md %}}) for scalable, persistent memory.
-| [**Durable Agents**]({{% ref "dapr-agents-core-concepts.md#durable-agents" %}}) | Workflow-backed agents that provide fault-tolerant execution with persistent state management and automatic retry mechanisms for long-running processes.
-| [**Headless Agents**]({{% ref "dapr-agents-core-concepts.md#agent-services" %}}) | Expose agents over REST for long-running tasks, enabling programmatic access and integration without requiring user interfaces or human intervention.
-| [**Event-Driven Communication**]({{% ref "dapr-agents-core-concepts.md#event-driven-orchestration" %}}) | Enable agent collaboration through [Pub/Sub messaging]({{% ref pubsub-overview.md %}}) for event-driven communication, task distribution, and real-time coordination in distributed systems.
-| [**Agent Orchestration**]({{% ref "dapr-agents-core-concepts.md#deterministic-workflows" %}}) | Deterministic agent orchestration using [Dapr Workflows]({{% ref workflow-overview.md %}}) with higher-level tasks that interact with LLMs for complex multi-step processes.
+| Feature | Description |
+|-------------------------------------------------------------------------------------------------------|-------------|
+| [**LLM Integration**]({{% ref "dapr-agents-core-concepts.md#llm-integration" %}})                       | Abstracts the LLM inference API for chat completion using the Dapr [Conversation API]({{% ref conversation-overview.md %}}), enabling you to swap LLM providers without changing high-level agent code, and includes native clients for embeddings, audio, and other specialized integrations.
+| [**Structured Outputs**]({{% ref "dapr-agents-core-concepts.md#structured-outputs" %}}) | Leverage capabilities like OpenAI's Function Calling to generate predictable, reliable results following JSON Schema and OpenAPI standards for tool integration.
+| [**Tool Selection**]({{% ref "dapr-agents-core-concepts.md#tool-calling" %}}) | Dynamic tool selection based on requirements, best action, and execution through [Function Calling](https://platform.openai.com/docs/guides/function-calling) capabilities.
+| [**MCP Support**]({{% ref "dapr-agents-core-concepts.md#mcp-support" %}}) | Built-in support for [Model Context Protocol](https://modelcontextprotocol.io/) enabling agents to dynamically discover and invoke external tools through standardized interfaces.
+| [**Memory Management**]({{% ref "dapr-agents-core-concepts.md#memory" %}}) | Retain context across interactions with options from simple in-memory lists to vector databases, integrating with [Dapr state stores]({{% ref state-management-overview.md %}}) for scalable, persistent memory.
+| [**Durable Agents**]({{% ref "dapr-agents-core-concepts.md#durable-agents" %}}) | Workflow-backed agents that provide fault-tolerant execution with persistent state management and automatic retry mechanisms for long-running processes.
+| [**Agent Runner**]({{% ref "dapr-agents-core-concepts.md#agent-runner" %}})                             | Expose agents over HTTP or subscribe them to pub/sub topics for long-running tasks, enabling programmatic access to agents without requiring a user interface or human intervention.
+| [**Event-Driven Communication**]({{% ref "dapr-agents-core-concepts.md#event-driven-orchestration" %}}) | Enable agent collaboration through [Pub/Sub messaging]({{% ref pubsub-overview.md %}}) for event-driven communication, task distribution, and real-time coordination in distributed systems.
+| [**Agent Orchestration**]({{% ref "dapr-agents-core-concepts.md#deterministic-workflows" %}}) | Deterministic agent orchestration using [Dapr Workflows]({{% ref workflow-overview.md %}}) with higher-level tasks that interact with LLMs for complex multi-step processes.
## Agentic Patterns
@@ -46,7 +46,7 @@ Dapr Agents enables a comprehensive set of patterns that represent different app
-These patterns exist along a spectrum of autonomy, from predictable workflow-based approaches to fully autonomous agents that can dynamically plan and execute their own strategies. Each pattern addresses specific use cases and offers different trade-offs between deterministic outcomes and autonomy:
+These patterns range from deterministic, workflow-driven designs to fully autonomous agents capable of dynamic planning and execution; each addresses different use cases and balances predictability against autonomy.
| Pattern | Description |
|----------------------------------------------------------------------------------------|-------------|
diff --git a/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-quickstarts.md b/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-quickstarts.md
index 663eac16140..facb20f1bd3 100644
--- a/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-quickstarts.md
+++ b/daprdocs/content/en/developing-ai/dapr-agents/dapr-agents-quickstarts.md
@@ -17,9 +17,9 @@ aliases:
## Quickstarts
-| Scenario | What You'll Learn |
-|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------|
-| [Hello World](https://github.com/dapr/dapr-agents/tree/main/quickstarts/01-hello-world) <br> A rapid introduction that demonstrates core Dapr Agents concepts through simple, practical examples. | - **Basic LLM Usage**: Simple text generation with OpenAI models <br> - **Creating Agents**: Building agents with custom tools in under 20 lines of code <br> - **Simple Workflows**: Setting up multi-step LLM processes <br> - **DurableAgent Hosting**: Learn `AgentRunner.run`, `AgentRunner.subscribe`, and `AgentRunner.serve` using the `03_durable_agent_*.py` samples |
+| Scenario | What You'll Learn |
+|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------|
+| [Dapr Agents Fundamentals](https://github.com/dapr/dapr-agents/tree/main/quickstarts/01-dapr-agents-fundamentals) <br> An end-to-end introduction to the Dapr Agents programming model, progressing from basic LLM calls to durable agents, workflows, memory, tools, and tracing. | - **LLM Clients and Agents**: Call LLMs directly and wrap them in agents with roles and instructions <br> - **Tools and MCP**: Invoke local tools and dynamically loaded MCP tools <br> - **Agent Memory**: Persist and restore multi-turn conversation state <br> - **Durable Agents**: Run agents as workflow-backed executions via HTTP or pub/sub <br> - **Deterministic Workflows**: Build workflows with LLM and agent activities <br> - **Observability**: Enable distributed tracing for agents and workflows with Zipkin |
| [LLM Call with Dapr Chat Client](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02-llm-call-dapr) <br> Explore interaction with Language Models through Dapr Agents' `DaprChatClient`, featuring basic text generation with plain text prompts and templates. | - **Text Completion**: Generating responses to prompts <br> - **Swapping LLM providers**: Switching LLM backends without application code change <br> - **Resilience**: Setting timeout, retry and circuit-breaking <br> - **PII Obfuscation**: Automatically detect and mask sensitive user information |
| [LLM Call with OpenAI Client](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02-llm-call-open-ai) <br> Leverage native LLM client libraries with Dapr Agents using the OpenAI Client for chat completion, audio processing, and embeddings. | - **Text Completion**: Generating responses to prompts <br> - **Structured Outputs**: Converting LLM responses to Pydantic objects <br> *Note: Other quickstarts for specific clients are available for [Elevenlabs](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02-llm-call-elevenlabs), [Hugging Face](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02-llm-call-hugging-face), and [Nvidia](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02-llm-call-nvidia).* |
| Standalone & Durable Agents <br> [Standalone Agent Tool Call](https://github.com/dapr/dapr-agents/tree/main/quickstarts/03-standalone-agent-tool-call) · [Durable Agent Tool Call](https://github.com/dapr/dapr-agents/tree/main/quickstarts/03-durable-agent-tool-call) | - **Standalone Agents**: Build conversational agents with tools in under 20 lines using the `Agent` class <br> - **Durable Agents**: Upgrade to workflow-backed `DurableAgent` instances with `AgentRunner.run/subscribe/serve` <br> - **Tool Definition**: Reuse tools with the `@tool` decorator and structured args models <br> - **Function Calling**: Let LLMs invoke Python functions safely |