Add telemetry integration docs with vendor-specific examples
- New docs/integrations/telemetry.md: comprehensive guide covering
LangFuse, LangSmith, OpenAI Assistants, W&B, Helicone, AgentOps,
and custom extraction_map. Includes standardized output schema,
meta-variables, troubleshooting.
- Updated agent-config.md: concise telemetry section linking to full guide,
removed old 'enabled' flag references and outdated examples
- Updated README: brief telemetry section with link to docs
- Added to mkdocs nav under Integrations
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add a `telemetry` block to your agent config to enable white-box testing. Humanbound fetches tool calls, memory operations, and resource usage from your observability platform (LangFuse, LangSmith, OpenAI Assistants, W&B, Helicone, AgentOps, or custom).
See the full [Telemetry Integration Guide](https://docs.humanbound.ai/integrations/telemetry/) for vendor-specific setup and the custom extraction map reference.
@@ -103,136 +102,23 @@ The `--endpoint / -e` flag on `hb connect` accepts a JSON config file (or inline
## Telemetry (Optional)
Telemetry configuration enables **white-box agentic testing**. When configured, Humanbound can see inside your agent's reasoning -- tool calls, memory operations, retrieval steps, and resource usage -- giving the judge far richer context than black-box request/response testing alone.
The `telemetry` object sits alongside `chat_completion`, `thread_init`, etc. in your config JSON. The CLI passes it through to the backend unchanged -- no additional CLI flags are needed.
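For instance, a combined endpoint config might look like the sketch below. The `chat_completion` shape shown here (a simple `url`/`method` pair) is an illustrative assumption, not the documented schema; only the `telemetry` object follows the fields described in this section.

```json
{
  "chat_completion": {
    "url": "https://agent.example.com/v1/chat",
    "method": "POST"
  },
  "telemetry": {
    "mode": "per_turn",
    "format": "custom",
    "extraction_map": {
      "tokens_used": "$.usage.total_tokens"
    }
  }
}
```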
### Modes
#### `end_of_conversation` (default)
After all turns in a conversation complete, Humanbound fetches telemetry from a separate API endpoint (your observability platform). Best for platforms that expose trace/run data via REST API.
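As a sketch, an `end_of_conversation` config could be as small as the block below. Only `custom` is shown as a `format` value elsewhere in this section; the `langfuse` value here is an assumption based on the supported-vendor list, and any vendor-specific credential fields are covered in the full Telemetry Integration Guide.

```json
{
  "telemetry": {
    "mode": "end_of_conversation",
    "format": "langfuse"
  }
}
```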
#### `per_turn`

Extracts metadata from each chat response using JSONPath navigation via `extraction_map`. No separate endpoint needed -- telemetry is pulled directly from the agent's response payload.
```json
{
  "telemetry": {
    "mode": "per_turn",
    "format": "custom",
    "extraction_map": {
      "tool_calls": "$.choices[0].message.tool_calls",
      "tokens_used": "$.usage.total_tokens",
      "model": "$.model"
    }
  }
}
```
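To make the `extraction_map` semantics concrete, here is a minimal Python sketch of JSONPath-style extraction. This is not Humanbound's implementation -- just an illustration of the dot-key and `[index]` subset used in the example above:

```python
import re

def extract(payload, path):
    """Resolve a simple JSONPath like '$.choices[0].message' against a payload.

    Illustrative only: supports dot-separated keys and [N] index selectors.
    """
    value = payload
    # Tokenize into bare keys and [N] selectors; '$' and '.' are skipped.
    for token in re.findall(r"\[\d+\]|[^.$\[\]]+", path):
        if token.startswith("["):
            value = value[int(token[1:-1])]   # list index
        else:
            value = value[token]              # dict key
    return value

extraction_map = {
    "tool_calls": "$.choices[0].message.tool_calls",
    "tokens_used": "$.usage.total_tokens",
    "model": "$.model",
}

# Hypothetical chat-completion response payload for illustration.
response = {
    "model": "gpt-4o",
    "usage": {"total_tokens": 211},
    "choices": [{"message": {"tool_calls": [{"name": "search"}]}}],
}

telemetry = {field: extract(response, path) for field, path in extraction_map.items()}
print(telemetry)
# -> {'tool_calls': [{'name': 'search'}], 'tokens_used': 211, 'model': 'gpt-4o'}
```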
### Configuration Reference
| Field | Required | Description |
|---|---|---|
| `mode` | No | `per_turn` or `end_of_conversation` (default: `end_of_conversation`) |
The `telemetry` block enables **white-box agentic testing**. When present, Humanbound fetches tool calls, memory operations, and resource usage from your observability platform after each conversation, giving the judge visibility into what the agent *did* -- not just what it *said*.
If the `telemetry` block is present, it is enabled. No separate flag needed.
For full configuration details, vendor-specific examples, and the custom extraction map reference, see the [Telemetry Integration Guide](../integrations/telemetry.md).
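A minimal sketch: the presence of the block alone turns telemetry on (the `extraction_map` path here is illustrative):

```json
{
  "telemetry": {
    "mode": "per_turn",
    "format": "custom",
    "extraction_map": {
      "model": "$.model"
    }
  }
}
```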