feat(agent-monitoring): refer to sentry ai skills instead of hardcoding agent instructions (#113049)
- We have a separate repository of agent skills for setting up Sentry
- We should unite around that so the instructions don't drift between
  the different repos and skills
- Remove the hard-coded instructions and instruct the agent to retrieve
  instructions from skills.sentry.dev
       successMessage: t('Copied instrumentation prompt to clipboard'),
     });
   }}
@@ -65,237 +65,17 @@ export function CopyLLMPromptButton() {
  * Contextual note prepended when the instructions follow onboarding setup
  * steps so the LLM knows to complete those first.
  */
-export const LLM_ONBOARDING_INSTRUCTIONS_PREAMBLE = `> The setup steps above contain the correct DSN and project-specific SDK configuration — complete them first.
-> Then use the guide below for additional instrumentation and agent naming.`;
 
-export const LLM_ONBOARDING_INSTRUCTIONS = `# Instrument Sentry AI Agent Monitoring
+export const LLM_ONBOARDING_COPY_MARKDOWN = `
+> The setup steps above contain the correct DSN and project-specific SDK configuration — complete them first.
+> Then follow the skill references below for instrumentation and agent naming.
 
-Use this guide alongside the setup steps above.
+# Instrument Sentry AI Agent Monitoring
 
-## 1. Verify Sentry + Tracing
+Use these skills as the source of truth:
 
-**Search for Sentry initialization:**
-- JS/TS: \`Sentry.init\` in entry points, \`@sentry/*\` in package.json
-- Python: \`sentry_sdk.init\` in entry points, \`sentry-sdk\` in requirements
+## Skill References
-**If not found:** Set up Sentry first following the official docs:
-**Verify tracing is enabled** (REQUIRED for AI monitoring):
-- JS: \`tracesSampleRate: 1.0\` and \`sendDefaultPii: true\` in \`Sentry.init\`. Min SDK version \`10.28.0\`.
-- Python: \`traces_sample_rate=1.0\` and \`send_default_pii=True\` in \`sentry_sdk.init()\`.
-
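As a side note on the removed checklist: the two tracing requirements above map onto a single init call. A minimal sketch (the DSN value is a placeholder, not taken from this change):

```python
import sentry_sdk

# Hypothetical DSN — the real one comes from the project's setup steps.
sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    traces_sample_rate=1.0,  # REQUIRED for AI monitoring
    send_default_pii=True,   # lets the SDK attach prompts and responses
)
```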
-## 2. Check for Supported AI Libraries
-
-Check in this order — **use the highest-level framework found** (e.g., if using Vercel AI SDK with OpenAI provider, use Vercel integration, not OpenAI):
-
-| Library | Node.js | Browser | Python | How to Name the Agent |
-| --- | --- | --- | --- | --- |
-| Vercel AI SDK | Auto (needs \`experimental_telemetry\` or \`ToolLoopAgent\`) | - | - | \`experimental_telemetry.functionId\` or \`telemetry.functionId\` |
-| LangGraph | Auto | \`instrumentLangGraph()\` | Auto | \`name\` param on \`create_agent\` |
-| LangChain | Auto | \`createLangChainCallbackHandler()\` | Auto | \`name\` param on \`create_agent\` |
-| OpenAI Agents | - | - | Auto | \`name\` param on \`Agent()\` (required) |
-| Pydantic AI | - | - | Auto | \`name\` param on \`Agent()\` |
-| Mastra | Auto | - | - | \`name\` + \`id\` params on \`Agent()\` (required) |
-| OpenAI | Auto | \`instrumentOpenAiClient()\` | Auto | Manual instrumentation (see 3B) |
-| Anthropic | Auto | \`instrumentAnthropicAiClient()\` | Auto | Manual instrumentation (see 3B) |
-| Google GenAI | Auto | \`instrumentGoogleGenAiClient()\` | Auto | Manual instrumentation (see 3B) |
-
**If supported library found → Step 3A**
107
-
**If no supported library → Step 3B** (Manual span instrumentation)
108
-
109
-
**IMPORTANT: Always set the agent name.** It enables agent-specific dashboards, trace grouping, and alerting.
110
-
111
-
-## 3A. Enable Automatic Integration
-
-### Node.js (Auto-enabled)
-
-For Node.js (\`@sentry/node\`, \`@sentry/nestjs\`, etc.), AI integrations are **automatically enabled** — just ensure Sentry is initialized with tracing.
-
-**Vercel AI SDK Extra Step:** Pass \`experimental_telemetry\` to every \`generateText\`/\`streamText\` call, or configure \`telemetry\` on the \`ToolLoopAgent\` constructor:
-\`gen_ai.usage.input_tokens\` must be the **total** input tokens (cached + non-cached). Sentry computes cost as \`(input_tokens - cached_tokens) * price\`, so if \`input_tokens\` only contains non-cached tokens, costs go **negative**. Each \`gen_ai.request\` span should only report its own token usage, not an accumulation of tokens from previous spans in the conversation.
-
-\`\`\`python
-# Correct — input_tokens includes cached
-span.set_data("gen_ai.usage.input_tokens", 100)  # total
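The sign bug described above is plain arithmetic; a tiny sketch (the price constant is made up for illustration):

```python
PRICE = 1e-5  # hypothetical per-token price, not a real Sentry rate

def input_cost(input_tokens: int, cached_tokens: int) -> float:
    """Sentry's cost formula: (input_tokens - cached_tokens) * price."""
    return (input_tokens - cached_tokens) * PRICE

# 100 input tokens total, 80 of them served from cache:
print(input_cost(100, 80))  # total reported → small positive cost
print(input_cost(20, 80))   # only non-cached reported → negative cost
```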