Commit 7c752c0

feat(agent-monitoring): refer to sentry ai skills instead of hardcoding agent instructions (#113049)
- We have a separate repository with agent skills for setting up Sentry.
- We should unite around that so we don't have drift between the different repos and skills.
- Remove the hard-coded instructions and instruct the agent to retrieve instructions from skills.sentry.dev.
1 parent e66f412 commit 7c752c0

2 files changed: +12 additions, -233 deletions


static/app/views/insights/pages/agents/llmOnboardingInstructions.tsx

Lines changed: 9 additions & 229 deletions
@@ -51,7 +51,7 @@ export function CopyLLMPromptButton() {
         trackAnalytics('agent-monitoring.copy-llm-prompt-click', {
           organization,
         });
-        copy(LLM_ONBOARDING_INSTRUCTIONS, {
+        copy(LLM_ONBOARDING_COPY_MARKDOWN, {
           successMessage: t('Copied instrumentation prompt to clipboard'),
         });
       }}
@@ -65,237 +65,17 @@ export function CopyLLMPromptButton() {
  * Contextual note prepended when the instructions follow onboarding setup
  * steps so the LLM knows to complete those first.
  */
-export const LLM_ONBOARDING_INSTRUCTIONS_PREAMBLE = `> The setup steps above contain the correct DSN and project-specific SDK configuration — complete them first.
-> Then use the guide below for additional instrumentation and agent naming.`;
 
-export const LLM_ONBOARDING_INSTRUCTIONS = `# Instrument Sentry AI Agent Monitoring
+export const LLM_ONBOARDING_COPY_MARKDOWN = `
+> The setup steps above contain the correct DSN and project-specific SDK configuration — complete them first.
+> Then follow the skill references below for instrumentation and agent naming.
 
-Use this guide alongside the setup steps above.
+# Instrument Sentry AI Agent Monitoring
 
-## 1. Verify Sentry + Tracing
+Use these skills as the source of truth:
 
-**Search for Sentry initialization:**
-- JS/TS: \`Sentry.init\` in entry points, \`@sentry/*\` in package.json
-- Python: \`sentry_sdk.init\` in entry points, \`sentry-sdk\` in requirements
+## Skill References
 
-**If not found:** Set up Sentry first following the official docs:
-- JS/TS: https://docs.sentry.io/platforms/javascript/guides/node/
-- Python: https://docs.sentry.io/platforms/python/
-
-**Verify tracing is enabled** (REQUIRED for AI monitoring):
-- JS: \`tracesSampleRate: 1.0\` and \`sendDefaultPii: true\` in \`Sentry.init\`. Min SDK version \`10.28.0\`.
-- Python: \`traces_sample_rate=1.0\` and \`send_default_pii=True\` in \`sentry_sdk.init()\`.
-
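The two tracing prerequisites listed in the removed guide above amount to a short init call. A minimal Python sketch (the DSN here is a placeholder; the real one comes from the project's setup steps):

```python
import sentry_sdk

# Minimal config sketch for AI monitoring; DSN is a placeholder, not a real project key.
sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    traces_sample_rate=1.0,   # tracing is required for AI monitoring
    send_default_pii=True,    # needed to capture prompt inputs/outputs
)
```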
-## 2. Check for Supported AI Libraries
-
-Check in this order — **use the highest-level framework found** (e.g., if using Vercel AI SDK with OpenAI provider, use Vercel integration, not OpenAI):
-
-| Library | Node.js | Browser | Python | How to Name the Agent |
-|---------|---------|---------|--------|-----------------------|
-| Vercel AI SDK | Auto (needs \`experimental_telemetry\` or \`ToolLoopAgent\`) | - | - | \`experimental_telemetry.functionId\` or \`telemetry.functionId\` |
-| LangGraph | Auto | \`instrumentLangGraph()\` | Auto | \`name\` param on \`create_agent\` |
-| LangChain | Auto | \`createLangChainCallbackHandler()\` | Auto | \`name\` param on \`create_agent\` |
-| OpenAI Agents | - | - | Auto | \`name\` param on \`Agent()\` (required) |
-| Pydantic AI | - | - | Auto | \`name\` param on \`Agent()\` |
-| Mastra | Auto | - | - | \`name\` + \`id\` params on \`Agent()\` (required) |
-| LiteLLM | - | - | \`LiteLLMIntegration()\` | Manual instrumentation (see 3B) |
-| OpenAI | Auto | \`instrumentOpenAiClient()\` | Auto | Manual instrumentation (see 3B) |
-| Anthropic | Auto | \`instrumentAnthropicAiClient()\` | Auto | Manual instrumentation (see 3B) |
-| Google GenAI | Auto | \`instrumentGoogleGenAiClient()\` | Auto | Manual instrumentation (see 3B) |
-
-**If supported library found → Step 3A**
-**If no supported library → Step 3B** (Manual span instrumentation)
-
-**IMPORTANT: Always set the agent name.** It enables agent-specific dashboards, trace grouping, and alerting.
-
-## 3A. Enable Automatic Integration
-
-### Node.js (Auto-enabled)
-
-For Node.js (\`@sentry/node\`, \`@sentry/nestjs\`, etc.), AI integrations are **automatically enabled** — just ensure Sentry is initialized with tracing.
-
-**Vercel AI SDK Extra Step:** Pass \`experimental_telemetry\` to every \`generateText\`/\`streamText\` call, or configure \`telemetry\` on the \`ToolLoopAgent\` constructor:
-\`\`\`javascript
-// Option 1: generateText / streamText / generateObject
-const result = await generateText({
-  model: openai("gpt-5.4"),
-  prompt: "Tell me a joke",
-  experimental_telemetry: {
-    isEnabled: true,
-    functionId: "my_agent", // Names the agent in Sentry
-    recordInputs: true,
-    recordOutputs: true,
-  },
-});
-
-// Option 2: ToolLoopAgent class
-const agent = new ToolLoopAgent({
-  model: "openai/gpt-5.4",
-  tools: { /* ... */ },
-  telemetry: {
-    isEnabled: true,
-    functionId: "my_agent", // Names the agent in Sentry
-    recordInputs: true,
-    recordOutputs: true,
-  },
-});
-const agentResult = await agent.generate({ prompt: "Tell me a joke" });
-\`\`\`
-
-### Browser (Manual Client Wrapping)
-
-For browser apps (\`@sentry/browser\`, \`@sentry/react\`), **manually wrap each AI client**:
-
-**OpenAI:**
-\`\`\`javascript
-const client = Sentry.instrumentOpenAiClient(new OpenAI(), {
-  recordInputs: true,
-  recordOutputs: true,
-});
-\`\`\`
-
-**Anthropic:**
-\`\`\`javascript
-const client = Sentry.instrumentAnthropicAiClient(new Anthropic(), {
-  recordInputs: true,
-  recordOutputs: true,
-});
-\`\`\`
-
-**Google Gen AI:**
-\`\`\`javascript
-const client = Sentry.instrumentGoogleGenAiClient(new GoogleGenAI({ apiKey }), {
-  recordInputs: true,
-  recordOutputs: true,
-});
-\`\`\`
-
-**LangChain:**
-\`\`\`javascript
-const callbackHandler = Sentry.createLangChainCallbackHandler({
-  recordInputs: true,
-  recordOutputs: true,
-});
-await llm.invoke("Tell me a joke", { callbacks: [callbackHandler] });
-\`\`\`
-
-**LangGraph:**
-\`\`\`javascript
-Sentry.instrumentLangGraph(agent, {
-  recordInputs: true,
-  recordOutputs: true,
-});
-\`\`\`
-
-**Important:** You must wrap EACH client instance separately. The helpers are not global integrations.
-
-### Python
-
-Most Python AI libraries are **auto-enabled** — just ensure Sentry is initialized with tracing.
-
-**LiteLLM** requires explicit integration:
-\`\`\`python
-from sentry_sdk.integrations.litellm import LiteLLMIntegration
-sentry_sdk.init(
-    dsn="...",
-    traces_sample_rate=1.0,
-    send_default_pii=True,
-    integrations=[LiteLLMIntegration()],
-)
-\`\`\`
-
-### How to Name Agents per Framework
-
-**OpenAI Agents SDK** — \`name\` is required:
-\`\`\`python
-agent = Agent(name="my_agent", instructions="You are a helpful assistant.", model="gpt-5.4")
-\`\`\`
-
-**Pydantic AI:**
-\`\`\`python
-agent = Agent("openai:gpt-5.4", name="my_agent")
-\`\`\`
-
-**LangGraph / LangChain:**
-\`\`\`python
-agent = create_agent(model, tools, name="my_agent")
-\`\`\`
-
-**Mastra** (Node.js) — \`id\` and \`name\` are required:
-\`\`\`javascript
-const agent = new Agent({
-  id: "my-agent",
-  name: "My Agent",
-  instructions: "You are a helpful assistant.",
-  model: "openai/gpt-5.4",
-});
-\`\`\`
-
-## 3B. Manual Instrumentation
-
-Create spans with these exact \`op\` values and attributes:
-
-### AI Request (LLM call)
-- **op:** \`"gen_ai.request"\`, **name:** \`"chat <model>"\`
-- **Required:** \`gen_ai.request.model\`
-- **Recommended:** \`gen_ai.usage.input_tokens\`, \`gen_ai.usage.output_tokens\`
-
-\`\`\`python
-with sentry_sdk.start_span(op="gen_ai.request", name=f"chat {model}") as span:
-    span.set_data("gen_ai.request.model", model)
-    result = llm.generate(messages)
-    span.set_data("gen_ai.usage.input_tokens", result.input_tokens)
-    span.set_data("gen_ai.usage.output_tokens", result.output_tokens)
-    span.set_data("gen_ai.usage.input_tokens.cached", result.cached_tokens)
-\`\`\`
-
-### Invoke Agent
-- **op:** \`"gen_ai.invoke_agent"\`, **name:** \`"invoke_agent <AgentName>"\`
-- **Required:** \`gen_ai.request.model\`, \`gen_ai.agent.name\`
-
-\`\`\`python
-with sentry_sdk.start_span(op="gen_ai.invoke_agent", name=f"invoke_agent {agent_name}") as span:
-    span.set_data("gen_ai.agent.name", agent_name)
-    span.set_data("gen_ai.request.model", model)
-    result = agent.run()
-\`\`\`
-
-### Execute Tool
-- **op:** \`"gen_ai.execute_tool"\`, **name:** \`"execute_tool <tool_name>"\`
-- **Required:** \`gen_ai.tool.name\`
-
-\`\`\`python
-with sentry_sdk.start_span(op="gen_ai.execute_tool", name=f"execute_tool {tool_name}") as span:
-    span.set_data("gen_ai.tool.name", tool_name)
-    span.set_data("gen_ai.tool.input", json.dumps(inputs))
-    result = tool(**inputs)
-    span.set_data("gen_ai.tool.output", json.dumps(result))
-\`\`\`
-
-## Token Counting & Cost Calculation
-
-\`gen_ai.usage.input_tokens\` must be the **total** input tokens (cached + non-cached). Sentry computes cost as \`(input_tokens - cached_tokens) * price\`, so if \`input_tokens\` only contains non-cached tokens, costs go **negative**. Each \`gen_ai.request\` span should only report its own token usage, not an accumulation of tokens from previous spans in the conversation.
-
-\`\`\`python
-# Correct — input_tokens includes cached
-span.set_data("gen_ai.usage.input_tokens", 100)        # total
-span.set_data("gen_ai.usage.input_tokens.cached", 80)  # cached subset
-span.set_data("gen_ai.usage.output_tokens", 50)
-
-# Wrong — produces negative cost
-span.set_data("gen_ai.usage.input_tokens", 20)         # non-cached only
-span.set_data("gen_ai.usage.input_tokens.cached", 80)  # (20 - 80) * price → negative
-\`\`\`
-
-See: https://docs.sentry.io/ai/monitoring/agents/costs/#troubleshooting
-
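The cost formula quoted in the removed text above is easy to sanity-check with plain arithmetic. A small sketch (illustrative per-token price, not Sentry's actual rates):

```python
def estimated_cost(input_tokens: int, cached_tokens: int, price: float) -> float:
    # Sentry computes billable input as (input_tokens - cached_tokens) * price,
    # so input_tokens must be the TOTAL (cached + non-cached).
    return (input_tokens - cached_tokens) * price

PRICE = 0.00001  # illustrative price per input token

# Correct: total of 100 input tokens includes the 80 cached ones -> non-negative cost.
correct = estimated_cost(100, 80, PRICE)
# Wrong: reporting only the 20 non-cached tokens -> negative cost.
wrong = estimated_cost(20, 80, PRICE)
```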
-## Key Rules
-
-1. **Always set the agent name** — enables per-agent dashboards, trace grouping, and alerting
-2. **All complex data must be JSON-stringified** — span attributes only accept primitives
-3. **\`gen_ai.request.model\` is required** on \`gen_ai.request\` and \`gen_ai.invoke_agent\` spans
-4. **Nest spans correctly:** \`gen_ai.invoke_agent\` should contain \`gen_ai.request\` and \`gen_ai.execute_tool\` as children
-5. **JS min version:** \`@sentry/node@10.28.0\` or later
-6. **Enable PII:** \`sendDefaultPii: true\` (JS) / \`send_default_pii=True\` (Python) to capture inputs/outputs
-7. **\`gen_ai.usage.input_tokens\` must include cached tokens** — otherwise cost calculations will be negative
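The "primitives only" rule above can be sketched as a small helper; `to_span_attribute` is a hypothetical name for illustration, not a Sentry SDK API:

```python
import json

def to_span_attribute(value):
    # Span attributes accept only primitive values; JSON-stringify anything
    # complex (message lists, tool inputs/outputs) before span.set_data().
    if value is None or isinstance(value, (str, int, float, bool)):
        return value
    return json.dumps(value)

# A tool-input dict becomes a JSON string; a token count passes through as-is.
tool_input = to_span_attribute({"query": "weather in Vienna"})
token_count = to_span_attribute(1234)
```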
+- Source repository: https://github.com/getsentry/sentry-for-ai
+- Agent-monitoring skill: https://skills.sentry.dev/sentry-setup-ai-monitoring/SKILL.md
 `;

static/app/views/insights/pages/agents/onboarding.tsx

Lines changed: 3 additions & 4 deletions
@@ -51,8 +51,7 @@ import {useProjects} from 'sentry/utils/useProjects';
 import {useSpans} from 'sentry/views/insights/common/queries/useDiscover';
 import {
   CopyLLMPromptButton,
-  LLM_ONBOARDING_INSTRUCTIONS,
-  LLM_ONBOARDING_INSTRUCTIONS_PREAMBLE,
+  LLM_ONBOARDING_COPY_MARKDOWN,
 } from 'sentry/views/insights/pages/agents/llmOnboardingInstructions';
 import {getHasAiSpansFilter} from 'sentry/views/insights/pages/agents/utils/query';
 import {Referrer} from 'sentry/views/insights/pages/agents/utils/referrers';
@@ -354,7 +353,7 @@ export function Onboarding() {
       borderless
       steps={steps}
       source="agent_monitoring_onboarding"
-      postamble={`${LLM_ONBOARDING_INSTRUCTIONS_PREAMBLE}\n\n${LLM_ONBOARDING_INSTRUCTIONS}`}
+      postamble={LLM_ONBOARDING_COPY_MARKDOWN}
       onCopy={() => {
         trackAnalytics('agent-monitoring.copy-llm-prompt-click', {
           organization,
@@ -382,7 +381,7 @@ function CopyInstructionsButton() {
     <CopyMarkdownButton
       title={t('Copies setup instructions as Markdown, optimized for use with an LLM.')}
       source="agent_monitoring_onboarding"
-      getMarkdown={() => LLM_ONBOARDING_INSTRUCTIONS}
+      getMarkdown={() => LLM_ONBOARDING_COPY_MARKDOWN}
       onCopy={() => {
         trackAnalytics('agent-monitoring.copy-llm-prompt-click', {
           organization,
