238 changes: 9 additions & 229 deletions static/app/views/insights/pages/agents/llmOnboardingInstructions.tsx
@@ -51,7 +51,7 @@ export function CopyLLMPromptButton() {
trackAnalytics('agent-monitoring.copy-llm-prompt-click', {
organization,
});
copy(LLM_ONBOARDING_INSTRUCTIONS, {
copy(LLM_ONBOARDING_COPY_MARKDOWN, {
Preamble about "setup steps above" leaks into wrong contexts

Medium Severity

Merging the preamble and instructions into a single LLM_ONBOARDING_COPY_MARKDOWN constant means the text referencing "The setup steps above" is now copied in contexts where no setup steps exist. Previously, CopyLLMPromptButton and the CopyMarkdownButton in CopyInstructionsButton only copied LLM_ONBOARDING_INSTRUCTIONS (without the preamble), while the preamble was exclusively added via the postamble prop in the onboarding flow. Now all three code paths — including UnsupportedPlatformOnboarding and NoDocsOnboarding — copy text that tells the LLM to "complete setup steps above" that don't exist.

Additional Locations (1)

Reviewed by Cursor Bugbot for commit db55ba7.

successMessage: t('Copied instrumentation prompt to clipboard'),
});
}}
@@ -65,237 +65,17 @@ export function CopyLLMPromptButton() {
* Contextual note prepended when the instructions follow onboarding setup
* steps so the LLM knows to complete those first.
*/
export const LLM_ONBOARDING_INSTRUCTIONS_PREAMBLE = `> The setup steps above contain the correct DSN and project-specific SDK configuration — complete them first.
> Then use the guide below for additional instrumentation and agent naming.`;

export const LLM_ONBOARDING_INSTRUCTIONS = `# Instrument Sentry AI Agent Monitoring
export const LLM_ONBOARDING_COPY_MARKDOWN = `
Leading newline in markdown constant causes extra whitespace

Low Severity

The template literal for LLM_ONBOARDING_COPY_MARKDOWN starts with a newline character (the backtick is immediately followed by a line break). When copied to clipboard via CopyLLMPromptButton, the content starts with a blank line. When used as postamble, it produces …\n\n---\n\n\n> (an extra blank line after the horizontal rule) since OnboardingCopyMarkdownButton already joins with \n\n---\n\n.
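The duplicated blank line is easy to reproduce in isolation. A quick sketch (Python used only to model the string join; the stand-in strings are hypothetical, the real code is TypeScript):

```python
# OnboardingCopyMarkdownButton joins the setup steps and the postamble
# with "\n\n---\n\n"; the postamble itself starts with a newline.
steps = "...setup steps..."
postamble = "\n> The setup steps above contain the correct DSN..."

combined = steps + "\n\n---\n\n" + postamble
# The horizontal rule is followed by three consecutive newlines instead of two.
print("---\n\n\n>" in combined)  # → True
```

Trimming the leading newline from the template literal removes the extra blank line without changing the rendered markdown.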



> The setup steps above contain the correct DSN and project-specific SDK configuration — complete them first.
> Then follow the skill references below for instrumentation and agent naming.

Use this guide alongside the setup steps above.
# Instrument Sentry AI Agent Monitoring

## 1. Verify Sentry + Tracing
Use these skills as the source of truth:

**Search for Sentry initialization:**
- JS/TS: \`Sentry.init\` in entry points, \`@sentry/*\` in package.json
- Python: \`sentry_sdk.init\` in entry points, \`sentry-sdk\` in requirements
## Skill References

**If not found:** Set up Sentry first following the official docs:
- JS/TS: https://docs.sentry.io/platforms/javascript/guides/node/
- Python: https://docs.sentry.io/platforms/python/

**Verify tracing is enabled** (REQUIRED for AI monitoring):
- JS: \`tracesSampleRate: 1.0\` and \`sendDefaultPii: true\` in \`Sentry.init\`. Min SDK version \`10.28.0\`.
- Python: \`traces_sample_rate=1.0\` and \`send_default_pii=True\` in \`sentry_sdk.init()\`.
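
For Python, the required flags look like this (a minimal sketch; the DSN is a placeholder you must replace with your project's own):

```python
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,  # REQUIRED: enables tracing for AI monitoring
    send_default_pii=True,   # REQUIRED: captures AI inputs/outputs
)
```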

## 2. Check for Supported AI Libraries

Check in this order — **use the highest-level framework found** (e.g., if using Vercel AI SDK with OpenAI provider, use Vercel integration, not OpenAI):

| Library | Node.js | Browser | Python | How to Name the Agent |
|---------|---------|---------|--------|-----------------------|
| Vercel AI SDK | Auto (needs \`experimental_telemetry\` or \`ToolLoopAgent\`) | - | - | \`experimental_telemetry.functionId\` or \`telemetry.functionId\` |
| LangGraph | Auto | \`instrumentLangGraph()\` | Auto | \`name\` param on \`create_agent\` |
| LangChain | Auto | \`createLangChainCallbackHandler()\` | Auto | \`name\` param on \`create_agent\` |
| OpenAI Agents | - | - | Auto | \`name\` param on \`Agent()\` (required) |
| Pydantic AI | - | - | Auto | \`name\` param on \`Agent()\` |
| Mastra | Auto | - | - | \`name\` + \`id\` params on \`Agent()\` (required) |
| LiteLLM | - | - | \`LiteLLMIntegration()\` | Manual instrumentation (see 3B) |
| OpenAI | Auto | \`instrumentOpenAiClient()\` | Auto | Manual instrumentation (see 3B) |
| Anthropic | Auto | \`instrumentAnthropicAiClient()\` | Auto | Manual instrumentation (see 3B) |
| Google GenAI | Auto | \`instrumentGoogleGenAiClient()\` | Auto | Manual instrumentation (see 3B) |

**If supported library found → Step 3A**
**If no supported library → Step 3B** (Manual span instrumentation)

**IMPORTANT: Always set the agent name.** It enables agent-specific dashboards, trace grouping, and alerting.

## 3A. Enable Automatic Integration

### Node.js (Auto-enabled)

For Node.js (\`@sentry/node\`, \`@sentry/nestjs\`, etc.), AI integrations are **automatically enabled** — just ensure Sentry is initialized with tracing.

**Vercel AI SDK Extra Step:** Pass \`experimental_telemetry\` to every \`generateText\`/\`streamText\` call, or configure \`telemetry\` on the \`ToolLoopAgent\` constructor:
\`\`\`javascript
// Option 1: generateText / streamText / generateObject
const result = await generateText({
  model: openai("gpt-5.4"),
  prompt: "Tell me a joke",
  experimental_telemetry: {
    isEnabled: true,
    functionId: "my_agent", // Names the agent in Sentry
    recordInputs: true,
    recordOutputs: true,
  },
});

// Option 2: ToolLoopAgent class
const agent = new ToolLoopAgent({
  model: "openai/gpt-5.4",
  tools: { /* ... */ },
  telemetry: {
    isEnabled: true,
    functionId: "my_agent", // Names the agent in Sentry
    recordInputs: true,
    recordOutputs: true,
  },
});
const agentResult = await agent.generate({ prompt: "Tell me a joke" });
\`\`\`

### Browser (Manual Client Wrapping)

For browser apps (\`@sentry/browser\`, \`@sentry/react\`), **manually wrap each AI client**:

**OpenAI:**
\`\`\`javascript
const client = Sentry.instrumentOpenAiClient(new OpenAI(), {
  recordInputs: true,
  recordOutputs: true,
});
\`\`\`

**Anthropic:**
\`\`\`javascript
const client = Sentry.instrumentAnthropicAiClient(new Anthropic(), {
  recordInputs: true,
  recordOutputs: true,
});
\`\`\`

**Google Gen AI:**
\`\`\`javascript
const client = Sentry.instrumentGoogleGenAiClient(new GoogleGenAI({ apiKey }), {
  recordInputs: true,
  recordOutputs: true,
});
\`\`\`

**LangChain:**
\`\`\`javascript
const callbackHandler = Sentry.createLangChainCallbackHandler({
  recordInputs: true,
  recordOutputs: true,
});
await llm.invoke("Tell me a joke", { callbacks: [callbackHandler] });
\`\`\`

**LangGraph:**
\`\`\`javascript
Sentry.instrumentLangGraph(agent, {
  recordInputs: true,
  recordOutputs: true,
});
\`\`\`

**Important:** You must wrap EACH client instance separately. The helpers are not global integrations.

### Python

Most Python AI libraries are **auto-enabled** — just ensure Sentry is initialized with tracing.

**LiteLLM** requires explicit integration:
\`\`\`python
import sentry_sdk
from sentry_sdk.integrations.litellm import LiteLLMIntegration

sentry_sdk.init(
    dsn="...",
    traces_sample_rate=1.0,
    send_default_pii=True,
    integrations=[LiteLLMIntegration()],
)
\`\`\`

### How to Name Agents per Framework

**OpenAI Agents SDK** — \`name\` is required:
\`\`\`python
agent = Agent(name="my_agent", instructions="You are a helpful assistant.", model="gpt-5.4")
\`\`\`

**Pydantic AI:**
\`\`\`python
agent = Agent("openai:gpt-5.4", name="my_agent")
\`\`\`

**LangGraph / LangChain:**
\`\`\`python
agent = create_agent(model, tools, name="my_agent")
\`\`\`

**Mastra** (Node.js) — \`id\` and \`name\` are required:
\`\`\`javascript
const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  instructions: "You are a helpful assistant.",
  model: "openai/gpt-5.4",
});
\`\`\`

## 3B. Manual Instrumentation

Create spans with these exact \`op\` values and attributes:

### AI Request (LLM call)
- **op:** \`"gen_ai.request"\`, **name:** \`"chat <model>"\`
- **Required:** \`gen_ai.request.model\`
- **Recommended:** \`gen_ai.usage.input_tokens\`, \`gen_ai.usage.output_tokens\`

\`\`\`python
with sentry_sdk.start_span(op="gen_ai.request", name=f"chat {model}") as span:
    span.set_data("gen_ai.request.model", model)
    result = llm.generate(messages)
    span.set_data("gen_ai.usage.input_tokens", result.input_tokens)
    span.set_data("gen_ai.usage.output_tokens", result.output_tokens)
    span.set_data("gen_ai.usage.input_tokens.cached", result.cached_tokens)
\`\`\`

### Invoke Agent
- **op:** \`"gen_ai.invoke_agent"\`, **name:** \`"invoke_agent <AgentName>"\`
- **Required:** \`gen_ai.request.model\`, \`gen_ai.agent.name\`

\`\`\`python
with sentry_sdk.start_span(op="gen_ai.invoke_agent", name=f"invoke_agent {agent_name}") as span:
    span.set_data("gen_ai.agent.name", agent_name)
    span.set_data("gen_ai.request.model", model)
    result = agent.run()
\`\`\`

### Execute Tool
- **op:** \`"gen_ai.execute_tool"\`, **name:** \`"execute_tool <tool_name>"\`
- **Required:** \`gen_ai.tool.name\`

\`\`\`python
with sentry_sdk.start_span(op="gen_ai.execute_tool", name=f"execute_tool {tool_name}") as span:
    span.set_data("gen_ai.tool.name", tool_name)
    span.set_data("gen_ai.tool.input", json.dumps(inputs))
    result = tool(**inputs)
    span.set_data("gen_ai.tool.output", json.dumps(result))
\`\`\`

## Token Counting & Cost Calculation

\`gen_ai.usage.input_tokens\` must be the **total** input tokens (cached + non-cached). Sentry computes cost as \`(input_tokens - cached_tokens) * price\`, so if \`input_tokens\` only contains non-cached tokens, costs go **negative**. Each \`gen_ai.request\` span should only report its own token usage, not an accumulation of tokens from previous spans in the conversation.

\`\`\`python
# Correct — input_tokens includes cached
span.set_data("gen_ai.usage.input_tokens", 100) # total
span.set_data("gen_ai.usage.input_tokens.cached", 80) # cached subset
span.set_data("gen_ai.usage.output_tokens", 50)

# Wrong — produces negative cost
span.set_data("gen_ai.usage.input_tokens", 20) # non-cached only
span.set_data("gen_ai.usage.input_tokens.cached", 80) # (20 - 80) * price → negative
\`\`\`

See: https://docs.sentry.io/ai/monitoring/agents/costs/#troubleshooting
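
The troubleshooting rule reduces to a one-line formula. A hypothetical helper (names are illustrative, not part of the SDK) makes the failure mode concrete:

```python
def estimated_input_cost(input_tokens: int, cached_tokens: int, price_per_token: float) -> float:
    """Mirrors the cost formula: non-cached input tokens times price."""
    return (input_tokens - cached_tokens) * price_per_token

# Correct reporting: input_tokens is the total (cached + non-cached).
print(estimated_input_cost(100, 80, 0.01))  # → 0.2

# Wrong reporting: input_tokens excludes cached tokens, so the cost goes negative.
print(estimated_input_cost(20, 80, 0.01))   # → -0.6
```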

## Key Rules

1. **Always set the agent name** — enables per-agent dashboards, trace grouping, and alerting
2. **All complex data must be JSON-stringified** — span attributes only accept primitives
3. **\`gen_ai.request.model\` is required** on \`gen_ai.request\` and \`gen_ai.invoke_agent\` spans
4. **Nest spans correctly:** \`gen_ai.invoke_agent\` should contain \`gen_ai.request\` and \`gen_ai.execute_tool\` as children
5. **JS min version:** \`@sentry/node@10.28.0\` or later
6. **Enable PII:** \`sendDefaultPii: true\` (JS) / \`send_default_pii=True\` (Python) to capture inputs/outputs
7. **\`gen_ai.usage.input_tokens\` must include cached tokens** — otherwise cost calculations will be negative
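
Rule 4's nesting can be sketched with a toy span recorder (a stand-in for `sentry_sdk.start_span`, used here only to show the expected parent/child shape, not the real SDK API):

```python
from contextlib import contextmanager

spans = []    # flat log of (depth, op) pairs
_depth = [0]

@contextmanager
def start_span(op):  # toy stand-in, NOT the real sentry_sdk API
    spans.append((_depth[0], op))
    _depth[0] += 1
    try:
        yield
    finally:
        _depth[0] -= 1

# The agent invocation is the parent; LLM calls and tool runs nest inside it.
with start_span("gen_ai.invoke_agent"):
    with start_span("gen_ai.request"):
        pass  # LLM call goes here
    with start_span("gen_ai.execute_tool"):
        pass  # tool call goes here

print(spans)  # → [(0, 'gen_ai.invoke_agent'), (1, 'gen_ai.request'), (1, 'gen_ai.execute_tool')]
```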
- Source repository: https://github.com/getsentry/sentry-for-ai
- Agent-monitoring skill: https://skills.sentry.dev/sentry-setup-ai-monitoring/SKILL.md
`;
7 changes: 3 additions & 4 deletions static/app/views/insights/pages/agents/onboarding.tsx
@@ -51,8 +51,7 @@ import {useProjects} from 'sentry/utils/useProjects';
import {useSpans} from 'sentry/views/insights/common/queries/useDiscover';
import {
CopyLLMPromptButton,
LLM_ONBOARDING_INSTRUCTIONS,
LLM_ONBOARDING_INSTRUCTIONS_PREAMBLE,
LLM_ONBOARDING_COPY_MARKDOWN,
} from 'sentry/views/insights/pages/agents/llmOnboardingInstructions';
import {getHasAiSpansFilter} from 'sentry/views/insights/pages/agents/utils/query';
import {Referrer} from 'sentry/views/insights/pages/agents/utils/referrers';
@@ -354,7 +353,7 @@ export function Onboarding() {
borderless
steps={steps}
source="agent_monitoring_onboarding"
postamble={`${LLM_ONBOARDING_INSTRUCTIONS_PREAMBLE}\n\n${LLM_ONBOARDING_INSTRUCTIONS}`}
postamble={LLM_ONBOARDING_COPY_MARKDOWN}
onCopy={() => {
trackAnalytics('agent-monitoring.copy-llm-prompt-click', {
organization,
@@ -382,7 +381,7 @@ function CopyInstructionsButton() {
<CopyMarkdownButton
title={t('Copies setup instructions as Markdown, optimized for use with an LLM.')}
source="agent_monitoring_onboarding"
getMarkdown={() => LLM_ONBOARDING_INSTRUCTIONS}
getMarkdown={() => LLM_ONBOARDING_COPY_MARKDOWN}
onCopy={() => {
trackAnalytics('agent-monitoring.copy-llm-prompt-click', {
organization,