English | 中文
Condev Monitor is a self-hosted frontend observability platform in a pnpm monorepo. It includes:
- a browser SDK for error, performance, white-screen, replay, and AI streaming capture
- a DSN ingestion and query service backed by ClickHouse
- a monitor management API backed by Postgres
- a Next.js dashboard for applications, issues, metrics, replays, and AI streaming traces
Contents:

- Features
- Architecture
- Tech Stack
- Project Structure
- Local Development
- Environment Variables
- SDK Integration
- API Surface
- Additional Docs
- Publishing SDK Packages
- License
## Features

### Browser Monitoring
- JavaScript runtime errors, resource loading errors, and unhandled promise rejections
- Web Vitals plus runtime performance signals: `longTask`, `jank`, `lowFps`
- White-screen detection with startup polling and optional mutation-based runtime watch
- Custom message and custom event capture APIs
- Error-triggered minimal session replay with `rrweb`
- Batched transport, immediate flush for high-priority events, retry, and offline persistence (sketched below)
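The transport behavior can be pictured with a small sketch. This is an illustrative shape only, not the SDK's actual internals; the class and field names are invented:

```ts
// Illustrative transport shape (invented names): batch normal events,
// flush high-priority events immediately, and re-queue on failure for retry.
type QueuedEvent = { type: string; priority: 'high' | 'normal'; payload: unknown }

class BatchingTransport {
  private queue: QueuedEvent[] = []

  constructor(
    private send: (batch: QueuedEvent[]) => Promise<void>,
    private batchSize = 20,
  ) {}

  enqueue(event: QueuedEvent): void {
    this.queue.push(event)
    // High-priority events (e.g. errors) flush immediately; others wait for a full batch.
    if (event.priority === 'high' || this.queue.length >= this.batchSize) {
      void this.flush()
    }
  }

  private async flush(): Promise<void> {
    const batch = this.queue.splice(0, this.queue.length)
    if (batch.length === 0) return
    try {
      await this.send(batch)
    } catch {
      // A real transport would also persist the batch for offline replay.
      this.queue.unshift(...batch)
    }
  }
}
```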
### Application & Source Maps
- Application management with per-app replay toggle
- Sourcemap upload and lookup by `appId + release + dist + minifiedUrl`
- Sourcemap token issuance and revocation for CI/CD uploads
- Stack trace resolution on issue detail queries
### Dashboard
- Application overview page
- Bug aggregation and recent error event inspection
- Metric charts and percentile summaries
- Replay list and replay player
- AI streaming trace dashboard with TTFB, TTLB, stalls, token usage, and model/provider info
### AI-Powered Issue Grouping
- Automatic error fingerprinting with embedding similarity (all-MiniLM-L6-v2)
- TF-IDF + cosine similarity hybrid deduplication with configurable thresholds (see the sketch after this list)
- Optional LLM-generated issue titles (OpenAI-compatible API, scheduled daily)
- LLM-assisted merge confirmation for similar orphan issues (scheduled hourly)
- Issue lifecycle: open, resolved, ignored, merged (schema-level; worker writes open/merged, other states reserved for future UI)
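The grouping decision can be sketched with the documented default thresholds (0.92 / 0.85 / 0.80). The function below is illustrative only, not the worker's actual code:

```ts
// Illustrative two-stage dedup decision using the documented defaults.
const EMBEDDING_HIGH = 0.92 // ISSUE_EMBEDDING_HIGH_THRESHOLD: merge automatically
const EMBEDDING_LOW = 0.85  // ISSUE_EMBEDDING_LOW_THRESHOLD: ask TF-IDF to confirm
const TFIDF_MIN = 0.8       // ISSUE_TFIDF_THRESHOLD

function groupDecision(
  embeddingSim: number,
  tfidfSim: number,
): 'merge' | 'merge-confirmed' | 'new-issue' {
  if (embeddingSim >= EMBEDDING_HIGH) return 'merge'
  if (embeddingSim >= EMBEDDING_LOW && tfidfSim >= TFIDF_MIN) return 'merge-confirmed'
  return 'new-issue'
}
```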
### Operational Features
- Aggregated error alert emails with a 5-minute per-app throttle
- Self-hosted Docker Compose deployment with ClickHouse, Postgres, Kafka, Caddy, three backends, and the frontend
- Per-app rate limiting and inbound payload filtering on the DSN ingestion path
- Optional frontend-only deployment via OpenNext + Cloudflare
## Architecture

```mermaid
graph LR
    ROOT[condev-monitor monorepo]
    ROOT --> SDK[packages/<br/>browser SDK + core + AI adapters]
    ROOT --> BE1[apps/backend/monitor<br/>NestJS + TypeORM + Postgres]
    ROOT --> BE2[apps/backend/dsn-server<br/>NestJS + ClickHouse + PG fallback]
    ROOT --> BE3[apps/backend/event-worker<br/>NestJS + Kafka + ClickHouse]
    ROOT --> FE[apps/frontend/monitor<br/>Next.js dashboard]
    ROOT --> EX[examples/<br/>vanilla + ai-sdk chatbox]
    ROOT --> OPS[.devcontainer/<br/>Compose + Caddy + infra config]
```
```mermaid
graph TB
subgraph Client Apps
APP[Web app]
SDK["@condev-monitor/monitor-sdk-browser"]
AI["@condev-monitor/monitor-sdk-ai"]
end
subgraph Dashboard
FE[Next.js dashboard]
end
subgraph Backend Services
MON[Monitor API<br/>/api/*]
DSN[DSN Server<br/>/dsn-api/* + /tracking/* + /replay/*]
WORKER[Event Worker<br/>Kafka consumer + ClickHouse writer]
end
subgraph Message Queue
KAFKA[Apache Kafka]
end
subgraph Data Stores
PG[(Postgres)]
CH[(ClickHouse)]
end
subgraph Integrations
MAIL[SMTP / Resend]
CADDY[Caddy reverse proxy]
end
APP --> SDK
SDK -->|POST tracking/replay| DSN
AI -->|semantic ai_streaming events| DSN
DSN -->|publish events| KAFKA
KAFKA -->|consume batches| WORKER
WORKER -->|batch insert| CH
FE -->|/api/* rewrite| MON
FE -->|/dsn-api/* rewrite| DSN
MON --> PG
MON --> CH
DSN -->|query| CH
DSN --> PG
DSN --> MAIL
CADDY --> FE
CADDY --> MON
CADDY --> DSN
```
```mermaid
sequenceDiagram
participant B as Browser SDK
participant D as DSN Server
participant K as Kafka
participant W as Event Worker
participant C as ClickHouse
participant M as Monitor API
participant P as Postgres
participant U as Dashboard UI
B->>D: POST /tracking/:appId
D->>D: rate limit + inbound filter
D->>K: publish to monitor.sdk.events.v1
D-->>B: { ok: true }
K->>W: consume event batch
W->>W: fingerprint + embedding dedup
W->>C: batch insert into events table
Note over W,C: Legacy MV mirrors to base_monitor_storage
B->>D: GET /app-config?appId=...
D->>C: read app_settings
alt setting missing
D->>M: GET /api/application/public/config
M->>P: query application
M-->>D: replayEnabled
end
D-->>B: replayEnabled
U->>M: /api/application, /api/sourcemap, /api/me
M->>P: manage users/apps/tokens/sourcemaps
M->>C: sync replay_enabled into app_settings
U->>D: /dsn-api/issues, /metric, /replays, /ai-streaming
D->>C: aggregate and query observability data
D->>P: lookup sourcemaps / owner email when needed
D-->>U: chart, issue, replay, or AI streaming payload
```
Key design decisions:

- The default deploy ingest path is: SDK -> DSN Server -> Kafka -> Event Worker -> ClickHouse. Setting `INGEST_MODE=direct` in the DSN server bypasses Kafka and writes directly to ClickHouse (useful for local dev or fallback).
- The dashboard never calls backend services directly from the browser by host/port; it uses Next.js rewrites:
  - `/api/*` -> `API_PROXY_TARGET` -> monitor backend
  - `/dsn-api/*` -> `DSN_API_PROXY_TARGET` -> dsn-server
- Browser replay enablement is per application. The browser SDK calls `GET /app-config?appId=...` before turning replay on.
- The monitor backend writes replay settings into ClickHouse `lemonade.app_settings`; the DSN server uses that as the fast path and falls back to the monitor API when needed.
- The DSN server enforces per-app token-bucket rate limiting (see the sketch after this list). When the limit is exceeded, it responds with `429` and `Retry-After` / `X-Rate-Limit-Reset` headers.
- Inbound filtering on the DSN server rejects events based on payload size (`INBOUND_MAX_PAYLOAD_BYTES`), user-agent blacklist, and release blacklist.
- The DSN ClickHouse schema creates:
  - `lemonade.base_monitor_storage` (legacy, kept for backward compatibility)
  - `lemonade.events` (new primary table with ReplacingMergeTree)
  - `lemonade.events_to_legacy_mv` (compatibility materialized view)
  - `lemonade.base_monitor_view`
  - `lemonade.app_settings`
  - `lemonade.issues` (semantic issue grouping)
  - `lemonade.issue_embeddings` (embedding vectors for dedup)
  - `lemonade.cron_locks` (distributed lock for scheduled tasks)
- Non-replay events have a 90-day TTL. Replay rows have a 30-day TTL. The Event Worker performs embedding-based deduplication before writing, using configurable similarity thresholds.
- The frontend auth flow stores the monitor backend JWT in an HTTP-only cookie named `session_token` through Next route handlers under `app/auth-session/*`.
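As an illustration of the rate-limit decision, here is a minimal token-bucket sketch; the names and structure are assumptions, not the dsn-server's actual code:

```ts
// Minimal per-app token bucket sketch (assumed shape). Refill is proportional
// to elapsed time and capped at the burst capacity; a depleted bucket means
// the caller should answer 429 with Retry-After / X-Rate-Limit-Reset headers.
interface Bucket { tokens: number; lastRefillMs: number }

const buckets = new Map<string, Bucket>() // keyed by appId, size capped by RATE_LIMIT_MAX_APPS

function allowEvent(appId: string, ratePerSec: number, burst: number): boolean {
  const now = Date.now()
  const bucket = buckets.get(appId) ?? { tokens: burst, lastRefillMs: now }
  bucket.tokens = Math.min(burst, bucket.tokens + ((now - bucket.lastRefillMs) / 1000) * ratePerSec)
  bucket.lastRefillMs = now
  buckets.set(appId, bucket)
  if (bucket.tokens < 1) return false
  bucket.tokens -= 1
  return true
}
```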
## Tech Stack

| Layer | Technology |
|---|---|
| Runtime | Node.js 22 |
| Package manager | pnpm 10 + Turbo |
| Dashboard | Next.js 15, React 19, React Query, Tailwind CSS 4, Radix UI |
| Monitor backend | NestJS 11, TypeORM, Passport JWT, Postgres |
| DSN backend | NestJS 11, ClickHouse, pg pool, Handlebars mail templates |
| Event Worker | NestJS 11, KafkaJS, ClickHouse, HuggingFace Transformers |
| Browser SDK | TypeScript, tsup, rrweb, custom transport/offline queue |
| AI tracing | OpenTelemetry span processor adapter for Vercel AI SDK |
| Message queue | Apache Kafka 3.9.2 (KRaft mode, no ZooKeeper) |
| Infra | Docker, Docker Compose, Caddy, OpenNext Cloudflare optional |
Note: the active runtime path for `apps/backend/monitor` is TypeORM. A Prisma schema exists in the repository, but it is not used by the current Nest bootstrap path.
## Project Structure

```
condev-monitor/
├── apps/
│ ├── backend/
│ │ ├── monitor/ # Auth, users, apps, sourcemaps, replay toggle sync
│ │ ├── dsn-server/ # DSN ingestion, ClickHouse queries, alerts, replay fetch
│ │ └── event-worker/ # Kafka consumer, batch ClickHouse writer, issue dedup
│ └── frontend/
│ └── monitor/ # Next.js dashboard + rrweb player
├── packages/
│ ├── core/ # Core monitoring primitives and capture helpers
│ ├── browser/ # Browser SDK
│ ├── browser-utils/ # Metrics helpers / Web Vitals helpers
│ ├── ai/ # AI semantic monitoring adapters
│ ├── react/ # React integration (ErrorBoundary, useMonitorUser)
│ └── nextjs/ # Next.js integration (registerCondevClient/Server, RSC-safe re-exports)
├── examples/
│ ├── vanilla/ # Vite example with browser SDK and sourcemap scripts
│ └── aisdk-rag-chatbox/ # Next.js example with browser + AI SDK tracing
├── .devcontainer/
│ ├── docker-compose.yml # Local infra: ClickHouse + Postgres + Kafka
│ ├── docker-compose.deply.yml # Full stack deploy file used by root scripts
│ ├── caddy/ # Reverse proxy config
│ └── clickhouse/ # ClickHouse init SQL and config overrides
├── scripts/
│ ├── init-clickhouse.sh # Initializes ClickHouse schema inside the deploy compose
│ └── init-kafka-topics.sh # Creates Kafka topics inside the deploy compose
├── CONTRIBUTING.md
├── DEPLOYMENT.md
├── README.md
└── README.zh-CN.md
```
- `pnpm-workspace.yaml` includes `packages/*`, `apps/frontend/*`, `apps/backend/*`, and `examples/*`.
- `pnpm start:dev` runs workspace `start:dev` scripts through Turbo. In this repository that means both backend services and the event worker.
- `pnpm start:fro` starts the dashboard only.
- The frontend package name is `@condev-monitor/monitor-client`.
- The backend package names are `monitor`, `dsn-server`, and `event-worker`.
## Local Development

Prerequisites:

- Node.js `22.15+` recommended
- pnpm `10.10.0`
- Docker + Docker Compose
Install dependencies and create env files:

```bash
pnpm install
cp apps/backend/monitor/.env.example apps/backend/monitor/.env
cp apps/backend/dsn-server/.env.example apps/backend/dsn-server/.env
```

If you also want to test the deployment compose locally:

```bash
cp .devcontainer/.env.example .devcontainer/.env
```

Start the local infrastructure:

```bash
pnpm docker:start
```

This single command starts and initializes:
- Postgres from `.devcontainer/docker-compose.yml`
- ClickHouse from `.devcontainer/docker-compose.yml` + schema from `.devcontainer/clickhouse/init/`
- Kafka from `.devcontainer/docker-compose.yml` + topic creation via `scripts/init-kafka-topics.sh`
Note: `pnpm docker:start` already runs both `docker:init-kafka` and `docker:init-clickhouse` automatically. You only need to run those scripts individually for recovery or idempotent re-initialization.
Start the backend services:

```bash
pnpm start:dev
```

This launches:

- `apps/backend/monitor` on `http://localhost:8081/api/*`
- `apps/backend/dsn-server` on `http://localhost:8082/dsn-api/*`
- `apps/backend/event-worker` (Kafka consumer, writes to ClickHouse)
Start the dashboard:

```bash
pnpm start:fro
```

The dashboard runs on http://localhost:3000.

By default it proxies (sketched below):

- `/api/*` -> `http://localhost:8081`
- `/dsn-api/*` -> `http://localhost:8082`
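Under the hood these are standard Next.js rewrites. A sketch of the config shape (assumed `next.config.ts`; the dashboard's actual config may differ):

```ts
// Assumed next.config.ts shape for the documented /api/* and /dsn-api/* rewrites.
const apiTarget = process.env.API_PROXY_TARGET ?? 'http://localhost:8081'
const dsnTarget = process.env.DSN_API_PROXY_TARGET ?? 'http://localhost:8082'

const nextConfig = {
  async rewrites() {
    return [
      { source: '/api/:path*', destination: `${apiTarget}/api/:path*` },
      { source: '/dsn-api/:path*', destination: `${dsnTarget}/dsn-api/:path*` },
    ]
  },
}

export default nextConfig
```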
If your backends run somewhere else:
```bash
API_PROXY_TARGET=http://127.0.0.1:8081 \
DSN_API_PROXY_TARGET=http://127.0.0.1:8082 \
pnpm start:fro
```

Default local service URLs:

| Service | URL |
|---|---|
| Dashboard | http://localhost:3000 |
| Monitor API | http://localhost:8081/api |
| DSN Server | http://localhost:8082/dsn-api |
| DSN health | http://localhost:8082/dsn-api/healthz |
| ClickHouse HTTP | http://localhost:8123 |
| Postgres | localhost:5432 |
| Kafka (external) | localhost:9094 |
Run the examples:

```bash
pnpm --filter vanilla dev
pnpm --filter aisdk-rag-chatbox dev
```

`examples/vanilla` is the fastest way to verify errors, white-screen checks, performance signals, replay, transport batching, and sourcemap upload.
## Environment Variables

Environment files live in these places:

- Local monitor backend: `apps/backend/monitor/.env`
- Local dsn-server backend: `apps/backend/dsn-server/.env`
- Full-stack Docker deployment: `.devcontainer/.env`
- Frontend local proxy envs: shell environment before `pnpm start:fro`
Both backend apps explicitly search for env files in this order (sketched below):

- app-local `.env` under `apps/backend/<service>/`
- package-local `.env`
- compiled-output fallback near `dist`
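A rough sketch of that lookup order; this is illustrative only, the actual Nest bootstrap code differs:

```ts
// Illustrative env-file resolution following the documented search order.
import { existsSync } from 'node:fs'
import * as path from 'node:path'

function resolveEnvFile(serviceDir: string): string | undefined {
  const candidates = [
    path.join(serviceDir, '.env'),      // app-local, e.g. apps/backend/monitor/.env
    path.join(process.cwd(), '.env'),   // package-local fallback
    path.join(__dirname, '..', '.env'), // compiled-output fallback near dist
  ]
  return candidates.find((p) => existsSync(p))
}
```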
### Monitor backend (`apps/backend/monitor`)

| Variable | Purpose |
|---|---|
| `DB_TYPE`, `DB_HOST`, `DB_PORT`, `DB_USERNAME`, `DB_PASSWORD`, `DB_DATABASE` | Postgres connection for users, applications, sourcemaps, and sourcemap tokens |
| `DB_AUTOLOAD`, `DB_SYNC` | TypeORM behavior. Keep `DB_SYNC=false` in production |
| `JWT_SECRET` | Required for login, auth guards, reset-password tokens, and email verification tokens |
| `CORS` | Enables Nest CORS when set to `true` |
| `CLICKHOUSE_URL`, `CLICKHOUSE_USERNAME`, `CLICKHOUSE_PASSWORD` | Required. Used to sync application replay settings into ClickHouse |
| `MAIL_ON` | Master switch for monitor email behavior |
| `RESEND_API_KEY`, `RESEND_FROM` | Enables Resend mail mode when `MAIL_ON=true` |
| `EMAIL_SENDER`, `EMAIL_SENDER_PASSWORD` | Enables SMTP mail mode when `MAIL_ON=true` and Resend is not configured |
| `SMTP_HOST`, `SMTP_PORT`, `SMTP_SECURE`, `SMTP_CONNECTION_TIMEOUT_MS`, `SMTP_GREETING_TIMEOUT_MS`, `SMTP_SOCKET_TIMEOUT_MS` | Advanced SMTP transport overrides |
| `AUTH_REQUIRE_EMAIL_VERIFICATION` | Optional override. If omitted, email verification is required when SMTP or Resend is active |
| `FRONTEND_URL` | Used to build links for verify-email, reset-password, and email-change flows |
| `SOURCEMAP_STORAGE_DIR` | Shared sourcemap file storage directory. Defaults to `data/sourcemaps` under the package root if unset |
| `ERROR_FILTER` | Enables the global Nest exception filter when set |
Important note: `apps/backend/monitor/src/main.ts` currently binds the monitor API to the fixed port `8081`. The commented `PORT` config is not active in the current code path.
### DSN server (`apps/backend/dsn-server`)

| Variable | Purpose |
|---|---|
| `PORT` | DSN server listen port. Defaults to `8082` |
| `DSN_BODY_LIMIT` | Express JSON / URL-encoded / text body limit. Increase this for larger replay payloads |
| `CLICKHOUSE_URL`, `CLICKHOUSE_USERNAME`, `CLICKHOUSE_PASSWORD` | Required. Used for ingest and query workloads |
| `DB_HOST`, `DB_PORT`, `DB_USERNAME`, `DB_PASSWORD`, `DB_DATABASE` | Postgres lookup for owner email and sourcemap metadata |
| `MONITOR_API_URL` | Fallback URL for `GET /api/application/public/config` when replay config is not yet available in ClickHouse |
| `ALERT_EMAIL_FALLBACK` | Fallback recipient when the app owner email cannot be resolved |
| `APP_OWNER_EMAIL_CACHE_TTL_MS` | Cache TTL for `appId` -> owner email lookups |
| `SOURCEMAP_CACHE_MAX`, `SOURCEMAP_CACHE_TTL_MS` | In-memory sourcemap cache controls for stack trace resolution |
| `RESEND_API_KEY`, `RESEND_FROM` | Resend mode for alert emails |
| `EMAIL_SENDER`, `EMAIL_SENDER_PASSWORD` | SMTP mode for alert emails |
| `EMAIL_PASS`, `EMAIL_PASSWORD` | Legacy aliases that are also accepted by the DSN email module |
| `INGEST_MODE` | `kafka` (default in deploy) or `direct` (writes straight to ClickHouse) |
| `KAFKA_ENABLED` | Master switch for the Kafka producer. Set to `true` when `INGEST_MODE=kafka` |
| `KAFKA_BROKERS` | Comma-separated Kafka broker addresses |
| `KAFKA_CLIENT_ID` | Kafka producer client identifier |
| `KAFKA_EVENTS_TOPIC` | Topic for SDK events. Default: `monitor.sdk.events.v1` |
| `KAFKA_REPLAYS_TOPIC` | Topic for replay uploads. Default: `monitor.sdk.replays.v1` |
| `KAFKA_FALLBACK_TO_CLICKHOUSE` | When `true`, falls back to a direct ClickHouse write if the Kafka publish fails |
| `INBOUND_MAX_PAYLOAD_BYTES` | Maximum accepted payload size per request (pre-deserialization) |
| `INBOUND_UA_BLACKLIST` | Comma-separated user-agent substrings to reject |
| `INBOUND_RELEASE_BLACKLIST` | Comma-separated release identifiers to reject |
| `RATE_LIMIT_EVENTS_PER_SEC` | Per-app token bucket refill rate (events/second) |
| `RATE_LIMIT_BURST` | Per-app token bucket burst capacity |
| `RATE_LIMIT_MAX_APPS` | Maximum number of tracked apps for rate limiting |
### Frontend dashboard (`apps/frontend/monitor`)

The dashboard does not ship its own `.env.example` in this repository. The main runtime envs are:

| Variable | Purpose |
|---|---|
| `API_PROXY_TARGET` | Target origin for `/api/*` rewrites. Defaults to `http://localhost:8081` |
| `DSN_API_PROXY_TARGET` | Target origin for `/dsn-api/*` rewrites. Defaults to `http://localhost:8082` |
| `NEXT_TELEMETRY_DISABLED` | Recommended for container or CI builds |
### Event worker (`apps/backend/event-worker`)

| Variable | Purpose |
|---|---|
| `CLICKHOUSE_URL` | ClickHouse HTTP endpoint for batch inserts |
| `CLICKHOUSE_USERNAME` | ClickHouse auth username |
| `CLICKHOUSE_PASSWORD` | ClickHouse auth password |
| `KAFKA_BROKERS` | Comma-separated Kafka broker addresses |
| `KAFKA_CLIENT_ID` | Kafka consumer client identifier |
| `KAFKA_CONSUMER_GROUP` | Consumer group ID. Default: `monitor-clickhouse-writer-v1` |
| `KAFKA_EVENTS_TOPIC` | Events topic to consume. Default: `monitor.sdk.events.v1` |
| `KAFKA_REPLAYS_TOPIC` | Replays topic to consume. Default: `monitor.sdk.replays.v1` |
| `KAFKA_DLQ_TOPIC` | Dead letter queue topic. Default: `monitor.sdk.dlq.v1` |
| `EVENT_BATCH_SIZE` | Maximum events per ClickHouse batch insert |
| `EVENT_BATCH_MAX_WAIT_MS` | Maximum wait time before flushing an incomplete batch |
| `EMBEDDING_MODEL_ID` | HuggingFace model for issue embeddings. Default: `Xenova/all-MiniLM-L6-v2` |
| `ISSUE_EMBEDDING_HIGH_THRESHOLD` | Cosine similarity above this merges issues automatically. Default: `0.92` |
| `ISSUE_EMBEDDING_LOW_THRESHOLD` | Cosine similarity above this triggers TF-IDF confirmation. Default: `0.85` |
| `ISSUE_TFIDF_THRESHOLD` | TF-IDF similarity threshold for confirming a match. Default: `0.80` |
| `LLM_PROVIDER` | LLM provider type. Default: `openai-compatible` |
| `LLM_BASE_URL` | LLM API base URL |
| `LLM_API_KEY` | LLM API key |
| `LLM_MODEL` | LLM model identifier. Default: `gpt-4o-mini` |
| `LLM_MAX_TOKENS` | Maximum tokens for the LLM response. Default: `1024` |
| `LLM_TEMPERATURE` | LLM sampling temperature. Default: `0.1` |
### Deployment compose (`.devcontainer/.env`)

| Variable | Purpose |
|---|---|
| `POSTGRES_PORT` | Host port for Postgres in the local infra compose |
| `CLICKHOUSE_HTTP_PORT`, `CLICKHOUSE_NATIVE_PORT` | Host ports for ClickHouse |
| `CLICKHOUSE_USERNAME`, `CLICKHOUSE_PASSWORD`, `CLICKHOUSE_DB` | ClickHouse bootstrap credentials and database name |
| `CLICKHOUSE_MAX_HTTP_BODY_SIZE` | ClickHouse HTTP write limit |
| `KAFKA_EXTERNAL_PORT` | Host port for the Kafka external listener. Default: `9094` |
| `KAFKA_BROKERS` | Broker addresses for the DSN server and event worker |
| `KAFKA_CONSUMER_GROUP` | Consumer group for the event worker. Default: `monitor-clickhouse-writer-v1` |
| `INGEST_MODE` | `kafka` or `direct`. Controls the DSN server ingest pipeline |
| `CADDY_HTTP_HOST_PORT`, `CADDY_HTTP_CONTAINER_PORT`, `CADDY_HTTPS_HOST_PORT` | Public port mapping for Caddy |
| `CADDY_DSN_MAX_BODY_SIZE` | Reverse-proxy body limit for `/dsn-api/*`, `/tracking/*`, and `/replay/*` |
| `MAIL_ON`, `AUTH_REQUIRE_EMAIL_VERIFICATION`, `FRONTEND_URL`, `DSN_BODY_LIMIT`, `SOURCEMAP_CACHE_MAX`, `SOURCEMAP_CACHE_TTL_MS` | Shared deployment-time app settings passed into the containers |
| `EMBEDDING_MODEL_ID`, `ISSUE_EMBEDDING_HIGH_THRESHOLD`, `ISSUE_EMBEDDING_LOW_THRESHOLD`, `ISSUE_TFIDF_THRESHOLD` | Event worker issue dedup tuning |
| `LLM_PROVIDER`, `LLM_BASE_URL`, `LLM_API_KEY`, `LLM_MODEL`, `LLM_MAX_TOKENS`, `LLM_TEMPERATURE` | Event worker LLM integration |
The current code chooses mail mode like this (sketched below):

- `MAIL_ON=false` -> effectively disabled
- `MAIL_ON=true` + `RESEND_API_KEY` -> Resend
- `MAIL_ON=true` + `EMAIL_SENDER` + `EMAIL_SENDER_PASSWORD` -> SMTP
- `MAIL_ON=true` without provider credentials -> JSON transport / warning-only fallback
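Expressed as a sketch (assumed helper, not the actual Nest mail module):

```ts
// Sketch of the documented mail-mode precedence. 'json-fallback' stands in for
// the JSON transport / warning-only behavior; the names here are illustrative.
type MailMode = 'disabled' | 'resend' | 'smtp' | 'json-fallback'

function resolveMailMode(env: NodeJS.ProcessEnv): MailMode {
  if (env.MAIL_ON !== 'true') return 'disabled'
  if (env.RESEND_API_KEY) return 'resend'
  if (env.EMAIL_SENDER && env.EMAIL_SENDER_PASSWORD) return 'smtp'
  return 'json-fallback'
}
```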
## SDK Integration

There are now two supported integration paths in this repo:
- manual SDK wiring using the snippets below
- skill-assisted integration through the local Codex skill at `.codex/skills/condev-sdk-integration/SKILL.md`
If you use Codex to instrument an existing AI app, you can ask it to follow the repository skill instead of wiring the SDK by hand.
Skill entry: `.codex/skills/condev-sdk-integration/SKILL.md`
Supported project shapes:
- Next.js App Router + Vercel AI SDK
- React/Vite frontend + separate FastAPI-style RAG backend
The skill codifies the same integration path used by the example apps and keeps the business logic intact. In practice it guides Codex to:
- add DSN env vars first
- wire the browser SDK bootstrap
- add session and user propagation
- add backend semantic traces when the app has a backend AI service
- validate `AI Streaming`, `AI Traces`, `AI Sessions`, `AI Users`, and `AI Cost`
Canonical example references used by the skill:

- `examples/aisdk-rag-chatbox/` (Next.js + Vercel AI SDK)
- `examples/rag/` (React/Vite frontend + FastAPI RAG backend)
Example prompts:
- "Use the condev-sdk-integration skill to instrument this Next.js + Vercel AI SDK app."
- "Use the condev-sdk-integration skill to add Condev monitoring to this React/Vite frontend and FastAPI RAG backend."
### Next.js Quick Start

For Next.js projects, use `@condev-monitor/nextjs`, which wraps both the browser SDK and AI monitoring behind a single ergonomic API.
`instrumentation-client.ts` (runs before hydration, client-side):

```ts
import { registerCondevClient } from '@condev-monitor/nextjs/client'

registerCondevClient({
  // dsn defaults to process.env.NEXT_PUBLIC_CONDEV_DSN
  replay: true,
  aiStreaming: { urlPatterns: ['/api/chat'] },
})
```

`instrumentation.ts` (runs in Node.js, server-side):
```ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { registerCondevServer } = await import('@condev-monitor/nextjs/server')
    // dsn defaults to process.env.CONDEV_SERVER_DSN ?? process.env.CONDEV_DSN ?? process.env.NEXT_PUBLIC_CONDEV_DSN
    await registerCondevServer({ debug: true })
  }
}
```

`registerCondevServer` automatically handles OTel provider registration: it mutates an existing `NodeTracerProvider` if one is already set, and falls back to creating and registering a `BasicTracerProvider` when none is available.
Recommended environment variables for new Next.js projects:
```
CONDEV_SERVER_DSN=http://localhost:8082/dsn-api/tracking/<appId>
NEXT_PUBLIC_CONDEV_DSN=http://localhost:8082/dsn-api/tracking/<appId>
```

`CONDEV_DSN` is still supported as a backward-compatible alias, but new integrations should prefer `CONDEV_SERVER_DSN`.
Vercel AI SDK route helper (minimal AI route wiring):

```ts
import { auth } from '@clerk/nextjs/server'
import { streamTextResponseWithCondev } from '@condev-monitor/nextjs/server'
import { convertToModelMessages } from 'ai'
import { openai } from '@ai-sdk/openai'

export async function POST(req: Request) {
  const { userId, sessionId: authSessionId } = await auth()
  const { messages, chatSessionId } = await req.json()
  const sessionId = chatSessionId?.trim() || authSessionId || undefined

  return streamTextResponseWithCondev({
    request: req,
    sessionId,
    userId,
    input: messages,
    name: 'ai.streamText',
    model: 'gpt-5-mini',
    provider: 'openai.responses',
    stream: {
      model: openai('gpt-5-mini'),
      messages: await convertToModelMessages(messages),
    },
  })
}
```

This helper automatically wires:
- browser `traceId` correlation
- `sessionId` / `userId` propagation
- AI SDK telemetry metadata
- request abort propagation
- fallback error reporting for provider/runtime failures
- cancelled stream reporting
- tool execution failure reporting
The AI route path is not fixed. It can live anywhere, for example:

- `app/api/chat/route.ts`
- `app/api/ai/chat/route.ts`
- `app/api/rag/ask/route.ts`
- `src/app/api/agent/run/route.ts`
Chat session helper (minimal client wiring for AI Sessions):

```tsx
'use client'

import { useState } from 'react'
import { useChat } from '@ai-sdk/react'
import { createCondevChatTransport } from '@condev-monitor/nextjs/chat'

export default function ChatPage() {
  const [{ chatSessionId, transport }] = useState(() =>
    createCondevChatTransport({
      sessionStorageKey: 'my-rag-chat-session-id',
      api: '/api/rag/ask',
    })
  )

  const { messages, sendMessage } = useChat({
    id: chatSessionId,
    transport,
  })

  // ...
}
```

The chat page/component path is also not fixed. It can be:
- `app/chat/page.tsx`
- `app/(dashboard)/assistant/page.tsx`
- any client component that uses `useChat(...)`
What must stay aligned (see the example below) is:

- `createCondevChatTransport({ api: '...' })`
- `registerCondevClient({ aiStreaming: { urlPatterns: ['...'] } })`
- your actual server AI route path
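For example, if the server route lives at `app/api/rag/ask/route.ts`, both client-side settings should point at the same path:

```ts
import { registerCondevClient } from '@condev-monitor/nextjs/client'
import { createCondevChatTransport } from '@condev-monitor/nextjs/chat'

// instrumentation-client.ts: trace streaming requests to the AI route
registerCondevClient({ aiStreaming: { urlPatterns: ['/api/rag/ask'] } })

// chat component: send useChat traffic through the same route
const { chatSessionId, transport } = createCondevChatTransport({ api: '/api/rag/ask' })
```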
React components (available from the main entry):

```tsx
import { CondevErrorBoundary, useMonitorUser } from '@condev-monitor/nextjs'

// Sync auth state to monitoring
useMonitorUser(currentUser ?? null)

// Wrap a component tree for boundary-level error capture
<CondevErrorBoundary fallback={<ErrorPage />}>
  <App />
</CondevErrorBoundary>
```

For plain React apps without Next.js, the same APIs are available from `@condev-monitor/react`:

```tsx
import { init } from '@condev-monitor/react'
import { CondevErrorBoundary, useMonitorUser } from '@condev-monitor/react'

init({ dsn: 'https://monitor.example.com/tracking/<appId>' })
```

For framework-agnostic browser apps, initialize the browser SDK directly:

```ts
import { init } from '@condev-monitor/monitor-sdk-browser'

const release = import.meta.env.VITE_MONITOR_RELEASE
const dist = import.meta.env.VITE_MONITOR_DIST

init({
  dsn: 'https://monitor.example.com/tracking/<appId>',
  release,
  dist,
  whiteScreen: { runtimeWatch: true },
  performance: true,
  replay: true,
  aiStreaming: false,
})
```

`init` options:

| Option | Description |
|---|---|
| `dsn` | Required. Canonical form is `https://<host>/<base>/tracking/<appId>` |
| `release`, `dist` | Used for sourcemap resolution |
| `whiteScreen` | `false` to disable, or an options object for startup polling and runtime mutation watch |
| `performance` | `false` to disable, or an options object for `longTask`, `jank`, `lowFps` thresholds |
| `replay` | `false` to disable, or an options object for replay buffer window and upload behavior |
| `transport` | Queue, retry, offline persistence, debug logging |
| `aiStreaming` | `false` by default. Enables browser-side streaming network tracing when turned on |
Manually trigger a white-screen check (for example, after a route change):

```ts
import { triggerWhiteScreenCheck } from '@condev-monitor/monitor-sdk-browser'

triggerWhiteScreenCheck('route-change')
```

Manual capture APIs are exported from the core package:

```ts
import { captureEvent, captureException, captureMessage } from '@condev-monitor/monitor-sdk-core'

captureMessage('hello')
captureEvent({ eventType: 'cta_click', data: { id: 'buy' } })
captureException(new Error('manual error'))
```

The repository ships `@condev-monitor/monitor-sdk-ai`. For Next.js projects the recommended entry point is `@condev-monitor/nextjs/server` (see Next.js Quick Start above).
For non-Next.js Node.js runtimes, wire the OTel span processor manually:

```ts
import { BasicTracerProvider } from '@opentelemetry/sdk-trace-base'
import { trace } from '@opentelemetry/api'
import { initAIMonitor, VercelAIAdapter } from '@condev-monitor/monitor-sdk-ai'

const processor = initAIMonitor({
  dsn: process.env.CONDEV_DSN!,
  adapter: new VercelAIAdapter(),
  debug: true,
})

trace.setGlobalTracerProvider(
  new BasicTracerProvider({
    spanProcessors: [processor as any],
  })
)
```

This sends semantic `ai_streaming` events that the DSN server joins with browser-side network traces via `traceId`.
### Sourcemap Upload

Use `examples/vanilla/scripts` as the reference workflow:

- `gen-release.sh`
- `build-with-sourcemaps.sh`
- `upload-sourcemaps.sh`
The upload script accepts these envs:
| Variable | Purpose |
|---|---|
| `MONITOR_APP_ID` or `APP_ID` | Target application id |
| `MONITOR_TOKEN`, `SOURCEMAP_TOKEN`, or `MONITOR_SOURCEMAP_TOKEN` | Sourcemap upload token |
| `MONITOR_RELEASE` or `VITE_MONITOR_RELEASE` | Release identifier |
| `MONITOR_DIST` or `VITE_MONITOR_DIST` | Optional dist identifier |
| `MONITOR_PUBLIC_URL` | Public URL prefix used to build `minifiedUrl` |
| `MONITOR_API_URL` | Monitor backend base URL, defaults to `http://localhost:8081` |
| `MONITOR_DIST_DIR` | Build output directory, defaults to `dist` |
The upload endpoint is:

```
POST /api/sourcemap/upload
```

Authentication is accepted through any of:

- `Authorization: Bearer <monitor-jwt>`
- `X-Sourcemap-Token: <token>`
- `X-Api-Token: <token>`
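As an illustration, a direct upload call might look like the following; the multipart field names here are assumptions for this sketch, and the upload scripts above remain the supported path:

```ts
// Hypothetical direct upload sketch. Field names (file, appId, release,
// minifiedUrl) are assumed for illustration; prefer upload-sourcemaps.sh.
import { readFile } from 'node:fs/promises'

const base = process.env.MONITOR_API_URL ?? 'http://localhost:8081'
const map = await readFile('dist/assets/index-abc123.js.map')

const form = new FormData()
form.append('file', new Blob([map]), 'index-abc123.js.map')
form.append('appId', process.env.MONITOR_APP_ID ?? '')
form.append('release', process.env.MONITOR_RELEASE ?? '')
form.append('minifiedUrl', 'https://app.example.com/assets/index-abc123.js')

const res = await fetch(`${base}/api/sourcemap/upload`, {
  method: 'POST',
  headers: { 'X-Sourcemap-Token': process.env.MONITOR_TOKEN ?? '' },
  body: form,
})
console.log(res.status)
```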
## API Surface

### Monitor API (`/api/*`)

| Endpoint | Purpose |
|---|---|
| `POST /api/admin/register` | Register dashboard user |
| `POST /api/auth/login` | Login and issue JWT |
| `POST /api/auth/logout` | Logout marker endpoint |
| `GET /api/currentUser` / `GET /api/me` | Current authenticated user |
| `POST /api/auth/forgot-password` | Send reset email |
| `POST /api/auth/reset-password` | Reset password |
| `POST /api/auth/reset-password/verify` | Validate reset token |
| `POST /api/auth/verify-email` | Verify email token |
| `POST /api/auth/change-email/request` | Send change-email confirmation |
| `POST /api/auth/change-email/confirm` | Confirm email change |
| `GET /api/application` | List current user's applications |
| `POST /api/application` | Create application |
| `PUT /api/application` | Update name / replay toggle / metadata |
| `DELETE /api/application` | Soft-delete application |
| `GET /api/application/public/config?appId=...` | Public replay config lookup |
| `GET /api/sourcemap?appId=...` | List sourcemaps |
| `POST /api/sourcemap/upload` | Upload sourcemap file |
| `GET /api/sourcemap/token?appId=...` | List sourcemap tokens |
| `POST /api/sourcemap/token` | Create sourcemap token |
| `DELETE /api/sourcemap/token/:id` | Revoke sourcemap token |
| `DELETE /api/sourcemap/:id` | Delete sourcemap record |
### DSN Server (`/dsn-api/*`)

| Endpoint | Purpose |
|---|---|
| `GET /dsn-api/healthz` | Liveness check |
| `POST /dsn-api/tracking/:app_id` | Main event ingestion endpoint |
| `GET /dsn-api/app-config?appId=...` | Replay enablement check for the SDK |
| `POST /dsn-api/replay/:app_id` | Replay upload |
| `GET /dsn-api/replay?appId=...&replayId=...` | Replay detail |
| `GET /dsn-api/replays?appId=...&range=...` | Replay list |
| `GET /dsn-api/overview?appId=...&range=...` | Overview totals and time series |
| `GET /dsn-api/issues?appId=...&range=...&limit=...` | Aggregated issues |
| `GET /dsn-api/error-events?appId=...&limit=...` | Recent raw error events with sourcemap resolution |
| `GET /dsn-api/metric?appId=...&range=...` | Performance metrics, percentile summaries, top paths |
| `GET /dsn-api/ai-streaming?appId=...&range=...` | Joined network + semantic AI streaming traces |
| `GET /dsn-api/bugs` | Raw error view helper |
| `GET /dsn-api/span` | Raw base monitor view helper |
Tracking endpoint forms:

- Recommended behind Caddy: `https://<domain>/tracking/<appId>`
- Direct to dsn-server: `http://<host>:8082/dsn-api/tracking/<appId>`
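For example, fetching aggregated issues from the query API; the `range` value format is illustrative, not a documented contract:

```ts
// Illustrative read of the documented issues endpoint.
const res = await fetch(
  'http://localhost:8082/dsn-api/issues?appId=<appId>&range=24h&limit=20',
)
const issues = await res.json()
console.log(issues)
```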
## Additional Docs

| Document | Description |
|---|---|
| DEPLOYMENT.md (中文) | Full stack deployment, Caddy routing, Cloudflare frontend, volumes, and operational notes |
| CONTRIBUTING.md (中文) | Local setup, quality checks, commit conventions, and PR checklist |
| docs/ai-observability-integration.md | Automatic vs manual AI observability coverage, Next.js helper usage, and custom spans |
| examples/aisdk-rag-chatbox/README.md | Concrete Condev integration example for Next.js + Vercel AI SDK |
| examples/rag/README.md | Concrete Condev integration example for React/Vite frontend + FastAPI RAG backend |
## Publishing SDK Packages

There is currently no release automation or Changesets workflow in this repository. Before publishing, bump versions manually in:

- `packages/*/package.json`
- `packages/python/pyproject.toml`
The publishable packages live under `packages/`:

- `@condev-monitor/monitor-sdk-core`
- `@condev-monitor/monitor-sdk-browser-utils`
- `@condev-monitor/monitor-sdk-browser`
- `@condev-monitor/monitor-sdk-ai`
- `@condev-monitor/react`
- `@condev-monitor/nextjs`
Suggested workflow:
```bash
pnpm -r --filter "./packages/*" build
npm login
pnpm -r --filter "./packages/*" publish --access public --no-git-checks
```

If you keep `workspace:*` references, publish the related packages together and keep versions aligned.
If you need to publish one-by-one, use dependency order:

1. `@condev-monitor/monitor-sdk-core`
2. `@condev-monitor/monitor-sdk-browser-utils`
3. `@condev-monitor/monitor-sdk-browser`
4. `@condev-monitor/monitor-sdk-ai`
5. `@condev-monitor/react`
6. `@condev-monitor/nextjs`
The Python package lives at `packages/python/pyproject.toml`:

- package name: `condev-monitor`
- import path: `condev_monitor`
Suggested workflow:
```bash
cd packages/python
python -m pip install --upgrade build twine
python -m build
python -m twine upload dist/*
```

If you want a dry run first, publish to TestPyPI before uploading to PyPI.
## License

Apache-2.0. See LICENSE.