diff --git a/CHANGELOG.md b/CHANGELOG.md index ccda9aa9..518c419b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -6,6 +6,46 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [0.8.5] - 2026-03-28 + +### Security +- **Cross-org isolation fix in SIEM**: `linkDetectionEventsToIncident` now scopes detection events to the requesting organization, preventing cross-tenant data corruption via crafted API calls +- **Cross-org auth bypass in pattern routes**: PUT and DELETE handlers for correlation patterns now verify organization membership before mutating data (the same check GET/POST already had) +- **SSRF protection for legacy webhook path**: the alert-notification job's direct `fetch()` call now validates URLs against private/internal IP ranges, matching the `WebhookProvider` safeguard +- **Disabled user login blocked**: `POST /login` now checks the `disabled` flag before creating a session, preventing disabled accounts from obtaining tokens +- **Expired invitation info leak**: `getInvitationByToken` now filters on `expires_at > NOW()`, preventing enumeration of expired invitation details + +### Fixed +- **SIEM dashboard timeline crash**: the `time_bucket()` call was missing a `::interval` cast on the parameterized bucket width, causing a PostgreSQL type error that broke the timeline widget for all users +- **SSE real-time events broken**: the SIEM store and incident detail page read the auth token from `localStorage.getItem('session_token')` (the wrong key), so the SSE connection never authenticated; they now use `getAuthToken()` from the shared auth utility +- **SSE log stream duplicate emission**: when multiple logs shared the same timestamp, the inclusive `from` bound caused them to be re-sent on every poll tick; the stream now tracks sent log IDs to deduplicate +- **Incident severity auto-grouping wrong**: `MAX(severity)` used PostgreSQL alphabetical ordering (`medium` > 
`critical`), producing incorrect severity on auto-grouped incidents; now uses ordinal ranking +- **Sigma notification failures silent**: the notification job payload was missing `organization_id` and `project_id`, and `markAsNotified` was called with a `null` historyId; both are now handled correctly +- **Incidents pagination total always zero**: `loadIncidents` in the SIEM store never wrote `response.total` to `incidentsTotal`; the total is now populated +- **Memory leaks on navigation**: 20+ Svelte components called `authStore.subscribe()` without cleanup; all now store the unsubscribe function and call it in `onDestroy` +- **`offset=0` silently dropped**: API client functions used `if (filters.offset)`, which is falsy for zero, so page-1 requests never sent the `offset` parameter; changed to `if (filters.offset != null)` +- **Search debounce timer leak**: `searchDebounceTimer` was not cleared in `onDestroy`, causing post-unmount API calls when navigating away mid-search +- **`verifyProjectAccess` double call**: when `projectId` is an array, the first element was verified twice (once before the loop, once inside it); consolidated into a single loop +- **`updateIncident` silent field skip**: `title`, `severity`, and `status` used truthy checks (`&&`) instead of `!== undefined`, inconsistent with `description` and `assigneeId` +- **Webhook error messages empty**: `response.statusText` is empty for HTTP/2; the error now includes the response body for useful detail +- **Retention job crash on empty orgs**: `Math.max(...[])` returns `-Infinity`, cascading to an `Invalid Date` in the `drop_chunks` call; an early return was added when no organizations exist +- **`escapeHtml` DOM leak**: the PDF export's `escapeHtml` created orphaned DOM nodes in the parent document; replaced with pure string replacement +- **Webhook headers validation missing**: `CreateChannelDialog` silently swallowed invalid JSON in the custom headers field; it now validates on submit +- **`getIncidentDetections` no org scope**: the query now accepts an optional 
`organizationId` for defense-in-depth filtering +- **Stale shared package types**: dist contained outdated `Project` and `Incident` interfaces with phantom fields (`slug`, `statusPageVisibility`, `source`, `monitorId`); rebuilt from source + +### Changed +- **Docker config sync**: `docker-compose.build.yml` now matches `docker-compose.yml` with all environment variables (MongoDB, `TRUST_PROXY`, `FRONTEND_URL`, `INTERNAL_DSN`, `DOCKER_CONTAINER`), MongoDB service, and `fluent-bit-metrics` service +- **`NODE_ENV` for backend**: production `docker-compose.yml` now sets `NODE_ENV: production` on the backend service (worker and frontend already had it) +- **`docker/.env.example`**: added `STORAGE_ENGINE`, ClickHouse, and MongoDB configuration sections + +### Dependencies +- `picomatch` 4.0.3 → 4.0.4 (fix ReDoS via extglob quantifiers + POSIX character class method injection) +- `brace-expansion` 5.0.2 → 5.0.5 (fix zero-step sequence DoS) +- `fast-xml-parser` 5.5.6 → 5.5.9 (fix entity expansion limits bypass) +- `fastify` bumped via dependabot +- `kysely` bumped via dependabot + ## [0.8.4] - 2026-03-19 ### Added diff --git a/README.md b/README.md index eb5d5baf..e4eca827 100644 --- a/README.md +++ b/README.md @@ -16,14 +16,14 @@ Coverage Docker Artifact Hub - Version + Version License Status
-> **🚀 RELEASE 0.8.4:** LogTide now supports **Multi-Engine Storage** (ClickHouse, MongoDB) and **Advanced Browser Observability**. +> **🚀 RELEASE 0.8.5:** LogTide now supports **Multi-Engine Storage** (ClickHouse, MongoDB) and **Advanced Browser Observability**. --- @@ -46,7 +46,7 @@ Designed for teams that need **GDPR compliance**, **full data ownership**, and * ### Logs Explorer ![LogTide Logs](docs/images/logs.png) -### Performance & Metrics (New in 0.8.4) +### Performance & Metrics (New in 0.8.5) ![LogTide Metrics](docs/images/metrics.png) ### Distributed Tracing @@ -82,12 +82,49 @@ Total control over your data. Uses pre-built images from Docker Hub. * **Frontend:** `http://localhost:3000` * **API:** `http://localhost:8080` +> **Note:** The default `docker compose up` starts **5 services**: PostgreSQL (TimescaleDB), Redis, backend, worker, and frontend. ClickHouse, MongoDB, and Fluent Bit are opt-in via [Docker profiles](#optional-profiles) and won't run unless explicitly enabled. + +#### Lightweight Setup (3 containers) + +For low-resource environments like a Raspberry Pi or a homelab, use the simplified compose that removes Redis entirely: + +```bash +mkdir logtide && cd logtide +curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/docker-compose.simple.yml +curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/.env.example +mv .env.example .env +docker compose -f docker-compose.simple.yml up -d +``` + +This runs only **PostgreSQL + backend + frontend**. The backend automatically uses PostgreSQL-based alternatives for job queues and live tail streaming. See the [Deployment docs](https://logtide.dev/docs/deployment#simplified-deployment) for details. 
+ +#### Optional Profiles + +Enable additional services with `--profile`: + +```bash +# Docker log collection (Fluent Bit) +docker compose --profile logging up -d + +# System metrics (CPU, memory, disk, network) +docker compose --profile metrics up -d + +# ClickHouse storage engine +docker compose --profile clickhouse up -d + +# MongoDB storage engine +docker compose --profile mongodb up -d + +# Combine profiles +docker compose --profile logging --profile metrics up -d +``` + ### Option B: Cloud (Fastest & Free) We host it for you. Perfect for testing. [**Sign up at logtide.dev**](https://logtide.dev). --- -## ✨ Core Features (v0.8.4) +## ✨ Core Features (v0.8.5) * 🚀 **Multi-Engine Reservoir:** Pluggable storage layer supporting **TimescaleDB**, **ClickHouse**, and **MongoDB**. * 🌐 **Browser SDK Enhancements:** Automatic collection of **Web Vitals** (LCP, INP, CLS), user session tracking, and click/network breadcrumbs. diff --git a/docker/.env.example b/docker/.env.example index 33cf8cb6..a8069ce4 100644 --- a/docker/.env.example +++ b/docker/.env.example @@ -53,6 +53,31 @@ DB_USER=logtide # SMTP_PASS=your_smtp_password # SMTP_FROM=alerts@yourdomain.com +# ============================================================================= +# STORAGE ENGINE (default: timescale) +# ============================================================================= +# Options: timescale, clickhouse, mongodb +# STORAGE_ENGINE=timescale + +# ============================================================================= +# CLICKHOUSE (when using --profile clickhouse) +# ============================================================================= +# CLICKHOUSE_HOST=clickhouse +# CLICKHOUSE_PORT=8123 +# CLICKHOUSE_DATABASE=logtide +# CLICKHOUSE_USERNAME=default +# CLICKHOUSE_PASSWORD=your_clickhouse_password + +# ============================================================================= +# MONGODB (when using --profile mongodb) +# 
============================================================================= +# MONGODB_HOST=mongodb +# MONGODB_PORT=27017 +# MONGODB_DATABASE=logtide +# MONGODB_USERNAME= +# MONGODB_PASSWORD= +# MONGODB_AUTH_SOURCE= + # ============================================================================= # HORIZONTAL SCALING (advanced) # ============================================================================= diff --git a/docker/docker-compose.build.yml b/docker/docker-compose.build.yml index 07feb4f7..d4a157c8 100644 --- a/docker/docker-compose.build.yml +++ b/docker/docker-compose.build.yml @@ -87,6 +87,8 @@ services: - "8080:8080" environment: NODE_ENV: production + TRUST_PROXY: ${TRUST_PROXY:-false} + FRONTEND_URL: ${FRONTEND_URL:-http://localhost:3000} DATABASE_URL: postgresql://${DB_USER}:${DB_PASSWORD}@postgres:5432/${DB_NAME} DATABASE_HOST: postgres DB_USER: ${DB_USER} @@ -101,6 +103,8 @@ services: SMTP_FROM: ${SMTP_FROM:-noreply@logtide.local} INTERNAL_LOGGING_ENABLED: ${INTERNAL_LOGGING_ENABLED:-false} INTERNAL_API_KEY: ${INTERNAL_API_KEY:-} + INTERNAL_DSN: ${INTERNAL_DSN:-} + DOCKER_CONTAINER: "true" SERVICE_NAME: logtide-backend STORAGE_ENGINE: ${STORAGE_ENGINE:-timescale} CLICKHOUSE_HOST: ${CLICKHOUSE_HOST:-clickhouse} @@ -108,6 +112,12 @@ services: CLICKHOUSE_DATABASE: ${CLICKHOUSE_DATABASE:-logtide} CLICKHOUSE_USERNAME: ${CLICKHOUSE_USERNAME:-default} CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:-} + MONGODB_HOST: ${MONGODB_HOST:-mongodb} + MONGODB_PORT: ${MONGODB_PORT:-27017} + MONGODB_DATABASE: ${MONGODB_DATABASE:-logtide} + MONGODB_USERNAME: ${MONGODB_USERNAME:-} + MONGODB_PASSWORD: ${MONGODB_PASSWORD:-} + MONGODB_AUTH_SOURCE: ${MONGODB_AUTH_SOURCE:-} depends_on: postgres: condition: service_healthy @@ -133,11 +143,15 @@ services: disable: true environment: NODE_ENV: production + TRUST_PROXY: ${TRUST_PROXY:-false} + FRONTEND_URL: ${FRONTEND_URL:-http://localhost:3000} DATABASE_URL: postgresql://${DB_USER}:${DB_PASSWORD}@postgres:5432/${DB_NAME} 
DATABASE_HOST: postgres DB_USER: ${DB_USER} REDIS_URL: redis://:${REDIS_PASSWORD}@redis:6379 API_KEY_SECRET: ${API_KEY_SECRET} + PORT: 8080 + HOST: 0.0.0.0 SMTP_HOST: ${SMTP_HOST:-} SMTP_PORT: ${SMTP_PORT:-587} SMTP_USER: ${SMTP_USER:-} @@ -145,6 +159,8 @@ services: SMTP_FROM: ${SMTP_FROM:-noreply@logtide.local} INTERNAL_LOGGING_ENABLED: ${INTERNAL_LOGGING_ENABLED:-false} INTERNAL_API_KEY: ${INTERNAL_API_KEY:-} + INTERNAL_DSN: ${INTERNAL_DSN:-} + DOCKER_CONTAINER: "true" SERVICE_NAME: logtide-worker STORAGE_ENGINE: ${STORAGE_ENGINE:-timescale} CLICKHOUSE_HOST: ${CLICKHOUSE_HOST:-clickhouse} @@ -152,6 +168,12 @@ services: CLICKHOUSE_DATABASE: ${CLICKHOUSE_DATABASE:-logtide} CLICKHOUSE_USERNAME: ${CLICKHOUSE_USERNAME:-default} CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:-} + MONGODB_HOST: ${MONGODB_HOST:-mongodb} + MONGODB_PORT: ${MONGODB_PORT:-27017} + MONGODB_DATABASE: ${MONGODB_DATABASE:-logtide} + MONGODB_USERNAME: ${MONGODB_USERNAME:-} + MONGODB_PASSWORD: ${MONGODB_PASSWORD:-} + MONGODB_AUTH_SOURCE: ${MONGODB_AUTH_SOURCE:-} depends_on: backend: condition: service_healthy @@ -173,6 +195,8 @@ services: environment: NODE_ENV: production PUBLIC_API_URL: ${PUBLIC_API_URL:-http://localhost:8080} + LOGTIDE_DSN: ${LOGTIDE_DSN} + PUBLIC_LOGTIDE_DSN: ${PUBLIC_LOGTIDE_DSN} depends_on: - backend restart: unless-stopped @@ -203,6 +227,26 @@ services: networks: - logtide-network + mongodb: + image: mongo:7.0 + container_name: logtide-mongodb + profiles: + - mongodb + environment: + MONGO_INITDB_DATABASE: ${MONGODB_DATABASE:-logtide} + ports: + - "${MONGODB_PORT:-27017}:27017" + volumes: + - mongodb_data:/data/db + healthcheck: + test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"] + interval: 10s + timeout: 5s + retries: 5 + restart: unless-stopped + networks: + - logtide-network + fluent-bit: image: ${FLUENT_BIT_IMAGE:-fluent/fluent-bit:4.2.2} container_name: logtide-fluent-bit @@ -229,6 +273,24 @@ services: networks: - logtide-network + fluent-bit-metrics: + image: 
${FLUENT_BIT_IMAGE:-fluent/fluent-bit:4.2.2} + container_name: logtide-fluent-bit-metrics + profiles: + - metrics + volumes: + - ./fluent-bit-metrics.conf:/fluent-bit/etc/fluent-bit.conf:ro + - ./format_metrics.lua:/fluent-bit/etc/format_metrics.lua:ro + - /proc:/host/proc:ro + environment: + LOGTIDE_API_KEY: ${FLUENT_BIT_API_KEY:-} + LOGTIDE_API_HOST: backend + depends_on: + - backend + restart: unless-stopped + networks: + - logtide-network + volumes: postgres_data: driver: local @@ -236,6 +298,8 @@ volumes: driver: local clickhouse_data: driver: local + mongodb_data: + driver: local networks: logtide-network: diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml index bd849fb0..2995da4d 100644 --- a/docker/docker-compose.yml +++ b/docker/docker-compose.yml @@ -101,6 +101,7 @@ services: ports: - "8080:8080" environment: + NODE_ENV: production TRUST_PROXY: ${TRUST_PROXY:-false} FRONTEND_URL: ${FRONTEND_URL:-http://localhost:3000} DATABASE_URL: postgresql://${DB_USER}:${DB_PASSWORD}@postgres:5432/${DB_NAME} diff --git a/package.json b/package.json index 35c9c333..16fa18f8 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "logtide", - "version": "0.8.4", + "version": "0.8.5", "private": true, "description": "LogTide - Self-hosted log management platform", "author": "LogTide Team", @@ -12,7 +12,9 @@ "express": ">=5.2.0", "qs": ">=6.14.2", "devalue": ">=5.6.4", - "fast-xml-parser": ">=5.5.6", + "picomatch": ">=4.0.4", + "brace-expansion": ">=5.0.3", + "fast-xml-parser": ">=5.5.7", "minimatch": ">=10.2.3", "ajv": ">=8.18.0", "rollup": ">=4.59.0", diff --git a/packages/backend/package.json b/packages/backend/package.json index d70d7422..9168c32b 100644 --- a/packages/backend/package.json +++ b/packages/backend/package.json @@ -1,6 +1,6 @@ { "name": "@logtide/backend", - "version": "0.8.4", + "version": "0.8.5", "private": true, "description": "LogTide Backend API", "type": "module", diff --git 
a/packages/backend/src/modules/bootstrap/service.ts b/packages/backend/src/modules/bootstrap/service.ts index a262b2b2..67f5b31f 100644 --- a/packages/backend/src/modules/bootstrap/service.ts +++ b/packages/backend/src/modules/bootstrap/service.ts @@ -9,6 +9,7 @@ * This runs at server startup */ +import crypto from 'node:crypto'; import bcrypt from 'bcrypt'; import { db } from '../../database/connection.js'; import { config } from '../../config/index.js'; @@ -39,11 +40,6 @@ export class BootstrapService { // Check if env vars are configured const { INITIAL_ADMIN_EMAIL, INITIAL_ADMIN_PASSWORD, INITIAL_ADMIN_NAME } = config; - if (!INITIAL_ADMIN_EMAIL || !INITIAL_ADMIN_PASSWORD) { - // No initial admin configured, skip - return; - } - // Check if any "real" users exist (with password set - excludes system@logtide.internal) const userCount = await db .selectFrom('users') @@ -58,16 +54,19 @@ export class BootstrapService { return; } - // Create the initial admin user - console.log(`[Bootstrap] No users found. Creating initial admin: ${INITIAL_ADMIN_EMAIL}`); - - const passwordHash = await bcrypt.hash(INITIAL_ADMIN_PASSWORD, SALT_ROUNDS); + // Use env vars if set, otherwise generate random credentials + const adminEmail = INITIAL_ADMIN_EMAIL || 'system@logtide.internal'; + const adminPassword = INITIAL_ADMIN_PASSWORD || crypto.randomBytes(16).toString('base64url'); const adminName = INITIAL_ADMIN_NAME || 'Admin'; + console.log(`[Bootstrap] No users found. 
Creating initial admin: ${adminEmail}`); + + const passwordHash = await bcrypt.hash(adminPassword, SALT_ROUNDS); + const user = await db .insertInto('users') .values({ - email: INITIAL_ADMIN_EMAIL, + email: adminEmail, password_hash: passwordHash, name: adminName, is_admin: true, @@ -75,7 +74,16 @@ export class BootstrapService { .returning(['id', 'email', 'name', 'is_admin', 'disabled', 'created_at', 'last_login']) .executeTakeFirstOrThrow(); - console.log(`[Bootstrap] Initial admin created successfully: ${user.email} (ID: ${user.id})`); + console.log(''); + console.log('╔══════════════════════════════════════════════════════════╗'); + console.log('║ INITIAL ADMIN ACCOUNT CREATED ║'); + console.log('╠══════════════════════════════════════════════════════════╣'); + console.log(`║ Email: ${adminEmail.padEnd(44)}║`); + console.log(`║ Password: ${adminPassword.padEnd(44)}║`); + console.log('╠══════════════════════════════════════════════════════════╣'); + console.log('║ ⚠ Change this password after first login! 
║'); + console.log('╚══════════════════════════════════════════════════════════╝'); + console.log(''); // Set this user as default user for auth-free mode await settingsService.set('auth.default_user_id', user.id); diff --git a/packages/backend/src/modules/correlation/pattern-routes.ts b/packages/backend/src/modules/correlation/pattern-routes.ts index dc0ddb53..5ea8426f 100644 --- a/packages/backend/src/modules/correlation/pattern-routes.ts +++ b/packages/backend/src/modules/correlation/pattern-routes.ts @@ -387,17 +387,29 @@ export default async function patternRoutes(fastify: FastifyInstance) { }, }, }, - async (request, reply) => { + async (request: any, reply) => { const { id } = request.params; - const organizationId = (request as any).organizationId || request.query.organizationId; + const requestedOrgId = (request as any).organizationId || request.query.organizationId; - if (!organizationId) { + if (!requestedOrgId) { return reply.status(400).send({ success: false, error: 'Organization ID is required', }); } + const organizationId = await getUserOrganizationId( + request.user.id, + requestedOrgId + ); + + if (!organizationId) { + return reply.status(403).send({ + success: false, + error: 'No organization access', + }); + } + // Get existing pattern const existing = await db .selectFrom('identifier_patterns') @@ -498,17 +510,29 @@ export default async function patternRoutes(fastify: FastifyInstance) { }, }, }, - async (request, reply) => { + async (request: any, reply) => { const { id } = request.params; - const organizationId = (request as any).organizationId || request.query.organizationId; + const requestedOrgId = (request as any).organizationId || request.query.organizationId; - if (!organizationId) { + if (!requestedOrgId) { return reply.status(400).send({ success: false, error: 'Organization ID is required', }); } + const organizationId = await getUserOrganizationId( + request.user.id, + requestedOrgId + ); + + if (!organizationId) { + return 
reply.status(403).send({ + success: false, + error: 'No organization access', + }); + } + try { // Check if pattern exists and belongs to this org const existing = await db diff --git a/packages/backend/src/modules/query/routes.ts b/packages/backend/src/modules/query/routes.ts index e68c299a..396e0a6f 100644 --- a/packages/backend/src/modules/query/routes.ts +++ b/packages/backend/src/modules/query/routes.ts @@ -95,26 +95,13 @@ const queryRoutes: FastifyPluginAsync = async (fastify) => { } if (request.user?.id) { - const hasAccess = await verifyProjectAccess( - Array.isArray(projectId) ? projectId[0] : projectId, - request.user.id - ); - - if (!hasAccess) { - return reply.code(403).send({ - error: 'Access denied - you do not have access to this project', - }); - } - - // If multiple projects requested, verify access to all - if (Array.isArray(projectId)) { - for (const pid of projectId) { - const access = await verifyProjectAccess(pid, request.user.id); - if (!access) { - return reply.code(403).send({ - error: `Access denied - you do not have access to project ${pid}`, - }); - } + const projectIds = Array.isArray(projectId) ? 
projectId : [projectId]; + for (const pid of projectIds) { + const hasAccess = await verifyProjectAccess(pid, request.user.id); + if (!hasAccess) { + return reply.code(403).send({ + error: `Access denied - you do not have access to project ${pid}`, + }); } } } @@ -761,8 +748,9 @@ const queryRoutes: FastifyPluginAsync = async (fastify) => { reply.raw.setHeader('Cache-Control', 'no-cache'); reply.raw.setHeader('Connection', 'keep-alive'); - // Track last timestamp to avoid duplicates + // Track last timestamp and sent IDs to avoid duplicates let lastTimestamp = new Date(); + let sentIds = new Set(); // Send initial connection message reply.raw.write(`data: ${JSON.stringify({ type: 'connected', timestamp: new Date() })}\n\n`); @@ -781,13 +769,26 @@ const queryRoutes: FastifyPluginAsync = async (fastify) => { }); if (newLogs.logs.length > 0) { - // Update last timestamp - const latestLog = newLogs.logs[newLogs.logs.length - 1]; - lastTimestamp = new Date(latestLog.time); + // Filter out already-sent logs + const unseenLogs = newLogs.logs.filter((log: any) => !sentIds.has(log.id)); + + if (unseenLogs.length > 0) { + // Update last timestamp + const latestLog = unseenLogs[unseenLogs.length - 1]; + lastTimestamp = new Date(latestLog.time); + + // Rebuild sentIds with only logs at the latest timestamp to bound memory + sentIds = new Set(); + for (const log of newLogs.logs) { + if (new Date(log.time).getTime() === lastTimestamp.getTime()) { + sentIds.add(log.id); + } + } - // Send each log as separate event - for (const log of newLogs.logs) { - reply.raw.write(`data: ${JSON.stringify({ type: 'log', data: log })}\n\n`); + // Send each new log as separate event + for (const log of unseenLogs) { + reply.raw.write(`data: ${JSON.stringify({ type: 'log', data: log })}\n\n`); + } } } diff --git a/packages/backend/src/modules/retention/service.ts b/packages/backend/src/modules/retention/service.ts index f6882ef2..493ad446 100644 --- 
a/packages/backend/src/modules/retention/service.ts +++ b/packages/backend/src/modules/retention/service.ts @@ -328,6 +328,18 @@ export class RetentionService { projectsByOrg.set(p.organization_id, list); } + // Early return if no organizations exist + if (organizations.length === 0) { + return { + totalOrganizations: 0, + successfulOrganizations: 0, + failedOrganizations: 0, + totalLogsDeleted: 0, + totalExecutionTimeMs: Date.now() - startTime, + results: [], + }; + } + // Find max retention (used for drop_chunks) const maxRetention = Math.max(...organizations.map(o => o.retention_days)); const maxCutoff = new Date(Date.now() - maxRetention * 24 * 60 * 60 * 1000); diff --git a/packages/backend/src/modules/siem/dashboard-service.ts b/packages/backend/src/modules/siem/dashboard-service.ts index 64441c3a..697a8289 100644 --- a/packages/backend/src/modules/siem/dashboard-service.ts +++ b/packages/backend/src/modules/siem/dashboard-service.ts @@ -108,7 +108,7 @@ export class SiemDashboardService { let query = this.db .selectFrom('detection_events') .select([ - sql`time_bucket(${bucketInterval}, time)`.as('timestamp'), + sql`time_bucket(${bucketInterval}::interval, time)`.as('timestamp'), sql`count(*)::int`.as('count'), ]) .where('organization_id', '=', filters.organizationId) diff --git a/packages/backend/src/modules/siem/routes.ts b/packages/backend/src/modules/siem/routes.ts index 0716e130..31a36620 100644 --- a/packages/backend/src/modules/siem/routes.ts +++ b/packages/backend/src/modules/siem/routes.ts @@ -287,7 +287,8 @@ export async function siemRoutes(fastify: FastifyInstance) { if (body.detectionEventIds && body.detectionEventIds.length > 0) { await siemService.linkDetectionEventsToIncident( incident.id, - body.detectionEventIds + body.detectionEventIds, + body.organizationId ); // Enrich incident with IP data after linking events @@ -479,7 +480,7 @@ export async function siemRoutes(fastify: FastifyInstance) { // Get related data const [detections, comments, 
history] = await Promise.all([ - siemService.getIncidentDetections(params.id), + siemService.getIncidentDetections(params.id, query.organizationId), siemService.getIncidentComments(params.id), siemService.getIncidentHistory(params.id), ]); diff --git a/packages/backend/src/modules/siem/service.ts b/packages/backend/src/modules/siem/service.ts index adb7f7af..b4054dad 100644 --- a/packages/backend/src/modules/siem/service.ts +++ b/packages/backend/src/modules/siem/service.ts @@ -267,12 +267,12 @@ export class SiemService { const result = await this.db .updateTable('incidents') .set({ - ...(updates.title && { title: updates.title }), + ...(updates.title !== undefined && { title: updates.title }), ...(updates.description !== undefined && { description: updates.description, }), - ...(updates.severity && { severity: updates.severity }), - ...(updates.status && { status: updates.status }), + ...(updates.severity !== undefined && { severity: updates.severity }), + ...(updates.status !== undefined && { status: updates.status }), ...(updates.assigneeId !== undefined && { assignee_id: updates.assigneeId, }), @@ -304,7 +304,8 @@ export class SiemService { */ async linkDetectionEventsToIncident( incidentId: string, - detectionEventIds: string[] + detectionEventIds: string[], + organizationId?: string ): Promise { if (detectionEventIds.length === 0) return; @@ -320,12 +321,17 @@ export class SiemService { ) .execute(); - // Update detection_events to set incident_id - await this.db + // Update detection_events to set incident_id (scoped to org if provided) + let updateQuery = this.db .updateTable('detection_events') .set({ incident_id: incidentId }) - .where('id', 'in', detectionEventIds) - .execute(); + .where('id', 'in', detectionEventIds); + + if (organizationId) { + updateQuery = updateQuery.where('organization_id', '=', organizationId); + } + + await updateQuery.execute(); // Update incident detection_count await this.db @@ -340,11 +346,17 @@ export class SiemService { /** 
* Get detection events for an incident */ - async getIncidentDetections(incidentId: string): Promise { - const results = await this.db + async getIncidentDetections(incidentId: string, organizationId?: string): Promise { + let query = this.db .selectFrom('detection_events') .selectAll() - .where('incident_id', '=', incidentId) + .where('incident_id', '=', incidentId); + + if (organizationId) { + query = query.where('organization_id', '=', organizationId); + } + + const results = await query .orderBy('time', 'desc') .execute(); diff --git a/packages/backend/src/modules/users/service.ts b/packages/backend/src/modules/users/service.ts index 9d1f44ce..026e68b0 100644 --- a/packages/backend/src/modules/users/service.ts +++ b/packages/backend/src/modules/users/service.ts @@ -110,7 +110,7 @@ export class UsersService { // Find user by email const user = await db .selectFrom('users') - .select(['id', 'email', 'password_hash']) + .select(['id', 'email', 'password_hash', 'disabled']) .where('email', '=', input.email) .executeTakeFirst(); @@ -129,6 +129,11 @@ export class UsersService { throw new Error('Invalid email or password'); } + // Check if account is disabled + if (user.disabled) { + throw new Error('This account has been disabled'); + } + // Update last login await db .updateTable('users') diff --git a/packages/backend/src/queue/jobs/alert-notification.ts b/packages/backend/src/queue/jobs/alert-notification.ts index 5a7adb53..e3e0df83 100644 --- a/packages/backend/src/queue/jobs/alert-notification.ts +++ b/packages/backend/src/queue/jobs/alert-notification.ts @@ -156,20 +156,24 @@ export async function processAlertNotification(job: any) { console.log(`No webhook configured for: ${data.rule_name}`); } - // Mark as notified (with errors if any) - if (errors.length > 0) { - await alertsService.markAsNotified(data.historyId, errors.join('; ')); - } else { - await alertsService.markAsNotified(data.historyId); + // Mark as notified (with errors if any) - skip for Sigma 
rules (no history entry)
+    if (data.historyId) {
+      if (errors.length > 0) {
+        await alertsService.markAsNotified(data.historyId, errors.join('; '));
+      } else {
+        await alertsService.markAsNotified(data.historyId);
+      }
     }
 
     console.log(`Alert notification processed: ${data.rule_name}`);
   } catch (error) {
     console.error(`Failed to process alert notification: ${data.rule_name}`, error);
-    await alertsService.markAsNotified(
-      data.historyId,
-      error instanceof Error ? error.message : 'Unknown error'
-    );
+    if (data.historyId) {
+      await alertsService.markAsNotified(
+        data.historyId,
+        error instanceof Error ? error.message : 'Unknown error'
+      );
+    }
     throw error;
   }
 }
@@ -209,9 +213,36 @@ async function sendEmailNotification(data: AlertNotificationData & { organizatio
 
   console.log(`Email sent to: ${data.email_recipients.join(', ')}`);
 }
 
+const BLOCKED_HOSTS = ['localhost', '127.0.0.1', '0.0.0.0', '[::1]', 'metadata.google.internal'];
+
+function isPrivateIP(hostname: string): boolean {
+  if (BLOCKED_HOSTS.includes(hostname.toLowerCase())) return true;
+  const parts = hostname.split('.').map(Number);
+  if (parts.length !== 4 || parts.some(isNaN)) return false;
+  if (parts[0] === 10) return true;
+  if (parts[0] === 172 && parts[1] >= 16 && parts[1] <= 31) return true;
+  if (parts[0] === 192 && parts[1] === 168) return true;
+  if (parts[0] === 169 && parts[1] === 254) return true;
+  if (parts[0] === 127) return true;
+  return false;
+}
+
 async function sendWebhookNotification(data: AlertNotificationData) {
   if (!data.webhook_url) return;
 
+  // SSRF protection
+  try {
+    const url = new URL(data.webhook_url);
+    if (isPrivateIP(url.hostname)) {
+      throw new Error('Webhook URLs pointing to private/internal addresses are not allowed');
+    }
+  } catch (e) {
+    if (e instanceof TypeError) {
+      throw new Error(`Invalid webhook URL: ${data.webhook_url}`);
+    }
+    throw e;
+  }
+
   const frontendUrl = getFrontendUrl();
 
   const response = await fetch(data.webhook_url, {
@@ -236,7 +267,8 @@ async function sendWebhookNotification(data: AlertNotificationData) {
   });
 
   if (!response.ok) {
-    throw new Error(`Webhook request failed: ${response.statusText}`);
+    const errorText = await response.text().catch(() => '');
+    throw new Error(`Webhook request failed: HTTP ${response.status}${errorText ? ` - ${errorText}` : ''}`);
   }
 
   console.log(`Webhook notification sent to: ${data.webhook_url}`);
diff --git a/packages/backend/src/queue/jobs/incident-autogrouping.ts b/packages/backend/src/queue/jobs/incident-autogrouping.ts
index ef971afe..c5eb6b43 100644
--- a/packages/backend/src/queue/jobs/incident-autogrouping.ts
+++ b/packages/backend/src/queue/jobs/incident-autogrouping.ts
@@ -53,7 +53,7 @@ async function groupByTraceId(organizationId: string): Promise {
       'trace_id',
       'project_id',
       db.fn.count('id').as('count'),
-      db.fn.max('severity').as('maxSeverity'), // Highest severity wins
+      sql`(ARRAY['critical','high','medium','low','informational'])[MIN(CASE severity WHEN 'critical' THEN 1 WHEN 'high' THEN 2 WHEN 'medium' THEN 3 WHEN 'low' THEN 4 WHEN 'informational' THEN 5 ELSE 5 END)]`.as('maxSeverity'),
       sql`array_agg(id)`.as('eventIds'),
       sql`array_agg(service)`.as('services'),
       db.fn.min('time').as('firstSeen'),
diff --git a/packages/backend/src/queue/jobs/sigma-detection.ts b/packages/backend/src/queue/jobs/sigma-detection.ts
index 51d26c41..279b044a 100644
--- a/packages/backend/src/queue/jobs/sigma-detection.ts
+++ b/packages/backend/src/queue/jobs/sigma-detection.ts
@@ -169,6 +169,8 @@ export async function processSigmaDetection(job: any) {
       historyId: null, // No alert history for Sigma rules
       rule_id: sigmaRule.id,
       rule_name: `[Sigma] ${firstMatch.ruleTitle}`,
+      organization_id: data.organizationId,
+      project_id: data.projectId || null,
       log_count: matches.length,
       threshold: 1, // Sigma rules are match-based, not threshold-based
       time_window: 1, // Immediate detection
diff --git a/packages/backend/src/tests/modules/bootstrap/bootstrap-service.test.ts b/packages/backend/src/tests/modules/bootstrap/bootstrap-service.test.ts
index e5392b2d..4b98f86b 100644
--- a/packages/backend/src/tests/modules/bootstrap/bootstrap-service.test.ts
+++ b/packages/backend/src/tests/modules/bootstrap/bootstrap-service.test.ts
@@ -37,7 +37,7 @@ describe('BootstrapService', () => {
   });
 
   describe('ensureInitialAdmin', () => {
-    it('should skip when env vars not set', async () => {
+    it('should create default system admin when env vars not set', async () => {
       // Mock config without initial admin
       const originalEmail = config.INITIAL_ADMIN_EMAIL;
       const originalPassword = config.INITIAL_ADMIN_PASSWORD;
@@ -47,9 +47,11 @@ describe('BootstrapService', () => {
 
       await bootstrapService.ensureInitialAdmin();
 
-      // No users should be created
+      // A default system admin should be created with fallback email
       const users = await db.selectFrom('users').selectAll().execute();
-      expect(users).toHaveLength(0);
+      expect(users).toHaveLength(1);
+      expect(users[0].email).toBe('system@logtide.internal');
+      expect(users[0].is_admin).toBe(true);
 
       // Restore
       (config as any).INITIAL_ADMIN_EMAIL = originalEmail;
diff --git a/packages/backend/src/tests/modules/correlation/pattern-routes.test.ts b/packages/backend/src/tests/modules/correlation/pattern-routes.test.ts
index cdfc083c..ef202962 100644
--- a/packages/backend/src/tests/modules/correlation/pattern-routes.test.ts
+++ b/packages/backend/src/tests/modules/correlation/pattern-routes.test.ts
@@ -397,6 +397,69 @@ describe('Pattern Routes', () => {
 
       expect(response.statusCode).toBe(400);
     });
+
+    it('should return 403 when user has no access to the specified organization', async () => {
+      // Create a second user to own the other org
+      const otherUser = await db
+        .insertInto('users')
+        .values({
+          email: 'other-put@example.com',
+          password_hash: 'hash',
+          name: 'Other User',
+        })
+        .returningAll()
+        .executeTakeFirstOrThrow();
+
+      // Create an organization that testUser is NOT a member of
+      const otherOrg = await db
+        .insertInto('organizations')
+        .values({
+          name: 'Other Org',
+          slug: `other-org-put-${Date.now()}`,
+          owner_id: otherUser.id,
+        })
+        .returningAll()
+        .executeTakeFirstOrThrow();
+
+      // Add otherUser as member (but NOT testUser)
+      await db
+        .insertInto('organization_members')
+        .values({
+          organization_id: otherOrg.id,
+          user_id: otherUser.id,
+          role: 'owner',
+        })
+        .execute();
+
+      // Create a pattern in the other org
+      const pattern = await db
+        .insertInto('identifier_patterns')
+        .values({
+          organization_id: otherOrg.id,
+          name: 'other_org_pattern',
+          display_name: 'Other Org Pattern',
+          pattern: '\\bOTHER\\b',
+          field_names: [],
+          enabled: true,
+          priority: 50,
+        })
+        .returningAll()
+        .executeTakeFirstOrThrow();
+
+      // Try to update it with testUser's auth token
+      const response = await app.inject({
+        method: 'PUT',
+        url: `/v1/patterns/${pattern.id}?organizationId=${otherOrg.id}`,
+        headers: {
+          Authorization: `Bearer ${authToken}`,
+        },
+        payload: {
+          displayName: 'Hacked Pattern',
+        },
+      });
+
+      expect(response.statusCode).toBe(403);
+    });
   });
 
   describe('DELETE /v1/patterns/:id', () => {
@@ -460,6 +523,66 @@ describe('Pattern Routes', () => {
 
       expect(response.statusCode).toBe(400);
     });
+
+    it('should return 403 when user has no access to the specified organization', async () => {
+      // Create a second user to own the other org
+      const otherUser = await db
+        .insertInto('users')
+        .values({
+          email: 'other-delete@example.com',
+          password_hash: 'hash',
+          name: 'Other User',
+        })
+        .returningAll()
+        .executeTakeFirstOrThrow();
+
+      // Create an organization that testUser is NOT a member of
+      const otherOrg = await db
+        .insertInto('organizations')
+        .values({
+          name: 'Other Org',
+          slug: `other-org-del-${Date.now()}`,
+          owner_id: otherUser.id,
+        })
+        .returningAll()
+        .executeTakeFirstOrThrow();
+
+      // Add otherUser as member (but NOT testUser)
+      await db
+        .insertInto('organization_members')
+        .values({
+          organization_id: otherOrg.id,
+          user_id: otherUser.id,
+          role: 'owner',
+        })
+        .execute();
+
+      // Create a pattern in the other org
+      const pattern = await db
+        .insertInto('identifier_patterns')
+        .values({
+          organization_id: otherOrg.id,
+          name: 'other_org_pattern_del',
+          display_name: 'Other Org Pattern',
+          pattern: '\\bOTHER\\b',
+          field_names: [],
+          enabled: true,
+          priority: 50,
+        })
+        .returningAll()
+        .executeTakeFirstOrThrow();
+
+      // Try to delete it with testUser's auth token
+      const response = await app.inject({
+        method: 'DELETE',
+        url: `/v1/patterns/${pattern.id}?organizationId=${otherOrg.id}`,
+        headers: {
+          Authorization: `Bearer ${authToken}`,
+        },
+      });
+
+      expect(response.statusCode).toBe(403);
+    });
   });
 
   describe('Regex Validation Edge Cases', () => {
diff --git a/packages/backend/src/tests/modules/retention/service.test.ts b/packages/backend/src/tests/modules/retention/service.test.ts
index b79cff17..e2fc3f0f 100644
--- a/packages/backend/src/tests/modules/retention/service.test.ts
+++ b/packages/backend/src/tests/modules/retention/service.test.ts
@@ -296,6 +296,16 @@ describe('RetentionService', () => {
 
       expect(summary.totalExecutionTimeMs).toBeGreaterThanOrEqual(0);
     });
+
+    it('should return early when no organizations exist', async () => {
+      const summary = await service.executeRetentionForAllOrganizations();
+
+      expect(summary.totalOrganizations).toBe(0);
+      expect(summary.successfulOrganizations).toBe(0);
+      expect(summary.failedOrganizations).toBe(0);
+      expect(summary.totalLogsDeleted).toBe(0);
+      expect(summary.results).toEqual([]);
+    });
   });
 
   describe('getOrganizationRetentionStatus - edge cases', () => {
diff --git a/packages/backend/src/tests/modules/users/users-service.test.ts b/packages/backend/src/tests/modules/users/users-service.test.ts
index 10eece80..3da578c6 100644
--- a/packages/backend/src/tests/modules/users/users-service.test.ts
+++ b/packages/backend/src/tests/modules/users/users-service.test.ts
@@ -191,6 +191,27 @@ describe('UsersService', () => {
       expect(updatedUser?.lastLogin).not.toBeNull();
     });
 
+    it('should reject login for disabled user', async () => {
+      await usersService.createUser({
+        email: 'disabled@example.com',
+        password: 'password123',
+        name: 'Disabled User',
+      });
+
+      await db
+        .updateTable('users')
+        .set({ disabled: true })
+        .where('email', '=', 'disabled@example.com')
+        .execute();
+
+      await expect(
+        usersService.login({
+          email: 'disabled@example.com',
+          password: 'password123',
+        })
+      ).rejects.toThrow('This account has been disabled');
+    });
+
     it('should allow multiple concurrent sessions', async () => {
       await usersService.createUser({
         email: 'multi@example.com',
diff --git a/packages/backend/src/tests/queue/jobs/alert-notification.test.ts b/packages/backend/src/tests/queue/jobs/alert-notification.test.ts
index 7fd5a306..da6f1b59 100644
--- a/packages/backend/src/tests/queue/jobs/alert-notification.test.ts
+++ b/packages/backend/src/tests/queue/jobs/alert-notification.test.ts
@@ -478,4 +478,165 @@ describe('Alert Notification Job', () => {
       );
     });
   });
+
+  describe('SSRF protection and null historyId', () => {
+    it('should block webhook to private IP addresses', async () => {
+      const { organization, project } = await createTestContext();
+      const { alertsService } = await import('../../../modules/alerts/index.js');
+
+      // Mock notification channel with webhook pointing to loopback
+      mockGetAlertRuleChannels.mockResolvedValueOnce([{
+        id: '00000000-0000-0000-0000-000000000010',
+        type: 'webhook',
+        enabled: true,
+        config: { url: 'http://127.0.0.1/webhook' },
+      }]);
+
+      const jobData: AlertNotificationData = {
+        historyId: '00000000-0000-0000-0000-000000000001',
+        rule_id: '00000000-0000-0000-0000-000000000002',
+        rule_name: 'SSRF Loopback Test',
+        organization_id: organization.id,
+        project_id: project.id,
+        log_count: 100,
+        threshold: 50,
+        time_window: 5,
+        email_recipients: [],
+        webhook_url: undefined,
+      };
+
+      await processAlertNotification({ data: jobData });
+
+      // fetch should NOT have been called since the URL is blocked
+      expect(mockFetch).not.toHaveBeenCalled();
+      // markAsNotified should be called with an error about private addresses
+      expect(alertsService.markAsNotified).toHaveBeenCalledWith(
+        jobData.historyId,
+        expect.stringContaining('private/internal')
+      );
+    });
+
+    it('should block webhook to link-local addresses', async () => {
+      const { organization, project } = await createTestContext();
+      const { alertsService } = await import('../../../modules/alerts/index.js');
+
+      // Mock notification channel with webhook pointing to cloud metadata endpoint
+      mockGetAlertRuleChannels.mockResolvedValueOnce([{
+        id: '00000000-0000-0000-0000-000000000010',
+        type: 'webhook',
+        enabled: true,
+        config: { url: 'http://169.254.169.254/latest/meta-data/' },
+      }]);
+
+      const jobData: AlertNotificationData = {
+        historyId: '00000000-0000-0000-0000-000000000001',
+        rule_id: '00000000-0000-0000-0000-000000000002',
+        rule_name: 'SSRF Link-Local Test',
+        organization_id: organization.id,
+        project_id: project.id,
+        log_count: 100,
+        threshold: 50,
+        time_window: 5,
+        email_recipients: [],
+        webhook_url: undefined,
+      };
+
+      await processAlertNotification({ data: jobData });
+
+      // fetch should NOT have been called since the URL is blocked
+      expect(mockFetch).not.toHaveBeenCalled();
+      // markAsNotified should be called with an error about private addresses
+      expect(alertsService.markAsNotified).toHaveBeenCalledWith(
+        jobData.historyId,
+        expect.stringContaining('private/internal')
+      );
+    });
+
+    it('should skip markAsNotified when historyId is null', async () => {
+      const { organization, project } = await createTestContext();
+      const { alertsService } = await import('../../../modules/alerts/index.js');
+
+      const jobData: AlertNotificationData = {
+        historyId: null as any,
+        rule_id: '00000000-0000-0000-0000-000000000002',
+        rule_name: 'Null HistoryId Alert',
+        organization_id: organization.id,
+        project_id: project.id,
+        log_count: 100,
+        threshold: 50,
+        time_window: 5,
+        email_recipients: [],
+        webhook_url: undefined,
+      };
+
+      await processAlertNotification({ data: jobData });
+
+      expect(alertsService.markAsNotified).not.toHaveBeenCalled();
+    });
+
+    it('should still call markAsNotified when historyId is present', async () => {
+      const { organization, project } = await createTestContext();
+      const { alertsService } = await import('../../../modules/alerts/index.js');
+
+      const jobData: AlertNotificationData = {
+        historyId: '00000000-0000-0000-0000-000000000001',
+        rule_id: '00000000-0000-0000-0000-000000000002',
+        rule_name: 'Valid HistoryId Alert',
+        organization_id: organization.id,
+        project_id: project.id,
+        log_count: 100,
+        threshold: 50,
+        time_window: 5,
+        email_recipients: [],
+        webhook_url: undefined,
+      };
+
+      await processAlertNotification({ data: jobData });
+
+      expect(alertsService.markAsNotified).toHaveBeenCalledWith(
+        jobData.historyId
+      );
+    });
+
+    it('should include HTTP status code in webhook error', async () => {
+      const { organization, project } = await createTestContext();
+      const { alertsService } = await import('../../../modules/alerts/index.js');
+
+      mockFetch.mockResolvedValueOnce({
+        ok: false,
+        status: 503,
+        statusText: '',
+        text: () => Promise.resolve('Service Unavailable'),
+      });
+
+      // Mock notification channel with webhook
+      mockGetAlertRuleChannels.mockResolvedValueOnce([{
+        id: '00000000-0000-0000-0000-000000000010',
+        type: 'webhook',
+        enabled: true,
+        config: { url: 'https://hooks.example.com/failing-503' },
+      }]);
+
+      const jobData: AlertNotificationData = {
+        historyId: '00000000-0000-0000-0000-000000000001',
+        rule_id: '00000000-0000-0000-0000-000000000002',
+        rule_name: 'HTTP Status Code Test',
+        organization_id: organization.id,
+        project_id: project.id,
+        log_count: 100,
+        threshold: 50,
+        time_window: 5,
+        email_recipients: [],
+        webhook_url: undefined,
+      };
+
+      await processAlertNotification({ data: jobData });
+
+      // markAsNotified should be called with an error containing the 503 status
+      expect(alertsService.markAsNotified).toHaveBeenCalledWith(
+        jobData.historyId,
+        expect.stringContaining('503')
+      );
+    });
+  });
 });
diff --git a/packages/backend/src/utils/internal-logger.ts b/packages/backend/src/utils/internal-logger.ts
index e446bd8d..54e9526f 100644
--- a/packages/backend/src/utils/internal-logger.ts
+++ b/packages/backend/src/utils/internal-logger.ts
@@ -58,7 +58,7 @@ export async function initializeInternalLogging(): Promise {
     dsn,
     service: process.env.SERVICE_NAME || 'logtide-backend',
     environment: process.env.NODE_ENV || 'development',
-    release: process.env.npm_package_version || '0.8.4',
+    release: process.env.npm_package_version || '0.8.5',
     batchSize: 5, // Smaller batch for internal logs to see them faster
     flushInterval: 5000,
     maxBufferSize: 1000,
diff --git a/packages/frontend/package.json b/packages/frontend/package.json
index 105b8a2b..c80cfdce 100644
--- a/packages/frontend/package.json
+++ b/packages/frontend/package.json
@@ -1,6 +1,6 @@
 {
   "name": "@logtide/frontend",
-  "version": "0.8.4",
+  "version": "0.8.5",
   "private": true,
   "description": "LogTide Frontend Dashboard",
   "type": "module",
diff --git a/packages/frontend/src/hooks.client.ts b/packages/frontend/src/hooks.client.ts
index a2759ee7..116199dc 100644
--- a/packages/frontend/src/hooks.client.ts
+++ b/packages/frontend/src/hooks.client.ts
@@ -9,7 +9,7 @@ if (dsn) {
     dsn,
     service: 'logtide-frontend-client',
     environment: env.PUBLIC_NODE_ENV || 'production',
-    release: env.PUBLIC_APP_VERSION || '0.8.4',
+    release: env.PUBLIC_APP_VERSION || '0.8.5',
     debug: env.PUBLIC_NODE_ENV === 'development',
     browser: {
       // Core Web Vitals (LCP, INP, CLS, TTFB)
diff --git a/packages/frontend/src/hooks.server.ts b/packages/frontend/src/hooks.server.ts
index 3d43740c..28cdad4a 100644
--- a/packages/frontend/src/hooks.server.ts
+++ b/packages/frontend/src/hooks.server.ts
@@ -82,7 +82,7 @@ export const handle =
     dsn,
     service: 'logtide-frontend',
     environment: privateEnv?.NODE_ENV || 'production',
-    release: process.env.npm_package_version || '0.8.4',
+    release: process.env.npm_package_version || '0.8.5',
   }) as unknown as Handle,
   requestLogHandle,
   configHandle
 )
diff --git a/packages/frontend/src/lib/api/exceptions.ts b/packages/frontend/src/lib/api/exceptions.ts
index ace40f0e..9f605d15 100644
--- a/packages/frontend/src/lib/api/exceptions.ts
+++ b/packages/frontend/src/lib/api/exceptions.ts
@@ -105,7 +105,7 @@ export async function getErrorGroups(
   if (filters.limit) {
     searchParams.append('limit', filters.limit.toString());
   }
-  if (filters.offset) {
+  if (filters.offset != null) {
     searchParams.append('offset', filters.offset.toString());
   }
diff --git a/packages/frontend/src/lib/api/logs.ts b/packages/frontend/src/lib/api/logs.ts
index 0bf6bbe1..d9ae86bf 100644
--- a/packages/frontend/src/lib/api/logs.ts
+++ b/packages/frontend/src/lib/api/logs.ts
@@ -122,7 +122,7 @@ export class LogsAPI {
     if (filters.q) params.append('q', filters.q);
     if (filters.searchMode) params.append('searchMode', filters.searchMode);
     if (filters.limit) params.append('limit', filters.limit.toString());
-    if (filters.offset) params.append('offset', filters.offset.toString());
+    if (filters.offset != null) params.append('offset', filters.offset.toString());
     if (filters.cursor) params.append('cursor', filters.cursor);
 
     const url = `${getApiBaseUrl()}/logs?${params.toString()}`;
diff --git a/packages/frontend/src/lib/api/siem.ts b/packages/frontend/src/lib/api/siem.ts
index 8dc86ce5..65c6c0ef 100644
--- a/packages/frontend/src/lib/api/siem.ts
+++ b/packages/frontend/src/lib/api/siem.ts
@@ -146,7 +146,7 @@ export async function listIncidents(filters: IncidentFilters): Promise<{ inciden
     searchParams.append('limit', filters.limit.toString());
   }
 
-  if (filters.offset) {
+  if (filters.offset != null) {
     searchParams.append('offset', filters.offset.toString());
   }
diff --git a/packages/frontend/src/lib/api/traces.ts b/packages/frontend/src/lib/api/traces.ts
index f4c9d72d..cf340f5a 100644
--- a/packages/frontend/src/lib/api/traces.ts
+++ b/packages/frontend/src/lib/api/traces.ts
@@ -122,7 +122,7 @@ export class TracesAPI {
     if (filters.from) params.append('from', filters.from);
     if (filters.to) params.append('to', filters.to);
     if (filters.limit) params.append('limit', filters.limit.toString());
-    if (filters.offset) params.append('offset', filters.offset.toString());
+    if (filters.offset != null) params.append('offset', filters.offset.toString());
 
     const url = `${getApiBaseUrl()}/traces?${params.toString()}`;
diff --git a/packages/frontend/src/lib/components/Footer.svelte b/packages/frontend/src/lib/components/Footer.svelte
index bf2d1a2c..5b6d9607 100644
--- a/packages/frontend/src/lib/components/Footer.svelte
+++ b/packages/frontend/src/lib/components/Footer.svelte
@@ -1,7 +1,7 @@
diff --git a/packages/frontend/src/lib/components/OrganizationSwitcher.svelte b/packages/frontend/src/lib/components/OrganizationSwitcher.svelte
index 635199a4..d83ce311 100644
--- a/packages/frontend/src/lib/components/OrganizationSwitcher.svelte
+++ b/packages/frontend/src/lib/components/OrganizationSwitcher.svelte
@@ -1,4 +1,5 @@