Merged

0.8.5 #183

Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension


Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
40 changes: 40 additions & 0 deletions CHANGELOG.md
@@ -6,6 +6,46 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).


## [0.8.5] - 2026-03-28

### Security
- **Cross-org isolation fix in SIEM**: `linkDetectionEventsToIncident` now scopes detection events to the requesting organization, preventing cross-tenant data corruption via crafted API calls
- **Cross-org auth bypass in pattern routes**: PUT and DELETE handlers for correlation patterns now verify organization membership before mutating data (same check GET/POST already had)
- **SSRF protection for legacy webhook path**: the alert-notification job's direct `fetch()` call now validates URLs against private/internal IP ranges, matching the `WebhookProvider` safeguard
- **Disabled user login blocked**: `POST /login` now checks the `disabled` flag before creating a session, preventing disabled accounts from obtaining tokens
- **Expired invitation info leak**: `getInvitationByToken` now filters on `expires_at > NOW()`, preventing enumeration of expired invitation details
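As an illustration of the SSRF safeguard described above, a minimal private-range check for webhook URLs might look like the following. This is a sketch only: `isPrivateHost` and `assertSafeWebhookUrl` are hypothetical names, and the actual `WebhookProvider` guard may additionally resolve DNS names, handle IPv6, and re-validate after redirects.

```typescript
// Hypothetical sketch of a private-range URL guard -- not LogTide's actual
// implementation. Rejects literal IPv4 addresses in loopback, link-local,
// and RFC 1918 ranges; DNS names pass through unchecked here.
function isPrivateHost(hostname: string): boolean {
  if (hostname === "localhost") return true;
  const octets = hostname.split(".").map(Number);
  if (octets.length !== 4 || octets.some((o) => Number.isNaN(o) || o < 0 || o > 255)) {
    return false; // not a literal IPv4 address; DNS resolution not handled in this sketch
  }
  const [a, b] = octets;
  return (
    a === 10 ||                          // 10.0.0.0/8
    a === 127 ||                         // loopback
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    (a === 169 && b === 254)             // link-local, incl. cloud metadata endpoints
  );
}

function assertSafeWebhookUrl(raw: string): URL {
  const url = new URL(raw);
  if (url.protocol !== "http:" && url.protocol !== "https:") {
    throw new Error(`Unsupported protocol: ${url.protocol}`);
  }
  if (isPrivateHost(url.hostname)) {
    throw new Error(`Refusing to deliver webhook to private address: ${url.hostname}`);
  }
  return url;
}
```

Calling a guard like this before the job's direct `fetch()` closes the gap where a crafted webhook URL could reach internal services such as `169.254.169.254`.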

### Fixed
- **SIEM dashboard timeline crash**: `time_bucket()` call was missing `::interval` cast on the parameterized bucket width, causing a PostgreSQL type error that broke the timeline widget for all users
- **SSE real-time events broken**: SIEM store and incident detail page read auth token from `localStorage('session_token')` (wrong key), so the SSE connection never authenticated; now uses `getAuthToken()` from the shared auth utility
- **SSE log stream duplicate emission**: when multiple logs shared the same timestamp, the inclusive `from` bound caused them to be re-sent on every poll tick; stream now tracks sent log IDs to deduplicate
- **Incident severity auto-grouping wrong**: `MAX(severity)` used PostgreSQL alphabetical ordering (`medium` > `critical`), producing incorrect severity on auto-grouped incidents; now uses ordinal ranking
- **Sigma notification failures silent**: notification job payload was missing `organization_id` and `project_id`, and `markAsNotified` was called with `null` historyId; both now handled correctly
- **Incidents pagination total always zero**: `loadIncidents` in the SIEM store never wrote `response.total` to `incidentsTotal`
- **Memory leaks on navigation**: 20+ Svelte components called `authStore.subscribe()` without cleanup; all now store the unsubscribe function and call it in `onDestroy`
- **`offset=0` silently dropped**: API client functions used `if (filters.offset)` which is falsy for zero, so page-1 requests never sent the `offset` parameter; changed to `if (filters.offset != null)`
- **Search debounce timer leak**: `searchDebounceTimer` was not cleared in `onDestroy`, causing post-unmount API calls when navigating away mid-search
- **`verifyProjectAccess` double call**: when `projectId` is an array, the first element was verified twice (once before the loop, once inside it); consolidated into a single loop
- **`updateIncident` silent field skip**: `title`, `severity`, and `status` used truthy checks (`&&`) instead of `!== undefined`, inconsistent with `description` and `assigneeId`
- **Webhook error messages empty**: `response.statusText` is empty for HTTP/2; error now reads the response body for useful detail
- **Retention job crash on empty orgs**: `Math.max(...[])` returns `-Infinity`, cascading to an `Invalid Date` in the `drop_chunks` call; early return added when no organizations exist
- **`escapeHtml` DOM leak**: PDF export's `escapeHtml` created orphaned DOM nodes in the parent document; replaced with pure string replacement
- **Webhook headers validation missing**: `CreateChannelDialog` silently swallowed invalid JSON in the custom headers field; now validates on submit
- **`getIncidentDetections` no org scope**: query now accepts optional `organizationId` for defense-in-depth filtering
- **Stale shared package types**: dist contained outdated `Project` and `Incident` interfaces with phantom fields (`slug`, `statusPageVisibility`, `source`, `monitorId`); rebuilt from source
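Three of these fixes share JavaScript and SQL pitfalls worth a quick illustration: falsy checks dropping `0`, `Math.max` on an empty spread, and string comparison standing in for severity ordering. The sketches below are hypothetical (`buildQuery`, `latestTimestamp`, and `maxSeverity` are not LogTide's actual function names):

```typescript
// 1. `if (filters.offset)` is false for 0, so page-1 requests silently lose
//    the parameter; an explicit null/undefined check keeps a legitimate zero.
function buildQuery(filters: { limit?: number; offset?: number }): string {
  const params = new URLSearchParams();
  if (filters.limit != null) params.set("limit", String(filters.limit));
  if (filters.offset != null) params.set("offset", String(filters.offset));
  return params.toString();
}

// 2. `Math.max(...[])` evaluates to -Infinity, which later becomes an
//    Invalid Date; guard the empty case before spreading.
function latestTimestamp(timestamps: number[]): number | null {
  return timestamps.length === 0 ? null : Math.max(...timestamps);
}

// 3. String MAX compares alphabetically ('medium' > 'critical'); rank
//    severities by ordinal instead.
const SEVERITY_RANK: Record<string, number> = {
  low: 0,
  medium: 1,
  high: 2,
  critical: 3,
};
function maxSeverity(severities: string[]): string {
  return severities.reduce((worst, s) =>
    (SEVERITY_RANK[s] ?? -1) > (SEVERITY_RANK[worst] ?? -1) ? s : worst
  );
}
```

The same ordinal-ranking idea applies server-side: a SQL `CASE severity WHEN 'critical' THEN 3 ... END` inside the aggregate gives the correct "worst severity" where a plain `MAX(severity)` does not.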

### Changed
- **Docker config sync**: `docker-compose.build.yml` now matches `docker-compose.yml` with all environment variables (MongoDB, `TRUST_PROXY`, `FRONTEND_URL`, `INTERNAL_DSN`, `DOCKER_CONTAINER`), MongoDB service, and `fluent-bit-metrics` service
- **`NODE_ENV` for backend**: production `docker-compose.yml` now sets `NODE_ENV: production` on the backend service (worker and frontend already had it)
- **`docker/.env.example`**: added `STORAGE_ENGINE`, ClickHouse, and MongoDB configuration sections

### Dependencies
- `picomatch` 4.0.3 → 4.0.4 (fix ReDoS via extglob quantifiers + POSIX character class method injection)
- `brace-expansion` 5.0.2 → 5.0.5 (fix zero-step sequence DoS)
- `fast-xml-parser` 5.5.6 → 5.5.9 (fix entity expansion limits bypass)
- `fastify` bumped via Dependabot
- `kysely` bumped via Dependabot

## [0.8.4] - 2026-03-19

### Added
45 changes: 41 additions & 4 deletions README.md
@@ -16,14 +16,14 @@
<a href="https://codecov.io/gh/logtide-dev/logtide"><img src="https://codecov.io/gh/logtide-dev/logtide/branch/main/graph/badge.svg" alt="Coverage"></a>
<a href="https://hub.docker.com/r/logtide/backend"><img src="https://img.shields.io/docker/v/logtide/backend?label=docker&logo=docker" alt="Docker"></a>
<a href="https://artifacthub.io/packages/helm/logtide/logtide"><img src="https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/logtide" alt="Artifact Hub"></a>
<img src="https://img.shields.io/badge/version-0.8.4-blue.svg" alt="Version">
<img src="https://img.shields.io/badge/version-0.8.5-blue.svg" alt="Version">
<img src="https://img.shields.io/badge/license-AGPLv3-blue.svg" alt="License">
<img src="https://img.shields.io/badge/status-stable_alpha-success.svg" alt="Status">
</div>

<br />

> **🚀 RELEASE 0.8.4:** LogTide now supports **Multi-Engine Storage** (ClickHouse, MongoDB) and **Advanced Browser Observability**.
> **🚀 RELEASE 0.8.5:** LogTide now supports **Multi-Engine Storage** (ClickHouse, MongoDB) and **Advanced Browser Observability**.

---

@@ -46,7 +46,7 @@ Designed for teams that need **GDPR compliance**, **full data ownership**, and *
### Logs Explorer
![LogTide Logs](docs/images/logs.png)

### Performance & Metrics (New in 0.8.4)
### Performance & Metrics (New in 0.8.5)
![LogTide Metrics](docs/images/metrics.png)

### Distributed Tracing
@@ -82,12 +82,49 @@ Total control over your data. Uses pre-built images from Docker Hub.
* **Frontend:** `http://localhost:3000`
* **API:** `http://localhost:8080`

> **Note:** The default `docker compose up` starts **5 services**: PostgreSQL (TimescaleDB), Redis, backend, worker, and frontend. ClickHouse, MongoDB, and Fluent Bit are opt-in via [Docker profiles](#optional-profiles) and won't run unless explicitly enabled.

#### Lightweight Setup (3 containers)

For low-resource environments such as a Raspberry Pi or a homelab, use the simplified compose file, which removes Redis entirely:

```bash
mkdir logtide && cd logtide
curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/docker-compose.simple.yml
curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/.env.example
mv .env.example .env
docker compose -f docker-compose.simple.yml up -d
```

This runs only **PostgreSQL + backend + frontend**. The backend automatically uses PostgreSQL-based alternatives for job queues and live tail streaming. See the [Deployment docs](https://logtide.dev/docs/deployment#simplified-deployment) for details.

#### Optional Profiles

Enable additional services with `--profile`:

```bash
# Docker log collection (Fluent Bit)
docker compose --profile logging up -d

# System metrics (CPU, memory, disk, network)
docker compose --profile metrics up -d

# ClickHouse storage engine
docker compose --profile clickhouse up -d

# MongoDB storage engine
docker compose --profile mongodb up -d

# Combine profiles
docker compose --profile logging --profile metrics up -d
```

### Option B: Cloud (Fastest & Free)
We host it for you. Perfect for testing. [**Sign up at logtide.dev**](https://logtide.dev).

---

## ✨ Core Features (v0.8.4)
## ✨ Core Features (v0.8.5)

* 🚀 **Multi-Engine Reservoir:** Pluggable storage layer supporting **TimescaleDB**, **ClickHouse**, and **MongoDB**.
* 🌐 **Browser SDK Enhancements:** Automatic collection of **Web Vitals** (LCP, INP, CLS), user session tracking, and click/network breadcrumbs.
25 changes: 25 additions & 0 deletions docker/.env.example
@@ -53,6 +53,31 @@ DB_USER=logtide
# SMTP_PASS=your_smtp_password
# SMTP_FROM=alerts@yourdomain.com

# =============================================================================
# STORAGE ENGINE (default: timescale)
# =============================================================================
# Options: timescale, clickhouse, mongodb
# STORAGE_ENGINE=timescale

# =============================================================================
# CLICKHOUSE (when using --profile clickhouse)
# =============================================================================
# CLICKHOUSE_HOST=clickhouse
# CLICKHOUSE_PORT=8123
# CLICKHOUSE_DATABASE=logtide
# CLICKHOUSE_USERNAME=default
# CLICKHOUSE_PASSWORD=your_clickhouse_password

# =============================================================================
# MONGODB (when using --profile mongodb)
# =============================================================================
# MONGODB_HOST=mongodb
# MONGODB_PORT=27017
# MONGODB_DATABASE=logtide
# MONGODB_USERNAME=
# MONGODB_PASSWORD=
# MONGODB_AUTH_SOURCE=

# =============================================================================
# HORIZONTAL SCALING (advanced)
# =============================================================================
64 changes: 64 additions & 0 deletions docker/docker-compose.build.yml
@@ -87,6 +87,8 @@ services:
- "8080:8080"
environment:
NODE_ENV: production
TRUST_PROXY: ${TRUST_PROXY:-false}
FRONTEND_URL: ${FRONTEND_URL:-http://localhost:3000}
DATABASE_URL: postgresql://${DB_USER}:${DB_PASSWORD}@postgres:5432/${DB_NAME}
DATABASE_HOST: postgres
DB_USER: ${DB_USER}
@@ -101,13 +103,21 @@
SMTP_FROM: ${SMTP_FROM:-noreply@logtide.local}
INTERNAL_LOGGING_ENABLED: ${INTERNAL_LOGGING_ENABLED:-false}
INTERNAL_API_KEY: ${INTERNAL_API_KEY:-}
INTERNAL_DSN: ${INTERNAL_DSN:-}
DOCKER_CONTAINER: "true"
SERVICE_NAME: logtide-backend
STORAGE_ENGINE: ${STORAGE_ENGINE:-timescale}
CLICKHOUSE_HOST: ${CLICKHOUSE_HOST:-clickhouse}
CLICKHOUSE_PORT: ${CLICKHOUSE_PORT:-8123}
CLICKHOUSE_DATABASE: ${CLICKHOUSE_DATABASE:-logtide}
CLICKHOUSE_USERNAME: ${CLICKHOUSE_USERNAME:-default}
CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:-}
MONGODB_HOST: ${MONGODB_HOST:-mongodb}
MONGODB_PORT: ${MONGODB_PORT:-27017}
MONGODB_DATABASE: ${MONGODB_DATABASE:-logtide}
MONGODB_USERNAME: ${MONGODB_USERNAME:-}
MONGODB_PASSWORD: ${MONGODB_PASSWORD:-}
MONGODB_AUTH_SOURCE: ${MONGODB_AUTH_SOURCE:-}
depends_on:
postgres:
condition: service_healthy
@@ -133,25 +143,37 @@
disable: true
environment:
NODE_ENV: production
TRUST_PROXY: ${TRUST_PROXY:-false}
FRONTEND_URL: ${FRONTEND_URL:-http://localhost:3000}
DATABASE_URL: postgresql://${DB_USER}:${DB_PASSWORD}@postgres:5432/${DB_NAME}
DATABASE_HOST: postgres
DB_USER: ${DB_USER}
REDIS_URL: redis://:${REDIS_PASSWORD}@redis:6379
API_KEY_SECRET: ${API_KEY_SECRET}
PORT: 8080
HOST: 0.0.0.0
SMTP_HOST: ${SMTP_HOST:-}
SMTP_PORT: ${SMTP_PORT:-587}
SMTP_USER: ${SMTP_USER:-}
SMTP_PASS: ${SMTP_PASS:-}
SMTP_FROM: ${SMTP_FROM:-noreply@logtide.local}
INTERNAL_LOGGING_ENABLED: ${INTERNAL_LOGGING_ENABLED:-false}
INTERNAL_API_KEY: ${INTERNAL_API_KEY:-}
INTERNAL_DSN: ${INTERNAL_DSN:-}
DOCKER_CONTAINER: "true"
SERVICE_NAME: logtide-worker
STORAGE_ENGINE: ${STORAGE_ENGINE:-timescale}
CLICKHOUSE_HOST: ${CLICKHOUSE_HOST:-clickhouse}
CLICKHOUSE_PORT: ${CLICKHOUSE_PORT:-8123}
CLICKHOUSE_DATABASE: ${CLICKHOUSE_DATABASE:-logtide}
CLICKHOUSE_USERNAME: ${CLICKHOUSE_USERNAME:-default}
CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:-}
MONGODB_HOST: ${MONGODB_HOST:-mongodb}
MONGODB_PORT: ${MONGODB_PORT:-27017}
MONGODB_DATABASE: ${MONGODB_DATABASE:-logtide}
MONGODB_USERNAME: ${MONGODB_USERNAME:-}
MONGODB_PASSWORD: ${MONGODB_PASSWORD:-}
MONGODB_AUTH_SOURCE: ${MONGODB_AUTH_SOURCE:-}
depends_on:
backend:
condition: service_healthy
@@ -173,6 +195,8 @@
environment:
NODE_ENV: production
PUBLIC_API_URL: ${PUBLIC_API_URL:-http://localhost:8080}
LOGTIDE_DSN: ${LOGTIDE_DSN}
PUBLIC_LOGTIDE_DSN: ${PUBLIC_LOGTIDE_DSN}
depends_on:
- backend
restart: unless-stopped
@@ -203,6 +227,26 @@
networks:
- logtide-network

mongodb:
image: mongo:7.0
container_name: logtide-mongodb
profiles:
- mongodb
environment:
MONGO_INITDB_DATABASE: ${MONGODB_DATABASE:-logtide}
ports:
- "${MONGODB_PORT:-27017}:27017"
volumes:
- mongodb_data:/data/db
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
networks:
- logtide-network

fluent-bit:
image: ${FLUENT_BIT_IMAGE:-fluent/fluent-bit:4.2.2}
container_name: logtide-fluent-bit
@@ -229,13 +273,33 @@
networks:
- logtide-network

fluent-bit-metrics:
image: ${FLUENT_BIT_IMAGE:-fluent/fluent-bit:4.2.2}
container_name: logtide-fluent-bit-metrics
profiles:
- metrics
volumes:
- ./fluent-bit-metrics.conf:/fluent-bit/etc/fluent-bit.conf:ro
- ./format_metrics.lua:/fluent-bit/etc/format_metrics.lua:ro
- /proc:/host/proc:ro
environment:
LOGTIDE_API_KEY: ${FLUENT_BIT_API_KEY:-}
LOGTIDE_API_HOST: backend
depends_on:
- backend
restart: unless-stopped
networks:
- logtide-network

volumes:
postgres_data:
driver: local
redis_data:
driver: local
clickhouse_data:
driver: local
mongodb_data:
driver: local

networks:
logtide-network:
1 change: 1 addition & 0 deletions docker/docker-compose.yml
@@ -101,6 +101,7 @@ services:
ports:
- "8080:8080"
environment:
NODE_ENV: production
TRUST_PROXY: ${TRUST_PROXY:-false}
FRONTEND_URL: ${FRONTEND_URL:-http://localhost:3000}
DATABASE_URL: postgresql://${DB_USER}:${DB_PASSWORD}@postgres:5432/${DB_NAME}
6 changes: 4 additions & 2 deletions package.json
@@ -1,6 +1,6 @@
{
"name": "logtide",
"version": "0.8.4",
"version": "0.8.5",
"private": true,
"description": "LogTide - Self-hosted log management platform",
"author": "LogTide Team",
@@ -12,7 +12,9 @@
"express": ">=5.2.0",
"qs": ">=6.14.2",
"devalue": ">=5.6.4",
"fast-xml-parser": ">=5.5.6",
"picomatch": ">=4.0.4",
"brace-expansion": ">=5.0.3",
"fast-xml-parser": ">=5.5.7",
"minimatch": ">=10.2.3",
"ajv": ">=8.18.0",
"rollup": ">=4.59.0",
2 changes: 1 addition & 1 deletion packages/backend/package.json
@@ -1,6 +1,6 @@
{
"name": "@logtide/backend",
"version": "0.8.4",
"version": "0.8.5",
"private": true,
"description": "LogTide Backend API",
"type": "module",
30 changes: 19 additions & 11 deletions packages/backend/src/modules/bootstrap/service.ts
@@ -9,6 +9,7 @@
* This runs at server startup
*/

import crypto from 'node:crypto';
import bcrypt from 'bcrypt';
import { db } from '../../database/connection.js';
import { config } from '../../config/index.js';
@@ -39,11 +40,6 @@ export class BootstrapService {
// Check if env vars are configured
const { INITIAL_ADMIN_EMAIL, INITIAL_ADMIN_PASSWORD, INITIAL_ADMIN_NAME } = config;

if (!INITIAL_ADMIN_EMAIL || !INITIAL_ADMIN_PASSWORD) {
// No initial admin configured, skip
return;
}

// Check if any "real" users exist (with password set - excludes system@logtide.internal)
const userCount = await db
.selectFrom('users')
@@ -58,24 +54,36 @@
return;
}

// Create the initial admin user
console.log(`[Bootstrap] No users found. Creating initial admin: ${INITIAL_ADMIN_EMAIL}`);

const passwordHash = await bcrypt.hash(INITIAL_ADMIN_PASSWORD, SALT_ROUNDS);
// Use env vars if set, otherwise generate random credentials
const adminEmail = INITIAL_ADMIN_EMAIL || 'system@logtide.internal';
const adminPassword = INITIAL_ADMIN_PASSWORD || crypto.randomBytes(16).toString('base64url');
const adminName = INITIAL_ADMIN_NAME || 'Admin';

console.log(`[Bootstrap] No users found. Creating initial admin: ${adminEmail}`);

const passwordHash = await bcrypt.hash(adminPassword, SALT_ROUNDS);

const user = await db
.insertInto('users')
.values({
email: INITIAL_ADMIN_EMAIL,
email: adminEmail,
password_hash: passwordHash,
name: adminName,
is_admin: true,
})
.returning(['id', 'email', 'name', 'is_admin', 'disabled', 'created_at', 'last_login'])
.executeTakeFirstOrThrow();

console.log(`[Bootstrap] Initial admin created successfully: ${user.email} (ID: ${user.id})`);
console.log('');
console.log('╔══════════════════════════════════════════════════════════╗');
console.log('║ INITIAL ADMIN ACCOUNT CREATED ║');
console.log('╠══════════════════════════════════════════════════════════╣');
console.log(`║ Email: ${adminEmail.padEnd(44)}║`);
console.log(`║ Password: ${adminPassword.padEnd(44)}║`);
console.log('╠══════════════════════════════════════════════════════════╣');
console.log('║ ⚠ Change this password after first login! ║');
console.log('╚══════════════════════════════════════════════════════════╝');
console.log('');

// Set this user as default user for auth-free mode
await settingsService.set('auth.default_user_id', user.id);