
feat: tuning leveldb configs #5356

Draft

martinconic wants to merge 1 commit into master from pf/leveldb-tuning
Conversation


@martinconic martinconic commented Feb 10, 2026

Checklist

  • I have read the coding guide.
  • My change requires a documentation update, and I have done it.
  • I have added tests to cover my changes.
  • I have filled out the description and linked the related issues.

Description

This PR tunes the embedded goleveldb configuration to better handle the heavy write throughput and concurrent read patterns observed in production nodes. The default settings were causing significant write stalls and suboptimal read performance under load.

Changes (a combined options sketch follows the list below):

Increase Block Cache (32MB -> 256MB):

  • Significantly improves read performance for the metadata index (TagItems, PushItems).
  • Reduces disk I/O by keeping a much larger portion of the frequently accessed index "hot" in RAM.
  • Rationale: 32MB was too small for the millions of items in the index, causing constant cache thrashing.

Increase Write Buffer (32MB -> 128MB):

  • Allows larger write batches to accumulate in memory before flushing to disk.
  • Reduces the frequency of Level 0 file creation, which directly reduces write stalls during bursty upload traffic.

Increase Compaction Trigger (8 -> 16 files):

  • Allows more Level 0 files to accumulate before triggering a compaction.
  • Prevents the database from "locking up" writes to force a compaction during heavy load spikes.

Increase Table Size (2MB -> 8MB):

  • Reduces the total number of .ldb files the system has to manage.
  • Lowers file system overhead and helps avoid "too many open files" errors.

Optimize Bloom Filter (64 bits/key -> 10 bits/key):

  • Reduces the per-key memory overhead of the bloom filter. 10 bits per key is the common industry setting for a ~1% false-positive rate.
  • Rationale: 64 bits per key was excessive and wasted RAM that is better spent on the Block Cache.

Disable Seeks Compaction:

  • Prevents read operations from triggering background compactions.
  • Rationale: In our read-heavy metadata workload, this feature often caused unnecessary write amplification, fighting against actual data writes for I/O bandwidth.
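
For reference, a minimal sketch of how these values map onto goleveldb's `opt.Options`. The helper name and database path are illustrative only, and the actual option wiring inside the storer component may differ from this standalone example:

```go
package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/filter"
	"github.com/syndtr/goleveldb/leveldb/opt"
)

// openTunedDB is a hypothetical helper that applies the tuned values from this PR.
func openTunedDB(path string) (*leveldb.DB, error) {
	o := &opt.Options{
		// Block cache: 32MB -> 256MB, keeps more of the metadata index hot in RAM.
		BlockCacheCapacity: 256 * opt.MiB,
		// Write buffer: 32MB -> 128MB, larger batches accumulate before a flush.
		WriteBuffer: 128 * opt.MiB,
		// Compaction trigger: 8 -> 16 Level 0 files before a compaction starts.
		CompactionL0Trigger: 16,
		// Table size: 2MB -> 8MB, fewer .ldb files for the filesystem to manage.
		CompactionTableSize: 8 * opt.MiB,
		// Bloom filter: 64 -> 10 bits per key (~1% false-positive rate).
		Filter: filter.NewBloomFilter(10),
		// Reads no longer schedule background compactions.
		DisableSeeksCompaction: true,
	}
	return leveldb.OpenFile(path, o)
}

func main() {
	db, err := openTunedDB("metadata-index") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```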

Hardware Impact:

These changes increase the memory footprint of the storer component by approximately 400-500MB.
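
Rough arithmetic behind that estimate (assuming goleveldb can hold up to two memtables in memory at once, the active one plus one being flushed): the block cache grows by 224MB (256 - 32) and the write buffers by up to 192MB (2 × (128 - 32)), for roughly 416MB of additional steady-state memory, consistent with the 400-500MB range above.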
