
Reduce peak memory in getFileHashes #1258

Open
hamidrezahanafi wants to merge 1 commit into chromaui:main from hamidrezahanafi:hrh.fixbufferallocation

Conversation

@hamidrezahanafi

Summary

Reduce peak memory in getFileHashes by pooling read buffers instead of one buffer per file.

Storybook builds with many static assets caused Chromatic CLI to pre-allocate a 64KiB buffer per file before hashing, so memory scaled as O(files × 64KiB) and could push CI containers over their limit. This change reuses a small buffer pool sized to min(concurrency, file count), so allocation is O(concurrency × 64KiB) while preserving the same hashing behavior.
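A minimal sketch of the pooling idea described above. `BufferPool`, `CHUNK_SIZE`, and the method names are illustrative, not the actual `getFileHashes` code:

```typescript
const CHUNK_SIZE = 64 * 1024;

class BufferPool {
  private free: Buffer[] = [];

  constructor(size: number) {
    // Allocate at most `size` buffers up front instead of one per file,
    // so peak allocation is O(size × 64KiB) regardless of file count.
    for (let i = 0; i < size; i++) {
      this.free.push(Buffer.allocUnsafe(CHUNK_SIZE));
    }
  }

  acquire(): Buffer {
    const buf = this.free.pop();
    // With the pool sized to the concurrency limit, acquire() should
    // never outrun release(); treat exhaustion as a programming error.
    if (!buf) throw new Error('buffer pool exhausted');
    return buf;
  }

  release(buf: Buffer): void {
    this.free.push(buf);
  }
}
```

With the pool sized to `min(concurrency, files.length)`, each in-flight hash job acquires a buffer before reading and releases it when done, so the same buffers are recycled across all files.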

Changes

  • Replace per-file Buffer.allocUnsafe(64 * 1024) upfront with acquire/release from a fixed pool tied to p-limit concurrency.
  • Clamp invalid or out-of-range CHROMATIC_HASH_CONCURRENCY values to a safe range, and return early when files.length === 0 so no buffers are allocated.
  • Tests: golden hash snapshot; empty list; spy asserting pool-sized allocUnsafe calls; equality of hashes for concurrency: 1 vs higher concurrency, including one file > 64KiB to exercise incremental reads under pooling.
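The clamping mentioned above could look roughly like this; `resolveConcurrency` and the default/maximum bounds are assumptions for illustration, not the PR's actual values:

```typescript
const DEFAULT_CONCURRENCY = 8; // assumed fallback, not the PR's actual default
const MAX_CONCURRENCY = 64;    // assumed upper bound

function resolveConcurrency(raw: string | undefined): number {
  const parsed = Number.parseInt(raw ?? '', 10);
  // Non-numeric or sub-1 values fall back to the default; large values
  // are capped so a misconfigured env var cannot blow up the pool size.
  if (Number.isNaN(parsed) || parsed < 1) return DEFAULT_CONCURRENCY;
  return Math.min(parsed, MAX_CONCURRENCY);
}
```

The resolved value would then size both the p-limit limiter and the buffer pool.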

Benchmark (local, 3000 small files, concurrency 8, NODE_OPTIONS=--expose-gc)

Measured with process.memoryUsage() before and after a full hash pass; the legacy variant reproduced the pre-pool "one buffer per file" strategy for comparison.

| Variant | Δ arrayBuffers | Δ external | Δ heapUsed |
| --- | --- | --- | --- |
| Legacy (buffer per file) | +183.2 MiB | +187.5 MiB | +0.41 MiB |
| Pooled (this PR) | ~0 MiB | +0.56 MiB | +0.31 MiB |

Most of the legacy cost shows up under arrayBuffers / external (Buffer backing memory), not V8 heapUsed. The pooled numbers are consistent with the 3000 × 64 KiB ≈ 187.5 MiB of read buffers no longer being held all at once.
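The measurement approach can be sketched as follows; `measureDelta` is a hypothetical harness name, not code from the PR:

```typescript
interface MemoryDelta {
  arrayBuffers: number;
  external: number;
  heapUsed: number;
}

function measureDelta(run: () => void): MemoryDelta {
  // Force a collection first so the deltas are not dominated by garbage
  // from earlier work (gc is only exposed under NODE_OPTIONS=--expose-gc).
  const gc = (globalThis as any).gc as (() => void) | undefined;
  if (gc) gc();
  const before = process.memoryUsage();
  run();
  const after = process.memoryUsage();
  return {
    arrayBuffers: after.arrayBuffers - before.arrayBuffers,
    external: after.external - before.external,
    heapUsed: after.heapUsed - before.heapUsed,
  };
}
```

Buffer backing stores are reported under `arrayBuffers` (a subset of `external`) rather than `heapUsed`, which is why the legacy cost is invisible on the V8 heap column.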

Notes

  • Correctness is covered by snapshot expectations and by comparing pooled vs sequential runs on the same files.

@hamidrezahanafi hamidrezahanafi changed the title Reduce peak memory in getFileHashes by pooling read buffers instead of one buffer per file Reduce peak memory in getFileHashes Mar 26, 2026
@codecov

codecov bot commented Mar 27, 2026

Codecov Report

❌ Patch coverage is 87.23404% with 6 lines in your changes missing coverage. Please review.
✅ Project coverage is 72.46%. Comparing base (c175ff8) to head (0e42bc5).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| node-src/lib/getFileHashes.ts | 87.23% | 6 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1258      +/-   ##
==========================================
+ Coverage   72.22%   72.46%   +0.23%     
==========================================
  Files         208      208              
  Lines        7918     7961      +43     
  Branches     1435     1450      +15     
==========================================
+ Hits         5719     5769      +50     
+ Misses       2177     2170       -7     
  Partials       22       22              

