Optimize Docker image size with multi-stage build and enable PM2 cluster mode #239
Conversation
…ange the production runtime command
Walkthrough
This PR restructures the Docker build into a multi-stage pipeline with distinct build and runtime stages. The build stage introduces build-time arguments (NODE_ENV, NPM_INSTALL_FLAGS) and runs npm ci with the specified flags, while the runtime stage creates a non-root user, installs wget, and copies the built application from the build stage. Additionally, the start script in package.json is updated to use pm2-runtime with horizontal scaling instead of direct Node invocation, and pm2-runtime is added as a runtime dependency.
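For context, the new build-time arguments are supplied when the image is built. A minimal sketch of a build invocation (the image tag website and the --no-audit flag value are illustrative assumptions, not taken from this PR):

# Build with the two ARGs introduced by this PR; both have defaults in the Dockerfile
docker build \
  --build-arg NODE_ENV=production \
  --build-arg NPM_INSTALL_FLAGS="--no-audit" \
  -t website .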
🚥 Pre-merge checks: ✅ 3 passed
🔍 Vulnerabilities of the built image

| digest | sha256:375137bd5fcb8d8ebf3f5e001f1f5ae1f87c4b650dc92a50e4f949be43437f24 |
| platform | linux/amd64 |
| size | 175 MB |
| packages | 986 |

📦 Base Image node:23-alpine

| digest | sha256:b9d38d589853406ff0d4364f21969840c3e0397087643aef8eede40edbb6c7cd |
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@Dockerfile`:
- Around line 3-4: The Dockerfile uses ARG NODE_ENV=production which is
build-time only; update the runtime stage to export NODE_ENV so it exists in
containers (e.g., in the final stage set ENV NODE_ENV=${NODE_ENV} or explicitly
ENV NODE_ENV=production) and mirror this for any other ARGs noted (lines 18-22);
locate the ARG NODE_ENV declaration and the final/runtime stage in the
Dockerfile and add the ENV assignment there so the runtime process sees
NODE_ENV=production.
- Around line 12-13: The runtime image currently copies the whole /app including
node_modules built with devDependencies (the RUN npm ci $NPM_INSTALL_FLAGS
step), so prune devDependencies before the final COPY to shrink the image and
reduce attack surface: after installing in the build stage (where RUN npm ci
$NPM_INSTALL_FLAGS runs), run a production-only prune (e.g., npm prune
--production or install with production-only flags / NPM_CONFIG_PRODUCTION) or
remove dev deps from /app/node_modules, then copy only the pruned
/app/node_modules into the runtime stage so the final image does not contain
devDependencies.
- Line 1: Update the Docker base image to a supported LTS by replacing the FROM
reference "FROM node:23-alpine" with a current LTS tag (e.g., "FROM
node:24-alpine" or "FROM node:22-alpine") so the container uses a maintained
Node release; ensure any related build/test scripts or CI that reference node:23
are updated accordingly.
- Around line 26-27: The Dockerfile pins wget to an invalid Alpine revision (RUN
apk add --no-cache wget=1.25.0-r1) which fails on node:23-alpine (Alpine 3.21);
fix by either changing the base image to a fixed Alpine release tag (e.g.,
node:23-alpine3.21) or removing the package revision pin (use RUN apk add
--no-cache wget or pin to wget=1.25.0-r0), and update the RUN line accordingly
so it matches the available package in the chosen base image.
In `@website/package.json`:
- Line 7: The start script currently uses "pm2-runtime app.js -i max" which can
oversubscribe CPUs in containers; change the "start" script (package.json
"start") to read an environment variable like PM2_INSTANCES and pass that to
pm2-runtime instead of "-i max", and ensure the container entrypoint or a small
startup helper (e.g., scripts/generate_constants.js or a new
scripts/detect_pm2_instances.sh invoked before pm2-runtime) computes a safe
default from cgroup CPU limits and sets PM2_INSTANCES when it is not provided.
@@ -1,38 +1,39 @@
-FROM node:23-alpine
+FROM node:23-alpine AS build
🧩 Analysis chain
🏁 Script executed:
# First, find and read the Dockerfile
find . -name "Dockerfile" -type f
🌐 Web query:
Node.js LTS versions 2025 2026 current supported versions
💡 Result:
Currently supported Node.js major versions (as of Jan 29, 2026)
- Node.js 25.x — Current (not LTS); supported until Jun 1, 2026 (EOL). [2]
- Node.js 24.x (Krypton) — Active LTS; EOL Apr 30, 2028. [1][2]
- Node.js 22.x (Jod) — Maintenance LTS; EOL Apr 30, 2027. [1][2]
- Node.js 20.x (Iron) — Maintenance LTS; EOL Apr 30, 2026. [1][2]
LTS status across 2025–2026 (key transitions)
- Node.js 24.x: released May 6, 2025, entered Active LTS on Oct 28, 2025, enters Maintenance on Oct 20, 2026. [2]
- Node.js 22.x: entered Maintenance LTS on Oct 21, 2025 (so it was Active LTS before that). [2]
- Node.js 20.x: in Maintenance LTS throughout 2025 and into early 2026; EOL Apr 30, 2026. [2]
- Node.js 18.x: reached EOL Apr 30, 2025 (not supported after that). [1]
Sources: Node.js official releases list [1] and the Node.js Release WG schedule (authoritative lifecycle dates) [2].
🏁 Script executed:
# Read the main Dockerfile to verify the content at lines 1 and 18
cat -n ./Dockerfile | head -30
Switch from Node 23 to a supported LTS version.
Node 23 is out of security support. Use Node 24 (Active LTS until Apr 2028) or Node 22 (Maintenance LTS until Apr 2027). Avoid Node 20 as it EOLs Apr 30, 2026.
🔧 Suggested change
-FROM node:23-alpine AS build
+FROM node:24-alpine AS build
...
-FROM node:23-alpine
+FROM node:24-alpine

Also applies to: 18-18
🧰 Tools
🪛 Checkov (3.2.334)
[low] 1-39: Ensure that HEALTHCHECK instructions have been added to container images
(CKV_DOCKER_2)
🤖 Prompt for AI Agents
In `@Dockerfile` at line 1, Update the Docker base image to a supported LTS by
replacing the FROM reference "FROM node:23-alpine" with a current LTS tag (e.g.,
"FROM node:24-alpine" or "FROM node:22-alpine") so the container uses a
maintained Node release; ensure any related build/test scripts or CI that
reference node:23 are updated accordingly.
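Before bumping the tag, a quick scan helps confirm nothing else still references node:23 (a sketch; the .github/ path is an assumption about where CI workflows live):

# List every remaining node:23 reference so the LTS bump also covers CI
grep -rn "node:23" Dockerfile .github/ 2>/dev/null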
ARG NODE_ENV=production
ARG NPM_INSTALL_FLAGS=
NODE_ENV build ARG doesn’t carry into runtime.
ARG is build‑time only, so the runtime container will default to development unless the env is set elsewhere. Ensure NODE_ENV=production is set in the runtime stage.
🔧 Suggested change
 FROM node:23-alpine
+ARG NODE_ENV=production
+ENV NODE_ENV=$NODE_ENV

Also applies to: 18-22
🤖 Prompt for AI Agents
In `@Dockerfile` around lines 3 - 4, The Dockerfile uses ARG NODE_ENV=production
which is build-time only; update the runtime stage to export NODE_ENV so it
exists in containers (e.g., in the final stage set ENV NODE_ENV=${NODE_ENV} or
explicitly ENV NODE_ENV=production) and mirror this for any other ARGs noted
(lines 18-22); locate the ARG NODE_ENV declaration and the final/runtime stage
in the Dockerfile and add the ENV assignment there so the runtime process sees
NODE_ENV=production.
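Once the ENV assignment lands, the fix is easy to sanity-check against the built image (a sketch; website is a hypothetical image tag):

# Should print "production" when the runtime stage exports NODE_ENV
docker run --rm website node -e "console.log(process.env.NODE_ENV)"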
# Install dependencies based on environment and flags
RUN npm ci $NPM_INSTALL_FLAGS
🧩 Analysis chain
🏁 Script executed:
fd -t f Dockerfile | head -20
🏁 Script executed:
cat -n Dockerfile
🏁 Script executed:
cat -n website/package.json | head -30
🏁 Script executed:
cat -n website/package.json | tail -50
Remove devDependencies from runtime image.
Multi-stage build copies entire /app including node_modules with devDependencies from the build stage. Prune dev dependencies before copying to runtime to reduce image size and attack surface.
🔧 Suggested change
-RUN npm ci $NPM_INSTALL_FLAGS
+RUN npm ci $NPM_INSTALL_FLAGS \
+ && npm prune --omit=dev

Also applies to: 30-30
🤖 Prompt for AI Agents
In `@Dockerfile` around lines 12 - 13, The runtime image currently copies the
whole /app including node_modules built with devDependencies (the RUN npm ci
$NPM_INSTALL_FLAGS step), so prune devDependencies before the final COPY to
shrink the image and reduce attack surface: after installing in the build stage
(where RUN npm ci $NPM_INSTALL_FLAGS runs), run a production-only prune (e.g.,
npm prune --production or install with production-only flags /
NPM_CONFIG_PRODUCTION) or remove dev deps from /app/node_modules, then copy only
the pruned /app/node_modules into the runtime stage so the final image does not
contain devDependencies.
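After pruning, a spot-check can confirm dev-only packages are gone from the final image (a sketch; website is a hypothetical image tag and eslint a stand-in for any devDependency in this repo):

# Prints "pruned" when the dev dependency directory is absent
docker run --rm website sh -c "test -d node_modules/eslint && echo 'dev deps present' || echo 'pruned'"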
# Install dependencies needed for health checks with pinned version
RUN apk add --no-cache wget=1.25.0-r1
🧩 Analysis chain
🏁 Script executed:
cd /dev/null && find . -name "Dockerfile" -type f 2>/dev/null | head -20
🏁 Script executed:
cat -n Dockerfile | head -40
🏁 Script executed:
# Check the Alpine base image being used
grep -n "FROM" Dockerfile | head -5Repository: speedandfunction/website
Length of output: 121
🏁 Script executed:
# Check Alpine package versioning practices and wget availability
# Let's also verify if this pattern is indeed problematic by researching Alpine versioning
web_search
Alpine Linux package version pinning node:alpine floating tag
🏁 Script executed:
# Check if there are any other similar patterns in the Dockerfile
rg "apk add.*=" DockerfileRepository: speedandfunction/website
Length of output: 105
🏁 Script executed:
# Verify the node:23-alpine tag specifics
web_search
node:23-alpine base image tag floating Alpine version
🌐 Web query:
Alpine Linux package version pinning floating base image compatibility
💡 Result:
- Pin the Alpine base image first (don’t float). If you use a floating tag like `alpine:latest` (or `latest-stable`-style tags), the underlying Alpine release (and thus its repositories) can change over time, which breaks reproducibility and can make previously-available package versions disappear from the solver’s view. Prefer a fixed release tag (e.g., `alpine:3.20`) or an image digest for fully repeatable builds. [6]
- Package-version pinning in Alpine is “release-branch scoped”. `apk` resolves packages from the repositories listed in `/etc/apk/repositories`, and each stable Alpine release branch has its own repositories. So a package version you pinned must exist in that same branch’s repo, or installs will fail. [5]
- How to pin package versions (apk constraints):
  - In a Dockerfile/build step: `apk add pkg=1.2.3-r0` (exact pin) or use range/fuzzy constraints like `pkg<…`, `pkg>…`, `pkg~…` as supported by apk’s dependency notation. [7]
  - Persistently: pins live in the world file (`/etc/apk/world`) with the format `name{@tag}{[<>~=]version}`. [0][2]
- Avoid “edge” for deterministic pinning, and don’t mix edge + stable repos.
- Alpine explicitly warns that edge is not appropriate when you need deterministic, repeatable installs via pinning (packages can change/rebuild/revoke). [5][7]
- Alpine also warns: do not enable stable and edge repositories at the same time; mixing them can break the system. [3]
Practical compatibility rule: Fixed base image release (or digest) + /etc/apk/repositories pointing to that same release branch + package pins that exist in that branch ⇒ repeatable. Floating base image tags make that compatibility fragile because the repo branch you’re implicitly targeting can shift underneath your pins. [5][6][7]
🌐 Web query:
node:23-alpine Alpine version what version does it use
💡 Result:
node:23-alpine is built on Alpine Linux 3.21 (the image layer shown is alpine-minirootfs-3.21.3-..., i.e., Alpine 3.21.3). [1]
If you want it explicitly pinned, use node:23-alpine3.21. [2]
Sources: [1], [2]
🌐 Web query:
Alpine apk package versions availability across releases wget
💡 Result:
You can check what versions of an Alpine package exist in each Alpine release branch by querying each branch’s repository index (APKINDEX.tar.gz) (or via the packages web UI).
Option A: Query versions across releases using wget + APKINDEX.tar.gz
Alpine repos publish an APKINDEX.tar.gz per repo/arch; inside is a text APKINDEX with per-package records including P: (name) and V: (version). (wiki.alpinelinux.org)
Example (x86_64, main, package openssl), comparing multiple releases:
pkg=openssl
arch=x86_64
repo=main
for rel in v3.18 v3.19 v3.20 v3.21 edge; do
  url="https://dl-cdn.alpinelinux.org/alpine/$rel/$repo/$arch/APKINDEX.tar.gz"
  ver=$(
    wget -qO- "$url" \
      | tar -xzO APKINDEX \
      | awk -v p="$pkg" '
          $0=="P:"p {hit=1}
          hit && /^V:/ {print substr($0,3); exit}
          /^$/ {hit=0}
        '
  )
  printf "%-6s %s\n" "$rel" "${ver:-NOT_FOUND}"
done

To list all versions present in that branch index (if multiple entries exist), remove the exit and print every matching V:.
Repo URL pattern and the fact that Alpine repos are indexed by APKINDEX.tar.gz is standard for apk repositories. (wiki.alpinelinux.org)
Option B: Download the actual .apk with wget
Once you know the exact version (e.g., openssl-3.1.5-r0), the file is typically:
wget "https://dl-cdn.alpinelinux.org/alpine/v3.19/main/x86_64/openssl-3.1.5-r0.apk"(You can also confirm filenames by browsing the repo directory listing for that release/repo/arch.) (dl-cdn.alpinelinux.org)
Option C (often easiest): Use the Alpine packages web UI
Use https://pkgs.alpinelinux.org/packages and switch the branch (e.g., v3.19, v3.20, edge) to see which versions are available. (superuser.com)
Option D: Use apk to fetch without installing
apk fetch downloads packages (and can be combined with selecting specific repos via your repositories configuration). (wiki.alpinelinux.org)
If you tell me package name, arch (e.g., x86_64/aarch64), and which branches you care about (e.g., v3.19–v3.21 + edge), I can tailor the exact one-liner/script.
Citations:
- 1: https://wiki.alpinelinux.org/wiki/Apk_spec?utm_source=openai
- 2: https://wiki.alpinelinux.org/wiki/Apk?utm_source=openai
- 3: https://dl-cdn.alpinelinux.org/alpine/v3.19/main/x86_64/?utm_source=openai
- 4: https://superuser.com/questions/1055060/how-to-install-a-specific-package-version-in-alpine?utm_source=openai
- 5: https://wiki.alpinelinux.org/wiki/Apk?utm_source=openai
🏁 Script executed:
# Verify if wget=1.25.0-r1 is available in Alpine 3.21 (the base of node:23-alpine)
# Query the Alpine 3.21 APKINDEX for wget versions
python3 << 'EOF'
import urllib.request
import tarfile
import io

# Alpine 3.21 repository
url = "https://dl-cdn.alpinelinux.org/alpine/v3.21/main/x86_64/APKINDEX.tar.gz"

try:
    response = urllib.request.urlopen(url)
    tar_data = tarfile.open(fileobj=io.BytesIO(response.read()), mode='r:gz')
    # Extract and read APKINDEX
    apkindex = tar_data.extractfile('APKINDEX').read().decode('utf-8')

    # Parse APKINDEX to find wget versions
    lines = apkindex.split('\n')
    current_pkg = None
    in_wget = False
    wget_versions = []
    for line in lines:
        if line.startswith('P:'):
            current_pkg = line[2:]
            in_wget = (current_pkg == 'wget')
        elif line.startswith('V:') and in_wget:
            wget_versions.append(line[2:])

    print("wget versions in Alpine 3.21:")
    for v in wget_versions:
        print(f"  - {v}")
        if v == "1.25.0-r1":
            print("    ^ MATCH: wget=1.25.0-r1 is available in Alpine 3.21")
except Exception as e:
    print(f"Error: {e}")
EOF
🏁 Script executed:
# Also check what happens if node:23 advances to a future Alpine version
# Check if wget=1.25.0-r1 exists in other Alpine versions (3.20, 3.19, edge)
python3 << 'EOF'
import urllib.request
import tarfile
import io

versions = ['v3.20', 'v3.19', 'edge']
for rel in versions:
    url = f"https://dl-cdn.alpinelinux.org/alpine/{rel}/main/x86_64/APKINDEX.tar.gz"
    try:
        response = urllib.request.urlopen(url, timeout=5)
        tar_data = tarfile.open(fileobj=io.BytesIO(response.read()), mode='r:gz')
        apkindex = tar_data.extractfile('APKINDEX').read().decode('utf-8')

        # Find wget versions
        lines = apkindex.split('\n')
        current_pkg = None
        in_wget = False
        wget_versions = []
        for line in lines:
            if line.startswith('P:'):
                current_pkg = line[2:]
                in_wget = (current_pkg == 'wget')
            elif line.startswith('V:') and in_wget:
                wget_versions.append(line[2:])

        has_target = "1.25.0-r1" in wget_versions
        status = "✓ HAS 1.25.0-r1" if has_target else "✗ MISSING 1.25.0-r1"
        print(f"{rel:10} {status} (available: {', '.join(wget_versions[:3])}...)")
    except Exception as e:
        print(f"{rel:10} Error: {e}")
EOF
🏁 Script executed:
# Try with curl and insecure flag to bypass SSL issues
python3 << 'EOF'
import subprocess
import tarfile
import io

# Attempt to fetch without SSL verification using curl
url = "https://dl-cdn.alpinelinux.org/alpine/v3.21/main/x86_64/APKINDEX.tar.gz"

try:
    # Use curl with insecure flag
    result = subprocess.run(
        ['curl', '-s', '-k', url],
        capture_output=True,
        timeout=10
    )
    if result.returncode == 0 and result.stdout:
        tar_data = tarfile.open(fileobj=io.BytesIO(result.stdout), mode='r:gz')
        apkindex = tar_data.extractfile('APKINDEX').read().decode('utf-8')

        # Parse for wget
        lines = apkindex.split('\n')
        in_wget = False
        wget_versions = []
        for line in lines:
            if line.startswith('P:wget'):
                in_wget = True
            elif line.startswith('V:') and in_wget:
                wget_versions.append(line[2:])
                in_wget = False

        if wget_versions:
            print(f"wget versions in Alpine 3.21: {wget_versions}")
        else:
            print("Could not parse wget version")
    else:
        print(f"curl failed: {result.stderr.decode() if result.stderr else 'no output'}")
except Exception as e:
    print(f"Error: {e}")
EOF
🏁 Script executed:
# Check if there's any documentation or comments in the repo about this
rg -i "alpine\|wget\|version.*pin" README.md docs/ 2>/dev/null | head -20Repository: speedandfunction/website
Length of output: 50
🏁 Script executed:
# This is critical - wget=1.25.0-r1 doesn't exist in Alpine 3.21, only 1.25.0-r0
# Let me verify this is indeed an issue by checking if the build would fail
# Check other Alpine versions to see if r1 exists anywhere
python3 << 'EOF'
import subprocess
import tarfile
import io

versions = ['v3.21', 'v3.20', 'v3.19', 'edge']
print("Checking wget availability across Alpine releases:\n")

for rel in versions:
    url = f"https://dl-cdn.alpinelinux.org/alpine/{rel}/main/x86_64/APKINDEX.tar.gz"
    try:
        result = subprocess.run(
            ['curl', '-s', '-k', url],
            capture_output=True,
            timeout=10
        )
        if result.returncode == 0 and result.stdout:
            tar_data = tarfile.open(fileobj=io.BytesIO(result.stdout), mode='r:gz')
            apkindex = tar_data.extractfile('APKINDEX').read().decode('utf-8')

            # Parse for wget versions
            lines = apkindex.split('\n')
            in_wget = False
            wget_versions = []
            for line in lines:
                if line.startswith('P:wget'):
                    in_wget = True
                elif line.startswith('V:') and in_wget:
                    wget_versions.append(line[2:])
                    in_wget = False

            has_r1 = any('1.25.0-r1' in v for v in wget_versions)
            has_r0 = any('1.25.0-r0' in v for v in wget_versions)
            status = ""
            if has_r1:
                status = "✓ HAS 1.25.0-r1"
            elif has_r0:
                status = "○ HAS 1.25.0-r0 only (r1 missing)"
            else:
                status = "✗ Different version"
            print(f"{rel:10} {status:30} {wget_versions}")
    except Exception as e:
        print(f"{rel:10} Error: {e}")

print("\n*** wget=1.25.0-r1 does NOT exist in Alpine 3.21 (node:23-alpine base)")
EOF
🏁 Script executed:
# Verify the concrete issue: this Dockerfile would actually fail to build
# because wget=1.25.0-r1 doesn't exist in Alpine 3.21
echo "Summary of findings:"
echo "1. Base image: node:23-alpine uses Alpine 3.21.3"
echo "2. Pinned version: wget=1.25.0-r1 (from line 27)"
echo "3. Available in Alpine 3.21: wget 1.25.0-r0 ONLY"
echo "4. Result: Build would FAIL with 'wget=1.25.0-r1' not found error"
echo ""
echo "The pinned version is incorrect for the base image being used."Repository: speedandfunction/website
Length of output: 365
Pinned wget=1.25.0-r1 does not exist in Alpine 3.21 and will cause immediate build failures.
node:23-alpine uses Alpine 3.21, which only provides wget 1.25.0-r0. The pinned version 1.25.0-r1 is unavailable. Additionally, since node:23-alpine floats Alpine versions, future updates could introduce further version mismatches. Either pin the Alpine base image to a specific release (e.g., node:23-alpine3.21) or remove the package version pin.
🔧 Suggested change
-RUN apk add --no-cache wget=1.25.0-r1
+RUN apk add --no-cache wget

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-# Install dependencies needed for health checks with pinned version
-RUN apk add --no-cache wget=1.25.0-r1
+# Install dependencies needed for health checks with pinned version
+RUN apk add --no-cache wget
🤖 Prompt for AI Agents
In `@Dockerfile` around lines 26 - 27, The Dockerfile pins wget to an invalid
Alpine revision (RUN apk add --no-cache wget=1.25.0-r1) which fails on
node:23-alpine (Alpine 3.21); fix by either changing the base image to a fixed
Alpine release tag (e.g., node:23-alpine3.21) or removing the package revision
pin (use RUN apk add --no-cache wget or pin to wget=1.25.0-r0), and update the
RUN line accordingly so it matches the available package in the chosen base
image.
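To see which wget revision a candidate base image actually provides before pinning, apk can report the versions visible to the image's configured repositories (a sketch using the current base tag):

# apk policy lists available wget versions inside node:23-alpine
docker run --rm node:23-alpine sh -c "apk update -q && apk policy wget"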
| "main": "app.js", | ||
| "scripts": { | ||
| "start": "node scripts/generate_constants.js && node app @apostrophecms/asset:build && node app.js", | ||
| "start": "node scripts/generate_constants.js && node app @apostrophecms/asset:build && pm2-runtime app.js -i max", |
🧩 Analysis chain
🏁 Script executed:
# Check if website/package.json exists and read the relevant lines
if [ -f website/package.json ]; then
  echo "=== website/package.json (lines 1-70) ==="
  head -70 website/package.json | cat -n
else
  echo "website/package.json not found"
fi
🌐 Web query:
pm2-runtime -i max cgroup limits CPU oversubscription container
💡 Result:
pm2-runtime -i max means “start as many cluster instances as the system reports as CPU cores” (i.e., it auto-sizes to CPU core count) [1]. In containers, that can oversubscribe CPU relative to your cgroup limit, because many Node/PM2 setups derive “CPU count” from what the OS reports (often the host’s CPUs), not your container’s quota—e.g., os.cpus().length can still show the host core count even when the container is limited to fewer CPUs [2]. When you then run more busy workers/threads than your quota allows, Linux CFS enforces the quota by throttling once the container uses its allowed CPU time in the period (so you can see worse latency/throughput even though “more workers” were started) [3][4].
What to do in containers
- Don’t use `-i max` under CPU quotas; set a fixed instance count that matches your container CPU limit, e.g. `pm2-runtime -i 2 …` (or whatever your `limits.cpu` / `--cpus` is). This is the standard workaround when “max” mis-detects CPUs [5].
- If you want “auto” sizing, prefer a CPU-count API that’s cgroup-aware: Node now exposes `os.availableParallelism()` (intended to reflect the CPUs actually available to the process, including container constraints), whereas `os.cpus().length` can be misleading in containers [2].
- If possible, use CPU pinning (cpuset) (e.g., Docker `--cpuset-cpus` / K8s static CPU manager) so “CPU count” inside the container aligns with what the process can actually run on; quota-only limits are the common case where you get throttling with too many workers [3][4].
Rule of thumb: size PM2 instances to roughly the container CPU limit (often ≈ limits.cpu, rounded down, minimum 1). If you set limits.cpu: 2, run -i 2 rather than -i max.
Sources: PM2 -i max behavior [1], Node-in-Docker CPU mis-detection example [2], cgroup quota/throttling mechanics [3][4], explicit instance workaround [5].
🏁 Script executed:
# Search for Dockerfile, docker-compose, k8s config, or deployment hints
find . -maxdepth 3 -type f \( -name "Dockerfile*" -o -name "docker-compose*" -o -name "*.yaml" -o -name "*.yml" -o -name ".dockerignore" \) 2>/dev/null | head -20
🏁 Script executed:
# Check for any .env or configuration files that mention deployment/container
find . -maxdepth 2 -type f \( -name ".env*" -o -name "Procfile" -o -name "ecosystem.config.js" \) 2>/dev/null
🏁 Script executed:
# Check the Dockerfile to understand container configuration
head -50 Dockerfile
🏁 Script executed:
# Check deployment workflows for CPU limit configuration
grep -A 10 -B 5 "cpus\|CPU\|limits" deploy_to_aws.yml deploy_to_railway.yml 2>/dev/null | head -40
Avoid pm2-runtime -i max oversubscribing CPUs in containers.
-i max uses the host core count and ignores cgroup limits, which can spawn too many workers and cause CPU throttling in containerized deployments. Make the instance count configurable (e.g., from PM2_INSTANCES) and set it based on the container's CPU limit.
🔧 Suggested change
- "start": "node scripts/generate_constants.js && node app `@apostrophecms/asset`:build && pm2-runtime app.js -i max",
+ "start": "node scripts/generate_constants.js && node app `@apostrophecms/asset`:build && pm2-runtime app.js -i ${PM2_INSTANCES:-1}",📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "start": "node scripts/generate_constants.js && node app @apostrophecms/asset:build && pm2-runtime app.js -i max", | |
| "start": "node scripts/generate_constants.js && node app `@apostrophecms/asset`:build && pm2-runtime app.js -i ${PM2_INSTANCES:-1}", |
🤖 Prompt for AI Agents
In `@website/package.json` at line 7, The start script currently uses "pm2-runtime
app.js -i max" which can oversubscribe CPUs in containers; change the "start"
script (package.json "start") to read an environment variable like PM2_INSTANCES
and pass that to pm2-runtime instead of "-i max", and ensure the container
entrypoint or a small startup helper (e.g., scripts/generate_constants.js or a
new scripts/detect_pm2_instances.sh invoked before pm2-runtime) computes a safe
default from cgroup CPU limits and sets PM2_INSTANCES when it is not provided.
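A cgroup-aware helper could look like this (a sketch of the hypothetical scripts/detect_pm2_instances.sh named in the prompt; it derives a default from the cgroup v2 CPU quota and otherwise falls back to a single instance):

#!/bin/sh
# Compute a safe PM2 instance count from the container's CPU quota (cgroup v2).
# Respects PM2_INSTANCES if the caller already set it.
if [ -z "$PM2_INSTANCES" ]; then
  if [ -r /sys/fs/cgroup/cpu.max ]; then
    quota=$(cut -d' ' -f1 /sys/fs/cgroup/cpu.max)
    period=$(cut -d' ' -f2 /sys/fs/cgroup/cpu.max)
    if [ "$quota" != "max" ] && [ "$period" -gt 0 ]; then
      # Round down to whole CPUs, minimum 1
      PM2_INSTANCES=$((quota / period))
      [ "$PM2_INSTANCES" -lt 1 ] && PM2_INSTANCES=1
    fi
  fi
  # No quota detected: default to one instance rather than the host core count
  : "${PM2_INSTANCES:=1}"
fi
export PM2_INSTANCES

The start script would then pass -i ${PM2_INSTANCES} to pm2-runtime, matching the rule of thumb above: instances ≈ container CPU limit, rounded down, minimum 1.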
killev
left a comment
Looks good to me
Optimized the Docker image size with a multi-stage build that separates build-time and runtime dependencies, keeping build-only artifacts out of the final image. Enhanced production startup by replacing direct Node.js execution with PM2 cluster mode (pm2-runtime -i max), enabling horizontal scaling across available CPU cores for better resource utilization and application performance.