Conversation

📝 Walkthrough

This PR introduces comprehensive infrastructure, configuration, and tooling enhancements to the project.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 326a0a0f2f
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
11 issues found across 46 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/utils/env.ts">
<violation number="1" location="src/utils/env.ts:25">
P2: This renames the required auth env var to ARCANE_MCP_TOKEN without any backward-compatible fallback. Existing setups that only define AUTH_TOKEN (as documented) will now fail at startup. Consider accepting both names or keeping AUTH_TOKEN until docs/configs are updated.</violation>
</file>
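A backward-compatible lookup would read the new variable first and fall back to the legacy name. A minimal sketch, assuming nothing about the project's actual config module (the `readToken` helper is hypothetical):

```typescript
// Hypothetical helper: prefer ARCANE_MCP_TOKEN, fall back to legacy AUTH_TOKEN.
function readToken(env: Record<string, string | undefined>): string {
  const token = env["ARCANE_MCP_TOKEN"] ?? env["AUTH_TOKEN"];
  if (!token) {
    throw new Error(
      "Missing auth token: set ARCANE_MCP_TOKEN (or legacy AUTH_TOKEN)",
    );
  }
  return token;
}
```

The new name wins when both are set, so setups that only define `AUTH_TOKEN` keep working while docs and configs migrate.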
<file name="scripts/check-no-baked-env.sh">
<violation number="1" location="scripts/check-no-baked-env.sh:146">
P2: The hardcoded-credential check in `entrypoint.sh` misses unquoted assignments, creating false negatives for leaked secrets.</violation>
</file>
<file name="tests/test_live.sh">
<violation number="1" location="tests/test_live.sh:105">
P2: The destructive gate test uses `subaction=remove`, but the tool only supports `subaction=delete`. This means the test can pass due to an invalid subaction rather than confirming the destructive confirmation gate.</violation>
</file>
<file name="scripts/check-docker-security.sh">
<violation number="1" location="scripts/check-docker-security.sh:81">
P2: The non-root check is not scoped to the runtime stage, so a USER in the builder stage can make the script pass even when the final image runs as root. Limit the USER search to the runtime (or last) stage to avoid false passes.</violation>
<violation number="2" location="scripts/check-docker-security.sh:102">
P2: The root-user check only matches `USER root`, so `USER 0` or `USER 0:0` will slip through. Extend the regex to catch numeric root so the runtime stage can’t run as root.</violation>
</file>
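A pattern that also catches numeric root and group-qualified forms could look like this — a sketch, not the script's actual code (the `is_root_user` helper is made up):

```shell
# Flag root whether written by name or numeric UID, with or without a group.
is_root_user() {
  echo "$1" | grep -qE '^(root|0)(:.*)?$'
}

for u in root 0 0:0 'root:root'; do
  is_root_user "$u" && echo "caught root form: $u"
done
is_root_user '1000:1000' || echo 'non-root passes: 1000:1000'
```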
<file name=".env.example">
<violation number="1" location=".env.example:11">
P2: `ARCANE_MCP_TOKEN` is now documented in `.env.example`, but README still instructs users to set `AUTH_TOKEN` even though the runtime expects `ARCANE_MCP_TOKEN`. This mismatch will cause setup failures for anyone following the README; update the docs (or add a fallback in config) to keep env var naming consistent.</violation>
</file>
<file name="src/utils/response.ts">
<violation number="1" location="src/utils/response.ts:10">
P2: This truncates by character count, not UTF‑8 bytes, so multi‑byte characters can still produce a response larger than MAX_RESPONSE_SIZE. Truncate the UTF‑8 byte buffer instead.</violation>
</file>
<file name=".claude-plugin/plugin.json">
<violation number="1" location=".claude-plugin/plugin.json:19">
P1: Mark the MCP bearer token as sensitive; it is a secret credential and should not be exposed in UI or logs.</violation>
</file>
<file name="skills/arcane/SKILL.md">
<violation number="1" location="skills/arcane/SKILL.md:32">
P2: The container subaction list documents unsupported options (`remove`, `logs`, `inspect`) and omits supported ones (`create`, `delete`, `stats`). Update this list to match the actual tool contract.</violation>
<violation number="2" location="skills/arcane/SKILL.md:113">
P3: Destructive-operation subactions are documented as `remove`, but the server uses `delete`/`prune` (and does not expose `stack:remove`). This list should be updated to the canonical destructive operations to avoid invalid calls.</violation>
<violation number="3" location="skills/arcane/SKILL.md:124">
P2: Use `params: { confirm: true }` instead of a top-level `confirm` flag so destructive ops can pass the confirmation gate.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="docker-compose.yaml">
<violation number="1" location="docker-compose.yaml:11">
P2: This mapping now follows `${PORT}`, but the healthcheck still probes `localhost:3000`. If `PORT` is set, the container will be marked unhealthy because the healthcheck hits the wrong port. Update the healthcheck (and any other internal references) to use `${PORT}` or keep the mapping fixed to 3000.</violation>
</file>
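One way to keep the two in sync is to drive both from the same variable: Compose interpolates `${PORT}` in the healthcheck from the host environment exactly as it does in the port mapping. A sketch under assumptions (the service name and `/health` endpoint are not from the source):

```yaml
services:
  arcane-mcp:
    ports:
      - "${PORT:-3000}:${PORT:-3000}"
    environment:
      - PORT=${PORT:-3000}  # expose the same value inside the container
    healthcheck:
      # Interpolated by Compose from the same host-side PORT as the mapping,
      # so the probe and the mapping cannot drift apart.
      test: ["CMD", "wget", "-qO-", "http://localhost:${PORT:-3000}/health"]
      interval: 30s
```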
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/utils/response.ts">
<violation number="1" location="src/utils/response.ts:18">
P2: The truncation branch exceeds `MAX_RESPONSE_SIZE` by appending the notice after slicing to the full limit. Reserve bytes for the suffix before slicing so the final returned string stays within the cap.</violation>
</file>
6 issues found across 7 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="docker-compose.yaml">
<violation number="1" location="docker-compose.yaml:37">
P2: The default Docker network name is inconsistent with the project’s documented/default env value (`arcane_mcp`), which can break startup on existing setups.</violation>
</file>
<file name="skills/arcane/SKILL.md">
<violation number="1" location="skills/arcane/SKILL.md:106">
P2: The destructive-operations list is incomplete: it omits confirmation-gated actions (at least `container:restart` and `environment:delete`), which makes the skill guidance inconsistent and can lead to unsafe/failed calls.</violation>
</file>
<file name="scripts/check-docker-security.sh">
<violation number="1" location="scripts/check-docker-security.sh:95">
P2: The `PUID` validation regex is overly permissive and can pass values that are not `${PUID}:${PGID}`.</violation>
<violation number="2" location="scripts/check-docker-security.sh:110">
P1: Root-user detection is too narrow and misses `USER root:...` / numeric UID 0, allowing root runtime configs to pass.</violation>
</file>
<file name="scripts/check-no-baked-env.sh">
<violation number="1" location="scripts/check-no-baked-env.sh:147">
P1: The new grep regex uses `\s` with `grep -E`, so CMD/ENTRYPOINT lines are not matched correctly and hardcoded credential detection can fail silently.</violation>
</file>
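The portability issue is easy to see: `\s` is a PCRE escape that POSIX ERE does not guarantee, while `[[:space:]]` works everywhere `grep -E` does. A small illustration (the pattern below is an example, not the script's exact regex):

```shell
# Match both "ENV KEY=VALUE" and "ENV KEY VALUE" forms with POSIX classes.
pattern='^(ENV|CMD|ENTRYPOINT)[[:space:]]+.*(TOKEN|SECRET|PASSWORD)'
printf 'ENV API_TOKEN=abc\nENV API_TOKEN abc\nRUN echo ok\n' | grep -cE "$pattern"
# prints 2: both ENV forms match, the RUN line does not
```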
<file name="src/utils/response.ts">
<violation number="1" location="src/utils/response.ts:23">
P2: This truncation can still exceed `MAX_RESPONSE_SIZE` when the cut point lands mid–UTF-8 character, because decoding may expand trailing invalid bytes to `�`. Reserve a small extra margin before appending the notice.</violation>
</file>
```shell
USER_VALUE=$(echo "$USER_LINE" | sed 's/^USER[[:space:]]*//')

# Check for 1000:1000, plain 1000, or variable-based UID:GID (${PUID}:${PGID})
if echo "$USER_VALUE" | grep -qE '^\$\{?PUID|^1000(:[0-9]+)?$|^[0-9]+:1000$'; then
```
P2: The PUID validation regex is overly permissive and can pass values that are not ${PUID}:${PGID}.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At scripts/check-docker-security.sh, line 95:
<comment>The `PUID` validation regex is overly permissive and can pass values that are not `${PUID}:${PGID}`.</comment>
<file context>
@@ -76,34 +76,41 @@ else
- # Check for 1000:1000 or variable-based UID:GID
- if echo "$USER_VALUE" | grep -qE '^\$?\{?PUID|1000:1000|1000$'; then
+ # Check for 1000:1000, plain 1000, or variable-based UID:GID (${PUID}:${PGID})
+ if echo "$USER_VALUE" | grep -qE '^\$\{?PUID|^1000(:[0-9]+)?$|^[0-9]+:1000$'; then
pass "Non-root user ($USER_VALUE)"
else
</file context>
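A tighter alternative would anchor the full value instead of a prefix. A sketch: this accepts only `${PUID}`/`$PUID` forms with an optional matching PGID, or literal 1000 variants, and rejects everything else:

```shell
strict='^(\$\{?PUID\}?(:\$\{?PGID\}?)?|1000(:1000)?)$'
for v in '${PUID}:${PGID}' '$PUID:$PGID' 1000 1000:1000; do
  echo "$v" | grep -qE "$strict" && echo "accepted: $v"
done
echo '1234:1000' | grep -qE "$strict" || echo 'rejected: 1234:1000'
```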
```typescript
const NOTICE = "\n... [truncated]";
const noticeBytes = Buffer.byteLength(NOTICE, "utf8");
return (
  buf.subarray(0, MAX_RESPONSE_SIZE - noticeBytes).toString("utf8") + NOTICE
);
```
P2: This truncation can still exceed MAX_RESPONSE_SIZE when the cut point lands mid–UTF-8 character, because decoding may expand trailing invalid bytes to �. Reserve a small extra margin before appending the notice.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/utils/response.ts, line 23:
<comment>This truncation can still exceed `MAX_RESPONSE_SIZE` when the cut point lands mid–UTF-8 character, because decoding may expand trailing invalid bytes to `�`. Reserve a small extra margin before appending the notice.</comment>
<file context>
@@ -14,8 +14,13 @@ export function truncateResponse(text: string): string {
+ const noticeBytes = Buffer.byteLength(NOTICE, "utf8");
return (
- buf.subarray(0, MAX_RESPONSE_SIZE).toString("utf8") + "\n... [truncated]"
+ buf.subarray(0, MAX_RESPONSE_SIZE - noticeBytes).toString("utf8") + NOTICE
);
}
</file context>
4 issues found across 8 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="scripts/lint-plugin.sh">
<violation number="1" location="scripts/lint-plugin.sh:90">
P2: The linter uses `jq` extensively without a prerequisite check, so missing `jq` produces misleading lint results instead of a clear dependency error.</violation>
<violation number="2" location="scripts/lint-plugin.sh:412">
P2: Compose validation only reads `docker-compose.yaml`, so repos using `docker-compose.yml` bypass required compose checks.</violation>
</file>
<file name="LICENSE">
<violation number="1" location="LICENSE:3">
P2: The MIT copyright line references `gotify-mcp contributors` instead of this repository (`arcane-mcp`), creating incorrect legal attribution in the license file.</violation>
</file>
<file name="hooks/scripts/ensure-ignore-files.sh">
<violation number="1" location="hooks/scripts/ensure-ignore-files.sh:105">
P2: Check mode can report a false pass when `.env.example` is still ignored due to rule ordering. Presence checks alone are insufficient for `.gitignore` negation semantics.</violation>
</file>
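Asking git to evaluate the effective rules catches exactly what presence checks miss. A self-contained demonstration of the ordering problem, assuming `git` is available (uses a throwaway repo):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
# A pattern AFTER the negation re-ignores the file, even though both
# ".env*" and "!.env.example" are "present" in the file.
printf '.env*\n!.env.example\n.env*\n' > .gitignore
if git check-ignore -q .env.example; then
  echo '.env.example is still ignored despite !.env.example'
fi
```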
hooks/scripts/ensure-ignore-files.sh (outdated)

```shell
  done
}

check_or_fix "$GITIGNORE" "${GITIGNORE_PATTERNS[@]}"
```
P2: Check mode can report a false pass when .env.example is still ignored due to rule ordering. Presence checks alone are insufficient for .gitignore negation semantics.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At hooks/scripts/ensure-ignore-files.sh, line 105:
<comment>Check mode can report a false pass when `.env.example` is still ignored due to rule ordering. Presence checks alone are insufficient for `.gitignore` negation semantics.</comment>
<file context>
@@ -1,299 +1,111 @@
+ done
+}
+
+check_or_fix "$GITIGNORE" "${GITIGNORE_PATTERNS[@]}"
+check_or_fix "$DOCKERIGNORE" "${DOCKERIGNORE_PATTERNS[@]}"
</file context>
Actionable comments posted: 31
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.env.example:
- Around line 10-12: The .env.example uses unquoted string values which can be
parsed inconsistently; update the entries for ARCANE_MCP_TRANSPORT,
ARCANE_MCP_TOKEN, and ARCANE_MCP_AUTH_ENABLED to use quoted values (e.g.,
"http", "" for empty token placeholder, and "true") so parsers consistently
treat them as strings—locate and modify the ARCANE_MCP_TRANSPORT,
ARCANE_MCP_TOKEN, and ARCANE_MCP_AUTH_ENABLED lines to add quotes around their
values.
In @.github/workflows/ci.yml:
- Around line 61-70: The audit job is running npm audit without installing
dependencies; update the audit job (the audit job's steps that run
actions/setup-node@v4 and the step that runs npm audit) to run npm ci (or npm
install --prefer-offline --no-audit) before the existing npm audit step so the
lockfile is installed/resolved consistently; ensure the new step runs after
actions/setup-node@v4 and before the npm audit step and preserves caching
behavior already configured for setup-node.
In @.gitignore:
- Around line 55-56: The .gitignore update only added docs/research/ but CI
expects the base ignore rules for the repository; add the missing entries
".lavra/" and ".worktrees" to the .gitignore so the ignore-file check passes;
locate the .gitignore file (current diff shows docs/research/) and append lines
for .lavra/ and .worktrees (one entry per line) ensuring they match the exact
names mentioned in the failing CI message.
In `@arcane.subdomain.conf`:
- Around line 8-12: The Nginx config duplicates include of proxy.conf at the
server scope and again inside location /; remove the redundant include at the
server scope and keep the include only inside the location / block (or vice
versa if you prefer global behavior), so that only one include of proxy.conf
remains; update the server-level include reference and verify that the resolver
and proxy settings in the location / block (resolver 127.0.0.11 valid=30s and
related proxy behavior) continue to work as intended.
In `@CHANGELOG.md`:
- Around line 11-20: In CHANGELOG.md, add a single blank line immediately after
the "### Changed" and "### Added" headings so that content does not start on the
same line as the headings (this resolves markdownlint rule MD022); update the
sections where bullets (e.g., the Migration bullet and Plugin scaffold list)
follow those headings to have one empty line between the heading and the first
list item.
In `@docker-compose.yaml`:
- Around line 35-38: The network key name "jakenet" is misleading versus the
actual Docker network name set by name: ${DOCKER_NETWORK:-arcane-mcp}; rename
the compose network key from jakenet to a name that matches the default (e.g.,
arcane-mcp or docker_network) and update any service network references that use
"jakenet" accordingly so the internal compose key and the configured default
network name (${DOCKER_NETWORK:-arcane-mcp}) are consistent.
- Around line 17-18: The healthcheck's test uses "${PORT:-3000}" which Docker
Compose interpolates at startup (not inside the container) so values from
env_file won't be applied; update the service to either hardcode the port in the
healthcheck test (replace "${PORT:-3000}" with the actual port number) or make
the port available to Compose by defining it at the service level (add an
environment entry mapping PORT to a Compose variable e.g. environment: -
PORT=${PORT:-3000} or a dedicated variable like ARCANE_MCP_BIND_PORT and
reference that in the healthcheck test) so the healthcheck and the app use the
same port; modify the healthcheck -> test and the service environment entries
accordingly.
- Around line 6-7: The docker-compose service uses a hardcoded env_file path
`env_file: ~/.claude-homelab/.env` which conflicts with the README's Quick
Start; change the docker-compose `env_file` entry to a relative path (e.g.,
`env_file: .env`) so it reads the project-root .env used by `cp .env.example
.env`, or alternatively update the README to require the ~/.claude-homelab/.env
location—modify the `env_file` key in docker-compose.yaml (and the README if you
choose the alternate approach) to keep them consistent.
In `@hooks/scripts/ensure-ignore-files.sh`:
- Around line 20-48: Add TypeScript build and coverage ignore patterns to the
GITIGNORE_PATTERNS array: update the GITIGNORE_PATTERNS variable to include
"build/" and "coverage/" (or "build/*" and "coverage/*" if you prefer explicit
globbing), keeping the same quoted string entries and array style so the script
will ignore compilation artifacts and coverage output.
In `@Justfile`:
- Around line 65-67: The setup recipe in the Justfile prints "Created .env from
.env.example" even when .env already exists because the chained &&/|| logic
still executes chmod and echo; change the setup target to use an explicit
conditional that checks for absence of .env (e.g., if [ ! -f .env ]; then cp
.env.example .env && chmod 600 .env && echo "Created .env from .env.example";
fi) so cp, chmod and the success message run only when the file is actually
created; update the recipe body for the setup target and reference .env and
.env.example accordingly.
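The fix is a plain guard so the side effects and the message share one condition. A sketch runnable outside the Justfile (the temp dir and sample file are for illustration only):

```shell
tmp=$(mktemp -d) && cd "$tmp"
echo 'ARCANE_MCP_TOKEN=' > .env.example
# cp, chmod, and the message run only when .env is actually missing,
# unlike chained `&&`/`||`, which prints the message even on a second run.
if [ ! -f .env ]; then
  cp .env.example .env && chmod 600 .env && echo 'Created .env from .env.example'
fi
if [ ! -f .env ]; then
  echo 'Created .env from .env.example'   # not reached on the second pass
fi
```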
In `@LICENSE`:
- Line 3: Update the LICENSE file to correct the copyright holder attribution:
replace the string "gotify-mcp contributors" with "arcane-mcp contributors" (or
the correct legal entity name used across this repo/manifests) so the LICENSE
content matches the repository identity; ensure the change appears where the
existing copyright line contains "Copyright (c) 2026 gotify-mcp contributors".
In `@scripts/check-docker-security.sh`:
- Around line 151-173: Update the script header to document the manifest and
source detection heuristics (the patterns used by COPY_LINES, the regexes
checked in the while loop for manifest files and source files, and the caveat
about multi-source COPYs); also adjust the detection logic in the while loop
that sets FIRST_MANIFEST_COPY and FIRST_SOURCE_COPY so that if a single COPY
line contains both a manifest pattern and a source pattern it is treated as a
manifest-first copy (i.e., set FIRST_MANIFEST_COPY when the manifest regex
matches even if the same line also matches the source regex), referencing the
variables COPY_LINES, FIRST_MANIFEST_COPY and FIRST_SOURCE_COPY and the
manifest/source regex checks to locate the code to change.
- Around line 91-92: The extraction of the USER value uses an external sed call;
replace that with bash parameter expansion to avoid a subprocess. Locate the
variables USER_LINE and USER_VALUE derived from RUNTIME_SECTION and change the
assignment of USER_VALUE to strip the leading "USER" and any whitespace using
parameter expansion (e.g., remove the "USER" prefix and leading spaces from
USER_LINE) so the result is identical but more efficient and readable.
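The sed call can be replaced with two POSIX parameter expansions, one stripping the keyword and one trimming leading whitespace:

```shell
USER_LINE='USER   1000:1000'
USER_VALUE=${USER_LINE#USER}                              # drop the keyword
USER_VALUE=${USER_VALUE#"${USER_VALUE%%[![:space:]]*}"}   # trim leading spaces
echo "$USER_VALUE"   # prints 1000:1000
```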
In `@scripts/check-no-baked-env.sh`:
- Around line 100-101: The ENV detection regex in the SENSITIVE_ENVS grep only
matches the KEY=VALUE form and uses \s (unsupported in ERE); update the
SENSITIVE_ENVS grep to accept both "ENV KEY=VALUE" and "ENV KEY VALUE" by using
POSIX character classes, e.g. change the pattern to use [[:space:]]+ and allow
either '=' or whitespace separator around the value (match something like
^ENV[[:space:]]+\S*${SENSITIVE_RE}\S*(?:[[:space:]]+|=)), and likewise replace
any use of '\s' in the CMD/ENTRYPOINT detection regex with POSIX [[:space:]] so
those patterns correctly detect credentials in both space-separated and
equals-assigned Dockerfile forms (adjust the grep -E expressions that reference
SENSITIVE_RE and the CMD/ENTRYPOINT regex accordingly).
In `@scripts/check-outdated-deps.sh`:
- Around line 46-47: The script increments CHECKED as soon as a manifest exists
which reports success even when the actual checker never ran; update each
manifest-handling branch (the blocks that test for pyproject.toml, package.json,
Cargo.toml, etc.) to only increment CHECKED after the corresponding tool
invocation completed successfully (or after a real attempt was made), and when
the required tool is missing/failed, do not increment CHECKED and instead exit
with the documented tool-error code (exit 2) or propagate a non-zero status so
the summary is not falsely green; update the uv/npm/cargo missing-tool branches
to set a non-zero status (exit 2) or flag a failed run and ensure the final
summary uses CHECKED and failures to decide exit code rather than assuming
presence of manifests equals success.
- Around line 123-139: The script currently parses npm audit into AUDIT and
computes VULN_COUNT but only prints a warning when VULN_COUNT > 0; modify the
conditional that checks VULN_COUNT to return a non-zero exit status (e.g., exit
1) when VULN_COUNT is greater than zero so the CI/job fails on known
vulnerabilities — update the branch handling VULN_COUNT in the block that
references AUDIT and VULN_COUNT to call exit with a failure code while keeping
the warning message.
In `@scripts/ensure-ignore-files.sh`:
- Around line 153-161: Add the missing "build/" entry to the .gitignore under
the "# ── Node/TypeScript
──────────────────────────────────────────────────────────" section so the
existing check in scripts/ensure-ignore-files.sh (the loop over p in
"node_modules/" "dist/" "build/") passes; update .gitignore to include the
literal line build/ (matching the other patterns like node_modules/ and dist/)
and commit the change.
- Around line 164-173: The single-element for loop that checks for the Rust
ignore entry ("target/") is unnecessary and should be simplified or expanded;
replace the for p in "target/"; do ... done loop by either (A) performing a
direct grep check for "target/" against the GITIGNORE variable (use if grep -qxF
"target/" "$GITIGNORE" ... ), or (B) expand the loop to include additional
Rust-related patterns (e.g., "target/", "Cargo.lock", "/debug/", "*.rlib") and
keep the existing loop logic; reference the PROJECT_DIR, GITIGNORE variables and
the "'target/'" string in your changes.
In `@scripts/lint-plugin.sh`:
- Around line 268-298: The current detection uses TOOL_PATTERN and HELP_PATTERN
with grep setting TOOL_FOUND and HELP_FOUND, but these regexes can miss
registrations like TypeScript decorators (`@Tool`("...")), object shorthand ({
arcane }), or dynamic registrations; update the script to either extend the
patterns to include decorator and object-property forms and/or add an explicit
comment and fallback manual-check notice for non-standard registrations (refer
to TOOL_PATTERN, HELP_PATTERN, TOOL_FOUND, HELP_FOUND and the fail/pass calls)
so reviewers know to manually verify tools that use decorators, object
shorthand, or dynamic registration.
- Around line 159-170: The current sed extraction for pyproject.toml version
(assigning PY_VERSION using grep|sed with PROJECT_DIR) is fragile for single
quotes or other TOML formats; replace it with a more robust parser by invoking a
Python one-liner that uses tomllib/toml (falling back as needed) to load
PROJECT_DIR/pyproject.toml and extract the project.version, then set PY_VERSION
from that output and preserve the existing VERSION_CHECKED / comparison logic
against PLUGIN_VERSION; ensure the command fails gracefully (empty PY_VERSION)
on missing keys or parser errors so the subsequent if [[ -n "$PY_VERSION" ]]
logic continues to work.
In `@skills/arcane/SKILL.md`:
- Around line 50-59: Update the "Volume operations" section in
skills/arcane/SKILL.md to include the missing subactions implemented in
src/mcp/tools/dispatch/volume.ts: add entries for "list-backups",
"create-backup", "delete-backup", "restore", "restore-files", and "browse", and
for each subaction document required parameters (e.g., envId, id/name, backup
id, file paths) and which subactions require envId or volume id/name; ensure the
new lines follow the existing format used for "list|get|create|delete|prune" so
consumers can map these SKILL actions to the corresponding handlers in
volume.ts.
- Around line 1-159: The Markdown headings in SKILL.md (e.g., "Arcane Skill",
"Mode Detection", "MCP Mode — Tool Reference", "Container operations", "Image
operations", etc.) lack blank lines before them and trigger MD022; fix by
inserting a single blank line immediately above each top-level and subheading
line (lines starting with #, ##, or ###) throughout the file so every heading is
separated from the previous content, ensuring consistent spacing and linter
compliance while preserving the existing heading text and examples.
In `@src/index.ts`:
- Around line 27-37: The shutdown handler currently just logs and exits; update
the shutdown function referenced by process.on("SIGTERM")/process.on("SIGINT")
to (1) stop accepting new requests by closing the Express server instance
(assign the return of app.listen(...) to a module-scoped httpServer variable and
call await new Promise(resolve => httpServer.close(resolve))), (2) drain/close
active MCP sessions by awaiting your session drain helper (e.g. await
mcp.closeAllSessions() or await drainActiveSessions()), and (3) only call
process.exit(0) after those awaits complete; also catch/log any errors inside
shutdown instead of swallowing them.
In `@src/mcp/tools/dispatch/gitops.ts`:
- Around line 49-63: The code validates params.path into rawPath but never
forwards it to the call; update the return statement for the "browse" case to
pass rawPath to s.gitops.browseFiles so the API actually browses the requested
subpath (e.g. change the call from s.gitops.browseFiles(eid, requireId(id,
"sync")) to include rawPath as the path argument). Ensure you use the same
identifier rawPath (or params.path) and preserve existing requireId/id usage and
ValidationError checks so undefined is allowed but when present the value is
forwarded.
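The shape of the fix is simply threading the validated value through. A sketch with a stubbed client, since the real `s.gitops.browseFiles` signature is an assumption taken from the review comment:

```typescript
// Stub standing in for s.gitops.browseFiles; records its arguments.
const calls: Array<[string, string, string | undefined]> = [];
const browseFiles = (envId: string, id: string, path?: string): string => {
  calls.push([envId, id, path]);
  return `browse ${envId}/${id} at ${path ?? "/"}`;
};

// Before: rawPath was validated but then dropped. After: it is forwarded.
function browse(eid: string, id: string, rawPath?: string): string {
  return browseFiles(eid, id, rawPath);
}
```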
In `@src/utils/env.ts`:
- Line 25: The exported env property AUTH_TOKEN is mapped to
requireEnv("ARCANE_MCP_TOKEN"), causing a naming mismatch; rename the property
to ARCANE_MCP_TOKEN (or MCP_TOKEN if preferred) in the env export and update all
usages of AUTH_TOKEN (for example in auth middleware) to the new symbol so
references align with the environment variable requireEnv("ARCANE_MCP_TOKEN");
ensure you run project-wide search/replace to update imports/usages and run
tests.
In `@src/utils/pagination.ts`:
- Around line 32-43: The filter+sort path repeatedly calls JSON.stringify on
each item (in variables filtered and sorted), causing unnecessary CPU work;
precompute serialized forms once by mapping allItems to an array of {item, text:
JSON.stringify(item).toLowerCase()} (or without toLowerCase and apply where
needed), then use that text for the query filter and for sort comparisons (use
the same text for asc/desc order), and finally extract the original items for
pagination; update references to filtered/sorted to use the precomputed pairs so
behavior of query matching and string comparison (via localeCompare) stays
identical.
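Precomputing the serialized text once per item removes the repeated `JSON.stringify` work from both the filter and the comparator. A sketch of the shape (function and parameter names are illustrative, not the module's exact API):

```typescript
function filterAndSort<T>(
  allItems: T[],
  query: string,
  order: "asc" | "desc",
): T[] {
  // Serialize each item exactly once; reuse for filtering and sorting.
  const indexed = allItems.map((item) => ({
    item,
    text: JSON.stringify(item).toLowerCase(),
  }));
  const q = query.toLowerCase();
  const filtered = q ? indexed.filter((e) => e.text.includes(q)) : indexed;
  filtered.sort((a, b) =>
    order === "asc"
      ? a.text.localeCompare(b.text)
      : b.text.localeCompare(a.text),
  );
  return filtered.map((e) => e.item);
}
```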
In `@src/utils/response.ts`:
- Around line 11-27: truncateResponse can overshoot MAX_RESPONSE_SIZE when
toString("utf8") inserts U+FFFD for an incomplete multi-byte sequence; adjust
the byte slice to reserve extra bytes for the worst-case replacement so the
final re-encoded string never exceeds the limit. In the function
truncateResponse, change the slice length calculation that currently uses
MAX_RESPONSE_SIZE - noticeBytes to subtract an additional safety margin (use
e.g. const SAFETY_BYTES = 3 and slice to MAX_RESPONSE_SIZE - noticeBytes -
SAFETY_BYTES) and clamp to zero to avoid negative lengths; keep NOTICE,
MAX_RESPONSE_SIZE and Buffer.byteLength usage otherwise unchanged.
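Putting the review's three constraints together (byte-based slice, reserved notice bytes, small UTF-8 safety margin), a sketch might look like this. The 64-byte cap is for illustration only; the real constant differs:

```typescript
const MAX_RESPONSE_SIZE = 64; // illustration; the real cap is much larger
const NOTICE = "\n... [truncated]";
// An incomplete trailing multi-byte sequence can decode to U+FFFD (3 bytes),
// so reserve a small margin on top of the notice itself.
const SAFETY_BYTES = 3;

function truncateResponse(text: string): string {
  const buf = Buffer.from(text, "utf8");
  if (buf.length <= MAX_RESPONSE_SIZE) return text;
  const noticeBytes = Buffer.byteLength(NOTICE, "utf8");
  const keep = Math.max(0, MAX_RESPONSE_SIZE - noticeBytes - SAFETY_BYTES);
  return buf.subarray(0, keep).toString("utf8") + NOTICE;
}
```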
In `@src/utils/url.ts`:
- Around line 8-10: The current check that returns url.replace(/localhost/g,
"host.docker.internal") replaces every "localhost" occurrence (including
path/query); instead, parse the input string with the URL constructor, detect
when process.env.RUNNING_IN_DOCKER === "true", replace only the hostname
(URL.hostname) with "host.docker.internal" and return the reconstructed URL
(URL.toString() or href) so protocol, port, path and query are preserved; update
the logic around the existing snippet in src/utils/url.ts that performs this
replacement to use URL parsing and hostname assignment instead of a global
string replace.
In `@tests/dockerfile.test.ts`:
- Around line 4-10: The test file tests/dockerfile.test.ts contains long lines
that differ from Biome v2 formatting causing CI failure; run Biome v2 formatting
(e.g., biome format) on tests/dockerfile.test.ts (or the repo) to wrap/format
the long string literals so the contents of the dockerfile variable and the
describe/it block match the formatter output, then commit the formatted file so
the lint job passes.
In `@tests/test_live.sh`:
- Around line 67-71: The test assigns the shell variable result from the npx
mcporter call but never uses it; remove the unused assignment to avoid dead
variables by changing the command invocation that currently begins with
result=$(npx mcporter call ... ) to a plain command invocation (npx mcporter
call "${SERVER_NAME}.arcane" --http-url "$MCP_URL" --header "$AUTH_HEADER"
action=container subaction=list 2>/dev/null) so the test uses the exit status
for pass/fail, or alternatively, if you need to assert contents, keep the
assignment to result and add assertions that inspect result (e.g., checking
pagination fields) instead of leaving it unused.
- Around line 22-28: The current health check uses the A && B || C pattern with
the pipeline assigning health then chaining pass/fail, which can wrongly trigger
fail if pass ever returns non-zero; change the logic to an explicit if-then-else
using the health variable: run the curl assignment into health, then use if echo
"$health" | jq -e '.status == "ok"' >/dev/null 2>&1; then call pass
"health/status-ok"; else call fail "health/status-ok" "$(echo "$health" | jq -r
'.status // "no status field"')"; fi — also apply the same if-then-else
structure for the initial curl check that currently uses && pass
"health/endpoint" || { fail ...; exit 1; } so you call pass "health/endpoint" on
success and fail + exit on failure.
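The pitfall is easy to reproduce: in `A && B || C`, the `|| C` branch fires whenever B returns non-zero, not only when A fails. A minimal demonstration (the `flaky_pass` function is contrived for illustration):

```shell
flaky_pass() { echo "pass: $1"; return 1; }   # a pass() that returns non-zero

# Chained form: prints BOTH lines, because `|| C` reacts to flaky_pass failing.
true && flaky_pass demo || echo 'fail: demo'

# Explicit if/else: only the condition is tested, so only "pass: demo2" prints.
if true; then flaky_pass demo2; else echo 'fail: demo2'; fi || :
```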
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 4b4796c5-eb4e-4348-a4ea-1e6cd03e613d
⛔ Files ignored due to path filters (2)

- assets/icon.png is excluded by `!**/*.png` and included by `**/*`
- assets/logo.svg is excluded by `!**/*.svg` and included by `**/*`
📒 Files selected for processing (49)

.app.json, .claude-plugin/plugin.json, .codex-plugin/plugin.json, .dockerignore, .env.example, .github/workflows/ci.yml, .gitignore, .mcp.json, .pre-commit-config.yaml, CHANGELOG.md, Dockerfile, Justfile, LICENSE, README.md, arcane.subdomain.conf, assets/screenshots/.gitkeep, backups/.gitkeep, docker-compose.yaml, entrypoint.sh, hooks/hooks.json, hooks/scripts/ensure-ignore-files.sh, hooks/scripts/fix-env-perms.sh, hooks/scripts/sync-env.sh, logs/.gitkeep, scripts/check-docker-security.sh, scripts/check-no-baked-env.sh, scripts/check-outdated-deps.sh, scripts/ensure-ignore-files.sh, scripts/lint-plugin.sh, skills/arcane/SKILL.md, src/index.ts, src/mcp/tools/arcane.ts, src/mcp/tools/dispatch/container.ts, src/mcp/tools/dispatch/environment.ts, src/mcp/tools/dispatch/gitops.ts, src/mcp/tools/dispatch/image-update.ts, src/mcp/tools/dispatch/image.ts, src/mcp/tools/dispatch/network.ts, src/mcp/tools/dispatch/project.ts, src/mcp/tools/dispatch/registry.ts, src/mcp/tools/dispatch/volume.ts, src/mcp/tools/dispatch/vulnerability.ts, src/utils/env.ts, src/utils/pagination.ts, src/utils/response.ts, src/utils/url.ts, tests/dockerfile.test.ts, tests/test_live.sh, vitest.dockerfile.config.ts
ARCANE_MCP_TRANSPORT=http # "http" (default) or "stdio"
ARCANE_MCP_TOKEN= # required — generate with: openssl rand -hex 32
ARCANE_MCP_AUTH_ENABLED=true # false = disable bearer auth (proxy-managed only)
🧹 Nitpick | 🔵 Trivial
Consider quoting values for consistency and parser compatibility.
Some parsers handle unquoted values differently. Quoting string values like http, true, and info ensures consistent behavior across different env file parsers.
♻️ Proposed fix: add quotes to string values
-ARCANE_MCP_TRANSPORT=http # "http" (default) or "stdio"
-ARCANE_MCP_TOKEN= # required — generate with: openssl rand -hex 32
-ARCANE_MCP_AUTH_ENABLED=true # false = disable bearer auth (proxy-managed only)
+ARCANE_MCP_TRANSPORT="http" # "http" (default) or "stdio"
+ARCANE_MCP_TOKEN="" # required — generate with: openssl rand -hex 32
+ARCANE_MCP_AUTH_ENABLED="true" # false = disable bearer auth (proxy-managed only)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Current:
ARCANE_MCP_TRANSPORT=http # "http" (default) or "stdio"
ARCANE_MCP_TOKEN= # required — generate with: openssl rand -hex 32
ARCANE_MCP_AUTH_ENABLED=true # false = disable bearer auth (proxy-managed only)
Suggested:
ARCANE_MCP_TRANSPORT="http" # "http" (default) or "stdio"
ARCANE_MCP_TOKEN="" # required — generate with: openssl rand -hex 32
ARCANE_MCP_AUTH_ENABLED="true" # false = disable bearer auth (proxy-managed only)
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 10-10: [ValueWithoutQuotes] This value needs to be surrounded in quotes
(ValueWithoutQuotes)
[warning] 11-11: [SpaceCharacter] The line has spaces around equal sign
(SpaceCharacter)
[warning] 11-11: [UnorderedKey] The ARCANE_MCP_TOKEN key should go before the ARCANE_MCP_TRANSPORT key
(UnorderedKey)
[warning] 11-11: [ValueWithoutQuotes] This value needs to be surrounded in quotes
(ValueWithoutQuotes)
[warning] 12-12: [UnorderedKey] The ARCANE_MCP_AUTH_ENABLED key should go before the ARCANE_MCP_BIND_PORT key
(UnorderedKey)
[warning] 12-12: [ValueWithoutQuotes] This value needs to be surrounded in quotes
(ValueWithoutQuotes)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.env.example around lines 10 - 12, The .env.example uses unquoted string
values which can be parsed inconsistently; update the entries for
ARCANE_MCP_TRANSPORT, ARCANE_MCP_TOKEN, and ARCANE_MCP_AUTH_ENABLED to use
quoted values (e.g., "http", "" for empty token placeholder, and "true") so
parsers consistently treat them as strings—locate and modify the
ARCANE_MCP_TRANSPORT, ARCANE_MCP_TOKEN, and ARCANE_MCP_AUTH_ENABLED lines to add
quotes around their values.
.github/workflows/ci.yml
Outdated
audit:
  name: Security Audit
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: "24"
        cache: "npm"
    - run: npm audit --audit-level=high
🧹 Nitpick | 🔵 Trivial
Missing npm ci before npm audit.
The audit job runs npm audit without first installing dependencies. While npm audit can work without node_modules, it benefits from having the lockfile properly resolved in context, and some audit scenarios may behave differently. Consider adding npm ci for consistency with other jobs.
Proposed fix
audit:
name: Security Audit
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "24"
cache: "npm"
+ - run: npm ci
  - run: npm audit --audit-level=high
📝 Committable suggestion
Current:
audit:
  name: Security Audit
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: "24"
        cache: "npm"
    - run: npm audit --audit-level=high
Suggested:
audit:
  name: Security Audit
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: "24"
        cache: "npm"
    - run: npm ci
    - run: npm audit --audit-level=high
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/ci.yml around lines 61 - 70, The audit job is running npm
audit without installing dependencies; update the audit job (the audit job's
steps that run actions/setup-node@v4 and the step that runs npm audit) to run
npm ci (or npm install --prefer-offline --no-audit) before the existing npm
audit step so the lockfile is installed/resolved consistently; ensure the new
step runs after actions/setup-node@v4 and before the npm audit step and
preserves caching behavior already configured for setup-node.
export function truncateResponse(text: string): string {
  const buf = Buffer.from(text, "utf8");
  if (buf.byteLength > MAX_RESPONSE_SIZE) {
    // Slice the byte buffer and decode back to a string. `toString("utf8")`
    // replaces any incomplete multi-byte sequence at the cut point with the
    // U+FFFD replacement character, keeping the result valid UTF-8.
    //
    // Reserve bytes for the truncation notice so the final result never
    // exceeds MAX_RESPONSE_SIZE bytes when re-encoded.
    const NOTICE = "\n... [truncated]";
    const noticeBytes = Buffer.byteLength(NOTICE, "utf8");
    return (
      buf.subarray(0, MAX_RESPONSE_SIZE - noticeBytes).toString("utf8") + NOTICE
    );
  }
  return text;
}
Minor edge case: U+FFFD replacement may slightly exceed byte limit.
When buf.subarray(0, MAX_RESPONSE_SIZE - noticeBytes).toString("utf8") decodes an incomplete multi-byte sequence, it inserts U+FFFD (3 bytes in UTF-8). If the cut happens mid-character, the replacement could add up to 2 extra bytes compared to the original truncated bytes, causing the final output to marginally exceed MAX_RESPONSE_SIZE.
For a 512KB limit, this 1-2 byte overshoot is negligible in practice. If strict compliance is required:
🛡️ Proposed fix for strict byte limit compliance
export function truncateResponse(text: string): string {
const buf = Buffer.from(text, "utf8");
if (buf.byteLength > MAX_RESPONSE_SIZE) {
const NOTICE = "\n... [truncated]";
const noticeBytes = Buffer.byteLength(NOTICE, "utf8");
- return (
- buf.subarray(0, MAX_RESPONSE_SIZE - noticeBytes).toString("utf8") + NOTICE
- );
+ const targetSize = MAX_RESPONSE_SIZE - noticeBytes;
+ let truncated = buf.subarray(0, targetSize).toString("utf8");
+ // If U+FFFD replacement caused overshoot, trim further
+ while (Buffer.byteLength(truncated, "utf8") > targetSize) {
+ truncated = truncated.slice(0, -1);
+ }
+ return truncated + NOTICE;
}
return text;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/response.ts` around lines 11 - 27, truncateResponse can overshoot
MAX_RESPONSE_SIZE when toString("utf8") inserts U+FFFD for an incomplete
multi-byte sequence; adjust the byte slice to reserve extra bytes for the
worst-case replacement so the final re-encoded string never exceeds the limit.
In the function truncateResponse, change the slice length calculation that
currently uses MAX_RESPONSE_SIZE - noticeBytes to subtract an additional safety
margin (use e.g. const SAFETY_BYTES = 3 and slice to MAX_RESPONSE_SIZE -
noticeBytes - SAFETY_BYTES) and clamp to zero to avoid negative lengths; keep
NOTICE, MAX_RESPONSE_SIZE and Buffer.byteLength usage otherwise unchanged.
| if (process.env.RUNNING_IN_DOCKER === "true") { | ||
| return url.replace(/localhost/g, "host.docker.internal"); | ||
| } |
Replace only the hostname, not every localhost substring.
Current logic rewrites localhost anywhere in the URL string (including path/query), which can alter valid endpoints unexpectedly. Parse and mutate hostname only.
Proposed fix
export function normalizeServiceUrl(url: string): string {
- if (process.env.RUNNING_IN_DOCKER === "true") {
- return url.replace(/localhost/g, "host.docker.internal");
- }
- return url;
+ if (process.env.RUNNING_IN_DOCKER !== "true") return url;
+
+ try {
+ const parsed = new URL(url);
+ if (parsed.hostname === "localhost") {
+ parsed.hostname = "host.docker.internal";
+ }
+ return parsed.toString();
+ } catch {
+ return url;
+ }
}
📝 Committable suggestion
Current:
if (process.env.RUNNING_IN_DOCKER === "true") {
  return url.replace(/localhost/g, "host.docker.internal");
}
Suggested:
export function normalizeServiceUrl(url: string): string {
  if (process.env.RUNNING_IN_DOCKER !== "true") return url;
  try {
    const parsed = new URL(url);
    if (parsed.hostname === "localhost") {
      parsed.hostname = "host.docker.internal";
    }
    return parsed.toString();
  } catch {
    return url;
  }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/url.ts` around lines 8 - 10, The current check that returns
url.replace(/localhost/g, "host.docker.internal") replaces every "localhost"
occurrence (including path/query); instead, parse the input string with the URL
constructor, detect when process.env.RUNNING_IN_DOCKER === "true", replace only
the hostname (URL.hostname) with "host.docker.internal" and return the
reconstructed URL (URL.toString() or href) so protocol, port, path and query are
preserved; update the logic around the existing snippet in src/utils/url.ts that
performs this replacement to use URL parsing and hostname assignment instead of
a global string replace.
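A quick demonstration of the failure mode being flagged; the path segment here is hypothetical, chosen only to make the collision visible:

```typescript
const url = "http://localhost:3000/api/localhost-check";

// Global string replace rewrites the path segment too.
console.log(url.replace(/localhost/g, "host.docker.internal"));
// http://host.docker.internal:3000/api/host.docker.internal-check

// Mutating only URL.hostname leaves the path untouched.
const parsed = new URL(url);
parsed.hostname = "host.docker.internal";
console.log(parsed.toString());
// http://host.docker.internal:3000/api/localhost-check
```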
tests/dockerfile.test.ts
Outdated
const dockerfile = readFileSync(new URL("../Dockerfile", import.meta.url), "utf8");

describe("Dockerfile runtime image", () => {
  it("does not recreate the non-root user or group with hardcoded uid/gid 1000", () => {
    expect(dockerfile).not.toMatch(/addgroup\s+-g\s+1000\s+app/);
    expect(dockerfile).not.toMatch(/adduser\s+-u\s+1000\s+-G\s+app\s+-H\s+-D\s+app/);
  });
Run Biome on this test before merging.
CI already reports File content differs from formatting output. This file has a couple of long lines that Biome will wrap, so the lint job stays red as-is.
🧹 Formatting diff
-const dockerfile = readFileSync(new URL("../Dockerfile", import.meta.url), "utf8");
+const dockerfile = readFileSync(
+ new URL("../Dockerfile", import.meta.url),
+ "utf8",
+);
describe("Dockerfile runtime image", () => {
- it("does not recreate the non-root user or group with hardcoded uid/gid 1000", () => {
- expect(dockerfile).not.toMatch(/addgroup\s+-g\s+1000\s+app/);
- expect(dockerfile).not.toMatch(/adduser\s+-u\s+1000\s+-G\s+app\s+-H\s+-D\s+app/);
- });
+ it(
+ "does not recreate the non-root user or group with hardcoded uid/gid 1000",
+ () => {
+ expect(dockerfile).not.toMatch(/addgroup\s+-g\s+1000\s+app/);
+ expect(dockerfile).not.toMatch(
+ /adduser\s+-u\s+1000\s+-G\s+app\s+-H\s+-D\s+app/,
+ );
+ },
+ );
});
📝 Committable suggestion
Current:
const dockerfile = readFileSync(new URL("../Dockerfile", import.meta.url), "utf8");

describe("Dockerfile runtime image", () => {
  it("does not recreate the non-root user or group with hardcoded uid/gid 1000", () => {
    expect(dockerfile).not.toMatch(/addgroup\s+-g\s+1000\s+app/);
    expect(dockerfile).not.toMatch(/adduser\s+-u\s+1000\s+-G\s+app\s+-H\s+-D\s+app/);
  });
Suggested:
const dockerfile = readFileSync(
  new URL("../Dockerfile", import.meta.url),
  "utf8",
);
describe("Dockerfile runtime image", () => {
  it(
    "does not recreate the non-root user or group with hardcoded uid/gid 1000",
    () => {
      expect(dockerfile).not.toMatch(/addgroup\s+-g\s+1000\s+app/);
      expect(dockerfile).not.toMatch(
        /adduser\s+-u\s+1000\s+-G\s+app\s+-H\s+-D\s+app/,
      );
    },
  );
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/dockerfile.test.ts` around lines 4 - 10, The test file
tests/dockerfile.test.ts contains long lines that differ from Biome v2
formatting causing CI failure; run Biome v2 formatting (e.g., biome format) on
tests/dockerfile.test.ts (or the repo) to wrap/format the long string literals
so the contents of the dockerfile variable and the describe/it block match the
formatter output, then commit the formatted file so the lint job passes.
tests/test_live.sh
Outdated
health=$(curl -sf "${MCP_URL}/health") \
  && pass "health/endpoint" \
  || { fail "health/endpoint" "HTTP error — is the server running at $MCP_URL?"; exit 1; }

echo "$health" | jq -e '.status == "ok"' > /dev/null 2>&1 \
  && pass "health/status-ok" \
  || fail "health/status-ok" "$(echo "$health" | jq -r '.status // "no status field"')"
🧹 Nitpick | 🔵 Trivial
Consider using if-then-else for critical checks.
The A && B || C pattern can unexpectedly execute C if B fails (even when A succeeds). While pass is unlikely to fail, for critical checks like the health endpoint, using explicit if-then-else is safer.
♻️ Proposed fix using if-then-else
-health=$(curl -sf "${MCP_URL}/health") \
- && pass "health/endpoint" \
- || { fail "health/endpoint" "HTTP error — is the server running at $MCP_URL?"; exit 1; }
+if health=$(curl -sf "${MCP_URL}/health"); then
+ pass "health/endpoint"
+else
+ fail "health/endpoint" "HTTP error — is the server running at $MCP_URL?"
+ exit 1
+fi
-echo "$health" | jq -e '.status == "ok"' > /dev/null 2>&1 \
- && pass "health/status-ok" \
- || fail "health/status-ok" "$(echo "$health" | jq -r '.status // "no status field"')"
+if echo "$health" | jq -e '.status == "ok"' > /dev/null 2>&1; then
+ pass "health/status-ok"
+else
+ fail "health/status-ok" "$(echo "$health" | jq -r '.status // "no status field"')"
+fi
📝 Committable suggestion
Current:
health=$(curl -sf "${MCP_URL}/health") \
  && pass "health/endpoint" \
  || { fail "health/endpoint" "HTTP error — is the server running at $MCP_URL?"; exit 1; }
echo "$health" | jq -e '.status == "ok"' > /dev/null 2>&1 \
  && pass "health/status-ok" \
  || fail "health/status-ok" "$(echo "$health" | jq -r '.status // "no status field"')"
Suggested:
if health=$(curl -sf "${MCP_URL}/health"); then
  pass "health/endpoint"
else
  fail "health/endpoint" "HTTP error — is the server running at $MCP_URL?"
  exit 1
fi
if echo "$health" | jq -e '.status == "ok"' > /dev/null 2>&1; then
  pass "health/status-ok"
else
  fail "health/status-ok" "$(echo "$health" | jq -r '.status // "no status field"')"
fi
🧰 Tools
🪛 Shellcheck (0.11.0)
[info] 23-23: Note that A && B || C is not if-then-else. C may run when A is true.
(SC2015)
[info] 27-27: Note that A && B || C is not if-then-else. C may run when A is true.
(SC2015)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_live.sh` around lines 22 - 28, The current health check uses the A
&& B || C pattern with the pipeline assigning health then chaining pass/fail,
which can wrongly trigger fail if pass ever returns non-zero; change the logic
to an explicit if-then-else using the health variable: run the curl assignment
into health, then use if echo "$health" | jq -e '.status == "ok"' >/dev/null
2>&1; then call pass "health/status-ok"; else call fail "health/status-ok"
"$(echo "$health" | jq -r '.status // "no status field"')"; fi — also apply the
same if-then-else structure for the initial curl check that currently uses &&
pass "health/endpoint" || { fail ...; exit 1; } so you call pass
"health/endpoint" on success and fail + exit on failure.
tests/test_live.sh
Outdated
result=$(npx mcporter call "${SERVER_NAME}.arcane" \
  --http-url "$MCP_URL" --header "$AUTH_HEADER" \
  action=container subaction=list 2>/dev/null) \
  && pass "action/container/list" \
  || fail "action/container/list" "call failed"
🧹 Nitpick | 🔵 Trivial
Unused variable result.
The result variable is assigned on line 67 but never used. Either use it to inspect the response (e.g., check pagination fields) or remove the assignment.
♻️ Proposed fix: remove unused assignment
-result=$(npx mcporter call "${SERVER_NAME}.arcane" \
+npx mcporter call "${SERVER_NAME}.arcane" \
--http-url "$MCP_URL" --header "$AUTH_HEADER" \
- action=container subaction=list 2>/dev/null) \
+ action=container subaction=list > /dev/null 2>&1 \
&& pass "action/container/list" \
  || fail "action/container/list" "call failed"
📝 Committable suggestion
Current:
result=$(npx mcporter call "${SERVER_NAME}.arcane" \
  --http-url "$MCP_URL" --header "$AUTH_HEADER" \
  action=container subaction=list 2>/dev/null) \
  && pass "action/container/list" \
  || fail "action/container/list" "call failed"
Suggested:
npx mcporter call "${SERVER_NAME}.arcane" \
  --http-url "$MCP_URL" --header "$AUTH_HEADER" \
  action=container subaction=list > /dev/null 2>&1 \
  && pass "action/container/list" \
  || fail "action/container/list" "call failed"
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 67-67: result appears unused. Verify use (or export if used externally).
(SC2034)
[info] 70-70: Note that A && B || C is not if-then-else. C may run when A is true.
(SC2015)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_live.sh` around lines 67 - 71, The test assigns the shell variable
result from the npx mcporter call but never uses it; remove the unused
assignment to avoid dead variables by changing the command invocation that
currently begins with result=$(npx mcporter call ... ) to a plain command
invocation (npx mcporter call "${SERVER_NAME}.arcane" --http-url "$MCP_URL"
--header "$AUTH_HEADER" action=container subaction=list 2>/dev/null) so the test
uses the exit status for pass/fail, or alternatively, if you need to assert
contents, keep the assignment to result and add assertions that inspect result
(e.g., checking pagination fields) instead of leaving it unused.
1 issue found across 5 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src/utils/response.ts">
<violation number="1" location="src/utils/response.ts:12">
P1: The new fast-path uses `text.length` (UTF-16 units) instead of UTF-8 byte length, so multi-byte strings can exceed the 512KB response cap without being truncated.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
 * MAX_RESPONSE_SIZE bytes when re-encoded.
 */
export function truncateResponse(text: string): string {
  if (text.length <= MAX_RESPONSE_SIZE) return text;
P1: The new fast-path uses text.length (UTF-16 units) instead of UTF-8 byte length, so multi-byte strings can exceed the 512KB response cap without being truncated.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/utils/response.ts, line 12:
<comment>The new fast-path uses `text.length` (UTF-16 units) instead of UTF-8 byte length, so multi-byte strings can exceed the 512KB response cap without being truncated.</comment>
<file context>
@@ -9,6 +9,7 @@ const MAX_RESPONSE_SIZE = 512 * 1024; // 512KB
* MAX_RESPONSE_SIZE bytes when re-encoded.
*/
export function truncateResponse(text: string): string {
+ if (text.length <= MAX_RESPONSE_SIZE) return text;
const buf = Buffer.from(text, "utf8");
if (buf.byteLength > MAX_RESPONSE_SIZE) {
</file context>
Actionable comments posted: 6
♻️ Duplicate comments (1)
src/index.ts (1)
27-37: ⚠️ Potential issue | 🟠 Major
Graceful shutdown still exits before the listener and sessions are drained.
Line 29 still calls process.exit(0) immediately, so SIGTERM/SIGINT can cut off in-flight MCP requests instead of first closing the HTTP server and draining active sessions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` around lines 27 - 37, The shutdown handler currently calls process.exit(0) immediately in shutdown(signal), which can terminate in-flight MCP requests; instead, update shutdown to stop accepting new requests and wait for active work to drain: call server.close(...) (or httpServer.close) to stop the HTTP server, iterate over/await draining of active sessions (the sessions collection) and only call process.exit once server.close's callback or promise and all sessions have completed or after a configurable timeout; ensure both SIGTERM and SIGINT continue to call this updated shutdown(signal) and add a timeout fallback to force exit if draining hangs.
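A minimal sketch of the drain-then-exit shape this comment asks for. The `Session` type, the `sessions` registry, and the 10-second timeout are assumptions standing in for whatever src/index.ts actually tracks, not the project's real implementation:

```typescript
import http from "node:http";

// Hypothetical stand-ins for the real server and session registry.
type Session = { close(): Promise<void> };
const sessions = new Set<Session>();

export async function drainSessions(active: Set<Session>): Promise<void> {
  // Close every live session, tolerating individual failures.
  await Promise.allSettled([...active].map((s) => s.close()));
  active.clear();
}

export function installShutdown(server: http.Server, timeoutMs = 10_000): void {
  const shutdown = (signal: string): void => {
    console.error(`Received ${signal}, draining before exit...`);
    // Force-exit fallback in case draining hangs; unref so the timer
    // itself never keeps the process alive.
    setTimeout(() => process.exit(1), timeoutMs).unref();
    // Stop accepting new connections, then drain and exit.
    server.close(() => {
      void drainSessions(sessions).then(() => process.exit(0));
    });
  };
  process.on("SIGTERM", () => shutdown("SIGTERM"));
  process.on("SIGINT", () => shutdown("SIGINT"));
}
```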
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/index.ts`:
- Line 28: Replace the direct console.error call with the Pino logger to keep
structured logs: in the signal handler where console.error(`Received ${signal},
shutting down gracefully...`) is used, call logger.warn({ signal }, "Received
signal, shutting down gracefully...") instead, ensuring you reference the
existing logger instance used in this module so the structured logging pipeline
is preserved.
- Around line 240-257: The package.json dependency for `@modelcontextprotocol/sdk`
must be pinned to the exact validated version to avoid runtime breakage around
the private-field checks in src/index.ts (see webTransport._initialized and
webTransport.sessionId); update package.json to set "@modelcontextprotocol/sdk":
"1.28.0" (remove the caret), regenerate lockfile (npm install / yarn install) so
the exact version is locked, and verify package.json and the lockfile now
reference 1.28.0 before committing.
In `@src/mcp/tools/dispatch/gitops.ts`:
- Around line 23-29: The Array.isArray ternary around the result of
s.gitops.list(eid) is redundant because s.gitops.list already returns
Promise<GitOpsSync[]>; replace the conditional with a direct use of the returned
items: call paginatedResponse(items, parsePagination(params)) after awaiting
s.gitops.list(eid). Update the block handling "list" so it awaits
s.gitops.list(eid) into items and passes items directly to paginatedResponse
(using parsePagination(params)) — reference symbols: s.gitops.list, items,
paginatedResponse, parsePagination, params, eid.
In `@src/mcp/tools/dispatch/helpers.ts`:
- Around line 14-25: The current validateRelativePath(rawPath) checks the raw
string and therefore misses URL-encoded traversal like %2e%2e or encoded
slashes; update validateRelativePath to first URL-decode (and normalize) the
input, then validate that the decoded path is a string, does not start with '/',
and contains no '..' segments (and reject percent-encoded traversal by decoding
before splitting on '/'); additionally, in gitops.ts remove the no-op call to
validateRelativePath or replace it by capturing and using the validated/decoded
path when calling browseFiles (or pass the validated path into a browseFiles
overload if path filtering is intended) so the validation actually takes effect.
In `@src/mcp/tools/dispatch/volume.ts`:
- Around line 34-40: The Array.isArray guard around the result of s.volumes.list
is redundant because s.volumes.list returns Promise<VolumeInfo[]> (already
normalized to an array); update the "list" case to pass the returned items
directly into paginatedResponse (or use items ?? [] if you prefer a defensive
fallback) instead of Array.isArray(items) ? items : []; change the code around
the s.volumes.list call so paginatedResponse receives the proper VolumeInfo[]
(and keep parsePagination(params) as-is).
In `@src/utils/response.ts`:
- Around line 12-14: The early-return uses text.length (UTF-16 code units) which
can be smaller than the UTF-8 byte size; compute the UTF-8 byte length first and
use that for the guard: replace the text.length check with
Buffer.byteLength(text, "utf8") (or create buf = Buffer.from(text, "utf8") first
and use buf.byteLength) and only return early when that UTF-8 byte length <=
MAX_RESPONSE_SIZE; keep the existing buf/byteLength comparison and truncation
logic that follows, referencing MAX_RESPONSE_SIZE, text, buf, and Buffer.from.
---
Duplicate comments:
In `@src/index.ts`:
- Around line 27-37: The shutdown handler currently calls process.exit(0)
immediately in shutdown(signal), which can terminate in-flight MCP requests;
instead, update shutdown to stop accepting new requests and wait for active work
to drain: call server.close(...) (or httpServer.close) to stop the HTTP server,
iterate over/await draining of active sessions (the sessions collection) and
only call process.exit once server.close's callback or promise and all sessions
have completed or after a configurable timeout; ensure both SIGTERM and SIGINT
continue to call this updated shutdown(signal) and add a timeout fallback to
force exit if draining hangs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: e1058ddc-7327-485c-a600-6dcce73f7aab
📒 Files selected for processing (5)
src/index.ts, src/mcp/tools/dispatch/gitops.ts, src/mcp/tools/dispatch/helpers.ts, src/mcp/tools/dispatch/volume.ts, src/utils/response.ts
const transport = process.env.ARCANE_MCP_TRANSPORT ?? "http";

const shutdown = (signal: string): void => {
  console.error(`Received ${signal}, shutting down gracefully...`);
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:
cat -n src/index.ts | head -50Repository: jmagar/arcane-mcp
Length of output: 2058
🏁 Script executed:
cat -n src/utils/logger.tsRepository: jmagar/arcane-mcp
Length of output: 2081
🏁 Script executed:
rg "logger\." src/index.tsRepository: jmagar/arcane-mcp
Length of output: 412
🏁 Script executed:
cat -n src/index.ts | sed -n '25,35p'Repository: jmagar/arcane-mcp
Length of output: 422
Replace console.error with the Pino logger.
Line 28 bypasses the service's structured logging pipeline. Use logger.warn({ signal }, "Received signal, shutting down gracefully...") to maintain consistent structured logging as required by the coding guidelines for src/**/*.ts files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/index.ts` at line 28, Replace the direct console.error call with the Pino
logger to keep structured logs: in the signal handler where
console.error(`Received ${signal}, shutting down gracefully...`) is used, call
logger.warn({ signal }, "Received signal, shutting down gracefully...") instead,
ensuring you reference the existing logger instance used in this module so the
structured logging pipeline is preserved.
if (typeof webTransport._initialized !== "boolean") {
  const msg =
    "SDK incompatibility: _webStandardTransport._initialized is not a boolean. " +
    "Re-verify private field names after upgrading @modelcontextprotocol/sdk.";
  getRequestLogger({ outcome: "internal_error" }).error({}, msg);
  throw new Error(msg);
}
if (
  typeof webTransport.sessionId !== "string" &&
  webTransport.sessionId !== null &&
  webTransport.sessionId !== undefined
) {
  const msg =
    "SDK incompatibility: _webStandardTransport.sessionId is unexpected type. " +
    "Re-verify private field names after upgrading @modelcontextprotocol/sdk.";
  getRequestLogger({ outcome: "internal_error" }).error({}, msg);
  throw new Error(msg);
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
python - <<'PY'
import json
from pathlib import Path
pkg = json.loads(Path("package.json").read_text())
declared = (
pkg.get("dependencies", {}).get("@modelcontextprotocol/sdk")
or pkg.get("devDependencies", {}).get("@modelcontextprotocol/sdk")
)
print(f"package.json declares: {declared}")
PY
fd -HI '^(package-lock\.json|pnpm-lock\.yaml|yarn\.lock)$' . -x sh -c '
file="$1"
echo
echo "==> $file"
case "$file" in
*.json)
python - "$file" <<'"'"'PY'"'"'
import json
import sys
from pathlib import Path
data = json.loads(Path(sys.argv[1]).read_text())
versions = set()
def walk(node):
if isinstance(node, dict):
if node.get("name") == "@modelcontextprotocol/sdk" and "version" in node:
versions.add(node["version"])
for value in node.values():
walk(value)
elif isinstance(node, list):
for value in node:
walk(value)
walk(data)
print("resolved versions:", sorted(versions) or ["<not found>"])
PY
;;
*)
rg -n "@modelcontextprotocol/sdk|version:" "$file" -C2 || true
;;
esac
' sh {}Repository: jmagar/arcane-mcp
Length of output: 151
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check package-lock.json structure for the SDK
echo "=== Searching package-lock.json for SDK references ==="
if [ -f package-lock.json ]; then
python3 - <<'PY'
import json
from pathlib import Path
data = json.loads(Path("package-lock.json").read_text())
# Check top-level packages
if "packages" in data:
sdk_entries = {k: v for k, v in data["packages"].items() if "@modelcontextprotocol/sdk" in k}
if sdk_entries:
for key, entry in sdk_entries.items():
print(f"Top-level: {key}")
if "version" in entry:
print(f" version: {entry['version']}")
else:
print("SDK not found in packages section")
else:
print("No 'packages' section found")
# Also check dependencies section for version info
if "dependencies" in data:
sdk_dep = data["dependencies"].get("@modelcontextprotocol/sdk", {})
if sdk_dep:
print(f"\nDependencies section: {sdk_dep}")
PY
else
echo "package-lock.json not found"
fi
echo
echo "=== Trying npm ls to check actual resolved version ==="
npm ls @modelcontextprotocol/sdk 2>&1 | head -20 || echo "npm ls command failed or not available"
Length of output: 426
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify the actual code at lines 240-257 in src/index.ts
echo "=== src/index.ts lines 240-257 ==="
sed -n '240,257p' src/index.ts
# Also check if there are other private SDK field accesses
echo
echo "=== Other references to _webStandardTransport or private SDK fields ==="
rg '_webStandardTransport|_initialized|\.sessionId' src/index.ts -n -C 2
Repository: jmagar/arcane-mcp
Length of output: 3309
Pin @modelcontextprotocol/sdk to the exact validated version.
Lines 240–257 add defensive checks for private field access, but package.json still declares ^1.12.0, allowing fresh installs to resolve to any 1.x version beyond the validated 1.28.0. While the guards will fail loudly if field structure changes, this creates unnecessary deployment fragility. Pin the dependency to "@modelcontextprotocol/sdk": "1.28.0" to guarantee consistency across builds.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/index.ts` around lines 240 - 257, The package.json dependency for
`@modelcontextprotocol/sdk` must be pinned to the exact validated version to avoid
runtime breakage around the private-field checks in src/index.ts (see
webTransport._initialized and webTransport.sessionId); update package.json to
set "@modelcontextprotocol/sdk": "1.28.0" (remove the caret), regenerate
lockfile (npm install / yarn install) so the exact version is locked, and verify
package.json and the lockfile now reference 1.28.0 before committing.
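To make the semver mechanics behind this recommendation concrete, here is a minimal sketch of caret-range matching (`caretAllows` is a simplified, hypothetical helper; real semver also handles prerelease tags and 0.x majors, which this sketch ignores):

```typescript
// Simplified caret-range check: "^base" allows any version with the same
// major component that is >= base. (Real semver additionally special-cases
// 0.x majors and prerelease tags; this sketch ignores both.)
function parse(v: string): [number, number, number] {
  const [major, minor, patch] = v.split(".").map(Number);
  return [major, minor, patch];
}

function caretAllows(base: string, candidate: string): boolean {
  const [bMaj, bMin, bPat] = parse(base);
  const [cMaj, cMin, cPat] = parse(candidate);
  if (cMaj !== bMaj) return false;
  if (cMin !== bMin) return cMin > bMin;
  return cPat >= bPat;
}

// "^1.12.0" in package.json lets a fresh install resolve 1.28.0 (or any
// later 1.x), while a pinned "1.12.0" would not.
console.log(caretAllows("1.12.0", "1.28.0")); // true
console.log(caretAllows("1.12.0", "2.0.0"));  // false
```

This is exactly why a caret range plus private-field access is fragile: the validated code and the installed code can silently diverge within the allowed range.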
```ts
case "list": {
  const items = await s.gitops.list(eid);
  return paginatedResponse(
    Array.isArray(items) ? items : [],
    parsePagination(params),
  );
}
```
🧹 Nitpick | 🔵 Trivial
Array.isArray check is unnecessary.
s.gitops.list() has return type Promise<GitOpsSync[]> and its implementation uses res.data ?? [], guaranteeing an array. The ternary fallback to [] is dead code.
```diff
 case "list": {
   const items = await s.gitops.list(eid);
-  return paginatedResponse(
-    Array.isArray(items) ? items : [],
-    parsePagination(params),
-  );
+  return paginatedResponse(items, parsePagination(params));
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
case "list": {
  const items = await s.gitops.list(eid);
  return paginatedResponse(items, parsePagination(params));
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mcp/tools/dispatch/gitops.ts` around lines 23 - 29, The Array.isArray
ternary around the result of s.gitops.list(eid) is redundant because
s.gitops.list already returns Promise<GitOpsSync[]>; replace the conditional
with a direct use of the returned items: call paginatedResponse(items,
parsePagination(params)) after awaiting s.gitops.list(eid). Update the block
handling "list" so it awaits s.gitops.list(eid) into items and passes items
directly to paginatedResponse (using parsePagination(params)) — reference
symbols: s.gitops.list, items, paginatedResponse, parsePagination, params, eid.
```ts
export function validateRelativePath(rawPath: unknown): string {
  if (typeof rawPath !== "string")
    throw new ValidationError("path must be a string");
  if (rawPath.startsWith("/"))
    throw new ValidationError(
      "path must be a relative path (must not start with /)",
    );
  if (rawPath.split("/").some((seg) => seg === ".."))
    throw new ValidationError(
      "path must not contain '..' path traversal segments",
    );
  return rawPath;
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify if the backend performs URL decoding on the path query parameter
# Check how the path is ultimately used in the service layer
# Look for path handling in volume and gitops services
rg -n -C5 'browseDirectory|browseFiles' src/services/
# Check if there's any decoding in the HTTP client
rg -n -C3 'decodeURIComponent|decodeURI' src/
Repository: jmagar/arcane-mcp
Length of output: 1387
🏁 Script executed:
# Find where validateRelativePath is used/imported
rg -n 'validateRelativePath' src/
# Look for dispatch-related files that might use this helper
fd -e ts src/mcp/tools/dispatch/
Repository: jmagar/arcane-mcp
Length of output: 994
🏁 Script executed:
# Read the gitops dispatcher to see path validation and usage
sed -n '1,80p' src/mcp/tools/dispatch/gitops.ts
# Read the volume dispatcher path handling
sed -n '80,110p' src/mcp/tools/dispatch/volume.ts
Repository: jmagar/arcane-mcp
Length of output: 2523
🏁 Script executed:
# Check gitops.browseFiles implementation - does it accept path parameter?
sed -n '75,90p' src/services/gitops.ts
# Check volumes.browseDirectory implementation and how path is handled
sed -n '99,120p' src/services/volumes.ts
Repository: jmagar/arcane-mcp
Length of output: 727
🏁 Script executed:
# Find the HTTP client implementation - look for client initialization
rg -n 'this.client' src/services/volumes.ts | head -5
# Find where client is created/configured
rg -n 'class.*Service|constructor' src/services/volumes.ts | head -10
# Look for axios or HTTP client imports/setup
rg -n 'import.*http|axios|fetch' src/services/ | head -10
Repository: jmagar/arcane-mcp
Length of output: 884
🏁 Script executed:
# Check ArcaneClient get() method implementation
sed -n '1,200p' src/services/arcane-client.ts | grep -A 20 'get('
Repository: jmagar/arcane-mcp
Length of output: 43
🏁 Script executed:
# Read the full ArcaneClient file to understand the get() method
wc -l src/services/arcane-client.ts
# Then read it in sections
head -150 src/services/arcane-client.ts
Repository: jmagar/arcane-mcp
Length of output: 4817
URL-encoded path traversal bypasses validation, and gitops path validation has no effect.
The validateRelativePath() check for .. operates on the raw user input before URL encoding. An attacker can supply %2e%2e (URL-encoded ..) which passes validation. When the backend's HTTP framework decodes the query parameter, it recovers .. and achieves path traversal. Similarly, foo%2f..%2fbar bypasses the split-on-/ check.
Additionally, in gitops.ts line 50, validateRelativePath() is called but the result is not used, and the browseFiles() method doesn't accept a path parameter—this validation has no effect.
Defensively decode the path before validation:
🛡️ Proposed fix: decode before validating
```diff
 export function validateRelativePath(rawPath: unknown): string {
   if (typeof rawPath !== "string")
     throw new ValidationError("path must be a string");
-  if (rawPath.startsWith("/"))
+
+  // Decode to catch encoded traversal sequences (%2e%2e, %2f, etc.)
+  let decoded: string;
+  try {
+    decoded = decodeURIComponent(rawPath);
+  } catch {
+    throw new ValidationError("path contains invalid percent-encoding");
+  }
+
+  if (decoded.startsWith("/"))
     throw new ValidationError(
       "path must be a relative path (must not start with /)",
     );
-  if (rawPath.split("/").some((seg) => seg === ".."))
+  if (decoded.split("/").some((seg) => seg === ".."))
     throw new ValidationError(
       "path must not contain '..' path traversal segments",
     );
-  return rawPath;
+
+  // Return the decoded, validated path
+  return decoded;
 }
```

Also remove the unused validation call in gitops.ts line 50 or wire the validated path to the browseFiles() method if path filtering is intended.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mcp/tools/dispatch/helpers.ts` around lines 14 - 25, The current
validateRelativePath(rawPath) checks the raw string and therefore misses
URL-encoded traversal like %2e%2e or encoded slashes; update
validateRelativePath to first URL-decode (and normalize) the input, then
validate that the decoded path is a string, does not start with '/', and
contains no '..' segments (and reject percent-encoded traversal by decoding
before splitting on '/'); additionally, in gitops.ts remove the no-op call to
validateRelativePath or replace it by capturing and using the validated/decoded
path when calling browseFiles (or pass the validated path into a browseFiles
overload if path filtering is intended) so the validation actually takes effect.
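The bypass described above is easy to reproduce standalone (plain Node/TypeScript; `naiveHasTraversal` is an illustrative stand-in for the raw-string check, not the project's actual helper):

```typescript
// A naive segment check on the raw string misses percent-encoded traversal.
const naiveHasTraversal = (p: string): boolean =>
  p.split("/").some((seg) => seg === "..");

const raw = "repo/%2e%2e/%2e%2e/etc/passwd"; // "%2e" is an encoded "."

console.log(naiveHasTraversal(raw));                     // false (check passes)
console.log(naiveHasTraversal(decodeURIComponent(raw))); // true (traversal revealed)

// decodeURIComponent throws URIError on malformed input, which is why the
// proposed fix wraps it and rejects invalid percent-encoding.
try {
  decodeURIComponent("%zz");
} catch {
  console.log("invalid percent-encoding rejected");
}
```

The same mechanism covers the `foo%2f..%2fbar` case: the `/` separators only appear after decoding, so any split-on-`/` validation must run on the decoded string.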
```ts
case "list": {
  const items = await s.volumes.list(eid);
  return paginatedResponse(
    Array.isArray(items) ? items : [],
    parsePagination(params),
  );
}
```
🧹 Nitpick | 🔵 Trivial
Array.isArray check is unnecessary (same as gitops.ts).
s.volumes.list() has return type Promise<VolumeInfo[]> and guarantees an array via res.data ?? []. The fallback is dead code.
```diff
 case "list": {
   const items = await s.volumes.list(eid);
-  return paginatedResponse(
-    Array.isArray(items) ? items : [],
-    parsePagination(params),
-  );
+  return paginatedResponse(items, parsePagination(params));
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
case "list": {
  const items = await s.volumes.list(eid);
  return paginatedResponse(items, parsePagination(params));
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mcp/tools/dispatch/volume.ts` around lines 34 - 40, The Array.isArray
guard around the result of s.volumes.list is redundant because s.volumes.list
returns Promise<VolumeInfo[]> (already normalized to an array); update the
"list" case to pass the returned items directly into paginatedResponse (or use
items ?? [] if you prefer a defensive fallback) instead of Array.isArray(items)
? items : []; change the code around the s.volumes.list call so
paginatedResponse receives the proper VolumeInfo[] (and keep
parsePagination(params) as-is).
```ts
if (text.length <= MAX_RESPONSE_SIZE) return text;
const buf = Buffer.from(text, "utf8");
if (buf.byteLength > MAX_RESPONSE_SIZE) {
```
🧩 Analysis chain
🌐 Web query:
In Node.js/JavaScript, does `string.length` represent UTF-8 bytes or UTF-16 code units, and can a string with `.length <= N` still encode to more than `N` UTF-8 bytes?
💡 Result:
In Node.js/JavaScript, `string.length` represents the number of UTF-16 code units, not UTF-8 bytes. Yes, a string with `.length <= N` can still encode to more than `N` UTF-8 bytes: every non-ASCII character occupies one or two UTF-16 code units but encodes to two to four UTF-8 bytes, so multi-byte UTF-8 characters inflate the byte count past the code-unit count.
Citations:
- 1: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String/length
- 2: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/length
- 3: https://www.npmjs.com/package/string-byte-length?activeTab=code
- 4: https://hsivonen.fi/string-length/
🏁 Script executed:
cat -n src/utils/response.ts
Repository: jmagar/arcane-mcp
Length of output: 1419
Early-exit guard bypassed on multibyte payloads.
Line 12 compares text.length (UTF-16 code units) directly to MAX_RESPONSE_SIZE, violating the documented UTF-8 byte limit. A string with .length ≤ 524,288 can exceed 512KB when UTF-8 encoded, yet the guard incorrectly returns it untruncated.
Proposed fix
```diff
 export function truncateResponse(text: string): string {
-  if (text.length <= MAX_RESPONSE_SIZE) return text;
+  // Safe early return: worst-case UTF-8 expansion is 3 bytes per UTF-16 code unit.
+  if (text.length <= Math.floor(MAX_RESPONSE_SIZE / 3)) return text;
   const buf = Buffer.from(text, "utf8");
   if (buf.byteLength > MAX_RESPONSE_SIZE) {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/response.ts` around lines 12 - 14, The early-return uses
text.length (UTF-16 code units) which can be smaller than the UTF-8 byte size;
compute the UTF-8 byte length first and use that for the guard: replace the
text.length check with Buffer.byteLength(text, "utf8") (or create buf =
Buffer.from(text, "utf8") first and use buf.byteLength) and only return early
when that UTF-8 byte length <= MAX_RESPONSE_SIZE; keep the existing
buf/byteLength comparison and truncation logic that follows, referencing
MAX_RESPONSE_SIZE, text, buf, and Buffer.from.
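The mismatch between the two length measures can be demonstrated directly (a standalone Node sketch, independent of `truncateResponse`):

```typescript
// `.length` counts UTF-16 code units; Buffer.byteLength counts UTF-8 bytes.
const ascii = "a".repeat(10);
console.log(ascii.length, Buffer.byteLength(ascii, "utf8")); // 10 10

// U+1F600 is 2 UTF-16 code units (a surrogate pair) but 4 UTF-8 bytes.
const emoji = "😀".repeat(10);
console.log(emoji.length, Buffer.byteLength(emoji, "utf8")); // 20 40

// So a guard of the form `text.length <= LIMIT` can pass while the UTF-8
// encoding is up to 3x larger (worst case: 3 bytes per BMP code unit).
```

This is why the safe early-return threshold above divides the byte limit by 3 before comparing against `.length`.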
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.mcp.json:
- Around line 1-11: The two config sources (.mcp.json and
.claude-plugin/plugin.json) are duplicated for the Arcane MCP URL/token; pick
one source of truth and update the other to consume it: either make the Claude
plugin read the existing mcpServers.arcane-mcp.url and headers.Authorization
values from .mcp.json (or a shared constants/build-time generated config), or
remove the hardcoded "default" arcane_mcp_url/arcane_mcp_token entries in
.claude-plugin/plugin.json and document that .mcp.json supplies
arcane_mcp_url/arcane_mcp_token; ensure you change the Claude plugin’s config
keys (arcane_mcp_url, arcane_mcp_token) to reference mcpServers.arcane-mcp.url
and mcpServers.arcane-mcp.headers.Authorization (or the shared constant names)
so there is a single authoritative config source.
- Around line 1-11: The MCP config uses bash-style default expansion in the
"url" entry for "arcane-mcp" under "mcpServers" (the string
"${ARCANE_MCP_URL:-http://localhost:44332/mcp}"), which MCP does not support;
replace that value with a simple "${ARCANE_MCP_URL}" (or a hardcoded fallback
URL like "http://localhost:44332/mcp" if you need an inline default) and manage
any desired default via environment configuration or docs so MCP can correctly
expand the variable.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
```json
{
  "mcpServers": {
    "arcane-mcp": {
      "type": "http",
      "url": "${ARCANE_MCP_URL:-http://localhost:44332/mcp}",
      "headers": {
        "Authorization": "Bearer ${ARCANE_MCP_TOKEN}"
      }
    }
  }
}
```
🧹 Nitpick | 🔵 Trivial
Configuration duplication with .claude-plugin/plugin.json.
This .mcp.json file is referenced by .codex-plugin/plugin.json but NOT by .claude-plugin/plugin.json, which maintains its own independent userConfig section with hardcoded defaults for arcane_mcp_url and arcane_mcp_token.
This creates two sources of truth:
- `.mcp.json`: `${ARCANE_MCP_URL:-http://localhost:44332/mcp}` (line 5)
- `.claude-plugin/plugin.json`: `"default": "http://localhost:44332/mcp"` (context snippet 2, line 18)
Any future changes to default URLs, token handling, or transport configuration must be manually synchronized between these files, increasing maintenance burden and risk of drift.
Consider one of these approaches:
- Have `.claude-plugin/plugin.json` also reference `.mcp.json` (if the plugin system supports it)
- Document that these configs must be kept in sync
- Use a shared constants file or build-time generation to eliminate duplication
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.mcp.json around lines 1 - 11, The two config sources (.mcp.json and
.claude-plugin/plugin.json) are duplicated for the Arcane MCP URL/token; pick
one source of truth and update the other to consume it: either make the Claude
plugin read the existing mcpServers.arcane-mcp.url and headers.Authorization
values from .mcp.json (or a shared constants/build-time generated config), or
remove the hardcoded "default" arcane_mcp_url/arcane_mcp_token entries in
.claude-plugin/plugin.json and document that .mcp.json supplies
arcane_mcp_url/arcane_mcp_token; ensure you change the Claude plugin’s config
keys (arcane_mcp_url, arcane_mcp_token) to reference mcpServers.arcane-mcp.url
and mcpServers.arcane-mcp.headers.Authorization (or the shared constant names)
so there is a single authoritative config source.
🧩 Analysis chain
🌐 Web query:
What environment variable expansion syntax does the Model Context Protocol (MCP) JSON configuration format support? Specifically, does it support bash-style defaults like ${VAR:-default}?
💡 Result:
The Model Context Protocol (MCP) JSON configuration format, used in tools like Claude Code, Cursor, and VS Code for mcp.json, supports environment variable expansion using the syntax ${VAR}, as documented in official Anthropic docs and various client implementations. This allows referencing shell environment variables in fields like command args, env values, headers, etc., e.g., "GITHUB_TOKEN": "${GITHUB_TOKEN}" or "Authorization": "Bearer ${API_TOKEN}". Expansion occurs at runtime by the host application (e.g., Claude Code from version 1.0.48). There is no evidence in official MCP specification, Anthropic docs, VS Code docs, Cursor forums, or GitHub issues that bash-style defaults like ${VAR:-default} are supported. Sources consistently show only simple ${VAR} substitution, treating unset variables as empty strings (e.g., bugs reported when expansion fails, but no mention of defaults). Advanced templating like Handlebars {{var}} in third-party tools (e.g., 1mcp-app) uses different syntax and is not standard MCP JSON config. Workarounds for secrets use scripts or inputs, not defaults in JSON.
Citations:
- 1: https://docs.anthropic.com/en/docs/claude-code/sdk/sdk-mcp
- 2: [BUG] variable expansion not working in `.mcp.json` anthropics/claude-code#3239
- 3: [FEATURE] MCP Servers configuration: add expanding variables support anthropics/claude-code#916
- 4: https://code.visualstudio.com/docs/copilot/reference/mcp-configuration
- 5: https://forum.cursor.com/t/how-to-use-environment-variables-in-mcp-json/79296
- 6: Support environment variable expansion openai/codex#2680
- 7: Feature Request: Support environment variable expansion in mcp.json kirodotdev/Kiro#6635
🏁 Script executed:
#!/bin/bash
# Search for MCP JSON config parsing logic in the codebase
rg -n -C5 'mcp\.json|mcpServers' --type=ts --type=js -g '!node_modules'
Repository: jmagar/arcane-mcp
Length of output: 43
Remove bash-style default syntax from environment variable expansion.
MCP JSON configuration supports only simple ${VAR} environment variable substitution, not bash-style defaults like ${VAR:-default}. The ${ARCANE_MCP_URL:-http://localhost:44332/mcp} syntax on line 5 will not be parsed correctly—MCP will treat it as a literal string or fail to properly expand the variable.
Change line 5 to use only ${ARCANE_MCP_URL} and set the default value through environment configuration, build defaults, or documentation instead. If a hardcoded fallback is required within the config file, use a static URL directly without variable expansion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.mcp.json around lines 1 - 11, The MCP config uses bash-style default
expansion in the "url" entry for "arcane-mcp" under "mcpServers" (the string
"${ARCANE_MCP_URL:-http://localhost:44332/mcp}"), which MCP does not support;
replace that value with a simple "${ARCANE_MCP_URL}" (or a hardcoded fallback
URL like "http://localhost:44332/mcp" if you need an inline default) and manage
any desired default via environment configuration or docs so MCP can correctly
expand the variable.
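A minimal sketch of the simple-substitution behavior described above (a hypothetical expander, not any client's actual implementation) shows why the `:-` form falls through:

```typescript
// Simple `${VAR}` substitution only: the variable name must run straight
// from `${` to `}`. Anything else (like bash's `${VAR:-default}`) does not
// match the pattern and is left behind as a literal string.
function expandSimple(input: string, env: Record<string, string>): string {
  return input.replace(
    /\$\{([A-Za-z_][A-Za-z0-9_]*)\}/g,
    (_match, name: string) => env[name] ?? "",
  );
}

console.log(expandSimple("Bearer ${ARCANE_MCP_TOKEN}", { ARCANE_MCP_TOKEN: "abc" }));
// "Bearer abc"
console.log(expandSimple("${ARCANE_MCP_URL:-http://localhost:44332/mcp}", {}));
// unchanged: the ":-" stops the pattern from matching at all
```

Under this model an unset variable expands to an empty string, matching the behavior reported in the cited issues, while a bash-style default survives verbatim into the URL.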
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name=".mcp.json">
<violation number="1" location=".mcp.json:5">
P1: MCP JSON configuration only supports simple `${VAR}` environment variable substitution—bash-style defaults like `${VAR:-default}` are not supported. The `:-http://localhost:44332/mcp` portion will not be parsed as a fallback; the variable will either fail to resolve or the entire string (including `:-...`) will be treated as the variable name, resulting in an incorrect URL.
Use `${ARCANE_MCP_URL}` here and set the default value through environment configuration (e.g., in `.env` or documentation) instead.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
.mcp.json
Outdated
```json
"mcpServers": {
  "arcane-mcp": {
    "type": "http",
    "url": "${ARCANE_MCP_URL:-http://localhost:44332/mcp}",
```
P1: MCP JSON configuration only supports simple ${VAR} environment variable substitution—bash-style defaults like ${VAR:-default} are not supported. The :-http://localhost:44332/mcp portion will not be parsed as a fallback; the variable will either fail to resolve or the entire string (including :-...) will be treated as the variable name, resulting in an incorrect URL.
Use ${ARCANE_MCP_URL} here and set the default value through environment configuration (e.g., in .env or documentation) instead.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .mcp.json, line 5:
<comment>MCP JSON configuration only supports simple `${VAR}` environment variable substitution—bash-style defaults like `${VAR:-default}` are not supported. The `:-http://localhost:44332/mcp` portion will not be parsed as a fallback; the variable will either fail to resolve or the entire string (including `:-...`) will be treated as the variable name, resulting in an incorrect URL.
Use `${ARCANE_MCP_URL}` here and set the default value through environment configuration (e.g., in `.env` or documentation) instead.</comment>
<file context>
@@ -2,9 +2,9 @@
"arcane-mcp": {
"type": "http",
- "url": "${user_config.arcane_mcp_url}",
+ "url": "${ARCANE_MCP_URL:-http://localhost:44332/mcp}",
"headers": {
- "Authorization": "Bearer ${user_config.arcane_mcp_token}"
</file context>
```diff
-      "url": "${ARCANE_MCP_URL:-http://localhost:44332/mcp}",
+      "url": "${ARCANE_MCP_URL}",
```
1 issue found across 18 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name=".mcp.json">
<violation number="1" location=".mcp.json:5">
P1: This MCP client config no longer sends the required Authorization header, so requests will be rejected when default auth is enabled.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
.mcp.json
Outdated
```json
"mcpServers": {
  "arcane-mcp": {
    "type": "http",
    "url": "${ARCANE_MCP_URL:-https://arcane.tootie.tv/mcp}"
```
P1: This MCP client config no longer sends the required Authorization header, so requests will be rejected when default auth is enabled.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .mcp.json, line 5:
<comment>This MCP client config no longer sends the required Authorization header, so requests will be rejected when default auth is enabled.</comment>
<file context>
@@ -2,10 +2,7 @@
- "headers": {
- "Authorization": "Bearer ${ARCANE_MCP_TOKEN}"
- }
+ "url": "${ARCANE_MCP_URL:-https://arcane.tootie.tv/mcp}"
}
}
</file context>
```diff
-      "url": "${ARCANE_MCP_URL:-https://arcane.tootie.tv/mcp}"
+      "url": "${ARCANE_MCP_URL:-https://arcane.tootie.tv/mcp}",
+      "headers": {
+        "Authorization": "Bearer ${ARCANE_MCP_TOKEN}"
+      }
```
- Fix root-user detection regex to catch USER 0 and USER 0:0 in final stage (P1)
- Fix check-no-baked-env.sh grep regex portability: replace \s with [[:space:]] (P1)
- Add backward-compatible AUTH_TOKEN fallback for ARCANE_MCP_TOKEN (P2)
- Fix LICENSE copyright attribution from gotify-mcp to arcane-mcp (P2)
- Fix lint-plugin.sh section 13 to check both docker-compose.yaml and .yml (P2)
- Add jq prerequisite check in lint-plugin.sh with clear error message (P2)
- Add missing destructive actions container:restart and environment:delete to SKILL.md (P2)
- Fix docker-compose.yaml network name default from arcane-mcp to arcane_mcp (P2)
You're iterating quickly on this pull request. To help protect your rate limits, cubic has paused automatic reviews on new pushes for now—when you're ready for another review, comment `@cubic-dev-ai review`.
Actionable comments posted: 9
♻️ Duplicate comments (4)
.gitignore (1)
7-8: ⚠️ Potential issue | 🟠 Major

Restore the exact ignore patterns required by CI checks.

The current entries at Line 7, Line 8, and Line 26 don't match the required patterns enforced by `ensure-ignore-files.sh --check`, which is why CI is failing. Add the exact wildcard/negation pairs for runtime dirs and include `.worktrees` (no trailing slash).

✅ Proposed patch

```diff
 # ── Runtime artifacts ────────────────────────────────────────────────────────
-logs/
-backups/
+logs/*
+!logs/.gitkeep
+backups/*
+!backups/.gitkeep
 *.log
 *.pid
 *.bak
 *.bak-*
@@
 .serena/
+.worktrees
 .worktrees/
 .full-review/
 .full-review-archive-*
```

Also applies to: 26-26
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.gitignore around lines 7 - 8, Replace the current simple directory ignores ("logs/" and "backups/") with the canonical wildcard/negation pairs required by the CI check and add the missing ".worktrees" entry (no trailing slash); specifically edit the .gitignore entries that currently show logs/ and backups/ so they use the runtime-dir wildcard + negation pair format used by ensure-ignore-files.sh --check, and add a line containing ".worktrees" (exactly, without a trailing slash).

skills/arcane/SKILL.md (1)
50-59: ⚠️ Potential issue | 🟠 Major

Volume operations are still missing backup/restore/browse subactions.

`src/mcp/tools/arcane.ts` exposes `list-backups`, `create-backup`, `delete-backup`, `restore`, `restore-files`, and `browse`, but this section still advertises only `list|get|create|delete|prune`. That leaves the bundled skill with an incomplete contract for some of the highest-value volume workflows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/arcane/SKILL.md` around lines 50 - 59, The Volume operations section in SKILL.md is incomplete—update the list of allowed subactions to match the implementations exported by src/mcp/tools/arcane.ts: include "list-backups", "create-backup", "delete-backup", "restore", "restore-files", and "browse" and document required parameters for each (e.g., backup id for delete-backup/restore, target envId/name for create-backup/restore, paths for restore-files, browse path/options). Reference the subaction names exactly as implemented (list-backups, create-backup, delete-backup, restore, restore-files, browse) and add brief required-field notes (envId, id/name, paths) so the skill contract aligns with the code.

docker-compose.yaml (1)
10-18: ⚠️ Potential issue | 🟠 Major

The published port and healthcheck cannot follow `env_file` values here.

Lines 11 and 18 interpolate `${PORT:-3000}`, but Compose resolves `${...}` from the shell / project `.env`, while service-level `env_file` is only used to pass variables into the container. Since `src/utils/env.ts` binds from `ARCANE_MCP_BIND_PORT`, a bind port set only in `~/.claude-homelab/.env` will still publish and health-check `3000`, leaving the service unreachable. Either keep the container port fixed at `3000` here or surface the bind port as a Compose interpolation variable too. (docs.docker.com)

One safe fix if the container port is intended to stay constant

```diff
 ports:
-  - "${ARCANE_MCP_PORT:-44332}:${PORT:-3000}/tcp"
+  - "${ARCANE_MCP_PORT:-44332}:3000/tcp"
@@
 healthcheck:
-  test: ["CMD-SHELL", "wget -q --spider http://127.0.0.1:${PORT:-3000}/health || exit 1"]
+  test: ["CMD-SHELL", "wget -q --spider http://127.0.0.1:3000/health || exit 1"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker-compose.yaml` around lines 10 - 18, The compose file is interpolating ${PORT:-3000} for published port and healthcheck which reads only shell/project .env, not the service-level env_file used by src/utils/env.ts (ARCANE_MCP_BIND_PORT), so runtime bind port mismatches can make the service unreachable; fix by either hardcoding the container port to 3000 in the ports and healthcheck entries (leave host-side variable ARCANE_MCP_PORT or ARCANE_MCP_BIND_PORT only for binding logic inside the container), or export the bind port as a Compose interpolation variable (e.g., add PORT or ARCANE_MCP_BIND_PORT to the Compose project env or top-level env_file) so the published port and the healthcheck use the same interpolated variable as src/utils/env.ts.

scripts/check-no-baked-env.sh (1)
99-100: ⚠️ Potential issue | 🟠 Major

The secret-detection regexes still miss supported Dockerfile/command forms.

Docker still accepts `ENV <key> <value>` in addition to `ENV key=value`, but Line 100 only scans the `=` form. The CMD/ENTRYPOINT detector at Lines 147-148 can still miss plain baked credentials like `CMD app --token supersecret`, so this script can PASS while secrets remain in the image recipe. (docs.docker.com)

Proposed fix

```diff
-  SENSITIVE_ENVS=$(grep -nE "^ENV\s+\S*${SENSITIVE_RE}\S*\s*=" "$DOCKERFILE" || true)
+  SENSITIVE_ENVS=$(grep -nE "^ENV[[:space:]]+\S*${SENSITIVE_RE}\S*([[:space:]]*=|[[:space:]]+)" "$DOCKERFILE" || true)
@@
-  CRED_PATTERNS='(password|secret|token|api[_-]?key)[=: ]+[^${[:space:]]'
-  HARDCODED_CREDS=$(grep -inE "^(CMD|ENTRYPOINT)\s+.*${CRED_PATTERNS}" "$DOCKERFILE" || true)
+  CRED_PATTERNS='(password|secret|token|api[_-]?key)[=:[[:space:]]]+[^$[:space:]]'
+  HARDCODED_CREDS=$(grep -inE "^(CMD|ENTRYPOINT)[[:space:]]+.*${CRED_PATTERNS}" "$DOCKERFILE" || true)
```

Also applies to: 147-148
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/check-no-baked-env.sh` around lines 99 - 100, The ENV regex only matches the key=value form (SENSITIVE_ENVS assignment) and thus misses Dockerfile’s allowed "ENV <key> <value>" form; update the SENSITIVE_ENVS grep to use a combined regex that accepts either "ENV\s+<key>\s*=\s*" or "ENV\s+<key>\s+<value>" (e.g. allow either '=' or whitespace between key and value) and ensure it still anchors against the SENSITIVE_RE names; likewise update the CMD/ENTRYPOINT detector (the grep that scans CMD/ENTRYPOINT lines) to also look for sensitive option names or keys followed by a space (e.g. "--token\s+\S+" or "<KEY>\s+\S+") in addition to the key=value form so plain-space baked credentials are caught.
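The two `ENV` forms the fix has to cover can be exercised with an equivalent regex translated to TypeScript for illustration (the script itself uses `grep -E`; this pattern is an approximation, not the exact `SENSITIVE_RE`):

```typescript
// Matches `ENV KEY=value` as well as the space-separated `ENV KEY value`
// form, for key names containing a sensitive word.
const sensitiveEnv =
  /^ENV[ \t]+\S*(TOKEN|SECRET|PASSWORD|API[_-]?KEY)\S*([ \t]*=|[ \t]+\S)/i;

console.log(sensitiveEnv.test("ENV API_TOKEN=abc123"));    // true  (= form)
console.log(sensitiveEnv.test("ENV API_TOKEN abc123"));    // true  (space form)
console.log(sensitiveEnv.test("ENV NODE_ENV=production")); // false (benign key)
```

The trailing alternation `([ \t]*=|[ \t]+\S)` is the essence of the proposed grep change: accept either an `=` after the key or whitespace followed by a value.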
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/docker-publish.yml:
- Around line 54-64: Remove the trailing blank line at the end of the GitHub
Actions workflow block for the "Build and push Docker image" step (the step
using docker/build-push-action@v5) so the file ends immediately after the last
"cache-to: type=gha,mode=max" line; simply delete the extra newline at the end
of the file to satisfy yamllint.
In @.mcp.json:
- Around line 3-6: The HTTP MCP entry "arcane-mcp" only sets the "url" and lacks
the Authorization header so requests can't authenticate; update the "arcane-mcp"
definition to add a headers block with Authorization set to "Bearer
${ARCANE_MCP_TOKEN}" (using env expansion) so the service will send the bearer
token on requests; ensure you add the "headers" object alongside "url" and set
"Authorization" to the expanded env variable.
In `@scripts/check-docker-security.sh`:
- Around line 101-107: The docker-compose fallback currently only checks for
"docker-compose.yaml" and will miss "docker-compose.yml"; update the conditional
that looks for docker-compose files and the subsequent grep check so it
considers both filenames (e.g., "docker-compose.yaml" and "docker-compose.yml")
before deciding to call warn(...) or fail(...), ensuring the grep -qE
'^\s+user:' check runs against whichever compose file exists; adjust the
existence check and grep target used in the block containing the warn "Non-root
user" and fail "Non-root user" calls accordingly.
In `@scripts/lint-plugin.sh`:
- Around line 19-34: The documentation and implementation in
scripts/lint-plugin.sh have a missing/skip of section 13; update the usage
documentation list and the corresponding inline section comments in the script
so numbering is continuous (e.g., renumber 14–16 to 13–15 or add a new 13
entry), and adjust any section comment identifiers in the implementation
(references around the section comment blocks used for checks and the comment
markers near the checks for docker-compose/env/.env/etc.) so all section labels
match the updated documentation and implementation numbering consistently.
In `@skills/arcane/SKILL.md`:
- Around line 101-120: Update the Destructive operations list in SKILL.md to
match the server's actual confirmation gate by adding the missing action names:
include volume:delete-backup, volume:restore, volume:restore-files,
system:stop-all, registry:delete, gitops:delete, and gitops:sync alongside the
existing entries (e.g., container:delete, image:delete, project:destroy) so the
documentation reflects the true confirmation contract agents must follow.
- Around line 93-99: The System operations section in SKILL.md lists subactions
"info | version | ping" that don't match the actual subaction enum declared in
src/mcp/tools/arcane.ts; either update the SKILL.md "System operations" block to
list the exact subaction values accepted by the subaction enum in arcane.ts, or
modify the subaction enum in src/mcp/tools/arcane.ts to include "info",
"version", and "ping" (and ensure any handling code for those cases exists,
e.g., switch/case handlers that use the subaction value). Ensure the documented
token `subaction` in SKILL.md and the enum named `subaction` (or the exported
enum/type) in arcane.ts are identical.
- Around line 29-35: The fenced code blocks in SKILL.md (e.g., the tool-contract
blocks starting with lines showing "mcp__arcane-mcp__arcane" and the other
similar blocks referenced at ranges 42-48, 52-59, 63-69, 74-80, 84-91, 95-99,
121-128) are unlabeled and trigger markdownlint MD040; update each
triple-backtick fence to include the language label "text" (e.g., change ``` to
```text) so the examples remain identical but are lint-clean; locate the blocks
by their unique starting text like "mcp__arcane-mcp__arcane" and other
tool-contract headings and add the "text" label to each corresponding opening
fence.
In `@src/utils/env.ts`:
- Line 35: The ARCANE_API_URL initialization uses
normalizeServiceUrl(requireEnv("ARCANE_API_URL")) but relies on a weaker Docker
check (RUNNING_IN_DOCKER === "true"); update this call to use the same
comprehensive Docker detection used elsewhere in url utilities (the helper that
checks /.dockerenv and cgroups) so localhost rewrites behave consistently.
Locate the ARCANE_API_URL line and replace the plain normalizeServiceUrl call
with a call that passes the comprehensive docker flag or invokes the url helper
(e.g., normalizeServiceUrl(requireEnv("ARCANE_API_URL"), isRunningInDocker()))
so the rewrite logic in normalizeServiceUrl uses the full container detection
rather than only RUNNING_IN_DOCKER.
- Around line 25-33: Replace the nullish-coalescing fallback with a truthy
fallback so an empty ARCANE_MCP_TOKEN doesn't block legacy AUTH_TOKEN; update
the AUTH_TOKEN initializer (the immediately-invoked function that reads
process.env.ARCANE_MCP_TOKEN and process.env.AUTH_TOKEN) to use a truthy check
(e.g., || or explicit length check) instead of ??, and also mirror the same
change in entrypoint.sh where it currently exits if ARCANE_MCP_TOKEN is unset so
that the startup allows AUTH_TOKEN-only installs; ensure both AUTH_TOKEN and
entrypoint.sh checks reference ARCANE_MCP_TOKEN and AUTH_TOKEN consistently.
---
Duplicate comments:
In @.gitignore:
- Around line 7-8: Replace the current simple directory ignores ("logs/" and
"backups/") with the canonical wildcard/negation pairs required by the CI check
and add the missing ".worktrees" entry (no trailing slash); specifically edit
the .gitignore entries that currently show logs/ and backups/ so they use the
runtime-dir wildcard + negation pair format used by ensure-ignore-files.sh
--check, and add a line containing ".worktrees" (exactly, without a trailing
slash).
In `@docker-compose.yaml`:
- Around line 10-18: The compose file is interpolating ${PORT:-3000} for
published port and healthcheck which reads only shell/project .env, not the
service-level env_file used by src/utils/env.ts (ARCANE_MCP_BIND_PORT), so
runtime bind port mismatches can make the service unreachable; fix by either
hardcoding the container port to 3000 in the ports and healthcheck entries
(leave host-side variable ARCANE_MCP_PORT or ARCANE_MCP_BIND_PORT only for
binding logic inside the container), or export the bind port as a Compose
interpolation variable (e.g., add PORT or ARCANE_MCP_BIND_PORT to the Compose
project env or top-level env_file) so the published port and the healthcheck use
the same interpolated variable as src/utils/env.ts.
In `@scripts/check-no-baked-env.sh`:
- Around line 99-100: The ENV regex only matches the key=value form
(SENSITIVE_ENVS assignment) and thus misses Dockerfile’s allowed "ENV <key>
<value>" form; update the SENSITIVE_ENVS grep to use a combined regex that
accepts either "ENV\s+<key>\s*=\s*" or "ENV\s+<key>\s+<value>" (e.g. allow
either '=' or whitespace between key and value) and ensure it still anchors
against the SENSITIVE_RE names; likewise update the CMD/ENTRYPOINT detector (the
grep that scans CMD/ENTRYPOINT lines) to also look for sensitive option names or
keys followed by a space (e.g. "--token\s+\S+" or "<KEY>\s+\S+") in addition to
the key=value form so plain-space baked credentials are caught.
In `@skills/arcane/SKILL.md`:
- Around line 50-59: The Volume operations section in SKILL.md is
incomplete—update the list of allowed subactions to match the implementations
exported by src/mcp/tools/arcane.ts: include "list-backups", "create-backup",
"delete-backup", "restore", "restore-files", and "browse" and document required
parameters for each (e.g., backup id for delete-backup/restore, target
envId/name for create-backup/restore, paths for restore-files, browse
path/options). Reference the subaction names exactly as implemented
(list-backups, create-backup, delete-backup, restore, restore-files, browse) and
add brief required-field notes (envId, id/name, paths) so the skill contract
aligns with the code.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 92edbe10-b4c2-4dc4-aef5-5d319ebb3ebc
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json` and included by `**/*`
📒 Files selected for processing (22)
- .agents/plugins/marketplace.json
- .beads/.gitignore
- .beads/README.md
- .beads/config.yaml
- .beads/hooks/post-checkout
- .beads/hooks/post-merge
- .beads/hooks/pre-commit
- .beads/hooks/pre-push
- .beads/hooks/prepare-commit-msg
- .beads/metadata.json
- .codex-plugin/plugin.json
- .github/workflows/docker-publish.yml
- .gitignore
- .lavra/memory/knowledge.jsonl
- .mcp.json
- LICENSE
- docker-compose.yaml
- scripts/check-docker-security.sh
- scripts/check-no-baked-env.sh
- scripts/lint-plugin.sh
- skills/arcane/SKILL.md
- src/utils/env.ts
💤 Files with no reviewable changes (10)
- .beads/README.md
- .beads/metadata.json
- .beads/config.yaml
- .beads/hooks/post-merge
- .beads/hooks/prepare-commit-msg
- .beads/hooks/post-checkout
- .beads/.gitignore
- .beads/hooks/pre-commit
- .beads/hooks/pre-push
- .lavra/memory/knowledge.jsonl
```yaml
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
Remove trailing blank line.
The yamllint error on line 64 indicates an extra blank line at the end of the file.
🔧 Proposed fix
```diff
       cache-from: type=gha
       cache-to: type=gha,mode=max
-
```
🧰 Tools
🪛 YAMLlint (1.38.0)
[error] 64-64: too many blank lines (1 > 0)
(empty-lines)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/docker-publish.yml around lines 54 - 64, Remove the
trailing blank line at the end of the GitHub Actions workflow block for the
"Build and push Docker image" step (the step using docker/build-push-action@v5)
so the file ends immediately after the last "cache-to: type=gha,mode=max" line;
simply delete the extra newline at the end of the file to satisfy yamllint.
```json
    "arcane-mcp": {
      "type": "http",
      "url": "${ARCANE_MCP_URL:-https://arcane.tootie.tv/mcp}"
    }
```
🧩 Analysis chain
🌐 Web query:
Claude Code MCP .mcp.json HTTP server headers configuration format
💡 Result:
In a project-scoped .mcp.json, HTTP (and SSE) MCP servers go under mcpServers.&lt;name&gt; with "type": "http" (or "sse"), a url, and optional headers as a JSON object mapping header name → header value (strings). Example:

```json
{
  "mcpServers": {
    "api-server": {
      "type": "http",
      "url": "https://api.example.com/mcp",
      "headers": {
        "Authorization": "Bearer ${API_KEY}",
        "X-API-Key": "${MY_API_KEY}"
      }
    }
  }
}
```

Claude Code also supports environment-variable expansion inside headers (and url). [1] The Agent SDK docs show the same headers: { Authorization: "Bearer …" } shape for HTTP MCP servers. [2]
Sources: [1] (docs.anthropic.com), [2] (docs.anthropic.com)
Citations:
- 1: https://docs.anthropic.com/en/docs/claude-code/mcp
- 2: https://docs.anthropic.com/en/docs/claude-code/sdk/sdk-mcp
🏁 Script executed:
```shell
# Check if there are other mcp or arcane-related config files in the repo
find . -type f \( -name "*.mcp*" -o -name "*arcane*" -o -name "*mcp*" \) 2>/dev/null | head -20
```

Repository: jmagar/arcane-mcp
Length of output: 254
🏁 Script executed:
```shell
# Check the full .mcp.json file to understand context
cat -n .mcp.json
```

Repository: jmagar/arcane-mcp
Length of output: 253
🏁 Script executed:
```shell
# Search for ARCANE_MCP_TOKEN usage in the codebase
rg "ARCANE_MCP_TOKEN" -i
```

Repository: jmagar/arcane-mcp
Length of output: 1913
🏁 Script executed:
```shell
# Check arcane-client.ts for authentication details
cat -n src/services/arcane-client.ts
```

Repository: jmagar/arcane-mcp
Length of output: 6095
🏁 Script executed:
```shell
# Look for README or documentation about authentication
find . -type f \( -name "README*" -o -name "*.md" \) | head -10
```

Repository: jmagar/arcane-mcp
Length of output: 124
Wire the bearer token into the HTTP MCP definition.
This manifest points at an authenticated HTTP server, but it only sets url. Claude Code's .mcp.json format supports headers on HTTP servers, including env expansion there. Without an Authorization header this config cannot authenticate. Add headers.Authorization: "Bearer ${ARCANE_MCP_TOKEN}" beside url.
Proposed fix

```diff
 {
   "mcpServers": {
     "arcane-mcp": {
       "type": "http",
-      "url": "${ARCANE_MCP_URL:-https://arcane.tootie.tv/mcp}"
+      "url": "${ARCANE_MCP_URL:-https://arcane.tootie.tv/mcp}",
+      "headers": {
+        "Authorization": "Bearer ${ARCANE_MCP_TOKEN}"
+      }
     }
   }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.mcp.json around lines 3 - 6, The HTTP MCP entry "arcane-mcp" only sets the
"url" and lacks the Authorization header so requests can't authenticate; update
the "arcane-mcp" definition to add a headers block with Authorization set to
"Bearer ${ARCANE_MCP_TOKEN}" (using env expansion) so the service will send the
bearer token on requests; ensure you add the "headers" object alongside "url"
and set "Authorization" to the expanded env variable.
```shell
    # Check if docker-compose.yaml handles it via user: directive
    if [[ -f "docker-compose.yaml" ]] && grep -qE '^\s+user:' docker-compose.yaml; then
      warn "Non-root user" "No USER in final stage but docker-compose.yaml sets user: — acceptable if always run via compose"
    else
      fail "Non-root user" "No USER directive in final stage — container runs as root"
    fi
  fi
```
Docker-compose fallback only checks .yaml extension.
The fallback check for user: directive in docker-compose only looks for docker-compose.yaml, but projects may use docker-compose.yml. This is inconsistent with lint-plugin.sh which handles both extensions (lines 418-423).
🔧 Proposed fix to handle both extensions
```diff
     # Check if docker-compose.yaml handles it via user: directive
-    if [[ -f "docker-compose.yaml" ]] && grep -qE '^\s+user:' docker-compose.yaml; then
+    COMPOSE_FILE=""
+    [[ -f "docker-compose.yaml" ]] && COMPOSE_FILE="docker-compose.yaml"
+    [[ -f "docker-compose.yml" ]] && COMPOSE_FILE="docker-compose.yml"
+    if [[ -n "$COMPOSE_FILE" ]] && grep -qE '^\s+user:' "$COMPOSE_FILE"; then
       warn "Non-root user" "No USER in final stage but docker-compose.yaml sets user: — acceptable if always run via compose"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/check-docker-security.sh` around lines 101 - 107, The docker-compose
fallback currently only checks for "docker-compose.yaml" and will miss
"docker-compose.yml"; update the conditional that looks for docker-compose files
and the subsequent grep check so it considers both filenames (e.g.,
"docker-compose.yaml" and "docker-compose.yml") before deciding to call
warn(...) or fail(...), ensuring the grep -qE '^\s+user:' check runs against
whichever compose file exists; adjust the existence check and grep target used
in the block containing the warn "Non-root user" and fail "Non-root user" calls
accordingly.
scripts/lint-plugin.sh
Outdated
```text
Checks:
  1. Manifest files exist (.claude-plugin/plugin.json, etc.)
  2. plugin.json has all required fields
  3. userConfig entries have required attributes (type, title, description, sensitive)
  4. Codex manifest has interface.displayName
  5. Version numbers are in sync across manifests
  6. No generic (unprefixed) env var names
  7. Domain tool + help tool present in source
  8. Required files present (README.md, CHANGELOG.md, Dockerfile, etc.)
  9. AGENTS.md and GEMINI.md are symlinks to CLAUDE.md
  10. skills/ directory has SKILL.md files
  11. hooks/ scripts exist and are executable
  12. docker-compose.yaml uses env_file, has user:, no environment: block
  14. No .env tracked in git
  15. Required directories exist (backups/, logs/, tests/, skills/)
  16. assets/ directory has icon files
```
Section numbering skips 13 in documentation.
The usage documentation lists checks numbered 1-12, then skips to 14-16. Section 13 is missing. Additionally, the actual implementation jumps from section 13 (line 414) to section 15 (line 452), with section 14 absent entirely.
🔧 Proposed fix: renumber sections consistently
```diff
   10. skills/ directory has SKILL.md files
   11. hooks/ scripts exist and are executable
   12. docker-compose.yaml uses env_file, has user:, no environment: block
-  14. No .env tracked in git
-  15. Required directories exist (backups/, logs/, tests/, skills/)
-  16. assets/ directory has icon files
+  13. No .env tracked in git
+  14. Required directories exist (backups/, logs/, tests/, skills/)
+  15. assets/ directory has icon files
```

And update the section comments in the implementation to match (lines 452, 467, 494).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/lint-plugin.sh` around lines 19 - 34, The documentation and
implementation in scripts/lint-plugin.sh have a missing/skip of section 13;
update the usage documentation list and the corresponding inline section
comments in the script so numbering is continuous (e.g., renumber 14–16 to 13–15
or add a new 13 entry), and adjust any section comment identifiers in the
implementation (references around the section comment blocks used for checks and
the comment markers near the checks for docker-compose/env/.env/etc.) so all
section labels match the updated documentation and implementation numbering
consistently.
````text
```
mcp__arcane-mcp__arcane
action: "container"
subaction: "list" | "get" | "create" | "start" | "stop" | "restart" | "delete" | "stats"
envId: (required for most subactions) Environment ID
id: (required for get, start, stop, restart, delete, stats) Container ID or name
```
````
Add languages to the fenced examples.
markdownlint is still flagging these unlabeled fences (MD040). Use text for the tool-contract blocks so the file stays lint-clean.
Also applies to: 42-48, 52-59, 63-69, 74-80, 84-91, 95-99, 121-128
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 29-29: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/arcane/SKILL.md` around lines 29 - 35, The fenced code blocks in
SKILL.md (e.g., the tool-contract blocks starting with lines showing
"mcp__arcane-mcp__arcane" and the other similar blocks referenced at ranges
42-48, 52-59, 63-69, 74-80, 84-91, 95-99, 121-128) are unlabeled and trigger
markdownlint MD040; update each triple-backtick fence to include the language
label "text" (e.g., change ``` to ```text) so the examples remain identical but
are lint-clean; locate the blocks by their unique starting text like
"mcp__arcane-mcp__arcane" and other tool-contract headings and add the "text"
label to each corresponding opening fence.
````markdown
### System operations

```
mcp__arcane-mcp__arcane
action: "system"
subaction: "info" | "version" | "ping"
```
````
These system subactions do not match the tool schema.
Line 98 documents info | version | ping, but the subaction enum in src/mcp/tools/arcane.ts does not accept any of those values. Agents following this section will generate calls that fail validation.
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 95-95: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/arcane/SKILL.md` around lines 93 - 99, The System operations section
in SKILL.md lists subactions "info | version | ping" that don't match the actual
subaction enum declared in src/mcp/tools/arcane.ts; either update the SKILL.md
"System operations" block to list the exact subaction values accepted by the
subaction enum in arcane.ts, or modify the subaction enum in
src/mcp/tools/arcane.ts to include "info", "version", and "ping" (and ensure any
handling code for those cases exists, e.g., switch/case handlers that use the
subaction value). Ensure the documented token `subaction` in SKILL.md and the
enum named `subaction` (or the exported enum/type) in arcane.ts are identical.
```markdown
### Destructive operations — DESTRUCTIVE

The following subactions require confirmation. Will prompt interactively when possible,
or require re-call with `params: { confirm: true }`:

- `container:delete` — Delete a container
- `container:restart` — Restart a container
- `container:stop` — Stop a container
- `environment:delete` — Remove an environment
- `image:delete` — Remove an image
- `image:prune` — Prune unused images
- `volume:delete` — Remove a volume and its data
- `volume:prune` — Prune unused volumes
- `network:delete` — Remove a network
- `network:prune` — Prune unused networks
- `project:down` — Stop and remove a project's containers
- `project:destroy` — Destroy a project
- `project:restart` — Restart a project
- `project:redeploy` — Redeploy a project
```
The destructive list is out of sync with the actual gate.
This section omits destructive pairs the server still blocks for confirmation, including volume:delete-backup, volume:restore, volume:restore-files, system:stop-all, registry:delete, gitops:delete, and gitops:sync. If the skill leaves those out, agents get the wrong confirmation contract and will hit unexpected runtime blocks.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@skills/arcane/SKILL.md` around lines 101 - 120, Update the Destructive
operations list in SKILL.md to match the server's actual confirmation gate by
adding the missing action names: include volume:delete-backup, volume:restore,
volume:restore-files, system:stop-all, registry:delete, gitops:delete, and
gitops:sync alongside the existing entries (e.g., container:delete,
image:delete, project:destroy) so the documentation reflects the true
confirmation contract agents must follow.
```typescript
  // Accept ARCANE_MCP_TOKEN (current) or AUTH_TOKEN (legacy) for backward compat.
  AUTH_TOKEN: (() => {
    const val = process.env.ARCANE_MCP_TOKEN ?? process.env.AUTH_TOKEN;
    if (!val)
      throw new Error(
        "Missing required environment variable: ARCANE_MCP_TOKEN (or legacy AUTH_TOKEN)",
      );
    return val;
  })(),
```
The legacy token fallback still won't save AUTH_TOKEN-only installs.
Line 27 uses ??, so an empty ARCANE_MCP_TOKEN blocks fallback to AUTH_TOKEN. On top of that, entrypoint.sh still exits unless ARCANE_MCP_TOKEN is set, so the backward-compat path never helps HTTP deployments.
Proposed fix

```diff
 AUTH_TOKEN: (() => {
-  const val = process.env.ARCANE_MCP_TOKEN ?? process.env.AUTH_TOKEN;
+  const val = process.env.ARCANE_MCP_TOKEN || process.env.AUTH_TOKEN;
   if (!val)
     throw new Error(
       "Missing required environment variable: ARCANE_MCP_TOKEN (or legacy AUTH_TOKEN)",
     );
   return val;
 })(),
```

Mirror the same fallback in entrypoint.sh as well, otherwise startup still fails before this code runs.
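The same empty-versus-unset distinction exists on the shell side. In bash, `${VAR:-fallback}` falls through when the variable is unset or empty (the analogue of `||`), while `${VAR-fallback}` falls through only when it is unset (the analogue of `??`). A sketch of how the entrypoint-side fallback could behave — variable names follow the review; this is not the repo's actual entrypoint.sh:

```shell
#!/usr/bin/env bash
# An empty ARCANE_MCP_TOKEN should not block the legacy fallback.
ARCANE_MCP_TOKEN=""
AUTH_TOKEN="legacy-token"

# :- treats empty the same as unset, mirroring || in the TypeScript fix.
TOKEN="${ARCANE_MCP_TOKEN:-$AUTH_TOKEN}"
echo "$TOKEN"   # legacy-token

# - (without the colon) keeps the set-but-empty string, mirroring ??.
TOKEN2="${ARCANE_MCP_TOKEN-$AUTH_TOKEN}"
echo "strict: '$TOKEN2'"   # strict: ''
```

Using `:-` in the entrypoint keeps its behavior consistent with the `||` change in env.ts, so AUTH_TOKEN-only installs pass both the shell gate and the runtime check.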
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/env.ts` around lines 25 - 33, Replace the nullish-coalescing
fallback with a truthy fallback so an empty ARCANE_MCP_TOKEN doesn't block
legacy AUTH_TOKEN; update the AUTH_TOKEN initializer (the immediately-invoked
function that reads process.env.ARCANE_MCP_TOKEN and process.env.AUTH_TOKEN) to
use a truthy check (e.g., || or explicit length check) instead of ??, and also
mirror the same change in entrypoint.sh where it currently exits if
ARCANE_MCP_TOKEN is unset so that the startup allows AUTH_TOKEN-only installs;
ensure both AUTH_TOKEN and entrypoint.sh checks reference ARCANE_MCP_TOKEN and
AUTH_TOKEN consistently.
- Add .claude-plugin/plugin.json with arcane_mcp_url/arcane_mcp_token userConfig
- Add .codex-plugin/plugin.json with interface block and skills/mcpServers/apps refs
- Add .mcp.json (Bearer ${user_config.arcane_mcp_token} header)
- Add .app.json (Codex app manifest)
- Add hooks/hooks.json (wrapped format with SessionStart + PostToolUse)
- Add hooks/scripts/sync-env.sh (awk+flock, no auto-token-gen, fails if token unset)
- Add hooks/scripts/fix-env-perms.sh (chmod 600 on .env and backups)
- Add hooks/scripts/ensure-ignore-files.sh (copied from claude-homelab)
- Add skills/arcane/SKILL.md (dual-mode MCP/HTTP fallback, all actions documented)
- Add assets/ (icon.png stub, logo.svg stub, screenshots/.gitkeep)
- Add backups/.gitkeep, logs/.gitkeep
- Add arcane.subdomain.conf (SWAG nginx config with origin validation, /mcp, /health)
- Add scripts/check-docker-security.sh, check-no-baked-env.sh, check-outdated-deps.sh,
ensure-ignore-files.sh (copied from claude-homelab)
- Add Justfile with dev/test/lint/fmt/typecheck/build/up/down/restart/logs/health/
test-live/setup/gen-token/check-contract/clean recipes
- Add .pre-commit-config.yaml (biome + docker-security + no-baked-env + ensure-ignore-files)
- Add .github/workflows/ci.yml (lint/typecheck/test/version-sync/audit/docker-security)
- Add tests/test_live.sh (mcporter-based live integration test)
- Rename AUTH_TOKEN -> ARCANE_MCP_TOKEN in src/utils/env.ts and .env.example
- Update Dockerfile: add HEALTHCHECK, USER 1000:1000, mkdir /app/logs /app/backups
- Update docker-compose.yaml: remove environment: block, add user/network/limits/healthcheck
- Fix .dockerignore: add .codex-plugin to AI tooling exclusions
- Fix .gitignore: add docs/research/ to documentation artifacts
- Add CHANGELOG.md migration note for AUTH_TOKEN rename
- Fix root-user detection regex to catch USER 0 and USER 0:0 in final stage (P1)
- Fix check-no-baked-env.sh grep regex portability: replace \s with [[:space:]] (P1)
- Add backward-compatible AUTH_TOKEN fallback for ARCANE_MCP_TOKEN (P2)
- Fix LICENSE copyright attribution from gotify-mcp to arcane-mcp (P2)
- Fix lint-plugin.sh section 13 to check both docker-compose.yaml and .yml (P2)
- Add jq prerequisite check in lint-plugin.sh with clear error message (P2)
- Add missing destructive actions container:restart and environment:delete to SKILL.md (P2)
- Fix docker-compose.yaml network name default from arcane-mcp to arcane_mcp (P2)
Force-pushed from 044da3e to 3160631 (Compare)
Summary
Beads
Epic: cw1-alignment
Summary by cubic
Aligns `arcane-mcp` with the canonical MCP plugin setup for cw1-alignment and hardens runtime and tooling. Adds env-backed manifests, strict auth (with legacy fallback), pagination, capped responses, graceful shutdown, Docker/NGINX setup, and CI for container publishing.

New Features

- `.claude-plugin`, `.codex-plugin`, `.mcp.json`, `.app.json`, `assets/` (incl. screenshots); `arcane_mcp_token` marked sensitive; `.mcp.json` expands `${ARCANE_MCP_URL:-https://arcane.tootie.tv/mcp}`.
- `sync-env.sh` / `fix-env-perms.sh` / `ensure-ignore-files.sh` wired via `hooks.json` to sync userConfig → `.env`, enforce `600`, rotate backups, and ensure ignore rules.
- `ARCANE_MCP_TOKEN` (accepts legacy `AUTH_TOKEN`); UTF‑8‑safe 512KB truncation; path traversal guard; `normalizeServiceUrl()` rewrites `localhost` → `host.docker.internal` in Docker.
- Pagination (`offset`, `limit`, `sort_order`, `query`) returning `{ items, total, limit, offset, has_more }`.
- `HEALTHCHECK`, runtime `logs/` and `backups/`; `docker-compose.yaml` uses env file, default network `arcane_mcp`, standardized volumes/ignores.
- `arcane.subdomain.conf` with origin validation and `/mcp` + `/health` upstreams.
- `bin/scripts`; Docker security checker catches `USER root/0`; portable grep in env checks; Dockerfile test.
- `skills/arcane/SKILL.md` marks destructive actions (e.g., `container:restart`, `environment:delete`).

Migration

- `AUTH_TOKEN` → `ARCANE_MCP_TOKEN`; `AUTH_TOKEN` still works but is deprecated.
- Use `.env` and remove compose `environment:` blocks; set `ARCANE_API_URL`, `ARCANE_API_KEY`, `ARCANE_MCP_TOKEN` there (Docker rewrites `localhost` automatically).
- `ARCANE_MCP_BIND_PORT` (internal `PORT`); ensure compose `${PORT}` matches it and use `${ARCANE_MCP_PORT}` for host mapping.

Written for commit 3160631. Summary will update on new commits.
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Documentation
Chores