
fix: address GitHub issues #83–#88 + security & reliability bugs #89

Merged
bnema merged 26 commits into main from fix/gh-issues-83-84-85-86-87-88 on Feb 25, 2026

Conversation

@bnema (Owner) commented Feb 25, 2026

Summary

Fixes all 7 bugs from issues #83–#88 plus 5 additional security/reliability issues found during a deep codebase scan.

Closes #83, #84, #85, #86, #87, #88


Issues #83–#88

Refactors

Bug fixes

Root-caused internal bug

  • Fix gordon auth internal reading stale credentials when XDG_RUNTIME_DIR differs between the daemon (systemd) and the CLI shell — now probes candidates in priority order: XDG_RUNTIME_DIR, then /run/user/<uid>, then ~/.gordon/run, then os.TempDir()
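The candidate-probing order above can be sketched as follows. This is a minimal illustration, not the project's exact code: the function name, the gordon subdirectory, and the internal-creds.json filename are assumptions.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// candidatePaths returns credential file locations in priority order, mirroring
// the fix described above: XDG_RUNTIME_DIR first (when set), then the systemd
// per-user runtime dir, then a home fallback, then the temp dir as last resort.
// Hypothetical sketch; path shapes and filenames are illustrative.
func candidatePaths(runtimeDir, homeDir string, uid int) []string {
	var paths []string
	if runtimeDir != "" {
		paths = append(paths, filepath.Join(runtimeDir, "gordon", "internal-creds.json"))
	}
	paths = append(paths,
		filepath.Join(fmt.Sprintf("/run/user/%d", uid), "gordon", "internal-creds.json"),
		filepath.Join(homeDir, ".gordon", "run", "internal-creds.json"),
		filepath.Join(os.TempDir(), "gordon", "internal-creds.json"),
	)
	return paths
}

func main() {
	for _, p := range candidatePaths("/run/user/1000", "/home/alice", 1000) {
		fmt.Println(p)
	}
}
```

Probing an ordered list like this means the daemon and the CLI converge on the same file even when their environments disagree about XDG_RUNTIME_DIR.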

Security & Reliability (deep scan findings)

  • High — JWT ephemeral bypass via future iat: tokens with iat in the future were classified as ephemeral and skipped revocation checks. Fixed with jwt.WithIssuedAt() + negative-age guard in isEphemeralAccessToken
  • High — Race condition in UnsafeStore.Revoke: concurrent revocations could corrupt revoked.json or silently lose token IDs. Fixed with sync.RWMutex on Revoke/IsRevoked
  • Medium — pass backend ListTokens failed to enumerate tokens with / in their subject (the pass tree output was parsed incorrectly). Rewrote the parser with proper indentation-depth tracking
  • Medium — TLS HTTPS server excluded from graceful shutdown. startTLSServer now returns *http.Server; gracefulShutdown drains it alongside registry and proxy servers
  • Low — Silent recover() in CLI token refresh callback. Panics now logged to stderr instead of discarded

Test coverage

  • 36 files changed, 1637 insertions, 574 deletions
  • New test files: labels_test.go, httputil/localhost_test.go, pass_test.go, unsafe_test.go, run_internal_creds_test.go
  • All tests pass with race detector: go test -race ./... — zero failures, zero data races

Summary by CodeRabbit

  • New Features

    • Token auto-extension (sliding expiry) with non-fatal rotation header and optional remote token refresh callback.
    • Internal credentials resolution now probes multiple standard locations.
    • Push injects consistent build metadata (VERSION, GIT_TAG, GIT_SHA, BUILD_TIME).
    • Attachments can be updated without restarting; improved TLS server lifecycle handling.
  • Bug Fixes

    • Rejects tokens with future issuance timestamps.
    • Config Save preserves routes, external_routes and network_groups.
  • Refactoring

    • Consolidated localhost detection and standardized container label usage.

bnema added 18 commits February 24, 2026 22:46
- fix mock RunAndReturn delegates to _c.Call.Return not _c.Run
- remove dead bare ::1 check in IsLocalhostRequest (false positive risk)
- remove matching ::1 unbracketed test cases in httputil and auth tests
- add defer to restore os.Stderr in push_test to prevent leak on panic
- add recover() wrapper on token refresh callback in CLI client
- remove ineffective omitempty on time.Time in tokenstore pass metadata
- skip ExtendToken for permanent tokens (ExpiresAt <= 0) to avoid silent 24h conversion
- add os.TempDir() as last fallback candidate in getInternalCredentialsCandidates
- remove dead t.Setenv in run_internal_creds_test (function ignores env var)
- wrap labels_test.go constant checks in t.Run for isolated filtering
- add debug logs around UpdateAttachments for operator observability
@coderabbitai bot commented Feb 25, 2026

📝 Walkthrough


Adds token-extension (ExtendToken) across auth/service/store/middleware, token-refresh callback wiring for the CLI remote client, standardizes CLI image build-args with git metadata, extracts an IsLocalhostRequest helper, introduces attachment tracking and label constants, and expands token-store metadata and update APIs; plus tests and mocks.

Changes

  • Auth: service & interface
    Files: internal/boundaries/in/auth.go, internal/usecase/auth/service.go, internal/usecase/auth/service_test.go
    Adds ExtendToken(ctx, tokenString) to AuthService and Service: 24h extension, 1h debounce, skips ephemeral/service tokens, enforces iat checks, re-signs with the same JTI, persists LastExtendedAt, and adds unit tests.
  • Token store & mocks
    Files: internal/adapters/out/tokenstore/pass.go, internal/adapters/out/tokenstore/pass_test.go, internal/adapters/out/tokenstore/unsafe.go, internal/adapters/out/tokenstore/unsafe_test.go, internal/boundaries/out/token_store.go, internal/boundaries/out/mocks/mock_token_store.go
    Adds LastExtendedAt metadata, an UpdateTokenExpiry API on stores, improved pass ls parsing, concurrency (RWMutex) in UnsafeStore, and matching tests/mocks.
  • HTTP middleware & utils
    Files: internal/adapters/in/http/admin/middleware.go, internal/adapters/in/http/admin/middleware_test.go, internal/adapters/in/http/httputil/localhost.go, internal/adapters/in/http/httputil/localhost_test.go, internal/adapters/in/http/auth/handler.go, internal/adapters/in/http/auth/handler_test.go, internal/adapters/in/http/middleware/auth.go, internal/adapters/in/http/middleware/auth_test.go
    Admin middleware non-fatally calls ExtendToken and may emit X-Gordon-Token; extracts IsLocalhostRequest into httputil with tests; removes duplicate registry-auth helpers and updates tests for token-extension behavior.
  • CLI remote client & resolver
    Files: internal/adapters/in/cli/remote/client.go, internal/adapters/in/cli/remote/config.go, internal/adapters/in/cli/root.go
    Adds an optional onTokenRefreshed callback and a WithTokenRefreshCallback option, observes X-Gordon-Token to update the in-memory token and invoke the callback; introduces ResolveRemoteFull to return the remote name for persistence.
  • CLI push build-args & tests
    Files: internal/adapters/in/cli/push.go, internal/adapters/in/cli/push_test.go
    Adds standardBuildArgs and resolveGitSHA, makes buildImageArgs context-aware, injects explicit --build-arg KEY=VALUE pairs (VERSION, GIT_TAG, GIT_SHA, BUILD_TIME), and updates tests/warnings for git-describe fallbacks.
  • Container attachments & labels
    Files: internal/usecase/container/service.go, internal/usecase/container/service_test.go, internal/usecase/container/events.go, internal/usecase/container/events_test.go, internal/adapters/out/docker/runtime.go
    Replaces hard-coded label strings with domain constants, rebuilds and persists the attachments map in SyncContainers, adds an UpdateAttachments API, propagates attachments on config reload, and updates Docker labels (created timestamp).
  • Config, run & internal creds
    Files: internal/usecase/config/service.go, internal/usecase/config/service_test.go, internal/app/run.go, internal/app/run_internal_creds_test.go
    Syncs additional config sections on Save (external_routes, network_groups), removes GetRegistryAuthConfig, adds GetInternalCredentialsFromCandidates, and returns TLS server instances/readiness channels for coordinated lifecycle and shutdown.
  • Misc CLI / lint / small fixes
    Files: internal/adapters/in/cli/auth.go, internal/adapters/in/cli/remote/auth.go, internal/adapters/in/cli/ui/components/selector.go, .github/workflows/ci.yml, .golangci.yml, internal/adapters/out/domainsecrets/pass_store.go, internal/adapters/out/secrets/pass.go
    Adds a guarded stdin FD helper, nolint:gosec annotations for exec.CommandContext and DTOs, small I/O refactors (Fprintf vs WriteString), pins the golangci-lint action, and updates lint config.
  • Domain & tests
    Files: internal/domain/auth.go, internal/domain/labels_test.go
    Adds LastExtendedAt time.Time to domain.Token and adds tests asserting domain label constant values.
  • Generated mocks
    Files: internal/boundaries/in/mocks/mock_auth_service.go, internal/boundaries/in/mocks/mock_container_service.go
    Adds typed mock methods for ExtendToken and UpdateAttachments with Expecter/Run/Return/RunAndReturn helpers.

Sequence Diagram(s)

sequenceDiagram
    actor Client
    participant AdminMiddleware as Admin<br/>Middleware
    participant AuthService as Auth<br/>Service
    participant TokenStore as Token<br/>Store
    participant Handler as Handler

    Client->>AdminMiddleware: HTTP Request (Bearer token)
    AdminMiddleware->>AuthService: ValidateToken(token)
    AuthService-->>AdminMiddleware: Valid

    AdminMiddleware->>AuthService: ExtendToken(token)
    AuthService->>TokenStore: GetToken(subject)
    TokenStore-->>AuthService: Stored token + LastExtendedAt

    alt Debounced (within 1h)
        AuthService-->>AdminMiddleware: same token (no update)
    else Eligible
        AuthService->>AuthService: Re-sign token (new exp)
        AuthService->>TokenStore: UpdateTokenExpiry(newJWT)
        TokenStore-->>AuthService: saved
        AuthService-->>AdminMiddleware: new token
    end

    AdminMiddleware->>AdminMiddleware: Token differs? add X-Gordon-Token header
    AdminMiddleware->>Handler: next handler
    Handler-->>Client: Response (+ header if new)
sequenceDiagram
    actor CLI
    participant RemoteClient as Remote<br/>Client
    participant Server as Remote<br/>Server
    participant Callback as Token<br/>Callback
    participant Config as Config<br/>Service

    CLI->>RemoteClient: API call
    RemoteClient->>Server: HTTP request (Bearer)
    Server->>Server: may ExtendToken
    Server-->>RemoteClient: Response (+ X-Gordon-Token header if refreshed)

    RemoteClient->>RemoteClient: detect header differs?
    alt Token refreshed
        RemoteClient->>RemoteClient: update in-memory token
        RemoteClient->>Callback: onTokenRefreshed(newToken)
        Callback->>Config: persist token for remoteName
        Config-->>Callback: saved
    end

    RemoteClient-->>CLI: return response

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Poem

🐇 I nudged the tokens, stretched their time,

headers hummed back a tiny rhyme.
Attachments found their proper nest,
labels steadied, builds expressed.
A rabbit hops — saved state and test!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning — Docstring coverage is 42.86%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Description Check — ✅ Passed — Check skipped: CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed — The title 'fix: address GitHub issues #83–#88 + security & reliability bugs' clearly describes the main objective of the PR and is directly related to the changeset.



@coderabbitai bot left a comment

Actionable comments posted: 15

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/adapters/in/cli/push_test.go`:
- Around line 105-114: The test currently only counts occurrences of "GIT_TAG="
in args but doesn't ensure the user override is the last one; update the loop in
internal/adapters/in/cli/push_test.go that iterates over args to also record the
index of the last "GIT_TAG=" occurrence (e.g., lastIdx) and after the loop
assert both that count >= 2 and that args[lastIdx] (or args[lastIdx+1] depending
on how you store indices) equals the expected override string
("GIT_TAG="+<expectedOverrideVar> or starts with that value) to guarantee the
override is applied last; refer to the existing args variable and the loop that
checks strings.HasPrefix(args[i+1], "GIT_TAG=") to locate where to add
last-index tracking and the final assertion.

In `@internal/adapters/in/cli/push.go`:
- Around line 718-721: The warning after exec.CommandContext(ctx, "git",
"describe", "--tags", "--dirty").Output() is misleading because it always claims
"no git tags found" although err can be any failure; update the error handling
in the branch that checks err to either print a generic fallback message (e.g.,
"Warning: unable to determine git tag — image version will be 'latest'") or
include the real err string from err in the fmt.Fprintln call so callers see the
actual failure from exec.CommandContext/Output; locate the exec.CommandContext
call and the fmt.Fprintln usage in push.go and modify that error branch
accordingly.
- Around line 491-505: The git SHA lookup in resolveGitSHA is using exec.Command
without context and must be made cancellable by threading a context through
buildImageArgs → standardBuildArgs → resolveGitSHA and switching to
exec.CommandContext; update signatures so buildImageArgs and standardBuildArgs
accept a context.Context parameter, change resolveGitSHA to accept ctx and call
exec.CommandContext(ctx, "git", "rev-parse", "--short", "HEAD"), and update all
callers (including runPush and any intermediate callers) to pass the existing
cancellable context so the git subprocess is killed when the push context is
cancelled.

In `@internal/adapters/in/cli/remote/client.go`:
- Around line 153-175: The linter G704 false positive can be suppressed where
the request URL is constructed from the operator-controlled c.baseURL plus the
fixed "/admin" prefix and caller path; add a comment like //nolint:gosec // URL
is built from operator-configured baseURL next to the URL construction to
silence gosec. Also ensure streaming responses observe token rotation: either
modify streamLogs to use the existing request() helper (so X-Gordon-Token
handling in request() updates c.token and invokes c.onTokenRefreshed) or
explicitly mirror the token-refresh block used in request() by checking
resp.Header.Get("X-Gordon-Token") after the stream response is received and
updating c.token/c.onTokenRefreshed with the same panic-safe callback wrapper.
- Around line 160-173: The token-refresh callback currently has signature
c.onTokenRefreshed func(string) and any persistence failure is silently
swallowed; change the callback to return an error (func(string) error), update
its invocation in client.go (wrap call with the existing recover defer, capture
the returned error, and log or return it), and propagate that error out of
UpdateRemoteToken so callers (e.g., root.go which calls
remote.UpdateRemoteToken) can handle or explicitly ignore it; update the
caller(s) to handle the new error return (or explicitly discard with a comment)
to make persistence failures visible.

In `@internal/adapters/in/cli/remote/config.go`:
- Around line 148-161: In ResolveRemoteFull, the token resolution calls
resolveToken(flagToken, config, remotes) without passing the resolved name; add
a short inline comment near that call explaining the intended coupling: note
that name is only set when resolveRemoteURL returns the active remote and
therefore equals remotes.Active, so resolveToken’s lookup of remotes.Active is
consistent; reference ResolveRemoteFull, resolveRemoteURL, resolveToken and
remotes.Active in the comment so future readers understand the implicit
dependency.

In `@internal/adapters/in/http/admin/middleware_test.go`:
- Around line 86-87: Add explicit tests covering ExtendToken failure and token
rotation: update the tests that currently set
authSvc.EXPECT().ExtendToken(...).Return("valid-admin-token", nil) to include
two additional cases—one where authSvc.EXPECT().ExtendToken(...).Return("",
errors.New("fail")) and assert the HTTP response is still 200, and another where
authSvc.EXPECT().ExtendToken(...).Return("rotated-token", nil) and assert the
response includes header "X-Gordon-Token: rotated-token". Apply these additions
to the same test contexts referencing ExtendToken at the existing expectation
sites (the blocks around the current authSvc.EXPECT() calls) so both the
non-fatal error path and the token-rotation header behavior are explicitly
asserted.

In `@internal/adapters/in/http/admin/middleware.go`:
- Around line 109-113: The token-extension branch currently swallows extErr;
change it to still be non-fatal but log the error for observability: when
calling authSvc.ExtendToken(ctx, token) (in middleware.go) check extErr and if
non-nil emit a debug/warn log that includes extErr and context identifying info
(e.g., request ID from ctx or user ID) but do NOT log the token itself; keep the
existing behavior of setting X-Gordon-Token when newToken != token. Use the
request-scoped logger if available (or the middleware's logger instance) to
produce the log message.

In `@internal/adapters/in/http/httputil/localhost.go`:
- Around line 12-15: Replace the string-prefix checks in IsLocalhostRequest with
proper IP parsing: call net.SplitHostPort(r.RemoteAddr) to extract the host
(handle SplitHostPort error by falling back to using the whole r.RemoteAddr as
host), parse the host with netip.ParseAddr (or net.ParseIP if preferred) and
return addr.IsLoopback() (or ip.IsLoopback()) to determine localhost; reference
the IsLocalhostRequest function and r.RemoteAddr when locating where to change.
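The rewrite suggested above amounts to parsing the address instead of prefix-matching. A minimal standalone sketch follows; isLocalhostAddr is a hypothetical name, and net.ParseIP is used here in place of netip.ParseAddr:

```go
package main

import (
	"fmt"
	"net"
)

// isLocalhostAddr splits host:port and relies on net.IP.IsLoopback rather
// than string-prefix checks, which avoids false positives for addresses like
// "evil127.0.0.1:80" and correctly handles bracketed IPv6 such as "[::1]:443".
func isLocalhostAddr(remoteAddr string) bool {
	host, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		host = remoteAddr // no port present; fall back to the whole string
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback()
}

func main() {
	for _, addr := range []string{"127.0.0.1:8080", "[::1]:443", "192.168.1.5:80", "evil127.0.0.1:80"} {
		fmt.Printf("%s %v\n", addr, isLocalhostAddr(addr))
	}
}
```

IsLoopback also covers the whole 127.0.0.0/8 range, which prefix checks against "127.0.0.1" alone would miss.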

In `@internal/adapters/out/tokenstore/pass.go`:
- Around line 345-347: ListTokens currently skips any subject with the ".meta"
suffix (strings.HasSuffix(subject, ".meta")), but validateSubject still allows
such subjects, causing valid names like "alice.meta" to be invisible; update
validateSubject to reject subjects that end with ".meta" (mirror the ListTokens
exclusion), ensuring names with that suffix fail validation and cannot be
created, and add/update unit tests for validateSubject and ListTokens to cover
the ".meta" case.
- Around line 443-447: UpdateTokenExpiry currently delegates to SaveToken which
allows upserts and can panic on a nil token; change PassStore.UpdateTokenExpiry
to enforce update-only semantics by first validating token != nil, then
confirming the token exists (e.g., via the existing lookup method used elsewhere
such as GetToken/FindToken by token.ID or token.Key), return a clear error if
not found, update only the JWT, expiry metadata and LastExtendedAt on the
retrieved record, and then persist the updated record (call the appropriate
update-only persistence method rather than SaveToken or use SaveToken only after
confirming existence) to avoid silent creation or nil-pointer panics.

In `@internal/adapters/out/tokenstore/unsafe_test.go`:
- Around line 53-56: The goroutine in the test currently discards the error
returned by store.Revoke(ctx, id); change the worker to capture and report
errors by writing the returned error into a buffered channel or a shared slice
protected by a mutex (use the existing wg, store, ctx, id identifiers to locate
the goroutine), then after wg.Wait() drain the channel or inspect the slice and
fail the test if any non-nil errors occurred so write failures are not
swallowed.

In `@internal/app/run.go`:
- Around line 858-869: The loop over candidates in run.go currently returns
immediately on the first read or unmarshal error (except IsNotExist), which
prevents falling back to lower-priority files; change the read/unmarshal error
handling in that loop to record the error (e.g., lastErr) and continue to the
next candidate instead of returning, and only return an error after the loop if
no valid InternalCredentials were successfully parsed; update the os.ReadFile
error branch (where it currently does return nil, fmt.Errorf("failed to read
credentials file %s: %w", path, err)) and the json.Unmarshal error branch (where
it currently returns nil, fmt.Errorf("failed to parse credentials at %s: %w",
path, err)) to continue/accumulate, and ensure the function returns a meaningful
error (or nil credentials) if no candidate succeeded.
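The accumulate-and-continue flow suggested here can be sketched in miniature. firstValidCredentials and the internalCredentials shape are hypothetical, and in-memory JSON bytes stand in for the on-disk candidate files the real loop reads:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type internalCredentials struct {
	User  string `json:"user"`
	Token string `json:"token"`
}

// firstValidCredentials records each read/parse failure and keeps probing
// lower-priority candidates instead of aborting on the first error, surfacing
// the last error only when no candidate parses successfully.
func firstValidCredentials(candidates [][]byte) (*internalCredentials, error) {
	var lastErr error
	for i, raw := range candidates {
		var creds internalCredentials
		if err := json.Unmarshal(raw, &creds); err != nil {
			lastErr = fmt.Errorf("candidate %d: %w", i, err)
			continue // fall back to the next candidate
		}
		return &creds, nil
	}
	if lastErr != nil {
		return nil, fmt.Errorf("no valid credentials found: %w", lastErr)
	}
	return nil, fmt.Errorf("no credential candidates present")
}

func main() {
	// A corrupt high-priority candidate no longer masks a valid fallback.
	creds, err := firstValidCredentials([][]byte{
		[]byte("{corrupt"),
		[]byte(`{"user":"internal","token":"abc"}`),
	})
	fmt.Println(creds.User, err)
}
```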

In `@internal/usecase/auth/service.go`:
- Around line 492-495: ExtendToken currently re-reads the stored token and can
persist a different metadata ID than the JWT's jti, causing TOCTOU drift; fix by
ensuring the same token ID is used for both validation and persistence: inside
ExtendToken, extract the JWT ID (jti) from parsed claims (e.g., tokenClaims.Id
or tokenClaims.Subject's jti), call s.tokenStore.GetToken(ctx,
tokenClaims.Subject) and immediately verify storedToken.ID == jti (or return an
error if mismatch), then use that verified ID for signing the new JWT and for
persisting updatedToken.ID; apply the same consistency check in the other
similar block referenced around lines 510-535 so the stored metadata ID and JWT
jti can never diverge.

In `@internal/usecase/container/service.go`:
- Around line 1076-1079: UpdateAttachments currently stores the caller-provided
map and slices by reference which can lead to races if the caller mutates them
later; change Service.UpdateAttachments to deep-copy the incoming map and each
[]string slice before assigning to s.config.Attachments (retain the existing
s.mu.Lock() / Unlock() around the assignment), handling nil inputs safely so
s.config.Attachments becomes an independent copy that cannot be mutated by
external callers.

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8ef1865 and b20fdb2.

📒 Files selected for processing (36)
  • internal/adapters/in/cli/push.go
  • internal/adapters/in/cli/push_test.go
  • internal/adapters/in/cli/remote/client.go
  • internal/adapters/in/cli/remote/config.go
  • internal/adapters/in/cli/root.go
  • internal/adapters/in/http/admin/middleware.go
  • internal/adapters/in/http/admin/middleware_test.go
  • internal/adapters/in/http/auth/handler.go
  • internal/adapters/in/http/auth/handler_test.go
  • internal/adapters/in/http/httputil/localhost.go
  • internal/adapters/in/http/httputil/localhost_test.go
  • internal/adapters/in/http/middleware/auth.go
  • internal/adapters/in/http/middleware/auth_test.go
  • internal/adapters/out/docker/runtime.go
  • internal/adapters/out/tokenstore/pass.go
  • internal/adapters/out/tokenstore/pass_test.go
  • internal/adapters/out/tokenstore/unsafe.go
  • internal/adapters/out/tokenstore/unsafe_test.go
  • internal/app/run.go
  • internal/app/run_internal_creds_test.go
  • internal/boundaries/in/auth.go
  • internal/boundaries/in/container.go
  • internal/boundaries/in/mocks/mock_auth_service.go
  • internal/boundaries/in/mocks/mock_container_service.go
  • internal/boundaries/out/mocks/mock_token_store.go
  • internal/boundaries/out/token_store.go
  • internal/domain/auth.go
  • internal/domain/labels_test.go
  • internal/usecase/auth/service.go
  • internal/usecase/auth/service_test.go
  • internal/usecase/config/service.go
  • internal/usecase/config/service_test.go
  • internal/usecase/container/events.go
  • internal/usecase/container/events_test.go
  • internal/usecase/container/service.go
  • internal/usecase/container/service_test.go

Comment on lines +153 to +175
resp, err := c.httpClient.Do(req)
if err != nil {
	return nil, err
}

// If the server returned a refreshed token, update the client's token and notify any callback.
// This implements the sliding expiry: the server re-signs the token and sends it back.
if newToken := resp.Header.Get("X-Gordon-Token"); newToken != "" && newToken != c.token {
	c.token = newToken
	if c.onTokenRefreshed != nil {
		// Token persistence is non-fatal: a panic in the callback must not crash the request.
		func() {
			defer func() {
				if r := recover(); r != nil {
					fmt.Fprintf(os.Stderr, "warning: token refresh callback panicked: %v\n", r)
				}
			}()
			c.onTokenRefreshed(newToken)
		}()
	}
}

return resp, nil

⚠️ Potential issue | 🟡 Minor

Static analysis: G704 (SSRF) is a false positive here — the URL is operator-controlled.

The URL is built from c.baseURL (set via CLI flag/env/config by the operator) plus a hardcoded /admin prefix and caller-supplied path. This is standard for a CLI HTTP client and not exploitable SSRF. Consider adding a //nolint:gosec // URL is built from operator-configured baseURL comment on line 130 (where the URL is constructed) to suppress the lint noise.

Minor gap: streamLogs bypasses request() and won't observe X-Gordon-Token rotation.

If the server returns a refreshed token on a streaming response, it will be silently ignored since streamLogs (line 899) constructs its own request independently. This is likely acceptable for long-lived SSE connections, but worth documenting or addressing if token expiry is shorter than typical stream durations.

🧰 Tools
🪛 GitHub Check: Lint

[failure] 153-153:
G704: SSRF via taint analysis (gosec)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/in/cli/remote/client.go` around lines 153 - 175, The linter
G704 false positive can be suppressed where the request URL is constructed from
the operator-controlled c.baseURL plus the fixed "/admin" prefix and caller
path; add a comment like //nolint:gosec // URL is built from operator-configured
baseURL next to the URL construction to silence gosec. Also ensure streaming
responses observe token rotation: either modify streamLogs to use the existing
request() helper (so X-Gordon-Token handling in request() updates c.token and
invokes c.onTokenRefreshed) or explicitly mirror the token-refresh block used in
request() by checking resp.Header.Get("X-Gordon-Token") after the stream
response is received and updating c.token/c.onTokenRefreshed with the same
panic-safe callback wrapper.

Comment on lines +160 to +173
if newToken := resp.Header.Get("X-Gordon-Token"); newToken != "" && newToken != c.token {
	c.token = newToken
	if c.onTokenRefreshed != nil {
		// Token persistence is non-fatal: a panic in the callback must not crash the request.
		func() {
			defer func() {
				if r := recover(); r != nil {
					fmt.Fprintf(os.Stderr, "warning: token refresh callback panicked: %v\n", r)
				}
			}()
			c.onTokenRefreshed(newToken)
		}()
	}
}

🧹 Nitpick | 🔵 Trivial

Panic recovery around the callback is good defensive coding.

One observation: if onTokenRefreshed itself returns an error (e.g., file write fails), the current design swallows it silently since the callback signature is func(string) with no return. The root.go caller already discards the error from UpdateRemoteToken (_ = remote.UpdateRemoteToken(...)), so the contract is consistent — but a future maintainer might not realize persistence failures are silent. Consider logging the error from the callback or changing the signature to func(string) error to make the contract explicit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/in/cli/remote/client.go` around lines 160 - 173, The
token-refresh callback currently has signature c.onTokenRefreshed func(string)
and any persistence failure is silently swallowed; change the callback to return
an error (func(string) error), update its invocation in client.go (wrap call
with the existing recover defer, capture the returned error, and log or return
it), and propagate that error out of UpdateRemoteToken so callers (e.g., root.go
which calls remote.UpdateRemoteToken) can handle or explicitly ignore it; update
the caller(s) to handle the new error return (or explicitly discard with a
comment) to make persistence failures visible.

Comment on lines +492 to +495
_, storedToken, err := s.tokenStore.GetToken(ctx, tokenClaims.Subject)
if err != nil {
	return "", fmt.Errorf("failed to fetch stored token: %w", err)
}

⚠️ Potential issue | 🟠 Major

Prevent token ID drift during extension (race/TOCTOU).

ExtendToken re-reads the stored token and then persists updatedToken.ID from storage while signing JWT jti from parsed claims. If the subject token rotates between validation and this read, metadata ID and JWT ID can diverge, producing a freshly returned token that fails subsequent validation.

💡 Proposed fix
 	_, storedToken, err := s.tokenStore.GetToken(ctx, tokenClaims.Subject)
 	if err != nil {
 		return "", fmt.Errorf("failed to fetch stored token: %w", err)
 	}
+
+	// Guard against concurrent rotation between ValidateToken and GetToken.
+	if storedToken.ID != tokenClaims.ID {
+		log.Warn().
+			Str("subject", tokenClaims.Subject).
+			Str("expected_token_id", tokenClaims.ID).
+			Str("stored_token_id", storedToken.ID).
+			Msg("token changed during extension; aborting to avoid inconsistent token state")
+		return "", domain.ErrInvalidToken
+	}
@@
 	updatedToken := &domain.Token{
-		ID:             storedToken.ID,
+		ID:             tokenClaims.ID,
 		Subject:        storedToken.Subject,
 		Scopes:         storedToken.Scopes,
 		IssuedAt:       storedToken.IssuedAt,
 		ExpiresAt:      newExpiresAt,
 		Revoked:        storedToken.Revoked,
 		LastExtendedAt: now,
 	}

Also applies to: 510-535

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/usecase/auth/service.go` around lines 492 - 495, ExtendToken
currently re-reads the stored token and can persist a different metadata ID than
the JWT's jti, causing TOCTOU drift; fix by ensuring the same token ID is used
for both validation and persistence: inside ExtendToken, extract the JWT ID
(jti) from parsed claims (e.g., tokenClaims.Id or tokenClaims.Subject's jti),
call s.tokenStore.GetToken(ctx, tokenClaims.Subject) and immediately verify
storedToken.ID == jti (or return an error if mismatch), then use that verified
ID for signing the new JWT and for persisting updatedToken.ID; apply the same
consistency check in the other similar block referenced around lines 510-535 so
the stored metadata ID and JWT jti can never diverge.

@coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
internal/adapters/out/secrets/pass.go (1)

108-111: 🧹 Nitpick | 🔵 Trivial

IsAvailable blocks indefinitely — consider adding a timeout.

exec.Command("pass", "version") has no deadline. On a system where the pass binary is present but hangs (e.g., GPG agent unresponsive), any caller of IsAvailable blocks forever. This is a pre-existing issue not introduced by this PR, but the surrounding hardening work is a good opportunity to address it.

⏱️ Suggested fix
 func (p *PassProvider) IsAvailable() bool {
-	cmd := exec.Command("pass", "version")
-	err := cmd.Run()
-	return err == nil
+	ctx, cancel := context.WithTimeout(context.Background(), p.timeout)
+	defer cancel()
+	cmd := exec.CommandContext(ctx, "pass", "version") //nolint:gosec // binary and arguments are string literals
+	err := cmd.Run()
+	return err == nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/secrets/pass.go` around lines 108 - 111, The
IsAvailable method of PassProvider currently runs exec.Command("pass",
"version") without a deadline and can hang; update PassProvider.IsAvailable to
create a context with a short timeout (e.g., 1-3s), use exec.CommandContext with
that context to run "pass version", and treat any context deadline exceeded or
other error as unavailable (return false); ensure the command is cancelled when
the context times out so the subprocess is killed and IsAvailable returns
promptly.
internal/adapters/out/domainsecrets/pass_store.go (2)

806-815: 🧹 Nitpick | 🔵 Trivial

context.WithTimeout cancel not deferred in cleanupInsertedPaths.

If passRemove panics, the context cancel leaks. The idiomatic pattern is defer cancel() immediately after context creation. The same non-deferred pattern also appears in ListKeys (lines 87-89) and several loop bodies throughout the file.

♻️ Proposed fix
 	for _, path := range paths {
 		ctx, cancel := context.WithTimeout(context.Background(), s.timeout)
+		defer cancel()
 		_ = s.passRemove(ctx, path)
-		cancel()
 	}

Note: defer inside a loop defers until the function returns (not the iteration), so for long-running loops the preferred fix is a helper closure or extracting the body into a named function. Given cleanupInsertedPaths is only called on error paths with a bounded number of entries, this is acceptable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/domainsecrets/pass_store.go` around lines 806 - 815,
The cleanupInsertedPaths function creates a context with context.WithTimeout and
calls s.passRemove but cancels the context with cancel() non-deferred, which can
leak if passRemove panics; update cleanupInsertedPaths to call cancel via defer
immediately after creating ctx (i.e., ctx, cancel := context.WithTimeout(...);
defer cancel()) so the cancel is always run, and apply the same fix to other
occurrences such as ListKeys and any loop bodies that create contexts (or wrap
loop bodies in a helper function/closure when deferring inside a loop would
otherwise delay cancellation until function return).

36-38: ⚠️ Potential issue | 🟡 Minor

pass version probe has no timeout, risking indefinite startup block.

Every other pass invocation in this file uses a 10-second context.WithTimeout, but the availability check at startup uses a bare exec.Command with no deadline. A stalled GPG agent or slow key-ring unlock could hang server initialization indefinitely.

🔧 Proposed fix
 func NewPassStore(log zerowrap.Logger) (*PassStore, error) {
-	if err := exec.Command("pass", "version").Run(); err != nil {
+	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
+	defer cancel()
+	if err := exec.CommandContext(ctx, "pass", "version").Run(); err != nil { //nolint:gosec // G204: binary is constant ("pass"); no user input
 		return nil, fmt.Errorf("pass is not available: %w", err)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/domainsecrets/pass_store.go` around lines 36 - 38,
Replace the bare exec.Command("pass", "version").Run() probe with a
context.WithTimeout of 10 seconds and use exec.CommandContext(ctx, "pass",
"version").Run(), ensuring you call cancel() via defer; keep the same error
wrapping (fmt.Errorf("pass is not available: %w", err)). Locate the startup
availability check in pass_store.go (the block that calls exec.Command("pass",
"version")) and mirror the timeout pattern used elsewhere in this file (10s) so
a stalled GPG/keyring does not hang initialization.
internal/adapters/out/tokenstore/pass.go (1)

330-358: ⚠️ Potential issue | 🟠 Major

Single 10 s deadline shared across all GetToken calls in the listing loop.

The ctx created at line 330 is passed directly to every GetToken call (line 351). GetToken creates a new sub-context via context.WithTimeout(ctx, s.timeout), but that is bounded by the already-running outer deadline. For a store with N tokens, the effective per-token budget is (10s − t_ls) / N. Once the outer deadline fires, every remaining call returns immediately with a context-expired error, silently omitting tokens from the result.

Create a fresh child context per GetToken iteration to give each call its own full budget:

🛡️ Proposed fix
 func (s *PassStore) ListTokens(ctx context.Context) ([]domain.Token, error) {
 	listCtx, cancel := context.WithTimeout(ctx, s.timeout)
 	defer cancel()
 
-	cmd := exec.CommandContext(listCtx, "pass", "ls", passTokenPath) //nolint:gosec // passTokenPath is a fixed constant
+	cmd := exec.CommandContext(listCtx, "pass", "ls", passTokenPath) //nolint:gosec
 	output, err := cmd.Output()
 	if err != nil {
 		return []domain.Token{}, nil
 	}
 
 	subjects := parsePassLsOutput(string(output))
 
 	var tokens []domain.Token
 	for _, subject := range subjects {
 		if strings.HasSuffix(subject, ".meta") {
 			continue
 		}
+		getCtx, getCancel := context.WithTimeout(ctx, s.timeout)
-		_, token, err := s.GetToken(ctx, subject)
+		_, token, err := s.GetToken(getCtx, subject)
+		getCancel()
 		if err != nil {
 			s.log.Warn().Err(err).Str("subject", subject).Msg("failed to get token")
 			continue
 		}
 		tokens = append(tokens, *token)
 	}
 	return tokens, nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/tokenstore/pass.go` around lines 330 - 358, The loop is
using the outer timeout-bound ctx when calling GetToken, which causes the
per-token calls to share the same remaining deadline; instead, for each subject
create a fresh child context with the full per-call timeout (use
context.WithTimeout(context.Background(), s.timeout) or equivalent) and pass
that newCtx to s.GetToken, calling its cancel immediately after the call to
avoid leaks; keep the outer ctx for the initial exec.CommandContext("pass",
"ls", passTokenPath) but stop propagating that timeout into each GetToken
invocation.
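The budget problem is easy to demonstrate with a simulated lookup (fetch, countFailures, and the timings are invented for this sketch; fetch stands in for GetToken):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// fetch simulates a per-token lookup that takes `cost` of wall time and
// aborts early if the context deadline fires first.
func fetch(ctx context.Context, cost time.Duration) error {
	select {
	case <-time.After(cost):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// countFailures runs three 60ms lookups. With fresh=false they all share a
// single 150ms deadline (the shape the review flags); with fresh=true each
// call gets its own 150ms budget (the proposed fix).
func countFailures(fresh bool) int {
	shared, cancel := context.WithTimeout(context.Background(), 150*time.Millisecond)
	defer cancel()

	failures := 0
	for i := 0; i < 3; i++ {
		ctx, cancelCall := shared, context.CancelFunc(func() {})
		if fresh {
			ctx, cancelCall = context.WithTimeout(context.Background(), 150*time.Millisecond)
		}
		if fetch(ctx, 60*time.Millisecond) != nil {
			failures++
		}
		cancelCall()
	}
	return failures
}

func main() {
	// Shared deadline starves the third call; fresh deadlines never fail.
	fmt.Println(countFailures(false), countFailures(true))
}
```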
♻️ Duplicate comments (4)
internal/adapters/in/cli/push.go (2)

467-467: ⚠️ Potential issue | 🟠 Major

Thread context.Context through git-SHA resolution to preserve cancellation.

Line 504 uses exec.Command(...) in a flow that already has a cancellable context. If cancellation happens during SHA resolution, that subprocess is not tied to the request lifecycle.

Proposed fix
-func buildAndPush(ctx context.Context, version, platform, dockerfile string, buildArgs []string, versionRef, latestRef string) error {
+func buildAndPush(ctx context.Context, version, platform, dockerfile string, buildArgs []string, versionRef, latestRef string) error {
@@
-	buildCmd := exec.CommandContext(ctx, "docker", buildImageArgs(version, platform, dockerfile, buildArgs, versionRef, latestRef)...) // `#nosec` G204
+	buildCmd := exec.CommandContext(ctx, "docker", buildImageArgs(ctx, version, platform, dockerfile, buildArgs, versionRef, latestRef)...) // `#nosec` G204
@@
-func standardBuildArgs(version string) []string {
-	gitSHA := resolveGitSHA()
+func standardBuildArgs(ctx context.Context, version string) []string {
+	gitSHA := resolveGitSHA(ctx)
@@
-func resolveGitSHA() string {
-	out, err := exec.Command("git", "rev-parse", "--short", "HEAD").Output() // `#nosec` G204
+func resolveGitSHA(ctx context.Context) string {
+	out, err := exec.CommandContext(ctx, "git", "rev-parse", "--short", "HEAD").Output() // `#nosec` G204
@@
-func buildImageArgs(version, platform, dockerfile string, buildArgs []string, versionRef, latestRef string) []string {
+func buildImageArgs(ctx context.Context, version, platform, dockerfile string, buildArgs []string, versionRef, latestRef string) []string {
@@
-	for _, ba := range standardBuildArgs(version) {
+	for _, ba := range standardBuildArgs(ctx, version) {
#!/bin/bash
set -euo pipefail
# Verify context threading across buildImageArgs -> standardBuildArgs -> resolveGitSHA.
rg -n -C2 'func buildImageArgs|func standardBuildArgs|func resolveGitSHA|buildImageArgs\(|standardBuildArgs\(|resolveGitSHA\(' internal/adapters/in/cli/push.go

Also applies to: 491-493, 502-505, 514-515, 527-528

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/in/cli/push.go` at line 467, The subprocess invocations are
not using the existing cancellable context; update the call sites that invoke
exec.Command(...) (e.g., the line creating buildCmd) to use
exec.CommandContext(ctx, ...) and then thread a context parameter through
buildImageArgs, standardBuildArgs, and resolveGitSHA so any git-SHA resolution
and arg construction use that ctx; modify the function signatures for
buildImageArgs, standardBuildArgs, and resolveGitSHA to accept ctx
context.Context and propagate it through all internal calls, replacing any
exec.Command uses inside those functions with exec.CommandContext(ctx, ...) so
cancellation is honored across the flow.

718-721: ⚠️ Potential issue | 🟡 Minor

Make the fallback warning reflect actual git failures.

Line 720 always says “no git tags found,” but this branch also handles other errors (e.g., git missing, not a repo), which makes troubleshooting harder.

Proposed fix
-		fmt.Fprintln(os.Stderr, "Warning: no git tags found — image version will be 'latest'. Tag your repo to get versioned images.")
+		fmt.Fprintf(os.Stderr, "Warning: unable to derive git version (%v); image version will be 'latest'. Tag your repo to get versioned images.\n", err)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/in/cli/push.go` around lines 718 - 721, The warning message
for the exec.CommandContext(...) call that runs "git describe --tags --dirty"
(variables out, err) is misleading because the error branch always prints "no
git tags found" even for other failures; update the error handling in the push
flow to include the actual error text (err.Error()) in the stderr message and
change the wording to reflect a generic git failure (e.g., "Unable to determine
git tag: <err> — image version will be 'latest'") so callers can see whether git
is missing, not a repo, or truly lacking tags.
internal/adapters/out/tokenstore/pass.go (2)

345-347: validateSubject still permits the .meta suffix.

The .meta filter here correctly skips metadata companion files, but validateSubject still allows a subject named e.g. alice.meta to be created. Such a token would always be silently excluded from ListTokens. This was raised in a prior review; the fix (rejecting .meta-suffixed subjects in validateSubject) remains unapplied.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/tokenstore/pass.go` around lines 345 - 347,
validateSubject currently permits names ending with ".meta" which leads to
silently excluded tokens; update the validateSubject function to reject any
subject that has the ".meta" suffix (e.g., return an error like "subject may not
end with .meta") so creation/updates fail early; ensure the check is
case-sensitive consistent with existing logic and tie the error into the same
validation error path used by validateSubject so callers get a clear rejection
instead of producing tokens that ListTokens will always skip.
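A minimal sketch of the requested guard (the real validateSubject has additional rules; this shows only the .meta rejection, case-sensitive as the prompt specifies):

```go
package main

import (
	"fmt"
	"strings"
)

// validateSubject rejects names that would collide with the ".meta"
// companion files, so such tokens fail early instead of being silently
// skipped by ListTokens later.
func validateSubject(subject string) error {
	if subject == "" {
		return fmt.Errorf("subject must not be empty")
	}
	if strings.HasSuffix(subject, ".meta") {
		return fmt.Errorf("subject %q may not end with .meta", subject)
	}
	return nil
}

func main() {
	fmt.Println(validateSubject("alice"))      // <nil>
	fmt.Println(validateSubject("alice.meta")) // rejected with an error
}
```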

443-447: Nil-token panic and upsert semantics still unaddressed.

UpdateTokenExpiry forwards unconditionally to SaveToken; a nil token dereferences token.Subject at line 94 (panic), and a missing token is silently created rather than returning an error. These were raised in a prior review; the guard and existence-check are still missing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/tokenstore/pass.go` around lines 443 - 447,
UpdateTokenExpiry must not unconditionally forward to SaveToken because a nil
token causes a panic when dereferencing token.Subject and SaveToken currently
upserts (creating missing tokens). Add a nil check in
PassStore.UpdateTokenExpiry (return a descriptive error if token == nil) and
ensure UpdateTokenExpiry verifies the token already exists before updating
(e.g., call the existing lookup method used elsewhere such as
GetToken/GetBySubject or check the backend for token.Subject) and return an
error if not found; do not rely on SaveToken's upsert behavior or change its
semantics here.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.golangci.yml:
- Around line 61-80: The current .golangci.yml adds repo-wide suppressions for
gosec rule IDs G117, G703, G704, and G705 (the blocks with text: "G117:",
"G703:", "G704:", "G705:"), which is too broad; instead, replace these global
text-based ignores with path-scoped exclusions targeting only the specific files
that legitimately require them (or remove the entries and add inline `#nosec`
comments at the exact lines in those files with a brief justification), ensuring
each suppression references the specific file(s) and rationale rather than
disabling the rule across the whole repository.

In `@internal/adapters/in/cli/ui/components/selector.go`:
- Line 85: Replace the fmt.Fprintf(&b, "%s%s", cursor, line) call with explicit
writes to the strings.Builder using b.WriteString and explicitly discard the
returned values to satisfy linters: call _, _ = b.WriteString(cursor) and _, _ =
b.WriteString(line). This keeps the single-allocation optimization (writing
directly into the Builder), removes fmt.Fprintf usage, and signals intentional
ignoring of the (int, error) returns for variables b, cursor, and line.

In `@internal/adapters/out/domainsecrets/pass_store.go`:
- Line 649: Change the broad //nolint:gosec suppressions to target only rule
G204: replace each "//nolint:gosec" on the exec.CommandContext lines (the cmd :=
exec.CommandContext(ctx, "pass", "insert", "-m", "-f", path) and the three other
occurrences flagged) with "//nolint:gosec // G204: <justification>" so the
directive records that only the subprocess-with-variable-args rule is being
waived while leaving other gosec checks active.

In `@internal/adapters/out/tokenstore/pass.go`:
- Around line 286-315: The local type frame duplicates treeEntry; remove the
redundant frame definition and change stack from []frame to []treeEntry so you
can push entries directly without the frame(entry) conversion. Update uses of
stack (the parts-building loop and the pop condition) to treat items as
treeEntry (fields name and depth remain the same), leave parsePassLsEntries and
entries iteration unchanged, and ensure stack append becomes stack =
append(stack, entry).
- Around line 230-249: The parsing silently produces garbage when idx < 3
(Unicode) or idx < 1 (ASCII) because branchByteIdx is clamped to 0 and
nameOffset is then computed from the wrong anchor; change the logic in the
branch that handles the Unicode ("── ") and ASCII ("-- ") cases so that if
branchByteIdx would be negative you skip this line/entry (return/continue from
the loop) instead of setting branchByteIdx = 0. Adjust the handling around
computing branchRune and nameOffset (the variables branchByteIdx, branchRune,
nameOffset used in these branches) so they are only computed after confirming
branchByteIdx >= 0, and ensure the code path that currently clamps to zero is
replaced with an explicit skip to avoid appending corrupt names.

---

Outside diff comments:
In `@internal/adapters/out/domainsecrets/pass_store.go`:
- Around line 806-815: The cleanupInsertedPaths function creates a context with
context.WithTimeout and calls s.passRemove but cancels the context with cancel()
non-deferred, which can leak if passRemove panics; update cleanupInsertedPaths
to call cancel via defer immediately after creating ctx (i.e., ctx, cancel :=
context.WithTimeout(...); defer cancel()) so the cancel is always run, and apply
the same fix to other occurrences such as ListKeys and any loop bodies that
create contexts (or wrap loop bodies in a helper function/closure when deferring
inside a loop would otherwise delay cancellation until function return).
- Around line 36-38: Replace the bare exec.Command("pass", "version").Run()
probe with a context.WithTimeout of 10 seconds and use exec.CommandContext(ctx,
"pass", "version").Run(), ensuring you call cancel() via defer; keep the same
error wrapping (fmt.Errorf("pass is not available: %w", err)). Locate the
startup availability check in pass_store.go (the block that calls
exec.Command("pass", "version")) and mirror the timeout pattern used elsewhere
in this file (10s) so a stalled GPG/keyring does not hang initialization.

In `@internal/adapters/out/secrets/pass.go`:
- Around line 108-111: The IsAvailable method of PassProvider currently runs
exec.Command("pass", "version") without a deadline and can hang; update
PassProvider.IsAvailable to create a context with a short timeout (e.g., 1-3s),
use exec.CommandContext with that context to run "pass version", and treat any
context deadline exceeded or other error as unavailable (return false); ensure
the command is cancelled when the context times out so the subprocess is killed
and IsAvailable returns promptly.

In `@internal/adapters/out/tokenstore/pass.go`:
- Around line 330-358: The loop is using the outer timeout-bound ctx when
calling GetToken, which causes the per-token calls to share the same remaining
deadline; instead, for each subject create a fresh child context with the full
per-call timeout (use context.WithTimeout(context.Background(), s.timeout) or
equivalent) and pass that newCtx to s.GetToken, calling its cancel immediately
after the call to avoid leaks; keep the outer ctx for the initial
exec.CommandContext("pass", "ls", passTokenPath) but stop propagating that
timeout into each GetToken invocation.

---

Duplicate comments:
In `@internal/adapters/in/cli/push.go`:
- Line 467: The subprocess invocations are not using the existing cancellable
context; update the call sites that invoke exec.Command(...) (e.g., the line
creating buildCmd) to use exec.CommandContext(ctx, ...) and then thread a
context parameter through buildImageArgs, standardBuildArgs, and resolveGitSHA
so any git-SHA resolution and arg construction use that ctx; modify the function
signatures for buildImageArgs, standardBuildArgs, and resolveGitSHA to accept
ctx context.Context and propagate it through all internal calls, replacing any
exec.Command uses inside those functions with exec.CommandContext(ctx, ...) so
cancellation is honored across the flow.
- Around line 718-721: The warning message for the exec.CommandContext(...) call
that runs "git describe --tags --dirty" (variables out, err) is misleading
because the error branch always prints "no git tags found" even for other
failures; update the error handling in the push flow to include the actual error
text (err.Error()) in the stderr message and change the wording to reflect a
generic git failure (e.g., "Unable to determine git tag: <err> — image version
will be 'latest'") so callers can see whether git is missing, not a repo, or
truly lacking tags.

In `@internal/adapters/out/tokenstore/pass.go`:
- Around line 345-347: validateSubject currently permits names ending with
".meta" which leads to silently excluded tokens; update the validateSubject
function to reject any subject that has the ".meta" suffix (e.g., return an
error like "subject may not end with .meta") so creation/updates fail early;
ensure the check is case-sensitive consistent with existing logic and tie the
error into the same validation error path used by validateSubject so callers get
a clear rejection instead of producing tokens that ListTokens will always skip.
- Around line 443-447: UpdateTokenExpiry must not unconditionally forward to
SaveToken because a nil token causes a panic when dereferencing token.Subject
and SaveToken currently upserts (creating missing tokens). Add a nil check in
PassStore.UpdateTokenExpiry (return a descriptive error if token == nil) and
ensure UpdateTokenExpiry verifies the token already exists before updating
(e.g., call the existing lookup method used elsewhere such as
GetToken/GetBySubject or check the backend for token.Subject) and return an
error if not found; do not rely on SaveToken's upsert behavior or change its
semantics here.

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b20fdb2 and df677c2.

📒 Files selected for processing (11)
  • .github/workflows/ci.yml
  • .golangci.yml
  • internal/adapters/dto/registry_token.go
  • internal/adapters/in/cli/auth.go
  • internal/adapters/in/cli/push.go
  • internal/adapters/in/cli/remote/auth.go
  • internal/adapters/in/cli/ui/components/selector.go
  • internal/adapters/out/domainsecrets/pass_store.go
  • internal/adapters/out/secrets/pass.go
  • internal/adapters/out/tokenstore/pass.go
  • internal/usecase/container/autoroute.go

Comment on lines +61 to +80
      # gosec G117: exported struct fields matching secret-name patterns are intentional DTOs
      # (OCI registry tokens, CLI credential request bodies, telemetry config, JWT response frames).
      # These fields must be exported and carry credentials by design — they are not accidental leaks.
      - text: "G117:"
        linters:
          - gosec
      # gosec G703/G704/G705: taint-analysis false positives for a CLI tool and reverse proxy.
      # G704 (SSRF): target URLs come from operator-configured routing tables or explicit CLI flags,
      # not from untrusted user input. G703 (path traversal): XDG_RUNTIME_DIR is a trusted env var
      # set by the login session, not user-supplied. G705 (XSS): registry manifest data is binary
      # OCI content served with an explicit Content-Type, not rendered as HTML.
      - text: "G703:"
        linters:
          - gosec
      - text: "G704:"
        linters:
          - gosec
      - text: "G705:"
        linters:
          - gosec

⚠️ Potential issue | 🟠 Major

Scope gosec suppressions; avoid repository-wide rule-ID ignores.

Line 61 through Line 80 add global text-based ignores for G117/G703/G704/G705. This suppresses those checks across the entire repo and can silently mask future real vulnerabilities outside the currently-justified call sites.

Use path-scoped exclusions (for narrowly known files) or inline #nosec with justification at exact lines instead of disabling whole IDs globally.

Proposed config tightening
-      # gosec G117: exported struct fields matching secret-name patterns are intentional DTOs
-      # (OCI registry tokens, CLI credential request bodies, telemetry config, JWT response frames).
-      # These fields must be exported and carry credentials by design — they are not accidental leaks.
-      - text: "G117:"
-        linters:
-          - gosec
-      # gosec G703/G704/G705: taint-analysis false positives for a CLI tool and reverse proxy.
-      # G704 (SSRF): target URLs come from operator-configured routing tables or explicit CLI flags,
-      # not from untrusted user input. G703 (path traversal): XDG_RUNTIME_DIR is a trusted env var
-      # set by the login session, not user-supplied. G705 (XSS): registry manifest data is binary
-      # OCI content served with an explicit Content-Type, not rendered as HTML.
-      - text: "G703:"
-        linters:
-          - gosec
-      - text: "G704:"
-        linters:
-          - gosec
-      - text: "G705:"
-        linters:
-          - gosec
+      # Keep gosec findings visible globally.
+      # Suppress only specific known false positives via:
+      # 1) path-scoped exclusions for exact packages/files, or
+      # 2) inline `#nosec Gxxx -- justification` at the precise line.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.golangci.yml around lines 61 - 80, The current .golangci.yml adds repo-wide
suppressions for gosec rule IDs G117, G703, G704, and G705 (the blocks with
text: "G117:", "G703:", "G704:", "G705:"), which is too broad; instead, replace
these global text-based ignores with path-scoped exclusions targeting only the
specific files that legitimately require them (or remove the entries and add
inline `#nosec` comments at the exact lines in those files with a brief
justification), ensuring each suppression references the specific file(s) and
rationale rather than disabling the rule across the whole repository.

 		b.WriteString(styles.Theme.Highlight.Render(cursor + line))
 	} else {
-		b.WriteString(fmt.Sprintf("%s%s", cursor, line))
+		fmt.Fprintf(&b, "%s%s", cursor, line)

🧹 Nitpick | 🔵 Trivial

fmt.Fprintf return values discarded — safe here, but consider errcheck linting.

strings.Builder.Write never returns a non-nil error, so silently discarding fmt.Fprintf's (int, error) return is safe at runtime. However, errcheck and similar linters will flag this. If the project enforces errcheck, the idiomatic alternatives are either:

♻️ Option A – keep direct write, suppress via explicit discard
-		fmt.Fprintf(&b, "%s%s", cursor, line)
+		_, _ = fmt.Fprintf(&b, "%s%s", cursor, line)
♻️ Option B – revert to the previous pattern (no linter noise, one allocation)
-		fmt.Fprintf(&b, "%s%s", cursor, line)
+		b.WriteString(cursor + line)

Otherwise, the optimization itself — writing directly into the strings.Builder rather than allocating an intermediate string via fmt.Sprintf — is correct and the rendered output is identical.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/in/cli/ui/components/selector.go` at line 85, Replace the
fmt.Fprintf(&b, "%s%s", cursor, line) call with explicit writes to the
strings.Builder using b.WriteString and explicitly discard the returned values
to satisfy linters: call _, _ = b.WriteString(cursor) and _, _ =
b.WriteString(line). This keeps the single-allocation optimization (writing
directly into the Builder), removes fmt.Fprintf usage, and signals intentional
ignoring of the (int, error) returns for variables b, cursor, and line.


 func (s *PassStore) passInsert(ctx context.Context, path, value string) error {
-	cmd := exec.CommandContext(ctx, "pass", "insert", "-m", "-f", path)
+	cmd := exec.CommandContext(ctx, "pass", "insert", "-m", "-f", path) //nolint:gosec // binary is constant ("pass"); path arguments validated by secrets path validator

🧹 Nitpick | 🔵 Trivial

Scope the suppression to G204 for precision.

All four //nolint:gosec directives suppress every gosec rule on the respective line, but only G204 (subprocess with variable args) is actually triggered. Narrowing the directive prevents silently hiding any future gosec rule that might fire on the same line.

🔧 Proposed narrowing
-	cmd := exec.CommandContext(ctx, "pass", "insert", "-m", "-f", path) //nolint:gosec // binary is constant ("pass"); path arguments validated by secrets path validator
+	cmd := exec.CommandContext(ctx, "pass", "insert", "-m", "-f", path) //nolint:gosec // G204: binary is constant ("pass"); path arguments validated by secrets path validator

-	cmd := exec.CommandContext(ctx, "pass", "rm", "-f", path) //nolint:gosec // binary is constant ("pass"); path arguments validated by secrets path validator
+	cmd := exec.CommandContext(ctx, "pass", "rm", "-f", path) //nolint:gosec // G204: binary is constant ("pass"); path arguments validated by secrets path validator

-	cmd := exec.CommandContext(ctx, "pass", "show", path) //nolint:gosec // binary is constant ("pass"); path arguments validated by secrets path validator
+	cmd := exec.CommandContext(ctx, "pass", "show", path) //nolint:gosec // G204: binary is constant ("pass"); path arguments validated by secrets path validator

-	cmd := exec.CommandContext(ctx, "pass", "ls", basePath) //nolint:gosec // binary is constant ("pass"); path arguments validated by secrets path validator
+	cmd := exec.CommandContext(ctx, "pass", "ls", basePath) //nolint:gosec // G204: binary is constant ("pass"); path arguments validated by secrets path validator

Alternatively, if the project uses golangci-lint's nolintlint with require-specific: true, the linter itself will enforce this.

Also applies to: 659-659, 671-671, 690-690

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/domainsecrets/pass_store.go` at line 649, Change the
broad //nolint:gosec suppressions to target only rule G204: replace each
"//nolint:gosec" on the exec.CommandContext lines (the cmd :=
exec.CommandContext(ctx, "pass", "insert", "-m", "-f", path) and the three other
occurrences flagged) with "//nolint:gosec // G204: <justification>" so the
directive records that only the subprocess-with-variable-args rule is being
waived while leaving other gosec checks active.

Comment on lines +286 to +315
	type frame struct {
		depth int
		name  string
	}

	entries := parsePassLsEntries(output)

	var subjects []string
	var stack []frame

	for i, entry := range entries {
		// Pop ancestors that are at the same depth or deeper than the current entry.
		for len(stack) > 0 && stack[len(stack)-1].depth >= entry.depth {
			stack = stack[:len(stack)-1]
		}

		// Build the full path for this entry by joining ancestors with this name.
		parts := make([]string, 0, len(stack)+1)
		for _, f := range stack {
			parts = append(parts, f.name)
		}
		parts = append(parts, entry.name)
		subject := strings.Join(parts, "/")

		// Determine whether this entry is an intermediate directory node.
		// A node is a directory if the immediately following entry is deeper.
		isDir := i+1 < len(entries) && entries[i+1].depth > entry.depth

		// Always push so deeper entries can use this as ancestor context.
		stack = append(stack, frame(entry))

🧹 Nitpick | 🔵 Trivial

frame is a redundant duplicate of treeEntry.

The local frame struct has the exact same two fields as treeEntry. Reuse treeEntry directly for the stack to eliminate the dead type and the frame(entry) conversion.

♻️ Proposed refactor
-	type frame struct {
-		depth int
-		name  string
-	}
-
 	entries := parsePassLsEntries(output)
 
 	var subjects []string
-	var stack []frame
+	var stack []treeEntry
 
 	for i, entry := range entries {
 		for len(stack) > 0 && stack[len(stack)-1].depth >= entry.depth {
 			stack = stack[:len(stack)-1]
 		}
 		parts := make([]string, 0, len(stack)+1)
 		for _, f := range stack {
 			parts = append(parts, f.name)
 		}
 		parts = append(parts, entry.name)
 		subject := strings.Join(parts, "/")
 
 		isDir := i+1 < len(entries) && entries[i+1].depth > entry.depth
-		stack = append(stack, frame(entry))
+		stack = append(stack, entry)
 
 		if !isDir {
 			subjects = append(subjects, subject)
 		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/tokenstore/pass.go` around lines 286 - 315, The local
type frame duplicates treeEntry; remove the redundant frame definition and
change stack from []frame to []treeEntry so you can push entries directly
without the frame(entry) conversion. Update uses of stack (the parts-building
loop and the pop condition) to treat items as treeEntry (fields name and depth
remain the same), leave parsePassLsEntries and entries iteration unchanged, and
ensure stack append becomes stack = append(stack, entry).

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (6)
internal/adapters/in/cli/remote/config.go (1)

140-163: LGTM — coupling comment now in place.

The delegation in ResolveRemote and the new ResolveRemoteFull are clean. The inline comment at lines 155–156 addresses the previously flagged implicit coupling between resolveRemoteURL returning the active name and resolveToken independently looking up remotes.Active.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/in/cli/remote/config.go` around lines 140 - 163, No change
needed: the review approves the refactor and coupling clarification; keep
ResolveRemote delegating to ResolveRemoteFull and retain the inline comment in
ResolveRemoteFull explaining the implicit coupling between resolveRemoteURL
(which may set the active remote name) and resolveToken/resolveInsecureTLS that
read remotes.Active and use the name parameter (functions: ResolveRemote,
ResolveRemoteFull, resolveRemoteURL, resolveToken, resolveInsecureTLS); no code
modifications required.
internal/adapters/in/cli/remote/client.go (2)

175-183: Token rotation is now observed on both request() and streamLogs() — LGTM.

The previous gap where streamLogs bypassed X-Gordon-Token observation is resolved. Header reads at line 931 happen before the body is consumed, so this is safe.

Also applies to: 930-931

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/in/cli/remote/client.go` around lines 175 - 183, Call
c.observeTokenRotation(resp) wherever an HTTP response may carry a re-signed
token so streamed responses are handled the same as regular requests: add the
call in the streamLogs flow immediately after c.httpClient.Do(req) and before
reading/consuming the body (since headers are available at that point). This
mirrors the behavior in request() and ensures X-Gordon-Token header handling is
applied in streamLogs as well; reference the functions/methods request(),
streamLogs(), and observeTokenRotation to locate where to add the call.

107-115: Persistence errors in the token-refresh callback are silently swallowed.

The callback signature is func(newToken string) with no error return, so any failure (e.g., UpdateRemoteToken failing to write to disk) is invisible. The root.go caller already discards errors with _ = remote.UpdateRemoteToken(...). This was flagged in a previous review; the design remains unchanged.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/in/cli/remote/client.go` around lines 107 - 115, The
token-refresh callback c.onTokenRefreshed currently swallows persistence
failures because the callback signature is func(newToken string) and callers
(root.go calling remote.UpdateRemoteToken) discard errors; wrap the callback
invocation so it calls the existing UpdateRemoteToken (remote.UpdateRemoteToken)
inside the protected anonymous function and explicitly capture and log any
returned error (e.g., fmt.Fprintf(os.Stderr,... ) or process logger) instead of
ignoring it, preserving the panic recovery already present around
c.onTokenRefreshed; this ensures persistence errors from UpdateRemoteToken are
surfaced even though the callback signature cannot be changed.
internal/app/run.go (1)

858-881: ⚠️ Potential issue | 🟡 Minor

Only the last encountered error is surfaced; earlier errors (e.g. EACCES on a higher-priority path) are silently discarded.

If path 1 returns EACCES and path 3 returns a parse error, the caller sees only the parse error, losing the higher-priority EACCES signal. Accumulating all non-IsNotExist errors preserves the full diagnostic trail, as was originally proposed.

🐛 Proposed fix
 func GetInternalCredentialsFromCandidates(candidates []string) (*InternalCredentials, error) {
-	var lastErr error
+	var errs []string
 	for _, path := range candidates {
 		data, err := os.ReadFile(path)
 		if os.IsNotExist(err) {
 			continue
 		}
 		if err != nil {
-			lastErr = fmt.Errorf("failed to read credentials file %s: %w", path, err)
+			errs = append(errs, fmt.Sprintf("%s: read error: %v", path, err))
 			continue
 		}
 		var creds InternalCredentials
 		if err := json.Unmarshal(data, &creds); err != nil {
-			lastErr = fmt.Errorf("failed to parse credentials at %s: %w", path, err)
+			errs = append(errs, fmt.Sprintf("%s: parse error: %v", path, err))
 			continue
 		}
 		return &creds, nil
 	}
-	if lastErr != nil {
-		return nil, lastErr
+	if len(errs) > 0 {
+		return nil, fmt.Errorf("no usable credentials file found; checked %v (errors: %s)", candidates, strings.Join(errs, "; "))
 	}
 	return nil, fmt.Errorf("no credentials file found (is Gordon running?): checked %v", candidates)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/app/run.go` around lines 858 - 881, The loop over candidates
currently keeps only the last non-NotExist error in lastErr (set when
os.ReadFile or json.Unmarshal fails), which discards earlier important errors;
modify the code in the function that iterates candidates to accumulate all
non-IsNotExist errors (e.g. collect into a slice or build a combined error via
fmt.Errorf("%w; %w", ...) or errors.Join) instead of overwriting lastErr, make
sure errors from os.ReadFile (including EACCES) and json.Unmarshal are
appended/joined with context mentioning the candidate path, and at the end
return the aggregated error when no valid credentials were found rather than a
single lastErr.
internal/adapters/out/tokenstore/pass.go (2)

290-320: 🧹 Nitpick | 🔵 Trivial

frame duplicates treeEntry; reuse one type for the stack.

This removes a redundant local type and simplifies stack handling.

♻️ Optional cleanup
-	type frame struct {
-		depth int
-		name  string
-	}
-
 	entries := parsePassLsEntries(output)
 
 	var subjects []string
-	var stack []frame
+	var stack []treeEntry
@@
-		stack = append(stack, frame(entry))
+		stack = append(stack, entry)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/tokenstore/pass.go` around lines 290 - 320, The local
type frame duplicates the existing treeEntry type returned by
parsePassLsEntries; replace frame with treeEntry to avoid redundancy: stop
declaring type frame, change stack's element type to treeEntry, update
references to frame(entry) to just entry (or cast to treeEntry if needed), and
ensure the loop that builds parts reads f.name from treeEntry values; this
affects the variables stack, parts-building loop, and the stack append where
currently stack = append(stack, frame(entry)).

234-250: ⚠️ Potential issue | 🟡 Minor

Skip malformed tree lines instead of clamping branch index to zero.

branchByteIdx clamping can still derive incorrect offsets and append corrupt names for malformed lines. Prefer dropping such lines immediately.

🔧 Proposed hardening
 		if idx := strings.Index(line, "── "); idx != -1 {
 			// Unicode variant: the two '─' dashes (U+2500) start at byte idx.
 			// The branch character (├ or └, 3 bytes each) immediately precedes them.
 			branchByteIdx := idx - 3
 			if branchByteIdx < 0 {
-				branchByteIdx = 0
+				continue // malformed line
 			}
@@
 		} else if idx := strings.Index(line, "-- "); idx != -1 {
 			// ASCII variant: branch char (| or `, 1 byte) is immediately before the dashes.
 			branchByteIdx := idx - 1
 			if branchByteIdx < 0 {
-				branchByteIdx = 0
+				continue // malformed line
 			}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/tokenstore/pass.go` around lines 234 - 250, The parsing
branch that computes branchByteIdx for the Unicode and ASCII tree-line cases
should not clamp negative indices to zero because that produces incorrect
branchRune and nameOffset and corrupts names; in the blocks that currently set
branchByteIdx := idx - 3 (Unicode) and branchByteIdx := idx - 1 (ASCII), check
if branchByteIdx < 0 and if so skip the malformed line (e.g.,
continue/return/skip this iteration) instead of resetting to 0, then proceed to
compute branchRune = utf8.RuneCountInString(line[:branchByteIdx]) and nameOffset
as before; update both branches where branchByteIdx is clamped to implement this
early-skip behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/adapters/in/cli/push_test.go`:
- Around line 373-396: The test currently closes w and restores os.Stderr twice;
change the defer to only restore os.Stderr (remove w.Close() from the deferred
func) and remove the later explicit os.Stderr = origStderr so there’s a single
restore, but keep the explicit w.Close() before reading from r (so w is closed
prior to io.ReadAll); update the defer and the post-getGitVersion cleanup
accordingly around the variables w, r, origStderr and the call to getGitVersion.

In `@internal/adapters/in/cli/remote/client.go`:
- Around line 88-117: Add synchronization around the Client.token field by
adding a mutex (e.g., tokenMu sync.RWMutex) to the Client struct and use it to
protect reads and writes: in observeTokenRotation, acquire the write lock before
comparing/updating c.token and release it after the update (but invoke
c.onTokenRefreshed without holding the lock); in request() acquire the read lock
when reading c.token and release it immediately after copying the value to use
for the request. Ensure the token callback invocation still runs protected from
panics as currently implemented.

In `@internal/app/run.go`:
- Line 837: The construction of sysRuntime uses fmt.Sprintf("%d", uid) which is
inconsistent with pidFileLocations() and less efficient; replace the fmt.Sprintf
call with strconv.Itoa(uid) when building sysRuntime (the variable assigned via
filepath.Join in run.go) and add the necessary strconv import if missing so the
UID string is produced using strconv.Itoa to match the existing pattern.

---

Duplicate comments:
In `@internal/adapters/in/cli/remote/client.go`:
- Around line 175-183: Call c.observeTokenRotation(resp) wherever an HTTP
response may carry a re-signed token so streamed responses are handled the same
as regular requests: add the call in the streamLogs flow immediately after
c.httpClient.Do(req) and before reading/consuming the body (since headers are
available at that point). This mirrors the behavior in request() and ensures
X-Gordon-Token header handling is applied in streamLogs as well; reference the
functions/methods request(), streamLogs(), and observeTokenRotation to locate
where to add the call.
- Around line 107-115: The token-refresh callback c.onTokenRefreshed currently
swallows persistence failures because the callback signature is func(newToken
string) and callers (root.go calling remote.UpdateRemoteToken) discard errors;
wrap the callback invocation so it calls the existing UpdateRemoteToken
(remote.UpdateRemoteToken) inside the protected anonymous function and
explicitly capture and log any returned error (e.g., fmt.Fprintf(os.Stderr,... )
or process logger) instead of ignoring it, preserving the panic recovery already
present around c.onTokenRefreshed; this ensures persistence errors from
UpdateRemoteToken are surfaced even though the callback signature cannot be
changed.

In `@internal/adapters/in/cli/remote/config.go`:
- Around line 140-163: No change needed: the review approves the refactor and
coupling clarification; keep ResolveRemote delegating to ResolveRemoteFull and
retain the inline comment in ResolveRemoteFull explaining the implicit coupling
between resolveRemoteURL (which may set the active remote name) and
resolveToken/resolveInsecureTLS that read remotes.Active and use the name
parameter (functions: ResolveRemote, ResolveRemoteFull, resolveRemoteURL,
resolveToken, resolveInsecureTLS); no code modifications required.

In `@internal/adapters/out/tokenstore/pass.go`:
- Around line 290-320: The local type frame duplicates the existing treeEntry
type returned by parsePassLsEntries; replace frame with treeEntry to avoid
redundancy: stop declaring type frame, change stack's element type to treeEntry,
update references to frame(entry) to just entry (or cast to treeEntry if
needed), and ensure the loop that builds parts reads f.name from treeEntry
values; this affects the variables stack, parts-building loop, and the stack
append where currently stack = append(stack, frame(entry)).
- Around line 234-250: The parsing branch that computes branchByteIdx for the
Unicode and ASCII tree-line cases should not clamp negative indices to zero
because that produces incorrect branchRune and nameOffset and corrupts names; in
the blocks that currently set branchByteIdx := idx - 3 (Unicode) and
branchByteIdx := idx - 1 (ASCII), check if branchByteIdx < 0 and if so skip the
malformed line (e.g., continue/return/skip this iteration) instead of resetting
to 0, then proceed to compute branchRune =
utf8.RuneCountInString(line[:branchByteIdx]) and nameOffset as before; update
both branches where branchByteIdx is clamped to implement this early-skip
behavior.

In `@internal/app/run.go`:
- Around line 858-881: The loop over candidates currently keeps only the last
non-NotExist error in lastErr (set when os.ReadFile or json.Unmarshal fails),
which discards earlier important errors; modify the code in the function that
iterates candidates to accumulate all non-IsNotExist errors (e.g. collect into a
slice or build a combined error via fmt.Errorf("%w; %w", ...) or errors.Join)
instead of overwriting lastErr, make sure errors from os.ReadFile (including
EACCES) and json.Unmarshal are appended/joined with context mentioning the
candidate path, and at the end return the aggregated error when no valid
credentials were found rather than a single lastErr.

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between df677c2 and 2473f94.

📒 Files selected for processing (12)
  • internal/adapters/in/cli/push.go
  • internal/adapters/in/cli/push_test.go
  • internal/adapters/in/cli/remote/client.go
  • internal/adapters/in/cli/remote/config.go
  • internal/adapters/in/http/admin/middleware.go
  • internal/adapters/in/http/admin/middleware_test.go
  • internal/adapters/in/http/httputil/localhost.go
  • internal/adapters/out/tokenstore/pass.go
  • internal/adapters/out/tokenstore/unsafe_test.go
  • internal/app/run.go
  • internal/usecase/auth/service.go
  • internal/usecase/container/service.go

Comment on lines +373 to +396
origStderr := os.Stderr
r, w, err := os.Pipe()
if err != nil {
t.Fatal(err)
}
os.Stderr = w
defer func() {
w.Close()
os.Stderr = origStderr
}()

v := getGitVersion(ctx)

w.Close()
os.Stderr = origStderr
stderrOutput, _ := io.ReadAll(r)

// Should return "" (fallback to "latest" handled by determineVersion)
assert.Equal(t, "", v, "expected empty string fallback when no git tags exist")

// Should have printed a warning to stderr
assert.Contains(t, string(stderrOutput), "latest",
"expected a warning about 'latest' fallback on stderr")
}

🧹 Nitpick | 🔵 Trivial

Double w.Close() — second close in defer is redundant and returns an error.

w.Close() is called explicitly on line 386, then the deferred closure on line 380 closes it again. Similarly, os.Stderr is restored on line 387 and again in the defer on line 381. The double close won't crash but is a code smell. Restructure so the defer only acts as a safety net.

♻️ Suggested cleanup
-	origStderr := os.Stderr
 	r, w, err := os.Pipe()
 	if err != nil {
 		t.Fatal(err)
 	}
+	origStderr := os.Stderr
 	os.Stderr = w
-	defer func() {
-		w.Close()
-		os.Stderr = origStderr
-	}()
 
 	v := getGitVersion(ctx)
 
 	w.Close()
 	os.Stderr = origStderr
 	stderrOutput, _ := io.ReadAll(r)
+	r.Close()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-	origStderr := os.Stderr
 	r, w, err := os.Pipe()
 	if err != nil {
 		t.Fatal(err)
 	}
+	origStderr := os.Stderr
 	os.Stderr = w
-	defer func() {
-		w.Close()
-		os.Stderr = origStderr
-	}()

 	v := getGitVersion(ctx)

 	w.Close()
 	os.Stderr = origStderr
 	stderrOutput, _ := io.ReadAll(r)
+	r.Close()

 	// Should return "" (fallback to "latest" handled by determineVersion)
 	assert.Equal(t, "", v, "expected empty string fallback when no git tags exist")

 	// Should have printed a warning to stderr
 	assert.Contains(t, string(stderrOutput), "latest",
 		"expected a warning about 'latest' fallback on stderr")
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/in/cli/push_test.go` around lines 373 - 396, The test
currently closes w and restores os.Stderr twice; change the defer to only
restore os.Stderr (remove w.Close() from the deferred func) and remove the later
explicit os.Stderr = origStderr so there’s a single restore, but keep the
explicit w.Close() before reading from r (so w is closed prior to io.ReadAll);
update the defer and the post-getGitVersion cleanup accordingly around the
variables w, r, origStderr and the call to getGitVersion.

Comment on lines +88 to +117
// WithTokenRefreshCallback sets a callback that is invoked when the server
// returns a refreshed token in the X-Gordon-Token response header.
// The CLI uses this to persist the new token to the remotes config atomically.
func WithTokenRefreshCallback(fn func(newToken string)) ClientOption {
return func(c *Client) {
c.onTokenRefreshed = fn
}
}

// observeTokenRotation checks the X-Gordon-Token header on a response and, if
// the server issued a refreshed token, updates the client's in-memory token and
// invokes the optional persistence callback. Panics in the callback are caught
// so they never propagate to the caller.
func (c *Client) observeTokenRotation(resp *http.Response) {
newToken := resp.Header.Get("X-Gordon-Token")
if newToken == "" || newToken == c.token {
return
}
c.token = newToken
if c.onTokenRefreshed != nil {
func() {
defer func() {
if r := recover(); r != nil {
fmt.Fprintf(os.Stderr, "warning: token refresh callback panicked: %v\n", r)
}
}()
c.onTokenRefreshed(newToken)
}()
}
}

🧹 Nitpick | 🔵 Trivial

c.token is accessed without synchronization.

observeTokenRotation reads c.token (line 103) and writes c.token (line 106) without a mutex, while request() also reads c.token at line 172. This is safe for the current single-goroutine CLI usage, but if Client is ever used with concurrent goroutines (e.g., from tests or future parallel subcommands), this becomes a data race.

♻️ Suggested fix — add a mutex to guard `token`
 type Client struct {
 	baseURL          string
-	token            string
+	mu               sync.RWMutex
+	token            string
 	httpClient       *http.Client
 	insecureTLS      bool
 	onTokenRefreshed func(newToken string)
 }

Then in observeTokenRotation:

-	newToken := resp.Header.Get("X-Gordon-Token")
-	if newToken == "" || newToken == c.token {
-		return
-	}
-	c.token = newToken
+	newToken := resp.Header.Get("X-Gordon-Token")
+	c.mu.Lock()
+	current := c.token
+	if newToken == "" || newToken == current {
+		c.mu.Unlock()
+		return
+	}
+	c.token = newToken
+	c.mu.Unlock()

And in request() at line 172:

+	c.mu.RLock()
+	tok := c.token
+	c.mu.RUnlock()
-	if c.token != "" {
-		req.Header.Set("Authorization", "Bearer "+c.token)
+	if tok != "" {
+		req.Header.Set("Authorization", "Bearer "+tok)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/in/cli/remote/client.go` around lines 88 - 117, Add
synchronization around the Client.token field by adding a mutex (e.g., tokenMu
sync.RWMutex) to the Client struct and use it to protect reads and writes: in
observeTokenRotation, acquire the write lock before comparing/updating c.token
and release it after the update (but invoke c.onTokenRefreshed without holding
the lock); in request() acquire the read lock when reading c.token and release
it immediately after copying the value to use for the request. Ensure the token
callback invocation still runs protected from panics as currently implemented.
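A runnable sketch of the suggested locking discipline, with a hypothetical `tokenHolder` standing in for `Client` (names are illustrative, not the client's actual fields):

```go
package main

import (
	"fmt"
	"sync"
)

// tokenHolder guards a rotating token with an RWMutex. The refresh callback
// runs outside the lock so a slow or re-entrant callback cannot block readers.
type tokenHolder struct {
	mu        sync.RWMutex
	token     string
	onRefresh func(string)
}

func (h *tokenHolder) observe(newToken string) {
	h.mu.Lock()
	if newToken == "" || newToken == h.token {
		h.mu.Unlock()
		return
	}
	h.token = newToken
	cb := h.onRefresh
	h.mu.Unlock()
	if cb != nil {
		cb(newToken) // invoked lock-free, as the review advises
	}
}

func (h *tokenHolder) current() string {
	h.mu.RLock()
	defer h.mu.RUnlock()
	return h.token
}

func main() {
	h := &tokenHolder{token: "old", onRefresh: func(t string) { fmt.Println("persisted:", t) }}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = h.current() // concurrent reads stay clean under -race
		}()
	}
	h.observe("new")
	wg.Wait()
	fmt.Println("current:", h.current())
}
```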


// 2. /run/user/<uid>/gordon/ (systemd default, may not be in CLI's env)
uid := os.Getuid()
sysRuntime := filepath.Join("/run/user", fmt.Sprintf("%d", uid), "gordon", "internal-creds.json")

🧹 Nitpick | 🔵 Trivial

Prefer strconv.Itoa over fmt.Sprintf("%d", uid) for UID formatting.

pidFileLocations() at line 2213 uses strconv.Itoa(os.Getuid()) for the same construction; using fmt.Sprintf here is an inconsistency.

♻️ Proposed fix
-	sysRuntime := filepath.Join("/run/user", fmt.Sprintf("%d", uid), "gordon", "internal-creds.json")
+	sysRuntime := filepath.Join("/run/user", strconv.Itoa(uid), "gordon", "internal-creds.json")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-	sysRuntime := filepath.Join("/run/user", fmt.Sprintf("%d", uid), "gordon", "internal-creds.json")
+	sysRuntime := filepath.Join("/run/user", strconv.Itoa(uid), "gordon", "internal-creds.json")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/app/run.go` at line 837, The construction of sysRuntime uses
fmt.Sprintf("%d", uid) which is inconsistent with pidFileLocations() and less
efficient; replace the fmt.Sprintf call with strconv.Itoa(uid) when building
sysRuntime (the variable assigned via filepath.Join in run.go) and add the
necessary strconv import if missing so the UID string is produced using
strconv.Itoa to match the existing pattern.
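A quick equivalence check of the two formatting approaches, with a hypothetical `credsPath` helper:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strconv"
)

// credsPath builds the runtime credentials path for a UID using strconv.Itoa,
// matching the pidFileLocations() pattern the review cites.
func credsPath(uid int) string {
	return filepath.Join("/run/user", strconv.Itoa(uid), "gordon", "internal-creds.json")
}

func main() {
	uid := 1000
	// Both produce identical output; Itoa just skips the format-string machinery.
	a := credsPath(uid)
	b := filepath.Join("/run/user", fmt.Sprintf("%d", uid), "gordon", "internal-creds.json")
	fmt.Println(a == b) // prints true
}
```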

@bnema bnema merged commit c8ea1b0 into main Feb 25, 2026
3 of 4 checks passed
@bnema bnema deleted the fix/gh-issues-83-84-85-86-87-88 branch February 25, 2026 02:29
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (3)
internal/adapters/out/domainsecrets/pass_store.go (2)

39-39: 🧹 Nitpick | 🔵 Trivial

//nolint:gosec on probe line still suppresses all gosec rules rather than only G204.

The new probe call carries the same over-broad suppression directive that the previous review flagged on passInsert/passRemove/passShow/listTopLevelEntries.

🔧 Proposed narrowing
-	if err := exec.CommandContext(probeCtx, "pass", "version").Run(); err != nil { //nolint:gosec // binary is a constant ("pass"), no user input
+	if err := exec.CommandContext(probeCtx, "pass", "version").Run(); err != nil { //nolint:gosec // G204: binary is a constant ("pass"), no user input
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/domainsecrets/pass_store.go` at line 39, The probe
invocation exec.CommandContext(probeCtx, "pass", "version").Run() currently uses
an overly broad //nolint:gosec which suppresses all gosec checks; change that to
narrowly suppress only the command injection rule by replacing the directive
with //nolint:gosec // G204: <reason> (or remove the nolint and document a brief
justification) and apply the same targeted change for the other pass helper
functions (passInsert, passRemove, passShow, listTopLevelEntries) so only G204
is suppressed instead of all gosec rules.

652-652: 🧹 Nitpick | 🔵 Trivial

//nolint:gosec directives remain over-broad — scope to G204.

All four directives suppress every gosec rule. Only G204 (subprocess with variable args) is triggered. The previous review requested this narrowing and it remains unaddressed.

🔧 Proposed narrowing (all four lines)
-	cmd := exec.CommandContext(ctx, "pass", "insert", "-m", "-f", path) //nolint:gosec // binary is constant ("pass"); path arguments validated by secrets path validator
+	cmd := exec.CommandContext(ctx, "pass", "insert", "-m", "-f", path) //nolint:gosec // G204: binary is constant ("pass"); path arguments validated by secrets path validator

-	cmd := exec.CommandContext(ctx, "pass", "rm", "-f", path) //nolint:gosec // binary is constant ("pass"); path arguments validated by secrets path validator
+	cmd := exec.CommandContext(ctx, "pass", "rm", "-f", path) //nolint:gosec // G204: binary is constant ("pass"); path arguments validated by secrets path validator

-	cmd := exec.CommandContext(ctx, "pass", "show", path) //nolint:gosec // binary is constant ("pass"); path arguments validated by secrets path validator
+	cmd := exec.CommandContext(ctx, "pass", "show", path) //nolint:gosec // G204: binary is constant ("pass"); path arguments validated by secrets path validator

-	cmd := exec.CommandContext(ctx, "pass", "ls", basePath) //nolint:gosec // binary is constant ("pass"); path arguments validated by secrets path validator
+	cmd := exec.CommandContext(ctx, "pass", "ls", basePath) //nolint:gosec // G204: binary is constant ("pass"); path arguments validated by secrets path validator

Also applies to: 662-662, 674-674, 693-693

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/domainsecrets/pass_store.go` at line 652, The current
//nolint:gosec comments around the exec.CommandContext invocations are too
broad; restrict them to only suppress gosec G204. Replace each existing
//nolint:gosec (or bare //nolint) on the exec.CommandContext(...) lines (the
calls building "pass insert -m -f path" and similar) with a targeted directive
like //nolint:gosec:G204 so only the subprocess-with-variable-args rule is
silenced; update all occurrences (the shown call and the other
exec.CommandContext calls referenced in the comment).
internal/adapters/out/tokenstore/pass.go (1)

462-470: ⚠️ Potential issue | 🟠 Major

UpdateTokenExpiry still persists full caller input, not just expiry-related fields.

You load existing but then save token directly. That can overwrite stable metadata (ID, IssuedAt, Scopes, Revoked) with stale/partial caller state, which conflicts with the method contract.

🔧 Proposed fix
 func (s *PassStore) UpdateTokenExpiry(ctx context.Context, token *domain.Token, newJWT string) error {
@@
 	if existing == nil {
 		return fmt.Errorf("UpdateTokenExpiry: no existing token for subject %q", token.Subject)
 	}
-	return s.SaveToken(ctx, token, newJWT)
+	updated := *existing
+	updated.ExpiresAt = token.ExpiresAt
+	updated.LastExtendedAt = token.LastExtendedAt
+	return s.SaveToken(ctx, &updated, newJWT)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/tokenstore/pass.go` around lines 462 - 470, In
UpdateTokenExpiry, you're loading existing via s.GetToken but then saving the
caller-supplied token, which can overwrite stable fields; instead merge only
expiry-related fields from the caller into the loaded existing token (e.g.,
ExpiresAt, NotBefore and the token string/newJWT if intended) while preserving
ID, IssuedAt, Scopes, Revoked and any other stable metadata, then call
s.SaveToken(ctx, existing, newJWT) (or pass the merged token object) so
SaveToken persists only the updated expiry info; reference UpdateTokenExpiry,
existing, token, newJWT, GetToken and SaveToken when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/adapters/out/secrets/pass.go`:
- Around line 108-113: IsAvailable uses the provider's general timeout
(p.timeout) which is too long for availability probes; add and use a shorter
dedicated probe timeout (e.g., a new PassProvider field like probeTimeout or a
package-level defaultProbeTimeout constant) in the IsAvailable implementation so
exec.CommandContext uses that shorter context instead of p.timeout; update
PassProvider initialization to set probeTimeout (or rely on the constant) and
ensure context.WithTimeout in IsAvailable references the new probe timeout.

In `@internal/adapters/out/tokenstore/pass.go`:
- Around line 359-361: The per-token lookup currently creates a detached context
with context.Background() which breaks caller cancellation; change the per-call
context creation in ListTokens to derive from the parent ctx (use
context.WithTimeout(ctx, s.timeout) to create callCtx and callCancel) so
s.GetToken(callCtx, subject) inherits upstream cancellation/deadlines; ensure
you still cancel callCancel() after the call and keep using s.timeout for the
per-call timeout.

---

Duplicate comments:
In `@internal/adapters/out/domainsecrets/pass_store.go`:
- Line 39: The probe invocation exec.CommandContext(probeCtx, "pass",
"version").Run() currently uses an overly broad //nolint:gosec which suppresses
all gosec checks; change that to narrowly suppress only the command injection
rule by replacing the directive with //nolint:gosec // G204: <reason> (or remove the nolint
and document a brief justification) and apply the same targeted change for the
other pass helper functions (passInsert, passRemove, passShow,
listTopLevelEntries) so only G204 is suppressed instead of all gosec rules.
- Line 652: The current //nolint:gosec comments around the exec.CommandContext
invocations are too broad; restrict them to only suppress gosec G204. Replace
each existing //nolint:gosec (or bare //nolint) on the exec.CommandContext(...)
lines (the calls building "pass insert -m -f path" and similar) with a targeted
directive like //nolint:gosec // G204: <reason> so only the subprocess-with-variable-args
rule is silenced; update all occurrences (the shown call and the other
exec.CommandContext calls referenced in the comment).

In `@internal/adapters/out/tokenstore/pass.go`:
- Around line 462-470: In UpdateTokenExpiry, you're loading existing via
s.GetToken but then saving the caller-supplied token, which can overwrite stable
fields; instead merge only expiry-related fields from the caller into the loaded
existing token (e.g., ExpiresAt, NotBefore and the token string/newJWT if
intended) while preserving ID, IssuedAt, Scopes, Revoked and any other stable
metadata, then call s.SaveToken(ctx, existing, newJWT) (or pass the merged token
object) so SaveToken persists only the updated expiry info; reference
UpdateTokenExpiry, existing, token, newJWT, GetToken and SaveToken when making
the change.

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2473f94 and 070a2bd.

📒 Files selected for processing (4)
  • internal/adapters/in/cli/ui/components/selector.go
  • internal/adapters/out/domainsecrets/pass_store.go
  • internal/adapters/out/secrets/pass.go
  • internal/adapters/out/tokenstore/pass.go

Comment on lines +108 to +113
+// Uses a short timeout so a stalled GPG agent does not hang the caller.
 func (p *PassProvider) IsAvailable() bool {
-	cmd := exec.Command("pass", "version")
-	err := cmd.Run()
-	return err == nil
+	ctx, cancel := context.WithTimeout(context.Background(), p.timeout)
+	defer cancel()
+	cmd := exec.CommandContext(ctx, "pass", "version") //nolint:gosec // binary is a constant ("pass"), no user input
+	return cmd.Run() == nil

🧹 Nitpick | 🔵 Trivial

Use a dedicated shorter timeout for availability probes.

IsAvailable currently inherits the 10s provider timeout, which is relatively long for a probe path. A smaller dedicated timeout keeps health/startup checks snappier.

♻️ Proposed refactor
 type PassProvider struct {
-	timeout time.Duration
-	log     zerowrap.Logger
+	timeout             time.Duration
+	availabilityTimeout time.Duration
+	log                 zerowrap.Logger
 }

 func NewPassProvider(log zerowrap.Logger) *PassProvider {
 	return &PassProvider{
-		timeout: 10 * time.Second,
-		log:     log,
+		timeout:             10 * time.Second,
+		availabilityTimeout: 2 * time.Second,
+		log:                 log,
 	}
 }
@@
 func (p *PassProvider) IsAvailable() bool {
-	ctx, cancel := context.WithTimeout(context.Background(), p.timeout)
+	ctx, cancel := context.WithTimeout(context.Background(), p.availabilityTimeout)
 	defer cancel()
 	cmd := exec.CommandContext(ctx, "pass", "version") //nolint:gosec // binary is a constant ("pass"), no user input
 	return cmd.Run() == nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/secrets/pass.go` around lines 108 - 113: IsAvailable
uses the provider's general timeout (p.timeout), which is too long for an
availability probe. Add a shorter dedicated probe timeout (e.g., a new
PassProvider field like probeTimeout or a package-level defaultProbeTimeout
constant) and have context.WithTimeout in IsAvailable use it, so
exec.CommandContext runs under the shorter context instead of p.timeout; update
PassProvider initialization to set probeTimeout (or rely on the constant).

Comment on lines +359 to +361
callCtx, callCancel := context.WithTimeout(context.Background(), s.timeout)
_, token, err := s.GetToken(callCtx, subject)
callCancel()

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, let's examine the exact lines and surrounding context
echo "=== Code at lines 359-361 ==="
sed -n '355,365p' internal/adapters/out/tokenstore/pass.go

echo -e "\n=== Broader context (ListTokens function) ==="
# Find the ListTokens function and show it
rg -n "func.*ListTokens" internal/adapters/out/tokenstore/pass.go -A 30

Repository: bnema/gordon

Length of output: 1807


Per-token lookups should not detach from caller cancellation.

Using context.Background() here ignores upstream cancellation and deadlines, so ListTokens can keep running long after the caller has given up. The code comment explains the intent (give each GetToken call a fresh timeout window), but detaching breaks the cancellation contract: callers expect operations to respect their deadline contexts. Each per-token call should instead derive from the parent context so cancellation signals propagate.

🔧 Proposed fix
-		callCtx, callCancel := context.WithTimeout(context.Background(), s.timeout)
+		callCtx, callCancel := context.WithTimeout(ctx, s.timeout)
 		_, token, err := s.GetToken(callCtx, subject)
 		callCancel()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-		callCtx, callCancel := context.WithTimeout(context.Background(), s.timeout)
-		_, token, err := s.GetToken(callCtx, subject)
-		callCancel()
+		callCtx, callCancel := context.WithTimeout(ctx, s.timeout)
+		_, token, err := s.GetToken(callCtx, subject)
+		callCancel()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/adapters/out/tokenstore/pass.go` around lines 359 - 361: the
per-token lookup creates a detached context with context.Background(), which
breaks caller cancellation. Change the per-call context creation in ListTokens
to derive from the parent ctx (context.WithTimeout(ctx, s.timeout)) so
s.GetToken(callCtx, subject) inherits upstream cancellation and deadlines;
still call callCancel() after each lookup and keep s.timeout as the per-call
timeout.



Development

Successfully merging this pull request may close these issues.

Config save drops auth token_secret (and other server fields), causing restart crash
