feat: GitLab webhook support + Helm chart for K8s deployment #1
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

Adds GitLab integration and Helm deployment: a new GitLab client, a webhook endpoint, settings and secrets, DB schema and models extended for platform/project_id, a Helm chart and templates, a CI workflow, kubefiles and install script changes, and tests for GitLab flows.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant GitLab as GitLab
    participant Webhook as Webhook Handler
    participant Settings as Settings Service
    participant Controller as Build Controller
    participant Kubernetes as Kubernetes
    GitLab->>Webhook: POST /webhooks/gitlab (Push or MR)
    Webhook->>Webhook: verify x-gitlab-token
    Webhook->>Settings: get_repo_settings(repo, branch)
    Settings-->>Webhook: RepoSettings (platform, project_id)
    Webhook->>Controller: trigger_build(CommitInfo(platform=gitlab, project_id))
    Controller->>Kubernetes: create Deployment / start build (env includes PLATFORM, PROJECT_ID, GITLAB_URL)
    Kubernetes-->>Controller: pod running / build result
    Controller->>GitLab: notify_status(sha, state, project_id)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (3 passed)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/runboat/models.py (1)
311-316: ⚠️ Potential issue | 🔴 Critical

`redeploy()` bypasses `_notify_status`, so GitLab repos won't receive status notifications on redeploy. Every other status notification call site (`deploy`, `on_initialize_started`, `on_initialize_succeeded`, `on_initialize_failed`) was updated to use the platform-aware `_notify_status`, but `redeploy()` still calls `github.notify_status` directly. For GitLab-hosted repos, this will either silently fail or post to the wrong platform.

🐛 Proposed fix
```diff
 async def redeploy(self) -> None:
     """Redeploy a build, to reinitialize it."""
     _logger.info(f"Redeploying {self}.")
     await k8s.kill_job(self.name, job_kind=k8s.DeploymentMode.cleanup)
     await k8s.kill_job(self.name, job_kind=k8s.DeploymentMode.initialize)
     await self._deploy(
         self.commit_info,
         self.name,
         self.slug,
         job_kind=k8s.DeploymentMode.deployment,
     )
-    await github.notify_status(
-        self.commit_info.repo,
-        self.commit_info.git_commit,
-        GitHubStatusState.pending,
-        target_url=None,
-    )
+    await _notify_status(
+        self.commit_info, GitHubStatusState.pending, target_url=None
+    )
```
🤖 Fix all issues with AI agents
In `@chart/templates/ingress.yaml`:
- Around line 17-22: Guard against a nil .Values.ingress.tls by changing the
conditional that checks {{ .Values.ingress.tls.enabled }} so it first ensures
the tls block exists (for example using the Helm functions: 'and' to check
.Values.ingress.tls and its .enabled, or using 'default false' / 'hasKey' to
treat a missing tls as disabled) and only render the tls block when present and
enabled; update the conditional in the ingress.yaml template surrounding tls so
it references those safer checks instead of directly accessing
.Values.ingress.tls.enabled.
In `@src/runboat/api.py`:
- Around line 112-121: The bare except around settings.get_repo_settings should
be narrowed to the specific expected error(s) so real bugs aren't swallowed;
update the try/except that calls settings.get_repo_settings(repo, branch) to
catch only the specific exception(s) that indicate missing or invalid repo
settings (e.g., KeyError, ValueError, or the project-specific
RepoSettingsNotFound if available) and let other exceptions propagate, then keep
the existing branch: if repo_settings and repo_settings.platform == "gitlab":
call gitlab.get_branch_info(repo, branch, project_id=repo_settings.project_id)
else call github.get_branch_info(repo, branch).
In `@src/runboat/gitlab.py`:
- Around line 78-105: notify_status currently only catches httpx.HTTPStatusError
from _gitlab_request, but _gitlab_request can raise NotFoundOnGitLab for 404s
which will propagate; update notify_status to also catch NotFoundOnGitLab (in
addition to or separate from the existing httpx.HTTPStatusError handler), log a
clear message including the repo/sha and error details, and return/exit quietly
so the background task doesn't crash; reference functions/objects:
notify_status, _gitlab_request, NotFoundOnGitLab, and the existing
httpx.HTTPStatusError handler.
- Around line 88-94: The payload for the GitLab commit status currently includes
both "context" and "name" which are mutually exclusive; update the json_payload
construction in the function that builds the commit status (the json_payload
dict in src/runboat/gitlab.py) to remove the "context" key and only include
"name": "runboat/build", keeping the conditional addition of "target_url"
unchanged so the POST /projects/:id/statuses/:sha request sends only "name" (and
"target_url" when present).
In `@src/runboat/webhooks.py`:
- Around line 108-112: The mypy error is caused by using the bare generic dict
in the signature of _gitlab_repo_info; update the type to include parameters
(e.g., change the parameter type to dict[str, Any]) and add Any to the typing
imports (alongside Annotated if already present) so the function signature
becomes _gitlab_repo_info(payload: dict[str, Any]) -> tuple[str, str | None].
🧹 Nitpick comments (21)
Dockerfile (1)
23-27: Secrets baked into image layers via `ENV`.

Trivy flags `RUNBOAT_GITLAB_TOKEN` and `RUNBOAT_GITLAB_WEBHOOK_TOKEN` (DS-0031), but the same pattern already exists for the GitHub counterparts on lines 23-24. While the defaults are empty strings and actual values are injected at runtime, the `ENV` directive still persists in the image layer metadata. Consider switching all secret-bearing variables to be supplied exclusively at runtime (e.g., via `--env`/`--env-file`, Kubernetes Secrets, or Docker secrets) and removing the `ENV` lines for tokens entirely.

.env.test (1)
1-1: Consider readability of the long JSON value.

The `RUNBOAT_REPOS` value is now a very long single-line JSON string with three repo configs, making it hard to read and maintain in this test env file. This is a minor nit for a test fixture, but splitting it into a separate JSON file or using a multi-line approach could help future maintainability.

chart/templates/namespace.yaml (1)
1-13: Helm-managed namespaces: be cautious with `helm uninstall`.

Creating namespaces as part of the Helm release means `helm uninstall` will delete these namespaces and all resources within them (including build pods/PVCs that may live in `buildNamespace`). Consider whether these namespaces should be managed outside the chart, or add the `helm.sh/resource-policy: keep` annotation to prevent accidental deletion on uninstall.

Also, if `.Values.namespace` and `.Values.buildNamespace` are ever set to the same value, this template will emit a duplicate Namespace resource. A conditional guard could prevent that.

src/runboat/github.py (1)
34-35: Consider using `Literal["github", "gitlab"]` for `platform`.

A free-form `str` allows any value, making typos silently pass validation. A `Literal` type (or `Enum`) constrains it to known platforms and gives you Pydantic validation for free.

♻️ Suggested change
```diff
+from typing import Any, Literal
+
 class CommitInfo(BaseModel):
     repo: str
     target_branch: str
     pr: int | None
     git_commit: str
-    platform: str = "github"  # "github" or "gitlab"
+    platform: Literal["github", "gitlab"] = "github"
     project_id: str | None = None  # GitLab numeric project ID
```

chart/templates/rbac.yaml (1)
1-39: Scope RBAC to the build namespace instead of using ClusterRole.

This `ClusterRole` grants create/update/delete on secrets, services, ingresses, and other resources across all namespaces. Since builds are deployed into a specific namespace (`RUNBOAT_BUILD_NAMESPACE`), a namespace-scoped `Role` + `RoleBinding` in that namespace would satisfy the principle of least privilege and address the Trivy findings (KSV-0041, KSV-0056).

If the controller truly needs cross-namespace access (e.g., it manages its own namespace plus a separate build namespace), consider two scoped `Role`/`RoleBinding` pairs rather than a single cluster-wide grant.

♻️ Suggested approach: use Role + RoleBinding scoped to the build namespace
```diff
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: {{ include "runboat.fullname" . }}
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: {{ include "runboat.fullname" . }}
+  namespace: {{ .Values.buildNamespace }}
   labels: {{- include "runboat.labels" . | nindent 4 }}
 rules:
 ...
 ---
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: {{ include "runboat.fullname" . }}
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: {{ include "runboat.fullname" . }}
+  namespace: {{ .Values.buildNamespace }}
   labels: {{- include "runboat.labels" . | nindent 4 }}
 subjects:
   - kind: ServiceAccount
     name: {{ .Values.serviceAccount.name }}
     namespace: {{ .Values.namespace }}
 roleRef:
-  kind: ClusterRole
+  kind: Role
   name: {{ include "runboat.fullname" . }}
   apiGroup: rbac.authorization.k8s.io
```

chart/templates/deployment.yaml (1)
26-52: Consider adding a `securityContext` to the container spec.

The container lacks a `securityContext`. Setting `runAsNonRoot: true`, `readOnlyRootFilesystem: true`, and `allowPrivilegeEscalation: false` is a good security baseline, especially since this pod has RBAC permissions to manage cluster resources.

♻️ Suggested addition (after `imagePullPolicy`)
```diff
         imagePullPolicy: {{ .Values.image.pullPolicy }}
+        securityContext:
+          runAsNonRoot: true
+          allowPrivilegeEscalation: false
+          readOnlyRootFilesystem: true
         ports:
```

.github/workflows/build.yml (2)
3-7: Remove feature branch trigger before merging.

Line 5 includes `feat/gitlab-support` as a push trigger. This is useful during development but should be removed before or at merge time to avoid the workflow running on a stale/deleted branch reference.

Proposed fix
```diff
 on:
   push:
-    branches: [main, feat/gitlab-support]
+    branches: [main]
   pull_request:
     branches: [main]
```
47-52: Consider adding Docker layer caching to speed up builds.

Each build starts from scratch. Using `docker/build-push-action` with GitHub Actions cache or `--cache-from` can significantly reduce build times.

tests/test_webhook.py (1)
305-308: Incomplete test coverage for `_verify_gitlab_token`.

This test only covers the "no token configured" path. Compare with `test_verify_github_signature` (lines 143-155), which tests four scenarios: no secret, missing header, invalid signature, and valid signature. Consider adding cases where `settings.gitlab_webhook_token` is set (via monkeypatch or mock) to verify: (1) missing token → `False`, (2) mismatched token → `False`, (3) matching token → `True`.

chart/templates/cnpg-cluster.yaml (2)
4-6: Hardcoded resource names may conflict with multiple releases.

The Secret name `runboat-db-superuser` (line 5) and Cluster name `runboat-db` (line 17) are hardcoded, while `chart/templates/secret.yaml` uses `{{ include "runboat.fullname" . }}-env`. If two Helm releases coexist in the same namespace, these names will collide. Consider using the `runboat.fullname` helper for consistency (e.g., `{{ include "runboat.fullname" . }}-db` and `{{ include "runboat.fullname" . }}-db-superuser`), and update the `superuserSecret.name` reference at line 38 accordingly.

Also applies to: 16-18
33-35: `parameters` will fail if `.Values.postgresql.cnpg.parameters` is not a mapping.

`toYaml` on a non-mapping (e.g., `null`) would render an empty or invalid block under `parameters:`. Consider guarding with `{{- with .Values.postgresql.cnpg.parameters }}`, similar to how `imagePullSecrets` is handled on line 23.

Proposed fix

```diff
 postgresql:
-  parameters:
-    {{- toYaml .Values.postgresql.cnpg.parameters | nindent 6 }}
+  {{- with .Values.postgresql.cnpg.parameters }}
+  parameters:
+    {{- toYaml . | nindent 6 }}
+  {{- end }}
```

chart/values.yaml (1)
73-73: Ensure placeholder passwords are never deployed as-is.

Lines 73, 80, and 97 contain `CHANGE_ME` placeholder values for `superuserPassword`, `password`, and `apiAdminPassword`. Consider adding a `required` check in the templates (e.g., `{{ required "postgresql.cnpg.superuserPassword is required" .Values.postgresql.cnpg.superuserPassword }}`) to fail at `helm install` time if these remain unchanged, or add a `values.schema.json` with `minLength` constraints.

Also applies to: 80-80, 97-97
src/runboat/settings.py (2)
33-34: Consider constraining `platform` to a `Literal` or enum.

`platform: str = "github"` accepts any string. A typo like `"gitlabb"` would silently propagate. Using `Literal["github", "gitlab"]` (or an enum) provides validation at settings-load time via Pydantic.

Proposed fix

```diff
+from typing import Annotated, Literal
+
 class RepoSettings(BaseModel):
     repo: str  # regex
     branch: str  # regex
     builds: list[BuildSettings]
-    platform: str = "github"  # "github" or "gitlab"
+    platform: Literal["github", "gitlab"] = "github"
     project_id: str | None = None  # GitLab numeric project ID
```
104-132: `get_repo_settings` duplicates the iteration logic of `get_build_settings`.

Both methods iterate `self.repos` with the same regex matching. `get_build_settings` could delegate to `get_repo_settings` to eliminate the duplication.

Proposed refactor

```diff
 def get_build_settings(self, repo: str, target_branch: str) -> list[BuildSettings]:
-    for repo_settings in self.repos:
-        if not re.match(repo_settings.repo, repo, re.IGNORECASE):
-            continue
-        if not re.match(repo_settings.branch, target_branch):
-            continue
-        return repo_settings.builds
-    raise RepoOrBranchNotSupported(
-        f"Branch {target_branch} of {repo} not supported."
-    )
+    return self.get_repo_settings(repo, target_branch).builds
```

src/runboat/webhooks.py (2)
96-105: Token verification silently returns 200 on failure. Intentional?

When `_verify_gitlab_token` returns `False` (lines 122-123), the endpoint returns `None` → HTTP 200. This mirrors the existing GitHub webhook behavior (lines 44-45) so it's consistent, but returning a 401/403 would give GitLab better diagnostics when configuring webhooks. Flagging for awareness, not blocking.
108-112: Return type `str | None` for `project_id` is misleading; it's always `str`.

`_gitlab_repo_info` always returns `str(payload["project"]["id"])`, which is never `None`. The return type annotation `tuple[str, str | None]` suggests it could be `None`. Consider `tuple[str, str]` to match the actual behavior.

Proposed fix

```diff
-def _gitlab_repo_info(payload: dict[str, Any]) -> tuple[str, str | None]:
+def _gitlab_repo_info(payload: dict[str, Any]) -> tuple[str, str]:
```

src/runboat/gitlab.py (2)
15-25: Add a timeout to the HTTP client to prevent indefinite hangs.

`httpx.AsyncClient()` has no explicit timeout. If GitLab is unresponsive, these calls will block the background task indefinitely. httpx's default timeout is 5 seconds per phase, but it's best to be explicit, especially for a service that posts statuses on every webhook event.

Proposed fix

```diff
 async def _gitlab_request(method: str, url: str, json: Any = None) -> Any:
-    async with httpx.AsyncClient() as client:
+    async with httpx.AsyncClient(timeout=httpx.Timeout(30.0)) as client:
         full_url = f"{settings.gitlab_url}/api/v4{url}"
```
28-49: Inconsistent `project_id` handling between `get_branch_info` and `get_merge_request_info`.

`get_branch_info` uses an `if`/`else` branch (lines 31-41), while `get_merge_request_info` uses the concise `pid = project_id or quote(repo, safe="")` pattern (line 55). Align both for consistency.

Proposed refactor

```diff
 async def get_branch_info(
     repo: str, branch: str, project_id: str | None = None
 ) -> CommitInfo:
-    if project_id:
-        branch_data = await _gitlab_request(
-            "GET",
-            f"/projects/{project_id}/repository/branches/{quote(branch, safe='')}",
-        )
-    else:
-        encoded_repo = quote(repo, safe="")
-        branch_data = await _gitlab_request(
-            "GET",
-            f"/projects/{encoded_repo}/repository/branches/{quote(branch, safe='')}",
-        )
+    pid = project_id or quote(repo, safe="")
+    branch_data = await _gitlab_request(
+        "GET",
+        f"/projects/{pid}/repository/branches/{quote(branch, safe='')}",
+    )
     return CommitInfo(
```

chart/templates/_helpers.tpl (2)
11-13: `fullname` ignores `Chart.Name`, which could cause collisions.

Most Helm charts derive `fullname` from both `.Release.Name` and `.Chart.Name` (or offer a `.Values.fullnameOverride`). Using only `.Release.Name` means two different charts installed under the same release name will produce identical resource names. If this chart is always the sole occupant of a release, this is fine, but it deviates from the common Helm scaffold pattern.

♻️ Suggested conventional fullname pattern

```diff
 {{- define "runboat.fullname" -}}
-{{- printf "%s" .Release.Name | trunc 63 | trimSuffix "-" }}
+{{- $name := default .Chart.Name .Values.nameOverride }}
+{{- if contains $name .Release.Name }}
+{{- .Release.Name | trunc 63 | trimSuffix "-" }}
+{{- else }}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
+{{- end }}
 {{- end }}
```
47-53: User-provided values silently override PG/storage keys.

In both `buildEnv` and `buildTemplateVars`, the `range … set` loop runs after seeding the dict with defaults (`PGHOST`, `PGPORT`, `PGUSER`, `storageClassName`). If a user puts a key like `PGHOST` in `.Values.config.buildEnv`, it will silently overwrite the auto-derived value. This may be intentional for flexibility, but if not, it could lead to hard-to-debug misconfigurations. Consider documenting this behavior in `values.yaml` or guarding against it.

Also applies to: 69-75
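The override semantics described above can be shown in plain Python rather than Helm template syntax. The key names mirror the chart's defaults, but the concrete values here are made up for illustration:

```python
# Defaults are seeded first, then user values are merged in afterwards,
# so any colliding key from the user's values silently wins.
defaults = {"PGHOST": "runboat-db-rw", "PGPORT": "5432", "PGUSER": "runboat"}
user_build_env = {"PGHOST": "my-other-db", "EXTRA": "1"}  # hypothetical override

merged = {**defaults, **user_build_env}  # later mapping overwrites earlier keys
```

This is exactly the behavior the `range … set` loop produces in the template: nothing fails, the auto-derived `PGHOST` is just gone.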
src/runboat/models.py (1)
33-33: Consider replacing `"github"`/`"gitlab"` string literals with an enum or constants.

The platform identity is checked via raw string comparisons in at least 8 places across this file. A typo (e.g., `"gitab"`) would silently fall through to the GitHub branch. An enum (or at minimum, module-level constants) would give you IDE autocompletion and catch typos at import/type-check time.

♻️ Sketch

```python
# e.g., in a shared constants module or in CommitInfo itself
from enum import Enum

class Platform(str, Enum):
    github = "github"
    gitlab = "gitlab"
```

Then use `Platform.gitlab` instead of `"gitlab"` everywhere, and type `CommitInfo.platform` as `Platform` with `default=Platform.github`.

Also applies to: 163-163, 182-182, 188-188, 202-202, 211-211, 459-459, 463-463
```yaml
{{- if .Values.ingress.tls.enabled }}
tls:
  - hosts:
      - {{ .Values.ingress.host }}
    secretName: {{ .Values.ingress.tls.secretName }}
{{- end }}
```
Guard against missing ingress.tls in values.
If a user sets ingress.enabled: true but omits the tls block entirely in their values override, {{ .Values.ingress.tls.enabled }} will fail at render time with a nil-pointer error. Consider using a safer accessor:
♻️ Suggested change
```diff
-{{- if .Values.ingress.tls.enabled }}
+{{- if and .Values.ingress.tls .Values.ingress.tls.enabled }}
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| {{- if .Values.ingress.tls.enabled }} | |
| tls: | |
| - hosts: | |
| - {{ .Values.ingress.host }} | |
| secretName: {{ .Values.ingress.tls.secretName }} | |
| {{- end }} | |
| {{- if and .Values.ingress.tls .Values.ingress.tls.enabled }} | |
| tls: | |
| - hosts: | |
| - {{ .Values.ingress.host }} | |
| secretName: {{ .Values.ingress.tls.secretName }} | |
| {{- end }} |
🤖 Prompt for AI Agents
In `@chart/templates/ingress.yaml` around lines 17 - 22, Guard against a nil
.Values.ingress.tls by changing the conditional that checks {{
.Values.ingress.tls.enabled }} so it first ensures the tls block exists (for
example using the Helm functions: 'and' to check .Values.ingress.tls and its
.enabled, or using 'default false' / 'hasKey' to treat a missing tls as
disabled) and only render the tls block when present and enabled; update the
conditional in the ingress.yaml template surrounding tls so it references those
safer checks instead of directly accessing .Values.ingress.tls.enabled.
```python
try:
    repo_settings = settings.get_repo_settings(repo, branch)
except Exception:
    repo_settings = None
if repo_settings and repo_settings.platform == "gitlab":
    commit_info = await gitlab.get_branch_info(
        repo, branch, project_id=repo_settings.project_id
    )
else:
    commit_info = await github.get_branch_info(repo, branch)
```
Catch a specific exception instead of bare Exception.
The blanket except Exception on line 114 will silently swallow unexpected errors (e.g., malformed config, attribute errors) and fall back to GitHub, masking real bugs. Catch only the expected failure (e.g., KeyError, ValueError, or a dedicated RepoSettingsNotFound exception).
♻️ Suggested change

```diff
 try:
     repo_settings = settings.get_repo_settings(repo, branch)
-except Exception:
+except (KeyError, ValueError):
     repo_settings = None
```

🧰 Tools
🪛 Ruff (0.15.0)
[warning] 114-114: Do not catch blind exception: Exception (BLE001)
🤖 Prompt for AI Agents
In `@src/runboat/api.py` around lines 112 - 121, The bare except around
settings.get_repo_settings should be narrowed to the specific expected error(s)
so real bugs aren't swallowed; update the try/except that calls
settings.get_repo_settings(repo, branch) to catch only the specific exception(s)
that indicate missing or invalid repo settings (e.g., KeyError, ValueError, or
the project-specific RepoSettingsNotFound if available) and let other exceptions
propagate, then keep the existing branch: if repo_settings and
repo_settings.platform == "gitlab": call gitlab.get_branch_info(repo, branch,
project_id=repo_settings.project_id) else call github.get_branch_info(repo,
branch).
```python
async def notify_status(
    repo: str,
    sha: str,
    state: GitLabStatusState,
    target_url: str | None,
    project_id: str | None = None,
) -> None:
    if settings.disable_commit_statuses:
        return
    pid = project_id or quote(repo, safe="")
    json_payload: dict[str, Any] = {
        "state": state.value,
        "context": "runboat/build",
        "name": "runboat/build",
    }
    if target_url:
        json_payload["target_url"] = target_url
    try:
        await _gitlab_request(
            "POST",
            f"/projects/{pid}/statuses/{sha}",
            json=json_payload,
        )
    except httpx.HTTPStatusError as e:
        _logger.error(
            f"Failed to post GitLab commit status (code {e.response.status_code}):\n"
            f"{e.response.text}"
        )
```
NotFoundOnGitLab from _gitlab_request is not caught in notify_status.
If GitLab returns 404 for the status endpoint (e.g., invalid SHA or deleted project), _gitlab_request raises NotFoundOnGitLab (line 23), which is not caught by the except httpx.HTTPStatusError on line 101. This will propagate as an unhandled exception in the background task.
Proposed fix
try:
await _gitlab_request(
"POST",
f"/projects/{pid}/statuses/{sha}",
json=json_payload,
)
- except httpx.HTTPStatusError as e:
+ except NotFoundOnGitLab:
+ _logger.warning(
+ "GitLab commit status endpoint returned 404 for project %s sha %s",
+ pid,
+ sha,
+ )
+ except httpx.HTTPStatusError as e:
_logger.error(
f"Failed to post GitLab commit status (code {e.response.status_code}):\n"
f"{e.response.text}"
)🧰 Tools
🪛 Ruff (0.15.0)
[warning] 102-105: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
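The control flow of the proposed fix can be sketched with stand-in exception classes. Note the assumption baked in here, matching the observation above: `NotFoundOnGitLab` is modeled as unrelated to `HTTPStatusError`, which is why the existing handler does not catch it:

```python
class HTTPStatusError(Exception):
    """Stand-in for httpx.HTTPStatusError."""

class NotFoundOnGitLab(Exception):
    """Stand-in for runboat's 404 marker exception."""

def post_status(status_code: int) -> str:
    """Simulate the request + handler structure of notify_status."""
    try:
        if status_code == 404:
            raise NotFoundOnGitLab("404")
        if status_code >= 400:
            raise HTTPStatusError(status_code)
        return "ok"
    except NotFoundOnGitLab:
        return "warned-404"    # logged as a warning; the task keeps running
    except HTTPStatusError:
        return "logged-error"  # logged as an error; the task keeps running
```

Without the dedicated `except NotFoundOnGitLab` clause, the 404 case would escape both handlers and crash the background task.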
🤖 Prompt for AI Agents
In `@src/runboat/gitlab.py` around lines 78 - 105, notify_status currently only
catches httpx.HTTPStatusError from _gitlab_request, but _gitlab_request can
raise NotFoundOnGitLab for 404s which will propagate; update notify_status to
also catch NotFoundOnGitLab (in addition to or separate from the existing
httpx.HTTPStatusError handler), log a clear message including the repo/sha and
error details, and return/exit quietly so the background task doesn't crash;
reference functions/objects: notify_status, _gitlab_request, NotFoundOnGitLab,
and the existing httpx.HTTPStatusError handler.
```python
json_payload: dict[str, Any] = {
    "state": state.value,
    "context": "runboat/build",
    "name": "runboat/build",
}
if target_url:
    json_payload["target_url"] = target_url
```
🧩 Analysis chain

🌐 Web query: GitLab API commit status endpoint POST /projects/:id/statuses/:sha parameters

💡 Result: for the GitLab REST API "set commit pipeline status" endpoint:

POST /projects/:id/statuses/:sha

Parameters (sent as query params or form data/body, depending on your client):

- `id` (required): project ID or URL-encoded path
- `sha` (required): commit SHA
- `state` (required): one of `pending`, `running`, `success`, `failed`, `canceled`, `skipped`
- `coverage` (optional, float): total code coverage
- `description` (optional, string): short description (≤ 255 chars)
- `name` or `context` (optional, string): status label; default: `default`
- `pipeline_id` (optional, integer): pipeline to attach the status to (useful if multiple pipelines exist for the same SHA)
- `ref` (optional, string): branch/tag name (≤ 255 chars)
- `target_url` (optional, string): URL to associate with the status (≤ 255 chars)

[1] (docs.gitlab.com)

🏁 Script executed: `cat -n src/runboat/gitlab.py | sed -n '85,100p'` (repository: kencove/runboat)
Remove redundant "context" field from payload.
The GitLab API commit status endpoint (POST /projects/:id/statuses/:sha) accepts name or context as mutually exclusive parameters—not both. The code sends both with identical values, which violates the API specification. Keep only "name" and remove "context":
Diff

```diff
 json_payload: dict[str, Any] = {
     "state": state.value,
-    "context": "runboat/build",
     "name": "runboat/build",
 }
```
📝 Committable suggestion

‼️ IMPORTANT: carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

```python
json_payload: dict[str, Any] = {
    "state": state.value,
    "name": "runboat/build",
}
if target_url:
    json_payload["target_url"] = target_url
```
🤖 Prompt for AI Agents
In `@src/runboat/gitlab.py` around lines 88 - 94, The payload for the GitLab
commit status currently includes both "context" and "name" which are mutually
exclusive; update the json_payload construction in the function that builds the
commit status (the json_payload dict in src/runboat/gitlab.py) to remove the
"context" key and only include "name": "runboat/build", keeping the conditional
addition of "target_url" unchanged so the POST /projects/:id/statuses/:sha
request sends only "name" (and "target_url" when present).
```python
def _gitlab_repo_info(payload: dict) -> tuple[str, str | None]:
    """Extract repo path and project_id from a GitLab webhook payload."""
    repo = payload["project"]["path_with_namespace"]
    project_id = str(payload["project"]["id"])
    return repo, project_id
```
Fix mypy error: add type parameters to dict.
The CI pipeline reports Missing type parameters for generic type "dict" [type-arg] on line 108. Add type parameters to satisfy mypy.
Proposed fix

```diff
-def _gitlab_repo_info(payload: dict) -> tuple[str, str | None]:
+def _gitlab_repo_info(payload: dict[str, Any]) -> tuple[str, str | None]:
```

Note: this requires adding `Any` to the imports:

```python
from typing import Annotated, Any
```

🧰 Tools
🪛 GitHub Actions: CI
[error] 108-108: mypy: Missing type parameters for generic type "dict" [type-arg]
🤖 Prompt for AI Agents
In `@src/runboat/webhooks.py` around lines 108 - 112, The mypy error is caused by
using the bare generic dict in the signature of _gitlab_repo_info; update the
type to include parameters (e.g., change the parameter type to dict[str, Any])
and add Any to the typing imports (alongside Annotated if already present) so
the function signature becomes _gitlab_repo_info(payload: dict[str, Any]) ->
tuple[str, str | None].
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In @.github/workflows/build.yml:
- Around line 27-29: The build-image job is running for pull_request events and
pushing images; restrict it to push events by adding an if guard to the
build-image job (the job named "build-image") matching the same condition used
by the push-chart job (e.g., github.event_name == 'push' or the exact if
expression used in push-chart) so the job only runs on push events and not on
pull_request events.
In `@chart/templates/cnpg-cluster.yaml`:
- Around line 24-29: The stringData block is inconsistently quoted: username,
password and port use | quote but dbname and host do not, risking YAML breaks if
runboat.dbHost returns special chars; update the
chart/templates/cnpg-cluster.yaml stringData entries so dbname and the result of
include "runboat.dbHost" are also passed through | quote (i.e., apply | quote to
dbname and to the include "runboat.dbHost" expression) to make quoting
consistent with .Values.postgresql.user/.password/.port.
- Around line 62-63: The template unconditionally renders "resources:" and then
the result of toYaml .Values.postgresql.cnpg.resources which becomes "null" if
that value is unset; update the cnpg-cluster template to guard this by checking
the presence of .Values.postgresql.cnpg.resources (or using a default empty map)
before emitting the resources block so you only call toYaml and nindent when the
value is non-nil; reference the existing usage of toYaml and nindent around
.Values.postgresql.cnpg.resources in the template and mirror the same
conditional pattern used for parameters.
- Around line 50-52: The template renders `.Values.postgresql.cnpg.parameters`
directly with `toYaml` which outputs `null` when the value is absent; wrap the
`{{- toYaml .Values.postgresql.cnpg.parameters | nindent 6 }}` call in a `{{-
with .Values.postgresql.cnpg.parameters }}` / `{{- end }}` guard so the
parameters block is only emitted when present, ensuring the CNPG Cluster spec
remains valid; update the `postgresql.parameters` section in the
`cnpg-cluster.yaml` template accordingly.
In `@src/runboat/gitlab.py`:
- Around line 29-50: The get_branch_info function duplicates logic for building
a project id string; refactor by defining pid = project_id or quote(repo,
safe="") (like get_merge_request_info) and then call _gitlab_request once with
f"/projects/{pid}/repository/branches/{quote(branch, safe='')}", keeping the
rest of the return (CommitInfo) unchanged; this removes the if/else branch and
centralizes the project identifier construction while still using quote,
project_id, repo, branch, _gitlab_request and CommitInfo.
In `@src/runboat/kubefiles/runboat-clone-and-install.sh`:
- Around line 17-35: The curl invocations in runboat-clone-and-install.sh
silently pipe HTTP error responses into tar (lines using curl ... | tar),
causing confusing "not in gzip format" errors; update each curl call that
streams into tar to include the --fail flag (e.g., the GitLab branch that uses
ARCHIVE_URL and the GitHub branch that constructs
https://github.com/${RUNBOAT_GIT_REPO}/tarball/${RUNBOAT_GIT_REF}) so curl exits
non‑zero on HTTP errors, and quote the constructed GitHub URL to prevent
word-splitting; ensure these changes target the curl lines that feed tar
(references: RUNBOAT_PLATFORM, ARCHIVE_URL, RUNBOAT_GITLAB_TOKEN,
RUNBOAT_GIT_REPO, RUNBOAT_GIT_REF).
- Around line 27-28: The script enables shell tracing which will echo the curl
command and leak RUNBOAT_GITLAB_TOKEN; to fix it, temporarily disable xtrace
around the sensitive curl pipeline: turn off tracing before the curl that uses
RUNBOAT_GITLAB_TOKEN and ARCHIVE_URL, run the curl | tar pipeline, then
re-enable tracing afterward so only that command is suppressed; locate the curl
invocation that references RUNBOAT_GITLAB_TOKEN and wrap it with set +x and set
-x (or equivalent) to prevent the token from being printed.
🧹 Nitpick comments (8)
chart/templates/cnpg-cluster.yaml (2)
59-59: Quote the `owner` value for consistency and safety. Line 25 quotes `username` via `| quote`, but line 59 leaves `owner` unquoted despite both sourcing from `.Values.postgresql.user`.

```diff
- owner: {{ .Values.postgresql.user }}
+ owner: {{ .Values.postgresql.user | quote }}
```
6-6: Consider using `.Release.Namespace` instead of `.Values.namespace` (chart-wide). Using `.Values.namespace` bypasses `helm install -n <ns>` and `--create-namespace`. The idiomatic Helm approach is `{{ .Release.Namespace }}`, which automatically respects the release namespace. This pattern is used consistently across the entire chart (11 locations: deployment.yaml, service.yaml, ingress.yaml, rbac.yaml, secret.yaml, serviceaccount.yaml, configmap.yaml, cnpg-cluster.yaml 3×, and _helpers.tpl). This is an optional refactoring to align with Helm best practices, but would require updating all instances for consistency.

.github/workflows/build.yml (2)
49-53: `github.ref_name` is interpolated directly into a shell context — prefer an environment variable.

`${{ github.ref_name }}` is expanded by the Actions expression engine before the shell runs, meaning a specially crafted branch name could inject shell commands. The risk is low here since pushes require write access, but the recommended pattern is to pass untrusted context via an `env` binding:

Proposed fix

```diff
   - name: Set image tags
     id: tags
+    env:
+      GIT_SHA: ${{ github.sha }}
+      GIT_REF: ${{ github.ref_name }}
     run: |
-      SHA_SHORT=$(echo "${{ github.sha }}" | cut -c1-12)
-      BRANCH=$(echo "${{ github.ref_name }}" | sed 's/[^a-zA-Z0-9._-]/-/g')
+      SHA_SHORT=$(echo "$GIT_SHA" | cut -c1-12)
+      BRANCH=$(echo "$GIT_REF" | sed 's/[^a-zA-Z0-9._-]/-/g')
       echo "sha_tag=${SHA_SHORT}" >> "$GITHUB_OUTPUT"
       echo "branch_tag=${BRANCH}" >> "$GITHUB_OUTPUT"
```
55-67: Consider adding Docker layer caching to speed up builds. Without caching, every build pulls and rebuilds all layers from scratch. GitHub Actions' built-in cache or `docker/build-push-action` with cache-to/cache-from can significantly reduce build times.

src/runboat/webhooks.py (2)
108-112: Return type says `str | None` for `project_id`, but it always returns `str`.

`str(payload["project"]["id"])` always produces a `str`. The `str | None` return type is misleading — if the key is missing, you'll get a `KeyError`, not `None`. Either narrow the return type to `tuple[str, str]` or add explicit handling for a missing `id`.

Suggested fix (narrow the type)

```diff
-def _gitlab_repo_info(payload: dict) -> tuple[str, str | None]:
+def _gitlab_repo_info(payload: dict[str, Any]) -> tuple[str, str]:
```
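To make the narrowed contract concrete, here is a minimal illustrative sketch (not the PR's actual `_gitlab_repo_info` — the function name is simplified, though the `project` object fields match GitLab's standard webhook payloads):

```python
def gitlab_repo_info(payload: dict) -> tuple[str, str]:
    """Return (repo_path, project_id); raises KeyError if fields are absent."""
    # GitLab push/MR payloads carry the "group/name" path and the numeric
    # project id inside the "project" object.
    project = payload["project"]
    return project["path_with_namespace"], str(project["id"])
```

A missing `id` surfaces as a `KeyError` rather than a silent `None`, which the narrowed `tuple[str, str]` type makes explicit.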
115-155: Inconsistent auth behavior: GitHub silently ignores bad signatures, GitLab raises 403. Line 45 silently returns on a bad GitHub signature, while line 123 raises `HTTPException(403)` for a bad GitLab token. This is intentional per the PR description, but worth noting that the GitHub path leaks information by silently accepting the request (200 OK) on invalid signatures. Consider aligning behavior in a follow-up.

src/runboat/gitlab.py (1)
15-26: New `AsyncClient` created on every request — no connection reuse. Each call to `_gitlab_request` opens and closes a fresh `httpx.AsyncClient`, losing HTTP/2 multiplexing and connection pooling. For background-task bursts (e.g., status updates on multiple commits), this adds latency and socket churn. Consider creating a module-level or app-lifespan client.

Sketch

```python
# Module-level lazy client (closed via app shutdown hook)
_client: httpx.AsyncClient | None = None

def _get_client() -> httpx.AsyncClient:
    global _client
    if _client is None:
        _client = httpx.AsyncClient()
    return _client
```

src/runboat/kubefiles/runboat-clone-and-install.sh (1)
22-24: Minimal URL encoding — only `/` is handled. The `sed 's|/|%2F|g'` only encodes forward slashes. GitLab repo paths with dots, spaces, or other special characters would break. This is likely fine for typical `org/repo` paths, but worth noting.
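For comparison, the Python side of this PR already has a complete encoder: `urllib.parse.quote` with `safe=""` percent-encodes every reserved character, not just slashes. A small sketch of that contrast:

```python
from urllib.parse import quote


def encode_project_path(repo: str) -> str:
    # Percent-encodes all reserved characters (slashes, spaces, ...),
    # which is what GitLab's /projects/:id API expects for path-style
    # project identifiers; unreserved characters like "." pass through.
    return quote(repo, safe="")
```

The `sed` one-liner and this function agree on plain `org/repo` paths but diverge as soon as the path contains a space or other reserved character.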
```yaml
  build-image:
    needs: test
    runs-on: ubuntu-latest
```
Docker images are pushed on pull_request events too — likely unintended.
The build-image job has no if guard, so it runs on both push and pull_request triggers. This means PR builds will authenticate to GCP and push images to your production Artifact Registry, polluting it with throwaway PR images.
Add an if condition to restrict this job to push events, mirroring what push-chart already does:
Proposed fix
```diff
   build-image:
     needs: test
     runs-on: ubuntu-latest
+    if: github.event_name == 'push'
     permissions:
       contents: read
       id-token: write
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| build-image: | |
| needs: test | |
| runs-on: ubuntu-latest | |
| build-image: | |
| needs: test | |
| runs-on: ubuntu-latest | |
| if: github.event_name == 'push' | |
| permissions: | |
| contents: read | |
| id-token: write |
🤖 Prompt for AI Agents
In @.github/workflows/build.yml around lines 27 - 29, The build-image job is
running for pull_request events and pushing images; restrict it to push events
by adding an if guard to the build-image job (the job named "build-image")
matching the same condition used by the push-chart job (e.g., github.event_name
== 'push' or the exact if expression used in push-chart) so the job only runs on
push events and not on pull_request events.
```yaml
stringData:
  username: {{ .Values.postgresql.user | quote }}
  password: {{ .Values.postgresql.password | quote }}
  dbname: runboat
  host: {{ include "runboat.dbHost" . }}
  port: {{ .Values.postgresql.port | quote }}
```
Inconsistent quoting in stringData fields.
username, password, and port are piped through | quote, but dbname (line 27) and host (line 28) are not. If the helper runboat.dbHost ever returns a value with special YAML characters, it will break rendering. Apply | quote consistently.
Proposed fix

```diff
-  dbname: runboat
-  host: {{ include "runboat.dbHost" . }}
+  dbname: "runboat"
+  host: {{ include "runboat.dbHost" . | quote }}
```
🤖 Prompt for AI Agents
In `@chart/templates/cnpg-cluster.yaml` around lines 24 - 29, The stringData block
is inconsistently quoted: username, password and port use | quote but dbname and
host do not, risking YAML breaks if runboat.dbHost returns special chars; update
the chart/templates/cnpg-cluster.yaml stringData entries so dbname and the
result of include "runboat.dbHost" are also passed through | quote (i.e., apply
| quote to dbname and to the include "runboat.dbHost" expression) to make
quoting consistent with .Values.postgresql.user/.password/.port.
```yaml
  postgresql:
    parameters:
      {{- toYaml .Values.postgresql.cnpg.parameters | nindent 6 }}
```
Guard parameters against nil values.
If .Values.postgresql.cnpg.parameters is not set (or empty), toYaml will render null, producing an invalid CNPG Cluster spec. Wrap in a {{- with }} block for safety.
Proposed fix

```diff
   postgresql:
-    parameters:
-      {{- toYaml .Values.postgresql.cnpg.parameters | nindent 6 }}
+    {{- with .Values.postgresql.cnpg.parameters }}
+    parameters:
+      {{- toYaml . | nindent 6 }}
+    {{- end }}
```
🤖 Prompt for AI Agents
In `@chart/templates/cnpg-cluster.yaml` around lines 50 - 52, The template renders
`.Values.postgresql.cnpg.parameters` directly with `toYaml` which outputs `null`
when the value is absent; wrap the `{{- toYaml
.Values.postgresql.cnpg.parameters | nindent 6 }}` call in a `{{- with
.Values.postgresql.cnpg.parameters }}` / `{{- end }}` guard so the parameters
block is only emitted when present, ensuring the CNPG Cluster spec remains
valid; update the `postgresql.parameters` section in the `cnpg-cluster.yaml`
template accordingly.
```yaml
  resources:
    {{- toYaml .Values.postgresql.cnpg.resources | nindent 4 }}
```
Guard resources against nil values.
Same issue as parameters — if .Values.postgresql.cnpg.resources is unset, this renders null under the resources: key.
Proposed fix

```diff
-  resources:
-    {{- toYaml .Values.postgresql.cnpg.resources | nindent 4 }}
+  {{- with .Values.postgresql.cnpg.resources }}
+  resources:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
```
🤖 Prompt for AI Agents
In `@chart/templates/cnpg-cluster.yaml` around lines 62 - 63, The template
unconditionally renders "resources:" and then the result of toYaml
.Values.postgresql.cnpg.resources which becomes "null" if that value is unset;
update the cnpg-cluster template to guard this by checking the presence of
.Values.postgresql.cnpg.resources (or using a default empty map) before emitting
the resources block so you only call toYaml and nindent when the value is
non-nil; reference the existing usage of toYaml and nindent around
.Values.postgresql.cnpg.resources in the template and mirror the same
conditional pattern used for parameters.
```python
async def get_branch_info(
    repo: str, branch: str, project_id: str | None = None
) -> CommitInfo:
    if project_id:
        branch_data = await _gitlab_request(
            "GET",
            f"/projects/{project_id}/repository/branches/{quote(branch, safe='')}",
        )
    else:
        encoded_repo = quote(repo, safe="")
        branch_data = await _gitlab_request(
            "GET",
            f"/projects/{encoded_repo}/repository/branches/{quote(branch, safe='')}",
        )
    return CommitInfo(
        repo=repo,
        target_branch=branch,
        pr=None,
        git_commit=branch_data["commit"]["id"],
        platform="gitlab",
        project_id=project_id,
    )
```
🛠️ Refactor suggestion | 🟠 Major
Simplify: deduplicate the project_id or quote(repo) pattern.
get_merge_request_info already uses pid = project_id or quote(repo, safe=""). Apply the same here to remove the branch duplication.
Proposed fix

```diff
 async def get_branch_info(
     repo: str, branch: str, project_id: str | None = None
 ) -> CommitInfo:
-    if project_id:
-        branch_data = await _gitlab_request(
-            "GET",
-            f"/projects/{project_id}/repository/branches/{quote(branch, safe='')}",
-        )
-    else:
-        encoded_repo = quote(repo, safe="")
-        branch_data = await _gitlab_request(
-            "GET",
-            f"/projects/{encoded_repo}/repository/branches/{quote(branch, safe='')}",
-        )
+    pid = project_id or quote(repo, safe="")
+    branch_data = await _gitlab_request(
+        "GET",
+        f"/projects/{pid}/repository/branches/{quote(branch, safe='')}",
+    )
     return CommitInfo(
```

🤖 Prompt for AI Agents
In `@src/runboat/gitlab.py` around lines 29 - 50, The get_branch_info function
duplicates logic for building a project id string; refactor by defining pid =
project_id or quote(repo, safe="") (like get_merge_request_info) and then call
_gitlab_request once with f"/projects/{pid}/repository/branches/{quote(branch,
safe='')}", keeping the rest of the return (CommitInfo) unchanged; this removes
the if/else branch and centralizes the project identifier construction while
still using quote, project_id, repo, branch, _gitlab_request and CommitInfo.
```bash
if [[ "${RUNBOAT_PLATFORM}" == "gitlab" ]]; then
    # GitLab archive API: /projects/:id/repository/archive.tar.gz?sha=:ref
    # Use project_id if available, otherwise URL-encode the repo path
    if [[ -n "${RUNBOAT_PROJECT_ID}" ]]; then
        ARCHIVE_URL="${RUNBOAT_GITLAB_URL}/api/v4/projects/${RUNBOAT_PROJECT_ID}/repository/archive.tar.gz?sha=${RUNBOAT_GIT_REF}"
    else
        ENCODED_REPO=$(echo "${RUNBOAT_GIT_REPO}" | sed 's|/|%2F|g')
        ARCHIVE_URL="${RUNBOAT_GITLAB_URL}/api/v4/projects/${ENCODED_REPO}/repository/archive.tar.gz?sha=${RUNBOAT_GIT_REF}"
    fi
    # Use Bearer auth if RUNBOAT_GITLAB_TOKEN is set (for private repos)
    if [[ -n "${RUNBOAT_GITLAB_TOKEN}" ]]; then
        curl -sSL -H "Authorization: Bearer ${RUNBOAT_GITLAB_TOKEN}" "${ARCHIVE_URL}" | tar zxf - --strip-components=1
    else
        curl -sSL "${ARCHIVE_URL}" | tar zxf - --strip-components=1
    fi
else
    # GitHub tarball URL
    curl -sSL https://github.com/${RUNBOAT_GIT_REPO}/tarball/${RUNBOAT_GIT_REF} | tar zxf - --strip-components=1
fi
```
Curl failures are silently swallowed by the pipe to tar.
If curl returns a non-200 response (e.g., 401, 404), the HTTP error body is piped into tar, which may fail with a confusing "not in gzip format" error. Add --fail to curl so it exits non-zero on HTTP errors, which set -o pipefail will then propagate.
Proposed fix

```diff
-        curl -sSL -H "Authorization: Bearer ${RUNBOAT_GITLAB_TOKEN}" "${ARCHIVE_URL}" | tar zxf - --strip-components=1
+        curl -sSfL -H "Authorization: Bearer ${RUNBOAT_GITLAB_TOKEN}" "${ARCHIVE_URL}" | tar zxf - --strip-components=1
     else
-        curl -sSL "${ARCHIVE_URL}" | tar zxf - --strip-components=1
+        curl -sSfL "${ARCHIVE_URL}" | tar zxf - --strip-components=1
     fi
 else
     # GitHub tarball URL
-    curl -sSL https://github.com/${RUNBOAT_GIT_REPO}/tarball/${RUNBOAT_GIT_REF} | tar zxf - --strip-components=1
+    curl -sSfL "https://github.com/${RUNBOAT_GIT_REPO}/tarball/${RUNBOAT_GIT_REF}" | tar zxf - --strip-components=1
```

Note: also quoted the GitHub URL to prevent word-splitting on special characters.
🤖 Prompt for AI Agents
In `@src/runboat/kubefiles/runboat-clone-and-install.sh` around lines 17 - 35, The
curl invocations in runboat-clone-and-install.sh silently pipe HTTP error
responses into tar (lines using curl ... | tar), causing confusing "not in gzip
format" errors; update each curl call that streams into tar to include the
--fail flag (e.g., the GitLab branch that uses ARCHIVE_URL and the GitHub branch
that constructs
https://github.com/${RUNBOAT_GIT_REPO}/tarball/${RUNBOAT_GIT_REF}) so curl exits
non‑zero on HTTP errors, and quote the constructed GitHub URL to prevent
word-splitting; ensure these changes target the curl lines that feed tar
(references: RUNBOAT_PLATFORM, ARCHIVE_URL, RUNBOAT_GITLAB_TOKEN,
RUNBOAT_GIT_REPO, RUNBOAT_GIT_REF).
```bash
if [[ -n "${RUNBOAT_GITLAB_TOKEN}" ]]; then
    curl -sSL -H "Authorization: Bearer ${RUNBOAT_GITLAB_TOKEN}" "${ARCHIVE_URL}" | tar zxf - --strip-components=1
```
set -x leaks the Bearer token into build logs.
Line 3 enables set -x, so the curl -H "Authorization: Bearer ${RUNBOAT_GITLAB_TOKEN}" command on line 28 will be echoed verbatim to stdout/stderr, exposing the token in pod logs.
Suppress tracing around the sensitive command:
Proposed fix

```diff
 # Use Bearer auth if RUNBOAT_GITLAB_TOKEN is set (for private repos)
 if [[ -n "${RUNBOAT_GITLAB_TOKEN}" ]]; then
-    curl -sSL -H "Authorization: Bearer ${RUNBOAT_GITLAB_TOKEN}" "${ARCHIVE_URL}" | tar zxf - --strip-components=1
+    set +x  # avoid leaking token in logs
+    curl -sSL -H "Authorization: Bearer ${RUNBOAT_GITLAB_TOKEN}" "${ARCHIVE_URL}" | tar zxf - --strip-components=1
+    set -x
 else
```
🤖 Prompt for AI Agents
In `@src/runboat/kubefiles/runboat-clone-and-install.sh` around lines 27 - 28, The
script enables shell tracing which will echo the curl command and leak
RUNBOAT_GITLAB_TOKEN; to fix it, temporarily disable xtrace around the sensitive
curl pipeline: turn off tracing before the curl that uses RUNBOAT_GITLAB_TOKEN
and ARCHIVE_URL, run the curl | tar pipeline, then re-enable tracing afterward
so only that command is suppressed; locate the curl invocation that references
RUNBOAT_GITLAB_TOKEN and wrap it with set +x and set -x (or equivalent) to
prevent the token from being printed.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@chart/templates/cnpg-cluster.yaml`:
- Around line 56-63: The owner and SQL interpolation use .Values.postgresql.user
raw; update the owner to use Helm's quote function and build the ALTER ROLE
statement with a safely quoted SQL identifier: change the owner reference to use
{{ .Values.postgresql.user | quote }} and generate postInitApplicationSQL using
printf to wrap the username in double quotes while escaping any internal
double-quotes (e.g. replace .Values.postgresql.user `"` `""` then feed into
printf 'ALTER ROLE "%s" CREATEDB'); adjust the template lines referencing owner
and postInitApplicationSQL accordingly (look for the bootstrap.initdb.owner and
bootstrap.initdb.postInitApplicationSQL entries).
```yaml
  bootstrap:
    initdb:
      database: runboat
      owner: {{ .Values.postgresql.user }}
      secret:
        name: runboat-db-app
      postInitApplicationSQL:
        - ALTER ROLE {{ .Values.postgresql.user }} CREATEDB
```
Unquoted user value in owner field and raw interpolation in SQL.
Line 59: owner is rendered without | quote, inconsistent with other user references. Line 63: .Values.postgresql.user is interpolated directly into the SQL statement — if the username ever contains spaces or special characters, the ALTER ROLE statement will be malformed.
Proposed fix

```diff
-      owner: {{ .Values.postgresql.user }}
+      owner: {{ .Values.postgresql.user | quote }}
       secret:
         name: runboat-db-app
       postInitApplicationSQL:
-        - ALTER ROLE {{ .Values.postgresql.user }} CREATEDB
+        - ALTER ROLE "{{ .Values.postgresql.user }}" CREATEDB
```
+ - ALTER ROLE "{{ .Values.postgresql.user }}" CREATEDB🤖 Prompt for AI Agents
In `@chart/templates/cnpg-cluster.yaml` around lines 56 - 63, The owner and SQL
interpolation use .Values.postgresql.user raw; update the owner to use Helm's
quote function and build the ALTER ROLE statement with a safely quoted SQL
identifier: change the owner reference to use {{ .Values.postgresql.user | quote
}} and generate postInitApplicationSQL using printf to wrap the username in
double quotes while escaping any internal double-quotes (e.g. replace
.Values.postgresql.user `"` `""` then feed into printf 'ALTER ROLE "%s"
CREATEDB'); adjust the template lines referencing owner and
postInitApplicationSQL accordingly (look for the bootstrap.initdb.owner and
bootstrap.initdb.postInitApplicationSQL entries).
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/runboat/models.py (1)
300-316: ⚠️ Potential issue | 🔴 Critical

Bug: `redeploy()` still calls `github.notify_status` directly, bypassing GitLab dispatch. Every other status notification in this file has been migrated to the platform-aware `_notify_status` helper, but `redeploy()` was missed. For GitLab-sourced builds, this will post the commit status to the GitHub API instead of GitLab.

🐛 Proposed fix

```diff
 async def redeploy(self) -> None:
     """Redeploy a build, to reinitialize it."""
     _logger.info(f"Redeploying {self}.")
     await k8s.kill_job(self.name, job_kind=k8s.DeploymentMode.cleanup)
     await k8s.kill_job(self.name, job_kind=k8s.DeploymentMode.initialize)
     await self._deploy(
         self.commit_info,
         self.name,
         self.slug,
         job_kind=k8s.DeploymentMode.deployment,
     )
-    await github.notify_status(
-        self.commit_info.repo,
-        self.commit_info.git_commit,
-        GitHubStatusState.pending,
-        target_url=None,
-    )
+    await _notify_status(
+        self.commit_info, GitHubStatusState.pending, target_url=None
+    )
```
🤖 Fix all issues with AI agents
In `@chart/values.yaml`:
- Around line 125-133: Change the sentinel defaults for postgresql credentials
so they aren't a predictable string: set superuserPassword and password to empty
("") in values.yaml and add a Helm template guard in your chart templates
(referencing .Values.postgresql.cnpg.superuserPassword and
.Values.postgresql.password or .Values.postgresql.password) that fails rendering
when either value is empty or still equals "CHANGE_ME"; also add a brief comment
in values.yaml clarifying that cnpg.superuserPassword is the PostgreSQL
superuser credential while postgresql.password is the application/database user
password (they may be different and should be set explicitly).
- Around line 145-148: The apiAdminPassword entry currently defaults to the
sentinel "CHANGE_ME" and needs the same required/fail guard used for the
PostgreSQL passwords; update the Helm chart values and templates to enforce
non-sentinel values by adding a validation that checks apiAdminPassword (and
optionally apiAdminUser) is not "CHANGE_ME" and fails the release if it is —
reference the apiAdminPassword key in chart/values.yaml and the template(s) that
consume it (e.g., the secret/Deployment template that reads
.Values.apiAdminPassword) and implement the same required/fail pattern used for
the PostgreSQL password variables so deployments cannot proceed with the
sentinel value.
In `@src/runboat/kubefiles/runboat-start.sh`:
- Line 41: The --db-filter argument is enclosed in single quotes so
${PGDATABASE} is not expanded; update the token in runboat-start.sh to use
double quotes and a single regex end-anchor so the variable expands correctly,
i.e. change the --db-filter argument containing '^${PGDATABASE}$$' to use double
quotes and a single trailing $ (e.g. --db-filter="^${PGDATABASE}$") so you get
the actual database name and a proper regex end-anchor without invoking $$
(PID).
🧹 Nitpick comments (7)
README.md (1)
151-158: Add a caution about not committing real tokens in `my-values.yaml`. The quick-start values file shows tokens inline; it's easy for users to accidentally commit secrets. Consider adding a short note to use Kubernetes secrets or external secret managers for real deployments.

💡 Suggested doc tweak

```diff
 # 2. Create your values override
 cat > my-values.yaml <<EOF
@@
 github:
-  token: "ghp_your_token_here"
+  token: "ghp_your_token_here"  # use a secret in real deployments
   webhookSecret: "$(openssl rand -hex 20)"
@@
 gitlab:
-  token: "glpat-your_token_here"
+  token: "glpat-your_token_here"  # use a secret in real deployments
   webhookToken: "$(openssl rand -hex 16)"
 EOF
```

Also applies to: 127-196
src/runboat/kubefiles/initialize.yaml (1)
14-16: Consider adding a security context to harden the container (pre-existing). The static analysis flags `allowPrivilegeEscalation` not being explicitly set to `false` and no `runAsNonRoot` constraint. These are pre-existing and not introduced by this PR, but worth addressing when convenient:

🛡️ Optional security context addition

```diff
   - name: initialize
     image: odoo
+    securityContext:
+      allowPrivilegeEscalation: false
+      runAsNonRoot: true
```

src/runboat/models.py (1)
27-48: Consider using `Literal["github", "gitlab"]` or an enum for the platform field. Platform is checked via string comparison in ~10 places across this file. A typo (e.g., `"Github"`) would silently fall through to the GitHub branch. Using `Literal["github", "gitlab"]` on `CommitInfo.platform` would let Pydantic reject invalid values at construction time.

src/runboat/kubefiles/ingress_mailhog.yaml (1)
7-8: Consider documenting the TLS configuration design — both ingress files omit `hosts` in the wildcard TLS block. Both `ingress_mailhog.yaml` and `ingress_odoo.yaml` use the same pattern: a TLS block with a wildcard secret but no `hosts` field. This is valid for nginx-ingress controllers (the cert serves as the default/fallback certificate when hosts are not specified) and aligns with the fact that both ingress rules also omit explicit `host` fields, making them host-agnostic.

However, since this design choice is non-obvious, add a brief inline comment explaining that the TLS block intentionally omits `hosts` to use the wildcard certificate as a default, so future maintainers understand this is by design.

chart/values.yaml (3)
11-14: Consider using a pinned image tag instead of `latest`. `latest` is mutable and makes deployments non-reproducible. A common Helm convention is to default the tag to the chart's `appVersion` (e.g., `{{ .Chart.AppVersion }}`) so each chart release pins a known image. `pullPolicy: Always` partially compensates but doesn't guarantee consistency across rollbacks or multi-node pulls.
186-210: Tokens in values.yaml will be stored in the Helm release secret in plaintext. This is standard Helm behavior, but for production consider documenting an alternative pattern: referencing pre-existing K8s Secrets (e.g., via `existingSecret` fields) instead of injecting tokens through values. This avoids secrets being stored in the Helm release configmap/secret and in version-controlled values files.
36-37: Namespace values may conflict with Helm's native `--namespace` handling. Hardcoding `namespace: runboat` in values means the chart ignores Helm's `--namespace` flag. If a user installs with `helm install -n my-ns`, resources will still target `runboat`. Consider using `{{ .Release.Namespace }}` as the default for the controller namespace, and keeping `buildNamespace` as a separate explicit value.
```yaml
    ## Superuser password — CHANGE THIS in your values override
    superuserPassword: CHANGE_ME

  ## Connection details (auto-derived from CNPG when cnpg.enabled)
  ## Override these when using an external PostgreSQL.
  host: ""  # auto: runboat-db-rw.<namespace>.svc.cluster.local
  port: "5432"
  user: runboat
  password: CHANGE_ME
```
Plaintext sentinel passwords risk accidental deployment with known credentials.
Both `superuserPassword` and `password` default to `CHANGE_ME`. If a user forgets to override these, the cluster runs with a well-known password. Consider:

- Defaulting to `""` and adding a template validation (`required` or `fail`) that blocks `helm install` when passwords are empty.
- Clarifying the relationship between `cnpg.superuserPassword` (PG superuser) and `postgresql.password` (application user) — it's not obvious whether these should match or serve different roles.
Example: Helm template guard for required secrets

Add to your deployment template:

```yaml
{{- if or (eq .Values.postgresql.cnpg.superuserPassword "CHANGE_ME") (eq .Values.postgresql.password "CHANGE_ME") }}
{{- fail "You must override postgresql.cnpg.superuserPassword and postgresql.password in your values." }}
{{- end }}
```
{{- end }}🤖 Prompt for AI Agents
In `@chart/values.yaml` around lines 125 - 133, Change the sentinel defaults for
postgresql credentials so they aren't a predictable string: set
superuserPassword and password to empty ("") in values.yaml and add a Helm
template guard in your chart templates (referencing
.Values.postgresql.cnpg.superuserPassword and .Values.postgresql.password or
.Values.postgresql.password) that fails rendering when either value is empty or
still equals "CHANGE_ME"; also add a brief comment in values.yaml clarifying
that cnpg.superuserPassword is the PostgreSQL superuser credential while
postgresql.password is the application/database user password (they may be
different and should be set explicitly).
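Until such a guard exists, operators can at least avoid hand-typing weak values; a minimal sketch using only the Python standard library (the helper name is ours, not part of the chart):

```python
import secrets


def gen_password(nbytes: int = 24) -> str:
    """Return a URL-safe random password suitable for a values override."""
    return secrets.token_urlsafe(nbytes)


# e.g. paste the output into my-values.yaml:
#   postgresql:
#     password: <gen_password() output>
if __name__ == "__main__":
    print(gen_password())
```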
```yaml
## API admin credentials (used for trigger endpoints)
apiAdminUser: admin
apiAdminPassword: CHANGE_ME
```
Same sentinel-password concern applies to `apiAdminPassword`.

Apply the same `required`/`fail` guard recommended for the PostgreSQL passwords to prevent deploying with `CHANGE_ME` as the API admin credential.
🤖 Prompt for AI Agents
In `@chart/values.yaml` around lines 145 - 148, The apiAdminPassword entry
currently defaults to the sentinel "CHANGE_ME" and needs the same required/fail
guard used for the PostgreSQL passwords; update the Helm chart values and
templates to enforce non-sentinel values by adding a validation that checks
apiAdminPassword (and optionally apiAdminUser) is not "CHANGE_ME" and fails the
release if it is — reference the apiAdminPassword key in chart/values.yaml and
the template(s) that consume it (e.g., the secret/Deployment template that reads
.Values.apiAdminPassword) and implement the same required/fail pattern used for
the PostgreSQL password variables so deployments cannot proceed with the
sentinel value.
```diff
  unbuffer $(which odoo || which openerp-server) \
      --data-dir=/mnt/data/odoo-data-dir \
-     --db-filter=^${PGDATABASE} \
+     --db-filter='^${PGDATABASE}$$' \
```
Bug: single quotes prevent `${PGDATABASE}` expansion.

Single quotes in bash suppress all variable expansion. Odoo will receive the literal string `^${PGDATABASE}$$` as the `--db-filter` regex, which won't match any actual database name — effectively breaking the deployment.

Use double quotes instead. A lone `$` at the end of a double-quoted string (not followed by a valid identifier character) is treated as a literal `$` by bash, so it works as the regex end-anchor:
🐛 Proposed fix

```diff
- --db-filter='^${PGDATABASE}$$' \
+ --db-filter="^${PGDATABASE}$" \
```
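The quoting difference is easy to verify; a small sketch driving bash from Python (the database name is invented):

```python
import os
import subprocess


def bash_echo(snippet: str) -> str:
    """Run a one-liner under bash with PGDATABASE set, as runboat-start.sh would."""
    env = {**os.environ, "PGDATABASE": "build-abc123"}
    out = subprocess.run(["bash", "-c", snippet], env=env,
                         capture_output=True, text=True)
    return out.stdout.strip()


single = bash_echo("echo '^${PGDATABASE}$$'")  # single quotes: no expansion
double = bash_echo('echo "^${PGDATABASE}$"')   # double quotes: expands, trailing $ stays literal
print(single)  # ^${PGDATABASE}$$
print(double)  # ^build-abc123$
```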
🤖 Prompt for AI Agents
In `@src/runboat/kubefiles/runboat-start.sh` at line 41, The --db-filter argument
is enclosed in single quotes so ${PGDATABASE} is not expanded; update the token
in runboat-start.sh to use double quotes and a single regex end-anchor so the
variable expands correctly, i.e. change the --db-filter argument containing
'^${PGDATABASE}$$' to use double quotes and a single trailing $ (e.g.
--db-filter="^${PGDATABASE}$") so you get the actual database name and a proper
regex end-anchor without invoking $$ (PID).
Add full GitLab platform support that works alongside the existing GitHub integration. Both platforms can be active simultaneously.

New files:
- src/runboat/gitlab.py — GitLab API client (branch info, MR info, commit status posting via Bearer auth)

Modified:
- webhooks.py — /webhooks/gitlab endpoint (Push Hook, MR Hook), token verification returns 403 on failure
- settings.py — gitlab_token, gitlab_webhook_token, gitlab_url, platform + project_id on RepoSettings
- models.py — platform-aware URL generation (GitHub/GitLab), HTTPS deploy links, dual commit status dispatch
- db.py — platform and project_id fields in build schema
- api.py — /builds/trigger/mr endpoint for GitLab MRs
- github.py — platform and project_id on CommitInfo dataclass
- kubefiles/runboat-clone-and-install.sh — platform-aware clone using GitLab archive API with Bearer auth for private repos
- Dockerfile — GitLab env var defaults
- README.md — GitLab feature docs, comparison table, quickstart

Tests: 10 new GitLab webhook tests (push, MR open/update/close/merge)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
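The commit-status posting described above targets GitLab's `POST /api/v4/projects/:id/statuses/:sha` endpoint; a hedged sketch of the request a client like `gitlab.py` might assemble (the function name and `context` value are our guesses, not the PR's actual code):

```python
def build_status_request(gitlab_url: str, project_id: int, sha: str,
                         state: str, token: str) -> dict:
    """Assemble a GitLab commit-status request (state: pending/running/success/failed)."""
    return {
        "method": "POST",
        "url": f"{gitlab_url}/api/v4/projects/{project_id}/statuses/{sha}",
        # Bearer auth, as the commit message notes; PRIVATE-TOKEN also works
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {"state": state, "context": "ci/runboat"},
    }


req = build_status_request("https://gitlab.com", 123, "deadbeef", "success", "tok")
print(req["url"])  # https://gitlab.com/api/v4/projects/123/statuses/deadbeef
```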
Full Helm chart for deploying Runboat and its dependencies:
- Controller Deployment, Service, Ingress with TLS
- CNPG PostgreSQL Cluster CR with app user secret + CREATEDB grant
- RBAC (ServiceAccount, ClusterRole, ClusterRoleBinding)
- Secrets and ConfigMap for all Runboat configuration
- Well-documented values.yaml with generic defaults, inline docs, and examples for certs, repos, GitHub/GitLab config

Supports both CNPG-managed and external PostgreSQL via values toggle. Wildcard cert instructions included for build ingress TLS.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
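Based on the value paths quoted in the review comments above, a minimal override file might look like this (the key paths are assumptions from those snippets, not verified against the chart):

```yaml
# my-values.yaml — replace every CHANGE_ME sentinel before installing
postgresql:
  password: "<generated app password>"
  cnpg:
    superuserPassword: "<generated superuser password>"
apiAdminUser: admin
apiAdminPassword: "<generated admin password>"
```

Then install with `helm install runboat ./chart -n runboat -f my-values.yaml`.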
GitHub Actions CI (.github/workflows/build.yml):
- Build + push Docker image to container registry on push to main
- Helm chart lint + package

Kubefiles fixes for production deployment:
- Add ingressClassName: nginx to build ingress templates
- Add TLS with shared wildcard cert on build ingresses
- Anchor db-filter regex with $ to prevent matching -baseonly DBs
- Increase init job timeout to 1h and CPU limit to 2 cores

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
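The `db-filter` anchoring fix above can be illustrated with plain regexes (the database names here are invented):

```python
import re

dbs = ["mybuild", "mybuild-baseonly"]

# unanchored: a prefix match also catches the -baseonly template database
unanchored = [d for d in dbs if re.match(r"^mybuild", d)]
# anchored with $: only the exact build database matches
anchored = [d for d in dbs if re.match(r"^mybuild$", d)]

print(unanchored)  # ['mybuild', 'mybuild-baseonly']
print(anchored)    # ['mybuild']
```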
0766c23 to 150afb4
Summary
Fork of sbidoul/runboat adding GitLab platform support alongside existing GitHub, plus a Helm chart for Kubernetes deployment.
GitLab Support (works alongside GitHub)
- `/webhooks/gitlab` endpoint handling Push Hook and Merge Request Hook events
- GitLab API client (`gitlab.py`) for branch info, MR info, and commit status posting
- `/api/v1/builds/trigger/mr` endpoint for triggering builds from GitLab MRs
- Platform-aware URLs for GitHub (`github.com/{repo}`) and GitLab (`gitlab.com/{repo}/-/`)

Helm Chart (`chart/`)

Full Helm chart for deploying Runboat to Kubernetes:
- `values.yaml` — generic defaults, inline docs, examples for certs, repos, GitLab/GitHub config

Build Infrastructure Fixes
- `ingressClassName: nginx` on build ingress templates
- Shared wildcard TLS cert (`runboat-builds-wildcard-tls`) on build ingresses
- `db-filter` regex anchored with `$` to prevent matching `-baseonly` databases

Files Changed
- `src/runboat/gitlab.py`
- `src/runboat/webhooks.py` — `/webhooks/gitlab` endpoint + 403 on bad token
- `src/runboat/settings.py`
- `src/runboat/models.py`
- `src/runboat/db.py`
- `src/runboat/api.py` — `/builds/trigger/mr` endpoint
- `src/runboat/github.py`
- `src/runboat/exceptions.py`
- `src/runboat/kubefiles/runboat-clone-and-install.sh`
- `src/runboat/kubefiles/runboat-start.sh`
- `src/runboat/kubefiles/ingress_odoo.yaml`
- `src/runboat/kubefiles/ingress_mailhog.yaml`
- `src/runboat/kubefiles/initialize.yaml`
- `chart/`
- `.github/workflows/build.yml`
- `tests/test_webhook.py`
- `README.md`

Test plan
- Chart passes `helm lint`
- Verified at https://runboat.dev.review-kencove.com

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Config
Tests
Improvements