Feature: Create charts for newtopo #100
Conversation
Walkthrough
Adds multiple Helm charts (arangodb, asynqmon, redis, topology-app, topology-web, traefik) with chart metadata, values, templates, and .helmignore files; introduces ~16 ArangoDB CRD manifests and numerous topology-app payload/seed templates plus Traefik job/config templates.
Sequence Diagram(s)
sequenceDiagram
participant Helm as Helm (install/upgrade)
participant K8s as Kubernetes API
participant Job as Init Job Pod
participant Secret as Kubernetes Secret
participant Arango as ArangoDB HTTP API
Helm->>K8s: create Job (post-install hook)
K8s->>Job: schedule Pod
Job->>Secret: read ARANGO_USER / ARANGO_PASS
Job->>Arango: poll /_api/version until available
Job->>Arango: POST /_api/database (create topo DB)
Job->>Arango: POST /_api/collection (create topo_list)
Arango-->>Job: respond (ok/existing)
Job-->>K8s: exit Succeeded
K8s-->>Helm: hook completed
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 17
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🤖 Fix all issues with AI agents
In @kubernetes/arangodb/Chart.yaml:
- Around line 24-31: Update the Chart.yaml so the appVersion matches a
kube-arangodb-compatible ArangoDB release: change the appVersion value from
"1.16.0" to a 3.x version (for example "3.13.0" or later) to align with the
kube-arangodb dependency version "1.3.4" and ensure operator compatibility;
verify the updated appVersion is reflected alongside the dependencies block that
contains the kube-arangodb entry.
In @kubernetes/arangodb/values.yaml:
- Around line 56-59: The values.yaml currently hardcodes auth.rootPassword =
"changeme", which is a high-risk default; remove the default rootPassword from
values.yaml (leave it unset) and update the chart templates to enforce
explicit/secure passwords by validating auth.rootPassword (and/or generate a
random password into the Kubernetes secret if you prefer automation).
Specifically, eliminate the "changeme" default value under auth.rootPassword in
values.yaml and add a Helm template validation that fails deployment if
auth.rootPassword is empty or equals "changeme" (use the
auth.secretName/arango-root-pwd and auth.rootUser symbols to locate related
templates), or alternatively implement logic that generates and stores a secure
random password into the secret when no password is provided.
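A minimal sketch of how such a guard could look in the chart's secret template, assuming the auth.secretName and auth.rootUser value names mentioned above (the exact template file and defaults in this chart may differ):

{{- $pwd := .Values.auth.rootPassword | default "" }}
{{- if or (eq $pwd "") (eq $pwd "changeme") }}
{{- fail "auth.rootPassword must be set to a non-default value (e.g. --set auth.rootPassword=...)" }}
{{- end }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.auth.secretName | default "arango-root-pwd" }}
type: Opaque
stringData:
  username: {{ .Values.auth.rootUser | default "root" | quote }}
  password: {{ $pwd | quote }}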
In @kubernetes/asynqmon/values.yaml:
- Around line 24-29: Update the image configuration: change image.repository
from "webconnex/asynqmon" to the official "hibiken/asynqmon" and replace the
image.tag value "latest" with the explicit version "1.16.0" to match the chart
appVersion; edit the image block (image.repository and image.tag) in the
values.yaml so the repository and tag are updated accordingly and keep
pullPolicy as-is.
In @kubernetes/topology-app/Chart.yaml:
- Around line 25-28: The Chart.yaml dependency for the chart named "common" uses
an invalid version string "1.x.x"; update the dependencies block in Chart.yaml
(the entry with name: common and repository:
https://svtechnmaa.github.io/charts/artifacthub/) to use a valid Helm version
constraint such as an exact "1.5.3", a caret "^1.5.3", a tilde "~1.5.3", or a
range like ">=1.4.3 <2.0.0" (choose the one that matches your compatibility
needs).
In @kubernetes/topology-app/manifest.yaml:
- Around line 1474-1482: The DATABASE_URL in the ConfigMap is exposing plaintext
DB credentials; move that value into a Kubernetes Secret (e.g., name it
topology-app-db-secret with stringData/DATABASE_URL set to the connection
string) and remove DATABASE_URL from the ConfigMap; then update the
Deployment/Pod spec to pull the variable from the secret (use env or envFrom
with secretRef / secretKeyRef referencing topology-app-db-secret and the key
DATABASE_URL) so the app reads DATABASE_URL from the Secret instead of the
ConfigMap.
- Around line 26-51: The manifest stores plaintext Password fields for the User
entries (IDs 1–4) in a ConfigMap; move those secrets into a Kubernetes Secret
resource (kind: Secret) using stringData keys (e.g., admin-password,
user-password, readonly-password, dsmanager-password), remove the Password
fields from the ConfigMap/User seed data, and update the seed process to read
passwords from the Secret (via env vars or a mounted file) instead of the
ConfigMap; additionally, prefer storing password hashes rather than plaintext
and ensure the code that seeds users (the component that consumes the User
entries) is updated to substitute secret values at runtime.
- Around line 1174-1179: The JSON in the Icon array has a trailing comma after
the last object (the object with "ID": 8, "Name": "Warning", "URL":
"/public/router-big_warning.svg"), which makes the manifest invalid; remove the
trailing comma after that object's closing brace so the array ends with ] and
the surrounding JSON remains valid.
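As a rough illustration of the Secret-related fixes above (moving DATABASE_URL and the seed-user passwords out of the ConfigMap); the secret name and all credential values are placeholders, not the project's real ones:

apiVersion: v1
kind: Secret
metadata:
  name: topology-app-db-secret
type: Opaque
stringData:
  DATABASE_URL: "mysql://app_user:CHANGE-ME@mariadb:3306/topology"  # placeholder connection string
  admin-password: "CHANGE-ME"                                       # placeholder seed-user password
---
# Deployment container spec fragment: read the value from the Secret instead of the ConfigMap
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: topology-app-db-secret
        key: DATABASE_URL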
In @kubernetes/topology-app/templates/seed-configs/_seed-datasources.tpl:
- Around line 15-28: The template directly interpolates string fields (e.g.,
$ds.title, $ds.type, $ds.url, $ds.username, $ds.password, $ds.schema) into JSON
which risks malformed output or injection; instead construct a map/object of the
datasource (using the template dict/map helpers) with the same keys (ID,
ParentID, ExternalID, Title, Type, Url, Storage, Username, Password, Details,
Schema, Query) populated from the $ds fields, then render that structure with
toJson so all string values are properly escaped; update the
_seed-datasources.tpl to stop inline string interpolation and emit each
datasource by serializing the constructed map (and still preserve the comma
logic between entries).
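One way the toJson approach could look, sketched against assumed field names ($ds.id, $ds.title, and so on) from the values file; the real key set in _seed-datasources.tpl is larger:

{{- range $i, $key := $enabledKeys }}
{{- $ds := index $.Values.config.datasources $key }}
{{- /* build a map so every string value is escaped by toJson */ -}}
{{- $entry := dict "ID" $ds.id "Title" $ds.title "Type" $ds.type "Url" $ds.url }}
{{- $entry = merge $entry (dict "Username" ($ds.username | default "") "Password" ($ds.password | default "") "Schema" ($ds.schema | default "")) }}
{{ $entry | toJson }}{{ if lt $i (sub (len $enabledKeys) 1) }},{{ end }}
{{- end }}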
In @kubernetes/topology-app/templates/seed-configs/_seed-icons.tpl:
- Around line 39-43: The JSON object for the icon with "ID": 8 / "Name":
"Warning" contains a trailing comma after the "URL" field which makes the array
invalid; remove the trailing comma after "/public/router-big_warning.svg" so the
object ends with "URL": "/public/router-big_warning.svg" (no comma) to produce
valid JSON.
In @kubernetes/topology-app/templates/seed-configs/_seed-menu-items.tpl:
- Around line 16-51: The static "Visualizer" JSON object is not separated from
the dynamic entries, producing invalid JSON when enabledKeys is non-empty; move
the block that builds $enabledKeys (the range over .Values.config.menuItems that
appends to $enabledKeys) to before the static Visualizer object or alternatively
leave it where it is and add a conditional comma after the static object using
the computed $length (or a test like if gt (len $enabledKeys) 0) so a comma is
emitted only when dynamic entries follow; adjust the template around the static
object and the subsequent range (symbols: $enabledKeys, $length, the range over
$enabledKeys, and $ds variables) to ensure valid comma placement between
objects.
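A sketch of the "collect keys first, then decide on the comma" ordering; field names such as enabled, name and url are assumptions about the menuItems values, not the chart's actual schema:

{{- define "seed-menu-items" -}}
[
{{- /* collect enabled dynamic keys first, so we know whether the static entry needs a trailing comma */ -}}
{{- $enabledKeys := list -}}
{{- range $key, $item := .Values.config.menuItems -}}
{{- if $item.enabled -}}
{{- $enabledKeys = append $enabledKeys $key -}}
{{- end -}}
{{- end -}}
  { "ID": 1, "Name": "Visualizer", "URL": "/visualizer" }{{ if gt (len $enabledKeys) 0 }},{{ end }}
{{- range $i, $key := $enabledKeys }}
{{- $item := index $.Values.config.menuItems $key }}
  { "ID": {{ add $i 2 }}, "Name": {{ $item.name | quote }}, "URL": {{ $item.url | quote }} }{{ if lt $i (sub (len $enabledKeys) 1) }},{{ end }}
{{- end }}
]
{{- end -}}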
In @kubernetes/topology-app/templates/seed-configs/_seed-queries-script.tpl:
- Around line 1-2: The script defined in "seed-queries-script" uses bash-only
constructs (e.g., [[ ... ]] tests and echo -e) but declares #!/bin/sh; either
change the shebang to a bash interpreter (#!/usr/bin/env bash) or rewrite the
bashisms to POSIX shell equivalents (use [ ... ] or test, replace echo -e with
printf, adjust any [[ ]] logic to single-bracket/portable logic). Update all
occurrences of [[ ]] (lines noted around 31, 38, 46, 121) and echo -e to their
POSIX-safe counterparts or switch the shebang, ensuring consistent use across
the template.
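If the POSIX route is chosen rather than switching the shebang, the substitutions would look roughly like this (variable names are illustrative, not the script's real ones):

#!/bin/sh
# bash: if [[ "$parent" != "Grafana" ]]  ->  POSIX single-bracket test
if [ "$parent" != "Grafana" ]; then
  echo "not a Grafana datasource"
fi

# bash: echo -e "line1\nline2"  ->  printf is portable
printf '%s\n' "line1" "line2"

# bash: [[ $name == foo* ]] glob matching  ->  case statement
case "$name" in
  foo*) echo "matches" ;;
  *)    echo "no match" ;;
esac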
In @kubernetes/topology-app/templates/seed-configs/_seed-queries.tpl:
- Line 18: The SQL in the template uses the wrong column alias "sericename";
update the query string (the "Content" value in _seed-queries.tpl) to alias
t2.name AS servicename instead of AS sericename so downstream code that expects
"servicename" receives the correct column name.
In @kubernetes/topology-app/templates/traefik-configmap.yaml:
- Around line 29-38: The Traefik routers "default" and "default-tls" contain
invalid empty service values which will break routing; update those router
blocks (routers.default and routers["default-tls"]) to either reference a valid
existing Traefik service name or remove/comment out the entire router entries if
they are placeholders, ensuring that any TLS config remains attached only to
routers that point to a real service.
In @kubernetes/topology-app/values.yaml:
- Around line 245-246: Remove the hardcoded Grafana credentials in values.yaml
(the username and password keys currently set to "thrukadmin") and replace them
with a reference to a Kubernetes Secret; create or reuse the same secret used
for DB credentials and update values.yaml to pull credentials from that secret
(use secretKeyRef for username and password) so the chart no longer contains
plaintext credentials.
- Line 159: The values.yaml currently hardcodes credentials in DATABASE_URL
which must be removed; instead add separate keys under topologyAppEnv for
DATABASE_HOST, DATABASE_PORT, DATABASE_NAME and replace the plaintext
DATABASE_URL with references to Kubernetes Secrets (e.g., DATABASE_USER_SECRET
and DATABASE_PASSWORD_SECRET objects) and update the deployment template to
build the connection string from those secret refs; also document the required
secret name and keys (username/password) so operators create a Secret like
topology-db-credentials and the app or chart template constructs the final
DATABASE_URL at deploy/runtime.
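A possible shape for the deployment-side change, assuming an operator-created Secret named topology-db-credentials with username/password keys and the new DATABASE_HOST/PORT/NAME values described above (the mysql:// scheme is illustrative):

env:
  - name: DATABASE_USER
    valueFrom:
      secretKeyRef:
        name: topology-db-credentials
        key: username
  - name: DATABASE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: topology-db-credentials
        key: password
  # Kubernetes expands $(VAR) references to env vars defined earlier in the list
  - name: DATABASE_URL
    value: "mysql://$(DATABASE_USER):$(DATABASE_PASSWORD)@{{ .Values.topologyAppEnv.DATABASE_HOST }}:{{ .Values.topologyAppEnv.DATABASE_PORT }}/{{ .Values.topologyAppEnv.DATABASE_NAME }}"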
🟠 Major comments (22)
kubernetes/redis/templates/deployment.yaml-31-36 (1)
31-36: Template references undefined values when affinity is empty. When .Values.affinity is empty (the default), the template falls back to using .Values.podAffinityPreset and .Values.podAntiAffinityPreset, but these values are not defined in values.yaml. This will cause template rendering errors. This issue was also flagged in the values.yaml review. Ensure these values are added to values.yaml:

podAffinityPreset: ""
podAntiAffinityPreset: ""

kubernetes/redis/values.yaml-128-139 (1)
128-139: Missing podAffinityPreset and podAntiAffinityPreset values. The deployment template references .Values.podAffinityPreset and .Values.podAntiAffinityPreset but these values are not defined here. This will cause template rendering issues when affinity is empty. Add missing affinity preset values:

 nodeAffinityPreset:
   type: ""
   key: ""
   values: []
+podAffinityPreset: ""
+podAntiAffinityPreset: ""
+
 affinity:

kubernetes/asynqmon/templates/deployment.yaml-44-45 (1)
44-45: Make imagePullSecrets configurable instead of hardcoded. The imagePullSecrets is hardcoded to "ghcr-pull-secret", which will cause deployment failures if this secret doesn't exist in the target namespace. This should be configurable via values.yaml and optional.
♻️ Proposed fix to make imagePullSecrets configurable
In values.yaml, add:

imagePullSecrets: []
# - name: ghcr-pull-secret

In deployment.yaml, replace the hardcoded section with:

-      imagePullSecrets:
-        - name: ghcr-pull-secret
+      {{- if .Values.imagePullSecrets }}
+      imagePullSecrets:
+        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
+      {{- end }}

kubernetes/topology-app/templates/seed-configs/_seed-datasources.tpl-24-24 (1)
24-24: Major: Plaintext password in seed configuration. The password field is directly interpolated into the ConfigMap as plaintext. This violates secrets management best practices and exposes credentials.
Consider using Kubernetes Secrets instead of ConfigMaps for datasource credentials, or reference secret names in the seed data that the application can resolve at runtime.
Would you like me to suggest an approach for externalizing credentials using Kubernetes Secrets?
kubernetes/topology-app/templates/seed-configs/_seed-datasources.tpl-14-28 (1)
14-28: Add validation for required datasource fields. The template accesses datasource fields ($ds.id, $ds.title, $ds.type, etc.) without validating their existence. If a datasource configuration is missing required fields, the template will fail during rendering with cryptic errors.
♻️ Add field validation
Add validation before the loop to ensure all required fields are present:
{{- range $i, $key := $enabledKeys -}}
{{- $ds := index $.Values.config.datasources $key }}
{{- if not (and $ds.id $ds.title $ds.type $ds.url) }}
{{- fail (printf "Datasource %s is missing required fields (id, title, type, url)" $key) }}
{{- end }}
{ "ID": {{ $ds.id }}, ...

kubernetes/topology-app/templates/payloads/_payload-dashboards.tpl-2-2 (1)
2-2: Refactor single-line JSON for maintainability. The entire dashboard payload is formatted as a single-line JSON string with escaped quotes, making it extremely difficult to read, validate, and maintain. Additionally, there appear to be irregular spacing issues within the JSON string itself.
Consider reformatting as a proper multi-line JSON structure (similar to other payload templates) or using YAML/JSON formatting with proper indentation.
♻️ Suggested refactor
{{- define "payload-dashboards" -}} -{\"layout\":\"2x2\",\"A[layout]\":\"single\",\"A[A][url]\":\"https://example.com\",\"A[A][title]\":\"Example\",\" A[A][external]\":\"true\",\"A[B][url]\":\"\",\"A[B][title]\":\"\",\"A[B][external]\":\"false\",\"A[C][url]\":\"\",\"A[C][title]\":\"\ ",\"A[C][external]\":\"false\",\"A[D][url]\":\"\",\"A[D][title]\":\"\",\"A[D][external]\":\"false\",\"B[layout]\":\"single\",\"B[A][u rl]\":\"\",\"B[A][title]\":\"\",\"B[A][external]\":\"false\",\"B[B][url]\":\"\",\"B[B][title]\":\"\",\"B[B][external]\":\"false\",\"B [C][url]\":\"\",\"B[C][title]\":\"\",\"B[C][external]\":\"false\",\"B[D][url]\":\"\",\"B[D][title]\":\"\",\"B[D][external]\":\"false\ ",\"C[layout]\":\"single\",\"C[A][url]\":\"\",\"C[A][title]\":\"\",\"C[A][external]\":\"false\",\"C[B][url]\":\"\",\"C[B][title]\":\" \",\"C[B][external]\":\"false\",\"C[C][url]\":\"\",\"C[C][title]\":\"\",\"C[C][external]\":\"false\",\"C[D][url]\":\"\",\"C[D][title] \":\"\",\"C[D][external]\":\"false\",\"D[layout]\":\"single\",\"D[A][url]\":\"\",\"D[A][title]\":\"\",\"D[A][external]\":\"false\",\" D[B][url]\":\"\",\"D[B][title]\":\"\",\"D[B][external]\":\"false\",\"D[C][url]\":\"\",\"D[C][title]\":\"\",\"D[C][external]\":\"false \",\"D[D][url]\":\"\",\"D[D][title]\":\"\",\"D[D][external]\":\"false\"} +{ + "layout": "2x2", + "A[layout]": "single", + "A[A][url]": "https://example.com", + "A[A][title]": "Example", + "A[A][external]": "true", + "A[B][url]": "", + "A[B][title]": "", + "A[B][external]": "false", + ... +} {{- end }}Note: The full JSON structure should be expanded with all sections (A, B, C, D) properly formatted.
Committable suggestion skipped: line range outside the PR's diff.
kubernetes/topology-app/templates/seed-configs/_seed-queries.tpl-90-107 (1)
90-107: Queries don't filter by specific hard_state values. Both default_critical_service_count_from_icingadb and default_warning_service_count_from_icingadb use identical SQL queries that group by hard_state but don't filter for specific values. The ResultMappers emit number_service_critical and number_service_warning respectively, but both will receive counts for ALL hard_state values, not just critical (typically hard_state=2) or warning (typically hard_state=1). Consider adding WHERE clauses to filter by the appropriate hard_state values in each query.
💡 Suggested fix
For Query 11 (critical):
- "Content": "SELECT t1.name AS hostname,t1.address, t3.hard_state, COUNT(t3.hard_state) AS hard_state_count FROM host t1 JOIN service t2 ON t1.id = t2.host_id JOIN service_state t3 ON t2.id = t3.service_id WHERE t2.name NOT LIKE 'If%' GROUP BY t1.id, t1.name, t1.address, t3.hard_state;", + "Content": "SELECT t1.name AS hostname,t1.address, COUNT(t3.hard_state) AS hard_state_count FROM host t1 JOIN service t2 ON t1.id = t2.host_id JOIN service_state t3 ON t2.id = t3.service_id WHERE t2.name NOT LIKE 'If%' AND t3.hard_state = 2 GROUP BY t1.id, t1.name, t1.address;",For Query 12 (warning):
- "Content": "SELECT t1.name AS hostname,t1.address, t3.hard_state, COUNT(t3.hard_state) AS hard_state_count FROM host t1 JOIN service t2 ON t1.id = t2.host_id JOIN service_state t3 ON t2.id = t3.service_id WHERE t2.name NOT LIKE 'If%' GROUP BY t1.id, t1.name, t1.address, t3.hard_state;", + "Content": "SELECT t1.name AS hostname,t1.address, COUNT(t3.hard_state) AS hard_state_count FROM host t1 JOIN service t2 ON t1.id = t2.host_id JOIN service_state t3 ON t2.id = t3.service_id WHERE t2.name NOT LIKE 'If%' AND t3.hard_state = 1 GROUP BY t1.id, t1.name, t1.address;",kubernetes/topology-app/templates/seed-configs/_seed-states.tpl-8-8 (1)
8-8: Single quotes produce invalid JSON. The Payload field uses single quotes ('{{ include ... }}') which are not valid JSON string delimiters. JSON requires double quotes for strings. This will produce malformed JSON output. Consider one of these approaches:
- Escape the included JSON and wrap in double quotes
- Embed the payload object directly without string wrapping (if Payload should be an object, not a string)
Option: Embed payload as object instead of string
If Payload should be a JSON object rather than a string:

- "Payload": '{{ include "payload-ok" . | nindent 8 }}',
+ "Payload": {{ include "payload-ok" . | nindent 8 }},

Apply similar changes to lines 15, 22, 29, and 36.
kubernetes/topology-app/templates/traefik-configmap.yaml-141-143 (1)
141-143: insecureSkipVerify: true disables TLS certificate verification. This bypasses certificate validation for the localssl transport, which could allow man-in-the-middle attacks. If needed for internal services with self-signed certs, document this clearly and consider restricting to specific environments.

kubernetes/traefik/templates/traefik-configmap.yaml.bak-1-191 (1)
1-191: Remove .bak file from templates directory. Backup files (.bak) should not be committed to the repository. Helm may attempt to process files in the templates directory, and this creates duplicate configuration that could lead to confusion. Remove this file and rely on version control for history.

kubernetes/topology-app/templates/traefik-configmap.yaml-10-19 (1)
10-19: Security headers disabled and overly permissive CORS.
frameDeny: false and browserXssFilter: false disable important security protections. Access-Control-Allow-Origin: "*" allows any origin, which may expose APIs to cross-origin attacks. If this is for development, consider using environment-specific values. For production, restrict the allowed origin and enable security headers.
kubernetes/topology-app/manifest.yaml-1778-1778 (1)
1778-1778: Typo: "lastest" should be "latest". The image tag contains a typo that will cause image pull failures unless an image with this exact typo exists.
🔧 Proposed fix
- image: ghcr.io/moophat/topology-app:lastest
+ image: ghcr.io/moophat/topology-app:latest

Apply this fix to lines 1778, 1818, and 1853.
Also applies to: 1818-1818, 1853-1853
kubernetes/topology-web/values.yaml-29-29 (1)
29-29: Typo: "lastest" should be "latest". The image tag contains a spelling error. While using latest is generally discouraged for production, the current typo will cause image pull failures.
🔧 Proposed fix

- tag: "lastest" # CHANGE THIS TO YOUR IMAGE TAG. "LATEST" IS NOT RECOMMENDED.
+ tag: "latest" # CHANGE THIS TO YOUR IMAGE TAG. "LATEST" IS NOT RECOMMENDED.

Consider using a specific version tag for production deployments.
kubernetes/topology-app/templates/seed-configs/_seed-queries-script.tpl-64-65 (1)
64-65: Logging Grafana API response may expose sensitive data. The full API response is logged, which could contain sensitive datasource credentials or configuration details in production logs.
🔒 Suggested fix
  response_from_grafana=$(get_datasourceID_fromGrafana)
- echo "response_from_grafana: $response_from_grafana"
+ if [[ -n "$response_from_grafana" ]]; then
+   echo "Successfully retrieved datasources from Grafana"
+ else
+   echo "Warning: Empty response from Grafana API"
+ fi

kubernetes/topology-app/templates/seed-configs/_seed-queries-script.tpl-35-37 (1)
35-37: Potential SQL injection vulnerability. The $name variable is interpolated directly into SQL queries without sanitization. If datasource names contain malicious characters, this could lead to SQL injection.
🔒 Recommended mitigation
Consider using parameterized queries or at minimum sanitize the input:
  get_datasourceID_fromMariadb() {
    local parent=$1
    local name=$2
+   # Basic sanitization - escape single quotes
+   name=$(echo "$name" | sed "s/'/''/g")
    if [[ "$parent" != "Grafana" ]]; then
      echo 9999
      return
    fi

Better yet, use prepared statements if the MariaDB client supports them, or validate that $name matches expected patterns.
Committable suggestion skipped: line range outside the PR's diff.
kubernetes/topology-app/manifest.yaml-1787-1790 (1)
1787-1790: Hardcoded IP address reduces portability. The Grafana health check URL uses a hardcoded IP (192.168.1.1). This should be configurable or use a service DNS name.
🔧 Suggested fix
Consider using the internal Grafana service DNS name:
- until $(curl --output /dev/null -k --silent --head --fail https://192.168.1.1/grafana/api/health); do
+ until $(curl --output /dev/null -k --silent --head --fail http://grafana:3000/grafana/api/health); do

Or parameterize via environment variable from the ConfigMap.
kubernetes/arangodb/crds/platform-chart.yaml-16-18 (1)
16-18: Schema validation bypass introduces security and correctness risks. The use of x-kubernetes-preserve-unknown-fields: true disables schema validation, allowing arbitrary fields that could lead to misconfigurations. Implement a proper OpenAPI schema to validate ArangoPlatformChart resources and ensure type safety.
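For illustration, a trimmed-down versions block with an explicit schema might look like the following; the spec properties shown are placeholders, since the authoritative field list for ArangoPlatformChart lives in kube-arangodb:

versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              definition:
                type: string
            required:
              - definition
          status:
            type: object
            # status can stay open if only the operator writes it
            x-kubernetes-preserve-unknown-fields: true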
kubernetes/arangodb/crds/scheduler-cronjob.yaml-16-18 (1)
16-18: Schema validation bypass introduces security and correctness risks. Using x-kubernetes-preserve-unknown-fields: true disables schema validation, allowing arbitrary fields that could result in misconfigurations or security vulnerabilities. Define a proper OpenAPI schema to validate ArangoSchedulerCronJob resources.
kubernetes/arangodb/crds/replication-deploymentreplication.yaml-18-20 (1)
18-20: Schema validation bypass poses security and correctness risks. Using x-kubernetes-preserve-unknown-fields: true disables validation for this CRD, allowing arbitrary fields that may cause misconfigurations or security issues. Define a proper OpenAPI schema to validate ArangoDeploymentReplication resources.

kubernetes/arangodb/values.yaml-18-55 (1)
kubernetes/arangodb/values.yaml-18-55 (1)
18-55: Confusing configuration: mode "Single" with cluster settings. The mode is set to Single (line 18), but extensive cluster configuration is provided for agents, coordinators, and dbservers (lines 19-55). This creates confusion about the intended deployment mode. Clarify the configuration by either:
- Changing mode to Cluster if cluster deployment is intended, or
- Removing or commenting out cluster configuration if Single mode is default, or
- Adding documentation explaining when each section applies
📝 Suggested clarification
-# Single or Cluster
-mode: Single
+# Deployment mode: Single or Cluster
+# Single: One ArangoDB instance (for development/testing)
+# Cluster: Distributed deployment with agents, coordinators, and dbservers (for production)
+mode: Single
+
+# Cluster configuration (only applies when mode: Cluster)
 cluster:
   environment: Production

kubernetes/arangodb/values.yaml-20-24 (1)
20-24: Debug logging in production environment impacts performance and security. The agents are configured with --log.level=debug while the cluster environment is set to Production. Debug-level logging:
- Significantly impacts performance
- May expose sensitive information in logs
- Increases storage requirements
Set log level to info or warning for production environments.
📝 Proposed fix

 cluster:
   environment: Production
   agents:
     counts: 3
     args:
-      - --log.level=debug
+      - --log.level=info
     resources: {}

kubernetes/arangodb/crds/backups-backup.yaml-18-20 (1)
18-20: Define explicit schema properties for the ArangoBackup resource instead of using x-kubernetes-preserve-unknown-fields. The root schema lacks any property validation. Setting x-kubernetes-preserve-unknown-fields: true without defining spec and status properties disables field validation entirely, allowing arbitrary data to be persisted. This creates security risks (validation bypass, supply-chain attacks on controllers) and makes errors harder to catch. Define a proper OpenAPI schema with properties for spec and status to enforce structure and enable validation.
🟡 Minor comments (13)
kubernetes/redis/Chart.yaml-24-24 (1)
24-24: appVersion does not match the Redis image version. The appVersion is set to "1.16.0" but the image tag in values.yaml is "7-alpine". The appVersion should reflect the application version being deployed.
Proposed fix

-appVersion: "1.16.0"
+appVersion: "7"

kubernetes/redis/templates/service.yaml-43-45 (1)
43-45: Potential issue: nodePort may be undefined. When nodePortEnabled is true and service.type is NodePort, but a port entry doesn't have a nodePort value defined, this will render an empty or invalid nodePort: field. Consider adding a conditional check.
Suggested fix

 {{- if and (eq $.Values.service.type "NodePort") $.Values.service.nodePortEnabled }}
+{{- if .nodePort }}
 nodePort: {{ .nodePort }}
+{{- end }}
 {{- end }}

kubernetes/asynqmon/values.yaml-141-143 (1)
141-143: Make Redis service address configurable. The REDIS_ADDR value is hardcoded to redis:6379, which assumes a service named "redis" exists. Other charts in this repository (Airflow, Akvorado, Report-UI) use templated service names like {{ .Chart.Name }}-redis or {{ .Release.Name }}-redis instead. Consider making REDIS_ADDR configurable via values to match the actual deployed Redis service name and maintain consistency with the rest of the codebase.

kubernetes/topology-app/templates/payloads/_payload-critical.tpl-5-14 (1)
5-14: Inconsistent edge label configuration across payload templates. The labelCfg is missing the "position": "center" field that exists in _payload-unknown.tpl (line 7). This inconsistency could cause different label positioning behavior between state visualizations.
🔧 Proposed fix to align with other payload templates

 "labelCfg": {
   "autoRotate": true,
+  "position": "center",
   "style": {

kubernetes/traefik/templates/service.yaml-5-5 (1)
5-5: Fix YAML syntax error on labels field. The static analysis flagged a syntax error here. The issue is that {{- after labels: with a space can produce invalid YAML when the included content starts with a list item. The pattern labels: {{- should be split across lines.
Proposed fix

- labels: {{- include "common.labels.standard" . | nindent 4 }}
+ labels:
+   {{- include "common.labels.standard" . | nindent 4 }}

kubernetes/traefik/templates/job-generate-cert.yaml-24-27 (1)
24-27: File existence check is ineffective — files will never exist. The check if [ ! -f /tls.crt ] || [ ! -f /tls.key ] will always be true since no volume is mounted at these paths. The certificate will always be regenerated on each Job run. If this is intentional (always regenerate), remove the misleading conditional. If you want to skip regeneration when the secret already exists, check the secret instead:
Option: Check if secret exists instead

- if [ ! -f /tls.crt ] || [ ! -f /tls.key ]; then
-   echo "Generating TLS certificate..."
-   openssl req -x509 -nodes -days {{ .Values.tls.durationDays }} -newkey rsa:2048 -keyout /tls.key -out /tls.crt -subj "/CN={{ .Values.tls.commonName }}"
- fi
+ if ! kubectl get secret {{ .Values.tls.secretName }} > /dev/null 2>&1; then
+   echo "Generating TLS certificate..."
+   openssl req -x509 -nodes -days {{ .Values.tls.durationDays }} -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN={{ .Values.tls.commonName }}"
+   kubectl create secret tls {{ .Values.tls.secretName }} --cert=/tmp/tls.crt --key=/tmp/tls.key
+ else
+   echo "Secret {{ .Values.tls.secretName }} already exists, skipping..."
+ fi

Committable suggestion skipped: line range outside the PR's diff.
kubernetes/topology-web/templates/service.yaml-43-45 (1)
43-45: Add a check for .nodePort existence before rendering. If nodePortEnabled is true but a port entry lacks a nodePort value, this will render an empty or invalid nodePort: field. Add a guard to prevent rendering when the value is undefined.
Proposed fix

- {{- if and (eq $.Values.service.type "NodePort") $.Values.service.nodePortEnabled }}
-   nodePort: {{ .nodePort }}
+ {{- if and (eq $.Values.service.type "NodePort") $.Values.service.nodePortEnabled .nodePort }}
+   nodePort: {{ .nodePort }}
  {{- end }}

kubernetes/traefik/values.yaml-24-29 (1)
24-29: Consider updating Traefik to a more recent version. The specified version v3.1.2 is valid but outdated. The latest stable release is v3.6 (released November 7, 2025). Update the tag to a recent version for security patches and improvements, while keeping it pinned rather than using LATEST.

kubernetes/traefik/templates/deployment.yaml-54-54 (1)
54-54: Inconsistent image helper usage - missing global parameter. This template uses common.images.image without the global parameter, while topology-web/templates/deployment.yaml includes it. This inconsistency may cause issues if a global image registry is configured.
🔧 Proposed fix

- image: {{ include "common.images.image" ( dict "imageRoot" .Values.image ) }}
+ image: {{ include "common.images.image" ( dict "imageRoot" .Values.image "global" .Values.global ) }}

kubernetes/topology-app/templates/payloads/_payload-templates.tpl-97-105 (1)
97-105: Typo: "ultilization" should be "utilization". The alias names contain a spelling error that will propagate to the data model.
✏️ Proposed fix
{ "queryId": "5", "queryFormat": "${source_hostname}|IfCheck-${physical_interface}", - "alias": "in_ultilization" + "alias": "in_utilization" }, { "queryId": "6", "queryFormat": "${source_hostname}|IfCheck-${physical_interface}", - "alias": "out_ultilization" + "alias": "out_utilization" }kubernetes/arangodb/values.yaml-3-7 (1)
3-7: Update to latest 3.12.x patch and document ArangoDB version constraints. The image tag is set to 3.12.2, which is compatible with the kube-arangodb operator 1.3.4 (requires ArangoDB >= 3.11.0). However, ArangoDB 3.12.2 is not the latest patch in the 3.12 release line; upgrade to the latest 3.12.x patch for the latest security and bug fixes. Consider documenting the minimum and tested ArangoDB version requirements for this chart (aligned with the operator's supported version matrix).
kubernetes/arangodb/templates/arangodeployment.yaml-23-25 (1)
23-25: Whitespace handling can be improved, but cluster field structure is correct. The cluster configuration fields are correctly placed directly in the spec (not nested under a cluster: key). This matches the ArangoDB operator's expectations for Cluster mode deployments, where agents, coordinators, and dbservers are spec-level properties. However, line 24 should use {{- instead of {{ to avoid introducing a blank line:
Fix: Add dash to trim whitespace

 {{- if eq .Values.mode "Cluster"}}
-  {{ .Values.cluster | toYaml | nindent 2 }}
+  {{- .Values.cluster | toYaml | nindent 2 }}
 {{- end }}
14-17: Remove delete from the secret verbs in the Role. The TLS job only creates and patches secrets via kubectl create (with --dry-run=client) and kubectl apply. It never deletes secrets. The delete verb should be removed to adhere to the principle of least privilege.

rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "patch"]
kubernetes/topology-app/templates/seed-configs/_seed-queries.tpl (outdated review comment, resolved)
Actionable comments posted: 0
🧹 Nitpick comments (5)
kubernetes/asynqmon/values.yaml (5)
24-29: Avoid using latest as the default image tag. The comment acknowledges this, but the default should be a pinned version. Using latest leads to non-reproducible deployments and can introduce breaking changes unexpectedly.
♻️ Suggested fix

 image:
   repository: hibiken/asynqmon # CHANGE THIS TO YOUR IMAGE
   # This sets the pull policy for images.
   pullPolicy: IfNotPresent
   # Overrides the image tag whose default is the chart appVersion.
-  tag: "latest" # CHANGE THIS TO YOUR IMAGE TAG. "LATEST" IS NOT RECOMMENDED.
+  tag: "" # Defaults to chart appVersion. Override with a specific version.

Alternatively, pin to a specific version (e.g., v0.7.1 or current stable) and set pullPolicy: IfNotPresent to ensure consistent deployments.
88-98: Consider providing default resource requests. While leaving resources empty increases portability, it can lead to scheduling issues and resource contention in production. Consider providing minimal defaults or at least uncommenting the example values as a starting point.
141-143: Hardcoded environment values reduce chart portability.
TZ: Asia/Ho_Chi_Minh should default to UTC or be left empty for broader applicability. REDIS_ADDR: redis:6379 assumes a specific Redis service name. Consider templating this or documenting the dependency clearly.
♻️ Suggested fix

 env:
-  TZ: Asia/Ho_Chi_Minh
-  REDIS_ADDR: redis:6379
+  TZ: UTC
+  # REDIS_ADDR should match your Redis service. Override in your deployment values.
+  REDIS_ADDR: "redis-master:6379"

If Redis is deployed via a separate Helm chart (e.g., Bitnami Redis), consider using a templated value like {{ .Release.Name }}-redis-master:6379 in the deployment template.
100-108: Enable health probes for production readiness. Liveness and readiness probes are commented out. For production deployments, consider enabling at least a readiness probe to ensure traffic is only routed to healthy pods.
♻️ Suggested configuration
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
111-116: Reconsider autoscaling defaults.
maxReplicas: 100 seems excessive for a monitoring dashboard like asynqmon. A more realistic default (e.g., 3-5) would better reflect typical usage.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
kubernetes/asynqmon/values.yaml
🔇 Additional comments (1)
kubernetes/asynqmon/values.yaml (1)
48-77: Service configuration looks good. The service configuration is well-documented with appropriate defaults for a ClusterIP service.
7f57e0a to d19f0a1
d19f0a1 to 97b02be
Actionable comments posted: 8
🤖 Fix all issues with AI agents
In @kubernetes/topology-app/manifest.yaml:
- Around line 25-51: The ConfigMap entry seed-users.yaml contains plaintext
passwords for User records (IDs 1-4 and Names admin, user, readonly, dsmanager),
which is insecure; move these secrets into Kubernetes Secrets (or an external
secret store) and reference them from the manifest instead of embedding values
in seed-users.yaml, e.g., create Secret objects holding the passwords and update
whichever bootstrapping code reads seed-users.yaml to pull password values from
the corresponding Secret keys for each User ID/Name.
- Around line 1418-1421: The manifest currently sets
Access-Control-Allow-Origin: "*" which is too permissive given
Access-Control-Allow-Credentials: "true"; update the CORS config (the
Access-Control-Allow-Origin header) to return a specific allowed origin or
dynamically echo the incoming Origin from a configured whitelist instead of *,
and ensure you maintain Access-Control-Allow-Credentials and set Vary: Origin or
use a config-driven list of allowed front-end origins (e.g., via an
env/configmap) so only known frontend domains are permitted; reference the
headers Access-Control-Allow-Credentials and Access-Control-Allow-Origin when
locating the change.
- Around line 1392-1400: The DATABASE_URL in the topology-app-env ConfigMap
contains credentials and must be moved to a Secret: create a Kubernetes Secret
(e.g., name it topology-app-secret) that stores the DATABASE_URL
(base64-encoded) and remove DATABASE_URL from the ConfigMap data block; then
update the Deployment spec env for DATABASE_URL to use valueFrom.secretKeyRef
with name: topology-app-secret and key: DATABASE_URL (keeping other env vars
like BACKEND_URL, PORT, etc. in the ConfigMap as-is). Ensure the Deployment
references the existing ConfigMap for the remaining envs and that the Secret
exists before rollout.
- Around line 1543-1545: The manifest currently disables TLS verification under
serversTransports -> localssl by setting insecureSkipVerify: true, which is
unsafe for production; change this to false (or remove the insecureSkipVerify
key) and configure proper TLS by supplying valid certificates/CA trust: update
the server transport to reference a proper TLS secret or CA bundle and ensure
the service uses that truststore so certificate validation succeeds rather than
bypassing it.
- Around line 1225-1226: The seed-queries-script.sh shebang asserts /bin/sh but
the script uses bash-only constructs like [[ ]] and echo -e; either change the
shebang to /bin/bash (or /usr/bin/env bash) to match those constructs, or make
the script POSIX-compliant by replacing all [[ ... ]] tests with [ ... ] (with
proper quoting) and replacing echo -e with printf; update every occurrence
(e.g., the test usages around the blocks currently using [[ ]] at lines
referenced in the review and any echo -e calls) so the script consistently uses
the chosen option.
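Sketch of the corresponding Traefik dynamic-configuration pieces for the CORS and TLS-verification comments above; the origin list and CA path are placeholders to be replaced with the real frontend origins and an internal CA bundle:

http:
  middlewares:
    cors-headers:
      headers:
        accessControlAllowCredentials: true
        accessControlAllowOriginList:
          - "https://frontend.example.com"   # placeholder origin(s)
        addVaryHeader: true
  serversTransports:
    localssl:
      # trust the internal CA instead of insecureSkipVerify: true
      rootCAs:
        - /certs/internal-ca.crt             # placeholder CA bundle path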
In @kubernetes/topology-app/values.yaml:
- Around line 167-193: The values.yaml currently hardcodes plaintext passwords
under config.users (fields: password for users admin, user, readonly,
dsmanager); replace those password fields with secret references (e.g.,
passwordSecretRef with name topology-app-users and per-user keys like
admin-password, user-password, readonly-password, dsmanager-password) and create
a corresponding Kubernetes Secret (or use Sealed Secrets/External Secrets/Vault)
to inject runtime credentials, or alternatively accept them via Helm
--set/values file at deploy time so no plaintext passwords remain in
values.yaml.
- Around line 157-165: The DATABASE_URL in topologyAppEnv exposes credentials in
plain text; replace the hard-coded "user:password" value with a reference to a
Kubernetes Secret (e.g., set DATABASE_URL to a template/value that uses the
secret key) and add corresponding Secret entries for the DB username, password,
and optionally full connection string, then update the deployment/helm chart to
mount or inject that Secret into the pod environment instead of using the
ConfigMap; search for DATABASE_URL and topologyAppEnv to locate the values and
adjust the chart templates to use envFrom or valueFrom.secretKeyRef for secure
injection.
🧹 Nitpick comments (3)
kubernetes/topology-app/values.yaml (1)
33-33: Consider pinning a specific image tag. Using latest can lead to unpredictable deployments. Pin to a specific version (e.g., semantic version or commit SHA) for reproducibility and easier rollbacks.

kubernetes/topology-app/manifest.yaml (2)
1666-1667: All containers use unpinned latest tag. The init container, server, and worker all use moophat/topology-app:latest. Pin to a specific version for deployment reproducibility and to prevent unexpected changes during pod restarts.
Also applies to: 1701-1702, 1731-1732
1695-1724: Consider adding health probes and resource limits. The topology-app-server container lacks:
- Liveness/readiness probes: Kubernetes cannot detect if the app is healthy, potentially routing traffic to unhealthy pods.
- Resource limits: Without limits, a misbehaving pod can consume unbounded resources.
🛠️ Example addition
livenessProbe:
  httpGet:
    path: /api/v1/health
    port: 4000
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /api/v1/health
    port: 4000
  initialDelaySeconds: 5
  periodSeconds: 5
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
kubernetes/arangodb/values.yaml
kubernetes/asynqmon/values.yaml
kubernetes/topology-app/manifest.yaml
kubernetes/topology-app/templates/seed-configs/_seed-queries.tpl
kubernetes/topology-app/templates/seed-configs/_seed-users.tpl
kubernetes/topology-app/values.yaml
🚧 Files skipped from review as they are similar to previous changes (3)
- kubernetes/topology-app/templates/seed-configs/_seed-queries.tpl
- kubernetes/asynqmon/values.yaml
- kubernetes/arangodb/values.yaml
🔇 Additional comments (1)
kubernetes/topology-app/values.yaml (1)
223-263: Verify empty credentials are intentional for datasources. Multiple datasources (Redis, Arango, Grafana) have empty username and password fields. Ensure these services either don't require authentication or that credentials will be provided via override values in production.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @kubernetes/topology-app/values.yaml:
- Around line 157-165: The DATABASE_URL value under topologyAppEnv is hardcoded
with credentials; move the secret material into a Kubernetes Secret and update
the deployment to read it via env valueFrom.secretKeyRef. Create a Secret (e.g.,
topology-app-secrets) with key database-url containing the full connection
string, remove or replace DATABASE_URL in values.yaml to reference the secret,
and modify the deployment/cronjob container env (the place that currently reads
topologyAppEnv -> DATABASE_URL) to use valueFrom.secretKeyRef with name
topology-app-secrets and key database-url.
- Around line 168-193: The users block in values.yaml currently contains
plaintext passwords for usernames admin, user, readonly, and dsmanager; replace
these insecure literals by referencing Kubernetes Secrets or an external secret
manager (e.g., create a Secret and reference it via existingSecret or External
Secrets) and update the chart to read credentials at runtime instead of
embedding them in the users array; if you must keep example values, change them
to clearly marked placeholders and add the commented config
snippet/documentation warning above the users section indicating these must be
overridden in production.
🧹 Nitpick comments (6)
kubernetes/topology-web/values.yaml (4)
24-29: Avoid using latest as the default image tag. The comment on line 29 warns against using "latest", yet it's set as the default. This can cause non-reproducible deployments and makes rollbacks difficult. Consider using a specific versioned tag or the chart's appVersion.
♻️ Suggested fix

 image:
   repository: svtechnmaa/svtech_topology_visualizer_fe # CHANGE THIS TO YOUR IMAGE
   # This sets the pull policy for images.
   pullPolicy: IfNotPresent
   # Overrides the image tag whose default is the chart appVersion.
-  tag: "latest" # CHANGE THIS TO YOUR IMAGE TAG. "LATEST" IS NOT RECOMMENDED.
+  tag: "" # Defaults to chart appVersion if empty. Override with a specific version tag.
111-116: Consider a more conservative default for maxReplicas. A default of 100 max replicas is unusually high and could lead to unexpected resource consumption if autoscaling is accidentally enabled without tuning. A value like 5-10 is typically more appropriate as a safe default.
♻️ Suggested fix
 autoscaling:
   enabled: false
   minReplicas: 1
-  maxReplicas: 100
+  maxReplicas: 10
   targetCPUUtilizationPercentage: 80
   # targetMemoryUtilizationPercentage: 80
141-142: Hardcoded timezone may not be appropriate for all deployments.
Asia/Ho_Chi_Minh is region-specific. For a reusable chart, consider using UTC as the default or documenting that this value should be overridden per deployment.
♻️ Suggested fix

 env:
-  TZ: Asia/Ho_Chi_Minh
+  TZ: UTC # Override with your preferred timezone
75-77: externalTrafficPolicy is ineffective with ClusterIP service type. The externalTrafficPolicy field only applies to LoadBalancer and NodePort service types. Since the default type is ClusterIP (line 51), this setting has no effect. Consider adding a comment to clarify this or moving it under the commented NodePort/LoadBalancer configuration.

kubernetes/topology-app/values.yaml (2)
28-33: Avoid using latest tag as the default. The comment correctly warns against using latest, but the default value still uses it. Consider using a specific version tag or leaving it empty to force explicit configuration. Using latest can lead to unpredictable deployments and makes rollbacks difficult.
Suggested fix

 image:
   repository: svtechnmaa/svtech_topology_visualizer_be # CHANGE THIS TO YOUR IMAGE
   # This sets the pull policy for images.
   pullPolicy: IfNotPresent
   # Overrides the image tag whose default is the chart appVersion.
-  tag: "latest" # CHANGE THIS TO YOUR IMAGE TAG. "LATEST" IS NOT RECOMMENDED.
+  tag: "" # Set to a specific version tag. Uses chart appVersion if empty.
104-112: Consider enabling health probes by default. Liveness and readiness probes are commented out. For production deployments, having default probe configurations helps ensure pod health monitoring. Consider providing uncommented defaults with sensible values that users can customize.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
kubernetes/topology-app/values.yaml
kubernetes/topology-web/values.yaml
🔇 Additional comments (6)
kubernetes/topology-web/values.yaml (3)
1-16: Global values section is well-structured. The shared volume configuration with commented examples provides good documentation for users who need to enable persistence.
48-61: Service configuration is correctly defined. The port mapping (port 80 → targetPort 3000) aligns with the topologyWebConfig.PORT value of "3000". The ClusterIP type is appropriate for internal services.
144-147: Service name and port configuration are correctly aligned. The topology-app service is deployed with the name topology-app (via {{ .Chart.Name }} in the service template), exposed on port 80, which correctly matches the BACKEND_URL: http://topology-app:80 reference in topology-web values. Kubernetes DNS will resolve this properly within the cluster.

kubernetes/topology-app/values.yaml (3)
52-91: LGTM! Service configuration is comprehensive with sensible defaults and good documentation references.
122-135: LGTM! Volume configuration using projected ConfigMaps is appropriate for combining seed configs with traefik config. The executable permission (493/0755) is suitable for script files.
223-273: Verify datasource authentication requirements. All datasources have empty username and password fields. Clarify whether:
- These datasources support anonymous access (intentionally empty)
- Credentials need to be configured for production use
If authentication is required, consider using the existingSecret pattern similar to other Helm charts, allowing credentials to be provided via Kubernetes Secrets rather than values.yaml.
Suggested pattern for secret-based credentials

datasources:
  redis:
    enabled: true
    # ... other fields ...
    # Option 1: Inline (not recommended for production)
    username: ""
    password: ""
    # Option 2: Reference existing secret
    existingSecret: "" # Name of secret containing credentials
    usernameKey: "username"
    passwordKey: "password"
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@kubernetes/topology-app/templates/deployment.yaml`:
- Around line 66-71: The readiness loop that calls curl to wait for Grafana (the
until loop invoking curl --output /dev/null -k --silent --head --fail https://{{
.Values.global.frontendVip }}/grafana/api/health) must be hardened by adding a
maximum timeout/retry count and by not hardcoding TLS verification to be
disabled; change the logic to retry only up to a configurable limit (e.g.,
GRAFANA_READY_TIMEOUT or MAX_RETRIES) and fail the init container after timeout,
and replace the hardcoded -k flag with a conditional that reads a Helm value
(e.g., .Values.global.frontendVipTlsSkipVerify) so TLS verification is skipped
only when explicitly configured; keep the follow-up echo and
/opt/seed-queries-script.sh invocation as-is but ensure the readiness check
exits nonzero on timeout so the container fails fast.
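A rough shape for that hardened wait loop; GRAFANA_READY_RETRIES and GRAFANA_TLS_SKIP_VERIFY are assumed, Helm-supplied environment variables, not existing ones:

MAX_RETRIES="${GRAFANA_READY_RETRIES:-60}"
CURL_TLS_FLAG=""
# only skip TLS verification when explicitly configured
[ "${GRAFANA_TLS_SKIP_VERIFY:-false}" = "true" ] && CURL_TLS_FLAG="-k"

i=0
until curl --output /dev/null $CURL_TLS_FLAG --silent --head --fail "https://{{ .Values.global.frontendVip }}/grafana/api/health"; do
  i=$((i + 1))
  if [ "$i" -ge "$MAX_RETRIES" ]; then
    echo "Grafana not ready after $MAX_RETRIES attempts, failing init container" >&2
    exit 1
  fi
  sleep 5
done
echo "Grafana is ready"
/opt/seed-queries-script.sh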
🧹 Nitpick comments (3)
kubernetes/topology-app/templates/deployment.yaml (3)
44-45: Consider making imagePullSecrets configurable. The secret name is hardcoded, which reduces flexibility for deployments using different registries or secret names.
♻️ Suggested refactor
-      imagePullSecrets:
-        - name: ghcr-pull-secret
+      {{- if .Values.imagePullSecrets }}
+      imagePullSecrets:
+        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
+      {{- else if .Values.global.imagePullSecrets }}
+      imagePullSecrets:
+        {{- toYaml .Values.global.imagePullSecrets | nindent 8 }}
+      {{- end }}

Then add a default in values.yaml:

imagePullSecrets:
  - name: ghcr-pull-secret
97-132: Consider adding startupProbe for migration-aware startup. The server runs with -migrate which may take variable time during first deployment or schema changes. Without a startupProbe, the livenessProbe may kill the container before migration completes.
♻️ Optional: Add startupProbe

 {{- if .Values.readinessProbe }}
 readinessProbe:
   {{- toYaml .Values.readinessProbe | nindent 12 }}
 {{- end }}
+{{- if .Values.startupProbe }}
+startupProbe:
+  {{- toYaml .Values.startupProbe | nindent 12 }}
+{{- end }}
 resources:

Then define a suitable startupProbe in values.yaml with a longer failureThreshold to accommodate migrations.
81-94: Consider extracting repeated volumeMounts logic into a named template. The same volumeMounts block is duplicated across all three containers. Extracting this to a helper template would reduce duplication and ensure consistency.
♻️ Example helper template
Create in _helpers.tpl:

{{- define "topology-app.volumeMounts" -}}
{{- if and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled }}
{{- range .Values.global.sharedPersistenceVolume }}
{{- if has $.Chart.Name .shareFor }}
- name: {{ .volumeName }}
  mountPath: {{ .path }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.volumeMounts }}
{{- toYaml .Values.volumeMounts }}
{{- end }}
{{- end }}

Then use in each container:

volumeMounts:
  {{- include "topology-app.volumeMounts" . | nindent 12 }}

Also applies to: 135-148, 181-194
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@kubernetes/asynqmon/templates/deployment.yaml`:
- Around line 44-45: Make the imagePullSecrets entry configurable and omitted
when empty: replace the hardcoded imagePullSecrets block (the "imagePullSecrets"
key and "ghcr-pull-secret" value) in the Deployment template with a Helm-driven
conditional that reads from a values key like .Values.imagePullSecrets (default
an empty list in values.yaml). Implement the template logic to only render the
imagePullSecrets section when .Values.imagePullSecrets is non-empty and iterate
over the list to emit each secret name, and add a default imagePullSecrets: []
in values.yaml.
- Around line 79-108: The deployment template currently always renders the
volumeMounts and volumes keys which can become null when no shared or
chart-specific volumes exist; update the template around the volumeMounts and
volumes blocks to conditionally render those entire sections only when
.Values.global.sharedPersistenceVolume/.Values.global.sharedVolume.enabled or
.Values.volumeMounts (for volumeMounts) and
.Values.global.sharedPersistenceVolume/.Values.global.sharedVolume.enabled or
.Values.volumes (for volumes) are present, or render an explicit empty list []
instead of null; locate the existing blocks referencing volumeMounts, volumes,
.Values.global.sharedPersistenceVolume, .Values.volumeMounts, .Values.volumes
and $.Chart.Name and wrap them with appropriate if guards or output [] to ensure
valid Kubernetes manifests.
In `@kubernetes/redis/templates/deployment.yaml`:
- Around line 44-48: The YAML is invalid because the list item for
imagePullSecrets is not properly nested; update the indentation so the dash
entry is a child of the imagePullSecrets key (ensure the "- name:
ghcr-pull-secret" line is indented under imagePullSecrets), preserving alignment
with the surrounding keys like securityContext and the template expression
toYaml .Values.podSecurityContext so the resulting block correctly renders as
imagePullSecrets: [ - name: ghcr-pull-secret ] in YAML structure.
- Around line 79-108: The template currently always emits volumeMounts: and
volumes: which can render as null when no shared or per-chart entries exist;
wrap each block (the volumeMounts section that contains the range/has checks and
the volumes section that contains the pvc range) with a conditional that only
renders the entire block when at least one entry will be emitted—for example,
check if .Values.volumeMounts (or .Values.volumes) is set OR if there are any
shared entries using a filtered count like (gt (len (where
.Values.global.sharedPersistenceVolume "shareFor" $.Chart.Name)) 0) and
.Values.global.sharedVolume.enabled; apply the same guard logic for both the
volumeMounts and volumes blocks so they are omitted entirely when empty.
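One possible guard, sketched for the volumeMounts block (the same pattern applies to volumes); as the comment notes, a stricter variant would also check that at least one shared entry actually matches the chart name:

{{- $shared := and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled }}
{{- if or $shared .Values.volumeMounts }}
          volumeMounts:
            {{- if $shared }}
            {{- range .Values.global.sharedPersistenceVolume }}
            {{- if has $.Chart.Name .shareFor }}
            - name: {{ .volumeName }}
              mountPath: {{ .path }}
            {{- end }}
            {{- end }}
            {{- end }}
            {{- with .Values.volumeMounts }}
            {{- toYaml . | nindent 12 }}
            {{- end }}
{{- end }}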
In `@kubernetes/topology-web/templates/deployment.yaml`:
- Around line 29-36: The template fallback uses .Values.podAffinityPreset,
.Values.podAntiAffinityPreset and .Values.nodeAffinityPreset.* when
.Values.affinity is absent, but those presets aren't defined; either add default
keys in values.yaml (e.g. podAffinityPreset, podAntiAffinityPreset,
nodeAffinityPreset with type/key/values) or update the deployment template to
guard each include call (check .Values.podAffinityPreset /
.Values.podAntiAffinityPreset / .Values.nodeAffinityPreset exist and are
non-empty before calling common.affinities.pods or common.affinities.nodes) so
the includes are only executed when the preset values are defined.
In `@kubernetes/topology-web/values.yaml`:
- Around line 111-116: The values file is missing defaults for
.Values.podAffinityPreset and .Values.podAntiAffinityPreset referenced by the
deployment template; add default keys podAffinityPreset and
podAntiAffinityPreset (e.g., empty string or "soft"/"hard" per chart
conventions) at the top-level of values.yaml so .Values.podAffinityPreset and
.Values.podAntiAffinityPreset resolve even when .Values.affinity is unset;
update the autoscaling block area to include these two keys with sensible
defaults to match how the template expects them.
In `@kubernetes/traefik/templates/deployment.yaml`:
- Around line 91-120: The template renders bare "volumeMounts:" and "volumes:"
headers even when there are no items, producing null in Kubernetes; guard those
entire sections so they only render when there are entries (shared or
chart-specific) or explicitly emit an empty list. Update the volumeMounts block
to only render if (.Values.volumeMounts present) OR (and
.Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled) (or
check lengths with len > 0), and similarly wrap the volumes block to only render
when (.Values.volumes present) OR (and .Values.global.sharedPersistenceVolume
.Values.global.sharedVolume.enabled), referencing the volumeMounts/volumes
headers and the .Values.global.sharedPersistenceVolume,
.Values.global.sharedVolume.enabled, .Values.volumeMounts and .Values.volumes
symbols so the sections are omitted when empty (or output "[]").
♻️ Duplicate comments (3)
kubernetes/topology-app/templates/deployment.yaml (1)
66-71: Add timeout to Grafana readiness check and avoid disabling SSL verification. This issue was previously flagged. The until loop lacks a maximum retry limit (risking indefinite blocking) and curl -k disables certificate verification.

kubernetes/topology-app/values.yaml (2)
156-164: Security: Database credentials should not be hardcoded in values.yaml. This issue was previously flagged. The DATABASE_URL contains placeholder credentials that will be committed to version control. Use Kubernetes Secrets and secretKeyRef instead.

167-192: Security: Plaintext passwords in configuration file. This issue was previously flagged. User passwords are stored in plain text and will be committed to version control. Use Kubernetes Secrets or external secret management.
🧹 Nitpick comments (9)
kubernetes/topology-app/templates/deployment.yaml (2)
44-45: Consider making imagePullSecrets configurable. The image pull secret name is hardcoded to ghcr-pull-secret. For flexibility across different environments and registries, consider making this configurable via values.
♻️ Suggested refactor

 imagePullSecrets:
-  - name: ghcr-pull-secret
+  {{- if .Values.imagePullSecrets }}
+  {{- toYaml .Values.imagePullSecrets | nindent 6 }}
+  {{- else }}
+  - name: ghcr-pull-secret
+  {{- end }}

Then add to values.yaml:

imagePullSecrets: []
# - name: ghcr-pull-secret
83-96: Consider extracting duplicated volumeMounts logic into a helper template. The same volumeMounts block (shared volumes + chart-specific mounts) is repeated three times across the init container, server container, and worker container. This increases maintenance burden and risk of inconsistency.
♻️ Suggested helper template
Create a helper in _helpers.tpl:

{{- define "topology-app.volumeMounts" -}}
# shareVolume: VolumeMount
{{- if and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled }}
{{- range .Values.global.sharedPersistenceVolume }}
{{- if has $.Chart.Name .shareFor }}
- name: {{ .volumeName }}
  mountPath: {{ .path }}
{{- end }}
{{- end }}
{{- end }}
# unshareVolume(volumeMount): define in the chart's values.yaml
{{- if .Values.volumeMounts }}
{{- toYaml .Values.volumeMounts | nindent 0 }}
{{- end }}
{{- end }}

Then use in containers:

volumeMounts:
  {{- include "topology-app.volumeMounts" . | nindent 12 }}

Also applies to: 139-152, 187-200
kubernetes/topology-app/values.yaml (2)
33-33: Avoid using latest tag as the default. While the comment acknowledges this isn't recommended, having latest as the default value creates reproducibility and rollback issues. Consider using a placeholder that forces explicit configuration or a specific version.
♻️ Suggested change

- tag: "latest" # CHANGE THIS TO YOUR IMAGE TAG. "LATEST" IS NOT RECOMMENDED.
+ tag: "" # REQUIRED: Specify a specific image tag. Using "latest" is not recommended.

Then add validation in the deployment template:

{{- if not .Values.image.tag }}
{{- fail "image.tag is required. Please specify a specific image version." }}
{{- end }}
222-272: Document empty credential fields and consider secret references for production. The datasources section has empty username and password fields. While this may be intentional for development or anonymous access, it's unclear whether:
- These are placeholders requiring override
- Anonymous access is expected
- Credentials should come from secrets in production
Add comments clarifying the intended usage and consider providing an existingSecret pattern for production deployments.
📝 Suggested documentation

 datasources:
+  # Note: Empty username/password fields below are placeholders.
+  # For production, override these values or use existingSecretRef pattern.
   redis:
     enabled: true
     id: 1
     title: Redis
     type: Redis
     storage: true
     url: "redis://redis:6379"
-    username: ""
-    password: ""
+    username: "" # Override for authenticated Redis
+    password: "" # Or use existingSecretRef in production
     schema: "0"

kubernetes/topology-web/values.yaml (4)
29-29: Image tag latest is explicitly noted as not recommended. The comment correctly warns against using latest, but the default still uses it. Consider changing the default to a specific version tag to prevent unexpected behavior during deployments when the upstream image is updated.
114-114: Consider a more reasonable default for maxReplicas. A maxReplicas value of 100 seems excessive for a web frontend component. If autoscaling is accidentally enabled without proper configuration, this could lead to unexpected resource consumption. Consider a more conservative default like 10.
141-141: Hardcoded timezone may not be appropriate for all deployments. The timezone is hardcoded to Asia/Ho_Chi_Minh. Consider using a more neutral default like UTC or documenting that this should be overridden per deployment environment.

kubernetes/topology-web/templates/deployment.yaml (1)
44-45: Hardcoded imagePullSecrets name reduces flexibility. The secret name ghcr-pull-secret is hardcoded. Consider making this configurable via values.yaml to support different registry authentication setups across environments.
Proposed fix
In values.yaml, add:

imagePullSecrets:
  - name: ghcr-pull-secret

In deployment.yaml, replace lines 44-45 with:

-      imagePullSecrets:
-        - name: ghcr-pull-secret
+      {{- if .Values.imagePullSecrets }}
+      imagePullSecrets:
+        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
+      {{- end }}

kubernetes/traefik/templates/deployment.yaml (1)
44-46: Make imagePullSecrets configurable instead of hard-coded. Hard-coding ghcr-pull-secret makes the chart fail in clusters where that secret doesn't exist. Prefer .Values.imagePullSecrets with a conditional render.
♻️ Proposed change

-      imagePullSecrets:
-        - name: ghcr-pull-secret
+      {{- if .Values.imagePullSecrets }}
+      imagePullSecrets:
+        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
+      {{- end }}
Summary by CodeRabbit
New Features
Chores