
Conversation


@vyvuvivo vyvuvivo commented Jan 27, 2026

Summary by CodeRabbit

Release Notes

  • New Features
    • Added Kubernetes Helm charts for deploying ArangoDB, Redis, Asynqmon, Topology application, Topology web interface, and Traefik proxy, enabling standardized containerized deployments with configurable templates and pre-configured default values.


@vyvuvivo vyvuvivo requested review from duchieu2k and vutuong January 27, 2026 02:36

coderabbitai bot commented Jan 27, 2026

Walkthrough

Adds comprehensive Kubernetes Helm charts for deploying multiple services: ArangoDB (with 15+ CustomResourceDefinitions), Asynqmon, Redis, Traefik, Topology-App, and Topology-Web. Each chart includes standard files (.helmignore, Chart.yaml, values.yaml) and service-specific templates.

Changes

Cohort / File(s) Summary
ArangoDB CRDs
kubernetes/arangodb/crds/analytics-graphanalyticsengine.yaml, crds/apps-job.yaml, crds/backups-backup.yaml, crds/backups-backuppolicy.yaml, crds/database-clustersynchronization.yaml, crds/database-deployment.yaml, crds/database-member.yaml, crds/database-task.yaml, crds/ml-extension.yaml, crds/ml-job-batch.yaml, crds/ml-job-cron.yaml, crds/ml-storage.yaml, crds/networking-route.yaml, crds/platform-chart.yaml, crds/platform-service.yaml, crds/platform-storage.yaml, crds/replication-deploymentreplication.yaml, crds/scheduler-batchjob.yaml, crds/scheduler-cronjob.yaml, crds/scheduler-deployment.yaml, crds/scheduler-pod.yaml, crds/scheduler-profile.yaml
Introduces 22 Kubernetes CustomResourceDefinitions for ArangoDB ecosystem, covering analytics, jobs, backups, database operations, ML extensions, networking, platform services, replication, and scheduling. Each CRD specifies API group, versions, storage/serving configuration, printer columns, and status subresources.
ArangoDB Helm Chart
kubernetes/arangodb/.helmignore, kubernetes/arangodb/Chart.yaml, kubernetes/arangodb/values.yaml, kubernetes/arangodb/templates/arangodeployment.yaml, templates/initjob.yaml, templates/secret.yaml
Complete Helm chart for ArangoDB deployment with dependencies on common chart and kube-arangodb. Includes values for image configuration, service types, cluster/single modes, authentication, and init job that polls server availability and creates topology database.
Asynqmon Helm Chart
kubernetes/asynqmon/.helmignore, kubernetes/asynqmon/Chart.yaml, kubernetes/asynqmon/values.yaml, kubernetes/asynqmon/templates/deployment.yaml, templates/service.yaml
Standard Helm chart for Asynqmon deployment with image hibiken/asynqmon:latest, configurable service ports (port 4000), replicas, affinity, security contexts, and resource management. Supports autoscaling and shared volumes.
Redis Helm Chart
kubernetes/redis/.helmignore, kubernetes/redis/Chart.yaml, kubernetes/redis/values.yaml, kubernetes/redis/templates/deployment.yaml, templates/service.yaml
Helm chart for Redis with configurable deployment parameters, image settings, service configuration with optional LoadBalancer/NodePort exposure, resource limits, probes, autoscaling, and shared volume support.
Topology-App Helm Chart
kubernetes/topology-app/.helmignore, kubernetes/topology-app/Chart.yaml, kubernetes/topology-app/values.yaml, kubernetes/topology-app/templates/deployment.yaml, templates/seed-configs-cm.yaml, templates/service.yaml, templates/topology-app-env.yaml, templates/traefik-configmap.yaml
Comprehensive Helm chart for topology-app with init containers for seeding data and migrations, dual containers (server/worker), extensive environment configuration, ConfigMaps for Traefik routing, and seed data loading.
Topology-App Payload Templates
kubernetes/topology-app/templates/payloads/_payload-critical.tpl, _payload-dashboards.tpl, _payload-down.tpl, _payload-ok.tpl, _payload-templates.tpl, _payload-unknown.tpl, _payload-warning.tpl
Seven Go template files defining JSON payload configurations for topology visualization states (ok, warning, critical, down, unknown), dashboards layout, and template styling with node/edge styling, colors, fonts, and positioning.
Topology-App Seed Templates
kubernetes/topology-app/templates/seed-configs/_seed-dashboards.tpl, _seed-datasources.tpl, _seed-groups.tpl, _seed-icons.tpl, _seed-menu-items.tpl, _seed-permissions.tpl, _seed-queries-script.tpl, _seed-queries.tpl, _seed-resource-types.tpl, _seed-states.tpl, _seed-templates.tpl, _seed-users.tpl
Twelve seed configuration templates populating dashboard, datasource, user, group, permission, icon, menu, query, and resource metadata. Includes complex seed-queries-script.tpl for Grafana datasource ID retrieval and MariaDB integration.
Topology-Web Helm Chart
kubernetes/topology-web/.helmignore, kubernetes/topology-web/Chart.yaml, kubernetes/topology-web/values.yaml, kubernetes/topology-web/templates/deployment.yaml, templates/service.yaml, templates/topology-web-configmap.yaml
Helm chart for topology-web frontend with npm-based deployment, configurable service ports (port 80 → 3000), environment variables (BACKEND_URL, NODE_ENV, PORT), and standard Kubernetes deployment patterns.
Traefik Helm Chart
kubernetes/traefik/.helmignore, kubernetes/traefik/Chart.yaml, kubernetes/traefik/values.yaml, kubernetes/traefik/templates/deployment.yaml, templates/job-generate-cert.yaml, templates/service.yaml, templates/serviceaccount.yaml, templates/traefik-configmap.yaml.bak
Complete Traefik ingress controller chart with TLS certificate generation job (self-signed via openssl), ServiceAccount with secret permissions, CORS middleware configuration, and routers for multiple backends (topology-app, topology-web, dashboard).

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested reviewers

  • vutuong
  • duchieu2k
  • phamtranlinhchi

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Title check (⚠️ Warning): The title 'fix: Update arangodb template' does not accurately represent the primary changes in this PR, which predominantly adds multiple new Helm charts (arangodb, asynqmon, redis, topology-app, topology-web, traefik) with numerous CRD files, templates, and configuration files.
    Resolution: Revise the title to reflect the main objective (adding new Helm charts) or the broader scope change. For example: 'feat: Add Helm charts for arangodb, asynqmon, redis, topology services, and traefik' or 'feat: Add multiple new Helm charts and Kubernetes manifests'.

✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped; CodeRabbit’s high-level summary is enabled.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

🤖 Fix all issues with AI agents
In `@kubernetes/arangodb/values.yaml`:
- Around line 20-33: The values.yaml currently nests the pod spec under
single.single which doesn't match templates that reference .Values.single; open
kubernetes/arangodb/values.yaml and flatten the structure by moving the contents
under the inner "single" up one level into the top-level "single" key (remove
the extra nested single key so the resources, volumeClaimTemplate, etc. are
directly under single), ensuring templates that reference .Values.single in
arangodeployment.yaml resolve correctly.
- Around line 92-95: The Helm chart currently encodes and writes
auth.rootPassword unconditionally (template uses {{ .Values.auth.rootPassword |
b64enc | quote }}) which allows an empty password to become a blank secret;
update the chart templates to validate or conditionally render the password:
either enforce non-empty by checking .Values.auth.rootPassword and
failing/returning an error if empty, or wrap the secret data entry in an if
block that only includes the base64-encoded rootPassword when
.Values.auth.rootPassword is non-empty and otherwise rely on the provided
auth.secretName; modify the template that references auth.rootPassword and add
the conditional check/error path so empty values.yaml rootPassword no longer
create a blank credential.

In `@kubernetes/topology-app/templates/seed-configs/_seed-icons.tpl`:
- Around line 39-44: The JSON array in the template ends with a trailing comma
after the last object (the object with "ID": 8, "Name": "Warning", "URL":
"/public/router-big_warning.svg"), which makes the JSON invalid; remove the
trailing comma following that object's closing brace so the array ends with ]
(i.e., delete the comma after the last element in the _seed-icons.tpl array).

In `@kubernetes/topology-app/templates/seed-configs/_seed-templates.tpl`:
- Line 8: The template uses invalid JSON by wrapping the included payload with
single quotes in the "Payload" value; update the "Payload" line that references
include "payload-templates" so it emits valid JSON: if the consumer expects a
JSON string, serialize the included payload to a JSON-escaped string (e.g., use
the templating helper that converts to JSON) and wrap it with double quotes,
otherwise remove the surrounding quotes so the included payload is emitted as a
nested JSON object; ensure to keep the use of include "payload-templates" and
adjust piping/escaping accordingly so output is valid JSON.
🟠 Major comments (26)
kubernetes/topology-app/templates/seed-configs/_seed-dashboards.tpl-2-3 (1)

2-3: Unused variable $data - payload is hardcoded instead of reused.

The template includes payload-dashboards and parses it into $data, but this variable is never used. Instead, the Payload field on line 9 duplicates the same JSON string. Either use $data or $raw to populate the Payload, or remove the unused include.

Proposed fix using the included template
 {{- define "seed-dashboards" -}}
 {{- $raw := include "payload-dashboards" . -}}
-{{- $data := fromJson $raw -}}
 {
   "Dashboard": [
     {
       "ID": 1,
       "Name": "test",
-      "Payload": "{\"layout\":\"2x2\",\"A[layout]\":\"single\",...}",
+      "Payload": {{ $raw | quote }},
       "CreatedBy": 1
     }
   ]
 }
 {{- end -}}
kubernetes/topology-app/templates/payloads/_payload-dashboards.tpl-2-2 (1)

2-2: Remove embedded whitespace from JSON keys—they will not match expected dashboard configuration values.

The JSON string contains literal whitespace within keys (e.g., " A[A][external]" with 4 leading spaces, "B[A][u rl]" with spaces in the middle). This breaks JSON parsing and key matching for any code that expects standard key names without whitespace.

Remove all accidental spaces from key names. Additionally, consider reformatting this as a multi-line JSON block or storing it in a separate file for maintainability.
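
For illustration, a minimal before/after of the key cleanup (values elided; key names taken from the excerpt above):

-      "    A[A][external]": …,
-      "B[A][u rl]": …,
+      "A[A][external]": …,
+      "B[A][url]": …,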

kubernetes/topology-app/templates/seed-configs/_seed-datasources.tpl-15-25 (1)

15-25: Move datasource credentials from ConfigMap to Kubernetes Secret.

The Username and Password fields are rendered as plaintext into the seed-datasources ConfigMap (kubernetes/topology-app/templates/seed-configs/seed-configs-cm.yaml), exposing credentials to any principal with ConfigMap read access. ConfigMaps store data as base64-encoded plaintext without encryption.

Refactor the seed initialization to:

  • Move datasource credentials to a Kubernetes Secret
  • Read credentials from the Secret during app initialization (e.g., via environment variables or secret volume mounts)
  • Pass credentials to the application at runtime rather than embedding them in the seed JSON file
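
A minimal sketch of the Secret-based wiring (resource and key names are hypothetical; the exact consumption path depends on how the seed process reads credentials):

apiVersion: v1
kind: Secret
metadata:
  name: topology-datasource-credentials  # hypothetical name
type: Opaque
stringData:
  DATASOURCE_USERNAME: ""  # set per environment, not committed to values.yaml
  DATASOURCE_PASSWORD: ""

The consuming container can then load these at runtime:

          envFrom:
            - secretRef:
                name: topology-datasource-credentials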
kubernetes/topology-app/templates/seed-configs/_seed-queries.tpl-91-106 (1)

91-106: Fix array index for hard_state_count and add state filtering.

The SQL query selects t1.name, t1.address, t3.hard_state, COUNT(t3.hard_state), mapping to array indices 0–3. The ResultMapper reads values[2] which is hard_state (not the count); it should read values[3] for the count value.

Additionally, the queries group by hard_state without filtering to specific states, so they return counts for all states (OK=0, Warning=1, Critical=2). Update the SQL WHERE clause to filter by the intended state(s) or add state-specific mapping in the ResultMapper logic, since the query names ("critical" and "warning") imply state-specific counts.

🛠️ Fix for array index
-      "ResultMapper": "function(result) { _.forEach(result.results.A.frames[0].data.values[0], function(hostname, index) { var address = result.results.A.frames[0].data.values[1][index]; var hard_state_count = result.results.A.frames[0].data.values[2][index]; emit([hostname, address].join('|'), { number_service_critical: hard_state_count }) }) }",
+      "ResultMapper": "function(result) { _.forEach(result.results.A.frames[0].data.values[0], function(hostname, index) { var address = result.results.A.frames[0].data.values[1][index]; var hard_state_count = result.results.A.frames[0].data.values[3][index]; emit([hostname, address].join('|'), { number_service_critical: hard_state_count }) }) }",
...
-      "ResultMapper": "function(result) { _.forEach(result.results.A.frames[0].data.values[0], function(hostname, index) { var address = result.results.A.frames[0].data.values[1][index]; var hard_state_count = result.results.A.frames[0].data.values[2][index]; emit([hostname, address].join('|'), { number_service_warning: hard_state_count }) }) }",
+      "ResultMapper": "function(result) { _.forEach(result.results.A.frames[0].data.values[0], function(hostname, index) { var address = result.results.A.frames[0].data.values[1][index]; var hard_state_count = result.results.A.frames[0].data.values[3][index]; emit([hostname, address].join('|'), { number_service_warning: hard_state_count }) }) }",
kubernetes/traefik/templates/service.yaml-3-5 (1)

3-5: Use a release-aware fullname for the Service name.

name: {{ .Chart.Name }} can collide across releases in the same namespace and ignores nameOverride/fullnameOverride. The common chart (already a dependency) provides the common.names.fullname helper for this purpose.

Proposed fix
 metadata:
-  name: {{ .Chart.Name }}
+  name: {{ include "common.names.fullname" . }}
kubernetes/topology-app/templates/seed-configs/_seed-queries-script.tpl-1-3 (1)

1-3: Shebang/syntax mismatch: script uses bash features but declares /bin/sh.

The script uses #!/bin/sh but contains bash-specific syntax like [[ ]] (lines 31, 38, 46, 54, 121, 123, 125, 130, 144). On strict POSIX shells (e.g., dash on Debian/Ubuntu), this will fail.

🔧 Proposed fix
-#!/bin/sh
+#!/bin/bash

Or replace all [[ ]] with POSIX-compliant [ ] tests.
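
For example, one of the flagged tests rewritten in POSIX form:

-if [[ -n "$id" && "$id" != "null" ]]; then
+if [ -n "$id" ] && [ "$id" != "null" ]; then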

kubernetes/topology-app/templates/seed-configs/_seed-queries-script.tpl-35-37 (1)

35-37: Database password exposed in process arguments.

Passing the password via -p"$PASS" makes it visible in ps output. Consider using environment variable MYSQL_PWD or a credentials file instead.

🔒 Recommended approach
-  grafana_datasource_ID=$(mariadb -u "$USER" -p"$PASS" -h "$MARIADB_HOST" -D "$DB" --skip-ssl -Nse "SELECT id FROM datasources WHERE type='Grafana' AND title='internal_grafana' LIMIT 1;")
+  grafana_datasource_ID=$(MYSQL_PWD="$PASS" mariadb -u "$USER" -h "$MARIADB_HOST" -D "$DB" --skip-ssl -Nse "SELECT id FROM datasources WHERE type='Grafana' AND title='internal_grafana' LIMIT 1;")

Apply similar changes to all mariadb invocations (lines 37, 84, 90-91).

kubernetes/traefik/templates/traefik-configmap.yaml.bak-141-143 (1)

141-143: Update traefik-configmap.yaml.bak to parameterize insecureSkipVerify and default to false.

This file appears to be a backup (.bak). The same hardcoded insecureSkipVerify: true also exists in the active kubernetes/topology-app/templates/traefik-configmap.yaml. Both need fixing to address the TLS verification bypass, which creates MITM vulnerability. Make this configurable via Helm values and default to false.

kubernetes/traefik/templates/traefik-configmap.yaml.bak-1-6 (1)

1-6: Remove the .yaml.bak file from templates/ or rename it to prevent Helm from rendering it.

Helm renders all files in the templates/ directory regardless of extension. This will create an unintended duplicate ConfigMap on every deployment. Move the file outside templates/, delete it if no longer needed, or prefix the filename with an underscore (e.g., _traefik-configmap.yaml.bak) to exclude it from rendering.

kubernetes/traefik/templates/traefik-configmap.yaml.bak-11-19 (1)

11-19: CORS: wildcard origin + credentials combination is invalid and breaks browser requests.

The Fetch/CORS specification does not allow Access-Control-Allow-Origin: "*" when Access-Control-Allow-Credentials: true is set. Browsers will reject such responses with an error. Remove credentials or specify an explicit origin instead.

🔒 Example adjustment
-              Access-Control-Allow-Credentials: "true"
-              Access-Control-Allow-Origin: "*"
+              Access-Control-Allow-Credentials: "true"
+              Access-Control-Allow-Origin: "https://example.com"
kubernetes/traefik/templates/traefik-configmap.yaml.bak-30-60 (1)

30-60: Fix undefined and empty service references in Traefik routers.

default and default-tls routers have service: "" (empty string), which Traefik rejects. Use noop@internal if these are redirect-only routers, or specify a valid service. dashboard-tls references service: "dashboard", which is not defined in the services section; it should reference api@internal (like the dashboard router does) or a properly defined service.
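
A sketch of just the corrected service references, assuming default and default-tls are redirect-only routers (other router fields unchanged):

        default:
          service: "noop@internal"  # Traefik's built-in no-op service
        default-tls:
          service: "noop@internal"
        dashboard-tls:
          service: "api@internal"   # same target as the non-TLS dashboard router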

kubernetes/asynqmon/templates/deployment.yaml-79-108 (1)

79-108: Wrap volumeMounts and volumes keys in conditionals to prevent invalid manifests when empty.

With default values (global.sharedVolume.enabled: false, empty arrays), the current template renders empty keys that violate Kubernetes schema validation. Both keys must be guarded to render only when they contain items.

Suggested fix
-          volumeMounts:
-            # shareVolume: VolumeMount
-            {{- if and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled }}
-            {{- range .Values.global.sharedPersistenceVolume }}
-            {{- if has $.Chart.Name .shareFor }}
-            - name: {{ .volumeName }}
-              mountPath: {{ .path }}
-            {{- end }}
-            {{- end }}
-            {{- end }}
-            # unshareVolume(volumeMount): define in the chart's values.yaml
-            {{- if .Values.volumeMounts }}
-            {{- toYaml .Values.volumeMounts | nindent 12 }}
-            {{- end }}
+          {{- if or (and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled) .Values.volumeMounts }}
+          volumeMounts:
+            # shareVolume: VolumeMount
+            {{- if and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled }}
+            {{- range .Values.global.sharedPersistenceVolume }}
+            {{- if has $.Chart.Name .shareFor }}
+            - name: {{ .volumeName }}
+              mountPath: {{ .path }}
+            {{- end }}
+            {{- end }}
+            {{- end }}
+            # unshareVolume(volumeMount): define in the chart's values.yaml
+            {{- if .Values.volumeMounts }}
+            {{- toYaml .Values.volumeMounts | nindent 12 }}
+            {{- end }}
+          {{- end }}
@@
-      volumes:
-      # shareVolume(volume): define in the global values
-        {{- if and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled }}
-        {{- range .Values.global.sharedPersistenceVolume }}
-        {{- if has $.Chart.Name .shareFor }}
-        - name: {{ .volumeName }}
-          persistentVolumeClaim:
-            claimName: {{ .pvcName }}
-        {{- end }}
-        {{- end }}
-        {{- end }}
-      # unshareVolume(volume): define in the chart's values.yaml
-        {{- if .Values.volumes }}
-        {{- toYaml .Values.volumes | nindent 8 }}
-        {{- end }}
+      {{- if or (and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled) .Values.volumes }}
+      volumes:
+        # shareVolume(volume): define in the global values
+        {{- if and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled }}
+        {{- range .Values.global.sharedPersistenceVolume }}
+        {{- if has $.Chart.Name .shareFor }}
+        - name: {{ .volumeName }}
+          persistentVolumeClaim:
+            claimName: {{ .pvcName }}
+        {{- end }}
+        {{- end }}
+        {{- end }}
+        # unshareVolume(volume): define in the chart's values.yaml
+        {{- if .Values.volumes }}
+        {{- toYaml .Values.volumes | nindent 8 }}
+        {{- end }}
+      {{- end }}
kubernetes/topology-app/templates/seed-configs/_seed-users.tpl-7-7 (1)

7-7: Security concern: Passwords stored in plaintext.

User passwords will be rendered as plaintext in the ConfigMap. This exposes credentials in Kubernetes resources that can be read by anyone with ConfigMap access.

Consider using password hashes instead, or reference secrets for sensitive seed data.

kubernetes/arangodb/templates/secret.yaml-1-8 (1)

1-8: Add validation for empty password and standard labels.

The rootPassword defaults to empty per the values.yaml, which could result in deploying with no password if not overridden. Consider adding a check or a generated default.

♻️ Suggested improvements
 apiVersion: v1
 kind: Secret
 metadata:
   name: {{ .Values.auth.secretName }}
+  namespace: {{ .Release.Namespace }}
+  labels:
+    app.kubernetes.io/name: arangodb
+    app.kubernetes.io/instance: {{ .Release.Name }}
 type: Opaque
 data:
   username: {{ .Values.auth.rootUser | b64enc | quote }}
-  password: {{ .Values.auth.rootPassword | b64enc | quote }}
+  password: {{ required "auth.rootPassword is required" .Values.auth.rootPassword | b64enc | quote }}

Using required will fail the Helm installation with a clear message if the password is not provided.

kubernetes/topology-app/templates/seed-configs/_seed-states.tpl-2-40 (1)

2-40: Invalid JSON syntax with single quotes in Payload values.

The template uses single quotes around Payload values (lines 8, 15, 22, 29, 36), which violates JSON spec—JSON requires double quotes. Additionally, nindent 8 adds literal whitespace inside the single-quoted string, producing invalid JSON:

"Payload": '{"edge": {...}}',

The payload templates return JSON objects that must be properly embedded. Two valid approaches:

  1. As escaped string (matches _seed-dashboards.tpl pattern): Use double quotes with escaped inner JSON
  2. As unquoted object: Remove quotes entirely if Payload should be an object

Example fix using escaped string approach:

♻️ Fix for one entry
     {
       "ID": 1,
       "Name": "Ok",
       "Order": 0,
-      "Payload": '{{ include "payload-ok" . | nindent 8 }}',
+      "Payload": "{{ include "payload-ok" . | tojson }}",
       "IsAbnormal": false
     },

Note: _seed-templates.tpl has the identical issue on line 8.

kubernetes/arangodb/templates/initjob.yaml-35-36 (1)

35-36: TLS certificate verification is disabled (-k flag).

Using -k (insecure) skips TLS certificate verification, which could expose credentials to MITM attacks in untrusted networks. If self-signed certificates are expected, consider mounting the CA certificate and using --cacert instead.

For environments where TLS verification should be enforced, consider:

  • Mounting the ArangoDB CA certificate as a secret/configmap
  • Using --cacert /path/to/ca.crt instead of -k
  • Making the insecure flag configurable via values (e.g., .Values.init.insecureSkipTLSVerify)

Also applies to: 43-46, 49-52
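
A hedged sketch of the CA-based variant (volume and secret names are assumptions; match them to the CA secret the ArangoDB operator actually creates):

          volumeMounts:
            - name: arango-ca
              mountPath: /etc/arangodb/ca
              readOnly: true
          # ... in the wait loop, verify against the mounted CA instead of passing -k:
          until curl -sf --cacert /etc/arangodb/ca/ca.crt -u "$ARANGO_USER:$ARANGO_PASS" \
            https://$ARANGO_HOST/_admin/server/availability; do
            echo "Waiting for ArangoDB..."
            sleep 5
          done

      volumes:
        - name: arango-ca
          secret:
            secretName: arango-deployment-ca  # hypothetical name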

kubernetes/topology-app/templates/seed-configs/_seed-menu-items.tpl-26-51 (1)

26-51: Trailing comma causes invalid JSON when no dynamic menu items are enabled.

Line 27 always emits a comma after the "Visualizer" item. If .Values.config.menuItems has no enabled entries, the resulting JSON will have a trailing comma before the closing bracket, which is invalid JSON syntax.

🐛 Proposed fix to conditionally emit the comma
       "ExternalTab": false,
       "Hidden": false,
       "Override": null
-    },
+    }{{- if gt (len $enabledKeys) 0 }},{{- end }}
     {{- $enabledKeys := list -}}
     {{- range $key, $ds := .Values.config.menuItems -}}
       {{- if $ds.enabled -}}
         {{- $enabledKeys = append $enabledKeys $key -}}
       {{- end -}}
     {{- end -}}

However, this requires computing $enabledKeys before emitting the comma. A cleaner approach is to restructure the logic:

-    },
-    {{- $enabledKeys := list -}}
-    {{- range $key, $ds := .Values.config.menuItems -}}
-      {{- if $ds.enabled -}}
-        {{- $enabledKeys = append $enabledKeys $key -}}
-      {{- end -}}
-    {{- end -}}
-
-    {{- $length := len $enabledKeys -}}
-
-    {{- range $i, $key := $enabledKeys -}}
+    }
+    {{- $enabledKeys := list -}}
+    {{- range $key, $ds := .Values.config.menuItems -}}
+      {{- if $ds.enabled -}}
+        {{- $enabledKeys = append $enabledKeys $key -}}
+      {{- end -}}
+    {{- end -}}
+    {{- $length := len $enabledKeys -}}
+    {{- range $i, $key := $enabledKeys -}}
+    ,
     {{- $ds := index $.Values.config.menuItems $key }}
kubernetes/arangodb/templates/arangodeployment.yaml-23-27 (1)

23-27: Fix indentation in mode-specific settings template section.

Lines 23-27 have leading whitespace that will be preserved in the rendered output. Since line 26 uses {{ tpl ... }} (not {{-), the 2-space indentation will be output before the template content. Combined with nindent 2, this places the single or cluster key at 4-space indentation instead of the required 2-space indentation under spec:.

Remove the leading 2 spaces from lines 23-27 so the whitespace is trimmed by the {{- tags:

Indentation fix
   {{- else }}
     type: None
   {{- end }}
-  {{- /* Render mode-specific settings via tpl to avoid duplicate conditionals */}}
-  {{- $modes := dict "Cluster" .Values.cluster "Single" .Values.single -}}
-  {{- with index $modes .Values.mode }}
-  {{ tpl (toYaml .) $ | nindent 2 }}
-  {{- end }}
+{{- /* Render mode-specific settings via tpl to avoid duplicate conditionals */}}
+{{- $modes := dict "Cluster" .Values.cluster "Single" .Values.single -}}
+{{- with index $modes .Values.mode }}
+{{ tpl (toYaml .) $ | nindent 2 }}
+{{- end }}
kubernetes/topology-app/templates/traefik-configmap.yaml-52-60 (1)

52-60: dashboard-tls references an undefined service.
service: "dashboard" is not defined under services. The services section only defines backend and frontend. Use api@internal to expose Traefik's dashboard, matching the non-TLS dashboard router.

🔧 Proposed fix
-          service: "dashboard"
+          service: "api@internal"
kubernetes/topology-app/templates/traefik-configmap.yaml-141-143 (1)

141-143: Avoid insecureSkipVerify: true by default.
This disables TLS verification and weakens security. Make it configurable and default to false.

🔧 Proposed fix (value-driven)
-          insecureSkipVerify: true
+          insecureSkipVerify: {{ .Values.traefik.insecureSkipVerify | default false }}

Note: Also add the corresponding configuration entry to kubernetes/topology-app/values.yaml:

traefik:
  insecureSkipVerify: false
kubernetes/topology-web/templates/service.yaml-3-5 (1)

3-5: Use release-scoped service name to avoid collisions.
metadata.name uses .Chart.Name, which is static across releases and will collide in a shared namespace when deploying multiple releases. The Deployment correctly uses common.names.fullname (which includes the release name), but the Service does not. This inconsistency means multiple releases in the same namespace will fail because all Service instances would try to use the same name.

🔧 Proposed fix
-  name: {{ .Chart.Name }}
+  name: {{ include "common.names.fullname" . }}
kubernetes/topology-app/templates/deployment.yaml-44-47 (1)

44-47: Make imagePullSecrets configurable via values.
Hard-coded ghcr-pull-secret reduces chart portability. Pods will fail if this secret doesn't exist in the target namespace. Add an optional, configurable setting that deployments can override.

🔧 Suggested approach
-      imagePullSecrets:
-      - name: ghcr-pull-secret
+      {{- if .Values.imagePullSecrets }}
+      imagePullSecrets:
+        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
+      {{- end }}

Add to values.yaml:

imagePullSecrets: []
kubernetes/topology-app/templates/traefik-configmap.yaml-30-38 (1)

30-38: Routers with empty service are invalid.
Traefik requires a service per router; an empty string will fail config validation. Both the default and default-tls routers have empty services and will not function. Point them to actual services (e.g., frontend or backend, which are defined in this ConfigMap).

🔧 Example fix (catch-all to frontend)
         default:
           middlewares: []
           rule: "PathPrefix(`/`)"
-          service: ""
+          service: "frontend"
         default-tls:
           middlewares: []
           rule: "PathPrefix(`/`)"
-          service: ""
+          service: "frontend"
           tls: {}
kubernetes/topology-app/values.yaml-156-189 (1)

156-189: Move default credentials and PII from values.yaml to Kubernetes Secrets or external secret store.

Lines 158 and 167–189 contain cleartext database credentials, default usernames/passwords, and email addresses. Values files are typically version-controlled, making this a security and compliance risk. Require explicit secret overrides per environment instead of shipping defaults.

Example pattern
-  DATABASE_URL: "user:password@tcp(mariadb:3306)/dbname?parseTime=true"
+  DATABASE_URL: "" # set via Secret / external secret
...
-  - username: admin
-    password: adminpassword
+  - username: admin
+    password: "" # set via Secret / seed process
kubernetes/topology-app/templates/traefik-configmap.yaml-10-19 (1)

10-19: CORS wildcard + credentials will break browser requests.
Access-Control-Allow-Origin: "*" cannot be combined with Access-Control-Allow-Credentials: "true"; browsers reject this per the CORS/Fetch specification with the error "Credential is not supported if the CORS header 'Access-Control-Allow-Origin' is '*'". Use a specific origin (or a list) instead.

🔧 Proposed fix (value-driven)
-              Access-Control-Allow-Origin: "*"
+              Access-Control-Allow-Origin: {{ .Values.traefik.allowedOrigin | quote }}
kubernetes/topology-app/templates/deployment.yaml-60-69 (1)

60-69: Remove the unnecessary command substitution in the Grafana wait loop.
While until $(curl ...) technically works with the current flags (--output /dev/null --silent), it's a fragile anti-pattern. If curl produces any output (e.g., if someone removes --silent), bash will attempt to execute that output as a command, causing failures. Use the standard pattern until curl ...; instead, which is clearer and safer.

Consider making TLS verification configurable instead of hard-wiring -k, to allow stricter verification in secure environments if needed.

Proposed fix
-              until $(curl --output /dev/null -k --silent --head --fail https://{{ .Values.global.frontendVip }}/grafana/api/health); do
+              until curl --output /dev/null -k --silent --head --fail https://{{ .Values.global.frontendVip }}/grafana/api/health; do
🟡 Minor comments (10)
kubernetes/arangodb/Chart.yaml-29-31 (1)

29-31: Missing newline at end of file.

POSIX-compliant text files should end with a newline character.

🔧 Suggested fix
   - name: kube-arangodb
     version: "1.3.4"
     repository: https://arangodb.github.io/kube-arangodb
+
kubernetes/redis/Chart.yaml-24-24 (1)

24-24: Incorrect appVersion for Redis.

The appVersion: "1.16.0" appears to be a placeholder from Helm scaffolding. Redis does not use this versioning scheme—typical Redis versions are 7.x, 6.x, etc. Update this to reflect the actual Redis version being deployed (e.g., "7.2.4").

kubernetes/asynqmon/Chart.yaml-24-24 (1)

24-24: Incorrect appVersion for Asynqmon.

The appVersion: "1.16.0" is a placeholder. Asynqmon versions are typically in the 0.7.x or 0.8.x range. Update this to match the actual application version being deployed.

kubernetes/arangodb/crds/database-task.yaml-14-20 (1)

14-20: Missing status subresource on storage version (v1).

Version v2alpha1 defines a status subresource (line 35-36), but v1 (the storage version) does not. This inconsistency may cause issues when clients interact with the status subresource via v2alpha1 while the storage version lacks it. Consider adding the status subresource to v1 for consistency, as done in database-clustersynchronization.yaml.

Proposed fix
     - name: v1
       schema:
         openAPIV3Schema:
           type: object
           x-kubernetes-preserve-unknown-fields: true
       served: true
       storage: true
+      subresources:
+        status: {}
kubernetes/topology-app/templates/seed-configs/_seed-datasources.tpl-19-27 (1)

19-27: JSON-escape datasource string fields.

Direct interpolation can produce invalid JSON when values contain quotes, backslashes, or control characters. Use toJson for string fields to properly escape and quote them.

♻️ Suggested fix
-      "Title": "{{ $ds.title }}",
-      "Type": "{{ $ds.type }}",
-      "Url": "{{ $ds.url }}",
+      "Title": {{ $ds.title | toJson }},
+      "Type": {{ $ds.type | toJson }},
+      "Url": {{ $ds.url | toJson }},
       "Storage": {{ $ds.storage }},
-      "Username": "{{ $ds.username }}",
-      "Password": "{{ $ds.password }}",
+      "Username": {{ $ds.username | toJson }},
+      "Password": {{ $ds.password | toJson }},
       "Details": null,
-      "Schema": "{{ $ds.schema }}",
+      "Schema": {{ $ds.schema | toJson }},
       "Query": ""
kubernetes/topology-app/templates/seed-configs/_seed-queries-script.tpl-86-86 (1)

86-86: Typo in error message: literal string instead of variable.

The error message prints the literal string 'MARIADB_HOST' instead of the variable value.

🐛 Fix
-    echo "❌ Failed to connect database '$DB' from host 'MARIADB_HOST'. Exiting..."
+    echo "❌ Failed to connect database '$DB' from host '$MARIADB_HOST'. Exiting..."
kubernetes/traefik/templates/serviceaccount.yaml-19-31 (1)

19-31: Add namespace to RoleBinding metadata.

The RoleBinding is missing namespace in its metadata, unlike the ServiceAccount and Role. This inconsistency could cause deployment issues.

🐛 Fix the missing namespace
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
   name: tls-job-rolebinding
+  namespace: {{ .Release.Namespace }}
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: Role
   name: tls-job-role
kubernetes/topology-app/templates/payloads/_payload-templates.tpl-97-105 (1)

97-105: Fix typos in alias names: "ultilization" should be "utilization".

The aliases at lines 99 and 104 contain typos that should be corrected.

🐛 Fix the typos
     {
       "queryId": "5",
       "queryFormat": "${source_hostname}|IfCheck-${physical_interface}",
-      "alias": "in_ultilization"
+      "alias": "in_utilization"
     },
     {
       "queryId": "6",
       "queryFormat": "${source_hostname}|IfCheck-${physical_interface}",
-      "alias": "out_ultilization"
+      "alias": "out_utilization"
     },
kubernetes/arangodb/templates/initjob.yaml-35-41 (1)

35-41: Availability loop lacks timeout and may run indefinitely.

The until loop has no maximum attempt limit or timeout. If ArangoDB never becomes available (misconfiguration, resource issues), this job will loop forever until the backoffLimit is hit via pod restarts, wasting resources.

Also, the hardcoded sleep 60 after availability is arbitrary. Consider making this configurable or adding a comment explaining why 60 seconds is necessary.

🔧 Suggested improvement with timeout
          set -e
+
+          MAX_ATTEMPTS=60
+          ATTEMPT=0

          ARANGO_HOST="{{ include "common.names.fullname" . }}:8529"

          until curl -sf -k -u "$ARANGO_USER:$ARANGO_PASS" \
            https://$ARANGO_HOST/_admin/server/availability   ; do
+            ATTEMPT=$((ATTEMPT + 1))
+            if [ $ATTEMPT -ge $MAX_ATTEMPTS ]; then
+              echo "Timeout waiting for ArangoDB after $MAX_ATTEMPTS attempts"
+              exit 1
+            fi
            echo "Waiting for ArangoDB..."
            sleep 5
          done
-          echo "ArangoDB is available, sleep 60s"
-          sleep 60
+          echo "ArangoDB is available, waiting for stabilization..."
+          sleep {{ .Values.init.stabilizationDelay | default 60 }}
kubernetes/arangodb/templates/arangodeployment.yaml-6-6 (1)

6-6: Add documentation for .Values.global.ci in values.yaml.

The global.ci flag is undocumented in values.yaml. Users cannot understand that setting global.ci: true forces the deployment mode to "Single" for CI environments. Add a comment above ci: false explaining this override behavior.
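
For example, a short annotation in values.yaml (wording is only a suggestion):

global:
  # When true, forces the deployment mode to "Single" regardless of .Values.mode.
  # Intended for CI environments where a full cluster is unnecessary.
  ci: false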

🧹 Nitpick comments (32)
kubernetes/arangodb/crds/analytics-graphanalyticsengine.yaml (1)

16-18: Consider adding schema validation for production use.

Using x-kubernetes-preserve-unknown-fields: true with no defined properties disables all spec validation, allowing arbitrary fields. This is acceptable for alpha CRDs or when the operator handles validation internally, but consider defining explicit schema properties before promoting to beta/stable to catch configuration errors early.

If this is mirroring official ArangoDB operator CRDs, no changes are needed.

kubernetes/arangodb/crds/scheduler-pod.yaml (1)

16-18: Consider adding schema validation or printer columns.

Using x-kubernetes-preserve-unknown-fields: true disables schema validation entirely. While acceptable for upstream CRD compatibility, consider adding additionalPrinterColumns for better kubectl get output if this is a custom CRD.
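
A minimal printer-column sketch for the version entry (the Age column is illustrative):

      additionalPrinterColumns:
        - name: Age
          type: date
          jsonPath: .metadata.creationTimestamp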

kubernetes/arangodb/Chart.yaml (1)

26-28: Consider pinning the common dependency version.

The version constraint 1.x.x allows automatic updates to any 1.y.z release, which could introduce unexpected breaking changes. Consider using a more specific constraint like 1.0.x or pinning to an exact version for reproducible builds.

♻️ Suggested fix
   - name: common
     repository: https://svtechnmaa.github.io/charts/artifacthub/
-    version: 1.x.x
+    version: 1.0.x
kubernetes/arangodb/crds/ml-extension.yaml (1)

14-31: Consider adding OpenAPI schema validation in the future.

Using x-kubernetes-preserve-unknown-fields: true with minimal schema validation provides flexibility but bypasses structural validation. This is acceptable for initial development, but consider defining explicit schema properties as the API matures to catch configuration errors early.

Also note that no conversion webhook is defined for the v1alpha1 → v1beta1 migration. Kubernetes will serve both versions without conversion, which works if the schema is identical but may cause issues if fields diverge.

kubernetes/topology-app/templates/seed-configs/_seed-dashboards.tpl (1)

8-9: Hardcoded test data with example.com URL.

The dashboard contains hardcoded test data ("Name": "test", https://example.com). Consider making these configurable via Helm values if this is intended for production use.
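
A value-driven sketch (values keys are hypothetical):

config:
  seedDashboard:
    name: "test"               # override per environment
    url: "https://example.com"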

kubernetes/traefik/values.yaml (1)

127-177: Keep the TLS secret name single-sourced to avoid drift.
volumes[].secret.secretName and tls.secretName must be updated together today. Consider a YAML anchor to keep them in sync.

♻️ Example (YAML anchor) update
 volumes:
 - name: traefik-tls
   secret:
-    secretName: traefik-tls-secret
+    secretName: &traefikTlsSecret traefik-tls-secret
@@
 tls:
   durationDays: 365
-  secretName: traefik-tls-secret
+  secretName: *traefikTlsSecret
   commonName: "traefik"
kubernetes/topology-app/Chart.yaml (1)

25-28: Pin the common chart dependency to a specific version for reproducibility. 1.x.x allows minor and patch updates during helm dependency update, which can introduce unplanned changes. Pin to a known-good version (currently 1.4.3) or commit a Chart.lock file to ensure consistent deployments.
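
♻️ Example pin (repository from this PR, version from the comment above)
  - name: common
    repository: https://svtechnmaa.github.io/charts/artifacthub/
    version: 1.4.3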

kubernetes/traefik/templates/deployment.yaml (2)

44-45: Consider making imagePullSecrets configurable.

The image pull secret name is hardcoded to ghcr-pull-secret. This reduces flexibility across different environments or registries.

♻️ Suggested refactor
-      imagePullSecrets:
-      - name: ghcr-pull-secret
+      {{- if .Values.imagePullSecrets }}
+      imagePullSecrets:
+        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
+      {{- end }}

Then in values.yaml:

imagePullSecrets:
  - name: ghcr-pull-secret

71-73: HTTP-to-HTTPS redirect is hardcoded.

The entrypoint redirect configuration is hardcoded. Consider making this configurable via values if you need flexibility to disable redirects in certain environments (e.g., local development behind a proxy).
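
A hedged sketch using Traefik v2's standard redirection flags behind a values toggle (the redirectToHttps key is hypothetical):

        {{- if .Values.redirectToHttps }}
        - --entrypoints.web.http.redirections.entrypoint.to=websecure
        - --entrypoints.web.http.redirections.entrypoint.scheme=https
        {{- end }}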

kubernetes/topology-app/templates/seed-configs/_seed-queries-script.tpl (1)

131-131: Magic number 5 needs documentation or a named constant.

The comment mentions "datasourceID of Grafana Internal should be set to 5" but it's unclear why. This magic number should be documented or extracted to a variable.

♻️ Suggested improvement
+        # Offset for internal Grafana datasource IDs (configured in datasources table)
+        GRAFANA_INTERNAL_OFFSET=5
         if [[ -n "$id" && "$id" != "null" ]]; then
-          ds_id=$((id + 5)) # the datasourceID of Grafana Internal should be set to 5
+          ds_id=$((id + GRAFANA_INTERNAL_OFFSET))
kubernetes/asynqmon/templates/deployment.yaml (1)

44-45: Make imagePullSecrets configurable (avoid hardcoded secret).

Hardcoding ghcr-pull-secret reduces portability and can cause missing-secret warnings. Prefer a values-driven list and render only when set.

♻️ Suggested Helm update
-      imagePullSecrets:
-      - name: ghcr-pull-secret
+      {{- if .Values.imagePullSecrets }}
+      imagePullSecrets:
+        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
+      {{- end }}
kubernetes/asynqmon/values.yaml (1)

24-30: Avoid defaulting to latest image tag.

Pin to a specific version to keep deploys reproducible.

♻️ Suggested default
-  tag: "latest" # CHANGE THIS TO YOUR IMAGE TAG. "LATEST" IS NOT RECOMMENDED. 
+  tag: "1.16.0" # Pin to a known-good version
kubernetes/topology-app/templates/topology-app-env.yaml (1)

1-13: Consider using a Secret for sensitive configuration values.

DATABASE_URL and REDIS_ADDRESS may contain credentials or connection strings with embedded passwords. ConfigMaps store data in plaintext and are not suitable for sensitive information.

Additionally, consider adding standard Helm labels and namespace for better resource management:

♻️ Suggested improvements
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: topology-app-env
+  namespace: {{ .Release.Namespace }}
+  labels:
+    app.kubernetes.io/name: topology-app
+    app.kubernetes.io/instance: {{ .Release.Name }}
 data:
   BACKEND_URL: "{{ .Values.topologyAppEnv.BACKEND_URL }}"
-  DATABASE_URL: "{{ .Values.topologyAppEnv.DATABASE_URL }}"
   FRONTEND_URL: "{{ .Values.topologyAppEnv.FRONTEND_URL }}"
   PORT: "{{ .Values.topologyAppEnv.PORT }}"
-  REDIS_ADDRESS: "{{ .Values.topologyAppEnv.REDIS_ADDRESS }}"
   UPLOAD_DIR: "{{ .Values.topologyAppEnv.UPLOAD_DIR }}"
   PERMISSIONS_ENABLED: "{{ .Values.topologyAppEnv.PERMISSIONS_ENABLED }}"
   TRAEFIK_TEMPLATE: "{{ .Values.topologyAppEnv.TRAEFIK_TEMPLATE }}"

Move DATABASE_URL and REDIS_ADDRESS to a separate Secret if they contain credentials.

kubernetes/topology-app/templates/seed-configs/_seed-users.tpl (1)

9-10: Rename $groupID to $group for clarity.

The variable $groupID suggests a simple ID value, but it actually contains a group object with an ID field. In values.yaml, groups are defined as objects (e.g., { ID: 1 }), so the loop variable should be named $group to accurately reflect that it holds the entire group object, not just the ID.

The code functions correctly as-is, but the variable naming is misleading.

kubernetes/topology-web/templates/topology-web-configmap.yaml (1)

1-8: Add namespace and standard Helm labels for consistency.

This ConfigMap lacks namespace and standard Helm labels that other resources in this PR include (e.g., serviceaccount.yaml and seed-configs-cm.yaml). This can cause deployment issues when installing to non-default namespaces.

♻️ Suggested improvement
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: topology-web-env
+  namespace: {{ .Release.Namespace }}
+  labels:
+    app.kubernetes.io/instance: {{ .Release.Name }}
+    app.kubernetes.io/managed-by: Helm
 data:
   BACKEND_URL: "{{ .Values.topologyWebConfig.BACKEND_URL }}"
   NODE_ENV: "{{ .Values.topologyWebConfig.NODE_ENV }}"
-  PORT: "{{ .Values.topologyWebConfig.PORT }}"
+  PORT: "{{ .Values.topologyWebConfig.PORT | toString }}"

Also consider using toString for PORT to ensure numeric values in values.yaml are properly converted to strings.

kubernetes/topology-app/templates/seed-configs-cm.yaml (2)

1-8: Add explicit namespace for consistency.

While Helm defaults to the release namespace, explicitly specifying it improves clarity and consistency with other resources like serviceaccount.yaml.

♻️ Suggested improvement
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: seed-configs
+  namespace: {{ .Release.Namespace }}
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: Helm

9-28: Consider renaming .yaml keys to .json for content accuracy.

The seed templates (e.g., seed-icons, seed-templates) output JSON, not YAML. Using .yaml extensions could be misleading. If the consuming application expects .yaml keys, this is fine; otherwise, consider renaming for clarity.

kubernetes/topology-app/templates/payloads/_payload-templates.tpl (1)

163-202: Consider documenting the state magic numbers.

The nodeRuleCfg and edgeRuleCfg sections use numeric state values (1-5) that aren't self-documenting. Adding a comment or referencing the states from seed-states would improve maintainability.

For example, if states map to icon IDs:

  • 1 = Ok
  • 2 = Warning
  • 3 = Critical
  • 4 = Down
  • 5 = Unknown

A brief comment at the top of the rules section would help future maintainers understand the mapping.

kubernetes/traefik/templates/serviceaccount.yaml (1)

14-17: Remove the delete verb from the Role to follow least-privilege principles.

The TLS job only uses kubectl apply to create or update the TLS secret, which requires create, get, and patch verbs. The delete verb is not exercised by the job and should be removed.
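
A least-privilege rule sketch (verbs taken from the comment above):

rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "patch"]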

kubernetes/topology-app/templates/seed-configs/_seed-menu-items.tpl (1)

45-46: Null values rendered as empty strings may cause issues.

When $ds.username or $ds.password is null or unset, the template will render "Username": "" instead of "Username": null. This differs from the static items above which use null. If the consuming application expects null vs empty string distinction, this could cause inconsistent behavior.

♻️ Proposed fix to handle null values consistently
-      "Username": "{{ $ds.username }}",
-      "Password": "{{ $ds.password }}",
+      "Username": {{ if $ds.username }}"{{ $ds.username }}"{{ else }}null{{ end }},
+      "Password": {{ if $ds.password }}"{{ $ds.password }}"{{ else }}null{{ end }},
kubernetes/redis/values.yaml (2)

100-108: Consider enabling Redis-specific health probes.

The commented HTTP probes won't work for Redis since it doesn't expose an HTTP endpoint by default. For production readiness, consider using TCP or exec-based probes.

♻️ Suggested Redis-appropriate probes
livenessProbe:
  tcpSocket:
    port: 6379
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  exec:
    command:
    - redis-cli
    - ping
  initialDelaySeconds: 5
  periodSeconds: 5

141-142: Hardcoded timezone may not suit all deployments.

The timezone Asia/Ho_Chi_Minh is hardcoded. Consider making this more generic (e.g., UTC) or documenting that users should override this based on their requirements.

kubernetes/arangodb/templates/initjob.yaml (1)

43-52: Silent error suppression loses visibility into failures.

The || true pattern makes the database and collection creation idempotent but also hides actual failures (network issues, auth problems, etc.). Consider checking the response or logging the output for debugging.

♻️ Suggested improvement for better observability
          echo "Create database topology (if not exists)"
-          curl -sf -k -u "$ARANGO_USER:$ARANGO_PASS" \
+          RESULT=$(curl -s -k -u "$ARANGO_USER:$ARANGO_PASS" \
            -H "Content-Type: application/json" \
            -X POST https://$ARANGO_HOST/_db/_system/_api/database \
-            -d '{"name":"topology"}' || true
+            -d '{"name":"topology"}') || true
+          echo "Database creation result: $RESULT"

          echo "Create collection topo_list in _system"
-          curl -sf -k -u "$ARANGO_USER:$ARANGO_PASS" \
+          RESULT=$(curl -s -k -u "$ARANGO_USER:$ARANGO_PASS" \
            -H "Content-Type: application/json" \
            -X POST https://$ARANGO_HOST/_db/_system/_api/collection \
-            -d '{"name":"topo_list"}' || true
+            -d '{"name":"topo_list"}') || true
+          echo "Collection creation result: $RESULT"
kubernetes/topology-app/templates/service.yaml (1)

3-5: Use a release-scoped Service name to avoid collisions.

{{ .Chart.Name }} can conflict when multiple releases share a namespace. Prefer a fullname helper tied to the release.

♻️ Suggested change
-  name: {{ .Chart.Name }}
+  name: {{ include "common.names.fullname" . }}
kubernetes/traefik/templates/job-generate-cert.yaml (1)

3-12: Make the Job name release-scoped.

Hard-coded generate-cert can collide across releases in the same namespace. Use a fullname-based suffix.

♻️ Suggested change
-  name: generate-cert
+  name: {{ include "common.names.fullname" . }}-generate-cert
   labels:
-    job-name: generate-cert
+    job-name: {{ include "common.names.fullname" . }}-generate-cert
@@
     metadata:
       labels:
-        job-name: generate-cert
+        job-name: {{ include "common.names.fullname" . }}-generate-cert
kubernetes/redis/templates/deployment.yaml (2)

44-46: Make imagePullSecrets configurable or conditional.

Hard-coding ghcr-pull-secret can break installs where the secret is absent or named differently. Consider wiring this through values.

♻️ Suggested change
-      imagePullSecrets:
-      - name: ghcr-pull-secret
+      {{- if .Values.imagePullSecrets }}
+      imagePullSecrets:
+        {{- toYaml .Values.imagePullSecrets | nindent 8 }}
+      {{- end }}

79-108: Guard volumeMounts/volumes to avoid null lists.

If no shared or chart-local volumes are configured, the template renders volumeMounts: / volumes: with a null value, which can fail validation. Wrap those blocks or default to empty lists.

♻️ Suggested change
-          volumeMounts:
-            # shareVolume: VolumeMount
-            {{- if and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled }}
-            {{- range .Values.global.sharedPersistenceVolume }}
-            {{- if has $.Chart.Name .shareFor }}
-            - name: {{ .volumeName }}
-              mountPath: {{ .path }}
-            {{- end }}
-            {{- end }}
-            {{- end }}
-            # unshareVolume(volumeMount): define in the chart's values.yaml
-            {{- if .Values.volumeMounts }}
-            {{- toYaml .Values.volumeMounts | nindent 12 }}
-            {{- end }}
+          {{- if or (and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled) .Values.volumeMounts }}
+          volumeMounts:
+            # shareVolume: VolumeMount
+            {{- if and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled }}
+            {{- range .Values.global.sharedPersistenceVolume }}
+            {{- if has $.Chart.Name .shareFor }}
+            - name: {{ .volumeName }}
+              mountPath: {{ .path }}
+            {{- end }}
+            {{- end }}
+            {{- end }}
+            # unshareVolume(volumeMount): define in the chart's values.yaml
+            {{- if .Values.volumeMounts }}
+            {{- toYaml .Values.volumeMounts | nindent 12 }}
+            {{- end }}
+          {{- end }}
@@
-      volumes:
-      # shareVolume(volume): define in the global values
-        {{- if and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled }}
-        {{- range .Values.global.sharedPersistenceVolume }}
-        {{- if has $.Chart.Name .shareFor }}
-        - name: {{ .volumeName }}
-          persistentVolumeClaim:
-            claimName: {{ .pvcName }}
-        {{- end }}
-        {{- end }}
-        {{- end }}
-      # unshareVolume(volume): define in the chart's values.yaml
-        {{- if .Values.volumes }}
-        {{- toYaml .Values.volumes | nindent 8 }}
-        {{- end }}
+      {{- if or (and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled) .Values.volumes }}
+      volumes:
+      # shareVolume(volume): define in the global values
+        {{- if and .Values.global.sharedPersistenceVolume .Values.global.sharedVolume.enabled }}
+        {{- range .Values.global.sharedPersistenceVolume }}
+        {{- if has $.Chart.Name .shareFor }}
+        - name: {{ .volumeName }}
+          persistentVolumeClaim:
+            claimName: {{ .pvcName }}
+        {{- end }}
+        {{- end }}
+        {{- end }}
+      # unshareVolume(volume): define in the chart's values.yaml
+        {{- if .Values.volumes }}
+        {{- toYaml .Values.volumes | nindent 8 }}
+        {{- end }}
+      {{- end }}
kubernetes/topology-web/templates/deployment.yaml (1)

76-79: Use a release-scoped ConfigMap name to avoid collisions.

topology-web-env is static; multiple releases in one namespace will collide unless the ConfigMap template is also static by design. Prefer a fullname-based name and update the ConfigMap template accordingly.

♻️ Suggested change
-            - configMapRef:
-                name: topology-web-env
+            - configMapRef:
+                name: {{ include "common.names.fullname" . }}-env
kubernetes/topology-web/templates/service.yaml (1)

37-45: Guard NodePort rendering when nodePortEnabled is true.
If nodePortEnabled is true but ports[].nodePort is missing, Helm will emit an invalid Service. Consider enforcing with required or only rendering when the value is set.

♻️ Suggested guard
       {{- if and (eq $.Values.service.type "NodePort") $.Values.service.nodePortEnabled }}
-      nodePort: {{ .nodePort }}
+      nodePort: {{ required "service.ports[].nodePort is required when nodePortEnabled is true" .nodePort }}
       {{- end }}
kubernetes/topology-app/templates/deployment.yaml (1)

113-166: Consider running migrations only once to avoid races.
Migrations run in the init container and in both server and worker containers. If migrations aren’t idempotent or locked, this can race or corrupt state. If init handles migrations, drop -migrate from the app containers.

♻️ Suggested change (run migrations in init only)
-              ../server -server -migrate
+              ../server -server
...
-              ../server -worker -migrate
+              ../server -worker
kubernetes/topology-web/values.yaml (1)

24-30: Avoid defaulting to latest for the image tag.
latest makes deployments non-deterministic. Prefer a pinned tag or rely on the chart’s appVersion (if supported by your image helper).

♻️ Proposed fix
-  tag: "latest" # CHANGE THIS TO YOUR IMAGE TAG. "LATEST" IS NOT RECOMMENDED. 
+  tag: "" # pin an immutable tag (or let the chart use appVersion)
kubernetes/topology-app/values.yaml (1)

28-34: Avoid defaulting to latest for the image tag.
Pin a version to keep deployments reproducible.

♻️ Proposed fix
-  tag: "latest" # CHANGE THIS TO YOUR IMAGE TAG. "LATEST" IS NOT RECOMMENDED. 
+  tag: "" # pin an immutable tag (or let the chart use appVersion)

Comment on lines +20 to +33
mode: Single
single:
  single:
    resources: {}
    volumeClaimTemplate:
      spec:
        storageClassName: local-path
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 8Gi
        volumeMode: Filesystem
      status: {}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, check if the template file exists and search for Values.single references
if [ -f "kubernetes/arangodb/templates/arangodeployment.yaml" ]; then
  echo "=== References to Values.single in arangodeployment.yaml ==="
  rg -n "Values\.single" kubernetes/arangodb/templates/arangodeployment.yaml -A 2 -B 2
else
  echo "Template file not found at expected location"
  # Try to find it
  echo "=== Searching for arangodeployment templates ==="
  find kubernetes/arangodb -name "*arangodeployment*" -type f
fi

Repository: svtechnmaa/charts

Length of output: 385


🏁 Script executed:

#!/bin/bash
# Also check the values.yaml structure to understand the hierarchy
echo "=== Values.yaml structure around single configuration ==="
cat -n kubernetes/arangodb/values.yaml | head -50

Repository: svtechnmaa/charts

Length of output: 1692


🏁 Script executed:

#!/bin/bash
# Check all template files that might reference the single configuration
echo "=== All template files in kubernetes/arangodb ==="
find kubernetes/arangodb/templates -type f -name "*.yaml" -o -name "*.yml" | head -20

Repository: svtechnmaa/charts

Length of output: 247


Flatten the nested single.single structure in values.yaml.

The template correctly references .Values.single (line 24 of arangodeployment.yaml), but the values are incorrectly nested under single.single. This will cause the rendered spec to contain an extra nested single: key. The structure should match the flat layout of the cluster configuration: move the contents of single.single directly under single:.

Current incorrect structure
single:
  single:
    resources: {}
    volumeClaimTemplate:
      spec:
        ...

Should be:

single:
  resources: {}
  volumeClaimTemplate:
    spec:
      ...
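With the extra nesting left in place, the rendered ArangoDeployment spec would carry the stray inner key, roughly:

spec:
  mode: Single
  single:
    single:            # stray key produced by the extra nesting level
      resources: {}
      volumeClaimTemplate:
        ...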
🤖 Prompt for AI Agents
In `@kubernetes/arangodb/values.yaml` around lines 20 - 33: the values.yaml
currently nests the pod spec under single.single, which doesn't match the
templates that reference .Values.single; open kubernetes/arangodb/values.yaml
and flatten the structure by moving the contents of the inner "single" up one
level into the top-level "single" key (remove the extra nested single key so
resources, volumeClaimTemplate, etc. sit directly under single), ensuring the
.Values.single references in arangodeployment.yaml resolve correctly.

Comment on lines +92 to +95
auth:
  secretName: arango-root-pwd
  rootUser: root
  rootPassword: ""

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check for rootPassword/secretName handling in the secret template
rg -n "rootPassword|secretName" kubernetes/arangodb/templates/secret.yaml

Repository: svtechnmaa/charts

Length of output: 163


🏁 Script executed:

#!/bin/bash
# Read the complete secret.yaml template
cat kubernetes/arangodb/templates/secret.yaml

Repository: svtechnmaa/charts

Length of output: 273


🏁 Script executed:

#!/bin/bash
# Search for values schema files
fd -name "values.schema.json" -o -name "values.schema.yaml" kubernetes/arangodb/

Repository: svtechnmaa/charts

Length of output: 291


🏁 Script executed:

#!/bin/bash
# Search for values schema files with correct syntax
find kubernetes/arangodb -name "values.schema*" -type f

Repository: svtechnmaa/charts

Length of output: 43


🏁 Script executed:

#!/bin/bash
# Check lines around 92-95 in values.yaml
sed -n '85,100p' kubernetes/arangodb/values.yaml

Repository: svtechnmaa/charts

Length of output: 301


Add validation to prevent empty rootPassword from creating a blank root credential.

The template directly outputs {{ .Values.auth.rootPassword | b64enc | quote }} without validation. When rootPassword: "" in values.yaml, this creates a secret with an empty password. Either enforce a non-empty rootPassword requirement, or make the password field conditional to skip it when empty and use only the existing secret.
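A minimal sketch of the fail-fast option, using Helm's built-in required function (the data key name is an assumption; the b64enc pipeline comes from the existing template):

data:
  # abort the render instead of writing a blank credential
  password: {{ required "auth.rootPassword must not be empty" .Values.auth.rootPassword | b64enc | quote }}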

🤖 Prompt for AI Agents
In `@kubernetes/arangodb/values.yaml` around lines 92 - 95: the Helm chart
currently encodes and writes auth.rootPassword unconditionally (the template
uses {{ .Values.auth.rootPassword | b64enc | quote }}), which lets an empty
password become a blank secret; update the chart templates to validate or
conditionally render the password: either enforce a non-empty value by checking
.Values.auth.rootPassword and failing the render if it is empty, or wrap the
secret data entry in an if block that only includes the base64-encoded
rootPassword when .Values.auth.rootPassword is non-empty and otherwise relies
on the provided auth.secretName; modify the template that references
auth.rootPassword and add the conditional check or error path so an empty
rootPassword in values.yaml no longer creates a blank credential.

Comment on lines +39 to +44
  {
    "ID": 8,
    "Name": "Warning",
    "URL": "/public/router-big_warning.svg",
  }
]

⚠️ Potential issue | 🔴 Critical

Invalid JSON: Remove the trailing comma inside the last array element.

The trailing comma on line 42, after the "URL" property, makes this invalid JSON and will cause parsing errors when the seed data is consumed.

🐛 Fix the trailing comma
     {
       "ID": 8,
       "Name": "Warning",
-      "URL": "/public/router-big_warning.svg",
+      "URL": "/public/router-big_warning.svg"
     }
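One way to catch this class of error at render time is to round-trip the seed data through mustFromJson, which aborts the render on malformed JSON (a sketch; the include name "seed-icons" is hypothetical and must match the define in _seed-icons.tpl):

{{- /* fails the template render if the generated seed JSON is invalid */ -}}
{{- $icons := include "seed-icons" . | mustFromJson -}}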
🤖 Prompt for AI Agents
In `@kubernetes/topology-app/templates/seed-configs/_seed-icons.tpl` around lines
39 - 44: the JSON array in the template ends with a trailing comma inside the
last object (the one with "ID": 8, "Name": "Warning", "URL":
"/public/router-big_warning.svg"), which makes the JSON invalid; remove the
trailing comma after the "URL" property so the final element is valid JSON and
the array closes cleanly with ].

"ID": 1,
"Name": "Default",
"Order": 0,
"Payload": '{{ include "payload-templates" . | nindent 8 }}',

⚠️ Potential issue | 🔴 Critical

Invalid JSON: Single quotes are not valid JSON string delimiters.

JSON requires double quotes for string values. The single quotes around the Payload value will cause parsing errors.

🐛 Suggested fix using proper JSON escaping

If the payload needs to be a JSON string (escaped), consider:

-          "Payload": '{{ include "payload-templates" . | nindent 8 }}',
+          "Payload": {{ include "payload-templates" . | toJson }},

Or if Payload should be a nested JSON object (not a string):

-          "Payload": '{{ include "payload-templates" . | nindent 8 }}',
+          "Payload": {{ include "payload-templates" . | nindent 10 }},

The correct approach depends on whether the consuming application expects Payload to be a JSON string or a nested object.
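To make the difference concrete: if payload-templates renders {"zoom": 1} (an illustrative value), the two options emit:

"Payload": "{\"zoom\": 1}"   <- toJson: a JSON string, escaped
"Payload": {"zoom": 1}       <- nindent: a nested JSON object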

🤖 Prompt for AI Agents
In `@kubernetes/topology-app/templates/seed-configs/_seed-templates.tpl` at line
8: the template emits invalid JSON by wrapping the included payload in single
quotes in the "Payload" value; update the "Payload" line that references include
"payload-templates" so it emits valid JSON: if the consumer expects a JSON
string, serialize the included payload with a JSON-encoding helper such as
toJson (which supplies the surrounding double quotes itself); otherwise remove
the surrounding quotes so the included payload is emitted as a nested JSON
object; keep the include "payload-templates" call and adjust the
piping/escaping accordingly so the output is valid JSON.

@duchieu2k duchieu2k changed the title Feature/new topo fix: Update arangodb template Jan 27, 2026
@duchieu2k duchieu2k closed this Jan 27, 2026
@vyvuvivo vyvuvivo deleted the feature/new-topo branch January 27, 2026 03:51