Helm charts for maestro installation #449
base: main
Conversation
Walkthrough
Replace legacy OpenShift template deployments with Helm-based charts and Makefile targets; add full …
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Dev as Developer / CI (Makefile)
    participant Helm as Helm CLI
    participant KubeAPI as Kubernetes API
    participant Pod as Maestro Pod
    rect rgb(240,248,255)
    Dev->>Helm: make deploy / make deploy-agent (generate values.yaml)
    note right of Helm: values include image, broker, TLS, DB, consumerName
    end
    rect rgb(245,255,240)
    Helm->>KubeAPI: helm upgrade --install (render & apply manifests)
    KubeAPI->>KubeAPI: create/update Secrets, ServiceAccount, RBAC, CRD, Deployments, Services
    end
    rect rgb(255,249,240)
    KubeAPI->>Pod: schedule & start containers
    Pod->>KubeAPI: readiness/liveness probes
    KubeAPI-->>Helm: deployment status
    Helm-->>Dev: exit code + readiness result
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing touches
Actionable comments posted: 6
🧹 Nitpick comments (14)
charts/maestro-agent/Chart.yaml (1)
6-6: Consider pinning appVersion to a specific version. Using `"latest"` as the appVersion can lead to non-deterministic deployments and makes it difficult to track which version is running in production. Consider using a specific version tag or a template variable that can be overridden during packaging.

charts/maestro-server/Chart.yaml (1)

6-6: Consider pinning appVersion to a specific version. Using `"latest"` as the appVersion can lead to non-deterministic deployments and makes it difficult to track which version is running in production. Consider using a specific version tag or a template variable that can be overridden during packaging.

charts/maestro-server/templates/pubsub.yaml (1)
36-45: Consider using external secret management for GCP credentials. Storing GCP service account credentials in Helm values (line 44) poses a security risk, as values files may be committed to version control or stored insecurely.
Recommendations:
- Preferred for GKE: Use Workload Identity (already documented in the README) by annotating the service account instead of providing credentials
- Alternative: Use external secret management solutions like:
- Google Secret Manager with External Secrets Operator
- Sealed Secrets
- HashiCorp Vault
The current approach is acceptable for development/testing but should be avoided in production environments.
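As a sketch of the External Secrets Operator alternative mentioned above (resource names and the remote secret key are illustrative assumptions, not taken from this PR; it assumes the operator and a Google Secret Manager-backed SecretStore are already installed):

```yaml
# Hypothetical ExternalSecret that syncs the GCP service account key from
# Google Secret Manager into a Kubernetes Secret, so it never appears in
# Helm values. "gcp-secret-store" and "maestro-pubsub-sa-key" are assumed names.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: maestro-pubsub-creds
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: gcp-secret-store
    kind: SecretStore
  target:
    name: maestro-pubsub-creds   # Secret the chart would consume
  data:
    - secretKey: credentials.json
      remoteRef:
        key: maestro-pubsub-sa-key
```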
charts/maestro-agent/templates/clusterrole.yaml (1)
27-33: The `maestro-agent:execution` ClusterRole is intentionally empty but lacks documentation of its purpose. The `maestro-agent:execution` ClusterRole (line 30) has an empty rules list and is bound to the agent's ServiceAccount via ClusterRoleBinding, but there is no documentation or inline comments explaining whether this is a placeholder for future permissions, reserved for user customization, or intended for a specific use case. Adding a comment to the template (e.g., `# Placeholder for customizable execution permissions`) or updating the README with clarification would improve maintainability and prevent confusion.

test/e2e/setup/rosa/setup/agent.sh (1)
71-75: Consider adding validation before secret creation. The secret creation assumes certificate files exist but doesn't validate their presence. If certificate generation fails earlier (lines 43-48), the script will attempt to create a secret with missing files, leading to unclear error messages.
🔎 Proposed validation
 # Deploy Maestro agent
 oc create namespace maestro-agent || true
+
+# Validate certificate files exist
+if [[ ! -f "${certs_dir}/iot-ca.pem" ]] || \
+   [[ ! -f "${consumer_cert_dir}/${consumer_id}.crt" ]] || \
+   [[ ! -f "${consumer_cert_dir}/${consumer_id}.private.key" ]]; then
+  echo "Error: Required certificate files not found"
+  exit 1
+fi
+
 oc -n maestro-agent delete secrets maestro-agent-certs --ignore-not-found

test/e2e/setup/rosa/setup/maestro.sh (1)
151-164: Consider adding validation before secret creation. Similar to the agent script, certificate file existence should be validated before attempting secret creation to provide clearer error messages on failure.
charts/maestro-agent/README.md (1)
181-190: Consider adding a dedicated Security section. While line 187 mentions cluster-admin permissions, elevating this to a dedicated "Security Considerations" section would better highlight the RBAC implications, TLS requirements, and secret management best practices.
🔎 Suggested section structure
## Security Considerations

### RBAC Permissions
The agent is deployed with cluster-admin permissions by default...

### TLS Configuration
Production deployments should enable TLS for broker connections...

### Secret Management
Sensitive credentials (MQTT passwords, certificates) should be stored in Kubernetes secrets...

charts/maestro-server/values.yaml (2)
12-16: Consider changing default image tag from "latest". Using `tag: latest` as the default can lead to unpredictable deployments. Consider using a specific version or a placeholder like `v0.0.0` with documentation requiring explicit tag specification for production.
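A minimal values.yaml sketch of the pinned-tag recommendation (field names follow common Helm conventions and are assumptions, not copied from the chart):

```yaml
# Sketch: pin the image to a released tag instead of "latest".
# Registry/repository values are placeholders - adjust to your environment.
image:
  registry: quay.io          # assumption
  repository: maestro        # assumption
  tag: v0.1.0                # pinned release instead of "latest"
  pullPolicy: IfNotPresent
```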
111-126: Document that embedded PostgreSQL is for development only. The embedded PostgreSQL configuration includes a plaintext password, which is acceptable for local development but should never be used in production. Consider adding a comment explicitly stating this is for development/testing purposes only.
🔎 Suggested comment addition
-# PostgreSQL deployment (optional, for development)
+# PostgreSQL deployment (for development/testing only - DO NOT use in production)
 postgresql:
   enabled: false

charts/maestro-server/templates/service.yaml (2)
72-74: Healthcheck service port definition is inconsistent with other services. The healthcheck service is missing `protocol: TCP` and `name: healthcheck` fields that are present in other service definitions. While `TCP` is the default protocol, adding these fields would improve consistency and clarity.
🔎 Proposed fix for consistency
 ports:
-  - port: {{ .Values.service.healthcheck.port }}
-    targetPort: {{ .Values.service.healthcheck.port }}
+  - name: healthcheck
+    port: {{ .Values.service.healthcheck.port }}
+    targetPort: {{ .Values.service.healthcheck.port }}
+    protocol: TCP
90-94: Consider making grpc-broker port configurable. The grpc-broker service port is hardcoded to `8091`. For consistency with other services that use configurable ports, consider adding a value like `.Values.service.grpcBroker.port`.

charts/maestro-agent/values.yaml (1)
15-19: Consider using a pinned image tag instead of `latest`. The default image tag is set to `latest`, which can lead to unpredictable deployments since the actual image version may change between deployments. For production use, consider defaulting to a specific version tag or documenting that users should override this value.

charts/maestro-server/templates/postgresql.yaml (1)
78-90: Liveness probe has a high initial delay. The liveness probe has `initialDelaySeconds: 120` (2 minutes), which is quite conservative. While this allows PostgreSQL time to initialize, it also means Kubernetes won't detect a failed container for at least 2 minutes after startup. Consider whether this aligns with your availability requirements. The readiness probe at 5 seconds is appropriate for traffic routing decisions.
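One way to reconcile slow startup with fast failure detection is a startupProbe in place of the long initial delay; a hedged sketch (the `pg_isready` command and user are assumptions based on common PostgreSQL charts, not this PR's template):

```yaml
# Sketch: a startupProbe absorbs the slow first boot, so the livenessProbe
# can run with no initial delay once the container is up.
startupProbe:
  exec:
    command: ["pg_isready", "-U", "maestro"]   # assumed probe command
  periodSeconds: 5
  failureThreshold: 24        # 24 x 5s = allows up to 120s for initial startup
livenessProbe:
  exec:
    command: ["pg_isready", "-U", "maestro"]   # assumed probe command
  initialDelaySeconds: 0      # startupProbe already gated the slow boot
  periodSeconds: 10
```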
test/setup/deploy_server.sh (1)
62-62: Consider clarifying the conditional logic. The inline bash substitution `$( [ "$enable_istio" = "true" ] && echo "false" || echo "true" )` correctly inverts the Istio flag but could be more readable with a comment or variable.
🔎 Optional refactor for clarity
+# HTTPS is disabled when Istio is enabled (Istio handles mTLS)
+https_enabled=$( [ "$enable_istio" = "true" ] && echo "false" || echo "true" )
+
 # Server configuration
 server:
   https:
-    enabled: $( [ "$enable_istio" = "true" ] && echo "false" || echo "true" )
+    enabled: ${https_enabled}
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (53)
- HELM_CHARTS.md
- Makefile
- charts/README.md
- charts/maestro-agent/Chart.yaml
- charts/maestro-agent/README.md
- charts/maestro-agent/templates/_helpers.tpl
- charts/maestro-agent/templates/clusterrole.yaml
- charts/maestro-agent/templates/clusterrolebinding.yaml
- charts/maestro-agent/templates/crd.yaml
- charts/maestro-agent/templates/deployment.yaml
- charts/maestro-agent/templates/role.yaml
- charts/maestro-agent/templates/rolebinding.yaml
- charts/maestro-agent/templates/secret.yaml
- charts/maestro-agent/templates/serviceaccount.yaml
- charts/maestro-agent/values.yaml
- charts/maestro-server/Chart.yaml
- charts/maestro-server/README.md
- charts/maestro-server/templates/_helpers.tpl
- charts/maestro-server/templates/clusterrole.yaml
- charts/maestro-server/templates/clusterrolebinding.yaml
- charts/maestro-server/templates/deployment.yaml
- charts/maestro-server/templates/grpc.yaml
- charts/maestro-server/templates/mqtt.yaml
- charts/maestro-server/templates/postgresql.yaml
- charts/maestro-server/templates/pubsub.yaml
- charts/maestro-server/templates/route.yaml
- charts/maestro-server/templates/service.yaml
- charts/maestro-server/templates/serviceaccount.yaml
- charts/maestro-server/values.yaml
- templates/README.md
- templates/agent-template-aro-hcp.yml
- templates/agent-template-gcp.yml
- templates/agent-template-rosa.yml
- templates/agent-template.yml
- templates/agent-tls-template.yml
- templates/db-template.yml
- templates/db-tls-template.yml
- templates/mqtt-template.yml
- templates/mqtt-tls-template.yml
- templates/route-template.yml
- templates/route-tls-template.yml
- templates/service-template-aro-hcp.yml
- templates/service-template-gcp.yml
- templates/service-template-rosa.yml
- templates/service-template.yml
- templates/service-tls-template.yml
- test/e2e/setup/gcp/setup/agent.sh
- test/e2e/setup/gcp/setup/maestro.sh
- test/e2e/setup/rosa/setup/agent.sh
- test/e2e/setup/rosa/setup/maestro.sh
- test/setup/deploy_agent.sh
- test/setup/deploy_server.sh
- test/setup/env_setup.sh
💤 Files with no reviewable changes (17)
- templates/mqtt-template.yml
- templates/mqtt-tls-template.yml
- templates/agent-template-rosa.yml
- templates/service-template.yml
- templates/agent-template.yml
- templates/README.md
- templates/service-template-gcp.yml
- templates/route-template.yml
- templates/agent-template-gcp.yml
- templates/route-tls-template.yml
- templates/service-template-aro-hcp.yml
- templates/db-tls-template.yml
- templates/service-tls-template.yml
- templates/agent-template-aro-hcp.yml
- templates/service-template-rosa.yml
- templates/db-template.yml
- templates/agent-tls-template.yml
🧰 Additional context used
🪛 checkmake (0.2.2)
Makefile
[warning] 488-488: Missing required phony target "all"
(minphony)
🪛 YAMLlint (1.37.1)
charts/maestro-agent/templates/crd.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/role.yaml
[error] 8-8: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/route.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/mqtt.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/postgresql.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/clusterrole.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/serviceaccount.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/rolebinding.yaml
[error] 8-8: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/clusterrolebinding.yaml
[error] 7-7: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/service.yaml
[error] 7-7: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/clusterrolebinding.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/secret.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/grpc.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/pubsub.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/clusterrole.yaml
[error] 7-7: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/serviceaccount.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Red Hat Konflux / maestro-on-pull-request
- GitHub Check: Red Hat Konflux / maestro-e2e-on-pull-request
- GitHub Check: e2e-broadcast-subscription
- GitHub Check: e2e-grpc-broker
- GitHub Check: e2e
- GitHub Check: upgrade
- GitHub Check: e2e-with-istio
🔇 Additional comments (48)
HELM_CHARTS.md (1)
1-123: LGTM! The documentation is comprehensive and well-structured, providing clear guidance on using Helm charts for Maestro deployment, configuration options, and troubleshooting.
charts/maestro-server/templates/grpc.yaml (1)
1-13: LGTM! The conditional gRPC secret template is correctly structured with appropriate guards and includes the necessary configuration for the gRPC message broker.
charts/maestro-server/templates/clusterrole.yaml (1)
1-13: LGTM! The ClusterRole defines appropriate minimal permissions for authentication and authorization checks, following the principle of least privilege.
charts/maestro-agent/templates/role.yaml (1)
1-32: LGTM! The Role definitions are well-structured with appropriate permissions for agent operations including leader election, event recording, and extension API server authentication access. The inline comments clearly explain the purpose of each permission set.
test/e2e/setup/gcp/setup/maestro.sh (2)
119-172: LGTM! Well-structured Helm values generation. The Helm values file generation correctly structures the configuration for the maestro-server deployment. The image registry parsing using bash parameter expansion is appropriate, and the `sslMode: disable` on line 137 is acceptable given that Cloud SQL proxy is enabled (lines 140-141), which provides encrypted connectivity.
175-182: LGTM! Proper Helm deployment with error handling. The Helm deployment command is well-structured with appropriate flags and error handling.
charts/maestro-agent/templates/serviceaccount.yaml (1)
1-10: LGTM! Standard ServiceAccount template. The ServiceAccount template follows Helm best practices with proper use of template helpers for name and labels, and conditional rendering of annotations.
Note: The YAMLlint syntax error is a false positive—it doesn't recognize Helm template directives.
charts/maestro-server/templates/clusterrolebinding.yaml (1)
1-14: LGTM! Standard ClusterRoleBinding template. The ClusterRoleBinding correctly binds the ClusterRole to the ServiceAccount using appropriate template helpers and namespace scoping.
charts/maestro-server/README.md (2)
159-178: LGTM! Clear documentation of development-only components. The PostgreSQL and MQTT sections are properly marked as "Development Only" with the default password appropriately documented for non-production use.
206-231: LGTM! Comprehensive Pub/Sub documentation with security guidance. The documentation provides clear guidance on Google Cloud Pub/Sub setup and appropriately recommends Workload Identity (lines 226-231) as a more secure alternative to embedding credentials, which aligns with GCP best practices.
test/e2e/setup/gcp/setup/agent.sh (1)
63-96: LGTM! Well-structured Helm-based agent deployment. The agent setup script properly mirrors the server deployment approach with:
- Appropriate Helm values generation for Pub/Sub configuration
- Workload Identity annotation for secure GCP access (line 72)
- Proper error handling throughout the deployment flow
test/e2e/setup/rosa/setup/agent.sh (1)
98-105: LGTM! Helm deployment follows best practices. The use of `helm upgrade --install` ensures idempotent deployments, and the wait condition properly validates the deployment status.

charts/maestro-server/templates/serviceaccount.yaml (1)
1-10: LGTM! ServiceAccount template follows Helm best practices. The template correctly uses Helm helpers for naming and labeling, with proper conditional rendering of annotations. The YAMLlint syntax error is a false positive—the `{{-` syntax is valid Helm templating.

charts/maestro-agent/templates/clusterrolebinding.yaml (1)
1-16: LGTM! Standard ClusterRoleBinding structure. The agent and execution bindings follow proper Helm templating patterns. YAMLlint syntax errors are false positives for Helm template directives.
Also applies to: 33-47
charts/maestro-agent/templates/rolebinding.yaml (2)
18-32: Verify necessity of kube-system RoleBinding. The agent requires a RoleBinding in the `kube-system` namespace for extension-apiserver functionality. Ensure this is documented in the chart's security considerations, as kube-system access warrants explicit justification.
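For context, the typical reason an extension-apiserver component needs a kube-system RoleBinding is to read the `extension-apiserver-authentication` ConfigMap via the built-in `extension-apiserver-authentication-reader` Role; a sketch of that pattern (the binding and subject names are assumptions, not copied from this chart):

```yaml
# Hypothetical binding granting the agent read access to the
# extension-apiserver-authentication ConfigMap in kube-system.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: maestro-agent-auth-reader        # assumed name
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  # Built-in Role that exists in kube-system for exactly this purpose.
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: maestro-agent                  # assumed ServiceAccount name
    namespace: maestro-agent
```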
1-17: LGTM! Standard RoleBinding structure. The namespace-scoped RoleBinding follows proper Helm patterns.
charts/maestro-agent/README.md (1)
1-219: LGTM! Comprehensive and well-structured documentation. The README provides clear installation instructions, thorough configuration reference, practical examples, and helpful troubleshooting guidance. The documentation aligns well with the Helm chart implementation.
charts/maestro-agent/templates/secret.yaml (3)
1-21: LGTM! gRPC secret template is well-structured. The conditional rendering ensures clean configuration and the use of `stringData` for YAML content is appropriate.
22-51: LGTM! MQTT secret template handles credentials securely. The conditional rendering of credentials and proper topic path construction with consumer name interpolation are correctly implemented.
52-89: LGTM! Pub/Sub secret template properly handles nested configuration. The multi-level conditional rendering and separate handling of `credentialsJSON` as a distinct secret key are correctly implemented.

charts/maestro-server/values.yaml (3)
32-51: LGTM! Database configuration follows security best practices. The use of `sslMode: verify-full` ensures secure database connections, and the documented secret key expectations provide clear guidance. The Cloud SQL Proxy integration is well-designed for GCP deployments.
53-77: LGTM! Server and broker configuration is flexible and well-structured. The multi-protocol support, separate metrics endpoint, and configurable timeouts demonstrate good architectural practices.
1-173: LGTM! Comprehensive values file with sensible defaults. The configuration surface is well-organized, properly documented, and provides flexibility for different deployment scenarios while maintaining secure defaults for production use.
test/e2e/setup/rosa/setup/maestro.sh (1)
185-192: Clarify MQTT configuration and Secret creation. With `mqtt.enabled: false`, the Helm chart skips the mqtt.yaml template entirely—which includes the embedded Mosquitto Deployment AND the Secret containing the broker configuration. However, the deployment references `{{ .Values.messageBroker.secretName }}`, so this Secret must exist. Verify that the Secret is created externally (via separate kubectl command or script step) or consider setting `mqtt.enabled: true` if these connection details should be used. The current values suggest using AWS IoT as an external broker, which is valid, but the automatic Secret creation is disabled.

charts/maestro-server/templates/deployment.yaml (2)
1-60: Well-structured deployment with proper conditional volumes. The deployment template correctly handles multiple broker types (mqtt, grpc, pubsub) with conditional volume definitions. The volume structure with optional secrets for TLS certificates is appropriate.
Note: The YAMLlint error on line 6 is a false positive — `{{-` is valid Helm/Go template syntax for whitespace trimming.
61-93: Init containers properly configured for migration and optional Cloud SQL Proxy sidecar. The migration init container correctly mounts database secrets and runs the migration command. The Cloud SQL Proxy uses `restartPolicy: Always`, which is the Kubernetes 1.28+ native sidecar pattern — appropriate for a database proxy that should run alongside the main container.

charts/maestro-server/templates/service.yaml (1)
1-60: Service definitions are well-structured with proper conditional logic. The API, metrics, and gRPC services correctly implement optional nodePort configuration using the `ne (int ...) 0` pattern. Labels and selectors are properly templated via helpers. Note: The YAMLlint error on line 7 is a false positive — `{{-` is valid Helm template syntax.

charts/maestro-agent/values.yaml (1)
1-62: Well-documented values file with comprehensive broker configuration. The values file provides clear defaults and helpful comments for all three supported message broker types (MQTT, gRPC, Pub/Sub). The structure is logical with proper grouping of related configuration options.
charts/maestro-server/templates/mqtt.yaml (2)
1-68: MQTT deployment template correctly handles TLS conditional configuration. The template properly gates TLS-related volume mounts, config options, and secret fields based on `.Values.mqtt.tls.enabled`. The ConfigMap dynamically adjusts mosquitto.conf for TLS vs anonymous modes. Note: The YAMLlint error on line 1 is a false positive — `{{-` is valid Helm template syntax.
88-112: Secret configuration looks correct. The broker configuration secret properly includes optional username/password and TLS fields conditionally. The topics configuration follows the expected pattern.
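To illustrate the conditional-fields pattern the comment describes (the Secret name and value keys here are invented for illustration, not the chart's actual keys):

```yaml
# Hypothetical sketch of a broker-config Secret that only renders
# credential and TLS entries when the corresponding values are set.
apiVersion: v1
kind: Secret
metadata:
  name: maestro-mqtt-config          # assumed name
stringData:
  host: {{ .Values.mqtt.host | quote }}
  {{- if .Values.mqtt.username }}
  username: {{ .Values.mqtt.username | quote }}
  password: {{ .Values.mqtt.password | quote }}
  {{- end }}
  {{- if .Values.mqtt.tls.enabled }}
  # Path where the TLS volume is mounted (assumed)
  caFile: /secrets/mqtt-certs/ca.crt
  {{- end }}
```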
charts/maestro-server/templates/postgresql.yaml (2)
105-133: Secrets properly separate credentials from connection details. The template creates two secrets: one for PostgreSQL container environment variables (`-creds`) and another for application consumption with connection details. This separation is a good practice for managing database credentials.
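The two-secret split might look roughly like this (names, keys, and the placeholder password are illustrative assumptions, not copied from the template):

```yaml
# Sketch: Secret consumed by the PostgreSQL container as env vars.
apiVersion: v1
kind: Secret
metadata:
  name: maestro-postgresql-creds     # assumed "-creds" secret
stringData:
  POSTGRESQL_USER: maestro
  POSTGRESQL_PASSWORD: changeme      # dev-only placeholder
---
# Sketch: Secret consumed by the application with full connection details.
apiVersion: v1
kind: Secret
metadata:
  name: maestro-db                   # assumed name
stringData:
  db.host: maestro-postgresql
  db.port: "5432"
  db.name: maestro
  db.user: maestro
  db.password: changeme              # dev-only placeholder
```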
1-104: PostgreSQL deployment template is well-structured with proper conditional persistence. The template correctly handles:
- Optional PVC creation based on `.Values.postgresql.persistence.enabled`
- Conditional volume mounts only when persistence is enabled
- Proper probe configuration for database health checks
Note: The YAMLlint error on line 1 is a false positive — `{{-` is valid Helm template syntax.

test/setup/env_setup.sh (3)
125-133: LGTM: HTTPS certificate generation is correctly implemented. The certificate generation using the `step` CLI with multiple SANs is appropriate for test environments. The `--no-password --insecure` flags are acceptable here since this is explicitly for testing/development.
171-171: Clear migration marker for Helm-based deployment. The comment appropriately indicates that MQTT broker deployment responsibility has shifted to the Helm chart in `deploy_server.sh`, maintaining clarity about the deployment flow.
199-199: Consistent with Helm migration approach. The comment clearly indicates that database deployment is now handled by the Helm chart, consistent with the overall migration strategy.
charts/maestro-agent/templates/_helpers.tpl (1)
1-64: LGTM: Standard Helm helper templates. The helper templates follow Helm best practices with proper naming, labeling, and image construction logic. The truncation to 63 characters and trailing dash trimming align with Kubernetes resource naming constraints.
test/setup/deploy_server.sh (2)
153-158: LGTM: Helm deployment command is correctly structured. The `helm upgrade --install` pattern with `--wait` and `--timeout` is appropriate. The 5-minute timeout should be sufficient for test environments.
167-171: Correct protocol selection based on Istio configuration. The endpoint protocol correctly uses `http://` when Istio is enabled (as Istio handles mTLS) and `https://` otherwise, consistent with the server configuration at line 62.
1-45: LGTM: Deployment template correctly implements conditional volume mounts. The conditional logic for MQTT and gRPC certificate mounts is correct, mounting secrets only when the respective broker type is configured with TLS.
Note: The YAMLlint syntax error at line 6 is a false positive - this is valid Helm template syntax for including labels.
46-59: Conditional volumes correctly mirror volume mounts.The volume definitions properly match the conditional volume mounts, ensuring certificates are available when TLS is enabled for the respective broker type.
charts/maestro-server/templates/_helpers.tpl (1)
1-64: LGTM: Standard Helm helper templates for server chart. The helper templates follow the same best practices as the agent chart, providing consistent naming, labeling, and image construction patterns.
Makefile (4)
130-135: Clear documentation of Helm-based deployment targets. The updated help text clearly indicates that server and agent deployments now use Helm, making the migration transparent to users.
291-292: Helpful migration documentation. The comment clearly explains the shift from template generation to Helm charts, providing guidance for users familiar with the old approach.
338-354: Good validation for agent deployment. The `deploy-agent` target correctly validates that `CONSUMER_NAME` is set before proceeding, preventing deployment failures with a clear error message.
469-488: LGTM: Useful Helm chart utility targets. The lint, package, and template targets provide helpful utilities for chart development and validation. The static analysis warning about a missing "all" target is a style preference that doesn't apply to this project's structure.
test/setup/deploy_agent.sh (3)
47-91: LGTM: Helm values file generation with proper MQTT configuration. The values file generation correctly handles MQTT configuration with conditional TLS settings, maintaining proper YAML structure for the Helm chart.
94-109: Consistent gRPC broker configuration. The gRPC configuration follows the same pattern as MQTT, with proper conditional TLS handling and correct YAML structure.
112-117: LGTM: Agent Helm deployment command is correctly structured. The deployment command follows the same pattern as the server deployment, using `helm upgrade --install` with appropriate wait and timeout settings.
{{- if .Values.route.tls.enabled }}
tls:
  termination: {{ .Values.route.tls.termination }}
  insecureEdgeTerminationPolicy: {{ .Values.route.tls.insecureEdgeTerminationPolicy }}
{{- end }}
🛠️ Refactor suggestion | 🟠 Major
Validate TLS termination values.
The TLS termination settings are used directly from values without validation. Invalid values could result in failed Route creation.
🔎 Proposed validation
{{- if .Values.route.tls.enabled }}
tls:
+ {{- if not (has .Values.route.tls.termination (list "edge" "passthrough" "reencrypt")) }}
+ {{- fail "route.tls.termination must be one of: edge, passthrough, reencrypt" }}
+ {{- end }}
termination: {{ .Values.route.tls.termination }}
+ {{- if and .Values.route.tls.insecureEdgeTerminationPolicy (not (has .Values.route.tls.insecureEdgeTerminationPolicy (list "Allow" "Redirect" "None" ""))) }}
+ {{- fail "route.tls.insecureEdgeTerminationPolicy must be one of: Allow, Redirect, None, or empty" }}
+ {{- end }}
insecureEdgeTerminationPolicy: {{ .Values.route.tls.insecureEdgeTerminationPolicy }}
{{- end }}Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In charts/maestro-server/templates/route.yaml around lines 15–19, the TLS
termination values are used directly; validate .Values.route.tls.termination
against the allowed OpenShift values (edge, passthrough, reencrypt) and
.Values.route.tls.insecureEdgeTerminationPolicy against (Allow, Disable,
Redirect). Update the template to either enforce allowed values by checking
membership and calling the Helm fail function with a clear error message when
invalid, or apply a safe default via the default function if the provided value
is missing/invalid; implement the check using a small allowed-list and
contains/if logic so invalid values cause a clear template error or fallback to
a safe default.
Actionable comments posted: 2
🧹 Nitpick comments (5)
test/setup/deploy_server.sh (2)
101-104: Hardcoded database credentials for testing. While acceptable for e2e/testing purposes, consider adding a comment clarifying these credentials are for local testing only and should never be used in production-like environments.
164-165: Address TODO: Replace sleep with health check. The hardcoded `sleep 30` is fragile - it may be too short on slower systems or unnecessarily long on fast ones. Consider implementing an actual health check loop using the `/health` or `/healthz` endpoint. Would you like me to generate a health check polling script that waits for the service to become ready, or open an issue to track this improvement?
🔎 Suggested health check implementation
-# TODO use maestro service health check to ensure the service ready
-sleep 30 # wait 30 seconds for the service ready
+# Wait for maestro service to become ready via health check
+echo "Waiting for maestro service to become ready..."
+max_attempts=60
+attempt=0
+while [ $attempt -lt $max_attempts ]; do
+  if curl -s -o /dev/null -w "%{http_code}" "http://127.0.0.1:30083/healthcheck" | grep -q "200"; then
+    echo "Maestro service is ready"
+    break
+  fi
+  attempt=$((attempt + 1))
+  sleep 1
+done
+
+if [ $attempt -eq $max_attempts ]; then
+  echo "Timed out waiting for maestro service"
+  exit 1
+fi

charts/maestro-server/templates/deployment.yaml (3)
1-4: Consider templating the deployment name. The deployment name is hardcoded as "maestro", which may conflict with other releases in the same namespace or limit flexibility. Consider using a helper template like `{{ include "maestro-server.fullname" . }}` to allow customization and avoid naming collisions.
🔎 Proposed refactor
 apiVersion: apps/v1
 kind: Deployment
 metadata:
-  name: maestro
+  name: {{ include "maestro-server.fullname" . }}
   labels: {{- include "maestro-server.labels" . | nindent 4 }}
26-26: Make resource names configurable for better reusability. Several ConfigMap and Secret names are hardcoded, which limits flexibility for different environments or release names. Consider making these configurable via values.yaml with sensible defaults.
Affected resources:
- Line 26: `maestro-logging-config` (ConfigMap)
- Line 41: `maestro-server-certs` (MQTT certs)
- Line 47: `maestro-grpc-broker-cert` (gRPC certs)
- Line 53: `maestro-pubsub-creds` (PubSub creds)
- Line 59: `maestro-https-certs` (HTTPS certs)
Example: Making the logging ConfigMap name configurable
In values.yaml, add:
logging:
  configMapName: maestro-logging-config
Then update the template:
 - name: logging-config
   configMap:
-    name: maestro-logging-config
+    name: {{ .Values.logging.configMapName }}
     optional: true
Also applies to: 41-41, 47-47, 53-53, 59-59
66-66: Inconsistent image pull policy configuration.

The cloud-sql-proxy container hardcodes `imagePullPolicy: IfNotPresent`, while the migration container (line 76) and main container (line 97) use the templated `.Values.image.pullPolicy`. Consider making the cloud-sql-proxy pull policy configurable as well for consistency.

🔎 Proposed refactor

In values.yaml, add a field like:

```yaml
database:
  cloudSqlProxy:
    imagePullPolicy: IfNotPresent
```

Then update the template:

```diff
 - name: cloud-sql-proxy
   restartPolicy: Always
   image: {{ .Values.database.cloudSqlProxy.image }}
-  imagePullPolicy: IfNotPresent
+  imagePullPolicy: {{ .Values.database.cloudSqlProxy.imagePullPolicy }}
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (2)
- charts/maestro-server/templates/deployment.yaml
- test/setup/deploy_server.sh
🧰 Additional context used
🪛 YAMLlint (1.37.1)
charts/maestro-server/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Red Hat Konflux / maestro-on-pull-request
- GitHub Check: Red Hat Konflux / maestro-e2e-on-pull-request
- GitHub Check: e2e
- GitHub Check: e2e-grpc-broker
- GitHub Check: e2e-with-istio
- GitHub Check: e2e-broadcast-subscription
- GitHub Check: upgrade
🔇 Additional comments (8)
test/setup/deploy_server.sh (5)
17-28: LGTM!

Environment variable setup with sensible defaults. The configuration is well-organized and follows consistent naming conventions.

131-136: LGTM!

The conditional append for broadcast subscription correctly adds `agentTopic` under the `mqtt:` block with proper indentation.

138-152: LGTM!

The gRPC broker configuration is correctly appended as a top-level block when `msg_broker` is set to "grpc".

154-160: LGTM!

The Helm deployment command is well-structured with `--wait` and `--timeout` flags ensuring reliable deployment in CI/CD scenarios.

168-173: LGTM!

The conditional endpoint logic correctly handles Istio-enabled scenarios where Istio manages mTLS, outputting `http://` when Istio is enabled and `https://` otherwise. This is consistent with the `server.https.enabled` configuration in the Helm values.

charts/maestro-server/templates/deployment.yaml (3)

94-128: LGTM!

The container setup properly templates the image, uses configurable pull policy, and correctly mounts all required volumes based on the configuration. The environment variable setup with POD_NAME is a good pattern for generating unique client IDs.

170-183: LGTM! Previous issue resolved.

The health check probes now correctly use `.Values.server.healthCheck.bindPort` instead of the hardcoded port 8083, addressing the issue raised in the previous review. The conditional scheme selection based on HTTPS enablement is also appropriate.

150-154: gRPC TLS certificate paths are properly configured.

The gRPC TLS file paths in values.yaml (lines 150-157) already include the complete paths with the mount point prefix (`/secrets/grpc-broker-cert/`). The values are correctly aligned with the volume mount defined at lines 108-111 of deployment.yaml. No configuration changes are needed.

Likely an incorrect or invalid review comment.
```yaml
{{- if .Values.database.cloudSqlProxy.enabled }}
- name: cloud-sql-proxy
  restartPolicy: Always
  image: {{ .Values.database.cloudSqlProxy.image }}
  imagePullPolicy: IfNotPresent
  args:
    - "--structured-logs"
    - "--port=5432"
    - {{ .Values.database.cloudSqlProxy.instanceConnectionName | quote }}
  securityContext:
    runAsNonRoot: true
{{- end }}
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Search for Kubernetes version requirements in chart files
echo "=== Checking Chart.yaml and similar files for K8s version requirements ==="
fd -e yaml -e txt | xargs rg -i "kubeVersion|kubernetes.*version|version.*requirement" -l | head -20
echo ""
echo "=== Checking Chart.yaml and requirements files ==="
fd "Chart.yaml|requirements.yaml|values.yaml" -t f | head -10
```

Repository: openshift-online/maestro

Length of output: 326
🏁 Script executed:
```bash
#!/bin/bash
# Check Chart.yaml for kubeVersion
echo "=== Content of maestro-server Chart.yaml ==="
cat -n charts/maestro-server/Chart.yaml | head -50
echo ""
echo "=== Checking for kubeVersion in all Chart files ==="
fd "Chart.yaml" -x grep -H "kubeVersion\|version:" {}
echo ""
echo "=== Content of maestro-server deployment.yaml ==="
wc -l charts/maestro-server/templates/deployment.yaml
echo ""
echo "=== Checking for restartPolicy in deployment.yaml ==="
rg "restartPolicy" charts/maestro-server/templates/deployment.yaml -n
```

Repository: openshift-online/maestro

Length of output: 977
🏁 Script executed:
```bash
#!/bin/bash
# Read the full deployment.yaml to understand context
echo "=== Full deployment.yaml content ==="
cat -n charts/maestro-server/templates/deployment.yaml
```

Repository: openshift-online/maestro

Length of output: 8325
Add Kubernetes version constraint to Chart.yaml for sidecar feature dependency.
The cloud-sql-proxy init container uses restartPolicy: Always (line 64), which implements the native sidecar feature introduced in Kubernetes 1.28 (stable in 1.33). This requires explicit version constraints in the chart's Chart.yaml to prevent deployment failures on incompatible clusters.
Add kubeVersion: ">=1.28" to Chart.yaml to document this dependency. Consider noting in the chart README that K8s 1.29+ is recommended for default feature gate support, or document the required feature gate for earlier versions.
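As a sketch, the constraint recommended above would sit at the top level of Chart.yaml. The surrounding fields here are illustrative, and the `-0` suffix is a common Helm idiom so that vendor pre-release cluster versions (e.g. v1.28.3-gke.100) still satisfy the range:

```yaml
apiVersion: v2
name: maestro-server
version: 0.1.0
# Native sidecar support (restartPolicy on init containers) needs K8s >= 1.28.
kubeVersion: ">=1.28.0-0"
```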
🤖 Prompt for AI Agents
In charts/maestro-server/templates/deployment.yaml lines 62-73 the init
container uses restartPolicy: Always which relies on the native sidecar feature
introduced in Kubernetes 1.28; update charts/maestro-server/Chart.yaml to add a
kubeVersion constraint (e.g. kubeVersion: ">=1.28") at the top-level to prevent
installs on incompatible clusters, and update the chart README to recommend
Kubernetes 1.29+ for default feature gate support or document the required
feature gate for earlier K8s versions.
Actionable comments posted: 3
♻️ Duplicate comments (1)
charts/maestro-agent/templates/crd.yaml (1)
66-68: Fix the field description.

The description for the `namespace` field incorrectly states "Name is the namespace..." when it should say "Namespace is the namespace..." to match the field name.

🔎 Proposed fix

```diff
 namespace:
-  description: Name is the namespace of the Kubernetes resource, empty string indicates it is a cluster scoped resource.
+  description: Namespace is the namespace of the Kubernetes resource, empty string indicates it is a cluster scoped resource.
   type: string
```
🧹 Nitpick comments (11)
charts/maestro-server/values.yaml (3)
5-5: Consider a safer default for `environment`.

Setting `environment: production` as the default may lead users to inadvertently deploy with production settings when they intend to test or develop. Consider defaulting to `development` or `staging`, and require users to explicitly set `production` in their values overrides.
18-18: Single replica is not HA-ready for production.

The default `replicas: 1` does not provide high availability. For production environments (the default `environment` setting), consider defaulting to at least 2-3 replicas to ensure resilience.
139-147: Hard-coded TLS certificate paths may not match actual secret structure.

The TLS paths (`/secrets/mqtt-certs/ca.crt`, `/secrets/mqtt-certs/client.crt`, `/secrets/mqtt-certs/client.key`) are hard-coded and assume a specific secret key structure. If the actual secret uses different keys (e.g., `tls.crt`, `tls.key`), the deployment will fail at runtime.

Consider documenting the expected secret structure in comments or adding a validation mechanism to ensure the secret keys match the expected paths.
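For illustration, a Secret whose keys line up with those hard-coded paths would look like this sketch; the secret name and the placeholder values are assumptions, not taken from the chart:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: maestro-server-certs  # hypothetical; must match the chart's MQTT secretName
type: Opaque
data:
  # Key names become file names under the /secrets/mqtt-certs/ mount.
  ca.crt: <base64 CA certificate>
  client.crt: <base64 client certificate>
  client.key: <base64 client key>
```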
test/setup/deploy_agent.sh (1)
114-121: Redundant readiness wait after Helm's `--wait` flag.

Line 119 includes `--wait --timeout 5m`, which already waits for the deployment to be ready. Line 121 then adds another explicit `kubectl wait` with a 200s timeout. This is redundant and extends the total wait time unnecessarily.

🔎 Proposed fix

Remove the redundant kubectl wait command:

```diff
 helm upgrade --install maestro-agent \
   ./charts/maestro-agent \
   --namespace "${agent_namespace}" \
   --values "$values_file" \
   --wait \
   --timeout 5m
-
-kubectl wait deploy/maestro-agent -n ${agent_namespace} --for condition=Available=True --timeout=200s
```

Makefile (3)
253-266: Consider adding a check for Helm installation.

The new `deploy` and `undeploy` targets assume Helm is installed and available. If Helm is not present, the user will see cryptic command-not-found errors. Consider adding a prerequisite target that verifies Helm is installed before executing these targets.

🔎 Proposed implementation

Add a check-helm target:

```make
.PHONY: check-helm
check-helm:
	@which helm > /dev/null || (echo "ERROR: Helm is not installed. Please install Helm first." && exit 1)

.PHONY: deploy
deploy: check-helm
	helm upgrade --install maestro-server \
		./charts/maestro-server \
		--namespace $(namespace) \
		--create-namespace \
		--set mqtt.enabled=true \
		--set postgresql.enabled=true
```
260-261: Hard-coded `--set` flags reduce flexibility.

The `deploy` target hard-codes `--set mqtt.enabled=true` and `--set postgresql.enabled=true`, which makes it difficult to customize without modifying the Makefile. Consider allowing users to pass additional Helm values via a variable (e.g., `HELM_VALUES`).

🔎 Example implementation

```diff
+# Additional Helm values for deployment
+HELM_VALUES ?=
+
 .PHONY: deploy
 deploy:
 	helm upgrade --install maestro-server \
 		./charts/maestro-server \
 		--namespace $(namespace) \
 		--create-namespace \
 		--set mqtt.enabled=true \
-		--set postgresql.enabled=true
+		--set postgresql.enabled=true \
+		$(HELM_VALUES)
```

Usage: `make deploy HELM_VALUES="--set replicas=3"`
281-281: Fragile `install_crds` conditional logic.

The expression `$(if $(install_crds),$(install_crds),false)` will evaluate to the string "false" (not boolean false) even when `install_crds` is unset or empty. This could lead to unexpected behavior if Helm interprets the string "false" differently than expected.

🔎 Proposed fix

Use explicit default assignment:

```diff
+install_crds ?= false
+
 .PHONY: deploy-agent
 deploy-agent:
 	@if [ -z "$(consumer_name)" ]; then \
 		echo "Error: consumer_name must be set"; \
 		exit 1; \
 	fi
 	helm upgrade --install maestro-agent \
 		./charts/maestro-agent \
 		--namespace $(agent_namespace) \
 		--create-namespace \
 		--set consumerName=$(consumer_name) \
-		--set installCRDs=$(if $(install_crds),$(install_crds),false)
+		--set installCRDs=$(install_crds)
```

README.md (2)
289-289: Clarify what "default image" means.

The statement "If you want to use the default image, you can skip the `make push` step" could be clearer. Does "default image" refer to:

- The pre-built image from the Helm chart's default registry (quay.io)?
- The latest tag from a remote registry?

Consider rewording for clarity, e.g., "If you want to use the pre-built image from the Helm chart (rather than building and pushing locally), you can skip the `make push` step."
328-328: Document why `install_crds=false` is needed.

The addition of `export install_crds=false` on line 328 is not explained. Users may not understand why this is necessary before deploying the agent. Consider adding a brief comment explaining that this prevents reinstalling CRDs that were already installed with the server, or that CRDs should be installed separately.

charts/maestro-agent/templates/deployment.yaml (2)
38-47: Hard-coded secret names reduce flexibility.

The conditional volume mounts (lines 38-47) and volumes (lines 52-61) reference hard-coded secret names:

- `maestro-agent-certs` for MQTT TLS (line 55)
- `maestro-grpc-broker-cert` for gRPC TLS (line 60)

If users want to provide their own secrets with different names, they cannot do so without modifying the template. Consider making these secret names configurable via values.yaml.

🔎 Example implementation

Add to values.yaml:

```yaml
messageBroker:
  mqtt:
    tls:
      secretName: maestro-agent-certs
  grpc:
    tls:
      secretName: maestro-grpc-broker-cert
```

Update template:

```diff
 {{- if and (eq .Values.messageBroker.type "mqtt") .Values.messageBroker.mqtt.rootCert }}
 - name: mqtt-certs
   secret:
-    secretName: maestro-agent-certs
+    secretName: {{ .Values.messageBroker.mqtt.tls.secretName | default "maestro-agent-certs" }}
 {{- end }}
 {{- if and (eq .Values.messageBroker.type "grpc") .Values.messageBroker.grpc.tls.caCert }}
 - name: grpc-broker-cert
   secret:
-    secretName: maestro-grpc-broker-cert
+    secretName: {{ .Values.messageBroker.grpc.tls.secretName | default "maestro-grpc-broker-cert" }}
 {{- end }}
```

Also applies to: 52-61
18-21: Consider adding resource limits and health probes.

The container spec is missing:

- Resource requests and limits (CPU/memory)
- Liveness and readiness probes

These are important for production deployments to ensure proper resource allocation and health monitoring. Consider adding them to the deployment template, with sensible defaults in values.yaml.
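A minimal sketch of what that could look like in the agent container spec; the probe path and port are assumptions about the agent's health endpoint, not values from the chart:

```yaml
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    memory: 512Mi
livenessProbe:
  httpGet:
    path: /healthz   # assumed endpoint; adjust to the agent's actual health port
    port: 8888
  initialDelaySeconds: 10
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /healthz
    port: 8888
  initialDelaySeconds: 5
  periodSeconds: 10
```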
📜 Review details
📒 Files selected for processing (8)
- Makefile
- README.md
- charts/README.md
- charts/maestro-agent/templates/crd.yaml
- charts/maestro-agent/templates/deployment.yaml
- charts/maestro-agent/values.yaml
- charts/maestro-server/values.yaml
- test/setup/deploy_agent.sh
🚧 Files skipped from review as they are similar to previous changes (1)
- charts/maestro-agent/values.yaml
🧰 Additional context used
🪛 checkmake (0.2.2)
Makefile
[warning] 255-255: Target body for "deploy" exceeds allowed length of 5 (6).
(maxbodylength)
[warning] 420-420: Missing required phony target "all"
(minphony)
🪛 YAMLlint (1.37.1)
charts/maestro-agent/templates/crd.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Red Hat Konflux / maestro-on-pull-request
- GitHub Check: Red Hat Konflux / maestro-e2e-on-pull-request
- GitHub Check: e2e-with-istio
- GitHub Check: e2e
- GitHub Check: upgrade
- GitHub Check: e2e-grpc-broker
- GitHub Check: e2e-broadcast-subscription
🔇 Additional comments (4)
charts/maestro-agent/templates/crd.yaml (1)
1-1: YAMLlint syntax error is a false positive.

The reported syntax error from YAMLlint is expected because this is a Helm template file using Go template syntax (`{{- if .Values.installCRDs }}`), not plain YAML. This is standard practice for conditional CRD installation in Helm charts.

charts/README.md (1)
1-22: LGTM! Documentation structure is clear.

The top-level README provides a clean overview of the available Helm charts and links to their respective documentation. The structure is easy to navigate and follows standard documentation conventions.
Note: There is a past review comment referencing lines 148-172 (Chart Development section with incorrect Makefile targets), but those lines are not included in the provided code snippet for review. If that section exists in the full file, please ensure the Makefile targets are corrected as noted in the previous review.
charts/maestro-agent/templates/deployment.yaml (1)
6-6: Static analysis error is a false positive.

The YAMLlint error `syntax error: expected the node content, but found '-'` at line 6 is a false positive. The syntax `{{- include "maestro-agent.labels" . | nindent 4 }}` is valid Helm templating. YAMLlint does not understand Helm template directives and treats them as YAML syntax errors. This can be safely ignored.

test/setup/deploy_agent.sh (1)
78-83: MQTT configuration is correct as-is.

The MQTT broker is explicitly configured with `allow_anonymous true` when TLS is disabled (the default), so empty credentials will not cause authentication failures. The test setup deliberately allows anonymous MQTT access, making the empty user and password values appropriate for this environment.

Likely an incorrect or invalid review comment.
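For reference, the anonymous-access setup described above corresponds to a mosquitto.conf along these lines (the listener port is an assumption about the test broker, not copied from the setup scripts):

```
listener 1883
# No TLS and no credentials in the test environment,
# so anonymous connections must be allowed.
allow_anonymous true
```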
```diff
 $ curl -k -X POST -H "Authorization: Bearer $TOKEN" \
   -H "Content-Type: application/json" \
-  https://maestro.${external_apps_domain}/api/maestro/v1/consumers \
+  https://maestro-maestro.${external_apps_domain}/api/maestro/v1/consumers \
```
Fix endpoint URL: remove duplicate "maestro" prefix.
The endpoint https://maestro-maestro.${external_apps_domain} contains a duplicated service name. Based on the maestro repository documentation, the correct endpoint should be https://maestro.${external_apps_domain}. Change maestro-maestro to maestro to match the actual service route.
🤖 Prompt for AI Agents
In README.md around line 306, the endpoint URL contains a duplicated service
prefix "maestro-maestro"; update the URL to use the correct host by changing
"https://maestro-maestro.${external_apps_domain}/api/maestro/v1/consumers" to
"https://maestro.${external_apps_domain}/api/maestro/v1/consumers" so the
documentation matches the actual service route.
Actionable comments posted: 2
♻️ Duplicate comments (2)
charts/maestro-server/templates/deployment.yaml (2)
28-31: Unused volume "service" should be removed.

This volume references secret "maestro" but is never mounted in any container, making it dead code.

🔎 Proposed fix

```diff
 - name: rds
   secret:
     secretName: {{ .Values.database.secretName }}
-- name: service
-  secret:
-    secretName: maestro
-    optional: true
 - name: {{ .Values.messageBroker.type }}
   secret:
     secretName: {{ .Values.messageBroker.secretName }}
```
67-79: Kubernetes version constraint needed for sidecar feature.

The `cloud-sql-proxy` init container uses `restartPolicy: Always` (line 70), which requires Kubernetes 1.28+. The Chart.yaml should include a `kubeVersion` constraint to prevent deployment failures on incompatible clusters.
🧹 Nitpick comments (1)
test/setup/env_setup.sh (1)
200-209: Consider replacing fixed `sleep` with a more robust readiness check.

Using `sleep 10` before `kubectl wait` is fragile and may fail under varying cluster load or startup times. Consider polling until the deployment exists or using a retry loop.

Also, line 204 contains a commented-out command that appears to be leftover code.

🔎 Proposed fix

```diff
 kubectl apply -f "${PWD}/test/setup/service-ca-crds"
-# kubectl create namespace openshift-config-managed || true
 kubectl apply -f "${PWD}/test/setup/service-ca"
-sleep 10 # wait for openshift service-ca-operator is created
+# Wait for the service-ca-operator deployment to exist before waiting on it
+until kubectl get deploy/service-ca-operator -n openshift-service-ca-operator &>/dev/null; do sleep 2; done
 kubectl wait deploy/service-ca-operator -n openshift-service-ca-operator --for condition=Available=True --timeout=300s
-sleep 10 # wait for openshift service-ca is created
+until kubectl get deploy/service-ca -n openshift-service-ca &>/dev/null; do sleep 2; done
 kubectl wait deploy/service-ca -n openshift-service-ca --for condition=Available=True --timeout=300s
```
📜 Review details
📒 Files selected for processing (4)
- charts/maestro-server/templates/deployment.yaml
- charts/maestro-server/values.yaml
- test/setup/deploy_server.sh
- test/setup/env_setup.sh
🚧 Files skipped from review as they are similar to previous changes (1)
- charts/maestro-server/values.yaml
🧰 Additional context used
🪛 YAMLlint (1.37.1)
charts/maestro-server/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Red Hat Konflux / maestro-e2e-on-pull-request
- GitHub Check: Red Hat Konflux / maestro-on-pull-request
- GitHub Check: e2e-with-istio
- GitHub Check: upgrade
- GitHub Check: e2e
- GitHub Check: e2e-grpc-broker
- GitHub Check: e2e-broadcast-subscription
🔇 Additional comments (9)
test/setup/env_setup.sh (2)
126-134: LGTM - HTTPS certificate generation is properly structured.

The certificate creation is correctly guarded by a directory existence check to avoid regenerating certs on subsequent runs. The SAN entries appropriately cover the service name variations and localhost.

210-228: LGTM - gRPC certificate setup is consistent with other cert blocks.

The certificate generation follows the same pattern used for MQTT and gRPC broker certs, including proper directory guarding and secret cleanup before creation.

test/setup/deploy_server.sh (3)

147-161: LGTM - gRPC broker configuration appended correctly.

The gRPC broker block is correctly appended as a new top-level section with proper YAML structure.

163-169: LGTM - Helm deployment is well-configured.

The `helm upgrade --install` with `--wait` and `--timeout` ensures the deployment is verified before proceeding.

177-182: LGTM - REST API endpoint scheme logic is correct.

When Istio handles mTLS, using `http` for the external endpoint is appropriate since Istio's sidecar manages the TLS termination.

charts/maestro-server/templates/deployment.yaml (4)
1-16: LGTM - Deployment metadata and spec are well-structured.

The rolling update strategy with 25% maxSurge/maxUnavailable is a sensible default for zero-downtime deployments.

Note: The static analysis error on line 6 is a false positive - YAMLlint doesn't understand Helm/Go template syntax, and `{{- include "maestro-server.labels" . | nindent 4 }}` is valid.

80-99: LGTM - Migration init container is correctly configured.

The migration container properly mounts RDS secrets and handles SSL mode conditionally.

100-132: LGTM - Main container volume mounts are comprehensive.

The conditional mounting for different broker types (MQTT, gRPC, PubSub) and TLS certs is well-structured.

184-199: LGTM - Health probes are correctly configured.

The liveness and readiness probes now use the templated `{{ .Values.server.healthCheck.bindPort }}` and correctly switch scheme based on HTTPS enablement. This addresses the previous review feedback.
Actionable comments posted: 7
♻️ Duplicate comments (3)
charts/maestro-server/templates/deployment.yaml (3)
28-31: Unused volume still present.

The "service" volume referencing secret "maestro" is declared but never mounted in any container. This remains dead code that should be removed.

68-79: Kubernetes version constraint still needed for sidecar feature.

The `cloud-sql-proxy` init container uses `restartPolicy: Always` (line 70), which requires Kubernetes 1.28+. The Chart.yaml should include a `kubeVersion` constraint to prevent deployment failures on incompatible clusters.

166-170: Nil pointer issue still present.

The template accesses `.Values.grpc.tls.enabled` and related TLS fields without first ensuring `.Values.grpc` and `.Values.grpc.tls` exist. This issue was previously identified and remains unresolved.
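One nil-safe way to express such a guard in a Helm template is `with`, which skips the block entirely when the value is unset; this is a sketch of the pattern, not the chart's actual code:

```yaml
{{- with .Values.grpc }}
{{- if and .tls .tls.enabled }}
# TLS-dependent volumes/mounts go here; inside this block,
# .tls refers to .Values.grpc.tls
{{- end }}
{{- end }}
```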
📜 Review details
📒 Files selected for processing (4)
- charts/maestro-server/templates/deployment.yaml
- charts/maestro-server/templates/grpc.yaml
- test/setup/deploy_server.sh
- test/setup/grpc-client/grpc-client.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
- test/setup/deploy_server.sh
🧰 Additional context used
🪛 YAMLlint (1.37.1)
charts/maestro-server/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/grpc.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Red Hat Konflux / maestro-on-pull-request
- GitHub Check: Red Hat Konflux / maestro-e2e-on-pull-request
- GitHub Check: e2e-with-istio
- GitHub Check: e2e-grpc-broker
- GitHub Check: e2e
- GitHub Check: e2e-broadcast-subscription
- GitHub Check: upgrade
```yaml
{{- if and .Values.server.grpc.enabled .Values.server.grpc.tls.enabled }}
- name: maestro-grpc-cert
  mountPath: /secrets/maestro-grpc-cert
  readOnly: true
{{- end }}
```
Potential nil pointer if .Values.server.grpc is undefined.
Similar to lines 61-66, this volume mount uses the same condition that accesses .Values.server.grpc.enabled and .Values.server.grpc.tls.enabled without existence checks.
🔎 Proposed fix - add existence checks
```diff
-{{- if and .Values.server.grpc.enabled .Values.server.grpc.tls.enabled }}
+{{- if and .Values.server.grpc (and .Values.server.grpc.enabled .Values.server.grpc.tls) .Values.server.grpc.tls.enabled }}
 - name: maestro-grpc-cert
   mountPath: /secrets/maestro-grpc-cert
   readOnly: true
 {{- end }}
```
In charts/maestro-server/templates/deployment.yaml around lines 128 to 132, the
Helm condition accesses .Values.server.grpc.enabled and
.Values.server.grpc.tls.enabled directly which can nil-pointer if
.Values.server.grpc or .Values.server.grpc.tls is undefined; update the if
expression to also check for existence (e.g. and .Values.server.grpc
.Values.server.grpc.enabled .Values.server.grpc.tls
.Values.server.grpc.tls.enabled) following the same pattern used at lines 61-66
so the volumeMount is only added when those parent keys exist and are enabled.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: grpc-pub-sub
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: grpc-pub-sub
subjects:
- kind: User
  name: grpc-client
  apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
  name: grpc-client
  namespace: maestro
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Search for ClusterRole definitions with name grpc-pub-sub
# Search for ClusterRole kind with grpc-pub-sub in all YAML files
echo "=== Searching for ClusterRole with grpc-pub-sub ==="
rg -t yaml "kind:\s*ClusterRole" -A 5 | rg -B 5 "grpc-pub-sub"
echo -e "\n=== Searching for grpc-pub-sub in all files ==="
rg "grpc-pub-sub" --type-list
echo -e "\n=== All occurrences of grpc-pub-sub ==="
rg "grpc-pub-sub" -n
```

Repository: openshift-online/maestro

Length of output: 1598
Ensure the ClusterRole grpc-pub-sub is defined or created before this binding is applied.
The ClusterRoleBinding references a ClusterRole named grpc-pub-sub that is not defined in this file. This ClusterRole must exist in the cluster before the binding can function. Verify that this role is created elsewhere in your setup process (e.g., by a test helper function or a separate manifest file).
Additionally, consider whether the User subject grpc-client is necessary. For gRPC client test setups, a ServiceAccount binding is typically sufficient and more conventional than a User subject.
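For reference, a minimal ClusterRole that would satisfy the binding could look like the sketch below. The rule shown (nonResourceURLs with pub/sub verbs) is an assumption about what the maestro gRPC authorizer checks, not taken from this PR; replace it with the permissions the server actually evaluates:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: grpc-pub-sub
rules:
  # Hypothetical rule: grant publish/subscribe on all gRPC sources.
  - nonResourceURLs: ["/sources/*"]
    verbs: ["pub", "sub"]
```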
🤖 Prompt for AI Agents
In test/setup/grpc-client/grpc-client.yaml lines 1-15, the ClusterRoleBinding
references a ClusterRole named "grpc-pub-sub" that is not defined here and may
not exist when this binding is applied; ensure the ClusterRole is created
beforehand by adding or referencing the ClusterRole manifest (or creating it in
your test setup/CI job before applying this binding) and/or reorder manifests so
the role is applied before this binding, and remove the User subject
"grpc-client" if it's not needed for tests (keep only the ServiceAccount subject
for the gRPC client) or document why both subjects are required.
```yaml
- kind: ServiceAccount
  name: grpc-client
  namespace: maestro
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: grpc-client
---
```
Add namespace to ServiceAccount to match ClusterRoleBinding reference.
The ClusterRoleBinding references the ServiceAccount in the maestro namespace (line 15), but the ServiceAccount definition doesn't specify a namespace. Without an explicit namespace, the ServiceAccount will be created in the default namespace, causing the ClusterRoleBinding to fail.
🔎 Proposed fix
```diff
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: grpc-client
+  namespace: maestro
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- kind: ServiceAccount
  name: grpc-client
  namespace: maestro
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: grpc-client
  namespace: maestro
---
```
🤖 Prompt for AI Agents
In test/setup/grpc-client/grpc-client.yaml around lines 13 to 21 the
ServiceAccount is defined without a namespace so it will be created in the
default namespace while the ClusterRoleBinding expects it in the "maestro"
namespace; add a metadata.namespace: maestro entry to the ServiceAccount
definition (ensure it sits under metadata alongside name) so the ServiceAccount
is created in the maestro namespace and matches the ClusterRoleBinding
reference.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: grpc-client-token
  annotations:
    kubernetes.io/service-account.name: grpc-client
type: kubernetes.io/service-account-token
```
Add namespace to Secret to match ServiceAccount location.
The Secret is annotated to be associated with the grpc-client ServiceAccount but doesn't specify a namespace. For the service account token to work correctly, the Secret must be in the same namespace as the ServiceAccount (which should be maestro).
🔎 Proposed fix

    apiVersion: v1
    kind: Secret
    metadata:
      name: grpc-client-token
    + namespace: maestro
      annotations:
        kubernetes.io/service-account.name: grpc-client
    type: kubernetes.io/service-account-token
🤖 Prompt for AI Agents
In test/setup/grpc-client/grpc-client.yaml around lines 22 to 28, the Secret
lacks a namespace so it may reside in the wrong namespace and won't be
associated with the grpc-client ServiceAccount; add a namespace: maestro field
under metadata so the Secret is created in the same namespace as the
ServiceAccount.
e2e-grpc-broker test issue was fixed in open-cluster-management-io/sdk-go#181 |
Actionable comments posted: 8
♻️ Duplicate comments (4)
charts/maestro-server/templates/mosquitto.yaml (1)
58-60: Misleading volume name for ephemeral storage.

The volume is named `mosquitto-persistent-storage` but uses `emptyDir: {}`, which is ephemeral and doesn't persist data across pod restarts. Consider renaming to reflect its ephemeral nature.

🔎 Proposed fix

      volumes:
    -   - name: mosquitto-persistent-storage
    +   - name: mosquitto-data
          emptyDir: {}

And update the corresponding volumeMount on line 48:

      volumeMounts:
    -   - name: mosquitto-persistent-storage
    +   - name: mosquitto-data
          mountPath: /mosquitto/data

charts/maestro-server/templates/grpc.yaml (1)
1-12: Add existence check for `.Values.messageBroker.grpc` before accessing nested fields.

When `messageBroker.type` is "grpc", the template accesses `.Values.messageBroker.grpc.url` without verifying the `grpc` section exists. This could cause a template rendering error if the configuration block is missing.

🔎 Proposed fix

    -{{- if eq .Values.messageBroker.type "grpc" -}}
    +{{- if and (eq .Values.messageBroker.type "grpc") .Values.messageBroker.grpc -}}
     ---
     apiVersion: v1
     kind: Secret
     metadata:
       name: {{ .Values.messageBroker.secretName }}
       labels:
         {{- include "maestro-server.labels" . | nindent 4 }}
     stringData:
       config.yaml: |
    -    url: {{ .Values.messageBroker.grpc.url }}
    +    url: {{ .Values.messageBroker.grpc.url | required "messageBroker.grpc.url is required when messageBroker.type is grpc" }}
     {{- end }}

charts/maestro-server/templates/deployment.yaml (2)
62-71: Native sidecar feature requires Kubernetes 1.28+.

The `restartPolicy: Always` on the `cloud-sql-proxy` init container (line 64) uses the native sidecar feature introduced in Kubernetes 1.28 (stable in 1.29+). Consider adding a `kubeVersion` constraint to `Chart.yaml` to prevent installation failures on older clusters.

28-31: Unused volume: "service" secret is declared but never mounted.

The `service` volume referencing secret `maestro` is defined but not mounted in any container. This appears to be dead configuration that should be removed.

🔎 Proposed fix

      - name: rds
        secret:
          secretName: {{ .Values.database.secretName }}
    - - name: service
    -   secret:
    -     secretName: maestro
    -     optional: true
      - name: {{ .Values.messageBroker.type }}
        secret:
          secretName: {{ .Values.messageBroker.secretName }}
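The `kubeVersion` constraint suggested for the native-sidecar requirement could look like this in `Chart.yaml` (a sketch; the chart name and version shown here are assumptions, only the `kubeVersion` line is the point):

```yaml
# Hypothetical Chart.yaml excerpt: reject clusters older than 1.28, where
# restartPolicy: Always on init containers (native sidecars) is unavailable.
apiVersion: v2
name: maestro-server
version: 0.1.0
kubeVersion: ">=1.28.0-0"   # the -0 suffix also matches pre-release build tags
```

The `-0` pre-release suffix matters on managed clusters (GKE, EKS) whose reported versions carry build metadata that a plain `>=1.28.0` constraint would reject.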
🧹 Nitpick comments (4)
test/e2e/setup/rosa/setup/teardown.sh (1)
53-67: Add timeout handling when RDS deletion polling exhausts.

If the loop completes all 20 iterations without the DB being deleted, the script continues silently. Consider adding an explicit warning or error when the timeout is reached.

🔎 Proposed fix

      i=1
      while [ $i -le 20 ]
      do
        db_status=$(aws rds describe-db-instances --region ${region} --db-instance-identifier maestro --output json 2>/dev/null | jq -r '.DBInstances[0].DBInstanceStatus // empty')
        if [[ -z "$db_status" ]]; then
          echo "RDS instance deleted"
          break
        fi
        echo "[$i] DB status: ${db_status}"
        i=$((i + 1))
        sleep 30
      done
    + if [[ -n "$db_status" ]]; then
    +   echo "Warning: RDS instance deletion timed out after 10 minutes (status: ${db_status})"
    + fi
      else
        echo "RDS instance not found or already deleted"
      fi
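The poll-then-warn pattern recommended here can be sketched in isolation. In this minimal version, `check_status` is a stand-in for the `aws rds describe-db-instances` query, rigged to report the instance gone on the third poll:

```shell
# Standalone sketch of polling with an explicit exhaustion warning.
# check_status fakes the AWS query; in the real script it would set
# db_status from `aws rds describe-db-instances ... | jq ...`.
attempt=0
db_status=""
check_status() {
  attempt=$((attempt + 1))
  if [ "$attempt" -lt 3 ]; then db_status="deleting"; else db_status=""; fi
}

i=1
while [ "$i" -le 5 ]; do
  check_status
  if [ -z "$db_status" ]; then
    echo "RDS instance deleted"
    break
  fi
  echo "[$i] DB status: ${db_status}"
  i=$((i + 1))
done
# The extra guard the review asks for: warn when the loop exhausted
# without ever observing the "deleted" condition.
if [ -n "$db_status" ]; then
  echo "Warning: deletion timed out (status: ${db_status})"
fi
```

Because `db_status` is only empty when the resource actually disappeared, checking it once after the loop cleanly distinguishes "deleted" from "gave up".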
46-51: Consider using generated image values in Helm deployment.

The script defines `IMAGE_REGISTRY`, `IMAGE_REPOSITORY`, and `IMAGE_TAG` variables (lines 49-51) but doesn't include them in the generated Helm values file. If custom images are intended, add them to the values file.

🔎 Proposed fix - add image config to values

      cat > ${output_dir}/maestro-agent-values.yaml <<EOF
      consumerName: ${consumer_id}
      environment: production
    +image:
    +  registry: ${IMAGE_REGISTRY}
    +  repository: ${IMAGE_REPOSITORY}
    +  tag: ${IMAGE_TAG}
    +
      serviceAccount:
        name: maestro-agent-sa

charts/maestro-server/values.yaml (1)
12-16: Consider using a specific image tag instead of `latest` for production.

Using `latest` as the default tag can lead to unpredictable deployments when the underlying image is updated. For production environments, pinning to a specific version tag ensures reproducibility.

🔎 Suggested improvement

      image:
        registry: quay.io
        repository: redhat-user-workloads/maestro-rhtap-tenant/maestro/maestro
    -   tag: latest
    +   tag: ""  # Set to a specific version for production (e.g., "v1.0.0")
        pullPolicy: IfNotPresent

Then in templates, use a default or require an override for production.
test/e2e/setup/rosa/setup/agent.sh (1)
73-90: YAML type inconsistency: `port` should be an integer, not a string.

The `port` value is quoted as a string (`"8883"`), but other port values in the chart likely expect integers. This could cause type comparison issues in templates.

🔎 Proposed fix

      messageBroker:
        type: mqtt
        mqtt:
          host: ${mqtt_host}
          user: ""
    -     port: "8883"
    +     port: 8883
          rootCert: /secrets/mqtt-certs/ca.crt
          clientCert: /secrets/mqtt-certs/client.crt
          clientKey: /secrets/mqtt-certs/client.key
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (15)
- charts/maestro-agent/templates/deployment.yaml
- charts/maestro-agent/values.yaml
- charts/maestro-server/templates/deployment.yaml
- charts/maestro-server/templates/grpc.yaml
- charts/maestro-server/templates/mosquitto.yaml
- charts/maestro-server/templates/mqtt.yaml
- charts/maestro-server/templates/pubsub.yaml
- charts/maestro-server/values.yaml
- test/e2e/setup/gcp/setup/agent.sh
- test/e2e/setup/gcp/setup/maestro.sh
- test/e2e/setup/rosa/README.md
- test/e2e/setup/rosa/setup/agent.sh
- test/e2e/setup/rosa/setup/maestro.sh
- test/e2e/setup/rosa/setup/teardown.sh
- test/setup/deploy_server.sh
🚧 Files skipped from review as they are similar to previous changes (1)
- charts/maestro-agent/values.yaml
🧰 Additional context used
🪛 Shellcheck (0.11.0)
test/setup/deploy_server.sh
[warning] 20-20: enable_broadcast appears unused. Verify use (or export if used externally).
(SC2034)
test/e2e/setup/rosa/setup/maestro.sh
[error] 114-114: Arrays implicitly concatenate in [[ ]]. Use a loop (or explicit * instead of @).
(SC2199)
[warning] 114-114: Remove quotes from right-hand side of =~ to match as a regex rather than literally.
(SC2076)
[error] 132-132: Arrays implicitly concatenate in [[ ]]. Use a loop (or explicit * instead of @).
(SC2199)
[warning] 132-132: Remove quotes from right-hand side of =~ to match as a regex rather than literally.
(SC2076)
test/e2e/setup/rosa/setup/teardown.sh
[warning] 7-7: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
🪛 YAMLlint (1.37.1)
charts/maestro-server/templates/grpc.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/mosquitto.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/mqtt.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/pubsub.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🔇 Additional comments (15)
test/e2e/setup/rosa/README.md (1)
15-16: Helpful additions to ROSA Prerequisites.

The two new bullet points (lines 15–16) provide useful preparatory guidance for users setting up ROSA clusters, clarifying the need to create account roles and cluster-admin before proceeding with Maestro deployment. These additions align well with the Helm-based deployment workflow introduced in the PR.
charts/maestro-agent/templates/deployment.yaml (1)
1-61: LGTM!

The deployment template is well-structured with proper conditional handling for different broker types (mqtt/grpc) and TLS configurations. The YAMLlint error on line 6 is a false positive caused by Helm templating syntax (`{{-`), which is valid for Helm charts.
112-167: LGTM!

The Helm values file generation and deployment flow are well-structured. The script properly configures the pubsub message broker, database connectivity with Cloud SQL Proxy, and GKE workload identity. The values align with the Helm chart expectations.
test/e2e/setup/gcp/setup/agent.sh (1)
63-92: LGTM!

The Helm values generation and deployment flow are consistent with the maestro server setup. The pubsub broker configuration and workload identity annotations are properly configured.
charts/maestro-server/templates/mosquitto.yaml (1)
76-87: LGTM on ConfigMap and TLS configuration!

The ConfigMap correctly generates mosquitto.conf with conditional TLS settings. When TLS is enabled, it properly configures certificate paths and client certificate authentication; otherwise, it allows anonymous access for development/testing scenarios.
charts/maestro-server/values.yaml (2)
154-169: Embedded PostgreSQL configuration is appropriate for development only.

The hardcoded credentials (`user: maestro`, `password: maestro`) are acceptable for development/testing, but ensure the chart README clearly documents that `postgresql.enabled: false` should be used in production with an external database. The `enabled: false` default is correct.
1-177: Well-structured Helm values file with comprehensive configuration options.

The values file provides good organization with clear sections for each component (database, messageBroker, server, services). The comments explaining expected secret keys (lines 39-45) and configuration options are helpful for operators.
test/e2e/setup/rosa/setup/agent.sh (1)
92-99: Helm deployment migration looks correct.

The migration from OpenShift templates to Helm-based deployment is well-structured. The `--create-namespace` flag is appropriate, and the readiness wait ensures the deployment is available before proceeding.

test/setup/deploy_server.sh (2)
148-154: Helm deployment command is well-configured.

The `--wait` and `--timeout 5m` flags ensure the deployment completes before proceeding, which is appropriate for test setup scripts.
161-172: Endpoint exposure logic correctly handles Istio vs non-Istio scenarios.

The conditional schema selection (http vs https) based on Istio enablement is correct, as Istio handles mTLS at the service mesh level.
charts/maestro-server/templates/deployment.yaml (2)
1-16: YAMLlint syntax error is a false positive.

The reported syntax error at line 6 is due to Helm templating syntax (`{{- include ... | nindent 4 }}`), which is valid Helm but not recognized by standard YAML linters.
174-187: Health check probe ports now correctly use templated values.

The liveness and readiness probes properly reference `{{ .Values.server.healthCheck.bindPort }}` instead of hardcoded values, addressing the previous review concern.
39-47: Good idempotent password handling.

The password persistence logic correctly reuses an existing password if available, preventing credential mismatch issues on re-runs. This is a solid improvement for script idempotency.
69-88: Improved idempotency for IoT policy creation.

The script now checks if the IoT policy exists before attempting to create it, and gracefully handles attachment failures. This makes the setup script more robust for re-runs.
300-347: Helm-based deployment is well-structured.

The Helm values file generation covers all necessary configuration for the ROSA environment, including proper TLS settings for AWS IoT (`sslMode: verify-full`) and the shared subscription topic pattern with proper YAML escaping for the `$share` prefix.
    {{- if .Values.messageBroker.mqtt.tls.enabled }}
    caFile: {{ .Values.messageBroker.mqtt.tls.caFile }}
    clientCertFile: {{ .Values.messageBroker.mqtt.tls.clientCertFile }}
    clientKeyFile: {{ .Values.messageBroker.mqtt.tls.clientKeyFile }}
    {{- end }}
Add existence check for .Values.messageBroker.mqtt.tls before accessing nested fields.
Accessing .Values.messageBroker.mqtt.tls.enabled without first verifying the tls section exists could cause a template rendering error if the TLS configuration block is not defined in values.
🔎 Proposed fix

    - {{- if .Values.messageBroker.mqtt.tls.enabled }}
    + {{- if and .Values.messageBroker.mqtt.tls .Values.messageBroker.mqtt.tls.enabled }}
        caFile: {{ .Values.messageBroker.mqtt.tls.caFile }}
        clientCertFile: {{ .Values.messageBroker.mqtt.tls.clientCertFile }}
        clientKeyFile: {{ .Values.messageBroker.mqtt.tls.clientKeyFile }}
      {{- end }}
🤖 Prompt for AI Agents
In charts/maestro-server/templates/mqtt.yaml around lines 18-22, the template
accesses .Values.messageBroker.mqtt.tls.enabled and nested fields without
verifying that .Values.messageBroker.mqtt.tls exists; wrap the block in a
protective conditional that checks the tls object before accessing its
properties (e.g. use a combined test like "if .Values.messageBroker.mqtt.tls and
.Values.messageBroker.mqtt.tls.enabled"), and only then reference caFile,
clientCertFile, and clientKeyFile (or provide sensible defaults) to prevent
template rendering errors when tls is undefined.
    - Install the CLIs: `oc`, `rosa`, `aws` and `jq`
    -- Ensue your `aws` CLI is logined with your AWS account and your AWS account should have the permissions to operate AWS IoT and AWS RDS PostgreSQL in your provided region
    +- Ensue your `aws` CLI is logged in with your AWS account and your AWS account should have the permissions to operate AWS IoT and AWS RDS PostgreSQL in your provided region
Fix typo in Prerequisites section.
Line 8 has a grammatical error: "Ensue" should be "Ensure".
🔎 Proposed fix

    -- Ensue your `aws` CLI is logged in with your AWS account and your AWS account should have the permissions to operate AWS IoT and AWS RDS PostgreSQL in your provided region
    +- Ensure your `aws` CLI is logged in with your AWS account and your AWS account should have the permissions to operate AWS IoT and AWS RDS PostgreSQL in your provided region
🤖 Prompt for AI Agents
In test/e2e/setup/rosa/README.md around line 8, the word "Ensue" is a typo;
replace it with "Ensure" so the sentence reads "Ensure your `aws` CLI is logged
in with your AWS account and your AWS account should have the permissions to
operate AWS IoT and AWS RDS PostgreSQL in your provided region."
    if [[ ! " ${unique_azs[@]} " =~ " ${subnet_az} " ]]; then
      unique_azs+=("$subnet_az")
    fi
Shell array membership check has syntax issues.
The [[ " ${unique_azs[@]} " =~ " ${subnet_az} " ]] pattern has two issues flagged by shellcheck: arrays implicitly concatenate in [[ ]], and the quoted right-hand side matches literally instead of as a regex.
🔎 Proposed fix using a loop for reliable membership check
- # Track unique AZs
- if [[ ! " ${unique_azs[@]} " =~ " ${subnet_az} " ]]; then
- unique_azs+=("$subnet_az")
- fi
+ # Track unique AZs
+ az_exists=false
+ for existing_az in "${unique_azs[@]}"; do
+ if [[ "$existing_az" == "$subnet_az" ]]; then
+ az_exists=true
+ break
+ fi
+ done
+ if [[ "$az_exists" == "false" ]]; then
+ unique_azs+=("$subnet_az")
+ fiCommittable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Shellcheck (0.11.0)
[error] 114-114: Arrays implicitly concatenate in [[ ]]. Use a loop (or explicit * instead of @).
(SC2199)
[warning] 114-114: Remove quotes from right-hand side of =~ to match as a regex rather than literally.
(SC2076)
🤖 Prompt for AI Agents
In test/e2e/setup/rosa/setup/maestro.sh around lines 114-116, the current array
membership check using [[ " ${unique_azs[@]} " =~ " ${subnet_az} " ]] is
unreliable and matches as a regex; replace it with an explicit loop that
iterates over "${unique_azs[@]}", compares each element to "${subnet_az}" using
= or == inside [[ ]] (with proper quoting), set a found flag if matched, and
only append subnet_az to unique_azs if the flag remains false.
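The loop-based membership check can be demonstrated standalone. Here `contains_az` is a hypothetical helper name and the AZ strings are sample data; the real script would feed it `$subnet_az` values from the AWS subnet query:

```shell
# Reliable array membership test: compare element by element instead of
# abusing [[ ... =~ ... ]] on the whole expanded array (SC2199/SC2076).
contains_az() {
  local needle="$1"; shift
  local az
  for az in "$@"; do
    [ "$az" = "$needle" ] && return 0
  done
  return 1
}

unique_azs=()
for subnet_az in us-east-1a us-east-1b us-east-1a; do
  # Append only AZs not already collected.
  if ! contains_az "$subnet_az" "${unique_azs[@]}"; then
    unique_azs+=("$subnet_az")
  fi
done
echo "${unique_azs[@]}"   # → us-east-1a us-east-1b
```

Unlike the regex form, this cannot produce false positives when one AZ name is a substring of another, and it behaves correctly while the array is still empty.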
    PWD="$(cd "$(dirname ${BASH_SOURCE[0]})" ; pwd -P)"
    ROSA_DIR="$(cd ${PWD}/.. && pwd -P)"
Add failure handling for cd command.
The static analysis correctly identifies that cd can fail silently. Add || exit 1 to ensure the script fails if the directory change doesn't succeed.
🔎 Proposed fix
-PWD="$(cd "$(dirname ${BASH_SOURCE[0]})" ; pwd -P)"
-ROSA_DIR="$(cd ${PWD}/.. && pwd -P)"
+PWD="$(cd "$(dirname ${BASH_SOURCE[0]})" || exit 1; pwd -P)"
+ROSA_DIR="$(cd ${PWD}/.. || exit 1 && pwd -P)"Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 7-7: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
🤖 Prompt for AI Agents
In test/e2e/setup/rosa/setup/teardown.sh around lines 7-8, the two cd
invocations can fail silently; update the command substitutions so that each cd
is followed by "|| exit 1" to ensure the script exits on failure (i.e., add
failure handling to the subshell cd "$(dirname ${BASH_SOURCE[0]})" and to the cd
${PWD}/.. invocation).
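The fail-fast directory resolution can be sketched with a small helper. `resolve_dir` is a hypothetical name, not part of teardown.sh; it shows the same idea of never letting a failed `cd` pass silently:

```shell
# Resolve a directory to its physical path, failing loudly if cd fails.
# The subshell keeps the caller's working directory untouched; a failed
# cd makes the function return non-zero instead of being ignored.
resolve_dir() {
  (cd "$1" 2>/dev/null && pwd -P)
}

tmp="$(mktemp -d)"
resolve_dir "$tmp" >/dev/null && echo "resolved"
resolve_dir "$tmp/does-not-exist" >/dev/null || echo "failed fast"
```

Callers can then write `dir="$(resolve_dir ...)" || exit 1`, which is the same guarantee the proposed `cd ... || exit 1` gives inside the script itself.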
    if [[ $policy_name == maestro* ]]; then
    -  echo "delelet policy $policy_name"
    -  aws iot detach-policy --region ${region} --target $cert_arn --policy-name $policy_name
    -  aws iot delete-policy --region ${region} --policy-name $policy_name
    +  echo "Detaching and deleting policy: $policy_name"
    +  aws iot detach-policy --region ${region} --target $cert_arn --policy-name $policy_name 2>/dev/null
    +  aws iot delete-policy --region ${region} --policy-name $policy_name 2>/dev/null && \
    +    echo "Policy $policy_name deleted" || \
    +    echo "Failed to delete policy $policy_name"

    -  echo "delelet certificate $cert_id"
    -  aws iot update-certificate --region ${region} --certificate-id $cert_id --new-status REVOKED
    +  echo "Revoking and deleting certificate: $cert_id"
    +  aws iot update-certificate --region ${region} --certificate-id $cert_id --new-status REVOKED 2>/dev/null
       sleep 5
    -  aws iot delete-certificate --region ${region} --certificate-id $cert_id
    +  aws iot delete-certificate --region ${region} --certificate-id $cert_id 2>/dev/null && \
    +    echo "Certificate $cert_id deleted" || \
    +    echo "Failed to delete certificate $cert_id"
    fi
Certificate deletion may be attempted multiple times.
The certificate deletion logic (lines 133-138) is inside the policy loop. If a certificate has multiple attached maestro policies, the script will attempt to revoke and delete the same certificate multiple times. Move the certificate deletion outside the policy loop or add a break after the first deletion attempt.
🔎 Proposed fix
# List and remove maestro policies
+ has_maestro_policy=false
for policy_name in $(aws iot list-attached-policies --region ${region} --target $cert_arn --output json 2>/dev/null | jq -r '.policies[].policyName // empty'); do
if [[ $policy_name == maestro* ]]; then
+ has_maestro_policy=true
echo "Detaching and deleting policy: $policy_name"
aws iot detach-policy --region ${region} --target $cert_arn --policy-name $policy_name 2>/dev/null
aws iot delete-policy --region ${region} --policy-name $policy_name 2>/dev/null && \
echo "Policy $policy_name deleted" || \
echo "Failed to delete policy $policy_name"
-
- echo "Revoking and deleting certificate: $cert_id"
- aws iot update-certificate --region ${region} --certificate-id $cert_id --new-status REVOKED 2>/dev/null
- sleep 5
- aws iot delete-certificate --region ${region} --certificate-id $cert_id 2>/dev/null && \
- echo "Certificate $cert_id deleted" || \
- echo "Failed to delete certificate $cert_id"
fi
done
+
+ if [[ "$has_maestro_policy" == "true" ]]; then
+ echo "Revoking and deleting certificate: $cert_id"
+ aws iot update-certificate --region ${region} --certificate-id $cert_id --new-status REVOKED 2>/dev/null
+ sleep 5
+ aws iot delete-certificate --region ${region} --certificate-id $cert_id 2>/dev/null && \
+ echo "Certificate $cert_id deleted" || \
+ echo "Failed to delete certificate $cert_id"
+ fi
doneCommittable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In test/e2e/setup/rosa/setup/teardown.sh around lines 126-139 the script revokes
and deletes the certificate inside the policy loop, causing repeated
revoke/delete attempts if the same cert has multiple maestro policies; move the
certificate revocation and deletion out of the policy loop so policies are
detached/deleted in the loop and then, once after the loop, call aws iot
update-certificate (REVOKED), sleep, and aws iot delete-certificate once (with
the same error handling), or alternatively add a break after the first
successful delete to ensure the certificate is only revoked/deleted a single
time.
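The "set a flag in the loop, act once after it" pattern recommended here can be sketched in isolation. `delete_cert` is a stand-in for the aws iot revoke/delete calls, and the policy names are sample data:

```shell
# Detach/delete per policy inside the loop, but revoke and delete the
# certificate exactly once afterwards, guarded by a flag.
deletions=0
delete_cert() { deletions=$((deletions + 1)); }

has_maestro_policy=false
for policy_name in maestro-a maestro-b unrelated; do
  case "$policy_name" in
    maestro*) has_maestro_policy=true ;;   # detach/delete the policy here
  esac
done

if [ "$has_maestro_policy" = "true" ]; then
  delete_cert   # runs once, no matter how many maestro policies matched
fi
echo "$deletions"   # → 1
```

Compared with a `break` after the first deletion, the flag keeps the loop free to detach every remaining policy before the certificate is touched.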
Force-pushed from 6d9ae0c to 4f5aec2 (Compare)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test/e2e/setup/rosa/setup/agent.sh (1)
7-8: Add error handling for `cd` commands.

The `cd` commands can fail silently. Add `|| exit 1` to ensure the script exits if directory changes fail.

🔎 Proposed fix

    -PWD="$(cd "$(dirname ${BASH_SOURCE[0]})" ; pwd -P)"
    -ROOT_DIR="$(cd ${PWD}/.. && pwd -P)"
    +PWD="$(cd "$(dirname ${BASH_SOURCE[0]})" || exit 1; pwd -P)"
    +ROOT_DIR="$(cd ${PWD}/.. || exit 1 && pwd -P)"
♻️ Duplicate comments (14)
charts/maestro-agent/templates/crd.yaml (1)
66-68: Fix the field description.

The description for the `namespace` field incorrectly states "Name is the namespace..." when it should say "Namespace is the namespace...".

🔎 Proposed fix

      namespace:
    -   description: Name is the namespace of the Kubernetes resource, empty string indicates it is a cluster scoped resource.
    +   description: Namespace is the namespace of the Kubernetes resource, empty string indicates it is a cluster scoped resource.
        type: string

test/e2e/setup/rosa/setup/teardown.sh (2)
7-8: Add error handling for `cd` commands.

As previously noted, the `cd` commands can fail silently. Add `|| exit 1` to ensure the script exits if directory changes fail.

126-139: Certificate deletion is attempted multiple times.

As previously noted, the certificate revocation and deletion logic is inside the policy loop. If a certificate has multiple maestro policies attached, the script will attempt to revoke and delete the same certificate multiple times. Move the certificate deletion outside the policy loop.
126-139: Certificate deletion is attempted multiple times.As previously noted, the certificate revocation and deletion logic is inside the policy loop. If a certificate has multiple maestro policies attached, the script will attempt to revoke and delete the same certificate multiple times. Move the certificate deletion outside the policy loop.
charts/maestro-server/templates/route.yaml (1)
15-19: Add validation for TLS termination values.The TLS configuration uses values directly without validation. Invalid
terminationorinsecureEdgeTerminationPolicyvalues will cause Route creation to fail at runtime.🔎 Suggested validation approach
{{- if .Values.route.tls.enabled }} tls: + {{- if not (has .Values.route.tls.termination (list "edge" "passthrough" "reencrypt")) }} + {{- fail "route.tls.termination must be one of: edge, passthrough, reencrypt" }} + {{- end }} termination: {{ .Values.route.tls.termination }} + {{- if and .Values.route.tls.insecureEdgeTerminationPolicy (not (has .Values.route.tls.insecureEdgeTerminationPolicy (list "Allow" "Redirect" "None" ""))) }} + {{- fail "route.tls.insecureEdgeTerminationPolicy must be one of: Allow, Redirect, None, or empty" }} + {{- end }} insecureEdgeTerminationPolicy: {{ .Values.route.tls.insecureEdgeTerminationPolicy }} {{- end }}charts/maestro-server/templates/mqtt.yaml (1)
18-22: Add existence check for TLS configuration.Accessing
.Values.messageBroker.mqtt.tls.enabledwithout verifying that thetlsobject exists could cause template rendering errors if the TLS section is undefined.🔎 Proposed fix
- {{- if .Values.messageBroker.mqtt.tls.enabled }} + {{- if and .Values.messageBroker.mqtt.tls .Values.messageBroker.mqtt.tls.enabled }} caFile: {{ .Values.messageBroker.mqtt.tls.caFile }} clientCertFile: {{ .Values.messageBroker.mqtt.tls.clientCertFile }} clientKeyFile: {{ .Values.messageBroker.mqtt.tls.clientKeyFile }} {{- end }}charts/maestro-server/templates/pubsub.yaml (1)
1-32: Add existence checks and prevent empty sections.Two issues in the template:
- Accessing
.Values.messageBroker.pubsub.*without verifying thepubsubobject exists could cause template rendering errors.- The
topics(lines 14-22) andsubscriptions(lines 23-31) sections could render empty if all nested fields are undefined, resulting in malformed YAML.🔎 Proposed fix
-{{- if eq .Values.messageBroker.type "pubsub" -}} +{{- if and (eq .Values.messageBroker.type "pubsub") .Values.messageBroker.pubsub -}} --- apiVersion: v1 kind: Secret metadata: name: {{ .Values.messageBroker.secretName }} labels: {{- include "maestro-server.labels" . | nindent 4 }} stringData: config.yaml: | {{- if .Values.messageBroker.pubsub.projectID }} projectID: {{ .Values.messageBroker.pubsub.projectID }} {{- end }} - {{- if .Values.messageBroker.pubsub.topics }} + {{- if or .Values.messageBroker.pubsub.topics.sourceEvents .Values.messageBroker.pubsub.topics.sourceBroadcast }} topics: {{- if .Values.messageBroker.pubsub.topics.sourceEvents }} sourceEvents: {{ .Values.messageBroker.pubsub.topics.sourceEvents }} {{- end }} {{- if .Values.messageBroker.pubsub.topics.sourceBroadcast }} sourceBroadcast: {{ .Values.messageBroker.pubsub.topics.sourceBroadcast }} {{- end }} {{- end }} - {{- if .Values.messageBroker.pubsub.subscriptions }} + {{- if or .Values.messageBroker.pubsub.subscriptions.agentEvents .Values.messageBroker.pubsub.subscriptions.agentBroadcast }} subscriptions: {{- if .Values.messageBroker.pubsub.subscriptions.agentEvents }} agentEvents: {{ .Values.messageBroker.pubsub.subscriptions.agentEvents }} {{- end }} {{- if .Values.messageBroker.pubsub.subscriptions.agentBroadcast }} agentBroadcast: {{ .Values.messageBroker.pubsub.subscriptions.agentBroadcast }} {{- end }} {{- end }} {{- end }}charts/maestro-agent/templates/clusterrolebinding.yaml (1)
17-32: Consider making cluster-admin binding conditional.

While the comment acknowledges the permission tradeoff, binding to `cluster-admin` grants unrestricted cluster access and poses a significant security risk for production deployments.

Consider making this binding conditional via a values flag (e.g., `.Values.rbac.clusterAdmin.enabled`) so users can opt into this elevated permission model rather than receiving it by default.
20-20: Unused variable should be removed or integrated.The
enable_broadcastvariable is defined but never used in the script. Either remove it if broadcast subscription configuration is not needed, or integrate it into the Helm values generation (e.g., for MQTTagentTopicconfiguration).🔎 Proposed fix - remove if unused
msg_broker=${MESSAGE_DRIVER_TYPE:-"mqtt"} server_replicas=${SERVER_REPLICAS:-"1"} -enable_broadcast=${ENABLE_BROADCAST_SUBSCRIPTION:-"false"} enable_istio=${ENABLE_ISTIO:-"false"}Based on learnings, past reviews identified this same issue.
charts/maestro-server/templates/deployment.yaml (4)
32-35: Remove unused volume.The "service" volume references secret "maestro" but is never mounted in any container. This appears to be dead code that should be removed.
🔎 Proposed fix
- name: rds secret: secretName: {{ .Values.database.secretName }} - - name: service - secret: - secretName: maestro - optional: true - name: {{ .Values.messageBroker.type }} secret: secretName: {{ .Values.messageBroker.secretName }}
66-75: Kubernetes version constraint needed for sidecar feature.

The `cloud-sql-proxy` init container uses `restartPolicy: Always` (line 68), which implements the native sidecar feature introduced in Kubernetes 1.28. Consider adding `kubeVersion: ">=1.28"` to `Chart.yaml` to prevent deployment failures on incompatible clusters.
59-64: Potential nil pointer if `.Values.server.grpc` is undefined.

The condition accesses `.Values.server.grpc.enabled` and `.Values.server.grpc.tls.enabled` without first verifying that `.Values.server.grpc` and `.Values.server.grpc.tls` exist. This could cause a template rendering error if these configuration blocks are not defined in values.

Verify that `values.yaml` always defines the full `server.grpc` and `server.grpc.tls` structures with defaults to prevent nil pointer errors during template rendering.
158-162: Potential nil pointer if `.Values.messageBroker.grpc` is undefined.

When `messageBroker.type` is "grpc", the template accesses `.Values.messageBroker.grpc.tls.enabled` and related fields. If the `messageBroker.grpc` or `messageBroker.grpc.tls` sections are not defined in values, this could cause a template rendering error.

Verify that the `messageBroker.grpc.tls` structure is properly initialized in `values.yaml` when `messageBroker.type` is set to "grpc".
114-116: Shell array membership check has syntax issues.

The pattern `[[ ! " ${unique_azs[@]} " =~ " ${subnet_az} " ]]` is problematic: arrays implicitly concatenate in `[[ ]]` (SC2199), and the quoted right-hand side matches literally instead of as a regex (SC2076). Use a loop-based approach for reliable membership checking.

🔎 Proposed fix using a loop

```diff
-  # Track unique AZs
-  if [[ ! " ${unique_azs[@]} " =~ " ${subnet_az} " ]]; then
-    unique_azs+=("$subnet_az")
-  fi
+  # Track unique AZs
+  az_exists=false
+  for existing_az in "${unique_azs[@]}"; do
+    if [[ "$existing_az" == "$subnet_az" ]]; then
+      az_exists=true
+      break
+    fi
+  done
+  if [[ "$az_exists" == "false" ]]; then
+    unique_azs+=("$subnet_az")
+  fi
```
130-136: Same array membership check issue.

This uses the same problematic pattern as line 114. Apply the same loop-based fix for reliable AZ membership checking.
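The loop-based check can also be factored into a small helper so both call sites share it; `az_in_list` is an illustrative name, not something from the script:

```shell
#!/usr/bin/env bash
# az_in_list: return success (0) iff the first argument exactly matches
# one of the remaining arguments. Avoids SC2199/SC2076 entirely.
az_in_list() {
  local needle="$1"; shift
  local az
  for az in "$@"; do
    [[ "$az" == "$needle" ]] && return 0
  done
  return 1
}

# Deduplicate a sample list of AZs while preserving order.
unique_azs=()
for subnet_az in us-east-1a us-east-1b us-east-1a; do
  if ! az_in_list "$subnet_az" "${unique_azs[@]}"; then
    unique_azs+=("$subnet_az")
  fi
done
echo "${unique_azs[@]}"   # → us-east-1a us-east-1b
```

Unlike the `=~` pattern, this never false-positives on AZ names that are substrings of each other.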
🧹 Nitpick comments (1)
test/setup/deploy_agent.sh (1)
113-121: Consider removing redundant readiness wait.

The `helm upgrade --install` with `--wait` already waits for resources to be ready (5m timeout). The subsequent `kubectl wait` (200s timeout) is redundant. Consider removing the kubectl wait or aligning timeouts if both are intentionally kept for extra safety.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (57)
- Makefile
- README.md
- charts/README.md
- charts/maestro-agent/Chart.yaml
- charts/maestro-agent/README.md
- charts/maestro-agent/templates/_helpers.tpl
- charts/maestro-agent/templates/clusterrole.yaml
- charts/maestro-agent/templates/clusterrolebinding.yaml
- charts/maestro-agent/templates/crd.yaml
- charts/maestro-agent/templates/deployment.yaml
- charts/maestro-agent/templates/role.yaml
- charts/maestro-agent/templates/rolebinding.yaml
- charts/maestro-agent/templates/secret.yaml
- charts/maestro-agent/templates/serviceaccount.yaml
- charts/maestro-agent/values.yaml
- charts/maestro-server/Chart.yaml
- charts/maestro-server/README.md
- charts/maestro-server/templates/_helpers.tpl
- charts/maestro-server/templates/clusterrole.yaml
- charts/maestro-server/templates/clusterrolebinding.yaml
- charts/maestro-server/templates/deployment.yaml
- charts/maestro-server/templates/grpc.yaml
- charts/maestro-server/templates/mosquitto.yaml
- charts/maestro-server/templates/mqtt.yaml
- charts/maestro-server/templates/postgresql.yaml
- charts/maestro-server/templates/pubsub.yaml
- charts/maestro-server/templates/route.yaml
- charts/maestro-server/templates/service.yaml
- charts/maestro-server/templates/serviceaccount.yaml
- charts/maestro-server/values.yaml
- templates/README.md
- templates/agent-template-aro-hcp.yml
- templates/agent-template-gcp.yml
- templates/agent-template-rosa.yml
- templates/agent-template.yml
- templates/agent-tls-template.yml
- templates/db-template.yml
- templates/db-tls-template.yml
- templates/mqtt-template.yml
- templates/mqtt-tls-template.yml
- templates/route-template.yml
- templates/route-tls-template.yml
- templates/service-template-aro-hcp.yml
- templates/service-template-gcp.yml
- templates/service-template-rosa.yml
- templates/service-template.yml
- templates/service-tls-template.yml
- test/e2e/setup/gcp/setup/agent.sh
- test/e2e/setup/gcp/setup/maestro.sh
- test/e2e/setup/rosa/README.md
- test/e2e/setup/rosa/setup/agent.sh
- test/e2e/setup/rosa/setup/maestro.sh
- test/e2e/setup/rosa/setup/teardown.sh
- test/setup/deploy_agent.sh
- test/setup/deploy_server.sh
- test/setup/env_setup.sh
- test/setup/grpc-client/grpc-client.yaml
💤 Files with no reviewable changes (17)
- templates/route-template.yml
- templates/route-tls-template.yml
- templates/mqtt-template.yml
- templates/db-template.yml
- templates/agent-template-rosa.yml
- templates/service-template-rosa.yml
- templates/README.md
- templates/agent-tls-template.yml
- templates/service-template-gcp.yml
- templates/mqtt-tls-template.yml
- templates/agent-template-gcp.yml
- templates/service-tls-template.yml
- templates/agent-template-aro-hcp.yml
- templates/service-template-aro-hcp.yml
- templates/service-template.yml
- templates/db-tls-template.yml
- templates/agent-template.yml
🚧 Files skipped from review as they are similar to previous changes (11)
- README.md
- charts/README.md
- charts/maestro-agent/templates/_helpers.tpl
- charts/maestro-server/values.yaml
- test/setup/grpc-client/grpc-client.yaml
- test/e2e/setup/rosa/README.md
- charts/maestro-agent/values.yaml
- charts/maestro-server/README.md
- charts/maestro-server/templates/_helpers.tpl
- charts/maestro-server/Chart.yaml
- charts/maestro-agent/README.md
🧰 Additional context used
🪛 checkmake (0.2.2)
Makefile
[warning] 255-255: Target body for "deploy" exceeds allowed length of 5 (6).
(maxbodylength)
[warning] 420-420: Missing required phony target "all"
(minphony)
🪛 Shellcheck (0.11.0)
test/e2e/setup/rosa/setup/maestro.sh
[error] 114-114: Arrays implicitly concatenate in [[ ]]. Use a loop (or explicit * instead of @).
(SC2199)
[warning] 114-114: Remove quotes from right-hand side of =~ to match as a regex rather than literally.
(SC2076)
[error] 132-132: Arrays implicitly concatenate in [[ ]]. Use a loop (or explicit * instead of @).
(SC2199)
[warning] 132-132: Remove quotes from right-hand side of =~ to match as a regex rather than literally.
(SC2076)
test/setup/deploy_server.sh
[warning] 20-20: enable_broadcast appears unused. Verify use (or export if used externally).
(SC2034)
test/e2e/setup/rosa/setup/teardown.sh
[warning] 7-7: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
🪛 YAMLlint (1.37.1)
charts/maestro-server/templates/mqtt.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/pubsub.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/rolebinding.yaml
[error] 8-8: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/route.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/clusterrole.yaml
[error] 7-7: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/clusterrolebinding.yaml
[error] 7-7: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/crd.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/role.yaml
[error] 8-8: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/secret.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/serviceaccount.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/clusterrole.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/clusterrolebinding.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/grpc.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/mosquitto.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/postgresql.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/service.yaml
[error] 7-7: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/serviceaccount.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Red Hat Konflux / maestro-e2e-on-pull-request
- GitHub Check: Red Hat Konflux / maestro-on-pull-request
- GitHub Check: e2e-with-istio
- GitHub Check: e2e-grpc-broker
- GitHub Check: e2e-broadcast-subscription
- GitHub Check: upgrade
- GitHub Check: e2e
🔇 Additional comments (33)
charts/maestro-agent/templates/crd.yaml (4)
1-1: LGTM: Standard CRD installation guard.

The conditional installation pattern is appropriate for allowing users to control CRD deployment.

Note: The YAMLlint syntax error is a false positive—`{{- ... }}` is valid Helm template syntax.
2-16: LGTM: CRD metadata and scope are correctly configured.

The cluster-scoped CRD with `preserveUnknownFields: false` follows best practices for Kubernetes CRDs.
17-44: LGTM: Spec schema is well-defined.

The spec properties (`agentID`, `hubHash`, `manifestWorkName`) are clearly described and appropriately typed for linking managed cluster resources to hub manifestworks.
82-92: LGTM: Version configuration and status subresource are correct.

The `served` and `storage` flags are properly set, and the status subresource enables standard Kubernetes status operations.

test/e2e/setup/rosa/setup/teardown.sh (1)
10-110: Well-structured teardown flow with proper error handling.The teardown logic is comprehensive and includes appropriate error handling for most operations:
- Environment variable validation
- Conditional VPC discovery
- Proper handling of "not found" scenarios for RDS, subnets, and security groups
- Clear progress messages throughout
charts/maestro-server/templates/postgresql.yaml (2)
1-104: Well-structured PostgreSQL deployment with proper health checks.The PostgreSQL Deployment includes:
- Proper environment variable configuration from secrets
- TCP socket liveness probe with appropriate delays
- `pg_isready` readiness probe for accurate status checking
- Conditional persistence support with proper volume mounts
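The probe combination described above has roughly this shape (user/database names and timings here are assumptions, not taken from the chart):

```yaml
# Sketch of the probe pattern: TCP liveness plus pg_isready readiness
livenessProbe:
  tcpSocket:
    port: 5432
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:
  exec:
    command: ["pg_isready", "-U", "maestro", "-d", "maestro"]
  initialDelaySeconds: 5
  periodSeconds: 10
```

`pg_isready` exits non-zero until PostgreSQL actually accepts connections, which is a stronger readiness signal than the TCP check alone.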
121-133: The `db.host` value is correctly configured.

Line 128 uses `.Values.postgresql.database.host`, which defaults to `maestro-db` in values.yaml—the same value as `.Values.postgresql.service.name`. The configuration already provides the correct in-cluster Service name for PostgreSQL connections. No change needed.

test/e2e/setup/rosa/setup/agent.sh (1)
39-99: Clean Helm-based agent deployment.The script successfully transitions from template-based deployment to Helm:
- Proper AWS IoT certificate and policy creation with JSON output parsing
- Well-structured Helm values file generation with broker configuration
- Clean `helm upgrade --install` invocation with proper namespace handling

charts/maestro-server/templates/clusterrole.yaml (1)
1-13: LGTM! Appropriate RBAC permissions for authentication/authorization.The ClusterRole grants necessary permissions for:
- `subjectaccessreviews` for authorization checks
- `tokenreviews` for authentication validation

These are standard permissions for services performing authentication and authorization operations.
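The permission set described above has this general shape (resource and group names per the standard Kubernetes authorization/authentication APIs; the role name is illustrative):

```yaml
# General shape of the permissions described above (role name illustrative)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: maestro-server-auth
rules:
  - apiGroups: ["authorization.k8s.io"]
    resources: ["subjectaccessreviews"]
    verbs: ["create"]
  - apiGroups: ["authentication.k8s.io"]
    resources: ["tokenreviews"]
    verbs: ["create"]
```

Both review resources are create-only by design: each check is submitted as a new object and never listed or updated.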
charts/maestro-agent/Chart.yaml (1)
1-16: LGTM! Well-structured Helm chart metadata.The Chart.yaml includes all required fields with appropriate values:
- Correct apiVersion (v2)
- Clear description and keywords
- Proper maintainer and source attribution
charts/maestro-agent/templates/serviceaccount.yaml (1)
1-10: LGTM! Clean ServiceAccount template.The ServiceAccount uses proper Helm helpers for:
- Name derivation via `maestro-agent.serviceAccountName`
- Standard labels via `maestro-agent.labels`
- Optional annotations with correct YAML indentation
charts/maestro-agent/templates/clusterrole.yaml (1)
1-33: LGTM! Appropriate RBAC configuration for Maestro Agent.Both ClusterRoles are well-defined:
`maestro-agent:agent`: Includes necessary permissions for:

- Managing `AppliedManifestWork` resources and their status/finalizers
- Performing authorization checks via `subjectaccessreviews`
- Service account impersonation for executor permission verification

`maestro-agent:execution`: Empty rules list is intentional for binding to specific resources via RoleBindings.
charts/maestro-server/templates/clusterrolebinding.yaml (1)
1-14: LGTM!The ClusterRoleBinding follows standard Helm templating patterns with proper use of helpers for naming, labels, and service account references.
charts/maestro-server/templates/mosquitto.yaml (1)
1-88: LGTM!The Mosquitto template properly handles conditional TLS configuration, with appropriate volume mounts, ConfigMap generation, and service exposure. The TLS-enabled and anonymous modes are correctly configured in the mosquitto.conf.
test/e2e/setup/gcp/setup/agent.sh (1)
63-91: LGTM! Clean Helm-based deployment flow.The migration from OpenShift templates to Helm charts is well-structured. The values file generation correctly captures GCP-specific configuration (workload identity annotations, PubSub settings), and error handling is properly implemented for both Helm deployment and readiness checks.
charts/maestro-agent/templates/rolebinding.yaml (1)
1-32: LGTM! Both RoleBindings follow correct patterns. The extension-apiserver binding in `kube-system` with the subject in the release namespace is the appropriate configuration for API extension access.
112-167: LGTM! Clean migration to Helm-based deployment.The replacement of OpenShift templating with Helm values-driven deployment is well-structured. The values file comprehensively covers environment, database, message broker, and server configuration.
charts/maestro-agent/templates/role.yaml (1)
1-32: LGTM! RBAC roles are appropriately scoped.The permissions granted are suitable for leader election (configmaps, leases), event handling, and extension-apiserver authentication. The use of
resourceNameson Line 31 properly restricts access to the specific configmap.charts/maestro-server/templates/serviceaccount.yaml (1)
1-10: LGTM! ServiceAccount template is clean and well-structured.The template correctly uses Helm helpers for naming and labels, and properly handles optional annotations.
charts/maestro-server/templates/service.yaml (1)
1-95: LGTM! Service definitions are well-structured.The multiple service definitions are properly templated with conditional nodePort rendering (lines 16-17, 55-56) and appropriate conditional inclusion of the gRPC broker service (lines 75-95) based on message broker type.
test/setup/env_setup.sh (2)
126-134: LGTM! HTTPS certificate setup is correct.The certificate generation for the maestro server properly creates CA and server certificates with appropriate SANs for in-cluster communication.
200-229: LGTM! TLS-gated setup is well-structured.The conditional TLS setup properly deploys service-ca components and generates gRPC certificates when TLS is enabled. The service-ca deployment wait logic ensures components are ready before proceeding.
test/setup/deploy_server.sh (1)
44-158: LGTM! Helm values generation is comprehensive.The Helm values file generation covers all necessary configuration areas: environment, image, database, message broker, server settings, and service configuration. The conditional logic for Istio annotations (lines 34-42) and MQTT/PostgreSQL embedding is well-implemented.
charts/maestro-agent/templates/deployment.yaml (1)
35-61: LGTM! Volume mounting logic is consistent and correct.The volumeMounts and volumes sections properly handle broker-specific configuration. The conditional mounting of TLS certificates (lines 38-47 and 52-61) correctly checks both broker type and certificate presence before adding the volume mounts.
charts/maestro-server/templates/deployment.yaml (1)
176-191: LGTM!

The resource limits and health probes are correctly configured. The liveness and readiness probes now use the templated `{{ .Values.server.healthCheck.bindPort }}` value, ensuring consistency with the server configuration.
95-111: LGTM for gRPC configuration.The gRPC broker configuration with optional TLS has proper YAML structure and indentation.
test/e2e/setup/rosa/setup/maestro.sh (4)
39-47: LGTM!Good idempotent password handling - reuses existing password from file or generates a new one. This prevents credential mismatches when re-running the setup script.
69-87: LGTM!Idempotent IoT policy handling with proper existence checks and graceful error handling for policy attachment.
300-348: LGTM!The Helm values file generation and deployment command are well-structured. The values correctly configure database, MQTT broker with TLS, and server settings for the ROSA environment.
223-256: LGTM!Idempotent RDS instance creation with proper existence checking and comprehensive error handling. The wait loop provides adequate timeout for database availability.
Makefile (3)
253-286: LGTM!The Helm-based deployment targets are well-structured with proper:
- Namespace creation (
--create-namespace)- Idempotent upgrade/install pattern
- Input validation for
consumer_namein deploy-agent- Graceful error handling in undeploy targets
396-420: LGTM!Useful Helm utility targets for chart development workflow: linting, packaging, and template rendering (dry-run). These support good CI/CD practices.
83-88: LGTM!Help text accurately documents the new Helm-based deployment targets and their requirements.
```yaml
{{- if eq .Values.messageBroker.type "grpc" -}}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.messageBroker.secretName }}
  labels:
    {{- include "maestro-server.labels" . | nindent 4 }}
stringData:
  config.yaml: |
    url: {{ .Values.messageBroker.grpc.url }}
```
| {{- end }} |
🧩 Analysis chain
🏁 Script executed:
fd -e yaml "values.yaml" charts/maestro-server/ -x cat {}

Repository: openshift-online/maestro
Length of output: 4762
🏁 Script executed:
fd -e yaml "templates" charts/maestro-server/ | xargs grep -l "messageBroker.type"

Repository: openshift-online/maestro
Length of output: 52
🏁 Script executed:
find charts/maestro-server/templates -name "*.yaml" -type f | head -20

Repository: openshift-online/maestro
Length of output: 583
🏁 Script executed:
cat charts/maestro-server/templates/mqtt.yaml

Repository: openshift-online/maestro
Length of output: 1319
🏁 Script executed:
cat charts/maestro-server/templates/pubsub.yaml

Repository: openshift-online/maestro
Length of output: 1277
The grpc section is always present in the default values; however, the template lacks defensive checks compared to mqtt.yaml.
The default values.yaml includes the grpc configuration section with the required url field, so the template will work with standard deployments. However, unlike mqtt.yaml which validates both the broker type and secretName before accessing nested fields, grpc.yaml only checks the type. For consistency and robustness with custom value overrides, consider adding a check for the presence of nested fields (e.g., {{- if .Values.messageBroker.grpc }} before accessing grpc.url) or document that the grpc section is mandatory when type is set to "grpc".
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🤖 Prompt for AI Agents
In charts/maestro-server/templates/grpc.yaml around lines 1 to 12, the template
only checks .Values.messageBroker.type == "grpc" before accessing
.Values.messageBroker.grpc.url which can break if a user overrides values and
removes the grpc block; update the template to defensively verify the nested
grpc object exists (e.g., wrap access in an additional {{- if
.Values.messageBroker.grpc }} check or combine both conditions) or validate
presence of .Values.messageBroker.grpc.url before using it, and ensure the
secretName is likewise checked to match the defensive pattern used in mqtt.yaml
so the template fails gracefully or provides a clear error when required fields
are missing.
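Combining the suggested checks, a hedged sketch of the guarded template might look like this (key names taken from the values discussed above; the exact layout is illustrative, not the chart's current code):

```yaml
{{- /* Render the Secret only when the broker type, grpc block, and secretName all exist */ -}}
{{- if and (eq .Values.messageBroker.type "grpc") .Values.messageBroker.grpc .Values.messageBroker.secretName }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.messageBroker.secretName }}
stringData:
  config.yaml: |
    url: {{ .Values.messageBroker.grpc.url }}
{{- end }}
```

With the `.Values.messageBroker.grpc` term in the `and`, a custom values file that omits the grpc block skips the Secret entirely instead of failing the render on `grpc.url`.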
4f5aec2 to 150b22d
Actionable comments posted: 5
♻️ Duplicate comments (12)
test/e2e/setup/rosa/setup/teardown.sh (2)
7-8: Add failure handling for `cd` commands (previously flagged).

The `cd` commands in the subshells can fail silently, leaving `PWD` or `ROSA_DIR` empty and causing downstream errors. This issue was previously identified but remains unresolved.

🔎 Proposed fix

```diff
-PWD="$(cd "$(dirname ${BASH_SOURCE[0]})" ; pwd -P)"
-ROSA_DIR="$(cd ${PWD}/.. && pwd -P)"
+PWD="$(cd "$(dirname ${BASH_SOURCE[0]})" || exit 1; pwd -P)"
+ROSA_DIR="$(cd ${PWD}/.. || exit 1; pwd -P)"
```
126-139: Certificate deletion attempted multiple times in policy loop (previously flagged).

The certificate revocation and deletion logic (lines 133-138) executes inside the policy loop. If a certificate has multiple maestro policies attached, the script attempts to revoke and delete the same certificate for each policy, which is inefficient and may cause errors. This issue was previously identified but remains unresolved.

🔎 Proposed fix

```diff
   # List and remove maestro policies
+  has_maestro_policy=false
   for policy_name in $(aws iot list-attached-policies --region ${region} --target $cert_arn --output json 2>/dev/null | jq -r '.policies[].policyName // empty'); do
     if [[ $policy_name == maestro* ]]; then
+      has_maestro_policy=true
       echo "Detaching and deleting policy: $policy_name"
       aws iot detach-policy --region ${region} --target $cert_arn --policy-name $policy_name 2>/dev/null
       aws iot delete-policy --region ${region} --policy-name $policy_name 2>/dev/null && \
         echo "Policy $policy_name deleted" || \
         echo "Failed to delete policy $policy_name"
-
-      echo "Revoking and deleting certificate: $cert_id"
-      aws iot update-certificate --region ${region} --certificate-id $cert_id --new-status REVOKED 2>/dev/null
-      sleep 5
-      aws iot delete-certificate --region ${region} --certificate-id $cert_id 2>/dev/null && \
-        echo "Certificate $cert_id deleted" || \
-        echo "Failed to delete certificate $cert_id"
     fi
   done
+
+  if [[ "$has_maestro_policy" == "true" ]]; then
+    echo "Revoking and deleting certificate: $cert_id"
+    aws iot update-certificate --region ${region} --certificate-id $cert_id --new-status REVOKED 2>/dev/null
+    sleep 5
+    aws iot delete-certificate --region ${region} --certificate-id $cert_id 2>/dev/null && \
+      echo "Certificate $cert_id deleted" || \
+      echo "Failed to delete certificate $cert_id"
+  fi
 done
```
114-116: Array membership check has known issues (duplicate).This pattern was already flagged in previous reviews. The array membership check should use an explicit loop instead of the
=~operator within[[ ]].
130-136: Array membership check issue persists (duplicate).This was previously flagged and marked as addressed in earlier commits, but the problematic pattern remains. Use an explicit loop for reliable membership checking.
charts/maestro-server/templates/grpc.yaml (1)
1-12: Consider adding defensive check forsecretNamelike in mqtt.yaml.The mqtt.yaml template guards creation with
{{- if and (eq .Values.messageBroker.type "mqtt") .Values.messageBroker.secretName }}, but grpc.yaml only checks the type. For consistency and to avoid creating a Secret with an empty/undefined name, consider adding the same guard.🔎 Proposed fix
-{{- if eq .Values.messageBroker.type "grpc" -}} +{{- if and (eq .Values.messageBroker.type "grpc") .Values.messageBroker.secretName -}}charts/maestro-server/templates/mqtt.yaml (1)
18-22: Add defensive check for `.Values.messageBroker.mqtt.tls` existence.

Accessing `.Values.messageBroker.mqtt.tls.enabled` without verifying the `tls` section exists could cause a template rendering error if users provide custom values that omit the `tls` block.

🔎 Proposed fix

```diff
-    {{- if .Values.messageBroker.mqtt.tls.enabled }}
+    {{- if and .Values.messageBroker.mqtt.tls .Values.messageBroker.mqtt.tls.enabled }}
     caFile: {{ .Values.messageBroker.mqtt.tls.caFile }}
     clientCertFile: {{ .Values.messageBroker.mqtt.tls.clientCertFile }}
     clientKeyFile: {{ .Values.messageBroker.mqtt.tls.clientKeyFile }}
     {{- end }}
```

charts/maestro-server/templates/deployment.yaml (3)
66-75: Cloud SQL Proxy uses native sidecar pattern requiring K8s 1.28+.

The `restartPolicy: Always` on the init container (line 68) implements the native sidecar feature. This was flagged in a past review recommending adding `kubeVersion: ">=1.28"` to Chart.yaml. Please ensure this constraint is documented.

```shell
#!/bin/bash
# Check if kubeVersion constraint exists in Chart.yaml
echo "=== Checking Chart.yaml for kubeVersion constraint ==="
cat charts/maestro-server/Chart.yaml | grep -i "kubeVersion" || echo "No kubeVersion constraint found"
```
59-64: Nil pointer risk when accessing `.Values.server.grpc.tls.enabled`.

The condition directly accesses nested fields without verifying parent objects exist. Although past reviews marked this as addressed, the current code still shows the same pattern. Consider adding existence checks.

🔎 Proposed fix

```diff
-      {{- if and .Values.server.grpc.enabled .Values.server.grpc.tls.enabled }}
+      {{- if and .Values.server.grpc .Values.server.grpc.enabled .Values.server.grpc.tls .Values.server.grpc.tls.enabled }}
       - name: maestro-grpc-cert
         secret:
           secretName: maestro-grpc-cert
           optional: true
       {{- end }}
```
32-35: Unused `service` volume should be removed.

The "service" volume referencing secret "maestro" is declared but never mounted in any container. This appears to be dead code.

🔎 Proposed fix

```diff
       - name: rds
         secret:
           secretName: {{ .Values.database.secretName }}
-      - name: service
-        secret:
-          secretName: maestro
-          optional: true
       - name: {{ .Values.messageBroker.type }}
```

charts/maestro-server/values.yaml (1)
32-36: Database SSL is disabled by default.

As flagged previously, `sslMode: disable` as the default exposes database traffic to potential eavesdropping. For production deployments, SSL should be enabled by default.

test/setup/deploy_server.sh (1)
18-21: Unused variable `enable_broadcast`.

The `enable_broadcast` variable is defined but never used in the script. Either remove it or incorporate it into the MQTT configuration if broadcast subscription is needed.

🔎 Proposed fix - remove if unused

```diff
 msg_broker=${MESSAGE_DRIVER_TYPE:-"mqtt"}
 server_replicas=${SERVER_REPLICAS:-"1"}
-enable_broadcast=${ENABLE_BROADCAST_SUBSCRIPTION:-"false"}
 enable_istio=${ENABLE_ISTIO:-"false"}
```

charts/maestro-server/templates/pubsub.yaml (1)
14-22: Empty `topics:` section may be rendered if both child fields are undefined.

If `.Values.messageBroker.pubsub.topics` exists but both `sourceEvents` and `sourceBroadcast` are undefined or empty, the template will emit an empty `topics:` section which may cause YAML parsing issues.

🔎 Proposed fix - only emit topics if at least one child is defined

```diff
-  {{- if .Values.messageBroker.pubsub.topics }}
+  {{- if or .Values.messageBroker.pubsub.topics.sourceEvents .Values.messageBroker.pubsub.topics.sourceBroadcast }}
   topics:
     {{- if .Values.messageBroker.pubsub.topics.sourceEvents }}
     sourceEvents: {{ .Values.messageBroker.pubsub.topics.sourceEvents }}
     {{- end }}
     {{- if .Values.messageBroker.pubsub.topics.sourceBroadcast }}
     sourceBroadcast: {{ .Values.messageBroker.pubsub.topics.sourceBroadcast }}
     {{- end }}
   {{- end }}
```
🧹 Nitpick comments (4)
charts/maestro-agent/templates/deployment.yaml (2)
8-8: Consider making replicas configurable.

The replica count is hard-coded to 1. Consider making this configurable via `values.yaml` to allow flexibility in different deployment scenarios.

🔎 Proposed refactor

In `values.yaml`, add:

```yaml
replicaCount: 1
```

Then update the deployment:

```diff
-  replicas: 1
+  replicas: {{ .Values.replicaCount }}
```
18-34: Consider adding a security context.

Adding a security context improves the security posture by running the container with minimal privileges.

🔎 Proposed refactor

Add security context in `values.yaml`:

```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  readOnlyRootFilesystem: true
```

Then add to the container spec:

```diff
           imagePullPolicy: {{ .Values.image.pullPolicy }}
+          securityContext:
+            {{- toYaml .Values.securityContext | nindent 10 }}
           env:
```

Note: Adjust `readOnlyRootFilesystem` if the agent needs write access to the filesystem.

test/e2e/setup/gcp/setup/maestro.sh (1)
159-167: Helm deployment command looks good.

The `helm upgrade --install` pattern ensures idempotent deployments. Consider adding the `--wait` flag for consistency with other deployment scripts (e.g., `test/setup/deploy_server.sh` uses `--wait --timeout 5m`).

🔎 Proposed enhancement

```diff
 helm upgrade --install maestro-server \
   ${PROJECT_DIR}/charts/maestro-server \
   --namespace maestro \
   --create-namespace \
-  --values ${output_dir}/maestro-server-values.yaml || {
+  --values ${output_dir}/maestro-server-values.yaml \
+  --wait \
+  --timeout 5m || {
     echo "Failed to deploy maestro server" >&2
     exit 1
 }
```
64-68: TLS secret name is hardcoded.The secret name
maestro-mqtt-certsis hardcoded. For consistency with other configurable values, consider making this configurable via values (e.g.,.Values.mosquitto.tls.secretName).
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (17)
- charts/maestro-agent/templates/deployment.yaml
- charts/maestro-agent/values.yaml
- charts/maestro-server/templates/deployment.yaml
- charts/maestro-server/templates/grpc.yaml
- charts/maestro-server/templates/mosquitto.yaml
- charts/maestro-server/templates/mqtt.yaml
- charts/maestro-server/templates/pubsub.yaml
- charts/maestro-server/values.yaml
- templates/service-template.yml
- templates/service-tls-template.yml
- test/e2e/setup/gcp/setup/agent.sh
- test/e2e/setup/gcp/setup/maestro.sh
- test/e2e/setup/rosa/README.md
- test/e2e/setup/rosa/setup/agent.sh
- test/e2e/setup/rosa/setup/maestro.sh
- test/e2e/setup/rosa/setup/teardown.sh
- test/setup/deploy_server.sh
💤 Files with no reviewable changes (2)
- templates/service-tls-template.yml
- templates/service-template.yml
🚧 Files skipped from review as they are similar to previous changes (4)
- test/e2e/setup/rosa/README.md
- test/e2e/setup/rosa/setup/agent.sh
- test/e2e/setup/gcp/setup/agent.sh
- charts/maestro-agent/values.yaml
🧰 Additional context used
🪛 Shellcheck (0.11.0)
test/e2e/setup/rosa/setup/teardown.sh
[warning] 7-7: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
test/e2e/setup/rosa/setup/maestro.sh
[error] 114-114: Arrays implicitly concatenate in [[ ]]. Use a loop (or explicit * instead of @).
(SC2199)
[warning] 114-114: Remove quotes from right-hand side of =~ to match as a regex rather than literally.
(SC2076)
[error] 132-132: Arrays implicitly concatenate in [[ ]]. Use a loop (or explicit * instead of @).
(SC2199)
[warning] 132-132: Remove quotes from right-hand side of =~ to match as a regex rather than literally.
(SC2076)
test/setup/deploy_server.sh
[warning] 20-20: enable_broadcast appears unused. Verify use (or export if used externally).
(SC2034)
🪛 YAMLlint (1.37.1)
charts/maestro-server/templates/mosquitto.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/grpc.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/mqtt.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/pubsub.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Red Hat Konflux / maestro-e2e-on-pull-request
- GitHub Check: Red Hat Konflux / maestro-on-pull-request
- GitHub Check: e2e
- GitHub Check: upgrade
- GitHub Check: e2e-with-istio
- GitHub Check: e2e-broadcast-subscription
- GitHub Check: e2e-grpc-broker
🔇 Additional comments (15)
charts/maestro-agent/templates/deployment.yaml (1)
35-61: LGTM! Volume configuration is correct. The volume mounts and volume definitions are properly structured with consistent conditional logic for broker-specific TLS certificates. Each mount has a corresponding volume definition, and the conditionals correctly check both broker type and certificate presence.
test/e2e/setup/rosa/setup/maestro.sh (7)
39-47: Good addition of password persistence. The idempotent password management ensures consistency across script runs and avoids orphaning database credentials.
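The persistence pattern praised in the 39-47 note can be sketched as a small shell helper; the function name, file path, and random source here are illustrative, not the script's actual code:

```shell
# Idempotent credential helper: generate the password once, persist it,
# and return the same value on every later run.
get_db_password() {
  local pw_file=$1
  if [ ! -f "$pw_file" ]; then
    # first run: generate 32 hex chars and persist them
    head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n' > "$pw_file"
  fi
  # every run: read the persisted value
  cat "$pw_file"
}
```

Re-running the helper against the same file returns the stored password instead of minting a new one, which is what keeps the RDS credentials stable across script invocations.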
69-87: LGTM: Idempotent IoT policy creation. The existence check and graceful error handling prevent failures on re-runs.
96-102: LGTM: Idempotent security group rule. The error suppression and conditional messaging handle duplicate rule creation gracefully.
143-167: LGTM: Robust subnet creation with CIDR fallback. The multiple CIDR attempts with conflict detection provide good resilience for diverse VPC configurations.
170-221: LGTM: Comprehensive DB subnet group management. The existence check, multi-AZ validation, and conditional recreation provide robust idempotency.
224-256: LGTM: Idempotent DB instance creation. The existence check and error output inspection provide good resilience and debuggability.
341-347: LGTM: Helm deployment command. The `upgrade --install` pattern with `--create-namespace` provides idempotent deployment.
charts/maestro-server/templates/mqtt.yaml (1)
1-32: LGTM overall structure. The template correctly guards the Secret creation on both broker type and secretName. The conditional handling of username, password, TLS, and topics provides good flexibility.
test/e2e/setup/gcp/setup/maestro.sh (1)
112-157: Well-structured Helm values file generation for GCP deployment. The values file correctly configures Cloud SQL Proxy, the Pub/Sub message broker with proper GCP resource names, and disables features not needed for GKE (route, https). The `sslMode: disable` setting is appropriate since Cloud SQL Proxy handles the secure connection to the database.
charts/maestro-server/values.yaml (1)
155-170: Embedded PostgreSQL configuration is appropriate for development. The hardcoded credentials and small persistence size are acceptable for the optional development-only PostgreSQL deployment. The clear comment on line 155 (`# PostgreSQL deployment (optional, for development)`) appropriately documents the intended use case.
charts/maestro-server/templates/mosquitto.yaml (1)
76-88: ConfigMap correctly handles TLS and anonymous modes. The conditional configuration properly sets up TLS with certificate verification when enabled, or allows anonymous access for development scenarios when TLS is disabled.
charts/maestro-server/templates/deployment.yaml (1)
176-191: Health check probes correctly use configurable port. The liveness and readiness probes now use `{{ .Values.server.healthCheck.bindPort }}` instead of the previously hardcoded `8083`. The scheme also correctly switches between HTTP and HTTPS based on configuration.
test/setup/deploy_server.sh (2)
44-162: Well-structured Helm values generation. The heredoc correctly generates a comprehensive values file with proper YAML structure. The conditional logic for Istio annotations and HTTPS enablement is clean and handles both scenarios appropriately.
164-170: Helm deployment with proper wait and timeout. Using `--wait --timeout 5m` ensures the deployment is fully ready before proceeding, which is good practice for test setup scripts.
containers:
- name: maestro-agent
  image: {{ include "maestro-agent.image" . }}
  imagePullPolicy: {{ .Values.image.pullPolicy }}
  env:
  - name: "MAESTRO_ENV"
    value: {{ .Values.environment | quote }}
  - name: CERT_CALLBACK_REFRESH_DURATION
    value: {{ .Values.clientCertRefreshDuration | quote }}
  command:
  - /usr/local/bin/maestro
  - agent
  - --consumer-name={{ .Values.consumerName }}
  - --workload-source-driver={{ .Values.messageBroker.type }}
  - --workload-source-config=/secrets/{{ .Values.messageBroker.type }}/config.yaml
  - --cloudevents-client-id={{ .Values.cloudeventsClientId }}
  - -v={{ .Values.logging.klogV }}
Add resource limits and requests.
The container lacks resource limits and requests, which can lead to resource contention, poor scheduling decisions, and difficulty in capacity planning.
🔎 Proposed fix
Add resource configuration in values.yaml:
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 128Mi
- -v={{ .Values.logging.klogV }}
+ resources:
+ {{- toYaml .Values.resources | nindent 10 }}
volumeMounts:Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In charts/maestro-agent/templates/deployment.yaml around lines 18 to 34, the
container spec is missing resource requests and limits; add a configurable
resources block by first adding a resources section to values.yaml (example
keys: resources.requests.cpu, resources.requests.memory, resources.limits.cpu,
resources.limits.memory) and then insert into the container spec a resources:
stanza that references these values (e.g. use {{- toYaml .Values.resources |
nindent X }} or explicit .Values.resources.requests/.limits entries) so the pod
has configurable cpu/memory requests and limits.
Add liveness and readiness probes.
The container lacks health probes, preventing Kubernetes from properly managing the pod lifecycle and detecting failures.
🔎 Proposed fix
Add probe configuration in values.yaml:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
- -v={{ .Values.logging.klogV }}
+ livenessProbe:
+ {{- toYaml .Values.livenessProbe | nindent 10 }}
+ readinessProbe:
+ {{- toYaml .Values.readinessProbe | nindent 10 }}
volumeMounts:Note: Verify the actual health check endpoints exposed by the maestro agent.
🤖 Prompt for AI Agents
In charts/maestro-agent/templates/deployment.yaml around lines 18 to 34 the
container spec lacks Kubernetes liveness and readiness probes; add probe
definitions to the container using values from values.yaml (e.g.
.Values.livenessProbe and .Values.readinessProbe) so the pod can be
health-checked. Update values.yaml to include livenessProbe and readinessProbe
blocks (HTTP GET path and port, initialDelaySeconds, periodSeconds) and then add
corresponding livenessProbe: and readinessProbe: entries under the container
spec, referencing the correct health endpoints/port for the maestro agent and
keeping them configurable via Helm values.
volumes:
- name: mosquitto-persistent-storage
  emptyDir: {}
Volume name mosquitto-persistent-storage is misleading for emptyDir.
The volume uses emptyDir: {} which does not persist data across pod restarts, but the name suggests persistent storage. This was previously flagged and marked as addressed, but the issue appears to persist. Consider renaming to mosquitto-data for clarity.
🔎 Proposed fix
volumes:
- - name: mosquitto-persistent-storage
+ - name: mosquitto-data
emptyDir: {}And update the volumeMount on line 48:
volumeMounts:
- - name: mosquitto-persistent-storage
+ - name: mosquitto-data
mountPath: /mosquitto/dataCommittable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In charts/maestro-server/templates/mosquitto.yaml around lines 58 to 60, the
volume is defined as name: mosquitto-persistent-storage but uses emptyDir: {}
(which is ephemeral); rename the volume to mosquitto-data and update the
corresponding volumeMount on line 48 to use mosquitto-data so the name
accurately reflects it is not persistent storage.
{{- if .Values.messageBroker.pubsub.subscriptions }}
subscriptions:
  {{- if .Values.messageBroker.pubsub.subscriptions.agentEvents }}
  agentEvents: {{ .Values.messageBroker.pubsub.subscriptions.agentEvents }}
  {{- end }}
  {{- if .Values.messageBroker.pubsub.subscriptions.agentBroadcast }}
  agentBroadcast: {{ .Values.messageBroker.pubsub.subscriptions.agentBroadcast }}
  {{- end }}
{{- end }}
Same empty section issue for subscriptions:.
Apply the same fix pattern to avoid emitting an empty subscriptions: section.
🔎 Proposed fix
- {{- if .Values.messageBroker.pubsub.subscriptions }}
+ {{- if or .Values.messageBroker.pubsub.subscriptions.agentEvents .Values.messageBroker.pubsub.subscriptions.agentBroadcast }}
subscriptions:
{{- if .Values.messageBroker.pubsub.subscriptions.agentEvents }}
agentEvents: {{ .Values.messageBroker.pubsub.subscriptions.agentEvents }}
{{- end }}
{{- if .Values.messageBroker.pubsub.subscriptions.agentBroadcast }}
agentBroadcast: {{ .Values.messageBroker.pubsub.subscriptions.agentBroadcast }}
{{- end }}
{{- end }}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
{{- if or .Values.messageBroker.pubsub.subscriptions.agentEvents .Values.messageBroker.pubsub.subscriptions.agentBroadcast }}
subscriptions:
  {{- if .Values.messageBroker.pubsub.subscriptions.agentEvents }}
  agentEvents: {{ .Values.messageBroker.pubsub.subscriptions.agentEvents }}
  {{- end }}
  {{- if .Values.messageBroker.pubsub.subscriptions.agentBroadcast }}
  agentBroadcast: {{ .Values.messageBroker.pubsub.subscriptions.agentBroadcast }}
  {{- end }}
{{- end }}
🤖 Prompt for AI Agents
In charts/maestro-server/templates/pubsub.yaml around lines 23–31, the template
currently renders a bare "subscriptions:" header whenever
.Values.messageBroker.pubsub.subscriptions exists even if it contains no
configured keys; change the condition so "subscriptions:" is emitted only when
at least one concrete subscription value is set (e.g., agentEvents or
agentBroadcast). Replace the outer if with a logical check that tests the
presence/truthiness of those child keys (or add a second wrapper conditional) so
that the block and its header are skipped when there are no subscriptions, and
keep indentation consistent with surrounding YAML.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

environment: production
Default environment: production may be misleading given insecure defaults.
The default environment is set to "production", but database.sslMode defaults to disable (line 35). This combination could lead operators to accidentally deploy with insecure database connections in production. Consider either changing the default environment to "development" or changing the default sslMode to "require".
🤖 Prompt for AI Agents
In charts/maestro-server/values.yaml around line 5, the default environment is
set to "production" while database.sslMode defaults to "disable", which can
mislead operators into deploying insecure DB connections; change the default to
a safer pairing by either setting environment to "development" or changing
database.sslMode default to "require" (pick one consistent default), and update
any related documentation/comments to reflect the chosen secure default.
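For illustration, one safer default pairing could look like the following sketch (key names follow the values file quoted in this review; the chosen values are an example, not the chart's actual defaults):

```yaml
# Safer defaults: opt into production explicitly, require TLS to the database.
environment: development
database:
  sslMode: require   # downgrade to "disable" only for local development
```

Either side of the pairing can be flipped, as the comment suggests, as long as `environment: production` never ships alongside `sslMode: disable`.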
This commit replaces the OpenShift template-based deployment system with Helm charts for both maestro-server and maestro-agent components.

Key changes:

1. **New Helm Charts**
   - Added charts/maestro-server/ with full Helm chart
   - Added charts/maestro-agent/ with full Helm chart
   - Both charts support MQTT, gRPC, and PubSub message brokers
   - Added optional embedded PostgreSQL and MQTT for development

2. **Makefile Refactoring**
   - Removed all oc process template generation code
   - Replaced template-based deploy/undeploy targets with Helm
   - Added simplified targets: deploy, deploy-dev, deploy-agent
   - Added utility targets: lint-charts, package-charts, template-server, template-agent
   - Updated help text to reflect Helm-based workflow

3. **E2E Test Infrastructure**
   - Updated test/setup/deploy_server.sh to use Helm
   - Updated test/setup/deploy_agent.sh to use Helm
   - Updated test/setup/env_setup.sh to remove template-based calls
   - E2E tests now deploy via Helm while maintaining same interface

4. **Documentation**
   - Added HELM_CHARTS.md with comprehensive guide
   - Added charts/README.md with migration guide
   - Added detailed READMEs for both charts
   - Documented all configuration options

Benefits:
- Cleaner deployment interface (make deploy, make deploy-dev)
- Better configuration management via values.yaml
- Support for all three message broker types
- Embedded dependencies for development
- Industry-standard Helm packaging

Signed-off-by: $(git config user.name) <$(git config user.email)>
Signed-off-by: clyang82 <chuyang@redhat.com>
Force-pushed from 150b22d to 62c7657 (Compare)
Actionable comments posted: 5
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
🤖 Fix all issues with AI agents
In @charts/maestro-agent/templates/clusterrolebinding.yaml:
- Around line 17-32: Replace the hardcoded cluster-admin ClusterRoleBinding with
a Helm-values-driven conditional: wrap the ClusterRoleBinding resource (the
ClusterRoleBinding named "maestro-agent:execution-admin" that references the
serviceAccount from include "maestro-agent.serviceAccountName" and
.Release.Namespace) in an if block keyed to a new boolean
.Values.rbac.clusterAdmin.enabled (default false), and when disabled either omit
the resource entirely or render a more restrictive role/ClusterRoleBinding
alternative (or document that users must supply an appropriate
Role/RoleBinding); ensure the values schema/docs are updated to expose
rbac.clusterAdmin.enabled so users can opt into cluster-admin when they
explicitly set it.
In @charts/maestro-agent/templates/secret.yaml:
- Around line 65-66: The conditional in the Helm template is mismatched: it
checks .Values.messageBroker.pubsub.credentialsJSON but renders credentialsFile,
so update the condition to check .Values.messageBroker.pubsub.credentialsFile
(or alternatively render credentialsJSON instead) so the rendered field matches
the tested value; locate the conditional around the credentials block in the
secret.yaml template and make the condition and the referenced field
(.Values.messageBroker.pubsub.credentialsFile or
.Values.messageBroker.pubsub.credentialsJSON) consistent.
In @charts/maestro-server/templates/deployment.yaml:
- Around line 32-35: The "service" volume (name: service, secretName: maestro)
is never mounted and should be removed; delete the volume entry that defines
name: service / secretName: maestro from the Deployment spec and ensure no
remaining references to that volume (e.g., any volumeMounts) exist elsewhere in
the template.
In @charts/maestro-server/templates/postgresql.yaml:
- Around line 105-133: The chart currently renders DB credentials directly from
values into two Secret manifests (the Secret named via {{
.Values.postgresql.service.name }}-creds and the Secret using {{
.Values.database.secretName }}), exposing plaintext credentials; add support for
an opt-in external/bring-your-own secret by introducing
.Values.postgresql.database.existingSecret (and/or reuse
.Values.database.secretName) in values.yaml, change the template to
conditionally skip creating these Secret resources when existingSecret is set
and instead reference that secret name where consumed, and update README to
document using external secret providers or supplying existingSecret to avoid
storing credentials in chart values.
🟡 Minor comments (16)
charts/maestro-agent/README.md-103-103 (1)
103-103: Fix gRPC port inconsistency in documentation. The default gRPC URL shows port `8091`, but the examples at lines 34 and 156 use port `8090`. This inconsistency may confuse users and cause connectivity issues.
🔎 Verification needed
Please verify the correct gRPC port and update either the default value or the examples to be consistent. Also confirm whether the service name should be `maestro-grpc-broker` or `maestro-grpc`.
test/e2e/setup/rosa/README.md-8-8 (1)
8-8: Fix typo: "Ensue" should be "Ensure". The word "Ensue" is incorrect; it should be "Ensure".
📝 Proposed fix
-- Ensue your `aws` CLI is logged in with your AWS account and your AWS account should have the permissions to operate AWS IoT and AWS RDS PostgreSQL in your provided region
+- Ensure your `aws` CLI is logged in with your AWS account and your AWS account should have the permissions to operate AWS IoT and AWS RDS PostgreSQL in your provided region
charts/maestro-server/templates/grpc.yaml-1-12 (1)
-- Ensue your `aws` CLI is logged in with your AWS account and your AWS account should have the permissions to operate AWS IoT and AWS RDS PostgreSQL in your provided region +- Ensure your `aws` CLI is logged in with your AWS account and your AWS account should have the permissions to operate AWS IoT and AWS RDS PostgreSQL in your provided regioncharts/maestro-server/templates/grpc.yaml-1-12 (1)
1-12: Add defensive check for.Values.messageBroker.grpcexistence.The template accesses
.Values.messageBroker.grpc.urlwithout verifying that thegrpcconfiguration block exists. If a user setsmessageBroker.type: "grpc"but omits themessageBroker.grpcsection in their values override, the template will fail during rendering.🔒 Proposed defensive fix
-{{- if eq .Values.messageBroker.type "grpc" -}} +{{- if and (eq .Values.messageBroker.type "grpc") .Values.messageBroker.grpc -}} --- apiVersion: v1 kind: Secret metadata: name: {{ .Values.messageBroker.secretName }} labels: {{- include "maestro-server.labels" . | nindent 4 }} stringData: config.yaml: | - url: {{ .Values.messageBroker.grpc.url }} + url: {{ .Values.messageBroker.grpc.url | required "messageBroker.grpc.url is required when type is grpc" }} {{- end }}charts/maestro-server/templates/pubsub.yaml-23-31 (1)
23-31: Prevent emptysubscriptions:section from being emitted.Similar to the topics section, if
.Values.messageBroker.pubsub.subscriptionsexists but bothagentEventsandagentBroadcastare undefined, the template will emit an emptysubscriptions:section.🔎 Proposed fix
- {{- if .Values.messageBroker.pubsub.subscriptions }} + {{- if or .Values.messageBroker.pubsub.subscriptions.agentEvents .Values.messageBroker.pubsub.subscriptions.agentBroadcast }} subscriptions:charts/maestro-server/templates/pubsub.yaml-14-22 (1)
14-22: Prevent emptytopics:section from being emitted.If
.Values.messageBroker.pubsub.topicsexists but bothsourceEventsandsourceBroadcastare undefined, the template will emit an emptytopics:section which may cause YAML parsing issues.🔎 Proposed fix
- {{- if .Values.messageBroker.pubsub.topics }} + {{- if or .Values.messageBroker.pubsub.topics.sourceEvents .Values.messageBroker.pubsub.topics.sourceBroadcast }} topics:charts/maestro-server/templates/mqtt.yaml-18-22 (1)
18-22: Guard TLS configuration access.Accessing
.Values.messageBroker.mqtt.tls.enabledwithout first verifying thetlssection exists could cause template rendering errors if the TLS configuration block is undefined.🔎 Proposed fix
- {{- if .Values.messageBroker.mqtt.tls.enabled }} + {{- if and .Values.messageBroker.mqtt.tls .Values.messageBroker.mqtt.tls.enabled }} caFile: {{ .Values.messageBroker.mqtt.tls.caFile }} clientCertFile: {{ .Values.messageBroker.mqtt.tls.clientCertFile }} clientKeyFile: {{ .Values.messageBroker.mqtt.tls.clientKeyFile }} {{- end }}charts/maestro-server/templates/pubsub.yaml-1-1 (1)
1-1: Guard against undefined.Values.messageBroker.pubsub.The template accesses
.Values.messageBroker.pubsub.*fields starting at line 11 without verifying thepubsubsection exists. This will cause template rendering errors if the section is undefined.🔎 Proposed fix
-{{- if eq .Values.messageBroker.type "pubsub" -}} +{{- if and (eq .Values.messageBroker.type "pubsub") .Values.messageBroker.pubsub -}}charts/maestro-agent/templates/crd.yaml-67-67 (1)
67-67: Fix the field description. The description incorrectly states "Name is the namespace..." when it should say "Namespace is the namespace..." to match the field name.
🔎 Proposed fix
- description: Name is the namespace of the Kubernetes resource, empty string indicates it is a cluster scoped resource.
+ description: Namespace is the namespace of the Kubernetes resource, empty string indicates it is a cluster scoped resource.
test/e2e/setup/rosa/setup/teardown.sh-7-8 (1)
7-8: Add failure handling for `cd` commands. Both `cd` commands can fail silently. Add `|| exit 1` to ensure the script exits on failure.
🔎 Proposed fix
-PWD="$(cd "$(dirname ${BASH_SOURCE[0]})" ; pwd -P)"
-ROSA_DIR="$(cd ${PWD}/.. && pwd -P)"
+PWD="$(cd "$(dirname ${BASH_SOURCE[0]})" || exit 1; pwd -P)"
+ROSA_DIR="$(cd "${PWD}/.." || exit 1; pwd -P)"
charts/maestro-server/templates/mosquitto.yaml-48-60 (1)
48-60: Rename `mosquitto-persistent-storage` to reflect ephemeral nature. As previously flagged, the volume name `mosquitto-persistent-storage` is misleading since it uses `emptyDir: {}`, which is ephemeral and does not persist data across pod restarts. This could lead to incorrect assumptions about data durability.
📝 Proposed fix
volumeMounts: - - name: mosquitto-persistent-storage + - name: mosquitto-data mountPath: /mosquitto/data - name: mosquitto-config mountPath: /mosquitto/config/mosquitto.conf subPath: mosquitto.conf {{- if .Values.mosquitto.tls.enabled }} - name: mosquitto-certs mountPath: /mosquitto/certs readOnly: true {{- end }} volumes: - - name: mosquitto-persistent-storage + - name: mosquitto-data emptyDir: {}test/e2e/setup/rosa/setup/agent.sh-73-99 (1)
73-99: Add error handling for Helm deployment. The Helm deployment section correctly generates the values file and executes the Helm command. However, there's no error checking after the Helm command (lines 94-98), which means if the Helm installation fails, the script will still proceed to wait for the deployment (line 99), potentially timing out without useful feedback.
🛡️ Add error handling
# Deploy Maestro agent using Helm PROJECT_DIR="$(cd ${ROOT_DIR}/../../../.. && pwd -P)" helm upgrade --install maestro-agent \ ${PROJECT_DIR}/charts/maestro-agent \ --namespace maestro-agent \ --create-namespace \ - --values ${output_dir}/maestro-agent-values.yaml + --values ${output_dir}/maestro-agent-values.yaml || { + echo "Failed to deploy maestro-agent using Helm" + exit 1 +} + +echo "Waiting for maestro-agent deployment to be ready..." oc -n maestro-agent wait deploy/maestro-agent --for condition=Available=True --timeout=300sMakefile-98-98 (1)
98-98: Fix incomplete Go version string.
`GO_VERSION:=go1.25.` has a trailing dot which makes it an invalid version string. This will cause the version verification in `verify` and `install` targets to fail or produce confusing output.
🔎 Proposed fix
-GO_VERSION:=go1.25.
+GO_VERSION:=go1.25
test/e2e/setup/rosa/setup/maestro.sh-130-136 (1)
130-136: Same array membership check issue at line 132. Apply the same loop-based fix for reliable AZ membership checking.
🔎 Proposed fix
- for az in $(aws ec2 describe-availability-zones --region=${region} --output json | jq -r '.AvailabilityZones[].ZoneName'); do
- if [[ ! " ${unique_azs[@]} " =~ " ${az} " ]]; then
- target_az="$az"
- break
- fi
- done
+ for az in $(aws ec2 describe-availability-zones --region=${region} --output json | jq -r '.AvailabilityZones[].ZoneName'); do
+ az_in_use=false
+ for existing_az in "${unique_azs[@]}"; do
+ if [[ "$existing_az" == "$az" ]]; then
+ az_in_use=true
+ break
+ fi
+ done
+ if [[ "$az_in_use" == "false" ]]; then
+ target_az="$az"
+ break
+ fi
+ done
test/setup/deploy_agent.sh-102-111 (1)
102-111: Same YAML indentation issue for gRPC TLS configuration. The gRPC TLS block has the same structural issue — the `tls:` section is appended as a separate heredoc and will not be properly nested under `grpc:`.
🔎 Proposed fix
if [ "$msg_broker" = "grpc" ]; then cat >> "$values_file" <<EOF grpc: url: maestro-grpc-broker.${namespace}:8091 +$( [ "$tls_enable" = "true" ] && echo " tls: + caCert: /secrets/grpc-broker-cert/ca.crt + clientCert: /secrets/grpc-broker-cert/client.crt + clientKey: /secrets/grpc-broker-cert/client.key" || echo "" ) EOF - - # Add TLS configuration if enabled - if [ "$tls_enable" = "true" ]; then - cat >> "$values_file" <<EOF - tls: - caCert: /secrets/grpc-broker-cert/ca.crt - clientCert: /secrets/grpc-broker-cert/client.crt - clientKey: /secrets/grpc-broker-cert/client.key -EOF - fi fiCommittable suggestion skipped: line range outside the PR's diff.
test/setup/deploy_agent.sh-85-92 (1)
85-92: YAML indentation issue for MQTT TLS configuration remains.The TLS fields (
rootCert,clientCert,clientKey) are appended as a separate heredoc with 4-space indentation, but they need to be nested under themqtt:block. Currently they appear as top-level keys undermessageBrokerrather than undermqtt.🔎 Proposed fix - integrate TLS into the mqtt block
if [ "$msg_broker" = "mqtt" ]; then cat >> "$values_file" <<EOF mqtt: host: maestro-mqtt.${namespace} port: "1883" user: "" password: "" +$( [ "$tls_enable" = "true" ] && echo " rootCert: /secrets/mqtt-certs/ca.crt + clientCert: /secrets/mqtt-certs/client.crt + clientKey: /secrets/mqtt-certs/client.key" || echo "" ) EOF - - # Add TLS configuration if enabled - if [ "$tls_enable" = "true" ]; then - cat >> "$values_file" <<EOF - rootCert: /secrets/mqtt-certs/ca.crt - clientCert: /secrets/mqtt-certs/client.crt - clientKey: /secrets/mqtt-certs/client.key -EOF - fi fiCommittable suggestion skipped: line range outside the PR's diff.
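The nesting problem flagged above can be seen in isolation: keys appended by a second heredoc only end up under `mqtt:` when they carry the same indentation as the keys inside the first heredoc. A minimal sketch (the helper name and values are illustrative, not the script's actual code):

```shell
# Append mqtt settings, then conditionally append the TLS keys at the SAME
# indentation level as host/port so YAML nests them under mqtt:.
write_mqtt_values() {
  local values_file=$1 tls_enable=$2
  cat >> "$values_file" <<EOF
  mqtt:
    host: maestro-mqtt.example
    port: "1883"
EOF
  if [ "$tls_enable" = "true" ]; then
    cat >> "$values_file" <<EOF
    rootCert: /secrets/mqtt-certs/ca.crt
    clientCert: /secrets/mqtt-certs/client.crt
    clientKey: /secrets/mqtt-certs/client.key
EOF
  fi
}
```

With 4-space indentation the TLS keys parse as children of `mqtt:`; with the 2-space indentation of the original separate heredoc they would parse as siblings of it.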
test/e2e/setup/rosa/setup/maestro.sh-114-116 (1)
114-116: Shell array membership check has syntax issues. The pattern `[[ ! " ${unique_azs[@]} " =~ " ${subnet_az} " ]]` is unreliable: arrays implicitly concatenate in `[[ ]]`, and the quoted right-hand side matches literally instead of as a regex. This can produce false positives or negatives.
🔎 Proposed fix using a loop
- # Track unique AZs
- if [[ ! " ${unique_azs[@]} " =~ " ${subnet_az} " ]]; then
- unique_azs+=("$subnet_az")
- fi
+ # Track unique AZs
+ az_exists=false
+ for existing_az in "${unique_azs[@]}"; do
+ if [[ "$existing_az" == "$subnet_az" ]]; then
+ az_exists=true
+ break
+ fi
+ done
+ if [[ "$az_exists" == "false" ]]; then
+ unique_azs+=("$subnet_az")
+ fi
Committable suggestion skipped: line range outside the PR's diff.
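The loop-based membership check suggested for both call sites can be factored into a stand-alone function; the function name is illustrative:

```shell
# Loop-based membership test; avoids the SC2199/SC2076 pitfalls of
# [[ " ${arr[@]} " =~ " $x " ]] (implicit concatenation, literal-vs-regex match).
contains_az() {
  local needle=$1; shift
  local az
  for az in "$@"; do
    if [ "$az" = "$needle" ]; then
      return 0
    fi
  done
  return 1
}
```

Callers pass the candidate first and expand the array after it, e.g. `contains_az "$subnet_az" "${unique_azs[@]}"`, which also behaves correctly for AZ names that are substrings of one another.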
🧹 Nitpick comments (11)
charts/maestro-server/templates/_helpers.tpl (2)
55-57: Add default fallback for service account name. The helper directly references `.Values.serviceAccount.name` without a fallback. If the value is not provided, templates using this helper may fail to render.
♻️ Proposed refactor with default handling
{{/* Service Account name */}} {{- define "maestro-server.serviceAccountName" -}} -{{- .Values.serviceAccount.name }} +{{- default "maestro-server" .Values.serviceAccount.name }} {{- end }}
62-64: Validate or provide defaults for image components. The helper constructs an image reference without validating that `registry`, `repository`, and `tag` are provided. Missing values would produce malformed image references.
♻️ Proposed refactor with validation
{{/* Image name */}} {{- define "maestro-server.image" -}} +{{- if not .Values.image.registry -}} +{{- fail "image.registry is required" -}} +{{- end -}} +{{- if not .Values.image.repository -}} +{{- fail "image.repository is required" -}} +{{- end -}} +{{- if not .Values.image.tag -}} +{{- fail "image.tag is required" -}} +{{- end -}} {{- printf "%s/%s:%s" .Values.image.registry .Values.image.repository .Values.image.tag }} {{- end }}charts/maestro-agent/values.yaml (2)
5-5: Consider using a specific image tag instead of "latest" for production. The `environment` is set to `production` (line 5), but the image tag is set to `latest` (line 28). Using `latest` in production environments is an anti-pattern because it makes deployments non-reproducible, complicates rollbacks, and can lead to unexpected behavior when the image updates.
- Changing the default
tagto a specific version (e.g.,v1.0.0,sha-abc123)- Or documenting that users must override the tag value for production deployments
📝 Suggested improvement
environment: production # CRD installation control # Set to false if CRDs are already installed installCRDs: true # Consumer name - identifies the agent/cluster consumerName: cluster1 # cloudevents client Id. in rosa, only consumer name is used to create certs cloudeventsClientId: cluster1-work-agent # default client certificate refresh/reload duration for message broker clientCertRefreshDuration: 5m # Service Account configuration serviceAccount: name: maestro-agent-sa annotations: {} image: registry: quay.io repository: redhat-user-workloads/maestro-rhtap-tenant/maestro/maestro - tag: latest + tag: v1.0.0 # Use a specific version tag for production pullPolicy: IfNotPresentAlso applies to: 25-29
59-72: Consider documenting required Pub/Sub configuration.The Pub/Sub configuration includes helpful inline comments about GCP resource names, but the empty string defaults for
projectID,credentialsJSON, and all topics/subscriptions might lead to silent failures if users setmessageBroker.type: pubsubwithout realizing these fields are required.Consider adding a comment block at the top of the
pubsubsection noting which fields are required when using this broker type, or document this in the chart's README.📝 Example documentation improvement
# Google Cloud Pub/Sub configuration + # Required fields when type is "pubsub": + # - projectID: Your GCP project ID + # - credentialsJSON or credentialsFile: GCP service account credentials + # - topics.agentEvents, topics.agentBroadcast: Full topic resource names + # - subscriptions.sourceEvents, subscriptions.sourceBroadcast: Full subscription resource names pubsub: projectID: "" credentialsJSON: ""charts/maestro-server/templates/mqtt.yaml (1)
23-31: Prevent emptytopics:section from being emitted.If
.Values.messageBroker.mqtt.topicsexists but bothsourceEventsandagentEventsare undefined, the template will emit an emptytopics:section.🔎 Proposed fix
- {{- if .Values.messageBroker.mqtt.topics }} + {{- if or .Values.messageBroker.mqtt.topics.sourceEvents .Values.messageBroker.mqtt.topics.agentEvents }} topics:charts/maestro-server/templates/postgresql.yaml (2)
20-34: Consider making `storageClassName` configurable. The PVC uses the cluster's default StorageClass. For production deployments, users may need to specify a particular StorageClass to meet performance or availability requirements.
💡 Add optional storageClassName configuration
Update the values file to include:
postgresql: persistence: enabled: true size: "10Gi" storageClassName: "" # empty string uses defaultThen in the PVC spec:
spec: accessModes: - ReadWriteOnce + {{- if .Values.postgresql.persistence.storageClassName }} + storageClassName: {{ .Values.postgresql.persistence.storageClassName }} + {{- end }} resources: requests: storage: {{ .Values.postgresql.persistence.size }}
35-104: Add resource limits and make `imagePullPolicy` configurable.

The Deployment configuration is generally sound, with appropriate probes and strategy. However, consider the following improvements:

- Resource limits are missing: production deployments should specify resource requests and limits to ensure predictable performance and prevent resource contention.
- Hardcoded `imagePullPolicy`: line 59 hardcodes `IfNotPresent`, which should be configurable via values to support different deployment scenarios.

💡 Add resource limits and configurable imagePullPolicy

Add to values.yaml:

```yaml
postgresql:
  resources:
    requests:
      memory: "256Mi"
      cpu: "250m"
    limits:
      memory: "512Mi"
      cpu: "500m"
  imagePullPolicy: IfNotPresent
```

Update the Deployment:

```diff
 containers:
   - name: postgresql
     image: {{ .Values.postgresql.image }}
-    imagePullPolicy: IfNotPresent
+    imagePullPolicy: {{ .Values.postgresql.imagePullPolicy }}
     env:
       # ... env vars ...
+    resources:
+      {{- toYaml .Values.postgresql.resources | nindent 12 }}
     livenessProbe:
       # ... probes ...
```
charts/maestro-server/templates/mosquitto.yaml (1)

20-68: Add liveness and readiness probes, resource limits, and make `imagePullPolicy` configurable.

The Deployment is missing several production-readiness features:

- No health probes: without liveness and readiness probes, Kubernetes cannot detect whether Mosquitto is healthy or ready to accept connections.
- No resource limits: as with PostgreSQL, resource requests and limits should be specified.
- Hardcoded `imagePullPolicy`: should be configurable via values.

💡 Add probes and resource management

Add to values.yaml:

```yaml
mosquitto:
  resources:
    requests:
      memory: "64Mi"
      cpu: "100m"
    limits:
      memory: "128Mi"
      cpu: "200m"
  imagePullPolicy: IfNotPresent
```

Update the Deployment:

```diff
 containers:
   - image: {{ .Values.mosquitto.image }}
-    imagePullPolicy: IfNotPresent
+    imagePullPolicy: {{ .Values.mosquitto.imagePullPolicy }}
     name: mosquitto
+    resources:
+      {{- toYaml .Values.mosquitto.resources | nindent 10 }}
+    livenessProbe:
+      tcpSocket:
+        port: {{ .Values.mosquitto.service.port }}
+      initialDelaySeconds: 10
+      periodSeconds: 10
+    readinessProbe:
+      tcpSocket:
+        port: {{ .Values.mosquitto.service.port }}
+      initialDelaySeconds: 5
+      periodSeconds: 5
     ports:
       - containerPort: {{ .Values.mosquitto.service.port }}
```
test/setup/env_setup.sh (1)

200-209: Consider using `kubectl wait` with appropriate conditions instead of fixed sleeps.

The fixed `sleep 10` statements (lines 206 and 208) can be flaky in CI environments. While you're already using `kubectl wait` after the sleeps, consider using a loop with retry logic, or relying solely on the wait commands.

🔎 Suggested improvement

```diff
 kubectl apply -f "${PWD}/test/setup/service-ca"
-sleep 10 # wait for openshift service-ca-operator is created
-kubectl wait deploy/service-ca-operator -n openshift-service-ca-operator --for condition=Available=True --timeout=300s
-sleep 10 # wait for openshift service-ca is created
-kubectl wait deploy/service-ca -n openshift-service-ca --for condition=Available=True --timeout=300s
+# Wait for service-ca-operator deployment to exist, then wait for availability
+kubectl wait deploy/service-ca-operator -n openshift-service-ca-operator --for condition=Available=True --timeout=300s
+# Wait for service-ca deployment to exist, then wait for availability
+kubectl wait deploy/service-ca -n openshift-service-ca --for condition=Available=True --timeout=300s
```
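The fixed sleeps could also be replaced by a generic poll-until-ready helper that re-runs an arbitrary probe command; a minimal sketch (the helper name and the `kubectl` probe in the usage note are illustrative, not taken from the script):

```shell
#!/bin/sh
# wait_for: re-run a probe command until it succeeds or a timeout elapses.
# Usage: wait_for <timeout_seconds> <interval_seconds> <command> [args...]
wait_for() {
  _timeout=$1
  _interval=$2
  shift 2
  _elapsed=0
  until "$@"; do
    if [ "$_elapsed" -ge "$_timeout" ]; then
      echo "timed out after ${_timeout}s waiting for: $*" >&2
      return 1
    fi
    sleep "$_interval"
    _elapsed=$((_elapsed + _interval))
  done
}
```

For example, `wait_for 300 5 kubectl get deploy/service-ca-operator -n openshift-service-ca-operator` would block until the Deployment object exists, after which the existing `kubectl wait ... --for condition=Available=True` call takes over.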
charts/maestro-server/templates/service.yaml (1)

75-95: Consider making the gRPC broker port configurable.

The gRPC broker service hardcodes port `8091`. For consistency with other services that use values (e.g., `.Values.service.api.port`), consider making this configurable via values.

🔎 Proposed fix

```diff
 ports:
   - name: grpc-broker
-    port: 8091
-    targetPort: 8091
+    port: {{ .Values.service.grpcBroker.port | default 8091 }}
+    targetPort: {{ .Values.service.grpcBroker.port | default 8091 }}
     protocol: TCP
```
test/setup/deploy_server.sh (1)

174-176: Consider reducing or removing the fixed sleep.

The 30-second sleep at line 175 is a significant delay. Consider using a health-check endpoint, or reducing the wait time if the service is already verified as available by `kubectl wait`.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (57)
- Makefile
- README.md
- charts/README.md
- charts/maestro-agent/Chart.yaml
- charts/maestro-agent/README.md
- charts/maestro-agent/templates/_helpers.tpl
- charts/maestro-agent/templates/clusterrole.yaml
- charts/maestro-agent/templates/clusterrolebinding.yaml
- charts/maestro-agent/templates/crd.yaml
- charts/maestro-agent/templates/deployment.yaml
- charts/maestro-agent/templates/role.yaml
- charts/maestro-agent/templates/rolebinding.yaml
- charts/maestro-agent/templates/secret.yaml
- charts/maestro-agent/templates/serviceaccount.yaml
- charts/maestro-agent/values.yaml
- charts/maestro-server/Chart.yaml
- charts/maestro-server/README.md
- charts/maestro-server/templates/_helpers.tpl
- charts/maestro-server/templates/clusterrole.yaml
- charts/maestro-server/templates/clusterrolebinding.yaml
- charts/maestro-server/templates/deployment.yaml
- charts/maestro-server/templates/grpc.yaml
- charts/maestro-server/templates/mosquitto.yaml
- charts/maestro-server/templates/mqtt.yaml
- charts/maestro-server/templates/postgresql.yaml
- charts/maestro-server/templates/pubsub.yaml
- charts/maestro-server/templates/route.yaml
- charts/maestro-server/templates/service.yaml
- charts/maestro-server/templates/serviceaccount.yaml
- charts/maestro-server/values.yaml
- templates/README.md
- templates/agent-template-aro-hcp.yml
- templates/agent-template-gcp.yml
- templates/agent-template-rosa.yml
- templates/agent-template.yml
- templates/agent-tls-template.yml
- templates/db-template.yml
- templates/db-tls-template.yml
- templates/mqtt-template.yml
- templates/mqtt-tls-template.yml
- templates/route-template.yml
- templates/route-tls-template.yml
- templates/service-template-aro-hcp.yml
- templates/service-template-gcp.yml
- templates/service-template-rosa.yml
- templates/service-template.yml
- templates/service-tls-template.yml
- test/e2e/setup/gcp/setup/agent.sh
- test/e2e/setup/gcp/setup/maestro.sh
- test/e2e/setup/rosa/README.md
- test/e2e/setup/rosa/setup/agent.sh
- test/e2e/setup/rosa/setup/maestro.sh
- test/e2e/setup/rosa/setup/teardown.sh
- test/setup/deploy_agent.sh
- test/setup/deploy_server.sh
- test/setup/env_setup.sh
- test/setup/grpc-client/grpc-client.yaml
💤 Files with no reviewable changes (17)
- templates/service-template-aro-hcp.yml
- templates/README.md
- templates/service-tls-template.yml
- templates/route-template.yml
- templates/agent-tls-template.yml
- templates/agent-template.yml
- templates/service-template-gcp.yml
- templates/route-tls-template.yml
- templates/db-tls-template.yml
- templates/agent-template-rosa.yml
- templates/agent-template-gcp.yml
- templates/agent-template-aro-hcp.yml
- templates/service-template-rosa.yml
- templates/service-template.yml
- templates/mqtt-template.yml
- templates/mqtt-tls-template.yml
- templates/db-template.yml
🚧 Files skipped from review as they are similar to previous changes (8)
- charts/README.md
- test/setup/grpc-client/grpc-client.yaml
- charts/maestro-agent/templates/_helpers.tpl
- charts/maestro-server/values.yaml
- charts/maestro-agent/Chart.yaml
- charts/maestro-server/README.md
- charts/maestro-server/Chart.yaml
- README.md
🧰 Additional context used
🪛 checkmake (0.2.2)
Makefile
[warning] 273-273: Target body for "deploy" exceeds allowed length of 5 (6).
(maxbodylength)
[warning] 438-438: Missing required phony target "all"
(minphony)
🪛 Shellcheck (0.11.0)
test/e2e/setup/rosa/setup/teardown.sh
[warning] 7-7: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
test/setup/deploy_server.sh
[warning] 20-20: enable_broadcast appears unused. Verify use (or export if used externally).
(SC2034)
test/e2e/setup/rosa/setup/maestro.sh
[error] 114-114: Arrays implicitly concatenate in [[ ]]. Use a loop (or explicit * instead of @).
(SC2199)
[warning] 114-114: Remove quotes from right-hand side of =~ to match as a regex rather than literally.
(SC2076)
[error] 132-132: Arrays implicitly concatenate in [[ ]]. Use a loop (or explicit * instead of @).
(SC2199)
[warning] 132-132: Remove quotes from right-hand side of =~ to match as a regex rather than literally.
(SC2076)
🪛 YAMLlint (1.37.1)
charts/maestro-agent/templates/secret.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/mosquitto.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/serviceaccount.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/clusterrole.yaml
[error] 7-7: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/clusterrole.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/clusterrolebinding.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/role.yaml
[error] 8-8: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/clusterrolebinding.yaml
[error] 7-7: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/crd.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-agent/templates/rolebinding.yaml
[error] 8-8: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/grpc.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/mqtt.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/postgresql.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/pubsub.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/route.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/service.yaml
[error] 7-7: syntax error: expected the node content, but found '-'
(syntax)
charts/maestro-server/templates/serviceaccount.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: Red Hat Konflux / maestro-e2e-on-pull-request
- GitHub Check: Red Hat Konflux / maestro-on-pull-request
- GitHub Check: e2e-with-istio
- GitHub Check: upgrade
- GitHub Check: e2e
- GitHub Check: e2e-grpc-broker
- GitHub Check: verify
- GitHub Check: e2e-broadcast-subscription
🔇 Additional comments (25)
charts/maestro-agent/templates/clusterrole.yaml (1)
27-33: Verify the empty rules array in the `maestro-agent:execution` ClusterRole.

The `maestro-agent:execution` ClusterRole has an empty `rules` array, granting no permissions. Please confirm this is intentional (e.g., for aggregation or as a placeholder). If this role is meant to grant execution permissions, add the appropriate rules or remove the unused ClusterRole.
charts/maestro-agent/templates/rolebinding.yaml (1)
1-32: LGTM! Clean RBAC structure.

The RoleBinding resources are correctly configured, with proper namespace scoping and appropriate use of `kube-system` for the extension-apiserver binding. The Helm templating follows best practices.

Note: the YAMLlint syntax error on line 8 is a false positive; the `{{-` syntax is valid Helm template notation.
1-10: LGTM! Well-structured ServiceAccount template.The ServiceAccount resource is correctly configured with proper helper usage for name and labels, and the conditional annotations block follows Helm best practices.
Note: The YAMLlint syntax error on line 6 is a false positive—this is valid Helm template syntax.
charts/maestro-server/templates/clusterrolebinding.yaml (1)
1-14: LGTM! Proper ClusterRoleBinding configuration.The ClusterRoleBinding is correctly structured with appropriate use of Helm helpers for name, labels, and service account reference. The RBAC binding follows Kubernetes best practices.
Note: The YAMLlint syntax error on line 6 is a false positive—this is valid Helm template syntax.
charts/maestro-server/templates/clusterrole.yaml (1)
1-13: LGTM! Minimal RBAC permissions for authentication/authorization.The ClusterRole grants appropriate permissions for a server component that needs to validate tokens and perform subject access reviews. The permissions are correctly scoped to authentication and authorization operations only.
test/e2e/setup/gcp/setup/agent.sh (2)
63-84: LGTM! Helm values generation is well-structured.The Helm values file generation correctly uses heredoc syntax with proper variable substitution. The YAML structure includes all necessary configuration for the maestro-agent deployment, including the GKE workload identity annotation.
86-92: LGTM! Helm deployment command follows best practices.The Helm deployment uses
upgrade --installfor idempotent deployments, includes proper error handling, and waits for the deployment to become ready with an appropriate timeout.charts/maestro-server/templates/mqtt.yaml (1)
1-17: LGTM! Proper guards and optional field handling.The template correctly guards the Secret creation with both type and secretName checks. Optional username and password fields are properly conditionally included.
charts/maestro-agent/templates/clusterrolebinding.yaml (2)
1-16: LGTM! Appropriately scoped agent binding.The binding to the custom
maestro-agent:agentClusterRole follows Helm best practices with proper templating.
33-47: LGTM! Appropriately scoped execution binding.

The binding to the custom `maestro-agent:execution` ClusterRole follows the same proper templating pattern as the agent binding.
maestro-agent:executionClusterRole follows the same proper templating pattern as the agent binding.test/e2e/setup/rosa/setup/teardown.sh (1)
126-139: Certificate deletion may be attempted multiple times.

The certificate revocation and deletion logic is inside the policy loop. If a certificate has multiple attached maestro policies, the script will attempt to revoke and delete the same certificate multiple times.
Move the certificate deletion outside the policy loop or add a break statement after the first deletion.
Likely an incorrect or invalid review comment.
test/e2e/setup/gcp/setup/maestro.sh (1)
159-171: Helm deployment correctly references the proper resource names.

The Helm release is named `maestro-server`, while the underlying Deployment resource is named `maestro` (as defined in the chart template). The wait command correctly references the Deployment resource.
1-19: LGTM! Appropriate RBAC for agent operations.The Role grants necessary permissions for leader election (configmaps and leases) and event recording. The permissions follow the least-privilege principle and are appropriately scoped to the release namespace.
20-32: Access toextension-apiserver-authenticationConfigMap is necessary and correctly configured.The maestro-agent requires read access to this ConfigMap, as evidenced by logs showing the agent starts controllers to monitor CA certificate data (
client-ca-fileandrequestheader-client-ca-file) from this ConfigMap during startup. The RBAC rule is properly restricted withresourceNamesand read-only permissions (get,list,watch).charts/maestro-server/templates/postgresql.yaml (1)
1-19: LGTM! Standard PostgreSQL Service definition.The Service is properly configured with ClusterIP type and appropriate port mappings. The OpenShift-specific annotation for URI exposure is acceptable and won't affect non-OpenShift deployments.
charts/maestro-server/templates/serviceaccount.yaml (1)
1-10: LGTM! Clean ServiceAccount implementation.The ServiceAccount uses proper Helm template helpers for naming and labeling, with support for optional annotations (useful for cloud provider workload identity configurations). The implementation is correct and follows Helm best practices.
charts/maestro-server/templates/mosquitto.yaml (1)
1-19: LGTM! Standard Mosquitto Service definition.The Service configuration is appropriate with ClusterIP type and proper port mapping. The OpenShift-specific annotation won't affect non-OpenShift deployments.
test/e2e/setup/rosa/setup/agent.sh (1)
39-61: LGTM! Improved AWS CLI output parsing.The addition of
--output jsonwithjqparsing (lines 44, 55, 60) makes the script more robust and maintainable compared to text-based parsing. The AWS IoT setup logic is sound.test/setup/env_setup.sh (1)
126-134: LGTM! HTTPS certificate setup is well-structured.The certificate generation flow correctly creates a CA and server certificate with appropriate SANs for local and in-cluster access. The idempotency check (
if [ ! -d "$https_cert_dir" ]) prevents re-creation on subsequent runs.charts/maestro-server/templates/service.yaml (1)
1-21: LGTM! Service definitions use proper Helm templating patterns.The static analysis error about line 7 is a false positive — YAMLlint doesn't understand Helm template syntax. The
{{- include "..." . | nindent N }}pattern is the correct way to include helpers with proper indentation.test/setup/deploy_server.sh (1)
44-162: LGTM! Well-structured Helm values generation.The heredoc-based values file generation is comprehensive and correctly handles conditional Istio annotations, database configuration, message broker settings, and service definitions. The shell command substitutions for boolean values (
$( [ "$enable_istio" = "true" ] && echo "false" || echo "true" )) work correctly.charts/maestro-agent/templates/deployment.yaml (1)
1-16: LGTM! Deployment structure follows Helm best practices.The use of helper templates for labels (
maestro-agent.labels,maestro-agent.selectorLabels) and service account name ensures consistency across the chart.test/e2e/setup/rosa/setup/maestro.sh (1)
300-347: LGTM! Helm-based deployment for ROSA environment is well-structured.The Helm values file generation and deployment correctly configure the Maestro server for AWS IoT and RDS PostgreSQL integration with TLS enabled.
Makefile (2)
271-304: LGTM! Helm-based deploy targets are well-structured.The
deploy,undeploy,deploy-agent, andundeploy-agenttargets correctly use Helm for deployment. Thedeploy-agenttarget appropriately validates thatconsumer_nameis set before proceeding.
414-438: LGTM! Useful Helm chart utility targets added.

The `lint-charts`, `package-charts`, `template-server`, and `template-agent` targets provide helpful utilities for chart development and CI validation.
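The fail-fast check on `consumer_name` can be expressed as a small reusable guard; a shell sketch (the helper function and its error message are illustrative, not the Makefile's actual implementation):

```shell
#!/bin/sh
# Fail fast when a required parameter is unset or empty, mirroring the
# Makefile's consumer_name validation for `make deploy-agent`.
require_set() {
  _name=$1
  _value=$2
  if [ -z "$_value" ]; then
    echo "error: $_name is required (e.g. make deploy-agent consumer_name=cluster1)" >&2
    return 1
  fi
}

consumer_name="cluster1"
require_set consumer_name "$consumer_name" && echo "deploying agent for $consumer_name"
```

Validating inputs before invoking `helm` surfaces a clear error immediately instead of a confusing failure midway through the deployment.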
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: maestro-agent:execution-admin
  labels:
    {{- include "maestro-agent.labels" . | nindent 4 }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  # We deploy a controller that could work with permission lower than cluster-admin, the tradeoff is
  # responsivity because list/watch cannot be maintained over too many namespaces.
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: {{ include "maestro-agent.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
```
cluster-admin binding security concern persists.
The binding to cluster-admin grants unrestricted cluster access. While the in-code comment explains the responsiveness tradeoff, this remains a significant security risk for production deployments. Consider making this binding optional via a values flag (e.g., .Values.rbac.clusterAdmin.enabled) to allow more restrictive permissions by default.
Would you like me to propose a values-driven conditional approach that allows users to opt into cluster-admin while defaulting to more restrictive permissions?
🤖 Prompt for AI Agents
In @charts/maestro-agent/templates/clusterrolebinding.yaml around lines 17 - 32,
Replace the hardcoded cluster-admin ClusterRoleBinding with a Helm-values-driven
conditional: wrap the ClusterRoleBinding resource (the ClusterRoleBinding named
"maestro-agent:execution-admin" that references the serviceAccount from include
"maestro-agent.serviceAccountName" and .Release.Namespace) in an if block keyed
to a new boolean .Values.rbac.clusterAdmin.enabled (default false), and when
disabled either omit the resource entirely or render a more restrictive
role/ClusterRoleBinding alternative (or document that users must supply an
appropriate Role/RoleBinding); ensure the values schema/docs are updated to
expose rbac.clusterAdmin.enabled so users can opt into cluster-admin when they
explicitly set it.
```yaml
{{- if .Values.messageBroker.pubsub.credentialsJSON }}
credentialsFile: {{ .Values.messageBroker.pubsub.credentialsFile }}
```
Conditional check and field reference mismatch.
The condition checks for .Values.messageBroker.pubsub.credentialsJSON but the output references .Values.messageBroker.pubsub.credentialsFile. This inconsistency means the credentialsFile field will only be rendered when credentialsJSON is present, which may not be the intended logic.
🔎 Proposed fix
```diff
-{{- if .Values.messageBroker.pubsub.credentialsJSON }}
+{{- if .Values.messageBroker.pubsub.credentialsFile }}
 credentialsFile: {{ .Values.messageBroker.pubsub.credentialsFile }}
 {{- end }}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-{{- if .Values.messageBroker.pubsub.credentialsJSON }}
+{{- if .Values.messageBroker.pubsub.credentialsFile }}
 credentialsFile: {{ .Values.messageBroker.pubsub.credentialsFile }}
 {{- end }}
```
🤖 Prompt for AI Agents
In @charts/maestro-agent/templates/secret.yaml around lines 65 - 66, The
conditional in the Helm template is mismatched: it checks
.Values.messageBroker.pubsub.credentialsJSON but renders credentialsFile, so
update the condition to check .Values.messageBroker.pubsub.credentialsFile (or
alternatively render credentialsJSON instead) so the rendered field matches the
tested value; locate the conditional around the credentials block in the
secret.yaml template and make the condition and the referenced field
(.Values.messageBroker.pubsub.credentialsFile or
.Values.messageBroker.pubsub.credentialsJSON) consistent.
```yaml
- name: service
  secret:
    secretName: maestro
    optional: true
```
🛠️ Refactor suggestion | 🟠 Major
Remove unused volume.
The "service" volume references secret "maestro" but is never mounted in any container. This is dead code.
♻️ Proposed fix
```diff
 - name: rds
   secret:
     secretName: {{ .Values.database.secretName }}
-- name: service
-  secret:
-    secretName: maestro
-    optional: true
 - name: {{ .Values.messageBroker.type }}
   secret:
     secretName: {{ .Values.messageBroker.secretName }}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-- name: service
-  secret:
-    secretName: maestro
-    optional: true
 - name: rds
   secret:
     secretName: {{ .Values.database.secretName }}
 - name: {{ .Values.messageBroker.type }}
   secret:
     secretName: {{ .Values.messageBroker.secretName }}
```
🤖 Prompt for AI Agents
In @charts/maestro-server/templates/deployment.yaml around lines 32 - 35, The
"service" volume (name: service, secretName: maestro) is never mounted and
should be removed; delete the volume entry that defines name: service /
secretName: maestro from the Deployment spec and ensure no remaining references
to that volume (e.g., any volumeMounts) exist elsewhere in the template.
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.mosquitto.service.name }}
  labels:
    {{- include "maestro-server.labels" . | nindent 4 }}
data:
  mosquitto.conf: |
    listener {{ .Values.mosquitto.service.port }} 0.0.0.0
    {{- if .Values.mosquitto.tls.enabled }}
    cafile /mosquitto/certs/ca.crt
    certfile /mosquitto/certs/server.crt
    keyfile /mosquitto/certs/server.key
    require_certificate true
    use_identity_as_username true
    {{- else }}
    allow_anonymous true
    {{- end }}
{{- end }}
```
Document security implications of anonymous access mode.
When mosquitto.tls.enabled is false, the configuration allows anonymous access (line 86), which means any client can connect without authentication. This is acceptable for development/testing but creates a security risk in production environments.
Consider:
- Adding a comment in the values file warning about this security implication
- Supporting password-based authentication as an alternative to TLS client certificates
- Documenting this behavior in the chart's README
📚 Suggested documentation
Add to charts/maestro-server/values.yaml:
```yaml
mosquitto:
  enabled: false
  tls:
    enabled: false
    # WARNING: When TLS is disabled, anonymous access is allowed.
    # This is INSECURE for production use. Enable TLS with client
    # certificates or implement alternative authentication.
```

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.postgresql.service.name }}-creds
  labels:
    {{- include "maestro-server.labels" . | nindent 4 }}
  annotations:
    template.openshift.io/expose-database_name: "{.data['database-name']}"
    template.openshift.io/expose-password: "{.data['database-password']}"
    template.openshift.io/expose-username: "{.data['database-user']}"
stringData:
  database-name: {{ .Values.postgresql.database.name }}
  database-password: {{ .Values.postgresql.database.password }}
  database-user: {{ .Values.postgresql.database.user }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.database.secretName }}
  labels:
    {{- include "maestro-server.labels" . | nindent 4 }}
stringData:
  db.host: {{ .Values.postgresql.database.host }}
  db.port: {{ .Values.postgresql.service.port | quote }}
  db.name: {{ .Values.postgresql.database.name }}
  db.user: {{ .Values.postgresql.database.user }}
  db.password: {{ .Values.postgresql.database.password }}
{{- end }}
```
Consider external secret management for database credentials.
Both Secrets render credentials directly from Helm values using stringData, which means sensitive database credentials will be stored in plaintext in values files and potentially in version control or CI/CD logs.
For production deployments, consider:
- Supporting external secret management solutions (Sealed Secrets, External Secrets Operator, Vault, etc.)
- Allowing users to bring their own Secret and skip these manifests
- At minimum, documenting the security implications in the chart's README
🔐 Suggested approach for external secret support
Add to values.yaml:
```yaml
postgresql:
  database:
    # When existingSecret is set, skip creating the secrets
    existingSecret: ""
    secretName: "maestro-db"
    # Only used when existingSecret is empty
    name: "maestro"
    user: "maestro"
    password: ""  # Should be provided via --set or secure values
```

Update the template with conditional rendering:

```diff
+{{- if and .Values.postgresql.enabled (not .Values.postgresql.database.existingSecret) -}}
 ---
 apiVersion: v1
 kind: Secret
 # ... both secrets ...
+{{- end }}
```

And reference the secret name appropriately:

```yaml
secretName: {{ .Values.postgresql.database.existingSecret | default .Values.database.secretName }}
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @charts/maestro-server/templates/postgresql.yaml around lines 105 - 133, The
chart currently renders DB credentials directly from values into two Secret
manifests (the Secret named via {{ .Values.postgresql.service.name }}-creds and
the Secret using {{ .Values.database.secretName }}), exposing plaintext
credentials; add support for an opt-in external/bring-your-own secret by
introducing .Values.postgresql.database.existingSecret (and/or reuse
.Values.database.secretName) in values.yaml, change the template to
conditionally skip creating these Secret resources when existingSecret is set
and instead reference that secret name where consumed, and update README to
document using external secret providers or supplying existingSecret to avoid
storing credentials in chart values.
No description provided.