
[Helm] Camel-K Operator sharding via Helm is broken. #6533

@timmy-mathew-ah


Bug description

Installing multiple global Camel K operators via Helm for operator sharding fails on the second `helm install` because of the hardcoded ClusterRole and ClusterRoleBinding names in rbacs-descoped.yaml (reference line here). The same problem affects the other ClusterRoleBindings defined later in that file.

The official documentation on multiple operators describes a multi-operator setup where each operator has a unique OPERATOR_ID and only reconciles resources annotated with the matching camel.apache.org/operator.id. However, the Helm chart cannot actually deploy this setup because all cluster-scoped RBAC resources use fixed names regardless of the Helm release name or operator.operatorId value.
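For context, with sharding each resource opts in to a specific operator via that annotation. A minimal illustration (the Integration name is made up):

```yaml
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: my-route   # illustrative name
  annotations:
    # Only the operator whose OPERATOR_ID matches this value reconciles the resource
    camel.apache.org/operator.id: camel-k-shard-1
spec: {}
```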

First install (succeeds):

helm install camel-k-shard-1 camel-k/camel-k \
 --create-namespace \
 --namespace platform-camel-k-shard-1 \
 --set-string operator.global=true \
 --set operator.operatorId=camel-k-shard-1

Second install (fails):

helm install camel-k-shard-2 camel-k/camel-k \
 --create-namespace \
 --namespace platform-camel-k-shard-2 \
 --set-string operator.global=true \
 --set operator.operatorId=camel-k-shard-2
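The collision can also be confirmed without touching the cluster by rendering the chart client-side (assuming the `camel-k` repo is already added) and inspecting the cluster-scoped RBAC names; both shards emit the same fixed `camel-k-operator` name:

```shell
# Render the second shard's manifests locally and show the hardcoded
# cluster-scoped RBAC names that clash with the first release.
helm template camel-k-shard-2 camel-k/camel-k \
  --namespace platform-camel-k-shard-2 \
  --set-string operator.global=true \
  --set operator.operatorId=camel-k-shard-2 \
  | grep -B1 'name: camel-k-operator$'
```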

Error:

Error: INSTALLATION FAILED: unable to continue with install: ClusterRole "camel-k-operator" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "camel-k-shard-2": current value is "camel-k-shard-1"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "platform-camel-k-shard-2": current value is "platform-camel-k-shard-1"

Issue #6145 reported this, and it appeared to be fixed by PR #6191, which removed several unnecessary cluster-scoped resources.

However, the core problem still exists in rbacs-descoped.yaml, the template used when operator.global=true. All ClusterRole and ClusterRoleBinding names in that file are hardcoded rather than parameterized by {{ .Release.Name }}, {{ .Values.operator.operatorId }}, or {{ include "camel-k.fullname" . }}.

The fact that the namespace is already dynamic suggests the chart was meant to support multiple installs in different namespaces. Could metadata.name be fixed in a similar way, as sketched below, to use camel-k-operator-{{ .Release.Name }}? Happy to open a PR.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: camel-k
  name: camel-k-operator-{{ .Release.Name }}   # ← unique per Helm release
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: camel-k-operator
subjects:
- kind: ServiceAccount
  name: camel-k-operator
  namespace: '{{ .Release.Namespace }}'
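Note that the install error above is actually raised for the ClusterRole itself, so it would presumably need the same treatment, with the binding's roleRef.name updated to match. A hedged sketch (rules elided, unchanged from the existing template):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: camel-k
  # Release-scoped name, since Helm ownership metadata conflicts on the
  # ClusterRole before it even reaches the bindings.
  name: camel-k-operator-{{ .Release.Name }}
rules: []   # keep the rules exactly as in the current template
```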

Camel K or runtime version

v2.9.1

Metadata

Labels

kind/bug (Something isn't working)
