When creating a DatabaseCluster with proxy.type: proxysql, the everest-operator panics with "assignment to entry in nil map" inside applyProxySQLCfg. The cluster gets stuck in creating status indefinitely and never provisions any pods.
This bug was introduced in v1.3.0 (#552) and is present through the current v1.14.0.
Version
- OpenEverest: v1.14.0
- Kubernetes: v1.34.3
Steps to Reproduce
- Install OpenEverest v1.14.0
- Create a DatabaseCluster with proxy.type: proxysql and explicit proxy resources:
apiVersion: everest.percona.com/v1alpha1
kind: DatabaseCluster
metadata:
  name: mysql-pxc
  namespace: everest-db
spec:
  engine:
    type: pxc
    replicas: 3
    version: "8.0.39-30.1"
    resources:
      cpu: "1"
      memory: 2G
    storage:
      class: standard
      size: 20Gi
    userSecretsName: everest-secrets-mysql-pxc
  proxy:
    type: proxysql
    replicas: 1
    expose:
      type: ClusterIP
    resources:
      cpu: 200m
      memory: 200M
- Observe: kubectl get databasecluster shows status creating forever; no pods are created.
Error
From kubectl logs -n everest-system deployment/everest-operator:
ERROR Observed a panic {"panic": "assignment to entry in nil map", ...
goroutine 791 [running]:
...
github.com/percona/everest-operator/internal/controller/everest/providers/pxc.(*applier).applyProxySQLCfg(0xc000c9f548)
/workspace/internal/controller/everest/providers/pxc/applier.go:706 +0x8e8
github.com/percona/everest-operator/internal/controller/everest/providers/pxc.(*applier).Proxy(0xc000c9f548)
/workspace/internal/controller/everest/providers/pxc/applier.go:320 +0x198
Root Cause (Analysis)
Primary bug (introduced in v1.3.0, commit b731282 / EVEREST-1555)
In defaultSpec(), the ProxySQL spec initializes Resources.Limits but never initializes Resources.Requests, leaving it as a nil map:
ProxySQL: &pxcv1.ProxySQLSpec{
	PodSpec: pxcv1.PodSpec{
		Resources: corev1.ResourceRequirements{
			Limits: corev1.ResourceList{...}, // initialized
			// Requests is nil, never initialized
		},
	},
},
Commit b731282 (EVEREST-1555) added shouldUpdateRequests and started writing into proxySQL.Resources.Requests[corev1.ResourceCPU] inside applyProxySQLCfg, but never initialized the map in defaultSpec(). Any DatabaseCluster with non-zero proxy.resources.cpu or proxy.resources.memory hits the panic.
Note: if proxy.resources is omitted entirely (zero values), the !p.DB.Spec.Proxy.Resources.CPU.IsZero() guard prevents the write, so clusters without explicit proxy resources do not hit this panic.
Secondary bug (current main)
applyProxySQLCfg references HAProxy's resources in the condition that guards ProxySQL's Requests assignment, a copy-paste error:
// Wrong: reads HAProxy resources instead of ProxySQL
if shouldUpdateRequests ||
	p.currentPerconaXtraDBClusterSpec.HAProxy.Resources.Requests.Cpu().
		Equal(p.DB.Spec.Proxy.Resources.CPU) {
	proxySQL.Resources.Requests[corev1.ResourceCPU] = ...
}
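A likely fix for the guard (a sketch, not a tested patch; the assigned value on the last line is assumed to be the DB spec's CPU, which the snippet above elides) is to compare against ProxySQL's own current requests:

```go
// Sketch: read ProxySQL's current requests, not HAProxy's.
if shouldUpdateRequests ||
	p.currentPerconaXtraDBClusterSpec.ProxySQL.Resources.Requests.Cpu().
		Equal(p.DB.Spec.Proxy.Resources.CPU) {
	proxySQL.Resources.Requests[corev1.ResourceCPU] = p.DB.Spec.Proxy.Resources.CPU
}
```

Note this swap only addresses the copy-paste error; the nil-map initialization from the primary bug still has to be fixed for the assignment itself to be safe.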
Additional Error (after removing proxy.resources)
If proxy.resources is omitted (working around the nil map panic), a second error surfaces from the PXC operator:
Error: validate cr: ProxySQL: volumeSpec should be specified
The everest-operator does not populate volumeSpec for ProxySQL when translating the DatabaseCluster CRD into a PerconaXtraDBCluster resource. This means ProxySQL is completely unusable via the DatabaseCluster CRD in v1.14.0; both bugs must be fixed together.
There is no workaround for this second bug: any manual patch to the PerconaXtraDBCluster resource is reverted by the everest-operator on the next reconcile loop (~5s). The DatabaseCluster CRD does not expose a volumeSpec field for the proxy, so there is no supported way to pass it through.
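For context, the PXC operator's validation expects the generated PerconaXtraDBCluster to carry a proxysql.volumeSpec block along these lines (a sketch of the target shape the translation layer would need to emit; the 2Gi size is illustrative, not a recommended value):

```yaml
proxysql:
  enabled: true
  volumeSpec:
    persistentVolumeClaim:
      resources:
        requests:
          storage: 2Gi
```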
Workaround
Use proxy.type: haproxy instead
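Applied to the manifest from the reproduction steps, the workaround is a one-field change in the proxy section (resource values kept from the example; the HAProxy code path does not go through applyProxySQLCfg, so it avoids this panic):

```yaml
proxy:
  type: haproxy
  replicas: 1
  expose:
    type: ClusterIP
  resources:
    cpu: 200m
    memory: 200M
```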