
feat: support Valkey Cluster with sharding #116

Open
ankitpatisfdc wants to merge 4 commits into valkey-io:main from ankitpatisfdc:feature/valkey-cluster-mode

Conversation

@ankitpatisfdc

This PR builds on @qjsoq’s excellent work while incorporating @sgissi’s preference for a single chart (which is honestly my own preference, too) and @lyatanski’s suggestion of podManagementPolicy: Parallel.

I worked on this PR because I needed Valkey 9 and the latest improvements in the valkey-helm chart.

In addition, I fixed an issue where hostnames longer than 45 characters were truncated, which is a common problem in Kubernetes clusters.
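For a sense of scale, even short names run past that limit; a quick illustrative check (the release name `valkey` and namespace `default` here are assumptions, not taken from any particular deployment):

```shell
# Illustrative only: with release name "valkey" in namespace "default", a
# StatefulSet pod's headless-service FQDN is already 50 characters long,
# past the 45-character limit mentioned above.
fqdn="valkey-0.valkey-headless.default.svc.cluster.local"
echo "${#fqdn}"
```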

Here’s the chart in action:

000 20260109185229+0530 ankit.pati@ankitpa-ltm9hdr ~ $ kubectl --context=stage-us-central1-rbe-0 --namespace=buildfarm exec statefulset/ci-valkey --container=ci-valkey -- valkey-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_nodes_pfail:0
cluster_nodes_fail:0
cluster_voting_nodes_pfail:0
cluster_voting_nodes_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:3
cluster_stats_messages_ping_sent:2511
cluster_stats_messages_pong_sent:2713
cluster_stats_messages_meet_sent:1
cluster_stats_messages_publish_sent:30
cluster_stats_messages_sent:5255
cluster_stats_messages_ping_received:2713
cluster_stats_messages_pong_received:2515
cluster_stats_messages_publish_received:66
cluster_stats_messages_received:5294
total_cluster_links_buffer_limit_exceeded:0

000 20260109185233+0530 ankit.pati@ankitpa-ltm9hdr ~ $ kubectl --context=stage-us-central1-rbe-0 --namespace=buildfarm exec statefulset/ci-valkey --container=ci-valkey -- valkey-cli cluster nodes
a96796fa1a1bafd8e1c1b5dcff7e86a81ecc301b 172.20.132.138:6379@16379,ci-valkey-5.ci-valkey-headless.buildfarm.svc.cluster.local slave 1e746d6101bee77d21909a6482d010a9e53e02c7 0 1767964954000 2 connected
1d89a83e526e664e53171afceb86cf5ead015dc1 172.20.129.82:6379@16379,ci-valkey-2.ci-valkey-headless.buildfarm.svc.cluster.local myself,master - 0 0 3 connected 10923-16383
351cdd7429b97164834490f6c0a220fa5ea4b73e 172.20.130.147:6379@16379,ci-valkey-0.ci-valkey-headless.buildfarm.svc.cluster.local master - 0 1767964956000 1 connected 0-5460
d566ebaf7e130369eb0e6c8dd766da85bb28aac3 172.20.129.201:6379@16379,ci-valkey-4.ci-valkey-headless.buildfarm.svc.cluster.local slave 351cdd7429b97164834490f6c0a220fa5ea4b73e 0 1767964955648 1 connected
1e746d6101bee77d21909a6482d010a9e53e02c7 172.20.132.24:6379@16379,ci-valkey-1.ci-valkey-headless.buildfarm.svc.cluster.local master - 0 1767964956657 2 connected 5461-10922
95504bc6ad4ff8ecec3e905a31d213153bce93b6 172.20.131.201:6379@16379,ci-valkey-3.ci-valkey-headless.buildfarm.svc.cluster.local slave 1d89a83e526e664e53171afceb86cf5ead015dc1 0 1767964955000 3 connected

000 20260109185237+0530 ankit.pati@ankitpa-ltm9hdr ~ $
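As a sanity check on the slot ranges in the output above: 16384 hash slots divided across 3 primaries yields the 5461/5462/5461 distribution shown. This is plain arithmetic, not chart code:

```shell
# 16384 hash slots across 3 shards: floor division gives 5461 slots per
# shard with 1 slot left over, hence the 5461/5462/5461 split behind the
# ranges 0-5460, 5461-10922, and 10923-16383 above.
echo $((16384 / 3))
echo $((16384 % 3))
```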

ankitpatisfdc force-pushed the feature/valkey-cluster-mode branch from 5d173ec to 787bbd3 on January 9, 2026, 13:33
@ferozed

ferozed commented Jan 13, 2026

When can we get this PR merged into the main branch?

@sgissi
Collaborator

sgissi commented Jan 14, 2026

Thanks for the PR! I'll review it. We were planning on Sentinel before Cluster, but let me take a look and test. If you've already tested, feel free to post results here, especially the interaction with other features like TLS/authentication.

@ankitpatisfdc
Author

@sgissi Thank you for looking into this.

I’ve ensured all code paths I touched are covered by unit tests, but I’ve only tested non-TLS, unauthenticated Valkey cluster deployments, because Istio provides mTLS in my Kubernetes clusters, so Valkey doesn’t have to; there’s no access to Valkey from outside the same Kubernetes cluster.

I’ve posted my testing results in the introductory comment on my PR.

I’ll take a closer look at TLS and auth today, and update you with my findings.

@ferozed

ferozed commented Jan 15, 2026

@ankitpatisfdc I deployed the latest from your branch, but it is not working.

This is the state after deploying

$ kubectl -n namespace exec statefulset/valkey --container=valkey -- valkey-cli cluster info
cluster_state:fail
cluster_slots_assigned:10923
cluster_slots_ok:10923
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:2
cluster_size:2
cluster_current_epoch:2
cluster_my_epoch:2
cluster_stats_messages_ping_sent:4556
cluster_stats_messages_meet_sent:296
cluster_stats_messages_sent:4852
cluster_stats_messages_pong_received:4848
cluster_stats_messages_received:4848
total_cluster_links_buffer_limit_exceeded:0
 root [ /home/oradev ]$ kubectl -n namespace exec statefulset/valkey --container=valkey -- valkey-cli cluster nodes
2825cef15a5534ce917fa6ea12d2e1378885d20a :6379@16379,valkey-1.valkey-headless.namespace.svc.cluster.local myself,master - 0 0 2 connected 5461-10922
d917f3a655d1bc88c5064df92fce971dc23a26a3 10.244.68.153:6379@16379 master - 0 1768436271806 1 connected 0-5460

All other pods are logging this message:

Waiting for cluster to be initialized by pod-0...

Whereas pod-0 thinks it created the cluster:

Initializing as ordinal 0. Total nodes: 6, Primaries: 3, Replicas per shard: 1
8:M 14 Jan 2026 23:02:17.758 # WARNING: Changing databases number from 16 to 1 since we are in cluster mode
8:M 14 Jan 2026 23:02:17.760 * oO0OoO0OoO0Oo Valkey is starting oO0OoO0OoO0Oo
8:M 14 Jan 2026 23:02:17.760 * Valkey version=8.1.2, bits=64, commit=00000000, modified=0, pid=8, just started
8:M 14 Jan 2026 23:02:17.760 * Configuration loaded
8:M 14 Jan 2026 23:02:17.761 * monotonic clock: POSIX clock_gettime
Waiting for local Valkey to start...
8:M 14 Jan 2026 23:02:17.762 * Running mode=cluster, port=6379.
8:M 14 Jan 2026 23:02:17.762 * No cluster configuration found, I'm d917f3a655d1bc88c5064df92fce971dc23a26a3
8:M 14 Jan 2026 23:02:17.767 * Server initialized
8:M 14 Jan 2026 23:02:17.767 * Ready to accept connections tcp
Local Valkey is ready at 10.244.68.153
8:M 14 Jan 2026 23:02:19.775 # Cluster is currently down: I am part of a minority partition.
No healthy cluster found. Proceeding with initial creation logic.
This is the primary-0 node, creating a new cluster...
Waiting for valkey-2.valkey-headless.namespace.svc.cluster.local to be ready...
(message repeated 17 times)
Creating cluster with nodes: valkey-0.valkey-headless.namespace.svc.cluster.local:6379 valkey-1.valkey-headless.namespace.svc.cluster.local:6379 valkey-2.valkey-headless.namespace.svc.cluster.local:6379 valkey-3.valkey-headless.namespace.svc.cluster.local:6379 valkey-4.valkey-headless.namespace.svc.cluster.local:6379 valkey-5.valkey-headless.namespace.svc.cluster.local:6379
>>> Performing hash slots allocation on 6 node(s)...
Primary[0] -> Slots 0 - 5460
Primary[1] -> Slots 5461 - 10922
Primary[2] -> Slots 10923 - 16383
Adding replica valkey-4.valkey-headless.namespace.svc.cluster.local:6379 to valkey-0.valkey-headless.namespace.svc.cluster.local:6379
Adding replica valkey-5.valkey-headless.namespace.svc.cluster.local:6379 to valkey-1.valkey-headless.namespace.svc.cluster.local:6379
Adding replica valkey-3.valkey-headless.namespace.svc.cluster.local:6379 to valkey-2.valkey-headless.namespace.svc.cluster.local:6379
M: d917f3a655d1bc88c5064df92fce971dc23a26a3 valkey-0.valkey-headless.namespace.svc.cluster.local:6379
   slots:[0-5460] (5461 slots) master
M: 2825cef15a5534ce917fa6ea12d2e1378885d20a valkey-1.valkey-headless.namespace.svc.cluster.local:6379
   slots:[5461-10922] (5462 slots) master
M: 5a16b43e96da92e98ab1a59da9a4190380084cfb valkey-2.valkey-headless.namespace.svc.cluster.local:6379
   slots:[10923-16383] (5461 slots) master
S: ea8679d3f6954432409e066c212df5826e701c37 valkey-3.valkey-headless.namespace.svc.cluster.local:6379
   replicates 5a16b43e96da92e98ab1a59da9a4190380084cfb
S: 294ce1fec0facafea64cf459609a89c40947a275 valkey-4.valkey-headless.namespace.svc.cluster.local:6379
   replicates d917f3a655d1bc88c5064df92fce971dc23a26a3
S: dd2db49f518dd570be02b9904e77e0021bb4c5d4 valkey-5.valkey-headless.namespace.svc.cluster.local:6379
   replicates 2825cef15a5534ce917fa6ea12d2e1378885d20a
Can I set the above configuration? (type 'yes' to accept): 8:M 14 Jan 2026 23:03:04.134 # Cluster is currently down: At least one hash slot is not served by any available node. Please check the 'cluster-require-full-coverage' configuration.
8:M 14 Jan 2026 23:03:04.147 * configEpoch set to 1 via CLUSTER SET-CONFIG-EPOCH
8:M 14 Jan 2026 23:03:04.216 * IP address for this node updated to 10.244.68.153

This is the spec and status from statefulset

spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain
  podManagementPolicy: Parallel
  replicas: 6
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: valkey
      app.kubernetes.io/name: valkey
  serviceName: valkey-headless
status:
  availableReplicas: 6
  collisionCount: 0
  currentReplicas: 6
  currentRevision: valkey-d8fcddc58
  observedGeneration: 1
  readyReplicas: 6
  replicas: 6
  updateRevision: valkey-d8fcddc58
  updatedReplicas: 6

Any ideas what I am doing wrong?

@ankitpatisfdc
Author

@ferozed Could you please share your values.yaml, if you’re using one? If not, please confirm that you aren’t using one. That’ll make it easier for me to try to reproduce your issue.

ankitpatisfdc force-pushed the feature/valkey-cluster-mode branch from 787bbd3 to 9210209 on January 15, 2026, 15:48
@ankitpatisfdc
Author

@sgissi I rebased on the latest main, and fixed a bug with authentication in cluster mode.

After that, I ran the following tests for combinations of TLS, authentication, sharding, and replication.

All tests passed.

This should probably be a Bash script; I’ll work on that next.
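A minimal sketch of what such a script might enumerate, assuming nothing beyond the four toggles exercised below (the toggle names are mine, not the chart's value names; each iteration would install the chart with the corresponding overrides and run the checks shown in the transcripts):

```shell
#!/bin/bash
# Hypothetical sketch: enumerate all 16 combinations of the four toggles
# from the test matrix below and print one line per combination.
for tls in false true; do
  for auth in false true; do
    for sharding in false true; do
      for replication in false true; do
        echo "tls=$tls auth=$auth sharding=$sharding replication=$replication"
      done
    done
  done
done
```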

The init-cluster.sh script should also be a proper Bash script, rather than an sh script, for robustness; I’ll get back to that once this PR is merged.

@ferozed Do you have Istio or another service mesh in your cluster, by any chance? Please exclude Valkey from the mesh (see the helm install commands below) and retry with the tip of the PR branch.

Setup

000 20260115080610+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 create secret generic valkey-auth --from-literal=default=password
secret/valkey-auth created

000 20260115080732+0530 feature/valkey-cluster-mode u=!valkey-helm> openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout valkey-ca.key -out valkey-ca.crt -subj /CN=valkey-ca


000 20260115080835+0530 feature/valkey-cluster-mode u=!valkey-helm> openssl req -nodes -newkey rsa:2048 -keyout valkey-server.key -out valkey-server.csr -subj /CN=valkey.default.svc.cluster.local -addext 'subjectAltName=DNS:valkey.default.svc.cluster.local,DNS:valkey-headless.default.svc.cluster.local,DNS:*.valkey-headless.default.svc.cluster.local'


000 20260115080939+0530 feature/valkey-cluster-mode u=!valkey-helm> openssl x509 -req -in valkey-server.csr -CA valkey-ca.crt -CAkey valkey-ca.key -CAcreateserial -out valkey-server.crt -days 365 -copy_extensions copyall
Certificate request self-signature ok
subject=CN=valkey.default.svc.cluster.local

000 20260115081050+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 create secret generic valkey-tls --from-file=server.crt=valkey-server.crt --from-file=server.key=valkey-server.key --from-file=ca.crt=valkey-ca.crt
secret/valkey-tls created

000 20260115081140+0530 feature/valkey-cluster-mode u=!valkey-helm> rm -- valkey-ca.key valkey-ca.crt valkey-ca.srl valkey-server.key valkey-server.csr valkey-server.crt

000 20260115081233+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 run valkey-testbench --image=valkey/valkey:9.0.1 --labels=sidecar.istio.io/inject=false --restart=Never --overrides='{"spec":{"containers":[{"name":"valkey-testbench","image":"valkey/valkey:9.0.1","command":["sleep","infinity"],"volumeMounts":[{"name":"tls","mountPath":"/tls","readOnly":true}]}],"volumes":[{"name":"tls","secret":{"secretName":"valkey-tls"}}]}}' --command -- sleep infinity
pod/valkey-testbench created

TLS 🔴, Authentication 🔴, Sharding 🔴, Replication 🔴

000 20260115081330+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false'
NAME: valkey
LAST DEPLOYED: Thu Jan 15 08:13:45 2026
NAMESPACE: default
STATUS: deployed


000 20260115081447+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec statefulset/ci-valkey --container=ci-valkey -- valkey-cli -h valkey.default.svc.cluster.local ping
PONG

000 20260115081640+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

TLS 🔴, Authentication 🔴, Sharding 🔴, Replication 🟢

000 20260115102638+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=replica.enabled=true --set=replica.persistence.size=5Gi
NAME: valkey
LAST DEPLOYED: Thu Jan 15 10:27:06 2026
NAMESPACE: default
STATUS: deployed


000 20260115102716+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local ping
PONG

000 20260115102905+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

000 20260115103114+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete persistentvolumeclaims --selector=app.kubernetes.io/instance=valkey
persistentvolumeclaim "valkey-data-valkey-0" deleted from default namespace

TLS 🔴, Authentication 🔴, Sharding 🟢, Replication 🔴

000 20260115130933+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=cluster.enabled=true --set=cluster.persistence.size=5Gi --set=cluster.replicasPerShard=0
NAME: valkey
LAST DEPLOYED: Thu Jan 15 13:12:14 2026
NAMESPACE: default
STATUS: deployed


000 20260115131224+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local cluster info | grep ^cluster_state:
cluster_state:ok

000 20260115131332+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local cluster nodes
193eb53bed28e33433b0ff03979afa0676e7b2aa 172.20.130.24:6379@16379,valkey-1.valkey-headless.default.svc.cluster.local master - 0 1768463085087 2 connected 5461-10922
cf20dc4f62a5e7f9a9026a724d6dc62b4f3b67fa 172.20.131.141:6379@16379,valkey-0.valkey-headless.default.svc.cluster.local master - 0 1768463084000 1 connected 0-5460
15656602f78c25e47216bc7960c89fad90e952b5 172.20.131.208:6379@16379,valkey-2.valkey-headless.default.svc.cluster.local myself,master - 0 0 3 connected 10923-16383

000 20260115131446+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

000 20260115131528+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete persistentvolumeclaims --selector=app.kubernetes.io/instance=valkey
persistentvolumeclaim "valkey-data-valkey-0" deleted from default namespace

TLS 🔴, Authentication 🔴, Sharding 🟢, Replication 🟢

000 20260115131931+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=cluster.enabled=true --set=cluster.persistence.size=5Gi --set=cluster.shards=3 --set=cluster.replicasPerShard=1
NAME: valkey
LAST DEPLOYED: Thu Jan 15 13:19:38 2026
NAMESPACE: default
STATUS: deployed


000 20260115131947+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local cluster info | grep ^cluster_state:
cluster_state:ok

000 20260115132027+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local cluster nodes
5c1f6974561571deae29c7b835145cf8daa9102b 172.20.131.142:6379@16379,valkey-3.valkey-headless.default.svc.cluster.local myself,slave b428027e00756a80451e2f76fb71166cb8c788ab 0 0 3 connected
c03b176c9bd5dd2ffe6e3c27c6f33e1bf5c3d701 172.20.128.148:6379@16379,valkey-4.valkey-headless.default.svc.cluster.local slave 8305d5f2c09a78468a536b062501231919209971 0 1768463613000 1 connected
8305d5f2c09a78468a536b062501231919209971 172.20.130.89:6379@16379,valkey-0.valkey-headless.default.svc.cluster.local master - 0 1768463614897 1 connected 0-5460
b428027e00756a80451e2f76fb71166cb8c788ab 172.20.130.25:6379@16379,valkey-2.valkey-headless.default.svc.cluster.local master - 0 1768463612000 3 connected 10923-16383
8c27c70848dc5dea2baaf1942142b2b2b387202d 172.20.130.246:6379@16379,valkey-1.valkey-headless.default.svc.cluster.local master - 0 1768463613000 2 connected 5461-10922
a979635c13098514a2e4fc645e826739c198c1a7 172.20.131.209:6379@16379,valkey-5.valkey-headless.default.svc.cluster.local slave 8c27c70848dc5dea2baaf1942142b2b2b387202d 0 1768463613893 2 connected

000 20260115132335+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

000 20260115132420+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete persistentvolumeclaims --selector=app.kubernetes.io/instance=valkey
persistentvolumeclaim "valkey-data-valkey-0" deleted from default namespace

TLS 🔴, Authentication 🟢, Sharding 🔴, Replication 🔴

000 20260115164104+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=auth.enabled=true --set=auth.usersExistingSecret=valkey-auth --set=auth.aclUsers.default.permissions='~* &* +@all'
NAME: valkey
LAST DEPLOYED: Thu Jan 15 16:41:43 2026
NAMESPACE: default
STATUS: deployed


000 20260115164241+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -a password -h valkey.default.svc.cluster.local --no-auth-warning ping
PONG

000 20260115164454+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local ping
NOAUTH Authentication required.

000 20260115164519+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

TLS 🔴, Authentication 🟢, Sharding 🔴, Replication 🟢

000 20260115164646+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=auth.enabled=true --set=auth.usersExistingSecret=valkey-auth --set=auth.aclUsers.default.permissions='~* &* +@all' --set=replica.enabled=true --set=replica.persistence.size=5Gi
NAME: valkey
LAST DEPLOYED: Thu Jan 15 16:48:54 2026
NAMESPACE: default
STATUS: deployed


000 20260115165005+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -a password -h valkey.default.svc.cluster.local --no-auth-warning ping
PONG

000 20260115165125+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local ping
NOAUTH Authentication required.

000 20260115165134+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

000 20260115170305+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete persistentvolumeclaims --selector=app.kubernetes.io/instance=valkey
persistentvolumeclaim "valkey-data-valkey-0" deleted from default namespace

TLS 🔴, Authentication 🟢, Sharding 🟢, Replication 🔴

000 20260115173547+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=auth.enabled=true --set=auth.usersExistingSecret=valkey-auth --set=auth.aclUsers.default.permissions='~* &* +@all' --set=cluster.enabled=true --set=cluster.persistence.size=5Gi --set=cluster.replicasPerShard=0
NAME: valkey
LAST DEPLOYED: Thu Jan 15 17:35:57 2026
NAMESPACE: default
STATUS: deployed


000 20260115173606+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -a password -h valkey.default.svc.cluster.local --no-auth-warning cluster info | grep ^cluster_state:
cluster_state:ok

000 20260115173710+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -a password -h valkey.default.svc.cluster.local --no-auth-warning cluster nodes
151024e4616c792fdf5dac9fe676e08e289a7da8 172.20.130.250:6379@16379,valkey-2.valkey-headless.default.svc.cluster.local master - 0 1768478932638 3 connected 10923-16383
f6928fac6437ed26089b3b3a3e7dfc6210ab1bd9 172.20.130.30:6379@16379,valkey-1.valkey-headless.default.svc.cluster.local myself,master - 0 0 2 connected 5461-10922
30ebcfd3c0a0b58d6cf0988fa0915d6dc7a8e79f 172.20.131.144:6379@16379,valkey-0.valkey-headless.default.svc.cluster.local master - 0 1768478933643 1 connected 0-5460

000 20260115173854+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local cluster nodes
NOAUTH Authentication required.

000 20260115173951+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

000 20260115174014+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete persistentvolumeclaims --selector=app.kubernetes.io/instance=valkey
persistentvolumeclaim "valkey-data-valkey-0" deleted from default namespace

TLS 🔴, Authentication 🟢, Sharding 🟢, Replication 🟢

000 20260115174057+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=auth.enabled=true --set=auth.usersExistingSecret=valkey-auth --set=auth.aclUsers.default.permissions='~* &* +@all' --set=cluster.enabled=true --set=cluster.persistence.size=5Gi --set=cluster.shards=3 --set=cluster.replicasPerShard=1
NAME: valkey
LAST DEPLOYED: Thu Jan 15 17:44:58 2026
NAMESPACE: default
STATUS: deployed


000 20260115174507+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -a password -h valkey.default.svc.cluster.local --no-auth-warning cluster info | grep ^cluster_state:
cluster_state:ok

000 20260115174607+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -a password -h valkey.default.svc.cluster.local --no-auth-warning cluster nodes
6f64101457661d5178979966cc83a3de553b9a34 172.20.130.251:6379@16379,valkey-5.valkey-headless.default.svc.cluster.local slave d3c43aa94fbc01598d4c174815fe02bbe5755d93 0 1768479407000 2 connected
f7ec436441d50c3ab299a5cb6ab07f13519e87f9 172.20.128.210:6379@16379,valkey-0.valkey-headless.default.svc.cluster.local myself,master - 0 0 1 connected 0-5460
0ff2527be0d1c9c043217d32f28b2a0f251b4d71 172.20.131.13:6379@16379,valkey-4.valkey-headless.default.svc.cluster.local slave f7ec436441d50c3ab299a5cb6ab07f13519e87f9 0 1768479408000 1 connected
c548b15cef079fb899f608aff1defa8373a8aa7e 172.20.131.212:6379@16379,valkey-2.valkey-headless.default.svc.cluster.local master - 0 1768479407000 3 connected 10923-16383
ac0b303b3335dcab93a86ddde9b463e2519276ce 172.20.128.149:6379@16379,valkey-3.valkey-headless.default.svc.cluster.local slave c548b15cef079fb899f608aff1defa8373a8aa7e 0 1768479408651 3 connected
d3c43aa94fbc01598d4c174815fe02bbe5755d93 172.20.130.31:6379@16379,valkey-1.valkey-headless.default.svc.cluster.local master - 0 1768479407647 2 connected 5461-10922

000 20260115174649+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local cluster nodes
NOAUTH Authentication required.

000 20260115174655+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

000 20260115174808+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete persistentvolumeclaims --selector=app.kubernetes.io/instance=valkey
persistentvolumeclaim "valkey-data-valkey-0" deleted from default namespace

TLS 🟢, Authentication 🔴, Sharding 🔴, Replication 🔴

000 20260115180037+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=tls.enabled=true --set=tls.existingSecret=valkey-tls
NAME: valkey
LAST DEPLOYED: Thu Jan 15 18:02:28 2026
NAMESPACE: default
STATUS: deployed


000 20260115180321+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local --tls --cacert /tls/ca.crt ping
PONG

000 20260115180325+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local ping
Error: Connection reset by peer
command terminated with exit code 1

000 20260115180353+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local --tls ping
Could not connect to Valkey at valkey.default.svc.cluster.local:6379: SSL_connect failed: certificate verify failed
command terminated with exit code 1

000 20260115180519+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

TLS 🟢, Authentication 🔴, Sharding 🔴, Replication 🟢

000 20260115180828+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=tls.enabled=true --set=tls.existingSecret=valkey-tls --set=replica.enabled=true --set=replica.persistence.size=5Gi
NAME: valkey
LAST DEPLOYED: Thu Jan 15 18:08:49 2026
NAMESPACE: default
STATUS: deployed


000 20260115180939+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local --tls --cacert /tls/ca.crt ping
PONG

000 20260115181011+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local ping
Error: Connection reset by peer
command terminated with exit code 1

000 20260115181023+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local --tls ping
Could not connect to Valkey at valkey.default.svc.cluster.local:6379: SSL_connect failed: certificate verify failed
command terminated with exit code 1

000 20260115181031+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

000 20260115181122+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete persistentvolumeclaims --selector=app.kubernetes.io/instance=valkey
persistentvolumeclaim "valkey-data-valkey-0" deleted from default namespace

TLS 🟢, Authentication 🔴, Sharding 🟢, Replication 🔴

000 20260115181131+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=tls.enabled=true --set=tls.existingSecret=valkey-tls --set=cluster.enabled=true --set=cluster.persistence.size=5Gi --set=cluster.replicasPerShard=0
NAME: valkey
LAST DEPLOYED: Thu Jan 15 18:13:03 2026
NAMESPACE: default
STATUS: deployed


000 20260115181355+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local --tls --cacert /tls/ca.crt cluster info | grep ^cluster_state:
cluster_state:ok

000 20260115181443+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local cluster info | grep ^cluster_state:
Error: Connection reset by peer
command terminated with exit code 1

000 20260115181456+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local --tls cluster info | grep ^cluster_state:
Could not connect to Valkey at valkey.default.svc.cluster.local:6379: SSL_connect failed: certificate verify failed
command terminated with exit code 1

000 20260115183927+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local --tls --cacert /tls/ca.crt cluster nodes
456b48b8193eb0f1adfe58cf1e1671dca1b35144 172.20.130.252:6379@16379,valkey-1.valkey-headless.default.svc.cluster.local myself,master - 0 0 2 connected 5461-10922
013c331de166433696739e13c5f25279c8dc94cd 172.20.130.33:6379@16379,valkey-2.valkey-headless.default.svc.cluster.local master - 0 1768482574500 3 connected 10923-16383
01ae64df4045ec5184044176298630b3079d0c1f 172.20.131.146:6379@16379,valkey-0.valkey-headless.default.svc.cluster.local master - 0 1768482573000 1 connected 0-5460

000 20260115183935+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

000 20260115184001+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete persistentvolumeclaims --selector=app.kubernetes.io/instance=valkey
persistentvolumeclaim "valkey-data-valkey-0" deleted from default namespace
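
The slot ranges shown by `cluster nodes` above (0-5460, 5461-10922, 10923-16383) are the even split that cluster creation computes from rounded boundaries at i × 16384 / shards. A small sketch (plain shell + awk, not part of the chart) that reproduces those exact boundaries:

```shell
#!/bin/sh
# Sketch: reproduce the even hash-slot split across 3 shards (16384 slots),
# using round(i * 16384 / shards) as each shard's upper boundary.
shards=3
total=16384
start=0
for i in $(seq 1 "$shards"); do
  # awk printf "%d" truncates, so add 0.5 to round to the nearest integer
  end=$(awk -v i="$i" -v n="$shards" -v t="$total" 'BEGIN { printf "%d", i * t / n + 0.5 }')
  end=$(( end - 1 ))
  echo "shard $i: $start-$end"
  start=$(( end + 1 ))
done
# prints:
#   shard 1: 0-5460
#   shard 2: 5461-10922
#   shard 3: 10923-16383
```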

TLS 🟢, Authentication 🔴, Sharding 🟢, Replication 🟢

000 20260115184118+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=tls.enabled=true --set=tls.existingSecret=valkey-tls --set=cluster.enabled=true --set=cluster.persistence.size=5Gi --set=cluster.shards=3 --set=cluster.replicasPerShard=1
NAME: valkey
LAST DEPLOYED: Thu Jan 15 18:42:29 2026
NAMESPACE: default
STATUS: deployed


000 20260115184327+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local --tls --cacert /tls/ca.crt cluster info | grep ^cluster_state:
cluster_state:ok

000 20260115184342+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local cluster info | grep ^cluster_state:
Error: Connection reset by peer
command terminated with exit code 1

000 20260115184426+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local --tls cluster info | grep ^cluster_state:
Could not connect to Valkey at valkey.default.svc.cluster.local:6379: SSL_connect failed: certificate verify failed
command terminated with exit code 1

000 20260115184523+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local --tls --cacert /tls/ca.crt cluster nodes
8804138f43cdc3676eea7d7f405b0277c68c1704 172.20.128.150:6379@16379,valkey-3.valkey-headless.default.svc.cluster.local myself,slave b3d2d98bfee9ea3416f8cbaa65438c89d80843a8 0 0 3 connected
3d99b163e09e17ce932fdd52bc6da55b0faf05fd 172.20.131.147:6379@16379,valkey-0.valkey-headless.default.svc.cluster.local master - 0 1768482928289 1 connected 0-5460
b3d2d98bfee9ea3416f8cbaa65438c89d80843a8 172.20.131.214:6379@16379,valkey-2.valkey-headless.default.svc.cluster.local master - 0 1768482927000 3 connected 10923-16383
9577545cc262ae63b754ed8441d4c781d5113051 172.20.130.253:6379@16379,valkey-5.valkey-headless.default.svc.cluster.local slave d20445b76fe9cbb0fa9cf967215440c92c1d21a3 0 1768482928000 2 connected
4ef18f61ca883e7fd1fdbc05e90bd0424aab3241 172.20.128.211:6379@16379,valkey-4.valkey-headless.default.svc.cluster.local slave 3d99b163e09e17ce932fdd52bc6da55b0faf05fd 0 1768482929293 1 connected
d20445b76fe9cbb0fa9cf967215440c92c1d21a3 172.20.130.34:6379@16379,valkey-1.valkey-headless.default.svc.cluster.local master - 0 1768482926279 2 connected 5461-10922

000 20260115184530+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

000 20260115184650+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete persistentvolumeclaims --selector=app.kubernetes.io/instance=valkey
persistentvolumeclaim "valkey-data-valkey-0" deleted from default namespace

TLS 🟢, Authentication 🟢, Sharding 🔴, Replication 🔴

000 20260115185025+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=auth.enabled=true --set=auth.usersExistingSecret=valkey-auth --set=auth.aclUsers.default.permissions='~* &* +@all' --set=tls.enabled=true --set=tls.existingSecret=valkey-tls
NAME: valkey
LAST DEPLOYED: Thu Jan 15 18:50:30 2026
NAMESPACE: default
STATUS: deployed


000 20260115185709+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -a password -h valkey.default.svc.cluster.local --no-auth-warning --tls --cacert /tls/ca.crt ping
PONG

000 20260115185804+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

TLS 🟢, Authentication 🟢, Sharding 🔴, Replication 🟢

000 20260115185848+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=auth.enabled=true --set=auth.usersExistingSecret=valkey-auth --set=auth.aclUsers.default.permissions='~* &* +@all' --set=tls.enabled=true --set=tls.existingSecret=valkey-tls --set=replica.enabled=true --set=replica.persistence.size=5Gi
NAME: valkey
LAST DEPLOYED: Thu Jan 15 18:59:03 2026
NAMESPACE: default
STATUS: deployed


000 20260115185913+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -a password -h valkey.default.svc.cluster.local --no-auth-warning --tls --cacert /tls/ca.crt ping
PONG

000 20260115190100+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

000 20260115190153+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete persistentvolumeclaims --selector=app.kubernetes.io/instance=valkey
persistentvolumeclaim "valkey-data-valkey-0" deleted from default namespace

TLS 🟢, Authentication 🟢, Sharding 🟢, Replication 🔴

000 20260115190204+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=auth.enabled=true --set=auth.usersExistingSecret=valkey-auth --set=auth.aclUsers.default.permissions='~* &* +@all' --set=tls.enabled=true --set=tls.existingSecret=valkey-tls --set=cluster.enabled=true --set=cluster.persistence.size=5Gi --set=cluster.replicasPerShard=0
NAME: valkey
LAST DEPLOYED: Thu Jan 15 19:03:09 2026
NAMESPACE: default
STATUS: deployed


000 20260115190319+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -a password -h valkey.default.svc.cluster.local --no-auth-warning --tls --cacert /tls/ca.crt cluster info | grep ^cluster_state:
cluster_state:ok

000 20260115190456+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -a password -h valkey.default.svc.cluster.local --no-auth-warning --tls --cacert /tls/ca.crt cluster nodes
04ff8156b86024dd3da682692848ac55559b2a17 172.20.130.36:6379@16379,valkey-2.valkey-headless.default.svc.cluster.local master - 0 1768484187097 3 connected 10923-16383
af63c8d4941239708ba0ec65537e40ac58f08ef2 172.20.131.216:6379@16379,valkey-1.valkey-headless.default.svc.cluster.local master - 0 1768484185086 2 connected 5461-10922
3ce00f07261458c5d24a290d0d6ec2a7cddf1bbf 172.20.131.148:6379@16379,valkey-0.valkey-headless.default.svc.cluster.local myself,master - 0 0 1 connected 0-5460

000 20260115190627+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

000 20260115190811+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete persistentvolumeclaims --selector=app.kubernetes.io/instance=valkey
persistentvolumeclaim "valkey-data-valkey-0" deleted from default namespace

TLS 🟢, Authentication 🟢, Sharding 🟢, Replication 🟢

000 20260115190827+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --set-string='podLabels.sidecar\.istio\.io/inject=false' --set=auth.enabled=true --set=auth.usersExistingSecret=valkey-auth --set=auth.aclUsers.default.permissions='~* &* +@all' --set=tls.enabled=true --set=tls.existingSecret=valkey-tls --set=cluster.enabled=true --set=cluster.persistence.size=5Gi --set=cluster.shards=3 --set=cluster.replicasPerShard=1
NAME: valkey
LAST DEPLOYED: Thu Jan 15 19:09:52 2026
NAMESPACE: default
STATUS: deployed


000 20260115191034+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -a password -h valkey.default.svc.cluster.local --no-auth-warning --tls --cacert /tls/ca.crt cluster info | grep ^cluster_state:
cluster_state:ok

000 20260115191040+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -a password -h valkey.default.svc.cluster.local --no-auth-warning --tls --cacert /tls/ca.crt cluster nodes
da04a8e7916224355a215eeab9d80393527c9a32 172.20.128.151:6379@16379,valkey-3.valkey-headless.default.svc.cluster.local slave e411026bc43f43076e656b046157578f4204ee80 0 1768484456448 3 connected
9324bca34d588f3130d3a56fdcd06cc6a744df93 172.20.131.149:6379@16379,valkey-0.valkey-headless.default.svc.cluster.local master - 0 1768484456000 1 connected 0-5460
e411026bc43f43076e656b046157578f4204ee80 172.20.130.254:6379@16379,valkey-2.valkey-headless.default.svc.cluster.local master - 0 1768484455000 3 connected 10923-16383
6d78ac200832929ed366a792e2dcc53f9369a577 172.20.131.15:6379@16379,valkey-5.valkey-headless.default.svc.cluster.local slave 0351272fa34c0a428683a3541cfe422073a6c874 0 1768484457454 2 connected
212dbff057c8200efbd3e7fabf12d14aa9e0a8c7 172.20.131.217:6379@16379,valkey-4.valkey-headless.default.svc.cluster.local myself,slave 9324bca34d588f3130d3a56fdcd06cc6a744df93 0 0 1 connected
0351272fa34c0a428683a3541cfe422073a6c874 172.20.130.37:6379@16379,valkey-1.valkey-headless.default.svc.cluster.local master - 0 1768484455442 2 connected 5461-10922

000 20260115191058+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 uninstall valkey
release "valkey" uninstalled

000 20260115191127+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete persistentvolumeclaims --selector=app.kubernetes.io/instance=valkey
persistentvolumeclaim "valkey-data-valkey-0" deleted from default namespace

Teardown

000 20260115191158+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete pod valkey-testbench
pod "valkey-testbench" deleted from default namespace

000 20260115191305+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 delete secret valkey-auth valkey-tls
secret "valkey-auth" deleted from default namespace
secret "valkey-tls" deleted from default namespace
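
The matrix above was driven by hand; it could also be replayed mechanically. A sketch in plain POSIX shell (release and chart names mirror the transcript; this only prints the commands, it does not run them) that enumerates the TLS × auth install invocations:

```shell
#!/bin/sh
# Sketch: print one `helm install` command per TLS x auth combination.
# Extend the loops with cluster.*/replica.* flags to cover the sharding
# and replication axes of the matrix above.
base="helm install valkey valkey --dependency-update"
for tls in false true; do
  for auth in false true; do
    cmd="$base --set=tls.enabled=$tls --set=auth.enabled=$auth"
    echo "$cmd"
  done
done
```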

@mirwaisx

Hello,
Would appreciate it if someone could look into this; #18 could also be closed.
Thanks to @ankitpatisfdc and #51 by @qjsoq, I checked out this PR and was able to create a cluster with the following Helm values:

cluster.enabled=true
cluster.shards=3
cluster.replicasPerShard=1
cluster.persistence.size=5Gi
# Persistent storage configuration (standalone deployment only)
dataStorage.enabled=false

Running helm install with these values, you get 6 Valkey nodes (total nodes = shards × (1 + replicasPerShard)).
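
That arithmetic can be checked directly (a trivial shell sketch; the variable names are mine):

```shell
#!/bin/sh
# Total pods in cluster mode = shards * (1 + replicasPerShard)
shards=3
replicas_per_shard=1
total=$(( shards * (1 + replicas_per_shard) ))
echo "total nodes: $total"   # prints "total nodes: 6"
```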

Just a few minor fixes are needed (@ankitpatisfdc?) in the output (NOTES.txt) printed after helm install; for example, with these configs we still get a false-positive warning:

================================================================================
📦 STANDALONE MODE
================================================================================

💾 Persistence
- Persistence is DISABLED. Data will not survive Pod restarts.
- Enable with:
    `--set dataStorage.enabled=true --set dataStorage.requestedSize=5Gi` (optionally set `dataStorage.className`)
    or
    `--set dataStorage.enabled=true --set dataStorage.persistentVolumeClaimName=valkey-data-pvc` to use an existing PVC
    or
    `--set dataStorage.enabled=true --set dataStorage.hostPath=/some/path/` to use a hostPath volume.

@ferozed

ferozed commented Jan 20, 2026

@ankitpatisfdc I am retrying with Istio disabled. BTW, is it a requirement that this chart cannot work with Istio in the picture?

Regarding the MR: you need to add the new values to the schema file.

@ferozed

ferozed commented Jan 20, 2026

@ankitpatisfdc it didn't work, even after disabling Istio.

Here is the status from the pod:

cluster_state:fail
cluster_slots_assigned:14892
cluster_slots_ok:14892
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:1
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
total_cluster_links_buffer_limit_exceeded:0

Here is the values.yaml:

# disable istio sidecar injection as a workaround to see if cluster mode works.
podAnnotations:
  sidecar.istio.io/inject: "false"

# cluster configs
cluster:
   # Enable cluster mode (creates a sharded Valkey cluster)
   enabled: true

   # Number of shards (primary nodes). Minimum recommended is 3 for cluster mode.
   # Each shard handles a portion of the hash slot range (16384 slots total).
   shards: 3

   # Number of replicas per shard (for high availability within each shard)
   # Total nodes = shards × (1 + replicasPerShard)
   # For example: 3 shards with 1 replica each = 6 nodes total
   replicasPerShard: 1

   # Username for cluster replication authentication, ignored if auth.enabled is false.
   # IMPORTANT: When auth.enabled is true, this user MUST be defined in auth.aclUsers.
   # The user must have appropriate replication permissions: +psync +replconf +ping
   replicationUser: \"default\"

   # Cluster node timeout in milliseconds (how long before a node is considered failed)
   nodeTimeout: 15000

   # Require all hash slots to be covered for the cluster to accept writes
   # Set to false to allow partial cluster operation
   requireFullCoverage: true

   # Allow cluster to serve read requests when in down state
   allowReadsWhenDown: false

   # Persistence configuration (required for cluster mode)
   persistence:
      # Size of the PVC for each node (required when cluster.enabled is true)
      size: \"4Gi\"
      # Storage class name (empty = use default storage class)
      storageClass: \"oci-bv\"
      # Access modes for the PVC
      accessModes:
      - ReadWriteOnce

   # Bus port for cluster communication (default: service.port + 10000)
   # This port is used for node-to-node communication in the cluster
   busPort: 16379

replica:
   enabled: false

   # Number of replica instances (total pods = replicas + 1 master)
   replicas: 6

   # Username for replicas to authenticate to master, ignored if auth.enabled is false.
   # IMPORTANT: When auth.enabled is true, this user MUST be defined in auth.aclUsers.
   # The chart requires this to retrieve the password for replica authentication.
   # The user must have appropriate replication permissions: +psync +replconf +ping
   replicationUser: \"default\"

   # Replication settings
   # Use diskless replication (sync directly from memory) vs disk-based
   disklessSync: false

   # Write safety - require minimum number of healthy replicas to accept writes
   # Set to 0 to disable this check, or 1+ to require minimum replicas before accepting writes
   # This ensures data durability by requiring at least N replicas to be in sync
   minReplicasToWrite: 0

   # Maximum replication lag in seconds before a replica is considered unhealthy
   minReplicasMaxLag: 10

   # Read service configuration
   service:
      # Enable read service (load balances read traffic across all pods)
      enabled: true
      # Service type (ClusterIP, NodePort, LoadBalancer)
      type: ClusterIP
      # Port on which the read service will be exposed
      port: 6379
      # Optional annotations for the read service
      annotations: {}
      # NodePort value (if service.type is NodePort)
      nodePort: 0
      # ClusterIP value
      clusterIP: \"\"
      # Application protocol
      appProtocol: \"\"
      # Class of a load balancer implementation
      loadBalancerClass: \"\"

   # Persistence configuration (required for replicas)
   persistence:
      # Size of the PVC for each replica (required when replica.enabled is true)
      size: \"4Gi\"
      # Storage class name (empty = use default storage class)
      storageClass: \"oci-bv\"
      # Access modes for the PVC
      accessModes:
      - ReadWriteOnce

@ankitpatisfdc
Author

@mirwaisx Thanks for catching the missing updates to NOTES.txt. I’m working on it, and will post an update shortly.

@ferozed Thanks for catching the schema issues as well. I’ll update it shortly.

is this a requirement? That this chart cannot work with istio

It can, but not directly out of this repo. It needs some additional resources which are not included in this chart because they’re specific to Istio. Perhaps I’ll add them behind an optional flag in a follow-up PR.
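
For anyone who does want to keep the sidecar in the meantime, one common workaround (a hypothetical values fragment using standard Istio traffic annotations, not something this chart ships or that this PR tests) is to exempt the cluster bus port from interception so gossip traffic bypasses the proxy:

```yaml
# Hypothetical sketch: keep the Istio sidecar injected, but exclude the
# cluster bus port (16379) from inbound/outbound traffic interception.
podAnnotations:
  traffic.sidecar.istio.io/excludeInboundPorts: "16379"
  traffic.sidecar.istio.io/excludeOutboundPorts: "16379"
```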

Here’s my attempt at using your values.yaml, with some changes: I had to unescape all the double quotes and remove the storage class, as that seemed Oracle-specific.

000 20260121124231+0530 feature/valkey-cluster-mode u=!valkey-helm> yq '... comments=""' ferozed-values.yaml

podAnnotations:
  sidecar.istio.io/inject: "false"
cluster:
  enabled: true
  shards: 3
  replicasPerShard: 1
  replicationUser: "default"
  nodeTimeout: 15000
  requireFullCoverage: true
  allowReadsWhenDown: false
  persistence:
    size: "4Gi"
    accessModes:
      - ReadWriteOnce
  busPort: 16379
replica:
  enabled: false
  replicas: 6
  replicationUser: "default"
  disklessSync: false
  minReplicasToWrite: 0
  minReplicasMaxLag: 10
  service:
    enabled: true
    type: ClusterIP
    port: 6379
    annotations: {}
    nodePort: 0
    clusterIP: ""
    appProtocol: ""
    loadBalancerClass: ""
  persistence:
    size: "4Gi"
    accessModes:
      - ReadWriteOnce

000 20260121124232+0530 feature/valkey-cluster-mode u=!valkey-helm> helm --kube-context=mirror-us-east1-rbe-0 install valkey valkey --dependency-update --values=ferozed-values.yaml
NAME: valkey
LAST DEPLOYED: Wed Jan 21 12:43:25 2026
NAMESPACE: default
STATUS: deployed


000 20260121124521+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local cluster nodes

812cf086c7bd6a0bbb1fd80b8a329679c75826fb 172.20.130.214:6379@16379,valkey-2.valkey-headless.default.svc.cluster.local master - 0 1768979739000 3 connected 10923-16383
0b2145c15431fe44fc84937d3c66f983eee0a1f9 172.20.128.156:6379@16379,valkey-5.valkey-headless.default.svc.cluster.local slave 0e3c2981a18f1e52f8ea9206e36108bcd679fce4 0 1768979739000 2 connected
5a266935e5923175bd49ea8be981ea9e5ea73e8d 172.20.131.233:6379@16379,valkey-4.valkey-headless.default.svc.cluster.local slave a0405694bf9de2d2cbdcc98993bdc4a092160411 0 1768979740533 1 connected
a0405694bf9de2d2cbdcc98993bdc4a092160411 172.20.130.108:6379@16379,valkey-0.valkey-headless.default.svc.cluster.local myself,master - 0 0 1 connected 0-5460
b7f759bab1f4d135f06560a995cb8d05db769120 172.20.128.215:6379@16379,valkey-3.valkey-headless.default.svc.cluster.local slave 812cf086c7bd6a0bbb1fd80b8a329679c75826fb 0 1768979740000 3 connected
0e3c2981a18f1e52f8ea9206e36108bcd679fce4 172.20.130.40:6379@16379,valkey-1.valkey-headless.default.svc.cluster.local master - 0 1768979741539 2 connected 5461-10922

000 20260121124542+0530 feature/valkey-cluster-mode u=!valkey-helm> kubectl --context=mirror-us-east1-rbe-0 exec valkey-testbench -- valkey-cli -h valkey.default.svc.cluster.local cluster info

cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_nodes_pfail:0
cluster_nodes_fail:0
cluster_voting_nodes_pfail:0
cluster_voting_nodes_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:2
cluster_stats_messages_ping_sent:130
cluster_stats_messages_pong_sent:123
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:254
cluster_stats_messages_ping_received:123
cluster_stats_messages_pong_received:134
cluster_stats_messages_received:257
total_cluster_links_buffer_limit_exceeded:0

000 20260121124603+0530 feature/valkey-cluster-mode u=!valkey-helm>

As you can see, the cluster works just fine here. Perhaps the issue is specific to OCI? Please share logs so I can investigate.

Signed-off-by: Ankit Pati <ankit.pati@salesforce.com>
Signed-off-by: Ankit Pati <ankit.pati@salesforce.com>
@ankitpatisfdc ankitpatisfdc force-pushed the feature/valkey-cluster-mode branch from 9210209 to 48d4882 on January 26, 2026 20:12
Signed-off-by: Ankit Pati <ankit.pati@salesforce.com>
Signed-off-by: Ankit Pati <ankit.pati@salesforce.com>
@ankitpatisfdc ankitpatisfdc force-pushed the feature/valkey-cluster-mode branch from 48d4882 to 021f4f2 on January 26, 2026 20:15
@ankitpatisfdc
Author

Rebased on latest main to resolve merge conflict, updated NOTES, and added missing values to schema (not all of them were introduced by this PR).

@sgissi Please let me know if there’s anything I can do to expedite the merging of this PR; since it’s a very big change, it risks accumulating merge conflicts.

@ferozed

ferozed commented Jan 31, 2026

logs.txt

@ankitpatisfdc what kind of logs do you need from the cluster? Just the logs from each node? I have attached those.

@ankitpatisfdc
Author

@ferozed Valkey node startup logs will be enough, thanks.

Also please ensure you delete the persistent volumes and PVCs from your prior attempts. Commands are available in the testing comment (the longest comment above).

@ferozed

ferozed commented Feb 1, 2026

@ferozed Valkey node startup logs will be enough, thanks.

Also please ensure you delete the persistent volumes and PVCs from your prior attempts. Commands are available in the testing comment (the longest comment above).

@ankitpatisfdc logs are attached to my message above. Are they sufficient?

@ferozed

ferozed commented Feb 11, 2026

@ankitpatisfdc did you look at my logs? Could you figure out why it doesn't join the cluster for me?

@ankitpatisfdc
Author

@ferozed I did look at your logs, but the cause isn’t clear to me. Can you please try in a different cloud, or even a kind cluster?
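
A minimal local repro along those lines might look like the following (a sketch only: the commands are printed rather than executed, kind/helm/kubectl are assumed to be installed, and the pod/release names are illustrative):

```shell
#!/bin/sh
# Sketch: build and print a kind-based repro sequence for cluster mode.
# Nothing here touches a real cluster; paste the printed lines by hand.
repro=$(cat <<'EOF'
kind create cluster --name valkey-repro
helm install valkey valkey --dependency-update \
  --set=cluster.enabled=true --set=cluster.shards=3 \
  --set=cluster.replicasPerShard=1 --set=cluster.persistence.size=5Gi
kubectl rollout status statefulset/valkey
kubectl exec valkey-0 -- valkey-cli cluster info
kind delete cluster --name valkey-repro
EOF
)
printf '%s\n' "$repro"
```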

@raja-anbazhagan

I tried the latest commits from ankitpatisfdc:feature/valkey-cluster-mode and was able to deploy locally without any problems. Very good work.

@piellick

Great work guys 👏
