Add support for pod scheduling configuration (tolerations, nodeSelector, affinity, topologySpreadConstraints) #11

@recharte

Description

Note

This issue was originally reported in percona/percona-helm-charts#680.
It has been migrated here as part of the repository move.

The current Percona Everest Helm chart does not expose any configuration options to control pod scheduling behavior (e.g., tolerations, nodeSelector, affinity, or topologySpreadConstraints) for its main workloads.

At the moment, the chart only allows configuration of images, resources, environment variables, and a few runtime parameters, but lacks standard Kubernetes scheduling policies that are critical in production environments.

Why This Is Needed

In many Kubernetes setups, nodes are tainted or segregated based on workload type, hardware capabilities, or availability zones. Without scheduling controls, Everest pods might end up running on unsuitable or shared nodes, which can lead to:

  • Pods failing to schedule onto tainted (e.g., dedicated) nodes, because no tolerations can be set
  • No control over node placement (no nodeSelector or affinity)
  • Poor distribution across availability zones (no topologySpreadConstraints)

Adding scheduling options will make the chart more flexible and production-ready.
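To illustrate the first two points, this is the standard Kubernetes pod-spec syntax the chart would need to pass through (the `dedicated=everest` taint key/value is only an illustrative example, not anything the chart defines today):

```yaml
# A pod scheduled onto nodes tainted with
#   kubectl taint nodes <node> dedicated=everest:NoSchedule
# needs a matching toleration, and a nodeSelector to actually prefer those nodes.
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "everest"
    effect: "NoSchedule"
nodeSelector:
  dedicated: everest
```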

Proposed Solution

Add optional scheduling fields for each major component in values.yaml — for example:

server:
  nodeSelector: {}
  tolerations: []
  affinity: {}
  topologySpreadConstraints: []

operator:
  nodeSelector: {}
  tolerations: []
  affinity: {}
  topologySpreadConstraints: []
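Wiring these values into the chart's workload templates could follow the common Helm pass-through idiom; this is only a sketch (the template path and the `server` values key are assumptions based on the proposal above, not the chart's actual layout):

```yaml
# templates/deployment.yaml (sketch)
spec:
  template:
    spec:
      {{- with .Values.server.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.server.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.server.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.server.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```

Using `with` keeps each field out of the rendered manifest entirely when the corresponding value is left at its empty default.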
