This repository was archived by the owner on Apr 4, 2023. It is now read-only.

Scale down of masters results in unavailable cluster #370

@cehoffman

Description


Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

The cluster became unavailable after scaling down the master pods.

What you expected to happen:

Reducing the number of masters should result in the new minimum master count being applied to all nodes before the scaled-down masters are terminated.
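
For illustration, here is a minimal sketch (in Python, assuming a reachable Elasticsearch HTTP endpoint) of the ordering I would expect from the operator. The helper name and URL are hypothetical, but the `_cluster/settings` API and the `discovery.zen.minimum_master_nodes` setting are standard in Elasticsearch 6.x:

```python
import requests

def apply_new_quorum(es_url: str, target_master_count: int) -> None:
    """Hypothetical helper: push the new quorum setting to the whole
    cluster *before* any scaled-down master is terminated."""
    # Standard ES 6.x guidance: minimum_master_nodes = (masters / 2) + 1
    minimum_masters = target_master_count // 2 + 1
    resp = requests.put(
        f"{es_url}/_cluster/settings",
        json={"persistent": {"discovery.zen.minimum_master_nodes": minimum_masters}},
        timeout=10,
    )
    resp.raise_for_status()

# e.g. before scaling the masters from 6 back down to 3:
# apply_new_quorum("http://es-master:9200", 3)  # sets the minimum to 2
```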

How to reproduce it (as minimally and precisely as possible):

Create a cluster with 3 masters (minimum 2). Add 3 more masters (bad, I know, but it was for a simple configuration change, so it should have been quick). After the 3 new masters are up and functional, scale the old set down to 0.
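
The quorum arithmetic makes the failure mode clear (assuming the operator raised `discovery.zen.minimum_master_nodes` to 4 when the master count went to 6, which is the scenario that fits this report):

```python
def quorum(master_eligible: int) -> int:
    # ES 6.x rule of thumb: minimum_master_nodes = (master_eligible / 2) + 1
    return master_eligible // 2 + 1

print(quorum(3))  # 2 -> the initial cluster of 3 masters
print(quorum(6))  # 4 -> after the 3 extra masters join

# Scaling the old set to 0 leaves 3 master-eligible nodes. If the cluster
# still requires 4 of them for an election, no master can be elected and
# the cluster stays unavailable until the setting is lowered back to 2.
```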

Anything else we need to know?:

ES 6.3.1

Environment:

  • Kubernetes version (use kubectl version): 1.9.6
  • Cloud provider or hardware configuration: Azure
  • Install tools:
  • Others:
