Doc: Add design for external controller. #187
Conversation
> * There is no simple way for the promotion token from a demoted cluster to
>   transfer to the newly promoted cluster
> * There needs to be a central location where Azure DNS can be managed
We also want a central controller to manage failovers, both planned and unplanned. Additionally, we need to control backups centrally and propagate backup schedules to all sites, so we can perform a site swap and still have backups.
Also, this controller needs to be handled in our update design -- please research what role it can play during updates. It should probably coordinate multi-region/cloud updates of operators and of individual DocumentDB clusters.
I can add info about a health check. For updates, I think we should use fleet itself to coordinate multi-cloud updates, but I'll add that to the design here.
While the controller can probably help manage operator updates, I think it would be better to use fleet's staged update process instead of creating our own, and then let the operators themselves manage the updates of the individual DocumentDB clusters.
Assume we have only one replica in each region: during an update we will need to fail over to another region and then update that region. We can of course defer this case to future work and assume the primary region is HA-enabled during updates, with the local operator handling failovers automatically.
> It will try to remain as minimal as possible.

> ### Promotion token management
For promotion, it should determine which remote cluster has the highest LSN and pick that one, to minimize data transfer time and downtime. Potentially the user can override this and force a specific region, but that must not be the default behavior.
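A minimal Go sketch of the selection logic described above, assuming the controller can read each remote cluster's last LSN; all type and function names here are illustrative, not part of the design doc:

```go
package main

import "fmt"

// ClusterState captures the replication position reported by a remote cluster.
type ClusterState struct {
	Region string
	LSN    uint64 // last log sequence number reported by the cluster
}

// pickPromotionTarget returns the region to promote. By default it chooses the
// cluster with the highest LSN to minimize data transfer and downtime; a
// non-empty forcedRegion overrides that choice (the non-default path).
func pickPromotionTarget(clusters []ClusterState, forcedRegion string) (string, error) {
	if forcedRegion != "" {
		for _, c := range clusters {
			if c.Region == forcedRegion {
				return c.Region, nil
			}
		}
		return "", fmt.Errorf("forced region %q not found", forcedRegion)
	}
	if len(clusters) == 0 {
		return "", fmt.Errorf("no candidate clusters")
	}
	best := clusters[0]
	for _, c := range clusters[1:] {
		if c.LSN > best.LSN {
			best = c
		}
	}
	return best.Region, nil
}

func main() {
	clusters := []ClusterState{
		{Region: "westus", LSN: 1200},
		{Region: "eastus", LSN: 1450},
	}
	target, _ := pickPromotionTarget(clusters, "")
	fmt.Println(target) // eastus: highest LSN wins by default
}
```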
> cluster individually.

> This will need the following information:
> * Azure Resource group
Can that be pluggable? What if someone adds non-Azure DNS handling... be more general.
I don't really see a way this could be pluggable. We'll need to use the Azure API specifically to create these resources, and there's no general API for DNS creation as far as I'm aware.
e.g. add a field DNSStrategy=Azure, and if someone wants other providers they can propose them.
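The DNSStrategy suggestion could look like the following Go sketch: an interface with Azure as the only registered strategy, so proposing a new provider means adding one map entry. The interface shape and names are assumptions for illustration, not an agreed API:

```go
package main

import "fmt"

// DNSProvider abstracts record management so the Azure implementation is just
// one strategy; other clouds could register their own.
type DNSProvider interface {
	UpdateEndpoint(recordName, target string) error
}

// azureDNS is a stub standing in for calls to the Azure DNS API.
type azureDNS struct{ resourceGroup string }

func (a azureDNS) UpdateEndpoint(recordName, target string) error {
	fmt.Printf("azure[%s]: point %s at %s\n", a.resourceGroup, recordName, target)
	return nil
}

// providers maps the DNSStrategy config field to a constructor; a new
// strategy is proposed by adding an entry here.
var providers = map[string]func() DNSProvider{
	"Azure": func() DNSProvider { return azureDNS{resourceGroup: "example-rg"} },
}

func newDNSProvider(strategy string) (DNSProvider, error) {
	ctor, ok := providers[strategy]
	if !ok {
		return nil, fmt.Errorf("unknown DNSStrategy %q", strategy)
	}
	return ctor(), nil
}

func main() {
	p, err := newDNSProvider("Azure")
	if err != nil {
		panic(err)
	}
	p.UpdateEndpoint("db.example.com", "eastus-primary")
}
```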
Signed-off-by: Alexander Laye <alaye@microsoft.com>
> ### Automatic failover

> The operator will have a health check endpoint that the controller can
This needs to be per DocumentDB cluster -- and not go through the operator.
- We don't want to run a site swap if the operator is down but the rest is fine
- There can be instances where only some clusters are down while others are still fine (partial outage)
- We need to throttle/prioritize the site swaps... e.g. we can't have them all happen at once, which might overwhelm the new primaries
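The three points above can be sketched in Go: the controller probes each cluster directly (so a partial outage is visible) and caps how many site swaps it starts per reconcile cycle. The types and the single-integer throttle are illustrative assumptions, not the design doc's API:

```go
package main

import "fmt"

// ClusterHealth is a per-cluster probe result. Probing each DocumentDB
// cluster directly, rather than the operator, lets the controller react
// even when only some clusters in a region are down.
type ClusterHealth struct {
	Name    string
	Healthy bool
}

// planFailovers returns the clusters to swap this cycle, capped at
// maxConcurrent so simultaneous site swaps cannot overwhelm new primaries;
// remaining unhealthy clusters wait for the next reconcile loop.
func planFailovers(health []ClusterHealth, maxConcurrent int) []string {
	var plan []string
	for _, h := range health {
		if len(plan) >= maxConcurrent {
			break
		}
		if !h.Healthy {
			plan = append(plan, h.Name)
		}
	}
	return plan
}

func main() {
	health := []ClusterHealth{
		{Name: "orders-eastus", Healthy: false},
		{Name: "users-eastus", Healthy: true},
		{Name: "billing-eastus", Healthy: false},
	}
	// Only one swap this cycle; billing-eastus waits for the next one.
	fmt.Println(planFailovers(health, 1)) // [orders-eastus]
}
```

A real implementation would also prioritize the list (e.g. by cluster criticality) before truncating it.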
Updated to remove auto-failover language.
> ## Updates

> Updates of the operators will be coordinated through KubeFleet's
We can do operator updates through the fleet functionality... on the fence about whether we need to put that in the operator so a user can run kubectl documentdb update operators?
In any case, we need to control the update of individual multi-region/cloud DocumentDB clusters through the controller, so users can do that with just one command and it will roll out accordingly (optionally plugging into fleet's deployment mechanism).
I think that DocumentDB updates at that precise a level should probably be handled at the operator level, not by a multi-cloud controller.
No description provided.