OCPBUGS-38662: node controller: Don't degrade when node rebooting for upgrade #2081
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: tchap. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
@tchap: This pull request references Jira Issue OCPBUGS-38662, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug. The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Treat nodes as Degraded only when they are not rebooting for upgrade. This prevents Degraded=True from blipping during upgrade.
@tchap: This pull request references Jira Issue OCPBUGS-38662, which is valid. 3 validation(s) were run on this bug. The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
```go
func nodeRebootingForUpgrade(node *coreapiv1.Node) bool {
	return node.Annotations[machineConfigDaemonPostConfigAction] == machineConfigDaemonStateRebooting
}
```
could we start a conversation with the mco team or the forum-ocp-updates to check what information is available during an upgrade that we could use?
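For context, a minimal sketch of the annotation key and value this helper presumably checks. Both constants are assumptions based on the machine-config-daemon code linked in the PR description and would need to be verified against the MCO constants package.

```go
// Assumed constants mirroring the machine-config-daemon annotation referenced in
// the PR description; the exact key and value need to be confirmed with the MCO team.
const (
	// machineConfigDaemonPostConfigAction is the node annotation the MCD uses to
	// record the action it takes after applying a machine config.
	machineConfigDaemonPostConfigAction = "machineconfiguration.openshift.io/post-config-action"
	// machineConfigDaemonStateRebooting is the value written while the node is
	// rebooting into the new machine config.
	machineConfigDaemonStateRebooting = "Rebooting"
)
```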
```go
// DefaultRebootingNodeDegradedInertia is the default period during which a node rebooting for upgrade is not considered Degraded.
const DefaultRebootingNodeDegradedInertia = 10 * time.Minute
```
Actually need to decide on the default value based on how long it usually takes to restart.
Filip proposed something around an hour, but needs to be verified.
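For illustration, a minimal sketch of what the inertia check could look like. `shouldDegradeRebootingNode` is only referenced by name in the diff further down, so this body is an assumption, treating the Ready condition's last transition as the start of the reboot window.

```go
import (
	"time"

	coreapiv1 "k8s.io/api/core/v1"
)

// shouldDegradeRebootingNode (assumed implementation): a rebooting node is still
// reported as Degraded once it has been NotReady for longer than the configured
// inertia, so a stuck reboot does not hide problems forever.
func shouldDegradeRebootingNode(readyCondition *coreapiv1.NodeCondition, inertia time.Duration) bool {
	if readyCondition == nil {
		return false
	}
	return time.Since(readyCondition.LastTransitionTime.Time) > inertia
}
```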
The node is again considered Degraded when it has been rebooting for too long. The default inertia interval is set to 2 hours.
@tchap: all tests passed! Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
```go
			degradedMsg = fmt.Sprintf("node %q not ready since %s because %s (%s)", node.Name, nodeReadyCondition.LastTransitionTime, nodeReadyCondition.Reason, nodeReadyCondition.Message)
		}
		if len(degradedMsg) > 0 {
			if nodeRebootingForUpgrade(node) && !shouldDegradeRebootingNode(nodeReadyCondition, c.rebootingNodeDegradedInertia) {
```
I’m wondering what the chances are that this is the only controller that becomes degraded during an upgrade. Something tells me there might be many more controllers that are unhappy.
I would prefer a more global solution to this problem.
Also, silencing a degraded condition for two hours seems excessive.
What about using StatusSyncer.WithDegradedInertia? I think it has the potential to be a broader solution.
What is the maximum time until a node becomes degraded once the kubelet stops posting status updates?
Thoughts?
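To make the flow in this thread concrete, here is a minimal sketch of how the pieces could fit together. `appendNodeDegradedMessage`, its signature, and the `messages` slice are hypothetical; only `nodeRebootingForUpgrade`, `shouldDegradeRebootingNode`, and the inertia value correspond to names visible in the diff, and this is not the PR's actual implementation.

```go
// Sketch (assumed helper, not part of the PR): wraps the degradedMsg check from the
// diff above with the rebooting-node exception discussed in this thread.
func appendNodeDegradedMessage(messages []string, node *coreapiv1.Node, readyCondition *coreapiv1.NodeCondition, degradedMsg string, rebootInertia time.Duration) []string {
	if len(degradedMsg) == 0 {
		return messages
	}
	if nodeRebootingForUpgrade(node) && !shouldDegradeRebootingNode(readyCondition, rebootInertia) {
		// The node is NotReady because the MCD is rebooting it for an upgrade and it has
		// been NotReady for less than the inertia window, so it does not count towards
		// the operator's Degraded condition.
		return messages
	}
	return append(messages, degradedMsg)
}
```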
Treat nodes as `Degraded` only when they are not rebooting for upgrade. This prevents `Degraded=True` in operators from blipping during upgrade. Based on https://github.com/openshift/machine-config-operator/blob/b4ea81d6885c61c91e1503d5f9f5ebed696092b1/pkg/daemon/update.go#L2812

Now the issue is that the annotation can be removed when the reboot is finished, before the node becomes `Ready`. But this should be a matter of seconds before it's `Ready`. On top of that, there is a 2-minute inertia for `Degraded` on the operator status, so this should be plenty to cover the gap. Another potential issue is that failing to update the annotations is only logged; it does not prevent the reboot. So we may have rebooting nodes missing the annotation.

An alternative would be to ignore nodes with `machineconfiguration.openshift.io/state == Working`, but that seems to be too broad a condition as it covers basically the whole upgrade process, so it can paint too positive a view of the system. But it is a possibility.

There may be other ways to implement this for sure, but they would require pulling in other objects like `machineconfignode` for the nodes, where you can check the timestamp of the last reboot, for example. This doesn't seem particularly necessary as the inertia should cover the vast majority of our use cases, so I focused on what we already have, which are the `Node` objects.

As discussed, I also ended up adding a `Degraded` inertia here, which means that a rebooting node is only ignored for a certain period of time, 10 minutes by default, in case something goes very wrong during the upgrade.
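For comparison, a minimal sketch of the alternative rejected above, based on the broader MCD state annotation. The key and the Working value are taken from the description; this is explicitly not what the PR implements.

```go
// Assumed alternative (not implemented by this PR): exempt any node the MCD is
// actively working on. Rejected above as too broad, since it covers the whole
// per-node upgrade, not just the reboot.
func nodeBeingUpdatedByMCD(node *coreapiv1.Node) bool {
	return node.Annotations["machineconfiguration.openshift.io/state"] == "Working"
}
```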