From 9b8bf1df440a1e8a440dff1c58778a2384af6679 Mon Sep 17 00:00:00 2001 From: huangmingxia Date: Mon, 15 Dec 2025 23:37:22 -0500 Subject: [PATCH] HIVE-3020: docs: add MachinePool adoption documentation improvements --- docs/using-hive.md | 311 ++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 308 insertions(+), 3 deletions(-) diff --git a/docs/using-hive.md b/docs/using-hive.md index 2a2df3bfbe..a12ce7e6e0 100644 --- a/docs/using-hive.md +++ b/docs/using-hive.md @@ -35,6 +35,7 @@ - [Example Adoption ClusterDeployment](#example-adoption-clusterdeployment) - [Adopting with hiveutil](#adopting-with-hiveutil) - [Transferring ownership](#transferring-ownership) + - [MachinePool Adoption](#machinepool-adoption) - [Configuration Management](#configuration-management) - [Vertical Scaling](#vertical-scaling) - [SyncSet](#syncset) @@ -1212,7 +1213,7 @@ Hive will then: It is possible to adopt cluster deployments into Hive. This will allow you to manage the cluster as if it had been provisioned by Hive, including: -- [MachinePools](#machine-pools) +- [MachinePools](#machine-pools) - See [MachinePool Adoption](#machinepool-adoption) for how to adopt existing MachineSets when adopting a cluster - [SyncSets and SelectorSyncSets](syncset.md) - [Deprovisioning](#cluster-deprovisioning) @@ -1253,10 +1254,46 @@ spec: name: pull-secret ``` +### Example Adoption ClusterDeployment for vSphere +```yaml +apiVersion: hive.openshift.io/v1 +kind: ClusterDeployment +metadata: + name: my-vsphere-cluster + namespace: mynamespace +spec: + baseDomain: vsphere.example.com + clusterMetadata: + adminKubeconfigSecretRef: + name: my-vsphere-cluster-adopted-admin-kubeconfig + clusterID: f2e99580-389c-4ec5-b07f-4f489d6c0929 + infraID: my-vsphere-cluster-khjpw + metadataJSONSecretRef: + name: my-vsphere-cluster-metadata-json + clusterName: my-vsphere-cluster + controlPlaneConfig: + servingCertificates: {} + installed: true + preserveOnDelete: true + platform: + vsphere: + 
certificatesSecretRef: + name: my-vsphere-cluster-adopted-vsphere-certificates + cluster: # vSphere cluster name where VMs are deployed + credentialsSecretRef: + name: my-vsphere-cluster-adopted-vsphere-credentials + datacenter: + defaultDatastore: + network: + vCenter: + pullSecretRef: + name: my-vsphere-cluster-adopted-pull-secret +``` + Note for `metadataJSONSecretRef`: 1. If the referenced Secret is available -- e.g. if the cluster was previously managed by hive -- simply copy it in. -1. If you have the original metadata.json file -- e.g. if the cluster was provisioned directly via openshift-install -- create the Secret from it: `oc create secret generic my-gcp-cluster-metadata-json -n mynamespace --from-file=metadata.json=/tmp/metadata.json` -1. Otherwise, you may need to compose the file by hand. See the samples below. +2. If you have the original metadata.json file -- e.g. if the cluster was provisioned directly via openshift-install -- create the Secret from it: `oc create secret generic my-gcp-cluster-metadata-json -n mynamespace --from-file=metadata.json=/tmp/metadata.json` +3. Otherwise, you may need to compose the file by hand. See the samples below. If the cluster you are looking to adopt is on AWS and leverages Privatelink, you'll also need to include that setting under `spec.platform.aws` to ensure the VPC Endpoint Service for the cluster is tracked in the ClusterDeployment. @@ -1360,6 +1397,274 @@ If you wish to transfer ownership of a cluster which is already managed by hive, 1. Edit the `ClusterDeployment`, setting `spec.preserveOnDelete` to `true`. This ensures that the next step will only release the hive resources without destroying the cluster in the cloud infrastructure. 1. Delete the `ClusterDeployment` 1. From the hive instance that will adopt the cluster, `oc apply` the `ClusterDeployment`, creds and certs manifests you saved in the first step. 
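+
+The `spec.preserveOnDelete` edit above can be sketched as the following ClusterDeployment fragment (a minimal illustrative snippet; apply it via `oc edit clusterdeployment <name>` before deleting the resource):
+
+```yaml
+# ClusterDeployment fragment: keep the cloud resources when the CR is deleted
+spec:
+  preserveOnDelete: true
+```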
+
+### MachinePool Adoption
+
+When adopting a cluster, you can also adopt its existing MachineSets by creating MachinePools that match them.
+
+**Terminology:**
+
+In this section, we use the following terms to avoid confusion:
+
+- **MachinePool resource name** (`metadata.name`): The Kubernetes resource name of the MachinePool, e.g., `mycluster-worker`
+  - Must follow the pattern: `<clusterdeployment-name>-<pool-spec-name>`
+  - This restriction is enforced by webhook validation. If you attempt to create a MachinePool with a `metadata.name` that does not match this pattern, the webhook will reject it.
+  - Example: If your `ClusterDeployment` is named `mycluster` and `MachinePool.spec.name` is `worker`, then `MachinePool.metadata.name` must be exactly `mycluster-worker`
+- **Pool spec name** (`spec.name`): The pool name defined in the MachinePool specification, e.g., `worker`
+  - Used in the `hive.openshift.io/machine-pool` label
+  - Used to generate MachineSet names
+  - You can choose any value for `spec.name` - it does NOT need to match existing MachineSet names. The only requirement is that it matches the `hive.openshift.io/machine-pool` label value when adopting existing MachineSets.
+
+When adopting MachineSets, the `hive.openshift.io/machine-pool` label value must match the **pool spec name** (`spec.name`), not the MachinePool resource name.
+
+**Environment:**
+
+In this section, we distinguish between two cluster environments:
+
+- **Hub cluster**: Where Hive is running and where MachinePool resources are created
+- **Spoke cluster**: The managed cluster where the MachineSets exist
+
+All commands in this procedure are marked with either `# On hub cluster` or `# On spoke cluster` to indicate where each command should be executed.
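+
+The naming rules above can be summarized in one sketch; `mycluster`, `worker`, and the MachineSet name are illustrative:
+
+```yaml
+# On hub cluster - MachinePool (metadata.name is <clusterdeployment-name>-<spec.name>)
+apiVersion: hive.openshift.io/v1
+kind: MachinePool
+metadata:
+  name: mycluster-worker
+spec:
+  clusterDeploymentRef:
+    name: mycluster
+  name: worker # pool spec name
+---
+# On spoke cluster - an adoptable MachineSet carries the matching label
+apiVersion: machine.openshift.io/v1beta1
+kind: MachineSet
+metadata:
+  name: mycluster-worker-us-east-1a
+  namespace: openshift-machine-api
+  labels:
+    hive.openshift.io/machine-pool: worker # must equal MachinePool.spec.name
+```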
+
+Hive supports adopting existing MachineSets into MachinePool management in two scenarios:
+
+#### Scenario 1: Adopt MachinePools When Adopting a Cluster
+
+This scenario applies when you are adopting a cluster that was previously unmanaged by Hive. After adopting the cluster, you can bring the MachinePools along by labeling the existing MachineSets and creating corresponding MachinePools.
+
+Steps:
+
+1. Adopt the cluster (see [Cluster Adoption](#cluster-adoption) above)
+2. Adopt the MachinePools using the [MachinePool Adoption Procedure](#machinepool-adoption-procedure) outlined below
+   - If there are additional MachineSets that should also be managed by Hive, create separate MachinePools for each distinct configuration
+
+#### Scenario 2: Adopt Additional MachineSets for a Cluster Already Managed by Hive
+
+If you want to adopt additional MachineSets for a cluster that is already managed by Hive, you can do so by creating MachinePools that match the existing MachineSets.
+
+Steps:
+1. Label the existing MachineSets with `hive.openshift.io/machine-pool=<pool-name>`, where `<pool-name>` is the value you will use for `MachinePool.spec.name` (the machine pool name, not the MachinePool resource name).
+2. Create a corresponding MachinePool in the Hive hub cluster to manage these MachineSets
+
+#### MachinePool Adoption Procedure
+
+To adopt existing MachineSets:
+
+1. Identify and inspect the existing MachineSets in the cluster that you want to manage:
+   ```bash
+   # On spoke cluster - List all MachineSets
+   oc get machinesets -n openshift-machine-api
+
+   # On spoke cluster - Get detailed information about a specific MachineSet
+   oc get machineset <machineset-name> -n openshift-machine-api -o yaml
+   ```
+
+   **Important**: Note the following details for each MachineSet you want to adopt:
+   - Instance type (e.g., `m5.xlarge` for AWS)
+   - Availability zone/failure domain (e.g., `us-east-1a`)
+   - Current replica count
+   - Any platform-specific configurations (root volume settings, etc.)
+
+2. **Label the existing MachineSets** with the `hive.openshift.io/machine-pool` label. The label value must match the `spec.name` (machine pool name) you will use in the MachinePool:
+   ```bash
+   # On spoke cluster - Label the MachineSet
+   oc label machineset <machineset-name> -n openshift-machine-api hive.openshift.io/machine-pool=<pool-name>
+   ```
+
+   **Note**: You must label every MachineSet you want to adopt; each MachineSet in each availability zone needs the label.
+
+3. **Create a MachinePool** with specifications that exactly match the existing MachineSets:
+   - The `spec.name` (machine pool name) must match the label value you applied in step 2
+   - The `spec.platform` configuration (instance type, zones, etc.) must exactly match the existing MachineSets. For platform-specific limitations, see [Platform-Specific Limitations](#platform-specific-limitations)
+   - The `spec.replicas` should match the current total replica count across all zones, or you can adjust it and Hive will reconcile
+   - The `spec.platform.<platform>.zones` array must include all zones where MachineSets are labeled, and the order matters (see [Zone Configuration Warnings](#zone-configuration-warnings) below)
+
+   Example MachinePool for adopting existing worker MachineSets on AWS:
+   ```yaml
+   apiVersion: hive.openshift.io/v1
+   kind: MachinePool
+   metadata:
+     name: mycluster-worker # MachinePool resource name
+     namespace: mynamespace
+   spec:
+     clusterDeploymentRef:
+       name: mycluster
+     name: worker # Machine pool name (spec.name) - must match the label value from step 2
+     platform:
+       aws:
+         type: m5.xlarge # Must exactly match existing MachineSet instance type
+         zones: # Must match all zones where MachineSets are labeled
+         - us-east-1a
+         - us-east-1b
+         - us-east-1c
+     replicas: 3 # Total replicas across all zones
+   ```
+
+   Example MachinePool for adopting existing worker MachineSets on GCP:
+   ```yaml
+   apiVersion: hive.openshift.io/v1
+   kind: MachinePool
+   metadata:
+     name: mihuanggcp-worker
+   spec:
+     clusterDeploymentRef:
+       name: mihuanggcp
+     name: worker
+     platform:
+       gcp:
+         osDisk:
+           diskSizeGB: 128
+           diskType: pd-ssd
+         type: n1-standard-4
+         zones:
+         - us-central1-a
+         - us-central1-c
+         - us-central1-f
+     replicas: 3
+   ```
+
+   Example MachinePool for adopting existing worker MachineSets on vSphere:
+   ```yaml
+   apiVersion: hive.openshift.io/v1
+   kind: MachinePool
+   metadata:
+     name: mihuang-1213a-worker
+     namespace: adopt
+   spec:
+     clusterDeploymentRef:
+       name: mihuang-1213a
+     name: worker
+     platform:
+       vsphere:
+         coresPerSocket: 4
+         cpus: 8
+         memoryMB: 16384
+         osDisk:
+           diskSizeGB: 120
+     replicas: 2
+   ```
+4. **Apply the MachinePool**:
+   ```bash
+   # On hub cluster - Create the MachinePool
+   oc apply -f machinepool-adopt.yaml
+   ```
+
+5. **Verify the adoption**:
+   ```bash
+   # On hub cluster - Check MachinePool status
+   oc get machinepool mycluster-worker -n mynamespace -o yaml
+
+   # On spoke cluster - Verify MachineSets were not recreated
+   oc get machinesets -n openshift-machine-api
+   ```
+
+#### Warning: Avoid Unintended Hive Management
+
+Hive determines which MachineSets it manages based on two criteria:
+
+1. **Name pattern match**: The MachineSet name starts with `<clusterdeployment-name>-<pool-name>-` (e.g., `mycluster-worker-us-east-1a-xxx`)
+2. **Label match**: The MachineSet has the `hive.openshift.io/machine-pool` label with a value matching the MachinePool's `spec.name`
+
+If a MachineSet meets either of these criteria, Hive will consider it managed and may modify or delete it to match the MachinePool specification.
+
+**Important**: If you manually create MachineSets with names matching the Hive naming pattern (e.g., `mycluster-worker-us-east-1a-xxx`) but do NOT want Hive to manage them, ensure:
+- The MachineSet name does NOT start with `<clusterdeployment-name>-<pool-name>-`
+- The MachineSet does NOT have the `hive.openshift.io/machine-pool` label
+
+If either the naming pattern or the label is present, Hive will assume the MachineSet is Hive-managed and may modify or delete it to match the MachinePool specification. This can lead to unexpected MachineSet deletion or modification.
+
+#### Zone Configuration Warnings
+
+Zone configuration (failure domain configuration) is one of the most error-prone aspects of MachinePool adoption. Incorrect zone configuration can cause Hive to create new MachineSets and delete existing ones, leading to unexpected resource creation and potential service disruption.
+
+**Warning 1: Zone Mismatch Causes New MachineSet Creation**
+
+If the configured zones in `MachinePool.spec.platform.<platform>.zones` do not match the existing MachineSets' failure domains (availability zones), Hive will:
+- NOT adopt the existing MachineSets (even if they have the correct label)
+- Create new MachineSets in the configured zones
+- This can lead to unexpected resource creation and costs
+
+Example of zone mismatch:
+- Existing MachineSets: in zones `us-east-1a` and `us-east-1f` (with the `hive.openshift.io/machine-pool=worker` label)
+- MachinePool configured with zones: `us-east-1b` and `us-east-1c`
+- Result:
+  - Existing MachineSets in `us-east-1a` and `us-east-1f` are not adopted (zone mismatch)
+  - If the existing MachineSets have the `hive.openshift.io/machine-pool` label, they will be deleted because they are considered controlled by the MachinePool but don't match the generated MachineSets
+  - New MachineSets are created in `us-east-1b` and `us-east-1c` to match the MachinePool config
+
+**Warning 2: Zone Order Affects Replica Distribution**
+
+When using fixed replicas (not autoscaling), the order of zones (failure domains) in the array determines how replicas are distributed. You must ensure the zone order in `MachinePool.spec.platform.<platform>.zones` matches the current replica distribution across zones, as an incorrect zone order will cause Hive to redistribute replicas, leading to Machine creation or deletion.
+
+Hive distributes replicas using this algorithm:
+
+```go
+// total: desired replica count for the pool (int64)
+// numOfAZs: number of configured zones; idx: zone's index in the zones array
+replicas := int32(total / numOfAZs)
+if int64(idx) < total % numOfAZs {
+	replicas++ // Earlier zones in the array get the extra replicas
+}
+```
+
+Example of zone order impact:
+
+Current state (total: 3 replicas):
+- `us-east-1f`: 2 replicas
+- `us-east-1a`: 1 replica
+
+Correct zone order (preserves the current distribution):
+```yaml
+spec:
+  platform:
+    aws:
+      zones:
+      - us-east-1f # Index 0: gets 2 replicas
+      - us-east-1a # Index 1: gets 1 replica
+  replicas: 3
+```
+
+Incorrect zone order (causes Machine recreation):
+```yaml
+spec:
+  platform:
+    aws:
+      zones:
+      - us-east-1a # Index 0: will get 2 replicas
+      - us-east-1f # Index 1: will get 1 replica
+  replicas: 3
+```
+
+Result of the incorrect order:
+- Hive will scale `us-east-1a` from 1 to 2 replicas → 1 new Machine created
+- Hive will scale `us-east-1f` from 2 to 1 replica → 1 Machine deleted
+
+#### Platform-Specific Limitations
+
+##### Nutanix and vSphere: Multiple Failure Domains
+
+**Note:** vSphere zone support is coming soon but is not yet officially supported.
+
+Nutanix and vSphere follow similar mechanisms for failure domain handling.
+
+**For clusters configured with a single failure domain:**
+
+- Nutanix and vSphere MachineSets can be adopted normally
+- MachinePool adoption works correctly
+
+**For clusters configured with multiple failure domains (e.g., FD1, FD2):**
+
+After an OpenShift cluster is created, the failure domain configuration is stored in the `Infrastructure` resource, under `spec.platformSpec.*.failureDomains`. The failure domains in the Infrastructure resource can be modified.
+
+- If a newly added MachineSet in the spoke cluster is in FD1 or FD2, MachinePool adoption and autoscaling work normally.
+
+**Limited Scenario:**
+
+After creating a cluster with multiple failure domains (FD1, FD2) using Hive, if a new MachineSet is added in FD3 on the spoke cluster, it cannot be adopted.
+The `ClusterDeployment.spec.platform.*.failureDomains` field is immutable and does not support modification. Hive uses the ClusterDeployment's FailureDomains to generate MachineSets. Even if the `Infrastructure` resource's `spec.platformSpec.*.failureDomains` includes FD3, if the ClusterDeployment's FailureDomains does not include FD3:
+
+- MachinePool adoption will fail because there is no generated FD3 MachineSet to match against
+- The FD3 MachineSet will be deleted by Hive because it has the correct `hive.openshift.io/machine-pool` label (making `isControlledByMachinePool` return true) but no matching generated MachineSet exists
+
+**Note:** There is one difference between Nutanix and vSphere: Nutanix can only configure one PrismCentral, while vSphere supports configuring multiple vCenters (topology). _After a vSphere cluster is created, adding new vCenters is not supported_; however, new failure domains can be added within existing vCenters.
+
 ## Configuration Management
 
 ### Vertical Scaling