- [Example Adoption ClusterDeployment](#example-adoption-clusterdeployment)
- [Adopting with hiveutil](#adopting-with-hiveutil)
- [Transferring ownership](#transferring-ownership)
- [MachinePool Adoption](#machinepool-adoption)
- [Configuration Management](#configuration-management)
- [Vertical Scaling](#vertical-scaling)
- [SyncSet](#syncset)

It is possible to adopt cluster deployments into Hive.
This will allow you to manage the cluster as if it had been provisioned by Hive, including:
- [MachinePools](#machine-pools) - See [MachinePool Adoption](#machinepool-adoption) for how to adopt existing MachineSets when adopting a cluster
- [SyncSets and SelectorSyncSets](syncset.md)
- [Deprovisioning](#cluster-deprovisioning)


Note for `metadataJSONSecretRef`:
- The `metadataJSONSecretRef` field is optional for cluster adoption. If you do not specify it in the ClusterDeployment, Hive will automatically generate the metadata.json content from the ClusterDeployment fields and create a secret named `{cluster-name}-metadata-json` (see [retrofitMetadataJSON](https://github.com/openshift/hive/blob/master/pkg/controller/clusterdeployment/clusterdeployment_controller.go#L1110)). The ClusterDeployment is then updated automatically with the `metadataJSONSecretRef`. You only need to provide a metadata.json secret manually if you have metadata that cannot be derived from the ClusterDeployment fields.
- If you need to manually provide the metadata.json secret, use one of the following approaches:
1. If the referenced Secret is available -- e.g. if the cluster was previously managed by hive -- simply copy it in.
2. If you have the original metadata.json file -- e.g. if the cluster was provisioned directly via openshift-install -- create the Secret from it: `oc create secret generic my-gcp-cluster-metadata-json -n mynamespace --from-file=metadata.json=/tmp/metadata.json`.
3. Otherwise, you may need to compose the file by hand. See the [metadata.json samples](#metadatajson-samples) below.
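
Whichever path applies, you can confirm the Secret exists and inspect its content (a quick check, assuming cluster `mycluster` in namespace `mynamespace` and that `jq` is installed):

```bash
# On the hub cluster: the secret is named {cluster-name}-metadata-json
oc get secret mycluster-metadata-json -n mynamespace \
  -o jsonpath='{.data.metadata\.json}' | base64 -d | jq .
```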

#### metadata.json Samples

If the cluster you are looking to adopt is on AWS and uses AWS PrivateLink, you'll also need to include that setting under `spec.platform.aws` to ensure the VPC Endpoint Service for the cluster is tracked in the ClusterDeployment.
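
For example, a PrivateLink-enabled cluster would carry this fragment in its ClusterDeployment (a sketch; only applicable if the cluster was actually installed with PrivateLink):

```yaml
spec:
  platform:
    aws:
      privateLink:
        enabled: true
```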

### Transferring ownership

If you wish to transfer ownership of a cluster which is already managed by hive:
1. Edit the `ClusterDeployment`, setting `spec.preserveOnDelete` to `true`. This ensures that the next step will only release the hive resources without destroying the cluster in the cloud infrastructure.
1. Delete the `ClusterDeployment`
1. From the hive instance that will adopt the cluster, `oc apply` the `ClusterDeployment`, creds and certs manifests you saved in the first step.
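
A minimal sketch of the save-and-apply flow (the secret names here are illustrative; save whichever credential and certificate secrets your `ClusterDeployment` actually references):

```bash
# On the source hub: save the ClusterDeployment and its referenced secrets
oc get clusterdeployment mycluster -n mynamespace -o yaml > clusterdeployment.yaml
oc get secrets mycluster-admin-kubeconfig mycluster-metadata-json -n mynamespace -o yaml > secrets.yaml

# On the adopting hub: recreate them
oc apply -f secrets.yaml -f clusterdeployment.yaml
```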

### MachinePool Adoption

When adopting a cluster, you can also bring its existing MachineSets under Hive management by labeling them and creating MachinePools that match them. Two terms are used throughout this section: the *hub* is the cluster where Hive runs (and where the ClusterDeployment and MachinePool objects live), and the *spoke* is the adopted cluster (where the MachineSets live). Note also that a MachinePool has two names: its `metadata.name`, which must be `<ClusterDeployment name>-<spec.name>`, and its `spec.name` (e.g. `worker`), which is the value used in the `hive.openshift.io/machine-pool` label.

Hive supports adopting existing MachineSets into MachinePool management in two scenarios:

#### Scenario 1: Adopt MachinePools When Adopting a Cluster

This scenario applies when you are adopting a cluster that was previously unmanaged by Hive. After adopting the cluster, you can bring its MachineSets along by labeling them and creating corresponding MachinePools.

Steps:

1. Adopt the cluster (see [Cluster Adoption](#cluster-adoption) above)
2. Adopt the MachinePools using the [MachinePool Adoption Procedure](#machinepool-adoption-procedure) outlined below
- If there are additional MachineSets that should also be managed by Hive, create separate MachinePools for each distinct configuration

#### Scenario 2: Adopt Additional MachineSets for a Cluster Already Managed by Hive

If you want to adopt additional MachineSets for a cluster that is already managed by Hive, you can do so by creating MachinePools that match the existing MachineSets.

Steps:
1. Label the existing MachineSets with `hive.openshift.io/machine-pool=<pool-spec-name>`, where `<pool-spec-name>` is the value you will use for the MachinePool's `spec.name` (not its `metadata.name`)
2. Create a corresponding MachinePool in the Hive hub cluster to manage these MachineSets

#### MachinePool Adoption Procedure

To adopt existing MachineSets:

1. Identify and inspect the existing MachineSets on the spoke cluster:
```bash
# On the spoke cluster: list all MachineSets
oc get machinesets -n openshift-machine-api

# On the spoke cluster: get detailed information about a specific MachineSet
oc get machineset <machineset-name> -n openshift-machine-api -o yaml
```

**Important**: Note the following details for each MachineSet you want to adopt:
- Instance type (e.g., `m5.xlarge` for AWS)
- Availability zone/failure domain (e.g., `us-east-1a`)
- Current replica count
- Any platform-specific configurations (root volume settings, etc.)
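
For AWS, a quick way to tabulate these details is an `oc` `custom-columns` query (a sketch; the `providerSpec` paths below are specific to the AWS provider spec and will differ on other platforms):

```bash
# On the spoke cluster: name, replicas, instance type, and zone of each MachineSet (AWS paths)
oc get machinesets -n openshift-machine-api -o custom-columns=\
NAME:.metadata.name,\
REPLICAS:.spec.replicas,\
TYPE:.spec.template.spec.providerSpec.value.instanceType,\
ZONE:.spec.template.spec.providerSpec.value.placement.availabilityZone
```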

2. **Label the existing MachineSets** with the `hive.openshift.io/machine-pool` label. The label value must match the `spec.name` you will use in the MachinePool:

```bash
# On the spoke cluster
oc label machineset <machineset-name> -n openshift-machine-api hive.openshift.io/machine-pool=<pool-spec-name>
```

**Note**: You must label each MachineSet you want to adopt. Each MachineSet in each availability zone needs the label.
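
For example, a small loop can apply the label to every MachineSet in one pass (a sketch, assuming the pool's `spec.name` will be `worker` and the MachineSet names contain `worker`; adjust the filter for your environment):

```bash
# On the spoke cluster: label every matching MachineSet in every availability zone
for ms in $(oc get machinesets -n openshift-machine-api -o name | grep worker); do
  oc label "$ms" -n openshift-machine-api hive.openshift.io/machine-pool=worker
done
```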

3. **Create a MachinePool** on the hub cluster with specifications that exactly match the existing MachineSets:
- The `metadata.name` must be `<ClusterDeployment name>-<spec.name>` (e.g. `mycluster-worker`); Hive's validating webhook rejects any other name
- The `spec.name` must match the label value you applied in step 2
- The `spec.platform` configuration (instance type, zones, etc.) must exactly match the existing MachineSets
- The `spec.replicas` should match the current total replica count across all zones, or you can adjust it and Hive will reconcile
- The `spec.platform.<cloud>.zones` array must include all zones where MachineSets are labeled, and the order matters (see [Zone Configuration Warnings](#zone-configuration-warnings) below)

Example MachinePool for adopting existing worker MachineSets on AWS:
```yaml
apiVersion: hive.openshift.io/v1
kind: MachinePool
metadata:
  name: mycluster-worker
  namespace: mynamespace
spec:
  clusterDeploymentRef:
    name: mycluster
  name: worker # Must match the label value from step 2
  platform:
    aws:
      type: m5.xlarge # Must exactly match existing MachineSet instance type
      zones: # Must match all zones where MachineSets are labeled
      - us-east-1a
      - us-east-1b
      - us-east-1c
  replicas: 3 # Total replicas across all zones
```
Example MachinePool for adopting existing worker MachineSets on GCP:
```yaml
apiVersion: hive.openshift.io/v1
kind: MachinePool
metadata:
  name: mihuanggcp-worker
spec:
  clusterDeploymentRef:
    name: mihuanggcp
  name: worker
  platform:
    gcp:
      osDisk:
        diskSizeGB: 128
        diskType: pd-ssd
      type: n1-standard-4
      zones:
      - us-central1-a
      - us-central1-c
      - us-central1-f
  replicas: 3
```

Example MachinePool for adopting existing worker MachineSets on vSphere:
```yaml
apiVersion: hive.openshift.io/v1
kind: MachinePool
metadata:
  name: mihuang-1213a-worker
  namespace: adopt
spec:
  clusterDeploymentRef:
    name: mihuang-1213a
  name: worker
  platform:
    vsphere:
      coresPerSocket: 4
      cpus: 8
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
  replicas: 2
```
4. **Apply the MachinePool**:
```bash
# On the hub cluster
oc apply -f machinepool-adopt.yaml
```

5. **Verify the adoption**:
```bash
# On the hub cluster: check MachinePool status; adopted MachineSets are listed under status.machineSets
oc get machinepool mycluster-worker -n mynamespace -o yaml

# On the spoke cluster: verify the existing MachineSets were not recreated
oc get machinesets -n openshift-machine-api
```
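
To list just the MachineSets the MachinePool has recorded (reusing the names from the AWS example above):

```bash
# On the hub cluster: print the MachineSet names from the MachinePool status
oc get machinepool mycluster-worker -n mynamespace \
  -o jsonpath='{range .status.machineSets[*]}{.name}{"\n"}{end}'
```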

#### Zone Configuration Warnings

Zone configuration (failure domain configuration) is one of the most error-prone aspects of MachinePool adoption. Incorrect zone configuration can cause Hive to create new MachineSets and delete existing ones, leading to unexpected resource creation and potential service disruption.

**Warning 1: Zone Mismatch Causes New MachineSet Creation**

If the zones configured in `MachinePool.spec.platform.<cloud>.zones` do not match the existing MachineSets' failure domains (availability zones), Hive will:
- NOT adopt the existing MachineSets (even if they have the correct label)
- Create new MachineSets in the configured zones, which can lead to unexpected resource creation and costs

Example of zone mismatch:
- Existing MachineSets: in zones `us-east-1a` and `us-east-1f` (with `hive.openshift.io/machine-pool=worker` label)
- MachinePool configured with zones: `us-east-1b` and `us-east-1c`
- Result:
- Existing MachineSets in `us-east-1a` and `us-east-1f` are not adopted (zone mismatch)
- If the existing MachineSets have the `hive.openshift.io/machine-pool` label, they will be deleted because they are considered controlled by the MachinePool but don't match the generated MachineSets
- New MachineSets are created in `us-east-1b` and `us-east-1c` to match MachinePool config
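
Before applying the MachinePool, it is worth double-checking which zones the labeled MachineSets actually occupy. A sketch for AWS (the `providerSpec` path is AWS-specific):

```bash
# On the spoke cluster: show the zone of each MachineSet labeled for pool spec.name "worker"
oc get machinesets -n openshift-machine-api -l hive.openshift.io/machine-pool=worker \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.providerSpec.value.placement.availabilityZone}{"\n"}{end}'
```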

**Warning 2: Zone Order Affects Replica Distribution**

When using fixed replicas (not autoscaling), the order of zones (failure domains) in the array determines how replicas are distributed. You must ensure the zone order in `MachinePool.spec.platform.<cloud>.zones` matches the current replica distribution across zones, as incorrect zone order will cause Hive to redistribute replicas, leading to Machine creation or deletion.

Hive distributes replicas using this algorithm:

```go
replicas := int32(total / numOfAZs)
if int64(idx) < total%numOfAZs {
    replicas++ // Earlier zones in the array get extra replicas
}
```
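
To see the effect of zone order concretely, here is a minimal, self-contained sketch of the same arithmetic (`distribute` is a name invented for illustration, not Hive's actual function):

```go
package main

import "fmt"

// distribute splits total replicas across zones in array order; when the
// count does not divide evenly, earlier zones each receive one extra replica.
func distribute(total int64, zones []string) []int32 {
    numOfAZs := int64(len(zones))
    out := make([]int32, len(zones))
    for idx := range zones {
        replicas := int32(total / numOfAZs)
        if int64(idx) < total%numOfAZs {
            replicas++
        }
        out[idx] = replicas
    }
    return out
}

func main() {
    fmt.Println(distribute(3, []string{"us-east-1f", "us-east-1a"})) // [2 1]
    fmt.Println(distribute(3, []string{"us-east-1a", "us-east-1f"})) // [2 1] (now us-east-1a gets 2)
}
```

Both orderings print `[2 1]`: position in the array, not the zone itself, determines which zone receives the extra replica, which is why reversing the order forces Machine creation and deletion.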

Example of zone order impact:

Current state (total: 3 replicas):
- `us-east-1f`: 2 replicas
- `us-east-1a`: 1 replica

Correct zone order (preserves current distribution):
```yaml
spec:
platform:
aws:
zones:
- us-east-1f # Index 0: gets 2 replicas
- us-east-1a # Index 1: gets 1 replica
replicas: 3
```

Incorrect zone order (causes Machine recreation):
```yaml
spec:
platform:
aws:
zones:
- us-east-1a # Index 0: will get 2 replicas
- us-east-1f # Index 1: will get 1 replica
replicas: 3
```

Result of incorrect order:
- Hive will scale `us-east-1a` from 1 to 2 replicas → 1 new Machine created
- Hive will scale `us-east-1f` from 2 to 1 replica → 1 Machine deleted

Special case: if the total number of replicas is an exact multiple of the number of zones (including when they are equal), zone order does not matter, since every zone receives the same count.

## Configuration Management

### Vertical Scaling