From 70509b80d90709304f1b2c2e00cdca8ccda30520 Mon Sep 17 00:00:00 2001
From: Steven Smith
Date: Wed, 4 Feb 2026 09:33:11 -0500
Subject: [PATCH] ADV errors for Upgrading guide

---
 modules/accessing-images.adoc                 | 51 +++++
 modules/downgrade-quay-deployment.adoc        |  2 +-
 ...ly-approving-pending-operator-upgrade.adoc | 14 ++
 modules/operator-lifecycle-manager.adoc       | 11 ++
 modules/operator-upgrade.adoc                 | 179 +-----------------
 modules/proc_upgrade_standalone.adoc          | 67 +------
 modules/reverting-quayecosystem-upgrade.adoc  | 23 +++
 ...ecossytem-configurations-for-upgrades.adoc | 38 ++++
 modules/upgrading-geo-repl-quay.adoc          | 24 +--
 modules/upgrading-minor-red-hat-quay.adoc     |  6 +
 modules/upgrading-quay-operator.adoc          | 21 ++
 modules/upgrading-quayecosystem.adoc          | 28 +++
 modules/upgrading-quayregistry.adoc           | 12 ++
 modules/upgrading-red-hat-quay.adoc           | 30 +++
 upgrade_quay/master.adoc                      | 32 +++-
 15 files changed, 280 insertions(+), 258 deletions(-)
 create mode 100644 modules/accessing-images.adoc
 create mode 100644 modules/manually-approving-pending-operator-upgrade.adoc
 create mode 100644 modules/operator-lifecycle-manager.adoc
 create mode 100644 modules/reverting-quayecosystem-upgrade.adoc
 create mode 100644 modules/supported-quayecossytem-configurations-for-upgrades.adoc
 create mode 100644 modules/upgrading-minor-red-hat-quay.adoc
 create mode 100644 modules/upgrading-quay-operator.adoc
 create mode 100644 modules/upgrading-quayecosystem.adoc
 create mode 100644 modules/upgrading-quayregistry.adoc
 create mode 100644 modules/upgrading-red-hat-quay.adoc

diff --git a/modules/accessing-images.adoc b/modules/accessing-images.adoc
new file mode 100644
index 000000000..2c9379831
--- /dev/null
+++ b/modules/accessing-images.adoc
@@ -0,0 +1,51 @@
+:_mod-docs-content-type: CONCEPT
+[id="accessing-images_{context}"]
+= Accessing images
+
+[role="_abstract"]
+To access {productname} and Clair images for standalone upgrades, pull from registry.redhat.io or registry.access.redhat.com and configure authentication as described in Red Hat Container Registry Authentication.
+
+{productname} images from version 3.4.0 and later are available from link:https://registry.redhat.io[registry.redhat.io] and
+link:https://registry.access.redhat.com[registry.access.redhat.com], with authentication set up as described in link:https://access.redhat.com/RegistryAuthentication[Red Hat Container Registry Authentication].
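+
+For example, after you authenticate, you can pull the target images with Podman. This is a minimal sketch; substitute the tags for the release you are upgrading to (the downstream Clair tag is shown):
+
+[source,terminal]
+----
+# Authenticate to the Red Hat container registry
+$ sudo podman login registry.redhat.io
+
+# Pull the Quay and Clair images for the target release
+$ sudo podman pull {productrepo}/{quayimage}:{producty}
+$ sudo podman pull {productrepo}/{clairimage}:{producty}
+----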
+
+[id="upgrade-to-producty-from-3_15_z"]
+== Upgrade to {producty} from 3.15.z
+
+* **Quay:** {productrepo}/{quayimage}:{producty}
+ifdef::downstream[]
+* **Clair:** {productrepo}/{clairimage}:{producty}
+endif::downstream[]
+ifdef::upstream[]
+* **Clair:** {productrepo}/{clairimage}:{clairproductminv}
+endif::upstream[]
+* **PostgreSQL:** {postgresimage}
+* **Redis:** {redisimage}
+* **Clair-PostgreSQL:** {postgresimage}
+
+[id="upgrade-to-producty-from-3_14_z"]
+== Upgrade to {producty} from 3.14.z
+
+* **Quay:** {productrepo}/{quayimage}:{producty}
+ifdef::downstream[]
+* **Clair:** {productrepo}/{clairimage}:{producty}
+endif::downstream[]
+ifdef::upstream[]
+* **Clair:** {productrepo}/{clairimage}:{clairproductminv}
+endif::upstream[]
+* **PostgreSQL:** {postgresimage}
+* **Redis:** {redisimage}
+* **Clair-PostgreSQL:** {postgresimage}
+
+[id="upgrade-to-producty-from-3_13_z"]
+== Upgrade to {producty} from 3.13.z
+
+* **Quay:** {productrepo}/{quayimage}:{producty}
+ifdef::downstream[]
+* **Clair:** {productrepo}/{clairimage}:{producty}
+endif::downstream[]
+ifdef::upstream[]
+* **Clair:** {productrepo}/{clairimage}:{clairproductminv}
+endif::upstream[]
+* **PostgreSQL:** {postgresimage}
+* **Redis:** {redisimage}
+* **Clair-PostgreSQL:** {postgresimage}
\ No newline at end of file
diff --git a/modules/downgrade-quay-deployment.adoc b/modules/downgrade-quay-deployment.adoc
index 61413dc20..b9ea7d552 100644
--- a/modules/downgrade-quay-deployment.adoc
+++ b/modules/downgrade-quay-deployment.adoc
@@ -1,8 +1,8 @@
 :_mod-docs-content-type: CONCEPT
-
 [id="downgrade-quay-deployment"]
 = Downgrading {productname}
 
+[role="_abstract"]
 {productname} only supports rolling back, or downgrading, to previous z-stream versions, for example, 3.12.3 -> 3.12.2. Rolling back to previous y-stream versions ({producty} -> {producty-n1}) is not supported. This is because {productname} updates might contain database schema upgrades that are applied when upgrading to a new version of {productname}. Database schema upgrades are not considered backwards compatible.
 
 [IMPORTANT]
diff --git a/modules/manually-approving-pending-operator-upgrade.adoc b/modules/manually-approving-pending-operator-upgrade.adoc
new file mode 100644
index 000000000..806c82c00
--- /dev/null
+++ b/modules/manually-approving-pending-operator-upgrade.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: CONCEPT
+[id="manually-approving-pending-operator-upgrade_{context}"]
+= Manually approving a pending Operator upgrade
+
+[role="_abstract"]
+To approve a pending {productname} Operator upgrade when using `Manual` approval, open the *Subscription* tab, review the install plan and resources, and click *Approve*. You can then monitor the upgrade progress on the *Installed Operators* page.
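+
+If you prefer the CLI to the web console, you can review and approve the pending install plan with `oc`. A minimal sketch, assuming the Operator is installed in the `openshift-operators` namespace; `<install_plan_name>` is a placeholder for the plan reported by the first command:
+
+[source,terminal]
+----
+# List install plans and their approval state
+$ oc get installplan -n openshift-operators
+
+# Approve the pending install plan to start the upgrade
+$ oc patch installplan <install_plan_name> -n openshift-operators --type merge --patch '{"spec":{"approved":true}}'
+----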
+ +The following image shows the *Subscription* tab in the UI, including the update `Channel`, the `Approval` strategy, the `Upgrade status` and the `InstallPlan`: + +image:update-channel-approval-strategy.png[Subscription tab including upgrade Channel and Approval strategy] + +The list of Installed Operators provides a high-level summary of the current Quay installation: + +image:installed-operators-list.png[Installed Operators] \ No newline at end of file diff --git a/modules/operator-lifecycle-manager.adoc b/modules/operator-lifecycle-manager.adoc new file mode 100644 index 000000000..261ddfc6a --- /dev/null +++ b/modules/operator-lifecycle-manager.adoc @@ -0,0 +1,11 @@ +:_mod-docs-content-type: CONCEPT +[id="operator-lifecycle-manager_{context}"] += Operator Lifecycle Manager + +[role="_abstract"] +Operator Lifecycle Manager (OLM) installs and upgrades the {productname} Operator. You can use automatic or manual approval in the Subscription to control when new Operator versions are applied. + +[WARNING] +==== +When the {productname} Operator is installed by Operator Lifecycle Manager, it might be configured to support automatic or manual upgrades. This option is shown on the *OperatorHub* page for the {productname} Operator during installation. It can also be found in the {productname} Operator `Subscription` object by the `approvalStrategy` field. Choosing `Automatic` means that your {productname} Operator will automatically be upgraded whenever a new Operator version is released. If this is not desirable, then the `Manual` approval strategy should be selected. +==== \ No newline at end of file diff --git a/modules/operator-upgrade.adoc b/modules/operator-upgrade.adoc index e2b941234..6beab2759 100644 --- a/modules/operator-upgrade.adoc +++ b/modules/operator-upgrade.adoc @@ -1,180 +1,11 @@ -:_mod-docs-content-type: PROCEDURE - +:_mod-docs-content-type: CONCEPT [id="operator-upgrade"] = Upgrading the {productname} Operator Overview -The {productname} Operator follows a _synchronized versioning_ scheme, which means that each version of the Operator is tied to the version of {productname} and the components that it manages. There is no field on the `QuayRegistry` custom resource which sets the version of {productname} to `deploy`; the Operator can only deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of {productname} on Kubernetes. - -[id="operator-lifecycle-manager"] -== Operator Lifecycle Manager - -The {productname} Operator should be installed and upgraded using the link:https://docs.openshift.com/container-platform/{ocp-y}/operators/understanding/olm/olm-understanding-olm.html[Operator Lifecycle Manager (OLM)]. When creating a `Subscription` with the default `approvalStrategy: Automatic`, OLM will automatically upgrade the {productname} Operator whenever a new version becomes available. - -[WARNING] -==== -When the {productname} Operator is installed by Operator Lifecycle Manager, it might be configured to support automatic or manual upgrades. This option is shown on the *OperatorHub* page for the {productname} Operator during installation. It can also be found in the {productname} Operator `Subscription` object by the `approvalStrategy` field. Choosing `Automatic` means that your {productname} Operator will automatically be upgraded whenever a new Operator version is released. 
If this is not desirable, then the `Manual` approval strategy should be selected. -==== - -[id="upgrading-quay-operator"] -== Upgrading the {productname} Operator - -The standard approach for upgrading installed Operators on {ocp} is documented at link:https://docs.openshift.com/container-platform/{ocp-y}/operators/admin/olm-upgrading-operators.html[Upgrading installed Operators]. - -In general, {productname} supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from {productname} 3.9 to the latest version of {producty} is not supported. Instead, users would have to upgrade as follows: - -. 3.9.z -> 3.10.z -. 3.10.z -> 3.11.z -. 3.11.z -> 3.14.z - -This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade. - -In some cases, {productname} supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for {productname} {productmin}: - -* 3.13.z -> {productmin} -* 3.14.z -> {productmin} -* 3.15.z -> {productmin} - -For users on standalone deployments of {productname} wanting to upgrade to {productmin} see the link:https://access.redhat.com/documentation/en-us/red_hat_quay/{producty}/html-single/upgrade_red_hat_quay/index#standalone_upgrade[Standalone upgrade] guide. - -[id="upgrading-red-hat-quay"] -=== Upgrading {productname} to version {productmin} - -To update {productname} from one minor version to the next, for example, {producty-n1} -> {productmin}, you must change the update channel for the {productname} Operator. - -.Procedure - -. In the {ocp} Web Console, navigate to *Operators* -> *Installed Operators*. - -. Click on the {productname} Operator. - -. Navigate to the *Subscription* tab. - -. Under *Subscription details* click *Update channel*. - -. Select *stable-3.16* -> *Save*. - -. Check the progress of the new installation under *Upgrade status*. Wait until the upgrade status changes to *1 installed* before proceeding. - -. In your {ocp} cluster, navigate to *Workloads* -> *Pods*. Existing pods should be terminated, or in the process of being terminated. - -. Wait for the following pods, which are responsible for upgrading the database and alembic migration of existing data, to spin up: `clair-postgres-upgrade`, `quay-postgres-upgrade`, and `quay-app-upgrade`. - -. After the `clair-postgres-upgrade`, `quay-postgres-upgrade`, and `quay-app-upgrade` pods are marked as *Completed*, the remaining pods for your {productname} deployment spin up. This takes approximately ten minutes. - -. Verify that the `quay-database` uses the `postgresql-13` image, and `clair-postgres` pods now uses the `postgresql-15` image. - -. After the `quay-app` pod is marked as *Running*, you can reach your {productname} registry. - -[id="upgrading-minor-red-hat-quay"] -=== Upgrading to the next minor release version - -For `z` stream upgrades, for example, 3.13.1 -> 3.13.2, updates are released in the major-minor channel that the user initially selected during install. The procedure to perform a `z` stream upgrade depends on the `approvalStrategy` as outlined above. If the approval strategy is set to `Automatic`, the {productname} Operator upgrades automatically to the newest `z` stream. This results in automatic, rolling {productname} updates to newer `z` streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin. 
- -[id="changing-update-channel-for-operator"] -=== Changing the update channel for the {productname} Operator - -The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the {productname} Operator to start tracking and receiving updates from a newer channel, change the update channel in the *Subscription* tab for the installed {productname} Operator. For subscriptions with an `Automatic` approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators. - -[id="manually-approving-pending-operator-upgrade"] -=== Manually approving a pending Operator upgrade - -If an installed Operator has the approval strategy in its subscription set to `Manual`, when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the {productname} Operator has a pending upgrade, this status will be displayed in the list of Installed Operators. In the `Subscription` tab for the {productname} Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click `Approve` and return to the page that lists Installed Operators to monitor the progress of the upgrade. - -The following image shows the *Subscription* tab in the UI, including the update `Channel`, the `Approval` strategy, the `Upgrade status` and the `InstallPlan`: - -image:update-channel-approval-strategy.png[Subscription tab including upgrade Channel and Approval strategy] - -The list of Installed Operators provides a high-level summary of the current Quay installation: - -image:installed-operators-list.png[Installed Operators] - -[id="upgrading-quayregistry"] -== Upgrading a QuayRegistry resource - -When the {productname} Operator starts, it immediately looks for any `QuayRegistries` it can find in the namespace(s) it is configured to watch. When it finds one, the following logic is used: - -* If `status.currentVersion` is unset, reconcile as normal. -* If `status.currentVersion` equals the Operator version, reconcile as normal. -* If `status.currentVersion` does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the `status.currentVersion` to the Operator's version once complete. If it cannot be upgraded, return an error and leave the `QuayRegistry` and its deployed Kubernetes objects alone. - -[id="upgrading-quayecosystem"] -== Upgrading a QuayEcosystem - -Upgrades are supported from previous versions of the Operator which used the `QuayEcosystem` API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the `QuayEcosystem` for it to be migrated. A new `QuayRegistry` will be created for the Operator to manage, but the old `QuayEcosystem` will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing `QuayEcosystem` to a new `QuayRegistry`, use the following procedure. - -.Procedure - -. Add `"quay-operator/migrate": "true"` to the `metadata.labels` of the `QuayEcosystem`. -+ -[source,terminal] ----- -$ oc edit quayecosystem ----- -+ -[source,yaml] ----- -metadata: - labels: - quay-operator/migrate: "true" ----- -. Wait for a `QuayRegistry` to be created with the same `metadata.name` as your `QuayEcosystem`. 
The `QuayEcosystem` will be marked with the label `"quay-operator/migration-complete": "true"`. - -. After the `status.registryEndpoint` of the new `QuayRegistry` is set, access {productname} and confirm that all data and settings were migrated successfully. - -. If everything works correctly, you can delete the `QuayEcosystem` and Kubernetes garbage collection will clean up all old resources. - -[id="reverting-quayecosystem-upgrade"] -=== Reverting QuayEcosystem Upgrade - -If something goes wrong during the automatic upgrade from `QuayEcosystem` to `QuayRegistry`, follow these steps to revert back to using the `QuayEcosystem`: - -.Procedure - -. Delete the `QuayRegistry` using either the UI or `kubectl`: -+ -[source,terminal] ----- -$ kubectl delete -n quayregistry ----- - -. If external access was provided using a `Route`, change the `Route` to point back to the original `Service` using the UI or `kubectl`. +[role="_abstract"] +The {productname} Operator uses synchronized versioning: each Operator version deploys a single, matching version of {productname} and its components. You can use this scheme to plan upgrades and keep components compatible. [NOTE] ==== -If your `QuayEcosystem` was managing the PostgreSQL database, the upgrade process will migrate your data to a new PostgreSQL database managed by the upgraded Operator. Your old database will not be changed or removed but {productname} will no longer use it once the migration is complete. If there are issues during the data migration, the upgrade process will exit and it is recommended that you continue with your database as an unmanaged component. -==== - -[id="supported-quayecossytem-configurations-for-upgrades"] -=== Supported QuayEcosystem Configurations for Upgrades - -The {productname} Operator reports errors in its logs and in `status.conditions` if migrating a `QuayEcosystem` component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in {productname}'s `config.yaml` file. - -*Database* - -Ephemeral database not supported (`volumeSize` field must be set). - -*Redis* - -Nothing special needed. - -*External Access* - -Only passthrough `Route` access is supported for automatic migration. Manual migration required for other methods. - -* `LoadBalancer` without custom hostname: -After the `QuayEcosystem` is marked with label `"quay-operator/migration-complete": "true"`, delete the `metadata.ownerReferences` field from existing `Service` _before_ deleting the `QuayEcosystem` to prevent Kubernetes from garbage collecting the `Service` and removing the load balancer. A new `Service` will be created with `metadata.name` format `-quay-app`. Edit the `spec.selector` of the existing `Service` to match the `spec.selector` of the new `Service` so traffic to the old load balancer endpoint will now be directed to the new pods. You are now responsible for the old `Service`; the Quay Operator will not manage it. - -* `LoadBalancer`/`NodePort`/`Ingress` with custom hostname: -A new `Service` of type `LoadBalancer` will be created with `metadata.name` format `-quay-app`. Change your DNS settings to point to the `status.loadBalancer` endpoint provided by the new `Service`. - -*Clair* - -Nothing special needed. - -*Object Storage* - -`QuayEcosystem` did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported. 
-
-*Repository Mirroring*
-
-Nothing special needed.
+There is no field on the `QuayRegistry` custom resource which sets the version of {productname} to deploy; the Operator can only deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of {productname} on Kubernetes.
+====
\ No newline at end of file
diff --git a/modules/proc_upgrade_standalone.adoc b/modules/proc_upgrade_standalone.adoc
index 6757ca7d7..36098c30a 100644
--- a/modules/proc_upgrade_standalone.adoc
+++ b/modules/proc_upgrade_standalone.adoc
@@ -1,6 +1,9 @@
-:_mod-docs-content-type: PROCEDURE
-[id="standalone-upgrade"]
-= Standalone upgrade
+:_mod-docs-content-type: CONCEPT
+[id="standalone-upgrade_{context}"]
+= Standalone {productname} upgrade
+
+[role="_abstract"]
+To upgrade a standalone {productname} and Clair deployment, follow the procedure for your current version in sequential order. You stop the containers, back up the database and storage, and then start the new Clair and {productname} images.
 
 In general, {productname} supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This helps simplify the upgrade procedure for customers on older releases. The following upgrade paths are supported for {productname} {producty}:
 
@@ -18,11 +21,6 @@ This document describes the steps needed to perform each individual upgrade. Det
 * link:https://access.redhat.com/documentation/en-us/red_hat_quay/{producty}/html-single/upgrade_red_hat_quay/index#upgrade_to_3_16_z_from_3_14_z[Upgrade to 3.16.z from 3.14.z]
 * link:https://access.redhat.com/documentation/en-us/red_hat_quay/{producty}/html-single/upgrade_red_hat_quay/index#upgrade_to_3_16_z_from_3_13_z[Upgrade to 3.16.z from 3.13.z]
 
-////
-* link:https://access.redhat.com/documentation/en-us/red_hat_quay/{producty}/html-single/upgrade_red_hat_quay/index#upgrade_to_3_15_z_from_3_14_z[Upgrade to 3.15.z from 3.14.z]
-* link:https://access.redhat.com/documentation/en-us/red_hat_quay/{producty}/html-single/upgrade_red_hat_quay/index#upgrade_to_3_15_z_from_3_13_z[Upgrade to 3.15.z from 3.13.z]
-* link:https://access.redhat.com/documentation/en-us/red_hat_quay/{producty}/html-single/upgrade_red_hat_quay/index#upgrade_to_3_15_z_from_3_12_z[Upgrade to 3.15.z from 3.12.z]
-////
 
 See the link:https://access.redhat.com/documentation/en-us/red_hat_quay/{producty}/html-single/red_hat_quay_release_notes/index[{productname} Release Notes] for information on features for individual releases.
 
@@ -31,55 +29,4 @@ The general procedure for a manual upgrade consists of the following steps:
 
 . Stop the `Quay` and `Clair` containers.
 . Back up the database and image storage (optional but recommended).
 . Start Clair using the new version of the image.
-. Wait until Clair is ready to accept connections before starting the new version of {productname}.
-
-[id="accessing-images"]
-== Accessing images
-
-{productname} image from version 3.4.0 and later are available from link:https://registry.redhat.io[registry.redhat.io] and
-link:https://registry.access.redhat.com[registry.access.redhat.com], with authentication set up as described in link:https://access.redhat.com/RegistryAuthentication[Red Hat Container Registry Authentication].
-
-== Upgrade to {producty} from 3.15.z
-
-.Target images
-
-* **Quay:** {productrepo}/{quayimage}:{producty}
-ifdef::downstream[]
-* **Clair:** {productrepo}/{clairimage}:{producty}
-endif::downstream[]
-ifdef::upstream[]
-* **Clair:** {productrepo}/{clairimage}:{clairproductminv}
-endif::upstream[]
-* **PostgreSQL:** {postgresimage}
-* **Redis:** {redisimage}
-* **Clair-PosgreSQL:** {postgresimage}
-
-== Upgrade to {producty} from 3.14.z
-
-.Target images
-
-* **Quay:** {productrepo}/{quayimage}:{producty}
-ifdef::downstream[]
-* **Clair:** {productrepo}/{clairimage}:{producty}
-endif::downstream[]
-ifdef::upstream[]
-* **Clair:** {productrepo}/{clairimage}:{clairproductminv}
-endif::upstream[]
-* **PostgreSQL:** {postgresimage}
-* **Redis:** {redisimage}
-* **Clair-PosgreSQL:** {postgresimage}
-
-== Upgrade to {producty} from 3.13.z
-
-.Target images
-
-* **Quay:** {productrepo}/{quayimage}:{producty}
-ifdef::downstream[]
-* **Clair:** {productrepo}/{clairimage}:{producty}
-endif::downstream[]
-ifdef::upstream[]
-* **Clair:** {productrepo}/{clairimage}:{clairproductminv}
-endif::upstream[]
-* **PostgreSQL:** {postgresimage}
-* **Redis:** {redisimage}
-* **Clair-PosgreSQL:** {postgresimage}5
\ No newline at end of file
+. Wait until Clair is ready to accept connections before starting the new version of {productname}.
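+
+The following is a minimal sketch of these steps with Podman. The container names (`quay`, `clair`, `quay-postgresql`), the database name and user, the storage path, and the `<existing_..._flags>` placeholders are assumptions; substitute the values and run flags from your own deployment:
+
+[source,terminal]
+----
+# Stop the running Quay and Clair containers
+$ sudo podman stop quay clair
+
+# Back up the Quay database and local image storage (recommended)
+$ sudo podman exec quay-postgresql pg_dump -U quayuser quay > quay-db-backup.sql
+$ cp -a /path/to/quay/storage /path/to/quay/storage.bak
+
+# Start the new Clair image first, then the new Quay image
+$ sudo podman run -d --name clair <existing_clair_flags> {productrepo}/{clairimage}:{producty}
+$ sudo podman run -d --name quay <existing_quay_flags> {productrepo}/{quayimage}:{producty}
+----
\ No newline at end of file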
diff --git a/modules/reverting-quayecosystem-upgrade.adoc b/modules/reverting-quayecosystem-upgrade.adoc
new file mode 100644
index 000000000..5ebbf6aec
--- /dev/null
+++ b/modules/reverting-quayecosystem-upgrade.adoc
@@ -0,0 +1,23 @@
+:_mod-docs-content-type: PROCEDURE
+[id="reverting-quayecosystem-upgrade_{context}"]
+= Reverting QuayEcosystem Upgrade
+
+[role="_abstract"]
+To revert to the `QuayEcosystem` when an upgrade to `QuayRegistry` fails or causes issues, delete the `QuayRegistry` and restore the `Route` to the original `Service`. You can then use the {productname} deployment managed by the `QuayEcosystem`.
+
+[NOTE]
+====
+If your `QuayEcosystem` was managing the PostgreSQL database, the upgrade process migrates your data to a new PostgreSQL database managed by the upgraded Operator. Your old database is not changed or removed, but {productname} will no longer use it once the migration is complete. If there are issues during the data migration, the upgrade process exits and it is recommended that you continue with your database as an unmanaged component.
+====
+
+.Procedure
+
+. Delete the `QuayRegistry` using either the UI or `kubectl`:
++
+[source,terminal]
+----
+$ kubectl delete -n <namespace> quayregistry <quayregistry_name>
+----
+
+. If external access was provided using a `Route`, change the `Route` to point back to the original `Service` using the UI or `kubectl`.
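+
+The second step can also be done with a patch. A minimal sketch; the `Route` name, namespace, and original `Service` name are placeholders for your environment:
+
+[source,terminal]
+----
+# Point the Route back at the Service that the QuayEcosystem manages
+$ kubectl patch route <route_name> -n <namespace> --type merge --patch '{"spec":{"to":{"name":"<original_service_name>"}}}'
+----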
diff --git a/modules/supported-quayecossytem-configurations-for-upgrades.adoc b/modules/supported-quayecossytem-configurations-for-upgrades.adoc
new file mode 100644
index 000000000..dbb405e77
--- /dev/null
+++ b/modules/supported-quayecossytem-configurations-for-upgrades.adoc
@@ -0,0 +1,38 @@
+:_mod-docs-content-type: CONCEPT
+[id="supported-quayecossytem-configurations-for-upgrades_{context}"]
+= Supported QuayEcosystem Configurations for Upgrades
+
+[role="_abstract"]
+The {productname} Operator reports errors in its logs and in `status.conditions` if migrating a `QuayEcosystem` component fails or is unsupported.
+
+All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in {productname}'s `config.yaml` file.
+
+*Database*::
+
+An ephemeral database is not supported; the `volumeSize` field must be set.
+
+*Redis*::
+
+Nothing special needed.
+
+*External Access*::
+
+Only passthrough `Route` access is supported for automatic migration. Manual migration is required for other methods.
++
+* `LoadBalancer` without custom hostname:
+After the `QuayEcosystem` is marked with the label `"quay-operator/migration-complete": "true"`, delete the `metadata.ownerReferences` field from the existing `Service` _before_ deleting the `QuayEcosystem` to prevent Kubernetes from garbage collecting the `Service` and removing the load balancer. A new `Service` is created with the `metadata.name` format `<QuayEcosystem-name>-quay-app`. Edit the `spec.selector` of the existing `Service` to match the `spec.selector` of the new `Service` so traffic to the old load balancer endpoint is directed to the new pods, as shown in the sketch after this list. You are now responsible for the old `Service`; the Quay Operator will not manage it.
++
+* `LoadBalancer`/`NodePort`/`Ingress` with custom hostname:
+A new `Service` of type `LoadBalancer` is created with the `metadata.name` format `<QuayEcosystem-name>-quay-app`. Change your DNS settings to point to the `status.loadBalancer` endpoint provided by the new `Service`.
+
+*Clair*::
+
+Nothing special needed.
+
+*Object Storage*::
+
+`QuayEcosystem` did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported.
+
+*Repository Mirroring*::
+
+Nothing special needed.
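+
+The following is a minimal sketch of the `LoadBalancer` migration steps described above, using `kubectl`. The `Service` names and namespace are placeholders, and the selector shown is an assumption; copy the actual `spec.selector` from the new `Service` in your cluster:
+
+[source,terminal]
+----
+# Remove the owner reference so the old Service survives deletion of the QuayEcosystem
+$ kubectl patch service <old_service_name> -n <namespace> --type json --patch '[{"op": "remove", "path": "/metadata/ownerReferences"}]'
+
+# Read the selector of the new Service, then apply it to the old Service
+$ kubectl get service <QuayEcosystem-name>-quay-app -n <namespace> -o jsonpath='{.spec.selector}'
+$ kubectl patch service <old_service_name> -n <namespace> --type merge --patch '{"spec":{"selector":{"quay-component":"quay-app"}}}'
+----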
diff --git a/modules/upgrading-geo-repl-quay.adoc b/modules/upgrading-geo-repl-quay.adoc
index 896d3d465..6b7b6906c 100644
--- a/modules/upgrading-geo-repl-quay.adoc
+++ b/modules/upgrading-geo-repl-quay.adoc
@@ -2,7 +2,8 @@
 [id="upgrading-geo-repl-quay"]
 = Upgrading a geo-replication deployment of standalone {productname}
 
-Use the following procedure to upgrade your geo-replication {productname} deployment.
+[role="_abstract"]
+To upgrade a geo-replication deployment of standalone {productname}, stop operations on all instances, back up the deployment, and then follow the procedure to upgrade each system. Expect intermittent downtime when upgrading to the next y-stream release.
 
 [IMPORTANT]
 ====
@@ -11,17 +12,18 @@ Use the following procedure to upgrade your geo-replication {productname} deploy
 * It is highly recommended to back up your {productname} deployment before upgrading.
 ====
 
+[NOTE]
+====
+This procedure assumes that you are running {productname} services on three (or more) systems. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_quay/{producty}/html-single/deploy_red_hat_quay_-_high_availability/index#preparing_for_red_hat_quay_high_availability[Preparing for {productname} high availability].
+====
+
 .Prerequisites
 
 * You have logged into `registry.redhat.io`
 
 .Procedure
 
-[NOTE]
-====
-This procedure assumes that you are running {productname} services on three (or more) systems. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_quay/{producty}/html-single/deploy_red_hat_quay_-_high_availability/index#preparing_for_red_hat_quay_high_availability[Preparing for {productname} high availability].
-====
-
 . Obtain a list of all {productname} instances on each system running a {productname} instance.
 
 .. Enter the following command on System A to reveal the {productname} instances:
+
 [source,terminal]
 ----
 $ sudo podman ps
 ----
+
 .Example output
-+
-[source,terminal]
-+
-.Example output
-+
 [source,terminal]
 ----
 CONTAINER ID   IMAGE                                               COMMAND    CREATED         STATUS             PORTS                                         NAMES
 ec16ece208c0   registry.redhat.io/quay/quay-rhel8:v{producty-n1}   registry   6 minutes ago   Up 6 minutes ago   0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp   quay01
 ----
 
 .. Enter the following command on System B to reveal the {productname} instances:
+
 [source,terminal]
 ----
 $ sudo podman ps
 ----
+
 .Example output
-+
 [source,terminal]
-+
 ----
 CONTAINER ID   IMAGE                                               COMMAND    CREATED         STATUS             PORTS                                         NAMES
 7ae0c9a8b37d   registry.redhat.io/quay/quay-rhel8:v{producty-n1}   registry   5 minutes ago   Up 2 seconds ago   0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp   quay02
 ----
 
 .. Enter the following command on System C to reveal the {productname} instances:
+
 [source,terminal]
 ----
 $ sudo podman ps
 ----
+
 .Example output
-+
 [source,terminal]
-+
 ----
 CONTAINER ID   IMAGE                                               COMMAND    CREATED         STATUS             PORTS                                         NAMES
 e75c4aebfee9   registry.redhat.io/quay/quay-rhel8:v{producty-n1}   registry   4 seconds ago   Up 4 seconds ago   0.0.0.0:84->8080/tcp, 0.0.0.0:447->8443/tcp   quay03
 ----
@@ -141,7 +134,6 @@ $ sudo podman ps
 ----
 
 .Example output
-+
 [source,terminal]
 ----
 CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
diff --git a/modules/upgrading-minor-red-hat-quay.adoc b/modules/upgrading-minor-red-hat-quay.adoc
new file mode 100644
index 000000000..ae7d96e14
--- /dev/null
+++ b/modules/upgrading-minor-red-hat-quay.adoc
@@ -0,0 +1,6 @@
+:_mod-docs-content-type: CONCEPT
+[id="upgrading-minor-red-hat-quay_{context}"]
+= Upgrading to the next minor release version
+
+[role="_abstract"]
+Z-stream upgrades for {productname}, for example, 3.13.1 -> 3.13.2, use your existing update channel and approval strategy. With `Automatic` approval, the Operator applies new z-stream updates with little or no downtime; with `Manual` approval, you approve each update first.
\ No newline at end of file
diff --git a/modules/upgrading-quay-operator.adoc b/modules/upgrading-quay-operator.adoc
new file mode 100644
index 000000000..57583cf8b
--- /dev/null
+++ b/modules/upgrading-quay-operator.adoc
@@ -0,0 +1,21 @@
+:_mod-docs-content-type: CONCEPT
+[id="upgrading-quay-operator_{context}"]
+= Upgrading the {productname} Operator
+
+[role="_abstract"]
+To upgrade the {productname} Operator, use the standard {ocp} process for installed Operators and follow the supported minor version paths.
+
+In general, {productname} supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from {productname} 3.9 to the latest version of {producty} is not supported. Instead, users would have to upgrade as follows:
+
+. 3.9.z -> 3.10.z
+. 3.10.z -> 3.11.z
+. 3.11.z -> 3.14.z
+. 3.14.z -> 3.16.z
+
+This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade.
+
+In some cases, {productname} supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for {productname} {productmin}:
+
+* 3.13.z -> {productmin}
+* 3.14.z -> {productmin}
+* 3.15.z -> {productmin}
\ No newline at end of file
diff --git a/modules/upgrading-quayecosystem.adoc b/modules/upgrading-quayecosystem.adoc
new file mode 100644
index 000000000..bb77e7b45
--- /dev/null
+++ b/modules/upgrading-quayecosystem.adoc
@@ -0,0 +1,28 @@
+:_mod-docs-content-type: PROCEDURE
+[id="upgrading-quayecosystem_{context}"]
+= Upgrading a QuayEcosystem
+
+[role="_abstract"]
+To migrate an existing `QuayEcosystem` to a `QuayRegistry` managed by the {productname} Operator, apply the `quay-operator/migrate` label to the `QuayEcosystem` custom resource and wait for the new `QuayRegistry` to start. You can then verify the migration and delete the old `QuayEcosystem`.
+
+.Procedure
+
+. Add `"quay-operator/migrate": "true"` to the `metadata.labels` of the `QuayEcosystem`.
++
+[source,terminal]
+----
+$ oc edit quayecosystem
+----
++
+[source,yaml]
+----
+metadata:
+  labels:
+    quay-operator/migrate: "true"
+----
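+
+Alternatively, you can apply the label with a single command. A minimal sketch, assuming a `QuayEcosystem` named `quay`:
+
+[source,terminal]
+----
+$ oc label quayecosystem quay quay-operator/migrate="true"
+----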
+
+. Wait for a `QuayRegistry` CR to be created with the same `metadata.name` as your `QuayEcosystem`. The `QuayEcosystem` CR is marked with the label `"quay-operator/migration-complete": "true"`.
+
+. After the `status.registryEndpoint` of the new `QuayRegistry` is set, access {productname} and confirm that all data and settings were migrated successfully.
+
+. If everything works correctly, you can delete the `QuayEcosystem`. Kubernetes garbage collection cleans up all old resources.
\ No newline at end of file
diff --git a/modules/upgrading-quayregistry.adoc b/modules/upgrading-quayregistry.adoc
new file mode 100644
index 000000000..b025172a7
--- /dev/null
+++ b/modules/upgrading-quayregistry.adoc
@@ -0,0 +1,12 @@
+:_mod-docs-content-type: CONCEPT
+[id="upgrading-quayregistry_{context}"]
+= Upgrading a QuayRegistry resource
+
+[role="_abstract"]
+The {productname} Operator reconciles `QuayRegistry` resources and upgrades them when the Operator version differs from the deployed version. When an upgrade is supported, the Operator performs it and updates the status; when it is not, the Operator returns an error and leaves the `QuayRegistry` unchanged.
+
+The following logic is used:
+
+* If `status.currentVersion` is unset, reconcile as normal.
+* If `status.currentVersion` equals the Operator version, reconcile as normal.
+* If `status.currentVersion` does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the `status.currentVersion` to the Operator's version once complete. If it cannot be upgraded, return an error and leave the `QuayRegistry` and its deployed Kubernetes objects alone.
\ No newline at end of file
diff --git a/modules/upgrading-red-hat-quay.adoc b/modules/upgrading-red-hat-quay.adoc
new file mode 100644
index 000000000..986e530a6
--- /dev/null
+++ b/modules/upgrading-red-hat-quay.adoc
@@ -0,0 +1,30 @@
+:_mod-docs-content-type: PROCEDURE
+[id="upgrading-red-hat-quay_{context}"]
+= Upgrading {productname} to version {productmin}
+
+[role="_abstract"]
+To upgrade {productname} to the next version, change the Operator update channel in the {ocp} Web Console and wait for the upgrade pods to complete. You can then verify the database images and access your registry.
+
+.Procedure
+
+. In the {ocp} Web Console, navigate to *Operators* -> *Installed Operators*.
+
+. Click on the {productname} Operator.
+
+. Navigate to the *Subscription* tab.
+
+. Under *Subscription details*, click *Update channel*.
+
+. Select *stable-3.16* -> *Save*.
+
+. Check the progress of the new installation under *Upgrade status*. Wait until the upgrade status changes to *1 installed* before proceeding.
+
+. In your {ocp} cluster, navigate to *Workloads* -> *Pods*. Existing pods should be terminated, or in the process of being terminated.
+
+. Wait for the following pods, which are responsible for upgrading the database and the alembic migration of existing data, to spin up: `clair-postgres-upgrade`, `quay-postgres-upgrade`, and `quay-app-upgrade`.
+
+. After the `clair-postgres-upgrade`, `quay-postgres-upgrade`, and `quay-app-upgrade` pods are marked as *Completed*, the remaining pods for your {productname} deployment spin up. This takes approximately ten minutes.
+
+. Verify that the `quay-database` pod uses the `postgresql-13` image and that the `clair-postgres` pods now use the `postgresql-15` image.
+
+. After the `quay-app` pod is marked as *Running*, you can reach your {productname} registry.
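+
+You can watch the upgrade from the CLI while it runs. A minimal sketch; the namespace is a placeholder for wherever your `QuayRegistry` is deployed:
+
+[source,terminal]
+----
+# Watch the upgrade pods run to completion and the new deployment pods start
+$ oc get pods -n <quay_namespace> -w
+----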
diff --git a/upgrade_quay/master.adoc b/upgrade_quay/master.adoc
index d1f662044..f00058586 100644
--- a/upgrade_quay/master.adoc
+++ b/upgrade_quay/master.adoc
@@ -1,17 +1,35 @@
-include::_attributes/attributes.adoc[][id="upgrade-quay-v3"]
+:_mod-docs-content-type: ASSEMBLY
+include::_attributes/attributes.adoc[]
+[id="upgrade-quay-v3"]
 = Upgrade {productname}
+:context: upgrade-quay-v3
 
-The upgrade procedure for {productname} depends on the type of installation that you are using.
+[role="_abstract"]
+To upgrade {productname}, follow the procedure that matches your installation type. You can use the Operator with the Operator Lifecycle Manager (OLM) for Operator-based installations, or the standalone procedure for proof of concept or highly available setups.
 
-The {productname} Operator provides a simple method to deploy and manage a {productname} cluster. This is the preferred procedure for deploying {productname} on {ocp}.
-
-The {productname} Operator should be upgraded using the link:https://docs.openshift.com/container-platform/{ocp-y}/operators/understanding/olm/olm-understanding-olm.html[Operator Lifecycle Manager (OLM)] as described in the section "Upgrading Quay using the Quay Operator".
+//operator-based upgrades
+include::modules/operator-upgrade.adoc[leveloffset=+1]
+include::modules/operator-lifecycle-manager.adoc[leveloffset=+2]
+include::modules/upgrading-quay-operator.adoc[leveloffset=+2]
+include::modules/upgrading-red-hat-quay.adoc[leveloffset=+2]
+include::modules/upgrading-minor-red-hat-quay.adoc[leveloffset=+2]
+include::modules/manually-approving-pending-operator-upgrade.adoc[leveloffset=+2]
+include::modules/upgrading-quayregistry.adoc[leveloffset=+2]
+include::modules/upgrading-quayecosystem.adoc[leveloffset=+2]
+include::modules/reverting-quayecosystem-upgrade.adoc[leveloffset=+2]
+include::modules/supported-quayecossytem-configurations-for-upgrades.adoc[leveloffset=+2]
 
-The procedure for upgrading a proof of concept or highly available installation of {productname} and Clair is documented in the section "Standalone upgrade".
+//standalone upgrades
 
-include::modules/operator-upgrade.adoc[leveloffset=+1]
 include::modules/proc_upgrade_standalone.adoc[leveloffset=+1]
+include::modules/accessing-images.adoc[leveloffset=+2]
+
+//geo-replication upgrades
 include::modules/upgrading-geo-repl-quay.adoc[leveloffset=+1]
 include::modules/upgrading-geo-repl-quay-operator.adoc[leveloffset=+1]
+
+//QBO upgrades
 include::modules/qbo-operator-upgrade.adoc[leveloffset=+1]
+
+//downgrades
 include::modules/downgrade-quay-deployment.adoc[leveloffset=+1]