51 changes: 51 additions & 0 deletions modules/accessing-images.adoc
@@ -0,0 +1,51 @@
:_mod-docs-content-type: CONCEPT
[id="accessing-images_{context}"]
= Accessing images

[role="_abstract"]
To access {productname} and Clair images for standalone upgrades, pull from registry.redhat.io or registry.access.redhat.com and configure authentication as described in Red Hat Container Registry Authentication.

{productname} images from version 3.4.0 and later are available from link:https://registry.redhat.io[registry.redhat.io] and
link:https://registry.access.redhat.com[registry.access.redhat.com], with authentication set up as described in link:https://access.redhat.com/RegistryAuthentication[Red Hat Container Registry Authentication].
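
For example, after authentication is configured you can pull the images with `podman`. The following is a minimal sketch that reuses the attributes from the lists below; the Clair, PostgreSQL, and Redis images can be pulled the same way:

[source,terminal,subs="attributes+"]
----
$ podman login registry.redhat.io
$ podman pull {productrepo}/{quayimage}:{producty}
----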

[id="upgrade-to-producty-from-3_15_z"]
== Upgrade to {producty} from 3.15.z

* **Quay:** {productrepo}/{quayimage}:{producty}
ifdef::downstream[]
* **Clair:** {productrepo}/{clairimage}:{producty}
endif::downstream[]
ifdef::upstream[]
* **Clair:** {productrepo}/{clairimage}:{clairproductminv}
endif::upstream[]
* **PostgreSQL:** {postgresimage}
* **Redis:** {redisimage}
* **Clair-PostgreSQL:** {postgresimage}

[id="upgrade-to-producty-from-3_14_z"]
== Upgrade to {producty} from 3.14.z

* **Quay:** {productrepo}/{quayimage}:{producty}
ifdef::downstream[]
* **Clair:** {productrepo}/{clairimage}:{producty}
endif::downstream[]
ifdef::upstream[]
* **Clair:** {productrepo}/{clairimage}:{clairproductminv}
endif::upstream[]
* **PostgreSQL:** {postgresimage}
* **Redis:** {redisimage}
* **Clair-PostgreSQL:** {postgresimage}

[id="upgrade-to-producty-from-3_13_z"]
== Upgrade to {producty} from 3.13.z

* **Quay:** {productrepo}/{quayimage}:{producty}
ifdef::downstream[]
* **Clair:** {productrepo}/{clairimage}:{producty}
endif::downstream[]
ifdef::upstream[]
* **Clair:** {productrepo}/{clairimage}:{clairproductminv}
endif::upstream[]
* **PostgreSQL:** {postgresimage}
* **Redis:** {redisimage}
* **Clair-PostgreSQL:** {postgresimage}
2 changes: 1 addition & 1 deletion modules/downgrade-quay-deployment.adoc
@@ -1,8 +1,8 @@
:_mod-docs-content-type: CONCEPT

[id="downgrade-quay-deployment"]
= Downgrading {productname}

[role="_abstract"]
{productname} only supports rolling back, or downgrading, to previous z-stream versions, for example, 3.12.3 -> 3.12.2. Rolling back to previous y-stream versions ({producty} -> {producty-n1}) is not supported. This is because {productname} updates might contain database schema upgrades that are applied when upgrading to a new version of {productname}. Database schema upgrades are not considered backwards compatible.

[IMPORTANT]
14 changes: 14 additions & 0 deletions modules/manually-approving-pending-operator-upgrade.adoc
@@ -0,0 +1,14 @@
:_mod-docs-content-type: CONCEPT
[id="manually-approving-pending-operator-upgrade_{context}"]
= Manually approving a pending Operator upgrade

[role="_abstract"]
To approve a pending {productname} Operator upgrade when the `Manual` approval strategy is used, open the *Subscription* tab, review the install plan and its resources, and click *Approve*. You can then monitor the upgrade progress on the *Installed Operators* page.

The following image shows the *Subscription* tab in the UI, including the update `Channel`, the `Approval` strategy, the `Upgrade status` and the `InstallPlan`:

image:update-channel-approval-strategy.png[Subscription tab including upgrade Channel and Approval strategy]

The list of Installed Operators provides a high-level summary of the current Quay installation:

image:installed-operators-list.png[Installed Operators]
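
If you prefer the command line, a pending upgrade can also be approved by patching the `InstallPlan` object that OLM creates. The following is a minimal sketch; the `openshift-operators` namespace and the install plan name are illustrative:

[source,terminal]
----
$ oc get installplans -n openshift-operators
$ oc patch installplan <install_plan_name> -n openshift-operators \
    --type merge --patch '{"spec":{"approved":true}}'
----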
11 changes: 11 additions & 0 deletions modules/operator-lifecycle-manager.adoc
@@ -0,0 +1,11 @@
:_mod-docs-content-type: CONCEPT
[id="operator-lifecycle-manager_{context}"]
= Operator Lifecycle Manager

[role="_abstract"]
Operator Lifecycle Manager (OLM) installs and upgrades the {productname} Operator. You can use automatic or manual approval in the Subscription to control when new Operator versions are applied.

[WARNING]
====
When the {productname} Operator is installed by Operator Lifecycle Manager, it might be configured to support automatic or manual upgrades. This option is shown on the *OperatorHub* page for the {productname} Operator during installation. It can also be found in the {productname} Operator `Subscription` object by the `approvalStrategy` field. Choosing `Automatic` means that your {productname} Operator will automatically be upgraded whenever a new Operator version is released. If this is not desirable, then the `Manual` approval strategy should be selected.
====
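
For reference, the approval strategy is recorded in the `Subscription` resource itself. The following is a minimal sketch of such a `Subscription`; the names, namespace, and channel are illustrative, and in the `Subscription` spec the setting typically corresponds to the `installPlanApproval` field:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: openshift-operators
spec:
  channel: stable-3.16          # update channel tracked by OLM
  name: quay-operator           # Operator package name (illustrative)
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual   # or Automatic
----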
179 changes: 5 additions & 174 deletions modules/operator-upgrade.adoc
@@ -1,180 +1,11 @@
:_mod-docs-content-type: PROCEDURE

:_mod-docs-content-type: CONCEPT
[id="operator-upgrade"]
= Overview of upgrading the {productname} Operator

The {productname} Operator follows a _synchronized versioning_ scheme, which means that each version of the Operator is tied to the version of {productname} and the components that it manages. There is no field on the `QuayRegistry` custom resource which sets the version of {productname} to `deploy`; the Operator can only deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of {productname} on Kubernetes.

[id="operator-lifecycle-manager"]
== Operator Lifecycle Manager

The {productname} Operator should be installed and upgraded using the link:https://docs.openshift.com/container-platform/{ocp-y}/operators/understanding/olm/olm-understanding-olm.html[Operator Lifecycle Manager (OLM)]. When creating a `Subscription` with the default `approvalStrategy: Automatic`, OLM will automatically upgrade the {productname} Operator whenever a new version becomes available.

[WARNING]
====
When the {productname} Operator is installed by Operator Lifecycle Manager, it might be configured to support automatic or manual upgrades. This option is shown on the *OperatorHub* page for the {productname} Operator during installation. It can also be found in the {productname} Operator `Subscription` object by the `approvalStrategy` field. Choosing `Automatic` means that your {productname} Operator will automatically be upgraded whenever a new Operator version is released. If this is not desirable, then the `Manual` approval strategy should be selected.
====

[id="upgrading-quay-operator"]
== Upgrading the {productname} Operator

The standard approach for upgrading installed Operators on {ocp} is documented at link:https://docs.openshift.com/container-platform/{ocp-y}/operators/admin/olm-upgrading-operators.html[Upgrading installed Operators].

In general, {productname} supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from {productname} 3.9 to the latest version of {producty} is not supported. Instead, users would have to upgrade as follows:

. 3.9.z -> 3.10.z
. 3.10.z -> 3.11.z
. 3.11.z -> 3.12.z

This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade.

In some cases, {productname} supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for {productname} {productmin}:

* 3.13.z -> {productmin}
* 3.14.z -> {productmin}
* 3.15.z -> {productmin}

For users on standalone deployments of {productname} wanting to upgrade to {productmin}, see the link:https://access.redhat.com/documentation/en-us/red_hat_quay/{producty}/html-single/upgrade_red_hat_quay/index#standalone_upgrade[Standalone upgrade] guide.

[id="upgrading-red-hat-quay"]
=== Upgrading {productname} to version {productmin}

To update {productname} from one minor version to the next, for example, {producty-n1} -> {productmin}, you must change the update channel for the {productname} Operator.

.Procedure

. In the {ocp} Web Console, navigate to *Operators* -> *Installed Operators*.

. Click on the {productname} Operator.

. Navigate to the *Subscription* tab.

. Under *Subscription details* click *Update channel*.

. Select *stable-3.16* -> *Save*.

. Check the progress of the new installation under *Upgrade status*. Wait until the upgrade status changes to *1 installed* before proceeding.

. In your {ocp} cluster, navigate to *Workloads* -> *Pods*. Existing pods should be terminated, or in the process of being terminated.

. Wait for the following pods, which are responsible for upgrading the database and alembic migration of existing data, to spin up: `clair-postgres-upgrade`, `quay-postgres-upgrade`, and `quay-app-upgrade`.

. After the `clair-postgres-upgrade`, `quay-postgres-upgrade`, and `quay-app-upgrade` pods are marked as *Completed*, the remaining pods for your {productname} deployment spin up. This takes approximately ten minutes.

. Verify that the `quay-database` pod uses the `postgresql-13` image and that the `clair-postgres` pods now use the `postgresql-15` image.

. After the `quay-app` pod is marked as *Running*, you can reach your {productname} registry.
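
You can also watch the upgrade from the command line. The following is a minimal sketch, assuming the registry is deployed in a namespace named `quay-enterprise`; the pod name placeholder is illustrative:

[source,terminal]
----
$ oc get pods -n quay-enterprise -w
$ oc describe pod <quay_database_pod_name> -n quay-enterprise | grep 'Image:'
----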

[id="upgrading-minor-red-hat-quay"]
=== Upgrading to the next minor release version

For `z` stream upgrades, for example, 3.13.1 -> 3.13.2, updates are released in the major-minor channel that the user initially selected during install. The procedure to perform a `z` stream upgrade depends on the `approvalStrategy` as outlined above. If the approval strategy is set to `Automatic`, the {productname} Operator upgrades automatically to the newest `z` stream. This results in automatic, rolling {productname} updates to newer `z` streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin.

[id="changing-update-channel-for-operator"]
=== Changing the update channel for the {productname} Operator

The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the {productname} Operator to start tracking and receiving updates from a newer channel, change the update channel in the *Subscription* tab for the installed {productname} Operator. For subscriptions with an `Automatic` approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators.
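
The same change can be made from the command line by updating the `channel` field of the `Subscription`. The following is a minimal sketch; the subscription name, namespace, and channel are illustrative:

[source,terminal]
----
$ oc patch subscription <subscription_name> -n openshift-operators \
    --type merge --patch '{"spec":{"channel":"stable-3.16"}}'
----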

[id="manually-approving-pending-operator-upgrade"]
=== Manually approving a pending Operator upgrade

If an installed Operator has the approval strategy in its subscription set to `Manual`, when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the {productname} Operator has a pending upgrade, this status will be displayed in the list of Installed Operators. In the `Subscription` tab for the {productname} Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click `Approve` and return to the page that lists Installed Operators to monitor the progress of the upgrade.

The following image shows the *Subscription* tab in the UI, including the update `Channel`, the `Approval` strategy, the `Upgrade status` and the `InstallPlan`:

image:update-channel-approval-strategy.png[Subscription tab including upgrade Channel and Approval strategy]

The list of Installed Operators provides a high-level summary of the current Quay installation:

image:installed-operators-list.png[Installed Operators]

[id="upgrading-quayregistry"]
== Upgrading a QuayRegistry resource

When the {productname} Operator starts, it immediately looks for any `QuayRegistries` it can find in the namespace(s) it is configured to watch. When it finds one, the following logic is used:

* If `status.currentVersion` is unset, reconcile as normal.
* If `status.currentVersion` equals the Operator version, reconcile as normal.
* If `status.currentVersion` does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the `status.currentVersion` to the Operator's version once complete. If it cannot be upgraded, return an error and leave the `QuayRegistry` and its deployed Kubernetes objects alone.
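
To check which version the Operator has reconciled a registry to, you can read `status.currentVersion` directly. The following is a minimal sketch with an illustrative registry name and namespace:

[source,terminal]
----
$ oc get quayregistry example-registry -n quay-enterprise \
    -o jsonpath='{.status.currentVersion}'
----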

[id="upgrading-quayecosystem"]
== Upgrading a QuayEcosystem

Upgrades are supported from previous versions of the Operator which used the `QuayEcosystem` API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the `QuayEcosystem` for it to be migrated. A new `QuayRegistry` will be created for the Operator to manage, but the old `QuayEcosystem` will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing `QuayEcosystem` to a new `QuayRegistry`, use the following procedure.

.Procedure

. Add `"quay-operator/migrate": "true"` to the `metadata.labels` of the `QuayEcosystem`.
+
[source,terminal]
----
$ oc edit quayecosystem <quayecosystemname>
----
+
[source,yaml]
----
metadata:
  labels:
    quay-operator/migrate: "true"
----
. Wait for a `QuayRegistry` to be created with the same `metadata.name` as your `QuayEcosystem`. The `QuayEcosystem` will be marked with the label `"quay-operator/migration-complete": "true"`.

. After the `status.registryEndpoint` of the new `QuayRegistry` is set, access {productname} and confirm that all data and settings were migrated successfully.

. If everything works correctly, you can delete the `QuayEcosystem` and Kubernetes garbage collection will clean up all old resources.
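
The migration state can also be checked from the command line. The following is a minimal sketch, using the illustrative name `example-quayecosystem`:

[source,terminal]
----
$ oc get quayecosystem example-quayecosystem -o jsonpath='{.metadata.labels}'
$ oc get quayregistry example-quayecosystem -o jsonpath='{.status.registryEndpoint}'
----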

[id="reverting-quayecosystem-upgrade"]
=== Reverting QuayEcosystem Upgrade

If something goes wrong during the automatic upgrade from `QuayEcosystem` to `QuayRegistry`, follow these steps to revert back to using the `QuayEcosystem`:

.Procedure

. Delete the `QuayRegistry` using either the UI or `kubectl`:
+
[source,terminal]
----
$ kubectl delete -n <namespace> quayregistry <quayecosystem-name>
----

. If external access was provided using a `Route`, change the `Route` to point back to the original `Service` using the UI or `kubectl`.
[role="_abstract"]
The {productname} Operator uses synchronized versioning: each Operator version deploys a single, matching version of {productname} and its components. You can use this scheme to plan upgrades and keep components compatible.

[NOTE]
====
If your `QuayEcosystem` was managing the PostgreSQL database, the upgrade process migrates your data to a new PostgreSQL database managed by the upgraded Operator. Your old database is not changed or removed, but {productname} no longer uses it once the migration is complete. If there are issues during the data migration, the upgrade process exits, and it is recommended that you continue with your database as an unmanaged component.
====

[id="supported-quayecossytem-configurations-for-upgrades"]
=== Supported QuayEcosystem Configurations for Upgrades

The {productname} Operator reports errors in its logs and in `status.conditions` if migrating a `QuayEcosystem` component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in {productname}'s `config.yaml` file.

*Database*

Ephemeral database not supported (`volumeSize` field must be set).

*Redis*

Nothing special needed.

*External Access*

Only passthrough `Route` access is supported for automatic migration. Manual migration required for other methods.

* `LoadBalancer` without custom hostname:
After the `QuayEcosystem` is marked with the label `"quay-operator/migration-complete": "true"`, delete the `metadata.ownerReferences` field from the existing `Service` _before_ deleting the `QuayEcosystem` to prevent Kubernetes from garbage collecting the `Service` and removing the load balancer; see the sketch after this list. A new `Service` is created with the `metadata.name` format `<QuayEcosystem-name>-quay-app`. Edit the `spec.selector` of the existing `Service` to match the `spec.selector` of the new `Service` so that traffic to the old load balancer endpoint is directed to the new pods. You are now responsible for the old `Service`; the Quay Operator will not manage it.

* `LoadBalancer`/`NodePort`/`Ingress` with custom hostname:
A new `Service` of type `LoadBalancer` will be created with `metadata.name` format `<QuayEcosystem-name>-quay-app`. Change your DNS settings to point to the `status.loadBalancer` endpoint provided by the new `Service`.
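
For the `LoadBalancer` case without a custom hostname, removing the `metadata.ownerReferences` field from the old `Service` can be done with a JSON patch. The following is a minimal sketch; the `Service` name and namespace are illustrative:

[source,terminal]
----
$ oc patch service <old_service_name> -n <namespace> --type json \
    --patch '[{"op": "remove", "path": "/metadata/ownerReferences"}]'
----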

*Clair*

Nothing special needed.

*Object Storage*

`QuayEcosystem` did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported.

*Repository Mirroring*

Nothing special needed.
There is no field on the `QuayRegistry` custom resource which sets the version of {productname} to `deploy`; the Operator can only deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of {productname} on Kubernetes.
====