From ddbba67a0852f4ed715ba977579e4b5ecc220d33 Mon Sep 17 00:00:00 2001 From: Michele Baldessari Date: Tue, 27 Jan 2026 13:41:33 +0100 Subject: [PATCH] Fix a bunch of grammatical errors --- content/blog/2022-09-02-route.md | 2 +- ...023-11-17-argo-configmanagement-plugins.md | 2 +- content/blog/2023-12-05-nutanix-testing.md | 2 +- .../blog/2024-01-26-more-secrets-options.md | 4 +-- content/blog/2024-07-12-in-cluster-git.md | 2 +- .../cluster-sizing.md | 2 +- .../getting-started.md | 2 +- .../openshift-virtualization.md | 2 +- content/patterns/devsecops/cluster-sizing.md | 2 +- content/patterns/devsecops/devel-cluster.md | 2 +- .../devsecops/ideas-for-customization.md | 2 +- .../patterns/industrial-edge/demo-script.md | 2 +- .../cluster-sizing.md | 2 +- content/patterns/omnicloud/getting-started.md | 4 +-- content/patterns/regional-dr/_index.md | 32 +++++++++---------- .../openshift-virtualization.md | 2 +- 16 files changed, 33 insertions(+), 33 deletions(-) diff --git a/content/blog/2022-09-02-route.md b/content/blog/2022-09-02-route.md index 036d9d2bb..a1c8fc1f0 100644 --- a/content/blog/2022-09-02-route.md +++ b/content/blog/2022-09-02-route.md @@ -45,7 +45,7 @@ As you can see the spec describes the a **host:** or path to the route, the tar If we focus on the **host:** value, you see that we need to provide the Ingress_Domain to the host. You might ask yourself: *why is this a problem?* -If you manage just one cluster, and your application just runs on that cluster, you can just hard code the ingress domain and be on your merry way. But what happens when you are deploying this application to multiple clusters and their domains are different? Whoever is doing the Ops to deploy your application will have to change the Ingress_Domain to match the the cluster domain manually before deploying the application. +If you manage just one cluster, and your application runs only on that cluster, you can hard code the ingress domain and be on your merry way. But what happens when you are deploying this application to multiple clusters and their domains are different? Whoever is doing the Ops to deploy your application will have to change the Ingress_Domain manually to match the cluster domain before deploying the application. Let's go a step further and say you are using *GitOps*, and this definition lives in a *git* repository: what happens then? In our humble opinion it becomes a bit more complicated to make sure the ingress domain is set correctly. diff --git a/content/blog/2023-11-17-argo-configmanagement-plugins.md b/content/blog/2023-11-17-argo-configmanagement-plugins.md index 14f4a4d48..13d752bcc 100644 --- a/content/blog/2023-11-17-argo-configmanagement-plugins.md +++ b/content/blog/2023-11-17-argo-configmanagement-plugins.md @@ -133,7 +133,7 @@ cluster that will be running the demo can be discovered, so rather than requirin mechanism that extracted that information and stored it as a Helm variable. Meanwhile, the components of industrial-edge that used this information had very opinionated kustomize-based deployment mechanisms and workflows to update them. We did not want to change this mechanism at the time, so it was better for us to work out how to apply Helm templating -on top of a set of of manifests that kustomize had already rendered. The CMP 1.0 framework was suitable for this, and +on top of a set of manifests that kustomize had already rendered. The CMP 1.0 framework was suitable for this, and fairly straightforward to use, so we did.
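For readers who have not used it, a CMP 1.0 plugin was registered in the `argocd-cm` ConfigMap. A minimal sketch of such a registration follows; the plugin name and the `render.sh` wrapper (standing in for a script that runs `kustomize build` and then applies the Helm templating step) are illustrative, not industrial-edge's actual plugin:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  configManagementPlugins: |
    - name: kustomize-then-helm        # illustrative name
      generate:
        # generate must print the final manifests to stdout;
        # render.sh is a hypothetical wrapper that runs `kustomize build`
        # and applies Helm templating to the rendered output.
        command: ["sh", "-c"]
        args: ["./render.sh"]
```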
However, we did not, at that time, put any thought into parameterizing the use of config management plugins; making too radical a change to how the repo server worked would have been difficult, and would have required injecting a new (and unsupported) image into a product; not something to be undertaken lightly. diff --git a/content/blog/2023-12-05-nutanix-testing.md b/content/blog/2023-12-05-nutanix-testing.md index 7a6e2af53..b1f9e90be 100644 --- a/content/blog/2023-12-05-nutanix-testing.md +++ b/content/blog/2023-12-05-nutanix-testing.md @@ -23,6 +23,6 @@ Pattern consumers can now rest assured that the core pattern functionality will This would not be possible without the wonderful co-operation of Nutanix, who are doing all the work of deploying OpenShift and our pattern on their platform, executing the tests, and reporting the results. -To facilitate this, the patterns team have begun the process of open sourcing the downstream tests for all our patterns. Soon all tests will live alongside the the patterns they target, allowing them to be easily executed and/or improved by pattern consumers and platform owners. +To facilitate this, the patterns team have begun the process of open sourcing the downstream tests for all our patterns. Soon all tests will live alongside the patterns they target, allowing them to be easily executed and/or improved by pattern consumers and platform owners. Our thanks once again to Nutanix. \ No newline at end of file diff --git a/content/blog/2024-01-26-more-secrets-options.md b/content/blog/2024-01-26-more-secrets-options.md index b5dc63297..1e5b43424 100644 --- a/content/blog/2024-01-26-more-secrets-options.md +++ b/content/blog/2024-01-26-more-secrets-options.md @@ -62,7 +62,7 @@ loaded by the appropriate backend code. Users of the pattern framework will be able to change secrets backends as straightforwardly as we can make it. The only other change the user will need to make (to use another ESO backend) is to use the backend's mechanism to refer to keys. (For example: in Vault, -keys have have names like `secret/data/global/config-demo`; in the Kubernetes backend +keys have names like `secret/data/global/config-demo`; in the Kubernetes backend it would just be the secret object name that's being used to store the secret material, such as `config-demo`). @@ -297,7 +297,7 @@ and running them. `k8s_secret_utils` is used for loading both the `kubernetes` and `none` backends. It -### Changes to to vault_utils Ansible Role +### Changes to vault_utils Ansible Role Some code has been factored out of `vault_utils` and now lives in the `cluster_pre_check` and `find_vp_secrets` roles. A new task file has been added, `push_parsed_secrets.yaml`, that knows how to use diff --git a/content/blog/2024-07-12-in-cluster-git.md b/content/blog/2024-07-12-in-cluster-git.md index 842da4715..946781f94 100644 --- a/content/blog/2024-07-12-in-cluster-git.md +++ b/content/blog/2024-07-12-in-cluster-git.md @@ -64,7 +64,7 @@ There are fundamentally two ways to set up the in-cluster gitea server.
## Configuration -Once the the in-gitea cluster is enabled, its configuration will be done via a normal argo application +Once the in-cluster gitea is enabled, it is configured via a normal argo application that can be seen in the cluster-wide argo: ![gitea-argo-application](/images/gitea-argocd-application.png) diff --git a/content/patterns/ansible-edge-gitops-kasten/cluster-sizing.md b/content/patterns/ansible-edge-gitops-kasten/cluster-sizing.md index 9da0db13b..deeebbf61 100644 --- a/content/patterns/ansible-edge-gitops-kasten/cluster-sizing.md +++ b/content/patterns/ansible-edge-gitops-kasten/cluster-sizing.md @@ -46,7 +46,7 @@ Here's an inventory of what gets deployed by the **Ansible Edge GitOps** pattern The Ansible Edge GitOps pattern has been tested with a defined set of specifically tested configurations that represent the most common combinations that Red Hat OpenShift Container Platform (OCP) customers are using or deploying for the x86_64 architecture. -The Hub OpenShift Cluster is made up of the the following on the AWS deployment tested: +The Hub OpenShift Cluster for the tested AWS deployment is made up of the following: | Node Type | Number of nodes | Cloud Provider | Instance Type | :---- | :----: | :---- | :---- diff --git a/content/patterns/ansible-edge-gitops-kasten/getting-started.md b/content/patterns/ansible-edge-gitops-kasten/getting-started.md index b0dca23a6..833078e40 100644 --- a/content/patterns/ansible-edge-gitops-kasten/getting-started.md +++ b/content/patterns/ansible-edge-gitops-kasten/getting-started.md @@ -101,7 +101,7 @@ secrets: chpasswd: { expire: False } ``` -* A manifest file with an entitlement to run Ansible Automation Platform. This file (which will be a .zip file) will be posted to to Ansible Automation Platform instance to enable its use. Instructions for creating a manifest file can be found [here](https://www.redhat.com/en/blog/how-create-and-use-red-hat-satellite-manifest) +* A manifest file with an entitlement to run Ansible Automation Platform. This file (which will be a .zip file) will be posted to the Ansible Automation Platform instance to enable its use. Instructions for creating a manifest file can be found [here](https://www.redhat.com/en/blog/how-create-and-use-red-hat-satellite-manifest) ```yaml - name: aap-manifest diff --git a/content/patterns/ansible-edge-gitops-kasten/openshift-virtualization.md b/content/patterns/ansible-edge-gitops-kasten/openshift-virtualization.md index ea0b1af40..f118d1a56 100644 --- a/content/patterns/ansible-edge-gitops-kasten/openshift-virtualization.md +++ b/content/patterns/ansible-edge-gitops-kasten/openshift-virtualization.md @@ -339,7 +339,7 @@ Click on the "three dots" menu on the right, which will open a dialog like the f [![kubevirt411-vm-open-console](/images/ansible-edge-gitops/aeg-kubevirt411-con-ignition.png)](/images/ansible-edge-gitops/aeg-kubevirt411-con-ignition.png) -The virtual machine console view will either show a standard RHEL console login screen, or if the demo is working as designed, it will show the Ignition application running in kiosk mode.
If the console shows a standard RHEL login, it can be accessed using the the initial user name (`cloud-user` by default) and password (which is what is specified in the Helm chart Values as either the password specific to that machine group, the default cloudInit, or a hardcoded default which can be seen in the template [here](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml). On a VM created through the wizard or via `oc process` from a template, the password will be set on the VirtualMachine object in the `volumes` section. +The virtual machine console view will either show a standard RHEL console login screen, or if the demo is working as designed, it will show the Ignition application running in kiosk mode. If the console shows a standard RHEL login, it can be accessed using the initial user name (`cloud-user` by default) and password (which is specified in the Helm chart Values as either the password specific to that machine group, the default cloudInit, or a hardcoded default which can be seen in the template [here](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml)). On a VM created through the wizard or via `oc process` from a template, the password will be set on the VirtualMachine object in the `volumes` section. ### Initial User login (cloud-user) diff --git a/content/patterns/devsecops/cluster-sizing.md b/content/patterns/devsecops/cluster-sizing.md index 5e5dfad49..aa12ea442 100644 --- a/content/patterns/devsecops/cluster-sizing.md +++ b/content/patterns/devsecops/cluster-sizing.md @@ -36,7 +36,7 @@ The hub can be modified to deploy OpenShift Pipelines if needed. See Development The Secure Supply Chain pattern has been tested with a defined set of specifically tested configurations that represent the most common combinations that Red Hat OpenShift Container Platform (OCP) customers are using or deploying for the x86_64 architecture. -The Hub OpenShift Cluster is made up of the the following on the AWS deployment tested: +The Hub OpenShift Cluster for the tested AWS deployment is made up of the following: | Node Type | Number of nodes | Cloud Provider | Instance Type | :---- | :----: | :---- | :---- diff --git a/content/patterns/devsecops/devel-cluster.md b/content/patterns/devsecops/devel-cluster.md index 930cc0257..7a3fa4c58 100644 --- a/content/patterns/devsecops/devel-cluster.md +++ b/content/patterns/devsecops/devel-cluster.md @@ -66,4 +66,4 @@ There are a number of steps you can do to check that the components have deploye ## Next up -Deploy the the Multicluster DevSecOps [secured production cluster](/devsecops/production-cluster) +Deploy the Multicluster DevSecOps [secured production cluster](/devsecops/production-cluster) diff --git a/content/patterns/devsecops/ideas-for-customization.md b/content/patterns/devsecops/ideas-for-customization.md index eac9b3d87..53bf70077 100644 --- a/content/patterns/devsecops/ideas-for-customization.md +++ b/content/patterns/devsecops/ideas-for-customization.md @@ -32,4 +32,4 @@ While this can be done with any of the patterns the Multicluster DevSecOps patte 1. `values-smart-signs.yaml` -GitOps and DevSecOps would be used to make sure that applications would be deployed on the correct clusters. Some of the "clusters" might be light single-node clusters. Some applications be be deployed to several cluster groups. E.g.
the application to place information on a smart sign might also be deployed to the tram cars that also have smart signs in passenger compartments or the engineers compartment. +GitOps and DevSecOps would be used to make sure that applications are deployed on the correct clusters. Some of the "clusters" might be light single-node clusters. Some applications can be deployed to several cluster groups; for example, the application that places information on a smart sign might also be deployed to tram cars that have smart signs in the passenger compartments or the engineer's compartment. diff --git a/content/patterns/industrial-edge/demo-script.md b/content/patterns/industrial-edge/demo-script.md index d638de094..5fdba1517 100644 --- a/content/patterns/industrial-edge/demo-script.md +++ b/content/patterns/industrial-edge/demo-script.md @@ -8,7 +8,7 @@ the latest product and technology improvements. * Show Red Hat Operators being deployed * Show available Red Hat Pipelines for the Industrial Edge pattern -* Show the seed pipeline running and explain what is is doing +* Show the seed pipeline running and explain what it is doing * Demonstration of the Red Hat ArgoCD views * Show the openshift-gitops-server view * Show the datacenter-gitops-server view diff --git a/content/patterns/multicloud-gitops-Portworx/cluster-sizing.md b/content/patterns/multicloud-gitops-Portworx/cluster-sizing.md index f1336fad1..a918f5b44 100644 --- a/content/patterns/multicloud-gitops-Portworx/cluster-sizing.md +++ b/content/patterns/multicloud-gitops-Portworx/cluster-sizing.md @@ -42,7 +42,7 @@ Here's an inventory of what gets deployed by the Multicloud GitOps pattern on th The Multicloud GitOps pattern has been tested with a defined set of specifically tested configurations that represent the most common combinations that Red Hat OpenShift Container Platform (OCP) customers are using or deploying for the x86_64 architecture. -The datacenter hub OpenShift cluster is made up of the the following on the AWS deployment tested: +The datacenter hub OpenShift cluster for the tested AWS deployment is made up of the following: | Node Type | Number of nodes | Cloud Provider | Instance Type | :---- | :----: | :---- | :---- diff --git a/content/patterns/omnicloud/getting-started.md b/content/patterns/omnicloud/getting-started.md index b4491e036..d2a8dcac6 100644 --- a/content/patterns/omnicloud/getting-started.md +++ b/content/patterns/omnicloud/getting-started.md @@ -33,7 +33,7 @@ aliases: /omnicloud/getting-started/ ### Glossary -- Red Hat Openshift Container Platform : OCP is an enterprise Kubernetes platform that enables organizations to build, deploy, and manage containerized applications at scale. +- Red Hat OpenShift Container Platform : OCP is an enterprise Kubernetes platform that enables organizations to build, deploy, and manage containerized applications at scale. - Red Hat Ansible Automation Platform : AAP is an enterprise-grade automation solution that enables organizations to automate IT processes, application deployments, and infrastructure management across hybrid and multi-cloud environments. - Red Hat Advanced Cluster Management : centralized platform for managing multiple OpenShift clusters across on-premises, hybrid, and multi-cloud environments. - Hub Cluster : Control plane cluster which deploys & manages OpenShift clusters on the targeted cloud or on-prem environment.
@@ -280,7 +280,7 @@ For connected environments: [https://console.redhat.com/openshift/downloads] ``` -- Login to the Openshift cluster using: +- Log in to the OpenShift cluster using: ``` $ oc login --token= --server= diff --git a/content/patterns/regional-dr/_index.md b/content/patterns/regional-dr/_index.md index 6c89099aa..30b377842 100644 --- a/content/patterns/regional-dr/_index.md +++ b/content/patterns/regional-dr/_index.md @@ -22,12 +22,12 @@ As more and more institutions and mission-critical organizations move to the cloud, the potential impact of a provider failure, even one affecting only a single region, is very high. -This pattern is designed to prove the resiliency capabilities of Red Hat Openshift +This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift in such a scenario. -The Regional Disaster Recovery Pattern, is designed to setup an multiple instances -of Openshift Container Platform cluster connectedbetween them to prove multi-region -resiliency by maintaing the application running in the event of a regional failure. +The Regional Disaster Recovery Pattern is designed to set up multiple OpenShift Container Platform clusters connected to one another, proving multi-region resiliency by keeping the application running in the event of a regional failure. In this scenario we will be working in a Regional Disaster Recovery setup, and the synchronization parameters can be specified in the values file. @@ -67,7 +67,7 @@ so consider this when designing your infrastructure deployment on the values files of the pattern). This is the main reason why this RegionalDR is configured in an Active-Passive mode. -It requires an already existing Openshift cluster, which will be used for installing the +It requires an already existing OpenShift cluster, which will be used for installing the pattern, deploying the active and passive clusters, and managing the application scheduling. @@ -85,10 +85,10 @@ clusters. The _Regional DR Pattern_ leverages [Red Hat OpenShift Data Foundation][odf]'s [Regional DR][rdr] solution, automating application failover between -[Red Had Advanced Cluster Management][acm] managed clusters in different regions. +[Red Hat Advanced Cluster Management][acm] managed clusters in different regions.
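To make the synchronization settings concrete, a Regional DR setup of this kind is driven by a `DRPolicy` resource along these lines (a minimal sketch; the cluster names and interval are illustrative, and the PVCs to protect are selected separately via a `DRPlacementControl`):

```yaml
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPolicy
metadata:
  name: odr-policy-5m            # illustrative name
spec:
  drClusters:                    # the ACM-managed clusters, in different regions
    - ocp-primary
    - ocp-secondary
  schedulingInterval: 5m         # how often protected volumes are replicated
```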
- The pattern is kick-started by Ansible and uses ACM to oversee and orchestrate the process -- The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF +- The demo application uses MongoDB, writing its data to a Persistent Volume Claim backed by ODF - We have developed a DR trigger which will be used to start the DR process - The end user needs to configure which PVs need synchronization and the latencies - ACS can be used for eventual policies @@ -96,11 +96,11 @@ The _Regional DR Pattern_ leverages [Red Hat OpenShift Data Foundation][odf]'s hibernated clusters ready to be used ### Red Hat Technologies -- [Red Hat Openshift Container Platform][ocp] -- [Red Hat Openshift Data Foundation][odf] -- [Red Hat Openshift GitOps][ops] -- [Red Hat Openshift Advanced Cluster Management][acm] -- [Red Hat Openshift Advanced Cluster Security][acs] +- [Red Hat OpenShift Container Platform][ocp] +- [Red Hat OpenShift Data Foundation][odf] +- [Red Hat OpenShift GitOps][ops] +- [Red Hat OpenShift Advanced Cluster Management][acm] +- [Red Hat OpenShift Advanced Cluster Security][acs] ## Operators and Technologies this Pattern Uses - [Regional DR Trigger Operator][opr] ## Tested on -- Red Hat Openshift Container Platform v4.13 -- Red Hat Openshift Container Platform v4.14 -- Red Hat Openshift Container Platform v4.15 +- Red Hat OpenShift Container Platform v4.13 +- Red Hat OpenShift Container Platform v4.14 +- Red Hat OpenShift Container Platform v4.15 ## Architecture This section explains the architecture deployed by this Pattern and its Logical and Physical perspectives. ## Installation -This patterns is designed to be installed in an Openshift cluster which will +This pattern is designed to be installed in an OpenShift cluster which will work as the orchestrator for the other clusters involved. The Advanced Cluster Manager installed will neither run the applications nor store any data from them, but it will take care of the plumbing of the various clusters involved, diff --git a/content/patterns/virtualization-starter-kit/openshift-virtualization.md b/content/patterns/virtualization-starter-kit/openshift-virtualization.md index 1215c97e1..584313541 100644 --- a/content/patterns/virtualization-starter-kit/openshift-virtualization.md +++ b/content/patterns/virtualization-starter-kit/openshift-virtualization.md @@ -281,7 +281,7 @@ Click on the "three dots" menu on the right, which will open a dialog like the f [![show-vm-open-console](/images/virtualization-starter-kit/aeg-open-vm-console.png)](/images/virtualization-starter-kit/aeg-open-vm-console.png) -The virtual machine console view will show a standard RHEL console login screen. It can be accessed using the the initial user name (`cloud-user` by default) and password (which is what is specified in the Helm chart Values as either the password specific to that machine group, or the default cloudInit. +The virtual machine console view will show a standard RHEL console login screen. It can be accessed using the initial user name (`cloud-user` by default) and password (which is specified in the Helm chart Values as either the password specific to that machine group, or the default cloudInit). ### Initial User login (cloud-user)
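The `cloud-user` password ultimately comes from the cloudInit data on the VirtualMachine object. A minimal sketch of the relevant `volumes` entry, assuming the standard KubeVirt `cloudInitNoCloud` source (the password value here is illustrative):

```yaml
volumes:
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |
        #cloud-config
        user: cloud-user
        password: 'changeme'            # illustrative; set per machine group in Values
        chpasswd: { expire: False }     # keep the password from expiring at first login
```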