diff --git a/content/patterns/industrial-edge/_index.md b/content/patterns/industrial-edge/_index.md
index 7aa36298e..ee7166d26 100644
--- a/content/patterns/industrial-edge/_index.md
+++ b/content/patterns/industrial-edge/_index.md
@@ -8,6 +8,7 @@ rh_products:
- Red Hat Advanced Cluster Management
- Red Hat Quay
- Red Hat AMQ
+- Red Hat OpenShift AI
industries:
- Industrial
- Manufacturing
@@ -23,13 +24,34 @@ ci: manuela
# Industrial Edge Pattern
-_Red Hat Validated Patterns are detailed deployments created for different use cases. These pre-defined computing configurations bring together the Red Hat portfolio and technology ecosystem to help you stand up your architectures faster. Example application code is provided as a demonstration, along with the various open source projects and Red Hat products required for the deployment to work. Users can then modify the pattern for their own specific application._
-
-**Use Case:** Boosting manufacturing efficiency and product quality with artificial intelligence/machine learning (AI/ML) out to the edge of the network.
-
-**Background:** Microcontrollers and other types of simple computers have long been widely used on factory floors and processing plants to monitor and control the many machines required to implement the many machines required to implement many modern manufacturing workflows.
-The manufacturing industry has consistently used technology to fuel innovation, production optimization, and operations. However, historically, control systems were mostly “dumb” in that they mostly took actions in response to pre-programmed triggers and heuristics. For example, predictive maintenance commonly took place on either a set length of time or the number of hours was in service.
-Supervisory control and data acquisition (SCADA) has often been used to collectively describe these hardware and software systems, which mostly functioned independently of the company’s information technology (IT) systems. Companies increasingly see the benefit of bridging these operational technology (OT) systems with their IT. Factory systems can be much more flexible as a result. They can also benefit from newer technologies such as AI/ML, thereby allowing for tasks like maintenance to be scheduled based on multiple real-time measurements rather than simple programmed triggers while bringing processing power closer to data.
+_Red Hat Validated Patterns are detailed deployments created for different use
+cases. These pre-defined computing configurations bring together the Red Hat
+portfolio and technology ecosystem to help you stand up your architectures
+faster. Example application code is provided as a demonstration, along with the
+various open source projects and Red Hat products required for the deployment
+to work. Users can then modify the pattern for their own specific application._
+
+**Use Case:** Boosting manufacturing efficiency and product quality with
+artificial intelligence/machine learning (AI/ML) out to the edge of the
+network.
+
+**Background:** Microcontrollers and other types of simple computers have long
+been widely used on factory floors and in processing plants to monitor and
+control the many machines required to implement modern manufacturing
+workflows. The manufacturing industry has consistently used technology to fuel
+innovation, production optimization, and operations. Historically, however,
+control systems were mostly “dumb”: they took actions in response to
+pre-programmed triggers and heuristics. For example, predictive maintenance
+was commonly scheduled based either on a set length of time or on the number
+of hours a machine had been in service. Supervisory control and data
+acquisition (SCADA) has often been used to collectively describe these hardware
+and software systems, which mostly functioned independently of the company’s
+information technology (IT) systems. Companies increasingly see the benefit of
+bridging these operational technology (OT) systems with their IT. Factory
+systems can be much more flexible as a result. They can also benefit from newer
+technologies such as AI/ML, thereby allowing for tasks like maintenance to be
+scheduled based on multiple real-time measurements rather than simple
+programmed triggers while bringing processing power closer to data.
## Solution Overview
@@ -40,17 +62,23 @@ Supervisory control and data acquisition (SCADA) has often been used to collecti
_Figure 1. Industrial edge solution overview._
-Figure 1 provides an overview of the industrial edge solution. It is applicable across a number of verticals including manufacturing.
+Figure 1 provides an overview of the industrial edge solution. It is applicable
+across a number of verticals, including manufacturing.
This solution:
- Provides real-time insights from the edge to the core datacenter
- Secures GitOps and DevOps management across core and factory sites
- Provides AI/ML tools that can reduce maintenance costs
-Different roles within an organization have different concerns and areas of focus when working with this distributed AL/ML architecture across two logical types of sites: the core datacenter and the factories. (As shown in Figure 2.)
+Different roles within an organization have different concerns and areas of
+focus when working with this distributed AI/ML architecture across two logical
+types of sites: the core datacenter and the factories (as shown in Figure 2).
-- **The core datacenter**. This is where data scientists, developers, and operations personnel apply the changes to their models, application code, and configurations.
-- **The factories**. This is where new applications, updates and operational changes are deployed to improve quality and efficiency in the factory..
+- **The core datacenter**. This is where data scientists, developers, and
+ operations personnel apply the changes to their models, application code, and
+ configurations.
+- **The factories**. This is where new applications, updates, and operational
+  changes are deployed to improve quality and efficiency in the factory.
[](/images/ai-ml-architecture.png)
@@ -63,9 +91,22 @@ _Figure 3. Overall data flows of solution._
Figure 3 provides a different high-level view of the solution with a focus on the two major dataflow streams.
-1. Moving sensor data and events from the operational/shop floor edge towards the core. The idea is to centralize as much as possible, but decentralize as needed. For example, sensitive production data might not be allowed to leave the premises. Think of a temperature curve of an industrial oven; it might be considered crucial intellectual property of the customer. Or the sheer amount of raw data (maybe 10,000 events per second) might be too expensive to transfer to a cloud datacenter. In the above diagram, this is from left to right. In other diagrams the edge / operational level is usually at the bottom and the enterprise/cloud level at the top. Thus, this is also referred to as northbound traffic.
+1. Moving sensor data and events from the operational/shop floor edge towards
+ the core. The idea is to centralize as much as possible, but decentralize as
+ needed. For example, sensitive production data might not be allowed to leave
+ the premises. Think of a temperature curve of an industrial oven; it might
+ be considered crucial intellectual property of the customer. Or the sheer
+ amount of raw data (maybe 10,000 events per second) might be too expensive
+   to transfer to a cloud datacenter. In the above diagram, this flows from
+   left to right. In other diagrams, the edge / operational level is usually
+   at the bottom and the enterprise/cloud level at the top; thus, this is
+   also referred to as northbound traffic.
-2. Push code, configuration, master data, and machine learning models from the core (where development, testing, and training is happening) towards the edge / shop floors. As there might be 100 plants with 1000s of lines, automation and consistency is key. In the above diagram, this is from right to left, in a top/down view, it is called southbound traffic.
+2. Pushing code, configuration, master data, and machine learning models from
+   the core (where development, testing, and training happen) towards the
+   edge / shop floors. As there might be 100 plants with thousands of lines,
+   automation and consistency are key. In the above diagram, this flows from
+   right to left; in a top-down view, it is called southbound traffic.
## Logical Diagrams
@@ -89,6 +130,8 @@ It includes, among other components::
[**Red Hat Data Foundation**](https://www.redhat.com/en/technologies/cloud-computing/openshift-data-foundation?intcmp=7013a00000318EWAAY) is software-defined storage for containers. Engineered as the data and storage services platform for Red Hat OpenShift, Red Hat Data Foundation helps teams develop and deploy applications quickly and efficiently across clouds. It is based on the open source Ceph, Rook, and Noobaa projects.
+[**Red Hat OpenShift AI**](https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai) is a flexible, scalable artificial intelligence (AI) and machine learning (ML) platform that enables enterprises to create and deliver AI-enabled applications at scale across hybrid cloud environments.
+
[**Red Hat Advanced Cluster Management for Kubernetes (RHACM)**](https://www.redhat.com/en/technologies/management/advanced-cluster-management?intcmp=7013a00000318EWAAY) controls clusters and applications from a single console, with built-in security policies. It extends the value of Red Hat OpenShift by deploying applications, managing multiple clusters, and enforcing policies across multiple clusters at scale.
[**Red Hat Enterprise Linux**](https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux?intcmp=7013a00000318EWAAY) is the world’s leading enterprise Linux platform. It’s an open source operating system (OS). It’s the foundation from which you can scale existing apps—and roll out emerging technologies—across bare-metal, virtual, container, and all types of cloud environments.
@@ -101,21 +144,44 @@ It includes, among other components::
_Figure 5: Industrial Edge solution showing messaging and ML components schematically._
-As shown in Figure 5, data coming from sensors is transmitted over MQTT (Message Queuing Telemetry Transport) to Red Hat AMQ, which routes sensor data for two purposes: model development in the core data center and live inference in the factory data centers. The data is then relayed on to Red Hat AMQ for further distribution within the factory datacenter and out to the core datacenter. MQTT is the most commonly used messaging protocol for Internet of Things (IoT) applications.
-
-The lightweight Apache Camel K, a lightweight integration framework built on Apache Camel that runs natively on Kubernetes, provides MQTT (Message Queuing Telemetry Transport) integration that normalizes and routes sensor data to the other components.
-
-That sensor data is mirrored into a data lake that is provided by Red Hat OpenShift Data Foundation. Data scientists then use various tools from the open source Open Data Hub project to perform model development and training, pulling and analyzing content from the data lake into notebooks where they can apply ML frameworks.
-
-Once the models have been tuned and are deemed ready for production, the artifacts are committed to git which kicks off an image build of the model using OpenShift Pipelines (based on the upstream Tekton), a serverless CI/CD system that runs pipelines with all the required dependencies in isolated containers.
-
-The model image is pushed into OpenShift’s integrated registry running in the core datacenter which is then pushed back down to the factory datacenter for use in inference.
+As shown in Figure 5, data coming from sensors is transmitted over MQTT
+(Message Queuing Telemetry Transport) to Red Hat AMQ, which routes sensor data
+for two purposes: model development in the core datacenter and live inference
+in the factory datacenters. The data is then relayed for further distribution
+within the factory datacenter and out to the core datacenter. MQTT is the most
+commonly used messaging protocol for Internet of Things (IoT) applications.
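To make the transport concrete, here is a minimal sketch of a sensor publishing a reading over MQTT. The payload fields, broker host, and topic name are illustrative assumptions, not the pattern's actual values; the publish command is shown commented out because it assumes the `mosquitto-clients` tools and a reachable broker.

```sh
# Compose a hypothetical sensor reading as JSON (field names are illustrative)
PAYLOAD=$(printf '{"sensor":"machine-sensor-2","vibration":0.42,"ts":%s}' "$(date +%s)")
echo "$PAYLOAD"

# Publish it to the factory broker over MQTT (host and topic are assumptions):
# mosquitto_pub -h broker-amq-mqtt -p 1883 -t iot-sensor/sw/vibration -m "$PAYLOAD"
```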
+
+Apache Camel K, a lightweight integration framework built on Apache Camel that
+runs natively on Kubernetes, provides MQTT integration that normalizes and
+routes sensor data to the other components.
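As an illustrative sketch of what such an integration can look like (the resource name, MQTT topic, broker URL, and Kafka endpoint below are assumptions for this example, not the pattern's actual configuration), a Camel K integration can be declared in YAML:

```yaml
# Hypothetical Camel K integration: consume sensor messages from an MQTT topic,
# unmarshal the JSON payload, and forward it to a Kafka/AMQ destination.
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: sensor-normalizer
spec:
  flows:
    - from:
        uri: "paho:iot-sensor/sw/vibration?brokerUrl=tcp://broker-amq-mqtt:1883"
        steps:
          - unmarshal:
              json: {}
          - to: "kafka:normalized-sensor-events?brokers=my-cluster-kafka:9092"
```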
+
+That sensor data is mirrored into a data lake that is provided by Red Hat
+OpenShift Data Foundation. Data scientists then use various tools from the open
+source Open Data Hub project to perform model development and training, pulling
+and analyzing content from the data lake into notebooks where they can apply ML
+frameworks.
+
+Once the models have been tuned and are deemed ready for production, the
+artifacts are committed to Git, which kicks off an image build of the model
+using OpenShift Pipelines (based on the upstream Tekton project), a serverless
+CI/CD system that runs pipelines with all the required dependencies in
+isolated containers.
+
+The model image is pushed into OpenShift’s integrated registry running in the
+core datacenter and is then pushed back down to the factory datacenter for use
+in inference.
[](/images/industrial-edge/edge-mfg-devops-network-sd.png)
_Figure 6: Industrial Edge solution showing network flows schematically._
-As shown in Figure 6, in order to protect the factories and operations infrastructure from cyber attacks, the operations network needs to be segregated from the enterprise IT network and the public internet. The factory machinery, controllers, and devices need to be further segregated from the factory data center and need to be protected behind a firewall.
+As shown in Figure 6, to protect the factories and operations infrastructure
+from cyberattacks, the operations network needs to be segregated from the
+enterprise IT network and the public internet. The factory machinery,
+controllers, and devices need to be further segregated from the factory
+datacenter and protected behind a firewall.
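Inside the factory cluster itself, part of this segregation can also be expressed declaratively. The `NetworkPolicy` below is a minimal, illustrative sketch (the namespace name is an assumption, and real OT/IT segregation additionally relies on firewalls and physical network zoning); it limits ingress to pods in a namespace to traffic originating within that same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-factory-ingress
  namespace: stormshift-line-1   # hypothetical factory namespace
spec:
  podSelector: {}                # apply to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}        # allow traffic only from pods in this namespace
```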
### Edge manufacturing with GitOps
@@ -123,25 +189,35 @@ As shown in Figure 6, in order to protect the factories and operations infrastru
_Figure 7: Industrial Edge solution showing a schematic view of the GitOps workflows._
-GitOps is an operational framework that takes DevOps best practices used for application development such as version control, collaboration, compliance, and CI/CD, and applies them to infrastructure automation. Figure 6 shows how, for these industrial edge manufacturing environments, GitOps provides a consistent, declarative approach to managing individual cluster changes and upgrades across the centralized and edge sites. Any changes to configuration and applications can be automatically pushed into operational systems at the factory.
+GitOps is an operational framework that takes DevOps best practices used for
+application development such as version control, collaboration, compliance, and
+CI/CD, and applies them to infrastructure automation. Figure 7 shows how, for
+these industrial edge manufacturing environments, GitOps provides a consistent,
+declarative approach to managing individual cluster changes and upgrades across
+the centralized and edge sites. Any changes to configuration and applications
+can be automatically pushed into operational systems at the factory.
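Under the hood, each managed deployment is described declaratively. A minimal Argo CD `Application` resource might look roughly like the following; the repository URL and resource names are placeholders for sketching purposes, not the pattern's actual values:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: manuela-tst-all
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/industrial-edge.git  # placeholder fork URL
    targetRevision: main
    path: charts/datacenter/manuela-tst
  destination:
    server: https://kubernetes.default.svc
    namespace: manuela-tst-all
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```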
### Secrets exchange and management
-[](/images/industrial-edge/edge-mfg-security-sd.png)
-
-_Figure 8: Schematic view of secrets exchange and management in an Industrial Edge solution._
-
-Authentication is used to securely deploy and update components across multiple locations. The credentials are stored using a secrets management solution like Hashicorp Vault. The external secrets component is used to integrate various secrets management tools (AWS Secrets Manager, Google Secrets Manager, Azure Key Vault). As shown in Figure 7, these secrets are then passed to Red Hat Advanced Cluster Management for Kubernetes (RHACM) which pushes the secrets to the RHACM agent at the edge clusters based on policy. RHACM is also responsible for providing secrets to OpenShift for GitOps workflows( using Tekton and Argo CD).
-
-For logical, physical and dataflow diagrams, please see excellent work done by the [Red Hat Portfolio Architecture team](https://www.redhat.com/architect/portfolio/detail/26)
+Authentication is used to securely deploy and update components across multiple
+locations. The credentials are stored using a secrets management solution, such
+as HashiCorp Vault, on the hub. The external secrets component is used to
+integrate various secrets management tools (AWS Secrets Manager, Google Secrets
+Manager, Azure Key Vault). These secrets are then pulled from the hub's Vault
+onto the different factory clusters.
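As an illustration of that pull flow, an External Secrets resource on a factory cluster declares which key to fetch from the hub's Vault. The store name, Vault path, and secret names below are assumptions made for this sketch:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: factory-pull-secret
  namespace: manuela-tst-all
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend          # hypothetical store pointing at the hub's Vault
    kind: ClusterSecretStore
  target:
    name: factory-pull-secret    # Kubernetes Secret created on the factory cluster
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/hub/registry   # hypothetical Vault path
        property: password
```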
## Demo Scenario
-This scenario is derived from the [MANUela work](https://github.com/sa-mw-dach/manuela) done by Red Hat Middleware Solution Architects in Germany in 2019/20. The name MANUela stands for MANUfacturing Edge Lightweight Accelerator, you will see this acronym in a lot of artifacts. It was developed on a platform called [stormshift](https://github.com/stormshift/documentation).
+This scenario is derived from the [MANUela
+work](https://github.com/sa-mw-dach/manuela) done by Red Hat Middleware
+Solution Architects in Germany in 2019/20. The name MANUela stands for
+MANUfacturing Edge Lightweight Accelerator; you will see this acronym in a lot
+of artifacts. It was developed on a platform called
+[stormshift](https://github.com/stormshift/documentation).
-The demo has been updated 2021 with an advanced GitOps framework.
+The demo has been updated with an advanced GitOps framework.
-[](/images/industrial-edge/highleveldemodiagram.png)
+[](/images/industrial-edge/highleveldemodiagram-v2.png)
_Figure 9. High-level demo summary. The specific example is machine condition monitoring based on sensor data in an industrial setting, using AI/ML. It could be easily extended to other use cases such as predictive maintenance, or other verticals._
@@ -166,12 +242,3 @@ To deploy the Industrial Edge Pattern demo yourself, follow the [demo script](de
View and download all of the diagrams above in our open source tooling site.
[[Open Diagrams]](https://www.redhat.com/architect/portfolio/tool/index.html?#gitlab.com/osspa/portfolio-architecture-examples/-/raw/main/diagrams/edge-manufacturing-efficiency.drawio)
-
-
-## Pattern Structure
-
-
-
-## Presentation
-
-View a presentation slide deck about Industrial Edge [here](https://speakerdeck.com/rhvalidatedpatterns/industrial-edge)
diff --git a/content/patterns/industrial-edge/application.md b/content/patterns/industrial-edge/application.md
index 35f4f3b34..d7b758fef 100644
--- a/content/patterns/industrial-edge/application.md
+++ b/content/patterns/industrial-edge/application.md
@@ -8,172 +8,156 @@ aliases: /industrial-edge/application/
## Background
-Up until now the Industrial Edge 2.0 validated patterns has focused primarily on successfully deploying the architectural pattern. Now it is time to see GitOps and DevOps in action as we go through a number of demonstrations to change both configuration information and the applications that we are deploying.
+Up until now, the Industrial Edge 2.0 validated pattern has focused primarily
+on successfully deploying the architectural pattern. Now it is time to see
+GitOps and DevOps in action as we go through a number of demonstrations to
+change both configuration information and the applications that we are
+deploying.
-If you have already deployed the data center and optionally a factory (edge) cluster, then you have already seen several applications deployed in the OpenShift GitOps console. If you haven't done this then we recommend you deploy the data center after you have setup the Quay repositories described below.
+If you have already deployed the data center and optionally a factory (edge)
+cluster, then you have already seen several applications deployed in the
+OpenShift GitOps console.
## Prerequisite preparation
-### Quay public registry setup
-
-In the [Quay.io](https://quay.io) registry please ensure you have the following repositories and that they are set for public access. Replace your-org with the name of your organization or Quay.io username.
-
-* _your-org_/iot-software-sensor
-* _your-org_/iot-consumer
-* _your-org_/iot-frontend
-* _your-org_/iot-anomaly-detection
-* _your-org_/http-ionic
-
-These repositories are needed in order to provide container images built at the data center to be consumed by the factories (edge).
-
-### Local laptop/workstation
-
-Make sure you have `git` and OpenShift's `oc` command-line clients.
-
### OpenShift Cluster
-Make sure you have the `kubeadmin` administrator login for the data center cluster. Use this or the `kubeconfig` (export the path) to provide administrator access to your data center cluster. It is not required that you have access to the edge (factory) clusters. GitOps and DevOps will take care of the edge clusters.
-
-### GitHub account
-
-You will need to login into GitHub and be able to fork two repositories.
-
-* validatedpatterns/industrial-edge
-* validatedpatterns-demos/manuela-dev
+Make sure you have the `kubeadmin` administrator login for the data center
+cluster. Use this or the `kubeconfig` (export the path) to provide
+administrator access to your data center and factory/edge clusters.
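For example, to point `oc` at the data center cluster via its kubeconfig (the path below is an assumption; substitute wherever your installer wrote the file):

```sh
# Path is illustrative; use your installer's actual output directory
export KUBECONFIG="$HOME/clusters/datacenter/auth/kubeconfig"
echo "$KUBECONFIG"

# Verify the login (requires a reachable cluster):
# oc whoami
```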
## Configuration changes with GitOps
-There will may be times where you need to change the configuration of some of the edge devices in one or more of your factories. In our example, we have various sensors at the factory. Modification can be made to these sensors using `ConfigMaps`.
+There may be times when you need to change the configuration of some of the
+edge devices in one or more of your factories. In our example, we have various
+sensors at the factory. Modifications can be made to these sensors using
+`ConfigMaps`.
-[](/images/industrial-edge/highleveldemodiagram.png)
+[](/images/industrial-edge/highleveldemodiagram-v2.png)
-In this demonstration we will turn on a temperature sensor for sensor #2. We will first do this in the data center because this will demonstrate the power of GitOps without having to involve the edge/factory. However if you do have an factory joined using Advanced Cluster Management, then the changes will make their way out to the factory. But it is not necessary for the demo as we have a complete test environment on the data center.
+In this demonstration we will turn on a temperature sensor for sensor #2. We
+will first do this in the data center because this demonstrates the power of
+GitOps without having to involve the edge/factory. However, if you do have a
+factory joined using Advanced Cluster Management, then the changes will make
+their way out to the factory. This is not necessary for the demo because we
+have a complete test environment in the data center.
-Make sure you are able to see the dashboard application in a tab on your browser. You can find the URL for the dashboard application by looking at the following in your OpenShift console.
+Make sure you are able to see the dashboard application in a tab on your
+browser. You can find the URL for the dashboard application by looking at the
+following in your OpenShift console.
[](/images/industrial-edge/network-routing-line-dashboard.png)
-Select Networking->Routes on the left-hand side of the console. Using the Projects pull-down, select `manuela-tst-all`. Click on the URL under the Location column for the route Name `line-dashboard`. this will launch the line-dashboard monitoring application in a browser tab. The URL will look like:
+Select Networking->Routes on the left-hand side of the console. Using the
+Projects pull-down, select `manuela-tst-all`. Click on the URL under the
+Location column for the route named `line-dashboard`. This will launch the
+line-dashboard monitoring application in a browser tab. The URL will look like:
`line-dashboard-manuela-tst-all.apps.*cluster-name*.*domain*`
-Once the the application is open in your browser, click on the “Realtime Data” Navigation on the left and wait a bit. Data should be visualized as received. Note that there is only vibration data shown! If you wait a bit more (usually every 2-3 minutes), you will see an anomaly and alert on it.
+Once the application is open in your browser, click on “Realtime Data” in the
+navigation on the left and wait a bit. Data should be visualized as it is
+received. Note that only vibration data is shown! If you wait a bit longer
+(usually 2-3 minutes), you will see an anomaly and an alert raised for it.
[](/images/industrial-edge/app-line-dashboard-before.png)
-Now let's turn on the temperature sensor. Using you favorite editor, edit the following file:
+Now let's turn on the temperature sensor. Go to the gitea link in the nine-box
+menu and log in using the `gitea_admin` user and the autogenerated password,
+which can be found in the secret called `gitea-admin-secret` in the `vp-gitea`
+namespace:
-```sh
-industrial-edge/charts/data-center/manuela-test/templates/machine-sensor/machine-sensor-2-configmap.yaml
+[](/images/industrial-edge/gitea-signin.png)
+
+You can run the following command to obtain the gitea user's password automatically:
+
+```sh
+oc extract -n vp-gitea secret/gitea-admin-secret --to=- --keys=password 2>/dev/null
```
-Change `SENSOR_TEMPERATURE_ENABLED: "false"` to `SENSOR_TEMPERATURE_ENABLED: "true"`.
+In the `industrial-edge` repository, edit the file called
+`charts/datacenter/manuela-tst/templates/machine-sensor/machine-sensor-2-configmap.yaml`
+and change `SENSOR_TEMPERATURE_ENABLED: "false"` to `SENSOR_TEMPERATURE_ENABLED: "true"`.
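For reference, after the edit the relevant part of that ConfigMap looks roughly like the following. Only the `SENSOR_TEMPERATURE_ENABLED` value matters here; the resource name and the other fields are a sketch, not the file's exact contents:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: machine-sensor-2        # illustrative; check the actual template
  namespace: manuela-tst-all
data:
  SENSOR_VIBRATION_ENABLED: "true"     # illustrative neighboring key
  SENSOR_TEMPERATURE_ENABLED: "true"   # was "false" before the edit
```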
-Then change and commit this to your git repository so that the change will be picked up by OpenShift GitOps (ArgoCD).
+[](/images/industrial-edge/gitea-edit.png)
+[](/images/industrial-edge/gitea-commit.png)
-```sh
-git add industrial-edge-charts/data/center/manuela-test/templates/machine-sensor/machine-sensor-2-configmap.yaml
-git commit -m "Turned on temprature sensor for machine sensor #2"
-git push
-```
+Commit this change in the gitea web interface so that it will be picked up by
+OpenShift GitOps (Argo CD).
-You can track the progress of this commit/push in your OpenShift GitOps console in the `manuela-test-all` application. You will notice components regarding machine-sensor-2 getting sync-ed. You can speed this up by manually pressing the Refresh button.
+You can track the progress of this commit/push in your OpenShift GitOps console
+in the `manuela-test-all` application. You will notice components related to
+machine-sensor-2 getting synced. You can speed this up by manually pressing
+the Refresh button.
[](/images/industrial-edge/argocd-line-dashboard.png)
The dashboard app should pick up the change automatically, once data from the temperature sensor is received.
Sometimes a page/tab refresh is needed for the change to be picked up.
-[](/images/industrial-edge/app-line-dashboard.png)
+[](/images/industrial-edge/argocd-machine-sensor2.png)
## Application changes using DevOps
-The `line-dashboard` application has temperature sensors. In this demonstration we are going to make a simple change to that application, rebuild and redeploy it. In the `manuela-dev` repository there is a file `components/iot-consumer/index.js`. This JavaScript program consumes message data coming from the line servers and one of functions it performs is to check the temperature to see if it has exceeded a threshold. There is three lines of code in there that does some Celsius to Fahrenheit conversion.
-
-Depending on the state of your `manuela-dev` repository this may or may not be commented out. Ideally for the demonstration you would want it uncommented and therefore effective. What this means is that while the labels on the frontend application are showing Celsius, the data is actually in Fahrenheit. This is a good place to start because that data won't make any sense.
-
-[](/images/industrial-edge/fahrenheit-temp.png)
-
-Machines running over 120C is not normal. However examining the code explains why. There is an erroneous conversion taking place. What must happen is we remove or comment out this code.
-
-[](/images/industrial-edge/uncommented-code.png)
+The `line-dashboard` application has temperature sensors. In this demonstration
+we are going to make a simple change to that application, then rebuild and
+redeploy it. In the `manuela-dev` repository there is a file
+`components/iot-frontend/src/app/app.component.html`. Let's change
+`IoT Dashboard` to something else, say,
+`IoT Dashboard - DEVOPS was here!`. We do this directly in the
+gitea web interface by clicking on the edit icon for the file:
-If you haven't deployed the uncommented code it might be best to prepare that before the demonstration. After pointing out the problem, comment out the code.
+[](/images/industrial-edge/gitea-iot-edit.png)
-[](/images/industrial-edge/commented-code.png)
-
-Now that the erroneous conversion code has been commented out it is is time rebuild and redeploy. First commit and push the code to the repository. While in the directory for your `manuela-dev` repository run the following commands. The `components/iot-consumer/index.js` file should be the only changed file.
-
-```sh
-git add components/iot-consumer/index.js
-git commit -m "commented out C to F temp conversion"
-git push
-```
-
-Now its time to kick off the CI pipeline. Due to the need for GitHub secrets and Quay secrets as part of this process, we currently can't use the OpenShift console's Pipelines to kick off the pipeline in the demo environment. Instead, use the command-line. While in the `industrial-edge` repository directory, run the following:
+We can now kick off the pipeline called `build-and-test-iot-frontend`, which will do the following:
+
+1. Rebuild the image from the manuela-dev code
+2. Push the change to the hub datacenter in the `manuela-tst-all` namespace
+3. Create a PR in gitea
+
+To start the pipeline run, execute the following command from your terminal:
```sh
-make build-and-test
+make build-and-test-iot-frontend
```
-This build takes some time because the pipeline is rebuilding all the images. You can monitor the pipeline's progress in the Openshift console's pipelines section.
+The pipeline will look a bit like the following:
-Alternatively you can can try and run the shorter `build-iot-consumer` pipeline run in the OpenShift console. This should just run and test the specific application.
+[](/images/industrial-edge/pipeline-iot-frontend.png)
-[](/images/industrial-edge/build-and-test-pipeline.png)
+After the pipeline completes, the `manuela-test` application in Argo CD will
+eventually refresh and push the changes to the cluster, and the line dashboard
+route in the `manuela-tst-all` namespace will have picked up the changes:
-You can also see some updates happening in the `manuela-tst` application in OpenShift GitOps (ArgoCD).
+[](/images/industrial-edge/line-dashboard-devops.png)
-When the pipeline is complete check the `lines-dashboard` application again in the browser. More reasonable, Celsius, temperatures are displayed. (Compare with above.)
+The pipeline will also have created a PR in gitea, like the following one:
-[](/images/industrial-edge/celsius-temp.png)
+[](/images/industrial-edge/gitea-pipeline-pr.png)
-The steps above have successfully applied the change to the Manuela test environment at the data center. In order for these changes to be pushed out to the factories it must be accepted and pushed to the Git repository. Examine the project in GitHub. There is a new Pull Request (PR) called **Pull request created by Tekton task github-add-pull-request**. Select that PR and merge the pull request.
-
-[](/images/industrial-edge/tekton-pull-request.png)
-
-OpenShift GitOps will see the new change and apply it out to the factories.
+Now an operator can verify that the change is correct in the `manuela-tst-all`
+line dashboard on the data center and, if it is deemed correct, merge the PR in
+gitea, which will roll out the change to the production factory.
## Application AI model changes with DevOps
-After a successful deployment of Industrial Edge 2.0, check to see that Jupyter Hub is running. To do this go to project `manuela-ml-workspace` check that `jupyterhub` pods are up and running.
-
-[](/images/industrial-edge/jupyterhub-pods.png)
-
-Then, in the same project `manuela-ml-namespace`, select Networking/Routes and click on the URL associated with `jupyterhub` in the Location column.
-
-[](/images/industrial-edge/jupyterhub-url.png)
-
-This will bring you to a web page at an address in the following format:
-
-* `jupyterhub-manuela-ml-workspace.apps.*clustername*.*your-domain*`
-
-Options for different types of Jupyter servers are shown. There are two options that are useful for this demo.
-
-* Standard Data Science. Select this notebook image for simpler notebooks like `Data Analyses.ipynb`
-* Tensorflow Notebook Image. Select this notebook image for more a complex notebook that require Tensorflow. E.g. `Anomaly Detection-using-TF-and-Deep-Learning.ipynb`
-
-At the bottom of the screen there is a `Start server` button. Select the type of Notebook server image and press `Start server`.
-
-[](/images/industrial-edge/jupyterhub-init-console.png)
-
-Selecting Tensorflow notebook image:
-
-[](/images/industrial-edge/jupyter-tf-server.png)
+On the OpenShift console, click on the nine-box application launcher and choose `Red Hat OpenShift AI`. You'll be taken
+to the AI console, which will look like the following:
-On the next screen upload the following files from `manuela-dev/ml-models/anomaly-detection`:
+[](/images/industrial-edge/rhoai-console-home.png)
-* One of the Jupyter notebooks
- * `Data-Analyses.ipynb` for a somewhat simpler demo
- * `Anomaly Detection-using-TF-and-Deep-Learning.ipynb` for a Tensorflow demo.
-* raw-data.cvs
+Click on `Data Science Projects` in the left sidebar and choose the `ml-development` project. You'll
+be taken to the project, which contains a couple of workbenches and a model:
-[](/images/industrial-edge/upload-ml-files.png)
+[](/images/industrial-edge/rhoai-ml-development.png)
-Open the notebook by double clicking on the notebook file (ending in `.ipynb`)
+Click on the `JupyterLab` workbench to be taken to the notebook where data analysis for this
+pattern is done. The `manuela-dev` code will be preloaded in the notebook; in the file browser
+on the left, open `manuela-dev/ml-models/anomaly-detection/1-preprocessing.ipynb`:
-[](/images/industrial-edge/anomaly-detection-notebook.png)
+[](/images/industrial-edge/notebook-console.png)
-After opening the notebook successfully, walk through the demonstration by pressing play and iterating through the commands in the playbook. Jupyter playbooks are interactive and you may make changes and also save those changes. Also, some steps in the notebook take milliseconds, however, other steps can take a long time (up to an hour), so check on the completion of steps.
+After opening the notebook successfully, walk through the demonstration by
+pressing play and stepping through the cells in the notebook. Jupyter
+notebooks are interactive, and you may make changes and also save those changes.
-Remember that changes to the notebook will require downloading, committing, and pushing that notebook to the git repository so that it gets redeployed to the factories.
+Running through all six notebooks will automatically regenerate the anomaly
+detection model, prepare the data for training, and push the changes to the
+internal gitea instance so the inference service can pick up the new model.
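+
+To confirm that the serving side picked up the regenerated model, one hedged option is to
+inspect the inference service from the CLI. This sketch assumes the model is served through a
+KServe `InferenceService` in the `ml-development` project:
+
+```sh
+oc get inferenceservice -n ml-development
+```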
diff --git a/content/patterns/industrial-edge/cluster-sizing.md b/content/patterns/industrial-edge/cluster-sizing.md
index b45c72e48..e8e549104 100644
--- a/content/patterns/industrial-edge/cluster-sizing.md
+++ b/content/patterns/industrial-edge/cluster-sizing.md
@@ -12,9 +12,9 @@ The **Industrial-Edge** pattern has been tested in the following Certified Cloud
| **Certified Cloud Providers** | 4.8 | 4.9 | 4.10 |
| :---- | :---- | :---- | :---- |
-| Amazon Web Services| | |:heavy_check_mark: |
-| Microsoft Azure| :heavy_check_mark: | | |
-| Google Cloud Platform| |:heavy_check_mark: | |
+| Amazon Web Services| | | X |
+| Microsoft Azure| X | | |
+| Google Cloud Platform| | X | |
## General OpenShift Minimum Requirements
diff --git a/content/patterns/industrial-edge/demo-script.md b/content/patterns/industrial-edge/demo-script.md
index 02a88a8ca..d638de094 100644
--- a/content/patterns/industrial-edge/demo-script.md
+++ b/content/patterns/industrial-edge/demo-script.md
@@ -2,8 +2,9 @@
## Objectives
-There's no experience like hands-on experience and being able to see industrial edge scenarios. This is a demo for the Industrial Edge Validated Pattern using the latest product and technology improvements.
-
+There's no experience like hands-on experience and being able to see industrial
+edge scenarios. This is a demo for the Industrial Edge Validated Pattern using
+the latest product and technology improvements.
* Show Red Hat Operators being deployed
* Show available Red Hat Pipelines for the Industrial Edge pattern
@@ -13,20 +14,19 @@ There's no experience like hands-on experience and being able to see industrial
* Show the datacenter-gitops-server view
* Show the factory-gitops-server view
-
#### For Information on the Red Hat Validated Patterns, visit our [website](https://validatedpatterns.io)
## See the pattern in action
Watch the following video for a demonstration of [OpenShift Pipelines in the Industrial Edge Pattern](https://www.youtube.com/watch?v=BMUiaCm6pZ8)
-In this article, we give an overview of the demo and step by step instructions on how to get started.
+In this article, we give an overview of the demo and step by step instructions on how to get started.
## Getting Started
**_NOTE:_** This demo takes a "bring your own cluster" approach, which means this pattern/demo will not deploy any OpenShift clusters.
-This demo script begins after the completion of you running `./pattern.sh make install` from our [Getting Started Guide](../getting-started)
+This demo script begins after the completion of you running `./pattern.sh make install` from our [Getting Started Guide](../getting-started)
### Demo: Quick Health Check
**_NOTE:_** This is a complex setup, and sometimes things can go wrong. Do a quick check of the essentials:
@@ -68,7 +68,8 @@ If you run into any problems, checkout the potential/Known issues list: http://v
## Summary
-In this demo we: , we show you how to get started with the Industrial Edge Validated Pattern. In
+In this demo we show you how to get started with the Industrial Edge Validated Pattern.
+More specifically, we:
* Show you how to get started with the Industrial Edge Pattern
* Make configuration changes with GitOps
diff --git a/content/patterns/industrial-edge/factory.md b/content/patterns/industrial-edge/factory.md
deleted file mode 100644
index 598b81fd5..000000000
--- a/content/patterns/industrial-edge/factory.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: Factory Sites
-weight: 20
-aliases: /industrial-edge/factory/
----
-
-# Having a factory (edge) cluster join the datacenter (hub)
-
-## Allow ACM to deploy the factory application to a subset of clusters
-
-By default the `factory` applications are deployed on all clusters that ACM knows about.
-
-```json
- managedSites:
- - name: factory
- clusterSelector:
- matchExpressions:
- - key: vendor
- operator: In
- values:
- - OpenShift
-```
-
-This is useful for cost-effective demos, but is hardly realistic.
-
-To deploy the `factory` applications only on managed clusters with the label
-`site=factory`, change the site definition in `values-datacenter.yaml` to:
-
-```json
- managedSites:
- - name: factory
- clusterSelector:
- matchLabels:
- site: factory
-```
-
-Remember to commit the changes and push to GitHub so that GitOps can see
-your changes and apply them.
-
-## Deploy a factory cluster
-
-For instructions on how to prepare and import a factory cluster please read the section [importing a cluster](/learn/importing-a-cluster). Use `clusterGroup=factory`.
-
-### You're done
-
-That's it! Go to your factory (edge) OpenShift console and check for the open-cluster-management-agent pod being launched. Be patient, it will take a while for the ACM agent and agent-addons to launch. After that, the operator OpenShift GitOps will run. When it's finished coming up launch the OpenShift GitOps (ArgoCD) console from the top right of the OpenShift console.
-
-## Next up
-
-Work your way through the Industrial Edge 2.0 [GitOps/DevOps demos](/industrial-edge/application)
diff --git a/content/patterns/industrial-edge/getting-started.md b/content/patterns/industrial-edge/getting-started.md
index ab247d47c..71cdd8d3e 100644
--- a/content/patterns/industrial-edge/getting-started.md
+++ b/content/patterns/industrial-edge/getting-started.md
@@ -8,19 +8,13 @@ aliases: /industrial-edge/getting-started/
# Prerequisites
-1. An OpenShift cluster (Go to [the OpenShift console](https://console.redhat.com/openshift/create)). Cluster must have a dynamic StorageClass to provision PersistentVolumes. See also [sizing your cluster](../../industrial-edge/cluster-sizing).
+1. An OpenShift cluster (Go to [the OpenShift
+ console](https://console.redhat.com/openshift/create)). Cluster must have a
+ dynamic StorageClass to provision PersistentVolumes. See also [sizing your
+ cluster](../../industrial-edge/cluster-sizing).
1. (Optional) A second OpenShift cluster for edge/factory
-1. A GitHub account (and a token for it with repositories permissions, to read from and write to your forks)
-1. A quay account with the following repositories set as public:
- - http-ionic
- - httpd-ionic
- - iot-anomaly-detection
- - iot-consumer
- - iot-frontend
- - iot-software-sensor
-
-The use of this blueprint depends on having at least one running Red Hat
+The use of this pattern depends on having at least one running Red Hat
OpenShift cluster. It is desirable to have a cluster for deploying the data
center assets and a separate cluster(s) for the factory assets.
@@ -32,114 +26,16 @@ service](https://console.redhat.com/openshift/create).
For installation tooling dependencies, see [Patterns quick start](/learn/quickstart)
+The Industrial Edge pattern installs an in-cluster gitea instance by default. This
+means that there is no need to fork the pattern's git repository and that ArgoCD will point
+directly at the in-cluster git repository. Changes should be made there and not on GitHub.
+See this [post](https://validatedpatterns.io/blog/2024-07-12-in-cluster-git/) for more information.
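+
+To find the in-cluster gitea web UI, a minimal sketch; the namespace name is not assumed here,
+since searching all routes avoids guessing it:
+
+```sh
+oc get routes -A | grep -i gitea
+```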
# How to deploy
-1. Fork the [industrial-edge](https://github.com/validatedpatterns/industrial-edge) repository on GitHub. It is necessary to fork because your fork will be updated as part of the GitOps and DevOps processes.
-
-1. Fork the [manuela-dev](https://github.com/validatedpatterns-demos/manuela-dev) repository on GitHub. It is necessary to fork this repository because the GitOps framework will push tags to this repository that match the versions of software that it will deploy.
-
-1. Clone the forked copy of the `industrial-edge` repository. Create a deployment branch using the branch `v2.3`.
-
- ```sh
- git clone git@github.com:{your-username}/industrial-edge.git
- cd industrial-edge
- git checkout v2.3
- git switch -c deploy-v2.3
- ```
-
-1. A `values-secret-industrial-edge.yaml` file is used to automate setup of secrets needed for:
-
- - A git repository hosted on a service such as GitHub, GitLab, or so on.
- - A container image registry (E.g. Quay)
- - S3 storage (E.g. AWS)
-
- DO NOT COMMIT THIS FILE. You do not want to push personal credentials to GitHub.
+1. Clone the [industrial-edge](https://github.com/validatedpatterns/industrial-edge) repository from GitHub.
- ```sh
- cp values-secret.yaml.template ~/values-secret-industrial-edge.yaml
- vi ~/values-secret-industrial-edge.yaml
- ```
-
-1. Customize the following secret values.
-
- ```yaml
- version: "2.0"
- secrets:
- - name: imageregistry
- fields:
- # E.G. Quay -> Robot Accounts -> Robot Login
- - name: username
- value:
- - name: password
- value:
-
- - name: git
- fields:
- # Go to: https://github.com/settings/tokens
- - name: username
- value:
- - name: password
- value:
-
- - name: aws
- fields:
- - name: aws_access_key_id
- ini_file: ~/.aws/credentials
- ini_key: aws_access_key_id
- - name: aws_secret_access_key
- ini_file: ~/.aws/credentials
- ini_key: aws_secret_access_key
- ```
-
-1. Customize the deployment for your cluster. Change the appropriate values in `values-global.yaml`
-
- ```yaml
- main:
- clusterGroupName: datacenter
-
- global:
- pattern: industrial-edge
-
- options:
- useCSV: False
- syncPolicy: Automatic
- installPlanApproval: Automatic
-
- imageregistry:
- account: PLAINTEXT
- hostname: quay.io
- type: quay
-
- git:
- hostname: github.com
- account: PLAINTEXT
- #username: PLAINTEXT
- email: SOMEWHERE@EXAMPLE.COM
- dev_revision: main
-
- s3:
- bucket:
- name: BUCKETNAME
- region: AWSREGION
- message:
- aggregation:
- count: 50
- custom:
- endpoint:
- enabled: false
- ```
-
- ```sh
- vi values-global.yaml
- git add values-global.yaml
- git commit -m "Added personal values to values-global" values-global.yaml
- git push origin deploy-v2.3
- ```
-
-1. You can deploy the pattern using the [Validated Patterns Operator](/infrastructure/using-validated-pattern-operator/) directly. If you deploy the pattern using the Validated Patterns Operator, installed through `Operator Hub`, you will need to run `./pattern.sh make load-secrets` through a terminal session on your laptop or bastion host.
-
-1. If you deploy the pattern through a terminal session on your laptop or bastion host login to your cluster by using the `oc login` command or by exporting the `KUBECONFIG` file.
+1. On your laptop or bastion host, log in to your cluster by using the `oc login` command or by exporting the `KUBECONFIG` file.
```sh
oc login
@@ -151,94 +47,82 @@ For installation tooling dependencies, see [Patterns quick start](/learn/quickst
export KUBECONFIG=~/my-ocp-cluster/auth/kubeconfig
```
-1. Apply the changes to your cluster from the root directory of the pattern.
+1. Deploy the industrial edge pattern:
```sh
+    cd industrial-edge
./pattern.sh make install
```
- The `make install` target deploys the Validated Patterns Operator, all the resources that are defined in the `values-datacenter.yaml` and runs the `make load-secrets` target to load the secrets configured in your `values-secrets-industrial-edge.yaml` file.
+   The `make install` target deploys the Validated Patterns Operator and all the resources that are defined in `values-datacenter.yaml`.
# Validating the Environment
-1. In the OpenShift Container Platform web console, navigate to the **Operators → OperatorHub** page.
-2. Verify that the following Operators are installed on the HUB cluster:
+1. Verify that the following Operators are installed on the HUB cluster:
```text
- Operator Name Namespace
- ------------------------------------------------------
- advanced-cluster-management open-cluster-management
- amq-broker-rhel8 manuela-tst-all
- amq-streams manuela-data-lake
- red-hat-camel-k manuela-data-lake
- seldon-operator manuela-ml-workspace
- openshift-pipelines-operator- openshift-operators
- opendatahub-operator openshift-operators
- patterns-operator openshift-operators
+ $ oc get operators.operators.coreos.com -A
+ NAME AGE
+ advanced-cluster-management.open-cluster-management 3h8m
+ amq-broker-rhel8.manuela-tst-all 3h8m
+ amq-streams.manuela-data-lake 3h8m
+ amq-streams.manuela-tst-all 3h8m
+ camel-k.manuela-data-lake 3h8m
+ camel-k.manuela-tst-all 3h8m
+ mcg-operator.openshift-storage 3h7m
+ multicluster-engine.multicluster-engine 3h4m
+ ocs-client-operator.openshift-storage 3h7m
+ ocs-operator.openshift-storage 3h7m
+ odf-csi-addons-operator.openshift-storage 3h7m
+ odf-operator.openshift-storage 3h8m
+ odf-prometheus-operator.openshift-storage 3h7m
+ openshift-gitops-operator.openshift-operators 3h11m
+ openshift-pipelines-operator-rh.openshift-operators 3h8m
+ patterns-operator.openshift-operators 3h12m
+ recipe.openshift-storage 3h7m
+ rhods-operator.redhat-ods-operator 3h8m
+ rook-ceph-operator.openshift-storage 3h7m
```
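+
+    The list above can be spot-checked with a small loop. This is a sketch, and the set of
+    operators verified here is only a sample of the full list:
+
+    ```sh
+    for op in advanced-cluster-management openshift-gitops-operator \
+              openshift-pipelines-operator-rh patterns-operator; do
+      if oc get operators.operators.coreos.com -A -o name | grep -q "$op"; then
+        echo "OK: $op"
+      else
+        echo "MISSING: $op"
+      fi
+    done
+    ```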
1. Access the ArgoCD environment
- You can find the ArgoCD application links listed under the **Red Hat applications** in the OpenShift Container Platform web console.
-
- 
+   You can find the ArgoCD application links listed under the nine-box **Red
+   Hat applications** menu in the OpenShift Container Platform web console.
- You can also obtain the ArgoCD URLs and passwords (optional) by displaying the fully qualified domain names, and matching login credentials, for all ArgoCD instances:
+ 
- ```sh
- ARGO_CMD=`oc get secrets -A -o jsonpath='{range .items[*]}{"oc get -n "}{.metadata.namespace}{" routes; oc -n "}{.metadata.namespace}{" extract secrets/"}{.metadata.name}{" --to=-\\n"}{end}' | grep gitops-cluster`
- CMD=`echo $ARGO_CMD | sed 's|- oc|-;oc|g'`
- eval $CMD
- ```
-
- The result should look something like:
-
- ```text
- NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
- datacenter-gitops-server datacenter-gitops-server-industrial-edge-datacenter.apps.mycluster.mydomain.com datacenter-gitops-server https passthrough/Redirect None
- # admin.password
- REDACTED
-
- NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
- factory-gitops-server factory-gitops-server-industrial-edge-factory.apps.mycluster.mydomain.com factory-gitops-server https passthrough/Redirect None
- # admin.password
- REDACTED
-
- NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
- cluster cluster-openshift-gitops.apps.mycluster.mydomain.com cluster 8080 reencrypt/Allow None
- kam kam-openshift-gitops.apps.mycluster.mydomain.com kam 8443 passthrough/None None
- openshift-gitops-server openshift-gitops-server-openshift-gitops.apps.mycluster.mydomain.com openshift-gitops-server https passthrough/Redirect None
- # admin.password
- REDACTED
- ```
+ The most important ArgoCD instance to examine at this point is the
+ `Datacenter ArgoCD`. This is where all the applications for the datacenter,
+ including the test environment, can be tracked.
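+
+   If you need the ArgoCD admin password from the CLI, a hedged sketch; the secret and
+   namespace names below are assumptions, so grep for the real ones first:
+
+   ```sh
+   oc get secrets -A | grep gitops-cluster
+   # Then extract the matching secret (names below are placeholders):
+   oc extract secret/datacenter-gitops-cluster -n industrial-edge-datacenter --to=-
+   ```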
- The most important ArgoCD instance to examine at this point is `data-center-gitops-server`. This is where all the applications for the datacenter, including the test environment, can be tracked.
+1. Check that all applications are synchronized. It should look like the following:
-1. Apply the secrets from the `values-secret-industrial-edge.yaml` to the secrets management Vault. This can be done through Vault's UI - manually without the file. The required secrets and scopes are:
+ 
- - **secret/hub/git** git *username* & *password* (GitHub token)
- - **secret/hub/imageregistry** Quay or DockerHub *username* & *password*
- - **secret/hub/aws** - AWS values read from your *~/.aws/credentials*
-
- Using the Vault UI check that the secrets have been setup.
-
- For more information on secrets management see [here](/secrets). For information on Hashicorp's Vault see [here](/secrets/vault)
+## Next Steps
-1. Check all applications are synchronised
+Once the data center has been set up correctly and confirmed to be working, you can:
-## Next Steps
+1. Add a dedicated cluster to the main datacenter hub cluster.
-[Help & Feedback](https://groups.google.com/g/validatedpatterns){: .btn .fs-5 .mb-4 .mb-md-0 .mr-2 }
-[Report Bugs](https://github.com/validatedpatterns/industrial-edge/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 }
+   By default the `factory` applications defined in the `values-factory.yaml` file
+   are deployed on all clusters that are imported into ACM and have the label
+   `clusterGroup=factory`.
-Once the data center has been setup correctly and confirmed to be working, you can:
+ For instructions on how to prepare and import a factory cluster please read the
+ section [importing a cluster](/learn/importing-a-cluster). Use
+ `clusterGroup=factory` as the label.
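+
+   For example, labelling an already-imported cluster from the hub might look like this
+   sketch, where `my-factory` is a placeholder cluster name:
+
+   ```sh
+   oc label managedcluster my-factory clusterGroup=factory
+   ```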
-1. Add a dedicated cluster to [deploy the factory pieces using ACM](/industrial-edge/factory)
-2. Once the data center and the factory have been deployed you will want to check out and test the Industrial Edge 2.0 demo code. You can find that [here](../application/)
+2. Once the data center and the factory have been deployed you will want to
+   check out and test the Industrial Edge 2.0 demo code. You can find that
+   [here](../application/). The Argo applications on the factory cluster will look
+   like the following:
- a. Making [configuration changes](https://validatedpatterns.io/industrial-edge/application/#configuration-changes-with-gitops) with GitOps
- a. Making [application changes](https://validatedpatterns.io/industrial-edge/application/#application-changes-using-devops) using DevOps
- a. Making [AI/ML model changes](https://validatedpatterns.io/industrial-edge/application/#application-ai-model-changes-with-devops) with DevOps
+ 
# Uninstalling
We currently do not support uninstalling this pattern.
+
+# Help & Feedback
+
+[Help & Feedback](https://groups.google.com/g/validatedpatterns) - [Report Bugs](https://github.com/validatedpatterns/industrial-edge/issues)
diff --git a/content/patterns/industrial-edge/ideas-for-customization.md b/content/patterns/industrial-edge/ideas-for-customization.md
index 73cd6276a..18332cd53 100644
--- a/content/patterns/industrial-edge/ideas-for-customization.md
+++ b/content/patterns/industrial-edge/ideas-for-customization.md
@@ -8,17 +8,40 @@ aliases: /industrial-edge/ideas-for-customization/
# Why change it?
-One of the major goals of the Red Hat patterns development process is to create modular, customizable demos. The Industrial Edge demonstration includes multiple, simulated, IoT devices publishing their temperature and vibration telemetry to our data center and ultimately persisting the data into an AWS S3 storage service bucket which we call the Data Lake. All of this is done using our Red Hat certified products running on OpenShift.
-
-This demo in particular can be customized in a number of ways that might be very interesting - and here are some starter ideas with some instructions on exactly what and where changes would need to be made in the pattern to accommodate those changes.
+One of the major goals of the Red Hat patterns development process is to create
+modular, customizable demos. The Industrial Edge demonstration includes
+multiple simulated IoT devices publishing their temperature and vibration
+telemetry to our data center and ultimately persisting the data into an AWS S3
+storage service bucket, which we call the Data Lake. All of this is done using
+our Red Hat certified products running on OpenShift.
+
+This demo in particular can be customized in a number of interesting ways.
+Here are some starter ideas, with instructions on exactly what changes would
+need to be made in the pattern, and where, to accommodate them.
# HOWTO Forking the Industrial Edge repository to your github account
-Hopefully we are all familiar with GitHub. If you are not GitHub is a code hosting platform for version control and collaboration. It lets you and others work together on projects from anywhere. Our Industrial Edge GitOps repository is available in our [Validated Patterns GitHub](https://github.com/validatedpatterns "Validated Patterns Homepage") organization.
-
-To fork this repository, and deploy the Industrial Edge pattern, follow the steps found in our [Getting Started](https://validatedpatterns.io/industrial-edge/getting-started "Industrial Edge Getting Started Guide") section. This will allow you to follow the next few HOWTO guides in this section.
-
-Our sensors have been configured to send data relating to the vibration of the devices. To show the power of GitOps, and keeping state in a git repository, we can make a change to the config map of one of the sensors to detect and report data on temperature. This is done via a variable called *SENSOR_TEMPERATURE_ENABLED* that is initially set to false. Setting this variable to true will trigger the GitOps engine to synchronize the application, restart the machine sensor and apply the change.
+Hopefully we are all familiar with GitHub. If you are not, GitHub is a code
+hosting platform for version control and collaboration. It lets you and others
+work together on projects from anywhere. Our Industrial Edge GitOps repository
+is available in our [Validated Patterns
+GitHub](https://github.com/validatedpatterns "Validated Patterns Homepage")
+organization.
+
+To fork this repository, and deploy the Industrial Edge pattern, follow the
+steps found in our [Getting
+Started](https://validatedpatterns.io/industrial-edge/getting-started
+"Industrial Edge Getting Started Guide") section. This will allow you to
+follow the next few HOWTO guides in this section.
+
+Our sensors have been configured to send data relating to the vibration of the
+devices. To show the power of GitOps, and of keeping state in a git repository,
+we can make a change to the config map of one of the sensors to detect and
+report data on temperature. This is done via a variable called
+*SENSOR_TEMPERATURE_ENABLED* that is initially set to false. Setting this
+variable to true will trigger the GitOps engine to synchronize the application,
+restart the machine sensor, and apply the change.
There are two environments in the Industrial Edge demonstration:
@@ -31,12 +54,9 @@ As an operator you would first make changes to the staging first. Here are the
* There are two config maps that we can change:
* machine-sensor-1-configmap.yaml
* machine-sensor-2-configmap.yaml
-* Change the following variable in *machine-sensor-1-configmap.yaml*
+* Change the following variable in *machine-sensor-1-configmap.yaml* in the gitea web interface
* **SENSOR_TEMPERATURE_ENABLED: "true"**
* Make sure you commit the changes to **git**
- * **git add machine-sensor-1-configmap.yaml**
- * **git commit -m "Changed SENSOR_TEMPERATURE_ENABLED to true"**
- * **git push**
* Now you can go to the Line Dashboard application and see how the UI shows the temperature for that device. You can find the route link by:
* Change the Project context to manuela-tst-all
* Navigate to Networking->Routes
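+
+The config map change described above would look roughly like the following excerpt. Only the
+`SENSOR_TEMPERATURE_ENABLED` key comes from this guide; the metadata values are assumptions:
+
+```yaml
+# Illustrative shape only; name and namespace are assumptions.
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: machine-sensor-1
+  namespace: manuela-tst-all
+data:
+  SENSOR_TEMPERATURE_ENABLED: "true"
+```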
@@ -44,9 +64,15 @@ As an operator you would first make changes to the staging first. Here are the
# HOWTO Applying the pattern to a new use case
-There are a lot of IoT devices that we could add to this pattern. In today's world we have IoT devices that perform different functions and these devices are connected to a network where they have the ability of sending telemetry data to other devices or a central data center. In this particular use case we address an Industrial sector but what about applying this use case to other sectors such as Automotive or Delivery service companies?
+There are a lot of IoT devices that we could add to this pattern. In today's
+world, IoT devices perform different functions and are connected to a network
+where they can send telemetry data to other devices or a central data center.
+In this particular use case we address the industrial sector, but what about
+applying this use case to other sectors such as automotive or delivery service
+companies?
-If we take the Deliver Service use case, and apply it to this pattern, we would have to take into account the following aspects:
+If we take the Delivery Service use case, and apply it to this pattern, we would
+have to take into account the following aspects:
* The main components in the pattern architecture can be used as is.
* The broker and kafka components are the vehicles for the streaming data coming from the devices.
@@ -59,7 +85,7 @@ The idea is that this pattern can be used for other use cases keeping the main c
# Next Steps
-What ideas for customization do you have? Can you use this pattern for other use cases? Let us know through our feedback link below.
+What ideas for customization do you have? Can you use this pattern for other
+use cases? Let us know through our feedback link below.
-[Help & Feedback](https://groups.google.com/g/validatedpatterns){: .btn .fs-5 .mb-4 .mb-md-0 .mr-2 }
-[Report Bugs](https://github.com/validatedpatterns/ansible-edge-gitops/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 }
+[Help & Feedback](https://groups.google.com/g/validatedpatterns) - [Report Bugs](https://github.com/validatedpatterns/industrial-edge/issues)
diff --git a/content/patterns/industrial-edge/troubleshooting.md b/content/patterns/industrial-edge/troubleshooting.md
index d06414bf9..a5414a277 100644
--- a/content/patterns/industrial-edge/troubleshooting.md
+++ b/content/patterns/industrial-edge/troubleshooting.md
@@ -12,83 +12,58 @@ aliases: /industrial-edge/troubleshooting/
The framework for deploying the applications and their operators has been made easy for the user
by using OpenShift GitOps for continuous deployment (Argo CD). It takes time to deploy everything.
-You may have to go back and forth between the OpenShift cluster console and the OpenShift GitOps console to check on applications and operators being up and in a ready state.
+You may have to go back and forth between the OpenShift cluster console and the
+OpenShift GitOps console to check on applications and operators being up and in
+a ready state.
-The applications deployment for the main data center are as follows. First OpenShift GitOps operator will deploy. See the OpenShift Console to see that it is running. Then OpenShift GitOps takes over the rest of the deployment. It deploys the following applications
+The application deployments for the main data center proceed as follows. First
+the OpenShift GitOps operator is deployed; check the OpenShift console to see
+that it is running. Then OpenShift GitOps takes over the rest of the deployment.
+It deploys the following applications:
- Advanced Cluster Management operator in the application `acm`. This will manage the edge clusters.
-- Open Data Hub in the application `odh` for the data science components.
+- Red Hat OpenShift AI in the applications `data-science-cluster` and `data-science-project` for the data science components.
- OpenShift Pipelines is deployed in the application `pipelines`
- AMQ Streams is deployed to manage data coming from factories and stored in a data lake.
-- The data lake uses S3 based storage and is deployed in the `central-s3` application
+- The data lake uses S3 based storage and is deployed in the `production-data-lake` application
- Testing at the data center is managed by the `manuela-test` application
-Make sure that all these applications are `Healthy` 💚 and `Synced` ✅ in the OpenShift GitOps console. If in a state other than `Healthy` (`Progressing, Degraded, Missing, Unknown'`) then it's time to dive deeper into that application and see what has happened.
+Make sure that all these applications are `Healthy` 💚 and `Synced` ✅ in the
+OpenShift GitOps console. If an application is in a state other than `Healthy`
+(`Progressing`, `Degraded`, `Missing`, `Unknown`), then it's time to dive deeper
+into that application and see what has happened.
-The applications deployed on the factory (edge) cluster are as follows. After a successful importing [1] a factory cluster to the main ACM hub, you should check in the factory cluster's OpenShift UI to see if the projects `open-cluster-manager-agent` and `open-cluster-manager-agent-addons` are running. When these are deployed then OpenShift GitOps operator will be deployed on the cluster. From there OpenShift GitOps deploys the following applications:
+The applications deployed on the factory (edge) cluster are as follows. After
+successfully importing [1] a factory cluster into the main ACM hub, you should
+check in the factory cluster's OpenShift UI to see if the projects
+`open-cluster-management-agent` and `open-cluster-management-agent-addons` are
+running. When these are deployed, the OpenShift GitOps operator will be
+deployed on the cluster. From there OpenShift GitOps deploys the following
+applications:
-- `datalake` application sets streams to the data center.
- `stormshift` sets up application and AMQ integration components
-- `odh` sets up the AI/ML models that have been developed by the data scientists.
+- `golang-external-secrets` sets up the components needed to retrieve secrets from the data center.
[1] ACM describes this process differently depending on which tool you are using; Attach, Join, and Import are all terms for bringing a cluster under the management of a hub cluster.
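+The agent-project check described above can also be done from the command
+line; a sketch, assuming you are logged in to the factory cluster and using
+the project names exactly as given in this document:
+
+```shell
+# Confirm the ACM agent projects exist and their pods are running.
+oc get pods -n open-cluster-manager-agent
+oc get pods -n open-cluster-manager-agent-addons
+```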
-### Install loop does not complete
-
-#### Symptom: `make install` does not complete in a timely fashion (~10 minutes from start). Status messages keep scrolling
-
-**Cause:** One of the conditions for installation has not been completed. See below for details.
-
-**Resolution:** Re-run the failing step outside the loop. See below for how.
-
-It is safe to exit the loop (via Ctrl-C, for example) and run the operations separately.
-
-The industrial edge pattern runs two post-install operations after creating the main ArgoCD applications:
-
-**Extracting the secret from the datacenter ArgoCD instance for use in the Pipelines**
-
-This depends on the installation of both the cluster-wide GitOps operator, and the installation of an instance in the datacenter namespace. The logic is controlled [here](https://github.com/validatedpatterns/industrial-edge/blob/main/Makefile) (where the parameters are set) and [here](https://github.com/validatedpatterns/common/blob/main/Makefile), which does the interactions with the cluster (to extract the secret and create a resource in manuela-ci).
-
-This task runs first, and if it does not complete, the seed pipeline will not start either. Things to check:
-
-- Check to make sure the operators are installing in your cluster correctly.
-- Ensure you have enough capacity in your cluster to run all the needed resources.
-
-You can attempt to run the extraction outside of `make install`. Ensure that you have logged in to the cluster (via `oc login` or by exporting a suitable KUBECONFIG:
-
-- Run `make secret` in the base directory of your industrial-edge repository fork.
-
-**Running the "seed" pipeline to populate the image registries for the manuela-tst-all namespace and the edge/factory
-namespaces (manuela-stormshift-messaging, manuela-line-dashboard etc.).**
-
-It is important that the seed pipeline run and complete because the applications will be "degraded" until they can deploy the images, and seed is what populates the images in the local cluster registries and instructs the applications to use them.
-
-The seed pipeline depends on the Pipelines operator to be installed, as well as the `tkn` Task (in the manuela-ci namespace). The script checks for both. (`make install` calls the `sleep-seed` target, which checks for the resources before trying to kick off a seed pipeline run.
-
-- Run `make seed` in the base directory of your industrial edge repository fork. This kicks off the pipeline without checking for its dependencies.
-
-This target does *not* ensure that the seed pipeline completes. See below on how to re-run seed if the seed pipeline
-fails for any reason. It is safe to run the seed pipeline multiple times - each time it runs it will update the image targets for each of the images in both test (manuela-tst-all) and production (manuela-stormshift-messaging etc).
-
### Subscriptions not being installed
#### Symptom: Install seems to "freeze" at a specific point. Expected operators do not install in the cluster
-**Cause:** It is possible an operator was requested to be installed that isn't allowed to be installed on this version of OpenShift.
+**Cause:** It is possible that a requested operator is not available for
+installation on this version of OpenShift.
**Resolution:**
-In general, use the project-supplied `global.options.UseCSV` setting of `False`. This requests the current, best version of the operator available. If a specific CSV (Cluster Service Version) is requested but unavailable, that operator will not be able to install at all, and when an operator fails to install, that may have a cascading effect on other operators.
+In general, use the project-supplied `global.options.UseCSV` setting of
+`False`. This requests the latest version of the operator available. If a
+specific CSV (Cluster Service Version) is requested but unavailable, that
+operator will not be able to install at all, and an operator that fails to
+install may have a cascading effect on other operators.
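+As a sketch of what this setting looks like in the pattern's global values
+file (the exact file and key layout may differ in your fork):
+
+```yaml
+# values-global.yaml (excerpt): request the channel's current operator
+# version instead of pinning a specific CSV.
+global:
+  options:
+    UseCSV: False
+```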
## Potential (Known) Operational Issues
### Pipeline Failures
-#### Symptom: "User not found" error in first stage of pipeline run
-
-**Cause:** Despite the message, the error is most likely that you don't have a fork of [manuela-dev](https://github.com/validatedpatterns-demos/manuela-dev).
-
-**Resolution:** Fork [manuela-dev](https://github.com/validatedpatterns-demos/manuela-dev) into your namespace in GitHub and run `make seed`.
-
#### Symptom: Intermittent failures in Pipeline stages
Some sample errors:
@@ -115,8 +90,9 @@ k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
panic(0x1b40ee0, 0x1fe47b0)
```
-When this happens, the pipeline may not entirely stop running. It is safe to stop/cancel the pipeline run, and
-desirable to do so, since multiple pipelines attempting to change the repository at the same time could cause more failures.
+When this happens, the pipeline may not entirely stop running. It is safe to
+stop/cancel the pipeline run, and desirable to do so, since multiple pipelines
+attempting to change the repository at the same time could cause more failures.
**Resolution:** Run `make seed` in the root of the repository OR re-run the failed pipeline segment (e.g. seed-iot-frontend or seed-iot-consumer).
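+If you would rather restart the pipeline directly than go through `make seed`,
+the Tekton CLI can do it; a sketch, assuming the seed pipeline lives in the
+`manuela-ci` namespace (verify the pipeline and namespace names on your
+cluster first):
+
+```shell
+# Find the failed run, then kick off a fresh run of the seed pipeline.
+tkn pipelinerun list -n manuela-ci
+tkn pipeline start seed -n manuela-ci --use-param-defaults
+```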
@@ -150,15 +126,12 @@ It is also possible that multiple pipelines were running at the same time and we
**Resolution:** Fix the issue as identified by the error message, and commit and push the fix OR revert the last one.
-Certain changes might invalidate objects in ArgoCD, and this will prevent ArgoCD from deploying the change related to
-that commit. The error message for that situation might look like this (this particular change removed the Image details from the kustomization.yaml file, and we resolved it by re-adding the image entries:
+Certain changes might invalidate objects in ArgoCD, and this will prevent
+ArgoCD from deploying the change related to that commit. The error message for
+that situation might look like this (this particular change removed the image
+details from the kustomization.yaml file, and we resolved it by re-adding the
+image entries):
```text
rpc error: code = Unknown desc = Manifest generation error (cached): `/bin/bash -c helm template . --name-template ${ARGOCD_APP_NAME:0:52} -f https://github.com/claudiol/industrial-edge/raw/deployment/values-global.yaml -f https://github.com/claudiol/industrial-edge/raw/deployment/values-datacenter.yaml --set global.repoURL=$ARGOCD_APP_SOURCE_REPO_URL --set global.targetRevision=$ARGOCD_APP_SOURCE_TARGET_REVISION --set global.namespace=$ARGOCD_APP_NAMESPACE --set global.pattern=industrial-edge --set global.valuesDirectoryURL=https://github.com/claudiol/industrial-edge/raw/deployment --post-renderer ./kustomize` failed exit status 1: Error: error while running post render on files: error while running command /tmp/https:__github.com_claudiol_industrial-edge/charts/datacenter/manuela-tst/kustomize. error output: ++ dirname /tmp/https:__github.com_claudiol_industrial-edge/charts/datacenter/manuela-tst/kustomize + BASE=/tmp/https:__github.com_claudiol_industrial-edge/charts/datacenter/manuela-tst + '[' /tmp/https:__github.com_claudiol_industrial-edge/charts/datacenter/manuela-tst = /tmp/https:__github.com_claudiol_industrial-edge/charts/datacenter/manuela-tst ']' + BASE=./ + cat + echo / /tmp/https:__github.com_claudiol_industrial-edge/charts/datacenter/manuela-tst / /tmp/https:__github.com_claudiol_industrial-edge/charts/datacenter/manuela-tst + ls -al total 44 drwxr-xr-x. 3 default root 166 Oct 6 20:59 . drwxr-xr-x. 7 default root 98 Oct 6 20:28 .. -rw-r--r--. 1 default root 1105 Oct 6 20:28 Chart.yaml -rw-r--r--. 1 default root 22393 Oct 6 20:59 helm.yaml -rw-r--r--. 1 default root 98 Oct 6 20:59 kustomization.yaml -rwxr-xr-x. 1 default root 316 Oct 6 20:28 kustomize -rw-r--r--. 1 default root 348 Oct 6 20:28 system-image-builder-role-binding.yaml drwxr-xr-x. 7 default root 115 Oct 6 20:28 templates -rw-r--r--. 
1 default root 585 Oct 6 20:28 values.yaml + kubectl kustomize ./ Error: json: cannot unmarshal object into Go struct field Kustomization.images of type []image.Image : exit status 1 Use --debug flag to render out invalid YAML
```
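+For reference, the `images:` field in a kustomization.yaml must be a YAML list
+of image entries rather than a single mapping, which is what the
+`cannot unmarshal object into Go struct field Kustomization.images` error
+above is complaining about. A minimal sketch (registry and image names are
+illustrative):
+
+```yaml
+# kustomization.yaml (excerpt): each image override is a list item.
+images:
+  - name: iot-frontend                                 # name as referenced in the manifests
+    newName: registry.example.com/manuela/iot-frontend # hypothetical registry
+    newTag: "1.0"
+```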
-
-#### Symptom: Applications show "not in sync" status in ArgoCD
-
-**Cause:** There is a discrepancy between what the git repository says the application should have, and how that state is realized in ArgoCD.
-
-The installation mechanism currently installs operators as parts of multiple applications when running on the same cluster, so it is a race condition in ArgoCD to see which one "wins." This is a problem with the way we are installing the patterns. We are tracking this as [#38](https://github.com/validatedpatterns/industrial-edge/issues/38).
diff --git a/static/images/industrial-edge/argocd-machine-sensor2.png b/static/images/industrial-edge/argocd-machine-sensor2.png
new file mode 100644
index 000000000..bdf8079f0
Binary files /dev/null and b/static/images/industrial-edge/argocd-machine-sensor2.png differ
diff --git a/static/images/industrial-edge/datacenter-argocd-apps.png b/static/images/industrial-edge/datacenter-argocd-apps.png
new file mode 100644
index 000000000..ef3ba450e
Binary files /dev/null and b/static/images/industrial-edge/datacenter-argocd-apps.png differ
diff --git a/static/images/industrial-edge/factory-apps.png b/static/images/industrial-edge/factory-apps.png
new file mode 100644
index 000000000..5ccd1e6ca
Binary files /dev/null and b/static/images/industrial-edge/factory-apps.png differ
diff --git a/static/images/industrial-edge/gitea-commit.png b/static/images/industrial-edge/gitea-commit.png
new file mode 100644
index 000000000..402a06315
Binary files /dev/null and b/static/images/industrial-edge/gitea-commit.png differ
diff --git a/static/images/industrial-edge/gitea-edit.png b/static/images/industrial-edge/gitea-edit.png
new file mode 100644
index 000000000..7c9f965bf
Binary files /dev/null and b/static/images/industrial-edge/gitea-edit.png differ
diff --git a/static/images/industrial-edge/gitea-iot-edit.png b/static/images/industrial-edge/gitea-iot-edit.png
new file mode 100644
index 000000000..a10fead19
Binary files /dev/null and b/static/images/industrial-edge/gitea-iot-edit.png differ
diff --git a/static/images/industrial-edge/gitea-pipeline-pr.png b/static/images/industrial-edge/gitea-pipeline-pr.png
new file mode 100644
index 000000000..fd0666b76
Binary files /dev/null and b/static/images/industrial-edge/gitea-pipeline-pr.png differ
diff --git a/static/images/industrial-edge/gitea-signin.png b/static/images/industrial-edge/gitea-signin.png
new file mode 100644
index 000000000..fdb28a3d5
Binary files /dev/null and b/static/images/industrial-edge/gitea-signin.png differ
diff --git a/static/images/industrial-edge/highleveldemodiagram-v2.png b/static/images/industrial-edge/highleveldemodiagram-v2.png
new file mode 100644
index 000000000..77d9410a1
Binary files /dev/null and b/static/images/industrial-edge/highleveldemodiagram-v2.png differ
diff --git a/static/images/industrial-edge/line-dashboard-devops.png b/static/images/industrial-edge/line-dashboard-devops.png
new file mode 100644
index 000000000..d02628a43
Binary files /dev/null and b/static/images/industrial-edge/line-dashboard-devops.png differ
diff --git a/static/images/industrial-edge/nine-box.png b/static/images/industrial-edge/nine-box.png
new file mode 100644
index 000000000..f56cfb080
Binary files /dev/null and b/static/images/industrial-edge/nine-box.png differ
diff --git a/static/images/industrial-edge/notebook-console.png b/static/images/industrial-edge/notebook-console.png
new file mode 100644
index 000000000..e50fe633e
Binary files /dev/null and b/static/images/industrial-edge/notebook-console.png differ
diff --git a/static/images/industrial-edge/pipeline-iot-frontend.png b/static/images/industrial-edge/pipeline-iot-frontend.png
new file mode 100644
index 000000000..3d9df3694
Binary files /dev/null and b/static/images/industrial-edge/pipeline-iot-frontend.png differ
diff --git a/static/images/industrial-edge/rhoai-console-home.png b/static/images/industrial-edge/rhoai-console-home.png
new file mode 100644
index 000000000..866d23ddd
Binary files /dev/null and b/static/images/industrial-edge/rhoai-console-home.png differ
diff --git a/static/images/industrial-edge/rhoai-ml-development.png b/static/images/industrial-edge/rhoai-ml-development.png
new file mode 100644
index 000000000..17deb1f22
Binary files /dev/null and b/static/images/industrial-edge/rhoai-ml-development.png differ