diff --git a/content/patterns/industrial-edge/_index.md b/content/patterns/industrial-edge/_index.md index ee7166d26..b28a6b350 100644 --- a/content/patterns/industrial-edge/_index.md +++ b/content/patterns/industrial-edge/_index.md @@ -24,34 +24,13 @@ ci: manuela # Industrial Edge Pattern -_Red Hat Validated Patterns are detailed deployments created for different use -cases. These pre-defined computing configurations bring together the Red Hat -portfolio and technology ecosystem to help you stand up your architectures -faster. Example application code is provided as a demonstration, along with the -various open source projects and Red Hat products required for the deployment -to work. Users can then modify the pattern for their own specific application._ - -**Use Case:** Boosting manufacturing efficiency and product quality with -artificial intelligence/machine learning (AI/ML) out to the edge of the -network. - -**Background:** Microcontrollers and other types of simple computers have long -been widely used on factory floors and processing plants to monitor and control -the many machines required to implement the many machines required to implement -many modern manufacturing workflows. The manufacturing industry has -consistently used technology to fuel innovation, production optimization, and -operations. However, historically, control systems were mostly “dumb” in that -they mostly took actions in response to pre-programmed triggers and heuristics. -For example, predictive maintenance commonly took place on either a set length -of time or the number of hours was in service. Supervisory control and data -acquisition (SCADA) has often been used to collectively describe these hardware -and software systems, which mostly functioned independently of the company’s -information technology (IT) systems. Companies increasingly see the benefit of -bridging these operational technology (OT) systems with their IT. Factory -systems can be much more flexible as a result. 
They can also benefit from newer -technologies such as AI/ML, thereby allowing for tasks like maintenance to be -scheduled based on multiple real-time measurements rather than simple -programmed triggers while bringing processing power closer to data. +_Red Hat Validated Patterns are predefined deployment configurations designed for various use cases. They integrate Red Hat products and open-source technologies to accelerate architecture setup. Each pattern includes example application code, demonstrating its use with the necessary components. Users can customize these patterns to fit their specific applications._ + +**Use Case:** Boosting manufacturing efficiency and product quality with artificial intelligence/machine learning (AI/ML) out to the edge of the network. + +**Background:** Microcontrollers and other simple computers have long been used in factories and processing plants to monitor and control machinery in modern manufacturing. The industry has consistently leveraged technology to drive innovation, optimize production, and improve operations. Traditionally, control systems operated on fixed rules, responding to pre-programmed triggers and heuristics. For instance, predictive maintenance was typically scheduled based on elapsed time or service hours. + +Supervisory Control and Data Acquisition (SCADA) systems have historically functioned independently of a company’s IT infrastructure. However, businesses increasingly recognize the value of integrating operational technology (OT) with IT. This integration enhances factory system flexibility and enables the adoption of advanced technologies such as AI and machine learning. As a result, tasks like maintenance can be scheduled based on real-time data rather than rigid schedules, while computing power is brought closer to the source of data generation. ## Solution Overview @@ -62,23 +41,18 @@ programmed triggers while bringing processing power closer to data. _Figure 1. 
Industrial edge solution overview._

-Figure 1 provides an overview of the industrial edge solution. It is applicable
-across a number of verticals including manufacturing.
+Figure 1 provides an overview of the industrial edge solution. It is applicable across a number of verticals including manufacturing.

This solution:

- Provides real-time insights from the edge to the core datacenter
- Secures GitOps and DevOps management across core and factory sites
- Provides AI/ML tools that can reduce maintenance costs

-Different roles within an organization have different concerns and areas of
-focus when working with this distributed AL/ML architecture across two logical
-types of sites: the core datacenter and the factories. (As shown in Figure 2.)
+Different roles within an organization have different concerns and areas of focus when working with this distributed AI/ML architecture across two logical types of sites: the core datacenter and the factories, as shown in Figure 2.

-- **The core datacenter**. This is where data scientists, developers, and
-  operations personnel apply the changes to their models, application code, and
+- **The core datacenter**. This is where data scientists, developers, and operations personnel apply the changes to their models, application code, and
  configurations.
-- **The factories**. This is where new applications, updates and operational
-  changes are deployed to improve quality and efficiency in the factory..
+- **The factories**. This is where new applications, updates, and operational changes are deployed to improve quality and efficiency in the factory.

[![Industrial Edge Architecture](/images/ai-ml-architecture.png)](/images/ai-ml-architecture.png)

@@ -91,22 +65,13 @@

_Figure 3. Overall data flows of solution._

Figure 3 provides a different high-level view of the solution with a focus on
the two major dataflow streams.

-1. Moving sensor data and events from the operational/shop floor edge towards
-   the core. 
The idea is to centralize as much as possible, but decentralize as
-   needed. For example, sensitive production data might not be allowed to leave
-   the premises. Think of a temperature curve of an industrial oven; it might
-   be considered crucial intellectual property of the customer. Or the sheer
-   amount of raw data (maybe 10,000 events per second) might be too expensive
-   to transfer to a cloud datacenter. In the above diagram, this is from left
-   to right. In other diagrams the edge / operational level is usually at the
-   bottom and the enterprise/cloud level at the top. Thus, this is also
-   referred to as northbound traffic.
+1. Transmitting sensor data and events from the operational edge to the core. The aim is to centralize processing where possible and decentralize only when necessary. Certain data, such as sensitive production metrics, may need to remain on-premises. For example, an industrial oven’s temperature curve could be considered proprietary intellectual property. Additionally, the high volume of raw data (potentially tens of thousands of events per second) may make cloud transfer impractical due to cost or bandwidth constraints.
+
+In the preceding diagram, data movement flows from left to right, while in other representations, the operational edge is typically shown at the bottom, with enterprise or cloud systems at the top. This directional flow is often referred to as northbound traffic.
+
+2. Push code, configurations, master data, and machine learning models from the core (where development, testing, and training occur) to the edge and shop floors. With potentially hundreds of plants and thousands of production lines, automation and consistency are essential for effective deployment.

-2. Push code, configuration, master data, and machine learning models from the
-   core (where development, testing, and training is happening) towards the
-   edge / shop floors. As there might be 100 plants with 1000s of lines,
-   automation and consistency is key. 
In the above diagram, this is from right - to left, in a top/down view, it is called southbound traffic. +In the diagram, data flows from right to left, and when viewed in a top-down orientation, this flow is referred to as southbound traffic. ## Logical Diagrams @@ -144,44 +109,21 @@ It includes, among other components:: _Figure 5: Industrial Edge solution showing messaging and ML components schematically._ -As shown in Figure 5, data coming from sensors is transmitted over MQTT -(Message Queuing Telemetry Transport) to Red Hat AMQ, which routes sensor data -for two purposes: model development in the core data center and live inference -in the factory data centers. The data is then relayed on to Red Hat AMQ for -further distribution within the factory datacenter and out to the core -datacenter. MQTT is the most commonly used messaging protocol for Internet -of Things (IoT) applications. - -The lightweight Apache Camel K, a lightweight integration framework built on -Apache Camel that runs natively on Kubernetes, provides MQTT (Message Queuing -Telemetry Transport) integration that normalizes and routes sensor data to the -other components. - -That sensor data is mirrored into a data lake that is provided by Red Hat -OpenShift Data Foundation. Data scientists then use various tools from the open -source Open Data Hub project to perform model development and training, pulling -and analyzing content from the data lake into notebooks where they can apply ML -frameworks. - -Once the models have been tuned and are deemed ready for production, the -artifacts are committed to git which kicks off an image build of the model -using OpenShift Pipelines (based on the upstream Tekton), a serverless CI/CD -system that runs pipelines with all the required dependencies in isolated -containers. - -The model image is pushed into OpenShift’s integrated registry running in the -core datacenter which is then pushed back down to the factory datacenter for -use in inference. 
+As illustrated in Figure 5, sensor data is transmitted via MQTT (Message Queuing Telemetry Transport) to Red Hat AMQ, which routes it for two key purposes: model development in the core data center and live inference at the factory data centers. The data is then forwarded to Red Hat AMQ for further distribution within the factory and back to the core data center. MQTT is the most commonly used messaging protocol for Internet of Things (IoT) applications.
+
+Apache Camel K, a lightweight integration framework based on Apache Camel and designed to run natively on Kubernetes, offers MQTT integration to normalize and route sensor data to other components.
+
+The sensor data is mirrored into a data lake managed by Red Hat OpenShift Data Foundation. Data scientists utilize tools from the open-source Open Data Hub project to develop and train models, extracting and analyzing data from the lake in notebooks while applying machine learning (ML) frameworks.
+
+Once the models are fine-tuned and production-ready, the artifacts are committed to Git, triggering an image build of the model using OpenShift Pipelines (based on the upstream Tekton), a serverless CI/CD system that runs pipelines with all necessary dependencies in isolated containers.
+
+The model image is pushed to OpenShift’s integrated registry in the core data center and then pushed back down to the factory data center for use in live inference.

[![Using network segregation to protect factories and operations infrastructure from cyber attacks](/images/industrial-edge/edge-mfg-devops-network-sd.png)](/images/industrial-edge/edge-mfg-devops-network-sd.png)

_Figure 6: Industrial Edge solution showing network flows schematically._

-As shown in Figure 6, in order to protect the factories and operations
-infrastructure from cyber attacks, the operations network needs to be
-segregated from the enterprise IT network and the public internet. 
The factory
-machinery, controllers, and devices need to be further segregated from the
-factory data center and need to be protected behind a firewall.
+As shown in Figure 6, to safeguard the factory and operations infrastructure from cyberattacks, the operations network must be segregated from the enterprise IT network and the public internet. Additionally, factory machinery, controllers, and devices should be further isolated from the factory data center and protected behind a firewall.

### Edge manufacturing with GitOps

@@ -189,31 +131,16 @@

_Figure 7: Industrial Edge solution showing a schematic view of the GitOps workflows._

-GitOps is an operational framework that takes DevOps best practices used for
-application development such as version control, collaboration, compliance, and
-CI/CD, and applies them to infrastructure automation. Figure 6 shows how, for
-these industrial edge manufacturing environments, GitOps provides a consistent,
-declarative approach to managing individual cluster changes and upgrades across
-the centralized and edge sites. Any changes to configuration and applications
-can be automatically pushed into operational systems at the factory.
+GitOps is an operational framework that takes DevOps best practices used for application development, such as version control, collaboration, compliance, and CI/CD, and applies them to infrastructure automation. Figure 7 shows how, for these industrial edge manufacturing environments, GitOps provides a consistent, declarative approach to managing individual cluster changes and upgrades across the centralized and edge sites. Any changes to configuration and applications can be automatically pushed into operational systems at the factory.

### Secrets exchange and management

-Authentication is used to securely deploy and update components across multiple
-locations. 
The credentials are stored using a secrets management solution like
-Hashicorp Vault on the hub. The external secrets component is used to integrate various
-secrets management tools (AWS Secrets Manager, Google Secrets Manager, Azure
-Key Vault). These secrets are then pulled from the HUB's Vault on to the different
-factory clusters.
+Authentication is used to securely deploy and update components across multiple locations. The credentials are stored using a secrets management solution such as HashiCorp Vault on the hub. The external secrets component is used to integrate various secrets management tools (AWS Secrets Manager, Google Secrets Manager, Azure Key Vault). These secrets are then pulled from the hub's Vault onto the different factory clusters.

## Demo Scenario

-This scenario is derived from the [MANUela
-work](https://github.com/sa-mw-dach/manuela) done by Red Hat Middleware
-Solution Architects in Germany in 2019/20. The name MANUela stands for
-MANUfacturing Edge Lightweight Accelerator, you will see this acronym in a lot
-of artifacts. It was developed on a platform called
-[stormshift](https://github.com/stormshift/documentation).
+This scenario is derived from the [MANUela work](https://github.com/sa-mw-dach/manuela) done by Red Hat Middleware Solution Architects in Germany in 2019/20. The name MANUela stands for
+MANUfacturing Edge Lightweight Accelerator; you will see this acronym in many of the artifacts. It was developed on a platform called [stormshift](https://github.com/stormshift/documentation).
+
+The demo has been updated with an advanced GitOps framework. 
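The credential flow in the secrets section above relies on Kubernetes Secrets, whose values are stored base64-encoded; tools such as `oc extract` decode them on retrieval. As a minimal local sketch of just that decoding step (the encoded value below is a made-up example, not a credential from the pattern):

```shell
#!/bin/sh
# A Secret value as it appears in the object's YAML: base64-encoded.
# "c3VwZXJzZWNyZXQ=" is a made-up example value for illustration only.
encoded="c3VwZXJzZWNyZXQ="

# Decode it the same way `oc extract --to=-` would before printing.
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # prints: supersecret
```

On a live hub cluster, `oc extract secret/<name> -n <namespace> --to=- --keys=<key>` performs this decoding for you when you need to inspect a synced credential.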
diff --git a/content/patterns/industrial-edge/add-managed-cluster.md b/content/patterns/industrial-edge/add-managed-cluster.md
new file mode 100644
index 000000000..f5c6aad99
--- /dev/null
+++ b/content/patterns/industrial-edge/add-managed-cluster.md
@@ -0,0 +1,33 @@
+---
+title: Adding a managed cluster
+weight: 20
+aliases: /industrial-edge/getting-started/
+---
+
+# Attach a managed cluster (factory) to the management hub
+
+By default, Red Hat Advanced Cluster Management (RHACM) manages the `clusterGroup` applications that are deployed on all clusters.
+
+Add a `managedClusterGroup` for each cluster or group of clusters that you want to manage by following this procedure.
+
+## Procedure
+
+1. By default, the `factory` applications defined in the `values-factory.yaml` file are deployed on all clusters that are imported into RHACM and have the label `clusterGroup=factory`.
+
+2. In the left navigation panel of the web console associated with your deployed hub cluster, click **local-cluster**. Select **All Clusters**. The RHACM web console is displayed.
+
+3. In the **Managing clusters just got easier** window, click **Import an existing cluster**.
+
+    - Enter the cluster name (you can get this from the login token string, for example: `https://api..:6443`).
+    - You can leave the **Cluster set** blank.
+    - In the **Additional labels** dialog box, enter the `key=value` as `clusterGroup=factory`.
+    - Choose **KubeConfig** as the "Import mode".
+    - In the **KubeConfig** window, paste your KubeConfig content. Click **Next**.
+
+4. You can skip the **Automation** screen. Click **Next**.
+
+5. Review the summary details and click **Import**.
+
+6. Once the data center and the factory have been deployed, you will want to check out and test the Industrial Edge 2.0 demo code. You can find that [here](../application/). 
The Argo applications on the factory cluster appear as follows: + + ![ArgoCD Factory Apps](/images/industrial-edge/factory-apps.png) \ No newline at end of file diff --git a/content/patterns/industrial-edge/application.md b/content/patterns/industrial-edge/application.md index d7b758fef..5a85d9dcb 100644 --- a/content/patterns/industrial-edge/application.md +++ b/content/patterns/industrial-edge/application.md @@ -6,158 +6,85 @@ aliases: /industrial-edge/application/ # Demonstrating Industrial Edge example applications -## Background +## Prerequisites -Up until now the Industrial Edge 2.0 validated patterns has focused primarily -on successfully deploying the architectural pattern. Now it is time to see -GitOps and DevOps in action as we go through a number of demonstrations to -change both configuration information and the applications that we are -deploying. +Ensure you have administrator access to the data center cluster using one of the following methods: -If you have already deployed the data center and optionally a factory (edge) -cluster, then you have already seen several applications deployed in the -OpenShift GitOps console. +* The `kubeadmin` login credentials +* The `kubeconfig` file (ensure the path is exported) -## Prerequisite preparation +The steps followed so far should have successfully deployed the data center cluster, and optionally, a factory (edge) cluster. -### OpenShift Cluster - -Make sure you have the `kubeadmin` administrator login for the data center -cluster. Use this or the `kubeconfig` (export the path) to provide -administrator access to your data center and factory/edge clusters. +With the infrastructure in place, it’s now time to see GitOps and DevOps in action through demonstrations that will modify both configuration data and deployed applications. ## Configuration changes with GitOps -There will may be times where you need to change the configuration of some of -the edge devices in one or more of your factories. 
In our example, we have -various sensors at the factory. Modification can be made to these sensors using +There might be times where you need to change the configuration of some of the edge devices in one or more of your factories. In our example, we have various sensors at the factory. Modification can be made to these sensors using `ConfigMaps`. [![highleveldemodiagram](/images/industrial-edge/highleveldemodiagram-v2.png)](/images/industrial-edge/highleveldemodiagram-v2.png) -In this demonstration we will turn on a temperature sensor for sensor #2. We -will first do this in the data center because this will demonstrate the power -of GitOps without having to involve the edge/factory. However if you do have -an factory joined using Advanced Cluster Management, then the changes will make -their way out to the factory. But it is not necessary for the demo as we have a -complete test environment on the data center. - -Make sure you are able to see the dashboard application in a tab on your -browser. You can find the URL for the dashboard application by looking at the -following in your OpenShift console. - -[![network-routing-line-dashboard](/images/industrial-edge/network-routing-line-dashboard.png)](/images/industrial-edge/network-routing-line-dashboard.png) - -Select Networking->Routes on the left-hand side of the console. Using the -Projects pull-down, select `manuela-tst-all`. Click on the URL under the -Location column for the route Name `line-dashboard`. this will launch the -line-dashboard monitoring application in a browser tab. The URL will look like: - -`line-dashboard-manuela-tst-all.apps.*cluster-name*.*domain*` - -Once the the application is open in your browser, click on the “Realtime Data” -Navigation on the left and wait a bit. Data should be visualized as received. -Note that there is only vibration data shown! If you wait a bit more (usually -every 2-3 minutes), you will see an anomaly and alert on it. 
-
-[![app-line-dashboard-before](/images/industrial-edge/app-line-dashboard-before.png)](/images/industrial-edge/app-line-dashboard-before.png)
-
-Now let's turn on the temperature sensor. Go to the gitea link on the nine box login using the
-`gitea_admin` user and the autogenerated password that can be found in the secret called
-`gitea-admin-secret` in the `vp-gitea` namespace:
-
-[![gitea-signin](/images/industrial-edge/gitea-signin.png)](/images/industrial-edge/gitea-signin.png)
-
-You can run the following command to obtain the gitea user's password automatically:
-
-```
-oc extract -n vp-gitea secret/gitea-admin-secret --to=- --keys=password 2>/dev/null
-```
-
-In the `industrial-edge` repository, edit the file called
-`charts/datacenter/manuela-tst/templates/machine-sensor/machine-sensor-2-configmap.yaml`
-and change `SENSOR_TEMPERATURE_ENABLED: "false"` to `SENSOR_TEMPERATURE_ENABLED: "true"`.
-
-[![gitea-edit](/images/industrial-edge/gitea-edit.png)](/images/industrial-edge/gitea-edit.png)
-[![gitea-commit](/images/industrial-edge/gitea-commit.png)](/images/industrial-edge/gitea-commit.png)
+## Application changes using DevOps

-Then change and commit this to your git repository so that the change will be
-picked up by OpenShift GitOps (ArgoCD).
+The `line-dashboard` application has temperature sensors. In this demonstration you are going to make a simple change to that application, rebuild and redeploy
+it.

-You can track the progress of this commit/push in your OpenShift GitOps console
-in the `manuela-test-all` application. You will notice components regarding
-machine-sensor-2 getting sync-ed. You can speed this up by manually pressing
-the Refresh button.
+1. Edit the file `components/iot-frontend/src/app/app.component.html` in the `manuela-dev` repository.

-[![argocd-line-dashboard](/images/industrial-edge/argocd-line-dashboard.png)](/images/industrial-edge/argocd-line-dashboard.png)
+2. 
Change the
+`IoT Dashboard` to, for example,
+`IoT Dashboard - DEVOPS was here!`. Do this directly in the
+gitea web interface by clicking the edit icon for the file:

-The dashboard app should pickup the change automatically, once data from the temperature sensor is received.
-Sometimes a page/tab refreshed is needed for the change to be picked up.
+    [![gitea-iot-edit](/images/industrial-edge/gitea-iot-edit.png)](/images/industrial-edge/gitea-iot-edit.png)

-[![app-line-dashboard](/images/industrial-edge/argocd-machine-sensor2.png)](/images/industrial-edge/argocd-machine-sensor2.png)
+3. Commit this change to your git repository so that the change will be picked up by OpenShift GitOps (ArgoCD).

-## Application changes using DevOps
+    [![gitea-commit](/images/industrial-edge/gitea-commit.png)](/images/industrial-edge/gitea-commit-1.png)

-The `line-dashboard` application has temperature sensors. In this demonstration
-we are going to make a simple change to that application, rebuild and redeploy
-it. In the `manuela-dev` repository there is a file
-`components/iot-frontend/src/app/app.component.html`. Let's change the
-`IoT Dashboard` to something else, say,
-`IoT Dashboard - DEVOPS was here!`. We do this in the
-gitea web interface directly clicking on the editing icon for the file:
+4. Start the pipeline called `build-and-test-iot-frontend` that will do the following:

-[![gitea-iot-edit](/images/industrial-edge/gitea-iot-edit.png)](/images/industrial-edge/gitea-iot-edit.png)
+    1. Rebuild the image from the manuela-dev code
+    2. Push the change on the hub datacenter in the manuela-tst-all namespace
+    3. Create a PR in gitea

-We can now kick off the pipeline called `build-and-test-iot-frontend` that will do the following:
+    Start the pipeline by running the following command in the `industrial-edge` repository:

-1. Rebuild the image from the manuela-dev code
+    ```sh
+    make build-and-test-iot-frontend
+    ```
-2. Push the change on the hub datacenter in the manuela-tst-all namespace
-3. 
Create a PR in gitea

-To start the pipeline run we can just run the following command from our terminal:
-```sh
-make build-and-test-iot-frontend
-```

The pipeline will look a bit like the following:

[![tekton-pipeline](/images/industrial-edge/pipeline-iot-frontend.png)](/images/industrial-edge/pipeline-iot-frontend.png)

-After the pipeline completed the `manuela-test` application in Argo will eventually refresh and push the
-changes to the cluster and the line dash board route in the `manuela-tst-all` namespace will have picked up
-the changes:
+After the pipeline completes, the `manuela-test` application in Argo will eventually refresh and push the changes to the cluster, and the line dashboard route in the `manuela-tst-all` namespace will have picked up the changes. You might need to clear your browser cache to see the change:

[![linedashboard-devops](/images/industrial-edge/line-dashboard-devops.png)](/images/industrial-edge/line-dashboard-devops.png)

-The pipeline will also have created a PR in gitea, like the following one:
+The pipeline will also have created a PR in gitea, such as the following one:

[![gitea-pipeline-pr](/images/industrial-edge/gitea-pipeline-pr.png)](/images/industrial-edge/gitea-pipeline-pr.png)

-Now an operator can verify that the change is correct on the datacenter in the
-`manuela-tst-all` line dashboard and if deemed correct, he can merge the PR in
-gitea which will roll out the change to the production factory!
+Verify that the change is correct on the datacenter in the `manuela-tst-all` line dashboard. If it is correct, merge the PR in gitea, which will roll out the change to the production factory!

## Application AI model changes with DevOps

-On the OpenShift console click on the nine-box and choose `Red Hat OpenShift AI`. 
You'll be taken
-to the AI console which will look like the following:
+1. On the OpenShift console, click the nine-box and select `Red Hat OpenShift AI`. The AI console will open, appearing as follows:
+
+    [![rhoai-console](/images/industrial-edge/rhoai-console-home.png)](/images/industrial-edge/rhoai-console-home.png)

-[![rhoai-console](/images/industrial-edge/rhoai-console-home.png)](/images/industrial-edge/rhoai-console-home.png)
+2. Click `Data Science Projects` on the left sidebar and choose the `ml-development` project. The project will open, containing a couple of workbenches and a model:

-Click on `Data Science Projects` on the left sidebar and choose the `ml-development` project. You'll
-be taken to the project which will contain a couple of workbenches and a model:
+    [![rhoai-ml-development](/images/industrial-edge/rhoai-ml-development.png)](/images/industrial-edge/rhoai-ml-development.png)

-[![rhoai-ml-development](/images/industrial-edge/rhoai-ml-development.png)](/images/industrial-edge/rhoai-ml-development.png)
+3. Click the `JupyterLab` workbench to open the notebook where this pattern's data analysis is performed. The `manuela-dev` code will be preloaded in the notebook.

-Clicking on the `JupyterLab` workbench you'll be taken to the notebook where data analysis for this
-pattern is being done. The `manuela-dev` code will be preloaded in the notebook and you can click
-on the left file browser on `manuela-dev/ml-models/anomaly-detection/1-preprocessing.ipynb`:
+4. In the file browser on the left, click `manuela-dev/ml-models/anomaly-detection/1-preprocessing.ipynb`:

-[![notebook-console](/images/industrial-edge/notebook-console.png)](/images/industrial-edge/notebook-console.png)
+    [![notebook-console](/images/industrial-edge/notebook-console.png)](/images/industrial-edge/notebook-console.png)

-After opening the notebook successfully, walk through the demonstration by
-pressing play and iterating through the commands in the playbook. 
Jupyter
-playbooks are interactive and you may make changes and also save those changes.
+After opening the notebook successfully, walk through the demonstration by pressing play and iterating through the commands in the notebooks. Jupyter notebooks are interactive and you might make changes and also save those changes.

-Running through all the six notebooks will automatically regenerate the anomaly
-model, prepare the data for the training and push the changes to the internal
+Running through all six notebooks will automatically regenerate the anomaly model, prepare the data for training, and push the changes to the internal
gitea so the inference service can pick up the new model.
diff --git a/content/patterns/industrial-edge/getting-started.md b/content/patterns/industrial-edge/getting-started.md
index 0b5d74c62..27816df53 100644
--- a/content/patterns/industrial-edge/getting-started.md
+++ b/content/patterns/industrial-edge/getting-started.md
@@ -6,51 +6,60 @@ aliases: /industrial-edge/getting-started/

# Deploying the Industrial Edge Pattern

-# Prerequisites
+## Prerequisites

-1. An OpenShift cluster (Go to [the OpenShift
-   console](https://console.redhat.com/openshift/create)). Cluster must have a
-   dynamic StorageClass to provision PersistentVolumes. See also [sizing your
-   cluster](../../industrial-edge/cluster-sizing).
-1. (Optional) A second OpenShift cluster for edge/factory
+- An OpenShift cluster
+  - To create an OpenShift cluster, go to the [Red Hat Hybrid Cloud console](https://console.redhat.com/).
+  - Select **OpenShift → Red Hat OpenShift Container Platform → Create cluster**.
+  - The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`. Verify that a dynamic `StorageClass` exists before creating one by running the following command:

-The use of this pattern depends on having at least one running Red Hat
-OpenShift cluster. 
It is desirable to have a cluster for deploying the data
-center assets and a separate cluster(s) for the factory assets.

+    ```sh
+    oc get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,DEFAULT:.metadata.annotations."storageclass\.kubernetes\.io/is-default-class"
+    ```

-If you do not have a running Red Hat OpenShift cluster you can start one on a
-public or private cloud by using [Red Hat's cloud
-service](https://console.redhat.com/openshift/create).

+    **Example output:**

-## Prerequisites

+    ```sh
+    NAME      PROVISIONER       DEFAULT
+    gp2-csi   ebs.csi.aws.com
+    gp3-csi   ebs.csi.aws.com   true
+    ```
+
+    For more information about creating a dynamic `StorageClass`, see the [Dynamic provisioning](https://docs.openshift.com/container-platform/latest/storage/dynamic-provisioning.html) documentation.
+
+- *Optional:* A second OpenShift cluster for the edge/factory.
+
+- [Install the tooling dependencies](https://validatedpatterns.io/learn/quickstart/).
+
+The use of this pattern depends on having at least one running Red Hat OpenShift cluster. However, consider creating one cluster for deploying the GitOps management hub assets and a separate cluster to act as the managed (factory) cluster.

For installation tooling dependencies, see [Patterns quick start](/learn/quickstart)

-The Industrial Edge pattern installs an in-cluster gitea instance by default. This
-means that there is no need to fork the pattern's git repository and that ArgoCD will point
-directly at the in-cluster git repository. Changes should be done there and not on github.
+The Industrial Edge pattern installs an in-cluster gitea instance by default. This means that there is no need to fork the pattern's git repository and that ArgoCD will point directly at the in-cluster git repository. Changes should be done there and not on GitHub.

See this [post](https://validatedpatterns.io/blog/2024-07-12-in-cluster-git/) for more information.

-# How to deploy
+# Procedure

-1. 
Clone the [industrial-edge](https://github.com/validatedpatterns/industrial-edge) repository on GitHub by running the following command: -1. On your laptop or bastion host login to your cluster by using the `oc login` command or by exporting the `KUBECONFIG` file. + ```sh + $ git clone git@github.com:validatedpatterns/industrial-edge.git ``` +2. Ensure you are in the root directory of the industrial-edge git repository by running the following command: ```sh - oc login + $ cd /path/to/your/repository ``` - or +3. On your laptop or bastion host, log in to your cluster by exporting the `KUBECONFIG` file. ```sh - export KUBECONFIG=~/my-ocp-cluster/auth/kubeconfig + $ export KUBECONFIG=~/my-ocp-cluster/auth/kubeconfig ``` -1. Deploy the industrial edge pattern: +4. Deploy the industrial edge pattern: ```sh - cd ./pattern.sh make install ``` The `make install` target deploys the Validated Patterns Operator, all the resources that are defined in the `values-datacenter.yaml` @@ -62,33 +71,33 @@ See this [post](https://validatedpatterns.io/blog/2024-07-12-in-cluster-git/) fo ```text $ oc get operators.operators.coreos.com -A NAME AGE - advanced-cluster-management.open-cluster-management 3h8m - amq-broker-rhel8.manuela-tst-all 3h8m - amq-streams.manuela-data-lake 3h8m - amq-streams.manuela-tst-all 3h8m - camel-k.manuela-data-lake 3h8m - camel-k.manuela-tst-all 3h8m - mcg-operator.openshift-storage 3h7m - multicluster-engine.multicluster-engine 3h4m - ocs-client-operator.openshift-storage 3h7m - ocs-operator.openshift-storage 3h7m - odf-csi-addons-operator.openshift-storage 3h7m - odf-operator.openshift-storage 3h8m - odf-prometheus-operator.openshift-storage 3h7m - openshift-gitops-operator.openshift-operators 3h11m - openshift-pipelines-operator-rh.openshift-operators 3h8m - patterns-operator.openshift-operators 3h12m - recipe.openshift-storage 3h7m - 
rhods-operator.redhat-ods-operator 3h8m - rook-ceph-operator.openshift-storage 3h7m + advanced-cluster-management.open-cluster-management 10m + amq-broker-rhel8.manuela-tst-all 10m + amq-streams.manuela-data-lake 10m + amq-streams.manuela-tst-all 10m + camel-k.manuela-data-lake 10m + camel-k.manuela-tst-all 10m + cephcsi-operator.openshift-storage 10m + mcg-operator.openshift-storage 10m + multicluster-engine.multicluster-engine 7m19s + ocs-client-operator.openshift-storage 10m + ocs-operator.openshift-storage 10m + odf-csi-addons-operator.openshift-storage 10m + odf-operator.openshift-storage 10m + odf-prometheus-operator.openshift-storage 10m + openshift-gitops-operator.openshift-operators 17m + openshift-pipelines-operator-rh.openshift-operators 10m + patterns-operator.openshift-operators 17m + recipe.openshift-storage 10m + rhods-operator.redhat-ods-operator 10m + rook-ceph-operator.openshift-storage 10m ``` - **Note: The list above was taken on OpenShift 4.16. It might change slightly depending on the OpenShift version being used (e.g. odf has less operator components on OpenShift 4.15 and earlier)** + > **Note:** The list above was taken on OpenShift 4.17. It might change slightly depending on the OpenShift version being used. For example, odf has fewer operator components on OpenShift 4.15 and earlier. 1. Access the ArgoCD environment - You can find the ArgoCD application links listed under the nine box **Red - Hat applications** in the OpenShift Container Platform web console. + You can find the ArgoCD application links listed under the nine box **Red Hat applications** in the OpenShift Container Platform web console. ![ArgoCD Links](/images/industrial-edge/nine-box.png) @@ -100,31 +109,10 @@ See this [post](https://validatedpatterns.io/blog/2024-07-12-in-cluster-git/) fo ![ArgoCD Apps](/images/industrial-edge/datacenter-argocd-apps.png) -## Next Steps - -Once the data center has been setup correctly and confirmed to be working, you can: - -1. 
Add a dedicated cluster to the main datacenter hub cluster. - - By default the `factory` applications defined in the `values-factory.yaml` file - are deployed on all clusters imported into ACM and that have the label - `clusterGroup=factory` - - For instructions on how to prepare and import a factory cluster please read the - section [importing a cluster](/learn/importing-a-cluster). Use - `clusterGroup=factory` as the label. - -2. Once the data center and the factory have been deployed you will want to - check out and test the Industrial Edge 2.0 demo code. You can find that - [here](../application/). The argo applications on the factory cluster will look - like the following: - - ![ArgoCD Factory Apps](/images/industrial-edge/factory-apps.png) - # Uninstalling We currently do not support uninstalling this pattern. # Help & Feedback -[Help & Feedback](https://groups.google.com/g/validatedpatterns) - [Report Bugs](https://github.com/validatedpatterns/industrial-edge/issues) +[Help & Feedback](https://groups.google.com/g/validatedpatterns) - [Report Bugs](https://github.com/validatedpatterns/industrial-edge/issues). \ No newline at end of file diff --git a/content/patterns/industrial-edge/ideas-for-customization.md b/content/patterns/industrial-edge/ideas-for-customization.md index 18332cd53..1573d3a3e 100644 --- a/content/patterns/industrial-edge/ideas-for-customization.md +++ b/content/patterns/industrial-edge/ideas-for-customization.md @@ -6,82 +6,118 @@ aliases: /industrial-edge/ideas-for-customization/ # Ideas for Customization -# Why change it? - -One of the major goals of the Red Hat patterns development process is to create -modular, customizable demos. The Industrial Edge demonstration includes -multiple, simulated, IoT devices publishing their temperature and vibration -telemetry to our data center and ultimately persisting the data into an AWS S3 -storage service bucket which we call the Data Lake. 
All of this is done using -our Red Hat certified products running on OpenShift. - -This demo in particular can be customized in a number of ways that might be -very interesting - and here are some starter ideas with some instructions on -exactly what and where changes would need to be made in the pattern to -accommodate those changes. - -# HOWTO Forking the Industrial Edge repository to your github account - -Hopefully we are all familiar with GitHub. If you are not GitHub is a code -hosting platform for version control and collaboration. It lets you and others -work together on projects from anywhere. Our Industrial Edge GitOps repository -is available in our [Validated Patterns -GitHub](https://github.com/validatedpatterns "Validated Patterns Homepage") -organization. - -To fork this repository, and deploy the Industrial Edge pattern, follow the -steps found in our [Getting -Started](https://validatedpatterns.io/industrial-edge/getting-started -"Industrial Edge Getting Started Guide") section. This will allow you to -follow the next few HOWTO guides in this section. - -Our sensors have been configured to send data relating to the vibration of the -devices. To show the power of GitOps, and keeping state in a git repository, -we can make a change to the config map of one of the sensors to detect and -report data on temperature. This is done via a variable called -*SENSOR_TEMPERATURE_ENABLED* that is initially set to false. Setting this -variable to true will trigger the GitOps engine to synchronize the application, -restart the machine sensor and apply the change. +## Why change it? + +One of the major goals of the Red Hat patterns development process is to create modular, customizable demos. The Industrial Edge demonstration includes +multiple, simulated, IoT devices publishing their temperature and vibration telemetry to our data center and ultimately persisting the data into an AWS S3 storage service bucket which we call the Data Lake. 
All of this is done using our Red Hat certified products running on OpenShift. + +This demo in particular can be customized in a number of interesting ways. Here are some starter ideas, with instructions on exactly what would need to change in the pattern, and where, to accommodate those changes. There are two environments in the Industrial Edge demonstration: * The staging environment that lives in the *manuela-tst-all* namespace * The production environment which lives in the *stormshift* namespaces +## Enabling the temperature sensor for machine sensor 2 + +Our sensors have been configured to send data relating to the vibration of the devices. To show the power of GitOps and of keeping state in a Git repository, you can make a change to the config map of one of the sensors to detect and report data on temperature. This is done using a variable called `SENSOR_TEMPERATURE_ENABLED` that is initially set to `false`. Setting this variable to `true` will trigger the GitOps engine to synchronize the application, restart the machine sensor and apply the change. + As an operator you would make changes to staging first. Here are the steps to see how the GitOps engine does its magic. These changes will be reflected in the staging environment Line Dashboard UI in the *manuela-tst-all* namespace. -* The config maps in question live in the charts/datacenter/manuela-tst/templates/machine-sensor directory -* There are two config maps that we can change: * machine-sensor-1-configmap.yaml * machine-sensor-2-configmap.yaml -* Change the following variable in *machine-sensor-1-configmap.yaml* in the gitea web interface * **SENSOR_TEMPERATURE_ENABLED: "true"** -* Make sure you commit the changes to **git** -* Now you can go to the Line Dashboard application and see how the UI shows the temperature for that device. 
You can find the route link by: - * Change the Project context to manuela-tst-all - * Navigate to Networking->Routes - * Press on the Location link to see navigate to the UI. - -# HOWTO Applying the pattern to a new use case - -There are a lot of IoT devices that we could add to this pattern. In today's -world we have IoT devices that perform different functions and these devices -are connected to a network where they have the ability of sending telemetry -data to other devices or a central data center. In this particular use case we -address an Industrial sector but what about applying this use case to other -sectors such as Automotive or Delivery service companies? - -If we take the Deliver Service use case, and apply it to this pattern, we would -have to take into account the following aspects: - -* The main components in the pattern architecture can be used as is. - * The broker and kafka components are the vehicles for the streaming data coming from the devices. -* The IoT sensor software would have to be developed. The IoT devices will now be mobile so that presents a few challenges tracking the devices in part due to spotty connectivity to send the data stream. -* The number of IoT devices to be tracked will increase depending on the fleet of delivery trucks out in the field. - * Scalability will be an important aspect for the pattern to be able to handle. -* A new AI/ML model would have to be developed to "learn" through the analysis of the data stream from the IoT devices. - -The idea is that this pattern can be used for other use cases keeping the main components in place. The components that would be new to the pattern are: IoT device code, AI/ML models, and specific kafka/broker topics to keep track of. 
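The new Kafka topics for a use case like this can be declared the same way the rest of the pattern is managed: as manifests kept in Git. Below is a minimal sketch of one such topic as an AMQ Streams (Strimzi) `KafkaTopic` resource; the cluster name `my-cluster`, the `manuela-data-lake` namespace, and the sizing values are illustrative assumptions, not values taken from the pattern:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: vehicle-location            # one topic per telemetry stream
  namespace: manuela-data-lake      # assumed namespace; use your Kafka namespace
  labels:
    strimzi.io/cluster: my-cluster  # must match the name of your Kafka cluster
spec:
  partitions: 3                     # increase for a larger device fleet
  replicas: 3
  config:
    retention.ms: 604800000         # keep roughly one week of telemetry
```

Committing a manifest like this to the pattern's Git repository lets the GitOps engine create the topic, using the same flow as the config-map changes in this demo.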
+* The config maps in question live in the `charts/datacenter/manuela-tst/templates/machine-sensor` directory. + +* There are two config maps that you can change: + * `machine-sensor-1-configmap.yaml` + * `machine-sensor-2-configmap.yaml` + +In this customization, you will turn on a temperature sensor for sensor #2. Do this first in the data center, because it demonstrates the power of GitOps without having to involve the edge/factory cluster. + +However, if you do have a factory joined using Advanced Cluster Management, the changes will make their way out to the factory. This is not necessary for the demo, as there is a complete test environment on the data center. + +Follow these steps in the OpenShift console to access the dashboard application in a tab on your browser: + +1. Select **Networking**->**Routes** on the left-hand side of the console. Using the Projects pull-down, select `manuela-tst-all`. The following screen appears: + + [![network-routing-line-dashboard](/images/industrial-edge/network-routing-line-dashboard.png)](/images/industrial-edge/network-routing-line-dashboard.png) + +2. Click the URL under the **Location** column for the route named `line-dashboard`. This will launch the line-dashboard monitoring application in a browser tab. The URL will look like: + + `line-dashboard-manuela-tst-all.apps.*cluster-name*.*domain*` + +3. Once the application is open in your browser, click **Realtime Data** in the left navigation and wait a moment. Data is visualized as it is received. + + > **Note:** Only vibration data is shown at first. If you wait a little longer (anomalies typically occur every 2-3 minutes), you will see an anomaly and an alert for it. + + [![app-line-dashboard-before](/images/industrial-edge/app-line-dashboard-before.png)](/images/industrial-edge/app-line-dashboard-before.png) + +4. Now turn on the temperature sensor. Log in using the `gitea_admin` username and the autogenerated password. 
This password is stored in the `gitea-admin-secret` secret located in the `vp-gitea` namespace. To retrieve it: + + 4.1 Go to **Workloads** > **Secrets** in the left-hand menu. + + 4.2 Using the Projects pull-down, select the `vp-gitea` project and open the `gitea-admin-secret`. + + 4.3 Copy the password found under **Data** into the sign-in screen located in the nine box **Red Hat applications** in the OpenShift Container Platform web console. + + [![gitea-signin](/images/industrial-edge/gitea-signin.png)](/images/industrial-edge/gitea-signin.png) + + > **Note:** Alternatively, you can run the following command to obtain the Gitea user's password automatically: + > + ```sh oc extract -n vp-gitea secret/gitea-admin-secret --to=- --keys=password 2>/dev/null ``` + +5. In the `industrial-edge` repository, edit the file called `charts/datacenter/manuela-tst/templates/machine-sensor/machine-sensor-2-configmap.yaml` +and change `SENSOR_TEMPERATURE_ENABLED: "false"` to `SENSOR_TEMPERATURE_ENABLED: "true"` as shown in the screenshot. + + [![gitea-edit](/images/industrial-edge/gitea-edit.png)](/images/industrial-edge/gitea-edit.png) + +6. Commit this change to your git repository so that the change will be picked up by OpenShift GitOps (ArgoCD). + + [![gitea-commit](/images/industrial-edge/gitea-commit.png)](/images/industrial-edge/gitea-commit.png) + +7. Track the progress of this commit/push in your OpenShift GitOps console in the `manuela-tst-all` application. You will notice components regarding +machine-sensor-2 getting synced. You can speed this up by manually pressing the `Refresh` button. + + [![argocd-line-dashboard](/images/industrial-edge/argocd-line-dashboard.png)](/images/industrial-edge/argocd-line-dashboard.png) + +8. The dashboard app should pick up the change automatically once data from the temperature sensor is received. Sometimes a page/tab refresh is needed for the change to be picked up. 
+ + [![app-line-dashboard](/images/industrial-edge/argocd-machine-sensor2.png)](/images/industrial-edge/argocd-machine-sensor2.png) + +# Adapting the Industrial Edge Pattern for a delivery service use case + +This procedure outlines the steps needed to adapt the Industrial Edge pattern for a **delivery service use case**, while keeping the main architectural components in place. + +**1. Identify the Core Architecture Components to Reuse** +The following components from the Industrial Edge pattern can be reused as is: +- **Broker and Kafka components**: These will handle streaming data from IoT devices. + +**2. Develop IoT Sensor Software for Delivery Vehicles** +- Create or modify IoT sensor software to be deployed on **mobile delivery vehicles**. +- Address challenges related to **intermittent connectivity**, ensuring data can be buffered and sent when a network connection is available. + +**3. Scale the Solution for a Growing Fleet of Vehicles** +- Assess the number of IoT devices required based on fleet size. +- Ensure **Kafka and broker components** can scale dynamically to handle increased data traffic. + +**4. Implement AI/ML for Real-Time Data Analysis** +- Develop a new **AI/ML model** to process and analyze telemetry data from IoT devices. +- Train the model to recognize trends in delivery operations, such as **route efficiency, fuel consumption, and vehicle health**. + +**5. Define and Configure Kafka Topics for IoT Data** +- Create **Kafka topics** specific to delivery service tracking, such as: + - `vehicle-location` + - `delivery-status` + - `fuel-consumption` + - `temperature-monitoring` +- Ensure these topics align with **data processing and analytics needs**. + +**6. Deploy and Monitor the Adapted System** +- Deploy the updated IoT software on delivery vehicles. +- Monitor data ingestion and processing through **Kafka topics and AI/ML insights**. 
- Scale infrastructure as needed to handle increased data traffic. # Next Steps diff --git a/static/images/industrial-edge/datacenter-argocd-apps.png b/static/images/industrial-edge/datacenter-argocd-apps.png index ef3ba450e..746d0999b 100644 Binary files a/static/images/industrial-edge/datacenter-argocd-apps.png and b/static/images/industrial-edge/datacenter-argocd-apps.png differ diff --git a/static/images/industrial-edge/gitea-commit-1.png b/static/images/industrial-edge/gitea-commit-1.png new file mode 100644 index 000000000..3572300aa Binary files /dev/null and b/static/images/industrial-edge/gitea-commit-1.png differ diff --git a/static/images/industrial-edge/gitea-commit.png b/static/images/industrial-edge/gitea-commit.png index 402a06315..0b48ba6ed 100644 Binary files a/static/images/industrial-edge/gitea-commit.png and b/static/images/industrial-edge/gitea-commit.png differ diff --git a/static/images/industrial-edge/gitea-edit.png b/static/images/industrial-edge/gitea-edit.png index 7c9f965bf..f2f82796c 100644 Binary files a/static/images/industrial-edge/gitea-edit.png and b/static/images/industrial-edge/gitea-edit.png differ diff --git a/static/images/industrial-edge/gitea-iot-edit.png b/static/images/industrial-edge/gitea-iot-edit.png index a10fead19..0dc252fff 100644 Binary files a/static/images/industrial-edge/gitea-iot-edit.png and b/static/images/industrial-edge/gitea-iot-edit.png differ diff --git a/static/images/industrial-edge/gitea-pipeline-pr.png b/static/images/industrial-edge/gitea-pipeline-pr.png index fd0666b76..7777316ac 100644 Binary files a/static/images/industrial-edge/gitea-pipeline-pr.png and b/static/images/industrial-edge/gitea-pipeline-pr.png differ diff --git a/static/images/industrial-edge/gitea-signin.png b/static/images/industrial-edge/gitea-signin.png index fdb28a3d5..04faf90de 100644 Binary files a/static/images/industrial-edge/gitea-signin.png and b/static/images/industrial-edge/gitea-signin.png differ diff --git 
a/static/images/industrial-edge/network-routing-line-dashboard.png b/static/images/industrial-edge/network-routing-line-dashboard.png index dd69f1340..8fd9f57a9 100644 Binary files a/static/images/industrial-edge/network-routing-line-dashboard.png and b/static/images/industrial-edge/network-routing-line-dashboard.png differ diff --git a/static/images/industrial-edge/nine-box.png b/static/images/industrial-edge/nine-box.png index f56cfb080..a3430d1e6 100644 Binary files a/static/images/industrial-edge/nine-box.png and b/static/images/industrial-edge/nine-box.png differ diff --git a/static/images/industrial-edge/pipeline-iot-frontend.png b/static/images/industrial-edge/pipeline-iot-frontend.png index 3d9df3694..1e32b40ca 100644 Binary files a/static/images/industrial-edge/pipeline-iot-frontend.png and b/static/images/industrial-edge/pipeline-iot-frontend.png differ