diff --git a/content/patterns/medical-diagnosis/_index.adoc b/content/patterns/medical-diagnosis/_index.adoc index dd9463098..ebcb9ef9e 100644 --- a/content/patterns/medical-diagnosis/_index.adoc +++ b/content/patterns/medical-diagnosis/_index.adoc @@ -94,19 +94,10 @@ The following diagram shows the components that are deployed with the the data f image::medical-edge/physical-dataflow.png[link="/images/medical-edge/physical-dataflow.png"] -== Recorded demo - -link:/videos/xray-deployment.svg[image:/videos/xray-deployment.svg[Demo\]] - == Presentation View presentation for the Medical Diagnosis Validated Pattern link:https://speakerdeck.com/rhvalidatedpatterns/md-speakerdeck[here] -[id="demo-script"] -== Demo Script - -Use this demo script to successfully complete the Medical Diagnosis pattern demo link:demo-script/#demo-intro[here] - [id="next-steps_med-diag-index"] == Next steps diff --git a/content/patterns/medical-diagnosis/cluster-sizing.adoc b/content/patterns/medical-diagnosis/cluster-sizing.adoc index 2cbc124f4..87853cb18 100644 --- a/content/patterns/medical-diagnosis/cluster-sizing.adoc +++ b/content/patterns/medical-diagnosis/cluster-sizing.adoc @@ -1,106 +1,15 @@ --- title: Cluster Sizing -weight: 20 -aliases: /medical-diagnosis/cluster-sizing/ +weight: 30 +aliases: /medical-diagnosis/medical-diagnosis-cluster-sizing/ --- + :toc: :imagesdir: /images :_content-type: ASSEMBLY -include::modules/comm-attributes.adoc[] - -:aws_node: xlarge - - -//Module to be included -//:_content-type: CONCEPT -//:imagesdir: ../../images -[id="about-openshift-cluster-sizing-med"] -== About OpenShift cluster sizing for the {med-pattern} -{aws_node} -To understand cluster sizing requirements for the {med-pattern}, consider the following components that the {med-pattern} deploys on the datacenter or the hub OpenShift cluster: - -|=== -| Name | Kind | Namespace | Description - -| Medical Diagnosis Hub -| Application -| medical-diagnosis-hub -| Hub GitOps management - -| 
{rh-gitops} -| Operator -| openshift-operators -| {rh-gitops-short} - -| {rh-ocp-data-first} -| Operator -| openshift-storage -| Cloud Native storage solution - -| {rh-amq-streams} -| Operator -| openshift-operators -| AMQ Streams provides Apache Kafka access - -| {rh-serverless-first} -| Operator -| - knative-serving (knative-eventing) -| Provides access to Knative Serving and Eventing functions -|=== - -//AI: Removed the following since we have CI status linked on the patterns page -//[id="tested-platforms-cluster-sizing"] -//== Tested Platforms -//: Removed the following in favor of the link to OCP docs -//[id="general-openshift-minimum-requirements-cluster-sizing"] -//== General OpenShift Minimum Requirements -The minimum requirements for an {ocp} cluster depend on your installation platform. For instance, for AWS, see link:https://docs.openshift.com/container-platform/4.16/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS], and for bare-metal, see link:https://docs.openshift.com/container-platform/4.16/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal]. - -For information about requirements for additional platforms, see link:https://docs.openshift.com/container-platform/4.16/installing/installing-preparing.html[{ocp} documentation]. - -//Module to be included -//:_content-type: CONCEPT -//:imagesdir: ../../images - -[id="med-openshift-cluster-size"] -=== About {med-pattern} OpenShift cluster size - -The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers are using for the x86_64 architecture. - -For {med-pattern}, the OpenShift cluster size must be a bit larger to support the compute and storage demands of OpenShift Data Foundations and other Operators. 
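The tested topology for this pattern is a standard deployment of three control plane and three worker nodes. Aggregate capacity for such a layout can be sanity-checked with a small calculator; the per-instance vCPU and memory figures below are assumptions taken from the cloud providers' public instance documentation, not values defined by this pattern:

```python
# Sizing sketch for a uniform 3 control-plane + 3 worker cluster.
# Instance specs are assumptions from public provider docs, not from
# this pattern's test matrix; adjust them to your environment.
INSTANCE_SPECS = {
    "n1-standard-8":   {"vcpu": 8, "mem_gib": 30},  # Google Cloud
    "m5.2xlarge":      {"vcpu": 8, "mem_gib": 32},  # AWS
    "Standard_D8s_v3": {"vcpu": 8, "mem_gib": 32},  # Azure
}

def cluster_totals(instance_type, control_plane=3, workers=3):
    """Total node count, vCPU, and memory for a uniform cluster."""
    spec = INSTANCE_SPECS[instance_type]
    nodes = control_plane + workers
    return {"nodes": nodes,
            "vcpu": nodes * spec["vcpu"],
            "mem_gib": nodes * spec["mem_gib"]}

print(cluster_totals("m5.2xlarge"))
# {'nodes': 6, 'vcpu': 48, 'mem_gib': 192}
```

Remember that {ocp-data-short} and the other Operators add overhead on top of the base OpenShift requirements, which is why this pattern needs the larger instance sizes.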
-//AI:Removed a few lines from here since the content is updated to remove any ambiguity. We rather use direct links (OCP docs/ GCP/AWS/Azure) -[NOTE] -==== -You might want to add resources when more developers are working on building their applications. -==== - -The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or more worker nodes. - -[cols="^,^,^,^"] -|=== -| Node type | Number of nodes | Cloud provider | Instance type - -| Control plane and worker -| 3 and 3 -| Google Cloud -| n1-standard-8 - -| Control plane and worker -| 3 and 3 -| Amazon Cloud Services -| m5.2xlarge - -| Control plane and worker -| 3 and 3 -| Microsoft Azure -| Standard_D8s_v3 -|=== +include::modules/comm-attributes.adoc[] +include::modules/medical-diagnosis/metadata-medical-diagnosis.adoc[] -[role="_additional-resources"] -.Additional resource -* link:https://aws.amazon.com/ec2/instance-types/[AWS instance types] -* link:https://learn.microsoft.com/en-us/azure/virtual-machines/sizes[Azure instance types: Sizes for virtual machines in Azure] -* link:https://cloud.google.com/compute/docs/machine-resource[Google Cloud Platform instance types: Machine families resource and comparison guide] -//Removed section for instance types as we did for MCG +include::modules/cluster-sizing-template.adoc[] \ No newline at end of file diff --git a/content/patterns/medical-diagnosis/demo-script.adoc b/content/patterns/medical-diagnosis/demo-script.adoc index a80964de6..a446fd616 100644 --- a/content/patterns/medical-diagnosis/demo-script.adoc +++ b/content/patterns/medical-diagnosis/demo-script.adoc @@ -1,6 +1,6 @@ --- -title: Demo Script -weight: 60 +title: Verifying the demo +weight: 20 aliases: /medical-diagnosis/demo/ --- @@ -19,148 +19,57 @@ image::../../images/medical-edge/aiml_pipeline.png[link="/images/medical-edge/ai [NOTE] ==== -We simulate the function of the remote medical facility with an application called `image-generator` +We simulate the function of the remote 
medical facility with an application called `image-generator`.
====

+//Module to be included
+//:_content-type: PROCEDURE
+//:imagesdir: ../../../images
+[id="viewing-the-grafana-based-dashboard-getting-started"]
+== Enabling the Grafana-based dashboard

-[id="demo-objectives"]
+The Grafana dashboard offers a visual representation of the AI/ML workflow, including CPU and memory metrics for the pod running the risk assessment application. Additionally, it displays a graphical overview of the AI/ML workflow, illustrating the images being generated at the remote medical facility.

-== Objectives
+This showcase application is deployed with self-signed certificates, which are considered untrusted by most browsers. If valid certificates have not been provisioned for your OpenShift cluster, you must manually accept the untrusted certificates by following these steps:

-In this demo you will complete the following:
+. Accept the SSL certificates on the browser for the dashboard. In the {ocp} web console, go to *Networking* > *Routes* for *All Projects*. Click the URL for the `s3-rgw` route.
++
+image::../../images/medical-edge/storage-route.png[s3-rgw route]
++
+Ensure that you see XML and not an access denied error message.
++
+image::../../images/medical-edge/storage-rgw-route.png[link="/images/medical-edge/storage-rgw-route.png"]

-* Prepare your local workstation
-* Update the pattern repo with your cluster values
-* Deploy the pattern
-* Access the dashboard
+. While still looking at *Routes*, change the project to `xraylab-1`. Click the URL for the `image-server` route and ensure that you do not see an access denied error message. You should see a `Hello world` message.

-[id="getting-started"]
+This showcase application does not have access to an x-ray machine for this demo, so one is emulated by creating an s3 bucket and hosting the x-ray images within it. 
In the "real world" an x-ray would be taken at an edge medical facility and then uploaded to an OpenShift Data Foundations (ODF) S3-compatible bucket in the Core Hospital, triggering the AI/ML workflow.

-== Getting Started
+To emulate the edge medical facility, we use an application called `image-generator` which, when scaled up, downloads the x-rays from s3 and puts them in an ODF s3 bucket in the cluster, triggering the AI/ML workflow.

-* Follow the link:../getting-started[Getting Started Guide] to ensure that you have met all of the pre-requisites
-* Review link:../getting-started/#preparing-for-deployment[Preparing for Deployment] for updating the pattern with your cluster values
+Turn on the image file flow. There are a couple of ways to go about this.

-[NOTE]
-====
-This demo begins after `./pattern.sh make install` has been executed
-====
-
-[id="demo"]
-
-== Demo
-
-Now that we have deployed the pattern onto our cluster, we can begin to discover what has changed, and then move onto the dashboard.
-
-[id="admin-view"]
-
-=== Administrator View - Review Changes to cluster
-
-Login to your cluster's console with the `kubeadmin` user
-
-Let's check out what operators were installed - In the accordion menu on the left:
-
-* click Operators
-* click Installed Operators
-
-[NOTE]
-
-====
-Ensure that **All Projects** is selected
-====
-
-image::../../images/medical-edge/admin_developer-contexts.png[link="/images/medical-edge/admin_developer-contexts.png"]
-
-
-If you started with a new cluster then there were no layered products or operators installed. With the Validated Patterns framework we describe or declare what our cluster's desired state is and the GitOps engine does the rest. This includes creating the instance of the operator and any additional configuration between other API's to ensure everything is working together nicely. 
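The trigger mechanism described in this section (an object landing in the ODF bucket kicks off the AI/ML workflow) can be illustrated with a miniature, purely conceptual sketch; the classes and callbacks below are stand-ins, not the pattern's actual Kafka/Knative wiring:

```python
# Conceptual stand-in for the bucket-notification flow: uploading an
# object fires subscribed callbacks, which here play the role of the
# risk-assessment step triggered by the real S3/ODF notification.
class Bucket:
    def __init__(self):
        self.objects = {}
        self.listeners = []

    def on_object_created(self, callback):
        self.listeners.append(callback)

    def upload(self, name, data):
        self.objects[name] = data
        for callback in self.listeners:
            callback(name)

results = []
bucket = Bucket()
# Stand-in for the service that assesses each incoming x-ray.
bucket.on_object_created(lambda name: results.append(f"assessed {name}"))
# Stand-in for image-generator copying an x-ray into the bucket.
bucket.upload("xray-0001.png", b"...")
print(results)
# ['assessed xray-0001.png']
```

In the deployed pattern, scaling `image-generator` up is what starts these uploads, so no images flow (and nothing is assessed) while its pod count is `0`.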
- - -[id="dev-view"] - -=== Developer View - Review Changes to cluster - -Let’s switch to the developer context by click on `Administrator` in the top left corner of the accordion menu then click `Developer` - -* Change projects to `xraylab-1` -* Click on `Topology` - - -image::../../images/medical-edge/dev-topology.png[link="/images/medical-edge/dev-topology.png"] - -Look at all of the resources that have been created for this demo application. What we see in this interface is the collection of all components required for this AI/ML workflow to properly execute. There are even more resources and configurations that get deployed but because we don't directly interact with them we won't worry too much about them. The take away here is when you utilize the framework you are able to build in automation just like this which allows your developers to focus on their important developer things. - - -[id="certificate-warn"] - -=== Invalid Certificates - -We are deploying this demo using self-signed certificates that are untrusted by our browser. Unless you have provisioned valid certificates for your OpenShift cluster you must accept the invalid certificates for: - -* s3-rgw | openshift-storage namespace -* grafana | xraylab-1 namespace - -[source,shell] ----- - -S3RGW_ROUTE=https://$(oc get route -n openshift-storage s3-rgw -o jsonpath='{.spec.host}') - -echo $S3RGW_ROUTE - -GRAFANA_ROUTE=https://$(oc get route -n xraylab-1 grafana -o jsonpath='{.spec.host}') - -echo $GRAFANA_ROUTE ----- - -[WARNING] - -==== -You must accept the security risks / self signed certificates before scaling the image-generator application -==== - -[id="scale-up"] - -=== Scale up the deployment - -As we mentioned earlier, we don't have an x-ray machine hanging around that we can use for this demo, so we emulate one by creating an s3 bucket and hosting the x-ray images within it. 
In the "real world" an x-ray would be taken at an edge medical facility and then uploaded to an OpenShift Data Foundations (ODF) S3 compatible bucket in the Core Hospital, triggering the AI/ML workflow. - -To emulate the edge medical facility we use an application called `image-generator` which (when scaled up) will download the x-rays from s3 and put them in an ODF s3 bucket in the cluster, triggering the AI/ML workflow. - -Let's scale the `image-generator` deploymentConfig up to start the pipeline - -[NOTE] -==== -Make sure that you are in the `xraylab-1` project under the `Developer` context in the OpenShift Console -==== - -In the Topology menu under the Developer context in the OpenShift Console: +. Go to the {ocp} web console and change the view from *Administrator* to *Developer* and select *Topology*. From there select the `xraylab-1` project. -* Search for the `image-generator` application in the Topology console +. Right-click on the `image-generator` pod icon and select `Edit Pod count`. -image::../../images/medical-edge/image-generator.png[link="/images/medical-edge/image-generator.png"] +. Up the pod count from `0` to `1` and save. -* Click on the `image-generator` application ( you may have to zoom in on the highlighted application) -* Switch to the `Details` menu in the application menu context -* Click the `^` next to the pod donut +Alternatively, you can have the same outcome on the Administrator console. -image::../../images/medical-edge/image-generator-scale.png[link="/images/medical-edge/image-generator-scale.png"] +. Go to the {ocp} web console under *Workloads*, select *Deployments* for the *Project* `xraylab-1`. +. Click `image-generator` and increase the pod count to 1. [id="demo-dashboard"] -== Demo Dashboard +== Viewing the Grafana dashboard -Now let’s jump over to the dashboard +Access the Grafana dashboard to view the AI/ML workflow. 
Carry out the following steps:

-* Return to the topology screen
-* Select “Grafana” in the drop down for Filter by resource
-* Click the grafana icon
-* Open url to go open a browser for the grafana dashboard.
+. In the {ocp} web console, select the nines menu and right-click the *Grafana* icon.

-Within the grafana dashboard:
+. Within the Grafana dashboard, click the Dashboards icon.

-* click the dashboards icon
-* click Manage
-* select xraylab-1
-* finally select the XRay Lab folder
+. Select the `xraylab-1` folder and the XRay Lab menu item.

image::../../images/medical-edge/dashboard.png[link="/images/medical-edge/dashboard.png"]

@@ -176,4 +85,4 @@ You did it! You have completed the deployment of the medical diagnosis pattern!

The medical diagnosis pattern is more than just the identification and detection of pneumonia in x-ray images. It is an object detection and classification model built on top of Red Hat OpenShift and can be transformed to fit multiple use-cases within the object classification paradigm. Similar use-cases would be detecting contraband items in the Postal Service or even in luggage in an airport baggage scanner.

-For more information on Validated Patterns visit our link:https://validatedpatterns.io/[website]
+For more information about Validated Patterns, visit our link:https://validatedpatterns.io/[website].
diff --git a/content/patterns/medical-diagnosis/getting-started.adoc b/content/patterns/medical-diagnosis/getting-started.adoc
index daacebdcc..53d728aa9 100644
--- a/content/patterns/medical-diagnosis/getting-started.adoc
+++ b/content/patterns/medical-diagnosis/getting-started.adoc
@@ -14,12 +14,30 @@ include::modules/comm-attributes.adoc[]

.Prerequisites

* An OpenShift cluster
** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console].
- ** Select *Services* -> *Containers* -> *Create cluster*. 
- ** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`. See link:../../medical-diagnosis/cluster-sizing[sizing your cluster].
+ ** Select *OpenShift \-> Red Hat OpenShift Container Platform \-> Create cluster*.
+ ** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`. Verify that a dynamic `StorageClass` exists before creating one by running the following command:
++
+[source,terminal]
+----
+$ oc get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,DEFAULT:.metadata.annotations."storageclass\.kubernetes\.io/is-default-class"
+----
++
+.Example output
++
+[source,terminal]
+----
+NAME      PROVISIONER       DEFAULT
+gp2-csi   ebs.csi.aws.com
+gp3-csi   ebs.csi.aws.com   true
+----
++
+For more information about creating a dynamic `StorageClass`, see the https://docs.openshift.com/container-platform/latest/storage/dynamic-provisioning.html[Dynamic provisioning] documentation.

* A GitHub account and a token for it with repositories permissions, to read from and write to your forks.
-* An S3-capable Storage set up in your public or private cloud for the x-ray images
+* An S3-capable storage set up in your public or private cloud for the x-ray images
* The Helm binary, see link:https://helm.sh/docs/intro/install/[Installing Helm]

For installation tooling dependencies, see link:https://validatedpatterns.io/learn/quickstart/[Patterns quick start].

@@ -32,9 +50,9 @@ The {med-pattern} does not have a dedicated hub or edge cluster.

=== Setting up an S3 Bucket for the xray-images

An S3 bucket is required for image processing.
-For information about creating a bucket in AWS S3, see the <> section.
+The link:https://github.com/validatedpatterns/utilities[utilities] repository, specifically its `aws-tools` directory, contains S3 and EC2 tools.
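The dynamic `StorageClass` prerequisite can also be checked in a script. A minimal sketch that parses the `custom-columns` output shown in the example above (the column layout is assumed to match that example):

```python
def default_storage_class(oc_output):
    """Find the default StorageClass in the custom-columns output of
    `oc get storageclass` (NAME, PROVISIONER, DEFAULT columns)."""
    for line in oc_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        # The DEFAULT column reads "true" only for the default class.
        if len(fields) >= 3 and fields[2] == "true":
            return fields[0]
    return None

sample = """\
NAME      PROVISIONER       DEFAULT
gp2-csi   ebs.csi.aws.com
gp3-csi   ebs.csi.aws.com   true"""
print(default_storage_class(sample))
# gp3-csi
```

If the function returns `None`, no default dynamic `StorageClass` is set and one must be created or annotated as default before deploying the pattern.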
-For information about creating the buckets on other cloud providers, see the following links:
+For the official documentation on creating the buckets on AWS and other cloud providers, see the following links:

* link:https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html[AWS S3]
* link:https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal[Azure Blob Storage]

@@ -45,35 +63,75 @@ For information about creating the buckets on other cloud providers, see the fol

== Utilities
//AI: Update the use of community and VP post naming tier update
-To use the link:https://github.com/validatedpatterns/utilities[utilities] that are available, export some environment variables for your cloud provider.
+Follow this procedure to use the scripts provided in the link:https://github.com/validatedpatterns/utilities[utilities] repository to configure an S3 bucket in your AWS environment for the x-ray images.
+
+.Procedure

-.Example for AWS. Ensure that you replace values with your keys:
+. Fork the link:https://github.com/validatedpatterns/utilities[utilities] repository on GitHub. Forking the repository allows you to update the repository as part of the GitOps and DevOps processes.
+. Clone the forked copy of this repository.
++
[source,terminal]
----
-export AWS_ACCESS_KEY_ID=AKXXXXXXXXXXXXX
-export AWS_SECRET_ACCESS_KEY=gkXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+$ git clone git@github.com:validatedpatterns/utilities.git
----

-Create the S3 bucket and copy over the data from the validated patterns public bucket to the created bucket for your demo. You can do this on the cloud providers console or you can use the scripts that are provided in link:https://github.com/validatedpatterns/utilities[utilities] repository.
+. Change to the root directory of the cloned repository:
++
+[source,terminal]
+----
+$ cd utilities
+----
+. 
Run the following command to set the upstream repository: ++ [source,terminal] ---- -$ python s3-create.py -b mytest-bucket -r us-west-2 -p -$ python s3-sync-buckets.py -s validated-patterns-md-xray -t mytest-bucket -r us-west-2 +git remote add -f upstream git@github.com:validatedpatterns/utilities.git ---- -.Example output +. Change to the `aws-tools` directory: ++ +[source,terminal] +---- +$ cd aws-tools +---- + +. Run the following commands in your terminal to export environment variables for AWS authentication: ++ +[source,terminal] +---- +export AWS_ACCESS_KEY_ID=AKXXXXXXXXXXXXX +export AWS_SECRET_ACCESS_KEY=gkXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX +---- ++ +[NOTE] +==== +Ensure that you replace values with your keys +==== + +. Create the S3 bucket by running the following command: ++ +[source,terminal] +---- +$ python s3-create.py -b kevtest-bucket -r us-east-1 -p +---- -image:/videos/bucket-setup.svg[Bucket setup] +. Copy over the data from the validated patterns public bucket to the created bucket for your demo. ++ +[source,terminal] +---- +$ python s3-sync-buckets.py -s validated-patterns-md-xray -t kevtest-bucket -r us-east-1 +---- -Note the name and URL for the bucket for further pattern configuration. For example, you must update these values in a `values-global.yaml` file, where there is a section for `s3:` +Note the name of the bucket for further pattern configuration. Later you will update the `bucketSource` in the `values-global.yaml` file, where there is a section for `s3:`. [id="preparing-for-deployment"] == Preparing for deployment .Procedure . Fork the link:https://github.com/validatedpatterns/medical-diagnosis[medical-diagnosis] repository on GitHub. You must fork the repository because your fork will be updated as part of the GitOps and DevOps processes. + . Clone the forked copy of this repository. + [source,terminal] @@ -81,6 +139,37 @@ Note the name and URL for the bucket for further pattern configuration. 
For exam $ git clone git@github.com:/medical-diagnosis.git ---- +. Go to your repository: Ensure you are in the root directory of your Git repository by using: ++ +[source,terminal] +---- +$ cd /path/to/your/repository +---- + +. Run the following command to set the upstream repository: ++ +[source,terminal] +---- +$ git remote add -f upstream git@github.com:validatedpatterns/medical-diagnosis.git +---- + +. Verify the setup of your remote repositories by running the following command: ++ +[source,terminal] +---- +$ git remote -v +---- ++ +.Example output ++ +[source,terminal] +---- +origin git@github.com:kquinn/medical-diagnosis.git (fetch) +origin git@github.com:kquinn/medical-diagnosis.git (push) +upstream git@github.com:validatedpatterns/medical-diagnosis.git (fetch) +upstream git@github.com:validatedpatterns/medical-diagnosis.git (push) +---- + . Create a local copy of the Helm values file that can safely include credentials. + [WARNING] @@ -93,7 +182,6 @@ Run the following commands: [source,terminal] ---- $ cp values-secret.yaml.template ~/values-secret-medical-diagnosis.yaml -$ vi ~/values-secret-medical-diagnosis.yaml ---- + .Example `values-secret.yaml` file @@ -135,241 +223,249 @@ secrets: vaultPolicy: validatedPatternDefaultPolicy ---- + -By default, Vault password policy generates the passwords for you. However, you can create your own passwords. +By default, the Vault password policy generates the passwords for you. However, you can create your own passwords. + +. If you want to create custom passwords for the database users you will need to edit this file: ++ +[source,terminal] +---- +$ vi ~/values-secret-medical-diagnosis.yaml +---- + [NOTE] ==== When defining a custom password for the database users, avoid using the `$` special character as it gets interpreted by the shell and will ultimately set the incorrect desired password. ==== -. 
To customize the deployment for your cluster, update the `values-global.yaml` file by running the following commands: +. Create and switch to a new branch named my-branch, by running the following command: + [source,terminal] ---- $ git checkout -b my-branch +---- + +. Edit the `values-global.yaml` updating the S3 and datacenter details. ++ +[source,terminal] +---- $ vi values-global.yaml ---- + -Replace instances of PROVIDE_ with your specific configuration +.Example edited `values-global.yaml` file + [source,yaml] ---- - ...omitted - datacenter: - cloudProvider: PROVIDE_CLOUD_PROVIDER #AWS, AZURE, GCP - storageClassName: PROVIDE_STORAGECLASS_NAME #gp3-csi - region: PROVIDE_CLOUD_REGION #us-east-2 - clustername: PROVIDE_CLUSTER_NAME #OpenShift clusterName - domain: PROVIDE_DNS_DOMAIN #example.com +global: + pattern: xray + + options: + useCSV: False + syncPolicy: Automatic + installPlanApproval: Automatic + + datacenter: + storageClassName: gp3-csi + cloudProvider: aws + region: us-east-1 + clustername: mytestcluster + domain: aws.validatedpatterns.io - s3: - # Values for S3 bucket access - # Replace with AWS region where S3 bucket was created - # Replace and with your OpenShift cluster values - # bucketSource: "https://s3..amazonaws.com/" - bucketSource: PROVIDE_BUCKET_SOURCE #validated-patterns-md-xray - # Bucket base name used for xray images - bucketBaseName: "xray-source" + xraylab: + namespace: "xraylab-1" + + s3: + # Values for S3 bucket access + # bucketSource: "provide s3 bucket name where images are stored" + bucketSource: kevtest-bucket + # Bucket base name used for image-generator and image-server applications. + bucketBaseName: "xray-source" + +main: + clusterGroupName: hub + multiSourceConfig: + enabled: true + clusterGroupChartVersion: 0.9.* + +# Example Configuration + #datacenter: + # cloudProvider: aws + # storageClassName: gp2 + # region: us-east-1 + # clustername: example-sample + # domain: patterns.redhat.com ---- + +. 
Add `values-global.yaml` to the staging area: + [source,terminal] ---- $ git add values-global.yaml -$ git commit values-global.yaml -$ git push origin my-branch ---- -. To deploy the pattern, you can use the link:/infrastructure/using-validated-pattern-operator/[{validated-patterns-op}]. If you do use the Operator, skip to <>. - -. To preview the changes that will be implemented to the Helm charts, run the following command: +. Commit the staged changes with a message: + [source,terminal] ---- -$ ./pattern.sh make show +$ git commit -m "Update values-global.yaml" ---- -. Login to your cluster by running the following command: -+ -[source,terminal] ----- -$ oc login ----- -+ -Optional: Set the `KUBECONFIG` variable for the `kubeconfig` file path: +. Push the changes to your forked repository: + [source,terminal] ---- - export KUBECONFIG=~/ +$ git push origin my-branch ---- -[id="check-the-values-files-before-deployment"] -=== Check the values files before deployment +You can proceed to install the {med-pattern} pattern by using the web console or from command line by using the script `./pattern.sh` script. -To ensure that you have the required variables to deploy the {med-pattern}, run the `./pattern.sh make predeploy` command. You can review your values and make updates, if required. +To install the {med-pattern} pattern by using the web console you must first install the Validated Patterns Operator. The Validated Patterns Operator installs and manages Validated Patterns. -You must review the following values files before deploying the {med-pattern}: +//Include Procedure module here +[id="installing-validated-patterns-operator_{context}"] +== Installing the {validated-patterns-op} using the web console -|=== -| Values File | Description +.Prerequisites +* Access to an {ocp} cluster by using an account with `cluster-admin` permissions. + +.Procedure -| values-secret.yaml -| Values file that includes the secret parameters required by the pattern +. 
Navigate in the {hybrid-console-first} to the *Operators* → *OperatorHub* page. -| values-global.yaml -| File that contains all the global values used by Helm to deploy the pattern -|=== +. Scroll or type a keyword into the *Filter by keyword* box to find the Operator you want. For example, type `validated patterns` to find the {validated-patterns-op}. +. Select the Operator to display additional information. ++ [NOTE] ==== -Before you run the `./pattern.msh make install` command, ensure that you have the correct values for: -``` -- domain -- clusterName -- cloudProvider -- storageClassName -- region -- bucketSource -``` +Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. ==== -//image::/videos/predeploy.svg[link="/videos/predeploy.svg"] +. Read the information about the Operator and click *Install*. -[id="med-deploy-pattern_{context}"] -== Deploy +. On the *Install Operator* page: -. To apply the changes to your cluster, run the following command: -+ -[source,terminal] ----- -$ ./pattern.sh make install ----- -+ -If the installation fails, you can go over the instructions and make updates, if required. -To continue the installation, run the following command: -+ -[source,terminal] ----- -$ ./pattern.sh make update ----- -+ -This step might take some time, especially for the {ocp-data-short} Operator components to install and synchronize. The `./pattern.sh make install` command provides some progress updates during the installation process. It can take up to twenty minutes. Compare your `./pattern.sh make install` run progress with the following video that shows a successful installation. +.. Select an *Update channel* (if more than one is available). + +.. Select a *Version* (if more than one is available). + +.. 
Select an *Installation mode*: + -image::/videos/xray-deployment.svg[link="/videos/xray-deployment.svg"] +The only supported mode for this Operator is *All namespaces on the cluster (default)*. This installs the Operator in the default `openshift-operators` namespace to watch and be made available to all namespaces in the cluster. This option is not always available. -. Verify that the Operators have been installed. -.. To verify, in the {ocp} web console, navigate to *Operators* → *Installed Operators* page. -.. Check that the Operator is installed in the `openshift-operators` namespace and its status is `Succeeded`. Ensure that {ocp-data-short} is listed in the list of installed Operators. +.. Select *Automatic* or *Manual* approval strategy. +. Click *Install* to make the Operator available to the selected namespaces on this {ocp} cluster. -[id="using-openshift-gitops-to-check-on-application-progress-getting-started"] -=== Using OpenShift GitOps to check on Application progress +.Verification +To confirm that the installation is successful: -To check the various applications that are being deployed, you can view the progress of the {rh-gitops-short} Operator. +. Navigate to the *Operators* → *Installed Operators* page. -. Obtain the ArgoCD URLs and passwords. -+ -The URLs and login credentials for ArgoCD change depending on the pattern name and the site names they control. Follow the instructions below to find them, however you choose to deploy the pattern. -+ -Display the fully qualified domain names, and matching login credentials, for -all ArgoCD instances: -+ -[source,terminal] ----- -ARGO_CMD=`oc get secrets -A -o jsonpath='{range .items[*]}{"oc get -n "}{.metadata.namespace}{" routes; oc -n "}{.metadata.namespace}{" extract secrets/"}{.metadata.name}{" --to=-\\n"}{end}' | grep gitops-cluster` -CMD=`echo $ARGO_CMD | sed 's|- oc|-;oc|g'` -eval $CMD ----- -+ -.Example output +. 
Check that the Operator is installed in the selected namespace and its status is `Succeeded`. + +//Include Procedure module here +[id="create-pattern-instance_{context}"] +== Creating the Medical Diagnosis GitOps instance + +.Prerequisites +The {med-pattern} is successfully installed in the relevant namespace. + +.Procedure + +. Navigate to the *Operators* → *Installed Operators* page. + +. Click the installed *{validated-patterns-op}*. + +. Under the *Details* tab, in the *Provided APIs* section, in the +*Pattern* box, click *Create instance* that displays the *Create Pattern* page. + +. On the *Create Pattern* page, select *Form view* and enter information in the following fields: + +** *Name* - A name for the pattern deployment that is used in the projects that you created. +** *Labels* - Apply any other labels you might need for deploying this pattern. +** *Cluster Group Name* - Select a cluster group name to identify the type of cluster where this pattern is being deployed. For example, if you are deploying the {ie-pattern}, the cluster group name is `datacenter`. If you are deploying the {mcg-pattern}, the cluster group name is `hub`. 
+ -[source,text] ----- -NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD -hub-gitops-server hub-gitops-server-medical-diagnosis-hub.apps.wh-medctr.blueprints.rhecoeng.com hub-gitops-server https passthrough/Redirect None -# admin.password -xsyYU6eSWtwniEk1X3jL0c2TGfQgVpDH -NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD -cluster cluster-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com cluster 8080 reencrypt/Allow None -kam kam-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com kam 8443 passthrough/None None -openshift-gitops-server openshift-gitops-server-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com openshift-gitops-server https passthrough/Redirect None -# admin.password -FdGgWHsBYkeqOczE3PuRpU1jLn7C2fD6 ----- +To know the cluster group name for the patterns that you want to deploy, check the relevant pattern-specific requirements. +. Expand the *Git Config* section to reveal the options and enter the required information. +. Leave *In Cluster Git Server* unchanged. +.. Change the *Target Repo* URL to your forked repository URL. For example, change `+https://github.com/validatedpatterns/+` to `+https://github.com//+` +.. Optional: You might need to change the *Target Revision* field. The default value is `HEAD`. However, you can also provide a value for a branch, tag, or commit that you want to deploy. For example, `v2.1`, `main`, or a branch that you created, `my-branch`. +. Click *Create*. + -[IMPORTANT] +[NOTE] ==== -Examine the `medical-diagnosis-hub` ArgoCD instance. You can track all the applications for the pattern in this instance. +A pop-up error with the message "Oh no! Something went wrong." might appear during the process. This error can be safely disregarded as it does not impact the installation of the Multicloud GitOps pattern. 
Use the Hub ArgoCD UI, accessible through the nines menu, to check the status of the ArgoCD instances. For each managed application, the UI displays a state such as `Progressing` or `Healthy`. The Cluster ArgoCD provides detailed status on each application, as defined in the clustergroup values file.
====
-. Check that all applications are synchronized. There are thirteen different ArgoCD `applications` that are deployed as part of this pattern.
+The {rh-gitops} Operator appears in the list of *Installed Operators*. The {rh-gitops} Operator installs the remaining assets and artifacts for this pattern. To view the installation of these assets and artifacts, such as {rh-rhacm-first}, ensure that you switch to *Project:All Projects*.
+Wait for all the applications to deploy. You can track the progress through the `Hub ArgoCD` UI from the nines menu. The `xraylab-database` project appears stuck in a `Degraded` state. This is expected behavior when you install by using the OpenShift Container Platform console.

-[id="viewing-the-grafana-based-dashboard-getting-started"]
-=== Viewing the Grafana based dashboard
-
-. Accept the SSL certificates on the browser for the dashboard. In the {ocp} web console, go to the Routes for project `openshift-storage``. Click the URL for the `s3-rgw`.
-+
-image::/images/medical-edge/storage-route.png[link="/images/medical-edge/storage-route.png"]
+* To resolve this, run the following command to load the secrets into the vault:
+
-Ensure that you see some XML and not the access denied error message.
+[source,terminal]
+----
+$ ./pattern.sh make load-secrets
+----
+
-image::/images/medical-edge/storage-rgw-route.png[link="/images/medical-edge/storage-rgw-route.png"]
+[NOTE]
+====
+You must have created a local copy of the secret values file by running the following command:

-. Turn on the image file flow. There are three ways to go about this.
-+
-You can go to the command-line (make sure you have KUBECONFIG set, or are logged into the cluster).
-+
[source,terminal]
----
-$ oc scale deployment/image-generator --replicas=1 -n xraylab-1
+$ cp values-secret.yaml.template ~/values-secret-medical-diagnosis.yaml
----
-+
-Or you can go to the OpenShift UI and change the view from Administrator to Developer and select Topology. From there select the `xraylab-1` project.
-+
-image::/images/medical-edge/dev-topology.png[link="/images/medical-edge/dev-topology.png"]
-+
-Right-click on the `image-generator` pod icon and select `Edit Pod count`.
-+
-image::/images/medical-edge/dev-topology-menu.png[link="/images/medical-edge/dev-topology-menu.png"]
-+
-Up the pod count from `0` to `1` and save.
-+
-image::/images/medical-edge/dev-topology-pod-count.png[link="/images/medical-edge/dev-topology-pod-count.png"]
-+
-Alternatively, you can have the same outcome on the Administrator console.
-+
-Go to the OpenShift UI under Workloads, select Deployments for Project `xraylab-1`.
-Click `image-generator` and increase the pod count to 1.
-+
-image::/images/medical-edge/start-image-flow.png[link="/images/medical-edge/start-image-flow.png"]
+====
+
+The deployment does not take long and should complete successfully.
+Alternatively, you can deploy the {med-pattern} by using the command-line script `pattern.sh`.

-[id="making-some-changes-on-the-dashboard-getting-started"]
-=== Making some changes on the dashboard
+[id="deploying-cluster-using-patternsh-file"]
+== Deploying the cluster by using the pattern.sh file

-You can change some of the parameters and watch how the changes effect the dashboard.
+To deploy the cluster by using the `pattern.sh` file, complete the following steps:

-. You can increase or decrease the number of image generators.
+. Log in to your cluster by running the following command:
+
[source,terminal]
----
-$ oc scale deployment/image-generator --replicas=2
+$ oc login
----
+
-Check the dashboard.
+Optional: Set the `KUBECONFIG` variable for the `kubeconfig` file path:
+
[source,terminal]
----
-$ oc scale deployment/image-generator --replicas=0
+$ export KUBECONFIG=~/
----
-+
-Watch the dashboard stop processing images.
-. You can also simulate the change of the AI model version - as it's only an environment variable in the Serverless Service configuration.
+. Deploy the pattern to your cluster. Run the following command:
+
[source,terminal]
----
-$ oc patch service.serving.knative.dev/risk-assessment --type=json -p '[{"op":"replace","path":"/spec/template/metadata/annotations/revisionTimestamp","value":"'"$(date +%F_%T)"'"},{"op":"replace","path":"/spec/template/spec/containers/0/env/0/value","value":"v2"}]'
+$ ./pattern.sh make install
----
+
+. Verify that the Operators have been installed.
+
.. In the OpenShift Container Platform web console, navigate to the *Operators → Installed Operators* page.
+
.. Check that the *Red Hat OpenShift GitOps Operator* is installed in the `openshift-operators` namespace and its status is `Succeeded`.
+. Wait for all the applications to deploy. You can track the progress through the `Hub ArgoCD` UI from the nines menu.
+
-This changes the model version value, and the `revisionTimestamp` in the annotations, which triggers a redeployment of the service.
+image::../../images/medical-edge/medical-diags-overview.png[link="/images/medical-edge/medical-diags-overview.png"]
+
+As part of installing by using the `pattern.sh` script, HashiCorp Vault is installed. Running `./pattern.sh make install` also calls the `load-secrets` makefile target. This target looks for a YAML file that describes the secrets to be loaded into Vault. If it cannot find one, it uses the `values-secret.yaml.template` file in the Git repository to generate random secrets.
+
+For more information, see the section on https://validatedpatterns.io/secrets/vault/[Vault].
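The `load-secrets` flow described above can be sketched as a small shell helper. This is a hedged illustration, not part of the pattern's tooling: the file names come from this document, but the `prepare_secrets_file` function itself and its fallback behavior are assumptions.

```shell
# Sketch: prepare the local secrets file that `./pattern.sh make load-secrets` reads.
# File paths follow this document; the helper is illustrative, not pattern tooling.
prepare_secrets_file() {
  template="${1:-values-secret.yaml.template}"              # shipped in the repository
  target="${2:-$HOME/values-secret-medical-diagnosis.yaml}" # read by load-secrets
  if [ -f "$target" ]; then
    echo "secrets file already exists: $target"
  elif [ -f "$template" ]; then
    # Copy the template so you can fill in real secret values before installing.
    cp "$template" "$target" && echo "copied template to $target"
  else
    echo "template not found: $template; vault falls back to random secrets" >&2
    return 1
  fi
}
```

Run the helper from the root of your forked repository before `./pattern.sh make install`, then edit the copied file with your real secret values.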
+
+.Verification
+
+To check the various applications that are being deployed, you can view the progress of the {rh-gitops-short} Operator.
+
+[IMPORTANT]
+====
+Examine the `medical-diagnosis-hub` ArgoCD instance. You can track all the applications for the pattern in this instance.
+====
+
+. Check that all applications are synchronized. There are thirteen different ArgoCD `applications` that are deployed as part of this pattern.
\ No newline at end of file
diff --git a/content/patterns/medical-diagnosis/ideas-for-customization.adoc b/content/patterns/medical-diagnosis/ideas-for-customization.adoc
index fba7350e2..c22011461 100644
--- a/content/patterns/medical-diagnosis/ideas-for-customization.adoc
+++ b/content/patterns/medical-diagnosis/ideas-for-customization.adoc
@@ -30,3 +30,37 @@ These are just a few ideas to help you understand how you could use the {med-pat
//We have relevant links on the patterns page
//AI: Why does this point to AEG though?
https://github.com/validatedpatterns/ansible-edge-gitops/issues[Report Bugs]
+
+
+//Module to be included
+//:_content-type: PROCEDURE
+//:imagesdir: ../../../images
+[id="making-some-changes-on-the-dashboard-getting-started"]
+=== Making some changes on the dashboard
+
+You can change some of the parameters and watch how the changes affect the dashboard.
+
+. You can increase or decrease the number of image generators.
++
+[source,terminal]
+----
+$ oc scale deployments/image-generator --replicas=2 -n xraylab-1
+----
++
+Check the dashboard.
++
+[source,terminal]
+----
+$ oc scale deployments/image-generator --replicas=0 -n xraylab-1
+----
++
+Watch the dashboard stop processing images.
+
+. You can also simulate a change of the AI model version, because it is only an environment variable in the Serverless Service configuration.
++
+[source,terminal]
+----
+$ oc patch ksvc risk-assessment -n xraylab-1 --type=merge -p '{"spec":{"template":{"metadata":{"annotations":{"redeployTimestamp":"'"$(date +%F_%T)"'"}}}}}'
+----
++
+This updates the `redeployTimestamp` annotation on the service template, which triggers a redeployment of the service.
diff --git a/content/patterns/medical-diagnosis/troubleshooting.adoc b/content/patterns/medical-diagnosis/troubleshooting.adoc
index f36b9b126..a61752a98 100644
--- a/content/patterns/medical-diagnosis/troubleshooting.adoc
+++ b/content/patterns/medical-diagnosis/troubleshooting.adoc
@@ -92,7 +92,7 @@ Ensure that the Prometheus data source exists and that the status is available.
'''
Problem:: The dashboard is showing red in the corners of the dashboard panes.
+
-image::medical-edge/medDiag-noDB.png[link="/images/medical-edge/medDiag-noDB.png"]
+image::../../images/medical-edge/medDiag-noDB.png[link="/images/medical-edge/medDiag-noDB.png"]

Solution:: This is most likely due to the *xraylab* database not being available or misconfigured. Please check the database and ensure that it is functioning properly.
@@ -100,7 +100,7 @@ Solution:: This is most likely due to the *xraylab* database not being available
+
[source,terminal]
----
-$ oc exec -it xraylabdb-1- bash
+$ oc exec -it xraylabdb-1- bash -n xraylab-1

$ mysql -u root

USE xraylabdb;
@@ -133,7 +133,7 @@ MariaDB [xraylabdb]> show tables;
3 rows in set (0.000 sec)
----
+
-. Verify the password set in the `values-secret.yaml` is working
+.
If you set a password in `~/values-secret-medical-diagnosis.yaml`, verify that the password works by running the following commands:
+
[source,terminal]
----
diff --git a/static/images/medical-edge/medical-diags-overview.png b/static/images/medical-edge/medical-diags-overview.png
new file mode 100644
index 000000000..3d313fa5c
Binary files /dev/null and b/static/images/medical-edge/medical-diags-overview.png differ
diff --git a/static/images/medical-edge/storage-rgw-route.png b/static/images/medical-edge/storage-rgw-route.png
index a9f9e413d..2c5d12d19 100644
Binary files a/static/images/medical-edge/storage-rgw-route.png and b/static/images/medical-edge/storage-rgw-route.png differ
diff --git a/static/images/medical-edge/storage-route.png b/static/images/medical-edge/storage-route.png
index 368671665..856b038b7 100644
Binary files a/static/images/medical-edge/storage-route.png and b/static/images/medical-edge/storage-route.png differ