diff --git a/content/patterns/openshift-ai/_index.adoc b/content/patterns/openshift-ai/_index.adoc index 09446d0cd..abf7f7e12 100644 --- a/content/patterns/openshift-ai/_index.adoc +++ b/content/patterns/openshift-ai/_index.adoc @@ -28,5 +28,5 @@ include::modules/rhoai-architecture.adoc[leveloffset=+1] [id="next-steps_rhoai-index"] == Next steps -* link:getting-started[Deploy the Pattern] using Helm. +* link:getting-started[Deploy the Pattern]. diff --git a/content/patterns/openshift-ai/ai-demo-app.adoc b/content/patterns/openshift-ai/ai-demo-app.adoc new file mode 100644 index 000000000..4c4aca257 --- /dev/null +++ b/content/patterns/openshift-ai/ai-demo-app.adoc @@ -0,0 +1,12 @@ +--- +title: AI Demo +weight: 20 +aliases: /rhoai/ai-demo/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +include::modules/rhoai-demo-app.adoc[leveloffset=+1] diff --git a/content/patterns/openshift-ai/cluster-sizing.adoc b/content/patterns/openshift-ai/cluster-sizing.adoc new file mode 100644 index 000000000..e4e43b6ae --- /dev/null +++ b/content/patterns/openshift-ai/cluster-sizing.adoc @@ -0,0 +1,14 @@ +--- +title: Cluster sizing +weight: 50 +aliases: /openshift-ai/openshift-ai-cluster-sizing/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY + +include::modules/comm-attributes.adoc[] +include::modules/openshift-ai/metadata-openshift-ai.adoc[] + +include::modules/cluster-sizing-template.adoc[] diff --git a/content/patterns/rag-llm-gitops/getting-started.md b/content/patterns/rag-llm-gitops/getting-started.md index 137c79680..799cd5d31 100644 --- a/content/patterns/rag-llm-gitops/getting-started.md +++ b/content/patterns/rag-llm-gitops/getting-started.md @@ -12,13 +12,13 @@ aliases: /rag-llm-gitops/getting-started/ ## Procedure -1.
Create the installation configuration file using the steps described in [Creating the installation configuration file](https://docs.openshift.com/container-platform/4.17/installing/installing_aws/ipi/installing-aws-customizations.html#installation-initializing_installing-aws-customizations). +1. Create the installation configuration file using the steps described in [Creating the installation configuration file](https://docs.openshift.com/container-platform/latest/installing/installing_aws/ipi/installing-aws-customizations.html#installation-initializing_installing-aws-customizations). > **Note:** > Supported regions are `us-east-1` `us-east-2` `us-west-1` `us-west-2` `ca-central-1` `sa-east-1` `eu-west-1` `eu-west-2` `eu-west-3` `eu-central-1` `eu-north-1` `ap-northeast-1` `ap-northeast-2` `ap-northeast-3` `ap-southeast-1` `ap-southeast-2` and `ap-south-1`. For more information about installing on AWS see, [Installation methods](https://docs.openshift.com/container-platform/latest/installing/installing_aws/preparing-to-install-on-aws.html). > -2. Customize the generated `install-config.yaml` creating one control plane node with instance type `m5a.2xlarge` and 3 worker nodes with instance type `p3.2xlarge`. A sample YAML file is shown here: +2. Customize the generated `install-config.yaml` by creating one control plane node with instance type `m5.2xlarge` and 3 worker nodes with instance type `m5.2xlarge`.
A sample YAML file is shown here: ```yaml additionalTrustBundlePolicy: Proxyonly apiVersion: v1 @@ -29,7 +29,7 @@ aliases: /rag-llm-gitops/getting-started/ name: worker platform: aws: - type: p3.2xlarge + type: m5.2xlarge replicas: 3 controlPlane: architecture: amd64 @@ -37,7 +37,7 @@ aliases: /rag-llm-gitops/getting-started/ name: master platform: aws: - type: m5a.2xlarge + type: m5.2xlarge replicas: 1 metadata: creationTimestamp: null diff --git a/modules/rhoai-demo-app.adoc b/modules/rhoai-demo-app.adoc new file mode 100644 index 000000000..e6476adda --- /dev/null +++ b/modules/rhoai-demo-app.adoc @@ -0,0 +1,185 @@ +:_content-type: PROCEDURE +:imagesdir: ../../../images + +[id="creating-data-science-project"] += AI Demos + +== First AI demo + +In this demo, you will configure a Jupyter notebook server using a specified image within a Data Science project, customizing it to meet your specific requirements. + +.Procedure + +. Click *Red Hat OpenShift AI* from the nines menu on the OpenShift Console. + +. Click *Log in with OpenShift*. + +. Click on the *Data Science Projects* tab. + +. Click *Create project*. + +.. Enter a name for the project, for example `my-first-ai-project`, in the *Name* field and click *Create*. + +. Click on *Create a workbench*. Now you are ready to move to the next step to define the workbench. + +.. Enter a name for the workbench. + +.. From the *Image selection* dropdown, select *Standard Data Science* as the *Notebook image*. + +.. Set the *Container size* to *Small* under *Deployment size*. + +.. Scroll down to the *Cluster storage* section and enter a name for the new persistent storage that will be created. + +.. Set the *persistent storage size* to 10 Gi. + +.. Click the *Create workbench* button at the bottom left of the page. ++ +After the workbench is created successfully, its status changes to *Running*. + +.. Click the *Open↗* button, located beside the status. + +..
Authorize access to the OpenShift cluster by clicking *Allow selected permissions*. After granting permissions with OpenShift, you are directed to the Jupyter Notebook page. + +== Accessing the current data science project within Jupyter Notebook + +The Jupyter Notebook provides functionality to fetch or clone existing GitHub repositories, similar to any other standard IDE. Therefore, in this section, you will clone an existing simple AI/ML project into the notebook by using the following instructions. + +. From the top, click on the *Git clone* icon. ++ +image::rhoai/git-clone-button.png[Git clone button] + +. In the popup window, enter the URL of the GitHub repository in the *Git Repository URL* field: ++ +[source,text] +---- +https://github.com/redhat-developer-demos/openshift-ai.git +---- + +. Click the *Clone* button. + +. After fetching the GitHub repository, the project appears in the directory section on the left side of the notebook. + +. Expand the */openshift-ai/1_First-app/* directory. + +. Open the *openshift-ai-test.ipynb* file. ++ +You are presented with the view of a Jupyter Notebook. + +== Running code in a Jupyter notebook + +In the previous section, you imported and opened the notebook. To run the code within the notebook, click the *Run* icon located at the top of the interface. + +After clicking *Run*, the notebook automatically moves to the next cell. This is part of the design of Jupyter Notebooks, where scripts or code snippets are divided into multiple cells. Each cell can be run independently, allowing you to test specific sections of code in isolation. This structure greatly aids in both developing complex code incrementally and debugging it more effectively, as you can pinpoint errors and test solutions cell by cell. + +After executing a cell, you can immediately see the output just below it. This immediate feedback loop is invaluable for iterative testing and refining of code.
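To make the cell-by-cell behavior concrete, here is a minimal, hypothetical pair of cells (an editorial illustration; the variable names are not taken from the cloned `openshift-ai` notebook):

```python
# Cell 1: create some state. Variables defined here stay in the kernel's
# memory after the cell finishes, so later cells can reuse them.
import statistics

measurements = [2.5, 3.1, 2.9, 3.4]  # hypothetical sample data

# Cell 2: reuse the state from Cell 1. The printed result appears
# immediately below the cell, giving the fast feedback loop described above.
mean_value = statistics.mean(measurements)
print(f"mean = {mean_value:.2f}")
```

Because each cell runs independently, you can edit and re-run the second cell without re-running the first, which is what makes incremental development and debugging cheap in a notebook.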
+ +[id="interactive-classification-project"] +== Performing an interactive classification with Jupyter notebook + +In this section, you will perform an interactive classification using a Jupyter notebook. + +.Procedure + +. Click *Red Hat OpenShift AI* from the nines menu on the OpenShift Console. + +. Click *Log in with OpenShift*. + +. Click on the *Data Science Projects* tab. + +. Click *Create project*. + +.. Enter a name for the project, for example `my-classification-project`, in the *Name* field and click *Create*. + +. Click on *Create a workbench*. Now you are ready to move to the next step to define the workbench. + +.. Give the workbench a name, for example *interactive-classification*. + +.. From the *Image selection* dropdown, select *TensorFlow* as the *Notebook image*. + +.. Set the *Container size* to *Medium* under *Deployment size*. + +.. Scroll down to the *Cluster storage* section and enter a name for the new persistent storage that will be created. + +.. Set the *persistent storage size* to 20 Gi. + +.. Click the *Create workbench* button at the bottom of the page. ++ +After the workbench is created successfully, its status changes to *Running*. + +.. Click the *Open↗* button, located beside the status. + +.. Authorize access to the OpenShift cluster by clicking *Allow selected permissions*. After granting permissions with OpenShift, you are directed to the Jupyter Notebook page. + +== Obtaining and preparing the dataset + +Simplify data preparation in AI projects by automating the fetching of datasets using Kaggle's API, following these steps: + +. Navigate to the Kaggle website and log in with your account credentials. + +. Click on your profile icon at the top right corner of the page, then select *Account* from the dropdown menu. + +. Scroll down to the section labeled *API*. Here, you'll find a *Create New Token* button. Click this button. + +. A file named `kaggle.json` will be downloaded to your local machine.
This file contains your Kaggle API credentials. + +. Upload the `kaggle.json` file to your JupyterLab IDE environment. You can drag and drop the file into the file browser of the JupyterLab IDE. This step might look different depending on your operating system and desktop user interface. + +. Clone the Interactive Image Classification Project from the GitHub repository using the following instructions: + +.. At the top of the JupyterLab interface, click on the *Git Clone* icon. + +.. In the popup window, enter the URL of the GitHub repository in the *Git Repository URL* field: ++ +[source,text] +---- +https://github.com/redhat-developer-demos/openshift-ai.git +---- + +.. Click the *Clone* button. + +.. After cloning, navigate to the *openshift-ai/2_interactive_classification* directory within the cloned repository. + +. Open the Python Notebook in the JupyterLab interface. ++ +After you upload `kaggle.json` and clone the `openshift-ai` repository, the JupyterLab file browser on the left shows both the `openshift-ai` directory and the `kaggle.json` file. + +. Open `Interactive_Image_Classification_Notebook.ipynb` in the `openshift-ai` directory and run the notebook. The notebook contains all necessary instructions and is self-documented. + +. Run the cells in the Python Notebook as follows: + +.. Start by executing each cell in order by pressing the play button or by using the keyboard shortcut "Shift + Enter". + +.. Once you run the cell in Step 4, you should see an output as shown in the following screenshot. ++ +image::rhoai/predict-step4.png[Interactive Real-Time Data Streaming and Visualization] + +.. Running the cell in Step 5 produces an output of two images, one of a cat and one of a dog, with their respective predictions labeled as "Cat" and "Dog". + +.. Once the code in the cell is executed in Step 6, a predict button appears as shown in the screenshot below.
The interactive session displays images with their predicted labels in real time as the user clicks the *Predict* button. This dynamic interaction helps in understanding how well the model performs across a random set of images and provides insights into potential improvements for model training. ++ +image::rhoai/predict.png[Interactive Real-Time Image Prediction with Widgets] + +== Addressing misclassification in your AI model + +Misclassification in machine learning models can significantly hinder your model's accuracy and reliability. To combat this, it's crucial to verify dataset balance, align preprocessing methods, and tweak model parameters. These steps are essential for ensuring that your model not only learns well but also generalizes well to new, unseen data. + +. Adjust the number of epochs to optimize training speed. ++ +Changing the number of *epochs* can help you find the sweet spot where your model learns enough to perform well without overfitting. This is crucial for building a robust model that performs consistently. + +. Try different values for steps per epoch. ++ +Modifying *steps_per_epoch* affects how many batches of samples are used in one epoch. This can influence the granularity of the model updates and can help in dealing with imbalanced datasets or overfitting.
+ +For example, make these modifications in your notebook or another Python environment as part of *Step 3: Build and Train the Model*: + +[source,python] +---- +# Adjust the number of epochs and steps per epoch +model.fit(train_generator, steps_per_epoch=100, epochs=10) +---- + +[role="_additional-resources"] +.Additional resources + +* link:https://developers.redhat.com/learn/openshift-ai[Red Hat OpenShift AI learning] \ No newline at end of file diff --git a/modules/rhoai-deploying.adoc b/modules/rhoai-deploying.adoc index 8ec989cf0..20caabdbb 100644 --- a/modules/rhoai-deploying.adoc +++ b/modules/rhoai-deploying.adoc @@ -4,134 +4,284 @@ [id="deploying-rhoai-pattern"] = Deploying the Red Hat OpenShift AI Pattern .Prerequisites * An OpenShift cluster ** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console]. - ** Select *Services \-> Containers \-> Create cluster*. - ** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`. See link:../../multicloud-gitops/mcg-cluster-sizing[sizing your cluster]. -* Optional: A second OpenShift cluster for multicloud demonstration. //Replaced git and podman prereqs with the tooling dependencies page -* https://validatedpatterns.io/learn/quickstart/[Install the tooling dependencies]. - -The use of this pattern depends on having at least one running Red Hat OpenShift cluster. However, consider creating a cluster for deploying the GitOps management hub assets and a separate cluster for the managed cluster. + ** Select *OpenShift \-> Red Hat OpenShift Container Platform \-> Create cluster*. + ** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`.
Verify that a dynamic `StorageClass` exists before creating one by running the following command: ++ +[source,terminal] +---- +$ oc get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,DEFAULT:.metadata.annotations."storageclass\.kubernetes\.io/is-default-class" +---- ++ +.Example output ++ +[source,terminal] +---- +NAME PROVISIONER DEFAULT +gp2-csi ebs.csi.aws.com +gp3-csi ebs.csi.aws.com true +---- ++ +For more information about creating a dynamic `StorageClass`, see the https://docs.openshift.com/container-platform/latest/storage/dynamic-provisioning.html[Dynamic provisioning] documentation. -If you do not have a running Red Hat OpenShift cluster, you can start one on a -public or private cloud by using https://console.redhat.com/openshift/create[Red Hat Hybrid Cloud Console]. .Procedure -. Fork the https://github.com/validatedpatterns-sandbox/openshift-ai[openshift-ai] repository on GitHub. -. Clone the forked copy of this repository. -+ -[source,terminal] ----- -git clone git@github.com:your-username/openshift-ai.git ----- - -//. Create a local copy of the secret values file that can safely include credentials. Run the following commands: -//+ -//[source,terminal] -//---- -//cp values-secret.yaml.template ~/values-secret-travelops.yaml -//---- -//+ -//[source,yaml] -//---- -//version: "2.0" -//# Ideally you NEVER COMMIT THESE VALUES TO GIT (although if all passwords are -//# automatically generated inside the vault this should not really matter) -// -//secrets: -// - name: mysql-credentials -// vaultPrefixes: -// - global -// fields: -// - name: rootpasswd -// onMissingValue: generate -// vaultPolicy: validatedPatternDefaultPolicy -// -//# Uncomment the following if you want to enable HTPasswd oAuth -//# - name: htpasswd -//# vaultPrefixes: -//# - global -//# fields: -//# - name: htpasswd -//# path: '/path/to/users.htpasswd' -//---- -//+ -//[WARNING] -//==== -//Do not commit this file. 
You do not want to push personal credentials to GitHub. If you do not want to customize the secrets, these steps are not needed. The framework generates a random password for the config-demo application. -//==== -// -. If you want a peak under the covers to see what the pattern contains, you can do so with the following command: -+ -[source,terminal] ----- -cat values-hub.yaml ----- - -But don't worry if it looks intimidating. - -. Deploy the pattern by running `./pattern.sh make install` or by using the link:/infrastructure/using-validated-pattern-operator/[Validated Patterns Operator]. +. From the https://github.com/validatedpatterns-sandbox/openshift-ai[openshift-ai] repository on GitHub click the Fork button. -[id="deploying-cluster-using-patternsh-file"] -== Deploying the cluster by using the pattern.sh file +. Clone the forked copy of this repository by running the following command. ++ +[source,terminal] +---- +$ git clone git@github.com:/openshift-ai.git +---- -To deploy the cluster by using the `pattern.sh` file, complete the following steps: +. Navigate to your repository: Ensure you are in the root directory of your Git repository by using: ++ +[source,terminal] +---- +$ cd /path/to/your/repository +---- + +. Run the following command to set the upstream repository: ++ +[source,terminal] +---- +$ git remote add -f upstream git@github.com:validatedpatterns-sandbox/openshift-ai.git +---- -. Login to your cluster by running the following command: +. 
Verify the setup of your remote repositories by running the following command: + [source,terminal] ---- +$ git remote -v ---- + +.Example output + [source,terminal] ---- +origin git@github.com:<your-username>/openshift-ai.git (fetch) +origin git@github.com:<your-username>/openshift-ai.git (push) +upstream git@github.com:validatedpatterns-sandbox/openshift-ai.git (fetch) +upstream git@github.com:validatedpatterns-sandbox/openshift-ai.git (push) ---- . Create a local copy of the secret values file that can safely include credentials. Run the following command: + [source,terminal] ---- +$ cp values-secret.yaml.template ~/values-secret-openshift-ai.yaml ---- + +[NOTE] +==== +Putting the `values-secret.yaml` in your home directory ensures that it does not get pushed to your git repository. It is based on the `values-secret.yaml.template` file provided by the pattern in the top-level directory. When you create your own patterns, you add your secrets to this file and save it. +==== . Create a new feature branch, for example `my-branch`, from the `rhoai` branch for your content: + [source,terminal] ---- +$ git checkout -b my-branch rhoai ---- .
Push your local branch to origin, to gain the flexibility needed to customize the OpenShift AI pattern, by running the following command: + [source,terminal] ---- +$ git push origin my-branch ---- + You can proceed to install the OpenShift AI pattern by using the web console or from the command line by using the `./pattern.sh` script. + To install the OpenShift AI pattern by using the web console, you must first install the Validated Patterns Operator. The Validated Patterns Operator installs and manages Validated Patterns. + //Include Procedure module here [id="installing-validated-patterns-operator_{context}"] == Installing the {validated-patterns-op} using the web console + .Prerequisites * Access to an {ocp} cluster by using an account with `cluster-admin` permissions. .Procedure + . Navigate in the {hybrid-console-first} to the *Operators* → *OperatorHub* page. + . Scroll or type a keyword into the *Filter by keyword* box to find the Operator you want. For example, type `validated patterns` to find the {validated-patterns-op}. + . Select the Operator to display additional information. + [NOTE] ==== Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. ==== . Read the information about the Operator and click *Install*. . On the *Install Operator* page: .. Select an *Update channel* (if more than one is available). .. Select a *Version* (if more than one is available). ..
Select an *Installation mode*: ++ +The only supported mode for this Operator is *All namespaces on the cluster (default)*. This installs the Operator in the default `openshift-operators` namespace to watch and be made available to all namespaces in the cluster. + +.. Select an *Automatic* or *Manual* approval strategy. + +. Click *Install* to make the Operator available to the default `openshift-operators` namespace on this {ocp} cluster. + +.Verification +To confirm that the installation is successful: + +. Navigate to the *Operators* → *Installed Operators* page. + +. Check that the Operator is installed in the selected namespace and its status is `Succeeded`. + +//Include Procedure module here +[id="create-pattern-instance_{context}"] +== Creating the OpenShift AI instance + +.Prerequisites +The {validated-patterns-op} is successfully installed in the relevant namespace. + +.Procedure + +. Navigate to the *Operators* → *Installed Operators* page. + +. Click the installed *{validated-patterns-op}*. + +. Under the *Details* tab, in the *Provided APIs* section, in the +*Pattern* box, click *Create instance* to display the *Create Pattern* page. + +. On the *Create Pattern* page, select *Form view* and enter information in the following fields: + +** *Name* - A name for the pattern deployment that is used in the projects that you created. +** *Labels* - Apply any other labels you might need for deploying this pattern. +** *Cluster Group Name* - Select a cluster group name to identify the type of cluster where this pattern is being deployed. For example, if you are deploying the {ie-pattern}, the cluster group name is `datacenter`. If you are deploying the {mcg-pattern}, the cluster group name is `hub`. ++ +To know the cluster group name for the patterns that you want to deploy, check the relevant pattern-specific requirements. +. Expand the *Git Config* section to reveal the options and enter the required information. +.
Leave *In Cluster Git Server* unchanged. +.. Change the *Target Repo* URL to your forked repository URL. For example, change `https://github.com/validatedpatterns-sandbox/openshift-ai` to `https://github.com/<your-username>/openshift-ai`. +.. Optional: You might need to change the *Target Revision* field. The default value is `HEAD`. However, you can also provide a value for a branch, tag, or commit that you want to deploy. For example, `v2.1`, `main`, or a branch that you created, `my-branch`. +. Click *Create*. + +[NOTE] +==== +A pop-up error with the message "Oh no! Something went wrong." might appear during the process. This error can be safely disregarded as it does not impact the installation of the OpenShift AI pattern. Use the Hub ArgoCD UI, accessible through the nines menu, to check the status of ArgoCD instances, which display states such as progressing, healthy, and so on, for each managed application. The Cluster ArgoCD provides detailed status on each application, as defined in the clustergroup values file. +==== + +The *{rh-gitops} Operator* displays in the list of *Installed Operators*. The *{rh-gitops} Operator* installs the remaining assets and artifacts for this pattern. To view the installation of these assets and artifacts, such as *{rh-rhacm-first}*, ensure that you switch to *Project: All Projects*. + +Wait some time for everything to deploy. You can track the progress through the `Hub ArgoCD` UI from the nines menu. + +. Navigate to the root directory of the cloned repository by running the following command: ++ +[source,terminal] +---- +$ cd /path/to/your/repository +---- + +.
Log in to your cluster by following this procedure: + +.. Obtain an API token by visiting `https://oauth-openshift.apps.<cluster-domain>/oauth/token/request` + +.. Log in with this retrieved token by running the following command: ++ +[source,terminal] +---- +$ oc login --token=<token> --server=https://api.<cluster-domain>:6443 +---- + +. Alternatively, log in by running the following command: ++ +[source,terminal] +---- +$ export KUBECONFIG=~/<path-to-kubeconfig> +---- + +. Run the following command to load the secrets into the vault: ++ +[source,terminal] +---- +$ ./pattern.sh make load-secrets +---- ++ +[NOTE] +==== +You must have created a local copy of the secret values file by running the following command: + +[source,terminal] +---- +$ cp values-secret.yaml.template ~/values-secret-openshift-ai.yaml +---- +==== + +Alternatively, you can deploy the OpenShift AI pattern by using the `pattern.sh` command-line script. + +[id="deploying-cluster-using-patternsh-file"] +== Deploying the cluster by using the pattern.sh script + +To deploy the cluster by using the `pattern.sh` script, complete the following steps: + +. Navigate to the root directory of the cloned repository by running the following command: ++ +[source,terminal] +---- +$ cd /path/to/your/repository +---- + +. Log in to your cluster by following this procedure: + +.. Obtain an API token by visiting `https://oauth-openshift.apps.<cluster-domain>/oauth/token/request` + +..
Log in with this retrieved token by running the following command: ++ +[source,terminal] +---- +$ oc login --token=<token> --server=https://api.<cluster-domain>:6443 +---- + +. Alternatively, log in by running the following command: ++ +[source,terminal] +---- +$ export KUBECONFIG=~/<path-to-kubeconfig> +---- + +. Deploy the pattern to your cluster by running the following command: ++ +[source,terminal] +---- +$ ./pattern.sh make install +---- + +. Verify that the Operators have been installed. + .. To verify, in the OpenShift Container Platform web console, navigate to the *Operators → Installed Operators* page. + .. Check that the *{rh-gitops} Operator* is installed in the `openshift-operators` namespace and its status is `Succeeded`. +. Verify that all applications are synchronized. Under *Networking \-> Routes*, select the *Location URL* associated with the *hub-gitops-server* route. All applications report a status of `Synced`. ++ +image::rhoai/rhods-sync-success.png[ArgoCD Applications,link="/images/rhoai/rhods-sync-success.png"] + +As part of installing the pattern by using the `pattern.sh` script, HashiCorp Vault is installed. Running `./pattern.sh make install` also calls the `load-secrets` makefile target. This `load-secrets` target looks for a YAML file describing the secrets to be loaded into vault and, if it cannot find one, it uses the `values-secret.yaml.template` file in the git repository to try to generate random secrets. + +For more information, see the section on https://validatedpatterns.io/secrets/vault/[Vault]. + +[id="verify-rhoai-dashboards"] +== Verify installation by checking the OpenShift AI Dashboard + +. Access the OpenShift AI dashboard from the nines menu on the OpenShift Console and select the link for **Red Hat OpenShift AI**. ++ +image:rhoai/rhods-application_menu.png[Application ShortCut,link="/images/rhoai/rhods-application_menu.png"] + +. Log in to the dashboard using your OpenShift credentials. You will find an environment that is ready for further configuration.
This pattern provides the fundamental platform pieces to support MLOps workflows. The installation of OpenShift Pipelines enables the immediate use of pipelines if that is the desired approach for deployment. ++ +image:rhoai/rhods-ai_dashboard.png[OpenShift AI Dashboard,link="/images/rhoai/rhods-ai_dashboard.png"] \ No newline at end of file diff --git a/static/images/rhoai/add-kaggle.png b/static/images/rhoai/add-kaggle.png new file mode 100644 index 000000000..e3a05517c Binary files /dev/null and b/static/images/rhoai/add-kaggle.png differ diff --git a/static/images/rhoai/git-clone-button.png b/static/images/rhoai/git-clone-button.png new file mode 100644 index 000000000..b7a76d106 Binary files /dev/null and b/static/images/rhoai/git-clone-button.png differ diff --git a/static/images/rhoai/my-first-workbench.png b/static/images/rhoai/my-first-workbench.png new file mode 100644 index 000000000..88ea2bb75 Binary files /dev/null and b/static/images/rhoai/my-first-workbench.png differ diff --git a/static/images/rhoai/predict-step4.png b/static/images/rhoai/predict-step4.png new file mode 100644 index 000000000..17b8beb8e Binary files /dev/null and b/static/images/rhoai/predict-step4.png differ diff --git a/static/images/rhoai/predict.png b/static/images/rhoai/predict.png new file mode 100644 index 000000000..ad6de9073 Binary files /dev/null and b/static/images/rhoai/predict.png differ diff --git a/static/images/rhoai/prediction-dog-cat.png b/static/images/rhoai/prediction-dog-cat.png new file mode 100644 index 000000000..be0f8464c Binary files /dev/null and b/static/images/rhoai/prediction-dog-cat.png differ diff --git a/static/images/rhoai/rhods-application_menu.png b/static/images/rhoai/rhods-application_menu.png index b612c09e2..39976c55d 100644 Binary files a/static/images/rhoai/rhods-application_menu.png and b/static/images/rhoai/rhods-application_menu.png differ diff --git a/static/images/rhoai/rhods-sync-success.png b/static/images/rhoai/rhods-sync-success.png
index d88ea171a..1a399d4a0 100644 Binary files a/static/images/rhoai/rhods-sync-success.png and b/static/images/rhoai/rhods-sync-success.png differ