4 changes: 2 additions & 2 deletions content/blog/2025-01-09-AGOF_v2.adoc
@@ -290,7 +290,7 @@ An AGOF Pattern MUST define the following repositories:
1. AGOF repository (default: https://github.com/validatedpatterns/agof.git). This repository contains AGOF itself,
and is scaffolding for the rest of the process.

-1. An Infrastructure as Code repository. This is the main "pattern" content. It contains an AAP configuration,
+2. An Infrastructure as Code repository. This is the main "pattern" content. It contains an AAP configuration,
expressed in terms suitable for processing by the infra.aap_configuration collection. This repository will contain
references to other resources, which are described immediately following.

@@ -301,7 +301,7 @@ accomplishing a particular result. Multiple collection repositories may be defined. Even if the functionality is
provided by collections available via Ansible Galaxy or Automation Hub, it is still necessary to provide a playbook
to serve as the basis for a Job Template in AAP to do the configuration work.

-1. One or more inventory repositories. Ansible Good Practices state that inventories should be separated from
+2. One or more inventory repositories. Ansible Good Practices state that inventories should be separated from
the content. This allows for using separate inventories with the same collection codebase - a feature that users
frequently requested from Ansible Edge GitOps because they wanted to change it from configuring virtual machines in
AWS to use actual hardware nodes (for example). It would also be possible to have effectively an empty inventory and
57 changes: 57 additions & 0 deletions content/patterns/federated-edge-observability/_index.md
@@ -0,0 +1,57 @@
---
title: Federated Edge Observability
date: 2025-02-01
tier: sandbox
summary: This pattern uses OpenShift Virtualization to simulate an edge environment for VMs which then report metrics via OpenTelemetry.
rh_products:
- Red Hat OpenShift Container Platform
- Red Hat Ansible Automation Platform
- Red Hat OpenShift Virtualization
- Red Hat Enterprise Linux
- Red Hat OpenShift Data Foundation
industries:
aliases: /federated-edge-observability
links:
install: getting-started
help: https://groups.google.com/g/validatedpatterns
bugs: https://github.com/validatedpatterns-sandbox/federated-edge-observability/issues
ci: federatedobservability
---

# Federated Edge Observability

## Background

Organizations are interested in accelerating their deployment speeds and improving delivery quality in their Edge environments, where many devices may not fully or even partially embrace the GitOps philosophy. Further, there are VMs and other devices that can and should be managed with Ansible. This pattern explores some of the possibilities of using an OpenShift-based Ansible Automation Platform deployment to manage Edge devices, based on work done with a partner in the chemical industry.

This pattern uses OpenShift Virtualization (the productization of Kubevirt) to simulate the Edge environment for VMs.

### Solution elements

- How to use a GitOps approach to manage virtual machines, either in public clouds (limited to AWS for technical reasons) or on-prem OpenShift installations
- How to integrate AAP into OpenShift
- How to manage Edge devices using AAP hosted in OpenShift

### Red Hat Technologies

- Red Hat OpenShift Container Platform (Kubernetes)
- Red Hat Ansible Automation Platform (formerly known as "Ansible Tower")
- Red Hat OpenShift GitOps (ArgoCD)
- OpenShift Virtualization (Kubevirt)
- Red Hat Enterprise Linux 9

### Other Technologies this Pattern Uses

- HashiCorp Vault
- External Secrets Operator
- OpenTelemetry
- Grafana
- Mimir

## Architecture

Similar to other patterns, this pattern starts with a central management hub, which hosts the AAP and Vault components, and the observability collection and visualization components.

## What Next

- [Getting Started: Deploying and Validating the Pattern](getting-started)
@@ -0,0 +1,63 @@
---
title: Ansible Automation Platform
weight: 40
aliases: /federated-edge-observability/ansible-automation-platform/
---

# Ansible Automation Platform

## How to Log In

The default login user is `admin` and the password is generated randomly at install time; you will need the password to log in to the AAP interface. You do not have to log in to the interface, however - the pattern configures the AAP instance itself, retrieving the password using the same technique as the `ansible_get_credentials.sh` script described below. If you want to inspect the AAP instance, or change any aspect of its configuration, there are two ways to log in and look at it. Both mechanisms are equivalent; you get the same password to the same instance using either technique.

## Via the OpenShift Console

In the OpenShift console, navigate to Workloads > Secrets and select the "ansible-automation-platform" project if you want to limit the number of Secrets you can see.

[![secrets-navigation](/images/ansible-edge-gitops/ocp-console-secrets-aap-admin-password.png)](/images/ansible-edge-gitops/ocp-console-secrets-aap-admin-password.png)

The Secret you are looking for is in the `ansible-automation-platform` project and is named `controller-admin-password`. If you click on it, you can see the Data.password field. It is shown revealed below to demonstrate that it matches the password retrieved by the script method below:

[![secrets-detail](/images/ansible-edge-gitops/ocp-console-aap-admin-password-detail.png)](/images/ansible-edge-gitops/ocp-console-aap-admin-password-detail.png)
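
The same password can also be read directly from the command line. This is a minimal sketch, assuming a logged-in `oc` session against the hub cluster (the command is skipped if `oc` is not installed):

```shell
# Read the AAP admin password from the Secret and base64-decode it.
if command -v oc >/dev/null 2>&1; then
  oc get secret controller-admin-password -n ansible-automation-platform \
    -o jsonpath='{.data.password}' | base64 -d
  echo
fi
```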

## Via [ansible_get_credentials.sh](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_get_credentials.sh)

With your KUBECONFIG set, you can run `./scripts/ansible_get_credentials.sh` from your top-level pattern directory. This will use your OpenShift cluster admin credentials to retrieve the URL for your Ansible Automation Platform instance, as well as the password for its `admin` user, which is auto-generated by the AAP operator by default. The output of the command looks like this (your password will be different):

```text
./scripts/ansible_get_credentials.sh
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match
'all'

PLAY [Install manifest on AAP controller] ******************************************************************************

TASK [Retrieve API hostname for AAP] ***********************************************************************************
ok: [localhost]

TASK [Set ansible_host] ************************************************************************************************
ok: [localhost]

TASK [Retrieve admin password for AAP] *********************************************************************************
ok: [localhost]

TASK [Set admin_password fact] *****************************************************************************************
ok: [localhost]

TASK [Report AAP Endpoint] *********************************************************************************************
ok: [localhost] => {
"msg": "AAP Endpoint: https://controller-ansible-automation-platform.apps.mhjacks-aeg.blueprints.rhecoeng.com"
}

TASK [Report AAP User] *************************************************************************************************
ok: [localhost] => {
"msg": "AAP Admin User: admin"
}

TASK [Report AAP Admin Password] ***************************************************************************************
ok: [localhost] => {
"msg": "AAP Admin Password: CKollUjlir0EfrQuRrKuOJRLSQhi4a9E"
}

PLAY RECAP *************************************************************************************************************
localhost : ok=7 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
242 changes: 242 additions & 0 deletions content/patterns/federated-edge-observability/getting-started.md
@@ -0,0 +1,242 @@
---
title: Getting Started
weight: 10
aliases: /federated-edge-observability/getting-started/
---

# Deploying the Federated Edge Observability Pattern

# General Prerequisites

1. An OpenShift cluster (to create one, go to [the OpenShift console](https://console.redhat.com/openshift/create)). Currently this pattern only supports AWS. It could also run on a bare-metal OpenShift cluster, since OpenShift Virtualization supports that, but some customization would be needed because the default configuration targets AWS. We hope that GCP and Azure will support provisioning metal workers in due course so that this can become a more clearly multicloud pattern.
1. A GitHub account (and, optionally, a token for it with repositories permissions, to read from and write to your forks)
1. The `helm` binary; see the [Helm installation documentation](https://helm.sh/docs/intro/install/)
1. Ansible, which is used in the bootstrap and provisioning phases of the pattern install (and to configure Ansible Automation Platform).
1. Please note that when run on AWS, this pattern will provision an additional worker node, which will be a metal instance (c5n.metal) to run the Edge Virtual Machines. This worker is provisioned through the OpenShift MachineAPI and will be automatically cleaned up when the cluster is destroyed.

The use of this pattern depends on having a running Red Hat OpenShift cluster. It is desirable to have one cluster for deploying the GitOps management hub assets and a separate cluster (or clusters) to act as the managed cluster(s).

If you do not have a running Red Hat OpenShift cluster you can start one on a
public or private cloud by using [Red Hat's cloud service](https://console.redhat.com/openshift/create).

# Credentials Required in Pattern

In addition to the OpenShift cluster, you will need to prepare a number of secrets, or credentials, which will be used
in the pattern in various ways. To do this, copy the [values-secret.yaml template](https://github.com/validatedpatterns-sandbox/federated-edge-observability/blob/main/values-secret.yaml.template) to your home directory as `values-secret.yaml` and replace the explanatory text as follows:

* AWS Credentials (an access key and a secret key). These are used to provision the metal worker in AWS (which hosts
the VMs). If the Portworx variant of the pattern is used, these credentials will also be used to modify IAM rules to allow
Portworx to run correctly.

```yaml
---
# NEVER COMMIT THESE VALUES TO GIT
version: "2.0"
secrets:
```
* A username and SSH Keypair (private key and public key). These will be used to provide access to the Kiosk VMs in the demo.

```yaml
- name: vm-ssh
  fields:
  - name: username
    value: 'Username of user to attach privatekey and publickey to - cloud-user is a typical value'

  - name: privatekey
    value: 'Private ssh key of the user who will be able to elevate to root to provision kiosks'

  - name: publickey
    value: 'Public ssh key of the user who will be able to elevate to root to provision kiosks'
```
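
If you do not already have a keypair set aside for this, one can be generated with `ssh-keygen`; the directory, file name, and comment below are illustrative, not required by the pattern:

```shell
# Generate an ed25519 keypair with no passphrase for the kiosk VMs.
keydir=$(mktemp -d)   # in practice you might use ~/.ssh instead
ssh-keygen -t ed25519 -N "" -C "cloud-user@edge-vms" -f "$keydir/edge-vm-key" -q
# Paste the contents of edge-vm-key into 'privatekey' and
# edge-vm-key.pub into 'publickey' in values-secret.yaml.
ls "$keydir"
```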

* A Red Hat Subscription Management username and password. These will be used to register Kiosk VM templates to the Red Hat Content Delivery Network and to install content on the VMs, including the OpenTelemetry collector.

```yaml
- name: rhsm
  fields:
  - name: username
    value: 'username of user to register RHEL VMs'
  - name: password
    value: 'password of rhsm user in plaintext'
```

* A userData block to use with cloud-init. This will allow console login as the user you specify (traditionally cloud-user) with the password you specify. The value in cloud-init is used as the default; roles in the edge-gitops-vms chart can also specify other secrets to use by referencing them in the role block.

```yaml
- name: cloud-init
  fields:
  - name: userData
    value: |-
      #cloud-config
      user: 'username of user for console, probably cloud-user'
      password: 'a suitable password to use on the console'
      chpasswd: { expire: False }
```

* A manifest file with an entitlement to run Ansible Automation Platform. This file (which will be a .zip file) will be posted to the Ansible Automation Platform instance to enable its use. Instructions for creating a manifest file can be found [here](https://www.redhat.com/en/blog/how-create-and-use-red-hat-satellite-manifest).

```yaml
- name: aap-manifest
  fields:
  - name: b64content
    path: 'full pathname of file containing Satellite Manifest for entitling Ansible Automation Platform'
    base64: true
```

* An automation hub token generated at <https://console.redhat.com/ansible/automation-hub/token>. This is needed for
the Ansible Configuration-as-Code tools.

```yaml
- name: automation-hub-token
  fields:
  - name: token
    value: 'An automation hub token for retrieving Certified and Validated Ansible content'
```

* An (optional) AGOF vault file, which provides secrets to overlay the IaC config:

```yaml
- name: agof-vault-file
  fields:
  - name: agof-vault-file
    path: 'full pathname of a valid agof_vault file for secrets to overlay the iac config'
    base64: true
```

For this pattern, use the following (you do not need additional secrets for this pattern):

```yaml
- name: agof-vault-file
  fields:
  - name: agof-vault-file
    value: '---'
    base64: true
```

* Certificates for the OpenTelemetry collector infrastructure:

```yaml
- name: otel-cert
  fields:
  - name: tls.key
    path: 'full pathname to a pre-existing tls key'

  - name: tls.crt
    path: 'full pathname to a pre-existing tls certificate'
```

If you do not supply pre-existing certificates, "snakeoil" (that is, self-signed) certs will be generated automatically by the `make snakeoil-certs` target, which is run as part of `make install`:

```yaml
- name: otel-cert
  fields:
  - name: tls.key
    path: ~/federated-edge-observability-otel-collector-edge-observability-stack.key

  - name: tls.crt
    path: ~/federated-edge-observability-otel-collector-edge-observability-stack.crt
```
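
If you prefer to generate a self-signed pair by hand rather than via the makefile, an `openssl` invocation along these lines works; the subject and file names here are illustrative, and the actual `make snakeoil-certs` target may use different options:

```shell
# Create a throwaway self-signed key/cert pair for the OTel collector.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=otel-collector-edge-observability-stack" \
  -keyout "$certdir/tls.key" -out "$certdir/tls.crt" 2>/dev/null
# Point the tls.key / tls.crt 'path' fields in values-secret.yaml at these files.
openssl x509 -in "$certdir/tls.crt" -noout -subject
```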

# How to deploy

1. Log in to your cluster, either with `oc login`:

```sh
oc login
```

or set KUBECONFIG to the path to your `kubeconfig` file. For example:

```sh
export KUBECONFIG=~/my-ocp-env/hub/auth/kubeconfig
```

1. Fork the [federated-edge-observability](https://github.com/validatedpatterns-sandbox/federated-edge-observability) repo on GitHub. It is necessary to fork to preserve customizations you make to the default configuration files.

1. Clone the forked copy of this repository.

```sh
git clone git@github.com:your-username/federated-edge-observability.git
```

1. Create a local copy of the Helm values file that can safely include credentials

WARNING: DO NOT COMMIT THIS FILE

You do not want to push personal credentials to GitHub.

```sh
cp values-secret.yaml.template ~/values-secret.yaml
vi ~/values-secret.yaml
```

1. Customize the deployment for your cluster (Optional - the defaults in values-global.yaml are designed to work in AWS):

```sh
git checkout -b my-branch
vi values-global.yaml
git add values-global.yaml
git commit values-global.yaml
git push origin my-branch
```

Please review the [Patterns quick start](/learn/quickstart/) page. This section describes deploying the pattern using `pattern.sh`. You can also deploy the pattern using the [validated pattern operator](/infrastructure/using-validated-pattern-operator/); if you do, skip ahead to Installation Validation below.

1. (Optional) Preview the changes. If you'd like to review what will be deployed with the pattern, `pattern.sh` provides a way to show it.

```sh
./pattern.sh make show
```

1. Apply the changes to your cluster. This will install the pattern via the Validated Patterns Operator, and then run any necessary follow-up steps.

```sh
./pattern.sh make install
```

The installation process takes between 45 and 60 minutes to complete.

# Installation Validation

* Check that the operators have been installed using the OpenShift console

```text
OpenShift Console Web UI -> Installed Operators
```

![federated-edge-observability-operators](/images/federated-edge-observability/FEO-operators.png "Federated Edge Observability Operators")

* Check that all applications are synchronized

Under the project `federated-edge-observability-hub`, click on the URL for the `hub-gitops-server`. All applications will eventually sync, but this takes time: ODF has to finish installing, and OpenShift Virtualization cannot provision VMs until the metal node has been fully provisioned and is ready.

![federated-edge-observability-applications](/images/federated-edge-observability/FEO-applications.png "Federated Edge Observability Applications")
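
Sync status can also be checked from the CLI. This is a sketch, assuming a logged-in `oc` session against the hub cluster and the namespace shown in the console above (the query is skipped if `oc` is not on the PATH):

```shell
# Filter helper: print Applications that are not yet both Synced and Healthy.
unsynced() { awk 'NR > 1 && !($2 == "Synced" && $3 == "Healthy")'; }

# List the hub's Argo CD Applications with their sync and health status.
if command -v oc >/dev/null 2>&1; then
  oc get applications.argoproj.io -n federated-edge-observability-hub \
    -o custom-columns=NAME:.metadata.name,SYNC:.status.sync.status,HEALTH:.status.health.status \
    | unsynced
fi
```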

* Under Virtualization > Virtual Machines, the virtual machines will eventually show as "Running." Once they are in the "Running" state, the provisioning workflow will run on them, install the OpenTelemetry collector, and start reporting metrics to the Edge Observability Stack on the hub cluster.

![federated-edge-observability-vms](/images/federated-edge-observability/FEO-vms.png "Federated Edge Observability Virtual Machines")

* The Grafana graphs should be receiving data and drawing graphs for each of the nodes:

![federated-edge-observability-grafana](/images/federated-edge-observability/FEO-grafana.png "Federated Edge Observability Graphs")

Please see [Ansible Automation Platform](/federated-edge-observability/ansible-automation-platform/) for more information on how this pattern uses the Ansible Automation Platform Operator for OpenShift.

# Infrastructure Elements of this Pattern

## [Ansible Automation Platform](https://www.redhat.com/en/technologies/management/ansible)

A fully functional installation of the Ansible Automation Platform operator is installed on your OpenShift cluster to configure and maintain the VMs for this demo. AAP maintains a dynamic inventory of kiosk machines and can configure a VM from template to fully functional kiosk in about 10 minutes.

## OpenShift [Virtualization](https://docs.openshift.com/container-platform/4.16/virt/about_virt/about-virt.html)

OpenShift Virtualization is a Kubernetes-native way to run virtual machine workloads. It is used in this pattern to host VMs simulating an Edge environment; the chart that configures the VMs is designed to be flexible to allow easy customization to model different VM sizes, mixes, versions and profiles for future pattern development.

## HashiCorp [Vault](https://www.vaultproject.io/)

Vault is used as the authoritative source for the kiosk SSH public key, via the External Secrets Operator.
HashiCorp Vault is installed as part of this pattern; refer to the section on [Vault](https://validatedpatterns.io/secrets/vault/).
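
As an illustration of that flow, an `ExternalSecret` along the following lines maps a Vault-held key into a Kubernetes Secret. Every name, namespace, store reference, and Vault path here is hypothetical rather than taken from the pattern's actual manifests:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: vm-ssh                 # hypothetical name
  namespace: edge-gitops-vms   # hypothetical namespace
spec:
  secretStoreRef:
    name: vault-backend        # hypothetical SecretStore name
    kind: ClusterSecretStore
  target:
    name: vm-ssh               # Kubernetes Secret to create
  data:
    - secretKey: publickey
      remoteRef:
        key: secret/hub/vm-ssh # hypothetical Vault path
        property: publickey
```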

# Next Steps

## [Help & Feedback](https://groups.google.com/g/validatedpatterns)
## [Report Bugs](https://github.com/validatedpatterns-sandbox/federated-edge-observability/issues)