This is a prototype Crossplane provider for the Ansible Automation Platform (AAP). The architecture is structured around bridging Kubernetes' declarative model with Ansible's task-driven execution. The main goal is to test creating resources in AAP through Kubernetes CRDs via Crossplane and to show that this is a viable approach going forward.
- TL;DR
- 1. Architectural Choice: Upjet vs. Native Go Provider
- 2. Component Architecture
- 3. Handling the "Ansible Problem" (State vs. Action)
- 4. Implementation Workflow (Upjet Approach)
- 5. Security Architecture
- 6. Build and Deploy on OpenShift
- 7. Documentation
- Build vs deploy overview
- Build (provider image/package): Build & push image (Podman), Package image (xpkg)
- Deploy: OpenShift (full guide), Deploy via Quay, CRC / OpenShift Local, Validate provider vs AAP API
- Workflows (CI)
The recommendation is to start with Upjet due to the existing robust Terraform provider for AAP, which allows for instant generation of the majority of Crossplane Custom Resource Definitions (CRDs).
| Feature | Upjet (Terraform-Based) | Native (Go + Crossplane SDK) |
|---|---|---|
| Effort | Low (Weeks) | High (Months) |
| Logic Source | Reuses Terraform Ansible Provider | Direct interaction with AAP REST API |
| State | Manages a .tfstate in K8s Secrets | No state file; queries AAP directly |
| Reliability | Inherits TF Provider bugs/limitations | Precise control over AAP-specific quirks |
The provider must implement the Crossplane Resource Model (XRM) to manage AAP resources.
Map key AAP entities to Kubernetes CRDs, including:
- Organization
- Project
- Inventory & Host
- JobTemplate (the executable unit)
- WorkflowJobTemplate
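As an illustration, a managed resource for one of these entities might look like the sketch below. The API group, version, and field names here are assumptions for illustration only; the authoritative ones come from the CRDs this provider generates.

```yaml
# Hypothetical sketch: the actual apiVersion, kind, and spec fields
# are defined by the generated CRDs in this provider.
apiVersion: aap.crossplane.io/v1alpha1
kind: JobTemplate
metadata:
  name: demo-job-template
spec:
  forProvider:
    name: demo-job-template
    inventory: "Demo Inventory"
    project: "Demo Project"
    playbook: site.yml
  providerConfigRef:
    name: default
```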
A `ProviderConfig` that securely stores AAP connection details (URL, OAuth2 Token, or Username/Password), pulling them from a Kubernetes Secret.
The controller for every resource must perform four key operations:
- Observe — Call the AAP API (GET) to check for resource existence.
- Create — If it doesn't exist, POST the desired state.
- Update — If the AAP state differs from the YAML (drift), PATCH the AAP API.
- Delete — If the CRD is deleted in K8s, DELETE the resource in AAP.
The biggest challenge is that Crossplane provisions state ("This Job Template should exist"), while Ansible performs actions ("Run this playbook now").
- To manage AAP configuration: Treat JobTemplates as static provisioning resources.
- To trigger Jobs: Architect a special CRD (e.g., `JobRun`) with a `v1alpha1` lifecycle. When this CRD is created, it triggers a job in AAP. A decision must be made on whether deleting the `JobRun` CRD should cancel the running job in AAP or do nothing.
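A hedged sketch of what such a `JobRun` CR could look like; the group, version, and every field name below are hypothetical, since this CRD is still a design proposal:

```yaml
# Hypothetical CRD instance: group, version, and fields are illustrative only.
apiVersion: aap.crossplane.io/v1alpha1
kind: JobRun
metadata:
  name: deploy-app-run-001
spec:
  jobTemplateRef:
    name: demo-job-template   # the JobTemplate managed resource to launch
  extraVars:
    environment: staging
  # Open design question from above: should deleting this CR cancel the AAP job?
```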
The recommended technical steps for the Upjet path are:
| Step | Action |
|---|---|
| Initialize | Use the upjet-provider-template repository. |
| Configure | Point the generator to the existing Terraform provider for Ansible. |
| Map | Define which Terraform resources map to which K8s groups (e.g., job.ansible.upbound.io). |
| Generate | Run make generate to create the Go types and CRD manifests. |
| Test | Use a local Kind cluster to apply a JobTemplate YAML and verify it appears in the AAP UI. |
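For the Test step above, a local Kind cluster can be defined with a small config file like this sketch (the cluster name is arbitrary; `kind.x-k8s.io/v1alpha4` is Kind's current config API):

```yaml
# Minimal Kind cluster for applying the generated CRDs and a JobTemplate YAML.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: aap-provider-dev
nodes:
  - role: control-plane
```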
- RBAC: Ensure the ServiceAccount running the Provider Pod has narrow permissions (only secrets and its own CRDs).
- AAP Scoping: Use AAP Application Tokens instead of admin passwords, scoping the tokens to specific AAP Organizations to minimize the blast radius.
This section describes how to build the AAP Crossplane provider, deploy AAP on OpenShift using the Red Hat Ansible Automation Platform Operator, then deploy Crossplane and this provider on the same (or another) OpenShift cluster so the provider can manage AAP resources declaratively.
From this repo, use the Upjet scaffold to generate and build the provider binary and CRDs:
- Clone the upjet-provider-template, run `hack/prepare.sh` with AAP naming (see hack/prepare-aap.sh), then copy in the scaffold from `provider/` and merge provider/Makefile.aap into the provider repo's Makefile.
- In the provider repo: `make generate.init`, `make generate`, then `make build`.
Full steps, prerequisites, and troubleshooting: BUILD.md.
Deploy Ansible Automation Platform on OpenShift using the official operator so you have an AAP API endpoint for the Crossplane provider to talk to.
- Install the Ansible Automation Platform Operator from OperatorHub (cluster-scoped, manual approval recommended):
  - OpenShift Console → Operators → OperatorHub → search for Ansible Automation Platform.
  - Install into a dedicated namespace (e.g. `aap`); choose a stable channel (e.g. `stable-2.4-cluster-scoped`).
  - Red Hat: Deploying the AAP Operator on OpenShift.
- Create an Automation controller instance (the AAP controller):
  - Create an `AutomationController` or equivalent CR in the operator's namespace and configure storage, replicas, and TLS as needed.
  - Wait for the controller to be ready and note the AAP URL (e.g. route or ingress) and admin credentials.
- Create an AAP Application Token (recommended for the Crossplane provider): In the AAP UI, create a token scoped to the desired organization(s) and save it for the provider's `ProviderConfig` Secret.
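The Automation controller CR described above can be sketched as follows; `automationcontroller.ansible.com/v1beta1` is the API the AAP operator installs, but verify the available spec fields against your operator version:

```yaml
# Minimal sketch of an Automation controller instance; add storage,
# replica, and TLS settings as needed for your environment.
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: aap-controller
  namespace: aap
spec:
  replicas: 1
```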
Install Crossplane in a dedicated namespace (e.g. crossplane-system). Prefer the Helm install with security context set so OpenShift accepts the pods:
```shell
oc new-project crossplane-system
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm install crossplane crossplane-stable/crossplane \
  --namespace crossplane-system \
  --set provider.packageRuntime.configuration.securityContext=false \
  --wait
```

Alternatively, use the Crossplane OpenShift Operator (OLM) if available in your catalog. See Crossplane on OpenShift and Installing Crossplane on OpenShift for variations and security context notes.
OpenShift: Use the Helm values file deploy/crossplane-values-openshift.yaml so Crossplane pods run with UIDs in the cluster’s restricted range; no SCC grants (e.g. anyuid) are required. See docs/deploy/openshift-deploy.md for the full OpenShift deploy guide.
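A values file is the file-based equivalent of the `--set` flag used in the Helm install; assuming the same key, its shape would be as below (see deploy/crossplane-values-openshift.yaml in this repo for the real file):

```yaml
# Equivalent of --set provider.packageRuntime.configuration.securityContext=false
provider:
  packageRuntime:
    configuration:
      securityContext: false
```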
CRC / OpenShift Local: See docs/deploy/DEPLOY-ON-CRC.md for namespace UID range, Quay-based provider images, and differences from full OpenShift.
Verify:

```shell
oc get pods -n crossplane-system
```

- Install the provider into the cluster (use the image you built from the scaffold, or push to a registry and reference it):
  - Create a `Provider` resource that points to your provider image, or use `kubectl crossplane install provider` / the Crossplane CLI with the provider package.
  - Ensure the provider's ServiceAccount has RBAC that allows reading Secrets (in the namespace where the `ProviderConfig` Secret lives) and managing the provider's CRDs.
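A sketch of the `Provider` resource; `pkg.crossplane.io/v1` is the standard Crossplane package API, while the package image reference below is a placeholder for wherever you pushed your build:

```yaml
# The package image is a placeholder; point it at your registry and tag.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aap
spec:
  package: quay.io/<your-org>/provider-aap:v0.1.0
```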
- Create the AAP credentials Secret in the same namespace as the provider (e.g. `crossplane-system`), with the gateway root URL (no `/api/controller` suffix; the embedded Terraform ansible/aap client discovers the controller API via `GET {host}/api/`) and an Application Token. See deploy/aap-credentials-secret.yaml and ./deploy/create-aap-credentials-secret.sh. From in-cluster Pods, prefer the gateway Service DNS (e.g. `http://aap.<aap-namespace>.svc.cluster.local`).
- Create a ProviderConfig that references this Secret (see provider/examples/providerconfig.yaml). Set `spec.credentials.secretRef` to the Secret name and key above.
- Apply managed resources (e.g. `Inventory`, `Group`, `Host`) that reference this ProviderConfig; the provider will reconcile them against the AAP API.
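The credentials Secret and ProviderConfig described above can be sketched as follows. The Secret key name, the JSON field names, and the ProviderConfig API group are assumptions for illustration; deploy/aap-credentials-secret.yaml and provider/examples/providerconfig.yaml are the authoritative versions.

```yaml
# Secret holding AAP connection details; key and JSON field names assumed.
apiVersion: v1
kind: Secret
metadata:
  name: aap-credentials
  namespace: crossplane-system
type: Opaque
stringData:
  credentials: |
    {
      "host": "http://aap.<aap-namespace>.svc.cluster.local",
      "token": "<application-token>"
    }
---
# ProviderConfig referencing the Secret; API group assumed for this provider.
apiVersion: aap.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: aap-credentials
      key: credentials
```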
| Step | Action |
|---|---|
| 1 | Build the provider from this repo’s scaffold (BUILD.md). |
| 2 | Deploy AAP on OpenShift via the AAP Operator; create controller instance and obtain AAP URL + token. |
| 3 | Install Crossplane on OpenShift (Helm or OLM). |
| 4 | Install the AAP Crossplane provider and create Secret + ProviderConfig. |
| 5 | Create Crossplane MRs (Inventory, Group, etc.) and verify in the AAP UI. |
- Deploying the AAP Operator on OpenShift
- Deploying AAP 2 on Red Hat OpenShift
- Crossplane – OpenShift Operator
- BUILD.md (this repo)
Detailed guides are in docs/, split into build (provider image/package) and deploy (Crossplane, credentials, provider install):
- Build vs deploy overview
- Build (provider image/package): Build & push image (Podman), Package image (xpkg)
- Deploy: OpenShift (full guide), Deploy via Quay, CRC / OpenShift Local, Validate provider vs AAP API
- Provider HTTP APIs (controller v2 vs `/api/gateway/v1/`): provider/AAP-HTTP-APIS.md