---
title: Source Code Structure
description: Infrastructure as Code components organized by deployment location and purpose for enterprise deployments of Arc-enabled Azure IoT Operations solutions
author: Edge AI Team
ms.date: 2025-06-07
ms.topic: reference
keywords:
estimated_reading_time: 6
---
The source code for this project is organized into discrete categories optimized for enterprise
deployments of Arc-enabled Azure IoT Operations solutions. The components are grouped by their
deployment location (cloud vs. edge) and purpose, taking project execution phases into account.
When a project first kicks off, hand the `azure-resource-providers` scripts to your Azure subscription
managers so that all required resource providers are registered before work begins.
Cloud infrastructure teams can handle the `000-cloud` components, while physical plant
engineers can deploy the `100-edge` components for on-premises cluster setup.
- `000-cloud/000-resource-group` - Resource Groups for all Azure resources
- `000-cloud/010-security-identity` - Identity and security resources including Key Vault, Managed Identities, and role assignments
- `000-cloud/020-observability` - Cloud-side monitoring and observability resources
- `000-cloud/030-data` - Data storage and Schema Registry resources
- `000-cloud/031-fabric` - Microsoft Fabric resources for data warehousing and analytics
- `000-cloud/040-messaging` - Event Grid, Event Hubs, Service Bus, and messaging resources
- `000-cloud/051-vm-host` - VM provisioning resources with a configurable host operating system
- `100-edge/100-cncf-cluster` - Installation of an AIO-compatible CNCF cluster (initially limited to K3s), Arc enablement of target clusters, and workload identity
- `100-edge/110-iot-ops` - AIO deployment of core infrastructure components (MQ Broker, Edge Storage Accelerator, Secrets Sync Controller, Workload Identity Federation, OpenTelemetry Collector, etc.)
- `100-edge/120-observability` - Edge-specific observability components and monitoring tools
- `100-edge/130-messaging` - Edge messaging components and data routing capabilities
- `500-application` - Custom workloads and applications, including a basic Inference Pipeline, TIG/TICK stacks, InfluxDB Data Historian, reference data backup from cloud to edge, etc.
- `600-workload-orchestration` - Multi-cluster workload orchestration tools and solutions, including Kalypso and Azure Arc workload orchestration
- `900-tools-utilities` - Utility scripts, tools, and supporting resources for edge deployments
- `starter-kit/dataflows-acsa-egmqtt-bidirectional` - Sample that provides assets with Azure IoT Operations Dataflows and the supporting infrastructure creation
- `azure-resource-providers` - Scripts to register the Azure resource providers required by AIO and Arc in your subscription
Refer to the root README for a complete list of prerequisites and setup instructions, and ensure your Azure CLI is configured with the correct subscription context. For a detailed step-by-step guide, follow the Getting Started and Prerequisites Setup instructions.
Each component directory that contains Terraform IaC has a corresponding `terraform` directory, which is a
Terraform module. This directory may also contain:

- `tests` → Terraform tests used for testing the component module.
- `modules` → Internal Terraform modules that set up individual parts of the component.
To use and deploy these component modules, either refer to a blueprint that uses these components
in tandem for a full or partial scenario deployment, or step into the `ci/terraform` directory to deploy
each component individually. The `ci` directory handles deploying default configurations and is meant to
be used in a CI system for module verification.
The following steps cover `ci` deployment performed from a local machine.
You must complete all prerequisites and environment setup before running these steps. Ensure your Azure CLI is logged in and your subscription context is set correctly.
> [!NOTE]
> **Telemetry:** If you wish to opt out of sending telemetry data to Microsoft when deploying Azure resources with Terraform, set the environment variable `ARM_DISABLE_TERRAFORM_PARTNER_ID=true` before running any `terraform` commands.
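For example, the opt-out can be set for the current shell session before invoking Terraform:

```shell
# Opt out of sending telemetry data to Microsoft for this shell session only
export ARM_DISABLE_TERRAFORM_PARTNER_ID=true
```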
Set up Terraform settings and apply them:

1. `cd` into the `<component>/ci/terraform` directory:

   ```shell
   cd ./ci/terraform
   ```

2. Set up the required environment variables:

   ```shell
   # Required by the azurerm terraform provider
   export ARM_SUBSCRIPTION_ID=$(az account show --query id -o tsv)
   ```

3. Create a `terraform.tfvars` file with at least the following minimum configuration settings:

   ```hcl
   # Required, environment hosting resource: "dev", "prod", "test", etc...
   environment = "<environment>"

   # Required, short unique alphanumeric string: "sample123", "plantwa", "uniquestring", etc...
   resource_prefix = "<resource-prefix>"

   # Optional, instance/replica number: "001", "002", etc...
   instance = "<instance>"
   ```

   For additional variables to configure, refer to `variables.tf` or any of the `variables.*.tf` files located in the `terraform` directory for the component.

   > [!NOTE]
   > To have Terraform automatically use your variables, you can name your tfvars file `terraform.auto.tfvars`. Terraform uses variables from any `*.auto.tfvars` files located in the same deployment folder.

4. Create the necessary pre-existing resources. Anything in `ci/terraform/main.tf` that is of type `data` rather than `resource` needs to be pre-created. Common examples include the resource group and a user-assigned managed identity (UAMI). Referenced variables should match your `terraform.tfvars` file:

   ```shell
   # Create resource group
   az group create --resource-group "rg-$resource_prefix-$environment-$instance" --location westus3

   # Create UAMI
   az identity create --resource-group "rg-$resource_prefix-$environment-$instance" --name "id-$resource_prefix-aio-$environment-$instance"
   ```

   Some modules may require more (or fewer) pre-created resources.

5. Initialize and apply Terraform:

   ```shell
   # Pulls down providers and modules, initializes state and backend
   # Use '-upgrade' or '-reconfigure' if provider or backend updates are required
   terraform init

   # Review the resource change list, then type 'yes' to deploy, or deploy with '-auto-approve'
   terraform apply -var-file=terraform.tfvars # -auto-approve
   ```
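The pre-created resource names above follow a `<type>-<resource_prefix>-<environment>-<instance>` pattern. As a quick sketch (the values below are placeholders, not defaults from this repository), the derived names line up with the `terraform.tfvars` settings like this:

```shell
# Placeholder values mirroring a terraform.tfvars file
resource_prefix="sample123"
environment="dev"
instance="001"

# Derived names used by the 'az' pre-creation commands
rg_name="rg-$resource_prefix-$environment-$instance"
uami_name="id-$resource_prefix-aio-$environment-$instance"

echo "$rg_name"    # rg-sample123-dev-001
echo "$uami_name"  # id-sample123-aio-dev-001
```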
To destroy the resources created by Terraform, run the following command:
```shell
# Remove all resources deployed by this terraform component
terraform destroy
```

This may take some time to complete; the resources can also be deleted from the Azure portal. If deleting manually, be sure to delete your local state representation by removing the following:

- `terraform.tfstate` → Local `tfstate` file representing what was deployed by Terraform.
- `.terraform.lock.hcl` → Local dependency lock file that pins provider selections and prevents conflicting updates.
- `.terraform` → (Optional) Terraform providers and modules pulled down by `terraform init`.
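After a manual delete from the portal, the local state cleanup listed above can be done in one go (a convenience sketch; run it from the component's `ci/terraform` directory):

```shell
# Remove local Terraform state after deleting the resources manually
rm -f terraform.tfstate       # local state file
rm -f .terraform.lock.hcl     # provider dependency lock file
rm -rf .terraform             # (optional) cached providers/modules from 'terraform init'
```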
A script for deploying all of the Terraform CI is included: `operate-all-terraform.sh`. This script automatically deploys each individual Terraform component, in order:
1. Refer to the Terraform - Create Resources steps above to add a `terraform.tfvars` located at `src/terraform.tfvars`.

2. Execute `operate-all-terraform.sh`:

   ```shell
   # Use '--start-layer' and '--end-layer' to specify where the script should start and end deploying.
   ./operate-all-terraform.sh # --start-layer 030-iot-ops-cloud-reqs --end-layer 040-iot-ops
   ```
To simplify doc generation, this directory makes use of `terraform-docs`. To generate docs for new modules or re-generate docs for existing modules, run the following command from the root of this repository:

```shell
./scripts/update-all-terraform-docs.sh
```

This generates docs based on the configuration defined in `terraform-docs.yml`, located at the root of this repository.
Each Terraform component under `src/<component>/terraform` includes Terraform tests. To run these tests, ensure you have logged into your Azure subscription, then `cd` into the `terraform` directory that you would like to test and execute the following commands:

```shell
# Required by the azurerm terraform provider
export ARM_SUBSCRIPTION_ID=$(az account show --query id -o tsv)

# Runs the tests if there is a tests folder in the same directory.
terraform test
```

Optionally, if needed (due to restrictive subscription-level permissions), testing with the pre-fetched value of the OID for Azure Arc Custom Locations can be done with the following commands:

```shell
# Terraform is case-sensitive with variable names provided by environment variables
export TF_VAR_CUSTOM_LOCATIONS_OID=$(az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query id -o tsv)
export ARM_SUBSCRIPTION_ID=$(az account show --query id -o tsv)
terraform test

# Additionally, you can use the '-var' parameter to pass variables on the command line
# terraform test -var custom_locations_oid=$TF_VAR_CUSTOM_LOCATIONS_OID
```

Each component directory that contains Bicep IaC has a corresponding `bicep` directory with Bicep templates.
Similar to the Terraform modules, these templates are designed to be reusable building blocks for your
infrastructure deployments.
The following steps are for manual local deployment of Bicep components.
You must complete all prerequisites and environment setup before running these steps. Ensure your Azure CLI is logged in and your subscription context is set correctly.
Set up Bicep parameters and deploy:
1. `cd` into the `<component>/ci/bicep` directory:

   ```shell
   cd ./<component>/ci/bicep
   ```

2. Create a resource group for your deployment:

   ```shell
   # Replace with your preferred location
   LOCATION="eastus2"

   # Create a unique resource group name
   RESOURCE_GROUP_NAME="rg-aio-bicep-deployment"

   # Create the resource group
   az group create --resource-group $RESOURCE_GROUP_NAME --location $LOCATION
   ```

3. Create a parameter file named `main.dev.bicepparam` in the `ci/bicep` directory with your deployment parameters:

   ```bicep
   // Parameters for component deployment using './bicep/main.bicep'

   // Required parameters
   param common = {
     resourcePrefix: 'myprefix' // Replace with a unique prefix
     location: 'eastus2'        // Replace with your Azure region
     environment: 'dev'         // 'dev', 'test', or 'prod'
     instance: '001'            // Instance identifier
   }

   // Component-specific parameters will vary based on the component
   // Example additional parameters:
   // param storageAccountSku = 'Standard_LRS'
   // param enableDiagnostics = true
   ```

4. Deploy the Bicep template with your chosen deployment name:

   ```shell
   DEPLOYMENT_NAME=YourDeploymentName

   # Deploy a specific component
   az deployment group create \
     --name $DEPLOYMENT_NAME \
     --resource-group $RESOURCE_GROUP_NAME \
     --parameters ./main.dev.bicepparam
   ```
To remove resources created by Bicep deployments, you can:
```shell
# Remove a specific deployment
az deployment group delete --resource-group $RESOURCE_GROUP_NAME --name $DEPLOYMENT_NAME

# Remove the entire resource group and all resources
az group delete --name $RESOURCE_GROUP_NAME --no-wait
```

To simplify doc generation, this directory makes use of scripts to generate documentation for Bicep modules. To generate docs for new modules or re-generate docs for existing modules, run the following command from the root of this repository:

```shell
./scripts/update-all-bicep-docs.sh
```

This generates documentation based on the configuration defined in the repository, using the metadata from your Bicep files.
🤖 Crafted with precision by ✨Copilot following brilliant human instruction, then carefully refined by our team of discerning human reviewers.