diff --git a/includes/provisioning_platforms/index.asciidoc b/includes/provisioning_platforms/index.asciidoc
index 4e9550f18..f3f306b5b 100644
--- a/includes/provisioning_platforms/index.asciidoc
+++ b/includes/provisioning_platforms/index.asciidoc
@@ -82,27 +82,8 @@ A *programming language* is either "declarative" or "imperative". Declarative pr
 Imperative programming languages state the how: the internal delta calculation needs to be explicitly programmed here. If possible, declarative programming languages are recommended due to the automatic delta calculation. The typical case is infrastructure.
 
-The typical declarative options are shown in detail in the table below. The overall recommendation is to go for Terraform. The major reasons for downvoting Bicep/ARM are:
-
-* ARM: difficult readability for humans
-* Bicep: lack of support for testing based on the plan, and a small testing ecosystem since it was first added only recently.
-
-Table with declarative programming language options:
-[options="header"]
-|=======================
-|Criteria |Bicep |ARM |Terraform
-|Same syntax across clouds |- (Azure only) |- (Azure only) |+ (multi-cloud)
-|What if |o (no complete property list; only display of the plan; unexpected deletes) |- (not available) |+ (plan command)
-|Detection of current state |o (real analysis, but takes time) |+ (real analysis) |o (state file)
-|Testing/static analysis |o (only via ARM) |+ (available) |+ (available)
-|Human readability |+ |- |+
-|Reverse engineering |- (extra ARM step + adjustments) |o (adjustments) |+ (directly via Terraformer)
-|Latest features |o (no embedded fallback) |+ (native) |o (time lag, but embedded fallback)
-|=======================
-
-The major options for imperative programming languages are Azure CLI, PowerShell (Windows) or Linux-based scripting. Azure CLI is recommended as the preferred choice since it works on Linux- and Windows-based VMs.
-
-The created resources should follow a *uniform naming schema*. This requires naming to be factored out into a centralized module.
-Terraform supports factoring out common code in modules. However, the backend must already exist and should also follow a naming convention. The recommendation is therefore to expose the common Terraform module via an additional path that does not require a backend in order to determine the names for the Azure resources representing the backend.
 
+[.internal]
+provisioning_platforms_azure_dec_opt
 
 ==== Provisioning
 
 ===== Organizational Mapping
diff --git a/solutions/streamproc_platforms/index.asciidoc b/includes/streamproc_platforms/index.asciidoc
similarity index 100%
rename from solutions/streamproc_platforms/index.asciidoc
rename to includes/streamproc_platforms/index.asciidoc
diff --git a/solutions/streamproc_problem/arch_overview.png b/includes/streamproc_problem/arch_overview.png
similarity index 100%
rename from solutions/streamproc_problem/arch_overview.png
rename to includes/streamproc_problem/arch_overview.png
diff --git a/solutions/streamproc_problem/arch_overview.pptx b/includes/streamproc_problem/arch_overview.pptx
similarity index 100%
rename from solutions/streamproc_problem/arch_overview.pptx
rename to includes/streamproc_problem/arch_overview.pptx
diff --git a/solutions/streamproc_problem/index.asciidoc b/includes/streamproc_problem/index.asciidoc
similarity index 100%
rename from solutions/streamproc_problem/index.asciidoc
rename to includes/streamproc_problem/index.asciidoc
diff --git a/solutions/microservices_azure_aks/index.asciidoc b/solutions/microservices_azure_aks/index.asciidoc
index d47cdbf1b..19da6e57f 100644
--- a/solutions/microservices_azure_aks/index.asciidoc
+++ b/solutions/microservices_azure_aks/index.asciidoc
@@ -54,7 +54,7 @@ The picture below summarizes some of the services mentioned above:
 image::aks_overview.png[AKS Overview, width=794, height=568]
 
 [.internal]
-solution_microservices_azure_aks_infra_detailed_native_setup
+microservices_azure_aks_infra_detailed_native_setup
 
 === Application
 ==== Overview
diff --git a/solutions/provisioning_azure_azuredevops/index.asciidoc b/solutions/provisioning_azure_azuredevops/index.asciidoc
index fe6362e2b..9aaf1248b 100644
--- a/solutions/provisioning_azure_azuredevops/index.asciidoc
+++ b/solutions/provisioning_azure_azuredevops/index.asciidoc
@@ -101,13 +101,8 @@ Adding teams instead of projects is recommended over projects due to https://doc
 * Tracking and auditing: it's easier to link work items and other objects for tracking and auditing purposes.
 * Maintainability: you minimize the maintenance of security groups and process updates.
 
-The table below lists typical configurations along with their characteristics:
-[options="header"]
-|=======================
-|Criteria |1 project, N teams |1 org, N projects/teams |N orgs
-|General guidance |Smaller or larger organizations with highly aligned teams |Good when different efforts require different processes |Legacy migration
-|Process |Aligned processes across teams; team flexibility to customize boards, dashboards, and so on |Different processes per project, e.g. different work item types or custom fields |Same as many projects
-|=======================
+
+[.internal]
+provisioning_azure_devops_struct
 
 ==== Remaining goals (Automation Code)
 
@@ -159,75 +154,10 @@ resources:
     trigger: true # Run app-ci pipeline when any run of security-lib-ci completes
 ```
 
-Implicit Chaining for *orchestration* is possible by using a trigger condition. Calling pipelines explicitly is so far only possible with scripting. The code snippet below shows an example:
-```Powershell
-#
-# Make call to schedule pipeline run
-#
-
-# Body
-$body = @{
-    stagesToSkip = @()
-    resources = @{
-        self = @{
-            refName = $branch_name
-        }
-    }
-    templateParameters = $params
-    variables = @{}
-}
-$bodyJson = $body | ConvertTo-Json
-# Uri extracted from the Azure DevOps UI
-# $org_uri and $prj_id contain names of the current organization/project
-# $pl_id denotes the internal pipeline id to be started
-$uri = "${org_uri}${prj_id}/_apis/pipelines/${pl_id}/runs?api-version=5.1-preview.1"
-
-# Output parameters
-Write-Host("-------- Call ${pl_name} --------")
-Write-Host("Headers: ${headersJson}")
-Write-Host("Json body: ${bodyJson}")
-Write-Host("Uri: ${uri}")
-
-try
-{
-    # Trigger pipeline
-    $result = Invoke-RestMethod -Method POST -Headers $headers -Uri $uri -Body $bodyJson
-    Write-Host("Result: ${result}")
-
-    # Wait until run completed
-    $buildid = $result.id
-    $start_time = (get-date).ToString('T')
-    Write-Host("------------ Loop until ${pl_name} completed --------")
-    Write-Host("started runbuild ${buildid} at ${start_time}")
-
-    # Uri for checking state
-    $uri = "${org_uri}${prj_id}/_apis/pipelines/${pl_id}/runs/${buildid}?api-version=5.1-preview.1"
-
-    Do {
-        Start-Sleep -Seconds 60
-        $current_time = (get-date).ToString('T')
-
-        # Retrieve current state
-        $result = Invoke-RestMethod -Method GET -Headers $headers -Uri $uri
-        $status = $result.state
-        Write-Host("Received state ${status} at ${current_time}...")
-    } until ($status -eq "completed")
-
-    # Return result
-    $pl_run_result = $result.result
-    Write-Host("Result: ${pl_run_result}")
-    return $pl_run_result
-}
-catch {
-    $excMsg = $_.Exception.Message
-    Write-Host("Exception text: ${excMsg}")
-    return "Failed"
-}
-```
-Orchestration must take dependencies into account. They might result from the deployed code or from the scope of the pipeline (the scope is e.g. a single microservice, and the code includes the libraries it needs).
-Orchestrated pipelines must pass data between them. The recommended method is to use a key vault.
+Implicit Chaining for *orchestration* is possible by using a trigger condition. Calling pipelines explicitly is so far only possible with scripting.
 
-*Recreation of resources in short intervals* might cause pipelines to fail. Even if resources are deleted, they might still exist in the background (even though soft delete is not applicable). Programming languages can therefore get confused if pipelines recreate things in short intervals. Creating a new resource group can solve the problem, since it is part of the technical resource id.
+[.internal]
+provisioning_azure_devops_orch
 
 As part of the *configuration*, Azure DevOps provides various settings that are used for development, such as enforcing pull requests instead of direct pushes to the repo. The major configuration mechanisms in YAML are variables, parameters and variable groups. Variable groups bundle multiple settings as key-value pairs. Parameters are not possible in a variables section (dynamic inclusion of variable groups is possible via file switching). If parameters are declared at top level, they have to be passed when the pipeline is called programmatically or manually by the user.
 
diff --git a/solutions/streamproc_azure_kafka/index.asciidoc b/solutions/streamproc_azure_kafka/index.asciidoc
index 3848cf6a6..dd3ce314a 100644
--- a/solutions/streamproc_azure_kafka/index.asciidoc
+++ b/solutions/streamproc_azure_kafka/index.asciidoc
@@ -8,6 +8,10 @@ toc::[]
 :idprefix:
 :idseparator: -
 
+include::../../includes/streamproc_problem/index.asciidoc[]
+
+include::../../includes/streamproc_platforms/index.asciidoc[]
+
 == Apache Kafka on Microsoft Azure
 
 === Options for running Apache Kafka on Microsoft Azure