diff --git a/CHANGELOG.md b/CHANGELOG.md
index 9824752..0beacfc 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,13 +1,15 @@
-## [project-title] Changelog
+# Observability Accelerators Changelog
-
-# x.y.z (yyyy-mm-dd)
+## 1.0 (2023-05-19)
-*Features*
-* ...
+### Features
-*Bug Fixes*
-* ...
+- Initial public release
-*Breaking Changes*
-* ...
+### Bug Fixes
+
+- Not applicable
+
+### Breaking Changes
+
+- Not applicable
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index a9115cf..1fe1f0c 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,6 +1,6 @@
-# Contributing to [project-title]
+# Contributing to Observability Accelerators
-This project welcomes contributions and suggestions. Most contributions require you to agree to a
+This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
@@ -12,61 +12,67 @@ This project has adopted the [Microsoft Open Source Code of Conduct](https://ope
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
- - [Code of Conduct](#coc)
- - [Issues and Bugs](#issue)
- - [Feature Requests](#feature)
- - [Submission Guidelines](#submit)
+- [Code of Conduct](#coc)
+- [Issues and Bugs](#issue)
+- [Feature Requests](#feature)
+- [Submission Guidelines](#submit)
## Code of Conduct
+
Help us keep this project open and inclusive. Please read and follow our [Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
## Found an Issue?
+
If you find a bug in the source code or a mistake in the documentation, you can help us by
[submitting an issue](#submit-issue) to the GitHub Repository. Even better, you can
[submit a Pull Request](#submit-pr) with a fix.
## Want a Feature?
-You can *request* a new feature by [submitting an issue](#submit-issue) to the GitHub
-Repository. If you would like to *implement* a new feature, please submit an issue with
+
+You can _request_ a new feature by [submitting an issue](#submit-issue) to the GitHub
+Repository. If you would like to _implement_ a new feature, please submit an issue with
a proposal for your work first, to be sure that we can use it.
-* **Small Features** can be crafted and directly [submitted as a Pull Request](#submit-pr).
+- **Small Features** can be crafted and directly [submitted as a Pull Request](#submit-pr).
## Submission Guidelines
### Submitting an Issue
+
Before you submit an issue, search the archive, maybe your question was already answered.
If your issue appears to be a bug, and hasn't been reported, open a new issue.
Help us to maximize the effort we can spend fixing issues and adding new
-features, by not reporting duplicate issues. Providing the following information will increase the
+features, by not reporting duplicate issues. Providing the following information will increase the
chances of your issue being dealt with quickly:
-* **Overview of the Issue** - if an error is being thrown a non-minified stack trace helps
-* **Version** - what version is affected (e.g. 0.1.2)
-* **Motivation for or Use Case** - explain what are you trying to do and why the current behavior is a bug for you
-* **Browsers and Operating System** - is this a problem with all browsers?
-* **Reproduce the Error** - provide a live example or a unambiguous set of steps
-* **Related Issues** - has a similar issue been reported before?
-* **Suggest a Fix** - if you can't fix the bug yourself, perhaps you can point to what might be
+- **Overview of the Issue** - if an error is being thrown a non-minified stack trace helps
+- **Version** - what version is affected (e.g. 0.1.2)
+- **Motivation for or Use Case** - explain what you are trying to do and why the current behavior is a bug for you
+- **Browsers and Operating System** - is this a problem with all browsers?
+- **Reproduce the Error** - provide a live example or an unambiguous set of steps
+- **Related Issues** - has a similar issue been reported before?
+- **Suggest a Fix** - if you can't fix the bug yourself, perhaps you can point to what might be
causing the problem (line of code or commit)
You can file new issues by providing the above information at the corresponding repository's issues link: https://github.com/[organization-name]/[repository-name]/issues/new].
### Submitting a Pull Request (PR)
+
Before you submit your Pull Request (PR) consider the following guidelines:
-* Search the repository (https://github.com/[organization-name]/[repository-name]/pulls) for an open or closed PR
+- Search the repository (https://github.com/[organization-name]/[repository-name]/pulls) for an open or closed PR
that relates to your submission. You don't want to duplicate effort.
-* Make your changes in a new git fork:
+- Make your changes in a new git fork:
+
+- Commit your changes using a descriptive commit message
+- Push your fork to GitHub:
+- In GitHub, create a pull request
+- If we suggest changes then:
-* Commit your changes using a descriptive commit message
-* Push your fork to GitHub:
-* In GitHub, create a pull request
-* If we suggest changes then:
- * Make the required updates.
- * Rebase your fork and force push to your GitHub repository (this will update your Pull Request):
+ - Make the required updates.
+ - Rebase your fork and force push to your GitHub repository (this will update your Pull Request):
```shell
git rebase master -i
diff --git a/README.md b/README.md
index 364f052..c3d8fa3 100644
--- a/README.md
+++ b/README.md
@@ -1,57 +1,17 @@
-# Project Name
+# Observability Accelerators
-(short, 1-3 sentenced, description of the project)
+This repository contains multiple samples that are meant to accelerate development in the Observability and Monitoring space on Azure.
-## Features
+Each accelerator focuses on a different application architecture and contains all source code and infrastructure as code necessary to deploy the application, as well as in-depth documentation that details important observability and monitoring concepts.
-This project framework provides the following features:
+Navigate to one of the accelerators in the list below; its README includes instructions on how to get started with that application.
-* Feature 1
-* Feature 2
-* ...
+## Accelerator Index
-## Getting Started
+| Accelerator |
+| -------------------------------------------------------------------------------------------------------------------------- |
+| [Azure Monitor in a Message-Based Distributed Application on AKS](./accelerators/aks-sb-azmonitor-microservices/README.md) |
-### Prerequisites
+## Trademarks
-(ideally very short, if any)
-
-- OS
-- Library version
-- ...
-
-### Installation
-
-(ideally very short)
-
-- npm install [package name]
-- mvn install
-- ...
-
-### Quickstart
-(Add steps to get up and running quickly)
-
-1. git clone [repository clone url]
-2. cd [repository name]
-3. ...
-
-
-## Demo
-
-A demo app is included to show how to use the project.
-
-To run the demo, follow these steps:
-
-(Add steps to start up the demo)
-
-1.
-2.
-3.
-
-## Resources
-
-(Any additional resources or related projects)
-
-- Link to supporting information
-- Link to similar sample
-- ...
+This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
diff --git a/accelerators/aks-sb-azmonitor-microservices/.devcontainer/Dockerfile b/accelerators/aks-sb-azmonitor-microservices/.devcontainer/Dockerfile
new file mode 100644
index 0000000..a6bd0b3
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/.devcontainer/Dockerfile
@@ -0,0 +1,18 @@
+# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.245.2/containers/ubuntu/.devcontainer/base.Dockerfile
+
+# [Choice] Ubuntu version (use ubuntu-22.04 or ubuntu-18.04 on local arm64/Apple Silicon): ubuntu-22.04, ubuntu-20.04, ubuntu-18.04
+ARG VARIANT="jammy"
+FROM mcr.microsoft.com/vscode/devcontainers/base:0-${VARIANT}
+
+# Install additional OS packages.
+RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
+ && apt-get -y install --no-install-recommends figlet
+
+
+ARG USERNAME=vscode
+USER $USERNAME
+
+COPY kubelogin.sh /tmp/kubelogin.sh
+RUN mkdir -p "/home/$USERNAME/.local/bin" && \
+ /tmp/kubelogin.sh
+ENV PATH="/home/vscode/.local/bin:${PATH}"
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/.devcontainer/devcontainer.json b/accelerators/aks-sb-azmonitor-microservices/.devcontainer/devcontainer.json
new file mode 100644
index 0000000..fb28795
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/.devcontainer/devcontainer.json
@@ -0,0 +1,46 @@
+// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
+// https://github.com/microsoft/vscode-dev-containers/tree/v0.245.2/containers/ubuntu
+{
+ "name": "aks-sb-azmonitor-microservices",
+ "build": {
+ "dockerfile": "Dockerfile",
+ // Update 'VARIANT' to pick an Ubuntu version: jammy / ubuntu-22.04, focal / ubuntu-20.04, bionic /ubuntu-18.04
+ // Use ubuntu-22.04 or ubuntu-18.04 on local arm64/Apple Silicon.
+ "args": {
+ "VARIANT": "ubuntu-22.04"
+ }
+ },
+ // Use 'forwardPorts' to make a list of ports inside the container available locally.
+ // "forwardPorts": [],
+ // Use 'postCreateCommand' to run commands after the container is created.
+ // "postCreateCommand": "uname -a",
+ // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
+ "remoteUser": "vscode",
+ "features": {
+ "ghcr.io/devcontainers/features/terraform:1": {
+ "version": "1.3"
+ },
+ "ghcr.io/devcontainers/features/azure-cli:1": {},
+ "ghcr.io/stuartleeks/dev-container-features/azure-cli-persistence:0": {},
+ "ghcr.io/stuartleeks/dev-container-features/shell-history:0": {},
+ "ghcr.io/devcontainers/features/docker-from-docker:1": {},
+ "ghcr.io/devcontainers/features/kubectl-helm-minikube:1": {
+ "helm": "3.10.1"
+ }
+ },
+ "runArgs": [
+    // Attach dev container to host network to allow accessing services on the host
+ // when running via docker-compose
+ "--network", "host"
+ ],
+ "customizations": {
+ "vscode": {
+ "extensions": [
+ "timonwong.shellcheck",
+ "hashicorp.terraform",
+ "ms-azuretools.vscode-bicep",
+ "humao.rest-client"
+ ]
+ }
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/.devcontainer/kubelogin.sh b/accelerators/aks-sb-azmonitor-microservices/.devcontainer/kubelogin.sh
new file mode 100644
index 0000000..49d7f5c
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/.devcontainer/kubelogin.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+set -e
+
+wget -O /tmp/kubelogin-linux-amd64.zip \
+ https://github.com/Azure/kubelogin/releases/download/v0.0.24/kubelogin-linux-amd64.zip
+
+unzip /tmp/kubelogin-linux-amd64.zip -d /tmp/kubelogin
+
+cp /tmp/kubelogin/bin/linux_amd64/kubelogin "/home/$USERNAME/.local/bin/kubelogin"
diff --git a/accelerators/aks-sb-azmonitor-microservices/.env.sample b/accelerators/aks-sb-azmonitor-microservices/.env.sample
new file mode 100644
index 0000000..01e996c
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/.env.sample
@@ -0,0 +1,8 @@
+# Unique name to assign to all deployed services; your high school hotmail alias is a great idea!
+USERNAME=
+
+# Email address for alert notifications
+EMAIL_ADDRESS=
+
+# Uncomment the following line to change the deployment location
+# LOCATION=westeurope
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/.gitattributes b/accelerators/aks-sb-azmonitor-microservices/.gitattributes
new file mode 100644
index 0000000..c91154c
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/.gitattributes
@@ -0,0 +1,6 @@
+# Ensure that all shell scripts are checked out with LF line endings
+# on Windows. This is necessary because Git for Windows defaults to
+# CRLF line endings, which breaks the shell scripts.
+# NOTE: for best results on Windows, clone the code in a file system
+# under Windows Subsystem for Linux (WSL) - see https://www.docker.com/blog/docker-desktop-wsl-2-best-practices/
+*.sh text eol=lf
diff --git a/accelerators/aks-sb-azmonitor-microservices/.gitignore b/accelerators/aks-sb-azmonitor-microservices/.gitignore
new file mode 100644
index 0000000..0f84b33
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/.gitignore
@@ -0,0 +1,7 @@
+plan.out
+terraform.tfvars
+azuredeploy.parameters.json
+.env
+
+output.json
+env.yaml
diff --git a/accelerators/aks-sb-azmonitor-microservices/README.md b/accelerators/aks-sb-azmonitor-microservices/README.md
new file mode 100644
index 0000000..61e41c0
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/README.md
@@ -0,0 +1,34 @@
+# Azure Monitor in a Message Based Distributed Application
+
+Using Azure Monitor to observe a distributed application comes with unique challenges and considerations. Generating and viewing traces, ensuring service availability, and using custom telemetry to track business-critical indicators are all more complex in a distributed environment. This sample demonstrates how to automatically and manually instrument telemetry in a variety of languages within a distributed application, and provides examples of visualizations and alerts built on that incoming data.
+
+The sample contains a conceptual cargo processing application to demonstrate these points. The microservice-based solution is deployed to Azure Kubernetes Service and employs multiple communication protocols, including HTTP and message-based interactions, to enable seamless communication between its services. The services cover a wide variety of programming languages and instrumentation libraries - the Java services utilize OpenTelemetry exporters, while the Node, .NET, and Python services use the Application Insights SDKs for instrumentation purposes.
+
+The sample contains all code and documentation necessary to deploy and monitor the application. Source code for the microservices can be found in the [/src](./src/) folder, while Bicep and Terraform versions (identical output) of the supporting infrastructure can be found in the [/infrastructure/bicep](./infrastructure/bicep/) and [/infrastructure/terraform](./infrastructure/terraform/) folders, respectively.
+
+## Use Case
+
+A `cargo-processing-api` service (Java) receives a PUT request with an object in the request body containing ports, products, and other cargo-related information. The API validates the request schema and places a message containing the cargo object on an Azure Service Bus queue. A `cargo-processing-validator` service (TypeScript) validates the internal cargo properties to ensure the cargo can be successfully shipped, then places the cargo object with a boolean validation result on a Service Bus topic. Finally, two services (.NET and Python) with subscriptions to the topic receive the final message, filtering for `valid = True` or `valid = False` flags, respectively, before storing the message in a dedicated Cosmos DB container for further processing.
+
+A fifth service, `operations-api` (Java), implements the [async request-reply](https://learn.microsoft.com/azure/architecture/patterns/async-request-reply) pattern, adding a level of resiliency to the long-running operation.
+
+Each microservice sends telemetry data to Application Insights, while AKS, Key Vault, Cosmos DB, and Service Bus are each configured to export telemetry data directly to the Log Analytics Workspace associated with the Application Insights resource.
+
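+To make the flow above concrete, here is a minimal sketch of exercising the two REST services with `curl`. The host name, cargo id, operation id, and payload values are hypothetical placeholders; the routes and body shape follow the CADL definitions in [/api-spec](./api-spec/).
+
+```shell
+# Submit a cargo object (PUT /cargo/{id}) - host, id, and values are illustrative only
+curl -X PUT "http://<ingress-host>/cargo/cargo-001" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "product": { "name": "bananas", "quantity": 100 },
+        "port": { "source": "Rotterdam", "destination": "Shanghai" },
+        "demandDates": { "start": "2023-06-01", "end": "2023-06-30" }
+      }'
+
+# Check the status of the associated long-running operation (GET /operations/{id})
+curl "http://<ingress-host>/operations/<operation-id>"
+```
+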
+## Docs
+
+Getting started instructions and documentation on observability and monitoring topics within the application can be found in the following pages:
+
+| Topic | Content |
+| --------------------------------------- | ----------------------------------------------------------------------------------------------- |
+| Getting Started | [getting-started.md](./docs/getting-started.md) |
+| Auto vs Manually Instrumented Telemetry | [auto-vs-manually-instrumented-telemetry.md](./docs/auto-vs-manually-instrumented-telemetry.md) |
+| Distributed Tracing | [distributed-tracing.md](./docs/distributed-tracing.md) |
+| Health Checks | [health-checks.md](./docs/health-checks.md) |
+| Custom Dimensions | [custom-dimensions.md](./docs/custom-dimensions.md) |
+| Custom Metrics | [custom-metrics.md](./docs/custom-metrics.md) |
+| Workbooks | [workbooks.md](./docs/workbooks.md) |
+| Alerts | [alerts.md](./docs/alerts.md) |
+| Introducing Chaos | [introducing-chaos.md](./docs/introducing-chaos.md) |
+| Reducing Telemetry Volume | [reducing-telemetry-volume.md](./docs/reducing-telemetry-volume.md) |
diff --git a/accelerators/aks-sb-azmonitor-microservices/api-spec/.devcontainer/Dockerfile b/accelerators/aks-sb-azmonitor-microservices/api-spec/.devcontainer/Dockerfile
new file mode 100644
index 0000000..d64dd2c
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/api-spec/.devcontainer/Dockerfile
@@ -0,0 +1,14 @@
+# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 18, 16, 14, 18-bullseye, 16-bullseye, 14-bullseye, 18-buster, 16-buster, 14-buster
+ARG VARIANT=16-bullseye
+FROM mcr.microsoft.com/vscode/devcontainers/typescript-node:0-${VARIANT}
+
+# [Optional] Uncomment this section to install additional OS packages.
+# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
+# && apt-get -y install --no-install-recommends
+
+# [Optional] Uncomment if you want to install an additional version of node using nvm
+# ARG EXTRA_NODE_VERSION=10
+# RUN su node -c "source /usr/local/share/nvm/nvm.sh && nvm install ${EXTRA_NODE_VERSION}"
+
+RUN su node -c "npm install -g @cadl-lang/compiler"
+RUN su node -c "npm install -g cadl-vscode"
diff --git a/accelerators/aks-sb-azmonitor-microservices/api-spec/.devcontainer/base.Dockerfile b/accelerators/aks-sb-azmonitor-microservices/api-spec/.devcontainer/base.Dockerfile
new file mode 100644
index 0000000..35b6654
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/api-spec/.devcontainer/base.Dockerfile
@@ -0,0 +1,17 @@
+# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 18, 16, 14, 18-bullseye, 16-bullseye, 14-bullseye, 18-buster, 16-buster, 14-buster
+ARG VARIANT=16-bullseye
+FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
+
+# Install tslint, typescript. eslint is installed by javascript image
+ARG NODE_MODULES="tslint-to-eslint-config typescript"
+COPY library-scripts/meta.env /usr/local/etc/vscode-dev-containers
+RUN su node -c "umask 0002 && npm install -g ${NODE_MODULES}" \
+ && npm cache clean --force > /dev/null 2>&1
+
+# [Optional] Uncomment this section to install additional OS packages.
+# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
+# && apt-get -y install --no-install-recommends
+
+# [Optional] Uncomment if you want to install an additional version of node using nvm
+# ARG EXTRA_NODE_VERSION=10
+# RUN su node -c "source /usr/local/share/nvm/nvm.sh && nvm install ${EXTRA_NODE_VERSION}"
diff --git a/accelerators/aks-sb-azmonitor-microservices/api-spec/.devcontainer/devcontainer.json b/accelerators/aks-sb-azmonitor-microservices/api-spec/.devcontainer/devcontainer.json
new file mode 100644
index 0000000..bbc0729
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/api-spec/.devcontainer/devcontainer.json
@@ -0,0 +1,32 @@
+// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
+// https://github.com/microsoft/vscode-dev-containers/tree/v0.238.0/containers/typescript-node
+{
+ "name": "Node.js, TypeScript & CADL",
+ "build": {
+ "dockerfile": "Dockerfile",
+ // Update 'VARIANT' to pick a Node version: 18, 16, 14.
+ // Append -bullseye or -buster to pin to an OS version.
+ // Use -bullseye variants on local on arm64/Apple Silicon.
+ "args": {
+ "VARIANT": "16-bullseye"
+ }
+ },
+
+ // Configure tool-specific properties.
+ "customizations": {
+ // Configure properties specific to VS Code.
+ "vscode": {
+ // Add the IDs of extensions you want installed when the container is created.
+ "extensions": [
+ "dbaeumer.vscode-eslint",
+ "/usr/local/share/npm-global/lib/node_modules/cadl-vscode/cadl-vscode-0.16.0.vsix"
+ ]
+ }
+ },
+
+ // Use 'forwardPorts' to make a list of ports inside the container available locally.
+ // "forwardPorts": [],
+
+ // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
+ "remoteUser": "node"
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/api-spec/.gitignore b/accelerators/aks-sb-azmonitor-microservices/api-spec/.gitignore
new file mode 100644
index 0000000..9794b20
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/api-spec/.gitignore
@@ -0,0 +1,2 @@
+node_modules
+cadl-output
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/api-spec/cadl-project.yaml b/accelerators/aks-sb-azmonitor-microservices/api-spec/cadl-project.yaml
new file mode 100644
index 0000000..43afbf8
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/api-spec/cadl-project.yaml
@@ -0,0 +1,2 @@
+emitters:
+ "@cadl-lang/openapi3": true
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/api-spec/main.cadl b/accelerators/aks-sb-azmonitor-microservices/api-spec/main.cadl
new file mode 100644
index 0000000..d8ee9c4
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/api-spec/main.cadl
@@ -0,0 +1,26 @@
+import "@cadl-lang/rest";
+import "./models.cadl";
+
+@serviceTitle("CargoProcessingService")
+namespace CargoProcessingService;
+
+using Cadl.Http;
+using Cadl.Rest;
+using ServiceModels;
+
+@route("/operations")
+interface OperationsService {
+ @put
+ @createsOrUpdatesResource(Operation)
+ putOperation(@path id: string): Operation | Error;
+ @get
+ getOperation(@path id: string): Operation | Error;
+}
+
+@route("/cargo")
+interface CargoService {
+ @put
+ updateCargo(@path id: string, @header("operation-id") operationId?: string, @body body: Cargo): CargoHydrated | Error;
+ @post
+ createCargo(@body body: Cargo): CargoHydrated | Error;
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/api-spec/models.cadl b/accelerators/aks-sb-azmonitor-microservices/api-spec/models.cadl
new file mode 100644
index 0000000..e976cc4
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/api-spec/models.cadl
@@ -0,0 +1,65 @@
+import "@cadl-lang/rest";
+
+namespace ServiceModels;
+using Cadl.Http;
+using Cadl.Rest;
+
+@error
+model Error {
+ code: int32;
+ message: string;
+ target: string;
+}
+
+model Product {
+ name: string;
+ quantity: int32;
+}
+
+model Port {
+ source: string;
+ destination: string;
+}
+
+model DemandDates {
+ start: plainDate;
+ end: plainDate;
+}
+
+model Cargo {
+ product: Product;
+ port: Port;
+ demandDates: DemandDates;
+ @header
+ operationId: string;
+}
+
+model CargoHydrated {
+ ...Cargo;
+ @visibility("read")
+ @key
+ id: string;
+ @visibility("read")
+ timestamp: zonedDateTime;
+ @header
+  waitTime: int32;
+}
+
+model CargoValidated {
+ ...Cargo;
+ @visibility("read")
+ @key
+ id: string;
+ @visibility("read")
+ timestamp: zonedDateTime;
+ valid: boolean;
+ error: string;
+}
+
+model Operation {
+ id: string;
+ state: string;
+ result?: CargoValidated;
+ error?: string;
+ updatedAt: zonedDateTime;
+}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/api-spec/package-lock.json b/accelerators/aks-sb-azmonitor-microservices/api-spec/package-lock.json
new file mode 100644
index 0000000..df98e86
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/api-spec/package-lock.json
@@ -0,0 +1,1756 @@
+{
+ "name": "api-spec",
+ "version": "1.0.0",
+ "lockfileVersion": 2,
+ "requires": true,
+ "packages": {
+ "": {
+ "name": "api-spec",
+ "version": "1.0.0",
+ "license": "ISC",
+ "dependencies": {
+ "@cadl-lang/compiler": "0.35.0",
+ "@cadl-lang/openapi3": "0.15.0",
+ "@cadl-lang/rest": "0.17.0"
+ }
+ },
+ "node_modules/@babel/code-frame": {
+ "version": "7.16.7",
+ "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.16.7.tgz",
+ "integrity": "sha512-iAXqUn8IIeBTNd72xsFlgaXHkMBMt6y4HJp1tIaK465CWLT/fG1aqB7ykr95gHHmlBdGbFeWWfyB4NJJ0nmeIg==",
+ "dependencies": {
+ "@babel/highlight": "^7.16.7"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/helper-validator-identifier": {
+ "version": "7.18.6",
+ "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.18.6.tgz",
+ "integrity": "sha512-MmetCkz9ej86nJQV+sFCxoGGrUbU3q02kgLciwkrt9QqEB7cP39oKEY0PakknEO0Gu20SskMRi+AYZ3b1TpN9g==",
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@babel/highlight": {
+ "version": "7.18.6",
+ "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.18.6.tgz",
+ "integrity": "sha512-u7stbOuYjaPezCuLj29hNW1v64M2Md2qupEKP1fHc7WdOA3DgLh37suiSrZYY7haUB7iBeQZ9P1uiRF359do3g==",
+ "dependencies": {
+ "@babel/helper-validator-identifier": "^7.18.6",
+ "chalk": "^2.0.0",
+ "js-tokens": "^4.0.0"
+ },
+ "engines": {
+ "node": ">=6.9.0"
+ }
+ },
+ "node_modules/@cadl-lang/compiler": {
+ "version": "0.35.0",
+ "resolved": "https://registry.npmjs.org/@cadl-lang/compiler/-/compiler-0.35.0.tgz",
+ "integrity": "sha512-0hztF32Qev2K6NAenVx6at8zYGwaWrIVRIFdqyp3/6ZDJ3q8yffH9eERP0ddq2E5TOtKlWF52MgvuIOWY9qyEQ==",
+ "dependencies": {
+ "@babel/code-frame": "~7.16.7",
+ "ajv": "~8.9.0",
+ "change-case": "~4.1.2",
+ "globby": "~13.1.1",
+ "js-yaml": "~4.1.0",
+ "mkdirp": "~1.0.4",
+ "mustache": "~4.2.0",
+ "node-fetch": "3.2.8",
+ "node-watch": "~0.7.1",
+ "picocolors": "~1.0.0",
+ "prettier": "~2.7.1",
+ "prompts": "~2.4.1",
+ "vscode-languageserver": "~7.0.0",
+ "vscode-languageserver-textdocument": "~1.0.1",
+ "yargs": "~17.3.1"
+ },
+ "bin": {
+ "cadl": "cmd/cadl.js",
+ "cadl-server": "cmd/cadl-server.js"
+ },
+ "engines": {
+ "node": ">=16.0.0"
+ }
+ },
+ "node_modules/@cadl-lang/openapi": {
+ "version": "0.12.0",
+ "resolved": "https://registry.npmjs.org/@cadl-lang/openapi/-/openapi-0.12.0.tgz",
+ "integrity": "sha512-yoP/gO03oZ09e3n0oW6XgAIcVqBcUmPLQEPvrYqo0/UsZx/ibGZG8oKhhf/C3Kqrp0Vr/qcr6y7SV3NCEHE8bw==",
+ "peer": true,
+ "engines": {
+ "node": ">=16.0.0"
+ },
+ "peerDependencies": {
+ "@cadl-lang/compiler": "~0.35.0",
+ "@cadl-lang/rest": "~0.17.0"
+ }
+ },
+ "node_modules/@cadl-lang/openapi3": {
+ "version": "0.15.0",
+ "resolved": "https://registry.npmjs.org/@cadl-lang/openapi3/-/openapi3-0.15.0.tgz",
+ "integrity": "sha512-Ee0muF6/S1eLDDQ9m2/R0N/PeXNNM7J3Q+JHWNE0SepJb/LTlihyN5n/0MAAsaT0mPXoQwSe5Lt8lZ3KaDULqQ==",
+ "engines": {
+ "node": ">=16.0.0"
+ },
+ "peerDependencies": {
+ "@cadl-lang/compiler": "~0.35.0",
+ "@cadl-lang/openapi": "~0.12.0",
+ "@cadl-lang/rest": "~0.17.0",
+ "@cadl-lang/versioning": "~0.8.0"
+ }
+ },
+ "node_modules/@cadl-lang/rest": {
+ "version": "0.17.0",
+ "resolved": "https://registry.npmjs.org/@cadl-lang/rest/-/rest-0.17.0.tgz",
+ "integrity": "sha512-Q5UhVXWXW3XAuri/cAYLw3NJleCXzmqu9TDh6mc+YWbRThvfWx2GYKRbp+7WWCWI1e0zAQt4D49WkYwr/4OJRA==",
+ "engines": {
+ "node": ">=16.0.0"
+ },
+ "peerDependencies": {
+ "@cadl-lang/compiler": "~0.35.0"
+ }
+ },
+ "node_modules/@cadl-lang/versioning": {
+ "version": "0.8.0",
+ "resolved": "https://registry.npmjs.org/@cadl-lang/versioning/-/versioning-0.8.0.tgz",
+ "integrity": "sha512-TF5iWtJEaQBKmo4RN/yvzdllWwwCWVTbQnEHHAefVRoq4/ThwO5mGKZI8/RG9zeHcJOGHlvGKyu7n1xY4SlqUw==",
+ "peer": true,
+ "dependencies": {
+ "@cadl-lang/compiler": "~0.35.0"
+ },
+ "engines": {
+ "node": ">=16.0.0"
+ }
+ },
+ "node_modules/@nodelib/fs.scandir": {
+ "version": "2.1.5",
+ "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz",
+ "integrity": "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==",
+ "dependencies": {
+ "@nodelib/fs.stat": "2.0.5",
+ "run-parallel": "^1.1.9"
+ },
+ "engines": {
+ "node": ">= 8"
+ }
+ },
+ "node_modules/@nodelib/fs.stat": {
+ "version": "2.0.5",
+ "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.5.tgz",
+ "integrity": "sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A==",
+ "engines": {
+ "node": ">= 8"
+ }
+ },
+ "node_modules/@nodelib/fs.walk": {
+ "version": "1.2.8",
+ "resolved": "https://registry.npmjs.org/@nodelib/fs.walk/-/fs.walk-1.2.8.tgz",
+ "integrity": "sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==",
+ "dependencies": {
+ "@nodelib/fs.scandir": "2.1.5",
+ "fastq": "^1.6.0"
+ },
+ "engines": {
+ "node": ">= 8"
+ }
+ },
+ "node_modules/ajv": {
+ "version": "8.9.0",
+ "resolved": "https://registry.npmjs.org/ajv/-/ajv-8.9.0.tgz",
+ "integrity": "sha512-qOKJyNj/h+OWx7s5DePL6Zu1KeM9jPZhwBqs+7DzP6bGOvqzVCSf0xueYmVuaC/oQ/VtS2zLMLHdQFbkka+XDQ==",
+ "dependencies": {
+ "fast-deep-equal": "^3.1.1",
+ "json-schema-traverse": "^1.0.0",
+ "require-from-string": "^2.0.2",
+ "uri-js": "^4.2.2"
+ },
+ "funding": {
+ "type": "github",
+ "url": "https://github.com/sponsors/epoberezkin"
+ }
+ },
+ "node_modules/ansi-regex": {
+ "version": "5.0.1",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
+ "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==",
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/ansi-styles": {
+ "version": "3.2.1",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.1.tgz",
+ "integrity": "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==",
+ "dependencies": {
+ "color-convert": "^1.9.0"
+ },
+ "engines": {
+ "node": ">=4"
+ }
+ },
+ "node_modules/argparse": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz",
+ "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q=="
+ },
+ "node_modules/braces": {
+ "version": "3.0.2",
+ "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz",
+ "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==",
+ "dependencies": {
+ "fill-range": "^7.0.1"
+ },
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/camel-case": {
+ "version": "4.1.2",
+ "resolved": "https://registry.npmjs.org/camel-case/-/camel-case-4.1.2.tgz",
+ "integrity": "sha512-gxGWBrTT1JuMx6R+o5PTXMmUnhnVzLQ9SNutD4YqKtI6ap897t3tKECYla6gCWEkplXnlNybEkZg9GEGxKFCgw==",
+ "dependencies": {
+ "pascal-case": "^3.1.2",
+ "tslib": "^2.0.3"
+ }
+ },
+ "node_modules/capital-case": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmjs.org/capital-case/-/capital-case-1.0.4.tgz",
+ "integrity": "sha512-ds37W8CytHgwnhGGTi88pcPyR15qoNkOpYwmMMfnWqqWgESapLqvDx6huFjQ5vqWSn2Z06173XNA7LtMOeUh1A==",
+ "dependencies": {
+ "no-case": "^3.0.4",
+ "tslib": "^2.0.3",
+ "upper-case-first": "^2.0.2"
+ }
+ },
+ "node_modules/chalk": {
+ "version": "2.4.2",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz",
+ "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==",
+ "dependencies": {
+ "ansi-styles": "^3.2.1",
+ "escape-string-regexp": "^1.0.5",
+ "supports-color": "^5.3.0"
+ },
+ "engines": {
+ "node": ">=4"
+ }
+ },
+ "node_modules/change-case": {
+ "version": "4.1.2",
+ "resolved": "https://registry.npmjs.org/change-case/-/change-case-4.1.2.tgz",
+ "integrity": "sha512-bSxY2ws9OtviILG1EiY5K7NNxkqg/JnRnFxLtKQ96JaviiIxi7djMrSd0ECT9AC+lttClmYwKw53BWpOMblo7A==",
+ "dependencies": {
+ "camel-case": "^4.1.2",
+ "capital-case": "^1.0.4",
+ "constant-case": "^3.0.4",
+ "dot-case": "^3.0.4",
+ "header-case": "^2.0.4",
+ "no-case": "^3.0.4",
+ "param-case": "^3.0.4",
+ "pascal-case": "^3.1.2",
+ "path-case": "^3.0.4",
+ "sentence-case": "^3.0.4",
+ "snake-case": "^3.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "node_modules/cliui": {
+ "version": "7.0.4",
+ "resolved": "https://registry.npmjs.org/cliui/-/cliui-7.0.4.tgz",
+ "integrity": "sha512-OcRE68cOsVMXp1Yvonl/fzkQOyjLSu/8bhPDfQt0e0/Eb283TKP20Fs2MqoPsr9SwA595rRCA+QMzYc9nBP+JQ==",
+ "dependencies": {
+ "string-width": "^4.2.0",
+ "strip-ansi": "^6.0.0",
+ "wrap-ansi": "^7.0.0"
+ }
+ },
+ "node_modules/color-convert": {
+ "version": "1.9.3",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz",
+ "integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==",
+ "dependencies": {
+ "color-name": "1.1.3"
+ }
+ },
+ "node_modules/color-name": {
+ "version": "1.1.3",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz",
+ "integrity": "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw=="
+ },
+ "node_modules/constant-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/constant-case/-/constant-case-3.0.4.tgz",
+ "integrity": "sha512-I2hSBi7Vvs7BEuJDr5dDHfzb/Ruj3FyvFyh7KLilAjNQw3Be+xgqUBA2W6scVEcL0hL1dwPRtIqEPVUCKkSsyQ==",
+ "dependencies": {
+ "no-case": "^3.0.4",
+ "tslib": "^2.0.3",
+ "upper-case": "^2.0.2"
+ }
+ },
+ "node_modules/data-uri-to-buffer": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/data-uri-to-buffer/-/data-uri-to-buffer-4.0.0.tgz",
+ "integrity": "sha512-Vr3mLBA8qWmcuschSLAOogKgQ/Jwxulv3RNE4FXnYWRGujzrRWQI4m12fQqRkwX06C0KanhLr4hK+GydchZsaA==",
+ "engines": {
+ "node": ">= 12"
+ }
+ },
+ "node_modules/dir-glob": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz",
+ "integrity": "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==",
+ "dependencies": {
+ "path-type": "^4.0.0"
+ },
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/dot-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/dot-case/-/dot-case-3.0.4.tgz",
+ "integrity": "sha512-Kv5nKlh6yRrdrGvxeJ2e5y2eRUpkUosIW4A2AS38zwSz27zu7ufDwQPi5Jhs3XAlGNetl3bmnGhQsMtkKJnj3w==",
+ "dependencies": {
+ "no-case": "^3.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "node_modules/emoji-regex": {
+ "version": "8.0.0",
+ "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
+ "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="
+ },
+ "node_modules/escalade": {
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.1.1.tgz",
+ "integrity": "sha512-k0er2gUkLf8O0zKJiAhmkTnJlTvINGv7ygDNPbeIsX/TJjGJZHuh9B2UxbsaEkmlEo9MfhrSzmhIlhRlI2GXnw==",
+ "engines": {
+ "node": ">=6"
+ }
+ },
+ "node_modules/escape-string-regexp": {
+ "version": "1.0.5",
+ "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz",
+ "integrity": "sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg==",
+ "engines": {
+ "node": ">=0.8.0"
+ }
+ },
+ "node_modules/fast-deep-equal": {
+ "version": "3.1.3",
+ "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz",
+ "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q=="
+ },
+ "node_modules/fast-glob": {
+ "version": "3.2.12",
+ "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.2.12.tgz",
+ "integrity": "sha512-DVj4CQIYYow0BlaelwK1pHl5n5cRSJfM60UA0zK891sVInoPri2Ekj7+e1CT3/3qxXenpI+nBBmQAcJPJgaj4w==",
+ "dependencies": {
+ "@nodelib/fs.stat": "^2.0.2",
+ "@nodelib/fs.walk": "^1.2.3",
+ "glob-parent": "^5.1.2",
+ "merge2": "^1.3.0",
+ "micromatch": "^4.0.4"
+ },
+ "engines": {
+ "node": ">=8.6.0"
+ }
+ },
+ "node_modules/fastq": {
+ "version": "1.13.0",
+ "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.13.0.tgz",
+ "integrity": "sha512-YpkpUnK8od0o1hmeSc7UUs/eB/vIPWJYjKck2QKIzAf71Vm1AAQ3EbuZB3g2JIy+pg+ERD0vqI79KyZiB2e2Nw==",
+ "dependencies": {
+ "reusify": "^1.0.4"
+ }
+ },
+ "node_modules/fetch-blob": {
+ "version": "3.2.0",
+ "resolved": "https://registry.npmjs.org/fetch-blob/-/fetch-blob-3.2.0.tgz",
+ "integrity": "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ==",
+ "funding": [
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/jimmywarting"
+ },
+ {
+ "type": "paypal",
+ "url": "https://paypal.me/jimmywarting"
+ }
+ ],
+ "dependencies": {
+ "node-domexception": "^1.0.0",
+ "web-streams-polyfill": "^3.0.3"
+ },
+ "engines": {
+ "node": "^12.20 || >= 14.13"
+ }
+ },
+ "node_modules/fill-range": {
+ "version": "7.0.1",
+ "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz",
+ "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==",
+ "dependencies": {
+ "to-regex-range": "^5.0.1"
+ },
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/formdata-polyfill": {
+ "version": "4.0.10",
+ "resolved": "https://registry.npmjs.org/formdata-polyfill/-/formdata-polyfill-4.0.10.tgz",
+ "integrity": "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g==",
+ "dependencies": {
+ "fetch-blob": "^3.1.2"
+ },
+ "engines": {
+ "node": ">=12.20.0"
+ }
+ },
+ "node_modules/get-caller-file": {
+ "version": "2.0.5",
+ "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz",
+ "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==",
+ "engines": {
+ "node": "6.* || 8.* || >= 10.*"
+ }
+ },
+ "node_modules/glob-parent": {
+ "version": "5.1.2",
+ "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz",
+ "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==",
+ "dependencies": {
+ "is-glob": "^4.0.1"
+ },
+ "engines": {
+ "node": ">= 6"
+ }
+ },
+ "node_modules/globby": {
+ "version": "13.1.2",
+ "resolved": "https://registry.npmjs.org/globby/-/globby-13.1.2.tgz",
+ "integrity": "sha512-LKSDZXToac40u8Q1PQtZihbNdTYSNMuWe+K5l+oa6KgDzSvVrHXlJy40hUP522RjAIoNLJYBJi7ow+rbFpIhHQ==",
+ "dependencies": {
+ "dir-glob": "^3.0.1",
+ "fast-glob": "^3.2.11",
+ "ignore": "^5.2.0",
+ "merge2": "^1.4.1",
+ "slash": "^4.0.0"
+ },
+ "engines": {
+ "node": "^12.20.0 || ^14.13.1 || >=16.0.0"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/has-flag": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz",
+ "integrity": "sha512-sKJf1+ceQBr4SMkvQnBDNDtf4TXpVhVGateu0t918bl30FnbE2m4vNLX+VWe/dpjlb+HugGYzW7uQXH98HPEYw==",
+ "engines": {
+ "node": ">=4"
+ }
+ },
+ "node_modules/header-case": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/header-case/-/header-case-2.0.4.tgz",
+ "integrity": "sha512-H/vuk5TEEVZwrR0lp2zed9OCo1uAILMlx0JEMgC26rzyJJ3N1v6XkwHHXJQdR2doSjcGPM6OKPYoJgf0plJ11Q==",
+ "dependencies": {
+ "capital-case": "^1.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "node_modules/ignore": {
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.2.0.tgz",
+ "integrity": "sha512-CmxgYGiEPCLhfLnpPp1MoRmifwEIOgjcHXxOBjv7mY96c+eWScsOP9c112ZyLdWHi0FxHjI+4uVhKYp/gcdRmQ==",
+ "engines": {
+ "node": ">= 4"
+ }
+ },
+ "node_modules/is-extglob": {
+ "version": "2.1.1",
+ "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz",
+ "integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==",
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/is-fullwidth-code-point": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz",
+ "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==",
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/is-glob": {
+ "version": "4.0.3",
+ "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz",
+ "integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==",
+ "dependencies": {
+ "is-extglob": "^2.1.1"
+ },
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/is-number": {
+ "version": "7.0.0",
+ "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz",
+ "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==",
+ "engines": {
+ "node": ">=0.12.0"
+ }
+ },
+ "node_modules/js-tokens": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-4.0.0.tgz",
+ "integrity": "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ=="
+ },
+ "node_modules/js-yaml": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.0.tgz",
+ "integrity": "sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA==",
+ "dependencies": {
+ "argparse": "^2.0.1"
+ },
+ "bin": {
+ "js-yaml": "bin/js-yaml.js"
+ }
+ },
+ "node_modules/json-schema-traverse": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz",
+ "integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug=="
+ },
+ "node_modules/kleur": {
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/kleur/-/kleur-3.0.3.tgz",
+ "integrity": "sha512-eTIzlVOSUR+JxdDFepEYcBMtZ9Qqdef+rnzWdRZuMbOywu5tO2w2N7rqjoANZ5k9vywhL6Br1VRjUIgTQx4E8w==",
+ "engines": {
+ "node": ">=6"
+ }
+ },
+ "node_modules/lower-case": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/lower-case/-/lower-case-2.0.2.tgz",
+ "integrity": "sha512-7fm3l3NAF9WfN6W3JOmf5drwpVqX78JtoGJ3A6W0a6ZnldM41w2fV5D490psKFTpMds8TJse/eHLFFsNHHjHgg==",
+ "dependencies": {
+ "tslib": "^2.0.3"
+ }
+ },
+ "node_modules/merge2": {
+ "version": "1.4.1",
+ "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz",
+ "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==",
+ "engines": {
+ "node": ">= 8"
+ }
+ },
+ "node_modules/micromatch": {
+ "version": "4.0.5",
+ "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.5.tgz",
+ "integrity": "sha512-DMy+ERcEW2q8Z2Po+WNXuw3c5YaUSFjAO5GsJqfEl7UjvtIuFKO6ZrKvcItdy98dwFI2N1tg3zNIdKaQT+aNdA==",
+ "dependencies": {
+ "braces": "^3.0.2",
+ "picomatch": "^2.3.1"
+ },
+ "engines": {
+ "node": ">=8.6"
+ }
+ },
+ "node_modules/mkdirp": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-1.0.4.tgz",
+ "integrity": "sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw==",
+ "bin": {
+ "mkdirp": "bin/cmd.js"
+ },
+ "engines": {
+ "node": ">=10"
+ }
+ },
+ "node_modules/mustache": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/mustache/-/mustache-4.2.0.tgz",
+ "integrity": "sha512-71ippSywq5Yb7/tVYyGbkBggbU8H3u5Rz56fH60jGFgr8uHwxs+aSKeqmluIVzM0m0kB7xQjKS6qPfd0b2ZoqQ==",
+ "bin": {
+ "mustache": "bin/mustache"
+ }
+ },
+ "node_modules/no-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/no-case/-/no-case-3.0.4.tgz",
+ "integrity": "sha512-fgAN3jGAh+RoxUGZHTSOLJIqUc2wmoBwGR4tbpNAKmmovFoWq0OdRkb0VkldReO2a2iBT/OEulG9XSUc10r3zg==",
+ "dependencies": {
+ "lower-case": "^2.0.2",
+ "tslib": "^2.0.3"
+ }
+ },
+ "node_modules/node-domexception": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz",
+ "integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ==",
+ "funding": [
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/jimmywarting"
+ },
+ {
+ "type": "github",
+ "url": "https://paypal.me/jimmywarting"
+ }
+ ],
+ "engines": {
+ "node": ">=10.5.0"
+ }
+ },
+ "node_modules/node-fetch": {
+ "version": "3.2.8",
+ "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-3.2.8.tgz",
+ "integrity": "sha512-KtpD1YhGszhntMpBDyp5lyagk8KIMopC1LEb7cQUAh7zcosaX5uK8HnbNb2i3NTQK3sIawCItS0uFC3QzcLHdg==",
+ "dependencies": {
+ "data-uri-to-buffer": "^4.0.0",
+ "fetch-blob": "^3.1.4",
+ "formdata-polyfill": "^4.0.10"
+ },
+ "engines": {
+ "node": "^12.20.0 || ^14.13.1 || >=16.0.0"
+ },
+ "funding": {
+ "type": "opencollective",
+ "url": "https://opencollective.com/node-fetch"
+ }
+ },
+ "node_modules/node-watch": {
+ "version": "0.7.3",
+ "resolved": "https://registry.npmjs.org/node-watch/-/node-watch-0.7.3.tgz",
+ "integrity": "sha512-3l4E8uMPY1HdMMryPRUAl+oIHtXtyiTlIiESNSVSNxcPfzAFzeTbXFQkZfAwBbo0B1qMSG8nUABx+Gd+YrbKrQ==",
+ "engines": {
+ "node": ">=6"
+ }
+ },
+ "node_modules/param-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/param-case/-/param-case-3.0.4.tgz",
+ "integrity": "sha512-RXlj7zCYokReqWpOPH9oYivUzLYZ5vAPIfEmCTNViosC78F8F0H9y7T7gG2M39ymgutxF5gcFEsyZQSph9Bp3A==",
+ "dependencies": {
+ "dot-case": "^3.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "node_modules/pascal-case": {
+ "version": "3.1.2",
+ "resolved": "https://registry.npmjs.org/pascal-case/-/pascal-case-3.1.2.tgz",
+ "integrity": "sha512-uWlGT3YSnK9x3BQJaOdcZwrnV6hPpd8jFH1/ucpiLRPh/2zCVJKS19E4GvYHvaCcACn3foXZ0cLB9Wrx1KGe5g==",
+ "dependencies": {
+ "no-case": "^3.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "node_modules/path-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/path-case/-/path-case-3.0.4.tgz",
+ "integrity": "sha512-qO4qCFjXqVTrcbPt/hQfhTQ+VhFsqNKOPtytgNKkKxSoEp3XPUQ8ObFuePylOIok5gjn69ry8XiULxCwot3Wfg==",
+ "dependencies": {
+ "dot-case": "^3.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "node_modules/path-type": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz",
+ "integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==",
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/picocolors": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.0.0.tgz",
+ "integrity": "sha512-1fygroTLlHu66zi26VoTDv8yRgm0Fccecssto+MhsZ0D/DGW2sm8E8AjW7NU5VVTRt5GxbeZ5qBuJr+HyLYkjQ=="
+ },
+ "node_modules/picomatch": {
+ "version": "2.3.1",
+ "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz",
+ "integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==",
+ "engines": {
+ "node": ">=8.6"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/jonschlinkert"
+ }
+ },
+ "node_modules/prettier": {
+ "version": "2.7.1",
+ "resolved": "https://registry.npmjs.org/prettier/-/prettier-2.7.1.tgz",
+ "integrity": "sha512-ujppO+MkdPqoVINuDFDRLClm7D78qbDt0/NR+wp5FqEZOoTNAjPHWj17QRhu7geIHJfcNhRk1XVQmF8Bp3ye+g==",
+ "bin": {
+ "prettier": "bin-prettier.js"
+ },
+ "engines": {
+ "node": ">=10.13.0"
+ },
+ "funding": {
+ "url": "https://github.com/prettier/prettier?sponsor=1"
+ }
+ },
+ "node_modules/prompts": {
+ "version": "2.4.2",
+ "resolved": "https://registry.npmjs.org/prompts/-/prompts-2.4.2.tgz",
+ "integrity": "sha512-NxNv/kLguCA7p3jE8oL2aEBsrJWgAakBpgmgK6lpPWV+WuOmY6r2/zbAVnP+T8bQlA0nzHXSJSJW0Hq7ylaD2Q==",
+ "dependencies": {
+ "kleur": "^3.0.3",
+ "sisteransi": "^1.0.5"
+ },
+ "engines": {
+ "node": ">= 6"
+ }
+ },
+ "node_modules/punycode": {
+ "version": "2.1.1",
+ "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz",
+ "integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==",
+ "engines": {
+ "node": ">=6"
+ }
+ },
+ "node_modules/queue-microtask": {
+ "version": "1.2.3",
+ "resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz",
+ "integrity": "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==",
+ "funding": [
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/feross"
+ },
+ {
+ "type": "patreon",
+ "url": "https://www.patreon.com/feross"
+ },
+ {
+ "type": "consulting",
+ "url": "https://feross.org/support"
+ }
+ ]
+ },
+ "node_modules/require-directory": {
+ "version": "2.1.1",
+ "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz",
+ "integrity": "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==",
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/require-from-string": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz",
+ "integrity": "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==",
+ "engines": {
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/reusify": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz",
+ "integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==",
+ "engines": {
+ "iojs": ">=1.0.0",
+ "node": ">=0.10.0"
+ }
+ },
+ "node_modules/run-parallel": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.2.0.tgz",
+ "integrity": "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==",
+ "funding": [
+ {
+ "type": "github",
+ "url": "https://github.com/sponsors/feross"
+ },
+ {
+ "type": "patreon",
+ "url": "https://www.patreon.com/feross"
+ },
+ {
+ "type": "consulting",
+ "url": "https://feross.org/support"
+ }
+ ],
+ "dependencies": {
+ "queue-microtask": "^1.2.2"
+ }
+ },
+ "node_modules/sentence-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/sentence-case/-/sentence-case-3.0.4.tgz",
+ "integrity": "sha512-8LS0JInaQMCRoQ7YUytAo/xUu5W2XnQxV2HI/6uM6U7CITS1RqPElr30V6uIqyMKM9lJGRVFy5/4CuzcixNYSg==",
+ "dependencies": {
+ "no-case": "^3.0.4",
+ "tslib": "^2.0.3",
+ "upper-case-first": "^2.0.2"
+ }
+ },
+ "node_modules/sisteransi": {
+ "version": "1.0.5",
+ "resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.5.tgz",
+ "integrity": "sha512-bLGGlR1QxBcynn2d5YmDX4MGjlZvy2MRBDRNHLJ8VI6l6+9FUiyTFNJ0IveOSP0bcXgVDPRcfGqA0pjaqUpfVg=="
+ },
+ "node_modules/slash": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/slash/-/slash-4.0.0.tgz",
+ "integrity": "sha512-3dOsAHXXUkQTpOYcoAxLIorMTp4gIQr5IW3iVb7A7lFIp0VHhnynm9izx6TssdrIcVIESAlVjtnO2K8bg+Coew==",
+ "engines": {
+ "node": ">=12"
+ },
+ "funding": {
+ "url": "https://github.com/sponsors/sindresorhus"
+ }
+ },
+ "node_modules/snake-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/snake-case/-/snake-case-3.0.4.tgz",
+ "integrity": "sha512-LAOh4z89bGQvl9pFfNF8V146i7o7/CqFPbqzYgP+yYzDIDeS9HaNFtXABamRW+AQzEVODcvE79ljJ+8a9YSdMg==",
+ "dependencies": {
+ "dot-case": "^3.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "node_modules/string-width": {
+ "version": "4.2.3",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz",
+ "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
+ "dependencies": {
+ "emoji-regex": "^8.0.0",
+ "is-fullwidth-code-point": "^3.0.0",
+ "strip-ansi": "^6.0.1"
+ },
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/strip-ansi": {
+ "version": "6.0.1",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
+ "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
+ "dependencies": {
+ "ansi-regex": "^5.0.1"
+ },
+ "engines": {
+ "node": ">=8"
+ }
+ },
+ "node_modules/supports-color": {
+ "version": "5.5.0",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz",
+ "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==",
+ "dependencies": {
+ "has-flag": "^3.0.0"
+ },
+ "engines": {
+ "node": ">=4"
+ }
+ },
+ "node_modules/to-regex-range": {
+ "version": "5.0.1",
+ "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
+ "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==",
+ "dependencies": {
+ "is-number": "^7.0.0"
+ },
+ "engines": {
+ "node": ">=8.0"
+ }
+ },
+ "node_modules/tslib": {
+ "version": "2.4.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.4.0.tgz",
+ "integrity": "sha512-d6xOpEDfsi2CZVlPQzGeux8XMwLT9hssAsaPYExaQMuYskwb+x1x7J371tWlbBdWHroy99KnVB6qIkUbs5X3UQ=="
+ },
+ "node_modules/upper-case": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/upper-case/-/upper-case-2.0.2.tgz",
+ "integrity": "sha512-KgdgDGJt2TpuwBUIjgG6lzw2GWFRCW9Qkfkiv0DxqHHLYJHmtmdUIKcZd8rHgFSjopVTlw6ggzCm1b8MFQwikg==",
+ "dependencies": {
+ "tslib": "^2.0.3"
+ }
+ },
+ "node_modules/upper-case-first": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/upper-case-first/-/upper-case-first-2.0.2.tgz",
+ "integrity": "sha512-514ppYHBaKwfJRK/pNC6c/OxfGa0obSnAl106u97Ed0I625Nin96KAjttZF6ZL3e1XLtphxnqrOi9iWgm+u+bg==",
+ "dependencies": {
+ "tslib": "^2.0.3"
+ }
+ },
+ "node_modules/uri-js": {
+ "version": "4.4.1",
+ "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz",
+ "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==",
+ "dependencies": {
+ "punycode": "^2.1.0"
+ }
+ },
+ "node_modules/vscode-jsonrpc": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/vscode-jsonrpc/-/vscode-jsonrpc-6.0.0.tgz",
+ "integrity": "sha512-wnJA4BnEjOSyFMvjZdpiOwhSq9uDoK8e/kpRJDTaMYzwlkrhG1fwDIZI94CLsLzlCK5cIbMMtFlJlfR57Lavmg==",
+ "engines": {
+ "node": ">=8.0.0 || >=10.0.0"
+ }
+ },
+ "node_modules/vscode-languageserver": {
+ "version": "7.0.0",
+ "resolved": "https://registry.npmjs.org/vscode-languageserver/-/vscode-languageserver-7.0.0.tgz",
+ "integrity": "sha512-60HTx5ID+fLRcgdHfmz0LDZAXYEV68fzwG0JWwEPBode9NuMYTIxuYXPg4ngO8i8+Ou0lM7y6GzaYWbiDL0drw==",
+ "dependencies": {
+ "vscode-languageserver-protocol": "3.16.0"
+ },
+ "bin": {
+ "installServerIntoExtension": "bin/installServerIntoExtension"
+ }
+ },
+ "node_modules/vscode-languageserver-protocol": {
+ "version": "3.16.0",
+ "resolved": "https://registry.npmjs.org/vscode-languageserver-protocol/-/vscode-languageserver-protocol-3.16.0.tgz",
+ "integrity": "sha512-sdeUoAawceQdgIfTI+sdcwkiK2KU+2cbEYA0agzM2uqaUy2UpnnGHtWTHVEtS0ES4zHU0eMFRGN+oQgDxlD66A==",
+ "dependencies": {
+ "vscode-jsonrpc": "6.0.0",
+ "vscode-languageserver-types": "3.16.0"
+ }
+ },
+ "node_modules/vscode-languageserver-textdocument": {
+ "version": "1.0.7",
+ "resolved": "https://registry.npmjs.org/vscode-languageserver-textdocument/-/vscode-languageserver-textdocument-1.0.7.tgz",
+ "integrity": "sha512-bFJH7UQxlXT8kKeyiyu41r22jCZXG8kuuVVA33OEJn1diWOZK5n8zBSPZFHVBOu8kXZ6h0LIRhf5UnCo61J4Hg=="
+ },
+ "node_modules/vscode-languageserver-types": {
+ "version": "3.16.0",
+ "resolved": "https://registry.npmjs.org/vscode-languageserver-types/-/vscode-languageserver-types-3.16.0.tgz",
+ "integrity": "sha512-k8luDIWJWyenLc5ToFQQMaSrqCHiLwyKPHKPQZ5zz21vM+vIVUSvsRpcbiECH4WR88K2XZqc4ScRcZ7nk/jbeA=="
+ },
+ "node_modules/web-streams-polyfill": {
+ "version": "3.2.1",
+ "resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-3.2.1.tgz",
+ "integrity": "sha512-e0MO3wdXWKrLbL0DgGnUV7WHVuw9OUvL4hjgnPkIeEvESk74gAITi5G606JtZPp39cd8HA9VQzCIvA49LpPN5Q==",
+ "engines": {
+ "node": ">= 8"
+ }
+ },
+ "node_modules/wrap-ansi": {
+ "version": "7.0.0",
+ "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz",
+ "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==",
+ "dependencies": {
+ "ansi-styles": "^4.0.0",
+ "string-width": "^4.1.0",
+ "strip-ansi": "^6.0.0"
+ },
+ "engines": {
+ "node": ">=10"
+ },
+ "funding": {
+ "url": "https://github.com/chalk/wrap-ansi?sponsor=1"
+ }
+ },
+ "node_modules/wrap-ansi/node_modules/ansi-styles": {
+ "version": "4.3.0",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz",
+ "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==",
+ "dependencies": {
+ "color-convert": "^2.0.1"
+ },
+ "engines": {
+ "node": ">=8"
+ },
+ "funding": {
+ "url": "https://github.com/chalk/ansi-styles?sponsor=1"
+ }
+ },
+ "node_modules/wrap-ansi/node_modules/color-convert": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
+ "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
+ "dependencies": {
+ "color-name": "~1.1.4"
+ },
+ "engines": {
+ "node": ">=7.0.0"
+ }
+ },
+ "node_modules/wrap-ansi/node_modules/color-name": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
+ "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="
+ },
+ "node_modules/y18n": {
+ "version": "5.0.8",
+ "resolved": "https://registry.npmjs.org/y18n/-/y18n-5.0.8.tgz",
+ "integrity": "sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==",
+ "engines": {
+ "node": ">=10"
+ }
+ },
+ "node_modules/yargs": {
+ "version": "17.3.1",
+ "resolved": "https://registry.npmjs.org/yargs/-/yargs-17.3.1.tgz",
+ "integrity": "sha512-WUANQeVgjLbNsEmGk20f+nlHgOqzRFpiGWVaBrYGYIGANIIu3lWjoyi0fNlFmJkvfhCZ6BXINe7/W2O2bV4iaA==",
+ "dependencies": {
+ "cliui": "^7.0.2",
+ "escalade": "^3.1.1",
+ "get-caller-file": "^2.0.5",
+ "require-directory": "^2.1.1",
+ "string-width": "^4.2.3",
+ "y18n": "^5.0.5",
+ "yargs-parser": "^21.0.0"
+ },
+ "engines": {
+ "node": ">=12"
+ }
+ },
+ "node_modules/yargs-parser": {
+ "version": "21.1.1",
+ "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-21.1.1.tgz",
+ "integrity": "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw==",
+ "engines": {
+ "node": ">=12"
+ }
+ }
+ },
+ "dependencies": {
+ "@babel/code-frame": {
+ "version": "7.16.7",
+ "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.16.7.tgz",
+ "integrity": "sha512-iAXqUn8IIeBTNd72xsFlgaXHkMBMt6y4HJp1tIaK465CWLT/fG1aqB7ykr95gHHmlBdGbFeWWfyB4NJJ0nmeIg==",
+ "requires": {
+ "@babel/highlight": "^7.16.7"
+ }
+ },
+ "@babel/helper-validator-identifier": {
+ "version": "7.18.6",
+ "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.18.6.tgz",
+ "integrity": "sha512-MmetCkz9ej86nJQV+sFCxoGGrUbU3q02kgLciwkrt9QqEB7cP39oKEY0PakknEO0Gu20SskMRi+AYZ3b1TpN9g=="
+ },
+ "@babel/highlight": {
+ "version": "7.18.6",
+ "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.18.6.tgz",
+ "integrity": "sha512-u7stbOuYjaPezCuLj29hNW1v64M2Md2qupEKP1fHc7WdOA3DgLh37suiSrZYY7haUB7iBeQZ9P1uiRF359do3g==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.18.6",
+ "chalk": "^2.0.0",
+ "js-tokens": "^4.0.0"
+ }
+ },
+ "@cadl-lang/compiler": {
+ "version": "0.35.0",
+ "resolved": "https://registry.npmjs.org/@cadl-lang/compiler/-/compiler-0.35.0.tgz",
+ "integrity": "sha512-0hztF32Qev2K6NAenVx6at8zYGwaWrIVRIFdqyp3/6ZDJ3q8yffH9eERP0ddq2E5TOtKlWF52MgvuIOWY9qyEQ==",
+ "requires": {
+ "@babel/code-frame": "~7.16.7",
+ "ajv": "~8.9.0",
+ "change-case": "~4.1.2",
+ "globby": "~13.1.1",
+ "js-yaml": "~4.1.0",
+ "mkdirp": "~1.0.4",
+ "mustache": "~4.2.0",
+ "node-fetch": "3.2.8",
+ "node-watch": "~0.7.1",
+ "picocolors": "~1.0.0",
+ "prettier": "~2.7.1",
+ "prompts": "~2.4.1",
+ "vscode-languageserver": "~7.0.0",
+ "vscode-languageserver-textdocument": "~1.0.1",
+ "yargs": "~17.3.1"
+ }
+ },
+ "@cadl-lang/openapi": {
+ "version": "0.12.0",
+ "resolved": "https://registry.npmjs.org/@cadl-lang/openapi/-/openapi-0.12.0.tgz",
+ "integrity": "sha512-yoP/gO03oZ09e3n0oW6XgAIcVqBcUmPLQEPvrYqo0/UsZx/ibGZG8oKhhf/C3Kqrp0Vr/qcr6y7SV3NCEHE8bw==",
+ "peer": true,
+ "requires": {}
+ },
+ "@cadl-lang/openapi3": {
+ "version": "0.15.0",
+ "resolved": "https://registry.npmjs.org/@cadl-lang/openapi3/-/openapi3-0.15.0.tgz",
+ "integrity": "sha512-Ee0muF6/S1eLDDQ9m2/R0N/PeXNNM7J3Q+JHWNE0SepJb/LTlihyN5n/0MAAsaT0mPXoQwSe5Lt8lZ3KaDULqQ==",
+ "requires": {}
+ },
+ "@cadl-lang/rest": {
+ "version": "0.17.0",
+ "resolved": "https://registry.npmjs.org/@cadl-lang/rest/-/rest-0.17.0.tgz",
+ "integrity": "sha512-Q5UhVXWXW3XAuri/cAYLw3NJleCXzmqu9TDh6mc+YWbRThvfWx2GYKRbp+7WWCWI1e0zAQt4D49WkYwr/4OJRA==",
+ "requires": {}
+ },
+ "@cadl-lang/versioning": {
+ "version": "0.8.0",
+ "resolved": "https://registry.npmjs.org/@cadl-lang/versioning/-/versioning-0.8.0.tgz",
+ "integrity": "sha512-TF5iWtJEaQBKmo4RN/yvzdllWwwCWVTbQnEHHAefVRoq4/ThwO5mGKZI8/RG9zeHcJOGHlvGKyu7n1xY4SlqUw==",
+ "peer": true,
+ "requires": {
+ "@cadl-lang/compiler": "~0.35.0"
+ }
+ },
+ "@nodelib/fs.scandir": {
+ "version": "2.1.5",
+ "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz",
+ "integrity": "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==",
+ "requires": {
+ "@nodelib/fs.stat": "2.0.5",
+ "run-parallel": "^1.1.9"
+ }
+ },
+ "@nodelib/fs.stat": {
+ "version": "2.0.5",
+ "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.5.tgz",
+ "integrity": "sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A=="
+ },
+ "@nodelib/fs.walk": {
+ "version": "1.2.8",
+ "resolved": "https://registry.npmjs.org/@nodelib/fs.walk/-/fs.walk-1.2.8.tgz",
+ "integrity": "sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==",
+ "requires": {
+ "@nodelib/fs.scandir": "2.1.5",
+ "fastq": "^1.6.0"
+ }
+ },
+ "ajv": {
+ "version": "8.9.0",
+ "resolved": "https://registry.npmjs.org/ajv/-/ajv-8.9.0.tgz",
+ "integrity": "sha512-qOKJyNj/h+OWx7s5DePL6Zu1KeM9jPZhwBqs+7DzP6bGOvqzVCSf0xueYmVuaC/oQ/VtS2zLMLHdQFbkka+XDQ==",
+ "requires": {
+ "fast-deep-equal": "^3.1.1",
+ "json-schema-traverse": "^1.0.0",
+ "require-from-string": "^2.0.2",
+ "uri-js": "^4.2.2"
+ }
+ },
+ "ansi-regex": {
+ "version": "5.0.1",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz",
+ "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="
+ },
+ "ansi-styles": {
+ "version": "3.2.1",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.1.tgz",
+ "integrity": "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==",
+ "requires": {
+ "color-convert": "^1.9.0"
+ }
+ },
+ "argparse": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz",
+ "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q=="
+ },
+ "braces": {
+ "version": "3.0.2",
+ "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz",
+ "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==",
+ "requires": {
+ "fill-range": "^7.0.1"
+ }
+ },
+ "camel-case": {
+ "version": "4.1.2",
+ "resolved": "https://registry.npmjs.org/camel-case/-/camel-case-4.1.2.tgz",
+ "integrity": "sha512-gxGWBrTT1JuMx6R+o5PTXMmUnhnVzLQ9SNutD4YqKtI6ap897t3tKECYla6gCWEkplXnlNybEkZg9GEGxKFCgw==",
+ "requires": {
+ "pascal-case": "^3.1.2",
+ "tslib": "^2.0.3"
+ }
+ },
+ "capital-case": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmjs.org/capital-case/-/capital-case-1.0.4.tgz",
+ "integrity": "sha512-ds37W8CytHgwnhGGTi88pcPyR15qoNkOpYwmMMfnWqqWgESapLqvDx6huFjQ5vqWSn2Z06173XNA7LtMOeUh1A==",
+ "requires": {
+ "no-case": "^3.0.4",
+ "tslib": "^2.0.3",
+ "upper-case-first": "^2.0.2"
+ }
+ },
+ "chalk": {
+ "version": "2.4.2",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz",
+ "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==",
+ "requires": {
+ "ansi-styles": "^3.2.1",
+ "escape-string-regexp": "^1.0.5",
+ "supports-color": "^5.3.0"
+ }
+ },
+ "change-case": {
+ "version": "4.1.2",
+ "resolved": "https://registry.npmjs.org/change-case/-/change-case-4.1.2.tgz",
+ "integrity": "sha512-bSxY2ws9OtviILG1EiY5K7NNxkqg/JnRnFxLtKQ96JaviiIxi7djMrSd0ECT9AC+lttClmYwKw53BWpOMblo7A==",
+ "requires": {
+ "camel-case": "^4.1.2",
+ "capital-case": "^1.0.4",
+ "constant-case": "^3.0.4",
+ "dot-case": "^3.0.4",
+ "header-case": "^2.0.4",
+ "no-case": "^3.0.4",
+ "param-case": "^3.0.4",
+ "pascal-case": "^3.1.2",
+ "path-case": "^3.0.4",
+ "sentence-case": "^3.0.4",
+ "snake-case": "^3.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "cliui": {
+ "version": "7.0.4",
+ "resolved": "https://registry.npmjs.org/cliui/-/cliui-7.0.4.tgz",
+ "integrity": "sha512-OcRE68cOsVMXp1Yvonl/fzkQOyjLSu/8bhPDfQt0e0/Eb283TKP20Fs2MqoPsr9SwA595rRCA+QMzYc9nBP+JQ==",
+ "requires": {
+ "string-width": "^4.2.0",
+ "strip-ansi": "^6.0.0",
+ "wrap-ansi": "^7.0.0"
+ }
+ },
+ "color-convert": {
+ "version": "1.9.3",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz",
+ "integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==",
+ "requires": {
+ "color-name": "1.1.3"
+ }
+ },
+ "color-name": {
+ "version": "1.1.3",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz",
+ "integrity": "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw=="
+ },
+ "constant-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/constant-case/-/constant-case-3.0.4.tgz",
+ "integrity": "sha512-I2hSBi7Vvs7BEuJDr5dDHfzb/Ruj3FyvFyh7KLilAjNQw3Be+xgqUBA2W6scVEcL0hL1dwPRtIqEPVUCKkSsyQ==",
+ "requires": {
+ "no-case": "^3.0.4",
+ "tslib": "^2.0.3",
+ "upper-case": "^2.0.2"
+ }
+ },
+ "data-uri-to-buffer": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/data-uri-to-buffer/-/data-uri-to-buffer-4.0.0.tgz",
+ "integrity": "sha512-Vr3mLBA8qWmcuschSLAOogKgQ/Jwxulv3RNE4FXnYWRGujzrRWQI4m12fQqRkwX06C0KanhLr4hK+GydchZsaA=="
+ },
+ "dir-glob": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz",
+ "integrity": "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==",
+ "requires": {
+ "path-type": "^4.0.0"
+ }
+ },
+ "dot-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/dot-case/-/dot-case-3.0.4.tgz",
+ "integrity": "sha512-Kv5nKlh6yRrdrGvxeJ2e5y2eRUpkUosIW4A2AS38zwSz27zu7ufDwQPi5Jhs3XAlGNetl3bmnGhQsMtkKJnj3w==",
+ "requires": {
+ "no-case": "^3.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "emoji-regex": {
+ "version": "8.0.0",
+ "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
+ "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="
+ },
+ "escalade": {
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.1.1.tgz",
+ "integrity": "sha512-k0er2gUkLf8O0zKJiAhmkTnJlTvINGv7ygDNPbeIsX/TJjGJZHuh9B2UxbsaEkmlEo9MfhrSzmhIlhRlI2GXnw=="
+ },
+ "escape-string-regexp": {
+ "version": "1.0.5",
+ "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz",
+ "integrity": "sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg=="
+ },
+ "fast-deep-equal": {
+ "version": "3.1.3",
+ "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz",
+ "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q=="
+ },
+ "fast-glob": {
+ "version": "3.2.12",
+ "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.2.12.tgz",
+ "integrity": "sha512-DVj4CQIYYow0BlaelwK1pHl5n5cRSJfM60UA0zK891sVInoPri2Ekj7+e1CT3/3qxXenpI+nBBmQAcJPJgaj4w==",
+ "requires": {
+ "@nodelib/fs.stat": "^2.0.2",
+ "@nodelib/fs.walk": "^1.2.3",
+ "glob-parent": "^5.1.2",
+ "merge2": "^1.3.0",
+ "micromatch": "^4.0.4"
+ }
+ },
+ "fastq": {
+ "version": "1.13.0",
+ "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.13.0.tgz",
+ "integrity": "sha512-YpkpUnK8od0o1hmeSc7UUs/eB/vIPWJYjKck2QKIzAf71Vm1AAQ3EbuZB3g2JIy+pg+ERD0vqI79KyZiB2e2Nw==",
+ "requires": {
+ "reusify": "^1.0.4"
+ }
+ },
+ "fetch-blob": {
+ "version": "3.2.0",
+ "resolved": "https://registry.npmjs.org/fetch-blob/-/fetch-blob-3.2.0.tgz",
+ "integrity": "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ==",
+ "requires": {
+ "node-domexception": "^1.0.0",
+ "web-streams-polyfill": "^3.0.3"
+ }
+ },
+ "fill-range": {
+ "version": "7.0.1",
+ "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz",
+ "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==",
+ "requires": {
+ "to-regex-range": "^5.0.1"
+ }
+ },
+ "formdata-polyfill": {
+ "version": "4.0.10",
+ "resolved": "https://registry.npmjs.org/formdata-polyfill/-/formdata-polyfill-4.0.10.tgz",
+ "integrity": "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g==",
+ "requires": {
+ "fetch-blob": "^3.1.2"
+ }
+ },
+ "get-caller-file": {
+ "version": "2.0.5",
+ "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz",
+ "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg=="
+ },
+ "glob-parent": {
+ "version": "5.1.2",
+ "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz",
+ "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==",
+ "requires": {
+ "is-glob": "^4.0.1"
+ }
+ },
+ "globby": {
+ "version": "13.1.2",
+ "resolved": "https://registry.npmjs.org/globby/-/globby-13.1.2.tgz",
+ "integrity": "sha512-LKSDZXToac40u8Q1PQtZihbNdTYSNMuWe+K5l+oa6KgDzSvVrHXlJy40hUP522RjAIoNLJYBJi7ow+rbFpIhHQ==",
+ "requires": {
+ "dir-glob": "^3.0.1",
+ "fast-glob": "^3.2.11",
+ "ignore": "^5.2.0",
+ "merge2": "^1.4.1",
+ "slash": "^4.0.0"
+ }
+ },
+ "has-flag": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz",
+ "integrity": "sha512-sKJf1+ceQBr4SMkvQnBDNDtf4TXpVhVGateu0t918bl30FnbE2m4vNLX+VWe/dpjlb+HugGYzW7uQXH98HPEYw=="
+ },
+ "header-case": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/header-case/-/header-case-2.0.4.tgz",
+ "integrity": "sha512-H/vuk5TEEVZwrR0lp2zed9OCo1uAILMlx0JEMgC26rzyJJ3N1v6XkwHHXJQdR2doSjcGPM6OKPYoJgf0plJ11Q==",
+ "requires": {
+ "capital-case": "^1.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "ignore": {
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.2.0.tgz",
+ "integrity": "sha512-CmxgYGiEPCLhfLnpPp1MoRmifwEIOgjcHXxOBjv7mY96c+eWScsOP9c112ZyLdWHi0FxHjI+4uVhKYp/gcdRmQ=="
+ },
+ "is-extglob": {
+ "version": "2.1.1",
+ "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz",
+ "integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ=="
+ },
+ "is-fullwidth-code-point": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz",
+ "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="
+ },
+ "is-glob": {
+ "version": "4.0.3",
+ "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz",
+ "integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==",
+ "requires": {
+ "is-extglob": "^2.1.1"
+ }
+ },
+ "is-number": {
+ "version": "7.0.0",
+ "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz",
+ "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng=="
+ },
+ "js-tokens": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-4.0.0.tgz",
+ "integrity": "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ=="
+ },
+ "js-yaml": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.0.tgz",
+ "integrity": "sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA==",
+ "requires": {
+ "argparse": "^2.0.1"
+ }
+ },
+ "json-schema-traverse": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz",
+ "integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug=="
+ },
+ "kleur": {
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/kleur/-/kleur-3.0.3.tgz",
+ "integrity": "sha512-eTIzlVOSUR+JxdDFepEYcBMtZ9Qqdef+rnzWdRZuMbOywu5tO2w2N7rqjoANZ5k9vywhL6Br1VRjUIgTQx4E8w=="
+ },
+ "lower-case": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/lower-case/-/lower-case-2.0.2.tgz",
+ "integrity": "sha512-7fm3l3NAF9WfN6W3JOmf5drwpVqX78JtoGJ3A6W0a6ZnldM41w2fV5D490psKFTpMds8TJse/eHLFFsNHHjHgg==",
+ "requires": {
+ "tslib": "^2.0.3"
+ }
+ },
+ "merge2": {
+ "version": "1.4.1",
+ "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz",
+ "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg=="
+ },
+ "micromatch": {
+ "version": "4.0.5",
+ "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.5.tgz",
+ "integrity": "sha512-DMy+ERcEW2q8Z2Po+WNXuw3c5YaUSFjAO5GsJqfEl7UjvtIuFKO6ZrKvcItdy98dwFI2N1tg3zNIdKaQT+aNdA==",
+ "requires": {
+ "braces": "^3.0.2",
+ "picomatch": "^2.3.1"
+ }
+ },
+ "mkdirp": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-1.0.4.tgz",
+ "integrity": "sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw=="
+ },
+ "mustache": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/mustache/-/mustache-4.2.0.tgz",
+ "integrity": "sha512-71ippSywq5Yb7/tVYyGbkBggbU8H3u5Rz56fH60jGFgr8uHwxs+aSKeqmluIVzM0m0kB7xQjKS6qPfd0b2ZoqQ=="
+ },
+ "no-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/no-case/-/no-case-3.0.4.tgz",
+ "integrity": "sha512-fgAN3jGAh+RoxUGZHTSOLJIqUc2wmoBwGR4tbpNAKmmovFoWq0OdRkb0VkldReO2a2iBT/OEulG9XSUc10r3zg==",
+ "requires": {
+ "lower-case": "^2.0.2",
+ "tslib": "^2.0.3"
+ }
+ },
+ "node-domexception": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz",
+ "integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ=="
+ },
+ "node-fetch": {
+ "version": "3.2.8",
+ "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-3.2.8.tgz",
+ "integrity": "sha512-KtpD1YhGszhntMpBDyp5lyagk8KIMopC1LEb7cQUAh7zcosaX5uK8HnbNb2i3NTQK3sIawCItS0uFC3QzcLHdg==",
+ "requires": {
+ "data-uri-to-buffer": "^4.0.0",
+ "fetch-blob": "^3.1.4",
+ "formdata-polyfill": "^4.0.10"
+ }
+ },
+ "node-watch": {
+ "version": "0.7.3",
+ "resolved": "https://registry.npmjs.org/node-watch/-/node-watch-0.7.3.tgz",
+ "integrity": "sha512-3l4E8uMPY1HdMMryPRUAl+oIHtXtyiTlIiESNSVSNxcPfzAFzeTbXFQkZfAwBbo0B1qMSG8nUABx+Gd+YrbKrQ=="
+ },
+ "param-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/param-case/-/param-case-3.0.4.tgz",
+ "integrity": "sha512-RXlj7zCYokReqWpOPH9oYivUzLYZ5vAPIfEmCTNViosC78F8F0H9y7T7gG2M39ymgutxF5gcFEsyZQSph9Bp3A==",
+ "requires": {
+ "dot-case": "^3.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "pascal-case": {
+ "version": "3.1.2",
+ "resolved": "https://registry.npmjs.org/pascal-case/-/pascal-case-3.1.2.tgz",
+ "integrity": "sha512-uWlGT3YSnK9x3BQJaOdcZwrnV6hPpd8jFH1/ucpiLRPh/2zCVJKS19E4GvYHvaCcACn3foXZ0cLB9Wrx1KGe5g==",
+ "requires": {
+ "no-case": "^3.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "path-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/path-case/-/path-case-3.0.4.tgz",
+ "integrity": "sha512-qO4qCFjXqVTrcbPt/hQfhTQ+VhFsqNKOPtytgNKkKxSoEp3XPUQ8ObFuePylOIok5gjn69ry8XiULxCwot3Wfg==",
+ "requires": {
+ "dot-case": "^3.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "path-type": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz",
+ "integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw=="
+ },
+ "picocolors": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.0.0.tgz",
+ "integrity": "sha512-1fygroTLlHu66zi26VoTDv8yRgm0Fccecssto+MhsZ0D/DGW2sm8E8AjW7NU5VVTRt5GxbeZ5qBuJr+HyLYkjQ=="
+ },
+ "picomatch": {
+ "version": "2.3.1",
+ "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz",
+ "integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="
+ },
+ "prettier": {
+ "version": "2.7.1",
+ "resolved": "https://registry.npmjs.org/prettier/-/prettier-2.7.1.tgz",
+ "integrity": "sha512-ujppO+MkdPqoVINuDFDRLClm7D78qbDt0/NR+wp5FqEZOoTNAjPHWj17QRhu7geIHJfcNhRk1XVQmF8Bp3ye+g=="
+ },
+ "prompts": {
+ "version": "2.4.2",
+ "resolved": "https://registry.npmjs.org/prompts/-/prompts-2.4.2.tgz",
+ "integrity": "sha512-NxNv/kLguCA7p3jE8oL2aEBsrJWgAakBpgmgK6lpPWV+WuOmY6r2/zbAVnP+T8bQlA0nzHXSJSJW0Hq7ylaD2Q==",
+ "requires": {
+ "kleur": "^3.0.3",
+ "sisteransi": "^1.0.5"
+ }
+ },
+ "punycode": {
+ "version": "2.1.1",
+ "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz",
+ "integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A=="
+ },
+ "queue-microtask": {
+ "version": "1.2.3",
+ "resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz",
+ "integrity": "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A=="
+ },
+ "require-directory": {
+ "version": "2.1.1",
+ "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz",
+ "integrity": "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q=="
+ },
+ "require-from-string": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz",
+ "integrity": "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw=="
+ },
+ "reusify": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz",
+ "integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw=="
+ },
+ "run-parallel": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.2.0.tgz",
+ "integrity": "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==",
+ "requires": {
+ "queue-microtask": "^1.2.2"
+ }
+ },
+ "sentence-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/sentence-case/-/sentence-case-3.0.4.tgz",
+ "integrity": "sha512-8LS0JInaQMCRoQ7YUytAo/xUu5W2XnQxV2HI/6uM6U7CITS1RqPElr30V6uIqyMKM9lJGRVFy5/4CuzcixNYSg==",
+ "requires": {
+ "no-case": "^3.0.4",
+ "tslib": "^2.0.3",
+ "upper-case-first": "^2.0.2"
+ }
+ },
+ "sisteransi": {
+ "version": "1.0.5",
+ "resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.5.tgz",
+ "integrity": "sha512-bLGGlR1QxBcynn2d5YmDX4MGjlZvy2MRBDRNHLJ8VI6l6+9FUiyTFNJ0IveOSP0bcXgVDPRcfGqA0pjaqUpfVg=="
+ },
+ "slash": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/slash/-/slash-4.0.0.tgz",
+ "integrity": "sha512-3dOsAHXXUkQTpOYcoAxLIorMTp4gIQr5IW3iVb7A7lFIp0VHhnynm9izx6TssdrIcVIESAlVjtnO2K8bg+Coew=="
+ },
+ "snake-case": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/snake-case/-/snake-case-3.0.4.tgz",
+ "integrity": "sha512-LAOh4z89bGQvl9pFfNF8V146i7o7/CqFPbqzYgP+yYzDIDeS9HaNFtXABamRW+AQzEVODcvE79ljJ+8a9YSdMg==",
+ "requires": {
+ "dot-case": "^3.0.4",
+ "tslib": "^2.0.3"
+ }
+ },
+ "string-width": {
+ "version": "4.2.3",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz",
+ "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==",
+ "requires": {
+ "emoji-regex": "^8.0.0",
+ "is-fullwidth-code-point": "^3.0.0",
+ "strip-ansi": "^6.0.1"
+ }
+ },
+ "strip-ansi": {
+ "version": "6.0.1",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz",
+ "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==",
+ "requires": {
+ "ansi-regex": "^5.0.1"
+ }
+ },
+ "supports-color": {
+ "version": "5.5.0",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz",
+ "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==",
+ "requires": {
+ "has-flag": "^3.0.0"
+ }
+ },
+ "to-regex-range": {
+ "version": "5.0.1",
+ "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
+ "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==",
+ "requires": {
+ "is-number": "^7.0.0"
+ }
+ },
+ "tslib": {
+ "version": "2.4.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.4.0.tgz",
+ "integrity": "sha512-d6xOpEDfsi2CZVlPQzGeux8XMwLT9hssAsaPYExaQMuYskwb+x1x7J371tWlbBdWHroy99KnVB6qIkUbs5X3UQ=="
+ },
+ "upper-case": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/upper-case/-/upper-case-2.0.2.tgz",
+ "integrity": "sha512-KgdgDGJt2TpuwBUIjgG6lzw2GWFRCW9Qkfkiv0DxqHHLYJHmtmdUIKcZd8rHgFSjopVTlw6ggzCm1b8MFQwikg==",
+ "requires": {
+ "tslib": "^2.0.3"
+ }
+ },
+ "upper-case-first": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/upper-case-first/-/upper-case-first-2.0.2.tgz",
+ "integrity": "sha512-514ppYHBaKwfJRK/pNC6c/OxfGa0obSnAl106u97Ed0I625Nin96KAjttZF6ZL3e1XLtphxnqrOi9iWgm+u+bg==",
+ "requires": {
+ "tslib": "^2.0.3"
+ }
+ },
+ "uri-js": {
+ "version": "4.4.1",
+ "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.4.1.tgz",
+ "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==",
+ "requires": {
+ "punycode": "^2.1.0"
+ }
+ },
+ "vscode-jsonrpc": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/vscode-jsonrpc/-/vscode-jsonrpc-6.0.0.tgz",
+ "integrity": "sha512-wnJA4BnEjOSyFMvjZdpiOwhSq9uDoK8e/kpRJDTaMYzwlkrhG1fwDIZI94CLsLzlCK5cIbMMtFlJlfR57Lavmg=="
+ },
+ "vscode-languageserver": {
+ "version": "7.0.0",
+ "resolved": "https://registry.npmjs.org/vscode-languageserver/-/vscode-languageserver-7.0.0.tgz",
+ "integrity": "sha512-60HTx5ID+fLRcgdHfmz0LDZAXYEV68fzwG0JWwEPBode9NuMYTIxuYXPg4ngO8i8+Ou0lM7y6GzaYWbiDL0drw==",
+ "requires": {
+ "vscode-languageserver-protocol": "3.16.0"
+ }
+ },
+ "vscode-languageserver-protocol": {
+ "version": "3.16.0",
+ "resolved": "https://registry.npmjs.org/vscode-languageserver-protocol/-/vscode-languageserver-protocol-3.16.0.tgz",
+ "integrity": "sha512-sdeUoAawceQdgIfTI+sdcwkiK2KU+2cbEYA0agzM2uqaUy2UpnnGHtWTHVEtS0ES4zHU0eMFRGN+oQgDxlD66A==",
+ "requires": {
+ "vscode-jsonrpc": "6.0.0",
+ "vscode-languageserver-types": "3.16.0"
+ }
+ },
+ "vscode-languageserver-textdocument": {
+ "version": "1.0.7",
+ "resolved": "https://registry.npmjs.org/vscode-languageserver-textdocument/-/vscode-languageserver-textdocument-1.0.7.tgz",
+ "integrity": "sha512-bFJH7UQxlXT8kKeyiyu41r22jCZXG8kuuVVA33OEJn1diWOZK5n8zBSPZFHVBOu8kXZ6h0LIRhf5UnCo61J4Hg=="
+ },
+ "vscode-languageserver-types": {
+ "version": "3.16.0",
+ "resolved": "https://registry.npmjs.org/vscode-languageserver-types/-/vscode-languageserver-types-3.16.0.tgz",
+ "integrity": "sha512-k8luDIWJWyenLc5ToFQQMaSrqCHiLwyKPHKPQZ5zz21vM+vIVUSvsRpcbiECH4WR88K2XZqc4ScRcZ7nk/jbeA=="
+ },
+ "web-streams-polyfill": {
+ "version": "3.2.1",
+ "resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-3.2.1.tgz",
+ "integrity": "sha512-e0MO3wdXWKrLbL0DgGnUV7WHVuw9OUvL4hjgnPkIeEvESk74gAITi5G606JtZPp39cd8HA9VQzCIvA49LpPN5Q=="
+ },
+ "wrap-ansi": {
+ "version": "7.0.0",
+ "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz",
+ "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==",
+ "requires": {
+ "ansi-styles": "^4.0.0",
+ "string-width": "^4.1.0",
+ "strip-ansi": "^6.0.0"
+ },
+ "dependencies": {
+ "ansi-styles": {
+ "version": "4.3.0",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz",
+ "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==",
+ "requires": {
+ "color-convert": "^2.0.1"
+ }
+ },
+ "color-convert": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
+ "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
+ "requires": {
+ "color-name": "~1.1.4"
+ }
+ },
+ "color-name": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
+ "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="
+ }
+ }
+ },
+ "y18n": {
+ "version": "5.0.8",
+ "resolved": "https://registry.npmjs.org/y18n/-/y18n-5.0.8.tgz",
+ "integrity": "sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA=="
+ },
+ "yargs": {
+ "version": "17.3.1",
+ "resolved": "https://registry.npmjs.org/yargs/-/yargs-17.3.1.tgz",
+ "integrity": "sha512-WUANQeVgjLbNsEmGk20f+nlHgOqzRFpiGWVaBrYGYIGANIIu3lWjoyi0fNlFmJkvfhCZ6BXINe7/W2O2bV4iaA==",
+ "requires": {
+ "cliui": "^7.0.2",
+ "escalade": "^3.1.1",
+ "get-caller-file": "^2.0.5",
+ "require-directory": "^2.1.1",
+ "string-width": "^4.2.3",
+ "y18n": "^5.0.5",
+ "yargs-parser": "^21.0.0"
+ }
+ },
+ "yargs-parser": {
+ "version": "21.1.1",
+ "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-21.1.1.tgz",
+ "integrity": "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw=="
+ }
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/api-spec/package.json b/accelerators/aks-sb-azmonitor-microservices/api-spec/package.json
new file mode 100644
index 0000000..4bcd4ff
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/api-spec/package.json
@@ -0,0 +1,17 @@
+{
+ "name": "api-spec",
+ "version": "1.0.0",
+ "description": "",
+ "main": "index.js",
+ "scripts": {
+ "test": "echo \"Error: no test specified\" && exit 1"
+ },
+ "keywords": [],
+ "author": "",
+ "license": "ISC",
+ "dependencies": {
+ "@cadl-lang/compiler": "0.35.0",
+ "@cadl-lang/openapi3": "0.15.0",
+ "@cadl-lang/rest": "0.17.0"
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/custom-metric-alert.png b/accelerators/aks-sb-azmonitor-microservices/assets/custom-metric-alert.png
new file mode 100644
index 0000000..f078cb5
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/custom-metric-alert.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/custom-metric-dimensions.png b/accelerators/aks-sb-azmonitor-microservices/assets/custom-metric-dimensions.png
new file mode 100644
index 0000000..e694ae6
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/custom-metric-dimensions.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/dimensions-rg-list.png b/accelerators/aks-sb-azmonitor-microservices/assets/dimensions-rg-list.png
new file mode 100644
index 0000000..1365948
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/dimensions-rg-list.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/dimensions-workbook-initial.png b/accelerators/aks-sb-azmonitor-microservices/assets/dimensions-workbook-initial.png
new file mode 100644
index 0000000..62dfdf8
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/dimensions-workbook-initial.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/dimensions-workbook-slow1.png b/accelerators/aks-sb-azmonitor-microservices/assets/dimensions-workbook-slow1.png
new file mode 100644
index 0000000..bb3b53d
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/dimensions-workbook-slow1.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/dimensions-workbook-slow2.png b/accelerators/aks-sb-azmonitor-microservices/assets/dimensions-workbook-slow2.png
new file mode 100644
index 0000000..642b000
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/dimensions-workbook-slow2.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/full-trace-invalid.png b/accelerators/aks-sb-azmonitor-microservices/assets/full-trace-invalid.png
new file mode 100644
index 0000000..93a1a9c
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/full-trace-invalid.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/full-trace-valid.png b/accelerators/aks-sb-azmonitor-microservices/assets/full-trace-valid.png
new file mode 100644
index 0000000..fc4d2fe
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/full-trace-valid.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/health-check-logs.png b/accelerators/aks-sb-azmonitor-microservices/assets/health-check-logs.png
new file mode 100644
index 0000000..78789be
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/health-check-logs.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/log-auto.png b/accelerators/aks-sb-azmonitor-microservices/assets/log-auto.png
new file mode 100644
index 0000000..4d7a016
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/log-auto.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/log-manual.png b/accelerators/aks-sb-azmonitor-microservices/assets/log-manual.png
new file mode 100644
index 0000000..958e3f5
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/log-manual.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/metric-auto-rus.png b/accelerators/aks-sb-azmonitor-microservices/assets/metric-auto-rus.png
new file mode 100644
index 0000000..356991d
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/metric-auto-rus.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/metric-auto.png b/accelerators/aks-sb-azmonitor-microservices/assets/metric-auto.png
new file mode 100644
index 0000000..daa31b2
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/metric-auto.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/metric-manual.png b/accelerators/aks-sb-azmonitor-microservices/assets/metric-manual.png
new file mode 100644
index 0000000..e4690f1
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/metric-manual.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/sb-microservice-accelerator-arch-diagram.drawio.png b/accelerators/aks-sb-azmonitor-microservices/assets/sb-microservice-accelerator-arch-diagram.drawio.png
new file mode 100644
index 0000000..cc7e73c
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/sb-microservice-accelerator-arch-diagram.drawio.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/span-auto.png b/accelerators/aks-sb-azmonitor-microservices/assets/span-auto.png
new file mode 100644
index 0000000..08d2065
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/span-auto.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/span-manual.png b/accelerators/aks-sb-azmonitor-microservices/assets/span-manual.png
new file mode 100644
index 0000000..c208f2c
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/span-manual.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/verify-invalid-cargo.png b/accelerators/aks-sb-azmonitor-microservices/assets/verify-invalid-cargo.png
new file mode 100644
index 0000000..65a8f98
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/verify-invalid-cargo.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/verify-valid-cargo.png b/accelerators/aks-sb-azmonitor-microservices/assets/verify-valid-cargo.png
new file mode 100644
index 0000000..3caa3b6
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/verify-valid-cargo.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/workbook-aks-metric.png b/accelerators/aks-sb-azmonitor-microservices/assets/workbook-aks-metric.png
new file mode 100644
index 0000000..25fefc0
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/workbook-aks-metric.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/assets/workbook-key-vault-metric.png b/accelerators/aks-sb-azmonitor-microservices/assets/workbook-key-vault-metric.png
new file mode 100644
index 0000000..21c6ba9
Binary files /dev/null and b/accelerators/aks-sb-azmonitor-microservices/assets/workbook-key-vault-metric.png differ
diff --git a/accelerators/aks-sb-azmonitor-microservices/deploy-bicep.sh b/accelerators/aks-sb-azmonitor-microservices/deploy-bicep.sh
new file mode 100644
index 0000000..bc987c9
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/deploy-bicep.sh
@@ -0,0 +1,109 @@
+#!/bin/bash
+set -e
+
+script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+
+function help() {
+ echo
+ echo "deploy-bicep.sh"
+ echo
+ echo "Deploy sample via Bicep"
+ echo
+ echo -e "\t--skip-helm-deploy\t(Optional) Skip Helm deployment of services to AKS"
+ echo -e "\t--aks-aad-auth\t(Optional) Enable AAD authentication for AKS"
+ echo
+}
+
+
+# Set default values here
+SKIP_HELM_DEPLOY=false
+AKS_AAD_AUTH=false
+
+
+# Process switches:
+SHORT=h
+LONG=skip-helm-deploy,aks-aad-auth,help
+OPTS=$(getopt -a -n files --options $SHORT --longoptions $LONG -- "$@")
+
+eval set -- "$OPTS"
+
+while :
+do
+ case "$1" in
+ --skip-helm-deploy)
+ SKIP_HELM_DEPLOY=true
+ shift 1
+ ;;
+ --aks-aad-auth )
+ AKS_AAD_AUTH=true
+ shift 1
+ ;;
+ -h | --help)
+ help
+ exit 0
+ ;;
+ --)
+ shift;
+ break
+ ;;
+ *)
+ echo "Unexpected '$1'"
+ help
+ exit 1
+ ;;
+ esac
+done
+
+if [[ -z $IN_CD ]]; then # skip loading env vars if running in CD (as they are already set)
+ if [[ ! -f "$script_dir/.env" ]]; then
+ echo "Please create a .env file (using .env.sample as a starter)" 1>&2
+ exit 1
+ fi
+ source "$script_dir/.env"
+fi
+
+if [[ -z "$USERNAME" ]]; then
+ echo 'USERNAME not set - ensure you have specified a value for it in your .env file' 1>&2
+ exit 6
+fi
+
+if [[ -z "$EMAIL_ADDRESS" ]]; then
+ echo 'EMAIL_ADDRESS not set - ensure you have specified a value for it in your .env file' 1>&2
+ exit 6
+fi
+
+deploy_args=()
+if [[ "$AKS_AAD_AUTH" == "true" ]]; then
+ deploy_args+=(--aks-aad-auth)
+fi
+
+# Set default values
+LOCATION=${LOCATION:-eastus}
+
+figlet infra
+echo "Starting Bicep deployment to $LOCATION"
+echo "${deploy_args[@]}" | xargs "$script_dir/infrastructure/scripts/deploy-bicep-infrastructure.sh" --username "$USERNAME" --email-address "$EMAIL_ADDRESS" --location "$LOCATION"
+echo "Bicep deployment completed"
+
+figlet images
+echo "Building and pushing service images"
+ACR_NAME=$(jq -r '.acr_name' < "$script_dir/output.json")
+if [[ ${#ACR_NAME} -eq 0 ]]; then
+ echo 'ERROR: Missing output value acr_name' 1>&2
+ exit 6
+fi
+"$script_dir/infrastructure/scripts/build-and-push-images.sh" --acr-name "$ACR_NAME" --image-tag latest
+
+figlet env
+echo "Creating env files"
+"$script_dir/infrastructure/scripts/create-env-files-from-output.sh"
+
+if [[ "$SKIP_HELM_DEPLOY" == "true" ]]; then
+ echo "Skipping Helm deployment"
+else
+ figlet services
+ echo "Deploying services"
+ echo "${deploy_args[@]}" | xargs "$script_dir/infrastructure/scripts/deploy-helm-charts.sh"
+fi
+
+echo "Deployment completed"
diff --git a/accelerators/aks-sb-azmonitor-microservices/deploy-terraform.sh b/accelerators/aks-sb-azmonitor-microservices/deploy-terraform.sh
new file mode 100644
index 0000000..2ad5756
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/deploy-terraform.sh
@@ -0,0 +1,109 @@
+#!/bin/bash
+set -e
+
+script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+
+function help() {
+ echo
+ echo "deploy-terraform.sh"
+ echo
+ echo "Deploy sample via Terraform"
+ echo
+ echo -e "\t--skip-helm-deploy\t(Optional) Skip Helm deployment of services to AKS"
+ echo -e "\t--aks-aad-auth\t(Optional) Enable AAD authentication for AKS"
+ echo
+}
+
+
+# Set default values here
+SKIP_HELM_DEPLOY=false
+AKS_AAD_AUTH=false
+
+
+# Process switches:
+SHORT=h
+LONG=skip-helm-deploy,aks-aad-auth,help
+OPTS=$(getopt -a -n files --options $SHORT --longoptions $LONG -- "$@")
+
+eval set -- "$OPTS"
+
+while :
+do
+ case "$1" in
+ --skip-helm-deploy)
+ SKIP_HELM_DEPLOY=true
+ shift 1
+ ;;
+ --aks-aad-auth )
+ AKS_AAD_AUTH=true
+ shift 1
+ ;;
+ -h | --help)
+ help
+ exit 0
+ ;;
+ --)
+ shift;
+ break
+ ;;
+ *)
+ echo "Unexpected '$1'"
+ help
+ exit 1
+ ;;
+ esac
+done
+
+if [[ -z $IN_CD ]]; then # skip loading env vars if running in CD (as they are already set)
+ if [[ ! -f "$script_dir/.env" ]]; then
+ echo "Please create a .env file (using .env.sample as a starter)" 1>&2
+ exit 1
+ fi
+ source "$script_dir/.env"
+fi
+
+if [[ -z "$USERNAME" ]]; then
+ echo 'USERNAME not set - ensure you have specified a value for it in your .env file' 1>&2
+ exit 6
+fi
+
+if [[ -z "$EMAIL_ADDRESS" ]]; then
+ echo 'EMAIL_ADDRESS not set - ensure you have specified a value for it in your .env file' 1>&2
+ exit 6
+fi
+
+deploy_args=()
+if [[ "$AKS_AAD_AUTH" == "true" ]]; then
+ deploy_args+=(--aks-aad-auth)
+fi
+
+# Set default values
+LOCATION=${LOCATION:-eastus}
+
+figlet infra
+echo "Starting Terraform deployment to $LOCATION"
+echo "${deploy_args[@]}" | xargs "$script_dir/infrastructure/scripts/deploy-terraform-infrastructure.sh" --username "$USERNAME" --email-address "$EMAIL_ADDRESS" --location "$LOCATION"
+echo "Terraform deployment completed"
+
+figlet images
+echo "Building and pushing service images"
+ACR_NAME=$(jq -r '.acr_name' < "$script_dir/output.json")
+if [[ ${#ACR_NAME} -eq 0 ]]; then
+ echo 'ERROR: Missing output value acr_name' 1>&2
+ exit 6
+fi
+"$script_dir/infrastructure/scripts/build-and-push-images.sh" --acr-name "$ACR_NAME" --image-tag latest
+
+figlet env
+echo "Creating env files"
+"$script_dir/infrastructure/scripts/create-env-files-from-output.sh"
+
+if [[ "$SKIP_HELM_DEPLOY" == "true" ]]; then
+ echo "Skipping Helm deployment"
+else
+ figlet services
+ echo "Deploying services"
+ echo "${deploy_args[@]}" | xargs "$script_dir/infrastructure/scripts/deploy-helm-charts.sh"
+fi
+
+echo "Deployment completed"
diff --git a/accelerators/aks-sb-azmonitor-microservices/docs/alerts.md b/accelerators/aks-sb-azmonitor-microservices/docs/alerts.md
new file mode 100644
index 0000000..c74c122
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/docs/alerts.md
@@ -0,0 +1,57 @@
+# Alerts
+
+[Alerts](https://learn.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-overview) proactively notify application administrators when the data ingested by Azure Monitor suggests the application is experiencing problems, or soon will be. Visualization tooling like Workbooks highlights important indicators from the application and can illuminate issues, but it requires active, manual watching by administrators. Alerts take those same indicators one step further by triggering automated, prescriptive action when certain conditions are met. Rather than requiring active watching of a dashboard, alerts let application admins understand and resolve issues with the application _before_ they become problematic for most downstream users of the system.
+
+Azure alert rules are scoped to a specific resource. These resources emit different telemetry signals, defined by the resource type. Service Bus namespaces emit a numeric `DeadletteredMessages` metric, for instance, while AKS emits `node_cpu_usage_percentage`, among other metrics. The application relies on a number of metric alerts that utilize these signals. It also utilizes several log alerts that use KQL queries to pull the data evaluated in alert conditions. The `cargoProcessingAPIHealthCheckFailure` alert, for example, uses the following KQL query to pull failed health checks for the `cargo-processing-api` service:
+
+```kusto
+requests
+| where cloud_RoleName == "cargo-processing-api" and name == "GET /actuator/health" and success == "False"
+```
+
+Alert conditions combine the signal and some numeric threshold that may be met over a defined window of time. If a signal exceeds some threshold over a time window defined in an alert rule, the alert fires and triggers an action group. Severity levels dictate the relative importance of the alert and mitigation steps. Certain alerts suggest with high likelihood that the application is already experiencing issues, like the microservice exceptions alert (`microserviceExceptions`). Immediate attention should be paid to uncover the underlying issue and resolve the alert. Others, like the Key Vault saturation rate (`keyVaultSaturation`) or number of invalid cargo objects saved (`cosmosInvalidCargo`), don't necessarily require immediate action but suggest that an administrator should take a closer look.
+
+We elected to create alert rules for signals that suggested issues with the underlying infrastructure or the service code deployed to AKS that utilizes it. Each of the microservices has average duration, health check failure, and health check not reporting alerts. A single microservice exceptions alert is split across the 5 services and alerts when any microservice throws a certain number of exceptions. The combination of these alerts proactively notifies when a service has experienced failure or become less performant. Service Bus exposes many message count metrics, like dead-lettered and abandoned messages, that are also important indicators of application issues and are used in rules. Deadlettered messages, for example, may suggest that the initial `cargo-processing-api` service is not properly validating the cargo object structure before sending the message to the `ingest-cargo` queue. The AKS and Log Analytics alerts include the pre-defined, [recommended alert rules](https://learn.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-overview#recommended-alert-rules) that suggest impending failure for those resource types. The full list of alerts deployed alongside the application is as follows:
+
+| Alert Name | Description | Entity | Alert Type | Severity |
+| ------------------------------------------ | -------------------------------------------------------------------------------------------------- | ----------------- | ---------- | -------- |
+| cosmosRUs | Alert when RUs exceed 400. | Cosmos DB | Metric | 1 |
+| cosmosInvalidCargo | Alert when more than 10 documents have been saved to the invalid-cargo container. | Cosmos DB | Metric | 3 |
+| serviceBusAbandonedMessages | Alert when a Service Bus entity has abandoned more than 10 messages. | Service Bus | Metric | 2 |
+| serviceBusDeadLetteredMessages | Alert when a Service Bus entity has dead-lettered more than 10 messages. | Service Bus | Metric | 2 |
+| serviceBusThrottledRequests | Alert when a Service Bus entity has throttled more than 10 requests. | Service Bus | Metric | 2 |
+| aksCPUPercentage | Alert when Node CPU percentage exceeds 80. | AKS | Metric | 2 |
+| aksMemoryPercentage | Alert when Node memory working set percentage exceeds 80. | AKS | Metric | 2 |
+| aksPodRestarts | Alert when a microservice restarts more than once. | AKS | Log | 1 |
+| keyVaultSaturation | Alert when Key Vault saturation falls outside the range of a dynamic threshold. | Key Vault | Metric | 3 |
+| logAnalyticsDataIngestionDailyCap | Alert when the Log Analytics data ingestion daily cap has been reached. | Log Analytics | Log | 2 |
+| logAnalyticsDataIngestionRate | Alert when the Log Analytics max data ingestion rate has been reached. | Log Analytics | Log | 2 |
+| logAnalyticsOperationalIssues | Alert when the Log Analytics workspace has an operational issue. | Log Analytics | Log | 3 |
+| microserviceExceptions | Alert when a microservice throws more than 5 exceptions. | App Insights/Code | Log | 1 |
+| productQtyScheduledForDestinationPort | Alert when a single port/destination receives more than quantity 1000 of a given product. | App Insights/Code | Metric | 3 |
+| e2eAverageDuration | Alert when the end to end average request duration exceeds 5 seconds. | App Insights/Code | Log | 1 |
+| cargoProcessingAPIRequests | Alert when the cargo-processing-api microservice is not receiving any requests. | App Insights/Code | Log | 3 |
+| cargoProcessingAPIAverageDuration | Alert when the cargo-processing-api microservice average request duration exceeds 2 seconds. | App Insights/Code | Log | 1 |
+| cargoProcessingAPIHealthCheckFailure | Alert when a cargo-processing-api microservice health check fails. | App Insights/Code | Log | 1 |
+| cargoProcessingAPIHealthCheckNotReporting | Alert when the cargo-processing-api microservice health check is not reporting. | App Insights/Code | Log | 1 |
+| cargoProcessingValidatorAverageDuration | Alert when the cargo-processing-validator microservice average request duration exceeds 2 seconds. | App Insights/Code | Log | 1 |
+| validCargoManagerAverageDuration | Alert when the valid-cargo-manager microservice average request duration exceeds 2 seconds. | App Insights/Code | Log | 1 |
+| validCargoManagerHealthCheckFailure | Alert when a valid-cargo-manager microservice health check fails. | App Insights/Code | Log | 1 |
+| validCargoManagerHealthCheckNotReporting | Alert when the valid-cargo-manager microservice health check is not reporting. | App Insights/Code | Log | 1 |
+| invalidCargoManagerAverageDuration | Alert when the invalid-cargo-manager microservice average request duration exceeds 2 seconds. | App Insights/Code | Log | 1 |
+| invalidCargoManagerHealthCheckFailure | Alert when an invalid-cargo-manager microservice health check fails. | App Insights/Code | Log | 1 |
+| invalidCargoManagerHealthCheckNotReporting | Alert when the invalid-cargo-manager microservice health check is not reporting. | App Insights/Code | Log | 1 |
+| operationsAPIAverageDuration | Alert when the operations-api microservice average request duration exceeds 1 second. | App Insights/Code | Log | 1 |
+| operationsAPIHealthCheckFailure | Alert when an operations-api microservice health check fails. | App Insights/Code | Log | 1 |
+| operationsAPIHealthCheckNotReporting | Alert when the operations-api microservice health check is not reporting. | App Insights/Code | Log | 1 |
+
+All alerts in the cargo processing application are [_stateful_](https://learn.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-overview#alerts-and-state), meaning that they will fire when the condition is met, but _will not_ fire again until the condition is resolved. They all utilize the same action group, which notifies an administrator via email. [Action groups](https://learn.microsoft.com/en-us/azure/azure-monitor/alerts/action-groups) _can_ contain additional actions, like triggering webhooks, Logic Apps, Azure Functions, and more. The notification email address is set in the initial `.env`:
+
+```bash
+# Email address for alert notifications
+EMAIL_ADDRESS=youremail@organization.com
+```
+
+Most alerts use static thresholds to evaluate the telemetry signals emitted from the application. These alert rules use specific threshold values for a signal pre-defined by the application team. The Cosmos DB RUs alert, for instance, defines a static threshold of 400 RUs that will trigger an alert when exceeded. The Key Vault saturation rate alert, however, uses a [dynamic threshold](https://learn.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-dynamic-thresholds), which is calculated by a machine learning algorithm. The algorithm evaluates patterns in the most recent 10 days of data to determine the appropriate threshold for the signal. The thresholds and windows defined in the alert conditions are easily configurable via [Bicep](../infrastructure/bicep/modules/alerts.bicep) or [Terraform](../infrastructure/terraform/modules/alerts/main.tf).
+
+No [alert processing rules](https://learn.microsoft.com/en-us/azure/azure-monitor/alerts/alerts-processing-rules?tabs=portal) are used, but could be easily added to modify or suppress certain alerts before they fire.
diff --git a/accelerators/aks-sb-azmonitor-microservices/docs/auto-vs-manually-instrumented-telemetry.md b/accelerators/aks-sb-azmonitor-microservices/docs/auto-vs-manually-instrumented-telemetry.md
new file mode 100644
index 0000000..759ecdc
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/docs/auto-vs-manually-instrumented-telemetry.md
@@ -0,0 +1,26 @@
+# Auto vs Manually Instrumented Telemetry
+
+The telemetry data generated by the application can be separated into two distinct groups - automatically and manually instrumented data.
+
+Automatically instrumented logs, metrics, and traces are produced by the application without any custom code. Each exporter or SDK auto-instruments a unique set of telemetry data. The Java-based API services that utilize OpenTelemetry exporters for Azure Monitor, for instance, instrument a significant amount of telemetry data by default, while the TypeScript-based `cargo-processing-validator` service uses Application Insights SDK setup methods to [define the level of auto-instrumentation](../src/cargo-processing-validator/src/index.ts). Each SDK/exporter defines its own set of [auto-collected items](https://opentelemetry.io/docs/instrumentation/java/automatic/) for review.
+
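+As a rough sketch of what those setup methods look like, the snippet below shows the kind of configuration calls the Node.js `applicationinsights` SDK exposes. It is a representative example only, not the exact configuration used in the project's `index.ts`:
+
+```typescript
+import * as appInsights from "applicationinsights";
+
+// Reads the connection string from APPLICATIONINSIGHTS_CONNECTION_STRING.
+// Each setAutoCollect* call opts a category of telemetry in or out before
+// the SDK starts; this is what controls the level of auto-instrumentation.
+appInsights
+  .setup()
+  .setAutoCollectRequests(true)        // incoming HTTP requests
+  .setAutoCollectDependencies(true)    // outgoing HTTP and SDK calls
+  .setAutoCollectExceptions(true)
+  .setAutoCollectConsole(true, true)   // console and logger output as traces
+  .setAutoCollectPerformance(true)     // performance counters
+  .setAutoDependencyCorrelation(true)  // correlate spans across services
+  .start();
+```
+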
+Much of the telemetry we depend on for visualization or alert functionalities is generated out-of-the-box by the services in the application. Distributed tracing, for instance, depends on a number of spans produced automatically by these services. The Azure resources that support the application like Cosmos DB, Service Bus, Key Vault, etc. automatically export additional data that are used in Workbooks and Alert rules, like the number of dead-lettered messages in each queue and topic subscription.
+
+Manually instrumented data refers to the data generated via custom code added to one of the microservices. The exporters and SDKs expose various methods to produce telemetry data in order to augment the initial, automatically instrumented set. It fills in the gaps that auto-instrumented data fails to provide. The set of auto-instrumented data generated by the application was first examined before determinations were made about what additional data was required to support the proposed Workbooks tiles and Alert rules. We elected to manually instrument data that enabled distributed traces, additional logging for debugging purposes, health checks, tracking of specific business rules, and more.
+
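+As a simplified illustration of manual instrumentation, the sketch below records a dependency span around a Service Bus send using the Node.js `applicationinsights` client. The target and operation names are hypothetical, and this is a stand-in for the idea rather than the project's actual implementation:
+
+```typescript
+import * as appInsights from "applicationinsights";
+
+appInsights.setup().start();
+const client = appInsights.defaultClient;
+
+// Wraps an async send operation and reports it as a dependency so the call
+// appears in the end-to-end transaction view alongside auto-collected spans.
+async function sendWithTelemetry(send: () => Promise<void>): Promise<void> {
+  const start = Date.now();
+  let success = true;
+  try {
+    await send();
+  } catch (err) {
+    success = false;
+    throw err;
+  } finally {
+    client.trackDependency({
+      target: "validated-cargo",                  // hypothetical topic name
+      name: "Send validated cargo message",
+      data: "sb://<namespace>/validated-cargo",
+      dependencyTypeName: "Azure Service Bus",
+      duration: Date.now() - start,
+      resultCode: success ? 0 : 1,
+      success,
+    });
+  }
+}
+```
+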
+The following examples display automatically and manually instrumented logs, metrics, and trace data in Azure Monitor that was exported by the application:
+
+The `cargo-processing-api` service automatically instruments a log related to sending a batch of messages, while the "Validating cargo schema" log results from a `logger.info()` call within its [CargoController](../src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/CargoController.java) class:
+
+
+
+The `jvm_memory_used` metric is automatically instrumented by the `cargo-processing-api` service, while the `port_product_qty` custom metric is [manually instrumented within](../src/valid-cargo-manager/Services/SubscriptionReceiver.cs) the `valid-cargo-manager` service:
+
+
+
+The `TotalRequestUnits` metric is automatically instrumented by the Cosmos DB resource:
+
+
+The span that represents the initial POST request to the `cargo-processing-api` is automatically instrumented. The message send dependency to the `validated-cargo` Service Bus topic is represented by a manually instrumented span [generated within](../src/cargo-processing-validator/src/services/ServiceBusWithTelemetry.ts) the `cargo-processing-validator` service:
+
+
diff --git a/accelerators/aks-sb-azmonitor-microservices/docs/custom-dimensions.md b/accelerators/aks-sb-azmonitor-microservices/docs/custom-dimensions.md
new file mode 100644
index 0000000..65bb259
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/docs/custom-dimensions.md
@@ -0,0 +1,148 @@
+# Custom Dimensions
+
+When examining the behavior of a system, we often find aspects that we want to explore more deeply. For example, if we see that some requests in the system are taking longer than expected we may want to know more about the specific requests that are slow. Are all requests slow or just some of them? Are there common features to the slow requests? Do the requests all come from a particular system? In a multi-tenant system, are the slow requests distributed across all tenants or just a subset?
+
+As we explore these questions, we may find that we need to filter our telemetry data by additional properties. For example, we may want to filter our telemetry data by the request path, or by the tenant ID. Some of these properties will be available in the telemetry data by default, but others will not. For the properties that aren't part of the default data collected, we can use custom dimensions to add the additional information to our telemetry data.
+
+One place where we add custom dimensions in this project is in the `cargo-processing-validator` service. In the next couple of sections we will see what the custom dimensions look like from the monitoring dashboard and explore the implementation in code.
+
+## Custom Dimensions in Action
+
+In this section we will generate some test load on the system and then explore the telemetry dashboard to see the custom dimensions in action.
+
+To generate the test load, we will use the code in `src/cargo-test-scripts`. That folder contains a dev container for use with Visual Studio Code, which makes it easy to run the scripts. If you are using Visual Studio Code, you can open the folder in a dev container by running the `Dev Containers: Open Folder in Container...` command from the Command Palette. The folder also contains a [README.md](../src/cargo-test-scripts/README.md) file with instructions for running the scripts from the command line.
+
+From the terminal in the dev container, run the following command to generate some test load:
+
+```bash
+cat << EOF | node index.js -c -
+{
+ "tests": [
+ {
+ "name": "Send cargo to cargo processing api",
+ "target": "cargo-processing-api",
+ "volume": 500,
+ "validateResults": false,
+ "delayBetweenCargoInMilliseconds": 1500,
+ "startingRetryBufferInMilliseconds": 300,
+ "properties": {
+ "chanceToInvalidate": 0
+
+ }
+ }
+ ]
+}
+EOF
+```
+
+This command will generate 500 cargo messages and send them to the `cargo-processing-api` service at a rate of one message every 1.5 seconds. Because `chanceToInvalidate` is 0, none of the cargo messages will be invalidated, so all of them will flow through to the `valid-cargo-manager` service.
+
+Now that we are sending load, open the [Azure portal](https://portal.azure.com) and navigate to the resource group you deployed to. Next, select the `Service Processing` Workbook as shown below:
+
+
+
+Next, click the `Open Workbook` button. You should see a screen similar to the following (if you don't see any telemetry yet, it can take a few minutes to appear in Application Insights, so wait briefly and refresh):
+
+
+
+The top chart in the diagram above shows the end-to-end processing time for a cargo message and the chart below it shows the number of requests.
+
+Now that we have some baseline load through the system, kill the previous load command by pressing `Ctrl+C`, and run the following command instead:
+
+```bash
+cat << EOF | node index.js -c -
+{
+ "tests": [
+ {
+ "name": "Send cargo to cargo processing api with 50% chance of slow port",
+ "target": "cargo-processing-api",
+ "volume": 500,
+ "validateResults": false,
+ "delayBetweenCargoInMilliseconds": 1500,
+ "startingRetryBufferInMilliseconds": 300,
+ "properties": {
+ "chanceToInvalidate": 0,
+ "chaosSettings": [
+ {
+ "target": "cargo-processing-api",
+ "type": "slow-port",
+ "chanceToCauseChaos": 2,
+ "isEnabled": true
+ }
+ ]
+ }
+ }
+ ]
+}
+EOF
+```
+
+This command will generate and send cargo messages as before, but this time with a 50% chance that the destination port will be set to `slow-port`. The `cargo-processing-validator` service contains code that simulates making a call to a service at the destination port. When the port is `slow-port`, the simulation adds an extra delay.
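+
+Conceptually, the simulated call amounts to something like the following sketch (the delay values are assumptions for illustration, not the service's actual numbers):
+
+```typescript
+// Illustrative sketch of the simulated destination-port call
+async function simulatePortCall(destinationPort: string): Promise<void> {
+  const baseDelayMs = 100; // assumed baseline latency
+  const extraDelayMs = destinationPort === 'slow-port' ? 2000 : 0; // assumed extra delay
+  await new Promise((resolve) => setTimeout(resolve, baseDelayMs + extraDelayMs));
+}
+```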
+
+Now that we have some load that includes the `slow-port`, we can go back to the `Service Processing` Workbook and refresh the data. You should see a screen similar to the following:
+
+
+
+In the top chart we can see a slight increase in the overall processing time. The chart below shows that the number of requests hasn't increased. Continuing down the charts, the bottom chart shows that the increase in processing time is due to the `cargo-processing-validator` service.
+
+Further down the workbook we have the "Service Breakdown" section which allows us to drill into telemetry for each of the services. From the `cloud_RoleName` dropdown, select `cargo-processing-validator` and you should see a screen similar to the following:
+
+
+
+The top chart in the "Service Breakdown" section shows the request breakdown for the selected service (mean, median, max and 95%th centile durations) and this confirms that the increase in processing time is due to the `cargo-processing-validator` service. The chart below shows the dependency breakdown for the selected service. Looking at this chart we can see that the dependency for the simulated call to the destination port service looks normal for all ports apart from the `slow-port`.
+The final chart shows the end-to-end processing time broken out by destination port, and this also highlights the increase in processing time for the `slow-port`.
+
+## Implementing Custom Dimensions in cargo-processing-validator
+
+In this section we will look at the code in the `cargo-processing-validator` service to see how the custom dimensions are implemented. The code for the `cargo-processing-validator` service is in the `src/cargo-processing-validator/src` folder.
+
+When a message is received from Service Bus, a `request` telemetry item is started. The code that does this is unaware of the content of the messages, so it only attaches standard fields on the telemetry item. There are two steps to adding custom dimensions to the telemetry item. First, we obtain the telemetry correlation context and set the custom properties. Second, we use a telemetry processor to modify the telemetry items before they are sent to Application Insights.
+
+The code below (from `services/ServiceBusProcessingService.ts`) shows how we obtain the telemetry correlation context and set the custom properties when processing a message from Service Bus:
+
+```typescript
+// get the correlation context
+const correlationContext = appInsights.getCorrelationContext();
+// replace commas in the destination port value with semicolons as commas are not allowed
+const destination = validatedCargo.port.destination.replaceAll(',', ';');
+// set the custom property
+correlationContext.customProperties.setProperty(
+ CUSTOM_PROPERTY_CARGO_DESTINATION,
+ destination
+);
+```
+
+Once the correlation context is updated, we need a telemetry processor that will modify the telemetry items before they are sent to Application Insights. The code below (from `index.ts`) shows how we add a telemetry processor to the Application Insights client and use it to update the telemetry items based on the values in the correlation context:
+
+```typescript
+const client = appInsights.defaultClient;
+client.addTelemetryProcessor((envelope, contextObjects) => {
+  // envelope is the telemetry item being processed
+  // Here we set a variable to point to the properties of the telemetry item for convenience
+  const envelopeProperties = envelope.data?.baseData?.properties;
+
+  // contextObjects.correlationContext is the correlation context associated with the
+  // telemetry item being processed (if set); read its custom properties
+  const customProperties = contextObjects?.correlationContext?.customProperties;
+
+  // Check whether we have the destination property set on the correlation context
+  if (
+    envelopeProperties &&
+    customProperties?.getProperty(CUSTOM_PROPERTY_CARGO_DESTINATION)
+  ) {
+    // Assign the custom dimension value on the telemetry item
+    envelopeProperties['cargo-destination'] = customProperties.getProperty(
+      CUSTOM_PROPERTY_CARGO_DESTINATION
+    );
+  }
+
+  // return true to allow the telemetry item through (we could return false to discard it)
+  return true;
+});
+```
+
+With these pieces in place, we can now see the custom dimension in the telemetry items that are sent to Application Insights. For example, the following query will show the request telemetry items for the `cargo-processing-validator` service and add a `destinationPort` field using the value of the `cargo-destination` custom dimension:
+
+```kusto
+requests
+| where cloud_RoleName == "cargo-processing-validator"
+| extend destinationPort = customDimensions["cargo-destination"]
+| order by timestamp desc
+```
diff --git a/accelerators/aks-sb-azmonitor-microservices/docs/custom-metrics.md b/accelerators/aks-sb-azmonitor-microservices/docs/custom-metrics.md
new file mode 100644
index 0000000..23cc1a1
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/docs/custom-metrics.md
@@ -0,0 +1,37 @@
+# Custom Metrics
+
+Metrics track key indicators over time and provide a neat, numeric value that can be displayed in a time series, used in alerts, and more. The application tracks a multitude of metrics automatically, like the `jvm_memory_used` metric that measures the number of bytes used by the Java-based API services.
+
+Organizations often have additional indicators related to specific business rules or industry-wide ones that are meaningful to track and necessary to understand system health. Custom metrics enable generation of data points over time against these metrics that aren't tracked by default. The application tracks an additional metric, `port_product_qty`, that captures the total quantity of specific products scheduled for shipment to specific ports. Ports do not have unlimited capacity to accept shipping containers. Administrators need to be able to retrieve data on an ad-hoc basis that illuminates product velocity on each port and rely on alerts that proactively notify them when the total shipping container quantity of a given product scheduled for a specific destination port exceeds some value defined by the business.
+
+The `valid-cargo-manager` generates the custom metric as it is the last service to interact with a valid cargo object destined for shipment to a port (invalid cargo objects are simply stored for later processing). It generates a multi-dimensional custom metric, tracking the product quantity, while passing in `product`, `source`, and `destination` dimensions taken from the cargo.
+
+```c#
+private void TrackMultiDimensionalMetrics(ValidCargo cargo)
+{
+ var metric = _telemetryClient.GetMetric("port_product_qty", "product", "source", "destination", _customMetricConfiguration);
+
+ metric.TrackValue(cargo.Product.Quantity,
+ cargo.Product.Name,
+ cargo.Port.Source,
+ cargo.Port.Destination);
+}
+```
+
+Importantly, the `GetMetric` and `TrackValue` methods pre-aggregate the metric before sending the values every minute. `TrackMetric`, also exposed by the SDK, sends a separate telemetry item every time the method is called and is no longer the preferred approach for generating custom metrics. Rather than generate a new record with a specific value every time the metric is tracked, the service exports an aggregated metric record every minute that includes properties like **value**, **valueCount**, **valueSum**, **valueMin**, and **valueMax**. **valueCount** defines the number of times the metric was tracked over that minute, **valueSum** is the total sum of each of the values, etc.
+
+The custom metric is exported each minute for every specific custom dimension combination. All metric data tracked that includes the same `product`, `source`, and `destination` within the same minute will be grouped together in Application Insights records. If `TrackValue` is called twice within the same minute with `product-Cars, source-New York City, destination-Miami` then they will be grouped together. If, in that same minute `TrackValue` is called with `product-Cars, source-Seattle, destination-Tacoma` then that metric data is exported separately:
+
+
+
+The custom metric is exported to Application Insights as both a [log-based and pre-aggregated](https://learn.microsoft.com/en-us/azure/azure-monitor/app/pre-aggregated-metrics-log-metrics) metric. The pre-aggregated version is optimized for time series and enables faster, more performant queries. It _only_ maintains certain dimensions and other specific properties, in contrast with the log-based version that includes all relevant information attached to the record. To ensure that the pre-aggregated metric version has the dimensions we rely on, they must be [enabled via the App Insights resource in the Portal](https://learn.microsoft.com/en-us/azure/azure-monitor/app/pre-aggregated-metrics-log-metrics#custom-metrics-dimensions-and-pre-aggregation) after deployment (currently in Preview and unsupported in ARM).
+
+The alert we employ relies on the `product` and `destination` dimensions within the custom metric, alerting when the total quantity of a given `product` exceeds 1000 for a given `destination` port over a single minute interval. The alert rule maintains different time series for each `product`/`destination` combination and alerts on each separately:
+
+
+
+The `source` port is irrelevant. Cars sent to Miami from New York and cars sent to Miami from Boston will roll up together and the total product quantity across both will be used. If `source` was added as a dimension to the alert, for instance, these would be split into two different time series and alerted on separately. The number of ports and products used could quickly inflate the number of time series Azure Monitor maintains, resulting in throttling, reduced system performance, increased cost, etc. By default, Azure Monitor limits metrics to 1000 total time series and 100 unique values per dimension. These values can be customized and set by the TelemetryClient that originally exports the metrics. The `valid-cargo-manager` that instruments the `port_product_qty` custom metric sets series count and values per dimension limits to 100 and 40 respectively, to guard against potential scale issues. The configuration allows for 40 unique destination ports and products, with no more than 100 time series maintained:
+
+```c#
+ _customMetricConfiguration = new MetricConfiguration(seriesCountLimit: 100, valuesPerDimensionLimit: 40, new MetricSeriesConfigurationForMeasurement(restrictToUInt32Values: false));
+```
diff --git a/accelerators/aks-sb-azmonitor-microservices/docs/distributed-tracing.md b/accelerators/aks-sb-azmonitor-microservices/docs/distributed-tracing.md
new file mode 100644
index 0000000..89a6ac3
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/docs/distributed-tracing.md
@@ -0,0 +1,42 @@
+# Distributed Tracing
+
+Distributed tracing depends on carefully stitching together auto- and manually instrumented spans from both OpenTelemetry- and Application Insights-based tooling before they are exported to Azure Monitor.
+
+## Azure Monitor and OpenTelemetry Data Models
+
+Azure Monitor splits the concept of a generic [OpenTelemetry span](https://opentelemetry.io/docs/concepts/signals/traces/#spans-in-opentelemetry) into a number of specific telemetry items like Requests and Dependencies. Rather than refer to these items as "spans", the term "operation" is heavily used in documentation and tooling. A trace is a distributed logical operation comprised of smaller sub-operations - the Requests, Dependencies, PageViews, etc. In Application Insights, all operations in a distributed trace will share the same `operation_Id` value, while ordering within the trace is defined by `operation_ParentId` values. An operation's `operation_ParentId` will point to the `Id` of another operation in the trace.
+
+OpenTelemetry-based tooling like the OpenTelemetry exporters for Java and the Application Insights SDK for Python (which relies on OpenCensus) use OpenTelemetry span terminology in exposed methods and classes. Spans in these tools encompass all telemetry types and generally expose a [SpanKind](https://opentelemetry.io/docs/concepts/signals/traces/#span-kind) property that dictates the type of item that surfaces in Application Insights. `SpanKind.SERVER` and `SpanKind.CLIENT` spans created in [`invalid-cargo-manager` instrumentation methods](../src/invalid-cargo-manager/src/service/message_receiver.py), for instance, result in export of Request and Dependency items in Application Insights. The `SpanId`, parent `SpanId`, and `TraceId` values in these OpenTelemetry-based libraries surface in Application Insights as `Id`, `operation_ParentId`, and `operation_Id`, respectively.
+
+## Instrumenting the Distributed Trace
+
+### Concepts
+
+The instrumentation process requires generation of operations (spans) with proper attachment of the `operation_Id` and `operation_ParentId` values to ensure they are connected to one another in the same trace, in the correct order.
+
+Each SDK/exporter exposes different methods that allow for creation of operations. The Application Insights SDK for Node [tracks specific operations](https://learn.microsoft.com/en-us/azure/azure-monitor/app/nodejs#telemetryclient-api) using methods like `trackDependency()` and `trackRequest()` on its `TelemetryClient` class. The .NET SDK uses `Activity` classes and [`StartOperation()` calls](https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-end-to-end-tracing?tabs=net-standard-sdk-2#trace-message-processing) exposed by its own `TelemetryClient` class to do the same. The OpenCensus based Python OpenTelemetry exporter, on the other hand, enables [creation of spans](https://learn.microsoft.com/en-us/azure/azure-monitor/app/opentelemetry-enable?tabs=python#instrument-with-opentelemetry) via a Tracer. No spans are manually instrumented in either of the Java-based APIs, but the libraries do [expose the functionality](https://learn.microsoft.com/en-us/azure/azure-monitor/app/opentelemetry-enable?tabs=java#add-custom-spans) to generate them, if necessary.
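+
+As a rough illustration of those Node SDK methods (the names, durations, and result codes below are hypothetical, not taken from the project), manually recording a Request and a Dependency can look like this:
+
+```typescript
+import * as appInsights from 'applicationinsights';
+
+const client = appInsights.defaultClient;
+
+// Sketch only: record an incoming operation, which surfaces as a Request in Application Insights
+client.trackRequest({
+  name: 'ServiceBusReceive ingest-cargo',
+  url: 'sb://example-namespace/ingest-cargo', // hypothetical
+  duration: 125, // milliseconds
+  resultCode: '0',
+  success: true,
+});
+
+// ...and an outgoing call it made, which surfaces as a Dependency
+client.trackDependency({
+  target: 'validated-cargo',
+  name: 'ServiceBusSend validated-cargo',
+  data: 'send validated cargo message',
+  duration: 42,
+  resultCode: 0,
+  success: true,
+  dependencyTypeName: 'Azure Service Bus',
+});
+```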
+
+In order for one service's operations to be properly tied to an operation from an upstream service, a trace context must be passed between them. OpenTelemetry based tooling uses the widely recognized [W3C Trace Context](https://www.w3.org/TR/trace-context/#trace-context-http-headers-format) as a means to pass the required values and Application Insights is transitioning to use the same. W3C Trace Context defines a `traceparent` string that contains the Id values necessary to set a telemetry item's `operation_Id` and `operation_ParentId`. It uses the following syntax:
+
+`version-trace-id-parent-id-trace-flags`
+
+The `trace-id` value is uniquely generated by the first service in the distributed trace and becomes the `operation_Id` in Application Insights. The `parent-id` value refers to the `Id` of the most recent operation in the trace and becomes the `operation_ParentId` property for the next operation. When an upstream service makes a request or sends a message to a downstream service, it attaches the `traceparent` string in the manner dictated by the [communication protocol](https://www.w3.org/TR/trace-context-protocols-registry/#registry). Services that communicate via HTTP, like the inter-service communication between the `cargo-processing-api` and `operations-api`, pass the string in the request headers, while services that communicate via message brokers, like all other inter-service communications in the application, pass the value in the message's application properties.
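+
+To make the mapping concrete, the sketch below (a hypothetical helper, not the project's pre-processor) splits a W3C `traceparent` string into the values that end up as `operation_Id` and `operation_ParentId`:
+
+```typescript
+// Example traceparent value (taken from the W3C spec):
+// 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
+function parseTraceparent(traceparent: string) {
+  const [version, traceId, parentId, traceFlags] = traceparent.split('-');
+  return {
+    operationId: traceId, // becomes operation_Id on the exported telemetry items
+    operationParentId: parentId, // becomes operation_ParentId on the next operation
+    version,
+    traceFlags,
+  };
+}
+```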
+
+### Implementation
+
+A distributed trace begins when a POST request is made to the `cargo-processing-api` service. The initial request is automatically instrumented and generates the `operation_Id` that will be attached to all subsequent telemetry items. The service makes a PUT request to the `operations-api` and automatically attaches the `traceparent` value in the headers, passing in the `operation_Id` and the `Id` of the last instrumented item. The last instrumented item in this case is the Dependency generated by the `cargo-processing-api` which refers to the PUT request. The `operations-api` similarly auto-instruments its own subsequent span data that is tied into the trace. It breaks open the `traceparent` string and uses the values to set trace context for the spans it will instrument. The initial span auto-instrumented by the `operations-api` when the PUT request is made surfaces as a Request, while other spans that refer to Cosmos DB interactions become Dependencies in Application Insights.
+
+When the `cargo-processing-api` receives a response back from the `operations-api`, it sends a message to the `ingest-cargo` Service Bus queue. The `traceparent` string is automatically passed by the service in the message's application properties and is received by the `cargo-processing-validator`. The `parent-id` value passed in the `traceparent` that becomes the `operation_ParentId` now refers to the message send Dependency item generated by the `cargo-processing-api`. The first operation produced by the `cargo-processing-validator` must be parented to this value. The service pulls the necessary `operation_Id` and `operation_ParentId` from the `traceparent` and uses the values in [pre-processor functionality](../src/cargo-processing-validator/src/index.ts) to attach the proper `operation_ParentId` to telemetry items prior to export. After manually instrumenting a request and a number of dependencies related to Service Bus operations and custom business logic, the `cargo-processing-validator` service sends a message to the `validated-cargo` Service Bus Topic. The `traceparent` string is again passed in the message's application properties. While the Java API services automatically attach the `traceparent` string, the `cargo-processing-validator` attaches the value [manually](../src/cargo-processing-validator/src/services/ServiceBusWithTelemetry.ts) in a `Diagnostic-Id` property.
+
+The `valid-cargo-manager` and `invalid-cargo-manager` are both prepared to pull the `operation_Id` and `operation_ParentId` values from the `Diagnostic-Id`. The `valid-cargo-manager` uses the values to [manually instrument a request](../src/valid-cargo-manager/Services/SubscriptionReceiver.cs), then begins automatically instrumenting Cosmos DB and Service Bus operations. The `invalid-cargo-manager` does the same to manually instrument a request, but follows with a [series of manually instrumented dependencies](../src/invalid-cargo-manager/src/service/message_receiver.py) that refer to the same Cosmos DB and Service Bus operations.
+
+## Visualization and Analysis
+
+The generated distributed traces through the valid and invalid flows are easily viewable in the Application Insights [Transaction Diagnostics window](https://learn.microsoft.com/en-us/azure/azure-monitor/app/transaction-diagnostics#transaction-diagnostics-experience):
+
+
+
+
+Transaction Diagnostics displays a distributed trace's individual components with their timing and success properties. It is a visual representation of a KQL query that pulls all operation data associated with a specific `operation_Id`. The interface quickly reveals where issues arose within a specific trace. Inspecting individual traces is a helpful debugging tool, especially combined with correlated logs that provide some additional level of detail about why an operation may have failed.
+
+Aggregated trace data allows for construction of an application topology, visible within the [Application Map](https://learn.microsoft.com/en-us/azure/azure-monitor/app/app-map?tabs=net), and supports a number of monitoring functionalities that the application relies on. Using operation timing and failure properties, performance data can be quickly retrieved and filtered by service, helping to identify which components in the application may be experiencing failure or performance issues. Combined, they enable retrieval of end to end transaction duration. KQL queries that pull end to end and per-service failure and performance data are heavily used in Workbooks and automated Alert rules.
diff --git a/accelerators/aks-sb-azmonitor-microservices/docs/getting-started.md b/accelerators/aks-sb-azmonitor-microservices/docs/getting-started.md
new file mode 100644
index 0000000..97c8cf7
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/docs/getting-started.md
@@ -0,0 +1,108 @@
+# Getting Started
+
+## Prerequisites
+
+Visual Studio Code and dev containers are used to automatically install the required packages necessary to deploy and run the application. To get started, you will need to have the following installed:
+
+- Docker ([link](https://docs.docker.com/get-docker/))
+- Visual Studio Code ([link](https://code.visualstudio.com/download))
+ - Dev Containers extension ([link](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers))
+
+Alternatively, you can deploy and run the application from your local machine but will need to have the following additionally installed:
+
+- Azure CLI ([link](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli))
+- Azure Kubelogin ([link](https://github.com/Azure/Kubelogin))
+- Kubectl ([link](https://kubernetes.io/docs/reference/kubectl/))
+- Helm ([link](https://helm.sh/))
+- Various command line tools ([figlet](http://www.figlet.org/), [jq](https://stedolan.github.io/jq/))
+
+## Running the Application
+
+Open the repository in Visual Studio Code. If you have the Dev Containers extension installed, you will be prompted to "Reopen in Container" to work inside the dev container.
+
+Copy the `.env.sample` file to `.env` and fill in the required values.
+
+The sample uses either Bicep or Terraform to provision the required infrastructure. Run `./deploy-bicep.sh` to deploy the application to Azure using Bicep, or `./deploy-terraform.sh` to do so using Terraform. The scripts will create the required resources in Azure, build the docker images, push them to Azure Container Registry and deploy the containers to Azure Kubernetes Service (AKS).
+
+> **_NOTE:_** By default, the AKS cluster is deployed without AAD integration. To enable AAD integration, pass the `--aks-aad-auth` switch to the deployment script. This will configure authentication for the current `az` user. To configure for a service principal, set the `ARM_CLIENT_ID` value to the client ID for the service principal.
+
+## Sending Requests
+
+After deploying the application, you can use the [`cargo-processing-api.http`](../http/cargo-processing-api.http) file to send requests to it.
+
+The file contains a number of requests that can be sent to the cargo-processing-api service. It uses an `.env` file generated by the deployment script that contains the IP address of the AKS NGINX ingress controller.
+
+Use the "Send Request" options in the file to send `POST`/`PUT` requests to the cargo-processing-api and see the responses.
+
+> **_NOTE:_** By default, the `cargo-processing-api.http` file is configured to use services deployed to AKS. If you are running the services locally, uncomment the lines that set the service address to `localhost`.
+
+## Verifying Successful Deployment
+
+A cargo object sent to the `cargo-processing-api` service can take one of two paths depending on the validation result from the `cargo-processing-validator` service. The first path, when the cargo is valid, incorporates the `cargo-processing-api`, `operations-api`, `cargo-processing-validator`, and `valid-cargo-manager` services and results in a record being stored in the `valid-cargo` Cosmos DB container. An invalid piece of cargo reaches the `invalid-cargo-manager` rather than the `valid-cargo-manager` service and is stored in the `invalid-cargo` Cosmos DB container. End to end functionality can be verified by sending a request through both flows and ensuring that the cargo objects are stored in the proper Cosmos DB containers.
+
+The [`cargo-processing-api.http`](../http/cargo-processing-api.http) file contains `createRequest` and `createRequest_invalid` requests that are used to send a valid and invalid cargo object to the `cargo-processing-api` service, respectively. Use the "Send Request" option on `createRequest` to send a valid request and note the ID returned in the right-hand window (`080f393d-893c-3d80-a267-350c6abef090` in the below example).
+
+```json
+HTTP/1.1 202
+Date: Tue, 25 Apr 2023 21:46:07 GMT
+Content-Type: application/json
+Transfer-Encoding: chunked
+Connection: close
+operation-id: 49d8f01c-a284-44b4-8c97-605d224016af
+
+{
+ "id": "080f393d-893c-3d80-a267-350c6abef090",
+ "timestamp": "2023-04-25T21:46:06.310Z",
+ "product": {
+ "name": "Toys",
+ "quantity": 100
+ },
+ "port": {
+ "source": "New York City",
+ "destination": "Tacoma"
+ },
+ "demandDates": {
+ "start": "2023-05-05T09:45:52.548Z",
+ "end": "2023-05-10T09:45:52.548Z"
+ }
+}
+```
+
+The subsequent request in the `.http` file can be used to retrieve the status of that request. Next, use the "Send Request" option on `createRequest_invalid` to send an invalid request and note the ID returned in the right-hand window (`8438307f-8303-3d9c-b958-9caf08f610b4` in the below example).
+
+```json
+HTTP/1.1 202
+Date: Tue, 25 Apr 2023 21:48:16 GMT
+Content-Type: application/json
+Transfer-Encoding: chunked
+Connection: close
+operation-id: 9d3bdc2f-a4aa-45e5-8965-d9e53716c1e7
+
+{
+ "id": "8438307f-8303-3d9c-b958-9caf08f610b4",
+ "timestamp": "2023-04-25T21:48:16.438Z",
+ "product": {
+ "name": "Toys",
+ "quantity": 100
+ },
+ "port": {
+ "source": "New York City",
+ "destination": "Tacoma"
+ },
+ "demandDates": {
+ "start": "2023-07-04T09:48:20.816Z",
+ "end": "2023-07-09T09:48:20.816Z"
+ }
+}
+```
+
+Finally, navigate to the Cosmos DB instance's Data Explorer window and verify that a new record has been added to both the `valid-cargo` and `invalid-cargo` containers with IDs and other properties that match the ones copied earlier.
+
+
+
+
+## Local Development
+
+To run the services locally, you still need to deploy the supporting infrastructure in Azure. You can run the deployment scripts described in the [Running the Application](#running-the-application) section, but pass the `--skip-helm-deploy` switch to skip the Helm deployment of services to AKS. This will ensure that the services you run locally will be the only services retrieving messages from the Service Bus queues etc.
+
+After the infrastructure deployment completes, run `run-local.sh` to start all of the services locally via `docker compose`. To run a service individually, open it in its dev container and follow the instructions provided in the service's README.
diff --git a/accelerators/aks-sb-azmonitor-microservices/docs/health-checks.md b/accelerators/aks-sb-azmonitor-microservices/docs/health-checks.md
new file mode 100644
index 0000000..6db9391
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/docs/health-checks.md
@@ -0,0 +1,66 @@
+# Health Checks
+
+Monitoring and quickly responding to changes in service health is crucial for distributed applications deployed to an AKS environment. Health checks report the internal status of a microservice at regular intervals and are used by orchestrators, like Kubernetes, to determine if each service is functioning properly. Health checks should examine connections to databases and other dependencies and can report health based on memory usage, CPU utilization, network connectivity, or any other key performance indicators that are critical to the functioning of the microservice. Essentially, a health check should verify that the microservice is able to perform its intended function and that it is not experiencing any critical errors or failures. AKS automatically triggers these health checks and acts upon pods that report back unhealthy.
+
+Health check functionality is often exposed via HTTP endpoints, but Kubernetes supports consumption of TCP and gRPC endpoints as well and is also capable of running `exec` commands exposed by pods. Kubernetes consumes the endpoints or commands via [3 types of probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) - startup, readiness, and liveness probes. Startup probes run after deployment and make the kubelet agent aware that the containers in the pod have started. Kubernetes will not start readiness and liveness probes until the startup probe reports success. Readiness probes alert Kubernetes that the pod is ready to accept traffic and liveness probes are subsequently used to regularly check that the pod is healthy. Pods that fail liveness probes are automatically restarted by AKS to fix ephemeral issues. While different endpoints or commands can be used for each probe type, we elected to reuse the same health check endpoints in our services, declared via the helm charts that deploy the services to AKS.
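+
+For a sense of scale, an HTTP liveness endpoint can be very small. The sketch below is an illustration only (the Node-based services in this project do not expose such an endpoint) of a `/health` route that a probe could call:
+
+```typescript
+import * as http from 'http';
+
+// Minimal sketch of an HTTP health endpoint for a liveness probe
+http
+  .createServer((req, res) => {
+    if (req.url === '/health') {
+      // A fuller check would verify dependency connections (Service Bus, Cosmos DB, ...)
+      res.writeHead(200, { 'Content-Type': 'application/json' });
+      res.end(JSON.stringify({ status: 'UP' }));
+    } else {
+      res.writeHead(404);
+      res.end();
+    }
+  })
+  .listen(8080);
+```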
+
+Services like the `cargo-processing-api` and `operations-api`, which are Spring Boot apps that already expose HTTP endpoints, are easy candidates to expose health checks via HTTP endpoint. `spring-boot-starter-actuator` used in these projects is capable of [exposing a `/health` endpoint](https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints) that reports internal application health using indicators like [dependency connections and disk space](https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.health.auto-configured-health-indicators). The endpoint is configured via the [application.properties](../src/cargo-processing-api/src/main/resources/application.properties) file:
+
+```properties
+management.endpoints.web.exposure.include=health,info
+endpoints.health.sensitive=false
+management.endpoint.health.show-details=always
+```
+
+The `/actuator/health` endpoint that Spring Boot spins up is declared within the helm charts for those services:
+
+```yaml
+livenessProbe:
+ httpGet:
+ path: /actuator/health
+ port: 8080
+ initialDelaySeconds: 60
+ periodSeconds: 20
+ failureThreshold: 3
+ timeoutSeconds: 10
+```
+
+The `cargo-processing-validator`, `valid-cargo-manager`, and `invalid-cargo-manager` are background worker services that do not already expose HTTP endpoints. The `cargo-processing-validator` and `invalid-cargo-manager` do not include explicit health checks. Instead, they are designed to [self-destruct](../src/cargo-processing-validator/src/index.ts) when errors occur. These services restart through errors when dependency connections fail, rather than through failed liveness probes that would result from those same connection failures. In contrast, we elected to demonstrate TCP health check functionality on the `valid-cargo-manager`. A [HealthCheckController](../src/valid-cargo-manager/Controllers/HealthCheckController.cs) that starts a TCP server is added to the list of [services configured during startup](../src/valid-cargo-manager/Program.cs). The controller uses [CosmosDBHealthChecker](../src/valid-cargo-manager/HealthCheck/CosmosDbHealthChecker.cs) and [ServiceBusHealthChecker](../src/valid-cargo-manager/HealthCheck/ServiceBusHealthChecker.cs) classes to report the status of connections to those dependent services. The exposed TCP port and other configuration details are set via the [appsettings.json file](../src/valid-cargo-manager/appsettings.sample.json):
+
+```json
+"HealthCheck": {
+ "TcpServer": {
+ "Port": 3030
+ },
+ "CosmosDB": {
+ "MaxDurationMs": 200
+ },
+ "ServiceBus": {
+ "MaxDurationMs": 200
+ }
+}
+```
+
+The TCP socket that the service exposes is then declared within its helm chart:
+
+```yaml
+livenessProbe:
+ tcpSocket:
+ port: 3030
+ initialDelaySeconds: 30
+ periodSeconds: 10
+ failureThreshold: 3
+ timeoutSeconds: 10
+```
+
+Kubernetes automatically consumes these endpoints and will take action on a pod if a probe fails, like a pod restart if a liveness probe fails. The calls to these endpoints can be viewed in the Logs window, via the `requests` table:
+
+```kusto
+requests
+| where cloud_RoleName == "cargo-processing-api" and url contains "/health"
+```
+
+
+
+While Kubernetes will automatically respond to these events, the application additionally includes alerts that proactively notify admins about issues related to health checks so they can take additional action to debug, if necessary. Each microservice has a health check failure alert and a health check not reporting alert that consume the same logs used above, as well as a pod restart alert triggered when a service pod restarts more than once within 5 minutes.
+Health checks often fail due to ephemeral issues that can be resolved by automatic Kubernetes actions, like a pod restart, but other underlying issues may require human intervention. Alerts offer an additional monitoring layer that reduces the time needed to detect and fix the more serious issues that health checks surface.
diff --git a/accelerators/aks-sb-azmonitor-microservices/docs/introducing-chaos.md b/accelerators/aks-sb-azmonitor-microservices/docs/introducing-chaos.md
new file mode 100644
index 0000000..5c5333b
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/docs/introducing-chaos.md
@@ -0,0 +1,25 @@
+# Introducing Chaos
+
+Chaos engineering involves intentionally introducing failures to assess resilience and identify potential weaknesses in an application. Controlled experiments are conducted to understand how the application behaves in unexpected situations. Development teams can identify proper mitigation techniques for real-world scenarios _before_ they occur in production. Chaos engineering is closely tied with the concepts of observability and monitoring - system behavior must be accurately measured over time to understand how it responds to various failure scenarios. Introduction of chaos into the cargo processing application allows us to test the alerting and visualization functionality included in the project, as well as use those same tools to determine best case mitigation techniques for a set of fault scenarios that the team expects the application to handle gracefully.
+
+Azure offers [Azure Chaos Studio](https://learn.microsoft.com/en-us/azure/chaos-studio/chaos-studio-overview) as a tool to inject common fault scenarios into the application, like CPU/memory pressure or downed nodes in a cluster. Rather than use Chaos Studio, we elected to add chaos into the application code directly, in both the `cargo-processing-api` and `cargo-processing-validator` services, with built-in integration with our existing load test scripts.
+
+The [cargo-test-scripts](../src/cargo-test-scripts/) folder includes a JavaScript-based application used to send requests to the `cargo-processing-api` ingress endpoint or to downstream services directly. Tests are supplied via [JSON-based test run configurations](../src/cargo-test-scripts/testConfigurations/valid_tests.json) that send a configurable number of requests to specific target services. Importantly, `properties.chaosSettings` is available on tests that target the `cargo-processing-api` and `cargo-processing-validator` services, with a set of available `type` options that cause specific fault scenarios in those services:
+
+| Target | Type | Description |
+| -------------------------- | ---------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| cargo-processing-api | operations-api-failure | Will cause a chaos exception to occur when the cargo-processing-api attempts to call the operations-api. |
+| cargo-processing-api | process-ending | Will cause the cargo-processing-api to shut down. |
+| cargo-processing-api | service-bus-failure | Will cause the service to close the service-bus connection right before it attempts to use it. |
+| cargo-processing-api | invalid-schema | Will cause the test script to modify the cargo object being sent in a way that causes the cargo-processing-api to throw an invalid json schema exception. |
+| cargo-processing-validator | service-bus-failure | Will cause the service to close the service-bus connection right before it attempts to use it. |
+| cargo-processing-validator | process-killing | Will cause the cargo-processing-validator to shut down. |
+| cargo-processing-validator | invalid-schema | Sends a message that is missing its demandDates directly to the ingest-cargo queue. |
+
+The test scripts use a [raiseChaos utility function](../src/cargo-test-scripts/dataBuilderUtils.js) that sets a cargo object's `source` port to the `target` and `destination` port to the `type` specified above in a chaos test. The services themselves are configured to execute fault scenarios when the source and destination ports match these known strings. The `cargo-processing-validator` service contains a [ChaosMonkey](../src/cargo-processing-validator/src/chaos/ChaosMonkey.ts) class that determines whether to cause chaos based on the source and destination ports. It includes [ProcessEnding](../src/cargo-processing-validator/src/chaos/ProcessEndingMonkey.ts) and [ServiceBusKilling](../src/cargo-processing-validator/src/chaos/ServiceBusKillingMonkey.ts) classes that exit the running process or close the existing service bus connection, respectively. If the `source` port for a cargo object is set to `cargo-processing-validator` and `destination` port is set to `process-killing`, the `ProcessEndingMonkey` will initialize and exit the current process, for example. The `cargo-processing-api` service has similar ChaosMonkey implementations.
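+
+The decision logic boils down to a simple check on the cargo's ports. The sketch below is a conceptual outline (not the actual ChaosMonkey implementation) of how a matching source/destination combination triggers a fault:
+
+```typescript
+interface Cargo {
+  port: { source: string; destination: string };
+}
+
+// Conceptual outline of the port-based chaos check
+function maybeCauseChaos(cargo: Cargo): void {
+  if (cargo.port.source !== 'cargo-processing-validator') {
+    return; // the chaos is aimed at a different service
+  }
+  if (cargo.port.destination === 'process-killing') {
+    process.exit(1); // mirrors the ProcessEnding behaviour
+  }
+  // other destination values (e.g. 'service-bus-failure') would map to other fault types
+}
+```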
+
+The chaos tests should cause internal exceptions, restarts, and health check issues that should immediately surface in alerts (detailed below). Workbook tiles (detailed below) should illuminate how the application performed over the test run, displaying increases in request duration, dead-lettered messages, and other indicators of application health. To run a chaos test, open the [cargo-test-scripts](../src/cargo-test-scripts/) folder in its dedicated dev container. The folder contains a number of [pre-defined test configurations](../src/cargo-test-scripts/testConfigurations/) that include a [`cargo_processing_api_chaos_tests.json`](../src/cargo-test-scripts/testConfigurations/cargo_processing_api_chaos_tests.json) configuration. From the terminal in the dev container, run the following command to trigger each of the types of fault scenarios listed above:
+
+```bash
+node ./index.js -c ./testConfigurations/cargo_processing_api_chaos_tests.json
+```
diff --git a/accelerators/aks-sb-azmonitor-microservices/docs/reducing-telemetry-volume.md b/accelerators/aks-sb-azmonitor-microservices/docs/reducing-telemetry-volume.md
new file mode 100644
index 0000000..69576e2
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/docs/reducing-telemetry-volume.md
@@ -0,0 +1,11 @@
+# Reducing Telemetry Volume
+
+The application does not have high-scale requirements, but a variety of techniques were used, or could be used, to mitigate storage and cost concerns.
+
+The `valid-cargo-manager` and both Java APIs [implement adaptive sampling by default](https://learn.microsoft.com/en-us/azure/azure-monitor/app/sampling?tabs=net-core-new#configuring-adaptive-sampling-for-aspnet-applications), limiting the number of requests sent to Application Insights. Both services target a specific number of items to export per minute - the actual sampling rate can vary depending on the number of requests the services handle. Given the low scale requirements for the application, sampling does not actually kick in for either of these services with the current test scripts provided. Implementing coordinated fixed-rate sampling across the service architecture would result in reduced storage costs and alleviate retention and rotation concerns. The Java services can implement explicit [fixed rate sampling and sampling overrides](https://learn.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-sampling-overrides#getting-started) via the [`applicationinsights.json`](../src/cargo-processing-api/applicationinsights.json) file, or by supplying specific environment variables that overwrite those properties. The `cargo-processing-validator` can do the same by [providing a percentage](https://github.com/microsoft/ApplicationInsights-node.js/blob/dd7c195f481acdaf39c4abc271424fb750aac81f/README.md#sampling) to the `applicationInsights.defaultClient` in the [`index.ts`](../src/cargo-processing-validator/src/index.ts) file. The `valid-cargo-manager` can [add a sampling rate](https://learn.microsoft.com/en-us/azure/azure-monitor/app/sampling?tabs=net-core-new#configuring-fixed-rate-sampling-for-aspnet-applications) to the [`CreateHostBuilder`](../src/valid-cargo-manager/Program.cs), while the `invalid-cargo-manager` could do so via a `ProbabilitySampler` that can be [passed to tracer classes](https://learn.microsoft.com/en-us/azure/azure-monitor/app/sampling?tabs=net-core-new#configuring-fixed-rate-sampling-for-opencensus-python-applications), rather than the `AlwaysOnSampler` that is [currently used](../src/invalid-cargo-manager/src/service/telemetry_publisher.py).
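+
+For the Node-based `cargo-processing-validator`, a fixed sampling rate is a small configuration change along these lines (the 25% value is only an example):
+
+```typescript
+import * as appInsights from 'applicationinsights';
+
+// Sketch: configure fixed-rate sampling with the Application Insights Node.js SDK
+appInsights.setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING).start();
+appInsights.defaultClient.config.samplingPercentage = 25; // export roughly 25% of telemetry
+```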
+
+Especially noisy libraries or specific telemetry items can be targeted with pre-processing functionality that serves to suppress or transform those items. The `valid-cargo-manager` suppresses items from the `Microsoft` namespace that do not meet or exceed the `Warning` severity level, for instance, via the [appsettings.json file](../src/valid-cargo-manager/appsettings.sample.json). The `cargo-processing-validator` includes a [code based pre-processor](../src/cargo-processing-validator/src/index.ts) used to transform outgoing telemetry items before export, as does the [`invalid-cargo-manager`](../src/invalid-cargo-manager/src/service/telemetry_publisher.py). The existing functionalities could be extended to suppress or remove unnecessary properties from additional items, and similar pre-processing functionality could be added to the [Java](https://learn.microsoft.com/en-us/azure/azure-monitor/app/java-standalone-telemetry-processors#getting-started) services via configuration file.
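+
+A telemetry processor used for suppression simply returns `false` for the items it wants to drop. The sketch below illustrates the idea; the matching rule (discarding successful health-check requests) is an example, not something the project currently does:
+
+```typescript
+import * as appInsights from 'applicationinsights';
+
+// Sketch: drop noisy telemetry items before they are exported
+appInsights.defaultClient.addTelemetryProcessor((envelope) => {
+  const baseData = envelope.data?.baseData;
+  if (baseData?.name?.includes('/health') && baseData?.success === true) {
+    return false; // discard the item
+  }
+  return true; // keep everything else
+});
+```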
+
+[Log levels](https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.logging.loglevel?view=dotnet-plat-ext-7.0) are used to accommodate the fact that certain services automatically instrument telemetry data that is not necessary to capture by default. The `cargo-processing-validator` and `invalid-cargo-manager` services do not emit nearly as much data as the other three services and are [configured to capture all logs](../src/invalid-cargo-manager/src/service/logging_config.py) using DEBUG. The Java-based APIs, on the other hand, are configured to [capture all logs of level INFO and above](../src/cargo-processing-api/applicationinsights.json), automatically suppressing debug statements, by default. As mentioned, the `valid-cargo-manager` uses the appsettings.json file to [capture WARNING and above](../src/valid-cargo-manager/appsettings.sample.json) for logs from the Microsoft namespace and INFO for others. These configuration-based log levels can be easily reduced to DEBUG to capture additional logs in a debugging scenario. The application uses the default retention policy for all Azure Monitor tables - generally 90 days, though certain tables have a 30-day default retention policy - and the retention period could be [set on the Log Analytics resource](https://learn.microsoft.com/en-us/azure/templates/microsoft.operationalinsights/workspaces?pivots=deployment-language-bicep#workspaceproperties) in Bicep or Terraform.
+
+Visit Azure documentation for additional information on [sampling](https://learn.microsoft.com/en-us/azure/azure-monitor/app/sampling?tabs=net-core-new), [pre-processing](https://learn.microsoft.com/en-us/azure/azure-monitor/app/api-filtering-sampling), log levels and [retention](https://learn.microsoft.com/en-us/azure/azure-monitor/app/data-retention-privacy), and more.
diff --git a/accelerators/aks-sb-azmonitor-microservices/docs/workbooks.md b/accelerators/aks-sb-azmonitor-microservices/docs/workbooks.md
new file mode 100644
index 0000000..0b8772a
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/docs/workbooks.md
@@ -0,0 +1,74 @@
+# Workbooks
+
+The application utilizes Azure Workbooks to visualize/analyze the extensive telemetry data that has been captured by the centralized Azure Monitor backend. Workbooks allow you to seamlessly display and track all relevant configured data within the Azure Portal, without the need to navigate away.
+
+Workbooks can be deployed using infrastructure as code tools, similar to other Azure services. In this scenario, the deployment creates three distinct workbooks, each focusing on specific categories that feature the following charts:
+
+| Workbook | Chart | Type | Description |
+| ----------------- | -------------------------------- | --------- | ------------------------------------------------------------------------------------------------- |
+| Index | servicesExceptionsQuery | KQL Query | Displays exceptions that occurred while working with the system. |
+| Index | servicesMonitoringQuery | KQL Query | Displays the big picture of resources per service. |
+| Index | workbooksLinksText | Text | Includes links to the remaining workbooks. |
+| Infrastructure | serviceBusCompletedTimesQuery | KQL Query | Displays statistics of service bus completed operations. |
+| Infrastructure | serviceBusMessagingMetric | Metric | Displays the count of active, delivered and dead-lettered messages in a Queue or Topic. |
+| Infrastructure | serviceBusThrottledMetric | Metric | Displays the number of throttled requests in Service Bus. |
+| Infrastructure | cosmosDbLatencyOfReadsQuery | KQL Query | Displays the average time per read requests from Cosmos DB. |
+| Infrastructure | cosmosDbOperationsQuery | KQL Query | Displays the number of valid, invalid, and operations writes into Cosmos DB. |
+| Infrastructure | keyVaultSaturationMetric | Metric | Displays the KeyVault saturation percentage. |
+| Infrastructure | keyVaultLatencyMetric | Metric | Displays the latency when executing an operation to KeyVault. |
+| Infrastructure | keyVaultResultsMetric | Metric | Displays the count of Key Vault API Results. |
+| Infrastructure | aksCpuMetric | Metric | Displays the max count of CPU percentage of the cluster. |
+| Infrastructure | aksRequestsMetric | Metric | Displays the average inflight requests to the cluster. |
+| System processing | endpointsRequestsStatisticsQuery | KQL Query | Displays different measures of time per request. |
+| System processing | endpointsRequestsQuery | KQL Query | Extracts the last column from the previous chart in order to gain more focus. |
+| System processing | lastOperationsQuery | KQL Query | Shows the last 100 operations executed and their associated operation ID. |
+| System processing | transactionSearchBladeText | Text | Link to a transaction search blade. |
+| System processing | additionalTelemetryText | Text | Link to get more telemetry in sections like Application Map, Availability, Failures, Performance. |
+| System processing | operationsParameters | KQL Query | Parameters designed to get more details in the following charts. |
+| System processing | endToEndProcessingQuery | KQL Query | Displays the end to end processing time. |
+| System processing | requestsCountQuery | KQL Query | Displays the request count. |
+| System processing | servicesProcessingTimeQuery | KQL Query | Displays the processing time in the services. |
+| System processing | serviceDependencyQuery | KQL Query | Displays the service dependency duration. |
+| System processing | destinationPortBreakdownQuery | KQL Query | Displays the end to end processing time by destination port. |
+| System processing | podRestartQuery | KQL Query | Displays the number of times each service pod has restarted. |
+
+No matter what infrastructure deployment tool is used, workbook content is supplied via the same set of **JSON** templates, found in the [workbooks](../infrastructure/workbooks/) folder. The templates demonstrate proper workbook structure/syntax and include a variety of types of visualization items [available for use in workbooks](https://learn.microsoft.com/en-us/azure/azure-monitor/visualize/workbooks-visualizations), like text and charts. The templates also illustrate how to pass required parameters from Bicep and Terraform to the workbook JSON content, like the IDs of the source resources for log query and metric visualizations. In the following snippet, a metric chart receives the ID of the AKS cluster and uses it in the **resourceIds** field.
+
+```json
+{
+ "type": 10,
+ "content": {
+ "chartId": "workbook171b383f-5043-41dd-9154-a1fa92367891",
+ "version": "MetricsItem/2.0",
+ "size": 0,
+ "showAnalytics": true,
+ "chartType": 3,
+ "color": "pink",
+ "resourceType": "microsoft.containerservice/managedclusters",
+ "metricScope": 0,
+ "resourceIds": ["${aks_id}"],
+ "timeContext": {
+ "durationMs": 3600000
+ },
+ "metrics": [
+ {
+ "namespace": "microsoft.containerservice/managedclusters",
+ "metric": "microsoft.containerservice/managedclusters-Nodes (PREVIEW)-node_cpu_usage_percentage",
+ "aggregation": 3,
+ "splitBy": null
+ }
+ ],
+ "gridSettings": {
+ "rowLimit": 10000
+ }
+ },
+ "name": "aksCpuMetric"
+}
+```
+
+Development teams can adapt the workbook presentation according to how they want to visualize data. Chart colors, for instance, can be used to visually separate the tools they are monitoring, allowing for easy identification of which resource and signal is being observed:
+
+
+
+
+Azure Workbooks can provide a dynamic presentation that captures all relevant data in one single visualization tool, enabling creation of a single pane of glass for application administrators. Not all projects will look for the same telemetry, as each solution will focus on different metrics according to their specific needs.
diff --git a/accelerators/aks-sb-azmonitor-microservices/http/.env.sample b/accelerators/aks-sb-azmonitor-microservices/http/.env.sample
new file mode 100644
index 0000000..28a43c7
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/http/.env.sample
@@ -0,0 +1,6 @@
+# Copy this file to .env and fill in the values.
+
+
+# Run the following command to get the SERVICE_IP value:
+# kubectl get svc --namespace default cargo-processing-api --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"
+SERVICE_IP=
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/http/cargo-processing-api.http b/accelerators/aks-sb-azmonitor-microservices/http/cargo-processing-api.http
new file mode 100644
index 0000000..2b88fe4
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/http/cargo-processing-api.http
@@ -0,0 +1,122 @@
+# This file shows how to make requests against the deployed API
+# The following lines load the IP address for the deployed services from a .env file
+# This file is created for you when you deploy the services
+@cargo_service=http://{{$dotenv SERVICE_IP}}
+@operations_service=http://{{$dotenv SERVICE_IP}}
+
+# Uncomment the following lines to use locally running services
+# @cargo_service=http://localhost:8080
+# @operations_service=http://localhost:8081
+
+
+#
+# issue a POST request to create a valid cargo request
+#
+# @name createRequest
+POST {{cargo_service}}/cargo/
+Content-Type: application/json
+operation-id: {{$guid}}
+
+{
+ "product": {
+ "name": "Toys",
+ "quantity": 100
+ },
+ "port": {
+ "source": "New York City",
+ "destination": "Tacoma"
+ },
+ "demandDates": {
+ "start": "{{$localDatetime "YYYY-MM-DDThh:mm:ss.ms" 10 d}}Z",
+ "end": "{{$localDatetime "YYYY-MM-DDThh:mm:ss.ms" 15 d}}Z"
+ }
+}
+
+###
+# issue a PUT request to update the previous cargo request
+#
+
+PUT {{cargo_service}}/cargo/{{createRequest.response.body.id}}
+Content-Type: application/json
+
+{
+ "product": {
+ "name": "Toys",
+ "quantity": 100
+ },
+ "port": {
+ "source": "New York City",
+ "destination": "Seattle"
+ },
+ "demandDates": {
+ "start": "{{$localDatetime "YYYY-MM-DDThh:mm:ss.ms" 10 d}}Z",
+ "end": "{{$localDatetime "YYYY-MM-DDThh:mm:ss.ms" 15 d}}Z"
+ }
+}
+
+
+###
+# issue a GET request to retrieve the status of the previous cargo request
+#
+GET {{operations_service}}/operations/{{createRequest.response.headers.operation-id}}
+
+###############################################################
+
+#
+# issue a POST request to create a valid cargo request (start date cannot be more than 60 days in the future)
+#
+
+# @name createRequest_invalid
+POST {{cargo_service}}/cargo/
+Content-Type: application/json
+operation-id: {{$guid}}
+
+{
+ "product": {
+ "name": "Toys",
+ "quantity": 100
+ },
+ "port": {
+ "source": "New York City",
+ "destination": "Tacoma"
+ },
+ "demandDates": {
+ "start": "{{$localDatetime "YYYY-MM-DDThh:mm:ss.ms" 70 d}}Z",
+ "end": "{{$localDatetime "YYYY-MM-DDThh:mm:ss.ms" 75 d}}Z"
+ }
+}
+
+
+
+###
+# issue a GET request to retrieve the status of the previous cargo request
+#
+
+GET {{operations_service}}/operations/{{createRequest_invalid.response.headers.operation-id}}
+
+
+###############################################################
+# Test degraded behaviour:
+
+###
+# issue a POST request to create a cargo request with processing delays
+# (destination port slow-port)
+#
+POST {{cargo_service}}/cargo/
+Content-Type: application/json
+operation-id: {{$guid}}
+
+{
+ "product": {
+ "name": "Toys",
+ "quantity": 100
+ },
+ "port": {
+ "source": "New York City",
+ "destination": "slow-port"
+ },
+ "demandDates": {
+ "start": "{{$localDatetime "YYYY-MM-DDThh:mm:ss.ms" 10 d}}Z",
+ "end": "{{$localDatetime "YYYY-MM-DDThh:mm:ss.ms" 15 d}}Z"
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/abbreviations.json b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/abbreviations.json
new file mode 100644
index 0000000..a4fc9df
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/abbreviations.json
@@ -0,0 +1,135 @@
+{
+ "analysisServicesServers": "as",
+ "apiManagementService": "apim-",
+ "appConfigurationConfigurationStores": "appcs-",
+ "appManagedEnvironments": "cae-",
+ "appContainerApps": "ca-",
+ "authorizationPolicyDefinitions": "policy-",
+ "automationAutomationAccounts": "aa-",
+ "blueprintBlueprints": "bp-",
+ "blueprintBlueprintsArtifacts": "bpa-",
+ "cacheRedis": "redis-",
+ "cdnProfiles": "cdnp-",
+ "cdnProfilesEndpoints": "cdne-",
+ "cognitiveServicesAccounts": "cog-",
+ "cognitiveServicesFormRecognizer": "cog-fr-",
+ "cognitiveServicesTextAnalytics": "cog-ta-",
+ "computeAvailabilitySets": "avail-",
+ "computeCloudServices": "cld-",
+ "computeDiskEncryptionSets": "des",
+ "computeDisks": "disk",
+ "computeDisksOs": "osdisk",
+ "computeGalleries": "gal",
+ "computeSnapshots": "snap-",
+ "computeVirtualMachines": "vm",
+ "computeVirtualMachineScaleSets": "vmss-",
+ "containerInstanceContainerGroups": "ci",
+ "containerRegistryRegistries": "cr",
+ "containerServiceManagedClusters": "aks-",
+ "databricksWorkspaces": "dbw-",
+ "dataFactoryFactories": "adf-",
+ "dataLakeAnalyticsAccounts": "dla",
+ "dataLakeStoreAccounts": "dls",
+ "dataMigrationServices": "dms-",
+ "dBforMySQLServers": "mysql-",
+ "dBforPostgreSQLServers": "psql-",
+ "devicesIotHubs": "iot-",
+ "devicesProvisioningServices": "provs-",
+ "devicesProvisioningServicesCertificates": "pcert-",
+ "documentDBDatabaseAccounts": "cosmos-",
+ "eventGridDomains": "evgd-",
+ "eventGridDomainsTopics": "evgt-",
+ "eventGridEventSubscriptions": "evgs-",
+ "eventHubNamespaces": "evhns-",
+ "eventHubNamespacesEventHubs": "evh-",
+ "hdInsightClustersHadoop": "hadoop-",
+ "hdInsightClustersHbase": "hbase-",
+ "hdInsightClustersKafka": "kafka-",
+ "hdInsightClustersMl": "mls-",
+ "hdInsightClustersSpark": "spark-",
+ "hdInsightClustersStorm": "storm-",
+ "hybridComputeMachines": "arcs-",
+ "insightsActionGroups": "ag-",
+ "insightsComponents": "appi-",
+ "keyVaultVaults": "kv-",
+ "kubernetesConnectedClusters": "arck",
+ "kustoClusters": "dec",
+ "kustoClustersDatabases": "dedb",
+ "logicIntegrationAccounts": "ia-",
+ "logicWorkflows": "logic-",
+ "machineLearningServicesWorkspaces": "mlw-",
+ "managedIdentityUserAssignedIdentities": "id-",
+ "managementManagementGroups": "mg-",
+ "migrateAssessmentProjects": "migr-",
+ "networkApplicationGateways": "agw-",
+ "networkApplicationSecurityGroups": "asg-",
+ "networkAzureFirewalls": "afw-",
+ "networkBastionHosts": "bas-",
+ "networkConnections": "con-",
+ "networkDnsZones": "dnsz-",
+ "networkExpressRouteCircuits": "erc-",
+ "networkFirewallPolicies": "afwp-",
+ "networkFirewallPoliciesWebApplication": "waf",
+ "networkFirewallPoliciesRuleGroups": "wafrg",
+ "networkFrontDoors": "fd-",
+ "networkFrontdoorWebApplicationFirewallPolicies": "fdfp-",
+ "networkLoadBalancersExternal": "lbe-",
+ "networkLoadBalancersInternal": "lbi-",
+ "networkLoadBalancersInboundNatRules": "rule-",
+ "networkLocalNetworkGateways": "lgw-",
+ "networkNatGateways": "ng-",
+ "networkNetworkInterfaces": "nic-",
+ "networkNetworkSecurityGroups": "nsg-",
+ "networkNetworkSecurityGroupsSecurityRules": "nsgsr-",
+ "networkNetworkWatchers": "nw-",
+ "networkPrivateDnsZones": "pdnsz-",
+ "networkPrivateLinkServices": "pl-",
+ "networkPublicIPAddresses": "pip-",
+ "networkPublicIPPrefixes": "ippre-",
+ "networkRouteFilters": "rf-",
+ "networkRouteTables": "rt-",
+ "networkRouteTablesRoutes": "udr-",
+ "networkTrafficManagerProfiles": "traf-",
+ "networkVirtualNetworkGateways": "vgw-",
+ "networkVirtualNetworks": "vnet-",
+ "networkVirtualNetworksSubnets": "snet-",
+ "networkVirtualNetworksVirtualNetworkPeerings": "peer-",
+ "networkVirtualWans": "vwan-",
+ "networkVpnGateways": "vpng-",
+ "networkVpnGatewaysVpnConnections": "vcn-",
+ "networkVpnGatewaysVpnSites": "vst-",
+ "notificationHubsNamespaces": "ntfns-",
+ "notificationHubsNamespacesNotificationHubs": "ntf-",
+ "operationalInsightsWorkspaces": "log-",
+ "portalDashboards": "dash-",
+ "powerBIDedicatedCapacities": "pbi-",
+ "purviewAccounts": "pview-",
+ "recoveryServicesVaults": "rsv-",
+ "resourcesResourceGroups": "rg-",
+ "searchSearchServices": "srch-",
+ "serviceBusNamespaces": "sb-",
+ "serviceBusNamespacesQueues": "sbq-",
+ "serviceBusNamespacesTopics": "sbt-",
+ "serviceEndPointPolicies": "se-",
+ "serviceFabricClusters": "sf-",
+ "signalRServiceSignalR": "sigr",
+ "sqlManagedInstances": "sqlmi-",
+ "sqlServers": "sql-",
+ "sqlServersDataWarehouse": "sqldw-",
+ "sqlServersDatabases": "sqldb-",
+ "sqlServersDatabasesStretch": "sqlstrdb-",
+ "storageStorageAccounts": "st",
+ "storageStorageAccountsVm": "stvm",
+ "storSimpleManagers": "ssimp",
+ "streamAnalyticsCluster": "asa-",
+ "synapseWorkspaces": "syn",
+ "synapseWorkspacesAnalyticsWorkspaces": "synw",
+ "synapseWorkspacesSqlPoolsDedicated": "syndp",
+ "synapseWorkspacesSqlPoolsSpark": "synsp",
+ "timeSeriesInsightsEnvironments": "tsi-",
+ "webServerFarms": "plan-",
+ "webSitesAppService": "app-",
+ "webSitesAppServiceEnvironment": "ase-",
+ "webSitesFunctions": "func-",
+ "webStaticSites": "stapp-"
+}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/azuredeploy.parameters.sample.json b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/azuredeploy.parameters.sample.json
new file mode 100644
index 0000000..269e46c
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/azuredeploy.parameters.sample.json
@@ -0,0 +1,48 @@
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "value": "eastus"
+ },
+ "uniqueUserName": {
+ "value": "myusername"
+ },
+ "cosmosDatabaseName": {
+ "value": "cargo"
+ },
+ "cosmosContainer1Name": {
+ "value": "valid-cargo"
+ },
+ "cosmosContainer2Name": {
+ "value": "invalid-cargo"
+ },
+ "cosmosContainer3Name": {
+ "value": "operations"
+ },
+ "serviceBusQueue1Name": {
+ "value": "ingest-cargo"
+ },
+ "serviceBusQueue2Name": {
+ "value": "operation-state"
+ },
+ "serviceBusTopicName": {
+ "value": "validated-cargo"
+ },
+ "serviceBusSubscription1Name": {
+ "value": "valid-cargo"
+ },
+ "serviceBusSubscription2Name": {
+ "value": "invalid-cargo"
+ },
+ "serviceBusTopicRule1Name": {
+ "value": "valid"
+ },
+ "serviceBusTopicRule2Name": {
+ "value": "invalid"
+ },
+ "notificationEmailAddress": {
+ "value": "alias@microsoft.com"
+ }
+ }
+}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/main.bicep b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/main.bicep
new file mode 100644
index 0000000..3b2cb59
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/main.bicep
@@ -0,0 +1,219 @@
+targetScope = 'subscription'
+
+//parameters section
+@description('Specifies the supported Azure location (region) where the resources will be deployed')
+@minLength(1)
+param location string
+
+@description('Identifies the author of the deployed resources; the value is included in the name of every deployed resource')
+@minLength(1)
+param uniqueUserName string
+
+@description('Name for the Cosmos DB SQL database')
+@minLength(1)
+param cosmosDatabaseName string
+
+@description('Name for the first Cosmos DB SQL container')
+@minLength(1)
+param cosmosContainer1Name string
+
+@description('Name for the second Cosmos DB SQL container')
+@minLength(1)
+param cosmosContainer2Name string
+
+@description('Name for the third Cosmos DB SQL container')
+@minLength(1)
+param cosmosContainer3Name string
+
+@description('Name for the first Service Bus Queue')
+@minLength(1)
+param serviceBusQueue1Name string
+
+@description('Name for the second Service Bus Queue')
+@minLength(1)
+param serviceBusQueue2Name string
+
+@description('Name for the Service Bus Topic')
+@minLength(1)
+param serviceBusTopicName string
+
+@description('Name for the first Service Bus Subscription')
+@minLength(1)
+param serviceBusSubscription1Name string
+
+@description('Name for the second Service Bus Subscription')
+@minLength(1)
+param serviceBusSubscription2Name string
+
+@description('Name for the first Service Bus Subscriptions filter rule')
+@minLength(1)
+param serviceBusTopicRule1Name string
+
+@description('Name for the second Service Bus Subscriptions filter rule')
+@minLength(1)
+param serviceBusTopicRule2Name string
+
+@description('Tenant Id for the service principal that will be in charge of KeyVault access')
+@minLength(1)
+param kvTenantId string = tenant().tenantId
+
+@description('Definition ID for the role granted to AKS on the container registry (defaults to the built-in Contributor role)')
+@minLength(1)
+param roleAcrPull string = 'b24988ac-6180-42a0-ab88-20f7382dd24c'
+
+@description('Configure Azure Active Directory authentication for Kubernetes cluster')
+param aksAadAuth bool = false
+
+@description('The object ID of the Azure Active Directory user to make cluster admin (only valid if aksAadAuth is true)')
+param aksAadAdminUserObjectId string = ''
+
+@description('Email address for alert notifications')
+@minLength(1)
+param notificationEmailAddress string
+
+//load abbreviations for Azure features
+var abbrs = loadJsonContent('abbreviations.json')
+
+//variables section
+var toolName = 'bicep'
+var resourceGroupName = '${abbrs.resourcesResourceGroups}${toolName}-${uniqueUserName}'
+var acrName = '${abbrs.containerRegistryRegistries}${toolName}${uniqueUserName}'
+var kvName = '${abbrs.keyVaultVaults}${toolName}-${uniqueUserName}'
+var appInsightsName = '${abbrs.insightsComponents}${uniqueUserName}'
+var logAnalyticsName = '${abbrs.operationalInsightsWorkspaces}${toolName}-${uniqueUserName}'
+var aksName = '${abbrs.containerServiceManagedClusters}${toolName}-${uniqueUserName}'
+var cosmosDBName = '${abbrs.documentDBDatabaseAccounts}${toolName}-${uniqueUserName}'
+var serviceBusName = '${abbrs.serviceBusNamespaces}${toolName}-${uniqueUserName}'
+
+//resourceGroup section
+resource resourceGroup 'Microsoft.Resources/resourceGroups@2021-04-01' = {
+ name: resourceGroupName
+ location: location
+}
+
+resource contributorRoleDefinition 'Microsoft.Authorization/roleDefinitions@2018-01-01-preview' existing = {
+ scope: subscription()
+ name: roleAcrPull
+}
+
+//modules section
+module acr 'modules/acr.bicep' = {
+ name: 'acrDeploy'
+ scope: resourceGroup
+ params: {
+ location: resourceGroup.location
+ acrName: acrName
+ aksPrincipalId: aks.outputs.clusterPrincipalID
+ roleDefinitionId: contributorRoleDefinition.id
+ }
+}
+
+module kv 'modules/key-vault.bicep' = {
+ name: 'keyVaultDeploy'
+ scope: resourceGroup
+ params: {
+ location: resourceGroup.location
+ kvName: kvName
+ kvTenantId: kvTenantId
+ serviceBusNamespaceName: serviceBus.outputs.serviceBusNamespaceName
+ appInsightsConnectionString: appInsights.outputs.connectionString
+ logAnalyticsWorkspaceId: appInsights.outputs.workspaceId
+ clusterKeyVaultSecretProviderObjectId: aks.outputs.clusterKeyVaultSecretProviderObjectId
+ cosmosDBEndpoint: cosmos.outputs.cosmosDBEndpoint
+ cosmosDBAccountName: cosmos.outputs.cosmosDBAccountName
+ }
+}
+
+module appInsights 'modules/app-insights.bicep' = {
+ name: 'appInsightsDeploy'
+ scope: resourceGroup
+ params: {
+ location: resourceGroup.location
+ appInsightsName: appInsightsName
+ logAnalyticsName: logAnalyticsName
+ }
+}
+
+module workbook 'modules/workbooks.bicep' = {
+ name: 'workbookDeploy'
+ scope: resourceGroup
+ params: {
+ location: resourceGroup.location
+ workspaceId: appInsights.outputs.workspaceId
+ uniqueUserName: uniqueUserName
+ serviceBusNamespaceId: serviceBus.outputs.serviceBusNamespaceId
+ appInsightsId: appInsights.outputs.appInsightsId
+ keyVaultId: kv.outputs.kvId
+ aksId: aks.outputs.clusterId
+ }
+}
+
+module aks 'modules/aks.bicep' = {
+ name: 'kubernetesDeploy'
+ scope: resourceGroup
+ params: {
+ location: resourceGroup.location
+ aksName: aksName
+ logAnalyticsWorkspaceId: appInsights.outputs.workspaceId
+ aksAadAuth: aksAadAuth
+ aksAadAdminUserObjectId: aksAadAdminUserObjectId
+ }
+}
+
+module cosmos 'modules/cosmos.bicep' = {
+ name: 'cosmosDBDeploy'
+ scope: resourceGroup
+ params: {
+ location: resourceGroup.location
+ accountName: cosmosDBName
+ databaseName: cosmosDatabaseName
+ container1Name: cosmosContainer1Name
+ container2Name: cosmosContainer2Name
+ container3Name: cosmosContainer3Name
+ logAnalyticsWorkspaceId: appInsights.outputs.workspaceId
+ }
+}
+
+module serviceBus 'modules/service-bus.bicep' = {
+ name: 'serviceBusDeploy'
+ scope: resourceGroup
+ params: {
+ location: resourceGroup.location
+ serviceBusName: serviceBusName
+ serviceBusQueue1Name: serviceBusQueue1Name
+ serviceBusQueue2Name: serviceBusQueue2Name
+ serviceBusTopicName: serviceBusTopicName
+ serviceBusSubscription1Name: serviceBusSubscription1Name
+ serviceBusSubscription2Name: serviceBusSubscription2Name
+ serviceBusTopicRule1Name: serviceBusTopicRule1Name
+ serviceBusTopicRule2Name: serviceBusTopicRule2Name
+ logAnalyticsWorkspaceId: appInsights.outputs.workspaceId
+ }
+}
+
+module alerts 'modules/alerts.bicep' = {
+ name: 'alertsDeploy'
+ scope: resourceGroup
+ params: {
+ location: resourceGroup.location
+ actionGroupName: 'default-actiongroup'
+ notificationEmailAddress: notificationEmailAddress
+ cosmosDBId: cosmos.outputs.cosmosDBId
+ keyVaultId: kv.outputs.kvId
+ serviceBusNamespaceId: serviceBus.outputs.serviceBusNamespaceId
+ aksClusterId: aks.outputs.clusterId
+ appInsightsId: appInsights.outputs.appInsightsId
+ logAnalyticsWorkspaceId: appInsights.outputs.workspaceId
+ }
+}
+
+//output section
+output rg_name string = resourceGroup.name
+output insights_name string = appInsights.outputs.insightsName
+output sb_namespace_name string = serviceBus.outputs.serviceBusNamespaceName
+output cosmosdb_name string = cosmos.outputs.cosmosDBAccountName
+output kv_name string = kv.outputs.kvName
+output acr_name string = acr.outputs.acrName
+output aks_name string = aks.outputs.clusterName
+output aks_key_vault_secret_provider_client_id string = aks.outputs.clusterKeyVaultSecretProviderClientId
+output tenant_id string = subscription().tenantId
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/acr.bicep b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/acr.bicep
new file mode 100644
index 0000000..a5af59b
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/acr.bicep
@@ -0,0 +1,45 @@
+@description('Default value obtained from the resource group; it can be overwritten')
+@minLength(1)
+param location string = resourceGroup().location
+
+@description('Name for the ACR')
+@minLength(1)
+param acrName string
+
+@description('The principal ID of the AKS cluster')
+@minLength(1)
+param aksPrincipalId string
+
+@description('Resource ID of the built-in role definition used for the role assignment')
+@minLength(1)
+param roleDefinitionId string
+
+@description('Expected ACR sku')
+@allowed([
+ 'Basic'
+ 'Classic'
+ 'Premium'
+ 'Standard'
+])
+param acrSku string = 'Standard'
+
+resource containerRegistry 'Microsoft.ContainerRegistry/registries@2022-02-01-preview' = {
+ name: acrName
+ location: location
+ sku: {
+ name: acrSku
+ }
+}
+
+resource assignAcrPullToAks 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
+ name: guid(resourceGroup().id, acrName, aksPrincipalId, 'AssignAcrPullToAks')
+ scope: containerRegistry
+ properties: {
+ description: 'Assign AcrPull role to AKS'
+ principalId: aksPrincipalId
+ principalType: 'ServicePrincipal'
+ roleDefinitionId: roleDefinitionId
+ }
+}
+
+output acrName string = containerRegistry.name
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/aks.bicep b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/aks.bicep
new file mode 100644
index 0000000..f4cb5cf
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/aks.bicep
@@ -0,0 +1,100 @@
+@description('Default value obtained from the resource group; it can be overwritten')
+param location string = resourceGroup().location
+
+@description('The name of the AKS resource')
+@minLength(1)
+param aksName string
+
+@description('Disk size (in GB) to provision for each of the agent pool nodes. Specifying 0 will apply the default disk size for that agentVMSize')
+@minValue(0)
+@maxValue(1023)
+param aksDiskSizeGB int = 30
+
+@description('The number of nodes for the cluster')
+@minValue(1)
+@maxValue(50)
+param aksNodeCount int = 3
+
+@description('The size of the Virtual Machine')
+param aksVMSize string = 'Standard_D2s_v3'
+
+@description('The resource ID of the Log Analytics workspace linked to AKS')
+@minLength(1)
+param logAnalyticsWorkspaceId string
+
+@description('Configure Azure Active Directory authentication for Kubernetes cluster')
+param aksAadAuth bool
+
+@description('The object ID of the Azure Active Directory user to make cluster admin (only valid if aksAadAuth is true)')
+param aksAadAdminUserObjectId string = ''
+
+var aksAadProfile = {
+ managed: true
+ enableAzureRBAC: true
+ tenantId: subscription().tenantId
+}
+
+resource aks 'Microsoft.ContainerService/managedClusters@2020-09-01' = {
+ name: aksName
+ location: location
+ identity: {
+ type: 'SystemAssigned'
+ }
+ properties: {
+ dnsPrefix: 'aks'
+ aadProfile: aksAadAuth ? aksAadProfile : null
+ agentPoolProfiles: [
+ {
+ name: 'agentpool'
+ osDiskSizeGB: aksDiskSizeGB
+ count: aksNodeCount
+ minCount: 1
+ maxCount: aksNodeCount
+ vmSize: aksVMSize
+ osType: 'Linux'
+ mode: 'System'
+ enableAutoScaling: true
+ }
+ ]
+ addonProfiles: {
+ omsAgent: {
+ enabled: true
+ config: {
+ logAnalyticsWorkspaceResourceID: logAnalyticsWorkspaceId
+ }
+ }
+ azureKeyvaultSecretsProvider: {
+ enabled: true
+ config: {
+ enableSecretRotation: 'true'
+ rotationPollInterval: '2m'
+ }
+ }
+ }
+ }
+}
+
+resource adminRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (aksAadAuth) {
+ name: guid(subscription().id, resourceGroup().id, 'aks-admin-${aksAadAdminUserObjectId}')
+ scope: aks
+ properties: {
+ // Azure Kubernetes Service Cluster Admin Role
+ roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', '0ab0b1a8-8aac-4efd-b8c2-3ee1fb270be8')
+ principalId: aksAadAdminUserObjectId
+ }
+}
+resource userRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (aksAadAuth) {
+ name: guid(subscription().id, resourceGroup().id, 'aks-user-${aksAadAdminUserObjectId}')
+ scope: aks
+ properties: {
+ // Azure Kubernetes Service Cluster User Role
+ roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', '4abbcc35-e782-43d8-92c5-2d3f1bd2253f')
+ principalId: aksAadAdminUserObjectId
+ }
+}
+
+output clusterName string = aks.name
+output clusterId string = aks.id
+output clusterPrincipalID string = aks.properties.identityProfile.kubeletidentity.objectId
+output clusterKeyVaultSecretProviderClientId string = aks.properties.addonProfiles.azureKeyvaultSecretsProvider.identity.clientId
+output clusterKeyVaultSecretProviderObjectId string = aks.properties.addonProfiles.azureKeyvaultSecretsProvider.identity.objectId
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/alerts.bicep b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/alerts.bicep
new file mode 100644
index 0000000..b614152
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/alerts.bicep
@@ -0,0 +1,953 @@
+@description('Default value obtained from the resource group; it can be overwritten')
+param location string = resourceGroup().location
+
+@description('Name for the default action group')
+@minLength(1)
+param actionGroupName string
+
+@description('Email address for alert notifications')
+@minLength(1)
+param notificationEmailAddress string
+
+@description('Cosmos DB resource id')
+param cosmosDBId string
+
+@description('Service Bus namespace resource id')
+param serviceBusNamespaceId string
+
+@description('AKS cluster resource id')
+param aksClusterId string
+
+@description('Key Vault resource id')
+param keyVaultId string
+
+@description('Application Insights resource id')
+param appInsightsId string
+
+@description('Log Analytics workspace resource id')
+param logAnalyticsWorkspaceId string
+
+var defaultMetricAlertActions = [
+ {
+ actionGroupId: defaultActionGroup.id
+ }
+]
+
+var defaultLogAlertActions = {
+ actionGroups: [
+ defaultActionGroup.id
+ ]
+}
+
+var serviceBusSplitByEntityDimensions = [
+ {
+ name: 'EntityName'
+ operator: 'Include'
+ values: [
+ '*'
+ ]
+ }
+]
+
+resource defaultActionGroup 'Microsoft.Insights/actionGroups@2022-06-01' = {
+ name: actionGroupName
+ location: 'global'
+ properties: {
+ enabled: false
+ groupShortName: length(actionGroupName) <= 12 ? actionGroupName : substring(actionGroupName, 0, 12)
+ emailReceivers: [
+ {
+ name: 'email-receiver'
+ emailAddress: notificationEmailAddress
+ useCommonAlertSchema: false
+ }
+ ]
+ }
+}
+
+resource cosmosRusAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: 'cosmosRUs'
+ location: 'global'
+ properties: {
+ description: 'Alert when RUs exceed 400.'
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ metricName: 'TotalRequestUnits'
+ metricNamespace: 'Microsoft.DocumentDB/databaseAccounts'
+ name: 'Metric1'
+ skipMetricValidation: false
+ timeAggregation: 'Total'
+ criterionType: 'StaticThresholdCriterion'
+ operator: 'GreaterThan'
+ threshold: 400
+ }
+ ]
+ }
+ scopes: [ cosmosDBId ]
+ actions: defaultMetricAlertActions
+ evaluationFrequency: 'PT1M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource cosmosInvalidCargoAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: 'cosmosInvalidCargo'
+ location: 'global'
+ properties: {
+ description: 'Alert when more than 10 documents have been saved to the invalid-cargo container.'
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ metricName: 'DocumentCount'
+ metricNamespace: 'Microsoft.DocumentDB/databaseAccounts'
+ name: 'Metric1'
+ skipMetricValidation: false
+ timeAggregation: 'Total'
+ criterionType: 'StaticThresholdCriterion'
+ operator: 'GreaterThan'
+ threshold: 10
+ dimensions: [
+ {
+ name: 'CollectionName'
+ operator: 'Include'
+ values: [
+ 'invalid-cargo'
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ scopes: [ cosmosDBId ]
+ actions: defaultMetricAlertActions
+ evaluationFrequency: 'PT1M'
+ windowSize: 'PT5M'
+ severity: 3
+ enabled: false
+ }
+}
+
+resource serviceBusAbandonedMessagesAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: 'serviceBusAbandonedMessages'
+ location: 'global'
+ properties: {
+ description: 'Alert when a Service Bus entity has abandoned more than 10 messages.'
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ metricName: 'AbandonMessage'
+ metricNamespace: 'Microsoft.ServiceBus/namespaces'
+ name: 'Metric1'
+ skipMetricValidation: false
+ timeAggregation: 'Total'
+ criterionType: 'StaticThresholdCriterion'
+ operator: 'GreaterThan'
+ threshold: 10
+ dimensions: serviceBusSplitByEntityDimensions
+ }
+ ]
+ }
+ scopes: [ serviceBusNamespaceId ]
+ actions: defaultMetricAlertActions
+ evaluationFrequency: 'PT1M'
+ windowSize: 'PT5M'
+ severity: 2
+ enabled: false
+ }
+}
+
+resource serviceBusDeadLetteredMessagesAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: 'serviceBusDeadLetteredMessages'
+ location: 'global'
+ properties: {
+ description: 'Alert when a Service Bus entity has dead-lettered more than 10 messages.'
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ metricName: 'DeadletteredMessages'
+ metricNamespace: 'Microsoft.ServiceBus/namespaces'
+ name: 'Metric1'
+ skipMetricValidation: false
+ timeAggregation: 'Average'
+ criterionType: 'StaticThresholdCriterion'
+ operator: 'GreaterThan'
+ threshold: 10
+ dimensions: serviceBusSplitByEntityDimensions
+ }
+ ]
+ }
+ scopes: [ serviceBusNamespaceId ]
+ actions: defaultMetricAlertActions
+ evaluationFrequency: 'PT1M'
+ windowSize: 'PT5M'
+ severity: 2
+ enabled: false
+ }
+}
+
+resource serviceBusThrottledRequestsAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: 'serviceBusThrottledRequests'
+ location: 'global'
+ properties: {
+ description: 'Alert when a Service Bus entity has throttled more than 10 requests.'
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ metricName: 'ThrottledRequests'
+ metricNamespace: 'Microsoft.ServiceBus/namespaces'
+ name: 'Metric1'
+ skipMetricValidation: false
+ timeAggregation: 'Total'
+ criterionType: 'StaticThresholdCriterion'
+ operator: 'GreaterThan'
+ threshold: 10
+ dimensions: serviceBusSplitByEntityDimensions
+ }
+ ]
+ }
+ scopes: [ serviceBusNamespaceId ]
+ actions: defaultMetricAlertActions
+ evaluationFrequency: 'PT1M'
+ windowSize: 'PT5M'
+ severity: 2
+ enabled: false
+ }
+}
+
+resource aksCPUPercentageAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: 'aksCPUPercentage'
+ location: 'global'
+ properties: {
+ description: 'Alert when Node CPU percentage exceeds 80.'
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ metricName: 'node_cpu_usage_percentage'
+ metricNamespace: 'Microsoft.ContainerService/managedClusters'
+ name: 'Metric1'
+ skipMetricValidation: false
+ timeAggregation: 'Average'
+ criterionType: 'StaticThresholdCriterion'
+ operator: 'GreaterThan'
+ threshold: 80
+ }
+ ]
+ }
+ scopes: [ aksClusterId ]
+ actions: defaultMetricAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 2
+ enabled: false
+ }
+}
+
+resource aksMemoryPercentageAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: 'aksMemoryPercentage'
+ location: 'global'
+ properties: {
+ description: 'Alert when Node memory working set percentage exceeds 80.'
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ metricName: 'node_memory_working_set_percentage'
+ metricNamespace: 'Microsoft.ContainerService/managedClusters'
+ name: 'Metric1'
+ skipMetricValidation: false
+ timeAggregation: 'Average'
+ criterionType: 'StaticThresholdCriterion'
+ operator: 'GreaterThan'
+ threshold: 80
+ }
+ ]
+ }
+ scopes: [ aksClusterId ]
+ actions: defaultMetricAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 2
+ enabled: false
+ }
+}
+
+resource keyVaultSaturationAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: 'keyVaultSaturation'
+ location: 'global'
+ properties: {
+ description: 'Alert when Key Vault saturation falls outside the range of a dynamic threshold.'
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ metricName: 'SaturationShoebox'
+ metricNamespace: 'Microsoft.KeyVault/vaults'
+ name: 'Metric1'
+ skipMetricValidation: false
+ timeAggregation: 'Average'
+ criterionType: 'DynamicThresholdCriterion'
+ operator: 'GreaterOrLessThan'
+ alertSensitivity: 'Medium'
+ failingPeriods: {
+ minFailingPeriodsToAlert: 4
+ numberOfEvaluationPeriods: 4
+ }
+ }
+ ]
+ }
+ scopes: [ keyVaultId ]
+ actions: defaultMetricAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 3
+ enabled: false
+ }
+}
+
+// Note: tenant-specific issues can prevent deployment of this custom metric alert
+//
+resource productQtyScheduledForDestinationPortAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
+ name: 'productQtyScheduledForDestinationPort'
+ location: 'global'
+ properties: {
+ description: 'Alert when a single port/destination receives more than quantity 1000 of a given product.'
+ criteria: {
+ 'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
+ allOf: [
+ {
+ metricName: 'port_product_qty'
+ metricNamespace: 'azure.applicationinsights'
+ name: 'Metric1'
+ skipMetricValidation: true
+ timeAggregation: 'Total'
+ criterionType: 'StaticThresholdCriterion'
+ operator: 'GreaterThan'
+ threshold: 1000
+ dimensions: [
+ {
+ name: 'destination'
+ operator: 'Include'
+ values: [
+ '*'
+ ]
+ }
+ {
+ name: 'product'
+ operator: 'Include'
+ values: [
+ '*'
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultMetricAlertActions
+ evaluationFrequency: 'PT1M'
+ windowSize: 'PT1M'
+ severity: 3
+ enabled: false
+ }
+}
+
+resource microserviceExceptionsAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'microserviceExceptions'
+ location: location
+ properties: {
+ description: 'Alert when a microservice throws more than 5 exceptions.'
+ criteria: {
+ allOf: [
+ {
+ query: 'exceptions\n'
+ timeAggregation: 'Count'
+ operator: 'GreaterThan'
+ threshold: 5
+ dimensions: [
+ {
+ name: 'cloud_RoleName'
+ operator: 'Include'
+ values: [
+ '*'
+ ]
+ }
+ ]
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource cargoProcessingAPIRequestsAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'cargoProcessingAPIRequests'
+ location: location
+ properties: {
+ description: 'Alert when the cargo-processing-api microservice is not receiving any requests.'
+ criteria: {
+ allOf: [
+ {
+ query: 'requests\r\n| where cloud_RoleName == "cargo-processing-api" and (name == "POST /cargo/" or name == "PUT /cargo/{cargoId}")'
+ timeAggregation: 'Count'
+ operator: 'Equal'
+ threshold: 0
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 3
+ enabled: false
+ }
+}
+
+resource e2eAverageDurationAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'e2eAverageDuration'
+ location: location
+ properties: {
+ description: 'Alert when the end to end average request duration exceeds 5 seconds.'
+ criteria: {
+ allOf: [
+ {
+ query: 'let cargo_processing_api = requests\r\n| where cloud_RoleName == "cargo-processing-api" and (name == "POST /cargo/" or name == "PUT /cargo/{cargoId}")\r\n| project-rename ingest_timestamp = timestamp\r\n| project ingest_timestamp, operation_Id;\r\nlet operation_api_succeeded = requests\r\n| where cloud_RoleName == "operations-api" and name == "ServiceBus.process" and customDimensions["operation-state"] == "Succeeded"\r\n| extend operation_api_completed = timestamp + (duration*1ms)\r\n| project operation_Id, operation_api_completed;\r\ncargo_processing_api\r\n| join kind=inner operation_api_succeeded on $left.operation_Id == $right.operation_Id\r\n| extend end_to_end_Duration_ms = (operation_api_completed - ingest_timestamp) /1ms\r\n| summarize avg(end_to_end_Duration_ms)'
+ metricMeasureColumn: 'avg_end_to_end_Duration_ms'
+ timeAggregation: 'Average'
+ operator: 'GreaterThan'
+ threshold: 5000
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource cargoProcessingAPIAverageDurationAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'cargoProcessingAPIAverageDuration'
+ location: location
+ properties: {
+ description: 'Alert when the cargo-processing-api microservice average request duration exceeds 2 seconds.'
+ criteria: {
+ allOf: [
+ {
+ query: 'requests\r\n| where cloud_RoleName == "cargo-processing-api" and (name == "POST /cargo/" or name == "PUT /cargo/{cargoId}")\r\n| summarize avg(duration)'
+ metricMeasureColumn: 'avg_duration'
+ timeAggregation: 'Average'
+ operator: 'GreaterThan'
+ threshold: 2000
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource cargoProcessingValidatorAverageDurationAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'cargoProcessingValidatorAverageDuration'
+ location: location
+ properties: {
+ description: 'Alert when the cargo-processing-validator microservice average request duration exceeds 2 seconds.'
+ criteria: {
+ allOf: [
+ {
+ query: 'requests\r\n| where cloud_RoleName == "cargo-processing-validator" and (name == "ServiceBus.ProcessMessage" or name == "ServiceBusQueue.ProcessMessage")\r\n| summarize avg(duration)'
+ metricMeasureColumn: 'avg_duration'
+ timeAggregation: 'Average'
+ operator: 'GreaterThan'
+ threshold: 2000
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource validCargoManagerAverageDurationAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'validCargoManagerAverageDuration'
+ location: location
+ properties: {
+ description: 'Alert when the valid-cargo-manager microservice average request duration exceeds 2 seconds.'
+ criteria: {
+ allOf: [
+ {
+ query: 'requests\r\n| where cloud_RoleName == "valid-cargo-manager" and name == "ServiceBusTopic.ProcessMessage"\r\n| summarize avg(duration)'
+ metricMeasureColumn: 'avg_duration'
+ timeAggregation: 'Average'
+ operator: 'GreaterThan'
+ threshold: 2000
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource invalidCargoManagerAverageDurationAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'invalidCargoManagerAverageDuration'
+ location: location
+ properties: {
+ description: 'Alert when the invalid-cargo-manager microservice average request duration exceeds 2 seconds.'
+ criteria: {
+ allOf: [
+ {
+ query: 'requests\r\n| where cloud_RoleName == "invalid-cargo-manager" and name == "ServiceBusTopic.ProcessMessage"\r\n| summarize avg(duration)'
+ metricMeasureColumn: 'avg_duration'
+ timeAggregation: 'Average'
+ operator: 'GreaterThan'
+ threshold: 2000
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource operationsAPIAverageDurationAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'operationsAPIAverageDuration'
+ location: location
+ properties: {
+ description: 'Alert when the operations-api microservice average request duration exceeds 1 second.'
+ criteria: {
+ allOf: [
+ {
+ query: 'requests\r\n| where cloud_RoleName == "operations-api" and name == "ServiceBus.process"\r\n| summarize avg(duration)'
+ metricMeasureColumn: 'avg_duration'
+ timeAggregation: 'Average'
+ operator: 'GreaterThan'
+ threshold: 1000
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource logAnalyticsDataIngestionDailyCapAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'logAnalyticsDataIngestionDailyCap'
+ location: location
+ properties: {
+ description: 'Alert when the Log Analytics data ingestion daily cap has been reached.'
+ criteria: {
+ allOf: [
+ {
+ query: '_LogOperation | where Category == "Ingestion" | where Operation has "Data collection"'
+ resourceIdColumn: '_ResourceId'
+ timeAggregation: 'Count'
+ operator: 'GreaterThan'
+ threshold: 0
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ logAnalyticsWorkspaceId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 2
+ enabled: false
+ }
+}
+
+resource logAnalyticsDataIngestionRateAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'logAnalyticsDataIngestionRate'
+ location: location
+ properties: {
+ description: 'Alert when the Log Analytics max data ingestion rate has been reached.'
+ criteria: {
+ allOf: [
+ {
+ query: '_LogOperation | where Category == "Ingestion" | where Operation has "Ingestion rate"'
+ resourceIdColumn: '_ResourceId'
+ timeAggregation: 'Count'
+ operator: 'GreaterThan'
+ threshold: 0
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ logAnalyticsWorkspaceId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 2
+ enabled: false
+ }
+}
+
+resource logAnalyticsOperationalIssuesAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'logAnalyticsOperationalIssues'
+ location: location
+ properties: {
+ description: 'Alert when the Log Analytics workspace has an operational issue.'
+ criteria: {
+ allOf: [
+ {
+ query: '_LogOperation | where Level == "Warning"'
+ resourceIdColumn: '_ResourceId'
+ timeAggregation: 'Count'
+ operator: 'GreaterThan'
+ threshold: 0
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ logAnalyticsWorkspaceId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'P1D'
+ windowSize: 'P1D'
+ severity: 3
+ enabled: false
+ }
+}
+
+resource cargoProcessingAPIHealthCheckFailureAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'cargoProcessingAPIHealthCheckFailure'
+ location: location
+ properties: {
+ description: 'Alert when a cargo-processing-api microservice health check fails.'
+ criteria: {
+ allOf: [
+ {
+ query: 'requests\r\n| where cloud_RoleName == "cargo-processing-api" and name == "GET /actuator/health" and success == "False"'
+ timeAggregation: 'Count'
+ operator: 'GreaterThan'
+ threshold: 0
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource cargoProcessingAPIHealthCheckNotReportingAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'cargoProcessingAPIHealthCheckNotReporting'
+ location: location
+ properties: {
+ description: 'Alert when the cargo-processing-api microservice health check is not reporting.'
+ criteria: {
+ allOf: [
+ {
+ query: 'requests\r\n| where cloud_RoleName == "cargo-processing-api" and name == "GET /actuator/health"'
+ timeAggregation: 'Count'
+ operator: 'Equal'
+ threshold: 0
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource validCargoManagerHealthCheckFailureAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'validCargoManagerHealthCheckFailureAlert'
+ location: location
+ properties: {
+ description: 'Alert when a valid-cargo-manager microservice health check fails.'
+ criteria: {
+ allOf: [
+ {
+ query: 'customMetrics\r\n| where cloud_RoleName == "valid-cargo-manager" and name == "HeartbeatState" and value != 2'
+ timeAggregation: 'Count'
+ operator: 'GreaterThan'
+ threshold: 0
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT30M'
+ windowSize: 'PT30M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource validCargoManagerHealthCheckNotReportingAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'validCargoManagerHealthCheckNotReporting'
+ location: location
+ properties: {
+ description: 'Alert when the valid-cargo-manager microservice health check is not reporting.'
+ criteria: {
+ allOf: [
+ {
+ query: 'customMetrics\r\n| where cloud_RoleName == "valid-cargo-manager" and name == "HeartbeatState"'
+ timeAggregation: 'Count'
+ operator: 'Equal'
+ threshold: 0
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT30M'
+ windowSize: 'PT30M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource invalidCargoManagerHealthCheckFailureAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'invalidCargoManagerHealthCheckFailure'
+ location: location
+ properties: {
+ description: 'Alert when an invalid-cargo-manager microservice health check fails.'
+ criteria: {
+ allOf: [
+ {
+ query: 'traces\r\n| where cloud_RoleName == "invalid-cargo-manager" and message contains "peeked at messages for over"'
+ timeAggregation: 'Count'
+ operator: 'GreaterThan'
+ threshold: 0
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource invalidCargoManagerHealthCheckNotReportingAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'invalidCargoManagerHealthCheckNotReporting'
+ location: location
+ properties: {
+ description: 'Alert when the invalid-cargo-manager microservice health check is not reporting.'
+ criteria: {
+ allOf: [
+ {
+ query: 'traces\r\n| where cloud_RoleName == "invalid-cargo-manager" and (message contains "since last peek" or message contains "peeked at messages for over")'
+ timeAggregation: 'Count'
+ operator: 'Equal'
+ threshold: 0
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource operationsAPIHealthCheckFailureAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'operationsAPIHealthCheckFailure'
+ location: location
+ properties: {
+ description: 'Alert when an operations-api microservice health check fails.'
+ criteria: {
+ allOf: [
+ {
+ query: 'requests\r\n| where cloud_RoleName == "operations-api" and name == "GET /actuator/health" and success == "False"'
+ timeAggregation: 'Count'
+ operator: 'GreaterThan'
+ threshold: 0
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource operationsAPIHealthCheckNotReportingAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'operationsAPIHealthCheckNotReporting'
+ location: location
+ properties: {
+ description: 'Alert when the operations-api microservice health check is not reporting.'
+ criteria: {
+ allOf: [
+ {
+ query: 'requests\r\n| where cloud_RoleName == "operations-api" and name == "GET /actuator/health"'
+ timeAggregation: 'Count'
+ operator: 'Equal'
+ threshold: 0
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ }
+ ]
+ }
+ scopes: [ appInsightsId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
+
+resource aksPodRestartsAlert 'Microsoft.Insights/scheduledQueryRules@2022-06-15' = {
+ name: 'aksPodRestarts'
+ location: location
+ properties: {
+ description: 'Alert when a microservice restarts more than once.'
+ criteria: {
+ allOf: [
+ {
+ query: 'KubePodInventory\r\n| summarize numRestarts = sum(PodRestartCount) by ServiceName'
+ metricMeasureColumn: 'numRestarts'
+ timeAggregation: 'Total'
+ operator: 'GreaterThan'
+ threshold: 1
+ failingPeriods: {
+ numberOfEvaluationPeriods: 1
+ minFailingPeriodsToAlert: 1
+ }
+ dimensions: [
+ {
+ name: 'ServiceName'
+ operator: 'Include'
+ values: [
+ 'cargo-processing-api'
+ 'cargo-processing-validator'
+ 'invalid-cargo-manager'
+ 'operations-api'
+ 'valid-cargo-manager'
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ scopes: [ logAnalyticsWorkspaceId ]
+ actions: defaultLogAlertActions
+ evaluationFrequency: 'PT5M'
+ windowSize: 'PT5M'
+ severity: 1
+ enabled: false
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/app-insights.bicep b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/app-insights.bicep
new file mode 100644
index 0000000..ac1b57a
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/app-insights.bicep
@@ -0,0 +1,33 @@
+@description('Default value obtained from the resource group; it can be overwritten')
+param location string = resourceGroup().location
+
+@description('Name of the Application Insights instance')
+param appInsightsName string
+
+@description('Name of the Log Analytics instance')
+param logAnalyticsName string
+
+resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' = {
+ name: logAnalyticsName
+ location: location
+ properties: {
+ sku: {
+ name: 'PerGB2018'
+ }
+ }
+}
+
+resource applicationInsights 'Microsoft.Insights/components@2020-02-02' = {
+ name: appInsightsName
+ location: location
+ kind: 'web'
+ properties: {
+ Application_Type: 'web'
+ WorkspaceResourceId: logAnalyticsWorkspace.id
+ }
+}
+
+output connectionString string = applicationInsights.properties.ConnectionString
+output workspaceId string = logAnalyticsWorkspace.id
+output insightsName string = applicationInsights.name
+output appInsightsId string = applicationInsights.id
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/cosmos.bicep b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/cosmos.bicep
new file mode 100644
index 0000000..dff8b70
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/cosmos.bicep
@@ -0,0 +1,161 @@
+@description('Default value obtained from the resource group; it can be overwritten')
+param location string = resourceGroup().location
+
+@description('Cosmos DB account name, max length 44 characters, lowercase')
+@minLength(1)
+@maxLength(44)
+param accountName string = 'sql-${uniqueString(resourceGroup().id)}'
+
+@description('The default consistency level of the Cosmos DB account.')
+@allowed([
+ 'Eventual'
+ 'ConsistentPrefix'
+ 'Session'
+ 'BoundedStaleness'
+ 'Strong'
+])
+param defaultConsistencyLevel string = 'Session'
+
+@description('Enable automatic failover for regions')
+param automaticFailover bool = true
+
+@description('The name for the database')
+@minLength(1)
+param databaseName string
+
+@description('The name for the first container')
+@minLength(1)
+param container1Name string
+
+@description('The name for the second container')
+@minLength(1)
+param container2Name string
+
+@description('The name for the third container')
+@minLength(1)
+param container3Name string
+
+@description('Name for diagnostic settings')
+@minLength(1)
+param diagnosticSettingsName string = 'cosmosDbDiagnostics'
+
+@description('Log analytics workspace id')
+@minLength(1)
+param logAnalyticsWorkspaceId string
+
+var accountNameVar = toLower(accountName)
+var locations = [
+ {
+ locationName: location
+ failoverPriority: 0
+ isZoneRedundant: false
+ }
+]
+
+resource accountNameResource 'Microsoft.DocumentDB/databaseAccounts@2021-01-15' = {
+ name: accountNameVar
+ kind: 'GlobalDocumentDB'
+ location: location
+ properties: {
+ consistencyPolicy: {
+ defaultConsistencyLevel: defaultConsistencyLevel
+ }
+ locations: locations
+ databaseAccountOfferType: 'Standard'
+ enableAutomaticFailover: automaticFailover
+ }
+
+ resource database 'sqlDatabases' = {
+ name: databaseName
+ properties: {
+ resource: {
+ id: databaseName
+ }
+ }
+
+ resource container1 'containers' = {
+ name: container1Name
+ properties: {
+ resource: {
+ id: container1Name
+ partitionKey: {
+ paths: [
+ '/id'
+ ]
+ kind: 'Hash'
+ }
+ }
+ }
+ }
+
+ resource container2 'containers' = {
+ name: container2Name
+ properties: {
+ resource: {
+ id: container2Name
+ partitionKey: {
+ paths: [
+ '/id'
+ ]
+ kind: 'Hash'
+ }
+ }
+ }
+ }
+
+ resource container3 'containers' = {
+ name: container3Name
+ properties: {
+ resource: {
+ id: container3Name
+ partitionKey: {
+ paths: [
+ '/id'
+ ]
+ kind: 'Hash'
+ }
+ }
+ }
+ }
+ }
+}
+
+resource cosmosDbDiagnosticSettings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
+ name: diagnosticSettingsName
+ scope: accountNameResource
+ properties: {
+ logs: [
+ {
+ category: 'DataPlaneRequests'
+ enabled: true
+ }
+ {
+ category: 'QueryRuntimeStatistics'
+ enabled: true
+ }
+ {
+ category: 'PartitionKeyStatistics'
+ enabled: true
+ }
+ {
+ category: 'PartitionKeyRUConsumption'
+ enabled: true
+ }
+ {
+ category: 'ControlPlaneRequests'
+ enabled: true
+ }
+ ]
+ metrics: [
+ {
+ category: 'Requests'
+ enabled: true
+ }
+ ]
+ workspaceId: logAnalyticsWorkspaceId
+ }
+}
+
+output cosmosDBId string = accountNameResource.id
+output cosmosDBEndpoint string = accountNameResource.properties.documentEndpoint
+output cosmosDBAccountName string = accountNameResource.name
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/key-vault.bicep b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/key-vault.bicep
new file mode 100644
index 0000000..58e4194
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/key-vault.bicep
@@ -0,0 +1,125 @@
+@description('Location obtained from resource group')
+param location string = resourceGroup().location
+
+@description('KeyVault name')
+@minLength(1)
+param kvName string
+
+@description('Expected KeyVault sku')
+@allowed([
+ 'premium'
+ 'standard'
+])
+param kvSku string = 'standard'
+
+@description('Tenant Id for the service principal that will be in charge of KeyVault access')
+@minLength(1)
+param kvTenantId string = tenant().tenantId
+
+//secrets stored in KeyVault
+@description('Service Bus Namespace name')
+@minLength(1)
+param serviceBusNamespaceName string
+
+@description('App Insights Connection String')
+@minLength(1)
+@secure()
+param appInsightsConnectionString string
+
+@description('Cosmos DB endpoint')
+@minLength(1)
+param cosmosDBEndpoint string
+
+@description('Cosmos DB account name')
+@minLength(1)
+param cosmosDBAccountName string
+
+@description('Name for diagnostic settings')
+@minLength(1)
+param diagnosticSettingsName string = 'keyVaultDiagnostics'
+
+@description('Log analytics workspace id')
+@minLength(1)
+param logAnalyticsWorkspaceId string
+
+@description('The object ID of the user-assigned managed identity used by the AKS Key Vault secrets provider')
+@minLength(1)
+@secure()
+param clusterKeyVaultSecretProviderObjectId string
+
+resource keyVault 'Microsoft.KeyVault/vaults@2021-11-01-preview' = {
+ name: kvName
+ location: location
+ properties: {
+ tenantId: kvTenantId
+ sku: {
+ family: 'A'
+ name: kvSku
+ }
+ createMode: 'default'
+ publicNetworkAccess: 'Enabled'
+ accessPolicies: [
+ {
+ objectId: clusterKeyVaultSecretProviderObjectId
+ permissions: {
+ secrets: [
+ 'get'
+ ]
+ }
+ tenantId: subscription().tenantId
+ }
+ ]
+ enabledForTemplateDeployment: true
+ }
+
+ resource appInsightsStringSecret 'secrets' = {
+ name: 'AppInsightsConnectionString'
+ properties: {
+ value: appInsightsConnectionString
+ }
+ }
+
+ resource serviceBusSecret 'secrets' = {
+ name: 'ServiceBusConnectionString'
+ properties: {
+ value: listKeys(resourceId('Microsoft.ServiceBus/namespaces/AuthorizationRules', serviceBusNamespaceName, 'RootManageSharedAccessKey'), '2022-01-01-preview').primaryConnectionString
+ }
+ }
+
+ resource cosmosDBEndpointSecret 'secrets' = {
+ name: 'CosmosDBEndpoint'
+ properties: {
+ value: cosmosDBEndpoint
+ }
+ }
+
+ resource cosmosDBKeySecret 'secrets' = {
+ name: 'CosmosDBKey'
+ properties: {
+ value: listKeys(resourceId('Microsoft.DocumentDB/databaseAccounts', cosmosDBAccountName), '2022-05-15').primaryMasterKey
+ }
+ }
+}
+
+resource keyVaultDiagnosticSettings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
+ name: diagnosticSettingsName
+ scope: keyVault
+ properties: {
+ logs: [
+ {
+ categoryGroup: 'allLogs'
+ enabled: true
+ }
+ ]
+ metrics: [
+ {
+ category: 'AllMetrics'
+ enabled: true
+ }
+ ]
+ workspaceId: logAnalyticsWorkspaceId
+ }
+}
+
+output kvName string = keyVault.name
+output kvId string = keyVault.id
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/service-bus.bicep b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/service-bus.bicep
new file mode 100644
index 0000000..0ff68a5
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/service-bus.bicep
@@ -0,0 +1,130 @@
+@description('Name for the Service Bus Namespace')
+@minLength(1)
+param serviceBusName string
+
+@description('Default value obtained from the resource group; it can be overwritten')
+@minLength(1)
+param location string = resourceGroup().location
+
+@description('Name for the first Service Bus Queue')
+@minLength(1)
+param serviceBusQueue1Name string
+
+@description('Name for the second Service Bus Queue')
+@minLength(1)
+param serviceBusQueue2Name string
+
+@description('Name for the Service Bus Topic')
+@minLength(1)
+param serviceBusTopicName string
+
+@description('Name for the first Service Bus Subscription')
+@minLength(1)
+param serviceBusSubscription1Name string
+
+@description('Name for the second Service Bus Subscription')
+@minLength(1)
+param serviceBusSubscription2Name string
+
+@description('Name for the first Service Bus Subscriptions filter rule')
+@minLength(1)
+param serviceBusTopicRule1Name string
+
+@description('Name for the second Service Bus Subscriptions filter rule')
+@minLength(1)
+param serviceBusTopicRule2Name string
+
+@description('Name for diagnostic settings')
+@minLength(1)
+param diagnosticSettingsName string = 'serviceBusDiagnostics'
+
+@description('Log analytics workspace id')
+@minLength(1)
+param logAnalyticsWorkspaceId string
+
+resource serviceBusNamespace 'Microsoft.ServiceBus/namespaces@2022-01-01-preview' = {
+ name: serviceBusName
+ location: location
+ sku: {
+ capacity: 1
+ name: 'Standard'
+ tier: 'Standard'
+ }
+
+ properties: {
+ publicNetworkAccess: 'Enabled'
+ }
+
+ resource serviceBusQueue 'queues' = {
+ name: serviceBusQueue1Name
+ }
+
+ resource serviceBusQueue2 'queues' = {
+ name: serviceBusQueue2Name
+ }
+}
+
+resource serviceBusTopic 'Microsoft.ServiceBus/namespaces/topics@2022-01-01-preview' = {
+ name: serviceBusTopicName
+ parent: serviceBusNamespace
+ properties: {
+ supportOrdering: true
+ }
+
+ resource serviceBusSubscription1 'subscriptions' = {
+ name: serviceBusSubscription1Name
+ properties: {
+ maxDeliveryCount: 1
+ }
+
+ resource serviceBusTopicRule 'rules' = {
+ name: serviceBusTopicRule1Name
+ properties: {
+ filterType: 'SqlFilter'
+ sqlFilter: {
+ sqlExpression: 'valid = True'
+ }
+ }
+ }
+ }
+
+ resource serviceBusSubscription2 'subscriptions' = {
+ name: serviceBusSubscription2Name
+ properties: {
+ maxDeliveryCount: 1
+ }
+
+ resource serviceBusTopicRule 'rules' = {
+ name: serviceBusTopicRule2Name
+ properties: {
+ filterType: 'SqlFilter'
+ sqlFilter: {
+ sqlExpression: 'valid = False'
+ }
+ }
+ }
+ }
+}
+
+resource serviceBusDiagnosticSettings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
+ name: diagnosticSettingsName
+ scope: serviceBusNamespace
+ properties: {
+ logs: [
+ {
+ categoryGroup: 'allLogs'
+ enabled: true
+ }
+ ]
+ metrics: [
+ {
+ category: 'AllMetrics'
+ enabled: true
+ }
+ ]
+ workspaceId: logAnalyticsWorkspaceId
+ }
+}
+
+output serviceBusNamespaceName string = serviceBusNamespace.name
+output serviceBusNamespaceId string = serviceBusNamespace.id
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/workbooks.bicep b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/workbooks.bicep
new file mode 100644
index 0000000..5447e41
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/bicep/modules/workbooks.bicep
@@ -0,0 +1,81 @@
+@description('Default value obtained from the resource group; it can be overwritten')
+param location string = resourceGroup().location
+
+@description('Identifies the author of the deployed resources; this value is reflected in every deployed resource')
+@minLength(1)
+param uniqueUserName string
+
+@description('Linked resource for Workbook')
+@minLength(1)
+param workspaceId string
+
+@description('Id for monitored Service Bus Namespace')
+@minLength(1)
+param serviceBusNamespaceId string
+
+@description('Id for monitored Key Vault resource')
+@minLength(1)
+param keyVaultId string
+
+@description('Id for App Insights resource')
+@minLength(1)
+param appInsightsId string
+
+@description('Id for monitored AKS resource')
+@minLength(1)
+param aksId string
+
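+// The workbook JSON templates contain placeholders such as ${app_insights_id}. The variable chains
+// below load each template with loadTextContent and substitute the placeholders with the deployed
+// resource IDs (URI-encoded where the workbook links expect encoded values).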
+var indexWorkbookName = guid(subscription().subscriptionId, resourceGroup().name, uniqueUserName, 'index')
+var baseIndexWorkbookContent = loadTextContent('../../workbooks/index.json')
+var indexInsightsWorkbookContent = replace(baseIndexWorkbookContent, '\${app_insights_id}', appInsightsId)
+var indexWorkspaceWorkbookContent = replace(indexInsightsWorkbookContent, '\${logs_workspace_id}', uriComponent(workspaceId))
+var indexInfrastructureWorkbookContent = replace(indexWorkspaceWorkbookContent, '\${infrastructure_workbook_id}', uriComponent(infrastructureWorkbook.id))
+var indexFinalWorkbookContent = replace(indexInfrastructureWorkbookContent, '\${system_workbook_id}', uriComponent(serviceProcessingWorkbook.id))
+resource observabilityWorkbook 'Microsoft.Insights/workbooks@2022-04-01' = {
+ name: indexWorkbookName
+ location: location
+ kind: 'shared'
+ properties: {
+ category: 'workbook'
+ displayName: 'Index'
+ serializedData: string(indexFinalWorkbookContent)
+ version: '0.01'
+ sourceId: workspaceId
+ }
+}
+
+var infrastructureWorkbookName = guid(subscription().subscriptionId, resourceGroup().name, uniqueUserName, 'infrastructure')
+var baseInfrastructureWorkbookContent = loadTextContent('../../workbooks/infrastructure.json')
+var baseInfrastructureServiceBusWorkbookContent = replace(baseInfrastructureWorkbookContent, '\${servicebus_namespace_id}', serviceBusNamespaceId)
+var baseInfrastructureKeyVaultWorkbookContent = replace(baseInfrastructureServiceBusWorkbookContent, '\${key_vault_id}', keyVaultId)
+var infrastructureUrlWorkbookContent = replace(baseInfrastructureKeyVaultWorkbookContent, '\${app_insights_id_url}', uriComponent(appInsightsId))
+var baseInfrastructureAksWorkbookContent = replace(infrastructureUrlWorkbookContent, '\${aks_id}', aksId)
+var infrastructureFinalWorkbookContent = replace(baseInfrastructureAksWorkbookContent, '\${app_insights_id}', appInsightsId)
+resource infrastructureWorkbook 'Microsoft.Insights/workbooks@2022-04-01' = {
+ name: infrastructureWorkbookName
+ location: location
+ kind: 'shared'
+ properties: {
+ category: 'workbook'
+ displayName: 'Infrastructure'
+ serializedData: string(infrastructureFinalWorkbookContent)
+ version: '0.01'
+ sourceId: workspaceId
+ }
+}
+
+var serviceProcessingWorkbookName = guid(subscription().subscriptionId, resourceGroup().name, uniqueUserName, 'service-processing')
+var baseServiceProcessingWorkbookContent = loadTextContent('../../workbooks/system-processing.json')
+var serviceProcessingWorkbookContent = replace(baseServiceProcessingWorkbookContent, '\${app_insights_id}', appInsightsId)
+resource serviceProcessingWorkbook 'Microsoft.Insights/workbooks@2022-04-01' = {
+ name: serviceProcessingWorkbookName
+ location: location
+ kind: 'shared'
+ properties: {
+ category: 'workbook'
+ displayName: 'System Processing'
+ serializedData: string(serviceProcessingWorkbookContent)
+ version: '0.01'
+ sourceId: workspaceId
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/build-and-push-images.sh b/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/build-and-push-images.sh
new file mode 100644
index 0000000..9aa7a86
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/build-and-push-images.sh
@@ -0,0 +1,78 @@
+#!/bin/bash
+set -e
+
+script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+
+function help() {
+ echo
+ echo "build-images.sh"
+ echo
+ echo "Build images"
+ echo
+ echo -e "\t--acr-name\t(Optional)The name of the Azure Container Registry to push to. If not provided, the images will be built but not pushed."
+ echo -e "\t--image-tag\t(Optional)The tag to build the image with (defaults to 'latest')"
+ echo
+}
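+
+# Illustrative invocation (the registry name and tag below are placeholder values, not defaults):
+#   ./build-and-push-images.sh --acr-name myregistry --image-tag v1.0.0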
+
+
+# Set default values here
+acr_name=""
+image_tag="latest"
+
+
+# Process switches:
+SHORT=h
+LONG=acr-name:,image-tag:,help
+OPTS=$(getopt -a -n files --options $SHORT --longoptions $LONG -- "$@")
+
+eval set -- "$OPTS"
+
+while :
+do
+ case "$1" in
+ --acr-name)
+ acr_name=$2
+ shift 2
+ ;;
+ --image-tag)
+ image_tag=$2
+ shift 2
+ ;;
+    -h | --help)
+      help
+      exit 0
+      ;;
+ --)
+ shift;
+ break
+ ;;
+ *)
+ echo "Unexpected '$1'"
+ help
+ exit 1
+ ;;
+ esac
+done
+
+image_base_name=""
+if [[ -n $acr_name ]]; then
+ echo -e "**\n** Authenticating to container registry ($acr_name)...\n**"
+ az acr login --name "$acr_name"
+
+ image_base_name="${acr_name}.azurecr.io/"
+fi
+
+
+services_to_build=("cargo-processing-api" "cargo-processing-validator" "invalid-cargo-manager" "operations-api" "valid-cargo-manager")
+for service in "${services_to_build[@]}"
+do
+ echo
+ echo "*******************************************************************************************************************"
+ echo -e "\n**\n** Building ${service}...\n**"
+ echo "*******************************************************************************************************************"
+ docker build --progress plain -t "${image_base_name}${service}:${image_tag}" "$script_dir/../../src/${service}"
+
+ if [[ -n $acr_name ]]; then
+ echo -e "\n**\n** Pushing ${service}...\n**"
+ docker push "${image_base_name}${service}:${image_tag}"
+ fi
+done
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/create-env-files-from-output.sh b/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/create-env-files-from-output.sh
new file mode 100644
index 0000000..a4b1053
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/create-env-files-from-output.sh
@@ -0,0 +1,246 @@
+#!/bin/bash
+set -e
+
+#
+# This script expects to find an output.json in the project root with the values
+# from the infrastructure deployment.
+# It then creates the env files, settings files, and helm chart values files for each service
+#
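+# Illustrative shape of output.json (the keys listed are the ones read below; values are placeholders):
+#   { "rg_name": "...", "insights_name": "...", "sb_namespace_name": "...", "cosmosdb_name": "...",
+#     "acr_name": "...", "kv_name": "...", "tenant_id": "...", "aks_key_vault_secret_provider_client_id": "..." }
+#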
+
+script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+
+RESOURCE_GROUP=$(jq -r '.rg_name' < "$script_dir/../../output.json")
+if [[ ${#RESOURCE_GROUP} -eq 0 ]]; then
+ echo 'ERROR: Missing output value rg_name' 1>&2
+ exit 6
+fi
+
+APP_INSIGHTS=$(jq -r '.insights_name' < "$script_dir/../../output.json")
+if [[ ${#APP_INSIGHTS} -eq 0 ]]; then
+ echo 'ERROR: Missing output value insights_name' 1>&2
+ exit 6
+fi
+
+SERVICE_BUS_NAMESPACE=$(jq -r '.sb_namespace_name' < "$script_dir/../../output.json")
+if [[ ${#SERVICE_BUS_NAMESPACE} -eq 0 ]]; then
+ echo 'ERROR: Missing output value sb_namespace_name' 1>&2
+ exit 6
+fi
+
+COSMOSDB_NAME=$(jq -r '.cosmosdb_name' < "$script_dir/../../output.json")
+if [[ ${#COSMOSDB_NAME} -eq 0 ]]; then
+ echo 'ERROR: Missing output value cosmosdb_name' 1>&2
+ exit 6
+fi
+
+ACR_NAME=$(jq -r '.acr_name' < "$script_dir/../../output.json")
+if [[ ${#ACR_NAME} -eq 0 ]]; then
+ echo 'ERROR: Missing output value acr_name' 1>&2
+ exit 6
+fi
+
+KEYVAULT_NAME=$(jq -r '.kv_name' < "$script_dir/../../output.json")
+if [[ ${#KEYVAULT_NAME} -eq 0 ]]; then
+ echo 'ERROR: Missing output value kv_name' 1>&2
+ exit 6
+fi
+
+TENANT_ID=$(jq -r '.tenant_id' < "$script_dir/../../output.json")
+if [[ ${#TENANT_ID} -eq 0 ]]; then
+ echo 'ERROR: Missing output value tenant_id' 1>&2
+ exit 6
+fi
+
+AKS_KEY_VAULT_SECRET_PROVIDER_CLIENT_ID=$(jq -r '.aks_key_vault_secret_provider_client_id' < "$script_dir/../../output.json")
+if [[ ${#AKS_KEY_VAULT_SECRET_PROVIDER_CLIENT_ID} -eq 0 ]]; then
+ echo 'ERROR: Missing output value aks_key_vault_secret_provider_client_id' 1>&2
+ exit 6
+fi
+
+#get information from Application Insights
+APP_INSIGHTS_KEY=$(az resource show -g "${RESOURCE_GROUP}" -n "${APP_INSIGHTS}" --resource-type "microsoft.insights/components" --query properties.ConnectionString --output tsv)
+
+#get information from Service Bus
+SERVICE_BUS_CONNECTION_STRING=$(az servicebus namespace authorization-rule keys list --resource-group "${RESOURCE_GROUP}" --namespace-name "${SERVICE_BUS_NAMESPACE}" --name RootManageSharedAccessKey --query primaryConnectionString --output tsv)
+
+#get information from Cosmos DB
+COSMOS_DB_ENDPOINT=$(az resource show -g "${RESOURCE_GROUP}" -n "${COSMOSDB_NAME}" --resource-type "microsoft.documentdb/databaseaccounts" --query properties.documentEndpoint --output tsv)
+COSMOS_DB_KEY=$(az cosmosdb keys list -g "${RESOURCE_GROUP}" -n "${COSMOSDB_NAME}" --query primaryMasterKey --output tsv)
+
+#create env file for cargo-processing-api
+cat << EOF > "$script_dir/../../src/cargo-processing-api/.env"
+APPLICATIONINSIGHTS_CONNECTION_STRING=$APP_INSIGHTS_KEY
+APPLICATIONINSIGHTS_VERSION=3.4.7
+
+#Service Bus Information
+servicebus_connection_string=$SERVICE_BUS_CONNECTION_STRING
+accelerator_queue_name=ingest-cargo
+
+# Operation API
+operations_api_url=http://operations-api:8081/
+EOF
+echo "CREATED: env file for CARGO-PROCESSING-API"
+
+#create helm values file for cargo-processing-api
+cat << EOF > "$script_dir/../../src/cargo-processing-api/helm/env.yaml"
+image:
+ repository: $ACR_NAME.azurecr.io/cargo-processing-api
+
+keyVault:
+ name: $KEYVAULT_NAME
+ tenantId: $TENANT_ID
+
+aksKeyVaultSecretProviderIdentityId: $AKS_KEY_VAULT_SECRET_PROVIDER_CLIENT_ID
+EOF
+echo "CREATED: helm value file for CARGO-PROCESSING-API"
+
+
+#create env file for cargo-processing-validator
+cat < "$script_dir/../../src/cargo-processing-validator/.env"
+APPLICATIONINSIGHTS_CONNECTION_STRING=$APP_INSIGHTS_KEY
+SERVICE_BUS_CONNECTION_STRING=$SERVICE_BUS_CONNECTION_STRING
+QUEUE_NAME="ingest-cargo"
+TOPIC_NAME="validated-cargo"
+MAX_WAIT_TIME_IN_MS=1000
+MAX_MESSAGE_DEQUEUE_COUNT=10
+OPERATION_QUEUE_NAME="operation-state"
+EOF
+echo "CREATED: env file for CARGO-PROCESSING-VALIDATOR"
+
+#create helm values file for cargo-processing-validator
+cat << EOF > "$script_dir/../../src/cargo-processing-validator/helm/env.yaml"
+image:
+ repository: $ACR_NAME.azurecr.io/cargo-processing-validator
+
+keyVault:
+ name: $KEYVAULT_NAME
+ tenantId: $TENANT_ID
+
+aksKeyVaultSecretProviderIdentityId: $AKS_KEY_VAULT_SECRET_PROVIDER_CLIENT_ID
+EOF
+echo "CREATED: helm value file for CARGO-PROCESSING-VALIDATOR"
+
+
+#create env file for invalid-cargo-manager
+cat << EOF > "$script_dir/../../src/invalid-cargo-manager/.env"
+SERVICE_BUS_CONNECTION_STR=$SERVICE_BUS_CONNECTION_STRING
+SERVICE_BUS_TOPIC_NAME=validated-cargo
+SERVICE_BUS_SUBSCRIPTION_NAME=invalid-cargo
+SERVICE_BUS_QUEUE_NAME=operation-state
+SERVICE_BUS_MAX_MESSAGE_COUNT=1
+SERVICE_BUS_MAX_WAIT_TIME=30
+
+COSMOS_DB_ENDPOINT=$COSMOS_DB_ENDPOINT
+COSMOS_DB_KEY=$COSMOS_DB_KEY
+COSMOS_DB_DATABASE_NAME=cargo
+COSMOS_DB_CONTAINER_NAME=invalid-cargo
+
+APPLICATIONINSIGHTS_CONNECTION_STRING=$APP_INSIGHTS_KEY
+CLOUD_LOGGING_LEVEL=INFO
+CONSOLE_LOGGING_LEVEL=DEBUG
+
+HEALTH_CHECK_SERVICE_BUS_DEGRADED_THRESHOLD_SECONDS=30
+HEALTH_CHECK_SERVICE_BUS_UNHEALTHY_THRESHOLD_SECONDS=60
+EOF
+echo "CREATED: env file for INVALID-CARGO-MANAGER"
+
+#create helm values file for invalid-cargo-manager
+cat << EOF > "$script_dir/../../src/invalid-cargo-manager/helm/env.yaml"
+image:
+ repository: $ACR_NAME.azurecr.io/invalid-cargo-manager
+
+keyVault:
+ name: $KEYVAULT_NAME
+ tenantId: $TENANT_ID
+
+aksKeyVaultSecretProviderIdentityId: $AKS_KEY_VAULT_SECRET_PROVIDER_CLIENT_ID
+EOF
+echo "CREATED: helm value file for INVALID-CARGO-MANAGER"
+
+
+#create env file for operations-api
+cat << EOF > "$script_dir/../../src/operations-api/.env"
+APPLICATIONINSIGHTS_CONNECTION_STRING=$APP_INSIGHTS_KEY
+APPLICATIONINSIGHTS_VERSION=3.4.7
+
+# Service Bus Information
+SERVICEBUS_CONNECTION_STRING=$SERVICE_BUS_CONNECTION_STRING
+SERVICEBUS_PREFETCH_COUNT=10
+OPERATION_STATE_QUEUE_NAME=operation-state
+
+# Cosmos Db Information
+COSMOS_DB_ENDPOINT=$COSMOS_DB_ENDPOINT
+COSMOS_DB_KEY=$COSMOS_DB_KEY
+COSMOS_DB_DATABASE_NAME=cargo
+COSMOS_DB_CONTAINER_NAME=invalid-cargo
+EOF
+echo "CREATED: env file for OPERATIONS-API"
+
+#create helm values file for operations-api
+cat << EOF > "$script_dir/../../src/operations-api/helm/env.yaml"
+image:
+ repository: $ACR_NAME.azurecr.io/operations-api
+
+keyVault:
+ name: $KEYVAULT_NAME
+ tenantId: $TENANT_ID
+
+aksKeyVaultSecretProviderIdentityId: $AKS_KEY_VAULT_SECRET_PROVIDER_CLIENT_ID
+EOF
+echo "CREATED: helm value file for OPERATIONS-API"
+
+
+#create appsettings.json file for valid-cargo-manager
+cat < "$script_dir/../../src/valid-cargo-manager/appsettings.json"
+{
+ "ApplicationInsights": {
+ "ConnectionString": "$APP_INSIGHTS_KEY"
+ },
+ "ServiceBus": {
+ "ConnectionString": "$SERVICE_BUS_CONNECTION_STRING",
+ "Topic": "validated-cargo",
+ "Queue": "operation-state",
+ "Subscription": "valid-cargo",
+ "PrefetchCount": 100,
+ "MaxConcurrentCalls": 10
+ },
+ "CosmosDB": {
+ "EndpointUri": "$COSMOS_DB_ENDPOINT",
+ "PrimaryKey": "$COSMOS_DB_KEY",
+ "Database": "cargo",
+ "Container": "valid-cargo"
+ },
+ "Logging": {
+ "LogLevel": {
+ "Default": "Information",
+ "Microsoft": "Warning",
+ "Microsoft.Hosting.Lifetime": "Information"
+ }
+ },
+ "HealthCheck": {
+ "TcpServer": {
+ "Port": 3030
+ },
+ "CosmosDB": {
+ "MaxDurationMs": 200
+ },
+ "ServiceBus": {
+ "MaxDurationMs": 200
+ }
+ }
+}
+EOF
+echo "CREATED: appsettings.json file for VALID-CARGO-MANAGER"
+
+#create helm values file for valid-cargo-manager
+cat << EOF > "$script_dir/../../src/valid-cargo-manager/helm/env.yaml"
+image:
+ repository: $ACR_NAME.azurecr.io/valid-cargo-manager
+
+keyVault:
+ name: $KEYVAULT_NAME
+ tenantId: $TENANT_ID
+
+aksKeyVaultSecretProviderIdentityId: $AKS_KEY_VAULT_SECRET_PROVIDER_CLIENT_ID
+EOF
+echo "CREATED: helm value file for VALID-CARGO-MANAGER"
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/deploy-bicep-infrastructure.sh b/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/deploy-bicep-infrastructure.sh
new file mode 100644
index 0000000..9888ec9
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/deploy-bicep-infrastructure.sh
@@ -0,0 +1,169 @@
+#!/bin/bash
+set -e
+
+#
+# This script generates the bicep parameters file and then uses that to deploy the infrastructure
+# An output.json file is generated in the project root containing the outputs from the deployment
+# The output.json format is consistent between Terraform and Bicep deployments
+#
+
+script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+
+help()
+{
+ echo ""
+ echo ""
+ echo ""
+ echo "Command"
+ echo " deploy-bicep-infrastructure.sh : Will deploy all required services services."
+ echo ""
+ echo "Arguments"
+ echo " --username, -u : REQUIRED: Unique name to assign in all deployed services, your high school hotmail alias is a great idea!"
+ echo " --email-address, -e : REQUIRED: Email address for alert notifications"
+ echo " --location, -l : REQUIRED: Azure region to deploy to"
+ echo " --aks-aad-auth : OPTIONAL Enable AAD authentication for AKS"
+ echo ""
+ exit 1
+}
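+
+# Illustrative invocation (all values below are placeholders):
+#   ./deploy-bicep-infrastructure.sh --username myalias --email-address alerts@example.com --location westeurope --aks-aad-auth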
+
+SHORT=u:,e:,l:,h
+LONG=username:,email-address:,location:,aks-aad-auth,help
+OPTS=$(getopt -a -n files --options $SHORT --longoptions $LONG -- "$@")
+
+eval set -- "$OPTS"
+
+USERNAME=''
+LOCATION=''
+EMAIL_ADDRESS=''
+AKS_AAD_AUTH=false
+while :
+do
+ case "$1" in
+ -u | --username )
+ USERNAME="$2"
+ shift 2
+ ;;
+ -e | --email-address )
+ EMAIL_ADDRESS="$2"
+ shift 2
+ ;;
+ -l | --location )
+ LOCATION="$2"
+ shift 2
+ ;;
+ --aks-aad-auth )
+ AKS_AAD_AUTH=true
+ shift 1
+ ;;
+ -h | --help)
+ help
+ ;;
+ --)
+ shift;
+ break
+ ;;
+ *)
+ echo "Unexpected option: $1"
+ ;;
+ esac
+done
+
+if [[ ${#USERNAME} -eq 0 ]]; then
+ echo 'ERROR: Missing required parameter --username | -u' 1>&2
+ exit 6
+fi
+
+if [[ ${#EMAIL_ADDRESS} -eq 0 ]]; then
+ echo 'ERROR: Missing required parameter --email-address | -e' 1>&2
+ exit 6
+fi
+
+if [[ ${#LOCATION} -eq 0 ]]; then
+ echo 'ERROR: Missing required parameter --location | -l' 1>&2
+ exit 6
+fi
+
+
+if [[ "$AKS_AAD_AUTH" == true ]]; then
+ if [[ -z "$ARM_CLIENT_ID" ]]; then
+ # Get the ID of the currently signed in user
+ current_user_object_id=$(az ad signed-in-user show --query id -o tsv)
+ else
+ # Get the ID of the service principal for ARM_CLIENT_ID
+ current_user_object_id=$(az ad sp show --id "$ARM_CLIENT_ID" --query id -o tsv)
+ fi
+ echo "Enabling AKS AAD authentication (current user object ID: $current_user_object_id)"
+fi
+
+cat << EOF > "$script_dir/../bicep/azuredeploy.parameters.json"
+{
+ "\$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "value": "${LOCATION}"
+ },
+ "uniqueUserName": {
+ "value": "${USERNAME}"
+ },
+ "cosmosDatabaseName": {
+ "value": "cargo"
+ },
+ "cosmosContainer1Name": {
+ "value": "valid-cargo"
+ },
+ "cosmosContainer2Name": {
+ "value": "invalid-cargo"
+ },
+ "cosmosContainer3Name": {
+ "value": "operations"
+ },
+ "serviceBusQueue1Name": {
+ "value": "ingest-cargo"
+ },
+ "serviceBusQueue2Name": {
+ "value": "operation-state"
+ },
+ "serviceBusTopicName": {
+ "value": "validated-cargo"
+ },
+ "serviceBusSubscription1Name": {
+ "value": "valid-cargo"
+ },
+ "serviceBusSubscription2Name": {
+ "value": "invalid-cargo"
+ },
+ "serviceBusTopicRule1Name": {
+ "value": "valid"
+ },
+ "serviceBusTopicRule2Name": {
+ "value": "invalid"
+ },
+ "aksAadAuth": {
+ "value": $AKS_AAD_AUTH
+ },
+ "aksAadAdminUserObjectId" : {
+ "value": "$current_user_object_id"
+ },
+ "notificationEmailAddress": {
+ "value": "${EMAIL_ADDRESS}"
+ }
+ }
+}
+EOF
+
+echo "Bicep parameters file created"
+
+cd "$script_dir/../bicep/"
+
+deployment_name="deployment-${USERNAME}-${LOCATION}"
+echo "Starting Bicep deployment ($deployment_name)"
+az deployment sub create \
+ --location "$LOCATION" \
+ --template-file main.bicep \
+ --name "$deployment_name" \
+ --parameters azuredeploy.parameters.json \
+ --output json \
+ | jq "[.properties.outputs | to_entries | .[] | {key:.key, value: .value.value}] | from_entries" > "$script_dir/../../output.json"
+
+echo "Bicep deployment completed"
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/deploy-helm-charts.sh b/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/deploy-helm-charts.sh
new file mode 100644
index 0000000..528c54f
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/deploy-helm-charts.sh
@@ -0,0 +1,205 @@
+#!/bin/bash
+set -e
+
+#
+# This script expects to find an output.json in the project root with the values
+# from the infrastructure deployment.
+# It deploys helm charts for each service to the AKS cluster
+#
+
+script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+
+
+function help() {
+ echo
+ echo "deploy-helm-charts.sh"
+ echo
+ echo "Deploy solution into AKS using Helm"
+ echo
+ echo -e "\t--aks-aad-auth\t(Optional)Enable AAD authentication for AKS"
+ echo
+}
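+
+# Illustrative invocation (run after a deploy-*-infrastructure.sh script has produced output.json):
+#   ./deploy-helm-charts.sh --aks-aad-auth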
+
+
+# Set default values here
+AKS_AAD_AUTH=false
+
+
+# Process switches:
+SHORT=h
+LONG=aks-aad-auth,help
+OPTS=$(getopt -a -n files --options $SHORT --longoptions $LONG -- "$@")
+
+eval set -- "$OPTS"
+
+while :
+do
+ case "$1" in
+ --aks-aad-auth )
+ AKS_AAD_AUTH=true
+ shift 1
+ ;;
+ -h | --help)
+ help
+ exit 0
+ ;;
+ --)
+ shift;
+ break
+ ;;
+ *)
+ echo "Unexpected '$1'"
+ help
+ exit 1
+ ;;
+ esac
+done
+
+
+RESOURCE_GROUP=$(jq -r '.rg_name' < "$script_dir/../../output.json")
+if [[ ${#RESOURCE_GROUP} -eq 0 ]]; then
+ echo 'ERROR: Missing output value rg_name' 1>&2
+ exit 6
+fi
+
+AKS_NAME=$(jq -r '.aks_name' < "$script_dir/../../output.json")
+if [[ ${#AKS_NAME} -eq 0 ]]; then
+ echo 'ERROR: Missing output value aks_name' 1>&2
+ exit 6
+fi
+
+
+if [[ "$AKS_AAD_AUTH" == "true" ]]; then
+ echo "Getting Admin AKS credentials"
+  # Temporarily get cluster admin credentials to set up user permissions for the default namespace
+
+ # Get kubeconfig for the AKS cluster
+ az aks get-credentials --resource-group "$RESOURCE_GROUP" --name "$AKS_NAME" --admin --overwrite-existing
+ # Update the kubeconfig to use https://github.com/azure/kubelogin
+ kubelogin convert-kubeconfig -l azurecli
+
+ if [[ -z "$ARM_CLIENT_ID" ]]; then
+    # Get the object ID of the currently signed in user
+ current_user_object_id=$(az ad signed-in-user show --query id -o tsv)
+ else
+ # Get the ID of the service principal for ARM_CLIENT_ID
+ current_user_object_id=$(az ad sp show --id "$ARM_CLIENT_ID" --query id -o tsv)
+ fi
+
+ echo "Adding user-full-access role & binding"
+cat < "$script_dir/../../http/.env"
+SERVICE_IP=$ingress_ip
+EOF
+echo "CREATED: env file for http docs"
+
+
+
+#get information from Service Bus
+SERVICE_BUS_NAMESPACE=$(jq -r '.sb_namespace_name' < "$script_dir/../../output.json")
+if [[ ${#SERVICE_BUS_NAMESPACE} -eq 0 ]]; then
+ echo 'ERROR: Missing output value sb_namespace_name' 1>&2
+ exit 6
+fi
+SERVICE_BUS_CONNECTION_STRING=$(az servicebus namespace authorization-rule keys list --resource-group "${RESOURCE_GROUP}" --namespace-name "${SERVICE_BUS_NAMESPACE}" --name RootManageSharedAccessKey --query primaryConnectionString --output tsv)
+
+
+
+#create env file for cargo-test-scripts
+cat << EOF > "$script_dir/../../src/cargo-test-scripts/.env"
+SERVICEBUS_CONNECTION_STRING=$SERVICE_BUS_CONNECTION_STRING
+QUEUE_NAME=ingest-cargo
+TOPIC_NAME=validated-cargo
+CARGO_PROCESSING_API_URL=http://$ingress_ip/cargo
+OPERATIONS_API_URL=http://$ingress_ip/cargo
+
+EOF
+echo "CREATED: env file for CARGO-TEST-SCRIPTS"
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/deploy-terraform-infrastructure.sh b/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/deploy-terraform-infrastructure.sh
new file mode 100644
index 0000000..b608117
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/scripts/deploy-terraform-infrastructure.sh
@@ -0,0 +1,152 @@
+#!/bin/bash
+set -e
+
+#
+# This script generates the terraform.tfvars file and then uses that to deploy the infrastructure
+# An output.json file is generated in the project root containing the outputs from the deployment
+# The output.json format is consistent between Terraform and Bicep deployments
+#
+
+script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+
+help()
+{
+ echo ""
+ echo ""
+ echo ""
+ echo "Command"
+ echo " deploy-terraform-infrastructure.sh : Will deploy all required services."
+ echo ""
+ echo "Arguments"
+ echo " --username, -u : REQUIRED: Unique name to assign in all deployed services, your high school hotmail alias is a great idea!"
+ echo " --email-address, -e : REQUIRED: Email address for alert notifications"
+ echo " --location, -l : REQUIRED: Azure region to deploy to"
+ echo " --aks-aad-auth : OPTIONAL Enable AAD authentication for AKS"
+ echo ""
+ exit 1
+}
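+
+# Illustrative invocation (all values below are placeholders):
+#   ./deploy-terraform-infrastructure.sh --username myalias --email-address alerts@example.com --location westeurope --aks-aad-auth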
+
+SHORT=u:,e:,l:,h
+LONG=username:,email-address:,location:,aks-aad-auth,help
+OPTS=$(getopt -a -n files --options $SHORT --longoptions $LONG -- "$@")
+
+eval set -- "$OPTS"
+
+USERNAME=''
+LOCATION=''
+EMAIL_ADDRESS=''
+AKS_AAD_AUTH=false
+while :
+do
+ case "$1" in
+ -u | --username )
+ USERNAME="$2"
+ shift 2
+ ;;
+ -e | --email-address )
+ EMAIL_ADDRESS="$2"
+ shift 2
+ ;;
+ -l | --location )
+ LOCATION="$2"
+ shift 2
+ ;;
+ --aks-aad-auth )
+ AKS_AAD_AUTH=true
+ shift 1
+ ;;
+ -h | --help)
+ help
+ ;;
+ --)
+ shift;
+ break
+ ;;
+ *)
+ echo "Unexpected option: $1"
+ ;;
+ esac
+done
+
+if [[ ${#USERNAME} -eq 0 ]]; then
+ echo 'ERROR: Missing required parameter --username | -u' 1>&2
+ exit 6
+fi
+
+if [[ ${#EMAIL_ADDRESS} -eq 0 ]]; then
+ echo 'ERROR: Missing required parameter --email-address | -e' 1>&2
+ exit 6
+fi
+
+if [[ ${#LOCATION} -eq 0 ]]; then
+ echo 'ERROR: Missing required parameter --location | -l' 1>&2
+ exit 6
+fi
+
+current_user_object_id=""
+if [[ "$AKS_AAD_AUTH" == true ]]; then
+ if [[ -z "$ARM_CLIENT_ID" ]]; then
+ # Get the ID of the currently signed in user
+ current_user_object_id=$(az ad signed-in-user show --query id -o tsv)
+ else
+ # Get the ID of the service principal for ARM_CLIENT_ID
+ current_user_object_id=$(az ad sp show --id "$ARM_CLIENT_ID" --query id -o tsv)
+ fi
+ echo "Enabling AKS AAD authentication (current user object ID: $current_user_object_id)"
+fi
+
+cat << EOF > "$script_dir/../terraform/terraform.tfvars"
+location = "${LOCATION}"
+prefix = "dev"
+unique_username = "${USERNAME}"
+cosmosdb_database_name = "cargo"
+cosmosdb_container1_name = "valid-cargo"
+cosmosdb_container2_name = "invalid-cargo"
+cosmosdb_container3_name = "operations"
+service_bus_queue1_name = "ingest-cargo"
+service_bus_queue2_name = "operation-state"
+service_bus_topic_name = "validated-cargo"
+service_bus_subscription1_name = "valid-cargo"
+service_bus_subscription2_name = "invalid-cargo"
+service_bus_topic_rule1_name = "valid"
+service_bus_topic_rule2_name = "invalid"
+aks_aad_auth = ${AKS_AAD_AUTH}
+aks_aad_admin_user_object_id = "${current_user_object_id}"
+notification_email_address = "${EMAIL_ADDRESS}"
+EOF
+
+echo -e "\n*** Terraform parameters file created"
+
+cd "$script_dir"/../terraform/
+
+if [[ -n "$TERRAFORM_STATE_STORAGE_ACCOUNT_NAME" ]]; then
+ # init with Azure backend
+ echo -e "\n*** Initializing Terraform (with Azure backend: $TERRAFORM_STATE_STORAGE_ACCOUNT_NAME)"
+cat > backend.tf << EOF
+terraform {
+ backend "azurerm" {}
+}
+EOF
+ terraform init -upgrade \
+ -backend-config "resource_group_name=${TERRAFORM_STATE_RESOURCE_GROUP_NAME}" \
+ -backend-config "storage_account_name=${TERRAFORM_STATE_STORAGE_ACCOUNT_NAME}" \
+ -backend-config "container_name=${TERRAFORM_STATE_CONTAINER_NAME}" \
+ -backend-config "key=${TERRAFORM_STATE_KEY}"
+else
+ # init with local backend
+ echo -e "\n*** Initializing Terraform (with local backend)"
+ rm -rf backend.tf
+ terraform init -upgrade
+fi
+
+echo -e "\n*** Planning Terraform resources"
+
+terraform plan -var-file=terraform.tfvars -out=plan.out
+
+echo -e "\n*** Deploying Terraform resources"
+
+terraform apply "plan.out"
+
+echo -e "\n*** Gathering required outputs"
+
+terraform output -json | jq "[. | to_entries | .[] | {key:.key, value: .value.value}] | from_entries" > "${script_dir}/../../output.json"
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/.gitignore b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/.gitignore
new file mode 100644
index 0000000..d4951f9
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/.gitignore
@@ -0,0 +1,5 @@
+.terraform/*
+.terraform*
+*.tfstate
+*.tfstate.backup
+backend.tf
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/main.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/main.tf
new file mode 100644
index 0000000..33bb716
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/main.tf
@@ -0,0 +1,171 @@
+data "azurerm_client_config" "current_config" {}
+
+resource "azurerm_resource_group" "rg" {
+ name = "rg-${var.prefix}-tf-${var.unique_username}"
+ location = var.location
+}
+
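+// Naming convention (as configured in the azurecaf_name resources below): the CAF abbreviation for
+// the resource type is combined with the "prefix" variable, the resource group location as a suffix,
+// and 3 random characters to keep names unique.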
+//Cosmos DB module
+resource "azurecaf_name" "cosmosdb" {
+ name = "accl"
+ resource_type = "azurerm_cosmosdb_account"
+ prefixes = [var.prefix]
+ suffixes = [azurerm_resource_group.rg.location]
+ random_length = 3
+ clean_input = true
+}
+
+module "cosmosdb" {
+ source = "./modules/cosmos"
+ account_name = azurecaf_name.cosmosdb.result
+ location = azurerm_resource_group.rg.location
+ resource_group_name = azurerm_resource_group.rg.name
+ cosmosdb_database_name = var.cosmosdb_database_name
+ cosmosdb_valid_container_name = var.cosmosdb_container1_name
+ cosmosdb_invalid_container_name = var.cosmosdb_container2_name
+ cosmosdb_operations_container_name = var.cosmosdb_container3_name
+ log_analytics_workspace_id = module.app_insights.log_analytics_workspace_id
+}
+
+//ACR module
+resource "azurecaf_name" "acr" {
+ name = "accl"
+ resource_type = "azurerm_container_registry"
+ prefixes = [var.prefix]
+ suffixes = [azurerm_resource_group.rg.location]
+ random_length = 3
+ clean_input = true
+}
+
+module "acr" {
+ source = "./modules/acr"
+ name = azurecaf_name.acr.result
+ location = azurerm_resource_group.rg.location
+ resource_group_name = azurerm_resource_group.rg.name
+}
+
+//AKS module
+resource "azurecaf_name" "aks" {
+ name = "accl"
+ resource_type = "azurerm_kubernetes_cluster"
+ prefixes = [var.prefix]
+ suffixes = [azurerm_resource_group.rg.location]
+ random_length = 3
+ clean_input = true
+}
+
+module "aks" {
+ source = "./modules/aks"
+ name = azurecaf_name.aks.result
+ prefix = var.prefix
+ location = azurerm_resource_group.rg.location
+ resource_group_name = azurerm_resource_group.rg.name
+ acr_id = module.acr.acr_id
+ log_analytics_workspace_id = module.app_insights.log_analytics_workspace_id
+ aks_aad_auth = var.aks_aad_auth
+ aks_aad_admin_user_object_id = var.aks_aad_admin_user_object_id
+}
+
+//Application Insights module
+resource "azurecaf_name" "appi" {
+ name = "accl"
+ resource_type = "azurerm_application_insights"
+ prefixes = [var.prefix]
+ suffixes = [azurerm_resource_group.rg.location]
+ random_length = 3
+ clean_input = true
+}
+
+resource "azurecaf_name" "log" {
+ name = "accl"
+ resource_type = "azurerm_log_analytics_workspace"
+ prefixes = [var.prefix]
+ suffixes = [azurerm_resource_group.rg.location]
+ random_length = 3
+ clean_input = true
+}
+
+module "app_insights" {
+ source = "./modules/app_insights"
+ app_insights_name = azurecaf_name.appi.result
+ log_analytics_workspace_name = azurecaf_name.log.result
+ location = azurerm_resource_group.rg.location
+ resource_group_name = azurerm_resource_group.rg.name
+}
+
+module "workbooks" {
+ source = "./modules/workbooks"
+ workspace_id = module.app_insights.log_analytics_workspace_id
+ resource_group_name = azurerm_resource_group.rg.name
+ location = azurerm_resource_group.rg.location
+ servicebus_namespace_id = module.service_bus.servicebus_namespace_id
+ app_insights_id = module.app_insights.app_insights_id
+ key_vault_id = module.key_vault.kv_id
+ aks_id = module.aks.aks_id
+}
+
+module "alerts" {
+ source = "./modules/alerts"
+ resource_group_name = azurerm_resource_group.rg.name
+ location = azurerm_resource_group.rg.location
+ notification_email_address = var.notification_email_address
+ action_group_name = "default-actiongroup"
+ cosmosdb_id = module.cosmosdb.cosmosdb_id
+ servicebus_namespace_id = module.service_bus.servicebus_namespace_id
+ aks_id = module.aks.aks_id
+ kv_id = module.key_vault.kv_id
+ app_insights_id = module.app_insights.app_insights_id
+ log_analytics_workspace_id = module.app_insights.log_analytics_workspace_id
+}
+
+//Service Bus module
+resource "azurecaf_name" "service_bus" {
+ name = "accl"
+ resource_type = "azurerm_servicebus_namespace"
+ prefixes = [var.prefix]
+ suffixes = [azurerm_resource_group.rg.location]
+ random_length = 3
+ clean_input = true
+}
+
+module "service_bus" {
+ source = "./modules/service_bus"
+ services_bus_namespace_name = azurecaf_name.service_bus.result
+ location = azurerm_resource_group.rg.location
+ resource_group_name = azurerm_resource_group.rg.name
+ log_analytics_workspace_id = module.app_insights.log_analytics_workspace_id
+ service_bus_queue1_name = var.service_bus_queue1_name
+ service_bus_queue2_name = var.service_bus_queue2_name
+ service_bus_topic_name = var.service_bus_topic_name
+ service_bus_valid_subscription = var.service_bus_subscription1_name
+ service_bus_invalid_subscription = var.service_bus_subscription2_name
+ service_bus_valid_rule = var.service_bus_topic_rule1_name
+ service_bus_invalid_rule = var.service_bus_topic_rule2_name
+}
+
+//Key Vault module
+resource "azurecaf_name" "kv_compute" {
+ name = "accl"
+ resource_type = "azurerm_key_vault"
+ prefixes = [var.prefix]
+ suffixes = [azurerm_resource_group.rg.location]
+ random_length = 3
+ clean_input = true
+}
+
+module "key_vault" {
+ source = "./modules/keyvault"
+ location = azurerm_resource_group.rg.location
+ resource_group_name = azurerm_resource_group.rg.name
+ kev_vault_name = azurecaf_name.kv_compute.result
+ log_analytics_workspace_id = module.app_insights.log_analytics_workspace_id
+ aks_key_vault_secret_provider_object_id = module.aks.aks_key_vault_secret_provider_object_id
+ key_vault_secrets = tomap(
+ {
+ "AppInsightsConnectionString" = module.app_insights.connection_string
+ "ServiceBusConnectionString" = module.service_bus.connection_string
+ "CosmosDBEndpoint" = module.cosmosdb.cosmosdb_endpoint
+ "CosmosDBKey" = module.cosmosdb.cosmosdb_key
+ }
+ )
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/acr/main.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/acr/main.tf
new file mode 100644
index 0000000..fbfd094
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/acr/main.tf
@@ -0,0 +1,7 @@
+resource "azurerm_container_registry" "acr" {
+ name = var.name
+ resource_group_name = var.resource_group_name
+ location = var.location
+ sku = "Standard"
+ admin_enabled = false
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/acr/outputs.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/acr/outputs.tf
new file mode 100644
index 0000000..492dc93
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/acr/outputs.tf
@@ -0,0 +1,8 @@
+output "acr_id" {
+ value = azurerm_container_registry.acr.id
+ sensitive = true
+}
+
+output "acr_name" {
+ value = azurerm_container_registry.acr.name
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/acr/variables.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/acr/variables.tf
new file mode 100644
index 0000000..c80de96
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/acr/variables.tf
@@ -0,0 +1,14 @@
+variable "name" {
+ type = string
+ description = "resource name"
+}
+
+variable "location" {
+ type = string
+ description = "The Azure region in which ACR should be provisioned"
+}
+
+variable "resource_group_name" {
+ type = string
+ description = "The Azure Resource Group where the ACR should be provisioned"
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/aks/main.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/aks/main.tf
new file mode 100644
index 0000000..3949a23
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/aks/main.tf
@@ -0,0 +1,63 @@
+data "azurerm_client_config" "current_config" {}
+
+resource "azurerm_kubernetes_cluster" "aks" {
+ name = var.name
+ location = var.location
+ resource_group_name = var.resource_group_name
+ dns_prefix = var.kubernetes_dns_prefix
+ private_cluster_enabled = false
+
+
+ default_node_pool {
+ name = "agentpool"
+ min_count = 1
+ max_count = var.kubernetes_node_count
+ enable_auto_scaling = true
+ type = "VirtualMachineScaleSets"
+ vm_size = var.kubernetes_vm_size
+ os_disk_size_gb = var.kubernetes_vm_disk_size
+ }
+
+ // Use dynamic to conditionally set AAD auth block
+ dynamic "azure_active_directory_role_based_access_control" {
+ for_each = var.aks_aad_auth ? [1] : []
+ content {
+ managed = true
+ tenant_id = data.azurerm_client_config.current_config.tenant_id
+ azure_rbac_enabled = true
+ }
+ }
+
+ identity {
+ type = "SystemAssigned"
+ }
+
+ key_vault_secrets_provider {
+ secret_rotation_enabled = true
+ secret_rotation_interval = "2m"
+ }
+
+ oms_agent {
+ log_analytics_workspace_id = var.log_analytics_workspace_id
+ }
+}
+
+resource "azurerm_role_assignment" "acrpull_role" {
+ scope = var.acr_id
+ role_definition_name = "AcrPull"
+ principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
+ skip_service_principal_aad_check = true
+}
+
+resource "azurerm_role_assignment" "aks_admin_role" {
+ count = var.aks_aad_auth ? 1 : 0
+ scope = azurerm_kubernetes_cluster.aks.id
+ role_definition_name = "Azure Kubernetes Service Cluster Admin Role"
+ principal_id = var.aks_aad_admin_user_object_id
+}
+resource "azurerm_role_assignment" "aks_user_role" {
+ count = var.aks_aad_auth ? 1 : 0
+ scope = azurerm_kubernetes_cluster.aks.id
+ role_definition_name = "Azure Kubernetes Service Cluster User Role"
+ principal_id = var.aks_aad_admin_user_object_id
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/aks/outputs.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/aks/outputs.tf
new file mode 100644
index 0000000..594c2a4
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/aks/outputs.tf
@@ -0,0 +1,17 @@
+output "aks_name" {
+ value = azurerm_kubernetes_cluster.aks.name
+}
+
+output "aks_id" {
+ value = azurerm_kubernetes_cluster.aks.id
+}
+
+output "aks_key_vault_secret_provider_client_id" {
+ value = azurerm_kubernetes_cluster.aks.key_vault_secrets_provider[0].secret_identity[0].client_id
+ sensitive = true
+}
+
+output "aks_key_vault_secret_provider_object_id" {
+ value = azurerm_kubernetes_cluster.aks.key_vault_secrets_provider[0].secret_identity[0].object_id
+ sensitive = true
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/aks/variables.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/aks/variables.tf
new file mode 100644
index 0000000..15bf9af
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/aks/variables.tf
@@ -0,0 +1,65 @@
+variable "name" {
+ type = string
+ description = "The AKS resource name"
+}
+
+variable "location" {
+ type = string
+ description = "The Azure region in which AKS should be provisioned"
+}
+
+variable "resource_group_name" {
+ type = string
+ description = "The Azure Resource Group where the AKS should be provisioned"
+}
+
+variable "prefix" {
+ type = string
+ description = "Name prefix"
+}
+
+variable "kubernetes_dns_prefix" {
+ type = string
+ description = "AKS DNS prefix"
+ default = "aks"
+}
+
+variable "kubernetes_node_count" {
+ type = number
+ description = "The agent count"
+ default = 3
+}
+
+variable "kubernetes_vm_size" {
+ type = string
+ description = "Azure Kubernetes Cluster VM Size"
+ default = "Standard_D2s_v3"
+}
+
+variable "kubernetes_vm_disk_size" {
+ type = string
+ description = "Azure Kubernetes Cluster VM Disk Size"
+ default = "30"
+}
+
+variable "log_analytics_workspace_id" {
+ type = string
+ description = "The ID of the Log Analytics Workspace related to the cluster."
+}
+
+variable "acr_id" {
+ type = string
+ description = "Id from ACR to get acrPull role assignment"
+}
+
+variable "aks_aad_auth" {
+ type = bool
+ description = "Configure Azure Active Directory authentication for Kubernetes cluster"
+ default = false
+}
+
+variable "aks_aad_admin_user_object_id" {
+ type = string
+ description = "Object ID of the AAD user to be added as an admin to the AKS cluster"
+ default = ""
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/alerts/main.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/alerts/main.tf
new file mode 100644
index 0000000..15003ae
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/alerts/main.tf
@@ -0,0 +1,975 @@
+resource "azurerm_monitor_action_group" "default" {
+ name = var.action_group_name
+ resource_group_name = var.resource_group_name
+ short_name = length(var.action_group_name) <= 12 ? var.action_group_name : substr(var.action_group_name, 0, 12)
+
+ email_receiver {
+ name = "email-receiver"
+ email_address = var.notification_email_address
+ use_common_alert_schema = false
+ }
+}
+
+resource "azurerm_monitor_metric_alert" "cosmos_rus" {
+ name = "cosmos_rus"
+ resource_group_name = var.resource_group_name
+ scopes = [var.cosmosdb_id]
+ severity = 1
+ description = "Alert when RUs exceed 400."
+ enabled = false
+ frequency = "PT1M"
+ window_size = "PT5M"
+
+ criteria {
+ metric_namespace = "Microsoft.DocumentDB/databaseAccounts"
+ metric_name = "TotalRequestUnits"
+ aggregation = "Total"
+ operator = "GreaterThan"
+ threshold = 400
+ }
+
+ action {
+ action_group_id = azurerm_monitor_action_group.default.id
+ }
+}
+
+resource "azurerm_monitor_metric_alert" "cosmos_invalid_cargo" {
+ name = "cosmos_invalid_cargo"
+ resource_group_name = var.resource_group_name
+ scopes = [var.cosmosdb_id]
+ severity = 3
+ description = "Alert when more than 10 documents have been saved to the invalid-cargo container."
+ enabled = false
+ frequency = "PT1M"
+ window_size = "PT5M"
+
+ criteria {
+ metric_namespace = "Microsoft.DocumentDB/databaseAccounts"
+ metric_name = "DocumentCount"
+ aggregation = "Total"
+ operator = "GreaterThan"
+ threshold = 10
+ dimension {
+ name = "CollectionName"
+ operator = "Include"
+ values = ["invalid_cargo"]
+ }
+ }
+
+ action {
+ action_group_id = azurerm_monitor_action_group.default.id
+ }
+}
+
+resource "azurerm_monitor_metric_alert" "service_bus_abandoned_messages" {
+ name = "service_bus_abandoned_messages"
+ resource_group_name = var.resource_group_name
+ scopes = [var.servicebus_namespace_id]
+ severity = 2
+ description = "Alert when a Service Bus entity has abandoned more than 10 messages."
+ enabled = false
+ frequency = "PT1M"
+ window_size = "PT5M"
+
+ criteria {
+ metric_namespace = "Microsoft.ServiceBus/namespaces"
+ metric_name = "AbandonMessage"
+ aggregation = "Total"
+ operator = "GreaterThan"
+ threshold = 10
+ dimension {
+ name = "EntityName"
+ operator = "Include"
+ values = ["*"]
+ }
+ }
+
+ action {
+ action_group_id = azurerm_monitor_action_group.default.id
+ }
+}
+
+resource "azurerm_monitor_metric_alert" "service_bus_dead_lettered_messages" {
+ name = "service_bus_dead_lettered_messages"
+ resource_group_name = var.resource_group_name
+ scopes = [var.servicebus_namespace_id]
+ severity = 2
+ description = "Alert when a Service Bus entity has dead-lettered more than 10 messages."
+ enabled = false
+ frequency = "PT1M"
+ window_size = "PT5M"
+
+ criteria {
+ metric_namespace = "Microsoft.ServiceBus/namespaces"
+ metric_name = "DeadletteredMessages"
+ aggregation = "Average"
+ operator = "GreaterThan"
+ threshold = 10
+ dimension {
+ name = "EntityName"
+ operator = "Include"
+ values = ["*"]
+ }
+ }
+
+ action {
+ action_group_id = azurerm_monitor_action_group.default.id
+ }
+}
+
+resource "azurerm_monitor_metric_alert" "service_bus_throttled_requests" {
+ name = "service_bus_throttled_requests"
+ resource_group_name = var.resource_group_name
+ scopes = [var.servicebus_namespace_id]
+ severity = 2
+ description = "Alert when a Service Bus entity has throttled more than 10 requests."
+ enabled = false
+ frequency = "PT1M"
+ window_size = "PT5M"
+
+ criteria {
+ metric_namespace = "Microsoft.ServiceBus/namespaces"
+ metric_name = "ThrottledRequests"
+ aggregation = "Total"
+ operator = "GreaterThan"
+ threshold = 10
+ dimension {
+ name = "EntityName"
+ operator = "Include"
+ values = ["*"]
+ }
+ }
+
+ action {
+ action_group_id = azurerm_monitor_action_group.default.id
+ }
+}
+
+resource "azurerm_monitor_metric_alert" "aks_cpu_percentage" {
+ name = "aks_cpu_percentage"
+ resource_group_name = var.resource_group_name
+ scopes = [var.aks_id]
+ severity = 2
+ description = "Alert when Node CPU percentage exceeds 80."
+ enabled = false
+ frequency = "PT5M"
+ window_size = "PT5M"
+
+ criteria {
+ metric_namespace = "Microsoft.ContainerService/managedClusters"
+ metric_name = "node_cpu_usage_percentage"
+ aggregation = "Average"
+ operator = "GreaterThan"
+ threshold = 80
+ }
+
+ action {
+ action_group_id = azurerm_monitor_action_group.default.id
+ }
+}
+
+resource "azurerm_monitor_metric_alert" "aks_memory_percentage" {
+ name = "aks_memory_percentage"
+ resource_group_name = var.resource_group_name
+ scopes = [var.aks_id]
+ severity = 2
+ description = "Alert when Node memory working set percentage exceeds 80."
+ enabled = false
+ frequency = "PT5M"
+ window_size = "PT5M"
+
+ criteria {
+ metric_namespace = "Microsoft.ContainerService/managedClusters"
+ metric_name = "node_memory_working_set_percentage"
+ aggregation = "Average"
+ operator = "GreaterThan"
+ threshold = 80
+ }
+
+ action {
+ action_group_id = azurerm_monitor_action_group.default.id
+ }
+}
+
+resource "azurerm_monitor_metric_alert" "key_vault_saturation_rate" {
+ name = "key_vault_saturation_rate"
+ resource_group_name = var.resource_group_name
+ scopes = [var.kv_id]
+ severity = 3
+ description = "Alert when Key Vault saturation falls outside the range of a dynamic threshold."
+ enabled = false
+ frequency = "PT5M"
+ window_size = "PT5M"
+
+ dynamic_criteria {
+ metric_namespace = "Microsoft.KeyVault/vaults"
+ metric_name = "SaturationShoebox"
+ aggregation = "Average"
+ operator = "GreaterOrLessThan"
+ alert_sensitivity = "Medium"
+ evaluation_total_count = 4
+ evaluation_failure_count = 4
+ }
+
+ action {
+ action_group_id = azurerm_monitor_action_group.default.id
+ }
+}
+
+# Tenant specific issues prevent deployment of custom metric alert
+#
+# resource "azurerm_monitor_metric_alert" "product_qty_scheduled_for_destination_port" {
+# name = "product_qty_scheduled_for_destination_port"
+# resource_group_name = var.resource_group_name
+# scopes = [var.app_insights_id]
+# severity = 3
+# description = "Alert when a single port/destination receives more than quantity 1000 of a given product."
+# enabled = false
+# frequency = "PT1M"
+# window_size = "PT1M"
+
+# criteria {
+# metric_namespace = "azure.applicationinsights"
+# metric_name = "port_product_qty"
+# aggregation = "Total"
+# operator = "GreaterThan"
+# threshold = 1000
+# skip_metric_validation = true
+
+# dimension {
+# name = "destination"
+# operator = "Include"
+# values = ["*"]
+# }
+
+# dimension {
+# name = "product"
+# operator = "Include"
+# values = ["*"]
+# }
+# }
+
+# action {
+# action_group_id = azurerm_monitor_action_group.default.id
+# }
+# }
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "microservice_exceptions" {
+ name = "microservice_exceptions"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when a microservice throws more than 5 exceptions."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ exceptions
+ QUERY
+ time_aggregation_method = "Count"
+ threshold = 5
+ operator = "GreaterThan"
+
+ dimension {
+ name = "cloud_RoleName"
+ operator = "Include"
+ values = ["*"]
+ }
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "cargo_processing_api_requests" {
+ name = "cargo_processing_api_requests"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 3
+ description = "Alert when the cargo-processing-api microservice is not receiving any requests."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ requests
+ | where cloud_RoleName == "cargo-processing-api" and (name == "POST /cargo/" or name == "PUT /cargo/{cargoId}")
+ QUERY
+ time_aggregation_method = "Count"
+ # usage of the "Equal" operator is currently blocked
+    # LessThan 1 should suffice as a workaround for Equal 0 until the fix is released in 3.36.0
+ # please see discussion at https://github.com/hashicorp/terraform-provider-azurerm/issues/19581
+ threshold = 1
+ operator = "LessThan"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "e2e_average_duration" {
+ name = "e2e_average_duration"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when the end to end average request duration exceeds 5 seconds."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ let cargo_processing_api = requests
+ | where cloud_RoleName == "cargo-processing-api" and (name == "POST /cargo/" or name == "PUT /cargo/{cargoId}")
+ | project-rename ingest_timestamp = timestamp
+ | project ingest_timestamp, operation_Id;
+ let operation_api_succeeded = requests
+ | where cloud_RoleName == "operations-api" and name == "ServiceBus.process" and customDimensions["operation-state"] == "Succeeded"
+ | extend operation_api_completed = timestamp + (duration*1ms)
+ | project operation_Id, operation_api_completed;
+ cargo_processing_api
+ | join kind=inner operation_api_succeeded on $left.operation_Id == $right.operation_Id
+ | extend end_to_end_Duration_ms = (operation_api_completed - ingest_timestamp) /1ms
+ | summarize avg(end_to_end_Duration_ms)
+ QUERY
+ time_aggregation_method = "Average"
+ threshold = 5000
+ operator = "GreaterThan"
+ metric_measure_column = "avg_end_to_end_Duration_ms"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "cargo_processing_api_average_duration" {
+ name = "cargo_processing_api_average_duration"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when the cargo-processing-api microservice average request duration exceeds 2 seconds."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ requests
+ | where cloud_RoleName == "cargo-processing-api" and (name == "POST /cargo/" or name == "PUT /cargo/{cargoId}")
+ | summarize avg(duration)
+ QUERY
+ time_aggregation_method = "Average"
+ threshold = 2000
+ operator = "GreaterThan"
+ metric_measure_column = "avg_duration"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "cargo_processing_validator_average_duration" {
+ name = "cargo_processing_validator_average_duration"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when the cargo-processing-validator microservice average request duration exceeds 2 seconds."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ requests
+ | where cloud_RoleName == "cargo-processing-validator" and (name == "ServiceBus.ProcessMessage" or name == "ServiceBusQueue.ProcessMessage")
+ | summarize avg(duration)
+ QUERY
+ time_aggregation_method = "Average"
+ threshold = 2000
+ operator = "GreaterThan"
+ metric_measure_column = "avg_duration"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "valid_cargo_manager_average_duration" {
+ name = "valid_cargo_manager_average_duration"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when the valid-cargo-manager microservice average request duration exceeds 2 seconds."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ requests
+ | where cloud_RoleName == "valid-cargo-manager" and name == "ServiceBusTopic.ProcessMessage"
+ | summarize avg(duration)
+ QUERY
+ time_aggregation_method = "Average"
+ threshold = 2000
+ operator = "GreaterThan"
+ metric_measure_column = "avg_duration"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "invalid_cargo_manager_average_duration" {
+ name = "invalid_cargo_manager_average_duration"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when the invalid-cargo-manager microservice average request duration exceeds 2 seconds."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ requests
+ | where cloud_RoleName == "invalid-cargo-manager" and name == "ServiceBusTopic.ProcessMessage"
+ | summarize avg(duration)
+ QUERY
+ time_aggregation_method = "Average"
+ threshold = 2000
+ operator = "GreaterThan"
+ metric_measure_column = "avg_duration"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "operations_api_average_duration" {
+ name = "operations_api_average_duration"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when the operations-api microservice average request duration exceeds 1 second."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ requests
+ | where cloud_RoleName == "operations-api" and name == "ServiceBus.process"
+ | summarize avg(duration)
+ QUERY
+ time_aggregation_method = "Average"
+ threshold = 1000
+ operator = "GreaterThan"
+ metric_measure_column = "avg_duration"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "log_analytics_data_ingestion_daily_cap" {
+ name = "log_analytics_data_ingestion_daily_cap"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.log_analytics_workspace_id]
+ severity = 2
+ description = "Alert when the Log Analytics data ingestion daily cap has been reached."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ _LogOperation
+ | where Category == "Ingestion"
+ | where Operation has "Data collection"
+ QUERY
+ time_aggregation_method = "Count"
+ threshold = 0
+ operator = "GreaterThan"
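+    # the _ResourceId column ties each fired alert to the Azure resource that emitted the matching log records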
+ resource_id_column = "_ResourceId"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "log_analytics_data_ingestion_rate" {
+ name = "log_analytics_data_ingestion_rate"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.log_analytics_workspace_id]
+ severity = 2
+ description = "Alert when the Log Analytics max data ingestion rate has been reached."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ _LogOperation
+ | where Category == "Ingestion"
+ | where Operation has "Ingestion rate"
+ QUERY
+ time_aggregation_method = "Count"
+ threshold = 0
+ operator = "GreaterThan"
+ resource_id_column = "_ResourceId"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "log_analytics_operational_issues" {
+ name = "log_analytics_operational_issues"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "P1D"
+ window_duration = "P1D"
+ scopes = [var.log_analytics_workspace_id]
+ severity = 3
+ description = "Alert when the Log Analytics workspace has an operational issue."
+ enabled = false
+  # stateful alert rules cannot be evaluated at a frequency greater than 12 hours, so auto_mitigation_enabled must be false
+ auto_mitigation_enabled = false
+
+ criteria {
+ query = <<-QUERY
+ _LogOperation
+ | where Level == "Warning"
+ QUERY
+ time_aggregation_method = "Count"
+ threshold = 0
+ operator = "GreaterThan"
+ resource_id_column = "_ResourceId"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "cargo_processing_api_health_check_failure" {
+ name = "cargo_processing_api_health_check_failure"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when a cargo-processing-api microservice health check fails."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ requests
+ | where cloud_RoleName == "cargo-processing-api" and name == "GET /actuator/health" and success == "False"
+ QUERY
+ time_aggregation_method = "Count"
+ threshold = 0
+ operator = "GreaterThan"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "cargo_processing_api_health_check_not_reporting" {
+ name = "cargo_processing_api_health_check_not_reporting"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when the cargo-processing-api microservice health check is not reporting."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ requests
+ | where cloud_RoleName == "cargo-processing-api" and name == "GET /actuator/health"
+ QUERY
+ time_aggregation_method = "Count"
+ # usage of the "Equal" operator is currently blocked
+    # LessThan 1 should suffice as a workaround for Equal 0 until the fix is released in provider version 3.36.0
+ # please see discussion at https://github.com/hashicorp/terraform-provider-azurerm/issues/19581
+ threshold = 1
+ operator = "LessThan"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "valid_cargo_manager_health_check_failure" {
+ name = "valid_cargo_manager_health_check_failure"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT30M"
+ window_duration = "PT30M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when a valid-cargo-manager microservice health check fails."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ customMetrics
+ | where cloud_RoleName == "valid-cargo-manager" and name == "HeartbeatState" and value != 2
+ QUERY
+ time_aggregation_method = "Count"
+ threshold = 0
+ operator = "GreaterThan"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "valid_cargo_manager_health_check_not_reporting" {
+ name = "valid_cargo_manager_health_check_not_reporting"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT30M"
+ window_duration = "PT30M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when the valid-cargo-manager microservice health check is not reporting."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ customMetrics
+ | where cloud_RoleName == "valid-cargo-manager" and name == "HeartbeatState"
+ QUERY
+ time_aggregation_method = "Count"
+ # usage of the "Equal" operator is currently blocked
+    # LessThan 1 should suffice as a workaround for Equal 0 until the fix is released in provider version 3.36.0
+ # please see discussion at https://github.com/hashicorp/terraform-provider-azurerm/issues/19581
+ threshold = 1
+ operator = "LessThan"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "invalid_cargo_manager_health_check_failure" {
+ name = "invalid_cargo_manager_health_check_failure"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when an invalid-cargo-manager microservice health check fails."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ traces
+ | where cloud_RoleName == "invalid-cargo-manager" and message contains "peeked at messages for over"
+ QUERY
+ time_aggregation_method = "Count"
+ threshold = 0
+ operator = "GreaterThan"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "invalid_cargo_manager_health_check_not_reporting" {
+ name = "invalid_cargo_manager_health_check_not_reporting"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when the invalid-cargo-manager microservice health check is not reporting."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ traces
+ | where cloud_RoleName == "invalid-cargo-manager" and (message contains "since last peek" or message contains "peeked at messages for over")
+ QUERY
+ time_aggregation_method = "Count"
+ # usage of the "Equal" operator is currently blocked
+    # LessThan 1 should suffice as a workaround for Equal 0 until the fix is released in provider version 3.36.0
+ # please see discussion at https://github.com/hashicorp/terraform-provider-azurerm/issues/19581
+ threshold = 1
+ operator = "LessThan"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "operations_api_health_check_failure" {
+ name = "operations_api_health_check_failure"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when an operations-api microservice health check fails."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ requests
+ | where cloud_RoleName == "operations-api" and name == "GET /actuator/health" and success == "False"
+ QUERY
+ time_aggregation_method = "Count"
+ threshold = 0
+ operator = "GreaterThan"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "operations_api_health_check_not_reporting" {
+ name = "operations_api_health_check_not_reporting"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.app_insights_id]
+ severity = 1
+ description = "Alert when the operations-api microservice health check is not reporting."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ requests
+ | where cloud_RoleName == "operations-api" and name == "GET /actuator/health"
+ QUERY
+ time_aggregation_method = "Count"
+ # usage of the "Equal" operator is currently blocked
+    # LessThan 1 should suffice as a workaround for Equal 0 until the fix is released in provider version 3.36.0
+ # please see discussion at https://github.com/hashicorp/terraform-provider-azurerm/issues/19581
+ threshold = 1
+ operator = "LessThan"
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
+
+resource "azurerm_monitor_scheduled_query_rules_alert_v2" "aks_pod_restarts" {
+ name = "aks_pod_restarts"
+ resource_group_name = var.resource_group_name
+ location = var.location
+
+ evaluation_frequency = "PT5M"
+ window_duration = "PT5M"
+ scopes = [var.log_analytics_workspace_id]
+ severity = 1
+ description = "Alert when a microservice restarts more than once."
+ enabled = false
+ auto_mitigation_enabled = true
+
+ criteria {
+ query = <<-QUERY
+ KubePodInventory
+ | summarize numRestarts = sum(PodRestartCount) by ServiceName
+ QUERY
+ time_aggregation_method = "Total"
+ threshold = 1
+ operator = "GreaterThan"
+ metric_measure_column = "numRestarts"
+
+ dimension {
+ name = "ServiceName"
+ operator = "Include"
+ values = [
+ "cargo-processing-api",
+ "cargo-processing-validator",
+ "invalid-cargo-manager",
+ "operations-api",
+ "valid-cargo-manager"
+ ]
+ }
+
+ failing_periods {
+ minimum_failing_periods_to_trigger_alert = 1
+ number_of_evaluation_periods = 1
+ }
+ }
+
+ action {
+ action_groups = [azurerm_monitor_action_group.default.id]
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/alerts/variables.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/alerts/variables.tf
new file mode 100644
index 0000000..9b5b153
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/alerts/variables.tf
@@ -0,0 +1,49 @@
+variable "location" {
+ type = string
+  description = "Location for the alert resources"
+}
+
+variable "resource_group_name" {
+ type = string
+  description = "Resource group for the alert resources"
+}
+
+variable "action_group_name" {
+ type = string
+ description = "Name for the default action group"
+}
+
+variable "notification_email_address" {
+ type = string
+ description = "Email address for alert notifications"
+}
+
+variable "cosmosdb_id" {
+ type = string
+ description = "Id for monitored Cosmos DB"
+}
+
+variable "servicebus_namespace_id" {
+ type = string
+ description = "Id for monitored Service Bus namespace"
+}
+
+variable "aks_id" {
+ type = string
+ description = "Id for monitored AKS cluster"
+}
+
+variable "kv_id" {
+ type = string
+ description = "Id for monitored Key Vault"
+}
+
+variable "app_insights_id" {
+ type = string
+ description = "Id for monitored Application Insights"
+}
+
+variable "log_analytics_workspace_id" {
+ type = string
+ description = "Id for monitored Log Analytics workspace"
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/app_insights/main.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/app_insights/main.tf
new file mode 100644
index 0000000..69b7e97
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/app_insights/main.tf
@@ -0,0 +1,28 @@
+resource "azurerm_application_insights" "app_insights" {
+ name = var.app_insights_name
+ location = var.location
+ resource_group_name = var.resource_group_name
+ application_type = var.application_type
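+  # linking the Log Analytics workspace below makes this a workspace-based Application Insights resource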
+ workspace_id = azurerm_log_analytics_workspace.log_analytics.id
+}
+
+resource "azurerm_log_analytics_workspace" "log_analytics" {
+ name = var.log_analytics_workspace_name
+ location = var.location
+ resource_group_name = var.resource_group_name
+ sku = var.log_analytics_workspace_sku
+ retention_in_days = 31
+}
+
+resource "azurerm_log_analytics_solution" "log_solution" {
+ solution_name = "ContainerInsights"
+ location = azurerm_log_analytics_workspace.log_analytics.location
+ resource_group_name = azurerm_log_analytics_workspace.log_analytics.resource_group_name
+ workspace_resource_id = azurerm_log_analytics_workspace.log_analytics.id
+ workspace_name = azurerm_log_analytics_workspace.log_analytics.name
+
+ plan {
+ publisher = "Microsoft"
+ product = "OMSGallery/ContainerInsights"
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/app_insights/outputs.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/app_insights/outputs.tf
new file mode 100644
index 0000000..5516376
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/app_insights/outputs.tf
@@ -0,0 +1,16 @@
+output "name" {
+ value = azurerm_application_insights.app_insights.name
+}
+
+output "connection_string" {
+ value = azurerm_application_insights.app_insights.connection_string
+ sensitive = true
+}
+
+output "log_analytics_workspace_id" {
+ value = azurerm_log_analytics_workspace.log_analytics.id
+}
+
+output "app_insights_id" {
+ value = azurerm_application_insights.app_insights.id
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/app_insights/variables.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/app_insights/variables.tf
new file mode 100644
index 0000000..47dfbdb
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/app_insights/variables.tf
@@ -0,0 +1,31 @@
+variable "app_insights_name" {
+ type = string
+ description = "The name of the Application Insights resource"
+}
+
+variable "location" {
+ type = string
+ description = "The Azure region in which AppInsights should be provisioned"
+}
+
+variable "resource_group_name" {
+ type = string
+ description = "The Azure Resource Group where the AppInsights should be provisioned"
+}
+
+variable "application_type" {
+ type = string
+ description = "The kind of application that will be sending the telemetry"
+ default = "web"
+}
+
+variable "log_analytics_workspace_name" {
+ type = string
+ description = "The resource name for log analytics"
+}
+
+variable "log_analytics_workspace_sku" {
+ type = string
+ description = "Specifies the SKU of the Log Analytics Workspace."
+ default = "PerGB2018"
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/cosmos/main.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/cosmos/main.tf
new file mode 100644
index 0000000..a1be6ba
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/cosmos/main.tf
@@ -0,0 +1,151 @@
+resource "azurerm_cosmosdb_account" "account" {
+ name = var.account_name
+ location = var.location
+ resource_group_name = var.resource_group_name
+ offer_type = "Standard"
+ kind = "GlobalDocumentDB"
+ enable_automatic_failover = true
+
+
+ consistency_policy {
+ consistency_level = "Session"
+ max_interval_in_seconds = 400
+ }
+
+ geo_location {
+ location = var.location
+ failover_priority = 0
+ }
+}
+
+resource "azurerm_cosmosdb_sql_database" "db" {
+ name = var.cosmosdb_database_name
+ resource_group_name = azurerm_cosmosdb_account.account.resource_group_name
+ account_name = azurerm_cosmosdb_account.account.name
+}
+
+resource "azurerm_cosmosdb_sql_container" "valid_container" {
+ name = var.cosmosdb_valid_container_name
+ resource_group_name = azurerm_cosmosdb_account.account.resource_group_name
+ account_name = azurerm_cosmosdb_account.account.name
+ database_name = azurerm_cosmosdb_sql_database.db.name
+ partition_key_path = "/id"
+}
+
+resource "azurerm_cosmosdb_sql_container" "invalid_container" {
+ name = var.cosmosdb_invalid_container_name
+ resource_group_name = azurerm_cosmosdb_account.account.resource_group_name
+ account_name = azurerm_cosmosdb_account.account.name
+ database_name = azurerm_cosmosdb_sql_database.db.name
+ partition_key_path = "/id"
+}
+
+resource "azurerm_cosmosdb_sql_container" "operations_container" {
+ name = var.cosmosdb_operations_container_name
+ resource_group_name = azurerm_cosmosdb_account.account.resource_group_name
+ account_name = azurerm_cosmosdb_account.account.name
+ database_name = azurerm_cosmosdb_sql_database.db.name
+ partition_key_path = "/id"
+}
+
+
+
+resource "azurerm_monitor_diagnostic_setting" "diagnostic_settings" {
+ name = var.cosmos_db_diagnostic_settings_name
+ target_resource_id = azurerm_cosmosdb_account.account.id
+ log_analytics_workspace_id = var.log_analytics_workspace_id
+ log_analytics_destination_type = "AzureDiagnostics"
+
+ /*
+ category groups are still not allowed so we need to set all fields one by one
+ reference: https://github.com/hashicorp/terraform-provider-azurerm/issues/17349
+ supported log categories per resource can be found here:
+ https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/resource-logs-categories
+ */
+
+ log {
+ category = "DataPlaneRequests"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+ log {
+ category = "QueryRuntimeStatistics"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+ log {
+ category = "PartitionKeyStatistics"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+ log {
+ category = "PartitionKeyRUConsumption"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+ log {
+ category = "ControlPlaneRequests"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+ log {
+ category = "CassandraRequests"
+ enabled = false
+
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+ log {
+ category = "GremlinRequests"
+ enabled = false
+
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+ log {
+ category = "MongoRequests"
+ enabled = false
+
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+ log {
+ category = "TableApiRequests"
+ enabled = false
+
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+
+ metric {
+ category = "Requests"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/cosmos/outputs.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/cosmos/outputs.tf
new file mode 100644
index 0000000..38b080c
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/cosmos/outputs.tf
@@ -0,0 +1,17 @@
+output "name" {
+ value = azurerm_cosmosdb_account.account.name
+}
+
+output "cosmosdb_id" {
+ value = azurerm_cosmosdb_account.account.id
+}
+
+output "cosmosdb_endpoint" {
+ value = azurerm_cosmosdb_account.account.endpoint
+}
+
+output "cosmosdb_key" {
+ value = azurerm_cosmosdb_account.account.primary_key
+ sensitive = true
+}
+
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/cosmos/variables.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/cosmos/variables.tf
new file mode 100644
index 0000000..3704ed2
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/cosmos/variables.tf
@@ -0,0 +1,41 @@
+variable "account_name" {
+ description = "CosmosDB account name"
+}
+
+variable "location" {
+ type = string
+ description = "The Azure region in which CosmosDB should be provisioned"
+}
+
+variable "resource_group_name" {
+ type = string
+ description = "The Azure Resource Group where the CosmosDB should be provisioned"
+}
+
+variable "cosmosdb_database_name" {
+ type = string
+ description = "Name for the Cosmos DB SQL database"
+}
+
+variable "cosmosdb_valid_container_name" {
+ description = "Name for the Cosmos DB SQL container that stores valid cargo"
+}
+
+variable "cosmosdb_invalid_container_name" {
+ description = "Name for the Cosmos DB SQL container that stores invalid cargo"
+}
+
+variable "cosmosdb_operations_container_name" {
+ description = "Name for the Cosmos DB SQL container that stores operations"
+}
+
+variable "cosmos_db_diagnostic_settings_name" {
+ type = string
+ description = "Name for the diagnostic settings"
+ default = "cosmosDbDiagnostics"
+}
+
+variable "log_analytics_workspace_id" {
+ type = string
+ description = "Id for the targeted log analytics workspace"
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/keyvault/main.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/keyvault/main.tf
new file mode 100644
index 0000000..44b3734
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/keyvault/main.tf
@@ -0,0 +1,97 @@
+data "azurerm_client_config" "current_config" {}
+
+resource "azurerm_key_vault" "akv" {
+ name = var.kev_vault_name
+ location = var.location
+ resource_group_name = var.resource_group_name
+ tenant_id = data.azurerm_client_config.current_config.tenant_id
+ sku_name = "standard"
+}
+
+resource "azurerm_key_vault_access_policy" "admin" {
+ key_vault_id = azurerm_key_vault.akv.id
+ tenant_id = data.azurerm_client_config.current_config.tenant_id
+ object_id = data.azurerm_client_config.current_config.object_id
+
+ key_permissions = [
+ "Create",
+ "Get",
+ "List",
+ "Delete"
+ ]
+
+ secret_permissions = [
+ "List",
+ "Set",
+ "Get",
+ "Delete",
+ "Purge",
+ "Recover",
+ "Backup",
+ "Restore"
+ ]
+}
+
+resource "azurerm_key_vault_access_policy" "aks" {
+ key_vault_id = azurerm_key_vault.akv.id
+ tenant_id = data.azurerm_client_config.current_config.tenant_id
+ object_id = var.aks_key_vault_secret_provider_object_id
+
+ secret_permissions = [
+ "Get"
+ ]
+}
+
+resource "azurerm_key_vault_secret" "akvSecret" {
+ for_each = var.key_vault_secrets
+
+ name = each.key
+ value = each.value
+ key_vault_id = azurerm_key_vault.akv.id
+ content_type = "text/plain"
+ expiration_date = var.secrets_expiration_date
+
+ # explicitly depend on access policy so destroy works
+ depends_on = [
+ azurerm_key_vault_access_policy.admin
+ ]
+}
+
+resource "azurerm_monitor_diagnostic_setting" "diagnostic_settings" {
+ name = var.key_vault_diagnostic_settings_name
+ target_resource_id = azurerm_key_vault.akv.id
+ log_analytics_workspace_id = var.log_analytics_workspace_id
+
+ /*
+ category groups are still not allowed so we need to set all fields one by one
+ reference: https://github.com/hashicorp/terraform-provider-azurerm/issues/17349
+ supported log categories per resource can be found here:
+ https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/resource-logs-categories
+ */
+
+ log {
+ category = "AuditEvent"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+ log {
+ category = "AzurePolicyEvaluationDetails"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+
+ metric {
+ category = "AllMetrics"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/keyvault/outputs.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/keyvault/outputs.tf
new file mode 100644
index 0000000..c655101
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/keyvault/outputs.tf
@@ -0,0 +1,7 @@
+output "kv_name" {
+ value = azurerm_key_vault.akv.name
+}
+
+output "kv_id" {
+ value = azurerm_key_vault.akv.id
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/keyvault/variables.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/keyvault/variables.tf
new file mode 100644
index 0000000..82d1e10
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/keyvault/variables.tf
@@ -0,0 +1,41 @@
+variable "kev_vault_name" {
+ type = string
+ description = "Name of the Key Vault instance"
+}
+
+variable "location" {
+ type = string
+ description = "The Azure region in which Key Vault should be provisioned"
+}
+
+variable "resource_group_name" {
+ type = string
+ description = "The Azure Resource Group where the Key Vault should be provisioned"
+}
+
+variable "key_vault_secrets" {
+ type = map(string)
+ description = "Map name/value of secrets for the AKV."
+}
+
+variable "secrets_expiration_date" {
+ type = string
+ description = "Secrets expiration date."
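+  # note: this default date is already in the past; override it with a future expiration date when deploying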
+ default = "2022-12-30T20:00:00Z"
+}
+
+variable "key_vault_diagnostic_settings_name" {
+ type = string
+ description = "Name for the diagnostic settings"
+ default = "keyVaultDiagnostics"
+}
+
+variable "log_analytics_workspace_id" {
+ type = string
+ description = "Id for the targeted log analytics workspace"
+}
+
+variable "aks_key_vault_secret_provider_object_id" {
+ type = string
+ description = "The Object ID of the user-defined Managed Identity used by the AKS Secret Provider"
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/service_bus/main.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/service_bus/main.tf
new file mode 100644
index 0000000..cab4c85
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/service_bus/main.tf
@@ -0,0 +1,103 @@
+resource "azurerm_servicebus_namespace" "bus_namespace" {
+ name = var.services_bus_namespace_name
+ location = var.location
+ resource_group_name = var.resource_group_name
+ capacity = var.service_bus_capacity
+ sku = var.service_bus_sku
+}
+
+resource "azurerm_servicebus_queue" "bus_queue1" {
+ name = var.service_bus_queue1_name
+ namespace_id = azurerm_servicebus_namespace.bus_namespace.id
+}
+
+resource "azurerm_servicebus_queue" "bus_queue2" {
+ name = var.service_bus_queue2_name
+ namespace_id = azurerm_servicebus_namespace.bus_namespace.id
+}
+
+resource "azurerm_servicebus_topic" "validation_topic" {
+ name = var.service_bus_topic_name
+ namespace_id = azurerm_servicebus_namespace.bus_namespace.id
+}
+
+resource "azurerm_servicebus_subscription" "valid_subscription" {
+ name = var.service_bus_valid_subscription
+ topic_id = azurerm_servicebus_topic.validation_topic.id
+ max_delivery_count = 1
+}
+
+resource "azurerm_servicebus_subscription" "invalid_subscription" {
+ name = var.service_bus_invalid_subscription
+ topic_id = azurerm_servicebus_topic.validation_topic.id
+ max_delivery_count = 1
+}
+
+resource "azurerm_servicebus_subscription_rule" "valid_rule" {
+ name = var.service_bus_valid_rule
+ subscription_id = azurerm_servicebus_subscription.valid_subscription.id
+ filter_type = "SqlFilter"
+ sql_filter = "valid = True"
+}
+
+resource "azurerm_servicebus_subscription_rule" "invalid_rule" {
+ name = var.service_bus_invalid_rule
+ subscription_id = azurerm_servicebus_subscription.invalid_subscription.id
+ filter_type = "SqlFilter"
+ sql_filter = "valid = False"
+}
+
+resource "azurerm_monitor_diagnostic_setting" "diagnostic_settings" {
+ name = var.service_bus_diagnostic_settings_name
+ target_resource_id = azurerm_servicebus_namespace.bus_namespace.id
+ log_analytics_workspace_id = var.log_analytics_workspace_id
+
+ /*
+ category groups are still not allowed so we need to set all fields one by one
+ reference: https://github.com/hashicorp/terraform-provider-azurerm/issues/17349
+ supported log categories per resource can be found here:
+ https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/resource-logs-categories
+ */
+
+ log {
+ category = "OperationalLogs"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+ log {
+ category = "ApplicationMetricsLogs"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+ log {
+ category = "RuntimeAuditLogs"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+ log {
+ category = "VNetAndIPFilteringLogs"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+
+ metric {
+ category = "AllMetrics"
+ enabled = true
+ retention_policy {
+ days = 0
+ enabled = false
+ }
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/service_bus/outputs.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/service_bus/outputs.tf
new file mode 100644
index 0000000..493e693
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/service_bus/outputs.tf
@@ -0,0 +1,12 @@
+output "name" {
+ value = azurerm_servicebus_namespace.bus_namespace.name
+}
+
+output "connection_string" {
+ value = azurerm_servicebus_namespace.bus_namespace.default_primary_connection_string
+ sensitive = true
+}
+
+output "servicebus_namespace_id" {
+ value = azurerm_servicebus_namespace.bus_namespace.id
+}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/service_bus/variables.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/service_bus/variables.tf
new file mode 100644
index 0000000..c74a99a
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/service_bus/variables.tf
@@ -0,0 +1,72 @@
+variable "services_bus_namespace_name" {
+ type = string
+ description = "Name for the service bus namespace"
+}
+
+variable "location" {
+ type = string
+ description = "Location for the service bus namespace"
+}
+
+variable "resource_group_name" {
+ type = string
+ description = "Resource group for the service bus namespace"
+}
+
+variable "service_bus_capacity" {
+ type = number
+ description = "Capacity for the Service Bus namespace"
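+  # must be 0 for the Basic and Standard SKUs; only the Premium SKU accepts a capacity of 1, 2, 4, 8 or 16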
+ default = 0
+}
+
+variable "service_bus_sku" {
+ type = string
+ description = "Sku for the service bus namespace"
+ default = "Standard"
+}
+
+variable "service_bus_queue1_name" {
+ type = string
+ description = "Name for the first service bus queue (ingest)"
+}
+
+variable "service_bus_queue2_name" {
+ type = string
+ description = "Name for the second service bus queue (operations)"
+}
+
+variable "service_bus_topic_name" {
+ type = string
+ description = "Name for the service bus topic"
+}
+
+variable "service_bus_valid_subscription" {
+ type = string
+ description = "Name for the valid subscription"
+}
+
+variable "service_bus_invalid_subscription" {
+ type = string
+  description = "Name for the invalid subscription"
+}
+
+variable "service_bus_valid_rule" {
+ type = string
+ description = "Name for the valid rule"
+}
+
+variable "service_bus_invalid_rule" {
+ type = string
+ description = "Name for the invalid rule"
+}
+
+variable "service_bus_diagnostic_settings_name" {
+ type = string
+ description = "Name for the diagnostic settings"
+ default = "serviceBusDiagnostics"
+}
+
+variable "log_analytics_workspace_id" {
+ type = string
+ description = "Id for the targeted log analytics workspace"
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/workbooks/main.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/workbooks/main.tf
new file mode 100644
index 0000000..7c95160
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/workbooks/main.tf
@@ -0,0 +1,42 @@
+resource "random_uuid" "index_uuid" {
+}
+resource "random_uuid" "observability_uuid" {
+}
+resource "random_uuid" "service_processing_uuid" {
+}
+
+resource "azurerm_application_insights_workbook" "index" {
+ name = random_uuid.index_uuid.result
+ resource_group_name = var.resource_group_name
+ location = var.location
+ display_name = "Index"
+ source_id = lower(var.workspace_id)
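+  # ids passed to the template are URL-encoded where they are embedded in Azure portal deep links inside the workbook JSON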
+ data_json = templatefile(
+ "${path.module}/../../../workbooks/index.json",
+ { app_insights_id = var.app_insights_id, logs_workspace_id = urlencode(var.workspace_id), infrastructure_workbook_id = urlencode(azurerm_application_insights_workbook.infrastructure.id), system_workbook_id = urlencode(azurerm_application_insights_workbook.system_processing.id)}
+ )
+}
+
+resource "azurerm_application_insights_workbook" "infrastructure" {
+ name = random_uuid.observability_uuid.result
+ resource_group_name = var.resource_group_name
+ location = var.location
+ display_name = "Infrastructure"
+ source_id = lower(var.workspace_id)
+ data_json = templatefile(
+ "${path.module}/../../../workbooks/infrastructure.json",
+ { servicebus_namespace_id = var.servicebus_namespace_id, key_vault_id = var.key_vault_id, app_insights_id = var.app_insights_id, app_insights_id_url = urlencode(var.app_insights_id), aks_id = var.aks_id }
+ )
+}
+
+resource "azurerm_application_insights_workbook" "system_processing" {
+ name = random_uuid.service_processing_uuid.result
+ resource_group_name = var.resource_group_name
+ location = var.location
+ display_name = "System Processing"
+ source_id = lower(var.workspace_id)
+ data_json = templatefile(
+ "${path.module}/../../../workbooks/system-processing.json",
+ { app_insights_id = var.app_insights_id, app_insights_id_url = urlencode(var.app_insights_id) }
+ )
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/workbooks/variables.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/workbooks/variables.tf
new file mode 100644
index 0000000..e27467c
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/modules/workbooks/variables.tf
@@ -0,0 +1,34 @@
+variable "workspace_id" {
+ type = string
+  description = "Id for the Log Analytics workspace used as the workbook source"
+}
+
+variable "location" {
+ type = string
+ description = "Location for the Azure Workbook"
+}
+
+variable "resource_group_name" {
+ type = string
+ description = "Resource group for the Azure Workbook"
+}
+
+variable "servicebus_namespace_id" {
+ type = string
+ description = "Id for monitored Service Bus Namespace"
+}
+
+variable "app_insights_id" {
+ type = string
+ description = "Id for Application Insights resource"
+}
+
+variable "key_vault_id" {
+ type = string
+ description = "Id for Key Vault resource"
+}
+
+variable "aks_id" {
+ type = string
+ description = "Id for AKS cluster resource"
+}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/outputs.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/outputs.tf
new file mode 100644
index 0000000..62d00e1
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/outputs.tf
@@ -0,0 +1,36 @@
+output "rg_name" {
+ value = azurerm_resource_group.rg.name
+}
+
+output "insights_name" {
+ value = module.app_insights.name
+}
+
+output "sb_namespace_name" {
+ value = module.service_bus.name
+}
+
+output "cosmosdb_name" {
+ value = module.cosmosdb.name
+}
+
+output "kv_name" {
+ value = module.key_vault.kv_name
+}
+
+output "acr_name" {
+ value = module.acr.acr_name
+}
+
+output "aks_name" {
+ value = module.aks.aks_name
+}
+
+output "aks_key_vault_secret_provider_client_id" {
+ value = module.aks.aks_key_vault_secret_provider_client_id
+ sensitive = true
+}
+
+output "tenant_id" {
+ value = data.azurerm_client_config.current_config.tenant_id
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/provider.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/provider.tf
new file mode 100644
index 0000000..8632878
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/provider.tf
@@ -0,0 +1,28 @@
+provider "azurerm" {
+ features {
+ resource_group {
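+      # allow `terraform destroy` to remove resource groups that still contain resources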
+ prevent_deletion_if_contains_resources = false
+ }
+ }
+}
+
+terraform {
+ required_providers {
+ azuread = {
+ source = "hashicorp/azuread"
+ version = "~> 2.0.0"
+ }
+ azurerm = {
+ source = "hashicorp/azurerm"
+ version = "3.31.0"
+ }
+ azurecaf = {
+ source = "aztfmod/azurecaf"
+ version = "~> 1.2.10"
+ }
+ azapi = {
+ source = "azure/azapi"
+ version = "1.0.0"
+ }
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/sample.tfvars b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/sample.tfvars
new file mode 100644
index 0000000..c24e63b
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/sample.tfvars
@@ -0,0 +1,15 @@
+location = "eastus"
+prefix = "dev"
+unique_username = "myusername"
+cosmosdb_database_name = "cargo"
+cosmosdb_container1_name = "valid-cargo"
+cosmosdb_container2_name = "invalid-cargo"
+cosmosdb_container3_name = "operations"
+service_bus_queue1_name = "ingest-cargo"
+service_bus_queue2_name = "operation-state"
+service_bus_topic_name = "validated-cargo"
+service_bus_subscription1_name = "valid-cargo"
+service_bus_subscription2_name = "invalid-cargo"
+service_bus_topic_rule1_name = "valid"
+service_bus_topic_rule2_name = "invalid"
+notification_email_address = "alias@microsoft.com"
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/variables.tf b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/variables.tf
new file mode 100644
index 0000000..320b41b
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/terraform/variables.tf
@@ -0,0 +1,85 @@
+variable "location" {
+ type = string
+ description = "Specifies the supported Azure location (region) where the resources will be deployed"
+}
+
+variable "prefix" {
+ type = string
+ description = "Prefix for resource names"
+}
+
+variable "unique_username" {
+ type = string
+  description = "Identifies who created the resources; the value is reflected in every deployed resource"
+}
+
+variable "cosmosdb_database_name" {
+ type = string
+ description = "Name for the Cosmos DB SQL database"
+}
+
+variable "cosmosdb_container1_name" {
+ type = string
+ description = "Name for the first Cosmos DB SQL container"
+}
+
+variable "cosmosdb_container2_name" {
+ type = string
+ description = "Name for the second Cosmos DB SQL container"
+}
+
+variable "cosmosdb_container3_name" {
+ description = "Name for the third Cosmos DB SQL container"
+}
+
+variable "service_bus_queue1_name" {
+ type = string
+ description = "Name for the first service bus queue (ingest)"
+}
+
+variable "service_bus_queue2_name" {
+ type = string
+ description = "Name for the second service bus queue (operations)"
+}
+
+variable "service_bus_topic_name" {
+ type = string
+ description = "Name for the Service Bus Topic"
+}
+
+variable "service_bus_subscription1_name" {
+ type = string
+ description = "Name for the first Service Bus Subscription"
+}
+
+variable "service_bus_subscription2_name" {
+ type = string
+ description = "Name for the second Service Bus Subscription"
+}
+
+variable "service_bus_topic_rule1_name" {
+ type = string
+  description = "Name for the first Service Bus subscription filter rule"
+}
+
+variable "service_bus_topic_rule2_name" {
+ type = string
+  description = "Name for the second Service Bus subscription filter rule"
+}
+
+variable "aks_aad_auth" {
+ type = bool
+ description = "Configure Azure Active Directory authentication for Kubernetes cluster"
+ default = false
+}
+
+variable "aks_aad_admin_user_object_id" {
+ type = string
+ description = "Object ID of the AAD user to be added as an admin to the AKS cluster"
+ default = ""
+}
+
+variable "notification_email_address" {
+ type = string
+ description = "Email address for alert notifications"
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/workbooks/index.json b/accelerators/aks-sb-azmonitor-microservices/infrastructure/workbooks/index.json
new file mode 100644
index 0000000..fb94988
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/workbooks/index.json
@@ -0,0 +1,218 @@
+{
+ "version": "Notebook/1.0",
+ "items": [
+ {
+ "type": 1,
+ "content": {
+        "json": "# Observability and Monitoring Main Dashboard\nThis workbook provides a consolidated view of microservices observability.\n\nIt contains two main sections. The first displays the exceptions thrown by any of the components in the system.\n\nThe second links to two further workbooks, focused on infrastructure and on the system's behaviour, that give a deeper insight into the collected data."
+ },
+ "name": "mainTitleText"
+ },
+ {
+ "type": 12,
+ "content": {
+ "version": "NotebookGroup/1.0",
+ "groupType": "editable",
+ "items": [
+ {
+ "type": 1,
+ "content": {
+ "json": "## Exceptions"
+ },
+ "name": "exceptionsText"
+ },
+ {
+ "type": 9,
+ "content": {
+ "version": "KqlParameterItem/1.0",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ],
+ "parameters": [
+ {
+ "id": "899fa4be-a565-4534-b537-6070e46fd44e",
+ "version": "KqlParameterItem/1.0",
+ "name": "Show",
+ "type": 2,
+ "isRequired": true,
+ "query": "datatable(x:string, y:string)[\r\n\"['New Failure Rate (%)'], ['Existing Failure Rate (%)']\", 'New and Existing Failures',\r\n\"['New Failure Rate (%)']\", 'Only New Failures',\r\n\"['Existing Failure Rate (%)']\", 'Only Existing Failures',\r\n]",
+ "typeSettings": {
+ "additionalResourceOptions": [],
+ "showDefault": false
+ },
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces",
+ "value": "['New Failure Rate (%)']"
+ },
+ {
+ "id": "38721383-ec13-430d-8229-997332f57352",
+ "version": "KqlParameterItem/1.0",
+ "name": "OverTimeRange",
+ "type": 4,
+ "isRequired": true,
+ "typeSettings": {
+ "selectableValues": [
+ {
+ "durationMs": 1800000
+ },
+ {
+ "durationMs": 3600000
+ },
+ {
+ "durationMs": 14400000
+ },
+ {
+ "durationMs": 43200000
+ },
+ {
+ "durationMs": 86400000
+ },
+ {
+ "durationMs": 259200000
+ },
+ {
+ "durationMs": 604800000
+ }
+ ]
+ },
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "value": {
+ "durationMs": 43200000
+ }
+ },
+ {
+ "id": "8dc31735-b2c2-40a9-94a6-2b73f69a9303",
+ "version": "KqlParameterItem/1.0",
+ "name": "UseComparisonTimeRangeOf",
+ "type": 1,
+ "isRequired": true,
+                  "query": "let t = {OverTimeRange:seconds};\r\nlet w = case(t <= 86400, '7d', t <= 259200, '14d', t <= 1209600, '28d', '60d');\r\nrange i from 1 to 1 step 1\r\n| project x = w",
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces"
+ },
+ {
+ "id": "3d002cfd-8dca-4015-9f77-26b62fcc2564",
+ "version": "KqlParameterItem/1.0",
+ "name": "ProblemFilter",
+ "type": 2,
+ "isRequired": true,
+ "multiSelect": true,
+ "quote": "'",
+ "delimiter": ",",
+ "query": "exceptions\r\n| where timestamp {OverTimeRange}\r\n| summarize Count = count() by problemId\r\n| order by Count desc\r\n| project v = problemId, t = problemId, s=false\r\n| union (datatable(v:string, t:string, s:boolean)[\r\n'*', 'All Exceptions', true\r\n])\r\n",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ],
+ "typeSettings": {
+ "additionalResourceOptions": [],
+ "showDefault": false
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components"
+ },
+ {
+ "id": "a4eb0f16-861b-4587-ad9a-774db54a0cc2",
+ "version": "KqlParameterItem/1.0",
+ "name": "Source",
+ "type": 2,
+ "isRequired": true,
+ "query": "datatable(x:string, y:string)[\r\n'1 == 1', 'Server and Client Exceptions',\r\n'client_Type <> \"Browser\"', 'Only Server Exceptions',\r\n'client_Type == \"Browser\"', 'Only Client Exceptions',\r\n]",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ],
+ "typeSettings": {
+ "additionalResourceOptions": []
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "value": "1 == 1"
+ }
+ ],
+ "style": "pills",
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components"
+ },
+ "name": "displayExceptionsParameters"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "let startTime = {OverTimeRange:start};\r\nlet grain = {OverTimeRange:grain};\r\nlet bigWindowTimeRange = {UseComparisonTimeRangeOf};\r\nlet bigWindow = exceptions\r\n| where timestamp >= ago(bigWindowTimeRange) and timestamp < bin(startTime, grain)\r\n| where {Source}\r\n| where problemId in ({ProblemFilter}) or '*' in ({ProblemFilter})\r\n| summarize makeset(problemId, 10000);\r\nexceptions\r\n| where timestamp {OverTimeRange}\r\n| where {Source}\r\n| summarize Count = count(), Users = dcount(user_Id) by problemId\r\n| where problemId in ({ProblemFilter}) or '*' in ({ProblemFilter})\r\n| extend IsNew = iff(problemId !in (bigWindow), true, false)\r\n| where \"{Show}\" == \"['New Failure Rate (%)'], ['Existing Failure Rate (%)']\" or IsNew\r\n| order by Users desc, Count desc, problemId asc\r\n| project Problem = iff(IsNew, strcat('🔸 ', problemId), strcat('🔹 ', problemId)), ['Exception Count'] = Count, ['Users Affected'] = Users",
+ "size": 0,
+ "showAnalytics": true,
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ],
+ "gridSettings": {
+ "formatters": [
+ {
+ "columnMatch": "Exception Count",
+ "formatter": 4,
+ "formatOptions": {
+ "min": 0,
+ "palette": "yellow"
+ }
+ },
+ {
+ "columnMatch": "Users Affected",
+ "formatter": 4,
+ "formatOptions": {
+ "min": 0,
+ "palette": "green"
+ }
+ }
+ ]
+ }
+ },
+ "name": "servicesExceptionsQuery"
+ }
+ ]
+ },
+ "name": "exceptionsGroup"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "## Performance"
+ },
+ "name": "performanceTitleText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "let cpu = performanceCounters\r\n| where name == \"% Processor Time Normalized\"\r\n| summarize CPU=avg(value) by cloud_RoleName;\r\nlet ioRate = performanceCounters\r\n| where name == \"IO Data Bytes/sec\"\r\n| summarize ioRate=avg(value) by cloud_RoleName;\r\nlet memory = performanceCounters\r\n| where name == \"Available Bytes\"\r\n| summarize Memory=avg(value) by cloud_RoleName;\r\nlet requests = requests\r\n| summarize req_Duration=avg(duration), requestsCount = count() by cloud_RoleName;\r\nlet average = dependencies\r\n| summarize average = avg(duration), dependenciesCount = count() by cloud_RoleName;\r\naverage\r\n| join kind=fullouter requests on cloud_RoleName\r\n| join kind=fullouter memory on cloud_RoleName \r\n| join kind=fullouter ioRate on cloud_RoleName\r\n| join kind=fullouter cpu on cloud_RoleName\r\n| project Service_Name=cloud_RoleName, CPU=iff(isnull(CPU), \"N/A\", strcat(bin(CPU, 0.01), \" %\")), Memory=iff(isnull(Memory), \"N/A\", format_bytes(Memory, 2, \"GB\")), IO_Rate=iff(isnull(ioRate), \"N/A\", strcat(bin(ioRate, 0.01), \" B/s\")), Avg_Dependency=iff(isnull(average), \"N/A\", strcat(bin(average, 0.01), \" ms\")), Dependencies_Count=iff(isnull(dependenciesCount), \"N/A\", tostring(dependenciesCount)), Req_Duration=iff(isnull(req_Duration), \"N/A\", strcat(bin(req_Duration, 0.01), \" ms\")), Requests_Count=iff(isnull(requestsCount), \"N/A\", tostring(requestsCount))",
+ "size": 0,
+ "showAnalytics": true,
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ]
+ },
+ "name": "servicesMonitoringQuery"
+ },
+ {
+ "type": 1,
+ "content": {
+        "json": "## Additional workbooks\r\n\r\nTwo additional workbooks track information about the entire system.\r\n\r\n|Workbooks|Description|Link|\r\n|---------|------------|----|\r\n|Infrastructure|Data related to infrastructure|[Link](https://portal.azure.com/#blade/AppInsightsExtension/UsageNotebookBlade/ComponentId/${logs_workspace_id}/ConfigurationId/${infrastructure_workbook_id}/Type/workbook/WorkbookTemplateName/Infrastructure)|\r\n|System|Data related to system functionality|[Link](https://portal.azure.com/#blade/AppInsightsExtension/UsageNotebookBlade/ComponentId/${logs_workspace_id}/ConfigurationId/${system_workbook_id}/Type/workbook/WorkbookTemplateName/System%20Processing)|"
+ },
+ "name": "workbooksLinksText"
+ }
+ ],
+ "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
+ }
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/workbooks/infrastructure.json b/accelerators/aks-sb-azmonitor-microservices/infrastructure/workbooks/infrastructure.json
new file mode 100644
index 0000000..0662f4a
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/workbooks/infrastructure.json
@@ -0,0 +1,477 @@
+{
+ "version": "Notebook/1.0",
+ "items": [
+ {
+ "type": 1,
+ "content": {
+        "json": "# Infrastructure Dashboard\nThis workbook provides a consolidated view of the system infrastructure."
+ },
+ "name": "mainTitleText"
+ },
+ {
+ "type": 12,
+ "content": {
+ "version": "NotebookGroup/1.0",
+ "groupType": "editable",
+ "items": [
+ {
+ "type": 1,
+ "content": {
+ "json": "## Service Bus Telemetry\r\n\r\nThis section displays telemetry obtained from Service Bus operations."
+ },
+ "name": "serviceBusTitleText"
+ },
+ {
+ "type": 1,
+ "content": {
+              "json": "### Service Bus completed operations\r\nThese tiles display the following:\r\n* The fastest time taken to complete an operation.\r\n* The average time taken to complete an operation.\r\n* The slowest time taken to complete an operation.\r\n\r\nAll values are displayed in milliseconds."
+ },
+ "name": "serviceBusDescriptionText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "dependencies\r\n| where name == \"ServiceBus.complete\"\r\n| summarize Result = avg(duration), Name = \"Average\"\r\n| union (dependencies\r\n| where name == \"ServiceBus.complete\"\r\n| top 1 by duration asc \r\n| summarize count() by Result = duration, Name = \"Fastest\")\r\n| union ( dependencies\r\n| where name == \"ServiceBus.complete\"\r\n| top 1 by duration desc \r\n| summarize count() by Result = duration, Name = \"Slowest\")",
+ "size": 0,
+ "showAnalytics": true,
+ "title": "Statistics of service bus completed operations (ms)",
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ],
+ "visualization": "tiles",
+ "tileSettings": {
+ "titleContent": {
+ "columnMatch": "Name",
+ "formatter": 1
+ },
+ "leftContent": {
+ "columnMatch": "Result",
+ "formatter": 12,
+ "formatOptions": {
+ "palette": "auto"
+ },
+ "numberFormat": {
+ "unit": 17,
+ "options": {
+ "style": "decimal",
+ "maximumFractionDigits": 2,
+ "maximumSignificantDigits": 3
+ }
+ }
+ },
+ "showBorder": false,
+ "sortOrderField": 1
+ },
+ "graphSettings": {
+ "type": 0,
+ "topContent": {
+ "columnMatch": "id",
+ "formatter": 1
+ },
+ "centerContent": {
+ "columnMatch": "duration",
+ "formatter": 1,
+ "numberFormat": {
+ "unit": 17,
+ "options": {
+ "maximumSignificantDigits": 3,
+ "maximumFractionDigits": 2
+ }
+ }
+ },
+ "nodeIdField": "duration",
+ "sourceIdField": "timestamp",
+ "targetIdField": "name",
+ "graphOrientation": 3,
+ "showOrientationToggles": false,
+ "nodeSize": null,
+ "staticNodeSize": 100,
+ "colorSettings": null,
+ "hivesMargin": 5
+ }
+ },
+ "customWidth": "50",
+ "name": "serviceBusCompletedTimesQuery",
+ "styleSettings": {
+ "showBorder": true
+ }
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "### Count of Messages\r\n\r\nThis chart displays:\r\n* The count of active messages in a Queue/Topic\r\n* The count of delivered messages in a Queue/Topic\r\n* The count of dead-lettered messages in a Queue/Topic"
+ },
+ "name": "serviceBusMessageCountText"
+ },
+ {
+ "type": 10,
+ "content": {
+ "chartId": "workbook0f9894a2-554d-406d-b03e-c87fe7b37293",
+ "version": "MetricsItem/2.0",
+ "size": 0,
+ "showAnalytics": true,
+ "chartType": 3,
+ "resourceType": "microsoft.servicebus/namespaces",
+ "metricScope": 0,
+ "resourceIds": [
+ "${servicebus_namespace_id}"
+ ],
+ "timeContext": {
+ "durationMs": 3600000
+ },
+ "metrics": [
+ {
+ "namespace": "microsoft.servicebus/namespaces",
+ "metric": "microsoft.servicebus/namespaces--ActiveMessages",
+ "aggregation": 4,
+ "splitBy": null
+ },
+ {
+ "namespace": "microsoft.servicebus/namespaces",
+ "metric": "microsoft.servicebus/namespaces--Messages",
+ "aggregation": 4
+ },
+ {
+ "namespace": "microsoft.servicebus/namespaces",
+ "metric": "microsoft.servicebus/namespaces--DeadletteredMessages",
+ "aggregation": 4
+ }
+ ],
+ "gridSettings": {
+ "rowLimit": 10000
+ }
+ },
+ "name": "serviceBusMessagingMetric"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "### Throttled Requests\r\n\r\nThis chart displays the number of throttled requests in Service Bus."
+ },
+ "name": "serviceBusThrottledText"
+ },
+ {
+ "type": 10,
+ "content": {
+ "chartId": "workbooke8c22d13-3c2a-4fc8-8722-0180737c45f4",
+ "version": "MetricsItem/2.0",
+ "size": 0,
+ "showAnalytics": true,
+ "chartType": 3,
+ "color": "blueDark",
+ "resourceType": "microsoft.servicebus/namespaces",
+ "metricScope": 0,
+ "resourceIds": [
+ "${servicebus_namespace_id}"
+ ],
+ "timeContext": {
+ "durationMs": 3600000
+ },
+ "metrics": [
+ {
+ "namespace": "microsoft.servicebus/namespaces",
+ "metric": "microsoft.servicebus/namespaces--ThrottledRequests",
+ "aggregation": 1,
+ "splitBy": null
+ }
+ ],
+ "gridSettings": {
+ "rowLimit": 10000
+ }
+ },
+ "name": "serviceBusThrottledMetric"
+ }
+ ]
+ },
+ "name": "serviceBusTelemetryGroup"
+ },
+ {
+ "type": 12,
+ "content": {
+ "version": "NotebookGroup/1.0",
+ "groupType": "editable",
+ "items": [
+ {
+ "type": 1,
+ "content": {
+ "json": "## Cosmos DB Telemetry\r\n\r\nThis section displays telemetry obtained from Cosmos DB operations."
+ },
+ "name": "cosmosDbTitleText"
+ },
+ {
+ "type": 1,
+ "content": {
+              "json": "### Average time for reads from Cosmos DB\r\n\r\nThis chart displays the average time per read request from Cosmos DB."
+ },
+ "name": "cosmosDbDescriptionText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "dependencies \r\n| where target == \"readDatabase.cargo\" \r\n| summarize Average = avg(duration) by bin(timestamp, 10m)\r\n| render timechart",
+ "size": 0,
+ "showAnalytics": true,
+ "aggregation": 3,
+ "color": "green",
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ],
+ "visualization": "areachart"
+ },
+ "name": "latencyOfReadsCosmosDbQuery"
+ },
+ {
+ "type": 1,
+ "content": {
+            "json": "### Number of valid, invalid and operations writes\r\n\r\nThis chart displays the total number of writes to the valid-cargo, invalid-cargo and operations containers in Cosmos DB."
+ },
+ "name": "cosmosDbOperationsText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "dependencies\r\n| summarize dependencies = count() by name\r\n| where name == \"upsertItem.operations\" or name == \"upsertItem.invalid-cargo\" or name == \"upsertItem.valid-cargo\"",
+ "size": 0,
+ "showAnalytics": true,
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ],
+ "visualization": "piechart"
+ },
+ "name": "cosmosDbOperationsQuery"
+ }
+ ]
+ },
+ "name": "cosmosDbTelemetryGroup"
+ },
+ {
+ "type": 12,
+ "content": {
+ "version": "NotebookGroup/1.0",
+ "groupType": "editable",
+ "items": [
+ {
+ "type": 1,
+ "content": {
+            "json": "## Key Vault\r\n\r\n### Key Vault Saturation\r\n\r\nThis metric displays the current saturation percentage of Key Vault."
+ },
+ "name": "keyVaultTitleText"
+ },
+ {
+ "type": 10,
+ "content": {
+ "chartId": "workbook1dfaaa15-6964-4398-a9ab-4849c2e07653",
+ "version": "MetricsItem/2.0",
+ "size": 0,
+ "showAnalytics": true,
+ "chartType": 3,
+ "color": "turquoise",
+ "resourceType": "microsoft.keyvault/vaults",
+ "metricScope": 0,
+ "resourceIds": [
+ "${key_vault_id}"
+ ],
+ "timeContext": {
+ "durationMs": 3600000
+ },
+ "metrics": [
+ {
+ "namespace": "microsoft.keyvault/vaults",
+ "metric": "microsoft.keyvault/vaults--SaturationShoebox",
+ "aggregation": 4,
+ "splitBy": null
+ }
+ ],
+ "gridSettings": {
+ "rowLimit": 10000
+ }
+ },
+ "name": "keyVaultSaturationMetric"
+ },
+ {
+ "type": 1,
+ "content": {
+            "json": "### Key Vault Latency\r\n\r\nThis metric displays the latency of Key Vault operations, shown as an average time in milliseconds."
+ },
+ "name": "keyVaultLatencyText"
+ },
+ {
+ "type": 10,
+ "content": {
+ "chartId": "workbook7000b67b-e89a-4481-99d3-779513f70214",
+ "version": "MetricsItem/2.0",
+ "size": 0,
+ "showAnalytics": true,
+ "chartType": 3,
+ "color": "turquoise",
+ "resourceType": "microsoft.keyvault/vaults",
+ "metricScope": 0,
+ "resourceIds": [
+ "${key_vault_id}"
+ ],
+ "timeContext": {
+ "durationMs": 3600000
+ },
+ "metrics": [
+ {
+ "namespace": "microsoft.keyvault/vaults",
+ "metric": "microsoft.keyvault/vaults--ServiceApiLatency",
+ "aggregation": 4,
+ "splitBy": null
+ }
+ ],
+ "gridSettings": {
+ "rowLimit": 10000
+ }
+ },
+ "name": "keyVaultLatencyMetric"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "### Key Vault Results (Count)\r\n\r\nThis metric displays the count of Key Vault API Results."
+ },
+ "name": "keyVaultResultsText"
+ },
+ {
+ "type": 10,
+ "content": {
+ "chartId": "workbook93558986-b83b-4a80-8cbf-1d588fc01058",
+ "version": "MetricsItem/2.0",
+ "size": 0,
+ "showAnalytics": true,
+ "chartType": 3,
+ "color": "turquoise",
+ "resourceType": "microsoft.keyvault/vaults",
+ "metricScope": 0,
+ "resourceIds": [
+ "${key_vault_id}"
+ ],
+ "timeContext": {
+ "durationMs": 3600000
+ },
+ "metrics": [
+ {
+ "namespace": "microsoft.keyvault/vaults",
+ "metric": "microsoft.keyvault/vaults--ServiceApiResult",
+ "aggregation": 7,
+ "splitBy": null
+ }
+ ],
+ "gridSettings": {
+ "rowLimit": 10000
+ }
+ },
+ "name": "keyVaultResultsMetric"
+ }
+ ]
+ },
+ "name": "keyVaultTelemetryGroup"
+ },
+ {
+ "type": 12,
+ "content": {
+ "version": "NotebookGroup/1.0",
+ "groupType": "editable",
+ "items": [
+ {
+ "type": 1,
+ "content": {
+            "json": "## Kubernetes\r\n\r\n### CPU Percentage\r\n\r\nThis chart displays the maximum CPU usage percentage of the cluster nodes."
+ },
+ "name": "aksTitleText"
+ },
+ {
+ "type": 10,
+ "content": {
+ "chartId": "workbook171b383f-5043-41dd-9154-a1fa92367891",
+ "version": "MetricsItem/2.0",
+ "size": 0,
+ "showAnalytics": true,
+ "chartType": 3,
+ "color": "pink",
+ "resourceType": "microsoft.containerservice/managedclusters",
+ "metricScope": 0,
+ "resourceIds": [
+ "${aks_id}"
+ ],
+ "timeContext": {
+ "durationMs": 3600000
+ },
+ "metrics": [
+ {
+ "namespace": "microsoft.containerservice/managedclusters",
+ "metric": "microsoft.containerservice/managedclusters-Nodes (PREVIEW)-node_cpu_usage_percentage",
+ "aggregation": 3,
+ "splitBy": null
+ }
+ ],
+ "gridSettings": {
+ "rowLimit": 10000
+ }
+ },
+ "name": "aksCpuMetric"
+ },
+ {
+ "type": 1,
+ "content": {
+            "json": "### Requests\r\n\r\nThis chart shows the average number of in-flight requests to the cluster API server."
+ },
+ "name": "aksRequestsText"
+ },
+ {
+ "type": 10,
+ "content": {
+ "chartId": "workbook2e1c3664-7b39-433d-81b2-863ab1b9b307",
+ "version": "MetricsItem/2.0",
+ "size": 0,
+ "showAnalytics": true,
+ "chartType": 3,
+ "color": "pink",
+ "resourceType": "microsoft.containerservice/managedclusters",
+ "metricScope": 0,
+ "resourceIds": [
+ "${aks_id}"
+ ],
+ "timeContext": {
+ "durationMs": 3600000
+ },
+ "metrics": [
+ {
+ "namespace": "microsoft.containerservice/managedclusters",
+ "metric": "microsoft.containerservice/managedclusters-API Server (PREVIEW)-apiserver_current_inflight_requests",
+ "aggregation": 4,
+ "splitBy": null
+ }
+ ],
+ "gridSettings": {
+ "rowLimit": 10000
+ }
+ },
+ "name": "aksRequestsMetric"
+ }
+ ]
+ },
+ "name": "aksTelemetryGroup"
+ }
+ ],
+ "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
+}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/infrastructure/workbooks/system-processing.json b/accelerators/aks-sb-azmonitor-microservices/infrastructure/workbooks/system-processing.json
new file mode 100644
index 0000000..5990268
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/infrastructure/workbooks/system-processing.json
@@ -0,0 +1,492 @@
+{
+ "version": "Notebook/1.0",
+ "items": [
+ {
+ "type": 1,
+ "content": {
+        "json": "# System Processing Dashboard\n\nThis workbook shows data from system operations across services."
+ },
+ "name": "mainTitleText"
+ },
+ {
+ "type": 12,
+ "content": {
+ "version": "NotebookGroup/1.0",
+ "groupType": "editable",
+ "items": [
+ {
+ "type": 1,
+ "content": {
+ "json": "## Microservices"
+ },
+ "name": "microservicesTitleText"
+ },
+ {
+ "type": 1,
+ "content": {
+            "json": "### Statistics for endpoint requests\r\n\r\nThis chart displays different measures of time per request. The first column is the mean per endpoint, the second column is the median, columns 3, 4 and 5 are the p80, p95 and p99 percentiles, and the last column displays the total number of requests per endpoint."
+ },
+ "name": "endpointsStatisticsText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "requests\r\n| summarize Mean = avg(duration), (Median, p80, p95, p99) = percentiles(duration, 50, 80, 95, 99), Requests = count() by name\r\n| order by Requests desc",
+ "size": 0,
+ "showAnalytics": true,
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ],
+ "gridSettings": {
+ "formatters": [
+ {
+ "columnMatch": "Mean",
+ "formatter": 8,
+ "formatOptions": {
+ "palette": "orange"
+ }
+ },
+ {
+ "columnMatch": "Median",
+ "formatter": 8,
+ "formatOptions": {
+ "palette": "yellow"
+ }
+ },
+ {
+ "columnMatch": "p80",
+ "formatter": 8,
+ "formatOptions": {
+ "palette": "green"
+ }
+ },
+ {
+ "columnMatch": "p95",
+ "formatter": 8,
+ "formatOptions": {
+ "palette": "blue"
+ }
+ },
+ {
+ "columnMatch": "p99",
+ "formatter": 8,
+ "formatOptions": {
+ "palette": "purple"
+ }
+ },
+ {
+ "columnMatch": "Requests",
+ "formatter": 8,
+ "formatOptions": {
+ "palette": "pink"
+ }
+ }
+ ]
+ }
+ },
+ "name": "endpointsRequestsStatisticsQuery"
+ },
+ {
+ "type": 1,
+ "content": {
+            "json": "### Total requests to endpoints\r\n\r\nThis chart extracts the last column from the previous chart to give that metric more focus."
+ },
+ "name": "endpointsRequestsText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "let dataset=requests\r\n| where client_Type != \"Browser\";\r\n\r\ndataset\r\n| summarize\r\n Count=sum(itemCount),\r\n Average=sum(itemCount * duration) / sum(itemCount) \r\n| project operation_Name=\"Overall\", Count,Average\r\n| union(dataset\r\n | summarize\r\n Count=sum(itemCount),\r\n Average=sum(itemCount * duration) / sum(itemCount) \r\n by operation_Name\r\n | sort by Count desc, Average desc\r\n )",
+ "size": 0,
+ "showAnalytics": true,
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ],
+ "gridSettings": {
+ "formatters": [
+ {
+ "columnMatch": "Average",
+ "formatter": 8,
+ "formatOptions": {
+ "palette": "turquoise"
+ }
+ },
+ {
+ "columnMatch": "Count",
+ "formatter": 8,
+ "formatOptions": {
+ "palette": "orange"
+ }
+ }
+ ]
+ }
+ },
+ "name": "endpointsRequestsQuery"
+ },
+ {
+ "type": 1,
+ "content": {
+            "json": "### Last 100 operations executed\r\n\r\nThis list shows the last 100 operations executed and their associated operation IDs. You can use these values to request more information from the link after the list, which redirects you to a **Transaction Search** tool."
+ },
+ "name": "operationsText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "requests\r\n| top 100 by timestamp\r\n| distinct name, operation_Id",
+ "size": 0,
+ "showAnalytics": true,
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ]
+ },
+ "name": "lastOperationsQuery"
+ },
+ {
+ "type": 1,
+ "content": {
+            "json": "You can check the **Transaction Search** [here](https://portal.azure.com/#blade/AppInsightsExtension/BladeRedirect/BladeName/searchV1/ResourceId/%2Fsubscriptions%2F30a83aff-7a8b-4ca3-aa48-ab93268b5a8b%2FresourceGroups%2Frg-dev-tf-amines4%2Fproviders%2FMicrosoft.Insights%2Fcomponents%2Fdev-appi-accl-glc-eastus2/BladeInputs/%7B%22tables%22%3A%5B%22availabilityResults%22%2C%22requests%22%2C%22exceptions%22%2C%22pageViews%22%2C%22traces%22%2C%22customEvents%22%2C%22dependencies%22%5D%7D).\r\n\r\nThen use the list of the last 100 operation IDs above to look for a specific operation."
+ },
+ "name": "transactionSearchBladeText"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "### Additional telemetry\r\n\r\nYou can find in these sections more information that you can use or add to this workbook.\r\n\r\n|Application map|Availability|Failures|Performance|\r\n|---------------|------------|--------|-----------|\r\n|[Link](https://portal.azure.com/#blade/AppInsightsExtension/BladeRedirect/BladeName/applicationMap/ResourceId/%2Fsubscriptions%2F30a83aff-7a8b-4ca3-aa48-ab93268b5a8b%2FresourceGroups%2Frg-dev-tf-amines4%2Fproviders%2FMicrosoft.Insights%2Fcomponents%2Fdev-appi-accl-glc-eastus2/BladeInputs/%7B%22MainResourceId%22%3A%22%2Fsubscriptions%2F30a83aff-7a8b-4ca3-aa48-ab93268b5a8b%2FresourceGroups%2Frg-dev-tf-amines4%2Fproviders%2FMicrosoft.Insights%2Fcomponents%2Fdev-appi-accl-glc-eastus2%22%2C%22TimeContext%22%3A%7B%22durationMs%22%3A3600000%2C%22createdTime%22%3A%222023-03-07T15%3A39%3A08.000Z%22%2C%22isInitialTime%22%3Afalse%2C%22grain%22%3A1%2C%22useDashboardTimeRange%22%3Afalse%7D%2C%22DataModel%22%3A%7B%22exclude4xxError%22%3Atrue%2C%22timeContext%22%3A%7B%22durationMs%22%3A3600000%2C%22createdTime%22%3A%222023-03-07T15%3A39%3A08.000Z%22%2C%22isInitialTime%22%3Afalse%2C%22grain%22%3A1%2C%22useDashboardTimeRange%22%3Afalse%7D%2C%22layoutOption%22%3A%22Organic%22%2C%22nodeContentFilter%22%3A%22%22%7D%7D)|[Link](https://portal.azure.com/#blade/AppInsightsExtension/BladeRedirect/BladeName/availability/ResourceId/%2Fsubscriptions%2F30a83aff-7a8b-4ca3-aa48-ab93268b5a8b%2FresourceGroups%2Frg-dev-tf-amines4%2Fproviders%2FMicrosoft.Insights%2Fcomponents%2Fdev-appi-accl-glc-eastus2/BladeInputs/%7B%22filters%22%3A%5B%5D%2C%22timeContext%22%3A%7B%22durationMs%22%3A86400000%2C%22createdTime%22%3A%222023-03-07T12%3A54%3A05.627Z%22%2C%22endTime%22%3A%222023-03-07T15%3A39%3A00.000Z%22%2C%22isInitialTime%22%3Afalse%2C%22grain%22%3A1%2C%22useDashboardTimeRange%22%3Afalse%7D%2C%22experience%22%3A5%2C%22roleSelectors%22%3A%5B%5D%7D)|[Link](https://portal.azure.com/#blade/AppInsightsExtension/BladeRedirect/BladeName/failures/ResourceId/%2Fsubscriptions%2F30a83aff-7a8b-4ca3-aa48-ab93268b5a8b%2FresourceGroups%2Frg-dev-tf-amines4%2Fproviders%2FMicrosoft.Insights%2Fcomponents%2Fdev-appi-accl-glc-eastus2/BladeInputs/%7B%22filters%22%3A%5B%5D%2C%22timeContext%22%3A%7B%22durationMs%22%3A86400000%2C%22createdTime%22%3A%222023-03-07T12%3A54%3A05.627Z%22%2C%22endTime%22%3A%222023-03-07T12%3A58%3A00.000Z%22%2C%22isInitialTime%22%3Afalse%2C%22grain%22%3A1%2C%22useDashboardTimeRange%22%3Afalse%7D%2C%22selectedOperation%22%3Anull%2C%22experience%22%3A4%2C%22roleSelectors%22%3A%5B%5D%2C%22clientTypeMode%22%3A%22Server%22%7D)|[Link](https://portal.azure.com/#blade/AppInsightsExtension/BladeRedirect/BladeName/performance/ResourceId/%2Fsubscriptions%2F30a83aff-7a8b-4ca3-aa48-ab93268b5a8b%2FresourceGroups%2Frg-dev-tf-amines4%2Fproviders%2FMicrosoft.Insights%2Fcomponents%2Fdev-appi-accl-glc-eastus2/BladeInputs/%7B%22filters%22%3A%5B%5D%2C%22timeContext%22%3A%7B%22durationMs%22%3A86400000%2C%22createdTime%22%3A%222023-03-07T12%3A54%3A05.627Z%22%2C%22endTime%22%3A%222023-03-07T15%3A41%3A00.000Z%22%2C%22isInitialTime%22%3Afalse%2C%22grain%22%3A1%2C%22useDashboardTimeRange%22%3Afalse%7D%2C%22selectedOperation%22%3Anull%2C%22experience%22%3A1%2C%22roleSelectors%22%3A%5B%5D%2C%22clientTypeMode%22%3A%22Server%22%7D)|"
+ },
+ "name": "aditionalTelemetryText"
+ }
+ ]
+ },
+ "name": "microservicesTelemetryGroup"
+ },
+ {
+ "type": 12,
+ "content": {
+ "version": "NotebookGroup/1.0",
+ "groupType": "editable",
+ "items": [
+ {
+ "type": 1,
+ "content": {
+            "json": "## Microservices operations telemetry\r\n\r\nSelect options from the following parameters to display the desired results:\r\nThe first parameter sets the time range and the second selects the service you want to monitor."
+ },
+ "name": "operationsTitleText"
+ },
+ {
+ "type": 9,
+ "content": {
+ "version": "KqlParameterItem/1.0",
+ "parameters": [
+ {
+ "id": "8f9405b8-1cc0-419f-a465-f35464bb15c0",
+ "version": "KqlParameterItem/1.0",
+ "name": "param_time_range",
+ "label": "Time Range",
+ "type": 4,
+ "description": "Select the time range for queries",
+ "isRequired": true,
+ "typeSettings": {
+ "selectableValues": [
+ {
+ "durationMs": 900000
+ },
+ {
+ "durationMs": 1800000
+ },
+ {
+ "durationMs": 3600000
+ },
+ {
+ "durationMs": 14400000
+ },
+ {
+ "durationMs": 43200000
+ },
+ {
+ "durationMs": 86400000
+ },
+ {
+ "durationMs": 172800000
+ },
+ {
+ "durationMs": 259200000
+ },
+ {
+ "durationMs": 604800000
+ },
+ {
+ "durationMs": 1209600000
+ },
+ {
+ "durationMs": 2419200000
+ },
+ {
+ "durationMs": 2592000000
+ }
+ ],
+ "allowCustom": true
+ },
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "value": {
+ "durationMs": 1800000
+ }
+ },
+ {
+ "id": "5da2ece4-7e2b-4356-a8ce-795bf3e58bd2",
+ "version": "KqlParameterItem/1.0",
+ "name": "paramCloudRoleName",
+ "label": "Cloud Role",
+ "type": 2,
+ "query": "dependencies\r\n| distinct cloud_RoleName\r\n| order by cloud_RoleName asc",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ],
+ "typeSettings": {
+ "additionalResourceOptions": []
+ },
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components"
+ },
+ {
+ "id": "0093df18-0e13-4eac-b50e-1afbc78a7b9c",
+ "version": "KqlParameterItem/1.0",
+ "name": "appinsights",
+ "type": 5,
+ "description": "Used as a single place to set the app insights resource to query",
+ "isHiddenWhenLocked": true,
+ "typeSettings": {
+ "additionalResourceOptions": [],
+ "showDefault": false
+ },
+ "jsonData": "[\"/subscriptions/30a83aff-7a8b-4ca3-aa48-ab93268b5a8b/resourceGroups/rg-dev-tf-amines4/providers/Microsoft.Insights/components/dev-appi-accl-glc-eastus2\"]",
+ "value": "${app_insights_id}"
+ }
+ ],
+ "style": "pills",
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces"
+ },
+ "name": "operationsParameters"
+ },
+ {
+ "type": 1,
+ "content": {
+            "json": "### End to end processing time\r\n\r\nThis chart displays the end to end processing time in seconds. It requires the time range and cloud role parameters to be selected."
+ },
+ "name": "endToEndProcessingText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "let put_name = strcat(\"PUT /cargo/{cargoId\", \"}\"); // TODO - determine how to escape curly braces!\r\nlet cargo_processing_api = requests\r\n| where cloud_RoleName == \"cargo-processing-api\" and (name == \"POST /cargo/\" or name == put_name) and timestamp {param_time_range}\r\n| project-rename ingest_timestamp = timestamp\r\n| project ingest_timestamp, operation_Id\r\n;\r\nlet operation_api_succeeded = requests\r\n| where cloud_RoleName == \"operations-api\" and name == \"ServiceBus.process\" and customDimensions[\"operation-state\"] == \"Succeeded\"\r\n| extend operation_api_completed = timestamp + (duration*1ms)\r\n| project operation_Id, operation_api_completed\r\n;\r\ncargo_processing_api\r\n| join kind=inner operation_api_succeeded on $left.operation_Id == $right.operation_Id\r\n| extend end_to_end_Duration_s = (operation_api_completed - ingest_timestamp) /1s\r\n| summarize avg(end_to_end_Duration_s), max(end_to_end_Duration_s) by bin(ingest_timestamp, {param_time_range:grain})\r\n| order by ingest_timestamp desc\r\n| project ingest_timestamp, avg_end_to_end_Duration_s, max_end_to_end_Duration_s\r\n| render timechart \r\n",
+ "size": 0,
+ "aggregation": 3,
+ "showAnalytics": true,
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ],
+ "chartSettings": {
+ "seriesLabelSettings": [
+ {
+ "seriesName": "avg_end_to_end_Duration_s",
+ "label": "Avg duration (s)",
+ "color": "blue"
+ },
+ {
+ "seriesName": "max_end_to_end_Duration_s",
+ "label": "Max duration (s)",
+ "color": "lightBlue"
+ }
+ ]
+ }
+ },
+ "name": "endToEndProcessingQuery"
+ },
+ {
+ "type": 1,
+ "content": {
+            "json": "### Request count\r\n\r\nThis chart displays the count of ingested requests. It requires the time range and cloud role parameters to be selected."
+ },
+ "name": "requestsCountText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "let put_name = strcat(\"PUT /cargo/{cargoId\", \"}\"); // TODO - determine how to escape curly braces!\r\nrequests\r\n| where cloud_RoleName == \"cargo-processing-api\" and (name == \"POST /cargo/\" or name == put_name) and timestamp {param_time_range}\r\n| summarize request_count=count() by bin(timestamp, {param_time_range:grain})\r\n| project timestamp, request_count\r\n| render timechart \r\n",
+ "size": 1,
+ "showAnalytics": true,
+ "color": "gray",
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ]
+ },
+ "name": "requestsCountQuery"
+ },
+ {
+ "type": 1,
+ "content": {
+            "json": "### Services processing time\r\n\r\nThis chart displays the processing time within each service, in seconds. It requires the time range and cloud role parameters to be selected."
+ },
+ "name": "servicesProcessingTimeText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "let put_name = strcat(\"PUT /cargo/{cargoId\", \"}\"); // TODO - determine how to escape curly braces!\r\nlet cargo_processing_api = requests\n | where cloud_RoleName == \"cargo-processing-api\" and (name == \"POST /cargo/\" or name == put_name) and timestamp {param_time_range}\n | project-rename durationMs=duration\n | extend duration=durationMs * 1ms\n | project timestamp, cloud_RoleName, cloud_RoleInstance, duration, operation_Id\n;\nlet cargo_processing_validator = requests\n | where cloud_RoleName == \"cargo-processing-validator\" and (name == \"ServiceBus.ProcessMessage\" or name == \"ServiceBusQueue.ProcessMessage\")\n | project-rename durationMs=duration\n | extend duration=durationMs * 1ms\n | project timestamp, cloud_RoleName, cloud_RoleInstance, duration, operation_Id\n;\nlet valid_cargo_manager = requests\n | where cloud_RoleName == \"valid-cargo-manager\" and name == \"ServiceBusTopic.ProcessMessage\"\n | project-rename durationMs=duration\n | extend duration=durationMs * 1ms\n | project timestamp, cloud_RoleName, cloud_RoleInstance, name, duration, operation_Id\n;\nlet invalid_cargo_manager = requests\n | where cloud_RoleName == \"invalid-cargo-manager\" and name == \"ServiceBusTopic.ProcessMessage\"\n | project-rename durationMs=duration\n | extend duration=durationMs * 1ms\n | project timestamp, cloud_RoleName, cloud_RoleInstance, name, duration, operation_Id\n;\ncargo_processing_api\n| join kind=leftouter cargo_processing_validator on $left.operation_Id == $right.operation_Id\n| join kind=leftouter valid_cargo_manager on $left.operation_Id == $right.operation_Id\n| join kind=leftouter invalid_cargo_manager on $left.operation_Id == $right.operation_Id\n| project-rename\n cpa_timestamp=timestamp, cpa_duration=duration, \n cpv_timestamp=timestamp1, cpv_duration=duration1,\n vcm_timestamp=timestamp2, vcm_duration=duration2,\n icm_timestamp=timestamp3, icm_duration=duration3\n| extend\n time_to_cpv=cpv_timestamp - cpa_timestamp,\n time_to_vcm=vcm_timestamp - cpv_timestamp,\n time_to_icm=icm_timestamp - cpv_timestamp\n| extend\n in_cpa_s = cpa_duration / 1s,\n in_cpv_s = cpv_duration / 1s,\n in_vcm_s = vcm_duration / 1s,\n in_icm_s = icm_duration / 1s\n| summarize \n avg(in_cpa_s),\n avg(in_cpv_s),\n avg(in_vcm_s),\n avg(in_icm_s)\n by bin (cpa_timestamp, {param_time_range:grain})\n| order by cpa_timestamp desc\n| render areachart with(kind=stacked)\n",
+ "size": 0,
+ "aggregation": 3,
+ "showAnalytics": true,
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ],
+ "chartSettings": {
+ "xAxis": "cpa_timestamp",
+ "seriesLabelSettings": [
+ {
+ "seriesName": "avg_to_cpv_s",
+ "label": "Average time to cargo-processing_validator",
+ "color": "redBright"
+ },
+ {
+ "seriesName": "avg_to_vcm_s",
+ "color": "green"
+ },
+ {
+ "seriesName": "avg_to_icm_s",
+ "color": "lightBlue"
+ },
+ {
+ "seriesName": "avg_in_cpa_s",
+ "color": "yellow"
+ },
+ {
+ "seriesName": "avg_in_cpv_s",
+ "color": "red"
+ },
+ {
+ "seriesName": "avg_in_vcm_s",
+ "color": "greenDark"
+ },
+ {
+ "seriesName": "avg_in_icm_s",
+ "color": "blue"
+ }
+ ]
+ }
+ },
+ "name": "servicesProcessingTimeQuery"
+ },
+ {
+ "type": 1,
+ "content": {
+            "json": "### Service dependency\r\n\r\nThis chart displays the duration of service dependencies, in seconds. It requires the time range and cloud role parameters to be selected."
+ },
+ "name": "serviceDependencyText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "let replace_guid = '[({]?[a-fA-F0-9]{8}[-]?([a-fA-F0-9]{4}[-]?){3}[a-fA-F0-9]{12}[})]?';\r\ndependencies\r\n| where cloud_RoleName == \"{paramCloudRoleName}\" and timestamp {param_time_range}\r\n| extend name_pattern = replace_regex(name, replace_guid, \"\")\r\n| extend duration_s = duration /1000\r\n| summarize avg(duration_s) by name_pattern, bin(timestamp, {param_time_range:grain})\r\n| project-reorder timestamp, avg_duration_s , name_pattern\r\n| render areachart with(kind=stacked)",
+ "size": 0,
+ "aggregation": 3,
+ "showAnalytics": true,
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ]
+ },
+ "name": "serviceDependencyQuery"
+ },
+ {
+ "type": 1,
+ "content": {
+            "json": "### Breakdown by destination port\r\n\r\nThis chart displays the end to end processing time by destination port, in seconds. It requires the time range and cloud role parameters to be selected."
+ },
+ "name": "destinationPortBreakdownText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "let put_name = strcat(\"PUT /cargo/{cargoId\", \"}\"); // TODO - determine how to escape curly braces!\r\nlet portMap = requests\r\n| where cloud_RoleName == \"cargo-processing-validator\"\r\n| extend destinationPort = customDimensions[\"cargo-destination\"]\r\n| project operation_Id, destinationPort;\r\nlet cargo_processing_api = requests\r\n| where cloud_RoleName == \"cargo-processing-api\" and (name == \"POST /cargo/\" or name == put_name) and timestamp {param_time_range}\r\n| project-rename ingest_timestamp = timestamp\r\n| project ingest_timestamp, operation_Id\r\n;\r\nlet operation_api_succeeded = requests\r\n| where cloud_RoleName == \"operations-api\" and name == \"ServiceBus.process\" and customDimensions[\"operation-state\"] == \"Succeeded\"\r\n| extend operation_api_completed = timestamp + (duration*1ms)\r\n| project operation_Id, operation_api_completed\r\n;\r\ncargo_processing_api\r\n| join kind=inner operation_api_succeeded on $left.operation_Id == $right.operation_Id\r\n| join kind=leftouter portMap on $left.operation_Id == $right.operation_Id\r\n| extend end_to_end_Duration_s = (operation_api_completed - ingest_timestamp) /1s\r\n| extend destinationPort=iif(destinationPort ==\"\", \"\", destinationPort)\r\n| summarize avg(end_to_end_Duration_s) by destinationPort, bin(ingest_timestamp, {param_time_range:grain})\r\n| project ingest_timestamp, avg_end_to_end_Duration_s, destinationPort\r\n| render timechart ",
+ "size": 0,
+ "aggregation": 3,
+ "showAnalytics": true,
+ "queryType": 0,
+ "resourceType": "microsoft.insights/components",
+ "crossComponentResources": [
+ "${app_insights_id}"
+ ]
+ },
+ "name": "destinationPortBreakdownQuery"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "### Pod Restarts\r\n\r\nThis chart shows the number of times each service pod has restarted."
+ },
+ "name": "podRestartText"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "KubePodInventory\r\n| where ServiceName == \"{paramCloudRoleName}\"\r\n| summarize numRestarts = sum(PodRestartCount) by ServiceName, bin(TimeGenerated, 1m)\r\n| render timechart",
+ "size": 0,
+ "showAnalytics": true,
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 0,
+ "resourceType": "microsoft.operationalinsights/workspaces"
+ },
+ "name": "podRestartQuery"
+ }
+ ]
+ },
+ "name": "operationsTelemetryGroup"
+ }
+ ],
+ "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
+}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/run-local.sh b/accelerators/aks-sb-azmonitor-microservices/run-local.sh
new file mode 100644
index 0000000..7493e16
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/run-local.sh
@@ -0,0 +1,20 @@
+#!/bin/bash
+set -e
+
+script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
+
+if [[ ! -f "$script_dir/.env" ]]; then
+ echo "Please create a .env file (using .env.sample as a starter)" 1>&2
+ exit 1
+fi
+
+source "$script_dir/.env"
+
+if [[ -z "$USERNAME" ]]; then
+  echo 'USERNAME not set - ensure you have specified a value for it in your .env file' 1>&2
+ exit 6
+fi
+
+echo "Starting services locally (Ctrl+C to stop)"
+cd "$script_dir/src"
+docker compose up
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.devcontainer/Dockerfile b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.devcontainer/Dockerfile
new file mode 100644
index 0000000..32bfefa
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.devcontainer/Dockerfile
@@ -0,0 +1,25 @@
+# See here for image contents: https://github.com/microsoft/vscode-dev-containers/tree/v0.238.0/containers/java/.devcontainer/base.Dockerfile
+
+# [Choice] Java version (use -bullseye variants on local arm64/Apple Silicon): 11, 17, 11-bullseye, 17-bullseye, 11-buster, 17-buster
+ARG VARIANT="17-bullseye"
+FROM mcr.microsoft.com/vscode/devcontainers/java:0-${VARIANT}
+
+# [Option] Install Maven
+ARG INSTALL_MAVEN="false"
+ARG MAVEN_VERSION=""
+# [Option] Install Gradle
+ARG INSTALL_GRADLE="false"
+ARG GRADLE_VERSION=""
+RUN if [ "${INSTALL_MAVEN}" = "true" ]; then su vscode -c "umask 0002 && . /usr/local/sdkman/bin/sdkman-init.sh && sdk install maven \"${MAVEN_VERSION}\""; fi \
+ && if [ "${INSTALL_GRADLE}" = "true" ]; then su vscode -c "umask 0002 && . /usr/local/sdkman/bin/sdkman-init.sh && sdk install gradle \"${GRADLE_VERSION}\""; fi
+
+# [Choice] Node.js version: none, lts/*, 16, 14, 12, 10
+ARG NODE_VERSION="none"
+RUN if [ "${NODE_VERSION}" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
+
+# [Optional] Uncomment this section to install additional OS packages.
+# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
+# && apt-get -y install --no-install-recommends
+
+# [Optional] Uncomment this line to install global node packages.
+# RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g " 2>&1
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.devcontainer/devcontainer.json b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.devcontainer/devcontainer.json
new file mode 100644
index 0000000..583e6d2
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.devcontainer/devcontainer.json
@@ -0,0 +1,39 @@
+// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
+// https://github.com/microsoft/vscode-dev-containers/tree/v0.238.0/containers/java
+{
+ "name": "Java",
+ "build": {
+ "dockerfile": "Dockerfile",
+ "args": {
+ // Update the VARIANT arg to pick a Java version: 11, 17
+ // Append -bullseye or -buster to pin to an OS version.
+ // Use the -bullseye variants on local arm64/Apple Silicon.
+ "VARIANT": "17-bullseye",
+ // Options
+ "INSTALL_MAVEN": "true",
+ "INSTALL_GRADLE": "false",
+ "NODE_VERSION": "lts/*"
+ }
+ },
+ // Configure tool-specific properties.
+ "customizations": {
+ // Configure properties specific to VS Code.
+ "vscode": {
+ // Set *default* container specific settings.json values on container create.
+ "settings": {
+ "java.jdt.ls.lombokSupport.enabled": true
+ },
+ // Add the IDs of extensions you want installed when the container is created.
+ "extensions": [
+ "vscjava.vscode-java-pack",
+ "redhat.fabric8-analytics"
+ ]
+ }
+ },
+ // Use 'forwardPorts' to make a list of ports inside the container available locally.
+ // "forwardPorts": [],
+ // Use 'postCreateCommand' to run commands after the container is created.
+ // "postCreateCommand": "java -version",
+ // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
+ "remoteUser": "vscode"
+}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.dockerignore b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.dockerignore
new file mode 100644
index 0000000..2ce5e1c
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.dockerignore
@@ -0,0 +1,2 @@
+.env
+helm
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.env.sample b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.env.sample
new file mode 100644
index 0000000..cb2518d
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.env.sample
@@ -0,0 +1,9 @@
+APPLICATIONINSIGHTS_CONNECTION_STRING=
+APPLICATIONINSIGHTS_VERSION=3.4.7
+
+# Service Bus Information
+servicebus_connection_string=
+accelerator_queue_name=ingest-cargo
+
+# Operation API
+operations_api_url=http://operations-api:8081/
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.gitignore b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.gitignore
new file mode 100644
index 0000000..8977a26
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.gitignore
@@ -0,0 +1,3 @@
+target
+
+.env
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.vscode/launch.json b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.vscode/launch.json
new file mode 100644
index 0000000..52a4a37
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.vscode/launch.json
@@ -0,0 +1,24 @@
+{
+ // Use IntelliSense to learn about possible attributes.
+ // Hover to view descriptions of existing attributes.
+ // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "type": "java",
+ "name": "Launch Current File",
+ "request": "launch",
+ "mainClass": "${file}",
+ "envFile": "${workspaceFolder}/.env"
+ },
+ {
+ "type": "java",
+ "name": "Launch Application",
+ "request": "launch",
+ "mainClass": "com.microsoft.cse.cargoprocessing.api.Application",
+ "projectName": "cargoprocessing.api",
+ "vmArgs": "-javaagent:${workspaceFolder}/target/dependency/applicationinsights-agent-3.4.7.jar",
+ "envFile": "${workspaceFolder}/.env"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.vscode/settings.json b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.vscode/settings.json
new file mode 100644
index 0000000..c5f3f6b
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.vscode/settings.json
@@ -0,0 +1,3 @@
+{
+ "java.configuration.updateBuildConfiguration": "interactive"
+}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.vscode/tasks.json b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.vscode/tasks.json
new file mode 100644
index 0000000..b681057
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/.vscode/tasks.json
@@ -0,0 +1,19 @@
+{
+ // See https://go.microsoft.com/fwlink/?LinkId=733558
+ // for the documentation about the tasks.json format
+ "version": "2.0.0",
+ "tasks": [
+ {
+ "label": "verify",
+ "type": "shell",
+ "command": "mvn -B verify",
+ "group": "build"
+ },
+ {
+ "label": "test",
+ "type": "shell",
+ "command": "mvn -B test",
+ "group": "test"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/Dockerfile b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/Dockerfile
new file mode 100644
index 0000000..1487559
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/Dockerfile
@@ -0,0 +1,27 @@
+FROM mcr.microsoft.com/openjdk/jdk:17-ubuntu as base
+
+
+FROM maven:3.8.5-openjdk-17-slim as build
+WORKDIR /src
+
+RUN mvn -version
+
+COPY pom.xml .
+RUN mvn -B dependency:resolve-plugins dependency:resolve
+# RUN mvn -B dependency:go-offline
+
+COPY . .
+RUN mvn package
+
+RUN ls -al target
+RUN ls -al target/dependency
+
+FROM base as final
+COPY applicationinsights.json applicationinsights.json
+
+ARG JAR_FILE=/src/target/*.jar
+ARG DEPENDENCY=/src/target/dependency
+COPY --from=build ${DEPENDENCY}/applicationinsights-agent-3.4.7.jar applicationinsights-agent-3.4.7.jar
+COPY --from=build ${JAR_FILE} app.jar
+
+ENTRYPOINT ["java", "-javaagent:applicationinsights-agent-3.4.7.jar" ,"-jar","/app.jar" ]
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/README.md b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/README.md
new file mode 100644
index 0000000..f8ec5e6
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/README.md
@@ -0,0 +1,72 @@
+# Running the service
+
+## Pre-Requisites
+
+1. Service Bus [namespace](https://docs.microsoft.com/en-us/cli/azure/servicebus/namespace?view=azure-cli-latest#az-servicebus-namespace-create) with [queue](https://docs.microsoft.com/en-us/cli/azure/servicebus/queue?view=azure-cli-latest#az-servicebus-queue-create)
+1. Application Insights [account](https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource#azure-cli-preview)
+
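+The prerequisites above can be created with the Azure CLI. The snippet below is only a rough sketch with placeholder resource group and resource names; the queue name matches `.env.sample`.
+
+``` bash
+# Placeholder names - adjust to your environment
+az servicebus namespace create --resource-group my-rg --name my-sb-namespace --location eastus2
+az servicebus queue create --resource-group my-rg --namespace-name my-sb-namespace --name ingest-cargo
+# May require the application-insights CLI extension
+az monitor app-insights component create --resource-group my-rg --app my-appi --location eastus2
+```
+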
+## Debugging from VSCode Dev Container
+
+* Open the project in the dev container.
+ * Make sure to open in the devcontainer
+  * Ignore the Java alerts on the initial load; they appear before the dev container has finished building.
+ * If you see an alert for Lombok asking to reload, please do reload.
+* Rename `.env.sample` to `.env` and add connection strings for Service Bus and Application Insights.
+* Build the project. There are 2 options:
+  * From the Command Palette: `Tasks: Run Build Task`
+ * From the terminal `mvn -B verify`
+* Configure the debugger to use the "Launch Application" configuration.
+* Run the Debugger.
+* Post a message to ".../cargo/{GUID VALUE}" that conforms to the [Cargo API](../../api-spec/main.cadl) specification.
+
+## Docker Container
+
+* Rename `.env.sample` to `.env` and add connection strings for Service Bus and Application Insights.
+* Run `docker compose up` to run the service.
+* Post a message to ".../cargo/{GUID VALUE}" that conforms to the [Cargo API](../../api-spec/main.cadl) specification.
+
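+Once the container is up, you can do a quick sanity check against the same health endpoint used by the Kubernetes probes. This is a sketch assuming the default `8080:8080` port mapping from `docker-compose.yml`:
+
+``` bash
+# Should report the service status once the application has started
+curl http://localhost:8080/actuator/health
+```
+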
+## Samples
+
+Sample PUT request:
+
+``` bash
+curl --request PUT \
+ --url http://localhost:8080/cargo/2dfc711b-7335-4b17-aede-2d67fbf6866f \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "product": {
+ "name": "Toys",
+ "quantity": 100
+ },
+ "port": {
+ "source": "New York City",
+ "destination": "Seattle"
+ },
+ "demandDates": {
+ "start": "2022-06-24T00:00:00.000Z",
+ "end": "2022-06-30T00:00:00.000Z"
+ }
+}'
+```
+
+Sample POST request:
+
+``` bash
+curl --request POST \
+ --url http://localhost:8080/cargo/ \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "product": {
+ "name": "Toys",
+ "quantity": 100
+ },
+ "port": {
+ "source": "New York City",
+ "destination": "Tacoma"
+ },
+ "demandDates": {
+ "start": "2022-06-24T00:00:00.000Z",
+ "end": "2022-06-30T00:00:00.000Z"
+ }
+}'
+```
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/applicationinsights.json b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/applicationinsights.json
new file mode 100644
index 0000000..8efec15
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/applicationinsights.json
@@ -0,0 +1,17 @@
+{
+ "role": {
+ "name": "cargo-processing-api"
+ },
+ "instrumentation": {
+ "logging": {
+ "level": "INFO"
+ }
+ },
+ "preview": {
+ "instrumentation": {
+ "springIntegration": {
+ "enabled": true
+ }
+ }
+ }
+}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/docker-compose.yml b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/docker-compose.yml
new file mode 100644
index 0000000..47fdf30
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/docker-compose.yml
@@ -0,0 +1,11 @@
+version: "3.9"
+
+services:
+ cargo_processing_api:
+ env_file:
+ - .env
+ build:
+ context: .
+ dockerfile: Dockerfile
+ ports:
+ - "8080:8080"
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/.helmignore b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/.helmignore
new file mode 100644
index 0000000..0e8a0eb
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/.helmignore
@@ -0,0 +1,23 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*.orig
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+.vscode/
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/Chart.yaml b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/Chart.yaml
new file mode 100644
index 0000000..83847e9
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/Chart.yaml
@@ -0,0 +1,24 @@
+apiVersion: v2
+name: cargo-processing-api
+description: cargo-processing-api for aks-sb-azmonitor-microservices
+
+# A chart can be either an 'application' or a 'library' chart.
+#
+# Application charts are a collection of templates that can be packaged into versioned archives
+# to be deployed.
+#
+# Library charts provide useful utilities or functions for the chart developer. They're included as
+# a dependency of application charts to inject those utilities and functions into the rendering
+# pipeline. Library charts do not define any templates and therefore cannot be deployed.
+type: application
+
+# This is the chart version. This version number should be incremented each time you make changes
+# to the chart and its templates, including the app version.
+# Versions are expected to follow Semantic Versioning (https://semver.org/)
+version: 0.1.0
+
+# This is the version number of the application being deployed. This version number should be
+# incremented each time you make changes to the application. Versions are not expected to
+# follow Semantic Versioning. They should reflect the version the application is using.
+# It is recommended to use it with quotes.
+appVersion: v1
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/cargo-processing-api.yaml b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/cargo-processing-api.yaml
new file mode 100644
index 0000000..c5b1597
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/cargo-processing-api.yaml
@@ -0,0 +1,36 @@
+
+image:
+ pullPolicy: Always
+ tag: "latest"
+
+replicaCount: 1
+
+autoscaling:
+ enabled: false
+ minReplicas: 1
+ maxReplicas: 100
+ targetCPUUtilizationPercentage: 80
+
+imagePullSecrets: []
+nameOverride: ""
+fullnameOverride: ""
+podAnnotations: {}
+podSecurityContext: {}
+securityContext: {}
+resources: {}
+nodeSelector: {}
+tolerations: []
+affinity: {}
+
+
+# When running one of the deploy-*.sh scripts, an additional env.yaml
+# values file is created containing values specific to the deployed environment
+# with the following values:
+# image:
+# repository:
+
+# keyVault:
+# name:
+# tenantId:
+
+# aksKeyVaultSecretProviderIdentityId:
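+
+# As an illustration only, a generated env.yaml might look like this
+# (hypothetical placeholder values - the deploy scripts supply the real ones):
+# image:
+#   repository: myregistry.azurecr.io/cargo-processing-api
+#
+# keyVault:
+#   name: my-key-vault
+#   tenantId: 00000000-0000-0000-0000-000000000000
+#
+# aksKeyVaultSecretProviderIdentityId: 00000000-0000-0000-0000-000000000000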
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/NOTES.txt b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/NOTES.txt
new file mode 100644
index 0000000..0e7f6bf
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/NOTES.txt
@@ -0,0 +1,5 @@
+1. Get the application URL by running these commands:
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+  You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "aks-sb-azmonitor-microservices.fullname" . }}'
+export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "aks-sb-azmonitor-microservices.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
+echo http://$SERVICE_IP:80
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/_helpers.tpl b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/_helpers.tpl
new file mode 100644
index 0000000..1e34b64
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/_helpers.tpl
@@ -0,0 +1,51 @@
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "aks-sb-azmonitor-microservices.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
+{{- end }}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "aks-sb-azmonitor-microservices.fullname" -}}
+{{- if .Values.fullnameOverride }}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
+{{- else }}
+{{- $name := default .Chart.Name .Values.nameOverride }}
+{{- if contains $name .Release.Name }}
+{{- .Release.Name | trunc 63 | trimSuffix "-" }}
+{{- else }}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
+{{- end }}
+{{- end }}
+{{- end }}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "aks-sb-azmonitor-microservices.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
+{{- end }}
+
+{{/*
+Common labels
+*/}}
+{{- define "aks-sb-azmonitor-microservices.labels" -}}
+helm.sh/chart: {{ include "aks-sb-azmonitor-microservices.chart" . }}
+{{ include "aks-sb-azmonitor-microservices.selectorLabels" . }}
+{{- if .Chart.AppVersion }}
+app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
+{{- end }}
+app.kubernetes.io/managed-by: {{ .Release.Service }}
+{{- end }}
+
+{{/*
+Selector labels
+*/}}
+{{- define "aks-sb-azmonitor-microservices.selectorLabels" -}}
+app.kubernetes.io/name: {{ include "aks-sb-azmonitor-microservices.name" . }}
+app.kubernetes.io/instance: {{ .Release.Name }}
+{{- end }}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/deployment.yaml b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/deployment.yaml
new file mode 100644
index 0000000..feda6c3
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/deployment.yaml
@@ -0,0 +1,97 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "aks-sb-azmonitor-microservices.fullname" . }}
+ labels:
+ {{- include "aks-sb-azmonitor-microservices.labels" . | nindent 4 }}
+spec:
+ {{- if not .Values.autoscaling.enabled }}
+ replicas: {{ .Values.replicaCount }}
+ {{- end }}
+ selector:
+ matchLabels:
+ {{- include "aks-sb-azmonitor-microservices.selectorLabels" . | nindent 6 }}
+ template:
+ metadata:
+ {{- with .Values.podAnnotations }}
+ annotations:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ labels:
+ {{- include "aks-sb-azmonitor-microservices.selectorLabels" . | nindent 8 }}
+ spec:
+ serviceAccountName: default
+ {{- with .Values.imagePullSecrets }}
+ imagePullSecrets:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ securityContext:
+ {{- toYaml .Values.podSecurityContext | nindent 8 }}
+ containers:
+ - name: {{ .Chart.Name }}
+ securityContext:
+ {{- toYaml .Values.securityContext | nindent 12 }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ - name: APPLICATIONINSIGHTS_VERSION
+ value: 3.4.7
+ - name: accelerator_queue_name
+ value: ingest-cargo
+ - name: operations_api_url
+ value: http://operations-api/
+ - name: APPLICATIONINSIGHTS_CONNECTION_STRING
+ valueFrom:
+ secretKeyRef:
+ name: cargo-processing-api-secrets
+ key: AppInsightsConnectionString
+ - name: servicebus_connection_string
+ valueFrom:
+ secretKeyRef:
+ name: cargo-processing-api-secrets
+ key: ServiceBusConnectionString
+ ports:
+ - name: http
+ containerPort: 8080
+ protocol: TCP
+ livenessProbe:
+ httpGet:
+ path: /actuator/health
+ port: 8080
+ initialDelaySeconds: 60
+ periodSeconds: 20
+ failureThreshold: 3
+ timeoutSeconds: 10
+
+ startupProbe:
+ httpGet:
+ path: /actuator/health
+ port: 8080
+ periodSeconds: 10
+ failureThreshold: 30
+ timeoutSeconds: 10
+ resources:
+ {{- toYaml .Values.resources | nindent 12 }}
+ volumeMounts:
+ - name: secrets-store
+ mountPath: "/mnt/secrets-store"
+ readOnly: true
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ volumes:
+ - name: secrets-store
+ csi:
+ driver: secrets-store.csi.k8s.io
+ readOnly: true
+ volumeAttributes:
+ secretProviderClass: {{ include "aks-sb-azmonitor-microservices.fullname" . }}
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/hpa.yaml b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/hpa.yaml
new file mode 100644
index 0000000..0a3ca97
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/hpa.yaml
@@ -0,0 +1,28 @@
+{{- if .Values.autoscaling.enabled }}
+apiVersion: autoscaling/v2beta1
+kind: HorizontalPodAutoscaler
+metadata:
+ name: {{ include "aks-sb-azmonitor-microservices.fullname" . }}
+ labels:
+ {{- include "aks-sb-azmonitor-microservices.labels" . | nindent 4 }}
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: {{ include "aks-sb-azmonitor-microservices.fullname" . }}
+ minReplicas: {{ .Values.autoscaling.minReplicas }}
+ maxReplicas: {{ .Values.autoscaling.maxReplicas }}
+ metrics:
+ {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
+ - type: Resource
+ resource:
+ name: cpu
+ targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
+ {{- end }}
+ {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
+ - type: Resource
+ resource:
+ name: memory
+ targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
+ {{- end }}
+{{- end }}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/secretProviderClass.yaml b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/secretProviderClass.yaml
new file mode 100644
index 0000000..983846c
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/secretProviderClass.yaml
@@ -0,0 +1,41 @@
+apiVersion: secrets-store.csi.x-k8s.io/v1
+kind: SecretProviderClass
+metadata:
+ name: {{ include "aks-sb-azmonitor-microservices.fullname" . }}
+ labels:
+ {{- include "aks-sb-azmonitor-microservices.labels" . | nindent 4 }}
+spec:
+ provider: azure
+ parameters:
+ usePodIdentity: "false"
+ useVMManagedIdentity: "true"
+ userAssignedIdentityID: {{ .Values.aksKeyVaultSecretProviderIdentityId }}
+ keyvaultName: {{ .Values.keyVault.name }}
+ cloudName: ""
+ objects: |
+ array:
+ - |
+ objectName: AppInsightsConnectionString
+ objectType: secret
+ - |
+ objectName: ServiceBusConnectionString
+ objectType: secret
+ - |
+ objectName: CosmosDBEndpoint
+ objectType: secret
+ - |
+ objectName: CosmosDBKey
+ objectType: secret
+ tenantId: {{ .Values.keyVault.tenantId }}
+ secretObjects:
+ - data:
+ - key: AppInsightsConnectionString
+ objectName: AppInsightsConnectionString
+ - key: ServiceBusConnectionString
+ objectName: ServiceBusConnectionString
+ - key: CosmosDBEndpoint
+ objectName: CosmosDBEndpoint
+ - key: CosmosDBKey
+ objectName: CosmosDBKey
+ secretName: cargo-processing-api-secrets
+ type: Opaque
\ No newline at end of file
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/service.yaml b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/service.yaml
new file mode 100644
index 0000000..af3f13a
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/service.yaml
@@ -0,0 +1,15 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "aks-sb-azmonitor-microservices.fullname" . }}
+ labels:
+ {{- include "aks-sb-azmonitor-microservices.labels" . | nindent 4 }}
+spec:
+ type: ClusterIP
+ ports:
+ - port: 80
+ targetPort: http
+ protocol: TCP
+ name: http
+ selector:
+ {{- include "aks-sb-azmonitor-microservices.selectorLabels" . | nindent 4 }}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/tests/test-connection.yaml b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/tests/test-connection.yaml
new file mode 100644
index 0000000..5eb4bc4
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/helm/templates/tests/test-connection.yaml
@@ -0,0 +1,15 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: "{{ include "aks-sb-azmonitor-microservices.fullname" . }}-test-connection"
+ labels:
+ {{- include "aks-sb-azmonitor-microservices.labels" . | nindent 4 }}
+ annotations:
+ "helm.sh/hook": test
+spec:
+ containers:
+ - name: wget
+ image: busybox
+ command: ['wget']
+ args: ['{{ include "aks-sb-azmonitor-microservices.fullname" . }}:80']
+ restartPolicy: Never
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/mvnw b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/mvnw
new file mode 100644
index 0000000..8a8fb22
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/mvnw
@@ -0,0 +1,316 @@
+#!/bin/sh
+# ----------------------------------------------------------------------------
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# ----------------------------------------------------------------------------
+
+# ----------------------------------------------------------------------------
+# Maven Start Up Batch script
+#
+# Required ENV vars:
+# ------------------
+# JAVA_HOME - location of a JDK home dir
+#
+# Optional ENV vars
+# -----------------
+# M2_HOME - location of maven2's installed home dir
+# MAVEN_OPTS - parameters passed to the Java VM when running Maven
+# e.g. to debug Maven itself, use
+# set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
+# MAVEN_SKIP_RC - flag to disable loading of mavenrc files
+# ----------------------------------------------------------------------------
+
+if [ -z "$MAVEN_SKIP_RC" ] ; then
+
+ if [ -f /usr/local/etc/mavenrc ] ; then
+ . /usr/local/etc/mavenrc
+ fi
+
+ if [ -f /etc/mavenrc ] ; then
+ . /etc/mavenrc
+ fi
+
+ if [ -f "$HOME/.mavenrc" ] ; then
+ . "$HOME/.mavenrc"
+ fi
+
+fi
+
+# OS specific support. $var _must_ be set to either true or false.
+cygwin=false;
+darwin=false;
+mingw=false
+case "`uname`" in
+ CYGWIN*) cygwin=true ;;
+ MINGW*) mingw=true;;
+ Darwin*) darwin=true
+ # Use /usr/libexec/java_home if available, otherwise fall back to /Library/Java/Home
+ # See https://developer.apple.com/library/mac/qa/qa1170/_index.html
+ if [ -z "$JAVA_HOME" ]; then
+ if [ -x "/usr/libexec/java_home" ]; then
+ export JAVA_HOME="`/usr/libexec/java_home`"
+ else
+ export JAVA_HOME="/Library/Java/Home"
+ fi
+ fi
+ ;;
+esac
+
+if [ -z "$JAVA_HOME" ] ; then
+ if [ -r /etc/gentoo-release ] ; then
+ JAVA_HOME=`java-config --jre-home`
+ fi
+fi
+
+if [ -z "$M2_HOME" ] ; then
+ ## resolve links - $0 may be a link to maven's home
+ PRG="$0"
+
+ # need this for relative symlinks
+ while [ -h "$PRG" ] ; do
+ ls=`ls -ld "$PRG"`
+ link=`expr "$ls" : '.*-> \(.*\)$'`
+ if expr "$link" : '/.*' > /dev/null; then
+ PRG="$link"
+ else
+ PRG="`dirname "$PRG"`/$link"
+ fi
+ done
+
+ saveddir=`pwd`
+
+ M2_HOME=`dirname "$PRG"`/..
+
+ # make it fully qualified
+ M2_HOME=`cd "$M2_HOME" && pwd`
+
+ cd "$saveddir"
+ # echo Using m2 at $M2_HOME
+fi
+
+# For Cygwin, ensure paths are in UNIX format before anything is touched
+if $cygwin ; then
+ [ -n "$M2_HOME" ] &&
+ M2_HOME=`cygpath --unix "$M2_HOME"`
+ [ -n "$JAVA_HOME" ] &&
+ JAVA_HOME=`cygpath --unix "$JAVA_HOME"`
+ [ -n "$CLASSPATH" ] &&
+ CLASSPATH=`cygpath --path --unix "$CLASSPATH"`
+fi
+
+# For Mingw, ensure paths are in UNIX format before anything is touched
+if $mingw ; then
+ [ -n "$M2_HOME" ] &&
+ M2_HOME="`(cd "$M2_HOME"; pwd)`"
+ [ -n "$JAVA_HOME" ] &&
+ JAVA_HOME="`(cd "$JAVA_HOME"; pwd)`"
+fi
+
+if [ -z "$JAVA_HOME" ]; then
+ javaExecutable="`which javac`"
+ if [ -n "$javaExecutable" ] && ! [ "`expr \"$javaExecutable\" : '\([^ ]*\)'`" = "no" ]; then
+ # readlink(1) is not available as standard on Solaris 10.
+ readLink=`which readlink`
+ if [ ! `expr "$readLink" : '\([^ ]*\)'` = "no" ]; then
+ if $darwin ; then
+ javaHome="`dirname \"$javaExecutable\"`"
+ javaExecutable="`cd \"$javaHome\" && pwd -P`/javac"
+ else
+ javaExecutable="`readlink -f \"$javaExecutable\"`"
+ fi
+ javaHome="`dirname \"$javaExecutable\"`"
+ javaHome=`expr "$javaHome" : '\(.*\)/bin'`
+ JAVA_HOME="$javaHome"
+ export JAVA_HOME
+ fi
+ fi
+fi
+
+if [ -z "$JAVACMD" ] ; then
+ if [ -n "$JAVA_HOME" ] ; then
+ if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
+ # IBM's JDK on AIX uses strange locations for the executables
+ JAVACMD="$JAVA_HOME/jre/sh/java"
+ else
+ JAVACMD="$JAVA_HOME/bin/java"
+ fi
+ else
+ JAVACMD="`\\unset -f command; \\command -v java`"
+ fi
+fi
+
+if [ ! -x "$JAVACMD" ] ; then
+ echo "Error: JAVA_HOME is not defined correctly." >&2
+ echo " We cannot execute $JAVACMD" >&2
+ exit 1
+fi
+
+if [ -z "$JAVA_HOME" ] ; then
+ echo "Warning: JAVA_HOME environment variable is not set."
+fi
+
+CLASSWORLDS_LAUNCHER=org.codehaus.plexus.classworlds.launcher.Launcher
+
+# traverses directory structure from process work directory to filesystem root
+# first directory with .mvn subdirectory is considered project base directory
+find_maven_basedir() {
+
+ if [ -z "$1" ]
+ then
+ echo "Path not specified to find_maven_basedir"
+ return 1
+ fi
+
+ basedir="$1"
+ wdir="$1"
+ while [ "$wdir" != '/' ] ; do
+ if [ -d "$wdir"/.mvn ] ; then
+ basedir=$wdir
+ break
+ fi
+ # workaround for JBEAP-8937 (on Solaris 10/Sparc)
+ if [ -d "${wdir}" ]; then
+ wdir=`cd "$wdir/.."; pwd`
+ fi
+ # end of workaround
+ done
+ echo "${basedir}"
+}
+
+# concatenates all lines of a file
+concat_lines() {
+ if [ -f "$1" ]; then
+ echo "$(tr -s '\n' ' ' < "$1")"
+ fi
+}
+
+BASE_DIR=`find_maven_basedir "$(pwd)"`
+if [ -z "$BASE_DIR" ]; then
+ exit 1;
+fi
+
+##########################################################################################
+# Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
+# This allows using the maven wrapper in projects that prohibit checking in binary data.
+##########################################################################################
+if [ -r "$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" ]; then
+ if [ "$MVNW_VERBOSE" = true ]; then
+ echo "Found .mvn/wrapper/maven-wrapper.jar"
+ fi
+else
+ if [ "$MVNW_VERBOSE" = true ]; then
+ echo "Couldn't find .mvn/wrapper/maven-wrapper.jar, downloading it ..."
+ fi
+ if [ -n "$MVNW_REPOURL" ]; then
+ jarUrl="$MVNW_REPOURL/org/apache/maven/wrapper/maven-wrapper/3.1.0/maven-wrapper-3.1.0.jar"
+ else
+ jarUrl="https://repo.maven.apache.org/maven2/org/apache/maven/wrapper/maven-wrapper/3.1.0/maven-wrapper-3.1.0.jar"
+ fi
+ while IFS="=" read key value; do
+ case "$key" in (wrapperUrl) jarUrl="$value"; break ;;
+ esac
+ done < "$BASE_DIR/.mvn/wrapper/maven-wrapper.properties"
+ if [ "$MVNW_VERBOSE" = true ]; then
+ echo "Downloading from: $jarUrl"
+ fi
+ wrapperJarPath="$BASE_DIR/.mvn/wrapper/maven-wrapper.jar"
+ if $cygwin; then
+ wrapperJarPath=`cygpath --path --windows "$wrapperJarPath"`
+ fi
+
+ if command -v wget > /dev/null; then
+ if [ "$MVNW_VERBOSE" = true ]; then
+ echo "Found wget ... using wget"
+ fi
+ if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then
+ wget "$jarUrl" -O "$wrapperJarPath" || rm -f "$wrapperJarPath"
+ else
+ wget --http-user=$MVNW_USERNAME --http-password=$MVNW_PASSWORD "$jarUrl" -O "$wrapperJarPath" || rm -f "$wrapperJarPath"
+ fi
+ elif command -v curl > /dev/null; then
+ if [ "$MVNW_VERBOSE" = true ]; then
+ echo "Found curl ... using curl"
+ fi
+ if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then
+ curl -o "$wrapperJarPath" "$jarUrl" -f
+ else
+ curl --user $MVNW_USERNAME:$MVNW_PASSWORD -o "$wrapperJarPath" "$jarUrl" -f
+ fi
+
+ else
+ if [ "$MVNW_VERBOSE" = true ]; then
+ echo "Falling back to using Java to download"
+ fi
+ javaClass="$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.java"
+ # For Cygwin, switch paths to Windows format before running javac
+ if $cygwin; then
+ javaClass=`cygpath --path --windows "$javaClass"`
+ fi
+ if [ -e "$javaClass" ]; then
+ if [ ! -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
+ if [ "$MVNW_VERBOSE" = true ]; then
+ echo " - Compiling MavenWrapperDownloader.java ..."
+ fi
+ # Compiling the Java class
+ ("$JAVA_HOME/bin/javac" "$javaClass")
+ fi
+ if [ -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
+ # Running the downloader
+ if [ "$MVNW_VERBOSE" = true ]; then
+ echo " - Running MavenWrapperDownloader.java ..."
+ fi
+ ("$JAVA_HOME/bin/java" -cp .mvn/wrapper MavenWrapperDownloader "$MAVEN_PROJECTBASEDIR")
+ fi
+ fi
+ fi
+fi
+##########################################################################################
+# End of extension
+##########################################################################################
+
+export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-"$BASE_DIR"}
+if [ "$MVNW_VERBOSE" = true ]; then
+ echo $MAVEN_PROJECTBASEDIR
+fi
+MAVEN_OPTS="$(concat_lines "$MAVEN_PROJECTBASEDIR/.mvn/jvm.config") $MAVEN_OPTS"
+
+# For Cygwin, switch paths to Windows format before running java
+if $cygwin; then
+ [ -n "$M2_HOME" ] &&
+ M2_HOME=`cygpath --path --windows "$M2_HOME"`
+ [ -n "$JAVA_HOME" ] &&
+ JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"`
+ [ -n "$CLASSPATH" ] &&
+ CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
+ [ -n "$MAVEN_PROJECTBASEDIR" ] &&
+ MAVEN_PROJECTBASEDIR=`cygpath --path --windows "$MAVEN_PROJECTBASEDIR"`
+fi
+
+# Provide a "standardized" way to retrieve the CLI args that will
+# work with both Windows and non-Windows executions.
+MAVEN_CMD_LINE_ARGS="$MAVEN_CONFIG $@"
+export MAVEN_CMD_LINE_ARGS
+
+WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
+
+exec "$JAVACMD" \
+ $MAVEN_OPTS \
+ $MAVEN_DEBUG_OPTS \
+ -classpath "$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar" \
+ "-Dmaven.home=${M2_HOME}" \
+ "-Dmaven.multiModuleProjectDirectory=${MAVEN_PROJECTBASEDIR}" \
+ ${WRAPPER_LAUNCHER} $MAVEN_CONFIG "$@"
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/mvnw.cmd b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/mvnw.cmd
new file mode 100644
index 0000000..1d8ab01
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/mvnw.cmd
@@ -0,0 +1,188 @@
+@REM ----------------------------------------------------------------------------
+@REM Licensed to the Apache Software Foundation (ASF) under one
+@REM or more contributor license agreements. See the NOTICE file
+@REM distributed with this work for additional information
+@REM regarding copyright ownership. The ASF licenses this file
+@REM to you under the Apache License, Version 2.0 (the
+@REM "License"); you may not use this file except in compliance
+@REM with the License. You may obtain a copy of the License at
+@REM
+@REM https://www.apache.org/licenses/LICENSE-2.0
+@REM
+@REM Unless required by applicable law or agreed to in writing,
+@REM software distributed under the License is distributed on an
+@REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+@REM KIND, either express or implied. See the License for the
+@REM specific language governing permissions and limitations
+@REM under the License.
+@REM ----------------------------------------------------------------------------
+
+@REM ----------------------------------------------------------------------------
+@REM Maven Start Up Batch script
+@REM
+@REM Required ENV vars:
+@REM JAVA_HOME - location of a JDK home dir
+@REM
+@REM Optional ENV vars
+@REM M2_HOME - location of maven2's installed home dir
+@REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands
+@REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a keystroke before ending
+@REM MAVEN_OPTS - parameters passed to the Java VM when running Maven
+@REM e.g. to debug Maven itself, use
+@REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
+@REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files
+@REM ----------------------------------------------------------------------------
+
+@REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'
+@echo off
+@REM set title of command window
+title %0
+@REM enable echoing by setting MAVEN_BATCH_ECHO to 'on'
+@if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO%
+
+@REM set %HOME% to equivalent of $HOME
+if "%HOME%" == "" (set "HOME=%HOMEDRIVE%%HOMEPATH%")
+
+@REM Execute a user defined script before this one
+if not "%MAVEN_SKIP_RC%" == "" goto skipRcPre
+@REM check for pre script, once with legacy .bat ending and once with .cmd ending
+if exist "%USERPROFILE%\mavenrc_pre.bat" call "%USERPROFILE%\mavenrc_pre.bat" %*
+if exist "%USERPROFILE%\mavenrc_pre.cmd" call "%USERPROFILE%\mavenrc_pre.cmd" %*
+:skipRcPre
+
+@setlocal
+
+set ERROR_CODE=0
+
+@REM To isolate internal variables from possible post scripts, we use another setlocal
+@setlocal
+
+@REM ==== START VALIDATION ====
+if not "%JAVA_HOME%" == "" goto OkJHome
+
+echo.
+echo Error: JAVA_HOME not found in your environment. >&2
+echo Please set the JAVA_HOME variable in your environment to match the >&2
+echo location of your Java installation. >&2
+echo.
+goto error
+
+:OkJHome
+if exist "%JAVA_HOME%\bin\java.exe" goto init
+
+echo.
+echo Error: JAVA_HOME is set to an invalid directory. >&2
+echo JAVA_HOME = "%JAVA_HOME%" >&2
+echo Please set the JAVA_HOME variable in your environment to match the >&2
+echo location of your Java installation. >&2
+echo.
+goto error
+
+@REM ==== END VALIDATION ====
+
+:init
+
+@REM Find the project base dir, i.e. the directory that contains the folder ".mvn".
+@REM Fallback to current working directory if not found.
+
+set MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR%
+IF NOT "%MAVEN_PROJECTBASEDIR%"=="" goto endDetectBaseDir
+
+set EXEC_DIR=%CD%
+set WDIR=%EXEC_DIR%
+:findBaseDir
+IF EXIST "%WDIR%"\.mvn goto baseDirFound
+cd ..
+IF "%WDIR%"=="%CD%" goto baseDirNotFound
+set WDIR=%CD%
+goto findBaseDir
+
+:baseDirFound
+set MAVEN_PROJECTBASEDIR=%WDIR%
+cd "%EXEC_DIR%"
+goto endDetectBaseDir
+
+:baseDirNotFound
+set MAVEN_PROJECTBASEDIR=%EXEC_DIR%
+cd "%EXEC_DIR%"
+
+:endDetectBaseDir
+
+IF NOT EXIST "%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config" goto endReadAdditionalConfig
+
+@setlocal EnableExtensions EnableDelayedExpansion
+for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! %%a
+@endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS%
+
+:endReadAdditionalConfig
+
+SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe"
+set WRAPPER_JAR="%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.jar"
+set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
+
+set DOWNLOAD_URL="https://repo.maven.apache.org/maven2/org/apache/maven/wrapper/maven-wrapper/3.1.0/maven-wrapper-3.1.0.jar"
+
+FOR /F "usebackq tokens=1,2 delims==" %%A IN ("%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.properties") DO (
+ IF "%%A"=="wrapperUrl" SET DOWNLOAD_URL=%%B
+)
+
+@REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
+@REM This allows using the maven wrapper in projects that prohibit checking in binary data.
+if exist %WRAPPER_JAR% (
+ if "%MVNW_VERBOSE%" == "true" (
+ echo Found %WRAPPER_JAR%
+ )
+) else (
+ if not "%MVNW_REPOURL%" == "" (
+ SET DOWNLOAD_URL="%MVNW_REPOURL%/org/apache/maven/wrapper/maven-wrapper/3.1.0/maven-wrapper-3.1.0.jar"
+ )
+ if "%MVNW_VERBOSE%" == "true" (
+ echo Couldn't find %WRAPPER_JAR%, downloading it ...
+ echo Downloading from: %DOWNLOAD_URL%
+ )
+
+ powershell -Command "&{"^
+ "$webclient = new-object System.Net.WebClient;"^
+ "if (-not ([string]::IsNullOrEmpty('%MVNW_USERNAME%') -and [string]::IsNullOrEmpty('%MVNW_PASSWORD%'))) {"^
+ "$webclient.Credentials = new-object System.Net.NetworkCredential('%MVNW_USERNAME%', '%MVNW_PASSWORD%');"^
+ "}"^
+ "[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $webclient.DownloadFile('%DOWNLOAD_URL%', '%WRAPPER_JAR%')"^
+ "}"
+ if "%MVNW_VERBOSE%" == "true" (
+ echo Finished downloading %WRAPPER_JAR%
+ )
+)
+@REM End of extension
+
+@REM Provide a "standardized" way to retrieve the CLI args that will
+@REM work with both Windows and non-Windows executions.
+set MAVEN_CMD_LINE_ARGS=%*
+
+%MAVEN_JAVA_EXE% ^
+ %JVM_CONFIG_MAVEN_PROPS% ^
+ %MAVEN_OPTS% ^
+ %MAVEN_DEBUG_OPTS% ^
+ -classpath %WRAPPER_JAR% ^
+ "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" ^
+ %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %*
+if ERRORLEVEL 1 goto error
+goto end
+
+:error
+set ERROR_CODE=1
+
+:end
+@endlocal & set ERROR_CODE=%ERROR_CODE%
+
+if not "%MAVEN_SKIP_RC%"=="" goto skipRcPost
+@REM check for post script, once with legacy .bat ending and once with .cmd ending
+if exist "%USERPROFILE%\mavenrc_post.bat" call "%USERPROFILE%\mavenrc_post.bat"
+if exist "%USERPROFILE%\mavenrc_post.cmd" call "%USERPROFILE%\mavenrc_post.cmd"
+:skipRcPost
+
+@REM pause the script if MAVEN_BATCH_PAUSE is set to 'on'
+if "%MAVEN_BATCH_PAUSE%"=="on" pause
+
+if "%MAVEN_TERMINATE_CMD%"=="on" exit %ERROR_CODE%
+
+cmd /C exit /B %ERROR_CODE%
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/pom.xml b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/pom.xml
new file mode 100644
index 0000000..0563da7
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/pom.xml
@@ -0,0 +1,172 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.springframework.boot</groupId>
+    <artifactId>spring-boot-starter-parent</artifactId>
+    <version>2.7.1</version>
+    <relativePath/>
+  </parent>
+  <groupId>com.microsoft.cse</groupId>
+  <artifactId>cargoprocessing.api</artifactId>
+  <version>0.0.1-SNAPSHOT</version>
+  <name>cargoprocessing-api</name>
+  <description>Ingestion API for the Service Bus Messaging scenario</description>
+
+  <properties>
+    <java.version>17</java.version>
+    <commons.lang.version>3.12.0</commons.lang.version>
+    <junit.version>5.8.2</junit.version>
+    <applicationinsights.web.version>3.4.7</applicationinsights.web.version>
+    <applicationinsights.agent.version>3.4.7</applicationinsights.agent.version>
+    <lombok.version>LATEST</lombok.version>
+    <json.schema.validation.version>1.0.71</json.schema.validation.version>
+    <servicebus.version>7.13.0</servicebus.version>
+    <commons.io.version>2.11.0</commons.io.version>
+    <maven.failsafe.plugin.version>3.0.0-M5</maven.failsafe.plugin.version>
+    <maven.dependency.plugin.version>3.3.0</maven.dependency.plugin.version>
+    <spring.cloud.azure.version>4.4.0</spring.cloud.azure.version>
+    <spring.boot.version>2.7.3</spring.boot.version>
+    <skipITs>true</skipITs>
+  </properties>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.springframework.boot</groupId>
+      <artifactId>spring-boot-starter-web</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.springframework.boot</groupId>
+      <artifactId>spring-boot-starter-test</artifactId>
+      <scope>test</scope>
+    </dependency>
+
+    <dependency>
+      <groupId>org.springframework.boot</groupId>
+      <artifactId>spring-boot-starter-webflux</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.springframework.boot</groupId>
+      <artifactId>spring-boot-starter-actuator</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>com.azure</groupId>
+      <artifactId>azure-messaging-servicebus</artifactId>
+      <version>${servicebus.version}</version>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.commons</groupId>
+      <artifactId>commons-lang3</artifactId>
+      <version>${commons.lang.version}</version>
+    </dependency>
+
+    <dependency>
+      <groupId>org.projectlombok</groupId>
+      <artifactId>lombok</artifactId>
+      <version>${lombok.version}</version>
+      <scope>provided</scope>
+    </dependency>
+
+    <dependency>
+      <groupId>com.networknt</groupId>
+      <artifactId>json-schema-validator</artifactId>
+      <version>${json.schema.validation.version}</version>
+    </dependency>
+
+    <dependency>
+      <groupId>commons-io</groupId>
+      <artifactId>commons-io</artifactId>
+      <version>${commons.io.version}</version>
+      <scope>test</scope>
+    </dependency>
+
+    <dependency>
+      <groupId>io.opentelemetry</groupId>
+      <artifactId>opentelemetry-api</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>com.microsoft.azure</groupId>
+      <artifactId>applicationinsights-web</artifactId>
+      <version>${applicationinsights.web.version}</version>
+    </dependency>
+
+    <dependency>
+      <groupId>com.microsoft.azure</groupId>
+      <artifactId>applicationinsights-agent</artifactId>
+      <version>${applicationinsights.agent.version}</version>
+    </dependency>
+  </dependencies>
+
+  <dependencyManagement>
+    <dependencies>
+      <dependency>
+        <groupId>io.opentelemetry</groupId>
+        <artifactId>opentelemetry-bom</artifactId>
+        <version>1.22.0</version>
+        <type>pom</type>
+        <scope>import</scope>
+      </dependency>
+      <dependency>
+        <groupId>org.springframework.boot</groupId>
+        <artifactId>spring-boot-dependencies</artifactId>
+        <version>${spring.boot.version}</version>
+        <type>pom</type>
+        <scope>import</scope>
+      </dependency>
+      <dependency>
+        <groupId>com.azure.spring</groupId>
+        <artifactId>spring-cloud-azure-dependencies</artifactId>
+        <version>${spring.cloud.azure.version}</version>
+        <type>pom</type>
+        <scope>import</scope>
+      </dependency>
+    </dependencies>
+  </dependencyManagement>
+
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.springframework.boot</groupId>
+        <artifactId>spring-boot-maven-plugin</artifactId>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-dependency-plugin</artifactId>
+        <version>${maven.dependency.plugin.version}</version>
+        <executions>
+          <execution>
+            <id>copy</id>
+            <phase>compile</phase>
+            <goals>
+              <goal>copy</goal>
+            </goals>
+            <configuration>
+              <artifactItems>
+                <artifactItem>
+                  <groupId>com.microsoft.azure</groupId>
+                  <artifactId>applicationinsights-agent</artifactId>
+                  <version>${applicationinsights.agent.version}</version>
+                  <destFileName>applicationinsights-agent-${applicationinsights.agent.version}.jar</destFileName>
+                </artifactItem>
+              </artifactItems>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-failsafe-plugin</artifactId>
+        <configuration>
+          <skipITs>${skipITs}</skipITs>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
+
+</project>
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/Application.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/Application.java
new file mode 100644
index 0000000..dd885dc
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/Application.java
@@ -0,0 +1,13 @@
+package com.microsoft.cse.cargoprocessing.api;
+
+import org.springframework.boot.SpringApplication;
+import org.springframework.boot.autoconfigure.SpringBootApplication;
+
+@SpringBootApplication
+public class Application {
+
+ public static void main(String[] args) {
+ SpringApplication.run(Application.class, args);
+ }
+
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/Exceptions/JsonValidationException.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/Exceptions/JsonValidationException.java
new file mode 100644
index 0000000..c0074ee
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/Exceptions/JsonValidationException.java
@@ -0,0 +1,33 @@
+package com.microsoft.cse.cargoprocessing.api.Exceptions;
+
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import com.microsoft.cse.cargoprocessing.api.controllers.ExceptionHandling.ErrorCodes;
+import com.networknt.schema.ValidationMessage;
+
+import lombok.Data;
+import lombok.EqualsAndHashCode;
+
+@Data
+@EqualsAndHashCode(callSuper=false)
+public class JsonValidationException extends RuntimeException {
+ private Set<ValidationMessage> validationMessages;
+ private String failureCode;
+
+ public JsonValidationException(Throwable cause) {
+ super(cause);
+ this.failureCode = ErrorCodes.FAILS_SERIALIZATION;
+ }
+
+ public JsonValidationException(Set<ValidationMessage> validationMessages) {
+ super(String.format("Json failed validation with the following errors:%n%n* %s",
+ validationMessages
+ .stream()
+ .map(v -> String.format("%s: {%s} %s", v.getCode(), v.getPath(), v.getMessage()))
+ .collect(Collectors.joining(String.format("%n* ")))));
+
+ this.validationMessages = validationMessages;
+ this.failureCode = ErrorCodes.FAILS_SCHEMA_VALIDATION;
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/ChaosMonkey.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/ChaosMonkey.java
new file mode 100644
index 0000000..36c0151
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/ChaosMonkey.java
@@ -0,0 +1,15 @@
+package com.microsoft.cse.cargoprocessing.api.chaos;
+
+import java.util.Map;
+
+import com.microsoft.cse.cargoprocessing.api.models.Cargo;
+
+public interface ChaosMonkey {
+ boolean CanWakeTheMonkey(Cargo cargo);
+
+ void WakeTheMonkey(Map<String, Object> parameters);
+
+ void RattleTheCage(Cargo cargo, Map<String, Object> parameters);
+
+ void RattleTheCage(Cargo cargo);
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/BaseMonkey.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/BaseMonkey.java
new file mode 100644
index 0000000..baa8c87
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/BaseMonkey.java
@@ -0,0 +1,41 @@
+package com.microsoft.cse.cargoprocessing.api.chaos.impl;
+
+import java.util.Map;
+
+import com.microsoft.cse.cargoprocessing.api.chaos.ChaosMonkey;
+import com.microsoft.cse.cargoprocessing.api.models.Cargo;
+import com.microsoft.cse.cargoprocessing.api.models.Port;
+
+abstract public class BaseMonkey implements ChaosMonkey {
+ private final String chaosTrigger;
+ private final String SERVICE_TRIGGER = "cargo-processing-api";
+
+ public BaseMonkey(String chaosTrigger) {
+ this.chaosTrigger = chaosTrigger;
+ }
+
+ @Override
+ public boolean CanWakeTheMonkey(Cargo cargo) {
+ Port portInfo = cargo.getPort();
+ return portInfo.getSource().equalsIgnoreCase(SERVICE_TRIGGER) &&
+ portInfo.getDestination().equalsIgnoreCase(chaosTrigger);
+ }
+
+ @SuppressWarnings("unchecked")
+ protected static <T> T getParm(Map<String, Object> map, String key, T defaultValue) {
+ return (map.containsKey(key)) ? (T) map.get(key) : defaultValue;
+ }
+
+ abstract public void WakeTheMonkey(Map<String, Object> parameters);
+
+ @Override
+ public void RattleTheCage(Cargo cargo, Map<String, Object> parameters) {
+ if (CanWakeTheMonkey(cargo))
+ WakeTheMonkey(parameters);
+ }
+
+ @Override
+ public void RattleTheCage(Cargo cargo) {
+ RattleTheCage(cargo, null);
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/ChaosMonkeyException.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/ChaosMonkeyException.java
new file mode 100644
index 0000000..bdc02fa
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/ChaosMonkeyException.java
@@ -0,0 +1,7 @@
+package com.microsoft.cse.cargoprocessing.api.chaos.impl;
+
+public class ChaosMonkeyException extends RuntimeException {
+ public ChaosMonkeyException(String chaosType) {
+ super(String.format("%s Chaos Monkey reeking havoc.", chaosType));
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/DependantApiFailureMonkey.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/DependantApiFailureMonkey.java
new file mode 100644
index 0000000..67d8b31
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/DependantApiFailureMonkey.java
@@ -0,0 +1,17 @@
+package com.microsoft.cse.cargoprocessing.api.chaos.impl;
+
+import java.util.Map;
+
+import org.springframework.stereotype.Service;
+
+@Service
+public class DependantApiFailureMonkey extends BaseMonkey {
+ public DependantApiFailureMonkey() {
+ super("operations-api-failure");
+ }
+
+ @Override
+ public void WakeTheMonkey(Map<String, Object> parameters) {
+ throw new ChaosMonkeyException("Dependant Api Failing");
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/ProcessKillingMonkey.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/ProcessKillingMonkey.java
new file mode 100644
index 0000000..fa66bb8
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/ProcessKillingMonkey.java
@@ -0,0 +1,18 @@
+package com.microsoft.cse.cargoprocessing.api.chaos.impl;
+
+import java.util.Map;
+
+import org.springframework.stereotype.Service;
+
+@Service
+public class ProcessKillingMonkey extends BaseMonkey {
+ public ProcessKillingMonkey() {
+ super("process-ending");
+ }
+
+ @Override
+ public void WakeTheMonkey(Map<String, Object> parameters) {
+ // Completely Kill the application
+ System.exit(-1);
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/ServiceBusKillingMonkey.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/ServiceBusKillingMonkey.java
new file mode 100644
index 0000000..b29517f
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/ServiceBusKillingMonkey.java
@@ -0,0 +1,22 @@
+package com.microsoft.cse.cargoprocessing.api.chaos.impl;
+
+import java.util.Map;
+
+import org.springframework.stereotype.Service;
+
+import com.azure.messaging.servicebus.ServiceBusSenderClient;
+
+@Service
+public class ServiceBusKillingMonkey extends BaseMonkey {
+ public ServiceBusKillingMonkey() {
+ super("service-bus-failure");
+ }
+
+ @Override
+ public void WakeTheMonkey(Map<String, Object> parameters) {
+ // Oh, let's just close that sender before trying to use it, what could possibly
+ // go wrong?
+ ServiceBusSenderClient sender = getParm(parameters, "sender", null);
+ sender.close();
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/ServiceBusThrollingMonkey.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/ServiceBusThrollingMonkey.java
new file mode 100644
index 0000000..5f6ab0c
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/chaos/impl/ServiceBusThrollingMonkey.java
@@ -0,0 +1,50 @@
+package com.microsoft.cse.cargoprocessing.api.chaos.impl;
+
+import java.util.Map;
+
+import org.springframework.beans.factory.annotation.Value;
+import org.springframework.stereotype.Service;
+
+import com.azure.messaging.servicebus.ServiceBusClientBuilder;
+import com.azure.messaging.servicebus.ServiceBusMessage;
+import com.azure.messaging.servicebus.ServiceBusSenderClient;
+
+import reactor.core.publisher.Flux;
+import reactor.core.publisher.Mono;
+import reactor.core.scheduler.Schedulers;
+
+@Service
+public class ServiceBusThrollingMonkey extends BaseMonkey {
+ public ServiceBusThrollingMonkey() {
+ super("service-bus-throttling");
+ }
+
+ @Value("${accelerator.queue-name:defaultValue}")
+ private String queueName;
+ @Value("${servicebus.connection-string:defaultValue}")
+ private String connectionString;
+
+ @Override
+ public void WakeTheMonkey(Map<String, Object> parameters) {
+ ServiceBusSenderClient sender = new ServiceBusClientBuilder()
+ .connectionString(connectionString)
+ .sender()
+ .queueName(queueName)
+ .buildClient();
+
+ ServiceBusMessage message = getParm(parameters, "message", null);
+
+ // Let's slam the service bus with that message a lot, what could go wrong with
+ // that?
+ // TODO: Not able to get this to actually cause the service bus to throttle the
+ // requests. Need to revisit before calling this done.
+ Flux.just(1)
+ .repeat(10000)
+ .flatMap(i -> Mono.fromCallable(() -> {
+ sender.sendMessage(message);
+ return i;
+ }))
+ .subscribeOn(Schedulers.boundedElastic(), true)
+ .subscribe();
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/CargoController.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/CargoController.java
new file mode 100644
index 0000000..f056ecc
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/CargoController.java
@@ -0,0 +1,143 @@
+package com.microsoft.cse.cargoprocessing.api.controllers;
+
+import org.springframework.web.bind.annotation.RestController;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.microsoft.cse.cargoprocessing.api.Exceptions.JsonValidationException;
+import com.microsoft.cse.cargoprocessing.api.chaos.impl.DependantApiFailureMonkey;
+import com.microsoft.cse.cargoprocessing.api.chaos.impl.ProcessKillingMonkey;
+import com.microsoft.cse.cargoprocessing.api.models.Cargo;
+import com.microsoft.cse.cargoprocessing.api.models.MessageEnvelope;
+import com.microsoft.cse.cargoprocessing.api.services.CargoPublisher;
+import com.microsoft.cse.cargoprocessing.api.services.OperationPublisher;
+import com.microsoft.cse.cargoprocessing.api.services.SchemaValidator;
+
+import lombok.SneakyThrows;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Timestamp;
+import java.util.Map;
+import java.util.UUID;
+
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.http.HttpHeaders;
+import org.springframework.http.ResponseEntity;
+import org.springframework.web.bind.annotation.PathVariable;
+import org.springframework.web.bind.annotation.PostMapping;
+import org.springframework.web.bind.annotation.PutMapping;
+import org.springframework.web.bind.annotation.RequestBody;
+import org.springframework.web.bind.annotation.RequestHeader;
+import org.springframework.web.bind.annotation.RequestMapping;
+
+@RestController
+@RequestMapping("cargo")
+public class CargoController {
+ @Autowired
+ private CargoPublisher publisher;
+ @Autowired
+ private SchemaValidator validator;
+ @Autowired
+ private OperationPublisher operationPublisher;
+ @Autowired
+ private DependantApiFailureMonkey apiFailingMonkey;
+ @Autowired
+ private ProcessKillingMonkey processKillingMonkey;
+
+ private static final Logger logger = LoggerFactory.getLogger(CargoController.class);
+
+ private static final ObjectMapper objectMapper = new ObjectMapper();
+
+ @PutMapping("/{cargoId}")
+ public ResponseEntity<Cargo> createCargo(@PathVariable String cargoId, @RequestBody String cargoBody,
+ @RequestHeader Map<String, String> headers) {
+ Cargo cargo = getJsonCargo(cargoBody);
+
+ // Let's see if we need to add a little chaos
+ processKillingMonkey.RattleTheCage(cargo);
+
+ cargo.setId(cargoId);
+ logger.info("Cargo body loaded for cargo id: {}", cargoId);
+
+ return processCargo(cargo, getOperationId(headers, cargo));
+ }
+
+ private String getOperationId(Map<String, String> headers, Cargo cargo) {
+ String key = "operation-id";
+ if (headers.containsKey(key)) {
+ return headers.get(key);
+ }
+ // If the client doesn't provide an operation-id, generate a
+ // deterministic UUID based on the cargo object provided
+ return generateId(cargo);
+ }
+
+ @PostMapping("/")
+ public ResponseEntity<Cargo> createCargo(@RequestBody String cargoBody, @RequestHeader Map<String, String> headers) {
+ Cargo cargo = getJsonCargo(cargoBody);
+
+ // Let's see if we need to add a little chaos
+ processKillingMonkey.RattleTheCage(cargo);
+
+ cargo.setId(generateId(cargo));
+ logger.info("Cargo body loaded for cargo id: {}", cargo.getId());
+
+ // Take note that the cargo object's id has been set at this point,
+ // so the UUID that is generated for the operation id
+ // (when the client doesn't provide one) will be
+ // different from the UUID generated for the cargo object
+ return processCargo(cargo, getOperationId(headers, cargo));
+ }
+
+ @SneakyThrows
+ private String generateId(Cargo cargo) {
+ // Get a deterministic UUID based on the cargo object provided
+ String cargoString = objectMapper.writeValueAsString(cargo);
+
+ return UUID.nameUUIDFromBytes(cargoString.getBytes()).toString();
+ }
+
+ private ResponseEntity<Cargo> processCargo(Cargo cargo, String operationId) {
+ // Let's see if we need to add a little chaos
+ apiFailingMonkey.RattleTheCage(cargo);
+
+ Boolean isNewOperation = operationPublisher.isNewOperation(operationId).block();
+
+ // To ensure we don't have duplicate requests in play:
+ // If the operation was created in the previous call, then we haven't
+ // received this request before, so we will process it.
+ if (isNewOperation) {
+ logger.info("New Cargo request, processing cargo id: {}", cargo.getId());
+ cargo.setTimestamp(new Timestamp(System.currentTimeMillis()));
+ publisher.publishCargo(new MessageEnvelope(cargo, operationId));
+
+ logger.info("Cargo id {} published", cargo.getId());
+ }
+
+ return ResponseEntity.accepted()
+ .headers(getHeaders(operationId))
+ .body(cargo);
+ }
+
+ private HttpHeaders getHeaders(String operationId) {
+ HttpHeaders headers = new HttpHeaders();
+ headers.add("operation-id", operationId);
+ return headers;
+ }
+
+ private Cargo getJsonCargo(String cargo) {
+ try {
+ logger.info("Validating cargo schema");
+ JsonNode jsonCargo = objectMapper.readTree(cargo);
+ validator.validate("cargo", jsonCargo);
+
+ return objectMapper.treeToValue(jsonCargo, Cargo.class);
+
+ } catch (JsonProcessingException e) {
+ throw new JsonValidationException(e);
+ }
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/ExceptionHandling/Error.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/ExceptionHandling/Error.java
new file mode 100644
index 0000000..49d5c81
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/ExceptionHandling/Error.java
@@ -0,0 +1,14 @@
+package com.microsoft.cse.cargoprocessing.api.controllers.ExceptionHandling;
+
+import java.io.Serializable;
+
+import lombok.Data;
+
+@Data
+public class Error implements Serializable {
+ private ErrorDetail error;
+
+ public Error(String code, String message, String target, InnerError innerError){
+ error = new ErrorDetail(code, message, target, innerError);
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/ExceptionHandling/ErrorCodes.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/ExceptionHandling/ErrorCodes.java
new file mode 100644
index 0000000..d6e21be
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/ExceptionHandling/ErrorCodes.java
@@ -0,0 +1,11 @@
+package com.microsoft.cse.cargoprocessing.api.controllers.ExceptionHandling;
+
+public class ErrorCodes {
+ private ErrorCodes() { throw new IllegalStateException("Utility class, should not be constructed"); }
+
+ public static final String INVALID_JSON = "InvalidJson";
+ public static final String FAILS_SCHEMA_VALIDATION = "InvalidJson-SchemaValidationFailure";
+ public static final String FAILS_SERIALIZATION = "InvalidJson-UnableToSerialize";
+
+ public static final String INTERNAL_SERVER_ERROR = "InternalServerError";
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/ExceptionHandling/ErrorDetail.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/ExceptionHandling/ErrorDetail.java
new file mode 100644
index 0000000..12cc31b
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/ExceptionHandling/ErrorDetail.java
@@ -0,0 +1,20 @@
+package com.microsoft.cse.cargoprocessing.api.controllers.ExceptionHandling;
+
+import java.io.Serializable;
+
+import lombok.Data;
+
+@Data
+public class ErrorDetail implements Serializable {
+ private String code;
+ private String message;
+ private String target;
+ private InnerError innerError;
+
+ public ErrorDetail(String code, String message, String target, InnerError innerError){
+ this.code = code;
+ this.innerError = innerError;
+ this.target = target;
+ this.message = message;
+ }
+}
diff --git a/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/ExceptionHandling/ExceptionAdvisor.java b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/ExceptionHandling/ExceptionAdvisor.java
new file mode 100644
index 0000000..5d2a2f3
--- /dev/null
+++ b/accelerators/aks-sb-azmonitor-microservices/src/cargo-processing-api/src/main/java/com/microsoft/cse/cargoprocessing/api/controllers/ExceptionHandling/ExceptionAdvisor.java
@@ -0,0 +1,52 @@
+package com.microsoft.cse.cargoprocessing.api.controllers.ExceptionHandling;
+
+import org.springframework.web.bind.annotation.ControllerAdvice;
+import org.springframework.web.bind.annotation.ExceptionHandler;
+import org.springframework.web.context.request.WebRequest;
+import org.springframework.http.HttpHeaders;
+import org.springframework.http.HttpStatus;
+import org.springframework.http.ResponseEntity;
+import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;
+
+import com.microsoft.cse.cargoprocessing.api.Exceptions.JsonValidationException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@ControllerAdvice
+public class ExceptionAdvisor extends ResponseEntityExceptionHandler {
+
+ private static final Logger logger = LoggerFactory.getLogger(ExceptionAdvisor.class);
+
+ @ExceptionHandler(JsonValidationException.class)
+ protected ResponseEntity