- Install the latest Preview VS.
- Be sure to install the `Azure Development => .NET Aspire SDK (Preview)` optional workload in the VS installer
- Be sure to install the `ASP.NET and web development => .NET 8.0/9.0 WebAssembly Build Tools`
- Install Docker Desktop: https://www.docker.com/products/docker-desktop
- Configure git to support long paths:

```shell
git config --system core.longpaths true # you will need an elevated shell for this one
git config --global core.longpaths true
```
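To confirm the setting took effect, you can read it back with `git config --get`:

```shell
# Set the global flag (same command as in the step above; idempotent)
git config --global core.longpaths true
# Read it back; prints "true" once configured
git config --get core.longpaths
```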
- Install SQL Server Express: https://www.microsoft.com/en-us/sql-server/sql-server-downloads
- Install the Entity Framework Core CLI by running `dotnet tool install --global dotnet-ef`
- Build the `src\Maestro\Maestro.Data\Maestro.Data.csproj` project (either from the console or from the IDE)
- From the `src\Maestro\Maestro.Data` project directory, run `dotnet ef --msbuildprojectextensionspath <full path to obj dir for Maestro repo (e.g. "C:\arcade-services\artifacts\obj\Maestro.Data\")> database update`
  - Note that the generated files are in the root artifacts folder, not the artifacts folder within the `Maestro.Data` project folder
- Join the `maestro-auth-test` org in GitHub (you will need to ask someone to manually add you to the org)
- Make sure you can read the `ProductConstructionDev` keyvault. If you can't, ask someone to add you to the keyvault
- In SQL Server Object Explorer in Visual Studio, find the local SQLExpress database for the build asset registry and populate the Repositories table with the following rows:
```sql
INSERT INTO [Repositories] (RepositoryName, InstallationId) VALUES
('https://github.com/maestro-auth-test/maestro-test', 289474),
('https://github.com/maestro-auth-test/maestro-test2', 289474),
('https://github.com/maestro-auth-test/maestro-test3', 289474),
('https://github.com/maestro-auth-test/maestro-test-vmr', 289474),
('https://github.com/maestro-auth-test/arcade', 289474),
('https://github.com/maestro-auth-test/dnceng-vmr', 289474);
```

When running locally:
- The service will attempt to read secrets from the `ProductConstructionDev` KeyVault using your Microsoft credentials. If you're having authentication issues, double check the following:
  - In VS, go to `Tools -> Options -> Azure Service Authentication -> Account Selection` and make sure your corp account is selected
  - Check your environment variables; you might have `AZURE_TENANT_ID`, `AZURE_CLIENT_ID` and `AZURE_CLIENT_SECRET` set, so that `DefaultAzureCredential` is attempting to use `EnvironmentCredential` for an app that doesn't have access to the dev KV.
- The service is configured to use the same SQL Express database that Maestro uses. To set it up, follow the instructions above.
- Configure the `ProductConstructionService.AppHost/Properties/launchSettings.json` file:
  - `TmpPath`: path to the TMP folder that the service will use to clone other repos (like runtime). If you've already worked with the VMR and have the TMP VMR folder on your machine, you can point the service there and it will reuse the cloned repos you already have.
- AppHost's `launchSettings.json` config should look something like this (fill in the VMR paths):

```json
{
  "$schema": "http://json.schemastore.org/launchsettings.json",
  "profiles": {
    "PCS (local)": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "https://localhost:18848",
      "environmentVariables": {
        "TmpPath": "D:\\tmp\\",
        "DOTNET_DASHBOARD_OTLP_ENDPOINT_URL": "https://localhost:19265",
        "DOTNET_RESOURCE_SERVICE_ENDPOINT_URL": "https://localhost:20130"
      }
    }
  }
}
```

- Modify the `ProductConstructionService.Api/Properties/launchSettings.json` config to look something like this:

```json
{
  "$schema": "http://json.schemastore.org/launchsettings.json",
  "profiles": {
    "ProductConstructionService.Api": {
      "commandName": "Project",
      "launchBrowser": true,
      "applicationUrl": "https://localhost:53180",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "TmpPath": "D:\\tmp\\"
      }
    }
  }
}
```
To run the Product Construction Service locally:
- Start Docker Desktop.
- Set the `ProductConstructionService.AppHost` as the Startup Project, and run it.
In order to debug the Blazor project, you need to run the server (the ProductConstructionService.AppHost project) and the front-end separately. The front-end will be served from a different port but will still be able to communicate with the local server.
- Start Docker
- Run the `ProductConstructionService.AppHost` project (without debugging, or using `dotnet run` from `src\ProductConstructionService\ProductConstructionService.AppHost`)
- Debug the `ProductConstructionService.BarViz` project
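The two-terminal setup above can be sketched as follows (the `--project` paths follow the repo layout described in this doc; the exact ports each project binds to come from their respective `launchSettings.json` files):

```shell
# Terminal 1: run the backend without the debugger attached
dotnet run --project src\ProductConstructionService\ProductConstructionService.AppHost

# Terminal 2: run the Blazor front-end separately
# (alternatively, start it with "Debug" on the ProductConstructionService.BarViz project in VS)
dotnet run --project src\ProductConstructionService\ProductConstructionService.BarViz
```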
It is also recommended to turn on the API redirection (in `src\ProductConstructionService\ProductConstructionService.Api\appsettings.Development.json`) to point to production so that the front-end has data to work with:
```json
{
  "ApiRedirect": {
    "Uri": "https://maestro.dot.net/"
  }
}
```

After you have completed the steps to run the service locally, you can run the scenario tests against your local instance too:
- Right-click the `ProductConstructionService.ScenarioTests` project, select `Manage User Secrets` and add the following content:

```json
{
  "PCS_BASEURI": "https://localhost:53180/",
  "GITHUB_TOKEN": "[FILL SAME TOKEN AS YOU WOULD FOR DARC]",
  "DARC_PACKAGE_SOURCE": "[full path to your arcade-services]\\artifacts\\packages\\Debug\\NonShipping",
  "DARC_VERSION": "0.0.99-dev"
}
```

- Build the Darc tool locally (it is run by the scenario tests):

```shell
cd src\Microsoft.DotNet.Darc\Darc
dotnet pack -c Debug
```
- Open two Visual Studio instances.
- In the first instance, run the PCS service (instructions above).
- In the second instance, run any of the `ProductConstructionService.ScenarioTests` tests.
- After you have run the tests or the service locally, your local git credential manager might be populated with the `dotnet-maestro-bot` account. You can log out of it by running `git credential-manager github logout dotnet-maestro-bot`.
Run the `provision.ps1` script, giving it the name of the subscription you want to create the service in. Note that keyvault and container registry names have to be unique across Azure, so you'll have to change these, or delete and purge the existing ones.
This will create all of the necessary Azure resources.
We're using a Managed Identity to authenticate PCS to BAR. You'll need to run the following SQL queries to enable this (we can't run SQL from bicep):
```sql
CREATE USER [ManagedIdentityName] FROM EXTERNAL PROVIDER
ALTER ROLE db_datareader ADD MEMBER [ManagedIdentityName]
ALTER ROLE db_datawriter ADD MEMBER [ManagedIdentityName]
```
If the service is being recreated and the same Managed Identity name is reused, you will have to drop the old MI from the BAR, and then run the SQL queries above.
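If you prefer running these queries from a terminal instead of a SQL client, a sketch using `sqlcmd` with Azure AD authentication (`-G`) might look like the following. The server, database, and identity names are placeholders, not the real resource names:

```shell
# Hypothetical sketch: <bar-sql-server>, <bar-database> and the identity names are placeholders.
# Drops a previously-registered MI (if any), then registers the new one and grants read/write.
sqlcmd -S "<bar-sql-server>.database.windows.net" -d "<bar-database>" -G \
  -Q "DROP USER IF EXISTS [<OldManagedIdentityName>]; \
      CREATE USER [<NewManagedIdentityName>] FROM EXTERNAL PROVIDER; \
      ALTER ROLE db_datareader ADD MEMBER [<NewManagedIdentityName>]; \
      ALTER ROLE db_datawriter ADD MEMBER [<NewManagedIdentityName>];"
```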
Once the resources are created and configured:
- Go to the newly created User Assigned Managed Identity (the one that's assigned to the container app, not the deployment one)
- Copy the Client ID and paste it into the correct appconfig.json under `ManagedIdentityClientId`. Do this for all services (PCS, Subscription Triggerer, LongestBuildPathUpdater and FeedCleaner)
- Add the PCS and FeedCleaner identities as users to AzDO so they can get AzDO tokens (you'll need a SAW for this). You might have to remove the old user identity before doing this
  - The PCS identity needs to be able to manage code / pull requests and manage feeds (this is done in the artifact section).
  - The FeedCleaner identity needs to be in the `Feed Managers` permissions group in `dnceng/internal`
- Update the `ProductConstructionServiceDeploymentProd` (or `ProductConstructionServiceDeploymentInt`) Service Connection with the new MI information (you'll also have to create a Federated Credential in the MI)
- Update the appropriate PCS URI in `ProductConstructionServiceApiOptions` and `MaestroApiOptions`.
We're not able to configure a few Kusto things in bicep:
- Give the PCS Managed Identity the permissions it needs:
  - Go to the Kusto Cluster and select the database you want the MI to have access to
  - Go to Permissions -> Add -> Viewer and select the newly created PCS Managed Identity
Give the Deployment Identity the Reader role for the whole subscription.
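If you'd rather do this from the CLI than the portal, the standard `az role assignment create` command can grant the role; the client ID and subscription ID below are placeholders:

```shell
# Grant the deployment identity Reader on the whole subscription (IDs are placeholders)
az role assignment create \
  --assignee "<deployment-identity-client-id>" \
  --role Reader \
  --scope "/subscriptions/<subscription-id>"
```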
The last part is setting up the pipeline:
- Make sure all of the resources referenced in the yaml have the correct names
- Make sure the variable group referenced in the yaml points to the new Key Vault
When creating a Container App with a bicep template, we have to give it some kind of boilerplate docker image, since our repository will be empty at the time of creation. Since we have a custom startup health probe, this revision will fail to activate. After the first run of the pipeline (deployment), make sure to deactivate the first, default revision.
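Deactivating that first boilerplate revision can be done in the portal, or with the standard `az containerapp revision` command. The resource names below assume the prod resources mentioned later in this doc, and the revision name is a placeholder:

```shell
# Deactivate the failed default revision created from the boilerplate image
az containerapp revision deactivate \
  --name product-construction-prod \
  --resource-group product-construction-service \
  --revision "<name-of-the-default-revision>"
```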
In case you need to change the database model (e.g. add a column to a table), follow the usual EF Core code-first migration steps. In practice this means the following:
- Make the change to the model classes in the `Maestro.Data` project
- Install the EF Core CLI: `dotnet tool install --global dotnet-ef`
- Go to the `src\Maestro\Maestro.Data` project directory and build the project:

```shell
cd src\Maestro\Maestro.Data
dotnet build
```

- Run the following command to create a new migration:

```shell
dotnet ef --msbuildprojectextensionspath <full path to obj dir for Maestro repo (e.g. "C:\arcade-services\artifacts\obj\Maestro.Data")> migrations add <migration name>
```

The steps above will produce a new migration file which will later be processed by the CI pipeline on the real database.
Test this migration locally by running:
```shell
dotnet ef --msbuildprojectextensionspath <full path to obj dir for Maestro repo (e.g. "C:\arcade-services\artifacts\obj\Maestro.Data")> database update
```

You should see something like:

```
Build started...
Build succeeded.
Applying migration '20240201144006_<migration name>'.
Done.
```
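If the generated migration doesn't look right, it can be removed again before committing, using the standard EF Core command (with the same `--msbuildprojectextensionspath` argument as above):

```shell
# Removes the most recent migration that has not yet been applied/committed
dotnet ef --msbuildprojectextensionspath <full path to obj dir for Maestro repo> migrations remove
```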
If your model change alters the public API of the service, also update the `Maestro.Client`.
`Microsoft.DotNet.ProductConstructionService.Client` is an auto-generated client library for the PCS API. It is generated using Swagger Codegen, which parses the `swagger.json` file.
The `swagger.json` file is accessible at `/swagger.json` when the PCS API is running.
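For example, with the API running locally on the port configured earlier, you could download the spec with curl (`-k` skips validation of the local dev certificate):

```shell
# Fetch the OpenAPI spec from the locally running PCS API
curl -k https://localhost:53180/swagger.json -o swagger.json
```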
If you changed the API (e.g. changed an endpoint, a model coming from the API, etc.), you need to regenerate the `Microsoft.DotNet.ProductConstructionService.Client` library.
If you need to update the client library, follow these steps:
- Change the model/endpoint/..
- Change `src\ProductConstructionService\Client\src\Microsoft.DotNet.Microsoft.DotNet.ProductConstructionService.Client.csproj` and point the `SwaggerDocumentUri` to `https://localhost:53180/swagger.json`.
- Start the Maestro application locally and verify you can access the `swagger.json` file. You can then stop debugging; the local SF cluster will keep running.
- Run `src\ProductConstructionService\Microsoft.DotNet.ProductConstructionService.Client\src\generate-client.cmd`, which will regenerate the C# classes.
- You might see code-style changes in the C# classes as the SDK of the repo has now been updated. You can quickly use Visual Studio's refactorings to fix those and minimize the code changes in this project.
The Product Construction Service uses the Blue-Green deployment approach, implemented in the ProductConstructionService.Deployment script. The script does the following:
- Figures out the label that should be assigned to the new revision and removes it from the old, inactive revision.
- Tells the currently active revision to stop processing new jobs and waits for the in-progress ones to finish.
- Deploys the new revision.
- Waits for the Health Probes to be successful. We currently have two health probes:
- A Startup probe, that is run after the service is started. This probe just tests if the service is responsive.
- A Readiness probe that waits for the service to fully initialize. Currently, this probe just waits for the VMR to be cloned on the containerapp disk.
- Assigns the correct label to the new revision, and switches all traffic to it.
- Starts the JobProcessor once the service is ready.
- If there are any failures during the deployment, the old revision is started, and the deployment is cleaned up.
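The label/traffic switch step can be illustrated with the standard `az containerapp ingress traffic` command. The label names here are hypothetical; the real deployment script drives this programmatically:

```shell
# Route all traffic to the revision labeled "green" (label names are hypothetical)
az containerapp ingress traffic set \
  --name product-construction-prod \
  --resource-group product-construction-service \
  --label-weight green=100 blue=0
```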
While we're still in the early development phase, we want to be able to publish to production easily (as it's not used by anything). We've temporarily made it so that any branch that starts with `production/` or `production-` deploys to prod.
If you need to change the configuration of the Container App (e.g. increase disk size, memory, environment variables, etc.), you have to be careful not to do it from the Azure Portal. We are on a preview feature allowing for larger disk space, and changes made in the portal will revert this setting to the publicly known maximum of 8GB (while we need around 50GB).
To correctly change the configuration, you need to use `az containerapp` commands like so:

```shell
az login
az account set --subscription fbd6122a-9ad3-42e4-976e-bccb82486856 # prod PCS
az containerapp show --name product-construction-prod --resource-group product-construction-service --output yaml
```

You can store the above into a file (or use `| clip` and save it), then edit the settings you want to change.
Then change the `revisionSuffix` to something unique (e.g. add `-dev`).
When your new configuration is ready, make sure you stop the background job processing:
```shell
cd arcade-services\tools\ProductConstructionService.Cli
dotnet run -- stop
# Wait and verify with: dotnet run -- get-status
```

Then you can update the Container App with the new configuration:
```shell
az containerapp update --name product-construction-prod --resource-group product-construction-service --yaml [PATH TO THE YAML FILE FROM ABOVE]
```

This will create a new revision of the Container App with the updated configuration. However, it will not automatically switch the traffic to the new revision.
For this, just re-run the Deploy stage of the last pipeline run (either prod or int) which will clean up the old revision and switch the traffic to the new one.
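To check which revision is currently receiving traffic, the standard revision listing command works; the JMESPath query below is illustrative and the property names may vary slightly by CLI version:

```shell
# List revisions with their active state and traffic weight
az containerapp revision list \
  --name product-construction-prod \
  --resource-group product-construction-service \
  --query "[].{name:name, active:properties.active, traffic:properties.trafficWeight}" \
  --output table
```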
When the service does not start and you can't see the logs in the usual Application Insights place, you can still get the container app logs for the given revision. You can get the (short) SHA of the commit that you tried to deploy and find the logs with this query:
```
ContainerAppConsoleLogs_CL
| where RevisionName_s contains "4c7e5db50e"
```

You can explore or locally run the container images that are being deployed to the Azure Container App.
- Find the image that you want to explore by checking the revisions in the Azure Container App.
- Run the following command to pull the image locally (replace the image tag):
```shell
az acr login --name productconstructionint
docker run --rm --entrypoint "/bin/sh" -it productconstructionint.azurecr.io/product-construction-service.api:2024081411-1-87a5bcb35f-dev
```

For staging, you can use the Azure portal to interact with the Redis instance directly:
- Go to the product-construction-service-redis-int resource.
- Use the built-in Console to run commands.
- If needed, access keys can be found under Settings → Authentication.
Accessing the production Redis instance requires a few more steps:
- Navigate to the product-construction-service-redis-prod resource in the Azure portal.
- Go to Settings → Authentication.
- Under Microsoft Entra ID, assign yourself access. Look for a Redis User entry with your name. After this, your user should become a `DATA OWNER` for this resource.
- Copy the long GUID-like username associated with it (you'll need it later).
Using the Azure CLI:

```shell
az login
az account get-access-token --resource https://redis.azure.com --query accessToken -o tsv
```

- Copy the resulting access token. (Or copy it directly to the clipboard without printing the token to the console:)

```shell
az account get-access-token --resource https://redis.azure.com --query accessToken -o tsv | clip
```

Run the following command:
```shell
redis-cli -h product-construction-service-redis-prod.redis.cache.windows.net -p 6380 --tls
```

Once inside the CLI, authenticate using:

```
AUTH <username> <access_token>
```

- Replace `<username>` with the Redis account username you copied earlier (it usually looks like a GUID).
- Replace `<access_token>` with the access token you obtained from the Azure CLI.
You should now be authenticated and able to interact with the production Redis.