To upgrade your deployment:
- Read the release notes of the new version and check for breaking changes. The CHANGELOG.md file should be your first stop.
- Adjust your values if necessary.
- Re-run the install command. It will upgrade your installation to the latest version of the chart.
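A minimal sketch of that command, assuming a release named ess in the ess namespace with values kept in values.yaml (substitute your actual chart reference and file names):

helm upgrade --install ess <chart reference> -n ess -f values.yaml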
You need to back up a few things to be able to restore your deployment:
- Stop Synapse and Matrix Authentication Service workloads:
kubectl scale sts -l "app.kubernetes.io/component=matrix-server" -n ess --replicas=0
kubectl scale deploy -l "app.kubernetes.io/component=matrix-authentication" -n ess --replicas=0
- The database. You need to back up your database and restore it on a new deployment.
  - If you are using the provided Postgres database, build a dump using the command:
kubectl exec --namespace ess -it sts/ess-postgres -- pg_dumpall -U postgres > dump.sql
Adjust to your own Kubernetes namespace and release name if required.
  - If you are using your own Postgres database, build your backup according to your database's documentation.
- Your values files used to deploy the chart.
- The chart will generate some credentials in a Secret if you do not provide them. To copy them to a local file, you can run the following command:
kubectl get secrets -l "app.kubernetes.io/managed-by=matrix-tools-init-secrets" -n ess -o yaml > secrets.yaml
Adjust to your own Kubernetes namespace if required.
- The chart will generate some flags/markers in a ConfigMap to ensure that helm upgrade with different values doesn't put the installation in an invalid state. To copy them to a local file, you can run the following command:
kubectl get configmap -l "app.kubernetes.io/managed-by=matrix-tools-deployment-markers" -n ess -o yaml > configmaps.yaml
Adjust to your own Kubernetes namespace if required.
- The media files: Synapse stores media in a persistent volume that should be backed up. On a default K3s setup, you can find where the Synapse media is stored on your node using the command:
kubectl get pv -n ess -o yaml | grep synapse-media
- Run the helm upgrade --install ... command again to restore your workloads' pods. A quick sanity check on the backed-up files follows this list.
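Before relying on the backup, it is cheap to confirm the files exist and are non-empty (file names taken from the commands above):

# The dump and the exported manifests should all be non-empty
ls -lh dump.sql secrets.yaml configmaps.yaml
# The dump should start with plain-text SQL
head -n 5 dump.sql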
To restore your deployment:
- Recreate the namespace and apply the Secret and ConfigMap backed up above:
kubectl create ns ess
kubectl -n ess apply -f secrets.yaml
kubectl -n ess apply -f configmaps.yaml
- Redeploy the chart using the values files backed up above.
- Stop Synapse and Matrix Authentication Service workloads:
kubectl scale sts -l "app.kubernetes.io/component=matrix-server" -n ess --replicas=0
kubectl scale deploy -l "app.kubernetes.io/component=matrix-authentication" -n ess --replicas=0
- Restore the Postgres dump. If you are using the provided Postgres database, this can be achieved using the following commands:
# Drop newly created databases and roles
kubectl exec -n ess sts/ess-postgres -- psql -U postgres -c 'DROP DATABASE matrixauthenticationservice'
kubectl exec -n ess sts/ess-postgres -- psql -U postgres -c 'DROP DATABASE synapse'
kubectl exec -n ess sts/ess-postgres -- psql -U postgres -c 'DROP ROLE synapse_user'
kubectl exec -n ess sts/ess-postgres -- psql -U postgres -c 'DROP ROLE matrixauthenticationservice_user'
kubectl cp dump.sql ess-postgres-0:/tmp -n ess
kubectl exec -n ess sts/ess-postgres -- bash -c "psql -U postgres -d postgres < /tmp/dump.sql"
Adjust to your own Kubernetes namespace and release name if required.
- Restore the Synapse media files using kubectl cp to copy them into the Synapse pod. If you are using K3s, you can find where the new persistent volume has been mounted with kubectl get pv -n ess -o yaml | grep synapse-media and copy your files to the destination path.
- Run the helm upgrade --install ... command again to restore your workloads' pods. A verification sketch follows this list.
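A short verification sketch, assuming the provided Postgres database and the ess namespace:

# List databases to confirm the dump was applied
kubectl exec -n ess sts/ess-postgres -- psql -U postgres -c '\l'
# Confirm the workloads came back after the helm upgrade
kubectl get pods -n ess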
The following documents the various stores of state for the installation that the chart controls.
These stores of state may have a different lifecycle from the chart itself, i.e. they may persist beyond helm uninstall; their lifecycle is documented below.
By default (postgres.enabled: true) the chart will deploy a Postgres instance in the cluster if Synapse or Matrix Authentication Service are deployed.
This is to enable a quick, easy and self-contained way of deploying the stack, whilst minimising external dependencies.
The chart will construct a PersistentVolumeClaim to persist the Postgres databases.
By default, this PersistentVolumeClaim will not be deleted on helm uninstall.
This is to prevent data loss.
This behaviour can be changed by setting postgres.storage.resourcePolicy: delete rather than keep.
Alternatively, an existing PersistentVolumeClaim that is not managed by the chart can be used by specifying postgres.storage.existingClaim.
Finally, the recommended approach is to use a Postgres instance that is not managed by the chart.
This can be done by setting postgres.enabled: false and configuring Synapse and Matrix Authentication Service with the details of those Postgres instance(s).
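As a sketch, the two storage options above can also be selected on the helm command line; the chart reference is a placeholder and postgres-data is a hypothetical claim name:

# Delete the chart-managed PVC on uninstall instead of keeping it
helm upgrade --install ess <chart reference> -n ess -f values.yaml --set postgres.storage.resourcePolicy=delete
# Or point the chart at a PersistentVolumeClaim it does not manage
helm upgrade --install ess <chart reference> -n ess -f values.yaml --set postgres.storage.existingClaim=postgres-data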
If Synapse is enabled, the chart will default to constructing a PersistentVolumeClaim to persist uploaded media.
By default, this PersistentVolumeClaim will not be deleted on helm uninstall.
This is to prevent data loss.
This behaviour can be changed by setting synapse.media.storage.resourcePolicy: delete rather than keep.
Alternatively, an existing PersistentVolumeClaim that is not managed by the chart can be used by specifying synapse.media.storage.existingClaim.
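To see which claims the chart created, and their capacity, before backing them up, listing the namespace's PersistentVolumeClaims is enough (assuming the ess namespace):

kubectl get pvc -n ess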
By default (initSecrets.enabled: true) the chart will run a pre-install / pre-upgrade Helm hook to generate a variety of credentials that don't relate to external resources.
This is to enable a quick and easy initial installation, without having to manually generate credentials, in a way that will work with tools like ArgoCD that don't support Helm's lookup function.
The chart will create appropriate Role and RoleBindings in the installation namespace to facilitate this.
The generated Secret will have label app.kubernetes.io/managed-by=matrix-tools-init-secrets.
As it is generated & managed with pre-install / pre-upgrade Hooks and no post-uninstall Hook is configured, this Secret will not be removed on helm uninstall.
This is to prevent data loss, as some of the generated credentials (e.g. Synapse's signing key) have knock-on effects if they are changed without additional configuration.
Each credential can be manually specified either directly in the Helm values or by referencing an existing Secret and associated key.
The generated secrets functionality can be turned off by setting initSecrets.enabled: false and then the chart will require you to provide all required credentials as described above.
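To inspect what the hook generated in your installation, list the Secret by the label mentioned above (assuming the ess namespace):

kubectl get secrets -l "app.kubernetes.io/managed-by=matrix-tools-init-secrets" -n ess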
By default (deploymentMarkers.enabled: true) the chart will run a pre-install / pre-upgrade / post-upgrade Helm hook to check and record the state of the installation in a ConfigMap.
This is to prevent components being enabled, disabled or otherwise put into states that could cause data corruption.
The chart will create appropriate Role and RoleBindings in the installation namespace to facilitate this.
The generated ConfigMap will have label app.kubernetes.io/managed-by=matrix-tools-deployment-markers.
As it is generated & managed with pre-install / pre-upgrade / post-upgrade Hooks and no post-uninstall Hook is configured, this ConfigMap will not be removed on helm uninstall.
This is to prevent invalid states being entered on reinstallation, given that the default chart behaviour is to keep the Postgres database between reinstallations.
The deployment markers functionality can be turned off by setting deploymentMarkers.enabled: false and the chart will not protect you from various invalid changes to the values.
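Likewise, the recorded markers can be inspected via their label (assuming the ess namespace):

kubectl get configmap -l "app.kubernetes.io/managed-by=matrix-tools-deployment-markers" -n ess -o yaml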
If you initially deployed ESS Community with the chart's secrets-initialization hook enabled (initSecrets.enabled not set to false), your Synapse signing key will be vulnerable if it was not set explicitly in synapse.signingKey. If you later specify its content in synapse.signingKey in the values files, the chart will not be able to generate a new key automatically: you will keep using the vulnerable signing key until you rotate it manually. To rotate it:
- Install signedjson and pyyaml using pip:
pip install signedjson pyyaml
- Generate your new signing key with the key id ed25519:1 using the following command:
$ python3 -c "import signedjson.key; signing_key = signedjson.key.generate_signing_key(1); print(f\"{signing_key.alg} {signing_key.version} {signedjson.key.encode_signing_key_base64(signing_key)}\")"
ed25519 1 BUIaPW...
- Specify this value as the new secret content under synapse.signingKey:
synapse:
  signingKey:
    ## Adjust according to how you configure your signing key in the chart
    ## It can either be provided inline in the Helm chart e.g.:
    ## value: ed25519 1 BUIaPW...
    ##
    ## Or it can be provided via an existing Secret e.g.:
    ## secret: existing-secret
    ## secretKey: key-in-secret
- To invalidate the old signing key, you will have to construct Synapse's old_signing_keys configuration. Generate a throwaway verifying key using the key id ed25519:0 with the following command:
$ python3 -c "import yaml; import time; import signedjson.key; signing_key = signedjson.key.generate_signing_key(0); verify_key = signedjson.key.get_verify_key(signing_key); revoke_time = int(time.time()*1000); result = {\"old_signing_keys\": {\"ed25519:0\": {\"key\": signedjson.key.encode_verify_key_base64(verify_key), \"expired_ts\": revoke_time}}}; print(f\"{yaml.dump(result)}\")"
old_signing_keys:
  ed25519:0:
    expired_ts: 1770625043432
    key: x1YFkPUwoKBnS69Yfxhpjc5Y8cd2nLPElJFdqCcJk4E
- Inject this into the Synapse additional settings in your values, under a new synapse.additional section:
synapse:
  additional:
    revoke_bad_signing_key.yml:
      config: |
        old_signing_keys:
          ed25519:0:
            key: <throwaway verifying key>
            expired_ts: <current ts>
This will make sure that:
- The old key id ed25519:0 is no longer accepted by the federation, and because the verifying key was randomly generated during revocation, all signatures made with the old key are invalid.
- The new key ed25519:1 is accepted by the federation.
- Apply the new values using helm and wait for Synapse to restart. Run the following command to check that the new signing key ed25519:1 is now advertised properly by Synapse, and that the old key id ed25519:0 is marked as revoked:
curl -s https://<your synapse host>/_matrix/key/v2/server | jq
{
"old_verify_keys": {
"ed25519:0": {
"expired_ts": 1769001790846,
"key": "tt+JkcqGzTxt..."
}
},
"server_name": "<your server name>",
"signatures": {
"<your server name>": {
"ed25519:1": "gahd4eeGh..."
}
},
"valid_until_ts": ...,
"verify_keys": {
"ed25519:1": {
"key": "BUIaPW..."
}
}
}
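As an additional, external check, the Matrix federation tester exercises key validation from the outside; replace the server name with your own:

curl -s "https://federationtester.matrix.org/api/report?server_name=<your server name>" | jq .FederationOK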