On-prem Secoda Helm chart with all external dependencies
This is a generic Helm chart for Secoda on-premises installation. It only installs the Secoda frontend and API services and does not provision an ingress or any of the external service dependencies.
The Helm chart requires the following external services, all of which must be accessible from the Kubernetes cluster nodes:
- Postgres database, with minimum resources of:
  - Postgres 14.x
  - 2 CPU
  - 8G memory
  - 100G storage
  - An admin-level account named `keycloak` with password authentication
- Redis cache
  - Redis 6.2.x
  - 1 CPU
  - 2G memory
  - Cluster mode disabled (**Secoda will not work with clustered Redis**)
  - 1 or more replicas
- Elasticsearch or OpenSearch
  - OpenSearch 2.x / Elasticsearch 8.x
  - 1 CPU
  - 4G memory
  - 20G disk storage
  - 1 or more cluster nodes
  - A master user account with username/password authentication
  - For AWS OpenSearch, the access policy must be set to "Only use fine-grained access control" (allow open access to the domain)
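Before installing, it can be worth confirming that the cluster can actually reach each service. A minimal reachability check using a throwaway busybox pod might look like this (the hostnames and ports below are placeholders, not values from this chart):

```shell
# Replace the hostnames with your actual Postgres, Redis, and search endpoints.
kubectl run netcheck --rm -it --image=busybox --restart=Never -- sh -c '
  nc -zv postgres.internal.example 5432 &&
  nc -zv redis.internal.example 6379 &&
  nc -zv search.internal.example 9200'
```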
In values.yaml you will need to set `datastores.secoda.authorized_domains` to a comma-separated list of email domains that are allowed to log in to your on-premises Secoda. This is an important security control and should not be skipped.
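A sketch of the relevant values.yaml stanza, assuming the key path named above (the domain values are examples):

```yaml
datastores:
  secoda:
    # Only users with emails at these domains can log in.
    authorized_domains: "example.com,example.org"
```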
Resource allocations for the api and frontend containers are also defined in the values.yaml file. You may need to increase these as usage of your Secoda instance grows.
All secrets must be populated before running `helm install`.
`secoda-secrets.env` is where you set all of the connection values for your external Postgres, Redis, and Open/Elasticsearch services. Basic examples of the more complex Redis and Postgres connection strings are included in the file. `secoda-secrets.sh` loads `secoda-secrets.env` into its Kubernetes secret with a `kubectl` command.
Before running `secoda-secrets.sh`, set the `--docker-password` flag in the `kubectl create secret docker-registry secoda-dockerhub` command to the password provided by Secoda, and populate `secoda-secrets.env` with the connection settings for your external services. `secoda-secrets.sh` creates the `secoda` namespace and creates all of the secrets in that namespace.
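The docker-registry secret the script creates looks roughly like this (the username flag and exact argument order may differ in your copy of `secoda-secrets.sh`; the placeholders are values provided by Secoda):

```shell
kubectl -n secoda create secret docker-registry secoda-dockerhub \
  --docker-username=<username-provided-by-secoda> \
  --docker-password=<password-provided-by-secoda>
```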
If you need to reset any of the secret values, delete the secret:

```shell
kubectl -n secoda delete secret <secret-name>
```

then rerun `secoda-secrets.sh`, and optionally restart the Secoda pods to pick up the new secrets.
After editing the values.yaml file and running `secoda-secrets.sh`, your Kubernetes cluster will be ready to install the Secoda Helm chart. From the base of the repository, run:

```shell
helm install -n secoda -f values.yaml secoda ./charts/secoda/
```
Alternatively, you can install from the Secoda Helm repository instead of a local checkout. You still need to ensure all of the dependencies and secrets are set up, along with any values.yaml settings. Then run:

```shell
helm repo add secoda https://secoda.github.io/secoda-helm-generic
helm install -n secoda -f values.yaml secoda secoda/secoda
```
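After either install completes, a quick sanity check with standard `helm` and `kubectl` commands (not specific to this chart) confirms the release and its pods came up:

```shell
helm status -n secoda secoda
kubectl -n secoda get pods
```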
- The Secoda Helm chart is configured to pull the latest images automatically on restart. `kubectl rollout restart deployment -n secoda` will redeploy the application with the latest images.
The default setup always deploys the latest Secoda Docker images (tag `latest`). This may not be the desired behavior for all customers. To pin to a specific Secoda version, modify your values.yaml file.
The default values.yaml file sets the version in:

```yaml
# Set the version tag here if you need to pin to a Secoda version instead
# of tracking "latest"
global:
  image:
    tag: "latest"
```
Instead of "latest", enter your desired version:

```yaml
# Set the version tag here if you need to pin to a Secoda version instead
# of tracking "latest"
global:
  image:
    tag: "2024.4.1"
```
To update the containers to your new version after modifying values.yaml, run:

```shell
helm upgrade -n secoda -f values.yaml secoda ./charts/secoda/
```
An example ingress file, `ingress_example.yaml`, is included. You will need to modify it to work with your preferred ingress class, which must also be installed in the Kubernetes cluster. For most use cases, only `spec.ingressClassName` and `spec.rules.0.host` should need to be modified. The ingress can then be deployed with:

```shell
kubectl apply -f ingress_example.yaml
```
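For reference, the two fields you typically edit sit in an Ingress resource shaped like the sketch below. This is illustrative, not the contents of `ingress_example.yaml`; the ingress class, hostname, and backend service name are placeholders, so edit the provided file rather than writing one from scratch:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secoda
  namespace: secoda
spec:
  ingressClassName: nginx        # set to your installed ingress class
  rules:
    - host: secoda.example.com   # set to your hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend   # backend service name may differ in the chart
                port:
                  number: 80
```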