diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml index c657c393..b6eb4865 100644 --- a/.github/workflows/publish.yml +++ b/.github/workflows/publish.yml @@ -3,25 +3,20 @@ on: push: branches: - "release/*.*" - - "develop" permissions: contents: write jobs: deploy: - runs-on: ubuntu-latest + runs-on: ubuntu-24.04 steps: - - uses: actions/checkout@v5 - with: - fetch-depth: 0 + - uses: actions/checkout@v5.0.0 - name: Configure Git Credentials run: | git config user.name github-actions[bot] git config user.email 41898282+github-actions[bot]@users.noreply.github.com - - uses: actions/setup-python@v5 - with: - python-version: 3.x + - uses: actions/setup-python@v5.6.0 - run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV - - uses: actions/cache@v4 + - uses: actions/cache@v4.0.0 with: key: mkdocs-material-${{ env.cache_id }} path: .cache diff --git a/docs/home/release-notes.md b/docs/home/release-notes.md new file mode 100644 index 00000000..c94f58b8 --- /dev/null +++ b/docs/home/release-notes.md @@ -0,0 +1,13 @@ +# Release notes + +## Version 1.62 + +!!! tip "" + Helm chart version 1.62.0 + +### Helm configuration changes + +- Added a new service, `processors-controller`, which manages the lifecycle of `transformation` pods. + - To use transformations, configure cross-account ECR access. See [Cross-account ECR access](./clouds/aws.md#cross-account-ecr-access). + - The `processors-controller` requires RBAC permissions to manage `Pods`, `ConfigMaps`, and `PersistentVolumeClaims`. + The required manifests are included in the Helm chart and can be disabled by setting `processorsController.rbac.enabled` to `false`. diff --git a/docs/home/release-notes/v1.50-v1.59.md b/docs/home/release-notes/v1.50-v1.59.md deleted file mode 100644 index cc04ea2f..00000000 --- a/docs/home/release-notes/v1.50-v1.59.md +++ /dev/null @@ -1,565 +0,0 @@ -# Release notes - -## Version 1.59 - -!!! tip "" - Helm chart version 1.59.4 - -### Helm configuration changes - -The following ODM components were removed: `clickhouse`, `mysql-prepare` and `clickhouse-helper`. - -Removing the old `clickHouse` instance is completely safe, as its data was migrated to the cluster version of ClickHouse in the 1.58 release, and it should have been disabled after that. - -### Mysql migration - -Background migrations, running during the `core` container startup, may take up to tens of minutes. Use the Helm `--timeout 30m` option to adjust the timeout accordingly. - -If the ODM service is unable to start within the allocated time, increase the `failureThreshold` value for the core container. - -### Removal of Application Container Persistent Volume - -We have removed the Persistent Volume Claim (PVC) and the necessary configuration for the `applications` container. This removal is safe, so there's no need for concern. - -### Increasing memory limits for func-file - -A mechanism of attaching files to studies has been extended with new data sources. Attached files are stored in an -S3 bucket, configured for `applications` container. The `func-file` container requires additional memory -to facilitate efficient data transfer. This is particularly critical for the parallel upload of large files -(several gigabytes in size) from the S3 source. - -The default configuration works fine for sequential file uploads. To enable parallel uploads, -please increase the `limits` for memory, as well as the `cRTInitialReadBufferSizeInBytes` parameter. 
Additionally, it is highly recommended that, for JVM services, the values for the `requests` and `limits` parameters be equal.
-Take a look at the `odm/examples/parallel-file-upload.yaml` file for an example configuration
-demonstrating the settings required for the parallel upload of five files, each 10 GB in size.
-Please contact Genestack support if you need any further help with the configuration.
-
-You can also check the Amazon S3 limits here: .
-
-### Amazon S3 bucket configuration
-
-A CORS policy is not necessary for the S3 bucket used by ODM and can be removed from the bucket
-configuration.
-
-## Version 1.58
-
-!!! danger
-
-    This version must be installed before proceeding with the next update.
-
-!!! tip ""
-    Helm chart version 1.58.3
-
-### Clickhouse migration
-
-In this release we moved from the standalone ClickHouse container to a ClickHouse cluster controlled by the [Altinity clickhouse operator](https://github.com/Altinity/clickhouse-operator).
-We automated the process of transferring data from the standalone version to the cluster version of ClickHouse; the migration is executed during the upgrade.
-
-Things to keep in mind before upgrading:
-
-- Migration time depends on the resources (mostly CPU and disk I/O) allocated to the ClickHouse instances. During our tests we measured an average speed of 50 GB per hour for instances with 4 CPU / 16 GB RAM.
-
-- Do **not** set the `--wait` and `--timeout` flags during the upgrade, because of the migration time.
-
-- The new ClickHouse cluster must have 25% more disk space than the standalone variant.
-
-- Optional: We've developed [a tool that checks data consistency](../troubleshooting/sanity-check.md). You can run it before and after the upgrade and compare the results to make sure that everything went as expected.
-
-#### Upgrade steps
-
- 1. (Upgrade flow) Proceed with the [odm](../helm/how-to-deploy.md#deployment-process) installation, taking into account:
-
-    a. The `odm-ops` chart will install the Altinity ClickHouse operator with pre-configured settings.
-
-    b. In your custom values for the `odm` chart, adjust the parameters for ClickHouse and Altinity ClickHouse (requests, limits, disk size +25%, etc.). We suggest temporarily increasing resources for both ClickHouse instances, since this reduces the migration time.
-
-    c. VERY IMPORTANT! Make sure that the old ClickHouse is NOT disabled! The path in values is `clickhouse.enabled`. By default, it is enabled.
-
-    d. DO NOT apply the `recommendations.yaml` file from the examples as is; it is a recommendation ONLY for new installations!
-
-    e. The resources path in values for the old ClickHouse is `clickhouse.resources`; for the new one it is `altinity.clickhouse.installation.spec.templates.podTemplate.spec.mainContainer.resources`.
-
-    f. The persistence size path for the old ClickHouse is `clickhouse.persistence.size`; for the new one it is `altinity.clickhouse.installation.spec.templates.volumeClaimTemplate.spec.resources.requests.storage`.
-
- 2. A job named `odm-clickhouse-helper` will appear in Kubernetes, and it will handle the migration.
-
-    a. During the ClickHouse migration, ODM will continue to operate, but all writes to ClickHouse will be queued.
-
-    b. Wait until the `odm-clickhouse-helper` job completes, indicating that the migration is done.
-
- 3. Disable `clickhouse` and `clickhouseHelper` in the Helm values. You can refer to the example `disable-old-clickhouse-after-upgrade.yaml`.
-
- 4. Update ODM one last time with `helm upgrade ...`.
This will disable the old ClickHouse. - -### Helm examples changes - -- New examples for different ODM configuration options have been added to the `examples` helm chart directory, and all old ones have been updated. - -- Additionally, recommendations for computing resources have been included. - -### Helm configuration changes - -- From this release, we are using fully original Docker images for the OSS components of ODM. It is not recommended to update them independently. - - From: - - ```yaml - mysql: - image: - registry: 091468197733.dkr.ecr.us-east-1.amazonaws.com - repository: genestack/mysql - - mailcatcher: - image: - registry: 091468197733.dkr.ecr.us-east-1.amazonaws.com - repository: genestack/mailcatcher - - clickhouse: - image: - registry: 091468197733.dkr.ecr.us-east-1.amazonaws.com - repository: genestack/clickhouse - - nginx: - image: - registry: 091468197733.dkr.ecr.us-east-1.amazonaws.com - repository: genestack/nginx - ``` - - To: - - ```yaml - mysql: - image: - registry: docker.io - repository: mysql - - mailcatcher: - image: - registry: docker.io - repository: dockage/mailcatcher - - clickhouse: - image: - registry: docker.io - repository: clickhouse/clickhouse-server - - nginx: - image: - registry: docker.io - repository: nginxinc/nginx-unprivileged - ``` - -- Now you can mount any file with any content into any container in ODM! For example, your certificates. This feature required adding the full path in all existing ODM configuration files. - - From: - - ```yaml - core: - configurationFiles: - "application.yaml": - - applications: - configurationFiles: - "application.yaml": - "microsoft.openid.ini": - "okta.openid.ini": - "google.openid.ini": - - mysql: - configurationFiles: - "genestack.cnf": - - funcFile: - configurationFiles: - "application.yaml": - - funcJob: - configurationFiles: - "application.yaml": - - linkService: - configurationFiles: - "application.yaml": - - clickhouse: - configurationFiles: - "config.yaml": - "users.yaml": - - nginx: - configurationFiles: - "odm.conf": - "proxy-pass-parameters.conf": - ``` - - To: - - ```yaml - core: - files: - "/var/lib/genestack/properties/application.yaml": - - applications: - files: - "/var/lib/genestack/properties/application.yaml": - "/var/lib/genestack/properties/microsoft.openid.ini": - "/var/lib/genestack/properties/okta.openid.ini": - "/var/lib/genestack/properties/google.openid.ini": - - mysql: - files: - "/etc/mysql/conf.d/genestack.cnf": - - funcFile: - files: - "/app/config/application.yaml": - - funcJob: - files: - "/app/config/application.yaml": - - linkService: - files: - "/app/config/application.yaml": - - clickhouse: - files: - "/etc/clickhouse-server/config.d/config.yaml": - "/etc/clickhouse-server/users.d/users.yaml": - - nginx: - files: - "/etc/nginx/conf.d/odm.conf": - "/etc/nginx/conf.d/proxy-pass-parameters.conf": - ``` - -- The AWS credentials for connecting to S3 in `core` and `applications` have been removed. If you have these parameters, you can safely delete them. - - ```yaml - core: - files: - "/var/lib/genestack/properties/application.yaml": - backend: - aws: - region: "" - endpoint: - url: "" - access: - key: "" - secret: - key: "" - ``` - - !!! danger - - Important! The AWS region in the `application` must remain! You can delete only the `endpoint`, `access` and `secret` parameters. 
-
-    ```yaml
-    applications:
-      files:
-        "/var/lib/genestack/properties/application.yaml":
-          frontend:
-            aws:
-              region: "{{ .Values.credentials.awsS3Region }}"
-              endpoint:
-                url: ""
-              access:
-                key: ""
-              secret:
-                key: ""
-    ```
-
-- The configuration file `settings.py.local` has been removed. If you are using it, you can safely delete it.
-
-    ```yaml
-    core:
-      files:
-        "settings.py.local":
-    ```
-
-- The previously added `BusyBox` image for `ClickHouse` has been removed. If you are using it, you can safely delete it.
-
-    ```yaml
-    clickhouse:
-      busyboxImage:
-        registry: docker.io
-        repository: busybox
-        tag: 1.36.1
-    ```
-
-## Version 1.57
-
-!!! danger
-
-    This version must be installed before proceeding with the next update.
-
-!!! tip ""
-    Helm chart version 1.57.0
-
-### Helm configuration changes
-
-- Removed the link to the database for the `func-file` service. If you have it in your `values.yaml`, you can safely remove the `spring` map completely.
-
-    ```yaml
-    funcFile:
-      configurationFiles:
-        "application.yaml":
-          spring:
-            datasource:
-              # -- Mysql jdbc URL
-              url: "jdbc:mysql://..."
-    ```
-
-- For the ClickHouse `busybox` image, the ability to set the repository and version has been added.
-
-    ```yaml
-    clickhouse:
-      busyboxImage:
-        # -- Image registry
-        registry: docker.io
-        # -- Image repository
-        repository: busybox
-        # -- Image tag
-        tag: 1.36.1
-    ```
-
-## Version 1.56
-
-!!! tip ""
-    Helm chart version 1.56.1
-
-### Export metrics to Genestack
-
-Fluent-bit was introduced as an extra service that collects and dispatches metrics in Prometheus format to Genestack.
-
-These metrics encompass technical and/or product-related data, devoid of any sensitive information.
-
-If you wish to deactivate this functionality, you can do so by configuring the following parameter:
-
-```yaml
-fluent-bit:
-  enabled: false
-```
-
-### Helm configuration changes
-
-The organization name and hostname are now in the `global` section:
-
-From:
-
-```yaml
-odmFrontendHostname: odm.local
-applications:
-  configurationFiles:
-    "application.yaml":
-      frontend:
-        ui:
-          organization:
-            name: "Genestack"
-```
-
-To:
-
-```yaml
-global:
-  hostname: odm.local
-  organizationName: "Genestack"
-```
-
-## Version 1.55
-
-!!! tip ""
-    Helm chart version 1.55.4
-
-### Configure ODM usage together with an encrypted S3 bucket (SSE-KMS and SSE-S3 only)
-
-#### Introduction
-
-!!! attention ""
-    You can find configuration examples in the ODM Helm chart.
-
-If you have several AWS credentials in your configuration, you only need to modify the credentials for
-accessing the bucket specified as `frontend.aws.bucket`.
-
-#### SSE-KMS
-
-To enable uploading into an SSE-KMS encrypted bucket, you need to customize the `func-file` configuration.
-The following configuration example uses a bucket encrypted by SSE-KMS with the name ``.
-The bucket configuration should specify the algorithm `aws:kms` as `preferredAlgorithm`. Additionally,
-the property `kmsCmkId` should be added with a value equal to the key id `arn:aws:kms:...` if the bucket policy
-requires this key to be explicitly sent on PUT requests. The `func-file` section in the configuration
-should look like this:
-
-#### SSE-S3
-
-SSE-S3 is the default encryption type for most buckets.
-To force ODM to request this type of encryption from the S3 provider for ``, you need to specify
-the `preferredAlgorithm` property with the value `AES256`:
-
-#### On `storage_config` section configuration in `func-file`
-
-Keep in mind that `func-file` reads the `storage_config` section sequentially. You can create specific configurations
-for individual buckets, e.g., if one has SSE-KMS encryption while others do not. To do this, specify the bucket with
-the specific configuration and its name as the first item in the list. Then, provide the general configuration for
-the other buckets using the wildcard symbol `*`. ODM will only upload files to the bucket specified by the
-`frontend.aws.bucket` property, regardless of the `storage_config` section.
-
-### Genestack pod separation
-
-The example below shows the `image` section, but it applies to all sections with backend/frontend separation.
-
-The application settings changes are shown separately:
-
-From:
-
-```yaml
-genestack:
-  image:
-    backend:
-      registry: 091468197733.dkr.ecr.us-east-1.amazonaws.com
-      repository: genestack/core
-      pullPolicy: Always
-      pullSecrets: []
-    frontend:
-      registry: 091468197733.dkr.ecr.us-east-1.amazonaws.com
-      repository: genestack/applications
-      pullPolicy: Always
-      pullSecrets: []
-```
-
-To:
-
-```yaml
-core:
-  image:
-    registry: 091468197733.dkr.ecr.us-east-1.amazonaws.com
-    repository: genestack/core
-    pullPolicy: Always
-    pullSecrets: []
-
-applications:
-  image:
-    registry: 091468197733.dkr.ecr.us-east-1.amazonaws.com
-    repository: genestack/applications
-    pullPolicy: Always
-    pullSecrets: []
-```
-
-### Application settings rework
-
-From:
-
-```yaml
-genestack:
-  applicationSettings:
-    backend:
-      properties:
-        # backend.properties file content
-      propertiesAuth:
-        # backend-credentials.properties file content
-      propertiesLimits:
-        # limits.yaml file content
-      predefinedSystemUsers:
-        # token and password for technical odm users
-      predefinedUsers:
-        # predefined-users.json file content
-    frontend:
-      properties:
-        # frontend.properties file content
-      "google.openid.ini":
-        # google.openid.ini file content
-      "microsoft.openid.ini":
-        # microsoft.openid.ini file content
-      "okta.openid.ini":
-        # okta.openid.ini file content
-      propertiesAuth:
-        # frontend-credentials.properties file content
-      monitoringThresholds:
-        # monitoring-thresholds.yaml file content
-      saml:
-        # saml directory content
-```
-
-To:
-
-```yaml
-
-core:
-  configurationFiles:
-    "application.yaml":
-      # backend.properties and backend-credentials.properties files content in YAML format
-    "settings.py.local":
-      # settings.py.local file content
-  secretFiles:
-    # saml directory content
-
-applications:
-  configurationFiles:
-    "application.yaml":
-      # frontend.properties and frontend-credentials.properties files content in YAML format
-    "google.openid.ini":
-      # google.openid.ini file content
-    "microsoft.openid.ini":
-      # microsoft.openid.ini file content
-    "okta.openid.ini":
-      # okta.openid.ini file content
-```
-
-### High-level path renaming in values.yaml
-
-#### Solr
-
-From:
-
-```yaml
-index: {} # Solr configuration
-```
-
-To:
-
-```yaml
-solr: {} # Solr configuration
-```
-
-#### Clickhouse
-
-From:
-
-```yaml
-txIndex: {} # Clickhouse configuration
-```
-
-To:
-
-```yaml
-clickhouse: {} # Clickhouse configuration
-```
-
-#### Mysql
-
-From:
-
-```yaml
-db: {} # Mysql configuration
-```
-
-To:
-
-```yaml
-mysql: {} # Mysql configuration
-```
-
-#### Nginx
-
-From:
-
-```yaml
-proxy: {} # Nginx configuration
-```
-
-To:
-
-```yaml
-nginx: {} # Nginx configuration
-```
diff --git a/docs/home/release-notes/v1.60-v1.69.md b/docs/home/release-notes/v1.60-v1.69.md
deleted file mode 100644
index 48cd3a6a..00000000
--- a/docs/home/release-notes/v1.60-v1.69.md
+++ /dev/null
@@ -1,87 +0,0 @@
-# Release notes
-
-## Version 1.62
-
-!!! tip ""
-    Helm chart version 1.62.0
-
-### Helm configuration changes
-
-- Added a new service, `processors-controller`, which manages the lifecycle of `transformation` pods.
-    - To use transformations, configure cross-account ECR access. See [Cross-account ECR access](./../clouds/aws.md#cross-account-ecr-access).
-    - The `processors-controller` requires RBAC permissions to manage `Pods`, `ConfigMaps`, and `PersistentVolumeClaims`.
-      The required manifests are included in the Helm chart and can be disabled by setting `processorsController.rbac.enabled` to `false`.
-
-## Version 1.61
-
-!!! tip ""
-    Helm chart version 1.61.0
-
-### Known issues
-
-- If you encounter a `Status code: 500` error while loading data, especially after an update, manually restart the `core` and `applications` pods in Kubernetes.
-This issue is caused by a known bug in the JDBC connector.
-Performing the restart once is sufficient to resolve the problem.
-
-### Helm configuration changes
-
-- Mailcatcher was replaced by Mailpit. The `mailcatcher` section has been removed; use the `mailpit` configuration instead.
-
-    From
-
-    ```yaml
-    mailcatcher:
-      image:
-        repository: dockage/mailcatcher
-    ```
-
-    To
-
-    ```yaml
-    mailpit:
-      image:
-        repository: axllent/mailpit
-    ```
-
-## Version 1.60
-
-!!! tip ""
-    Helm chart version 1.60.1
-
-### Rclone migration
-
-The ODM component `funcFile` was replaced with `rclone`.
-
-As a result of this migration, the storage configuration was moved from `funcFile` to the `application.yaml` files in `core`, `applications`, and `funcJob`.
-You can find configuration examples in the "examples" directory within the Helm chart.
-Note that the `genestack.rclone` configuration section in all three of these services should be identical. For this purpose, we recommend using YAML anchors, which are also included in the examples.
-
-Rclone also allows using an AWS IAM role instead of an AWS IAM user. If this is relevant for your environment, deployment information can be found [here](./../clouds/aws.md) in paragraph 4.
-
-### SAML elimination
-
-Support for SAML was eliminated.
-
-### Helm configuration changes
-
-- The `credentials` section has been removed; use the `rclone` configuration instead.
-
-    ```yaml
-    credentials:
-      awsS3Region:
-      awsS3AccessKey:
-      awsS3SecretAccessKey:
-    ```
-
-- All configuration related to `SAML` has been removed.
-
-- The `region` parameter has been removed from the `applications` configuration.
-
-    ```yaml
-    applications:
-      files:
-        "/var/lib/genestack/properties/application.yaml":
-          frontend:
-            aws:
-              region:
-    ```
diff --git a/mkdocs.yml b/mkdocs.yml
index 8213710d..0450d44f 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -30,9 +30,7 @@ nav:
       - Sanity check: home/troubleshooting/sanity-check.md
   - Other:
       - Telemetry: home/other/telemetry.md
-      - Release Notes:
-          - v1.60 - v1.69: home/release-notes/v1.60-v1.69.md
-          - v1.50 - v1.59: home/release-notes/v1.50-v1.59.md
+      - Release Notes: home/release-notes.md
 theme:
   name: material
   palette: