diff --git a/README.md b/README.md index f1cd5e5556b..9961b7f855d 100644 --- a/README.md +++ b/README.md @@ -721,7 +721,10 @@ Partials may be organised in any further subfolders as required. For example, yo `_partials/public-cloud/_palette_setup.mdx`. In order to aid with organisation and categorization, partials must have a `partial_category` and `partial_name` defined -in their frontmatter: +in their frontmatter. Individual values assigned for `partial_category` and `partial_name` do not have to be unique, but +the _combination_ of the two must be unique to identify the correct partial. For example, you can have multiple partials +with a `partial_category` of `public-cloud` and multiple partials with a `partial_name` of `palette-setup`, but only +_one_ can have _both_ a `partial_category` of `public-cloud` _and_ a `partial_name` of `palette-setup`. ```mdx --- @@ -732,13 +735,13 @@ partial_name: palette-setup This is how you set up Palette in {props.cloud}. ``` -Partials are customized using properties which can be read using the `{props.field}` syntax. +Partials are customized by defining properties, which can be accessed with the `{props.propertyName}` syntax. Once your partial has been created, run the `make generate-partials` command to make your partial available for use. This command will also be invoked during the `make start` and `make build` commands. Finally, you can reference your partial in any `*.md` file by using the `PartialsComponent`, together with the specified -category and name of the partial: +category and name of the partial. Note that the properties `category` and `name` are _always_ required. ```md +``` -## Palette/VerteX URLs +### Internal Links -A special component has been created to handle the generation of URLs for Palette and VerteX. The component is called -[PaletteVertexUrlMapper](./src/components/PaletteVertexUrlMapper/PaletteVertexUrlMapper.tsx). This component is intended -for usage withing partials. 
You can use the component to change the base path of the URL to either Palette or VerteX. -The component will automatically prefix the path to the URL. The component has the following props: +Due to the complexities of Docusaurus plugin rendering, links do not support versioning in `*.mdx` files. If you want to +add an internal link, you will have to use the `VersionedLink` component inside the `*.mdx` file. -- `edition` - The edition of the URL. This can be either `Palette` or `Vertex`. Internally, the component will use this - value to determine the base URL. +```mdx +--- +partial_category: public-cloud +partial_name: palette-setup +--- + +This is how you set up Palette in {props.cloud}. + +This is an . +``` + +The path of the link should be the path of the destination file from the root directory, without any parent directory operators +`..`. External links can be referenced as usual. + +### Palette/VerteX URLs + +The component [PaletteVertexUrlMapper](./src/components/PaletteVertexUrlMapper/PaletteVertexUrlMapper.tsx) handles the +generation of URLs for Palette and VerteX documents within the `/self-hosted-setup` section. This component is used +within partials to change the base path of the URL to either `/self-hosted-setup/palette` or `/self-hosted-setup/vertex` +and, if applicable, point to a particular installation method. The component has the following props: + +- `edition` - The edition of the URL. This can be either `palette` or `vertex`. The component uses this value to + determine the base URL. Values are _not_ case sensitive. - `text` - The text to display for the link. - `url` - The path to append to the base URL. + - To redirect to the base `/self-hosted-setup/palette` or `/self-hosted-setup/vertex` URL, use `url=""`. + - When referencing a heading or anchor within a file, append `/#anchor-here` to the end of the file path. For example, + use `url="/system-management/account-management/#system-administrators"`. 
Note that adding `/` after the anchor + allows the link to work but does not route to the correct header. -Below is an example of how to use the component: +Below is an example of how to use the component within a partial: ```mdx - System administrator permissions, either a Root Administrator or Operations Administrator. Refer to the page to learn more about system administrator roles. ``` +When referencing the `PartialsComponent` in the `.md` file, the `edition` determines whether the link maps to a Palette or +VerteX page. In the example below, because the `edition` is defined as `palette`, the resulting link is +`/self-hosted-setup/palette/system-management/account-management`. If the `edition` used were `vertex`, the resulting +link would be `/self-hosted-setup/vertex/system-management/account-management`. + +```md +<PartialsComponent + category="self-hosted" + name="customize-interface" + edition="palette" +/> +``` + +#### Different Palette/VerteX URLs + In cases where Palette and Vertex pages have different URLs beyond the base path, the component will accept the following props: -- `edition` - The edition of the URL. This can be either `Palette` or `Vertex`. Internally, the component will use this - value to determine the base URL. +- `edition` - The edition of the URL. This can be either `palette` or `vertex`. The component uses this value to + determine whether to route the link to the defined `palettePath` or `vertexPath`. - `text` - The text to display for the link. -- `palettePath` - The Palette path to append to the base URL. -- `vertexPath` - The VerteX path to append to the base URL. +- `palettePath` - The full self-hosted Palette path. Using `palettePath` prevents the base URL `/self-hosted-setup/` + from being appended; therefore, you must use the full path. + - When referencing a heading or anchor within a file, append `/#anchor-here` to the end of the file path. +- `vertexPath` - The full self-hosted Palette VerteX path. 
Using `vertexPath` prevents the base URL + `/self-hosted-setup/` from being appended; therefore, you must use the full path. + - When referencing a heading or anchor within a file, append `/#anchor-here` to the end of the file path. Below is an example of how to use the component when the URLs are different: @@ -792,13 +853,54 @@ Below is an example of how to use the component when the URLs are different: - System administrator permissions, either a Root Administrator or Operations Administrator. Refer to the page to learn more about system administrator roles. ``` +When referencing the `PartialsComponent` in the `.md` file, the resulting links would be +`/self-hosted-setup/palette/system-management/account-management` and +`/self-hosted-setup/vertex/system-management-vertex/account-management` (based on the `edition` used). + +#### Installation-Specific URLs + +The `PaletteVertexUrlMapper` component also supports the optional `install` prop for handling installation-specific URLs +for self-hosted Palette and Palette VerteX. + +- `install` - The installation method. Can be `kubernetes`, `vmware`, or `management-appliance`. When provided, the + component appends `/supported-environments/{install-method}` to the base URL path. Values are _not_ case sensitive. + +When the `install` prop is provided, the URL is constructed as follows: + +``` +/self-hosted-setup/{edition}/supported-environments/{install-method}/{url} +``` + +Below is an example of how to use the component with the `install` prop within the partial `.mdx` file: + +```mdx +- To activate your installation, refer to the . +``` + +When referencing the `PartialsComponent` in the `.md` file in the example below, the resulting link would be +`/self-hosted-setup/palette/supported-environments/vmware/activate`. + +```md + +``` + ## Security Bulletins The security bulletins are auto-generated upon server start or the build process. 
The bulletins are generated by diff --git a/_partials/_azure-cloud-account-setup.mdx b/_partials/_azure-cloud-account-setup.mdx index 2ffd5ad29e5..d10abdcca31 100644 --- a/_partials/_azure-cloud-account-setup.mdx +++ b/_partials/_azure-cloud-account-setup.mdx @@ -7,9 +7,13 @@ Use the following steps to add an Azure or Azure US Government account in Palett :::warning - Beginning with Palette version 4.6.36, a is required to add an [Azure US Government](https://azure.microsoft.com/en-us/explore/global-infrastructure/government) cloud account. + -If you are using a or instance, a PCG is not required unless you configure both an Azure Public Cloud and Azure US Government account on the same installation. If you do not configure a PCG, you must install two instances of Palette or VerteX: one for Azure Public Cloud clusters and one for Azure US Government clusters. +Beginning with Palette version 4.6.36, a is required to add an [Azure US Government](https://azure.microsoft.com/en-us/explore/global-infrastructure/government) cloud account. + +If you are using a or instance, a PCG is not required unless you configure both an Azure Public Cloud and Azure US Government account on the same installation. If you do not configure a PCG, you must install two instances of Palette or VerteX: one for Azure Public Cloud clusters and one for Azure US Government clusters. + + ::: diff --git a/_partials/cluster-templates/_profile-vs-template.mdx b/_partials/cluster-templates/_profile-vs-template.mdx index 1b2f18a6fcd..13a6fa7bc6d 100644 --- a/_partials/cluster-templates/_profile-vs-template.mdx +++ b/_partials/cluster-templates/_profile-vs-template.mdx @@ -10,7 +10,7 @@ or a single . :::info - are a Tech Preview feature and can be used only if the **ClusterTemplates** is enabled. + are a Tech Preview feature and can be used only if the **ClusterTemplates** is enabled. 
::: diff --git a/_partials/self-hosted/_install-next-steps.mdx b/_partials/self-hosted/_install-next-steps.mdx index c9d3848990f..1ceeeb510b7 100644 --- a/_partials/self-hosted/_install-next-steps.mdx +++ b/_partials/self-hosted/_install-next-steps.mdx @@ -11,8 +11,9 @@ Now that you have installed {props.version}, you can either /> to host your users and set up your clusters, or you can . Beginning with version 4.6.32, once you install {props.version}, you have 30 days to activate it; versions older than 4.6.32 do not need to be activated. During the 30-day trial period, you can use {props.version} without any restrictions. After 30 days, you can continue to use {props.version}, but you cannot deploy additional clusters or perform any day-2 operations on existing clusters until {props.version} is activated. Each installation of {props.version} must be activated separately. We recommend activating {props.version} as soon as possible to avoid any disruptions. \ No newline at end of file diff --git a/_partials/self-hosted/_setup-steps.mdx b/_partials/self-hosted/_setup-steps.mdx index 5387c5a3d9b..47105931cae 100644 --- a/_partials/self-hosted/_setup-steps.mdx +++ b/_partials/self-hosted/_setup-steps.mdx @@ -5,7 +5,7 @@ partial_name: setup-steps ## Prerequisites -- An RHEL airgap VM deployed in your VMware vSphere. The VM must be registered with +- An RHEL airgap VM deployed in VMware vSphere. The VM must be registered with [Red Hat](https://access.redhat.com/solutions/253273) and have ports `80` and `443` available. This guide uses RHEL version `9.4` as an example. @@ -31,9 +31,9 @@ partial_name: setup-steps ::: -- Review the required vSphere and ensure you have +- Review the required vSphere and ensure you have created the proper custom roles and zone tags. Zone tagging enables dynamic storage allocation across fault domains - when provisioning workloads that require persistent storage. Refer to for information. 
+ when provisioning workloads that require persistent storage. Refer to for information. - The following artifacts must be available in the root home directory of the RHEL airgap VM. You can download the files in a system with internet access and then transfer them to your airgap environment. Contact your {props.edition} support @@ -54,8 +54,6 @@ partial_name: setup-steps distribution OVA required for the {props.edition} nodes creation. Refer to the section to learn if the version of {props.edition} you are installing requires a new OS and Kubernetes OVA. - - {props.requirementsURL} @@ -77,7 +75,8 @@ partial_name: setup-steps Place the OVA in the **spectro-templates** folder. Append the `r_` prefix, and remove the `.ova` suffix when assigning its name and target location. For example, the final output should look like `r_u-2204-0-k-1294-0`. This naming convention is required for the installation process to identify the OVA. Refer to the - page for a list of additional OS and + page for a list of additional OS and Kubernetes OVAs. You can terminate the deployment after the OVA is available in the `spectro-templates` folder. Refer to the @@ -303,7 +302,7 @@ partial_name: setup-steps systemctl restart httpd.service ``` -20. Review the page and identify any additional packs you want +20. Review the page and identify any additional packs you want to add to your registry. You can also add additional packs after the installation is complete. You have now completed the preparation steps for an airgap installation. Check out the [Validate](#validate) section to @@ -398,7 +397,7 @@ command below to start the installation. palette ec install ``` -Complete all the Palette CLI steps outlined in the guide from the RHEL VM. +Complete all the Palette CLI steps outlined in the guide from the RHEL VM. 
:::info diff --git a/_partials/self-hosted/_size_guidelines.mdx b/_partials/self-hosted/_size_guidelines-helm-cli.mdx similarity index 68% rename from _partials/self-hosted/_size_guidelines.mdx rename to _partials/self-hosted/_size_guidelines-helm-cli.mdx index 964fb4a71e4..f1dc65a1b39 100644 --- a/_partials/self-hosted/_size_guidelines.mdx +++ b/_partials/self-hosted/_size_guidelines-helm-cli.mdx @@ -1,6 +1,6 @@ --- partial_category: self-hosted -partial_name: size-guidelines +partial_name: size-guidelines-helm-cli --- This section lists resource requirements for {props.edition} for various capacity levels. In {props.edition}, the terms _small_, @@ -20,30 +20,12 @@ active nodes and pods at any given time.
- - - - | **Size** | **Total Nodes** | **Node CPU** | **Node Memory** | **Node Storage** | **MongoDB Node Storage Limit** | **MongoDB Node Memory Limit** | **MongoDB Node CPU Limit** | **Total Deployed Workload Cluster Nodes** | **Deployed Clusters with 10 Nodes** | | -------------------- | --------- | ------- | ---------- | ----------- | ------------------------- | ------------------------ | --------------------- | ------------------------ | ----------------------------------- | | Small | 3 | 8 | 16 GB | 60 GB | 20 GB | 4 GB | 2 | 1000 | 100 | | Medium (Recommended) | 3 | 16 | 32 GB | 100 GB | 60 GB | 8 GB | 4 | 3000 | 300 | | Large | 3 | 32 | 64 GB | 120 GB | 80 GB | 12 GB | 6 | 5000 | 500 | - - - - -| **Size** | **Total Nodes** | **Node CPU** | **Node Memory** | **Node Storage (Total)** | **Total Deployed Workload Cluster Nodes** | **Deployed Clusters with 10 Nodes** | -| -------------------- | --------------- | ------------ | --------------- | ------------------------ | ----------------------------------------- | ----------------------------------- | -| Small | 3 | 8 | 16 GB | 750 GB | 1000 | 100 | -| Medium (Recommended) | 3 | 16 | 32 GB | 750 GB | 3000 | 300 | -| Large | 3 | 32 | 64 GB | 750 GB | 5000 | 500 | - - - - - :::info The Spectro manifest requires approximately 10 GB of storage. {props.edition} deployed clusters use the manifest to identify what images to pull for each microservice that makes up {props.edition}. diff --git a/_partials/self-hosted/_size_guidelines-management-appliance.mdx b/_partials/self-hosted/_size_guidelines-management-appliance.mdx new file mode 100644 index 00000000000..e007955b0fc --- /dev/null +++ b/_partials/self-hosted/_size_guidelines-management-appliance.mdx @@ -0,0 +1,41 @@ +--- +partial_category: self-hosted +partial_name: size-guidelines-management-appliance +--- + +This section lists resource requirements for {props.edition} for various capacity levels. 
In {props.edition}, the terms _small_, +_medium_, and _large_ are used to describe the instance size of worker pools that Palette is installed on. The following +table lists the resource requirements for each size. + +
+ +:::warning + +The recommended maximum number of deployed nodes and clusters in the environment should not be exceeded. We have tested +the performance of {props.edition} with the recommended maximum number of deployed nodes and clusters. Exceeding these limits +can negatively impact performance and result in instability. The active workload limit refers to the maximum number of +active nodes and pods at any given time. + +::: + +
+ +| **Size** | **Total Nodes** | **Node CPU** | **Node Memory** | **Node Storage (Total)** | **Total Deployed Workload Cluster Nodes** | **Deployed Clusters with 10 Nodes** | +| -------------------- | --------------- | ------------ | --------------- | ------------------------ | ----------------------------------------- | ----------------------------------- | +| Small | 3 | 8 | 16 GB | 750 GB | 1000 | 100 | +| Medium (Recommended) | 3 | 16 | 32 GB | 750 GB | 3000 | 300 | +| Large | 3 | 32 | 64 GB | 750 GB | 5000 | 500 | + +:::info + +The Spectro manifest requires approximately 10 GB of storage. {props.edition} deployed clusters use the manifest to identify what images to pull for each microservice that makes up {props.edition}. + +::: + +#### Instance Sizing + +| **Configuration** | **Active Workload Limit** | +| -------------------- | ------------------------------------------------- | +| Small | Up to 1000 nodes each with 30 pods (30,000 pods) | +| Medium (Recommended) | Up to 3000 nodes each with 30 pods (90,000 pods) | +| Large | Up to 5000 nodes each with 30 pods (150,000 pods) | diff --git a/_partials/self-hosted/feature-flags/_feature-flags-prerequisites.mdx b/_partials/self-hosted/feature-flags/_feature-flags-prerequisites.mdx index 7bb36a82f9f..68cfe4ff492 100644 --- a/_partials/self-hosted/feature-flags/_feature-flags-prerequisites.mdx +++ b/_partials/self-hosted/feature-flags/_feature-flags-prerequisites.mdx @@ -3,10 +3,10 @@ partial_category: self-hosted partial_name: feature-flags-prerequisites --- -- A or instance. +- A self-hosted {props.version} . - A system administrator with the - or - role. + or + role. - Access to the . 
\ No newline at end of file diff --git a/_partials/self-hosted/management-appliance/_installation-steps-prereqs.mdx b/_partials/self-hosted/management-appliance/_installation-steps-prereqs.mdx index 9156270a1f5..4ffc0ce0997 100644 --- a/_partials/self-hosted/management-appliance/_installation-steps-prereqs.mdx +++ b/_partials/self-hosted/management-appliance/_installation-steps-prereqs.mdx @@ -13,8 +13,7 @@ partial_name: installation-steps-prereqs ::: -- {props.edition} can be installed on a single node or on three nodes. For production environments, we recommend that three nodes be provisioned in advance for the Palette installation. We recommended the following - resources for each node. Refer to the Palette for additional sizing information. +- {props.version} can be installed on a single node or on three nodes. For production environments, we recommend that three nodes be provisioned in advance for the {props.version} installation. We recommend the following resources for each node. Refer to the section for additional sizing information. - 8 CPUs per node. diff --git a/_partials/self-hosted/management-appliance/_next-steps.mdx b/_partials/self-hosted/management-appliance/_next-steps.mdx index 086b09596a9..987b4796626 100644 --- a/_partials/self-hosted/management-appliance/_next-steps.mdx +++ b/_partials/self-hosted/management-appliance/_next-steps.mdx @@ -13,7 +13,7 @@ The following actions are recommended after installing {props.version} to ensure - Create a tenant in {props.version} to host your users. Refer to the guide for instructions on how to create a tenant in {props.version}. -- Activate your {props.version} installation before the trial mode expires. Refer to the +- Activate your {props.version} installation before the trial mode expires. Refer to the guide for instructions on how to activate your installation. - Create additional system administrator accounts and assign roles to users in the system console. 
Refer to the diff --git a/_partials/self-hosted/scar-migration/_scar-migration-guide.mdx b/_partials/self-hosted/scar-migration/_scar-migration-guide.mdx index 037e62f4c47..dc1e136ef3e 100644 --- a/_partials/self-hosted/scar-migration/_scar-migration-guide.mdx +++ b/_partials/self-hosted/scar-migration/_scar-migration-guide.mdx @@ -17,8 +17,7 @@ partial_name: scar-migration-guide manifests are stored. For example, if you deployed an airgapped instance of {props.edition} to VMware using an , navigate to the `/var/www/html/` directory. @@ -29,9 +28,8 @@ partial_name: scar-migration-guide Alternatively, if you deployed {props.edition} in an airgapped Kubernetes environment using , navigate to the directory served by the file server you configured. + url="/supported-environments/kubernetes/setup/airgap/" + />, navigate to the directory served by the file server you configured. 3. Compress the folder contents into an archive file called `manifests.tgz`. Issue the following command to create the archive. @@ -47,14 +45,12 @@ partial_name: scar-migration-guide If you deployed an airgapped instance of {props.edition} to VMware using an , the OCI registry address is provided by the `airgap-setup.sh` script output. Alternatively, if you deployed {props.edition} to an existing Kubernetes cluster using , contact your cluster administrator for the OCI registry configuration. 
diff --git a/_partials/self-hosted/scar-migration/_scar-migration-prerequisites.mdx b/_partials/self-hosted/scar-migration/_scar-migration-prerequisites.mdx index f2ab8a9cfc5..5022f163fda 100644 --- a/_partials/self-hosted/scar-migration/_scar-migration-prerequisites.mdx +++ b/_partials/self-hosted/scar-migration/_scar-migration-prerequisites.mdx @@ -3,36 +3,34 @@ partial_category: self-hosted partial_name: scar-migration-prerequisites --- -- A deployed self-hosted {props.edition} that uses a customer-managed SCAR to host {props.edition} + text="instance" + url="" + /> that uses a customer-managed SCAR to host {props.version} manifests. -- Access to the {props.edition} cluster kubeconfig file to verify the SCAR endpoint. +- Access to the {props.version} cluster kubeconfig file to verify the SCAR endpoint. :::tip - If you deployed {props.edition} using the Palette CLI, you can download the kubeconfig file from the {props.edition} cluster details + If you deployed {props.version} using the Palette CLI, you can download the kubeconfig file from the {props.version} cluster details page in the system console. Navigate to the **Enterprise Cluster Migration** page and click on the **Admin - Kubeconfig** link to download the kubeconfig file. If you deployed {props.edition} to an existing Kubernetes cluster, contact + Kubeconfig** link to download the kubeconfig file. If you deployed {props.version} to an existing Kubernetes cluster, contact your cluster administrator to obtain the kubeconfig file. For instructions on using the kubeconfig file to access your cluster, refer to the . ::: -- Access to the file server that hosts the {props.edition} manifests. +- Access to the file server that hosts the {props.version} manifests. - Ensure the Kubernetes cluster has a Container Storage Interface (CSI) available and at least 10 GB of free space. The Specman service requires this to create a Persistent Volume Claim (PVC) for storing content. 
-- The {props.edition} cluster must have been upgraded to version `4.5.15` or later. This is required for the SCAR migration to +- Your {props.version} instance must be version 4.5.15 or later. This is required for the SCAR migration to function properly. -- Access to the {props.edition} system console. +- Access to the {props.version} . - Ensure the following software is installed and available in the environment hosting the file server. For example, if - you deployed an airgapped instance of {props.edition} to VMware using an , these tools must be available on your airgap support VM. - [tar](https://www.gnu.org/software/tar/) diff --git a/docs/deprecated/automation/palette-cli/commands/validator.md b/docs/deprecated/automation/palette-cli/commands/validator.md index b4a0dd2abda..296623f23d0 100644 --- a/docs/deprecated/automation/palette-cli/commands/validator.md +++ b/docs/deprecated/automation/palette-cli/commands/validator.md @@ -297,17 +297,17 @@ requirements. Each plugin may have its own set of failures. Resolving failures will depend on the plugin and the failure. Use the error output to help you address the failure. Below are some tips to help you resolve failures. -| **Plugin** | **Failure Scenario** | **Guidance** | -| ---------- | --------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| AWS | Missing IAM permissions | The IAM role used by Palette is missing one or more required IAM permissions. 
Refer to [Required IAM Policies](../../../clusters/public-cloud/aws/required-iam-policies.md) for a comprehensive list of required IAM permissions and attach the missing permissions or policies. | -| AWS | Insufficient Service Quota Buffer | The usage quota for a service or multiple service quotas is above the specified buffer. Refer to AWS [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) documentation to review the default limits. Use the [Service Quotas](https://console.aws.amazon.com/servicequotas/) console to request an increase to your account, or remove resources to reduce the usage. | -| Network | TCP connection error | The Validator could not establish a Transmission Control Protocol (TCP) connection to the specified host and port. Ensure the host and port are accessible from the Validator's current network. If the current network is not in scope, ensure you conduct the test from a network in scope. Refer to the [Network Ports](../../../architecture/networking-ports.md) resource for a list of Palette required ports. | -| Network | Unable to connect | This could be caused by several issues. If you require network connections to use a proxy server, specify the usage of a network proxy and provide the required proxy server information. | -| Network | Unable to resolve DNS | The Validator was unable to resolve the specified DNS name. Ensure the DNS name is valid and accessible from the Validator's current network default DNS resolver. Use network tools such as `dig` and `nslookup` to debug DNS issues. | -| Network | Insufficient IP Addresses | The Validator was unable to find a sufficient number of IP addresses in the specified IP range. Ensure the IP range is valid and has enough IP addresses to satisfy the Validator's requirements. Discuss these findings with your network administrator. | -| vSphere | Missing permissions | The user account used by Palette or VerteX is missing one or more required permissions. 
Refer to [Palette Required vSphere Permissions](../../../enterprise-version/install-palette/install-on-vmware/vmware-system-requirements.md#vsphere-permissions), or the [VerteX Required vSphere Permissions](../../../vertex/install-palette-vertex/install-on-vmware/vmware-system-requirements.md#vsphere-permissions) resource for information about required permissions. | -| vSphere | Missing tags | Kubernetes regions and zone tags are missing from the vSphere environment. Refer to [Palette Required vSphere Tags](../../../enterprise-version/install-palette/install-on-vmware/vmware-system-requirements.md#zone-tagging), or the [VerteX Required vSphere Tags](../../../vertex/install-palette-vertex/install-on-vmware/vmware-system-requirements.md#zone-tagging) resource for information about zone tags. | -| vSphere | Folder missing or not accessible | The `spectro-templates` folder is missing or not accessible. Ensure the folder exists and the user account used by Palette or VerteX has read access to the folder. The `spectro-templates` folder is used by Palette and VerteX to download OVAs during the install. | +| **Plugin** | **Failure Scenario** | **Guidance** | +| ---------- | --------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| AWS | Missing IAM permissions | The IAM role used by Palette is missing one or more required IAM permissions. 
Refer to [Required IAM Policies](../../../clusters/public-cloud/aws/required-iam-policies.md) for a comprehensive list of required IAM permissions and attach the missing permissions or policies. | +| AWS | Insufficient Service Quota Buffer | The usage quota for a service or multiple service quotas is above the specified buffer. Refer to AWS [Service Quotas](https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html) documentation to review the default limits. Use the [Service Quotas](https://console.aws.amazon.com/servicequotas/) console to request an increase to your account, or remove resources to reduce the usage. | +| Network | TCP connection error | The Validator could not establish a Transmission Control Protocol (TCP) connection to the specified host and port. Ensure the host and port are accessible from the Validator's current network. If the current network is not in scope, ensure you conduct the test from a network in scope. Refer to the [Network Ports](../../../architecture/networking-ports.md) resource for a list of Palette required ports. | +| Network | Unable to connect | This could be caused by several issues. If you require network connections to use a proxy server, specify the usage of a network proxy and provide the required proxy server information. | +| Network | Unable to resolve DNS | The Validator was unable to resolve the specified DNS name. Ensure the DNS name is valid and accessible from the Validator's current network default DNS resolver. Use network tools such as `dig` and `nslookup` to debug DNS issues. | +| Network | Insufficient IP Addresses | The Validator was unable to find a sufficient number of IP addresses in the specified IP range. Ensure the IP range is valid and has enough IP addresses to satisfy the Validator's requirements. Discuss these findings with your network administrator. | +| vSphere | Missing permissions | The user account used by Palette or VerteX is missing one or more required permissions. 
Refer to the [self-hosted Palette Required vSphere Permissions](../../../../docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/non-airgap/vmware-system-requirements.md) or [VerteX Required vSphere Permissions](../../../../docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/non-airgap/vmware-system-requirements.md) guide for more information. | +| vSphere | Missing tags | Kubernetes regions and zone tags are missing from the vSphere environment. Refer to the [self-hosted Palette Required vSphere Tags](../../../../docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/non-airgap/vmware-system-requirements.md#zone-tagging) or the [VerteX Required vSphere Tags](../../../../docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/non-airgap/vmware-system-requirements.md#zone-tagging) guide for more information. | +| vSphere | Folder missing or not accessible | The `spectro-templates` folder is missing or not accessible. Ensure the folder exists and the user account used by Palette or VerteX has read access to the folder. The `spectro-templates` folder is used by Palette and VerteX to download OVAs during the install. | Every 30 seconds, the Validator will continuously re-issue a validation and update the `ValidationResult` CR with the result of the validation. The validation results are hashed, and result events are only emitted if the result has diff --git a/docs/docs-content/architecture/pxk.md b/docs/docs-content/architecture/pxk.md index 3e1216e1053..1b630801fe6 100644 --- a/docs/docs-content/architecture/pxk.md +++ b/docs/docs-content/architecture/pxk.md @@ -40,7 +40,7 @@ registry used by CNCF distributions. These images include all essential componen The FIPS-compliant variants of PXK and PXK-E, used in Palette VerteX, do not use upstream images directly. Instead, they use images recompiled with FIPS-compliant cryptographic libraries. 
For more information, refer to -[FIPS-Compliant Components](../vertex/fips/fips-compliant-components.md). +[FIPS-Compliant Clusters](../self-hosted-setup/vertex/fips.md#fips-compliant-clusters). diff --git a/docs/docs-content/automation/automation.md b/docs/docs-content/automation/automation.md index 049b77de202..bff9ca94be8 100644 --- a/docs/docs-content/automation/automation.md +++ b/docs/docs-content/automation/automation.md @@ -12,8 +12,9 @@ tags: ["automation"] This section contains documentation and guides for tools essential in automating tasks with Palette: - Palette CLI - Enables users to interact with Palette and create and manage resources, such as projects, virtual - clusters, and more. The Palette CLI is the primary method for installing a - [self-hosted Palette](../enterprise-version/enterprise-version.md) instance and deploying a + clusters, and more. The Palette CLI is the primary method for installing + [self-hosted Palette](../self-hosted-setup/palette/palette.md) and + [Palette VerteX](../self-hosted-setup/vertex/vertex.md), as well as deploying a [Private Cloud Gateway](../clusters/pcg/pcg.md). - Palette Go SDK - Enables developers to interact with Palette APIs for automated resource management using Go. diff --git a/docs/docs-content/automation/crossplane/deploy-cluster-aws-crossplane.md b/docs/docs-content/automation/crossplane/deploy-cluster-aws-crossplane.md index 9f155b94da8..6ddf65facd5 100644 --- a/docs/docs-content/automation/crossplane/deploy-cluster-aws-crossplane.md +++ b/docs/docs-content/automation/crossplane/deploy-cluster-aws-crossplane.md @@ -22,8 +22,8 @@ how to use Crossplane to deploy a Palette-managed Kubernetes cluster in AWS. [Create EC2 SSH Key Pair](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/create-key-pairs.html) page for guidance. - The AWS account must be registered in Palette. 
Follow the - [Add an AWS Account to Palette](../../clusters/public-cloud/aws/add-aws-accounts.md) guide to register your account in - Palette. + [Add an AWS Account to Palette](../../clusters/public-cloud/aws/add-aws-accounts/add-aws-accounts.md) guide to + register your account in Palette. - A Kubernetes cluster with at least 2 GB of RAM. This guide uses a [kind](https://kind.sigs.k8s.io) cluster as an example. Refer to the [kind Quick Start](https://kind.sigs.k8s.io/docs/user/quick-start/) guide to learn how to install kind and create a cluster. diff --git a/docs/docs-content/automation/palette-cli/commands/ec.md b/docs/docs-content/automation/palette-cli/commands/ec.md index e488ae4f4a1..432aba5a706 100644 --- a/docs/docs-content/automation/palette-cli/commands/ec.md +++ b/docs/docs-content/automation/palette-cli/commands/ec.md @@ -11,9 +11,8 @@ The `ec` command installs a self-hosted Palette Enterprise Cluster (EC) in your conducted through an interactive wizard that guides you through the various install configurations available. A local kind cluster is created to facilitate creating the Enterprise cluster in the target environment. You do not need to install kind or any other dependencies. The CLI includes all the required dependencies to set up the kind cluster. You -can use the `ec` command to install a -[self-hosted Palette](../../../enterprise-version/install-palette/install-palette.md) instance or a self-hosted -[VerteX](../../../vertex/install-palette-vertex/install-palette-vertex.md) instance. +can use the `ec` command to install [self-hosted Palette](../../../self-hosted-setup/palette/palette.md) or +[Palette VerteX](../../../self-hosted-setup/vertex/vertex.md). 
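For context, the interactive wizard described above is launched with a single Palette CLI command. A minimal sketch, assuming the Palette CLI is already installed and on your `PATH` (prompts vary by CLI version):

```shell
# Start the interactive install wizard for a self-hosted Enterprise Cluster.
# The CLI bootstraps its own temporary kind cluster; no extra dependencies
# need to be installed beforehand.
palette ec install
```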
## Subcommands diff --git a/docs/docs-content/byoos/capi-image-builder/build-image-vmware/airgap-build/rhel-capi-airgap.md b/docs/docs-content/byoos/capi-image-builder/build-image-vmware/airgap-build/rhel-capi-airgap.md index d9598f783e2..ad8f670226f 100644 --- a/docs/docs-content/byoos/capi-image-builder/build-image-vmware/airgap-build/rhel-capi-airgap.md +++ b/docs/docs-content/byoos/capi-image-builder/build-image-vmware/airgap-build/rhel-capi-airgap.md @@ -28,13 +28,13 @@ This guide teaches you how to use the [CAPI Image Builder](../../capi-image-buil [Red Hat Developer Portal](https://developers.redhat.com/products/rhel/download). - An airgapped instance of - [Palette](../../../../enterprise-version/install-palette/install-on-vmware/airgap-install/install.md) or - [VerteX](../../../../vertex/install-palette-vertex/install-on-vmware/airgap-install/install.md) deployed in VMware - vSphere. + [self-hosted Palette](../../../../self-hosted-setup/palette/supported-environments/vmware/install/install.md) or + [Palette VerteX](../../../../self-hosted-setup/vertex/supported-environments/vmware/install/install.md) deployed in + VMware vSphere. - SSH access to the VMware vSphere - [airgap support VM](../../../../enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md) - used to deploy the airgapped instance of Palette or Vertex. + [airgap support VM for self-hosted Palette](../../../../self-hosted-setup/palette/supported-environments/vmware/setup/airgap/airgap.md) + or [Palette VerteX](../../../../self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/airgap.md). - The following artifacts must be available in the root home directory of the airgap support VM. You can download the files on a system with internet access and then transfer them to your airgap environment.
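To illustrate the transfer step above: from a machine with internet access, the downloaded artifacts can be copied into the airgap support VM's root home directory over SSH. The archive name and host below are placeholders, not the actual artifact names:

```shell
# Copy a downloaded artifact into /root on the airgap support VM.
# Replace the archive name and host with your own values.
scp ./capi-builder-artifact.tar.gz root@<airgap-vm-ip-or-fqdn>:/root/
```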
@@ -66,10 +66,10 @@ This guide teaches you how to use the [CAPI Image Builder](../../capi-image-buil Whether you use the IP address or FQDN depends on the hostname used when setting up your airgap support VM. If you used an - [existing RHEL VM](../../../../enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/env-setup-vm.md) - to set up your VM, this is always the FQDN; if you used an - [OVA](../../../../enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md), - it depends on the hostname used when invoking the command `/bin/airgap-setup.sh `. + [existing RHEL VM](../../../../self-hosted-setup/palette/supported-environments/vmware/setup/airgap/rhel-vm.md) to + set up your VM, this is always the FQDN; if you used an + [OVA](../../../../self-hosted-setup/palette/supported-environments/vmware/setup/airgap/rhel-vm.md), it depends on + the hostname used when invoking the command `/bin/airgap-setup.sh `. ::: diff --git a/docs/docs-content/byoos/capi-image-builder/build-image-vmware/airgap-build/rocky-capi-airgap.md b/docs/docs-content/byoos/capi-image-builder/build-image-vmware/airgap-build/rocky-capi-airgap.md index 2d9d3784893..3926edae8f9 100644 --- a/docs/docs-content/byoos/capi-image-builder/build-image-vmware/airgap-build/rocky-capi-airgap.md +++ b/docs/docs-content/byoos/capi-image-builder/build-image-vmware/airgap-build/rocky-capi-airgap.md @@ -25,13 +25,13 @@ This guide teaches you how to use the [CAPI Image Builder](../../capi-image-buil - Access to a VMware vSphere environment, including credentials and permission to create virtual machines. - An airgapped instance of - [Palette](../../../../enterprise-version/install-palette/install-on-vmware/airgap-install/install.md) or - [VerteX](../../../../vertex/install-palette-vertex/install-on-vmware/airgap-install/install.md) deployed in VMware - vSphere. 
+ [self-hosted Palette](../../../../self-hosted-setup/palette/supported-environments/vmware/install/install.md) or + [Palette VerteX](../../../../self-hosted-setup/vertex/supported-environments/vmware/install/install.md) deployed in + VMware vSphere. - SSH access to the VMware vSphere - [airgap support VM](../../../../enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md) - used to deploy the airgapped instance of Palette or Vertex. + [airgap support VM for self-hosted Palette](../../../../self-hosted-setup/palette/supported-environments/vmware/setup/airgap/airgap.md) + or [Palette VerteX](../../../../self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/airgap.md). - The following artifacts must be available in the root home directory of the airgap support VM. You can download the files on a system with internet access and then transfer them to your airgap environment. @@ -63,10 +63,10 @@ This guide teaches you how to use the [CAPI Image Builder](../../capi-image-buil Whether you use the IP address or FQDN depends on the hostname used when setting up your airgap support VM. If you used an - [existing RHEL VM](../../../../enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/env-setup-vm.md) - to set up your VM, this is always the FQDN; if you used an - [OVA](../../../../enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md), - it depends on the hostname used when invoking the command `/bin/airgap-setup.sh `. + [existing RHEL VM](../../../../self-hosted-setup/palette/supported-environments/vmware/setup/airgap/rhel-vm.md) to + set up your VM, this is always the FQDN; if you used an + [OVA](../../../../self-hosted-setup/palette/supported-environments/vmware/setup/airgap/rhel-vm.md), it depends on + the hostname used when invoking the command `/bin/airgap-setup.sh `.
::: diff --git a/docs/docs-content/byoos/capi-image-builder/config-reference.md b/docs/docs-content/byoos/capi-image-builder/config-reference.md index 5dce96d1ed3..760d688a8da 100644 --- a/docs/docs-content/byoos/capi-image-builder/config-reference.md +++ b/docs/docs-content/byoos/capi-image-builder/config-reference.md @@ -109,10 +109,10 @@ create a separate configuration file for each. Fill out the parameters below if you are building the image in an air-gapped environment. Otherwise, you can skip this section. -| Parameter | Description | Required | -| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | -| `airgap` | Set to `true` if you are building the image in an air-gapped environment. | Yes | -| `airgap_ip` | The IP address or hostname of the airgap support VM that has the required dependencies. Refer to the [Self-Hosted Palette](../../enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md) and [Vertex](../../vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md) Environment Setup pages for instructions on how to deploy an airgap support VM. 
| Yes | +| Parameter | Description | Required | +| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | +| `airgap` | Set to `true` if you are building the image in an air-gapped environment. | Yes | +| `airgap_ip` | The IP address or hostname of the airgap support VM that has the required dependencies. Refer to the [self-hosted Palette](../../self-hosted-setup/palette/supported-environments/vmware/setup/airgap/airgap.md) and [Palette Vertex](../../self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/airgap.md) Environment Setup pages for instructions on how to deploy an airgap support VM. | Yes | ## Example Configuration diff --git a/docs/docs-content/cluster-templates/cluster-templates.md b/docs/docs-content/cluster-templates/cluster-templates.md index b0a99451bed..dcd420215de 100644 --- a/docs/docs-content/cluster-templates/cluster-templates.md +++ b/docs/docs-content/cluster-templates/cluster-templates.md @@ -15,7 +15,7 @@ tags: ["cluster templates", "templates", "policies"] ::: Cluster templates are reusable blueprints that define and enforce the desired state and lifecycle of clusters deployed -with Palette or [Palette VerteX](../vertex/vertex.md). +with Palette or [Palette VerteX](../self-hosted-setup/vertex/vertex.md). 
Unlike [cluster profiles](../profiles/cluster-profiles/cluster-profiles.md), which define the cluster's software stack (including OS, Kubernetes distribution, network, storage, and add‑ons), cluster templates are a higher level abstraction diff --git a/docs/docs-content/cluster-templates/create-cluster-template-policies/maintenance-policy.md b/docs/docs-content/cluster-templates/create-cluster-template-policies/maintenance-policy.md index 8d82f47f387..5d5eb6e82be 100644 --- a/docs/docs-content/cluster-templates/create-cluster-template-policies/maintenance-policy.md +++ b/docs/docs-content/cluster-templates/create-cluster-template-policies/maintenance-policy.md @@ -45,7 +45,7 @@ cluster templates. ### Prerequisites -- The **ClusterTemplate** [feature flag](../../enterprise-version/system-management/feature-flags.md) enabled. +- The **ClusterTemplate** [feature flag](../../self-hosted-setup/palette/system-management/feature-flags.md) enabled. - The `spcPolicy.create` permission to create cluster template policies. Refer to our [Roles and Permissions](../../user-management/palette-rbac/project-scope-roles-permissions.md#project) guide for more @@ -133,7 +133,7 @@ regardless if they are attached to a cluster template and the template is or is ### Prerequisites -- The **ClusterTemplate** [feature flag](../../enterprise-version/system-management/feature-flags.md) enabled. +- The **ClusterTemplate** [feature flag](../../self-hosted-setup/palette/system-management/feature-flags.md) enabled. - The `spcPolicy.update` permission to update cluster template policies. Refer to our [Roles and Permissions](../../user-management/palette-rbac/project-scope-roles-permissions.md#project) guide for more @@ -167,7 +167,7 @@ if it is not linked to a cluster template, regardless of whether the template is ### Prerequisites -- The **ClusterTemplate** [feature flag](../../enterprise-version/system-management/feature-flags.md) enabled. 
+- The **ClusterTemplate** [feature flag](../../self-hosted-setup/palette/system-management/feature-flags.md) enabled. - The `spcPolicy.delete` permission to delete cluster template policies. Refer to our [Roles and Permissions](../../user-management/palette-rbac/project-scope-roles-permissions.md#project) guide for more diff --git a/docs/docs-content/cluster-templates/create-cluster-templates.md b/docs/docs-content/cluster-templates/create-cluster-templates.md index a20804559fa..e2d735b0ca2 100644 --- a/docs/docs-content/cluster-templates/create-cluster-templates.md +++ b/docs/docs-content/cluster-templates/create-cluster-templates.md @@ -25,7 +25,7 @@ allow environment overrides where necessary. ## Prerequisites -- The **ClusterTemplate** [feature flag](../enterprise-version/system-management/feature-flags.md) enabled. +- The **ClusterTemplate** [feature flag](../self-hosted-setup/palette/system-management/feature-flags.md) enabled. - The `clusterTemplate.create` permission to create cluster templates. Refer to our [Roles and Permissions](../user-management/palette-rbac/project-scope-roles-permissions.md#project) guide for more diff --git a/docs/docs-content/cluster-templates/delete-cluster-templates.md b/docs/docs-content/cluster-templates/delete-cluster-templates.md index 3dccb553f27..f43a96e3350 100644 --- a/docs/docs-content/cluster-templates/delete-cluster-templates.md +++ b/docs/docs-content/cluster-templates/delete-cluster-templates.md @@ -20,7 +20,7 @@ and policies in other clusters. ## Prerequisites -- The **ClusterTemplate** [feature flag](../enterprise-version/system-management/feature-flags.md) enabled. +- The **ClusterTemplate** [feature flag](../self-hosted-setup/palette/system-management/feature-flags.md) enabled. - The `clusterTemplate.delete` permission to delete cluster templates. 
Refer to our [Roles and Permissions](../user-management/palette-rbac/project-scope-roles-permissions.md#project) guide for more diff --git a/docs/docs-content/cluster-templates/modify-cluster-templates.md b/docs/docs-content/cluster-templates/modify-cluster-templates.md index 0e54f7d6a97..ff16ddc2e4f 100644 --- a/docs/docs-content/cluster-templates/modify-cluster-templates.md +++ b/docs/docs-content/cluster-templates/modify-cluster-templates.md @@ -32,7 +32,7 @@ flexible, version-driven management. ## Prerequisites -- The **ClusterTemplate** [feature flag](../enterprise-version/system-management/feature-flags.md) enabled. +- The **ClusterTemplate** [feature flag](../self-hosted-setup/palette/system-management/feature-flags.md) enabled. - The `clusterTemplate.update` permission to modify cluster templates. Refer to our [Roles and Permissions](../user-management/palette-rbac/project-scope-roles-permissions.md#project) guide for more diff --git a/docs/docs-content/clusters/cluster-management/backup-restore/add-backup-location-dynamic.md b/docs/docs-content/clusters/cluster-management/backup-restore/add-backup-location-dynamic.md index 28918e3d72f..2b378e85984 100644 --- a/docs/docs-content/clusters/cluster-management/backup-restore/add-backup-location-dynamic.md +++ b/docs/docs-content/clusters/cluster-management/backup-restore/add-backup-location-dynamic.md @@ -33,9 +33,9 @@ You can use the same AWS account in which you deploy your Kubernetes cluster to You can also use a different AWS account to add an S3 bucket as the backup location. Select the tab below that best matches your use case. -- [Single Cloud Account with AWS STS](#single-cloud-account-with-aws-sts) -- [Multiple Cloud Accounts with AWS STS](#multiple-cloud-accounts-with-aws-sts) +- [Single Cloud Account with AWS STS](#single-cloud-account-with-aws-sts) +- [Multiple Cloud Accounts with AWS STS](#multiple-cloud-accounts-with-aws-sts) ## Single Cloud Account with AWS STS @@ -45,9 +45,8 @@ cloud account.
### Prerequisites - If you are using a self-hosted Palette or Vertex instance, you must configure an AWS account at the instance-level to - allow tenants to add AWS accounts using STS. For more information, refer to - [Enable Adding AWS Accounts Using STS - Palette](../../../enterprise-version/system-management/configure-aws-sts-account.md) - or [Enable Adding AWS Accounts Using STS - VerteX](../../../vertex/system-management/configure-aws-sts-account.md) + allow tenants to add AWS accounts using STS. For more information, refer to the + [Add AWS Accounts Using STS](../../public-cloud/aws/add-aws-accounts/configure-aws-sts-account.md) guide. - Both your Palette environment instance and the S3 bucket are hosted on AWS. This prerequisite is more applicable to self-hosted Palette and Palette VerteX customers. Palette SaaS is hosted in an AWS environment. @@ -208,7 +207,8 @@ A multi-cloud account scenario requires you to perform the following authenticat 1. Grant Palette access to the cluster in AWS Account A. When you register a primary cloud account in Palette, you authenticate and authorize Palette to deploy clusters in the cloud account. Check out the - [Add AWS Account](../../public-cloud/aws/add-aws-accounts.md) to guidance on how to add an AWS account in Palette. + [Add AWS Account](../../public-cloud/aws/add-aws-accounts/add-aws-accounts.md) for guidance on how to add an AWS + account in Palette. 2. Give Palette permission to use the S3 buckets in AWS Account B. Set the bucket permissions and link them to an IAM role. Then, update the IAM role to let Palette assume it. @@ -222,9 +222,8 @@ multiple cloud accounts. ### Prerequisites - If you are using a self-hosted Palette or Vertex instance, you must configure an AWS account at the instance-level to - allow tenants to add AWS accounts using STS.
For more information, refer to - [Enable Adding AWS Accounts Using STS - Palette](../../../enterprise-version/system-management/configure-aws-sts-account.md) - or [Enable Adding AWS Accounts Using STS - VerteX](../../../vertex/system-management/configure-aws-sts-account.md) + allow tenants to add AWS accounts using STS. For more information, refer to the + [Add AWS Accounts Using STS](../../public-cloud/aws/add-aws-accounts/configure-aws-sts-account.md) guide. - Both your Palette environment instance and the S3 bucket are hosted on AWS. This prerequisite is more applicable to self-hosted Palette and Palette VerteX customers. Palette SaaS is hosted in an AWS environment. diff --git a/docs/docs-content/clusters/cluster-management/cluster-proxy.md b/docs/docs-content/clusters/cluster-management/cluster-proxy.md index 2ef475ebff9..af3079266a0 100644 --- a/docs/docs-content/clusters/cluster-management/cluster-proxy.md +++ b/docs/docs-content/clusters/cluster-management/cluster-proxy.md @@ -51,9 +51,9 @@ may need to configure the proxy server to support gRPC. -- A self-hosted Palette instance is deployed into an active and healthy Kubernetes cluster. Refer to - [Self-Hosted Palette Installation](../../enterprise-version/install-palette/install-palette.md) for additional - guidance. +- A [self-hosted Palette](../../self-hosted-setup/palette/supported-environments/kubernetes/kubernetes.md) or + [Palette VerteX](../../self-hosted-setup/vertex/supported-environments/kubernetes/kubernetes.md) instance deployed on + an active and healthy Kubernetes cluster. - The self-hosted Palette instance is configured to use the proxy server that you intend for your applications to use for outbound communications. @@ -90,7 +90,7 @@ may need to configure the proxy server to support gRPC. Once you have deployed the PCG, you must create a new cloud account associated with the PCG. 
Refer to the following resources to learn how to create a cloud account: - - [Add an AWS Account to Palette](../public-cloud/aws/add-aws-accounts.md) + - [Add an AWS Account to Palette](../public-cloud/aws/add-aws-accounts/add-aws-accounts.md) - [Register and Manage Azure Cloud Account](../public-cloud/azure/azure-cloud.md) - [Register and Manage GCP Accounts](../public-cloud/gcp/add-gcp-accounts.md) @@ -130,8 +130,8 @@ may need to configure the proxy server to support gRPC. 1. If you are using a self-hosted Palette instance, you have the opportunity to configure proxy settings during installation. If you are using the Palette CLI for installation, refer to - [Self Hosted Palette - Installation](../../enterprise-version/install-palette/install-on-kubernetes/install.md) to - learn how to specify proxy settings during installation. If you used Helm charts for installation, refer to + [Self Hosted Palette - Installation](../../self-hosted-setup/palette/supported-environments/kubernetes/install/non-airgap.md) + to learn how to specify proxy settings during installation. If you used Helm charts for installation, refer to [Enable and Manage Proxy Configurations](../pcg/manage-pcg/configure-proxy.md) to learn how to install Reach and use it to configure proxy settings. The process to install Reach on an existing self-hosted Palette instance is the same as the process to install Reach on an existing PCG cluster. diff --git a/docs/docs-content/clusters/cluster-management/image-swap.md b/docs/docs-content/clusters/cluster-management/image-swap.md index 658e34b2539..7be4ee29e56 100644 --- a/docs/docs-content/clusters/cluster-management/image-swap.md +++ b/docs/docs-content/clusters/cluster-management/image-swap.md @@ -110,9 +110,9 @@ examples and information. - Image swap is only supported for managed Kubernetes clusters, such as Amazon EKS, Azure AKS, and Google GKE.
- Self-hosted Palette and VerteX installations can support image swap functionality for non-managed Kubernetes clusters. - This requires mirror registries to be specified during the self-hosted Palette or VerteX installation. Refer to the - [Self-Hosted Palette Installation](../../enterprise-version/install-palette/install-palette.md) or - [VerteX Install](../../vertex/install-palette-vertex/install-palette-vertex.md) guide for more information. + This requires mirror registries to be specified during the + [self-hosted Palette](../../self-hosted-setup/palette/palette.md) or + [Palette VerteX installation](../../self-hosted-setup/vertex/vertex.md) process. The following table summarizes the image swap support for different scenarios and what Palette deployment type is required. diff --git a/docs/docs-content/clusters/data-center/maas/create-manage-maas-lxd-clusters.md b/docs/docs-content/clusters/data-center/maas/create-manage-maas-lxd-clusters.md index 678713150ff..4db38682eb4 100644 --- a/docs/docs-content/clusters/data-center/maas/create-manage-maas-lxd-clusters.md +++ b/docs/docs-content/clusters/data-center/maas/create-manage-maas-lxd-clusters.md @@ -36,8 +36,7 @@ metal machines needed to run control planes and keeps virtualization overhead lo - MAAS hosts that support KVM or LXD VMs. -- The **LxdMaas** feature flag enabled in the - [system console](../../../enterprise-version/system-management/feature-flags.md). +- The **LxdMaas** [feature flag](../../../self-hosted-setup/palette/system-management/feature-flags.md) enabled. :::info @@ -81,7 +80,7 @@ are managed by the host cluster. The worker nodes are still deployed on bare-met 11. To use a MAAS bare metal host as a hypervisor for your control plane components, activate the **Host LXD-Based Control Planes** switch. Select **Next**. 
- ![Activating the Host LXD-Based Control Planes switch](../../../../../static/assets/docs/images/clusters_data-center_maas_profile-lxd-4-7-b.webp) + ![Activating the Host LXD-Based Control Planes switch](/clusters_data-center_maas_profile-lxd-4-7-b.webp) :::warning @@ -153,7 +152,7 @@ The cluster **Overview** tab displays the status and health of your cluster, as 11. When creating a workload cluster that will leverage MAAS LXD or will use an existing host LXD-based control plane, leave the **Host LXD-Based Control Planes** option disabled. Select **Next**. - ![Activating the Host LXD-Based Control Planes switch](../../../../../static/assets/docs/images/clusters_data-center_maas_profile-lxd-4-7-b.webp) + ![Activating the Host LXD-Based Control Planes switch](/clusters_data-center_maas_profile-lxd-4-7-b.webp) 12. Configure the control plane and worker node pools. The following input fields apply to MAAS control plane and worker node pools. For a detailed list of input fields that are common across environments and their usage, refer to our @@ -180,7 +179,7 @@ The cluster **Overview** tab displays the status and health of your cluster, as ::: - ![Screenshot of Cloud Configuration section in Node pools configuration](../../../../../static/assets/docs/images/clusters_data-center_maas_profile-lxd-cloud-config_4-7-b.webp) + ![Screenshot of Cloud Configuration section in Node pools configuration](/clusters_data-center_maas_profile-lxd-cloud-config_4-7-b.webp) 13. On the **Optional cluster settings** page, select from among the items on the left menu to configure additional options. Refer to applicable guide for additional information. 
diff --git a/docs/docs-content/clusters/data-center/nutanix/register-nutanix-cloud.md b/docs/docs-content/clusters/data-center/nutanix/register-nutanix-cloud.md index f9f5e892921..a84350c24a9 100644 --- a/docs/docs-content/clusters/data-center/nutanix/register-nutanix-cloud.md +++ b/docs/docs-content/clusters/data-center/nutanix/register-nutanix-cloud.md @@ -26,8 +26,8 @@ default Cluster API (CAPI) version, and use APIs to register a Nutanix cloud to - A Palette account with system console access. The user with this privilege is the [system administrator](../../../glossary-all.md#system-administrator) of the self-hosted - [Palette](https://docs.spectrocloud.com/enterprise-version/system-management/#system-console) or - [VerteX](https://docs.spectrocloud.com/vertex/system-management/#system-console) instance. + [Palette](../../../self-hosted-setup/palette/palette.md) or + [Palette VerteX](../../../self-hosted-setup/vertex/vertex.md) instance. - A Nutanix logo downloaded. Review logo requirements in [Register the Cloud](#register-the-cloud). @@ -210,7 +210,7 @@ cloud to Palette. Alternatively, you can use an API platform such as [Postman](h - You have completed the steps in [Customize YAML Configuration Files](#customize-yaml-configuration-files). - Only an - [Operations Administrator](../../../enterprise-version/system-management/account-management/account-management.md#operations-administrator) + [Operations Administrator](../../../self-hosted-setup/palette/system-management/account-management/account-management.md#operations-administrator) is allowed to register a Nutanix cloud. - The logo file must not exceed 100 KB in size. 
To ensure image quality, ensure at least one dimension in either width diff --git a/docs/docs-content/clusters/edge/edgeforge-workflow/palette-canvos/fips.md b/docs/docs-content/clusters/edge/edgeforge-workflow/palette-canvos/fips.md index 424aa026ce5..8e443f6153a 100644 --- a/docs/docs-content/clusters/edge/edgeforge-workflow/palette-canvos/fips.md +++ b/docs/docs-content/clusters/edge/edgeforge-workflow/palette-canvos/fips.md @@ -67,9 +67,9 @@ This page guides you through the process of building FIPS-compliant Edge Install [Deploy Cluster with a Private Provider Registry](../../site-deployment/deploy-custom-registries/deploy-private-registry.md) guide for instructions on how to configure the credentials. -- A [VerteX](/docs/docs-content/vertex/vertex.md) or Palette account. Refer to - [Palette VerteX](/docs/docs-content/vertex/vertex.md#access-palette-vertex) for information on how to set up a VerteX - account. +- A [VerteX](../../../../self-hosted-setup/vertex/vertex.md) or Palette account. Refer to + [Palette VerteX](../../../../self-hosted-setup/vertex/vertex.md#access-palette-vertex) for information on how to set + up a VerteX account. - VerteX registration token for pairing Edge hosts with VerteX or a Palette registration token. You will need tenant admin access to VerteX to generate a new registration token. 
For detailed instructions, refer to the diff --git a/docs/docs-content/clusters/public-cloud/aws/_category_.json b/docs/docs-content/clusters/public-cloud/aws/_category_.json index 3fca6fb9f9b..094470741db 100644 --- a/docs/docs-content/clusters/public-cloud/aws/_category_.json +++ b/docs/docs-content/clusters/public-cloud/aws/_category_.json @@ -1,3 +1,3 @@ { - "position": 0 + "position": 10 } diff --git a/docs/docs-content/enterprise-version/install-palette/_category_.json b/docs/docs-content/clusters/public-cloud/aws/add-aws-accounts/_category_.json similarity index 100% rename from docs/docs-content/enterprise-version/install-palette/_category_.json rename to docs/docs-content/clusters/public-cloud/aws/add-aws-accounts/_category_.json diff --git a/docs/docs-content/clusters/public-cloud/aws/add-aws-accounts.md b/docs/docs-content/clusters/public-cloud/aws/add-aws-accounts/add-aws-accounts.md similarity index 89% rename from docs/docs-content/clusters/public-cloud/aws/add-aws-accounts.md rename to docs/docs-content/clusters/public-cloud/aws/add-aws-accounts/add-aws-accounts.md index 8d0a9c42a3d..a179e3e5dc3 100644 --- a/docs/docs-content/clusters/public-cloud/aws/add-aws-accounts.md +++ b/docs/docs-content/clusters/public-cloud/aws/add-aws-accounts/add-aws-accounts.md @@ -37,12 +37,12 @@ Use the steps below to add an AWS cloud account using static access credentials. #### Prerequisites -- A Palette account with [tenant admin](../../../tenant-settings/tenant-settings.md) access. +- A Palette account with [tenant admin](../../../../tenant-settings/tenant-settings.md) access. - An AWS account with an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) or [IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) for Palette. 
 
-- An AWS account with the [required IAM policies](required-iam-policies.md) assigned to the Palette IAM user or IAM
+- An AWS account with the [required IAM policies](../required-iam-policies.md) assigned to the Palette IAM user or IAM
   role.
 
 #### Add AWS Account to Palette
@@ -61,17 +61,16 @@ Use the steps below to add an AWS cloud account using Security Token Service (ST
 
 #### Prerequisites
 
-- A Palette account with [tenant admin](../../../tenant-settings/tenant-settings.md) access.
+- A Palette account with [tenant admin](../../../../tenant-settings/tenant-settings.md) access.
 
-- If you are using a self-hosted instance of Palette or VerteX, you must configure an AWS account at the instance-level
-  to allow tenants to add AWS accounts using STS. For more information, refer to
-  [Enable Adding AWS Accounts Using STS - Palette](../../../enterprise-version/system-management/configure-aws-sts-account.md)
-  or [Enable Adding AWS Accounts Using STS - VerteX](../../../vertex/system-management/configure-aws-sts-account.md).
+- If you are using a self-hosted instance of Palette or VerteX, you must configure an AWS account at the instance-level
+  to allow tenants to add AWS accounts using STS. For more information, refer to the
+  [Add AWS Accounts Using STS](../../../public-cloud/aws/add-aws-accounts/configure-aws-sts-account.md) guide.
 
 - An AWS account with an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) or
   [IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) for Palette.
 
-- An AWS account with the [required IAM policies](required-iam-policies.md) assigned to the Palette IAM user or IAM
+- An AWS account with the [required IAM policies](../required-iam-policies.md) assigned to the Palette IAM user or IAM
   role.
 
 #### Add AWS Account to Palette
@@ -125,12 +124,12 @@ Use the steps below to add an AWS cloud account using static access credentials.
 
 #### Prerequisites
 
-- A Palette account with [tenant admin](../../../tenant-settings/tenant-settings.md) access.
+- A Palette account with [tenant admin](../../../../tenant-settings/tenant-settings.md) access.
 
 - An AWS account with an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) or
   [IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) for Palette.
 
-- An AWS account with the [required IAM policies](required-iam-policies.md) assigned to the Palette IAM user or IAM
+- An AWS account with the [required IAM policies](../required-iam-policies.md) assigned to the Palette IAM user or IAM
   role.
 
 #### Add AWS GovCloud Account to Palette
@@ -173,17 +172,16 @@ Use the steps below to add an AWS cloud account using STS credentials.
 
 #### Prerequisites
 
-- A Palette account with [tenant admin](../../../tenant-settings/tenant-settings.md) access.
+- A Palette account with [tenant admin](../../../../tenant-settings/tenant-settings.md) access.
 
-- If you are using a self-hosted instance of Palette or VerteX, you must configure an AWS account at the instance-level
-  to allow tenants to add AWS accounts using STS. For more information, refer to
-  [Enable Adding AWS Accounts Using STS - Palette](../../../enterprise-version/system-management/configure-aws-sts-account.md)
-  or [Enable Adding AWS Accounts Using STS - VerteX](../../../vertex/system-management/configure-aws-sts-account.md).
+- If you are using a self-hosted instance of Palette or VerteX, you must configure an AWS account at the instance-level
+  to allow tenants to add AWS accounts using STS. For more information, refer to the
+  [Add AWS Accounts Using STS](./configure-aws-sts-account.md) guide.
 
 - An AWS account with an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) or
   [IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) for Palette.
 
-- An AWS account with the [required IAM policies](required-iam-policies.md) assigned to the Palette IAM user or IAM
+- An AWS account with the [required IAM policies](../required-iam-policies.md) assigned to the Palette IAM user or IAM
   role.
 
 #### Add AWS GovCloud Account to Palette
@@ -227,9 +225,9 @@ Your newly added AWS cloud account is listed under the AWS section.
 ## AWS Secret Cloud Account (US)
 
 You can configure [AWS Secret Cloud](https://aws.amazon.com/federal/secret-cloud/) accounts in
-[Palette VerteX](../../../vertex/vertex.md) to deploy AWS EKS clusters in the AWS Secret region. Depending on your
-organization's compliance requirements, you can choose between standard authentication (standard access credentials) or
-secure compliance validation using your SC2S Access Portal (SCAP) credentials.
+[Palette VerteX](../../../../self-hosted-setup/vertex/vertex.md) to deploy AWS EKS clusters in the AWS Secret region.
+Depending on your organization's compliance requirements, you can choose between standard authentication (standard
+access credentials) or secure compliance validation using your SC2S Access Portal (SCAP) credentials.
 
 :::preview
 
@@ -256,20 +254,21 @@ secure compliance validation using your SC2S Access Portal (SCAP) credentials.
 
 ### Prerequisites
 
-- [Palette VerteX installed](../../../vertex/install-palette-vertex/install-palette-vertex.md) and
-  [tenant admin](../../../tenant-settings/tenant-settings.md) access.
+- [Palette VerteX installed](../../../../self-hosted-setup/vertex/vertex.md) and
+  [tenant admin](../../../../tenant-settings/tenant-settings.md) access.
 
-- The **AwsSecretPartition** [feature flag](../../../vertex/system-management/feature-flags.md) enabled in the Palette
-  VerteX [system console](../../../vertex/system-management/system-management.md).
+- The **AwsSecretPartition** [feature flag](../../../../self-hosted-setup/vertex/system-management/feature-flags.md)
+  enabled in the Palette VerteX
+  [system console](../../../../self-hosted-setup/vertex/system-management/system-management.md#access-the-system-console).
 
 - An AWS account with an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) or
   [IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) for Palette VerteX.
 
-- An AWS account with the [required IAM policies](required-iam-policies.md) assigned to the Palette VerteX IAM user or
-  IAM role.
+- An AWS account with the [required IAM policies](../required-iam-policies.md) assigned to the Palette VerteX IAM user
+  or IAM role.
 
 - A secure connection to your AWS Secret Cloud account, such as via a
-  [Private Cloud Gateway (PCG)](../../../clusters/pcg/pcg.md), Wide Area Network tunnel, or AWS Private Link.
+  [Private Cloud Gateway (PCG)](../../../../clusters/pcg/pcg.md), Wide Area Network tunnel, or AWS Private Link.
 
 ### Static Access Credentials
 
@@ -309,7 +308,7 @@ Use the steps below to add an AWS Secret Cloud account using static access crede
 8. If you are using a PCG to connect to your AWS Secret Cloud account to Palette VerteX, toggle **Connect Private Cloud
    Gateway** on, and select a **Private Cloud Gateway** from the list. This list is populated automatically with the
    **Private Cloud Gateways** listed in **Tenant Settings**. For more information, refer to the
-   [Private Cloud Gateway](../../../clusters/pcg/pcg.md) page.
+   [Private Cloud Gateway](../../../../clusters/pcg/pcg.md) page.
 
 9. Click **Confirm** to create your AWS Secret Cloud account.
 
@@ -362,7 +361,7 @@ Use the steps below to add an AWS Secret Cloud account using SCAP secure complia
 9. If you are using a PCG to connect to your AWS Secret Cloud account to Palette VerteX, toggle **Connect Private Cloud
    Gateway** on, and select a **Private Cloud Gateway** from the list. This list is populated automatically with the
    **Private Cloud Gateways** listed in **Tenant Settings**. For more information, refer to the
-   [Private Cloud Gateway](../../../clusters/pcg/pcg.md) page.
+   [Private Cloud Gateway](../../../../clusters/pcg/pcg.md) page.
 
 10. Click **Confirm** to create your AWS Secret Cloud account.
@@ -377,6 +376,6 @@ newly added AWS cloud account is listed under the AWS section.
 Now that you have added an AWS account to Palette, you can start deploying Kubernetes clusters to your AWS account. To
 learn how to get started with deploying Kubernetes clusters to AWS, check out the following guides:
 
-- [Create and Manage AWS IaaS Cluster](create-cluster.md)
-- [Create and Manage AWS EKS Cluster](eks.md)
-- [EKS Hybrid Nodes](./eks-hybrid-nodes/eks-hybrid-nodes.md)
+- [Create and Manage AWS IaaS Cluster](../create-cluster.md)
+- [Create and Manage AWS EKS Cluster](../eks.md)
+- [EKS Hybrid Nodes](../eks-hybrid-nodes/eks-hybrid-nodes.md)
diff --git a/docs/docs-content/enterprise-version/system-management/configure-aws-sts-account.md b/docs/docs-content/clusters/public-cloud/aws/add-aws-accounts/configure-aws-sts-account.md
similarity index 72%
rename from docs/docs-content/enterprise-version/system-management/configure-aws-sts-account.md
rename to docs/docs-content/clusters/public-cloud/aws/add-aws-accounts/configure-aws-sts-account.md
index 288ecd76b56..da5d8f0cbfe 100644
--- a/docs/docs-content/enterprise-version/system-management/configure-aws-sts-account.md
+++ b/docs/docs-content/clusters/public-cloud/aws/add-aws-accounts/configure-aws-sts-account.md
@@ -1,6 +1,6 @@
 ---
-sidebar_label: "Enable Adding AWS Accounts Using STS "
-title: "Enable Adding AWS Accounts Using STS "
+sidebar_label: "Add AWS Accounts Using STS"
+title: "Add AWS Accounts Using STS"
 description: "This page teaches you how to allow tenants to add AWS accounts using STS."
 icon: ""
 hide_table_of_contents: false
@@ -9,4 +9,4 @@ tags: ["palette", "management", "account", "credentials"]
 keywords: ["self-hosted", "palette"]
 ---
 
-
+
diff --git a/docs/docs-content/clusters/public-cloud/aws/aws.md b/docs/docs-content/clusters/public-cloud/aws/aws.md
index 1acfdf102eb..e117ef5ca17 100644
--- a/docs/docs-content/clusters/public-cloud/aws/aws.md
+++ b/docs/docs-content/clusters/public-cloud/aws/aws.md
@@ -8,7 +8,7 @@
 
 Palette supports integration with [Amazon Web Services](https://aws.amazon.com). You can deploy and manage
 [Host Clusters](../../../glossary-all.md#host-cluster) in AWS. To get started check out the
-[Register and Manage AWS Accounts](add-aws-accounts.md).
+[Register and Manage AWS Accounts](./add-aws-accounts/add-aws-accounts.md).
 
 ## Get Started
 
@@ -19,7 +19,7 @@
 
 To learn more about Palette and AWS clusters, check out the following resources:
 
-- [Register and Manage AWS Accounts](add-aws-accounts.md)
+- [Register and Manage AWS Accounts](./add-aws-accounts/add-aws-accounts.md)
 
 - [Create and Manage AWS IaaS Cluster](create-cluster.md)
diff --git a/docs/docs-content/clusters/public-cloud/aws/create-cluster.md b/docs/docs-content/clusters/public-cloud/aws/create-cluster.md
index 8a66fc9036e..08f73ac9615 100644
--- a/docs/docs-content/clusters/public-cloud/aws/create-cluster.md
+++ b/docs/docs-content/clusters/public-cloud/aws/create-cluster.md
@@ -26,7 +26,8 @@ The following prerequisites must be met before deploying a cluster to AWS:
   the [AWS reference](https://docs.aws.amazon.com/cli/latest/reference/ec2/get-instance-metadata-defaults.html) guide
   for further information.
 
-- You have added an AWS account in Palette. Review [Add AWS Account](add-aws-accounts.md) for guidance.
+- You have added an AWS account in Palette. Review [Add AWS Account](./add-aws-accounts/add-aws-accounts.md) for
+  guidance.
 
 - An infrastructure cluster profile.
   Review
   [Create an Infrastructure Profile](../../../profiles/cluster-profiles/create-cluster-profiles/create-infrastructure-profile.md)
@@ -96,7 +97,8 @@ Use the following steps to provision a new AWS cluster:
 
    | **Tags** | Assign any desired cluster tags. Tags on a cluster are propagated to the Virtual Machines (VMs) deployed to the target environments. Example: `region:us-east-1a` or `zone:vpc-private-us-east-1a`. |
    | **Cloud Account** | If you already added your AWS account in Palette, select it from the **drop-down Menu**. Otherwise, click **Add New Account** and add your AWS account information. |
 
-   To learn how to add an AWS account, review the [Add an AWS Account to Palette](add-aws-accounts.md) guide.
+   To learn how to add an AWS account, review the
+   [Add an AWS Account to Palette](./add-aws-accounts/add-aws-accounts.md) guide.
 
 7.
diff --git a/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/import-eks-cluster-enable-hybrid-mode.md b/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/import-eks-cluster-enable-hybrid-mode.md
index 5cabf857260..175fb89b8c7 100644
--- a/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/import-eks-cluster-enable-hybrid-mode.md
+++ b/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/import-eks-cluster-enable-hybrid-mode.md
@@ -29,7 +29,8 @@ Import your Amazon EKS cluster and enable hybrid mode to be able to create edge
 
 - Access to an AWS cloud account.
 
-- Palette integration with AWS account. Review [Add an AWS Account to Palette](../add-aws-accounts.md) for guidance.
+- Palette integration with AWS account. Review [Add an AWS Account to Palette](../add-aws-accounts/add-aws-accounts.md)
+  for guidance.
 
 - Your Palette account role must have the `clusterProfile.create` permission to import a cluster profile.
   Refer to the
   [Cluster Profile](../../../../user-management/palette-rbac/project-scope-roles-permissions.md#cluster-profile)
@@ -511,7 +512,7 @@ Learn how to create a hybrid node pool on your cluster and add your edge hosts t
 
 ## Resources
 
-- [Add AWS Account](../add-aws-accounts.md)
+- [Add AWS Account](../add-aws-accounts/add-aws-accounts.md)
 
 - [Prepare Environment](./prepare-environment/prepare-environment.md)
diff --git a/docs/docs-content/clusters/public-cloud/aws/eks.md b/docs/docs-content/clusters/public-cloud/aws/eks.md
index ef850edf9c2..80325414173 100644
--- a/docs/docs-content/clusters/public-cloud/aws/eks.md
+++ b/docs/docs-content/clusters/public-cloud/aws/eks.md
@@ -27,7 +27,7 @@ guide for help with migrating workloads.
 
 - Access to an AWS cloud account.
 
-- Palette integration with AWS account. Review [Add AWS Account](add-aws-accounts.md) for guidance.
+- Palette integration with AWS account. Review [Add AWS Account](./add-aws-accounts/add-aws-accounts.md) for guidance.
 
 - An infrastructure cluster profile for AWS EKS. When you create the profile, ensure you choose **EKS** as the **Managed
@@ -188,8 +188,9 @@
   layer of your cluster profile. Review [Enable Disk Encryption for EKS Cluster](enable-disk-encryption-eks-cluster.md)
   for guidance.
 
-- If you are deploying your cluster in an [Amazon Secret](./add-aws-accounts.md#aws-secret-cloud-account-us) region, you
-  must configure [Image Swap](../../../clusters/cluster-management/image-swap.md) in the Kubernetes layer of your
+- If you are deploying your cluster in an
+  [Amazon Secret](./add-aws-accounts/add-aws-accounts.md#aws-secret-cloud-account-us) region, you must configure
+  [Image Swap](../../../clusters/cluster-management/image-swap.md) in the Kubernetes layer of your
   [cluster profile](../../../profiles/cluster-profiles/cluster-profiles.md) to redirect public image requests to your
   internal or Elastic Container Registry.
@@ -252,7 +253,8 @@
 
    | **Tags** | Assign any desired cluster tags. Tags on a cluster are propagated to the Virtual Machines (VMs) deployed to the target environments. Example: `region:us-east-1a` or `zone:vpc-private-us-east-1a`. |
    | **Cloud Account** | If you already added your AWS account in Palette, select it from the **drop-down Menu**. Otherwise, click **Add New Account** and add your AWS account information. |
 
-   To learn how to add an AWS account, review the [Add an AWS Account to Palette](add-aws-accounts.md) guide.
+   To learn how to add an AWS account, review the
+   [Add an AWS Account to Palette](./add-aws-accounts/add-aws-accounts.md) guide.
 
 7.
 
@@ -265,10 +267,10 @@
 
    | **Parameter** | **Description** |
    | --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-   | **Static Placement** | By default, Palette uses dynamic placement. <br /> This creates a new Virtual Private Cloud (VPC) for the cluster that contains two subnets in different Availability Zones (AZs), which is required for EKS cluster deployment. Palette places resources in these clusters, manages the resources, and deletes them when the corresponding cluster is deleted. <br /> <br /> If you want to place resources into pre-existing VPCs, enable the **Static Placement** option, and provide the VPCID in the **VPCID** field that displays with this option enabled. If you are deploying your cluster in an [AWS Secret](./add-aws-accounts.md#aws-secret-cloud-account-us) region, static placement is required. You will need to specify two subnets in different Availability Zones (AZs). |
+   | **Static Placement** | By default, Palette uses dynamic placement. This creates a new Virtual Private Cloud (VPC) for the cluster that contains two subnets in different Availability Zones (AZs), which is required for EKS cluster deployment. Palette places resources in these clusters, manages the resources, and deletes them when the corresponding cluster is deleted. <br /> <br /> If you want to place resources into pre-existing VPCs, enable the **Static Placement** option, and provide the VPCID in the **VPCID** field that displays with this option enabled. If you are deploying your cluster in an [AWS Secret](./add-aws-accounts/add-aws-accounts.md#aws-secret-cloud-account-us) region, static placement is required. You will need to specify two subnets in different Availability Zones (AZs). |
    | **Region** | Use the **drop-down Menu** to choose the AWS region where you would like to provision the cluster. |
    | **SSH Key Pair Name** | Choose the SSH key pair for the region you selected. This is required for dynamic placement and optional for static placement. SSH key pairs must be pre-configured in your AWS environment. This is called an EC2 Key Pair in AWS. The key you select is inserted into the provisioned VMs. |
-   | **Cluster Endpoint Access** | This setting provides access to the Kubernetes API endpoint. Select **Private**, **Public** or **Private & Public**. If you are deploying your cluster in an [AWS Secret](./add-aws-accounts.md#aws-secret-cloud-account-us) region, use **Private & Public**. For more information, refer to the [Amazon EKS cluster endpoint access control](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) reference guide. |
+   | **Cluster Endpoint Access** | This setting provides access to the Kubernetes API endpoint. Select **Private**, **Public** or **Private & Public**. If you are deploying your cluster in an [AWS Secret](./add-aws-accounts/add-aws-accounts.md#aws-secret-cloud-account-us) region, use **Private & Public**. For more information, refer to the [Amazon EKS cluster endpoint access control](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) reference guide. |
    | **Public Access CIDRs** | This setting controls which IP address CIDR ranges can access the cluster. To fully allow unrestricted network access, enter `0.0.0.0/0` in the field. <br /> For more information, refer to the [Amazon EKS cluster endpoint access control](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) reference guide. |
    | **Private Access CIDRs** | This setting controls which private IP address CIDR ranges can access the cluster. Private CIDRs provide a way to specify private, self-hosted, and air-gapped networks or Private Cloud Gateway (PCG) that may be located in other VPCs connected to the VPC hosting the cluster endpoint. <br /> <br /> To restrict network access, replace the pre-populated 0.0.0.0/0 with the IP address CIDR range that should be allowed access to the cluster endpoint. Only the IP addresses that are within the specified VPC CIDR range - and any other connected VPCs - will be able to reach the private endpoint. For example, while using `0.0.0.0/0` would allow traffic throughout the VPC and all peered VPCs, specifying the VPC CIDR `10.0.0.0/16` would limit traffic to an individual VPC. For more information, refer to the [Amazon EKS cluster endpoint access control](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) reference guide. |
    | **Enable Encryption** | Use this option for secrets encryption. You must have an existing AWS Key Management Service (KMS) key you can use. Toggle the **Enable encryption** option and use the **drop-down Menu** in the **ARN** field to select the KMS key ARN. <br /> <br /> If you do not have a KMS key and want to create one to use this option, review [Enable Secrets Encryption for EKS Cluster](enable-secrets-encryption-kms-key.md). Once your KMS key is created, return to this Cluster Config step to enable secrets encryption and specify the KMS key ARN. |
@@ -448,7 +450,7 @@ For guidance in setting up kubectl, review the [Kubectl](../../cluster-managemen
 
 ## Resources
 
-- [Add AWS Account](add-aws-accounts.md)
+- [Add AWS Account](./add-aws-accounts/add-aws-accounts.md)
 
 - [Create an Infrastructure Profile](../../../profiles/cluster-profiles/create-cluster-profiles/create-infrastructure-profile.md)
diff --git a/docs/docs-content/clusters/public-cloud/aws/enable-disk-encryption-eks-cluster.md b/docs/docs-content/clusters/public-cloud/aws/enable-disk-encryption-eks-cluster.md
index 712516be8ce..2c5c71c3b9b 100644
--- a/docs/docs-content/clusters/public-cloud/aws/enable-disk-encryption-eks-cluster.md
+++ b/docs/docs-content/clusters/public-cloud/aws/enable-disk-encryption-eks-cluster.md
@@ -20,7 +20,7 @@ workloads operating on the cluster, so ensure you have planned for this before p
 
 ## Prerequisites
 
-- An AWS account added to Palette. Review [Add AWS Account](add-aws-accounts.md) for guidance.
+- An AWS account added to Palette. Review [Add AWS Account](./add-aws-accounts/add-aws-accounts.md) for guidance.
 
 - The IAM user or role used by Palette has the required policies attached as listed in
   [Required IAM Policies](required-iam-policies.md), including the
diff --git a/docs/docs-content/clusters/public-cloud/aws/enable-secrets-encryption-kms-key.md b/docs/docs-content/clusters/public-cloud/aws/enable-secrets-encryption-kms-key.md
index c0d5a0e46a9..f5e792c320b 100644
--- a/docs/docs-content/clusters/public-cloud/aws/enable-secrets-encryption-kms-key.md
+++ b/docs/docs-content/clusters/public-cloud/aws/enable-secrets-encryption-kms-key.md
@@ -18,7 +18,7 @@ wizard's **Cluster Config** page for EKS.
 
 ## Prerequisites
 
-- An AWS account added to Palette. Review [Add AWS Account](add-aws-accounts.md) for guidance.
+- An AWS account added to Palette. Review [Add AWS Account](./add-aws-accounts/add-aws-accounts.md) for guidance.
 
 - IAM user or role has attached policies listed in [Required IAM Policies](required-iam-policies.md).
diff --git a/docs/docs-content/downloads/artifact-studio.md b/docs/docs-content/downloads/artifact-studio.md
index 328bdccf402..717dceeda4e 100644
--- a/docs/docs-content/downloads/artifact-studio.md
+++ b/docs/docs-content/downloads/artifact-studio.md
@@ -11,9 +11,9 @@ tags: ["downloads", "artifact-studio"]
 
 The Spectro Cloud [Artifact Studio](https://artifact-studio.spectrocloud.com/) is a unified platform that helps
 airgapped, regulatory-focused, and security-conscious organizations populate their registries with bundles, packs, and
-installers to be used with self-hosted [Palette](../enterprise-version/enterprise-version.md) or
-[Palette VerteX](../vertex/vertex.md). It provides a single location for packs and images, streamlining access and
-management.
+installers to be used with [self-hosted Palette](../self-hosted-setup/palette/palette.md) or
+[Palette VerteX](../self-hosted-setup/vertex/vertex.md). It provides a single location for packs and images,
+streamlining access and management.
 
 ## Use Cases
 
@@ -64,7 +64,7 @@ representative or [open a support ticket](https://support.spectrocloud.com/).
 | **Helm installation** | Used to install with Helm charts. |
 
 Once you have the file, you can deploy Palette as a self-hosted application. For ISO downloads, review the
-[Palette Management Appliance Installation guide](../enterprise-version/install-palette/palette-management-appliance.md)
+[Palette Management Appliance Installation guide](../self-hosted-setup/palette/supported-environments/management-appliance/install.md)
 for more information on deploying Palette locally.
 
 ## Download Palette VerteX
 
@@ -91,8 +91,8 @@ for more information on deploying Palette locally.
 | **Helm installation** | Used to install with Helm charts. |
 
 Once you have the file, you can deploy Palette VerteX as a self-hosted application. For ISO downloads, review the
-[VerteX Management Appliance Installation guide](../vertex/install-palette-vertex/vertex-management-appliance.md) for
-more information on deploying Palette VerteX locally.
+[VerteX Management Appliance Installation guide](../self-hosted-setup/vertex/supported-environments/management-appliance/install.md)
+for more information on deploying Palette VerteX locally.
 
 ## Download a Pack Bundle
 
@@ -280,6 +280,6 @@ To verify the integrity and authenticity of your artifacts, you can do a checksu
 For information on uploading packs to your self-hosted Palette or Palette VerteX instance, refer to the appropriate
 guide:
 
-- [Upload Packs to Palette](../enterprise-version/install-palette/palette-management-appliance.md#upload-packs-to-palette)
+- [Upload Packs to Palette](../self-hosted-setup/palette/supported-environments/management-appliance/upload-packs.md)
 
-- [Upload Packs to Palette VerteX](../vertex/install-palette-vertex/vertex-management-appliance.md#upload-packs-to-palette-vertex)
+- [Upload Packs to Palette VerteX](../self-hosted-setup/vertex/supported-environments/management-appliance/upload-packs.md)
diff --git a/docs/docs-content/downloads/palette-vertex/additional-packs.md b/docs/docs-content/downloads/palette-vertex/additional-packs.md
index 8d371281c7a..4dc3cee2105 100644
--- a/docs/docs-content/downloads/palette-vertex/additional-packs.md
+++ b/docs/docs-content/downloads/palette-vertex/additional-packs.md
@@ -22,7 +22,7 @@ Review the following table to determine which pack binaries you need to download
 
 You must SSH into your Palette VerteX airgap support VM to download and install the binary. You must also provide the
 username and password for the support team's private repository.
 Reach out to our support team to
-[obtain the credentials](../../vertex/vertex.md#access-palette-vertex).
+[obtain the credentials](../../self-hosted-setup/vertex/vertex.md#access-palette-vertex).
 
 The following example shows how to download the `airgap-vertex-pack-cni-calico-3.25.1.bin` binary. Replace `XXXX` with
 your username and `YYYY` with your password.
diff --git a/docs/docs-content/downloads/palette-vertex/kubernetes-requirements.md b/docs/docs-content/downloads/palette-vertex/kubernetes-requirements.md
index deb9205dcb8..9ed2c50b159 100644
--- a/docs/docs-content/downloads/palette-vertex/kubernetes-requirements.md
+++ b/docs/docs-content/downloads/palette-vertex/kubernetes-requirements.md
@@ -12,8 +12,8 @@ keywords: ["enterprise", "vertex"]
 
 The following table presents the Kubernetes version corresponding to each Palette version for
-[VMware](../../vertex/install-palette-vertex/install-on-vmware/install-on-vmware.md) and
-[Kubernetes](../../vertex/install-palette-vertex/install-on-kubernetes/install-on-kubernetes.md) installations.
+[VMware](../../self-hosted-setup/vertex/supported-environments/vmware/install/install.md#kubernetes-requirements) and
+[Kubernetes](../../self-hosted-setup/vertex/supported-environments/kubernetes/install/install.md#kubernetes-requirements) installations.
 Additionally, for VMware installations, it provides the download URLs for the required Operating System and Kubernetes
 distribution OVA.
diff --git a/docs/docs-content/downloads/palette-vertex/palette-vertex.md b/docs/docs-content/downloads/palette-vertex/palette-vertex.md
index 849299ad79b..7796bbeb339 100644
--- a/docs/docs-content/downloads/palette-vertex/palette-vertex.md
+++ b/docs/docs-content/downloads/palette-vertex/palette-vertex.md
@@ -16,8 +16,8 @@ self-hosted platform that you can install in your data centers or public cloud p
 
 Find the additional download links for Palette VerteX in this section.
 
-Refer to the [Palette VerteX documentation](../../vertex/install-palette-vertex/install-palette-vertex.md) for guidance
-on how to deploy Palette VerteX to your environment.
+Refer to the [Palette VerteX documentation](../../self-hosted-setup/vertex/vertex.md) for guidance on how to deploy
+Palette VerteX to your environment.
 
 ## Resources
diff --git a/docs/docs-content/downloads/self-hosted-palette/additional-packs.md b/docs/docs-content/downloads/self-hosted-palette/additional-packs.md
index 107ea47cb81..5a41fd5d648 100644
--- a/docs/docs-content/downloads/self-hosted-palette/additional-packs.md
+++ b/docs/docs-content/downloads/self-hosted-palette/additional-packs.md
@@ -22,7 +22,7 @@ Review the following table to determine which pack binaries you need to download
 
 You must SSH into your Palette airgap support VM to download and install the binary. You must also provide the username
 and password for the support team's private repository. Reach out to our support team to
-[obtain the credentials](../../enterprise-version/enterprise-version.md#access-palette).
+[obtain the credentials](../../self-hosted-setup/palette/palette.md#access-palette).
 
 The following example shows how to download the `airgap-pack-aws-alb-2.5.1.bin` binary. Replace `XXXX` with your
 username and `YYYY` with your password.
diff --git a/docs/docs-content/downloads/self-hosted-palette/kubernetes-requirements.md b/docs/docs-content/downloads/self-hosted-palette/kubernetes-requirements.md
index 9102fe3da18..88071a9fe28 100644
--- a/docs/docs-content/downloads/self-hosted-palette/kubernetes-requirements.md
+++ b/docs/docs-content/downloads/self-hosted-palette/kubernetes-requirements.md
@@ -12,8 +12,8 @@ keywords: ["self-hosted", "enterprise"]
 
 The following table presents the Kubernetes version corresponding to each Palette version for
-[VMware](../../enterprise-version/install-palette/install-on-vmware/install-on-vmware.md) and
-[Kubernetes](../../enterprise-version/install-palette/install-on-kubernetes/install-on-kubernetes.md) installations.
+[VMware](../../self-hosted-setup/palette/supported-environments/vmware/install/install.md#kubernetes-requirements) and
+[Kubernetes](../../self-hosted-setup/palette/supported-environments/kubernetes/install/install.md#kubernetes-requirements) installations.
 Additionally, for VMware installations, it provides the download URLs for the required Operating System and Kubernetes
 distribution OVA.
diff --git a/docs/docs-content/downloads/self-hosted-palette/self-hosted-palette.md b/docs/docs-content/downloads/self-hosted-palette/self-hosted-palette.md
index 81a717f53cd..92d9bd802cf 100644
--- a/docs/docs-content/downloads/self-hosted-palette/self-hosted-palette.md
+++ b/docs/docs-content/downloads/self-hosted-palette/self-hosted-palette.md
@@ -15,8 +15,8 @@ environment, giving you full control over the management plane.
 
 Find the additional download links for self-hosted Palette in this section.
 
-Refer to the [Self-Hosted Palette documentation](../../enterprise-version/install-palette/install-palette.md) for
-guidance on how to deploy self-hosted Palette to your environment.
+Refer to the [Self-Hosted Palette documentation](../../self-hosted-setup/palette/palette.md) for guidance on how to
+deploy self-hosted Palette to your environment.
## Resources diff --git a/docs/docs-content/enterprise-version/enterprise-version.md b/docs/docs-content/enterprise-version/enterprise-version.md deleted file mode 100644 index 37c9c6b0f97..00000000000 --- a/docs/docs-content/enterprise-version/enterprise-version.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -sidebar_label: "Self-Hosted Palette" -title: "Self-Hosted Palette" -description: "Learn how to install and manage a self-hosted Palette environment." -hide_table_of_contents: false -sidebar_custom_props: - icon: "warehouse" -tags: ["self-hosted", "enterprise"] -keywords: ["self-hosted", "enterprise"] ---- - -Palette is available as a self-hosted platform offering. You can install the self-hosted version of Palette in your data -centers or public cloud providers to manage Kubernetes clusters. - -![A diagram of Palette deployment models eager-load](/architecture_architecture-overview-deployment-models-on-prem-focus.webp) - -:::info - -Palette VerteX is a FIPS-compliant version of Palette that is available for regulated industries, such as government and -public sector organizations that handle sensitive and classified information. To learn more about Palette VerteX, check -out the [Palette VerteX](../vertex/vertex.md) section. - -::: - -## Access Palette - -To set up a Palette account, contact our support team by sending an email to support@spectrocloud.com. Include the -following information in your email: - -- Your full name -- Organization name (if applicable) -- Email address -- Phone number (optional) -- Target Platform (VMware or Kubernetes) -- A brief description of your intended use of Palette - -Our dedicated Support team will promptly get in touch with you to provide the necessary credentials and assistance -required to get started with self-hosted Palette. 
-
-## Resources
-
-- [Installation](install-palette/install-palette.md)
-
-- [System Management](system-management/system-management.md)
-
-- [Upgrade Notes](upgrade/upgrade.md)
-
-- [Enterprise Install Troubleshooting](../troubleshooting/enterprise-install.md)
diff --git a/docs/docs-content/enterprise-version/install-palette/airgap.md b/docs/docs-content/enterprise-version/install-palette/airgap.md
deleted file mode 100644
index c544a88d999..00000000000
--- a/docs/docs-content/enterprise-version/install-palette/airgap.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-sidebar_label: "Airgap Resources"
-title: "Airgap Resources"
-description: "Airgap installation resources for Palette."
-icon: ""
-sidebar_position: 10
-hide_table_of_contents: false
-tags: ["palette", "self-hosted", "airgap"]
-keywords: ["self-hosted", "enterprise"]
----
-
-You can install Palette in an airgapped environment. An airgap environment lacks direct access to the internet and is
-intended for environments with strict security requirements.
-
-The installation process for an airgap environment is different due to the lack of internet access. Before the primary
-Palette installation steps, you must download the following artifacts:
-
-- Palette platform manifests and required platform packages.
-
-- Container images for core platform components and third-party dependencies.
-
-- Palette packs.
-
-The other significant change is that Palette's default public OCI registry is not used. Instead, a private OCI registry
-is utilized for storing images and packs.
-
-## Overview
-
-Before you can install Palette in an airgap environment, you must complete all the required pre-install steps. The
-following diagram outlines the major pre-install steps for an airgap installation.
-
-![An architecture diagram outlining the five different install phases](/enterprise-version_air-gap-repo_overview-order-diagram.webp)
-
-1. Download the airgap setup binary from the URL provided by the support team. The airgap setup binary is a
-   self-extracting archive that contains the Palette platform manifests, images, and required packs. It is a
-   one-time-use binary for uploading Palette images and packs to your OCI registry; you will not use it again after the
-   initial installation. This step must be completed in an environment with internet access.
-
-2. Move the airgap setup binary to the airgap environment. The airgap setup binary is used to extract the manifest
-   content and upload the required images and packs to your private OCI registry. Start the airgap setup binary in a
-   Linux Virtual Machine (VM).
-
-3. The airgap script will push the required images, packs, and manifest to the built-in [Harbor](https://goharbor.io/)
-   OCI registry.
-
-4. Install Palette using the Palette CLI or the Kubernetes Helm chart.
-
-5. Configure your Palette environment.
-
-## Get Started
-
-To get started with an airgap Palette installation, check out the respective platform guide.
-
-- [Kubernetes Airgap Instructions](./install-on-kubernetes/airgap-install/kubernetes-airgap-instructions.md)
-
-- [VMware vSphere Airgap Instructions](./install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md)
-
-Each platform guide provides detailed instructions on how to complete the pre-install steps.
-
-## Supported Platforms
-
-The following table outlines the platforms supported for airgap Palette installation and the supported OCI registries.
- -| **Platform** | **OCI Registry** | **Supported** | -| -------------- | ---------------- | ------------- | -| Kubernetes | Harbor | ✅ | -| Kubernetes | AWS ECR | ✅ | -| VMware vSphere | Harbor | ✅ | -| VMware vSphere | AWS ECR | ✅ | - -## Resources - -- [Additional Packs](../../downloads/self-hosted-palette/additional-packs.md) diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/airgap-install.md b/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/airgap-install.md deleted file mode 100644 index 9f036be636d..00000000000 --- a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/airgap-install.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -sidebar_label: "Airgap Installation" -title: "Airgap Installation" -description: "Learn how to deploy self-hosted Palette to a Kubernetes cluster using a Helm Chart." -icon: "" -hide_table_of_contents: false -sidebar_position: 0 -tags: ["self-hosted", "enterprise", "airgap"] -keywords: ["self-hosted", "enterprise"] ---- - -You can install self-hosted Palette in an airgap Kubernetes environment. An airgap environment lacks direct access to -the internet and is intended for environments with strict security requirements. - -The installation process for an airgap environment is different due to the lack of internet access. Before the primary -Palette installation steps, you must download the following artifacts: - -- Palette platform manifests and required platform packages. - -- Container images for core platform components and third-party dependencies. - -- Palette packs. - -The other significant change is that Palette's default public OCI registry is not used. Instead, a private OCI registry -is utilized to store images and packs. - -## Overview - -Before you can install Palette in an airgap environment, you must first set up your environment as outlined in the -following diagram. 
- -![An architecture diagram outlining the five different installation phases](/enterprise-version_air-gap-repo_k8s-points-overview-order-diagram.webp) - -1. In an environment with internet access, download the airgap setup binary from the URL provided by our support team. - The airgap setup binary is a self-extracting archive that contains the Palette platform manifests, images, and - required packs. The airgap setup binary is a single-use binary for uploading Palette images and packs to your OCI - registry. You will not use the airgap setup binary again after the initial installation. - -2. Move the airgap setup binary to the airgap environment. The airgap setup binary is used to extract the manifest - content and upload the required images and packs to your private OCI registry. Start the airgap setup binary in a - Linux Virtual Machine (VM). - -3. The airgap script will push the required images and packs to your private OCI registry. - -4. Install Palette using the Kubernetes Helm chart. - -## Get Started - -To get started with the airgap Palette installation, review the [Environment Setup](./kubernetes-airgap-instructions.md) -page. The environment setup guide provides detailed instructions on how to prepare your airgap environment. After you -have completed the environment setup, you can proceed with the [Install Palette](./install.md) guide. 
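As a quick sanity check after the setup binary finishes, you can confirm registry access with the `oras` CLI named in the checklist. The hostname below is a placeholder for your private OCI registry:

```shell
# Placeholder hostname -- replace with your private OCI registry.
REGISTRY="harbor.internal.example.com"

# Authenticate once; oras caches the credentials for later pushes and pulls.
oras login "${REGISTRY}" --username admin

# List repositories to confirm that the packs and images were uploaded.
oras repo ls "${REGISTRY}"
```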
-
-## Resources
-
-- [Environment Setup](kubernetes-airgap-instructions.md)
-
-- [Install Palette](./install.md)
-
-- [Checklist](checklist.md)
-
-- [Additional Packs](../../../../downloads/self-hosted-palette/additional-packs.md)
diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/checklist.md b/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/checklist.md
deleted file mode 100644
index f4557f3a584..00000000000
--- a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/checklist.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-sidebar_label: "Checklist"
-title: "Airgap Installation Checklist"
-description:
-  "An airgap installation of Palette requires a few steps to be completed before the installation can begin. This
-  checklist will help you prepare for the installation."
-icon: ""
-sidebar_position: 10
-hide_table_of_contents: false
-tags: ["palette", "self-hosted", "airgap"]
-keywords: ["self-hosted", "enterprise"]
----
-
-Use the following checklist to ensure you have completed all the required steps before deploying the airgap Palette
-installation.
-
-- [ ] `oras` CLI v1.0.0 is installed and available.
-
-- [ ] `aws` CLI v2 or greater is installed and available.
-
-- [ ] `zip` is installed and available.
-
-- [ ] Download the airgap setup binary from the support team.
-
-- [ ] Create a private repository named `spectro-packs` in your OCI registry. You can use a different name if you
-      prefer.
-
-- [ ] Create a public repository named `spectro-images` in your OCI registry. You can use a different name if you
-      prefer.
-
-- [ ] Authenticate to both repositories in your OCI registry using the acquired credentials.
-
-- [ ] Download the Certificate Authority (CA) certificate from your OCI registry.
-
-- [ ] Set the required environment variables for the airgap setup binary. The values are different depending on what
-      type of OCI registry you use.
-
-- [ ] Start the airgap setup binary and verify that the setup completed successfully.
-
-- [ ] Review the list of pack binaries to download and upload to your OCI registry.
diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/install-on-kubernetes.md b/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/install-on-kubernetes.md
deleted file mode 100644
index 9bed8b5b291..00000000000
--- a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/install-on-kubernetes.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-sidebar_label: "Kubernetes"
-title: "Kubernetes"
-description: "Learn how to install Palette on Kubernetes."
-icon: ""
-hide_table_of_contents: false
-tags: ["palette", "self-hosted", "kubernetes"]
-keywords: ["self-hosted", "enterprise"]
----
-
-Palette can be installed on Kubernetes with internet connectivity or an airgap environment. When you install Palette, a
-three-node cluster is created. You use a Helm chart that our support team provides to install Palette on Kubernetes.
-Refer to [Access Palette](../../enterprise-version.md#access-palette) for instructions on requesting access to the Helm
-Chart.
-
-To get started with Palette on Kubernetes, refer to the [Install Instructions](install.md) guide.
-
-## Get Started
-
-Select the scenario and the corresponding guide to install Palette on Kubernetes. If you are installing Palette in an
-airgap environment, refer to the environment preparation guide before installing Palette.
- -| Scenario | Environment Preparation Guide | Install Guide | -| -------------------------------------------------------- | ----------------------------------------------------------------------- | ---------------------------------------------------------- | -| Install Palette on Kubernetes with internet connectivity | None | [Install Instructions](install.md) | -| Install Palette on Kubernetes in an airgap environment | [Environment Setup](./airgap-install/kubernetes-airgap-instructions.md) | [Airgap Install Instructions](./airgap-install/install.md) | - -## Resources - -- [Non-Airgap Install Instructions](install.md) - -- [Airgap Install Instructions](./airgap-install/install.md) - -- [Helm Configuration Reference](palette-helm-ref.md) diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/_category_.json b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/_category_.json deleted file mode 100644 index 3fca6fb9f9b..00000000000 --- a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/_category_.json +++ /dev/null @@ -1,3 +0,0 @@ -{ - "position": 0 -} diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/checklist.md b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/checklist.md deleted file mode 100644 index cefa4012c9e..00000000000 --- a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/checklist.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -sidebar_label: "Checklist" -title: "Checklist" -description: - "An airgap installation of Palette requires a few steps to be completed before the installation can begin. This - checklist will help you prepare for the installation." 
-icon: "" -sidebar_position: 10 -hide_table_of_contents: false -tags: ["palette", "self-hosted", "airgap"] -keywords: ["self-hosted", "enterprise"] ---- - -Use the following checklist to ensure you have completed all the required steps before deploying the airgap Palette -installation. Review this checklist with your Palette support team to ensure you have all the required assets. - -- [ ] Create a vSphere VM and Template folder named `spectro-templates`. You may choose a different name for the folder - if you prefer. - -- [ ] Import the Operating System and Kubernetes distribution OVA required for the installation and place the OVA in the - `spectro-templates` folder. - -- [ ] Append the `r_` prefix and remove the `.ova` suffix from the OVA name after the import. - -- [ ] Start the airgap setup binary and verify the setup is completed successfully. - -- [ ] Review the list of [pack binaries](../../../../downloads/self-hosted-palette/additional-packs.md) to download and - upload to your OCI registry. - -- [ ] Download the release binary that contains the core packs and images required for the installation. - -- [ ] If you have custom SSL certificates you want to include, copy the custom SSL certificates, in base64 PEM format, - to the support VM. The custom certificates must be placed in the **/opt/spectro/ssl** folder. 
Include the - following files: - - **server.crt** - - **server.key** diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/environment-setup.md b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/environment-setup.md deleted file mode 100644 index 52cc7bf117f..00000000000 --- a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/environment-setup.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -sidebar_label: "Environment Setup" -title: "Environment Setup" -description: "Learn how to prepare your airgap environment for Palette installation." -icon: "" -hide_table_of_contents: false -sidebar_position: 20 -tags: ["self-hosted", "enterprise", "airgap", "vmware", "vsphere"] -keywords: ["self-hosted", "enterprise"] ---- - -This section helps you prepare your VMware vSphere airgap environment for Palette installation. You can choose between -two methods to prepare your environment: - -1. If you have a Red Hat Enterprise Linux (RHEL) VM deployed in your environment, follow the - [Environment Setup with an Existing RHEL VM](./env-setup-vm.md) guide to learn how to prepare this VM for Palette - installation. -2. If you do not have an RHEL VM, follow the [Environment Setup with OVA](./vmware-vsphere-airgap-instructions.md) - guide. This guide will show you how to use an OVA to deploy an airgap support VM in your VMware vSphere environment, - which will then assist with the Palette installation process. 
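If you follow the OVA route, the import itself can be scripted with VMware's `govc` CLI. This is only a sketch under assumed names; the datacenter path, folder, and OVA file name are placeholders, and any vSphere import method works:

```shell
# Assumes govc is configured through GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD.
# "Datacenter" and the OVA file name are placeholders.
govc folder.create /Datacenter/vm/spectro-templates
govc import.ova -folder spectro-templates ./airgap-support-vm.ova
```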
- -## Resources - -- [Environment Setup with an Existing RHEL VM](./env-setup-vm.md) - -- [Environment Setup with OVA](./vmware-vsphere-airgap-instructions.md) diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/install-on-vmware.md b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/install-on-vmware.md deleted file mode 100644 index 7345e0f14b4..00000000000 --- a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/install-on-vmware.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -sidebar_label: "VMware" -title: "Install Palette on VMware" -description: "Learn how to install Palette on VMware." -icon: "" -hide_table_of_contents: false -tags: ["palette", "self-hosted", "vmware"] -keywords: ["self-hosted", "enterprise"] ---- - -Palette can be installed on VMware vSphere with internet connectivity or an airgap environment. When you install -Palette, a three-node cluster is created. You use the interactive Palette CLI to install Palette on VMware vSphere. -Refer to [Access Palette](../../enterprise-version.md#access-palette) for instructions on requesting repository access. 
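As a sketch of what the interactive flow looks like, the Palette CLI drives the installation through its Enterprise Cluster subcommand, assuming the CLI has already been downloaded and placed in your `PATH`:

```shell
# Launch the interactive installer; it prompts for vSphere endpoint details,
# credentials, and node sizing before creating the three-node cluster.
palette ec install
```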
- -## Resources - -- [Non-Airgap Install on VMware](install.md) - -- [Airgap Install](./airgap-install/airgap-install.md) - -- [VMware System Requirements](vmware-system-requirements.md) diff --git a/docs/docs-content/enterprise-version/install-palette/palette-management-appliance.md b/docs/docs-content/enterprise-version/install-palette/palette-management-appliance.md deleted file mode 100644 index 437a611bdb0..00000000000 --- a/docs/docs-content/enterprise-version/install-palette/palette-management-appliance.md +++ /dev/null @@ -1,205 +0,0 @@ ---- -title: "Palette Management Appliance" -sidebar_label: "Palette Management Appliance" -description: "Learn how to deploy self-hosted Palette to your environment using the Palette Management Appliance" -hide_table_of_contents: false -# sidebar_custom_props: -# icon: "chart-diagram" -tags: ["palette management appliance", "self-hosted", "enterprise"] -sidebar_position: 20 ---- - -:::preview - -This is a Tech Preview feature and is subject to change. Upgrades from a Tech Preview deployment may not be available. -Do not use this feature in production workloads. - -::: - -The Palette Management Appliance is downloadable as an ISO file and is a solution for installing self-hosted Palette on -your infrastructure. The ISO file contains all the necessary components needed for Palette to function. The ISO file is -used to boot the nodes, which are then clustered to form a Palette management cluster. - -Once Palette has been installed, you can download pack bundles and upload them to the internal Zot registry or an -external registry. These pack bundles are used to create your cluster profiles. You will then be able to deploy clusters -in your environment. - -## Third Party Packs - -There is an additional option to download and install the Third Party packs that provide complementary functionality to -Palette. 
These packs are not required for Palette to function, but they do provide additional features and capabilities -as described in the following table. - -| **Feature** | **Included with Palette Third Party Pack** | **Included with Palette Third Party Conformance Pack** | -| ------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ | ------------------------------------------------------ | -| [Backup and Restore](../../clusters/cluster-management/backup-restore/backup-restore.md) | :white_check_mark: | :x: | -| [Configuration Security](../../clusters/cluster-management/compliance-scan.md#configuration-security) | :white_check_mark: | :x: | -| [Penetration Testing](../../clusters/cluster-management/compliance-scan.md#penetration-testing) | :white_check_mark: | :x: | -| [Software Bill Of Materials (SBOM) scanning](../../clusters/cluster-management/compliance-scan.md#sbom-dependencies--vulnerabilities) | :white_check_mark: | :x: | -| [Conformance Testing](../../clusters/cluster-management/compliance-scan.md#conformance-testing) | :x: | :white_check_mark: | - -## Architecture - -The ISO file is built with the Operating System (OS), Kubernetes distribution, Container Network Interface (CNI), and -Container Storage Interface (CSI). A [Zot registry](https://zotregistry.dev/) is also included in the Appliance -Framework ISO. Zot is a lightweight, OCI-compliant container image registry that is used to store the Palette packs -needed to create cluster profiles. - -The following table displays the infrastructure profile for the self-hosted Palette appliance. 
- -| **Layer** | **Component** | -| -------------- | --------------------------------------------- | -| **OS** | Ubuntu: Immutable [Kairos](https://kairos.io) | -| **Kubernetes** | Palette eXtended Kubernetes Edge (PXK-E) | -| **CNI** | Calico | -| **CSI** | Piraeus | -| **Registry** | Zot | - -Check the **Component Updates** in the [Release Notes](../../release-notes/release-notes.md) for the specific versions -of each component as they may be updated between releases. - -## Supported Platforms - -The Palette Management Appliance can be used on the following infrastructure platforms: - -- VMware vSphere -- Bare Metal -- Machine as a Service (MAAS) - -## Limitations - -- Only public image registries are supported if you are choosing to use an external registry for your pack bundles. - -## Installation Steps - -Follow the instructions to install Palette using the Palette Management Appliance on your infrastructure platform. - -### Prerequisites - - - -### Install Palette - - - -:::warning - -If your installation is not successful, verify that the `piraeus-operator` pack was correctly installed. For more -information, refer to the -[Self-Hosted Installation - Troubleshooting](../../troubleshooting/enterprise-install.md#scenario---palettevertex-management-appliance-installation-stalled-due-to-piraeus-operator-pack-in-error-state) -guide. - -::: - -### Validate - - - -## Upload Packs to Palette - -Follow the instructions to upload packs to your Palette instance. Packs are used to create -[cluster profiles](../../profiles/cluster-profiles/cluster-profiles.md) and deploy workload clusters in your -environment. - -### Prerequisites - - - -### Upload Packs - - - -### Validate - - - -## (Optional) Upload Third Party Packs - -Follow the instructions to upload the Third Party packs to your Palette instance. The Third Party packs contain -additional functionality and capabilities that enhance the Palette experience. 
- -### Prerequisites - - - -### Upload Packs - - - -### Validate - - - -## Next Steps - - diff --git a/docs/docs-content/enterprise-version/system-management/account-management/email.md b/docs/docs-content/enterprise-version/system-management/account-management/email.md deleted file mode 100644 index 0bf1ed148a8..00000000000 --- a/docs/docs-content/enterprise-version/system-management/account-management/email.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -sidebar_label: "Update Email Address" -title: "Update Email Address" -description: "Update and manage the email address of the admin user." -icon: "" -hide_table_of_contents: false -sidebar_position: 30 -tags: ["vertex", "management", "account", "credentials"] -keywords: ["self-hosted", "palette"] ---- - -You can manage the credentials of the admin user by logging in to the system console. Updating or changing the email -address of the admin user requires the current password. - -Use the following steps to change the email address of the admin user. - -## Prerequisites - -- Access to the Palette system console. - -- Current password of the admin user. - -- A Simple Mail Transfer Protocol (SMTP) server must be configured in the system console. Refer to - [Configure SMTP](../smtp.md) page for guidance on how to configure an SMTP server. - -## Change Email Address - -1. Log in to the Palette system console. Refer to - [Access the System Console](../system-management.md#access-the-system-console) guide. - -2. From the **left Main Menu** select **My Account**. - -3. Type the new email address in the **Email** field. - -4. Provide the current password in the **Current Password** field. - -5. Click **Apply** to save the changes. - -## Validate - -1. Log out of the system console. You can log out by clicking the **Logout** button in the bottom right corner of the - **left Main Menu**. - -2. Log in to the system console. Refer to [Access the System Console](../system-management.md#access-the-system-console) - guide. - -3. 
Use the new email address and your current password to log in to the system console. - -A successful login indicates that the email address has been changed successfully. diff --git a/docs/docs-content/enterprise-version/upgrade/palette-management-appliance.md b/docs/docs-content/enterprise-version/upgrade/palette-management-appliance.md deleted file mode 100644 index f0b1e36e7d0..00000000000 --- a/docs/docs-content/enterprise-version/upgrade/palette-management-appliance.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -title: "Upgrade Palette Management Appliance" -sidebar_label: "Palette Management Appliance" -description: "Learn how to upgrade the Palette Management Appliance" -hide_table_of_contents: false -# sidebar_custom_props: -# icon: "chart-diagram" -tags: ["palette management appliance", "self-hosted", "enterprise"] -sidebar_position: 20 ---- - -:::preview - -This is a Tech Preview feature and is subject to change. Upgrades from a Tech Preview deployment may not be available. -Do not use this feature in production workloads. - -::: - -Follow the instructions to upgrade the -[Palette Management Appliance](../install-palette/palette-management-appliance.md) using a content bundle. The content -bundle is used to upgrade the Palette instance to a chosen target version. - -:::info - -The upgrade process will incur downtime for the Palette management cluster, but your workload clusters will remain -operational. 
- -::: - -## Prerequisites - - - -## Upgrade Palette - - - -## Validate - - diff --git a/docs/docs-content/enterprise-version/upgrade/upgrade-k8s/_category_.json b/docs/docs-content/enterprise-version/upgrade/upgrade-k8s/_category_.json deleted file mode 100644 index 8c155c56d8b..00000000000 --- a/docs/docs-content/enterprise-version/upgrade/upgrade-k8s/_category_.json +++ /dev/null @@ -1,4 +0,0 @@ -{ - "label": "Kubernetes", - "position": 10 -} diff --git a/docs/docs-content/enterprise-version/upgrade/upgrade-notes.md b/docs/docs-content/enterprise-version/upgrade/upgrade-notes.md deleted file mode 100644 index c7f5c31d616..00000000000 --- a/docs/docs-content/enterprise-version/upgrade/upgrade-notes.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -sidebar_label: "Upgrade Notes" -title: "Upgrade Notes" -description: "Learn how to upgrade self-hosted Palette instances." -icon: "" -sidebar_position: 0 -tags: ["palette", "self-hosted", "airgap", "kubernetes", "upgrade"] -keywords: ["self-hosted", "enterprise", "airgap", "kubernetes"] ---- - -This page offers version-specific reference to help you prepare for upgrading self-hosted Palette instances. - -## Palette 3.4 - -Prior versions of Palette installed internal Palette component ingress resources in the default namespace. The new -version of the Helm Chart ensures all Palette required ingress resources are installed in the correct namespace. -Self-hosted Palette instances deployed to Kubernetes and upgrading from Palette versions 3.3.X or older must complete -the following action. - -1. Connect to the cluster using the cluster's kubeconfig file. - -2. Identify all Ingress resources that belong to _Hubble_ - an internal Palette component. - - ```shell - kubectl get ingress --namespace default - ``` - -3. Remove each Ingress resource listed in the output that starts with the name Hubble. Use the following command to - delete an Ingress resource. Replace `REPLACE_ME` with the name of the Ingress resource you are removing. 
-
-   ```shell
-   kubectl delete ingress REPLACE_ME --namespace default
-   ```
-
-## Upgrade Palette 3.x to 4.0
-
-Palette 4.0 includes the following major enhancements that require user intervention to facilitate the upgrade process.
-
-- **Enhanced security for Palette microservices** - To enhance security, all microservices within Palette now use
-  `insecure-skip-tls-verify` set to `false`. When upgrading to Palette 4.0, you must provide a valid SSL certificate in
-  the system console.
-
-  If you already have an SSL certificate, key, and Certificate Authority (CA) certificate, you can use them when
-  upgrading to Palette 4.0.0. To learn how to upload SSL certificates to Palette, refer to
-  [SSL Certificate Management](../system-management/ssl-certificate-management.md).
-
-- **Self-hosted Palette Kubernetes Upgrade** - If you installed Palette using the Helm Chart method, the Kubernetes
-  version used for Palette is upgraded from version 1.24 to 1.25. You will need to copy the new Kubernetes YAML to the
-  Kubernetes layer in the Enterprise cluster profile. If you have customized your Kubernetes configuration, you will
-  need to manually adjust custom values and include any additional configuration in the upgraded YAML that we provide.
-  Refer to [Upgrade Enterprise Cluster Profile](#upgrade-enterprise-cluster-profile).
-
-### Upgrade with VMware
-
-:::warning
-
-A known issue impacts all self-hosted Palette instances older than 4.4.14. Before upgrading a Palette instance with
-version older than 4.4.14, ensure that you execute a utility script to make all your cluster IDs unique in your
-Persistent Volume Claim (PVC) metadata. For more information, refer to the
-[Troubleshooting Guide](../../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping).
-
-:::
-
-From the Palette system console, click the **Update version** button. Palette will be temporarily unavailable while
-system services update.
- -![Screenshot of the "Update version" button in the system consoles.](/enterprise-version_sys-console-update-palette-version.webp) - -### Upgrade Enterprise Cluster Profile - -Follow the steps below to upgrade Kubernetes. - -1. Log in to the Palette system console. - -2. From the left **Main Menu**, click **Enterprise Cluster Migration**. - -3. Click on the **Profiles** tab, and select the Kubernetes layer. The Kubernetes YAML is displayed in the editor at - right. - -4. If the existing Kubernetes YAML has been customized or includes additional configuration, we suggest you create a - backup of it by copying it to another location. - -5. Copy the Kubernetes YAML you received from our support team and paste it into the editor. - - ![Screenshot of the Kubernetes YAML editor.](/enterprise-version_upgrade_ec-cluster-profile.webp) - -6. If you have made any additional configuration changes or additions, add your customizations to the new YAML. - -7. Save your changes. - -The Enterprise cluster initiates the Kubernetes upgrade process and leads to the reconciliation of all three nodes. diff --git a/docs/docs-content/enterprise-version/upgrade/upgrade-vmware/_category_.json b/docs/docs-content/enterprise-version/upgrade/upgrade-vmware/_category_.json deleted file mode 100644 index 11b11b09b25..00000000000 --- a/docs/docs-content/enterprise-version/upgrade/upgrade-vmware/_category_.json +++ /dev/null @@ -1,4 +0,0 @@ -{ - "label": "VMware", - "position": 0 -} diff --git a/docs/docs-content/integrations/edge-k8s.mdx b/docs/docs-content/integrations/edge-k8s.mdx index a1acfc2c8ae..0e02bca36ea 100644 --- a/docs/docs-content/integrations/edge-k8s.mdx +++ b/docs/docs-content/integrations/edge-k8s.mdx @@ -20,7 +20,7 @@ We offer PXK-E as a core pack in Palette. 
### PXK and Palette VerteX -The PXK-E used in [Palette VerteX](../vertex/vertex.md) is compiled and linked with our +The PXK-E used in [Palette VerteX](../self-hosted-setup/vertex/vertex.md) is compiled and linked with our [NIST-certified FIPS crypto module](../legal-licenses/compliance.md#fips-140-3). PXK-E is by default enabled with [Ubuntu Pro](https://ubuntu.com/pro) with FIPS mode enabled. Additionally, the Operating System (OS) is hardened based on the NIST-800 standard. Refer to the diff --git a/docs/docs-content/integrations/kubernetes.mdx b/docs/docs-content/integrations/kubernetes.mdx index 56348ef3f82..f7490560ce0 100644 --- a/docs/docs-content/integrations/kubernetes.mdx +++ b/docs/docs-content/integrations/kubernetes.mdx @@ -72,7 +72,7 @@ spreadsheet maintained by the CNCF. ### PXK and Palette VerteX -The PXK used in [Palette VerteX](../vertex/vertex.md) is compiled and linked with our +The PXK used in [Palette VerteX](../self-hosted-setup/vertex/vertex.md) is compiled and linked with our [NIST-certified FIPS crypto module](../legal-licenses/compliance.md#fips-140-3). PXK is by default enabled with [Ubuntu Pro](https://ubuntu.com/pro) with FIPS mode enabled. Additionally, the Operating System (OS) is hardened based on the NIST-800 standard. However, if you use a different OS through the pack, then you are responsible for ensuring FIPS compliance and hardening of the OS. diff --git a/docs/docs-content/introduction/introduction.md b/docs/docs-content/introduction/introduction.md index d3a255d9d1f..c90b80fcf05 100644 --- a/docs/docs-content/introduction/introduction.md +++ b/docs/docs-content/introduction/introduction.md @@ -30,7 +30,7 @@ Palette to deploy and update your Kubernetes clusters. This section contains han Palette VerteX edition is also available to meet the stringent requirements of regulated industries such as government and public sector organizations.
Palette VerteX integrates Spectro Cloud’s Federal Information Processing Standards (FIPS) 140-3 cryptographic modules. To learn more about FIPS-enabled Palette, check out -[Palette VerteX](../vertex/vertex.md). +[Palette VerteX](../self-hosted-setup/vertex/vertex.md). ![Palette product high level overview eager-load](/docs_introduction_product-overview.webp) diff --git a/docs/docs-content/legal-licenses/compliance.md b/docs/docs-content/legal-licenses/compliance.md index e17f56658b2..39a27765358 100644 --- a/docs/docs-content/legal-licenses/compliance.md +++ b/docs/docs-content/legal-licenses/compliance.md @@ -60,7 +60,8 @@ compliance with the Cryptographic Module Validation Program (CMVP). Our Spectro Cloud Cryptographic Module is a general-purpose cryptographic library. The FIPS-enforced Palette VerteX edition incorporates the module in the Kubernetes Management Platform and the infrastructure components of target clusters to protect the sensitive information of regulated industries. Palette VerteX supports FIPS at the tenant level. -For more information about the FIPS-enforced Palette edition, check out [Palette VerteX](vertex/vertex.md). +For more information about the FIPS-enforced Palette edition, check out +[Palette VerteX](../self-hosted-setup/vertex/vertex.md). 
## Joint Certification Program diff --git a/docs/docs-content/registries-and-packs/registries/oci-registry/add-oci-helm.md b/docs/docs-content/registries-and-packs/registries/oci-registry/add-oci-helm.md index 14a5080d4b2..b5a19f7bba2 100644 --- a/docs/docs-content/registries-and-packs/registries/oci-registry/add-oci-helm.md +++ b/docs/docs-content/registries-and-packs/registries/oci-registry/add-oci-helm.md @@ -19,11 +19,11 @@ You can add an OCI type Helm registry to Palette and use the Helm Charts in your - If the OCI registry is using a self-signed certificate, or a certificate that is not signed by a trusted certificate authority (CA), you will need the certificate to add the registry to Palette. -- If you are using an Amazon ECR and your [Palette](../../../enterprise-version/enterprise-version.md) or - [Palette VerteX](../../../vertex/vertex.md) instance is installed in an airgapped environment or an environment with - limited internet access, you must whitelist the S3 endpoint that corresponds to the region of your Amazon ECR. This is - because image layers are stored in S3, not the registry. The S3 endpoint uses the following format. Replace `` - with the region your ECR is hosted in. +- If you are using an Amazon ECR and your [self-hosted Palette](../../../self-hosted-setup/palette/palette.md) or + [Palette VerteX](../../../self-hosted-setup/vertex/vertex.md) instance is installed in an airgapped environment or an + environment with limited internet access, you must whitelist the S3 endpoint that corresponds to the region of your + Amazon ECR. This is because image layers are stored in S3, not the registry. The S3 endpoint uses the following + format. Replace `` with the region your ECR is hosted in. 
```shell prod--starport-layer-bucket.s3..amazonaws.com diff --git a/docs/docs-content/registries-and-packs/registries/oci-registry/add-oci-packs.md b/docs/docs-content/registries-and-packs/registries/oci-registry/add-oci-packs.md index 67f9c05e8c9..55a5f5dc7a8 100644 --- a/docs/docs-content/registries-and-packs/registries/oci-registry/add-oci-packs.md +++ b/docs/docs-content/registries-and-packs/registries/oci-registry/add-oci-packs.md @@ -28,11 +28,11 @@ For guidance on how to add a custom pack to an OCI pack registry, check out the - If the OCI registry is using a self-signed certificate, or a certificate that is not signed by a trusted certificate authority (CA), you will need the certificate to add the registry to Palette. -- If you are using an Amazon ECR and your [Palette](../../../enterprise-version/enterprise-version.md) or - [Palette VerteX](../../../vertex/vertex.md) instance is installed in an airgapped environment or an environment with - limited internet access, you must whitelist the S3 endpoint that corresponds to the region of your Amazon ECR. This is - because image layers are stored in S3, not the registry. The S3 endpoint uses the following format. Replace `` - with the region your ECR is hosted in. +- If you are using an Amazon ECR and your [self-hosted Palette](../../../self-hosted-setup/palette/palette.md) or + [Palette VerteX](../../../self-hosted-setup/vertex/vertex.md) instance is installed in an airgapped environment or an + environment with limited internet access, you must whitelist the S3 endpoint that corresponds to the region of your + Amazon ECR. This is because image layers are stored in S3, not the registry. The S3 endpoint uses the following + format. Replace `` with the region your ECR is hosted in. 
```shell prod--starport-layer-bucket.s3..amazonaws.com diff --git a/docs/docs-content/registries-and-packs/registries/oci-registry/oci-registry.md b/docs/docs-content/registries-and-packs/registries/oci-registry/oci-registry.md index 175b808cbd7..5f9df73177f 100644 --- a/docs/docs-content/registries-and-packs/registries/oci-registry/oci-registry.md +++ b/docs/docs-content/registries-and-packs/registries/oci-registry/oci-registry.md @@ -42,8 +42,8 @@ To add an OCI registry to Palette, refer to the respective guide for the OCI-typ If you are using self-hosted Palette or Palette VerteX, you can add an OCI registry at the system level scope. All tenants can use the OCI registry once it is added to the system-level scope. To learn how to add an OCI registry at the system level scope, refer to the -[Self-Hosted Add Registry](../../../enterprise-version/system-management/add-registry.md) guide or the -[VerteX Add Registry](../../../vertex/system-management/add-registry.md) guide. +[Self-Hosted Add Registry](../../../self-hosted-setup/palette/system-management/add-registry.md) guide or the +[VerteX Add Registry](../../../self-hosted-setup/vertex/system-management/add-registry.md) guide. ::: diff --git a/docs/docs-content/registries-and-packs/registries/registries.md b/docs/docs-content/registries-and-packs/registries/registries.md index 1624e70ba6f..d94ab2a4a16 100644 --- a/docs/docs-content/registries-and-packs/registries/registries.md +++ b/docs/docs-content/registries-and-packs/registries/registries.md @@ -31,8 +31,8 @@ learn more about OCI registries. Registries are added at the tenant level and are available to all users in the tenant. You can add multiple registries of the same type to Palette. If you are using a self-hosted Palette instance, or Palette VerteX, you can add registries through the system console. Registries added through the system console are available to all tenants in the system. 
-Check out the [Self-Hosted Add Registry](../../enterprise-version/system-management/add-registry.md) guide or the -[VerteX Add Registry](../../vertex/system-management/add-registry.md) guide. +Check out the [Self-Hosted Add Registry](../../self-hosted-setup/palette/system-management/add-registry.md) guide or the +[VerteX Add Registry](../../self-hosted-setup/vertex/system-management/add-registry.md) guide. ## Synchronization @@ -59,8 +59,8 @@ Palette environments. The default registries are listed below: Palette VerteX comes with a default OCI registry that only contains FIPS compliant packs. Non-FIPS compliant packs are not available in Palette VerteX by default and must explicitly be added to Palette VerteX. Refer to the -[Use non-FIPS Packs](../../vertex/system-management/enable-non-fips-settings/enable-non-fips-settings.md) guide to learn -how to add non-FIPS pack registries to Palette VerteX. +[Use non-FIPS Packs](../../self-hosted-setup/vertex/system-management/enable-non-fips-settings/use-non-fips-addon-packs.md) +guide to learn how to add non-FIPS pack registries to Palette VerteX. ::: diff --git a/docs/docs-content/release-notes/known-issues.md b/docs/docs-content/release-notes/known-issues.md index ab75886365b..dbdad42d275 100644 --- a/docs/docs-content/release-notes/known-issues.md +++ b/docs/docs-content/release-notes/known-issues.md @@ -14,118 +14,118 @@ to review and stay informed about the status of known issues in Palette. As issu The following table lists all known issues that are currently active and affecting users.
-| Description | Workaround | Publish Date | Product Component | -| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ | ---------------------------- | -| Users cannot remove proxy values for connected Edge hosts in Local UI due to a validation error. Proxy values can still be added and updated. | No workaround available. | October 19, 2025 | Edge | -| On Edge clusters whose hosts run Ubuntu 24.04 with a Unified Kernel Image (UKI), CoreDNS pods may enter the `CrashLoopBackOff` state with logs showing `[FATAL] plugin/loop: Loop (127.0.0.1: -> :53) detected for zone "."`. This happens because `/etc/resolv.conf` is symlinked to `/run/systemd/resolve/stub-resolv.conf`, which lacks real DNS server entries. 
As a result, CoreDNS forwards DNS queries to itself, creating a recursive loop. | Refer to [Troubleshooting - Edge](../troubleshooting/edge/edge.md#scenario---coredns-pods-stuck-in-crashloopbackoff-due-to-dns-loop) for the workaround. | October 7, 2025 | Edge | -| Due to strict schema adherence enforcement, [Helm charts](../profiles/cluster-profiles/create-cluster-profiles/create-addon-profile/create-helm-addon.md) with parameters that do not exist in the chart schema fail to install on Palette 4.7.15 or later. | Remove parameters that do not exist in the chart schema from the pack values. Alternatively, add the missing parameters to the chart schema or remove the chart schema file entirely. | September 20, 2025 | Packs | -| Edge clusters using the versions 1.32.3 and 1.33.0 may fail to come up because CoreDNS pods do not reach the running state. On existing clusters, CoreDNS pods can fall into a `CrashLoopBackOff` state with the error `exec /bin/pebble: no such file or directory`. This is due to a [Canonical Kubernetes known issue](https://github.com/canonical/k8s-snap/issues/1864). The Palette Optimized Canonical pack references the CoreDNS images `ghcr.io/canonical/coredns:1.11.3-ck0` in version 1.32.3 and `ghcr.io/canonical/coredns:1.11.4-ck1` in version 1.33.0. Both of these images are broken and cause CoreDNS pods to fail. | Use the Palette Optimized Canonical pack versions other than 1.32.3 and 1.33.0 which include the fixed CoreDNS image. | September 20, 2025 | Edge, Packs | -| Agent mode Edge cluster creation may fail with logs showing the error `failed calling webhook "pod-registry.spectrocloud.com": tls: failed to verify certificate: x509: certificate signed by unknown authority ("Spectro Cloud")...`. As a result, core components such as CNI, Harbor, and cluster controllers never start. All pods remain in **Pending** or **Failed** state. In the Local UI, packs display **Invalid date** in the **Started On** and **Completed On** fields. 
| Refer to [Troubleshooting - Edge](../troubleshooting/edge/edge.md#scenario---x509-certificate-signed-by-unknown-authority-errors-during-agent-mode-cluster-creation) for the workaround. | September 1, 2025 | Edge | -| [Virtual Machine Orchestrator (VMO)](../vm-management/vm-management.md) 4.7.1 cannot be uninstalled due to a missing image. | No workaround available. | September 1, 2025 | Virtual Machine Orchestrator | -| After an OS image upgrade in appliance mode, an Edge host may fail to boot into the expected active system image and instead boot into the passive partition as a fallback due to an upgrade failure. When this happens, the Edge host does not automatically rejoin the cluster. The kernel command line (`/proc/cmdline`) includes the `upgrade_failure` flag and confirms the system root is set to `LABEL=COS_PASSIVE`. | Recover the Edge host manually using one of the following methods:
- Reboot the host and select **Palette eXtended Kubernetes – Edge** at the GRand Unified Bootloader (GRUB) menu to boot the active image.
- Establish an SSH connection to the host and run `/usr/bin/grub2-editenv /oem/grubenv set next_entry=cos && reboot`. This command updates GRUB to use the boot entry labeled `cos` (the active image) and reboots the host. | September 1, 2025 | Edge | -| On Azure IaaS clusters created using a Palette version prior to 4.6.32, scaling worker node pools does not attach newly created nodes to an outbound load balancer after upgrading to Palette version 4.6.32 or later and the cluster's Palette Agent version to 4.6.7 or later. This impacts outbound connectivity and may also disassociate existing NAT gateways from the worker node pool subnet, resulting in a loss of egress connectivity. | - **Multi-Tenant SaaS** - No workaround available.
- **Self-Hosted Palette or VerteX** - Before upgrading your [self-hosted Palette](../enterprise-version/enterprise-version.md) or [VerteX](../vertex/vertex.md) instance to Palette version 4.6.32 or later, [pause agent upgrades](../clusters/cluster-management/platform-settings/pause-platform-upgrades.md) on any Azure IaaS clusters where you plan to perform Day-2 scaling or repave operations. | September 1, 2025 | Clusters, Self-Hosted | -| In self-hosted [Palette](../enterprise-version/install-palette/palette-management-appliance.md) and [VerteX Management Appliance](../vertex/install-palette-vertex/vertex-management-appliance.md) environments, uploading the same pack as both a FIPS and non-FIPS version to the same registry overwrites the original pack.

For example, if you have a non-FIPS version of the `byoi-2.1.0` pack in your Zot registry and you upload the FIPS version of `byoi-2.1.0`, the new version will overwrite the existing one. This results in a SHA mismatch between the pack stored in the registry and the pack referenced in the cluster profile, which can lead to cluster creation failures. | Upload either a FIPS or non-FIPS version of a pack to your registry. Do not upload both to the same registry. | September 1, 2025 | Clusters, Self-Hosted | -| Cilium may fail to start on MAAS machines that are configured with a `br0` interface and are part of a cluster, displaying errors like `daemon creation failed: failed to detect devices: unable to determine direct routing device. Use --direct-routing-device to specify it`. This happens because Canonical Kubernetes supports setting various Cilium annotations, but it lacks some fields required for the MAAS `br0` network configuration due to [a limitation in `k8s-snap`](https://github.com/canonical/k8s-snap/issues/1740). | Avoid using MAAS machines with a `br0` interface when provisioning Canonical Kubernetes clusters. Instead, choose machines whose primary interface is directly connected to the MAAS-managed subnet or VLAN. | August 17, 2025 | Clusters, Packs | -| Network overlay cluster nodes may display erroneous `failed to add static FDB entry after cleanup...Stdout already set, output` logs after [upgrading the Palette agent](../clusters/edge/cluster-management/agent-upgrade-airgap.md) to version 4.7.9. Cluster functionality is not affected. | No workaround available. | August 17, 2025 | Edge | -| Container runtime may fail to run with the message `Failed to run CRI service error=failed to recover state: failed to get metadata for stored sandbox` after a node is upgraded to 1.29.14. This is related to an [upstream issue with containerd](https://github.com/containerd/containerd/issues/10848). 
| Remove the container runtime folder with `rm -rf /var/lib/containerd`. Then restart containerd and kubelet using `systemctl restart containerd && systemctl restart kubelet`. | August 17, 2025 | Edge | -| Due to [an upstream issue with a Go library and CLIs for working with container registries](https://github.com/google/go-containerregistry/issues/2124), unintended or non-graceful reboots during content push operations to registries can cause consistency issues. This leads to content sync in locally managed clusters throwing the `content-length: 0` error. | Refer to [Troubleshooting - Edge](../troubleshooting/edge/edge.md#scenario---content-length-0-errors-during-content-synchronization) for the workaround. | August 17, 2025 | Edge | -| Controller mode MAAS deployments using the Canonical Kubernetes pack automatically install the Cilium CNI. This happens because of a known issue with the Canonical Kubernetes Cluster API (CAPI) bootstrap provider and cannot be disabled. However, Palette still requires users to explicitly configure a CNI in the cluster profile. | Select the **Cilium CNI (Canonical Kubernetes)** pack when creating a cluster profile to fulfill the CNI requirement. Palette recognizes this selection and allows cluster creation to proceed, even though Cilium is installed by the bootstrap process. | August 17, 2025 | Clusters, Packs | -| If you configure static IP on a host using the [Terminal User Interface (TUI)](../clusters/edge/site-deployment/site-installation/initial-setup.md), the cluster that is formed by the host cannot [enable network overlay](../clusters/edge/networking/vxlan-overlay.md). | Do not enable network overlay on clusters using static IPs configured via TUI. If you must use both static IP and network overlay, configure the static IP with the [user data network block](../clusters/edge/edge-configuration/installer-reference.md#site-network-parameters).
| July 31, 2025 | Edge | -| When deploying an Edge RKE2 cluster on Rocky Linux, a worker node may fail to join the cluster if TCP port 9345 is not open on the control plane node. This port is required for communication between the RKE2 agent and the control plane. | Verify if the port is open by running `firewall-cmd --list-all` on the control plane node. If 9345/tcp is not listed in the output, open it with `firewall-cmd --zone=public --add-port=9345/tcp --permanent` and apply the change using `firewall-cmd --reload`. | July 21, 2025 | Edge | -| When using the Palette/VerteX Management Appliance, clicking on the Zot service link in Local UI results in a new tab displaying `Client sent an HTTP request to an HTTPS server`. | Change the prefix of the URL in your web browser to `https://` instead of `http://`. | July 21, 2025 | Clusters, Packs | -| When deploying a workload cluster with packs using `namespaceLabels`, the associated Pods get stuck if the cluster is deployed via [self-hosted Palette](../enterprise-version/enterprise-version.md) or [Palette VerteX](../vertex/vertex.md), or if the `palette-agent` ConfigMap specifies `data.feature.workloads: disable`. | Force-apply `privileged` labels to the affected namespace. Refer to the [Packs - Troubleshooting](../troubleshooting/pack-issues.md#scenario---pods-with-namespacelabels-are-stuck-on-deployment) guide for additional information. | July 19, 2025 | Clusters | -| Day-2 [node pool](../clusters/cluster-management/node-pool.md) operations cannot be performed on [AWS EKS clusters](../clusters/public-cloud/aws/eks.md) previously deployed with both **Enable Nodepool Customization** enabled and Amazon Linux 2023 (AL2023) [node labels](../clusters/cluster-management/node-labels.md) after upgrading to version 4.7.3. 
| Create a new node pool with the desired [Amazon Machine Image (AMI) and node pool customizations](../clusters/public-cloud/aws/eks.md#cloud-configuration-settings) and migrate existing workloads to the new node pool. For an example of how to migrate workloads, refer to the [AWS Scale, Upgrade, and Secure Clusters](../tutorials/getting-started/palette/aws/scale-secure-cluster.md#scale-a-cluster) guide. | July 19, 2025 | Clusters | -| [Cloning a virtual machine](../vm-management/create-manage-vm/clone-vm.md) using KubeVirt 1.5 or later may hang if [volume snapshots](../vm-management/create-manage-vm/take-snapshot-of-vm.md) are not configured. | Ensure that you configure a `VolumeSnapshotClass` in the `charts.virtual-machine-orchestrator.snapshot-controller.volumeSnapshotClass` resource in the pack. | July 19, 2025 | Virtual Machine Orchestrator | -| Edge K3s clusters may fail `kube-bench` tests even when they are expected to pass. These failures do not indicate security issues, but rather stem from how the tests are executed. | No workaround available. | July 11, 2025 | Edge | -| PXK-E clusters running Kubernetes v1.32.x or later on RHEL or Rocky Linux 8.x may experience failure during Kubernetes initialization due to an unsupported kernel version. | Use RHEL or Rocky Linux 9.x as the base OS or update the kernel version to 4.19 or later in the 4.x series, or to any 5.x or 6.x version. Refer to [Troubleshooting - Edge](../troubleshooting/edge/edge.md#scenario--pxk-e-clusters-on-rhel-and-rocky-8-fail-kubernetes-initialization) for debug steps. | June 23, 2025 | Edge | -| Calico fails to start when IPv6 is enabled on hosts running specific kernel versions due to missing or incompatible kernel modules required for `ip6tables` `MARK` support. Affected kernel versions include 5.15.0-127 and 5.15.0-128 (generic), 6.8.0-57 and 6.8.0-58 (generic), and 6.8.0-1022 (cloud). | Use a different CNI, disable IPv6, or use an unaffected kernel version.
Refer to the [troubleshooting](../troubleshooting/pack-issues.md#scenario---calico-fails-to-start-when-ipv6-is-enabled) guide for debug steps. | June 23, 2025 | Packs | -| PXK-E control plane nodes in VerteX clusters may experience failure of the `kube-vip` component after reboot. | Refer to [Troubleshooting - Edge](../troubleshooting/edge/edge.md#scenario---pxk-e-clusters-in-vertex-deployments-experience-failure-upon-reboot) for debug steps. | June 23, 2025 | Edge | -| The [Pause Agent Upgrades](../clusters/cluster-management/platform-settings/pause-platform-upgrades.md) configuration is not applied to Edge hosts that are not part of a cluster. Edge hosts that are part of a cluster are not affected. | No workaround available. | June 23, 2025 | Edge | -| Due to CAPZ upgrades in version 4.6.32, [Azure IaaS](../clusters/public-cloud/azure/azure.md) and [AKS](../clusters/public-cloud/azure/aks.md) clusters cannot be deployed on both [Azure Public Cloud](../clusters/public-cloud/azure/azure.md) and [Azure US Government](https://azure.microsoft.com/en-us/explore/global-infrastructure/government). Clusters will get stuck during the provisioning stage. | Users who want to deploy a cluster on both Azure environments must use a [PCG](../clusters/pcg/pcg.md) when adding an [Azure US Government cloud account](../clusters/public-cloud/azure/azure-cloud.md). | June 11, 2025 | Clusters | -| Palette eXtended Kubernetes (PXK) and Palette eXtended Kubernetes - Edge (PXK-E) versions 1.30.10, 1.31.6, and 1.32.2 or older do not support TLS 1.3 or applications that require TLS 1.3 encrypted communications. | Use PXK and PXK-E versions 1.30.11, 1.31.7, and 1.32.3 or later instead. | June 5, 2025 | Edge | -| Clusters with [Pause Agent Upgrades](../clusters/cluster-management/platform-settings/pause-platform-upgrades.md) enabled may be stuck in the **Deleting** state. Cluster resources will not be deleted without manual intervention.
| Disable the **Pause Agent Upgrades** setting and trigger the cluster deletion. | May 31, 2025 | Clusters | -| When upgrading airgapped self-hosted Palette and VerteX clusters to 4.6.32, the IPAM controller may report an `Exhausted IP Pools` error despite having available IP addresses, preventing the cluster from upgrading. This is due to a race condition in CAPV version 1.12.0, which may lead to an orphaned IP claim. | Delete the orphaned IP claim and re-run the upgrade. Refer to the [troubleshooting](../troubleshooting/enterprise-install.md#scenario---ip-pool-exhausted-during-airgapped-upgrade) guide for debug steps. | May 31, 2025 | Clusters | -| Edge clusters using K3s version 1.32.1 or 1.32.2 may fail to provision due to an upstream issue. Refer to the [K3s issue page](https://github.com/k3s-io/k3s/issues/11973) for more information. | No workaround available. | May 31, 2025 | Edge | -| For clusters deployed with PXK-E and [agent mode](../deployment-modes/agent-mode/agent-mode.md) using the FIPS installation package, adding a custom `stylus.path` to the `user-data` file causes cluster creation to fail as it cannot find [kubelet](https://kubernetes.io/docs/concepts/architecture/#kubelet). | No workaround available. | May 31, 2025 | Edge | -| During a Kubernetes upgrade, the Cilium pod may get stuck in the `Init:CrashLoopBackOff` state due to nsenter permission issues. | Refer to [Troubleshooting - Edge](../troubleshooting/edge/edge.md#scenario---cilium-pod-stuck-during-kubernetes-upgrade-due-to-nsenter-permission-issue) for debug steps.
| May 31, 2025 | Edge | -| Pods with [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volumes that are [backed up](../clusters/cluster-management/backup-restore/create-cluster-backup.md) using Velero 1.9, [restored](../clusters/cluster-management/backup-restore/restore-cluster-backup.md) using Velero 1.15, and backed up and restored again with Velero 1.15 are stuck in the `init` state when performing a second restore. This is caused by a known [upstream issue](https://github.com/vmware-tanzu/velero/pull/8880) with Velero. | Delete stuck pods or restart affected deployments. | May 31, 2025 | Clusters | -| [Appliance Studio](../deployment-modes/appliance-mode/appliance-studio.md) does not validate the value of each field in `.arg` or `user-data` files. | No workaround available. | May 31, 2025 | Edge | -| Palette virtual clusters provisioned with EKS clusters as host clusters in the cluster group and using the Calico CNI are stuck in the **Provisioning** state due to Cert Manager not being reachable. This stems from [an upstream limitation](https://cert-manager.io/docs/installation/compatibility/#aws-eks) between Cert Manager on EKS and custom CNIs. | No workaround available. | May 21, 2025 | Edge | -| [Remote shell](../clusters/edge/cluster-management/remote-shell.md) sessions executing in the [Chrome](https://www.google.com/intl/en_uk/chrome/) and [Microsoft Edge](https://www.microsoft.com/en-gb/edge/download?form=MA13FJ) browsers time out after approximately five minutes of inactivity. | Start [remote shell](../clusters/edge/cluster-management/remote-shell.md) sessions in the [Firefox](https://www.mozilla.org/en-GB/firefox/new/) browser instead. Firefox supports a 12 hour inactivity timeout. | May 5, 2025 | Edge | -| When upgrading an airgapped Edge cluster to version 4.6.24, some pods may get stuck in the `ImagePullBackOff` state. | Re-upload the content bundle. 
| May 5, 2025 | Edge | -| When you [enable remote shell](../clusters/edge/cluster-management/remote-shell.md) on an Edge host, the remote shell configuration may become stuck in the **Configuring** state. | Disable remote shell in the UI, and wait for one minute before enabling it again. | April 19, 2025 | Edge | -| Disconnected Edge clusters using PXK-E version 1.29.14 or 1.30.10 will sometimes go into the unknown state after a reboot. | Use the command `kubectl delete pod kube-vip-<node-name> --namespace kube-system` to delete the Kubernetes VIP pod and let it be re-created automatically. Replace `<node-name>` with the name of the host node. | March 15, 2025 | Edge | -| [MAAS](../clusters/data-center/maas/maas.md) and [VMware vSphere](../clusters/data-center/vmware/vmware.md) clusters fail to provision on existing self-hosted Palette and VerteX environments deployed with Palette 4.2.13 or later. These installations have an incorrectly configured default image endpoint, which causes image resolution to fail. New self-hosted installations are not affected. | Refer to [Troubleshooting](../troubleshooting/enterprise-install.md#scenario---maas-and-vmware-vsphere-clusters-fail-image-resolution-in-non-airgap-environments) for a workaround for non-airgap environments. For airgap environments, ensure that the images are downloaded to your environment. Refer to the [Additional OVAs](../downloads/self-hosted-palette/additional-ovas.md) page for further details. | February 16, 2025 | Self-Hosted, Clusters | -| Performing an `InPlaceUpgrade` from version 1.28 to 1.29 on active MAAS and AWS clusters with Cilium prevents new pods from being deployed on control plane nodes due to an [upstream issue](https://github.com/canonical/cluster-api-control-plane-provider-microk8s/issues/74) with Canonical. This issue also occurs when performing a MicroK8s `SmartUpgrade` from version 1.28 to 1.29 on active MAAS and AWS clusters with one control plane node and Cilium.
| Manually restart the Cilium pods on _each_ control plane node using the command `microk8s kubectl rollout restart daemonset cilium --namespace kube-system`. | February 16, 2025 | Clusters, Packs | -| For clusters deployed with [Virtual Machine Orchestrator (VMO)](../vm-management/vm-management.md), namespaces on the **Virtual Machine** tab cannot be viewed by users with any `spectro-vm` cluster role. | Add the `spectro-namespace-list` cluster role to users who need to view virtual machines and virtual machine namespaces. Refer to the [Add Roles and Role Bindings](../vm-management/rbac/add-roles-and-role-bindings.md) guide for instructions on how to add roles for VMO clusters. | February 5, 2025 | Virtual Machine Orchestrator | -| For clusters deployed with PXK-E and [agent mode](../deployment-modes/agent-mode/agent-mode.md), the contents of the `/opt/cni/bin` folder are not set correctly, causing cluster deployment issues because the cluster network cannot come up. | Refer to [Troubleshooting](../troubleshooting/edge/edge.md#scenario---agent-mode-deployments-cni-folder-permission-issues) for a workaround. | January 30, 2025 | Palette agent | -| Palette [workload clusters](../glossary-all.md#workload-cluster) deployed with Calico version 3.28.2, 3.29.0, or 3.29.1 are experiencing memory leaks due to an [upstream issue](https://github.com/projectcalico/calico/pull/9612) with Calico, which is caused by failing to close netlink handles. | [Create a new profile version](../profiles/cluster-profiles/modify-cluster-profiles/version-cluster-profile.md) using Calico version 3.28.0 or 3.28.1 and [update your cluster](../clusters/cluster-management/cluster-updates.md#update-a-cluster).
| January 27, 2025 | Clusters, Packs | -| For clusters deployed with [agent mode](../deployment-modes/agent-mode/agent-mode.md) on Palette agent version 4.5.14, adding a custom `stylus.path` to the **user-data** file causes cluster creation to fail as it cannot find [kubeadm](https://kubernetes.io/docs/reference/setup-tools/kubeadm/). | Review the [Edge Troubleshooting](../troubleshooting/edge/edge.md) section for workarounds. Refer to [Identify the Target Agent Version](../clusters/edge/cluster-management/agent-upgrade-airgap.md#identify-the-target-agent-version) for guidance on retrieving your Palette agent version number. | January 19, 2025 | Edge | -| For clusters deployed with [agent mode](../deployment-modes/agent-mode/agent-mode.md), upgrades to higher Kubernetes versions are not supported with Palette agent version 4.5.12 or earlier. | No workaround available. Upgrades to higher Kubernetes versions are only supported from Palette agent version 4.5.14 and above for clusters deployed with PXK-E and agent mode. Refer to [Identify the Target Agent Version](../clusters/edge/cluster-management/agent-upgrade-airgap.md#identify-the-target-agent-version) for guidance on retrieving your Palette agent version number. | January 19, 2025 | Edge | -| Transferring the management of a local Edge cluster to central management by Palette or VerteX is not supported for multi-node clusters. | No workaround is available. | January 19, 2025 | Edge | -| Edits on the [Hybrid Profile](../clusters/public-cloud/aws/eks-hybrid-nodes/create-hybrid-node-pools.md#create-node-pool) of an [EKS Hybrid node pool](../clusters/public-cloud/aws/eks-hybrid-nodes/eks-hybrid-nodes.md) take effect as soon as you click the **Save** button on the **Configure Profile** tab, not when you click **Confirm** on the **Edit node pool** screen. | No workaround available.
| January 19, 2025 | Clusters | -| [EKS Hybrid node](../clusters/public-cloud/aws/eks-hybrid-nodes/eks-hybrid-nodes.md) statuses are not displayed accurately when an update is in progress. This has no effect on the update operation itself. | No workaround available. | January 19, 2025 | Clusters | -| Deleting an [EKS Hybrid node](../clusters/public-cloud/aws/eks-hybrid-nodes/eks-hybrid-nodes.md) from the Node Details page will result in an error in the Palette UI and the operation will have no effect. Additionally, deletion cannot be performed if the node pool is in the middle of an update operation. | You can remove a node by changing the node pool instead. Refer to the [Change a Node Pool](../clusters/cluster-management/node-pool.md#change-a-node-pool) page. Ensure that the node pool update only includes deletion and that the node to be deleted is in a Running state. | January 19, 2025 | Clusters | -| [Maintenance mode](../clusters/cluster-management/maintenance-mode.md#activate-maintenance-mode) cannot be activated on [EKS Hybrid nodes](../clusters/public-cloud/aws/eks-hybrid-nodes/eks-hybrid-nodes.md). Attempting to activate maintenance mode will result in an error in the Palette UI and the operation will have no effect. | No workaround available. | January 19, 2025 | Clusters | -| When using the [VM Migration Assistant](../vm-management/vm-migration-assistant/vm-migration-assistant.md) to migrate VMs to your VMO cluster, migration plans can enter an **Unknown** state if more VMs are selected for migration than the **Max concurrent virtual machine migrations** setting allows. | Review the [Virtual Machine Orchestrator (VMO) Troubleshooting](../troubleshooting/vmo-issues.md#scenario---virtual-machine-vm-migration-plans-in-unknown-state) section for workarounds. | January 19, 2025 | Virtual Machine Orchestrator | -| Palette upgrades on K3s virtual clusters may be blocked if the cluster does not have enough resources to accommodate additional pods. 
Ensure that your cluster has 1 CPU, 1 GiB of memory, and 1 GiB of storage free before commencing an upgrade. You may increase the virtual cluster's resource quotas or disable them. | Refer to the [Adjust Virtual Clusters Limits](../troubleshooting/palette-dev-engine.md#scenario---adjust-virtual-clusters-limits-before-palette-upgrades) guide for workaround steps. | January 19, 2025 | Virtual Clusters | -| If you have manually configured the metrics server in your Edge airgap cluster using a manifest, upgrading to 4.5.15 may cause an additional metrics server pod to be created in your cluster. | Remove the manifest layer that adds the metrics server from your cluster profile and apply the update on your cluster. | December 15, 2024 | Edge | -| When deploying an Edge cluster using content bundles built from cluster profiles with PXK-E as the Kubernetes layer, some images in the Kubernetes layer fail to load into containerd. This issue occurs due to image signature problems, resulting in deployment failure. | Remove the `packs.content.images` from the Kubernetes layer in the pack configuration before building the content bundle. These components are already included in the provider image and do not need to be included in the content bundle. | December 13, 2024 | Edge | -| Hosts provisioned in [agent mode](../deployment-modes/agent-mode/agent-mode.md) do not display host information in the console after using the Palette Terminal User Interface to complete host setup. | Local UI is still available and will display host information. Refer to [Access Local UI](../clusters/edge/local-ui/host-management/access-console.md) to learn how to access Local UI. | December 12, 2024 | Edge | -| In a multi-node Edge cluster, the reset action on a cluster node does not update the node status on the leader node's linking screen.
| [Scale down](../clusters/edge/local-ui/cluster-management/scale-cluster.md#scale-down-a-cluster) the cluster and free up the follower node before resetting the node. | December 12, 2024 | Edge | -| For Edge airgap clusters, manifests attached to packs are not applied during cluster deployment. | Add the manifest as a layer directly instead of attaching it to a pack. For more information, refer to [Add a Manifest](../profiles/cluster-profiles/create-cluster-profiles/create-addon-profile/create-manifest-addon.md). | November 15, 2024 | Edge | -| In some cases, the differential editor incorrectly reports YAML differences for customizations not created by you. The issue is more common when items in a list or array are removed. Clicking the **Keep** button when a non-user-generated customization is in focus causes the button to become unresponsive after the first use. | Skip differential highlights not created by you. Click the arrow button to skip and proceed. | November 11, 2024 | Cluster Profiles | -| Palette fails to provision virtual clusters on airgapped and proxied Edge cluster groups. This error is caused by Palette incorrectly defaulting to fetch charts from an external repository, which is unreachable from these environments. | No workaround is available. | November 9, 2024 | Virtual Clusters | -| The resource limits on Palette Virtual Clusters are too low and may cause the Palette agent to experience resource exhaustion. As a result, Palette pods required for Palette operations may experience Out-of-Memory (OOM) errors. | Refer to the [Apply Host Cluster Resource Limits to Virtual Cluster](../troubleshooting/palette-dev-engine.md#scenario---apply-host-cluster-resource-limits-to-virtual-cluster) guide for workaround steps. | November 4, 2024 | Virtual Clusters | -| Palette incorrectly modifies the indentation of the pack after it is configured as a cluster profile layer.
The modified indentation does not cause errors, but you may observe changes to the pack **values.yaml**. | No workaround available. | October 30, 2024 | Cluster Profiles, Pack | -| Palette does not correctly configure multiple search domains when provided during the self-hosted installation. The configuration file **resolv.conf** ends up containing incorrect values. | Connect remotely to each node in the Palette self-hosted instance and edit the **resolv.conf** configuration file. | October 17, 2024 | Self-Hosted, PCG | -| Upgrading the RKE2 version from 1.29 to 1.30 fails due to [an upstream issue](https://github.com/rancher/rancher/issues/46726) with RKE2 and Cilium. | Refer to the [Troubleshooting section](../troubleshooting/edge/edge.md#scenario---clusters-with-cilium-and-rke2-experiences-kubernetes-upgrade-failure) for the workaround. | October 12, 2024 | Edge | -| Kubernetes clusters deployed on MAAS with Microk8s are experiencing deployment issues when using the upgrade strategy `RollingUpgrade`. This issue is affecting new cluster deployments and node provisioning. | Use the `InPlaceUpgrade` strategy to upgrade nodes deployed in MAAS. | October 12, 2024 | Clusters, Pack | -| Clusters using MicroK8s and conducting backup and restore operations using Velero with [restic](https://github.com/restic/restic) are encountering restic pods going into the `CrashLoopBackOff` state. This issue stems from an upstream problem in the Velero project. You can learn more about it in the GitHub issue [4035](https://github.com/vmware-tanzu/velero/issues/4035) page. | Refer to the Additional Details section for troubleshooting workaround steps. | October 12, 2024 | Clusters | -| Clusters deployed with Microk8s cannot accept kubectl commands if the pack is added to the cluster's cluster profile. The reason behind this issue is Microk8s' lack of support for `certSANs`. This causes the Kubernetes API server to reject Spectro Proxy certificates.
Check out GitHub issue [114](https://github.com/canonical/cluster-api-bootstrap-provider-microk8s/issues/114) in the MicroK8s cluster-api-bootstrap-provider-microk8s repository to learn more. | Use the [admin kubeconfig file](../clusters/cluster-management/kubeconfig.md#kubeconfig-files) to access the cluster API, as it does not use the Spectro Proxy server. This option may be limited to environments where you can access the cluster directly from a network perspective. | October 1, 2024 | Clusters, Pack | -| Clusters deployed with Microk8s cannot accept kubectl commands if the pack is added to the cluster's cluster profile. The reason behind these issues is Microk8s' lack of support for `certSANs`. This causes the Kubernetes API server to reject Spectro Proxy certificates. | Use the CLI flag [`--insecure-skip-tls-verify`](https://kubernetes.io/docs/reference/kubectl/kubectl/) with kubectl commands or use the [admin kubeconfig file](../clusters/cluster-management/kubeconfig.md#kubeconfig-files) to access the cluster API, as it does not use the Spectro Proxy server. This option may be limited to environments where you can access the cluster directly from a network perspective. | October 1, 2024 | Clusters, Pack | -| Deploying new [Nutanix clusters](../clusters/data-center/nutanix/nutanix.md) fails for self-hosted Palette or VerteX users on version 4.4.18 or newer. | No workaround is available. | September 26, 2024 | Clusters | -| OCI Helm registries added to Palette or VerteX before support for OCI Helm registries hosted in AWS ECR was available in Palette have an invalid API payload that is causing cluster imports to fail if the OCI Helm Registry is referenced in the cluster profile. | Log in to Palette as a tenant administrator and navigate to the left **Main Menu**. Select **Registries** and click on the **OCI Registries** tab. For each OCI registry of the Helm type, click on the **three-dot Menu** at the end of the row. Select **Edit**.
To fix the invalid API payload, click on **Confirm**. Palette will automatically add the correct provider type behind the scenes to address the issue. | September 25, 2024 | Helm Registries | -| Airgap self-hosted Palette or VerteX instances cannot use the Container service in App Profiles. The required dependency, [DevSpace](https://github.com/devspace-sh/devspace), is unavailable from the Palette pack registry and is downloaded from the Internet at runtime. | Use the manifest service in an [App Profile](../profiles/app-profiles/app-profiles.md) to specify a custom container image. | September 25, 2024 | App Mode | -| Using the Flannel Container Network Interface (CNI) pack together with a Red Hat Enterprise Linux (RHEL)-based provider image may lead to a pod becoming stuck during deployment. This is caused by an upstream issue with Flannel that was discovered in a K3s GitHub issue. Refer to [the K3s issue page](https://github.com/k3s-io/k3s/issues/5013) for more information. | No workaround is available. | September 14, 2024 | Edge | -| Palette OVA import operations fail if the VMO cluster is using a StorageClass with the volume binding mode `WaitForFirstConsumer`. | Refer to the [OVA Imports Fail Due To Storage Class Attribute](../troubleshooting/vmo-issues.md#scenario---ova-imports-fail-due-to-storage-class-attribute) troubleshooting guide for workaround steps. | September 13, 2024 | Palette CLI, VMO | -| Persistent Volume Claim (PVC) metadata does not use a unique identifier for self-hosted Palette clusters. This causes incorrect Cloud Native Storage (CNS) mappings in vSphere, potentially leading to issues during node operations and cluster upgrades. | Refer to the [Troubleshooting section](../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping) for guidance. | September 13, 2024 | Self-hosted | -| Third-party binaries downloaded and used by the Palette CLI may become stale and incompatible with the CLI.
| Refer to the [Incompatible Stale Palette CLI Binaries](../troubleshooting/automation.md#scenario---incompatible-stale-palette-cli-binaries) troubleshooting guide for workaround guidance. | September 11, 2024 | CLI | -| An issue with Edge hosts using [Trusted Boot](../clusters/edge/trusted-boot/trusted-boot.md) and encrypted drives occurs when TRIM is not enabled. As a result, Solid-State Drives (SSDs) and Nonvolatile Memory Express (NVMe) drives experience degraded performance and potentially cause cluster failures. This [issue](https://github.com/kairos-io/kairos/issues/2693) stems from [Kairos](https://kairos.io/) not passing through the `--allow-discards` flag to the `systemd-cryptsetup attach` command. | Check out the [Degraded Performance on Disk Drives](../troubleshooting/edge/edge.md#scenario---degraded-performance-on-disk-drives) troubleshooting guide for workaround guidance. | September 4, 2024 | Edge | -| The AWS CSI pack has a [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) (PDB) that allows for a maximum of one unavailable pod. This behavior causes an issue for single-node clusters as well as clusters with a single control plane node and a single worker node where the control plane lacks worker capability. [Operating System (OS) patch](../clusters/cluster-management/os-patching.md) updates may attempt to evict the CSI controller without success, resulting in the node remaining in the unschedulable state. | If OS patching is enabled, allow the control plane nodes to have worker capability. For single-node clusters, turn off the OS patching feature. | September 4, 2024 | Cluster, Packs | -| On AWS IaaS Microk8s clusters, OS patching can get stuck and fail. | Refer to the [Troubleshooting](../troubleshooting/nodes.md#os-patch-fails-on-aws-with-microk8s-127) section for debug steps.
| August 17, 2024 | Palette | -| When upgrading a self-hosted Palette instance from 4.3 to 4.4, the MongoDB pod may be stuck with the following error: `ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.` | Delete the PVC, PV, and pod manually. All resources will be recreated with the correct configuration. | August 17, 2024 | Self-Hosted Palette | -| For existing clusters that have added a new machine and all new clusters, pods may be stuck in the draining process and require manual intervention to drain the pod. | Manually delete the pod if it is stuck in the draining process. | August 17, 2024 | Palette | -| Clusters with the Virtual Machine Orchestrator (VMO) pack may experience VMs getting stuck in a continuous migration loop, as indicated by a `Migrating` or `Migration` VM status. | Review the [Virtual Machine Orchestrator (VMO) Troubleshooting](../troubleshooting/vmo-issues.md) section for workarounds. | August 1, 2024 | Virtual Machine Orchestrator | -| Palette CLI users who authenticated with the `login` command and specified a Palette console endpoint that does not contain the tenant name are encountering issues with expired JWT tokens. | Re-authenticate using your tenant URL, for example, `https://my-org.console.spectrocloud.com`. If the issue persists after re-authenticating, remove the `~/.palette/palette.yaml` file that is auto-generated by the Palette CLI. Re-authenticate with the `login` command if other commands require it. | July 25, 2024 | CLI | -| Adding new cloud providers, such as Nutanix, is currently unavailable. Private Cloud Gateway (PCG) deployments in new Nutanix environments fail to complete the installation. As a result, adding a new Nutanix environment to launch new host clusters is unavailable. This does not impact existing Nutanix deployments with a PCG deployed. | No workarounds are available.
| July 20, 2024 | Clusters, Self-Hosted, PCG | -| Single-node Private Cloud Gateway (PCG) clusters are experiencing an issue upgrading to 4.4.11. The vSphere CSI controller pod fails to start because there are no matching affinity rules. | Check out the [vSphere Controller Pod Fails to Start in Single Node PCG Cluster](../troubleshooting/pcg.md#scenario---vsphere-controller-pod-fails-to-start-in-single-node-pcg-cluster) guide for workaround steps. | July 20, 2024 | PCG | -| When provisioning an Edge cluster, it's possible that some Operating System (OS) user credentials will be lost once the cluster is active. This is because the cloud-init stages from different sources merge during the deployment process, and sometimes, the same stages without distinct names overwrite each other. | Give each of your cloud-init stages in the OS pack and in the Edge installer **user-data** file a unique name. For more information about cloud-init stages and examples of cloud-init stages with names, refer to [Cloud-init Stages](../clusters/edge/edge-configuration/cloud-init.md). | July 17, 2024 | Edge | -| When you use a content bundle to provision a new cluster without using the local Harbor registry, it's possible for the images to be pulled from external networks instead of from the content bundle, consuming network bandwidth. If your Edge host has no connection to external networks or if it cannot locate the image on a remote registry, some pods may enter the `ImagePullBackOff` state at first, but eventually the pods will be created using images from the content bundle. | For connected clusters, you can make sure that the remote images are not reachable by the Edge host, which will stop the Palette agent from downloading the image and consuming bandwidth, and eventually the cluster will be created using images from the content bundle. For airgap clusters, the `ImagePullBackOff` error will eventually resolve on its own and there is no action to take. 
| July 11, 2024 | Edge | -| When you add a new VMware vSphere Edge host to an Edge cluster, the IP address may fail to be assigned to the Edge host after a reboot. | Review the [Edge Troubleshooting](../troubleshooting/edge/edge.md) section for workarounds. | July 9, 2024 | Edge | -| When you install Palette Edge using an Edge Installer ISO with a RHEL 8 operating system on a Virtual Machine (VM) with insufficient video memory, the QR code in the registration screen does not display correctly. | Increase the video memory of your VM to 8 MB or higher. The steps to do this vary depending on the platform you use to deploy your VM. In vSphere, you can right-click on the VM, click **Edit Settings**, and adjust the video card memory in the **Video card** tab. | July 9, 2024 | Edge | -| Custom Certificate Authority (CA) is not supported for accessing AKS clusters. Using a custom CA prevents the `spectro-proxy` pack from working correctly with AKS clusters. | No workaround is available. | July 9, 2024 | Packs, Clusters | -| Manifests attached to an Infrastructure Pack, such as OS, Kubernetes, Network, or Storage, are not applied to the Edge cluster. This issue does not impact the infrastructure pack's YAML definition, which is applied to the cluster. | Specify custom configurations through an add-on pack or a custom manifest pack applied after the infrastructure packs. | July 9, 2024 | Edge, Packs | -| Clusters using Cilium and deployed to VMware environments with the VXLAN tunnel protocol may encounter an I/O timeout error. This issue is caused by the VMXNET3 adapter, which drops network traffic, resulting in VXLAN traffic being dropped. You can learn more about this issue in [Cilium GitHub issue #21801](https://github.com/cilium/cilium/issues/21801). | Review the section for workarounds.
| June 27, 2024 | Packs, Clusters, Edge | -| [Sonobuoy](../clusters/cluster-management/compliance-scan.md#conformance-testing) scans fail to generate reports on airgapped Palette Edge clusters. | No workaround is available. | June 24, 2024 | Edge | -| Clusters configured with OpenID Connect (OIDC) at the Kubernetes layer encounter issues when authenticating with the [non-admin Kubeconfig file](../clusters/cluster-management/kubeconfig.md#cluster-admin). Kubeconfig files using OIDC to authenticate will not work if the SSL certificate is set at the OIDC provider level. | Use the admin Kubeconfig file to authenticate with the cluster, as it does not use OIDC to authenticate. | June 21, 2024 | Clusters | -| During the platform upgrade from Palette 4.3 to 4.4, Virtual Clusters may encounter a scenario where the pod `palette-controller-manager` is not upgraded to the newer version of Palette. The virtual cluster will continue to be operational, and this does not impact its functionality. | Refer to the [Controller Manager Pod Not Upgraded](../troubleshooting/palette-dev-engine.md#scenario---controller-manager-pod-not-upgraded) troubleshooting guide. | June 15, 2024 | Virtual Clusters | -| Edge hosts with FIPS-compliant Red Hat Enterprise Linux (RHEL) and Ubuntu Operating Systems (OS) may encounter the error where the `systemd-resolved.service` service enters the **failed** state. This prevents the nameserver from being configured, which will result in cluster deployment failure. | Refer to [TroubleShooting](../troubleshooting/edge/edge.md#scenario---systemd-resolvedservice-enters-failed-state) for a workaround. | June 15, 2024 | Edge | -| The GKE cluster's Kubernetes pods are failing to start because the Kubernetes patch version is unavailable. This is encountered during pod restarts or node scaling operations. | Deploy a new cluster and use a GKE cluster profile that does not contain a Kubernetes pack layer with a patch version. 
Migrate the workloads from the existing cluster to the new cluster. This is a breaking change introduced in Palette 4.4.0. | June 15, 2024 | Packs, Clusters | -| does not support multi-node control plane clusters. The upgrade strategy, `InPlaceUpgrade`, is the only option available for use. | No workaround is available. | June 15, 2024 | Packs | -| For clusters using as the Kubernetes distribution, the control plane node fails to upgrade when using the `InPlaceUpgrade` strategy for sequential upgrades, such as upgrading from version 1.25.x to version 1.26.x and then to version 1.27.x. | Refer to the [Control Plane Node Fails to Upgrade in Sequential MicroK8s Upgrades](../troubleshooting/pack-issues.md) troubleshooting guide for resolution steps. | June 15, 2024 | Packs | -| Azure IaaS clusters are having issues with deployed load balancers and ingress deployments when using Kubernetes versions 1.29.0 and 1.29.4. Incoming connections time out due to the lack of a network path inside the cluster. AKS clusters are not impacted. | Use a Kubernetes version lower than 1.29.0. | June 12, 2024 | Clusters | -| OIDC integration with Virtual Clusters is not functional. All other operations related to Virtual Clusters are operational. | No workaround is available. | June 11, 2024 | Virtual Clusters | -| Deploying self-hosted Palette or VerteX to a vSphere environment fails if vCenter has standalone hosts directly under a data center. Persistent Volume (PV) provisioning fails due to an upstream issue with the vSphere Container Storage Interface (CSI) for all versions before v3.2.0. Palette and VerteX use the vSphere CSI version 3.1.2 internally. The issue may also occur in workload clusters deployed on vSphere using the same vSphere CSI for storage volume provisioning. | If you encounter the following error message when deploying self-hosted Palette or VerteX: `'ProvisioningFailed failed to provision volume with StorageClass "spectro-storage-class".
Error: failed to fetch hosts from entity ComputeResource:domain-xyz` then use the following workaround. Remove standalone hosts directly under the data center from vCenter and allow the volume provisioning to complete. After the volume is provisioned, you can add the standalone hosts back. You can also use a service account that does not have access to the standalone hosts as the user that deployed Palette. | May 21, 2024 | Self-Hosted | -| Conducting cluster node scaling operations on a cluster undergoing a backup can lead to issues and potential unresponsiveness. | To avoid this, ensure no backup operations are in progress before scaling nodes or performing other cluster operations that change the cluster state. | April 14, 2024 | Clusters | -| Palette automatically creates an AWS security group for worker nodes using the format `-node`. If a security group with the same name already exists in the VPC, the cluster creation process fails. | To avoid this, ensure that no security group with the same name exists in the VPC before creating a cluster. | April 14, 2024 | Clusters | -| K3s version 1.27.7 has been marked as _Deprecated_. This version has a known issue that causes clusters to crash. | Upgrade to a newer version of K3s to avoid the issue, such as versions 1.26.12, 1.27.11, and 1.28.5. You can learn more about the issue in the [K3s GitHub issue](https://github.com/k3s-io/k3s/issues/9047) page. | April 14, 2024 | Packs, Clusters | -| When deploying a multi-node AWS EKS cluster with the Container Network Interface (CNI), the cluster deployment fails. | A workaround is to use the AWS VPC CNI in the interim while the issue is resolved. | April 14, 2024 | Packs, Clusters | -| If a Kubernetes cluster deployed onto VMware is deleted, and later re-created with the same name, the cluster creation process fails. The issue is caused by existing resources remaining inside the PCG, or the System PCG, that are not cleaned up during the cluster deletion process.
| Refer to the [VMware Resources Remain After Cluster Deletion](../troubleshooting/pcg.md#scenario---vmware-resources-remain-after-cluster-deletion) troubleshooting guide for resolution steps. | April 14, 2024 | Clusters | -| Day-2 operations related to infrastructure changes, such as modifying the node size and count, do not take effect when using MicroK8s. | No workaround is available. | April 14, 2024 | Packs, Clusters | -| If a cluster that uses the Rook-Ceph pack experiences network issues, it's possible for the file mount to become and remain unavailable even after the network is restored. | This is a known issue disclosed in the [Rook GitHub repository](https://github.com/rook/rook/issues/13818). To resolve this issue, refer to the pack documentation. | April 14, 2024 | Packs, Edge | -| Edge clusters on Edge hosts with ARM64 processors may experience instability issues that cause cluster failures. | ARM64 support is limited to a specific set of Edge devices. Currently, Nvidia Jetson devices are supported. | April 14, 2024 | Edge | -| During the cluster provisioning process of new Edge clusters, the Palette webhook pods may not always deploy successfully, causing the cluster to be stuck in the provisioning phase. This issue does not impact deployed clusters. | Review the [Palette Webhook Pods Fail to Start](../troubleshooting/edge/edge.md#scenario---palette-webhook-pods-fail-to-start) troubleshooting guide for resolution steps.
| April 14, 2024 | Edge | +| Description | Workaround | Publish Date | Product Component | +| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ | ---------------------------- | +| Users cannot remove proxy values for connected Edge hosts in Local UI due to a validation error. Proxy values can still be added and updated. | No workaround available. | October 19, 2025 | Edge | +| On Edge clusters whose hosts run Ubuntu 24.04 with a Unified Kernel Image (UKI), CoreDNS pods may enter the `CrashLoopBackOff` state with logs showing `[FATAL] plugin/loop: Loop (127.0.0.1: -> :53) detected for zone "."`. 
This happens because `/etc/resolv.conf` is symlinked to `/run/systemd/resolve/stub-resolv.conf`, which lacks real DNS server entries. As a result, CoreDNS forwards DNS queries to itself, creating a recursive loop. | Refer to [Troubleshooting - Edge](../troubleshooting/edge/edge.md#scenario---coredns-pods-stuck-in-crashloopbackoff-due-to-dns-loop) for the workaround. | October 7, 2025 | Edge | +| Due to strict schema adherence enforcement, [Helm charts](../profiles/cluster-profiles/create-cluster-profiles/create-addon-profile/create-helm-addon.md) with parameters that do not exist in the chart schema fail to install on Palette 4.7.15 or later. | Remove parameters that do not exist in the chart schema from the pack values. Alternatively, add the missing parameters to the chart schema or remove the chart schema file entirely. | September 20, 2025 | Packs | +| Edge clusters using versions 1.32.3 and 1.33.0 may fail to come up because CoreDNS pods do not reach the running state. On existing clusters, CoreDNS pods can fall into a `CrashLoopBackOff` state with the error `exec /bin/pebble: no such file or directory`. This is due to a [Canonical Kubernetes known issue](https://github.com/canonical/k8s-snap/issues/1864). The Palette Optimized Canonical pack references the CoreDNS images `ghcr.io/canonical/coredns:1.11.3-ck0` in version 1.32.3 and `ghcr.io/canonical/coredns:1.11.4-ck1` in version 1.33.0. Both of these images are broken and cause CoreDNS pods to fail. | Use Palette Optimized Canonical pack versions other than 1.32.3 and 1.33.0, which include the fixed CoreDNS image. | September 20, 2025 | Edge, Packs | +| Agent mode Edge cluster creation may fail with logs showing the error `failed calling webhook "pod-registry.spectrocloud.com": tls: failed to verify certificate: x509: certificate signed by unknown authority ("Spectro Cloud")...`. As a result, core components such as CNI, Harbor, and cluster controllers never start.
All pods remain in **Pending** or **Failed** state. In the Local UI, packs display **Invalid date** in the **Started On** and **Completed On** fields. | Refer to [Troubleshooting - Edge](../troubleshooting/edge/edge.md#scenario---x509-certificate-signed-by-unknown-authority-errors-during-agent-mode-cluster-creation) for the workaround. | September 1, 2025 | Edge | +| [Virtual Machine Orchestrator (VMO)](../vm-management/vm-management.md) 4.7.1 cannot be uninstalled due to a missing image. | No workaround available. | September 1, 2025 | Virtual Machine Orchestrator | +| After an OS image upgrade in appliance mode, an Edge host may fail to boot into the expected active system image and instead boot into the passive partition as a fallback due to an upgrade failure. When this happens, the Edge host does not automatically rejoin the cluster. The kernel command line (`/proc/cmdline`) includes the `upgrade_failure` flag and confirms the system root is set to `LABEL=COS_PASSIVE`. | Recover the Edge host manually using one of the following methods:
- Reboot the host and select **Palette eXtended Kubernetes – Edge** at the GRand Unified Bootloader (GRUB) menu to boot the active image.
- Establish an SSH connection to the host and run `/usr/bin/grub2-editenv /oem/grubenv set next_entry=cos && reboot`. This command updates GRUB to use the boot entry labeled `cos` (the active image) and reboots the host. | September 1, 2025 | Edge | +| On Azure IaaS clusters created using a Palette version prior to 4.6.32, scaling worker node pools does not attach newly created nodes to an outbound load balancer after upgrading to Palette version 4.6.32 or later and the cluster's Palette Agent version to 4.6.7 or later. This impacts outbound connectivity and may also disassociate existing NAT gateways from the worker node pool subnet, resulting in a loss of egress connectivity. | - **Multi-Tenant SaaS** - No workaround available.
- **Self-Hosted Palette or VerteX** - Before upgrading your [self-hosted Palette](../self-hosted-setup/palette/palette.md) or [VerteX](../self-hosted-setup/vertex/vertex.md) instance to Palette version 4.6.32 or later, [pause agent upgrades](../clusters/cluster-management/platform-settings/pause-platform-upgrades.md) on any Azure IaaS clusters where you plan to perform Day-2 scaling or repave operations. | September 1, 2025 | Clusters, Self-Hosted | +| In self-hosted [Palette](../self-hosted-setup/palette/supported-environments/management-appliance/management-appliance.md) and [VerteX Management Appliance](../self-hosted-setup/vertex/supported-environments/management-appliance/management-appliance.md) environments, uploading the same pack as both a FIPS and non-FIPS version to the same registry overwrites the original pack.

For example, if you have a non-FIPS version of the `byoi-2.1.0` pack in your Zot registry and you upload the FIPS version of `byoi-2.1.0`, the new version will overwrite the existing one. This results in a SHA mismatch between the pack stored in the registry and the pack referenced in the cluster profile, which can lead to cluster creation failures. | Upload either a FIPS or non-FIPS version of a pack to your registry. Do not upload both to the same registry. | September 1, 2025 | Clusters, Self-Hosted | +| Cilium may fail to start on MAAS machines that are configured with a `br0` interface and are part of a cluster, displaying errors like `daemon creation failed: failed to detect devices: unable to determine direct routing device. Use --direct-routing-device to specify it`. This happens because Canonical Kubernetes supports setting various Cilium annotations, but it lacks some fields required for the MAAS `br0` network configuration due to [a limitation in `k8s-snap`](https://github.com/canonical/k8s-snap/issues/1740). | Avoid using MAAS machines with a `br0` interface when provisioning Canonical Kubernetes clusters. Instead, choose machines whose primary interface is directly connected to the MAAS-managed subnet or VLAN. | August 17, 2025 | Clusters, Packs | +| Network overlay cluster nodes may display erroneous `failed to add static FDB entry after cleanup...Stdout already set, output` logs after [upgrading the Palette agent](../clusters/edge/cluster-management/agent-upgrade-airgap.md) to version 4.7.9. Cluster functionality is not affected. | No workaround available. | August 17, 2025 | Edge | +| Container runtime may fail to run with the message `Failed to run CRI service error=failed to recover state: failed to get metadata for stored sandbox` after a node is upgraded to 1.29.14. This is related to an [upstream issue with containerd](https://github.com/containerd/containerd/issues/10848). 
| Remove the container runtime folder with `rm -rf /var/lib/containerd`. Then restart containerd and kubelet using `systemctl restart containerd && systemctl restart kubelet`. | August 17, 2025 | Edge | +| Due to [an upstream issue with a Go library and CLIs for working with container registries](https://github.com/google/go-containerregistry/issues/2124), unintended or non-graceful reboots during content push operations to registries can cause consistency issues. This leads to content sync in locally managed clusters throwing the `content-length: 0` error. | Refer to [Troubleshooting - Edge](../troubleshooting/edge/edge.md#scenario---content-length-0-errors-during-content-synchronization) for the workaround. | August 17, 2025 | Edge | +| Controller mode MAAS deployments using Canonical Kubernetes automatically install the Cilium CNI. This happens because of a known issue with the Canonical Kubernetes Cluster API (CAPI) bootstrap provider and cannot be disabled. However, Palette still requires users to explicitly configure a CNI in the cluster profile. | Select the **Cilium CNI (Canonical Kubernetes)** pack when creating a cluster profile to fulfill the CNI requirement. Palette recognizes this selection and allows cluster creation to proceed, even though Cilium is installed by the bootstrap process. | August 17, 2025 | Clusters, Packs | +| If you configure static IP on a host using the [Terminal User Interface (TUI)](../clusters/edge/site-deployment/site-installation/initial-setup.md), the cluster that is formed by the host cannot [enable network overlay](../clusters/edge/networking/vxlan-overlay.md). | Do not enable network overlay on clusters using static IPs configured via TUI. If you must use both static IP and network overlay, configure the static IP with the [user data network block](../clusters/edge/edge-configuration/installer-reference.md#site-network-parameters). 
| July 31, 2025 | Edge | +| When deploying an Edge RKE2 cluster on Rocky Linux, a worker node may fail to join the cluster if TCP port 9345 is not open on the control plane node. This port is required for communication between the RKE2 agent and the control plane. | Verify if the port is open by running `firewall-cmd --list-all` on the control plane node. If 9345/tcp is not listed in the output, open it with `firewall-cmd --zone=public --add-port=9345/tcp --permanent` and apply the change using `firewall-cmd --reload`. | July 21, 2025 | Edge | +| When using the Palette/VerteX Management Appliance, clicking on the Zot service link in Local UI results in a new tab displaying `Client sent an HTTP request to an HTTPS server`. | Change the prefix of the URL in your web browser to `https://` instead of `http://`. | July 21, 2025 | Clusters, Packs | +| When deploying a workload cluster with packs using `namespaceLabels`, the associated Pods get stuck if the cluster is deployed via [self-hosted Palette](../self-hosted-setup/palette/palette.md) or [Palette VerteX](../self-hosted-setup/vertex/vertex.md), or if the `palette-agent` ConfigMap specifies `data.feature.workloads: disable`. | Force-apply `privileged` labels to the affected namespace. Refer to the [Packs - Troubleshooting](../troubleshooting/pack-issues.md#scenario---pods-with-namespacelabels-are-stuck-on-deployment) guide for additional information. | July 19, 2025 | Clusters | +| Day-2 [node pool](../clusters/cluster-management/node-pool.md) operations cannot be performed on [AWS EKS clusters](../clusters/public-cloud/aws/eks.md) previously deployed with both **Enable Nodepool Customization** enabled and Amazon Linux 2023 (AL2023) [node labels](../clusters/cluster-management/node-labels.md) after upgrading to version 4.7.3. 
| Create a new node pool with the desired [Amazon Machine Image (AMI) and node pool customizations](../clusters/public-cloud/aws/eks.md#cloud-configuration-settings) and migrate existing workloads to the new node pool. For an example of how to migrate workloads, refer to the [AWS Scale, Upgrade, and Secure Clusters](../tutorials/getting-started/palette/aws/scale-secure-cluster.md#scale-a-cluster) guide. | July 19, 2025 | Clusters | +| [Cloning a virtual machine](../vm-management/create-manage-vm/clone-vm.md) using KubeVirt 1.5 or later may hang if [volume snapshots](../vm-management/create-manage-vm/take-snapshot-of-vm.md) are not configured. | Ensure that you configure a `VolumeSnapshotClass` in the `charts.virtual-machine-orchestrator.snapshot-controller.volumeSnapshotClass` resource in the pack. | July 19, 2025 | Virtual Machine Orchestrator | +| Edge K3s clusters may fail `kube-bench` tests even when they are expected to pass. These failures do not indicate security issues, but rather stem from how the tests are executed. | No workaround available. | July 11, 2025 | Edge | +| PXK-E clusters running Kubernetes v1.32.x or later on RHEL or Rocky Linux 8.x may experience failure during Kubernetes initialization due to an unsupported kernel version. | Use RHEL or Rocky Linux 9.x as the base OS or update the kernel version to 4.19 or later in the 4.x series, or to any 5.x or 6.x version. Refer to [Troubleshooting - Edge](../troubleshooting/edge/edge.md#scenario--pxk-e-clusters-on-rhel-and-rocky-8-fail-kubernetes-initialization) for debug steps. | June 23, 2025 | Edge | +| Calico fails to start when IPv6 is enabled on hosts running specific kernel versions due to missing or incompatible kernel modules required for `ip6tables` `MARK` support. Affected kernel versions include 5.15.0-127 and 5.15.0-128 (generic), 6.8.0-57 and 6.8.0-58 (generic), and 6.8.0-1022 (cloud). | Use a different CNI, disable IPv6, or use an unaffected kernel version. 
Refer to the [troubleshooting](../troubleshooting/pack-issues.md#scenario---calico-fails-to-start-when-ipv6-is-enabled) guide for debug steps. | June 23, 2025 | Packs | +| PXK-E control plane nodes in VerteX clusters may experience failure of the `kube-vip` component after reboot. | Refer to [Troubleshooting - Edge](../troubleshooting/edge/edge.md#scenario---pxk-e-clusters-in-vertex-deployments-experience-failure-upon-reboot) for debug steps. | June 23, 2025 | Edge | +| The [Pause Agent Upgrades](../clusters/cluster-management/platform-settings/pause-platform-upgrades.md) configuration is not applied to Edge hosts that are not part of a cluster. Edge hosts that are part of a cluster are not affected. | No workaround. | June 23, 2025 | Edge | +| Due to CAPZ upgrades in version 4.6.32, [Azure IaaS](../clusters/public-cloud/azure/azure.md) and [AKS](../clusters/public-cloud/azure/aks.md) clusters cannot be deployed on both [Azure Public Cloud](../clusters/public-cloud/azure/azure.md) and [Azure US Government](https://azure.microsoft.com/en-us/explore/global-infrastructure/government). Clusters will get stuck during the provisioning stage. | Users who want to deploy a cluster on both Azure environments must use a [PCG](../clusters/pcg/pcg.md) when adding an [Azure US Government cloud account](../clusters/public-cloud/azure/azure-cloud.md). | June 11, 2025 | Clusters | +| Palette eXtended Kubernetes (PXK) and Palette eXtended Kubernetes - Edge (PXK-E) versions 1.30.10, 1.31.6, and 1.32.2 or older do not support TLS 1.3 or applications that require TLS 1.3 encrypted communications. | Use PXK and PXK-E versions 1.30.11, 1.31.7, and 1.32.3 or later instead. | June 5, 2025 | Edge | +| Clusters with [Pause Agent Upgrades](../clusters/cluster-management/platform-settings/pause-platform-upgrades.md) enabled may be stuck in the **Deleting** state. Cluster resources will not be deleted without manual intervention. 
| Disable the **Pause Agent Upgrades** setting and trigger the cluster deletion. | May 31, 2025 | Clusters | +| When upgrading airgapped self-hosted Palette and VerteX clusters to 4.6.32, the IPAM controller may report an `Exhausted IP Pools` error despite having available IP addresses, preventing the cluster from upgrading. This is due to a race condition in CAPV version 1.12.0, which may lead to an orphaned IP claim. | Delete the orphaned IP claim and re-run the upgrade. Refer to the [troubleshooting](../troubleshooting/enterprise-install.md#scenario---ip-pool-exhausted-during-airgapped-upgrade) guide for debug steps. | May 31, 2025 | Clusters | +| Edge clusters using K3s version 1.32.1 or 1.32.2 may fail to provision due to an upstream issue. Refer to the [K3s issue page](https://github.com/k3s-io/k3s/issues/11973) for more information. | No workaround available. | May 31, 2025 | Edge | +| For clusters deployed with [agent mode](../deployment-modes/agent-mode/agent-mode.md) using the FIPS installation package, adding a custom `stylus.path` to the `user-data` file causes cluster creation to fail as it cannot find [kubelet](https://kubernetes.io/docs/concepts/architecture/#kubelet). | No workaround available. | May 31, 2025 | Edge | +| During a Kubernetes upgrade, the Cilium pod may get stuck in the `Init:CrashLoopBackOff` state due to nsenter permission issues. | Refer to [Troubleshooting - Edge](../troubleshooting/edge/edge.md#scenario---cilium-pod-stuck-during-kubernetes-upgrade-due-to-nsenter-permission-issue) for debug steps. 
| May 31, 2025 | Edge | +| Pods with [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volumes that are [backed up](../clusters/cluster-management/backup-restore/create-cluster-backup.md) using Velero 1.9, [restored](../clusters/cluster-management/backup-restore/restore-cluster-backup.md) using Velero 1.15, and backed up and restored again with Velero 1.15 are stuck in the `init` state when performing a second restore. This is caused by a known [upstream issue](https://github.com/vmware-tanzu/velero/pull/8880) with Velero. | Delete stuck pods or restart affected deployments. | May 31, 2025 | Clusters | +| [Appliance Studio](../deployment-modes/appliance-mode/appliance-studio.md) does not validate the value of each field in `.arg` or `user-data` files. | No workaround available. | May 31, 2025 | Edge | +| Palette virtual clusters provisioned with EKS clusters as host clusters in the cluster group and using the Calico CNI are stuck in the **Provisioning** state due to Cert Manager not being reachable. This stems from [an upstream limitation](https://cert-manager.io/docs/installation/compatibility/#aws-eks) between Cert Manager on EKS and custom CNIs. | No workaround available. | May 21, 2025 | Edge | +| [Remote shell](../clusters/edge/cluster-management/remote-shell.md) sessions executing in the [Chrome](https://www.google.com/intl/en_uk/chrome/) and [Microsoft Edge](https://www.microsoft.com/en-gb/edge/download?form=MA13FJ) browsers time out after approximately five minutes of inactivity. | Start [remote shell](../clusters/edge/cluster-management/remote-shell.md) sessions in the [Firefox](https://www.mozilla.org/en-GB/firefox/new/) browser instead. Firefox supports a 12 hour inactivity timeout. | May 5, 2025 | Edge | +| When upgrading an airgapped Edge cluster to version 4.6.24, some pods may get stuck in the `ImagePullBackOff` state. | Re-upload the content bundle. 
| May 5, 2025 | Edge | +| When you [enable remote shell](../clusters/edge/cluster-management/remote-shell.md) on an Edge host, the remote shell configuration may become stuck in the **Configuring** state. | Disable remote shell in the UI, and wait for one minute before enabling it again. | April 19, 2025 | Edge | +| Disconnected Edge clusters using PXK-E version 1.29.14 or 1.30.10 will sometimes go into the unknown state after a reboot. | Use the command `kubectl delete pod kube-vip-<node-name> --namespace kube-system` to delete the Kubernetes VIP pod and let it be re-created automatically. Replace `node-name` with the name of the host node. | March 15, 2025 | Edge | +| [MAAS](../clusters/data-center/maas/maas.md) and [VMware vSphere](../clusters/data-center/vmware/vmware.md) clusters fail to provision on existing self-hosted Palette and VerteX environments deployed with Palette 4.2.13 or later. These installations have an incorrectly configured default image endpoint, which causes image resolution to fail. New self-hosted installations are not affected. | Refer to [Troubleshooting](../troubleshooting/enterprise-install.md#scenario---maas-and-vmware-vsphere-clusters-fail-image-resolution-in-non-airgap-environments) for a workaround for non-airgap environments. For airgap environments, ensure that the images are downloaded to your environment. Refer to the [Additional OVAs](../downloads/self-hosted-palette/additional-ovas.md) page for further details. | February 16, 2025 | Self-Hosted, Clusters | +| Performing an `InPlaceUpgrade` from version 1.28 to 1.29 on active MAAS and AWS clusters with Cilium prevents new pods from being deployed on control plane nodes due to an [upstream issue](https://github.com/canonical/cluster-api-control-plane-provider-microk8s/issues/74) with Canonical. This issue also occurs when performing a MicroK8s `SmartUpgrade` from version 1.28 to 1.29 on active MAAS and AWS clusters with one control plane node and Cilium. 
| Manually restart the Cilium pods on _each_ control plane node using the command `microk8s kubectl rollout restart daemonset cilium --namespace kube-system`. | February 16, 2025 | Clusters, Packs | +| For clusters deployed with [Virtual Machine Orchestrator (VMO)](../vm-management/vm-management.md), namespaces on the **Virtual Machine** tab cannot be viewed by users with any `spectro-vm` cluster role. | Add the `spectro-namespace-list` cluster role to users who need to view virtual machines and virtual machine namespaces. Refer to the [Add Roles and Role Bindings](../vm-management/rbac/add-roles-and-role-bindings.md) guide for instructions on how to add roles for VMO clusters. | February 5, 2025 | Virtual Machine Orchestrator | +| For clusters deployed with [agent mode](../deployment-modes/agent-mode/agent-mode.md), the permissions of the `/opt/cni/bin` folder contents are not set correctly, causing cluster deployment issues because the cluster network cannot come up. | Refer to [Troubleshooting](../troubleshooting/edge/edge.md#scenario---agent-mode-deployments-cni-folder-permission-issues) for a workaround. | January 30, 2025 | Palette agent | +| Palette [workload clusters](../glossary-all.md#workload-cluster) deployed with Calico version 3.28.2, 3.29.0, or 3.29.1 are experiencing memory leaks due to an [upstream issue](https://github.com/projectcalico/calico/pull/9612) with Calico, which is caused by failing to close netlink handles. | [Create a new profile version](../profiles/cluster-profiles/modify-cluster-profiles/version-cluster-profile.md) using Calico version 3.28.0 or 3.28.1 and [update your cluster](../clusters/cluster-management/cluster-updates.md#update-a-cluster). 
| January 27, 2025 | Clusters, Packs | +| For clusters deployed with [agent mode](../deployment-modes/agent-mode/agent-mode.md) on Palette agent version 4.5.14, adding a custom `stylus.path` to the **user-data** file causes cluster creation to fail as it cannot find [kubeadm](https://kubernetes.io/docs/reference/setup-tools/kubeadm/). | Review the [Edge Troubleshooting](../troubleshooting/edge/edge.md) section for workarounds. Refer to [Identify the Target Agent Version](../clusters/edge/cluster-management/agent-upgrade-airgap.md#identify-the-target-agent-version) for guidance in retrieving your Palette agent version number. | January 19, 2025 | Edge | +| For clusters deployed with PXK-E and [agent mode](../deployment-modes/agent-mode/agent-mode.md), upgrades to higher Kubernetes versions are not supported with Palette agent version 4.5.12 or earlier. | No workaround available. Upgrades to higher Kubernetes versions are only supported from Palette agent version 4.5.14 and above for clusters deployed with PXK-E and agent mode. Refer to [Identify the Target Agent Version](../clusters/edge/cluster-management/agent-upgrade-airgap.md#identify-the-target-agent-version) for guidance in retrieving your Palette agent version number. | January 19, 2025 | Edge | +| Transferring the management of a local Edge cluster to central management by Palette or VerteX is not supported for multi-node clusters. | No workaround is available. | January 19, 2025 | Edge | +| Edits on the [Hybrid Profile](../clusters/public-cloud/aws/eks-hybrid-nodes/create-hybrid-node-pools.md#create-node-pool) of an [EKS Hybrid node pool](../clusters/public-cloud/aws/eks-hybrid-nodes/eks-hybrid-nodes.md) take effect as soon as you click the **Save** button on the **Configure Profile** tab, not when you click **Confirm** on the **Edit node pool** screen. | No workaround available. 
| January 19, 2025 | Clusters | +| [EKS Hybrid node](../clusters/public-cloud/aws/eks-hybrid-nodes/eks-hybrid-nodes.md) statuses are not displayed accurately when an update is in progress. This has no effect on the update operation itself. | No workaround available. | January 19, 2025 | Clusters | +| Deleting an [EKS Hybrid node](../clusters/public-cloud/aws/eks-hybrid-nodes/eks-hybrid-nodes.md) from the Node Details page will result in an error in the Palette UI and the operation will have no effect. Additionally, deletion cannot be performed if the node pool is in the middle of an update operation. | You can remove a node by changing the node pool instead. Refer to the [Change a Node Pool](../clusters/cluster-management/node-pool.md#change-a-node-pool) page. Ensure that the node pool update only includes deletion and that the node to be deleted is in a Running state. | January 19, 2025 | Clusters | +| [Maintenance mode](../clusters/cluster-management/maintenance-mode.md#activate-maintenance-mode) cannot be activated on [EKS Hybrid nodes](../clusters/public-cloud/aws/eks-hybrid-nodes/eks-hybrid-nodes.md). Attempting to activate maintenance mode will result in an error in the Palette UI and the operation will have no effect. | No workaround available. | January 19, 2025 | Clusters | +| When using the [VM Migration Assistant](../vm-management/vm-migration-assistant/vm-migration-assistant.md) to migrate VMs to your VMO cluster, migration plans can enter an **Unknown** state if more VMs are selected for migration than the **Max concurrent virtual machine migrations** setting allows. | Review the [Virtual Machine Orchestrator (VMO) Troubleshooting](../troubleshooting/vmo-issues.md#scenario---virtual-machine-vm-migration-plans-in-unknown-state) section for workarounds. | January 19, 2025 | Virtual Machine Orchestrator | +| Palette upgrades on K3s virtual clusters may be blocked if the cluster does not have enough resources to accommodate additional pods. 
Ensure that your cluster has 1 CPU, 1 GiB of memory, and 1 GiB of storage free before commencing an upgrade. You may increase the virtual cluster's resource quotas or disable them. | Refer to the [Adjust Virtual Clusters Limits](../troubleshooting/palette-dev-engine.md#scenario---adjust-virtual-clusters-limits-before-palette-upgrades) guide for workaround steps. | January 19, 2025 | Virtual Clusters | +| If you have manually configured the metrics server in your Edge airgap cluster using a manifest, upgrading to 4.5.15 may cause an additional metrics server pod to be created in your cluster. | Remove the manifest layer that adds the metrics server from your cluster profile and apply the update on your cluster. | December 15, 2024 | Edge | +| When deploying an Edge cluster using content bundles built from cluster profiles with PXK-E as the Kubernetes layer, some images in the Kubernetes layer fail to load into containerd. This issue occurs due to image signature problems, resulting in deployment failure. | Remove the `packs.content.images` from the Kubernetes layer in the pack configuration before building the content bundle. These components are already included in the provider image and do not need to be included in the content bundle. | December 13, 2024 | Edge | +| Hosts provisioned in [agent mode](../deployment-modes/agent-mode/agent-mode.md) do not display host information in the console after using the Palette Terminal User Interface to complete host setup. | Local UI is still available and will display host information. Refer to [Access Local UI](../clusters/edge/local-ui/host-management/access-console.md) to learn how to access Local UI. | December 12, 2024 | Edge | +| In a multi-node Edge cluster, the reset action on a cluster node does not update the node status on the leader node's linking screen. 
| [Scale down](../clusters/edge/local-ui/cluster-management/scale-cluster.md#scale-down-a-cluster) the cluster and free up the follower node before resetting the node. | December 12, 2024 | Edge | +| For Edge airgap clusters, manifests attached to packs are not applied during cluster deployment. | Add the manifest as a layer directly instead of attaching it to a pack. For more information, refer to [Add a Manifest](../profiles/cluster-profiles/create-cluster-profiles/create-addon-profile/create-manifest-addon.md). | November 15, 2024 | Edge | +| In some cases, the differential editor incorrectly reports YAML differences for customizations not created by you. The issue is more common when items in a list or array are removed. Clicking the **Keep** button when non-user-generated customization is the focus causes the button to become unresponsive after the first usage. | Skip differential highlights not created by you. Click the arrow button to skip and proceed. | November 11, 2024 | Cluster Profiles | +| Palette fails to provision virtual clusters on airgapped and proxy Edge cluster groups. This error is caused by Palette incorrectly defaulting to fetch charts from an external repository, which is unreachable from these environments. | No workaround. | November 9, 2024 | Virtual Clusters | +| The resource limits on Palette Virtual Clusters are too low and may cause the Palette agent to experience resource exhaustion. As a result, Palette pods required for Palette operations may experience Out-of-Memory (OOM) errors. | Refer to the [Apply Host Cluster Resource Limits to Virtual Cluster](../troubleshooting/palette-dev-engine.md#scenario---apply-host-cluster-resource-limits-to-virtual-cluster) guide for workaround steps. | November 4, 2024 | Virtual Clusters | +| Palette incorrectly modifies the indentation of the pack after it is configured as a cluster profile layer. 
The modified indentation does not cause errors, but you may observe changes to the pack **values.yaml**. | No workaround available. | October 30, 2024 | Cluster Profiles, Pack | +| Palette does not correctly configure multiple search domains when provided during the self-hosted installation. The configuration file **resolv.conf** ends up containing incorrect values. | Connect remotely to each node in the Palette self-hosted instance and edit the **resolv.conf** configuration file. | October 17, 2024 | Self-Hosted, PCG | +| Upgrading the RKE2 version from 1.29 to 1.30 fails due to [an upstream issue](https://github.com/rancher/rancher/issues/46726) with RKE2 and Cilium. | Refer to the [Troubleshooting section](../troubleshooting/edge/edge.md#scenario---clusters-with-cilium-and-rke2-experiences-kubernetes-upgrade-failure) for the workaround. | October 12, 2024 | Edge | +| Kubernetes clusters deployed on MAAS with MicroK8s are experiencing deployment issues when using the upgrade strategy `RollingUpgrade`. This issue is affecting new cluster deployments and node provisioning. | Use the `InPlaceUpgrade` strategy to upgrade nodes deployed in MAAS. | October 12, 2024 | Clusters, Pack | +| Clusters using MicroK8s and conducting backup and restore operations using Velero with [restic](https://github.com/restic/restic) are encountering restic pods going into the `CrashLoopBackOff` state. This issue stems from an upstream problem in the Velero project. You can learn more about it in the GitHub issue [4035](https://github.com/vmware-tanzu/velero/issues/4035) page. | Refer to the Additional Details section for troubleshooting workaround steps. | October 12, 2024 | Clusters | +| Clusters deployed with MicroK8s cannot accept kubectl commands if the Spectro Proxy pack is added to the cluster's cluster profile. The reason behind this issue is MicroK8s' lack of support for `certSANs`. This causes the Kubernetes API server to reject Spectro Proxy certificates. 
Check out GitHub issue [114](https://github.com/canonical/cluster-api-bootstrap-provider-microk8s/issues/114) in the MicroK8s cluster-api-bootstrap-provider-microk8s repository to learn more. | Use the [admin kubeconfig file](../clusters/cluster-management/kubeconfig.md#kubeconfig-files) to access the cluster API, as it does not use the Spectro Proxy server. This option may be limited to environments where you can access the cluster directly from a network perspective. | October 1, 2024 | Clusters, Pack | +| Clusters deployed with MicroK8s cannot accept kubectl commands if the Spectro Proxy pack is added to the cluster's cluster profile. The reason behind these issues is MicroK8s' lack of support for `certSANs`. This causes the Kubernetes API server to reject Spectro Proxy certificates. | Use the CLI flag [`--insecure-skip-tls-verify`](https://kubernetes.io/docs/reference/kubectl/kubectl/) with kubectl commands or use the [admin kubeconfig file](../clusters/cluster-management/kubeconfig.md#kubeconfig-files) to access the cluster API, as it does not use the Spectro Proxy server. This option may be limited to environments where you can access the cluster directly from a network perspective. | October 1, 2024 | Clusters, Pack | +| Deploying new [Nutanix clusters](../clusters/data-center/nutanix/nutanix.md) fails for self-hosted Palette or VerteX users on version 4.4.18 or newer. | No workaround is available. | September 26, 2024 | Clusters | +| OCI Helm registries added to Palette or VerteX before support for OCI Helm registries hosted in AWS ECR was available in Palette have an invalid API payload that is causing cluster imports to fail if the OCI Helm registry is referenced in the cluster profile. | Log in to Palette as a tenant administrator and navigate to the left **Main Menu**. Select **Registries** and click on the **OCI Registries** tab. For each OCI registry of the Helm type, click on the **three-dot Menu** at the end of the row. Select **Edit**. 
To fix the invalid API payload, click on **Confirm**. Palette will automatically add the correct provider type behind the scenes to address the issue. | September 25, 2024 | Helm Registries | +| Airgap self-hosted Palette or VerteX instances cannot use the Container service in App Profiles. The required dependency, [DevSpace](https://github.com/devspace-sh/devspace), is unavailable from the Palette pack registry and is downloaded from the Internet at runtime. | Use the manifest service in an [App Profile](../profiles/app-profiles/app-profiles.md) to specify a custom container image. | September 25, 2024 | App Mode | +| Using the Flannel Container Network Interface (CNI) pack together with a Red Hat Enterprise Linux (RHEL)-based provider image may lead to a pod becoming stuck during deployment. This is caused by an upstream issue with Flannel that was discovered in a K3s GitHub issue. Refer to [the K3s issue page](https://github.com/k3s-io/k3s/issues/5013) for more information. | No workaround is available. | September 14, 2024 | Edge | +| Palette OVA import operations fail if the VMO cluster is using a storageClass with the volume binding mode `WaitForFirstConsumer`. | Refer to the [OVA Imports Fail Due To Storage Class Attribute](../troubleshooting/vmo-issues.md#scenario---ova-imports-fail-due-to-storage-class-attribute) troubleshooting guide for workaround steps. | September 13, 2024 | Palette CLI, VMO | +| Persistent Volume Claims (PVCs) metadata do not use a unique identifier for self-hosted Palette clusters. This causes incorrect Cloud Native Storage (CNS) mappings in vSphere, potentially leading to issues during node operations and cluster upgrades. | Refer to the [Troubleshooting section](../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping) for guidance. | September 13, 2024 | Self-hosted | +| Third-party binaries downloaded and used by the Palette CLI may become stale and incompatible with the CLI. 
| Refer to the [Incompatible Stale Palette CLI Binaries](../troubleshooting/automation.md#scenario---incompatible-stale-palette-cli-binaries) troubleshooting guide for workaround guidance. | September 11, 2024 | CLI | +| An issue with Edge hosts using [Trusted Boot](../clusters/edge/trusted-boot/trusted-boot.md) and encrypted drives occurs when TRIM is not enabled. As a result, Solid-State Drives (SSDs) and Non-Volatile Memory Express (NVMe) drives experience degraded performance, potentially causing cluster failures. This [issue](https://github.com/kairos-io/kairos/issues/2693) stems from [Kairos](https://kairos.io/) not passing through the `--allow-discards` flag to the `systemd-cryptsetup attach` command. | Check out the [Degraded Performance on Disk Drives](../troubleshooting/edge/edge.md#scenario---degraded-performance-on-disk-drives) troubleshooting guide for workaround guidance. | September 4, 2024 | Edge | +| The AWS CSI pack has a [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) (PDB) that allows for a maximum of one unavailable pod. This behavior causes an issue for single-node clusters as well as clusters with a single control plane node and a single worker node where the control plane lacks worker capability. [Operating System (OS) patch](../clusters/cluster-management/os-patching.md) updates may attempt to evict the CSI controller without success, resulting in the node remaining in an unschedulable state. | If OS patching is enabled, allow the control plane nodes to have worker capability. For single-node clusters, turn off the OS patching feature. | September 4, 2024 | Cluster, Packs | +| On AWS IaaS MicroK8s clusters, OS patching can get stuck and fail. | Refer to the [Troubleshooting](../troubleshooting/nodes.md#os-patch-fails-on-aws-with-microk8s-127) section for debug steps.
| August 17, 2024 | Palette | +| When upgrading a self-hosted Palette instance from 4.3 to 4.4, the MongoDB pod may be stuck with the following error: `ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.` | Delete the PVC, PV, and pod manually. All resources will be recreated with the correct configuration. | August 17, 2024 | Self-Hosted Palette | +| For existing clusters that have added a new machine and all new clusters, pods may be stuck in the draining process and require manual intervention to drain the pod. | Manually delete the pod if it is stuck in the draining process. | August 17, 2024 | Palette | +| Clusters with the Virtual Machine Orchestrator (VMO) pack may experience VMs getting stuck in a continuous migration loop, as indicated by a `Migrating` or `Migration` VM status. | Review the [Virtual Machine Orchestrator (VMO) Troubleshooting](../troubleshooting/vmo-issues.md) section for workarounds. | August 1, 2024 | Virtual Machine Orchestrator | +| Palette CLI users who authenticated with the `login` command and specified a Palette console endpoint that does not contain the tenant name are encountering issues with expired JWT tokens. | Re-authenticate using your tenant URL, for example, `https://my-org.console.spectrocloud.com`. If the issue persists after re-authenticating, remove the `~/.palette/palette.yaml` file that is auto-generated by the Palette CLI. Re-authenticate with the `login` command if other commands require it. | July 25, 2024 | CLI | +| Adding new cloud providers, such as Nutanix, is currently unavailable. Private Cloud Gateway (PCG) deployments in new Nutanix environments fail to complete the installation. As a result, adding a new Nutanix environment to launch new host clusters is unavailable. This does not impact existing Nutanix deployments with a PCG deployed. | No workarounds are available.
| July 20, 2024 | Clusters, Self-Hosted, PCG | +| Single-node Private Cloud Gateway (PCG) clusters are experiencing an issue upgrading to 4.4.11. The vSphere CSI controller pod fails to start because there are no matching affinity rules. | Check out the [vSphere Controller Pod Fails to Start in Single Node PCG Cluster](../troubleshooting/pcg.md#scenario---vsphere-controller-pod-fails-to-start-in-single-node-pcg-cluster) guide for workaround steps. | July 20, 2024 | PCG | +| When provisioning an Edge cluster, it's possible that some Operating System (OS) user credentials will be lost once the cluster is active. This is because the cloud-init stages from different sources merge during the deployment process, and stages without distinct names can overwrite each other. | Give each of your cloud-init stages in the OS pack and in the Edge installer **user-data** file a unique name. For more information about cloud-init stages and examples of cloud-init stages with names, refer to [Cloud-init Stages](../clusters/edge/edge-configuration/cloud-init.md). | July 17, 2024 | Edge | +| When you use a content bundle to provision a new cluster without using the local Harbor registry, it's possible for the images to be pulled from external networks instead of from the content bundle, consuming network bandwidth. If your Edge host has no connection to external networks or if it cannot locate the image on a remote registry, some pods may enter the `ImagePullBackOff` state at first, but eventually the pods will be created using images from the content bundle. | For connected clusters, ensure that the remote images are not reachable by the Edge host. This stops the Palette agent from downloading the images and consuming bandwidth, and the cluster is eventually created using images from the content bundle. For airgap clusters, the `ImagePullBackOff` error will eventually resolve on its own, and no action is required.
| July 11, 2024 | Edge | +| When you add a new VMware vSphere Edge host to an Edge cluster, the IP address may fail to be assigned to the Edge host after a reboot. | Review the [Edge Troubleshooting](../troubleshooting/edge/edge.md) section for workarounds. | July 9, 2024 | Edge | +| When you install Palette Edge using an Edge Installer ISO with a RHEL 8 operating system on a Virtual Machine (VM) with insufficient video memory, the QR code in the registration screen does not display correctly. | Increase the video memory of your VM to 8 MB or higher. The steps to do this vary depending on the platform you use to deploy your VM. In vSphere, you can right-click the VM, select **Edit Settings**, and adjust the video card memory in the **Video card** tab. | July 9, 2024 | Edge | +| Custom Certificate Authority (CA) certificates are not supported for accessing AKS clusters. Using a custom CA prevents the `spectro-proxy` pack from working correctly with AKS clusters. | No workaround is available. | July 9, 2024 | Packs, Clusters | +| Manifests attached to an Infrastructure Pack, such as OS, Kubernetes, Network, or Storage, are not applied to the Edge cluster. This issue does not impact the infrastructure pack's YAML definition, which is applied to the cluster. | Specify custom configurations through an add-on pack or a custom manifest pack applied after the infrastructure packs. | July 9, 2024 | Edge, Packs | +| Clusters using Cilium and deployed to VMware environments with the VXLAN tunnel protocol may encounter an I/O timeout error. This issue is caused by the VMXNET3 adapter, which drops network traffic and results in VXLAN traffic being dropped. You can learn more about this issue in [Cilium GitHub issue #21801](https://github.com/cilium/cilium/issues/21801). | Review the section for workarounds.
| June 27, 2024 | Packs, Clusters, Edge | +| [Sonobuoy](../clusters/cluster-management/compliance-scan.md#conformance-testing) scans fail to generate reports on airgapped Palette Edge clusters. | No workaround is available. | June 24, 2024 | Edge | +| Clusters configured with OpenID Connect (OIDC) at the Kubernetes layer encounter issues when authenticating with the [non-admin Kubeconfig file](../clusters/cluster-management/kubeconfig.md#cluster-admin). Kubeconfig files using OIDC to authenticate will not work if the SSL certificate is set at the OIDC provider level. | Use the admin Kubeconfig file to authenticate with the cluster, as it does not use OIDC to authenticate. | June 21, 2024 | Clusters | +| During the platform upgrade from Palette 4.3 to 4.4, Virtual Clusters may encounter a scenario where the pod `palette-controller-manager` is not upgraded to the newer version of Palette. The virtual cluster will continue to be operational, and this does not impact its functionality. | Refer to the [Controller Manager Pod Not Upgraded](../troubleshooting/palette-dev-engine.md#scenario---controller-manager-pod-not-upgraded) troubleshooting guide. | June 15, 2024 | Virtual Clusters | +| Edge hosts with FIPS-compliant Red Hat Enterprise Linux (RHEL) and Ubuntu Operating Systems (OS) may encounter an error where the `systemd-resolved.service` service enters the **failed** state. This prevents the nameserver from being configured, which will result in cluster deployment failure. | Refer to [Troubleshooting](../troubleshooting/edge/edge.md#scenario---systemd-resolvedservice-enters-failed-state) for a workaround. | June 15, 2024 | Edge | +| The GKE cluster's Kubernetes pods fail to start because the Kubernetes patch version is unavailable. This is encountered during pod restarts or node scaling operations. | Deploy a new cluster and use a GKE cluster profile that does not contain a Kubernetes pack layer with a patch version.
Migrate the workloads from the existing cluster to the new cluster. This is a breaking change introduced in Palette 4.4.0. | June 15, 2024 | Packs, Clusters | +| does not support multi-node control plane clusters. The upgrade strategy, `InPlaceUpgrade`, is the only option available for use. | No workaround is available. | June 15, 2024 | Packs | +| Clusters using as the Kubernetes distribution, the control plane node fails to upgrade when using the `InPlaceUpgrade` strategy for sequential upgrades, such as upgrading from version 1.25.x to version 1.26.x and then to version 1.27.x. | Refer to the [Control Plane Node Fails to Upgrade in Sequential MicroK8s Upgrades](../troubleshooting/pack-issues.md) troubleshooting guide for resolution steps. | June 15, 2024 | Packs | +| Azure IaaS clusters have issues with deployed load balancers and ingress deployments when using Kubernetes versions 1.29.0 and 1.29.4. As a result, incoming connections time out due to the lack of a network path inside the cluster. AKS clusters are not impacted. | Use a Kubernetes version lower than 1.29.0. | June 12, 2024 | Clusters | +| OIDC integration with Virtual Clusters is not functional. All other operations related to Virtual Clusters are operational. | No workaround is available. | June 11, 2024 | Virtual Clusters | +| Deploying self-hosted Palette or VerteX to a vSphere environment fails if vCenter has standalone hosts directly under a data center. Persistent Volume (PV) provisioning fails due to an upstream issue with the vSphere Container Storage Interface (CSI) for all versions before v3.2.0. Palette and VerteX use the vSphere CSI version 3.1.2 internally. The issue may also occur in workload clusters deployed on vSphere using the same vSphere CSI for storage volume provisioning. | If you encounter the following error message when deploying self-hosted Palette or VerteX: `'ProvisioningFailed failed to provision volume with StorageClass "spectro-storage-class".
Error: failed to fetch hosts from entity ComputeResource:domain-xyz` then use the following workaround. Remove standalone hosts directly under the data center from vCenter and allow the volume provisioning to complete. After the volume is provisioned, you can add the standalone hosts back. You can also use a service account that does not have access to the standalone hosts as the user that deployed Palette. | May 21, 2024 | Self-Hosted | +| Conducting cluster node scaling operations on a cluster undergoing a backup can lead to issues and potential unresponsiveness. | To avoid this, ensure no backup operations are in progress before scaling nodes or performing other cluster operations that change the cluster state. | April 14, 2024 | Clusters | +| Palette automatically creates an AWS security group for worker nodes using the format `-node`. If a security group with the same name already exists in the VPC, the cluster creation process fails. | To avoid this, ensure that no security group with the same name exists in the VPC before creating a cluster. | April 14, 2024 | Clusters | +| K3s version 1.27.7 has been marked as _Deprecated_. This version has a known issue that causes clusters to crash. | Upgrade to a newer version of K3s to avoid the issue, such as versions 1.26.12, 1.27.11, and 1.28.5. You can learn more about the issue on the [K3s GitHub issue](https://github.com/k3s-io/k3s/issues/9047) page. | April 14, 2024 | Packs, Clusters | +| When deploying a multi-node AWS EKS cluster with the Container Network Interface (CNI), the cluster deployment fails. | A workaround is to use the AWS VPC CNI in the interim while the issue is resolved. | April 14, 2024 | Packs, Clusters | +| If a Kubernetes cluster deployed to VMware is deleted and later re-created with the same name, the cluster creation process fails. The issue is caused by existing resources remaining inside the PCG, or the System PCG, that are not cleaned up during the cluster deletion process.
| Refer to the [VMware Resources Remain After Cluster Deletion](../troubleshooting/pcg.md#scenario---vmware-resources-remain-after-cluster-deletion) troubleshooting guide for resolution steps. | April 14, 2024 | Clusters | +| Day-2 operations related to infrastructure changes, such as modifying the node size and count, do not take effect when using MicroK8s. | No workaround is available. | April 14, 2024 | Packs, Clusters | +| If a cluster that uses the Rook-Ceph pack experiences network issues, it's possible for the file mount to become and remain unavailable even after the network is restored. | This is a known issue disclosed in the [Rook GitHub repository](https://github.com/rook/rook/issues/13818). To resolve this issue, refer to the pack documentation. | April 14, 2024 | Packs, Edge | +| Edge clusters on Edge hosts with ARM64 processors may experience instability issues that cause cluster failures. | ARM64 support is limited to a specific set of Edge devices. Currently, Nvidia Jetson devices are supported. | April 14, 2024 | Edge | +| During the cluster provisioning process of new Edge clusters, the Palette webhook pods may not always deploy successfully, causing the cluster to be stuck in the provisioning phase. This issue does not impact deployed clusters. | Review the [Palette Webhook Pods Fail to Start](../troubleshooting/edge/edge.md#scenario---palette-webhook-pods-fail-to-start) troubleshooting guide for resolution steps. | April 14, 2024 | Edge | ## Resolved Known Issues diff --git a/docs/docs-content/release-notes/release-notes.md b/docs/docs-content/release-notes/release-notes.md index 17fec2bdc00..a3c878b1424 100644 --- a/docs/docs-content/release-notes/release-notes.md +++ b/docs/docs-content/release-notes/release-notes.md @@ -24,7 +24,7 @@ The following components have been updated for Palette version 4.7.27 - 4.7.29.
- Fixed an issue that caused pods belonging to the pack to go into an `Unknown` state after scaling [Edge clusters](../clusters/edge/edge.md). -- Fixed an issue that prevented the FIPS-compliant version of the pack from operating correctly on [Palette VerteX](../vertex/vertex.md). +- Fixed an issue that prevented the FIPS-compliant version of the pack from operating correctly on [Palette VerteX](../self-hosted-setup/vertex/vertex.md). @@ -77,8 +77,8 @@ The following components have been updated for Palette version 4.7.27. -- The [Palette Management Appliance](../enterprise-version/install-palette/palette-management-appliance.md) and - [VerteX Management Appliance](../vertex/install-palette-vertex/vertex-management-appliance.md) have now been updated to use the following components internally: +- The [Palette Management Appliance](../self-hosted-setup/palette/supported-environments/management-appliance/management-appliance.md) and + [VerteX Management Appliance](../self-hosted-setup/vertex/supported-environments/management-appliance/management-appliance.md) have now been updated to use the following components internally: - 1.32.8 - 3.30.2 @@ -87,6 +87,14 @@ The following components have been updated for Palette version 4.7.27. +### Bug Fixes + + + +- Fixed an issue that prevented the FIPS-compliant version of the pack from operating correctly on [Palette VerteX](../self-hosted-setup/vertex/vertex.md). + + + ### Packs #### Pack Notes @@ -158,8 +166,8 @@ The following components have been updated for Palette version 4.7.27. 
-- [Palette Management Appliance](../enterprise-version/install-palette/palette-management-appliance.md) and - [VerteX Management Appliance](../vertex/install-palette-vertex/vertex-management-appliance.md) now automatically +- [Palette Management Appliance](../self-hosted-setup/palette/supported-environments/management-appliance/management-appliance.md) and + [VerteX Management Appliance](../self-hosted-setup/vertex/supported-environments/management-appliance/management-appliance.md) now automatically delete the `provider_extract` directory after deployment, removing unused files. Additionally, Palette and VerteX management appliance now use 1.32.8 and 2.9.0 internally. @@ -185,7 +193,7 @@ The following components have been updated for Palette version 4.7.27. [Azure IaaS clusters](../clusters/public-cloud/azure/create-azure-cluster.md) using static placement. - Fixed an issue that prevented the deletion of [EKS clusters](../clusters/public-cloud/aws/eks.md) deployed in - [AWS secret regions](../clusters/public-cloud/aws/add-aws-accounts.md). + [AWS secret regions](../clusters/public-cloud/aws/add-aws-accounts/add-aws-accounts.md). #### Deprecations and Removals @@ -549,8 +557,8 @@ The following component updates are applicable to this release: #### Features - Palette and VerteX Management Appliance now support Secure Boot. Refer to the [Palette Management - Appliance](../enterprise-version/install-palette/palette-management-appliance.md) guide for further configuration - information. + Appliance](../self-hosted-setup/palette/supported-environments/management-appliance/management-appliance.md) guide for + further configuration information. - Palette and VerteX Management Appliance now support single node installation. We do not recommend this setup for production environments. 
@@ -563,7 +571,7 @@ The following component updates are applicable to this release: #### Bug Fixes - Fixed an issue that caused the [VM Migration Assistant](../vm-management/vm-migration-assistant/vm-migration-assistant.md) to leave open connections after VM migrations. -- Fixed an issue that incorrectly allowed the creation of [EKS Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html) in [AWS GovCloud](../clusters/public-cloud/aws/add-aws-accounts.md#aws-govcloud-account-us). +- Fixed an issue that incorrectly allowed the creation of [EKS Fargate](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html) in [AWS GovCloud](../clusters/public-cloud/aws/add-aws-accounts/add-aws-accounts.md#aws-govcloud-account-us). - Fixed an issue where, on Azure IaaS clusters created using a Palette version prior to 4.6.32, scaling worker node pools did not attach newly created nodes to an outbound load balancer after upgrading to Palette version 4.6.32 or later and the cluster's Palette Agent version to 4.6.7 or later. - Fixed an issue that caused manifest layers created using [Crossplane](../automation/crossplane/crossplane.md) to display incorrectly in the Palette UI. - Fixed an issue that caused [EKS nodes](../clusters/public-cloud/aws/eks.md#cloud-configuration-settings) customized with the `AL2_x86_64` AMI label to be incorrectly configured with Amazon Linux 2023 (AL2023). @@ -860,11 +868,11 @@ Check out the [CLI Tools](/downloads/cli-tools/) page to find the compatible ver #### Bug Fixes - Fixed an issue that caused errors on message broker pods after upgrading - [self-hosted Palette](../enterprise-version/enterprise-version.md) installations to version 4.7.4 or later. + [self-hosted Palette](../self-hosted-setup/palette/palette.md) installations to version 4.7.4 or later.
- Fixed an issue that caused validation errors to appear when [adding an Amazon ECR](../registries-and-packs/registries/oci-registry/add-oci-packs.md) hosted in [AWS GovCloud](https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/govcloud-ecr.html) to Palette. -- Fixed an issue that caused [self-hosted Palette](../enterprise-version/enterprise-version.md) installations to allow +- Fixed an issue that caused [self-hosted Palette](../self-hosted-setup/palette/palette.md) installations to allow passing open redirects in URLs using the `returnTo` parameter. - Fixed an issue that caused multiple repeated creations and reconciliations of @@ -1124,19 +1132,20 @@ Check out the [CLI Tools](/downloads/cli-tools/) page to find the compatible ver #### Features -- The [Palette Management Appliance](../enterprise-version/install-palette/palette-management-appliance.md) - is a new method to install self-hosted Palette in your infrastructure environment. It provides a simple and efficient - way to deploy Palette using an ISO file. The Palette Management Appliance is available for VMware, Bare Metal, and - Machine as a Service (MAAS) environments. +- The [Palette Management + Appliance](../self-hosted-setup/palette/supported-environments/management-appliance/management-appliance.md) is a new + method to install self-hosted Palette in your infrastructure environment. It provides a simple and efficient way to + deploy Palette using an ISO file. The Palette Management Appliance is available for VMware, Bare Metal, and Machine as + a Service (MAAS) environments. - The [Artifact Studio](../downloads/artifact-studio.md) is a new platform for obtaining bundles, packs, and installers relating to Palette Enterprise and Palette VerteX. It provides a single source for these artifacts, which you can download and then upload to your registries. 
-- [Self-hosted Palette](../enterprise-version/enterprise-version.md) now supports the configuration of a classification +- [Self-hosted Palette](../self-hosted-setup/palette/palette.md) now supports the configuration of a classification banner. System administrators can set the banner text and color through the - [system console](../enterprise-version/system-management/system-management.md#system-console). Refer to the - [Banners](../enterprise-version/system-management/login-banner.md) guide for further guidance. + [system console](../self-hosted-setup/palette/system-management/system-management.md#system-console). Refer to the + [Banners](../self-hosted-setup/palette/system-management/login-banner.md) guide for further guidance. - All ZST bundles, ISO files, and images in Spectro Cloud-owned registries are now signed using [Cosign](https://docs.sigstore.dev/cosign/system_config/installation/), ensuring artifacts are traceable, @@ -1306,8 +1315,8 @@ Check out the [CLI Tools](/downloads/cli-tools/) page to find the compatible ver - Configuration adjustments have been made to improve the compatibility of the [Virtual Machine Orchestrator](../vm-management/vm-management.md) with - [self-hosted Palette](../enterprise-version/enterprise-version.md) installations. This includes the ability to - configure a private CA certificate for secure communication. Refer to the + [self-hosted Palette](../self-hosted-setup/palette/palette.md) installations. This includes the ability to configure a + private CA certificate for secure communication. Refer to the [Configure Private CA Certificate](../vm-management/configure-private-ca-certificate.md) guide for more details. - The KubeVirt version in use is now v1.5.0. 
Other components of the VMO pack have also been upgraded, enhancing system diff --git a/docs/docs-content/security-bulletins/security-advisories/security-advisories.md b/docs/docs-content/security-bulletins/security-advisories/security-advisories.md index fbd4a8820ec..0defbe4901f 100644 --- a/docs/docs-content/security-bulletins/security-advisories/security-advisories.md +++ b/docs/docs-content/security-bulletins/security-advisories/security-advisories.md @@ -36,8 +36,8 @@ when running on a non-FIPS-compliant OS or Kubernetes cluster, may allow negotia algorithms. Self-hosted instances that meet the -[FIPS prerequisite](../../vertex/install-palette-vertex/install-on-kubernetes/install.md#prerequisites) as outlined in -our user documentation are not affected by this vulnerability. +[FIPS prerequisite](../../self-hosted-setup/vertex/supported-environments/kubernetes/install/non-airgap.md#prerequisites) +as outlined in our user documentation are not affected by this vulnerability. ### Recommended Actions @@ -148,9 +148,9 @@ the patched versions (v1.27.15, v1.28.11, v1.29.6, and v1.30.2) or newer. [Update a Cluster Profile](../../profiles/cluster-profiles/modify-cluster-profiles/update-cluster-profile.md) guide for instructions on how to update a cluster profile and apply the updates to workload clusters. -- Refer to the [Palette Enterprise](../../enterprise-version/upgrade/upgrade.md) or - [Palette VerteX](../../vertex/upgrade/upgrade.md) upgrade guides for guidance on upgrading the version for all - connected and airgapped Palette Enterprise and Palette VerteX clusters. +- Refer to the [Palette Enterprise](../../self-hosted-setup/palette/palette.md) or + [Palette VerteX](../../self-hosted-setup/vertex/vertex.md) upgrade guides for guidance on upgrading the version for + all connected and airgapped Palette Enterprise and Palette VerteX clusters. 
## Security Advisory 001 - Nginx Vulnerability @@ -212,8 +212,8 @@ This vulnerability affects both workload clusters and Palette deployments. - Connected and airgapped Palette Enterprise and VerteX versions 4.4 - 4.6 must apply the latest patch to automatically upgrade the `ingress-nginx-controller` DaemonSet to version `1.11.5`. For guidance on upgrading your Palette version, - refer to the [Palette Enterprise](../../enterprise-version/upgrade/upgrade.md) or - [VerteX](../../vertex/upgrade/upgrade.md) upgrade guide. + refer to the [Palette Enterprise](../../self-hosted-setup/palette/palette.md) or + [VerteX](../../self-hosted-setup/vertex/vertex.md) upgrade guide. :::warning diff --git a/docs/docs-content/security/product-architecture/self-hosted-operation.md b/docs/docs-content/security/product-architecture/self-hosted-operation.md index 6dbbcf230d9..3e8532beb7a 100644 --- a/docs/docs-content/security/product-architecture/self-hosted-operation.md +++ b/docs/docs-content/security/product-architecture/self-hosted-operation.md @@ -15,14 +15,14 @@ environment has security controls. Palette automatically generates security keys for the management cluster. You can import an optional certificate and private key to match the Fully Qualified Domain Name (FQDN) of the management cluster. Palette supports enabling disk encryption policies for management cluster virtual machines (VMs) if required. For information about deploying Palette in a self-hosted environment, review the -[Self-Hosted Installation](../../enterprise-version/enterprise-version.md) guide. +[Self-Hosted Installation](../../self-hosted-setup/palette/palette.md) guide. In self-hosted deployments, the Open Virtualization Appliance (OVA) can operate in standalone mode for quick Proof of Concept (POC) or in enterprise mode, which launches a three-node High Availability (HA) cluster as the Palette management cluster.
The management cluster provides a browser-based web interface that allows you to set up a tenant and provision and manage tenant clusters. You can also deploy Palette to a Kubernetes cluster by using the Palette Helm Chart. To learn more, review the -[Install Using Helm Chart](../../enterprise-version/install-palette/install-on-kubernetes/install.md) guide. +[Install Using Helm Chart](../../self-hosted-setup/palette/supported-environments/kubernetes/install/install.md) guide. The following points apply to self-hosted deployments: diff --git a/docs/docs-content/self-hosted-setup/_category_.json b/docs/docs-content/self-hosted-setup/_category_.json new file mode 100644 index 00000000000..b465995e2d8 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 35 +} diff --git a/docs/docs-content/enterprise-version/_category_.json b/docs/docs-content/self-hosted-setup/palette/_category_.json similarity index 100% rename from docs/docs-content/enterprise-version/_category_.json rename to docs/docs-content/self-hosted-setup/palette/_category_.json diff --git a/docs/docs-content/self-hosted-setup/palette/palette.md b/docs/docs-content/self-hosted-setup/palette/palette.md new file mode 100644 index 00000000000..3228908827d --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/palette.md @@ -0,0 +1,132 @@ +--- +sidebar_label: "Palette" +title: "Self-Hosted Palette" +description: "How to get started with self-hosted Palette." +hide_table_of_contents: false +tags: ["self-hosted"] +keywords: ["self-hosted"] +--- + +Palette is available as a self-hosted platform offering. You can install the self-hosted version of Palette in your data +centers or public cloud providers to manage Kubernetes clusters. 
+ +![A diagram of Palette deployment models eager-load](/architecture_architecture-overview-deployment-models-on-prem-focus.webp) + +:::info + +Palette VerteX is a FIPS-compliant version of Palette that is available for regulated industries, such as government and +public sector organizations that handle sensitive and classified information. To learn more about Palette VerteX, check +out the [Palette VerteX](../vertex/vertex.md) section. + +::: + +## Access Palette + +To set up a Palette account, contact our support team by sending an email to support@spectrocloud.com. Include the +following information in your email: + +- Your full name +- Organization name (if applicable) +- Email address +- Phone number (optional) +- Target Platform (VMware or Kubernetes) +- A brief description of your intended use of Palette + +Our dedicated Support team will promptly get in touch with you to provide the necessary credentials and assistance +required to get started with self-hosted Palette. + +## Supported Platforms + +:::danger + +The [following section](#content-to-be-refactored) contains the content from the former VerteX +[Supported Platforms](https://docs.spectrocloud.com/vertex/supported-platforms/) page. Refactor this content to be a +partial and use a table similar to the following to compare and contrast support between the platforms. 
+ +::: + +| **Azure Cloud** | **Palette Support** | **Palette VerteX Support** | +| ---------------------------------------------------------------------------------------------- | :-----------------: | :------------------------: | +| Azure Commercial (Public Cloud) | :white_check_mark: | :white_check_mark: | +| [Azure Government](https://azure.microsoft.com/en-us/explore/global-infrastructure/government) | :white_check_mark: | :white_check_mark: | + +### Content to be Refactored + +Palette VerteX supports the following infrastructure platforms for deploying Kubernetes clusters: + +| **Platform** | **Additional Information** | +| ------------------ | ------------------------------------------------------------------------- | +| **AWS** | Refer to the [AWS](#aws) section for additional guidance. | +| **AWS Gov** | Refer to the [AWS](#aws) section for additional guidance. | +| **Azure** | Refer to the [Azure](#azure) section for additional guidance. | +| **Azure Gov** | Refer to the [Azure](#azure) section for additional guidance. | +| **Dev Engine** | Refer to the VerteX Engine section for additional guidance. | +| **MAAS** | Canonical Metal-As-A-Service (MAAS) is available and supported in VerteX. | +| **Edge** | Edge deployments are supported in VerteX. | +| **VMware vSphere** | VMware vSphere is supported in VerteX. | + +Review the following tables for additional information about the supported platforms. + +:::info + +For guidance on how to deploy a Kubernetes cluster on a supported platform, refer to the +[Cluster](../../clusters/clusters.md) documentation. + +::: + +The term _IaaS_ refers to Palette using compute nodes that are not managed by a cloud provider, such as bare metal +servers or virtual machines. + +#### AWS + +VerteX supports the following AWS services. + +| **Service** | **AWS Gov Support?** | +| ----------- | -------------------- | +| **IaaS** | ✅ | +| **EKS** | ✅ | + +#### Azure + +VerteX supports the following Azure services. 
+ +| **Service** | **Azure Gov Support?** | +| ----------- | ---------------------- | +| **IaaS** | ✅ | +| **AKS** | ✅ | + +All Azure Government regions are supported with the exception of Department of Defense regions. Refer to the +[official Azure Government documentation](https://learn.microsoft.com/en-us/azure/azure-government/documentation-government-overview-dod) +to learn more about the available regions. + +#### Dev Engine + +VerteX supports the [Dev Engine](../../devx/devx.md) platform for deploying virtual clusters. However, the Dev Engine +platform is not FIPS compliant and requires you to enable the +[non-FIPS setting](../vertex/system-management/enable-non-fips-settings/enable-non-fips-settings.md). Additionally, +container deployment-based workflows are not supported for airgap environments. + +#### VMware vSphere + +The following versions of VMware vSphere are supported in VerteX. + +| **Version** | **Supported?** | +| ----------------- | -------------- | +| **vSphere 6.7U3** | ✅ | +| **vSphere 7.0** | ✅ | +| **vSphere 8.0** | ✅ | + +## Next Steps + +Get started with setting up self-hosted Palette on an existing +[Kubernetes cluster](./supported-environments/kubernetes/kubernetes.md), your +[VMware vSphere](./supported-environments/vmware/vmware.md) environment using the +[Palette CLI](../../automation/palette-cli/palette-cli.md), or your desired bare metal or data center environment with +the [Palette Management Appliance](./supported-environments/management-appliance/management-appliance.md) ISO. + +For guidance on managing an existing installation, refer to our +[System Management](./system-management/system-management.md) guide.
For upgrading an existing self-hosted installation, +consult the upgrade guide that aligns with your Palette installation method: +[Kubernetes (Helm chart)](./supported-environments/kubernetes/upgrade/upgrade.md), +[VMware vSphere (Palette CLI)](./supported-environments/vmware/upgrade/upgrade.md), or +[Palette Management Appliance (ISO)](./supported-environments/management-appliance/upgrade.md). diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/_category_.json similarity index 100% rename from docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/_category_.json rename to docs/docs-content/self-hosted-setup/palette/supported-environments/_category_.json diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/_category_.json similarity index 100% rename from docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/_category_.json rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/_category_.json diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/activate/_category_.json similarity index 100% rename from docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/_category_.json rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/activate/_category_.json diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/activate/activate.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/activate/activate.md new file mode 100644 index 00000000000..12eb26a72f3 
--- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/activate/activate.md @@ -0,0 +1,112 @@ +--- +sidebar_label: "Activate" +title: "Activate Self-Hosted Palette" +description: "Activate your self-hosted Palette installation." +icon: "" +hide_table_of_contents: false +tags: ["self-hosted", "account", "activate"] +keywords: ["self-hosted", "account", "activate"] +--- + +:::danger + +Convert to partials for reuse in other installation sections. + +::: + +Beginning with version 4.6.32, once you install Palette or upgrade to version 4.6.32 or later, you have 30 days to +activate it. During this time, you have unrestricted access to all of Palette's features. After 30 days, you can +continue to use Palette, and existing clusters will continue to run, but you cannot perform the following operations +until Palette is activated: + +- Create new clusters. + +- Modify the configuration of active clusters. This includes modifying + [cluster profile variables](../../../../../profiles/cluster-profiles/create-cluster-profiles/define-profile-variables/define-profile-variables.md); + changing [cluster profile versions](../../../../../clusters/cluster-management/cluster-updates.md#enablement); + editing, deleting, or replacing profile layers; and editing YAML files. + +- Update [node configurations](../../../../../clusters/cluster-management/node-pool.md), such as the node pool size. + +Each installation of Palette has a unique product ID and corresponding activation key. Activation keys are single-use +and valid for the entirety of the Palette installation, including all subsequent version upgrades. Once Palette is +activated, it does not need to be reactivated unless you need to reinstall Palette, at which time a new product ID will +be assigned, and a new activation key will be needed. Activation keys come at no additional cost and are included with +your purchase of Palette.
The activation process is the same for connected and airgapped installations, regardless of whether +Palette is installed via the [Palette CLI](../../../../../automation/palette-cli/palette-cli.md), +[Helm chart](../../kubernetes/install/install.md), or +[Management Appliance](../../management-appliance/management-appliance.md) ISO. + +If you are in trial mode or your trial has expired, Palette displays the appropriate banner on the **Summary** screen of +your system console, as well as at **Administration > Activation**. Trial mode and expired statuses are also displayed +in the Palette UI at the bottom of the left main menu. + + ![License status of expired on the left main menu](/enterprise-version_activate-installation_left-main-menu-status.webp) + +## Overview + +Below is an overview of the activation process. + + ![Diagram of the self-hosted system activation process](/enterprise-version_activate-installation_system-activation-diagram.webp) + +1. The system admin installs Palette or upgrades to version 4.6.32 or later. +2. Palette enters trial mode. During this time, you have 30 days to take advantage of all of Palette's features. After + 30 days, the trial expires, and Palette functionality is restricted. Any clusters that you have deployed will remain + functional, but you cannot perform + [day-2 operations](../../../../../clusters/cluster-management/cluster-management.md), and you cannot deploy + additional clusters. + +3. Before or after your trial expires, contact a Spectro Cloud customer support representative. You must specify whether + you are activating Palette or VerteX and also provide a short description of your instance, along with your + installation's product ID. + +4. Spectro Cloud provides the activation key. + +5. The system admin enters the activation key and activates Palette, allowing you to resume day-2 operations and deploy + additional clusters. + +## Prerequisites + +- A Palette subscription. 
+ +- A self-hosted instance of Palette that is not activated. For help installing Palette, check out our + [Installation](../install/install.md) guide. + +- Access to the [system console](../../../system-management/system-management.md#access-the-system-console). + +## Enablement + +1. Log in to the system console. For more information, refer to the + [Access the System Console](../../../system-management/system-management.md#access-the-system-console) guide. + +2. A banner is displayed on the **Summary** screen, alerting you that your product is either in trial mode or has + expired. On the banner, select **Activate Palette**. Alternatively, from the left main menu, select + **Administration > Activation**. + + ![Trial mode banner in the system console](/enterprise-version_activate-installation_trial-mode-banner.webp) + +3. The **Activation** tab of the **Administration** screen reiterates your product's status and displays your **Product + Setup ID**. Contact your customer support representative and provide them the following information: + + - Your installation type (Palette). + + - A short description of your instance. For example, `Spacetastic - Dev Team 1`. + + - Your instance's **Product Setup ID**. + +4. Your customer support representative will provide you an **Activation key**. The activation key is single-use and + cannot be used to activate another Palette or VerteX installation. +5. On the **Activation** tab, enter the **Activation key** and **Update** your settings. If the product ID and + activation key pair is correct, an activation successful message is displayed, and your banner is updated to state + that your license is active. + +## Validation + +You can view the status of your license from the system console. If your license is active, the license status is +removed from the left main menu of the Palette UI. + +1. Log in to the [system console](../../../system-management/system-management.md#access-the-system-console). + +2. 
The activation banner is no longer displayed on the **Summary** screen, indicating your license is active. Confirm + your license status by navigating to **Administration > Activation**. The banner states that **Your license is + active**. diff --git a/docs/docs-content/enterprise-version/system-management/account-management/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/install/_category_.json similarity index 100% rename from docs/docs-content/enterprise-version/system-management/account-management/_category_.json rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/install/_category_.json diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/install.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/install/airgap.md similarity index 97% rename from docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/install.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/install/airgap.md index 7dc5582437c..3de58926274 100644 --- a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/install.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/install/airgap.md @@ -1,24 +1,25 @@ --- -sidebar_label: "Install Palette" -title: "Install Airgap Self-Hosted Palette" -description: "Learn how to deploy self-hosted Palette to a Kubernetes cluster using a Helm Chart." +sidebar_label: "Install Airgap Palette" +title: "Install Airgap Palette on Kubernetes" +description: + "Learn how to deploy self-hosted Palette to a Kubernetes cluster using a Helm chart in an airgapped environment." 
icon: "" hide_table_of_contents: false -sidebar_position: 30 -tags: ["self-hosted", "enterprise", "airgap"] -keywords: ["self-hosted", "enterprise"] +sidebar_position: 20 +tags: ["self-hosted", "airgap", "kubernetes", "helm"] +keywords: ["self-hosted", "airgap", "kubernetes", "helm"] --- You can use the Palette Helm Chart to install Palette in a multi-node Kubernetes cluster in your airgap production environment. This installation method is common in secure environments with restricted network access that prohibits using Palette -SaaS. Review our [architecture diagrams](../../../../architecture/networking-ports.md) to ensure your Kubernetes cluster -has the necessary network connectivity for self-hosted Palette to operate successfully. +SaaS. Review our [architecture diagrams](../../../../../architecture/networking-ports.md) to ensure your Kubernetes +cluster has the necessary network connectivity for self-hosted Palette to operate successfully. :::warning -Complete the [Environment Setup](./kubernetes-airgap-instructions.md) steps before proceeding with the installation. +Complete the [Environment Setup](../setup/airgap/airgap.md) steps before proceeding with the installation. ::: @@ -35,8 +36,8 @@ Complete the [Environment Setup](./kubernetes-airgap-instructions.md) steps befo - Ensure `unzip` or a similar extraction utility is installed on your system. - The Kubernetes cluster must be set up on a supported version of Kubernetes. Refer to the - [Kubernetes Requirements](../../install-palette.md#kubernetes-requirements) section to find the version required for - your Palette installation. + [Kubernetes Requirements](./install.md#kubernetes-requirements) section to find the version required for your Palette + installation. - Ensure the Kubernetes cluster does not have Cert Manager installed. Palette requires a unique Cert Manager configuration to be installed as part of the installation process. 
If Cert Manager is already installed, you must @@ -51,7 +52,7 @@ Complete the [Environment Setup](./kubernetes-airgap-instructions.md) steps befo database user in Atlas. - We recommended the following resources for Palette. Refer to the - [Palette size guidelines](../../install-palette.md#size-guidelines) for additional sizing information. + [Palette size guidelines](./install.md#size-guidelines) for additional sizing information. - 8 CPUs per node. @@ -92,8 +93,8 @@ Complete the [Environment Setup](./kubernetes-airgap-instructions.md) steps befo certificate file in the base64 format. You will need this to enable Palette to communicate with the network proxy server. -- Access to the Palette Helm Charts. Refer to the [Access Palette](../../../enterprise-version.md#access-palette) for - instructions on how to request access to the Helm Chart. +- Access to the Palette Helm Charts. Refer to the [Access Palette](../../../palette.md#access-palette) for instructions + on how to request access to the Helm Chart. :::warning @@ -215,7 +216,7 @@ environment. Reach out to our support team if you need assistance. 8. Open the **values.yaml** file in the **spectro-mgmt-plane** folder with a text editor of your choice. The **values.yaml** file contains the default values for the Palette installation parameters. However, you must populate the following parameters before installing Palette. You can learn more about the parameters on the **values.yaml** - file on the [Helm Configuration Reference](../palette-helm-ref.md) page. + file on the [Helm Configuration Reference](../setup/airgap/helm-reference.md) page. Ensure you provide the proper `ociImageRegistry.mirrorRegistries` values if you are using a self-hosted OCI registry. You can find the placeholder string in the `ociImageRegistry` section of the **values.yaml** file. @@ -236,7 +237,7 @@ environment. Reach out to our support team if you need assistance. 
If you are installing Palette by pulling required images from a private mirror registry, you will need to provide the credentials to your registry in the **values.yaml** file. For more information, refer to - [Helm Configuration Reference](../palette-helm-ref.md#image-pull-secret). + [Helm Configuration Reference](../setup/airgap/helm-reference.md#image-pull-secret). ::: @@ -873,4 +874,10 @@ Use the following steps to validate the Palette installation. ## Next Steps - + diff --git a/docs/docs-content/enterprise-version/install-palette/install-palette.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/install/install.md similarity index 65% rename from docs/docs-content/enterprise-version/install-palette/install-palette.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/install/install.md index bcdb0659a4b..4a32ec45abd 100644 --- a/docs/docs-content/enterprise-version/install-palette/install-palette.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/install/install.md @@ -1,46 +1,58 @@ --- -sidebar_label: "Installation" -title: "Installation" -description: "Review Palette system requirements and learn more about the various install methods." +sidebar_label: "Install" +title: "Install Palette on Kubernetes" +description: "Review system requirements for installing self-hosted Palette on an existing Kubernetes cluster." icon: "" hide_table_of_contents: false -tags: ["palette", "self-hosted"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "install", "kubernetes", "helm"] +keywords: ["self-hosted", "install", "kubernetes", "helm"] --- +:::warning + +This is the former [Installation](https://docs.spectrocloud.com/enterprise-version/install-palette/) page. Leave only +what is applicable to Kubernetes. Convert to partials for reuse. + +::: + Palette is available as a self-hosted application that you install in your environment. 
Palette is available in the following modes. -| **Method** | **Supported Platforms** | **Description** | **Install Guide** | -| ---------------------------------------- | ------------------------ | --------------------------------------------------------------------- | ---------------------------------------------------------------------------- | -| Palette CLI | VMware | Install Palette in VMware environment. | [Install on VMware](install-on-vmware/install.md) | -| Helm Chart | Kubernetes | Install Palette using a Helm Chart in an existing Kubernetes cluster. | [Install on Kubernetes](install-on-kubernetes/install.md) | -| Palette Management Appliance | VMware, Bare Metal, MAAS | Install Palette using the Palette Management Appliance ISO file. | [Install with Palette Management Appliance](palette-management-appliance.md) | +| **Method** | **Supported Platforms** | **Description** | **Install Guide** | +| ---------------------------------------- | ------------------------ | --------------------------------------------------------------------- | ---------------------------------------------------------------------------------- | +| Palette CLI | VMware | Install Palette in VMware environment. | [Install on VMware](../../vmware/install/install.md) | +| Helm Chart | Kubernetes | Install Palette using a Helm Chart in an existing Kubernetes cluster. | Install on Kubernetes | +| Palette Management Appliance | VMware, Bare Metal, MAAS | Install Palette using the Palette Management Appliance ISO file. | [Install with Palette Management Appliance](../../management-appliance/install.md) | ## Airgap Installation You can also install Palette in an airgap environment. For more information, refer to the [Airgap Installation](./airgap.md) section. 
-| **Method** | **Supported Airgap Platforms** | **Description** | **Install Guide** | -| ---------------------------------------- | ------------------------------ | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- | -| Palette CLI | VMware | Install Palette in VMware environment using your own OCI registry server. | [VMware Airgap Install](./install-on-vmware/airgap-install/airgap-install.md) | -| Helm Chart | Kubernetes | Install Palette using a Helm Chart in an existing Kubernetes cluster with your own OCI registry server OR use AWS ECR. | [Kubernetes Airgap Install](./install-on-kubernetes/airgap-install/airgap-install.md) | -| Palette Management Appliance | VMware, Bare Metal, MAAS | Install Palette using the Palette Management Appliance ISO file. | [Install with Palette Management Appliance](palette-management-appliance.md) | +| **Method** | **Supported Airgap Platforms** | **Description** | **Install Guide** | +| ---------------------------------------- | ------------------------------ | ---------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- | +| Palette CLI | VMware | Install Palette in VMware environment using your own OCI registry server. | [VMware Airgap Install](../../vmware/install/airgap.md) | +| Helm Chart | Kubernetes | Install Palette using a Helm Chart in an existing Kubernetes cluster with your own OCI registry server OR use AWS ECR. | [Kubernetes Airgap Install](./airgap.md) | +| Palette Management Appliance | VMware, Bare Metal, MAAS | Install Palette using the Palette Management Appliance ISO file. 
| [Install with Palette Management Appliance](../../management-appliance/install.md) | The next sections provide sizing guidelines we recommend you review before installing Palette in your environment. ## Size Guidelines - + ## Kubernetes Requirements The following table presents the Kubernetes version corresponding to each Palette version for -[VMware](../../enterprise-version/install-palette/install-on-vmware/install-on-vmware.md) and -[Kubernetes](../../enterprise-version/install-palette/install-on-kubernetes/install-on-kubernetes.md) installations. +[VMware](../../vmware/install/install.md) and +Kubernetes installations. Additionally, for VMware installations, it provides the download URLs for the required Operating System and Kubernetes distribution OVA. @@ -63,15 +75,3 @@ distribution OVA. ## Proxy Requirements - -## Resources - -- [Install on VMware](install-on-vmware/install-on-vmware.md) - -- [Install on Kubernetes](install-on-kubernetes/install.md) - -- [Airgap Installation](./airgap.md) - -- [Architecture Diagram and Network Ports](../../architecture/networking-ports.md#self-hosted-network-communications-and-ports) - -- [Enterprise Install Troubleshooting](../../troubleshooting/enterprise-install.md) diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/install.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/install/non-airgap.md similarity index 96% rename from docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/install.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/install/non-airgap.md index 5be30d2e19b..1e90342bc3a 100644 --- a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/install.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/install/non-airgap.md @@ -1,12 +1,13 @@ --- -sidebar_label: "Non-Airgap Installation" -title: "Install Non-Airgap 
Self-Hosted Palette" -description: "Learn how to deploy self-hosted Palette to a Kubernetes cluster using a Helm Chart." +sidebar_label: "Install Non-Airgap Palette" +title: "Install Non-Airgap Palette on Kubernetes" +description: + "Learn how to deploy self-hosted Palette to a Kubernetes cluster using a Helm chart in a non-airgap environment." icon: "" hide_table_of_contents: false -sidebar_position: 10 -tags: ["self-hosted", "enterprise"] -keywords: ["self-hosted", "enterprise"] +sidebar_position: 30 +tags: ["self-hosted", "kubernetes", "helm"] +keywords: ["self-hosted", "kubernetes", "helm"] --- You can use the Palette Helm Chart to install Palette in a multi-node Kubernetes cluster in your production environment. @@ -24,8 +25,8 @@ You can use the Palette Helm Chart to install Palette in a multi-node Kubernetes - Ensure `unzip` or a similar extraction utility is installed on your system. - The Kubernetes cluster must be set up on a supported version of Kubernetes. Refer to the - [Kubernetes Requirements](../install-palette.md#kubernetes-requirements) section to find the version required for your - Palette installation. + [Kubernetes Requirements](./install.md#kubernetes-requirements) section to find the version required for your Palette + installation. - Ensure the Kubernetes cluster does not have Cert Manager installed. Palette requires a unique Cert Manager configuration to be installed as part of the installation process. If Cert Manager is already installed, you must @@ -40,7 +41,7 @@ You can use the Palette Helm Chart to install Palette in a multi-node Kubernetes database user in Atlas. - We recommended the following resources for Palette. Refer to the - [Palette size guidelines](../install-palette.md#size-guidelines) for additional sizing information. + [Palette size guidelines](./install.md#size-guidelines) for additional sizing information. - 8 CPUs per node. 
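The install flow in these hunks revolves around populating a **values.yaml** override file before running the Helm-based install. As a purely illustrative sketch of that shape — `ociImageRegistry.mirrorRegistries` is a parameter name quoted from these docs, while every other key and value below is a placeholder, not a documented chart parameter:

```yaml
# Illustrative sketch only. The authoritative parameter names live in the
# values.yaml bundled with the spectro-mgmt-plane chart; treat the keys and
# values below as placeholders unless the chart itself documents them.
config:
  env:
    rootDomain: "palette.example.com" # placeholder DNS record for the cluster
ociImageRegistry:
  # Parameter name taken from these docs; the example value is hypothetical.
  # Only required when pulling through a self-hosted OCI mirror registry.
  mirrorRegistries: "registry.example.com/mirror"
imagePullSecret:
  username: "registry-user" # hypothetical credential keys for a private mirror
  password: "REPLACE_ME"
```

Whatever the exact key names, the pattern is the same: edit the copy of values.yaml shipped with the chart rather than authoring a file from scratch, so unset parameters keep their defaults.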
@@ -82,10 +83,10 @@ You can use the Palette Helm Chart to install Palette in a multi-node Kubernetes server. - Ensure Palette has access to the required domains and ports. Refer to the - [Required Domains](../install-palette.md#proxy-requirements) section for more information. + [Required Domains](./install.md#proxy-requirements) section for more information. -- Access to the Palette Helm Charts. Refer to the [Access Palette](../../enterprise-version.md#access-palette) for - instructions on how to request access to the Helm Chart +- Access to the Palette Helm Charts. Refer to the [Access Palette](../../../palette.md#access-palette) for instructions + on how to request access to the Helm Chart :::warning @@ -134,7 +135,7 @@ your environment. Reach out to our support team if you need assistance. 4. Open the **values.yaml** in the **spectro-mgmt-plane** folder with a text editor of your choice. The **values.yaml** contains the default values for the Palette installation parameters, however, you must populate the following parameters before installing Palette. You can learn more about the parameters in the **values.yaml** file in the - [Helm Configuration Reference](palette-helm-ref.md) page. + [Helm Configuration Reference](../setup/non-airgap/helm-reference.md) page. | **Parameter** | **Description** | **Type** | | ----------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | @@ -147,7 +148,7 @@ your environment. Reach out to our support team if you need assistance. If you are installing Palette by pulling required images from a private mirror registry, you will need to provide the credentials to your registry in the **values.yaml** file. For more information, refer to - [Helm Configuration Reference](palette-helm-ref.md#image-pull-secret). 
+ [Helm Configuration Reference](../setup/non-airgap/helm-reference.md#image-pull-secret). ::: @@ -694,7 +695,7 @@ your environment. Reach out to our support team if you need assistance. ![Screenshot of the Palette system console showing Username and Password fields.](/palette_installation_install-on-vmware_palette-system-console.webp) 10. Log in to the system console using the following default credentials. Refer to the - [password requirements](../../system-management/account-management/credentials.md#password-requirements-and-security) + [password requirements](../../../system-management/account-management/credentials.md#password-requirements-and-security) documentation page to learn more about password requirements | **Parameter** | **Value** | @@ -705,19 +706,19 @@ your environment. Reach out to our support team if you need assistance. After login, you will be prompted to create a new password. Enter a new password and save your changes. You will be redirected to the Palette system console. Use the username `admin` and your new password to log in to the system console. You can create additional system administrator accounts and assign roles to users in the system console. - Refer to the [Account Management](../../system-management/account-management/account-management.md) documentation + Refer to the [Account Management](../../../system-management/account-management/account-management.md) documentation page for more information. 11. After login, a summary page is displayed. Palette is installed with a self-signed SSL certificate. To assign a different SSL certificate you must upload the SSL certificate, SSL certificate key, and SSL certificate authority files to Palette. You can upload the files using the Palette system console. Refer to the - [Configure HTTPS Encryption](../../system-management/ssl-certificate-management.md) page for instructions on how to - upload the SSL certificate files to Palette. 
+ [Configure HTTPS Encryption](../../../system-management/ssl-certificate-management.md) page for instructions on how + to upload the SSL certificate files to Palette. :::warning If you plan to deploy host clusters into different networks, you may require a reverse proxy. Check out the - [Configure Reverse Proxy](../../system-management/reverse-proxy.md) guide for instructions on how to configure a + [Configure Reverse Proxy](../../../system-management/reverse-proxy.md) guide for instructions on how to configure a reverse proxy for Palette. ::: @@ -787,4 +788,10 @@ Use the following steps to validate the Palette installation. ## Next Steps - + diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/kubernetes.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/kubernetes.md new file mode 100644 index 00000000000..b25540d0bef --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/kubernetes.md @@ -0,0 +1,23 @@ +--- +sidebar_label: "Kubernetes" +title: "Self-Hosted Palette on Kubernetes" +description: "Install self-hosted Palette on an existing Kubernetes cluster." +icon: "" +hide_table_of_contents: false +tags: ["self-hosted", "kubernetes"] +keywords: ["self-hosted", "kubernetes"] +--- + +Palette can be installed on Kubernetes with internet connectivity or in an airgap environment. When you install +Palette, a three-node cluster is created. You use a Helm chart provided by our support team to install Palette on +Kubernetes. Refer to [Access Palette](../../palette.md#access-palette) for instructions on requesting access to the Helm Chart. + +## Get Started + +Select the scenario and the corresponding guide to install Palette on Kubernetes. If you are installing Palette in an +airgap environment, refer to the environment preparation guide before installing Palette.
+ +| Scenario | Environment Preparation Guide | Install Guide | +| -------------------------------------------------------- | --------------------------------------------- | -------------------------------------------------- | +| Install Palette on Kubernetes with internet connectivity | None | [Install Instructions](./install/non-airgap.md) | +| Install Palette on Kubernetes in an airgap environment | [Environment Setup](./setup/airgap/airgap.md) | [Airgap Install Instructions](./install/airgap.md) | diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/_category_.json new file mode 100644 index 00000000000..988cdc1b69c --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/_category_.json @@ -0,0 +1,4 @@ +{ + "label": "Set Up", + "position": 0 +} diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/airgap/_category_.json similarity index 100% rename from docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/_category_.json rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/airgap/_category_.json diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/kubernetes-airgap-instructions.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/airgap/airgap.md similarity index 80% rename from docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/kubernetes-airgap-instructions.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/airgap/airgap.md index 2a3b80f6cf2..4de01890080 100644 --- 
a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/airgap-install/kubernetes-airgap-instructions.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/airgap/airgap.md @@ -1,29 +1,57 @@ --- -sidebar_label: "Environment Setup" -title: "Environment Setup" -description: "Learn how to prepare Palette for an airgap install" +sidebar_label: "Set Up Airgap Environment" +title: "Set Up Airgap Environment" +description: + "Set up your airgap environment in preparation to install self-hosted Palette on an existing Kubernetes cluster." icon: "" hide_table_of_contents: false -sidebar_position: 20 -tags: ["self-hosted", "enterprise", "airgap", "kubernetes"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "airgap", "kubernetes"] +keywords: ["self-hosted", "airgap", "kubernetes"] --- -![Overview diagram of the pre-install steps eager-load](/enterprise-version_air-gap-repo_k8s-overview-order-diagram-clean.webp) +You can install self-hosted Palette in an airgap Kubernetes environment. An airgap environment lacks direct access to +the internet and is intended for environments with strict security requirements. -This guide provides instructions on how to prepare your airgap environment before installing self-hosted Palette by -completing the required preparatory steps one through four, as shown in the diagram. +The installation process for an airgap environment is different due to the lack of internet access. Before the primary +Palette installation steps, you must download the following artifacts: -## Prepare for Airgap Installation +- Palette platform manifests and required platform packages. -Use the following steps to prepare your airgap environment for a Palette installation. +- Container images for core platform components and third-party dependencies. -:::tip +- Palette packs. -Carefully review the [prerequisites](#prerequisites) section before proceeding with the environment setup. 
Each -prerequisite listed is required for a successful installation. +The other significant change is that Palette's default public OCI registry is not used. Instead, a private OCI registry +is used to store images and packs. -::: +## Overview + +Before you can install Palette in an airgap environment, you must first set up your environment as outlined in the +following diagram. + +![An architecture diagram outlining the installation phases](/enterprise-version_air-gap-repo_k8s-points-overview-order-diagram.webp) + +1. In an environment with internet access, download the airgap setup binary from the URL provided by our support team. + The airgap setup binary is a self-extracting archive that contains the Palette platform manifests, images, and + required packs. The binary is single-use: it uploads the Palette images and packs to your OCI + registry, and you will not use it again after the initial installation. + +2. Move the airgap setup binary to the airgap environment and start it in a + Linux Virtual Machine (VM). + +3. The airgap setup binary extracts the manifest content and pushes the required images and packs to your private OCI registry. + +4. Install Palette using the Kubernetes Helm chart. + +## Supported Platforms + +The following table outlines the platforms supported for airgap Palette installation and the supported OCI registries. + +| **Platform** | **OCI Registry** | **Supported** | +| ------------ | ---------------- | ------------- | +| Kubernetes | Harbor | ✅ | +| Kubernetes | AWS ECR | ✅ | ## Prerequisites @@ -243,8 +271,8 @@ Complete the following steps before deploying the airgap Palette installation. 13. Review the additional packs available for download. The supplemental packs are optional and not required for a successful installation.
However, to create cluster profiles you may require several of the packs available for - download. Refer to the [Additional Packs](../../../../downloads/self-hosted-palette/additional-packs.md) resource - for a list of available packs. + download. Refer to the [Additional Packs](../../../../../../downloads/self-hosted-palette/additional-packs.md) + resource for a list of available packs. 14. Once you select the packs you want to install, download the pack binaries and start the binary to initiate the upload process. This step requires internet access, so you may have to download the binaries on a separate machine @@ -281,4 +309,4 @@ Use the following steps to validate the airgap setup process completed successfu ## Next Steps You are now ready to install the airgap self-hosted Palette. You will specify your OCI registry during the installation -process. Refer to the [Install Palette](./airgap-install.md) guide for detailed guidance on installing Palette. +process. Refer to the [Install Palette](../../install/airgap.md) guide for detailed guidance on installing Palette. 
diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/palette-helm-ref.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/airgap/helm-reference.md similarity index 97% rename from docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/palette-helm-ref.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/airgap/helm-reference.md index 3c3e0975680..33f2bbce09c 100644 --- a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/palette-helm-ref.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/airgap/helm-reference.md @@ -1,18 +1,24 @@ --- -sidebar_label: "Helm Configuration Reference" +sidebar_label: "Helm Chart Configuration Reference" title: "Helm Chart Configuration Reference" description: "Reference for Palette Helm Chart installation parameters." icon: "" hide_table_of_contents: false sidebar_position: 30 -tags: ["self-hosted", "enterprise"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "helm"] +keywords: ["self-hosted", "helm"] --- +:::danger + +Turn this page into partials for reuse across other self-hosted helm chart reference pages. + +::: + You can use the Palette Helm Chart to install Palette in a multi-node Kubernetes cluster in your production environment. The Helm chart allows you to customize values in the **values.yaml** file. This reference lists and describes parameters available in the **values.yaml** file from the Helm Chart for your installation. To learn how to install Palette using -the Helm Chart, refer to the [Palette Helm install](install.md) guide. +the Helm Chart, refer to the [Palette Helm install](../../install/airgap.md) guide. ### Required Parameters @@ -123,7 +129,7 @@ config: You can configure Palette to use Single Sign-On (SSO) for user authentication. Configure the SSO parameters to enable SSO for Palette. 
You can also configure different SSO providers for each tenant post-install, check out the -[SAML & SSO Setup](../../../user-management/saml-sso/saml-sso.md) documentation for additional guidance. +[SAML & SSO Setup](../../../../../../user-management/saml-sso/saml-sso.md) documentation for additional guidance. To configure SSO, you must provide the following parameters. @@ -151,7 +157,7 @@ config: ### Email Palette uses email to send notifications to users. The email notification is used when inviting new users to the -platform, password resets, and when [webhook alerts](../../../clusters/cluster-management/health-alerts.md) are +platform, password resets, and when [webhook alerts](../../../../../../clusters/cluster-management/health-alerts.md) are triggered. Use the following parameters to configure email settings for Palette. | **Parameters** | **Description** | **Type** | **Default value** | @@ -400,7 +406,7 @@ ingress: You can specify a reverse proxy server that clusters deployed through Palette can use to facilitate network connectivity to the cluster's Kubernetes API server. Host clusters deployed in private networks can use the pack to expose the cluster's Kubernetes API to downstream clients that are not in the same network. Check out the [Reverse -Proxy](../../system-management/reverse-proxy.md) documentation to learn more about setting up a reverse proxy server for +Proxy](../../../../system-management/reverse-proxy.md) documentation to learn more about setting up a reverse proxy server for Palette. | **Parameters** | **Description** | **Type** | **Default value** | @@ -475,7 +481,8 @@ reach-system: :::info Due to node affinity configurations, you must set `scheduleOnControlPlane: false` for managed clusters deployed to -[Azure AKS](../../../clusters/public-cloud/azure/aks.md), [AWS EKS](../../../clusters/public-cloud/aws/eks.md), and -[GCP GKE](../../../clusters/public-cloud/gcp/create-gcp-gke-cluster.md). 
+[Azure AKS](../../../../../../clusters/public-cloud/azure/aks.md), +[AWS EKS](../../../../../../clusters/public-cloud/aws/eks.md), and +[GCP GKE](../../../../../../clusters/public-cloud/gcp/create-gcp-gke-cluster.md). ::: diff --git a/docs/docs-content/enterprise-version/system-management/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/non-airgap/_category_.json similarity index 100% rename from docs/docs-content/enterprise-version/system-management/_category_.json rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/non-airgap/_category_.json diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/non-airgap/helm-reference.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/non-airgap/helm-reference.md new file mode 100644 index 00000000000..a5c38157477 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/non-airgap/helm-reference.md @@ -0,0 +1,488 @@ +--- +sidebar_label: "Helm Chart Configuration Reference" +title: "Helm Chart Configuration Reference" +description: "Reference for Palette Helm Chart installation parameters." +icon: "" +hide_table_of_contents: false +sidebar_position: 30 +tags: ["self-hosted", "helm"] +keywords: ["self-hosted", "helm"] +--- + +:::danger + +Turn this page into partials for reuse across other self-hosted helm chart reference pages. + +::: + +You can use the Palette Helm Chart to install Palette in a multi-node Kubernetes cluster in your production environment. +The Helm chart allows you to customize values in the **values.yaml** file. This reference lists and describes parameters +available in the **values.yaml** file from the Helm Chart for your installation. To learn how to install Palette using +the Helm Chart, refer to the [Palette Helm install](../../install/non-airgap.md) guide. 
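Once **values.yaml** is customized with the parameters described below, the installation itself comes down to a standard Helm install. The following is a sketch only: the chart archive name, release name, and namespace are placeholders, and you should use the chart archive and instructions provided by our support team.

```shell
# Sketch only: "hubble-latest.tgz", the release name "palette", and the
# namespace are placeholders; use the chart provided by our support team.
helm install palette ./hubble-latest.tgz \
  --namespace palette \
  --create-namespace \
  --values values.yaml
```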
+ +### Required Parameters + +The following parameters are required for a successful installation of Palette. + +| **Parameters** | **Description** | **Type** | +| ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------- | +| `config.env.rootDomain` | Used to configure the domain for the Palette installation. We recommend you create a CNAME DNS record that supports multiple subdomains. You can achieve this using a wildcard prefix, `*.palette.abc.com`. Review the [Environment parameters](#environment) to learn more. | String | +| `config.env.ociRegistry` or `config.env.ociEcrRegistry` | Specifies the FIPS image registry for Palette. You can use a self-hosted OCI registry or a public OCI registry we maintain and support. For more information, refer to the [Registry](#registries) section. | Object | + +:::warning + +If you are installing an air-gapped version of Palette, you must provide the image swap configuration. For more +information, refer to the [Image Swap Configuration](#image-swap-configuration) section. + +::: + +## Global + +The global block allows you to provide configurations that apply globally to the installation process. + +### Image Pull Secret + +The `imagePullSecret` block allows you to provide image pull secrets that will be used to authenticate with private +registries to obtain the images required for Palette installation. This is relevant if you have your own mirror +registries you use for Palette installation.
| **Parameters** | **Description** | **Type** | **Default value** | +| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ----------------- | +| `create` | Specifies whether to create a secret containing credentials to your own private image registry. | Boolean | `false` | +| `dockerConfigJson` | The **config.json** file value containing the registry URL and credentials for your image registry in base64 encoded format on a single line. For more information about the **config.json** file, refer to [Kubernetes Documentation](https://kubernetes.io/docs/concepts/containers/images/#config-json). | String | None | + +:::info + +To obtain the base64-encoded version of the credential `config.json` file, you can issue the following command. Replace +`<config.json file path>` with the path to your `config.json` file. The `tr -d '\n'` removes newline characters +and produces the output on a single line. + +```shell +cat <config.json file path> | base64 | tr -d '\n' +``` + +::: + +```yaml +global: + imagePullSecret: + create: true + dockerConfigJson: ewoJImF1dGhzHsKCQkiaG9va3......MiOiAidHJ1ZSIKCX0KfQ # Base64 encoded config.json +``` + +## MongoDB + +Palette uses MongoDB Enterprise as its internal database and supports two modes of deployment: + +- MongoDB Enterprise deployed and active inside the cluster. + +- MongoDB Enterprise hosted on a Software-as-a-Service (SaaS) platform, such as MongoDB Atlas. If you choose to use + MongoDB Atlas, ensure the MongoDB database has a user named `hubble` with the permission `readWriteAnyDatabase`. Refer + to the [Add a Database User](https://www.mongodb.com/docs/guides/atlas/db-user/) guide for guidance on how to create a + database user in Atlas.
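For a remote MongoDB Enterprise instance that you manage yourself, the `hubble` user can be created from the `mongosh` shell. This is a sketch only; the connection string, admin user, and password below are placeholders. Atlas users should instead follow the linked Add a Database User guide, since Atlas manages database users through its own interface.

```shell
# Sketch for a self-managed MongoDB Enterprise instance; the connection
# string, admin user, and password are placeholders.
mongosh "mongodb://<mongo-host>:27017/admin" --username <admin-user> --eval '
  db.createUser({
    user: "hubble",
    pwd: "<hubble-password>",
    roles: [ { role: "readWriteAnyDatabase", db: "admin" } ]
  })
'
```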
+ +The table below lists the parameters used to configure a MongoDB deployment. + +| **Parameters** | **Description** | **Type** | **Default value** | +| ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------------------------------------------- | +| `internal` | Specifies the MongoDB deployment either in-cluster or using Mongo Atlas. | Boolean | `true` | +| `databaseUrl` | The URL for MongoDB Enterprise. If using a remote MongoDB Enterprise instance, provide the remote URL. This parameter must be updated if `mongo.internal` is set to `false`. You also need to ensure the MongoDB database has a user named `hubble` with the permission `readWriteAnyDatabase`. Refer to the [Add a Database User](https://www.mongodb.com/docs/guides/atlas/db-user/) guide for guidance on how to create a database user in Atlas. | String | `mongo-0.mongo,mongo-1.mongo,mongo-2.mongo` | +| `databasePassword` | The base64-encoded MongoDB Enterprise password. If you don't provide a value, a random password will be auto-generated. | String | `""` | +| `replicas` | The number of MongoDB replicas to start. | Integer | `3` | +| `memoryLimit` | Specifies the memory limit for each MongoDB Enterprise replica. | String | `4Gi` | +| `cpuLimit` | Specifies the CPU limit for each MongoDB Enterprise member. | String | `2000m` | +| `pvcSize` | The storage settings for the MongoDB Enterprise database. Use increments of `5Gi` when specifying the storage size. The storage size applies to each replica instance. The total storage size for the cluster is `replicas` \* `pvcSize`. 
| String | `20Gi` | +| `storageClass` | The storage class for the MongoDB Enterprise database. | String | `""` | + +```yaml +mongo: + internal: true + databaseUrl: "mongo-0.mongo,mongo-1.mongo,mongo-2.mongo" + databasePassword: "" + replicas: 3 + cpuLimit: "2000m" + memoryLimit: "4Gi" + pvcSize: "20Gi" + storageClass: "" +``` + +## Config + +Review the following parameters to configure Palette for your environment. The `config` section contains the following +subsections: + +### Install Mode + +You can install Palette in connected or air-gapped mode. The table lists the parameters to configure the installation +mode. + +| **Parameters** | **Description** | **Type** | **Default value** | +| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ----------------- | +| `installMode` | Specifies the installation mode. Allowed values are `connected` or `airgap`. Set the value to `airgap` when installing in an air-gapped environment. | String | `connected` | + +```yaml +config: + installMode: "connected" +``` + +### SSO + +You can configure Palette to use Single Sign-On (SSO) for user authentication. Configure the SSO parameters to enable +SSO for Palette. You can also configure different SSO providers for each tenant post-install; check out the +[SAML & SSO Setup](../../../../../../user-management/saml-sso/saml-sso.md) documentation for additional guidance. + +To configure SSO, you must provide the following parameters. + +| **Parameters** | **Description** | **Type** | **Default value** | +| ------------------- | ------------------------------------------------------------------------- | -------- | --------------------------------- | +| `saml.enabled` | Specifies whether to enable SSO SAML configuration. | Boolean | `false` | +| `saml.acsUrlRoot` | The root URL of the Assertion Consumer Service (ACS).
| String | `myfirstpalette.spectrocloud.com` | +| `saml.acsUrlScheme` | The URL scheme of the ACS: `http` or `https`. | String | `https` | +| `saml.audienceUrl` | The URL of the intended audience for the SAML response. | String | `https://www.spectrocloud.com` | +| `saml.entityID` | The Entity ID of the Service Provider. | String | `https://www.spectrocloud.com` | +| `saml.apiVersion` | Specifies the SSO SAML API version to use. | String | `v1` | + +```yaml +config: + sso: + saml: + enabled: false + acsUrlRoot: "myfirstpalette.spectrocloud.com" + acsUrlScheme: "https" + audienceUrl: "https://www.spectrocloud.com" + entityId: "https://www.spectrocloud.com" + apiVersion: "v1" +``` + +### Email + +Palette uses email to send notifications to users. Email notifications are used when inviting new users to the +platform, when resetting passwords, and when [webhook alerts](../../../../../../clusters/cluster-management/health-alerts.md) are +triggered. Use the following parameters to configure email settings for Palette. + +| **Parameters** | **Description** | **Type** | **Default value** | +| ----------------------- | ---------------------------------------------------------------------------------------------- | -------- | -------------------------- | +| `enabled` | Specifies whether to enable email configuration. | Boolean | `false` | +| `emailID` | The email address for sending mail. | String | `noreply@spectrocloud.com` | +| `smtpServer` | The Simple Mail Transfer Protocol (SMTP) server used for sending mail. | String | `smtp.gmail.com` | +| `smtpPort` | The SMTP port used for sending mail. | Integer | `587` | +| `insecureSkipVerifyTLS` | Specifies whether to skip Transport Layer Security (TLS) verification for the SMTP connection. | Boolean | `true` | +| `fromEmailID` | The email address to use as the **_From_** address. | String | `noreply@spectrocloud.com` | +| `password` | The base64-encoded SMTP password when sending emails.
| String | `""` | + +```yaml +config: + email: + enabled: false + emailId: "noreply@spectrocloud.com" + smtpServer: "smtp.gmail.com" + smtpPort: 587 + insecureSkipVerifyTls: true + fromEmailId: "noreply@spectrocloud.com" + password: "" +``` + +### Environment + +The following parameters are used to configure the environment. + +| **Parameters** | **Description** | **Type** | **Default value** | +| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------- | ----------------- | +| `env.rootDomain` | Specifies the domain name assigned to Palette. The value assigned should have a Domain Name System (DNS) CNAME record mapped to the exposed IP address or the load balancer URL of the service _ingress-nginx-controller_. Optionally, if `ingress.ingressStaticIP` is provided with a value, you can use the same static IP address as the value for this parameter. | String | `""` | +| `env.installerMode` | Specifies the installer mode. Do not modify the value. | String | `self-hosted` | +| `env.installerCloud` | Specifies the cloud provider. Leave this parameter empty if you are installing a self-hosted Palette. | String | `""` | + +```yaml +config: + env: + rootDomain: "" +``` + +:::warning + +If Palette has only one tenant and you use local accounts with Single Sign-On (SSO) disabled, you can access Palette +using the IP address or any domain name that resolves to that IP. However, once you enable SSO, users must log in using +the tenant-specific subdomain. For example, if you create a tenant named `tenant1` and the domain name you assigned to +Palette is `palette.example.com`, the tenant URL will be `tenant1.palette.example.com`.
We recommend you create an +additional wildcard DNS record to map all tenant URLs to the Palette load balancer. For example, +`*.palette.example.com`. + +::: + +### Cluster + +Use the following parameters to configure the Kubernetes cluster. + +| **Parameters** | **Description** | **Type** | **Default value** | +| ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ----------------- | +| `stableEndpointAccess` | Set to `true` if the Kubernetes cluster is deployed in a public endpoint. If the cluster is deployed in a private network through a stable private endpoint, set to `false`. | Boolean | `false` | + +```yaml +config: + cluster: + stableEndpointAccess: false +``` + +## Registries + +Palette requires credentials to access the required Palette images. You can configure different types of registries for +Palette to download the required images. You must configure at least one Open Container Initiative (OCI) registry for +Palette. + +### OCI Registry + +Palette requires access to an OCI registry that contains all the required FIPS packs. You can host your own OCI registry +and configure Palette to reference the registry. Alternatively, you can use the public OCI registry that we provide. +Refer to the [`ociPackEcrRegistry`](#oci-ecr-registry) section to learn more about the publicly available OCI registry. + +| **Parameters** | **Description** | **Type** | **Default value** | +| ------------------------------------ | -------------------------------------------------------------------------------------------------- | -------- | ----------------- | +| `ociPackRegistry.endpoint` | The endpoint URL for the registry. | String | `""` | +| `ociPackRegistry.name` | The name of the registry. | String | `""` | +| `ociPackRegistry.password` | The base64-encoded password for the registry. 
| String | `""` | +| `ociPackRegistry.username` | The username for the registry. | String | `""` | +| `ociPackRegistry.baseContentPath` | The base path for the registry. | String | `""` | +| `ociPackRegistry.insecureSkipVerify` | Specifies whether to skip Transport Layer Security (TLS) verification for the registry connection. | Boolean | `false` | +| `ociPackRegistry.caCert` | The registry's base64-encoded certificate authority (CA) certificate. | String | `""` | + +```yaml +config: + ociPackRegistry: + endpoint: "" + name: "" + password: "" + username: "" + baseContentPath: "" + insecureSkipVerify: false + caCert: "" +``` + +### OCI ECR Registry + +We expose a public OCI ECR registry that you can configure Palette to reference. If you want to host your own OCI +registry, refer to the [OCI Registry](#oci-registry) section. The OCI Elastic Container Registry (ECR) is hosted in an +AWS ECR registry. Our support team provides the credentials for the OCI ECR registry. + +| **Parameters** | **Description** | **Type** | **Default value** | +| --------------------------------------- | -------------------------------------------------------------------------------------------------- | -------- | ----------------- | +| `ociPackEcrRegistry.endpoint` | The endpoint URL for the registry. | String | `""` | +| `ociPackEcrRegistry.name` | The name of the registry. | String | `""` | +| `ociPackEcrRegistry.accessKey` | The base64-encoded access key for the registry. | String | `""` | +| `ociPackEcrRegistry.secretKey` | The base64-encoded secret key for the registry. | String | `""` | +| `ociPackEcrRegistry.baseContentPath` | The base path for the registry. | String | `""` | +| `ociPackEcrRegistry.isPrivate` | Specifies whether the registry is private. | Boolean | `true` | +| `ociPackEcrRegistry.insecureSkipVerify` | Specifies whether to skip Transport Layer Security (TLS) verification for the registry connection. 
| Boolean | `false` | +| `ociPackEcrRegistry.caCert` | The registry's base64-encoded certificate authority (CA) certificate. | String | `""` | + +```yaml +config: + ociPackEcrRegistry: + endpoint: "" + name: "" + accessKey: "" + secretKey: "" + baseContentPath: "" + isPrivate: true + insecureSkipVerify: false + caCert: "" +``` + +### OCI Image Registry + +You can specify an OCI registry for the images used by Palette. + +| **Parameters** | **Description** | **Type** | **Default value** | +| ------------------------------------- | -------------------------------------------------------------------------------------------------- | -------- | ----------------- | +| `ociImageRegistry.endpoint` | The endpoint URL for the registry. | String | `""` | +| `ociImageRegistry.name` | The name of the registry. | String | `""` | +| `ociImageRegistry.password` | The password for the registry. | String | `""` | +| `ociImageRegistry.username` | The username for the registry. | String | `""` | +| `ociImageRegistry.baseContentPath` | The base path for the registry. | String | `""` | +| `ociImageRegistry.insecureSkipVerify` | Specifies whether to skip Transport Layer Security (TLS) verification for the registry connection. | Boolean | `false` | +| `ociImageRegistry.caCert` | The registry's base64-encoded certificate authority (CA) certificate. | String | `""` | +| `ociImageRegistry.mirrorRegistries` | A comma-separated list of mirror registries. | String | `""` | + +```yaml +config: + ociImageRegistry: + endpoint: "" + name: "" + password: "" + username: "" + baseContentPath: "" + insecureSkipVerify: false + caCert: "" + mirrorRegistries: "" +``` + +### Image Swap Configuration + +You can configure Palette to use image swap to download the required images. This is an advanced configuration option, +and it is only required for air-gapped deployments. You must also install the Palette Image Swap Helm chart to use this +option, otherwise, Palette will ignore the configuration. 
+ +| **Parameters** | **Description** | **Type** | **Default value** | +| ------------------------------ | ----------------------------------------------------------------------------------------------------------------------- | -------- | ------------------------------------------------------------------------------------- | +| `imageSwapInitImage` | The image swap init image. | String | `gcr.io/spectro-images-public/release/thewebroot/imageswap-init:v1.5.3-spectro-4.5.1` | +| `imageSwapImage` | The image swap image. | String | `gcr.io/spectro-images-public/release/thewebroot/imageswap:v1.5.3-spectro-4.5.1` | +| `imageSwapConfig` | The image swap configuration for specific environments. | String | `""` | +| `imageSwapConfig.isEKSCluster` | Specifies whether the cluster is an Amazon EKS cluster. Set to `false` if the Kubernetes cluster is not an EKS cluster. | Boolean | `true` | + +```yaml +config: + imageSwapImages: + imageSwapInitImage: "gcr.io/spectro-images-public/release/thewebroot/imageswap-init:v1.5.3-spectro-4.5.1" + imageSwapImage: "gcr.io/spectro-images-public/release/thewebroot/imageswap:v1.5.3-spectro-4.5.1" + + imageSwapConfig: + isEKSCluster: true +``` + +## gRPC + +gRPC is used for communication between Palette components. You can enable the deployment of an additional load balancer +for gRPC. Host clusters deployed by Palette use the load balancer to communicate with the Palette control plane. This is +an advanced configuration option, and it is not required for most deployments. Speak with your support representative +before enabling this option. + +If you want to use an external gRPC endpoint, you must provide a custom domain name for the gRPC endpoint and a valid +x509 certificate. A CNAME DNS record must point to the +IP address of the gRPC load balancer.
For example, if your Palette domain name is `palette.example.com`, you could +create a CNAME DNS record for `grpc.palette.example.com` that points to the IP address of the load balancer dedicated to +gRPC. + +| **Parameters** | **Description** | **Type** | **Default value** | +| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------- | ----------------- | +| `external` | Specifies whether to use an external gRPC endpoint. | Boolean | `false` | +| `endpoint` | The gRPC endpoint. | String | `""` | +| `annotations` | A map of key-value pairs that specifies load balancer annotations for gRPC. You can use annotations to change the behavior of the load balancer and the gRPC configuration. This field is considered an advanced setting. We recommend you consult with your assigned support team representative before making changes. | Object | `{}` | +| `grpcStaticIP` | Specify a static IP address for the gRPC load balancer service. If the field is empty, a dynamic IP address will be assigned to the load balancer. | String | `""` | +| `caCertificateBase64` | The base64-encoded Certificate Authority (CA) certificate for the gRPC endpoint. | String | `""` | +| `serverCrtBase64` | The base64-encoded server certificate for the gRPC endpoint. | String | `""` | +| `serverKeyBase64` | The base64-encoded server key for the gRPC endpoint. | String | `""` | +| `insecureSkipVerify` | Specifies whether to skip Transport Layer Security (TLS) verification for the gRPC endpoint. 
| Boolean | `false` | + +```yaml +grpc: + external: false + endpoint: "" + annotations: {} + grpcStaticIP: "" + caCertificateBase64: "" + serverCrtBase64: "" + serverKeyBase64: "" + insecureSkipVerify: false +``` + +## Ingress + +Palette deploys an Nginx Ingress Controller. This controller is used to route traffic to the Palette control plane. You +can change the default behavior and omit the deployment of an Nginx Ingress Controller. + +| **Parameters** | **Description** | **Type** | **Default value** | +| -------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ----------------- | +| `enabled` | Specifies whether to deploy an Nginx controller. Set to `false` if you do not want an Nginx controller deployed. | Boolean | `true` | +| `ingress.internal` | Specifies whether to deploy a load balancer or use the host network. | Boolean | `false` | +| `ingress.certificate` | Specify the base64-encoded x509 SSL certificate for the Nginx Ingress Controller. If left blank, the Nginx Ingress Controller will generate a self-signed certificate. | String | `""` | +| `ingress.key` | Specify the base64-encoded x509 SSL certificate key for the Nginx Ingress Controller. | String | `""` | +| `ingress.annotations` | A map of key-value pairs that specifies load balancer annotations for ingress. You can use annotations to change the behavior of the load balancer and the Nginx configuration. This is an advanced setting. We recommend you consult with your assigned support team representative prior to modification. | Object | `{}` | +| `ingress.ingressStaticIP` | Specify a static IP address for the ingress load balancer service. If empty, a dynamic IP address will be assigned to the load balancer. 
| String | `""` | +| `ingress.terminateHTTPSAtLoadBalancer` | Specifies whether to terminate HTTPS at the load balancer. | Boolean | `false` | + +```yaml +ingress: + enabled: true + ingress: + internal: false + certificate: "" + key: "" + annotations: {} + ingressStaticIP: "" + terminateHTTPSAtLoadBalancer: false +``` + +## Spectro Proxy + + +You can specify a reverse proxy server that clusters deployed through Palette can use to facilitate network connectivity +to the cluster's Kubernetes API server. Host clusters deployed in private networks can use the pack to expose the cluster's Kubernetes API to downstream clients that are not in the same network. Check out the [Reverse +Proxy](../../../../system-management/reverse-proxy.md) documentation to learn more about setting up a reverse proxy server for +Palette. + +| **Parameters** | **Description** | **Type** | **Default value** | +| ----------------- | -------------------------------------------------------------------------------------------- | -------- | ----------------- | +| `frps.enabled` | Specifies whether to enable the Spectro server-side proxy. | Boolean | `false` | +| `frps.frpHostURL` | The Spectro server-side proxy URL. | String | `""` | +| `frps.server.crt` | The base64-encoded server certificate for the Spectro server-side proxy. | String | `""` | +| `frps.server.key` | The base64-encoded server key for the Spectro server-side proxy. | String | `""` | +| `frps.ca.crt` | The base64-encoded certificate authority (CA) certificate for the Spectro server-side proxy. | String | `""` | + +```yaml +frps: + frps: + enabled: false + frpHostURL: "" + server: + crt: "" + key: "" + ca: + crt: "" +``` + +## UI System + +The table lists parameters to configure the Palette User Interface (UI) behavior. You can disable the UI or the Network +Operations Center (NOC) UI. You can also specify the MapBox access token and style layer ID for the NOC UI. 
MapBox is a +third-party service that provides mapping and location services. To learn more about MapBox and how to obtain an access +token, refer to the [MapBox Access tokens](https://docs.mapbox.com/help/getting-started/access-tokens) guide. + +| **Parameters** | **Description** | **Type** | **Default value** | +| ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ----------------- | +| `enabled` | Specifies whether to enable the Palette UI. | Boolean | `true` | +| `ui.nocUI.enable` | Specifies whether to enable the Palette Network Operations Center (NOC) UI. Enabling this parameter requires the `ui.nocUI.mapBoxAccessToken`. Once enabled, all cluster locations will be reported to MapBox. This feature is not FIPS compliant. | Boolean | `false` | +| `ui.nocUI.mapBoxAccessToken` | The MapBox access token for the Palette NOC UI. | String | `""` | +| `ui.nocUI.mapBoxStyledLayerID` | The MapBox style layer ID for the Palette NOC UI. | String | `""` | + +```yaml +ui-system: + enabled: true + ui: + nocUI: + enable: false + mapBoxAccessToken: "" + mapBoxStyledLayerID: "" +``` + +## Reach System + +You can configure Palette to use a proxy server to access the internet. Set the parameter `reach-system.enabled` to +`true` to enable the proxy server. Proxy settings are configured in the `reach-system.proxySettings` section. + +| **Parameters** | **Description** | **Type** | **Default value** | +| --------------------------------------- | ----------------------------------------------------------------------------------- | -------- | ----------------- | +| `reachSystem.enabled` | Specifies whether to enable the usage of a proxy server for Palette. | Boolean | `false` | +| `reachSystem.proxySettings.http_proxy` | The HTTP proxy server URL. 
| String   | `""`              |
+| `reachSystem.proxySettings.https_proxy` | The HTTPS proxy server URL.                                                      | String   | `""`              |
+| `reachSystem.proxySettings.no_proxy`    | A list of hostnames or IP addresses that should not go through the proxy server. | String   | `""`              |
+| `reachSystem.proxySettings.ca_crt_path` | The base64-encoded certificate authority (CA) certificate of the proxy server.   | String   | `""`              |
+| `reachSystem.scheduleOnControlPlane`    | Specifies whether to schedule the reach system on the control plane.             | Boolean  | `true`            |
+
+```yaml
+reach-system:
+  enabled: false
+  proxySettings:
+    http_proxy: ""
+    https_proxy: ""
+    no_proxy: ""
+    ca_crt_path: ""
+  scheduleOnControlPlane: true
+```
+
+:::info
+
+Due to node affinity configurations, you must set `scheduleOnControlPlane: false` for managed clusters deployed to
+[Azure AKS](../../../../../../clusters/public-cloud/azure/aks.md),
+[AWS EKS](../../../../../../clusters/public-cloud/aws/eks.md), and
+[GCP GKE](../../../../../../clusters/public-cloud/gcp/create-gcp-gke-cluster.md).
+
+:::
diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/non-airgap/non-airgap.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/non-airgap/non-airgap.md
new file mode 100644
index 00000000000..6740e418097
--- /dev/null
+++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/setup/non-airgap/non-airgap.md
@@ -0,0 +1,18 @@
+---
+sidebar_label: "Set Up Non-Airgap Environment"
+title: "Set Up Non-Airgap Environment"
+description:
+  "No prior setup is needed when installing self-hosted Palette on a Kubernetes cluster with internet connectivity."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 20
+tags: ["self-hosted", "kubernetes", "non-airgap"]
+keywords: ["self-hosted", "kubernetes", "non-airgap"]
+---
+
+:::info
+
+No prior setup is necessary for non-airgap installations. For system prerequisites, refer to the installation
+Prerequisites. 
+ +::: diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/uninstall.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/uninstall/uninstall.md similarity index 93% rename from docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/uninstall.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/uninstall/uninstall.md index 33b3a886f9b..22285fb3fa3 100644 --- a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/uninstall.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/uninstall/uninstall.md @@ -1,12 +1,12 @@ --- -sidebar_label: "Uninstallation" -title: "Uninstall Palette" -description: "Learn how to uninstall a Palette installation from your cluster using Helm charts." +sidebar_label: "Uninstall" +title: "Uninstall Palette from Kubernetes" +description: "Uninstall self-hosted Palette from your Kubernetes cluster using Helm charts." icon: "" hide_table_of_contents: false sidebar_position: 40 -tags: ["self-hosted", "enterprise"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "uninstall", "kubernetes", "helm"] +keywords: ["self-hosted", "uninstall", "kubernetes", "helm"] --- To uninstall Palette from your cluster, you need to uninstall Palette management plane and Cert Manager. 
Optionally, you diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/_category_.json new file mode 100644 index 00000000000..c3460c6dbde --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 30 +} diff --git a/docs/docs-content/enterprise-version/upgrade/upgrade-k8s/airgap.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/airgap.md similarity index 94% rename from docs/docs-content/enterprise-version/upgrade/upgrade-k8s/airgap.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/airgap.md index 689484f3b4e..e13df89f39b 100644 --- a/docs/docs-content/enterprise-version/upgrade/upgrade-k8s/airgap.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/airgap.md @@ -1,11 +1,11 @@ --- -sidebar_label: "Airgap" -title: "Upgrade Airgap Palette Installed with Kubernetes" -description: "Learn how to upgrade self-hosted airgap Palette." +sidebar_label: "Upgrade Airgap Palette" +title: "Upgrade Airgap Palette on Kubernetes" +description: "Upgrade a self-hosted, airgapped Palette instance installed on a Kubernetes cluster." icon: "" -sidebar_position: 10 -tags: ["palette", "self-hosted", "airgap", "kubernetes", "upgrade"] -keywords: ["self-hosted", "enterprise", "airgap", "kubernetes"] +sidebar_position: 20 +tags: ["self-hosted", "airgap", "kubernetes", "upgrade", "helm"] +keywords: ["self-hosted", "airgap", "kubernetes", "upgrade", "helm"] --- This guide takes you through the process of upgrading a self-hosted airgap Palette instance installed on Kubernetes. 
@@ -13,14 +13,14 @@ This guide takes you through the process of upgrading a self-hosted airgap Palet :::warning Before upgrading Palette to a new major version, you must first update it to the latest patch version of the latest -minor version available. Refer to the [Supported Upgrade Paths](../upgrade.md#supported-upgrade-paths) section for +minor version available. Refer to the [Supported Upgrade Paths](./upgrade.md#supported-upgrade-paths) section for details. ::: If your setup includes a PCG, you must also -[allow the PCG to upgrade automatically](../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or minor -Palette upgrade. +[allow the PCG to upgrade automatically](../../../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or +minor Palette upgrade. ## Prerequisites @@ -30,7 +30,7 @@ Palette upgrade. - An OCI registry such as [Harbor](https://goharbor.io/) or [AWS ECR](https://aws.amazon.com/ecr/) configured and available to store the new Palette images and packs. -- Access to the latest Palette airgap setup binary. Refer to [Access Palette](/enterprise-version/#access-palette) for +- Access to the latest Palette airgap setup binary. Refer to [Access Palette](../../../palette.md#access-palette) for more details. - [`kubectl`](https://kubernetes.io/docs/tasks/tools/#kubectl) and [`helm`](https://helm.sh/docs/intro/install/) @@ -42,12 +42,12 @@ Palette upgrade. - `unzip` or a similar tool available in your system. -- Access to the latest Palette Helm Chart. Refer to [Access Palette](/enterprise-version/#access-palette) for more +- Access to the latest Palette Helm Chart. Refer to [Access Palette](../../../palette.md#access-palette) for more details. - The Kubernetes cluster must be set up on a version of Kubernetes that is compatible to your upgraded version. Refer to - the [Kubernetes Requirements](../../install-palette/install-palette.md#kubernetes-requirements) section to find the - version required for your Palette installation. 
+ the [Kubernetes Requirements](../install/install.md#kubernetes-requirements) section to find the version required for + your Palette installation. ## Upgrade @@ -233,8 +233,8 @@ Palette upgrade. -7. Refer to the [Additional Packs](../../../downloads/self-hosted-palette/additional-packs.md) page and update the - packages you are currently using. You must update each package separately. +7. Refer to the [Additional Packs](../../../../../downloads/self-hosted-palette/additional-packs.md) page and update + the packages you are currently using. You must update each package separately. :::info @@ -302,8 +302,7 @@ Palette upgrade. 12. Prepare the Palette configuration file `values.yaml`. If you saved `values.yaml` used during the Palette installation, you can reuse it for the upgrade. Alternatively, follow the - [Kubernetes Installation Instructions](../../install-palette/install-on-kubernetes/install.md) to populate your - `values.yaml`. + [Kubernetes Installation Instructions](../install/airgap.md) to populate your `values.yaml`. :::warning diff --git a/docs/docs-content/enterprise-version/upgrade/upgrade-k8s/non-airgap.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/non-airgap.md similarity index 90% rename from docs/docs-content/enterprise-version/upgrade/upgrade-k8s/non-airgap.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/non-airgap.md index 0948a2605f4..e73bc3b490d 100644 --- a/docs/docs-content/enterprise-version/upgrade/upgrade-k8s/non-airgap.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/non-airgap.md @@ -1,11 +1,11 @@ --- -sidebar_label: "Non-airgap" -title: "Upgrade Palette Installed with Kubernetes" -description: "Learn how to upgrade self-hosted non-airgap Palette with Helm and Kubernetes." 
+sidebar_label: "Upgrade Non-Airgap Palette" +title: "Upgrade Non-Airgap Palette on Kubernetes" +description: "Upgrade a self-hosted, non-airgap Palette instance installed on a Kubernetes cluster." icon: "" -sidebar_position: 0 -tags: ["palette", "self-hosted", "non-airgap", "kubernetes", "management", "upgrades"] -keywords: ["self-hosted", "enterprise"] +sidebar_position: 30 +tags: ["self-hosted", "non-airgap", "kubernetes", "upgrade", "helm"] +keywords: ["self-hosted", "non-airgap", "kubernetes", "upgrade", "helm"] --- This guide takes you through the process of upgrading a self-hosted Palette instance installed with Helm on Kubernetes. @@ -13,14 +13,14 @@ This guide takes you through the process of upgrading a self-hosted Palette inst :::warning Before upgrading Palette to a new major version, you must first update it to the latest patch version of the latest -minor version available. Refer to the [Supported Upgrade Paths](../upgrade.md#supported-upgrade-paths) section for +minor version available. Refer to the [Supported Upgrade Paths](./upgrade.md#supported-upgrade-paths) section for details. ::: If your setup includes a PCG, you must also -[allow the PCG to upgrade automatically](../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or minor -Palette upgrade. +[allow the PCG to upgrade automatically](../../../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or +minor Palette upgrade. ## Prerequisites @@ -33,12 +33,12 @@ Palette upgrade. - `unzip` or a similar tool available in your system. -- Access to the latest Palette Helm Chart. Refer to [Access Palette](/enterprise-version/#access-palette) for more +- Access to the latest Palette Helm Chart. Refer to [Access Palette](../../../palette.md#access-palette) for more details. - The Kubernetes cluster must be set up on a version of Kubernetes that is compatible to your upgraded version. 
Refer to - the [Kubernetes Requirements](../../install-palette/install-palette.md#kubernetes-requirements) section to find the - version required for your Palette installation. + the [Kubernetes Requirements](../install/install.md#kubernetes-requirements) section to find the version required for + your Palette installation. ## Upgrade @@ -83,8 +83,7 @@ match your environment. 4. Prepare the Palette configuration file `values.yaml`. If you saved `values.yaml` used during the Palette installation, you can reuse it for the upgrade. Alternatively, follow the - [Kubernetes Installation Instructions](../../install-palette/install-on-kubernetes/install.md) to populate your - `values.yaml`. + [Kubernetes Installation Instructions](../install/non-airgap.md) to populate your `values.yaml`. :::warning diff --git a/docs/docs-content/enterprise-version/upgrade/upgrade.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/upgrade.md similarity index 94% rename from docs/docs-content/enterprise-version/upgrade/upgrade.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/upgrade.md index 282858ff563..60993f81686 100644 --- a/docs/docs-content/enterprise-version/upgrade/upgrade.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/upgrade.md @@ -1,35 +1,50 @@ --- sidebar_label: "Upgrade" -title: "Palette Upgrade" -description: "Upgrade notes for specific Palette versions." +title: "Upgrade Palette on Kubernetes" +description: "Upgrade self-hosted Palette installed on a Kubernetes cluster." 
icon: "" hide_table_of_contents: false -sidebar_position: 100 -tags: ["palette", "self-hosted", "upgrade"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "helm", "kubernetes", "upgrade"] +keywords: ["self-hosted", "helm", "kubernetes", "upgrade"] --- +:::danger + +The below content is from the former [Palette Upgrade](https://docs.spectrocloud.com/enterprise-version/upgrade/) page. +Convert to partials and refactor where necessary. + +::: + This page offers links and reference information for upgrading self-hosted Palette instances. If you have questions or concerns, [reach out to our support team](http://support.spectrocloud.io/). :::tip -If you are using Palette VerteX, refer to the [VerteX Upgrade](../../vertex/upgrade/upgrade.md) page for upgrade -guidance. +If you are using Palette VerteX, refer to the +[VerteX Upgrade](../../../../vertex/supported-environments/kubernetes/upgrade/upgrade.md) page for upgrade guidance. ::: ### Private Cloud Gateway If your setup includes a PCG, make sure to -[allow the PCG to upgrade automatically](../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or minor -Palette upgrade. +[allow the PCG to upgrade automatically](../../../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or +minor Palette upgrade. + +## Upgrade Notes + +Refer to the following known issues before upgrading: + +- Upgrading self-hosted Palette or Palette VerteX from version 4.6.x to 4.7.x can cause the upgrade to hang if any + member of the MongoDB ReplicaSet is not fully synced and in a healthy state prior to the upgrade. For guidance on + verifying the health status of MongoDB ReplicaSet members, refer to our + [Troubleshooting](../../../../../troubleshooting/palette-upgrade.md#self-hosted-palette-or-palette-vertex-upgrade-hangs) + guide. 
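The MongoDB ReplicaSet health criterion described in the upgrade note can be sketched programmatically. This is an
illustrative sketch only, not part of the documented procedure: it assumes the standard `rs.status()` member fields
(`members[].stateStr` and `members[].health`), and the `kubectl`/`mongosh` invocation in the comment is a placeholder
whose pod and namespace names depend on your installation.

```python
import json

def replicaset_healthy(status: dict) -> bool:
    """Return True only when every ReplicaSet member reports health == 1 and
    is in a fully synced state (PRIMARY or SECONDARY)."""
    members = status.get("members", [])
    if not members:
        return False
    synced_states = {"PRIMARY", "SECONDARY"}
    return all(
        member.get("health") == 1 and member.get("stateStr") in synced_states
        for member in members
    )

# Sample rs.status()-style document. In a real check, capture the actual output
# from your MongoDB pod, for example (pod and namespace names vary):
#   kubectl exec <mongo-pod> --namespace <namespace> -- mongosh --eval "rs.status()"
sample = json.loads("""
{
  "set": "rs0",
  "members": [
    {"name": "mongo-0", "stateStr": "PRIMARY", "health": 1},
    {"name": "mongo-1", "stateStr": "SECONDARY", "health": 1},
    {"name": "mongo-2", "stateStr": "RECOVERING", "health": 1}
  ]
}
""")

# mongo-2 is still syncing (RECOVERING), so the set is not yet safe to upgrade.
print(replicaset_healthy(sample))
```

A member counts as fully synced only in the `PRIMARY` or `SECONDARY` state; transitional states such as `STARTUP` or
`RECOVERING` indicate the member has not finished syncing.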
## Supported Upgrade Paths -Refer to the following tables for the supported self-hosted Palette upgrade paths for -[VMware](../install-palette/install-on-vmware/install-on-vmware.md) and -[Kubernetes](../install-palette/install-on-kubernetes/install-on-kubernetes.md) installations. +Refer to the following tables for the supported upgrade paths for self-hosted Palette environments installed on a +[Kubernetes](../kubernetes.md) cluster. :::danger @@ -38,15 +53,6 @@ minor version available. ::: -:::warning - -Upgrading self-hosted Palette or Palette VerteX from version 4.6.x to 4.7.x can cause the upgrade to hang if any member -of the MongoDB ReplicaSet is not fully synced and in a healthy state prior to the upgrade. For guidance on verifying the -health status of MongoDB ReplicaSet members, refer to our -[Troubleshooting](../../troubleshooting/palette-upgrade.md#self-hosted-palette-or-palette-vertex-upgrade-hangs) guide. - -::: - @@ -522,14 +528,3 @@ health status of MongoDB ReplicaSet members, refer to our - -## Upgrade Guides - -Refer to the respective guide for guidance on upgrading your self-hosted Palette instance. 
-
-- [Upgrade Notes](upgrade-notes.md)
-- [Non-Airgap VMware](upgrade-vmware/non-airgap.md)
-- [Airgap VMware](upgrade-vmware/airgap.md)
-- [Non-Airgap Kubernetes](upgrade-k8s/non-airgap.md)
-- [Airgap Kubernetes](upgrade-k8s/airgap.md)
-- [Palette Management Appliance](palette-management-appliance.md)
diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/_category_.json
similarity index 100%
rename from docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/_category_.json
rename to docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/_category_.json
diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/activate.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/activate.md
new file mode 100644
index 00000000000..48b968116d7
--- /dev/null
+++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/activate.md
@@ -0,0 +1,119 @@
+---
+sidebar_label: "Activate"
+title: "Activate Self-Hosted Palette"
+description: "Activate your self-hosted Palette installation."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 40
+tags: ["self-hosted", "account", "activate"]
+keywords: ["self-hosted", "account", "activate"]
+---
+
+:::preview
+
+This is a Tech Preview feature and is subject to change. Upgrades from a Tech Preview deployment may not be available.
+Do not use this feature in production workloads.
+
+:::
+
+:::danger
+
+Convert to partials for reuse in other installation sections.
+
+:::
+
+Beginning with version 4.6.32, after you install or upgrade Palette, you have 30 days to
+activate it. During this time, you have unrestricted access to all of Palette's features. 
After 30 days, you can
+continue to use Palette, and existing clusters will continue to run, but you cannot perform the following operations
+until Palette is activated:
+
+- Create new clusters.
+
+- Modify the configuration of active clusters. This includes modifying
+  [cluster profile variables](../../../../profiles/cluster-profiles/create-cluster-profiles/define-profile-variables/define-profile-variables.md);
+  changing [cluster profile versions](../../../../clusters/cluster-management/cluster-updates.md#enablement); editing,
+  deleting, or replacing profile layers; and editing YAML files.
+
+- Update [node configurations](../../../../clusters/cluster-management/node-pool.md), such as the node pool size.
+
+Each installation of Palette has a unique product ID and corresponding activation key. Activation keys are single-use
+and valid for the entirety of the Palette installation, including all subsequent version upgrades. Once Palette is
+activated, it does not need to be reactivated unless you need to reinstall Palette, at which time a new product ID will
+be assigned, and a new activation key will be needed. Activation keys incur no additional cost and are included with
+your purchase of Palette. The activation process is the same for connected and airgapped installations, regardless of
+whether Palette is installed via the [Palette CLI](../../../../automation/palette-cli/palette-cli.md),
+[Helm chart](../kubernetes/setup/non-airgap/helm-reference.md), or [Management Appliance](./management-appliance.md)
+ISO. 
+ + ![License status of expired on the left main menu](/enterprise-version_activate-installation_left-main-menu-status.webp) + +## Overview + +Below is an overview of the activation process. + + ![Diagram of the self-hosted system activation process](/enterprise-version_activate-installation_system-activation-diagram.webp) + +1. The system admin installs Palette or upgrades to version 4.6.32 or later. +2. Palette enters trial mode. During this time, you have 30 days to take advantage of all of Palette's features. After + 30 days, the trial expires, and Palette functionality is restricted. Any clusters that you have deployed will remain + functional, but you cannot perform [day-2 operations](../../../../clusters/cluster-management/cluster-management.md), + and you cannot deploy additional clusters. + +3. Before or after your trial expires, contact a Spectro Cloud customer support representative. You must specify whether + you are activating Palette or VerteX and also provide a short description of your instance, along with your + installation's product ID. + +4. Spectro Cloud provides the activation key. + +5. The system admin enters the activation key and activates Palette, allowing you to resume day-2 operations and deploy + additional clusters. + +## Prerequisites + +- A Palette subscription. + +- A self-hosted instance of Palette that is not activated. For help installing Palette, check out our + [Installation](./install.md) guide. + +- Access to the [system console](../../system-management/system-management.md#access-the-system-console). + +## Enablement + +1. Log in to the system console. For more information, refer to the + [Access the System Console](../../system-management/system-management.md#access-the-system-console) guide. + +2. A banner is displayed on the **Summary** screen, alerting you that your product is either in trial mode or has + expired. On the banner, select **Activate Palette**. 
Alternatively, from the left main menu, select + **Administration > Activation**. + + ![Trial mode banner in the system console](/enterprise-version_activate-installation_trial-mode-banner.webp) + +3. The **Activation** tab of the **Administration** screen reiterates your product's status and displays your **Product + Setup ID**. Contact your customer support representative and provide them the following information: + + - Your installation type (Palette). + + - A short description of your instance. For example, `Spacetastic - Dev Team 1`. + + - Your instance's **Product Setup ID**. + +4. Your customer support representative will provide you an **Activation key**. The activation key is single-use and + cannot be used to activate another Palette or VerteX installation. +5. On the **Activation** tab, enter the **Activation key** and **Update** your settings. If the product ID and + activation key pair is correct, an activation successful message is displayed, and your banner is updated to state + that your license is active. + +## Validation + +You can view the status of your license from the system console. If your license is active, the license status is +removed from the left main menu of the Palette UI. + +1. Log in to the [system console](../../system-management/system-management.md#access-the-system-console). + +2. The activation banner is no longer displayed on the **Summary** screen, indicating your license is active. Confirm + your license status by navigating to **Administration > Activation**. The banner states that **Your license is + active**. 
diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/install.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/install.md new file mode 100644 index 00000000000..2158909bcb5 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/install.md @@ -0,0 +1,83 @@ +--- +sidebar_label: "Install" +title: "Install Palette with Management Appliance" +description: "Learn how to install self-hosted Palette using the Palette Management Appliance." +hide_table_of_contents: false +tags: ["management appliance", "self-hosted", "install"] +sidebar_position: 30 +--- + +:::preview + +This is a Tech Preview feature and is subject to change. Upgrades from a Tech Preview deployment may not be available. +Do not use this feature in production workloads. + +::: + +:::danger + +This has been split from the former +[Palette Management Appliance](https://docs.spectrocloud.com/enterprise-version/install-palette/palette-management-appliance/) +page. + +::: + +Follow the instructions to install Palette using the Palette Management Appliance on your infrastructure platform. + +## Size Guidelines + + + +## Limitations + +- Only public image registries are supported if you are choosing to use an external registry for your pack bundles. + +## Prerequisites + + + +## Install Palette + + + +:::warning + +If your installation is not successful, verify that the `piraeus-operator` pack was correctly installed. For more +information, refer to the +[Self-Hosted Installation - Troubleshooting](../../../../troubleshooting/enterprise-install.md#scenario---palettevertex-management-appliance-installation-stalled-due-to-piraeus-operator-pack-in-error-state) +guide. 
+
+:::
+
+## Validate
+
+
+
+## Next Steps
diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/management-appliance.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/management-appliance.md
new file mode 100644
index 00000000000..dd535289102
--- /dev/null
+++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/management-appliance.md
@@ -0,0 +1,63 @@
+---
+sidebar_label: "Management Appliance"
+title: "Self-Hosted Palette with Management Appliance"
+description: "Use the Palette Management Appliance to install self-hosted Palette on your desired infrastructure."
+hide_table_of_contents: false
+# sidebar_custom_props:
+#   icon: "chart-diagram"
+tags: ["management appliance", "self-hosted"]
+---
+
+:::preview
+
+This is a Tech Preview feature and is subject to change. Upgrades from a Tech Preview deployment may not be available.
+Do not use this feature in production workloads.
+
+:::
+
+The Palette Management Appliance is distributed as an ISO file that you use to install self-hosted Palette on
+your infrastructure. The ISO file contains all the components needed for Palette to function and is
+used to boot the nodes, which are then clustered to form a Palette management cluster.
+
+Once Palette has been installed, you can download pack bundles and upload them to the internal Zot registry or an
+external registry. These pack bundles are used to create your cluster profiles. You will then be able to deploy clusters
+in your environment.
+
+## Third Party Packs
+
+You can optionally download and install Third Party packs, which provide complementary functionality to
+Palette. These packs are not required for Palette to function, but they do provide additional features and capabilities
+as described in the following table. 
+ +| **Feature** | **Included with Palette Third Party Pack** | **Included with Palette Third Party Conformance Pack** | +| ------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ | ------------------------------------------------------ | +| [Backup and Restore](../../../../clusters/cluster-management/backup-restore/backup-restore.md) | :white_check_mark: | :x: | +| [Configuration Security](../../../../clusters/cluster-management/compliance-scan.md#configuration-security) | :white_check_mark: | :x: | +| [Penetration Testing](../../../../clusters/cluster-management/compliance-scan.md#penetration-testing) | :white_check_mark: | :x: | +| [Software Bill Of Materials (SBOM) scanning](../../../../clusters/cluster-management/compliance-scan.md#sbom-dependencies--vulnerabilities) | :white_check_mark: | :x: | +| [Conformance Testing](../../../../clusters/cluster-management/compliance-scan.md#conformance-testing) | :x: | :white_check_mark: | + +## Architecture + +The ISO file is built with the Operating System (OS), Kubernetes distribution, Container Network Interface (CNI), and +Container Storage Interface (CSI). A [Zot registry](https://zotregistry.dev/) is also included in the Appliance +Framework ISO. Zot is a lightweight, OCI-compliant container image registry that is used to store the Palette packs +needed to create cluster profiles. + +The following table displays the infrastructure profile for the self-hosted Palette appliance. 
+ +| **Layer** | **Component** | **Version** | +| -------------- | --------------------------------------------- | ----------- | +| **OS** | Ubuntu: Immutable [Kairos](https://kairos.io) | 22.04 | +| **Kubernetes** | Palette eXtended Kubernetes Edge (PXK-E) | 1.32.3 | +| **CNI** | Calico | 3.29.2 | +| **CSI** | Piraeus | 2.8.1 | +| **Registry** | Zot | 0.1.67 | + +## Supported Platforms + +The Palette Management Appliance can be used on the following infrastructure platforms: + +- VMware vSphere +- Bare Metal +- Metal as a Service (MAAS) diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/upgrade.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/upgrade.md new file mode 100644 index 00000000000..0b8329f0de9 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/upgrade.md @@ -0,0 +1,74 @@ +--- +sidebar_label: "Upgrade" +title: "Upgrade Palette with Management Appliance" +description: "Upgrade self-hosted Palette installed with the Palette Management Appliance." +hide_table_of_contents: false +tags: ["management appliance", "self-hosted", "upgrade"] +sidebar_position: 50 +--- + +:::preview + +This is a Tech Preview feature and is subject to change. Upgrades from a Tech Preview deployment may not be available. +Do not use this feature in production workloads. + +::: + +Follow the instructions to upgrade the [Palette Management Appliance](./management-appliance.md) using a content bundle. +The content bundle is used to upgrade the Palette instance to a chosen target version. + +:::info + +The upgrade process will incur downtime for the Palette management cluster, but your workload clusters will remain +operational. + +::: + +## Supported Upgrade Paths + +:::danger + +Before upgrading Palette to a new major version, you must first update it to the latest patch version of the +latest minor version available. 
+ +::: + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.7.15 | 4.7.27 | :white_check_mark: | +| 4.7.3 | 4.7.27 | :x: | +| 4.7.3 | 4.7.15 | :x: | + +## Prerequisites + + + +## Upgrade Palette + + + +## Validate + + diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/upload-packs.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/upload-packs.md new file mode 100644 index 00000000000..bac91e2c10b --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/management-appliance/upload-packs.md @@ -0,0 +1,100 @@ +--- +sidebar_label: "Upload Packs" +title: "Upload Packs to Palette with Management Appliance" +description: "Upload packs to self-hosted Palette installed with the Palette Management Appliance." +hide_table_of_contents: false +tags: ["management appliance", "self-hosted", "packs"] +sidebar_position: 60 +--- + +:::preview + +This is a Tech Preview feature and is subject to change. Upgrades from a Tech Preview deployment may not be available. +Do not use this feature in production workloads. + +::: + +:::danger + +This has been split from the former +[Palette Management Appliance](https://docs.spectrocloud.com/enterprise-version/install-palette/palette-management-appliance/) +page. + +::: + +## Upload Packs to Palette + +Follow the instructions to upload packs to your Palette instance. Packs are used to create +[cluster profiles](../../../../profiles/cluster-profiles/cluster-profiles.md) and deploy workload clusters in your +environment. + +### Prerequisites + + + +### Upload Packs + + + +### Validate + + + +## (Optional) Upload Third Party Packs + +Follow the instructions to upload the Third Party packs to your Palette instance. The Third Party packs contain +additional functionality and capabilities that enhance the Palette experience. 
+ +### Prerequisites + + + +### Upload Packs + + + +### Validate + + diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/supported-environments.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/supported-environments.md new file mode 100644 index 00000000000..3cdbde4b9f1 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/supported-environments.md @@ -0,0 +1,12 @@ +--- +sidebar_label: "Supported Environments" +title: "Supported Environments" +description: "Supported environments for installing self-hosted Palette." +icon: "" +hide_table_of_contents: false +sidebar_position: 30 +tags: ["self-hosted", "kubernetes", "helm", "vmware", "management appliance"] +keywords: ["self-hosted", "kubernetes", "helm", "vmware", "management appliance"] +--- + +Placeholder. diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/_category_.json new file mode 100644 index 00000000000..c3460c6dbde --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 30 +} diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/activate/_category_.json similarity index 100% rename from docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/_category_.json rename to docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/activate/_category_.json diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/activate/activate.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/activate/activate.md new file mode 100644 index 00000000000..9a218241f1d --- /dev/null +++ 
b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/activate/activate.md @@ -0,0 +1,113 @@ +--- +sidebar_label: "Activate" +title: "Activate Self-Hosted Palette" +description: "Activate your self-hosted Palette installation." +icon: "" +hide_table_of_contents: false +sidebar_position: 10 +tags: ["self-hosted", "activate"] +keywords: ["self-hosted", "activate"] +--- + +:::danger + +Convert to partials for reuse in other installation sections. + +::: + +Once you install Palette version 4.6.32 or later, or upgrade an existing installation to version 4.6.32 or later, you +have 30 days to activate it. During this time, you have unrestricted access to all of Palette's features. After 30 days, you can +continue to use Palette, and existing clusters will continue to run, but you cannot perform the following operations +until Palette is activated: + +- Create new clusters. + +- Modify the configuration of active clusters. This includes modifying + [cluster profile variables](../../../../../profiles/cluster-profiles/create-cluster-profiles/define-profile-variables/define-profile-variables.md); + changing [cluster profile versions](../../../../../clusters/cluster-management/cluster-updates.md#enablement); + editing, deleting, or replacing profile layers; and editing YAML files. + +- Update [node configurations](../../../../../clusters/cluster-management/node-pool.md), such as the node pool size. + +Each installation of Palette has a unique product ID and corresponding activation key. Activation keys are single-use +and valid for the entirety of the Palette installation, including all subsequent version upgrades. Once Palette is +activated, it does not need to be reactivated unless you need to reinstall Palette, at which time a new product ID will +be assigned, and a new activation key will be needed. Activation keys come at no additional cost and are included with your +purchase of Palette. 
The activation process is the same for connected and airgapped installations, regardless of whether +Palette is installed via the [Palette CLI](../../../../../automation/palette-cli/palette-cli.md), +[Helm chart](../../kubernetes/install/install.md), or +[Management Appliance](../../management-appliance/management-appliance.md) ISO. + +If you are in trial mode or your trial has expired, Palette displays the appropriate banner on the **Summary** screen of +your system console, as well as at **Administration > Activation**. Trial mode and expired statuses are also displayed +in the Palette UI at the bottom of the left main menu. + + ![License status of expired on the left main menu](/enterprise-version_activate-installation_left-main-menu-status.webp) + +## Overview + +Below is an overview of the activation process. + + ![Diagram of the self-hosted system activation process](/enterprise-version_activate-installation_system-activation-diagram.webp) + +1. The system admin installs Palette or upgrades to version 4.6.32 or later. +2. Palette enters trial mode. During this time, you have 30 days to take advantage of all of Palette's features. After + 30 days, the trial expires, and Palette functionality is restricted. Any clusters that you have deployed will remain + functional, but you cannot perform + [day-2 operations](../../../../../clusters/cluster-management/cluster-management.md), and you cannot deploy + additional clusters. + +3. Before or after your trial expires, contact a Spectro Cloud customer support representative. You must specify whether + you are activating Palette or VerteX and also provide a short description of your instance, along with your + installation's product ID. + +4. Spectro Cloud provides the activation key. + +5. The system admin enters the activation key and activates Palette, allowing you to resume day-2 operations and deploy + additional clusters. + +## Prerequisites + +- A Palette subscription. 
+ +- A self-hosted instance of Palette that is not activated. For help installing Palette, check out our + [Installation](../install/install.md) guide. + +- Access to the [system console](../../../system-management/system-management.md#access-the-system-console). + +## Enablement + +1. Log in to the system console. For more information, refer to the + [Access the System Console](../../../system-management/system-management.md#access-the-system-console) guide. + +2. A banner is displayed on the **Summary** screen, alerting you that your product is either in trial mode or has + expired. On the banner, select **Activate Palette**. Alternatively, from the left main menu, select + **Administration > Activation**. + + ![Trial mode banner in the system console](/enterprise-version_activate-installation_trial-mode-banner.webp) + +3. The **Activation** tab of the **Administration** screen reiterates your product's status and displays your **Product + Setup ID**. Contact your customer support representative and provide them the following information: + + - Your installation type (Palette). + + - A short description of your instance. For example, `Spacetastic - Dev Team 1`. + + - Your instance's **Product Setup ID**. + +4. Your customer support representative will provide you an **Activation key**. The activation key is single-use and + cannot be used to activate another Palette or VerteX installation. +5. On the **Activation** tab, enter the **Activation key** and **Update** your settings. If the product ID and + activation key pair is correct, an activation successful message is displayed, and your banner is updated to state + that your license is active. + +## Validation + +You can view the status of your license from the system console. If your license is active, the license status is +removed from the left main menu of the Palette UI. + +1. Log in to the [system console](../../../system-management/system-management.md#access-the-system-console). + +2. 
The activation banner is no longer displayed on the **Summary** screen, indicating your license is active. Confirm + your license status by navigating to **Administration > Activation**. The banner states that **Your license is + active**. diff --git a/docs/docs-content/vertex/system-management/account-management/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/install/_category_.json similarity index 100% rename from docs/docs-content/vertex/system-management/account-management/_category_.json rename to docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/install/_category_.json diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/install.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/install/airgap.md similarity index 93% rename from docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/install.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/install/airgap.md index 75a5a301438..6ce4827942b 100644 --- a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/install.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/install/airgap.md @@ -1,34 +1,32 @@ --- -sidebar_label: "Install Palette" -title: "Install Palette" -description: "Learn how to install Palette on VMware." +sidebar_label: "Install Airgap Palette" +title: "Install Airgap Palette on VMware vSphere with Palette CLI" +description: "Install airgap, self-hosted Palette on VMware vSphere using the Palette CLI." 
icon: "" -sidebar_position: 30 +sidebar_position: 10 hide_table_of_contents: false -tags: ["palette", "self-hosted", "vmware"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "vmware", "airgap", "cli"] +keywords: ["self-hosted", "vmware", "airgap", "cli"] --- Palette can be installed on VMware vSphere in an airgap environment. When you install Palette, a three-node cluster is created. You use the interactive Palette CLI to install Palette on VMware vSphere. Refer to -[Access Palette](../../../enterprise-version.md#access-palette) for instructions on requesting the required credentials -and assets. +[Access Palette](../../../palette.md#access-palette) for instructions on requesting the required credentials and assets. ## Prerequisites -- You have completed the [Environment Setup](./environment-setup/environment-setup.md) steps and deployed the airgap - support VM. +- You have completed the [Environment Setup](../setup/airgap/airgap.md) steps and deployed the airgap support VM. - You will need to provide the Palette CLI an encryption passphrase to secure sensitive data. The passphrase must be between 8 to 32 characters long and contain a capital letter, a lowercase letter, a digit, and a special character. - Refer to the [Palette CLI Encryption](../../../../automation/palette-cli/palette-cli.md#encryption) section for more - information. + Refer to the [Palette CLI Encryption](../../../../../automation/palette-cli/palette-cli.md#encryption) section for + more information. -- Review the required VMware vSphere [permissions](../vmware-system-requirements.md). Ensure you have created the proper - custom roles and zone tags. +- Review the required VMware vSphere [permissions](../setup/airgap/vmware-system-requirements.md). Ensure you have + created the proper custom roles and zone tags. - We recommended the following resources for Palette. Refer to the - [Palette size guidelines](../../install-palette.md#size-guidelines) for additional sizing information. 
+ [Palette size guidelines](../install/install.md#size-guidelines) for additional sizing information. - 8 CPUs per VM. @@ -56,7 +54,8 @@ and assets. - x509 SSL certificate authority file in base64 format. This file is optional. - Zone tagging is required for dynamic storage allocation across fault domains when provisioning workloads that require - persistent storage. Refer to [Zone Tagging](../../install-on-vmware/vmware-system-requirements.md) for information. + persistent storage. Refer to [Zone Tagging](../setup/airgap/vmware-system-requirements.md#zone-tagging) for + information. - Assigned IP addresses for application workload services, such as Load Balancer services. @@ -71,7 +70,7 @@ and assets. Self-hosted Palette installations provide a system Private Cloud Gateway (PCG) out-of-the-box and typically do not require a separate, user-installed PCG. However, you can create additional PCGs as needed to support provisioning into remote data centers that do not have a direct incoming connection from the Palette console. To learn how to install a -PCG on VMware, check out the [VMware](../../../../clusters/pcg/deploy-pcg/vmware.md) guide. +PCG on VMware, check out our [VMware PCG](../../../../../clusters/pcg/deploy-pcg/vmware.md) guide. ::: @@ -116,7 +115,7 @@ Use the following steps to install Palette. 3. Invoke the Palette CLI by using the `ec` command to install the enterprise cluster. The interactive CLI prompts you for configuration details and then initiates the installation. For more information about the `ec` subcommand, refer - to [Palette Commands](../../../../automation/palette-cli/commands/commands.md). + to [Palette Commands](../../../../../automation/palette-cli/commands/commands.md). ```bash palette ec install @@ -125,8 +124,8 @@ Use the following steps to install Palette. :::warning If you deployed the airgap support VM using a generic OVA, the Palette CLI may not be in the `usr/bin` path. 
Ensure - that you complete step **22** of the [Environment Setup](./environment-setup/vmware-vsphere-airgap-instructions.md) - guide, which installs the Palette airgap binary and moves the Palette CLI to the correct path. + that you complete step 18 of the [Environment Setup](../setup/airgap/ova.md) guide, which installs the Palette + airgap binary and moves the Palette CLI to the correct path. ::: @@ -162,7 +161,7 @@ Use the following steps to install Palette. For self-hosted OCI registries, ensure you have the server Certificate Authority (CA) certificate file available on the host where you are using the Palette CLI. You will be prompted to provide the file path to the OCI CA certificate. Failure to provide the OCI CA certificate will result in self-linking errors. Refer to the - [Self-linking Error](../../../../troubleshooting/enterprise-install.md#scenario---self-linking-error) + [Self-linking Error](../../../../../troubleshooting/enterprise-install.md#scenario---self-linking-error) troubleshooting guide for more information. 
::: @@ -421,14 +420,10 @@ You can also validate that a three-node Kubernetes cluster is launched and Palet ## Next Steps - - -## Resources - -- [Palette CLI](../../../../automation/palette-cli/install-palette-cli.md#download-and-setup) - -- [VMware System Requirements](../vmware-system-requirements.md) - -- [System Management](../../../system-management/system-management.md) - -- [Enterprise Install Troubleshooting](../../../../troubleshooting/enterprise-install.md) + diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/install/install.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/install/install.md new file mode 100644 index 00000000000..19d25184be9 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/install/install.md @@ -0,0 +1,64 @@ +--- +sidebar_label: "Install" +title: "Install Palette on VMware vSphere with Palette CLI" +description: "Review system requirements for installing self-hosted Palette on VMware vSphere using the Palette CLI." +icon: "" +hide_table_of_contents: false +tags: ["self-hosted", "install", "vmware", "cli"] +keywords: ["self-hosted", "install", "vmware", "cli"] +--- + +:::warning + +This is the former [Installation](https://docs.spectrocloud.com/enterprise-version/install-palette/) page. Leave only +what is applicable to VMware. Convert to partials for reuse. + +::: + +Palette is available as a self-hosted application that you install in your environment. Palette is available in the +following modes. + +| **Method** | **Supported Platforms** | **Description** | **Install Guide** | +| ---------------------------------------- | ------------------------ | --------------------------------------------------------------------- | ---------------------------------------------------------------------------------- | +| Palette CLI | VMware | Install Palette in VMware environment. 
| Install on VMware | +| Helm Chart | Kubernetes | Install Palette using a Helm Chart in an existing Kubernetes cluster. | [Install on Kubernetes](../../kubernetes/install/install.md) | +| Palette Management Appliance | VMware, Bare Metal, MAAS | Install Palette using the Palette Management Appliance ISO file. | [Install with Palette Management Appliance](../../management-appliance/install.md) | + +## Airgap Installation + +You can also install Palette in an airgap environment. For more information, refer to the +[Airgap Installation](./airgap.md) section. + +| **Method** | **Supported Airgap Platforms** | **Description** | **Install Guide** | +| ---------------------------------------- | ------------------------------ | ---------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- | +| Palette CLI | VMware | Install Palette in a VMware environment using your own OCI registry server. | [VMware Airgap Install](./airgap.md) | +| Helm Chart | Kubernetes | Install Palette using a Helm Chart in an existing Kubernetes cluster with your own OCI registry server or AWS ECR. | [Kubernetes Airgap Install](../../kubernetes/install/airgap.md) | +| Palette Management Appliance | VMware, Bare Metal, MAAS | Install Palette using the Palette Management Appliance ISO file. | [Install with Palette Management Appliance](../../management-appliance/install.md) | + +The following sections provide sizing guidelines that we recommend you review before installing Palette in your environment. + +## Size Guidelines + + + +## Kubernetes Requirements + + + +The following tables present the Kubernetes version corresponding to each Palette version for +self-hosted Palette installed on VMware vSphere environments using the Palette CLI. 
+Additionally, for VMware installations, they provide the download URLs for the required Operating System and Kubernetes +distribution OVA. + + + + + +## Proxy Requirements + + diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/install.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/install/non-airgap.md similarity index 91% rename from docs/docs-content/enterprise-version/install-palette/install-on-vmware/install.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/install/non-airgap.md index 54b961976ab..1f66c430477 100644 --- a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/install.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/install/non-airgap.md @@ -1,25 +1,25 @@ --- -sidebar_label: "Non-Airgap Installation" -title: "Install Palette on VMware" -description: "Learn how to install Palette on VMware." +sidebar_label: "Install Non-Airgap Palette" +title: "Install Non-Airgap Palette on VMware vSphere with Palette CLI" +description: "Install non-airgap, self-hosted Palette on VMware vSphere using the Palette CLI." icon: "" sidebar_position: 20 hide_table_of_contents: false -tags: ["palette", "self-hosted", "vmware"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "vmware", "non-airgap", "cli"] +keywords: ["self-hosted", "vmware", "non-airgap", "cli"] --- Palette can be installed on VMware vSphere with internet connectivity or in an airgap environment. When you install Palette, a three-node cluster is created. You use the interactive Palette CLI to install Palette on VMware vSphere. -Refer to [Access Palette](../../enterprise-version.md#access-palette) for instructions on requesting repository access. +Refer to [Access Palette](../../../palette.md#access-palette) for instructions on requesting repository access. 
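The prerequisites that follow require an encryption passphrase of 8 to 32 characters containing a capital letter, a lowercase letter, a digit, and a special character. The following is an illustrative self-check of that rule, not part of the Palette CLI; treating any non-alphanumeric character as "special" is an assumption:

```shell
# Hedged sketch: check a candidate Palette CLI encryption passphrase
# against the documented rule (8-32 characters, at least one uppercase
# letter, one lowercase letter, one digit, and one special character).
valid_passphrase() {
  p="$1"
  len=${#p}
  # Length must be between 8 and 32 characters inclusive.
  [ "$len" -ge 8 ] && [ "$len" -le 32 ] || return 1
  # Require one character from each class; "special" here means
  # any non-alphanumeric character (an assumption, not CLI-verified).
  case "$p" in *[A-Z]*) ;; *) return 1 ;; esac
  case "$p" in *[a-z]*) ;; *) return 1 ;; esac
  case "$p" in *[0-9]*) ;; *) return 1 ;; esac
  case "$p" in *[!A-Za-z0-9]*) ;; *) return 1 ;; esac
  return 0
}

valid_passphrase "Sup3r-Secret" && echo "passphrase accepted"
valid_passphrase "short1!" || echo "passphrase rejected: too short"
```

Running such a check before invoking `palette ec install` avoids being bounced back by the interactive prompt for an invalid passphrase.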
## Prerequisites :::tip We recommend using the `--validate` flag with the `ec install` command to validate the installation. Check out the -[Validate Environment](../../../automation/palette-cli/commands/ec.md#validate-environment) section of the EC command -for more information. +[Validate Environment](../../../../../automation/palette-cli/commands/ec.md#validate-environment) section of the EC +command for more information. ::: @@ -29,18 +29,18 @@ for more information. host. - Palette CLI installed and available. Refer to the Palette CLI - [Install](../../../automation/palette-cli/install-palette-cli.md#download-and-setup) page for guidance. + [Install](../../../../../automation/palette-cli/install-palette-cli.md#download-and-setup) page for guidance. - You will need to provide the Palette CLI an encryption passphrase to secure sensitive data. The passphrase must be between 8 to 32 characters long and contain a capital letter, a lowercase letter, a digit, and a special character. - Refer to the [Palette CLI Encryption](../../../automation/palette-cli/palette-cli.md#encryption) section for more - information. + Refer to the [Palette CLI Encryption](../../../../../automation/palette-cli/palette-cli.md#encryption) section for + more information. -- Review the required VMware vSphere [permissions](vmware-system-requirements.md). Ensure you have created the proper - custom roles and zone tags. +- Review the required VMware vSphere [permissions](../setup/non-airgap/vmware-system-requirements.md). Ensure you have + created the proper custom roles and zone tags. - We recommended the following resources for Palette. Refer to the - [Palette size guidelines](../install-palette.md#size-guidelines) for additional sizing information. + [Palette size guidelines](../install/install.md#size-guidelines) for additional sizing information. - 8 CPUs per VM. @@ -68,12 +68,13 @@ for more information. - x509 SSL certificate authority file in base64 format. This file is optional. 
- Zone tagging is required for dynamic storage allocation across fault domains when provisioning workloads that require - persistent storage. Refer to [Zone Tagging](../install-on-vmware/vmware-system-requirements.md) for information. + persistent storage. Refer to [Zone Tagging](../setup/non-airgap/vmware-system-requirements.md#zone-tagging) for + information. - Assigned IP addresses for application workload services, such as Load Balancer services. - Ensure Palette has access to the required domains and ports. Refer to the - [Required Domains](../install-palette.md#proxy-requirements) section for more information. + [Required Domains](../install/install.md#proxy-requirements) section for more information. - A [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) to manage persistent storage, with the annotation `storageclass.kubernetes.io/is-default-class` set to `true`. To override the default StorageClass for a @@ -86,7 +87,7 @@ for more information. Self-hosted Palette installations provide a system Private Cloud Gateway (PCG) out-of-the-box and typically do not require a separate, user-installed PCG. However, you can create additional PCGs as needed to support provisioning into remote data centers that do not have a direct incoming connection from the Palette console. To learn how to install a -PCG on VMware, check out the [VMware](../../../clusters/pcg/deploy-pcg/vmware.md) guide. +PCG on VMware, check out our [VMware PCG](../../../../../clusters/pcg/deploy-pcg/vmware.md) guide. ::: @@ -107,14 +108,14 @@ Use the following steps to install Palette. user account you will use to deploy the Palette installation. 3. Find the OVA download URL corresponding to your Palette version in the - [Kubernetes Requirements](../install-palette.md#kubernetes-requirements) section. Use the identified URL to import + [Kubernetes Requirements](../install/install.md#kubernetes-requirements) section. 
Use the identified URL to import the Operating System and Kubernetes distribution OVA required for the install. Place the OVA in the `spectro-templates` folder. 4. Append an `r_` prefix to the OVA name and remove the `.ova` suffix after the import. For example, the final output should look like `r_u-2204-0-k-12813-0`. This naming convention is required for the install process to identify the - OVA. Refer to the [Additional OVAs](../../../downloads/self-hosted-palette/additional-ovas.md) page for a list of - additional OVAs you can download and upload to your vCenter environment. + OVA. Refer to the [Additional OVAs](../../../../../downloads/self-hosted-palette/additional-ovas.md) page for a list + of additional OVAs you can download and upload to your vCenter environment. :::tip @@ -135,14 +136,14 @@ Use the following steps to install Palette. 6. Issue the Palette `ec` command to install the enterprise cluster. The interactive CLI prompts you for configuration details and then initiates the installation. For more information about the `ec` subcommand, refer to - [Palette Commands](../../../automation/palette-cli/commands/commands.md). + [Palette Commands](../../../../../automation/palette-cli/commands/commands.md). ```bash palette ec install ``` You can also use the `--validate` flag to validate the installation prior to deployment. Refer to the - [Validate Environment](../../../automation/palette-cli/commands/ec.md#validate-environment) section of the EC + [Validate Environment](../../../../../automation/palette-cli/commands/ec.md#validate-environment) section of the EC command for more information. ```bash @@ -341,13 +342,13 @@ Use the following steps to install Palette. 17. Log in to the system console using the credentials provided in the Enterprise Cluster Details output. After login, you will be prompted to create a new password. Enter a new password and save your changes. 
Refer to the - [password requirements](../../system-management/account-management/credentials.md#password-requirements-and-security) + [password requirements](../../../system-management/account-management/credentials.md#password-requirements-and-security) documentation page to learn more about the password requirements. Use the username `admin` and your new password to log in to the system console. You can create additional system administrator accounts and assign roles to users in the system console. Refer to the - [Account Management](../../system-management/account-management/account-management.md) documentation page for more - information. + [Account Management](../../../system-management/account-management/account-management.md) documentation page for + more information. :::info @@ -362,11 +363,11 @@ Use the following steps to install Palette. 18. After login, a Summary page is displayed. Palette is installed with a self-signed SSL certificate. To assign a different SSL certificate you must upload the SSL certificate, SSL certificate key, and SSL certificate authority files to Palette. You can upload the files using the Palette system console. Refer to the - [Configure HTTPS Encryption](../../system-management/ssl-certificate-management.md) page for instructions on how to - upload the SSL certificate files to Palette. + [Configure HTTPS Encryption](../../../system-management/ssl-certificate-management.md) page for instructions on how + to upload the SSL certificate files to Palette. 19. The last step is to start setting up a tenant. To learn how to create a tenant, check out the - [Tenant Management](../../system-management/tenant-management.md) guide. + [Tenant Management](../../../system-management/tenant-management.md) guide. 
![Screenshot of the Summary page showing where to click Go to Tenant Management button.](/palette_installation_install-on-vmware_goto-tenant-management.webp) @@ -404,14 +405,10 @@ You can also validate that a three-node Kubernetes cluster is launched and Palet ## Next Steps - - -## Resources - -- [Palette CLI](../../../automation/palette-cli/install-palette-cli.md#download-and-setup) - -- [VMware System Requirements](vmware-system-requirements.md) - -- [System Management](../../system-management/system-management.md) - -- [Enterprise Install Troubleshooting](../../../troubleshooting/enterprise-install.md) + diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/_category_.json new file mode 100644 index 00000000000..988cdc1b69c --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/_category_.json @@ -0,0 +1,4 @@ +{ + "label": "Set Up", + "position": 0 +} diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/_category_.json new file mode 100644 index 00000000000..094470741db --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 10 +} diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/airgap-install.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/airgap.md similarity index 60% rename from docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/airgap-install.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/airgap.md index 01cc8e4ab8d..5c2d3832ec8 100644 --- 
a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/airgap-install.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/airgap.md @@ -1,12 +1,11 @@ --- -sidebar_label: "Airgap Installation" -title: "Airgap Installation" -description: "Learn how to deploy self-hosted Palette in an airgapped environment." +sidebar_label: "Set Up Airgap Environment" +title: "Set Up Airgap Environment" +description: "Prepare to install your self-hosted, airgapped Palette instance in VMware vSphere." icon: "" hide_table_of_contents: false -sidebar_position: 0 -tags: ["self-hosted", "enterprise", "airgap"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "airgap", "vmware"] +keywords: ["self-hosted", "airgap", "vmware"] --- You can install Palette in an airgap VMware vSphere environment. An airgap environment lacks direct access to the @@ -45,19 +44,23 @@ following diagram outlines the major pre-installation steps for an airgap instal 4. Install Palette using the Palette CLI or the Kubernetes Helm chart. -Configure your Palette environment +## Environment Setup -## Get Started +This section helps you prepare your VMware vSphere airgap environment for Palette installation. You can choose between +two methods to prepare your environment: -To get started with an airgap Palette installation, begin by reviewing the -[Environment Setup](./environment-setup/vmware-vsphere-airgap-instructions.md) guide. +1. If you have a Red Hat Enterprise Linux (RHEL) VM deployed in your environment, follow the + [Environment Setup with an Existing RHEL VM](./rhel-vm.md) guide to learn how to prepare this VM for Palette + installation. +2. If you do not have an RHEL VM, follow the [Environment Setup with OVA](./ova.md) guide. This guide will show you how + to use an OVA to deploy an airgap support VM in your VMware vSphere environment, which will then assist with the + Palette installation process. 
-## Resources
+## Supported Platforms

-- [Environment Setup](./environment-setup/vmware-vsphere-airgap-instructions.md)
+The following table outlines the platforms supported for airgap Palette installation and the supported OCI registries.

-- [Airgap Install Checklist](./checklist.md)
-
-- [Airgap Install](./airgap-install.md)
-
-- [Additional Packs](../../../../downloads/self-hosted-palette/additional-packs.md)
+| **Platform**   | **OCI Registry** | **Supported** |
+| -------------- | ---------------- | ------------- |
+| VMware vSphere | Harbor           | ✅            |
+| VMware vSphere | AWS ECR          | ✅            |
diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/ova.md
similarity index 93%
rename from docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md
rename to docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/ova.md
index 01069774beb..d788dfe2547 100644
--- a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md
+++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/ova.md
@@ -1,12 +1,12 @@
 ---
-sidebar_label: "Environment Setup with OVA"
-title: "Environment Setup with OVA"
-description: "Learn how to install Palette in an airgap environment."
+sidebar_label: "Set Up Environment with OVA"
+title: "Set Up Environment with OVA"
+description: "Set up a VM using an OVA to install self-hosted Palette in an airgapped environment."
icon: "" hide_table_of_contents: false sidebar_position: 20 -tags: ["self-hosted", "enterprise", "airgap", "vmware", "vsphere"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "airgap", "vmware"] +keywords: ["self-hosted", "airgap", "vmware"] --- This guide helps you prepare your airgap environment for Palette installation using an OVA to deploy and initialize an @@ -14,9 +14,8 @@ airgap support VM. :::info -This guide is for preparing your airgap environment only. For instructions on installing Palette on VMware, check the -[Install](../install.md) guide. A checklist of the steps you will complete to prepare your airgap environment for -Palette is available on the [Checklist](../checklist.md) page. +This guide is for preparing your airgap environment only. For instructions on installing self-hosted Palette on VMware +vSphere, refer to our [Install](../../install/airgap.md) guide. ::: @@ -51,10 +50,10 @@ Palette. - Configure the Dynamic Host Configuration Protocol (DHCP) to access the airgap support VM via SSH. You can disable DHCP or modify the IP address after deploying the airgap support VM. -- Review the required vSphere [permissions](../../../install-on-vmware/vmware-system-requirements.md) and ensure you've - created the proper custom roles and zone tags. Zone tagging enables dynamic storage allocation across fault domains - when provisioning workloads that require persistent storage. Refer to - [Zone Tagging](../../../install-on-vmware/vmware-system-requirements.md#zone-tagging) for information. +- Review the required vSphere [permissions](vmware-system-requirements.md#vsphere-permissions) and ensure you've created + the proper custom roles and zone tags. Zone tagging enables dynamic storage allocation across fault domains when + provisioning workloads that require persistent storage. Refer to + [Zone Tagging](./vmware-system-requirements.md#zone-tagging) for information. 
- A folder named `spectro-templates` in the vCenter VM and Templates inventory. This is a hardcoded value and is case-sensitive. @@ -64,7 +63,7 @@ Palette. Self-hosted Palette installations provide a system Private Cloud Gateway (PCG) out-of-the-box and typically do not require a separate, user-installed PCG. However, you can deploy additional PCG instances to support provisioning into remote data centers without a direct incoming connection to Palette. To learn how to install a PCG on VMware, check out -the [VMware](../../../../../clusters/pcg/deploy-pcg/vmware.md) guide. +our [VMware PCG](../../../../../../clusters/pcg/deploy-pcg/vmware.md) guide. ::: @@ -354,7 +353,7 @@ The default container runtime for OVAs is [Podman](https://podman.io/), not Dock Once the airgap binary completes its tasks, you will receive a **Setup Completed** success message. -19. Review the [Additional Packs](../../../../../downloads/self-hosted-palette/additional-packs.md) page and identify +19. Review the [Additional Packs](../../../../../../downloads/self-hosted-palette/additional-packs.md) page and identify any additional packs you want to add to your OCI registry. You can also add additional packs after the installation is complete. @@ -365,7 +364,7 @@ The default container runtime for OVAs is [Podman](https://podman.io/), not Dock 22. In the **Deploy OVF Template** wizard, enter the following URL to import the Operating System (OS) and Kubernetes distribution OVA required for the installation. Refer to the - [Kubernetes Requirements](../../../install-palette.md#kubernetes-requirements) section to learn if the version of + [Kubernetes Requirements](../../install/install.md#kubernetes-requirements) section to learn if the version of Palette you are installing requires a new OS and Kubernetes OVA. Consider the following example for reference. 
@@ -391,7 +390,7 @@ The default container runtime for OVAs is [Podman](https://podman.io/), not Dock Place the OVA in the **spectro-templates** folder or in the folder you created in step **21**. Append the `r_` prefix, and remove the `.ova` suffix when assigning its name and target location. For example, the final output should look like `r_u-2204-0-k-1294-0`. This naming convention is required for the installation process to identify the OVA. Refer to the - [Additional OVAs](../../../../../downloads/self-hosted-palette/additional-ovas.md) page for a list of additional OS OVAs. + [Additional OVAs](../../../../../../downloads/self-hosted-palette/additional-ovas.md) page for a list of additional OS OVAs. You can terminate the deployment after the OVA is available in the `spectro-templates` folder. Refer to the [Deploy an OVF or OVA Template](https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-vm-administration/GUID-AFEDC48B-C96F-4088-9C1F-4F0A30E965DE.html) @@ -480,7 +479,8 @@ installed in the airgap support VM and ready to use. palette ec install ``` -Complete all the Palette CLI steps outlined in the [Install Palette](../install.md) guide from the airgap support VM. +Complete all the Palette CLI steps outlined in the [Install Palette](../../install/airgap.md) guide from the airgap +support VM. 
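The template-naming convention described above (append an `r_` prefix and remove the `.ova` suffix) can be derived mechanically from the OVA file name. The following snippet is illustrative only and is not part of the install flow; it assumes a POSIX shell, and uses the example file name from this guide:

```bash
# Derive the vCenter template name from the downloaded OVA file name.
ova_file="u-2204-0-k-1294-0.ova"   # example file name from this guide
template_name="r_${ova_file%.ova}" # strip the .ova suffix, prepend r_
echo "$template_name"              # Output: r_u-2204-0-k-1294-0
```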
:::info diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/env-setup-vm.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/rhel-vm.md similarity index 58% rename from docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/env-setup-vm.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/rhel-vm.md index 4583907ca22..381380da4bc 100644 --- a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/env-setup-vm.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/rhel-vm.md @@ -1,12 +1,12 @@ --- -sidebar_label: "Environment Setup with RHEL" -title: "Environment Setup with an Existing RHEL VM" -description: "Learn how to prepare your airgap environment for Palette installation using an existing RHEL VM" +sidebar_label: "Set Up Environment with RHEL" +title: "Set Up Environment with Existing RHEL VM" +description: "Prepare your airgap environment for installing self-hosted Palette using an existing RHEL VM." icon: "" hide_table_of_contents: false sidebar_position: 30 -tags: ["self-hosted", "enterprise", "airgap", "vmware", "vsphere", "rhel"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "airgap", "vmware", "rhel"] +keywords: ["self-hosted", "airgap", "vmware", "rhel"] --- This guide helps you prepare your VMware vSphere airgap environment for Palette installation using an existing Red Hat @@ -18,7 +18,7 @@ for hosting Palette images and assists in starting the Palette installation. :::info This guide is for preparing your airgap environment only. For instructions on installing Palette on VMware, refer to the -[Install Palette](../install.md) guide. +[Install Palette](../../install/airgap.md) guide. 
::: @@ -29,6 +29,6 @@ This guide is for preparing your airgap environment only. For instructions on in diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/vmware-system-requirements.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/vmware-system-requirements.md similarity index 90% rename from docs/docs-content/enterprise-version/install-palette/install-on-vmware/vmware-system-requirements.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/vmware-system-requirements.md index c8fac138528..50a722ef0d3 100644 --- a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/vmware-system-requirements.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/vmware-system-requirements.md @@ -1,14 +1,20 @@ --- -sidebar_label: "VMware System and Permission Requirements" +sidebar_label: "System and Permission Requirements" title: "VMware System and Permission Requirements" description: "Review VMware system requirements and cloud account permissions." icon: "" hide_table_of_contents: false sidebar_position: 10 -tags: ["palette", "self-hosted", "vmware"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "vmware", "permissions"] +keywords: ["self-hosted", "vmware", "permissions"] --- +:::danger + +Convert content to partials for reuse. + +::: + Before installing Palette on VMware, review the following system requirements and permissions. The vSphere user account used to deploy Palette must have the required permissions to access the proper roles and objects in vSphere. @@ -38,12 +44,12 @@ guide if you need help creating a custom role in vSphere. The required custom ro - A root-level role with access to higher-level vSphere objects. This role is referred to as the _Spectro root role_. 
Check out the - [Root-Level Role Privileges](../../../clusters/data-center/vmware/permissions.md#spectro-root-role-privileges) table - for the list of privileges required for the root-level role. + [Root-Level Role Privileges](../../../../../../clusters/data-center/vmware/permissions.md#spectro-root-role-privileges) + table for the list of privileges required for the root-level role. - A role with the required privileges for deploying VMs. This role is referred to as the _Spectro role_. Review the - [Spectro Role Privileges](../../../clusters/data-center/vmware/permissions.md#spectro-role-privileges) table for the - list of privileges required for the Spectro role. + [Spectro Role Privileges](../../../../../../clusters/data-center/vmware/permissions.md#spectro-role-privileges) table + for the list of privileges required for the Spectro role. The user account you use to deploy Palette must have access to both roles. Each vSphere object required by Palette must have a diff --git a/docs/docs-content/vertex/system-management/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/non-airgap/_category_.json similarity index 100% rename from docs/docs-content/vertex/system-management/_category_.json rename to docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/non-airgap/_category_.json diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/non-airgap/non-airgap.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/non-airgap/non-airgap.md new file mode 100644 index 00000000000..91040461d04 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/non-airgap/non-airgap.md @@ -0,0 +1,17 @@ +--- +sidebar_label: "Set Up Non-Airgap Environment" +title: "Set Up Non-Airgap Environment" +description: + "No prior setup is needed when installing self-hosted Palette on VMware vSphere with internet 
connectivity." +icon: "" +hide_table_of_contents: false +tags: ["self-hosted", "vmware", "non-airgap"] +keywords: ["self-hosted", "vmware", "non-airgap"] +--- + +:::info + +No prior setup is necessary for non-airgap installations. For system prerequisites, refer to the installation +Prerequisites. + +::: diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/non-airgap/vmware-system-requirements.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/non-airgap/vmware-system-requirements.md new file mode 100644 index 00000000000..50a722ef0d3 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/setup/non-airgap/vmware-system-requirements.md @@ -0,0 +1,128 @@ +--- +sidebar_label: "System and Permission Requirements" +title: "VMware System and Permission Requirements" +description: "Review VMware system requirements and cloud account permissions." +icon: "" +hide_table_of_contents: false +sidebar_position: 10 +tags: ["self-hosted", "vmware", "permissions"] +keywords: ["self-hosted", "vmware", "permissions"] +--- + +:::danger + +Convert content to partials for reuse. + +::: + +Before installing Palette on VMware, review the following system requirements and permissions. The vSphere user account +used to deploy Palette must have the required permissions to access the proper roles and objects in vSphere. + +Start by reviewing the required action items below: + +1. Create the two custom vSphere roles. Check out the [Create Required Roles](#create-required-roles) section to create + the required roles in vSphere. + +2. Review the [vSphere Permissions](#vsphere-permissions) section to ensure the created roles have the required vSphere + privileges and permissions. + +3. Create node zones and regions for your Kubernetes clusters. 
Refer to the [Zone Tagging](#zone-tagging) section to
+   verify that the required tags are created in vSphere to ensure proper resource allocation across fault domains.
+
+:::info
+
+The permissions listed on this page are also needed for deploying a Private Cloud Gateway (PCG) and workload cluster in
+vSphere through Palette.
+
+:::
+
+## Create Required Roles
+
+Palette requires two custom roles to be created in vSphere before the installation. Refer to the
+[Create a Custom Role](https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-security/GUID-18071E9A-EED1-4968-8D51-E0B4F526FDA3.html?hWord=N4IghgNiBcIE4HsIFMDOIC+Q)
+guide if you need help creating a custom role in vSphere. The required custom roles are:
+
+- A root-level role with access to higher-level vSphere objects. This role is referred to as the _Spectro root role_.
+  Check out the
+  [Root-Level Role Privileges](../../../../../../clusters/data-center/vmware/permissions.md#spectro-root-role-privileges)
+  table for the list of privileges required for the root-level role.
+
+- A role with the required privileges for deploying VMs. This role is referred to as the _Spectro role_. Review the
+  [Spectro Role Privileges](../../../../../../clusters/data-center/vmware/permissions.md#spectro-role-privileges) table
+  for the list of privileges required for the Spectro role.
+
+The user account you use to deploy Palette must have access to both roles. Each vSphere object required by Palette must
+have a
+[Permission](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.security.doc/GUID-4B47F690-72E7-4861-A299-9195B9C52E71.html)
+entry for the respective Spectro role. The following tables list the privileges required for each custom role.
+
+:::info
+
+For an in-depth explanation of vSphere authorization and permissions, check out the
+[Understanding Authorization in vSphere](https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-security/GUID-74F53189-EF41-4AC1-A78E-D25621855800.html)
+resource.
+
+:::
+
+## vSphere Permissions
+
+
+
+## Zone Tagging
+You can use tags to create node zones and regions for your Kubernetes clusters. The node zones and regions can be used
+to dynamically place Kubernetes workloads and achieve higher availability. Kubernetes nodes inherit the zone and region
+tags as [Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). Kubernetes workloads can
+use the node labels to ensure that the workloads are deployed to the correct zone and region.
+
+The following is an example of node labels that are discovered and inherited from vSphere tags. The tag values are
+applied to Kubernetes nodes in vSphere.
+
+```yaml hideClipboard
+topology.kubernetes.io/region=usdc
+topology.kubernetes.io/zone=zone3
+failure-domain.beta.kubernetes.io/region=usdc
+failure-domain.beta.kubernetes.io/zone=zone3
+```
+
+:::info
+
+To learn more about node zones and regions, refer to the
+[Node Zones/Regions Topology](https://cloud-provider-vsphere.sigs.k8s.io/cloud_provider_interface.html) section of the
+Cloud Provider Interface documentation.
+
+:::
+
+Zone tagging is required to install Palette and is helpful for Kubernetes workloads deployed in vSphere clusters through
+Palette if they have persistent storage needs. Use vSphere tags on data centers and compute clusters to create distinct
+zones in your environment. You can use vSphere
+[Tag Categories and Tags](https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-vcenter-esxi-management/GUID-16422FF7-235B-4A44-92E2-532F6AED0923.html)
+to create zones in your vSphere environment and assign them to vSphere objects.
+
+The zone tags you assign to your vSphere objects, such as a data center and clusters, are applied to the Kubernetes nodes
+you deploy through Palette into your vSphere environment. Kubernetes clusters deployed to other infrastructure
+providers, such as public cloud, may have other native mechanisms for auto-discovery of zones.
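As a sketch of how a workload can consume the inherited labels described above, a Pod can pin itself to a specific region and zone with a `nodeSelector`. The Pod name and image below are placeholders, not part of the Palette installation; the label values match the earlier example:

```yaml
# Hypothetical Pod that only schedules onto nodes carrying the
# region/zone labels inherited from the vSphere tags shown above.
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-app # placeholder name
spec:
  nodeSelector:
    topology.kubernetes.io/region: usdc
    topology.kubernetes.io/zone: zone3
  containers:
    - name: app
      image: nginx:1.25 # placeholder image
```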
+
+For example, assume a vCenter environment contains three compute clusters, cluster-1, cluster-2, and cluster-3. To
+support this environment, you create the tag categories `k8s-region` and `k8s-zone`. The `k8s-region` tag is assigned to
+the data center, and the `k8s-zone` tag is assigned to the compute clusters.
+
+The following table lists the tag values for the data center and compute clusters.
+
+| **vSphere Object** | **Assigned Name** | **Tag Category** | **Tag Value** |
+| ------------------ | ----------------- | ---------------- | ------------- |
+| **Datacenter**     | dc-1              | k8s-region       | region1       |
+| **Cluster**        | cluster-1         | k8s-zone         | az1           |
+| **Cluster**        | cluster-2         | k8s-zone         | az2           |
+| **Cluster**        | cluster-3         | k8s-zone         | az3           |
+
+Create a tag category and tag values for each data center and cluster in your environment. Use the tag categories to
+create zones. Use a name that is meaningful and that complies with the tag requirements listed in the following section.
+
+### Tag Requirements
+
+The following requirements apply to tags:
+
+- A valid tag must consist of alphanumeric characters.
+
+- The tag must start and end with an alphanumeric character.
+ +- The regex used for tag validation is `(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?` diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/upgrade/_category_.json b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/upgrade/_category_.json new file mode 100644 index 00000000000..c3460c6dbde --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/upgrade/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 30 +} diff --git a/docs/docs-content/enterprise-version/upgrade/upgrade-vmware/airgap.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/upgrade/airgap.md similarity index 87% rename from docs/docs-content/enterprise-version/upgrade/upgrade-vmware/airgap.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/upgrade/airgap.md index 6a8e80f932d..0a5ed0aed29 100644 --- a/docs/docs-content/enterprise-version/upgrade/upgrade-vmware/airgap.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/upgrade/airgap.md @@ -1,29 +1,29 @@ --- -sidebar_label: "Airgap" -title: "Upgrade Airgap Palette Installed on VMware vSphere" -description: "Learn how to upgrade self-hosted airgap Palette in VMware." +sidebar_label: "Upgrade Airgap Palette" +title: "Upgrade Airgap Palette on VMware vSphere" +description: "Upgrade a self-hosted, airgap Palette instance installed on VMware vSphere using the Palette CLI." icon: "" sidebar_position: 10 -tags: ["palette", "self-hosted", "vmware", "airgap", "upgrade"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "airgap", "vmware", "upgrade", "cli"] +keywords: ["self-hosted", "airgap", "vmware", "upgrade", "cli"] --- This guide takes you through the process of upgrading a self-hosted airgap Palette instance installed on VMware vSphere. 
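The tag-validation regex quoted in the Tag Requirements section above can be sanity-checked locally. This snippet is illustrative only and not part of any install or upgrade procedure; it assumes a shell whose `grep` supports `-E` (extended regex) and `-x` (whole-line match):

```bash
# Check candidate tag names against the documented validation pattern.
pattern='(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?'

check() {
  if printf '%s' "$1" | grep -Eqx "$pattern"; then
    printf '%s\n' "$1: valid"
  else
    printf '%s\n' "$1: invalid"
  fi
}

check "az1"      # az1: valid
check "k8s-zone" # k8s-zone: valid (hyphens allowed in the middle)
check "-region"  # -region: invalid (must start with an alphanumeric character)
check "zone3."   # zone3.: invalid (must end with an alphanumeric character)
```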
Before upgrading Palette to a new major version, you must first update it to the latest patch version of the latest -minor version available. Refer to the [Supported Upgrade Paths](../upgrade.md#supported-upgrade-paths) section for +minor version available. Refer to the [Supported Upgrade Paths](./upgrade.md#supported-upgrade-paths) section for details. :::warning If you are upgrading from a Palette version that is older than 4.4.14, ensure that you have executed the utility script to make the CNS mapping unique for the associated PVC. For more information, refer to the -[Troubleshooting guide](../../../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping). +[Troubleshooting guide](../../../../../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping). ::: If your setup includes a PCG, you must also -[allow the PCG to upgrade automatically](../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or minor -Palette upgrade. +[allow the PCG to upgrade automatically](../../../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or +minor Palette upgrade. ## Prerequisites @@ -31,8 +31,7 @@ Palette upgrade. - Access to the Palette airgap support Virtual Machine (VM) that you used for the initial Palette installation. -- Refer to [Access Palette](../../enterprise-version.md#access-palette) to download the new airgap Palette installation - bin. +- Refer to [Access Palette](../../../palette.md#access-palette) to download the new airgap Palette installation bin. - Contact our Support Team at support@spectrocloud.com to learn if the new version of Palette requires a new OS and Kubernetes OVA. If necessary, they will provide you with a link to the OVA, which you will use to upgrade Palette. @@ -40,8 +39,8 @@ Palette upgrade. - A diff or text comparison tool of your choice. - The Kubernetes cluster must be set up on a version of Kubernetes that is compatible to your upgraded version. 
Refer to - the [Kubernetes Requirements](../../install-palette/install-palette.md#kubernetes-requirements) section to find the - version required for your Palette installation. + the [Kubernetes Requirements](../install/install.md#kubernetes-requirements) section to find the version required for + your Palette installation. ## Upgrade @@ -123,8 +122,8 @@ steps one through four. Otherwise, start at step five. curl --user : --output airgap-4.2.12.bin https://software.spectrocloud.com/airgap-v4.2.12.bin ``` -8. Refer to the [Additional Packs](../../../downloads/self-hosted-palette/additional-packs.md) page and update the packs - you are currently using. You must update each pack separately. +8. Refer to the [Additional Packs](../../../../../downloads/self-hosted-palette/additional-packs.md) page and update the + packs you are currently using. You must update each pack separately. 9. Use the following command template to execute the new Palette airgap installation bin. diff --git a/docs/docs-content/enterprise-version/upgrade/upgrade-vmware/non-airgap.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/upgrade/non-airgap.md similarity index 79% rename from docs/docs-content/enterprise-version/upgrade/upgrade-vmware/non-airgap.md rename to docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/upgrade/non-airgap.md index 141bfd044b7..d1ac2b56048 100644 --- a/docs/docs-content/enterprise-version/upgrade/upgrade-vmware/non-airgap.md +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/upgrade/non-airgap.md @@ -1,36 +1,36 @@ --- -sidebar_label: "Non-airgap" -title: "Upgrade Palette Installed on VMware vSphere" -description: "Learn how to upgrade self-hosted Palette in VMware vSphere." +sidebar_label: "Upgrade Non-Airgap Palette" +title: "Upgrade Non-Airgap Palette on VMware vSphere" +description: "Upgrade a self-hosted, non-airgap Palette instance installed on VMware vSphere using the Palette CLI." 
icon: "" -sidebar_position: 0 -tags: ["palette", "self-hosted", "vmware", "non-airgap", "upgrade"] -keywords: ["self-hosted", "enterprise"] +sidebar_position: 20 +tags: ["self-hosted", "non-airgap", "vmware", "upgrade", "cli"] +keywords: ["self-hosted", "non-airgap", "vmware", "upgrade", "cli"] --- This guide takes you through the process of upgrading a self-hosted Palette instance installed on VMware vSphere. Before upgrading Palette to a new major version, you must first update it to the latest patch version of the latest minor -version available. Refer to the [Supported Upgrade Paths](../upgrade.md#supported-upgrade-paths) section for details. +version available. Refer to the [Supported Upgrade Paths](./upgrade.md#supported-upgrade-paths) section for details. :::warning If you are upgrading from a Palette version that is older than 4.4.14, ensure that you have executed the utility script to make the CNS mapping unique for the associated PVC. For more information, refer to the -[Troubleshooting guide](../../../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping). +[Troubleshooting guide](../../../../../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping). ::: If your setup includes a PCG, you must also -[allow the PCG to upgrade automatically](../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or minor -Palette upgrade. +[allow the PCG to upgrade automatically](../../../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or +minor Palette upgrade. ## Prerequisites - Access to the Palette system console. - A diff or text comparison tool of your choice. - The Kubernetes cluster must be set up on a version of Kubernetes that is compatible to your upgraded version. Refer to - the [Kubernetes Requirements](../../install-palette/install-palette.md#kubernetes-requirements) section to find the - version required for your Palette installation. 
+ the [Kubernetes Requirements](../install/install.md#kubernetes-requirements) section to find the version required for + your Palette installation. ## Upgrade diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/upgrade/upgrade.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/upgrade/upgrade.md new file mode 100644 index 00000000000..7d42c67e137 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/upgrade/upgrade.md @@ -0,0 +1,544 @@ +--- +sidebar_label: "Upgrade" +title: "Upgrade Palette on VMware vSphere" +description: "Upgrade your self-hosted Palette instance installed on VMware vSphere." +icon: "" +hide_table_of_contents: false +tags: ["self-hosted", "vmware", "upgrade"] +keywords: ["self-hosted", "vmware", "upgrade"] +--- + +:::danger + +The below content is from the former [Palette Upgrade](https://docs.spectrocloud.com/enterprise-version/upgrade/) page. +Convert to partials and refactor where necessary. + +::: + +This page offers links and reference information for upgrading self-hosted Palette instances. If you have questions or +concerns, [reach out to our support team](http://support.spectrocloud.io/). + +:::tip + +If you are using Palette VerteX, refer to the +[VerteX Upgrade](../../../../vertex/supported-environments/vmware/upgrade/upgrade.md) page for upgrade guidance. + +::: + +### Private Cloud Gateway + +If your setup includes a PCG, make sure to +[allow the PCG to upgrade automatically](../../../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or +minor Palette upgrade. + +## Upgrade Notes + +Refer to the following known issues before upgrading: + +- Upgrading self-hosted Palette or Palette VerteX from version 4.6.x to 4.7.x can cause the upgrade to hang if any + member of the MongoDB ReplicaSet is not fully synced and in a healthy state prior to the upgrade. 
For guidance on + verifying the health status of MongoDB ReplicaSet members, refer to our + [Troubleshooting](../../../../../troubleshooting/palette-upgrade.md#self-hosted-palette-or-palette-vertex-upgrade-hangs) + guide. + +- A known issue impacts all self-hosted Palette instances older than 4.4.14. Before upgrading a Palette instance with + version older than 4.4.14, ensure that you execute a utility script to make all your cluster IDs unique in your + Persistent Volume Claim (PVC) metadata. For more information, refer to the + [Troubleshooting Guide](../../../../../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping). + +- Prior to upgrading VMware vSphere VerteX installations from version 4.3.x to 4.4.x, complete the steps outlined in the + [Mongo DNS ConfigMap Issue](../../../../../troubleshooting/palette-upgrade.md#mongo-dns-configmap-value-is-incorrect) + guide. Addressing this Mongo DNS issue will prevent system pods from experiencing _CrashLoopBackOff_ errors after the + upgrade. + + After the upgrade, if Enterprise Cluster backups are stuck, refer to the + [Enterprise Backup Stuck](../../../../../troubleshooting/enterprise-install.md#scenario---enterprise-backup-stuck) + troubleshooting guide for resolution steps. + +## Supported Upgrade Paths + +Refer to the following tables for the supported upgrade paths for self-hosted Palette installed on VMware vSphere using +the Palette CLI. + +:::danger + +Before upgrading Palette to a new major version, you must first update it to the latest patch version of the latest +minor version available.
+ +::: + + + + +**4.7.x** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.7.21 | 4.7.27 | :white_check_mark: | +| 4.7.20 | 4.7.27 | :white_check_mark: | +| 4.7.16 | 4.7.27 | :white_check_mark: | +| 4.7.16 | 4.7.20 | :white_check_mark: | +| 4.7.15 | 4.7.27 | :white_check_mark: | +| 4.7.15 | 4.7.20 | :white_check_mark: | +| 4.7.3 | 4.7.27 | :white_check_mark: | +| 4.7.3 | 4.7.20 | :white_check_mark: | +| 4.7.3 | 4.7.15 | :white_check_mark: | +| 4.6.41 | 4.7.27 | :white_check_mark: | +| 4.6.41 | 4.7.20 | :white_check_mark: | +| 4.6.41 | 4.7.15 | :white_check_mark: | +| 4.6.41 | 4.7.3 | :white_check_mark: | +| 4.6.6 | 4.7.15 | :white_check_mark: | + +**4.6.x** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.6.41 | 4.6.43 | :white_check_mark: | +| 4.6.36 | 4.6.43 | :white_check_mark: | +| 4.6.36 | 4.6.41 | :white_check_mark: | +| 4.6.32 | 4.6.43 | :white_check_mark: | +| 4.6.32 | 4.6.41 | :white_check_mark: | +| 4.6.32 | 4.6.34 | :white_check_mark: | +| 4.6.28 | 4.6.43 | :white_check_mark: | +| 4.6.28 | 4.6.41 | :white_check_mark: | +| 4.6.28 | 4.6.34 | :white_check_mark: | +| 4.6.28 | 4.6.32 | :white_check_mark: | +| 4.6.26 | 4.6.43 | :white_check_mark: | +| 4.6.26 | 4.6.41 | :white_check_mark: | +| 4.6.26 | 4.6.34 | :white_check_mark: | +| 4.6.26 | 4.6.32 | :white_check_mark: | +| 4.6.25 | 4.6.43 | :white_check_mark: | +| 4.6.25 | 4.6.41 | :white_check_mark: | +| 4.6.25 | 4.6.34 | :white_check_mark: | +| 4.6.25 | 4.6.32 | :white_check_mark: | +| 4.6.24 | 4.6.43 | :white_check_mark: | +| 4.6.24 | 4.6.41 | :white_check_mark: | +| 4.6.24 | 4.6.34 | :white_check_mark: | +| 4.6.24 | 4.6.32 | :white_check_mark: | +| 4.6.23 | 4.6.43 | :white_check_mark: | +| 4.6.23 | 4.6.41 | :white_check_mark: | +| 4.6.23 | 4.6.34 | :white_check_mark: | +| 4.6.23 | 4.6.32 | :white_check_mark: | +| 4.6.23 | 4.6.28 | 
:white_check_mark: | +| 4.6.23 | 4.6.24 | :white_check_mark: | +| 4.6.18 | 4.6.43 | :white_check_mark: | +| 4.6.18 | 4.6.41 | :white_check_mark: | +| 4.6.18 | 4.6.34 | :white_check_mark: | +| 4.6.18 | 4.6.32 | :white_check_mark: | +| 4.6.18 | 4.6.28 | :white_check_mark: | +| 4.6.18 | 4.6.24 | :white_check_mark: | +| 4.6.18 | 4.6.23 | :white_check_mark: | +| 4.6.13 | 4.6.43 | :white_check_mark: | +| 4.6.13 | 4.6.41 | :white_check_mark: | +| 4.6.13 | 4.6.34 | :white_check_mark: | +| 4.6.13 | 4.6.32 | :white_check_mark: | +| 4.6.13 | 4.6.28 | :white_check_mark: | +| 4.6.13 | 4.6.24 | :white_check_mark: | +| 4.6.13 | 4.6.23 | :white_check_mark: | +| 4.6.13 | 4.6.18 | :white_check_mark: | +| 4.6.12 | 4.6.43 | :white_check_mark: | +| 4.6.12 | 4.6.41 | :white_check_mark: | +| 4.6.12 | 4.6.34 | :white_check_mark: | +| 4.6.12 | 4.6.32 | :white_check_mark: | +| 4.6.12 | 4.6.28 | :white_check_mark: | +| 4.6.12 | 4.6.24 | :white_check_mark: | +| 4.6.12 | 4.6.23 | :white_check_mark: | +| 4.6.12 | 4.6.18 | :white_check_mark: | +| 4.6.12 | 4.6.13 | :white_check_mark: | +| 4.6.9 | 4.6.43 | :white_check_mark: | +| 4.6.9 | 4.6.41 | :white_check_mark: | +| 4.6.9 | 4.6.34 | :white_check_mark: | +| 4.6.9 | 4.6.32 | :white_check_mark: | +| 4.6.9 | 4.6.28 | :white_check_mark: | +| 4.6.9 | 4.6.24 | :white_check_mark: | +| 4.6.9 | 4.6.23 | :white_check_mark: | +| 4.6.9 | 4.6.18 | :white_check_mark: | +| 4.6.9 | 4.6.13 | :white_check_mark: | +| 4.6.9 | 4.6.12 | :white_check_mark: | +| 4.6.8 | 4.6.43 | :white_check_mark: | +| 4.6.8 | 4.6.41 | :white_check_mark: | +| 4.6.8 | 4.6.34 | :white_check_mark: | +| 4.6.8 | 4.6.32 | :white_check_mark: | +| 4.6.8 | 4.6.28 | :white_check_mark: | +| 4.6.8 | 4.6.24 | :white_check_mark: | +| 4.6.8 | 4.6.23 | :white_check_mark: | +| 4.6.8 | 4.6.18 | :white_check_mark: | +| 4.6.8 | 4.6.13 | :white_check_mark: | +| 4.6.8 | 4.6.12 | :white_check_mark: | +| 4.6.8 | 4.6.9 | :white_check_mark: | +| 4.6.7 | 4.6.43 | :white_check_mark: | +| 4.6.7 | 4.6.41 | 
:white_check_mark: | +| 4.6.7 | 4.6.34 | :white_check_mark: | +| 4.6.7 | 4.6.32 | :white_check_mark: | +| 4.6.7 | 4.6.28 | :white_check_mark: | +| 4.6.7 | 4.6.24 | :white_check_mark: | +| 4.6.7 | 4.6.23 | :white_check_mark: | +| 4.6.7 | 4.6.18 | :white_check_mark: | +| 4.6.7 | 4.6.13 | :white_check_mark: | +| 4.6.7 | 4.6.12 | :white_check_mark: | +| 4.6.7 | 4.6.9 | :white_check_mark: | +| 4.6.7 | 4.6.8 | :white_check_mark: | +| 4.6.6 | 4.6.43 | :white_check_mark: | +| 4.6.6 | 4.6.41 | :white_check_mark: | +| 4.6.6 | 4.6.34 | :white_check_mark: | +| 4.6.6 | 4.6.32 | :white_check_mark: | +| 4.6.6 | 4.6.28 | :white_check_mark: | +| 4.6.6 | 4.6.24 | :white_check_mark: | +| 4.6.6 | 4.6.23 | :white_check_mark: | +| 4.6.6 | 4.6.18 | :white_check_mark: | +| 4.6.6 | 4.6.13 | :white_check_mark: | +| 4.6.6 | 4.6.12 | :white_check_mark: | +| 4.6.6 | 4.6.9 | :white_check_mark: | +| 4.6.6 | 4.6.8 | :white_check_mark: | +| 4.6.6 | 4.6.7 | :white_check_mark: | +| 4.5.23 | 4.6.43 | :white_check_mark: | +| 4.5.23 | 4.6.41 | :white_check_mark: | +| 4.5.23 | 4.6.34 | :white_check_mark: | +| 4.5.23 | 4.6.32 | :white_check_mark: | +| 4.5.23 | 4.6.28 | :white_check_mark: | +| 4.5.23 | 4.6.24 | :white_check_mark: | +| 4.5.23 | 4.6.23 | :white_check_mark: | +| 4.5.23 | 4.6.18 | :white_check_mark: | +| 4.5.21 | 4.6.43 | :white_check_mark: | +| 4.5.21 | 4.6.41 | :white_check_mark: | +| 4.5.21 | 4.6.34 | :white_check_mark: | +| 4.5.21 | 4.6.32 | :white_check_mark: | +| 4.5.21 | 4.6.28 | :white_check_mark: | +| 4.5.21 | 4.6.24 | :white_check_mark: | +| 4.5.21 | 4.6.23 | :white_check_mark: | +| 4.5.21 | 4.6.18 | :white_check_mark: | +| 4.5.21 | 4.6.13 | :white_check_mark: | +| 4.5.21 | 4.6.12 | :white_check_mark: | +| 4.5.21 | 4.6.9 | :white_check_mark: | +| 4.5.21 | 4.6.8 | :white_check_mark: | +| 4.5.21 | 4.6.7 | :white_check_mark: | +| 4.5.21 | 4.6.6 | :white_check_mark: | +| 4.5.20 | 4.6.43 | :white_check_mark: | +| 4.5.20 | 4.6.41 | :white_check_mark: | +| 4.5.20 | 4.6.34 | 
:white_check_mark: | +| 4.5.20 | 4.6.32 | :white_check_mark: | +| 4.5.20 | 4.6.28 | :white_check_mark: | +| 4.5.20 | 4.6.24 | :white_check_mark: | +| 4.5.20 | 4.6.23 | :white_check_mark: | +| 4.5.20 | 4.6.18 | :white_check_mark: | +| 4.5.20 | 4.6.13 | :white_check_mark: | +| 4.5.20 | 4.6.12 | :white_check_mark: | +| 4.5.20 | 4.6.9 | :white_check_mark: | +| 4.5.20 | 4.6.8 | :white_check_mark: | +| 4.5.20 | 4.6.7 | :white_check_mark: | +| 4.5.20 | 4.6.6 | :white_check_mark: | +| 4.4.24 | 4.6.43 | :white_check_mark: | +| 4.4.24 | 4.6.41 | :white_check_mark: | +| 4.4.24 | 4.6.34 | :white_check_mark: | +| 4.4.24 | 4.6.32 | :white_check_mark: | +| 4.4.24 | 4.6.28 | :white_check_mark: | +| 4.4.24 | 4.6.24 | :white_check_mark: | +| 4.4.24 | 4.6.23 | :white_check_mark: | + +**4.5.x** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.5.21 | 4.5.23 | :white_check_mark: | +| 4.5.20 | 4.5.23 | :white_check_mark: | +| 4.5.20 | 4.5.21 | :white_check_mark: | +| 4.5.15 | 4.5.23 | :white_check_mark: | +| 4.5.15 | 4.5.21 | :white_check_mark: | +| 4.5.15 | 4.5.20 | :white_check_mark: | +| 4.5.11 | 4.5.23 | :white_check_mark: | +| 4.5.11 | 4.5.21 | :white_check_mark: | +| 4.5.11 | 4.5.20 | :white_check_mark: | +| 4.5.11 | 4.5.15 | :white_check_mark: | +| 4.5.8 | 4.5.23 | :white_check_mark: | +| 4.5.8 | 4.5.21 | :white_check_mark: | +| 4.5.8 | 4.5.20 | :white_check_mark: | +| 4.5.8 | 4.5.15 | :white_check_mark: | +| 4.5.8 | 4.5.11 | :white_check_mark: | +| 4.5.4 | 4.5.23 | :white_check_mark: | +| 4.5.4 | 4.5.21 | :white_check_mark: | +| 4.5.4 | 4.5.20 | :white_check_mark: | +| 4.5.4 | 4.5.15 | :white_check_mark: | +| 4.5.4 | 4.5.11 | :white_check_mark: | +| 4.5.4 | 4.5.8 | :white_check_mark: | +| 4.4.24 | 4.5.23 | :white_check_mark: | +| 4.4.20 | 4.5.23 | :white_check_mark: | +| 4.4.20 | 4.5.21 | :white_check_mark: | +| 4.4.20 | 4.5.20 | :white_check_mark: | +| 4.4.20 | 4.5.15 | :white_check_mark: | +| 
4.4.20 | 4.5.11 | :white_check_mark: | +| 4.4.20 | 4.5.8 | :white_check_mark: | +| 4.4.20 | 4.5.4 | :white_check_mark: | + +**4.4.x** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.4.20 | 4.4.23 | :white_check_mark: | +| 4.4.18 | 4.4.23 | :white_check_mark: | +| 4.4.18 | 4.4.20 | :white_check_mark: | +| 4.4.14 | 4.4.23 | :white_check_mark: | +| 4.4.14 | 4.4.20 | :white_check_mark: | +| 4.4.14 | 4.4.18 | :white_check_mark: | +| 4.4.11 | 4.4.23 | :white_check_mark: | +| 4.4.11 | 4.4.20 | :white_check_mark: | +| 4.4.11 | 4.4.18 | :white_check_mark: | +| 4.4.11 | 4.4.14 | :white_check_mark: | +| 4.4.6 | 4.4.23 | :white_check_mark: | +| 4.4.6 | 4.4.20 | :white_check_mark: | +| 4.4.6 | 4.4.18 | :white_check_mark: | +| 4.4.6 | 4.4.14 | :white_check_mark: | +| 4.4.6 | 4.4.11 | :white_check_mark: | +| 4.3.6 | 4.4.23 | :white_check_mark: | +| 4.3.6 | 4.4.20 | :white_check_mark: | +| 4.3.6 | 4.4.18 | :white_check_mark: | +| 4.3.6 | 4.4.14 | :white_check_mark: | +| 4.3.6 | 4.4.11 | :white_check_mark: | +| 4.3.6 | 4.4.6 | :white_check_mark: | + +**4.3.x and Prior** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.2.13 | 4.3.6 | :white_check_mark: | +| 4.2.7 | 4.2.13 | :white_check_mark: | +| 4.1.x | 4.3.6 | :x: | +| 4.1.12 | 4.2.7 | :white_check_mark: | +| 4.1.12 | 4.1.13 | :white_check_mark: | +| 4.1.7 | 4.2.7 | :white_check_mark: | + + + + + +**4.7.x** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.7.16 | 4.7.20 | :white_check_mark: | +| 4.7.15 | 4.7.20 | :white_check_mark: | +| 4.7.3 | 4.7.20 | :white_check_mark: | +| 4.7.3 | 4.7.15 | :white_check_mark: | +| 4.6.41 | 4.7.20 | :white_check_mark: | +| 4.6.41 | 4.7.15 | :white_check_mark: | +| 4.6.41 | 4.7.3 | :white_check_mark: | + +**4.6.x** + +| **Source Version** | **Target 
Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.6.41 | 4.6.43 | :white_check_mark: | +| 4.6.36 | 4.6.43 | :white_check_mark: | +| 4.6.36 | 4.6.41 | :white_check_mark: | +| 4.6.32 | 4.6.43 | :white_check_mark: | +| 4.6.32 | 4.6.41 | :white_check_mark: | +| 4.6.32 | 4.6.34 | :white_check_mark: | +| 4.6.28 | 4.6.43 | :white_check_mark: | +| 4.6.28 | 4.6.41 | :white_check_mark: | +| 4.6.28 | 4.6.34 | :white_check_mark: | +| 4.6.28 | 4.6.32 | :white_check_mark: | +| 4.6.26 | 4.6.43 | :white_check_mark: | +| 4.6.26 | 4.6.41 | :white_check_mark: | +| 4.6.26 | 4.6.34 | :white_check_mark: | +| 4.6.26 | 4.6.32 | :white_check_mark: | +| 4.6.25 | 4.6.43 | :white_check_mark: | +| 4.6.25 | 4.6.41 | :white_check_mark: | +| 4.6.25 | 4.6.34 | :white_check_mark: | +| 4.6.25 | 4.6.32 | :white_check_mark: | +| 4.6.24 | 4.6.43 | :white_check_mark: | +| 4.6.24 | 4.6.41 | :white_check_mark: | +| 4.6.24 | 4.6.34 | :white_check_mark: | +| 4.6.24 | 4.6.32 | :white_check_mark: | +| 4.6.23 | 4.6.43 | :white_check_mark: | +| 4.6.23 | 4.6.41 | :white_check_mark: | +| 4.6.23 | 4.6.34 | :white_check_mark: | +| 4.6.23 | 4.6.32 | :white_check_mark: | +| 4.6.23 | 4.6.28 | :white_check_mark: | +| 4.6.23 | 4.6.24 | :white_check_mark: | +| 4.6.18 | 4.6.43 | :white_check_mark: | +| 4.6.18 | 4.6.41 | :white_check_mark: | +| 4.6.18 | 4.6.34 | :white_check_mark: | +| 4.6.18 | 4.6.32 | :white_check_mark: | +| 4.6.18 | 4.6.28 | :white_check_mark: | +| 4.6.18 | 4.6.24 | :white_check_mark: | +| 4.6.18 | 4.6.23 | :white_check_mark: | +| 4.6.13 | 4.6.43 | :white_check_mark: | +| 4.6.13 | 4.6.41 | :white_check_mark: | +| 4.6.13 | 4.6.34 | :white_check_mark: | +| 4.6.13 | 4.6.32 | :white_check_mark: | +| 4.6.13 | 4.6.28 | :white_check_mark: | +| 4.6.13 | 4.6.24 | :white_check_mark: | +| 4.6.13 | 4.6.23 | :white_check_mark: | +| 4.6.13 | 4.6.18 | :white_check_mark: | +| 4.6.12 | 4.6.43 | :white_check_mark: | +| 4.6.12 | 4.6.41 | :white_check_mark: | +| 4.6.12 | 
4.6.34 | :white_check_mark: | +| 4.6.12 | 4.6.32 | :white_check_mark: | +| 4.6.12 | 4.6.28 | :white_check_mark: | +| 4.6.12 | 4.6.24 | :white_check_mark: | +| 4.6.12 | 4.6.23 | :white_check_mark: | +| 4.6.12 | 4.6.18 | :white_check_mark: | +| 4.6.12 | 4.6.13 | :white_check_mark: | +| 4.6.9 | 4.6.43 | :white_check_mark: | +| 4.6.9 | 4.6.41 | :white_check_mark: | +| 4.6.9 | 4.6.34 | :white_check_mark: | +| 4.6.9 | 4.6.32 | :white_check_mark: | +| 4.6.9 | 4.6.28 | :white_check_mark: | +| 4.6.9 | 4.6.24 | :white_check_mark: | +| 4.6.9 | 4.6.23 | :white_check_mark: | +| 4.6.9 | 4.6.18 | :white_check_mark: | +| 4.6.9 | 4.6.13 | :white_check_mark: | +| 4.6.9 | 4.6.12 | :white_check_mark: | +| 4.6.8 | 4.6.43 | :white_check_mark: | +| 4.6.8 | 4.6.41 | :white_check_mark: | +| 4.6.8 | 4.6.34 | :white_check_mark: | +| 4.6.8 | 4.6.32 | :white_check_mark: | +| 4.6.8 | 4.6.28 | :white_check_mark: | +| 4.6.8 | 4.6.24 | :white_check_mark: | +| 4.6.8 | 4.6.23 | :white_check_mark: | +| 4.6.8 | 4.6.18 | :white_check_mark: | +| 4.6.8 | 4.6.13 | :white_check_mark: | +| 4.6.8 | 4.6.12 | :white_check_mark: | +| 4.6.8 | 4.6.9 | :white_check_mark: | +| 4.6.7 | 4.6.43 | :white_check_mark: | +| 4.6.7 | 4.6.41 | :white_check_mark: | +| 4.6.7 | 4.6.34 | :white_check_mark: | +| 4.6.7 | 4.6.32 | :white_check_mark: | +| 4.6.7 | 4.6.28 | :white_check_mark: | +| 4.6.7 | 4.6.24 | :white_check_mark: | +| 4.6.7 | 4.6.23 | :white_check_mark: | +| 4.6.7 | 4.6.18 | :white_check_mark: | +| 4.6.7 | 4.6.13 | :white_check_mark: | +| 4.6.7 | 4.6.12 | :white_check_mark: | +| 4.6.7 | 4.6.9 | :white_check_mark: | +| 4.6.7 | 4.6.8 | :white_check_mark: | +| 4.6.6 | 4.6.43 | :white_check_mark: | +| 4.6.6 | 4.6.41 | :white_check_mark: | +| 4.6.6 | 4.6.34 | :white_check_mark: | +| 4.6.6 | 4.6.32 | :white_check_mark: | +| 4.6.6 | 4.6.28 | :white_check_mark: | +| 4.6.6 | 4.6.24 | :white_check_mark: | +| 4.6.6 | 4.6.23 | :white_check_mark: | +| 4.6.6 | 4.6.18 | :white_check_mark: | +| 4.6.6 | 4.6.13 | :white_check_mark: 
| +| 4.6.6 | 4.6.12 | :white_check_mark: | +| 4.6.6 | 4.6.9 | :white_check_mark: | +| 4.6.6 | 4.6.8 | :white_check_mark: | +| 4.6.6 | 4.6.7 | :white_check_mark: | +| 4.5.23 | 4.6.43 | :white_check_mark: | +| 4.5.23 | 4.6.41 | :white_check_mark: | +| 4.5.23 | 4.6.34 | :white_check_mark: | +| 4.5.23 | 4.6.32 | :white_check_mark: | +| 4.5.23 | 4.6.28 | :white_check_mark: | +| 4.5.23 | 4.6.24 | :white_check_mark: | +| 4.5.23 | 4.6.23 | :white_check_mark: | +| 4.5.23 | 4.6.18 | :white_check_mark: | +| 4.5.21 | 4.6.43 | :white_check_mark: | +| 4.5.21 | 4.6.41 | :white_check_mark: | +| 4.5.21 | 4.6.34 | :white_check_mark: | +| 4.5.21 | 4.6.32 | :white_check_mark: | +| 4.5.21 | 4.6.28 | :white_check_mark: | +| 4.5.21 | 4.6.24 | :white_check_mark: | +| 4.5.21 | 4.6.23 | :white_check_mark: | +| 4.5.21 | 4.6.18 | :white_check_mark: | +| 4.5.21 | 4.6.13 | :white_check_mark: | +| 4.5.21 | 4.6.12 | :white_check_mark: | +| 4.5.21 | 4.6.9 | :white_check_mark: | +| 4.5.21 | 4.6.8 | :white_check_mark: | +| 4.5.21 | 4.6.7 | :white_check_mark: | +| 4.5.21 | 4.6.6 | :white_check_mark: | +| 4.5.20 | 4.6.43 | :white_check_mark: | +| 4.5.20 | 4.6.41 | :white_check_mark: | +| 4.5.20 | 4.6.34 | :white_check_mark: | +| 4.5.20 | 4.6.32 | :white_check_mark: | +| 4.5.20 | 4.6.28 | :white_check_mark: | +| 4.5.20 | 4.6.24 | :white_check_mark: | +| 4.5.20 | 4.6.23 | :white_check_mark: | +| 4.5.20 | 4.6.18 | :white_check_mark: | +| 4.5.20 | 4.6.13 | :white_check_mark: | +| 4.5.20 | 4.6.12 | :white_check_mark: | +| 4.5.20 | 4.6.9 | :white_check_mark: | +| 4.5.20 | 4.6.8 | :white_check_mark: | +| 4.5.20 | 4.6.7 | :white_check_mark: | +| 4.5.20 | 4.6.6 | :white_check_mark: | +| 4.4.24 | 4.6.43 | :white_check_mark: | +| 4.4.24 | 4.6.41 | :white_check_mark: | +| 4.4.24 | 4.6.34 | :white_check_mark: | +| 4.4.24 | 4.6.32 | :white_check_mark: | +| 4.4.24 | 4.6.28 | :white_check_mark: | +| 4.4.24 | 4.6.24 | :white_check_mark: | +| 4.4.24 | 4.6.23 | :white_check_mark: | + +**4.5.x** + +| **Source Version** | 
**Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.5.21 | 4.5.23 | :white_check_mark: | +| 4.5.20 | 4.5.23 | :white_check_mark: | +| 4.5.20 | 4.5.21 | :white_check_mark: | +| 4.5.15 | 4.5.23 | :white_check_mark: | +| 4.5.15 | 4.5.21 | :white_check_mark: | +| 4.5.15 | 4.5.20 | :white_check_mark: | +| 4.5.11 | 4.5.23 | :white_check_mark: | +| 4.5.11 | 4.5.21 | :white_check_mark: | +| 4.5.11 | 4.5.20 | :white_check_mark: | +| 4.5.11 | 4.5.15 | :white_check_mark: | +| 4.5.8 | 4.5.23 | :white_check_mark: | +| 4.5.8 | 4.5.21 | :white_check_mark: | +| 4.5.8 | 4.5.20 | :white_check_mark: | +| 4.5.8 | 4.5.15 | :white_check_mark: | +| 4.5.4 | 4.5.23 | :white_check_mark: | +| 4.5.4 | 4.5.21 | :white_check_mark: | +| 4.5.4 | 4.5.20 | :white_check_mark: | +| 4.5.4 | 4.5.15 | :white_check_mark: | +| 4.4.20 | 4.5.23 | :white_check_mark: | +| 4.4.20 | 4.5.21 | :white_check_mark: | +| 4.4.20 | 4.5.20 | :white_check_mark: | +| 4.4.20 | 4.5.15 | :white_check_mark: | + +**4.4.x** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.4.18 | 4.4.20 | :white_check_mark: | +| 4.4.14 | 4.4.20 | :white_check_mark: | +| 4.4.11 | 4.4.20 | :white_check_mark: | +| 4.4.6 | 4.4.20 | :white_check_mark: | +| 4.3.6 | 4.4.20 | :white_check_mark: | +| 4.4.14 | 4.4.18 | :white_check_mark: | +| 4.4.11 | 4.4.18 | :white_check_mark: | +| 4.4.6 | 4.4.18 | :white_check_mark: | +| 4.3.6 | 4.4.18 | :white_check_mark: | +| 4.4.11 | 4.4.14 | :white_check_mark: | +| 4.4.6 | 4.4.14 | :white_check_mark: | +| 4.3.6 | 4.4.14 | :white_check_mark: | +| 4.4.6 | 4.4.11 | :white_check_mark: | +| 4.3.6 | 4.4.11 | :white_check_mark: | +| 4.3.6 | 4.4.6 | :white_check_mark: | + +**4.3.x and Prior** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.2.13 | 4.3.6 | :white_check_mark: | +| 4.2.7 | 4.2.13 | 
:white_check_mark: | +| 4.1.x | 4.3.6 | :x: | +| 4.1.12 | 4.2.7 | :white_check_mark: | +| 4.1.7 | 4.2.7 | :white_check_mark: | + + + + + +:::preview + +::: + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.7.15 | 4.7.27 | :white_check_mark: | +| 4.7.3 | 4.7.27 | :x: | +| 4.7.3 | 4.7.15 | :x: | + + + + diff --git a/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/vmware.md b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/vmware.md new file mode 100644 index 00000000000..65b33f405ee --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/supported-environments/vmware/vmware.md @@ -0,0 +1,13 @@ +--- +sidebar_label: "VMware vSphere" +title: "Self-Hosted Palette on VMware vSphere" +description: "Install self-hosted Palette on VMware vSphere." +icon: "" +hide_table_of_contents: false +tags: ["self-hosted", "vmware"] +keywords: ["self-hosted", "vmware"] +--- + +Palette can be installed on VMware vSphere with internet connectivity or in an airgap environment. When you install +Palette, a three-node cluster is created. You use the interactive Palette CLI to install Palette on VMware vSphere. +Refer to [Access Palette](../../palette.md#access-palette) for instructions on requesting repository access.
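The interactive CLI flow mentioned above can be sketched as follows. This is a hedged sketch, not a definitive procedure: the download URL pattern, the `VERSION` value, and the `ec` (Enterprise Cluster) subcommand are assumptions here — verify them against the Palette CLI documentation for your target release.

```shell
# VERSION is a placeholder — substitute the CLI release that matches your
# target Palette version (assumption: the URL mirrors the official downloads page).
VERSION="4.7.0"
wget "https://software.spectrocloud.com/palette-cli/v${VERSION}/linux/cli/palette"
chmod +x palette

# Launch the interactive installer. The wizard prompts for vSphere credentials,
# repository access, and cluster sizing, then creates the three-node cluster.
./palette ec install
```

The wizard records your responses locally, so they can be reviewed before the installation proceeds; check the CLI reference for the exact configuration file location and resume behavior.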
diff --git a/docs/docs-content/self-hosted-setup/palette/system-management/_category_.json b/docs/docs-content/self-hosted-setup/palette/system-management/_category_.json new file mode 100644 index 00000000000..e7e7c549660 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/system-management/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 40 +} diff --git a/docs/docs-content/self-hosted-setup/palette/system-management/account-management/_category_.json b/docs/docs-content/self-hosted-setup/palette/system-management/account-management/_category_.json new file mode 100644 index 00000000000..094470741db --- /dev/null +++ b/docs/docs-content/self-hosted-setup/palette/system-management/account-management/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 10 +} diff --git a/docs/docs-content/enterprise-version/system-management/account-management/account-management.md b/docs/docs-content/self-hosted-setup/palette/system-management/account-management/account-management.md similarity index 92% rename from docs/docs-content/enterprise-version/system-management/account-management/account-management.md rename to docs/docs-content/self-hosted-setup/palette/system-management/account-management/account-management.md index 33f84f9661b..81fd687585f 100644 --- a/docs/docs-content/enterprise-version/system-management/account-management/account-management.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/account-management/account-management.md @@ -1,12 +1,11 @@ --- sidebar_label: "Account Management" title: "Account Management" -description: "Update and manage the user settings and credentials of the admin user." +description: "Learn about the different types of system administrators in self-hosted Palette." 
icon: "" hide_table_of_contents: false -sidebar_position: 60 -tags: ["palette", "management", "account"] -keywords: ["self-hosted", "palette"] +tags: ["self-hosted", "management", "account"] +keywords: ["self-hosted", "management", "account"] --- Self-hosted Palette supports the ability to have multiple system administrators with different roles and permissions. @@ -79,11 +78,3 @@ To learn how to create and manage system administrator accounts, check out the As an admin user, you can update and manage your user settings, such as changing the email address and changing the credentials. You can also enable passkey to access the admin panel. The passkey feature supports both virtual passkey and physical passkey. - -## Resources - -- [Create and Manage System Accounts](./manage-system-accounts.md) - -- [Email Address](./email.md) - -- [User Credentials](./credentials.md) diff --git a/docs/docs-content/enterprise-version/system-management/account-management/credentials.md b/docs/docs-content/self-hosted-setup/palette/system-management/account-management/credentials.md similarity index 87% rename from docs/docs-content/enterprise-version/system-management/account-management/credentials.md rename to docs/docs-content/self-hosted-setup/palette/system-management/account-management/credentials.md index 9c37ad07052..2840508fe8b 100644 --- a/docs/docs-content/enterprise-version/system-management/account-management/credentials.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/account-management/credentials.md @@ -1,12 +1,14 @@ --- sidebar_label: "Manage User Credentials" title: "Manage User Credentials" -description: "Update and manage the user credentials" +description: + "Update and manage system admin user credentials for self-hosted Palette, including emails, passwords, passkeys, and + API access" icon: "" hide_table_of_contents: false sidebar_position: 20 -tags: ["palette", "management", "account", "credentials"] -keywords: ["self-hosted", "palette"] 
+tags: ["self-hosted", "management", "account", "credentials"] +keywords: ["self-hosted", "management", "account", "credentials"] --- You can manage the credentials of the admin user by logging in to the system console. You can also enable passkeys to @@ -39,10 +41,51 @@ minutes, the user can try to log in again. The default session timeout for syste The default timeout for tenant users is set to four hours. After four hours of inactivity, the user will be logged out of Palette. You can change the default session timeout value for tenant users by following the steps in the -[Session Timeout](../../../tenant-settings/session-timeout.md) guide. +[Session Timeout](../../../../tenant-settings/session-timeout.md) guide. Use the following sections to learn how to manage user credentials. +## Change System Admin Email Address + +You can manage the credentials of the admin user by logging in to the system console. Updating or changing the email +address of the admin user requires the current password. + +Use the following steps to change the email address of the admin user. + +### Prerequisites + +- Access to the Palette system console. + +- Current password of the admin user. + +- A Simple Mail Transfer Protocol (SMTP) server must be configured in the system console. Refer to the + [Configure SMTP](../smtp.md) page for guidance on how to configure an SMTP server. + +### Change Email Address + +1. Log in to the Palette system console. Refer to the + [Access the System Console](../system-management.md#access-the-system-console) guide. + +2. From the **left Main Menu**, select **My Account**. + +3. Type the new email address in the **Email** field. + +4. Provide the current password in the **Current Password** field. + +5. Click **Apply** to save the changes. + +### Validate + +1. Log out of the system console. You can log out by clicking the **Logout** button in the bottom right corner of the + **left Main Menu**. + +2. Log in to the system console.
Refer to the [Access the System Console](../system-management.md#access-the-system-console) + guide. + +3. Use the new email address and your current password to log in to the system console. + +A successful login indicates that the email address has been changed. + ## Change Password Use the following steps to change the password of the admin user. diff --git a/docs/docs-content/enterprise-version/system-management/account-management/manage-system-accounts.md b/docs/docs-content/self-hosted-setup/palette/system-management/account-management/manage-system-accounts.md similarity index 98% rename from docs/docs-content/enterprise-version/system-management/account-management/manage-system-accounts.md rename to docs/docs-content/self-hosted-setup/palette/system-management/account-management/manage-system-accounts.md index cf4e6e148b3..fd8523d7b7b 100644 --- a/docs/docs-content/enterprise-version/system-management/account-management/manage-system-accounts.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/account-management/manage-system-accounts.md @@ -1,12 +1,12 @@ --- sidebar_label: "Create and Manage System Accounts" title: "Create and Manage System Accounts" -description: "Learn how to create and manage system accounts in Palette." +description: "Learn how to create and manage system accounts in self-hosted Palette." icon: "" hide_table_of_contents: false sidebar_position: 10 -tags: ["palette", "management", "account"] -keywords: ["self-hosted", "palette"] +tags: ["self-hosted", "management", "account"] +keywords: ["self-hosted", "management", "account"] --- You can create and manage system accounts if you have the Root Administrator or Account Administrator role in Palette.
diff --git a/docs/docs-content/enterprise-version/system-management/account-management/password-blocklist.md b/docs/docs-content/self-hosted-setup/palette/system-management/account-management/password-blocklist.md similarity index 96% rename from docs/docs-content/enterprise-version/system-management/account-management/password-blocklist.md rename to docs/docs-content/self-hosted-setup/palette/system-management/account-management/password-blocklist.md index 392b4c47f5f..f27b401e2d2 100644 --- a/docs/docs-content/enterprise-version/system-management/account-management/password-blocklist.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/account-management/password-blocklist.md @@ -1,12 +1,12 @@ --- sidebar_label: "Manage Password Blocklist" title: "Manage Password Blocklist" -description: "Learn how to manage the password blocklist in Palette." +description: "Learn how to prevent users from using certain passwords in self-hosted Palette with a password blocklist." icon: "" hide_table_of_contents: false -sidebar_position: 50 -tags: ["palette", "management", "account", "credentials"] -keywords: ["self-hosted", "palette"] +sidebar_position: 30 +tags: ["self-hosted", "management", "account", "credentials"] +keywords: ["self-hosted", "management", "account", "credentials"] --- You can manage a password blocklist to prevent users from using common or weak passwords. 
The password blocklist is a diff --git a/docs/docs-content/enterprise-version/system-management/add-registry.md b/docs/docs-content/self-hosted-setup/palette/system-management/add-registry.md similarity index 90% rename from docs/docs-content/enterprise-version/system-management/add-registry.md rename to docs/docs-content/self-hosted-setup/palette/system-management/add-registry.md index eb30ea0d126..dcc7b1708ed 100644 --- a/docs/docs-content/enterprise-version/system-management/add-registry.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/add-registry.md @@ -1,18 +1,18 @@ --- -sidebar_label: "Add System-Level Registry" -title: "Add System-Level Registry" -description: "Learn how to add a system-level registry in Palette." +sidebar_label: "System-Level Registries" +title: "System-Level Registries" +description: "Learn how to add a system-level registry in self-hosted Palette." icon: "" hide_table_of_contents: false -sidebar_position: 20 -tags: ["enterprise", "management", "registry"] -keywords: ["self-hosted", "enterprise"] +sidebar_position: 130 +tags: ["self-hosted", "management", "registry"] +keywords: ["self-hosted", "management", "registry"] --- You can add a registry at the system level or tenant level. Registries added at the system level are available to all the tenants. Registries added at the tenant level are available only to that tenant. This section describes how to add a system-level registry. For guidance on adding a registry at the tenant scope, check out -[Add Tenant-Level Registry](../../tenant-settings/add-registry.md). +[Add Tenant-Level Registry](../../../tenant-settings/add-registry.md). ## Prerequisites @@ -97,7 +97,3 @@ check when you added the registry. Use these steps to further verify the registr 2. From the left **Main Menu** select **Administration**. 3. Select the **Pack Registries** tab and verify the registry you added is listed and available. 
- -## Resources - -- [Add Tenant-Level Registry](../../tenant-settings/add-registry.md) diff --git a/docs/docs-content/enterprise-version/system-management/backup-restore.md b/docs/docs-content/self-hosted-setup/palette/system-management/backup-restore.md similarity index 96% rename from docs/docs-content/enterprise-version/system-management/backup-restore.md rename to docs/docs-content/self-hosted-setup/palette/system-management/backup-restore.md index ff93c6d36d4..c132259868f 100644 --- a/docs/docs-content/enterprise-version/system-management/backup-restore.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/backup-restore.md @@ -1,12 +1,12 @@ --- sidebar_label: "Backup and Restore" title: "Backup and Restore" -description: "Learn how to enable backup and restore for self-hosted Palette." +description: "Learn how to enable backup and restore for your self-hosted Palette cluster." icon: "" hide_table_of_contents: false sidebar_position: 30 -tags: ["palette", "management", "self-hosted", "backup", "restore"] -keywords: ["self-hosted", "enterprise"] +tags: ["self-hosted", "management", "backup", "restore"] +keywords: ["self-hosted", "management", "backup", "restore"] --- You can enable backup and restore for your self-hosted Palette cluster to ensure that your Palette configuration data is diff --git a/docs/docs-content/enterprise-version/system-management/change-cloud-config.md b/docs/docs-content/self-hosted-setup/palette/system-management/change-cloud-config.md similarity index 96% rename from docs/docs-content/enterprise-version/system-management/change-cloud-config.md rename to docs/docs-content/self-hosted-setup/palette/system-management/change-cloud-config.md index 9e01db11507..7eac22cc929 100644 --- a/docs/docs-content/enterprise-version/system-management/change-cloud-config.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/change-cloud-config.md @@ -1,12 +1,12 @@ --- -sidebar_label: "Modify Cloud Provider 
Configuration" -title: "Modify Cloud Provider Configuration" -description: "Learn how to modify the system-level cloud provider configuration in Palette." +sidebar_label: "Cloud Provider Configuration" +title: "Cloud Provider Configuration" +description: "Learn how to modify the system-level cloud provider configuration in self-hosted Palette." icon: "" hide_table_of_contents: false -sidebar_position: 130 -tags: ["management", "clouds"] -keywords: ["self-hosted"] +sidebar_position: 50 +tags: ["self-hosted", "management", "clouds"] +keywords: ["self-hosted", "management", "clouds"] --- Different cloud providers use different image formats to create virtual machines. Amazon Web Services (AWS), for diff --git a/docs/docs-content/enterprise-version/system-management/customize-interface.md b/docs/docs-content/self-hosted-setup/palette/system-management/customize-interface.md similarity index 67% rename from docs/docs-content/enterprise-version/system-management/customize-interface.md rename to docs/docs-content/self-hosted-setup/palette/system-management/customize-interface.md index ea4521d9e70..5af7b6615f6 100644 --- a/docs/docs-content/enterprise-version/system-management/customize-interface.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/customize-interface.md @@ -1,12 +1,12 @@ --- -sidebar_label: "Customize Interface" -title: "Customize Interface" -description: "Learn how to customize the branding and interface of Palette " +sidebar_label: "Interface Customization" +title: "Interface Customization" +description: "Learn how to customize the branding and interface of self-hosted Palette " icon: "" hide_table_of_contents: false -sidebar_position: 55 +sidebar_position: 80 tags: ["self-hosted", "management", "account", "customize-interface"] -keywords: ["self-hosted", "palette", "customize-interface"] +keywords: ["self-hosted", "management", "account", "customize-interface"] --- @@ -17,7 +17,7 @@ keywords: ["self-hosted", "palette", 
"feature-flags"] ## Prerequisites - + ## Enable a Feature diff --git a/docs/docs-content/enterprise-version/system-management/login-banner.md b/docs/docs-content/self-hosted-setup/palette/system-management/login-banner.md similarity index 83% rename from docs/docs-content/enterprise-version/system-management/login-banner.md rename to docs/docs-content/self-hosted-setup/palette/system-management/login-banner.md index 3b16f169ce7..dcad0bd688c 100644 --- a/docs/docs-content/enterprise-version/system-management/login-banner.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/login-banner.md @@ -2,12 +2,13 @@ sidebar_label: "Banners" title: "Banners" description: - "Learn how to add login and classification banners, also known as Authority to Operate (ATO) banners, in Palette." + "Learn how to add login and classification banners, also known as Authority to Operate (ATO) banners, in self-hosted + Palette." icon: "" hide_table_of_contents: false -sidebar_position: 100 -tags: ["enterprise", "management", "ato", "banner"] -keywords: ["self-hosted", "enterprise", "ato", "banner"] +sidebar_position: 40 +tags: ["self-hosted", "management", "ato", "banner"] +keywords: ["self-hosted", "management", "ato", "banner"] --- @@ -25,7 +26,7 @@ Take the following steps to add a login banner to your system console and tenant :::warning Login banners configured in the system console override tenant-specific login banners. Refer to the -[Tenant Login Banner](../../tenant-settings/login-banner.md) guide to learn more about tenant-specific login banners. +[Tenant Login Banner](../../../tenant-settings/login-banner.md) guide to learn more about tenant-specific login banners. 
::: diff --git a/docs/docs-content/enterprise-version/system-management/registry-override.md b/docs/docs-content/self-hosted-setup/palette/system-management/registry-override.md similarity index 97% rename from docs/docs-content/enterprise-version/system-management/registry-override.md rename to docs/docs-content/self-hosted-setup/palette/system-management/registry-override.md index 2ce53868c61..8186ff20c19 100644 --- a/docs/docs-content/enterprise-version/system-management/registry-override.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/registry-override.md @@ -1,11 +1,11 @@ --- -sidebar_label: "Override Registry Configuration" -title: "Override Registry Configuration" -description: "Learn how to override the image registry configuration for Palette." +sidebar_label: "Image Registry Override" +title: "Image Registry Override" +description: "Learn how to override the default image registry for self-hosted Palette." hide_table_of_contents: false -sidebar_position: 120 -tags: ["palette", "self-hosted"] -keywords: ["enterprise kubernetes", "multi cloud kubernetes"] +sidebar_position: 70 +tags: ["self-hosted", "registry"] +keywords: ["self-hosted", "registry"] --- You can override the image registry configuration for Palette to reference a different image registry. This feature is @@ -15,7 +15,7 @@ useful when you want to use a custom image registry to store and manage the imag Before overriding the image registry configuration for Palette, ensure you have the following: -- A deployed and healthy [Palette cluster](../install-palette/install-palette.md). +- A deployed and healthy self-hosted [Palette cluster](../palette.md). - Access to the kubeconfig file for the Palette cluster. You need the kubeconfig file to access the Palette cluster and apply the image registry configuration. 
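The kubeconfig prerequisite above comes down to pointing kubectl at the downloaded file. A minimal sketch, assuming a placeholder file name, cluster name, and endpoint (none of these are taken from the guide):

```shell
# Sketch: point kubectl at the kubeconfig downloaded from the system console.
# The file content, names, and endpoint below are placeholders.
cat > palette-admin.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
  - name: palette
    cluster:
      server: https://palette.example.internal:6443
contexts:
  - name: palette-admin
    context:
      cluster: palette
      user: admin
current-context: palette-admin
users:
  - name: admin
    user: {}
EOF

export KUBECONFIG="$PWD/palette-admin.kubeconfig"
echo "KUBECONFIG set to $KUBECONFIG"
# With real cluster access you would now verify connectivity, for example:
# kubectl get nodes
```

With a genuine kubeconfig in place of the placeholder, the commented `kubectl get nodes` confirms that the credentials and endpoint work before proceeding.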
@@ -25,7 +25,7 @@ Before overriding the image registry configuration for Palette, ensure you have If you deployed Palette through the Palette CLI, then you can download the kubeconfig file from the Palette cluster details page in the system console. Navigate to the **Enterprise Cluster Migration** page. Click on the **Admin Kubeconfig** link to download the kubeconfig file. If you need help with configuring kubectl to access the Palette - cluster, refer to the [Access Cluster with CLI](../../clusters/cluster-management/palette-webctl.md) guide. If you + cluster, refer to the [Access Cluster with CLI](../../../clusters/cluster-management/palette-webctl.md) guide. If you deployed Palette onto an existing Kubernetes cluster, reach out to your cluster administrator for the kubeconfig file. ::: @@ -52,7 +52,8 @@ Select the appropriate tab below based on the environment in which your VertX cl 1. Open a terminal session. 2. Configure kubectl to use the kubeconfig file for the Palette cluster. Refer to the - [Access Cluster with CLI](../../clusters/cluster-management/palette-webctl.md) for guidance on configuring kubectl. + [Access Cluster with CLI](../../../clusters/cluster-management/palette-webctl.md) for guidance on configuring + kubectl. 3. Navigate to the folder where you have the image-swap Helm chart available. You may have to extract the Helm chart if it is in a compressed format to access the **values.yaml** file. @@ -228,7 +229,8 @@ Use the following steps to override the image registry configuration. 1. Open a terminal session. 2. Configure kubectl to use the kubeconfig file for the Palette cluster. Refer to the - [Access Cluster with CLI](../../clusters/cluster-management/palette-webctl.md) for guidance on configuring kubectl. + [Access Cluster with CLI](../../../clusters/cluster-management/palette-webctl.md) for guidance on configuring + kubectl. 3. Create an empty YAML file with the name **registry-secret.yaml**. Use the following command to create the file. 
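The actual command for step 3 falls outside this diff's context window. A hypothetical sketch of creating **registry-secret.yaml** — the Secret skeleton, name, and namespace are illustrative assumptions, not the guide's exact manifest:

```shell
# Hypothetical sketch of step 3: create the registry-secret.yaml file.
# The Secret fields below are placeholders, not the guide's manifest.
cat > registry-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: registry-secret      # placeholder name
  namespace: default         # placeholder namespace
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ""      # base64-encoded registry credentials go here
EOF

# With cluster access, the secret would then be applied:
# kubectl apply --filename registry-secret.yaml
grep --quiet 'kind: Secret' registry-secret.yaml && echo "registry-secret.yaml written"
```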
@@ -317,7 +319,8 @@ Use the following steps to override the image registry configuration. 1. Open a terminal session with network access to the VerteX cluster. 2. Configure kubectl to use the kubeconfig file for the Palette cluster. Refer to the - [Access Cluster with CLI](../../clusters/cluster-management/palette-webctl.md) for guidance on configuring kubectl. + [Access Cluster with CLI](../../../clusters/cluster-management/palette-webctl.md) for guidance on configuring + kubectl. 3. Issue the following command to verify that the secret containing the image registry configuration is created. diff --git a/docs/docs-content/enterprise-version/system-management/reverse-proxy.md b/docs/docs-content/self-hosted-setup/palette/system-management/reverse-proxy.md index 69aeb6f5eca..ac34ca9f1b9 100644 --- a/docs/docs-content/enterprise-version/system-management/reverse-proxy.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/reverse-proxy.md @@ -1,12 +1,12 @@ --- -sidebar_label: "Configure Reverse Proxy" -title: "Configure Reverse Proxy" -description: "Learn how to configure a reverse proxy for Palette." +sidebar_label: "Reverse Proxy Configuration" +title: "Reverse Proxy Configuration" +description: "Learn how to configure a reverse proxy for self-hosted Palette." icon: "" hide_table_of_contents: false -sidebar_position: 50 -tags: ["palette", "management"] -keywords: ["self-hosted", "enterprise"] +sidebar_position: 90 +tags: ["self-hosted", "management"] +keywords: ["self-hosted", "management"] --- You can configure a reverse proxy for Palette. The reverse proxy can be used by host clusters deployed in a private @@ -51,8 +51,8 @@ Use the following steps to configure a reverse proxy server for Palette. 2. Use a text editor and open the **values.yaml** file.
Locate the `frps` section and update the following values in the **values.yaml** file. Refer to the - [Spectro Proxy Helm Configuration](../install-palette/install-on-kubernetes/palette-helm-ref.md#spectro-proxy) to - learn more about the configuration options. + [Spectro Proxy Helm Configuration](../supported-environments/kubernetes/setup/non-airgap/helm-reference.md) to learn + more about the configuration options.
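The `frps` edit-and-verify loop in step 2 can be sketched as follows. The fragment written here is a stand-in with a placeholder key — the authoritative option names live in the Spectro Proxy Helm configuration reference linked above:

```shell
# Sketch of the values.yaml edit-and-verify loop for the frps section.
# This fragment is a stand-in; the real keys are documented in the
# Spectro Proxy Helm configuration reference.
cat > values.yaml <<'EOF'
frps:
  enabled: true    # placeholder setting for illustration
EOF

# Confirm the frps block is present before re-applying the chart.
grep --after-context=1 '^frps:' values.yaml

# Release and chart names below are illustrative:
# helm upgrade vertex ./spectro-mgmt-plane --values values.yaml
```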
diff --git a/docs/docs-content/enterprise-version/system-management/scar-migration.md b/docs/docs-content/self-hosted-setup/palette/system-management/scar-migration.md similarity index 53% rename from docs/docs-content/enterprise-version/system-management/scar-migration.md rename to docs/docs-content/self-hosted-setup/palette/system-management/scar-migration.md index 00ddfe06157..c03f075dfef 100644 --- a/docs/docs-content/enterprise-version/system-management/scar-migration.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/scar-migration.md @@ -1,21 +1,21 @@ --- -sidebar_label: "Migrate SCAR to OCI Registry" -title: "Migrate Customer-Managed SCAR to OCI Registry" +sidebar_label: "SCAR to OCI Registry Migration" +title: "SCAR to OCI Registry Migration" description: - "Learn how to migrate the Spectro Cloud Artifact Regisry (SCAR) content to the OCI registry used to host packs and - images." + "Migrate Spectro Cloud Artifact Registry (SCAR) content to the OCI registry used to host packs and images for + self-hosted Palette." 
icon: "" hide_table_of_contents: false -sidebar_position: 125 -tags: ["enterprise", "management", "scar"] -keywords: ["self-hosted", "enterprise"] +sidebar_position: 100 +tags: ["self-hosted", "management", "scar"] +keywords: ["self-hosted", "management", "scar"] --- ## Prerequisites - + ## Migrate SCAR diff --git a/docs/docs-content/enterprise-version/system-management/smtp.md b/docs/docs-content/self-hosted-setup/palette/system-management/smtp.md similarity index 60% rename from docs/docs-content/enterprise-version/system-management/smtp.md rename to docs/docs-content/self-hosted-setup/palette/system-management/smtp.md index a89ae56c937..99727fe1067 100644 --- a/docs/docs-content/enterprise-version/system-management/smtp.md +++ b/docs/docs-content/self-hosted-setup/palette/system-management/smtp.md @@ -1,12 +1,12 @@ --- -sidebar_label: "Configure SMTP" -title: "Configure SMTP" -description: "Learn how to configure an SMTP server for your Palette instance." +sidebar_label: "SMTP Configuration" +title: "SMTP Configuration" +description: "Learn how to configure an SMTP server for your self-hosted Palette instance." icon: "" hide_table_of_contents: false -sidebar_position: 40 -tags: ["vertex", "management"] -keywords: ["self-hosted", "vertex"] +sidebar_position: 110 +tags: ["self-hosted", "management"] +keywords: ["self-hosted", "management"] --- - -## FIPS-Compliant Kubernetes +### FIPS-Compliant Kubernetes + Our customized version of Kubernetes is FIPS-compliant. Both and are compiled with FIPS-compliant compiler and libraries. :::info @@ -64,16 +67,15 @@ Refer to the Activation**. Trial mode and expired statuses are also +displayed in the Palette VerteX UI at the bottom of the left main menu. + +## Overview + +Below is an overview of the activation process. + +![Diagram of the self-hosted system activation process](/enterprise-version_activate-installation_system-activation-diagram.webp) + +1. 
The system admin installs Palette VerteX or upgrades to version 4.6.32 or later. +2. VerteX enters trial mode. During this time, you have 30 days to take advantage of all of VerteX's features. After 30 + days, the trial expires, and VerteX functionality is restricted. Any clusters that you have deployed will remain + functional, but you cannot perform + [day-2 operations](../../../../../clusters/cluster-management/cluster-management.md), and you cannot deploy + additional clusters. + +3. Before or after your trial expires, contact a Spectro Cloud customer support representative. You must specify whether + you are activating Palette or VerteX and also provide a short description of your instance, along with your + installation's product ID. + +4. Spectro Cloud provides the activation key. + +5. The system admin enters the activation key and activates VerteX, allowing you to resume day-2 operations and deploy + additional clusters. + +## Prerequisites + +- A Palette VerteX subscription. + +- A self-hosted instance of Palette VerteX that is not activated. For help installing Palette VerteX, check out our + [Installation](../install/install.md) guide. + +- Access to the [system console](../../../system-management/system-management.md#access-the-system-console). + +## Enablement + +1. Log in to the system console. For more information, refer to the + [Access the System Console](../../../system-management/system-management.md#access-the-system-console) guide. + +2. A banner is displayed on the **Summary** screen, alerting you that your product is either in trial mode or has + expired. On the banner, select **Activate VerteX**. Alternatively, from the left main menu, select **Administration > + Activation**. + +3. The **Activation** tab of the **Administration** screen reiterates your product's status and displays your **Product + Setup ID**. Contact your customer support representative and provide them the following information: + + - Your installation type (VerteX). 
+ + - A short description of your instance. For example, `Spacetastic - Dev Team 1`. + + - Your instance's **Product Setup ID**. + +4. Your customer support representative will provide you an **Activation key**. The activation key is single-use and + cannot be used to activate another Palette or VerteX installation. +5. On the **Activation** tab, enter the **Activation key** and **Update** your settings. If the product ID and + activation key pair is correct, an activation successful message is displayed, and your banner is updated to state + that your license is active. + +## Validation + +You can view the status of your license from the system console. If your license is active, the license status is +removed from the left main menu of the Palette VerteX UI. + +1. Log in to the [system console](../../../system-management/system-management.md#access-the-system-console). + +2. The activation banner is no longer displayed on the **Summary** screen, indicating your license is active. Confirm + your license status by navigating to **Administration > Activation**. The banner states that **Your license is + active**. 
diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/install/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/install/_category_.json new file mode 100644 index 00000000000..094470741db --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/install/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 10 +} diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/install.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/install/airgap.md similarity index 97% rename from docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/install.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/install/airgap.md index dd2ab3e6598..de4348689e9 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/install.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/install/airgap.md @@ -1,24 +1,26 @@ --- -sidebar_label: "Install VerteX" -title: "Install VerteX" -description: "Learn how to deploy airgap VerteX to a Kubernetes cluster using a Helm Chart." +sidebar_label: "Install Airgap Palette VerteX" +title: "Install Airgap Palette VerteX on Kubernetes" +description: + "Learn how to deploy self-hosted Palette VerteX to a Kubernetes cluster using a Helm chart in an airgapped + environment." icon: "" hide_table_of_contents: false -sidebar_position: 30 -tags: ["vertex", "enterprise"] -keywords: ["self-hosted", "vertex"] +sidebar_position: 10 +tags: ["self-hosted", "vertex", "airgap", "kubernetes", "helm"] +keywords: ["self-hosted", "vertex", "airgap", "kubernetes", "helm"] --- You can use the Palette VerteX Helm Chart to install VerteX in a multi-node Kubernetes cluster in your airgap production environment. 
This installation method is common in secure environments with restricted network access that prohibits using VerteX -SaaS. Review our [architecture diagrams](../../../../architecture/networking-ports.md) to ensure your Kubernetes cluster -has the necessary network connectivity for VerteX to operate successfully. +SaaS. Review our [architecture diagrams](../../../../../architecture/networking-ports.md) to ensure your Kubernetes +cluster has the necessary network connectivity for VerteX to operate successfully. :::warning -Complete the [Environment Setup](./kubernetes-airgap-instructions.md) steps before proceeding with the installation. +Complete the [Environment Setup](../setup/airgap/airgap.md) steps before proceeding with the installation. ::: @@ -35,8 +37,8 @@ Complete the [Environment Setup](./kubernetes-airgap-instructions.md) steps befo - Ensure `unzip` or a similar extraction utility is installed on your system. - The Kubernetes cluster must be set up on a version of Kubernetes that is compatible with your upgraded version. Refer to - the [Kubernetes Requirements](../../install-palette-vertex.md#kubernetes-requirements) section to find the version - required for your Palette installation. + the [Kubernetes Requirements](./install.md#kubernetes-requirements) section to find the version required for your + Palette installation. - Ensure the Kubernetes cluster does not have Cert Manager installed. VerteX requires a unique Cert Manager configuration to be installed as part of the installation process. If Cert Manager is already installed, you must @@ -50,9 +52,8 @@ Complete the [Environment Setup](./kubernetes-airgap-instructions.md) steps befo [Add a Database User](https://www.mongodb.com/docs/guides/atlas/db-user/) guide for guidance on how to create a database user in Atlas. -- We recommended the following resources for VerteX.
Refer to the - [VerteX size guidelines](../../../install-palette-vertex/install-palette-vertex.md#size-guidelines) for additional - sizing information. +- We recommend the following resources for VerteX. Refer to the [VerteX size guidelines](./install.md#size-guidelines) + for additional sizing information. - 8 CPUs per node. @@ -219,7 +220,7 @@ environment. Reach out to our support team if you need assistance. 8. Open the **values.yaml** file in the **spectro-mgmt-plane** folder with a text editor of your choice. The **values.yaml** file contains the default values for the Palette installation parameters. However, you must populate the following parameters before installing Palette. You can learn more about the parameters on the **values.yaml** - file on the [Helm Configuration Reference](../vertex-helm-ref.md) page. + file on the [Helm Configuration Reference](../setup/airgap/helm-reference.md) page. Ensure you provide the proper `ociImageRegistry.mirrorRegistries` values if you are using a self-hosted OCI registry. You can find the placeholder string in the `ociImageRegistry` section of the **values.yaml** file. @@ -240,7 +241,7 @@ environment. Reach out to our support team if you need assistance. If you are installing VerteX by pulling required images from a private mirror registry, you will need to provide the credentials to your registry in the **values.yaml** file. For more information, refer to - [Helm Configuration Reference](../vertex-helm-ref.md#image-pull-secret). + [Helm Configuration Reference](../setup/airgap/helm-reference.md#image-pull-secret). ::: @@ -886,4 +887,10 @@ Use the following steps to validate the VerteX installation.
## Next Steps - + diff --git a/docs/docs-content/vertex/install-palette-vertex/install-palette-vertex.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/install/install.md similarity index 70% rename from docs/docs-content/vertex/install-palette-vertex/install-palette-vertex.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/install/install.md index 5ad69b4ce38..94f85ca8b49 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-palette-vertex.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/install/install.md @@ -1,46 +1,58 @@ --- -sidebar_label: "Installation" -title: "Installation" -description: "Review Palette VerteX system requirements." +sidebar_label: "Install" +title: "Install Palette VerteX on Kubernetes" +description: "Review system requirements for installing self-hosted Palette VerteX on an existing Kubernetes cluster." icon: "" hide_table_of_contents: false -tags: ["vertex"] -keywords: ["self-hosted", "vertex"] +tags: ["self-hosted", "vertex", "install", "kubernetes", "helm"] +keywords: ["self-hosted", "vertex", "install", "kubernetes", "helm"] --- +:::warning + +This is the former [Installation](https://docs.spectrocloud.com/vertex/install-palette-vertex/) page. Leave only what is +applicable to Kubernetes. Convert to partials for reuse. + +::: + Palette VerteX is available as a self-hosted application that you install in your environment. Palette VerteX is available in the following modes. -| **Method** | **Supported Platforms** | **Description** | **Install Guide** | -| --------------------------------------- | ------------------------ | ---------------------------------------------------------------------------- | -------------------------------------------------------------------------- | -| Palette CLI | VMware | Install Palette VerteX in VMware environment. 
| [Install on VMware](./install-on-vmware/install.md) | -| Helm Chart | Kubernetes | Install Palette VerteX using a Helm Chart in an existing Kubernetes cluster. | [Install on Kubernetes](./install-on-kubernetes/install.md) | -| VerteX Management Appliance | VMware, Bare Metal, MAAS | Install Palette VerteX using the VerteX Management Appliance ISO file. | [Install with VerteX Management Appliance](vertex-management-appliance.md) | +| **Method** | **Supported Platforms** | **Description** | **Install Guide** | +| --------------------------------------- | ------------------------ | ---------------------------------------------------------------------------- | --------------------------------------------------------------------------------- | +| Palette CLI | VMware | Install Palette VerteX in VMware environment. | [Install on VMware](../../vmware/install/install.md) | +| Helm Chart | Kubernetes | Install Palette VerteX using a Helm Chart in an existing Kubernetes cluster. | Install on Kubernetes | +| VerteX Management Appliance | VMware, Bare Metal, MAAS | Install Palette VerteX using the VerteX Management Appliance ISO file. | [Install with VerteX Management Appliance](../../management-appliance/install.md) | ## Airgap Installation You can also install Palette VerteX in an airgap environment. For more information, refer to the [Airgap Installation](./airgap.md) section. 
-| **Method** | **Supported Airgap Platforms** | **Description** | **Install Guide** | -| --------------------------------------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | +| **Method** | **Supported Airgap Platforms** | **Description** | **Install Guide** | +| --------------------------------------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------- | | Palette CLI | VMware | Install Palette VerteX in VMware environment using your own OCI registry server. | -| Helm Chart | Kubernetes | Install Palette VerteX using a Helm Chart in an existing Kubernetes cluster with your own OCI registry server OR use AWS ECR. | [Airgap Install](./install-on-kubernetes/airgap-install/airgap-install.md) | -| VerteX Management Appliance | VMware, Bare Metal, MAAS | Install Palette VerteX using the VerteX Management Appliance ISO file. | [Install with VerteX Management Appliance](vertex-management-appliance.md) | +| Helm Chart | Kubernetes | Install Palette VerteX using a Helm Chart in an existing Kubernetes cluster with your own OCI registry server OR use AWS ECR. | [Airgap Install](./airgap.md) | +| VerteX Management Appliance | VMware, Bare Metal, MAAS | Install Palette VerteX using the VerteX Management Appliance ISO file. | [Install with VerteX Management Appliance](../../management-appliance/install.md) | The next sections describe specific requirements for installing Palette VerteX. 
## Size Guidelines - + ## Kubernetes Requirements The following table presents the Kubernetes version corresponding to each Palette version for -[VMware](../../vertex/install-palette-vertex/install-on-vmware/install-on-vmware.md) and -[Kubernetes](../../vertex/install-palette-vertex/install-on-kubernetes/install-on-kubernetes.md) installations. +[VMware](../../vmware/vmware.md) and +[Kubernetes](../kubernetes.md) installations. Additionally, for VMware installations, it provides the download URLs for the required Operating System and Kubernetes distribution OVA. @@ -63,11 +75,3 @@ distribution OVA. ## Proxy Requirements - -## Resources - -- [Install on VMware vSphere](install-on-vmware/install-on-vmware.md) - -- [Install Using Helm Chart](install-on-kubernetes/install-on-kubernetes.md) - -- [Airgap Installation](./airgap.md) diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/install.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/install/non-airgap.md similarity index 95% rename from docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/install.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/install/non-airgap.md index f4abd813a49..ea9bb97fc9f 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/install.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/install/non-airgap.md @@ -1,20 +1,22 @@ --- -sidebar_label: "Non-Airgap Installation" -title: "Install Non-Airgap Self-Hosted Palette VerteX" -description: "Learn how to deploy self-hosted VerteX to a Kubernetes cluster using a Helm Chart." +sidebar_label: "Install Non-Airgap Palette VerteX" +title: "Install Non-Airgap Palette VerteX on Kubernetes" +description: + "Learn how to deploy self-hosted Palette VerteX to a Kubernetes cluster using a Helm chart in a non-airgap + environment." 
icon: "" hide_table_of_contents: false -sidebar_position: 10 -tags: ["vertex", "enterprise"] -keywords: ["self-hosted", "vertex"] +sidebar_position: 20 +tags: ["self-hosted", "vertex", "kubernetes", "helm"] +keywords: ["self-hosted", "vertex", "kubernetes", "helm"] --- You can use the Palette VerteX Helm Chart to install VerteX in a multi-node Kubernetes cluster in your production environment. This installation method is common in secure environments with restricted network access that prohibits using VerteX -SaaS. Review our [architecture diagrams](../../../architecture/networking-ports.md) to ensure your Kubernetes cluster -has the necessary network connectivity for VerteX to operate successfully. +SaaS. Review our [architecture diagrams](../../../../../architecture/networking-ports.md) to ensure your Kubernetes +cluster has the necessary network connectivity for VerteX to operate successfully. ## Prerequisites @@ -29,8 +31,8 @@ has the necessary network connectivity for VerteX to operate successfully. - Ensure `unzip` or a similar extraction utility is installed on your system. - The Kubernetes cluster must be set up on a version of Kubernetes that is compatible with your upgraded version. Refer to - the [Kubernetes Requirements](../install-palette-vertex.md#kubernetes-requirements) section to find the version - required for your Palette installation. + the [Kubernetes Requirements](./install.md#kubernetes-requirements) section to find the version required for your + Palette installation. - Ensure the Kubernetes cluster does not have Cert Manager installed. VerteX requires a unique Cert Manager configuration to be installed as part of the installation process. If Cert Manager is already installed, you must @@ -44,8 +46,8 @@ has the necessary network connectivity for VerteX to operate successfully. [Add a Database User](https://www.mongodb.com/docs/guides/atlas/db-user/) guide for guidance on how to create a database user in Atlas.
-- We recommend the following resources for VerteX. Refer to the - [VerteX size guidelines](../install-palette-vertex.md#size-guidelines) for additional sizing information. +- We recommend the following resources for VerteX. Refer to the [VerteX size guidelines](./install.md#size-guidelines) + for additional sizing information. - 8 CPUs per node. @@ -86,13 +88,13 @@ has the necessary network connectivity for VerteX to operate successfully. encryption for VerteX. - Ensure VerteX has access to the required domains and ports. Refer to the - [Required Domains](../install-palette-vertex.md#proxy-requirements) section for more information. + [Required Domains](../install/install.md#proxy-requirements) section for more information. - If you are installing VerteX behind a network proxy server, ensure you have the Certificate Authority (CA) certificate file in the base64 format. You will need this to enable VerteX to communicate with the network proxy server. -- Access to the VerteX Helm Charts. Refer to the [Access VerteX](../../vertex.md#access-palette-vertex) for instructions - on how to request access to the Helm Chart. +- Access to the VerteX Helm Charts. Refer to the [Access VerteX](../../../vertex.md#access-palette-vertex) for + instructions on how to request access to the Helm Chart.
@@ -143,7 +145,7 @@ your environment. Reach out to our support team if you need assistance. 4. Open the **values.yaml** in the **spectro-mgmt-plane** folder with a text editor of your choice. The **values.yaml** contains the default values for the VerteX installation parameters. However, you must populate the following parameters before installing VerteX. You can learn more about the parameters in the **values.yaml** file in the - [Helm Configuration Reference](vertex-helm-ref.md) page. + [Helm Configuration Reference](../setup/non-airgap/helm-reference.md) page. | **Parameter** | **Description** | **Type** | | ----------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | @@ -156,7 +158,7 @@ your environment. Reach out to our support team if you need assistance. If you are installing VerteX by pulling required images from a private mirror registry, you will need to provide the credentials to your registry in the **values.yaml** file. For more information, refer to - [Helm Configuration Reference](vertex-helm-ref.md#image-pull-secret). + [Helm Configuration Reference](../setup/non-airgap/helm-reference.md#image-pull-secret). ::: @@ -704,7 +706,7 @@ your environment. Reach out to our support team if you need assistance. ![Screenshot of the VerteX system console showing Username and Password fields.](/vertex_install-on-kubernetes_install_system-console.webp) 10. Log in to the system console using the following default credentials. Refer to the - [password requirements](../../system-management/account-management/credentials.md#password-requirements-and-security) + [password requirements](../../../system-management/account-management/credentials.md#password-requirements-and-security) documentation page to learn more about password requirements. | **Parameter** | **Value** | @@ -715,19 +717,19 @@ your environment. 
Reach out to our support team if you need assistance. After login, you will be prompted to create a new password. Enter a new password and save your changes. You will be redirected to the VerteX system console. Use the username `admin` and your new password to log in to the system console. You can create additional system administrator accounts and assign roles to users in the system console. - Refer to the [Account Management](../../system-management/account-management/account-management.md) documentation + Refer to the [Account Management](../../../system-management/account-management/account-management.md) documentation page for more information. 11. After login, a summary page is displayed. VerteX is installed with a self-signed SSL certificate. To assign a different SSL certificate you must upload the SSL certificate, SSL certificate key, and SSL certificate authority files to VerteX. You can upload the files using the VerteX system console. Refer to the - [Configure HTTPS Encryption](../../system-management/ssl-certificate-management.md) page for instructions on how to - upload the SSL certificate files to VerteX. + [Configure HTTPS Encryption](../../../system-management/ssl-certificate-management.md) page for instructions on how + to upload the SSL certificate files to VerteX. :::warning If you plan to deploy host clusters into different networks, you may require a reverse proxy. Check out the - [Configure Reverse Proxy](../../system-management/reverse-proxy.md) guide for instructions on how to configure a + [Configure Reverse Proxy](../../../system-management/reverse-proxy.md) guide for instructions on how to configure a reverse proxy for VerteX. ::: @@ -796,8 +798,14 @@ Use the following steps to validate the VerteX installation. 
## Next Steps - + ## Resources -- [Enterprise Install Troubleshooting](../../../troubleshooting/enterprise-install.md) +- [Enterprise Install Troubleshooting](../../../../../troubleshooting/enterprise-install.md) diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/install-on-kubernetes.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/kubernetes.md similarity index 51% rename from docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/install-on-kubernetes.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/kubernetes.md index 313600dba4b..616da7cc3b1 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/install-on-kubernetes.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/kubernetes.md @@ -1,11 +1,11 @@ --- sidebar_label: "Kubernetes" -title: "Kubernetes" -description: "Learn how to install Palette VerteX on Kubernetes." +title: "Self-Hosted Palette VerteX on Kubernetes" +description: "Install self-hosted Palette VerteX on an existing Kubernetes cluster." icon: "" hide_table_of_contents: false -tags: ["vertex", "kubernetes"] -keywords: ["self-hosted", "vertex"] +tags: ["self-hosted", "vertex", "kubernetes"] +keywords: ["self-hosted", "vertex", "kubernetes"] --- Palette VerteX can be installed on Kubernetes with internet connectivity or an airgap environment. When you install @@ -18,15 +18,7 @@ Helm Chart. Select the scenario and the corresponding guide to install VerteX on Kubernetes. If you are installing VerteX in an airgap environment, refer to the environment preparation guide before installing VerteX. 
-| Scenario | Environment Preparation Guide | Install Guide | -| ------------------------------------------------------- | ----------------------------------------------------------------------- | ---------------------------------------------------------- | -| Install VerteX on Kubernetes with internet connectivity | None | [Install Instructions](install.md) | -| Install VerteX on Kubernetes in an airgap environment | [Environment Setup](./airgap-install/kubernetes-airgap-instructions.md) | [Airgap Install Instructions](./airgap-install/install.md) | - -## Resources - -- [Non-Airgap Install Instructions](install.md) - -- [Airgap Install Instructions](./airgap-install/install.md) - -- [Helm Configuration Reference](./vertex-helm-ref.md) +| Scenario | Environment Preparation Guide | Install Guide | +| ------------------------------------------------------- | --------------------------------------------- | -------------------------------------------------- | +| Install VerteX on Kubernetes with internet connectivity | None | [Install Instructions](./install/non-airgap.md) | +| Install VerteX on Kubernetes in an airgap environment | [Environment Setup](./setup/airgap/airgap.md) | [Airgap Install Instructions](./install/airgap.md) | diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/_category_.json new file mode 100644 index 00000000000..988cdc1b69c --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/_category_.json @@ -0,0 +1,4 @@ +{ + "label": "Set Up", + "position": 0 +} diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/airgap/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/airgap/_category_.json new file mode 100644 index 00000000000..094470741db --- /dev/null +++ 
b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/airgap/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 10 +} diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/kubernetes-airgap-instructions.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/airgap/airgap.md similarity index 80% rename from docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/kubernetes-airgap-instructions.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/airgap/airgap.md index 849a034f62f..c41cae6ee57 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/kubernetes-airgap-instructions.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/airgap/airgap.md @@ -1,30 +1,58 @@ --- -sidebar_label: "Environment Setup" -title: "Environment Setup" -description: "Learn how to prepare VerteX for an airgap install" +sidebar_label: "Set Up Airgap Environment" +title: "Set Up Airgap Environment" +description: + "Set up your airgap environment in preparation to install self-hosted Palette VerteX on an existing Kubernetes + cluster." icon: "" hide_table_of_contents: false -sidebar_position: 20 -tags: ["vertex", "enterprise", "airgap", "kubernetes"] -keywords: ["self-hosted", "vertex"] +tags: ["self-hosted", "vertex", "airgap", "kubernetes"] +keywords: ["self-hosted", "vertex", "airgap", "kubernetes"] --- -![Overview diagram of the pre-install steps eager-load](/enterprise-version_air-gap-repo_k8s-overview-order-diagram-clean.webp) +You can install VerteX in an airgap Kubernetes environment. An airgap environment lacks direct access to the internet +and is intended for environments with strict security requirements. 
-This guide provides instructions to prepare your airgap environment for a Palette VerteX installation by completing the -required preparatory steps one through four shown in the diagram. The respective installation guides for each platform -cover the remaining installation process. +The installation process for an airgap environment is different due to the lack of internet access. Before the primary +Palette installation steps, you must download the following artifacts: -## Prepare Airgap Installation +- Palette platform manifests and required platform packages. -Use the following steps to prepare your airgap environment for a VerteX installation. +- Container images for core platform components and third-party dependencies. -:::tip +- Palette packs. -Carefully review the [prerequisites](#prerequisites) section before proceeding. This will save you time and frustration. -Each prerequisite is required for a successful installation. +The other significant change is that VerteX's default public OCI registry is not used. Instead, a private OCI registry +is utilized to store images and packs. -::: +## Overview + +Before you can install Palette VerteX in an airgap environment, you must first set up your environment as outlined in +the following diagram. + +![An architecture diagram outlining the five different installation phases](/enterprise-version_air-gap-repo_k8s-points-overview-order-diagram.webp) + +1. In an environment with internet access, download the airgap setup binary from the URL provided by our support team. + The airgap setup binary is a self-extracting archive that contains the Palette platform manifests, images, and + required packs. The airgap setup binary is a single-use binary for uploading Palette images and packs to your OCI + registry. You will not use the airgap setup binary again after the initial installation. + +2. Move the airgap setup binary to the airgap environment. 
The airgap setup binary is used to extract the manifest + content and upload the required images and packs to your private OCI registry. Start the airgap setup binary in a + Linux Virtual Machine (VM). + +3. The airgap script will push the required images and packs to your private OCI registry. + +4. Install Palette using the Kubernetes Helm chart. + +## Supported Platforms + +The following table outlines the platforms supported for airgap VerteX installation and the supported OCI registries. + +| **Platform** | **OCI Registry** | **Supported** | +| ------------ | ---------------- | ------------- | +| Kubernetes | Harbor | ✅ | +| Kubernetes | AWS ECR | ✅ | ## Prerequisites @@ -245,8 +273,8 @@ Complete the following steps before deploying the airgap VerteX installation. 13. Review the additional packs available for download. The supplemental packs are optional and not required for a successful installation. However, to create cluster profiles you may require several of the packs available for - download. Refer to the [Additional Packs](../../../../downloads/palette-vertex/additional-packs.md) resource for a - list of available packs. + download. Refer to the [Additional Packs](../../../../../../downloads/palette-vertex/additional-packs.md) resource + for a list of available packs. 14. Once you select the packs you want to install, download the pack binaries and start the binary to initiate the upload process. This step requires internet access, so you may have to download the binaries on a separate machine @@ -282,5 +310,5 @@ Use the following steps to validate the airgap setup process completed successfu ## Next Steps You are now ready to deploy the airgap VerteX installation. The important difference is that you will specify your OCI -registry during the installation process. Refer to the [VerteX Install](./install.md) guide for detailed guidance on -installing VerteX. +registry during the installation process. 
Refer to the [VerteX Install](../../install/install.md) guide for detailed +guidance on installing VerteX. diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/vertex-helm-ref.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/airgap/helm-reference.md similarity index 97% rename from docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/vertex-helm-ref.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/airgap/helm-reference.md index 037e1d0fbf9..131b609544c 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/vertex-helm-ref.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/airgap/helm-reference.md @@ -1,19 +1,26 @@ --- -sidebar_label: "Helm Configuration Reference" -title: "Helm Configuration Reference" -description: "Reference resource for the Palette VerteX Helm Chart installation parameters." +sidebar_label: "Helm Chart Configuration Reference" +title: "Helm Chart Configuration Reference" +description: "Reference for Palette VerteX Helm Chart installation parameters." icon: "" hide_table_of_contents: false -sidebar_position: 20 -tags: ["vertex", "helm"] -keywords: ["self-hosted", "vertex"] +sidebar_position: 30 +tags: ["self-hosted", "vertex", "helm"] +keywords: ["self-hosted", "vertex", "helm"] --- +:::danger + +Turn this page into partials for reuse across other self-hosted helm chart reference pages. + +::: + You can use the Palette VerteX Helm Chart to install Palette VerteX in a multi-node Kubernetes cluster in your production environment. The Helm chart allows you to customize values in the **values.yaml** file. This reference page lists and describes parameters available in the **values.yaml** file from the Helm Chart for your installation. -To learn how to install Palette VerteX using the Helm Chart, refer to the Kubernetes [Instructions](install.md). 
+To learn how to install Palette VerteX using the Helm Chart, refer to the Kubernetes +[Instructions](../../install/install.md). ## Required Parameters @@ -126,7 +133,7 @@ config: You can configure Palette VerteX to use Single Sign-On (SSO) for user authentication. Configure the SSO parameters to enable SSO for Palette VerteX. You can also configure different SSO providers for each tenant post-install, check out -the [SAML & SSO Setup](../../../user-management/saml-sso/saml-sso.md) documentation for additional guidance. +the [SAML & SSO Setup](../../../../../../user-management/saml-sso/saml-sso.md) documentation for additional guidance. To configure SSO, you must provide the following parameters. @@ -154,7 +161,7 @@ config: ### Email Palette VerteX uses email to send notifications to users. The email notification is used when inviting new users to the -platform, password resets, and when [webhook alerts](../../../clusters/cluster-management/health-alerts.md) are +platform, password resets, and when [webhook alerts](../../../../../../clusters/cluster-management/health-alerts.md) are triggered. Use the following parameters to configure email settings for Palette VerteX. | **Parameters** | **Description** | **Type** | **Default value** | @@ -419,7 +426,7 @@ ingress: You can specify a reverse proxy server that clusters deployed through Palette VerteX can use to facilitate network connectivity to the cluster's Kubernetes API server. Host clusters deployed in private networks can use the pack to expose the cluster's Kubernetes API to downstream clients that are not in the same network. Check out the [Reverse -Proxy](../../system-management/reverse-proxy.md) documentation to learn more about setting up a reverse proxy server for +Proxy](../../../../system-management/reverse-proxy.md) documentation to learn more about setting up a reverse proxy server for Palette VerteX. 
| **Parameters** | **Description** | **Type** | **Default value** | @@ -495,7 +502,8 @@ reach-system: :::info Due to node affinity configurations, you must set `scheduleOnControlPlane: false` for managed clusters deployed to -[Azure AKS](../../../clusters/public-cloud/azure/aks.md), [AWS EKS](../../../clusters/public-cloud/aws/eks.md), and -[GCP GKE](../../../clusters/public-cloud/gcp/create-gcp-gke-cluster.md). +[Azure AKS](../../../../../../clusters/public-cloud/azure/aks.md), +[AWS EKS](../../../../../../clusters/public-cloud/aws/eks.md), and +[GCP GKE](../../../../../../clusters/public-cloud/gcp/create-gcp-gke-cluster.md). ::: diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/non-airgap/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/non-airgap/_category_.json new file mode 100644 index 00000000000..455b8e49697 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/non-airgap/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 20 +} diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/non-airgap/helm-reference.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/non-airgap/helm-reference.md new file mode 100644 index 00000000000..2384bd656c2 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/non-airgap/helm-reference.md @@ -0,0 +1,509 @@ +--- +sidebar_label: "Helm Chart Configuration Reference" +title: "Helm Chart Configuration Reference" +description: "Reference for Palette VerteX Helm chart installation parameters." +icon: "" +hide_table_of_contents: false +sidebar_position: 30 +tags: ["self-hosted", "vertex", "helm"] +keywords: ["self-hosted", "vertex", "helm"] +--- + +:::danger + +Turn this page into partials for reuse across other self-hosted helm chart reference pages. 
+
+:::
+
+You can use the Palette VerteX Helm Chart to install Palette VerteX in a multi-node Kubernetes cluster in your
+production environment. The Helm chart allows you to customize values in the **values.yaml** file. This reference page
+lists and describes parameters available in the **values.yaml** file from the Helm Chart for your installation.
+
+To learn how to install Palette VerteX using the Helm Chart, refer to the Kubernetes
+[Instructions](../../install/non-airgap.md).
+
+## Required Parameters
+
+The following parameters are required for a successful installation of Palette VerteX.
+
+| **Parameters** | **Description** | **Type** |
+| --------------------------------------------------------------- | --------------- | -------- |
+| `config.env.rootDomain` | Used to configure the domain for the Palette installation. We recommend you create a CNAME DNS record that supports multiple subdomains. You can achieve this using a wildcard prefix, `*.vertex.abc.com`. Review the [Environment parameters](#environment) to learn more. | String |
+| `config.env.ociPackRegistry` or `config.env.ociPackEcrRegistry` | Specifies the FIPS image registry for Palette VerteX. You can use a self-hosted OCI registry or a public OCI registry we maintain and support. For more information, refer to the [Registry](#registries) section. | Object |
+
+:::warning
+
+If you are installing an air-gapped version of Palette VerteX, you must provide the image swap configuration. For more
+information, refer to the [Image Swap Configuration](#image-swap-configuration) section.
+
+:::
+
+## Global
+
+The `global` block allows you to provide configurations that apply globally to the installation process.
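For orientation, the required parameters described above might be combined in a minimal **values.yaml** as in the following sketch. All values shown are placeholders, not working credentials or endpoints; refer to the parameter tables in the sections below for the full set of available fields.

```yaml
# Minimal sketch of the required values; every value below is a placeholder.
config:
  env:
    rootDomain: "vertex.example.com" # CNAME with a wildcard record, e.g. *.vertex.example.com

  # Use either ociPackRegistry (self-hosted) or ociPackEcrRegistry (credentials from support).
  ociPackRegistry:
    endpoint: "registry.example.com" # placeholder endpoint
    name: "example-registry" # placeholder name
    username: "example-user" # placeholder username
    password: "ZXhhbXBsZQ==" # base64-encoded placeholder password
```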
+
+### Image Pull Secret
+
+This section is only relevant if you are using your own private registry to host the images required for the Palette
+installation process.
+
+The `imagePullSecret` block allows you to provide image pull secrets that will be used to authenticate with private
+registries to obtain the images required for Palette VerteX installation.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| ------------------ | --------------- | -------- | ----------------- |
+| `create` | Specifies whether to create a secret containing credentials to your own private image registry. | Boolean | `false` |
+| `dockerConfigJson` | The **config.json** file value containing the registry URL and credentials for your image registry in base64 encoded format on a single line. For more information about the **config.json** file, refer to [Kubernetes Documentation](https://kubernetes.io/docs/concepts/containers/images/#config-json). | String | None |
+
+:::info
+
+To obtain the base64-encoded version of the credential `config.json` file, you can issue the following command. Replace
+`<config-json-file-path>` with the path to your `config.json` file. The `tr -d '\n'` removes newline characters and
+produces the output on a single line.
+
+```shell
+cat <config-json-file-path> | base64 | tr -d '\n'
+```
+
+:::
+
+```yaml
+global:
+  imagePullSecret:
+    create: true
+    dockerConfigJson: ewoJImF1dGhzHsKCQkiaG9va3......MiOiAidHJ1ZSIKCX0KfQ # Base64 encoded config.json
+```
+
+## MongoDB
+
+Palette VerteX uses MongoDB Enterprise as its internal database and supports two modes of deployment:
+
+- MongoDB Enterprise deployed and active inside the cluster.
+ +- MongoDB Enterprise is hosted on a Software-as-a-Service (SaaS) platform, such as MongoDB Atlas. If you choose to use + MongoDB Atlas, ensure the MongoDB database has a user named `hubble` with the permission `readWriteAnyDatabase`. Refer + to the [Add a Database User](https://www.mongodb.com/docs/guides/atlas/db-user/) guide for guidance on how to create a + database user in Atlas. + +The table below lists the parameters used to configure a MongoDB deployment. + +| **Parameters** | **Description** | **Type** | **Default value** | +| ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------------------------------------------- | +| `internal` | Specifies the MongoDB deployment either in-cluster or using Mongo Atlas. | Boolean | `true` | +| `databaseUrl` | The URL for MongoDB Enterprise. If using a remote MongoDB Enterprise instance, provide the remote URL. This parameter must be updated if `mongo.internal` is set to `false`. You also need to ensure the MongoDB database has a user named `hubble` with the permission `readWriteAnyDatabase`. Refer to the [Add a Database User](https://www.mongodb.com/docs/guides/atlas/db-user/) guide for guidance on how to create a database user in Atlas. | String | `mongo-0.mongo,mongo-1.mongo,mongo-2.mongo` | +| `databasePassword` | The base64-encoded MongoDB Enterprise password. If you don't provide a value, a random password will be auto-generated. | String | `""` | +| `replicas` | The number of MongoDB replicas to start. 
| Integer | `3` |
+| `memoryLimit` | Specifies the memory limit for each MongoDB Enterprise replica. | String | `4Gi` |
+| `cpuLimit` | Specifies the CPU limit for each MongoDB Enterprise member. | String | `2000m` |
+| `pvcSize` | The storage settings for the MongoDB Enterprise database. Use increments of `5Gi` when specifying the storage size. The storage size applies to each replica instance. The total storage size for the cluster is `replicas` \* `pvcSize`. | String | `20Gi` |
+| `storageClass` | The storage class for the MongoDB Enterprise database. | String | `""` |
+
+```yaml
+mongo:
+  internal: true
+  databaseUrl: "mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
+  databasePassword: ""
+  replicas: 3
+  cpuLimit: "2000m"
+  memoryLimit: "4Gi"
+  pvcSize: "20Gi"
+  storageClass: ""
+```
+
+## Config
+
+Review the following parameters to configure Palette VerteX for your environment. The `config` section contains the
+following subsections:
+
+### Install Mode
+
+You can install Palette in connected or air-gapped mode. The following table lists the parameter used to configure the
+installation mode.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| ------------------ | --------------- | -------- | ----------------- |
+| `installationMode` | Specifies the installation mode. Allowed values are `connected` or `airgap`. Set the value to `airgap` when installing in an air-gapped environment. | String | `connected` |
+
+```yaml
+config:
+  installationMode: "connected"
+```
+
+### SSO
+
+You can configure Palette VerteX to use Single Sign-On (SSO) for user authentication. Configure the SSO parameters to
+enable SSO for Palette VerteX. You can also configure different SSO providers for each tenant post-install. Check out
+the [SAML & SSO Setup](../../../../../../user-management/saml-sso/saml-sso.md) documentation for additional guidance.
+
+To configure SSO, you must provide the following parameters.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| ------------------- | --------------- | -------- | ----------------- |
+| `saml.enabled` | Specifies whether to enable SSO SAML configuration by setting it to true. | Boolean | `false` |
+| `saml.acsUrlRoot` | The root URL of the Assertion Consumer Service (ACS). | String | `myfirstpalette.spectrocloud.com` |
+| `saml.acsUrlScheme` | The URL scheme of the ACS: `http` or `https`. | String | `https` |
+| `saml.audienceUrl` | The URL of the intended audience for the SAML response. | String | `https://www.spectrocloud.com` |
+| `saml.entityId` | The Entity ID of the Service Provider. | String | `https://www.spectrocloud.com` |
+| `saml.apiVersion` | Specifies the SSO SAML API version to use. | String | `v1` |
+
+```yaml
+config:
+  sso:
+    saml:
+      enabled: false
+      acsUrlRoot: "myfirstpalette.spectrocloud.com"
+      acsUrlScheme: "https"
+      audienceUrl: "https://www.spectrocloud.com"
+      entityId: "https://www.spectrocloud.com"
+      apiVersion: "v1"
+```
+
+### Email
+
+Palette VerteX uses email to send notifications to users. Email notifications are sent when inviting new users to the
+platform, when resetting passwords, and when
+[webhook alerts](../../../../../../clusters/cluster-management/health-alerts.md) are triggered. Use the following
+parameters to configure email settings for Palette VerteX.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| ----------------------- | --------------- | -------- | ----------------- |
+| `enabled` | Specifies whether to enable email configuration. | Boolean | `false` |
+| `emailId` | The email address for sending mail.
| String | `noreply@spectrocloud.com` |
+| `smtpServer` | Simple Mail Transfer Protocol (SMTP) server used for sending mail. | String | `smtp.gmail.com` |
+| `smtpPort` | SMTP port used for sending mail. | Integer | `587` |
+| `insecureSkipVerifyTls` | Specifies whether to skip Transport Layer Security (TLS) verification for the SMTP connection. | Boolean | `true` |
+| `fromEmailId` | Email address of the **_From_** address. | String | `noreply@spectrocloud.com` |
+| `password` | The base64-encoded SMTP password when sending emails. | String | `""` |
+
+```yaml
+config:
+  email:
+    enabled: false
+    emailId: "noreply@spectrocloud.com"
+    smtpServer: "smtp.gmail.com"
+    smtpPort: 587
+    insecureSkipVerifyTls: true
+    fromEmailId: "noreply@spectrocloud.com"
+    password: ""
+```
+
+### Environment
+
+The following parameters are used to configure the environment.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| -------------------- | --------------- | -------- | ----------------- |
+| `env.rootDomain` | Specifies the URL name assigned to Palette VerteX. The value assigned should have a Domain Name System (DNS) CNAME record mapped to the exposed IP address or the load balancer URL of the service _ingress-nginx-controller_. Optionally, if `ingress.ingressStaticIP` is provided with a value, you can use the same static IP address as the value for this parameter. | String | `""` |
+| `env.installerMode` | Specifies the installer mode. Do not modify the value. | String | `self-hosted` |
+| `env.installerCloud` | Specifies the cloud provider. Leave this parameter empty if you are installing a self-hosted Palette VerteX.
| String | `""` |
+
+```yaml
+config:
+  env:
+    rootDomain: ""
+```
+
+:::warning
+
+If Palette VerteX has only one tenant and you use local accounts with Single Sign-On (SSO) disabled, you can access
+Palette VerteX using the IP address or any domain name that resolves to that IP. However, once you enable SSO, users
+must log in using the tenant-specific subdomain. For example, if you create a tenant named `tenant1` and the domain name
+you assigned to Palette VerteX is `vertex.example.com`, the tenant URL will be `tenant1.vertex.example.com`. We
+recommend you create an additional wildcard DNS record to map all tenant URLs to the Palette VerteX load balancer. For
+example, `*.vertex.example.com`.
+
+:::
+
+### Cluster
+
+Use the following parameters to configure the Kubernetes cluster.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| ---------------------- | --------------- | -------- | ----------------- |
+| `stableEndpointAccess` | Set to `true` if the Kubernetes cluster is deployed in a public endpoint. If the cluster is deployed in a private network through a stable private endpoint, set to `false`. | Boolean | `false` |
+
+```yaml
+config:
+  cluster:
+    stableEndpointAccess: false
+```
+
+## Registries
+
+Palette VerteX requires credentials to access the required Palette VerteX images. You can configure different types of
+registries for Palette VerteX to download the required images. You must configure at least one Open Container Initiative
+(OCI) registry for Palette VerteX.
+
+:::warning
+
+Palette VerteX does not support insecure connections. Ensure you have the Certificate Authority (CA) certificate
+available, in PEM format, when using a custom pack or image registry. Otherwise, VerteX will not be able to pull packs
+and images from the registry.
Use the `caCert` parameter to provide the base64-encoded CA certificate.
+
+:::
+
+### OCI Registry
+
+Palette VerteX requires access to an OCI registry that contains all the required FIPS packs. You can host your own OCI
+registry and configure Palette VerteX to reference the registry. Alternatively, you can use the public OCI registry
+provided by us; refer to the [`ociPackEcrRegistry`](#oci-ecr-registry) section to learn more about the publicly
+available OCI registry.
+
+:::warning
+
+If you are using a self-hosted OCI registry, you must provide the required FIPS packs to the registry. Contact support
+for additional guidance on how to add the required FIPS packs to your OCI registry.
+
+:::
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| ------------------------------------ | --------------- | -------- | ----------------- |
+| `ociPackRegistry.endpoint` | The endpoint URL for the registry. | String | `""` |
+| `ociPackRegistry.name` | The name of the registry. | String | `""` |
+| `ociPackRegistry.password` | The base64-encoded password for the registry. | String | `""` |
+| `ociPackRegistry.username` | The username for the registry. | String | `""` |
+| `ociPackRegistry.baseContentPath` | The base path for the registry. | String | `""` |
+| `ociPackRegistry.insecureSkipVerify` | Specifies whether to skip Transport Layer Security (TLS) verification for the registry connection. VerteX requires the CA for registries that use a self-signed certificate. | Boolean | `false` |
+| `ociPackRegistry.caCert` | The registry's base64-encoded certificate authority (CA) certificate. Required for self-hosted OCI registries.
| String | `""` |
+
+```yaml
+config:
+  ociPackRegistry:
+    endpoint: ""
+    name: ""
+    password: ""
+    username: ""
+    baseContentPath: ""
+    insecureSkipVerify: false
+    caCert: ""
+```
+
+### OCI ECR Registry
+
+We expose a public OCI ECR registry that you can configure Palette VerteX to reference. If you want to host your own OCI
+registry, refer to the [OCI Registry](#oci-registry) section. The registry is hosted in Amazon Elastic Container
+Registry (ECR). Our support team provides the credentials for the OCI ECR registry.
+
+| **Parameters**                          | **Description**                                                                                    | **Type** | **Default value** |
+| --------------------------------------- | -------------------------------------------------------------------------------------------------- | -------- | ----------------- |
+| `ociPackEcrRegistry.endpoint`           | The endpoint URL for the registry.                                                                 | String   | `""`              |
+| `ociPackEcrRegistry.name`               | The name of the registry.                                                                          | String   | `""`              |
+| `ociPackEcrRegistry.accessKey`          | The base64-encoded access key for the registry.                                                    | String   | `""`              |
+| `ociPackEcrRegistry.secretKey`          | The base64-encoded secret key for the registry.                                                    | String   | `""`              |
+| `ociPackEcrRegistry.baseContentPath`    | The base path for the registry.                                                                    | String   | `""`              |
+| `ociPackEcrRegistry.isPrivate`          | Specifies whether the registry is private.                                                         | Boolean  | `true`            |
+| `ociPackEcrRegistry.insecureSkipVerify` | Specifies whether to skip Transport Layer Security (TLS) verification for the registry connection. | Boolean  | `false`           |
+| `ociPackEcrRegistry.caCert`             | The registry's base64-encoded certificate authority (CA) certificate.                              | String   | `""`              |
+
+```yaml
+config:
+  ociPackEcrRegistry:
+    endpoint: ""
+    name: ""
+    accessKey: ""
+    secretKey: ""
+    baseContentPath: ""
+    isPrivate: true
+    insecureSkipVerify: false
+    caCert: ""
+```
+
+### OCI Image Registry
+
+You can specify an OCI registry for the images used by Palette VerteX.
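+
+Several registry parameters in this section, such as `caCert` and the pack registry credentials, expect base64-encoded
+values. As a sketch, assuming your CA certificate is stored in a file named `ca.pem` (a hypothetical filename), you can
+produce the encoded value with standard tooling:
+
+```shell
+# Encode the PEM-formatted CA certificate as a single base64 line.
+# The -w 0 flag (GNU coreutils) disables line wrapping; on macOS, use `base64 < ca.pem` instead.
+base64 -w 0 ca.pem
+```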
+
+| **Parameters**                        | **Description**                                                                                                | **Type** | **Default value** |
+| ------------------------------------- | -------------------------------------------------------------------------------------------------------------- | -------- | ----------------- |
+| `ociImageRegistry.endpoint`           | The endpoint URL for the registry.                                                                             | String   | `""`              |
+| `ociImageRegistry.name`               | The name of the registry.                                                                                      | String   | `""`              |
+| `ociImageRegistry.password`           | The password for the registry.                                                                                 | String   | `""`              |
+| `ociImageRegistry.username`           | The username for the registry.                                                                                 | String   | `""`              |
+| `ociImageRegistry.baseContentPath`    | The base path for the registry.                                                                                | String   | `""`              |
+| `ociImageRegistry.insecureSkipVerify` | Specifies whether to skip Transport Layer Security (TLS) verification for the registry connection.             | Boolean  | `false`           |
+| `ociImageRegistry.caCert`             | The registry's base64-encoded certificate authority (CA) certificate. Required for self-hosted OCI registries. | String   | `""`              |
+| `ociImageRegistry.mirrorRegistries`   | A comma-separated list of mirror registries.                                                                   | String   | `""`              |
+
+```yaml
+config:
+  ociImageRegistry:
+    endpoint: ""
+    name: ""
+    password: ""
+    username: ""
+    baseContentPath: ""
+    insecureSkipVerify: false
+    caCert: ""
+    mirrorRegistries: ""
+```
+
+### Image Swap Configuration
+
+You can configure Palette VerteX to use image swap to download the required images. This is an advanced configuration
+option, and it is only required for air-gapped deployments. You must also install the Palette VerteX Image Swap Helm
+chart to use this option; otherwise, Palette VerteX ignores the configuration.
+
+| **Parameters**                 | **Description**                                                                                                         | **Type** | **Default value**                                                                     |
+| ------------------------------ | ----------------------------------------------------------------------------------------------------------------------- | -------- | ------------------------------------------------------------------------------------- |
+| `imageSwapInitImage`           | The image swap init image.                                                                                              | String   | `gcr.io/spectro-images-public/release/thewebroot/imageswap-init:v1.5.3-spectro-4.5.1` |
+| `imageSwapImage`               | The image swap image.                                                                                                   | String   | `gcr.io/spectro-images-public/release/thewebroot/imageswap:v1.5.3-spectro-4.5.1`      |
+| `imageSwapConfig`              | The image swap configuration for specific environments.                                                                 | String   | `""`                                                                                  |
+| `imageSwapConfig.isEKSCluster` | Specifies whether the cluster is an Amazon EKS cluster. Set to `false` if the Kubernetes cluster is not an EKS cluster. | Boolean  | `true`                                                                                |
+
+```yaml
+config:
+  imageSwapImages:
+    imageSwapInitImage: "gcr.io/spectro-images-public/release/thewebroot/imageswap-init:v1.5.3-spectro-4.5.1"
+    imageSwapImage: "gcr.io/spectro-images-public/release/thewebroot/imageswap:v1.5.3-spectro-4.5.1"
+
+  imageSwapConfig:
+    isEKSCluster: true
+```
+
+## gRPC
+
+gRPC is used for communication between Palette VerteX components. You can enable the deployment of an additional load
+balancer for gRPC. Host clusters deployed by Palette VerteX use the load balancer to communicate with the Palette VerteX
+control plane. This is an advanced configuration option, and it is not required for most deployments. Speak with your
+support representative before enabling this option.
+
+If you want to use an external gRPC endpoint, you must provide a custom domain name for the endpoint and a valid x509
+certificate. A CNAME DNS record must point to the IP address of the gRPC load balancer.
For example, if your Palette VerteX domain name is `vertex.example.com`, you could +create a CNAME DNS record for `grpc.vertex.example.com` that points to the IP address of the load balancer dedicated to +gRPC. + +| **Parameters** | **Description** | **Type** | **Default value** | +| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------- | ----------------- | +| `external` | Specifies whether to use an external gRPC endpoint. | Boolean | `false` | +| `endpoint` | The gRPC endpoint. | String | `""` | +| `annotations` | A map of key-value pairs that specifies load balancer annotations for gRPC. You can use annotations to change the behavior of the load balancer and the gRPC configuration. This field is considered an advanced setting. We recommend you consult with your assigned support team representative before making changes. | Object | `{}` | +| `grpcStaticIP` | Specify a static IP address for the gRPC load balancer service. If the field is empty, a dynamic IP address will be assigned to the load balancer. | String | `""` | +| `caCertificateBase64` | The base64-encoded Certificate Authority (CA) certificate for the gRPC endpoint. | String | `""` | +| `serverCrtBase64` | The base64-encoded server certificate for the gRPC endpoint. | String | `""` | +| `serverKeyBase64` | The base64-encoded server key for the gRPC endpoint. | String | `""` | +| `insecureSkipVerify` | Specifies whether to skip Transport Layer Security (TLS) verification for the gRPC endpoint. 
| Boolean | `false` | + +```yaml +grpc: + external: false + endpoint: "" + annotations: {} + grpcStaticIP: "" + caCertificateBase64: "" + serverCrtBase64: "" + serverKeyBase64: "" + insecureSkipVerify: false +``` + +## Ingress + +Palette VerteX deploys an Nginx Ingress Controller. This controller is used to route traffic to the Palette VerteX +control plane. You can change the default behavior and omit the deployment of an Nginx Ingress Controller. + +| **Parameters** | **Description** | **Type** | **Default value** | +| -------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ----------------- | +| `enabled` | Specifies whether to deploy an Nginx controller. Set to `false` if you do not want an Nginx controller deployed. | Boolean | `true` | +| `ingress.internal` | Specifies whether to deploy a load balancer or use the host network. | Boolean | `false` | +| `ingress.certificate` | Specify the base64-encoded x509 SSL certificate for the Nginx Ingress Controller. If left blank, the Nginx Ingress Controller will generate a self-signed certificate. | String | `""` | +| `ingress.key` | Specify the base64-encoded x509 SSL certificate key for the Nginx Ingress Controller. | String | `""` | +| `ingress.annotations` | A map of key-value pairs that specifies load balancer annotations for ingress. You can use annotations to change the behavior of the load balancer and the Nginx configuration. This is an advanced setting. We recommend you consult with your assigned support team representative prior to modification. | Object | `{}` | +| `ingress.ingressStaticIP` | Specify a static IP address for the ingress load balancer service. 
If empty, a dynamic IP address will be assigned to the load balancer. | String | `""` |
+| `ingress.terminateHTTPSAtLoadBalancer` | Specifies whether to terminate HTTPS at the load balancer.                                                                                                                                                                                                                                                   | Boolean  | `false`           |
+
+```yaml
+ingress:
+  enabled: true
+  ingress:
+    internal: false
+    certificate: ""
+    key: ""
+    annotations: {}
+    ingressStaticIP: ""
+    terminateHTTPSAtLoadBalancer: false
+```
+
+## Spectro Proxy
+
+You can specify a reverse proxy server that clusters deployed through Palette VerteX can use to facilitate network
+connectivity to the cluster's Kubernetes API server. Host clusters deployed in private networks can use the Spectro
+Proxy pack to expose the cluster's Kubernetes API to downstream clients that are not in the same network. Check out the
+[Reverse Proxy](../../../../system-management/reverse-proxy.md) documentation to learn more about setting up a reverse
+proxy server for Palette VerteX.
+
+| **Parameters**    | **Description**                                                                              | **Type** | **Default value** |
+| ----------------- | -------------------------------------------------------------------------------------------- | -------- | ----------------- |
+| `frps.enabled`    | Specifies whether to enable the Spectro server-side proxy.                                   | Boolean  | `false`           |
+| `frps.frpHostURL` | The Spectro server-side proxy URL.                                                           | String   | `""`              |
+| `frps.server.crt` | The base64-encoded server certificate for the Spectro server-side proxy.                     | String   | `""`              |
+| `frps.server.key` | The base64-encoded server key for the Spectro server-side proxy.                             | String   | `""`              |
+| `frps.ca.crt`     | The base64-encoded certificate authority (CA) certificate for the Spectro server-side proxy. | String   | `""`              |
+
+```yaml
+frps:
+  frps:
+    enabled: false
+    frpHostURL: ""
+    server:
+      crt: ""
+      key: ""
+    ca:
+      crt: ""
+```
+
+## UI System
+
+The following table lists the parameters to configure the Palette VerteX User Interface (UI) behavior. You can disable
+the UI or the Network Operations Center (NOC) UI.
You can also specify the MapBox access token and style layer ID for the NOC UI. +MapBox is a third-party service that provides mapping and location services. To learn more about MapBox and how to +obtain an access token, refer to the [MapBox Access tokens](https://docs.mapbox.com/help/getting-started/access-tokens) +guide. + +| **Parameters** | **Description** | **Type** | **Default value** | +| ------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ----------------- | +| `enabled` | Specifies whether to enable the Palette VerteX UI. | Boolean | `true` | +| `ui.nocUI.enable` | Specifies whether to enable the Palette VerteX Network Operations Center (NOC) UI. Enabling this parameter requires the `ui.nocUI.mapBoxAccessToken`. Once enabled, all cluster locations will be reported to MapBox. This feature is not FIPS compliant. | Boolean | `false` | +| `ui.nocUI.mapBoxAccessToken` | The MapBox access token for the Palette VerteX NOC UI. | String | `""` | +| `ui.nocUI.mapBoxStyledLayerID` | The MapBox style layer ID for the Palette VerteX NOC UI. | String | `""` | + +```yaml +ui-system: + enabled: true + ui: + nocUI: + enable: false + mapBoxAccessToken: "" + mapBoxStyledLayerID: "" +``` + +## Reach System + +You can configure VerteX to use a proxy server to access the internet. Set the parameter `reach-system.enabled` to +`true` to enable the proxy server. Proxy settings are configured in the `reach-system.proxySettings` section. 
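+
+As an illustration, a populated configuration might look like the following. The proxy URLs and `no_proxy` entries are
+hypothetical placeholders, and the exact key nesting should be verified against your chart version:
+
+```yaml
+reach-system:
+  enabled: true
+  proxySettings:
+    http_proxy: "http://proxy.example.com:3128"
+    https_proxy: "http://proxy.example.com:3128"
+    no_proxy: "10.0.0.0/8,.example.com,localhost"
+    ca_crt_path: ""
+  scheduleOnControlPlane: true
+```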
+
+| **Parameters**                          | **Description**                                                                  | **Type** | **Default value** |
+| --------------------------------------- | -------------------------------------------------------------------------------- | -------- | ----------------- |
+| `reachSystem.enabled`                   | Specifies whether to enable the usage of a proxy server for Palette VerteX.      | Boolean  | `false`           |
+| `reachSystem.proxySettings.http_proxy`  | The HTTP proxy server URL.                                                       | String   | `""`              |
+| `reachSystem.proxySettings.https_proxy` | The HTTPS proxy server URL.                                                      | String   | `""`              |
+| `reachSystem.proxySettings.no_proxy`    | A list of hostnames or IP addresses that should not go through the proxy server. | String   | `""`              |
+| `reachSystem.proxySettings.ca_crt_path` | The base64-encoded certificate authority (CA) certificate of the proxy server.   | String   | `""`              |
+| `reachSystem.scheduleOnControlPlane`    | Specifies whether to schedule the reach system on the control plane.             | Boolean  | `true`            |
+
+```yaml
+reach-system:
+  enabled: false
+  proxySettings:
+    http_proxy: ""
+    https_proxy: ""
+    no_proxy: ""
+    ca_crt_path: ""
+  scheduleOnControlPlane: true
+```
+
+:::info
+
+Due to node affinity configurations, you must set `scheduleOnControlPlane: false` for managed clusters deployed to
+[Azure AKS](../../../../../../clusters/public-cloud/azure/aks.md),
+[AWS EKS](../../../../../../clusters/public-cloud/aws/eks.md), and
+[GCP GKE](../../../../../../clusters/public-cloud/gcp/create-gcp-gke-cluster.md).
+
+::: diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/non-airgap/non-airgap.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/non-airgap/non-airgap.md new file mode 100644 index 00000000000..72c91c66c68 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/setup/non-airgap/non-airgap.md @@ -0,0 +1,19 @@ +--- +sidebar_label: "Set Up Non-Airgap Environment" +title: "Set Up Non-Airgap Environment" +description: + "No prior setup is needed when installing self-hosted Palette VerteX on a Kubernetes cluster with internet + connectivity." +icon: "" +hide_table_of_contents: false +sidebar_position: 20 +tags: ["self-hosted", "vertex", "kubernetes", "non-airgap"] +keywords: ["self-hosted", "vertex", "kubernetes", "non-airgap"] +--- + +:::info + +No prior setup is necessary for non-airgap installations. For system prerequisites, refer to the Prerequisites section
+of the installation guide.
+ +::: diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/uninstall/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/uninstall/_category_.json new file mode 100644 index 00000000000..e7e7c549660 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/uninstall/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 40 +} diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/uninstall.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/uninstall/uninstall.md similarity index 92% rename from docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/uninstall.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/uninstall/uninstall.md index 48fe4c7a67c..a1d0616ead1 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/uninstall.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/uninstall/uninstall.md @@ -1,12 +1,11 @@ --- -sidebar_label: "Uninstallation" -title: "Uninstall VerteX" -description: "Learn how to uninstall a VerteX installation from your cluster using Helm charts." +sidebar_label: "Uninstall" +title: "Uninstall Palette VerteX from Kubernetes" +description: "Uninstall self-hosted Palette VerteX from your Kubernetes cluster using Helm charts." icon: "" hide_table_of_contents: false -sidebar_position: 40 -tags: ["self-hosted", "enterprise"] -keywords: ["vertex"] +tags: ["self-hosted", "vertex", "uninstall", "kubernetes", "helm"] +keywords: ["self-hosted", "vertex", "uninstall", "kubernetes", "helm"] --- To uninstall VerteX from your cluster, you need to uninstall VerteX management plane and Cert Manager. 
Optionally, you diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/_category_.json new file mode 100644 index 00000000000..c3460c6dbde --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 30 +} diff --git a/docs/docs-content/vertex/upgrade/upgrade-k8s/airgap.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/airgap.md similarity index 94% rename from docs/docs-content/vertex/upgrade/upgrade-k8s/airgap.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/airgap.md index 325ef678540..2d1f33be58e 100644 --- a/docs/docs-content/vertex/upgrade/upgrade-k8s/airgap.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/airgap.md @@ -1,11 +1,11 @@ --- -sidebar_label: "Airgap" -title: "Upgrade Airgap Palette VerteX Installed with Kubernetes" -description: "Learn how to upgrade self-hosted airgap Palette VerteX." +sidebar_label: "Upgrade Airgap Palette VerteX" +title: "Upgrade Airgap Palette VerteX on Kubernetes" +description: "Upgrade a self-hosted, airgapped Palette VerteX instance installed on a Kubernetes cluster." icon: "" sidebar_position: 10 -tags: ["vertex", "self-hosted", "airgap", "kubernetes", "upgrade"] -keywords: ["self-hosted", "vertex", "airgap", "kubernetes"] +tags: ["self-hosted", "vertex", "airgap", "kubernetes", "upgrade", "helm"] +keywords: ["self-hosted", "vertex", "airgap", "kubernetes", "upgrade", "helm"] --- This guide takes you through the process of upgrading a self-hosted airgap Palette VerteX instance installed on @@ -14,14 +14,14 @@ Kubernetes. 
:::warning Before upgrading Palette VerteX to a new major version, you must first update it to the latest patch version of the -latest minor version available. Refer to the [Supported Upgrade Paths](../upgrade.md#supported-upgrade-paths) section -for details. +latest minor version available. Refer to the [Supported Upgrade Paths](./upgrade.md#supported-upgrade-paths) section for +details. ::: If your setup includes a PCG, you must also -[allow the PCG to upgrade automatically](../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or minor -Palette VerteX upgrade. +[allow the PCG to upgrade automatically](../../../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or +minor Palette VerteX upgrade. ## Prerequisites @@ -32,7 +32,7 @@ Palette VerteX upgrade. available to store the new Palette VerteX images and packs. - Access to the latest Palette VerteX airgap setup binary. Refer to - [Access Palette VerteX](../../vertex.md#access-palette-vertex) for more details. + [Access Palette VerteX](../../../vertex.md#access-palette-vertex) for more details. - [`kubectl`](https://kubernetes.io/docs/tasks/tools/#kubectl) and [`helm`](https://helm.sh/docs/intro/install/) available in your system. @@ -44,7 +44,7 @@ Palette VerteX upgrade. - `unzip` or a similar tool available in your system. - Access to the latest Palette VerteX Helm Chart. Refer to - [Access Palette VerteX](../../vertex.md#access-palette-vertex) for more details. + [Access Palette VerteX](../../../vertex.md#access-palette-vertex) for more details. ## Upgrade @@ -230,8 +230,8 @@ Palette VerteX upgrade. -7. Refer to the [Additional Packs](../../../downloads/palette-vertex/additional-packs.md) page and update the packages - you are currently using. You must update each package separately. +7. Refer to the [Additional Packs](../../../../../downloads/palette-vertex/additional-packs.md) page and update the + packages you are currently using. You must update each package separately. 
:::info @@ -300,8 +300,7 @@ Palette VerteX upgrade. 12. Prepare the Palette VerteX configuration file `values.yaml`. If you saved `values.yaml` used during the Palette VerteX installation, you can reuse it for the upgrade. Alternatively, follow the - [Kubernetes Installation Instructions](../../install-palette-vertex/install-on-kubernetes/install.md) to populate - your `values.yaml`. + [Kubernetes Installation Instructions](../install/airgap.md) to populate your `values.yaml`. :::warning diff --git a/docs/docs-content/vertex/upgrade/upgrade-k8s/non-airgap.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/non-airgap.md similarity index 91% rename from docs/docs-content/vertex/upgrade/upgrade-k8s/non-airgap.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/non-airgap.md index 748e52be7ad..e657cf35356 100644 --- a/docs/docs-content/vertex/upgrade/upgrade-k8s/non-airgap.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/non-airgap.md @@ -1,11 +1,11 @@ --- -sidebar_label: "Non-airgap" -title: "Upgrade Palette VerteX Installed with Kubernetes" -description: "Learn how to upgrade self-hosted non-airgap Palette VerteX with Helm and Kubernetes." +sidebar_label: "Upgrade Non-Airgap Palette VerteX" +title: "Upgrade Non-Airgap Palette VerteX on Kubernetes" +description: "Upgrade a self-hosted, non-airgap Palette VerteX instance installed on a Kubernetes cluster." icon: "" -sidebar_position: 0 -tags: ["vertex", "self-hosted", "non-airgap", "kubernetes", "management", "upgrades"] -keywords: ["self-hosted", "vertex"] +sidebar_position: 20 +tags: ["self-hosted", "vertex", "non-airgap", "kubernetes", "upgrade", "helm"] +keywords: ["self-hosted", "vertex", "non-airgap", "kubernetes", "upgrade", "helm"] --- This guide takes you through the process of upgrading a self-hosted Palette VerteX instance installed with Helm on @@ -14,14 +14,14 @@ Kubernetes. 
:::warning Before upgrading Palette VerteX to a new major version, you must first update it to the latest patch version of the -latest minor version available. Refer to the [Supported Upgrade Paths](../upgrade.md#supported-upgrade-paths) section -for details. +latest minor version available. Refer to the [Supported Upgrade Paths](./upgrade.md#supported-upgrade-paths) section for +details. ::: If your setup includes a PCG, you must also -[allow the PCG to upgrade automatically](../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or minor -Palette VerteX upgrade. +[allow the PCG to upgrade automatically](../../../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or +minor Palette VerteX upgrade. ## Prerequisites @@ -35,7 +35,7 @@ Palette VerteX upgrade. - `unzip` or a similar tool available in your system. - Access to the latest Palette VerteX Helm Chart. Refer to - [Access Palette VerteX](../../vertex.md#access-palette-vertex) for more details. + [Access Palette VerteX](../../../vertex.md#access-palette-vertex) for more details. ## Upgrade @@ -80,8 +80,7 @@ match your environment. 4. Prepare the Palette VerteX configuration file `values.yaml`. If you saved `values.yaml` used during the Palette VerteX installation, you can reuse it for the upgrade. Alternatively, follow the - [Kubernetes Installation Instructions](../../install-palette-vertex/install-on-kubernetes/install.md) to populate - your `values.yaml`. + [Kubernetes Installation Instructions](../install/non-airgap.md) to populate your `values.yaml`. 
:::warning diff --git a/docs/docs-content/vertex/upgrade/upgrade.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/upgrade.md similarity index 94% rename from docs/docs-content/vertex/upgrade/upgrade.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/upgrade.md index 5e0712b7e87..3777f82b80d 100644 --- a/docs/docs-content/vertex/upgrade/upgrade.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/upgrade.md @@ -1,35 +1,50 @@ --- sidebar_label: "Upgrade" -title: "VerteX Upgrade" -description: "Upgrade notes for specific Palette VerteX versions." +title: "Upgrade Palette VerteX on Kubernetes" +description: "Upgrade self-hosted Palette VerteX installed on a Kubernetes cluster." icon: "" hide_table_of_contents: false -sidebar_position: 100 -tags: ["vertex", "self-hosted", "upgrade"] -keywords: ["self-hosted", "vertex"] +tags: ["self-hosted", "vertex", "helm", "kubernetes", "upgrade"] +keywords: ["self-hosted", "vertex", "helm", "kubernetes", "upgrade"] --- +:::danger + +The below content is from the former [VerteX Upgrade](https://docs.spectrocloud.com/vertex/upgrade/) page. Convert to +partials and refactor where necessary. Only mention Kubernetes! + +::: + This page offers links and reference information for upgrading self-hosted Palette VerteX instances. If you have questions or concerns, [reach out to our support team](http://support.spectrocloud.io/). :::tip -If you are using self-hosted Palette, refer to the [Palette Upgrade](../../enterprise-version/upgrade/upgrade.md) page -for upgrade guidance. +If you are using self-hosted Palette instead of Palette VerteX, refer to the +[Palette Upgrade](../../../../palette/supported-environments/kubernetes/upgrade/upgrade.md) page for upgrade guidance. 
::: ### Private Cloud Gateway If your setup includes a PCG, make sure to -[allow the PCG to upgrade automatically](../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or minor -Palette VerteX upgrade. +[allow the PCG to upgrade automatically](../../../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or +minor Palette VerteX upgrade. + +## Upgrade Notes + +Refer to the following known issues before upgrading: + +- Upgrading self-hosted Palette or Palette VerteX from version 4.6.x to 4.7.x can cause the upgrade to hang if any + member of the MongoDB ReplicaSet is not fully synced and in a healthy state prior to the upgrade. For guidance on + verifying the health status of MongoDB ReplicaSet members, refer to our + [Troubleshooting](../../../../../troubleshooting/palette-upgrade.md#self-hosted-palette-or-palette-vertex-upgrade-hangs) + guide. ## Supported Upgrade Paths -Refer to the following tables for the supported Palette VerteX upgrade paths for -[VMware](../install-palette-vertex/install-on-vmware/install-on-vmware.md) and -[Kubernetes](../install-palette-vertex/install-on-kubernetes/install-on-kubernetes.md) installations. +Refer to the following tables for the supported upgrade paths for self-hosted Palette VerteX environments installed on a +[Kubernetes](../kubernetes.md) cluster. :::danger @@ -38,15 +53,6 @@ latest minor version available. ::: -:::warning - -Upgrading self-hosted Palette or Palette VerteX from version 4.6.x to 4.7.x can cause the upgrade to hang if any member -of the MongoDB ReplicaSet is not fully synced and in a healthy state prior to the upgrade. For guidance on verifying the -health status of MongoDB ReplicaSet members, refer to our -[Troubleshooting](../../troubleshooting/palette-upgrade.md#self-hosted-palette-or-palette-vertex-upgrade-hangs) guide. 
- -::: - @@ -301,7 +307,6 @@ health status of MongoDB ReplicaSet members, refer to our | 4.6.41 | 4.7.20 | :white_check_mark: | | 4.6.41 | 4.7.15 | :white_check_mark: | | 4.6.41 | 4.7.3 | :white_check_mark: | -| 4.6.6 | 4.7.15 | :white_check_mark: | **4.6.x** @@ -508,7 +513,7 @@ health status of MongoDB ReplicaSet members, refer to our - + :::preview @@ -523,14 +528,3 @@ health status of MongoDB ReplicaSet members, refer to our - -## Upgrade Guides - -Refer to the respective guide for guidance on upgrading your self-hosted Palette VerteX instance. - -- [Upgrade Notes](upgrade-notes.md) -- [Non-Airgap VMware](upgrade-vmware/non-airgap.md) -- [Airgap VMware](upgrade-vmware/airgap.md) -- [Non-Airgap Kubernetes](upgrade-k8s/non-airgap.md) -- [Airgap Kubernetes](upgrade-k8s/airgap.md) -- [VerteX Management Appliance](vertex-management-appliance.md) diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/_category_.json new file mode 100644 index 00000000000..455b8e49697 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 20 +} diff --git a/docs/docs-content/enterprise-version/activate-installation/activate-installation.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/activate.md similarity index 75% rename from docs/docs-content/enterprise-version/activate-installation/activate-installation.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/activate.md index 45986307ccd..6aa212a2d6b 100644 --- a/docs/docs-content/enterprise-version/activate-installation/activate-installation.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/activate.md @@ -1,14 +1,20 @@ --- -sidebar_label: "Activate 
Palette" -title: "Activate Palette" -description: "Learn how to activate your self-hosted Palette installation" +sidebar_label: "Activate" +title: "Activate Self-Hosted Palette VerteX" +description: "Activate your self-hosted Palette VerteX installation." icon: "" hide_table_of_contents: false -sidebar_position: 10 -tags: ["self-hosted", "account", "activate"] -keywords: ["self-hosted", "palette", "activate"] +sidebar_position: 40 +tags: ["self-hosted", "vertex", "account", "activate"] +keywords: ["self-hosted", "vertex", "account", "activate"] --- +:::danger + +Convert to partials for reuse in other installation sections. + +::: + Beginning with version 4.6.32, once you install Palette or upgrade to version 4.6.32 or later, you have 30 days to activate it. During this time, you have unrestricted access to all of Palette's features. After 30 days, you can continue to use Palette, and existing clusters will continue to run, but you cannot perform the following operations @@ -17,19 +23,19 @@ until Palette is activated: - Create new clusters. - Modify the configuration of active clusters. This includes modifying - [cluster profile variables](../../profiles/cluster-profiles/create-cluster-profiles/define-profile-variables/define-profile-variables.md); - changing [cluster profile versions](../../clusters/cluster-management/cluster-updates.md#enablement); editing, + [cluster profile variables](../../../../profiles/cluster-profiles/create-cluster-profiles/define-profile-variables/define-profile-variables.md); + changing [cluster profile versions](../../../../clusters/cluster-management/cluster-updates.md#enablement); editing, deleting, or replacing profile layers; and editing YAML files. -- Update [node configurations](../../clusters/cluster-management/node-pool.md), such as the node pool size. +- Update [node configurations](../../../../clusters/cluster-management/node-pool.md), such as the node pool size. 
Each installation of Palette has a unique product ID and corresponding activation key. Activation keys are single-use and valid for the entirety of the Palette installation, including all subsequent version upgrades. Once Palette is activated, it does not need to be reactivated unless you need to reinstall Palette, at which time a new product ID will be assigned, and a new activation key will be needed. Activation keys are no additional cost and are included with your purchase of Palette. The activation process is the same for connected and airgapped installations, regardless of whether -Palette is installed via the [Palette CLI](../../automation/palette-cli/palette-cli.md) or a -[Helm Chart](../install-palette/install-on-kubernetes/install-on-kubernetes.md). +Palette is installed via the [Palette CLI](../../../../automation/palette-cli/palette-cli.md), +[Helm chart](../kubernetes/install/install.md), or [Management Appliance](./management-appliance.md) ISO. If you are in trial mode or your trial has expired, Palette displays the appropriate banner on the **Summary** screen of your system console, as well as at **Administration > Activation**. Trial mode and expired statuses are also displayed @@ -46,8 +52,8 @@ Below is an overview of the activation process. 1. The system admin installs Palette or upgrades to version 4.6.32 or later. 2. Palette enters trial mode. During this time, you have 30 days to take advantage of all of Palette's features. After 30 days, the trial expires, and Palette functionality is restricted. Any clusters that you have deployed will remain - functional, but you cannot perform [day-2 operations](../../clusters/cluster-management/cluster-management.md), and - you cannot deploy additional clusters. + functional, but you cannot perform [day-2 operations](../../../../clusters/cluster-management/cluster-management.md), + and you cannot deploy additional clusters. 3. 
Before or after your trial expires, contact a Spectro Cloud customer support representative. You must specify whether you are activating Palette or VerteX and also provide a short description of your instance, along with your @@ -63,14 +69,14 @@ Below is an overview of the activation process. - A Palette subscription. - A self-hosted instance of Palette that is not activated. For help installing Palette, check out our - [Installation](../install-palette/install-palette.md) guide. + [Installation](./install.md) guide. -- Access to the [system console](../system-management/system-management.md#access-the-system-console). +- Access to the [system console](../../system-management/system-management.md#access-the-system-console). ## Enablement 1. Log in to the system console. For more information, refer to the - [Access the System Console](../system-management/system-management.md#access-the-system-console) guide. + [Access the System Console](../../system-management/system-management.md#access-the-system-console) guide. 2. A banner is displayed on the **Summary** screen, alerting you that your product is either in trial mode or has expired. On the banner, select **Activate Palette**. Alternatively, from the left main menu, select @@ -98,7 +104,7 @@ Below is an overview of the activation process. You can view the status of your license from the system console. If your license is active, the license status is removed from the left main menu of the Palette UI. -1. Log in to the [system console](../system-management/system-management.md#access-the-system-console). +1. Log in to the [system console](../../system-management/system-management.md#access-the-system-console). 2. The activation banner is no longer displayed on the **Summary** screen, indicating your license is active. Confirm your license status by navigating to **Administration > Activation**. 
The banner states that **Your license is diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/install.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/install.md new file mode 100644 index 00000000000..40912d7ead5 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/install.md @@ -0,0 +1,76 @@ +--- +sidebar_label: "Install" +title: "Install Palette VerteX with Management Appliance" +description: "Install self-hosted Palette VerteX using the VerteX Management Appliance." +hide_table_of_contents: false +tags: ["management appliance", "self-hosted", "vertex", "install"] +sidebar_position: 30 +--- + +:::danger + +This has been split from the former +[VerteX Management Appliance](https://docs.spectrocloud.com/vertex/install-palette-vertex/vertex-management-appliance/) +page. + +::: + +Follow the instructions to install Palette VerteX using the VerteX Management Appliance on your infrastructure platform. + +## Size Guidelines + + + +## Limitations + +- Only public image registries are supported if you are choosing to use an external registry for your pack bundles. + +## Prerequisites + + + +## Install Palette VerteX + + + +:::warning + +If your installation is not successful, verify that the `piraeus-operator` pack was correctly installed. For more +information, refer to the +[Self-Hosted Installation - Troubleshooting](../../../../troubleshooting/enterprise-install.md#scenario---palettevertex-management-appliance-installation-stalled-due-to-piraeus-operator-pack-in-error-state) +guide. 
+ +::: + +## Validate + + + +## Next Steps diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/management-appliance.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/management-appliance.md new file mode 100644 index 00000000000..968af50286c --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/management-appliance.md @@ -0,0 +1,67 @@ +--- +sidebar_label: "Management Appliance" +title: "Self-Hosted Palette VerteX with Management Appliance" +description: + "Learn how to use the VerteX Management Appliance to install self-hosted Palette VerteX on your desired + infrastructure." +hide_table_of_contents: false +# sidebar_custom_props: +# icon: "chart-diagram" +tags: ["management appliance", "self-hosted", "vertex"] +--- + +:::preview + +This is a Tech Preview feature and is subject to change. Upgrades from a Tech Preview deployment may not be available. +Do not use this feature in production workloads. + +::: + +The VerteX Management Appliance is downloadable as an ISO file and is a solution for installing Palette VerteX on your +infrastructure. The ISO file contains all the necessary components needed for Palette to function. The ISO file is used +to boot the nodes, which are then clustered to form a Palette management cluster. + +Once Palette VerteX has been installed, you can download pack bundles and upload them to the internal Zot registry or an +external registry. These pack bundles are used to create your cluster profiles. You will then be able to deploy clusters +in your environment. + +## Third Party Packs + +There is an additional option to download and install the Third Party packs that provide complementary functionality to +Palette VerteX. These packs are not required for Palette VerteX to function, but they do provide additional features and +capabilities as described in the following table. 
+ +| **Feature** | **Included with Palette Third Party Pack** | **Included with Palette Third Party Conformance Pack** | +| ------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ | ------------------------------------------------------ | +| [Backup and Restore](../../../../clusters/cluster-management/backup-restore/backup-restore.md) | :white_check_mark: | :x: | +| [Configuration Security](../../../../clusters/cluster-management/compliance-scan.md#configuration-security) | :white_check_mark: | :x: | +| [Penetration Testing](../../../../clusters/cluster-management/compliance-scan.md#penetration-testing) | :white_check_mark: | :x: | +| [Software Bill Of Materials (SBOM) scanning](../../../../clusters/cluster-management/compliance-scan.md#sbom-dependencies--vulnerabilities) | :white_check_mark: | :x: | +| [Conformance Testing](../../../../clusters/cluster-management/compliance-scan.md#conformance-testing) | :x: | :white_check_mark: | + +## Architecture + +The ISO file is built with the Operating System (OS), Kubernetes distribution, Container Network Interface (CNI), and +Container Storage Interface (CSI). A [Zot registry](https://zotregistry.dev/) is also included in the Appliance +Framework ISO. Zot is a lightweight, OCI-compliant container image registry that is used to store the Palette packs +needed to create cluster profiles. + +This solution is designed to be immutable, secure, and compliant with industry standards, such as the Federal +Information Processing Standards (FIPS). The following table displays the infrastructure profile for the Palette VerteX +appliance. 
+ +| **Layer** | **Component** | **Version** | **FIPS-compliant** | +| -------------- | --------------------------------------------- | ----------- | ------------------ | +| **OS** | Ubuntu: Immutable [Kairos](https://kairos.io) | 22.04 | :white_check_mark: | +| **Kubernetes** | Palette eXtended Kubernetes Edge (PXK-E) | 1.32.3 | :white_check_mark: | +| **CNI** | Calico | 3.29.2 | :white_check_mark: | +| **CSI** | Piraeus | 2.8.1 | :white_check_mark: | +| **Registry** | Zot | 0.1.67 | :white_check_mark: | + +## Supported Platforms + +The VerteX Management Appliance can be used on the following infrastructure platforms: + +- VMware vSphere +- Bare Metal +- Machine as a Service (MAAS) diff --git a/docs/docs-content/vertex/upgrade/vertex-management-appliance.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/upgrade.md similarity index 50% rename from docs/docs-content/vertex/upgrade/vertex-management-appliance.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/upgrade.md index b90a64756ae..939f9403d7a 100644 --- a/docs/docs-content/vertex/upgrade/vertex-management-appliance.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/management-appliance/upgrade.md @@ -1,12 +1,10 @@ --- -title: "Upgrade VerteX Management Appliance" -sidebar_label: "VerteX Management Appliance" -description: "Learn how to upgrade the VerteX Management Appliance" +sidebar_label: "Upgrade" +title: "Upgrade Palette VerteX with Management Appliance" +description: "Upgrade self-hosted Palette VerteX installed with the VerteX Management Appliance." hide_table_of_contents: false -# sidebar_custom_props: -# icon: "chart-diagram" -tags: ["verteX management appliance", "self-hosted", "vertex"] -sidebar_position: 20 +tags: ["management appliance", "self-hosted", "vertex", "upgrade"] +sidebar_position: 50 --- :::preview @@ -16,9 +14,8 @@ Do not use this feature in production workloads. 
::: -Follow the instructions to upgrade the -[VerteX Management Appliance](../install-palette-vertex/vertex-management-appliance.md) using a content bundle. The -content bundle is used to upgrade the Palette VerteX instance to a chosen target version. +Follow the instructions to upgrade the [VerteX Management Appliance](./management-appliance.md) using a content bundle. +The content bundle is used to upgrade the Palette VerteX instance to a chosen target version. :::info @@ -27,11 +24,27 @@ remain operational. ::: +## Supported Upgrade Paths + +:::danger + +Before upgrading Palette VerteX to a new major version, you must first update it to the latest patch version of the +latest minor version available. + +::: + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.7.15 | 4.7.27 | :white_check_mark: | +| 4.7.3 | 4.7.27 | :x: | +| 4.7.3 | 4.7.15 | :x: | + ## Prerequisites + +### Upload Packs + + + +### Validate + + + +## (Optional) Upload Third Party Packs + +Follow the instructions to upload the Third Party packs to your Palette VerteX instance. The Third Party packs contain +additional functionality and capabilities that enhance the Palette VerteX experience, such as backup and restore, +configuration scanning, penetration scanning, SBOM scanning, and conformance scanning. 
+ +### Prerequisites + + + +### Upload Packs + + + +### Validate + + + +## Next Steps + + diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/supported-environments.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/supported-environments.md new file mode 100644 index 00000000000..39b7a252ebd --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/supported-environments.md @@ -0,0 +1,11 @@ +--- +sidebar_label: "Supported Environments" +title: "Supported Environments" +description: "Supported environments for installing self-hosted Palette VerteX." +icon: "" +hide_table_of_contents: false +tags: ["self-hosted", "vertex", "kubernetes", "helm", "vmware", "management appliance"] +keywords: ["self-hosted", "vertex", "kubernetes", "helm", "vmware", "management appliance"] +--- + +Placeholder. diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/_category_.json new file mode 100644 index 00000000000..c3460c6dbde --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 30 +} diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/activate/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/activate/_category_.json new file mode 100644 index 00000000000..455b8e49697 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/activate/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 20 +} diff --git a/docs/docs-content/vertex/activate-installation/activate-installation.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/activate/activate.md similarity index 74% rename from docs/docs-content/vertex/activate-installation/activate-installation.md rename to 
docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/activate/activate.md index 3fb869040e1..a291c420007 100644 --- a/docs/docs-content/vertex/activate-installation/activate-installation.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/activate/activate.md @@ -1,11 +1,10 @@ --- -sidebar_label: "Activate VerteX" -title: "Activate VerteX" -description: "Learn how to activate your self-hosted Palette VerteX installation" +sidebar_label: "Activate" +title: "Activate Self-Hosted Palette VerteX" +description: "Activate your self-hosted Palette VerteX installation." icon: "" hide_table_of_contents: false -sidebar_position: 10 -tags: ["self-hosted", "account", "activate"] +tags: ["self-hosted", "vertex", "activate"] keywords: ["self-hosted", "vertex", "activate"] --- @@ -17,19 +16,20 @@ until VerteX is activated: - Create new clusters. - Modify the configuration of active clusters. This includes modifying - [cluster profile variables](../../profiles/cluster-profiles/create-cluster-profiles/define-profile-variables/define-profile-variables.md); - changing [cluster profile versions](../../clusters/cluster-management/cluster-updates.md#enablement); editing, - deleting, or replacing profile layers; and editing YAML files. + [cluster profile variables](../../../../../profiles/cluster-profiles/create-cluster-profiles/define-profile-variables/define-profile-variables.md); + changing [cluster profile versions](../../../../../clusters/cluster-management/cluster-updates.md#enablement); + editing, deleting, or replacing profile layers; and editing YAML files. -- Update [node configurations](../../clusters/cluster-management/node-pool.md), such as the node pool size. +- Update [node configurations](../../../../../clusters/cluster-management/node-pool.md), such as the node pool size. Each installation of Palette VerteX has a unique product ID and corresponding activation key. 
Activation keys are
single-use and valid for the entirety of the VerteX installation, including all subsequent version upgrades. Once VerteX
is activated, it does not need to be reactivated unless you need to reinstall VerteX, at which time a new product ID
will be assigned, and a new activation key will be needed. Activation keys are no additional cost and are included with
your purchase of Palette VerteX. The activation process is the same for connected and airgapped installations,
-regardless of whether VerteX is installed via the [Palette CLI](../../automation/palette-cli/palette-cli.md) or a
-[Helm Chart](../install-palette-vertex/install-on-kubernetes/install-on-kubernetes.md).
+regardless of whether VerteX is installed via the [Palette CLI](../../../../../automation/palette-cli/palette-cli.md),
+[Helm chart](../../kubernetes/install/install.md), or
+[Management Appliance](../../management-appliance/management-appliance.md) ISO.
 
 If you are in trial mode or your trial has expired, Palette VerteX displays the appropriate banner on the **Summary**
 screen of your system console, as well as at **Administration > Activation**. Trial mode and expired statuses are also
@@ -44,8 +44,9 @@ Below is an overview of the activation process.
 
 1. The system admin installs Palette VerteX or upgrades to version 4.6.32 or later.
 2. VerteX enters trial mode. During this time, you have 30 days to take advantage of all of VerteX's features. After 30
    days, the trial expires, and VerteX functionality is restricted. Any clusters that you have deployed will remain
-   functional, but you cannot perform [day-2 operations](../../clusters/cluster-management/cluster-management.md), and
-   you cannot deploy additional clusters.
+   functional, but you cannot perform
+   [day-2 operations](../../../../../clusters/cluster-management/cluster-management.md), and you cannot deploy
+   additional clusters.
 3. Before or after your trial expires, contact a Spectro Cloud customer support representative.
You must specify whether you are activating Palette or VerteX and also provide a short description of your instance, along with your @@ -61,14 +62,14 @@ Below is an overview of the activation process. - A Palette VerteX subscription. - A self-hosted instance of Palette VerteX that is not activated. For help installing Palette VerteX, check out our - [Installation](../install-palette-vertex/install-palette-vertex.md) guide. + [Installation](../install/install.md) guide. -- Access to the [system console](../system-management/system-management.md#access-the-system-console). +- Access to the [system console](../../../system-management/system-management.md#access-the-system-console). ## Enablement 1. Log in to the system console. For more information, refer to the - [Access the System Console](../system-management/system-management.md#access-the-system-console) guide. + [Access the System Console](../../../system-management/system-management.md#access-the-system-console) guide. 2. A banner is displayed on the **Summary** screen, alerting you that your product is either in trial mode or has expired. On the banner, select **Activate VerteX**. Alternatively, from the left main menu, select **Administration > @@ -94,7 +95,7 @@ Below is an overview of the activation process. You can view the status of your license from the system console. If your license is active, the license status is removed from the left main menu of the Palette VerteX UI. -1. Log in to the [system console](../system-management/system-management.md#access-the-system-console). +1. Log in to the [system console](../../../system-management/system-management.md#access-the-system-console). 2. The activation banner is no longer displayed on the **Summary** screen, indicating your license is active. Confirm your license status by navigating to **Administration > Activation**. 
The banner states that **Your license is diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/install/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/install/_category_.json new file mode 100644 index 00000000000..094470741db --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/install/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 10 +} diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/install.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/install/airgap.md similarity index 91% rename from docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/install.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/install/airgap.md index 1d65c177dcd..8098b3e2011 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/install.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/install/airgap.md @@ -1,12 +1,12 @@ --- -sidebar_label: "Install VerteX" -title: "Install VerteX" -description: "Learn how to install VerteX in an airgap VMware environment." +sidebar_label: "Install Airgap Palette VerteX" +title: "Install Airgap Palette VerteX on VMware vSphere with Palette CLI" +description: "Install airgap, self-hosted Palette VerteX on VMware vSphere using the Palette CLI." icon: "" -sidebar_position: 40 +sidebar_position: 10 hide_table_of_contents: false -tags: ["vertex", "enterprise", "airgap", "vmware", "vsphere"] -keywords: ["self-hosted", "vertex"] +tags: ["self-hosted", "vertex", "vmware", "airgap", "cli"] +keywords: ["self-hosted", "vertex", "vmware", "airgap", "cli"] --- Palette VerteX can be installed on VMware vSphere in an airgap environment. When you install VerteX, a three-node @@ -16,13 +16,12 @@ assets. 
## Prerequisites -- You have completed the [Environment Setup](./environment-setup/environment-setup.md) steps and deployed the airgap - support VM. +- You have completed the [Environment Setup](../setup/airgap/airgap.md) steps and deployed the airgap support VM. - You will need to provide the Palette CLI an encryption passphrase to secure sensitive data. The passphrase must be between 8 to 32 characters long and contain a capital letter, a lowercase letter, a digit, and a special character. - Refer to the [Palette CLI Encryption](../../../../automation/palette-cli/palette-cli.md#encryption) section for more - information. + Refer to the [Palette CLI Encryption](../../../../../automation/palette-cli/palette-cli.md#encryption) section for + more information. - You can choose between two Operating Systems (OS) when installing Vertex. Review the requirements for each OS. @@ -30,7 +29,7 @@ assets. - [Red Hat Linux Enterprise](https://www.redhat.com/en) - you need a Red Hat subscription and a custom RHEL vSphere template with Kubernetes available in your vSphere environment. To learn how to create the required template, refer - to the [RHEL and PXK](../../../../byoos/image-builder/build-image-vmware/rhel-pxk.md) guide. + to the [RHEL and PXK](../../../../../byoos/image-builder/build-image-vmware/rhel-pxk.md) guide. :::warning @@ -38,11 +37,11 @@ assets. ::: -- Review the required VMware vSphere [permissions](../vmware-system-requirements.md). Ensure you have created the proper - custom roles and zone tags. +- Review the required VMware vSphere [permissions](../setup/airgap/vmware-system-requirements.md). Ensure you have + created the proper custom roles and zone tags. - We recommended the following resources for Palette VerteX. Refer to the - [Palette VerteX size guidelines](../../install-palette-vertex.md#instance-sizing) for additional sizing information. + [Palette VerteX size guidelines](../install/install.md#size-guidelines) for additional sizing information. 
- 8 CPUs per VM. @@ -71,7 +70,8 @@ assets. - x509 SSL certificate authority file in base64 format. This file is optional. - Zone tagging is required for dynamic storage allocation across fault domains when provisioning workloads that require - persistent storage. Refer to [Zone Tagging](../vmware-system-requirements.md#zone-tagging) for information. + persistent storage. Refer to [Zone Tagging](../setup/airgap/vmware-system-requirements.md#zone-tagging) for + information. - Assigned IP addresses for application workload services, such as Load Balancer services. @@ -86,7 +86,7 @@ assets. Self-hosted Palette VerteX installations provide a system Private Cloud Gateway (PCG) out-of-the-box and typically do not require a separate, user-installed PCG. However, you can create additional PCGs as needed to support provisioning into remote data centers that do not have a direct incoming connection from the Palette console. To learn how to install -a PCG on VMware, check out the [Deploy to VMware vSphere](../../../../clusters/pcg/deploy-pcg/vmware.md) guide. +a PCG on VMware, check out the [Deploy to VMware vSphere](../../../../../clusters/pcg/deploy-pcg/vmware.md) guide. ::: @@ -138,7 +138,7 @@ Use the following steps to install Palette VerteX. 3. Invoke the Palette CLI by using the `ec` command to install the enterprise cluster. The interactive CLI prompts you for configuration details and then initiates the installation. For more information about the `ec` subcommand, refer - to [Palette Commands](../../../../automation/palette-cli/commands/ec.md). + to [Palette Commands](../../../../../automation/palette-cli/commands/ec.md). ```bash palette ec install @@ -147,8 +147,8 @@ Use the following steps to install Palette VerteX. :::warning If you deployed the airgap support VM using a generic OVA, the Palette CLI may not be in the `usr/bin` path. 
Ensure - that you complete step **22** of the [Environment Setup](./environment-setup/vmware-vsphere-airgap-instructions.md) - guide, which installs the VerteX airgap binary and moves the Palette CLI to the correct path. + that you complete step 18 of the [Environment Setup](../setup/airgap/ova.md) guide, which installs the VerteX airgap + binary and moves the Palette CLI to the correct path. ::: @@ -157,10 +157,10 @@ Use the following steps to install Palette VerteX. 5. Select the desired OS you want to use for the installation. Review the table below for more information about each option. - | **Option** | **Description** | **Requirements** | - | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | - | **Ubuntu Pro** | [Ubuntu Pro](https://ubuntu.com/pro) is the default option. It provides access to FIPS 140-3 certified cryptographic packages. | Ubuntu Pro token. | - | **Red Hat Linux Enterprise** | Red Hat Linux Enterprise provides access to Red Hat Enterprise Linux. | Red Hat subscription and a custom RHEL vSphere template with Kubernetes. Review the [RHEL and PXK](../../../../byoos/image-builder/build-image-vmware/rhel-pxk.md) to learn how to create the required template. 
| + | **Option** | **Description** | **Requirements** | + | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | **Ubuntu Pro** | [Ubuntu Pro](https://ubuntu.com/pro) is the default option. It provides access to FIPS 140-3 certified cryptographic packages. | Ubuntu Pro token. | + | **Red Hat Linux Enterprise** | Red Hat Linux Enterprise provides access to Red Hat Enterprise Linux. | Red Hat subscription and a custom RHEL vSphere template with Kubernetes. Review the [RHEL and PXK](../../../../../byoos/image-builder/build-image-vmware/rhel-pxk.md) to learn how to create the required template. | 6. Depending on your OS selection, you will be prompted to provide the required information. For Ubuntu Pro, you will need to provide your Ubuntu Pro token. For Red Hat Linux Enterprise, you will need to provide the path to the @@ -189,17 +189,16 @@ Use the following steps to install Palette VerteX. | **Service IP Range** | Enter the IP address range that will be used to assign IP addresses to services in the EC cluster. The service IP addresses should be unique and not overlap with any machine IPs in the environment. | 9. Select the OCI registry type and provide the configuration values. Review the following table for more information. - If you are using the Palette CLI from inside an - [airgap support VM](./environment-setup/vmware-vsphere-airgap-instructions.md), the CLI will automatically detect - the airgap environment and prompt you to **Use local, air-gapped Pack Registry?** Type `y` to use the local - resources and skip filling in the OCI registry URL and credentials. 
+ If you are using the Palette CLI from inside an [airgap support VM](../setup/airgap/airgap.md), the CLI will + automatically detect the airgap environment and prompt you to **Use local, air-gapped Pack Registry?** Type `y` to + use the local resources and skip filling in the OCI registry URL and credentials. :::warning For self-hosted OCI registries, ensure you have the server Certificate Authority (CA) certificate file available on the host where you are using the Palette CLI. You will be prompted to provide the file path to the OCI CA certificate. Failure to provide the OCI CA certificate will result in self-linking errors. Refer to the - [Self-linking Error](../../../../troubleshooting/enterprise-install.md#scenario---self-linking-error) + [Self-linking Error](../../../../../troubleshooting/enterprise-install.md#scenario---self-linking-error) troubleshooting guide for more information. ::: @@ -410,8 +409,8 @@ Use the following steps to install Palette VerteX. 19. After login, a Summary page is displayed. Palette VerteX is installed with a self-signed SSL certificate. To assign a different SSL certificate you must upload the SSL certificate, SSL certificate key, and SSL certificate authority files to Palette VerteX. You can upload the files using the Palette VerteX system console. Refer to the - [Configure HTTPS Encryption](/vertex/system-management/ssl-certificate-management) page for instructions on how to - upload the SSL certificate files to Palette VerteX. + [Configure HTTPS Encryption](../../../system-management/ssl-certificate-management.md) page for instructions on how + to upload the SSL certificate files to Palette VerteX. 20. The last step is to start setting up a tenant. To learn how to create a tenant, check out the [Tenant Management](../../../system-management/tenant-management.md) guide. 
@@ -452,18 +451,10 @@ You can also validate that a three-node Kubernetes cluster is launched and Palet ## Next Steps - - -## Resources - -- [Environment Setup](./environment-setup/vmware-vsphere-airgap-instructions.md) - -- [Create a Tenant](../../../system-management/tenant-management.md) - -- [Enterprise Install Troubleshooting](../../../../troubleshooting/enterprise-install.md) - -- [Palette CLI](../../../../automation/palette-cli/install-palette-cli.md#download-and-setup) - -- [System Management](../../../system-management/system-management.md) - -- [VMware System Requirements](../vmware-system-requirements.md) + diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/install/install.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/install/install.md new file mode 100644 index 00000000000..035efe2ea27 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/install/install.md @@ -0,0 +1,77 @@ +--- +sidebar_label: "Install" +title: "Install Palette VerteX on VMware vSphere with Palette CLI" +description: + "Review system requirements for installing self-hosted Palette VerteX on VMware vSphere using the Palette CLI." +icon: "" +hide_table_of_contents: false +tags: ["self-hosted", "vertex", "install", "vmware", "cli"] +keywords: ["self-hosted", "vertex", "install", "vmware", "cli"] +--- + +:::warning + +This is the former [Installation](https://docs.spectrocloud.com/vertex/install-palette-vertex/) page. Leave only what is +applicable to VMware. Convert to partials for reuse. + +::: + +Palette is available as a self-hosted application that you install in your environment. Palette is available in the +following modes. 
+ +| **Method**                               | **Supported Platforms**  | **Description**                                                       | **Install Guide**                                                                  | +| ---------------------------------------- | ------------------------ | --------------------------------------------------------------------- | ---------------------------------------------------------------------------------- | +| Palette CLI                              | VMware                   | Install Palette in a VMware environment.                              | Install on VMware                                                                  | +| Helm Chart                               | Kubernetes               | Install Palette using a Helm Chart in an existing Kubernetes cluster. | [Install on Kubernetes](../../kubernetes/install/install.md)                       | +| Palette Management Appliance             | VMware, Bare Metal, MAAS | Install Palette using the Palette Management Appliance ISO file.      | [Install with Palette Management Appliance](../../management-appliance/install.md) | + +## Airgap Installation + +You can also install Palette in an airgap environment. For more information, refer to the +[Airgap Installation](./airgap.md) section. + +| **Method**                               | **Supported Airgap Platforms** | **Description**                                                                                                        | **Install Guide**                                                                  | +| ---------------------------------------- | ------------------------------ | ---------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- | +| Palette CLI                              | VMware                         | Install Palette in a VMware environment using your own OCI registry server.                                            | [VMware Airgap Install](./airgap.md)                                               | +| Helm Chart                               | Kubernetes                     | Install Palette using a Helm Chart in an existing Kubernetes cluster with your own OCI registry server or use AWS ECR. | [Kubernetes Airgap Install](../../kubernetes/install/airgap.md)                    | +| Palette Management Appliance             | VMware, Bare Metal, MAAS       | Install Palette using the Palette Management Appliance ISO file.
| [Install with Palette Management Appliance](../../management-appliance/install.md) | + +The next sections provide sizing guidelines we recommend you review before installing Palette in your environment. + +## Size Guidelines + + + +## Kubernetes Requirements + + + +The following table presents the Kubernetes version corresponding to each Palette version for +VMware and [Kubernetes](../../kubernetes/kubernetes.md) installations. +Additionally, for VMware installations, it provides the download URLs for the required Operating System and Kubernetes +distribution OVA. + + + + + + + + + + + + + + + + + +## Proxy Requirements + + diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/install.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/install/non-airgap.md similarity index 90% rename from docs/docs-content/vertex/install-palette-vertex/install-on-vmware/install.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/install/non-airgap.md index 1fc269c0eb3..a22b3920598 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/install.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/install/non-airgap.md @@ -1,12 +1,12 @@ --- -sidebar_label: "Non-Airgap Install" -title: "Non-Airgap Install" -description: "Learn how to deploy Palette VerteX on VMware." +sidebar_label: "Install Non-Airgap Palette VerteX" +title: "Install Non-Airgap Palette VerteX on VMware vSphere with Palette CLI" +description: "Install non-airgap, self-hosted Palette VerteX on VMware vSphere using the Palette CLI." 
icon: "" +sidebar_position: 20 hide_table_of_contents: false -sidebar_position: 0 -tags: ["vertex", "vmware"] -keywords: ["self-hosted", "vertex"] +tags: ["self-hosted", "vertex", "vmware", "non-airgap", "cli"] +keywords: ["self-hosted", "vertex", "vmware", "non-airgap", "cli"] --- You can install Palette VerteX in a connected environment using the Palette Command Line Interface (CLI). The CLI @@ -19,8 +19,8 @@ Palette VerteX will be deployed. :::tip We recommend using the `--validate` flag with the `ec install` command to validate the installation. Check out the -[Validate Environment](../../../automation/palette-cli/commands/ec.md#validate-environment) section of the EC command -for more information. +[Validate Environment](../../../../../automation/palette-cli/commands/ec.md#validate-environment) section of the EC +command for more information. ::: @@ -30,12 +30,12 @@ for more information. host. - Palette CLI installed and available. Refer to the Palette CLI - [Install](../../../automation/palette-cli/install-palette-cli.md#download-and-setup) page for guidance. + [Install](../../../../../automation/palette-cli/install-palette-cli.md#download-and-setup) page for guidance. - You will need to provide the Palette CLI an encryption passphrase to secure sensitive data. The passphrase must be between 8 to 32 characters long and contain a capital letter, a lowercase letter, a digit, and a special character. - Refer to the [Palette CLI Encryption](../../../automation/palette-cli/palette-cli.md#encryption) section for more - information. + Refer to the [Palette CLI Encryption](../../../../../automation/palette-cli/palette-cli.md#encryption) section for + more information. - You can choose between two Operating Systems (OS) when installing Vertex. Review the requirements for each OS. @@ -43,7 +43,7 @@ for more information. 
- [Red Hat Linux Enterprise](https://www.redhat.com/en) - you need a Red Hat subscription and a custom RHEL vSphere template with Kubernetes available in your vSphere environment. To learn how to create the required template, refer - to the [RHEL and PXK](../../../byoos/image-builder/build-image-vmware/rhel-pxk.md) guide. + to the [RHEL and PXK](../../../../../byoos/image-builder/build-image-vmware/rhel-pxk.md) guide. :::warning @@ -51,11 +51,11 @@ for more information. ::: -- Review the required VMware vSphere [permissions](vmware-system-requirements.md). Ensure you have created the proper - custom roles and zone tags. +- Review the required VMware vSphere [permissions](../setup/non-airgap/vmware-system-requirements.md). Ensure you have + created the proper custom roles and zone tags. - We recommend the following resources for Palette VerteX. Refer to the - [Palette VerteX size guidelines](../install-palette-vertex.md#instance-sizing) for additional sizing information. + [Palette VerteX size guidelines](../install/install.md#size-guidelines) for additional sizing information. - 8 CPUs per VM. @@ -92,12 +92,12 @@ for more information. ::: - Zone tagging is required for dynamic storage allocation across fault domains when provisioning workloads that require - persistent storage. Refer to [Zone Tagging](vmware-system-requirements.md#zone-tagging) for information. + persistent storage. Refer to [Zone Tagging](../setup/non-airgap/vmware-system-requirements.md#zone-tagging) for + information. - Assigned IP addresses for application workload services, such as Load Balancer services. - Ensure Palette has access to the required domains and ports. Refer to the - [Required Domains](../install-palette-vertex.md#proxy-requirements) section for more information. + [Required Domains](../install/install.md#proxy-requirements) section for more information.
- A [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) to manage persistent storage, with the annotation `storageclass.kubernetes.io/is-default-class` set to `true`. To override the default StorageClass for a @@ -110,7 +111,7 @@ for more information. Palette VerteX installations provide a system Private Cloud Gateway (PCG) out-of-the-box and typically do not require a separate, user-installed PCG. However, you can create additional PCGs as needed to support provisioning into remote data centers that do not have a direct incoming connection from the Palette console. To learn how to install a PCG on VMware, -check out the [Deploy to VMware vSphere](../../../clusters/pcg/deploy-pcg/vmware.md) guide. +check out the [Deploy to VMware vSphere](../../../../../clusters/pcg/deploy-pcg/vmware.md) guide. ::: @@ -131,15 +132,15 @@ Use the following steps to install Palette VerteX. user account you will use to deploy the VerteX installation. 3. Find the OVA download URL corresponding to your Palette VerteX version in the - [Kubernetes Requirements](../install-palette-vertex.md#kubernetes-requirements) section. Use the identified URL to - import the Operating System and Kubernetes distribution OVA required for the install. Place the OVA in the + [Kubernetes Requirements](../install/install.md#kubernetes-requirements) section. Use the identified URL to import + the Operating System and Kubernetes distribution OVA required for the install. Place the OVA in the `spectro-templates` folder. Refer to the [Import Items to a Content Library](https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-vm-administration/GUID-B413FBAE-8FCB-4598-A3C2-8B6DDA772D5C.html?hWord=N4IghgNiBcIJYFsAOB7ATgFwAQYKbIjDwGcQBfIA) guide for information about importing an OVA in vCenter. 4. Append an `r_` prefix to the OVA name and remove the `.ova` suffix after the import. For example, the final output should look like `r_u-2204-0-k-12813-0`. 
This naming convention is required for the install process to identify the - OVA. Refer to the [Additional OVAs](../../../downloads/palette-vertex/additional-ovas.md) page for a list of + OVA. Refer to the [Additional OVAs](../../../../../downloads/palette-vertex/additional-ovas.md) page for a list of additional OVAs you can download and upload to your vCenter environment. :::tip @@ -161,14 +162,14 @@ Use the following steps to install Palette VerteX. 6. Invoke the Palette CLI by using the `ec` command to install the enterprise cluster. The interactive CLI prompts you for configuration details and then initiates the installation. For more information about the `ec` subcommand, refer - to [Palette Commands](../../../automation/palette-cli/commands/ec.md). + to [Palette Commands](../../../../../automation/palette-cli/commands/ec.md). ```bash palette ec install ``` You can also use the `--validate` flag to validate the installation prior to deployment. Refer to the - [Validate Environment](../../../automation/palette-cli/commands/ec.md#validate-environment) section of the EC + [Validate Environment](../../../../../automation/palette-cli/commands/ec.md#validate-environment) section of the EC command for more information. ```bash @@ -180,10 +181,10 @@ Use the following steps to install Palette VerteX. 8. Select the desired OS you want to use for the installation. Review the table below for more information about each option. - | **Option** | **Description** | **Requirements** | - | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | - | **Ubuntu Pro** | [Ubuntu Pro](https://ubuntu.com/pro) is the default option. 
It provides access to FIPS 140-3 certified cryptographic packages. | Ubuntu Pro token. | - | **Red Hat Linux Enterprise** | Red Hat Linux Enterprise provides access to Red Hat Enterprise Linux. | Red Hat subscription and a custom RHEL vSphere template with Kubernetes. Review the [RHEL and PXK](../../../byoos/image-builder/build-image-vmware/rhel-pxk.md) to learn how to create the required template. | + | **Option** | **Description** | **Requirements** | + | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | **Ubuntu Pro** | [Ubuntu Pro](https://ubuntu.com/pro) is the default option. It provides access to FIPS 140-3 certified cryptographic packages. | Ubuntu Pro token. | + | **Red Hat Linux Enterprise** | Red Hat Linux Enterprise provides access to Red Hat Enterprise Linux. | Red Hat subscription and a custom RHEL vSphere template with Kubernetes. Review the [RHEL and PXK](../../../../../byoos/image-builder/build-image-vmware/rhel-pxk.md) to learn how to create the required template. | 9. Depending on your OS selection, you will be prompted to provide the required information. For Ubuntu Pro, you will need to provide your Ubuntu Pro token. For Red Hat Linux Enterprise, you will need to provide the path to the @@ -385,13 +386,13 @@ Use the following steps to install Palette VerteX. 18. Log in to the system console using the credentials provided in the Enterprise Cluster Details output. After login, you will be prompted to create a new password. Enter a new password and save your changes. 
Refer to the - [password requirements](../../system-management/account-management/credentials.md#password-requirements-and-security) + [password requirements](../../../system-management/account-management/credentials.md#password-requirements-and-security) documentation page to learn more about the password requirements. Use the username `admin` and your new password to log in to the system console. You can create additional system administrator accounts and assign roles to users in the system console. Refer to the - [Account Management](../../system-management/account-management/account-management.md) documentation page for more - information. + [Account Management](../../../system-management/account-management/account-management.md) documentation page for + more information. :::info @@ -406,11 +407,11 @@ Use the following steps to install Palette VerteX. 19. After login, a Summary page is displayed. Palette VerteX is installed with a self-signed SSL certificate. To assign a different SSL certificate you must upload the SSL certificate, SSL certificate key, and SSL certificate authority files to Palette VerteX. You can upload the files using the Palette VerteX system console. Refer to the - [Configure HTTPS Encryption](/vertex/system-management/ssl-certificate-management) page for instructions on how to - upload the SSL certificate files to Palette VerteX. + [Configure HTTPS Encryption](../../../system-management/ssl-certificate-management.md) page for instructions on how + to upload the SSL certificate files to Palette VerteX. 20. The last step is to start setting up a tenant. To learn how to create a tenant, check out the - [Tenant Management](../../system-management/tenant-management.md) guide. + [Tenant Management](../../../system-management/tenant-management.md) guide. 
![Screenshot of the Summary page showing where to click Go to Tenant Management button.](/vertex_installation_install-on-vmware_goto-tenant-management.webp) @@ -448,18 +449,10 @@ You can also validate that a three-node Kubernetes cluster is launched and Palet ## Next Steps - - -## Resources - -- [Airgap Instructions](./airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md) - -- [Create a Tenant](../../system-management/tenant-management.md) - -- [Enterprise Install Troubleshooting](../../../troubleshooting/enterprise-install.md) - -- [Palette CLI](../../../automation/palette-cli/install-palette-cli.md#download-and-setup) - -- [System Management](../../system-management/system-management.md) - -- [VMware System Requirements](vmware-system-requirements.md) + diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/_category_.json new file mode 100644 index 00000000000..988cdc1b69c --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/_category_.json @@ -0,0 +1,4 @@ +{ + "label": "Set Up", + "position": 0 +} diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/_category_.json new file mode 100644 index 00000000000..094470741db --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 10 +} diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/airgap-install.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/airgap.md similarity index 60% rename from docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/airgap-install.md rename to 
docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/airgap.md index 20cda54a6c6..68822824379 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/airgap-install.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/airgap.md @@ -1,12 +1,11 @@ --- -sidebar_label: "Airgap Installation" -title: "Airgap Installation" -description: "Learn how to deploy VerteX in an airgapped environment." +sidebar_label: "Set Up Airgap Environment" +title: "Set Up Airgap Environment" +description: "Prepare to install your self-hosted, airgapped Palette VerteX instance in VMware vSphere." icon: "" hide_table_of_contents: false -sidebar_position: 0 -tags: ["vertex", "enterprise", "airgap", "vmware", "vsphere"] -keywords: ["self-hosted", "vertex"] +tags: ["self-hosted", "vertex", "airgap", "vmware"] +keywords: ["self-hosted", "vertex", "airgap", "vmware"] --- You can install Palette VerteX in an airgap VMware vSphere environment. An airgap environment lacks direct access to the @@ -45,19 +44,23 @@ following diagram outlines the major pre-installation steps for an airgap instal 4. Install Palette using the Palette CLI or the Kubernetes Helm chart. -Configure your Palette environment +## Environment Setup -## Get Started +This section helps you prepare your VMware vSphere airgap environment for VerteX installation. You can choose between +two methods to prepare your environment: -To get started with an airgap Palette installation, begin by reviewing the -[Environment Setup](./environment-setup/vmware-vsphere-airgap-instructions.md) guide. +1. If you have a Red Hat Enterprise Linux (RHEL) VM deployed in your environment, follow the + [Environment Setup with an Existing RHEL VM](./rhel-vm.md) guide to learn how to prepare this VM for VerteX + installation. +2. If you do not have an RHEL VM, follow the [Environment Setup with OVA](./ova.md) guide. 
This guide will show you how + to use an OVA to deploy an airgap support VM in your VMware vSphere environment, which will then assist with the + VerteX installation process. -## Resources +## Supported Platforms -- [Environment Setup](./environment-setup/vmware-vsphere-airgap-instructions.md) +The following table outlines the supported platforms for an airgap VerteX installation and the supported OCI registries. -- [Airgap Install Checklist](./checklist.md) - -- [Airgap Install](./install.md) - -- [Additional Packs](../../../../downloads/palette-vertex/additional-packs.md) +| **Platform** | **OCI Registry** | **Supported** | +| ------------ | ---------------- | ------------- | +| Kubernetes | Harbor | ✅ | +| Kubernetes | AWS ECR | ✅ | diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/ova.md similarity index 93% rename from docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/ova.md index a1f963227c4..a49ef71c6fa 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/ova.md @@ -1,12 +1,12 @@ --- -sidebar_label: "Environment Setup with OVA" -title: "Environment Setup with OVA" -description: "Learn how to install VerteX in an airgap environment." +sidebar_label: "Set Up Environment with OVA" +title: "Set Up Environment with OVA" +description: "Set up a VM using an OVA to install self-hosted Palette VerteX in an airgapped environment." 
icon: "" hide_table_of_contents: false sidebar_position: 20 -tags: ["vertex", "enterprise", "airgap", "vmware", "vsphere"] -keywords: ["self-hosted", "vertex"] +tags: ["self-hosted", "vertex", "airgap", "vmware"] +keywords: ["self-hosted", "vertex", "airgap", "vmware"] --- This guide helps you to prepare your airgap environment for VerteX installation using an OVA to deploy and initialize an @@ -14,9 +14,8 @@ airgap support VM. :::info -This guide is for preparing your airgap environment only. For instructions on installing VerteX on VMware, check the -[Install](../install.md) guide. A checklist of the steps you will complete to prepare your airgap environment for VerteX -is available on the [Checklist](../checklist.md) page. +This guide is for preparing your airgap environment only. For instructions on installing self-hosted Palette VerteX on +VMware vSphere, refer to our [Install](../../install/airgap.md) guide. ::: @@ -51,10 +50,10 @@ VerteX. - Configure the Dynamic Host Configuration Protocol (DHCP) to access the airgap support VM via SSH. You can disable DHCP or modify the IP address after deploying the airgap support VM. -- Review the required vSphere [permissions](../../vmware-system-requirements.md) and ensure you've created the proper - custom roles and zone tags. Zone tagging enables dynamic storage allocation across fault domains when provisioning - workloads that require persistent storage. Refer to [Zone Tagging](../../vmware-system-requirements.md#zone-tagging) - for information. +- Review the required vSphere [permissions](./vmware-system-requirements.md) and ensure you've created the proper custom + roles and zone tags. Zone tagging enables dynamic storage allocation across fault domains when provisioning workloads + that require persistent storage. Refer to [Zone Tagging](./vmware-system-requirements.md#zone-tagging) for + information.
@@ -63,7 +62,7 @@ VerteX. Self-hosted VerteX installations provide a system Private Cloud Gateway (PCG) out-of-the-box and typically do not require a separate, user-installed PCG. However, you can deploy additional PCG instances to support provisioning into remote data centers without a direct incoming connection to VerteX. To learn how to install a PCG on VMware, check out -the [VMware](../../../../../clusters/pcg/deploy-pcg/vmware.md) guide. +the [VMware](../../../../../../clusters/pcg/deploy-pcg/vmware.md) guide. ::: @@ -357,7 +356,7 @@ The default container runtime for OVAs is [Podman](https://podman.io/), not Dock Once the Palette VerteX airgap binary completes its tasks, you will receive a **Setup Completed** success message. -19. Review the [Additional Packs](../../../../../downloads/palette-vertex/additional-packs.md) page and identify any +19. Review the [Additional Packs](../../../../../../downloads/palette-vertex/additional-packs.md) page and identify any additional packs you want to add to your OCI registry. You can also add additional packs after the installation is complete. @@ -370,8 +369,8 @@ The default container runtime for OVAs is [Podman](https://podman.io/), not Dock 22. In the **Deploy OVF Template** wizard, enter the following URL to import the Operating System (OS) and Kubernetes distribution OVA required for the installation. Refer to the - [Kubernetes Requirements](../../../install-palette-vertex.md#kubernetes-requirements) section to learn if the - version of Palette you are installing requires a new OS and Kubernetes OVA. + [Kubernetes Requirements](../../install/install.md#kubernetes-requirements) section to learn if the version of + Palette you are installing requires a new OS and Kubernetes OVA. Consider the following example for reference. @@ -396,8 +395,8 @@ The default container runtime for OVAs is [Podman](https://podman.io/), not Dock Place the OVA in the **spectro-templates** folder or in the folder you created in step **21**. 
Append the `r_` prefix, and remove the `.ova` suffix when assigning its name and target location. For example, the final output should look like `r_u-2204-0-k-1294-0`. This naming convention is required for the installation process to identify - the OVA. Refer to the [Additional OVAs](../../../../../downloads/palette-vertex/additional-ovas.md) page for a list - of additional OS OVAs. + the OVA. Refer to the [Additional OVAs](../../../../../../downloads/palette-vertex/additional-ovas.md) page for a + list of additional OS OVAs. You can terminate the deployment after the OVA is available in the `spectro-templates` folder. Refer to the [Deploy an OVF or OVA Template](https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-vm-administration/GUID-AFEDC48B-C96F-4088-9C1F-4F0A30E965DE.html) @@ -487,7 +486,8 @@ installed in the airgap support VM and ready to use. palette ec install ``` -Complete all the Palette CLI steps outlined in the [Install VerteX](../install.md) guide from the airgap support VM. +Complete all the Palette CLI steps outlined in the [Install VerteX](../../install/airgap.md) guide from the airgap +support VM. 
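The template-naming convention described above (add an `r_` prefix, drop the `.ova` suffix) can be derived with plain shell parameter expansion, for example:

```shell
# Derive the vSphere template name from an OVA file name per the naming
# convention above: prefix with r_ and strip the .ova suffix.
ova_file="u-2204-0-k-1294-0.ova"
template_name="r_${ova_file%.ova}"
echo "${template_name}"   # r_u-2204-0-k-1294-0
```

This can help avoid typos when renaming imported OVAs, since the install process only recognizes templates that follow the convention exactly.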
:::info diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/env-setup-vm-vertex.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/rhel-vm.md similarity index 55% rename from docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/env-setup-vm-vertex.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/rhel-vm.md index 86cee856492..b594475a25a 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/env-setup-vm-vertex.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/rhel-vm.md @@ -1,12 +1,12 @@ --- -sidebar_label: "Environment Setup with RHEL" -title: "Environment Setup with an Existing RHEL VM" -description: "Learn how to prepare your airgap environment for VerteX installation using an existing RHEL VM" +sidebar_label: "Set Up Environment with RHEL" +title: "Set Up Environment with Existing RHEL VM" +description: "Prepare your airgap environment for installing self-hosted Palette VerteX using an existing RHEL VM." icon: "" hide_table_of_contents: false -sidebar_position: 35 -tags: ["self-hosted", "vertex", "airgap", "vmware", "vsphere", "rhel"] -keywords: ["self-hosted", "vertex"] +sidebar_position: 30 +tags: ["self-hosted", "vertex", "airgap", "vmware", "rhel"] +keywords: ["self-hosted", "vertex", "airgap", "vmware", "rhel"] --- This guide helps you prepare your VMware vSphere airgap environment for VerteX installation using an existing Red Hat @@ -18,7 +18,7 @@ for hosting VerteX images and assists in starting the VerteX installation. :::info This guide is for preparing your airgap environment only. For instructions on installing VerteX on VMware, refer to the -[Install VerteX](../install.md) guide. +[Install VerteX](../../install/airgap.md) guide. 
::: @@ -29,6 +29,6 @@ This guide is for preparing your airgap environment only. For instructions on in diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/vmware-system-requirements.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/vmware-system-requirements.md similarity index 89% rename from docs/docs-content/vertex/install-palette-vertex/install-on-vmware/vmware-system-requirements.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/vmware-system-requirements.md index 4b966a4f1cf..c559ffe9705 100644 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/vmware-system-requirements.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/vmware-system-requirements.md @@ -1,14 +1,20 @@ --- -sidebar_label: "VMware System and Permission Requirements" +sidebar_label: "System and Permission Requirements" title: "VMware System and Permission Requirements" description: "Review VMware system requirements and cloud account permissions." icon: "" hide_table_of_contents: false -sidebar_position: 30 -tags: ["vertex", "self-hosted", "vmware"] -keywords: ["self-hosted", "vertex"] +sidebar_position: 10 +tags: ["self-hosted", "vertex", "vmware", "permissions"] +keywords: ["self-hosted", "vertex", "vmware", "permissions"] --- +:::danger + +Convert to partials for reuse + +::: + Before installing Palette VerteX on VMware, review the following system requirements and permissions. The vSphere user account used to deploy VerteX must have the required permissions to access the proper roles and objects in vSphere. @@ -26,7 +32,7 @@ Start by reviewing the required action items below: 4. If you are deploying VerteX with Red Hat Enterprise Linux (RHEL), ensure you create a custom image containing your RHEL subscription credentials and the desired Kubernetes version.
This image template must be uploaded to the vSphere `spectro-templates` folder. Instructions for creating the custom RHEL image with Kubernetes are available in the - [RHEL and PXK](../../../byoos/image-builder/build-image-vmware/rhel-pxk.md) guide. + [RHEL and PXK](../../../../../../byoos/image-builder/build-image-vmware/rhel-pxk.md) guide. :::info @@ -43,12 +49,12 @@ guide if you need help creating a custom role in vSphere. The required custom ro - A root-level role with access to higher-level vSphere objects. This role is referred to as the _Spectro root role_. Check out the - [Root-Level Role Privileges](../../../clusters/data-center/vmware/permissions.md#spectro-root-role-privileges) table - for the list of privileges required for the root-level role. + [Root-Level Role Privileges](../../../../../../clusters/data-center/vmware/permissions.md#spectro-root-role-privileges) + table for the list of privileges required for the root-level role. - A role with the required privileges for deploying VMs. This role is referred to as the _Spectro role_. Review the - [Spectro Role Privileges](../../../clusters/data-center/vmware/permissions.md#spectro-role-privileges) table for the - list of privileges required for the Spectro role. + [Spectro Role Privileges](../../../../../../clusters/data-center/vmware/permissions.md#spectro-role-privileges) table + for the list of privileges required for the Spectro role. The user account you use to deploy VerteX must have access to both roles. 
Each vSphere object required by VerteX must have a diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/non-airgap/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/non-airgap/_category_.json new file mode 100644 index 00000000000..455b8e49697 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/non-airgap/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 20 +} diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/non-airgap/non-airgap.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/non-airgap/non-airgap.md new file mode 100644 index 00000000000..ab610981f14 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/non-airgap/non-airgap.md @@ -0,0 +1,17 @@ +--- +sidebar_label: "Set Up Non-Airgap Environment" +title: "Set Up Non-Airgap Environment" +description: + "No prior setup is needed when installing self-hosted Palette VerteX on VMware vSphere with internet connectivity." +icon: "" +hide_table_of_contents: false +tags: ["self-hosted", "vertex", "vmware", "non-airgap"] +keywords: ["self-hosted", "vertex", "vmware", "non-airgap"] +--- + +:::info + +No prior setup is necessary for non-airgap installations. Ensure you have the required VMware permissions. For system +prerequisites, refer to the Prerequisites section of the installation guide.
+ +::: diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/non-airgap/vmware-system-requirements.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/non-airgap/vmware-system-requirements.md new file mode 100644 index 00000000000..c559ffe9705 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/setup/non-airgap/vmware-system-requirements.md @@ -0,0 +1,133 @@ +--- +sidebar_label: "System and Permission Requirements" +title: "VMware System and Permission Requirements" +description: "Review VMware system requirements and cloud account permissions." +icon: "" +hide_table_of_contents: false +sidebar_position: 10 +tags: ["self-hosted", "vertex", "vmware", "permissions"] +keywords: ["self-hosted", "vertex", "vmware", "permissions"] +--- + +:::danger + +Convert to partials for reuse + +::: + +Before installing Palette VerteX on VMware, review the following system requirements and permissions. The vSphere user +account used to deploy VerteX must have the required permissions to access the proper roles and objects in vSphere. + +Start by reviewing the required action items below: + +1. Create the two custom vSphere roles. Check out the [Create Required Roles](#create-required-roles) section to create + the required roles in vSphere. + +2. Review the [vSphere Permissions](#vsphere-permissions) section to ensure the created roles have the required vSphere + privileges and permissions. + +3. Create node zones and regions for your Kubernetes clusters. Refer to the [Zone Tagging](#zone-tagging) section to + verify that the required tags are created in vSphere for proper resource allocation across fault domains. + +4. If you are deploying VerteX with Red Hat Enterprise Linux (RHEL), ensure you create a custom image containing your + RHEL subscription credentials and the desired Kubernetes version.
This image template must be uploaded to the vSphere + `spectro-templates` folder. Instructions for creating the custom RHEL image with Kubernetes are available in the + [RHEL and PXK](../../../../../../byoos/image-builder/build-image-vmware/rhel-pxk.md) guide. + +:::info + +The permissions listed on this page are also needed for deploying a Private Cloud Gateway (PCG) and workload cluster in +vSphere through VerteX. + +::: + +## Create Required Roles + +VerteX requires two custom roles to be created in vSphere before the installation. Refer to the +[Create a Custom Role](https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-security/GUID-18071E9A-EED1-4968-8D51-E0B4F526FDA3.html?hWord=N4IghgNiBcIE4HsIFMDOIC+Q) +guide if you need help creating a custom role in vSphere. The required custom roles are: + +- A root-level role with access to higher-level vSphere objects. This role is referred to as the _Spectro root role_. + Check out the + [Root-Level Role Privileges](../../../../../../clusters/data-center/vmware/permissions.md#spectro-root-role-privileges) + table for the list of privileges required for the root-level role. + +- A role with the required privileges for deploying VMs. This role is referred to as the _Spectro role_. Review the + [Spectro Role Privileges](../../../../../../clusters/data-center/vmware/permissions.md#spectro-role-privileges) table + for the list of privileges required for the Spectro role. + +The user account you use to deploy VerteX must have access to both roles. Each vSphere object required by VerteX must +have a +[Permission](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.security.doc/GUID-4B47F690-72E7-4861-A299-9195B9C52E71.html) +entry for the respective Spectro role. The following tables list the privileges required for each custom role.
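As an illustrative sketch only, the permission entries described above can be scripted with VMware's `govc` CLI. The role name, user principal, credentials, and inventory path below are placeholders, not values from this guide; adapt them to the roles and objects you created.

```shell
# Hypothetical example of granting a custom role on a vSphere object with
# the govc CLI. All names, credentials, and paths below are placeholders.
export GOVC_URL='https://vcenter.example.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='<password>'

# Grant the custom "SpectroRole" to the deployment user on a compute
# cluster object, propagating the permission to child objects.
govc permissions.set \
  -principal 'VSPHERE.LOCAL\spectro-user' \
  -role SpectroRole \
  -propagate=true \
  /dc-1/host/cluster-1
```

This is environment-dependent and only meant to show the shape of the operation; the vSphere Client can be used for the same assignment.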
+ +:::info + +For an in-depth explanation of vSphere authorization and permissions, check out the +[Understanding Authorization in vSphere](https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-security/GUID-74F53189-EF41-4AC1-A78E-D25621855800.html) +resource. + +::: + +## vSphere Permissions + + + +## Zone Tagging + +You can use tags to create node zones and regions for your Kubernetes clusters. The node zones and regions can be used +to dynamically place Kubernetes workloads and achieve higher availability. Kubernetes nodes inherit the zone and region +tags as [Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). Kubernetes workloads can +use the node labels to ensure that the workloads are deployed to the correct zone and region. + +The following is an example of node labels that are discovered and inherited from vSphere tags. The tag values are +applied to Kubernetes nodes in vSphere. + +```yaml hideClipboard +topology.kubernetes.io/region=usdc +topology.kubernetes.io/zone=zone3 +failure-domain.beta.kubernetes.io/region=usdc +failure-domain.beta.kubernetes.io/zone=zone3 +``` + +:::info + +To learn more about node zones and regions, refer to the +[Node Zones/Regions Topology](https://cloud-provider-vsphere.sigs.k8s.io/cloud_provider_interface.html) section of the +Cloud Provider Interface documentation. + +::: + +Zone tagging is required to install VerteX. It also benefits Kubernetes workloads with persistent storage needs that are +deployed in vSphere clusters through VerteX. Use vSphere tags on data centers and compute clusters to create distinct +zones in your environment. You can use vSphere +[Tag Categories and Tags](https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-vcenter-esxi-management/GUID-16422FF7-235B-4A44-92E2-532F6AED0923.html) +to create zones in your vSphere environment and assign them to vSphere objects.
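Before creating tags in vSphere, you can check candidate tag values locally. The following sketch applies the tag-validation regex listed under Tag Requirements later on this page; the helper name `is_valid_tag` is ours for illustration and is not part of any Spectro Cloud or VMware tooling.

```python
import re

# Tag-validation pattern from the Tag Requirements section of this page.
# Tags must start and end with an alphanumeric character; hyphens,
# underscores, and periods are allowed in between.
TAG_PATTERN = re.compile(r"(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?")


def is_valid_tag(tag: str) -> bool:
    """Return True if the whole tag matches the validation pattern."""
    return TAG_PATTERN.fullmatch(tag) is not None


# Tag values from the zone tagging example on this page.
for tag in ["region1", "az1", "az2", "az3"]:
    print(f"{tag}: {is_valid_tag(tag)}")  # all print True

print(is_valid_tag("-invalid-"))  # False: starts and ends with a hyphen
```

Note that `fullmatch` is required here; a partial `match` would accept strings with trailing invalid characters.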
+ +The zone tags you assign to your vSphere objects, such as a data center and clusters, are applied to the Kubernetes +nodes you deploy through VerteX into your vSphere environment. Kubernetes clusters deployed to other infrastructure +providers, such as public clouds, may have other native mechanisms for automatic discovery of zones. + +For example, assume a vCenter environment contains three compute clusters: `cluster-1`, `cluster-2`, and `cluster-3`. To +support this environment, you create the tag categories `k8s-region` and `k8s-zone`. The `k8s-region` tag is assigned to +the data center, and the `k8s-zone` tag is assigned to the compute clusters. + +The following table lists the tag values for the data center and compute clusters. + +| **vSphere Object** | **Assigned Name** | **Tag Category** | **Tag Value** | +| ------------------ | ----------------- | ---------------- | ------------- | +| **Datacenter** | dc-1 | k8s-region | region1 | +| **Cluster** | cluster-1 | k8s-zone | az1 | +| **Cluster** | cluster-2 | k8s-zone | az2 | +| **Cluster** | cluster-3 | k8s-zone | az3 | + +Create a tag category and tag values for each data center and cluster in your environment. Use the tag categories to +create zones. Use a name that is meaningful and that complies with the tag requirements listed in the following section. + +### Tag Requirements + +The following requirements apply to tags: + +- A valid tag must consist of alphanumeric characters. Hyphens, underscores, and periods are allowed between the first + and last characters. + +- The tag must start and end with an alphanumeric character.
+ +- The regex used for tag validation is `(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?`. diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/upgrade/_category_.json b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/upgrade/_category_.json new file mode 100644 index 00000000000..c3460c6dbde --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/upgrade/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 30 +} diff --git a/docs/docs-content/vertex/upgrade/upgrade-vmware/airgap.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/upgrade/airgap.md similarity index 88% rename from docs/docs-content/vertex/upgrade/upgrade-vmware/airgap.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/upgrade/airgap.md index 08c16b73c76..dc86156e451 100644 --- a/docs/docs-content/vertex/upgrade/upgrade-vmware/airgap.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/upgrade/airgap.md @@ -1,29 +1,29 @@ --- -sidebar_label: "Airgap" -title: "Upgrade Airgap Palette VerteX Installed on VMware vSphere" -description: "Learn how to upgrade self-hosted airgap Palette VerteX in VMware." +sidebar_label: "Upgrade Airgap Palette VerteX" +title: "Upgrade Airgap Palette VerteX on VMware vSphere" +description: "Upgrade a self-hosted, airgap Palette VerteX instance installed on VMware vSphere using the Palette CLI." icon: "" sidebar_position: 10 -tags: ["vertex", "self-hosted", "vmware", "airgap", "upgrade"] -keywords: ["self-hosted", "vertex"] +tags: ["self-hosted", "vertex", "airgap", "vmware", "upgrade", "cli"] +keywords: ["self-hosted", "vertex", "airgap", "vmware", "upgrade", "cli"] --- This guide takes you through the process of upgrading a self-hosted airgap Palette VerteX instance installed on VMware vSphere.
Before upgrading Palette VerteX to a new major version, you must first update it to the latest patch version of -the latest minor version available. Refer to the [Supported Upgrade Paths](../upgrade.md#supported-upgrade-paths) -section for details. +the latest minor version available. Refer to the [Supported Upgrade Paths](./upgrade.md#supported-upgrade-paths) section +for details. :::warning If you are upgrading from a Palette VerteX version that is older than 4.4.14, ensure that you have executed the utility script to make the CNS mapping unique for the associated PVC. For more information, refer to the -[Troubleshooting guide](../../../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping). +[Troubleshooting guide](../../../../../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping). ::: If your setup includes a PCG, you must also -[allow the PCG to upgrade automatically](../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or minor -Palette VerteX upgrade. +[allow the PCG to upgrade automatically](../../../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or +minor Palette VerteX upgrade. ## Prerequisites @@ -32,7 +32,7 @@ Palette VerteX upgrade. - Access to the Palette VerteX airgap support Virtual Machine (VM) that you used for the initial Palette VerteX installation. -- Refer to [Access Palette VerteX](../../vertex.md#access-palette-vertex) to download the new airgap Palette VerteX +- Refer to [Access Palette VerteX](../../../vertex.md#access-palette-vertex) to download the new airgap Palette VerteX installation bin and, if necessary, receive a link to the new OS and Kubernetes OVA. - Contact our Support Team at support@spectrocloud.com to learn if the new version of Palette VerteX requires a new OS @@ -121,8 +121,8 @@ one through four. Otherwise, start at step five. curl --user : --output airgap-4.2.12.bin https://software.spectrocloud.com/airgap-v4.2.12.bin ``` -8. 
Refer to the [Additional Packs](../../../downloads/palette-vertex/additional-packs.md) page and update the packs you - are currently using. You must update each pack separately. +8. Refer to the [Additional Packs](../../../../../downloads/palette-vertex/additional-packs.md) page and update the + packs you are currently using. You must update each pack separately. 9. Use the following command template to execute the new Palette VerteX airgap installation bin. diff --git a/docs/docs-content/vertex/upgrade/upgrade-vmware/non-airgap.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/upgrade/non-airgap.md similarity index 82% rename from docs/docs-content/vertex/upgrade/upgrade-vmware/non-airgap.md rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/upgrade/non-airgap.md index 7decb68be37..305f9bde9d8 100644 --- a/docs/docs-content/vertex/upgrade/upgrade-vmware/non-airgap.md +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/upgrade/non-airgap.md @@ -1,29 +1,30 @@ --- -sidebar_label: "Non-airgap" -title: "Upgrade Palette VerteX Installed on VMware vSphere" -description: "Learn how to upgrade self-hosted Palette VerteX in VMware vSphere." +sidebar_label: "Upgrade Non-Airgap Palette VerteX" +title: "Upgrade Non-Airgap Palette VerteX on VMware vSphere" +description: + "Upgrade a self-hosted, non-airgap Palette VerteX instance installed on VMware vSphere using the Palette CLI." icon: "" -sidebar_position: 0 -tags: ["vertex", "self-hosted", "vmware", "non-airgap", "upgrade"] -keywords: ["self-hosted", "vertex"] +sidebar_position: 20 +tags: ["self-hosted", "vertex", "non-airgap", "vmware", "upgrade", "cli"] +keywords: ["self-hosted", "vertex", "non-airgap", "vmware", "upgrade", "cli"] --- This guide takes you through the process of upgrading a self-hosted Palette VerteX instance installed on VMware vSphere. 
Before upgrading Palette VerteX to a new major version, you must first update it to the latest patch version of the -latest minor version available. Refer to the [Supported Upgrade Paths](../upgrade.md#supported-upgrade-paths) section -for details. +latest minor version available. Refer to the [Supported Upgrade Paths](./upgrade.md#supported-upgrade-paths) section for +details. :::warning If you are upgrading from a Palette VerteX version that is older than 4.4.14, ensure that you have executed the utility script to make the CNS mapping unique for the associated PVC. For more information, refer to the -[Troubleshooting guide](../../../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping). +[Troubleshooting guide](../../../../../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping). ::: If your setup includes a PCG, you must also -[allow the PCG to upgrade automatically](../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or minor -Palette VerteX upgrade. +[allow the PCG to upgrade automatically](../../../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or +minor Palette VerteX upgrade. ## Prerequisites diff --git a/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/upgrade/upgrade.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/upgrade/upgrade.md new file mode 100644 index 00000000000..3309b95fec9 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/upgrade/upgrade.md @@ -0,0 +1,546 @@ +--- +sidebar_label: "Upgrade" +title: "Upgrade Palette VerteX on VMware vSphere" +description: "Upgrade your self-hosted Palette VerteX instance installed on VMware vSphere using the Palette CLI." 
+icon: "" +hide_table_of_contents: false +tags: ["self-hosted", "vertex", "vmware", "upgrade"] +keywords: ["self-hosted", "vertex", "vmware", "upgrade"] +--- + +:::danger + +The content below is from the former [VerteX Upgrade](https://docs.spectrocloud.com/vertex/upgrade/) page. Convert to +partials and refactor where necessary. Only mention VMware! + +::: + +This page offers links and reference information for upgrading self-hosted Palette VerteX instances. If you have +questions or concerns, [reach out to our support team](http://support.spectrocloud.io/). + +:::tip + +If you are using self-hosted Palette instead of Palette VerteX, refer to the +[Palette Upgrade](../../../../palette/supported-environments/vmware/upgrade/upgrade.md) page for upgrade guidance. + +::: + +### Private Cloud Gateway + +If your setup includes a PCG, make sure to +[allow the PCG to upgrade automatically](../../../../../clusters/pcg/manage-pcg/pcg-upgrade.md) before each major or +minor Palette VerteX upgrade. + +## Upgrade Notes + +Refer to the following known issues before upgrading: + +- Upgrading self-hosted Palette or Palette VerteX from version 4.6.x to 4.7.x can cause the upgrade to hang if any + member of the MongoDB ReplicaSet is not fully synced and in a healthy state prior to the upgrade. For guidance on + verifying the health status of MongoDB ReplicaSet members, refer to our + [Troubleshooting](../../../../../troubleshooting/palette-upgrade.md#self-hosted-palette-or-palette-vertex-upgrade-hangs) + guide. + +- A known issue impacts all self-hosted Palette instances older than 4.4.14. Before upgrading a Palette instance with a + version older than 4.4.14, ensure that you execute a utility script to make all your cluster IDs unique in your + Persistent Volume Claim (PVC) metadata. For more information, refer to the + [Troubleshooting Guide](../../../../../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping).
+ +- Prior to upgrading VMware vSphere VerteX installations from version 4.3.x to 4.4.x, complete the steps outlined in the + [Mongo DNS ConfigMap Issue](../../../../../troubleshooting/palette-upgrade.md#mongo-dns-configmap-value-is-incorrect) + guide. Addressing this Mongo DNS issue will prevent system pods from experiencing _CrashLoopBackOff_ errors after the + upgrade. + + After the upgrade, if Enterprise Cluster backups are stuck, refer to the + [Enterprise Backup Stuck](../../../../../troubleshooting/enterprise-install.md#scenario---enterprise-backup-stuck) + troubleshooting guide for resolution steps. + +## Supported Upgrade Paths + +Refer to the following tables for the supported Palette VerteX upgrade paths for [VMware](../install/install.md) +installations. + +:::danger + +Before upgrading Palette VerteX to a new major version, you must first update it to the latest patch version of the +latest minor version available. + +::: + + + + + +**4.7.x** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.7.21 | 4.7.27 | :white_check_mark: | +| 4.7.20 | 4.7.27 | :white_check_mark: | +| 4.7.16 | 4.7.27 | :white_check_mark: | +| 4.7.16 | 4.7.20 | :white_check_mark: | +| 4.7.15 | 4.7.27 | :white_check_mark: | +| 4.7.15 | 4.7.20 | :white_check_mark: | +| 4.7.3 | 4.7.27 | :white_check_mark: | +| 4.7.3 | 4.7.20 | :white_check_mark: | +| 4.7.3 | 4.7.15 | :white_check_mark: | +| 4.6.41 | 4.7.27 | :white_check_mark: | +| 4.6.41 | 4.7.20 | :white_check_mark: | +| 4.6.41 | 4.7.15 | :white_check_mark: | +| 4.6.41 | 4.7.3 | :white_check_mark: | +| 4.6.6 | 4.7.15 | :white_check_mark: | + +**4.6.x** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.6.41 | 4.6.43 | :white_check_mark: | +| 4.6.36 | 4.6.43 | :white_check_mark: | +| 4.6.36 | 4.6.41 | :white_check_mark: | +| 4.6.32 | 4.6.43 | :white_check_mark: | +| 4.6.32 | 
4.6.41 | :white_check_mark: | +| 4.6.32 | 4.6.34 | :white_check_mark: | +| 4.6.28 | 4.6.43 | :white_check_mark: | +| 4.6.28 | 4.6.41 | :white_check_mark: | +| 4.6.28 | 4.6.34 | :white_check_mark: | +| 4.6.28 | 4.6.32 | :white_check_mark: | +| 4.6.26 | 4.6.43 | :white_check_mark: | +| 4.6.26 | 4.6.41 | :white_check_mark: | +| 4.6.26 | 4.6.34 | :white_check_mark: | +| 4.6.26 | 4.6.32 | :white_check_mark: | +| 4.6.25 | 4.6.43 | :white_check_mark: | +| 4.6.25 | 4.6.41 | :white_check_mark: | +| 4.6.25 | 4.6.34 | :white_check_mark: | +| 4.6.25 | 4.6.32 | :white_check_mark: | +| 4.6.24 | 4.6.43 | :white_check_mark: | +| 4.6.24 | 4.6.41 | :white_check_mark: | +| 4.6.24 | 4.6.34 | :white_check_mark: | +| 4.6.24 | 4.6.32 | :white_check_mark: | +| 4.6.23 | 4.6.43 | :white_check_mark: | +| 4.6.23 | 4.6.41 | :white_check_mark: | +| 4.6.23 | 4.6.34 | :white_check_mark: | +| 4.6.23 | 4.6.32 | :white_check_mark: | +| 4.6.23 | 4.6.28 | :white_check_mark: | +| 4.6.23 | 4.6.24 | :white_check_mark: | +| 4.6.18 | 4.6.43 | :white_check_mark: | +| 4.6.18 | 4.6.41 | :white_check_mark: | +| 4.6.18 | 4.6.34 | :white_check_mark: | +| 4.6.18 | 4.6.32 | :white_check_mark: | +| 4.6.18 | 4.6.28 | :white_check_mark: | +| 4.6.18 | 4.6.24 | :white_check_mark: | +| 4.6.18 | 4.6.23 | :white_check_mark: | +| 4.6.13 | 4.6.43 | :white_check_mark: | +| 4.6.13 | 4.6.41 | :white_check_mark: | +| 4.6.13 | 4.6.34 | :white_check_mark: | +| 4.6.13 | 4.6.32 | :white_check_mark: | +| 4.6.13 | 4.6.28 | :white_check_mark: | +| 4.6.13 | 4.6.24 | :white_check_mark: | +| 4.6.13 | 4.6.23 | :white_check_mark: | +| 4.6.13 | 4.6.18 | :white_check_mark: | +| 4.6.12 | 4.6.43 | :white_check_mark: | +| 4.6.12 | 4.6.41 | :white_check_mark: | +| 4.6.12 | 4.6.34 | :white_check_mark: | +| 4.6.12 | 4.6.32 | :white_check_mark: | +| 4.6.12 | 4.6.28 | :white_check_mark: | +| 4.6.12 | 4.6.24 | :white_check_mark: | +| 4.6.12 | 4.6.23 | :white_check_mark: | +| 4.6.12 | 4.6.18 | :white_check_mark: | +| 4.6.12 | 4.6.13 | 
:white_check_mark: | +| 4.6.9 | 4.6.43 | :white_check_mark: | +| 4.6.9 | 4.6.41 | :white_check_mark: | +| 4.6.9 | 4.6.34 | :white_check_mark: | +| 4.6.9 | 4.6.32 | :white_check_mark: | +| 4.6.9 | 4.6.28 | :white_check_mark: | +| 4.6.9 | 4.6.24 | :white_check_mark: | +| 4.6.9 | 4.6.23 | :white_check_mark: | +| 4.6.9 | 4.6.18 | :white_check_mark: | +| 4.6.9 | 4.6.13 | :white_check_mark: | +| 4.6.9 | 4.6.12 | :white_check_mark: | +| 4.6.8 | 4.6.43 | :white_check_mark: | +| 4.6.8 | 4.6.41 | :white_check_mark: | +| 4.6.8 | 4.6.34 | :white_check_mark: | +| 4.6.8 | 4.6.32 | :white_check_mark: | +| 4.6.8 | 4.6.28 | :white_check_mark: | +| 4.6.8 | 4.6.24 | :white_check_mark: | +| 4.6.8 | 4.6.23 | :white_check_mark: | +| 4.6.8 | 4.6.18 | :white_check_mark: | +| 4.6.8 | 4.6.13 | :white_check_mark: | +| 4.6.8 | 4.6.12 | :white_check_mark: | +| 4.6.8 | 4.6.9 | :white_check_mark: | +| 4.6.7 | 4.6.43 | :white_check_mark: | +| 4.6.7 | 4.6.41 | :white_check_mark: | +| 4.6.7 | 4.6.34 | :white_check_mark: | +| 4.6.7 | 4.6.32 | :white_check_mark: | +| 4.6.7 | 4.6.28 | :white_check_mark: | +| 4.6.7 | 4.6.24 | :white_check_mark: | +| 4.6.7 | 4.6.23 | :white_check_mark: | +| 4.6.7 | 4.6.18 | :white_check_mark: | +| 4.6.7 | 4.6.13 | :white_check_mark: | +| 4.6.7 | 4.6.12 | :white_check_mark: | +| 4.6.7 | 4.6.9 | :white_check_mark: | +| 4.6.7 | 4.6.8 | :white_check_mark: | +| 4.6.6 | 4.6.43 | :white_check_mark: | +| 4.6.6 | 4.6.41 | :white_check_mark: | +| 4.6.6 | 4.6.34 | :white_check_mark: | +| 4.6.6 | 4.6.32 | :white_check_mark: | +| 4.6.6 | 4.6.28 | :white_check_mark: | +| 4.6.6 | 4.6.24 | :white_check_mark: | +| 4.6.6 | 4.6.23 | :white_check_mark: | +| 4.6.6 | 4.6.18 | :white_check_mark: | +| 4.6.6 | 4.6.13 | :white_check_mark: | +| 4.6.6 | 4.6.12 | :white_check_mark: | +| 4.6.6 | 4.6.9 | :white_check_mark: | +| 4.6.6 | 4.6.8 | :white_check_mark: | +| 4.6.6 | 4.6.7 | :white_check_mark: | +| 4.5.23 | 4.6.43 | :white_check_mark: | +| 4.5.23 | 4.6.41 | :white_check_mark: | +| 4.5.23 | 
4.6.34 | :white_check_mark: | +| 4.5.23 | 4.6.32 | :white_check_mark: | +| 4.5.23 | 4.6.28 | :white_check_mark: | +| 4.5.23 | 4.6.24 | :white_check_mark: | +| 4.5.23 | 4.6.23 | :white_check_mark: | +| 4.5.23 | 4.6.18 | :white_check_mark: | +| 4.5.21 | 4.6.43 | :white_check_mark: | +| 4.5.21 | 4.6.41 | :white_check_mark: | +| 4.5.21 | 4.6.34 | :white_check_mark: | +| 4.5.21 | 4.6.32 | :white_check_mark: | +| 4.5.21 | 4.6.28 | :white_check_mark: | +| 4.5.21 | 4.6.24 | :white_check_mark: | +| 4.5.21 | 4.6.23 | :white_check_mark: | +| 4.5.21 | 4.6.18 | :white_check_mark: | +| 4.5.21 | 4.6.13 | :white_check_mark: | +| 4.5.21 | 4.6.12 | :white_check_mark: | +| 4.5.21 | 4.6.9 | :white_check_mark: | +| 4.5.21 | 4.6.8 | :white_check_mark: | +| 4.5.21 | 4.6.7 | :white_check_mark: | +| 4.5.21 | 4.6.6 | :white_check_mark: | +| 4.5.20 | 4.6.43 | :white_check_mark: | +| 4.5.20 | 4.6.41 | :white_check_mark: | +| 4.5.20 | 4.6.34 | :white_check_mark: | +| 4.5.20 | 4.6.32 | :white_check_mark: | +| 4.5.20 | 4.6.28 | :white_check_mark: | +| 4.5.20 | 4.6.24 | :white_check_mark: | +| 4.5.20 | 4.6.23 | :white_check_mark: | +| 4.5.20 | 4.6.18 | :white_check_mark: | +| 4.5.20 | 4.6.13 | :white_check_mark: | +| 4.5.20 | 4.6.12 | :white_check_mark: | +| 4.5.20 | 4.6.9 | :white_check_mark: | +| 4.5.20 | 4.6.8 | :white_check_mark: | +| 4.5.20 | 4.6.7 | :white_check_mark: | +| 4.5.20 | 4.6.6 | :white_check_mark: | +| 4.4.24 | 4.6.43 | :white_check_mark: | +| 4.4.24 | 4.6.41 | :white_check_mark: | +| 4.4.24 | 4.6.34 | :white_check_mark: | +| 4.4.24 | 4.6.32 | :white_check_mark: | +| 4.4.24 | 4.6.28 | :white_check_mark: | +| 4.4.24 | 4.6.24 | :white_check_mark: | +| 4.4.24 | 4.6.23 | :white_check_mark: | + +**4.5.x** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.5.21 | 4.5.23 | :white_check_mark: | +| 4.5.20 | 4.5.23 | :white_check_mark: | +| 4.5.20 | 4.5.21 | :white_check_mark: | +| 4.5.15 | 4.5.23 | 
:white_check_mark: | +| 4.5.15 | 4.5.21 | :white_check_mark: | +| 4.5.15 | 4.5.20 | :white_check_mark: | +| 4.5.11 | 4.5.23 | :white_check_mark: | +| 4.5.11 | 4.5.21 | :white_check_mark: | +| 4.5.11 | 4.5.20 | :white_check_mark: | +| 4.5.11 | 4.5.15 | :white_check_mark: | +| 4.5.8 | 4.5.23 | :white_check_mark: | +| 4.5.8 | 4.5.21 | :white_check_mark: | +| 4.5.8 | 4.5.20 | :white_check_mark: | +| 4.5.8 | 4.5.15 | :white_check_mark: | +| 4.5.8 | 4.5.11 | :white_check_mark: | +| 4.5.4 | 4.5.23 | :white_check_mark: | +| 4.5.4 | 4.5.21 | :white_check_mark: | +| 4.5.4 | 4.5.20 | :white_check_mark: | +| 4.5.4 | 4.5.15 | :white_check_mark: | +| 4.5.4 | 4.5.11 | :white_check_mark: | +| 4.5.4 | 4.5.8 | :white_check_mark: | +| 4.4.24 | 4.5.23 | :white_check_mark: | +| 4.4.20 | 4.5.23 | :white_check_mark: | +| 4.4.20 | 4.5.21 | :white_check_mark: | +| 4.4.20 | 4.5.20 | :white_check_mark: | +| 4.4.20 | 4.5.15 | :white_check_mark: | +| 4.4.20 | 4.5.11 | :white_check_mark: | +| 4.4.20 | 4.5.8 | :white_check_mark: | +| 4.4.20 | 4.5.4 | :white_check_mark: | + +**4.4.x** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.4.20 | 4.4.23 | :white_check_mark: | +| 4.4.18 | 4.4.23 | :white_check_mark: | +| 4.4.18 | 4.4.20 | :white_check_mark: | +| 4.4.14 | 4.4.23 | :white_check_mark: | +| 4.4.14 | 4.4.20 | :white_check_mark: | +| 4.4.14 | 4.4.18 | :white_check_mark: | +| 4.4.11 | 4.4.23 | :white_check_mark: | +| 4.4.11 | 4.4.20 | :white_check_mark: | +| 4.4.11 | 4.4.18 | :white_check_mark: | +| 4.4.11 | 4.4.14 | :white_check_mark: | +| 4.4.6 | 4.4.23 | :white_check_mark: | +| 4.4.6 | 4.4.20 | :white_check_mark: | +| 4.4.6 | 4.4.18 | :white_check_mark: | +| 4.4.6 | 4.4.14 | :white_check_mark: | +| 4.4.6 | 4.4.11 | :white_check_mark: | +| 4.3.6 | 4.4.23 | :white_check_mark: | +| 4.3.6 | 4.4.20 | :white_check_mark: | +| 4.3.6 | 4.4.18 | :white_check_mark: | +| 4.3.6 | 4.4.14 | :white_check_mark: | +| 4.3.6 | 
4.4.11 | :white_check_mark: | +| 4.3.6 | 4.4.6 | :white_check_mark: | + +**4.3.x and Prior** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.2.13 | 4.3.6 | :white_check_mark: | +| 4.2.7 | 4.2.13 | :white_check_mark: | +| 4.1.x | 4.3.6 | :x: | +| 4.1.12 | 4.2.7 | :white_check_mark: | +| 4.1.12 | 4.1.13 | :white_check_mark: | +| 4.1.7 | 4.2.7 | :white_check_mark: | + + + + + +**4.7.x** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.7.16 | 4.7.20 | :white_check_mark: | +| 4.7.15 | 4.7.20 | :white_check_mark: | +| 4.7.3 | 4.7.20 | :white_check_mark: | +| 4.7.3 | 4.7.15 | :white_check_mark: | +| 4.6.41 | 4.7.20 | :white_check_mark: | +| 4.6.41 | 4.7.15 | :white_check_mark: | +| 4.6.41 | 4.7.3 | :white_check_mark: | +| 4.6.6 | 4.7.15 | :white_check_mark: | + +**4.6.x** + +| **Source Version** | **Target Version** | **Support** | +| :----------------: | :----------------: | :----------------: | +| 4.6.41 | 4.6.43 | :white_check_mark: | +| 4.6.36 | 4.6.43 | :white_check_mark: | +| 4.6.36 | 4.6.41 | :white_check_mark: | +| 4.6.32 | 4.6.43 | :white_check_mark: | +| 4.6.32 | 4.6.41 | :white_check_mark: | +| 4.6.32 | 4.6.34 | :white_check_mark: | +| 4.6.28 | 4.6.43 | :white_check_mark: | +| 4.6.28 | 4.6.41 | :white_check_mark: | +| 4.6.28 | 4.6.34 | :white_check_mark: | +| 4.6.28 | 4.6.32 | :white_check_mark: | +| 4.6.26 | 4.6.43 | :white_check_mark: | +| 4.6.26 | 4.6.41 | :white_check_mark: | +| 4.6.26 | 4.6.34 | :white_check_mark: | +| 4.6.26 | 4.6.32 | :white_check_mark: | +| 4.6.25 | 4.6.43 | :white_check_mark: | +| 4.6.25 | 4.6.41 | :white_check_mark: | +| 4.6.25 | 4.6.34 | :white_check_mark: | +| 4.6.25 | 4.6.32 | :white_check_mark: | +| 4.6.24 | 4.6.43 | :white_check_mark: | +| 4.6.24 | 4.6.41 | :white_check_mark: | +| 4.6.24 | 4.6.34 | :white_check_mark: | +| 4.6.24 | 4.6.32 | :white_check_mark: | +| 
4.6.23 | 4.6.43 | :white_check_mark: |
+| 4.6.23 | 4.6.41 | :white_check_mark: |
+| 4.6.23 | 4.6.34 | :white_check_mark: |
+| 4.6.23 | 4.6.32 | :white_check_mark: |
+| 4.6.23 | 4.6.28 | :white_check_mark: |
+| 4.6.23 | 4.6.24 | :white_check_mark: |
+| 4.6.18 | 4.6.43 | :white_check_mark: |
+| 4.6.18 | 4.6.41 | :white_check_mark: |
+| 4.6.18 | 4.6.34 | :white_check_mark: |
+| 4.6.18 | 4.6.32 | :white_check_mark: |
+| 4.6.18 | 4.6.28 | :white_check_mark: |
+| 4.6.18 | 4.6.24 | :white_check_mark: |
+| 4.6.18 | 4.6.23 | :white_check_mark: |
+| 4.6.13 | 4.6.43 | :white_check_mark: |
+| 4.6.13 | 4.6.41 | :white_check_mark: |
+| 4.6.13 | 4.6.34 | :white_check_mark: |
+| 4.6.13 | 4.6.32 | :white_check_mark: |
+| 4.6.13 | 4.6.28 | :white_check_mark: |
+| 4.6.13 | 4.6.24 | :white_check_mark: |
+| 4.6.13 | 4.6.23 | :white_check_mark: |
+| 4.6.13 | 4.6.18 | :white_check_mark: |
+| 4.6.12 | 4.6.43 | :white_check_mark: |
+| 4.6.12 | 4.6.41 | :white_check_mark: |
+| 4.6.12 | 4.6.34 | :white_check_mark: |
+| 4.6.12 | 4.6.32 | :white_check_mark: |
+| 4.6.12 | 4.6.28 | :white_check_mark: |
+| 4.6.12 | 4.6.24 | :white_check_mark: |
+| 4.6.12 | 4.6.23 | :white_check_mark: |
+| 4.6.12 | 4.6.18 | :white_check_mark: |
+| 4.6.12 | 4.6.13 | :white_check_mark: |
+| 4.6.9 | 4.6.43 | :white_check_mark: |
+| 4.6.9 | 4.6.41 | :white_check_mark: |
+| 4.6.9 | 4.6.34 | :white_check_mark: |
+| 4.6.9 | 4.6.32 | :white_check_mark: |
+| 4.6.9 | 4.6.28 | :white_check_mark: |
+| 4.6.9 | 4.6.24 | :white_check_mark: |
+| 4.6.9 | 4.6.23 | :white_check_mark: |
+| 4.6.9 | 4.6.18 | :white_check_mark: |
+| 4.6.9 | 4.6.13 | :white_check_mark: |
+| 4.6.9 | 4.6.12 | :white_check_mark: |
+| 4.6.8 | 4.6.43 | :white_check_mark: |
+| 4.6.8 | 4.6.41 | :white_check_mark: |
+| 4.6.8 | 4.6.34 | :white_check_mark: |
+| 4.6.8 | 4.6.32 | :white_check_mark: |
+| 4.6.8 | 4.6.28 | :white_check_mark: |
+| 4.6.8 | 4.6.24 | :white_check_mark: |
+| 4.6.8 | 4.6.23 | :white_check_mark: |
+| 4.6.8 | 4.6.18 | :white_check_mark: |
+| 4.6.8 | 4.6.13 | :white_check_mark: |
+| 4.6.8 | 4.6.12 | :white_check_mark: |
+| 4.6.8 | 4.6.9 | :white_check_mark: |
+| 4.6.7 | 4.6.43 | :white_check_mark: |
+| 4.6.7 | 4.6.41 | :white_check_mark: |
+| 4.6.7 | 4.6.34 | :white_check_mark: |
+| 4.6.7 | 4.6.32 | :white_check_mark: |
+| 4.6.7 | 4.6.28 | :white_check_mark: |
+| 4.6.7 | 4.6.24 | :white_check_mark: |
+| 4.6.7 | 4.6.23 | :white_check_mark: |
+| 4.6.7 | 4.6.18 | :white_check_mark: |
+| 4.6.7 | 4.6.13 | :white_check_mark: |
+| 4.6.7 | 4.6.12 | :white_check_mark: |
+| 4.6.7 | 4.6.9 | :white_check_mark: |
+| 4.6.7 | 4.6.8 | :white_check_mark: |
+| 4.6.6 | 4.6.43 | :white_check_mark: |
+| 4.6.6 | 4.6.41 | :white_check_mark: |
+| 4.6.6 | 4.6.34 | :white_check_mark: |
+| 4.6.6 | 4.6.32 | :white_check_mark: |
+| 4.6.6 | 4.6.28 | :white_check_mark: |
+| 4.6.6 | 4.6.24 | :white_check_mark: |
+| 4.6.6 | 4.6.23 | :white_check_mark: |
+| 4.6.6 | 4.6.18 | :white_check_mark: |
+| 4.6.6 | 4.6.13 | :white_check_mark: |
+| 4.6.6 | 4.6.12 | :white_check_mark: |
+| 4.6.6 | 4.6.9 | :white_check_mark: |
+| 4.6.6 | 4.6.8 | :white_check_mark: |
+| 4.6.6 | 4.6.7 | :white_check_mark: |
+| 4.5.23 | 4.6.43 | :white_check_mark: |
+| 4.5.23 | 4.6.41 | :white_check_mark: |
+| 4.5.23 | 4.6.34 | :white_check_mark: |
+| 4.5.23 | 4.6.32 | :white_check_mark: |
+| 4.5.23 | 4.6.28 | :white_check_mark: |
+| 4.5.23 | 4.6.24 | :white_check_mark: |
+| 4.5.23 | 4.6.23 | :white_check_mark: |
+| 4.5.23 | 4.6.18 | :white_check_mark: |
+| 4.5.21 | 4.6.43 | :white_check_mark: |
+| 4.5.21 | 4.6.41 | :white_check_mark: |
+| 4.5.21 | 4.6.34 | :white_check_mark: |
+| 4.5.21 | 4.6.32 | :white_check_mark: |
+| 4.5.21 | 4.6.28 | :white_check_mark: |
+| 4.5.21 | 4.6.24 | :white_check_mark: |
+| 4.5.21 | 4.6.23 | :white_check_mark: |
+| 4.5.21 | 4.6.18 | :white_check_mark: |
+| 4.5.21 | 4.6.13 | :white_check_mark: |
+| 4.5.21 | 4.6.12 | :white_check_mark: |
+| 4.5.21 | 4.6.9 | :white_check_mark: |
+| 4.5.21 | 4.6.8 | :white_check_mark: |
+| 4.5.21 | 4.6.7 | :white_check_mark: |
+| 4.5.21 | 4.6.6 | :white_check_mark: |
+| 4.5.20 | 4.6.43 | :white_check_mark: |
+| 4.5.20 | 4.6.41 | :white_check_mark: |
+| 4.5.20 | 4.6.34 | :white_check_mark: |
+| 4.5.20 | 4.6.32 | :white_check_mark: |
+| 4.5.20 | 4.6.28 | :white_check_mark: |
+| 4.5.20 | 4.6.24 | :white_check_mark: |
+| 4.5.20 | 4.6.23 | :white_check_mark: |
+| 4.5.20 | 4.6.18 | :white_check_mark: |
+| 4.5.20 | 4.6.13 | :white_check_mark: |
+| 4.5.20 | 4.6.12 | :white_check_mark: |
+| 4.5.20 | 4.6.9 | :white_check_mark: |
+| 4.5.20 | 4.6.8 | :white_check_mark: |
+| 4.5.20 | 4.6.7 | :white_check_mark: |
+| 4.5.20 | 4.6.6 | :white_check_mark: |
+| 4.4.24 | 4.6.43 | :white_check_mark: |
+| 4.4.24 | 4.6.41 | :white_check_mark: |
+| 4.4.24 | 4.6.34 | :white_check_mark: |
+| 4.4.24 | 4.6.32 | :white_check_mark: |
+| 4.4.24 | 4.6.28 | :white_check_mark: |
+| 4.4.24 | 4.6.24 | :white_check_mark: |
+| 4.4.24 | 4.6.23 | :white_check_mark: |
+
+**4.5.x**
+
+| **Source Version** | **Target Version** | **Support** |
+| :----------------: | :----------------: | :----------------: |
+| 4.5.21 | 4.5.23 | :white_check_mark: |
+| 4.5.20 | 4.5.23 | :white_check_mark: |
+| 4.5.20 | 4.5.21 | :white_check_mark: |
+| 4.5.15 | 4.5.23 | :white_check_mark: |
+| 4.5.15 | 4.5.21 | :white_check_mark: |
+| 4.5.15 | 4.5.20 | :white_check_mark: |
+| 4.5.11 | 4.5.23 | :white_check_mark: |
+| 4.5.11 | 4.5.21 | :white_check_mark: |
+| 4.5.11 | 4.5.20 | :white_check_mark: |
+| 4.5.11 | 4.5.15 | :white_check_mark: |
+| 4.5.8 | 4.5.23 | :white_check_mark: |
+| 4.5.8 | 4.5.21 | :white_check_mark: |
+| 4.5.8 | 4.5.20 | :white_check_mark: |
+| 4.5.8 | 4.5.15 | :white_check_mark: |
+| 4.5.4 | 4.5.23 | :white_check_mark: |
+| 4.5.4 | 4.5.21 | :white_check_mark: |
+| 4.5.4 | 4.5.20 | :white_check_mark: |
+| 4.5.4 | 4.5.15 | :white_check_mark: |
+| 4.4.20 | 4.5.23 | :white_check_mark: |
+| 4.4.20 | 4.5.21 | :white_check_mark: |
+| 4.4.20 | 4.5.20 | :white_check_mark: |
+| 4.4.20 | 4.5.15 | :white_check_mark: |
+
+**4.4.x**
+
+| **Source Version** | **Target Version** | **Support** |
+| :----------------: | :----------------: | :----------------: |
+| 4.4.18 | 4.4.20 | :white_check_mark: |
+| 4.4.14 | 4.4.20 | :white_check_mark: |
+| 4.4.11 | 4.4.20 | :white_check_mark: |
+| 4.4.6 | 4.4.20 | :white_check_mark: |
+| 4.3.6 | 4.4.20 | :white_check_mark: |
+| 4.4.14 | 4.4.18 | :white_check_mark: |
+| 4.4.11 | 4.4.18 | :white_check_mark: |
+| 4.4.6 | 4.4.18 | :white_check_mark: |
+| 4.3.6 | 4.4.18 | :white_check_mark: |
+| 4.4.11 | 4.4.14 | :white_check_mark: |
+| 4.4.6 | 4.4.14 | :white_check_mark: |
+| 4.3.6 | 4.4.14 | :white_check_mark: |
+| 4.4.6 | 4.4.11 | :white_check_mark: |
+| 4.3.6 | 4.4.11 | :white_check_mark: |
+| 4.3.6 | 4.4.6 | :white_check_mark: |
+
+**4.3.x and Prior**
+
+| **Source Version** | **Target Version** | **Support** |
+| :----------------: | :----------------: | :----------------: |
+| 4.2.13 | 4.3.6 | :white_check_mark: |
+| 4.2.7 | 4.2.13 | :white_check_mark: |
+| 4.1.x | 4.3.6 | :x: |
+| 4.1.12 | 4.2.7 | :white_check_mark: |
+| 4.1.7 | 4.2.7 | :white_check_mark: |
+
+
+
+
+:::preview
+
+:::
+
+| **Source Version** | **Target Version** | **Support** |
+| :----------------: | :----------------: | :----------------: |
+| 4.7.15 | 4.7.27 | :white_check_mark: |
+| 4.7.3 | 4.7.27 | :x: |
+| 4.7.3 | 4.7.15 | :x: |
+
+
+
diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/install-on-vmware.md b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/vmware.md
similarity index 52%
rename from docs/docs-content/vertex/install-palette-vertex/install-on-vmware/install-on-vmware.md
rename to docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/vmware.md
index 5a001b8dea2..fbbd45ae7ab 100644
--- a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/install-on-vmware.md
+++ b/docs/docs-content/self-hosted-setup/vertex/supported-environments/vmware/vmware.md
@@ -1,22 +1,14 @@
---
-sidebar_label: "VMware"
-title: "Install Palette VerteX on VMware"
-description: "Learn how to install Palette VerteX on VMware."
+sidebar_label: "VMware vSphere"
+title: "Self-Hosted Palette VerteX on VMware vSphere"
+description: "Install self-hosted Palette VerteX on VMware vSphere."
icon: ""
hide_table_of_contents: false
-tags: ["vertex", "vmware"]
-keywords: ["self-hosted", "vertex"]
+tags: ["self-hosted", "vertex", "vmware"]
+keywords: ["self-hosted", "vertex", "vmware"]
---

Palette VerteX can be installed on VMware vSphere with internet connectivity or an airgap environment. When you install
Palette VerteX, a three-node cluster is created. You use the interactive Palette CLI to install Palette VerteX on
VMware vSphere.

Refer to [Access Palette VerteX](../../vertex.md#access-palette-vertex) for instructions on requesting repository
access.
-
-## Resources
-
-- [Non-Airgap Install on VMware](install.md)
-
-- [Airgap Installation](./airgap-install/airgap-install.md)
-
-- [VMware System Requirements](vmware-system-requirements.md)
diff --git a/docs/docs-content/self-hosted-setup/vertex/system-management/_category_.json b/docs/docs-content/self-hosted-setup/vertex/system-management/_category_.json
new file mode 100644
index 00000000000..e7e7c549660
--- /dev/null
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/_category_.json
@@ -0,0 +1,3 @@
+{
+  "position": 40
+}
diff --git a/docs/docs-content/self-hosted-setup/vertex/system-management/account-management/_category_.json b/docs/docs-content/self-hosted-setup/vertex/system-management/account-management/_category_.json
new file mode 100644
index 00000000000..094470741db
--- /dev/null
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/account-management/_category_.json
@@ -0,0 +1,3 @@
+{
+  "position": 10
+}
diff --git a/docs/docs-content/vertex/system-management/account-management/account-management.md b/docs/docs-content/self-hosted-setup/vertex/system-management/account-management/account-management.md
similarity index 94%
rename from docs/docs-content/vertex/system-management/account-management/account-management.md
rename to docs/docs-content/self-hosted-setup/vertex/system-management/account-management/account-management.md
index e8c2095be9b..a629dc4160f 100644
--- a/docs/docs-content/vertex/system-management/account-management/account-management.md
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/account-management/account-management.md
@@ -4,9 +4,9 @@ title: "Account Management"
description: "Update and manage the user settings and credentials of the admin user."
icon: ""
hide_table_of_contents: false
-sidebar_position: 60
-tags: ["vertex", "management", "account"]
-keywords: ["self-hosted", "vertex"]
+sidebar_position: 10
+tags: ["self-hosted", "vertex", "management", "account"]
+keywords: ["self-hosted", "vertex", "management", "account"]
---

VerteX supports the ability to have multiple system administrators with different roles and permissions. Use the
@@ -79,11 +79,3 @@ To learn how to create and manage system administrator accounts, check out the

As an admin user, you can update and manage your user settings, such as changing the email address and changing the
credentials. You can also enable passkey to access the admin panel. The passkey feature supports both virtual passkey
and physical passkey.
-
-## Resources
-
-- [Create and Manage System Accounts](./manage-system-accounts.md)
-
-- [Email Address](./email.md)
-
-- [User Credentials](./credentials.md)
diff --git a/docs/docs-content/vertex/system-management/account-management/credentials.md b/docs/docs-content/self-hosted-setup/vertex/system-management/account-management/credentials.md
similarity index 87%
rename from docs/docs-content/vertex/system-management/account-management/credentials.md
rename to docs/docs-content/self-hosted-setup/vertex/system-management/account-management/credentials.md
index 02a07b7056a..d90c2c10837 100644
--- a/docs/docs-content/vertex/system-management/account-management/credentials.md
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/account-management/credentials.md
@@ -1,12 +1,14 @@
---
sidebar_label: "Manage User Credentials"
title: "Manage User Credentials"
-description: "Update and manage the user credentials"
+description:
+  "Update and manage system admin user credentials for self-hosted Palette VerteX, including emails, passwords,
+  passkeys, and API access"
icon: ""
hide_table_of_contents: false
-sidebar_position: 20
-tags: ["vertex", "management", "account", "credentials"]
-keywords: ["self-hosted", "vertex"]
+sidebar_position: 30
+tags: ["self-hosted", "vertex", "management", "account", "credentials"]
+keywords: ["self-hosted", "vertex", "management", "account", "credentials"]
---

You can manage the credentials of the admin user by logging in to the system console. You can also enable passkeys to
@@ -39,10 +41,51 @@ minutes, the user can try to log in again. The default session timeout for syste

The default timeout for tenant users is set to four hours. After four hours of inactivity, the user will be logged out
of VerteX. You can change the default session timeout value for tenant users by following the steps in the
-[Session Timeout](../../../tenant-settings/session-timeout.md) guide.
+[Session Timeout](../../../../tenant-settings/session-timeout.md) guide.

Use the following sections to learn how to manage user credentials.

+## Change System Admin Email Address
+
+You can manage the credentials of the admin user by logging in to the system console. Updating or changing the email
+address of the admin user requires the current password.
+
+Use the following steps to change the email address of the admin user.
+
+## Prerequisites
+
+- Access to the Palette VerteX system console.
+
+- Current password of the admin user.
+
+- A Simple Mail Transfer Protocol (SMTP) server must be configured in the system console. Refer to the
+  [Configure SMTP](../smtp.md) page for guidance on how to configure an SMTP server.
+
+## Change Email Address
+
+1. Log in to the Palette VerteX system console. Refer to the
+   [Access the System Console](../system-management.md#access-the-system-console) guide.
+
+2. From the **left Main Menu**, select **My Account**.
+
+3. Type the new email address in the **Email** field.
+
+4. Provide the current password in the **Current Password** field.
+
+5. Click **Apply** to save the changes.
+
+## Validate
+
+1. Log out of the system console. You can log out by clicking the **Logout** button in the bottom right corner of the
+   **left Main Menu**.
+
+2. Log in to the system console. Refer to the
+   [Access the System Console](../system-management.md#access-the-system-console) guide.
+
+3. Use the new email address and your current password to log in to the system console.
+
+A successful login indicates that the email address has been changed successfully.
+
## Change Password

Use the following steps to change the password of the admin user.
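Most link edits in this diff only add one `../` to a relative path because each file moved one directory deeper (for example, the `session-timeout.md` link fixed in the hunk above). If the remaining renamed files need auditing, the rebase can be sketched in Python; the `rebase_link` helper below is illustrative and not part of this repository:

```python
import os.path


def rebase_link(old_doc: str, new_doc: str, link: str) -> str:
    """Recompute a relative link after a doc file moves.

    `link` is relative to the directory of `old_doc`; the result is the
    equivalent link relative to the directory of `new_doc`.
    """
    # Resolve the link against the old file's directory to find the target...
    target = os.path.normpath(os.path.join(os.path.dirname(old_doc), link))
    # ...then express that target relative to the new file's directory.
    return os.path.relpath(target, os.path.dirname(new_doc))


# The session-timeout link corrected in the hunk above:
old = "docs/docs-content/vertex/system-management/account-management/credentials.md"
new = "docs/docs-content/self-hosted-setup/vertex/system-management/account-management/credentials.md"
print(rebase_link(old, new, "../../../tenant-settings/session-timeout.md"))
# → ../../../../tenant-settings/session-timeout.md
```

Running this over every `](...)` occurrence in a renamed file would surface any link the rename missed.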
diff --git a/docs/docs-content/vertex/system-management/account-management/manage-system-accounts.md b/docs/docs-content/self-hosted-setup/vertex/system-management/account-management/manage-system-accounts.md
similarity index 98%
rename from docs/docs-content/vertex/system-management/account-management/manage-system-accounts.md
rename to docs/docs-content/self-hosted-setup/vertex/system-management/account-management/manage-system-accounts.md
index 35218a7c1a9..00a4c308bdd 100644
--- a/docs/docs-content/vertex/system-management/account-management/manage-system-accounts.md
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/account-management/manage-system-accounts.md
@@ -4,9 +4,9 @@ title: "Create and Manage System Accounts"
description: "Learn how to create and manage system accounts in Palette VerteX."
icon: ""
hide_table_of_contents: false
-sidebar_position: 10
-tags: ["vertex", "management", "account"]
-keywords: ["self-hosted", "vertex"]
+sidebar_position: 20
+tags: ["self-hosted", "vertex", "management", "account"]
+keywords: ["self-hosted", "vertex", "management", "account"]
---

You can create and manage system accounts if you have the Root Administrator or Account Administrator role in Palette
diff --git a/docs/docs-content/vertex/system-management/account-management/password-blocklist.md b/docs/docs-content/self-hosted-setup/vertex/system-management/account-management/password-blocklist.md
similarity index 96%
rename from docs/docs-content/vertex/system-management/account-management/password-blocklist.md
rename to docs/docs-content/self-hosted-setup/vertex/system-management/account-management/password-blocklist.md
index e4d5d105c18..15e63e4b6fe 100644
--- a/docs/docs-content/vertex/system-management/account-management/password-blocklist.md
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/account-management/password-blocklist.md
@@ -1,12 +1,12 @@
---
sidebar_label: "Manage Password Blocklist"
title: "Manage Password Blocklist"
-description: "Learn how to manage the password blocklist in Palette VerteX."
+description: "Learn how to manage the password blocklist in self-hosted Palette VerteX."
icon: ""
hide_table_of_contents: false
-sidebar_position: 50
-tags: ["vertex", "management", "account", "credentials"]
-keywords: ["self-hosted", "vertex"]
+sidebar_position: 40
+tags: ["self-hosted", "vertex", "management", "account", "credentials"]
+keywords: ["self-hosted", "vertex", "management", "account", "credentials"]
---

You can manage a password blocklist to prevent users from using common or weak passwords. The password blocklist is a
diff --git a/docs/docs-content/vertex/system-management/add-registry.md b/docs/docs-content/self-hosted-setup/vertex/system-management/add-registry.md
similarity index 90%
rename from docs/docs-content/vertex/system-management/add-registry.md
rename to docs/docs-content/self-hosted-setup/vertex/system-management/add-registry.md
index 37db200aae2..d1f2fdd2044 100644
--- a/docs/docs-content/vertex/system-management/add-registry.md
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/add-registry.md
@@ -1,12 +1,12 @@
---
-sidebar_label: "Add System-Level Registry"
-title: "Add System-Level Registry"
-description: "Learn how to add a system-level registry in Palette VerteX."
+sidebar_label: "System-Level Registries"
+title: "System-Level Registries"
+description: "Add a system-level registry in self-hosted Palette VerteX."
icon: ""
hide_table_of_contents: false
-sidebar_position: 20
-tags: ["vertex", "management", "registry"]
-keywords: ["self-hosted", "vertex"]
+sidebar_position: 150
+tags: ["self-hosted", "vertex", "management", "registry"]
+keywords: ["self-hosted", "vertex", "management", "registry"]
---

You can add a registry at the system level or the tenant level. Registries added at the system level are available to
@@ -15,7 +15,7 @@ all the tenants. Registries added at the tenant level are available only to that

:::info

This section describes how to add a system scope registry. For guidance on adding a registry at the tenant scope, check
-out [Add a Tenant-Level Registry](../../tenant-settings/add-registry.md).
+out [Add a Tenant-Level Registry](../../../tenant-settings/add-registry.md).

:::

@@ -106,6 +106,6 @@ check when you added the registry. Use these steps to further verify the registr

## Resources

-- [Add a Tenant-Level Registry](../../tenant-settings/add-registry.md)
+- [Add a Tenant-Level Registry](../../../tenant-settings/add-registry.md)

- [Use non-FIPS Packs](../system-management/enable-non-fips-settings/use-non-fips-addon-packs.md)
diff --git a/docs/docs-content/vertex/system-management/change-cloud-config.md b/docs/docs-content/self-hosted-setup/vertex/system-management/change-cloud-config.md
similarity index 96%
rename from docs/docs-content/vertex/system-management/change-cloud-config.md
rename to docs/docs-content/self-hosted-setup/vertex/system-management/change-cloud-config.md
index 3d0a4bd3de7..67ae48e8949 100644
--- a/docs/docs-content/vertex/system-management/change-cloud-config.md
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/change-cloud-config.md
@@ -1,12 +1,12 @@
---
-sidebar_label: "Modify Cloud Provider Configuration"
-title: "Modify Cloud Provider Configuration"
-description: "Learn how to modify the system-level cloud provider configuration in Palette VerteX."
+sidebar_label: "Cloud Provider Configuration"
+title: "Cloud Provider Configuration"
+description: "Learn how to modify the system-level cloud provider configuration in self-hosted Palette VerteX."
icon: ""
hide_table_of_contents: false
-sidebar_position: 130
-tags: ["vertex", "management", "clouds"]
-keywords: ["self-hosted", "vertex"]
+sidebar_position: 60
+tags: ["self-hosted", "vertex", "management", "clouds"]
+keywords: ["self-hosted", "vertex", "management", "clouds"]
---

Different cloud providers use different image formats to create virtual machines. Amazon Web Services (AWS), for
diff --git a/docs/docs-content/self-hosted-setup/vertex/system-management/customize-interface.md b/docs/docs-content/self-hosted-setup/vertex/system-management/customize-interface.md
new file mode 100644
index 00000000000..aec1c94606c
--- /dev/null
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/customize-interface.md
@@ -0,0 +1,19 @@
+---
+sidebar_label: "Interface Customization"
+title: "Interface Customization"
+description: "Learn how to customize the branding and interface of self-hosted Palette VerteX."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 90
+tags: ["self-hosted", "vertex", "management", "account", "customize-interface"]
+keywords: ["self-hosted", "vertex", "management", "account", "customize-interface"]
+---
+
+
diff --git a/docs/docs-content/enterprise-version/upgrade/_category_.json b/docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/_category_.json
similarity index 100%
rename from docs/docs-content/enterprise-version/upgrade/_category_.json
rename to docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/_category_.json
diff --git a/docs/docs-content/vertex/system-management/enable-non-fips-settings/allow-cluster-import.md b/docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/allow-cluster-import.md
similarity index 60%
rename from docs/docs-content/vertex/system-management/enable-non-fips-settings/allow-cluster-import.md
rename to docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/allow-cluster-import.md
index c8353ece34d..6dec53880cf 100644
--- a/docs/docs-content/vertex/system-management/enable-non-fips-settings/allow-cluster-import.md
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/allow-cluster-import.md
@@ -1,12 +1,12 @@
---
-sidebar_label: "Allow Cluster Import"
-title: "Allow Cluster Import"
-description: "Learn how to import clusters to Palette VerteX."
+sidebar_label: "Allow Cluster Imports"
+title: "Allow Cluster Imports"
+description: "Learn how to import clusters to self-hosted Palette VerteX."
icon: ""
hide_table_of_contents: false
-sidebar_position: 20
-tags: ["vertex", "non-fips"]
-keywords: ["self-hosted", "vertex"]
+sidebar_position: 10
+tags: ["self-hosted", "vertex", "non-fips"]
+keywords: ["self-hosted", "vertex", "non-fips"]
---

You can allow tenant users to import Kubernetes clusters that were not deployed through Palette, including some that
@@ -15,13 +15,13 @@ option is not available.

Palette VerteX displays icons next to clusters to indicate their FIPS compliance status or when FIPS compliance cannot
be confirmed. To learn about icons that Palette VerteX applies, refer to
-[FIPS Status Icons](../../fips/fips-status-icons.md).
+[FIPS Status Icons](../../fips.md#fips-status-icons).

## Prerequisites

- You need tenant admin permission to enable this feature.

-- Refer to [Cluster Import Prerequisites](../../../clusters/imported-clusters/cluster-import.md#prerequisites).
+- Refer to [Cluster Import Prerequisites](../../../../clusters/imported-clusters/cluster-import.md#prerequisites).

## Allow non-FIPS Cluster Import

@@ -37,13 +37,13 @@

To disable the setting, toggle this option off and confirm you want to disable it.

-Refer to [Import a Cluster](../../../clusters/imported-clusters/cluster-import.md) for guidance. Check out
-[Import Modes](../../../clusters/imported-clusters/imported-clusters.md#import-modes) to learn about various import
+Refer to [Import a Cluster](../../../../clusters/imported-clusters/cluster-import.md) for guidance. Check out
+[Import Modes](../../../../clusters/imported-clusters/imported-clusters.md#import-modes) to learn about various import
modes and limitations to be aware of.

## Validate

-1. Log in to [Palette VerteX](https://console.spectrocloud.com/).
+1. Log in to Palette VerteX.

2. Navigate to the left **Main Menu** and select **Clusters**.

@@ -51,8 +51,8 @@ modes and limitations to be aware of.

## Resources

-- [Import a Cluster](../../../clusters/imported-clusters/cluster-import.md)
+- [Import a Cluster](../../../../clusters/imported-clusters/cluster-import.md)

-- [Import Modes](../../../clusters/imported-clusters/imported-clusters.md#import-modes)
+- [Import Modes](../../../../clusters/imported-clusters/imported-clusters.md#import-modes)

-- [Cluster Import Limitations](../../../clusters/imported-clusters/imported-clusters.md#limitations)
+- [Cluster Import Limitations](../../../../clusters/imported-clusters/imported-clusters.md#limitations)
diff --git a/docs/docs-content/vertex/system-management/enable-non-fips-settings/enable-non-fips-settings.md b/docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/enable-non-fips-settings.md
similarity index 66%
rename from docs/docs-content/vertex/system-management/enable-non-fips-settings/enable-non-fips-settings.md
rename to docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/enable-non-fips-settings.md
index 4154b85ad21..ae6e8495987 100644
--- a/docs/docs-content/vertex/system-management/enable-non-fips-settings/enable-non-fips-settings.md
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/enable-non-fips-settings.md
@@ -1,17 +1,18 @@
---
-sidebar_label: "Enable non-FIPS Settings"
-title: "Enable non-FIPS Settings"
+sidebar_label: "Non-FIPS Settings"
+title: "Non-FIPS Settings"
description:
-  "Enable settings in Palette VerteX that allow you to use non-FIPS resources and perform non-FIPS compliant actions."
+  "Enable settings in self-hosted Palette VerteX that allow you to use non-FIPS resources and perform non-FIPS compliant
+  actions."
icon: ""
hide_table_of_contents: false
-tags: ["vertex", "non-fips"]
-keywords: ["self-hosted", "vertex"]
+tags: ["self-hosted", "vertex", "non-fips"]
+keywords: ["self-hosted", "vertex", "non-fips"]
---

Palette VerteX is FIPS-enforced by default, incorporating the Spectro Cloud Cryptographic Module into the Kubernetes
Management Platform and the infrastructure components of target clusters. To learn more about our cryptographic library,
-check out [FIPS 140-3 Certification](../../../legal-licenses/compliance.md#fips-140-3).
+check out [FIPS 140-3 Certification](../../../../legal-licenses/compliance.md#fips-140-3).

If desired, you can allow the consumption of certain non-FIPS functionality in Palette VerteX at the tenant level.
**Platform Settings** at the tenant level provide toggles to allow non-FIPS-compliant packs and non-FIPS features such
@@ -25,4 +26,4 @@ as scans, backup, and restore. You can also allow importing clusters created ext

- [Allow Cluster Import](../../system-management/enable-non-fips-settings/allow-cluster-import.md)

-- [Spectro Cloud FIPS 140-3 Certification](../../../legal-licenses/compliance.md#fips-140-3)
+- [Spectro Cloud FIPS 140-3 Certification](../../../../legal-licenses/compliance.md#fips-140-3)
diff --git a/docs/docs-content/vertex/system-management/enable-non-fips-settings/use-non-fips-addon-packs.md b/docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/use-non-fips-addon-packs.md
similarity index 76%
rename from docs/docs-content/vertex/system-management/enable-non-fips-settings/use-non-fips-addon-packs.md
rename to docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/use-non-fips-addon-packs.md
index a994af01a7c..53a888ea569 100644
--- a/docs/docs-content/vertex/system-management/enable-non-fips-settings/use-non-fips-addon-packs.md
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/use-non-fips-addon-packs.md
@@ -1,16 +1,16 @@
---
-sidebar_label: "Use non-FIPS Packs"
-title: "Use non-FIPS Packs"
-description: "Add non-FIPS packs to VerteX cluster profiles."
+sidebar_label: "Use Non-FIPS Packs"
+title: "Use Non-FIPS Packs"
+description: "Learn how to enable non-FIPS packs and add them to cluster profiles in self-hosted Palette VerteX."
icon: ""
hide_table_of_contents: false
-sidebar_position: 0
-tags: ["vertex", "non-fips"]
-keywords: ["self-hosted", "vertex"]
+sidebar_position: 30
+tags: ["self-hosted", "vertex", "non-fips", "packs"]
+keywords: ["self-hosted", "vertex", "non-fips", "packs"]
---

Palette VerteX provides the following FIPS-compliant infrastructure components in Kubernetes clusters it deploys. Review
-[FIPS-Compliant Components](../../fips/fips-compliant-components.md) to learn more.
+[FIPS-Compliant Clusters](../../fips.md#fips-compliant-clusters) to learn more.

- Operating System (OS)

- Kubernetes

@@ -34,13 +34,13 @@ Registries can be added at the system level or tenant level. When added at the s
all the tenants. When added at the tenant level, registries are available only to that tenant. The
[Add a Registry](../add-registry.md) page offers guidance on adding a registry at the system scope in VerteX. For
guidance on adding a registry at the tenant scope, check out
-[Add a Tenant-Level Registry](../../../tenant-settings/add-registry.md).
+[Add a Tenant-Level Registry](../../../../tenant-settings/add-registry.md).

:::

The screenshot below shows the icon that VerteX displays next to FIPS-compliant infrastructure components to indicate
full FIPS compliance. Other icons are used to indicate profile layers with partial, unknown, or non-FIPS compliant
-status. To learn about other icons VerteX applies, refer to [FIPS Status Icons](../../fips/fips-status-icons.md).
+status. To learn about other icons VerteX applies, refer to [FIPS Status Icons](../../fips.md#fips-status-icons).

![Diagram showing FIPS-compliant icons in profile stack.](/vertex_fips-status-icons_icons-in-profile-stack.webp)

@@ -76,25 +76,11 @@ indicate their FIPS compliance status.

Use these steps to verify non-FIPS packs are available.

-1. Log in to [Palette](https://console.spectrocloud.com).
+1. Log in to Palette VerteX.

2. Navigate to the left **Main Menu** and select **Profiles**.

3. Try creating a cluster profile and verify the registry you added is available and packs are displayed. For guidance,
-   review the [Cluster Profiles](../../../profiles/cluster-profiles/cluster-profiles.md) documentation.
+   review the [Cluster Profiles](../../../../profiles/cluster-profiles/cluster-profiles.md) documentation.

VerteX will display the appropriate FIPS status icon next to each pack layer.
-
-## Resources
-
-- [Packs List](../../../integrations/integrations.mdx)
-
-- [Create an Infrastructure Profile](../../../profiles/cluster-profiles/create-cluster-profiles/create-infrastructure-profile.md)
-
-- [Create an Add-on Profile](../../../profiles/cluster-profiles/create-cluster-profiles/create-addon-profile/create-addon-profile.md)
-
-- [FIPS Status Icons](../../fips/fips-status-icons.md)
-
-- [Add a Registry](../add-registry.md)
-
-- [Add a Tenant-Level Registry](../../../tenant-settings/add-registry.md)
diff --git a/docs/docs-content/vertex/system-management/enable-non-fips-settings/use-non-fips-features.md b/docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/use-non-fips-features.md
similarity index 68%
rename from docs/docs-content/vertex/system-management/enable-non-fips-settings/use-non-fips-features.md
rename to docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/use-non-fips-features.md
index ff3412ec165..3bb4a27aa96 100644
--- a/docs/docs-content/vertex/system-management/enable-non-fips-settings/use-non-fips-features.md
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/enable-non-fips-settings/use-non-fips-features.md
@@ -1,12 +1,12 @@
---
-sidebar_label: "Use non-FIPS Features"
-title: "Use non-FIPS Features"
-description: "Use non-FIPS features such as backup, restore, and scans."
+sidebar_label: "Use Non-FIPS Features"
+title: "Use Non-FIPS Features"
+description: "Learn how to enable non-FIPS features such as backup, restore, and scans in self-hosted Palette VerteX."
icon: ""
hide_table_of_contents: false
-sidebar_position: 10
-tags: ["vertex", "non-fips"]
-keywords: ["self-hosted", "vertex"]
+sidebar_position: 20
+tags: ["self-hosted", "vertex", "non-fips"]
+keywords: ["self-hosted", "vertex", "non-fips"]
---

You can allow tenant users access to Palette features that are _not_ FIPS-compliant, such as tenant cluster backup and
@@ -19,13 +19,13 @@ page.
- You need tenant admin permission to enable this feature.

- Palette can back up clusters to several locations. To learn about backup requirements, review
-  [Backup-Restore](../../../clusters/cluster-management/backup-restore/backup-restore.md).
+  [Backup-Restore](../../../../clusters/cluster-management/backup-restore/backup-restore.md).

- There are no prerequisites for restoring clusters or performing scans.

## Allow non-FIPS Features

-1. Log in to [Palette VerteX](https://console.spectrocloud.com/) as a tenant admin.
+1. Log in to Palette VerteX as a tenant admin.

2. Navigate to the left **Main Menu** and click on **Tenant Settings**.

@@ -40,7 +40,7 @@ To disable the setting, toggle this option off and confirm you want to disable i

## Validate

-1. Log in to [Palette VerteX](https://console.spectrocloud.com/).
+1. Log in to Palette VerteX.

2. Navigate to the left **Main Menu** and click on **Clusters**.

@@ -49,6 +49,6 @@ To disable the setting, toggle this option off and confirm you want to disable i

## Resources

-- [Cluster Backup and Restore](../../../clusters/cluster-management/backup-restore/backup-restore.md)
+- [Cluster Backup and Restore](../../../../clusters/cluster-management/backup-restore/backup-restore.md)

-- [Scans](../../../clusters/cluster-management/compliance-scan.md)
+- [Scans](../../../../clusters/cluster-management/compliance-scan.md)
diff --git a/docs/docs-content/vertex/system-management/feature-flags.md b/docs/docs-content/self-hosted-setup/vertex/system-management/feature-flags.md
similarity index 64%
rename from docs/docs-content/vertex/system-management/feature-flags.md
rename to docs/docs-content/self-hosted-setup/vertex/system-management/feature-flags.md
index 669033896b4..8d2003228a4 100644
--- a/docs/docs-content/vertex/system-management/feature-flags.md
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/feature-flags.md
@@ -1,12 +1,12 @@
---
sidebar_label: "Feature Flags"
title: "Feature Flags"
-description: "Learn how to to use feature flags to manage features in Palette VerteX"
+description: "Learn how to use feature flags to manage features in self-hosted Palette VerteX"
icon: ""
hide_table_of_contents: false
-sidebar_position: 60
-tags: ["vertex", "management", "feature-flags"]
-keywords: ["self-hosted", "vertex", "feature-flags"]
+sidebar_position: 70
+tags: ["self-hosted", "vertex", "management", "feature-flags"]
+keywords: ["self-hosted", "vertex", "management", "feature-flags"]
---


@@ -17,7 +17,12 @@ keywords: ["self-hosted", "vertex", "feature-flags"]

## Prerequisites

-
+

## Enable a Feature

diff --git a/docs/docs-content/vertex/system-management/login-banner.md b/docs/docs-content/self-hosted-setup/vertex/system-management/login-banner.md
similarity index 81%
rename from docs/docs-content/vertex/system-management/login-banner.md
rename to docs/docs-content/self-hosted-setup/vertex/system-management/login-banner.md
index fc295ecaa79..65d68a00657 100644
--- a/docs/docs-content/vertex/system-management/login-banner.md
+++ b/docs/docs-content/self-hosted-setup/vertex/system-management/login-banner.md
@@ -2,12 +2,13 @@
sidebar_label: "Banners"
title: "Banners"
description:
-  "Learn how to add login and classification banners, also known as Authority to Operate (ATO) banners, in VerteX."
+  "Learn how to add login and classification banners, also known as Authority to Operate (ATO) banners, in self-hosted
+  Palette VerteX."
icon: ""
hide_table_of_contents: false
-sidebar_position: 100
-tags: ["vertex", "management", "ato", "banner"]
-keywords: ["self-hosted", "vertex", "ato", "banner"]
+sidebar_position: 50
+tags: ["self-hosted", "vertex", "management", "ato", "banner"]
+keywords: ["self-hosted", "vertex", "management", "ato", "banner"]
---


@@ -25,7 +26,7 @@ Take the following steps to add a login banner to your system console and tenant

:::warning

Login banners configured in the system console override tenant-specific login banners.
Refer to the -[Tenant Login Banner](../../tenant-settings/login-banner.md) guide to learn more about tenant-specific login banners. +[Tenant Login Banner](../../../tenant-settings/login-banner.md) guide to learn more about tenant-specific login banners. ::: diff --git a/docs/docs-content/vertex/system-management/registry-override.md b/docs/docs-content/self-hosted-setup/vertex/system-management/registry-override.md similarity index 96% rename from docs/docs-content/vertex/system-management/registry-override.md rename to docs/docs-content/self-hosted-setup/vertex/system-management/registry-override.md index eba0785b0b8..1799d9eaa99 100644 --- a/docs/docs-content/vertex/system-management/registry-override.md +++ b/docs/docs-content/self-hosted-setup/vertex/system-management/registry-override.md @@ -1,11 +1,11 @@ --- -sidebar_label: "Override Registry Configuration" -title: "Override Registry Configuration" -description: "Learn how to override the image registry configuration for Palette VerteX." +sidebar_label: "Image Registry Override" +title: "Image Registry Override" +description: "Learn how to override the default image registry for self-hosted Palette VerteX." hide_table_of_contents: false -sidebar_position: 120 -tags: ["vertex"] -keywords: ["enterprise kubernetes", "multi cloud kubernetes"] +sidebar_position: 80 +tags: ["self-hosted", "vertex", "registry"] +keywords: ["self-hosted", "vertex", "registry"] --- You can override the image registry configuration for Palette VerteX to reference a different image registry. This @@ -15,7 +15,7 @@ feature is useful when you want to use a custom image registry to store and mana Before overriding the image registry configuration for VerteX, ensure you have the following: -- A deployed and healthy [VerteX cluster](../install-palette-vertex/install-palette-vertex.md). +- A deployed and healthy [VerteX cluster](../vertex.md). - Access to the kubeconfig file for the VerteX cluster. 
You need the kubeconfig file to access the VerteX cluster and apply the image registry configuration. @@ -25,7 +25,7 @@ Before overriding the image registry configuration for VerteX, ensure you have t If you deployed VerteX through the Palette CLI, then you can download the kubeconfig file from the VerteX cluster details page in the system console. Navigate to the **Enterprise Cluster Migration** page. Click on the **Admin Kubeconfig** link to download the kubeconfig file. If you need help with configuring kubectl to access the VerteX - cluster, refer to the [Access Cluster with CLI](../../clusters/cluster-management/palette-webctl.md) guide. If you + cluster, refer to the [Access Cluster with CLI](../../../clusters/cluster-management/palette-webctl.md) guide. If you deployed VerteX onto an existing Kubernetes cluster, reach out to your cluster administrator for the kubeconfig file. ::: @@ -52,7 +52,8 @@ Select the appropriate tab below based on the environment in which your VertX cl 1. Open a terminal session. 2. Configure kubectl to use the kubeconfig file for the VerteX cluster. Refer to the - [Access Cluster with CLI](../../clusters/cluster-management/palette-webctl.md) for guidance on configuring kubectl. + [Access Cluster with CLI](../../../clusters/cluster-management/palette-webctl.md) for guidance on configuring + kubectl. 3. Navigate to the folder where you have the image-swap Helm chart available. You may have to extract the Helm chart if it is in a compressed format to access the **values.yaml** file. @@ -215,7 +216,8 @@ Use the following steps to override the image registry configuration. 1. Open a terminal session. 2. Configure kubectl to use the kubeconfig file for the VerteX cluster. Refer to the - [Access Cluster with CLI](../../clusters/cluster-management/palette-webctl.md) for guidance on configuring kubectl. + [Access Cluster with CLI](../../../clusters/cluster-management/palette-webctl.md) for guidance on configuring + kubectl. 3. 
Create an empty YAML file with the name **registry-secret.yaml**. Use the following command to create the file. @@ -304,7 +306,8 @@ Use the following steps to override the image registry configuration. 1. Open a terminal session with a network access to the VeteX cluster. 2. Configure kubectl to use the kubeconfig file for the VerteX cluster. Refer to the - [Access Cluster with CLI](../../clusters/cluster-management/palette-webctl.md) for guidance on configuring kubectl. + [Access Cluster with CLI](../../../clusters/cluster-management/palette-webctl.md) for guidance on configuring + kubectl. 3. Issue the following command to verify that the secret containing the image registry configuration is created. diff --git a/docs/docs-content/vertex/system-management/reverse-proxy.md b/docs/docs-content/self-hosted-setup/vertex/system-management/reverse-proxy.md similarity index 95% rename from docs/docs-content/vertex/system-management/reverse-proxy.md rename to docs/docs-content/self-hosted-setup/vertex/system-management/reverse-proxy.md index e4007a8a727..c6d986ebc7d 100644 --- a/docs/docs-content/vertex/system-management/reverse-proxy.md +++ b/docs/docs-content/self-hosted-setup/vertex/system-management/reverse-proxy.md @@ -1,21 +1,22 @@ --- -sidebar_label: "Configure Reverse Proxy" -title: "Configure Reverse Proxy" -description: "Learn how to configure a reverse proxy for Palette VerteX." +sidebar_label: "Reverse Proxy Configuration" +title: "Reverse Proxy Configuration" +description: "Learn how to configure a reverse proxy for self-hosted Palette VerteX." icon: "" hide_table_of_contents: false -sidebar_position: 40 -tags: ["vertex", "management"] -keywords: ["self-hosted", "vertex"] +sidebar_position: 110 +tags: ["self-hosted", "vertex", "management"] +keywords: ["self-hosted", "vertex", "management"] --- You can configure a reverse proxy for Palette VerteX. The reverse proxy can be used by host clusters deployed in a private network. 
Host clusters deployed in a private network are not accessible from the public internet or by users in different networks. You can use a reverse proxy to access the cluster's Kubernetes API server from a different network. - + When you configure reverse proxy server for Palette VerteX, clusters that use the will use the reverse proxy server address in the kubeconfig file. Clusters not using the Spectro Proxy pack will use the default cluster address in the kubeconfig file. + Use the following steps to configure a reverse proxy server for Palette VerteX. @@ -52,7 +53,7 @@ Use the following steps to configure a reverse proxy server for Palette VerteX. 2. Use a text editor and open the **values.yaml** file. Locate the `frps` section and update the following values in the **values.yaml** file. Refer to the - [Spectro Proxy Helm Configuration](../install-palette-vertex/install-on-kubernetes/vertex-helm-ref.md#spectro-proxy) + [Spectro Proxy Helm Configuration](../supported-environments/kubernetes/setup/non-airgap/helm-reference.md#spectro-proxy) to learn more about the configuration options.
diff --git a/docs/docs-content/self-hosted-setup/vertex/system-management/scar-migration.md b/docs/docs-content/self-hosted-setup/vertex/system-management/scar-migration.md new file mode 100644 index 00000000000..f4146460704 --- /dev/null +++ b/docs/docs-content/self-hosted-setup/vertex/system-management/scar-migration.md @@ -0,0 +1,31 @@ +--- +sidebar_label: "SCAR to OCI Registry Migration" +title: "SCAR to OCI Registry Migration" +description: + "Migrate Spectro Cloud Artifact Registry (SCAR) content to the OCI registry used to host packs and images for + self-hosted Palette VerteX." +icon: "" +hide_table_of_contents: false +sidebar_position: 120 +tags: ["self-hosted", "vertex", "management", "scar"] +keywords: ["self-hosted", "vertex", "management", "scar"] +--- + + + +## Prerequisites + + + +## Migrate SCAR + + + +## Validate + + diff --git a/docs/docs-content/vertex/system-management/smtp.md b/docs/docs-content/self-hosted-setup/vertex/system-management/smtp.md similarity index 70% rename from docs/docs-content/vertex/system-management/smtp.md rename to docs/docs-content/self-hosted-setup/vertex/system-management/smtp.md index 9291234e3c5..80c5d9febc1 100644 --- a/docs/docs-content/vertex/system-management/smtp.md +++ b/docs/docs-content/self-hosted-setup/vertex/system-management/smtp.md @@ -1,10 +1,10 @@ --- -sidebar_label: "Configure SMTP" -title: "Configure SMTP" -description: "Learn how to configure an SMTP server for your Palette instance." +sidebar_label: "SMTP Configuration" +title: "SMTP Configuration" +description: "Configure an SMTP server for self-hosted Palette VerteX." 
icon: "" hide_table_of_contents: false -sidebar_position: 30 +sidebar_position: 130 tags: ["vertex", "management"] keywords: ["self-hosted", "vertex"] --- diff --git a/docs/docs-content/vertex/system-management/ssl-certificate-management.md b/docs/docs-content/self-hosted-setup/vertex/system-management/ssl-certificate-management.md similarity index 89% rename from docs/docs-content/vertex/system-management/ssl-certificate-management.md rename to docs/docs-content/self-hosted-setup/vertex/system-management/ssl-certificate-management.md index 9dacccd6ea3..052605c8468 100644 --- a/docs/docs-content/vertex/system-management/ssl-certificate-management.md +++ b/docs/docs-content/self-hosted-setup/vertex/system-management/ssl-certificate-management.md @@ -1,19 +1,19 @@ --- -sidebar_label: "System Address Management" -title: "System Address Management" -description: "Manage system address and SSL certificates in Palette." +sidebar_label: "System Address and SSL Certificate Management" +title: "System Address and SSL Certificate Management" +description: "Manage system address and SSL certificates in self-hosted Palette VerteX." icon: "" hide_table_of_contents: false -sidebar_position: 70 -tags: ["vertex", "management"] -keywords: ["self-hosted", "vertex"] +sidebar_position: 140 +tags: ["self-hosted", "vertex", "management"] +keywords: ["self-hosted", "vertex", "management"] --- Palette VerteX uses Secure Sockets Layer (SSL) certificates to secure internal and external communication with Hypertext Transfer Protocol Secure (HTTPS). External VerteX endpoints, such as the [system console](../system-management/system-management.md#system-console), -[VerteX dashboard](../../introduction/dashboard.md), the VerteX API, and the gRPC endpoint, are enabled by default with -HTTPS using an auto-generated self-signed certificate. 
+[VerteX dashboard](../../../introduction/dashboard.md), the VerteX API, and the gRPC endpoint, are enabled by default +with HTTPS using an auto-generated self-signed certificate. ## Update System Address and Certificates @@ -43,10 +43,9 @@ updating the system address may require manual reconciliation on deployed cluste - A utility or tool to convert the certificate and key files to base64-encoded strings. You can use the `base64` command in Unix-based systems. Alternatively, you can use an online tool to convert the files to base64-encoded strings. -- If you installed Palette VerteX on - [Kubernetes](../install-palette-vertex/install-on-kubernetes/install-on-kubernetes.md) and specified a custom domain - name, ensure that you created a certificate for that domain. If you did not specify a custom domain name, or if you - installed Palette VerteX on [VMware](../install-palette-vertex/install-on-vmware/install-on-vmware.md), you must +- If you installed Palette VerteX on [Kubernetes](../supported-environments/kubernetes/install/install.md) and specified + a custom domain name, ensure that you created a certificate for that domain. If you did not specify a custom domain + name, or if you installed Palette VerteX on [VMware](../supported-environments/vmware/install/install.md), you must create a certificate for the Palette VerteX system console’s IP address. You can also specify a load balancer IP address if you are using a load balancer to access Palette VerteX. @@ -127,8 +126,8 @@ newly configured system address. - Palette VerteX access with a configured cloud account. -- A cluster deployed prior to the system address update. Refer to the [Clusters](../../clusters/clusters.md) section for - further guidance. +- A cluster deployed prior to the system address update. Refer to the [Clusters](../../../clusters/clusters.md) section + for further guidance. - `kubectl` installed. 
Use the Kubernetes [Install Tools](https://kubernetes.io/docs/tasks/tools/) page for further guidance. @@ -143,7 +142,7 @@ newly configured system address. 3. Select your cluster from the **Clusters** list. The cluster **Overview** tab displays. 4. Download the kubeconfig file. This file allows you to connect to your deployed cluster. Check out the - [Kubeconfig](../../clusters/cluster-management/kubeconfig.md) page to learn more. + [Kubeconfig](../../../clusters/cluster-management/kubeconfig.md) page to learn more. 5. Open a terminal window and set the environment variable `KUBECONFIG` to point to the file you downloaded. ```shell diff --git a/docs/docs-content/vertex/system-management/system-management.md b/docs/docs-content/self-hosted-setup/vertex/system-management/system-management.md similarity index 51% rename from docs/docs-content/vertex/system-management/system-management.md rename to docs/docs-content/self-hosted-setup/vertex/system-management/system-management.md index a1962dc4b7b..03567f5f7f8 100644 --- a/docs/docs-content/vertex/system-management/system-management.md +++ b/docs/docs-content/self-hosted-setup/vertex/system-management/system-management.md @@ -4,13 +4,12 @@ title: "System Management" description: "Manage your Palette VerteX system settings." icon: "" hide_table_of_contents: false -sidebar_position: 20 -tags: ["vertex", "management"] -keywords: ["self-hosted", "vertex"] +tags: ["self-hosted", "vertex", "management"] +keywords: ["self-hosted", "vertex", "management"] --- Palette VerteX contains many system settings you can configure to meet your organization's needs. These settings are -available at the system level and are applied to all [tenants](../../glossary-all.md#tenant) in the system. +available at the system level and are applied to all [tenants](../../../glossary-all.md#tenant) in the system. ## System Console @@ -28,53 +27,32 @@ cluster and appending the `/system` path to the URL. 
For example, if your Palett System administrators can use the system console to perform the following operations: -- [Create and Manage System Accounts](./account-management/account-management.md) +- [Create and manage system administrators](account-management/account-management.md) -- Manage FIPS enforcement behaviors and settings. +- [Configure and manage SMTP settings](smtp.md). -- [Configure and manage SMTP settings](./smtp.md) - -- [Configure and manage pack registries](../system-management/add-registry.md). +- [Add system-level OCI-compliant pack registries](add-registry.md). - [Configure and manage SSL certificates](ssl-certificate-management.md). - Configure DNS settings. -- Setup alerts and notifications. +- Set up alerts and notifications. - Enable metrics collection. -- [Manage feature flags](./feature-flags.md). +- [Enable tech preview features using feature flags](./feature-flags.md). -- [Manage VerteX platform upgrades](../upgrade/upgrade.md). +- Manage Palette platform upgrades. -- [Configure login banner](./login-banner.md). +- [Configure login and classification banners](./login-banner.md). -- [Manage tenants](tenant-management.md). +- [Create and manage tenants](tenant-management.md). -- [Override Registry Configuration](registry-override.md) +- [Configure Palette to pull images from an alternate registry](registry-override.md) - Manage the Enterprise cluster and the profile layers and pack integrations that makeup the Enterprise cluster. -- [Customize the login screen and dashboard interface](./customize-interface.md). - -Check out the following resources to learn more about these operations. - -:::warning - -Exercise caution when changing system settings as the changes will be applied to all tenants in the system. 
- -::: -## Resources - -- [Account Management](./account-management/account-management.md) - -- [Add a Registry](add-registry.md) - -- [Enable non-FIPS Settings](enable-non-fips-settings/enable-non-fips-settings.md) - -- [Tenant Management](../system-management/tenant-management.md) - -- [SSL Certificate Management](../system-management/ssl-certificate-management.md) +- [Customize the login screen and dashboard interface](./customize-interface.md). -- [Configure and manage pack registries](../system-management/add-registry.md). +- [Configure reverse proxy](reverse-proxy.md) diff --git a/docs/docs-content/vertex/system-management/tenant-management.md b/docs/docs-content/self-hosted-setup/vertex/system-management/tenant-management.md similarity index 93% rename from docs/docs-content/vertex/system-management/tenant-management.md rename to docs/docs-content/self-hosted-setup/vertex/system-management/tenant-management.md index 095cb28bb83..55616cc39e6 100644 --- a/docs/docs-content/vertex/system-management/tenant-management.md +++ b/docs/docs-content/self-hosted-setup/vertex/system-management/tenant-management.md @@ -1,20 +1,18 @@ --- sidebar_label: "Tenant Management" title: "Tenant Management" -description: "Learn how to create and remove tenants in Palette VerteX." +description: "Create and remove tenants in self-hosted Palette VerteX." icon: "" hide_table_of_contents: false -sidebar_position: 90 -tags: ["vertex", "management"] -keywords: ["self-hosted", "vertex"] +sidebar_position: 160 +tags: ["self-hosted", "vertex", "management"] +keywords: ["self-hosted", "vertex", "management"] --- Tenants are isolated environments in Palette VerteX that contain their own clusters, users, and resources. You can create multiple tenants in Palette VerteX to support multiple teams or projects. Instructions for creating and removing tenants are provided below. -
-
 
 ## Create a Tenant
 
 You can create a tenant in Palette VerteX by following these steps.
diff --git a/docs/docs-content/self-hosted-setup/vertex/vertex.md b/docs/docs-content/self-hosted-setup/vertex/vertex.md
new file mode 100644
index 00000000000..eb150e86401
--- /dev/null
+++ b/docs/docs-content/self-hosted-setup/vertex/vertex.md
@@ -0,0 +1,122 @@
+---
+sidebar_label: "Palette VerteX"
+title: "Self-Hosted Palette VerteX"
+description: "Learn how Palette VerteX enables regulated industries to meet stringent security requirements."
+hide_table_of_contents: false
+sidebar_position: 0
+tags: ["self-hosted", "vertex"]
+keywords: ["self-hosted", "vertex"]
+---
+
+Palette VerteX offers regulated industries, such as government and public sector organizations that handle sensitive and
+classified information, simplicity, security, and scale in production Kubernetes. VerteX is available as a self-hosted
+platform offering that you can install in your data centers or public cloud providers to manage Kubernetes clusters.
+
+## FIPS-Compliant
+
+Palette VerteX integrates validated Federal Information Processing Standards (FIPS) 140-3 cryptographic modules in
+Kubernetes clusters it deploys to ensure robust data protection for your organization’s infrastructure and applications.
+
+To learn more about our FIPS 140-3 certification, review
+[Spectro Cloud Cryptographic Module](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/5061).
+FIPS modules, which are accessible in our private artifact repository, extend Palette’s existing security features that
+include security scans, powerful RBAC, and tamper-proof edge device images. Palette VerteX protects sensitive data in
+clusters across edge, bare metal, on-prem data centers, air-gapped environments, and cloud.
+
+To learn more about FIPS in Palette VerteX, check out the [FIPS](./fips.md) section. 
+ +## Supported Platforms + +:::danger + +The [following section](#content-to-be-refactored) contains the content from the former VerteX +[Supported Platforms](https://docs.spectrocloud.com/vertex/supported-platforms/) page. Refactor this content to be a +partial and use a table similar to the following to compare and contrast support between the platforms. + +::: + +| **Azure Cloud** | **Palette Support** | **Palette VerteX Support** | +| ---------------------------------------------------------------------------------------------- | :-----------------: | :------------------------: | +| Azure Commercial (Public Cloud) | :white_check_mark: | :white_check_mark: | +| [Azure Government](https://azure.microsoft.com/en-us/explore/global-infrastructure/government) | :white_check_mark: | :white_check_mark: | + +### Content to be Refactored + +Palette VerteX supports the following infrastructure platforms for deploying Kubernetes clusters: + +| **Platform** | **Additional Information** | +| ------------------ | ------------------------------------------------------------------------- | +| **AWS** | Refer to the [AWS](#aws) section for additional guidance. | +| **AWS Gov** | Refer to the [AWS](#aws) section for additional guidance. | +| **AWS Secret** | Refer to the [AWS](#aws) section for additional guidance. | +| **Azure** | Refer to the [Azure](#azure) section for additional guidance. | +| **Azure Gov** | Refer to the [Azure](#azure) section for additional guidance. | +| **Dev Engine** | Refer to the VerteX Engine section for additional guidance. | +| **MAAS** | Canonical Metal-As-A-Service (MAAS) is available and supported in VerteX. | +| **Edge** | Edge deployments are supported in VerteX. | +| **VMware vSphere** | VMware vSphere is supported in VerteX. | + +Review the following tables for additional information about the supported platforms. 
+
+:::info
+
+For guidance on how to deploy a Kubernetes cluster on a supported platform, refer to the
+[Cluster](../../clusters/clusters.md) documentation.
+
+:::
+
+The term _IaaS_ refers to Palette using compute nodes that are not managed by a cloud provider, such as bare metal
+servers or virtual machines.
+
+#### AWS
+
+VerteX supports the following AWS services.
+
+| **Service** | **AWS Gov Support?** |
+| ----------- | -------------------- |
+| **IaaS** | ✅ |
+| **EKS** | ✅ |
+
+#### Azure
+
+VerteX supports the following Azure services.
+
+| **Service** | **Azure Gov Support?** |
+| ----------- | ---------------------- |
+| **IaaS** | ✅ |
+| **AKS** | ✅ |
+
+All Azure Government regions are supported with the exception of Department of Defense regions. Refer to the
+[official Azure Government documentation](https://learn.microsoft.com/en-us/azure/azure-government/documentation-government-overview-dod)
+to learn more about the available regions.
+
+#### Dev Engine
+
+VerteX supports the [Dev Engine](../../devx/devx.md) platform for deploying virtual clusters. However, the Dev Engine
+platform is not FIPS compliant and requires you to enable the [non-FIPS setting](./fips.md#enable-non-fips-components).
+Additionally, container deployment-based workflows are not supported in airgapped environments.
+
+#### VMware vSphere
+
+The following versions of VMware vSphere are supported in VerteX.
+
+| **Version** | **Supported?** |
+| ----------------- | -------------- |
+| **vSphere 6.7U3** | ✅ |
+| **vSphere 7.0** | ✅ |
+| **vSphere 8.0** | ✅ |
+
+## Access Palette VerteX
+
+To set up a Palette VerteX account, contact our support team by sending an email to support@spectrocloud.com. 
Include +the following information in your email: + +- Your full name +- Organization name (if applicable) +- Email address +- Phone number (optional) +- Target Platform (VMware or Kubernetes) +- A brief description of your intended use of VerteX + +Our dedicated support team will promptly get in touch with you to provide the necessary assistance and share the +installer image, credentials, and an endpoint URL to access the FIPS registry. diff --git a/docs/docs-content/tenant-settings/add-registry.md b/docs/docs-content/tenant-settings/add-registry.md index 0c049162b53..6ca18d3a3c4 100644 --- a/docs/docs-content/tenant-settings/add-registry.md +++ b/docs/docs-content/tenant-settings/add-registry.md @@ -12,7 +12,7 @@ You can add a registry at the tenant level, or if you are using self-hosted Pale registries at the system level. Registries added at the system level are available to all the tenants. Registries added at the tenant level are available only to that tenant. This section describes how to add a tenant-level registry. For guidance on adding a registry at the system scope, check out -[Add System-Level Registry](../enterprise-version/system-management/add-registry.md). +[Add System-Level Registry](../self-hosted-setup/palette/system-management/add-registry.md). To add a tenant-level registry, you must have tenant admin access to Palette. Use the following resources to learn more about the different types of registries that you can add to Palette: @@ -24,5 +24,5 @@ about the different types of registries that you can add to Palette: - [Legacy Pack Registries](../registries-and-packs/registries/pack-registries.md) To add a system level registry, you must have system admin access to a self-hosted Palette or Palette VerteX -environment. Check out the [Self-Hosted Add Registry](../enterprise-version/system-management/add-registry.md) guide or -the [VerteX Add Registry](../vertex/system-management/add-registry.md) guide. +environment. 
Check out the [Self-Hosted Add Registry](../self-hosted-setup/palette/system-management/add-registry.md) +guide or the [VerteX Add Registry](../self-hosted-setup/vertex/system-management/add-registry.md) guide. diff --git a/docs/docs-content/tenant-settings/login-banner.md b/docs/docs-content/tenant-settings/login-banner.md index 8a77a9b63ef..0c90af4f541 100644 --- a/docs/docs-content/tenant-settings/login-banner.md +++ b/docs/docs-content/tenant-settings/login-banner.md @@ -21,8 +21,8 @@ self-hosted Palette use the tenant URL defined during the Palette installation. Additionally, if you are using self-hosted Palette or VerteX and have a login banner configured at the system console, the tenant login banner will not be displayed, as the system console login banner takes precedence. Refer to the -[System Login Banner](../enterprise-version/system-management/login-banner.md) page to learn more about system login -banners. +[System Login Banner](../self-hosted-setup/palette/system-management/login-banner.md) page to learn more about system +login banners. ::: diff --git a/docs/docs-content/troubleshooting/enterprise-install.md b/docs/docs-content/troubleshooting/enterprise-install.md index 337a561695e..d7118fac132 100644 --- a/docs/docs-content/troubleshooting/enterprise-install.md +++ b/docs/docs-content/troubleshooting/enterprise-install.md @@ -139,10 +139,11 @@ The VerteX Management Appliance upgrade process will then continue. You can moni ## Scenario - Palette/VerteX Management Appliance Installation Stalled due to piraeus-operator Pack in Error State -During the installation of the [Palette](../enterprise-version/install-palette/palette-management-appliance.md) or -[VerteX Management Appliance](../vertex/install-palette-vertex/vertex-management-appliance.md), the `piraeus-operator` -pack can enter an error state in Local UI. 
This can be caused by stalled creation of Kubernetes secrets in the -`piraeus-system` namespace and can prevent the installation from completing successfully. +During the installation of the +[Palette](../self-hosted-setup/palette/supported-environments/management-appliance/install.md) or +[VerteX Management Appliance](../self-hosted-setup/vertex/supported-environments/management-appliance/install.md), the +`piraeus-operator` pack can enter an error state in Local UI. This can be caused by stalled creation of Kubernetes +secrets in the `piraeus-system` namespace and can prevent the installation from completing successfully. To resolve, you can manually delete any secrets in the `piraeus-system` namespace that have a `pending-install` status label. This will allow the `piraeus-operator` pack to complete its deployment and the Palette/VerteX Management @@ -230,10 +231,11 @@ Appliance installation to proceed. ## Scenario - Unexpected Logouts in Tenant Console After Palette/VerteX Management Appliance Installation After installing self-hosted Palette/Palette VerteX using the -[Palette Management Appliance](../enterprise-version/install-palette/palette-management-appliance.md) or -[VerteX Management Appliance](../vertex/install-palette-vertex/vertex-management-appliance.md), you may experience -unexpected logouts when using the tenant console. This can be caused by a time skew on your Palette/VerteX management -cluster nodes, which leads to authentication issues. +[Palette Management Appliance](../self-hosted-setup/palette/supported-environments/management-appliance/management-appliance.md) +or +[VerteX Management Appliance](../self-hosted-setup/vertex/supported-environments/management-appliance/management-appliance.md), +you may experience unexpected logouts when using the tenant console. This can be caused by a time skew on your +Palette/VerteX management cluster nodes, which leads to authentication issues. 
To verify the system time, open a terminal session on each node in your Palette/VerteX management cluster and issue the following command to check the system time. @@ -400,8 +402,8 @@ the upgrade, you must manually release the orphaned claim holding the IP address ``` 6. Re-run the upgrade. For guidance, refer to the applicable upgrade guide for your airgapped instance of - [Palette](../enterprise-version/upgrade/upgrade-vmware/airgap.md) or - [VerteX](../vertex/upgrade/upgrade-vmware/airgap.md). + [Palette](../self-hosted-setup/palette/supported-environments/vmware/upgrade/airgap.md) or + [VerteX](../self-hosted-setup/vertex/supported-environments/vmware/upgrade/airgap.md). ## Scenario - Self-Linking Error @@ -420,9 +422,10 @@ This error may occur if the self-hosted pack registry specified in the installat guide. 3. Log in to the system console. Refer to - [Access Palette system console](../enterprise-version/system-management/system-management.md#access-the-system-console) - or [Access Vertex system console](../vertex/system-management/system-management.md#access-the-system-console) for - additional guidance. + [Access Palette system console](../self-hosted-setup/palette/system-management/system-management.md#access-the-system-console) + or + [Access Vertex system console](../self-hosted-setup/vertex/system-management/system-management.md#access-the-system-console) + for additional guidance. 4. From the left navigation menu, select **Administration** and click on the **Pack Registries** tab. diff --git a/docs/docs-content/troubleshooting/pack-issues.md b/docs/docs-content/troubleshooting/pack-issues.md index a9d352c0ba6..359f5935739 100644 --- a/docs/docs-content/troubleshooting/pack-issues.md +++ b/docs/docs-content/troubleshooting/pack-issues.md @@ -13,10 +13,11 @@ The following are common scenarios that you may encounter when using Packs. 
## Scenario - Pods with NamespaceLabels are Stuck on Deployment
 
 When deploying a workload cluster with packs that declare `namespaceLabels`, the associated Pods never start if the
-cluster was deployed via self-hosted [Palette](../enterprise-version/enterprise-version.md) or
-[Palette VerteX](../vertex/vertex.md) or if the `palette-agent` ConfigMap has `data.feature.workloads: disable`. This is
-due to the necessary labels not being applied to the target namespace, resulting in the namespace lacking the elevated
-privileges the Pods require and the Kubernetes’ PodSecurity admission blocks the Pods.
+cluster was deployed via self-hosted [Palette](../self-hosted-setup/palette/palette.md) or
+[Palette VerteX](../self-hosted-setup/vertex/vertex.md) or if the `palette-agent` ConfigMap has
+`data.feature.workloads: disable`. This is due to the necessary labels not being applied to the target namespace,
+resulting in the namespace lacking the elevated privileges the Pods require, and Kubernetes’ PodSecurity admission
+blocks the Pods.
 
 To resolve this issue, force-apply the PodSecurity policies directly to the namespace of the affected Pods.
diff --git a/docs/docs-content/troubleshooting/palette-upgrade.md b/docs/docs-content/troubleshooting/palette-upgrade.md
index dbbd6f4a017..f8f318a71ed 100644
--- a/docs/docs-content/troubleshooting/palette-upgrade.md
+++ b/docs/docs-content/troubleshooting/palette-upgrade.md
@@ -8,23 +8,24 @@ sidebar_position: 60
 tags: ["troubleshooting", "palette-upgrade"]
 ---
 
-We recommend you review the [Release Notes](../release-notes/release-notes.md) and the
-[Upgrade Notes](../enterprise-version/upgrade/upgrade.md) before attempting to upgrade Palette. Use this information to
-address common issues that may occur during an upgrade.
+We recommend you review the [Release Notes](../release-notes/release-notes.md) before attempting to upgrade Palette. Use
+this information to address common issues that may occur during an upgrade.
## Self-Hosted Palette or Palette VerteX Upgrade Hangs -Upgrading [self-hosted Palette](../enterprise-version/enterprise-version.md) or [Palette VerteX](../vertex/vertex.md) -from version 4.6.x to 4.7.x can cause the upgrade to hang if any member of a MongoDB ReplicaSet is not fully synced and -in a healthy state prior to the upgrade. +Upgrading [self-hosted Palette](../self-hosted-setup/palette/palette.md) or +[Palette VerteX](../self-hosted-setup/vertex/vertex.md) from version 4.6.x to 4.7.x can cause the upgrade to hang if any +member of a MongoDB ReplicaSet is not fully synced and in a healthy state prior to the upgrade. ### Debug Steps To verify the health status of each MongoDB ReplicaSet member, use the following procedure based on whether you are upgrading Palette or Palette VerteX. -1. Log in to the [Palette](../enterprise-version/system-management/system-management.md#access-the-system-console) or - [Palette VerteX](../vertex/system-management/system-management.md#access-the-system-console) system console. +1. Log in to the + [Palette](../self-hosted-setup/palette/system-management/system-management.md#access-the-system-console) or + [Palette VerteX](../self-hosted-setup/vertex/system-management/system-management.md#access-the-system-console) system + console. 2. From the left main menu, select **Enterprise Cluster**. @@ -260,7 +261,7 @@ ConfigMap value is incorrect, use the following steps to resolve the issue. 4. If the host value is incorrect, log in to the System Console. You can find guidance on how to access the System Console in the - [Access the System Console](../vertex/system-management/system-management.md#access-the-system-console) + [Access the System Console](../self-hosted-setup/vertex/system-management/system-management.md#access-the-system-console) documentation. 5. Navigate to the **Main Menu** and select **Enterprise Cluster**. 
From the **System Profiles** page, select the diff --git a/docs/docs-content/troubleshooting/pcg.md b/docs/docs-content/troubleshooting/pcg.md index 1e4e608aac9..efca2db57c4 100644 --- a/docs/docs-content/troubleshooting/pcg.md +++ b/docs/docs-content/troubleshooting/pcg.md @@ -29,9 +29,10 @@ cluster on an as-need basis. ### Debug Steps For multi-tenant and dedicated SaaS instances, perform cleanup on any applicable PCGs. For -[self-hosted Palette](../enterprise-version/enterprise-version.md) and [Palette VerteX](../vertex/vertex.md), clean up -any applicable PCGs as well as your management plane cluster if you have used the Palette -[System Private Gateway](../clusters/pcg/architecture.md#system-private-gateway) to deploy clusters. +[self-hosted Palette](../self-hosted-setup/palette/palette.md) and +[Palette VerteX](../self-hosted-setup/vertex/vertex.md), clean up any applicable PCGs as well as your management plane +cluster if you have used the Palette [System Private Gateway](../clusters/pcg/architecture.md#system-private-gateway) to +deploy clusters. @@ -106,7 +107,7 @@ any applicable PCGs as well as your management plane cluster if you have used th 1. Log in to your Palette or Palette VerteX - [system console](../enterprise-version/system-management/system-management.md#access-the-system-console). + [system console](../self-hosted-setup/palette/system-management/system-management.md#access-the-system-console). 2. From the left main menu, select **Enterprise Cluster**. @@ -275,7 +276,7 @@ to Palette 4.7. 1. Log in to your Palette or Palette VerteX - [system console](../enterprise-version/system-management/system-management.md#access-the-system-console). + [system console](../self-hosted-setup/palette/system-management/system-management.md#access-the-system-console). 2. From the left main menu, select **Enterprise Cluster**. 
diff --git a/docs/docs-content/tutorials/getting-started/additional-capabilities/self-hosted.md b/docs/docs-content/tutorials/getting-started/additional-capabilities/self-hosted.md index b335c698019..37a5c1de564 100644 --- a/docs/docs-content/tutorials/getting-started/additional-capabilities/self-hosted.md +++ b/docs/docs-content/tutorials/getting-started/additional-capabilities/self-hosted.md @@ -43,11 +43,11 @@ and applications. ## Resources -Check out the [Self-Hosted Palette](../../../enterprise-version/enterprise-version.md) section to learn how to install -the self-hosted version of Palette in your data centers or public cloud providers. +Check out the [Self-Hosted Palette](../../../self-hosted-setup/palette/palette.md) section to learn how to install the +self-hosted version of Palette in your data centers or public cloud providers. -Review the [Palette VerteX](../../../vertex/vertex.md) section to learn how to install and configure VerteX in your data -centers or public cloud providers. +Review the [Palette VerteX](../../../self-hosted-setup/vertex/vertex.md) section to learn how to install and configure +VerteX in your data centers or public cloud providers. Check out the following video for a tour of Palette VerteX, our tailor-made Kubernetes management solution for government and regulated industries. diff --git a/docs/docs-content/user-management/authentication/switch-tenant.md b/docs/docs-content/user-management/authentication/switch-tenant.md index c76bf161bdf..98d5c40c7f5 100644 --- a/docs/docs-content/user-management/authentication/switch-tenant.md +++ b/docs/docs-content/user-management/authentication/switch-tenant.md @@ -15,11 +15,11 @@ having to log in again. This feature is available to self-hosted Palette, VerteX - You must have a user account in the tenant you want to switch to. - At least two tenants must be available in the Palette instance. System administrators for self-hosted Palette or - VerteX instances can create multiple tenants. 
Refer to the Palette
-  [Tenant Management](../../enterprise-version/system-management/tenant-management.md) or the Vertex
-  [Tenant Management](../../vertex/system-management/tenant-management.md) page for guidance on how to create tenants.
-  Users of Palette SaaS, contact our support team at [support@spectrocloud.com](mailto:support@spectrocloud.com) for
-  additional tenants.
+  VerteX instances can create multiple tenants. Refer to the self-hosted Palette
+  [Tenant Management](../../self-hosted-setup/palette/system-management/tenant-management.md) or the Palette VerteX
+  [Tenant Management](../../self-hosted-setup/vertex/system-management/tenant-management.md) page for guidance on how to
+  create tenants. If you use Palette SaaS, contact our support team at
+  [support@spectrocloud.com](mailto:support@spectrocloud.com) for additional tenants.

## Switch Tenant

diff --git a/docs/docs-content/user-management/saml-sso/palette-sso-with-okta-saml.md b/docs/docs-content/user-management/saml-sso/palette-sso-with-okta-saml.md
index 57865172c58..418ee172f39 100644
--- a/docs/docs-content/user-management/saml-sso/palette-sso-with-okta-saml.md
+++ b/docs/docs-content/user-management/saml-sso/palette-sso-with-okta-saml.md
@@ -20,9 +20,10 @@ The following steps will guide you on how to enable Palette SSO with

## Prerequisites

- For Okta SAML to work correctly with self-hosted Palette, ensure that HTTPS is enabled and TLS is configured. For
-  additional information, refer to the appropriate
-  [Palette](../../../enterprise-version/system-management/ssl-certificate-management) or
-  [VerteX](../../../vertex/system-management/ssl-certificate-management) System Address Management guide.
+  additional information, refer to the appropriate self-hosted
+  [Palette](../../self-hosted-setup/palette/system-management/ssl-certificate-management.md) or
+  [Palette VerteX](../../self-hosted-setup/vertex/system-management/ssl-certificate-management.md) System Address
+  Management guide.
- A free or paid subscription with Okta. Okta provides free [developer subscriptions](https://developer.okta.com/signup/) for testing purposes. diff --git a/docs/docs-content/user-management/saml-sso/palette-sso-with-okta.md b/docs/docs-content/user-management/saml-sso/palette-sso-with-okta.md index e964bb1a749..6b69f59eeac 100644 --- a/docs/docs-content/user-management/saml-sso/palette-sso-with-okta.md +++ b/docs/docs-content/user-management/saml-sso/palette-sso-with-okta.md @@ -20,9 +20,10 @@ The following steps will guide you on how to enable Palette SSO with ## Prerequisites - For Okta SAML to work correctly with self-hosted Palette, ensure that HTTPS is enabled and TLS is configured. For - additional information, refer to the appropriate - [Palette](../../../enterprise-version/system-management/ssl-certificate-management) or - [VerteX](../../../vertex/system-management/ssl-certificate-management) System Address Management guide. + additional information, refer to the appropriate self-hosted + [Palette](../../self-hosted-setup/palette/system-management/ssl-certificate-management.md) or + [Palette VerteX](../../self-hosted-setup/vertex/system-management/ssl-certificate-management.md) System Address + Management guide. - A free or paid subscription with Okta. Okta provides free [developer subscriptions](https://developer.okta.com/signup/) for testing purposes. 
diff --git a/docs/docs-content/vertex/fips/_category_.json b/docs/docs-content/vertex/fips/_category_.json deleted file mode 100644 index 3fca6fb9f9b..00000000000 --- a/docs/docs-content/vertex/fips/_category_.json +++ /dev/null @@ -1,3 +0,0 @@ -{ - "position": 0 -} diff --git a/docs/docs-content/vertex/fips/fips-status-icons.md b/docs/docs-content/vertex/fips/fips-status-icons.md deleted file mode 100644 index edeeec620bd..00000000000 --- a/docs/docs-content/vertex/fips/fips-status-icons.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -sidebar_label: "FIPS Status Icons" -title: "FIPS Status Icons" -description: - "Learn how icons can help you identify FIPS compliance when you consume features that are not FIPS compliant." -icon: "" -hide_table_of_contents: false -sidebar_position: 0 -tags: ["vertex", "fips"] -keywords: ["self-hosted", "vertex"] ---- - -While Palette VerteX brings FIPS 140-3 cryptographic modules to the Palette management platform and deployed clusters, -it also provides the capability to consume features that are not FIPS compliant. For example, when the cluster import -option is enabled, it allows users to import any type of Kubernetes cluster, including some that are not fully FIPS -compliant. Similarly, when the option to add non-FIPS add-on packs is enabled, users can add packs in cluster profiles -that are not FIPS compliant. For more information about these tenant-level settings, refer to -[Enable non-FIPS Settings](../system-management/enable-non-fips-settings/enable-non-fips-settings.md). - -To avoid confusion and compliance issues, Palette VerteX displays icons to indicate the FIPS compliance status of -clusters, profiles, and packs. - -The table lists icons used to indicate FIPS compliance status. The partial FIPS compliance icon applies only to clusters -and profiles because these may contain packs with an _Unknown_ or _Not FIPS-compliant_ status. 
- -| **Icon** | **Description** | **Applies to Clusters** | **Applies to Profiles** | **Applies to Packs** | -| ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------ | ----------------------- | ----------------------- | -------------------- | -| ![Full FIPS compliance](/vertex_fips-status-icons_compliant.webp) | Full FIPS compliance. All packs in the cluster are FIPS-compliant. | ✅ | ✅ | ✅ | -| ![Partial FIPS compliance](/vertex_fips-status-icons_partial.webp) | Partial FIPS compliance. Some packs are FIPS compliant, but there is at least one that is not. | ✅ | ✅ | ❌ | -| ![Not FIPS-compliant](/vertex_fips-status-icons_not-compliant.webp) | Not FIPS-compliant. None of the packs in the cluster are FIPS-compliant. | ✅ | ✅ | ✅ | -| ![Unknown FIPS state](/vertex_fips-status-icons_unknown.webp) | Unknown state of FIPS compliance. This applies to imported clusters that were not deployed by Palette. | ✅ | ✅ | ✅ | - - - -The screenshots below show how Palette VerteX applies FIPS status icons. - -:::tip - -When creating a cluster profile, you can filter packs by checking the **FIPS Compliant** checkbox to display only -FIPS-compliant packs. - -::: - -When you create a profile, icons display next to packs. - -![Diagram showing FIPS status icons on profile page.](/vertex_fips-status-icons_icons-on-profile-page.webp) - -Icons appear next to each profile layer to indicate FIPS compliance. - -![Diagram showing FIPS-compliant icons in profile stack.](/vertex_fips-status-icons_icons-in-profile-stack.webp) - -In this screenshot, Palette VerteX shows FIPS status for the cluster is partially compliant because one pack in the -profile is not FIPS-compliant. 
- -![Diagram showing FIPS status icons on Cluster Overview page.](/vertex_fips-status-icons_icons-in-cluster-overview.webp) diff --git a/docs/docs-content/vertex/fips/fips.md b/docs/docs-content/vertex/fips/fips.md deleted file mode 100644 index 6e3e9970437..00000000000 --- a/docs/docs-content/vertex/fips/fips.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -sidebar_label: "FIPS" -title: "FIPS" -description: "Learn about FIPS compliance in Palette VerteX." -icon: "" -hide_table_of_contents: false -tags: ["vertex", "fips"] -keywords: ["self-hosted", "vertex"] ---- - -Palette VerteX is FIPS 140-3 certified -([#5061](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/5061)). This means that -Palette VerteX uses FIPS 140-3 compliant algorithms and encryption methods. With its additional security scanning -capabilities, Palette VerteX is designed to meet the stringent requirements of regulated industries. Palette VerteX -operates on FIPS-compliant Ubuntu Pro versions. - -## Non-FIPS Enablement - -You can deploy non-FIPS-compliant components in your Palette VerteX environment by enabling non-FIPS settings. Refer to -the [Enable non-FIPS Settings](../system-management/enable-non-fips-settings/enable-non-fips-settings.md) guide for more -information. - -Something to note when using RKE2 and K3s: - -- When we scan the binaries, which we consume directly from Rancher's RKE2 repository, issues are reported for the - following components. These components were compiled with a Go compiler that is not FIPS-compliant. - - - `container-suseconnect` - - `container-suseconnect-zypp` - - `susecloud` - - Since these components are unrelated to Kubernetes and are instead used to access SUSE’s repositories during the - Docker build process, RKE2 itself remains fully compliant. - - RKE2 is designated as FIPS-compliant per official Rancher - [FIPS 140-2 Enablement](https://docs.rke2.io/security/fips_support) security documentation. 
Therefore, Palette VerteX - designates RKE2 as FIPS-compliant. - -- Although K3s is not available as a FIPS-certified distribution, Palette VerteX supports K3s as a Kubernetes - distribution for Edge clusters. - -Palette VerteX uses icons to show FIPS compliance status. For information about Palette VerteX status icons, review -[FIPS Status Icons](fips-status-icons.md). - -## Legal Notice - -Spectro Cloud has performed a categorization under FIPS 199 with (client/tenant) for the data types (in accordance with -NIST 800-60 Vol. 2 Revision 1) to be stored, processed, and/or transmitted by the Palette Vertex environment. -(client/tenant) maintains ownership and responsibility for the data and data types to be ingested by the Palette Vertex -SaaS in accordance with the agreed upon Palette Vertex FIPS 199 categorization. - -## Resources - -- [FIPS Status Icons](fips-status-icons.md) - -- [FIPS-Compliant Components](fips-compliant-components.md) - -- [RKE2 FIPS 140-2 Enablement](https://docs.rke2.io/security/fips_support) diff --git a/docs/docs-content/vertex/install-palette-vertex/_category_.json b/docs/docs-content/vertex/install-palette-vertex/_category_.json deleted file mode 100644 index 3fca6fb9f9b..00000000000 --- a/docs/docs-content/vertex/install-palette-vertex/_category_.json +++ /dev/null @@ -1,3 +0,0 @@ -{ - "position": 0 -} diff --git a/docs/docs-content/vertex/install-palette-vertex/airgap.md b/docs/docs-content/vertex/install-palette-vertex/airgap.md deleted file mode 100644 index a18b1263290..00000000000 --- a/docs/docs-content/vertex/install-palette-vertex/airgap.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -sidebar_label: "Airgap Resources" -title: "Airgap Resources" -description: "Airgap installation resources for Palette VerteX." -icon: "" -sidebar_position: 10 -hide_table_of_contents: false -tags: ["vertex", "self-hosted", "airgap"] -keywords: ["self-hosted", "vertex"] ---- - -You can install Palette VerteX in an airgapped environment. 
An airgap environment lacks direct access to the internet -and is intended for environments with strict security requirements. - -The installation process for an airgap environment is different due to the lack of internet access. Before the primary -VerteX installation steps, you must download the following artifacts. - -- Palette VerteX platform manifests and required platform packages. - -- Container images for core platform components and third party dependencies. - -- Palette VerteX packs. - -The other significant change is that VerteX's default public OCI registry is not used. Instead, a private OCI registry -is utilized for storing images and packs. - -## Overview - -Before you can install VerteX in an airgap environment, you must complete all the required pre-install steps. The -following diagram outlines the major pre-install steps for an airgap installation. - -![An architecture diagram outlining the five different install phases](/enterprise-version_air-gap-repo_overview-order-diagram.webp) - -1. Download the airgap setup binary from the URL provided by the support team. The airgap setup binary is a - self-extracting archive that contains the Palette platform manifests, images, and required packs. The airgap setup - binary is a one-time use binary for uploading VerteX images and packs to your OCI registry. You will not use the - airgap setup binary again after the initial installation. This step must be completed in an environment with internet - access. - -2. Move the airgap setup binary to the airgap environment. The airgap setup binary is used to extract the manifest - content and upload the required images and packs to your private OCI registry. Start the airgap setup binary in a - Linux Virtual Machine (VM). - -3. The airgap script will push the required images, packs, and manifest to the built-in [Harbor](https://goharbor.io/) - OCI registry. - -4. Install Palette using the Palette CLI or the Kubernetes Helm chart. - -5. 
Configure your VerteX environment. - -## Get Started - -To get started with an airgap VerteX installation, check out the respective platform guide. - -- [Kubernetes Airgap Instructions](./install-on-kubernetes/airgap-install/airgap-install.md) - -- [VMware vSphere Airgap Instructions](./install-on-vmware/airgap-install/airgap-install.md) - -Each platform guide provides detailed instructions on how to complete the pre-install steps. - -## Supported Platforms - -The following table outlines the supported platforms for an airgap VerteX installation and the supported OCI registries. - -| **Platform** | **OCI Registry** | **Supported** | -| -------------- | ---------------- | ------------- | -| Kubernetes | Harbor | ✅ | -| Kubernetes | AWS ECR | ✅ | -| VMware vSphere | Harbor | ✅ | -| VMware vSphere | AWS ECR | ✅ | - -## Resources - -- [Additional Packs](../../downloads/palette-vertex/additional-packs.md) - -- [Offline Documentation](../../downloads/offline-docs.md) diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/airgap-install.md b/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/airgap-install.md deleted file mode 100644 index ecff194518d..00000000000 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/airgap-install.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -sidebar_label: "Airgap Installation" -title: "Airgap Installation" -description: "Learn how to deploy VerteX to a Kubernetes cluster using a Helm Chart." -icon: "" -hide_table_of_contents: false -sidebar_position: 0 -tags: ["vertex", "enterprise", "airgap", "kubernetes"] -keywords: ["self-hosted", "vertex"] ---- - -You can install VerteX in an airgap Kubernetes environment. An airgap environment lacks direct access to the internet -and is intended for environments with strict security requirements. 
- -The installation process for an airgap environment is different due to the lack of internet access. Before the primary -Palette installation steps, you must download the following artifacts: - -- Palette platform manifests and required platform packages. - -- Container images for core platform components and third-party dependencies. - -- Palette packs. - -The other significant change is that VerteX's default public OCI registry is not used. Instead, a private OCI registry -is utilized to store images and packs. - -## Overview - -Before you can install Palette VerteX in an airgap environment, you must first set up your environment as outlined in -the following diagram. - -![An architecture diagram outlining the five different installation phases](/enterprise-version_air-gap-repo_k8s-points-overview-order-diagram.webp) - -1. In an environment with internet access, download the airgap setup binary from the URL provided by our support team. - The airgap setup binary is a self-extracting archive that contains the Palette platform manifests, images, and - required packs. The airgap setup binary is a single-use binary for uploading Palette images and packs to your OCI - registry. You will not use the airgap setup binary again after the initial installation. - -2. Move the airgap setup binary to the airgap environment. The airgap setup binary is used to extract the manifest - content and upload the required images and packs to your private OCI registry. Start the airgap setup binary in a - Linux Virtual Machine (VM). - -3. The airgap script will push the required images and packs to your private OCI registry. - -4. Install Palette using the Kubernetes Helm chart. - -## Get Started - -To get started with the airgap Palette installation, start by reviewing the -[Environment Setup](./kubernetes-airgap-instructions.md) page. The environment setup guide provides detailed -instructions on how to prepare your airgap environment. 
After you have completed the environment setup, you can proceed -with the [Install VerteX](./install.md) guide. - -## Resources - -- [Environment Setup](kubernetes-airgap-instructions.md) - -- [Install VerteX](./install.md) - -- [Airgap Installation Checklist](checklist.md) - -- [Additional Packs](../../../../downloads/palette-vertex/additional-packs.md) diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/checklist.md b/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/checklist.md deleted file mode 100644 index a157d149f24..00000000000 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/checklist.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -sidebar_label: "Checklist" -title: "Airgap VerteX Installation Checklist" -description: - "An airgap installation of Palette requires a few steps to be completed before the installation can begin. This - checklist will help you prepare for the installation." -icon: "" -sidebar_position: 10 -hide_table_of_contents: false -tags: ["vertex", "enterprise", "airgap", "kubernetes"] -keywords: ["self-hosted", "vertex"] ---- - -Use the following checklist to ensure you have completed all the required steps before deploying the airgap Palette -installation. - -- [ ] `oras` CLI v1.0.0 is installed and available. - -- [ ] `aws` CLI v2 or greater CLI is installed and available. - -- [ ] `zip` is installed and available. - -- [ ] Download the airgap setup binary from the support team. - -- [ ] Create a private repository named `spectro-packs` in your OCI registry. You can use a different name if you - prefer. - -- [ ] Create a public repository named `spectro-images` in your OCI registry. You can use a different name if you - prefer. - -- [ ] Authenticate with your OCI registry and acquired credentials to both repositories. - -- [ ] Download the Certificate Authority (CA) certificate from your OCI registry. 
- -- [ ] Set the required environment variables for the airgap setup binary. The values are different depending on what - type of OCI registry you use. - -- [ ] Start the airgap setup binary and verified the setup completed successfully. - -- [ ] Review the list of pack binaries to download and upload to your OCI registry. diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/_category_.json b/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/_category_.json deleted file mode 100644 index 3fca6fb9f9b..00000000000 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/_category_.json +++ /dev/null @@ -1,3 +0,0 @@ -{ - "position": 0 -} diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/checklist.md b/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/checklist.md deleted file mode 100644 index bd1e67334aa..00000000000 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/checklist.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -sidebar_label: "Checklist" -title: "Checklist" -description: - "An airgap installation of Palette requires a few steps to be completed before the installation can begin. This - checklist will help you prepare for the installation." -icon: "" -sidebar_position: 10 -hide_table_of_contents: false -tags: ["palette", "self-hosted", "airgap"] -keywords: ["self-hosted", "enterprise"] ---- - -Use the following checklist to ensure you have completed all the required steps before deploying the airgap Palette -installation. Review this checklist with your VerteX support team to ensure you have all the required assets. - -- [ ] Create a vSphere VM and Template folder named `spectro-templates`. - -- [ ] You have the met the requirements for the operating system. - - - [Ubuntu Pro](https://ubuntu.com/pro) - you need an Ubuntu Pro subscription token. 
- - - [Red Hat Linux Enterprise](https://www.redhat.com/en) - you need a Red Hat subscription and a custom RHEL vSphere - template with Kubernetes available in your vSphere environment. To learn how to create the required template, refer - to the [RHEL and PXK](../../../../byoos/image-builder/build-image-vmware/rhel-pxk.md) guide. - -- [ ] Import the Operating System and Kubernetes distribution OVA required for the installation and place the OVA in the - `spectro-templates` folder. - -- [ ] Append the `r_` prefix and remove the `.ova` suffix from the OVA name after the import. - -- [ ] Start the airgap setup binary and verify the setup is completed successfully. - -- [ ] Review the list of [pack binaries](../../../../downloads/palette-vertex/additional-packs.md) to download and - upload to your OCI registry. - -- [ ] Download the release binary that contains the core packs and images required for the installation. - -- [ ] If you have custom SSL certificates you want to include, copy the custom SSL certificates, in base64 PEM format, - to the support VM. The custom certificates must be placed in the **/opt/spectro/ssl** folder. Include the - following files: - - **server.crt** - - **server.key** diff --git a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/environment-setup.md b/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/environment-setup.md deleted file mode 100644 index 2393dd35c2b..00000000000 --- a/docs/docs-content/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/environment-setup.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -sidebar_label: "Environment Setup" -title: "Environment Setup" -description: "Learn how to prepare your airgap environment for VerteX installation." 
-icon: "" -hide_table_of_contents: false -sidebar_position: 20 -tags: ["self-hosted", "vertex", "airgap", "vmware", "vsphere"] -keywords: ["self-hosted", "vertex"] ---- - -This section helps you prepare your VMware vSphere airgap environment for VerteX installation. You can choose between -two methods to prepare your environment: - -1. If you have a Red Hat Enterprise Linux (RHEL) VM deployed in your environment, follow the - [Environment Setup with an Existing RHEL VM](./env-setup-vm-vertex.md) guide to learn how to prepare this VM for - VerteX installation. -2. If you do not have an RHEL VM, follow the [Environment Setup with OVA](./vmware-vsphere-airgap-instructions.md) - guide. This guide will show you how to use an OVA to deploy an airgap support VM in your VMware vSphere environment, - which will then assist with the VerteX installation process. - -## Resources - -- [Environment Setup with an Existing RHEL VM](./env-setup-vm-vertex.md) - -- [Environment Setup with OVA](./vmware-vsphere-airgap-instructions.md) diff --git a/docs/docs-content/vertex/install-palette-vertex/vertex-management-appliance.md b/docs/docs-content/vertex/install-palette-vertex/vertex-management-appliance.md deleted file mode 100644 index 0f71f424eb7..00000000000 --- a/docs/docs-content/vertex/install-palette-vertex/vertex-management-appliance.md +++ /dev/null @@ -1,208 +0,0 @@ ---- -title: "VerteX Management Appliance" -sidebar_label: "VerteX Management Appliance" -description: "Learn how to deploy Palette VerteX to your environment using the VerteX Management Appliance" -hide_table_of_contents: false -# sidebar_custom_props: -# icon: "chart-diagram" -tags: ["verteX management appliance", "self-hosted", "vertex"] -sidebar_position: 20 ---- - -:::preview - -This is a Tech Preview feature and is subject to change. Upgrades from a Tech Preview deployment may not be available. -Do not use this feature in production workloads. 
- -::: - -The VerteX Management Appliance is downloadable as an ISO file and is a solution for installing Palette VerteX on your -infrastructure. The ISO file contains all the necessary components needed for Palette to function. The ISO file is used -to boot the nodes, which are then clustered to form a Palette management cluster. - -Once Palette VerteX has been installed, you can download pack bundles and upload them to the internal Zot registry or an -external registry. These pack bundles are used to create your cluster profiles. You will then be able to deploy clusters -in your environment. - -## Third Party Packs - -There is an additional option to download and install the Third Party packs that provide complementary functionality to -Palette VerteX. These packs are not required for Palette VerteX to function, but they do provide additional features and -capabilities as described in the following table. - -| **Feature** | **Included with Palette Third Party Pack** | **Included with Palette Third Party Conformance Pack** | -| ------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ | ------------------------------------------------------ | -| [Backup and Restore](../../clusters/cluster-management/backup-restore/backup-restore.md) | :white_check_mark: | :x: | -| [Configuration Security](../../clusters/cluster-management/compliance-scan.md#configuration-security) | :white_check_mark: | :x: | -| [Penetration Testing](../../clusters/cluster-management/compliance-scan.md#penetration-testing) | :white_check_mark: | :x: | -| [Software Bill Of Materials (SBOM) scanning](../../clusters/cluster-management/compliance-scan.md#sbom-dependencies--vulnerabilities) | :white_check_mark: | :x: | -| [Conformance Testing](../../clusters/cluster-management/compliance-scan.md#conformance-testing) | :x: | :white_check_mark: | - -## Architecture - -The ISO file is 
built with the Operating System (OS), Kubernetes distribution, Container Network Interface (CNI), and -Container Storage Interface (CSI). A [Zot registry](https://zotregistry.dev/) is also included in the Appliance -Framework ISO. Zot is a lightweight, OCI-compliant container image registry that is used to store the Palette packs -needed to create cluster profiles. - -This solution is designed to be immutable, secure, and compliant with industry standards, such as the Federal -Information Processing Standards (FIPS). The following table displays the infrastructure profile for the Palette VerteX -appliance. - -| **Layer** | **Component** | **FIPS-compliant** | -| -------------- | --------------------------------------------- | ------------------ | -| **OS** | Ubuntu: Immutable [Kairos](https://kairos.io) | :white_check_mark: | -| **Kubernetes** | Palette eXtended Kubernetes Edge (PXK-E) | :white_check_mark: | -| **CNI** | Calico | :white_check_mark: | -| **CSI** | Piraeus | :white_check_mark: | -| **Registry** | Zot | :white_check_mark: | - -Check the **Component Updates** in the [Release Notes](../../release-notes/release-notes.md) for the specific versions -of each component as they may be updated between releases. - -## Supported Platforms - -The VerteX Management Appliance can be used on the following infrastructure platforms: - -- VMware vSphere -- Bare Metal -- Machine as a Service (MAAS) - -## Limitations - -- Only public image registries are supported if you are choosing to use an external registry for your pack bundles. - -## Installation Steps - -Follow the instructions to install Palette VerteX using the VerteX Management Appliance on your infrastructure platform. - -### Prerequisites - - - -### Install Palette VerteX - - - -:::warning - -If your installation is not successful, verify that the `piraeus-operator` pack was correctly installed. 
For more -information, refer to the -[Self-Hosted Installation - Troubleshooting](../../troubleshooting/enterprise-install.md#scenario---palettevertex-management-appliance-installation-stalled-due-to-piraeus-operator-pack-in-error-state) -guide. - -::: - -### Validate - - - -## Upload Packs to Palette VerteX - -Follow the instructions to upload packs to your Palette VerteX instance. Packs are used to create -[cluster profiles](../../profiles/cluster-profiles/cluster-profiles.md) and deploy workload clusters in your -environment. - -### Prerequisites - - - -### Upload Packs - - - -### Validate - - - -## (Optional) Upload Third Party Packs - -Follow the instructions to upload the Third Party packs to your Palette VerteX instance. The Third Party packs contain -additional functionality and capabilities that enhance the Palette VerteX experience, such as backup and restore, -configuration scanning, penetration scanning, SBOM scanning, and conformance scanning. - -### Prerequisites - - - -### Upload Packs - - - -### Validate - - - -## Next Steps - - diff --git a/docs/docs-content/vertex/supported-platforms.md b/docs/docs-content/vertex/supported-platforms.md deleted file mode 100644 index e157a6f933b..00000000000 --- a/docs/docs-content/vertex/supported-platforms.md +++ /dev/null @@ -1,73 +0,0 @@ ---- -sidebar_label: "Supported Platforms" -title: "Supported Platforms" -description: "Review the supported platforms for deploying Kubernetes clusters with Palette VerteX." -hide_table_of_contents: false -sidebar_position: 20 -tags: ["vertex"] -keywords: ["self-hosted", "vertex"] ---- - -Palette VerteX supports the following infrastructure platforms for deploying Kubernetes clusters: - -| **Platform** | **Additional Information** | -| ------------------ | ------------------------------------------------------------------------- | -| **AWS** | Refer to the [AWS](#aws) section for additional guidance. | -| **AWS Gov** | Refer to the [AWS](#aws) section for additional guidance. 
|
-| **Azure**          | Refer to the [Azure](#azure) section for additional guidance.             |
-| **Azure Gov**      | Refer to the [Azure](#azure) section for additional guidance.             |
-| **Dev Engine**     | Refer to the [Dev Engine](#dev-engine) section for additional guidance.   |
-| **MAAS**           | Canonical Metal-As-A-Service (MAAS) is available and supported in VerteX. |
-| **Edge**           | Edge deployments are supported in VerteX.                                 |
-| **VMware vSphere** | VMware vSphere is supported in VerteX.                                    |
-
-Review the following tables for additional information about the supported platforms.
-
-:::info
-
-For guidance on how to deploy a Kubernetes cluster on a supported platform, refer to the
-[Cluster](../clusters/clusters.md) documentation.
-
-:::
-
-The term _IaaS_ refers to Palette using compute nodes that are not managed by a cloud provider, such as bare metal
-servers or virtual machines.
-
-#### AWS
-
-VerteX supports the following AWS services.
-
-| **Service** | **AWS Gov Support?** |
-| ----------- | -------------------- |
-| **IaaS**    | ✅                   |
-| **EKS**     | ✅                   |
-
-#### Azure
-
-VerteX supports the following Azure services.
-
-| **Service** | **Azure Gov Support?** |
-| ----------- | ---------------------- |
-| **IaaS**    | ✅                     |
-| **AKS**     | ✅                     |
-
-All Azure Government regions are supported with the exception of Department of Defense regions. Refer to the
-[official Azure Government documentation](https://learn.microsoft.com/en-us/azure/azure-government/documentation-government-overview-dod)
-to learn more about the available regions.
-
-#### Dev Engine
-
-VerteX supports the [Dev Engine](../devx/devx.md) platform for deploying virtual clusters. However, the Dev Engine
-platform is not FIPS compliant and requires you to enable the
-[non-FIPS setting](./system-management/enable-non-fips-settings/enable-non-fips-settings.md). Additionally, container
-deployment based workflows are not supported for airgap environments.
-
-#### VMware vSphere
-
-The following versions of VMware vSphere are supported in VerteX.
- -| **Version** | **Supported?** | -| ----------------- | -------------- | -| **vSphere 6.7U3** | ✅ | -| **vSphere 7.0** | ✅ | -| **vSphere 8.0** | ✅ | diff --git a/docs/docs-content/vertex/system-management/account-management/email.md b/docs/docs-content/vertex/system-management/account-management/email.md deleted file mode 100644 index 2c5e4330303..00000000000 --- a/docs/docs-content/vertex/system-management/account-management/email.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -sidebar_label: "Update Email Address" -title: "Update Email Address" -description: "Update and manage the email address of the admin user." -icon: "" -hide_table_of_contents: false -sidebar_position: 30 -tags: ["vertex", "management", "account", "credentials"] -keywords: ["self-hosted", "vertex"] ---- - -You can manage the credentials of the admin user by logging in to the system console. Updating or changing the email -address of the admin user requires the current password. - -Use the following steps to change the email address of the admin user. - -## Prerequisites - -- Access to the Palette VerteX system console. - -- Current password of the admin user. - -- A Simple Mail Transfer Protocol (SMTP) server must be configured in the system console. Refer to - [Configure SMTP](../smtp.md) page for guidance on how to configure an SMTP server. - -## Change Email Address - -1. Log in to the Palette VerteX system console. Refer to - [Access the System Console](../system-management.md#access-the-system-console) guide. - -2. From the **left Main Menu** select **My Account**. - -3. Type the new email address in the **Email** field. - -4. Provide the current password in the **Current Password** field. - -5. Click **Apply** to save the changes. - -## Validate - -1. Log out of the system console. You can log out by clicking the **Logout** button in the bottom right corner of the - **left Main Menu**. - -2. Log in to the system console. 
Refer to [Access the System Console](../system-management.md#access-the-system-console) - guide. - -3. Use the new email address and your current password to log in to the system console. - -A successful login indicates that the email address has been changed successfully. diff --git a/docs/docs-content/vertex/system-management/configure-aws-sts-account.md b/docs/docs-content/vertex/system-management/configure-aws-sts-account.md deleted file mode 100644 index 8ac1ff8a60d..00000000000 --- a/docs/docs-content/vertex/system-management/configure-aws-sts-account.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -sidebar_label: "Enable Adding AWS Accounts Using STS" -title: "Enable Adding AWS Accounts Using STS " -description: "This page teaches you how to allow tenants to add AWS accounts using STS." -icon: "" -hide_table_of_contents: false -sidebar_position: 20 -tags: ["palette", "management", "account", "credentials"] -keywords: ["self-hosted", "vertex"] ---- - - diff --git a/docs/docs-content/vertex/system-management/customize-interface.md b/docs/docs-content/vertex/system-management/customize-interface.md deleted file mode 100644 index 0e354610755..00000000000 --- a/docs/docs-content/vertex/system-management/customize-interface.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -sidebar_label: "Customize Interface" -title: "Customize Interface" -description: "Learn how to customize the branding and interface of Palette VerteX" -icon: "" -hide_table_of_contents: false -sidebar_position: 55 -tags: ["self-hosted", "management", "account", "customize-interface"] -keywords: ["self-hosted", "vertex", "customize-interface"] ---- - - diff --git a/docs/docs-content/vertex/system-management/enable-non-fips-settings/_category_.json b/docs/docs-content/vertex/system-management/enable-non-fips-settings/_category_.json deleted file mode 100644 index ae9ddb024de..00000000000 --- a/docs/docs-content/vertex/system-management/enable-non-fips-settings/_category_.json +++ /dev/null @@ -1,3 +0,0 @@ -{ - 
"position": 50 -} diff --git a/docs/docs-content/vertex/system-management/scar-migration.md b/docs/docs-content/vertex/system-management/scar-migration.md deleted file mode 100644 index 563d4f1c9ea..00000000000 --- a/docs/docs-content/vertex/system-management/scar-migration.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -sidebar_label: "Migrate SCAR to OCI Registry" -title: "Migrate Customer-Managed SCAR to OCI Registry" -description: - "Learn how to migrate the Spectro Cloud Artifact Regisry (SCAR) content to the OCI registry used to host packs and - images." -icon: "" -hide_table_of_contents: false -sidebar_position: 125 -tags: ["vertex", "management", "scar"] -keywords: ["self-hosted", "vertex"] ---- - - - -## Prerequisites - - - -## Migrate SCAR - - - -## Validate - - diff --git a/docs/docs-content/vertex/upgrade/_category_.json b/docs/docs-content/vertex/upgrade/_category_.json deleted file mode 100644 index e1d4231c700..00000000000 --- a/docs/docs-content/vertex/upgrade/_category_.json +++ /dev/null @@ -1,3 +0,0 @@ -{ - "position": 100 -} diff --git a/docs/docs-content/vertex/upgrade/upgrade-k8s/_category_.json b/docs/docs-content/vertex/upgrade/upgrade-k8s/_category_.json deleted file mode 100644 index d6d6332053d..00000000000 --- a/docs/docs-content/vertex/upgrade/upgrade-k8s/_category_.json +++ /dev/null @@ -1,4 +0,0 @@ -{ - "label": "Kubernetes", - "position": 30 -} diff --git a/docs/docs-content/vertex/upgrade/upgrade-notes.md b/docs/docs-content/vertex/upgrade/upgrade-notes.md deleted file mode 100644 index 197513df22f..00000000000 --- a/docs/docs-content/vertex/upgrade/upgrade-notes.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -sidebar_label: "Upgrade Notes" -title: "Upgrade Notes" -description: "Learn how to upgrade self-hosted Palette instances." 
-icon: "" -sidebar_position: 0 -tags: ["vertex", "self-hosted", "airgap", "kubernetes", "upgrade"] -keywords: ["vertex", "enterprise", "airgap", "kubernetes"] ---- - -This page offers version-specific reference to help you prepare for upgrading self-hosted Vertex instances. - -## Upgrade VerteX 4.3.x to 4.4.x - - -Prior to upgrading VMware vSphere VerteX installations from version 4.3.x to 4.4.x, complete the -steps outlined in the -[Mongo DNS ConfigMap Issue](../../troubleshooting/palette-upgrade.md#mongo-dns-configmap-value-is-incorrect) guide. -Addressing this Mongo DNS issue will prevent system pods from experiencing _CrashLoopBackOff_ errors after the upgrade. - -After the upgrade, if Enterprise Cluster backups are stuck, refer to the -[Enterprise Backup Stuck](../../troubleshooting/enterprise-install.md#scenario---enterprise-backup-stuck) -troubleshooting guide for resolution steps. - -## Upgrade with VMware - -A known issue impacts all self-hosted Palette instances older then 4.4.14. Before upgrading an Palette instance with -version older than 4.4.14, ensure that you execute a utility script to make all your cluster IDs unique in your -Persistent Volume Claim (PVC) metadata. For more information, refer to the -[Troubleshooting Guide](../../troubleshooting/enterprise-install.md#scenario---non-unique-vsphere-cns-mapping). 
diff --git a/docs/docs-content/vertex/upgrade/upgrade-vmware/_category_.json b/docs/docs-content/vertex/upgrade/upgrade-vmware/_category_.json
deleted file mode 100644
index 11b11b09b25..00000000000
--- a/docs/docs-content/vertex/upgrade/upgrade-vmware/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
-  "label": "VMware",
-  "position": 0
-}
diff --git a/docs/docs-content/vertex/vertex.md b/docs/docs-content/vertex/vertex.md
deleted file mode 100644
index f476012db6d..00000000000
--- a/docs/docs-content/vertex/vertex.md
+++ /dev/null
@@ -1,61 +0,0 @@
----
-sidebar_label: "Palette VerteX"
-title: "Palette VerteX"
-description: "Learn how Palette VerteX enables regulated industries to meet stringent security requirements."
-hide_table_of_contents: false
-sidebar_custom_props:
-  icon: "shield"
-tags: ["vertex"]
-keywords: ["self-hosted", "vertex"]
----
-
-Palette VerteX offers regulated industries, such as government and public sector organizations that handle sensitive and
-classified information, simplicity, security, and scale in production Kubernetes. VerteX is available as a self-hosted
-platform offering that you can install in your data centers or public cloud providers to manage Kubernetes clusters.
-
-## FIPS-Compliant
-
-Palette VerteX integrates validated Federal Information Processing Standards (FIPS) 140-3 cryptographic modules in
-Kubernetes clusters it deploys to ensure robust data protection for your organization’s infrastructure and applications.
-
-To learn more about our FIPS 140-3 certification, review
-[Spectro Cloud Cryptographic Module](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/5061).
-FIPS modules, which are accessible in our private artifact repository, extend Palette’s existing security features that
-include security scans, powerful RBAC, and tamper-proof edge device images.
Palette VerteX protects sensitive data in -clusters across edge, bare metal, on-prem data centers, air-gapped environments, and cloud. - -To learn more about FIPS in Palette VerteX, check out the [FIPS](fips/fips.md) section. - -## Supported Platforms - -To learn more about infrastructure platforms supported by Palette VerteX, refer to the -[Supported Platforms](supported-platforms.md) section. - -## Access Palette VerteX - -To set up a Palette VerteX account, contact our support team by sending an email to support@spectrocloud.com. Include -the following information in your email: - -- Your full name -- Organization name (if applicable) -- Email address -- Phone number (optional) -- Target Platform (VMware or Kubernetes) -- A brief description of your intended use of VerteX - -Our dedicated support team will promptly get in touch with you to provide the necessary assistance and share the -installer image, credentials, and an endpoint URL to access the FIPS registry. - -## Resources - -- [FIPS](fips/fips.md) - -- [Installation](install-palette-vertex/install-palette-vertex.md) - -- [Supported Platforms](supported-platforms.md) - -- [System Management](system-management/system-management.md) - -- [Upgrade Notes](upgrade/upgrade.md) - -- [Enterprise Install Troubleshooting](../troubleshooting/enterprise-install.md) diff --git a/docs/docs-content/vm-management/configure-private-ca-certificate.md b/docs/docs-content/vm-management/configure-private-ca-certificate.md index 7eb22dbb772..580cff586b4 100644 --- a/docs/docs-content/vm-management/configure-private-ca-certificate.md +++ b/docs/docs-content/vm-management/configure-private-ca-certificate.md @@ -14,8 +14,9 @@ to ensure that VMO can securely communicate with your self-hosted Palette or Pal ## Prerequisites -- A self-hosted Palette installation. Refer to the [Self-Hosted Palette](../enterprise-version/enterprise-version.md) or - [Palette VerteX](../vertex/vertex.md) guides for installation instructions. 
+- A self-hosted Palette installation. Refer to the appropriate + [self-hosted Palette](../self-hosted-setup/palette/palette.md) or + [Palette VerteX](../self-hosted-setup/vertex/vertex.md) guide for installation instructions. - A workload cluster with VMO installed and configured. Refer to the [VMO](./vm-management.md) guide for details. diff --git a/docs/docs-content/vm-management/install-vmo-in-airgap.md b/docs/docs-content/vm-management/install-vmo-in-airgap.md index 6c563d4441d..bc2fd76b943 100644 --- a/docs/docs-content/vm-management/install-vmo-in-airgap.md +++ b/docs/docs-content/vm-management/install-vmo-in-airgap.md @@ -13,9 +13,8 @@ instance of Palette and Palette VerteX. ## Prerequisites -- An existing airgap instance of Palette or Palette VerteX. Refer to the - [Self-Hosted Palette Installation](../enterprise-version/install-palette/install-palette.md) and - [Palette VerteX Installation](../vertex/install-palette-vertex/install-palette-vertex.md) guides for more information. +- An existing self-hosted, airgapped instance of [Palette](../self-hosted-setup/palette/palette.md) or + [Palette VerteX](../self-hosted-setup/vertex/vertex.md). :::info @@ -25,8 +24,9 @@ instance of Palette and Palette VerteX. ::: -- At least one tenant created for your airgap instance of Palette or Palette VerteX. Refer to - [Tenant Management](../enterprise-version/system-management/tenant-management.md) for more information. +- At least one tenant created for your airgap instance of Palette or Palette VerteX. Refer to the appropriate + [Tenant Management guide for self-hosted Palette](../self-hosted-setup/palette/system-management/tenant-management.md) + or [Palette VerteX](../self-hosted-setup/vertex/system-management/tenant-management.md) for more information. - Access to the Palette airgap support Virtual Machine (VM) that you used for the initial Palette installation. 
diff --git a/redirects.js b/redirects.js index f5335525650..79f98f11f68 100644 --- a/redirects.js +++ b/redirects.js @@ -320,50 +320,6 @@ let redirects = [ from: `/devx/app-profile/services/service-listings/cockroach-db/`, to: `/devx/services/service-listings/cockroach-db/`, }, - { - from: `/enterprise-version/on-prem-system-requirements/`, - to: `/enterprise-version/install-palette/`, - }, - { - from: `/enterprise-version/deploying-the-platform-installer/`, - to: `/enterprise-version/install-palette/`, - }, - { - from: `/enterprise-version/deploying-an-enterprise-cluster/`, - to: `/enterprise-version/install-palette/`, - }, - { - from: `/enterprise-version/deploying-palette-with-helm/`, - to: `/enterprise-version/install-palette/install-on-kubernetes/install/`, - }, - { - from: `/enterprise-version/helm-chart-install-reference/`, - to: `/enterprise-version/install-palette/install-on-kubernetes/palette-helm-ref/`, - }, - { - from: `/enterprise-version/system-console-dashboard/`, - to: `/enterprise-version/system-management/`, - }, - { - from: `/enterprise-version/enterprise-cluster-management/`, - to: `/enterprise-version/system-management/`, - }, - { - from: `/enterprise-version/monitoring/`, - to: `/enterprise-version/system-management/`, - }, - { - from: `/enterprise-version/air-gap-repo/`, - to: `/enterprise-version/install-palette/`, - }, - { - from: `/enterprise-version/reverse-proxy/`, - to: `/enterprise-version/system-management/reverse-proxy/`, - }, - { - from: `/enterprise-version/ssl-certificate-management/`, - to: `/enterprise-version/system-management/ssl-certificate-management/`, - }, { from: `/clusters/cluster-management/palette-lock-cluster/`, to: `/clusters/cluster-management/platform-settings/`, @@ -425,31 +381,6 @@ let redirects = [ from: "/projects/", to: "/tenant-settings/projects/", }, - { - from: "/enterprise-version/install-palette/airgap/checklist/", - to: "/enterprise-version/install-palette/airgap/", - }, - { - from: 
"/enterprise-version/install-palette/airgap/kubernetes-airgap-instructions/", - to: "/enterprise-version/install-palette/install-on-kubernetes/airgap-install/kubernetes-airgap-instructions/", - }, - { - from: "/enterprise-version/install-palette/airgap/vmware-vsphere-airgap-instructions/", - to: "/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions/", - }, - { - from: "/vertex/install-palette-vertex/airgap/kubernetes-airgap-instructions/", - to: "/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/kubernetes-airgap-instructions/", - }, - { - from: "/vertex/install-palette-vertex/airgap/vmware-vsphere-airgap-instructions/", - to: "/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions/", - }, - { - from: "/vertex/install-palette-vertex/airgap/checklist/", - to: "/vertex/install-palette-vertex/airgap/", - }, - { from: "/terraform/", to: "/automation/terraform/", @@ -606,14 +537,6 @@ let redirects = [ from: "/automation/palette-cli/commands/validator/", to: "/automation/palette-cli/commands/ec/", }, - { - from: "/enterprise-version/install-palette/install-on-vmware/airgap-install/vmware-vsphere-airgap-instructions/", - to: "/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions/", - }, - { - from: "/vertex/install-palette-vertex/install-on-vmware/airgap-install/vmware-vsphere-airgap-instructions/", - to: "/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions/", - }, { from: "/legal-licenses/oss-licenses/", to: "/legal-licenses/oss-licenses-index/", @@ -966,6 +889,409 @@ let redirects = [ from: `/clusters/public-cloud/azure/azure-disk-encryption/`, to: `/clusters/public-cloud/azure/azure-disk-storage-sse/`, }, + // Self-hosted Palette/VerteX redirects for sidebar refactor + { + from: [ + 
"/enterprise-version/", + "/enterprise-version/install-palette/", + "/enterprise-version/on-prem-system-requirements/", + "/enterprise-version/deploying-the-platform-installer/", + "/enterprise-version/deploying-an-enterprise-cluster/", + "/enterprise-version/air-gap-repo/", + ], + to: "/self-hosted-setup/palette/", + }, + { + from: "/enterprise-version/install-palette/install-on-kubernetes/", + to: "/self-hosted-setup/palette/supported-environments/kubernetes", + }, + { + from: [ + "/enterprise-version/install-palette/install-on-kubernetes/airgap-install/", + "/enterprise-version/install-palette/install-on-kubernetes/airgap-install/checklist/", + "/enterprise-version/install-palette/airgap/kubernetes-airgap-instructions/", + "/enterprise-version/install-palette/airgap/checklist/", + "/enterprise-version/install-palette/airgap/", + ], + to: "/self-hosted-setup/palette/supported-environments/kubernetes/setup/airgap/", + }, + { + from: "/enterprise-version/install-palette/install-on-kubernetes/airgap-install/kubernetes-airgap-instructions/", + to: "/self-hosted-setup/palette/supported-environments/kubernetes/setup/airgap/", + }, + { + from: [ + "/enterprise-version/helm-chart-install-reference/", + "/enterprise-version/install-palette/install-on-kubernetes/palette-helm-ref/", + ], + to: "/self-hosted-setup/palette/supported-environments/kubernetes/setup/airgap/helm-reference/", + }, + { + from: [ + "/enterprise-version/deploying-palette-with-helm/", + "/enterprise-version/install-palette/install-on-kubernetes/install/", + ], + to: "/self-hosted-setup/palette/supported-environments/kubernetes/install/non-airgap/", + }, + { + from: "/enterprise-version/install-palette/install-on-kubernetes/airgap-install/install/", + to: "/self-hosted-setup/palette/supported-environments/kubernetes/install/airgap", + }, + { + from: "/enterprise-version/activate-installation/", + to: "/self-hosted-setup/palette/supported-environments/kubernetes/activate/", + }, + { + from: 
"/enterprise-version/upgrade/", + to: "/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/", + }, + { + from: "/enterprise-version/upgrade/upgrade-k8s/non-airgap/", + to: "/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/non-airgap", + }, + { + from: "/enterprise-version/upgrade/upgrade-k8s/airgap/", + to: "/self-hosted-setup/palette/supported-environments/kubernetes/upgrade/airgap", + }, + { + from: "/enterprise-version/install-palette/install-on-kubernetes/uninstall/", + to: "/self-hosted-setup/palette/supported-environments/kubernetes/uninstall/", + }, + { + from: "/enterprise-version/install-palette/install-on-vmware/", + to: "/self-hosted-setup/palette/supported-environments/vmware/", + }, + { + from: "/enterprise-version/install-palette/install-on-vmware/vmware-system-requirements/", + to: "/self-hosted-setup/palette/supported-environments/vmware/setup/non-airgap/vmware-system-requirements/", + }, + { + from: [ + "/enterprise-version/install-palette/install-on-vmware/airgap-install/", + "/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/", + "/enterprise-version/install-palette/install-on-vmware/airgap-install/checklist/", + ], + to: "/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/", + }, + { + from: [ + "/enterprise-version/install-palette/airgap/vmware-vsphere-airgap-instructions/", + "/enterprise-version/install-palette/install-on-vmware/airgap-install/vmware-vsphere-airgap-instructions/", + "/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions/", + ], + to: "/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/ova/", + }, + { + from: "/enterprise-version/install-palette/install-on-vmware/airgap-install/environment-setup/env-setup-vm/", + to: "/self-hosted-setup/palette/supported-environments/vmware/setup/airgap/rhel-vm/", + }, + { + from: 
"/enterprise-version/install-palette/install-on-vmware/install/", + to: "/self-hosted-setup/palette/supported-environments/vmware/install/non-airgap/", + }, + { + from: "/enterprise-version/install-palette/install-on-vmware/airgap-install/install/", + to: "/self-hosted-setup/palette/supported-environments/vmware/install/airgap", + }, + { + from: "/enterprise-version/upgrade/upgrade-vmware/non-airgap/", + to: "/self-hosted-setup/palette/supported-environments/vmware/upgrade/non-airgap/", + }, + { + from: "/enterprise-version/upgrade/upgrade-vmware/airgap/", + to: "/self-hosted-setup/palette/supported-environments/vmware/upgrade/airgap/", + }, + { + from: "/enterprise-version/install-palette/palette-management-appliance/", + to: "/self-hosted-setup/palette/supported-environments/management-appliance/", + }, + { + from: "/enterprise-version/upgrade/palette-management-appliance/", + to: "/self-hosted-setup/palette/supported-environments/management-appliance/upgrade/", + }, + { + from: "/enterprise-version/upgrade/upgrade-notes/", + to: "/self-hosted-setup/palette/supported-environments/vmware/upgrade/", + }, + { + from: [ + "/enterprise-version/system-management/", + "/enterprise-version/system-console-dashboard/", + "/enterprise-version/enterprise-cluster-management/", + "/enterprise-version/monitoring/", + ], + to: "/self-hosted-setup/palette/system-management/", + }, + { + from: "/enterprise-version/system-management/account-management/", + to: "/self-hosted-setup/palette/system-management/account-management/", + }, + { + from: "/enterprise-version/system-management/account-management/manage-system-accounts/", + to: "/self-hosted-setup/palette/system-management/account-management/manage-system-accounts/", + }, + { + from: [ + "/enterprise-version/system-management/account-management/credentials/", + "/enterprise-version/system-management/account-management/email/", + ], + to: "/self-hosted-setup/palette/system-management/account-management/credentials/", + }, + { + 
+      from: "/enterprise-version/system-management/account-management/password-blocklist/",
+      to: "/self-hosted-setup/palette/system-management/account-management/password-blocklist/",
+    },
+    {
+      from: "/enterprise-version/system-management/backup-restore/",
+      to: "/self-hosted-setup/palette/system-management/backup-restore/",
+    },
+    {
+      from: "/enterprise-version/system-management/login-banner/",
+      to: "/self-hosted-setup/palette/system-management/login-banner/",
+    },
+    {
+      from: "/enterprise-version/system-management/change-cloud-config/",
+      to: "/self-hosted-setup/palette/system-management/change-cloud-config/",
+    },
+    {
+      from: "/enterprise-version/system-management/registry-override/",
+      to: "/self-hosted-setup/palette/system-management/registry-override/",
+    },
+    {
+      from: "/enterprise-version/system-management/feature-flags/",
+      to: "/self-hosted-setup/palette/system-management/feature-flags/",
+    },
+    {
+      from: "/enterprise-version/system-management/customize-interface/",
+      to: "/self-hosted-setup/palette/system-management/customize-interface/",
+    },
+    {
+      from: ["/enterprise-version/system-management/reverse-proxy/", "/enterprise-version/reverse-proxy/"],
+      to: "/self-hosted-setup/palette/system-management/reverse-proxy/",
+    },
+    {
+      from: "/enterprise-version/system-management/scar-migration/",
+      to: "/self-hosted-setup/palette/system-management/scar-migration/",
+    },
+    {
+      from: "/enterprise-version/system-management/smtp/",
+      to: "/self-hosted-setup/palette/system-management/smtp/",
+    },
+    {
+      from: [
+        "/enterprise-version/system-management/ssl-certificate-management/",
+        "/enterprise-version/ssl-certificate-management/",
+      ],
+      to: "/self-hosted-setup/palette/system-management/ssl-certificate-management/",
+    },
+    {
+      from: "/enterprise-version/system-management/add-registry/",
+      to: "/self-hosted-setup/palette/system-management/add-registry/",
+    },
+    {
+      from: "/enterprise-version/system-management/tenant-management/",
+      to: "/self-hosted-setup/palette/system-management/tenant-management/",
+    },
+    {
+      from: ["/vertex/", "/vertex/supported-platforms/", "/vertex/install-palette-vertex/", "/vertex/upgrade/"],
+      to: "/self-hosted-setup/vertex/",
+    },
+    {
+      from: ["/vertex/fips/", "/vertex/fips/fips-status-icons/", "/vertex/fips/fips-compliant-components/"],
+      to: "/self-hosted-setup/vertex/fips/",
+    },
+    {
+      from: "/vertex/install-palette-vertex/install-on-kubernetes/",
+      to: "/self-hosted-setup/vertex/supported-environments/kubernetes/",
+    },
+    {
+      from: "/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/",
+      to: "/self-hosted-setup/vertex/supported-environments/kubernetes/setup/airgap/",
+    },
+    {
+      from: [
+        "/vertex/install-palette-vertex/airgap/kubernetes-airgap-instructions/",
+        "/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/kubernetes-airgap-instructions/",
+        "/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/checklist/",
+      ],
+      to: "/self-hosted-setup/vertex/supported-environments/kubernetes/setup/airgap/",
+    },
+    {
+      from: "/vertex/install-palette-vertex/install-on-kubernetes/vertex-helm-ref/",
+      to: "/self-hosted-setup/vertex/supported-environments/kubernetes/setup/airgap/helm-reference/",
+    },
+    {
+      from: "/vertex/install-palette-vertex/install-on-kubernetes/install/",
+      to: "/self-hosted-setup/vertex/supported-environments/kubernetes/install/non-airgap/",
+    },
+    {
+      from: "/vertex/install-palette-vertex/install-on-kubernetes/airgap-install/install/",
+      to: "/self-hosted-setup/vertex/supported-environments/kubernetes/install/airgap/",
+    },
+    {
+      from: "/vertex/activate-installation/",
+      to: "/self-hosted-setup/vertex/supported-environments/kubernetes/activate/",
+    },
+    {
+      from: "/vertex/upgrade/upgrade-k8s/non-airgap/",
+      to: "/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/non-airgap/",
+    },
+    {
+      from: "/vertex/upgrade/upgrade-k8s/airgap/",
+      to: "/self-hosted-setup/vertex/supported-environments/kubernetes/upgrade/airgap/",
+    },
+    {
+      from: "/vertex/install-palette-vertex/install-on-kubernetes/uninstall/",
+      to: "/self-hosted-setup/vertex/supported-environments/kubernetes/uninstall/",
+    },
+    {
+      from: "/vertex/install-palette-vertex/install-on-vmware/",
+      to: "/self-hosted-setup/vertex/supported-environments/vmware/",
+    },
+    {
+      from: "/vertex/install-palette-vertex/install-on-vmware/vmware-system-requirements/",
+      to: "/self-hosted-setup/vertex/supported-environments/vmware/setup/non-airgap/vmware-system-requirements/",
+    },
+    {
+      from: "/vertex/install-palette-vertex/install-on-vmware/install/",
+      to: "/self-hosted-setup/vertex/supported-environments/vmware/install/non-airgap/",
+    },
+    {
+      from: [
+        "/vertex/install-palette-vertex/install-on-vmware/airgap-install/",
+        "/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/",
+        "/vertex/install-palette-vertex/install-on-vmware/airgap-install/checklist/",
+        "/vertex/install-palette-vertex/airgap/",
+        "/vertex/install-palette-vertex/airgap/checklist/",
+      ],
+      to: "/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/",
+    },
+    {
+      from: [
+        "/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/vmware-vsphere-airgap-instructions/",
+        "/vertex/install-palette-vertex/install-on-vmware/airgap-install/vmware-vsphere-airgap-instructions/",
+        "/vertex/install-palette-vertex/airgap/vmware-vsphere-airgap-instructions/",
+      ],
+      to: "/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/ova/",
+    },
+    {
+      from: "/vertex/install-palette-vertex/install-on-vmware/airgap-install/environment-setup/env-setup-vm-vertex/",
+      to: "/self-hosted-setup/vertex/supported-environments/vmware/setup/airgap/rhel-vm/",
+    },
+    {
+      from: "/vertex/install-palette-vertex/install-on-vmware/airgap-install/install/",
+      to: "/self-hosted-setup/vertex/supported-environments/vmware/install/airgap/",
+    },
+    {
+      from: "/vertex/upgrade/upgrade-notes/",
+      to: "/self-hosted-setup/vertex/supported-environments/vmware/upgrade/",
+    },
+    {
+      from: "/vertex/upgrade/upgrade-vmware/non-airgap/",
+      to: "/self-hosted-setup/vertex/supported-environments/vmware/upgrade/non-airgap/",
+    },
+    {
+      from: "/vertex/upgrade/upgrade-vmware/airgap/",
+      to: "/self-hosted-setup/vertex/supported-environments/vmware/upgrade/airgap/",
+    },
+    {
+      from: "/vertex/install-palette-vertex/vertex-management-appliance/",
+      to: "/self-hosted-setup/vertex/supported-environments/management-appliance/",
+    },
+    {
+      from: "/vertex/upgrade/vertex-management-appliance/",
+      to: "/self-hosted-setup/vertex/supported-environments/management-appliance/upgrade/",
+    },
+    {
+      from: [
+        "/vertex/system-management/configure-aws-sts-account/",
+        "/enterprise-version/system-management/configure-aws-sts-account/",
+      ],
+      to: "/clusters/public-cloud/aws/add-aws-accounts/configure-aws-sts-account/",
+    },
+    {
+      from: "/vertex/system-management/",
+      to: "/self-hosted-setup/vertex/system-management/",
+    },
+    {
+      from: "/vertex/system-management/account-management/",
+      to: "/self-hosted-setup/vertex/system-management/account-management/",
+    },
+    {
+      from: "/vertex/system-management/account-management/manage-system-accounts/",
+      to: "/self-hosted-setup/vertex/system-management/account-management/manage-system-accounts/",
+    },
+    {
+      from: [
+        "/vertex/system-management/account-management/credentials/",
+        "/vertex/system-management/account-management/email/",
+      ],
+      to: "/self-hosted-setup/vertex/system-management/account-management/credentials/",
+    },
+    {
+      from: "/vertex/system-management/account-management/password-blocklist/",
+      to: "/self-hosted-setup/vertex/system-management/account-management/password-blocklist/",
+    },
+    {
+      from: "/vertex/system-management/login-banner/",
+      to: "/self-hosted-setup/vertex/system-management/login-banner/",
+    },
+    {
+      from: "/vertex/system-management/change-cloud-config/",
+      to: "/self-hosted-setup/vertex/system-management/change-cloud-config/",
+    },
+    {
+      from: "/vertex/system-management/feature-flags/",
+      to: "/self-hosted-setup/vertex/system-management/feature-flags/",
+    },
+    {
+      from: "/vertex/system-management/registry-override/",
+      to: "/self-hosted-setup/vertex/system-management/registry-override/",
+    },
+    {
+      from: "/vertex/system-management/customize-interface/",
+      to: "/self-hosted-setup/vertex/system-management/customize-interface/",
+    },
+    {
+      from: "/vertex/system-management/enable-non-fips-settings/",
+      to: "/self-hosted-setup/vertex/system-management/enable-non-fips-settings/",
+    },
+    {
+      from: "/vertex/system-management/enable-non-fips-settings/allow-cluster-import/",
+      to: "/self-hosted-setup/vertex/system-management/enable-non-fips-settings/allow-cluster-import/",
+    },
+    {
+      from: "/vertex/system-management/enable-non-fips-settings/use-non-fips-addon-packs/",
+      to: "/self-hosted-setup/vertex/system-management/enable-non-fips-settings/use-non-fips-addon-packs/",
+    },
+    {
+      from: "/vertex/system-management/enable-non-fips-settings/use-non-fips-features/",
+      to: "/self-hosted-setup/vertex/system-management/enable-non-fips-settings/use-non-fips-features/",
+    },
+    {
+      from: "/vertex/system-management/reverse-proxy/",
+      to: "/self-hosted-setup/vertex/system-management/reverse-proxy/",
+    },
+    {
+      from: "/vertex/system-management/scar-migration/",
+      to: "/self-hosted-setup/vertex/system-management/scar-migration/",
+    },
+    {
+      from: "/vertex/system-management/smtp/",
+      to: "/self-hosted-setup/vertex/system-management/smtp/",
+    },
+    {
+      from: "/vertex/system-management/ssl-certificate-management/",
+      to: "/self-hosted-setup/vertex/system-management/ssl-certificate-management/",
+    },
+    {
+      from: "/vertex/system-management/add-registry/",
+      to: "/self-hosted-setup/vertex/system-management/add-registry/",
+    },
+    {
+      from: "/vertex/system-management/tenant-management/",
+      to: "/self-hosted-setup/vertex/system-management/tenant-management/",
+    },
  ];

  if (packRedirects.length > 0) {
diff --git a/src/components/PaletteVertexUrlMapper/PaletteVertexUrlMapper.tsx b/src/components/PaletteVertexUrlMapper/PaletteVertexUrlMapper.tsx
index 6c589d40441..408928282a9 100644
--- a/src/components/PaletteVertexUrlMapper/PaletteVertexUrlMapper.tsx
+++ b/src/components/PaletteVertexUrlMapper/PaletteVertexUrlMapper.tsx
@@ -4,24 +4,40 @@ import VersionedLink from "../VersionedLink/VersionedLink";
 // This component is used to generate the correct URL for the palette and vertex versions of the documentation.
 // It takes the edition, text, and URL as props and returns the correct URL based on the edition.
 // If the vertex and palette pages have different URLs, the component takes palettePath and vertexPath as individual props and returns the correct URL.
+// For installation-specific content, the install prop can be used to specify 'kubernetes', 'vmware', or 'management-appliance'.
 
 interface ComponentProperties {
   [key: string]: string;
 }
 
 export default function PaletteVertexUrlMapper(props: ComponentProperties) {
-  const { edition, text, url, palettePath, vertexPath } = props;
+  const { edition, text, url, palettePath, vertexPath, install } = props;
   const normalizedEdition = edition?.toLowerCase();
+  const normalizedInstall = install?.toLowerCase();
 
   if (normalizedEdition !== "palette" && normalizedEdition !== "vertex") {
     throw new Error("Invalid edition. Please provide either 'palette' or 'vertex'.");
   }
 
+  if (normalizedInstall && !["kubernetes", "vmware", "management-appliance"].includes(normalizedInstall)) {
+    throw new Error("Invalid install method. Please provide 'kubernetes', 'vmware', or 'management-appliance'.");
+  }
+
   const isPalette = normalizedEdition === "palette";
-  const baseUrl = isPalette ? "/enterprise-version" : "/vertex";
-  const mappedUrl =
-    palettePath && vertexPath ? `${baseUrl}${isPalette ? palettePath : vertexPath}` : `${baseUrl}${url}`;
+  // If using custom paths, return them directly without prepending baseUrl
+  if (palettePath && vertexPath) {
+    const mappedUrl = isPalette ? palettePath : vertexPath;
+    return <VersionedLink text={text} url={mappedUrl} />;
+  }
+
+  // Construct base URL with optional installation method
+  let baseUrl = `/self-hosted-setup/${isPalette ? "palette" : "vertex"}`;
+  if (normalizedInstall) {
+    baseUrl += `/supported-environments/${normalizedInstall}`;
+  }
+
+  const mappedUrl = `${baseUrl}${url}`;
 
   return <VersionedLink text={text} url={mappedUrl} />;
 }
diff --git a/static/llms.txt b/static/llms.txt
index 73b8964cea5..e24e32af556 100644
--- a/static/llms.txt
+++ b/static/llms.txt
@@ -12,9 +12,9 @@ specific needs, with granular governance and enterprise-grade security.
 - [Getting Started](https://docs.spectrocloud.com/tutorials/getting-started): Learn how to get started with Spectro Cloud Palette and begin leveraging its Kubernetes full-stack management at scale. Palette's unique capabilities provide end-to-end declarative cluster management, cluster monitoring and reconciliation, as well as enterprise-grade security.
 - [Welcome to Palette Tutorials](https://docs.spectrocloud.com/tutorials/): This section provides hands-on tutorials you can complete in your environment to learn more about Palette. Here, you will find tutorials covering the aspects of Palette you need to become a proficient user, as well as advanced topics that require more time and attention to comprehend. These tutorials will enable you to maximize Palette's ability to manage Kubernetes at scale.
 - [Downloads](https://docs.spectrocloud.com/downloads/): Explore our downloads section to discover the latest and specific versions of support tools and utilities for Palette.
-- [Artifact Studio](https://docs.spectrocloud.com/downloads/artifact-studio/): The Spectro Cloud [Artifact Studio](https://artifact-studio.spectrocloud.com/) is a unified platform that helps airgapped, regulatory-focused, and security-conscious organizations populate their registries with bundles, packs, and installers to be used with self-hosted [Palette](https://docs.spectrocloud.com/enterprise-version/) or [Palette VerteX](https://docs.spectrocloud.com/vertex/). It provides a single location for packs and images, streamlining access and management.
+- [Artifact Studio](https://docs.spectrocloud.com/downloads/artifact-studio/): The Spectro Cloud [Artifact Studio](https://artifact-studio.spectrocloud.com/) is a unified platform that helps airgapped, regulatory-focused, and security-conscious organizations populate their registries with bundles, packs, and installers to be used with self-hosted [Palette](https://docs.spectrocloud.com/self-hosted-setup/palette/) or [Palette VerteX](https://docs.spectrocloud.com/self-hosted-setup/vertex/). It provides a single location for packs and images, streamlining access and management.
 - [Cluster Profiles](https://docs.spectrocloud.com/profiles/cluster-profiles/): Cluster profiles are composed of layers using packs, Helm charts, and custom manifests to meet specific types of workloads on your Palette cluster deployments. You can create as many profiles as needed for your workload cluster deployments.
-- [Self-Hosted Palette](https://docs.spectrocloud.com/enterprise-version/): Palette is available as a self-hosted platform offering. You can install the self-hosted version of Palette on your data center, public cloud providers, or Edge devices to manage Kubernetes clusters.
+- [Self-Hosted Palette](https://docs.spectrocloud.com/self-hosted-setup/palette/): Palette is available as a self-hosted platform offering. You can install the self-hosted version of Palette on your data center, public cloud providers, or Edge devices to manage Kubernetes clusters.
 - [Deployment Architecture Overview](https://docs.spectrocloud.com/architecture/architecture-overview/): Palette is available in three flexible deployment models: multi-tenant SaaS, dedicated SaaS, and self-hosted.
 - [Deployment Modes](https://docs.spectrocloud.com/deployment-modes/): Palette provides two different modes for deploying and managing applications. The first mode is Cluster Mode; this mode enables you to create, deploy, and manage Kubernetes clusters and applications. The second mode is App Mode, a mode optimized for a simpler and streamlined developer experience that allows you to only focus on the building, maintenance, testing, deployment, and monitoring of your applications.
 - [Palette VerteX](https://docs.spectrocloud.com/vertex/): Palette VerteX offers regulated industries, such as government and public sector organizations that handle sensitive and classified information, simplicity, security, and scale in production Kubernetes. VerteX is available as a self-hosted platform offering that you can install on your data centers, public cloud providers, or Edge devices to manage Kubernetes clusters.
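The URL construction introduced in the `PaletteVertexUrlMapper` diff can be sketched as a plain function. This is a simplified, illustrative sketch only: the real component renders a `VersionedLink` element, while the hypothetical `mapUrl` helper below just returns the mapped URL string so the branching logic is easy to follow.

```typescript
// Illustrative sketch of the URL mapping in PaletteVertexUrlMapper.
// mapUrl is a hypothetical helper, not part of the repository.
type MapperProps = {
  edition: string;
  url?: string;
  palettePath?: string;
  vertexPath?: string;
  install?: string;
};

function mapUrl({ edition, url, palettePath, vertexPath, install }: MapperProps): string {
  const normalizedEdition = edition?.toLowerCase();
  const normalizedInstall = install?.toLowerCase();

  // Edition is required and case-insensitive, matching the component's validation.
  if (normalizedEdition !== "palette" && normalizedEdition !== "vertex") {
    throw new Error("Invalid edition. Please provide either 'palette' or 'vertex'.");
  }
  if (normalizedInstall && !["kubernetes", "vmware", "management-appliance"].includes(normalizedInstall)) {
    throw new Error("Invalid install method. Please provide 'kubernetes', 'vmware', or 'management-appliance'.");
  }

  const isPalette = normalizedEdition === "palette";

  // Custom per-edition paths bypass the base URL entirely.
  if (palettePath && vertexPath) {
    return isPalette ? palettePath : vertexPath;
  }

  // Otherwise build the base URL, optionally scoped to an installation method.
  let baseUrl = `/self-hosted-setup/${isPalette ? "palette" : "vertex"}`;
  if (normalizedInstall) {
    baseUrl += `/supported-environments/${normalizedInstall}`;
  }
  return `${baseUrl}${url ?? ""}`;
}

console.log(mapUrl({ edition: "Vertex", install: "kubernetes", url: "/install/airgap/" }));
// → "/self-hosted-setup/vertex/supported-environments/kubernetes/install/airgap/"
console.log(mapUrl({ edition: "palette", url: "/system-management/smtp/" }));
// → "/self-hosted-setup/palette/system-management/smtp/"
```

Note how an empty `url` resolves to the bare `/self-hosted-setup/palette` or `/self-hosted-setup/vertex` base path, which is the behavior the README describes for `url=""`.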