From bc847e7dee0ac6e87ef3a5f023b9ebc8d00a35ac Mon Sep 17 00:00:00 2001
From: Harshini Rangaswamy
Date: Wed, 10 Dec 2025 10:48:50 +0100
Subject: [PATCH 1/9] feat: Inkless doc updates

---
 docs/products/kafka/concepts/inkless-aku.md   |  62 +++
 .../kafka/concepts/inkless-billing.md         |  42 +++
 docs/products/kafka/concepts/inkless.md       |  37 ++
 .../kafka/create-kafka-service copy.md        | 318 ++++++++++++++++
 docs/products/kafka/create-kafka-service.md   | 352 +++++++++---------
 sidebars.ts                                   |  40 +-
 6 files changed, 653 insertions(+), 198 deletions(-)
 create mode 100644 docs/products/kafka/concepts/inkless-aku.md
 create mode 100644 docs/products/kafka/concepts/inkless-billing.md
 create mode 100644 docs/products/kafka/concepts/inkless.md
 create mode 100644 docs/products/kafka/create-kafka-service copy.md

diff --git a/docs/products/kafka/concepts/inkless-aku.md b/docs/products/kafka/concepts/inkless-aku.md
new file mode 100644
index 000000000..2a9ec8211
--- /dev/null
+++ b/docs/products/kafka/concepts/inkless-aku.md
@@ -0,0 +1,62 @@
+---
+title: AKU plans and scaling
+---
+
+Inkless uses Aiven Kafka Units (AKUs) to size Apache Kafka services by throughput instead of hardware resources.
+An AKU represents the amount of traffic a service can handle. You select an initial AKU
+level when creating the service and define how far the service can scale.
+
+## How AKUs work
+
+- Each AKU corresponds to a specific throughput capacity.
+- You set the initial AKU level by choosing the expected throughput during service
+  creation.
+- The service monitors throughput over time, not momentary spikes.
+- When throughput reaches the threshold for the current AKU level, the service scales up
+  within your configured limits.
+- When throughput stays low, the service scales down.
+
+Scaling changes the number of AKUs in use, which affects AKU-hour billing. Scaling
+actions do not affect topic configuration or data retention.
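The scaling rules above can be sketched numerically. The following is an illustration only: the per-AKU capacity, the current AKU level, and the measured throughput are hypothetical values chosen for the example, not actual Aiven figures.

```bash
# Illustrative sketch of the AKU autoscaling decision.
# ASSUMPTION: each AKU handles 20 MB/s of combined ingress and egress;
# the real per-AKU capacity is defined by Aiven.
aku_capacity_mbps=20
min_akus=1          # configured minimum AKUs
max_akus=4          # configured maximum AKUs
sustained_mbps=55   # throughput sustained over time, not a momentary spike

# AKUs needed to cover the sustained throughput, rounded up
needed_akus=$(( (sustained_mbps + aku_capacity_mbps - 1) / aku_capacity_mbps ))

# Clamp the target to the configured autoscaling limits
if [ "$needed_akus" -gt "$max_akus" ]; then needed_akus=$max_akus; fi
if [ "$needed_akus" -lt "$min_akus" ]; then needed_akus=$min_akus; fi

echo "target AKU level: $needed_akus"
```

With these assumed values, a sustained 55 MB/s resolves to a target of 3 AKUs, which falls inside the configured limits of 1 and 4.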
+
+## Throughput measurement
+
+Inkless measures two types of traffic:
+
+- **Ingress:** Data written to topics by producers.
+- **Egress:** Data read from topics by consumers, connectors, and mirroring processes.
+
+Both ingress and egress contribute to AKU usage. You can track ingress and egress usage
+in the Service utilisation view, which also shows the AKU thresholds.
+
+## Autoscaling limits
+
+You can configure:
+
+- **Minimum AKUs:** The lowest capacity the service can scale down to.
+- **Maximum AKUs:** The highest capacity the service can scale up to.
+
+Inkless scales automatically within these limits. Scaling occurs only when
+throughput remains above or below a threshold for a sustained period.
+
+## Storage and AKUs
+
+Storage does not influence AKU scaling:
+
+- Diskless topics write directly to object storage.
+- Classic topics use local disk for recent data and move older segments to object storage
+  through tiered storage.
+
+Storage and compute scale independently, so you can adjust retention without changing
+AKU levels.
+
+## When to adjust AKU ranges
+
+Adjust your AKU limits when:
+
+- Workload throughput increases for sustained periods.
+- Short-term traffic spikes are expected.
+- You want to lower the maximum AKU to reduce costs during low-traffic periods.
+- The workload needs a guaranteed minimum level of throughput.
+
+For details on how AKU usage affects billing, see [Billing](/docs/products/kafka/concepts/inkless-billing).
diff --git a/docs/products/kafka/concepts/inkless-billing.md b/docs/products/kafka/concepts/inkless-billing.md
new file mode 100644
index 000000000..ff494503c
--- /dev/null
+++ b/docs/products/kafka/concepts/inkless-billing.md
@@ -0,0 +1,42 @@
+---
+title: Billing
+sidebar_label: Billing
+---
+
+Inkless uses a usage-based billing model. Charges are based on compute, storage, and data movement used by the service.
+
+:::note
+Inkless BYOC deployments continue to use the existing plans-based pricing model.
+::: + +## AKU-hours + +Compute charges are based on AKU-hours. + +An AKU (Aiven Kafka Unit) represents the throughput capacity of the service. The service +bills for the number of AKUs in use during each hour. When the service scales up or +down, the AKU-hour charge updates to match the current AKU level. + +For details on how scaling works, see [AKU plans and scaling](/docs/products/kafka/concepts/inkless-aku). + +## Storage + +Storage charges are based on the amount of data retained in object storage. + +- Diskless topics store all retained data in object storage. +- Classic topics keep a short amount of data on local disk before offloading older data + to object storage. + +Storage costs depend on how much data you retain. Storage is billed only for data kept +in object storage. Local disk used by brokers is not billed. + +## Network usage + +Network charges apply to: + +- **Ingress:** Data written to topics +- **Egress:** Data read by consumers, connectors, or mirroring processes + +:::note +Only topic ingress and egress are billed. Internal Kafka replication traffic is not billed. +::: diff --git a/docs/products/kafka/concepts/inkless.md b/docs/products/kafka/concepts/inkless.md new file mode 100644 index 000000000..e83e44d9c --- /dev/null +++ b/docs/products/kafka/concepts/inkless.md @@ -0,0 +1,37 @@ +--- +title: Inkless overview +sidebar_label: Overview +--- + +Inkless is Aiven’s cloud-native Apache Kafka® service that modernizes Kafka with diskless topics and object-storage retention to reduce operating costs while preserving full compatibility with existing Kafka clients. + +Inkless runs on Kafka 4.x and uses Aiven Kafka Units (AKUs) to size services by +throughput instead of hardware plans. It supports both classic and diskless topics in +the same service. + +## Key differences from classic Kafka + +Inkless changes how Kafka services are sized, stored, and managed: + +- **Throughput-based plans:** Services use AKUs instead of hardware plans. 
The service
+  scales within your defined limits as throughput changes.
+- **Flexible storage:** Diskless topics store all data in object storage. Classic topics
+  use local disk with tiered storage enabled by default.
+- **Managed configuration:** Broker-level settings are fixed to maintain service
+  stability and allow automatic scaling.
+- **KRaft metadata management:** Inkless uses KRaft for metadata and consensus,
+  replacing ZooKeeper.
+- **Cloud availability:** Inkless is initially available on AWS, with additional cloud
+  providers to follow.
+
+## When to use Inkless
+
+Use Inkless when:
+
+- Workload throughput fluctuates and requires autoscaling.
+- Storage and compute must scale independently.
+- Your use cases require diskless topics for long-term retention or large datasets.
+- You need a simplified capacity model without hardware planning.
+
+Classic Kafka remains available for existing deployments and appears in the Aiven Console
+only for customers who already run Classic services.
diff --git a/docs/products/kafka/create-kafka-service copy.md b/docs/products/kafka/create-kafka-service copy.md new file mode 100644 index 000000000..698dd7950 --- /dev/null +++ b/docs/products/kafka/create-kafka-service copy.md @@ -0,0 +1,318 @@ +--- +title: Create an Aiven for Apache Kafka® service +sidebar_label: Create service +keywords: [create, kafka, service, byoc, diskless] +--- + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +import ConsoleLabel from "@site/src/components/ConsoleIcons" +import LimitedBadge from "@site/src/components/Badges/LimitedBadge"; +import EarlyBadge from "@site/src/components/Badges/EarlyBadge"; +import RelatedPages from "@site/src/components/RelatedPages"; +import TerraformPrereqs from "@site/static/includes/terraform-get-started-prerequisites.md"; +import TerraformApply from "@site/static/includes/terraform-apply-changes.md"; +import TerraformSample from '@site/src/components/CodeSamples/TerraformSample'; + +You can create an Aiven for Apache Kafka® service using the Aiven Console, CLI, or Terraform. +During creation, you can enable **diskless topics** for Bring Your Own Cloud (BYOC) +deployments. If you do not enable diskless topics, the service stores topic data on +local disks by default. + +### Decide whether to enable diskless topics + +Choose the configuration that fits your workload: + +- **Standard Kafka service:** Uses local disk storage for lower latency and all-region + availability. +- **Kafka service with diskless topics:** Stores data in cloud object storage for + cost-optimized scaling in Bring Your Own Cloud (BYOC) environments. + +Diskless topics are currently supported only for BYOC deployments on AWS. + +:::note +You cannot enable diskless topics on an existing Kafka service that was created with +local storage only. +To use diskless topics, create a Kafka service with diskless support enabled. +Once enabled, you can create both diskless and classic topics within that service. 
+::: + +For details on the differences between topic types, see +[Classic vs. diskless topics](/docs/products/kafka/diskless/concepts/topics-vs-classic). + +## Prerequisites + +Make sure you have the following: + + + + +- Access to the [Aiven Console](https://console.aiven.io) +- An Aiven project to create the service in + + + + +- [Aiven CLI](https://github.com/aiven/aiven-client#installation) installed +- [A personal token](/docs/platform/howto/create_authentication_token) + + + + + + + + + +### Additional requirements for diskless topics + +To create a Kafka service with diskless topics, make sure that: + +- You have a [BYOC environment](/docs/platform/howto/byoc/create-cloud/create-custom-cloud) + set up in your cloud account on AWS. +- Diskless topics are enabled for your organization by Aiven. If the option does not + appear in the Aiven Console, [contact Aiven support](https://aiven.io/contact). + +## Create a Kafka service + +Create a Kafka service that stores topic data on local disks by default. + + + + +1. In your project, click . +1. Click **Create service**. +1. Select **Aiven for Apache Kafka®**. +1. In the **Optimize cost** section, keep diskless topics turned off to create a standard + Kafka service. + + :::tip + To create a Kafka service with diskless topics instead, see + [Create a Kafka service with diskless topics (BYOC)](#create-a-kafka-service-with-diskless-topics-byoc). + ::: + +1. Select a **Cloud**. + + :::note + Available plans and pricing vary between cloud providers and regions. + ::: + +1. Select a **Plan**. + +1. Optional: Add [disk storage](/docs/platform/howto/add-storage-space). + You can also enable [Tiered storage](/docs/products/kafka/howto/enable-kafka-tiered-storage) + to offload older data automatically to object storage. + +1. In the **Service basics** section, set the following: + - **Service name:** Enter a name for the service. + :::important + You cannot change the name after creation. 
+ ::: + - **Version:** Select the Kafka version. The latest supported version appears by default. + - **Tags:** Optional. Add [resource tags](/docs/platform/howto/tag-resources) to + organize your services. + +1. Review the **Service summary**. + Confirm the version, region, plan, and estimated price. + +1. Click **Create service**. + +The service status changes to **Rebuilding** during creation. +When it changes to **Running**, your Kafka service is ready. + + + + +Create a Kafka service using the Aiven CLI. + +```bash +avn service create SERVICE_NAME \ + --service-type kafka \ + --cloud CLOUD_REGION \ + --plan PLAN_NAME +``` + +Parameters: + +- `SERVICE_NAME`: The name of the Kafka service +- `CLOUD_REGION`: The cloud and region +- `PLAN_NAME`: The plan name + +Wait until the service status changes to **RUNNING**. + + + + +Use Terraform to create a Kafka service in your Aiven project. + +1. Create a file named `provider.tf` and add the following: + + + +1. Create a file named `service.tf` and add the following: + + + +1. Create a file named `variables.tf` and add the following: + + + +1. Create the `terraform.tfvars` file and add the values for your token and project name. + +1. Optional: To output connection details, create a file named `output.tf` and add the following: + + + + + + + + +## Create a Kafka service with diskless topics (BYOC) + +Use [diskless topics](/docs/products/kafka/diskless/concepts/diskless-overview) to +store Kafka data in cloud object storage instead of local disks. +You can use both diskless and classic topics in the same Kafka cluster. + +For instructions on setting up a BYOC environment, see +[Create a custom cloud (BYOC)](/docs/platform/howto/byoc/create-cloud/create-custom-cloud). + + + + +1. In your project, click . +1. Click **Create service**. +1. Select **Aiven for Apache Kafka®**. +1. Under **Optimize cost**, turn on **Enable diskless topics**. +1. 
Under **Add service metadata**, set the following: + - **Version:** Select the Kafka version. The latest supported version appears by + default. + :::note + Diskless topics require Apache Kafka® version 4.0 or later. + ::: + - **Service name:** Enter a name for your service. + :::important + You cannot change the name after creation. + ::: + - **Tags:** Optional. Add [resource tags](/docs/platform/howto/tag-resources) to + organize your services. +1. Select the **cloud provider**, **BYOC region**, and **plan**. +1. Under **Select plan**, choose one of the plans available for diskless topics. +1. Review the **Service summary** on the right. + Confirm the version, region, plan, and estimated price. +1. Click **Create service**. + + + + +You can create a Kafka service with diskless topics enabled using the Aiven CLI. + +```bash +avn service create SERVICE_NAME \ + --project PROJECT_NAME \ + --service-type kafka \ + --cloud CLOUD_NAME \ + --plan PLAN_NAME \ + -c kafka_version=4.0 \ + -c kafka_diskless.enabled=true +``` + +Parameters: + +- `SERVICE_NAME`: Name of your Kafka service. +- `PROJECT_NAME`: Your Aiven project name. +- `CLOUD_NAME`: Custom BYOC cloud region, for example `custom-aws-eu-central-1`. +- `PLAN_NAME`: Diskless-compatible plan, such as `business-8-inkless`. Plans that support + diskless topics include `-inkless` in the plan name. +- `kafka_diskless.enabled`: Enables diskless topics. Must be set to `true`. + + + + +You can create a Kafka service with diskless topics enabled using Terraform. + +1. 
Create a file named `main.tf` and add the following: + + ```hcl + terraform { + required_providers { + aiven = { + source = "aiven/aiven" + version = ">=4.0.0, <5.0.0" + } + } + } + + provider "aiven" { + api_token = var.aiven_token + } + + resource "aiven_kafka" "diskless_kafka" { + project = var.aiven_project_name + service_name = "kafka-diskless" + cloud_name = "custom-aws-eu-central-1" + plan = "business-8-inkless" + + kafka_user_config = { + kafka_version = "4.0" + kafka_diskless = { + enabled = true + } + } + } + ``` + +1. Create a `variables.tf` file: + + ```hcl + variable "aiven_token" { + description = "Aiven API token" + type = string + } + + variable "aiven_project_name" { + description = "Your Aiven project name" + type = string + } + ``` + +1. Initialize and apply your configuration: + + ```hcl + terraform init + terraform apply --auto-approve + ``` + + + + +### After service creation + +When you create a Kafka service with diskless topics, Aiven deploys it directly in your +BYOC environment using your connected cloud account. The service runs entirely within +your cloud account. + +Aiven configures the following: + +- **Access to object storage** for storing Kafka topic data, either through an + Aiven-managed or a customer-provided bucket, depending on your BYOC configuration. +- **A PostgreSQL-based coordinator** managed as a service integration with Kafka. + This coordinator maintains message ordering and metadata consistency for diskless topics. + It is required for the current implementation of diskless topics. For details about + how the coordinator is upgraded, see + [PostgreSQL service upgrades](/docs/products/kafka/diskless/concepts/limitations#automatic-postgresql-service-upgrades). + +After creation, the **Kafka Diskless PostgreSQL** integration appears on the + page in the Aiven Console. This integration is managed +by Aiven and cannot be modified or deleted. 
+ +To learn more about how diskless topics work, see +[Diskless topics overview](/docs/products/kafka/diskless/concepts/diskless-overview). + + + + +- [Diskless topics overview](/docs/products/kafka/diskless/concepts/diskless-overview) +- [Diskless topics architecture](/docs/products/kafka/diskless/concepts/architecture) +- [Batching and delivery in diskless topics](/docs/products/kafka/diskless/concepts/batching-and-delivery) +- [Create a Kafka topic](/docs/products/kafka/howto/create-topic) diff --git a/docs/products/kafka/create-kafka-service.md b/docs/products/kafka/create-kafka-service.md index 67c67a7d1..7a1456e12 100644 --- a/docs/products/kafka/create-kafka-service.md +++ b/docs/products/kafka/create-kafka-service.md @@ -1,7 +1,7 @@ --- -title: Create an Aiven for Apache Kafka® service -sidebar_label: Create Kafka service -keywords: [create, kafka, service, byoc, diskless] +title: Create a Kafka service +sidebar_label: Create service +keywords: [create, kafka, service, inkless, classic, byoc] --- import Tabs from '@theme/Tabs'; @@ -14,47 +14,32 @@ import TerraformPrereqs from "@site/static/includes/terraform-get-started-prereq import TerraformApply from "@site/static/includes/terraform-apply-changes.md"; import TerraformSample from '@site/src/components/CodeSamples/TerraformSample'; -You can create an Aiven for Apache Kafka® service using the Aiven Console, CLI, or Terraform. -During creation, you can enable **diskless topics** for Bring Your Own Cloud (BYOC) -deployments. If you do not enable diskless topics, the service stores topic data on -local disks by default. +Learn how to create an Apache Kafka® service on Aiven. You can choose between two Kafka +modes and deploy to either Aiven cloud or your own cloud infrastructure. 
-### Decide whether to enable diskless topics +## Choose your Kafka mode -Choose the configuration that fits your workload: +Aiven offers two ways to run Apache Kafka: -- **Standard Kafka service:** Uses local disk storage for lower latency and all-region - availability. -- **Kafka service with diskless topics:** Stores data in cloud object storage for - cost-optimized scaling in Bring Your Own Cloud (BYOC) environments. - -Diskless topics are currently supported only for BYOC deployments on AWS. - -:::note -You cannot enable diskless topics on an existing Kafka service that was created with -local storage only. -To use diskless topics, create a Kafka service with diskless support enabled. -Once enabled, you can create both diskless and classic topics within that service. -::: - -For details on the differences between topic types, see -[Classic vs. diskless topics](/docs/products/kafka/diskless/concepts/topics-vs-classic). +- **Inkless**: Uses usage-based compute measured in Aiven Kafka Units (AKUs) on Aiven + cloud, or plan-based pricing on Bring Your Own Cloud (BYOC). Inkless runs Kafka 4.x + and enables diskless topics and tiered storage by default. +- **Classic Kafka**: Uses fixed plans with local broker storage. Stores topic data on + local disks by default, with optional tiered storage. 
## Prerequisites -Make sure you have the following: - - + - Access to the [Aiven Console](https://console.aiven.io) -- An Aiven project to create the service in +- An Aiven project where you can create services -- [Aiven CLI](https://github.com/aiven/aiven-client#installation) installed -- [A personal token](/docs/platform/howto/create_authentication_token) +- Install the [Aiven CLI](https://github.com/aiven/aiven-client#installation) +- Create an [API token](/docs/platform/howto/create_authentication_token) @@ -64,66 +49,141 @@ Make sure you have the following: -### Additional requirements for diskless topics +## Create an Inkless service on Aiven cloud -To create a Kafka service with diskless topics, make sure that: +Inkless on Aiven cloud uses Aiven Kafka Units (AKUs) to size compute capacity. It runs +Kafka 4.x and enables diskless topics and tiered storage by default. -- You have a [BYOC environment](/docs/platform/howto/byoc/create-cloud/create-custom-cloud) - set up in your cloud account on AWS. -- Diskless topics are enabled for your organization by Aiven. If the option does not - appear in the Aiven Console, [contact Aiven support](https://aiven.io/contact). + + -## Create a Kafka service +1. In the [Aiven Console](https://console.aiven.io), open your project and + click . +1. Click **Create service**. +1. Select **Apache Kafka®**. +1. Select **Inkless** as the service type. +1. Select **Aiven cloud** as the deployment mode. +1. Select a **cloud provider** and **region**. +1. In **Stream load**, set the expected ingress and egress throughput. +1. In **Retention**, enter the data retention period. +1. In **Service basics**, enter: + - **Name:** Enter a name for the service. + :::important + You cannot change the name after creation. + ::: + - **Tags:** Optional. Add [resource tags](/docs/platform/howto/tag-resources) to + organize your services. +1. Review the **Service summary**, and click **Create service**. 
-Create a Kafka service that stores topic data on local disks by default. + + - - +Run the following command to create an Inkless Kafka service: -1. In your project, click . -1. Click **Create service**. -1. Select **Aiven for Apache Kafka®**. -1. In the **Optimize cost** section, keep diskless topics turned off to create a standard - Kafka service. +```bash +avn service create SERVICE_NAME \ +--project PROJECT_NAME \ +--service-type kafka \ +--cloud CLOUD_REGION \ +--plan AKU_OFFERING \ +-c inkless.enabled=true +``` + +Parameters: - :::tip - To create a Kafka service with diskless topics instead, see - [Create a Kafka service with diskless topics (BYOC)](#create-a-kafka-service-with-diskless-topics-byoc). - ::: +- `SERVICE_NAME`: Name of the Kafka service. +- `PROJECT_NAME`: Project that contains the service. +- `CLOUD_REGION`: Cloud region to deploy the service in. +- `AKU_OFFERING`: Inkless AKU offering to use, for example `aku-1`. -1. Select a **Cloud**. + + - :::note - Available plans and pricing vary between cloud providers and regions. - ::: +## Create an Inkless service on Bring your own cloud (BYOC) -1. Select a **Plan**. +Inkless services can run in your cloud account through BYOC. Inkless on BYOC uses Kafka +4.x and enables diskless topics and tiered storage by default. -1. Optional: Add [disk storage](/docs/platform/howto/add-storage-space). - You can also enable [Tiered storage](/docs/products/kafka/howto/enable-kafka-tiered-storage) - to offload older data automatically to object storage. + + -1. In the **Service basics** section, set the following: - - **Service name:** Enter a name for the service. +1. In the [Aiven Console](https://console.aiven.io), open your project and + click . +1. Click **Create service**. +1. Select **Apache Kafka®**. +1. Select **Inkless** as the service type. +1. Select **Bring your own cloud (BYOC)** as the deployment mode. +1. In the Cloud section, choose your BYOC environment and region. +1. Choose a **plan**. +1. 
In **Service basics**, enter: + - **Name:** Enter a name for the service. :::important You cannot change the name after creation. ::: - - **Version:** Select the Kafka version. The latest supported version appears by default. - **Tags:** Optional. Add [resource tags](/docs/platform/howto/tag-resources) to organize your services. +1. Review the **Service summary**, and click **Create service**. -1. Review the **Service summary**. - Confirm the version, region, plan, and estimated price. + + -1. Click **Create service**. +Use the Aiven CLI to create the service. -The service status changes to **Rebuilding** during creation. -When it changes to **Running**, your Kafka service is ready. +```bash +avn service create SERVICE_NAME \ + --project PROJECT_NAME \ + --service-type kafka \ + --cloud CUSTOM_CLOUD_REGION \ + --plan INKLESS_PLAN +``` + +Parameters: + +- `SERVICE_NAME`: Name of the Kafka service. +- `PROJECT_NAME`: Aiven project name. +- `CUSTOM_CLOUD_REGION`: BYOC cloud region, such as `custom-aws-eu-central-1`. +- `INKLESS_PLAN`: Inkless plan, for example `business-8-inkless`. + + + + +## Create a Classic Kafka service on Aiven cloud + +Classic Kafka uses fixed plans and local broker storage. It stores topic data on local +disks by default, with optional tiered storage. + + + + +1. In the [Aiven Console](https://console.aiven.io), open your project and + click . +1. Click **Create service**. +1. Select **Apache Kafka®**. +1. Select **Classic Kafka** as the service type. +1. Select **Aiven cloud** as the deployment mode. +1. In the **Cloud** section: + + - Choose a **cloud provider**. + - Select a **region**. +1. In the **Plan** section, choose a plan from the available plan groups. +1. Optional: + + - Add [disk storage](/docs/platform/howto/add-storage-space). + - Enable [Tiered storage](/docs/products/kafka/howto/enable-kafka-tiered-storage) if + supported for your plan and region. +1. In **Service basics**, enter: + + - **Name:** Name of the service. 
+ - **Version:** Select the Kafka version. The latest supported version appears by + default. + - **Tags:** Optional. Add [resource tags](/docs/platform/howto/tag-resources) to + organize your services. +1. Review the **Service summary**, then click **Create service**. -Create a Kafka service using the Aiven CLI. +Create a classic Kafka service using the Aiven CLI. ```bash avn service create SERVICE_NAME \ @@ -134,16 +194,15 @@ avn service create SERVICE_NAME \ Parameters: -- `SERVICE_NAME`: The name of the Kafka service -- `CLOUD_REGION`: The cloud and region -- `PLAN_NAME`: The plan name +- `SERVICE_NAME`: Name of the Kafka service. +- `CLOUD_REGION`: Cloud provider and region. +- `PLAN_NAME`: Classic Kafka plan. -Wait until the service status changes to **RUNNING**. -Use Terraform to create a Kafka service in your Aiven project. +Use Terraform to create a classic Kafka service in your Aiven project. 1. Create a file named `provider.tf` and add the following: @@ -159,7 +218,8 @@ Use Terraform to create a Kafka service in your Aiven project. 1. Create the `terraform.tfvars` file and add the values for your token and project name. -1. Optional: To output connection details, create a file named `output.tf` and add the following: +1. Optional: To output connection details, create a file named `output.tf` and add the + following: @@ -168,147 +228,71 @@ Use Terraform to create a Kafka service in your Aiven project. -## Create a Kafka service with diskless topics (BYOC) +## Create a Classic Kafka service on Bring your own cloud (BYOC) -Use [diskless topics](/docs/products/kafka/diskless/concepts/diskless-overview) to -store Kafka data in cloud object storage instead of local disks. -You can use both diskless and classic topics in the same Kafka cluster. +You can run Classic Kafka in your own cloud account using BYOC. -For instructions on setting up a BYOC environment, see -[Create a custom cloud (BYOC)](/docs/platform/howto/byoc/create-cloud/create-custom-cloud). 
- - + -1. In your project, click . +1. In the [Aiven Console](https://console.aiven.io), open your project and + click . 1. Click **Create service**. -1. Select **Aiven for Apache Kafka®**. -1. Under **Optimize cost**, turn on **Enable diskless topics**. -1. Under **Add service metadata**, set the following: +1. Select **Apache Kafka®**. +1. Select **Classic Kafka** as the service type. +1. Select **Bring your own cloud (BYOC)** as the deployment mode. +1. In the **Cloud** section: + - Select your **BYOC environment**. + - Select a **region**. +1. In the **Plan** section, choose a plan from the available plan groups. +1. Optional: + - Adjust **Additional disk storage**. + - Enable **Tiered storage** if supported for your plan and region. +1. In **Service basics**, enter: + - **Name:** Name of the service. - **Version:** Select the Kafka version. The latest supported version appears by default. - :::note - Diskless topics require Apache Kafka® version 4.0 or later. - ::: - - **Service name:** Enter a name for your service. - :::important - You cannot change the name after creation. - ::: - **Tags:** Optional. Add [resource tags](/docs/platform/howto/tag-resources) to organize your services. -1. Select the **cloud provider**, **BYOC region**, and **plan**. -1. Under **Select plan**, choose one of the plans available for diskless topics. -1. Review the **Service summary** on the right. - Confirm the version, region, plan, and estimated price. -1. Click **Create service**. +1. Review the **Service summary**, then click **Create service**. -You can create a Kafka service with diskless topics enabled using the Aiven CLI. +Use the Aiven CLI to create a Classic Kafka BYOC service. 
```bash avn service create SERVICE_NAME \ --project PROJECT_NAME \ --service-type kafka \ - --cloud CLOUD_NAME \ - --plan PLAN_NAME \ - -c kafka_version=4.0 \ - -c kafka_diskless.enabled=true + --cloud CUSTOM_CLOUD_REGION \ + --plan PLAN_NAME ``` Parameters: -- `SERVICE_NAME`: Name of your Kafka service. -- `PROJECT_NAME`: Your Aiven project name. -- `CLOUD_NAME`: Custom BYOC cloud region, for example `custom-aws-eu-central-1`. -- `PLAN_NAME`: Diskless-compatible plan, such as `business-8-inkless`. Plans that support - diskless topics include `-inkless` in the plan name. -- `kafka_diskless.enabled`: Enables diskless topics. Must be set to `true`. - - - - -You can create a Kafka service with diskless topics enabled using Terraform. - -1. Create a file named `main.tf` and add the following: - - ```hcl - terraform { - required_providers { - aiven = { - source = "aiven/aiven" - version = ">=4.0.0, <5.0.0" - } - } - } - - provider "aiven" { - api_token = var.aiven_token - } - - resource "aiven_kafka" "diskless_kafka" { - project = var.aiven_project_name - service_name = "kafka-diskless" - cloud_name = "custom-aws-eu-central-1" - plan = "business-8-inkless" - - kafka_user_config = { - kafka_version = "4.0" - kafka_diskless = { - enabled = true - } - } - } - ``` - -1. Create a `variables.tf` file: - - ```hcl - variable "aiven_token" { - description = "Aiven API token" - type = string - } - - variable "aiven_project_name" { - description = "Your Aiven project name" - type = string - } - ``` - -1. Initialize and apply your configuration: - - ```hcl - terraform init - terraform apply --auto-approve - ``` +- `CUSTOM_CLOUD_REGION`: Your BYOC region. +- `PLAN_NAME`: Classic Kafka BYOC plan. -### After service creation - -When you create a Kafka service with diskless topics, Aiven deploys it directly in your -BYOC environment using your connected cloud account. The service runs entirely within -your cloud account. 
- -Aiven configures the following: +## After service creation -- **Access to object storage** for storing Kafka topic data, either through an - Aiven-managed or a customer-provided bucket, depending on your BYOC configuration. -- **A PostgreSQL-based coordinator** managed as a service integration with Kafka. - This coordinator maintains message ordering and metadata consistency for diskless topics. - It is required for the current implementation of diskless topics. For details about - how the coordinator is upgraded, see - [PostgreSQL service upgrades](/docs/products/kafka/diskless/concepts/limitations#automatic-postgresql-service-upgrades). +Inkless services require a metadata coordinator and object storage. Aiven provisions +these components automatically. -After creation, the **Kafka Diskless PostgreSQL** integration appears on the - page in the Aiven Console. This integration is managed -by Aiven and cannot be modified or deleted. +Aiven configures: -To learn more about how diskless topics work, see -[Diskless topics overview](/docs/products/kafka/diskless/concepts/diskless-overview). +- **Object storage access** for storing diskless topic data. Inkless uses an + Aiven-managed object storage bucket, which is created and managed for you. +- **A PostgreSQL-based coordinator** that stores metadata for diskless topics. The + coordinator is provisioned automatically and linked to the Kafka service through a + managed integration. It maintains metadata such as batch offsets and storage locations. +After creation, the **Kafka Inkless PostgreSQL** integration appears on +the page in the Aiven Console. This integration +is managed by Aiven and cannot be modified or removed. 
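After creation, you can verify the service and its managed integration from the command line. This is a sketch that assumes an authenticated Aiven CLI; replace the placeholders with your own service and project names:

```bash
# Block until the service reaches the RUNNING state.
avn service wait SERVICE_NAME --project PROJECT_NAME

# List integrations; the managed Kafka Inkless PostgreSQL integration appears here.
avn service integration-list SERVICE_NAME --project PROJECT_NAME
```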
diff --git a/sidebars.ts b/sidebars.ts index afab225c9..a0875b0e6 100644 --- a/sidebars.ts +++ b/sidebars.ts @@ -765,7 +765,19 @@ const sidebars: SidebarsConfig = { items: [ { type: 'category', - label: 'Core concepts', + label: 'Inkless', + link: { + type: 'doc', + id: 'products/kafka/concepts/inkless', + }, + items: [ + 'products/kafka/concepts/inkless-aku', + 'products/kafka/concepts/inkless-billing', + ], + }, + { + type: 'category', + label: 'Kafka fundamentals', items: [ 'products/kafka/concepts/partition-segments', 'products/kafka/concepts/log-compaction', @@ -775,18 +787,7 @@ const sidebars: SidebarsConfig = { 'products/kafka/concepts/kafka-rest-api', ], }, - { - type: 'category', - label: 'Operating Kafka with Aiven', - items: [ - 'products/kafka/concepts/upgrade-procedure', - 'products/kafka/concepts/horizontal-vertical-scaling', - 'products/kafka/concepts/configuration-backup', - 'products/kafka/concepts/monitor-consumer-group', - 'products/kafka/concepts/consumer-lag-predictor', - 'products/kafka/concepts/follower-fetching', - ], - }, + { type: 'category', label: 'Diskless topics', @@ -820,7 +821,18 @@ const sidebars: SidebarsConfig = { 'products/kafka/concepts/kraft-mode', ], }, - + { + type: 'category', + label: 'Operate Kafka on Aiven', + items: [ + 'products/kafka/concepts/upgrade-procedure', + 'products/kafka/concepts/horizontal-vertical-scaling', + 'products/kafka/concepts/configuration-backup', + 'products/kafka/concepts/monitor-consumer-group', + 'products/kafka/concepts/consumer-lag-predictor', + 'products/kafka/concepts/follower-fetching', + ], + }, { type: 'category', label: 'How to', From deb1a0ea215ef26251764ddf25fcc327215476bf Mon Sep 17 00:00:00 2001 From: Harshini Rangaswamy Date: Wed, 10 Dec 2025 10:54:58 +0100 Subject: [PATCH 2/9] feat: Inkless doc updates --- .github/vale/styles/config/vocabularies/Aiven/accept.txt | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git 
a/.github/vale/styles/config/vocabularies/Aiven/accept.txt b/.github/vale/styles/config/vocabularies/Aiven/accept.txt
index a81bd821b..4196bec24 100644
--- a/.github/vale/styles/config/vocabularies/Aiven/accept.txt
+++ b/.github/vale/styles/config/vocabularies/Aiven/accept.txt
@@ -1,11 +1,15 @@
 188
 ACL
 ACLs
+ACU
+ACUs
 Addons
 africa
 AIInsights
 Aiven
 Aiven's
+AKU
+AKUs
 allowlist
 allowlists
 Altinity
@@ -140,7 +144,6 @@ GitHub
 go
 Google Cloud Platform
 google_columnar_engine_enabled
-google_columnar_engine_enabled
 google_columnar_engine_memory_size_percentage
 Gzipped
 gzipped
@@ -169,6 +172,7 @@ hypertables
 IdP
 IdPs
 InfluxDB
+Inkless
 InnoDB
 inodes
 Instana
@@ -259,7 +263,6 @@ pg_dump
 pgAdmin
 PGAudit
 PgBouncer
-pg_dump
 PGHoard
 pglookout
 pgoutput

From 15609b41d79cd077deebda5a6b51235534446340 Mon Sep 17 00:00:00 2001
From: Harshini Rangaswamy
Date: Wed, 10 Dec 2025 11:06:42 +0100
Subject: [PATCH 3/9] update: delete duplicate file

---
 .../kafka/create-kafka-service copy.md | 318 ------------------
 1 file changed, 318 deletions(-)
 delete mode 100644 docs/products/kafka/create-kafka-service copy.md

diff --git a/docs/products/kafka/create-kafka-service copy.md b/docs/products/kafka/create-kafka-service copy.md
deleted file mode 100644
index 698dd7950..000000000
--- a/docs/products/kafka/create-kafka-service copy.md
+++ /dev/null
@@ -1,318 +0,0 @@
----
-title: Create an Aiven for Apache Kafka® service
-sidebar_label: Create service
-keywords: [create, kafka, service, byoc, diskless]
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-import ConsoleLabel from "@site/src/components/ConsoleIcons"
-import LimitedBadge from "@site/src/components/Badges/LimitedBadge";
-import EarlyBadge from "@site/src/components/Badges/EarlyBadge";
-import RelatedPages from "@site/src/components/RelatedPages";
-import TerraformPrereqs from "@site/static/includes/terraform-get-started-prerequisites.md";
-import TerraformApply from
"@site/static/includes/terraform-apply-changes.md"; -import TerraformSample from '@site/src/components/CodeSamples/TerraformSample'; - -You can create an Aiven for Apache Kafka® service using the Aiven Console, CLI, or Terraform. -During creation, you can enable **diskless topics** for Bring Your Own Cloud (BYOC) -deployments. If you do not enable diskless topics, the service stores topic data on -local disks by default. - -### Decide whether to enable diskless topics - -Choose the configuration that fits your workload: - -- **Standard Kafka service:** Uses local disk storage for lower latency and all-region - availability. -- **Kafka service with diskless topics:** Stores data in cloud object storage for - cost-optimized scaling in Bring Your Own Cloud (BYOC) environments. - -Diskless topics are currently supported only for BYOC deployments on AWS. - -:::note -You cannot enable diskless topics on an existing Kafka service that was created with -local storage only. -To use diskless topics, create a Kafka service with diskless support enabled. -Once enabled, you can create both diskless and classic topics within that service. -::: - -For details on the differences between topic types, see -[Classic vs. diskless topics](/docs/products/kafka/diskless/concepts/topics-vs-classic). - -## Prerequisites - -Make sure you have the following: - - - - -- Access to the [Aiven Console](https://console.aiven.io) -- An Aiven project to create the service in - - - - -- [Aiven CLI](https://github.com/aiven/aiven-client#installation) installed -- [A personal token](/docs/platform/howto/create_authentication_token) - - - - - - - - - -### Additional requirements for diskless topics - -To create a Kafka service with diskless topics, make sure that: - -- You have a [BYOC environment](/docs/platform/howto/byoc/create-cloud/create-custom-cloud) - set up in your cloud account on AWS. -- Diskless topics are enabled for your organization by Aiven. 
If the option does not - appear in the Aiven Console, [contact Aiven support](https://aiven.io/contact). - -## Create a Kafka service - -Create a Kafka service that stores topic data on local disks by default. - - - - -1. In your project, click . -1. Click **Create service**. -1. Select **Aiven for Apache Kafka®**. -1. In the **Optimize cost** section, keep diskless topics turned off to create a standard - Kafka service. - - :::tip - To create a Kafka service with diskless topics instead, see - [Create a Kafka service with diskless topics (BYOC)](#create-a-kafka-service-with-diskless-topics-byoc). - ::: - -1. Select a **Cloud**. - - :::note - Available plans and pricing vary between cloud providers and regions. - ::: - -1. Select a **Plan**. - -1. Optional: Add [disk storage](/docs/platform/howto/add-storage-space). - You can also enable [Tiered storage](/docs/products/kafka/howto/enable-kafka-tiered-storage) - to offload older data automatically to object storage. - -1. In the **Service basics** section, set the following: - - **Service name:** Enter a name for the service. - :::important - You cannot change the name after creation. - ::: - - **Version:** Select the Kafka version. The latest supported version appears by default. - - **Tags:** Optional. Add [resource tags](/docs/platform/howto/tag-resources) to - organize your services. - -1. Review the **Service summary**. - Confirm the version, region, plan, and estimated price. - -1. Click **Create service**. - -The service status changes to **Rebuilding** during creation. -When it changes to **Running**, your Kafka service is ready. - - - - -Create a Kafka service using the Aiven CLI. - -```bash -avn service create SERVICE_NAME \ - --service-type kafka \ - --cloud CLOUD_REGION \ - --plan PLAN_NAME -``` - -Parameters: - -- `SERVICE_NAME`: The name of the Kafka service -- `CLOUD_REGION`: The cloud and region -- `PLAN_NAME`: The plan name - -Wait until the service status changes to **RUNNING**. 
- - - - -Use Terraform to create a Kafka service in your Aiven project. - -1. Create a file named `provider.tf` and add the following: - - - -1. Create a file named `service.tf` and add the following: - - - -1. Create a file named `variables.tf` and add the following: - - - -1. Create the `terraform.tfvars` file and add the values for your token and project name. - -1. Optional: To output connection details, create a file named `output.tf` and add the following: - - - - - - - - -## Create a Kafka service with diskless topics (BYOC) - -Use [diskless topics](/docs/products/kafka/diskless/concepts/diskless-overview) to -store Kafka data in cloud object storage instead of local disks. -You can use both diskless and classic topics in the same Kafka cluster. - -For instructions on setting up a BYOC environment, see -[Create a custom cloud (BYOC)](/docs/platform/howto/byoc/create-cloud/create-custom-cloud). - - - - -1. In your project, click . -1. Click **Create service**. -1. Select **Aiven for Apache Kafka®**. -1. Under **Optimize cost**, turn on **Enable diskless topics**. -1. Under **Add service metadata**, set the following: - - **Version:** Select the Kafka version. The latest supported version appears by - default. - :::note - Diskless topics require Apache Kafka® version 4.0 or later. - ::: - - **Service name:** Enter a name for your service. - :::important - You cannot change the name after creation. - ::: - - **Tags:** Optional. Add [resource tags](/docs/platform/howto/tag-resources) to - organize your services. -1. Select the **cloud provider**, **BYOC region**, and **plan**. -1. Under **Select plan**, choose one of the plans available for diskless topics. -1. Review the **Service summary** on the right. - Confirm the version, region, plan, and estimated price. -1. Click **Create service**. - - - - -You can create a Kafka service with diskless topics enabled using the Aiven CLI. 
- -```bash -avn service create SERVICE_NAME \ - --project PROJECT_NAME \ - --service-type kafka \ - --cloud CLOUD_NAME \ - --plan PLAN_NAME \ - -c kafka_version=4.0 \ - -c kafka_diskless.enabled=true -``` - -Parameters: - -- `SERVICE_NAME`: Name of your Kafka service. -- `PROJECT_NAME`: Your Aiven project name. -- `CLOUD_NAME`: Custom BYOC cloud region, for example `custom-aws-eu-central-1`. -- `PLAN_NAME`: Diskless-compatible plan, such as `business-8-inkless`. Plans that support - diskless topics include `-inkless` in the plan name. -- `kafka_diskless.enabled`: Enables diskless topics. Must be set to `true`. - - - - -You can create a Kafka service with diskless topics enabled using Terraform. - -1. Create a file named `main.tf` and add the following: - - ```hcl - terraform { - required_providers { - aiven = { - source = "aiven/aiven" - version = ">=4.0.0, <5.0.0" - } - } - } - - provider "aiven" { - api_token = var.aiven_token - } - - resource "aiven_kafka" "diskless_kafka" { - project = var.aiven_project_name - service_name = "kafka-diskless" - cloud_name = "custom-aws-eu-central-1" - plan = "business-8-inkless" - - kafka_user_config = { - kafka_version = "4.0" - kafka_diskless = { - enabled = true - } - } - } - ``` - -1. Create a `variables.tf` file: - - ```hcl - variable "aiven_token" { - description = "Aiven API token" - type = string - } - - variable "aiven_project_name" { - description = "Your Aiven project name" - type = string - } - ``` - -1. Initialize and apply your configuration: - - ```hcl - terraform init - terraform apply --auto-approve - ``` - - - - -### After service creation - -When you create a Kafka service with diskless topics, Aiven deploys it directly in your -BYOC environment using your connected cloud account. The service runs entirely within -your cloud account. 
- -Aiven configures the following: - -- **Access to object storage** for storing Kafka topic data, either through an - Aiven-managed or a customer-provided bucket, depending on your BYOC configuration. -- **A PostgreSQL-based coordinator** managed as a service integration with Kafka. - This coordinator maintains message ordering and metadata consistency for diskless topics. - It is required for the current implementation of diskless topics. For details about - how the coordinator is upgraded, see - [PostgreSQL service upgrades](/docs/products/kafka/diskless/concepts/limitations#automatic-postgresql-service-upgrades). - -After creation, the **Kafka Diskless PostgreSQL** integration appears on the - page in the Aiven Console. This integration is managed -by Aiven and cannot be modified or deleted. - -To learn more about how diskless topics work, see -[Diskless topics overview](/docs/products/kafka/diskless/concepts/diskless-overview). - - - - -- [Diskless topics overview](/docs/products/kafka/diskless/concepts/diskless-overview) -- [Diskless topics architecture](/docs/products/kafka/diskless/concepts/architecture) -- [Batching and delivery in diskless topics](/docs/products/kafka/diskless/concepts/batching-and-delivery) -- [Create a Kafka topic](/docs/products/kafka/howto/create-topic) From 09b33dd4cf6140e974232aac997122bf1de1f222 Mon Sep 17 00:00:00 2001 From: Harshini Rangaswamy Date: Wed, 10 Dec 2025 13:17:01 +0100 Subject: [PATCH 4/9] update: TOC --- sidebars.ts | 44 ++++++++++++++++---------------------------- 1 file changed, 16 insertions(+), 28 deletions(-) diff --git a/sidebars.ts b/sidebars.ts index a0875b0e6..b21186232 100644 --- a/sidebars.ts +++ b/sidebars.ts @@ -738,14 +738,6 @@ const sidebars: SidebarsConfig = { type: 'doc', }, items: [ - { - type: 'category', - label: 'Free tier', - items: [ - 'products/kafka/free-tier/kafka-free-tier', - 'products/kafka/free-tier/create-free-tier-kafka-service', - ], - }, 'products/kafka/create-kafka-service', { type: 
'category',
@@ -766,10 +758,7 @@ const sidebars: SidebarsConfig = {
       {
         type: 'category',
         label: 'Inkless',
-        link: {
-          type: 'doc',
-          id: 'products/kafka/concepts/inkless',
-        },
+        link: {type: 'doc', id: 'products/kafka/concepts/inkless'},
         items: [
           'products/kafka/concepts/inkless-aku',
           'products/kafka/concepts/inkless-billing',
@@ -787,7 +776,6 @@ const sidebars: SidebarsConfig = {
           'products/kafka/concepts/kafka-rest-api',
         ],
       },
-
       {
         type: 'category',
         label: 'Diskless topics',
@@ -816,21 +804,23 @@ const sidebars: SidebarsConfig = {
           'products/kafka/concepts/tiered-storage-limitations',
         ],
       },
+      'products/kafka/concepts/governance-overview',
       'products/kafka/concepts/kafka-quotas',
       'products/kafka/concepts/kraft-mode',
-      ],
-    },
-    {
-      type: 'category',
-      label: 'Operate Kafka on Aiven',
-      items: [
-        'products/kafka/concepts/upgrade-procedure',
-        'products/kafka/concepts/horizontal-vertical-scaling',
-        'products/kafka/concepts/configuration-backup',
-        'products/kafka/concepts/monitor-consumer-group',
-        'products/kafka/concepts/consumer-lag-predictor',
-        'products/kafka/concepts/follower-fetching',
+
+      {
+        type: 'category',
+        label: 'Operate Kafka on Aiven',
+ items: [ + 'products/kafka/concepts/upgrade-procedure', + 'products/kafka/concepts/horizontal-vertical-scaling', + 'products/kafka/concepts/configuration-backup', + 'products/kafka/concepts/monitor-consumer-group', + 'products/kafka/concepts/consumer-lag-predictor', + 'products/kafka/concepts/follower-fetching', + ], + }, ], }, { @@ -1712,7 +1702,6 @@ const sidebars: SidebarsConfig = { 'products/opensearch/howto/handle-low-disk-space', 'products/opensearch/howto/resolve-shards-too-large', 'products/opensearch/howto/setup-cross-cluster-replication-opensearch', - 'products/opensearch/howto/enable-slow-query-log', ], }, { @@ -1732,8 +1721,8 @@ const sidebars: SidebarsConfig = { label: 'Reference', items: [ 'products/opensearch/reference/plugins', - 'products/opensearch/reference/list-of-plugins-for-each-version', 'products/opensearch/reference/advanced-params', + 'products/opensearch/reference/restapi-limited-access', 'products/opensearch/reference/low-space-watermarks', 'products/opensearch/howto/os-metrics', @@ -1974,7 +1963,6 @@ const sidebars: SidebarsConfig = { 'products/valkey/concepts/lua-scripts', 'products/valkey/concepts/memory-usage', 'products/valkey/concepts/read-replica', - 'products/valkey/concepts/valkey-cluster', ], }, { From e512159f55b7d2bfa0b4e0b8e48b2a2c05dbecf7 Mon Sep 17 00:00:00 2001 From: Harshini Rangaswamy Date: Mon, 15 Dec 2025 14:32:00 +0100 Subject: [PATCH 5/9] update: minor refinement to content --- docs/products/kafka/concepts/inkless-aku.md | 18 +++++----- .../kafka/concepts/inkless-billing.md | 18 +++++----- docs/products/kafka/concepts/inkless.md | 30 +++++++++++----- docs/products/kafka/create-kafka-service.md | 34 +++++++++++++------ 4 files changed, 63 insertions(+), 37 deletions(-) diff --git a/docs/products/kafka/concepts/inkless-aku.md b/docs/products/kafka/concepts/inkless-aku.md index 2a9ec8211..339ec2ef5 100644 --- a/docs/products/kafka/concepts/inkless-aku.md +++ b/docs/products/kafka/concepts/inkless-aku.md @@ -2,9 +2,9 
@@ title: AKU plans and scaling --- -Inkless uses Aiven Kafka Units (AKUs) to size Apache Kafka services by throughput instead of hardware resources. -An AKU represents the amount of traffic a service can handle. You select an initial AKU -level when creating the service and define how far the service can scale. +Inkless uses Aiven Kafka Units (AKUs) to size Apache Kafka services by throughput instead of hardware resources. An AKU represents the amount of traffic a service can handle. You estimate the expected +throughput when creating the service. This estimate determines the initial AKU level and +the scaling range. ## How AKUs work @@ -14,9 +14,9 @@ level when creating the service and define how far the service can scale. - The service monitors throughput over time, not momentary spikes. - When throughput reaches the threshold for the current AKU level, the service scales up within your configured limits. -- When throughput stays low, the service scales down. +- When throughput remains low for a sustained period, the service scales down. -Scaling changes the number of ACUs in use, which affects ACU-hour billing. Scaling +Scaling changes the number of AKUs in use, which affects AKU-hour billing. Scaling actions do not affect topic configuration or data retention. ## Throughput measurement @@ -27,11 +27,11 @@ Inkless measures two types of traffic: - **Egress:** Data read from topics by consumers, connectors, and mirroring processes. Both ingress and egress contribute to AKU usage. You can track ingress and egress usage -in the Service utilisation view, which also shows the ACU thresholds. +in the Service utilisation view, which also shows the AKU thresholds. ## Autoscaling limits -You can configure: +Depending on your cloud provider and account, you can configure: - **Minimum AKUs:** The lowest capacity the service can scale down to. - **Maximum AKUs:** The highest capacity the service can scale up to. 
@@ -56,7 +56,7 @@ Adjust your AKU limits when: - Workload throughput increases for sustained periods. - Short-term traffic spikes are expected. -- Reducing costs during low-traffic periods requires a lower maximum ACU. +- Reducing costs during low-traffic periods requires a lower maximum AKU. - The workload needs a guaranteed minimum level of throughput. -For details on how ACU usage affects billing, see [Billing](/docs/products/kafka/concepts/inkless-billing). +For details on how AKU usage affects billing, see [Billing](/docs/products/kafka/concepts/inkless-billing). diff --git a/docs/products/kafka/concepts/inkless-billing.md b/docs/products/kafka/concepts/inkless-billing.md index ff494503c..76522009d 100644 --- a/docs/products/kafka/concepts/inkless-billing.md +++ b/docs/products/kafka/concepts/inkless-billing.md @@ -3,7 +3,7 @@ title: Billing sidebar_label: Billing --- -Inkless uses a usage-based billing model. Charges are based on compute, storage, and data movement used by the service. +Inkless uses a usage-based billing model. Charges are based on compute measured in Aiven Kafka Units (AKUs), storage, and data movement used by the service. :::note Inkless BYOC deployments continue to use the existing plans-based pricing model. @@ -14,21 +14,21 @@ Inkless BYOC deployments continue to use the existing plans-based pricing model. Compute charges are based on AKU-hours. An AKU (Aiven Kafka Unit) represents the throughput capacity of the service. The service -bills for the number of AKUs in use during each hour. When the service scales up or -down, the AKU-hour charge updates to match the current AKU level. +bills based on the number of AKUs in use over time, calculated in AKU-hours. When the +service scales up or down, the AKU-hour charge updates to match the current AKU level. -For details on how scaling works, see [AKU plans and scaling](/docs/products/kafka/concepts/inkless-aku). 
+For details on how scaling works, see +[AKU plans and scaling](/docs/products/kafka/concepts/inkless-aku). ## Storage Storage charges are based on the amount of data retained in object storage. - Diskless topics store all retained data in object storage. -- Classic topics keep a short amount of data on local disk before offloading older data - to object storage. +- Classic topics keep a short amount of recent data on local disk before offloading older + data to object storage. -Storage costs depend on how much data you retain. Storage is billed only for data kept -in object storage. Local disk used by brokers is not billed. +Local disk used by brokers is not billed. ## Network usage @@ -37,6 +37,8 @@ Network charges apply to: - **Ingress:** Data written to topics - **Egress:** Data read by consumers, connectors, or mirroring processes +Network usage is measured at the service level across all topics. + :::note Only topic ingress and egress are billed. Internal Kafka replication traffic is not billed. ::: diff --git a/docs/products/kafka/concepts/inkless.md b/docs/products/kafka/concepts/inkless.md index e83e44d9c..ee0db9265 100644 --- a/docs/products/kafka/concepts/inkless.md +++ b/docs/products/kafka/concepts/inkless.md @@ -3,26 +3,31 @@ title: Inkless overview sidebar_label: Overview --- -Inkless is Aiven’s cloud-native Apache Kafka® service that modernizes Kafka with diskless topics and object-storage retention to reduce operating costs while preserving full compatibility with existing Kafka clients. +Inkless is Aiven’s cloud-native Apache Kafka® service that modernizes Kafka with diskless topics and object storage for data retention. +It reduces operational overhead while preserving full compatibility with existing +Kafka clients. + +Inkless runs on Kafka 4.x and uses Aiven Kafka Units (AKUs) to size services by throughput +instead of hardware plans. It supports both classic and diskless topics within the same +service. 
-Inkless runs on Kafka 4.x and uses Aiven Kafka Units (AKUs) to size services by -throughput instead of hardware plans. It supports both classic and diskless topics in -the same service. ## Key differences from classic Kafka Inkless changes how Kafka services are sized, stored, and managed: -- **Throughput-based plans:** Services use AKUs instead of hardware plans. The service - scales within your defined limits as throughput changes. +- **Throughput-based sizing:** Services use AKUs instead of hardware plans and scale + within defined limits as throughput changes. - **Flexible storage:** Diskless topics store all data in object storage. Classic topics use local disk with tiered storage enabled by default. - **Managed configuration:** Broker-level settings are fixed to maintain service stability and allow automatic scaling. - **KRaft metadata management:** Inkless uses KRaft for metadata and consensus, replacing ZooKeeper. -- **Cloud availability:** Inkless is initially available on AWS, with additional cloud - providers to follow. +- **Cloud availability:** Inkless is available on selected cloud providers, with support + expanding over time. +- **Diskless topics:** Diskless topics are available only in Inkless services. + ## When to use Inkless @@ -34,4 +39,11 @@ Use Inkless when: - You need a simplified capacity model without hardware planning. Classic Kafka remains available for existing deployments and appears in the Aiven Console -only for customers who already run Classic services.. +only for customers who already run Classic services. + +## Existing Classic Kafka services + +Existing Classic Kafka services continue to run unchanged. + +You cannot upgrade or migrate an existing Classic Kafka service to Inkless. +Service type is fixed at creation. To use Inkless, create a new Kafka service. 
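The sustained-throughput scaling behavior described in the AKU documentation above (scale only when load persists, ignore momentary spikes, and stay within the configured minimum and maximum) can be sketched as follows. The threshold factors, window size, and per-AKU capacity are invented for illustration; Aiven's actual scaling thresholds are not published here.

```python
# Illustrative sketch (not Aiven's actual algorithm) of scaling on
# sustained throughput: a single spike is ignored, and the level never
# leaves the [min_aku, max_aku] range. Threshold factors are invented.

def next_aku(current: int, samples: list[float], capacity_per_aku: float,
             min_aku: int, max_aku: int, window: int = 3) -> int:
    """Return the AKU level after evaluating recent throughput samples (MB/s)."""
    recent = samples[-window:]
    if len(recent) < window:
        return current  # not enough history to call the load "sustained"
    scale_up_at = current * capacity_per_aku * 0.8          # sustained high usage
    scale_down_at = (current - 1) * capacity_per_aku * 0.5  # sustained low usage
    if all(s > scale_up_at for s in recent):
        return min(current + 1, max_aku)   # scale up, capped at the maximum
    if current > min_aku and all(s < scale_down_at for s in recent):
        return max(current - 1, min_aku)   # scale down, floored at the minimum
    return current

# One spike among normal samples does not trigger scaling:
assert next_aku(2, [10, 10, 95], capacity_per_aku=50, min_aku=1, max_aku=4) == 2
# Sustained high throughput scales up within the configured maximum:
assert next_aku(2, [90, 92, 95], capacity_per_aku=50, min_aku=1, max_aku=4) == 3
# Sustained low throughput scales down, but never below the minimum:
assert next_aku(3, [5, 5, 5], capacity_per_aku=50, min_aku=1, max_aku=4) == 2
```

The hysteresis between the scale-up and scale-down thresholds is what keeps the service from oscillating when throughput hovers near a boundary.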
diff --git a/docs/products/kafka/create-kafka-service.md b/docs/products/kafka/create-kafka-service.md index 7a1456e12..e558f087b 100644 --- a/docs/products/kafka/create-kafka-service.md +++ b/docs/products/kafka/create-kafka-service.md @@ -21,9 +21,12 @@ modes and deploy to either Aiven cloud or your own cloud infrastructure. Aiven offers two ways to run Apache Kafka: -- **Inkless**: Uses usage-based compute measured in Aiven Kafka Units (AKUs) on Aiven - cloud, or plan-based pricing on Bring Your Own Cloud (BYOC). Inkless runs Kafka 4.x - and enables diskless topics and tiered storage by default. +- **Inkless**: Runs Apache Kafka 4.x with diskless topics and tiered storage enabled by + default. + - On **Aiven cloud**, compute is usage-based and measured in Aiven Kafka Units (AKUs). + - On **Bring Your Own Cloud (BYOC)**, pricing is plan-based. + Inkless availability depends on the selected cloud provider. + - **Classic Kafka**: Uses fixed plans with local broker storage. Stores topic data on local disks by default, with optional tiered storage. @@ -63,9 +66,18 @@ Kafka 4.x and enables diskless topics and tiered storage by default. 1. Select **Apache Kafka®**. 1. Select **Inkless** as the service type. 1. Select **Aiven cloud** as the deployment mode. + + :::note + Inkless on Aiven cloud is available only on selected cloud providers. + If Inkless is not supported in the selected cloud or region, Classic Kafka is used instead. + ::: + 1. Select a **cloud provider** and **region**. -1. In **Stream load**, set the expected ingress and egress throughput. +1. In **Stream load**, estimate the expected ingress and egress throughput. + This estimate is used for initial AKU sizing and cost estimation and can be + changed later. 1. In **Retention**, enter the data retention period. + Retention is used to estimate storage costs and can be adjusted after service creation. 1. In **Service basics**, enter: - **Name:** Enter a name for the service. 
:::important
@@ -78,15 +90,14 @@ Kafka 4.x and enables diskless topics and tiered storage by default.
-Run the following command to create an Inkless Kafka service:
+Create an Inkless Kafka service using the Aiven CLI:
 
 ```bash
 avn service create SERVICE_NAME \
---project PROJECT_NAME \
---service-type kafka \
---cloud CLOUD_REGION \
---plan AKU_OFFERING \
--c inkless.enabled=true
+  --project PROJECT_NAME \
+  --service-type kafka \
+  --cloud CLOUD_REGION \
+  --plan INKLESS_PLAN
 ```
 
 Parameters:
@@ -94,7 +105,8 @@ Parameters:
 
 - `SERVICE_NAME`: Name of the Kafka service.
 - `PROJECT_NAME`: Project that contains the service.
 - `CLOUD_REGION`: Cloud region to deploy the service in.
-- `AKU_OFFERING`: Inkless AKU offering to use, for example `aku-1`.
+- `INKLESS_PLAN`: An Inkless Kafka plan. Plan availability depends on the selected
+  cloud provider and account.

From 65cc02b6eda915fec6bdf789c0161b1eb0105d11 Mon Sep 17 00:00:00 2001
From: Harshini Rangaswamy
Date: Mon, 15 Dec 2025 14:51:33 +0100
Subject: [PATCH 6/9] update: cross-links

---
 docs/products/kafka/concepts/inkless-aku.md | 14 ++++++++---
 .../kafka/concepts/inkless-billing.md       | 23 +++++++++++++++----
 docs/products/kafka/concepts/inkless.md     |  9 ++++++++
 3 files changed, 39 insertions(+), 7 deletions(-)

diff --git a/docs/products/kafka/concepts/inkless-aku.md b/docs/products/kafka/concepts/inkless-aku.md
index 339ec2ef5..2dc4748c4 100644
--- a/docs/products/kafka/concepts/inkless-aku.md
+++ b/docs/products/kafka/concepts/inkless-aku.md
@@ -1,6 +1,7 @@
 ---
 title: AKU plans and scaling
 ---
+import RelatedPages from "@site/src/components/RelatedPages";
 
 Inkless uses Aiven Kafka Units (AKUs) to size Apache Kafka services by throughput instead of hardware resources. An AKU represents the amount of traffic a service can handle. You estimate the expected
 throughput when creating the service. This estimate determines the initial AKU level and
 the scaling range.
@@ -9,8 +10,8 @@ the scaling range.
## How AKUs work - Each AKU corresponds to a specific throughput capacity. -- You set the initial AKU level by choosing the expected throughput during service - creation. +- The initial AKU level is derived from the expected throughput estimate provided during + service creation. - The service monitors throughput over time, not momentary spikes. - When throughput reaches the threshold for the current AKU level, the service scales up within your configured limits. @@ -55,8 +56,15 @@ AKU levels. Adjust your AKU limits when: - Workload throughput increases for sustained periods. -- Short-term traffic spikes are expected. +- Traffic spikes begin to persist for longer periods. - Reducing costs during low-traffic periods requires a lower maximum AKU. - The workload needs a guaranteed minimum level of throughput. For details on how AKU usage affects billing, see [Billing](/docs/products/kafka/concepts/inkless-billing). + + + + +- [Inkless overview](/docs/products/kafka/concepts/inkless-overview) +- [Billing for Inkless](/docs/products/kafka/concepts/inkless-billing) +- [Create a Kafka service](/docs/products/kafka/create-kafka-service) diff --git a/docs/products/kafka/concepts/inkless-billing.md b/docs/products/kafka/concepts/inkless-billing.md index 76522009d..0b76c94ca 100644 --- a/docs/products/kafka/concepts/inkless-billing.md +++ b/docs/products/kafka/concepts/inkless-billing.md @@ -1,9 +1,17 @@ --- -title: Billing +title: Inkless billing sidebar_label: Billing +description: Learn how billing works for Inkless Apache Kafka® on Aiven, including compute billed in AKUs, object storage costs, and topic ingress and egress charges. --- -Inkless uses a usage-based billing model. Charges are based on compute measured in Aiven Kafka Units (AKUs), storage, and data movement used by the service. +import RelatedPages from "@site/src/components/RelatedPages"; + +Inkless uses a usage-based billing model. 
+You are charged for: + +- **Compute**, measured in Aiven Kafka Units (AKUs) +- **Storage**, based on the amount of data retained in object storage +- **Data movement**, based on topic ingress and egress :::note Inkless BYOC deployments continue to use the existing plans-based pricing model. @@ -11,7 +19,7 @@ Inkless BYOC deployments continue to use the existing plans-based pricing model. ## AKU-hours -Compute charges are based on AKU-hours. +Compute charges are measured in AKU-hours. An AKU (Aiven Kafka Unit) represents the throughput capacity of the service. The service bills based on the number of AKUs in use over time, calculated in AKU-hours. When the @@ -40,5 +48,12 @@ Network charges apply to: Network usage is measured at the service level across all topics. :::note -Only topic ingress and egress are billed. Internal Kafka replication traffic is not billed. +Only data written to and read from Kafka topics is billed. +Data Kafka replicates between brokers for fault tolerance is not billed. ::: + + + +- [Inkless overview](/docs/products/kafka/concepts/inkless-overview) +- [AKU plans and scaling](/docs/products/kafka/concepts/inkless-aku) +- [Create a Kafka service](/docs/products/kafka/create-kafka-service) diff --git a/docs/products/kafka/concepts/inkless.md b/docs/products/kafka/concepts/inkless.md index ee0db9265..1873897b3 100644 --- a/docs/products/kafka/concepts/inkless.md +++ b/docs/products/kafka/concepts/inkless.md @@ -3,6 +3,8 @@ title: Inkless overview sidebar_label: Overview --- +import RelatedPages from "@site/src/components/RelatedPages"; + Inkless is Aiven’s cloud-native Apache Kafka® service that modernizes Kafka with diskless topics and object storage for data retention. It reduces operational overhead while preserving full compatibility with existing Kafka clients. @@ -47,3 +49,10 @@ Existing Classic Kafka services continue to run unchanged. You cannot upgrade or migrate an existing Classic Kafka service to Inkless. 
Service type is fixed at creation. To use Inkless, create a new Kafka service. + + + +- [Create a Kafka service](/docs/products/kafka/create-kafka-service) +- [Diskless topics overview](/docs/products/kafka/diskless/concepts/diskless-overview) +- [AKU plans and scaling](/docs/products/kafka/concepts/inkless-aku) +- [Billing for Inkless](/docs/products/kafka/concepts/inkless-billing) From 927d5e8f92f8735b4c42e3832faf35749f98424c Mon Sep 17 00:00:00 2001 From: Harshini Rangaswamy Date: Mon, 15 Dec 2025 14:55:23 +0100 Subject: [PATCH 7/9] update: cross-links --- docs/products/kafka/concepts/inkless.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/products/kafka/concepts/inkless.md b/docs/products/kafka/concepts/inkless.md index 1873897b3..a096983da 100644 --- a/docs/products/kafka/concepts/inkless.md +++ b/docs/products/kafka/concepts/inkless.md @@ -48,7 +48,8 @@ only for customers who already run Classic services. Existing Classic Kafka services continue to run unchanged. You cannot upgrade or migrate an existing Classic Kafka service to Inkless. -Service type is fixed at creation. To use Inkless, create a new Kafka service. +Service type is fixed at creation. To use Inkless, create a Kafka service and select +Inkless as the service type. 
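The AKU model these patches document — an initial level derived from a throughput estimate, then scaling only on sustained load within configured minimum and maximum AKUs — can be sketched roughly as follows. The per-AKU capacity figure, window semantics, and default range are illustrative assumptions for the sketch, not values published by Aiven.

```python
import math

# Hypothetical throughput capacity per AKU (MB/s, ingress + egress combined).
# The real per-AKU figure is set by Aiven and is not stated in these docs.
AKU_CAPACITY_MBPS = 20.0

def initial_akus(ingress_mbps: float, egress_mbps: float,
                 min_akus: int = 1, max_akus: int = 30) -> int:
    """Derive the initial AKU level from the expected-throughput estimate
    given at service creation, clamped to the autoscaling range.
    Both ingress and egress count toward AKU usage."""
    needed = math.ceil((ingress_mbps + egress_mbps) / AKU_CAPACITY_MBPS)
    return max(min_akus, min(needed, max_akus))

def autoscale(current_akus: int, window_mbps: list[float],
              min_akus: int, max_akus: int,
              low_water: float = 0.5) -> int:
    """Scale only on sustained load: every sample in the monitoring window
    must exceed the current capacity (scale up) or sit below the low-water
    mark (scale down). A momentary spike or dip changes nothing."""
    limit = current_akus * AKU_CAPACITY_MBPS
    if all(s > limit for s in window_mbps):
        return min(current_akus + 1, max_akus)   # sustained overload
    if all(s < limit * low_water for s in window_mbps):
        return max(current_akus - 1, min_akus)   # sustained low usage
    return current_akus                          # mixed window: no change
```

Under these assumptions, an estimate of 30 MB/s ingress and 50 MB/s egress would start the service at 4 AKUs, and a single spike above the 2-AKU threshold in an otherwise quiet window would not trigger a scale-up.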
From 629dd1cd63cf39dfedd45acdf7191800ea38242a Mon Sep 17 00:00:00 2001 From: Harshini Rangaswamy Date: Mon, 15 Dec 2025 15:22:33 +0100 Subject: [PATCH 8/9] update: fix links --- docs/products/kafka/concepts/inkless-aku.md | 2 +- docs/products/kafka/concepts/inkless-billing.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/products/kafka/concepts/inkless-aku.md b/docs/products/kafka/concepts/inkless-aku.md index 2dc4748c4..e8fb94518 100644 --- a/docs/products/kafka/concepts/inkless-aku.md +++ b/docs/products/kafka/concepts/inkless-aku.md @@ -65,6 +65,6 @@ For details on how AKU usage affects billing, see [Billing](/docs/products/kafka -- [Inkless overview](/docs/products/kafka/concepts/inkless-overview) +- [Inkless overview](/docs/products/kafka/concepts/inkless) - [Billing for Inkless](/docs/products/kafka/concepts/inkless-billing) - [Create a Kafka service](/docs/products/kafka/create-kafka-service) diff --git a/docs/products/kafka/concepts/inkless-billing.md b/docs/products/kafka/concepts/inkless-billing.md index 0b76c94ca..65b982544 100644 --- a/docs/products/kafka/concepts/inkless-billing.md +++ b/docs/products/kafka/concepts/inkless-billing.md @@ -54,6 +54,6 @@ Data Kafka replicates between brokers for fault tolerance is not billed. 
-- [Inkless overview](/docs/products/kafka/concepts/inkless-overview) +- [Inkless overview](/docs/products/kafka/concepts/inkless) - [AKU plans and scaling](/docs/products/kafka/concepts/inkless-aku) - [Create a Kafka service](/docs/products/kafka/create-kafka-service) From 4ec005836f6270caa0ee65021bd8aea78c7bfd1b Mon Sep 17 00:00:00 2001 From: Harshini Rangaswamy Date: Tue, 16 Dec 2025 10:12:04 +0100 Subject: [PATCH 9/9] update: feedback --- docs/products/kafka/concepts/inkless-aku.md | 16 +++++++++------- .../products/kafka/concepts/inkless-billing.md | 6 ++---- docs/products/kafka/concepts/inkless.md | 18 +++++++++++++++--- docs/products/kafka/create-kafka-service.md | 4 ++-- 4 files changed, 28 insertions(+), 16 deletions(-) diff --git a/docs/products/kafka/concepts/inkless-aku.md b/docs/products/kafka/concepts/inkless-aku.md index e8fb94518..c083f8efb 100644 --- a/docs/products/kafka/concepts/inkless-aku.md +++ b/docs/products/kafka/concepts/inkless-aku.md @@ -3,18 +3,20 @@ title: AKU plans and scaling --- import RelatedPages from "@site/src/components/RelatedPages"; -Inkless uses Aiven Kafka Units (AKUs) to size Apache Kafka services by throughput instead of hardware resources. An AKU represents the amount of traffic a service can handle. You estimate the expected +Inkless uses Aiven Kafka Units (AKUs) to help you size Apache Kafka services based on throughput instead of hardware resources. +An AKU represents the amount of traffic a service can handle. You estimate the expected throughput when creating the service. This estimate determines the initial AKU level and the scaling range. ## How AKUs work -- Each AKU corresponds to a specific throughput capacity. +- Each AKU corresponds to a specific throughput capacity. It represents the compute and + memory resources required to meet that throughput. - The initial AKU level is derived from the expected throughput estimate provided during service creation. 
-- The service monitors throughput over time, not momentary spikes. -- When throughput reaches the threshold for the current AKU level, the service scales up - within your configured limits. +- The service monitors throughput over time. +- When throughput remains above the threshold for the current AKU level for a period of + time, the service scales up within your configured limits. - When throughput remains low for a sustained period, the service scales down. Scaling changes the number of AKUs in use, which affects AKU-hour billing. Scaling @@ -27,8 +29,8 @@ Inkless measures two types of traffic: - **Ingress:** Data written to topics by producers. - **Egress:** Data read from topics by consumers, connectors, and mirroring processes. -Both ingress and egress contribute to AKU usage. You can track ingress and egress usage -in the Service utilisation view, which also shows the AKU thresholds. +Both ingress and egress affect the number of AKUs required. You can track ingress and +egress usage in the Service utilisation view, which also shows the AKU thresholds. ## Autoscaling limits diff --git a/docs/products/kafka/concepts/inkless-billing.md b/docs/products/kafka/concepts/inkless-billing.md index 65b982544..64e4c9ee7 100644 --- a/docs/products/kafka/concepts/inkless-billing.md +++ b/docs/products/kafka/concepts/inkless-billing.md @@ -33,10 +33,8 @@ For details on how scaling works, see Storage charges are based on the amount of data retained in object storage. - Diskless topics store all retained data in object storage. -- Classic topics keep a short amount of recent data on local disk before offloading older - data to object storage. - -Local disk used by brokers is not billed. +- Classic topics keep some recent data on local disk before offloading it to + object storage. 
 ## Network usage
 
diff --git a/docs/products/kafka/concepts/inkless.md b/docs/products/kafka/concepts/inkless.md
index a096983da..4e724f99a 100644
--- a/docs/products/kafka/concepts/inkless.md
+++ b/docs/products/kafka/concepts/inkless.md
@@ -30,7 +30,6 @@ Inkless changes how Kafka services are sized, stored, and managed:
   expanding over time.
 - **Diskless topics:** Diskless topics are available only in Inkless services.
 
-
 ## When to use Inkless
 
 Use Inkless when:
@@ -43,14 +42,23 @@ Use Inkless when:
 Classic Kafka remains available for existing deployments and appears in the Aiven Console
 only for customers who already run Classic services.
 
+## Inkless capabilities
+
+Inkless supports:
+
+- High-throughput workloads by reducing cross-availability zone network traffic with diskless topics.
+- Workloads with fluctuating throughput through autoscaling.
+- Independent scaling of storage and compute.
+- Diskless topics for long-term retention and large datasets.
+- A simplified, throughput-based capacity model without hardware planning.
+
 ## Existing Classic Kafka services
 
 Existing Classic Kafka services continue to run unchanged.
-You cannot upgrade or migrate an existing Classic Kafka service to Inkless.
+Upgrading or migrating an existing Classic Kafka service to Inkless is not supported at this time.
 
 Service type is fixed at creation. To use Inkless, create a Kafka service and select
 Inkless as the service type.
 
-
 - [Create a Kafka service](/docs/products/kafka/create-kafka-service)
diff --git a/docs/products/kafka/create-kafka-service.md b/docs/products/kafka/create-kafka-service.md
index e558f087b..408583f47 100644
--- a/docs/products/kafka/create-kafka-service.md
+++ b/docs/products/kafka/create-kafka-service.md
@@ -74,8 +74,8 @@ Kafka 4.x and enables diskless topics and tiered storage by default.
 1. 
Select a **cloud provider** and **region**. 1. In **Stream load**, estimate the expected ingress and egress throughput. - This estimate is used for initial AKU sizing and cost estimation and can be - changed later. + This estimate is used to determine the initial number of AKUs and estimate costs, and + it can be adjusted later. 1. In **Retention**, enter the data retention period. Retention is used to estimate storage costs and can be adjusted after service creation. 1. In **Service basics**, enter: