7 changes: 5 additions & 2 deletions .github/vale/styles/config/vocabularies/Aiven/accept.txt
Original file line number Diff line number Diff line change
@@ -1,11 +1,15 @@
188
ACL
ACLs
+ACU
+ACUs
Addons
africa
AIInsights
Aiven
Aiven's
+AKU
+AKUs
allowlist
allowlists
Altinity
@@ -140,7 +144,6 @@ GitHub
go
Google Cloud Platform
-google_columnar_engine_enabled
google_columnar_engine_enabled
google_columnar_engine_memory_size_percentage
Gzipped
gzipped
@@ -169,6 +172,7 @@ hypertables
IdP
IdPs
InfluxDB
+Inkless
InnoDB
inodes
Instana
@@ -259,7 +263,6 @@ pg_dump
pgAdmin
PGAudit
PgBouncer
-pg_dump
PGHoard
pglookout
pgoutput
72 changes: 72 additions & 0 deletions docs/products/kafka/concepts/inkless-aku.md
@@ -0,0 +1,72 @@
---
title: AKU plans and scaling
---
import RelatedPages from "@site/src/components/RelatedPages";

Inkless uses Aiven Kafka Units (AKUs) to help you size Apache Kafka services based on throughput instead of hardware resources.
An AKU represents the amount of traffic a service can handle. You estimate the expected
throughput when creating the service. This estimate determines the initial AKU level and
the scaling range.

## How AKUs work

- Each AKU corresponds to a specific throughput capacity. It represents the compute and
memory resources required to meet that throughput.
- The initial AKU level is derived from the expected throughput estimate provided during
service creation.
- The service monitors throughput over time.
- When throughput remains above the threshold for the current AKU level for a period of
time, the service scales up within your configured limits.
- When throughput remains low for a sustained period, the service scales down.

Scaling changes the number of AKUs in use, which affects AKU-hour billing. Scaling
actions do not affect topic configuration or data retention.
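The scale-up and scale-down behavior described above can be sketched as a sustained-threshold check. This is a minimal illustration: the per-AKU capacity (`aku_capacity_mbps`), the five-sample window, and the one-step scaling are hypothetical assumptions, not Aiven's actual thresholds or evaluation periods.

```python
from collections import deque


class AkuAutoscaler:
    """Sketch of sustained-threshold AKU scaling within configured limits.

    All numbers here are illustrative placeholders, not Aiven's real values.
    """

    def __init__(self, min_aku, max_aku, aku_capacity_mbps=50.0, window=5):
        self.min_aku = min_aku
        self.max_aku = max_aku
        self.aku_capacity_mbps = aku_capacity_mbps  # hypothetical per-AKU throughput
        self.samples = deque(maxlen=window)         # recent throughput samples

    def observe(self, current_aku, throughput_mbps):
        """Record one sample and return the (possibly unchanged) AKU level."""
        self.samples.append(throughput_mbps)
        if len(self.samples) < self.samples.maxlen:
            return current_aku  # not enough history to call the load "sustained"
        capacity = current_aku * self.aku_capacity_mbps
        if all(s > capacity for s in self.samples):
            self.samples.clear()  # reset the window after a scaling action
            return min(current_aku + 1, self.max_aku)  # sustained overload: up
        lower = (current_aku - 1) * self.aku_capacity_mbps
        if all(s < lower for s in self.samples):
            self.samples.clear()
            return max(current_aku - 1, self.min_aku)  # sustained low load: down
        return current_aku
```

A service at 1 AKU seeing a sustained 60 MB/s would step up to 2 AKUs only after the whole window exceeds capacity, which mirrors the "sustained period" behavior described above.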

## Throughput measurement

Inkless measures two types of traffic:

- **Ingress:** Data written to topics by producers.
- **Egress:** Data read from topics by consumers, connectors, and mirroring processes.

Both ingress and egress affect the number of AKUs required. You can track ingress and
egress usage in the Service utilisation view, which also shows the AKU thresholds.
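Since both directions count toward AKU demand, combined ingress and egress can be converted into a rough AKU estimate. The `aku_capacity_mbps` figure below is a hypothetical assumption for illustration; the real thresholds are the ones shown in the Service utilisation view.

```python
import math


def required_akus(ingress_mbps, egress_mbps, aku_capacity_mbps=50.0):
    """Estimate the AKUs needed for combined producer and consumer traffic.

    aku_capacity_mbps is a placeholder per-AKU throughput, not a real figure.
    """
    total = ingress_mbps + egress_mbps  # both directions count toward demand
    return max(1, math.ceil(total / aku_capacity_mbps))
```

For example, 40 MB/s of ingress and 85 MB/s of egress would need 3 AKUs under this assumed capacity.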

## Autoscaling limits

Depending on your cloud provider and account, you can configure:

- **Minimum AKUs:** The lowest capacity the service can scale down to.
- **Maximum AKUs:** The highest capacity the service can scale up to.

Inkless scales automatically within these limits. Scaling occurs only when
throughput remains above or below a threshold for a sustained period.

## Storage and AKUs

Storage does not directly influence AKU scaling:

- Diskless topics write directly to object storage.
- Classic topics use local disk for recent data and move older segments to object storage
through tiered storage.

Storage and compute scale independently, so you can adjust retention without changing
AKU levels.

> **Reviewer comment (Contributor):** Let's confirm: it will have some effect. For example, a high-throughput topic tiers a lot more data, which takes CPU.

## When to adjust AKU ranges

Adjust your AKU limits when:

- Workload throughput increases for sustained periods.
- Traffic spikes begin to persist for longer periods.
- You want to lower the maximum AKU to reduce costs during low-traffic periods.
- The workload needs a guaranteed minimum level of throughput.

For details on how AKU usage affects billing, see [Billing](/docs/products/kafka/concepts/inkless-billing).


<RelatedPages />

- [Inkless overview](/docs/products/kafka/concepts/inkless)
- [Billing for Inkless](/docs/products/kafka/concepts/inkless-billing)
- [Create a Kafka service](/docs/products/kafka/create-kafka-service)
57 changes: 57 additions & 0 deletions docs/products/kafka/concepts/inkless-billing.md
@@ -0,0 +1,57 @@
---
title: Inkless billing
sidebar_label: Billing
description: Learn how billing works for Inkless Apache Kafka® on Aiven, including compute billed in AKUs, object storage costs, and topic ingress and egress charges.
---

import RelatedPages from "@site/src/components/RelatedPages";

Inkless uses a usage-based billing model.
You are charged for:

- **Compute**, measured in Aiven Kafka Units (AKUs)
- **Storage**, based on the amount of data retained in object storage
- **Data movement**, based on topic ingress and egress

:::note
Inkless BYOC deployments continue to use the existing plans-based pricing model.
:::

## AKU-hours

Compute charges are measured in AKU-hours.

An AKU (Aiven Kafka Unit) represents the throughput capacity of the service. The service
bills based on the number of AKUs in use over time, calculated in AKU-hours. When the
service scales up or down, the AKU-hour charge updates to match the current AKU level.

For details on how scaling works, see
[AKU plans and scaling](/docs/products/kafka/concepts/inkless-aku).
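The AKU-hour charge is simple arithmetic: multiply each AKU level by the hours the service spent at that level, then sum. A minimal sketch, with illustrative numbers only:

```python
def aku_hours(intervals):
    """Sum AKU-hours over (aku_level, hours) intervals.

    Example: 20 h at 2 AKUs, then a scale-up to 4 AKUs for 4 h,
    bills 2 * 20 + 4 * 4 = 56 AKU-hours.
    """
    return sum(level * hours for level, hours in intervals)
```

This is why scaling down during quiet periods lowers the compute charge: the lower AKU level applies for every hour it is in effect.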

## Storage

Storage charges are based on the amount of data retained in object storage.

- Diskless topics store all retained data in object storage.
- Classic topics keep some recent data on local disk before offloading it to
object storage.

## Network usage

Network charges apply to:

- **Ingress:** Data written to topics
- **Egress:** Data read by consumers, connectors, or mirroring processes

Network usage is measured at the service level across all topics.

> **Reviewer comment (Contributor):** This will likely be split by topic type.

:::note
Only data written to and read from Kafka topics is billed.
Data that Kafka replicates between brokers for fault tolerance is not billed.
:::
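The three billing components can be combined into a rough monthly estimate. All rates below are placeholder values, not Aiven's actual pricing; note that broker-to-broker replication traffic is excluded because only client reads and writes are billed.

```python
def monthly_cost(aku_hours, storage_gb_months, ingress_gb, egress_gb,
                 aku_hour_rate, storage_rate, ingress_rate, egress_rate):
    """Combine the three Inkless billing components into one estimate.

    All rates are hypothetical placeholders for illustration only.
    """
    compute = aku_hours * aku_hour_rate            # AKU-hours
    storage = storage_gb_months * storage_rate     # object storage retained
    network = ingress_gb * ingress_rate + egress_gb * egress_rate
    return compute + storage + network
```

With made-up rates, a month of 720 AKU-hours, 100 GB-months of storage, 500 GB of ingress, and 1500 GB of egress produces one combined figure, which is how the compute, storage, and network lines roll up on an invoice.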

<RelatedPages />

- [Inkless overview](/docs/products/kafka/concepts/inkless)
- [AKU plans and scaling](/docs/products/kafka/concepts/inkless-aku)
- [Create a Kafka service](/docs/products/kafka/create-kafka-service)
71 changes: 71 additions & 0 deletions docs/products/kafka/concepts/inkless.md
@@ -0,0 +1,71 @@
---
title: Inkless overview
sidebar_label: Overview
---

import RelatedPages from "@site/src/components/RelatedPages";

Inkless is Aiven’s cloud-native Apache Kafka® service that modernizes Kafka with diskless topics and object storage for data retention.
It reduces operational overhead while preserving full compatibility with existing
Kafka clients.

Inkless runs on Kafka 4.x and uses Aiven Kafka Units (AKUs) to size services by throughput
instead of hardware plans. It supports both classic and diskless topics within the same
service.


## Key differences from classic Kafka

Inkless changes how Kafka services are sized, stored, and managed:

- **Throughput-based sizing:** Services use AKUs instead of hardware plans and scale
within defined limits as throughput changes.
- **Flexible storage:** Diskless topics store all data in object storage. Classic topics
use local disk with tiered storage enabled by default.
- **Managed configuration:** Broker-level settings are fixed to maintain service
stability and allow automatic scaling.
- **KRaft metadata management:** Inkless uses KRaft for metadata and consensus,
replacing ZooKeeper.
- **Cloud availability:** Inkless is available on selected cloud providers, with support
expanding over time.
- **Diskless topics:** Diskless topics are available only in Inkless services.

## When to use Inkless

Use Inkless when:

- Workload throughput fluctuates and requires autoscaling.
- Storage and compute must scale independently.
- Your use cases require diskless topics for long-term retention or large datasets.
- You need a simplified capacity model without hardware planning.


## Inkless capabilities

Inkless supports:

- High-throughput workloads by reducing cross-availability zone network traffic with diskless topics.
- Workloads with fluctuating throughput through autoscaling.
- Independent scaling of storage and compute.
- Diskless topics for long-term retention and large datasets.
- A simplified, throughput-based capacity model without hardware planning.


## Existing Classic Kafka services

Existing Classic Kafka services continue to run unchanged.

Classic Kafka remains available only for existing deployments and appears in the
Aiven Console only when a project already includes a Classic Kafka service.

Upgrading or migrating an existing Classic Kafka service to Inkless is not supported at this time.
Service type is fixed at creation. To use Inkless, create a Kafka service and select
Inkless as the service type.

<RelatedPages />

- [Create a Kafka service](/docs/products/kafka/create-kafka-service)
- [Diskless topics overview](/docs/products/kafka/diskless/concepts/diskless-overview)
- [AKU plans and scaling](/docs/products/kafka/concepts/inkless-aku)
- [Billing for Inkless](/docs/products/kafka/concepts/inkless-billing)