Helm chart for deploying TON Rust Node on Kubernetes.
Current status: Fullnode and validator images are published. For validator election management, see the nodectl Helm chart (alpha).
- Node roles
- Installation
- Quick start
- Providing configuration
- Parameters
- Architecture
- Configuration guide
- Useful commands
A TON node can run in two roles — validator or fullnode — using the same binary and the same chart. The difference is in configuration and how you use the node. The node supports both mainnet and testnet networks.
A validator participates in network consensus: it validates blocks, votes in elections, and earns rewards. Validators are currently supported on mainnet only — testnet validator support is not yet available. A validator is a critical infrastructure component, so:
- Never expose `liteserver` or `jsonRpc` ports on a validator. Every open port is an attack surface and adds unnecessary load to a machine that must stay performant and stable.
- Allocate more resources (see docs/resources.md for recommended values).
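A validator values file following these rules might look like the sketch below. The IP and resource figures are placeholders (see docs/resources.md for real recommendations); `liteserver` and `jsonRpc` are simply left unset, so those ports stay disabled:

```yaml
# validator-values.yaml (sketch; adjust resources and IPs for your cluster)
replicas: 1
resources:
  requests:
    cpu: 16            # placeholder, see docs/resources.md
    memory: 64Gi
  limits:
    cpu: 16
    memory: 64Gi
services:
  perReplica:
    - annotations:
        metallb.universe.tf/loadBalancerIPs: "203.0.113.10"   # placeholder IP
nodeConfigs:
  node-0.json: |
    { "log_config_name": "/main/logs.config.yml", ... }
```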
A fullnode syncs the blockchain and can serve queries to external clients. This is the role you want for building APIs, explorers, bots, and any other integration. Enable the `liteserver` and/or `jsonRpc` ports to expose the node's data. A fullnode can be:
- Regular — keeps only recent state. Suitable for most API use cases.
- Archival — stores the full history of the blockchain. Requires significantly more disk space. See docs/node-config.md for the relevant config fields.
We recommend running validators and fullnodes as separate Helm releases so they have independent configs, resources, and lifecycle:
```shell
helm install validator ./helm/ton-rust-node -f validator-values.yaml
helm install fullnode ./helm/ton-rust-node -f fullnode-values.yaml
```

```shell
# From local chart
helm install my-node ./helm/ton-rust-node -f values.yaml

# From OCI registry
helm install my-node oci://ghcr.io/rsquad/ton-rust-node/helm/node -f values.yaml
```

Minimal deployment:
```yaml
# values.yaml
replicas: 2
services:
  perReplica:
    - annotations:
        metallb.universe.tf/loadBalancerIPs: "1.2.3.4"
    - annotations:
        metallb.universe.tf/loadBalancerIPs: "5.6.7.8"
nodeConfigs:
  node-0.json: |
    { "log_config_name": "/main/logs.config.yml", ... }
  node-1.json: |
    { "log_config_name": "/main/logs.config.yml", ... }
```

The chart ships with a mainnet `globalConfig` and a sensible `logsConfig` by default — you only need `nodeConfigs` and service annotations. The examples above use MetalLB — see docs/networking.md for other options (NodePort, hostNetwork, ingress-nginx).
```shell
helm install my-validator ./helm/ton-rust-node -f values.yaml
```

With liteserver and JSON-RPC ports (2 replicas):
```yaml
replicas: 2
ports:
  liteserver: 40000
  jsonRpc: 8081
services:
  perReplica:
    - annotations:
        metallb.universe.tf/loadBalancerIPs: "10.0.0.1"
    - annotations:
        metallb.universe.tf/loadBalancerIPs: "10.0.0.2"
nodeConfigs:
  node-0.json: |
    { "log_config_name": "/main/logs.config.yml", ... }
  node-1.json: |
    { "log_config_name": "/main/logs.config.yml", ... }
```

Multiple nodes in the same namespace — use different release names:
```shell
helm install validator ./helm/ton-rust-node -f validator-values.yaml
helm install lite ./helm/ton-rust-node -f lite-values.yaml
```

This creates separate StatefulSets (`validator`, `lite`), services (`validator-0`, `lite-0`), and configs.
Every config (global config, logs config, node configs, basestate, zerostate) supports three modes:
Pass the content directly:
```yaml
globalConfig: |
  {"dht": {"nodes": [...]}}
logsConfig: |
  refresh_rate: 30 seconds
  ...
nodeConfigs:
  node-0.json: |
    {"log_config_name": "/main/logs.config.yml", ...}
  node-1.json: |
    {"log_config_name": "/main/logs.config.yml", ...}
```

Keep configs as separate files on disk and let Helm read them:
```shell
helm install my-node ./helm/ton-rust-node \
  --set-file globalConfig=./global.config.json \
  --set-file logsConfig=./logs.config.yml \
  --set-file nodeConfigs.node-0\\.json=./configs/node-0.json \
  --set-file nodeConfigs.node-1\\.json=./configs/node-1.json \
  -f values.yaml
```

Note the escaped dot (`\\.`) in the `node-0.json` keys — required by Helm's `--set` parser.
Point to ConfigMaps/Secrets that already exist in the cluster:
```yaml
existingGlobalConfigMapName: my-global-config
existingLogsConfigMapName: my-logs-config
existingNodeConfigsSecretName: my-node-secrets
existingBasestateConfigMapName: my-basestate
existingZerostateConfigMapName: my-zerostate
```

When an `existing*Name` is set, the chart does not create that resource — it only references it in the StatefulSet volumes. The inline value (e.g. `globalConfig`) is ignored.
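For this mode the referenced resource must already contain the key the init container expects. A minimal ConfigMap for the global config could look like the following sketch (the `global.config.json` key name matches what the seed volume table below describes; the DHT content is a placeholder):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-global-config
data:
  global.config.json: |
    {"dht": {"nodes": []}}
```

Then set `existingGlobalConfigMapName: my-global-config` in your values.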
Why no file path option? Helm's `.Files.Get` can only read files bundled inside the chart package — it cannot access files from your filesystem at install time. That's why we offer three modes instead of a simple file path. If you prefer to keep configs as local files, use `--set-file` (mode 2) or clone the chart and place files inside the chart directory.
Do not edit the Parameters section by hand. It is auto-generated from `@param` annotations in values.yaml. To make changes, edit `values.yaml` and regenerate — see docs/maintaining.md.
| Name | Description | Value |
|---|---|---|
| `replicas` | Number of node instances in the StatefulSet. Each replica is an independent TON node with its own config, keys, and IP — not replication for redundancy. You need a matching `nodeConfigs` entry (`node-N.json`) and a `perReplica` service entry for each replica. | `1` |
| `command` | Override container command. Auto-detected: adds `-z /main/static` when zerostate+basestate are provided. Change only if you know what you are doing. | `[]` |
| Name | Description | Value |
|---|---|---|
| `image.repository` | Container image repository | `ghcr.io/rsquad/ton-rust-node/node` |
| `image.tag` | Image tag | `v0.3.0` |
| `image.pullPolicy` | Pull policy | `IfNotPresent` |
| `imagePullSecrets` | Image pull secrets for private registries | `[]` |
| Name | Description | Value |
|---|---|---|
| `initImage.repository` | Init container image repository | `alpine` |
| `initImage.tag` | Init container image tag | `3.21` |
| `initImage.pullPolicy` | Init container pull policy | `IfNotPresent` |
| Name | Description | Value |
|---|---|---|
| `extraInitContainers` | Additional init containers to run before the node starts. Runs after the built-in init-bootstrap container. Useful for downloading global config, fetching external IP, etc. | `[]` |
| Name | Description | Value |
|---|---|---|
| `extraContainers` | Additional sidecar containers to run alongside the node. Useful for monitoring agents, log shippers, etc. | `[]` |
| Name | Description | Value |
|---|---|---|
| `extraVolumes` | Additional volumes for the pod. Use with `extraVolumeMounts` to mount them into containers. | `[]` |
| `extraVolumeMounts` | Additional volume mounts for the main node container. | `[]` |
| Name | Description | Value |
|---|---|---|
| `resources.requests.cpu` | CPU request | `8` |
| `resources.requests.memory` | Memory request | `32Gi` |
| `resources.limits.cpu` | CPU limit | `16` |
| `resources.limits.memory` | Memory limit | `64Gi` |
| Name | Description | Value |
|---|---|---|
| `storage.main.size` | Main volume size | `1Gi` |
| `storage.main.storageClassName` | Storage class for main volume | `local-path` |
| `storage.main.resourcePolicy` | Value for the `helm.sh/resource-policy` annotation. Set to `keep` to prevent PVC deletion on `helm uninstall`. Set to empty string to omit. | `keep` |
| `storage.main.annotations` | Extra annotations for the main PVC | `{}` |
| `storage.db.size` | Database volume size (hundreds of GB for mainnet) | `1Ti` |
| `storage.db.storageClassName` | Storage class for database volume | `local-path` |
| `storage.db.resourcePolicy` | Value for the `helm.sh/resource-policy` annotation on the db PVC | `""` |
| `storage.db.annotations` | Extra annotations for the db PVC | `{}` |
| `storage.logs.enabled` | Create a PVC for logs. Set to `false` if you log to stdout only. | `true` |
| `storage.logs.size` | Logs volume size | `150Gi` |
| `storage.logs.storageClassName` | Storage class for logs volume | `local-path` |
| `storage.logs.resourcePolicy` | Value for the `helm.sh/resource-policy` annotation on the logs PVC | `""` |
| `storage.logs.annotations` | Extra annotations for the logs PVC | `{}` |
| `storage.keys.size` | Keys volume size | `1Gi` |
| `storage.keys.storageClassName` | Storage class for keys volume | `local-path` |
| `storage.keys.resourcePolicy` | Value for the `helm.sh/resource-policy` annotation on the keys PVC | `keep` |
| `storage.keys.annotations` | Extra annotations for the keys PVC | `{}` |
| Name | Description | Value |
|---|---|---|
| `ports.adnl` | ADNL port (UDP) | `30303` |
| `ports.simplex` | Simplex consensus port (UDP). Only needed for validators after switching to simplex consensus. `false`/`null` = disabled (default), `true` = adnl + 1000, number = explicit port. | `false` |
| `ports.control` | Control port (TCP). Set to `null` to disable. | `50000` |
| `ports.liteserver` | Liteserver port (TCP). Set to enable. | `nil` |
| `ports.jsonRpc` | JSON-RPC port (TCP). Set to enable. | `nil` |
| `ports.metrics` | Metrics/probes HTTP port (TCP). Serves `/metrics`, `/healthz`, `/readyz`. Set to enable. | `nil` |
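The `ports.simplex` shorthand can be spelled out with a small values fragment (the numbers here are the chart defaults):

```yaml
ports:
  adnl: 30303
  simplex: true   # true resolves to adnl + 1000, i.e. 31303
```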
| Name | Description | Value |
|---|---|---|
| `services.adnl.type` | ADNL service type | `LoadBalancer` |
| `services.adnl.externalTrafficPolicy` | ADNL service traffic policy | `Local` |
| `services.adnl.labels` | Extra labels applied to all ADNL per-replica services | `{}` |
| `services.adnl.annotations` | Annotations applied to all ADNL per-replica services | `{}` |
| `services.adnl.perReplica` | Per-replica ADNL service overrides (list index = replica index) | `[]` |
| `services.simplex.type` | Simplex service type | `LoadBalancer` |
| `services.simplex.externalTrafficPolicy` | Simplex service traffic policy | `Local` |
| `services.simplex.labels` | Extra labels for simplex services | `{}` |
| `services.control.type` | Control service type | `ClusterIP` |
| `services.control.labels` | Extra labels for control services | `{}` |
| `services.liteserver.type` | Liteserver service type | `LoadBalancer` |
| `services.liteserver.externalTrafficPolicy` | Liteserver service traffic policy | `Local` |
| `services.liteserver.labels` | Extra labels for liteserver services | `{}` |
| `services.jsonRpc.type` | JSON-RPC service type | `LoadBalancer` |
| `services.jsonRpc.externalTrafficPolicy` | JSON-RPC service traffic policy | `Local` |
| `services.jsonRpc.labels` | Extra labels for JSON-RPC services | `{}` |
| Name | Description | Value |
|---|---|---|
| `nodeConfigs` | Per-node JSON configs (one `node-N.json` per replica). See docs/node-config.md. | `{}` |
| `existingNodeConfigsSecretName` | Use an existing Secret for node configs instead of inline | `""` |
| `globalConfig` | Global TON network config (JSON string). A mainnet default is bundled in `files/global.config.json`. See docs/global-config.md. | bundled mainnet |
| `existingGlobalConfigMapName` | Use an existing ConfigMap for global config instead of inline | `""` |
| `logsConfig` | Logging configuration (log4rs YAML). A default is bundled in `files/logs.config.yml`. See docs/logging.md. | bundled default |
| `existingLogsConfigMapName` | Use an existing ConfigMap for logs config instead of inline | `""` |
| `basestate` | Base64-encoded `basestate.boc`. Only needed when bootstrapping a brand new network. | `""` |
| `existingBasestateConfigMapName` | Use an existing ConfigMap for basestate | `""` |
| `zerostate` | Base64-encoded `zerostate.boc`. Only needed when bootstrapping a brand new network. | `""` |
| `existingZerostateConfigMapName` | Use an existing ConfigMap for zerostate | `""` |
| Name | Description | Value |
|---|---|---|
| `probes` | Liveness, readiness, and startup probes. Requires `ports.metrics` to be set (the node serves `/healthz` and `/readyz` on the metrics port). See docs/probes.md. Disabled by default. | `{}` |
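Because `probes` is a free-form map, a configuration along the following lines is plausible — but the key names under `probes` are an assumption in this sketch, and the port number is a placeholder; check docs/probes.md for the exact supported shape:

```yaml
ports:
  metrics: 9100        # placeholder port number
probes:                # key names assumed; see docs/probes.md
  liveness:
    httpGet:
      path: /healthz
      port: 9100
  readiness:
    httpGet:
      path: /readyz
      port: 9100
```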
| Name | Description | Value |
|---|---|---|
| `podAnnotations` | Additional annotations for pods. Useful for Vault agent injection, service mesh, etc. | `{}` |
| `podLabels` | Additional labels for pods. Useful for cost allocation, policy enforcement, etc. | `{}` |
| Name | Description | Value |
|---|---|---|
| `extraEnv` | Additional environment variables for the main node container. Supports Downward API, ConfigMap/Secret refs, etc. | `[]` |
| `extraEnvFrom` | Additional envFrom sources for the main node container. Inject all keys from a Secret or ConfigMap as environment variables. | `[]` |
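As an illustration of the Downward API support, the pod's own IP can be injected as an environment variable (the variable name and the Secret name here are chosen for the example):

```yaml
extraEnv:
  - name: POD_IP                 # illustrative name
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
extraEnvFrom:
  - secretRef:
      name: my-node-env          # hypothetical Secret with extra variables
```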
| Name | Description | Value |
|---|---|---|
| `vault.url` | Vault URL (plain text). Example: `file:///keys/vault.json?master_key=<hex>` | `""` |
| `vault.secretName` | Name of an existing Secret containing the vault URL. Takes precedence over `vault.url`. | `""` |
| `vault.secretKey` | Key inside the Secret that holds the vault URL. | `VAULT_URL` |
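To keep the master key out of your values file, a Secret like the following can hold the URL (the Secret name is an example; the `VAULT_URL` key matches the default `vault.secretKey`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-vault-secret
stringData:
  VAULT_URL: "file:///keys/vault.json?master_key=<hex>"
```

Then set `vault.secretName: my-vault-secret` in your values.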
| Name | Description | Value |
|---|---|---|
| `hostNetwork` | Bind pods directly to the host network. The pod gets the node's IP with zero NAT overhead. Requires one pod per node — use nodeSelector or podAntiAffinity to spread replicas. See docs/networking.md. | `false` |
| `hostPort.adnl` | Expose the ADNL port on the host IP via hostPort | `false` |
| `hostPort.simplex` | Expose the simplex port on the host IP via hostPort | `false` |
| `hostPort.control` | Expose the control port on the host IP via hostPort | `false` |
| `hostPort.liteserver` | Expose the liteserver port on the host IP via hostPort | `false` |
| `hostPort.jsonRpc` | Expose the JSON-RPC port on the host IP via hostPort | `false` |
| `hostPort.metrics` | Expose the metrics port on the host IP via hostPort | `false` |
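Since `hostNetwork` allows only one pod per machine, pairing it with standard Kubernetes pod anti-affinity keeps replicas on separate nodes. A sketch (the label selector assumes the `app.kubernetes.io/name=node` label used in the Useful commands section):

```yaml
hostNetwork: true
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: node   # assumed pod label
        topologyKey: kubernetes.io/hostname
```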
| Name | Description | Value |
|---|---|---|
| `networkPolicy.enabled` | Create a NetworkPolicy | `false` |
| `networkPolicy.adnl.allowFrom` | ADNL ingress sources. Default allows all traffic (0.0.0.0/0). Each entry is a raw NetworkPolicy `from` item. | `[]` |
| `networkPolicy.simplex.allowFrom` | Simplex ingress sources. Default allows all traffic (0.0.0.0/0). Each entry is a raw NetworkPolicy `from` item. | `[]` |
| `networkPolicy.control.enabled` | Create an ingress rule for the control port | `false` |
| `networkPolicy.control.allowFrom` | Control port ingress sources. If empty, allows all. | `[]` |
| `networkPolicy.liteserver.enabled` | Create an ingress rule for the liteserver port | `false` |
| `networkPolicy.liteserver.allowFrom` | Liteserver port ingress sources. If empty, allows all. | `[]` |
| `networkPolicy.jsonRpc.enabled` | Create an ingress rule for the JSON-RPC port | `false` |
| `networkPolicy.jsonRpc.allowFrom` | JSON-RPC port ingress sources. If empty, allows all. | `[]` |
| `networkPolicy.metrics.enabled` | Create an ingress rule for the metrics port | `false` |
| `networkPolicy.metrics.allowFrom` | Metrics port ingress sources. If empty, allows all. | `[]` |
| `networkPolicy.extraIngress` | Additional raw ingress rules appended to the policy. | `[]` |
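Since each `allowFrom` entry is a raw NetworkPolicy `from` item, restricting the control port to an internal network looks like this (the CIDR is a placeholder for your admin network):

```yaml
networkPolicy:
  enabled: true
  control:
    enabled: true
    allowFrom:
      - ipBlock:
          cidr: 10.0.0.0/8   # placeholder CIDR
```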
| Name | Description | Value |
|---|---|---|
| `serviceAccount.enabled` | Create a ServiceAccount for the pods | `false` |
| `serviceAccount.name` | ServiceAccount name. Defaults to the release fullname if not set. | `""` |
| `serviceAccount.annotations` | Annotations for the ServiceAccount (e.g. for Vault or cloud IAM role binding) | `{}` |
| Name | Description | Value |
|---|---|---|
| `nodeSelector` | Node selector for pod scheduling | `{}` |
| `tolerations` | Tolerations for pod scheduling | `[]` |
| `affinity` | Affinity rules for pod scheduling | `{}` |
| Name | Description | Value |
|---|---|---|
| `podDisruptionBudget.enabled` | Create a PodDisruptionBudget | `false` |
| `podDisruptionBudget.minAvailable` | Minimum available pods during disruption. Only one of minAvailable or maxUnavailable should be set. | `1` |
| Name | Description | Value |
|---|---|---|
| `debug.sleep` | Replace node with `sleep infinity` for debugging | `false` |
| `debug.securityContext` | Security context overrides for debugging (e.g. SYS_PTRACE) | `{}` |
| Name | Description | Value |
|---|---|---|
| `metrics.serviceMonitor.enabled` | Create a ServiceMonitor for kube-prometheus-stack (recommended) | `false` |
| `metrics.serviceMonitor.namespace` | Namespace for ServiceMonitor (defaults to release namespace) | `nil` |
| `metrics.serviceMonitor.interval` | Scrape interval (e.g. "30s"). Uses Prometheus default if null. | `nil` |
| `metrics.serviceMonitor.scrapeTimeout` | Scrape timeout. Uses Prometheus default if null. | `nil` |
| `metrics.serviceMonitor.labels` | Extra labels for ServiceMonitor (for Prometheus selector matching) | `{}` |
| `metrics.annotations.enabled` | Add prometheus.io annotations to the metrics ClusterIP service (alternative to ServiceMonitor) | `false` |
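Putting the metrics pieces together: enable the port, then a ServiceMonitor that your Prometheus instance selects. The port number and selector label below are placeholders for your environment:

```yaml
ports:
  metrics: 9100          # placeholder port
metrics:
  serviceMonitor:
    enabled: true
    interval: 30s
    labels:
      release: kube-prometheus-stack   # must match your Prometheus serviceMonitorSelector
```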
The chart creates:
- StatefulSet named after the release, with `podManagementPolicy: Parallel` and `fsGroup: 1000`
- One LoadBalancer Service per replica with `externalTrafficPolicy: Local` and optional static IPs
- Init container (`alpine:3.23`, pinned by digest) that seeds configs from volumes into the main PVC
- PersistentVolumeClaims: `main`, `db`, `keys`, and optionally `logs` (see `storage.logs.enabled`)
- ConfigMaps for global config, logs config, and optionally basestate/zerostate
- Secret for per-node JSON configs
All resource names are prefixed with the release name, allowing multiple installations in the same namespace.
The node uses persistent volumes:
| Volume | Mount path | Purpose | Optional |
|---|---|---|---|
| `main` | `/main` | Working directory: node config, global config, logs config, static files (basestate/zerostate hashes) | no |
| `db` | `/db` | Blockchain database (the largest volume, grows over time) | no |
| `logs` | `/logs` | Rolling log files (`output.log`, rotated by log4rs) | yes (`storage.logs.enabled`) |
| `keys` | `/keys` | Node keys and vault | no |
Important: Disk performance is critical for correct node operation. The `db` volume requires storage capable of sustaining up to 64,000 IOPS. Insufficient disk performance leads to sync delays, missed validations, and degraded node behavior. Use NVMe or high-performance SSD with a local volume provisioner.
The db and logs volumes are performance-critical — they handle continuous heavy I/O from the blockchain database and log writes. We strongly recommend using local storage that provides direct disk access: local-path, OpenEBS LVM, or similar local volume provisioners. Network-attached storage (NFS, Ceph RBD, EBS, etc.) adds latency that significantly impacts node performance and sync speed.
For main and keys volumes the I/O load is minimal — any storage provider will work. We recommend Longhorn v1 with replica count 3 for data safety. We have tested Longhorn v2 and do not recommend it at this time.
| Volume | Default size | Notes |
|---|---|---|
| `db` | `1Ti` | Not recommended to go below 500Gi. Grows over time as the blockchain state accumulates. |
| `logs` | `150Gi` | Default log rotation is configured for 25 GB per file with 4 rotations (see `logsConfig`). You can reduce the volume size if you adjust the rolling file limits accordingly. See docs/logging.md for details. |
| `main` | `1Gi` | Holds configs and static files. Default is sufficient. |
| `keys` | `1Gi` | Holds node keys and vault. Default is sufficient. |
Before the node starts, an init container (`alpine:3.23`, pinned by digest) runs a bootstrap script that prepares the `/main` volume:

- Copies `global.config.json` and `logs.config.yml` from seed ConfigMaps into `/main`
- If basestate/zerostate are provided — hashes them with SHA-256 and places them as `/main/static/{hash}.boc`
- Resolves the pod index from the pod name (e.g. `my-node-2` -> `2`) and copies `node-2.json` from the node-configs Secret as `/main/config.json`
- Sets ownership to UID `1000` (non-root app user)
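The steps above can be sketched in plain shell. This is a hypothetical illustration, not the chart's actual script; temporary directories stand in for the real `/seed` and `/main` mounts so the sketch runs anywhere:

```shell
#!/bin/sh
set -eu

# Stand-ins for the real /seed and /main mounts.
SEED=$(mktemp -d)
MAIN=$(mktemp -d)
mkdir -p "$MAIN/static"
printf 'fake boc bytes' > "$SEED/basestate.boc"

# Hash the state file and place it as /main/static/{hash}.boc
HASH=$(sha256sum "$SEED/basestate.boc" | cut -d' ' -f1)
cp "$SEED/basestate.boc" "$MAIN/static/$HASH.boc"

# Resolve the replica index from the pod name (my-node-2 -> 2)
POD_NAME="my-node-2"
IDX=${POD_NAME##*-}
echo "replica index: $IDX, config: node-$IDX.json"

# The real script would now copy node-$IDX.json from the Secret
# as /main/config.json and chown -R 1000:1000 the volume (needs root).
```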
Seed volumes (ConfigMaps/Secrets) are mounted read-only under /seed/:
| Seed volume | Mount path | Source |
|---|---|---|
| `global-config` | `/seed/global-config` | ConfigMap with `global.config.json` |
| `logs-config` | `/seed/logs-config` | ConfigMap with `logs.config.yml` |
| `node-configs` | `/seed/node-configs` | Secret with `node-{i}.json` per replica |
| `basestate` | `/seed/basestate` | ConfigMap with `basestate.boc` (optional) |
| `zerostate` | `/seed/zerostate` | ConfigMap with `zerostate.boc` (optional) |
The main container command is determined automatically:
| Condition | Command |
|---|---|
| `debug.sleep: true` | `sleep infinity` (busybox image) |
| `command` is set | custom command |
| basestate + zerostate provided | `node -c /main -z /main/static` |
| default | `node -c /main` |
A SHA-256 checksum of all inline configs is stored in the pod annotation `rsquad.io/config-checksum`. Any config change triggers a pod restart.
This chart does not generate node configs — you must prepare them yourself. The node uses three config files:
| Config | Description | Default | Reference |
|---|---|---|---|
| `globalConfig` | TON network config (DHT nodes, network ID, etc.) | mainnet (bundled) | docs/global-config.md |
| `logsConfig` | log4rs logging config (appenders, levels, rotation) | bundled | docs/logging.md |
| `nodeConfigs` | Per-node config (IP, ports, keys, paths) — one `node-N.json` per replica | none, required | docs/node-config.md |
Sensible defaults for `globalConfig` (mainnet, from ton-blockchain.github.io) and `logsConfig` are bundled in the chart and used automatically. It is strongly recommended to provide your own up-to-date `globalConfig`. `nodeConfigs` has no default — the chart will fail with a clear error if it is not provided.
See also docs/resources.md for CPU and memory recommendations and docs/networking.md for networking modes (LoadBalancer, NodePort, hostNetwork, ingress-nginx).
See the linked docs for field-by-field explanations, required fields, and which values must match between the config files and the Helm values (e.g. ports, IPs).
For chart maintainers: docs/maintaining.md documents how to regenerate the Parameters table after editing values.yaml.
```shell
# Check pod status (replace "my-node" with your release name)
kubectl get pods -l app.kubernetes.io/name=node,app.kubernetes.io/instance=my-node

# Get external service IPs
kubectl get svc -l app.kubernetes.io/name=node,app.kubernetes.io/instance=my-node

# View logs
kubectl logs my-node-0 -c ton-node

# Exec into pod
kubectl exec -it my-node-0 -c ton-node -- /bin/sh
```