A Kubernetes client library for Zig, implementing all 62 standard Kubernetes 1.35 resource types with full CRUD operations across 20 API groups. Includes native WebSocket support for Pod exec/attach/port-forward and Protobuf serialization via the zig-protobuf library.
Tested against: Rancher Desktop with Kubernetes 1.35.1
Dependencies: zig-yaml (YAML parsing), zig-protobuf (Protocol Buffers)
Custom CA certificates are supported via tls_config.ca_cert_data or tls_config.ca_cert_path. For clusters with self-signed certificates, pass the CA bundle in the client config. As a fallback, use connectWithFallback() for automatic kubectl proxy detection, or connect via http://127.0.0.1:8080 after running kubectl proxy.
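A minimal sketch for a cluster with a self-signed CA follows; the server URL and file paths are placeholders, and the exact parameters of connectWithFallback() are an assumption:
// Pass the CA bundle directly in the client config.
var client = try klient.K8sClient.init(allocator, .{
    .server = "https://my-cluster:6443",
    .token = "your-bearer-token",
    .tls_config = .{
        .ca_cert_path = "/path/to/ca.crt", // or .ca_cert_data = ca_bundle_bytes
    },
});
defer client.deinit();
// Fallback: probe for a running `kubectl proxy` instead (signature assumed):
// var client = try klient.connectWithFallback(allocator, .{ .server = "http://127.0.0.1:8080" });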
Core API (v1) - 17 resources
Pod, Service, ConfigMap, Secret, Namespace, Node, PersistentVolume, PersistentVolumeClaim, ServiceAccount, Endpoints, Event, ReplicationController, PodTemplate, ResourceQuota, LimitRange, Binding, ComponentStatus
Workloads (apps/v1) - 5 resources
Deployment, ReplicaSet, StatefulSet, DaemonSet, ControllerRevision
Batch (batch/v1) - 2 resources
Job, CronJob
Networking (networking.k8s.io/v1) - 6 resources
Ingress, IngressClass, NetworkPolicy, EndpointSlice, IPAddress, ServiceCIDR
Gateway API (gateway.networking.k8s.io/v1, v1beta1) - 5 resources
GatewayClass, Gateway, HTTPRoute, GRPCRoute, ReferenceGrant
RBAC (rbac.authorization.k8s.io/v1) - 4 resources
Role, RoleBinding, ClusterRole, ClusterRoleBinding
Storage (storage.k8s.io/v1) - 6 resources
StorageClass, VolumeAttachment, CSIDriver, CSINode, CSIStorageCapacity, VolumeAttributesClass
Dynamic Resource Allocation (resource.k8s.io/v1) - 4 resources
ResourceClaim, ResourceClaimTemplate, ResourceSlice, DeviceClass
Policy (policy/v1) - 1 resource
PodDisruptionBudget
Auto-scaling (autoscaling/v2) - 1 resource
HorizontalPodAutoscaler
Scheduling (scheduling.k8s.io/v1) - 1 resource
PriorityClass
Coordination (coordination.k8s.io/v1) - 1 resource
Lease
Certificates (certificates.k8s.io/v1) - 1 resource
CertificateSigningRequest
Admission Control (admissionregistration.k8s.io/v1) - 4 resources
ValidatingWebhookConfiguration, MutatingWebhookConfiguration, ValidatingAdmissionPolicy, ValidatingAdmissionPolicyBinding
API Registration (apiregistration.k8s.io/v1) - 1 resource
APIService
Flow Control (flowcontrol.apiserver.k8s.io/v1) - 2 resources
FlowSchema, PriorityLevelConfiguration
Node (node.k8s.io/v1) - 1 resource
RuntimeClass
Storage Migration (storagemigration.k8s.io/v1beta1) - 1 resource
StorageVersionMigration
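Every resource in this catalogue is exposed through the same typed-client pattern shown in the usage examples below (klient.Pods, klient.Deployments, and so on). A brief sketch for two of the less common groups; the wrapper names HTTPRoutes and ResourceClaims are assumptions that merely follow that naming convention:
// Sketch: wrapper names are assumed from the Pods/Deployments convention.
var routes = klient.HTTPRoutes.init(&client); // gateway.networking.k8s.io/v1
const route_list = try routes.client.list("default");
defer route_list.deinit();
var claims = klient.ResourceClaims.init(&client); // resource.k8s.io/v1
const claim_list = try claims.client.listAll();
defer claim_list.deinit();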
- 62 Resource Types: ALL Kubernetes 1.35 standard resources (100% coverage)
- Full CRUD Operations: Create, Read, Update, Delete, Patch on all resources
- Advanced Delete: Grace period, propagation policy, preconditions, delete collection
- Advanced Create/Update: Field manager, field validation, dry-run support
- WebSocket Operations: Pod exec, attach, port-forward
- Gateway API: GatewayClass, Gateway, HTTPRoute, GRPCRoute, ReferenceGrant
- Dynamic Resource Allocation: ResourceClaim, DeviceClass, ResourceSlice support
- Generic Resource Client: Type-safe operations with the ResourceClient<T> pattern
- JSON Serialization: Built-in support for the Kubernetes JSON API
- Cluster-Scoped Resources: 21 resources with custom list methods
- Bearer Token: Standard token-based authentication
- mTLS: Client certificate authentication with full TLS support
- Exec Credential Plugins: AWS EKS, GCP GKE, Azure AKS integration
- Kubeconfig Parsing: via kubectl config view --output json
- Enhanced Pod Logs: Container selection, previous container, tail lines, timestamps, sinceSeconds
- Rollout Restart: Deployments, StatefulSets, DaemonSets rolling restart
- Image Update: Set container image on deployments (setImage); see the operational sketch after this list
- Pod Eviction: Graceful eviction via the Eviction API
- Node Cordon/Uncordon: Mark nodes as schedulable/unschedulable
- Metrics Server API: CPU/memory metrics for pods and nodes (metrics.k8s.io/v1beta1)
- RBAC CanI: SelfSubjectAccessReview permission checks
- Retry Logic: Exponential backoff with jitter, status-code-aware retries (429, 500, 502, 503, 504)
- Watch API: Real-time resource updates with streaming support
- Informers: Local caching with automatic synchronization
- Connection Pooling: Thread-safe connection management
- CRD Support: Dynamic client for Custom Resource Definitions
- Predefined CRDs: Cert-Manager, Istio, Prometheus, Argo, Knative
- In-Cluster Config: Automatic service account detection and configuration
- Field/Label Selectors: Advanced filtering and search capabilities
- Pagination: Efficient handling of large result sets
- Server-Side Apply: Declarative resource management with field ownership
- Structured API Errors: K8s API error responses parsed into status/message/reason/code
- Response Size Limits: Configurable max response size (default 16MB) to prevent OOM
- 92 passing tests with comprehensive coverage
- Memory safe with explicit allocator management
- Type safe with Zig's compile-time type system
- Two dependencies: zig-yaml (YAML parsing) and zig-protobuf (Protocol Buffers)
- Tested against Kubernetes 1.35.1
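The operational helpers listed above (rollout restart, image update, pod eviction, node cordon/uncordon) are used roughly as sketched below. Only setImage is named in the feature list; the other method names and argument orders are assumptions, not the library's confirmed API:
// Sketch only: apart from setImage, the helper names below are assumptions.
var deployments = klient.Deployments.init(&client);
// Set the container image on a deployment (argument order assumed):
// _ = try deployments.setImage("my-app", "default", "web", "nginx:1.27");
// Hypothetical equivalents for the other operational features:
// try deployments.rolloutRestart("my-app", "default");
// try pods.evict("my-pod", "default", .{ .grace_period_seconds = 30 });
// try nodes.cordon("worker-1");
// try nodes.uncordon("worker-1");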
Add to your build.zig.zon:
.{
.name = .myapp,
.version = "0.1.0",
.dependencies = .{
.klient = .{
.url = "https://github.com/guanchzhou/zig-klient/archive/main.tar.gz",
.hash = "...", // zig will provide this
},
},
}
In your build.zig:
const klient_dep = b.dependency("klient", .{
.target = target,
.optimize = optimize,
});
exe.root_module.addImport("klient", klient_dep.module("klient"));
Alternatively, copy the src/ directory into your project and import it as a module.
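If you vendor the sources this way, the import can be wired up along these lines in build.zig; the vendor path below is an assumption:
// Vendored copy: register src/klient.zig as a module (path assumed).
const klient_mod = b.addModule("klient", .{
    .root_source_file = b.path("vendor/zig-klient/src/klient.zig"),
    .target = target,
    .optimize = optimize,
});
exe.root_module.addImport("klient", klient_mod);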
const std = @import("std");
const klient = @import("klient");
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
// Initialize client
var client = try klient.K8sClient.init(allocator, .{
.server = "https://kubernetes.default.svc",
.token = "your-bearer-token",
});
defer client.deinit();
// List all pods
var pods = klient.Pods.init(&client);
const pod_list = try pods.client.listAll();
defer pod_list.deinit();
for (pod_list.value.items) |pod| {
std.debug.print("Pod: {s}\n", .{pod.metadata.name});
}
}
var client = try klient.K8sClient.init(allocator, .{
.server = "https://kubernetes.example.com",
.tls_config = .{
.client_cert_path = "/path/to/client.crt",
.client_key_path = "/path/to/client.key",
.ca_cert_path = "/path/to/ca.crt",
},
});
defer client.deinit();
const aws_config = try klient.awsEksConfig(allocator, "my-cluster");
const cred = try klient.exec_credential.executeCredentialPlugin(allocator, aws_config);
defer allocator.free(cred.status.?.token.?);
var client = try klient.K8sClient.init(allocator, .{
.server = "https://xxx.eks.amazonaws.com",
.token = cred.status.?.token,
});
defer client.deinit();
var client = try klient.K8sClient.init(allocator, .{
.server = "https://kubernetes.example.com",
.token = "token",
.retry_config = klient.aggressiveConfig, // or: defaultConfig, conservativeConfig
});
defer client.deinit();
const WatcherPod = klient.Watcher(klient.Pod);
var watcher = try WatcherPod.init(&client, .{
.path = "/api/v1/namespaces/default/pods",
.timeout_seconds = 300,
});
defer watcher.deinit();
while (try watcher.next()) |event| {
switch (event.event_type) {
.ADDED => std.debug.print("Pod added: {s}\n", .{event.object.metadata.name}),
.MODIFIED => std.debug.print("Pod modified: {s}\n", .{event.object.metadata.name}),
.DELETED => std.debug.print("Pod deleted: {s}\n", .{event.object.metadata.name}),
else => {},
}
}
const InformerPod = klient.Informer(klient.Pod);
var informer = try InformerPod.init(
allocator,
&client,
"/api/v1/namespaces/default/pods"
);
defer informer.deinit();
try informer.start();
// Get from local cache (fast!)
if (informer.get("my-pod")) |pod| {
std.debug.print("Found: {s}\n", .{pod.metadata.name});
}
var pool = try klient.ConnectionPool.init(allocator, .{
.server = "https://kubernetes.example.com",
.max_connections = 20,
.idle_timeout_ms = 60_000,
});
defer pool.deinit();
// Start automatic cleanup
try pool.startCleanup(10_000);
// Get pool stats
const stats = pool.stats();
std.debug.print("Utilization: {d:.1}%\n", .{stats.utilization()});// Use predefined CRD
var cert_client = klient.DynamicClient.init(&client, klient.CertManagerCertificate);
const certs = try cert_client.list("production");
defer certs.deinit();
// Or define custom CRD
const my_crd = klient.CRDInfo{
.group = "mycompany.io",
.version = "v1alpha1",
.kind = "CustomApp",
.plural = "customapps",
.namespaced = true,
};
var custom = klient.DynamicClient.init(&client, my_crd);
const apps = try custom.list("default");
defer apps.deinit();
// Delete with grace period and propagation policy
const delete_opts = klient.DeleteOptions{
.grace_period_seconds = 30,
.propagation_policy = klient.PropagationPolicy.foreground.toString(),
.dry_run = "All", // Test without actually deleting
};
var deployments = klient.Deployments.init(&client);
try deployments.client.deleteWithOptions("my-app", null, delete_opts);
// Delete collection by label selector
const list_opts = klient.ListOptions{
.labelSelector = "app=nginx,env=staging",
};
try deployments.client.deleteCollection(null, list_opts, delete_opts);
// Create with field manager and validation
const create_opts = klient.CreateOptions{
.field_manager = "my-controller",
.field_validation = klient.FieldValidation.strict.toString(),
.dry_run = "All", // Dry run to validate
};
var pods = klient.Pods.init(&client);
const pod = try pods.client.createWithOptions(my_pod, null, create_opts);
defer pod.deinit();
// Update with field ownership tracking
const update_opts = klient.UpdateOptions{
.field_manager = "deployment-controller",
.field_validation = klient.FieldValidation.warn.toString(),
};
const updated = try deployments.client.updateWithOptions(deployment, null, update_opts);
defer updated.deinit();
// Filter by field and label selectors
const list_opts = klient.ListOptions{
.fieldSelector = "status.phase=Running",
.labelSelector = "app=nginx,tier=frontend",
.limit = 50,
.resourceVersion = "12345",
};
var pods = klient.Pods.init(&client);
const filtered_pods = try pods.client.listWithOptions(null, list_opts);
defer filtered_pods.deinit();
for (filtered_pods.value.items) |pod| {
std.debug.print("Pod: {s}\n", .{pod.metadata.name});
}
// Handle pagination with continue tokens
if (filtered_pods.value.metadata.@"continue") |continue_token| {
const next_opts = klient.ListOptions{
.continue_ = continue_token,
.limit = 50,
};
const next_page = try pods.client.listWithOptions(null, next_opts);
defer next_page.deinit();
}
// Automatically detect and load in-cluster config
if (klient.isInCluster()) {
const in_cluster = try klient.loadInClusterConfig(allocator);
defer allocator.free(in_cluster.host);
defer allocator.free(in_cluster.token);
defer allocator.free(in_cluster.ca_cert_data);
var client = try klient.K8sClient.init(allocator, .{
.server = in_cluster.host,
.token = in_cluster.token,
.tls_config = .{
.ca_cert_data = in_cluster.ca_cert_data,
},
});
defer client.deinit();
// Now you can use the client from within a pod
var pods = klient.Pods.init(&client);
const pod_list = try pods.client.listAll();
defer pod_list.deinit();
}
// Initialize WebSocket client
var ws_client = try klient.WebSocketClient.init(
allocator,
"https://kubernetes.default.svc",
bearer_token,
ca_cert_data,
);
defer ws_client.deinit();
// Execute command in pod
var exec_client = klient.ExecClient.init(allocator, &ws_client);
const result = try exec_client.exec("my-pod", "default", .{
.command = &[_][]const u8{ "ls", "-la", "/app" },
.stdout = true,
.stderr = true,
});
defer result.deinit();
std.debug.print("Output:\n{s}\n", .{result.stdout()});
std.debug.print("Exit code: {d}\n", .{result.exit_code});// Attach to running container
var attach_client = klient.AttachClient.init(allocator, &ws_client);
var session = try attach_client.attach("my-pod", "default", .{
.stdin = true,
.stdout = true,
.tty = true,
});
defer session.deinit();
// Send command via stdin
try session.writeStdin("echo hello\n");
// Read output
const msg = try session.read();
defer msg.deinit(allocator);
std.debug.print("Output: {s}\n", .{msg.data});
// Detach from container
session.detach();
// Forward local ports to pod
var forwarder = klient.PortForwarder.init(allocator, &ws_client);
var forward_session = try forwarder.forward("my-pod", "default", .{
.ports = &[_]klient.PortMapping{
.{ .local = 8080, .remote = 80 },
.{ .local = 5432, .remote = 5432 },
},
});
defer forward_session.deinit();
// Ports are now forwarded
// Access pod's port 80 via localhost:8080
// Access pod's port 5432 via localhost:5432
// Keep session alive
while (forward_session.isActive()) {
std.time.sleep(1 * std.time.ns_per_s);
}
var pods = klient.Pods.init(&client);
// Using field selectors
const options = klient.ListOptions{
.field_selector = "status.phase=Running,metadata.name=my-pod",
.limit = 100,
};
const filtered_pods = try pods.client.listWithOptions("default", options);
defer filtered_pods.deinit();
// Using label selector builder
var label_selector = klient.LabelSelector.init(allocator);
defer label_selector.deinit();
try label_selector.addEquals("app", "nginx");
try label_selector.addIn("env", &[_][]const u8{ "prod", "staging" });
const selector_str = try label_selector.build();
defer allocator.free(selector_str);
const labeled_options = klient.ListOptions{
.label_selector = selector_str,
};
const labeled_pods = try pods.client.listWithOptions("default", labeled_options);
defer labeled_pods.deinit();
var deployments = klient.Deployments.init(&client);
const deployment_manifest =
\\{
\\ "apiVersion": "apps/v1",
\\ "kind": "Deployment",
\\ "metadata": {
\\ "name": "my-deployment",
\\ "namespace": "default"
\\ },
\\ "spec": {
\\ "replicas": 3,
\\ "selector": {
\\ "matchLabels": {"app": "myapp"}
\\ },
\\ "template": {
\\ "metadata": {"labels": {"app": "myapp"}},
\\ "spec": {
\\ "containers": [{
\\ "name": "nginx",
\\ "image": "nginx:latest"
\\ }]
\\ }
\\ }
\\ }
\\}
;
const apply_options = klient.ApplyOptions{
.field_manager = "my-controller",
.force = false,
};
const applied = try deployments.client.apply(
"my-deployment",
deployment_manifest,
"default",
apply_options,
);
defer applied.deinit();
var json_patch = klient.JsonPatch.init(allocator);
defer json_patch.deinit();
try json_patch.replace("/spec/replicas", .{ .integer = 5 });
try json_patch.add("/metadata/labels/version", .{ .string = "v2" });
const patch_json = try json_patch.build();
defer allocator.free(patch_json);
const patched = try deployments.client.patchWithType(
"my-deployment",
patch_json,
"default",
klient.PatchType.json,
);
defer patched.deinit();
All resource types support the same operations:
// Pods
var pods = klient.Pods.init(&client);
// List operations (return std.json.Parsed - caller must call .deinit())
const pod_list = try pods.client.list("namespace");
defer pod_list.deinit();
const all_pods = try pods.client.listAll();
defer all_pods.deinit();
// Get operation (returns Parsed(T) - caller must call .deinit())
const pod = try pods.client.get("pod-name", "namespace");
defer pod.deinit();
std.debug.print("Pod: {s}\n", .{pod.value.metadata.name});
// Create/Update (return Parsed(T) - caller must call .deinit())
const created = try pods.client.create(pod_spec, "namespace");
defer created.deinit();
const updated = try pods.client.update(updated_spec, "namespace");
defer updated.deinit();
// Delete (returns void)
try pods.client.delete("namespace", "pod-name", null);
// Patch (returns Parsed(T) - caller must call .deinit())
const patched = try pods.client.patch("pod-name", patch_json, "namespace");
defer patched.deinit();
// Pod-specific: Get logs
const logs = try pods.logs("pod-name", "namespace");
defer allocator.free(logs);
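// Enhanced log options (container selection, previous container, tail lines,
// timestamps, sinceSeconds) are also supported per the feature list; the
// method and option names below are illustrative assumptions, not the
// confirmed signature:
// const tail = try pods.logsWithOptions("pod-name", "namespace", .{
//     .container = "app",
//     .tail_lines = 100,
//     .timestamps = true,
// });
// defer allocator.free(tail);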
// Same pattern for: Deployments, Services, ConfigMaps, Secrets, Namespaces, Nodes,
// ReplicaSets, StatefulSets, DaemonSets, Jobs, CronJobs, PVs, PVCs
Run all unit tests (isolated functionality):
zig build test
Run specific test suites:
zig build test-retry # Retry logic tests
zig build test-advanced # TLS, Connection Pool, CRD tests
Integration tests run against a real Kubernetes cluster. See docs/TESTING.md for details.
cd examples/tests
./run_all_tests.sh
- TESTING.md - Testing guide (unit and integration tests)
- tests/comprehensive/README.md - Comprehensive test suite
zig-klient/
├── src/
│ ├── klient.zig # Main library entry point
│ └── k8s/
│ ├── client.zig # Core HTTP client
│ ├── types.zig # Resource type definitions
│ ├── resources.zig # Resource operations
│ ├── retry.zig # Retry logic
│ ├── watch.zig # Watch API & Informers
│ ├── tls.zig # mTLS support
│ ├── connection_pool.zig # Connection pooling
│ ├── crd.zig # CRD support
│ ├── exec_credential.zig # Cloud auth
│ └── kubeconfig_json.zig # Config parsing
├── tests/ # Unit tests (isolated)
├── examples/ # Usage examples
│ └── tests/ # Integration tests (real cluster)
└── docs/ # Documentation
Tested against: Rancher Desktop with Kubernetes 1.35.1
| Feature | Kubernetes 1.35 | zig-klient | Coverage |
|---|---|---|---|
| HTTP Operations | All methods | All methods | 100% |
| K8s Resource Types | 62 standard | 62 | 100% |
| API Groups | 20 | 20 | 100% |
| Auth Methods | 5 | 4 | 80% |
| In-Cluster Config | Yes | Yes | Yes |
| Delete Options | Yes | Yes | Yes |
| Create/Update Options | Yes | Yes | Yes |
| Delete Collection | Yes | Yes | Yes |
| Retry Logic | Basic | Advanced | Yes |
| Watch API | Yes | Yes | Yes |
| Field/Label Selectors | Yes | Yes | Yes |
| Pagination | Yes | Yes | Yes |
| Server-Side Apply | Yes | Yes | Yes |
| WebSocket Support | Yes | Yes (native) | Yes |
| Protobuf Support | Yes | Yes (zig-protobuf) | Yes |
| Gateway API | Yes | Yes | Yes |
| Dynamic Resource Allocation | Yes | Yes | Yes |
Coverage: 100% of Kubernetes 1.35 standard resources (62 types across 20 API groups) - workloads, networking, Gateway API, storage, security, auto-scaling, dynamic resource allocation, admission control, certificates, API management, storage migration, WebSocket, and Protobuf.
See FEATURE_PARITY_STATUS.md for detailed breakdown.
- Zig 0.15.1 or newer
- kubectl (for kubeconfig parsing)
- Cloud CLI tools (optional, for exec credential plugins):
  - aws CLI for EKS
  - gke-gcloud-auth-plugin for GKE
  - kubelogin for AKS
Apache 2.0 License - see LICENSE file for details
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass (zig build test)
- Submit a pull request
- All 62 standard Kubernetes 1.35 resource types across 20 API groups
- All HTTP methods (GET, POST, PUT, DELETE, PATCH)
- Bearer token auth
- mTLS auth
- Exec credential plugins (AWS, GCP, Azure)
- In-cluster configuration with service account
- Delete options (grace period, propagation policy, preconditions)
- Delete collection operations
- Create/Update options (field manager, field validation, dry-run)
- List filtering (field selectors, label selectors)
- Pagination support (limit, continue tokens, resource version)
- Retry logic with exponential backoff and jitter
- Watch API for streaming updates
- Informers with local caching
- Thread-safe connection pooling
- CRD support with dynamic client
- Server-side apply
- JSON Patch and Strategic Merge Patch
- Scale subresources
- WebSocket operations (pod exec, attach, port-forward)
- Protobuf serialization via zig-protobuf
- Gateway API (GatewayClass, Gateway, HTTPRoute, GRPCRoute, ReferenceGrant)
- Dynamic Resource Allocation (ResourceClaim, DeviceClass, ResourceSlice)
- Metrics Server API (pod and node CPU/memory metrics)
- RBAC CanI (SelfSubjectAccessReview permission checks)
- Operational features: rollout restart, image update, pod eviction, node cordon
- Enhanced pod logs (container, previous, tail, timestamps, sinceSeconds)
- Structured K8s API error responses
- Response size limits and 4x larger HTTP buffers
- Status-code-aware retry logic (429, 500, 502, 503, 504)
- 31 test files, 92 passing tests
- 100% integration test pass rate