AstraDNS Operator is the control-plane component of AstraDNS. It watches custom resources that describe DNS configuration intent, renders engine-specific configuration, and writes the result to a ConfigMap consumed by the agent workload (DaemonSet in node-local, Deployment in central).
All CRDs belong to the API group `dns.astradns.com/v1alpha1`.
| CRD | Description |
|---|---|
| `DNSUpstreamPool` | Defines upstream DNS resolvers with health checking and selection policy |
| `DNSCacheProfile` | Configures DNS cache behavior (TTL bounds, prefetch, negative caching) |
| `ExternalDNSPolicy` | Controls external DNS zone delegation and filtering rules |
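As an illustration, a `DNSUpstreamPool` resource might look like the following. The `spec` field names here are hypothetical and only sketch the shape suggested by the table above; consult the CRD schema for the actual fields:

```yaml
apiVersion: dns.astradns.com/v1alpha1
kind: DNSUpstreamPool
metadata:
  name: default-upstreams
spec:
  # Field names below are illustrative, not the operator's actual schema.
  upstreams:
    - address: 1.1.1.1:53
    - address: 8.8.8.8:53
  healthCheck:
    intervalSeconds: 10
  selectionPolicy: roundRobin
```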
```
CRD changes --> Reconciler --> EngineConfig assembly
                                       |
                                       v
                       ConfigRenderer (engine-specific)
                                       |
                                       v
                         Validate + render config
                                       |
                                       v
                         Write JSON to ConfigMap
                                       |
                                       v
                     Agent detects change and reloads
```
- The operator watches `DNSUpstreamPool`, `DNSCacheProfile`, and `ExternalDNSPolicy` resources.
- On change, the reconciler assembles an `EngineConfig` from the current state of all relevant CRs.
- The appropriate `ConfigRenderer` (selected by `ASTRADNS_ENGINE_TYPE`) validates and renders the config.
- The rendered JSON is written to the `astradns-agent-config` ConfigMap in the operator namespace.
- The agent's config watcher detects the ConfigMap update and reloads the engine subprocess.
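The validate-and-render step can be sketched in Go. The `EngineConfig` fields and `render` helper below are simplified stand-ins for the operator's actual types, with the ConfigMap write and agent reload elided:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// EngineConfig is a hypothetical, simplified stand-in for the
// operator's assembled configuration type.
type EngineConfig struct {
	Upstreams []string `json:"upstreams"`
	CacheTTL  int      `json:"cacheTTL"`
}

// render mimics a ConfigRenderer: validate the assembled config,
// then emit the config.json payload written to the ConfigMap.
func render(cfg EngineConfig) (string, error) {
	if len(cfg.Upstreams) == 0 {
		return "", fmt.Errorf("at least one upstream is required")
	}
	b, err := json.Marshal(cfg)
	return string(b), err
}

func main() {
	out, err := render(EngineConfig{Upstreams: []string{"1.1.1.1:53"}, CacheTTL: 300})
	fmt.Println(out, err)
}
```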
- ConfigMap name: `astradns-agent-config`
- ConfigMap key: `config.json`
- Operator namespace is passed to controllers via `POD_NAMESPACE` (set in `config/manager/manager.yaml`).
- The agent should be deployed in the same namespace so it can mount this ConfigMap directly.
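Putting these together, the operator-managed ConfigMap has roughly this shape. The `config.json` payload and the namespace are illustrative; the real schema depends on the selected engine renderer:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: astradns-agent-config
  namespace: astradns-system   # example: the operator's namespace (POD_NAMESPACE)
data:
  config.json: |
    {"upstreams": ["1.1.1.1:53"], "cacheTTL": 300}
```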
The operator selects the config renderer via the `ASTRADNS_ENGINE_TYPE` environment variable (default: `unbound`). Supported values: `unbound`, `coredns`, `powerdns`, `bind`.

When deployed with Helm, this value is taken from `agent.engineType` and propagated to both the operator and the agent.
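For example, a Helm values fragment selecting the CoreDNS renderer might look like this; only `agent.engineType` is set, and the chart propagates it to `ASTRADNS_ENGINE_TYPE` on both workloads:

```yaml
agent:
  engineType: coredns   # one of: unbound, coredns, powerdns, bind
```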
The chart includes a production profile at `deploy/helm/astradns/values-production.yaml` with:
- Link-local data path (`agent.network.mode=linkLocal`, default `agent.network.linkLocalIP=169.254.20.11`)
- Optional cluster DNS integration job (`clusterDNS.forwardExternalToAstraDNS.enabled=true`)
- PodDisruptionBudget, NetworkPolicy, and PriorityClass defaults
- Agent service account token automount disabled by default
- ServiceMonitor and Grafana dashboard ConfigMap enabled
- Validating webhook enabled (cert-manager issuer still required)
Install example:

```shell
# optional: fetch the maintained production profile
curl -fsSL https://raw.githubusercontent.com/astradns/astradns-operator/main/deploy/helm/astradns/values-production.yaml -o values-production.yaml

helm upgrade --install astradns oci://ghcr.io/astradns/helm-charts/astradns \
  --version <chart-version> \
  -n astradns-system --create-namespace \
  -f values-production.yaml \
  --set webhook.certManager.issuerRef.name=<cluster-issuer-name>
```

With `clusterDNS.forwardExternalToAstraDNS.enabled=true`, Helm runs a post-install/post-upgrade job that patches the CoreDNS ConfigMap's forward target to point at the AstraDNS listener. In node-local mode, this defaults to `169.254.20.11:5353`.
If you customize `agent.network.linkLocalIP`, also set `clusterDNS.forwardExternalToAstraDNS.forwardTarget=<linkLocalIP>:5353`.
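A values fragment with a customized link-local IP might look like this; the IP chosen here is only an example:

```yaml
agent:
  network:
    mode: linkLocal
    linkLocalIP: 169.254.20.53        # custom value (example)
clusterDNS:
  provider: coredns
  forwardExternalToAstraDNS:
    enabled: true
    forwardTarget: 169.254.20.53:5353 # must match linkLocalIP
```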
When this integration is enabled, `clusterDNS.provider` is validated and must currently be `coredns`.
Helm validates integration inputs early: `namespace`, `configMapName`, and the optional `rolloutDeployment` must be valid Kubernetes resource names; `forwardTarget` must be `host:port` (or `[ipv6]:port`); and `kubectlImagePullPolicy` must be one of `Always`, `IfNotPresent`, or `Never`.
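As an illustration of the `forwardTarget` rule, a minimal Go sketch of a `host:port` / `[ipv6]:port` check might look like this. The `isValidForwardTarget` helper is hypothetical and only mirrors the kind of check the chart performs, not its actual template logic:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// isValidForwardTarget reports whether s looks like host:port or
// [ipv6]:port with a numeric port in range (illustrative sketch).
func isValidForwardTarget(s string) bool {
	host, port, err := net.SplitHostPort(s) // handles the [ipv6]:port form
	if err != nil || host == "" {
		return false
	}
	p, err := strconv.Atoi(port)
	return err == nil && p >= 1 && p <= 65535
}

func main() {
	fmt.Println(isValidForwardTarget("169.254.20.11:5353")) // valid
	fmt.Println(isValidForwardTarget("[2001:db8::1]:5353")) // valid
	fmt.Println(isValidForwardTarget("169.254.20.11"))      // missing port
}
```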
To avoid misconfiguration, cluster DNS integration requires `agent.network.mode=linkLocal`.
The integration toggle remains explicit because some clusters patch CoreDNS out-of-band (GitOps/platform controllers), and Helm should not overwrite cluster DNS unless requested.
The chart pins images to AstraDNS official artifacts for the selected chart version:
- Operator: `ghcr.io/astradns/astradns-operator:v<appVersion>`
- Agent: `ghcr.io/astradns/astradns-agent:v<appVersion>-<engine>`
Users select only the engine flavor, via `agent.engineType`.
- Go 1.24.6+
- Docker 17.03+
- kubectl v1.11.3+
- Access to a Kubernetes v1.11.3+ cluster
```shell
# Install CRDs into the cluster
make install

# Deploy the operator
make deploy IMG=<registry>/astradns-operator:<tag>

# Deploy the agent DaemonSet in the same namespace
kubectl apply -f ../astradns-agent/config/daemonset.yaml

# Apply sample CRs
kubectl apply -k config/samples/
```

```shell
# Remove CRs
kubectl delete -k config/samples/

# Remove CRDs
make uninstall

# Remove the operator
make undeploy
```

```shell
# Generate CRD manifests from Go types
make manifests

# Run unit tests (uses envtest for controller-runtime integration)
make test

# Run static analysis
make vet
```

Production release workflows and SLO validation assets live in:

- `docs/production/go-live-checklist.md`
- `docs/production/coredns-integration.md`
- `docs/production/runbook.md`
- `docs/production/slo-validation.md`
- `docs/production/release-process.md`
- Human and AI contributions: `CONTRIBUTING.md`
- OpenCode-specific guardrails: `OPENCODE_RULES.md`
- Repository-level AI constraints: `AGENTS.md`
Copyright 2026. Licensed under the Apache License, Version 2.0.