Add optional unbound DNS caching sidecar to worker deployment #69

Open

mpetrowi wants to merge 5 commits into main from mp/catalyst-dns-sidecar

Conversation

@mpetrowi
Contributor

  • Unbound runs as a K8s 1.29+ native sidecar (initContainer with restartPolicy: Always), starting before check-migrations so DNS is available to all containers
  • dnsPolicy switches to None with 127.0.0.1 as nameserver when enabled
  • cluster.local forwarded to configurable clusterDnsIP; catch-all zone forwarded to a configurable forwarders list
  • msg/rrset cache sizes are top-level values
  • Optional Prometheus exporter (kumina/unbound_exporter) with control socket on a Unix domain socket shared via emptyDir volume

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
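A minimal sketch of the pod spec shape the bullets above describe (container names, images, and the template field layout are illustrative placeholders, not the chart's actual rendered output):

```yaml
# Hypothetical rendering when dnsSidecar.enabled=true; names and images
# are placeholders.
spec:
  dnsPolicy: None
  dnsConfig:
    nameservers:
      - 127.0.0.1            # resolve via the unbound sidecar
  initContainers:
    - name: unbound
      image: unbound-image:tag    # placeholder
      restartPolicy: Always       # K8s 1.29+ native sidecar: runs for the pod's lifetime
    - name: check-migrations      # ordinary init container; starts after unbound is up
      image: catalyst:tag         # placeholder
  containers:
    - name: worker
      image: catalyst:tag         # placeholder
```

The `restartPolicy: Always` on an init container is what makes it a native sidecar: it starts in init order but keeps running alongside the main containers instead of blocking them.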
@mpetrowi force-pushed the mp/catalyst-dns-sidecar branch from 91a0ffc to e626b0c on March 31, 2026 at 01:45
@mpetrowi force-pushed the mp/catalyst-dns-sidecar branch from e626b0c to 0ea89e6 on March 31, 2026 at 01:48
enabled: false
# This isn't a common image, so locking to a sha256
# I used: podman inspect cyb3rjak3/unbound-exporter:0.5.0 --format '{{.Digest}}'
image: cyb3rjak3/unbound-exporter:0.5.0@sha256:e4973d36ba6485e5e9378e6d3e72677c177d69a62a11c9da549a71ff9904e09f
Contributor Author

I wish there were a more official image for this. I'm locking to the sha256 of the manifest; I hope that works.

Contributor

What about this one: https://github.com/letsencrypt/unbound_exporter

You can find tons of exporters from the Prometheus docs: https://prometheus.io/docs/instrumenting/exporters/

Contributor

Ah, nvm. This is that project, but there's no official container for it, so this one is just some rando's build. I see.

Contributor Author

Yeah. If it works, I think we could build the letsencrypt container and push it up to ECR. I'd feel better about that on prod.

Contributor

A bit odd that they don't build and push it themselves, since they have the Dockerfile. The current container also looks slightly behind on versions, so I'd feel better about building it ourselves too.
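If we do build it ourselves, a multi-stage build along these lines would probably do it (a sketch, assuming the letsencrypt/unbound_exporter repo is checked out into the build context; the Go version and base image are guesses, not something upstream pins):

```dockerfile
# Hypothetical build for github.com/letsencrypt/unbound_exporter
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /unbound_exporter .

FROM gcr.io/distroless/static-debian12
COPY --from=build /unbound_exporter /unbound_exporter
ENTRYPOINT ["/unbound_exporter"]
```

Pushing the result to ECR would then be the usual docker tag/push flow against the registry URL.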

mpetrowi and others added 2 commits March 31, 2026 10:59
Creates a headless Service and ServiceMonitor (conditional on
dnsSidecar.metrics.enabled) that scrapes the unbound exporter on
port 9167. Uses a distinct metrics: catalyst-unbound label to avoid
overlap with the existing catalyst app/worker ServiceMonitor.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
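The rough shape of the objects that commit describes (a sketch only; object names and the selector are placeholders, not the chart's actual templates):

```yaml
# Hypothetical; gated on the flag the commit message names
{{- if .Values.dnsSidecar.metrics.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: catalyst-unbound            # placeholder name
  labels:
    metrics: catalyst-unbound       # distinct label; avoids the app/worker ServiceMonitor
spec:
  clusterIP: None                   # headless
  selector:
    app: catalyst-worker            # placeholder selector
  ports:
    - name: metrics
      port: 9167
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: catalyst-unbound            # placeholder name
spec:
  selector:
    matchLabels:
      metrics: catalyst-unbound
  endpoints:
    - port: metrics
{{- end }}
```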
Contributor

@srphillips left a comment

This all looks like it would work fine. For prod use, though, I would definitely feel better about building the official project's Dockerfile ourselves. One suggestion about making the upstream DNS configurable as well. Let's see what @blunckr says about the rest of it, since it's for his project.

# Retrieve with: kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'
clusterDnsIP: "10.96.0.10"
# Upstream forwarders for the catch-all zone (".").
forwarders:
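For reference, values like these would render forward zones roughly like the following in unbound.conf (a sketch of the shape only, not the chart's actual template output; the catch-all forwarder address is illustrative):

```
forward-zone:
    name: "cluster.local."
    forward-addr: 10.96.0.10    # clusterDnsIP
forward-zone:
    name: "."
    forward-addr: 1.1.1.1       # from the forwarders list
```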
Contributor

It would be nice to make this more configurable, so pointing to a new upstream DNS forwarder doesn't require a chart change. I'm fine keeping these as defaults, though, for when none are defined in the values file.

Contributor Author

Yeah, they are just defaults. Maybe I should have made it 1.1.1.1 to be more generic.
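That values-file default could also be expressed template-side, e.g. (hypothetical snippet; the value path is assumed):

```yaml
# Hypothetical: fall back to a public resolver when no forwarders are set
{{- range .Values.dnsSidecar.forwarders | default (list "1.1.1.1") }}
forward-addr: {{ . }}
{{- end }}
```

With sprig's `default`, an empty or missing `forwarders` list falls through to the fallback, so environments only set the value when they need a different upstream.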

