
🚜RKE2 + 🚢Dockhand on NixOS❄️

This project creates a Kubernetes RKE2 cluster and a Docker host running Dockhand. The cloud provider used is Hetzner Cloud, but this repo can easily be adapted to another provider or to bare-metal machines.

The goal of this project is to make truly reproducible infrastructure for any environment, defined in git.

The host OS is NixOS❄️ and managed by a flake with Clan. Whether you're running a homelab or maintaining critical computing infrastructure, Clan will help reduce maintenance burden by allowing a git repository to define your whole network of computers.

Architecture 🗺️

Kubernetes Cluster Diagram:

RKE2-Infra

Full Infrastructure Diagram:

Full-Infra

🚜RKE2 Kubernetes☸️

RKE2 is Rancher's enterprise-ready next-generation Kubernetes distribution. It has also been known as RKE Government.

It is a fully conformant Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector.

Machine Features:

Cluster Features:

🐋Docker + Dockhand🚢

The Docker host features:

🔍 Flake Inspection

To display the outputs of the flake.nix file, run:

nix flake show

nixosConfigurations are the machines this flake builds.

nixosConfigurations
├── mng-0
├── wrk-0
├── proxy
└── docker

apps are predefined scripts that can be run with nix run .#<app-name>.

apps
├── get-config
├── get-token
├── sops-add-user
├── setup-env
├── send-env
└── tmp-pod

devShells are the development shells that provide all the dependencies for the project.

devShells
└── default

Prerequisites 📋

To use this project, you need either:

or

Static Variables 🔧

Static variables are defined in infra.json and used throughout the project. For instance, the field meta.domain is used in the Traefik ingress controller configuration and to define ingress routes for the other web UIs. The top-level env variable defines the environment of the cluster, i.e. dev, staging, or prod.
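As a rough sketch, infra.json might look something like the following. Only meta.domain, env, and the networking objects are named in this README; the exact structure and all values below are placeholders:

```json
{
  "env": "dev",
  "meta": {
    "domain": "example.com"
  },
  "networking": {
    "public": {},
    "private": {}
  }
}
```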

Assumptions 📋

  • A remote storage box is available to store secrets and config files. This can be any host reachable over SSH. I am using a Hetzner Storage Box.
  • Tailscale is used as the VPN. Tailscale can be self-hosted via Headscale, but I am using Tailscale Cloud.
  • Cloudflare Origin Certificates are used for mTLS with the Cloudflare Proxy for the domain meta.domain in infra.json. This ensures that only authorized users can access the cluster Web UIs.
  • The domain meta.domain in infra.json has properly configured DNS records pointing to the IPs of the servers, either the proxy or the docker machine.

devShell 🧑‍💻🐚

Use the Nix development shell to enter an environment with all development dependencies installed. Optionally, use direnv to make life easier: direnv drops you into the devShell when it detects a .envrc file and reloads the devShell when the shell definition changes. Either download it or use the VSCode extension.

nix develop

The Nix devShell works best with bash. If you want to use a different shell, see this discussion.
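If you go the direnv route, a typical .envrc for a flake-based project is a single line (this assumes nix-direnv is installed; plain direnv also works but caches less aggressively):

```shell
# .envrc — load the flake's default devShell via nix-direnv
use flake
```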

Packages installed:

  • clan-cli Command-line interface for Clan.lol
  • hcloud Command-line interface for Hetzner Cloud
  • lazyhetzner TUI for managing Hetzner Cloud resources
  • rke2_1_35 Rancher Kubernetes Engine (RKE2)
  • kubectl Kubernetes CLI
  • kubernetes-helm Package manager for kubernetes
  • argocd GitOps for Kubernetes
  • kubeseal Kubernetes controller and tool for one-way encrypted Secrets
  • k9s Kubernetes CLI To Manage Your Clusters In Style
  • kubefetch Neofetch-like tool to show info about your Kubernetes Cluster
  • tailscale Tailscale VPN client

First Time Setup 🔧

If you are coming to this project to join an existing repo (e.g. my team at netsam.com), you can skip this section.

1. Generate the secrets 🔑

Generate an age key pair and place it at $SOPS_AGE_KEY_FILE, then change the public key in infra.json and add your user to the secrets backend:

nix run .#sops-add-user

Generate all the secrets for the infrastructure:

clan vars generate mng-0

clan vars generate wrk-0

clan vars generate docker

Ensure the SSH key pair is generated and placed in your user's home directory at ~/.ssh/industrial-host and ~/.ssh/industrial-host.pub:

clan vars get mng-0 industrial-host/ssh-key > ~/.ssh/industrial-host
clan vars get mng-0 industrial-host/ssh-key.pub > ~/.ssh/industrial-host.pub
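After extracting the key, tighten its permissions, since ssh refuses private keys that are group- or world-readable. The sketch below uses a placeholder file in the current directory to illustrate; in practice run chmod against ~/.ssh/industrial-host:

```shell
# Sketch: ssh requires private keys to be readable only by their owner.
# A placeholder file stands in for ~/.ssh/industrial-host here.
key=./industrial-host.demo
touch "$key"
chmod 600 "$key"
stat -c '%a' "$key"   # prints the octal mode of the key file
```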

2. Deploy the cluster 🚀

Deploying a machine takes just two commands:

clan machines init-hardware-config <machine-name> --target-host <machine-ip>

clan machines install <machine-name> --target-host <machine-ip>

To update a machine:

clan machines update <machine-name>

Deploy the machine mng-0 before deploying the other Kubernetes machines.

Once mng-0 is deployed, run the following commands to fetch the Join Token and the kubeconfig:

nix run .#get-token

nix run .#get-config

Make sure the worker's rke2/token var matches the join token fetched from mng-0:

clan vars get wrk-0 rke2/token

Adding a new machine to the cluster is as simple as adding a public IPv6 and a private IPv4 address to the networking.public and networking.private objects in infra.json and deploying the new machine with Clan. Make sure to follow the naming convention for the machine type, i.e. mng-<int>, wrk-<int>, docker, and proxy.

Most of the time only the infra.json file needs to be changed to re-define the cluster.
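For example, adding a second worker might amount to new entries in the networking objects, assuming they are keyed by machine name (the shape and addresses below are placeholders, not the repo's actual schema):

```json
{
  "networking": {
    "public":  { "wrk-0": "2001:db8::10", "wrk-1": "2001:db8::11" },
    "private": { "wrk-0": "10.0.0.10", "wrk-1": "10.0.0.11" }
  }
}
```

followed by the usual clan machines init-hardware-config and clan machines install commands against wrk-1.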

Joining an Existing Repository ➕

Ensure the SSH key pair is placed in your user's home directory at ~/.ssh/industrial-host and ~/.ssh/industrial-host.pub.

To fetch the secrets and config files from the storage box, run:

nix run .#setup-env

Cluster Access 💻

1. Enter the devShell 📥

This will set up the environment variables for the cluster you are accessing.

nix develop

2. Environment Setup 🔧

Ensure you have placed the SSH key in ~/.ssh/industrial-host. Keep this secret: anyone with access to this key can access the cluster.

nix run .#setup-env

3. Access the Cluster ☸️

Check the status of the Tailscale VPN connection:

tailscale status

The KUBECONFIG environment variable is set automatically. Run kubectl commands to interact with the cluster:

kubectl get nodes

kubectl get pods -A

kubefetch

Cluster Management 🛠️

k9s is your Swiss Army knife for Kubernetes clusters.

Watch this short video tutorial: K8s Made Easy: Manage Your Clusters with the k9s Terminal UI

k9s

🚢Dockhand

Dockhand is a powerful, intuitive Docker management platform.

Dockhand

Dockhand has Git Integration for GitOps and an API for CI/CD + Automated Deployments.

Hosting Dockhand next to Kubernetes allows for a seamless transition from Local Development to Docker Dev Environment to Kubernetes Prod Environment. Or any other combination of environments. Heck! Deploy Prod to Dockhand! 🤷 🚀

See my guide for 👉 GitOps with Dockhand and ArgoCD.

TSDProxy 🌐

TSDProxy is a reverse proxy that automatically adds docker containers to the Tailscale network.

Simply add a container label "tsdproxy.enable=true".

This adds the container as a machine on the Tailscale network, making an address like <container_name>.armadillo-frog.ts.net available to users on the Tailscale network over HTTPS, with no browser warnings.
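As a sketch, enabling TSDProxy for a container in a Docker Compose file looks like this. The service name and image are placeholders; the only part taken from this README is the tsdproxy.enable label:

```yaml
services:
  whoami:
    image: traefik/whoami   # placeholder workload
    labels:
      # TSDProxy watches for this label and joins the container to the tailnet
      - "tsdproxy.enable=true"
```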