diff --git a/helm/README.md b/helm/README.md
new file mode 100644
index 0000000..48478ba
--- /dev/null
+++ b/helm/README.md
@@ -0,0 +1,178 @@
+# Deploying GCS with Helm
+
+## Download Helm
+
+Helm can be obtained from the [Helm
+releases](https://github.com/helm/helm/releases) page.
+
+## Install Helm & Tiller
+
+Once you have downloaded Helm, you need to install it. The Helm client is
+installed locally, and Tiller runs within your Kubernetes cluster.
+
+Assuming RBAC is enabled on your cluster, you need to create a service
+account for Tiller to use. The provided `helm-sa.yaml` creates a service account
+in the `kube-system` namespace called "tiller" and gives it cluster-admin
+permissions. This allows Tiller to deploy charts anywhere in the cluster.
+
+**Note: These instructions do not set up TLS security for Helm, so the result
+should not be considered a secure configuration. Patches welcome.**
+
+Install the service account and role binding:
+
+```bash
+$ kubectl apply -f helm-sa.yaml
+serviceaccount "tiller" created
+clusterrolebinding.rbac.authorization.k8s.io "tiller" created
+```
+
+Install Tiller & initialize local Helm state:
+
+```bash
+$ helm --kubeconfig=../deploy/kubeconfig init --service-account tiller
+$HELM_HOME has been configured at /home/jstrunk/.helm.
+
+Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
+
+Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
+To prevent this, run `helm init` with the --tiller-tls-verify flag.
+For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
+Happy Helming!
+```
+
+Verify it is installed:
+
+```bash
+$ helm --kubeconfig=../deploy/kubeconfig version
+Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
+Server: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
+```
+
+## Configure GCS for your cluster
+
+Not much is configurable yet, but the options that do exist are in
+`gluster-container-storage/values.yaml`.
+
+## Deploy GCS
+
+Download chart dependencies (etcd-operator):
+
+```bash
+$ helm dependency update gluster-container-storage
+Hang tight while we grab the latest from your chart repositories...
+...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
+	Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: connect: connection refused
+...Successfully got an update from the "stable" chart repository
+Update Complete. ⎈Happy Helming!⎈
+Saving 1 charts
+Downloading etcd-operator from repo https://kubernetes-charts.storage.googleapis.com
+Deleting outdated charts
+```
+
+Install GCS chart:
+
+```bash
+$ helm --kubeconfig=../deploy/kubeconfig install --namespace gcs gluster-container-storage
+NAME: kindred-cricket
+LAST DEPLOYED: Mon Oct 1 16:12:31 2018
+NAMESPACE: gcs
+STATUS: DEPLOYED
+
+RESOURCES:
+==> v1beta2/Deployment
+NAME                                         DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
+kindred-cricket-etcd-operator-etcd-operator  1        1        1           0          3s
+
+==> v1beta2/EtcdCluster
+NAME  AGE
+etcd  3s
+
+==> v1/Pod(related)
+NAME                                                         READY  STATUS             RESTARTS  AGE
+csi-nodeplugin-glusterfsplugin-ggrg7                         0/2    ContainerCreating  0         3s
+csi-nodeplugin-glusterfsplugin-gjx9g                         0/2    ContainerCreating  0         3s
+csi-nodeplugin-glusterfsplugin-qv4ph                         0/2    ContainerCreating  0         3s
+glusterd2-cluster-8zhzn                                      0/1    ContainerCreating  0         3s
+glusterd2-cluster-fghw6                                      0/1    ContainerCreating  0         3s
+glusterd2-cluster-p6d6v                                      0/1    ContainerCreating  0         3s
+kindred-cricket-etcd-operator-etcd-operator-959d989c9-jrnjw  0/1    ContainerCreating  0         2s
+csi-provisioner-glusterfsplugin-0                            0/2    ContainerCreating  0         2s
+csi-attacher-glusterfsplugin-0                               0/2    ContainerCreating  0         2s
+
+==> v1/ServiceAccount
+NAME                                         SECRETS  AGE
+kindred-cricket-etcd-operator-etcd-operator  1        3s
+csi-attacher                                 1        3s
+csi-provisioner                              1        3s
+csi-nodeplugin                               1        3s
+
+==> v1beta1/ClusterRole
+NAME                                         AGE
+kindred-cricket-etcd-operator-etcd-operator  3s
+
+==> v1/ClusterRole
+external-provisioner-runner  3s
+csi-nodeplugin               3s
+external-attacher-runner     3s
+
+==> v1/DaemonSet
+NAME                            DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
+csi-nodeplugin-glusterfsplugin  3        3        0      3           0          <none>         3s
+glusterd2-cluster               3        3        0      3           0          <none>         3s
+
+==> v1/StatefulSet
+NAME                             DESIRED  CURRENT  AGE
+csi-provisioner-glusterfsplugin  1        1        3s
+csi-attacher-glusterfsplugin     1        1        3s
+
+==> v1/StorageClass
+NAME                     PROVISIONER            AGE
+glusterfs-csi (default)  org.gluster.glusterfs  3s
+
+==> v1beta1/ClusterRoleBinding
+NAME                                         AGE
+kindred-cricket-etcd-operator-etcd-operator  3s
+
+==> v1/ClusterRoleBinding
+csi-attacher-role     3s
+csi-nodeplugin        3s
+csi-provisioner-role  3s
+
+==> v1/Service
+NAME          TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)    AGE
+gluster-mgmt  ClusterIP  10.103.3.25  <none>       24007/TCP  3s
+```
+
+It will take a few minutes for the pods to start, so check back later...
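+
+If you would rather watch the pods come up than poll by hand, one convenient way
+(just a convenience; any pod-watching approach works) is:
+
+```bash
+# Watch the pods in the gcs namespace until everything reports Running
+$ kubectl -n gcs get po -w
+```
+
+Once all of the pods are running, the listing should look similar to this: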
+
+```bash
+$ kubectl -n gcs get po
+NAME                                                          READY   STATUS    RESTARTS   AGE
+csi-attacher-glusterfsplugin-0                                2/2     Running   0          2m
+csi-nodeplugin-glusterfsplugin-ggrg7                          2/2     Running   0          2m
+csi-nodeplugin-glusterfsplugin-gjx9g                          2/2     Running   0          2m
+csi-nodeplugin-glusterfsplugin-qv4ph                          2/2     Running   0          2m
+csi-provisioner-glusterfsplugin-0                             2/2     Running   0          2m
+etcd-cnstrrvxk8                                               1/1     Running   0          1m
+etcd-t6t5fcpqw5                                               1/1     Running   0          2m
+etcd-xhv4gkrhxx                                               1/1     Running   0          2m
+glusterd2-cluster-8zhzn                                       1/1     Running   0          2m
+glusterd2-cluster-fghw6                                       1/1     Running   0          2m
+glusterd2-cluster-p6d6v                                       1/1     Running   0          2m
+kindred-cricket-etcd-operator-etcd-operator-959d989c9-jrnjw   1/1     Running   0          2m
+```
+
+Verify that GD2 has formed a healthy cluster:
+
+```bash
+$ kubectl -n gcs exec glusterd2-cluster-8zhzn glustercli peer list
++--------------------------------------+-------------------------+------------------+------------------+--------+-----+
+|                  ID                  |          NAME           | CLIENT ADDRESSES |  PEER ADDRESSES  | ONLINE | PID |
++--------------------------------------+-------------------------+------------------+------------------+--------+-----+
+| 14e9b539-d27c-48b4-872a-143445c2c775 | glusterd2-cluster-fghw6 | 127.0.0.1:24007  | 10.44.0.9:24008  | yes    |  21 |
+|                                      |                         | 10.44.0.9:24007  |                  |        |     |
+| 5cb6bec2-e7ce-4e21-bbea-3727ffc694f7 | glusterd2-cluster-8zhzn | 127.0.0.1:24007  | 10.42.0.11:24008 | yes    |  21 |
+|                                      |                         | 10.42.0.11:24007 |                  |        |     |
+| 73121ee1-21c2-4741-a297-4eb9b532a44a | glusterd2-cluster-p6d6v | 127.0.0.1:24007  | 10.36.0.8:24008  | yes    |  21 |
+|                                      |                         | 10.36.0.8:24007  |                  |        |     |
++--------------------------------------+-------------------------+------------------+------------------+--------+-----+
+```
diff --git a/helm/gluster-container-storage/.gitignore b/helm/gluster-container-storage/.gitignore
new file mode 100644
index 0000000..ee3892e
--- /dev/null
+++ b/helm/gluster-container-storage/.gitignore
@@ -0,0 +1 @@
+charts/
diff --git a/helm/gluster-container-storage/.helmignore b/helm/gluster-container-storage/.helmignore
new file mode 100644
index 0000000..f0c1319
--- /dev/null
+++ b/helm/gluster-container-storage/.helmignore
@@ -0,0 +1,21 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
diff --git a/helm/gluster-container-storage/Chart.yaml b/helm/gluster-container-storage/Chart.yaml
new file mode 100644
index 0000000..5ff1639
--- /dev/null
+++ b/helm/gluster-container-storage/Chart.yaml
@@ -0,0 +1,7 @@
+---
+apiVersion: v1
+appVersion: "gcs-alpha1"
+description: A Helm chart to deploy GCS
+home: https://github.com/gluster/gcs
+name: gluster-container-storage
+version: 0.1.0
diff --git a/helm/gluster-container-storage/requirements.lock b/helm/gluster-container-storage/requirements.lock
new file mode 100644
index 0000000..aa497d0
--- /dev/null
+++ b/helm/gluster-container-storage/requirements.lock
@@ -0,0 +1,6 @@
+dependencies:
+- name: etcd-operator
+  repository: https://kubernetes-charts.storage.googleapis.com
+  version: 0.8.0
+digest: sha256:d66592b0d34268271264f1f1485d966f144a2173043240f604ca0e41ab27c562
+generated: 2018-09-30T12:05:37.472428572-04:00
diff --git a/helm/gluster-container-storage/requirements.yaml b/helm/gluster-container-storage/requirements.yaml
new file mode 100644
index 0000000..3422f0e
--- /dev/null
+++ b/helm/gluster-container-storage/requirements.yaml
@@ -0,0 +1,5 @@
+---
+dependencies:
+  - name: etcd-operator
+    version: "~0.8.0"
+    repository: https://kubernetes-charts.storage.googleapis.com
diff --git a/helm/gluster-container-storage/templates/_helpers.tpl b/helm/gluster-container-storage/templates/_helpers.tpl
new file mode 100644
index 0000000..58445b2
--- /dev/null
+++ b/helm/gluster-container-storage/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "gluster-container-storage.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "gluster-container-storage.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "gluster-container-storage.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/helm/gluster-container-storage/templates/csi-gluster-file.yaml b/helm/gluster-container-storage/templates/csi-gluster-file.yaml
new file mode 100644
index 0000000..2a6497a
--- /dev/null
+++ b/helm/gluster-container-storage/templates/csi-gluster-file.yaml
@@ -0,0 +1,286 @@
+# CSI attacher
+---
+kind: ServiceAccount
+apiVersion: v1
+metadata:
+  name: csi-attacher
+
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: external-attacher-runner
+rules:
+  - apiGroups: [""]
+    resources: ["persistentvolumes"]
+    verbs: ["get", "list", "watch", "update"]
+  - apiGroups: [""]
+    resources: ["nodes"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: ["storage.k8s.io"]
+    resources: ["volumeattachments"]
+    verbs: ["get", "list", "watch", "update"]
+
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: csi-attacher-role
+subjects:
+  - kind: ServiceAccount
+    name: csi-attacher
+    namespace: {{.Release.Namespace}}
+roleRef:
+  kind: ClusterRole
+  name: external-attacher-runner
+  apiGroup: rbac.authorization.k8s.io
+
+---
+kind: StatefulSet
+apiVersion: apps/v1
+metadata:
+  name: csi-attacher-glusterfsplugin
+spec:
+  serviceName: csi-attacher
+  replicas: 1
+  selector:
+    matchLabels:
+      app: csi-attacher-glusterfsplugin
+  template:
+    metadata:
+      labels:
+        app: csi-attacher-glusterfsplugin
+    spec:
+      serviceAccountName: csi-attacher
+      containers:
+        - name: csi-attacher
+          image: quay.io/k8scsi/csi-attacher:v0.3.0
+          args:
+            - "--v=5"
+            - "--csi-address=$(ADDRESS)"
+          env:
+            - name: ADDRESS
+              value: /var/lib/csi/sockets/pluginproxy/csi.sock
+          volumeMounts:
+            - name: socket-dir
+              mountPath: /var/lib/csi/sockets/pluginproxy/
+
+        - name: glusterfs
+          image: docker.io/gluster/glusterfs-csi-driver
+          args:
+            - "--nodeid=$(NODE_ID)"
+            - "--endpoint=$(CSI_ENDPOINT)"
+            - "--resturl=$(REST_URL)"
+          env:
+            - name: NODE_ID
+              valueFrom:
+                fieldRef:
+                  fieldPath: spec.nodeName
+            - name: CSI_ENDPOINT
+              value: unix://plugin/csi.sock
+            - name: REST_URL
+              value: http://gluster-mgmt:24007
+          volumeMounts:
+            - name: socket-dir
+              mountPath: /plugin
+      volumes:
+        - name: socket-dir
+          emptyDir:
+
+
+# CSI NodePlugin
+---
+kind: ServiceAccount
+apiVersion: v1
+metadata:
+  name: csi-nodeplugin
+
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: csi-nodeplugin
+rules:
+  - apiGroups: [""]
+    resources: ["persistentvolumes"]
+    verbs: ["get", "list", "watch", "update"]
+  - apiGroups: [""]
+    resources: ["nodes"]
+    verbs: ["get", "list", "watch", "update"]
+  - apiGroups: ["storage.k8s.io"]
+    resources: ["volumeattachments"]
+    verbs: ["get", "list", "watch", "update"]
+
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: csi-nodeplugin
+subjects:
+  - kind: ServiceAccount
+    name: csi-nodeplugin
+    namespace: {{.Release.Namespace}}
+roleRef:
+  kind: ClusterRole
+  name: csi-nodeplugin
+  apiGroup: rbac.authorization.k8s.io
+
+---
+kind: DaemonSet
+apiVersion: apps/v1
+metadata:
+  name: csi-nodeplugin-glusterfsplugin
+spec:
+  selector:
+    matchLabels:
+      app: csi-nodeplugin-glusterfsplugin
+  template:
+    metadata:
+      labels:
+        app: csi-nodeplugin-glusterfsplugin
+    spec:
+      serviceAccount: csi-nodeplugin
+      containers:
+        - name: driver-registrar
+          image: quay.io/k8scsi/driver-registrar:v0.3.0
+          args:
+            - "--v=5"
+            - "--csi-address=$(ADDRESS)"
+          env:
+            - name: ADDRESS
+              value: /plugin/csi.sock
+            - name: KUBE_NODE_NAME
+              valueFrom:
+                fieldRef:
+                  fieldPath: spec.nodeName
+          volumeMounts:
+            - name: plugin-dir
+              mountPath: /plugin
+        - name: gluster-nodeplugin
+          securityContext:
+            privileged: true
+            capabilities:
+              add: ["SYS_ADMIN"]
+            allowPrivilegeEscalation: true
+          image: docker.io/gluster/glusterfs-csi-driver
+          args:
+            - "--nodeid=$(NODE_ID)"
+            - "--endpoint=$(CSI_ENDPOINT)"
+            - "--resturl=$(REST_URL)"
+          env:
+            - name: NODE_ID
+              valueFrom:
+                fieldRef:
+                  fieldPath: spec.nodeName
+            - name: CSI_ENDPOINT
+              value: unix://plugin/csi.sock
+            - name: REST_URL
+              value: http://gluster-mgmt:24007
+          volumeMounts:
+            - name: plugin-dir
+              mountPath: /plugin
+            - name: pods-mount-dir
+              mountPath: /var/lib/kubelet/pods
+              mountPropagation: "Bidirectional"
+      volumes:
+        - name: plugin-dir
+          hostPath:
+            path: /var/lib/kubelet/plugins/org.gluster.glusterfs
+            type: DirectoryOrCreate
+        - name: pods-mount-dir
+          hostPath:
+            path: /var/lib/kubelet/pods
+            type: Directory
+
+
+# CSI Provisioner
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: csi-provisioner
+
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: external-provisioner-runner
+rules:
+  - apiGroups: [""]
+    resources: ["persistentvolumes"]
+    verbs: ["get", "list", "watch", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["persistentvolumeclaims"]
+    verbs: ["get", "list", "watch", "update"]
+  - apiGroups: ["storage.k8s.io"]
+    resources: ["storageclasses"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: [""]
+    resources: ["events"]
+    verbs: ["list", "watch", "create", "update", "patch"]
+
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: csi-provisioner-role
+subjects:
+  - kind: ServiceAccount
+    name: csi-provisioner
+    namespace: {{.Release.Namespace}}
+roleRef:
+  kind: ClusterRole
+  name: external-provisioner-runner
+  apiGroup: rbac.authorization.k8s.io
+
+---
+kind: StatefulSet
+apiVersion: apps/v1
+metadata:
+  name: csi-provisioner-glusterfsplugin
+spec:
+  serviceName: csi-provisioner-glusterfsplugin
+  replicas: 1
+  selector:
+    matchLabels:
+      app: csi-provisioner-glusterfsplugin
+  template:
+    metadata:
+      name: csi-provisioner
+      labels:
+        app: csi-provisioner-glusterfsplugin
+    spec:
+      serviceAccountName: csi-provisioner
+      containers:
+        - name: csi-provisioner
+          image: quay.io/k8scsi/csi-provisioner:v0.3.0
+          args:
+            - "--provisioner=org.gluster.glusterfs"
+            - "--csi-address=$(ADDRESS)"
+          env:
+            - name: ADDRESS
+              value: /var/lib/csi/sockets/pluginproxy/csi.sock
+          volumeMounts:
+            - name: socket-dir
+              mountPath: /var/lib/csi/sockets/pluginproxy/
+        - name: gluster-provisioner
+          image: docker.io/gluster/glusterfs-csi-driver
+          args:
+            - "--nodeid=$(NODE_ID)"
+            - "--endpoint=$(CSI_ENDPOINT)"
+            - "--resturl=$(REST_URL)"
+          env:
+            - name: NODE_ID
+              valueFrom:
+                fieldRef:
+                  fieldPath: spec.nodeName
+            - name: CSI_ENDPOINT
+              value: unix://plugin/csi.sock
+            - name: REST_URL
+              value: http://gluster-mgmt:24007
+          volumeMounts:
+            - name: socket-dir
+              mountPath: /plugin
+      volumes:
+        - name: socket-dir
+          emptyDir:
diff --git a/helm/gluster-container-storage/templates/etcd-cluster.yaml b/helm/gluster-container-storage/templates/etcd-cluster.yaml
new file mode 100644
index 0000000..4777a69
--- /dev/null
+++ b/helm/gluster-container-storage/templates/etcd-cluster.yaml
@@ -0,0 +1,10 @@
+---
+kind: EtcdCluster
+apiVersion: etcd.database.coreos.com/v1beta2
+metadata:
+  name: etcd
+  labels:
+    gcs: etcd-cluster
+spec:
+  size: 3
+  version: 3.3.8
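+# Note: the etcd-operator dependency (pulled in via requirements.yaml) is what
+# acts on this EtcdCluster resource; it is expected to bring up a 3-member etcd
+# cluster and expose it through the "etcd-client" service that glusterd2 points
+# at via GD2_ETCDENDPOINTS in gd2-cluster.yaml.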
diff --git a/helm/gluster-container-storage/templates/gd2-cluster.yaml b/helm/gluster-container-storage/templates/gd2-cluster.yaml
new file mode 100644
index 0000000..629bbbc
--- /dev/null
+++ b/helm/gluster-container-storage/templates/gd2-cluster.yaml
@@ -0,0 +1,99 @@
+---
+kind: DaemonSet
+apiVersion: apps/v1
+metadata:
+  name: glusterd2-cluster
+  labels:
+    gcs: glusterd2-cluster
+spec:
+  selector:
+    matchLabels:
+      gcs: glusterd2
+  template:
+    metadata:
+      labels:
+        name: glusterd2
+        gcs: glusterd2
+    spec:
+      containers:
+        - name: glusterd2
+          image: docker.io/gluster/glusterd2-nightly:20180920
+          livenessProbe:
+            httpGet:
+              path: /ping
+              port: 24007
+            initialDelaySeconds: 60
+            periodSeconds: 60
+          env:
+            - name: GD2_ETCDENDPOINTS
+              value: "http://etcd-client:2379"
+            - name: GD2_CLUSTER_ID
+              value: "{{.Values.clusterId}}"
+            # TODO: Remove RESTAUTH false once we enable setting auth token
+            # using secrets
+            - name: GD2_RESTAUTH
+              value: "false"
+          securityContext:
+            capabilities: {}
+            privileged: true
+          volumeMounts:
+            - name: gd2-state
+              mountPath: "/var/lib/glusterd2"
+            - name: gluster-dev
+              mountPath: "/dev"
+            - name: gluster-cgroup
+              mountPath: "/sys/fs/cgroup"
+              readOnly: true
+            - name: gluster-lvm
+              mountPath: "/run/lvm"
+            - name: gluster-kmods
+              mountPath: "/usr/lib/modules"
+              readOnly: true
+      volumes:
+        - name: gd2-state
+          hostPath:
+            path: "/var/lib/glusterd2"
+        - name: gluster-dev
+          hostPath:
+            path: "/dev"
+        - name: gluster-cgroup
+          hostPath:
+            path: "/sys/fs/cgroup"
+        - name: gluster-lvm
+          hostPath:
+            path: "/run/lvm"
+        - name: gluster-kmods
+          hostPath:
+            path: "/usr/lib/modules"
+
+---
+kind: Service
+apiVersion: v1
+metadata:
+  name: gluster-mgmt
+  labels:
+    gcs: glusterd2-service
+spec:
+  selector:
+    gcs: glusterd2
+  ports:
+    - protocol: TCP
+      port: 24007
+      targetPort: 24007
+
+# ---
+# kind: Service
+# apiVersion: v1
+# metadata:
+#   name: glusterd2-client-nodeport
+#   labels:
+#     gcs: glusterd2-service
+# spec:
+#   selector:
+#     gcs: glusterd2
+#   ports:
+#     - protocol: TCP
+#       port: 24007
+#       targetPort: 24007
+#       nodePort: 31007
+#   type: NodePort
diff --git a/helm/gluster-container-storage/templates/storageClass.yaml b/helm/gluster-container-storage/templates/storageClass.yaml
new file mode 100644
index 0000000..c63c41a
--- /dev/null
+++ b/helm/gluster-container-storage/templates/storageClass.yaml
@@ -0,0 +1,8 @@
+---
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: glusterfs-csi
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: org.gluster.glusterfs
diff --git a/helm/gluster-container-storage/values.yaml b/helm/gluster-container-storage/values.yaml
new file mode 100644
index 0000000..2b90b3a
--- /dev/null
+++ b/helm/gluster-container-storage/values.yaml
@@ -0,0 +1,14 @@
+---
+# Default values for gluster-container-storage.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
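+#
+# Values here can be overridden at install time, for example (hypothetical UUID):
+#   helm install --namespace gcs --set clusterId=<uuid> gluster-container-storage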
+
+# Configuration for the etcd operator
+etcd-operator:
+  etcdOperator.image.tag: v0.9.2
+  deployments:
+    backupOperator: false
+    restoreOperator: false
+
+# An arbitrary UUID to identify the Gluster cluster
+clusterId: de1a6aac-2357-447c-977b-c1484108c34f
diff --git a/helm/helm-sa.yaml b/helm/helm-sa.yaml
new file mode 100644
index 0000000..af5e784
--- /dev/null
+++ b/helm/helm-sa.yaml
@@ -0,0 +1,20 @@
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: tiller
+  namespace: kube-system
+
+---
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRoleBinding
+metadata:
+  name: tiller
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: cluster-admin
+subjects:
+  - kind: ServiceAccount
+    name: tiller
+    namespace: kube-system