2 changes: 2 additions & 0 deletions charts/elastic-operator/Changelog.md
@@ -15,6 +15,8 @@
The `container` input for Filebeat was deprecated in version `7.16` and is completely disabled by version `9.0.0` (see [#42295](https://github.com/elastic/beats/pull/42295)).
Following the official [migration guide](https://www.elastic.co/docs/reference/beats/filebeat/migrate-to-filestream), in preparation for the migration from the `container` input to `filestream`, a specific `take_over` tag must be set so that Filebeat can distinguish logs created by the `container` input from those created by the `filestream` input. Otherwise an error is thrown and data may be duplicated.

Important: Upgrading to version 9.x requires a specific upgrade path (the latest upgrade path can be found here). Before installing version 9.0.0, you must first upgrade to version 8.18.1; otherwise the upgrade will fail. Use version `8.18.1-fb-migr-filestream` from our stack for an easy upgrade.

- this Helm chart version adds the `filestream` input and sets the `take_over` tag as part of the migration
- in version `9.0.0` the `take_over` tag will be removed to complete the migration
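
For orientation, a minimal sketch of what a filestream input with take-over enabled looks like (illustrative only — the `id` and `paths` are assumptions, not this chart's actual configuration; the linked migration guide is the authoritative reference):

```yaml
filebeat.inputs:
  - type: filestream
    # id and paths are placeholders; the chart's real autodiscover
    # configuration differs.
    id: container-logs
    take_over: true
    paths:
      - /var/log/containers/*.log
```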

37 changes: 34 additions & 3 deletions charts/elastic-operator/README.md
@@ -33,12 +33,23 @@ It comes also with a backup functionality. This is the version using ECK-operato
| auth.roles.custom_filebeat.indices[0].privileges[2] | string | `"create_doc"` | |
| auth.roles.custom_filebeat.indices[0].privileges[3] | string | `"view_index_metadata"` | |
| auth.roles.custom_filebeat.indices[0].privileges[4] | string | `"manage_follow_index"` | |
| auth.roles.logstash.cluster[0] | string | `"manage_index_templates"` | |
| auth.roles.logstash.cluster[1] | string | `"monitor"` | |
| auth.roles.logstash.cluster[2] | string | `"manage_ilm"` | |
| auth.roles.logstash.indices[0].names[0] | string | `"*"` | |
| auth.roles.logstash.indices[0].privileges[0] | string | `"write"` | |
| auth.roles.logstash.indices[0].privileges[1] | string | `"create"` | |
| auth.roles.logstash.indices[0].privileges[2] | string | `"create_index"` | |
| auth.roles.logstash.indices[0].privileges[3] | string | `"manage"` | |
| auth.roles.logstash.indices[0].privileges[4] | string | `"manage_ilm"` | |
| auth.users.custom_elastalert.existingPassword | string | `""` | |
| auth.users.custom_elastalert.roles[0] | string | `"custom_elastalert"` | |
| auth.users.custom_filebeat.existingPassword | string | `""` | |
| auth.users.custom_filebeat.roles[0] | string | `"custom_filebeat"` | |
| auth.users.custom_kibana_guest.existingPassword | string | `""` | |
| auth.users.custom_kibana_guest.roles[0] | string | `"viewer"` | |
| auth.users.logstash.existingPassword | string | `""` | |
| auth.users.logstash.roles[0] | string | `"logstash"` | |
| backup.enabled | bool | `false` | |
| backup.image.repository | string | `"docker.io/curlimages/curl"` | |
| backup.image.tag | string | `"8.13.0"` | |
@@ -155,6 +166,8 @@ It comes also with a backup functionality. This is the version using ECK-operato
| generatePasswords.secrets[2].key | string | `"password"` | |
| generatePasswords.secrets[2].name | string | `"{{ .Release.Name }}-user-custom-elastalert"` | |
| generatePasswords.tolerations | list | `[]` | |
| generateTLS.enabled | bool | `false` | |
| generateTLS.secretName | string | `"tls-elastic"` | |
| ilm.image.repository | string | `"docker.io/curlimages/curl"` | |
| ilm.image.tag | string | `"8.13.0"` | |
| ilm.image.userId | int | `100` | |
@@ -175,9 +188,10 @@ It comes also with a backup functionality. This is the version using ECK-operato
| ilm.policies.short.indexPatterns[2] | string | `"kyverno*"` | |
| ilm.policies.short.indexPatterns[3] | string | `"monitoring*"` | |
| ilm.tolerations | list | `[]` | |
| indexPatternInit.image.repository | string | `"docker.io/curlimages/curl"` | |
| indexPatternInit.image.tag | string | `"8.12.1"` | |
| indexPatternInit.image.userId | int | `100` | |
| indexPatternInit.enabled | bool | `true` | |
| indexPatternInit.image.repository | string | `"toolbox"` | |
| indexPatternInit.image.tag | string | `"1.0.0"` | |
| indexPatternInit.image.userId | int | `10001` | |
| indexPatternInit.indices.admin.timestampField | string | `"@timestamp"` | |
| indexPatternInit.indices.argocd.timestampField | string | `"@timestamp"` | |
| indexPatternInit.indices.auth.timestampField | string | `"@timestamp"` | |
@@ -188,6 +202,7 @@ It comes also with a backup functionality. This is the version using ECK-operato
| indexPatternInit.indices.routing.timestampField | string | `"@timestamp"` | |
| indexPatternInit.indices.vault.timestampField | string | `"@timestamp"` | |
| indexPatternInit.nodeSelector | object | `{}` | |
| indexPatternInit.skipExisting | bool | `false` | |
| indexPatternInit.tolerations | list | `[]` | |
| ingress.elasticsearch.annotations."traefik.ingress.kubernetes.io/router.entrypoints" | string | `"websecure"` | |
| ingress.elasticsearch.annotations."traefik.ingress.kubernetes.io/router.middlewares" | string | `"routing-oidc-forward-auth@kubernetescrd"` | |
@@ -249,6 +264,22 @@ It comes also with a backup functionality. This is the version using ECK-operato
| kibana.resources.requests.cpu | string | `"100m"` | |
| kibana.resources.requests.memory | string | `"1G"` | |
| kibana.version | string | `"{{ .Chart.AppVersion }}"` | |
| logstash.config."pipeline.batch.size" | int | `125` | |
| logstash.elasticsearchRefs[0].clusterName | string | `"default"` | |
| logstash.elasticsearchRefs[0].name | string | `"{{ .Release.Name }}"` | |
| logstash.enabled | bool | `false` | |
| logstash.env[0].name | string | `"NODE_NAME"` | |
| logstash.env[0].valueFrom.fieldRef.apiVersion | string | `"v1"` | |
| logstash.env[0].valueFrom.fieldRef.fieldPath | string | `"spec.nodeName"` | |
| logstash.env[1].name | string | `"LOGSTASH_USERNAME"` | |
| logstash.env[1].valueFrom.secretKeyRef.key | string | `"username"` | |
| logstash.env[1].valueFrom.secretKeyRef.name | string | `"{{ .Release.Name }}-user-logstash"` | |
| logstash.env[2].name | string | `"LOGSTASH_PASSWORD"` | |
| logstash.env[2].valueFrom.secretKeyRef.key | string | `"password"` | |
| logstash.env[2].valueFrom.secretKeyRef.name | string | `"{{ .Release.Name }}-user-logstash"` | |
| logstash.pipelines[0]."config.string" | string | `"input {\n beats {\n port => 5044\n }\n}\nfilter {\n ### The config works from top to bottom, everytime it finds a match, the target_index will be overwritten\n mutate {\n add_field => { \"[@metadata][target_index]\" => \"not-defined-%{[agent][version]}-%{+yyyy.MM}\" }\n }\n ### Example: Overrides the default index, if the namespace is argocd\n if [kubernetes][namespace] == \"argocd\" {\n mutate { replace => { \"[@metadata][target_index]\" => \"argocd_%{[agent][version]}-%{+yyyy.MM}\" } }\n }\n}\n\noutput {\n elasticsearch {\n hosts => [ \"${DEFAULT_ES_HOSTS}\" ]\n user => \"${LOGSTASH_USERNAME}\"\n password => \"${LOGSTASH_PASSWORD}\"\n cacert => \"${DEFAULT_ES_SSL_CERTIFICATE_AUTHORITY}\"\n index => \"%{[@metadata][target_index]}\"\n }\n}\n"` | |
| logstash.pipelines[0]."pipeline.id" | string | `"default-elasticsearch"` | |
| logstash.version | string | `"{{ .Chart.AppVersion }}"` | |
| policyException.enabled | bool | `true` | |
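
As a sketch, the new optional Logstash layer could be enabled through values such as the following (key names taken from the table above; per the Filebeat template in this chart, setting `logstash.enabled` to true switches the Filebeat output from `output.elasticsearch` to `output.logstash`):

```yaml
logstash:
  enabled: true            # default is false
  config:
    pipeline.batch.size: 125
```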

----------------------------------------------
10 changes: 8 additions & 2 deletions charts/elastic-operator/templates/eks-stack/elasticsearch.yaml
@@ -25,11 +25,17 @@ spec:
{{- end }}
{{- end }}
{{- end }}
{{- end }}
http:
{{- with $.Values.generateTLS -}}
{{- if .enabled }}
tls:
certificate:
secretName: {{$.Release.Name}}-{{ .secretName }}
{{- end }}
{{- end }}
{{- with $.Values.ingress.elasticsearch -}}
{{- if and (eq .enabled true) (eq .className "traefik") }}
http:
service:
metadata:
annotations:
traefik.ingress.kubernetes.io/service.serverstransport: "{{ $.Release.Namespace }}-{{ $.Release.Name }}-elasticsearch@kubernetescrd"
11 changes: 11 additions & 0 deletions charts/elastic-operator/templates/eks-stack/filebeat.yaml
@@ -9,8 +9,10 @@ metadata:
spec:
type: filebeat
version: {{ tpl .version $ }}
{{- if eq $.Values.logstash.enabled false }}
elasticsearchRef:
name: {{ $.Release.Name }}
{{- end }}
kibanaRef:
name: {{ $.Release.Name }}
config:
@@ -19,11 +21,20 @@ spec:
{{- .autodiscover | toYaml | nindent 8 }}
processors:
{{- concat .processors .extraProcessors | toYaml | nindent 6 }}
{{- if eq $.Values.logstash.enabled false }}
output.elasticsearch:
username: '${ELASTICSEARCH_USERNAME}'
password: '${ELASTICSEARCH_PASSWORD}'
indices:
{{- concat .extraIndices .indices | toYaml | nindent 8}}
{{- end }}
{{- if eq $.Values.logstash.enabled true }}
output.logstash:
enabled: true
loadbalance: true
hosts:
- {{$.Release.Name}}-ls-api.monitoring.svc:5044
{{- end }}
daemonSet:
podTemplate:
metadata:
49 changes: 49 additions & 0 deletions charts/elastic-operator/templates/eks-stack/logstash.yaml
@@ -0,0 +1,49 @@
{{- with .Values.logstash}}
{{- if .enabled }}
apiVersion: logstash.k8s.elastic.co/v1alpha1
kind: Logstash
metadata:
name: {{ $.Release.Name }}
annotations:
argocd.argoproj.io/sync-wave: "20"
spec:
count: 2
version: {{ tpl .version $ }}
elasticsearchRefs:
{{- tpl (toYaml .elasticsearchRefs) $ | nindent 4 }}
config:
{{- toYaml .config | nindent 4 }}
pipelines:
{{- toYaml .pipelines | nindent 2 }}
podTemplate:
metadata:
annotations:
checksum/users: {{ include (print $.Template.BasePath "/eks-stack/elasticsearch-users.yaml") $ | sha256sum | substr 0 50 }}
spec:
securityContext:
runAsUser: 0
{{- if .tolerations }}
tolerations:
{{- .tolerations | toYaml | nindent 8 }}
{{- end }}
containers:
- name: logstash
resources:
{{- .resources | toYaml | nindent 12}}
env:
{{- tpl (.env | toYaml) $ | nindent 10 }}
{{- if .extraEnv }}
{{- .extraEnv | toYaml | nindent 10 }}
{{- end }}
volumeMounts:
{{- .volumeMounts | toYaml | nindent 10}}
{{- if .extraVolumeMounts }}
{{- .extraVolumeMounts | toYaml | nindent 10 }}
{{- end }}
volumes:
{{- .volumes | toYaml | nindent 8 }}
{{- if .extraVolumes }}
{{- .extraVolumes | toYaml | nindent 8 }}
{{- end -}}
{{- end -}}
{{- end -}}
47 changes: 47 additions & 0 deletions charts/elastic-operator/templates/generateTLS/certificate.yaml
@@ -0,0 +1,47 @@
{{ with .Values.generateTLS }}
{{ if .enabled }}
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: {{$.Release.Name}}-selfsigned-root-issuer
namespace: {{$.Release.Namespace}}
annotations:
"helm.sh/resource-policy": keep
argocd.argoproj.io/sync-wave: "5"
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: {{$.Release.Name}}-selfsigned-issuer
namespace: {{$.Release.Namespace}}
annotations:
"helm.sh/resource-policy": keep
argocd.argoproj.io/sync-wave: "7"
spec:
ca:
secretName: {{$.Release.Name}}-selfsigned-root-ca-secret
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{$.Release.Name}}-selfsigned-root-ca
annotations:
"helm.sh/resource-policy": keep
argocd.argoproj.io/sync-wave: "6"
spec:
isCA: true
commonName: {{$.Release.Name}}-selfsigned-root-ca
secretName: {{$.Release.Name}}-selfsigned-root-ca-secret
privateKey:
algorithm: ECDSA
size: 256
duration: 43800h # 5 years (long-lived)
renewBefore: 720h
issuerRef:
name: {{$.Release.Name}}-selfsigned-root-issuer
kind: Issuer
---
{{- end }}
{{- end }}
24 changes: 24 additions & 0 deletions charts/elastic-operator/templates/generateTLS/issuer.yaml
@@ -0,0 +1,24 @@
{{ with .Values.generateTLS }}
{{ if .enabled }}
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{$.Release.Name}}-es-cert
annotations:
"helm.sh/resource-policy": keep
argocd.argoproj.io/sync-wave: "10"
spec:
dnsNames:
- localhost
- {{$.Release.Name}}-es-http
- {{$.Release.Name}}-es-http.{{$.Release.Namespace}}.svc
- {{$.Release.Name}}-es-http.{{$.Release.Namespace}}.svc.cluster.local
issuerRef:
kind: Issuer
name: {{$.Release.Name}}-selfsigned-issuer
secretName: {{$.Release.Name}}-{{.secretName}}
subject:
organizations:
- elastic
{{- end }}
{{- end }}
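
Putting the two manifests above together, self-signed TLS for the Elasticsearch HTTP layer could be enabled with values like the following (keys from the README table; the Certificate's secret is rendered as `<release-name>-<secretName>`):

```yaml
generateTLS:
  enabled: true            # default is false
  secretName: tls-elastic  # combined with the release name on the Certificate
```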
33 changes: 26 additions & 7 deletions charts/elastic-operator/templates/index-pattern-init/config.yaml
@@ -1,3 +1,4 @@
{{- if .Values.indexPatternInit.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
@@ -7,13 +8,31 @@ metadata:
argocd.argoproj.io/hook-delete-policy: HookSucceeded
data:
index-pattern-init.sh: |
#!/bin/sh

#!/bin/bash
KIBANA_BASE_PATH="{{ .Values.ingress.kibana.path }}"

{{- if .Values.indexPatternInit.skipExisting }}
KIBANA_DATA_VIEWS=$(curl --silent \
--user "$ELASTICSEARCH_USERNAME:$ELASTICSEARCH_PASSWORD" \
--cacert /kb-cert/ca.crt \
--request GET https://$KIBANA_ENDPOINT$KIBANA_BASE_PATH/api/data_views | jq -r '.data_view[].name')
{{- end }}

echo "$KIBANA_DATA_VIEWS"

{{- range $indexName,$indexValues := .Values.indexPatternInit.indices }}
# Apply default Index Pattern into Kibana
echo "create index pattern {{$indexName}}"
curl -u "$ELASTICSEARCH_USERNAME:$ELASTICSEARCH_PASSWORD" --cacert /kb-cert/ca.crt -X POST -v https://$KIBANA_ENDPOINT$KIBANA_BASE_PATH/api/index_patterns/index_pattern -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"index_pattern": { "id": "{{$indexName}}", "title": "{{ coalesce $indexValues.index (printf "%s*" $indexName) }}","timeFieldName": "{{$indexValues.timestampField}}"},"override":true,"refresh_fields":true}'
echo "create index pattern {{$indexName}} finished"
{{ end }}
if {{ $.Values.indexPatternInit.skipExisting }} && printf "%s\n" "$KIBANA_DATA_VIEWS" | grep -q -x "{{ $indexName }}"; then
echo "Index pattern \"{{ $indexName }}\" already exists in Kibana"
else
# Apply default Index Pattern into Kibana
echo -e "Index pattern \"{{ $indexName }}\" does not exist in Kibana. Creating it ... \n"
curl --silent \
--user "$ELASTICSEARCH_USERNAME:$ELASTICSEARCH_PASSWORD" \
--cacert /kb-cert/ca.crt \
--request POST https://$KIBANA_ENDPOINT$KIBANA_BASE_PATH/api/data_views/data_view \
--header "Content-Type: application/json; Elastic-Api-Version=2023-10-31" \
--header "kbn-xsrf: string" \
--data '{"data_view": {"name": "{{$indexName}}","title": "{{ coalesce $indexValues.index (printf "%s*" $indexName) }}","timeFieldName": "{{$indexValues.timestampField}}"},"override":true}'
fi
{{- end }}
{{- end }}
5 changes: 3 additions & 2 deletions charts/elastic-operator/templates/index-pattern-init/job.yaml
@@ -1,5 +1,5 @@
{{- if .Values.indexPatternInit.enabled }}
{{- with .Values.indexPatternInit }}

apiVersion: batch/v1
kind: Job
metadata:
@@ -23,7 +23,7 @@ spec:
- name: {{ $.Release.Name }}-index-pattern-init
image: {{ .image.repository }}:{{ .image.tag }}
imagePullPolicy: IfNotPresent
command: [ '/bin/sh', '-c' ]
command: [ '/bin/bash', '-c' ]
args: [ "/index-pattern-init-config/index-pattern-init.sh" ]
env:
- name: ELASTICSEARCH_USERNAME
@@ -67,3 +67,4 @@ spec:
type: RuntimeDefault
runAsUser: {{ .image.userId }}
{{- end }}
{{- end }}