Just a personal testing repository
I used Docker Desktop
I used helm as I am lazy
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx

Deploy the test app:

kubectl apply -f .

To make a pod go unready on demand I set the readiness probe to check for the host.html file. If the file vanishes the probe fails.
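The probe in the deployment looked roughly like this (a sketch reconstructed from the file paths used in the debug commands below; an httpGet check against /host.html would behave the same):

readinessProbe:
  exec:
    command: ["cat", "/usr/local/apache2/htdocs/host.html"]  # fails once the file is moved away
  periodSeconds: 5
  failureThreshold: 1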
To debug this I exec'd into the pod and ran these commands:
# Make pod not ready
mv /usr/local/apache2/htdocs/host.html /usr/local/apache2/htdocs/host.html2
# Make pod ready
mv /usr/local/apache2/htdocs/host.html2 /usr/local/apache2/htdocs/host.html

Outcome: nginx fails the test. As soon as the pod is not ready, nginx removes it from the pool and stops routing any traffic to it, existing sticky sessions included. It turns out nginx only uses the readiness probe, not the liveness probe.
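For context, the stickiness under test is cookie-based affinity on the Ingress, which ingress-nginx configures with annotations, roughly like this (a sketch; names are illustrative, the real manifests are the ones applied with kubectl apply -f . above):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpd
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: "cookie"            # enable sticky sessions
    nginx.ingress.kubernetes.io/session-cookie-name: "route"  # cookie carrying the affinity
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: httpd
          servicePort: 80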
helm uninstall nginx-ingress
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update
helm install haproxy-ingress haproxytech/kubernetes-ingress \
--set controller.kind=DaemonSet \
--set controller.ingressClass=haproxy \
--set controller.service.type=LoadBalancer

Same issue as nginx = failed.
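For reference, haproxytech's controller drives cookie persistence from a service annotation, along these lines (an assumption on my part, mirroring the nginx setup; the cookie name is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: httpd
  annotations:
    haproxy.org/cookie-persistence: "SERVERID"  # assumed annotation enabling cookie-based stickiness
spec:
  selector:
    app: httpd
  ports:
  - port: 80
    targetPort: 80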
Trying this HAProxy - https://github.com/jcmoraisjr/haproxy-ingress
First to install it:
kubectl create -f https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
kubectl label node docker-desktop role=ingress-controller

Check it's running:
kubectl -n ingress-controller get daemonset
kubectl -n ingress-controller get pod

Now deploy the load balancer:
kubectl apply -f jcmoraisjr-loadbalancer.yaml

Draining still did not work when tested. We need to enable the drain-support flag:
kubectl edit configmap haproxy-ingress -n ingress-controller

So the YAML looks like this:
apiVersion: v1
data:
  drain-support: "true"
kind: ConfigMap
metadata:
  creationTimestamp: "2020-09-12T13:57:35Z"
  name: haproxy-ingress
  namespace: ingress-controller
  resourceVersion: "192909"
  selfLink: /api/v1/namespaces/ingress-controller/configmaps/haproxy-ingress
  uid: beca0398-69d6-47eb-b9b7-bd7c318f49a4

Restart the ingress pod (not sure it's required):
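The same change can be made non-interactively instead of going through kubectl edit:

kubectl patch configmap haproxy-ingress -n ingress-controller \
  --type merge -p '{"data":{"drain-support":"true"}}'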
kubectl delete pod haproxy-ingress-wn9hd -n ingress-controller

It works - with drain-support enabled, existing sticky sessions continue to reach the draining pod while new sessions go elsewhere.
Download and install istioctl - https://istio.io/latest/docs/setup/getting-started/
curl -L https://istio.io/downloadIstio | sh -

Install Istio and label the namespace for automatic Envoy sidecar injection:
istioctl install --set profile=demo
kubectl label namespace default istio-injection=enabled

Deploy the deployment and service:
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

Then apply the Istio config (virtual service, gateway and destination rule):
kubectl apply -f istio-gateway.yaml

Test: session stickiness works, but it doesn't cope with scaling - scaling up or down recreates all the cookies and re-routes all traffic.
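The stickiness comes from consistent-hash load balancing on a cookie in the DestinationRule; istio-gateway.yaml would contain something along these lines (a sketch, cookie name and TTL are illustrative):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpd
spec:
  host: httpd
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: session-cookie  # illustrative cookie name
          ttl: 3600s

Consistent hashing also explains the scaling behaviour: when the set of endpoints changes, the hash ring is rebuilt, so existing cookies map to different pods.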
There is no way to drain sessions using this ingress.
Followed this page: https://docs.microsoft.com/en-us/azure/application-gateway/tutorial-ingress-controller-add-on-new
az feature register --name AKS-IngressApplicationGatewayAddon --namespace Microsoft.ContainerService

Wait for the feature to change to Registered - takes about 15 minutes:
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKS-IngressApplicationGatewayAddon')].{Name:name,State:properties.state}"

Refresh the registration:
az provider register --namespace Microsoft.ContainerService

Add the extension to the CLI:
az extension add --name aks-preview
az extension list

Create the resource group and cluster:

az group create --name myResourceGroup --location ukwest
az aks create -n myCluster -g myResourceGroup --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name myApplicationGateway --appgw-subnet-prefix "10.2.0.0/16" --node-count 1 --kubernetes-version 1.18.8 --generate-ssh-keys

Deploy the apps. Get credentials first:
az aks get-credentials -n myCluster -g myResourceGroup

Now deploy the components:
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f agic-ingress.yaml

Now to do persistence testing.
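The ApplicationGatewayAffinity cookie used in the test below comes from AGIC's cookie-based-affinity annotation; agic-ingress.yaml would contain something like this (a sketch, service name illustrative):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpd
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/cookie-based-affinity: "true"  # issues the ApplicationGatewayAffinity cookie
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: httpd
          servicePort: 80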
The Application Gateway health probe does NOT use the readiness probe - shame - it uses the liveness probe.
Even if you set the AGIC probe manually it still doesn't work. As soon as the AGIC probe fails, no more sessions can route to the pod (as it's removed from the pool).
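One way to set the probe manually is via AGIC's health-probe annotations (an assumption about the mechanism - the exact method used here isn't recorded, and older AGIC releases needed the probe changed on the Application Gateway itself):

metadata:
  annotations:
    appgw.ingress.kubernetes.io/health-probe-path: "/host.html"  # assumed annotation; points the probe at the readiness file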
curl 'http://52.142.173.222/' -H 'Cookie: ApplicationGatewayAffinity=7626ab90412fde0c1d28be613e212f55'

# Make pod not ready
mv /usr/local/apache2/htdocs/host.html /usr/local/apache2/htdocs/host.html2
# Make pod ready
mv /usr/local/apache2/htdocs/host.html2 /usr/local/apache2/htdocs/host.html
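To watch this from outside, toggle readiness and poll the gateway with the sticky cookie (the deployment name is a placeholder; the IP and cookie value are from the test above):

# flip readiness without an interactive shell
kubectl exec deploy/httpd -- mv /usr/local/apache2/htdocs/host.html /usr/local/apache2/htdocs/host.html2

# poll with the affinity cookie and watch where traffic lands
for i in $(seq 1 10); do
  curl -s 'http://52.142.173.222/host.html' \
    -H 'Cookie: ApplicationGatewayAffinity=7626ab90412fde0c1d28be613e212f55'
  sleep 1
done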