This document is a guide to installing the basic features of a TEADAL node, which is based on MicroK8s.
We recommend deploying a TEADAL node on a machine with 8 cores, 32 GB of memory, and 100 GB of storage. Depending on the TEADAL tools you install, more or fewer resources may be required.
- Setup the environment
- Quick installation
- Step-by-step installation
- Checking the installation
- Next steps
We assume that a fork (or a copy, in case you do not want to receive new releases) of the TEADAL node repository has been created. This repository will contain the specific configuration for your installation: one repo = one TEADAL node. Since we adopt ArgoCD as the CI/CD tool, and it fetches the repo directly to work out which tools must be deployed, we suggest one repo for each TEADAL node deployment.
Now, you need to clone the repo on the machine that will host the node
git clone https://gitlab.teadal.ubiwhere.com/teadal-pilots/<name of pilot>/<name of pilot>.git
A deploy token has been obtained for that repo. If you need additional information on how to set up a deploy token, refer to this page.
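If the repo is private, one way to use the deploy token is to embed it in the clone URL. A minimal sketch, assuming a token username like gitlab+deploy-token-1 and <TOKEN> as the token value GitLab generated for you (both placeholders):
git clone https://gitlab+deploy-token-1:<TOKEN>@gitlab.teadal.ubiwhere.com/teadal-pilots/<name of pilot>/<name of pilot>.git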
A Nix environment is required to run the basic command-line tools. Thus, first install Nix:
sh <(curl -L https://nixos.org/nix/install) --daemon
mkdir -p ~/.config/nix
echo 'experimental-features = nix-command flakes' >> ~/.config/nix/nix.conf
As noted in the installation output, you need to restart the shell before using Nix. Once restarted, start the Nix shell from the repo you just cloned:
cd <cloned repo dir>/nix
nix shell
Check that it worked by querying the ArgoCD version:
argocd version --client --short
It should return something like argocd: v2.7.6.
From now on, all commands must be executed inside the Nix shell.
If you want to install all the components manually, the complete set of commands follows.
We'll use MicroK8s for cluster management and orchestration. Install MicroK8s (upstream Kubernetes 1.27):
sudo snap install microk8s --classic --channel=1.27/stable
Add yourself to the MicroK8s group to avoid having to sudo every time you run a microk8s command:
sudo usermod -a -G microk8s $(whoami)
newgrp microk8s
Then wait until MicroK8s is up and running:
microk8s status --wait-ready
If your VM has slow disk I/O, it is also recommended to remove the high-availability add-on:
microk8s disable ha-cluster --force
Finally, bolt on DNS:
microk8s enable dns
Wait until all the above extras show in the "enabled" list and the removed ha-cluster is in the "disabled" list:
microk8s status
Now we've got to broaden MicroK8s' node port range. This is to make sure it'll be able to expose any K8s node port we're going to use.
nano /var/snap/microk8s/current/args/kube-apiserver
and add this line somewhere in the file:
--service-node-port-range=1-65535
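If you'd rather not open an editor, a one-line sketch that appends the same flag (assuming the default MicroK8s file layout) is:
echo '--service-node-port-range=1-65535' | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver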
Then restart MicroK8s:
microk8s stop
microk8s start
Set up the KUBECONFIG variable to make kubectl accessible:
export KUBECONFIG=/var/snap/microk8s/current/credentials/client.config
Create the config file, which will be used by some tools to generate secrets and storage:
mkdir -p ~/.kube
microk8s config > ~/.kube/config
Check the status of the cluster:
kubectl get pod -A
You should see the core system pods (e.g., coredns) in Running status.
Now, make <repo dir>/deployment/ your current directory. If you are in the nix dir, type:
cd ../deployment
There are various ways to handle storage on a TEADAL node. In this guide, we describe how to set up local storage manually. For single-node deployments this is an easy way to quickly provide some storage for your pods. When adding more nodes, different solutions (distributed storage) may be required, but let's not worry about that now.
We'll have to create at least 8 PVs of 5 GB each and 1 PV of 20 GB. Ideally these should be backed by disk partitions, but for simplicity's sake we'll go and create directories directly under /mnt. To do so, you may execute:
sudo mkdir -p /mnt/data/d{1..10}
To ensure that the pods have the right permissions to write to these folders, give full write permissions on the directory you just created:
sudo chmod -R 777 /mnt/data
Now it is time to generate the .yaml files to set up the storage. For this there is a tool named node.config, included in the Nix shell. It creates a folder named <HOST_NAME> with the required files, which must be moved to the proper location afterwards:
node.config -microk8s pv 1:20 8:10
mv <HOST_NAME> mesh-infra/storage/pv/local/
The last step is to update the mesh-infra/storage/pv/local/kustomization.yaml file
nano mesh-infra/storage/pv/local/kustomization.yaml
so that it points to this new directory (the other lines should be commented out):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- <HOST_NAME>
Now it is time to apply your changes with:
kustomize build mesh-infra/storage/pv/local/ | kubectl apply -f -
This should make the storage you created ready to be used by the pods you will initialize in the next steps.
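For reference, each file the tool generates should be a plain K8s PersistentVolume bound to one of the directories created above. A minimal sketch of what such a manifest may look like (the name and exact fields here are assumptions; defer to the files node.config actually produced):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: d1-5gb                # hypothetical name; node.config picks the real one
spec:
  capacity:
    storage: 5Gi              # one of the 5 GB volumes
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data/d1        # one of the directories created under /mnt/data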
Note: if you are going to deploy the node on an ARM64 machine
The OPA image available on the official registries is AMD64-only, so it is required to build an ARM64-based image and import it into the local registry. No worries, we have prepared everything; you only need to run the following commands:
cd nix
nix build .#opa-envoy-plugin-img   # build the image
cat result | gzip -d > opa.tar     # export it to a file
microk8s ctr image import opa.tar  # import it into the local registry
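To confirm the import worked, you can list the images in MicroK8s' containerd store (the grep pattern is an assumption; match whatever tag the build produced):
microk8s ctr images ls | grep -i opa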
The mesh we're going to roll out needs to be connected to some ports
on the external network. Clients on the external network hit port 80
to access HTTP services. The Istio gateway uses a K8s node port to
accept incoming traffic on port 80 and route it to the destination
service inside the mesh. The Istio gateway also has a 5432 node port
to let external clients interact with the Postgres DB inside the mesh.
Additionally, the node port 3810 is configured on the Istio gateway
to route traffic to the kubeflow UI service.
Finally, admins will want to SSH into cluster nodes, so port 22 should be open too, as well as port 6443, the K8s API endpoint that admin tools like kubectl connect to.
How you actually make these ports available to processes running outside the mesh really depends on your setup. In the most trivial case where your cluster is made up of a single node and that node is directly connected to the Internet, all you need to do is open those ports in the firewall, if you have one, or do nothing if there's no firewall. In a public cloud scenario, e.g. AWS, you typically have an admin console that lets you easily make ports available to clients out in the interwebs.
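As a concrete example, on a single Ubuntu node guarded by ufw, a minimal sketch of opening the ports discussed above could be:
sudo ufw allow 22/tcp      # SSH access for admins
sudo ufw allow 80/tcp      # HTTP traffic into the Istio gateway
sudo ufw allow 3810/tcp    # kubeflow UI node port
sudo ufw allow 5432/tcp    # Postgres node port
sudo ufw allow 6443/tcp    # K8s API endpoint for kubectl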
Don't install Istio as a MicroK8s add-on, since MicroK8s will install an old version! Instead, follow the procedure below.
Deploy Istio to the cluster using our own profile:
istioctl install -y --verify -f mesh-infra/istio/profile.yaml
Make sure that ports 80, 443, and 8080 are open on your VM.
For now platform infra services (e.g. DBs) as well as app services
(e.g. file transfer UI) sit in K8s' default namespace, so tell Istio
to auto-magically add an Envoy sidecar to each service deployed to
that namespace:
kubectl label namespace default istio-injection=enabled
Notice that you can actually be selective about which services get an Envoy sidecar, but for now we'll just apply a blanket policy to keep things simple.
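To double-check that the label is in place, you can inspect the namespace labels with plain kubectl:
kubectl get namespace default --show-labels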
A final check to see if Istio is deployed in K8s:
kubectl get pod -A
To keep ArgoCD aligned with the GitLab repo, you have to edit the app.yaml file
nano mesh-infra/argocd/projects/base/app.yaml
and substitute <REPO_URL> with the URL of your pilot repo (e.g., https://gitlab.teadal.ubiwhere.com/teadal-pilots/mobility-pilot/mobility-teadal-node.git).
If you are working on a branch, substitute targetRevision with the name of your branch.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app
  namespace: argocd
spec:
  project: mesh-infra
  source:
    repoURL: <REPO_URL>
    targetRevision: HEAD
    path: deployment/mesh-infra/app
If you encounter problems during the Istio installation, it may be a matter of network configuration. Be sure that all the ports indicated above have been opened.
Argo CD is our declarative continuous delivery engine. Except for the things listed in this bootstrap procedure, we declare the cluster state with YAML files that we keep in the deployment dir within our GitLab repo. Argo CD takes care of reconciling the current cluster state with what we declared in the repo.
For that to happen, we've got to deploy Argo CD and tell it to use the YAML in our repo to populate the cluster. Our repo also contains the instructions for Argo CD to manage its own deployment state as well as the rest of the Teadal platform (I know, it sounds like a dog chasing its own tail, but it works). So we can just build the YAML to deploy Argo CD and connect it to our repo like this:
kustomize build mesh-infra/argocd | kubectl apply -f -
After deploying itself to the cluster, Argo CD will populate it with all the K8s resources we declared in our repo, and slowly the Teadal platform instance will come into its own. This will take some time. Go for coffee (or even lunch or dinner, as it could take more than an hour).
Note
- Argo CD project errors. If you see a message like the one below in the output, rerun the above command (see [#42][boot.argo-app-issue] about it).
unable to recognize "STDIN": no matches for kind "AppProject" in version "argoproj.io/v1alpha1"
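While you wait, you can watch Argo CD's progress; its Applications are ordinary K8s custom resources, so plain kubectl is enough to see their sync status:
kubectl get applications -n argocd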
Again, check that Argo CD is deployed in K8s as well:
kubectl get pod -A
After some time, this command returns the basic set of pods up and running.
You may notice that two pods do not run properly. To make everything work, one last step is needed: configuring the secrets, which also allows ArgoCD to fetch the repo.
To generate the K8s secrets that store the passwords for Keycloak and ArgoCD, run a tool already integrated in the Nix shell. It first asks for the Keycloak and ArgoCD passwords. For ArgoCD, you must also provide the username and value of the deploy token generated earlier for the repo.
node.config -microk8s basicnode-secrets
After about 5 minutes, ArgoCD starts fetching the repo and deploying the required containers. Now, when executing
kubectl get pod -A
the cluster returns all the pods related to the basic tools installed.
It takes a while (about 5 minutes, depending on your network), but in the end everything should be in Running status.
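If some pods remain stuck, first confirm that the secrets were actually created; you can list them with plain kubectl (the exact secret names depend on what node.config generates):
kubectl get secrets -n argocd
kubectl get secrets -n default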
If nothing changes, it may help to stop and start the cluster:
microk8s stop
microk8s start
A first check of the installation can be done by looking at the output of kubectl get pod -A, which should return this set of pods:
NAMESPACE NAME READY STATUS RESTARTS AGE
argocd argocd-application-controller-0 1/1 Running 0 6m17s
argocd argocd-applicationset-controller-7786cb7547-s8g9g 1/1 Running 0 6m17s
argocd argocd-dex-server-58574dff5f-7jbh8 1/1 Running 2 (5m45s ago) 6m17s
argocd argocd-notifications-controller-7764bb774d-5j5k6 1/1 Running 0 6m17s
argocd argocd-redis-77bf5b886-xdqjx 1/1 Running 0 6m17s
argocd argocd-repo-server-5b9977b575-vzxhx 1/1 Running 0 6m17s
argocd argocd-server-6485ccb9c9-hd956 1/1 Running 2 (5m50s ago) 6m17s
default dspn-webeditor-678fc7d7c6-xs57j 2/2 Running 0 5m31s
default httpbin-fcf5d6d59-hxbpt 2/2 Running 0 5m31s
default keycloak-7978b4f4b7-5zrw9 2/2 Running 0 2m54s
default opa-7f5bdf49c-nnl9k 2/2 Running 0 5m31s
default postgres-7f9949bcff-c2tq7 2/2 Running 0 2m54s
istio-system istio-egressgateway-69cbcfc4d-bkzbk 1/1 Running 0 6m39s
istio-system istio-ingressgateway-68ccf88c86-4g7q7 1/1 Running 0 6m39s
istio-system istiod-94c7678f6-skplp 1/1 Running 0 6m51s
kube-system coredns-7745f9f87f-2bq8n 1/1 Running 0 8m6s
kube-system reloader-75f99865b5-cqprw 1/1 Running 0 5m31s
minio-operator console-756f85dc86-25ds4 1/1 Running 0 5m29s
minio-operator minio-operator-7dbf54467d-tt2wk 1/1 Running 0 5m29s
minio-operator teadal-teadal-0 2/2 Running 0 4m43s
To check if the basic installation is up and running, we'll use HttpBin to simulate a data product. There's a [policy][httpbin-rbac] that defines two roles:
- Product owner. The owner may do any kind of HTTP request to URLs starting with /httpbin/anything/.
- Product consumer. On the other hand, the consumer is only allowed to read (GET) URLs starting with /httpbin/anything/ or the (exact) URL /httpbin/get.
jeejee@teadal.eu is both a product owner and consumer, whereas
sebs@teadal.eu is just a consumer. To interact with HttpBin, both
users need to get a JWT from Keycloak and attach it to service requests
since the policy doesn't allow anonymous requests to the above URLs.
In fact, if you try e.g.
$ curl -i -X GET localhost/httpbin/anything/do
$ curl -i -X GET localhost/httpbin/get
you should get back a fat 403 in both cases. So let's get a JWT
for jeejee@teadal.eu. We'll store it in a env var so we can use
it later. The command below should do the trick. (If you've changed
the user's password in Keycloak, replace abc123 with the new one.)
$ export jeejees_token=$(\
curl -s \
http://localhost/keycloak/realms/teadal/protocol/openid-connect/token \
-d 'grant_type=password' -d 'client_id=admin-cli' \
  -d 'username=jeejee@teadal.eu' -d 'password=abc123' | jq -r '.access_token')
And, while we're at it, let's get a JWT for sebs@teadal.eu too.
$ export sebs_token=$(\
curl -s \
http://localhost/keycloak/realms/teadal/protocol/openid-connect/token \
-d 'grant_type=password' -d 'client_id=admin-cli' \
  -d 'username=sebs@teadal.eu' -d 'password=abc123' | jq -r '.access_token')
Again, if you've changed sebs@teadal.eu's password, update the
command above accordingly. Also keep in mind these tokens are quite
short-lived (about 4 mins) so if you take too long to go through the
examples below, you'll have to get fresh tokens again.
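Before firing the requests below, it's worth a quick sanity check that the variables actually hold tokens; if Keycloak rejected the credentials, jq yields null or an empty string. A minimal sketch:
$ [ -n "${jeejees_token}" ] && [ "${jeejees_token}" != "null" ] \
    && echo "got a token" || echo "empty token: check Keycloak and the password"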
Both product owner and consumer are allowed to read a URL path like
/httpbin/anything. So both users, jeejee@teadal.eu (owner)
and sebs@teadal.eu (consumer), should be able to do a GET and get
back (pun intended) a 200, provided we attach their respective JWT
to each request:
$ curl -i -X GET localhost/httpbin/anything \
-H "Authorization: Bearer ${jeejees_token}"
$ curl -i -X GET localhost/httpbin/anything \
-H "Authorization: Bearer ${sebs_token}"But, as a product owner, jeejee@teadal.eu should be able to do
anything he fancies to the above path, like DELETE, whereas
sebs@teadal.eu, as a consumer, should not.
$ curl -i -X DELETE localhost/httpbin/anything \
-H "Authorization: Bearer ${jeejees_token}"
$ curl -i -X DELETE localhost/httpbin/anything \
-H "Authorization: Bearer ${sebs_token}"You should see a 200 response for the first request, but a 403
for the second. Finally, since both users are product consumers,
they should both be allowed to GET /httpbin/get:
$ curl -i -X GET localhost/httpbin/get \
-H "Authorization: Bearer ${jeejees_token}"
$ curl -i -X GET localhost/httpbin/get \
-H "Authorization: Bearer ${sebs_token}"You should see a 200 response in both cases. That just about wraps
it up for the security show.
Once the Teadal node is up and running, you are ready to install the Teadal tools you need. For that, refer to the related guide.




