This document provides a guide for installing the tools developed in the TEADAL project. These tools require a TEADAL node up and running (see the TEADAL Node installation guide).
At this time, the following tools are available:
If any dependencies are indicated, you need to configure the ArgoCD project in your TEADAL node to add the required tool.
- Jaeger
See the related page to learn how to install the dependencies.
Before Advocate can work, you need to create the related namespace:

kubectl create namespace trust-plane

Next, configure all the secrets and variables needed by the Advocate blockchain, such as the wallet private key, the VM key, and the Ethereum Remote Procedure Call (RPC) address. To do so, run this command:

node.config --microk8s advocate

Now you can enter the required values. For the question about "ADVOCATE_ETH_POA", enter "1" as the value.
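Before answering the node.config prompts, it can help to sanity-check the RPC address you are about to enter. A minimal sketch, assuming the endpoint is an http(s) or websocket URL; the helper name and the accepted schemes are illustrative, not part of TEADAL:

```shell
#!/bin/sh
# Hypothetical helper: rough sanity check for an Ethereum RPC address
# before entering it in the node.config prompts.
valid_rpc_address() {
  case "$1" in
    http://*|https://*|ws://*|wss://*) return 0 ;;  # looks like an RPC URL
    *) return 1 ;;                                  # anything else is suspect
  esac
}
```

Example use: valid_rpc_address "https://rpc.example.org:8545" && echo "looks ok"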
Be sure that, in the repo, the kustomization file used by ArgoCD has the line - advocate uncommented. E.g.:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- project.yaml
- advocate

Commit the changes to the repo so that ArgoCD can fetch the updated configuration. Here is an example of the commands needed:
git add deployment/mesh-infra/argocd/projects/plat-infra-services/
git commit -m "enable advocate"
git push

After a few minutes, ArgoCD will detect the update and start deploying the related pods.
Check the pods in the trust-plane namespace:

kubectl get pods -n trust-plane

Check the Advocate pod log to make sure that it is up and running:
kubectl logs <advocate-pod-name> -n trust-plane

The deployment of the TEADAL data catalogue requires an approval step. Anyone interested in this tool must send an e-mail to XXXX@cefriel.com. If accepted, GitLab credentials will be provided. These credentials are required to properly set up the environment for the deployment.
Keycloak and Postgres
Be sure that the VM on which the catalogue will be installed has a public IP, or that it can otherwise be reached from the Web.
The first step is to create the secret that allows microk8s to pull the images from the repository:
kubectl create secret docker-registry teadal-registry-secret -n catalogue \
--docker-server=https://registry.teadal.ubiwhere.com \
--docker-username=<gitlab username> \
--docker-password="<gitlab password>" \
--docker-email=<gitlab account email>

In case microk8s reports that the namespace catalogue does not exist, the following command solves the issue:

kubectl create namespace catalogue

Be sure that, in the repo, the kustomization file used by ArgoCD has the line - catalogue uncommented. E.g.:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- project.yaml
- catalogue

Commit the changes to the repo so that ArgoCD can fetch the updated configuration. Here is an example of the commands needed:
git add deployment/mesh-infra/argocd/projects/plat-app-services/
git commit -m "enable catalogue"
git push

After a few minutes, ArgoCD will detect the update and start deploying the related pods.
In a browser, open the page http://<host>/catalogue. If everything is working properly, a login page to access the catalogue will appear.
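This check can also be scripted by capturing the HTTP status code with curl. A minimal sketch; the helper name and the set of accepted codes are illustrative assumptions:

```shell
#!/bin/sh
# Hypothetical helper: interpret the HTTP status code returned by the
# catalogue page. Redirects count as reachable, since the login flow
# may redirect elsewhere (e.g. to Keycloak).
catalogue_reachable() {
  case "$1" in
    2??|3??) return 0 ;;  # page served, or redirected to the login page
    *) return 1 ;;        # error or no answer
  esac
}
```

Example use:

code=$(curl -s -o /dev/null -w '%{http_code}' "http://<host>/catalogue")
catalogue_reachable "$code" && echo "catalogue answered"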
TBD
TBD
TBD
TBD
The AI-DPM tool is composed of two distinct but interdependent components:
- AI_api: the backend service responsible for fetching and processing telemetry data using ML models
- AI_dashboard: the frontend dashboard used to visualize predictions and interact with the models
Both components must be installed together to ensure full functionality.
To work correctly, the following dependencies must also be enabled and running:
- Prometheus (for metrics ingestion)
- Thanos (for long-term metrics storage and query)
- Kepler (for power-related telemetry collection)
Be sure that, in the repo, the Istio Kustomization file includes the following resources uncommented:
resources:
- ai-dashboard
- ai-prediction-api
- prometheus
- kepler
- thanos

Once the above lines are enabled, apply the configuration by running the following command:
kustomize build deployment/mesh-infra/istio | kubectl apply -f -

This will deploy the AI components, as well as the required telemetry infrastructure, into the istio-system namespace.
You can verify that all pods are running with:
kubectl get pods -n istio-system | grep ai-

To access the dashboard, forward its port locally:

kubectl port-forward svc/ai-dashboard 8501:8501 -n istio-system

Then open your browser at http://localhost:8501
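The pod check above can also be scripted. A minimal sketch that parses a kubectl listing; the "ai-" prefix comes from the grep above, and the helper takes the listing as text so it can be tried without a cluster:

```shell
#!/bin/sh
# Hypothetical helper: given the output of
#   kubectl get pods -n istio-system --no-headers
# succeed only if every pod whose name starts with "ai-" is Running.
ai_pods_ready() {
  not_running=$(printf '%s\n' "$1" | awk '$1 ~ /^ai-/ && $3 != "Running"' | wc -l)
  [ "$not_running" -eq 0 ]
}
```

Example use: ai_pods_ready "$(kubectl get pods -n istio-system --no-headers)" && echo "AI pods up"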
To check the prediction API, forward its port and query it:

kubectl port-forward svc/ai-prediction-api 9000:9000 -n istio-system
curl http://localhost:9000/fetch
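Right after deployment, the ai-prediction-api pod may still be starting, so the first request can fail. A small retry wrapper (entirely illustrative, not part of the tool) helps:

```shell
#!/bin/sh
# Illustrative retry wrapper: run a command up to $1 times, waiting $2
# seconds between attempts; succeed as soon as the command does.
retry() {
  attempts=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0      # command succeeded: stop retrying
    i=$((i + 1))
    sleep "$delay"
  done
  return 1                # all attempts failed
}
```

Example use: retry 5 2 curl -fsS http://localhost:9000/fetch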
