Main repository for the USupport API services

This is the entry bundle repository for all USupport web services (webapps + API services).

Prerequisites:
- Make sure that Node.js and NPM are installed on your system (download Node.js from nodejs.org).
To clone and populate all submodules use:
git clone --recurse-submodules git@github.com:UNICEFECAR/USupport-entry-bundle.git entry-bundle
Setup:
- Run the `./setup.sh` script in the root directory of the project. If you get a permissions error, run `chmod 777 setup.sh`
- Create a JWT secret key and add it to the "JWT_KEY" field in the following files:
  - `./admin/service/.env.local`
  - `./user/service/.env.local`
- To use the payments functionality you need to create a Stripe account. After creating the account, you need to fill in the following keys:
  - "STRIPE_SECRET_KEY"
  - "STRIPE_WEBHOOK_ENDPOIN_SECRET"

  in the `./payments/service/.env.local` file
- To use the image uploading functionality you need to create an AWS S3 bucket. After creating the bucket you will need to fill in the following keys:
  - "AWS_ACCESS_KEY_ID"
  - "AWS_SECRET_ACCESS_KEY"
  - "AWS_REGION"
  - "AWS_BUCKET_NAME"

  in the following files:
  - `./client/service/.env.local`
  - `./provider/service/.env.local`
  - `./user/service/.env.local`

  and the "VITE_AMAZON_S3_BUCKET" key in the following files:
  - `./website/.env.development`
  - `./client-ui/.env.development`
  - `./provider-ui/.env.development`
  - `./user-ui/.env.development`
  - `./admin-country-ui/.env.development`
  - `./admin-global-ui/.env.development`
- To use the video consultation functionality you need to create a Twilio account. After creating the account you need to fill in the following keys:
  - "TWILIO_ACCOUNT_SID"
  - "TWILIO_API_SID"
  - "TWILIO_API_SECRET"
  - "TWILIO_AUTH_TOKEN"

  in the `./user/service/.env.local` file, and the following keys:
  - "TWILIO_ACCOUNT_SID"
  - "TWILIO_AUTH_TOKEN"

  in the `./provider/service/.env.local` file
- To use the email functionality you need access to an email account that can be used to set up the email service. The following environment variables need to be filled in:
  - "EMAIL_SENDER"
  - "EMAIL_SENDER_PASSWORD"
  - "EMAIL_HOST"
  - "EMAIL_PORT"
  - "RECIEVERS"

  in the `./email/.env.local` file
- After successfully running all the microservices, set the "VITE_API_ENDPOINT" key to "http://localhost:3000/api" in the following files:
  - `./website/.env.development`
  - `./client-ui/.env.development`
  - `./provider-ui/.env.development`
  - `./user-ui/.env.development`
  - `./admin-country-ui/.env.development`
  - `./admin-global-ui/.env.development`
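All of the keys above live in plain `KEY=value` dotenv files. As a hypothetical example (the values shown are placeholders, not real credentials), `./payments/service/.env.local` might look like:

```shell
# ./payments/service/.env.local — placeholder values only, replace with your own
STRIPE_SECRET_KEY=sk_test_xxxxxxxxxxxxxxxx
STRIPE_WEBHOOK_ENDPOIN_SECRET=whsec_xxxxxxxxxxxxxxxx
```

Never commit real secret values; the `.env.local` files are meant to stay out of version control.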
Local Deployment:
- Run:
  ./deploy.sh local
- Run + database drop:
  ./deploy.sh local drop

The above commands will configure the local database and run all the containers.
Notes:
- To track the logs for all the containers run `docker-compose logs -f`
- To see logs for a single container run `docker-compose logs -f {container_name}`
- Note that once built you can stop the services using `docker-compose -f docker-compose.yml down`. If running on staging or production use the relevant docker-compose file to stop the containers.
- If you need to rebuild the containers, run `docker-compose -f docker-compose.yml up -d --build`
- To restart a single container run:
  docker-compose stop {container_name}
  docker-compose rm -f {container_name}
  docker-compose -f docker-compose.yml up -d --build {container_name}
- To add a new dependency, still run `npm install {dependency}`. To upgrade, use either `npm update {dependency}` or `npm install {dependency}@{version}` and commit the changed `package-lock.json`. If an upgrade fails, revert to the last known working `package-lock.json`
- After running `npm audit fix` remember to commit any changes to `package-lock.json` to the repo
- Each UI runs on a port that is specified in its package.json file. To change it, change the port number in the package.json "dev" script
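As a quick way to see which port a UI uses without opening the file, you can pull it out of the package.json "dev" script. This is only a sketch: the script text below is a made-up example, and it assumes the port is passed as `--port {number}` (point the pipeline at the actual package.json instead):

```shell
# Extract the port number from a sample "dev" script line.
# The dev_script value is a hypothetical example for illustration.
dev_script='"dev": "vite --port 5173"'
port=$(printf '%s' "$dev_script" | sed -n 's/.*--port \([0-9][0-9]*\).*/\1/p')
echo "$port"   # prints 5173
```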
To commit changes in a submodule and update its reference in the entry bundle:
cd your_submodule
git commit -a -m "{commit message}"
git push
cd ..
git add your_submodule
git commit -m "Updated submodule"

Branch naming:
- Features: feature/{branch_name}
- Bugs: bug/{branch_name}
- Hotfixes: hotfix/{branch_name}
Commit message conventions:
- Create: [commit message] (create a new component)
- Add: [commit message] (addition to an existing component)
- Fix: [commit message] (fix a bug within an existing component)
- Refactor: [commit message] (refactor an existing component)
We are using Kubernetes clusters for our staging and production environments to run all API services as well as all client apps. Each API service runs in a single-container pod and has its own service, config, deployment and secrets yaml files.
We are using the AWS EKS service to run each cluster.
Here we explain how to set up your local machine so you are able to run, maintain and spin off new containers in each cluster.
Make sure you have the following tools installed:
- AWS CLI - see the official AWS documentation for installation instructions.
- Kubectl - used for communicating with the cluster. You need this to be able to control each pod. See the official Kubernetes documentation for installation instructions.
- AWS IAM Authenticator:
  brew install aws-iam-authenticator
- Amazon ECR Docker Credential Helper - see its GitHub repo.
- Change `~/.docker/config.json` to contain `"credsStore": "ecr-login"`
- Add `secrets.yaml` files for each pod and environment
- Add the following line at the end of your ~/.zshrc file:
  export DOCKER_DEFAULT_PLATFORM=linux/amd64
- Ensure that the installed K8s version is 1.23.6, otherwise run the following commands:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
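Before re-downloading the binary it can help to check what you already have. A minimal sketch, where the helper name `version_ok` is made up for illustration:

```shell
# Hypothetical helper: succeeds only when the given client version
# string matches the pinned v1.23.6.
version_ok() {
  [ "$1" = "v1.23.6" ]
}

# In practice you would feed it the live version reported by kubectl, e.g.:
#   version_ok "$(kubectl version --client --short | awk '{print $3}')" || echo "reinstall kubectl"
version_ok "v1.23.6" && echo "version ok"        # prints "version ok"
version_ok "v1.27.0" || echo "version mismatch"  # prints "version mismatch"
```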
The first time you want to deploy a new pod to the cluster you can use the following command from the root folder of the project:
./kube-deploy.sh {pod folder name} {staging | prod} deploy
E.g. `./kube-deploy.sh admin staging deploy` - This will deploy and configure the admin pod with the staging env variables (NOTE: make sure you are in the correct context)
E.g. `./kube-deploy.sh all staging deploy` - This will deploy and configure all pods with the staging env variables (NOTE: make sure you are in the correct context)
Each K8s pod is configured to have a Rolling Update which ensures minimal downtime as the old pod stays READY until the new pod is deployed, configured and working. Use this command every time you want to apply a new change in config.yaml, secrets.yaml or the container build:
./kube-deploy.sh {pod folder name} {staging | prod} redeploy
If you want to stop a pod, go to the folder containing the pod's deployment.yaml file:
cd admin/kube-config/staging
kubectl delete -f deployment.yaml
All shared cluster pods and services are in the ./kube-services folder in the root folder of the project. There you can find deployment and service yaml files for the Kafka broker, ZooKeeper and Redis.
In ./kube-services/db-services you can find all External Names for the DBs used in the project. Add or change those to update the URLs for the outside resources the pods need to communicate with.
Each service has the same kubernetes configuration folder called kube-config. In that folder you can find deployment environment independent yaml files (config.yaml & service.yaml). You can also find folders called staging and prod which contain the deployment.yaml and secrets.yaml files for each deployment environment.
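Per the description above, the per-service layout looks roughly like this (the `admin` service is used here only as an example):

```
admin/
└── kube-config/
    ├── config.yaml
    ├── service.yaml
    ├── staging/
    │   ├── deployment.yaml
    │   └── secrets.yaml
    └── prod/
        ├── deployment.yaml
        └── secrets.yaml
```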
- `kubectl get po` - Get all pods in a cluster
- `kubectl get po -n {namespace}` - Get all pods in a specific namespace of a cluster
- `kubectl logs -f {pod name}` - Get the console logs of a specific pod (same as `docker-compose logs -f`)
- `kubectl apply -f {yaml file}` - Use this to apply a yaml configuration to the current cluster
- `kubectl describe pod {pod name}` - Detailed information about a pod's status. Use this to identify why a pod failed
- `kubectl get services` - Get all services running on the cluster
- `kubectl get ingress` - Get the cluster ingress service
- `kubectl config use-context {context name}` - Use this to switch between cluster contexts. IMPORTANT: Make sure you are in the correct context before making changes, as you could apply changes to the production cluster
- `kubectl config view` - See all contexts. Use this to see what clusters your local machine is connected to
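Since a `use-context` mistake can point every later command at production, a small guard before any apply can help. This is a hypothetical wrapper; the function name and the context names are assumptions, not part of the project:

```shell
# Hypothetical guard: refuse to proceed unless the active context matches
# the one you expect. In real use, feed it the live context:
#   assert_context "$(kubectl config current-context)" "staging" && kubectl apply -f config.yaml
assert_context() {
  if [ "$1" != "$2" ]; then
    echo "wrong context: $1 (expected $2)" >&2
    return 1
  fi
  echo "context ok: $1"
}

assert_context "staging" "staging"   # prints "context ok: staging"
```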