
Trinocular

Setup without Keycloak

This section shows how to set up the project on your machine for development without Keycloak. This configuration is not suitable for deployment, as logins are disabled.

The simplest approach is to run the tutor-setup script (you need to have Python installed), which automatically sets up Trinocular for local usage.

  1. Pull the repo from GitHub/GitLab.
  2. Go to the base directory of the repo and run the setup script: python scripts/tutor-setup.py
  3. In the base directory of the repo run docker-compose build.
  4. In the base directory of the repo run docker-compose up. Now the project should start up for the first time. Wait for all containers to be ready.
  5. Open localhost:8080 in your browser.

The setup can also be performed manually:

  1. Pull the repo from GitHub/GitLab.
  2. Take a look at the secrets section of the docker-compose.yml. You need to create each of the listed text files and fill it with a secret string. Make sure not to include any whitespace (including newlines) when saving a file, especially if your editor has autoformatting configured. For more details check out the ReadMe in the secrets directory.
  3. In the .env file of the auth-service (/src/auth/.env) set the following environment variables:
    1. PASS_THROUGH_MODE to true.
    2. ADMIN_USER_ROLE to "". (put empty quotes literally)
    3. ACCEPTED_USER_EMAILS_FILE to "". (put empty quotes literally)
  4. In the base directory of the repo run docker-compose build.
  5. In the base directory of the repo run docker-compose up. Now the project should start up for the first time. For now it should not matter if some errors get printed. Check that all services started up and are healthy.
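The auth-service .env changes from step 3 then look like this:

```
PASS_THROUGH_MODE=true
ADMIN_USER_ROLE=""
ACCEPTED_USER_EMAILS_FILE=""
```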

Setup with Keycloak

Follow the steps below to set up the project on your machine for development with Keycloak. You only need to do this once. These instructions are for Keycloak v26.

  1. Pull the repo from GitHub/GitLab.
  2. Take a look at the secrets section of the docker-compose.yml. You need to create each of the listed text files and fill it with a secret string. Make sure not to include any whitespace (including newlines) when saving a file, especially if your editor has autoformatting configured. For more details check out the ReadMe in the secrets directory.
  3. In the base directory of the repo run docker-compose build.
  4. In the base directory of the repo run docker-compose up. Now the project should start up for the first time. For now it should not matter if some errors get printed. Check that the nginx, postgres and Keycloak services started up and are healthy.
  5. Setup Keycloak by creating a new realm with a user and client.
    1. Navigate to the admin login page under localhost:8080/keycloak.
    2. Login as admin using the keycloak_admin_secret.
    3. Create a new realm (button inside the drop-down at the top left) and name it "trinocular".
    4. Select the "trinocular" realm in the drop-down.
    5. Create a new user in the "trinocular" realm with username, email, first name and last name. Also set "email verified" to true.
    6. Select the credentials tab of the new user.
    7. Set a new password. Deselect the "temporary" option!
    8. Create a new client in the "trinocular" realm and name it "trinocular_client". Then go to "next".
    9. In the capability config of the client, check the "Standard flow" checkbox. Then go to "next".
    10. In the login settings of the client, set the valid redirect URI to "http://localhost:8080/auth/*". Save the client.
    11. Keycloak should show the OIDC information as JSON under http://localhost:8080/realms/trinocular/.well-known/openid-configuration
  6. Setup Keycloak with admin roles for users
    1. You still need to be logged into the Keycloak Web UI and have the "trinocular" realm selected.
    2. Create a new realm role named "trinocular_admin".
      1. On the left navigate to "Realm Roles" and select "Create Role".
      2. Set the name to "trinocular_admin" and save.
    3. Add the role to your user.
      1. On the left navigate to "Users" and select your username (the name is a clickable link).
      2. Select the tab "Role Mapping" and click "Assign Role".
      3. Select "Realm Role", then choose the "trinocular_admin" role and apply.
    4. Make the OIDC "roles" scope show up in the client token.
      1. On the left navigate to "Client scopes".
      2. In the list (there are multiple pages to search through) click the "roles" scope.
      3. Enable the "Include in token scope" flag.
      4. Save the changes.
      5. Select the "Mappers" tab and click the "realm roles" mapper object in the list.
      6. Enable the "Add to ID token" flag.
      7. Save the changes.
  7. Log out to disconnect from the Keycloak Web UI.
  8. Hit Ctrl-C and wait for everything to shut down. This should only take a few seconds.
  9. Start the system again with docker compose up. Now, no errors should be visible in the log.
  10. Navigate to http://localhost:8080 and click login. You should now be able to login via Keycloak.
  11. To gain better editor support, e.g. for types and autocomplete, install the node dependencies for the services. Run npm i (or better pnpm i) in each service directory that has a package.json.

Setup SSO

SSO with an external IdP can be enabled by configuring Keycloak as an OIDC client in the "trinocular" realm. To control which users within the organization have access to Trinocular, you can enable user filtering. For further details see the ReadMe of the auth-service.

Commit namespace

Each commit summary must be prefixed with a namespace to make it easy to see which part of the system was changed. Below are a few examples.

Doc: Added section about commits in the ReadMe

Postgres: Added a health check

Docker: Moved secret files to '/secrets'

Frontend: Prevent the navbar from overflowing on mobile

Test: Added GitLab test container

The following namespaces exist for general parts of the project:

  • Doc: Anything related to documentation and non-source files used for describing the project.
  • Docker: Anything related to containerization that is not specific to a certain container or container internals.
  • Test: Anything related to testing and the CI pipeline

Each service has its own commit namespace:

  • Nginx
  • Keycloak
  • Memcached
  • Postgres
  • Auth
  • Frontend
  • Registry
  • API-Bridge
  • Repo

Visualization services have their name as the commit namespace.

  • Demo: The demo visualization service.

Services

  • nginx: Proxy server that routes incoming web requests to the respective service based on the request path.

  • keycloak: OAuth provider that handles user authentication and initiates user sessions. It allows users to log in and handles everything related to security. It also allows for user management, verification and roles.

  • memcached: Simple in-memory key-value store. It stores all active user sessions and makes it possible for other services to check whether a request is authenticated.

  • postgres: Relational SQL database.

  • auth: Authentication service that handles the communication with the OAuth provider and creates user sessions.

  • frontend: Delivers the frontend website to the browser. Communicates and proxies requests to the visualization services that hook into the system.

  • registry: Acts as a simple notification service based on HTTP pub/sub. Services can register themselves and advertise their abilities with additional data fields. Visualizations use the registry to add themselves to the system on startup, while the frontend listens for notifications.

  • api-bridge: Creates snapshots of the data imported via the GitLab API and provides it to the visualization services.

  • repo: Creates snapshots of the Git repository by importing commit data from all branches into the PostgreSQL database.

  • scheduler: Manages the update and snapshot process for repository data.

  • visualization services: Provide a platform for visualizations by fetching data from the api-bridge and the repo service during a snapshot and registering their visualizations.

Visualization services

The visualization services provide the platform on which separate visualizations can be implemented. The tech stack used for the implementation is open for the developer to choose, as the visualization service was designed with a technology-agnostic approach utilizing micro-frontends. The developer only needs to provide a webpage, which is embedded into an iframe by the Frontend service, and register the visualization at the registry to make it available to the Frontend service.
Data for the visualizations can be fetched from the Api-Bridge and the Repo service. To reduce the amount of data that needs to be fetched by the service into its local database, visualizations with similar data needs should be implemented in one service.

JS service libs

Common code that is shared across multiple services is factored out into libraries/node modules that get installed into the service images as part of the docker build process. The modules live next to the services in the /src directory and are referred to via relative paths when importing.

  • common: Contains code common to most services.
  • auth-utils: Contains the user session authentication middleware and useful functions related to authentication.
  • postgres-utils: Contains the basic common code to connect to a PostgreSQL database. Offers functionality for creating databases and initializing them with a SQL script file.
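For example, a service's package.json might reference the shared modules via relative file paths along these lines (a hypothetical excerpt, the exact dependency names may differ):

```json
{
  "dependencies": {
    "common": "file:../common",
    "auth-utils": "file:../auth-utils"
  }
}
```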

Environment Files

Configuration constants are provided to the services via environment variables that get injected into the container on startup. Do not rely on the .env files being copied into the container image. Instead, specify a service's environment in the docker-compose.yml and let docker inject the variables.
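A sketch of how a service's environment might be wired up in docker-compose.yml (service name and file path are illustrative):

```yaml
services:
  frontend:
    # Docker reads the .env file and injects the variables at startup;
    # nothing is baked into the image
    env_file:
      - ./src/frontend/.env
```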

Secrets

While most constants needed for configuration are provided via environment variables that are defined in .env files, secrets such as passwords or encryption keys are stored elsewhere. Each secret is stored in its own text file (with .txt file extension!) in the /secrets directory. In the docker-compose.yml the secrets are listed and named, so that each service container can specify which secrets it needs. The secrets get mounted as files into the /run/secrets directory of each service on startup, where the services can read them.
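A hypothetical docker-compose.yml excerpt illustrating the pattern (the secret and service names are made up; check the actual file for the real list):

```yaml
secrets:
  # Each secret points to its own text file in /secrets
  session_secret:
    file: ./secrets/session_secret.txt

services:
  auth:
    # The secret becomes readable at /run/secrets/session_secret
    secrets:
      - session_secret
```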

In the case of a NodeJS service, use the readSecretEnv() function provided by the common module to automatically load secrets into Node's copy of the environment variables (process.env). It works by looking through all keys of process.env and loading the file contents for all vars that end with SECRET_FILE and point to a file inside /run/secrets.

# This loads the contents of '/run/secrets/session_secret' into
# a variable called 'SESSION_SECRET'
SESSION_SECRET_FILE=/run/secrets/session_secret

# Ignored: Does not end with 'SECRET_FILE'
MY_RANDOM_VAR="hello world"

# Ignored: Does not point to a file in '/run/secrets'
MY_SECRET_FILE=/some/random/path

Setting NODE_ENV

Some NodeJS libraries check the NODE_ENV environment variable to behave differently depending on whether they run in development or production mode. This is bad practice and should be avoided. In the docker-compose.yml file NodeJS services always have NODE_ENV set to "production". If you need to do some extensive debugging with additional error messages from e.g. handlebars, you can temporarily change it back to "development". But do not commit this change.

Architecture

Service Architecture

Database Inspection

To inspect the data stored in the databases of the system, different methods exist depending on the database in question.

Frontend Service

The frontend service hosts its own local SQLite instance and a special webpage that dumps the contents of the database. You first need to enable the db viewer page via an environment variable in the .env file as shown below. Then navigate to localhost:8080/db-viewer after logging in. Up to 100 rows for each table get displayed as separate tables.

ENABLE_DB_VIEWER=true

API-Bridge Service & Repository Service

To connect to the PostgreSQL instance used by the repository service, you need to use a DB viewer application such as DBeaver. The default port is mapped in the docker-compose.yml. Use the following connection parameters on your local machine.

  • Host: localhost
  • Port: 5432
  • User: trinocular_db_user
  • Password: The value you set in the /secrets/postgres.txt file
  • Make sure to enable 'Show all databases'

Updating NPM modules

Whenever you change the node modules that are installed in a common library or service, the docker build command may suddenly fail or hang when trying to run the npm install step. While it is not clear why this happens, it can be fixed by ensuring all the package-lock.json files are up to date. Re-running npm in all service and library directories is cumbersome, especially if you are not using npm as your package manager (e.g. yarn, pnpm, ...).

For this reason there exists a script that automatically performs the update. Run it with the following command from within the base directory of the repository:

npm run update-locks

For some reason code editors (e.g. VSCode) like to lock the node_modules folders, which leads to the script erroring out. Therefore, it is advised to close your code editor before running the script.
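The update-locks command presumably lives in the scripts section of the root package.json, along these lines (the script path is a guess for illustration):

```json
{
  "scripts": {
    "update-locks": "node scripts/update-locks.js"
  }
}
```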
