
UploadServer

brian-r-calder edited this page Oct 31, 2025 · 2 revisions

Upload Server Implementation

Introduction

Although it is not expected to be common for WIBL loggers to be able to send data off the host platform in real time, if there is an Internet-connected WiFi network available, the logger can be configured to send data to a known address at intervals; see the firmware documentation for details on the logger side of this connection.

The basic concept of operations for upload (and sequence of events for the interaction between the firmware and upload server) is as shown below (the diagram assumes AWS for cloud provider, but the system abstracts the cloud provider to allow for other implementations as required). The upload server automatically sends any files received into a configurable cloud bucket, and then triggers a notification to let the rest of the system know that new data has arrived. In a full cloud-based implementation of the WIBL system, this would be used to trigger the first stage of the cloud processing chain.
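This store-then-notify flow can be sketched as below, with the cloud provider hidden behind an interface in the way the system abstracts it. The `Storage` interface, the `memStorage` stand-in, and all names here are illustrative, not the server's actual API:

```go
package main

import (
	"bytes"
	"fmt"
)

// Storage abstracts the cloud provider so that AWS, or any other
// backend, can be swapped in (names are illustrative).
type Storage interface {
	Put(bucket, key string, data []byte) error
	Notify(bucket, key string) error // tell downstream processing new data arrived
}

// memStorage is an in-memory stand-in used purely for illustration.
type memStorage struct {
	objects map[string][]byte
	events  []string
}

func (s *memStorage) Put(bucket, key string, data []byte) error {
	s.objects[bucket+"/"+key] = data
	return nil
}

func (s *memStorage) Notify(bucket, key string) error {
	s.events = append(s.events, bucket+"/"+key)
	return nil
}

// handleUpload stores a received file in the configured bucket and then
// raises the notification that would trigger the first stage of the
// cloud processing chain.
func handleUpload(s Storage, bucket, key string, body []byte) error {
	if err := s.Put(bucket, key, body); err != nil {
		return fmt.Errorf("store failed: %w", err)
	}
	return s.Notify(bucket, key)
}

func main() {
	s := &memStorage{objects: map[string][]byte{}}
	_ = handleUpload(s, "wibl-incoming", "TNODE-1234.wibl", bytes.Repeat([]byte{0}, 16))
	fmt.Println(len(s.objects), len(s.events)) // 1 1
}
```

Keeping the notification separate from the store step is what lets the rest of the pipeline react to new data without polling the bucket.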

(Figure: upload server timing diagram)

Although not shown, the upload server connection is protected with standard TLS (i.e., "https"): a pre-shared password (upload token) on the logger, with the logger's UniqueID as "username", authenticates the logger to the server, and a certificate from the server encrypts the connection and authenticates the server to the logger. The upload server maintains a local database of known loggers, with tools to add new loggers while the server is running. Consequently, only loggers that the server knows about are allowed to check in or upload, and the status and log data (including the upload token) are encrypted in flight.

Any Certificate Authority (CA) could be used to sign the server's certificate, but since the implementation is closed and the Trusted Node doing the deployment controls all aspects of the system, a self-signed certificate is acceptable. Support scripts using OpenSSL to generate the CA and certificates are provided for testing; in the cloud the certificates are generated automatically as part of the deployment, and can subsequently be retrieved for logger configuration.

The server side of the protocol can be seen in UploadServer/wibl-monitor.go, and can be deployed with Terraform.

Configuration

Server-Side

The server is configured on boot using a JSON file, of the form in UploadServer/config.json. The information here will be generated automatically through the deployment process (prototype in UploadServer/scripts/cloud/Terraform/aws/config-aws.json.proto), as described in the UploadServer/README.md; the Terraform variables (in scripts/cloud/Terraform/aws/terraform.tfvars) set the configuration of the server deployment.

As part of the deployment, Terraform generates the certificates required for the server to authenticate to the logger, along with the self-signed Certificate Authority needed to sign them. Editing the Terraform variables used for this (in UploadServer/scripts/cloud/Terraform/aws/modules/wibl_tls/terraform.tfvars) allows the organisation name used for the certificates to be configured. Once deployed, the CA certificate (self-signed), which needs to be installed on loggers so that they can authenticate the connection to the server, can be retrieved from UploadServer/aws-build/certs/ca.crt.

For local testing and development (or just as a clearer description of what's going on than the version embedded in the Terraform deployment), self-signed certificates can be generated using the UploadServer/cert-gen.sh script (best executed in UploadServer/certs), which requires OpenSSL to be installed. The CA certificate (ca.crt) and private key (ca.key), and the server certificate (server.crt), are required for operations. A client certificate and key are also generated, but are not required for normal operations. (Typically, these would be used if the server needed to challenge the client to prove its authenticity, but this is rarely done, and is obviated here by the use of a pre-shared password.)
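For comparison with the OpenSSL script, an equivalent self-signed CA can be produced with Go's standard library. The organisation name and five-year lifetime below are illustrative choices, not the script's actual parameters:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// makeCA builds a self-signed CA certificate in-process; cert-gen.sh
// does the equivalent with OpenSSL on the command line.
func makeCA(org string) (*x509.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{org}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(5, 0, 0), // illustrative lifetime
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template serves as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	ca, err := makeCA("Example Trusted Node")
	if err != nil {
		panic(err)
	}
	fmt.Println(ca.IsCA, ca.Subject.Organization[0]) // true Example Trusted Node
}
```

The resulting certificate plays the role of ca.crt: anything it signs (such as the server certificate) can be verified by a logger holding it.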

Client-Side

Every client (logger) needs to be able to

  • Authenticate that the server is who it says it is (i.e., that someone hasn't captured the traffic in flight and is pretending to be the upload server), and

  • Authenticate itself to the server (i.e., to demonstrate that it's a "known" logger that is allowed to do status check-ins and uploads).

To achieve the first of these, the logger has to be loaded with the CA certificate that's used to sign the server's certificate (typically ca.crt); to achieve the second, it has to have a UniqueID set (typically in the form TNODEID-UUID, where TNODEID is the Trusted Node's DCDB identifier and UUID is a Universally Unique Identifier generated as part of logger installation), and an upload token. The upload token can be any plain text, but to keep things consistent another UUID is recommended. The CA certificate is plain text, and can be freely distributed without security concerns. The upload token, however, is a form of password and should be protected as one normally would protect such information.
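The TNODEID-UUID scheme can be sketched as follows; the Trusted Node identifier shown and the choice of a version-4 UUID are illustrative assumptions (any RFC 4122 UUID generator would do):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// uuid4 builds a random (version 4) UUID from crypto/rand; generating
// the UniqueID suffix and the upload token this way is one reasonable
// choice, not a mandated one.
func uuid4() string {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		panic(err)
	}
	b[6] = (b[6] & 0x0f) | 0x40 // version 4
	b[8] = (b[8] & 0x3f) | 0x80 // RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

func main() {
	tnodeID := "UNHJHC" // illustrative Trusted Node DCDB identifier
	uniqueID := tnodeID + "-" + uuid4()
	uploadToken := uuid4() // any plain text works; a UUID keeps things consistent
	fmt.Println(uniqueID, len(uploadToken)) // e.g. UNHJHC-3f2c...  36
}
```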

Once the system is deployed, the CA certificate can be retrieved from UploadServer/aws-build/certs/ca.crt, and can be loaded into new loggers using the firmware website or the desktop GUI tool. Depending on the browser, the ca.crt file might not be recognised as an uploadable file type; copying the file to ca-cert.txt is an effective workaround.

The upload token can be loaded into new loggers in similar fashion, but must be consistent between the logger and the server. It is therefore convenient to add the logger pre-emptively on the server (see below) and then copy the generated upload token into the logger during installation.

Adding Loggers

Each new logger must be added to the server's database before it will be allowed to upload data. This is done using the add-logger utility on the server (i.e., connect to the server using ssh and then run the tool locally). The add-logger tool allows the logger's UniqueID and password (upload token) to be specified, if the latter is already known; omitting the -password option instructs add-logger to auto-generate a UUID and use that as the password. After adding the logger, the tool outputs a JSON string containing the UniqueID and password, optionally (-pkg) to a file, so that this can be used to load the appropriate information into the logger.

Note that the password, when added to the database, is encrypted and cannot be retrieved. Therefore the package information output (-pkg) is the only source of information that can be used to set the upload token (password) on the logger. If this information is lost, the recovery procedure is to remove and re-add the logger to the database.
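Why a lost token is unrecoverable can be illustrated with one-way salted hashing. This sketch is not the server's actual storage scheme (which is not documented here), and a production system would prefer a dedicated password KDF such as bcrypt or Argon2:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// storedToken holds only a salt and a digest: the plaintext upload
// token is never kept, so it cannot be read back out of the database.
type storedToken struct {
	salt [16]byte
	hash [32]byte
}

// store hashes the token under a fresh random salt.
func store(token string) storedToken {
	var s storedToken
	if _, err := rand.Read(s.salt[:]); err != nil {
		panic(err)
	}
	s.hash = sha256.Sum256(append(s.salt[:], token...))
	return s
}

// verify re-hashes a candidate token and compares in constant time.
func verify(s storedToken, token string) bool {
	h := sha256.Sum256(append(s.salt[:], token...))
	return subtle.ConstantTimeCompare(h[:], s.hash[:]) == 1
}

func main() {
	rec := store("my-upload-token")
	fmt.Println(verify(rec, "my-upload-token"), verify(rec, "wrong")) // true false
}
```

Because only the digest survives, the -pkg output at add-logger time really is the only chance to capture the plaintext token.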

Further information on use of add-logger (specifically handling user permissions) is addressed in the UploadServer/README.md.

Server Operations and Logging

Once deployed and running, the upload server should be relatively maintenance-free: it listens for check-ins and uploads, and transfers files as required.

The server has extensive logging capabilities, however, which are configured through the JSON parameter file provided at startup. Two logs are provided:

  • A console log that provides general information about the operation of the server, and

  • An access log (in Combined Log Format, CLF, as used in Apache and other servers) that lists all of the attempts to access the server, whether successful or not.

Log files are maintained until they exceed either the maximum size configured (max_size_mb) or maximum age configured (max_age), at which point they are rolled into a backup name and a new log is started; a maximum of max_backups files are retained before finally being deleted (setting max_backups = 0 retains all files).

As part of the Terraform deployment, on AWS the upload server also uses CloudWatch to archive log files for further analysis, as configured in the UploadServer/scripts/cloud/Terraform/aws/userdata.sh script; see UploadServer/README.md for details.
