
Web interface

CalebJ2 edited this page Jun 12, 2020 · 11 revisions

One of the stretch goals for this project is to create a web interface. Besides the many features a web page could add for human interaction, it would be useful for testing the robot and manually driving it without a dedicated remote control. There are several existing projects that could help us build this. A ROS package called rosbridge allows accessing ROS topics over WebSockets, and rosbridge can be connected to from a web page using the roslibjs JavaScript library. An example demonstrating the capabilities of this system can be viewed at [webviz.io](https://webviz.io/app/?demo). This is simple to set up for testing on the robot's PC; the challenge is allowing other computers to access the web page while keeping it secure so unauthorized users can't hijack the robot.

First, something needs to host the website and serve the HTML files that a user's browser displays. To keep everything self-contained, the robot hosts the website itself instead of relying on an external server. An added benefit of this setup is that only people on the same network as the server can access the website (unless IT were to set up port forwarding). People on the university network can be considered slightly more trustworthy than people on the open internet because they usually have to log in to the network and are less likely to launch large attacks against random PCs they see. Despite not being visible outside the network, the robot would have dynamic DNS set up to point a domain name at its local IP. Anyone who goes to that domain name (like robot.example.com) while on the university network would be able to view the website.

The webserver software used by this system is Nginx. The purpose of Nginx in this case is to encrypt web traffic going to and from Node.js (more on that in the next paragraph) so that people can’t snoop and steal secrets. To encrypt traffic and get the little green lock symbol in browsers, a certificate is required. Unfortunately, it is generally not possible to get a certificate from a trusted certificate authority when the website is hosted on a local network like this (although it might be possible using a DNS verification method). Instead, the server uses a self-signed certificate generated by a GitLab CI job. The downside is that browsers will show a warning about the certificate when people visit the page for the first time.

Nginx forwards web requests to a Node.js application. Node.js is used to create website backends using JavaScript. It can act as a webserver like Nginx, but supporting encryption with it is slightly more complicated. Node.js serves the main web page and the files it includes (CSS and JavaScript). In the future, this is where features such as user login systems can be added. Its other job is to relay WebSocket traffic from roslibjs in the browser to rosbridge. Making Node.js the intermediary allows access to rosbridge to be limited; allowing direct access to rosbridge would give full control of the robot.

Both Nginx and Node.js run in their own Docker containers. This way, their installation is as simple as adding around 30 lines of configuration to the docker-compose.yml file. Only Nginx is exposed to the outside world, where it listens on ports 80 (http and ws) and 443 (https and wss). Nginx relays http messages through the Docker network to the Node.js container on port 8080 and ws (WebSocket) messages on port 8081. Node.js either replies with files, or relays messages to/from the ROS container on port 9090.
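That wiring might look roughly like this in docker-compose.yml. This is a sketch based only on the ports described above, not the repository's actual file; the service and image names are assumptions.

```yaml
# Sketch only: the real file lives in the repository and differs in detail.
services:
  nginx:
    image: nginx
    ports:
      - "80:80"     # http / ws, redirected to https / wss
      - "443:443"   # https / wss
  nodejs:
    image: node
    volumes:
      - ./node:/home/node/app
    # reachable from nginx on the Docker network as nodejs:8080 / nodejs:8081
  ros:
    build: ./ros
    # rosbridge listens on port 9090, reached only by the nodejs container
```

Note that only the nginx service publishes ports to the host; the nodejs and ros containers are reachable solely over the internal Docker network.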

The last part to explain is the actual web page. The layout and look are defined by the HTML and CSS files in the node/public folder of the repository. One of the HTML files includes the roslibjs JavaScript library, along with the custom JavaScript that uses roslibjs. When the page is opened, it tries to connect to rosbridge. If successful, the connection status text on the webpage is updated. When the virtual joystick on the page is moved, the JavaScript reads its position, converts that to a message type understood by ROS, and publishes that message to a topic. For controlling the robot's forward and rotational velocity, a message with a structure like this would be published to the "/wheels/cmd_velocity" topic: `{linear: {x: <forward velocity>, y: <sideways velocity>, z: 0}, angular: {x: 0, y: 0, z: <rotational velocity>}}`. The motor controller is subscribed to that topic, so it will receive the message and move the wheels as desired.

Roslibjs supports a wide variety of features and could even be used to stream video from the robot and display it on the website. In the future, it could be used to make something such as a telepresence mode for the robot. It can also be used to display maps or any other data the robot can send.

Node.js

Everything in the node folder is run in a Docker container made from the official Node.js image; its setup instructions are available on GitHub. docker-compose mounts a volume so everything in the node folder is available in the container in the /home/node/app folder.

The Node.js package setup was done following this tutorial. If you need to run npm commands, start the nodejs container and get a shell in it by running `sudo docker-compose run nodejs sh`. npm and ROS conflict, so all npm work has to be done inside the Docker container.

The key functions of Node.js:

  • Serve the main web page and its static files (CSS and JavaScript)
  • Relay WebSocket traffic between roslibjs in the browser and rosbridge, so that direct access to rosbridge can be restricted

webviz.io provides a ROS viewer that can display ROS topics and image streams. Access it at https://webviz.io/app/?rosbridge-websocket-url=wss://localhost/ws after changing wss://localhost/ws to whatever your ROS WebSocket URL is.

nginx

nginx is a webserver that relays requests to Node.js and handles SSL encryption. Key functions:

  • Handle SSL connections
  • Redirect http (port 80) to https and ws to wss (port 443) respectively
  • Relay https requests to http://nodejs:8080
  • Relay WebSocket requests at wss://<host>/ws to http://nodejs:8081
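Put together, the relaying rules above correspond to nginx configuration roughly like this. It is a sketch, not the project's actual config; the certificate paths are assumptions, and the Upgrade/Connection headers are the standard way nginx passes WebSocket upgrades through a proxy.

```nginx
# Sketch of the reverse-proxy setup described above.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/selfsigned.crt;  # assumed path
    ssl_certificate_key /etc/nginx/certs/selfsigned.key;  # assumed path

    # Ordinary https requests go to the Node.js app
    location / {
        proxy_pass http://nodejs:8080;
    }

    # WebSocket upgrade requests on /ws go to the relay port
    location /ws {
        proxy_pass http://nodejs:8081;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

# Redirect plain http (port 80) to https
server {
    listen 80;
    return 301 https://$host$request_uri;
}
```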

It uses a self-signed certificate, which is not ideal because it causes browsers to show a warning, but it still provides fairly good security. It also doesn't check the host domain, so the website can be accessed by IP, by any domain that points to its IP, or locally by going to localhost or 127.0.0.1. Many things about this aren't ideal, but it works well enough. DigitalOcean provides a tutorial, partially followed when setting this up, that shows how to properly host a website with Docker and Node.js. Using certbot to get a real certificate for a website on a local IP might be possible with a DNS authentication method if your DNS provider has a certbot plugin. The self-signed certificate nginx uses in this project is generated with openssl by running the generate-cert.sh script, which was set up following this tutorial.

firewall

Be careful: mapping ports with Docker seems to bypass the default ufw firewall rules, because Docker writes its own iptables rules. This is convenient in production but is something to be aware of when developing. More information here.
