5 changes: 5 additions & 0 deletions .dockerignore
@@ -0,0 +1,5 @@
.dockerignore
.editorconfig
README.md
LICENSE
node_modules
10 changes: 10 additions & 0 deletions .editorconfig
@@ -0,0 +1,10 @@

[*]
end_of_line = lf
indent_style = space
indent_size = 4
insert_final_newline = true
trim_trailing_whitespace = true

[*.{yml,json,md}]
indent_size = 2
80 changes: 80 additions & 0 deletions Dockerfile
@@ -0,0 +1,80 @@
############################
# Final container
############################

# Because we are using `asdf` to manage our tools, we can simply use the bash
# image from the cto.ai registry, as it provides the `sdk-daemon` runtime that
# we need to connect to the CTO.ai platform, and we don't need to worry about
# the version of Node.js that is installed in the image by default.
FROM registry.cto.ai/official_images/bash:2-bullseye-slim

# Download the Tailscale binaries and extract them to the `/usr/local/bin`
# directory, as well as create the `/var/run/tailscale` directory which the
# Tailscale daemon uses to store runtime information.
ARG TAILSCALE_VERSION
ENV TAILSCALE_VERSION=${TAILSCALE_VERSION:-1.74.1}
RUN curl -fsSL "https://pkgs.tailscale.com/stable/tailscale_${TAILSCALE_VERSION}_amd64.tgz" --max-time 300 --fail \
| tar -xz -C /usr/local/bin --strip-components=1 --no-anchored tailscale tailscaled \
&& mkdir -p /var/run/tailscale \
&& chown -R ops:9999 /usr/local/bin/tailscale* /var/run/tailscale

# Copy the `entrypoint.sh` script to the container and set the appropriate
# permissions to ensure that it can be executed by the `ops` user. We need to
# use an entrypoint script to ensure the Tailscale daemon is running before we
# run the code that defines our workflow.
COPY --chown=ops:9999 lib/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# The base directory for our image is `/ops`, which is where all of the code
# that defines our workflow will live.
WORKDIR /ops

# Run the container as the `ops` user by default, and set the appropriate
# environment variables for that user. Because we're going to use `asdf` to
# manage our tools, we'll manually set the `PATH` environment variable to
# include the shims and binaries in the `/ops/.asdf` directory that will soon
# be installed.
ENV USER=ops HOME=/ops XDG_RUNTIME_DIR=/run/ops/9999 \
PATH=/ops/.asdf/shims:/ops/.asdf/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# Set the `ASDF_VERSION_TAG` and `ASDF_DIR` environment variables manually to
# ensure that the correct version of the tool is installed in `/ops/.asdf`.
ENV ASDF_VERSION_TAG=v0.14.1 \
ASDF_DIR=/ops/.asdf

# Copy the contents of the `./lib/build/` directory into the `/build/`
# directory at the root of the image, where the build scripts will be run from.
COPY --chown=ops:9999 lib/build/ /build/

# Uncomment to install any additional packages needed to run the tools and code
# we will be using during the build process OR in our final container.
# RUN apt-get update \
# && apt-get install -y \
# build-essential \
# && apt-get clean \
# && rm -rf /var/lib/apt/lists/*

# Run the script that will install the `asdf` tool, the plugins necessary to
# install the tools specified in the `.tool-versions` file, and then install
# the tools themselves. This is how a more recent version of Node.js will be
# installed and managed in our image.
RUN bash /build/install-asdf-tools.sh

# Copy the `package.json` file to the container and run `npm install` to ensure
# that all of the dependencies for our Node.js code are installed.
COPY --chown=ops:9999 package.json .
RUN npm install

# Copy the rest of the project into `/ops/`, including the `index.js` file that
# defines the behavior of our workflow when it is run using the `ops run`
# command or any other trigger.
COPY --chown=ops:9999 . /ops/

##############################################################################
# As a security best practice, the container will always run as a non-root user.
##############################################################################

# Finally, set the `ops` user as the default user for the container and set the
# `entrypoint.sh` script as the default command that will be run when the
# workflow container is run. The `entrypoint.sh` script will be passed the `run`
# value from the `ops.yml` file that defines this workflow.
USER ops
ENTRYPOINT [ "/entrypoint.sh" ]
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

Copyright (c) 2024 workflows-sh
Copyright (c) 2024 CTO.ai

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
91 changes: 91 additions & 0 deletions README.md
@@ -0,0 +1,91 @@
# Example Workflow: Command with Tailscale Integration

This repository contains an example workflow that demonstrates how to use the Tailscale CLI to connect the running [Command](https://cto.ai/docs/commands/overview/) to a Tailscale network.

It also uses the [asdf](https://asdf-vm.com/) CLI—an all-in-one runtime version manager akin to nvm or pyenv—to manage the version of Node.js that is used to run the business logic of the Command.

<p align="center">
<img src="https://github.com/user-attachments/assets/a6b1f44f-d8c2-4183-939c-9b7cc4071804" alt="Example of the sample command being run as-is" width="75%" />
</p>

## Getting Started

### Using this Template

To start building your own Command workflow that connects to a Tailscale network through the CTO.ai platform, you can initialize this template locally using the CTO.ai [ops CLI](https://cto.ai/docs/usage-reference/ops-cli/), specifying this repository as the template:

```bash
ops init workflows-sh/sample-command-tailscaled
```

Alternatively, you can initialize a new repository by clicking the **Use this template** button at the top of this repository (or by [clicking here](https://github.com/new?template_name=sample-command-tailscaled&template_owner=workflows-sh)).

### Prerequisites

To use this Command, you will need to have accounts with the following services:

- [CTO.ai](https://cto.ai/home)
- [Tailscale](https://tailscale.com/)

<img src="https://github.com/user-attachments/assets/0c85bf1c-882b-4276-82dd-c8900787f314" alt="Screenshot of the Tailscale admin dashboard showing the proper settings to configure for your auth key" width="60%" align="right" style="padding-left: 50px;" />

#### Generate Tailscale key

You will also need to obtain an auth key for Tailscale from the [Tailscale admin console](https://login.tailscale.com/admin/settings/keys):

1. Click on **Generate auth key...**
2. Configure the auth key to be *Reusable*, ensuring that it can be used to connect multiple instances of our ephemeral Command workflow.
3. Set the key to be *Ephemeral*, ensuring that containers using the key will not be able to access the Tailscale network after the Command has completed.

### Configuration

By default, this Command looks for the Tailscale authentication key in a Secret registered [in your team's Secrets Store on the CTO.ai platform](https://cto.ai/docs/configs-and-secrets/configs-and-secrets/) named <code>TAILSCALE_AUTHKEY_<strong><em><TS_HOSTNAME></em></strong></code>.

Thus, for the default value of `TS_HOSTNAME` in the `ops.yml` file, the Secret in the Secrets Store would be named `TAILSCALE_AUTHKEY_SAMPLE_COMMAND_TAILSCALED`. To run this Command as-is, you can add your Tailscale authentication key to a Secret with that name in the Secrets Store associated with your team on the CTO.ai platform.

> [!NOTE]
> If a Tailscale auth key has not been added to a Secret with the appropriate name in the CTO.ai Secrets Store associated with your team, you will be prompted to provide a value for that Secret the first time this Command is run.

Alternatively, set a value for `AUTHKEY_SECRET_NAME` as a [static environment variable](https://cto.ai/docs/configs-and-secrets/managing-variables/#managing-workflow-behavior-with-environment-variables) in the `ops.yml` file, and the Command will look for the Tailscale authentication key in a Secret with the name specified by that value.
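
As a rough illustration of that override, the fragment below shows how a static environment variable might be declared. The `ops.yml` file itself is not part of this change, and the exact schema for static environment variables is defined by the CTO.ai documentation linked above, so treat the field names here as an assumption rather than a drop-in snippet:

```yaml
# Hypothetical ops.yml fragment; the field layout is an assumption based on
# the CTO.ai docs linked above, not on this repository.
env:
  static:
    # Look up the Tailscale auth key under this Secret name instead of the
    # name derived from TS_HOSTNAME.
    - AUTHKEY_SECRET_NAME=MY_TAILSCALE_AUTHKEY
```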

## Creating Your Own Workflow

Once you have this template initialized locally as a new Command workflow, you can modify the code in [index.js](./index.js) to define how the workflow should behave when it is run (see the [Workflow Architecture](#workflow-architecture) section below for more information).

When you are ready to test your changes, you can [build and run the Command](https://cto.ai/docs/workflows/using-workflows/) locally using the `ops run` command with the `-b` flag:

```bash
ops run -b .
```

When you are ready to deploy your Command to the CTO.ai platform to make it available to your team via the `ops` CLI or our [Slack integration](https://cto.ai/docs/slackops/overview/), you can use the `ops publish` command:

```bash
ops publish .
```

## Workflow Architecture

The five main components described below define this example Command workflow.

### Runtime container definition: `Dockerfile`

The [Dockerfile](./Dockerfile) defines how the container image that executes the workflow is built. This is where dependencies are installed, including the `tailscale` and `tailscaled` binaries, as well as the dependencies managed by `asdf`.

### Build dependencies: `lib/build/`

This directory contains the scripts that the Dockerfile executes to install the dependencies managed by `asdf`. Within it, the [`install-asdf-tools.sh`](./lib/build/install-asdf-tools.sh) script installs the asdf-managed dependency versions defined in the [`asdf-installs`](./lib/build/asdf-installs) file.

### Container entrypoint: `lib/entrypoint.sh`

The [`entrypoint.sh`](./lib/entrypoint.sh) script is executed when the container starts. It starts the `tailscaled` daemon, which allows the client to connect to a Tailscale network when the Command is run. After starting the daemon, the script uses the `exec` command to replace the current process (that is, the `entrypoint.sh` script) with the process specified in the `ops.yml` file.

### Workflow definition(s): `ops.yml`

The [`ops.yml`](./ops.yml) file defines the configuration for this Command. The command to execute as the [business logic of the workflow](https://cto.ai/docs/usage-reference/ops-yml/) is specified as the value of the `run` key, which is passed to the entrypoint of the final container.
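
A minimal sketch of how that might look, assuming the `run` value invokes `index.js` with Node; the command name, version tag, and `run` value below are assumptions, and the [`ops.yml`](./ops.yml) in this repository is authoritative:

```yaml
# Minimal sketch; see ops.yml in this repository for the real definition.
commands:
  - name: sample-command-tailscaled:0.1.0
    # This value is handed to /entrypoint.sh as its arguments, which the
    # entrypoint exec's after starting the tailscaled daemon.
    run: node /ops/index.js
```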

### Workflow business logic: `index.js`

The [`index.js`](./index.js) script contains the business logic of the workflow and is executed when the Command is run.

This is where the connection to a Tailscale network is initiated using the `tailscale up` command, which connects to the socket created by the `tailscaled` daemon started by the `entrypoint.sh` script.
109 changes: 109 additions & 0 deletions index.js
@@ -0,0 +1,109 @@
const { ux, sdk } = require('@cto.ai/sdk');

/**
* Return the hostname depending on which environment variables are set
* @returns {string} hostname
*/
function getWorkflowHostname() {
return process.env.TS_HOSTNAME || process.env.OPS_OP_NAME
}

/**
* Determine the name of the environment variable that contains the Tailscale
* auth key for the current hostname.
* @returns {string} authkeySecretName
*/
function getAuthKeySecretName() {
// If the `AUTHKEY_SECRET_NAME` static environment variable has been set
// in the `ops.yml` for the workflow, use the value of that variable as the
// name of the secret containing the Tailscale auth key.
if (process.env.AUTHKEY_SECRET_NAME) {
return process.env.AUTHKEY_SECRET_NAME
} else {
// Otherwise, generate the name of the secret based on the hostname
const hostkey = getWorkflowHostname().toUpperCase().replace(/-/g, '_').trim()
return `TAILSCALE_AUTHKEY_${hostkey}`
}
}

/**
* Retrieve the Tailscale auth key from the Secrets Store using the name of the
* secret that contains the key. The name of the secret to retrieve is determined
* by the string passed as the `authkeyName` parameter.
* @param {string} authkeyName
* @returns {Promise<string>} tailscaleAuthkey
*/
async function getAuthKey(authkeyName) {
const authkeyResponse = await sdk.getSecret(authkeyName)
return authkeyResponse[authkeyName]
}

async function main() {
// Determine the hostname for the Tailscale node, and get the auth key
const hostname = getWorkflowHostname()
const authkeyName = getAuthKeySecretName()
const tailscaleAuthkey = await getAuthKey(authkeyName)

// Connect to the Tailscale network using the auth key
sdk.log(`Connecting to Tailscale network using auth key for hostname '${hostname}'...`)
const tsResponse = await sdk.exec(`tailscale up --authkey=${tailscaleAuthkey} --accept-routes --timeout 60s --hostname ${hostname}`)
if (tsResponse.stdout) {
sdk.log(tsResponse.stdout)
}
sdk.log('Successfully connected to Tailscale network.')

/**
* Modify the code below to implement your workflow logic
* ------------------------------------------------------
*/

// Prompt the user to choose a Tailscale command to execute
// TODO: Modify this prompt with the options appropriate for the new Command
const {action} = await ux.prompt({
type: 'list',
name: 'action',
message: 'Which tailscale command would you like to execute?',
default: 'logout',
choices: ['logout', 'status', 'netcheck', 'whois'],
});

// Execute the selected Tailscale command
// TODO: Modify the business logic defined here that controls how the
// workflow behaves when it is run.
if (action === 'logout') {
await sdk.exec(`tailscale logout`)
sdk.log('Tailscale disconnected. Exiting...')
process.exit(0)
} else if (action === 'status') {
sdk.log('Fetching status of the current Tailscale node...')
const statusResponse = await sdk.exec(`tailscale status --peers=false`)
sdk.log(statusResponse.stdout)
} else if (action === 'netcheck') {
sdk.log('Running diagnostics on the local network for the current Tailscale node...')
const netcheckResponse = await sdk.exec(`tailscale netcheck`)
sdk.log(netcheckResponse.stdout)
} else if (action === 'whois') {
sdk.log('Fetching whois information for the current Tailscale node...')
const whoisResponse = await sdk.exec(`tailscale whois $(tailscale ip --4)`)
sdk.log(whoisResponse.stdout)
}

/**
* ------------------------------------------------------
* Modify the code above to implement your workflow logic
*/

// Disconnect from the Tailscale network
sdk.log('Disconnecting from Tailscale network...')
await sdk.exec(`tailscale logout`)
sdk.log('Tailscale disconnected. Exiting...')

// Exit cleanly
process.exit(0)
}

main().catch(async (err) => {
sdk.log(err);
await sdk.exec(`tailscale logout`)
process.exit(1);
});
6 changes: 6 additions & 0 deletions lib/build/asdf-installs
@@ -0,0 +1,6 @@
# Define dependencies in the style of a `.tool-versions` file used by the `asdf`
# version manager. The only difference between this file and a `.tool-versions`
# file is that the version doesn't need to be fully specified; simply the major
# version is enough, as the install script will automatically install the latest
# minor version available if none is specified.
nodejs 22
53 changes: 53 additions & 0 deletions lib/build/install-asdf-tools.sh
@@ -0,0 +1,53 @@
#!/usr/bin/env bash

################################################################################
# This script is copied to the workdir of the Docker container and executed as
# the last instruction during the image build process.
################################################################################

export DEBIAN_FRONTEND=noninteractive

# Set the script to fail if any commands fail.
set -e

# Install the `git` package using `apt-get`, then clean up the package manager
# cache to reduce the size of the Docker image.
apt-get update
apt-get install -y git
apt-get clean
rm -rf /var/lib/apt/lists/*

# Install the `asdf` version manager by cloning the repository from GitHub
# into our `/ops/.asdf` directory, which is acting as the home/working directory
# for the `ops` user.
git clone https://github.com/asdf-vm/asdf.git ${ASDF_DIR:="/ops/.asdf"} --branch ${ASDF_VERSION_TAG:-v0.14.1}
source "${ASDF_DIR}/asdf.sh"

echo '[[ -f "${ASDF_DIR}/asdf.sh" ]] && source "${ASDF_DIR}/asdf.sh"' >> /etc/profile

# For each line in the `/build/asdf-installs` file, get the name of the tool from
# the first column, then use that name to add the appropriate plugin to `asdf`.
# Plugins are the component that `asdf` uses to install and manage each
# individual tool or runtime environment.
while read line ; do
# Split the line into an array using whitespace as the delimiter.
set $line

# Skip empty lines and comments.
if [[ -z $1 ]] || [[ $1 == \#* ]]; then continue; fi

# Add the `asdf` plugin for whatever tool we want to install.
asdf plugin add $1

# Install the latest version of the tool we want to install. If the version
# number set in the `asdf-installs` file is a full semver including the patch,
# the `asdf` command will still accept it with the `latest:` prefix.
asdf install $1 latest:$2

# Set the tool we just installed as the global version for the `ops` user.
asdf global $1 latest:$2
done </build/asdf-installs

# For good measure, change the ownership of our `/ops` directory to the `ops`
# user and the `9999` group, recursively.
chown -R ops:9999 /ops/
10 changes: 10 additions & 0 deletions lib/entrypoint.sh
@@ -0,0 +1,10 @@
#!/bin/bash

# Start tailscaled in the background, registering the daemon as ephemeral and
# using userspace networking.
tailscaled --tun=userspace-networking --state=mem: 2>~/tailscaled.log &

# Switch to the `run` command we specify in the `ops.yml` file for this workflow
# using the `exec` command, which replaces the current process with the new one
# we pass in as arguments.
exec "$@"