From d8035bb30a65e89b397f488a05f49c282916152b Mon Sep 17 00:00:00 2001 From: kihahu Date: Thu, 22 May 2025 14:17:38 +0300 Subject: [PATCH 1/7] docs: add section for running allorad binary directly --- pages/devs/validators/run-full-node.mdx | 178 +++++++++++++++++++++++- 1 file changed, 176 insertions(+), 2 deletions(-) diff --git a/pages/devs/validators/run-full-node.mdx b/pages/devs/validators/run-full-node.mdx index 4441a3f..b0519c3 100644 --- a/pages/devs/validators/run-full-node.mdx +++ b/pages/devs/validators/run-full-node.mdx @@ -4,7 +4,7 @@ import { Callout } from 'nextra/components' > How to become a Validator on Allora -This guide provides instructions on how to run a full node for the Allora network. There are two primary methods for running an Allora node: using `docker compose` (preferred) or using a [script](https://github.com/allora-network/allora-chain/blob/main/scripts/l1_node.sh). It's important to choose the method that best suits your environment and needs. +This guide provides instructions on how to run a full node for the Allora network. There are two primary methods for running an Allora node: using `docker compose` (preferred) or using a [script](https://github.com/allora-network/allora-chain/blob/main/scripts/l1_node.sh). It's important to choose the method that best suits your environment and needs. *** @@ -130,4 +130,178 @@ brew install rclone ```bash docker compose pull docker compose up -d -``` \ No newline at end of file +``` + +*** + +## Method 2: Running the Binary Directly (using `allorad`) + +This method describes how to install and run an Allora node by directly using the `allorad` binary. This approach offers more control over the node setup and is suitable for users who prefer not to use Docker. + +### Prerequisites + +Before you begin, ensure you have the following installed: +- **Git**: For cloning repositories. +- **curl**: For downloading files and making HTTP requests. +- **jq**: A command-line JSON processor (used in the state sync script). +- **sed**: A stream editor (used in the state sync script). +- Basic command-line knowledge. + +### Step 1: Download the `allorad` Binary + +1. Navigate to the [Allora Chain Releases page](https://github.com/allora-network/allora-chain/releases/tag/v0.12.1). +2. From the "Assets" section of the release (e.g., v0.12.1 or newer), download the `allorad` binary appropriate for your operating system (e.g., `allorad-linux-amd64`, `allorad-darwin-amd64`). +3. Rename the downloaded binary to `allorad`. +4. Move the binary to a directory included in your system's `PATH`, for example, `/usr/local/bin`: + ```shell + sudo mv ./allorad /usr/local/bin/allorad + ``` +5. Make the binary executable: + ```shell + sudo chmod +x /usr/local/bin/allorad + ``` +6. Verify the installation by checking the version: + ```shell + allorad version + ``` + +### Step 2: Initialize Node and Get Testnet Configuration + +1. **Initialize your node**: + Replace `` with a unique name for your node. This moniker will be visible on network explorers. + ```shell + allorad init --chain-id allora-testnet-1 + ``` + This command creates the necessary configuration files and data directory at `$HOME/.allorad`. + +2. **Download the Genesis File**: + The genesis file contains the initial state of the blockchain. + ```shell + curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/genesis.json > $HOME/.allorad/config/genesis.json + ``` + +3. 
**Download the Address Book**: + The address book helps your node find peers on the network. + ```shell + curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/addrbook.json > $HOME/.allorad/config/addrbook.json + ``` + +4. **(Optional but Recommended) Configure Seeds/Persistent Peers**: + You may need to configure seed nodes or persistent peers in your `$HOME/.allorad/config/config.toml` file to ensure your node can connect to the network. Look for the `seeds` and `persistent_peers` fields. + ```toml + # Example content for $HOME/.allorad/config/config.toml + # seeds = "..." + # persistent_peers = "..." + ``` + You can find peer information from community channels or network explorers. + +### Step 3: State Sync with Polkachu + +State sync allows your node to quickly catch up with the network by downloading a recent snapshot of the chain state. We'll use the method provided by Polkachu. For more details, you can visit [Polkachu's Allora Testnet State-Sync guide](https://www.polkachu.com/testnets/allora/state_sync). + +1. **Create the state sync script**: + Create a file named `state_sync.sh` with the following content: + + ```bash + #!/bin/bash + + # Script adapted from Polkachu.com for Allora Testnet state-sync + # Original source: https://www.polkachu.com/testnets/allora/state_sync + + set -e + + SNAP_RPC="https://allora-testnet-rpc.polkachu.com:443" + CONFIG_TOML_PATH="$HOME/.allorad/config/config.toml" + + echo "Fetching latest block height from $SNAP_RPC..." + LATEST_HEIGHT=$(curl -s $SNAP_RPC/block | jq -r .result.block.header.height) + if [ -z "$LATEST_HEIGHT" ] || [ "$LATEST_HEIGHT" == "null" ]; then + echo "Error: Could not fetch latest height. Is jq installed and $SNAP_RPC accessible?" + exit 1 + fi + echo "Latest height: $LATEST_HEIGHT" + + # Calculate block height for trust hash (2000 blocks behind latest) + BLOCK_HEIGHT=$((LATEST_HEIGHT - 2000)) + echo "Using block height for trust hash: $BLOCK_HEIGHT" + + echo "Fetching trust hash for block $BLOCK_HEIGHT..." + TRUST_HASH=$(curl -s "$SNAP_RPC/block?height=$BLOCK_HEIGHT" | jq -r .result.block_id.hash) + if [ -z "$TRUST_HASH" ] || [ "$TRUST_HASH" == "null" ]; then + echo "Error: Could not fetch trust hash for block $BLOCK_HEIGHT." + exit 1 + fi + echo "Trust hash: $TRUST_HASH" + + echo "Updating $CONFIG_TOML_PATH for state sync..." + + # Enable state sync and configure RPC servers, trust height, and trust hash + sed -i.bak -E \ + -e "s|^(enable[[:space:]]*=[[:space:]]*).*$|\\1true|" \ + -e "s|^(rpc_servers[[:space:]]*=[[:space:]]*).*$|\\1\"$SNAP_RPC,$SNAP_RPC\"|" \ + -e "s|^(trust_height[[:space:]]*=[[:space:]]*).*$|\\1$BLOCK_HEIGHT|" \ + -e "s|^(trust_hash[[:space:]]*=[[:space:]]*).*$|\\1\"$TRUST_HASH\"|" \ + "$CONFIG_TOML_PATH" + + echo "Configuration updated in $CONFIG_TOML_PATH." + echo "A backup of the original config was created at $CONFIG_TOML_PATH.bak" + echo "Make sure to review the changes." + ``` + +2. **Make the script executable**: + ```shell + chmod +x state_sync.sh + ``` + +3. **Run the script**: + ```shell + ./state_sync.sh + ``` + This script will automatically fetch the latest state sync parameters and update your `$HOME/.allorad/config/config.toml`. + + +**Note**: The script assumes your Allora configuration is at `$HOME/.allorad`. If you used a different home directory during `allorad init`, you'll need to adjust the `CONFIG_TOML_PATH` in the script. 
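+
+After the script completes, it can be worth confirming that the values were actually written before moving on. A minimal check, assuming the standard CometBFT layout where these keys live under the `[statesync]` section of `config.toml`:
+
+```shell
+# Show the statesync block the script just modified
+grep -A 15 '\[statesync\]' $HOME/.allorad/config/config.toml
+```
+
+You should see `enable = true`, the two `rpc_servers` entries, and the `trust_height`/`trust_hash` pair fetched by the script; if any of them look empty, re-run the script or edit the file by hand.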
+ + +### Step 4: Reset Node Data (Important for State Sync) + +Before starting the node with state sync enabled, you must reset its existing data (if any), while keeping the address book. + +```shell +allorad tendermint unsafe-reset-all --home $HOME/.allorad --keep-addr-book +``` + + +**Warning**: This command deletes blockchain data. Do not run it on a node that already has important synced data unless you intend to resync from scratch. + + +### Step 5: Start Your Node + +Now, you can start your Allora node: + +```shell +allorad start +``` + +Your node will begin the state syncing process. This can take some time, typically 10-30 minutes, depending on network conditions and the snapshot age. You will see logs indicating the progress. + +### Step 6: Check Sync Status + +To check if your node has finished syncing, open another terminal and run: + +```shell +curl -s http://localhost:26657/status | jq .result.sync_info.catching_up +``` + +- If it returns `true`, your node is still syncing. +- If it returns `false`, your node is caught up with the network. + + +The default RPC port for `allorad` is `26657`. If you've configured a different port, adjust the command accordingly. + + +### Next Steps: Delegation + +Once your node is fully synced and running smoothly (the command above returns `false`), you can proceed with further actions such as setting up your node as a validator or delegating tokens. + +(Further instructions on delegation will be added here or linked to a separate guide.) \ No newline at end of file From ea487f7e6404afdfbb14c3996bbe44a22f55155f Mon Sep 17 00:00:00 2001 From: kihahu Date: Thu, 22 May 2025 14:20:57 +0300 Subject: [PATCH 2/7] docs: add section for managing allorad with cosmovisor and systemd --- pages/devs/validators/run-full-node.mdx | 187 +++++++++++++++++++++++- 1 file changed, 186 insertions(+), 1 deletion(-) diff --git a/pages/devs/validators/run-full-node.mdx b/pages/devs/validators/run-full-node.mdx index b0519c3..5ae05cf 100644 --- a/pages/devs/validators/run-full-node.mdx +++ b/pages/devs/validators/run-full-node.mdx @@ -304,4 +304,189 @@ The default RPC port for `allorad` is `26657`. If you've configured a different Once your node is fully synced and running smoothly (the command above returns `false`), you can proceed with further actions such as setting up your node as a validator or delegating tokens. -(Further instructions on delegation will be added here or linked to a separate guide.) \ No newline at end of file +(Further instructions on delegation will be added here or linked to a separate guide.) + +*** + +## Method 3: Using Cosmovisor for Automated Upgrades and Management + +`cosmovisor` is a process manager for Cosmos SDK application binaries like `allorad`. It monitors the chain for governance proposals approving software upgrades. If an upgrade is approved, `cosmovisor` can automatically download the new binary (if configured), stop the current binary, switch to the new version, and restart the node. This significantly simplifies the upgrade process for node operators. + +For more detailed information, refer to the official [Cosmos SDK Cosmovisor documentation](https://docs.cosmos.network/v0.46/run-node/cosmovisor.html). + +### Prerequisites + +- You should have `allorad` installed and initialized as described in "Method 2". +- Go (Golang) installed to build `cosmovisor`. 
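+
+Because the next step installs `cosmovisor` with `go install`, it can help to first confirm that the Go toolchain is present and that Go-installed binaries are on your `PATH` (by default `go install` places them in `$(go env GOPATH)/bin`). A quick, optional check:
+
+```shell
+# Confirm Go is installed and see where go-installed binaries will land
+go version
+echo "$(go env GOPATH)/bin"
+
+# Ensure that directory is on PATH so `cosmovisor` is found after installation
+# (add this line to your shell profile if it is not already there)
+export PATH="$PATH:$(go env GOPATH)/bin"
+```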
+ +### Step 1: Install Cosmovisor + +Install the latest version of `cosmovisor`: + +```shell +go install github.com/cosmos/cosmos-sdk/cosmovisor/cmd/cosmovisor@latest +``` + +Verify the installation: + +```shell +cosmovisor version +``` + +This command should output the `cosmovisor` version and also attempt to run `allorad version` if `DAEMON_NAME` is already set (we'll set it soon). + +### Step 2: Configure Cosmovisor for Allorad + +`cosmovisor` is configured using environment variables and a specific directory structure. + +1. **Set Environment Variables**: + These variables tell `cosmovisor` where to find your `allorad` files and what the binary is named. + + ```shell + # Set these in your shell profile (e.g., ~/.bashrc, ~/.zshrc) or for your systemd service. + export DAEMON_HOME=$HOME/.allorad + export DAEMON_NAME=allorad + export DAEMON_RESTART_AFTER_UPGRADE=true # Or false, if you want to restart manually after an upgrade + export DAEMON_ALLOW_DOWNLOAD_BINARIES=false # Set to true to allow auto-download (recommended for full nodes, not validators initially) + # export UNSAFE_SKIP_BACKUP=false # Default is false, which is safer. Set to true to skip backup during upgrade. + ``` + + - `DAEMON_HOME`: The home directory of your `allorad` application (e.g., `$HOME/.allorad`). This is where the `cosmovisor` directory will be created. + - `DAEMON_NAME`: The name of your application binary (`allorad`). + - `DAEMON_RESTART_AFTER_UPGRADE`: If `true`, `cosmovisor` automatically restarts `allorad` with the new binary after an upgrade. Default is `true`. + - `DAEMON_ALLOW_DOWNLOAD_BINARIES`: (Optional) If `true`, `cosmovisor` attempts to auto-download new binaries. Default is `false`. For validators, it's often safer to manually place the new binary and verify it. + - `UNSAFE_SKIP_BACKUP`: (Optional) Defaults to `false`. If `true`, `cosmovisor` skips backing up data before an upgrade. It's recommended to keep this `false`. + + Source your profile or open a new terminal session to apply these variables if you set them in your shell profile. + +2. **Create the Directory Structure**: + `cosmovisor` expects a specific directory layout within `$DAEMON_HOME/cosmovisor`. + + ```shell + # Create the genesis binary directory + mkdir -p $DAEMON_HOME/cosmovisor/genesis/bin + + # Copy your current allorad binary to the genesis directory + # Ensure allorad is in your PATH or provide the full path to the binary + cp $(which allorad) $DAEMON_HOME/cosmovisor/genesis/bin/allorad + ``` + + Your directory structure under `$DAEMON_HOME/cosmovisor` should look like this: + + ``` + . ($DAEMON_HOME/cosmovisor) + ├── genesis + │ └── bin + │ └── allorad # Your current allorad binary + └── current -> genesis # Symlink created by cosmovisor when it first runs + ``` + + Future upgrade binaries will be placed in `$DAEMON_HOME/cosmovisor/upgrades//bin/allorad`. + +### Step 3: Running Allorad with Cosmovisor + +Once `cosmovisor` is installed and configured, you can start your node using `cosmovisor run`. +All arguments and flags passed after `cosmovisor run` are passed directly to the `allorad` binary. + +For example, to start `allorad`: + +```shell +cosmovisor run start --home $HOME/.allorad +``` + +Or, if you have `--home $HOME/.allorad` as the default in your `allorad` config or don't need specific flags: + +```shell +cosmovisor run start +``` + +`cosmovisor` will now manage your `allorad` process. 
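+
+Before wiring this into `systemd` in the next step, you can sanity-check that `cosmovisor` picked up the expected binary. This is only a quick sketch; it assumes the `DAEMON_HOME` and `DAEMON_NAME` values set earlier and the directory layout from Step 2:
+
+```shell
+# After the first `cosmovisor run`, the `current` symlink should point at the genesis directory
+ls -l $DAEMON_HOME/cosmovisor/current
+
+# The binary cosmovisor is managing should report the expected allorad version
+$DAEMON_HOME/cosmovisor/current/bin/allorad version
+```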
+ +### Step 4: Managing Cosmovisor with Systemd + +Running `cosmovisor` as a `systemd` service ensures it runs in the background, restarts on failure, and starts on system boot. + +1. **Create a `systemd` Service File**: + + Create a file named `allorad.service` (or `cosmovisor.service`) in `/etc/systemd/system/` using a text editor like `nano` or `vim`: + + ```shell + sudo nano /etc/systemd/system/allorad.service + ``` + + Paste the following content into the file. Adjust `User`, `Group`, `Environment` variables, and `ExecStart` paths as necessary for your setup. + + ```ini + [Unit] + Description=Allora Node (allorad via Cosmovisor) + After=network-online.target + + [Service] + User=your_username # Replace with the username that runs allorad + Group=your_groupname # Replace with the group for the user + ExecStart=/path/to/go/bin/cosmovisor run start --home /home/your_username/.allorad + Restart=on-failure + RestartSec=10 + LimitNOFILE=65535 + + # Environment variables for Cosmovisor + Environment="DAEMON_HOME=/home/your_username/.allorad" + Environment="DAEMON_NAME=allorad" + Environment="DAEMON_RESTART_AFTER_UPGRADE=true" + Environment="DAEMON_ALLOW_DOWNLOAD_BINARIES=false" # Or true, as per your policy + # Environment="UNSAFE_SKIP_BACKUP=false" + + [Install] + WantedBy=multi-user.target + ``` + + **Important considerations for the service file**: + - Replace `your_username` and `your_groupname` with the actual user and group that will run the process. + - Ensure `/path/to/go/bin/cosmovisor` points to the correct location of your `cosmovisor` binary (usually `$HOME/go/bin/cosmovisor` if installed with default Go paths, or `/usr/local/go/bin/cosmovisor` if Go is installed system-wide, then the binary path would be `/home/your_username/go/bin/cosmovisor`). You can find the path using `which cosmovisor` when logged in as the user who installed it. + - Adjust the `--home` path in `ExecStart` and `DAEMON_HOME` if your `allorad` home directory is different. + - It's generally safer to use absolute paths for binaries and home directories in `systemd` service files. + +2. **Reload `systemd` and Manage the Service**: + + - **Reload `systemd` daemon** to recognize the new service file: + ```shell + sudo systemctl daemon-reload + ``` + + - **Enable the service** to start automatically on boot: + ```shell + sudo systemctl enable allorad.service + ``` + + - **Start the service**: + ```shell + sudo systemctl start allorad.service + ``` + + - **Check the status** of the service: + ```shell + sudo systemctl status allorad.service + ``` + + - **View logs** (follow mode): + ```shell + sudo journalctl -u allorad.service -f + ``` + + - **To stop the service**: + ```shell + sudo systemctl stop allorad.service + ``` + +### Step 5: Handling Upgrades with Cosmovisor + +When a software upgrade proposal is approved by governance and the upgrade height is reached: +1. `cosmovisor` detects the upgrade plan. +2. If `DAEMON_ALLOW_DOWNLOAD_BINARIES=true` and a download link is provided in the plan, `cosmovisor` attempts to download the new binary into `$DAEMON_HOME/cosmovisor/upgrades//bin/`. +3. Alternatively, you would manually build or download the new `allorad` binary and place it in the correct upgrade directory: `$DAEMON_HOME/cosmovisor/upgrades//bin/allorad`. You also need to create an `upgrade-info.json` if not auto-downloading. +4. `cosmovisor` stops the current `allorad` process. +5. It performs a backup if `UNSAFE_SKIP_BACKUP=false`. +6. 
It switches the `current` symlink to point to the new upgrade directory. +7. If `DAEMON_RESTART_AFTER_UPGRADE=true`, `cosmovisor` restarts `allorad` using the new binary. + +Ensure you monitor your node and governance proposals for upcoming upgrades. If you are not using auto-download, you will need to prepare the new binary in advance. \ No newline at end of file From c721470093864a9d1b7a43ec590a7d5d0f831ba9 Mon Sep 17 00:00:00 2001 From: kihahu Date: Thu, 22 May 2025 14:34:51 +0300 Subject: [PATCH 3/7] docs: update allorad config and peer setup instructions --- pages/devs/validators/run-full-node.mdx | 61 +++++++++++++++++++++---- 1 file changed, 52 insertions(+), 9 deletions(-) diff --git a/pages/devs/validators/run-full-node.mdx b/pages/devs/validators/run-full-node.mdx index 5ae05cf..c1f085f 100644 --- a/pages/devs/validators/run-full-node.mdx +++ b/pages/devs/validators/run-full-node.mdx @@ -172,28 +172,71 @@ Before you begin, ensure you have the following installed: ```shell allorad init --chain-id allora-testnet-1 ``` - This command creates the necessary configuration files and data directory at `$HOME/.allorad`. + This command creates the default configuration files and data directory at `$HOME/.allorad`. + +2. **Download Testnet Configuration Files**: + Fetch the `genesis.json`, `config.toml`, and `app.toml` for `allora-testnet-1` from the official networks repository. -2. **Download the Genesis File**: - The genesis file contains the initial state of the blockchain. ```shell + # Download genesis.json curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/genesis.json > $HOME/.allorad/config/genesis.json + + # Download config.toml + curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/config.toml > $HOME/.allorad/config/config.toml + + # Download app.toml + curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/app.toml > $HOME/.allorad/config/app.toml ``` + + **Customize Configuration**: + You may need to customize `$HOME/.allorad/config/config.toml` and `$HOME/.allorad/config/app.toml`. + For example, in `config.toml`, you might need to set your `external_address` if you have a static IP, or adjust `laddr` to bind to a specific interface. + Review these files and adjust settings like moniker (if not set via `init`), pruning options, and other node-specific parameters. + + 3. **Download the Address Book**: The address book helps your node find peers on the network. ```shell curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/addrbook.json > $HOME/.allorad/config/addrbook.json ``` -4. **(Optional but Recommended) Configure Seeds/Persistent Peers**: - You may need to configure seed nodes or persistent peers in your `$HOME/.allorad/config/config.toml` file to ensure your node can connect to the network. Look for the `seeds` and `persistent_peers` fields. +4. **Configure Seeds and Persistent Peers**: + To ensure your node can connect to the network and find peers, configure seed nodes and persistent peers in your `$HOME/.allorad/config/config.toml`. 
+ + The `allora-network/networks` repository provides lists of seed nodes and peers: + - [seeds.txt](https://github.com/allora-network/networks/blob/main/allora-testnet-1/seeds.txt) + - [peers.txt](https://github.com/allora-network/networks/blob/main/allora-testnet-1/peers.txt) + + You can fetch these lists and use their contents to populate the `seeds` and `persistent_peers` fields in `$HOME/.allorad/config/config.toml`. + + **Example of fetching and setting seeds (manual step)**: + First, view the contents of `seeds.txt`: + ```shell + curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/seeds.txt + ``` + Then, copy the comma-separated list of seed nodes from the output. + + Edit `$HOME/.allorad/config/config.toml`: + ```shell + nano $HOME/.allorad/config/config.toml + ``` + Find the `seeds` line and replace its value with the copied list: + ```toml + # Comma-separated list of seed nodes to connect to + seeds = "" + ``` + + Similarly, you can use `peers.txt` for the `persistent_peers` field if needed, especially if you have specific peers you always want to connect to. + ```toml - # Example content for $HOME/.allorad/config/config.toml - # seeds = "..." - # persistent_peers = "..." + # Comma-separated list of persistent peers to connect to + persistent_peers = "" ``` - You can find peer information from community channels or network explorers. + + + Ensure the `seeds` and `persistent_peers` entries are correctly formatted as comma-separated strings within double quotes. + ### Step 3: State Sync with Polkachu From 595a998da77a2d5892f7e5b010a6ee1caddc1d00 Mon Sep 17 00:00:00 2001 From: kihahu Date: Thu, 22 May 2025 14:42:03 +0300 Subject: [PATCH 4/7] docs: refactor state sync section for multiple RPC options --- pages/devs/validators/run-full-node.mdx | 66 ++++++++++++++++++------- 1 file changed, 47 insertions(+), 19 deletions(-) diff --git a/pages/devs/validators/run-full-node.mdx b/pages/devs/validators/run-full-node.mdx index c1f085f..fd15931 100644 --- a/pages/devs/validators/run-full-node.mdx +++ b/pages/devs/validators/run-full-node.mdx @@ -195,13 +195,8 @@ Before you begin, ensure you have the following installed: Review these files and adjust settings like moniker (if not set via `init`), pruning options, and other node-specific parameters. -3. **Download the Address Book**: - The address book helps your node find peers on the network. - ```shell - curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/addrbook.json > $HOME/.allorad/config/addrbook.json - ``` -4. **Configure Seeds and Persistent Peers**: +3. **Configure Seeds and Persistent Peers**: To ensure your node can connect to the network and find peers, configure seed nodes and persistent peers in your `$HOME/.allorad/config/config.toml`. The `allora-network/networks` repository provides lists of seed nodes and peers: @@ -238,50 +233,81 @@ Before you begin, ensure you have the following installed: Ensure the `seeds` and `persistent_peers` entries are correctly formatted as comma-separated strings within double quotes. -### Step 3: State Sync with Polkachu +### Step 3: State Sync Your Node + +State sync allows your node to quickly catch up with the network by downloading a recent snapshot of the chain state instead of replaying all historical blocks. This can significantly reduce the time it takes to get your node operational. -State sync allows your node to quickly catch up with the network by downloading a recent snapshot of the chain state. 
We'll use the method provided by Polkachu. For more details, you can visit [Polkachu's Allora Testnet State-Sync guide](https://www.polkachu.com/testnets/allora/state_sync). +You can use RPC endpoints from various providers for state sync. Here are a few options: +- **Official Allora RPC**: `https://allora-rpc.testnet.allora.network/` +- **Polkachu**: `https://allora-testnet-rpc.polkachu.com:443` (as referenced in their [State Sync Guide](https://www.polkachu.com/testnets/allora/state_sync)) +- **Lavender Five**: `https://testnet-rpc.lavenderfive.com:443/allora` or `https://rpc.cosmos.directory:443/allora` (as referenced in their [State Sync Guide](https://www.lavenderfive.com/tools/testnet_allora/statesync)) + +Below is a script to help automate the state sync process. You'll need to choose an RPC endpoint from the list above (or another trusted source) and set it in the script. 1. **Create the state sync script**: - Create a file named `state_sync.sh` with the following content: + Create a file named `state_sync.sh` with the following content. **Remember to replace `` with your selected RPC URL.** ```bash #!/bin/bash - # Script adapted from Polkachu.com for Allora Testnet state-sync - # Original source: https://www.polkachu.com/testnets/allora/state_sync + # Script to automate state sync for an Allora node. + # Please choose an RPC endpoint from the documentation and set it below. set -e - SNAP_RPC="https://allora-testnet-rpc.polkachu.com:443" + # ------------------------------------------------------------------------------ + # IMPORTANT: SET YOUR CHOSEN RPC ENDPOINT HERE + # Example: SNAP_RPC="https://allora-rpc.testnet.allora.network/" + # Example: SNAP_RPC="https://allora-testnet-rpc.polkachu.com:443" + # Example: SNAP_RPC="https://testnet-rpc.lavenderfive.com:443/allora" + SNAP_RPC="" + # ------------------------------------------------------------------------------ + CONFIG_TOML_PATH="$HOME/.allorad/config/config.toml" + if [ "$SNAP_RPC" == "" ] || [ -z "$SNAP_RPC" ]; then + echo "Error: Please edit the script and set the SNAP_RPC variable to your chosen RPC endpoint." + exit 1 + fi + + echo "Using RPC Endpoint: $SNAP_RPC" echo "Fetching latest block height from $SNAP_RPC..." + # The /block endpoint might vary slightly. Polkachu/LavenderFive uses /block, official might be different or not directly list height like this. + # For robust fetching, direct API interaction or a more complex query might be needed if /block isn't standard across all RPCs. + # This script assumes a Cosmos SDK standard /block endpoint as used by Polkachu. LATEST_HEIGHT=$(curl -s $SNAP_RPC/block | jq -r .result.block.header.height) + if [ -z "$LATEST_HEIGHT" ] || [ "$LATEST_HEIGHT" == "null" ]; then - echo "Error: Could not fetch latest height. Is jq installed and $SNAP_RPC accessible?" + echo "Error: Could not fetch latest height. Is jq installed and $SNAP_RPC accessible and does it provide height at .result.block.header.height?" exit 1 fi echo "Latest height: $LATEST_HEIGHT" - # Calculate block height for trust hash (2000 blocks behind latest) - BLOCK_HEIGHT=$((LATEST_HEIGHT - 2000)) - echo "Using block height for trust hash: $BLOCK_HEIGHT" + # Determine a suitable trust height. Common practice is a few thousand blocks behind the latest. + # The exact offset might depend on snapshot intervals of the chosen RPC. 
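+   # Whichever offset you use, the chosen height must still be available from the RPC and
+   # recent enough to sit inside the chain's trusting period, or light-client verification
+   # will fail; if state sync stalls, retry with a smaller offset.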
+ # Polkachu uses 2000, LavenderFive uses a calculation like ($LATEST_HEIGHT - ($LATEST_HEIGHT % 6000)) which means it rounds down to the nearest 6000 block + # We will use a common offset of 2000 for simplicity. Adjust if needed. + BLOCK_HEIGHT_OFFSET=2000 + BLOCK_HEIGHT=$((LATEST_HEIGHT - BLOCK_HEIGHT_OFFSET)) + echo "Using block height for trust hash: $BLOCK_HEIGHT (approx. ${BLOCK_HEIGHT_OFFSET} blocks behind latest)" echo "Fetching trust hash for block $BLOCK_HEIGHT..." TRUST_HASH=$(curl -s "$SNAP_RPC/block?height=$BLOCK_HEIGHT" | jq -r .result.block_id.hash) if [ -z "$TRUST_HASH" ] || [ "$TRUST_HASH" == "null" ]; then - echo "Error: Could not fetch trust hash for block $BLOCK_HEIGHT." + echo "Error: Could not fetch trust hash for block $BLOCK_HEIGHT. Ensure the RPC supports block queries by height." exit 1 fi echo "Trust hash: $TRUST_HASH" echo "Updating $CONFIG_TOML_PATH for state sync..." - # Enable state sync and configure RPC servers, trust height, and trust hash + # Construct RPC server string. Some providers suggest using their RPC twice or a backup. + # For simplicity, we'll use the chosen SNAP_RPC twice here. Adjust if your provider recommends a different setup. + RPC_SERVERS="$SNAP_RPC,$SNAP_RPC" + sed -i.bak -E \ -e "s|^(enable[[:space:]]*=[[:space:]]*).*$|\\1true|" \ - -e "s|^(rpc_servers[[:space:]]*=[[:space:]]*).*$|\\1\"$SNAP_RPC,$SNAP_RPC\"|" \ + -e "s|^(rpc_servers[[:space:]]*=[[:space:]]*).*$|\\1\"$RPC_SERVERS\"|" \ -e "s|^(trust_height[[:space:]]*=[[:space:]]*).*$|\\1$BLOCK_HEIGHT|" \ -e "s|^(trust_hash[[:space:]]*=[[:space:]]*).*$|\\1\"$TRUST_HASH\"|" \ "$CONFIG_TOML_PATH" @@ -289,6 +315,8 @@ State sync allows your node to quickly catch up with the network by downloading echo "Configuration updated in $CONFIG_TOML_PATH." echo "A backup of the original config was created at $CONFIG_TOML_PATH.bak" echo "Make sure to review the changes." + echo "Important: You might also need to configure 'persistent_peers' in $CONFIG_TOML_PATH if not set by other means for the state sync to work reliably." + echo "Refer to the seeds.txt and peers.txt from https://github.com/allora-network/networks/tree/main/allora-testnet-1 for peer information." ``` 2. **Make the script executable**: From 505600c8d21f3b97724cc96424c24839cb36a525 Mon Sep 17 00:00:00 2001 From: kihahu Date: Thu, 22 May 2025 15:18:06 +0300 Subject: [PATCH 5/7] docs: remove cosmovisor auto-download option and update upgrade steps --- pages/devs/validators/run-full-node.mdx | 18 +++++++----------- 1 file changed, 7 insertions(+), 11 deletions(-) diff --git a/pages/devs/validators/run-full-node.mdx b/pages/devs/validators/run-full-node.mdx index fd15931..9cb997a 100644 --- a/pages/devs/validators/run-full-node.mdx +++ b/pages/devs/validators/run-full-node.mdx @@ -418,14 +418,12 @@ This command should output the `cosmovisor` version and also attempt to run `all export DAEMON_HOME=$HOME/.allorad export DAEMON_NAME=allorad export DAEMON_RESTART_AFTER_UPGRADE=true # Or false, if you want to restart manually after an upgrade - export DAEMON_ALLOW_DOWNLOAD_BINARIES=false # Set to true to allow auto-download (recommended for full nodes, not validators initially) # export UNSAFE_SKIP_BACKUP=false # Default is false, which is safer. Set to true to skip backup during upgrade. ``` - `DAEMON_HOME`: The home directory of your `allorad` application (e.g., `$HOME/.allorad`). This is where the `cosmovisor` directory will be created. - `DAEMON_NAME`: The name of your application binary (`allorad`). 
- `DAEMON_RESTART_AFTER_UPGRADE`: If `true`, `cosmovisor` automatically restarts `allorad` with the new binary after an upgrade. Default is `true`. - - `DAEMON_ALLOW_DOWNLOAD_BINARIES`: (Optional) If `true`, `cosmovisor` attempts to auto-download new binaries. Default is `false`. For validators, it's often safer to manually place the new binary and verify it. - `UNSAFE_SKIP_BACKUP`: (Optional) Defaults to `false`. If `true`, `cosmovisor` skips backing up data before an upgrade. It's recommended to keep this `false`. Source your profile or open a new terminal session to apply these variables if you set them in your shell profile. @@ -504,7 +502,6 @@ Running `cosmovisor` as a `systemd` service ensures it runs in the background, r Environment="DAEMON_HOME=/home/your_username/.allorad" Environment="DAEMON_NAME=allorad" Environment="DAEMON_RESTART_AFTER_UPGRADE=true" - Environment="DAEMON_ALLOW_DOWNLOAD_BINARIES=false" # Or true, as per your policy # Environment="UNSAFE_SKIP_BACKUP=false" [Install] @@ -553,11 +550,10 @@ Running `cosmovisor` as a `systemd` service ensures it runs in the background, r When a software upgrade proposal is approved by governance and the upgrade height is reached: 1. `cosmovisor` detects the upgrade plan. -2. If `DAEMON_ALLOW_DOWNLOAD_BINARIES=true` and a download link is provided in the plan, `cosmovisor` attempts to download the new binary into `$DAEMON_HOME/cosmovisor/upgrades//bin/`. -3. Alternatively, you would manually build or download the new `allorad` binary and place it in the correct upgrade directory: `$DAEMON_HOME/cosmovisor/upgrades//bin/allorad`. You also need to create an `upgrade-info.json` if not auto-downloading. -4. `cosmovisor` stops the current `allorad` process. -5. It performs a backup if `UNSAFE_SKIP_BACKUP=false`. -6. It switches the `current` symlink to point to the new upgrade directory. -7. If `DAEMON_RESTART_AFTER_UPGRADE=true`, `cosmovisor` restarts `allorad` using the new binary. - -Ensure you monitor your node and governance proposals for upcoming upgrades. If you are not using auto-download, you will need to prepare the new binary in advance. \ No newline at end of file +2. You must manually build or download the new `allorad` binary and place it in the correct upgrade directory: `$DAEMON_HOME/cosmovisor/upgrades//bin/allorad`. You will also need to create an `upgrade-info.json` file in the `$DAEMON_HOME/cosmovisor/upgrades//` directory if it's not provided with the binary. This file is typically simple and might just contain `{}` if no specific upgrade information is needed by the binary itself for the upgrade process, but consult the release notes for the specific version for requirements. +3. `cosmovisor` stops the current `allorad` process. +4. It performs a backup if `UNSAFE_SKIP_BACKUP=false` (default). +5. It switches the `current` symlink to point to the new upgrade directory. +6. If `DAEMON_RESTART_AFTER_UPGRADE=true` (default), `cosmovisor` restarts `allorad` using the new binary. + +Ensure you monitor your node and governance proposals for upcoming upgrades and prepare the new binary in advance. 
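+
+As a concrete sketch of step 2 above, preparing an upgrade directory by hand might look like the following. The `<upgrade-name>` placeholder and the local `./allorad` path are assumptions; use the exact upgrade name from the on-chain upgrade plan and the binary from the matching official release:
+
+```shell
+# Replace <upgrade-name> with the name declared in the approved upgrade proposal
+mkdir -p $DAEMON_HOME/cosmovisor/upgrades/<upgrade-name>/bin
+
+# Place the new allorad binary there and make it executable
+cp ./allorad $DAEMON_HOME/cosmovisor/upgrades/<upgrade-name>/bin/allorad
+chmod +x $DAEMON_HOME/cosmovisor/upgrades/<upgrade-name>/bin/allorad
+```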
\ No newline at end of file From 354dac008246909079b97f658fc62be604fcf2fb Mon Sep 17 00:00:00 2001 From: kihahu Date: Thu, 22 May 2025 15:22:15 +0300 Subject: [PATCH 6/7] docs: refine cosmovisor section title for clarity on upgrade management --- pages/devs/validators/run-full-node.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/pages/devs/validators/run-full-node.mdx b/pages/devs/validators/run-full-node.mdx index 9cb997a..7195919 100644 --- a/pages/devs/validators/run-full-node.mdx +++ b/pages/devs/validators/run-full-node.mdx @@ -379,9 +379,9 @@ Once your node is fully synced and running smoothly (the command above returns ` *** -## Method 3: Using Cosmovisor for Automated Upgrades and Management +## Method 3: Using Cosmovisor for Upgrade Management -`cosmovisor` is a process manager for Cosmos SDK application binaries like `allorad`. It monitors the chain for governance proposals approving software upgrades. If an upgrade is approved, `cosmovisor` can automatically download the new binary (if configured), stop the current binary, switch to the new version, and restart the node. This significantly simplifies the upgrade process for node operators. +`cosmovisor` is a process manager for Cosmos SDK application binaries like `allorad`. It monitors the chain for governance proposals approving software upgrades. If an upgrade is approved, `cosmovisor` will stop the current binary, switch to the new version (which you must manually provide), and then restart the node. This significantly simplifies the upgrade process for node operators by automating the switch and restart, but not the acquisition of the new binary. For more detailed information, refer to the official [Cosmos SDK Cosmovisor documentation](https://docs.cosmos.network/v0.46/run-node/cosmovisor.html). From 4b4e05a7a1cabfbcaf4ec044743cd73df20d7e1d Mon Sep 17 00:00:00 2001 From: kihahu Date: Thu, 22 May 2025 18:31:03 +0300 Subject: [PATCH 7/7] docs: update introduction to cover all node deployment methods --- pages/devs/validators/run-full-node.mdx | 651 +++++++++--------------- 1 file changed, 232 insertions(+), 419 deletions(-) diff --git a/pages/devs/validators/run-full-node.mdx b/pages/devs/validators/run-full-node.mdx index 7195919..26cb25e 100644 --- a/pages/devs/validators/run-full-node.mdx +++ b/pages/devs/validators/run-full-node.mdx @@ -4,556 +4,369 @@ import { Callout } from 'nextra/components' > How to become a Validator on Allora -This guide provides instructions on how to run a full node for the Allora network. There are two primary methods for running an Allora node: using `docker compose` (preferred) or using a [script](https://github.com/allora-network/allora-chain/blob/main/scripts/l1_node.sh). It's important to choose the method that best suits your environment and needs. +This guide provides instructions on how to run a full node for the Allora network. There are two primary methods for running an Allora node: using systemd with cosmosvisor for easier upgrade management (recommended) or using docker compose. It's important to choose the method that best suits your environment and needs. 
*** ## Prerequisites - Git -- Docker with `docker compose` +- Go (version 1.21 or later) - Basic command-line knowledge +- Linux/Unix environment with systemd +- curl and jq utilities *** -## Method 1: Using `docker compose` (Recommended) +## Method 1: Using systemd with cosmosvisor (Recommended) -Running the Allora node with `docker compose` simplifies the setup and ensures consistency across different environments. +Running the Allora node with systemd and cosmosvisor provides production-grade reliability and easier binary upgrade management. This is the recommended approach for validators and production environments. -### Step 1: Clone the Allora Chain Repository +### Step 1: Install cosmosvisor -If you haven't already, clone the latest release of the [allora-chain repository](https://github.com/allora-network/allora-chain): +First, install cosmosvisor, which will manage binary upgrades: ```shell -git clone https://github.com/allora-network/allora-chain.git +go install cosmossdk.io/tools/cosmovisor/cmd/cosmovisor@latest ``` -### Step 2: Run the Node with Docker Compose - -Navigate to the root directory of the cloned repository and start the node using `docker compose`: +Verify the installation: ```shell -cd allora-chain -docker compose pull -docker compose up +cosmovisor version ``` -> run `docker compose up -d` to run the container in detached mode, allowing it to run in the background. +### Step 2: Install allorad Binary - -**Info**: Don't forget to pull the images first, to ensure that you're using the latest images. - +Download the latest `allorad` binary from the releases page: - -Make sure that any previous containers you launched are killed, before launching a new container that uses the same port. +1. Navigate to the [Allora Chain Releases page](https://github.com/allora-network/allora-chain/releases/latest). +2. Download the `allorad` binary appropriate for your operating system (e.g., `allorad-linux-amd64`, `allorad-darwin-amd64`). +3. Rename and move the binary to a standard location: -You can run the following command to kill any containers running on the same port: -```bash -docker container ls -docker rm -f -``` - +```shell +# Rename the downloaded binary +mv ./allorad-linux-amd64 ./allorad # Adjust filename as needed -#### Run Only a Node with Docker Compose -In this case, you will use Allora's heads. -##### Run +# Move to system path +sudo mv ./allorad /usr/local/bin/allorad +# Make executable +sudo chmod +x /usr/local/bin/allorad ``` -docker compose pull -docker compose up node -``` -To run only a head: `docker compose up head` - -**NOTE:** You also can comment the head service in the Dockerfile. - +### Step 3: Initialize the Node -### Monitoring Logs - -To view the node's logs, use the following command: +Initialize your node (replace `` with your desired node name): ```shell -docker compose logs -f +allorad init --chain-id allora-testnet-1 ``` -### Executing RPC Calls +### Step 4: Download Network Configuration -You can interact with the running node through RPC calls. For example, to check the node's status: +Download the testnet configuration files: ```shell -curl -s http://localhost:26657/status | jq . 
+# Download genesis.json +curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/genesis.json > $HOME/.allorad/config/genesis.json + +# Download config.toml +curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/config.toml > $HOME/.allorad/config/config.toml + +# Download app.toml +curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/app.toml > $HOME/.allorad/config/app.toml ``` -This command uses `curl` to send a request to the node's RPC interface and `jq` to format the JSON response. +### Step 5: Configure Seeds and Peers -Once your node has finished syncing and is caught up with the network, this command will return `false`: +Configure seeds and persistent peers for network connectivity: ```shell -curl -so- http\://localhost:26657/status | jq .result.sync_info.catching_up +# Fetch and set seeds +SEEDS=$(curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/seeds.txt) +sed -i.bak -e "s/^seeds *=.*/seeds = \"$SEEDS\"/" $HOME/.allorad/config/config.toml + +# Optionally set persistent peers +PEERS=$(curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/peers.txt) +sed -i.bak -e "s/^persistent_peers *=.*/persistent_peers = \"$PEERS\"/" $HOME/.allorad/config/config.toml ``` - -**Info**: The time required to sync depends on the chain's size and height. +### Step 6: Configure cosmosvisor - - For newly launched chains, syncing will take **minutes**. - - Established chains like Ethereum can take around **a day** to sync using Nethermind or similar clients. - - Some chains may take **several days** to sync. - - Syncing an archival node will take significantly more time. - +Set up the cosmosvisor directory structure and environment: - -**Warning**: Network participants will not be able to connect to your node until it is finished syncing and the command above returns `false`. - +```shell +# Set environment variables +export DAEMON_NAME=allorad +export DAEMON_HOME=$HOME/.allorad +export DAEMON_RESTART_AFTER_UPGRADE=true -### Syncing from Snapshot +# Create cosmosvisor directories +mkdir -p $DAEMON_HOME/cosmovisor/genesis/bin +mkdir -p $DAEMON_HOME/cosmovisor/upgrades -Users can also opt to sync their nodes from our [latest snapshot script](https://github.com/allora-network/allora-chain/blob/main/scripts/restore_snapshot.sh) following the instructions below: +# Copy the current binary to genesis +cp /usr/local/bin/allorad $DAEMON_HOME/cosmovisor/genesis/bin/ +``` -1. Install [`rclone`](https://rclone.org/), a command-line program to manage files on cloud storage +### Step 7: Configure State Sync (Optional but Recommended) -```bash -brew install rclone -``` +State sync allows your node to quickly catch up with the network. Create and run this state sync script: -2. Follow the instructions to configure `rclone` after running `rclone config` in the command line +```shell +cat > state_sync.sh << 'EOF' +#!/bin/bash -3. Uncomment the [following lines](https://github.com/allora-network/allora-chain/blob/ccad6d27e55b27a7ec3b2aebd7e55f1bc26798ed/scripts/l1_node.sh#L15) from your Allora Chain repository: +set -e -```go -# uncomment this block if you want to restore from a snapshot -# SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" -# "${SCRIPT_DIR}/restore_snapshot.sh" -``` +# Choose your preferred RPC endpoint +SNAP_RPC="https://allora-rpc.testnet.allora.network" +CONFIG_TOML_PATH="$HOME/.allorad/config/config.toml" -4. 
Run the node using Docker: +echo "Using RPC Endpoint: $SNAP_RPC" +echo "Fetching latest block height..." -```bash -docker compose pull -docker compose up -d -``` +LATEST_HEIGHT=$(curl -s $SNAP_RPC/block | jq -r .result.block.header.height) +if [ -z "$LATEST_HEIGHT" ] || [ "$LATEST_HEIGHT" == "null" ]; then + echo "Error: Could not fetch latest height" + exit 1 +fi -*** +BLOCK_HEIGHT_OFFSET=2000 +BLOCK_HEIGHT=$((LATEST_HEIGHT - BLOCK_HEIGHT_OFFSET)) -## Method 2: Running the Binary Directly (using `allorad`) - -This method describes how to install and run an Allora node by directly using the `allorad` binary. This approach offers more control over the node setup and is suitable for users who prefer not to use Docker. - -### Prerequisites - -Before you begin, ensure you have the following installed: -- **Git**: For cloning repositories. -- **curl**: For downloading files and making HTTP requests. -- **jq**: A command-line JSON processor (used in the state sync script). -- **sed**: A stream editor (used in the state sync script). -- Basic command-line knowledge. +echo "Fetching trust hash for block $BLOCK_HEIGHT..." +TRUST_HASH=$(curl -s "$SNAP_RPC/block?height=$BLOCK_HEIGHT" | jq -r .result.block_id.hash) +if [ -z "$TRUST_HASH" ] || [ "$TRUST_HASH" == "null" ]; then + echo "Error: Could not fetch trust hash" + exit 1 +fi -### Step 1: Download the `allorad` Binary - -1. Navigate to the [Allora Chain Releases page](https://github.com/allora-network/allora-chain/releases/tag/v0.12.1). -2. From the "Assets" section of the release (e.g., v0.12.1 or newer), download the `allorad` binary appropriate for your operating system (e.g., `allorad-linux-amd64`, `allorad-darwin-amd64`). -3. Rename the downloaded binary to `allorad`. -4. Move the binary to a directory included in your system's `PATH`, for example, `/usr/local/bin`: - ```shell - sudo mv ./allorad /usr/local/bin/allorad - ``` -5. Make the binary executable: - ```shell - sudo chmod +x /usr/local/bin/allorad - ``` -6. Verify the installation by checking the version: - ```shell - allorad version - ``` +echo "Updating config for state sync..." +RPC_SERVERS="$SNAP_RPC,$SNAP_RPC" -### Step 2: Initialize Node and Get Testnet Configuration +sed -i.bak -E \ + -e "s|^(enable[[:space:]]*=[[:space:]]*).*$|\\1true|" \ + -e "s|^(rpc_servers[[:space:]]*=[[:space:]]*).*$|\\1\"$RPC_SERVERS\"|" \ + -e "s|^(trust_height[[:space:]]*=[[:space:]]*).*$|\\1$BLOCK_HEIGHT|" \ + -e "s|^(trust_hash[[:space:]]*=[[:space:]]*).*$|\\1\"$TRUST_HASH\"|" \ + "$CONFIG_TOML_PATH" -1. **Initialize your node**: - Replace `` with a unique name for your node. This moniker will be visible on network explorers. - ```shell - allorad init --chain-id allora-testnet-1 - ``` - This command creates the default configuration files and data directory at `$HOME/.allorad`. - -2. **Download Testnet Configuration Files**: - Fetch the `genesis.json`, `config.toml`, and `app.toml` for `allora-testnet-1` from the official networks repository. 
- - ```shell - # Download genesis.json - curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/genesis.json > $HOME/.allorad/config/genesis.json - - # Download config.toml - curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/config.toml > $HOME/.allorad/config/config.toml - - # Download app.toml - curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/app.toml > $HOME/.allorad/config/app.toml - ``` - - - **Customize Configuration**: - You may need to customize `$HOME/.allorad/config/config.toml` and `$HOME/.allorad/config/app.toml`. - For example, in `config.toml`, you might need to set your `external_address` if you have a static IP, or adjust `laddr` to bind to a specific interface. - Review these files and adjust settings like moniker (if not set via `init`), pruning options, and other node-specific parameters. - - - -3. **Configure Seeds and Persistent Peers**: - To ensure your node can connect to the network and find peers, configure seed nodes and persistent peers in your `$HOME/.allorad/config/config.toml`. - - The `allora-network/networks` repository provides lists of seed nodes and peers: - - [seeds.txt](https://github.com/allora-network/networks/blob/main/allora-testnet-1/seeds.txt) - - [peers.txt](https://github.com/allora-network/networks/blob/main/allora-testnet-1/peers.txt) - - You can fetch these lists and use their contents to populate the `seeds` and `persistent_peers` fields in `$HOME/.allorad/config/config.toml`. - - **Example of fetching and setting seeds (manual step)**: - First, view the contents of `seeds.txt`: - ```shell - curl -s https://raw.githubusercontent.com/allora-network/networks/main/allora-testnet-1/seeds.txt - ``` - Then, copy the comma-separated list of seed nodes from the output. - - Edit `$HOME/.allorad/config/config.toml`: - ```shell - nano $HOME/.allorad/config/config.toml - ``` - Find the `seeds` line and replace its value with the copied list: - ```toml - # Comma-separated list of seed nodes to connect to - seeds = "" - ``` - - Similarly, you can use `peers.txt` for the `persistent_peers` field if needed, especially if you have specific peers you always want to connect to. - - ```toml - # Comma-separated list of persistent peers to connect to - persistent_peers = "" - ``` - - - Ensure the `seeds` and `persistent_peers` entries are correctly formatted as comma-separated strings within double quotes. - - -### Step 3: State Sync Your Node - -State sync allows your node to quickly catch up with the network by downloading a recent snapshot of the chain state instead of replaying all historical blocks. This can significantly reduce the time it takes to get your node operational. - -You can use RPC endpoints from various providers for state sync. Here are a few options: -- **Official Allora RPC**: `https://allora-rpc.testnet.allora.network/` -- **Polkachu**: `https://allora-testnet-rpc.polkachu.com:443` (as referenced in their [State Sync Guide](https://www.polkachu.com/testnets/allora/state_sync)) -- **Lavender Five**: `https://testnet-rpc.lavenderfive.com:443/allora` or `https://rpc.cosmos.directory:443/allora` (as referenced in their [State Sync Guide](https://www.lavenderfive.com/tools/testnet_allora/statesync)) - -Below is a script to help automate the state sync process. You'll need to choose an RPC endpoint from the list above (or another trusted source) and set it in the script. - -1. 
**Create the state sync script**: - Create a file named `state_sync.sh` with the following content. **Remember to replace `` with your selected RPC URL.** - - ```bash - #!/bin/bash - - # Script to automate state sync for an Allora node. - # Please choose an RPC endpoint from the documentation and set it below. - - set -e - - # ------------------------------------------------------------------------------ - # IMPORTANT: SET YOUR CHOSEN RPC ENDPOINT HERE - # Example: SNAP_RPC="https://allora-rpc.testnet.allora.network/" - # Example: SNAP_RPC="https://allora-testnet-rpc.polkachu.com:443" - # Example: SNAP_RPC="https://testnet-rpc.lavenderfive.com:443/allora" - SNAP_RPC="" - # ------------------------------------------------------------------------------ - - CONFIG_TOML_PATH="$HOME/.allorad/config/config.toml" - - if [ "$SNAP_RPC" == "" ] || [ -z "$SNAP_RPC" ]; then - echo "Error: Please edit the script and set the SNAP_RPC variable to your chosen RPC endpoint." - exit 1 - fi - - echo "Using RPC Endpoint: $SNAP_RPC" - echo "Fetching latest block height from $SNAP_RPC..." - # The /block endpoint might vary slightly. Polkachu/LavenderFive uses /block, official might be different or not directly list height like this. - # For robust fetching, direct API interaction or a more complex query might be needed if /block isn't standard across all RPCs. - # This script assumes a Cosmos SDK standard /block endpoint as used by Polkachu. - LATEST_HEIGHT=$(curl -s $SNAP_RPC/block | jq -r .result.block.header.height) - - if [ -z "$LATEST_HEIGHT" ] || [ "$LATEST_HEIGHT" == "null" ]; then - echo "Error: Could not fetch latest height. Is jq installed and $SNAP_RPC accessible and does it provide height at .result.block.header.height?" - exit 1 - fi - echo "Latest height: $LATEST_HEIGHT" - - # Determine a suitable trust height. Common practice is a few thousand blocks behind the latest. - # The exact offset might depend on snapshot intervals of the chosen RPC. - # Polkachu uses 2000, LavenderFive uses a calculation like ($LATEST_HEIGHT - ($LATEST_HEIGHT % 6000)) which means it rounds down to the nearest 6000 block - # We will use a common offset of 2000 for simplicity. Adjust if needed. - BLOCK_HEIGHT_OFFSET=2000 - BLOCK_HEIGHT=$((LATEST_HEIGHT - BLOCK_HEIGHT_OFFSET)) - echo "Using block height for trust hash: $BLOCK_HEIGHT (approx. ${BLOCK_HEIGHT_OFFSET} blocks behind latest)" - - echo "Fetching trust hash for block $BLOCK_HEIGHT..." - TRUST_HASH=$(curl -s "$SNAP_RPC/block?height=$BLOCK_HEIGHT" | jq -r .result.block_id.hash) - if [ -z "$TRUST_HASH" ] || [ "$TRUST_HASH" == "null" ]; then - echo "Error: Could not fetch trust hash for block $BLOCK_HEIGHT. Ensure the RPC supports block queries by height." - exit 1 - fi - echo "Trust hash: $TRUST_HASH" - - echo "Updating $CONFIG_TOML_PATH for state sync..." - - # Construct RPC server string. Some providers suggest using their RPC twice or a backup. - # For simplicity, we'll use the chosen SNAP_RPC twice here. Adjust if your provider recommends a different setup. - RPC_SERVERS="$SNAP_RPC,$SNAP_RPC" - - sed -i.bak -E \ - -e "s|^(enable[[:space:]]*=[[:space:]]*).*$|\\1true|" \ - -e "s|^(rpc_servers[[:space:]]*=[[:space:]]*).*$|\\1\"$RPC_SERVERS\"|" \ - -e "s|^(trust_height[[:space:]]*=[[:space:]]*).*$|\\1$BLOCK_HEIGHT|" \ - -e "s|^(trust_hash[[:space:]]*=[[:space:]]*).*$|\\1\"$TRUST_HASH\"|" \ - "$CONFIG_TOML_PATH" - - echo "Configuration updated in $CONFIG_TOML_PATH." 
- echo "A backup of the original config was created at $CONFIG_TOML_PATH.bak" - echo "Make sure to review the changes." - echo "Important: You might also need to configure 'persistent_peers' in $CONFIG_TOML_PATH if not set by other means for the state sync to work reliably." - echo "Refer to the seeds.txt and peers.txt from https://github.com/allora-network/networks/tree/main/allora-testnet-1 for peer information." - ``` - -2. **Make the script executable**: - ```shell - chmod +x state_sync.sh - ``` - -3. **Run the script**: - ```shell - ./state_sync.sh - ``` - This script will automatically fetch the latest state sync parameters and update your `$HOME/.allorad/config/config.toml`. +echo "State sync configuration updated successfully" +EOF - -**Note**: The script assumes your Allora configuration is at `$HOME/.allorad`. If you used a different home directory during `allorad init`, you'll need to adjust the `CONFIG_TOML_PATH` in the script. - +chmod +x state_sync.sh +./state_sync.sh +``` -### Step 4: Reset Node Data (Important for State Sync) +### Step 8: Reset Node Data -Before starting the node with state sync enabled, you must reset its existing data (if any), while keeping the address book. +Reset existing data while keeping the address book: ```shell allorad tendermint unsafe-reset-all --home $HOME/.allorad --keep-addr-book ``` -**Warning**: This command deletes blockchain data. Do not run it on a node that already has important synced data unless you intend to resync from scratch. +**Warning**: This command deletes blockchain data. Only run this on a fresh node or when you intend to resync from scratch. -### Step 5: Start Your Node +### Step 9: Create systemd Service -Now, you can start your Allora node: +Create a systemd service file for cosmosvisor: ```shell -allorad start +sudo tee /etc/systemd/system/allorad.service > /dev/null < -The default RPC port for `allorad` is `26657`. If you've configured a different port, adjust the command accordingly. +**Security Note**: `DAEMON_ALLOW_DOWNLOAD_BINARIES` is set to `false` for security. Validators should manually place upgrade binaries in the appropriate directories. -### Next Steps: Delegation - -Once your node is fully synced and running smoothly (the command above returns `false`), you can proceed with further actions such as setting up your node as a validator or delegating tokens. - -(Further instructions on delegation will be added here or linked to a separate guide.) - -*** - -## Method 3: Using Cosmovisor for Upgrade Management +### Step 10: Start the Service -`cosmovisor` is a process manager for Cosmos SDK application binaries like `allorad`. It monitors the chain for governance proposals approving software upgrades. If an upgrade is approved, `cosmovisor` will stop the current binary, switch to the new version (which you must manually provide), and then restart the node. This significantly simplifies the upgrade process for node operators by automating the switch and restart, but not the acquisition of the new binary. +Enable and start the systemd service: -For more detailed information, refer to the official [Cosmos SDK Cosmovisor documentation](https://docs.cosmos.network/v0.46/run-node/cosmovisor.html). +```shell +sudo systemctl daemon-reload +sudo systemctl enable allorad +sudo systemctl start allorad +``` -### Prerequisites +### Monitoring and Management -- You should have `allorad` installed and initialized as described in "Method 2". -- Go (Golang) installed to build `cosmovisor`. 
+Monitor your node logs:
-### Step 1: Install Cosmovisor
+```shell
+sudo journalctl -u allorad -f
+```
-Install the latest version of `cosmovisor`:
+Check service status:
```shell
-go install github.com/cosmos/cosmos-sdk/cosmovisor/cmd/cosmovisor@latest
+sudo systemctl status allorad
```
-Verify the installation:
+Check sync status:
```shell
-cosmovisor version
+curl -s http://localhost:26657/status | jq .result.sync_info.catching_up
```
-This command should output the `cosmovisor` version and also attempt to run `allorad version` if `DAEMON_NAME` is already set (we'll set it soon).
+Once this returns `false`, your node is fully synced.
-### Step 2: Configure Cosmovisor for Allorad
+### Managing Upgrades with cosmovisor
-`cosmovisor` is configured using environment variables and a specific directory structure.
+When a governance upgrade is approved, prepare for it by placing the new binary:
-1. **Set Environment Variables**:
- These variables tell `cosmovisor` where to find your `allorad` files and what the binary is named.
+```shell
+# For an upgrade named "v1.0.0", create the upgrade directory
+mkdir -p $DAEMON_HOME/cosmovisor/upgrades/v1.0.0/bin
- ```shell
- # Set these in your shell profile (e.g., ~/.bashrc, ~/.zshrc) or for your systemd service.
- export DAEMON_HOME=$HOME/.allorad
- export DAEMON_NAME=allorad
- export DAEMON_RESTART_AFTER_UPGRADE=true # Or false, if you want to restart manually after an upgrade
- # export UNSAFE_SKIP_BACKUP=false # Default is false, which is safer. Set to true to skip backup during upgrade.
- ```
+# Download and place the new binary (replace with actual URL)
+# wget NEW_BINARY_URL -O $DAEMON_HOME/cosmovisor/upgrades/v1.0.0/bin/allorad
+# chmod +x $DAEMON_HOME/cosmovisor/upgrades/v1.0.0/bin/allorad
+```
- - `DAEMON_HOME`: The home directory of your `allorad` application (e.g., `$HOME/.allorad`). This is where the `cosmovisor` directory will be created.
- - `DAEMON_NAME`: The name of your application binary (`allorad`).
- - `DAEMON_RESTART_AFTER_UPGRADE`: If `true`, `cosmovisor` automatically restarts `allorad` with the new binary after an upgrade. Default is `true`.
- - `UNSAFE_SKIP_BACKUP`: (Optional) Defaults to `false`. If `true`, `cosmovisor` skips backing up data before an upgrade. It's recommended to keep this `false`.
+
+**Info**: cosmovisor will automatically switch to the new binary at the upgrade height specified in the governance proposal. Monitor governance proposals and prepare upgrade binaries in advance.
+
- Source your profile or open a new terminal session to apply these variables if you set them in your shell profile.
+***
-2. **Create the Directory Structure**:
- `cosmovisor` expects a specific directory layout within `$DAEMON_HOME/cosmovisor`.
+## Method 2: Using `docker compose`
- ```shell
- # Create the genesis binary directory
- mkdir -p $DAEMON_HOME/cosmovisor/genesis/bin
+Running the Allora node with `docker compose` simplifies the setup and ensures consistency across different environments, but requires manual upgrade management.
- # Copy your current allorad binary to the genesis directory
- # Ensure allorad is in your PATH or provide the full path to the binary
- cp $(which allorad) $DAEMON_HOME/cosmovisor/genesis/bin/allorad
- ```
+### Step 1: Clone the Allora Chain Repository
- Your directory structure under `$DAEMON_HOME/cosmovisor` should look like this:
+If you haven't already, clone the latest release of the [allora-chain repository](https://github.com/allora-network/allora-chain):
- ```
- . ($DAEMON_HOME/cosmovisor)
- ├── genesis
- │   └── bin
- │       └── allorad # Your current allorad binary
- └── current -> genesis # Symlink created by cosmovisor when it first runs
- ```
+```shell
+git clone https://github.com/allora-network/allora-chain.git
+```
- Future upgrade binaries will be placed in `$DAEMON_HOME/cosmovisor/upgrades//bin/allorad`.
+### Step 2: Run the Node with Docker Compose
-### Step 3: Running Allorad with Cosmovisor
+Navigate to the root directory of the cloned repository and start the node using `docker compose`:
-Once `cosmovisor` is installed and configured, you can start your node using `cosmovisor run`.
-All arguments and flags passed after `cosmovisor run` are passed directly to the `allorad` binary.
+```shell
+cd allora-chain
+docker compose pull
+docker compose up
+```
-For example, to start `allorad`:
+> Run `docker compose up -d` to start the container in detached mode so that it runs in the background.
-```shell
-cosmovisor run start --home $HOME/.allorad
+
+**Info**: Don't forget to pull the images first to ensure that you're using the latest images.
+
+
+
+Make sure that any previously launched containers are killed before starting a new container that uses the same port.
+
+You can run the following command to kill any containers running on the same port:
+```bash
+docker container ls
+docker rm -f
```
+
-Or, if you have `--home $HOME/.allorad` as the default in your `allorad` config or don't need specific flags:
+#### Run Only a Node with Docker Compose
+In this case, you will use Allora's heads.
+##### Run
-```shell
-cosmovisor run start
```
+docker compose pull
+docker compose up node
+```
+To run only a head: `docker compose up head`
-`cosmovisor` will now manage your `allorad` process.
+
+**NOTE:** You can also comment out the head service in the Dockerfile.
+
-### Step 4: Managing Cosmovisor with Systemd
+### Monitoring Logs
-Running `cosmovisor` as a `systemd` service ensures it runs in the background, restarts on failure, and starts on system boot.
+To view the node's logs, use the following command:
-1. **Create a `systemd` Service File**:
+```shell
+docker compose logs -f
+```
- Create a file named `allorad.service` (or `cosmovisor.service`) in `/etc/systemd/system/` using a text editor like `nano` or `vim`:
### Executing RPC Calls
- ```shell
- sudo nano /etc/systemd/system/allorad.service
- ```
+You can interact with the running node through RPC calls. For example, to check the node's status:
- Paste the following content into the file. Adjust `User`, `Group`, `Environment` variables, and `ExecStart` paths as necessary for your setup.
+```shell
+curl -s http://localhost:26657/status | jq .
+```
- ```ini
- [Unit]
- Description=Allora Node (allorad via Cosmovisor)
- After=network-online.target
+This command uses `curl` to send a request to the node's RPC interface and `jq` to format the JSON response.
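Beyond `/status`, the same CometBFT RPC interface exposes other read-only endpoints that are handy for quick checks. For example (assuming the default RPC port `26657` is published by your compose setup), you can see how many peers the node is connected to and the latest block it has processed:

```shell
# Connected peer count (CometBFT /net_info endpoint)
curl -s http://localhost:26657/net_info | jq .result.n_peers

# Latest block height this node has seen
curl -s http://localhost:26657/status | jq -r .result.sync_info.latest_block_height
```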
- [Service]
- User=your_username # Replace with the username that runs allorad
- Group=your_groupname # Replace with the group for the user
- ExecStart=/path/to/go/bin/cosmovisor run start --home /home/your_username/.allorad
- Restart=on-failure
- RestartSec=10
- LimitNOFILE=65535
+Once your node has finished syncing and is caught up with the network, this command will return `false`:
- # Environment variables for Cosmovisor
- Environment="DAEMON_HOME=/home/your_username/.allorad"
- Environment="DAEMON_NAME=allorad"
- Environment="DAEMON_RESTART_AFTER_UPGRADE=true"
- # Environment="UNSAFE_SKIP_BACKUP=false"
+```shell
+curl -so- http\://localhost:26657/status | jq .result.sync_info.catching_up
+```
- [Install]
- WantedBy=multi-user.target
- ```
+
+**Info**: The time required to sync depends on the chain's size and height.
- **Important considerations for the service file**:
- - Replace `your_username` and `your_groupname` with the actual user and group that will run the process.
- - Ensure `/path/to/go/bin/cosmovisor` points to the correct location of your `cosmovisor` binary (usually `$HOME/go/bin/cosmovisor` if installed with default Go paths, or `/usr/local/go/bin/cosmovisor` if Go is installed system-wide, then the binary path would be `/home/your_username/go/bin/cosmovisor`). You can find the path using `which cosmovisor` when logged in as the user who installed it.
- - Adjust the `--home` path in `ExecStart` and `DAEMON_HOME` if your `allorad` home directory is different.
- - It's generally safer to use absolute paths for binaries and home directories in `systemd` service files.
+ - For newly launched chains, syncing will take **minutes**.
+ - Established chains like Ethereum can take around **a day** to sync using Nethermind or similar clients.
+ - Some chains may take **several days** to sync.
+ - Syncing an archival node will take significantly more time.
+
-2. **Reload `systemd` and Manage the Service**:
+
+**Warning**: Network participants will not be able to connect to your node until it has finished syncing and the command above returns `false`.
+
- - **Reload `systemd` daemon** to recognize the new service file:
- ```shell
- sudo systemctl daemon-reload
- ```
### Syncing from Snapshot
- - **Enable the service** to start automatically on boot:
- ```shell
- sudo systemctl enable allorad.service
- ```
+Users can also opt to sync their nodes from our [latest snapshot script](https://github.com/allora-network/allora-chain/blob/main/scripts/restore_snapshot.sh) following the instructions below:
- - **Start the service**:
- ```shell
- sudo systemctl start allorad.service
- ```
+1. Install [`rclone`](https://rclone.org/), a command-line program to manage files on cloud storage
- - **Check the status** of the service:
- ```shell
- sudo systemctl status allorad.service
- ```
+```bash
+brew install rclone
+```
- - **View logs** (follow mode):
- ```shell
- sudo journalctl -u allorad.service -f
- ```
+2. Follow the instructions to configure `rclone` after running `rclone config` in the command line
- - **To stop the service**:
- ```shell
- sudo systemctl stop allorad.service
- ```
+3. Uncomment the [following lines](https://github.com/allora-network/allora-chain/blob/ccad6d27e55b27a7ec3b2aebd7e55f1bc26798ed/scripts/l1_node.sh#L15) from your Allora Chain repository:
-### Step 5: Handling Upgrades with Cosmovisor
+```go
+# uncomment this block if you want to restore from a snapshot
+# SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+# "${SCRIPT_DIR}/restore_snapshot.sh"
+```
-When a software upgrade proposal is approved by governance and the upgrade height is reached:
-1. `cosmovisor` detects the upgrade plan.
-2. You must manually build or download the new `allorad` binary and place it in the correct upgrade directory: `$DAEMON_HOME/cosmovisor/upgrades//bin/allorad`. You will also need to create an `upgrade-info.json` file in the `$DAEMON_HOME/cosmovisor/upgrades//` directory if it's not provided with the binary. This file is typically simple and might just contain `{}` if no specific upgrade information is needed by the binary itself for the upgrade process, but consult the release notes for the specific version for requirements.
-3. `cosmovisor` stops the current `allorad` process.
-4. It performs a backup if `UNSAFE_SKIP_BACKUP=false` (default).
-5. It switches the `current` symlink to point to the new upgrade directory.
-6. If `DAEMON_RESTART_AFTER_UPGRADE=true` (default), `cosmovisor` restarts `allorad` using the new binary.
+4. Run the node using Docker:
-Ensure you monitor your node and governance proposals for upcoming upgrades and prepare the new binary in advance.
+```bash
+docker compose pull
+docker compose up -d
+```
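Once the container is running, a quick way to confirm that the snapshot restore took effect is to check that the reported height starts near the snapshot height rather than from zero, before falling back to the usual `catching_up` check. This is a sketch that assumes the RPC port `26657` is exposed on localhost by the compose setup:

```bash
# Height and sync state after restoring from the snapshot
curl -s http://localhost:26657/status | \
  jq '{height: .result.sync_info.latest_block_height, catching_up: .result.sync_info.catching_up}'
```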