#### General instructions for installing dependencies
+
1. Install [`cargo` and `rustc`](https://www.rust-lang.org/tools/install).
@@ -98,6 +71,7 @@ Below are quick summaries for installing the dependencies on your machine.
#### Dependencies on Arch
+
```sh
@@ -106,15 +80,21 @@ sudo pacman -S rust clang protobuf
Note that the package `clang` includes `libclang` as well as the C++ compiler.
+Recently the GCC version on Arch has broken a build script in the `rocksdb` dependency. A workaround is:
+
+```sh
+export CXXFLAGS="$CXXFLAGS -include cstdint"
+```
+
-Once the dependencies are in place, you can build and install Zebra:
+Once you have the dependencies in place, you can build and install Zebra with:
```sh
cargo install --locked zebrad
```
-You can start Zebra by
+You can start Zebra by running
```sh
zebrad start
@@ -124,6 +104,21 @@ Refer to the [Installing Zebra](https://zebra.zfnd.org/user/install.html) and
[Running Zebra](https://zebra.zfnd.org/user/run.html) sections in the book for
enabling optional features, detailed configuration and further details.
+## CI/CD Architecture
+
+Zebra uses a comprehensive CI/CD system built on GitHub Actions to ensure code
+quality, maintain stability, and automate routine tasks. Our CI/CD
+infrastructure:
+
+- Runs automated tests on every PR and commit.
+- Manages deployments to various environments.
+- Handles cross-platform compatibility checks.
+- Automates release processes.
+
+For a detailed understanding of our CI/CD system, including workflow diagrams,
+infrastructure details, and best practices, see our [CI/CD Architecture
+Documentation](.github/workflows/README.md).
+
## Documentation
The Zcash Foundation maintains the following resources documenting Zebra:
@@ -142,27 +137,27 @@ The Zcash Foundation maintains the following resources documenting Zebra:
## User support
-For bug reports please [open a bug report ticket in the Zebra repository](https://github.com/ZcashFoundation/zebra/issues/new?assignees=&labels=C-bug%2C+S-needs-triage&projects=&template=bug_report.yml&title=%5BUser+reported+bug%5D%3A+).
-
-Alternatively by chat, [Join the Zcash Foundation Discord
-Server](https://discord.com/invite/aRgNRVwsM8) and find the #zebra-support
-channel.
-
-We maintain a list of known issues in the
+If Zebra doesn't behave the way you expected, [open an
+issue](https://github.com/ZcashFoundation/zebra/issues/new/choose). We regularly
+triage new issues and we will respond. We maintain a list of known issues in the
[Troubleshooting](https://zebra.zfnd.org/user/troubleshooting.html) section of
the book.
+If you want to chat with us, [Join the Zcash Foundation Discord
+Server](https://discord.com/invite/aRgNRVwsM8) and find the "zebra-support"
+channel.
+
## Security
-Zebra has a [responsible disclosure policy](https://github.com/ZcashFoundation/zebra/blob/main/SECURITY.md), which we encourage security researchers to follow.
+Zebra has a [responsible disclosure
+policy](https://github.com/ZcashFoundation/zebra/blob/main/SECURITY.md), which
+we encourage security researchers to follow.
## License
-Zebra is distributed under the terms of both the MIT license
-and the Apache License (Version 2.0).
+Zebra is distributed under the terms of both the MIT license and the Apache
+License (Version 2.0). Some Zebra crates are distributed under the [MIT license
+only](LICENSE-MIT), because some of their code was originally from MIT-licensed
+projects. See each crate's directory for details.
See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT).
-
-Some Zebra crates are distributed under the [MIT license only](LICENSE-MIT),
-because some of their code was originally from MIT-licensed projects.
-See each crate's directory for details.
diff --git a/book/src/SUMMARY.md b/book/src/SUMMARY.md
index 1ec8dc35d67..6a128310ffb 100644
--- a/book/src/SUMMARY.md
+++ b/book/src/SUMMARY.md
@@ -16,8 +16,6 @@
- [Mining](user/mining.md)
- [Testnet Mining with s-nomp](user/mining-testnet-s-nomp.md)
- [Mining with Zebra in Docker](user/mining-docker.md)
- - [Shielded Scanning](user/shielded-scan.md)
- - [Shielded Scanning gRPC Server](user/shielded-scan-grpc-server.md)
- [Kibana blockchain explorer](user/elasticsearch.md)
- [Forking the Zcash Testnet with Zebra](user/fork-zebra-testnet.md)
- [Custom Testnets](user/custom-testnets.md)
@@ -27,8 +25,10 @@
- [Developer Documentation](dev.md)
- [Contribution Guide](CONTRIBUTING.md)
- [Design Overview](dev/overview.md)
+ - [Mempool Specification](dev/mempool-specification.md)
- [Diagrams](dev/diagrams.md)
- [Network Architecture](dev/diagrams/zebra-network.md)
+ - [Mempool Architecture](dev/diagrams/mempool-architecture.md)
- [Upgrading the State Database](dev/state-db-upgrades.md)
- [Zebra versioning and releases](dev/release-process.md)
- [Continuous Integration](dev/continuous-integration.md)
@@ -36,6 +36,8 @@
- [Generating Zebra Checkpoints](dev/zebra-checkpoints.md)
- [Doing Mass Renames](dev/mass-renames.md)
- [Updating the ECC dependencies](dev/ecc-updates.md)
+ - [Running a Private Testnet Test](dev/private-testnet.md)
+ - [Zebra crates](dev/crate-owners.md)
- [Zebra RFCs](dev/rfcs.md)
- [Pipelinable Block Lookup](dev/rfcs/0001-pipelinable-block-lookup.md)
- [Parallel Verification](dev/rfcs/0002-parallel-verification.md)
@@ -43,6 +45,7 @@
- [Asynchronous Script Verification](dev/rfcs/0004-asynchronous-script-verification.md)
- [State Updates](dev/rfcs/0005-state-updates.md)
- [Contextual Difficulty Validation](dev/rfcs/0006-contextual-difficulty.md)
+ - [Tree States](dev/rfcs/0007-treestate.md)
- [Zebra Client](dev/rfcs/0009-zebra-client.md)
- [V5 Transaction](dev/rfcs/0010-v5-transaction.md)
- [Async Rust in Zebra](dev/rfcs/0011-async-rust-in-zebra.md)
diff --git a/book/src/dev/continuous-delivery.md b/book/src/dev/continuous-delivery.md
index 30c3ed7e86b..c977de01fc3 100644
--- a/book/src/dev/continuous-delivery.md
+++ b/book/src/dev/continuous-delivery.md
@@ -1,6 +1,6 @@
# Zebra Continuous Delivery
-Zebra has an extension of it's continuous integration since it automatically deploys all
+Zebra has an extension of its continuous integration since it automatically deploys all
code changes to a testing and/or pre-production environment after each PR gets merged
into the `main` branch, and on each Zebra `release`.
diff --git a/book/src/dev/continuous-integration.md b/book/src/dev/continuous-integration.md
index d59ad00aeee..7e2a452d03c 100644
--- a/book/src/dev/continuous-integration.md
+++ b/book/src/dev/continuous-integration.md
@@ -16,7 +16,7 @@ Some of our builds and tests are repeated on the `main` branch, due to:
- our cached state sharing rules, or
- generating base coverage for PR coverage reports.
-Currently, each Zebra and lightwalletd full and update sync will updates cached state images,
+Currently, each Zebra and lightwalletd full and update sync will update cached state images,
which are shared by all tests. Tests prefer the latest image generated from the same commit.
But if a state from the same commit is not available, tests will use the latest image from
any branch and commit, as long as the state version is the same.
diff --git a/book/src/dev/diagrams/mempool-architecture.md b/book/src/dev/diagrams/mempool-architecture.md
new file mode 100644
index 00000000000..f0e11184048
--- /dev/null
+++ b/book/src/dev/diagrams/mempool-architecture.md
@@ -0,0 +1,88 @@
+# Mempool Architecture Diagram
+
+This diagram illustrates the architecture of the Zebra mempool, showing its main components and the flow of transactions through the system.
+
+```mermaid
+graph TD
+ %% External Components
+ Net[Network Service]
+ State[State Service]
+ TxVerifier[Transaction Verifier]
+ RPC[RPC Service]
+
+ %% Mempool Main Components
+ Mempool{{Mempool Service}}
+ Storage{{Storage}}
+ Downloads{{Transaction Downloads}}
+ Crawler{{Crawler}}
+ QueueChecker{{Queue Checker}}
+
+ %% Transaction Flow
+ Net -->|1- Poll peers| Mempool
+ RPC -->|1- Direct submit| Mempool
+ Crawler -->|1- Poll peers| Net
+ Crawler -->|2- Queue transactions| Mempool
+
+ Mempool -->|3- Queue for download| Downloads
+ Downloads -->|4a- Download request| Net
+ Net -->|4b- Transaction data| Downloads
+
+ Downloads -->|5a- Verify request| TxVerifier
+ TxVerifier -->|5b- Verification result| Downloads
+
+ Downloads -->|6a- Check UTXO| State
+ State -->|6b- UTXO data| Downloads
+
+ Downloads -->|7- Store verified tx| Storage
+
+ QueueChecker -->|8a- Check for verified| Mempool
+ Mempool -->|8b- Process verified| QueueChecker
+
+ Storage -->|9- Query responses| Mempool
+ Mempool -->|10- Gossip new tx| Net
+
+ %% State Management
+ State -->|Chain tip changes| Mempool
+ Mempool -->|Updates verification context| Downloads
+
+ %% Mempool responds to service requests
+ RPC -->|Query mempool| Mempool
+ Mempool -->|Mempool data| RPC
+
+ %% Styling
+ classDef external fill:#444,stroke:#888,stroke-width:1px,color:white;
+ classDef component fill:#333,stroke:#888,stroke-width:1px,color:white;
+
+ class Net,State,TxVerifier,RPC external;
+ class Mempool,Storage,Downloads,Crawler,QueueChecker component;
+```
+
+## Component Descriptions
+
+1. **Mempool Service**: The central coordinator that handles requests and manages the mempool state.
+
+2. **Storage**: In-memory storage for verified transactions and rejection lists.
+
+3. **Transaction Downloads**: Handles downloading and verifying transactions from peers.
+
+4. **Crawler**: Periodically polls peers for new transactions.
+
+5. **Queue Checker**: Regularly polls for newly verified transactions.
+
+## Transaction Flow
+
+1. Transactions arrive via network gossiping, direct RPC submission, or crawler polling.
+
+2. The mempool checks if transactions are already known or rejected. If not, it queues them for download.
+
+3. The download service retrieves transaction data from peers.
+
+4. Transactions are verified against consensus rules using the transaction verifier.
+
+5. Verified transactions are stored in memory and gossiped to peers.
+
+6. The queue checker regularly checks for newly verified transactions.
+
+7. Transactions remain in the mempool until they are mined or evicted due to size limits.
+
+8. When the chain tip changes, the mempool updates its verification context and potentially evicts invalid transactions.
diff --git a/book/src/dev/mempool-specification.md b/book/src/dev/mempool-specification.md
new file mode 100644
index 00000000000..1142ab0e827
--- /dev/null
+++ b/book/src/dev/mempool-specification.md
@@ -0,0 +1,206 @@
+# Mempool Specification
+
+The Zebra mempool handles unmined Zcash transactions: collecting them from peers, verifying them, storing them in memory, providing APIs for other components to access them, and gossiping transactions to peers. This document specifies the architecture, behavior, and interfaces of the mempool.
+
+## Overview
+
+The mempool is a fundamental component of the Zebra node, responsible for managing the lifecycle of unmined transactions. It provides in-memory storage for valid transactions that haven't yet been included in a block, and offers interfaces for other components to interact with these transactions.
+
+Key responsibilities of the mempool include:
+- Accepting new transactions from the network
+- Verifying transactions against a subset of consensus rules
+- Storing verified transactions in memory
+- Managing memory usage and transaction eviction
+- Providing transaction queries to other components
+- Gossiping transactions to peers
+
+## Architecture
+
+The mempool comprises several subcomponents:
+
+1. **Mempool Service** (`Mempool`): The main service that handles requests from other components, manages the active state of the mempool, and coordinates the other subcomponents.
+
+2. **Transaction Storage** (`Storage`): Manages the in-memory storage of verified transactions and rejected transactions, along with their rejection reasons.
+
+3. **Transaction Downloads** (`Downloads`): Handles downloading and verifying transactions, coordinating with the network and verification services.
+
+4. **Crawler** (`Crawler`): Periodically polls peers for new transactions to add to the mempool.
+
+5. **Queue Checker** (`QueueChecker`): Regularly checks the transaction verification queue to process newly verified transactions.
+
+6. **Transaction Gossip** (`gossip`): Broadcasts newly added transactions to peers.
+
+7. **Pending Outputs** (`PendingOutputs`): Tracks requests for transaction outputs that haven't yet been seen.
+
+For a visual representation of the architecture and transaction flow, see the [Mempool Architecture Diagram](diagrams/mempool-architecture.md).
+
+## Activation
+
+The mempool is activated when:
+- The node is near the blockchain tip (determined by `SyncStatus`)
+- OR when the current chain height reaches a configured debug height (`debug_enable_at_height`)
+
+When activated, the mempool creates transaction download and verify services, initializes storage, and starts background tasks for crawling and queue checking.
+
+## Configuration
+
+The mempool has the following configurable parameters:
+
+1. **Transaction Cost Limit** (`tx_cost_limit`): The maximum total serialized byte size of all transactions in the mempool, defaulting to 80,000,000 bytes as required by [ZIP-401](https://zips.z.cash/zip-0401).
+
+2. **Eviction Memory Time** (`eviction_memory_time`): The maximum time to remember evicted transaction IDs in the rejection list, defaulting to 60 minutes.
+
+3. **Debug Enable At Height** (`debug_enable_at_height`): An optional height at which to enable the mempool for debugging, regardless of sync status.
+
+## State Management
+
+The mempool maintains an `ActiveState` which can be either:
+- `Disabled`: Mempool is not active
+- `Enabled`: Mempool is active and contains:
+ - `storage`: The Storage instance for transactions
+ - `tx_downloads`: Transaction download and verification stream
+ - `last_seen_tip_hash`: Hash of the last chain tip the mempool has seen
+
+The mempool responds to chain tip changes:
+- On new blocks: Updates verification context, removes mined transactions
+- On reorgs: Clears tip-specific rejections, retries all transactions
+
+## Transaction Processing Flow
+
+1. **Transaction Arrival**:
+ - From peer gossip (inv messages)
+ - From direct submission (RPC)
+ - From periodic peer polling (crawler)
+
+2. **Transaction Download**:
+ - Checks if transaction exists in mempool or rejection lists
+ - Queues transaction for download if needed
+ - Downloads transaction data from peers
+
+3. **Transaction Verification**:
+ - Checks transaction against consensus rules
+ - Verifies transaction against the current chain state
+ - Manages dependencies between transactions
+
+4. **Transaction Storage**:
+ - Stores verified transactions in memory
+ - Tracks transaction dependencies
+ - Enforces size limits and eviction policies
+
+5. **Transaction Gossip**:
+ - Broadcasts newly verified transactions to peers
+
+## Transaction Rejection
+
+Transactions can be rejected for multiple reasons, categorized into:
+
+1. **Exact Tip Rejections** (`ExactTipRejectionError`):
+ - Failures in consensus validation
+ - Only applies to exactly matching transactions at the current tip
+
+2. **Same Effects Tip Rejections** (`SameEffectsTipRejectionError`):
+ - Spending conflicts with other mempool transactions
+ - Missing outputs from mempool transactions
+ - Applies to any transaction with the same effects at the current tip
+
+3. **Same Effects Chain Rejections** (`SameEffectsChainRejectionError`):
+ - Expired transactions
+ - Duplicate spends already in the blockchain
+ - Transactions already mined
+ - Transactions evicted due to memory limits
+ - Applies until a rollback or network upgrade
+
+Rejection reasons are stored alongside rejected transaction IDs to prevent repeated verification of invalid transactions.
+
+## Memory Management
+
+The mempool employs several strategies for memory management:
+
+1. **Transaction Cost Limit**: Enforces a maximum total size for all mempool transactions.
+
+2. **Random Eviction**: When the mempool exceeds the cost limit, transactions are randomly evicted following the [ZIP-401](https://zips.z.cash/zip-0401) specification.
+
+3. **Eviction Memory**: Remembers evicted transaction IDs for a configurable period to prevent re-verification.
+
+4. **Rejection List Size Limit**: Caps rejection lists at 40,000 entries per [ZIP-401](https://zips.z.cash/zip-0401).
+
+5. **Automatic Cleanup**: Removes expired transactions and rejections that are no longer relevant.
+
+## Service Interface
+
+The mempool exposes a service interface with the following request types:
+
+1. **Query Requests**:
+ - `TransactionIds`: Get all transaction IDs in the mempool
+ - `TransactionsById`: Get transactions by their unmined IDs
+ - `TransactionsByMinedId`: Get transactions by their mined hashes
+ - `FullTransactions`: Get all verified transactions with fee information
+ - `RejectedTransactionIds`: Query rejected transaction IDs
+ - `TransactionWithDepsByMinedId`: Get a transaction and its dependencies
+
+2. **Action Requests**:
+ - `Queue`: Queue transactions or transaction IDs for download and verification
+ - `CheckForVerifiedTransactions`: Check for newly verified transactions
+ - `AwaitOutput`: Wait for a specific transparent output to become available
+
+## Interaction with Other Components
+
+The mempool interacts with several other Zebra components:
+
+1. **Network Service**: For downloading transactions and gossiping to peers.
+
+2. **State Service**: For checking transaction validity against the current chain state.
+
+3. **Transaction Verifier**: For consensus validation of transactions.
+
+4. **Chain Tip Service**: For tracking the current blockchain tip.
+
+5. **RPC Services**: To provide transaction data for RPC methods.
+
+## Implementation Constraints
+
+1. **Correctness**:
+ - All transactions in the mempool must be verified
+ - Transactions must be re-verified when the chain tip changes
+ - Rejected transactions must be properly tracked to prevent DoS attacks
+
+2. **Performance**:
+ - Transaction processing should be asynchronous
+ - Memory usage should be bounded
+ - Critical paths should be optimized for throughput
+
+3. **Reliability**:
+ - The mempool should recover from crashes and chain reorganizations
+ - Background tasks should be resilient to temporary failures
+
+## ZIP-401 Compliance
+
+The mempool implements the requirements specified in [ZIP-401](https://zips.z.cash/zip-0401):
+
+1. Implements `mempooltxcostlimit` configuration (default: 80,000,000)
+2. Implements `mempoolevictionmemoryminutes` configuration (default: 60 minutes)
+3. Uses random eviction when the mempool exceeds the cost limit
+4. Caps eviction memory lists at 40,000 entries
+5. Uses transaction IDs (txid) for version 5 transactions in eviction lists
+
+## Error Handling
+
+The mempool employs a comprehensive error handling strategy:
+
+1. **Temporary Failures**: For network or resource issues, transactions remain in the queue for retry.
+
+2. **Permanent Rejections**: For consensus or semantic failures, transactions are rejected with specific error reasons.
+
+3. **Dependency Failures**: For missing inputs or dependencies, transactions may wait for dependencies to be resolved.
+
+4. **Recovery**: On startup or after a crash, the mempool is rebuilt from scratch.
+
+## Metrics and Diagnostics
+
+The mempool provides metrics for monitoring:
+
+1. **Transaction Count**: Number of transactions in the mempool
+2. **Total Cost**: Total size of all mempool transactions
+3. **Queued Count**: Number of transactions pending download or verification
+4. **Rejected Count**: Number of rejected transactions in memory
+5. **Background Task Status**: Health of crawler and queue checker tasks
diff --git a/book/src/dev/overview.md b/book/src/dev/overview.md
index afefbd33403..642f472fd42 100644
--- a/book/src/dev/overview.md
+++ b/book/src/dev/overview.md
@@ -41,8 +41,6 @@ The following are general desiderata for Zebra:
## Service Dependencies
-Note: dotted lines are for "getblocktemplate-rpcs" feature
-
{{#include diagrams/service-dependencies.svg}}
@@ -74,6 +72,8 @@ digraph services {
Render here: https://dreampuf.github.io/GraphvizOnline
-->
+The dotted lines are for the `getblocktemplate` RPC.
+
## Architecture
Unlike `zcashd`, which originated as a Bitcoin Core fork and inherited its
diff --git a/book/src/dev/private-testnet.md b/book/src/dev/private-testnet.md
new file mode 100644
index 00000000000..47e3d807739
--- /dev/null
+++ b/book/src/dev/private-testnet.md
@@ -0,0 +1,181 @@
+# Private Testnet Test
+
+The objective of a private Testnet test is to test Testnet activation of an upcoming
+network upgrade in an isolated fashion, before the actual Testnet activation.
+It is usually done using the current state of the existing Testnet. For NU6, it was done
+by ZF and ECC engineers over a call.
+
+## Steps
+
+### Make Backup
+
+Make a backup of your current Testnet state. Rename/copy the `testnet` folder in
+Zebra's state cache directory to the lowercase version of the configured network name,
+or the default `unknowntestnet` if no network name is explicitly configured.
+
+### Set Protocol Version
+
+Double check that Zebra has bumped its protocol version.
+
+### Set Up Lightwalletd Server
+
+It's a good idea to set up a lightwalletd server connected to your node, and
+have a (Testnet) wallet connected to your lightwalletd server.
+
+### Connect to Peers
+
+Make sure everyone can connect to each other. You can **use Tailscale** to do
+that. Everyone needs to send invites to everyone else. Note that being able to
+access someone's node does not imply that they can access yours; access needs
+to be enabled in both directions.
+
+### Choose an Activation Height
+
+Choose an activation height with the other participants. It should be in
+the near future, but with enough time for people to set things up; something
+like 30 minutes ahead works well.
+
+### Ensure the Activation Height is Set in Code
+
+While Zebra allows creating a private Testnet in the config file, the height is
+also set in some librustzcash crates. For this reason, someone will need to
+**create a branch of librustzcash** with the chosen height set and you will need
+to **change Zebra to use that**. However, double check if that's still
+necessary.
+
+### Configure Zebra to Use a Custom Testnet
+
+See sample config file below. The critical part is setting the activation
+height. It is good to enable verbose logging to help debug things. Some of the
+participants must also enable mining. It's not a huge deal to keep the DNS
+seeders; the blockchain will fork when the activation happens and only the
+participants will stay connected. On the other hand, if you want to ensure you
+won't connect to anyone else, set `cache_dir = false` in the `[network]` section
+and delete the peers file (`~/.cache/zebra/network/unknowntestnet.peers`).
+
+### Run Nodes
+
+Everyone runs their nodes, and checks if they connect to other nodes. You can use
+e.g. `curl --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method":
+"getpeerinfo", "params": [] }' -H 'Content-Type: application/json'
+http://127.0.0.1:8232` to check that. See "Getting Peers" section below.
+
+### Wait Until Activation Happens
+
+Monitor the logs for expected behaviour while you wait for the activation
+height.
+
+### Do Tests
+
+Do tests, including sending transactions if possible (which will require the
+lightwalletd server). Check if whatever activated in the upgrade works.
+
+## Zebra
+
+Relevant information about Zebra for the testing process.
+
+### Getting peers
+
+It seems Zebra is not very reliable at returning its currently connected peers;
+you can use the `getpeerinfo` RPC as above or check the peers file
+(`~/.cache/zebra/network/unknowntestnet.peers`) if `cache_dir = true` in the
+`[network]` section. You might want to sort this out before the next private
+testnet test.
+
+### Unredact IPs
+
+Zebra redacts IPs when logging for privacy reasons. However, for a test like
+this it can be annoying. You can disable that by editing `peer_addr.rs`
+with something like
+
+
+```diff
+--- a/zebra-network/src/meta_addr/peer_addr.rs
++++ b/zebra-network/src/meta_addr/peer_addr.rs
+@@ -30,7 +30,7 @@ impl fmt::Display for PeerSocketAddr {
+ let ip_version = if self.is_ipv4() { "v4" } else { "v6" };
+
+ // The port is usually not sensitive, and it's useful for debugging.
+- f.pad(&format!("{}redacted:{}", ip_version, self.port()))
++ f.pad(&format!("{}:{}", self.ip(), self.port()))
+ }
+ }
+```
+
+### Sample config file
+
+Note: Zebra's db path will end in "unknowntestnet" instead of "testnet" with
+this configuration.
+
+```toml
+[consensus]
+checkpoint_sync = true
+
+[mempool]
+eviction_memory_time = "1h"
+tx_cost_limit = 80000000
+
+[metrics]
+
+[mining]
+miner_address = "t27eWDgjFYJGVXmzrXeVjnb5J3uXDM9xH9v"
+# if you want to enable mining, which also requires selecting the `internal-miner` compilation feature
+internal_miner = true
+
+[network]
+# This will save peers to a file. Take care that it also reads peers from it;
+# if you want to be truly isolated and only connect to the other participants,
+# either disable this or delete the peers file before starting.
+cache_dir = true
+crawl_new_peer_interval = "1m 1s"
+
+initial_mainnet_peers = []
+
+initial_testnet_peers = [
+ # List the other participants' Tailscale IPs here.
+ # You can also keep the default DNS seeders if you wish.
+ "100.10.0.1:18233",
+]
+
+listen_addr = "0.0.0.0:18233"
+max_connections_per_ip = 1
+network = "Testnet"
+peerset_initial_target_size = 25
+
+[network.testnet_parameters]
+
+[network.testnet_parameters.activation_heights]
+BeforeOverwinter = 1
+Overwinter = 207_500
+Sapling = 280_000
+Blossom = 584_000
+Heartwood = 903_800
+Canopy = 1_028_500
+NU5 = 1_842_420
+NU6 = 2_969_920
+
+[rpc]
+debug_force_finished_sync = false
+parallel_cpu_threads = 0
+listen_addr = "127.0.0.1:8232"
+indexer_listen_addr = "127.0.0.1:8231"
+
+[state]
+delete_old_database = true
+ephemeral = false
+
+[sync]
+checkpoint_verify_concurrency_limit = 1000
+download_concurrency_limit = 50
+full_verify_concurrency_limit = 20
+parallel_cpu_threads = 0
+
+[tracing]
+buffer_limit = 128000
+force_use_color = false
+use_color = true
+use_journald = false
+# This enables debug network logging. It can be useful but it's very verbose!
+filter = 'info,zebra_network=debug'
+```
+
diff --git a/book/src/dev/release-process.md b/book/src/dev/release-process.md
index 606e9a03682..abcf90d11f3 100644
--- a/book/src/dev/release-process.md
+++ b/book/src/dev/release-process.md
@@ -65,7 +65,7 @@ We let you preview what's coming by providing Release Candidate \(`rc`\) pre-rel
### Distribution tags
-Zebras's tagging relates directly to versions published on Docker. We will reference these [Docker Hub distribution tags](https://hub.docker.com/r/zfnd/zebra/tags) throughout:
+Zebra's tagging relates directly to versions published on Docker. We will reference these [Docker Hub distribution tags](https://hub.docker.com/r/zfnd/zebra/tags) throughout:
| Tag | Description |
|:--- |:--- |
diff --git a/book/src/dev/rfcs/0000-template.md b/book/src/dev/rfcs/0000-template.md
index 72bcd9affcf..cdf4022bc24 100644
--- a/book/src/dev/rfcs/0000-template.md
+++ b/book/src/dev/rfcs/0000-template.md
@@ -122,7 +122,7 @@ Think about what the natural extension and evolution of your proposal would
be and how it would affect Zebra and Zcash as a whole. Try to use this
section as a tool to more fully consider all possible
interactions with the project and cryptocurrency ecosystem in your proposal.
-Also consider how the this all fits into the roadmap for the project
+Also consider how this all fits into the roadmap for the project
and of the relevant sub-team.
This is also a good place to "dump ideas", if they are out of scope for the
diff --git a/book/src/dev/rfcs/0003-inventory-tracking.md b/book/src/dev/rfcs/0003-inventory-tracking.md
index ff7bac4d18a..a562843070f 100644
--- a/book/src/dev/rfcs/0003-inventory-tracking.md
+++ b/book/src/dev/rfcs/0003-inventory-tracking.md
@@ -192,7 +192,7 @@ specific inventory request is ready, because until we get the request, we
can't determine which peers might be required to process it.
One way to try to ensure that the peer set would be ready to process a
-specific inventory request would be to pre-emptively "reserve" a peer as soon
+specific inventory request would be to preemptively "reserve" a peer as soon
as it advertises an inventory item. But this doesn't actually work to ensure
readiness, because a peer could advertise two inventory items, and only be
able to service one request at a time. It also potentially locks the peer
diff --git a/book/src/dev/rfcs/0006-contextual-difficulty.md b/book/src/dev/rfcs/0006-contextual-difficulty.md
index dc27692adb2..b23fbab8c24 100644
--- a/book/src/dev/rfcs/0006-contextual-difficulty.md
+++ b/book/src/dev/rfcs/0006-contextual-difficulty.md
@@ -421,7 +421,7 @@ fn mean_target_difficulty(&self) -> ExpandedDifficulty { ... }
Since the `PoWLimit`s are `2^251 − 1` for Testnet, and `2^243 − 1` for Mainnet,
the sum of these difficulty thresholds will be less than or equal to
`(2^251 − 1)*17 = 2^255 + 2^251 - 17`. Therefore, this calculation can not
-overflow a `u256` value. So the function is infalliable.
+overflow a `u256` value. So the function is infallible.
In Zebra, contextual validation starts after Canopy activation, so we can assume
that the relevant chain contains at least 17 blocks. Therefore, the `PoWLimit`
@@ -499,7 +499,7 @@ that the relevant chain contains at least 28 blocks. Therefore:
* there is always an odd number of blocks in `MedianTime()`, so the median is
always the exact middle of the sequence.
-Therefore, the function is infalliable.
+Therefore, the function is infallible.
### Test network minimum difficulty calculation
[test-net-min-difficulty-calculation]: #test-net-min-difficulty-calculation
@@ -580,7 +580,7 @@ In Zcash, the Testnet minimum difficulty rule starts at block 299188, and in
Zebra, contextual validation starts after Canopy activation. So we can assume
that there is always a previous block.
-Therefore, this function is infalliable.
+Therefore, this function is infallible.
### Block difficulty threshold calculation
[block-difficulty-threshold-calculation]: #block-difficulty-threshold-calculation
@@ -647,7 +647,7 @@ Note that the multiplication by `ActualTimespanBounded` must happen after the
division by `AveragingWindowTimespan`. Performing the multiplication first
could overflow.
-If implemented in this way, the function is infalliable.
+If implemented in this way, the function is infallible.
`zcashd` truncates the `MeanTarget` after the mean calculation, and
after dividing by `AveragingWindowTimespan`. But as long as there is no overflow,
@@ -753,10 +753,10 @@ would be a security issue.
# Future possibilities
[future-possibilities]: #future-possibilities
-## Re-using the relevant chain API in other contextual checks
+## Reusing the relevant chain API in other contextual checks
[relevant-chain-api-reuse]: #relevant-chain-api-reuse
-The relevant chain iterator can be re-used to implement other contextual
+The relevant chain iterator can be reused to implement other contextual
validation checks.
For example, responding to peer requests for block locators, which means
diff --git a/book/src/dev/rfcs/0007-treestate.md b/book/src/dev/rfcs/0007-treestate.md
new file mode 100644
index 00000000000..1de547c6ead
--- /dev/null
+++ b/book/src/dev/rfcs/0007-treestate.md
@@ -0,0 +1,346 @@
+# Treestate
+
+- Feature Name: treestate
+- Start Date: 2020-08-31
+- Design PR: [ZcashFoundation/zebra#983](https://github.com/ZcashFoundation/zebra/issues/983)
+- Zebra Issue: [ZcashFoundation/zebra#958](https://github.com/ZcashFoundation/zebra/issues/958)
+
+# Summary
+[summary]: #summary
+
+To validate blocks involving shielded transactions, we have to check the
+computed treestate from the included transactions against the block header
+metadata (for Sapling and Orchard) or previously finalized state (for Sprout).
+This document describes how we compute and manage that data, assuming a finalized
+state service as described in the [State Updates RFC](https://zebra.zfnd.org/dev/rfcs/0005-state-updates.md).
+
+
+# Motivation
+[motivation]: #motivation
+
+Block validation requires checking that the treestate of the block (consisting
+of the note commitment tree and nullifier set) is consistent with the metadata
+we have in the block header (the root of the note commitment tree) or previously
+finalized state (for Sprout).
+
+
+# Definitions
+[definitions]: #definitions
+
+## Common Definitions
+
+Many terms used here are defined in the [Zcash Protocol Specification](https://zips.z.cash/protocol/protocol.pdf)
+
+**notes**: A note represents a value bound to a shielded payment address (public
+key) which is spendable by the recipient who holds the spending key
+corresponding to that shielded payment address.
+
+**nullifiers**: A value that prevents double-spending of a shielded payment.
+A nullifier is revealed by a `Spend` description when its associated `Note` is spent.
+
+**nullifier set**: The set of unique `Nullifier`s revealed by any `Transaction`s
+within a `Block`. `Nullifier`s are enforced to be unique within a valid block chain
+by committing to previous treestates in `Spend` descriptions, in order to prevent
+double-spends.
+
+**note commitments**: A Pedersen commitment to the values comprising a `Note`. One
+should not be able to construct a `Note` from its commitment.
+
+**note commitment tree**: An incremental Merkle tree of fixed depth used to
+store `NoteCommitment`s that `JoinSplit` transfers or `Spend` transfers produce. It
+is used to express the existence of value and the capability to spend it. It is
+not the job of this tree to protect against double-spending, as it is
+append-only: that's what the `Nullifier` set is for.
+
+**note position**: The index of a `NoteCommitment` at the leafmost layer,
+counting leftmost to rightmost. The [position in the tree is determined by the
+order of transactions in the block](https://zips.z.cash/protocol/protocol.pdf#transactions).
+
+**root**: The layer 0 node of a Merkle tree.
+
+**anchor**: A Merkle tree root of a `NoteCommitment` tree. It uniquely
+identifies a `NoteCommitment` tree state given the assumed security properties
+of the Merkle tree’s hash function. Since the `Nullifier` set is always updated
+together with the `NoteCommitment` tree, this also identifies a particular state
+of the associated `Nullifier` set.
+
+## Sprout Definitions
+
+**joinsplit**: A shielded transfer that can spend Sprout `Note`s and transparent
+value, and create new Sprout `Note`s and transparent value, in one Groth16 proof
+statement.
+
+## Sapling Definitions
+
+**spend descriptions**: A shielded Sapling transfer that spends a `Note`. Includes
+an anchor of some previous `Block`'s `NoteCommitment` tree.
+
+**output descriptions**: A shielded Sapling transfer that creates a
+`Note`. Includes the u-coordinate of the `NoteCommitment` itself.
+
+## Orchard Definitions
+
+**action descriptions**: A shielded Orchard transfer that spends and/or creates a
+`Note`. Does not include an anchor, because that is encoded once in the
+`anchorOrchard` field of a V5 `Transaction`.
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+## Common Processing for All Protocols
+
+As `Block`s are validated, the `NoteCommitment`s revealed by all the transactions
+within that block are used to construct `NoteCommitmentTree`s, with the
+`NoteCommitment`s aligned in their note positions in the bottom layer of the
+Sprout or Sapling tree from the left-most leaf to the right-most in
+`Transaction` order in the `Block`. So the Sprout `NoteCommitment`s revealed by
+the first `JoinSplit` in a block would take note position 0 in the Sprout
+note commitment tree, for example. Once all the transactions in a block are
+parsed and the notes for each tree collected in their appropriate positions, the
+root of each tree is computed. While the trees are being built, the respective
+block nullifier sets are updated in memory as note nullifiers are revealed. If
+the rest of the block is validated according to consensus rules, that root is
+committed to its own data structure via our state service (Sprout anchors,
+Sapling anchors). Sapling block validation includes comparing the specified
+`FinalSaplingRoot` in its block header to the root of the Sapling `NoteCommitment`
+tree that we have just computed to make sure they match.
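As a rough illustration of the positioning and root computation described above, here is a toy incremental tree. The `ToyNoteCommitmentTree` type and `toy_hash` function are made-up stand-ins for illustration only; the real trees are fixed-depth Merkle trees over the protocol's cryptographic hashes, managed via the `incrementalmerkletree` crate.

```rust
/// Toy fixed-depth note commitment tree over `u64` "commitments".
struct ToyNoteCommitmentTree {
    leaves: Vec<u64>,
    depth: usize,
}

/// Stand-in combining hash; NOT cryptographic.
fn toy_hash(left: u64, right: u64) -> u64 {
    left.wrapping_mul(31).wrapping_add(right).rotate_left(7)
}

impl ToyNoteCommitmentTree {
    fn new(depth: usize) -> Self {
        Self { leaves: Vec::new(), depth }
    }

    /// Append a commitment at the next free note position, leftmost to
    /// rightmost, in `Transaction` order. Returns the note position.
    fn append(&mut self, commitment: u64) -> usize {
        assert!(self.leaves.len() < (1usize << self.depth), "tree is full");
        self.leaves.push(commitment);
        self.leaves.len() - 1
    }

    /// Hash up from the (zero-padded) leaf layer to compute the root.
    fn root(&self) -> u64 {
        let mut layer = self.leaves.clone();
        layer.resize(1usize << self.depth, 0);
        for _ in 0..self.depth {
            layer = layer.chunks(2).map(|p| toy_hash(p[0], p[1])).collect();
        }
        layer[0]
    }
}
```

Note that the root depends on leaf order, which is why note positions must follow transaction order exactly.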
+
+## Sprout Processing
+
+For Sprout, we must compute/update interstitial `NoteCommitmentTree`s between
+`JoinSplit`s that may reference an earlier one's root as its anchor. If we do
+this at the transaction layer, we can iterate through all the `JoinSplit`s and
+compute the Sprout `NoteCommitmentTree` and nullifier set similarly to how we do
+the Sapling ones as described below, but at each state change (i.e.,
+per-`JoinSplit`) we note the root and cache it for later lookup. As each
+`JoinSplit` is validated without context, we check for its specified anchor
+amongst the interstitial roots we've already calculated (according to the spec,
+these interstitial roots don't have to be finalized or the result of an
+independently validated `JoinSplit`; they just have to refer to some prior
+`JoinSplit` root in the same transaction). So we only have to wait for the
+relevant previous roots to be computed, which in the worst case means waiting
+for all of them for the last `JoinSplit`. If a `JoinSplit`'s specified anchor
+matches one of these roots, that `JoinSplit` passes the check.
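A hedged sketch of this interstitial-root bookkeeping follows. The `JoinSplit` struct here is a toy stand-in (not Zebra's actual type), and `tree_root` is a caller-supplied stand-in for the real Sprout tree root computation:

```rust
use std::collections::HashSet;

/// Toy stand-in for a Sprout JoinSplit: an anchor plus revealed commitments.
struct JoinSplit {
    anchor: u64,
    commitments: Vec<u64>,
}

/// Check each JoinSplit's anchor against the finalized anchors, or the
/// interstitial roots produced by earlier JoinSplits in the same transaction.
fn check_joinsplit_anchors(
    finalized_anchors: &HashSet<u64>,
    mut tree_root: impl FnMut(&[u64]) -> u64,
    joinsplits: &[JoinSplit],
) -> bool {
    let mut leaves: Vec<u64> = Vec::new();
    let mut interstitial_roots: HashSet<u64> = HashSet::new();
    for js in joinsplits {
        // The anchor must refer to a prior treestate: either finalized,
        // or the output of an earlier JoinSplit in this transaction.
        if !finalized_anchors.contains(&js.anchor)
            && !interstitial_roots.contains(&js.anchor)
        {
            return false;
        }
        // Apply this JoinSplit's commitments, then cache the new root.
        leaves.extend(&js.commitments);
        interstitial_roots.insert(tree_root(&leaves));
    }
    true
}
```

The real check also covers roots from earlier transactions; this sketch only shows the per-transaction interstitial case.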
+
+## Sapling Processing
+
+As the transactions within a block are parsed, Sapling shielded transactions
+including `Spend` descriptions and `Output` descriptions describe the spending and
+creation of Zcash Sapling notes. `Spend` descriptions specify an anchor, which
+references a previous `NoteCommitment` tree root. This is a previous block's anchor
+as defined in their block header. This is convenient because we can query our state
+service for previously finalized Sapling block anchors, and if they are found, then
+that [consensus check](https://zips.z.cash/protocol/canopy.pdf#spendsandoutputs)
+has been satisfied and the `Spend` description can be validated independently.
+
+For Sapling, at the block layer, we can iterate over all the transactions in
+order and, if they have `Spend`s and/or `Output`s, we update our `Nullifier` set for
+the block as nullifiers are revealed in `Spend` descriptions, and update our note
+commitment tree as `NoteCommitment`s are revealed in `Output` descriptions, adding
+them as leaves in positions according to their order as they appear transaction
+to transaction, output to output, in the block. This can be done independent of
+the transaction validations. When the Sapling transactions are all validated,
+the `NoteCommitmentTree` root should be computed: this is the anchor for this
+block.
+
+### Anchor Validation Across Network Upgrades
+
+For Sapling and Blossom blocks, we need to check that this root matches
+the `RootHash` bytes in this block's header, as the `FinalSaplingRoot`. Once all
+other consensus and validation checks are done, this root will be saved to the
+`sapling_anchors` set in our finalized state, making it available for lookup by
+other Sapling descriptions in future transactions.
+
+In Heartwood and Canopy, the rules for final Sapling roots are modified to support
+empty blocks by allowing an empty subtree hash instead of requiring the root to
+match the previous block's final Sapling root when there are no Sapling transactions.
+
+In NU5, the rules are further extended to include Orchard note commitment trees,
+with similar logic applied to the `anchorOrchard` field in V5 transactions.
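The header check and its empty-block variant can be sketched roughly as follows. This is a simplification with hypothetical parameter names, not Zebra's actual validation code; the real rules also change which header field carries which commitment across upgrades:

```rust
/// Sketch of the final-root consensus check described above.
/// `computed_root` is the root of the Sapling note commitment tree we built
/// for this block; `header_root` is the value committed in the block header.
fn final_sapling_root_ok(
    computed_root: [u8; 32],
    header_root: [u8; 32],
    empty_root: [u8; 32],
    block_has_sapling_txs: bool,
    allows_empty_root: bool, // true from Heartwood onward, per the text above
) -> bool {
    if computed_root == header_root {
        return true;
    }
    // Under the later rule variants, blocks without Sapling transactions may
    // commit to the empty subtree root instead.
    allows_empty_root && !block_has_sapling_txs && header_root == empty_root
}
```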
+
+## Orchard Processing
+
+For Orchard, similar to Sapling, action descriptions can spend and create notes.
+The anchor is specified at the transaction level in the `anchorOrchard` field of
+a V5 transaction. The process follows similar steps to Sapling for validation and
+inclusion in blocks.
+
+## Block Finalization
+
+To finalize the block, the Sprout, Sapling, and Orchard treestates are the ones
+resulting from the last transaction in the block; they determine the respective
+anchors that will be associated with this block as we commit it to our finalized
+state. The nullifiers revealed in the block will be merged with the existing ones
+in our finalized state (i.e., the nullifier set should strictly grow over time).
+
+## State Management
+
+### Orchard
+
+- There is a single copy of the latest Orchard Note Commitment Tree for the
+finalized tip.
+- When finalizing a block, the finalized tip is updated with a serialization of
+the latest Orchard Note Commitment Tree. (The previous tree should be deleted as
+part of the same database transaction.)
+- Each non-finalized chain gets its own copy of the Orchard note commitment tree,
+cloned from the note commitment tree of the finalized tip or fork root.
+- When a block is added to a non-finalized chain tip, the Orchard note commitment
+tree is updated with the note commitments from that block.
+- When a block is rolled back from a non-finalized chain tip, the Orchard tree
+state is restored to its previous state before the block was added. This involves
+either keeping a reference to the previous state or recalculating from the fork
+point.
+
+### Sapling
+
+- There is a single copy of the latest Sapling Note Commitment Tree for the
+finalized tip.
+- When finalizing a block, the finalized tip is updated with a serialization of
+the Sapling Note Commitment Tree. (The previous tree should be deleted as part
+of the same database transaction.)
+- Each non-finalized chain gets its own copy of the Sapling note commitment tree,
+cloned from the note commitment tree of the finalized tip or fork root.
+- When a block is added to a non-finalized chain tip, the Sapling note commitment
+tree is updated with the note commitments from that block.
+- When a block is rolled back from a non-finalized chain tip, the Sapling tree
+state is restored to its previous state, similar to the Orchard process. This
+involves either maintaining a history of tree states or recalculating from the
+fork point.
+
+### Sprout
+
+- Every finalized block stores a separate copy of the Sprout note commitment
+tree (😿), as of that block.
+- When finalizing a block, the Sprout note commitment tree for that block is stored
+in the state. (The trees for previous blocks also remain in the state.)
+- Every block in each non-finalized chain gets its own copy of the Sprout note
+commitment tree. The initial tree is cloned from the note commitment tree of the
+finalized tip or fork root.
+- When a block is added to a non-finalized chain tip, the Sprout note commitment
+tree is cloned, then updated with the note commitments from that block.
+- When a block is rolled back from a non-finalized chain tip, the trees for each
+block are deleted, along with that block.
+
+We can't just compute a fresh tree from only the note commitments within a block:
+we are adding them to the tree referenced by the anchor. But we cannot update that
+tree with just the anchor either; we need the 'frontier' nodes and leaves of the
+incremental Merkle tree.
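The per-chain cloning and rollback behavior above can be sketched as follows. The `ToyTree` and `ToyChain` types are illustrative stand-ins, not Zebra's actual `Chain` structures; keeping one tree per block makes rollback a simple pop, at the cost of memory:

```rust
#[derive(Clone)]
struct ToyTree {
    leaves: Vec<u64>,
}

struct ToyChain {
    /// The tree state after each block in this chain; the first entry is the
    /// tree at the finalized tip or fork root. Rollback is just a pop.
    trees_by_block: Vec<ToyTree>,
}

impl ToyChain {
    /// A new non-finalized chain clones the tree of the finalized tip
    /// or fork root.
    fn fork_from(root_tree: &ToyTree) -> Self {
        Self { trees_by_block: vec![root_tree.clone()] }
    }

    /// Adding a block clones the tip tree, then appends the block's
    /// note commitments.
    fn push_block(&mut self, commitments: &[u64]) {
        let mut tree = self.trees_by_block.last().unwrap().clone();
        tree.leaves.extend_from_slice(commitments);
        self.trees_by_block.push(tree);
    }

    /// Rolling back a block restores the previous tree state.
    fn pop_block(&mut self) {
        assert!(self.trees_by_block.len() > 1, "cannot roll back the root");
        self.trees_by_block.pop();
    }

    fn tip_tree(&self) -> &ToyTree {
        self.trees_by_block.last().unwrap()
    }
}
```

The alternative mentioned above, recalculating from the fork point, trades this memory for recomputation time.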
+
+
+# Reference-level explanation
+[reference-level-explanation]: #reference-level-explanation
+
+The implementation involves several key components:
+
+1. **Incremental Merkle Trees**: We use the `incrementalmerkletree` crate to
+implement the note commitment trees for each shielded pool.
+
+2. **Nullifier Storage**: We maintain nullifier sets in RocksDB to efficiently
+check for duplicates.
+
+3. **Tree State Management**:
+ - For finalized blocks, we store the tree states in RocksDB.
+ - For non-finalized chains, we keep tree states in memory.
+
+4. **Anchor Verification**:
+ - For Sprout: we check anchors against our stored Sprout tree roots.
+ - For Sapling: we compare the computed root against the block header's
+`FinalSaplingRoot`.
+ - For Orchard: we validate the `anchorOrchard` field in V5 transactions.
+
+5. **Re-insertion Prevention**: Our implementation should prevent re-inserts
+of keys that have been deleted from the database, as this could lead to
+inconsistencies. The state service tracks deletion events and validates insertion
+operations accordingly.
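A minimal sketch of the re-insertion prevention in item 5, using an in-memory map in place of RocksDB. The `GuardedStore` type and its methods are hypothetical illustrations, not the state service's real API:

```rust
use std::collections::{HashMap, HashSet};

/// A store that tracks deletions and rejects re-inserts of deleted keys.
#[derive(Default)]
struct GuardedStore {
    live: HashMap<String, Vec<u8>>,
    deleted: HashSet<String>,
}

impl GuardedStore {
    /// Insert a key, refusing keys that were previously deleted.
    fn insert(&mut self, key: &str, value: Vec<u8>) -> Result<(), String> {
        if self.deleted.contains(key) {
            return Err(format!("re-insert of deleted key {key:?} rejected"));
        }
        self.live.insert(key.to_string(), value);
        Ok(())
    }

    /// Delete a key, recording the deletion so the key cannot come back.
    fn delete(&mut self, key: &str) {
        self.live.remove(key);
        self.deleted.insert(key.to_string());
    }
}
```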
+
+
+# Drawbacks
+[drawbacks]: #drawbacks
+
+1. **Storage Requirements**: Storing separate tree states (especially for Sprout)
+requires significant disk space.
+
+2. **Performance Impact**: Computing and verifying tree states can be
+computationally expensive, potentially affecting sync performance.
+
+3. **Implementation Complexity**: Managing multiple tree states across different
+protocols adds complexity to the codebase.
+
+4. **Fork Handling**: Maintaining correct tree states during chain reorganizations
+requires careful handling.
+
+
+# Rationale and alternatives
+[rationale-and-alternatives]: #rationale-and-alternatives
+
+We chose this approach because:
+
+1. **Protocol Compatibility**: Our implementation follows the Zcash protocol
+specification requirements for handling note commitment trees and anchors.
+
+2. **Performance Optimization**: By caching tree states, we avoid recomputing
+them for every validation operation.
+
+3. **Memory Efficiency**: For non-finalized chains, we only keep necessary tree
+states in memory.
+
+4. **Scalability**: The design scales with chain growth by efficiently managing
+storage requirements.
+
+Alternative approaches considered:
+
+1. **Recompute Trees On-Demand**: Instead of storing tree states, recompute them
+when needed. This would save storage but significantly impact performance.
+
+2. **Single Tree State**: Maintain only the latest tree state and recompute for
+historical blocks. This would simplify implementation but make historical validation harder.
+
+3. **Full History Storage**: Store complete tree states for all blocks. This would optimize
+validation speed but require excessive storage.
+
+
+# Prior art
+[prior-art]: #prior-art
+
+1. **Zcashd**: Uses similar concepts but with differences in implementation details,
+particularly around storage and concurrency.
+
+2. **Lightwalletd**: Provides a simplified approach to tree state management focused
+on scanning rather than full validation.
+
+3. **Incrementalmerkletree Crate**: Our implementation leverages this existing Rust
+crate for efficient tree management.
+
+
+# Unresolved questions
+[unresolved-questions]: #unresolved-questions
+
+1. **Optimization Opportunities**: Are there further optimizations we can make to reduce
+storage requirements while maintaining performance?
+
+2. **Root Storage**: Should we store the `Root` hash in `sprout_note_commitment_tree`,
+and use it to look up the complete tree state when needed?
+
+3. **Re-insertion Prevention**: What's the most efficient approach to prevent re-inserts
+of deleted keys?
+
+4. **Concurrency Model**: How do we best handle concurrent access to tree states during
+parallel validation?
+
+
+# Future possibilities
+[future-possibilities]: #future-possibilities
+
+1. **Pruning Strategies**: Implement advanced pruning strategies for historical tree states
+to reduce storage requirements.
+
+2. **Parallelization**: Further optimize tree state updates for parallel processing.
+
+3. **Checkpoint Verification**: Use tree states for efficient checkpoint-based verification.
+
+4. **Light Client Support**: Leverage tree states to support Zebra-based light clients with
+efficient proof verification.
+
+5. **State Storage Optimization**: Investigate more efficient serialization formats and storage
+mechanisms for tree states.
diff --git a/book/src/dev/rfcs/0009-zebra-client.md b/book/src/dev/rfcs/0009-zebra-client.md
index 3aa153daea6..771aa7d3a2d 100644
--- a/book/src/dev/rfcs/0009-zebra-client.md
+++ b/book/src/dev/rfcs/0009-zebra-client.md
@@ -341,7 +341,7 @@ endpoint
-
+
diff --git a/book/src/dev/rfcs/drafts/0005-treestate.md b/book/src/dev/rfcs/drafts/0005-treestate.md
deleted file mode 100644
index f8c83936da7..00000000000
--- a/book/src/dev/rfcs/drafts/0005-treestate.md
+++ /dev/null
@@ -1,227 +0,0 @@
-# Treestate
-
-- Feature Name: treestate
-- Start Date: 2020-08-31
-- Design PR: [ZcashFoundation/zebra#983](https://github.com/ZcashFoundation/zebra/issues/983)
-- Zebra Issue: [ZcashFoundation/zebra#958](https://github.com/ZcashFoundation/zebra/issues/958)
-
-# Summary
-[summary]: #summary
-
-To validate blocks involving shielded transactions, we have to check the
-computed treestate from the included transactions against the block header
-metadata (for Sapling and Orchard) or previously finalized state (for Sprout). This document
-describes how we compute and manage that data, assuming a finalized state
-service as described in the [State Updates RFC](https://zebra.zfnd.org/dev/rfcs/0005-state-updates.md).
-
-
-# Motivation
-[motivation]: #motivation
-
-Block validation requires checking that the treestate of the block (consisting
-of the note commitment tree and nullifier set) is consistent with the metadata
-we have in the block header (the root of the note commitment tree) or previously
-finalized state (for Sprout).
-
-
-# Definitions
-[definitions]: #definitions
-
-TODO: split up these definitions into common, Sprout, Sapling, and possibly Orchard sections
-
-Many terms used here are defined in the [Zcash Protocol Specification](https://zips.z.cash/protocol/protocol.pdf)
-
-**notes**: Represents a value bound to a shielded payment address (public key)
-which is spendable by the recipient who holds the spending key corresponding to
-a given shielded payment address.
-
-**nullifiers**: Revealed by `Spend` descriptions when its associated `Note` is spent.
-
-**nullifier set**: The set of unique `Nullifier`s revealed by any `Transaction`s
-within a `Block`. `Nullifier`s are enforced to be unique within a valid block chain
-by committing to previous treestates in `Spend` descriptions, in order to prevent
-double-spends.
-
-**note commitments**: Pedersen commitment to the values consisting a `Note`. One
-should not be able to construct a `Note` from its commitment.
-
-**note commitment tree**: An incremental Merkle tree of fixed depth used to
-store `NoteCommitment`s that `JoinSplit` transfers or `Spend` transfers produce. It
-is used to express the existence of value and the capability to spend it. It is
-not the job of this tree to protect against double-spending, as it is
-append-only: that's what the `Nullifier` set is for.
-
-**note position**: The index of a `NoteCommitment` at the leafmost layer,
-counting leftmost to rightmost. The [position in the tree is determined by the
-order of transactions in the block](https://zips.z.cash/protocol/canopy.pdf#transactions).
-
-**root**: The layer 0 node of a Merkle tree.
-
-**anchor**: A Merkle tree root of a `NoteCommitment` tree. It uniquely
-identifies a `NoteCommitment` tree state given the assumed security properties
-of the Merkle tree’s hash function. Since the `Nullifier` set is always updated
-together with the `NoteCommitment` tree, this also identifies a particular state
-of the associated `Nullifier` set.
-
-**spend descriptions**: A shielded Sapling transfer that spends a `Note`. Includes
-an anchor of some previous `Block`'s `NoteCommitment` tree.
-
-**output descriptions**: A shielded Sapling transfer that creates a
-`Note`. Includes the u-coordinate of the `NoteCommitment` itself.
-
-**action descriptions**: A shielded Orchard transfer that spends and/or creates a `Note`.
-Does not include an anchor, because that is encoded once in the `anchorOrchard`
-field of a V5 `Transaction`.
-
-
-
-**joinsplit**: A shielded transfer that can spend Sprout `Note`s and transparent
-value, and create new Sprout `Note`s and transparent value, in one Groth16 proof
-statement.
-
-
-# Guide-level explanation
-[guide-level-explanation]: #guide-level-explanation
-
-TODO: split into common, Sprout, Sapling, and probably Orchard sections
-
-As `Block`s are validated, the `NoteCommitment`s revealed by all the transactions
-within that block are used to construct `NoteCommitmentTree`s, with the
-`NoteCommitment`s aligned in their note positions in the bottom layer of the
-Sprout or Sapling tree from the left-most leaf to the right-most in
-`Transaction` order in the `Block`. So the Sprout `NoteCommitment`s revealed by
-the first `JoinSplit` in a block would take note position 0 in the Sprout
-note commitment tree, for example. Once all the transactions in a block are
-parsed and the notes for each tree collected in their appropriate positions, the
-root of each tree is computed. While the trees are being built, the respective
-block nullifier sets are updated in memory as note nullifiers are revealed. If
-the rest of the block is validated according to consensus rules, that root is
-committed to its own datastructure via our state service (Sprout anchors,
-Sapling anchors). Sapling block validation includes comparing the specified
-FinalSaplingRoot in its block header to the root of the Sapling `NoteCommitment`
-tree that we have just computed to make sure they match.
-
-As the transactions within a block are parsed, Sapling shielded transactions
-including `Spend` descriptions and `Output` descriptions describe the spending and
-creation of Zcash Sapling notes, and JoinSplit-on-Groth16 descriptions to
-transfer/spend/create Sprout notes and transparent value. `JoinSplit` and `Spend`
-descriptions specify an anchor, which references a previous `NoteCommitment` tree
-root: for `Spend`s, this is a previous block's anchor as defined in their block
-header, for `JoinSplit`s, it may be a previous block's anchor or the root
-produced by a strictly previous `JoinSplit` description in its transaction. For
-`Spend`s, this is convenient because we can query our state service for
-previously finalized Sapling block anchors, and if they are found, then that
-[consensus check](https://zips.z.cash/protocol/canopy.pdf#spendsandoutputs) has
-been satisfied and the `Spend` description can be validated independently. For
-`JoinSplit`s, if it's not a previously finalized block anchor, it must be the
-treestate anchor of previous `JoinSplit` in this transaction, and we have to wait
-for that one to be parsed and its root computed to check that ours is
-valid. Luckily, it can only be a previous `JoinSplit` in this transaction, and is
-[usually the immediately previous one](zcashd), so the set of candidate anchors
-is smaller for earlier `JoinSplit`s in a transaction, but larger for the later
-ones. For these `JoinSplit`s, they can be validated independently of their
-anchor's finalization status as long as the final check of the anchor is done,
-when available, such as at the Transaction level after all the `JoinSplit`s have
-finished validating everything that can be validated without the context of
-their anchor's finalization state.
-
-So for each transaction, for both `Spend` descriptions and `JoinSplit`s, we can
-pre-emptively try to do our consensus check by looking up the anchors in our
-finalized set first. For `Spend`s, we then trigger the remaining validation and
-when that finishes we are full done with those. For `JoinSplit`s, the anchor
-state check may pass early if it's a previous block Sprout `NoteCommitment` tree
-root, but it may fail because it's an earlier `JoinSplit`s root instead, so once
-the `JoinSplit` validates independently of the anchor, we wait for all candidate
-previous `JoinSplit`s in that transaction finish validating before doing the
-anchor consensus check again, but against the output treestate roots of earlier
-`JoinSplit`s.
-
-Both Sprout and Sapling `NoteCommitment` trees must be computed for the whole
-block to validate. For Sprout, we need to compute interstitial treestates in
-between `JoinSplit`s in order to do the final consensus check for each/all
-`JoinSplit`s, not just for the whole block, as in Sapling.
-
-For Sapling, at the block layer, we can iterate over all the transactions in
-order and if they have `Spend`s and/or `Output`s, we update our Nullifer set for
-the block as nullifiers are revealed in `Spend` descriptions, and update our note
-commitment tree as `NoteCommitment`s are revealed in `Output` descriptions, adding
-them as leaves in positions according to their order as they appear transaction
-to transaction, output to output, in the block. This can be done independent of
-the transaction validations. When the Sapling transactions are all validated,
-the `NoteCommitmentTree` root should be computed: this is the anchor for this
-block. For Sapling and Blossom blocks, we need to check that this root matches
-the `RootHash` bytes in this block's header, as the `FinalSaplingRoot`. Once all
-other consensus and validation checks are done, this will be saved down to our
-finalized state to our `sapling_anchors` set, making it available for lookup by
-other Sapling descriptions in future transactions.
-TODO: explain Heartwood, Canopy, NU5 rule variants around anchors.
-For Sprout, we must compute/update interstitial `NoteCommitmentTree`s between
-`JoinSplit`s that may reference an earlier one's root as its anchor. If we do
-this at the transaction layer, we can iterate through all the `JoinSplit`s and
-compute the Sprout `NoteCommitmentTree` and nullifier set similar to how we do
-the Sapling ones as described above, but at each state change (ie,
-per-`JoinSplit`) we note the root and cache it for lookup later. As the
-`JoinSplit`s are validated without context, we check for its specified anchor
-amongst the interstitial roots we've already calculated (according to the spec,
-these interstitial roots don't have to be finalized or the result of an
-independently validated `JoinSplit`, they just must refer to any prior `JoinSplit`
-root in the same transaction). So we only have to wait for our previous root to
-be computed via any of our candidates, which in the worst case is waiting for
-all of them to be computed for the last `JoinSplit`. If our `JoinSplit`s defined
-root pops out, that `JoinSplit` passes that check.
-
-To finalize the block, the Sprout and Sapling treestates are the ones resulting
-from the last transaction in the block, and determines the Sprout and Sapling
-anchors that will be associated with this block as we commit it to our finalized
-state. The Sprout and Sapling nullifiers revealed in the block will be merged
-with the existing ones in our finalized state (ie, it should strictly grow over
-time).
-
-## State Management
-
-### Orchard
-- There is a single copy of the latest Orchard Note Commitment Tree for the finalized tip.
-- When finalizing a block, the finalized tip is updated with a serialization of the latest Orchard Note Commitment Tree. (The previous tree should be deleted as part of the same database transaction.)
-- Each non-finalized chain gets its own copy of the Orchard note commitment tree, cloned from the note commitment tree of the finalized tip or fork root.
-- When a block is added to a non-finalized chain tip, the Orchard note commitment tree is updated with the note commitments from that block.
-- When a block is rolled back from a non-finalized chain tip... (TODO)
-
-### Sapling
-- There is a single copy of the latest Sapling Note Commitment Tree for the finalized tip.
-- When finalizing a block, the finalized tip is updated with a serialization of the Sapling Note Commitment Tree. (The previous tree should be deleted as part of the same database transaction.)
-- Each non-finalized chain gets its own copy of the Sapling note commitment tree, cloned from the note commitment tree of the finalized tip or fork root.
-- When a block is added to a non-finalized chain tip, the Sapling note commitment tree is updated with the note commitments from that block.
-- When a block is rolled back from a non-finalized chain tip... (TODO)
-
-### Sprout
-- Every finalized block stores a separate copy of the Sprout note commitment tree (😿), as of that block.
-- When finalizing a block, the Sprout note commitment tree for that block is stored in the state. (The trees for previous blocks also remain in the state.)
-- Every block in each non-finalized chain gets its own copy of the Sprout note commitment tree. The initial tree is cloned from the note commitment tree of the finalized tip or fork root.
-- When a block is added to a non-finalized chain tip, the Sprout note commitment tree is cloned, then updated with the note commitments from that block.
-- When a block is rolled back from a non-finalized chain tip, the trees for each block are deleted, along with that block.
-
-We can't just compute a fresh tree with just the note commitments within a block, we are adding them to the tree referenced by the anchor, but we cannot update that tree with just the anchor, we need the 'frontier' nodes and leaves of the incremental merkle tree.
-
-# Reference-level explanation
-[reference-level-explanation]: #reference-level-explanation
-
-
-# Drawbacks
-[drawbacks]: #drawbacks
-
-
-
-# Rationale and alternatives
-[rationale-and-alternatives]: #rationale-and-alternatives
-
-
-# Prior art
-[prior-art]: #prior-art
-
-
-# Unresolved questions
-[unresolved-questions]: #unresolved-questions
-
-
-# Future possibilities
-[future-possibilities]: #future-possibilities
diff --git a/book/src/dev/rfcs/drafts/data-flow-2020-07-22.md b/book/src/dev/rfcs/drafts/data-flow-2020-07-22.md
index 5818ad119a1..8c366472ad4 100644
--- a/book/src/dev/rfcs/drafts/data-flow-2020-07-22.md
+++ b/book/src/dev/rfcs/drafts/data-flow-2020-07-22.md
@@ -23,7 +23,7 @@
- nullifiers (within a single transaction)
- // Transactions containing empty `vin` must have either non-empty `vJoinSplit` or non-empty `vShieldedSpend`.
- // Transactions containing empty `vout` must have either non-empty `vJoinSplit` or non-empty `vShieldedOutput`.
- - Moar: https://github.com/zcash/zcash/blob/ab2b7c0969391d8a57d90d008665da02f3f618e7/src/main.cpp#L1091
+ - More: https://github.com/zcash/zcash/blob/ab2b7c0969391d8a57d90d008665da02f3f618e7/src/main.cpp#L1091
- Sum up "LegacySigOps" for each transaction and check that it's less than some maximum
- Acquires a lock, then calls `MarkBlockAsReceived` (networking?)
- Calls `AcceptBlock`, defined at: https://github.com/zcash/zcash/blob/ab2b7c0969391d8a57d90d008665da02f3f618e7/src/main.cpp#L4180
diff --git a/book/src/dev/rfcs/drafts/xxxx-basic-integration-testing.md b/book/src/dev/rfcs/drafts/xxxx-basic-integration-testing.md
index 7f9c0594e3c..dd77cc14cdc 100644
--- a/book/src/dev/rfcs/drafts/xxxx-basic-integration-testing.md
+++ b/book/src/dev/rfcs/drafts/xxxx-basic-integration-testing.md
@@ -124,7 +124,7 @@ Think about what the natural extension and evolution of your proposal would
be and how it would affect Zebra and Zcash as a whole. Try to use this
section as a tool to more fully consider all possible
interactions with the project and cryptocurrency ecosystem in your proposal.
-Also consider how the this all fits into the roadmap for the project
+Also consider how this all fits into the roadmap for the project
and of the relevant sub-team.
This is also a good place to "dump ideas", if they are out of scope for the
diff --git a/book/src/dev/rfcs/drafts/xxxx-block-subsidy.md b/book/src/dev/rfcs/drafts/xxxx-block-subsidy.md
index 2ab752c4be6..838343ee77d 100644
--- a/book/src/dev/rfcs/drafts/xxxx-block-subsidy.md
+++ b/book/src/dev/rfcs/drafts/xxxx-block-subsidy.md
@@ -70,7 +70,7 @@ In Zebra the consensus related code lives in the `zebra-consensus` crate. The bl
Inside `zebra-consensus/src/block/subsidy/` the following submodules will be created:
- `general.rs`: General block reward functions and utilities.
-- `founders_reward.rs`: Specific functions related to funders reward.
+- `founders_reward.rs`: Specific functions related to founders reward.
- `funding_streams.rs`: Specific functions for funding streams.
In addition to calculations the block subsidy requires constants defined in the protocol. The implementation will also create additional constants, all of them will live at:
diff --git a/book/src/dev/state-db-upgrades.md b/book/src/dev/state-db-upgrades.md
index 15f962e88b4..dd9e5bcd587 100644
--- a/book/src/dev/state-db-upgrades.md
+++ b/book/src/dev/state-db-upgrades.md
@@ -12,7 +12,8 @@ family doesn't exist.
Instead:
- define the name and type of each column family at the top of the implementation module,
- add a method on the database that returns that type, and
-- add the column family name to the list of column families in the database:
+- add the column family name to the list of column families in the database
+ (in the `STATE_COLUMN_FAMILIES_IN_CODE` array):
For example:
```rust
@@ -256,7 +257,7 @@ simulates typical user behaviour.
And ideally:
- An upgrade from the earliest supported Zebra version
- (the CI sync-past-checkpoint tests do this on every PR)
+ (the CI sync-past-mandatory-checkpoint tests do this on every PR)
#### Manually Triggering a Format Upgrade
[manual-upgrade]: #manual-upgrade
@@ -304,7 +305,7 @@ We use the following rocksdb column families:
| `hash_by_tx_loc` | `TransactionLocation` | `transaction::Hash` | Create |
| `tx_loc_by_hash` | `transaction::Hash` | `TransactionLocation` | Create |
| *Transparent* | | | |
-| `balance_by_transparent_addr` | `transparent::Address` | `Amount \|\| AddressLocation` | Update |
+| `balance_by_transparent_addr` | `transparent::Address` | `AddressBalanceLocation` | Update |
| `tx_loc_by_transparent_addr_loc` | `AddressTransaction` | `()` | Create |
| `utxo_by_out_loc` | `OutputLocation` | `transparent::Output` | Delete |
| `utxo_loc_by_transparent_addr_loc` | `AddressUnspentOutput` | `()` | Delete |
@@ -325,6 +326,20 @@ We use the following rocksdb column families:
| *Chain* | | | |
| `history_tree` | `()` | `NonEmptyHistoryTree` | Update |
| `tip_chain_value_pool` | `()` | `ValueBalance` | Update |
+| `block_info` | `block::Height` | `BlockInfo` | Create |
+
+With the following additional modifications when compiled with the `indexer` feature:
+
+| Column Family | Keys | Values | Changes |
+| ------------------------- | -------------------- | --------------------- | ------- |
+| *Transparent* | | | |
+| `tx_loc_by_spent_out_loc` | `OutputLocation` | `TransactionLocation` | Create |
+| *Sprout* | | | |
+| `sprout_nullifiers` | `sprout::Nullifier` | `TransactionLocation` | Create |
+| *Sapling* | | | |
+| `sapling_nullifiers` | `sapling::Nullifier` | `TransactionLocation` | Create |
+| *Orchard* | | | |
+| `orchard_nullifiers` | `orchard::Nullifier` | `TransactionLocation` | Create |
### Data Formats
[rocksdb-data-format]: #rocksdb-data-format
@@ -339,6 +354,7 @@ Block and Transaction Data:
- `TransactionIndex`: 16 bits, big-endian, unsigned (max ~23,000 transactions in the 2 MB block limit)
- `TransactionCount`: same as `TransactionIndex`
- `TransactionLocation`: `Height \|\| TransactionIndex`
+- `AddressBalanceLocation`: `Amount \|\| u64 \|\| AddressLocation`
- `OutputIndex`: 24 bits, big-endian, unsigned (max ~223,000 transfers in the 2 MB block limit)
- transparent and shielded input indexes, and shielded output indexes: 16 bits, big-endian, unsigned (max ~49,000 transfers in the 2 MB block limit)
- `OutputLocation`: `TransactionLocation \|\| OutputIndex`
@@ -587,9 +603,16 @@ So they should not be used for consensus-critical checks.
**TODO:** store the `Root` hash in `sprout_note_commitment_tree`, and use it to look up the
note commitment tree. This de-duplicates tree state data. But we currently only store one sprout tree by height.
-- The value pools are only stored for the finalized tip.
+- The value pools are only stored for the finalized tip. Per-block value pools
+ are stored in `block_info`; see below.
- We do not store the cumulative work for the finalized chain,
because the finalized work is equal for all non-finalized chains.
So the additional non-finalized work can be used to calculate the relative chain order,
and choose the best chain.
+
+- The `block_info` column family contains additional per-block data. Currently
+ it stores the value pools after that block, and the block's size. It is
+ implemented in a future-proof way, so it is possible to add more data to it
+ while still allowing database downgrades (i.e., it does not require the
+ data length to match exactly what is expected, and ignores any extra data).
diff --git a/book/src/dev/zebra-dependencies-for-audit.md b/book/src/dev/zebra-dependencies-for-audit.md
index f33aba321b8..7655ff275de 100644
--- a/book/src/dev/zebra-dependencies-for-audit.md
+++ b/book/src/dev/zebra-dependencies-for-audit.md
@@ -63,7 +63,7 @@ The following consensus, security, and functional changes are in Zebra's develop
The following list of dependencies is out of scope for the audit.
-Please ignore the dependency versions in these tables, some of them are are outdated. All versions of these dependencies are out of scope.
+Please ignore the dependency versions in these tables; some of them are outdated. All versions of these dependencies are out of scope.
The latest versions of Zebra's dependencies are in [`Cargo.lock`](https://github.com/ZcashFoundation/zebra/tree/audit-v1.0.0-rc.0/Cargo.lock), including transitive dependencies. They can be viewed using `cargo tree`.
diff --git a/book/src/user/custom-testnets.md b/book/src/user/custom-testnets.md
index 76406cdf704..18a19673ce9 100644
--- a/book/src/user/custom-testnets.md
+++ b/book/src/user/custom-testnets.md
@@ -156,7 +156,7 @@ The remaining consensus differences between Mainnet and Testnet could be made co
## Differences Between Custom Testnets and Regtest
Zebra's Regtest network is a special case of a custom Testnet that:
-- Won't make peer connections[^fn4],
+- Won't make remote peer connections[^fn4],
- Skips Proof-of-Work validation,
- Uses a reserved network magic and network name,
- Activates network upgrades up to and including Canopy at block height 1,
@@ -183,4 +183,4 @@ Zebra nodes on custom Testnets will also reject peer connections with nodes that
[^fn3]: Configuring any of the Testnet parameters that are currently configurable except the network name will result in an incompatible custom Testnet, these are: the network magic, network upgrade activation heights, slow start interval, genesis hash, disabled Proof-of-Work and target difficulty limit.
-[^fn4]: Zebra won't make outbound peer connections on Regtest, but currently still listens for inbound peer connections, which will be rejected unless they use the Regtest network magic, and Zcash nodes using the Regtest network magic should not be making outbound peer connections. It may be updated to skip initialization of the peerset service altogether so that it won't listen for peer connections at all when support for isolated custom Testnets is added.
+[^fn4]: Zebra won't make remote outbound peer connections on Regtest, but it currently still listens for remote inbound peer connections. These will be rejected unless they use the Regtest network magic, and Zcash nodes using the Regtest network magic should not be making outbound peer connections. Zebra may be updated to skip initialization of the peerset service altogether, so that it won't listen for peer connections at all, when support for isolated custom Testnets is added.
diff --git a/book/src/user/docker.md b/book/src/user/docker.md
index 90491024df3..95657770610 100644
--- a/book/src/user/docker.md
+++ b/book/src/user/docker.md
@@ -1,182 +1,178 @@
# Zebra with Docker
-The easiest way to run Zebra is using [Docker](https://docs.docker.com/get-docker/).
+The Zcash Foundation maintains Docker infrastructure for deploying and testing Zebra.
-We've embraced Docker in Zebra for most of the solution lifecycle, from development environments to CI (in our pipelines), and deployment to end users.
+## Quick Start
-> [!TIP]
-> We recommend using `docker compose` sub-command over the plain `docker` CLI, especially for more advanced use-cases like running CI locally, as it provides a more convenient and powerful way to manage multi-container Docker applications. See [CI/CD Local Testing](#cicd-local-testing) for more information, and other compose files available in the [docker](https://github.com/ZcashFoundation/zebra/tree/main/docker) folder.
-
-## Quick usage
-
-You can deploy Zebra for daily use with the images available in [Docker Hub](https://hub.docker.com/r/zfnd/zebra) or build it locally for testing.
-
-### Ready to use image
-
-Using `docker compose`:
+To get Zebra up and running quickly, you can use a ready-to-use image from
+[Docker Hub](https://hub.docker.com/r/zfnd/zebra/tags):
```shell
-docker compose -f docker/docker-compose.yml up
+docker run --name zebra zfnd/zebra
```
-With plain `docker` CLI:
+If you want to preserve Zebra's state, you can create a Docker volume:
```shell
docker volume create zebrad-cache
-
-docker run -d --platform linux/amd64 \
- --restart unless-stopped \
- --env-file .env \
- --mount type=volume,source=zebrad-cache,target=/var/cache/zebrad-cache \
- -p 8233:8233 \
- --memory 16G \
- --cpus 4 \
- zfnd/zebra
```
-### Build it locally
+And mount it before you start the container:
```shell
-git clone --depth 1 --branch v2.0.0 https://github.com/ZcashFoundation/zebra.git
-docker build --file docker/Dockerfile --target runtime --tag zebra:local .
-docker run --detach zebra:local
+docker run \
+ --mount source=zebrad-cache,target=/home/zebra/.cache/zebra \
+ --name zebra \
+ zfnd/zebra
```
-### Alternatives
-
-See [Building Zebra](https://github.com/ZcashFoundation/zebra#building-zebra) for more information.
-
-## Advanced usage
-
-You're able to specify various parameters when building or launching the Docker image, which are meant to be used by developers and CI pipelines. For example, specifying the Network where Zebra will run (Mainnet, Testnet, etc), or enabling features like metrics with Prometheus.
-
-For example, if we'd like to enable metrics on the image, we'd build it using the following `build-arg`:
-
-> [!IMPORTANT]
-> To fully use and display the metrics, you'll need to run a Prometheus and Grafana server, and configure it to scrape and visualize the metrics endpoint. This is explained in more detailed in the [Metrics](https://zebra.zfnd.org/user/metrics.html#zebra-metrics) section of the User Guide.
+You can also use `docker compose`, which we recommend. First get the repo:
```shell
-docker build -f ./docker/Dockerfile --target runtime --build-arg FEATURES='default-release-binaries prometheus' --tag local/zebra.mining:latest .
+git clone --depth 1 --branch v2.4.2 https://github.com/ZcashFoundation/zebra.git
+cd zebra
```
-To increase the log output we can optionally add these `build-arg`s:
+Then run:
```shell
---build-arg RUST_BACKTRACE=full --build-arg RUST_LOG=debug --build-arg COLORBT_SHOW_HIDDEN=1
+docker compose -f docker/docker-compose.yml up
```
-And after our image has been built, we can run it on `Mainnet` with the following command, which will expose the metrics endpoint on port `9999` and force the logs to be colored:
+## Custom Images
+
+If you want to use your own images with, for example, some opt-in compilation
+features enabled, pass the desired features to the build via the `FEATURES`
+build argument:
```shell
-docker run --env LOG_COLOR="true" -p 9999:9999 local/zebra.mining
+docker build \
+  --file docker/Dockerfile \
+  --build-arg FEATURES="prometheus" \
+  --target runtime \
+  --tag zebra:local \
+  .
```
-Based on our actual `entrypoint.sh` script, the following configuration file will be generated (on the fly, at startup) and used by Zebra:
+### Alternatives
-```toml
-[network]
-network = "Mainnet"
-listen_addr = "0.0.0.0"
-[state]
-cache_dir = "/var/cache/zebrad-cache"
-[metrics]
-endpoint_addr = "127.0.0.1:9999"
-```
+See [Building Zebra](https://github.com/ZcashFoundation/zebra#manual-build) for more information.
-### Running Zebra with Lightwalletd
-To run Zebra with Lightwalletd, we recommend using the provided `docker compose` files for Zebra and Lightwalletd, which will start both services and connect them together, while exposing ports, mounting volumes, and setting environment variables.
+### Building with Custom Features
-```shell
-docker compose -f docker/docker-compose.yml -f docker/docker-compose.lwd.yml up
-```
+Zebra supports various features that can be enabled during build time using the `FEATURES` build argument:
-### CI/CD Local Testing
+For example, if we'd like to enable metrics on the image, we'd build it using the following `build-arg`:
+
+> [!IMPORTANT]
+> To fully use and display the metrics, you'll need to run a Prometheus and Grafana server, and configure it to scrape and visualize the metrics endpoint. This is explained in more detail in the [Metrics](https://zebra.zfnd.org/user/metrics.html#zebra-metrics) section of the User Guide.
-To run CI tests locally, which mimics the testing done in our CI pipelines on GitHub Actions, use the `docker-compose.test.yml` file. This setup allows for a consistent testing environment both locally and in CI.
+```shell
+# Build with specific features
+docker build -f ./docker/Dockerfile --target runtime \
+ --build-arg FEATURES="default-release-binaries prometheus" \
+ --tag zebra:metrics .
+```
-#### Running Tests Locally
+All available Cargo features are listed in the `zebrad` crate documentation.
-1. **Setting Environment Variables**:
- - Modify the `test.env` file to set the desired test configurations.
- - For running all tests, set `RUN_ALL_TESTS=1` in `test.env`.
+## Configuring Zebra
-2. **Starting the Test Environment**:
- - Use Docker Compose to start the testing environment:
+To configure Zebra using Docker, you have a few options, processed in this order:
- ```shell
- docker-compose -f docker/docker-compose.test.yml up
- ```
+1. **Provide a specific config file path:** Set the `ZEBRA_CONF_PATH` environment variable to point to your config file within the container.
+2. **Use the default config file:** By default, the `docker-compose.yml` file mounts `./default-zebra-config.toml` to `/home/zebra/.config/zebrad.toml` using the `configs:` mapping. Zebra will use this file if `ZEBRA_CONF_PATH` is not set. To use environment variables instead, you must **comment out** the `configs:` mapping in `docker/docker-compose.yml`.
+3. **Generate config from environment variables:** If neither of the above methods provides a config file (i.e., `ZEBRA_CONF_PATH` is unset *and* the `configs:` mapping in `docker-compose.yml` is commented out), the container's entrypoint script will *automatically generate* a default configuration file at `/home/zebra/.config/zebrad.toml`. This generated file uses specific environment variables (like `NETWORK`, `ZEBRA_RPC_PORT`, `ENABLE_COOKIE_AUTH`, `MINER_ADDRESS`, etc.) to define the settings. Using the `docker/.env` file is the primary way to set these variables for this auto-generation mode.
- - This will start the Docker container and run the tests based on `test.env` settings.
+You can see if your config works as intended by looking at Zebra's logs.
-3. **Viewing Test Output**:
- - The test results and logs will be displayed in the terminal.
+Note that if you provide a configuration file using methods 1 or 2, environment variables from `docker/.env` will **not** override the settings within that file. The environment variables are primarily used for the auto-generation scenario (method 3).
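The resolution order above can be sketched as a small shell function. This is a hypothetical simplification, not the actual entrypoint script; the `ZEBRA_CONF_PATH` and `NETWORK` variables and the `~/.config/zebrad.toml` path come from the text above, and the generated file here is reduced to a single `[network]` setting for illustration:

```shell
# Hypothetical sketch of the config resolution order above. The real logic
# lives in the image's entrypoint script and handles many more settings;
# this function just prints which config file Zebra would end up using.
resolve_zebra_conf() {
  default_conf="${HOME}/.config/zebrad.toml"
  if [ -n "${ZEBRA_CONF_PATH}" ]; then
    # 1. An explicitly provided config path wins.
    echo "${ZEBRA_CONF_PATH}"
  elif [ -f "${default_conf}" ]; then
    # 2. Otherwise, a mounted default config file is used as-is.
    echo "${default_conf}"
  else
    # 3. Otherwise, a config is auto-generated from environment variables.
    mkdir -p "$(dirname "${default_conf}")"
    printf '[network]\nnetwork = "%s"\n' "${NETWORK:-Mainnet}" > "${default_conf}"
    echo "${default_conf}"
  fi
}
```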
-4. **Stopping the Environment**:
- - Once testing is complete, stop the environment using:
+### RPC
- ```shell
- docker-compose -f docker/docker-compose.test.yml down
- ```
+Zebra's RPC server is disabled by default. To enable it, you need to define the RPC settings in Zebra's configuration. You can achieve this using one of the configuration methods described above:
-This approach ensures you can run the same tests locally that are run in CI, providing a robust way to validate changes before pushing to the repository.
+* **Using a config file (methods 1 or 2):** Add or uncomment the `[rpc]` section in your `zebrad.toml` file (like the one provided in `docker/default-zebra-config.toml`). Ensure you set the `listen_addr` (e.g., `"0.0.0.0:8232"` for Mainnet).
+* **Using environment variables (method 3):** Set the `ZEBRA_RPC_PORT` environment variable (e.g., in `docker/.env`). This tells the entrypoint script to include an enabled `[rpc]` section listening on `0.0.0.0:<ZEBRA_RPC_PORT>` in the auto-generated configuration file.
-### Build and Run Time Configuration
+**Cookie Authentication:**
-#### Build Time Arguments
+By default, Zebra uses cookie-based authentication for RPC requests (`enable_cookie_auth = true`). When enabled, Zebra generates a unique, random cookie file required for client authentication.
-#### Configuration
+* **Cookie Location:** The entrypoint script configures Zebra to store this file at `/home/zebra/.cache/zebra/.cookie` inside the container.
+* **Viewing the Cookie:** If the container is running and RPC is enabled with authentication, you can view the cookie content using:
-- `FEATURES`: Specifies the features to build `zebrad` with. Example: `"default-release-binaries getblocktemplate-rpcs"`
-- `TEST_FEATURES`: Specifies the features for tests. Example: `"lightwalletd-grpc-tests zebra-checkpoints"`
+ ```bash
+  docker exec <container_name> cat /home/zebra/.cache/zebra/.cookie
+ ```
-#### Logging
+ (Replace `<container_name>` with your container's name, typically `zebra` if using the default `docker-compose.yml`). Your RPC client will need this value.
+* **Disabling Authentication:** If you need to disable cookie authentication (e.g., for compatibility with tools like `lightwalletd`):
+ * If using a **config file** (methods 1 or 2), set `enable_cookie_auth = false` within the `[rpc]` section:
-- `RUST_LOG`: Sets the trace log level. Example: `"debug"`
-- `RUST_BACKTRACE`: Enables or disables backtraces. Example: `"full"`
-- `RUST_LIB_BACKTRACE`: Enables or disables library backtraces. Example: `1`
-- `COLORBT_SHOW_HIDDEN`: Enables or disables showing hidden backtraces. Example: `1`
+ ```toml
+ [rpc]
+ # listen_addr = ...
+ enable_cookie_auth = false
+ ```
-#### Tests
+ * If using **environment variables** for auto-generation (method 3), set `ENABLE_COOKIE_AUTH=false` in your `docker/.env` file.
-- `TEST_FEATURES`: Specifies the features for tests. Example: `"lightwalletd-grpc-tests zebra-checkpoints"`
-- `ZEBRA_SKIP_IPV6_TESTS`: Skips IPv6 tests. Example: `1`
-- `ENTRYPOINT_FEATURES`: Overrides the specific features used to run tests in `entrypoint.sh`. Example: `"default-release-binaries lightwalletd-grpc-tests"`
+Remember that Zebra only generates the cookie file if the RPC server is enabled *and* `enable_cookie_auth` is set to `true` (or omitted, as `true` is the default).
-#### CI/CD
+## Examples
-- `SHORT_SHA`: Represents the short SHA of the commit. Example: `"a1b2c3d"`
+To make the initial setup of Zebra with other services easier, we provide some
+example files for `docker compose`. The following subsections will walk you
+through those examples.
-#### Run Time Variables
+### Running Zebra with Lightwalletd
-- `NETWORK`: Specifies the network type. Example: `"Mainnet"`
+The following command will run Zebra with Lightwalletd:
-#### Zebra Configuration
+```shell
+docker compose -f docker/docker-compose.lwd.yml up
+```
-- `ZEBRA_CHECKPOINT_SYNC`: Enables or disables checkpoint sync. Example: `true`
-- `ZEBRA_LISTEN_ADDR`: Address for Zebra to listen on. Example: `"0.0.0.0"`
-- `ZEBRA_CACHED_STATE_DIR`: Directory for cached state. Example: `"/var/cache/zebrad-cache"`
+Note that Docker will run Zebra with the RPC server enabled and the cookie
+authentication mechanism disabled since Lightwalletd doesn't support it. Instead
+of configuring Zebra via the recommended config file or `docker/.env` file, we
+configured the RPC server by setting environment variables directly in the
+`docker/docker-compose.lwd.yml` file. This takes advantage of the entrypoint
+script's auto-generation feature (method 3 described above).
-#### Mining Configuration
+### Running Zebra with Prometheus and Grafana
-- `RPC_LISTEN_ADDR`: Address for RPC to listen on. Example: `"0.0.0.0"`
-- `RPC_PORT`: Port for RPC. Example: `8232`
-- `MINER_ADDRESS`: Address for the miner. Example: `"t1XhG6pT9xRqRQn3BHP7heUou1RuYrbcrCc"`
+The following commands will run Zebra with Prometheus and Grafana:
-#### Other Configuration
+```shell
+docker compose -f docker/docker-compose.grafana.yml build --no-cache
+docker compose -f docker/docker-compose.grafana.yml up
+```
-- `METRICS_ENDPOINT_ADDR`: Address for metrics endpoint. Example: `"0.0.0.0"`
-- `METRICS_ENDPOINT_PORT`: Port for metrics endpoint. Example: `9999`
-- `LOG_FILE`: Path to the log file. Example: `"/path/to/log/file.log"`
-- `LOG_COLOR`: Enables or disables log color. Example: `false`
-- `TRACING_ENDPOINT_ADDR`: Address for tracing endpoint. Example: `"0.0.0.0"`
-- `TRACING_ENDPOINT_PORT`: Port for tracing endpoint. Example: `3000`
+In this example, we build a local Zebra image with the `prometheus` Cargo
+compilation feature. We enable this feature by specifying its name in the
+build arguments, which makes `cargo` compile Zebra with metrics support for
+Prometheus. We also specify the feature as an environment variable at run
+time, which makes Docker's entrypoint script configure Zebra to open a
+scraping endpoint on `localhost:9999` for Prometheus.
-Specific tests are defined in `docker/test.env` file and can be enabled by setting the corresponding environment variable to `1`.
+Once all services are up, the Grafana web UI should be available at
+`localhost:3000`, the Prometheus web UI should be at `localhost:9090`, and
+Zebra's scraping page should be at `localhost:9999`. The default login and
+password for Grafana are both `admin`. To make Grafana use Prometheus, you need
+to add Prometheus as a data source with the URL `http://localhost:9090` in
+Grafana's UI. You can then import various Grafana dashboards from the `grafana`
+directory in the Zebra repo.
-## Registries
+### Running CI Tests Locally
-The images built by the Zebra team are all publicly hosted. Old image versions meant to be used by our [CI pipeline](https://github.com/ZcashFoundation/zebra/blob/main/.github/workflows/ci-integration-tests-gcp.yml) (`zebrad-test`, `lighwalletd`) might be deleted on a scheduled basis.
+To run CI tests locally, first set the variables in the `test.env` file to
+configure the tests, then run:
-We use [Docker Hub](https://hub.docker.com/r/zfnd/zebra) for end-user images and [Google Artifact Registry](https://console.cloud.google.com/artifacts/docker/zfnd-dev-zebra/us/zebra) to build external tools and test images.
+```shell
+docker compose -f docker/docker-compose.test.yml up
+```
diff --git a/book/src/user/fork-zebra-testnet.md b/book/src/user/fork-zebra-testnet.md
index a7bf7b410c4..732129dafbe 100644
--- a/book/src/user/fork-zebra-testnet.md
+++ b/book/src/user/fork-zebra-testnet.md
@@ -94,7 +94,6 @@ Relevant parts of the configuration file:
debug_enable_at_height = 0
[mining]
-debug_like_zcashd = true
miner_address = 't27eWDgjFYJGVXmzrXeVjnb5J3uXDM9xH9v'
[network]
diff --git a/book/src/user/install.md b/book/src/user/install.md
index 6648339f743..a7bc4b4ae97 100644
--- a/book/src/user/install.md
+++ b/book/src/user/install.md
@@ -45,7 +45,6 @@ features](https://doc.rust-lang.org/cargo/reference/features.html#command-line-f
- `prometheus` for [Prometheus metrics](https://zebra.zfnd.org/user/metrics.html)
- `sentry` for [Sentry monitoring](https://zebra.zfnd.org/user/tracing.html#sentry-production-monitoring)
- `elasticsearch` for [experimental Elasticsearch support](https://zebra.zfnd.org/user/elasticsearch.html)
-- `shielded-scan` for [experimental shielded scan support](https://zebra.zfnd.org/user/shielded-scan.html)
You can combine multiple features by listing them as parameters of the
`--features` flag:
@@ -76,7 +75,7 @@ To compile Zebra directly from GitHub, or from a GitHub release source archive:
```sh
git clone https://github.com/ZcashFoundation/zebra.git
cd zebra
-git checkout v2.0.0
+git checkout v2.4.2
```
3. Build and Run `zebrad`
@@ -89,7 +88,7 @@ target/release/zebrad start
### Compiling from git using cargo install
```sh
-cargo install --git https://github.com/ZcashFoundation/zebra --tag v2.0.0 zebrad
+cargo install --git https://github.com/ZcashFoundation/zebra --tag v2.4.2 zebrad
```
### Compiling on ARM
@@ -114,11 +113,6 @@ If you're having trouble with:
- use `cargo install` without `--locked` to build with the latest versions of each dependency
-## Experimental Shielded Scanning feature
-
-- install the `rocksdb-tools` or `rocksdb` packages to get the `ldb` binary, which allows expert users to
- [query the scanner database](https://zebra.zfnd.org/user/shielded-scan.html). This binary is sometimes called `rocksdb_ldb`.
-
## Optional Tor feature
- **sqlite linker errors:** libsqlite3 is an optional dependency of the `zebra-network/tor` feature.
diff --git a/book/src/user/mining-docker.md b/book/src/user/mining-docker.md
index 002848c0ca3..2550ba23610 100644
--- a/book/src/user/mining-docker.md
+++ b/book/src/user/mining-docker.md
@@ -1,42 +1,43 @@
# Mining with Zebra in Docker
-Zebra's [Docker images](https://hub.docker.com/r/zfnd/zebra/tags) can be used for your mining
-operations. If you don't have Docker, see the
-[manual configuration instructions](https://zebra.zfnd.org/user/mining.html).
+Zebra's [Docker images](https://hub.docker.com/r/zfnd/zebra/tags) can be used
+for your mining operations. If you don't have Docker, see the [manual
+configuration instructions](https://zebra.zfnd.org/user/mining.html).
Using docker, you can start mining by running:
```bash
-docker run -e MINER_ADDRESS="t3dvVE3SQEi7kqNzwrfNePxZ1d4hUyztBA1" -p 8232:8232 zfnd/zebra:latest
+docker run --name zebra_local -e MINER_ADDRESS="t3dvVE3SQEi7kqNzwrfNePxZ1d4hUyztBA1" -e ZEBRA_RPC_PORT=8232 -p 8232:8232 zfnd/zebra:latest
```
-This command starts a container on Mainnet and binds port 8232 on your Docker host. If you
-want to start generating blocks, you need to let Zebra sync first.
+This command starts a container on Mainnet and binds port 8232 on your Docker
+host. If you want to start generating blocks, you need to let Zebra sync first.
Note that you must pass the address for your mining rewards via the
`MINER_ADDRESS` environment variable when you are starting the container, as we
-did with the ZF funding stream address above. The address we used starts with the prefix `t1`,
-meaning it is a Mainnet P2PKH address. Please remember to set your own address
-for the rewards.
+did with the ZF funding stream address above. The address we used starts with
+the prefix `t1`, meaning it is a Mainnet P2PKH address. Please remember to set
+your own address for the rewards.
The port we mapped between the container and the host with the `-p` flag in the
-example above is Zebra's default Mainnet RPC port. If you want to use a
-different one, you can specify it in the `RPC_PORT` environment variable,
-similarly to `MINER_ADDRESS`, and then map it with the Docker's `-p` flag.
+example above is Zebra's default Mainnet RPC port.
Instead of listing the environment variables on the command line, you can use
-Docker's `--env-file` flag to specify a file containing the variables. You
-can find more info here
+Docker's `--env-file` flag to specify a file containing the variables. You can
+find more info here
https://docs.docker.com/engine/reference/commandline/run/#env.
-## Mining on Testnet
+If you don't want to set any environment variables, you can edit the
+`docker/default-zebra-config.toml` file and mount it into the container so
+Zebra picks it up at startup. There's an example of how to do that in
+`docker/docker-compose.yml`.
If you want to mine on Testnet, you need to set the `NETWORK` environment
variable to `Testnet` and use a Testnet address for the rewards. For example,
running
```bash
-docker run -e NETWORK="Testnet" -e MINER_ADDRESS="t27eWDgjFYJGVXmzrXeVjnb5J3uXDM9xH9v" -p 18232:18232 zfnd/zebra:latest
+docker run --name zebra_local -e NETWORK="Testnet" -e MINER_ADDRESS="t27eWDgjFYJGVXmzrXeVjnb5J3uXDM9xH9v" -e ZEBRA_RPC_PORT=18232 -p 18232:18232 zfnd/zebra:latest
```
will start a container on Testnet and bind port 18232 on your Docker host, which
@@ -44,3 +45,21 @@ is the standard Testnet RPC port. Notice that we also used a different rewards
address. It starts with the prefix `t2`, indicating that it is a Testnet
address. A Mainnet address would prevent Zebra from starting on Testnet, and
conversely, a Testnet address would prevent Zebra from starting on Mainnet.
+
+To connect to the RPC port, you will need the contents of the [cookie
+file](https://zebra.zfnd.org/user/mining.html?highlight=cookie#testing-the-setup)
+Zebra uses for authentication. By default, it is stored at
+`/home/zebra/.cache/zebra/.cookie`. You can print its contents by running
+
+```bash
+docker exec -it zebra_local cat /home/zebra/.cache/zebra/.cookie
+```
+
+If you want to avoid authentication, you can turn it off by setting
+
+```toml
+[rpc]
+enable_cookie_auth = false
+```
+
+in Zebra's config file before you start the container.
diff --git a/book/src/user/regtest.md b/book/src/user/regtest.md
index 4acb5b36194..90962f95b59 100644
--- a/book/src/user/regtest.md
+++ b/book/src/user/regtest.md
@@ -13,15 +13,15 @@ Relevant parts of the configuration file:
```toml
[mining]
miner_address = 't27eWDgjFYJGVXmzrXeVjnb5J3uXDM9xH9v'
-
+
[network]
network = "Regtest"
# This section may be omitted when testing only Canopy
[network.testnet_parameters.activation_heights]
-# Configured activation heights must be greater than or equal to 1,
+# Configured activation heights must be greater than or equal to 1,
# block height 0 is reserved for the Genesis network upgrade in Zebra
-NU5 = 1
+NU5 = 1
# This section may be omitted if a persistent Regtest chain state is desired
[state]
@@ -80,14 +80,8 @@ let block_template: GetBlockTemplate = client
.await
.expect("response should be success output with a serialized `GetBlockTemplate`");
-let network_upgrade = if block_template.height < NU5_ACTIVATION_HEIGHT {
- NetworkUpgrade::Canopy
-} else {
- NetworkUpgrade::Nu5
-};
-
let block_data = hex::encode(
- proposal_block_from_template(&block_template, TimeSource::default(), network_upgrade)?
+ proposal_block_from_template(&block_template, TimeSource::default(), Network::Mainnet)?
.zcash_serialize_to_vec()?,
);
diff --git a/book/src/user/shielded-scan-grpc-server.md b/book/src/user/shielded-scan-grpc-server.md
deleted file mode 100644
index d08bc0df49e..00000000000
--- a/book/src/user/shielded-scan-grpc-server.md
+++ /dev/null
@@ -1,146 +0,0 @@
-# Zebra Shielded Scanning gRPC Server
-
-## Get Started
-
-### Setup
-
-After setting up [Zebra Shielded Scanning](https://zebra.zfnd.org/user/shielded-scan.html), you can add a `listen-addr` argument to the scanner binary:
-
-
-```bash
-zebra-scanner --sapling-keys-to-scan '{"key":"zxviews1q0duytgcqqqqpqre26wkl45gvwwwd706xw608hucmvfalr759ejwf7qshjf5r9aa7323zulvz6plhttp5mltqcgs9t039cx2d09mgq05ts63n8u35hyv6h9nc9ctqqtue2u7cer2mqegunuulq2luhq3ywjcz35yyljewa4mgkgjzyfwh6fr6jd0dzd44ghk0nxdv2hnv4j5nxfwv24rwdmgllhe0p8568sgqt9ckt02v2kxf5ahtql6s0ltjpkckw8gtymxtxuu9gcr0swvz", "birthday_height": 419200}' --zebrad-cache-dir /media/alfredo/stuff/chain/zebra --zebra-rpc-listen-addr '127.0.0.1:8232' --listen-addr '127.0.0.1:8231'
-```
-
-Making requests to the server will also require a gRPC client, the examples here use `grpcurl`, though any gRPC client should work.
-
-[See installation instructions for `grpcurl` here](https://github.com/fullstorydev/grpcurl?tab=readme-ov-file#installation).
-
-The types can be accessed through the `zebra-grpc` crate's root `scanner` module for clients in a Rust environment, and the [`scanner.proto` file here](https://github.com/ZcashFoundation/zebra/blob/main/zebra-grpc/proto/scanner.proto) can be used to build types in other environments.
-
-### Usage
-
-To check that the gRPC server is running, try calling `scanner.Scanner/GetInfo`, for example with `grpcurl`:
-
-```bash
-grpcurl -plaintext '127.0.0.1:8231' scanner.Scanner/GetInfo
-```
-
-The response should look like:
-
-```
-{
- "minSaplingBirthdayHeight": 419200
-}
-```
-
-An example request to the `Scan` method with `grpcurl` would look like:
-
-```bash
-grpcurl -plaintext -d '{ "keys": { "key": ["sapling_extended_full_viewing_key"] } }' '127.0.0.1:8231' scanner.Scanner/Scan
-```
-
-This will start scanning for transactions in Zebra's state and in new blocks as they're validated.
-
-Or, to use the scanner gRPC server without streaming, try calling `RegisterKeys` with your Sapling extended full viewing key, waiting for the scanner to cache some results, then calling `GetResults`:
-
-```bash
-grpcurl -plaintext -d '{ "keys": { "key": ["sapling_extended_full_viewing_key"] } }' '127.0.0.1:8231' scanner.Scanner/RegisterKeys
-grpcurl -plaintext -d '{ "keys": ["sapling_extended_full_viewing_key"] }' '127.0.0.1:8231' scanner.Scanner/GetResults
-```
-
-## gRPC Reflection
-
-To see all of the provided methods with `grpcurl`, try:
-
-```bash
-grpcurl -plaintext '127.0.0.1:8231' list scanner.Scanner
-```
-
-This will list the paths to each method in the `Scanner` service:
-```
-scanner.Scanner.ClearResults
-scanner.Scanner.DeleteKeys
-scanner.Scanner.GetInfo
-scanner.Scanner.GetResults
-scanner.Scanner.RegisterKeys
-```
-
-To see the the request and response types for a method, for example the `GetResults` method, try:
-
-
-```bash
-grpcurl -plaintext '127.0.0.1:8231' describe scanner.Scanner.GetResults \
-&& grpcurl -plaintext '127.0.0.1:8231' describe scanner.GetResultsRequest \
-&& grpcurl -plaintext '127.0.0.1:8231' describe scanner.GetResultsResponse \
-&& grpcurl -plaintext '127.0.0.1:8231' describe scanner.Results \
-&& grpcurl -plaintext '127.0.0.1:8231' describe scanner.Transactions \
-&& grpcurl -plaintext '127.0.0.1:8231' describe scanner.Transaction
-```
-
-The response should be the request and response types for the `GetResults` method:
-
-```
-scanner.Scanner.GetResults is a method:
-// Get all data we have stored for the given keys.
-rpc GetResults ( .scanner.GetResultsRequest ) returns ( .scanner.GetResultsResponse );
-scanner.GetResultsRequest is a message:
-// A request for getting results for a set of keys.
-message GetResultsRequest {
- // Keys for which to get results.
- repeated string keys = 1;
-}
-scanner.GetResultsResponse is a message:
-// A set of responses for each provided key of a GetResults call.
-message GetResultsResponse {
- // Results for each key.
- map results = 1;
-}
-scanner.Results is a message:
-// A result for a single key.
-message Results {
- // A height, transaction id map
- map by_height = 1;
-}
-scanner.Transactions is a message:
-// A vector of transaction hashes
-message Transactions {
- // Transactions
- repeated Transaction transactions = 1;
-}
-scanner.Transaction is a message:
-// Transaction data
-message Transaction {
- // The transaction hash/id
- string hash = 1;
-}
-```
-
-## Methods
-
-
-
----
-#### GetInfo
-
-Returns basic information about the `zebra-scan` instance.
-
-#### RegisterKeys
-
-Starts scanning for a set of keys, with optional start heights, and caching the results.
-Cached results can later be retrieved by calling the `GetResults` or `Scan` methods.
-
-#### DeleteKeys
-
-Stops scanning transactions for a set of keys. Deletes the keys and their cached results for the keys from zebra-scan.
-
-#### GetResults
-
-Returns cached results for a set of keys.
-
-#### ClearResults
-
-Deletes any cached results for a set of keys.
-
-#### Scan
-
-Starts scanning for a set of keys and returns a stream of results.
diff --git a/book/src/user/shielded-scan.md b/book/src/user/shielded-scan.md
deleted file mode 100644
index dff3e599ed8..00000000000
--- a/book/src/user/shielded-scan.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Zebra Shielded Scanning
-
-The `zebra-scanner` binary is a standalone application that utilizes Zebra libraries to scan for transactions associated with specific Sapling viewing keys. It stores the discovered transactions and scanning progress data in a RocksDB database.
-
-For this application to function, it requires access to a Zebra node's RPC server and state cache.
-
-For now, we only support Sapling, and only store transaction IDs in the scanner results database.
-
-Ongoing development is tracked in issue [#7728](https://github.com/ZcashFoundation/zebra/issues/7728).
-
-## Important Security Warning
-
-Zebra's shielded scanning feature has known security issues. It is for experimental use only.
-
-Do not use regular or sensitive viewing keys with Zebra's experimental scanning feature. Do not use this feature on a shared machine. We suggest generating new keys for experimental use or using publicly known keys.
-
-## Build & Install
-
-Use [Zebra 1.9.0](https://github.com/ZcashFoundation/zebra/releases/tag/v1.9.0) or greater, or the `main` branch to get the latest features.
-
-You can also use Rust's `cargo` to install `zebra-scanner` from the latest release Zebra repository:
-
-```bash
-cargo install --locked --git https://github.com/ZcashFoundation/zebra zebra-scan
-```
-
-The scanner binary will be at `~/.cargo/bin/zebra-scanner`, which should be in your `PATH`.
-
-## Arguments
-
-Retrieve the binary arguments with:
-
-```bash
-zebra-scanner --help
-```
-
-## Scanning the Block Chain
-
-Before starting, ensure a `zebrad` node is running locally with the RPC endpoint open. Refer to the [lightwalletd zebrad setup](https://zebra.zfnd.org/user/lightwalletd.html#configure-zebra-for-lightwalletd) or [zebrad mining setup](https://zebra.zfnd.org/user/mining.html#configure-zebra-for-mining) for instructions.
-
-To initiate the scanning process, you need the following:
-
-- A zebrad cache state directory. This can be obtained from the running zebrad configuration file, under the `state` section in the `cache_dir` field.
-- A key to scan with, optionally including a birthday height, which specifies the starting height for the scanning process for that key.
-- A zebrad RPC endpoint address. This can be found in the running zebrad configuration file, under the `rpc` section in the `listen_addr` field.
-
-
-Sapling diversifiable/extended full viewing keys strings start with `zxviews` as
-described in
-[ZIP-32](https://zips.z.cash/zip-0032#sapling-extended-full-viewing-keys).
-
-For example, to scan the block chain with the [public ZECpages viewing
-key](https://zecpages.com/boardinfo), use:
-
-```bash
-RUST_LOG=info zebra-scanner --sapling-keys-to-scan '{"key":"zxviews1q0duytgcqqqqpqre26wkl45gvwwwd706xw608hucmvfalr759ejwf7qshjf5r9aa7323zulvz6plhttp5mltqcgs9t039cx2d09mgq05ts63n8u35hyv6h9nc9ctqqtue2u7cer2mqegunuulq2luhq3ywjcz35yyljewa4mgkgjzyfwh6fr6jd0dzd44ghk0nxdv2hnv4j5nxfwv24rwdmgllhe0p8568sgqt9ckt02v2kxf5ahtql6s0ltjpkckw8gtymxtxuu9gcr0swvz", "birthday_height": 419200}' --zebrad-cache-dir /media/alfredo/stuff/chain/zebra --zebra-rpc-listen-addr '127.0.0.1:8232'
-```
-
-- A birthday lower than the Sapling activation height defaults to Sapling activation height.
-- A birthday greater or equal than Sapling activation height will start scanning at provided height, improving scanner speed.
-
-The scanning process begins once Zebra syncs its state past the Sapling activation height. Scanning a synced state takes between 12 and 24 hours. The scanner searches for transactions containing Sapling notes with outputs decryptable by the provided viewing keys.
-
-You will see log messages in the output every 10,000 blocks scanned, similar to:
-
-```
-...
-2024-07-13T16:07:47.944309Z INFO zebra_scan::service::scan_task::scan: Scanning the blockchain for key 0, started at block 571001, now at block 580000, current tip 2556979
-2024-07-13T16:08:07.811013Z INFO zebra_scan::service::scan_task::scan: Scanning the blockchain for key 0, started at block 571001, now at block 590000, current tip 2556979
-...
-```
-
-If your Zebra instance goes down for any reason, the Zebra scanner will resume the task. Upon a new start, Zebra will display:
-
-```
-2024-07-13T16:07:17.700073Z INFO zebra_scan::storage::db: Last scanned height for key number 0 is 590000, resuming at 590001
-2024-07-13T16:07:17.706727Z INFO zebra_scan::service::scan_task::scan: got min scan height start_height=Height(590000)
-```
-
-## Displaying Scanning Results
-
-An easy way to query the results is to use the
-[Scanning Results Reader](https://github.com/ZcashFoundation/zebra/tree/main/zebra-utils#scanning-results-reader).
-
-## Querying Raw Scanning Results
-
-A more advanced way to query results is to use `ldb` tool, requires a certain level of expertise.
-
-Install `ldb`:
-
-```bash
-sudo apt install rocksdb-tools
-```
-
-Run `ldb` with the scanner database:
-
-```bash
-ldb --db="$HOME/.cache/zebra/private-scan/v1/mainnet" --secondary_path= --column_family=sapling_tx_ids --hex scan
-```
-
-Some of the output will be markers the scanner uses to keep track of progress, however, some of them will be transactions found.
-
-To lean more about how to filter the database please refer to [RocksDB Administration and Data Access Tool](https://github.com/facebook/rocksdb/wiki/Administration-and-Data-Access-Tool)
diff --git a/book/src/user/tracing.md b/book/src/user/tracing.md
index d1b984f05df..cbacddb9ad4 100644
--- a/book/src/user/tracing.md
+++ b/book/src/user/tracing.md
@@ -36,10 +36,10 @@ and the [`flamegraph`][flamegraph] runtime config option.
Compile Zebra with `--features sentry` to monitor it using [Sentry][sentry] in production.
-[tracing_section]: https://docs.rs/zebrad/latest/zebrad/components/tracing/struct.Config.html
-[filter]: https://docs.rs/zebrad/latest/zebrad/components/tracing/struct.Config.html#structfield.filter
-[flamegraph]: https://docs.rs/zebrad/latest/zebrad/components/tracing/struct.Config.html#structfield.flamegraph
+[tracing_section]: https://docs.rs/zebrad/latest/zebrad/components/tracing/struct.InnerConfig.html
+[filter]: https://docs.rs/zebrad/latest/zebrad/components/tracing/struct.InnerConfig.html#structfield.filter
+[flamegraph]: https://docs.rs/zebrad/latest/zebrad/components/tracing/struct.InnerConfig.html#structfield.flamegraph
[flamegraphs]: http://www.brendangregg.com/flamegraphs.html
[systemd_journald]: https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html
-[use_journald]: https://docs.rs/zebrad/latest/zebrad/components/tracing/struct.Config.html#structfield.use_journald
+[use_journald]: https://docs.rs/zebrad/latest/zebrad/components/tracing/struct.InnerConfig.html#structfield.use_journald
[sentry]: https://sentry.io/welcome/
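The `filter` field linked above accepts `tracing-subscriber` `EnvFilter` directives: a global level followed by comma-separated per-module overrides. A hedged example of composing such a filter string (the module targets are assumptions based on Zebra's crate names):

```shell
# An EnvFilter string: default to info, but raise verbosity for
# specific modules. The same syntax goes in the [tracing] filter field.
FILTER='info,zebrad=debug,zebra_network=trace'
echo "${FILTER}"
```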
diff --git a/book/src/user/troubleshooting.md b/book/src/user/troubleshooting.md
index 09b31e4513d..3566fecadf6 100644
--- a/book/src/user/troubleshooting.md
+++ b/book/src/user/troubleshooting.md
@@ -4,8 +4,6 @@
There are a few bugs in Zebra that we're still working on fixing:
-- [The `getpeerinfo` RPC shows current and recent outbound connections](https://github.com/ZcashFoundation/zebra/issues/7893), rather than current inbound and outbound connections.
-
- [Progress bar estimates can become extremely large](https://github.com/console-rs/indicatif/issues/556). We're waiting on a fix in the progress bar library.
- Zebra currently gossips and connects to [private IP addresses](https://en.wikipedia.org/wiki/IP_address#Private_addresses), we want to [disable private IPs but provide a config (#3117)](https://github.com/ZcashFoundation/zebra/issues/3117) in an upcoming release
diff --git a/deny.toml b/deny.toml
index 6c809cabd12..4273223c9d2 100644
--- a/deny.toml
+++ b/deny.toml
@@ -52,9 +52,6 @@ skip-tree = [
# wait for ordered-map to release a dependency fix
{ name = "ordered-map", version = "=0.4.2" },
- # wait for primitive-types to upgrade
- { name = "proc-macro-crate", version = "=0.1.5" },
-
# wait for `color-eyre` to upgrade
{ name = "owo-colors", version = "=3.5.0" },
@@ -64,33 +61,12 @@ skip-tree = [
# wait for abscissa_core to upgrade
{name = "tracing-log", version = "=0.1.4" },
- # wait for tokio-test -> tokio-stream to upgrade
- { name = "tokio-util", version = "=0.6.10" },
-
# wait for console-subscriber and tower to update hdrhistogram.
# also wait for ron to update insta, and wait for tonic update.
{ name = "base64", version = "=0.13.1" },
- # wait for elasticsearch to update base64, darling, rustc_version, serde_with
- { name = "elasticsearch", version = "=8.5.0-alpha.1" },
-
- # wait for reqwest to update base64
- { name = "base64", version = "=0.21.7" },
- { name = "sync_wrapper", version = "0.1.2" },
-
- # wait for jsonrpc-http-server to update hyper or for Zebra to replace jsonrpc (#8682)
- { name = "h2", version = "=0.3.26" },
- { name = "http", version = "=0.2.12" },
- { name = "http-body", version = "=0.4.6" },
- { name = "hyper", version = "=0.14.31" },
- { name = "hyper-rustls", version = "=0.24.2" },
-
- { name = "reqwest", version = "=0.11.27" },
- { name = "rustls", version = "=0.21.12" },
- { name = "rustls-pemfile", version = "=1.0.4" },
- { name = "rustls-webpki", version = "=0.101.7" },
- { name = "tokio-rustls", version = "=0.24.1" },
- { name = "webpki-roots", version = "=0.25.4" },
+ # wait for abscissa_core to update toml
+ { name = "toml", version = "=0.5.11" },
# wait for structopt-derive to update heck
{ name = "heck", version = "=0.3.3" },
@@ -102,21 +78,35 @@ skip-tree = [
# wait for halo2_gadgets and primitive-types to update uint
{ name = "uint", version = "=0.9.5" },
- # wait for dirs-sys to update windows-sys
- { name = "windows-sys", version = "=0.48.0" },
-
# wait for zebra to update tower
{ name = "tower", version = "=0.4.13" },
- { name = "hashbrown", version = "=0.12.3" },
-
- # Remove after release candicate period is over and the ECC crates are not patched anymore
- { name = "equihash", version = "=0.2.0" },
- { name = "f4jumble", version = "=0.1.0" },
- { name = "incrementalmerkletree", version = "=0.6.0" },
- { name = "zcash_address", version = "=0.4.0" },
- { name = "zcash_keys", version = "=0.3.0" },
- { name = "zcash_primitives", version = "=0.16.0" },
- { name = "zcash_protocol", version = "=0.2.0" }
+ { name = "hashbrown", version = "=0.14.5" },
+
+ # wait for zebra to update vergen
+ { name = "thiserror", version = "=1.0.69" },
+ { name = "thiserror-impl", version = "=1.0.69" },
+
+ # wait for all librustzcash crates to update sha2, secp256k1, and ripemd
+ { name = "sha2", version = "=0.10.9" },
+ { name = "ripemd", version = "=0.1.3" },
+
+ # wait for zcash_script to update itertools
+ { name = "itertools", version = "=0.13.0" },
+
+ # wait for abscissa_core to update synstructure
+ { name = "synstructure", version = "=0.12.6" },
+
+ # wait until zcash_client_backend updates rustix
+ { name = "rustix", version = "=0.38.44" },
+
+ # wait for reqwest to update windows-registry
+ { name = "windows-strings", version = "=0.3.1" },
+
+ # wait for sentry to update windows-core
+ { name = "windows-core", version = "=0.52.0" },
+
+ # wait for `inferno` to upgrade
+ { name = "quick-xml", version = "=0.37.5" }
]
# This section is considered when running `cargo deny check sources`.

diff --git a/docker/.env b/docker/.env
index 2d96240f23e..629d7d6b9c7 100644
--- a/docker/.env
+++ b/docker/.env
@@ -1,33 +1,74 @@
-RUST_LOG=info
-# This variable forces the use of color in the logs
-ZEBRA_FORCE_USE_COLOR=1
-LOG_COLOR=true
-
-###
-# Configuration Variables
-# These variables are used to configure the zebra node
-# Check the entrypoint.sh script for more details
-###
-
-# The config file full path used in the Dockerfile.
-ZEBRA_CONF_PATH=/etc/zebrad/zebrad.toml
-# [network]
-NETWORK=Mainnet
-ZEBRA_LISTEN_ADDR=0.0.0.0
-# [consensus]
-ZEBRA_CHECKPOINT_SYNC=true
-# [state]
-# Set this to change the default cached state directory
-ZEBRA_CACHED_STATE_DIR=/var/cache/zebrad-cache
-LIGHTWALLETD_DATA_DIR=/var/cache/lwd-cache
-# [metrics]
-METRICS_ENDPOINT_ADDR=0.0.0.0
-METRICS_ENDPOINT_PORT=9999
-# [tracing]
-TRACING_ENDPOINT_ADDR=0.0.0.0
-TRACING_ENDPOINT_PORT=3000
-# [rpc]
-RPC_LISTEN_ADDR=0.0.0.0
-# if ${RPC_PORT} is not set, it will use the default value for the current network
-RPC_PORT=8232
+# Configuration variables for running Zebra in Docker
+# Sets the path to a custom Zebra config file. If not set, Zebra will look for a config at
+# ${HOME}/.config/zebrad.toml or generate one using environment variables below.
+# ! Setting ZEBRA_CONF_PATH will make most of the following environment variables ineffective.
+#
+# ZEBRA_CONF_PATH="/path/to/your/custom/zebrad.toml"
+
+# Sets the network Zebra will run on.
+#
+# NETWORK=Mainnet
+
+# Zebra's RPC server is disabled by default. To enable it, set its port number.
+#
+# ZEBRA_RPC_PORT=8232 # Default RPC port number on Mainnet.
+# ZEBRA_RPC_PORT=18232 # Default RPC port number on Testnet.
+
+# To disable cookie authentication, set the value below to false.
+#
+# ENABLE_COOKIE_AUTH=true
+
+# Sets a custom directory for the cookie authentication file.
+#
+# ZEBRA_COOKIE_DIR="/home/zebra/.config/cookie"
+
+# Sets a custom directory for the state and network caches.
+#
+# ZEBRA_CACHE_DIR="/home/zebra/.cache/zebra"
+
+# Sets custom Cargo features. Available features are listed at
+# .
+#
+# Must be set at build time.
+#
+# FEATURES=""
+
+# Sets the listen address and port for Prometheus metrics.
+#
+# METRICS_ENDPOINT_ADDR="0.0.0.0"
+# METRICS_ENDPOINT_PORT=9999
+
+# Logging to a file is disabled by default. To enable it, uncomment the line
+# below and optionally set your own path.
+#
+# LOG_FILE="/home/zebra/.local/state/zebrad.log"
+
+# Zebra recognizes whether its logs are being written to a terminal or a file,
+# and uses colored logs for terminals and uncolored logs for files. Setting the
+# variable below to true will force colored logs even for files and setting it
+# to false will disable colors even for terminals.
+#
+# LOG_COLOR=true
+
+# To disable logging to journald, set the value to false.
+#
+# USE_JOURNALD=true
+
+# Sets the listen address and port for the tracing endpoint.
+# Only active when the 'filter-reload' feature is enabled.
+#
+# TRACING_ENDPOINT_ADDR="0.0.0.0"
+# TRACING_ENDPOINT_PORT=3000
+
+# If you are going to use Zebra as a backend for a mining pool, set your mining
+# address.
+#
+# MINER_ADDRESS="your_mining_address"
+
+# Controls the output of `env_logger`:
+# https://docs.rs/env_logger/latest/env_logger/
+#
+# Must be set at build time.
+#
+# RUST_LOG=info
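The commented defaults in the `.env` file above are typically resolved with the usual shell parameter-default pattern. A hedged sketch of that resolution (the real logic lives in `entrypoint.sh`, which is not shown in this diff and may differ):

```shell
# Resolve Docker environment variables to their documented defaults
# when the user has not set them.
NETWORK="${NETWORK:-Mainnet}"
ENABLE_COOKIE_AUTH="${ENABLE_COOKIE_AUTH:-true}"
ZEBRA_CACHE_DIR="${ZEBRA_CACHE_DIR:-/home/zebra/.cache/zebra}"

echo "network=${NETWORK} cookie_auth=${ENABLE_COOKIE_AUTH} cache=${ZEBRA_CACHE_DIR}"
```

Variables marked "must be set at build time" (`FEATURES`, `RUST_LOG`) cannot be changed this way at container start; they are baked in by `docker build`.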
diff --git a/docker/Dockerfile b/docker/Dockerfile
index c441ce44e22..dad9346f228 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -1,76 +1,58 @@
# syntax=docker/dockerfile:1
-# check=skip=UndefinedVar
+# check=skip=UndefinedVar,UserExist # We use gosu in the entrypoint instead of USER directive
# If you want to include a file in the Docker image, add it to .dockerignore.
#
-# We are using 4 stages:
-# - deps: install build dependencies and sets the needed variables
-# - tests: builds tests binaries
+# We use 4 (TODO: 5) stages:
+# - deps: installs build dependencies and sets default values
+# - tests: prepares a test image
# - release: builds release binaries
-# - runtime: runs the release binaries
+# - runtime: prepares the release image
+# - TODO: Add a `monitoring` stage
#
# We first set default values for build arguments used across the stages.
# Each stage must define the build arguments (ARGs) it uses.
-#
-# Build zebrad with these features
-#
-# Keep these argument defaults in sync with GitHub vars.RUST_PROD_FEATURES and vars.RUST_TEST_FEATURES
+
+ARG RUST_VERSION=1.85.0
+
+# Keep in sync with vars.RUST_PROD_FEATURES in GitHub
# https://github.com/ZcashFoundation/zebra/settings/variables/actions
ARG FEATURES="default-release-binaries"
-ARG TEST_FEATURES="lightwalletd-grpc-tests zebra-checkpoints"
-ARG EXPERIMENTAL_FEATURES=""
-ARG APP_HOME="/opt/zebrad"
-ARG RUST_VERSION=1.82.0
-# In this stage we download all system requirements to build the project
-#
-# It also captures all the build arguments to be used as environment variables.
-# We set defaults for the arguments, in case the build does not include this information.
+ARG UID=10001
+ARG GID=${UID}
+ARG USER="zebra"
+ARG HOME="/home/${USER}"
+ARG CARGO_HOME="${HOME}/.cargo"
+
+# This stage prepares Zebra's build deps and captures build args as env vars.
FROM rust:${RUST_VERSION}-bookworm AS deps
SHELL ["/bin/bash", "-xo", "pipefail", "-c"]
-# Set the default path for the zebrad binary
-ARG APP_HOME
-ENV APP_HOME=${APP_HOME}
-WORKDIR ${APP_HOME}
-
-# Install zebra build deps and Dockerfile deps
+# Install zebra build deps
RUN apt-get -qq update && \
apt-get -qq install -y --no-install-recommends \
- llvm \
libclang-dev \
- clang \
- ca-certificates \
protobuf-compiler \
- rocksdb-tools \
&& rm -rf /var/lib/apt/lists/* /tmp/*
-# Build arguments and variables set for tracelog levels and debug information
-#
-# We set defaults to all variables.
-ARG RUST_LOG
-ENV RUST_LOG=${RUST_LOG:-info}
+# Build arguments and variables
+ARG CARGO_INCREMENTAL
+ENV CARGO_INCREMENTAL=${CARGO_INCREMENTAL:-0}
-ARG RUST_BACKTRACE
-ENV RUST_BACKTRACE=${RUST_BACKTRACE:-1}
+ARG CARGO_HOME
+ENV CARGO_HOME=${CARGO_HOME}
-ARG RUST_LIB_BACKTRACE
-ENV RUST_LIB_BACKTRACE=${RUST_LIB_BACKTRACE:-1}
-
-ARG COLORBT_SHOW_HIDDEN
-ENV COLORBT_SHOW_HIDDEN=${COLORBT_SHOW_HIDDEN:-1}
+ARG FEATURES
+ENV FEATURES=${FEATURES}
-ARG SHORT_SHA
-# If this is not set, it must be an empty string, so Zebra can try an alternative git commit source:
+# If this is not set, it must be an empty string, so Zebra can try an
+# alternative git commit source:
# https://github.com/ZcashFoundation/zebra/blob/9ebd56092bcdfc1a09062e15a0574c94af37f389/zebrad/src/application.rs#L179-L182
+ARG SHORT_SHA
ENV SHORT_SHA=${SHORT_SHA:-}
-ENV CARGO_HOME="${APP_HOME}/.cargo/"
-
-# Copy the entrypoint script to be used on both images
-COPY ./docker/entrypoint.sh /etc/zebrad/entrypoint.sh
-
-# In this stage we build tests (without running then)
+# This stage builds tests without running them.
#
# We also download needed dependencies for tests to work, from other images.
# An entrypoint.sh is only available in this step for easier test handling with variables.
@@ -80,23 +62,33 @@ FROM deps AS tests
ARG ZEBRA_SKIP_IPV6_TESTS
ENV ZEBRA_SKIP_IPV6_TESTS=${ZEBRA_SKIP_IPV6_TESTS:-1}
-# Use ENTRYPOINT_FEATURES to override the specific features used to run tests in entrypoint.sh,
-# separately from the test and production image builds.
-ARG FEATURES
-ARG TEST_FEATURES
-ARG EXPERIMENTAL_FEATURES
-# TODO: add empty $EXPERIMENTAL_FEATURES when we can avoid adding an extra space to the end of the string
-ARG ENTRYPOINT_FEATURES="${FEATURES} ${TEST_FEATURES}"
+# This environment setup is almost identical to the `runtime` target so that the
+# `tests` target differs minimally. In fact, a subset of this setup is used for
+# the `runtime` target.
+ARG UID
+ENV UID=${UID}
+ARG GID
+ENV GID=${GID}
+ARG USER
+ENV USER=${USER}
+ARG HOME
+ENV HOME=${HOME}
-# Build Zebra test binaries, but don't run them
+RUN addgroup --quiet --gid ${GID} ${USER} && \
+ adduser --quiet --gid ${GID} --uid ${UID} --home ${HOME} ${USER} --disabled-password --gecos ""
+
+# Set the working directory for the build.
+WORKDIR ${HOME}
+# Build Zebra test binaries, but don't run them
+#
# Leverage a cache mount to /usr/local/cargo/registry/
# for downloaded dependencies, a cache mount to /usr/local/cargo/git/db
-# for git repository dependencies, and a cache mount to ${APP_HOME}/target/ for
+# for git repository dependencies, and a cache mount to ${HOME}/target/ for
# compiled dependencies which will speed up subsequent builds.
# Leverage a bind mount to each crate directory to avoid having to copy the
# source code into the container. Once built, copy the executable to an
-# output directory before the cache mounted ${APP_HOME}/target/ is unmounted.
+# output directory before the cache mounted ${HOME}/target/ is unmounted.
RUN --mount=type=bind,source=zebrad,target=zebrad \
--mount=type=bind,source=zebra-chain,target=zebra-chain \
--mount=type=bind,source=zebra-network,target=zebra-network \
@@ -107,50 +99,53 @@ RUN --mount=type=bind,source=zebrad,target=zebrad \
--mount=type=bind,source=zebra-node-services,target=zebra-node-services \
--mount=type=bind,source=zebra-test,target=zebra-test \
--mount=type=bind,source=zebra-utils,target=zebra-utils \
- --mount=type=bind,source=zebra-scan,target=zebra-scan \
- --mount=type=bind,source=zebra-grpc,target=zebra-grpc \
--mount=type=bind,source=tower-batch-control,target=tower-batch-control \
--mount=type=bind,source=tower-fallback,target=tower-fallback \
--mount=type=bind,source=Cargo.toml,target=Cargo.toml \
--mount=type=bind,source=Cargo.lock,target=Cargo.lock \
- --mount=type=cache,target=${APP_HOME}/target/ \
+ --mount=type=cache,target=${HOME}/target/ \
--mount=type=cache,target=/usr/local/cargo/git/db \
--mount=type=cache,target=/usr/local/cargo/registry/ \
-cargo test --locked --release --features "${ENTRYPOINT_FEATURES}" --workspace --no-run && \
-cp ${APP_HOME}/target/release/zebrad /usr/local/bin && \
-cp ${APP_HOME}/target/release/zebra-checkpoints /usr/local/bin
+ cargo test --locked --release --workspace --no-run \
+ --features "${FEATURES} zebra-checkpoints" && \
+ cp ${HOME}/target/release/zebrad /usr/local/bin && \
+ cp ${HOME}/target/release/zebra-checkpoints /usr/local/bin
# Copy the lightwalletd binary and source files to be able to run tests
-COPY --from=electriccoinco/lightwalletd:latest /usr/local/bin/lightwalletd /usr/local/bin/
-COPY ./ ./
+COPY --from=electriccoinco/lightwalletd:v0.4.17 /usr/local/bin/lightwalletd /usr/local/bin/
+
+# Copy the gosu binary to be able to run the entrypoint as non-root user
+# and allow to change permissions for mounted cache directories
+COPY --from=tianon/gosu:bookworm /gosu /usr/local/bin/
-# Entrypoint environment variables
-ENV ENTRYPOINT_FEATURES=${ENTRYPOINT_FEATURES}
-# We repeat the ARGs here, so they are available in the entrypoint.sh script for $RUN_ALL_EXPERIMENTAL_TESTS
-ARG EXPERIMENTAL_FEATURES="journald prometheus filter-reload"
-ENV ENTRYPOINT_FEATURES_EXPERIMENTAL="${ENTRYPOINT_FEATURES} ${EXPERIMENTAL_FEATURES}"
+# Since the build ran as root, fix ownership of the home and cargo home
+# dirs so the unprivileged user can access them.
+RUN chown -R ${UID}:${GID} "${HOME}" && \
+ chown -R ${UID}:${GID} "${CARGO_HOME}"
-# By default, runs the entrypoint tests specified by the environmental variables (if any are set)
-ENTRYPOINT [ "/etc/zebrad/entrypoint.sh" ]
+COPY --chown=${UID}:${GID} ./ ${HOME}
+COPY --chown=${UID}:${GID} ./docker/entrypoint.sh /usr/local/bin/entrypoint.sh
-# In this stage we build a release (generate the zebrad binary)
+ENTRYPOINT [ "entrypoint.sh", "test" ]
+CMD [ "cargo", "test" ]
+
+# This stage builds the zebrad release binary.
#
-# This step also adds `cache mounts` as this stage is completely independent from the
-# `test` stage. This step is a dependency for the `runtime` stage, which uses the resulting
-# zebrad binary from this step.
+# It also adds `cache mounts` as this stage is completely independent from the
+# `test` stage. The resulting zebrad binary is used in the `runtime` stage.
FROM deps AS release
-ARG FEATURES
+# Set the working directory for the build.
+ARG HOME
+WORKDIR ${HOME}
RUN --mount=type=bind,source=tower-batch-control,target=tower-batch-control \
--mount=type=bind,source=tower-fallback,target=tower-fallback \
--mount=type=bind,source=zebra-chain,target=zebra-chain \
--mount=type=bind,source=zebra-consensus,target=zebra-consensus \
- --mount=type=bind,source=zebra-grpc,target=zebra-grpc \
--mount=type=bind,source=zebra-network,target=zebra-network \
--mount=type=bind,source=zebra-node-services,target=zebra-node-services \
--mount=type=bind,source=zebra-rpc,target=zebra-rpc \
- --mount=type=bind,source=zebra-scan,target=zebra-scan \
--mount=type=bind,source=zebra-script,target=zebra-script \
--mount=type=bind,source=zebra-state,target=zebra-state \
--mount=type=bind,source=zebra-test,target=zebra-test \
@@ -158,71 +153,67 @@ RUN --mount=type=bind,source=tower-batch-control,target=tower-batch-control \
--mount=type=bind,source=zebrad,target=zebrad \
--mount=type=bind,source=Cargo.toml,target=Cargo.toml \
--mount=type=bind,source=Cargo.lock,target=Cargo.lock \
- --mount=type=cache,target=${APP_HOME}/target/ \
+ --mount=type=cache,target=${HOME}/target/ \
--mount=type=cache,target=/usr/local/cargo/git/db \
--mount=type=cache,target=/usr/local/cargo/registry/ \
-cargo build --locked --release --features "${FEATURES}" --package zebrad --bin zebrad && \
-cp ${APP_HOME}/target/release/zebrad /usr/local/bin
+ cargo build --locked --release --features "${FEATURES}" --package zebrad --bin zebrad && \
+ cp ${HOME}/target/release/zebrad /usr/local/bin
-# This stage is only used when deploying nodes or when only the resulting zebrad binary is needed
-#
-# To save space, this step starts from scratch using debian, and only adds the resulting
-# binary from the `release` stage
+# This stage starts from scratch using Debian and copies the built zebrad binary
+# from the `release` stage along with other binaries and files.
FROM debian:bookworm-slim AS runtime
-# Set the default path for the zebrad binary
-ARG APP_HOME
-ENV APP_HOME=${APP_HOME}
-WORKDIR ${APP_HOME}
-
-RUN apt-get update && \
- apt-get install -y --no-install-recommends \
- ca-certificates \
- curl \
- rocksdb-tools \
- gosu \
- && rm -rf /var/lib/apt/lists/* /tmp/*
+ARG FEATURES
+ENV FEATURES=${FEATURES}
-# Create a non-privileged user that the app will run under.
-# Running as root inside the container is running as root in the Docker host
-# If an attacker manages to break out of the container, they will have root access to the host
-# See https://docs.docker.com/go/dockerfile-user-best-practices/
-ARG USER=zebra
-ENV USER=${USER}
-ARG UID=10001
+# Create a non-privileged user for running `zebrad`.
+#
+# We use a high UID/GID (10001) to avoid overlap with host system users.
+# This reduces the risk of container user namespace conflicts with host accounts,
+# which could potentially lead to privilege escalation if a container escape occurs.
+#
+# We do not use the `--system` flag for user creation since:
+# 1. System user ranges (100-999) can collide with host system users
+# (see: https://github.com/nginxinc/docker-nginx/issues/490)
+# 2. There's no value added and warning messages can be raised at build time
+# (see: https://github.com/dotnet/dotnet-docker/issues/4624)
+#
+# The high UID/GID values provide an additional security boundary in containers
+# where user namespaces are shared with the host.
+ARG UID
ENV UID=${UID}
-ARG GID=10001
+ARG GID
ENV GID=${GID}
+ARG USER
+ENV USER=${USER}
+ARG HOME
+ENV HOME=${HOME}
-RUN addgroup --system --gid ${GID} ${USER} \
- && adduser \
- --system \
- --disabled-login \
- --shell /bin/bash \
- --home ${APP_HOME} \
- --uid "${UID}" \
- --gid "${GID}" \
- ${USER}
-
-# Config settings for zebrad
-ARG FEATURES
-ENV FEATURES=${FEATURES}
-
-# Path and name of the config file
-# These are set to a default value when not defined in the environment
-ENV ZEBRA_CONF_DIR=${ZEBRA_CONF_DIR:-/etc/zebrad}
-ENV ZEBRA_CONF_FILE=${ZEBRA_CONF_FILE:-zebrad.toml}
+RUN addgroup --quiet --gid ${GID} ${USER} && \
+ adduser --quiet --gid ${GID} --uid ${UID} --home ${HOME} ${USER} --disabled-password --gecos ""
-RUN mkdir -p ${ZEBRA_CONF_DIR} && chown ${UID}:${UID} ${ZEBRA_CONF_DIR} \
- && chown ${UID}:${UID} ${APP_HOME}
+WORKDIR ${HOME}
+RUN chown -R ${UID}:${GID} ${HOME}
-COPY --from=release /usr/local/bin/zebrad /usr/local/bin
-COPY --from=release /etc/zebrad/entrypoint.sh /etc/zebrad
+# We're explicitly NOT using the USER directive here.
+# Instead, we run as root initially and use gosu in the entrypoint.sh
+# to step down to the non-privileged user. This allows us to change permissions
+# on mounted volumes before running the application as a non-root user.
+# User with UID=${UID} is created above and used via gosu in entrypoint.sh.
-# Expose configured ports
-EXPOSE 8233 18233
+# Copy the gosu binary to be able to run the entrypoint as non-root user
+COPY --from=tianon/gosu:bookworm /gosu /usr/local/bin/
+COPY --from=release /usr/local/bin/zebrad /usr/local/bin/
+COPY --chown=${UID}:${GID} ./docker/entrypoint.sh /usr/local/bin/entrypoint.sh
-# Update the config file based on the Docker run variables,
-# and launch zebrad with it
-ENTRYPOINT [ "/etc/zebrad/entrypoint.sh" ]
+ENTRYPOINT [ "entrypoint.sh" ]
CMD ["zebrad"]
+
+# TODO: Add a `monitoring` stage
+#
+# This stage will be based on `runtime`, and initially:
+#
+# - run `zebrad` on Testnet
+# - with mining enabled using S-nomp and `nheqminer`.
+#
+# We can add further functionality to this stage for further purposes.
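The UID/GID rationale in the Dockerfile comments above can be illustrated with a short shell sketch. `is_system_uid` is a hypothetical helper (not part of this repo), and 100-999 is Debian's default system-user range from `adduser.conf`:

```shell
#!/usr/bin/env bash
# Hypothetical helper illustrating why UID/GID 10001 is chosen: it falls
# outside Debian's default system-user range (100-999) and well above the
# first regular-user UID (1000), so it cannot collide with host system users.
is_system_uid() {
  local uid="$1"
  [[ "${uid}" -ge 100 && "${uid}" -le 999 ]]
}

for uid in 999 1000 10001; do
  if is_system_uid "${uid}"; then
    echo "${uid}: system range"
  else
    echo "${uid}: outside system range"
  fi
done
```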
diff --git a/docker/default-zebra-config.toml b/docker/default-zebra-config.toml
new file mode 100644
index 00000000000..5b5e5b6ebd1
--- /dev/null
+++ b/docker/default-zebra-config.toml
@@ -0,0 +1,65 @@
+# Default configuration file for running Zebra in Docker.
+#
+# This file is tailored for Zebra running in Docker. Do not use it with Zebra
+# running directly on your localhost as some fields are adjusted specifically
+# for Docker.
+#
+# You can use this file as a starting point for custom configuration. If you
+# don't specify a field, Zebra will use its default value.
+#
+# The config format, including a complete list of sections and fields, is
+# documented here:
+# https://docs.rs/zebrad/latest/zebrad/config/struct.ZebradConfig.html
+
+[network]
+network = "Mainnet"
+listen_addr = "0.0.0.0"
+cache_dir = "/home/zebra/.cache/zebra"
+
+[rpc]
+# The RPC server is disabled by default. To enable it, uncomment one of the
+# lines below and alternatively set your own port.
+
+# listen_addr = "0.0.0.0:8232" # Mainnet
+# listen_addr = "0.0.0.0:18232" # Testnet
+
+cookie_dir = "/home/zebra/.cache/zebra"
+
+# To disable cookie authentication, uncomment the line below and set the value
+# to false.
+
+# enable_cookie_auth = true
+
+[state]
+cache_dir = "/home/zebra/.cache/zebra"
+
+[tracing]
+# Zebra recognizes whether its logs are being written to a terminal or a file,
+# and uses colored logs for terminals and uncolored logs for files. To force
+# colors even for files, uncomment the line below. To disable colors, set
+# `use_color` to false.
+
+# force_use_color = true
+use_color = true
+
+# Logging to a file is disabled by default. To enable it, uncomment the line
+# below and optionally set your own path.
+
+# log_file = "/home/zebra/.local/state/zebrad.log"
+
+# Sending tracing events to systemd-journald is disabled by default. To enable
+# it, uncomment the line below.
+
+# use_journald = true
+
+[metrics]
+# Metrics via Prometheus are disabled by default. To enable them, uncomment
+# the line below and optionally set your own port.
+
+# endpoint_addr = "0.0.0.0:9999" # Prometheus
+
+[mining]
+# If you are going to use Zebra as a backend for a mining pool, set your mining
+# address.
+
+# miner_address = "your_mining_address"
diff --git a/docker/docker-compose.grafana.yml b/docker/docker-compose.grafana.yml
new file mode 100644
index 00000000000..2c7b6b4d7ab
--- /dev/null
+++ b/docker/docker-compose.grafana.yml
@@ -0,0 +1,52 @@
+services:
+ zebra:
+ container_name: zebra
+ build:
+ context: ../
+ dockerfile: docker/Dockerfile
+ target: runtime
+ args:
+ - FEATURES=prometheus
+ volumes:
+ - zebrad-cache:/home/zebra/.cache/zebra
+ tty: true
+ environment:
+ - FEATURES=prometheus
+ network_mode: "host"
+ ports:
+ - 9999:9999
+
+ prometheus:
+ container_name: prometheus
+ image: prom/prometheus
+ volumes:
+ - prometheus-cache:/prometheus
+ configs:
+ - source: prometheus-config
+ target: /etc/prometheus/prometheus.yml
+ network_mode: "host"
+ ports:
+ - 9090:9090
+
+ grafana:
+ container_name: grafana
+ image: grafana/grafana
+ volumes:
+ - grafana-cache:/var/lib/grafana
+ network_mode: "host"
+ ports:
+ - 3000:3000
+
+volumes:
+ zebrad-cache:
+ driver: local
+
+ grafana-cache:
+ driver: local
+
+ prometheus-cache:
+ driver: local
+
+configs:
+ prometheus-config:
+ file: ../prometheus.yaml
diff --git a/docker/docker-compose.lwd.yml b/docker/docker-compose.lwd.yml
index 7d8c56b1855..456e7602d97 100644
--- a/docker/docker-compose.lwd.yml
+++ b/docker/docker-compose.lwd.yml
@@ -1,15 +1,22 @@
-version: "3.8"
-
services:
zebra:
+ container_name: zebra
+ image: zfnd/zebra
+ platform: linux/amd64
+ restart: unless-stopped
+ deploy:
+ resources:
+ reservations:
+ cpus: "4"
+ memory: 16G
+ volumes:
+ - zebrad-cache:/home/zebra/.cache/zebra
+ tty: true
+ environment:
+ - ZEBRA_RPC_PORT=8232
+ - ENABLE_COOKIE_AUTH=false
ports:
- - "8232:8232" # Opens an RPC endpoint (for lightwalletd and mining)
- healthcheck:
- start_period: 1m
- interval: 15s
- timeout: 10s
- retries: 3
- test: ["CMD-SHELL", "curl --data-binary '{\"id\":\"curltest\", \"method\": \"getinfo\"}' -H 'content-type: application/json' 127.0.0.1:8232 || exit 1"]
+ - "8232:8232"
lightwalletd:
image: electriccoinco/lightwalletd
@@ -29,13 +36,11 @@ services:
configs:
- source: lwd_config
target: /etc/lightwalletd/zcash.conf
- uid: '2002' # Golang's container default user uid
- gid: '2002' # Golang's container default group gid
- mode: 0440
volumes:
- - litewalletd-data:/var/lib/lightwalletd/db
- #! This setup with --no-tls-very-insecure is only for testing purposes
- #! For production environments follow the guidelines here: https://github.com/zcash/lightwalletd#production-usage
+ - lwd-cache:/var/lib/lightwalletd/db
+ #! This setup with `--no-tls-very-insecure` is only for testing purposes.
+ #! For production environments, follow the guidelines here:
+ #! https://github.com/zcash/lightwalletd#production-usage
command: >
--no-tls-very-insecure
--grpc-bind-addr=0.0.0.0:9067
@@ -50,10 +55,11 @@ services:
configs:
lwd_config:
- # Change the following line to point to a zcash.conf on your host machine
- # to allow for easy configuration changes without rebuilding the image
- file: ./zcash-lightwalletd/zcash.conf
+ file: ./zcash.conf
volumes:
- litewalletd-data:
+ zebrad-cache:
+ driver: local
+
+ lwd-cache:
driver: local
diff --git a/docker/docker-compose.test.yml b/docker/docker-compose.test.yml
index fac94e3f4db..d3659612a82 100644
--- a/docker/docker-compose.test.yml
+++ b/docker/docker-compose.test.yml
@@ -1,30 +1,13 @@
-version: "3.8"
-
services:
zebra:
+ container_name: zebra
build:
context: ../
dockerfile: docker/Dockerfile
target: tests
- restart: unless-stopped
- deploy:
- resources:
- reservations:
- cpus: "4"
- memory: 16G
- # Change this to the command you want to run, respecting the entrypoint.sh
- # For example, to run the tests, use the following command:
- # command: ["cargo", "test", "--locked", "--release", "--features", "${TEST_FEATURES}", "--package", "zebrad", "--test", "acceptance", "--", "--nocapture", "--include-ignored", "sync_large_checkpoints_"]
volumes:
- - zebrad-cache:/var/cache/zebrad-cache
- - lwd-cache:/var/cache/lwd-cache
- ports:
- # Zebra uses the following inbound and outbound TCP ports
- - "8232:8232" # Opens an RPC endpoint (for wallet storing and mining)
- - "8233:8233" # Mainnet Network (for peer connections)
- - "18233:18233" # Testnet Network
- # - "9999:9999" # Metrics
- # - "3000:3000" # Tracing
+ - zebrad-cache:/home/zebra/.cache/zebra
+ - lwd-cache:/home/zebra/.cache/lwd
env_file:
- test.env
diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
index 22359488de1..b561312fe27 100644
--- a/docker/docker-compose.yml
+++ b/docker/docker-compose.yml
@@ -1,13 +1,8 @@
-version: "3.8"
-
services:
zebra:
+ container_name: zebra
image: zfnd/zebra
platform: linux/amd64
- build:
- context: ../
- dockerfile: docker/Dockerfile
- target: runtime
restart: unless-stopped
deploy:
resources:
@@ -16,33 +11,32 @@ services:
memory: 16G
env_file:
- .env
- logging:
- options:
- max-size: "10m"
- max-file: "5"
- #! Uncomment the `configs` mapping below to use the `zebrad.toml` config file from the host machine
- #! NOTE: This will override the zebrad.toml in the image and make some variables irrelevant
- # configs:
- # - source: zebra_config
- # target: /etc/zebrad/zebrad.toml
- # uid: '2001' # Rust's container default user uid
- # gid: '2001' # Rust's container default group gid
- # mode: 0440
volumes:
- - zebrad-cache:/var/cache/zebrad-cache
- ports:
- # Zebra uses the following default inbound and outbound TCP ports
- - "8233:8233" # Mainnet Network (for peer connections)
- # - "8232:8232" # Opens an RPC endpoint (for wallet storing and mining)
- # - "18233:18233" # Testnet Network
- # - "9999:9999" # Metrics
- # - "3000:3000" # Tracing
+ - zebrad-cache:/home/zebra/.cache/zebra
+ # Having `tty` set to true makes Zebra use colored logs.
+ tty: true
+ #! Comment out the `configs` mapping below to use the environment variables
+ #! in the `.env` file instead of the default configuration file.
+ configs:
+ - source: zebra-config
+ target: /home/zebra/.config/zebrad.toml
+
+ # Uncomment the `ports` mapping below to map ports between the container and
+ # host.
+ #
+ # ports:
+ # - "8232:8232" # RPC endpoint on Mainnet
+ # - "18232:18232" # RPC endpoint on Testnet
+ # - "8233:8233" # peer connections on Mainnet
+ # - "18233:18233" # peer connections on Testnet
+ # - "9999:9999" # Metrics
+ # - "3000:3000" # Tracing
configs:
- zebra_config:
- # Change the following line to point to a zebrad.toml on your host machine
- # to allow for easy configuration changes without rebuilding the image
- file: ../zebrad/tests/common/configs/v1.0.0-rc.2.toml
+ zebra-config:
+ #! To customize the default configuration, edit this file before starting
+ #! the container.
+ file: ./default-zebra-config.toml
volumes:
zebrad-cache:
diff --git a/docker/entrypoint.sh b/docker/entrypoint.sh
index d71be57805d..3fbee7203c8 100755
--- a/docker/entrypoint.sh
+++ b/docker/entrypoint.sh
@@ -1,365 +1,344 @@
#!/usr/bin/env bash
-# This script serves as the entrypoint for the Zebra Docker container.
+# Entrypoint for running Zebra in Docker.
#
-# Description:
-# This script serves as the primary entrypoint for the Docker container. Its main responsibilities include:
-# 1. Environment Setup: Prepares the environment by setting various flags and parameters.
-# 2. Configuration Management: Dynamically generates the `zebrad.toml` configuration file based on environment variables, ensuring the node starts with the desired settings.
-# 3. Test Execution: Can run a series of tests to validate functionality based on specified environment variables.
-# 4. Node Startup: Starts the node, allowing it to begin its operations.
+# The main script logic is at the bottom.
#
+# ## Notes
+#
+# - `$ZEBRA_CONF_PATH` can point to an existing Zebra config file. If it is not
+#   set, the script looks for a default config at ${HOME}/.config/zebrad.toml,
+#   or generates one from environment variables.
-# Exit if a command fails
-set -e
-# Exit if any command in a pipeline fails
-set -o pipefail
-
-####
-# General Variables
-# These variables are used to run the Zebra node.
-####
-
-# Path and name of the config file. These two have defaults set in the Dockerfile.
-: "${ZEBRA_CONF_DIR:=}"
-: "${ZEBRA_CONF_FILE:=}"
-# [network]
-: "${NETWORK:=Mainnet}"
-: "${ZEBRA_LISTEN_ADDR:=0.0.0.0}"
-# [consensus]
-: "${ZEBRA_CHECKPOINT_SYNC:=true}"
-# [state]
-# Set this to change the default cached state directory
-: "${ZEBRA_CACHED_STATE_DIR:=/var/cache/zebrad-cache}"
-: "${LIGHTWALLETD_DATA_DIR:=/var/cache/lwd-cache}"
-# [metrics]
-: "${METRICS_ENDPOINT_ADDR:=0.0.0.0}"
-: "${METRICS_ENDPOINT_PORT:=9999}"
-# [tracing]
-: "${LOG_COLOR:=false}"
-: "${TRACING_ENDPOINT_ADDR:=0.0.0.0}"
-: "${TRACING_ENDPOINT_PORT:=3000}"
-# [rpc]
-: "${RPC_LISTEN_ADDR:=0.0.0.0}"
-# if ${RPC_PORT} is not set, use the default value for the current network
-if [[ -z "${RPC_PORT}" ]]; then
- if [[ "${NETWORK}" = "Mainnet" ]]; then
- : "${RPC_PORT:=8232}"
- elif [[ "${NETWORK}" = "Testnet" ]]; then
- : "${RPC_PORT:=18232}"
- fi
-fi
+set -eo pipefail
-####
-# Test Variables
-# These variables are used to run tests in the Dockerfile.
-####
-
-: "${RUN_ALL_TESTS:=}"
-: "${RUN_ALL_EXPERIMENTAL_TESTS:=}"
-: "${TEST_FAKE_ACTIVATION_HEIGHTS:=}"
-: "${TEST_ZEBRA_EMPTY_SYNC:=}"
-: "${ZEBRA_TEST_LIGHTWALLETD:=}"
-: "${FULL_SYNC_MAINNET_TIMEOUT_MINUTES:=}"
-: "${FULL_SYNC_TESTNET_TIMEOUT_MINUTES:=}"
-: "${TEST_DISK_REBUILD:=}"
-: "${TEST_UPDATE_SYNC:=}"
-: "${TEST_CHECKPOINT_SYNC:=}"
-: "${GENERATE_CHECKPOINTS_MAINNET:=}"
-: "${GENERATE_CHECKPOINTS_TESTNET:=}"
-: "${TEST_LWD_RPC_CALL:=}"
-: "${TEST_LWD_FULL_SYNC:=}"
-: "${TEST_LWD_UPDATE_SYNC:=}"
-: "${TEST_LWD_GRPC:=}"
-: "${TEST_LWD_TRANSACTIONS:=}"
-: "${TEST_GET_BLOCK_TEMPLATE:=}"
-: "${TEST_SUBMIT_BLOCK:=}"
-: "${TEST_SCAN_START_WHERE_LEFT:=}"
-: "${ENTRYPOINT_FEATURES:=}"
-: "${TEST_SCAN_TASK_COMMANDS:=}"
-
-# Configuration file path
-if [[ -n "${ZEBRA_CONF_DIR}" ]] && [[ -n "${ZEBRA_CONF_FILE}" ]] && [[ -z "${ZEBRA_CONF_PATH}" ]]; then
- ZEBRA_CONF_PATH="${ZEBRA_CONF_DIR}/${ZEBRA_CONF_FILE}"
-fi
+# These are the default cache directories for Zebra and lightwalletd.
+#
+# They default to `${HOME}/.cache/zebra` and `${HOME}/.cache/lwd`
+# respectively, but can be overridden by setting the `ZEBRA_CACHE_DIR`,
+# `LWD_CACHE_DIR`, and `ZEBRA_COOKIE_DIR` environment variables.
+: "${ZEBRA_CACHE_DIR:=${HOME}/.cache/zebra}"
+: "${LWD_CACHE_DIR:=${HOME}/.cache/lwd}"
+: "${ZEBRA_COOKIE_DIR:=${HOME}/.cache/zebra}"
+
+# Use gosu to drop privileges and execute the given command as the specified UID:GID
+exec_as_user() {
+ local user
+ user="$(id -u)"
+ if [[ ${user} == '0' ]]; then
+ exec gosu "${UID}:${GID}" "$@"
+ else
+ exec "$@"
+ fi
+}
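A minimal sketch of the branch that `exec_as_user` takes, with `gosu` mocked by an `echo` so the logic runs outside a container. `drop_privileges`, `ZEBRA_UID`, and `ZEBRA_GID` are illustrative names, not part of the entrypoint:

```shell
#!/usr/bin/env bash
# Sketch of exec_as_user's decision: when running as root, step down to an
# unprivileged UID:GID via gosu; otherwise run the command directly. gosu is
# replaced with echo here; the real entrypoint uses exec to replace the shell.
drop_privileges() {
  local current_uid
  current_uid="$(id -u)"
  if [[ "${current_uid}" == "0" ]]; then
    echo "gosu ${ZEBRA_UID:-10001}:${ZEBRA_GID:-10001} $*"
  else
    echo "$*"
  fi
}

drop_privileges zebrad start
```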
-# Populate `zebrad.toml` before starting zebrad, using the environmental
-# variables set by the Dockerfile or the user. If the user has already created a config, don't replace it.
+# Generates the Zebra config file from environment variables.
#
-# We disable most ports by default, so the default config is secure.
-# Users have to opt-in to additional functionality by setting environmental variables.
-if [[ -n "${ZEBRA_CONF_PATH}" ]] && [[ ! -f "${ZEBRA_CONF_PATH}" ]] && [[ -z "${ENTRYPOINT_FEATURES}" ]]; then
- # Create the conf path and file
- (mkdir -p "$(dirname "${ZEBRA_CONF_PATH}")" && touch "${ZEBRA_CONF_PATH}") || { echo "Error creating file ${ZEBRA_CONF_PATH}"; exit 1; }
- # Populate the conf file
- cat <<EOF > "${ZEBRA_CONF_PATH}"
+# This function generates a new config file from scratch at ZEBRA_CONF_PATH
+# using the provided environment variables.
+#
+# It creates a complete configuration with network settings, state, RPC,
+# metrics, tracing, and mining sections based on environment variables.
+prepare_conf_file() {
+ # Base configuration
+ cat >"${ZEBRA_CONF_PATH}" <<EOF
+[network]
+network = "${NETWORK:=Mainnet}"
+listen_addr = "${ZEBRA_LISTEN_ADDR:=0.0.0.0}"
+cache_dir = "${ZEBRA_CACHE_DIR}"
+
+[state]
+cache_dir = "${ZEBRA_CACHE_DIR}"
+
- if [[ " ${FEATURES} " =~ " prometheus " ]]; then # spaces are important here to avoid partial matches
- cat <<EOF >> "${ZEBRA_CONF_PATH}"
-[metrics]
-endpoint_addr = "${METRICS_ENDPOINT_ADDR}:${METRICS_ENDPOINT_PORT}"
-EOF
- fi
+$( [[ -n ${ZEBRA_RPC_PORT} ]] && cat <<-SUB_EOF
- if [[ -n "${RPC_PORT}" ]]; then
- cat <<EOF >> "${ZEBRA_CONF_PATH}"
[rpc]
-listen_addr = "${RPC_LISTEN_ADDR}:${RPC_PORT}"
-EOF
- fi
+listen_addr = "${RPC_LISTEN_ADDR:=0.0.0.0}:${ZEBRA_RPC_PORT}"
+enable_cookie_auth = ${ENABLE_COOKIE_AUTH:=true}
+$( [[ -n ${ZEBRA_COOKIE_DIR} ]] && echo "cookie_dir = \"${ZEBRA_COOKIE_DIR}\"" )
+SUB_EOF
+)
+
+$( ( ! [[ " ${FEATURES} " =~ " prometheus " ]] ) && cat <<-SUB_EOF
+
+[metrics]
+# endpoint_addr = "${METRICS_ENDPOINT_ADDR:=0.0.0.0}:${METRICS_ENDPOINT_PORT:=9999}"
+SUB_EOF
+)
+
+$( [[ " ${FEATURES} " =~ " prometheus " ]] && cat <<-SUB_EOF
+
+[metrics]
+endpoint_addr = "${METRICS_ENDPOINT_ADDR:=0.0.0.0}:${METRICS_ENDPOINT_PORT:=9999}"
+SUB_EOF
+)
+
+$( [[ -n ${LOG_FILE} || -n ${LOG_COLOR} || -n ${TRACING_ENDPOINT_ADDR} || -n ${USE_JOURNALD} ]] && cat <<-SUB_EOF
- if [[ -n "${LOG_FILE}" ]] || [[ -n "${LOG_COLOR}" ]] || [[ -n "${TRACING_ENDPOINT_ADDR}" ]]; then
- cat <<EOF >> "${ZEBRA_CONF_PATH}"
[tracing]
-EOF
- if [[ " ${FEATURES} " =~ " filter-reload " ]]; then # spaces are important here to avoid partial matches
- cat <<EOF >> "${ZEBRA_CONF_PATH}"
-endpoint_addr = "${TRACING_ENDPOINT_ADDR}:${TRACING_ENDPOINT_PORT}"
-EOF
- fi
- # Set this to log to a file, if not set, logs to standard output
- if [[ -n "${LOG_FILE}" ]]; then
- mkdir -p "$(dirname "${LOG_FILE}")"
- cat <<EOF >> "${ZEBRA_CONF_PATH}"
-log_file = "${LOG_FILE}"
-EOF
- fi
- # Zebra automatically detects if it is attached to a terminal, and uses colored output.
- # Set this to 'true' to force using color even if the output is not a terminal.
- # Set this to 'false' to disable using color even if the output is a terminal.
- if [[ "${LOG_COLOR}" = "true" ]]; then
- cat <<EOF >> "${ZEBRA_CONF_PATH}"
-force_use_color = true
-EOF
- elif [[ "${LOG_COLOR}" = "false" ]]; then
- cat <<EOF >> "${ZEBRA_CONF_PATH}"
-use_color = false
-EOF
- fi
- fi
+$( [[ -n ${USE_JOURNALD} ]] && echo "use_journald = ${USE_JOURNALD}" )
+$( [[ " ${FEATURES} " =~ " filter-reload " ]] && echo "endpoint_addr = \"${TRACING_ENDPOINT_ADDR:=0.0.0.0}:${TRACING_ENDPOINT_PORT:=3000}\"" )
+$( [[ -n ${LOG_FILE} ]] && echo "log_file = \"${LOG_FILE}\"" )
+$( [[ ${LOG_COLOR} == "true" ]] && echo "force_use_color = true" )
+$( [[ ${LOG_COLOR} == "false" ]] && echo "use_color = false" )
+SUB_EOF
+)
+
+$( [[ -n ${MINER_ADDRESS} ]] && cat <<-SUB_EOF
- if [[ -n "${MINER_ADDRESS}" ]]; then
- cat <<EOF >> "${ZEBRA_CONF_PATH}"
[mining]
miner_address = "${MINER_ADDRESS}"
+SUB_EOF
+)
EOF
- fi
-fi
-if [[ -n "${ZEBRA_CONF_PATH}" ]] && [[ -z "${ENTRYPOINT_FEATURES}" ]]; then
- # Print the config file
- echo "Using zebrad.toml:"
- cat "${ZEBRA_CONF_PATH}"
-fi
+# Ensure the config file itself has the correct ownership
+#
+# This is safe in this context because prepare_conf_file is called only when
+# ZEBRA_CONF_PATH is not set, and there's no file mounted at that path.
+chown "${UID}:${GID}" "${ZEBRA_CONF_PATH}" || exit_error "Failed to secure config file: ${ZEBRA_CONF_PATH}"
+
+}
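The conditional sections in `prepare_conf_file` rely on command substitution inside an unquoted heredoc: a `$( [[ -n VAR ]] && ... )` block expands to a config section only when the variable is set. A minimal, self-contained sketch of that technique (the variable value and the two-section config are illustrative):

```shell
#!/usr/bin/env bash
# Build a TOML fragment where the [mining] section appears only when
# MINER_ADDRESS is set, using the same heredoc + command-substitution
# pattern as prepare_conf_file, reduced to two sections.
MINER_ADDRESS="t1exampleaddress"

conf="$(cat <<EOF
[network]
network = "Mainnet"
$( [[ -n ${MINER_ADDRESS} ]] && printf '\n[mining]\nminer_address = "%s"' "${MINER_ADDRESS}" )
EOF
)"

echo "${conf}"
```

If `MINER_ADDRESS` were unset, the substitution would expand to nothing and the `[mining]` section would be omitted entirely.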
-# Function to list directory
-check_directory_files() {
+# Prints an error message to stderr and exits with status 1.
+exit_error() {
+ echo "$1" >&2
+ exit 1
+}
+
+# Creates a directory if it doesn't exist and sets ownership to specified UID:GID.
+# Also ensures the parent directories have the correct ownership.
+#
+# ## Parameters
+#
+# - $1: Directory path to create and own
+create_owned_directory() {
local dir="$1"
- # Check if the directory exists
- if [[ -d "${dir}" ]]; then
- # Check if there are any subdirectories
- if find "${dir}" -mindepth 1 -type d | read -r; then
- # Subdirectories exist, so we continue
- :
- else
- # No subdirectories, print message and exit with status 1
- echo "No subdirectories found in ${dir}."
- exit 1
- fi
- else
- # Directory doesn't exist, print message and exit with status 1
- echo "Directory ${dir} does not exist."
- exit 1
+ # Skip if directory is empty
+ [[ -z ${dir} ]] && return
+
+ # Create directory with parents
+ mkdir -p "${dir}" || exit_error "Failed to create directory: ${dir}"
+
+ # Set ownership for the created directory
+ chown -R "${UID}:${GID}" "${dir}" || exit_error "Failed to secure directory: ${dir}"
+
+ # Set ownership for parent directory (but not if it's root or home)
+ local parent_dir
+ parent_dir="$(dirname "${dir}")"
+ if [[ "${parent_dir}" != "/" && "${parent_dir}" != "${HOME}" ]]; then
+ chown "${UID}:${GID}" "${parent_dir}"
fi
}
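A trimmed-down sketch of `create_owned_directory` that keeps the empty-argument guard and the mkdir-with-parents step but drops the `chown` calls (which require root), so it can be exercised in a scratch directory. `create_dir` is an illustrative name:

```shell
#!/usr/bin/env bash
# Like create_owned_directory, minus ownership changes: skip empty paths,
# create the directory and its parents, and report failure explicitly.
create_dir() {
  local dir="$1"
  # Skip if the path is empty, mirroring the entrypoint's guard.
  [[ -z "${dir}" ]] && return 0
  mkdir -p "${dir}" || { echo "Failed to create directory: ${dir}" >&2; return 1; }
}

scratch="$(mktemp -d)"
create_dir "${scratch}/a/b/c"
[[ -d "${scratch}/a/b/c" ]] && echo "created nested directories"
```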
-# Function to run cargo test with an arbitrary number of arguments
-run_cargo_test() {
- # Start constructing the command, ensuring that $1 is enclosed in single quotes as it's a feature list
- local cmd="exec cargo test --locked --release --features '$1' --package zebrad --test acceptance -- --nocapture --include-ignored"
+# Create and own cache and config directories
+[[ -n ${ZEBRA_CACHE_DIR} ]] && create_owned_directory "${ZEBRA_CACHE_DIR}"
+[[ -n ${LWD_CACHE_DIR} ]] && create_owned_directory "${LWD_CACHE_DIR}"
+[[ -n ${ZEBRA_COOKIE_DIR} ]] && create_owned_directory "${ZEBRA_COOKIE_DIR}"
+[[ -n ${LOG_FILE} ]] && create_owned_directory "$(dirname "${LOG_FILE}")"
+# Runs cargo test with an arbitrary number of arguments.
+#
+# Positional Parameters
+#
+# - '$1' must contain cargo FEATURES as described here:
+# https://doc.rust-lang.org/cargo/reference/features.html#command-line-feature-options
+# - The remaining params will be appended to a command starting with
+# `exec_as_user cargo test ... -- ...`
+run_cargo_test() {
# Shift the first argument, as it's already included in the cmd
+ local features="$1"
shift
+ # Start constructing the command array
+ local cmd=(cargo test --locked --release --features "${features}" --package zebrad --test acceptance -- --nocapture --include-ignored)
+
# Loop through the remaining arguments
for arg in "$@"; do
if [[ -n ${arg} ]]; then
# If the argument is non-empty, add it to the command
- cmd+=" ${arg}"
+ cmd+=("${arg}")
fi
done
- # Run the command using eval, this will replace the current process with the cargo command
- eval "${cmd}" || { echo "Cargo test failed"; exit 1; }
+ echo "Running: ${cmd[*]}"
+ # Execute directly to become PID 1
+ exec_as_user "${cmd[@]}"
+}
+
+# Runs tests depending on the env vars.
+#
+# ## Positional Parameters
+#
+# - $@: Arbitrary command that will be executed if no test env var is set.
+run_tests() {
+ if [[ "${RUN_ALL_TESTS}" -eq "1" ]]; then
+ # Run unit, basic acceptance tests, and ignored tests, only showing command
+ # output if the test fails. If the lightwalletd environment variables are
+ # set, we will also run those tests.
+ exec_as_user cargo test --locked --release --workspace --features "${FEATURES}" \
+ -- --nocapture --include-ignored --skip check_no_git_dependencies
+
+ elif [[ "${CHECK_NO_GIT_DEPENDENCIES}" -eq "1" ]]; then
+ # Run the check_no_git_dependencies test.
+ exec_as_user cargo test --locked --release --workspace --features "${FEATURES}" \
+ -- --nocapture --include-ignored check_no_git_dependencies
+
+ elif [[ "${STATE_FAKE_ACTIVATION_HEIGHTS}" -eq "1" ]]; then
+ # Run state tests with fake activation heights.
+ exec_as_user cargo test --locked --release --lib --features "zebra-test" \
+ --package zebra-state \
+ -- --nocapture --include-ignored with_fake_activation_heights
+
+ elif [[ "${SYNC_LARGE_CHECKPOINTS_EMPTY}" -eq "1" ]]; then
+ # Test that Zebra syncs and checkpoints a few thousand blocks from an empty
+ # state.
+ run_cargo_test "${FEATURES}" "sync_large_checkpoints_"
+
+ elif [[ -n "${SYNC_FULL_MAINNET_TIMEOUT_MINUTES}" ]]; then
+ # Run a Zebra full sync test on mainnet.
+ run_cargo_test "${FEATURES}" "sync_full_mainnet"
+
+ elif [[ -n "${SYNC_FULL_TESTNET_TIMEOUT_MINUTES}" ]]; then
+ # Run a Zebra full sync test on testnet.
+ run_cargo_test "${FEATURES}" "sync_full_testnet"
+
+ elif [[ "${SYNC_TO_MANDATORY_CHECKPOINT}" -eq "1" ]]; then
+ # Run a Zebra sync up to the mandatory checkpoint.
+ run_cargo_test "${FEATURES} sync_to_mandatory_checkpoint_${NETWORK,,}" \
+ "sync_to_mandatory_checkpoint_${NETWORK,,}"
+
+ elif [[ "${SYNC_UPDATE_MAINNET}" -eq "1" ]]; then
+ # Run a Zebra sync starting at the cached tip, and syncing to the latest
+ # tip.
+ run_cargo_test "${FEATURES}" "sync_update_mainnet"
+
+ elif [[ "${SYNC_PAST_MANDATORY_CHECKPOINT}" -eq "1" ]]; then
+ # Run a Zebra sync starting at the cached mandatory checkpoint, and syncing
+ # past it.
+ run_cargo_test "${FEATURES} sync_past_mandatory_checkpoint_${NETWORK,,}" \
+ "sync_past_mandatory_checkpoint_${NETWORK,,}"
+
+ elif [[ "${GENERATE_CHECKPOINTS_MAINNET}" -eq "1" ]]; then
+ # Generate checkpoints after syncing Zebra from a cached state on mainnet.
+ #
+ # TODO: disable or filter out logs like:
+ # test generate_checkpoints_mainnet has been running for over 60 seconds
+ run_cargo_test "${FEATURES}" "generate_checkpoints_mainnet"
+
+ elif [[ "${GENERATE_CHECKPOINTS_TESTNET}" -eq "1" ]]; then
+ # Generate checkpoints after syncing Zebra on testnet.
+ #
+ # This test might fail if testnet is unstable.
+ run_cargo_test "${FEATURES}" "generate_checkpoints_testnet"
+
+ elif [[ "${LWD_RPC_TEST}" -eq "1" ]]; then
+ # Starting at a cached Zebra tip, test a JSON-RPC call to Zebra.
+ # Run both the fully synced RPC test and the subtree snapshot test, one test
+ # at a time. Since these tests use the same cached state, a state problem in
+ # the first test can fail the second test.
+ run_cargo_test "${FEATURES}" "--test-threads" "1" "lwd_rpc_test"
+
+ elif [[ "${LIGHTWALLETD_INTEGRATION}" -eq "1" ]]; then
+ # Test launching lightwalletd with an empty lightwalletd and Zebra state.
+ run_cargo_test "${FEATURES}" "lwd_integration"
+
+ elif [[ "${LWD_SYNC_FULL}" -eq "1" ]]; then
+ # Starting at a cached Zebra tip, run a lightwalletd sync to tip.
+ run_cargo_test "${FEATURES}" "lwd_sync_full"
+
+ elif [[ "${LWD_SYNC_UPDATE}" -eq "1" ]]; then
+ # Starting with a cached Zebra and lightwalletd tip, run a quick update sync.
+ run_cargo_test "${FEATURES}" "lwd_sync_update"
+
+ # These tests actually use gRPC.
+ elif [[ "${LWD_GRPC_WALLET}" -eq "1" ]]; then
+ # Starting with a cached Zebra and lightwalletd tip, test all gRPC calls to
+ # lightwalletd, which calls Zebra.
+ run_cargo_test "${FEATURES}" "lwd_grpc_wallet"
+
+ elif [[ "${LWD_RPC_SEND_TX}" -eq "1" ]]; then
+ # Starting with a cached Zebra and lightwalletd tip, test sending
+ # transactions gRPC call to lightwalletd, which calls Zebra.
+ run_cargo_test "${FEATURES}" "lwd_rpc_send_tx"
+
+ # These tests use mining code, but don't use gRPC.
+ elif [[ "${RPC_GET_BLOCK_TEMPLATE}" -eq "1" ]]; then
+ # Starting with a cached Zebra tip, test getting a block template from
+ # Zebra's RPC server.
+ run_cargo_test "${FEATURES}" "rpc_get_block_template"
+
+ elif [[ "${RPC_SUBMIT_BLOCK}" -eq "1" ]]; then
+ # Starting with a cached Zebra tip, test sending a block to Zebra's RPC
+ # port.
+ run_cargo_test "${FEATURES}" "rpc_submit_block"
+
+ else
+ exec_as_user "$@"
+ fi
}
-# Main Execution Logic:
-# This script orchestrates the execution flow based on the provided arguments and environment variables.
-# - If "$1" is '--', '-', or 'zebrad', the script processes the subsequent arguments for the 'zebrad' command.
-# - If ENTRYPOINT_FEATURES is unset, it checks for ZEBRA_CONF_PATH. If set, 'zebrad' runs with this custom configuration; otherwise, it runs with the provided arguments.
-# - If "$1" is an empty string and ENTRYPOINT_FEATURES is set, the script enters the testing phase, checking various environment variables to determine the specific tests to run.
-# - Different tests or operations are triggered based on the respective conditions being met.
-# - If "$1" doesn't match any of the above, it's assumed to be a command, which is executed directly.
-# This structure ensures a flexible execution strategy, accommodating various scenarios such as custom configurations, different testing phases, or direct command execution.
+# Main Script Logic
+#
+# 1. If ZEBRA_CONF_PATH is set, require that a config file exists at that path
+# 2. If it is not set but a default config exists at ${HOME}/.config/zebrad.toml, use it
+# 3. Otherwise, generate a default config at ${HOME}/.config/zebrad.toml
+# 4. Print environment variables and config for debugging
+# 5. Process command-line arguments and execute appropriate action
+if [[ -n ${ZEBRA_CONF_PATH} ]]; then
+ if [[ -f ${ZEBRA_CONF_PATH} ]]; then
+ echo "ZEBRA_CONF_PATH was set to ${ZEBRA_CONF_PATH} and a file exists."
+ echo "Using user-provided config file"
+ else
+ echo "ERROR: ZEBRA_CONF_PATH was set and no config file found at ${ZEBRA_CONF_PATH}."
+ echo "Please ensure a config file exists or set ZEBRA_CONF_PATH to point to your config file."
+ exit 1
+ fi
+else
+ if [[ -f "${HOME}/.config/zebrad.toml" ]]; then
+ echo "ZEBRA_CONF_PATH was not set."
+ echo "Using default config at ${HOME}/.config/zebrad.toml"
+ ZEBRA_CONF_PATH="${HOME}/.config/zebrad.toml"
+ else
+ echo "ZEBRA_CONF_PATH was not set and no default config found at ${HOME}/.config/zebrad.toml"
+ echo "Preparing a default one..."
+ ZEBRA_CONF_PATH="${HOME}/.config/zebrad.toml"
+ create_owned_directory "$(dirname "${ZEBRA_CONF_PATH}")"
+ prepare_conf_file
+ fi
+fi
+echo "INFO: Using the following environment variables:"
+printenv
+
+echo "Using Zebra config at ${ZEBRA_CONF_PATH}:"
+cat "${ZEBRA_CONF_PATH}"
+
+# - If "$1" is "--", "-", or "zebrad", run `zebrad` with the remaining params.
+# - If "$1" is "test":
+# - and "$2" is "zebrad", run `zebrad` with the remaining params,
+# - else run tests with the remaining params.
+# - TODO: If "$1" is "monitoring", start a monitoring node.
+# - If "$1" doesn't match any of the above, run "$@" directly.
case "$1" in
- --* | -* | zebrad)
+--* | -* | zebrad)
+ shift
+ exec_as_user zebrad --config "${ZEBRA_CONF_PATH}" "$@"
+ ;;
+test)
+ shift
+ if [[ "$1" == "zebrad" ]]; then
shift
- if [[ -n "${ZEBRA_CONF_PATH}" ]]; then
- exec zebrad -c "${ZEBRA_CONF_PATH}" "$@" || { echo "Execution with custom configuration failed"; exit 1; }
- else
- exec zebrad "$@" || { echo "Execution failed"; exit 1; }
- fi
- ;;
- "")
- if [[ -n "${ENTRYPOINT_FEATURES}" ]]; then
- # Validate the test variables
- # For these tests, we activate the test features to avoid recompiling `zebrad`,
- # but we don't actually run any gRPC tests.
- if [[ "${RUN_ALL_TESTS}" -eq "1" ]]; then
- # Run unit, basic acceptance tests, and ignored tests, only showing command output if the test fails.
- # If the lightwalletd environmental variables are set, we will also run those tests.
- exec cargo test --locked --release --features "${ENTRYPOINT_FEATURES}" --workspace -- --nocapture --include-ignored
-
- elif [[ "${RUN_ALL_EXPERIMENTAL_TESTS}" -eq "1" ]]; then
- # Run unit, basic acceptance tests, and ignored tests with experimental features.
- # If the lightwalletd environmental variables are set, we will also run those tests.
- exec cargo test --locked --release --features "${ENTRYPOINT_FEATURES_EXPERIMENTAL}" --workspace -- --nocapture --include-ignored
-
- elif [[ "${TEST_FAKE_ACTIVATION_HEIGHTS}" -eq "1" ]]; then
- # Run state tests with fake activation heights.
- exec cargo test --locked --release --features "zebra-test" --package zebra-state --lib -- --nocapture --include-ignored with_fake_activation_heights
-
- elif [[ "${TEST_ZEBRA_EMPTY_SYNC}" -eq "1" ]]; then
- # Test that Zebra syncs and checkpoints a few thousand blocks from an empty state.
- run_cargo_test "${ENTRYPOINT_FEATURES}" "sync_large_checkpoints_"
-
- elif [[ "${ZEBRA_TEST_LIGHTWALLETD}" -eq "1" ]]; then
- # Test launching lightwalletd with an empty lightwalletd and Zebra state.
- run_cargo_test "${ENTRYPOINT_FEATURES}" "lightwalletd_integration"
-
- elif [[ -n "${FULL_SYNC_MAINNET_TIMEOUT_MINUTES}" ]]; then
- # Run a Zebra full sync test on mainnet.
- run_cargo_test "${ENTRYPOINT_FEATURES}" "full_sync_mainnet"
- # List directory generated by test
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
-
- elif [[ -n "${FULL_SYNC_TESTNET_TIMEOUT_MINUTES}" ]]; then
- # Run a Zebra full sync test on testnet.
- run_cargo_test "${ENTRYPOINT_FEATURES}" "full_sync_testnet"
- # List directory generated by test
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
-
- elif [[ "${TEST_DISK_REBUILD}" -eq "1" ]]; then
- # Run a Zebra sync up to the mandatory checkpoint.
- #
- # TODO: use environmental variables instead of Rust features (part of #2995)
- run_cargo_test "test_sync_to_mandatory_checkpoint_${NETWORK,,},${ENTRYPOINT_FEATURES}" "sync_to_mandatory_checkpoint_${NETWORK,,}"
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
-
- elif [[ "${TEST_UPDATE_SYNC}" -eq "1" ]]; then
- # Run a Zebra sync starting at the cached tip, and syncing to the latest tip.
- #
- # List directory used by test
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- run_cargo_test "${ENTRYPOINT_FEATURES}" "zebrad_update_sync"
-
- elif [[ "${TEST_CHECKPOINT_SYNC}" -eq "1" ]]; then
- # Run a Zebra sync starting at the cached mandatory checkpoint, and syncing past it.
- #
- # List directory used by test
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- # TODO: use environmental variables instead of Rust features (part of #2995)
- run_cargo_test "test_sync_past_mandatory_checkpoint_${NETWORK,,},${ENTRYPOINT_FEATURES}" "sync_past_mandatory_checkpoint_${NETWORK,,}"
-
- elif [[ "${GENERATE_CHECKPOINTS_MAINNET}" -eq "1" ]]; then
- # Generate checkpoints after syncing Zebra from a cached state on mainnet.
- #
- # TODO: disable or filter out logs like:
- # test generate_checkpoints_mainnet has been running for over 60 seconds
- #
- # List directory used by test
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- run_cargo_test "${ENTRYPOINT_FEATURES}" "generate_checkpoints_mainnet"
-
- elif [[ "${GENERATE_CHECKPOINTS_TESTNET}" -eq "1" ]]; then
- # Generate checkpoints after syncing Zebra on testnet.
- #
- # This test might fail if testnet is unstable.
- #
- # List directory used by test
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- run_cargo_test "${ENTRYPOINT_FEATURES}" "generate_checkpoints_testnet"
-
- elif [[ "${TEST_LWD_RPC_CALL}" -eq "1" ]]; then
- # Starting at a cached Zebra tip, test a JSON-RPC call to Zebra.
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- # Run both the fully synced RPC test and the subtree snapshot test, one test at a time.
- # Since these tests use the same cached state, a state problem in the first test can fail the second test.
- run_cargo_test "${ENTRYPOINT_FEATURES}" "--test-threads" "1" "fully_synced_rpc_"
-
- elif [[ "${TEST_LWD_FULL_SYNC}" -eq "1" ]]; then
- # Starting at a cached Zebra tip, run a lightwalletd sync to tip.
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- run_cargo_test "${ENTRYPOINT_FEATURES}" "lightwalletd_full_sync"
- check_directory_files "${LIGHTWALLETD_DATA_DIR}/db"
-
- elif [[ "${TEST_LWD_UPDATE_SYNC}" -eq "1" ]]; then
- # Starting with a cached Zebra and lightwalletd tip, run a quick update sync.
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- check_directory_files "${LIGHTWALLETD_DATA_DIR}/db"
- run_cargo_test "${ENTRYPOINT_FEATURES}" "lightwalletd_update_sync"
-
- # These tests actually use gRPC.
- elif [[ "${TEST_LWD_GRPC}" -eq "1" ]]; then
- # Starting with a cached Zebra and lightwalletd tip, test all gRPC calls to lightwalletd, which calls Zebra.
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- check_directory_files "${LIGHTWALLETD_DATA_DIR}/db"
- run_cargo_test "${ENTRYPOINT_FEATURES}" "lightwalletd_wallet_grpc_tests"
-
- elif [[ "${TEST_LWD_TRANSACTIONS}" -eq "1" ]]; then
- # Starting with a cached Zebra and lightwalletd tip, test sending transactions gRPC call to lightwalletd, which calls Zebra.
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- check_directory_files "${LIGHTWALLETD_DATA_DIR}/db"
- run_cargo_test "${ENTRYPOINT_FEATURES}" "sending_transactions_using_lightwalletd"
-
- # These tests use mining code, but don't use gRPC.
- elif [[ "${TEST_GET_BLOCK_TEMPLATE}" -eq "1" ]]; then
- # Starting with a cached Zebra tip, test getting a block template from Zebra's RPC server.
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- run_cargo_test "${ENTRYPOINT_FEATURES}" "get_block_template"
-
- elif [[ "${TEST_SUBMIT_BLOCK}" -eq "1" ]]; then
- # Starting with a cached Zebra tip, test sending a block to Zebra's RPC port.
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- run_cargo_test "${ENTRYPOINT_FEATURES}" "submit_block"
-
- elif [[ "${TEST_SCAN_START_WHERE_LEFT}" -eq "1" ]]; then
- # Test that the scanner can continue scanning where it was left when zebra-scanner restarts.
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- exec cargo test --locked --release --features "zebra-test" --package zebra-scan -- --nocapture --include-ignored scan_start_where_left
-
- elif [[ "${TEST_SCAN_TASK_COMMANDS}" -eq "1" ]]; then
- # Test that the scan task commands are working.
- check_directory_files "${ZEBRA_CACHED_STATE_DIR}"
- exec cargo test --locked --release --features "zebra-test" --package zebra-scan -- --nocapture --include-ignored scan_task_commands
-
- else
- exec "$@"
- fi
- fi
- ;;
- *)
- if command -v gosu >/dev/null 2>&1; then
- exec gosu "$USER" "$@"
- else
- exec "$@"
- fi
- ;;
+ exec_as_user zebrad --config "${ZEBRA_CONF_PATH}" "$@"
+ else
+ run_tests "$@"
+ fi
+ ;;
+monitoring)
+ # TODO: Impl logic for starting a monitoring node.
+ :
+ ;;
+*)
+ exec_as_user "$@"
+ ;;
esac
diff --git a/docker/test.env b/docker/test.env
index fd2a7c876b7..7aa4a6dfe77 100644
--- a/docker/test.env
+++ b/docker/test.env
@@ -1,60 +1,94 @@
-###
-# Configuration Variables
-# These variables are used to configure the zebra node
-# Check the entrypoint.sh script for more details
-###
-
-# Set this to change the default log level (must be set at build time)
-RUST_LOG=info
-# This variable forces the use of color in the logs
-ZEBRA_FORCE_USE_COLOR=1
-LOG_COLOR=true
-# Path to the config file. This variable has a default set in entrypoint.sh
-# ZEBRA_CONF_PATH=/etc/zebrad/zebrad.toml
-# [network]
-NETWORK=Mainnet
-# [state]
-# Set this to change the default cached state directory
-ZEBRA_CACHED_STATE_DIR=/var/cache/zebrad-cache
-LIGHTWALLETD_DATA_DIR=/var/cache/lwd-cache
-# [tracing]
-LOG_COLOR=false
-TRACING_ENDPOINT_ADDR=0.0.0.0
-TRACING_ENDPOINT_PORT=3000
-
-####
-# Test Variables
-# These variables are used to run tests in the Dockerfile
-# Check the entrypoint.sh script for more details
-####
+# Configuration variables for running Zebra in Docker
+
+# Sets the network Zebra will run on.
+#
+# NETWORK=Mainnet
+
+# Zebra's RPC server is disabled by default. To enable it, set its port number.
+#
+# ZEBRA_RPC_PORT=8232 # Default RPC port number on Mainnet.
+# ZEBRA_RPC_PORT=18232 # Default RPC port number on Testnet.
+
+# To disable cookie authentication, set the value below to false.
+#
+# ENABLE_COOKIE_AUTH=true
+
+# Sets a custom directory for the state and network caches. Zebra will also
+# store its cookie authentication file in this directory.
+#
+# ZEBRA_CACHE_DIR="/home/zebra/.cache/zebra"
+
+# Sets custom Cargo features. Available features are listed at
+# .
+#
+# Must be set at build time.
+#
+# FEATURES=""
+
+# Logging to a file is disabled by default. To enable it, uncomment the line
+# below and optionally set your own path.
+#
+# LOG_FILE="/home/zebra/.local/state/zebrad.log"
+
+# Zebra detects whether its logs are being written to a terminal or a file, and
+# uses colored logs for terminals and plain logs for files. Setting the
+# variable below to true forces colored logs even for files; setting it to
+# false disables colors even for terminals.
+#
+# LOG_COLOR=true
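The terminal-vs-file detection behind the `LOG_COLOR` default boils down to a tty check on the log stream. A minimal shell sketch of the general mechanism (Zebra performs the equivalent check internally):

```shell
# Sketch of the terminal detection behind the LOG_COLOR default: colored output
# is used only when the log stream (stdout here) is a terminal (tty).
if [ -t 1 ]; then
  MODE="colored"
else
  MODE="plain"
fi
echo "log color mode: ${MODE}"
```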
+
+# To disable logging to journald, set the value to false.
+#
+# USE_JOURNALD=true
+
+# If you are going to use Zebra as a backend for a mining pool, set your mining
+# address.
+#
+# MINER_ADDRESS="your_mining_address"
+
+# Controls the output of `env_logger`:
+# https://docs.rs/env_logger/latest/env_logger/
+#
+# Must be set at build time.
+#
+# RUST_LOG=info
# Unit tests
-# TODO: These variables are evaluated to any value, even setting a NULL value will evaluate to true
+
+# TODO: These variables are treated as set if they have any value; even an
+# empty (NULL) value evaluates to true.
+#
# TEST_FAKE_ACTIVATION_HEIGHTS=
-# ZEBRA_SKIP_NETWORK_TESTS
-# ZEBRA_SKIP_IPV6_TESTS
+# ZEBRA_SKIP_NETWORK_TESTS=
+# ZEBRA_SKIP_IPV6_TESTS=
RUN_ALL_TESTS=
-RUN_ALL_EXPERIMENTAL_TESTS=
-TEST_ZEBRA_EMPTY_SYNC=
+SYNC_LARGE_CHECKPOINTS_EMPTY=
ZEBRA_TEST_LIGHTWALLETD=
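The TODO above refers to set-vs-unset semantics: the test harness only checks whether a variable exists, so even an empty value counts as "true". A quick shell illustration using POSIX parameter expansion:

```shell
# Any value, including the empty string, makes the variable "set"; only
# unsetting it disables the behavior.
unset TEST_FAKE_ACTIVATION_HEIGHTS
echo "unset: '${TEST_FAKE_ACTIVATION_HEIGHTS+set}'"   # prints: unset: ''
TEST_FAKE_ACTIVATION_HEIGHTS=""
echo "empty: '${TEST_FAKE_ACTIVATION_HEIGHTS+set}'"   # prints: empty: 'set'
```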
+
# Integration Tests
-# Most of these tests require a cached state directory to save the network state
-TEST_DISK_REBUILD=
-# These tests needs a Zebra cached state
-TEST_CHECKPOINT_SYNC=
+
+# Most of these tests require a cached state directory to save the network state.
+SYNC_TO_CHECKPOINT=
+SYNC_PAST_CHECKPOINT=
GENERATE_CHECKPOINTS_MAINNET=
GENERATE_CHECKPOINTS_TESTNET=
-TEST_UPDATE_SYNC=
-# These tests need a Lightwalletd binary + a Zebra cached state
-TEST_LWD_RPC_CALL=
-TEST_GET_BLOCK_TEMPLATE=
-TEST_SUBMIT_BLOCK=
-# These tests need a Lightwalletd binary + Lightwalletd cached state + a Zebra cached state
-TEST_LWD_UPDATE_SYNC=
-TEST_LWD_GRPC=
-TEST_LWD_TRANSACTIONS=
+SYNC_UPDATE=
+TEST_SCANNER=
+
+# These tests need a Lightwalletd binary + a Zebra cached state.
+RPC_FULLY_SYNCED_TEST=
+RPC_GET_BLOCK_TEMPLATE=
+RPC_SUBMIT_BLOCK=
+
+# These tests need a Lightwalletd binary + Lightwalletd cached state + a Zebra
+# cached state.
+LIGHTWALLETD_SYNC_UPDATE=
+LIGHTWALLETD_GRPC_WALLET=
+LIGHTWALLETD_SEND_TRANSACTIONS=
+
# Full sync tests
-# These tests could take a long time to run, depending on the network
-FULL_SYNC_MAINNET_TIMEOUT_MINUTES=
-FULL_SYNC_TESTNET_TIMEOUT_MINUTES=
-TEST_LWD_FULL_SYNC=
+
+# These tests take about three days on Mainnet and one day on Testnet.
+SYNC_FULL_MAINNET_TIMEOUT_MINUTES=
+SYNC_FULL_TESTNET_TIMEOUT_MINUTES=
+LIGHTWALLETD_SYNC_FULL=
diff --git a/docker/zcash.conf b/docker/zcash.conf
new file mode 100644
index 00000000000..22f9ab8495d
--- /dev/null
+++ b/docker/zcash.conf
@@ -0,0 +1,2 @@
+rpcpassword=none
+rpcbind=zebra
diff --git a/docs/decisions/README.md b/docs/decisions/README.md
new file mode 100644
index 00000000000..91d5bd9188f
--- /dev/null
+++ b/docs/decisions/README.md
@@ -0,0 +1,22 @@
+# Decision Log
+
+We capture important decisions with [architectural decision records](https://adr.github.io/).
+
+These records capture the context, trade-offs, and reasoning behind choices made at our community and technical crossroads. Our goal is to preserve an understanding of how the project has grown, and to record enough insight to effectively revisit previous decisions.
+
+To get started, create a new decision record using the template:
+
+```sh
+cp template.md NNNN-title-with-dashes.md
+```
+
+For more rationale for this approach, see [Michael Nygard's article](http://thinkrelevance.com/blog/2011/11/15/documenting-architecture-decisions).
+
+We've adopted the MADR [ADR template](https://adr.github.io/madr/), which is a bit more verbose than Nygard's original template. We may simplify it in the future.
+
+## Evolving Decisions
+
+Many decisions build on each other; this iteration is a natural driver of
+change and messiness in software. By laying out the "story arc" of a particular
+system within the application, we hope future maintainers can identify how to
+unwind earlier decisions when refactoring the application becomes necessary.
diff --git a/docs/decisions/devops/0001-docker-high-uid.md b/docs/decisions/devops/0001-docker-high-uid.md
new file mode 100644
index 00000000000..bef88c1980a
--- /dev/null
+++ b/docs/decisions/devops/0001-docker-high-uid.md
@@ -0,0 +1,51 @@
+---
+status: accepted
+date: 2025-02-28
+story: Appropriate UID/GID values for container users
+---
+
+# Use High UID/GID Values for Container Users
+
+## Context & Problem Statement
+
+Docker containers share the host's user namespace by default. If container UIDs/GIDs overlap with privileged host accounts, this could lead to privilege escalation if a container escape vulnerability is exploited. Low UIDs (especially in the system user range of 100-999) are particularly risky as they often map to privileged system users on the host.
+
+Our previous approach used UID/GID 101 with the `--system` flag for user creation, which falls within the system user range and could potentially overlap with critical system users on the host.
+
+## Priorities & Constraints
+
+* Enhance security by reducing the risk of container user namespace overlaps
+* Avoid warnings during container build related to system user ranges
+* Maintain compatibility with common Docker practices
+* Prevent potential privilege escalation in case of container escape
+
+## Considered Options
+
+* Option 1: Keep using low UID/GID (101) with `--system` flag
+* Option 2: Use UID/GID (1000+) without `--system` flag
+* Option 3: Use high UID/GID (10000+) without `--system` flag
+
+## Decision Outcome
+
+Chosen option: [Option 3: Use high UID/GID (10000+) without `--system` flag]
+
+We decided to:
+
+1. Change the default UID/GID from 101 to 10001
+2. Remove the `--system` flag from user/group creation commands
+3. Document the security rationale for these changes
+
+This approach significantly reduces the risk of UID/GID collision with host system users while avoiding build-time warnings related to system user ranges. Using a very high UID/GID (10001) provides an additional security boundary in containers where user namespaces are shared with the host.
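As a sanity check, a chosen ID can be classified against the ranges discussed above. A small sketch (the 10001 value comes from this ADR; the 100-999 system range reflects typical `login.defs` defaults):

```shell
# Classify a UID against the ranges discussed in this ADR:
#   100-999   -> system users (risk of collision with host accounts)
#   >= 10000  -> high range chosen here as an extra safety margin
ZEBRA_UID=10001
if [ "${ZEBRA_UID}" -ge 100 ] && [ "${ZEBRA_UID}" -le 999 ]; then
  RANGE="system (risky)"
elif [ "${ZEBRA_UID}" -ge 10000 ]; then
  RANGE="high (chosen)"
else
  RANGE="regular user"
fi
echo "UID ${ZEBRA_UID}: ${RANGE}"   # prints: UID 10001: high (chosen)
```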
+
+### Expected Consequences
+
+* Improved security posture by reducing the risk of container escapes leading to privilege escalation
+* Elimination of build-time warnings related to system user UID/GID ranges
+* Consistency with industry best practices for container security
+* No functional impact on container operation, as the internal user permissions remain the same
+
+## More Information
+
+* [NGINX Docker User ID Issue](https://github.com/nginxinc/docker-nginx/issues/490) - Demonstrates the risks of using UID 101 which overlaps with `systemd-network` user on Debian systems
+* [.NET Docker Issue on System Users](https://github.com/dotnet/dotnet-docker/issues/4624) - Details the problems with using `--system` flag and the SYS_UID_MAX warnings
+* [Docker Security Best Practices](https://docs.docker.com/develop/security-best-practices/) - General security recommendations for Docker containers
diff --git a/docs/decisions/devops/0002-docker-use-gosu.md b/docs/decisions/devops/0002-docker-use-gosu.md
new file mode 100644
index 00000000000..0bdd2931f89
--- /dev/null
+++ b/docs/decisions/devops/0002-docker-use-gosu.md
@@ -0,0 +1,51 @@
+---
+status: accepted
+date: 2025-02-28
+story: Volumes permissions and privilege management in container entrypoint
+---
+
+# Use gosu for Privilege Dropping in Entrypoint
+
+## Context & Problem Statement
+
+Running containerized applications as the root user is a security risk. If an attacker compromises the application, they gain root access within the container, potentially facilitating a container escape. However, some operations during container startup, such as creating directories or modifying file permissions in locations not owned by the application user, require root privileges. We need a way to perform these initial setup tasks as root, but then switch to a non-privileged user *before* executing the main application (`zebrad`). Using `USER` in the Dockerfile is insufficient because it applies to the entire runtime, and we need to change permissions *after* volumes are mounted.
+
+## Priorities & Constraints
+
+* Minimize the security risk by running the main application (`zebrad`) as a non-privileged user.
+* Allow initial setup tasks (file/directory creation, permission changes) that require root privileges.
+* Maintain a clean and efficient entrypoint script.
+* Avoid complex signal handling and TTY issues associated with `su` and `sudo`.
+* Ensure 1:1 parity with Docker's `--user` flag behavior.
+
+## Considered Options
+
+* Option 1: Use `USER` directive in Dockerfile.
+* Option 2: Use `su` within the entrypoint script.
+* Option 3: Use `sudo` within the entrypoint script.
+* Option 4: Use `gosu` within the entrypoint script.
+* Option 5: Use `chroot --userspec`
+* Option 6: Use `setpriv`
+
+## Decision Outcome
+
+Chosen option: [Option 4: Use `gosu` within the entrypoint script]
+
+We chose to use `gosu` because it provides a simple and secure way to drop privileges from root to a non-privileged user *after* performing necessary setup tasks. `gosu` avoids the TTY and signal-handling complexities of `su` and `sudo`. It's designed specifically for this use case (dropping privileges in container entrypoints) and leverages the same underlying mechanisms as Docker itself for user/group handling, ensuring consistent behavior.
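The resulting entrypoint pattern can be sketched as follows. The `exec_as_user` helper name matches the entrypoint in this PR, but the body here is an illustrative simplification (it echoes the command rather than `exec`-ing it, so the sketch is safely runnable):

```shell
# Simplified privilege-dropping helper: when running as root, re-run the
# command as the unprivileged user via gosu; otherwise run it directly.
exec_as_user() {
  if [ "$(id -u)" -eq 0 ]; then
    set -- gosu zebra:zebra "$@"
  fi
  echo "would exec: $*"   # a real entrypoint would call: exec "$@"
}

exec_as_user zebrad start
```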
+
+### Expected Consequences
+
+* Improved security by running `zebrad` as a non-privileged user.
+* Simplified entrypoint script compared to using `su` or `sudo`.
+* Avoidance of TTY and signal-handling issues.
+* Consistent behavior with Docker's `--user` flag.
+* No negative impact on functionality, as initial setup tasks can still be performed.
+
+## More Information
+
+* [gosu GitHub repository](https://github.com/tianon/gosu#why) - Explains the rationale behind `gosu` and its advantages over `su` and `sudo`.
+* [gosu usage warning](https://github.com/tianon/gosu#warning) - Highlights the core use case (stepping down from root) and potential vulnerabilities in other scenarios.
+* Alternatives considered:
+ * `chroot --userspec`: While functional, it's less common and less directly suited to this specific task than `gosu`.
+ * `setpriv`: A viable alternative, but `gosu` is already well-established in our workflow and offers the desired functionality with a smaller footprint than a full `util-linux` installation.
+ * `su-exec`: Another minimal alternative, but it has known parser bugs that could lead to unexpected root execution.
diff --git a/docs/decisions/devops/0003-filesystem-hierarchy.md b/docs/decisions/devops/0003-filesystem-hierarchy.md
new file mode 100644
index 00000000000..13c626dec5e
--- /dev/null
+++ b/docs/decisions/devops/0003-filesystem-hierarchy.md
@@ -0,0 +1,115 @@
+---
+status: proposed
+date: 2025-02-28
+story: Standardize filesystem hierarchy for Zebra deployments
+---
+
+# Standardize Filesystem Hierarchy: FHS vs. XDG
+
+## Context & Problem Statement
+
+Zebra currently has inconsistencies in its filesystem layout, particularly regarding where configuration, data, cache files, and binaries are stored. We need a standardized approach compatible with:
+
+1. Traditional Linux systems.
+2. Containerized deployments (Docker).
+3. Cloud environments with stricter filesystem restrictions (e.g., Google's Container-Optimized OS).
+
+We previously considered using the Filesystem Hierarchy Standard (FHS) exclusively ([Issue #3432](https://github.com/ZcashFoundation/zebra/issues/3432)). However, recent changes introduced the XDG Base Directory Specification, which offers a user-centric approach. We need to decide whether to:
+
+* Adhere to FHS.
+* Adopt XDG Base Directory Specification.
+* Use a hybrid approach, leveraging the strengths of both.
+
+The choice impacts how we structure our Docker images, where configuration files are located, and how users interact with Zebra in different environments.
+
+## Priorities & Constraints
+
+* **Security:** Minimize the risk of privilege escalation by adhering to least-privilege principles.
+* **Maintainability:** Ensure a clear and consistent filesystem layout that is easy to understand and maintain.
+* **Compatibility:** Work seamlessly across various Linux distributions, Docker, and cloud environments (particularly those with restricted filesystems like Google's Container-Optimized OS).
+* **User Experience:** Provide a predictable and user-friendly experience for locating configuration and data files.
+* **Flexibility:** Allow users to override default locations via environment variables where appropriate.
+* **Avoid Breaking Changes:** Minimize disruption to existing users and deployments, if possible.
+
+## Considered Options
+
+### Option 1: FHS
+
+* Configuration: `/etc/zebrad/`
+* Data: `/var/lib/zebrad/`
+* Cache: `/var/cache/zebrad/`
+* Logs: `/var/log/zebrad/`
+* Binary: `/opt/zebra/bin/zebrad` or `/usr/local/bin/zebrad`
+
+### Option 2: XDG Base Directory Specification
+
+* Configuration: `$HOME/.config/zebrad/`
+* Data: `$HOME/.local/share/zebrad/`
+* Cache: `$HOME/.cache/zebrad/`
+* State: `$HOME/.local/state/zebrad/`
+* Binary: `$HOME/.local/bin/zebrad` or `/usr/local/bin/zebrad`
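Under this option, paths are resolved from the XDG environment variables, with the fallbacks defined in the specification when they are unset. A sketch:

```shell
# Resolve Zebra's directories per the XDG Base Directory spec, falling back to
# the spec's defaults when the environment variables are unset.
XDG_CONFIG_HOME="${XDG_CONFIG_HOME:-$HOME/.config}"
XDG_DATA_HOME="${XDG_DATA_HOME:-$HOME/.local/share}"
XDG_CACHE_HOME="${XDG_CACHE_HOME:-$HOME/.cache}"
XDG_STATE_HOME="${XDG_STATE_HOME:-$HOME/.local/state}"
echo "config: ${XDG_CONFIG_HOME}/zebrad"
echo "data:   ${XDG_DATA_HOME}/zebrad"
echo "cache:  ${XDG_CACHE_HOME}/zebrad"
echo "state:  ${XDG_STATE_HOME}/zebrad"
```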
+
+### Option 3: Hybrid Approach (FHS for System-Wide, XDG for User-Specific)
+
+* System-wide configuration: `/etc/zebrad/`
+* User-specific configuration: `$XDG_CONFIG_HOME/zebrad/`
+* System-wide data (read-only, shared): `/usr/share/zebrad/` (e.g., checkpoints)
+* User-specific data: `$XDG_DATA_HOME/zebrad/`
+* Cache: `$XDG_CACHE_HOME/zebrad/`
+* State: `$XDG_STATE_HOME/zebrad/`
+* Runtime: `$XDG_RUNTIME_DIR/zebrad/`
+* Binary: `/opt/zebra/bin/zebrad` (system-wide) or `$HOME/.local/bin/zebrad` (user-specific)
+
+## Pros and Cons of the Options
+
+### FHS
+
+* **Pros:**
+ * Traditional and well-understood by system administrators.
+ * Clear separation of configuration, data, cache, and binaries.
+ * Suitable for packaged software installations.
+
+* **Cons:**
+ * Less user-friendly; requires root access to modify configuration.
+ * Can conflict with stricter cloud environments restricting writes to `/etc` and `/var`.
+ * Doesn't handle multi-user scenarios as gracefully as XDG.
+
+### XDG Base Directory Specification
+
+* **Pros:**
+ * User-centric: configuration and data stored in user-writable locations.
+ * Better suited for containerized and cloud environments.
+ * Handles multi-user scenarios gracefully.
+ * Clear separation of configuration, data, cache, and state.
+
+* **Cons:**
+ * Less traditional; might be unfamiliar to some system administrators.
+ * Requires environment variables to be set correctly.
+ * Binary placement less standardized.
+
+### Hybrid Approach (FHS for System-Wide, XDG for User-Specific)
+
+* **Pros:**
+ * Combines strengths of FHS and XDG.
+ * Allows system-wide defaults while prioritizing user-specific configurations.
+ * Flexible and adaptable to different deployment scenarios.
+ * Clear binary placement in `/opt`.
+
+* **Cons:**
+ * More complex than either FHS or XDG alone.
+ * Requires careful consideration of precedence rules.
+
+## Decision Outcome
+
+Pending
+
+## Expected Consequences
+
+Pending
+
+## More Information
+
+* [Filesystem Hierarchy Standard (FHS) v3.0](https://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.html)
+* [XDG Base Directory Specification](https://specifications.freedesktop.org/basedir-spec/latest/)
+* [Zebra Issue #3432: Use the Filesystem Hierarchy Standard (FHS) for deployments and artifacts](https://github.com/ZcashFoundation/zebra/issues/3432)
+* [Google Container-Optimized OS: Working with the File System](https://cloud.google.com/container-optimized-os/docs/concepts/disks-and-filesystem#working_with_the_file_system)
diff --git a/docs/decisions/devops/004-improve-docker-conf-tests.md b/docs/decisions/devops/004-improve-docker-conf-tests.md
new file mode 100644
index 00000000000..f1332c15ef5
--- /dev/null
+++ b/docs/decisions/devops/004-improve-docker-conf-tests.md
@@ -0,0 +1,89 @@
+---
+# status and date are the only required elements. Feel free to remove the rest.
+status: accepted
+date: 2025-04-07
+builds-on: N/A
+story: Need a scalable and maintainable way to test various Docker image configurations derived from `.env` variables and `entrypoint.sh` logic, ensuring consistency between CI and CD pipelines. https://github.com/ZcashFoundation/zebra/pull/8948
+---
+
+# Centralize Docker Configuration Testing using a Reusable Workflow
+
+## Context and Problem Statement
+
+Currently, tests verifying Zebra's Docker image configuration (based on environment variables processed by `docker/entrypoint.sh`) are implemented using a reusable workflow (`sub-test-zebra-config.yml`). However, the _invocation_ of these tests, including the specific scenarios (environment variables, grep patterns), is duplicated and scattered across different workflows, notably the CI workflow (`sub-ci-unit-tests-docker.yml`) and the CD workflow (`cd-deploy-nodes-gcp.yml`).
+
+This leads to:
+
+1. **Code Duplication:** Similar test setup logic exists in multiple places.
+2. **Maintenance Overhead:** Adding or modifying configuration tests requires changes in multiple files.
+3. **Scalability Issues:** Adding numerous new test scenarios would significantly clutter the main CI and CD workflow files.
+4. **Potential Inconsistency:** Risk of configuration tests diverging between CI and CD environments.
+
+We need a centralized, scalable, and maintainable approach to define and run these configuration tests against Docker images built in both CI and CD contexts.
+
+## Priorities & Constraints
+
+- **DRY Principle:** Avoid repeating test logic and scenario definitions.
+- **Maintainability:** Configuration tests should be easy to find, understand, and modify.
+- **Scalability:** The solution should easily accommodate adding many more test scenarios in the future.
+- **Consistency:** Ensure the same tests run against both CI and CD images where applicable.
+- **Integration:** Leverage existing GitHub Actions tooling and workflows effectively.
+- **Reliability:** Testing relies on running a container and grepping its logs for specific patterns to determine success.
+
+## Considered Options
+
+1. **Status Quo:** Continue defining and invoking configuration tests within the respective CI (`sub-ci-unit-tests-docker.yml`) and CD (`cd-deploy-nodes-gcp.yml`) workflows, using `sub-test-zebra-config.yml` for the core run/grep logic.
+2. **Modify and Extend `sub-test-zebra-config.yml`:** Convert the existing `sub-test-zebra-config.yml` workflow. Remove its specific test inputs (`test_id`, `grep_patterns`, `test_variables`). Add multiple jobs _inside_ this workflow, each hardcoding a specific test scenario (run container + grep logs). The workflow would only take `docker_image` as input.
+3. **Use `docker-compose.test.yml`:** Define test scenarios as services within a dedicated `docker-compose.test.yml` file. The CI/CD workflows would call a script (like `sub-test-zebra-config.yml`) that uses `docker compose` to run specific services and performs log grepping.
+4. **Create a _New_ Dedicated Reusable Workflow:** Create a _new_ reusable workflow (e.g., `sub-test-all-configs.yml`) that takes a Docker image digest as input and contains multiple jobs, each defining and executing a specific configuration test scenario (run container + grep logs).
+
+## Pros and Cons of the Options
+
+### Option 1: Status Quo
+
+- Bad: High duplication, poor maintainability, poor scalability.
+
+### Option 2: Modify and Extend `sub-test-zebra-config.yml`
+
+- Good: Centralizes test definition, execution, and assertion logic within the GHA ecosystem. Maximizes DRY principle for GHA workflows. High maintainability and scalability for adding tests. Clear separation of concerns (build vs. test config). Reuses an existing workflow file structure.
+- Bad: Modifies the existing workflow's purpose significantly. Callers need to adapt.
+
+### Option 3: Use `docker-compose.test.yml`
+
+- Good: Centralizes test _environment definitions_ in a standard format (`docker-compose.yml`). Easy local testing via `docker compose`.
+- Bad: Requires managing an extra file (`docker-compose.test.yml`). Still requires a GitHub Actions script/workflow step to orchestrate `docker compose` commands and perform the essential log grepping/assertion logic. Less integrated into the pure GHA workflow structure.
+
+### Option 4: Create a _New_ Dedicated Reusable Workflow
+
+- Good: Cleanest separation - new workflow has a clear single purpose from the start. High maintainability and scalability.
+- Bad: Introduces an additional workflow file. Adds a layer of workflow call chaining.
+
+## Decision Outcome
+
+Chosen option: [Option 2: Modify and Extend `sub-test-zebra-config.yml`]
+
+This option provides a good balance of maintainability, scalability, and consistency by centralizing the configuration testing logic within a single, dedicated GitHub Actions reusable workflow (`sub-test-zebra-config.yml`). It directly addresses the code duplication across CI and CD pipelines and leverages GHA's native features for modularity by converting the existing workflow into a multi-job test suite runner.
+
+While Option 4 (creating a new workflow) offered slightly cleaner separation initially, modifying the existing workflow (Option 2) achieves the same goal of centralization while minimizing the number of workflow files. It encapsulates the entire test process (definition, execution, assertion) within GHA jobs in the reused file.
+
+The `sub-test-zebra-config.yml` workflow will be modified to remove its specific test inputs and instead contain individual jobs for each configuration scenario to be tested, taking only the `docker_image` digest as input. The CI and CD workflows will be simplified to call this modified workflow once after their respective build steps.
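The core run-and-grep assertion each test job performs can be sketched as follows. The log line and pattern here are illustrative stand-ins; in the real workflow the logs come from `docker run` on the built image:

```shell
# Simplified version of a configuration-test job's assertion: capture container
# logs and grep for a pattern indicating the configuration took effect.
# LOGS is a stand-in for output captured from `docker run`.
LOGS="Opened RPC endpoint at 0.0.0.0:8232"
GREP_PATTERN="Opened RPC endpoint"
if printf '%s\n' "${LOGS}" | grep -q "${GREP_PATTERN}"; then
  RESULT="PASS"
else
  RESULT="FAIL"
fi
echo "${RESULT}"   # prints: PASS
```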
+
+### Expected Consequences
+
+- Reduction in code duplication within CI/CD workflow files.
+- Improved maintainability: configuration tests are located in a single file (`sub-test-zebra-config.yml`).
+- Easier addition of new configuration test scenarios by adding jobs to `sub-test-zebra-config.yml`.
+- Clearer separation between image building and configuration testing logic.
+- `sub-test-zebra-config.yml` will fundamentally change its structure and inputs.
+- CI/CD workflows (`cd-deploy-nodes-gcp.yml`, parent of `sub-ci-unit-tests-docker.yml`) will need modification to remove old test jobs and add calls to the modified reusable workflow, passing the correct image digest.
+- Debugging might involve tracing execution across workflow calls and within the multiple jobs of `sub-test-zebra-config.yml`.
+
+## More Information
+
+- GitHub Actions: Reusing Workflows: [https://docs.github.com/en/actions/using-workflows/reusing-workflows](https://docs.github.com/en/actions/using-workflows/reusing-workflows)
+- Relevant files:
+ - `.github/workflows/sub-test-zebra-config.yml` (To be modified)
+ - `.github/workflows/cd-deploy-nodes-gcp.yml` (To be modified)
+ - `.github/workflows/sub-ci-unit-tests-docker.yml` (To be modified)
+ - `docker/entrypoint.sh` (Script processing configurations)
+ - `docker/.env` (Example environment variables)
diff --git a/docs/decisions/template.md b/docs/decisions/template.md
new file mode 100644
index 00000000000..2913cd99116
--- /dev/null
+++ b/docs/decisions/template.md
@@ -0,0 +1,49 @@
+---
+# status and date are the only required elements. Feel free to remove the rest.
+status: {[proposed | rejected | accepted | deprecated | … | superseded by [ADR-NAME](adr-file-name.md)]}
+date: {YYYY-MM-DD when the decision was last updated}
+builds-on: {[Short Title](0001-short-title.md)}
+story: {description or link to contextual issue}
+---
+
+# {short title of solved problem and solution}
+
+## Context and Problem Statement
+
+{2-3 sentences explaining the problem and the forces influencing the decision.}
+
+
+## Priorities & Constraints
+
+* {List of concerns or constraints}
+* {Factors influencing the decision}
+
+## Considered Options
+
+* Option 1: Thing
+* Option 2: Another
+
+### Pros and Cons of the Options
+
+#### Option 1: {Brief description}
+
+* Good, because {reason}
+* Bad, because {reason}
+
+## Decision Outcome
+
+Chosen option: [Option 1: Thing]
+
+{Clearly state the chosen option and provide justification. Reference the "Pros and Cons of the Options" section below if applicable.}
+
+### Expected Consequences
+
+* List of outcomes resulting from this decision
+
+
+## More Information
+
+
+
+
+
diff --git a/prometheus.yaml b/prometheus.yaml
index 5501da2b3f1..86ca1d5e7e4 100644
--- a/prometheus.yaml
+++ b/prometheus.yaml
@@ -1,7 +1,6 @@
scrape_configs:
- - job_name: 'zebrad'
+ - job_name: "zebrad"
scrape_interval: 500ms
- metrics_path: '/'
+ metrics_path: "/"
static_configs:
- - targets: ['localhost:9999']
-
+ - targets: ["localhost:9999"]
diff --git a/rust-toolchain.toml b/rust-toolchain.toml
index dd1c8aa4359..57d00db4c00 100644
--- a/rust-toolchain.toml
+++ b/rust-toolchain.toml
@@ -1,3 +1,8 @@
 [toolchain]
-channel = "1.82.0"
+channel = "stable"
diff --git a/tower-batch-control/CHANGELOG.md b/tower-batch-control/CHANGELOG.md
new file mode 100644
index 00000000000..b1abe936f2b
--- /dev/null
+++ b/tower-batch-control/CHANGELOG.md
@@ -0,0 +1,11 @@
+# Changelog
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [0.2.41] - 2025-07-11
+
+First "stable" release. However, be advised that the API may still change
+significantly, so major version bumps may be common.
diff --git a/tower-batch-control/Cargo.toml b/tower-batch-control/Cargo.toml
index 398a2baddbc..60b5f4759ab 100644
--- a/tower-batch-control/Cargo.toml
+++ b/tower-batch-control/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "tower-batch-control"
-version = "0.2.41-beta.17"
+version = "0.2.41"
authors = ["Zcash Foundation ", "Tower Maintainers "]
description = "Tower middleware for batch request processing"
# # Legal
@@ -22,31 +22,31 @@ keywords = ["tower", "batch"]
categories = ["algorithms", "asynchronous"]
[dependencies]
-futures = "0.3.31"
-futures-core = "0.3.28"
-pin-project = "1.1.6"
-rayon = "1.10.0"
-tokio = { version = "1.41.0", features = ["time", "sync", "tracing", "macros"] }
-tokio-util = "0.7.12"
-tower = { version = "0.4.13", features = ["util", "buffer"] }
-tracing = "0.1.39"
-tracing-futures = "0.2.5"
+futures = { workspace = true }
+futures-core = { workspace = true }
+pin-project = { workspace = true }
+rayon = { workspace = true }
+tokio = { workspace = true, features = ["time", "sync", "tracing", "macros"] }
+tokio-util = { workspace = true }
+tower = { workspace = true, features = ["util", "buffer"] }
+tracing = { workspace = true }
+tracing-futures = { workspace = true }
[dev-dependencies]
-color-eyre = "0.6.3"
+color-eyre = { workspace = true }
# This is a transitive dependency via color-eyre.
# Enable a feature that makes tinyvec compile much faster.
-tinyvec = { version = "1.8.0", features = ["rustc_1_55"] }
+tinyvec = { workspace = true, features = ["rustc_1_55"] }
-ed25519-zebra = "4.0.3"
-rand = "0.8.5"
+ed25519-zebra = { workspace = true }
+rand = { workspace = true }
-tokio = { version = "1.41.0", features = ["full", "tracing", "test-util"] }
-tokio-test = "0.4.4"
-tower-fallback = { path = "../tower-fallback/", version = "0.2.41-beta.17" }
-tower-test = "0.4.0"
+tokio = { workspace = true, features = ["full", "tracing", "test-util"] }
+tokio-test = { workspace = true }
+tower-fallback = { path = "../tower-fallback/", version = "0.2.41" }
+tower-test = { workspace = true }
-zebra-test = { path = "../zebra-test/", version = "1.0.0-beta.41" }
+zebra-test = { path = "../zebra-test/", version = "1.0.0" }
[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(tokio_unstable)'] }
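The `{ workspace = true }` entries above rely on Cargo's workspace dependency inheritance: the concrete versions live in a single `[workspace.dependencies]` table in the repository's root `Cargo.toml`, which member crates then reference. A sketch of that table, with versions taken from the pinned entries removed in this diff (the real root manifest has many more entries):

```toml
# Root Cargo.toml (sketch): member crates inherit these via
# `dep = { workspace = true }`, optionally adding `features = [...]` on top.
[workspace.dependencies]
futures = "0.3.31"
pin-project = "1.1.6"
rayon = "1.10.0"
tokio = "1.41.0"
tower = "0.4.13"
tracing = "0.1.39"
```

This keeps every crate in the workspace on the same dependency versions and makes upgrades a one-line change in the root manifest.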
diff --git a/tower-batch-control/LICENSE b/tower-batch-control/LICENSE
index 5428318519b..c4e34c0f7c6 100644
--- a/tower-batch-control/LICENSE
+++ b/tower-batch-control/LICENSE
@@ -1,4 +1,4 @@
-Copyright (c) 2019-2024 Zcash Foundation
+Copyright (c) 2019-2025 Zcash Foundation
Copyright (c) 2019 Tower Contributors
Permission is hereby granted, free of charge, to any
diff --git a/tower-batch-control/src/service.rs b/tower-batch-control/src/service.rs
index 414e2452529..394e38a5b69 100644
--- a/tower-batch-control/src/service.rs
+++ b/tower-batch-control/src/service.rs
@@ -137,6 +137,7 @@ where
tokio::task::Builder::new()
.name(&format!("{} batch", batch_kind))
.spawn(worker.run().instrument(span))
+ .expect("panic on error to match tokio::spawn")
};
#[cfg(not(tokio_unstable))]
let worker_handle = tokio::spawn(worker.run().instrument(span));
diff --git a/tower-batch-control/src/worker.rs b/tower-batch-control/src/worker.rs
index f2266e67100..865a2f37009 100644
--- a/tower-batch-control/src/worker.rs
+++ b/tower-batch-control/src/worker.rs
@@ -323,7 +323,7 @@ where
// We don't schedule any batches on an errored service
self.pending_batch_timer = None;
- // By closing the mpsc::Receiver, we know that that the run() loop will
+ // By closing the mpsc::Receiver, we know that the run() loop will
// drain all pending requests. We just need to make sure that any
// requests that we receive before we've exhausted the receiver receive
// the error:
diff --git a/tower-fallback/CHANGELOG.md b/tower-fallback/CHANGELOG.md
new file mode 100644
index 00000000000..b1abe936f2b
--- /dev/null
+++ b/tower-fallback/CHANGELOG.md
@@ -0,0 +1,11 @@
+# Changelog
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [0.2.41] - 2025-07-11
+
+First "stable" release. However, be advised that the API may still greatly
+change so major version bumps can be common.
diff --git a/tower-fallback/Cargo.toml b/tower-fallback/Cargo.toml
index 5919b1bc632..1b6424ff387 100644
--- a/tower-fallback/Cargo.toml
+++ b/tower-fallback/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "tower-fallback"
-version = "0.2.41-beta.17"
+version = "0.2.41"
authors = ["Zcash Foundation "]
description = "A Tower service combinator that sends requests to a first service, then retries processing on a second fallback service if the first service errors."
license = "MIT OR Apache-2.0"
@@ -16,12 +16,12 @@ keywords = ["tower", "batch"]
categories = ["algorithms", "asynchronous"]
[dependencies]
-pin-project = "1.1.6"
-tower = "0.4.13"
-futures-core = "0.3.28"
-tracing = "0.1.39"
+pin-project = { workspace = true }
+tower = { workspace = true }
+futures-core = { workspace = true }
+tracing = { workspace = true }
[dev-dependencies]
-tokio = { version = "1.41.0", features = ["full", "tracing", "test-util"] }
+tokio = { workspace = true, features = ["full", "tracing", "test-util"] }
-zebra-test = { path = "../zebra-test/", version = "1.0.0-beta.41" }
+zebra-test = { path = "../zebra-test/", version = "1.0.0" }
diff --git a/tower-fallback/tests/fallback.rs b/tower-fallback/tests/fallback.rs
index 8b60481d7b8..486dfb4a47e 100644
--- a/tower-fallback/tests/fallback.rs
+++ b/tower-fallback/tests/fallback.rs
@@ -1,3 +1,5 @@
+//! Tests for tower-fallback
+
use tower::{service_fn, Service, ServiceExt};
use tower_fallback::Fallback;
diff --git a/zebra-chain/CHANGELOG.md b/zebra-chain/CHANGELOG.md
new file mode 100644
index 00000000000..666b7d14483
--- /dev/null
+++ b/zebra-chain/CHANGELOG.md
@@ -0,0 +1,11 @@
+# Changelog
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [1.0.0] - 2025-07-11
+
+First "stable" release. However, be advised that the API may still greatly
+change so major version bumps can be common.
\ No newline at end of file
diff --git a/zebra-chain/Cargo.toml b/zebra-chain/Cargo.toml
index bc1aa13ca73..ee3013e73c5 100644
--- a/zebra-chain/Cargo.toml
+++ b/zebra-chain/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "zebra-chain"
-version = "1.0.0-beta.41"
+version = "1.0.0"
authors = ["Zcash Foundation "]
description = "Core Zcash data structures"
license = "MIT OR Apache-2.0"
@@ -29,20 +29,8 @@ async-error = [
"tokio",
]
-# Mining RPC support
-getblocktemplate-rpcs = [
-]
-
-# Experimental shielded scanning support
-shielded-scan = [
- "zcash_client_backend"
-]
-
# Experimental internal miner support
-# TODO: Internal miner feature functionality was removed at https://github.com/ZcashFoundation/zebra/issues/8180
-# See what was removed at https://github.com/ZcashFoundation/zebra/blob/v1.5.1/zebra-chain/Cargo.toml#L38-L43
-# Restore support when conditions are met. https://github.com/ZcashFoundation/zebra/issues/8183
-internal-miner = []
+internal-miner = ["equihash/solver"]
# Experimental elasticsearch support
elasticsearch = []
@@ -61,14 +49,19 @@ proptest-impl = [
bench = ["zebra-test"]
# Support for transaction version 6
-tx_v6 = [
-"nonempty"
-]
+tx_v6 = []
[dependencies]
# Cryptography
-bitvec = "1.0.1"
-bitflags = "2.5.0"
-bitflags-serde-legacy = "0.1.1"
+bitvec = { workspace = true }
+bitflags = { workspace = true }
+bitflags-serde-legacy = { workspace = true }
@@ -82,103 +75,116 @@ byteorder = "1.5.0"
+blake2b_simd = { workspace = true }
+blake2s_simd = { workspace = true }
+bs58 = { workspace = true, features = ["check"] }
+byteorder = { workspace = true }
# See what was removed at https://github.com/ZcashFoundation/zebra/blob/v1.5.1/zebra-chain/Cargo.toml#L73-L85
# Restore support when conditions are met. https://github.com/ZcashFoundation/zebra/issues/8183
-equihash = "0.2.2"
-
-group = "0.13.0"
+equihash = { workspace = true }
+
+group = { workspace = true }
incrementalmerkletree.workspace = true
-jubjub = "0.10.0"
-lazy_static = "1.4.0"
-tempfile = "3.13.0"
-dirs = "5.0.1"
-num-integer = "0.1.46"
-primitive-types = "0.12.2"
-rand_core = "0.6.4"
-ripemd = "0.1.3"
+jubjub = { workspace = true }
+lazy_static = { workspace = true }
+tempfile = { workspace = true }
+dirs = { workspace = true }
+num-integer = { workspace = true }
+primitive-types = { workspace = true }
+rand_core = { workspace = true }
+ripemd = { workspace = true }
# Matches version used by hdwallet
-secp256k1 = { version = "0.27.0", features = ["serde"] }
-sha2 = { version = "0.10.7", features = ["compress"] }
-uint = "0.10.0"
-x25519-dalek = { version = "2.0.1", features = ["serde"] }
+secp256k1 = { workspace = true, features = ["serde"] }
+sha2 = { workspace = true, features = ["compress"] }
+uint = { workspace = true }
+x25519-dalek = { workspace = true, features = ["serde"] }
+bech32 = { workspace = true }
# ECC deps
-halo2 = { package = "halo2_proofs", version = "0.3.0" }
+halo2 = { package = "halo2_proofs", version = "0.3" }
orchard.workspace = true
zcash_encoding.workspace = true
zcash_history.workspace = true
-zcash_note_encryption = "0.4.0"
+zcash_note_encryption = { workspace = true }
zcash_primitives = { workspace = true, features = ["transparent-inputs"] }
sapling-crypto.workspace = true
zcash_protocol.workspace = true
zcash_address.workspace = true
zcash_transparent.workspace = true
-# Used for orchard serialization
-nonempty = { version = "0.7", optional = true }
+sinsemilla = { version = "0.1" }
# Time
-chrono = { version = "0.4.38", default-features = false, features = ["clock", "std", "serde"] }
-humantime = "2.1.0"
+chrono = { workspace = true, features = ["clock", "std", "serde"] }
+humantime = { workspace = true }
# Error Handling & Formatting
-static_assertions = "1.1.0"
-thiserror = "1.0.64"
-tracing = "0.1.39"
+static_assertions = { workspace = true }
+thiserror = { workspace = true }
+tracing = { workspace = true }
# Serialization
-hex = { version = "0.4.3", features = ["serde"] }
-serde = { version = "1.0.211", features = ["serde_derive", "rc"] }
-serde_with = "3.11.0"
-serde-big-array = "0.5.1"
+hex = { workspace = true, features = ["serde"] }
+serde = { workspace = true, features = ["serde_derive", "rc"] }
+serde_with = { workspace = true }
+serde-big-array = { workspace = true }
# Processing
-futures = "0.3.31"
-itertools = "0.13.0"
-rayon = "1.10.0"
+futures = { workspace = true }
+itertools = { workspace = true }
+rayon = { workspace = true }
# ZF deps
-ed25519-zebra = "4.0.3"
-redjubjub = "0.7.0"
-reddsa = "0.5.1"
+ed25519-zebra = { workspace = true }
+redjubjub = { workspace = true }
+reddsa = { workspace = true }
# Production feature json-conversion
-serde_json = { version = "1.0.132", optional = true }
+serde_json = { workspace = true, optional = true }
# Production feature async-error and testing feature proptest-impl
-tokio = { version = "1.41.0", optional = true }
-
-# Experimental feature shielded-scan
-zcash_client_backend = { workspace = true, optional = true }
+tokio = { workspace = true, optional = true }
# Optional testing dependencies
-proptest = { version = "1.4.0", optional = true }
-proptest-derive = { version = "0.5.0", optional = true }
+proptest = { workspace = true, optional = true }
+proptest-derive = { workspace = true, optional = true }
-rand = { version = "0.8.5", optional = true }
-rand_chacha = { version = "0.3.1", optional = true }
+rand = { workspace = true, optional = true }
+rand_chacha = { workspace = true, optional = true }
-zebra-test = { path = "../zebra-test/", version = "1.0.0-beta.41", optional = true }
+zebra-test = { path = "../zebra-test/", version = "1.0.0", optional = true }
[dev-dependencies]
# Benchmarks
-criterion = { version = "0.5.1", features = ["html_reports"] }
+criterion = { workspace = true, features = ["html_reports"] }
# Error Handling & Formatting
-color-eyre = "0.6.3"
+color-eyre = { workspace = true }
# This is a transitive dependency via color-eyre.
# Enable a feature that makes tinyvec compile much faster.
-tinyvec = { version = "1.8.0", features = ["rustc_1_55"] }
-spandoc = "0.2.2"
-tracing = "0.1.39"
+tinyvec = { workspace = true, features = ["rustc_1_55"] }
+spandoc = { workspace = true }
+tracing = { workspace = true }
# Make the optional testing dependencies required
-proptest = "1.4.0"
-proptest-derive = "0.5.0"
+proptest = { workspace = true }
+proptest-derive = { workspace = true }
-rand = "0.8.5"
-rand_chacha = "0.3.1"
+rand = { workspace = true }
+rand_chacha = { workspace = true }
-tokio = { version = "1.41.0", features = ["full", "tracing", "test-util"] }
+tokio = { workspace = true, features = ["full", "tracing", "test-util"] }
-zebra-test = { path = "../zebra-test/", version = "1.0.0-beta.41" }
+zebra-test = { path = "../zebra-test/", version = "1.0.0" }
orchard = { workspace = true, features = ["test-dependencies"] }
diff --git a/zebra-chain/src/amount.rs b/zebra-chain/src/amount.rs
index f4a81c14893..355a8b63bb0 100644
--- a/zebra-chain/src/amount.rs
+++ b/zebra-chain/src/amount.rs
@@ -416,7 +416,7 @@ where
}
}
-// TODO: add infalliable impls for NonNegative <-> NegativeOrZero,
+// TODO: add infallible impls for NonNegative <-> NegativeOrZero,
// when Rust uses trait output types to disambiguate overlapping impls.
impl std::ops::Neg for Amount
where
@@ -538,6 +538,10 @@ impl Constraint for NegativeAllowed {
/// );
/// ```
#[derive(Clone, Copy, Debug, Eq, PartialEq, Hash, Default)]
+#[cfg_attr(
+ any(test, feature = "proptest-impl"),
+ derive(proptest_derive::Arbitrary)
+)]
pub struct NonNegative;
impl Constraint for NonNegative {
diff --git a/zebra-chain/src/amount/tests/vectors.rs b/zebra-chain/src/amount/tests/vectors.rs
index 933b2824d41..13ed0748d59 100644
--- a/zebra-chain/src/amount/tests/vectors.rs
+++ b/zebra-chain/src/amount/tests/vectors.rs
@@ -180,12 +180,12 @@ fn deserialize_checks_bounds() -> Result<()> {
let mut big_bytes = Vec::new();
(&mut big_bytes)
.write_u64::(big)
- .expect("unexpected serialization failure: vec should be infalliable");
+ .expect("unexpected serialization failure: vec should be infallible");
let mut neg_bytes = Vec::new();
(&mut neg_bytes)
.write_i64::(neg)
- .expect("unexpected serialization failure: vec should be infalliable");
+ .expect("unexpected serialization failure: vec should be infallible");
Amount::::zcash_deserialize(big_bytes.as_slice())
.expect_err("deserialization should reject too large values");
@@ -335,9 +335,7 @@ fn test_sum() -> Result<()> {
let times: usize = (i64::MAX / MAX_MONEY)
.try_into()
.expect("4392 can always be converted to usize");
- let amounts: Vec = std::iter::repeat(MAX_MONEY.try_into()?)
- .take(times + 1)
- .collect();
+ let amounts: Vec = std::iter::repeat_n(MAX_MONEY.try_into()?, times + 1).collect();
let sum_ref = amounts.iter().sum::>();
let sum_value = amounts.into_iter().sum::>();
@@ -357,7 +355,7 @@ fn test_sum() -> Result<()> {
.expect("4392 can always be converted to usize");
let neg_max_money: Amount = (-MAX_MONEY).try_into()?;
let amounts: Vec> =
- std::iter::repeat(neg_max_money).take(times + 1).collect();
+ std::iter::repeat_n(neg_max_money, times + 1).collect();
let sum_ref = amounts.iter().sum::>();
let sum_value = amounts.into_iter().sum::>();
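The `repeat(x).take(n)` to `repeat_n(x, n)` changes above can be sketched in isolation (a minimal example, assuming Rust 1.82+ where `std::iter::repeat_n` is stable):

```rust
fn main() {
    // Old pattern: clone the value indefinitely, then cap the iterator.
    let old_style: Vec<u8> = std::iter::repeat(7u8).take(3).collect();

    // New pattern: ask for exactly `n` repetitions up front.
    // `repeat_n` also moves (rather than clones) the final element.
    let new_style: Vec<u8> = std::iter::repeat_n(7u8, 3).collect();

    assert_eq!(old_style, new_style);
    assert_eq!(new_style, vec![7, 7, 7]);
}
```

Both forms produce the same sequence; the newer one states the intended length directly.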
diff --git a/zebra-chain/src/block/arbitrary.rs b/zebra-chain/src/block/arbitrary.rs
index cf8ce64c9b8..42ccd19fdc8 100644
--- a/zebra-chain/src/block/arbitrary.rs
+++ b/zebra-chain/src/block/arbitrary.rs
@@ -568,7 +568,7 @@ where
+ Copy
+ 'static,
{
- let mut spend_restriction = transaction.coinbase_spend_restriction(height);
+ let mut spend_restriction = transaction.coinbase_spend_restriction(&Network::Mainnet, height);
let mut new_inputs = Vec::new();
let mut spent_outputs = HashMap::new();
@@ -650,7 +650,8 @@ where
+ 'static,
{
let has_shielded_outputs = transaction.has_shielded_outputs();
- let delete_transparent_outputs = CoinbaseSpendRestriction::OnlyShieldedOutputs { spend_height };
+ let delete_transparent_outputs =
+ CoinbaseSpendRestriction::CheckCoinbaseMaturity { spend_height };
let mut attempts: usize = 0;
// choose an arbitrary spendable UTXO, in hash set order
diff --git a/zebra-chain/src/block/commitment.rs b/zebra-chain/src/block/commitment.rs
index ec4ef7d2616..4c41fb9fefd 100644
--- a/zebra-chain/src/block/commitment.rs
+++ b/zebra-chain/src/block/commitment.rs
@@ -125,7 +125,11 @@ impl Commitment {
// NetworkUpgrade::current() returns the latest network upgrade that's activated at the provided height, so
// on Regtest for heights above height 0, it could return NU6, and it's possible for the current network upgrade
// to be NU6 (or Canopy, or any network upgrade above Heartwood) at the Heartwood activation height.
-(Canopy | Nu5 | Nu6 | Nu7, activation_height)
+(Canopy | Nu5 | Nu6 | Nu6_1 | Nu7, activation_height)
if height == activation_height
&& Some(height) == Heartwood.activation_height(network) =>
{
@@ -136,7 +140,11 @@ impl Commitment {
}
}
(Heartwood | Canopy, _) => Ok(ChainHistoryRoot(ChainHistoryMmrRootHash(bytes))),
-(Nu5 | Nu6 | Nu7, _) => Ok(ChainHistoryBlockTxAuthCommitment(
+(Nu5 | Nu6 | Nu6_1 | Nu7, _) => Ok(ChainHistoryBlockTxAuthCommitment(
ChainHistoryBlockTxAuthCommitmentHash(bytes),
)),
}
@@ -164,7 +172,7 @@ impl Commitment {
// - add methods for maintaining the MMR peaks, and calculating the root
// hash from the current set of peaks
// - move to a separate file
-#[derive(Clone, Copy, Eq, PartialEq, Serialize, Deserialize)]
+#[derive(Clone, Copy, Eq, PartialEq, Serialize, Deserialize, Default)]
pub struct ChainHistoryMmrRootHash([u8; 32]);
impl fmt::Display for ChainHistoryMmrRootHash {
diff --git a/zebra-chain/src/block/header.rs b/zebra-chain/src/block/header.rs
index 1bbec3b471c..39b265e0304 100644
--- a/zebra-chain/src/block/header.rs
+++ b/zebra-chain/src/block/header.rs
@@ -7,11 +7,12 @@ use thiserror::Error;
use crate::{
fmt::HexDebug,
+ parameters::Network,
serialization::{TrustedPreallocate, MAX_PROTOCOL_MESSAGE_LEN},
work::{difficulty::CompactDifficulty, equihash::Solution},
};
-use super::{merkle, Hash, Height};
+use super::{merkle, Commitment, CommitmentError, Hash, Height};
#[cfg(any(test, feature = "proptest-impl"))]
use proptest_derive::Arbitrary;
@@ -58,7 +59,7 @@ pub struct Header {
/// without incrementing the block [`version`](Self::version). Therefore,
/// this field cannot be parsed without the network and height. Use
/// [`Block::commitment`](super::Block::commitment) to get the parsed
- /// [`Commitment`](super::Commitment).
+ /// [`Commitment`].
pub commitment_bytes: HexDebug<[u8; 32]>,
/// The block timestamp is a Unix epoch time (UTC) when the miner
@@ -124,6 +125,16 @@ impl Header {
}
}
+ /// Get the parsed block [`Commitment`] for this header.
+ /// Its interpretation depends on the given `network` and block `height`.
+ pub fn commitment(
+ &self,
+ network: &Network,
+ height: Height,
+ ) -> Result {
+ Commitment::from_bytes(*self.commitment_bytes, network, height)
+ }
+
/// Compute the hash of this header.
pub fn hash(&self) -> Hash {
Hash::from(self)
diff --git a/zebra-chain/src/block/merkle.rs b/zebra-chain/src/block/merkle.rs
index 639324f9d82..9d707489803 100644
--- a/zebra-chain/src/block/merkle.rs
+++ b/zebra-chain/src/block/merkle.rs
@@ -1,6 +1,6 @@
//! The Bitcoin-inherited Merkle tree of transactions.
-use std::{fmt, io::Write, iter};
+use std::{fmt, io::Write};
use hex::{FromHex, ToHex};
@@ -404,7 +404,7 @@ impl std::iter::FromIterator for AuthDataRoot {
// https://zips.z.cash/zip-0244#block-header-changes
// Pad with enough leaves to make the tree full (a power of 2).
let pad_count = hashes.len().next_power_of_two() - hashes.len();
- hashes.extend(iter::repeat([0u8; 32]).take(pad_count));
+ hashes.extend(std::iter::repeat_n([0u8; 32], pad_count));
assert!(hashes.len().is_power_of_two());
while hashes.len() > 1 {
diff --git a/zebra-chain/src/block/serialize.rs b/zebra-chain/src/block/serialize.rs
index e763915e499..2dd3ee56f37 100644
--- a/zebra-chain/src/block/serialize.rs
+++ b/zebra-chain/src/block/serialize.rs
@@ -4,6 +4,7 @@ use std::{borrow::Borrow, io};
use byteorder::{LittleEndian, ReadBytesExt, WriteBytesExt};
use chrono::{TimeZone, Utc};
+use hex::{FromHex, FromHexError};
use crate::{
block::{header::ZCASH_BLOCK_VERSION, merkle, Block, CountedHeader, Hash, Header},
@@ -39,7 +40,7 @@ fn check_version(version: u32) -> Result<(), &'static str> {
// but this is not actually part of the consensus rules, and in fact
// broken mining software created blocks that do not have version 4.
// There are approximately 4,000 blocks with version 536870912; this
- // is the bit-reversal of the value 4, indicating that that mining pool
+ // is the bit-reversal of the value 4, indicating that the mining pool
// reversed bit-ordering of the version field. Because the version field
// was not properly validated, these blocks were added to the chain.
//
@@ -63,7 +64,7 @@ fn check_version(version: u32) -> Result<(), &'static str> {
impl ZcashSerialize for Header {
#[allow(clippy::unwrap_in_result)]
fn zcash_serialize(&self, mut writer: W) -> Result<(), io::Error> {
- check_version(self.version).map_err(|msg| io::Error::new(io::ErrorKind::Other, msg))?;
+ check_version(self.version).map_err(io::Error::other)?;
writer.write_u32::(self.version)?;
self.previous_block_hash.zcash_serialize(&mut writer)?;
@@ -194,3 +195,12 @@ impl From> for SerializedBlock {
Self { bytes }
}
}
+
+impl FromHex for SerializedBlock {
+ type Error = FromHexError;
+
+ fn from_hex>(hex: T) -> Result {
+ let bytes = Vec::from_hex(hex)?;
+ Ok(SerializedBlock { bytes })
+ }
+}
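The new `FromHex` impl above means a `SerializedBlock` can be built straight from a hex string. A self-contained sketch of the byte-level conversion it performs (`decode_hex` is a hypothetical stand-in for the `hex` crate's `Vec::from_hex`; the real impl surfaces errors as `FromHexError`):

```rust
// Decode pairs of ASCII hex digits into raw bytes.
fn decode_hex(hex: &str) -> Result<Vec<u8>, String> {
    if hex.len() % 2 != 0 {
        return Err("odd number of hex digits".into());
    }
    (0..hex.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&hex[i..i + 2], 16).map_err(|e| e.to_string()))
        .collect()
}

fn main() {
    // A `SerializedBlock` would wrap these raw block bytes.
    let bytes = decode_hex("00ff10").expect("valid hex");
    assert_eq!(bytes, vec![0x00, 0xff, 0x10]);
}
```

In the real impl, decoding failures (odd length, non-hex digits) map onto the corresponding `FromHexError` variants instead of strings.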
diff --git a/zebra-chain/src/block/tests/generate.rs b/zebra-chain/src/block/tests/generate.rs
index b908b6f9747..4fae246f082 100644
--- a/zebra-chain/src/block/tests/generate.rs
+++ b/zebra-chain/src/block/tests/generate.rs
@@ -131,9 +131,8 @@ fn multi_transaction_block(oversized: bool) -> Block {
}
// Create transactions to be just below or just above the limit
- let transactions = std::iter::repeat(Arc::new(transaction))
- .take(max_transactions_in_block)
- .collect::>();
+ let transactions =
+ std::iter::repeat_n(Arc::new(transaction), max_transactions_in_block).collect::>();
// Add the transactions into a block
let block = Block {
@@ -193,9 +192,7 @@ fn single_transaction_block_many_inputs(oversized: bool) -> Block {
let mut outputs = Vec::new();
// Create inputs to be just below the limit
- let inputs = std::iter::repeat(input)
- .take(max_inputs_in_tx)
- .collect::>();
+ let inputs = std::iter::repeat_n(input, max_inputs_in_tx).collect::>();
// 1 single output
outputs.push(output);
@@ -268,9 +265,7 @@ fn single_transaction_block_many_outputs(oversized: bool) -> Block {
let inputs = vec![input];
// Create outputs to be just below the limit
- let outputs = std::iter::repeat(output)
- .take(max_outputs_in_tx)
- .collect::>();
+ let outputs = std::iter::repeat_n(output, max_outputs_in_tx).collect::>();
// Create a big transaction
let big_transaction = Transaction::V1 {
diff --git a/zebra-chain/src/block/tests/vectors.rs b/zebra-chain/src/block/tests/vectors.rs
index 5ff19ca1092..02764c19ef7 100644
--- a/zebra-chain/src/block/tests/vectors.rs
+++ b/zebra-chain/src/block/tests/vectors.rs
@@ -9,10 +9,8 @@ use crate::{
block::{
serialize::MAX_BLOCK_BYTES, Block, BlockTimeError, Commitment::*, Hash, Header, Height,
},
- parameters::{
- Network::{self, *},
- NetworkUpgrade::*,
- },
+ parameters::{Network, NetworkUpgrade::*},
+ sapling,
serialization::{
sha256d, SerializationError, ZcashDeserialize, ZcashDeserializeInto, ZcashSerialize,
},
@@ -191,88 +189,80 @@ fn block_test_vectors_unique() {
);
}
+/// Checks that:
+///
+/// - the block test vector indexes match the heights in the block data;
+/// - each post-Sapling block has a corresponding final Sapling root;
+/// - each post-Orchard block has a corresponding final Orchard root.
#[test]
-fn block_test_vectors_height_mainnet() {
- let _init_guard = zebra_test::init();
-
- block_test_vectors_height(Mainnet);
-}
-
-#[test]
-fn block_test_vectors_height_testnet() {
+fn block_test_vectors() {
let _init_guard = zebra_test::init();
- block_test_vectors_height(Network::new_default_testnet());
-}
+ for net in Network::iter() {
+ let sapling_anchors = net.sapling_anchors();
+ let orchard_anchors = net.orchard_anchors();
-/// Test that the block test vector indexes match the heights in the block data,
-/// and that each post-sapling block has a corresponding final sapling root.
-fn block_test_vectors_height(network: Network) {
- let (block_iter, sapling_roots) = network.block_sapling_roots_iter();
-
- for (&height, block) in block_iter {
- let block = block
- .zcash_deserialize_into::()
- .expect("block is structurally valid");
- assert_eq!(
- block.coinbase_height().expect("block height is valid").0,
- height,
- "deserialized height must match BTreeMap key height"
- );
+ for (&height, block) in net.block_iter() {
+ let block = block
+ .zcash_deserialize_into::()
+ .expect("block is structurally valid");
+ assert_eq!(
+ block.coinbase_height().expect("block height is valid").0,
+ height,
+ "deserialized height must match BTreeMap key height"
+ );
- if height
- >= Sapling
- .activation_height(&network)
- .expect("sapling activation height is set")
- .0
- {
- assert!(
- sapling_roots.contains_key(&height),
- "post-sapling block test vectors must have matching sapling root test vectors: missing {network} {height}"
+ if height
+ >= Sapling
+ .activation_height(&net)
+ .expect("activation height")
+ .0
+ {
+ assert!(
+ sapling_anchors.contains_key(&height),
+ "post-sapling block test vectors must have matching sapling root test vectors: \
+ missing {net} {height}"
);
+ }
+
+ if height >= Nu5.activation_height(&net).expect("activation height").0 {
+ assert!(
+ orchard_anchors.contains_key(&height),
+ "post-nu5 block test vectors must have matching orchard root test vectors: \
+ missing {net} {height}"
+ );
+ }
}
}
}
-#[test]
-fn block_commitment_mainnet() {
- let _init_guard = zebra_test::init();
-
- block_commitment(Mainnet);
-}
-
-#[test]
-fn block_commitment_testnet() {
- let _init_guard = zebra_test::init();
-
- block_commitment(Network::new_default_testnet());
-}
-
-/// Check that the block commitment field parses without errors.
+/// Checks that the block commitment field parses without errors.
/// For sapling and blossom blocks, also check the final sapling root value.
///
/// TODO: add chain history test vectors?
-fn block_commitment(network: Network) {
- let (block_iter, sapling_roots) = network.block_sapling_roots_iter();
-
- for (height, block) in block_iter {
- let block = block
- .zcash_deserialize_into::()
- .expect("block is structurally valid");
+#[test]
+fn block_commitment() {
+ let _init_guard = zebra_test::init();
- let commitment = block.commitment(&network).unwrap_or_else(|_| {
- panic!("unexpected structurally invalid block commitment at {network} {height}")
- });
+ for net in Network::iter() {
+ let sapling_anchors = net.sapling_anchors();
- if let FinalSaplingRoot(final_sapling_root) = commitment {
- let expected_final_sapling_root = *sapling_roots
- .get(height)
- .expect("unexpected missing final sapling root test vector");
- assert_eq!(
- final_sapling_root,
- crate::sapling::tree::Root::try_from(*expected_final_sapling_root).unwrap(),
- "unexpected invalid final sapling root commitment at {network} {height}"
- );
+ for (height, block) in net.block_iter() {
+ if let FinalSaplingRoot(anchor) = block
+ .zcash_deserialize_into::()
+ .expect("block is structurally valid")
+ .commitment(&net)
+ .expect("unexpected structurally invalid block commitment at {net} {height}")
+ {
+ let expected_anchor = *sapling_anchors
+ .get(height)
+ .expect("unexpected missing final sapling root test vector");
+ assert_eq!(
+ anchor,
+ sapling::tree::Root::try_from(*expected_anchor).unwrap(),
+ "unexpected invalid final sapling root commitment at {net} {height}"
+ );
+ }
}
}
}
diff --git a/zebra-chain/src/block_info.rs b/zebra-chain/src/block_info.rs
new file mode 100644
index 00000000000..7d72380b177
--- /dev/null
+++ b/zebra-chain/src/block_info.rs
@@ -0,0 +1,28 @@
+//! Extra per-block info tracked in the state.
+use crate::{amount::NonNegative, value_balance::ValueBalance};
+
+/// Extra per-block info tracked in the state.
+#[derive(Debug, Clone, Default, PartialEq, Eq)]
+pub struct BlockInfo {
+ /// The pool balances after the block.
+ value_pools: ValueBalance,
+ /// The size of the block in bytes.
+ size: u32,
+}
+
+impl BlockInfo {
+ /// Creates a new [`BlockInfo`] with the given value pools.
+ pub fn new(value_pools: ValueBalance, size: u32) -> Self {
+ BlockInfo { value_pools, size }
+ }
+
+ /// Returns the value pools of this block.
+ pub fn value_pools(&self) -> &ValueBalance {
+ &self.value_pools
+ }
+
+ /// Returns the size of this block.
+ pub fn size(&self) -> u32 {
+ self.size
+ }
+}
diff --git a/zebra-chain/src/chain_tip.rs b/zebra-chain/src/chain_tip.rs
index 04e98ecbff7..9d37048a275 100644
--- a/zebra-chain/src/chain_tip.rs
+++ b/zebra-chain/src/chain_tip.rs
@@ -3,7 +3,6 @@
use std::{future, sync::Arc};
use chrono::{DateTime, Utc};
-use futures::{future::BoxFuture, Future, FutureExt};
use crate::{block, parameters::Network, transaction, BoxError};
@@ -64,11 +63,9 @@ pub trait ChainTip {
/// Returns an error if Zebra is shutting down, or the state has permanently failed.
///
/// See [`tokio::watch::Receiver::changed()`](https://docs.rs/tokio/latest/tokio/sync/watch/struct.Receiver.html#method.changed) for details.
- //
- // TODO:
- // Use async_fn_in_trait or return_position_impl_trait_in_trait when one of them stabilises:
- // https://github.com/rust-lang/rust/issues/91611
- fn best_tip_changed(&mut self) -> BestTipChanged;
+ fn best_tip_changed(
+ &mut self,
+ ) -> impl std::future::Future