8 changes: 8 additions & 0 deletions Cargo.lock


2 changes: 2 additions & 0 deletions forester/Cargo.toml
@@ -43,6 +43,7 @@ reqwest = { workspace = true, features = ["json", "rustls-tls", "blocking"] }
futures = { workspace = true }
thiserror = { workspace = true }
borsh = { workspace = true }
bincode = "1.3"
bs58 = { workspace = true }
hex = { workspace = true }
env_logger = { workspace = true }
@@ -61,6 +62,7 @@ itertools = "0.14"
async-channel = "2.5"
solana-pubkey = { workspace = true }
dotenvy = "0.15"
mwmatching = "0.1.1"

[dev-dependencies]
serial_test = { workspace = true }
164 changes: 164 additions & 0 deletions forester/docs/v1_forester_flows.md
@@ -0,0 +1,164 @@
# Forester V1 Flows (PR: v2 Nullify + Blockhash)

## 1. Transaction Send Flow (Blockhash)

```
┌─────────────────────────────────────────────────────────────────────────────────┐
│ send_batched_transactions │
└─────────────────────────────────────────────────────────────────────────────────┘

┌──────────────────────────────────┐
│ prepare_batch_prerequisites │
│ - fetch queue items │
│ - single RPC: blockhash + │
│ priority_fee (same connection) │
│ - PreparedBatchData: │
│ recent_blockhash │
│ last_valid_block_height │
└──────────────┬───────────────────┘
┌──────────────────────────────────┐
│ for each work_chunk (100 items) │
└──────────────┬───────────────────┘
┌────────────┴────────────┐
│ elapsed > 30s? │
│ YES → refresh blockhash│
│ (pool.get_connection │
│ → rpc.get_latest_ │
│ blockhash) │
│ NO → keep current │
└────────────┬────────────┘
┌──────────────────────────────────┐
│ build_signed_transaction_batch │
│ (recent_blockhash, │
│ last_valid_block_height) │
│ → (txs, chunk_last_valid_ │
│ block_height) │
└──────────────┬───────────────────┘
┌──────────────────────────────────┐
│ execute_transaction_chunk_sending │
│ PreparedTransaction::legacy( │
│ tx, chunk_last_valid_block_ │
│ height) │
│ - send + confirm │
│ - blockhash expiry check via │
│ last_valid_block_height │
└──────────────────────────────────┘

No refetch-before-send. No re-sign.
```
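The refresh decision in the loop above can be sketched as plain Rust. This is a minimal illustration of the "elapsed > 30s" branch; the constant and function names are hypothetical, not the actual forester API:

```rust
use std::time::{Duration, Instant};

// Illustrative constant mirroring the "elapsed > 30s?" check in the diagram.
const BLOCKHASH_REFRESH_INTERVAL: Duration = Duration::from_secs(30);

/// Decide whether the blockhash cached by prepare_batch_prerequisites is
/// stale enough that the next chunk should refetch it before signing.
fn needs_blockhash_refresh(fetched_at: Instant, now: Instant) -> bool {
    now.duration_since(fetched_at) > BLOCKHASH_REFRESH_INTERVAL
}

fn main() {
    let fetched_at = Instant::now();
    // A chunk built 5s after the fetch keeps the cached blockhash.
    assert!(!needs_blockhash_refresh(
        fetched_at,
        fetched_at + Duration::from_secs(5)
    ));
    // A chunk built 31s later triggers pool.get_connection → get_latest_blockhash.
    assert!(needs_blockhash_refresh(
        fetched_at,
        fetched_at + Duration::from_secs(31)
    ));
}
```

Because the check runs per chunk rather than per transaction, a long-running batch refreshes at most once every 30 seconds instead of refetching before every send.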

## 2. State Nullify Instruction Flow (Legacy vs v2)

```
┌─────────────────────────────────────────────────────────────────────────────────┐
│ Registry: nullify instruction paths │
└─────────────────────────────────────────────────────────────────────────────────┘

LEGACY (proof in ix data) v2 (proof in remaining_accounts)
─────────────────────── ────────────────────────────────────

create_nullify_instruction() create_nullify_with_proof_accounts_instruction()
│ │
│ ix data: [change_log, queue_idx, │ ix data: [change_log, queue_idx,
│ leaf_idx, proofs[16][32]] │ leaf_idx] (no proofs)
│ │
│ remaining_accounts: standard │ remaining_accounts: 16 proof
│ (authority, merkle_tree, queue...) │ account pubkeys (key = node bytes)
│ │
▼ ▼
process_nullify() nullify_2 instruction
(proofs from ix data) - validate: 1 change, 1 queue, 1 index
- validate: exactly 16 proof accounts
- extract_proof_nodes_from_remaining_accounts
- process_nullify(..., vec![proof_nodes])

Forester V1 uses nullify_2 only (create_nullify_2_instruction).
```
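The "exactly 16 proof accounts" validation that nullify_2 performs can be sketched as follows. This is an assumption-laden sketch: the names are hypothetical, and the real on-chain code operates on account infos rather than raw byte arrays — the point is that each remaining-account *key* carries the 32 bytes of one proof node:

```rust
// v2 moves the 16 proof nodes (16 × 32 bytes) out of instruction data and
// into remaining_accounts, where each account key IS one node's bytes.
const EXPECTED_PROOF_ACCOUNTS: usize = 16;

/// Sketch of extract_proof_nodes_from_remaining_accounts: reject anything
/// other than exactly 16 proof accounts before process_nullify runs.
fn extract_proof_nodes(remaining_account_keys: &[[u8; 32]]) -> Result<Vec<[u8; 32]>, String> {
    if remaining_account_keys.len() != EXPECTED_PROOF_ACCOUNTS {
        return Err(format!(
            "expected {} proof accounts, got {}",
            EXPECTED_PROOF_ACCOUNTS,
            remaining_account_keys.len()
        ));
    }
    Ok(remaining_account_keys.to_vec())
}

fn main() {
    let keys = vec![[7u8; 32]; 16];
    assert_eq!(extract_proof_nodes(&keys).unwrap().len(), 16);
    // A short proof must be rejected before process_nullify runs.
    assert!(extract_proof_nodes(&keys[..15]).is_err());
}
```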

## 3. Forester V1 State Nullify Pairing Flow

```
┌─────────────────────────────────────────────────────────────────────────────────┐
│ build_instruction_batches (state nullify path) │
└─────────────────────────────────────────────────────────────────────────────────┘

fetch_proofs_and_create_instructions
│ For each state item:
│ create_nullify_with_proof_accounts_instruction (v2)
│ → StateNullifyInstruction { instruction, proof_nodes, leaf_index }
┌─────────────────────────────────────────────────────────────────────────────┐
│ allow_pairing? │
│ batch_size >= 2 AND should_attempt_pairing() │
└─────────────────────────────────────────────────────────────────────────────┘
│ should_attempt_pairing checks:
│ - pair_candidates = n*(n-1)/2 <= 2000 (MAX_PAIR_CANDIDATES)
│ - state_nullify_count <= 96 (MAX_PAIRING_INSTRUCTIONS)
│ - remaining_blocks = last_valid - current > 25 (MIN_REMAINING_BLOCKS_FOR_PAIRING)
├── NO → each nullify → 1 tx (no pairing)
└── YES → pair_state_nullify_batches
│ For each pair (i,j):
│ - pair_fits_transaction_size(ix_i, ix_j)? (serialized <= 1232)
│ - weight = 10000 + proof_overlap_count
│ Max-cardinality matching (mwmatching)
│ - prioritize number of pairs
│ - then maximize proof overlap (fewer unique accounts)
Output: Vec<Vec<Instruction>>
- paired: [ix_a, ix_b] in one tx
- unpaired: [ix] in one tx

Address updates: no pairing, chunked by batch_size only.
```
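The gating and weighting described above can be sketched in plain Rust. The constants come from the doc; the function signatures are hypothetical, and the actual pair selection is done by the mwmatching crate's blossom matching rather than by this check alone:

```rust
// Thresholds from the flow above.
const MAX_PAIR_CANDIDATES: usize = 2000;
const MAX_PAIRING_INSTRUCTIONS: usize = 96;
const MIN_REMAINING_BLOCKS_FOR_PAIRING: u64 = 25;

/// Sketch of should_attempt_pairing: pairing is only worth the matching cost
/// when the candidate count is bounded, the batch is small enough, and the
/// blockhash still has enough valid blocks left.
fn should_attempt_pairing(state_nullify_count: usize, last_valid: u64, current: u64) -> bool {
    let pair_candidates = state_nullify_count * state_nullify_count.saturating_sub(1) / 2;
    pair_candidates <= MAX_PAIR_CANDIDATES
        && state_nullify_count <= MAX_PAIRING_INSTRUCTIONS
        && last_valid.saturating_sub(current) > MIN_REMAINING_BLOCKS_FOR_PAIRING
}

/// Edge weight: a large base so max-weight matching first maximizes the
/// number of pairs, then breaks ties by proof-node overlap.
fn pair_weight(proof_overlap_count: i64) -> i64 {
    10_000 + proof_overlap_count
}

fn main() {
    // 64 items → 64·63/2 = 2016 candidate pairs > 2000 → no pairing.
    assert!(!should_attempt_pairing(64, 1_000, 900));
    // 60 items → 1770 candidates, 100 blocks of validity left → pairing allowed.
    assert!(should_attempt_pairing(60, 1_000, 900));
    assert_eq!(pair_weight(3), 10_003);
}
```

The base weight of 10000 dwarfs any realistic overlap count, which is what makes the matching maximize cardinality first and only then prefer pairs sharing proof accounts.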

## 4. End-to-End Forester V1 State Tree Flow

```
Queue (state nullifier) Indexer (proofs)
│ │
└──────────┬─────────────────┘
prepare_batch_prerequisites
- queue items
- blockhash + last_valid_block_height
- priority_fee
for chunk in work_items.chunks(100):
refresh blockhash if 30s elapsed
build_signed_transaction_batch
├─ fetch_proofs_and_create_instructions
│ - state: v2 nullify ix (proof in remaining_accounts)
│ - address: update ix
├─ build_instruction_batches
│ - address: chunk by batch_size
│ - state nullify: pair if allow_pairing else 1-per-tx
└─ create_smart_transaction per batch
execute_transaction_chunk_sending
- PreparedTransaction::legacy(tx, chunk_last_valid_block_height)
- send + confirm with blockhash expiry check
```
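The expiry check in the final send/confirm step reduces to a single comparison. A minimal sketch, assuming an illustrative function name; PreparedTransaction::legacy carries last_valid_block_height for exactly this purpose:

```rust
/// Once the cluster's block height passes the transaction's
/// last_valid_block_height, its blockhash can no longer be included in a
/// block, so confirmation is abandoned as expired.
fn blockhash_expired(current_block_height: u64, last_valid_block_height: u64) -> bool {
    current_block_height > last_valid_block_height
}

fn main() {
    assert!(!blockhash_expired(100, 150)); // still confirmable
    assert!(!blockhash_expired(150, 150)); // boundary: still valid at exactly last_valid
    assert!(blockhash_expired(151, 150)); // expired; would need a fresh blockhash to resend
}
```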

47 changes: 36 additions & 11 deletions forester/src/processor/v1/helpers.rs
@@ -11,8 +11,8 @@ use forester_utils::{rpc_pool::SolanaRpcPool, utils::wait_for_indexer};
use light_client::{indexer::Indexer, rpc::Rpc};
use light_compressed_account::TreeType;
use light_registry::account_compression_cpi::sdk::{
create_nullify_instruction, create_update_address_merkle_tree_instruction,
CreateNullifyInstructionInputs, UpdateAddressMerkleTreeInstructionInputs,
create_nullify_2_instruction, create_update_address_merkle_tree_instruction,
CreateNullify2InstructionInputs, UpdateAddressMerkleTreeInstructionInputs,
};
use solana_program::instruction::Instruction;
use tokio::time::Instant;
@@ -32,14 +32,27 @@ use crate::{
errors::ForesterError,
};

#[derive(Clone, Debug)]
pub enum PreparedV1Instruction {
    AddressUpdate(Instruction),
    StateNullify(StateNullifyInstruction),
}

#[derive(Clone, Debug)]
pub struct StateNullifyInstruction {
    pub instruction: Instruction,
    pub proof_nodes: Vec<[u8; 32]>,
    pub leaf_index: u64,
}

/// All work items must be of a single type and belong to the same tree
pub async fn fetch_proofs_and_create_instructions<R: Rpc>(
authority: Pubkey,
derivation: Pubkey,
pool: Arc<SolanaRpcPool<R>>,
epoch: u64,
work_items: &[WorkItem],
) -> crate::Result<(Vec<MerkleProofType>, Vec<Instruction>)> {
) -> crate::Result<(Vec<MerkleProofType>, Vec<PreparedV1Instruction>)> {
let mut proofs = Vec::new();
let mut instructions = vec![];

@@ -360,7 +373,7 @@
},
epoch,
);
instructions.push(instruction);
instructions.push(PreparedV1Instruction::AddressUpdate(instruction));
}

// Process state proofs and create instructions
@@ -375,21 +388,33 @@
for (item, proof) in state_items.iter().zip(state_proofs.into_iter()) {
proofs.push(MerkleProofType::StateProof(proof.clone()));

let instruction = create_nullify_instruction(
CreateNullifyInstructionInputs {
let instruction = create_nullify_2_instruction(
CreateNullify2InstructionInputs {
nullifier_queue: item.tree_account.queue,
merkle_tree: item.tree_account.merkle_tree,
change_log_indices: vec![proof.root_seq % STATE_MERKLE_TREE_CHANGELOG],
leaves_queue_indices: vec![item.queue_item_data.index as u16],
indices: vec![proof.leaf_index],
proofs: vec![proof.proof.clone()],
change_log_index: proof.root_seq % STATE_MERKLE_TREE_CHANGELOG,
leaves_queue_index: item.queue_item_data.index as u16,
index: proof.leaf_index,
proof: proof
.proof
.clone()
.try_into()
.map_err(|_| ForesterError::General {
error: "Failed to convert state proof to fixed array".to_string(),
})?,
authority,
derivation,
is_metadata_forester: false,
},
epoch,
);
instructions.push(instruction);
instructions.push(PreparedV1Instruction::StateNullify(
StateNullifyInstruction {
instruction,
proof_nodes: proof.proof,
leaf_index: proof.leaf_index,
},
));
}

Ok((proofs, instructions))
Expand Down