Caution: Review failed. The pull request is closed.

Walkthrough

Gates batch header saving on EVM verification, surfaces server-side execution errors via execute_result.json, tightens runtime error checks and logging, removes several host/test build scripts, adds a workspace dependency, makes serde and executor behaviors more tolerant, and adjusts timing/delays across proving components.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Shadow as shadow-prove
    participant Server as prover-server (queue)
    participant Verifier as EVM-verifier
    participant FS as Filesystem (execute_result.json / plonk_proof.json)
    Note over Shadow,Server: High-level failing-proof flow
    Shadow->>Server: query proof for batch
    Server->>Verifier: verify batch (save_batch_header)
    alt verification fails
        Verifier-->>Server: verification error
        Server->>FS: write execute_result.json {error_code,error_msg}
        Server-->>Shadow: response includes error_code
        Shadow->>Shadow: abort/return (short-circuit)
    else verification succeeds
        Verifier-->>Server: success + blob_info/versioned_hash
        Server->>FS: save batch_header and plonk_proof.json
        Server-->>Shadow: proof_data present
        Shadow->>Shadow: continue prove workflow
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
prover/crates/core/src/database.rs (1)
191-206: Based on my verification, I can now provide the rewritten review comment. The original concerns are substantiated:

- Confirmed: This is an intentional design choice (new file, not a regression)
- Confirmed: The semantic change from `unreachable!` (panic) to `Bytecode::default()` (silent fallback) is real
- Confirmed: The comment still states "this is an error" but the implementation treats missing code as recoverable
- Confirmed: The `dev_trace!` macro is available in the codebase (used elsewhere in the same file)
- Confirmed: The use of `println!` is ad-hoc and inconsistent with crate logging patterns
Silent fallback to `Bytecode::default()` masks potential state inconsistencies; the comment is misleading and logging is inadequate

This code was intentionally written to return `Ok(Bytecode::default())` when code is missing, but the preceding comment (lines 191–195) still claims "this is an error," creating a documentation-code mismatch. More importantly, in a prover context, silently executing with empty bytecode instead of crashing on an impossible state can hide correctness bugs: if code is expected to be loaded, returning default bytecode may produce incorrect proofs without any diagnostic signal.

Issues:
- The comment contradicts the implementation: it claims missing code is an error, but the code treats it as a recoverable condition.
- `println!` is ad-hoc debugging output, not suitable for production. The codebase already uses `dev_trace!` (visible on lines 77 and 131).
- There is no metric or rate-limiting on this fallback, so it's difficult to detect if it happens in production.
Recommendation: either
- Confirm this is intentional: update the comment to describe the rationale (e.g., "We tolerate missing code and fall back to empty bytecode to avoid killing the prover"), and replace `println!` with `dev_trace!` or a structured `warn!` log with a metric counter.
- Or revert to fail-fast: keep `unreachable!` or `panic!` to catch state inconsistencies early, and add detailed error context (block number, transaction, address, code hash).

Please clarify whether missing code is actually safe in all execution paths and verify that call sites handle "empty code" semantics correctly.
🧹 Nitpick comments (7)
prover/crates/primitives/src/types/tx.rs (3)

109-112: `authorization_list` is now robust to missing / null trace fields

Adding `#[serde(default)]` on top of `DefaultOnNull` is a good pattern here: `authorizationList` will deserialize to an empty `AuthorizationList` both when the field is omitted and when it is explicitly `null`, avoiding hard failures on older / non-7702 traces.

If upstream trace producers can also omit `accessList`, you might consider mirroring this and adding `#[serde(default)]` there for consistency, but that's optional and depends on the actual JSON shape you see in the wild.
561-574: Strengthen mainnet trace test with assertions (and avoid extra clone)

Right now this test only prints `tx_trace.data` vs `typed_tx.data`, so it won't catch regressions where they diverge but both deserialize fine. Also, the `.clone()` on the parsed `Value` is unnecessary.

Consider tightening it up like this:

```rust
let trace: serde_json::Value = serde_json::from_str(TRACE).unwrap();
let txs = trace[0][1]["transactions"].clone();
for tx in txs.as_array().unwrap() {
    let tx_trace: TransactionTrace = serde_json::from_value(tx.clone()).unwrap();
    let typed_tx = tx_trace.try_build_typed_tx().unwrap();
    assert_eq!(tx_trace.data(), typed_tx.data(), "trace vs typed_tx data mismatch");
}
```

This keeps the smoke-test behavior but also turns data mismatches into failing tests.
576-587: Make 7702 authorization list test assertive rather than just logging

This test is valuable for 7702, but it currently only logs the authorization lists. It will still pass if `authorization_list` is silently dropped or mismatched.

You could turn it into a stronger contract like:

```rust
let trace: serde_json::Value = serde_json::from_str(TRACE_7702).unwrap();
let txs = trace[0][0]["transactions"].clone();
for tx in txs.as_array().unwrap() {
    let tx_trace: TransactionTrace = serde_json::from_value(tx.clone()).unwrap();
    let typed_tx = tx_trace.try_build_typed_tx().unwrap();
    let trace_auth = tx_trace.authorization_list();
    let tx_auth = typed_tx.authorization_list().expect("7702 tx should have authorization_list");
    assert_eq!(trace_auth.as_slice(), tx_auth, "trace vs typed_tx authorization_list mismatch");
}
```

This directly validates that the serde defaults plus `try_build_typed_tx` preserve the authorization list for 7702 transactions.

prover/crates/core/src/executor/mod.rs (1)
95-99: Per-transaction `println!` may be too noisy; consider using existing logging macros or gating

This `println!` will execute for every transaction on all non-zkvm targets, regardless of build profile. On a busy mainnet environment, that can generate a lot of stdout traffic and bypasses whatever filtering/formatting your existing `dev_debug!`/`dev_trace!`/metrics stack already provides.

If this is intended for temporary diagnostics, consider instead:
- Reusing the existing logging macros (at an appropriate level), and/or
- Gating this behind a dedicated feature flag or env-controlled debug mode so it doesn't spam production logs.

Please double-check that this level of per-tx logging is acceptable for your mainnet prover workloads and won't overload stdout or log collection.
prover/bin/challenge/src/handler.rs (2)
465-466: Consider adjusting the log level for diagnostic output.

The special "======>" prefix and verbose output of the full `param` structure suggest these may be temporary diagnostic logs. If these logs are intended for production, consider the following:
- Change the log level from `info` to `debug` to reduce log verbosity in production environments.
- The full `param` structure could be quite large and may clutter logs.
- If these are temporary debugging logs, they should be removed before merging to main.

Apply this diff if you want to keep the logs but reduce verbosity:

```diff
- log::info!("======> batch inspect: decode tx.input, version = {:#?}", version);
- log::info!("======> batch inspect: decode tx.input, param = {:#?}", param);
+ log::debug!("batch inspect: decode tx.input, version = {:#?}", version);
+ log::debug!("batch inspect: decode tx.input, param = {:#?}", param);
```

Alternatively, if these are temporary, consider removing them entirely.
579-579: Consider adding a newline at end of file.

Most style guides recommend ending files with a newline character. This is often enforced by formatters and can prevent unnecessary diffs in future commits.
prover/bin/challenge/src/external_sign.rs (1)
140-152: Read response body once before branching on status; current control flow prevents double execution, but refactoring improves clarity

The current code is functionally correct: the early `return` in the error branch (line 149) ensures only one `response.text().await?` call executes per request. However, the suggested refactoring is still worthwhile for clarity and maintainability: reading the body once upfront and then branching on `status` eliminates the structural ambiguity and avoids the appearance of redundant awaits.

Note: the codebase uses reqwest 0.11 (not 0.12), though both versions have identical body consumption semantics (`text()` consumes the body; subsequent calls return empty).

The suggested refactoring remains valid:

```diff
 log::debug!("===payload: {:?}", serde_json::to_string(payload).unwrap());
 let response: reqwest::Response = self.client.post(url).json(&payload).send().await?;
+let status = response.status();
 log::info!("===do_request response: {:?}", response);
-
-let status = response.status();
-if !status.is_success() {
-    let text = response.text().await?;
-    log::info!("===do_request response text: {:?}", &text);
-    return Err(format!("ext_sign response status not ok: {:?}", status).into());
-}
-Ok(response.text().await?)
+
+let text = response.text().await?;
+if !status.is_success() {
+    log::info!("===do_request response text: {:?}", &text);
+    return Err(format!("ext_sign response status not ok: {:?}", status).into());
+}
+Ok(text)
```

Consider using `warn!` or `error!` for the non-success case instead of `info!`, based on your logging strategy.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`prover/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (17)
- `prover/.gitignore` (1 hunks)
- `prover/bin/challenge/src/external_sign.rs` (3 hunks)
- `prover/bin/challenge/src/handler.rs` (3 hunks)
- `prover/bin/host/build.rs` (0 hunks)
- `prover/bin/server/src/queue.rs` (3 hunks)
- `prover/bin/server/src/server.rs` (2 hunks)
- `prover/bin/shadow-prove/src/main.rs` (1 hunks)
- `prover/bin/shadow-prove/src/shadow_prove.rs` (2 hunks)
- `prover/bin/shadow-prove/src/shadow_rollup.rs` (3 hunks)
- `prover/crates/core/Cargo.toml` (1 hunks)
- `prover/crates/core/src/database.rs` (1 hunks)
- `prover/crates/core/src/executor/mod.rs` (2 hunks)
- `prover/crates/primitives/src/types/tx.rs` (2 hunks)
- `prover/tests/algebra/host/build.rs` (0 hunks)
- `prover/tests/bls12381/host/build.rs` (0 hunks)
- `prover/tests/keccak256/host/build.rs` (0 hunks)
- `prover/tests/zstd/host/build.rs` (0 hunks)
💤 Files with no reviewable changes (5)
- prover/tests/keccak256/host/build.rs
- prover/tests/bls12381/host/build.rs
- prover/tests/zstd/host/build.rs
- prover/tests/algebra/host/build.rs
- prover/bin/host/build.rs
🧰 Additional context used
🧬 Code graph analysis (3)
prover/bin/shadow-prove/src/main.rs (1)
- contracts/src/deploy-utils.ts (1): `sleep` (166-172)

prover/crates/primitives/src/types/tx.rs (1)
- prover/crates/primitives/src/lib.rs (4): `data` (181-181), `data` (389-391), `authorization_list` (187-187), `authorization_list` (397-399)

prover/bin/server/src/queue.rs (5)
- prover/crates/morph-executor/client/src/lib.rs (1): `verify` (9-63)
- prover/crates/morph-executor/client/src/verifier/blob_verifier.rs (1): `verify` (13-48)
- prover/crates/morph-executor/client/src/verifier/evm_verifier.rs (1): `verify` (17-20)
- prover/crates/morph-executor/host/src/lib.rs (1): `get_blob_info` (36-39)
- prover/bin/host/src/main.rs (1): `load_trace` (28-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build
- GitHub Check: Analyze (rust)
- GitHub Check: Analyze (go)
🔇 Additional comments (11)
prover/.gitignore (1)
27-29: LGTM!

The addition of the `proof` directory to the gitignore is appropriate and follows the existing conventions in the file. This aligns well with the PR's objective of preventing proof-related generated artifacts from being checked in.

prover/bin/shadow-prove/src/shadow_rollup.rs (2)
74-76: LGTM on log level promotion.

Promoting this log from debug to info is appropriate for mainnet visibility: operators can now readily observe when batches are skipped because they've already been proven.
211-213: Verify intent: condition also rejects single-block batches.

The condition `blocks.0 >= blocks.1` rejects both empty batches (start > end) and single-block batches (start == end). Given that the error message says "blocks is empty", was rejecting single-block batches intended?

If single-block batches are valid, the condition should use strict inequality:

```diff
-    if blocks.0 >= blocks.1 {
+    if blocks.0 > blocks.1 {
         return Err(String::from("blocks is empty"));
     }
```

prover/crates/core/Cargo.toml (1)
24-24: Workspace dependency for `cfg-if` looks appropriate

Declaring `cfg-if` as a workspace dependency here matches its use in the executor module and keeps versions centralized at the workspace level. No issues from this file alone.

If not already present, please confirm that `cfg-if` is defined in the workspace root and matches the intended version for the prover crates.
286-290: Based on my investigation, I cannot access the specific repository containing `prover/crates/core/src/executor/mod.rs` referenced in the review. The cloned repository (morph-l2/morph) does not contain this file or the `zktrie.update_account` code pattern.

However, I can analyze the review comment based on the code snippet provided and established error-handling best practices. Let me provide my assessment:
Key findings:
- The code snippet shows `unwrap_or_else(|e| println!(...))` converting a potential error into a log message while continuing execution
- This pattern is inherently risky in prover code where state consistency is critical
- The concerns raised about silent state divergence are valid in principle
- The error message formatting issue is real (labels don't match argument order)
Limitations:
- Cannot verify the full `commit_changes_inner` context
- Cannot confirm if there are external guarantees making this safe
- Cannot assess whether this change was intentional or introduces new risk
Given that:
- The review raises legitimate concerns about error handling safety
- The code pattern (error swallowing in commit path) is objectively risky
- But full verification of context and intent is not possible without repo access
Here is my rewritten review comment:
Error swallowing in `zktrie.update_account` risks silent state divergence in the prover commitment path

The `unwrap_or_else(|e| println!(...))` pattern converts potential update failures into log output while execution continues. In a prover commitment path, this risks:
- Silently ignoring account state update failures
- Diverging from EVM state without clear diagnostics or failure signals
- Committing incorrect roots

Unless there are strong external guarantees that `update_account` failures are benign, this should either:
- Propagate errors upward (refactor `commit_changes_inner` to return `Result`), or
- Aggregate failures and treat any non-empty set as a hard error, or
- Use structured error-level logging with both address and error details (not bare `println!`)

Also, the message format is confusing: in `"failed to update account: {:?}, address: {:?}"` the placeholder labels don't match the argument order, so the error value prints under the address label. Either swap the arguments or relabel the placeholders for clarity.

prover/bin/shadow-prove/src/main.rs (1)
75-75: Verify the impact of increasing the loop interval from 12s to 30s.

This change increases the main loop cadence by 2.5x, which could impact the responsiveness of the shadow proving system to new batches.
Please confirm this timing change is intentional and assess whether it could delay batch processing or create backlogs during high-throughput periods.
prover/bin/shadow-prove/src/shadow_prove.rs (2)
113-116: Good error handling for server-reported errors.

The error_code check properly short-circuits the prove workflow when the server reports an error, preventing unnecessary work and clearly logging the failure reason.
167-170: LGTM! Consistent error handling in the retry loop.

The early return on error_code aligns with the error handling pattern established earlier in the function, ensuring the prove workflow terminates appropriately when errors are detected during retries.
prover/bin/server/src/server.rs (2)
227-250: Good structured error handling via execute_result.json.

The code correctly reads error information from execute_result.json and short-circuits proof loading when errors are present. The error handling gracefully degrades when the file cannot be opened.
289-289: LGTM! Correct condition to preserve specific error messages.

The updated condition ensures that specific error messages (from execute_result.json or other sources) are not overwritten with the generic "No proof was found" message.
prover/bin/server/src/queue.rs (1)
89-92: Good error recovery on verification failure.

The code correctly aborts proof generation when batch header saving fails (verification error) and preserves the trace for debugging before moving to the next batch.
```rust
let rt = self.do_request(&self.url, &req_data).await?;
log::debug!("ext_sign response: {:?}", rt);
log::info!("ext_sign rt: {:?}", rt);

let response: Response = serde_json::from_str(&rt)?;
if response.result.sign_datas.is_empty() {
```
Reconsider logging full external sign response at info level
`rt` holds the full JSON response from the external signing service, including signatures and transaction-related data. Logging this at info on every successful call can:
- Leak user/transaction details and signatures into standard production logs.
- Significantly increase log volume on a hot path.
Unless you explicitly want this in normal runtime logs, consider either:
```diff
- log::info!("ext_sign rt: {:?}", rt);
+ log::debug!("ext_sign rt: {:?}", rt);
```

or gating it behind a feature flag / config so it can be enabled only during targeted debugging.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Current code:

```rust
let rt = self.do_request(&self.url, &req_data).await?;
log::debug!("ext_sign response: {:?}", rt);
log::info!("ext_sign rt: {:?}", rt);
let response: Response = serde_json::from_str(&rt)?;
if response.result.sign_datas.is_empty() {
```

Suggested change:

```rust
let rt = self.do_request(&self.url, &req_data).await?;
log::debug!("ext_sign rt: {:?}", rt);
let response: Response = serde_json::from_str(&rt)?;
if response.result.sign_datas.is_empty() {
```
```rust
if !batch_proof.proof_data.is_empty() {
    log::info!("query proof and prove state: {:#?}", batch_index);
    let batch_header = batch_info.fill_ext(batch_proof.batch_header.clone()).encode();
    sleep(Duration::from_secs(600)).await;
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# First, let's examine the code around line 177 in handler.rs with context
echo "=== Code around line 177 ==="
sed -n '160,195p' prover/bin/challenge/src/handler.rs
```

Repository: morph-l2/morph
Length of output: 1721
🏁 Script executed:

```bash
#!/bin/bash
# Check recent commits affecting this file
echo "=== Recent commits for handler.rs ==="
git log --oneline -20 -- prover/bin/challenge/src/handler.rs
echo -e "\n=== Detailed commit info for the sleep addition ==="
# Try to find the commit that added this sleep
git log -p --all -S 'Duration::from_secs(600)' -- prover/bin/challenge/src/handler.rs | head -100
```

Repository: morph-l2/morph
Length of output: 177
🏁 Script executed:

```bash
#!/bin/bash
# Get full commit details
echo "=== Full commit details ==="
git show 6236577
echo -e "\n=== Check for PR related to this commit ==="
# Try to find associated PR
git log --all --grep="Execute before prove" --oneline
git log --all --grep="sleep" --oneline | head -10
```

Repository: morph-l2/morph
Length of output: 50443
🏁 Script executed:

```bash
#!/bin/bash
# Search for comments about timing, proof, or delays near the sleep
echo "=== Searching for related comments or timing logic ==="
rg -n -B5 -A5 "query_proof|prove_state|proof_data" prover/bin/challenge/src/handler.rs | head -80
echo -e "\n=== Searching for any TODO or FIXME comments in handler.rs ==="
rg -n "TODO|FIXME|HACK|XXX" prover/bin/challenge/src/handler.rs
echo -e "\n=== Looking for other timing/delay related code ==="
rg -n "Duration|sleep|delay" prover/bin/challenge/src/handler.rs
```

Repository: morph-l2/morph
Length of output: 4746
🏁 Script executed:

```bash
#!/bin/bash
# Check git history to see if this sleep was added in this commit
echo "=== Check git log with -p limited output for the sleep ==="
git log --all -p -- prover/bin/challenge/src/handler.rs | grep -A10 -B10 "Duration::from_secs(600)" | head -50
echo -e "\n=== Check line 110 context ==="
sed -n '105,115p' prover/bin/challenge/src/handler.rs
echo -e "\n=== Check the full context around both prove_state calls ==="
sed -n '195,210p' prover/bin/challenge/src/handler.rs
```

Repository: morph-l2/morph
Length of output: 2785
🏁 Script executed:

```bash
#!/bin/bash
# Let me verify the full extent of the inconsistency
echo "=== Full comparison of both prove_state call paths ==="
echo "PATH 1 (line 173-179): Early query finds proof"
sed -n '173,179p' prover/bin/challenge/src/handler.rs
echo -e "\nPATH 2 (line 200-205): Status PROVED finds proof"
sed -n '200,205p' prover/bin/challenge/src/handler.rs
echo -e "\n=== Checking if there's any documentation about L1 finality timing ==="
rg -n "finality|L1|confirmation" prover/bin/challenge/src/handler.rs | head -20
echo -e "\n=== Check challenge deadline or timing constants ==="
rg -n "CHALLENGE|challenge" prover/bin/challenge/src/handler.rs | head -20
```

Repository: morph-l2/morph
Length of output: 3523
Based on my thorough verification, I have found critical evidence that confirms and extends the original review concern:
Key findings:
- The 600-second sleep is newly added in this commit (shown by `+` in git log)
- Inconsistent application: there are two identical code paths that both call `prove_state` with proof data:
- PATH 1 (lines 173-179): queries proof → sleeps 600 seconds → calls `prove_state`
- PATH 2 (line 200-205): Queries proof → NO sleep → calls prove_state
- No explanation: No comment explaining the rationale
- Hardcoded value: Not configurable, not environment-variable driven
- Challenge deadline concern: The code mentions a challenge period (3-day window), and a 10-minute blocking delay could impact responsiveness
Rewritten Review Comment:
Resolve the inconsistent and unexplained 600-second delay before proving state.
The 600-second (10-minute) blocking sleep at line 177 is newly added but appears only in one of two identical code paths that prove state with proof data:
- Path 1 (lines 173-179): `query_proof` finds proof → sleeps 600s → `prove_state`
- Path 2 (lines 200-205): status `PROVED` finds proof → no sleep → `prove_state`
Both paths perform identical operations on proof data, yet behave differently. This inconsistency suggests either:
- The sleep should be applied to both paths (if needed for L1 finality or other timing requirements), or
- It should be removed from line 177 (if it was added as temporary debugging)
The hardcoded 10-minute delay blocks the entire event loop and could impact challenge responsiveness, especially given the code references a challenge period window. Without explanation or configurability, this is difficult to maintain and verify.
Required actions:
- Add an explanatory comment clarifying why this specific delay is necessary
- Either apply the delay consistently to both code paths or remove it
- If this is addressing L1 finality or another timing requirement, make it configurable via environment variable rather than hardcoded
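The configurable-delay action could look like this sketch; the `PROVE_STATE_DELAY_SECS` variable name is hypothetical:

```rust
use std::time::Duration;

// Resolve the pre-prove delay from an env-style value with a default,
// instead of a hardcoded Duration::from_secs(600). A caller would pass
// std::env::var("PROVE_STATE_DELAY_SECS").ok().as_deref() and await the
// result with tokio::time::sleep so the event loop is not blocked.
fn prove_state_delay(env_value: Option<&str>, default_secs: u64) -> Duration {
    let secs = env_value
        .and_then(|v| v.parse::<u64>().ok()) // ignore unparsable values
        .unwrap_or(default_secs);
    Duration::from_secs(secs)
}
```

Applying the same helper on both `prove_state` paths would also resolve the inconsistency, since each path would wait the same configured duration.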
🤖 Prompt for AI Agents
In prover/bin/challenge/src/handler.rs around line 177, remove the unexplained
hardcoded 600-second sleep or make its behavior explicit and consistent: either
apply the same delay to the other identical proof-handling path or remove it
entirely; if the delay is required for L1 finality or timing, replace the
hardcoded value with a configurable Duration loaded from an environment variable
or config (with a sensible default), add a clear comment explaining the
rationale and units, and ensure the implementation uses an async, non-blocking
wait and is applied to both code paths that call prove_state so the behavior is
consistent.
```rust
if let Ok(batch_info) = verify_result {
    let blob_info = morph_executor_host::get_blob_info(blocks).unwrap();
    let (versioned_hash, _) = BlobVerifier::verify(&blob_info, blocks.len()).unwrap();

    // Save batch_header
    // | batch_data_hash | versioned_hash | sequencer_root |
    // |-----------------|----------------|----------------|
    // | bytes32         | bytes32        | bytes32        |
    let mut batch_header: Vec<u8> = Vec::with_capacity(96);
    batch_header.extend_from_slice(&batch_info.data_hash().0);
    batch_header.extend_from_slice(&versioned_hash.0);
    batch_header.extend_from_slice(&batch_info.sequencer_root().0);
    let mut batch_file = File::create(format!("{}/batch_header.data", proof_dir)).unwrap();
    batch_file.write_all(&batch_header[..]).expect("failed to batch_header");
    true
```
Replace panicking unwrap() calls with proper error handling.
Lines 126, 127, and 137 use unwrap(), which will panic and crash the prover service if the operations fail. This could happen due to filesystem issues, invalid data, or blob verification failures.
Consider returning Result<bool, anyhow::Error> from save_batch_header and propagating errors properly:
```diff
-fn save_batch_header(blocks: &mut Vec<BlockTrace>, batch_index: u64) -> bool {
+fn save_batch_header(blocks: &mut Vec<BlockTrace>, batch_index: u64) -> Result<bool, anyhow::Error> {
     let proof_dir = PROVER_PROOF_DIR.to_string() + format!("/batch_{}", batch_index).as_str();
     std::fs::create_dir_all(&proof_dir).expect("failed to create proof path");
     blocks.iter_mut().for_each(|block| block.flatten());
     let verify_result = EVMVerifier::verify(blocks);
     if let Ok(batch_info) = verify_result {
-        let blob_info = morph_executor_host::get_blob_info(blocks).unwrap();
-        let (versioned_hash, _) = BlobVerifier::verify(&blob_info, blocks.len()).unwrap();
+        let blob_info = morph_executor_host::get_blob_info(blocks)?;
+        let (versioned_hash, _) = BlobVerifier::verify(&blob_info, blocks.len())?;
         let mut batch_header: Vec<u8> = Vec::with_capacity(96);
         batch_header.extend_from_slice(&batch_info.data_hash().0);
         batch_header.extend_from_slice(&versioned_hash.0);
         batch_header.extend_from_slice(&batch_info.sequencer_root().0);
-        let mut batch_file = File::create(format!("{}/batch_header.data", proof_dir)).unwrap();
-        batch_file.write_all(&batch_header[..]).expect("failed to batch_header");
-        true
+        let mut batch_file = File::create(format!("{}/batch_header.data", proof_dir))?;
+        batch_file.write_all(&batch_header[..])?;
+        Ok(true)
     } else {
         // ... error handling ...
-        false
+        Ok(false)
     }
 }
```
}Then update the caller at line 89 to handle the Result.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In prover/bin/server/src/queue.rs around lines 125 to 139, the code uses
unwrap() on morph_executor_host::get_blob_info, BlobVerifier::verify, and
filesystem operations which will panic; change save_batch_header to return
Result<bool, anyhow::Error> and replace unwrap()/expect() with ? or map_err(...)
to propagate errors (convert errors into anyhow::Error when needed), write the
file using File::create(...)? and write_all(...)? and return Ok(true) on
success; finally update the caller at the call site around line 89 to handle the
Result (propagate the error or handle it explicitly) instead of assuming
success.
```rust
#[test]
fn test_save_execute() {
    let batch_index = 102u64;

    let mut blocks = load_trace("../../testdata/viridian/eip7702_traces.json");
    println!("blocks.len(): {:?}", blocks.len());
    let traces = blocks.first_mut().unwrap();

    if !save_batch_header(traces, batch_index) {
        save_trace(batch_index, traces);
        println!("save_batch_header error");
    } else {
        println!("save_batch_header success");
    }
}
```
Test uses hardcoded path that may not exist in all environments.
The test function relies on a hardcoded relative path "../../testdata/viridian/eip7702_traces.json" which may not exist in all development or CI environments, making the test brittle.
Consider using #[ignore] attribute for tests that require specific data files, or ensure the test data path is configured via environment variables:
```diff
+#[ignore] // Requires test data files
 #[test]
 fn test_save_execute() {
```

Alternatively, mock the data or use a test fixture that's guaranteed to exist in the test environment.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Current code:

```rust
#[test]
fn test_save_execute() {
    let batch_index = 102u64;
    let mut blocks = load_trace("../../testdata/viridian/eip7702_traces.json");
    println!("blocks.len(): {:?}", blocks.len());
    let traces = blocks.first_mut().unwrap();
    if !save_batch_header(traces, batch_index) {
        save_trace(batch_index, traces);
        println!("save_batch_header error");
    } else {
        println!("save_batch_header success");
    }
}
```

Suggested change:

```rust
#[ignore] // Requires test data files
#[test]
fn test_save_execute() {
    let batch_index = 102u64;
    let mut blocks = load_trace("../../testdata/viridian/eip7702_traces.json");
    println!("blocks.len(): {:?}", blocks.len());
    let traces = blocks.first_mut().unwrap();
    if !save_batch_header(traces, batch_index) {
        save_trace(batch_index, traces);
        println!("save_batch_header error");
    } else {
        println!("save_batch_header success");
    }
}
```
🤖 Prompt for AI Agents
In prover/bin/server/src/queue.rs around lines 214 to 228 the test uses a
hardcoded relative path "../../testdata/viridian/eip7702_traces.json" which may
not exist in all environments; update the test to avoid brittle file
dependencies by either marking it #[ignore] and documenting how to run it,
making the path configurable via an environment variable (falling back to a
bundled fixture), or embedding a small fixture with include_str! or creating the
necessary file in a tempdir during test setup so CI and local runs won’t fail.
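The env-override option might look like this sketch; the `PROVER_TRACE_FIXTURE` variable name is hypothetical, and the default mirrors the path the test uses today:

```rust
use std::path::PathBuf;

// Resolve the trace fixture path: an explicit override wins, otherwise fall
// back to the repo-relative default. The test would pass
// std::env::var("PROVER_TRACE_FIXTURE").ok().as_deref() here.
fn fixture_path(override_value: Option<&str>) -> PathBuf {
    match override_value {
        Some(p) => PathBuf::from(p),
        None => PathBuf::from("../../testdata/viridian/eip7702_traces.json"),
    }
}
```

CI can then point the test at bundled data while local runs keep the current default, without editing source.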
Actionable comments posted: 1
♻️ Duplicate comments (1)
prover/bin/server/src/queue.rs (1)
214-228: Hardcoded test data path remains brittle.

This test relies on a hardcoded relative path that may not exist in all environments, as noted in a previous review comment. Consider applying the suggested mitigations: mark with `#[ignore]`, use an environment variable, or embed a minimal fixture.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`prover/bin/server/src/queue.rs` (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
prover/bin/server/src/queue.rs (4)
- prover/crates/morph-executor/client/src/verifier/blob_verifier.rs (1): `verify` (13-48)
- prover/crates/morph-executor/client/src/verifier/evm_verifier.rs (1): `verify` (17-20)
- prover/crates/morph-executor/host/src/lib.rs (1): `get_blob_info` (36-39)
- prover/bin/host/src/main.rs (1): `load_trace` (28-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build
- GitHub Check: Analyze (rust)
- GitHub Check: Analyze (go)
🔇 Additional comments (1)
prover/bin/server/src/queue.rs (1)
89-92: Error handling logic looks correct, contingent on fixing panics in save_batch_header.

The boolean check and trace-saving fallback are appropriate. However, this graceful handling won't catch panics from `unwrap()`/`expect()` calls within `save_batch_header` (lines 121, 126, 127, 137, 138, 146, 148, 149), which will still crash the prover service. Ensure the prior review comment about converting `save_batch_header` to return `Result` is addressed.
```rust
} else {
    let e = verify_result.unwrap_err();
    let error_data = serde_json::json!({
        "error_code": "EVM_EXECUTE_NOT_EXPECTED",
        "error_msg": e.to_string()
    });
    let mut batch_file = File::create(format!("{}/execute_result.json", proof_dir)).unwrap();
    batch_file
        .write_all(serde_json::to_string_pretty(&error_data).unwrap().as_bytes())
        .expect("failed to write error");
    log::error!("EVM verification failed for batch {}: {}", batch_index, e);
    false
}
```
Error path can still panic—defeats graceful error handling.
The error path writes execute_result.json to surface verification failures, but it uses unwrap() on lines 146 and 148, and expect() on line 149. If file creation or JSON serialization fails, the function will panic despite being in an error-handling branch, undermining the intent of returning false gracefully.
Convert these to return Result<bool, anyhow::Error> as suggested in the prior review comment, or at minimum use fallback logging:
```diff
 } else {
     let e = verify_result.unwrap_err();
     let error_data = serde_json::json!({
         "error_code": "EVM_EXECUTE_NOT_EXPECTED",
         "error_msg": e.to_string()
     });
-    let mut batch_file = File::create(format!("{}/execute_result.json", proof_dir)).unwrap();
-    batch_file
-        .write_all(serde_json::to_string_pretty(&error_data).unwrap().as_bytes())
-        .expect("failed to write error");
-    log::error!("EVM verification failed for batch {}: {}", batch_index, e);
+    match File::create(format!("{}/execute_result.json", proof_dir))
+        .and_then(|mut f| f.write_all(serde_json::to_string_pretty(&error_data)?.as_bytes()))
+    {
+        Ok(_) => log::error!("EVM verification failed for batch {}: {}", batch_index, e),
+        Err(write_err) => log::error!("EVM verification failed for batch {} ({}), and failed to write execute_result.json: {}", batch_index, e, write_err),
+    }
     false
 }
```
}Committable suggestion skipped: line range outside the PR's diff.
Summary by CodeRabbit
Bug Fixes
Improvements
Chores
✏️ Tip: You can customize this high-level summary in your review settings.