Conversation
📝 Walkthrough

Adds selectable state backend (MPT vs zkTrie), CLI flags to control it and override the MPT fork time, disk-state-root mappings for cross-format access, conditional state-root validation across formats, API/tracer/VM adjustments, extensive tests, and migration tooling.

Changes
sequenceDiagram
participant Inserter as Block Inserter
participant Validator as BlockValidator
participant Disk as DiskDB (rawdb)
participant Trie as Trie/State
Inserter->>Validator: submit block (header, header.Root, header.Time)
Validator->>Disk: ReadDiskStateRoot(header.Root)?
alt mapping exists
Disk-->>Validator: diskRoot
Validator->>Trie: OpenTrie(diskRoot)
else no mapping
Validator->>Trie: OpenTrie(header.Root)
end
Trie-->>Validator: computedRoot
alt header.Root != computedRoot
Validator->>Disk: WriteDiskStateRoot(header.Root -> computedRoot)
end
Validator->>Validator: evaluate UseZktrie XOR IsMPTFork(header.Time)
Validator-->>Inserter: accept/reject
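The accept/reject decision in the diagram boils down to one XOR. As a minimal stand-alone sketch (the free function and bare bools are illustrative; in the real code this lives on the chain config):

```go
package main

import "fmt"

// shouldValidateStateRoot mirrors the XOR gate: validate only when the
// node's backend format matches the block's format.
//   useZktrie - node runs the zkTrie backend
//   isMPTFork - block timestamp is at/after the MPT fork
func shouldValidateStateRoot(useZktrie, isMPTFork bool) bool {
	return useZktrie != isMPTFork
}

func main() {
	fmt.Println(shouldValidateStateRoot(true, false))  // zkTrie node, pre-fork block: validate
	fmt.Println(shouldValidateStateRoot(false, true))  // MPT node, post-fork block: validate
	fmt.Println(shouldValidateStateRoot(true, true))   // cross-format block: skip
	fmt.Println(shouldValidateStateRoot(false, false)) // cross-format block: skip
}
```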
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.5.0): can't load config: unsupported version of the configuration: "". See https://golangci-lint.run/docs/product/migration-guide for migration instructions.
Actionable comments posted: 0
🧹 Nitpick comments (2)
core/block_validator.go (1)
104-111: Consider adding debug logging for the validation gate. The XOR-based validation logic is correct and critical for cross-format block processing. However, when validation is skipped due to format mismatch, it would be helpful to log this at debug level to aid troubleshooting during the fork transition.
Consider adding a debug log when validation is skipped:
```diff
 shouldValidateStateRoot := v.config.Morph.UseZktrie != v.config.IsMPTFork(header.Time)
+if !shouldValidateStateRoot {
+	log.Debug("Skipping state root validation due to cross-format block",
+		"useZktrie", v.config.Morph.UseZktrie,
+		"isMPTFork", v.config.IsMPTFork(header.Time),
+		"block", header.Number)
+}
 if root := statedb.IntermediateRoot(v.config.IsEIP158(header.Number)); shouldValidateStateRoot && header.Root != root {
 	return fmt.Errorf("invalid merkle root (remote: %x local: %x)", header.Root, root)
 }
```

cmd/utils/flags.go (1)
1784-1800: Consider extracting duplicated MPT flag handling logic. This same block of code is repeated three times for `MorphFlag`, `MorphHoleskyFlag`, and `MorphHoodiFlag` (lines 1784-1800, 1806-1822, 1828-1844). Extracting it into a helper function would improve maintainability and reduce the risk of inconsistencies when changes are needed. Example helper:
```go
func configureMorphMPTMode(ctx *cli.Context, cfg *ethconfig.Config) {
	cfg.Genesis.Config.Morph.UseZktrie = !ctx.GlobalBool(MorphMPTFlag.Name)
	if cfg.Genesis.Config.Morph.UseZktrie {
		if ctx.GlobalString(GCModeFlag.Name) != GCModeArchive {
			log.Crit("zkTrie mode requires --gcmode=archive")
		}
		log.Info("Pruning disabled (zkTrie mode)")
		cfg.NoPruning = true
		log.Info("Prefetch disabled (zkTrie mode)")
		cfg.NoPrefetch = true
	} else {
		log.Info("MPT mode enabled, pruning is allowed")
	}
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (27)
- `cmd/geth/main.go` (1 hunks)
- `cmd/geth/usage.go` (1 hunks)
- `cmd/utils/flags.go` (2 hunks)
- `core/block_validator.go` (1 hunks)
- `core/block_validator_test.go` (1 hunks)
- `core/blockchain.go` (1 hunks)
- `core/chain_makers.go` (1 hunks)
- `core/genesis.go` (1 hunks)
- `core/mpt_fork_test.go` (1 hunks)
- `core/rawdb/accessors_chain_test.go` (2 hunks)
- `core/rawdb/accessors_state.go` (1 hunks)
- `core/rawdb/accessors_state_test.go` (1 hunks)
- `core/rawdb/schema.go` (2 hunks)
- `core/state/database.go` (1 hunks)
- `core/state/mpt_migration_test.go` (1 hunks)
- `core/state/snapshot/account.go` (1 hunks)
- `core/state/snapshot/generate.go` (1 hunks)
- `core/state/state_object.go` (2 hunks)
- `core/state/state_test.go` (1 hunks)
- `core/state/statedb.go` (1 hunks)
- `core/types/state_account.go` (1 hunks)
- `eth/api.go` (1 hunks)
- `eth/tracers/logger/logger_trace.go` (1 hunks)
- `internal/web3ext/web3ext.go` (1 hunks)
- `params/config.go` (14 hunks)
- `params/version.go` (1 hunks)
- `trie/database.go` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (13)
cmd/geth/main.go (1)
- cmd/utils/flags.go (1): `MorphMPTFlag` (185-188)
cmd/geth/usage.go (1)
- cmd/utils/flags.go (1): `MorphMPTFlag` (185-188)
core/state/database.go (1)
- core/rawdb/accessors_state.go (1): `ReadDiskStateRoot` (109-115)
core/state/snapshot/account.go (1)
- crypto/codehash/codehash.go (1): `PoseidonCodeHash` (12-14)
core/chain_makers.go (1)
- params/config.go (1): `TestChainConfig` (438-469)
core/genesis.go (1)
- core/rawdb/accessors_state.go (1): `WriteDiskStateRoot` (101-105)
core/state/snapshot/generate.go (1)
- crypto/codehash/codehash.go (1): `PoseidonCodeHash` (12-14)
core/rawdb/accessors_state_test.go (3)
- core/rawdb/database.go (1): `NewMemoryDatabase` (234-236)
- common/types.go (1): `HexToHash` (66-66)
- core/rawdb/accessors_state.go (3): `WriteDiskStateRoot` (101-105), `ReadDiskStateRoot` (109-115), `DeleteDiskStateRoot` (118-122)
trie/database.go (2)
- core/rawdb/accessors_state.go (1): `ReadDiskStateRoot` (109-115)
- common/types.go (1): `HashLength` (39-39)
eth/api.go (2)
- rpc/types.go (1): `BlockNumberOrHash` (138-142)
- core/rawdb/accessors_state.go (1): `ReadDiskStateRoot` (109-115)
core/blockchain.go (1)
- core/rawdb/accessors_state.go (1): `WriteDiskStateRoot` (101-105)
core/state/state_object.go (4)
- core/state/database.go (1): `Database` (42-60)
- trie/database.go (1): `Database` (70-96)
- crypto/codehash/codehash.go (1): `KeccakCodeHash` (16-18)
- common/types.go (1): `BytesToHash` (54-58)
core/types/state_account.go (1)
- crypto/codehash/codehash.go (1): `PoseidonCodeHash` (12-14)
🔇 Additional comments (46)
core/blockchain.go (1)
1253-1259: LGTM! Cross-format state root mapping correctly implemented. The logic correctly writes a disk state root mapping when the block's header root differs from the locally computed root, enabling cross-format state access between MPT and zkTrie backends. The comment clearly explains the purpose.
core/chain_makers.go (1)
258-260: LGTM! Proper test isolation. The change to clone the test chain config instead of using the shared default prevents unintended mutations and improves test isolation.
params/version.go (1)
27-27: LGTM! Version bump for new features. The patch version bump to 9 is appropriate for this PR's feature additions (MPT support and cross-format state handling).
internal/web3ext/web3ext.go (1)
894-899: LGTM! Web3 extension method correctly configured. The new `diskRoot` method is properly configured with appropriate formatters and exposes the disk state root API for cross-format state access.

core/rawdb/accessors_chain_test.go (2)
396-396: LGTM! Simplified error message format. The updated error message format using `%v` to format the error is simpler while still providing necessary diagnostic information.
406-406: LGTM! Consistent error message format. The error message format is now consistent with the receipts RLP check above.
core/rawdb/schema.go (2)
126-128: LGTM! Clear disk state root mapping schema. The new prefix and comment clearly document the disk state root mapping semantics for cross-format state access.

317-320: LGTM! Consistent key construction pattern. The `diskStateRootKey` helper follows the same pattern as other key construction functions in this file.

core/state/snapshot/generate.go (1)
621-622: LGTM! Correct RLP serialization exclusions. The `rlp:"-"` tags correctly exclude `PoseidonCodeHash` and `CodeSize` from RLP serialization, with clear comments explaining that PoseidonCodeHash is zkTrie-specific and CodeSize can be derived.

core/state/snapshot/account.go (1)
36-37: LGTM! Consistent RLP serialization behavior. The `rlp:"-"` tags on `PoseidonCodeHash` and `CodeSize` are consistent with related changes in `generate.go` and properly exclude these fields from snapshot serialization.

core/types/state_account.go (1)
34-35: LGTM! RLP exclusion correctly implements MPT/zkTrie separation. The `rlp:"-"` tags prevent serialization of zkTrie-specific fields in MPT mode. `PoseidonCodeHash` is zkTrie-specific, and `CodeSize` is now dynamically computed from the code itself, eliminating the need for disk persistence.

core/state/statedb.go (1)
298-304: LGTM! Database handle correctly passed for dynamic code size retrieval. The updated call site passes the database handle to enable CodeSize to query the code size dynamically when not cached in memory, aligning with the new signature in `state_object.go`.

core/state/state_test.go (1)
232-234: LGTM! Test correctly updated for new CodeSize signature. The test now passes the database handle (`so0.db.db` and `so1.db.db`) to `CodeSize`, correctly exercising the new dynamic code size retrieval behavior.

core/state/state_object.go (2)
506-523: LGTM! Dynamic code size retrieval correctly implemented. The new signature and implementation properly handle all scenarios:
- Returns in-memory code length when available (fast path)
- Returns 0 for accounts with no code (empty hash check)
- Queries database for code size when needed (MPT mode fallback)
- Memoizes errors appropriately
The comment on Lines 516-517 correctly explains why database lookup is necessary: the `CodeSize` field has `rlp:"-"` and won't be persisted in MPT mode.
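The lookup order above can be sketched as a stand-alone function (a sketch only; `codeReader` and `codeSize` are hypothetical stand-ins for the state database interface and the real method):

```go
package main

import "fmt"

// codeReader stands in for the backing database lookup
// (hypothetical interface, analogous to state.Database's code-size query).
type codeReader interface {
	ContractCodeSize(codeHash [32]byte) (int, error)
}

type mapReader map[[32]byte]int

func (m mapReader) ContractCodeSize(h [32]byte) (int, error) {
	if n, ok := m[h]; ok {
		return n, nil
	}
	return 0, fmt.Errorf("code %x not found", h[:4])
}

var emptyCodeHash [32]byte // stand-in for the hash of empty code

// codeSize mirrors the order described above: in-memory code first,
// then the empty-hash short-circuit, then a database query (the MPT-mode
// fallback, since CodeSize carries rlp:"-" and is not persisted there).
func codeSize(inMemory []byte, codeHash [32]byte, db codeReader) (int, error) {
	if len(inMemory) > 0 {
		return len(inMemory), nil // fast path: code already loaded
	}
	if codeHash == emptyCodeHash {
		return 0, nil // account has no code
	}
	return db.ContractCodeSize(codeHash) // fallback: ask the database
}

func main() {
	db := mapReader{[32]byte{1}: 42}
	n, err := codeSize(nil, [32]byte{1}, db)
	fmt.Println(n, err)
}
```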
556-561: LGTM! PoseidonCodeHash guard prevents MPT/zkTrie data leakage. The guard correctly returns `nil` when not using zkTrie, preventing zkTrie-specific hash exposure in MPT mode. This maintains clean separation between trie formats.

eth/tracers/logger/logger_trace.go (1)
50-71: LGTM! Cross-format code tracing correctly implemented. The logic correctly determines the effective code hash key:
- Uses Poseidon hash as the map key when non-zero (zkTrie mode, Line 58-61)
- Falls back to Keccak hash otherwise (MPT mode or post-fork, Line 62)
- Stores the actual code payload (Line 69) for tracer access
This enables seamless tracing across both trie formats.
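The key-selection rule described above, as a tiny sketch (the `hash` type and `effectiveCodeKey` helper are illustrative, not the tracer's actual code):

```go
package main

import "fmt"

type hash [32]byte

// effectiveCodeKey picks the map key for traced bytecode:
// prefer the Poseidon hash when it is non-zero (zkTrie mode),
// otherwise fall back to the Keccak hash (MPT mode / post-fork).
func effectiveCodeKey(poseidon, keccak hash) hash {
	if poseidon != (hash{}) {
		return poseidon // zkTrie mode: poseidon hash is populated
	}
	return keccak // MPT mode: only keccak is available
}

func main() {
	keccak := hash{0xaa}
	poseidon := hash{0xbb}
	fmt.Println(effectiveCodeKey(poseidon, keccak) == poseidon) // non-zero poseidon wins
	fmt.Println(effectiveCodeKey(hash{}, keccak) == keccak)     // zero poseidon falls back
}
```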
core/rawdb/accessors_state.go (1)
98-122: LGTM! Disk state root mapping functions correctly implement cross-format support. The three functions properly manage headerRoot → diskRoot mappings:

- `WriteDiskStateRoot` (Lines 101-105): Persists the mapping; fails fatally on error (consistent with other rawdb writers)
- `ReadDiskStateRoot` (Lines 109-115): Returns an error when the mapping is missing, which is expected for same-format blocks
- `DeleteDiskStateRoot` (Lines 118-122): Removes the mapping; fails fatally on error

The error handling strategy is appropriate: read failures propagate (normal case), write/delete failures are fatal (corruption/infrastructure issues).
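The mapping semantics can be sketched against an in-memory stand-in for the key-value store (names and types here are hypothetical; the real functions persist under a prefixed rawdb key and log fatally on write/delete failure):

```go
package main

import (
	"errors"
	"fmt"
)

// diskDB is an in-memory stand-in for the rawdb key-value store.
type diskDB map[[32]byte][32]byte

var errNotFound = errors.New("disk state root not found")

// writeDiskStateRoot records headerRoot -> diskRoot.
func writeDiskStateRoot(db diskDB, headerRoot, diskRoot [32]byte) {
	db[headerRoot] = diskRoot
}

// readDiskStateRoot returns an error for a missing mapping; that is the
// normal same-format case, not corruption.
func readDiskStateRoot(db diskDB, headerRoot [32]byte) ([32]byte, error) {
	diskRoot, ok := db[headerRoot]
	if !ok {
		return [32]byte{}, errNotFound
	}
	return diskRoot, nil
}

// deleteDiskStateRoot removes the mapping.
func deleteDiskStateRoot(db diskDB, headerRoot [32]byte) {
	delete(db, headerRoot)
}

func main() {
	db := diskDB{}
	header, disk := [32]byte{0xaa}, [32]byte{0xbb}
	writeDiskStateRoot(db, header, disk)
	got, err := readDiskStateRoot(db, header)
	fmt.Println(got == disk, err)
}
```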
eth/api.go (1)
663-691: LGTM! DiskRoot RPC method correctly exposes cross-format state mapping. The implementation properly handles both scenarios:

- Cross-format block (Lines 681-686): Returns distinct `diskRoot` (local format) and `headerRoot` (block format)
- Same-format block (Lines 687-690): Returns identical values when no mapping exists
This provides essential debugging visibility for MPT ↔ zkTrie transitions. Error handling and block retrieval logic are correct.
params/config.go (9)
43-44: LGTM! Genesis state root constants correctly defined. The constants provide zkTrie genesis roots for Morph Mainnet and Hoodi chains, enabling cross-format genesis compatibility when nodes operate in MPT mode.
327-328: LGTM! GenesisStateRoot correctly configured for Morph chains. The Hoodi (Line 327) and Mainnet (Line 358) chain configs now include `GenesisStateRoot`, pointing to the zkTrie genesis roots defined at Lines 43-44. This enables MPT-mode nodes to maintain compatibility with zkTrie genesis state.

Also applies to: 358-358
589-589: LGTM! MPTForkTime field correctly added to ChainConfig. The optional timestamp-based field enables scheduling the zkTrie → MPT fork transition, following the established pattern for time-based forks (Morph203Time, ViridianTime, EmeraldTime).
608-611: LGTM! GenesisStateRoot documentation clearly explains MPT-mode usage. The comment correctly describes the field's purpose: when `UseZktrie=false` (MPT mode), this specifies the zkTrie genesis root to maintain compatibility, while the actual MPT root will be computed and mapped.
676-688: LGTM! Clone method provides safe deep-copy for tests. The implementation uses JSON marshaling for deep cloning, which is safe and handles all pointer fields correctly. The panic on marshal/unmarshal errors is appropriate since these should never fail for valid ChainConfig instances.
830-834: LGTM! IsMPTFork method correctly implements timestamp-based fork detection. The method properly delegates to `isTimestampForked`, following the established pattern for time-based forks. The documentation clearly states the format transition behavior.
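The timestamp-fork pattern referenced here can be sketched as follows (a simplified stand-in for the real `isTimestampForked` helper, whose signature may differ):

```go
package main

import "fmt"

// isTimestampForked reports whether a time-based fork is active:
// a nil fork time means "never scheduled"; otherwise the fork is
// active at and after the configured timestamp.
func isTimestampForked(forkTime *uint64, blockTime uint64) bool {
	if forkTime == nil {
		return false // fork not scheduled
	}
	return blockTime >= *forkTime
}

func main() {
	fork := uint64(100)
	fmt.Println(isTimestampForked(&fork, 99))  // before the fork
	fmt.Println(isTimestampForked(&fork, 100)) // at the fork boundary
	fmt.Println(isTimestampForked(nil, 1_000_000))
}
```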
869-937: LGTM! MPTForkTime correctly integrated into fork ordering validation. The `mptForkTime` entry at Line 900 is properly positioned as an optional timestamp-based fork, following the `emeraldTime` fork in the sequence. The validation logic will correctly enforce monotonic fork timing.
1010-1012: LGTM! MPTForkTime compatibility check correctly implemented. The timestamp compatibility check at Lines 1010-1012 properly validates that the MPT fork time hasn't been rescheduled after the chain has passed it, following the pattern of other timestamp-based forks.
1154-1189: LGTM! IsMPTFork correctly exposed in Rules. The `IsMPTFork` field is properly added to the `Rules` struct (Line 1159) and populated by calling `c.IsMPTFork(time)` in the `Rules()` method (Line 1187), making the fork state easily accessible during execution.

cmd/geth/usage.go (1)
53-53: LGTM! The flag is correctly added to the ETHEREUM group for help text display.
cmd/geth/main.go (1)
154-154: LGTM! The flag is correctly added to the nodeFlags list to enable runtime configuration.
core/block_validator_test.go (1)
233-326: LGTM! Comprehensive test coverage for XOR validation logic. The test correctly validates all scenarios for the cross-format state root validation gate:
- Same-format blocks (zkTrie→zkTrie, MPT→MPT) are validated
- Cross-format blocks skip validation to allow seamless transitions
- Edge cases like missing fork configuration are handled
core/rawdb/accessors_state_test.go (1)
17-95: LGTM! Comprehensive test coverage for disk state root mappings. The tests thoroughly validate:
- Write and read operations
- Error handling for non-existent mappings
- Delete operations and post-deletion behavior
- Multiple concurrent mappings
- Isolation between different mappings
trie/database.go (2)
788-788: LGTM! The childrenSize accounting correctly decrements by the full size of the children map, including metadata.
670-672: Clarify the intent and add validation for the disk state root override during commit. The code reads a disk state root mapping and overrides the `node` hash during the commit operation. While this is intentional for cross-format trie scenarios (MPT ↔ zkTrie migration), there are three concerns:
1. Silent no-op if diskRoot nodes unavailable: If `diskRoot` nodes are not present in `db.dirties` (e.g., already committed or from a previous session), `db.commit()` returns `nil` without error or logging, potentially leaving the commit in an incomplete state.
2. No validation that diskRoot exists: The override happens without verifying that `diskRoot` nodes are actually reachable in `db.dirties` or represent valid trie state.
3. Child node handling: If the overridden root is not found in `db.dirties`, child nodes are never recursed into, potentially leaving orphaned state in memory.

Consider adding:

- Validation that `diskRoot` exists in `db.dirties` before the override, or explicit error handling if it doesn't
- Debug logging if the override silently returns nil (to detect incomplete commits)
- Documentation explaining when and why `diskRoot` should exist and what happens if it doesn't

core/genesis.go (1)
389-392: This code is correct and follows the established MPT/zkTrie format compatibility pattern. The genesis state root override is part of a systematic design for cross-format support, not an isolated scenario. When `GenesisStateRoot` is set, it represents the zkTrie-format genesis root; the computed MPT root is mapped to it via `WriteDiskStateRoot`. This mapping persists in the database using a simple key-value prefix scheme (`diskStateRootPrefix` + headerRoot) and is reliably retrieved during state access, block validation, and trie operations. The mappings survive node restarts and database migrations; the key-value store provides standard persistence. The approach integrates with core components (trie resolution in `trie/database.go`, state database in `core/state/database.go`, blockchain validation in `core/blockchain.go`), is extensively tested in `mpt_migration_test.go` and `mpt_fork_test.go`, and is exposed via the RPC API. No consistency or persistence issues exist.

core/state/database.go (1)
139-147: Consider whether silent error handling masks database issues. The code treats all errors from `ReadDiskStateRoot` identically, but the underlying `db.Get()` can return both "not found" (normal) and database errors like corruption (abnormal). While the design intent is to treat missing mappings as normal, actual I/O or corruption errors would be silently ignored. The `isNotFoundErr()` pattern exists elsewhere in the codebase to distinguish these cases. Verify whether `ReadDiskStateRoot` should internally distinguish between `ErrNotFound` and other errors, or if this calling code should handle them separately.

cmd/utils/flags.go (1)
185-188: LGTM - Clear flag definition. The flag is well-defined with a descriptive name and usage message that clearly explains its purpose.
core/mpt_fork_test.go (4)
32-87: LGTM - Well-structured fork transition tests. Good table-driven test covering the key scenarios: before fork, at fork, after fork, and nil fork configuration. The test descriptions are clear and the assertions are appropriate.
89-180: LGTM - Good test coverage for same-format scenarios. The test appropriately covers pure MPT and pure zkTrie cases, with a clear note that cross-format scenarios are tested separately. The test structure is clean and the verification logic correctly checks mapping existence.
182-293: LGTM - Comprehensive cross-format syncing test. This is a well-structured end-to-end test that validates the key cross-format scenario: an MPT node syncing blocks that were produced with zkTrie. The four phases are clearly documented and the verification covers both mapping creation and actual state accessibility.
295-415: LGTM - Important backward compatibility test. This test validates the critical scenario of existing zkTrie nodes continuing to operate after the MPT fork by syncing new MPT blocks. The use of `MPTForkTime` to enable cross-format handling is appropriate, and the verification covers both mapping creation and state accessibility via MPT header roots.

core/state/mpt_migration_test.go (6)
30-117: LGTM - Pragmatic approach to format-specific testing. The test correctly handles the limitation that zkTrie requires special initialization not available in unit tests. The MPT-to-MPT case provides basic coverage, and the skip mechanism is well-documented with clear explanations.
119-168: LGTM - Clear mapping functionality test. This test effectively validates the basic disk state root mapping operations using a simulated cross-format scenario. The comments clearly explain the simulation approach.
170-239: LGTM - Well-documented multi-step test. The 6-step approach is clearly documented and tests both the mapping resolution and the actual trie accessibility. The test appropriately handles the case where the simulated root doesn't exist on disk while still verifying the core mapping functionality.
241-290: LGTM - Good edge case coverage. `TestEmptyStateMapping` correctly tests both `common.Hash{}` and `types.EmptyRootHash` edge cases. `TestMissingMapping` properly validates the error handling for non-existent mappings.
292-385: LGTM - Comprehensive lifecycle and isolation tests. `TestMappingDeletion` and `TestMultipleMappings` together provide good coverage of the mapping lifecycle (create, read, delete) and verify that operations on one mapping don't affect others.
387-428: LGTM - Overwrite semantics properly tested. The test correctly verifies that writing a new value to an existing key replaces the old value, and explicitly checks that the old value is no longer accessible.
```go
// This allows cross-format blocks to pass validation without matching local state root.
shouldValidateStateRoot := v.config.Morph.UseZktrie != v.config.IsMPTFork(header.Time)

if root := statedb.IntermediateRoot(v.config.IsEIP158(header.Number)); shouldValidateStateRoot && header.Root != root {
```
Would it be better to place `shouldValidateStateRoot` at the beginning?
- `IntermediateRoot()` must be called regardless: it finalizes the state and may have side effects
- Moving the condition earlier wouldn't skip any expensive computation, because the root should use the latest value
```go
if diskRoot, err := rawdb.ReadDiskStateRoot(diskDB, root); err == nil {
	root = diskRoot
}
// If mapping doesn't exist, root remains unchanged (normal case).
```
What if err exists but is not a not-found error?
When the mapping doesn't exist, we want to use the original root. Any other database errors are rare, and the fallback behavior (using the original root) is safe.
Actionable comments posted: 2
🧹 Nitpick comments (2)
rollup/tracing/tracing.go (2)
305-321: Code key determination relies on hash presence rather than chain config. The logic uses `poseidonCodeHash != (common.Hash{})` to infer zkTrie mode. This heuristic works for existing accounts but could be fragile for edge cases (e.g., accounts that exist across the MPT fork boundary). Consider using the chain config's MPT fork status to explicitly determine which hash to use as the key.

```diff
-	// Determine the code key based on trie mode:
-	// zkTrie mode uses poseidon hash, MPT mode uses keccak hash
-	codeKey := keccakCodeHash
-	if poseidonCodeHash != (common.Hash{}) {
-		codeKey = poseidonCodeHash
-	}
+	// Determine the code key based on trie mode:
+	// zkTrie mode uses poseidon hash, MPT mode uses keccak hash
+	var codeKey common.Hash
+	if env.chainConfig.IsMPTForkEnabled(block.Number()) {
+		codeKey = keccakCodeHash
+	} else {
+		codeKey = poseidonCodeHash
+	}
```
331-344: Consider using `log.Debug` for slot tracing output. These `log.Info` statements output detailed slot information that's useful for debugging but may be noisy in production. Consider downgrading to `log.Debug` or removing after the feature is stable.

```diff
-	log.Info("base slot", "base slot", baseSlot.String(), "token id", tx.FeeTokenID(), "token address", tokenInfo.TokenAddress.String())
+	log.Debug("base slot", "base slot", baseSlot.String(), "token id", tx.FeeTokenID(), "token address", tokenInfo.TokenAddress.String())

-	log.Info("offset slot", "offset", offset, "slot", slot.String())
+	log.Debug("offset slot", "offset", offset, "slot", slot.String())
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `core/types/l2trace.go` (1 hunks)
- `eth/tracers/logger/logger_trace.go` (1 hunks)
- `rollup/tracing/tracing.go` (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- eth/tracers/logger/logger_trace.go
🧰 Additional context used
🧬 Code graph analysis (2)
core/types/l2trace.go (4)
- core/types/block.go (1): `Header` (70-114)
- core/types/gen_header_json.go (2): `Header` (18-43), `Header` (74-98)
- core/types/transaction.go (1): `Transactions` (615-615)
- trie/zkproof/types.go (1): `StorageTrace` (40-51)
rollup/tracing/tracing.go (4)
- eth/tracers/logger/logger.go (1): `CodeInfo` (205-210)
- core/state/state_prove.go (1): `ZktrieProofTracer` (19-21)
- core/evm.go (1): `NewEVMTxContext` (72-78)
- core/types/l2trace.go (1): `BytecodeTrace` (25-30)
🔇 Additional comments (3)
rollup/tracing/tracing.go (3)
86-88: LGTM on Context struct simplification. Removing `BlockHash` from the `Context` struct aligns with the simplified tracing approach in this PR.
261-264: LGTM on EVM and state initialization with struct logger hooks. The setup correctly wraps the statedb with `NewHookedState` and configures the VM with the tracer hooks from `structLogger`.
513-515: The coinbase `AccountWrapper` only populates the `Address` field; this appears intentional. Based on investigation, this design is intentional. The commit "refactor: remove useless block trace struct fields" explicitly removed non-essential fields from trace structures. No consumers of `Coinbase.Nonce`, `Coinbase.Balance`, `Coinbase.KeccakCodeHash`, `Coinbase.PoseidonCodeHash`, or `Coinbase.CodeSize` exist in the codebase. Similar patterns in the tracer code (e.g., `eth/tracers/logger/logger.go`) also populate only a subset of `AccountWrapper` fields. The rollup tracing only requires the coinbase address for its current use cases.
```diff
 type BlockTrace struct {
-	ChainID                uint64             `json:"chainID"`
-	Version                string             `json:"version"`
-	Coinbase               *AccountWrapper    `json:"coinbase"`
-	Header                 *Header            `json:"header"`
-	Transactions           []*TransactionData `json:"transactions"`
-	StorageTrace           *StorageTrace      `json:"storageTrace"`
-	Bytecodes              []*BytecodeTrace   `json:"codes"`
-	TxStorageTraces        []*StorageTrace    `json:"txStorageTraces,omitempty"`
-	ExecutionResults       []*ExecutionResult `json:"executionResults"`
-	WithdrawTrieRoot       common.Hash        `json:"withdraw_trie_root,omitempty"`
-	SequencerSetVerifyHash common.Hash        `json:"sequencer_set_verify_hash,omitempty"`
-	StartL1QueueIndex      uint64             `json:"startL1QueueIndex"`
+	ChainID           uint64             `json:"chainID"`
+	Coinbase          *AccountWrapper    `json:"coinbase"`
+	Header            *Header            `json:"header"`
+	Transactions      []*TransactionData `json:"transactions"`
+	StorageTrace      *StorageTrace      `json:"storageTrace"`
+	Bytecodes         []*BytecodeTrace   `json:"codes"`
+	StartL1QueueIndex uint64             `json:"startL1QueueIndex"`
 }
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Find usages of removed BlockTrace fields across the codebase
echo "=== Searching for ExecutionResults usage ==="
rg -n "ExecutionResults" --type go
echo "=== Searching for TxStorageTraces usage ==="
rg -n "TxStorageTraces" --type go
echo "=== Searching for WithdrawTrieRoot usage ==="
rg -n "WithdrawTrieRoot" --type go
echo "=== Searching for SequencerSetVerifyHash usage ==="
rg -n "SequencerSetVerifyHash" --type go
echo "=== Searching for Version usage in l2trace context ==="
rg -n "\.Version" --type go | grep -i trace | head -20
```

Repository: morph-l2/go-ethereum
Length of output: 2730
Breaking API change: Multiple fields removed from BlockTrace.
The BlockTrace struct has had several public fields removed: Version, TxStorageTraces, ExecutionResults, WithdrawTrieRoot, and SequencerSetVerifyHash. This is a breaking change for any downstream consumers that depend on these fields in the JSON output of BlockTrace.
The removed fields (WithdrawTrieRoot, SequencerSetVerifyHash) remain in use elsewhere in the codebase (in ExecutionResult structs, L2 API parameters, and rollup configurations), so confirm that all BlockTrace consumers have been updated or notified of this API change before merging.
```diff
 blockTrace.Bytecodes = append(blockTrace.Bytecodes, &types.BytecodeTrace{
-	CodeSize:         0,
-	KeccakCodeHash:   codehash.EmptyKeccakCodeHash,
-	PoseidonCodeHash: codehash.EmptyPoseidonCodeHash,
-	Code:             hexutil.Bytes{},
+	Code: hexutil.Bytes{},
 })
 for _, codeInfo := range env.Codes {
 	blockTrace.Bytecodes = append(blockTrace.Bytecodes, &types.BytecodeTrace{
-		CodeSize:         codeInfo.CodeSize,
-		KeccakCodeHash:   codeInfo.KeccakCodeHash,
-		PoseidonCodeHash: codeInfo.PoseidonCodeHash,
-		Code:             codeInfo.Code,
+		Code: codeInfo.Code,
 	})
 }
```
BytecodeTrace fields CodeSize, KeccakCodeHash, PoseidonCodeHash are not populated.
The codeInfo struct contains CodeSize, KeccakCodeHash, PoseidonCodeHash, and Code, but only Code is being copied to BytecodeTrace. The BytecodeTrace struct (in l2trace.go lines 25-30) still defines these fields, so they will serialize as zero values.
If these fields are no longer needed, consider removing them from the BytecodeTrace struct for consistency. Otherwise, populate them:
```diff
 for _, codeInfo := range env.Codes {
 	blockTrace.Bytecodes = append(blockTrace.Bytecodes, &types.BytecodeTrace{
-		Code: codeInfo.Code,
+		CodeSize:         codeInfo.CodeSize,
+		KeccakCodeHash:   codeInfo.KeccakCodeHash,
+		PoseidonCodeHash: codeInfo.PoseidonCodeHash,
+		Code:             codeInfo.Code,
 	})
 }
```

🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
In rollup/tracing/tracing.go around lines 523 to 530, the BytecodeTrace entries
are only setting Code so CodeSize, KeccakCodeHash and PoseidonCodeHash remain
zero; update the append calls to populate those fields from codeInfo (and for
the initial empty entry set CodeSize=0 and set hashes to their zero-value bytes
appropriately or omit if intentional). Specifically, copy CodeSize,
KeccakCodeHash and PoseidonCodeHash from each codeInfo into the corresponding
BytecodeTrace fields (ensuring any required type conversions) and ensure the
first placeholder BytecodeTrace uses consistent zero values for those fields.
```go
// Try to get poseidon code hash - if it's non-zero, we're in zkTrie mode
poseidonCodeHash := l.env.StateDB.GetPoseidonCodeHash(address)
if poseidonCodeHash != (common.Hash{}) {
```
```go
if codeKey == (common.Hash{}) && poseidonCodeHash != (common.Hash{}) {
	codeKey = poseidonCodeHash
}
```
Is it better?
Remove all poseidonCodeHash in this upgrade?
> if codeKey == (common.Hash{}) && poseidonCodeHash != (common.Hash{}) { codeKey = poseidonCodeHash }
> Is it better?
In zkTrie mode, contracts have both keccakCodeHash and poseidonCodeHash non-empty, so the condition codeKey == (common.Hash{}) would never be true and we'd incorrectly use keccak instead of poseidon as the key.
> Remove all poseidonCodeHash in this upgrade?
We still need to support both zkTrie and MPT modes for backward compatibility with existing chains.
Makes sense, let's remove it in the next update. cc @FletcherMan
Actionable comments posted: 0
🧹 Nitpick comments (1)
params/config.go (1)
610-614: Consider clarifying the `GenesisStateRoot` documentation. The current documentation mentions both "MPT mode" and "zkTrie genesis root," which might be confusing. Consider rephrasing to more clearly explain that this field stores the zkTrie format genesis root that will be used for mapping when the node operates in MPT mode after the fork.
📝 Suggested documentation improvement
```diff
-	// Genesis state root for MPT mode [optional]
-	// When UseZktrie=false (MPT mode), this specifies the zkTrie genesis root
-	// to maintain compatibility. The actual MPT root will be computed and mapped.
+	// zkTrie genesis state root for MPT compatibility [optional]
+	// When operating in MPT mode (UseZktrie=false), this stores the original
+	// zkTrie format genesis root to enable state root mapping between formats.
 	GenesisStateRoot *common.Hash `json:"genesisStateRoot,omitempty"`
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `params/config.go` (14 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
params/config.go (2)
- common/types.go (3): `HexToHash` (66-66), `Hash` (50-50), `Hash` (172-172)
- trie/database_types.go (1): `Clone` (43-47)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (8)
params/config.go (8)
21-21: LGTM! The encoding/json import is appropriately added to support the new Clone() method.
591-591: LGTM! The MPTForkTime field follows the established pattern for timestamp-based forks and integrates properly with the existing fork mechanism.
678-690: LGTM! The JSON-based Clone() implementation is a pragmatic approach for deep copying. While it has some performance overhead compared to manual field copying, it's appropriate for test usage and automatically handles nested structures.
703-703: LGTM! The String() method correctly includes MPTForkTime in the formatted output, maintaining consistency with other fork fields.
Also applies to: 726-726
832-836: LGTM! The IsMPTFork() method correctly implements timestamp-based fork detection using the established isTimestampForked() helper, consistent with other time-based forks.
902-902: LGTM! The MPT fork is properly integrated into the fork ordering checks, compatibility validation, and runtime rules evaluation. The implementation follows the established patterns for timestamp-based forks.
Also applies to: 1012-1014, 1161-1161, 1189-1189
328-328: Clarify why GenesisStateRoot is set only for production networks.
GenesisStateRoot is assigned to MorphHoodiChainConfig and MorphMainnetChainConfig but not to MorphHoleskyChainConfig. Confirm whether this selective assignment is intentional (e.g., test networks don't require MPT fork state root) or if MorphHoleskyChainConfig should also define this field.
43-44: Verify these genesis state root constants against the official Morph genesis state. These hardcoded genesis state roots are critical configuration values used during chain initialization. Ensure MorphMainnetGenesisStateRoot and MorphHoodiGenesisStateRoot have been validated against the actual genesis state for their respective networks, as incorrect values could cause consensus failures or sync issues.
Actionable comments posted: 1
🧹 Nitpick comments (4)
internal/ethapi/api.go (1)
42-56: EIP-7910 eth_config implementation is coherent; semantics for non-time-based chains are worth confirming. The new morphExtension/config/configResponse types and the PublicBlockChainAPI.Config implementation look consistent with the fork/time helpers in params.ChainConfig:
- ForkID derivation via forkid.NewID(c, genesis.Hash(), ^uint64(0), forkTime) correctly treats all block-based forks as passed and includes time-based forks up to forkTime.
- Using c.Rules(currentBlock, forkTime) and vm.ActivePrecompiledContracts to build the precompile map per fork is a clean way to expose the active precompiles, and PrecompiledContract.Name() provides stable, descriptive keys.
- LatestFork/Timestamp/ActiveSystemContracts integration gives reasonable Current/Next/Last triples for Morph's time-based forks; when no time-based forks are configured, all three end up nil, effectively signaling "no EIP-7910 schedule".
If you intend eth_config to be meaningful on non-Morph or purely block-scheduled networks, you may later want to extend LatestFork/Timestamp to cover more fork types; otherwise this behavior is fine for Morph-only usage.
Also applies to: 653-745
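To illustrate the pattern under discussion (a fork-gated contract set flattened into a name-keyed map, in the spirit of ActivePrecompiledContracts and Name()), here is a small sketch; the names, address bytes, and fork flag are invented for the example:

```go
package main

import "fmt"

// precompile mimics the Name() accessor discussed in the review.
type precompile struct{ name string }

func (p precompile) Name() string { return p.name }

// activePrecompiles returns a fork-dependent contract set keyed by address
// byte; the entries here are illustrative, not the repo's actual tables.
func activePrecompiles(postFork bool) map[byte]precompile {
	set := map[byte]precompile{
		0x01: {"ECREC"},
		0x02: {"SHA256"},
	}
	if postFork {
		set[0x09] = precompile{"BLAKE2F"}
	}
	return set
}

// namesByAddress flattens the active set into the name-keyed view an
// eth_config-style response would expose.
func namesByAddress(postFork bool) map[string]byte {
	out := make(map[string]byte)
	for addr, p := range activePrecompiles(postFork) {
		out[p.Name()] = addr
	}
	return out
}

func main() {
	fmt.Println(len(namesByAddress(false))) // 2
	fmt.Println(len(namesByAddress(true)))  // 3
}
```

The point of the design is that the fork switch lives in one selector function, so every consumer (EVM dispatch, RPC responses) sees the same set.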
core/vm/contracts.go (1)
49-56: Precompile metadata (Name) and selector (ActivePrecompiledContracts) are wired cleanly. Adding Name() to PrecompiledContract and exposing ActivePrecompiledContracts(rules) gives a nice, declarative way to surface active precompiles (e.g. via eth_config). The implementation:
- Keeps the existing fork selection logic (mirrors ActivePrecompiles),
- Provides stable, human-readable names for all precompile implementations,
- Avoids behavioral changes in the EVM itself.
The only minor downside is the duplicated switch between ActivePrecompiles and ActivePrecompiledContracts; if this grows further, consider centralizing the fork→precompile-set mapping to avoid drift.
Also applies to: 136-183, 211-231, 233-255
params/config.go (2)
679-691: ChainConfig.Clone works but JSON-based cloning is heavyweight and panics. Using encoding/json to deep-clone ChainConfig is simple and avoids manual copy bugs, but:
- It allocates and serializes/deserializes the entire struct, which is overkill if Clone is used frequently.
- It panics on (theoretically impossible) marshal/unmarshal errors, which is fine for tests but less ideal in production paths.
If this ever escapes test-only usage, consider either (a) returning (cfg *ChainConfig, err error) and handling failures gracefully, or (b) introducing a small hand-rolled copy that covers just the needed fields.
1158-1191: Fork-rule extensions and system-contract helpers align with eth_config usage. The updates:
- Extend Rules with IsMPTFork and set it in ChainConfig.Rules, which keeps rule derivation in one place.
- Add LatestFork and Timestamp(forks.Fork) helpers modeling Morph's time-based forks (Morph203, Viridian, Emerald), defaulting to Curie when no time-based fork is active.
- Provide ActiveSystemContracts(time) that returns Morph's fixed system contract addresses.
These are consistent with how internal/ethapi/api.go's Config method consumes them. Today ActiveSystemContracts ignores time, which matches the "from genesis" semantics of Morph's system contracts; if you later introduce fork-dependent system contracts, this will be the right place to branch on time and/or ChainID.
Also applies to: 1194-1238
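The "latest active time-based fork, defaulting to Curie" selection described here can be sketched as follows; the fork names come from the review text, but the helper, its schedule type, and the ascending-order assumption are illustrative:

```go
package main

import "fmt"

// fork pairs a time-based fork name with its activation time (nil = unscheduled).
type fork struct {
	name string
	time *uint64
}

// latestFork returns the most recent fork whose time has passed, defaulting to
// "curie" when no time-based fork is active. It assumes the schedule is listed
// in ascending activation order, as fork configs conventionally are.
func latestFork(t uint64, schedule []fork) string {
	active := "curie"
	for _, f := range schedule {
		if f.time != nil && t >= *f.time {
			active = f.name
		}
	}
	return active
}

func main() {
	m203, viridian := uint64(100), uint64(200)
	schedule := []fork{{"morph203", &m203}, {"viridian", &viridian}, {"emerald", nil}}
	fmt.Println(latestFork(50, schedule))  // curie: nothing active yet
	fmt.Println(latestFork(150, schedule)) // morph203
	fmt.Println(latestFork(250, schedule)) // viridian: emerald is unscheduled
}
```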
📜 Review details
📒 Files selected for processing (12)
- cmd/geth/chaincmd.go
- cmd/geth/config.go
- cmd/geth/main.go
- cmd/utils/flags.go
- core/genesis.go
- core/vm/contracts.go
- eth/backend.go
- eth/ethconfig/config.go
- internal/ethapi/api.go
- internal/web3ext/web3ext.go
- les/client.go
- params/config.go
🚧 Files skipped from review as they are similar to previous changes (3)
- core/genesis.go
- cmd/geth/main.go
- internal/web3ext/web3ext.go
🧰 Additional context used
🧬 Code graph analysis (5)
cmd/geth/chaincmd.go (1)
cmd/utils/flags.go (1)
OverrideMPTForkTimeFlag (290-293)
core/vm/contracts.go (1)
params/config.go (1)
Rules (1157-1163)
internal/ethapi/api.go (4)
eth/ethconfig/config.go (1)
Config (119-230)
params/config.go (2)
ChainConfig (562-605), Rules (1157-1163)
core/vm/contracts.go (1)
ActivePrecompiledContracts (235-254)
core/forkid/forkid.go (1)
NewID (76-99)
cmd/utils/flags.go (2)
eth/ethconfig/config.go (1)
Config(119-230)trie/database.go (1)
Config(280-285)
params/config.go (2)
trie/database_types.go (1)
Clone (43-47)
params/forks/forks.go (5)
Fork (20-20), Emerald (43-43), Viridian (42-42), Morph203 (41-41), Curie (40-40)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (9)
eth/backend.go (1)
133-145: MPT fork time override is plumbed consistently with existing overrides. The new OverrideMPTForkTime → overrides.MPTForkTime wiring mirrors the other fork-time overrides and safely no-ops when unset. Looks correct and keeps genesis setup behavior centralized in SetupGenesisBlockWithOverride.
params/config.go (2)
592-593: MPTForkTime integration into fork rules and compatibility logic is well-aligned. The new MPTForkTime *uint64:
- Is added alongside the existing Morph time-based forks,
- Participates in CheckConfigForkOrder as an optional timestamp-based fork, ensuring no backwards fork reordering,
- Is checked in checkCompatible via isForkTimestampIncompatible, so changing MPTForkTime after activation correctly triggers a rewind requirement,
- Surfaces in IsMPTFork and Rules.IsMPTFork for downstream consumers.
This is consistent with the existing Morph{203,Viridian,Emerald} fork-handling patterns and should be safe for both existing chains (where it's nil) and future MPT transitions.
Also applies to: 833-837, 872-915, 1004-1015
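As a reference for how such timestamp forks are typically evaluated, here is a sketch of the usual isTimestampForked helper pattern (not the repo's exact code):

```go
package main

import "fmt"

// isTimestampForked reports whether a timestamp-scheduled fork is active at
// blockTime. A nil fork time means the fork is not scheduled, mirroring how
// an unset MPTForkTime leaves existing chains unaffected.
func isTimestampForked(forkTime *uint64, blockTime uint64) bool {
	if forkTime == nil {
		return false
	}
	return blockTime >= *forkTime
}

func main() {
	ft := uint64(1700000000)
	fmt.Println(isTimestampForked(nil, 1700000001)) // false: fork not scheduled
	fmt.Println(isTimestampForked(&ft, 1699999999)) // false: before the fork
	fmt.Println(isTimestampForked(&ft, 1700000000)) // true: at the fork time
}
```

The inclusive >= comparison is what makes the fork active exactly at its configured timestamp, which is the convention the ordering and compatibility checks above rely on.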
34-46: Morph genesis state roots are correctly scoped to genesis and migration paths. Verification confirms that MorphMainnetGenesisStateRoot and MorphHoodiGenesisStateRoot are only used in the intended contexts:
- Wired into MorphHoodiChainConfig and MorphMainnetChainConfig via the GenesisStateRoot field (lines 329, 361)
- Applied during genesis block initialization to set the disk state root (core/genesis.go:393–395)
- Used in MPT fork tests for state root setup
The structure is sound and properly maintains separation of concerns. Ensure these values remain synchronized with your zkTrie↔MPT mapping generation tooling and are updated atomically during any future regenesis events.
eth/ethconfig/config.go (1)
219-230: OverrideMPTForkTime field is correctly added to eth config. The new OverrideMPTForkTime *uint64 mirrors the existing Morph time overrides (Morph203/Viridian/Emerald) with the same TOML semantics, providing a clean way to carry the CLI/config override down into genesis setup.
cmd/geth/config.go (1)
155-176: CLI → config wiring for override.mptforktime matches existing overrides. The new if ctx.GlobalIsSet(utils.OverrideMPTForkTimeFlag.Name) { ... } block is consistent with the existing Morph203/Viridian/Emerald override handling and correctly populates cfg.Eth.OverrideMPTForkTime prior to service registration.
les/client.go (1)
91-105: Light client now honors MPT fork time overrides consistently with the full node. Propagating config.OverrideMPTForkTime into overrides.MPTForkTime in les.New keeps the light client's genesis and fork view aligned with the full node path (eth.New). This avoids subtle desyncs when operators set an explicit MPT fork timestamp.
cmd/geth/chaincmd.go (1)
45-57: Genesis-init path correctly exposes and applies the MPT fork time override. Adding utils.OverrideMPTForkTimeFlag to init and wiring it into overrides.MPTForkTime in initGenesis keeps the one-shot geth init flow consistent with runtime configuration regarding the MPT fork timestamp. This avoids divergence between chains initialized via CLI vs. config.
Also applies to: 216-233
core/vm/contracts.go (1)
256-272: All RunPrecompiledContract call sites correctly use the new 3-value return signature. Verified: all 4 call sites in core/vm/evm.go properly capture and use the remaining gas value, and all 4 test call sites intentionally ignore it using the blank identifier where appropriate. No adjustment needed.
cmd/utils/flags.go (1)
185-188: LGTM! Flag declarations follow established patterns. Both flag declarations are well-formed and consistent with the existing flag definitions in the file.
Also applies to: 290-293
// Handle MPT flag
cfg.Genesis.Config.Morph.UseZktrie = !ctx.GlobalBool(MorphMPTFlag.Name)
if cfg.Genesis.Config.Morph.UseZktrie {
	// zkTrie mode: forced archive mode
	if ctx.GlobalString(GCModeFlag.Name) != GCModeArchive {
		log.Crit("zkTrie mode requires --gcmode=archive")
	}
	log.Info("Pruning disabled (zkTrie mode)")
	cfg.NoPruning = true
	// disable prefetch
	log.Info("Prefetch disabled (zkTrie mode)")
	cfg.NoPrefetch = true
} else {
	// MPT mode: pruning is allowed
	log.Info("MPT mode enabled, pruning is allowed")
}
🛠️ Refactor suggestion | 🟠 Major
Refactor duplicated MPT/zkTrie configuration logic.
The same 17-line block handling MPT flag logic is duplicated across all three Morph network configurations (MorphFlag, MorphHoleskyFlag, MorphHoodiFlag). This hurts maintainability and increases the risk of inconsistent behavior if one block is updated but others are missed.
Additionally, the error handling uses log.Crit() instead of Fatalf(), which is inconsistent with similar fatal errors elsewhere in this file (e.g., lines 1675, 1929, 1939).
🔎 Proposed refactoring
Extract the common logic into a helper function:
+// configureMorphTrieMode sets zkTrie vs MPT mode for Morph networks
+func configureMorphTrieMode(ctx *cli.Context, cfg *ethconfig.Config) {
+ cfg.Genesis.Config.Morph.UseZktrie = !ctx.GlobalBool(MorphMPTFlag.Name)
+ if cfg.Genesis.Config.Morph.UseZktrie {
+ // zkTrie mode: forced archive mode
+ if ctx.GlobalString(GCModeFlag.Name) != GCModeArchive {
+ Fatalf("zkTrie mode requires --gcmode=archive")
+ }
+ log.Info("Pruning disabled (zkTrie mode)")
+ cfg.NoPruning = true
+ // disable prefetch
+ log.Info("Prefetch disabled (zkTrie mode)")
+ cfg.NoPrefetch = true
+ } else {
+ // MPT mode: pruning is allowed
+ log.Info("MPT mode enabled, pruning is allowed")
+ }
+}
+
case ctx.GlobalBool(MorphFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = 2818
}
cfg.Genesis = core.DefaultMorphMainnetGenesisBlock()
-
- // Handle MPT flag
- cfg.Genesis.Config.Morph.UseZktrie = !ctx.GlobalBool(MorphMPTFlag.Name)
- if cfg.Genesis.Config.Morph.UseZktrie {
- // zkTrie mode: forced archive mode
- if ctx.GlobalString(GCModeFlag.Name) != GCModeArchive {
- log.Crit("zkTrie mode requires --gcmode=archive")
- }
- log.Info("Pruning disabled (zkTrie mode)")
- cfg.NoPruning = true
- // disable prefetch
- log.Info("Prefetch disabled (zkTrie mode)")
- cfg.NoPrefetch = true
- } else {
- // MPT mode: pruning is allowed
- log.Info("MPT mode enabled, pruning is allowed")
- }
+ configureMorphTrieMode(ctx, cfg)
case ctx.GlobalBool(MorphHoleskyFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = 2810
}
cfg.Genesis = core.DefaultMorphHoleskyGenesisBlock()
-
- // Handle MPT flag
- cfg.Genesis.Config.Morph.UseZktrie = !ctx.GlobalBool(MorphMPTFlag.Name)
- if cfg.Genesis.Config.Morph.UseZktrie {
- // zkTrie mode: forced archive mode
- if ctx.GlobalString(GCModeFlag.Name) != GCModeArchive {
- log.Crit("zkTrie mode requires --gcmode=archive")
- }
- log.Info("Pruning disabled (zkTrie mode)")
- cfg.NoPruning = true
- // disable prefetch
- log.Info("Prefetch disabled (zkTrie mode)")
- cfg.NoPrefetch = true
- } else {
- // MPT mode: pruning is allowed
- log.Info("MPT mode enabled, pruning is allowed")
- }
+ configureMorphTrieMode(ctx, cfg)
case ctx.GlobalBool(MorphHoodiFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = 2910
}
cfg.Genesis = core.DefaultMorphHoodiGenesisBlock()
-
- // Handle MPT flag
- cfg.Genesis.Config.Morph.UseZktrie = !ctx.GlobalBool(MorphMPTFlag.Name)
- if cfg.Genesis.Config.Morph.UseZktrie {
- // zkTrie mode: forced archive mode
- if ctx.GlobalString(GCModeFlag.Name) != GCModeArchive {
- log.Crit("zkTrie mode requires --gcmode=archive")
- }
- log.Info("Pruning disabled (zkTrie mode)")
- cfg.NoPruning = true
- // disable prefetch
- log.Info("Prefetch disabled (zkTrie mode)")
- cfg.NoPrefetch = true
- } else {
- // MPT mode: pruning is allowed
- log.Info("MPT mode enabled, pruning is allowed")
- }
+ configureMorphTrieMode(ctx, cfg)Also applies to: 1810-1826, 1832-1848
🤖 Prompt for AI Agents
In cmd/utils/flags.go around lines 1788-1804 (and the other duplicated blocks at
1810-1826 and 1832-1848), the 17-line MPT/zkTrie configuration logic is
duplicated and uses log.Crit for fatal errors; extract this repeated logic into
a single helper function (e.g., applyMorphMPTConfig(ctx, cfg, logger)) and call
it from each Morph network configuration to eliminate duplication and ensure
consistent behavior, and replace the log.Crit call with the file’s conventional
fatal logging function (Fatalf) so the error handling matches other fatal errors
in this file; ensure the helper accepts the ctx and config it needs and
preserves the exact configuration changes (UseZktrie, NoPruning, NoPrefetch) and
the GC mode check.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
rollup/tracing/tracing.go (1)
369-385: Code key selection doesn't check actual trie mode—always prefers poseidon hash when available. The comment correctly states "zkTrie mode uses poseidon hash, MPT mode uses keccak hash," but the code unconditionally prefers poseidonCodeHash over keccakCodeHash when non-zero, regardless of which trie mode is actually active. During the MPT fork transition with both hashes available, this will use the wrong hash in MPT mode.
🔎 Proposed fix to check trie mode

  // Determine the code key based on trie mode:
  // zkTrie mode uses poseidon hash, MPT mode uses keccak hash
- codeKey := keccakCodeHash
- if poseidonCodeHash != (common.Hash{}) {
-     codeKey = poseidonCodeHash
+ var codeKey common.Hash
+ if env.chainConfig.IsMPTFork(block.Time()) {
+     // MPT mode: use keccak hash
+     codeKey = keccakCodeHash
+     if codeKey == (common.Hash{}) {
+         codeKey = poseidonCodeHash
+     }
+ } else {
+     // zkTrie mode: use poseidon hash
+     codeKey = poseidonCodeHash
+     if codeKey == (common.Hash{}) {
+         codeKey = keccakCodeHash
+     }
  }
♻️ Duplicate comments (1)
rollup/tracing/tracing.go (1)
615-622: BytecodeTrace fields CodeSize, KeccakCodeHash, PoseidonCodeHash are not populated. The codeInfo struct contains all these fields, but only Code is being copied to BytecodeTrace. This was flagged in a previous review and remains unaddressed.
🔎 Proposed fix to populate all BytecodeTrace fields

  blockTrace.Bytecodes = append(blockTrace.Bytecodes, &types.BytecodeTrace{
-     Code: hexutil.Bytes{},
+     CodeSize:         0,
+     KeccakCodeHash:   common.Hash{},
+     PoseidonCodeHash: common.Hash{},
+     Code:             hexutil.Bytes{},
  })
  for _, codeInfo := range env.Codes {
      blockTrace.Bytecodes = append(blockTrace.Bytecodes, &types.BytecodeTrace{
-         Code: codeInfo.Code,
+         CodeSize:         codeInfo.CodeSize,
+         KeccakCodeHash:   codeInfo.KeccakCodeHash,
+         PoseidonCodeHash: codeInfo.PoseidonCodeHash,
+         Code:             codeInfo.Code,
      })
  }
📜 Review details
📒 Files selected for processing (1)
rollup/tracing/tracing.go
🧰 Additional context used
🧬 Code graph analysis (1)
rollup/tracing/tracing.go (3)
eth/tracers/logger/logger.go (1)
CodeInfo (205-210)
core/rawdb/accessors_state.go (1)
ReadDiskStateRoot (109-115)
core/types/l2trace.go (4)
BlockTrace (14-24), AccountWrapper (110-117), BytecodeTrace (27-32), StorageTrace (35-46)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (go)
🔇 Additional comments (3)
rollup/tracing/tracing.go (3)
57-93: LGTM - Struct simplifications align with the refactoring goals. The TraceEnv struct now includes ExecutionResults for consolidated per-tx result storage, and the Context struct is appropriately simplified since BlockHash is still provided to the callTracer via tracers.Context.
101-121: LGTM - Helper function properly parameterized for cross-format root handling. The signature extension to accept rootBefore and rootAfter cleanly decouples root resolution from the helper, enabling the caller to handle MPT/zkTrie format differences.
143-156: Cross-format root resolution is correct. The fallback to the header root when ReadDiskStateRoot fails (error returned from database lookup) or returns a zero hash is the appropriate behavior for the MPT↔zkTrie migration. During the migration period, diskRoot mappings will not exist for all blocks, so the silent error fallback provides proper backward compatibility. The condition err == nil && diskRoot != (common.Hash{}) correctly handles both error cases and invalid (zero) hash returns.
blockTrace := &types.BlockTrace{
	ChainID: chainID,
	Version: params.ArchiveVersion(params.CommitHash),
	Coinbase: &types.AccountWrapper{
		Address: env.coinbase,
		Nonce: statedb.GetNonce(env.coinbase),
		Balance: (*hexutil.Big)(statedb.GetBalance(env.coinbase)),
		KeccakCodeHash: statedb.GetKeccakCodeHash(env.coinbase),
		PoseidonCodeHash: statedb.GetPoseidonCodeHash(env.coinbase),
		CodeSize: statedb.GetCodeSize(env.coinbase),
		Address: env.coinbase,
	},
	Header: block.Header(),
	Bytecodes: make([]*types.BytecodeTrace, 0, len(env.Codes)),
	StorageTrace: env.StorageTrace,
	ExecutionResults: env.ExecutionResults,
	TxStorageTraces: env.TxStorageTraces,
	WithdrawTrieRoot: withdrawTrieRoot,
	Transactions: txs,
	StartL1QueueIndex: env.StartL1QueueIndex,
}
Coinbase AccountWrapper only populates Address field.
Unlike sender/receiver wrappers in getTxResult (lines 279-286) which populate all fields (Nonce, Balance, KeccakCodeHash, PoseidonCodeHash, CodeSize), the Coinbase wrapper here only sets Address. If downstream consumers expect full account metadata, this could cause issues.
🔎 Proposed fix to populate full Coinbase account data
blockTrace := &types.BlockTrace{
ChainID: chainID,
Coinbase: &types.AccountWrapper{
- Address: env.coinbase,
+ Address: env.coinbase,
+ Nonce: statedb.GetNonce(env.coinbase),
+ Balance: (*hexutil.Big)(statedb.GetBalance(env.coinbase)),
+ KeccakCodeHash: statedb.GetKeccakCodeHash(env.coinbase),
+ PoseidonCodeHash: statedb.GetPoseidonCodeHash(env.coinbase),
+ CodeSize: statedb.GetCodeSize(env.coinbase),
},Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In rollup/tracing/tracing.go around lines 601 to 613, the Coinbase
AccountWrapper is only setting Address which can break consumers expecting full
account metadata; populate the Coinbase wrapper with the same fields used in
getTxResult (Nonce, Balance, KeccakCodeHash, PoseidonCodeHash, CodeSize) by
retrieving the coinbase account state (e.g., from env/state, env.Accounts, or
the same lookup path used for sender/receiver) and assign those fields into the
AccountWrapper; if the account lookup can fail, handle errors or supply safe
zero/default values and ensure no nil dereference occurs.
* update AssembleL2BlockParams --------- Co-authored-by: corey <corey.zhang@bitget.com>
* update AssembleL2BlockParams * update ec clent * add migration-checker * add gen preimages * clean changes * optimize * update readme * clean readme * clean trie changes --------- Co-authored-by: corey <corey.zhang@bitget.com>
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@cmd/gen-preimages/main.go`:
- Around line 150-165: The loop over the iterator (iter.Next()/iter.Value())
lacks a post-iteration error check; after the for iter.Next() loop completes,
call iter.Error() and handle any returned error (e.g., log via
fmt.Fprintf(os.Stderr, ...), return a non-zero exit code, or abort) before
printing statistics so I/O/corruption errors are not ignored; update the code
around the iter usage (the loop and subsequent prints) to check iter.Error() and
abort/propagate/log the error when non-nil.
In `@cmd/migration-checker/main.go`:
- Around line 224-252: In loadZkTrie the concurrent fmt.Println("ZK Accounts
Loaded:", len(zkLeafMap)) reads len(zkLeafMap) without holding zkLeafMutex,
causing a race when parallel=true; either acquire zkLeafMutex before reading
len(zkLeafMap) (and release it immediately after) or maintain an atomic counter
updated inside the mutex and print that counter instead; update the code around
the fmt.Println line and reference zkLeafMap and zkLeafMutex (or the new atomic
counter) so the map-size logging is always read under synchronization when
parallel is enabled.
- Around line 174-222: In loadMPT fix three concurrency issues: 1) avoid
capturing trieIt by creating per-iteration locals (capture i or make a local
startKeyCopy and call trie.NewIterator(startKeyCopy) inside the goroutine so
each goroutine has its own trieIt) instead of reusing the loop variable; 2)
always protect accesses to mptLeafMap (both reads and writes) with mptLeafMutex
when parallel==true — acquire the lock around the duplicate-key check, the map
write, and the len(mptLeafMap) read (or read len into a local while holding the
lock) so no concurrent map read/write occurs; 3) only call mptLeafMutex.Unlock()
if you actually held the lock (i.e., guard lock/unlock by checking parallel or
branch so lock is acquired before defer unlocking) or use a pattern like if
parallel { mptLeafMutex.Lock(); defer mptLeafMutex.Unlock() } inside the
goroutine around the critical section; update the goroutine body in loadMPT to
use these local variables and proper lock/unlock pairing to eliminate the
unlock-of-unlocked-mutex and iterator capture issues.
In `@cmd/migration-checker/README.md`:
- Around line 63-70: Update the fenced code block under "Expected Output" to
include a language tag (e.g., use ```text or ```console) so the block becomes
fenced as ```text (or ```console) followed by the lines "Accounts done: 1" ...
"Accounts done: 365"; modify the README.md fenced block content around the
"Expected Output" heading to add that tag to satisfy markdownlint MD040.
In `@trie/zk_trie.go`:
- Around line 239-242: The public method ZkTrie.Witness currently panics;
replace the panic with a safe stub that returns an empty map[string]struct{}
(i.e., make Witness return an initialized empty set) so callers won't crash;
keep the existing signature and optionally add a TODO comment referencing
ZkTrie.Witness for future implementation or consider changing the signature to
include an error in a follow-up PR.
- Around line 244-289: The callback cb is invoked concurrently from countLeaves
when parallel==true which can cause data races; modify CountLeaves/countLeaves
to serialize callback execution: when parallel is true create a single goroutine
that consumes a channel of (key,value) items and calls cb, have countLeaves (and
its goroutine children) send copies of key/value into that channel instead of
calling cb directly, close the channel and wait for the consumer to finish
before returning the total count; update CountLeaves to set up the
channel/consumer and pass it (or a sender wrapper) into countLeaves so cb is
always called from one goroutine even when recursion runs in parallel.
🧹 Nitpick comments (1)
cmd/gen-preimages/main.go (1)
141-141: Use the PreimagePrefix constant from rawdb instead of hardcoding. The prefix value "secure-key-" matches rawdb.PreimagePrefix (defined in core/rawdb/schema.go), but the hardcoded string should be replaced with an import of the exported constant to prevent drift if the prefix ever changes.
for iter.Next() {
	totalCount++
	value := iter.Value()
	switch len(value) {
	case 20:
		addrCount++
	case 32:
		slotCount++
	}
}

fmt.Println("========== Preimage Statistics ==========")
fmt.Printf("Total preimages: %d\n", totalCount)
fmt.Printf(" - Address preimages (20 bytes): %d\n", addrCount)
fmt.Printf(" - Storage slot preimages (32 bytes): %d\n", slotCount)
fmt.Println("==========================================")
Missing iterator error check after loop.
The iterator may encounter errors during iteration (e.g., I/O errors, corruption). After the loop completes, you should check iter.Error() to ensure the iteration completed successfully.
Proposed fix
for iter.Next() {
totalCount++
value := iter.Value()
switch len(value) {
case 20:
addrCount++
case 32:
slotCount++
}
}
+
+ if err := iter.Error(); err != nil {
+ fmt.Printf("Error during iteration: %v\n", err)
+ os.Exit(1)
+ }
fmt.Println("========== Preimage Statistics ==========")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
for iter.Next() {
	totalCount++
	value := iter.Value()
	switch len(value) {
	case 20:
		addrCount++
	case 32:
		slotCount++
	}
}
fmt.Println("========== Preimage Statistics ==========")
fmt.Printf("Total preimages: %d\n", totalCount)
fmt.Printf(" - Address preimages (20 bytes): %d\n", addrCount)
fmt.Printf(" - Storage slot preimages (32 bytes): %d\n", slotCount)
fmt.Println("==========================================")

for iter.Next() {
	totalCount++
	value := iter.Value()
	switch len(value) {
	case 20:
		addrCount++
	case 32:
		slotCount++
	}
}

if err := iter.Error(); err != nil {
	fmt.Printf("Error during iteration: %v\n", err)
	os.Exit(1)
}

fmt.Println("========== Preimage Statistics ==========")
fmt.Printf("Total preimages: %d\n", totalCount)
fmt.Printf(" - Address preimages (20 bytes): %d\n", addrCount)
fmt.Printf(" - Storage slot preimages (32 bytes): %d\n", slotCount)
fmt.Println("==========================================")
🤖 Prompt for AI Agents
In `@cmd/gen-preimages/main.go` around lines 150 - 165, The loop over the iterator
(iter.Next()/iter.Value()) lacks a post-iteration error check; after the for
iter.Next() loop completes, call iter.Error() and handle any returned error
(e.g., log via fmt.Fprintf(os.Stderr, ...), return a non-zero exit code, or
abort) before printing statistics so I/O/corruption errors are not ignored;
update the code around the iter usage (the loop and subsequent prints) to check
iter.Error() and abort/propagate/log the error when non-nil.
func loadMPT(mptTrie *trie.SecureTrie, parallel bool) chan map[string][]byte {
	startKey := make([]byte, 32)
	workers := 1 << 5
	if !parallel {
		workers = 1
	}
	step := byte(0xFF) / byte(workers)

	mptLeafMap := make(map[string][]byte, 1000)
	var mptLeafMutex sync.Mutex

	var mptWg sync.WaitGroup
	for i := 0; i < workers; i++ {
		startKey[0] = byte(i) * step
		trieIt := trie.NewIterator(mptTrie.NodeIterator(startKey))

		mptWg.Add(1)
		go func() {
			defer mptWg.Done()
			for trieIt.Next() {
				if parallel {
					mptLeafMutex.Lock()
				}

				if _, ok := mptLeafMap[string(trieIt.Key)]; ok {
					mptLeafMutex.Unlock()
					break
				}

				mptLeafMap[string(dup(trieIt.Key))] = dup(trieIt.Value)

				if parallel {
					mptLeafMutex.Unlock()
				}

				if parallel && len(mptLeafMap)%10000 == 0 {
					fmt.Println("MPT Accounts Loaded:", len(mptLeafMap))
				}
			}
		}()
	}

	respChan := make(chan map[string][]byte)
	go func() {
		mptWg.Wait()
		respChan <- mptLeafMap
	}()
	return respChan
}
🧩 Analysis chain
🏁 Script executed:
# First, let's check the current state of the file
head -n 300 cmd/migration-checker/main.go | tail -n 130
Repository: morph-l2/go-ethereum
Length of output: 3751
🏁 Script executed:
# Let's also check if there are any related files or tests that might show the intended behavior
find . -type f -name "*.go" -path "*/migration-checker/*" | head -20
Repository: morph-l2/go-ethereum
Length of output: 95
🏁 Script executed:
# Search for usage patterns of loadMPT to understand how parallel parameter is used
rg "loadMPT" --type go -B 2 -A 5
Repository: morph-l2/go-ethereum
Length of output: 903
Fix goroutine capture, concurrent map read, and unlock mismatch in loadMPT.
Three concurrency bugs exist:
- trieIt is captured by reference from the loop body. Multiple goroutines can share iterator instances or read from wrong iterators created in other loop iterations.
- len(mptLeafMap) is read without holding the lock (line 209) while writer goroutines hold it, triggering concurrent map read/write panics.
- When a duplicate key is found, mptLeafMutex.Unlock() is called unconditionally (line 200), but the lock is only acquired if parallel==true, causing an "unlock of unlocked mutex" panic when parallel==false.
Suggested fix
- startKey := make([]byte, 32)
workers := 1 << 5
if !parallel {
workers = 1
}
step := byte(0xFF) / byte(workers)
for i := 0; i < workers; i++ {
- startKey[0] = byte(i) * step
- trieIt := trie.NewIterator(mptTrie.NodeIterator(startKey))
+ startKey := make([]byte, 32)
+ startKey[0] = byte(i) * step
+ trieIt := trie.NewIterator(mptTrie.NodeIterator(startKey))
mptWg.Add(1)
- go func() {
+ go func(it *trie.Iterator) {
defer mptWg.Done()
- for trieIt.Next() {
+ for it.Next() {
if parallel {
mptLeafMutex.Lock()
}
- if _, ok := mptLeafMap[string(trieIt.Key)]; ok {
- mptLeafMutex.Unlock()
+ if _, ok := mptLeafMap[string(it.Key)]; ok {
+ if parallel {
+ mptLeafMutex.Unlock()
+ }
break
}
- mptLeafMap[string(dup(trieIt.Key))] = dup(trieIt.Value)
+ mptLeafMap[string(dup(it.Key))] = dup(it.Value)
+ size := len(mptLeafMap)
if parallel {
mptLeafMutex.Unlock()
}
- if parallel && len(mptLeafMap)%10000 == 0 {
- fmt.Println("MPT Accounts Loaded:", len(mptLeafMap))
+ if parallel && size%10000 == 0 {
+ fmt.Println("MPT Accounts Loaded:", size)
}
}
- }()
+ }(trieIt)
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```go
func loadMPT(mptTrie *trie.SecureTrie, parallel bool) chan map[string][]byte {
	startKey := make([]byte, 32)
	workers := 1 << 5
	if !parallel {
		workers = 1
	}
	step := byte(0xFF) / byte(workers)
	mptLeafMap := make(map[string][]byte, 1000)
	var mptLeafMutex sync.Mutex
	var mptWg sync.WaitGroup
	for i := 0; i < workers; i++ {
		startKey[0] = byte(i) * step
		trieIt := trie.NewIterator(mptTrie.NodeIterator(startKey))
		mptWg.Add(1)
		go func() {
			defer mptWg.Done()
			for trieIt.Next() {
				if parallel {
					mptLeafMutex.Lock()
				}
				if _, ok := mptLeafMap[string(trieIt.Key)]; ok {
					mptLeafMutex.Unlock()
					break
				}
				mptLeafMap[string(dup(trieIt.Key))] = dup(trieIt.Value)
				if parallel {
					mptLeafMutex.Unlock()
				}
				if parallel && len(mptLeafMap)%10000 == 0 {
					fmt.Println("MPT Accounts Loaded:", len(mptLeafMap))
				}
			}
		}()
	}
	respChan := make(chan map[string][]byte)
	go func() {
		mptWg.Wait()
		respChan <- mptLeafMap
	}()
	return respChan
}
```

After:

```go
func loadMPT(mptTrie *trie.SecureTrie, parallel bool) chan map[string][]byte {
	workers := 1 << 5
	if !parallel {
		workers = 1
	}
	step := byte(0xFF) / byte(workers)
	mptLeafMap := make(map[string][]byte, 1000)
	var mptLeafMutex sync.Mutex
	var mptWg sync.WaitGroup
	for i := 0; i < workers; i++ {
		startKey := make([]byte, 32)
		startKey[0] = byte(i) * step
		trieIt := trie.NewIterator(mptTrie.NodeIterator(startKey))
		mptWg.Add(1)
		go func(it *trie.Iterator) {
			defer mptWg.Done()
			for it.Next() {
				if parallel {
					mptLeafMutex.Lock()
				}
				if _, ok := mptLeafMap[string(it.Key)]; ok {
					if parallel {
						mptLeafMutex.Unlock()
					}
					break
				}
				mptLeafMap[string(dup(it.Key))] = dup(it.Value)
				size := len(mptLeafMap)
				if parallel {
					mptLeafMutex.Unlock()
				}
				if parallel && size%10000 == 0 {
					fmt.Println("MPT Accounts Loaded:", size)
				}
			}
		}(trieIt)
	}
	respChan := make(chan map[string][]byte)
	go func() {
		mptWg.Wait()
		respChan <- mptLeafMap
	}()
	return respChan
}
```
🤖 Prompt for AI Agents
In `@cmd/migration-checker/main.go` around lines 174 - 222, In loadMPT fix three
concurrency issues: 1) avoid capturing trieIt by creating per-iteration locals
(capture i or make a local startKeyCopy and call trie.NewIterator(startKeyCopy)
inside the goroutine so each goroutine has its own trieIt) instead of reusing
the loop variable; 2) always protect accesses to mptLeafMap (both reads and
writes) with mptLeafMutex when parallel==true — acquire the lock around the
duplicate-key check, the map write, and the len(mptLeafMap) read (or read len
into a local while holding the lock) so no concurrent map read/write occurs; 3)
only call mptLeafMutex.Unlock() if you actually held the lock (i.e., guard
lock/unlock by checking parallel or branch so lock is acquired before defer
unlocking) or use a pattern like if parallel { mptLeafMutex.Lock(); defer
mptLeafMutex.Unlock() } inside the goroutine around the critical section; update
the goroutine body in loadMPT to use these local variables and proper
lock/unlock pairing to eliminate the unlock-of-unlocked-mutex and iterator
capture issues.
```go
func loadZkTrie(zkTrie *trie.ZkTrie, parallel, paranoid bool) chan map[string][]byte {
	zkLeafMap := make(map[string][]byte, 1000)
	var zkLeafMutex sync.Mutex
	zkDone := make(chan map[string][]byte)
	go func() {
		zkTrie.CountLeaves(func(key, value []byte) {
			preimageKey := zkTrie.GetKey(key)
			if len(preimageKey) == 0 {
				panic(fmt.Sprintf("preimage not found zk trie %s", hex.EncodeToString(key)))
			}
			if parallel {
				zkLeafMutex.Lock()
			}
			zkLeafMap[string(dup(preimageKey))] = value
			if parallel {
				zkLeafMutex.Unlock()
			}
			if parallel && len(zkLeafMap)%10000 == 0 {
				fmt.Println("ZK Accounts Loaded:", len(zkLeafMap))
			}
		}, parallel, paranoid)
		zkDone <- zkLeafMap
	}()
	return zkDone
}
```
🧩 Analysis chain
🏁 Script executed:
# Find and examine the CountLeaves method in trie/zk_trie.go
fd "zk_trie.go" trie/ | head -5Repository: morph-l2/go-ethereum
Length of output: 79
🏁 Script executed:
# Search for the CountLeaves method implementation
rg "func.*CountLeaves" -A 30 trie/Repository: morph-l2/go-ethereum
Length of output: 1483
🏁 Script executed:
# Also check the exact line numbers in the review to confirm the mutex scope
sed -n '224,252p' cmd/migration-checker/main.go | cat -n

Repository: morph-l2/go-ethereum
Length of output: 996
Guard map-size logging when parallel.
len(zkLeafMap) is read without holding the mutex on line 22, creating a race condition when parallel=true and concurrent callbacks modify the map.
🛠️ Suggested fix
if parallel {
zkLeafMutex.Lock()
}
zkLeafMap[string(dup(preimageKey))] = value
+ size := len(zkLeafMap)
if parallel {
zkLeafMutex.Unlock()
}
- if parallel && len(zkLeafMap)%10000 == 0 {
- fmt.Println("ZK Accounts Loaded:", len(zkLeafMap))
+ if parallel && size%10000 == 0 {
+ fmt.Println("ZK Accounts Loaded:", size)
}📝 Committable suggestion
‼️ IMPORTANT
Before:

```go
func loadZkTrie(zkTrie *trie.ZkTrie, parallel, paranoid bool) chan map[string][]byte {
	zkLeafMap := make(map[string][]byte, 1000)
	var zkLeafMutex sync.Mutex
	zkDone := make(chan map[string][]byte)
	go func() {
		zkTrie.CountLeaves(func(key, value []byte) {
			preimageKey := zkTrie.GetKey(key)
			if len(preimageKey) == 0 {
				panic(fmt.Sprintf("preimage not found zk trie %s", hex.EncodeToString(key)))
			}
			if parallel {
				zkLeafMutex.Lock()
			}
			zkLeafMap[string(dup(preimageKey))] = value
			if parallel {
				zkLeafMutex.Unlock()
			}
			if parallel && len(zkLeafMap)%10000 == 0 {
				fmt.Println("ZK Accounts Loaded:", len(zkLeafMap))
			}
		}, parallel, paranoid)
		zkDone <- zkLeafMap
	}()
	return zkDone
}
```

After:

```go
func loadZkTrie(zkTrie *trie.ZkTrie, parallel, paranoid bool) chan map[string][]byte {
	zkLeafMap := make(map[string][]byte, 1000)
	var zkLeafMutex sync.Mutex
	zkDone := make(chan map[string][]byte)
	go func() {
		zkTrie.CountLeaves(func(key, value []byte) {
			preimageKey := zkTrie.GetKey(key)
			if len(preimageKey) == 0 {
				panic(fmt.Sprintf("preimage not found zk trie %s", hex.EncodeToString(key)))
			}
			if parallel {
				zkLeafMutex.Lock()
			}
			zkLeafMap[string(dup(preimageKey))] = value
			size := len(zkLeafMap)
			if parallel {
				zkLeafMutex.Unlock()
			}
			if parallel && size%10000 == 0 {
				fmt.Println("ZK Accounts Loaded:", size)
			}
		}, parallel, paranoid)
		zkDone <- zkLeafMap
	}()
	return zkDone
}
```
🤖 Prompt for AI Agents
In `@cmd/migration-checker/main.go` around lines 224 - 252, In loadZkTrie the
concurrent fmt.Println("ZK Accounts Loaded:", len(zkLeafMap)) reads
len(zkLeafMap) without holding zkLeafMutex, causing a race when parallel=true;
either acquire zkLeafMutex before reading len(zkLeafMap) (and release it
immediately after) or maintain an atomic counter updated inside the mutex and
print that counter instead; update the code around the fmt.Println line and
reference zkLeafMap and zkLeafMutex (or the new atomic counter) so the map-size
logging is always read under synchronization when parallel is enabled.
## Expected Output

```
Accounts done: 1
Accounts done: 2
...
Accounts done: 365
```
Add a language tag to the “Expected Output” fenced block.
Markdownlint MD040 flags this block. Use text (or console) for consistency.
🔧 Suggested fix
-```
+```text
Accounts done: 1
Accounts done: 2
...
Accounts done: 365
📝 Committable suggestion

```suggestion
## Expected Output
```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
65-65: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@cmd/migration-checker/README.md` around lines 63 - 70, Update the fenced code
block under "Expected Output" to include a language tag (e.g., use ```text or
```console) so the block becomes fenced as ```text (or ```console) followed by
the lines "Accounts done: 1" ... "Accounts done: 365"; modify the README.md
fenced block content around the "Expected Output" heading to add that tag to
satisfy markdownlint MD040.
```go
// Witness returns a set containing all trie nodes that have been accessed.
func (t *ZkTrie) Witness() map[string]struct{} {
	panic("not implemented")
}
```
Avoid exposing a public method that always panics.
Witness() is now part of the public API but always panics, which can crash callers at runtime. Prefer returning a safe default (or changing the signature to return an error) until an implementation is ready.
🛠️ Suggested safe stub
func (t *ZkTrie) Witness() map[string]struct{} {
- panic("not implemented")
+ // TODO: implement witness collection for zkTrie
+ return map[string]struct{}{}
}🤖 Prompt for AI Agents
In `@trie/zk_trie.go` around lines 239 - 242, The public method ZkTrie.Witness
currently panics; replace the panic with a safe stub that returns an empty
map[string]struct{} (i.e., make Witness return an initialized empty set) so
callers won't crash; keep the existing signature and optionally add a TODO
comment referencing ZkTrie.Witness for future implementation or consider
changing the signature to include an error in a follow-up PR.
```go
func (t *ZkTrie) CountLeaves(cb func(key, value []byte), parallel, verifyNodeHashes bool) uint64 {
	root, err := t.ZkTrie.Tree().Root()
	if err != nil {
		panic("CountLeaves cannot get root")
	}
	return t.countLeaves(root, cb, 0, parallel, verifyNodeHashes)
}

func (t *ZkTrie) countLeaves(root *zkt.Hash, cb func(key, value []byte), depth int, parallel, verifyNodeHashes bool) uint64 {
	if root == nil {
		return 0
	}
	rootNode, err := t.ZkTrie.Tree().GetNode(root)
	if err != nil {
		panic("countLeaves cannot get rootNode")
	}
	if rootNode.Type == zktrie.NodeTypeLeaf_New {
		if verifyNodeHashes {
			calculatedNodeHash, err := rootNode.NodeHash()
			if err != nil {
				panic("countLeaves cannot get calculatedNodeHash")
			}
			if *calculatedNodeHash != *root {
				panic("countLeaves node hash mismatch")
			}
		}
		cb(append([]byte{}, rootNode.NodeKey.Bytes()...), append([]byte{}, rootNode.Data()...))
		return 1
	} else {
		if parallel && depth < 5 {
			count := make(chan uint64)
			leftT := t.Copy()
			rightT := t.Copy()
			go func() {
				count <- leftT.countLeaves(rootNode.ChildL, cb, depth+1, parallel, verifyNodeHashes)
			}()
			go func() {
				count <- rightT.countLeaves(rootNode.ChildR, cb, depth+1, parallel, verifyNodeHashes)
			}()
			return <-count + <-count
		} else {
			return t.countLeaves(rootNode.ChildL, cb, depth+1, parallel, verifyNodeHashes) + t.countLeaves(rootNode.ChildR, cb, depth+1, parallel, verifyNodeHashes)
		}
```
Document or guard concurrent callback invocations when parallel=true.
countLeaves calls cb from multiple goroutines, but there’s no contract stating the callback must be thread-safe. This can easily introduce data races in callers. Please either serialize callback execution internally or clearly document the concurrency requirement.
🧵 Option: serialize callbacks internally when parallel
import (
"fmt"
+ "sync"
@@
func (t *ZkTrie) CountLeaves(cb func(key, value []byte), parallel, verifyNodeHashes bool) uint64 {
+ if parallel {
+ var mu sync.Mutex
+ orig := cb
+ cb = func(key, value []byte) {
+ mu.Lock()
+ defer mu.Unlock()
+ orig(key, value)
+ }
+ }
root, err := t.ZkTrie.Tree().Root()
if err != nil {
panic("CountLeaves cannot get root")
}
return t.countLeaves(root, cb, 0, parallel, verifyNodeHashes)
}Based on learnings, this API is intended to be caller-responsible for thread safety in parallel mode—please make that explicit if you prefer not to serialize the callback.
🤖 Prompt for AI Agents
In `@trie/zk_trie.go` around lines 244 - 289, The callback cb is invoked
concurrently from countLeaves when parallel==true which can cause data races;
modify CountLeaves/countLeaves to serialize callback execution: when parallel is
true create a single goroutine that consumes a channel of (key,value) items and
calls cb, have countLeaves (and its goroutine children) send copies of key/value
into that channel instead of calling cb directly, close the channel and wait for
the consumer to finish before returning the total count; update CountLeaves to
set up the channel/consumer and pass it (or a sender wrapper) into countLeaves
so cb is always called from one goroutine even when recursion runs in parallel.
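The channel-based serialization the prompt describes can be sketched independently of the trie types (the `visitParallel` helper, its item shape, and worker counts are all illustrative): producers fan work into a channel, and exactly one consumer goroutine invokes the callback, so `cb` needs no internal locking.

```go
package main

import (
	"fmt"
	"sync"
)

// visitParallel walks items with several worker goroutines but delivers
// every callback from a single consumer goroutine, so cb is never called
// concurrently even though the traversal itself is parallel.
func visitParallel(items []int, workers int, cb func(v int)) {
	ch := make(chan int, workers)
	var wg sync.WaitGroup
	chunk := (len(items) + workers - 1) / workers
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if lo > len(items) {
			lo = len(items)
		}
		if hi > len(items) {
			hi = len(items)
		}
		wg.Add(1)
		go func(part []int) {
			defer wg.Done()
			for _, v := range part {
				ch <- v // producers never touch cb directly
			}
		}(items[lo:hi])
	}
	go func() {
		wg.Wait()
		close(ch) // all producers done; let the consumer loop finish
	}()
	for v := range ch { // single goroutine invokes cb
		cb(v)
	}
}

func main() {
	sum := 0
	items := make([]int, 100)
	for i := range items {
		items[i] = i + 1
	}
	// cb mutates unsynchronized state, which is safe here because
	// callbacks are serialized by the consumer loop.
	visitParallel(items, 4, func(v int) { sum += v })
	fmt.Println(sum) // → 5050
}
```

The trade-off versus wrapping `cb` in a mutex is throughput: the channel keeps producers unblocked while the consumer drains, at the cost of copying each (key, value) pair into the channel.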
* limit AssembleL2Block
* fmt

---------

Co-authored-by: corey <corey.zhang@bitget.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@eth/api.go`:
- Around line 690-697: DiskRoot dereferences the pointer blockNrOrHash without
checking for nil which can panic; mirror EstimateL1DataFee by guarding the
parameter: at the start of MorphAPI.DiskRoot, check if blockNrOrHash == nil and
if so set a local var to rpc.PendingBlockNumber (or otherwise initialize a
non-pointer default), then call api.eth.APIBackend.BlockByNumberOrHash(ctx,
*localVar) instead of dereferencing the original pointer; update only DiskRoot
(retain the pointer signature) and reuse the same defaulting pattern used in
EstimateL1DataFee to prevent nil-pointer dereference.
🧹 Nitpick comments (1)
eth/api.go (1)
699-704: Use a consistent error-handling pattern with the rest of the codebase. The code checks `diskRoot != (common.Hash{})` instead of following the idiomatic pattern used elsewhere. The `ReadDiskStateRoot` function documents that returning an error when the mapping doesn't exist is normal and expected. Other callers in the codebase consistently check `err == nil` to determine whether a mapping exists (e.g., `trie/database.go:670`, `core/state/database.go:143`). For consistency, check the error explicitly:

♻️ Suggested fix

- if diskRoot, _ := rawdb.ReadDiskStateRoot(api.eth.ChainDb(), block.Root()); diskRoot != (common.Hash{}) {
+ diskRoot, err := rawdb.ReadDiskStateRoot(api.eth.ChainDb(), block.Root())
+ if err == nil && diskRoot != (common.Hash{}) {
- if diskRoot, _ := rawdb.ReadDiskStateRoot(api.eth.ChainDb(), block.Root()); diskRoot != (common.Hash{}) { + diskRoot, err := rawdb.ReadDiskStateRoot(api.eth.ChainDb(), block.Root()) + if err == nil && diskRoot != (common.Hash{}) {
```go
func (api *MorphAPI) DiskRoot(ctx context.Context, blockNrOrHash *rpc.BlockNumberOrHash) (DiskAndHeaderRoot, error) {
	block, err := api.eth.APIBackend.BlockByNumberOrHash(ctx, *blockNrOrHash)
	if err != nil {
		return DiskAndHeaderRoot{}, fmt.Errorf("failed to retrieve block: %w", err)
	}
	if block == nil {
		return DiskAndHeaderRoot{}, fmt.Errorf("block not found: %s", blockNrOrHash.String())
	}
```
Nil-pointer dereference when blockNrOrHash is omitted or null.
blockNrOrHash is a pointer, so an RPC call that omits the parameter (or sends null) will panic on *blockNrOrHash at line 691. Compare with EstimateL1DataFee (line 573–577 in this file), which guards against nil by defaulting to rpc.PendingBlockNumber.
Either add a nil guard or change the parameter to a non-pointer type.
🐛 Option A: add a nil guard with a sensible default
func (api *MorphAPI) DiskRoot(ctx context.Context, blockNrOrHash *rpc.BlockNumberOrHash) (DiskAndHeaderRoot, error) {
- block, err := api.eth.APIBackend.BlockByNumberOrHash(ctx, *blockNrOrHash)
+ bNrOrHash := rpc.BlockNumberOrHashWithNumber(rpc.LatestBlockNumber)
+ if blockNrOrHash != nil {
+ bNrOrHash = *blockNrOrHash
+ }
+ block, err := api.eth.APIBackend.BlockByNumberOrHash(ctx, bNrOrHash)🐛 Option B: use a non-pointer parameter (simpler, no default needed)
-func (api *MorphAPI) DiskRoot(ctx context.Context, blockNrOrHash *rpc.BlockNumberOrHash) (DiskAndHeaderRoot, error) {
- block, err := api.eth.APIBackend.BlockByNumberOrHash(ctx, *blockNrOrHash)
+func (api *MorphAPI) DiskRoot(ctx context.Context, blockNrOrHash rpc.BlockNumberOrHash) (DiskAndHeaderRoot, error) {
+ block, err := api.eth.APIBackend.BlockByNumberOrHash(ctx, blockNrOrHash)📝 Committable suggestion
‼️ IMPORTANT
Before:

```go
func (api *MorphAPI) DiskRoot(ctx context.Context, blockNrOrHash *rpc.BlockNumberOrHash) (DiskAndHeaderRoot, error) {
	block, err := api.eth.APIBackend.BlockByNumberOrHash(ctx, *blockNrOrHash)
	if err != nil {
		return DiskAndHeaderRoot{}, fmt.Errorf("failed to retrieve block: %w", err)
	}
	if block == nil {
		return DiskAndHeaderRoot{}, fmt.Errorf("block not found: %s", blockNrOrHash.String())
	}
```

After (Option A):

```go
func (api *MorphAPI) DiskRoot(ctx context.Context, blockNrOrHash *rpc.BlockNumberOrHash) (DiskAndHeaderRoot, error) {
	bNrOrHash := rpc.BlockNumberOrHashWithNumber(rpc.LatestBlockNumber)
	if blockNrOrHash != nil {
		bNrOrHash = *blockNrOrHash
	}
	block, err := api.eth.APIBackend.BlockByNumberOrHash(ctx, bNrOrHash)
	if err != nil {
		return DiskAndHeaderRoot{}, fmt.Errorf("failed to retrieve block: %w", err)
	}
	if block == nil {
		return DiskAndHeaderRoot{}, fmt.Errorf("block not found: %s", blockNrOrHash.String())
	}
```

After (Option B):

```go
func (api *MorphAPI) DiskRoot(ctx context.Context, blockNrOrHash rpc.BlockNumberOrHash) (DiskAndHeaderRoot, error) {
	block, err := api.eth.APIBackend.BlockByNumberOrHash(ctx, blockNrOrHash)
	if err != nil {
		return DiskAndHeaderRoot{}, fmt.Errorf("failed to retrieve block: %w", err)
	}
	if block == nil {
		return DiskAndHeaderRoot{}, fmt.Errorf("block not found: %s", blockNrOrHash.String())
	}
```
🤖 Prompt for AI Agents
In `@eth/api.go` around lines 690 - 697, DiskRoot dereferences the pointer
blockNrOrHash without checking for nil which can panic; mirror EstimateL1DataFee
by guarding the parameter: at the start of MorphAPI.DiskRoot, check if
blockNrOrHash == nil and if so set a local var to rpc.PendingBlockNumber (or
otherwise initialize a non-pointer default), then call
api.eth.APIBackend.BlockByNumberOrHash(ctx, *localVar) instead of dereferencing
the original pointer; update only DiskRoot (retain the pointer signature) and
reuse the same defaulting pattern used in EstimateL1DataFee to prevent
nil-pointer dereference.
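Option A's defaulting behavior can be exercised in isolation. In the sketch below, `blockRef`, `latestBlock`, and `resolveRef` are simplified stand-ins for `rpc.BlockNumberOrHash` and the real guard, not the actual API: a nil pointer (the parameter omitted or sent as null over RPC) falls back to a default instead of being dereferenced.

```go
package main

import "fmt"

// blockRef is a simplified stand-in for rpc.BlockNumberOrHash.
type blockRef struct{ number int64 }

const latestBlock = -1

// resolveRef mirrors the nil guard in Option A: an omitted parameter
// defaults to "latest" rather than panicking on dereference.
func resolveRef(ref *blockRef) blockRef {
	if ref == nil {
		return blockRef{number: latestBlock}
	}
	return *ref
}

func main() {
	fmt.Println(resolveRef(nil).number)           // omitted param → -1 (latest)
	fmt.Println(resolveRef(&blockRef{42}).number) // explicit param → 42
}
```

Option B sidesteps the problem entirely by taking the value type, which Go's RPC decoder zero-initializes; the pointer form is only worth keeping when "parameter absent" must be distinguishable from the zero value.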
1. Purpose or design rationale of this PR
To better integrate with Ethereum, we will switch the block state tree from zktrie to mpt.
2. PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
3. Deployment tag versioning
Has the version in `params/version.go` been updated?

4. Breaking change label

Does this PR have the `breaking-change` label?

Summary by CodeRabbit
New Features
Bug Fixes
Tests
Documentation