Feat zk integration #662
base: testnet
Conversation
Commits:
- … manager query flexibility, and fix default value for transaction hashes
- … into l2ps_implementation
- …, constraint comment
- …andling and enhance Poseidon hasher interface
- …t crashes and improve error logging
- …ure authentication and add node identity certificates.
- …ant discovery and delta sync, and enable DTR Manager.
- …monic, and update L2PS transaction execution and network management.
- …updating L2PS quickstart and Solana resolver.
- L2ps implementation
- …d interfaces for improved readability.
- … circuit range checks, and improve signaling server robustness with offline message rollback.
- …rotection, and refactor network handlers for improved error logging and type safety.
- …ool transactions, enhance L2PS circuit with underflow protection, and refactor graceful shutdown.
- …down and mempool transaction saving.
- …t-zk-integration
- …into feat-zk-integration
- …Merkle verification key, and include a setup script.
This PR modifies files that will be reverted:
Walkthrough

This PR implements three interconnected systems: ZK Identity (Groth16 circuits, proof verification, Merkle trees), L2PS privacy subnets (encrypted mempools, hash service, batch aggregation, validator coordination), and offline instant messaging. It includes consensus integration, new RPC endpoints, database entities, test/setup scripts, and extensive documentation.
Sequence Diagrams

```mermaid
sequenceDiagram
    actor User
    participant Client as Client / SDK
    participant L2PS as L2PS Node<br/>(Participant)
    participant Mempool as L2PS Mempool<br/>(Local)
    participant HashSvc as HashService<br/>(Periodic)
    participant Validator as Validator Node
    User->>Client: Create L2PS Transaction
    Client->>Client: Encrypt with L2PS Keys
    Client->>L2PS: Send Encrypted Tx
    L2PS->>L2PS: Decrypt & Validate
    L2PS->>Mempool: Store(l2psUid, encryptedTx)
    Mempool->>Mempool: Compute Consolidated Hash<br/>(per-UID transactions)
    HashSvc->>Mempool: Get Consolidated Hash
    HashSvc->>HashSvc: Create Hash Update Tx
    HashSvc->>Validator: Relay Hash (DTR/HTTP)
    Validator->>Validator: Store Hash<br/>(UID → Hash mapping)<br/>No transaction content
    Note over Validator: Privacy: Validators only<br/>see UID-to-hash,<br/>not transaction content
```
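The content-blind storage step at the end of the diagram can be sketched as follows. This is an illustrative sketch with hypothetical names, not the node's actual `L2PSHashes` implementation: validators persist only a UID → consolidated-hash record, never transaction bodies.

```typescript
// Sketch of the content-blind mapping the diagram describes:
// validators keep only UID -> consolidated hash, never transaction content.
interface HashRecord {
    consolidatedHash: string
    transactionCount: number
    updatedAt: number
}

class ValidatorHashStore {
    private records = new Map<string, HashRecord>()

    // Store/overwrite the latest consolidated hash for an L2PS subnet.
    updateHash(l2psUid: string, consolidatedHash: string, transactionCount: number): void {
        this.records.set(l2psUid, {
            consolidatedHash,
            transactionCount,
            updatedAt: Date.now(),
        })
    }

    getHash(l2psUid: string): HashRecord | undefined {
        return this.records.get(l2psUid)
    }
}
```

Because the store holds nothing but the mapping, a compromised validator learns only that subnet activity occurred, not what the transactions contain.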
```mermaid
sequenceDiagram
    actor User
    participant SDK as Client SDK
    participant Circuit as ZK Circuit<br/>(identity_with_merkle)
    participant Prover as Proof Generator<br/>(snarkjs/PLONK)
    participant Verifier as ProofVerifier<br/>(Server)
    participant DB as Database
    User->>SDK: Create Identity Proof<br/>(provider_id, secret, context)
    SDK->>Circuit: Compute Commitment<br/>= Poseidon(provider_id, secret)
    SDK->>DB: Fetch Merkle Proof<br/>for Commitment
    Circuit->>Circuit: Verify Merkle Inclusion<br/>against merkle_root
    Circuit->>Circuit: Compute Nullifier<br/>= Poseidon(provider_id,<br/>secret, context)
    SDK->>Prover: Generate ZK Proof
    Prover->>Prover: Create Groth16/PLONK Proof
    SDK->>Verifier: Submit Attestation<br/>(proof, publicSignals)
    Verifier->>Verifier: Verify Cryptographic Proof
    Verifier->>DB: Check Merkle Root Current
    Verifier->>DB: Mark Nullifier Used<br/>(prevent double-spend)
    Verifier->>DB: Award Points to User
    Verifier->>SDK: Return Success + Metadata
```
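The commitment/nullifier relationship in the diagram can be illustrated with a small sketch. SHA-256 stands in for Poseidon purely for demonstration (the real circuit hashes field elements with Poseidon); the function names are hypothetical.

```typescript
import { createHash } from "node:crypto"

// SHA-256 stand-in for Poseidon, for illustration only.
function hash(...parts: string[]): string {
    return createHash("sha256").update(parts.join("|")).digest("hex")
}

// Commitment binds (provider_id, secret) without revealing them;
// it is what gets inserted into the Merkle tree.
const commitment = (providerId: string, secret: string) => hash(providerId, secret)

// Nullifier mixes in the context: the same identity produces a different
// nullifier per context, but a stable one within a context, which is what
// lets the verifier detect double-spends by marking nullifiers used.
const nullifier = (providerId: string, secret: string, context: string) =>
    hash(providerId, secret, context)
```

Within one context the nullifier is deterministic, so the verifier's "Mark Nullifier Used" step rejects a second attestation from the same identity; across contexts the nullifiers are unlinkable.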
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~180+ minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Review Summary by Qodo

ZK-SNARK Integration with L2PS Batch Aggregation and Identity Verification System
Description

- **L2PS (Layer 2 Parallel Subnet) Integration**: Complete implementation of encrypted transaction batching, aggregation, and ZK-SNARK proof generation for parallel network processing
- **L2PS Core Services**: New L2PSBatchAggregator, L2PSMempool, L2PSTransactionExecutor, L2PSConsensus, L2PSHashService, and L2PSProofManager classes for transaction lifecycle management and batch processing
- **ParallelNetworks Manager**: Refactored singleton manager for loading, encrypting/decrypting L2PS transactions with signature verification and path traversal protection
- **ZK-SNARK Proof System**: Implemented PLONK proof generation for L2PS batches with automatic circuit size selection, Bun-compatible verification wrapper, and multi-party trusted setup ceremony
- **ZK Identity System**: New proof verifier with nullifier uniqueness checking, Merkle root validation, and RPC endpoints for identity attestations
- **OmniProtocol Handlers**: Seven new handlers (opcodes 0x70-0x77) for L2PS operations including encrypted transaction submission, proof retrieval, and batch verification
- **Offline Messaging**: Enhanced signaling server with async/await support, offline message storage with database persistence, and DoS protection
- **Testing & Scripts**: Comprehensive test suites for ZK systems, stress/load test scripts for L2PS, and a unified ZK setup script with Powers of Tau verification
- **Database Updates**: New entities for L2PS transactions, proofs, hashes, mempool, and identity commitments; updated Mempool nonce column to bigint

Diagram

```mermaid
flowchart LR
    A["L2PS Transactions"] -->|"Encrypt & Validate"| B["ParallelNetworks"]
    B -->|"Store"| C["L2PSMempool"]
    C -->|"Aggregate"| D["L2PSBatchAggregator"]
    D -->|"Generate Proofs"| E["L2PSBatchProver"]
    E -->|"PLONK Proofs"| F["L2PSProofManager"]
    F -->|"Apply at Consensus"| G["L2PSConsensus"]
    G -->|"GCR Edits"| H["Unified L1 Batch"]
    I["ZK Identity"] -->|"Verify"| J["ProofVerifier"]
    J -->|"Check Nullifiers"| K["MerkleTreeManager"]
    D -->|"Hash Updates"| L["L2PSHashService"]
    L -->|"Relay"| M["OmniProtocol Handlers"]
```
File Changes

1. src/libs/l2ps/L2PSBatchAggregator.ts
Code Review by Qodo
1. L2PS_PHASES.md not in history/
```markdown
# L2PS Implementation Phases

This document provides actionable implementation steps for completing the L2PS (Layer 2 Privacy Subnets) system in the Demos Network node software.

**Branch**: l2ps_simplified
**Status**: ALL PHASES COMPLETE (100%) - Implementation finished, awaiting testing
**Context**: See Serena memories: l2ps_overview, l2ps_architecture, l2ps_implementation_status, l2ps_code_patterns, l2ps_remaining_work

---
```
1. l2ps_phases.md not in history/ 📘 Rule violation ⛯ Reliability

- The PR adds multiple planning/design-style markdown documents (implementation phases, testing/validation guides, and integration plans) outside the required history/ directory.
- This breaks the repository hygiene requirement intended to keep ephemeral/AI-generated planning content out of the root and other non-history/ locations.
- Keeping these documents outside history/ makes it harder to distinguish permanent documentation from temporary planning artifacts.
Agent Prompt
## Issue description
The PR adds planning/design markdown documents outside the required `history/` directory.
## Issue Context
Compliance requires AI-generated/ephemeral planning and design notes to live under `history/` rather than repo root or other directories (e.g., `.serena/memories/`, `src/libs/...`).
## Fix Focus Areas
- L2PS_PHASES.md[1-20]
- L2PS_TESTING.md[1-15]
- src/libs/l2ps/L2PS_DTR_IMPLEMENTATION.md[1-25]
- .serena/memories/zk_verify_and_delete_plan.md[1-15]
```markdown
### Implementation Standards
- [ ] All new code follows established patterns
- [ ] Error handling is comprehensive
- [ ] Type safety is maintained
- [ ] No hardcoded values (use config/env vars)

### Testing (if applicable)
- [ ] Tests pass: `bun run test:chains`
- [ ] New functionality has test coverage
- [ ] Edge cases are covered
```
2. Markdown `- [ ]` checklists 📘 Rule violation ⛯ Reliability

- The PR introduces markdown task lists (e.g., `- [ ] ...`) in documentation files, functioning as a TODO/task tracking mechanism.
- This duplicates the project's required issue tracking system (bd/beads) and can cause tracking drift and confusion.
- These checklists should be removed or replaced with links to beads issues (or converted to non-task narrative documentation).
Agent Prompt
## Issue description
Markdown checkbox task lists were added to documentation, creating an alternate issue-tracking system.
## Issue Context
The repository uses bd (beads) for issue tracking; docs must not contain TODO/task tracking checklists.
## Fix Focus Areas
- .serena/memories/task_completion_checklist.md[32-104]
- src/features/zk/README.md[263-273]
- L2PS_TESTING.md[424-471]
```typescript
private static ensureInitialized(): void {
    if (!this.repo) {
        throw new Error("[L2PS Hashes] Repository not initialized. Call init() first.")
    }
}
```
3. L2PSHashes not initialized 🐞 Bug ⛯ Reliability

- `L2PSHashes.updateHash()` now hard-fails unless `init()` was called, but new handlers call `updateHash()` directly.
- This will cause runtime exceptions (likely 500s) and prevent validators from persisting L2PS hash updates.
- Impact: L2PS hash consensus/monitoring becomes non-functional even when networks are joined.
Agent Prompt
### Issue description
`L2PSHashes.updateHash()` throws unless `L2PSHashes.init()` has been called, but new code paths call `updateHash()` directly.
### Issue Context
This breaks L2PS hash-update handling at runtime.
### Fix Focus Areas
- src/libs/blockchain/l2ps_hashes.ts[33-104]
- src/libs/network/endpointHandlers.ts[843-863]
- src/libs/omniprotocol/protocol/handlers/l2ps.ts[397-405]
- src/index.ts[455-875]
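One way to avoid the hard failure described above is to make the guard initialize lazily instead of throwing. This is a sketch, not the repository's actual code; `Repo` is a stand-in for the TypeORM repository that `L2PSHashes` wraps.

```typescript
// Sketch: lazy initialization instead of a throwing guard, so handlers
// that call updateHash() directly cannot hit a runtime 500.
type Repo = { save(uid: string, hash: string): void }

class LazyHashStore {
    private static repo: Repo | null = null

    static init(repo: Repo): void {
        this.repo = repo
    }

    private static ensureInitialized(): void {
        if (!this.repo) {
            // Auto-init with a safe default instead of throwing.
            this.init({ save: () => {} })
        }
    }

    static updateHash(uid: string, hash: string): void {
        this.ensureInitialized()
        this.repo!.save(uid, hash)
    }
}
```

Whether auto-init or an explicit `init()` call at startup is preferable depends on whether a silently-dropped write is acceptable; the alternative fix is simply to guarantee `init()` runs before any handler is registered.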
```typescript
constructor(data?: Partial<ITransaction>) {
    // Initialize with defaults or provided data
    Object.assign(this, {
        content: {
            from_ed25519_address: null,
            type: null,
            from: "",
            to: "",
            amount: null,
            data: [null, null],
            gcr_edits: [],
            nonce: null,
            timestamp: null,
            transaction_fee: {
                network_fee: null,
                rpc_fee: null,
                additional_fee: null,
            },
        },
        signature: null,
        ed25519_signature: null,
        hash: null,
        status: null,
        blockNumber: null,
        ...data,
    })
}
```
4. Null transaction defaults 🐞 Bug ✓ Correctness

- The Transaction constructor now assigns null defaults to fields later assumed to be non-null (e.g., `content.from_ed25519_address`, `signature`, `hash`).
- This conflicts with DB constraints (`transactions.from_ed25519_address` is non-null) and signature verification code that hex-decodes `from_ed25519_address`.
- Concrete breakage: the genesis transaction is created without setting `from_ed25519_address` and is added to the mempool; later persistence/validation paths can fail due to nulls.
Agent Prompt
### Issue description
`Transaction` constructor now initializes critical fields to `null`, but downstream code and DB schema assume non-null (notably `from_ed25519_address`). This can break signature verification and inserts into the `transactions` table.
### Issue Context
Genesis transaction creation does not set `from_ed25519_address` and is added to the mempool.
### Fix Focus Areas
- src/libs/blockchain/transaction.ts[45-80]
- src/model/entities/Transactions.ts[34-41]
- src/libs/blockchain/chain.ts[483-560]
- src/libs/blockchain/transaction.ts[171-260]
```typescript
case "getL2PSMempoolInfo": {
    // REVIEW: Phase 3c-1 - L2PS mempool info endpoint
    console.log("[L2PS] Received L2PS mempool info request")
    if (!data.l2psUid) {
        response.result = 400
        response.response = "No L2PS UID specified"
        break
    }

    try {
        // Get all processed transactions for this L2PS UID
        const transactions = await L2PSMempool.getByUID(data.l2psUid, "processed")

        response.result = 200
        response.response = {
            l2psUid: data.l2psUid,
            transactionCount: transactions.length,
            lastTimestamp: transactions.at(-1)?.timestamp ?? 0,
            oldestTimestamp: transactions.at(0)?.timestamp ?? 0,
        }
    } catch (error: any) {
        log.error("[L2PS] Failed to get mempool info:", error)
        response.result = 500
        response.response = "Failed to get L2PS mempool info"
        response.extra = error.message || "Internal error"
    }
    break
}

case "getL2PSTransactions": {
    // REVIEW: Phase 3c-1 - L2PS transactions sync endpoint
    console.log("[L2PS] Received L2PS transactions sync request")
    if (!data.l2psUid) {
        response.result = 400
        response.response = "No L2PS UID specified"
        break
    }

    try {
        // Optional timestamp filter for incremental sync
        const sinceTimestamp = data.since_timestamp || 0

        // Get all processed transactions for this L2PS UID
        let transactions = await L2PSMempool.getByUID(data.l2psUid, "processed")

        // Filter by timestamp if provided (incremental sync)
        if (sinceTimestamp > 0) {
            transactions = transactions.filter(tx => tx.timestamp > sinceTimestamp)
        }

        // Return encrypted transactions (validators never see this)
        // Only L2PS participants can decrypt
        response.result = 200
        response.response = {
            l2psUid: data.l2psUid,
            transactions: transactions.map(tx => ({
                hash: tx.hash,
                l2ps_uid: tx.l2ps_uid,
                original_hash: tx.original_hash,
                encrypted_tx: tx.encrypted_tx,
                timestamp: tx.timestamp,
                block_number: tx.block_number,
            })),
            count: transactions.length,
        }
    } catch (error: any) {
        log.error("[L2PS] Failed to get transactions:", error)
        response.result = 500
        response.response = "Failed to get L2PS transactions"
        response.extra = error.message || "Internal error"
    }
    break
}
```
5. Unauth L2PS tx export 🐞 Bug ⛨ Security

- New nodecall endpoints return L2PS mempool contents (including `encrypted_tx`) without verifying the requester is an L2PS participant or otherwise authorized.
- Even if the ciphertext is opaque, this leaks metadata and enables bandwidth/CPU amplification by repeatedly requesting large mempools.
- These methods are not listed as protected endpoints in the server RPC layer.
Agent Prompt
### Issue description
Nodecall endpoints export L2PS mempool data without verifying the requester is authorized/participant.
### Issue Context
`manageNodeCall` currently has no explicit access-control checks around L2PS sync endpoints.
### Fix Focus Areas
- src/libs/network/manageNodeCall.ts[736-808]
- src/libs/network/server_rpc.ts[127-141]
- src/libs/l2ps/L2PSConcurrentSync.ts[1-200]
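The kind of gating this comment asks for can be sketched as follows. All names here are hypothetical illustrations, not the node's actual API: the idea is simply to refuse to export mempool contents unless the requester is a known participant of that L2PS UID.

```typescript
// Hypothetical participant check in front of the L2PS sync endpoints.
class L2PSAccessControl {
    // uid -> set of authorized participant addresses
    private participants = new Map<string, Set<string>>()

    addParticipant(uid: string, address: string): void {
        if (!this.participants.has(uid)) this.participants.set(uid, new Set())
        this.participants.get(uid)!.add(address)
    }

    // Gate used before returning mempool data: only participants may read it.
    authorize(uid: string, requester: string): { result: number; response: string } {
        const members = this.participants.get(uid)
        if (!members || !members.has(requester)) {
            return { result: 403, response: "Not a participant of this L2PS network" }
        }
        return { result: 200, response: "OK" }
    }
}
```

In the real node the participant set would come from the L2PS network configuration, and the requester identity would have to be authenticated (e.g., a signed request), since an unauthenticated claimed address provides no protection.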
```typescript
// Validate sender is part of the L2PS network
const parallelNetworks = ParallelNetworks.getInstance()
const l2psInstance = await parallelNetworks.getL2PS(l2psUid)

if (!l2psInstance) {
    response.result = 403
    response.response = "Not participant in L2PS network"
    response.extra = `L2PS network ${l2psUid} not found or not joined`
    return response
}

// REVIEW: Store hash update for validator consensus (Phase 3b)
// Validators store ONLY UID → hash mappings (content blind)
try {
    await L2PSHashes.updateHash(
        l2psHashPayload.l2ps_uid,
        l2psHashPayload.consolidated_hash,
        l2psHashPayload.transaction_count,
        BigInt(tx.blockNumber), // Now guaranteed to exist due to validation above
    )
```
6. Unauth L2PS hash update 🐞 Bug ⛨ Security

- `handleL2PSHashUpdate` claims to validate the sender, but only checks that *this node* has joined the L2PS network.
- No verification is performed that `tx.content.from` belongs to the L2PS participant set, nor that the tx signature binds sender to payload.
- Impact: any peer can submit `l2ps_hash_update` with an arbitrary hash/count and poison validator state (even before the `L2PSHashes.init()` issue is fixed).
Agent Prompt
### Issue description
`handleL2PSHashUpdate` does not actually authenticate the sender as an L2PS participant.
### Issue Context
Current check only verifies that the local node has joined the L2PS network.
### Fix Focus Areas
- src/libs/network/endpointHandlers.ts[805-889]
- src/libs/l2ps/parallelNetworks.ts[1-200]
- src/libs/blockchain/transaction.ts[171-260]
Actionable comments posted: 12
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
.env.example (1)

45-49: ⚠️ Potential issue | 🟡 Minor

Add documentation to .env.example explaining the rate limit interaction.

Both the per-IP and per-identity limits are enforced simultaneously using AND logic: a request must pass both checks to be allowed. This means multiple identities from the same IP cannot collectively exceed the per-IP limit of 100 req/s, even though the per-identity limit is set to 200 req/s. Add a comment to `.env.example` clarifying that these limits apply together, not independently.

Suggested documentation update

```bash
# OmniProtocol Rate Limiting
# Note: Both per-IP and per-identity limits are enforced simultaneously (AND logic).
# A request must satisfy both limits to proceed. Multiple identities from the same IP
# cannot collectively exceed the per-IP limit.
OMNI_RATE_LIMIT_ENABLED=true
OMNI_MAX_CONNECTIONS_PER_IP=10
OMNI_MAX_REQUESTS_PER_SECOND_PER_IP=100
OMNI_MAX_REQUESTS_PER_SECOND_PER_IDENTITY=200
```

src/libs/network/endpointHandlers.ts (1)
322-331: ⚠️ Potential issue | 🟡 Minor

Biome: `var` declaration in switch clause leaks scope.

`var subnetResult` (Line 328) is accessible from other `case` branches. This was flagged by Biome (noSwitchDeclarations). The case block should be wrapped in braces like the adjacent `l2psEncryptedTx` case already is.

Proposed fix

```diff
-            case "subnet":
+            case "subnet": {
                 payload = tx.content.data
                 log.debug(
                     "[handleExecuteTransaction] Subnet payload: " + JSON.stringify(payload[1]),
                 )
-                var subnetResult = await ServerHandlers.handleSubnetTx(
+                const subnetResult = await ServerHandlers.handleSubnetTx(
                     tx as L2PSTransaction,
                 )
                 result.response = subnetResult
                 break
+            }
```

src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts (1)
205-211: ⚠️ Potential issue | 🟠 Major

`handleRegister` is async but not awaited; errors in `deliverOfflineMessages` will be silently lost.

Line 344 calls `await this.deliverOfflineMessages(ws, clientId)` inside `handleRegister`, but `handleRegister` itself is fire-and-forget at Line 205. Any rejection (e.g., DB failure during offline delivery) will produce an unhandled promise rejection instead of being caught by `handleMessage`'s try-catch.

Proposed fix

```diff
-                this.handleRegister(
+                await this.handleRegister(
                     ws,
                     registerMessage.payload.clientId,
                     registerMessage.payload.publicKey,
                     registerMessage.payload.verification,
-                ) // REVIEW As this is async, is ok not to await it?
+                )
```
🤖 Fix all issues with AI agents
In `@scripts/setup-zk-all.ts`:
- Line 269: The script uses the shell command string `circom ${circuitPath}
--r1cs --wasm --sym -o ${outputDir} -l ${circomlibPath}` in
scripts/setup-zk-all.ts but devDependencies only include `circom2`; update the
invocation to use `circom2` consistently (replace `circom` with `circom2`
wherever circuits are compiled, e.g., the L2PS compile command that references
circuitPath/outputDir/circomlibPath) or alternatively add the `circom` package
to devDependencies and update package.json; ensure all compile calls (identity
and L2PS) use the same installed compiler to avoid "command not found" at
runtime.
In `@src/features/zk/keys/verification_key.json`:
- Around line 38-51: vk_delta_2 in this verification_key.json is a copy of
vk_gamma_2 which breaks Groth16 security; replace vk_delta_2 with the correct,
independent group element values (notably the three coordinate pairs under the
symbol vk_delta_2) by either copying the correct vk_delta_2 from
verification_key_merkle.json or re-running the trusted setup/regeneration that
produced the verification key so vk_delta_2 is distinct from vk_gamma_2; ensure
the updated vk_delta_2 contains three valid pairs of field element strings
matching the format used elsewhere in the file.
In `@src/features/zk/proof/BunSnarkjsWrapper.ts`:
- Line 20: The call to getCurveFromName is using an options object instead of
the boolean singleThread positional param; update the call (where
BunSnarkjsWrapper or any usage references getCurveFromName and
vk_verifier.curve) to call getCurveFromName(vk_verifier.curve, true) (i.e., pass
true as the second positional argument) so the signature matches ffjavascript
v0.3.1.
In `@src/features/zk/scripts/ceremony.ts`:
- Around line 350-358: The contrib command currently injects entropy via
command-line (-e="${entropy}") which exposes toxic-waste to process inspection;
remove the -e argument and invoke snarkjs in interactive mode so the child
process can prompt the user for entropy (i.e., call execSync(`${NPX} snarkjs
zkey contribute ${inputKeyPath} ${outputKeyPath} --name="${participantName}"`, {
stdio: "inherit", shell: "/bin/bash", env: process.env })) or alternatively gate
the non-interactive -e usage behind an explicit opt-in (e.g., a flag/environment
variable) and emit a clear security warning before executing; update the
try/catch around execSync (the block referencing execSync, NPX, entropy,
inputKeyPath, outputKeyPath, participantName) accordingly so no secret entropy
value is passed on the command line.
In `@src/index.ts`:
- Around line 881-914: There are duplicate SIGINT/SIGTERM handlers causing
process.exit(0) to run before gracefulShutdown(); remove the direct handlers
that stop DTR/L2PS and consolidate their logic into the existing
gracefulShutdown function: inside gracefulShutdown, add calls to
DTRManager.getInstance().stop() (guarded by getSharedState.PROD) and stop
L2PSHashService.getInstance().stop() and
L2PSBatchAggregator.getInstance().stop() with try/catch logging; then register
only one SIGINT and one SIGTERM listener that call await gracefulShutdown() (or
gracefulShutdown().catch(...)) and call process.exit(0) after gracefulShutdown
completes to ensure OmniProtocol, MCP, TLSNotary, Metrics, DTR, and L2PS are all
cleaned up.
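The consolidation described above can be sketched like this. Service names and the `Stoppable` interface are stand-ins for illustration; the real node would wire in DTRManager, L2PSHashService, L2PSBatchAggregator, and the other subsystems listed.

```typescript
// Sketch: one graceful shutdown path instead of duplicate signal handlers.
interface Stoppable {
    name: string
    stop(): void
}

async function gracefulShutdown(services: Stoppable[]): Promise<string[]> {
    const stopped: string[] = []
    for (const svc of services) {
        try {
            svc.stop()
            stopped.push(svc.name)
        } catch (err) {
            // One failing service must not abort the rest of the shutdown.
            console.error(`Failed to stop ${svc.name}:`, err)
        }
    }
    return stopped
}

// Register each signal exactly once; exit only after cleanup completes.
function registerSignalHandlers(services: Stoppable[]): void {
    for (const signal of ["SIGINT", "SIGTERM"] as const) {
        process.once(signal, () => {
            gracefulShutdown(services).then(() => process.exit(0))
        })
    }
}
```

`process.once` (rather than `process.on`) also guards against the handler running twice if the signal is delivered repeatedly during shutdown.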
In `@src/libs/blockchain/routines/Sync.ts`:
- Around line 119-133: The call to discoverL2PSParticipants is incorrect: remove
the second argument (do not pass getSharedState.l2psJoinedUids) since
discoverL2PSParticipants(peers: Peer[]) only accepts peers, and do not treat its
result as a Map because it returns Promise<void> and updates
l2psParticipantCache via addL2PSParticipant; replace the current .then(...)
handler with a simple await (or a fire-and-forget call that only logs
success/failure without inspecting a return value) and update the catch to log
the error; ensure you reference discoverL2PSParticipants, peers,
l2psParticipantCache and addL2PSParticipant when making the change.
In `@src/libs/blockchain/transaction.ts`:
- Around line 46-79: The class properties (content, signature,
ed25519_signature, hash, status, blockNumber) are declared with non-null
assertions (e.g., content!: TransactionContent) but the constructor assigns
nulls, which breaks TypeScript null-safety; change those declarations to
nullable types and initialize them to null or proper defaults (e.g., signature:
ISignature | null = null, ed25519_signature: string | null = null, hash: string
| null = null, status: string | null = null, blockNumber: number | null = null,
and adjust content to TransactionContent | null = <defaultContent> or a fully
non-null default), and keep the constructor’s Object.assign consistent with
those nullable types so no property uses the `!` assertion while being set to
null.
In `@src/libs/l2ps/L2PSConcurrentSync.ts`:
- Around line 36-53: The .then() callback on peer.call triggers
syncL2PSWithPeer(peer, uid) which returns a Promise but its rejection is not
handled; modify the peer.call().then(...) block to ensure any Promise returned
by syncL2PSWithPeer is caught (either await it inside an async callback or
append .catch(...) to handle errors) and log or ignore errors appropriately;
locate the peer.call invocation and adjust the handler around addL2PSParticipant
and syncL2PSWithPeer to attach error handling so rejections from
syncL2PSWithPeer do not become unhandled.
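The pattern the prompt describes can be sketched as follows; `syncL2PSWithPeer` here is a stand-in for the real function, reduced to a stub that fails on demand so the error path is visible.

```typescript
// Stand-in for the real sync function: rejects for a "bad" uid.
async function syncL2PSWithPeer(peer: string, uid: string): Promise<void> {
    if (uid === "bad") throw new Error(`sync failed for ${peer}/${uid}`)
}

const errors: string[] = []

// Fire-and-forget call with a .catch attached, so a rejection is
// logged instead of becoming an unhandled promise rejection.
function fireAndForgetSync(peer: string, uid: string): void {
    syncL2PSWithPeer(peer, uid).catch(err => {
        errors.push(String(err))
    })
}
```

The equivalent alternative is to make the enclosing callback `async` and `await syncL2PSWithPeer(...)` inside a try/catch; either way, every promise created in the `.then()` handler must have an error path.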
In `@src/libs/l2ps/L2PSTransactionExecutor.ts`:
- Around line 204-214: The code in L2PSTransactionExecutor (around where
nativePayload.args are unpacked and totalRequired is computed) converts amount
to BigInt without ensuring it's an integer, which throws for non-integer
numbers; update the validation to require an integer by adding
Number.isInteger(amount) (in the same check that currently uses Number.isFinite
and amount > 0) and only perform BigInt(amount) after that check, e.g. validate
via Number.isInteger(amount) and then compute totalRequired = BigInt(amount) +
BigInt(L2PS_TX_FEE); adjust any error message to indicate "must be a positive
integer" and keep references to getOrCreateL1Account, nativePayload.args, and
L2PS_TX_FEE.
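The guard described above matters because `BigInt(1.5)` throws a `RangeError`; the amount must be proven an integer before conversion. A minimal sketch (the fee value is a hypothetical stand-in for the real `L2PS_TX_FEE`):

```typescript
const L2PS_TX_FEE = 10n // hypothetical flat fee for illustration

// Validate before converting: BigInt() throws on non-integer numbers.
function computeTotalRequired(amount: number): bigint {
    if (!Number.isFinite(amount) || !Number.isInteger(amount) || amount <= 0) {
        throw new Error("amount must be a positive integer")
    }
    return BigInt(amount) + L2PS_TX_FEE
}
```

With the `Number.isInteger` check in place, malformed payloads produce a clean validation error instead of an uncaught `RangeError` inside the executor.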
In `@src/libs/l2ps/parallelNetworks.ts`:
- Around line 173-181: The privateKeyPath and ivPath are resolved from
nodeConfig.keys without enforcing the same basePath boundary check used for
configPath, allowing path traversal; update the resolution and validation so
that after resolving privateKeyPath and ivPath (from
nodeConfig.keys.private_key_path and nodeConfig.keys.iv_path) you verify each
absolute path begins with basePath (using the same startsWith(basePath) logic
used for configPath) and throw an error if not, ensuring only key files inside
the node's basePath for the given uid are read.
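The boundary check referred to above can be sketched in a few lines. This is an illustrative helper, not the repository's actual code; it mirrors the `startsWith(basePath)` logic the comment says is already applied to `configPath`.

```typescript
import * as path from "node:path"

// Resolve a configured path and reject anything that escapes basePath.
function resolveWithinBase(basePath: string, candidate: string): string {
    const resolved = path.resolve(basePath, candidate)
    // Appending path.sep prevents "/base-evil" from matching "/base".
    if (!resolved.startsWith(path.resolve(basePath) + path.sep)) {
        throw new Error(`Path escapes base directory: ${candidate}`)
    }
    return resolved
}
```

Running both `private_key_path` and `iv_path` through such a helper closes the traversal gap, since `path.resolve` normalizes `..` segments and absolute paths before the prefix check.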
In `@src/libs/l2ps/zk/circuits/l2ps_batch_10.circom`:
- Around line 26-27: The current constraint using signal check and check <==
sender_after * sender_after does not enforce non-negativity; replace it with an
explicit range check using circomlib's Num2Bits: add the Num2Bits include,
instantiate Num2Bits with the chosen bitWidth (e.g., 64 or 128), connect its in
to sender_after, and ensure the recomposed value from its bits equals
sender_after (or wire its outBits as required) so sender_after is constrained to
the chosen non-negative range; remove the squaring constraint and keep other
equations (e.g., sender_after === sender_before - amount) intact so overflow
wrapping cannot be exploited.
In `@src/libs/network/server_rpc.ts`:
- Around line 334-381: The verifyProof RPC case currently calls
ProofVerifier.verifyIdentityAttestation which marks nullifiers as consumed in
the DB (non-transactional), creating a DoS vector; change the handler in the
"verifyProof" case to call the read-only verification method
ProofVerifier.verifyProofOnly (or another non-mutating method) instead of
verifyIdentityAttestation so nullifiers are not marked, and only use the
mutating verifyIdentityAttestation path behind proper
authentication/rate-limiting/consensus gating if committing is intended; update
instantiation (new ProofVerifier(...)) usage accordingly and ensure
Datasource/EntityManager are not used for writes in this endpoint.
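The read-only/mutating split the comment recommends can be sketched like this. The cryptographic verification is stubbed and the class is hypothetical; the point is only the state-change boundary: `verifyProofOnly` performs no writes, while the attestation path consumes a nullifier exactly once.

```typescript
// Sketch: separate read-only verification from the nullifier-consuming path.
class ProofVerifierSketch {
    private usedNullifiers = new Set<string>()

    private verifyCrypto(proofValid: boolean): boolean {
        return proofValid // stand-in for real Groth16/PLONK verification
    }

    // Safe for unauthenticated RPC: no state change, so it cannot be
    // abused to burn someone else's nullifier.
    verifyProofOnly(proofValid: boolean, nullifier: string): boolean {
        return this.verifyCrypto(proofValid) && !this.usedNullifiers.has(nullifier)
    }

    // Mutating path: only reachable behind auth/rate-limit/consensus gating.
    verifyIdentityAttestation(proofValid: boolean, nullifier: string): boolean {
        if (!this.verifyProofOnly(proofValid, nullifier)) return false
        this.usedNullifiers.add(nullifier) // consume exactly once
        return true
    }
}
```

This makes the DoS vector structural rather than procedural: the public endpoint is incapable of marking a nullifier, so no amount of spam can lock a legitimate user out of their attestation.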
🟠 Major comments (36)
scripts/generate_simple_test_proof.sh (1)

4-8: ⚠️ Potential issue | 🟠 Major

EXIT trap deletes proof files before the user can save them as fixtures.

The `trap cleanup EXIT` removes `test_proof_simple.json` and `test_public_simple.json` when the script exits, but lines 60-63 instruct the user to manually save those same files afterward. By the time the user sees the instructions, the files are already deleted.

Either exclude the proof/public files from cleanup, or integrate the fixture-saving step into the script itself before cleanup runs.

Proposed fix: exclude output files from cleanup

```diff
 cleanup() {
-    rm -f test_input_simple.json test_witness_simple.wtns test_proof_simple.json test_public_simple.json
+    rm -f test_input_simple.json test_witness_simple.wtns
 }
```

src/libs/l2ps/zk/snarkjs.d.ts (1)
6-78: ⚠️ Potential issue | 🟠 Major

Remove this custom type declaration and use `@types/snarkjs` instead.

The `@types/snarkjs` package exists on DefinitelyTyped and is the community-maintained standard for snarkjs TypeScript types. Install it as a dev dependency (`npm i -D @types/snarkjs`) and remove this custom `.d.ts` file to reduce maintenance burden and ensure compatibility with upstream updates.

src/libs/l2ps/zk/scripts/setup_all_batches.sh (1)
28-41: ⚠️ Potential issue | 🟠 Major

Missing integrity check for downloaded PTAU files.

The PTAU files are critical cryptographic material for the trusted setup. Downloading over HTTPS without verifying a known checksum (SHA-256) means a compromised CDN or MITM proxy could serve malicious files, undermining the entire ZK proof system. Consider adding checksum verification after download.

🛡️ Proposed fix

```diff
 download_ptau() {
     local size=$1
     local file="powersOfTau28_hez_final_${size}.ptau"
     local url="https://storage.googleapis.com/zkevm/ptau/$file"
     if [[ ! -f "$PTAU_DIR/$file" ]] || [[ $(stat -c%s "$PTAU_DIR/$file") -lt 1000000 ]]; then
         echo -e "${YELLOW}Downloading pot${size}...${NC}"
         rm -f "$PTAU_DIR/$file"
         curl -L -o "$PTAU_DIR/$file" "$url"
+        # Verify integrity (add known checksums for each PTAU size)
+        echo "Verifying checksum for pot${size}..."
+        local expected_checksum
+        case $size in
+            16) expected_checksum="<sha256_for_pot16>" ;;
+            17) expected_checksum="<sha256_for_pot17>" ;;
+            *) echo -e "${YELLOW}⚠ No checksum available for pot${size}${NC}"; return 0 ;;
+        esac
+        local actual_checksum
+        actual_checksum=$(sha256sum "$PTAU_DIR/$file" | cut -d' ' -f1)
+        if [[ "$actual_checksum" != "$expected_checksum" ]]; then
+            echo -e "\033[0;31m❌ Checksum mismatch for pot${size}! Deleting file.${NC}"
+            rm -f "$PTAU_DIR/$file"
+            return 1
+        fi
     else
         echo "pot${size} already exists"
     fi
     return 0
 }
```

scripts/generate_test_proof.sh (1)
4-8: ⚠️ Potential issue | 🟠 Major

Cleanup trap deletes proof files before the user can save fixtures.

The `trap cleanup EXIT` on line 8 removes `test_proof.json` and `test_public.json` as soon as the script exits — immediately after printing the "save the fixture" instructions on lines 99–100. The user never has a chance to run those commands. Either copy the files into the fixtures directory before the trap fires, or exclude the proof/public files from cleanup when generation succeeds.
Proposed fix: auto-save fixtures on success
```diff
 echo "✅ All done! Now run these commands to save the fixture:"
 echo ""
-echo " mkdir -p src/tests/fixtures"
-echo " cat test_proof.json test_public.json | jq -s '{proof: .[0], publicSignals: .[1]}' > src/tests/fixtures/valid_proof_fixture.json"
+mkdir -p src/tests/fixtures
+cat test_proof.json test_public.json | jq -s '{proof: .[0], publicSignals: .[1]}' > src/tests/fixtures/valid_proof_fixture.json
+echo " Saved to src/tests/fixtures/valid_proof_fixture.json"
 echo ""
 echo "Note: Temporary files will be cleaned up automatically on exit."
 echo ""
```

Also applies to: 97-102
src/libs/l2ps/zk/circuits/l2ps_batch_5.circom-26-27 (1)
26-27: ⚠️ Potential issue | 🟠 Major

The `check` signal is a dead constraint — it does not enforce non-negative balances.

`sender_after * sender_after` in a prime field is always satisfiable for any field element; squaring doesn't prove non-negativity. The `check` signal is computed but never compared against a meaningful bound.

If the intent is to ensure `sender_after >= 0` (i.e., prevent underflow), you need a range check via `Num2Bits(n)` to constrain `sender_after` to fit within `n` bits. Without this, a malicious prover could wrap around the field to fabricate balances.

If the batch aggregator is always trusted and this circuit only proves correct computation (not input validity), consider removing this dead signal to reduce confusion.
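The wraparound can be demonstrated with plain BigInt arithmetic. This is a standalone illustration, not circuit code: the modulus is the BN254 scalar field order commonly used by circom/snarkjs, and the variable names mirror the circuit's signals.

```typescript
// BN254 scalar field modulus (the field circom/snarkjs Groth16 circuits work in)
const p = 21888242871839275222246405745257275088548364400416034343698204186575808495617n

// Reduce into the canonical range [0, p)
const mod = (x: bigint): bigint => ((x % p) + p) % p

const senderBefore = 10n
const amount = 25n

// senderAfter = senderBefore - amount underflows, wrapping to p - 15:
// a perfectly valid field element, indistinguishable from a huge balance.
const senderAfter = mod(senderBefore - amount)

// The squaring "constraint" is satisfiable for this bogus balance too,
// so it rejects nothing:
const check = mod(senderAfter * senderAfter)
```

A `Num2Bits(64)` range check would reject `senderAfter` here, because `p - 15` does not fit in 64 bits.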
Option A: Remove the dead constraint
```diff
 receiver_after === receiver_before + amount;
-signal check;
-check <== sender_after * sender_after;
 component preHasher = Poseidon(2);
```

Option B: Add a proper range check

```diff
+include "bitify.circom";
+
 template BalanceTransfer() {
     ...
     receiver_after === receiver_before + amount;
-    signal check;
-    check <== sender_after * sender_after;
+    // Range check: sender_after must fit in 64 bits (non-negative)
+    component senderCheck = Num2Bits(64);
+    senderCheck.in <== sender_after;
+
+    // Range check: amount must fit in 64 bits (non-negative)
+    component amountCheck = Num2Bits(64);
+    amountCheck.in <== amount;

     component preHasher = Poseidon(2);
```

src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts-1150-1156 (1)
1150-1156: ⚠️ Potential issue | 🟠 Major

Missing rollback cases for ZK operations.

When `isRollback` is true (lines 1060–1062), the operation is flipped ("add" → "remove"). This means a rollback of a `zk_commitment` "add" would try to match a `zk_commitment` "remove", and a rollback of a `zk_attestation` "add" would match a `zk_attestation` "remove". Neither case exists in the switch, so rollbacks silently fall through to the `default` branch returning "Unsupported identity operation".

This means ZK commitments and attestations cannot be rolled back, which could leave orphaned state if a block is reverted.
Also applies to: 1164-1170
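The flip-and-dispatch behavior described above can be sketched as follows (illustrative names only, not the actual routine's API), showing why the "remove" branches must exist for a rollback to land anywhere:

```typescript
type Action = "add" | "remove"

// On rollback, the recorded action is inverted before dispatch
function effectiveAction(action: Action, isRollback: boolean): Action {
    return isRollback ? (action === "add" ? "remove" : "add") : action
}

function handleZkOp(
    op: "zk_commitment" | "zk_attestation",
    action: Action,
    isRollback: boolean,
): string {
    switch (`${op}:${effectiveAction(action, isRollback)}`) {
        case "zk_commitment:add": return "store commitment"
        case "zk_commitment:remove": return "delete commitment" // hit when rolling back an add
        case "zk_attestation:add": return "store attestation"
        case "zk_attestation:remove": return "delete attestation" // hit when rolling back an add
        default: return "Unsupported identity operation"
    }
}
```

Without the two "remove" cases, every rollback of an add falls into the default branch.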
src/libs/l2ps/L2PSTransactionExecutor.ts-65-80 (1)
65-80: ⚠️ Potential issue | 🟠 Major

Failed initialization is never retried.

If the IIFE on line 72 rejects (e.g., Datasource connection fails), `this.initPromise` remains set to the rejected promise. All subsequent `init()` calls will `await` the same rejected promise and fail permanently. Consider resetting `initPromise` on failure:

Proposed fix
```diff
 this.initPromise = (async () => {
-    const dsInstance = await Datasource.getInstance()
-    const ds = dsInstance.getDataSource()
-    this.l1Repo = ds.getRepository(GCRMain)
-    log.info("[L2PS Executor] Repository initialized (unified state mode)")
+    try {
+        const dsInstance = await Datasource.getInstance()
+        const ds = dsInstance.getDataSource()
+        this.l1Repo = ds.getRepository(GCRMain)
+        log.info("[L2PS Executor] Repository initialized (unified state mode)")
+    } catch (error) {
+        this.initPromise = null
+        throw error
+    }
 })()
```

src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts-976-998 (1)
976-998: ⚠️ Potential issue | 🟠 Major

TOCTOU: account read outside transaction isolation.

`ensureGCRForUser` (line 980) reads the account using its own repository/connection, not the `queryRunner.manager`. The subsequent `queryRunner.manager.save(account)` (line 998) writes within the transaction, but the read was not part of that transaction. A concurrent operation modifying the same account between the read and write could cause lost updates (e.g., points overwritten).

Use the transactional manager for both read and write:
Proposed fix
```diff
-const account = await ensureGCRForUser(editOperation.account)
+// Read within transaction to prevent TOCTOU
+let account = await queryRunner.manager.findOne(GCRMain, {
+    where: { pubkey: editOperation.account }
+})
+if (!account) {
+    account = await HandleGCR.createAccount(editOperation.account)
+}
```

Note: This requires importing `HandleGCR` (or adapting `ensureGCRForUser` to accept an `EntityManager`).

src/index.ts-888-894 (1)
888-894: ⚠️ Potential issue | 🟠 Major

L2PS `stop()` calls are not awaited — async cleanup may be incomplete.

Per the code snippets, both `L2PSHashService.stop(timeoutMs)` and `L2PSBatchAggregator.stop(timeoutMs)` are async. Without `await`, `process.exit(0)` on Line 896 terminates before they finish. This issue is moot once the duplicate handlers above are resolved and the stops move into `async gracefulShutdown`.

src/libs/consensus/v2/PoRBFT.ts-245-246 (1)
245-246: ⚠️ Potential issue | 🟠 Major

Wrap L2PS rollback in try-catch to prevent cascading crash during error recovery.

If `rollbackProofsForBlock` throws here, it propagates to the outer catch which calls `process.exit(1)`. A failure during rollback should be logged as a critical error but shouldn't prevent the remaining cleanup (the `finally` block still runs, but the `return` on line 248 is skipped).

Proposed fix
```diff
 // Also rollback any L2PS proofs that were applied
-await L2PSConsensus.rollbackProofsForBlock(blockRef)
+try {
+    await L2PSConsensus.rollbackProofsForBlock(blockRef)
+} catch (rollbackError) {
+    log.error(`[consensusRoutine] Failed to rollback L2PS proofs: ${rollbackError}`)
+}
```

src/features/zk/scripts/ceremony.ts-147-159 (1)
147-159: ⚠️ Potential issue | 🟠 Major

Corrupted state file silently treated as uninitialized — risk of ceremony re-initialization.

If `ceremony_state.json` exists but is corrupted (e.g., partial write), `loadCeremonyState` returns `null`. In `initCeremony`, this means the check on line 221 passes, and a new ceremony is initialized, overwriting the existing ceremony directory and potentially losing contributions.

Consider distinguishing "file doesn't exist" from "file exists but is corrupt":
Proposed fix
```diff
 function loadCeremonyState(): CeremonyState | null {
     if (!existsSync(STATE_FILE)) {
         return null
     }
     try {
         const content = readFileSync(STATE_FILE, "utf-8")
         return JSON.parse(content)
     } catch (err) {
-        warn(`Failed to parse ceremony state: ${err}`)
-        return null
+        error(`Ceremony state file exists but is corrupted: ${err}\nManually inspect or delete ${STATE_FILE} to proceed.`)
     }
 }
```

.serena/memories/l2ps_architecture.md-167-175 (1)
167-175: ⚠️ Potential issue | 🟠 Major

Security model claim may not hold: `getL2PSTransactions` is accessible to any peer.

Line 171 states "Validator Privacy Leak: IMPOSSIBLE — Validators never receive transaction content." However, the `getL2PSTransactions` NodeCall endpoint in `manageNodeCall.ts` (lines 765–808) returns encrypted transaction blobs to any calling peer without verifying L2PS participation. While the content is encrypted, validators can still harvest the ciphertext. Consider reconciling the documentation with the actual access controls or tightening the endpoint.

src/libs/l2ps/L2PSBatchAggregator.ts-535-549 (1)
535-549: ⚠️ Potential issue | 🟠 Major

ZK proof uses synthetic balances — the proof doesn't validate real state.

`senderBefore` is set to `amount` and `senderAfter` to `0`, making the constraint `senderAfter = senderBefore - amount` trivially true for any amount. The proof never checks actual account balances, so it only proves arithmetic consistency of a fabricated scenario.

If this is intentional as a stepping-stone, consider adding a `// TODO` or doc comment explaining that real balance lookups are needed before this proof carries security weight.

src/libs/network/manageNodeCall.ts-765-808 (1)
765-808: ⚠️ Potential issue | 🟠 Major

`getL2PSTransactions` exposes encrypted payloads to any peer without access control.

The comment on line 786 says "validators never see this", but this is a public NodeCall endpoint callable by any connected peer. There is no check that the requesting peer is an L2PS participant for the given `l2psUid`. This could let validators (or any peer) harvest encrypted transaction blobs, undermining the privacy separation described in the architecture docs.

Consider adding a participation check (e.g., verifying the caller's identity against known L2PS participants for that UID) before returning transaction data, or at minimum document why open access is acceptable here.
src/libs/l2ps/L2PSBatchAggregator.ts-523-533 (1)
523-533: ⚠️ Potential issue | 🟠 Major

`BigInt(Math.floor(Number(rawAmount)))` loses precision for large values.

Converting through `Number` first truncates integer precision beyond 2^53. Since this represents transaction amounts (potentially large integers), convert directly:

Proposed fix
```diff
-try {
-    amount = rawAmount !== undefined && rawAmount !== null
-        ? BigInt(Math.floor(Number(rawAmount)))
-        : 0n
-} catch {
-    amount = 0n
-}
+try {
+    amount = rawAmount !== undefined && rawAmount !== null
+        ? BigInt(String(rawAmount).split('.')[0])
+        : 0n
+} catch {
+    amount = 0n
+}
```

src/libs/l2ps/L2PSBatchAggregator.ts-473-486 (1)
473-486: ⚠️ Potential issue | 🟠 Major

HMAC key derived by slicing raw private key — use a proper KDF instead.
Lines 479–481 extract the first 64 hex characters of the raw private key as the HMAC key. This directly exposes partial private key material as the HMAC secret, and the key isn't domain-separated from the signing key.
Derive a dedicated HMAC key using HKDF (available in Node's `crypto` module) with a domain-specific info string, e.g.:

Proposed fix
```diff
-const hmacKey = Buffer.from(sharedState.keypair.privateKey as Uint8Array)
-    .toString("hex")
-    .slice(0, 64)
+const hmacKey = crypto.createHmac("sha256", "L2PS_BATCH_HMAC_KEY_DERIVE")
+    .update(Buffer.from(sharedState.keypair.privateKey as Uint8Array))
+    .digest()
```

Or better, use `crypto.hkdf` with a proper salt and info parameter.

scripts/setup-zk-all.ts-37-44 (1)
37-44: ⚠️ Potential issue | 🟠 Major

Missing SHA-256 checksums for L2PS Powers of Tau files.

The identity ptau source has a `sha256` checksum for supply-chain integrity verification, but `l2ps_16` and `l2ps_17` do not. Since `verifyPtauChecksum` silently skips verification when no checksum is provided (Line 83), these files are downloaded without any integrity guarantee.

src/features/zk/proof/BunSnarkjsWrapper.ts-27-32 (1)
27-32: 🛠️ Refactor suggestion | 🟠 Major

Duplicate `ZKProof` interface — also defined in `ProofVerifier.ts` (Lines 28-33).

Both files define identical `ZKProof` interfaces. Import from one canonical location to avoid divergence.

♻️ Proposed fix

```diff
-export interface ZKProof {
-    pi_a: string[]
-    pi_b: string[][]
-    pi_c: string[]
-    protocol: string
-}
+import type { ZKProof } from "./ProofVerifier"
```

And keep the canonical definition in `ProofVerifier.ts`.

src/features/zk/scripts/setup-zk.ts-18-19 (1)
18-19: ⚠️ Potential issue | 🟠 Major

Hardcoded `/usr/bin/npx` breaks portability.

This path won't exist on macOS (`/usr/local/bin/npx` or Homebrew paths), Windows, or nix-based setups. The unified script `scripts/setup-zk-all.ts` correctly resolves the local `node_modules/.bin/snarkjs` binary instead.

🔧 Proposed fix

```diff
-// npx path - hardcoded to /usr/bin/npx for reliability
-const NPX = "/usr/bin/npx"
+import { resolve } from "path"
+// Use local snarkjs binary from node_modules for cross-platform compatibility
+const SNARKJS = resolve(__dirname, "../../../node_modules/.bin/snarkjs")
```

Then replace `${NPX} snarkjs` usages with `${SNARKJS}`.

src/features/zk/merkle/updateMerkleTreeAfterBlock.ts-159-171 (1)
159-171: ⚠️ Potential issue | 🟠 Major

Rollback fails if no `MerkleTreeState` exists for the exact target block.

`findOne` at Line 160 looks for a record where `blockNumber === targetBlockNumber`. If the target block had no ZK commitments (and thus no tree state was saved for that block), the function throws on Line 168. During a chain reorg, the rollback target may be a block without commitments.

Consider finding the latest state at or before `targetBlockNumber` instead:

🔧 Proposed fix

```diff
 const targetState = await merkleStateRepo.findOne({
-    where: {
-        treeId: GLOBAL_TREE_ID,
-        blockNumber: targetBlockNumber,
-    },
+    order: { blockNumber: "DESC" },
+    where: {
+        treeId: GLOBAL_TREE_ID,
+    },
 })
+
+// Accept any state at or before the target block
+if (!targetState || targetState.blockNumber > targetBlockNumber) {
```

src/features/zk/scripts/setup-zk.ts-240-260 (1)
240-260: ⚠️ Potential issue | 🟠 Major

Generate verification keys for both circuits when both compile successfully.

The if-else-if pattern (lines 253–257) causes only `identity_with_merkle` keys to be generated when both circuits compile. However, `test_identity_verification.ts` depends on `verification_key.json` from the basic `identity` circuit, so skipping its key generation breaks that test. Both circuits should generate keys independently.

Generate keys for both circuits

```diff
-if (merkleCompiled) {
-    await generateKeys("identity_with_merkle")
-} else if (basicCompiled) {
-    await generateKeys("identity")
-} else {
+if (basicCompiled) {
+    await generateKeys("identity")
+}
+if (merkleCompiled) {
+    await generateKeys("identity_with_merkle")
+}
+if (!basicCompiled && !merkleCompiled) {
```

src/libs/network/endpointHandlers.ts-805-889 (1)
805-889: ⚠️ Potential issue | 🟠 Major

`handleL2PSHashUpdate` — solid validation, but the sender is never authenticated.

The method validates the payload structure, block number, and L2PS network participation. However, it does not verify the transaction's cryptographic signature or that `tx.content.from` is actually a member of the L2PS network. The `parallelNetworks.getL2PS(l2psUid)` call on Line 845 only checks if the local node has joined the L2PS network — it does not verify the sender's identity or membership. This means any node that knows an L2PS UID the local node participates in could submit arbitrary hash updates.

Consider adding sender signature verification (similar to what `handleExecuteTransaction` does for standard transactions) before storing the hash.

src/features/zk/types/index.ts-51-65 (1)
51-65: ⚠️ Potential issue | 🟠 Major

`siblings` type mismatch with `MerkleTreeManager.generateProof`.

`MerkleProofResponse.proof.siblings` is typed as `string[]` (Line 55), but `MerkleTreeManager.generateProof()` (in `src/features/zk/merkle/MerkleTreeManager.ts`, Line 156) returns `siblings: string[][]` (a nested array — each level's sibling is itself an array). This will cause a type error when mapping the proof into this response shape, or silently produce incorrect data if cast.

Align the types — likely `siblings` should be `string[][]` here.

Proposed fix

```diff
 proof: {
     /** Sibling hashes along the path from leaf to root */
-    siblings: string[]
+    siblings: string[][]
     /** Path indices (0 = left, 1 = right) */
     path_indices: number[]
```

src/libs/blockchain/transaction.ts-54-79 (1)
54-79: ⚠️ Potential issue | 🟠 Major

`Object.assign` with `...data` does shallow merge — partial `content` replaces defaults entirely.

If a caller provides `new Transaction({ content: { type: "native" } as any })`, `...data` will replace the entire default `content` object. Fields like `from`, `to`, `amount`, `gcr_edits`, and `transaction_fee` will be missing instead of defaulted. Consider using a deep merge for `content`, or documenting that callers must provide a complete `content` object if they override it.

Proposed fix using explicit deep merge for content

```diff
 constructor(data?: Partial<ITransaction>) {
+    const defaultContent = {
+        from_ed25519_address: null,
+        type: null,
+        from: "",
+        to: "",
+        amount: null,
+        data: [null, null],
+        gcr_edits: [],
+        nonce: null,
+        timestamp: null,
+        transaction_fee: {
+            network_fee: null,
+            rpc_fee: null,
+            additional_fee: null,
+        },
+    }
     Object.assign(this, {
-        content: {
-            from_ed25519_address: null,
-            type: null,
-            from: "",
-            to: "",
-            amount: null,
-            data: [null, null],
-            gcr_edits: [],
-            nonce: null,
-            timestamp: null,
-            transaction_fee: {
-                network_fee: null,
-                rpc_fee: null,
-                additional_fee: null,
-            },
-        },
+        content: data?.content
+            ? { ...defaultContent, ...data.content,
+                transaction_fee: { ...defaultContent.transaction_fee, ...data.content.transaction_fee } }
+            : defaultContent,
         signature: null,
         ed25519_signature: null,
         hash: null,
         status: null,
         blockNumber: null,
-        ...data,
+        ...(data ? { ...data, content: undefined } : {}),
     })
 }
```

Note: This is one approach. Alternatively, you could require callers to always provide a full `content` and not deep-merge at all, but that should be clearly documented.

src/libs/l2ps/L2PSProofManager.ts-79-94 (1)
79-94: ⚠️ Potential issue | 🟠 Major

Initialization failure is permanent — rejected promise is never cleared.

If `init()` fails (e.g., database temporarily unavailable), `this.initPromise` retains the rejected promise and `this.repo` stays null. Every subsequent call to `getRepo()` will re-await the same rejected promise and throw. The service cannot recover without a process restart.

🐛 Proposed fix: clear initPromise on failure

```diff
 private static async init(): Promise<void> {
     if (this.repo) return
     if (this.initPromise !== null) {
         await this.initPromise
         return
     }
     this.initPromise = (async () => {
-        const dsInstance = await Datasource.getInstance()
-        const ds = dsInstance.getDataSource()
-        this.repo = ds.getRepository(L2PSProof)
-        log.info("[L2PS ProofManager] Repository initialized")
+        try {
+            const dsInstance = await Datasource.getInstance()
+            const ds = dsInstance.getDataSource()
+            this.repo = ds.getRepository(L2PSProof)
+            log.info("[L2PS ProofManager] Repository initialized")
+        } catch (error) {
+            this.initPromise = null
+            throw error
+        }
     })()
     await this.initPromise
 }
```

src/libs/network/server_rpc.ts-89-122 (1)
89-122: ⚠️ Potential issue | 🟠 Major

Timer leak and dangling async work on timeout in `getMerkleTreeManager`.

When `Promise.race` resolves via the timeout branch, two issues arise:

- The `setTimeout` timer is never cleared on the success path, keeping it alive until it fires (minor).
- The original async IIFE continues running after the timeout rejects. If it eventually succeeds, it silently sets `globalMerkleManager`; if it fails, the rejection is unhandled (no `.catch` on the dangling promise).

🐛 Proposed fix: track the timer and the inner promise

```diff
-initializationPromise = Promise.race([
-    (async () => {
+let timeoutId: ReturnType<typeof setTimeout>
+const initPromise = (async () => {
     const db = await Datasource.getInstance()
     const dataSource = db.getDataSource()
-    // REVIEW: Create local instance, only assign to global after successful init
     const manager = new MerkleTreeManager(
         dataSource,
         ZK_MERKLE_TREE_DEPTH,
         ZK_MERKLE_TREE_ID,
     )
     await manager.initialize()
     log.info("✅ Global MerkleTreeManager initialized")
     globalMerkleManager = manager
     return globalMerkleManager
-    })(),
-    new Promise<MerkleTreeManager>((_, reject) =>
-        setTimeout(() => reject(new Error("Initialization timeout")), INIT_TIMEOUT_MS),
-    ),
-])
+})()
+
+initializationPromise = Promise.race([
+    initPromise,
+    new Promise<MerkleTreeManager>((_, reject) => {
+        timeoutId = setTimeout(() => reject(new Error("Initialization timeout")), INIT_TIMEOUT_MS)
+    }),
+]).finally(() => clearTimeout(timeoutId))
+
+// Prevent unhandled rejection from the dangling init promise on timeout
+initPromise.catch(() => {})
```

src/libs/l2ps/L2PSConsensus.ts-417-465 (1)
417-465: 🛠️ Refactor suggestion | 🟠 Major

Inefficient rollback: fetches all applied proofs then filters in JS; repeated dynamic imports inside loop.

Two issues in `rollbackProofsForBlock`:

- Lines 420-424: Fetches up to 1000 applied proofs across all blocks, then filters by `blockNumber` in JS. Push the `applied_block_number` filter into the database query for efficiency.
- Lines 457-459: `Datasource.getInstance()` and the dynamic import are called inside the loop for each proof. Hoist them before the loop.

♻️ Proposed fix

```diff
 static async rollbackProofsForBlock(blockNumber: number): Promise<void> {
     try {
-        // Get proofs that were applied in this block
-        const appliedProofs = await L2PSProofManager.getProofs(
-            "", // all L2PS networks
-            "applied",
-            1000
-        )
-
-        // Filter by block number and rollback in reverse order
-        const proofsToRollback = appliedProofs
-            .filter(p => p.applied_block_number === blockNumber)
-            .reverse()
+        // Query proofs applied in this specific block directly
+        const repo = await (await import("@/model/datasource")).default.getInstance()
+        const ds = repo.getDataSource()
+        const proofRepo = ds.getRepository((await import("@/model/entities/L2PSProofs")).L2PSProof)
+
+        const proofsToRollback = await proofRepo.find({
+            where: { status: "applied" as any, applied_block_number: blockNumber },
+            order: { created_at: "DESC" },
+        })

         log.info(`[L2PS Consensus] Rolling back ${proofsToRollback.length} proofs for block ${blockNumber}`)

         for (const proof of proofsToRollback) {
             // ... rollback edits ...
-            const repo = await (await import("@/model/datasource")).default.getInstance()
-            const ds = repo.getDataSource()
-            const proofRepo = ds.getRepository((await import("@/model/entities/L2PSProofs")).L2PSProof)
-
             await proofRepo.update(proof.id, { status: "pending", applied_block_number: null, processed_at: null })
         }
```

src/libs/l2ps/L2PSHashService.ts-139-166 (1)
139-166: ⚠️ Potential issue | 🟠 Major

`ConnectionPool` not shut down during `stop()`.

The OmniProtocol `ConnectionPool` is initialized in `start()` but never closed in `stop()`. This leaks TCP connections and associated resources after shutdown.

🐛 Proposed fix: shut down the connection pool on stop

```diff
 if (this.isGenerating) {
     log.warning("[L2PS Hash Service] Forced shutdown - operation still in progress")
 }

+// Shut down OmniProtocol connection pool if initialized
+if (this.connectionPool) {
+    try {
+        await this.connectionPool.shutdown()
+    } catch (err) {
+        log.warning("[L2PS Hash Service] Error shutting down connection pool: " + err)
+    }
+    this.connectionPool = null
+}
+
 log.info("[L2PS Hash Service] Stopped successfully")
 this.logStatistics()
```

src/features/zk/proof/ProofVerifier.ts-251-280 (1)
251-280: ⚠️ Potential issue | 🟠 Major

Non-transactional path leaks nullifier existence before cryptographic verification.

The fallback path (lines 255-268) checks whether the nullifier is already used before verifying the proof cryptographically. An attacker can submit an invalid proof to probe whether a specific nullifier has been consumed: a "Nullifier already used" response vs. "Proof failed cryptographic verification" reveals the nullifier's state.

Swap the order to match the transactional path — verify cryptographically first, then check the nullifier.

🔒 Proposed fix: reorder non-transactional path

```diff
 // Non-transactional fallback (for backward compatibility)
-// Step 1: Check if nullifier already used
-const existing = await this.nullifierRepo.findOne({
-    where: { nullifierHash: nullifier },
-})
-
-if (existing) {
-    return {
-        valid: false,
-        reason: "Nullifier already used (double-attestation attempt)",
-        nullifier,
-        merkleRoot,
-        context,
-    }
-}
-
-// Step 2: Cryptographic verification
+// Step 1: Cryptographic verification (before any DB state checks)
 const cryptoValid = await ProofVerifier.verifyCryptographically(proof, publicSignals)

 if (!cryptoValid) {
     return {
         valid: false,
         reason: "Proof failed cryptographic verification",
         nullifier,
         merkleRoot,
         context,
     }
 }

-// Step 3: Validate Merkle root is current
+// Step 2: Check if nullifier already used
+const existing = await this.nullifierRepo.findOne({
+    where: { nullifierHash: nullifier },
+})
+
+if (existing) {
+    return {
+        valid: false,
+        reason: "Nullifier already used (double-attestation attempt)",
+        nullifier,
+        merkleRoot,
+        context,
+    }
+}
+
+// Step 3: Validate Merkle root is current
 const rootIsCurrent = await this.isMerkleRootCurrent(merkleRoot)
```

src/libs/l2ps/zk/L2PSBatchProver.ts-10-23 (1)
10-23: ⚠️ Potential issue | 🟠 Major

Global prototype mutation is risky.

Patching `EventTarget.prototype.dispatchEvent` at module-load time is a side effect that affects the entire process, not just this module. Any other code relying on standard `dispatchEvent` behavior could break silently. This also makes the module unsafe to import in contexts where Bun compatibility isn't needed (e.g., tests, tooling).

Consider isolating this patch behind a lazy init or guarding it more tightly (e.g., only when actually about to call snarkjs in Bun), or moving it to a dedicated Bun-shim entrypoint that is imported once at application startup.
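One possible shape for the lazy, guarded variant. This is a sketch: the `Bun` global detection and the elided patch body are assumptions, and in practice the shim would live in a dedicated entrypoint.

```typescript
// Apply the Bun-only shim lazily, once, right before the code path that
// needs it, instead of mutating the prototype at module load.
let eventPatchApplied = false

function ensureBunEventPatch(): void {
    const isBun = typeof (globalThis as { Bun?: unknown }).Bun !== "undefined"
    if (eventPatchApplied || !isBun) return
    // ...patch EventTarget.prototype.dispatchEvent here (omitted)...
    eventPatchApplied = true
}

// Call sites invoke ensureBunEventPatch() immediately before using snarkjs,
// so plain Node tests and tooling never observe the prototype mutation.
ensureBunEventPatch()
```

Under plain Node the guard is a no-op; under Bun the patch is applied exactly once, no matter how many call sites invoke it.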
src/libs/l2ps/parallelNetworks.ts-301-315 (1)
301-315: ⚠️ Potential issue | 🟠 Major

Unsigned transactions are decrypted with only a warning.

When `encryptedTx.signature` is falsy (line 313), the code logs a warning but proceeds to decrypt anyway. This means an attacker could submit unsigned (potentially forged) encrypted payloads and have them processed. Consider whether this should be a hard rejection for production.

src/libs/l2ps/parallelNetworks.ts-164-166 (1)
164-166: ⚠️ Potential issue | 🟠 Major

Config UID not validated against the directory UID.

`nodeConfig.uid` is checked for truthiness but never compared to the `uid` parameter. A config file in directory `foo/` could declare `uid: "bar"`, causing the L2PS instance to be stored under key `"foo"` in the maps while its internal config says `"bar"`. This mismatch can lead to confusing routing and lookup failures downstream.

Proposed fix

```diff
-if (!nodeConfig.uid || !nodeConfig.enabled) {
-    throw new Error(`L2PS config invalid or disabled: ${uid}`)
+if (!nodeConfig.uid || nodeConfig.uid !== uid || !nodeConfig.enabled) {
+    throw new Error(`L2PS config invalid, UID mismatch, or disabled: ${uid}`)
 }
```

src/libs/blockchain/l2ps_mempool.ts-273-278 (1)
273-278: ⚠️ Potential issue | 🟠 Major

`Date.now()` fallback corrupts the sequence space.

If the non-transactional `getNextSequenceNumber` path fails, the fallback assigns `Date.now()` (~1.7 trillion) as a sequence number. All subsequent sequence allocations for this UID will continue from that massive value, creating an irreversible gap. Since the transactional path (inside `saveTransactionWithRetry`) correctly re-throws on error and retries, this fallback should ideally also throw rather than silently corrupt state.

Proposed fix

```diff
 // If in a transaction, we want to rethrow to trigger a retry instead of falling back to timestamp
 if (manager) throw error

-// Fallback to timestamp-based sequence only for non-atomic reads
-return Date.now()
+// No safe fallback — propagate error
+throw error
 }
```

src/libs/blockchain/l2ps_mempool.ts-395-404 (1)
395-404: ⚠️ Potential issue | 🟠 Major

Error path returns a plausible hash — indistinguishable from valid state.

When `getHashForL2PS` fails (lines 400–404), it returns `sha256("L2PS_ERROR_...")`, which looks like any other valid hash. Downstream consensus code comparing hashes cannot tell the difference between a genuine state hash and an error artifact, potentially leading to false agreements or disagreements during validation.

Consider throwing the error instead, or returning a distinct sentinel (e.g., empty string) that callers can check:

Proposed fix — let the error propagate

```diff
 } catch (error: any) {
     log.error(`[L2PS Mempool] Error generating hash for UID ${l2psUid}, block ${blockNumber}:`, error)
-    // Return deterministic error hash
-    const blockSuffix = blockNumber === undefined ? "_ALL" : `_BLOCK_${blockNumber}`
-    return Hashing.sha256(`L2PS_ERROR_${l2psUid}${blockSuffix}`)
+    throw error
 }
```

src/libs/blockchain/l2ps_mempool.ts-294-315 (1)
294-315: 🛠️ Refactor suggestion | 🟠 Major

Null-safety: `this.repo` used without assertion after `ensureInitialized`.

Throughout this file (lines 310, 328, 378, 427, 461, 507, 553, 588, 609, etc.), `this.repo` is accessed directly after `ensureInitialized()`. Since `repo` is typed as `Repository<L2PSMempoolTx> | null`, strict TypeScript compilation would flag these as errors. The call on line 310 (`this.repo.find(options)`) would NPE if `repo` is somehow still null.

Use the non-null assertion operator (`this.repo!`) consistently (as done on line 195), or narrow the type via a local guard:

Example fix pattern

```diff
+private static getRepo(): Repository<L2PSMempoolTx> {
+    if (!this.repo) throw new Error("[L2PS Mempool] Not initialized")
+    return this.repo
+}
+
 // Then use:
-return await this.repo.find(options)
+return await this.getRepo().find(options)
```

src/libs/blockchain/l2ps_mempool.ts-195-198 (1)
195-198: 🛠️ Refactor suggestion | 🟠 Major

`(this as any)` cast to call own private static method is a code smell.

Line 197 bypasses TypeScript's access control to call `getNextSequenceNumber`. Since both methods are in the same class and both are `private static`, this cast shouldn't be necessary. This appears to be a workaround for a TS context issue inside the transaction callback. A cleaner approach is to reference the class by name.

Proposed fix

```diff
-const sequenceNumber = await (this as any).getNextSequenceNumber(l2psUid, transactionalEntityManager)
+const sequenceNumber = await L2PSMempool.getNextSequenceNumber(l2psUid, transactionalEntityManager)
```
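A small self-contained demonstration of why the cast is unnecessary: a private static member is reachable via the class name even from inside a callback, where `this` would otherwise be rebound (names are illustrative, not the mempool's actual API).

```typescript
class SequenceSource {
    private static next = 0

    private static bump(): number {
        return ++SequenceSource.next
    }

    static take(): number {
        // The arrow callback changes nothing here: referencing the class
        // by name reaches the private static member without any cast.
        return [0].map(() => SequenceSource.bump())[0]
    }
}
```

TypeScript's access control is per-class, not per-instance, so any code lexically inside the class body may call `SequenceSource.bump()` directly.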
```typescript
// Compile circuit
exec(
    `circom ${circuitPath} --r1cs --wasm --sym -o ${outputDir} -l ${circomlibPath}`,
```
🧩 Analysis chain
🏁 Script executed:
rg -n "circom2?\b" --type=ts -g '!node_modules/*'

Repository: kynesyslabs/node
Length of output: 1392
🏁 Script executed:
cat -n scripts/setup-zk-all.ts | sed -n '160,180p'

Repository: kynesyslabs/node
Length of output: 1009
🏁 Script executed:
cat -n scripts/setup-zk-all.ts | sed -n '240,280p'

Repository: kynesyslabs/node
Length of output: 1629
🏁 Script executed:
# Search for any comments explaining circom vs circom2 difference
rg -B2 -A2 "circom2" scripts/setup-zk-all.ts

Repository: kynesyslabs/node
Length of output: 267
🏁 Script executed:
# Check for any documentation or comments about circom vs circom2
rg -i "circom2|circom.*version|compiler.*version" --type=md --type=txt

Repository: kynesyslabs/node
Length of output: 118
🏁 Script executed:
# Check package.json and dependencies to understand circom vs circom2
cat -n package.json | head -50

Repository: kynesyslabs/node
Length of output: 4056
🏁 Script executed:
# Also check if there's a package-lock or bun.lock that shows versions
ls -la | grep -E "package|bun|lock"

Repository: kynesyslabs/node
Length of output: 186
🏁 Script executed:
# Check devDependencies for circom tools
grep -A 100 '"devDependencies"' package.json | grep -E "circom|circuit"

Repository: kynesyslabs/node
Length of output: 221
🏁 Script executed:
# Double-check if circom is available as a standalone or if it's part of circom2
rg "circom" package.json

Repository: kynesyslabs/node
Length of output: 534
🏁 Script executed:
# Check if circom might be installed as part of circom2 or aliased
grep -r "circom" scripts/setup-zk-all.ts | head -20

Repository: kynesyslabs/node
Length of output: 703
🏁 Script executed:
# Look for any npm scripts or setup that might install circom separately
cat -n scripts/setup-zk-all.ts | head -50

Repository: kynesyslabs/node
Length of output: 2144
🏁 Script executed:
# Check if circom2 provides both circom and circom2 commands
rg -B5 -A5 "circom\s" scripts/setup-zk-all.ts | grep -v "circom2"

Repository: kynesyslabs/node
Length of output: 1231
🏁 Script executed:
# Search for any shell wrapper or alias that maps circom to circom2
find . -name "*.sh" -o -name ".bashrc" -o -name ".zshrc" | head -20

Repository: kynesyslabs/node
Length of output: 728
Fix missing `circom` dependency or use `circom2` consistently.

Line 269 uses `circom` to compile L2PS circuits, but only `circom2` (^0.2.22) is installed in devDependencies. This will cause the L2PS setup to fail at runtime with "command not found". Either add `circom` to devDependencies or use `circom2` for both identity and L2PS circuits to match the installed compiler.
🤖 Prompt for AI Agents
In `@scripts/setup-zk-all.ts` at line 269, The script uses the shell command
string `circom ${circuitPath} --r1cs --wasm --sym -o ${outputDir} -l
${circomlibPath}` in scripts/setup-zk-all.ts but devDependencies only include
`circom2`; update the invocation to use `circom2` consistently (replace `circom`
with `circom2` wherever circuits are compiled, e.g., the L2PS compile command
that references circuitPath/outputDir/circomlibPath) or alternatively add the
`circom` package to devDependencies and update package.json; ensure all compile
calls (identity and L2PS) use the same installed compiler to avoid "command not
found" at runtime.
| "vk_delta_2": [ | ||
| [ | ||
| "10857046999023057135944570762232829481370756359578518086990519993285655852781", | ||
| "11559732032986387107991004021392285783925812861821192530917403151452391805634" | ||
| ], | ||
| [ | ||
| "8495653923123431417604973247489272438418190587263600148770280649306958101930", | ||
| "4082367875863433681332203403145435568316851327593401208105741076214120093531" | ||
| ], | ||
| [ | ||
| "1", | ||
| "0" | ||
| ] | ||
| ], |
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Locate and verify the verification key files
fd 'verification_key' --type f | head -20
```

Length of output: 152

🏁 Script executed:

```shell
#!/bin/bash
# Extract and compare vk_gamma_2 and vk_delta_2 from both files
echo "=== verification_key.json ==="
python3 << 'EOF'
import json
with open("src/features/zk/keys/verification_key.json") as f:
    vk = json.load(f)
gamma_2 = vk.get("vk_gamma_2")
delta_2 = vk.get("vk_delta_2")
print("vk_gamma_2:", gamma_2)
print("vk_delta_2:", delta_2)
print("Are they identical?", gamma_2 == delta_2)
EOF
echo ""
echo "=== verification_key_merkle.json ==="
python3 << 'EOF'
import json
with open("src/features/zk/keys/verification_key_merkle.json") as f:
    vk = json.load(f)
gamma_2 = vk.get("vk_gamma_2")
delta_2 = vk.get("vk_delta_2")
print("vk_gamma_2:", gamma_2)
print("vk_delta_2:", delta_2)
print("Are they identical?", gamma_2 == delta_2)
EOF
```

Length of output: 1581
vk_delta_2 is identical to vk_gamma_2 — critical security issue in Groth16 verification.
Lines 24–37 (vk_gamma_2) and lines 38–51 (vk_delta_2) are byte-for-byte identical. In Groth16, these must be independent group elements from the trusted setup. When they are equal, the verification equation degenerates and proofs may become trivially forgeable.
The verification_key_merkle.json file correctly has distinct values for vk_delta_2, confirming this is a copy-paste error specific to this file. Regenerate or correct this verification key immediately.
🤖 Prompt for AI Agents
In `@src/features/zk/keys/verification_key.json` around lines 38 - 51, vk_delta_2
in this verification_key.json is a copy of vk_gamma_2 which breaks Groth16
security; replace vk_delta_2 with the correct, independent group element values
(notably the three coordinate pairs under the symbol vk_delta_2) by either
copying the correct vk_delta_2 from verification_key_merkle.json or re-running
the trusted setup/regeneration that produced the verification key so vk_delta_2
is distinct from vk_gamma_2; ensure the updated vk_delta_2 contains three valid
pairs of field element strings matching the format used elsewhere in the file.
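A lightweight structural check catches this class of copy-paste error before a key is shipped. The sketch below uses a toy key and a hypothetical helper name; it only compares the JSON shapes, it does not validate the curve points themselves:

```typescript
// Groth16 sanity check: vk_gamma_2 and vk_delta_2 must be distinct
// group elements; equality indicates a broken or copy-pasted key.
function hasDistinctGammaDelta(vk: {
    vk_gamma_2: string[][]
    vk_delta_2: string[][]
}): boolean {
    return JSON.stringify(vk.vk_gamma_2) !== JSON.stringify(vk.vk_delta_2)
}

// Toy key reproducing the bug: delta copied from gamma.
const badVk = {
    vk_gamma_2: [["1", "2"], ["3", "4"], ["1", "0"]],
    vk_delta_2: [["1", "2"], ["3", "4"], ["1", "0"]],
}
console.log(hasDistinctGammaDelta(badVk)) // false -> reject this key
```

Running such a check in CI against every committed verification key would have flagged this file automatically.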
```typescript
// NOTE: Variable names use snake_case to match ZK-SNARK cryptographic notation
// (pi_a, pi_b, pi_c, vk_alpha_1, etc. are standard Groth16 protocol names)

import { Scalar, utils, getCurveFromName } from "ffjavascript"
```
🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's look at the file mentioned and find how getCurveFromName is actually used
cat -n src/features/zk/proof/BunSnarkjsWrapper.ts | head -50
```

Length of output: 2079

🏁 Script executed:

```shell
# Check the package.json to see what version of ffjavascript is being used
cat package.json | grep -A 2 -B 2 "ffjavascript"
```

Length of output: 254

🏁 Script executed:

```shell
# Search for all usages of getCurveFromName in the codebase
rg "getCurveFromName" -A 3 -B 1
```

Length of output: 2029

🌐 Web query:

ffjavascript 0.3.1 getCurveFromName singleThread option API documentation

💡 Result:

In ffjavascript v0.3.1, `getCurveFromName` is exported from `src/curves.js` with this signature:

```javascript
await getCurveFromName(name, singleThread, plugins)
```

- `name`: normalized (uppercased, non-alphanumerics removed) and mapped as `"BN128" | "BN254" | "ALTBN128"` → `buildBn128(...)`, `"BLS12381"` → `buildBls12381(...)`; otherwise throws `Error("Curve not supported: ...")` [1]
- `singleThread` (boolean): forwarded into `buildBn128`/`buildBls12381` [1]. When the underlying thread manager is built with `singleThread = true`, it does not spawn WebWorkers and forces `concurrency = 1`; when `false`, it spawns workers based on `navigator.hardwareConcurrency` (capped at 64) [2].
- `plugins`: optional extra parameter forwarded into `buildBn128`/`buildBls12381` [1] (plugin API is curve-specific).

Resource cleanup note (commonly relevant in Node/test runners): if you create a curve via `getCurveFromName("bn128")`, call `await curve.terminate()` when done to avoid leaked handles preventing process exit [3].

Sources: [1] [2] [3]
Fix getCurveFromName API call: pass singleThread as boolean, not options object.
In ffjavascript v0.3.1, getCurveFromName expects the signature getCurveFromName(name, singleThread, plugins) where singleThread is a boolean positional parameter. The code currently passes { singleThread: true } as an object, which is incorrect. Change to: getCurveFromName(vk_verifier.curve, true).
🤖 Prompt for AI Agents
In `@src/features/zk/proof/BunSnarkjsWrapper.ts` at line 20, The call to
getCurveFromName is using an options object instead of the boolean singleThread
positional param; update the call (where BunSnarkjsWrapper or any usage
references getCurveFromName and vk_verifier.curve) to call
getCurveFromName(vk_verifier.curve, true) (i.e., pass true as the second
positional argument) so the signature matches ffjavascript v0.3.1.
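The pitfall generalizes beyond ffjavascript: a truthy options object slips past loose truthiness checks but fails strict boolean comparisons. A self-contained illustration using a local stand-in with the same positional signature (this is not the real ffjavascript implementation):

```typescript
// Stand-in mimicking ffjavascript v0.3.1's
// getCurveFromName(name, singleThread, plugins) positional signature.
function getCurveStandIn(name: string, singleThread?: boolean): string {
    // Strict comparison, as a callee may legitimately do with a boolean flag.
    return singleThread === true ? `${name}:single-thread` : `${name}:workers`
}

// Wrong: the object is truthy, but it is not the boolean `true`.
const wrong = getCurveStandIn("bn128", { singleThread: true } as unknown as boolean)
// Right: pass the boolean positionally.
const right = getCurveStandIn("bn128", true)

console.log(wrong) // "bn128:workers"  (flag silently ineffective)
console.log(right) // "bn128:single-thread"
```

The TypeScript cast in the "wrong" call is exactly what an untyped or loosely typed import lets through silently.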
```typescript
try {
    execSync(
        `${NPX} snarkjs zkey contribute ${inputKeyPath} ${outputKeyPath} --name="${participantName}" -e="${entropy}"`,
        { stdio: "inherit", shell: "/bin/bash", env: process.env },
    )
    success("Contribution added successfully")
} catch (err) {
    error("Failed to add contribution")
}
```
🧩 Analysis chain

🌐 Web query:

snarkjs zkey contribute entropy stdin input documentation

💡 Result:

`snarkjs zkey contribute` takes an "entropy" string either interactively (it prompts you to type random text) or non-interactively via the `-e` flag.

- Interactive (reads from your terminal / stdin prompt):

  ```shell
  snarkjs zkey contribute circuit_0000.zkey circuit_0001.zkey --name="Alice" -v
  ```

  It will prompt you to "enter some random text" as extra entropy. [1]

- Non-interactive (recommended for scripting / CI):

  ```shell
  snarkjs zkey contribute circuit_0000.zkey circuit_0001.zkey --name="Alice" -v -e="some random entropy"
  ```

  `-e` supplies the entropy without prompting (the docs explicitly note this for `contribute` to be non-interactive, and the zkey flow shows `-e` usage). [1]

- If you specifically want to source entropy from stdin, feed it into `-e`:

  ```shell
  ENTROPY="$(cat)"  # reads all stdin
  snarkjs zkey contribute circuit_0000.zkey circuit_0001.zkey --name="Alice" -v -e="$ENTROPY"
  ```

  or, e.g.:

  ```shell
  snarkjs zkey contribute circuit_0000.zkey circuit_0001.zkey --name="Alice" -v \
    -e="$(head -c 64 /dev/urandom | base64)"
  ```

Sources: snarkjs README "Contribute to the phase 2 ceremony" (prompted entropy) and "Provide a second contribution" (`-e=...`). [1]

[1] (github.com)

Citations:

🏁 Script executed:

```shell
# Check if snarkjs is mentioned in package.json or if there are any docs
find . -type f \( -name "package.json" -o -name "*.md" \) | head -20 | xargs grep -l "snarkjs" 2>/dev/null || echo "No direct snarkjs references found in docs"
```

Length of output: 102

🏁 Script executed:

```shell
# Search for any snarkjs documentation or comments in the codebase
rg "snarkjs" --type ts --type js -A 3 -B 1
```

Length of output: 15588
Security: entropy via -e flag is visible to process inspection and undermines trusted setup security.
The random entropy is passed directly as a command-line argument (-e="${entropy}"), making it visible to any user/process on the system via ps aux or /proc/*/cmdline. For a trusted setup ceremony, this entropy constitutes "toxic waste" — its exposure completely undermines the security contribution of this participant.
Per snarkjs documentation, `zkey contribute` supports two modes:

- Interactive: prompts you to enter entropy manually (secure, but requires user interaction)
- Non-interactive with the `-e` flag: passes entropy as a command-line argument (exposed to process inspection)

Unfortunately, snarkjs does not support piping entropy via stdin in non-interactive mode. The `-e` flag is the only non-interactive method.
Mitigation options:
- Use interactive mode (requires user to type entropy manually) — secure but not practical for automation
- Accept the risk if this ceremony script is only run in isolated/trusted environments where other processes cannot inspect the process list
- Investigate if your snarkjs version supports environment variables for entropy (not confirmed in official docs)
This is a fundamental security limitation of using snarkjs zkey contribute non-interactively in a shell script on a shared system.
🤖 Prompt for AI Agents
In `@src/features/zk/scripts/ceremony.ts` around lines 350 - 358, The contrib
command currently injects entropy via command-line (-e="${entropy}") which
exposes toxic-waste to process inspection; remove the -e argument and invoke
snarkjs in interactive mode so the child process can prompt the user for entropy
(i.e., call execSync(`${NPX} snarkjs zkey contribute ${inputKeyPath}
${outputKeyPath} --name="${participantName}"`, { stdio: "inherit", shell:
"/bin/bash", env: process.env })) or alternatively gate the non-interactive -e
usage behind an explicit opt-in (e.g., a flag/environment variable) and emit a
clear security warning before executing; update the try/catch around execSync
(the block referencing execSync, NPX, entropy, inputKeyPath, outputKeyPath,
participantName) accordingly so no secret entropy value is passed on the command
line.
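The "explicit opt-in" mitigation can be sketched as a command builder that refuses CLI entropy unless the operator deliberately accepts the risk. The `ALLOW_CLI_ENTROPY` variable name is an assumption for illustration, not an existing flag:

```typescript
// Sketch: refuse to put entropy on the command line unless explicitly
// opted in (env var name is hypothetical).
function buildContributeCommand(
    inputKeyPath: string,
    outputKeyPath: string,
    participantName: string,
    entropy?: string,
): string {
    const base = `npx snarkjs zkey contribute ${inputKeyPath} ${outputKeyPath} --name="${participantName}"`
    if (entropy === undefined) {
        // Interactive mode: snarkjs prompts for entropy on the terminal,
        // so the secret never appears in /proc/*/cmdline.
        return base
    }
    if (process.env.ALLOW_CLI_ENTROPY !== "1") {
        throw new Error(
            "Refusing -e entropy: visible to process inspection. " +
                "Run interactively, or set ALLOW_CLI_ENTROPY=1 in an isolated environment.",
        )
    }
    console.warn("[ceremony] WARNING: entropy is visible via process inspection")
    return `${base} -e="${entropy}"`
}

console.log(buildContributeCommand("c_0000.zkey", "c_0001.zkey", "Alice"))
```

The interactive path pairs with `execSync(..., { stdio: "inherit" })` so the snarkjs prompt reaches the operator's terminal.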
```diff
 // Graceful shutdown handling for services
 process.on("SIGINT", () => {
-    console.log("[DTR] Received SIGINT, shutting down gracefully...")
-    // if (getSharedState.PROD) {
-    //     DTRManager.getInstance().stop()
-    // }
+    console.log("[Services] Received SIGINT, shutting down gracefully...")
+    if (getSharedState.PROD) {
+        DTRManager.getInstance().stop()
+    }
+
+    // Stop L2PS services if running
+    try {
+        L2PSHashService.getInstance().stop()
+        L2PSBatchAggregator.getInstance().stop()
+    } catch (error) {
+        console.error("[L2PS] Error stopping L2PS services:", error)
+    }

     process.exit(0)
 })

 process.on("SIGTERM", () => {
-    console.log("[DTR] Received SIGTERM, shutting down gracefully...")
-    // if (getSharedState.PROD) {
-    //     DTRManager.getInstance().stop()
-    // }
+    console.log("[Services] Received SIGTERM, shutting down gracefully...")
+    if (getSharedState.PROD) {
+        DTRManager.getInstance().stop()
+    }
+
+    // Stop L2PS services if running
+    try {
+        L2PSHashService.getInstance().stop()
+        L2PSBatchAggregator.getInstance().stop()
+    } catch (error) {
+        console.error("[L2PS] Error stopping L2PS services:", error)
+    }

     process.exit(0)
 })
```
Duplicate SIGINT/SIGTERM handlers — gracefulShutdown never executes.
Two sets of signal handlers are registered:

- Lines 882–914: Stop DTR + L2PS, then call `process.exit(0)` synchronously.
- Lines 974–975: Register `gracefulShutdown()`, which stops OmniProtocol, MCP, TLSNotary, and Metrics.

Because Node.js executes listeners in registration order, the first handler's `process.exit(0)` terminates the process before `gracefulShutdown` runs. This means OmniProtocol, MCP, TLSNotary, and Metrics are never cleaned up.
Consolidate into a single shutdown path:
Proposed fix — merge L2PS/DTR shutdown into `gracefulShutdown`
```diff
-// Graceful shutdown handling for services
-process.on("SIGINT", () => {
-    console.log("[Services] Received SIGINT, shutting down gracefully...")
-    if (getSharedState.PROD) {
-        DTRManager.getInstance().stop()
-    }
-
-    // Stop L2PS services if running
-    try {
-        L2PSHashService.getInstance().stop()
-        L2PSBatchAggregator.getInstance().stop()
-    } catch (error) {
-        console.error("[L2PS] Error stopping L2PS services:", error)
-    }
-
-    process.exit(0)
-})
-
-process.on("SIGTERM", () => {
-    console.log("[Services] Received SIGTERM, shutting down gracefully...")
-    if (getSharedState.PROD) {
-        DTRManager.getInstance().stop()
-    }
-
-    // Stop L2PS services if running
-    try {
-        L2PSHashService.getInstance().stop()
-        L2PSBatchAggregator.getInstance().stop()
-    } catch (error) {
-        console.error("[L2PS] Error stopping L2PS services:", error)
-    }
-
-    process.exit(0)
-})
```

Then add DTR/L2PS stops inside `gracefulShutdown`:
```diff
 async function gracefulShutdown(signal: string) {
     console.log(`\n[SHUTDOWN] Received ${signal}, shutting down gracefully...`)
     try {
+        // Stop DTR relay
+        if (getSharedState.PROD) {
+            DTRManager.getInstance().stop()
+        }
+
+        // Stop L2PS services
+        try {
+            await L2PSHashService.getInstance().stop()
+            await L2PSBatchAggregator.getInstance().stop()
+        } catch (error) {
+            console.error("[SHUTDOWN] Error stopping L2PS services:", error)
+        }
+
         // Stop OmniProtocol server if running
         if (indexState.omniServer) {
```

Also applies to: 974-975
🤖 Prompt for AI Agents
In `@src/index.ts` around lines 881 - 914, There are duplicate SIGINT/SIGTERM
handlers causing process.exit(0) to run before gracefulShutdown(); remove the
direct handlers that stop DTR/L2PS and consolidate their logic into the existing
gracefulShutdown function: inside gracefulShutdown, add calls to
DTRManager.getInstance().stop() (guarded by getSharedState.PROD) and stop
L2PSHashService.getInstance().stop() and
L2PSBatchAggregator.getInstance().stop() with try/catch logging; then register
only one SIGINT and one SIGTERM listener that call await gracefulShutdown() (or
gracefulShutdown().catch(...)) and call process.exit(0) after gracefulShutdown
completes to ensure OmniProtocol, MCP, TLSNotary, Metrics, DTR, and L2PS are all
cleaned up.
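The consolidated pattern can be demonstrated independently of the concrete services. The `Stoppable` interface below is hypothetical; the point is a single shutdown path that tolerates individual failures and exits only after cleanup completes:

```typescript
// One shutdown path: stop every service, tolerate per-service failures.
type Stoppable = { name: string; stop: () => void | Promise<void> }

async function shutdownAll(signal: string, services: Stoppable[]): Promise<string[]> {
    const stoppedNames: string[] = []
    console.log(`[SHUTDOWN] Received ${signal}, shutting down gracefully...`)
    for (const svc of services) {
        try {
            await svc.stop()
            stoppedNames.push(svc.name)
        } catch (err) {
            console.error(`[SHUTDOWN] Error stopping ${svc.name}:`, err)
        }
    }
    return stoppedNames
}

// In real wiring, register each signal exactly once, e.g.:
//   for (const sig of ["SIGINT", "SIGTERM"] as const)
//       process.once(sig, () => { shutdownAll(sig, services).finally(() => process.exit(0)) })

// Demo: one failing service does not block the others.
const stopped = await shutdownAll("SIGTERM", [
    { name: "dtr", stop: () => {} },
    { name: "l2ps-hash", stop: () => { throw new Error("boom") } },
    { name: "metrics", stop: () => {} },
])
console.log(stopped.join(",")) // "dtr,metrics"
```

`process.once` (rather than `process.on`) also guards against the shutdown path running twice if a second signal arrives mid-cleanup.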
```typescript
peer.call({
    method: "nodeCall",
    params: [{
        message: "getL2PSParticipationById",
        data: { l2psUid: uid },
        muid: `l2ps_discovery_${Date.now()}` // Unique ID
    }]
}).then(response => {
    if (response?.result === 200 && response?.response?.participating) {
        addL2PSParticipant(uid, peer.identity)
        log.debug(`[L2PS-SYNC] Discovered participant for ${uid}: ${peer.identity}`)

        // Opportunistic sync after discovery
        syncL2PSWithPeer(peer, uid)
    }
}).catch(() => {
    // Ignore errors during discovery
})
```
Unhandled promise rejection: syncL2PSWithPeer inside .then() is not caught.
peer.call().then() at Line 36 fires off syncL2PSWithPeer(peer, uid) at Line 49, which returns a Promise. If that promise rejects, there's no .catch() on it, resulting in an unhandled promise rejection that can crash the Node.js process (depending on the --unhandled-rejections flag).
🔧 Add catch handler

```diff
 .then(response => {
     if (response?.result === 200 && response?.response?.participating) {
         addL2PSParticipant(uid, peer.identity)
         log.debug(`[L2PS-SYNC] Discovered participant for ${uid}: ${peer.identity}`)
         // Opportunistic sync after discovery
-        syncL2PSWithPeer(peer, uid)
+        syncL2PSWithPeer(peer, uid).catch(err => {
+            log.debug(`[L2PS-SYNC] Opportunistic sync failed for ${uid}: ${err instanceof Error ? err.message : String(err)}`)
+        })
     }
 }).catch(() => {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
peer.call({
    method: "nodeCall",
    params: [{
        message: "getL2PSParticipationById",
        data: { l2psUid: uid },
        muid: `l2ps_discovery_${Date.now()}` // Unique ID
    }]
}).then(response => {
    if (response?.result === 200 && response?.response?.participating) {
        addL2PSParticipant(uid, peer.identity)
        log.debug(`[L2PS-SYNC] Discovered participant for ${uid}: ${peer.identity}`)
        // Opportunistic sync after discovery
        syncL2PSWithPeer(peer, uid).catch(err => {
            log.debug(`[L2PS-SYNC] Opportunistic sync failed for ${uid}: ${err instanceof Error ? err.message : String(err)}`)
        })
    }
}).catch(() => {
    // Ignore errors during discovery
})
```
🤖 Prompt for AI Agents
In `@src/libs/l2ps/L2PSConcurrentSync.ts` around lines 36 - 53, The .then()
callback on peer.call triggers syncL2PSWithPeer(peer, uid) which returns a
Promise but its rejection is not handled; modify the peer.call().then(...) block
to ensure any Promise returned by syncL2PSWithPeer is caught (either await it
inside an async callback or append .catch(...) to handle errors) and log or
ignore errors appropriately; locate the peer.call invocation and adjust the
handler around addL2PSParticipant and syncL2PSWithPeer to attach error handling
so rejections from syncL2PSWithPeer do not become unhandled.
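The failure mode and the fix are easy to reproduce in isolation (function and uid names below are hypothetical):

```typescript
// A fire-and-forget async call whose rejection is caught locally instead
// of escaping as an unhandled rejection.
async function syncWithPeerDemo(uid: string): Promise<void> {
    throw new Error(`sync failed for ${uid}`)
}

let logged = ""
// Without this .catch, the rejection is unhandled and crashes the process
// under Node's default --unhandled-rejections=throw behavior (Node >= 15).
syncWithPeerDemo("l2ps_demo").catch(err => {
    logged = err instanceof Error ? err.message : String(err)
})

// Let the event loop turn so the rejection handler has run.
await new Promise(resolve => setTimeout(resolve, 10))
console.log(logged) // "sync failed for l2ps_demo"
```

The same applies to any Promise-returning call fired inside a `.then()` callback: the outer `.catch()` only covers the outer chain, not detached inner promises.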
```typescript
const [to, amount] = nativePayload.args as [string, number]
const sender = tx.content.from as string

// Validate amount (type check and positive)
if (typeof amount !== 'number' || !Number.isFinite(amount) || amount <= 0) {
    return { success: false, message: "Invalid amount: must be a positive number" }
}

// Check sender balance in L1 state (amount + fee)
const senderAccount = await this.getOrCreateL1Account(sender)
const totalRequired = BigInt(amount) + BigInt(L2PS_TX_FEE)
```
BigInt(amount) throws on non-integer values.
The validation on line 208 checks `Number.isFinite(amount)` and `amount > 0` but does not verify `amount` is an integer. If `amount` is `1.5`, `BigInt(1.5)` throws a `RangeError` ("The number 1.5 cannot be converted to a BigInt because it is not an integer"), crashing the execution instead of returning a clean error.

Proposed fix

```diff
-    if (typeof amount !== 'number' || !Number.isFinite(amount) || amount <= 0) {
-        return { success: false, message: "Invalid amount: must be a positive number" }
+    if (typeof amount !== 'number' || !Number.isFinite(amount) || amount <= 0 || !Number.isInteger(amount)) {
+        return { success: false, message: "Invalid amount: must be a positive integer" }
     }
```

🤖 Prompt for AI Agents
In `@src/libs/l2ps/L2PSTransactionExecutor.ts` around lines 204 - 214, The code in
L2PSTransactionExecutor (around where nativePayload.args are unpacked and
totalRequired is computed) converts amount to BigInt without ensuring it's an
integer, which throws for non-integer numbers; update the validation to require
an integer by adding Number.isInteger(amount) (in the same check that currently
uses Number.isFinite and amount > 0) and only perform BigInt(amount) after that
check, e.g. validate via Number.isInteger(amount) and then compute totalRequired
= BigInt(amount) + BigInt(L2PS_TX_FEE); adjust any error message to indicate
"must be a positive integer" and keep references to getOrCreateL1Account,
nativePayload.args, and L2PS_TX_FEE.
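Both the crash and the guarded path are runnable in isolation (the helper name is hypothetical):

```typescript
// Guarded conversion: reject non-integers before BigInt() can throw.
function toAmountBigInt(amount: unknown): bigint | null {
    if (
        typeof amount !== "number" ||
        !Number.isFinite(amount) ||
        !Number.isInteger(amount) ||
        amount <= 0
    ) {
        return null // clean rejection, mirroring the executor's error path
    }
    return BigInt(amount)
}

console.log(toAmountBigInt(100)) // 100n
console.log(toAmountBigInt(1.5)) // null

// The unguarded conversion throws instead of rejecting cleanly:
let threw = false
try {
    BigInt(1.5)
} catch {
    threw = true
}
console.log(threw) // true
```

`Number.isInteger` also rejects `NaN` and `Infinity`, so it subsumes part of the finite check, but keeping both makes the intent explicit.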
```typescript
const privateKeyPath = path.resolve(
    process.cwd(),
    nodeConfig.keys.private_key_path,
)
const ivPath = path.resolve(process.cwd(), nodeConfig.keys.iv_path)

if (!fs.existsSync(privateKeyPath) || !fs.existsSync(ivPath)) {
    throw new Error(`L2PS key files not found for ${uid}`)
}
```
Path traversal risk via config-specified key paths.
The UID is validated against [A-Za-z0-9_-]+ (line 111) and the config path is verified to stay within basePath (line 147), but nodeConfig.keys.private_key_path and nodeConfig.keys.iv_path are resolved with process.cwd() and read with no path-boundary check. A crafted config.json could set private_key_path to "../../../etc/shadow" or an absolute path, letting an attacker exfiltrate arbitrary file contents (the bytes are passed to L2PS.create).
Apply the same startsWith(basePath) check used for configPath:
Proposed fix
```diff
 const privateKeyPath = path.resolve(
     process.cwd(),
     nodeConfig.keys.private_key_path,
 )
 const ivPath = path.resolve(process.cwd(), nodeConfig.keys.iv_path)

+// Validate key paths stay within the project directory
+const projectRoot = path.resolve(process.cwd())
+if (!privateKeyPath.startsWith(projectRoot) || !ivPath.startsWith(projectRoot)) {
+    throw new Error(`L2PS key paths must be within the project directory for ${uid}`)
+}
+
 if (!fs.existsSync(privateKeyPath) || !fs.existsSync(ivPath)) {
     throw new Error(`L2PS key files not found for ${uid}`)
 }
```

🤖 Prompt for AI Agents
In `@src/libs/l2ps/parallelNetworks.ts` around lines 173 - 181, The privateKeyPath
and ivPath are resolved from nodeConfig.keys without enforcing the same basePath
boundary check used for configPath, allowing path traversal; update the
resolution and validation so that after resolving privateKeyPath and ivPath
(from nodeConfig.keys.private_key_path and nodeConfig.keys.iv_path) you verify
each absolute path begins with basePath (using the same startsWith(basePath)
logic used for configPath) and throw an error if not, ensuring only key files
inside the node's basePath for the given uid are read.
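A standalone version of the boundary check is sketched below. Note one hardening detail on top of the proposed fix: a plain `startsWith(basePath)` accepts sibling directories sharing a prefix (`/srv/node-evil` starts with `/srv/node`), so the comparison should include a trailing separator. Paths are illustrative and POSIX separators are assumed:

```typescript
import * as path from "node:path"

// Resolve a config-supplied path and verify it stays under basePath.
function isInsideBase(basePath: string, candidate: string): boolean {
    const base = path.resolve(basePath)
    const resolved = path.resolve(base, candidate)
    // Compare against base + separator so sibling directories with a
    // shared prefix (e.g. "/srv/node-evil") are rejected.
    return resolved === base || resolved.startsWith(base + path.sep)
}

console.log(isInsideBase("/srv/node", "keys/l2ps.key"))        // true
console.log(isInsideBase("/srv/node", "../../etc/shadow"))     // false
console.log(isInsideBase("/srv/node", "/srv/node-evil/k.key")) // false
```

Because `path.resolve` treats an absolute `candidate` as authoritative, the third case also covers a config that supplies an absolute path outright.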
```circom
signal check;
check <== sender_after * sender_after;
```
check signal does not enforce non-negativity in a finite field.
Squaring sender_after in a prime field (BN254) does not prove the value is non-negative. Every field element has a valid square, including those representing "negative" values (i.e., values close to the field prime). An attacker can satisfy the constraint sender_after === sender_before - amount with amount > sender_before because arithmetic wraps modulo the field prime.
To enforce that sender_after is within a valid range (e.g., fits in 64 or 128 bits), you need an explicit range check using Num2Bits from circomlib:
🔒 Proposed fix: replace squaring with a range check

```diff
-    signal check;
-    check <== sender_after * sender_after;
+    // Enforce sender_after fits in 64 bits (i.e., non-negative and bounded)
+    component rangeCheck = Num2Bits(64);
+    rangeCheck.in <== sender_after;
```

You'll also need to add the include at the top:

```diff
 include "poseidon.circom";
+include "bitify.circom";
```
In `@src/libs/l2ps/zk/circuits/l2ps_batch_10.circom` around lines 26 - 27, The
current constraint using signal check and check <== sender_after * sender_after
does not enforce non-negativity; replace it with an explicit range check using
circomlib's Num2Bits: add the Num2Bits include, instantiate Num2Bits with the
chosen bitWidth (e.g., 64 or 128), connect its in to sender_after, and ensure
the recomposed value from its bits equals sender_after (or wire its outBits as
required) so sender_after is constrained to the chosen non-negative range;
remove the squaring constraint and keep other equations (e.g., sender_after ===
sender_before - amount) intact so overflow wrapping cannot be exploited.
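The wraparound is easy to see with bigint arithmetic over the BN254 scalar-field modulus (the field circom circuits compute in; toy balances otherwise):

```typescript
// BN254 scalar field modulus used by circom.
const P = 21888242871839275222246405745257275088548364400416034343698204186575808495617n

const mod = (x: bigint): bigint => ((x % P) + P) % P

const senderBefore = 10n
const amount = 1000n // spends more than the balance

// The constraint sender_after === sender_before - amount is satisfied
// by a huge field element, not rejected:
const senderAfter = mod(senderBefore - amount)
console.log(senderAfter === P - 990n) // true

// And that element has a perfectly valid square, so a constraint like
// check <== sender_after * sender_after rules nothing out.
const check = mod(senderAfter * senderAfter)
console.log(typeof check) // "bigint" -- constraint trivially satisfiable
```

A `Num2Bits(64)` range check rejects exactly these wrapped values, because `P - 990` does not fit in 64 bits.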
```typescript
// REVIEW: ZK proof verification endpoint for Phase 8
case "verifyProof": {
    try {
        const attestation = payload.params[0] as IdentityAttestationProof

        if (
            !attestation.proof ||
            !attestation.publicSignals ||
            !Array.isArray(attestation.publicSignals) ||
            attestation.publicSignals.length < 2
        ) {
            return {
                result: 400,
                response: "Invalid proof format: missing proof or insufficient public signals",
                require_reply: false,
                extra: null,
            }
        }

        const db = await Datasource.getInstance()
        const dataSource = db.getDataSource()
        const verifier = new ProofVerifier(dataSource)

        const verificationResult =
            await verifier.verifyIdentityAttestation(attestation)

        return {
            result: verificationResult.valid ? 200 : 400,
            response: {
                valid: verificationResult.valid,
                reason: verificationResult.reason,
                nullifier: attestation.publicSignals[0],
                merkleRoot: attestation.publicSignals[1],
            },
            require_reply: false,
            extra: null,
        }
    } catch (error) {
        log.error("[ZK RPC] Error verifying proof:", error)
        // REVIEW: Sanitize error response - don't expose internal details
        return {
            result: 500,
            response: "Internal server error",
            require_reply: false,
            extra: null,
        }
    }
}
```
verifyProof RPC marks nullifiers as used — potential DoS vector.
verifyIdentityAttestation (non-transactional path, no EntityManager provided) marks the nullifier as consumed in the database on success. Any unauthenticated RPC caller can submit a valid proof and permanently burn its nullifier without a block being produced, preventing the proof from being used in actual consensus.
If this endpoint is intended as a read-only verification check, use ProofVerifier.verifyProofOnly() instead and skip nullifier marking. If it's intended to commit the nullifier, it should be protected (authentication, rate limiting, or consensus gating).
🤖 Prompt for AI Agents
In `@src/libs/network/server_rpc.ts` around lines 334 - 381, The verifyProof RPC
case currently calls ProofVerifier.verifyIdentityAttestation which marks
nullifiers as consumed in the DB (non-transactional), creating a DoS vector;
change the handler in the "verifyProof" case to call the read-only verification
method ProofVerifier.verifyProofOnly (or another non-mutating method) instead of
verifyIdentityAttestation so nullifiers are not marked, and only use the
mutating verifyIdentityAttestation path behind proper
authentication/rate-limiting/consensus gating if committing is intended; update
instantiation (new ProofVerifier(...)) usage accordingly and ensure
Datasource/EntityManager are not used for writes in this endpoint.
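The recommended split can be sketched with hypothetical interfaces (the two method names come from the review; the wiring is illustrative, not the node's actual handler):

```typescript
interface AttestationVerifier {
    verifyProofOnly(att: unknown): Promise<boolean>           // no DB writes
    verifyIdentityAttestation(att: unknown): Promise<boolean> // consumes nullifier
}

async function handleVerifyProof(
    verifier: AttestationVerifier,
    attestation: unknown,
    authorized: boolean, // authenticated / consensus-gated caller
): Promise<{ result: number }> {
    // Unauthenticated callers never burn a nullifier.
    const valid = authorized
        ? await verifier.verifyIdentityAttestation(attestation)
        : await verifier.verifyProofOnly(attestation)
    return { result: valid ? 200 : 400 }
}

// Demo with a mock verifier: the public path leaves the nullifier intact.
let nullifierBurned = false
const mock: AttestationVerifier = {
    verifyProofOnly: async () => true,
    verifyIdentityAttestation: async () => { nullifierBurned = true; return true },
}
const res = await handleVerifyProof(mock, {}, false)
console.log(res.result, nullifierBurned) // 200 false
```

This keeps the DoS surface closed (public callers can check validity but not mutate state) while the committing path remains available to gated consumers.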




Summary by CodeRabbit
New Features
Documentation
Scripts