L2ps simplified #495
base: testnet
Conversation
- …t crashes and improve error logging
- …ure authentication and add node identity certificates.
- …ant discovery and delta sync, and enable DTR Manager.
- …monic, and update L2PS transaction execution and network management.
- …updating L2PS quickstart and Solana resolver.
L2ps implementation
Actionable comments posted: 14
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
.gitignore (2)
1-208: ⚠️ Potential issue | 🟡 Minor — Clean up numerous duplicate entries throughout the file.

Beyond the previously flagged duplicates (lines 114–127), many additional duplicates exist:

- `bridges_docs/` or `bridges_docs`: lines 7, 11, 131
- `data/chain.db`: lines 15, 22, 41, 42
- `yarn.lock`: lines 38, 48
- `publickey`: lines 49, 50
- `.env`: lines 26, 110
- `.DS_Store`: lines 32, 128
- `.serena-backup`: lines 129, 147, 149
- `PR_COMMENTS`: lines 130, 150
- `dist`: lines 133, 151
- `local_tests`: lines 135, 183
- `claudedocs`: lines 132, 184
- `temp`: lines 166, 186
- `omniprotocol_fixtures_scripts`: lines 136, 188
- `STORAGE_PROGRAMS_SPEC.md`: lines 172, 187
- `CLAUDE.sync-conflict-20250901-171031-7JPPSQB.md`: lines 169, 173
- `src/features/bridges/LiquidityTank_UserGuide.md`: lines 163, 182

Consolidate all duplicates by keeping only one entry per pattern. Consider grouping related patterns together for better maintainability.
177-177: ⚠️ Potential issue | 🔴 Critical — Remove overly broad ignore pattern that conflicts with tracked source files.

Line 177 ignores the entire `src/features/bridges/` directory, which conflicts with three tracked implementation files:

- `src/features/bridges/bridgeUtils.ts`
- `src/features/bridges/bridges.ts`
- `src/features/bridges/rubic.ts`

Either remove line 177 if bridges code should be tracked, or use more specific patterns (like lines 181–182) to exclude only subdirectories and documentation files while preserving the source files.
🤖 Fix all issues with AI agents
In `@scripts/l2ps-load-test.ts`:
- Around line 226-241: The code currently uses nonce++ when calling
buildL2PSTransaction which consumes the nonce even if confirm/broadcast fails;
change the logic in the transaction loop to pass the current nonce value (e.g.,
capture const currentNonce = nonce) to buildL2PSTransaction instead of using
nonce++, and only increment nonce (nonce++) after demos.broadcast(...) succeeds
(after successCount++), ensuring you keep buildL2PSTransaction, demos.confirm,
demos.broadcast and successCount references intact and only advance the nonce on
a successful broadcast.
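The shape of this fix can be sketched as follows; `DemosLike` and the builder signature are hypothetical stand-ins for the script's real `demos`/`buildL2PSTransaction` objects, so treat it as an illustration rather than drop-in code.

```typescript
// Sketch: capture the current nonce, advance it only after a successful broadcast.
// DemosLike and the builder callback are assumed interfaces, not the script's real types.
interface DemosLike {
    confirm: (tx: unknown) => Promise<unknown>
    broadcast: (validated: unknown) => Promise<void>
}

async function sendLoop(
    demos: DemosLike,
    buildL2PSTransaction: (nonce: number) => Promise<unknown>,
    count: number,
): Promise<{ successCount: number; nonce: number }> {
    let nonce = 0
    let successCount = 0
    for (let i = 0; i < count; i++) {
        const currentNonce = nonce // capture instead of nonce++
        try {
            const tx = await buildL2PSTransaction(currentNonce)
            const validated = await demos.confirm(tx)
            await demos.broadcast(validated)
            successCount++
            nonce++ // advance only once the broadcast succeeded
        } catch {
            // failed attempt: nonce is untouched, so the next build reuses it
        }
    }
    return { successCount, nonce }
}
```

The key property is that a failed confirm/broadcast leaves the local nonce where it was, so the next iteration rebuilds with the same value instead of leaving a gap.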
In `@scripts/l2ps-stress-test.ts`:
- Around line 37-46: The CLI's --concurrency value from parseArgs (CliOptions)
is parsed but never used; either implement parallel worker processing or remove
the flag. Fix option A: in the main/send loop (the serial transaction dispatch
code referenced around lines 263-319) replace the sequential send with a
bounded-concurrency worker pool that reads options.concurrency and runs at most
that many concurrent send tasks (use a simple async queue/worker function or a
p-limit style limiter) so multiple sendTx/sendBatch calls run in parallel;
ensure errors are aggregated and progress/counting logic remains correct. Fix
option B: if you prefer serial execution, remove concurrency from CliOptions and
parseArgs, drop any references and CLI flag parsing, and update help/output so
the flag is not advertised. Ensure the symbol parseArgs and the main send loop
are updated accordingly.
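Fix option A needs nothing more than a small worker pool; this dependency-free sketch assumes the dispatch loop can be expressed as an array of async closures (each wrapping a `sendTx`/`sendBatch` call).

```typescript
// Bounded-concurrency pool: at most `concurrency` tasks run at once, errors are
// aggregated per task instead of aborting the whole run.
async function runWithConcurrency<T>(
    tasks: Array<() => Promise<T>>,
    concurrency: number,
): Promise<Array<{ ok: boolean; error?: unknown }>> {
    const results: Array<{ ok: boolean; error?: unknown }> = new Array(tasks.length)
    let next = 0
    async function worker(): Promise<void> {
        // single-threaded JS: the check and increment below cannot interleave
        while (next < tasks.length) {
            const index = next++
            try {
                await tasks[index]()
                results[index] = { ok: true }
            } catch (error) {
                results[index] = { ok: false, error } // record failure, keep draining
            }
        }
    }
    const workers = Array.from({ length: Math.max(1, concurrency) }, () => worker())
    await Promise.all(workers)
    return results
}
```

A `p-limit` style limiter would express the same thing; the pool above just avoids the extra dependency while keeping progress counting trivial (count `ok` entries afterwards).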
- Around line 280-287: The code passes sender.nonce++ into sendTransaction which
increments the local nonce even if the send fails; change this to pass the
current nonce without mutating (use sender.nonce) and only increment
sender.nonce after sendTransaction completes successfully (e.g., await the call,
verify success/outerHash, then do sender.nonce++). Update the call site
referencing sendTransaction and the sender.nonce management so failures do not
advance the nonce.
In `@src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts`:
- Around line 404-443: The offline-path must roll back DB state if blockchain
storage fails: modify storeOfflineMessage to return the stored message ID
(and/or a success object) and implement a new
rollbackOfflineMessage(storedMessageId, senderId, targetId) that deletes the DB
entry and decrements the sender's offline-message counter (or reverses whatever
rate-limit side-effect storeOfflineMessage applied); in the catch for
storeMessageOnBlockchain call inside the handler that currently calls
storeOfflineMessage and then await storeMessageOnBlockchain, invoke
rollbackOfflineMessage(storedId, senderId, payload.targetId) and only then
sendError(ws, ...) (or alternatively change the error response to indicate
message queued despite blockchain failure if you choose non-fatal
behavior)—ensure MAX_OFFLINE_MESSAGES_PER_SENDER logic stays authoritative in
storeOfflineMessage and rollbackOfflineMessage adjusts that quota consistently.
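The store-then-rollback flow might look like the sketch below; the class and its in-memory maps are hypothetical stand-ins for the real DB and rate-limit state, kept only to show how the returned message ID drives the rollback.

```typescript
// In-memory stand-in for the offline store: storeOfflineMessage returns the
// stored ID, rollbackOfflineMessage deletes the entry and reverses the quota.
class OfflineStore {
    private messages = new Map<string, { senderId: string; targetId: string }>()
    private counts = new Map<string, number>()
    private nextId = 0

    storeOfflineMessage(senderId: string, targetId: string): string {
        const id = `msg_${this.nextId++}`
        this.messages.set(id, { senderId, targetId })
        this.counts.set(senderId, (this.counts.get(senderId) ?? 0) + 1)
        return id // the caller needs this ID to roll back
    }

    rollbackOfflineMessage(id: string, senderId: string): void {
        this.messages.delete(id)
        const count = (this.counts.get(senderId) ?? 1) - 1
        if (count <= 0) this.counts.delete(senderId)
        else this.counts.set(senderId, count)
    }

    countFor(senderId: string): number {
        return this.counts.get(senderId) ?? 0
    }
}

// Handler shape: DB write first, blockchain write second, rollback on failure.
async function handleOffline(
    store: OfflineStore,
    senderId: string,
    targetId: string,
    storeOnChain: () => Promise<void>,
): Promise<boolean> {
    const storedId = store.storeOfflineMessage(senderId, targetId)
    try {
        await storeOnChain()
        return true
    } catch {
        store.rollbackOfflineMessage(storedId, senderId) // undo DB entry + quota
        return false
    }
}
```

Because the quota mutation lives inside `storeOfflineMessage`/`rollbackOfflineMessage`, the `MAX_OFFLINE_MESSAGES_PER_SENDER` accounting stays authoritative in one place, as the comment asks.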
In `@src/index.ts`:
- Around line 881-914: There are duplicate SIGINT/SIGTERM handlers causing only
the last one (which calls gracefulShutdown()) to run and preventing DTRManager
and L2PS services from being stopped; consolidate by creating a single shutdown
handler that calls gracefulShutdown() and also invokes
DTRManager.getInstance().stop(), L2PSHashService.getInstance().stop(), and
L2PSBatchAggregator.getInstance().stop() (preserving the getSharedState.PROD
check for DTRManager), wrap each stop call in try/catch, await
gracefulShutdown() if it returns a Promise, and only call process.exit(0) after
all stops complete so all services (OmniProtocol/MCP/TLSNotary/Metrics plus
DTRManager and L2PS) are reliably shut down.
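A consolidated handler could take roughly this shape; the `Stoppable` interface is an assumption standing in for `DTRManager`, `L2PSHashService`, and `L2PSBatchAggregator`, and the injectable `exit` callback exists only so the flow is testable.

```typescript
// One shutdown path: stop every service (each wrapped in try/catch), then run
// the graceful shutdown, and only exit once everything has completed.
type Stoppable = { name: string; stop: () => void | Promise<void> }

async function shutdownAll(
    services: Stoppable[],
    gracefulShutdown: () => void | Promise<void>,
    exit: (code: number) => void,
): Promise<void> {
    for (const service of services) {
        try {
            await service.stop() // isolate each stop so one failure cannot block the rest
        } catch (error) {
            console.error(`Failed to stop ${service.name}:`, error)
        }
    }
    await gracefulShutdown() // works whether it returns void or a Promise
    exit(0) // only after all stops and the graceful shutdown finished
}
```

Registering this once per signal (e.g. `process.once("SIGINT", ...)` and `process.once("SIGTERM", ...)`) avoids the duplicate-handler problem the comment describes, where the last-registered handler silently wins.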
In `@src/libs/l2ps/L2PSBatchAggregator.ts`:
- Around line 527-533: The conversion loses precision because
Math.floor(Number(rawAmount)) can exceed Number.MAX_SAFE_INTEGER; update the
logic in L2PSBatchAggregator (the block assigning amount from rawAmount) to
perform safe BigInt conversion instead: if rawAmount is a bigint use it
directly, if it's a string validate it represents an integer (no decimal
point/whitespace) and call BigInt(rawAmount), if it's a number only accept it
when Number.isSafeInteger(rawAmount) (otherwise treat as invalid), and wrap the
conversion in try/catch to set amount = 0n on invalid input; reference the
variables/function names amount and rawAmount and replace the current
Math.floor(Number(...)) -> BigInt pattern with this validated path.
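The validated conversion path reads roughly like this (function name `toSafeAmount` is invented for illustration):

```typescript
// Validated conversion to bigint; anything that is not a clean integer maps to 0n.
function toSafeAmount(rawAmount: unknown): bigint {
    try {
        if (typeof rawAmount === "bigint") return rawAmount
        if (typeof rawAmount === "string") {
            // reject decimals, whitespace, and empty strings up front
            return /^-?\d+$/.test(rawAmount) ? BigInt(rawAmount) : 0n
        }
        if (typeof rawAmount === "number" && Number.isSafeInteger(rawAmount)) {
            return BigInt(rawAmount)
        }
        return 0n
    } catch {
        return 0n
    }
}
```

The string path is what recovers the precision: a value like `"900719925474099312345"` is far above `Number.MAX_SAFE_INTEGER`, so the old `Math.floor(Number(...))` round-trip would silently corrupt it, while `BigInt(rawAmount)` keeps it exact.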
In `@src/libs/l2ps/L2PSConsensus.ts`:
- Around line 417-492: Both rollbackProofsForBlock and getBlockStats currently
call L2PSProofManager.getProofs() with a global limit and then filter by
applied_block_number in-memory, which will miss proofs when a block exceeds the
cap; modify L2PSProofManager.getProofs to accept an optional
applied_block_number param and apply that filter at the query level, then update
L2PSConsensus.rollbackProofsForBlock and L2PSConsensus.getBlockStats to pass the
blockNumber into getProofs (removing reliance on the hard-coded 1000/10000 caps)
so the database returns only proofs for that block and prevents incomplete
rollbacks and undercounted stats.
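Pushing the filter into the query layer might look like this; the `Db` interface, table, and column names are assumptions mirroring the review's description, not the project's actual schema API.

```typescript
// Sketch: filter by applied_block_number in SQL instead of slicing a capped
// result set in memory, so a large block can never exceed the fetch limit.
interface ProofRow { id: number; applied_block_number: number }
interface Db { query: (sql: string, params: unknown[]) => ProofRow[] }

function getProofs(
    db: Db,
    options: { limit?: number; appliedBlockNumber?: number } = {},
): ProofRow[] {
    const clauses: string[] = []
    const params: unknown[] = []
    if (options.appliedBlockNumber !== undefined) {
        clauses.push("applied_block_number = ?")
        params.push(options.appliedBlockNumber)
    }
    const where = clauses.length > 0 ? ` WHERE ${clauses.join(" AND ")}` : ""
    params.push(options.limit ?? 100)
    return db.query(`SELECT * FROM l2ps_proofs${where} LIMIT ?`, params)
}
```

With this shape, `rollbackProofsForBlock` and `getBlockStats` pass the block number through and the hard-coded 1000/10000 caps stop mattering for correctness.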
In `@src/libs/l2ps/L2PSHashService.ts`:
- Around line 45-47: The GENERATION_INTERVAL constant in class L2PSHashService
is set directly from process.env.L2PS_HASH_INTERVAL_MS and can be NaN or too
small, causing CPU spin; change the initialization so you parse the env var,
check Number.isFinite/Number.isNaN (or isNaN) and clamp it to a sensible minimum
(e.g. MIN_INTERVAL = 1000 ms) and/or a safe default (e.g. DEFAULT_INTERVAL =
5000 ms) if parsing fails or the value is below the minimum; update the
GENERATION_INTERVAL assignment to use this validated/clamped value and keep the
env var name L2PS_HASH_INTERVAL_MS for configuration.
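A minimal parse-and-clamp helper for this, with the suggested 1000 ms minimum and 5000 ms default:

```typescript
// Validated interval for L2PS_HASH_INTERVAL_MS: fall back to a default on
// unparseable input, and clamp to a floor so a tiny value cannot spin the CPU.
const MIN_INTERVAL_MS = 1000
const DEFAULT_INTERVAL_MS = 5000

function resolveInterval(raw: string | undefined): number {
    const parsed = Number.parseInt(raw ?? "", 10)
    if (!Number.isFinite(parsed)) return DEFAULT_INTERVAL_MS // NaN, missing, garbage
    return Math.max(parsed, MIN_INTERVAL_MS)
}
```

The constant would then be initialized as `resolveInterval(process.env.L2PS_HASH_INTERVAL_MS)`, keeping the env var name unchanged.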
In `@src/libs/l2ps/zk/BunPlonkWrapper.ts`:
- Around line 14-20: This file imports ffjavascript (symbols: getCurveFromName,
utils, Scalar, unstringifyBigInts) and js-sha3 (symbol: keccak256 via jsSha3)
but those packages are missing from package.json; add them to your project's
dependencies by installing ffjavascript and js-sha3 (e.g., npm install
ffjavascript js-sha3 or pnpm add ffjavascript js-sha3 / bun add ffjavascript
js-sha3) so the imports in BunPlonkWrapper.ts resolve at runtime.
In `@src/libs/l2ps/zk/circuits/l2ps_batch_5.circom`:
- Around line 13-28: The BalanceTransfer template performs raw subtraction and
needs range and underflow/overflow guards: add Num2Bits(128) instances for
sender_before, sender_after, receiver_before, receiver_after, and amount to
constrain them to 128-bit ranges, then instantiate LessThan(128) and wire it as
LessThan128(amount, sender_before) to enforce amount <= sender_before (prevent
underflow) and also enforce LessThan128(receiver_before + amount, 2^128) by
constraining receiver_after with Num2Bits(128) or explicitly checking
receiver_after < 2^128 to prevent receiver overflow; update constraints around
sender_after === sender_before - amount and receiver_after === receiver_before +
amount accordingly so arithmetic only occurs after these bounds are proven.
In `@src/libs/l2ps/zk/L2PSBatchProver.ts`:
- Around line 247-266: The code registers the same request twice into
this.pendingRequests (first at this.pendingRequests.set(id, { resolve, reject })
then again with timeout-wrapping), which overwrites the original entry and
causes the timeout wrapper to be lost; fix by registering a single entry that
wraps the original resolve/reject with timeout handling: create the timeout via
setTimeout pointing to reject(new Error('Subprocess request timeout')) and then
set this.pendingRequests.set(id, { resolve: (value) => { clearTimeout(timeout);
resolve(value); }, reject: (error) => { clearTimeout(timeout); reject(error); }
}) so only one map entry exists and the timeout is cleared on either resolve or
reject (use symbols: this.pendingRequests, id, timeout, resolve, reject,
setTimeout).
In `@src/libs/network/manageNodeCall.ts`:
- Around line 833-839: The timestamp validation only rejects timestamps older
than 5 minutes but allows future timestamps; update the check around
requestTime/now (the parseInt(data.timestamp) validation in this block) to
enforce a two-sided bound: consider a window const MAX_SKEW = 5 * 60 * 1000 and
reject if requestTime < now - MAX_SKEW OR requestTime > now + MAX_SKEW, setting
response.result and response.response as done currently (e.g., 401 with an
explanatory "Request expired or timestamp outside allowed skew" message) so
signatures can't be extended by future timestamps.
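The two-sided bound reduces to a small predicate (hedged sketch; the 401 response wiring from the original code is omitted):

```typescript
// Reject timestamps outside [now - MAX_SKEW, now + MAX_SKEW], so a signature
// cannot be kept alive by stamping it with a future time.
const MAX_SKEW = 5 * 60 * 1000 // 5 minutes

function isTimestampValid(requestTime: number, now: number): boolean {
    if (!Number.isFinite(requestTime)) return false // parseInt failures land here
    return requestTime >= now - MAX_SKEW && requestTime <= now + MAX_SKEW
}
```

At the call site this would be invoked as `isTimestampValid(parseInt(data.timestamp), Date.now())`, with the existing 401 response on `false`.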
- Around line 876-878: The limit/offset variables in manageNodeCall.ts are taken
directly from data (const limit = data.limit || 100; const offset = data.offset
|| 0) and need validation to prevent oversized DB queries: parse data.limit and
data.offset as integers, enforce non-negative offset, clamp limit to a safe
MAX_LIMIT (e.g., define MAX_LIMIT = 1000) and a minimum of 1, and if parsing
fails fall back to the existing defaults; update the code that uses limit/offset
to rely on these sanitized values so queries never receive unbounded or negative
values.
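A sanitizer along the suggested lines (names `sanitizePagination`/`MAX_LIMIT` are illustrative):

```typescript
// Clamp limit to [1, MAX_LIMIT], force offset to be a non-negative integer,
// and fall back to the existing defaults when parsing fails.
const MAX_LIMIT = 1000
const DEFAULT_LIMIT = 100

function sanitizePagination(rawLimit: unknown, rawOffset: unknown): { limit: number; offset: number } {
    const parsedLimit = Number.parseInt(String(rawLimit), 10)
    const limit = Number.isInteger(parsedLimit)
        ? Math.min(Math.max(parsedLimit, 1), MAX_LIMIT)
        : DEFAULT_LIMIT
    const parsedOffset = Number.parseInt(String(rawOffset), 10)
    const offset = Number.isInteger(parsedOffset) && parsedOffset >= 0 ? parsedOffset : 0
    return { limit, offset }
}
```

The handler would then destructure `const { limit, offset } = sanitizePagination(data.limit, data.offset)` so the DB never sees unbounded or negative values.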
In `@src/libs/network/routines/transactions/handleL2PS.ts`:
- Around line 101-108: After extracting encryptedPayload (L2PSEncryptedPayload)
and originalHash, compute the canonical hash of the decrypted transaction bytes
and compare it to originalHash before proceeding; if they differ return
createErrorResponse(response, 400, "Decrypted transaction hash does not match
payload.original_hash"). Place this check immediately after you decrypt the
payload (use whatever decryption result variable you already have, e.g.,
decryptedTx or decryptedTxBytes) and ensure you use the same hashing
function/format used elsewhere for tx deduplication so the comparison is
canonical.
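The hash-consistency check boils down to something like the following; SHA-256 here is only a stand-in for whatever canonical tx-hash function the node actually uses for deduplication, which the real check must match.

```typescript
import { createHash } from "node:crypto"

// Stand-in canonical hash over the decrypted transaction bytes; the real code
// must reuse the exact hashing function/format used elsewhere for tx dedup.
function hashTxBytes(decryptedTxBytes: Uint8Array): string {
    return createHash("sha256").update(decryptedTxBytes).digest("hex")
}

// Run immediately after decryption: mismatch means the claimed original_hash
// does not describe the payload, and the handler should 400.
function verifyOriginalHash(decryptedTxBytes: Uint8Array, originalHash: string): boolean {
    return hashTxBytes(decryptedTxBytes) === originalHash.toLowerCase()
}
```

On a `false` result the handler returns the suggested `createErrorResponse(response, 400, ...)` before any further processing of the decrypted transaction.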
🟡 Minor comments (14)
src/libs/l2ps/L2PS_QUICKSTART.md-76-84 (1)
76-84: ⚠️ Potential issue | 🟡 Minor — Add language identifiers to fenced code blocks (markdownlint MD040).

This file has multiple fenced blocks without a language; adding `text` keeps markdownlint happy and improves readability.

🛠️ Proposed update: change each bare opening fence to a `text`-tagged fence. The affected blocks are the `src/libs/l2ps/zk/` directory tree (`keys/batch_5/` for 1-5 tx batches at ~37K constraints, `keys/batch_10/` for 6-10 tx batches at ~74K constraints, and `ptau/` for powers-of-tau files), the startup log sample (`[L2PS] Loaded network: testnet_l2ps_001`, `[L2PS Batch Aggregator] Started`), the transaction-flow diagram (encrypted user transactions at 1 DEM fee each entering the mempool, the batch aggregator aggregating GCR edits, generating a ZK proof, and creating one batch tx every 10 seconds, and consensus applying the GCR edits to L1), and the repository layout tree under `node/` (L2PS config and key material in `data/l2ps/testnet_l2ps_001/`, the POC app under `docs/poc-app/`, the `src/libs/l2ps/` sources, the `scripts/` test scripts, and `mnemonic.txt`).

Also applies to: 112-116, 198-212, 326-346
src/libs/l2ps/L2PS_QUICKSTART.md-156-162 (1)
156-162: ⚠️ Potential issue | 🟡 Minor — Normalize table formatting and wrap the bare URL (markdownlint MD058/MD060/MD034).

Add spaces around pipes, surround tables with blank lines, and wrap `http://127.0.0.1:53550` in backticks (or `<...>`). This resolves the markdownlint warnings and improves readability.

🛠️ Example fix (apply to all tables)

```diff
-| Feature | Description |
-|---------|-------------|
-| **Send L1/L2PS** | Toggle between public and private transactions |
-| **Transaction History** | View L1, L2PS, or All transactions |
-| **Learn Tab** | Interactive demos explaining L2PS |
-| **Privacy Demo** | Try authenticated vs unauthenticated access |
+| Feature | Description |
+| ------- | ----------- |
+| **Send L1/L2PS** | Toggle between public and private transactions |
+| **Transaction History** | View L1, L2PS, or All transactions |
+| **Learn Tab** | Interactive demos explaining L2PS |
+| **Privacy Demo** | Try authenticated vs unauthenticated access |

-| `--node <url>` | Node RPC URL | http://127.0.0.1:53550 |
+| `--node <url>` | Node RPC URL | `http://127.0.0.1:53550` |
```

Also applies to: 179-187, 216-221, 253-258, 266-273, 284-287
src/libs/identity/identity.ts-26-26 (1)

26-26: ⚠️ Potential issue | 🟡 Minor — Update import path to include the `.js` extension.

The `@scure/bip39@2.0.1` package exports wordlists with the `.js` extension. Change the import to:

```ts
import { wordlist } from "@scure/bip39/wordlists/english.js"
```

This same fix is needed in `src/libs/utils/showPubkey.ts` as well.

src/libs/blockchain/l2ps_mempool.ts-212-217 (1)
212-217: ⚠️ Potential issue | 🟡 Minor — Timestamp fallback for sequence numbers could cause gaps and collisions.

When `getNextSequenceNumber` fails, it falls back to `Date.now()`. This could:

- Create large gaps in sequence numbers (millisecond timestamps vs sequential integers)
- Cause collisions if called multiple times in the same millisecond

Consider throwing the error or using a more predictable fallback.

🛡️ Proposed fix

```diff
 } catch (error) {
     const errorMsg = error instanceof Error ? error.message : String(error)
     log.error(`[L2PS Mempool] Error getting next sequence number: ${errorMsg}`)
-    // Fallback to timestamp-based sequence
-    return Date.now()
+    // Re-throw to let caller handle - timestamp fallback causes sequence gaps
+    throw error
 }
```

src/libs/l2ps/L2PSProofManager.ts-35-45 (1)
35-45: ⚠️ Potential issue | 🟡 Minor — Use locale-independent comparison for deterministic hashing.

`localeCompare` behavior varies by runtime locale, which could produce different hashes across environments. For cryptographic determinism, use simple string comparison.

🔒 Proposed fix for deterministic sorting

```diff
 function deterministicStringify(obj: any): string {
     return JSON.stringify(obj, (key, value) => {
         if (value && typeof value === 'object' && !Array.isArray(value)) {
-            return Object.keys(value).sort((a, b) => a.localeCompare(b)).reduce((sorted: any, k) => {
+            return Object.keys(value).sort().reduce((sorted: any, k) => {
                 sorted[k] = value[k]
                 return sorted
             }, {})
         }
         return value
     })
 }
```

src/libs/l2ps/L2PSTransactionExecutor.ts-204-210 (1)
204-210: ⚠️ Potential issue | 🟡 Minor — Amount type validation may reject valid BigInt/string amounts.

The validation at Line 208 checks `typeof amount !== 'number'`, but transaction amounts in blockchain systems are often `BigInt` or string to handle large values. This could incorrectly reject valid transactions.

🛡️ Proposed fix for flexible amount handling

```diff
-    const [to, amount] = nativePayload.args as [string, number]
+    const [to, rawAmount] = nativePayload.args as [string, number | string | bigint]
     const sender = tx.content.from as string

-    // Validate amount (type check and positive)
-    if (typeof amount !== 'number' || !Number.isFinite(amount) || amount <= 0) {
-        return { success: false, message: "Invalid amount: must be a positive number" }
+    // Validate and normalize amount
+    let amount: bigint
+    try {
+        amount = BigInt(rawAmount)
+    } catch {
+        return { success: false, message: "Invalid amount: cannot convert to BigInt" }
+    }
+    if (amount <= 0n) {
+        return { success: false, message: "Invalid amount: must be positive" }
     }
```

src/libs/l2ps/zk/README.md-7-7 (1)
7-7: ⚠️ Potential issue | 🟡 Minor — Inconsistency between documented max batch size and file structure.

Line 7 states the system "Supports up to 10 transactions per batch", but:

- Line 84 shows a `batch_20/` directory in the file structure
- Line 94 documents performance for 20 tx batches

Please clarify the actual maximum supported batch size and update the documentation accordingly.

Also applies to: 84-84, 94-94
src/libs/network/routines/transactions/demosWork/handleStep.ts-45-46 (1)
45-46: ⚠️ Potential issue | 🟡 Minor — Double cast should be replaced with proper type assertion.

The `as unknown as L2PSTransaction` pattern is inconsistent with how other contexts (xm, web2, native) handle the same `task` variable using single type assertions. This suggests either:

- `step.content` needs explicit typing for the l2ps context, or
- The SDK's `L2PSTransaction` type is incompatible with `WorkStep.content`

Consider explicitly typing `task` based on the `context` value, or verifying that the single-cast pattern works for the l2ps case like it does for other handlers.

scripts/l2ps-load-test.ts-48-56 (1)
48-56: ⚠️ Potential issue | 🟡 Minor — Validate numeric CLI flags to avoid NaN/negative values.

`parseInt` can yield NaN or negative values, which later impact loop bounds and TPS math. Fail fast on invalid inputs to keep the run deterministic.

💡 Suggested fix

```diff
 } else if (arg === "--count" && argv[i + 1]) {
-    options.count = parseInt(argv[i + 1], 10)
+    const count = Number.parseInt(argv[i + 1], 10)
+    if (!Number.isInteger(count) || count < 1) {
+        throw new Error("--count must be a positive integer")
+    }
+    options.count = count
     i++
 } else if (arg === "--value" && argv[i + 1]) {
-    options.value = parseInt(argv[i + 1], 10)
+    const value = Number.parseInt(argv[i + 1], 10)
+    if (!Number.isInteger(value) || value < 0) {
+        throw new Error("--value must be a non-negative integer")
+    }
+    options.value = value
     i++
 } else if (arg === "--delay" && argv[i + 1]) {
-    options.delayMs = parseInt(argv[i + 1], 10)
+    const delay = Number.parseInt(argv[i + 1], 10)
+    if (!Number.isInteger(delay) || delay < 0) {
+        throw new Error("--delay must be a non-negative integer")
+    }
+    options.delayMs = delay
     i++
 } else if (arg === "--help") {
```

scripts/send-l2-batch.ts-325-327 (1)
325-327: ⚠️ Potential issue | 🟡 Minor — Validate `--value` before building transactions.

`Number(options.value)` can yield NaN or negative values; that will propagate into invalid transaction amounts.

💡 Suggested fix

```diff
     const amount = options.value ? Number(options.value) : 0
+    if (!Number.isFinite(amount) || amount < 0) {
+        throw new Error("--value must be a non-negative number")
+    }
```

scripts/l2ps-load-test.ts-83-89 (1)
83-89: ⚠️ Potential issue | 🟡 Minor — Reject empty/odd-length hex key material early.

`sanitizeHexValue` currently allows empty or odd-length values, which can blow up downstream. Add basic length guards before converting to bytes.

💡 Suggested fix

```diff
 function sanitizeHexValue(value: string, label: string): string {
     const cleaned = value.trim().replace(/^0x/, "").replaceAll(/\s+/g, "")
+    if (cleaned.length === 0) {
+        throw new Error(`${label} is empty`)
+    }
+    if (cleaned.length % 2 !== 0) {
+        throw new Error(`${label} must have an even number of hex characters`)
+    }
     if (!/^[0-9a-fA-F]+$/.test(cleaned)) {
         throw new Error(`${label} contains non-hex characters`)
     }
     return cleaned.toLowerCase()
 }
```

src/libs/l2ps/L2PSHashService.ts-140-167 (1)
140-167: ⚠️ Potential issue | 🟡 Minor — Close OmniProtocol connection pool on stop.

The `connectionPool` is created in `start()` but never closed in `stop()`, leaving pooled sockets open on shutdown/restart. Call `this.connectionPool?.shutdown()` and null the reference to avoid resource leaks.

Suggested fix

```diff
 // Wait for ongoing operation to complete
 const startTime = Date.now()
 while (this.isGenerating && (Date.now() - startTime) < timeoutMs) {
     await new Promise(resolve => setTimeout(resolve, 100))
 }

+// Close OmniProtocol connection pool if initialized
+if (this.connectionPool) {
+    await this.connectionPool.shutdown()
+    this.connectionPool = null
+}
+
 if (this.isGenerating) {
     log.warning("[L2PS Hash Service] Forced shutdown - operation still in progress")
 }
```

src/libs/l2ps/parallelNetworks.ts-303-316 (1)
303-316: ⚠️ Potential issue | 🟡 Minor — Signature verification assumes the `from` field contains the signer's public key.

The verification at Lines 304-309 uses `encryptedTx.content.from` as the public key. This may not be correct if:

- `from` is an address rather than a raw public key
- The transaction was signed by a different key than the `from` address

Consider validating that `from` is a valid hex public key before use, or using a dedicated `signer_public_key` field.

src/libs/network/endpointHandlers.ts-845-854 (1)
845-854: ⚠️ Potential issue | 🟡 Minor — Authorization check only verifies local participation, not sender membership.

The validation at lines 846-854 checks if this node has joined the L2PS network via `parallelNetworks.getL2PS(l2psUid)`, but doesn't verify that the sender of the hash update (available in `tx.content.from`) is an authorized participant in that network. While an L2PS participant cache exists in `L2PSConcurrentSync.ts`, it's not consulted during hash update validation, allowing any node to submit hash updates regardless of their network membership status.

Consider validating the sender against the L2PS network's discovered participants before accepting the hash update.
🧹 Nitpick comments (28)
scripts/generate-test-wallets.ts (4)
31-33: Add validation for `--count` argument.

`parseInt` returns `NaN` for non-numeric input, which would cause the wallet generation loop to silently skip. Consider adding basic validation.

✨ Suggested validation

```diff
 if (arg === "--count" && argv[i + 1]) {
-    options.count = parseInt(argv[i + 1], 10)
+    const count = parseInt(argv[i + 1], 10)
+    if (isNaN(count) || count <= 0) {
+        console.error("Error: --count must be a positive integer")
+        process.exit(1)
+    }
+    options.count = count
     i++
```
113-126: Add security warning for mnemonic output file.

The output file contains sensitive mnemonic phrases that can be used to derive private keys. Consider adding a security warning and ensuring the file isn't accidentally committed to version control.

🔒 Suggested security improvements

```diff
 // Save wallet mnemonics to file
 const outputPath = path.resolve(options.outputPath)
 const walletsData = {
     generated_at: new Date().toISOString(),
     count: newWallets.length,
     balance: options.balance,
+    warning: "SENSITIVE: This file contains mnemonic phrases. Keep secure and never commit to version control.",
     wallets: newWallets.map(w => ({
         index: w.index,
         address: w.address,
         mnemonic: w.mnemonic,
     })),
 }
 writeFileSync(outputPath, JSON.stringify(walletsData, null, 2))
 console.log(`\n💾 Saved wallet mnemonics to: ${outputPath}`)
+console.log(`   ⚠️  WARNING: This file contains sensitive mnemonic phrases!`)
+console.log(`   ⚠️  Ensure it's added to .gitignore and stored securely.`)
```
89-106: Theoretical infinite loop risk on address collision.

The `i--` pattern retries wallet generation on collision, but if the address space were exhausted or the SDK consistently returned duplicates, this would loop indefinitely. While practically impossible with the Ed25519 address space, adding a max-retry guard would be defensive.

🛡️ Suggested defensive guard (the retry counter must be hoisted outside the loop, since `i--; continue` would otherwise reset it on every attempt)

```diff
+const MAX_RETRIES_PER_WALLET = 10
+let retries = 0
 for (let i = 0; i < options.count; i++) {
     const wallet = await generateWallet()

     // Skip if already exists
     if (existingAddresses.has(wallet.address.toLowerCase())) {
+        if (++retries >= MAX_RETRIES_PER_WALLET) {
+            throw new Error(`Failed to generate unique wallet after ${MAX_RETRIES_PER_WALLET} attempts`)
+        }
         console.log(`  ⚠️  Wallet ${i + 1} already exists, regenerating...`)
         i--
         continue
     }
+    retries = 0
```
136-139: Consider logging full error for debugging.

Logging only `err.message` loses the stack trace, which can make debugging harder when something fails unexpectedly.

🔍 Suggested improvement

```diff
 main().catch(err => {
-    console.error("❌ Error:", err.message)
+    console.error("❌ Error:", err.message || err)
+    if (process.env.DEBUG) {
+        console.error(err.stack)
+    }
     process.exit(1)
 })
```

src/libs/l2ps/zk/scripts/setup_all_batches.sh (1)
28-37: Harden PTAU download checks for portability and failure handling.

`stat -c%s` breaks on macOS/BSD, and `curl` without `--fail` can leave a small HTML error file that still passes. Consider a portable size check and fail-fast curl.

🛠️ Suggested tweak

```diff
-    if [[ ! -f "$PTAU_DIR/$file" ]] || [[ $(stat -c%s "$PTAU_DIR/$file") -lt 1000000 ]]; then
+    if [[ ! -f "$PTAU_DIR/$file" ]] || [[ $(wc -c < "$PTAU_DIR/$file") -lt 1000000 ]]; then
         echo -e "${YELLOW}Downloading pot${size}...${NC}"
         rm -f "$PTAU_DIR/$file"
-        curl -L -o "$PTAU_DIR/$file" "$url"
+        curl -fL --retry 3 -o "$PTAU_DIR/$file" "$url"
     else
         echo "pot${size} already exists"
     fi
```

scripts/l2ps-stress-test.ts (1)
273-273: Use `const` for `receiverIdx` (ESLint prefer-const).

`receiverIdx` is never reassigned, so it can be declared as `const`.

src/libs/l2ps/zk/circuits/l2ps_batch_10.circom (2)
26-27: Unused `check` signal — remove or replace with proper constraint.

The `check` signal computes `sender_after * sender_after` but the result is never used or constrained. Squaring in a finite field does not enforce non-negativity (e.g., `-1` squared equals `1` mod prime). If the intent is to ensure `sender_after >= 0`, Circom requires explicit range proofs via bit decomposition (e.g., `Num2Bits`).

Either remove the unused signal to reduce constraints, or implement proper range checking if needed.

🔧 Proposed fix (remove unused signal)

```diff
-    signal check;
-    check <== sender_after * sender_after;
-
     component preHasher = Poseidon(2);
```
20-21: `pre_hash` output is computed but never used.

The `BalanceTransfer` template computes both `pre_hash` and `post_hash` outputs, but only `post_hash` is used in the state chain (Line 71). If `pre_hash` is not needed for verification, consider removing it to reduce circuit constraints (~2.3K per transfer × 10 = ~23K constraints saved).

If `pre_hash` is reserved for future use (e.g., state diff proofs), document this intent.

Also applies to: 69-72
src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts (1)
812-822: Consider batching mutex acquisitions for rate limit updates.

The loop acquires the mutex separately for each sender, which could cause contention with high delivery volumes. Consider collecting all updates and applying them in a single mutex-protected batch.

♻️ Proposed optimization

```diff
 // REVIEW: PR Fix `#9` - Reset offline message counts for senders after successful delivery
 if (sentCount > 0) {
-    // REVIEW: PR Fix `#2` - Use mutex to prevent lost updates during concurrent deliveries
-    for (const [senderId, count] of senderCounts.entries()) {
-        await this.countMutex.runExclusive(async () => {
-            const currentCount = this.offlineMessageCounts.get(senderId) || 0
-            const newCount = Math.max(0, currentCount - count)
-            if (newCount === 0) {
-                this.offlineMessageCounts.delete(senderId)
-            } else {
-                this.offlineMessageCounts.set(senderId, newCount)
-            }
-        })
-    }
+    // Batch update all sender counts in single mutex acquisition
+    await this.countMutex.runExclusive(async () => {
+        for (const [senderId, count] of senderCounts.entries()) {
+            const currentCount = this.offlineMessageCounts.get(senderId) || 0
+            const newCount = Math.max(0, currentCount - count)
+            if (newCount === 0) {
+                this.offlineMessageCounts.delete(senderId)
+            } else {
+                this.offlineMessageCounts.set(senderId, newCount)
+            }
+        }
+    })

     console.log(`Sent ${sentCount} offline messages to ${peerId}`)
 }
```

.env.example (1)
23-24: Consider defaulting `L2PS_ZK_ENABLED` to `false` for easier local development.

ZK proof generation typically requires compiled circuits, trusted setup files, and significant compute resources. Defaulting to `true` may cause failures for developers who haven't completed the ZK setup. Consider `false` as default with documentation on enabling it.

💡 Proposed change

```diff
 # ZK Proof Settings
-L2PS_ZK_ENABLED=true
+L2PS_ZK_ENABLED=false
```

src/libs/l2ps/L2PSTransactionExecutor.ts (1)
124-124: Remove trivially inferred type annotations (ESLint).

Static analysis flagged redundant type annotations on default parameter values.

🧹 Proposed cleanup

```diff
-    simulate: boolean = false
+    simulate = false

-    batchIndex: number = 0,
+    batchIndex = 0,

-    limit: number = 100,
-    offset: number = 0
+    limit = 100,
+    offset = 0
```

Also applies to: 322-322, 393-394
src/libs/l2ps/L2PSProofManager.ts (2)
223-227: Empty GCR edits rejection may be too strict.

The validation rejects proofs with `gcr_edits.length === 0`, but some proof types (e.g., state attestations, no-op proofs) may legitimately have zero edits. Consider if this should be configurable or type-dependent.
117-117: Remove trivially inferred type annotations (ESLint).

🧹 Proposed cleanup

```diff
-    transactionCount: number = 1,
+    transactionCount = 1,

-    limit: number = 100
+    limit = 100
```

Also applies to: 318-318
src/libs/blockchain/l2ps_mempool.ts (1)
573-581: SQL timestamp comparison relies on string-to-bigint cast.

The cleanup query casts string timestamps to BIGINT for comparison. This works but depends on timestamps being stored as numeric strings. Consider adding a comment documenting this assumption, or validating timestamp format on write.
package.json (1)
26-26: Cross-platform compatibility concern with bash script.

The `l2ps:zk:setup` script uses `cd` with `&&` and invokes `bash` directly, which won't work on Windows. Since this is a development/setup script that's likely run once per environment, this may be acceptable, but consider documenting this limitation or providing a cross-platform alternative.

src/libs/l2ps/zk/README.md (1)
65-65: Consider adding language specifiers to fenced code blocks.

Lines 65 and 80 have fenced code blocks without language specifiers. Adding a language identifier (e.g., `text` or `plaintext`) would satisfy linter rules and may improve rendering in some Markdown viewers.

Also applies to: 80-80
src/libs/blockchain/routines/Sync.ts (1)
423-433: Consider limiting concurrent L2PS sync operations. The loop starts a fire-and-forget `syncL2PSWithPeer` call for each `l2psUid` without any concurrency control. If a node participates in many L2PS networks, this could create a burst of parallel network requests.
Consider using a concurrency limiter (e.g., `p-limit` or manual batching), similar to how DTR relay uses a concurrency limit of 5 as mentioned in the PR description.

src/index.ts (1)
853-858: ParallelNetworks initialization error handling could swallow important errors. The catch block logs the error but continues execution. Consider whether a failure to load L2PS networks should be a warning vs. preventing L2PS services from starting with stale/incorrect configuration.
src/libs/l2ps/L2PSConcurrentSync.ts (5)
6-6: Unused import. `L2PSTransactionExecutor` is imported but never used in this file.
🧹 Remove unused import

```diff
-import L2PSTransactionExecutor from "./L2PSTransactionExecutor"
```
37-54: Fire-and-forget pattern may hide discovery failures. The `peer.call().then()` pattern dispatches requests without awaiting them, which means:

- Discovery errors are silently swallowed (Lines 52-54)
- `discoverL2PSParticipants` returns before discovery completes
- Callers cannot know if discovery succeeded

If this is intentional for performance (non-blocking discovery), consider at least logging failures instead of ignoring them entirely.
♻️ Log discovery failures for observability

```diff
             }).catch(() => {
-                // Ignore errors during discovery
+                log.debug(`[L2PS-SYNC] Discovery query failed for peer ${peer.identity}`)
             })
```
42-42: Potential muid collision in tight loops. Using `Date.now()` for unique identifiers can produce duplicates when called in rapid succession within the same millisecond. Consider adding a random suffix or using a counter.
🔧 Add randomness to prevent collisions

```diff
-                    muid: `l2ps_discovery_${Date.now()}` // Unique ID
+                    muid: `l2ps_discovery_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`
```
87-88: Potential precision loss with BigInt timestamp. `L2PSMempoolTx` may store the timestamp as a bigint. Converting via `Number(latestTx.timestamp)` could lose precision for very large values. Since timestamps are typically within safe integer range this is low risk, but consider using `BigInt` consistently if the mempool stores timestamps as bigint.
103-125: Type safety could be improved for synced transactions. The code uses an `any[]` type assertion (Line 104) and validates structure minimally before casting to `L2PSTransaction` (Line 122). While the structural checks at Lines 116-119 help, consider using a type guard or validation function for more robust type safety.
9-23: Bun compatibility patch is reasonable but risky. Monkey-patching `EventTarget.prototype.dispatchEvent` is a necessary workaround for Bun's web-worker limitations, but it affects all EventTarget instances globally. Consider documenting this side effect and ensuring it doesn't break other event-driven code.
541-541: Synchronous file read blocks event loop. `fs.readFileSync` in `verifyProof` could block the event loop for large verification key files. Consider using async `fs.promises.readFile`, or caching the verification key after the first load.
♻️ Use async file read or cache

```diff
-        const vkey = JSON.parse(fs.readFileSync(vkeyPath, 'utf-8'));
+        const vkey = JSON.parse(await fs.promises.readFile(vkeyPath, 'utf-8'));
```

Or cache the verification key in the `loadedKeys` map to avoid repeated file reads.
186-186: Empty reject function loses error information. The ready handler uses an empty reject function, which means any initialization errors for the `__ready__` request are silently discarded.
🔧 Add logging for initialization errors

```diff
-        this.pendingRequests.set('__ready__', { resolve: checkReady, reject: () => {} });
+        this.pendingRequests.set('__ready__', {
+            resolve: checkReady,
+            reject: (err) => log.debug(`[L2PSBatchProver] Ready handler error: ${err}`)
+        });
```
479-486: HMAC key truncation reduces cryptographic strength. Truncating the private key hex to 64 characters (32 bytes) for the HMAC key is acceptable for HMAC-SHA256, but the truncation approach (`slice(0, 64)`) assumes the private key is at least 64 hex chars. If it is shorter, you'd get a weaker key. Consider validating the key length or using a proper key derivation function.
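The validated derivation the comment asks for can be sketched independently of the aggregator. The function names below are illustrative, not the project's actual API; only the HMAC-SHA256 usage itself is standard Node.js `crypto`:

```typescript
import { createHmac } from "node:crypto"

// Derive a 32-byte HMAC-SHA256 key from a hex-encoded private key,
// rejecting keys that are too short instead of silently weakening them.
function deriveHmacKey(privateKeyHex: string): Buffer {
    const clean = privateKeyHex.startsWith("0x") ? privateKeyHex.slice(2) : privateKeyHex
    if (!/^[0-9a-fA-F]+$/.test(clean) || clean.length < 64) {
        throw new Error(`HMAC key material must be >= 64 hex chars, got ${clean.length}`)
    }
    return Buffer.from(clean.slice(0, 64), "hex")
}

function hmacSha256Hex(key: Buffer, message: string): string {
    return createHmac("sha256", key).update(message).digest("hex")
}
```

Failing loudly on short key material is the point: a 16-char hex key would otherwise pass through `slice(0, 64)` unchanged and produce a much weaker MAC.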
752-752: Type casting to `any` bypasses mempool type checking. Casting `batchTransaction` to `any` when calling `Mempool.addTransaction` defeats TypeScript's type safety. Consider ensuring the transaction object matches the expected type, or updating the mempool interface.
src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts
…d interfaces for improved readability.
Actionable comments posted: 15
🤖 Fix all issues with AI agents
In `@scripts/generate-test-wallets.ts`:
- Around line 44-67: In parseArgs, validate the parsed --count value
(options.count) is a positive integer after calling the ARG_HANDLERS parser: use
Number.parseInt(...) result and check Number.isInteger(value) && value > 0; if
the check fails, print a clear error and exit (or throw) so generation cannot
proceed silently with NaN/negative/zero; update the handler for the "--count"
entry in ARG_HANDLERS and the parseArgs flow to perform this validation
immediately after parsing and before returning options.
In `@scripts/l2ps-stress-test.ts`:
- Around line 324-325: The avgDuration calculation can divide by zero if results
is empty (e.g., options.count === 0); update the code around the avgDuration
computation to guard against an empty results array by checking results.length
and returning a sensible default (e.g., 0 or NaN) when length is 0 before
performing the sum / division; reference the variables avgDuration, results and
options.count and replace the current one-liner with a guarded calculation that
sums durations only when results.length > 0 and sets avgDuration = 0 when
results.length === 0.
- Around line 302-304: The TPS calculation can divide by zero when elapsed
rounds to "0.0"; modify the block around elapsed/tps to compute elapsedSeconds =
(Date.now() - startTime) / 1000 (as a number) and guard against zero by using a
small epsilon (e.g. if elapsedSeconds <= 0 then elapsedSeconds = 0.001) before
computing tps = successCount / elapsedSeconds, then format the displayed elapsed
and tps strings; update the variables referenced (elapsed, tps, successCount,
startTime) in the logging line to use these safe values.
In `@src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts`:
- Around line 190-191: The switch "register" case in signalingServer.ts declares
a lexical variable let registerMessage (data as ImRegisterMessage) directly in
the case, violating no-case-declarations; fix it by wrapping the entire
"register" case body in a block { ... } so the declaration is scoped correctly
(locate the switch handling the "register" case and the let registerMessage
variable and enclose that case's statements in braces).
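The `no-case-declarations` fix described above is simply a braced case body, which gives the lexical declaration its own scope. A standalone sketch (the message shapes are illustrative, not the signaling server's real types):

```typescript
// Lexical declarations inside a switch case must be wrapped in a block,
// otherwise they leak into the shared scope of the whole switch.
interface ImRegisterMessage { type: "register"; peerId: string }

function handleMessage(data: { type: string; peerId?: string }): string {
    switch (data.type) {
        case "register": {
            // Braces scope `registerMessage` to this case (no-case-declarations).
            const registerMessage = data as ImRegisterMessage
            return `registered ${registerMessage.peerId}`
        }
        default:
            return "ignored"
    }
}
```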
In `@src/libs/blockchain/l2ps_mempool.ts`:
- Around line 166-179: getNextSequenceNumber plus the immediate repo.save in
addTransaction is race-prone: two concurrent callers can read the same max and
insert duplicate sequence_number. Add a DB-level UNIQUE constraint on (l2ps_uid,
sequence_number) and change addTransaction to allocate sequence numbers
atomically (e.g., use a DB sequence, an atomic increment row with SELECT ... FOR
UPDATE inside a transaction, or an INSERT ... RETURNING on a sequence table)
instead of relying on getNextSequenceNumber + save; also catch unique-constraint
conflicts from repo.save and retry the allocation/insert loop a few times before
failing. Ensure references to getNextSequenceNumber, addTransaction, and
repo.save are updated accordingly.
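The allocate-and-insert retry loop described above can be sketched independently of TypeORM. The data-access helpers (`maxSequence`, `insertTx`) are hypothetical stand-ins for the repository; the real fix also needs a `UNIQUE(l2ps_uid, sequence_number)` constraint so concurrent writers conflict instead of duplicating:

```typescript
// Allocate the next sequence number and insert, retrying on unique-constraint
// conflicts caused by a concurrent writer taking the same number first.
async function addWithSequence(
    l2psUid: string,
    txHash: string,
    maxSequence: (uid: string) => Promise<number>,
    insertTx: (uid: string, seq: number, hash: string) => Promise<void>,
    maxRetries = 3,
): Promise<number> {
    for (let attempt = 0; attempt < maxRetries; attempt++) {
        const seq = (await maxSequence(l2psUid)) + 1
        try {
            await insertTx(l2psUid, seq, txHash) // DB rejects duplicate (uid, seq)
            return seq
        } catch (err) {
            if (!/unique/i.test(String(err))) throw err // retry only on conflicts
        }
    }
    throw new Error(`Could not allocate sequence for ${l2psUid} after ${maxRetries} attempts`)
}
```

The loop re-reads the current maximum on each attempt, so a lost race simply costs one extra round trip rather than a duplicate `sequence_number`.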
In `@src/libs/l2ps/L2PSBatchAggregator.ts`:
- Around line 82-94: The env var parsing in L2PSBatchAggregator
(AGGREGATION_INTERVAL, MIN_BATCH_SIZE, MAX_BATCH_SIZE, CLEANUP_AGE_MS) lacks NaN
checks and bounds enforcement; add a small helper (e.g. parseEnvInt) that reads
process.env[name], parses to integer, returns the default if parse yields NaN,
and clamps to optional min/max, then replace the direct Number.parseInt calls
with parseEnvInt invocations (ensure MAX_BATCH_SIZE still enforces the circuit
cap of 10 and any sensible minimums like >=1 and reasonable lower/upper bounds).
- Around line 600-613: getNextBatchNonce claims to use persistent storage but
currently relies on in-memory sharedState (so nonces reset on process restart);
update the storage layer used by getLastNonceFromStorage() and
saveNonceToStorage() in the L2PSBatchAggregator class to persist the last nonce
to durable storage (e.g., a file or database) and read it on startup before
generating a new nonce; implement atomic write and error handling (fallback to
safe default) and ensure getNextBatchNonce still calls the updated
getLastNonceFromStorage and saveNonceToStorage so newNonce is persisted across
restarts.
- Around line 802-808: The updateCycleTime method can divide by zero if
this.stats.successfulCycles is 0; modify updateCycleTime to guard against that
by checking this.stats.successfulCycles before computing the running average:
set this.stats.lastCycleTime = cycleTime as now, then if
this.stats.successfulCycles <= 0 set this.stats.averageCycleTime = cycleTime (or
initialize appropriately), otherwise compute totalTime and set averageCycleTime
using the existing formula; reference the updateCycleTime method and the stats
properties (stats.averageCycleTime, stats.successfulCycles, stats.lastCycleTime)
when making the change.
- Around line 730-734: The signature is created over a domain-separated message
(this.SIGNATURE_DOMAIN + ":" + contentString) which causes mempool validation to
fail because tx.hash is currently the SHA256 of contentString only; fix by
making tx.hash match what is signed: in L2PSBatchAggregator where you build the
domainSeparatedMessage and call ucrypto.sign, compute tx.hash as the SHA256 of
that same domainSeparatedMessage (instead of contentString) so transaction.hash
and the signature input match, and ensure the signature object still contains
domain (SIGNATURE_DOMAIN); alternatively, if you prefer verification-side
changes, update the signature validation logic in transaction.ts (the validation
function that currently hashes contentString) to detect signatures with
signature.domain === this.SIGNATURE_DOMAIN and prepend the domain to the message
before hashing/verification so the signed bytes align with the signer.
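The `parseEnvInt` helper described in the first item above can be sketched as follows. Taking the env map as a parameter keeps it testable; the variable name and bounds in the usage line are illustrative:

```typescript
// Parse an integer environment variable with a default, a NaN fallback,
// and optional clamping to [min, max].
function parseEnvInt(
    env: Record<string, string | undefined>,
    name: string,
    defaultValue: number,
    min?: number,
    max?: number,
): number {
    const raw = env[name]
    let value = raw === undefined ? defaultValue : Number.parseInt(raw, 10)
    if (Number.isNaN(value)) value = defaultValue
    if (min !== undefined) value = Math.max(min, value)
    if (max !== undefined) value = Math.min(max, value)
    return value
}

// Usage sketch, enforcing the circuit cap of 10:
// const MAX_BATCH_SIZE = parseEnvInt(process.env, "L2PS_MAX_BATCH_SIZE", 10, 1, 10)
```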
In `@src/libs/l2ps/L2PSHashService.ts`:
- Around line 453-459: The updateCycleTime method can divide by zero when
this.stats.successfulCycles is 0; modify updateCycleTime in L2PSHashService to
guard against that by checking if this.stats.successfulCycles <= 0 and in that
case set this.stats.averageCycleTime = cycleTime (and return) before performing
the running-average formula, otherwise compute the total and assign
Math.round(total / this.stats.successfulCycles) as before; ensure you still set
this.stats.lastCycleTime = cycleTime at the start.
- Around line 113-122: The ConnectionPool created in start() when
this.omniEnabled is true (new ConnectionPool({...})) is never closed, causing
resource leaks on repeated start/stop; update the stop() method to check for
this.connectionPool and call its shutdown/close method (or appropriate API) and
null it out, ensuring ConnectionPool is disposed before returning so subsequent
start() calls can recreate it safely; reference the ConnectionPool instantiation
in start(), the this.connectionPool field, and the stop() method when making the
change.
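The division-by-zero guard for `updateCycleTime` amounts to seeding the average on the first cycle. A standalone sketch of the suggested logic (the stats shape mirrors the fields named above, lifted out of the class for illustration):

```typescript
interface CycleStats {
    successfulCycles: number
    averageCycleTime: number
    lastCycleTime: number
}

// Running-average update that seeds the average instead of dividing by zero
// when no successful cycle has been recorded yet.
function updateCycleTime(stats: CycleStats, cycleTime: number): void {
    stats.lastCycleTime = cycleTime
    if (stats.successfulCycles <= 0) {
        stats.averageCycleTime = cycleTime
        return
    }
    const total = stats.averageCycleTime * (stats.successfulCycles - 1) + cycleTime
    stats.averageCycleTime = Math.round(total / stats.successfulCycles)
}
```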
In `@src/libs/l2ps/zk/L2PSBatchProver.ts`:
- Around line 275-282: terminate() currently kills the subprocess but leaves any
pending requests unresolved; update terminate() (and use the same
pendingRequests Map used by the request/response handlers) to iterate over
pendingRequests, call each stored reject callback with an Error like "process
terminated", clear the map, then proceed to kill the childProcess, set
this.childProcess = null and this.processReady = false, and log the termination
so all awaiting promises are rejected rather than leaked.
- Around line 10-23: The global patch of EventTarget.prototype.dispatchEvent
(wrapped under isBun) can have broad side effects; modify the approach in
L2PSBatchProver so the patch is applied only when strictly needed: introduce a
guarded, idempotent patch mechanism (e.g., check a module-scoped flag before
replacing EventTarget.prototype.dispatchEvent), store the originalDispatchEvent
in a module-scoped variable so you can restore it in a cleanup method on the
L2PSBatchProver instance (or after the specific operation completes), and scope
the usage so the patched behavior is only active during the prover’s operations
rather than for the whole process; ensure you reference and protect
EventTarget.prototype.dispatchEvent, originalDispatchEvent, and isBun when
implementing this change.
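The `terminate()` fix boils down to draining a pending-request map before killing the process. A self-contained sketch of that pattern (class and method names are illustrative, not the prover's actual API):

```typescript
type Pending = { resolve: (value: unknown) => void; reject: (err: Error) => void }

// Tracks in-flight requests to a worker process so that terminate() can
// fail them fast instead of leaving their promises hanging forever.
class RequestTracker {
    private pendingRequests = new Map<string, Pending>()

    track(id: string): Promise<unknown> {
        return new Promise((resolve, reject) => {
            this.pendingRequests.set(id, { resolve, reject })
        })
    }

    settle(id: string, value: unknown): void {
        this.pendingRequests.get(id)?.resolve(value)
        this.pendingRequests.delete(id)
    }

    // Called from terminate(): reject everything still outstanding,
    // then clear the map before killing the child process.
    rejectAll(reason: string): void {
        for (const [id, pending] of this.pendingRequests) {
            pending.reject(new Error(`${reason} (request ${id})`))
        }
        this.pendingRequests.clear()
    }

    get size(): number {
        return this.pendingRequests.size
    }
}
```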
In `@src/libs/network/manageNodeCall.ts`:
- Around line 69-70: The variable declaration for peerlist inside the
switch/case is using let and not scoped to the case, causing possible cross-case
access; change to a block-scoped constant by wrapping the case body in braces
and declare const peerlist = await getPeerlist(), then compute response.response
= Hashing.sha256(JSON.stringify(peerlist)). Locate the code around getPeerlist,
peerlist and response.response (and Hashing.sha256) and replace the let with
const inside a new { ... } block so peerlist is lexically scoped to that case.
In `@src/libs/network/routines/transactions/handleL2PS.ts`:
- Line 71: The return performs an unsafe double-cast "decryptedTx as unknown as
Transaction", bypassing TypeScript checks; update handleL2PS to validate the
decryptedTx returned from l2psInstance.decryptTx before casting: either correct
l2psInstance.decryptTx's declared return type, or implement a runtime check/type
guard (e.g., isTransaction) that verifies required fields (hash, content,
signature or whatever Transaction requires) and on failure return an error
(e.g., { decryptedTx: null, error: "Invalid decrypted transaction structure" }),
otherwise return the validated decryptedTx typed as Transaction; ensure you
reference decryptedTx, Transaction and l2psInstance.decryptTx in the change.
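The runtime check the prompt asks for is an ordinary type guard. A sketch with an illustrative field list; the real `Transaction` type defines the authoritative shape:

```typescript
// Minimal structural guard replacing `decryptedTx as unknown as Transaction`.
interface TransactionLike {
    hash: string
    content: unknown
    signature: unknown
}

function isTransactionLike(value: unknown): value is TransactionLike {
    if (typeof value !== "object" || value === null) return false
    const tx = value as Record<string, unknown>
    return (
        typeof tx.hash === "string" &&
        tx.content !== undefined &&
        tx.signature !== undefined
    )
}

// Usage sketch inside handleL2PS:
// if (!isTransactionLike(decryptedTx)) {
//     return { decryptedTx: null, error: "Invalid decrypted transaction structure" }
// }
// return { decryptedTx, error: null }
```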
🧹 Nitpick comments (16)
src/features/tlsnotary/TLSNotaryService.ts (1)
190-193: Guard against NaN when env vars are malformed. `Number.parseInt` can yield `NaN`, which would later break the service (e.g., invalid port). Consider a small guard that falls back to defaults when parsing fails.
💡 Proposed refactor

```diff
-    return {
-        port: Number.parseInt(process.env.TLSNOTARY_PORT ?? "7047", 10),
-        signingKey,
-        maxSentData: Number.parseInt(process.env.TLSNOTARY_MAX_SENT_DATA ?? "16384", 10),
-        maxRecvData: Number.parseInt(process.env.TLSNOTARY_MAX_RECV_DATA ?? "65536", 10),
-        autoStart: process.env.TLSNOTARY_AUTO_START?.toLowerCase() !== "false",
-        mode,
-    }
+    const port = Number.parseInt(process.env.TLSNOTARY_PORT ?? "7047", 10)
+    const maxSentData = Number.parseInt(process.env.TLSNOTARY_MAX_SENT_DATA ?? "16384", 10)
+    const maxRecvData = Number.parseInt(process.env.TLSNOTARY_MAX_RECV_DATA ?? "65536", 10)
+
+    return {
+        port: Number.isFinite(port) ? port : 7047,
+        signingKey,
+        maxSentData: Number.isFinite(maxSentData) ? maxSentData : 16384,
+        maxRecvData: Number.isFinite(maxRecvData) ? maxRecvData : 65536,
+        autoStart: process.env.TLSNOTARY_AUTO_START?.toLowerCase() !== "false",
+        mode,
+    }
```

scripts/l2ps-stress-test.ts (3)
43-46: Consider adding validation for numeric CLI options. Parsed numeric values (count, value, concurrency, delay) are not validated. Invalid inputs like `--count -5` or `--delay abc` could cause unexpected behavior (`NaN` or negative values).
💡 Suggested validation helper

```diff
+function parsePositiveInt(val: string, label: string, defaultVal: number): number {
+    const parsed = Number.parseInt(val, 10)
+    if (Number.isNaN(parsed) || parsed <= 0) {
+        console.warn(`Warning: Invalid ${label} "${val}", using default ${defaultVal}`)
+        return defaultVal
+    }
+    return parsed
+}
+
 const ARG_HANDLERS: Record<string, ArgHandler> = {
     "--node": (opts, val) => { opts.nodeUrl = val },
     "--uid": (opts, val) => { opts.uid = val },
     "--wallets-file": (opts, val) => { opts.walletsFile = val },
-    "--count": (opts, val) => { opts.count = Number.parseInt(val, 10) },
-    "--value": (opts, val) => { opts.value = Number.parseInt(val, 10) },
-    "--concurrency": (opts, val) => { opts.concurrency = Number.parseInt(val, 10) },
-    "--delay": (opts, val) => { opts.delayMs = Number.parseInt(val, 10) },
+    "--count": (opts, val) => { opts.count = parsePositiveInt(val, "count", 100) },
+    "--value": (opts, val) => { opts.value = parsePositiveInt(val, "value", 10) },
+    "--concurrency": (opts, val) => { opts.concurrency = parsePositiveInt(val, "concurrency", 5) },
+    "--delay": (opts, val) => { opts.delayMs = Math.max(0, Number.parseInt(val, 10) || 0) },
 }
```
275-275: Use `const` instead of `let` for `receiverIdx`. The variable is never reassigned after initialization.
🔧 Suggested fix

```diff
-    let receiverIdx = (senderIdx + 1 + Math.floor(Math.random() * (walletConnections.length - 1))) % walletConnections.length
+    const receiverIdx = (senderIdx + 1 + Math.floor(Math.random() * (walletConnections.length - 1))) % walletConnections.length
```
240-263: Wallet connections are not cleaned up. The `Demos` instances are connected but never disconnected when the test completes or fails. Consider adding cleanup to release resources gracefully.
💡 Suggested cleanup pattern

```diff
+    // Cleanup function
+    const cleanup = async () => {
+        for (const conn of walletConnections) {
+            try {
+                await conn.demos.disconnect?.()
+            } catch { /* ignore cleanup errors */ }
+        }
+    }
+
     // Run stress test
     console.log(`\n🔥 Starting stress test...`)
```

Then call `await cleanup()` before the final summary or in a `finally` block.

scripts/generate-test-wallets.ts (1)
117-135: Ensure output directory exists before writing mnemonics. `writeFileSync` will fail if the output directory doesn't exist (e.g., fresh checkout).
🧰 Suggested fix

```diff
-import { existsSync, readFileSync, writeFileSync } from "node:fs"
+import { existsSync, readFileSync, writeFileSync, mkdirSync } from "node:fs"
@@
     // Save wallet mnemonics to file
     const outputPath = path.resolve(options.outputPath)
+    mkdirSync(path.dirname(outputPath), { recursive: true })
     const walletsData = {
         generated_at: new Date().toISOString(),
         count: newWallets.length,
         balance: options.balance,
```

src/utilities/sharedState.ts (1)
105-108: Drop redundant type annotation to satisfy lint. ESLint flags the explicit `: number` as trivially inferred.
🧹 Lint fix

```diff
-    l2psBatchNonce: number = 0 // Persistent nonce for L2PS batch transactions
+    l2psBatchNonce = 0 // Persistent nonce for L2PS batch transactions
```

scripts/l2ps-load-test.ts (1)
269-272: Avoid returning from a `forEach` callback (Biome lint). Wrap the body in a block to silence the lint rule.
🧹 Lint-friendly tweak

```diff
-    results.errors.slice(0, 5).forEach(e => console.log(`   - ${e}`))
+    results.errors.slice(0, 5).forEach(e => {
+        console.log(`   - ${e}`)
+    })
```

src/libs/l2ps/L2PSHashService.ts (2)
302-302: Use explicit typing instead of `any` for the validator parameter. The `hashUpdateTx` and `validator` parameters are typed as `any`, reducing type safety and making the code harder to maintain.
♻️ Suggested improvement

```diff
-    private async relayToValidators(hashUpdateTx: any): Promise<void> {
+    private async relayToValidators(hashUpdateTx: Transaction): Promise<void> {
```

Similarly for `relayViaOmniProtocol`:

```diff
-    private async relayViaOmniProtocol(validator: any, hashUpdateTx: any): Promise<boolean> {
+    private async relayViaOmniProtocol(validator: Peer, hashUpdateTx: Transaction): Promise<boolean> {
```
219-223: Early return on empty UIDs should be logged at debug level. Silently returning when there are no joined L2PS networks makes debugging harder. Consider adding a debug log.
📝 Suggested improvement

```diff
         if (joinedUIDs.length === 0) {
+            log.debug("[L2PS Hash Service] No joined L2PS networks to process")
             return // No L2PS networks to process
         }
```

src/libs/network/manageNodeCall.ts (1)
706-707: Replace `console.log` with structured logger. Multiple L2PS endpoints use `console.log` instead of the project's `log` utility, which bypasses log levels and structured logging.
♻️ Suggested fix

```diff
 case "getL2PSParticipationById":
-    console.log("[L2PS] Received L2PS participation query")
+    log.debug("[L2PS] Received L2PS participation query")
 ...
 case "getL2PSMempoolInfo": {
-    console.log("[L2PS] Received L2PS mempool info request")
+    log.debug("[L2PS] Received L2PS mempool info request")
 ...
 case "getL2PSTransactions": {
-    console.log("[L2PS] Received L2PS transactions sync request")
+    log.debug("[L2PS] Received L2PS transactions sync request")
 ...
 case "getL2PSAccountTransactions": {
-    console.log("[L2PS] Received account transactions request")
+    log.debug("[L2PS] Received account transactions request")
```

Also applies to: 734-734, 767-767, 813-813
src/libs/l2ps/L2PSConcurrentSync.ts (4)
36-53: Fire-and-forget pattern silently swallows all errors. The `.catch(() => { ... })` with an empty block and the outer `catch` that does nothing make it hard to debug discovery failures. At minimum, log discovery errors at debug level.
📝 Suggested improvement

```diff
             peer.call({
                 method: "nodeCall",
                 params: [{
                     message: "getL2PSParticipationById",
                     data: { l2psUid: uid },
                     muid: `l2ps_discovery_${Date.now()}` // Unique ID
                 }]
             }).then(response => {
                 if (response?.result === 200 && response?.response?.participating) {
                     addL2PSParticipant(uid, peer.identity)
                     log.debug(`[L2PS-SYNC] Discovered participant for ${uid}: ${peer.identity}`)
                     // Opportunistic sync after discovery
                     syncL2PSWithPeer(peer, uid)
                 }
-            }).catch(() => {
-                // Ignore errors during discovery
+            }).catch((err) => {
+                log.debug(`[L2PS-SYNC] Discovery query failed for ${uid} on ${peer.identity}: ${err instanceof Error ? err.message : String(err)}`)
             })
```
41-41: Non-unique `muid` if called rapidly. Using `Date.now()` for the muid may produce duplicates if called within the same millisecond. Consider adding a random suffix or counter.
♻️ Suggested fix

```diff
-                    muid: `l2ps_discovery_${Date.now()}` // Unique ID
+                    muid: `l2ps_discovery_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`
```
103-103: Type assertion to `any[]` loses type safety. Consider creating a proper interface for the transaction response shape to maintain type safety throughout the sync flow.
♻️ Suggested approach

```typescript
interface L2PSTransactionResponse {
    hash: string
    l2ps_uid: string
    original_hash: string
    encrypted_tx: L2PSTransaction
    timestamp: number
    block_number: number
}

// Then use:
const txs = response.response.transactions as L2PSTransactionResponse[]
```
15-16: Module-level cache without TTL or size limits. The `l2psParticipantCache` grows unbounded and entries never expire. If peers disconnect or leave networks, stale entries persist indefinitely.
Consider adding:

- A TTL for cache entries
- A maximum cache size
- Integration with peer disconnect events to remove stale entries

```typescript
interface CachedParticipant {
    nodeId: string
    discoveredAt: number
}

const CACHE_TTL_MS = 5 * 60 * 1000 // 5 minutes
```

src/libs/l2ps/zk/L2PSBatchProver.ts (1)
541-541: Synchronous file read blocks the event loop. `fs.readFileSync` in `verifyProof` blocks the event loop. For consistency with the async pattern and to avoid blocking, consider using `fs.promises.readFile`.
♻️ Suggested fix

```diff
+import { readFile } from 'node:fs/promises';
 async verifyProof(batchProof: BatchProof): Promise<boolean> {
     ...
-    const vkey = JSON.parse(fs.readFileSync(vkeyPath, 'utf-8'));
+    const vkey = JSON.parse(await readFile(vkeyPath, 'utf-8'));
```

src/libs/l2ps/parallelNetworks.ts (1)
313-315: Consider whether a missing signature should be an error. Currently, a missing signature only logs a warning and proceeds with decryption. Depending on security requirements, you may want to reject transactions without signatures entirely.

```typescript
// If signatures should be required:
} else {
    throw new Error(`L2PS transaction missing required signature for ${uid}`)
}
```
```typescript
function parseArgs(argv: string[]): CliOptions {
    const options: CliOptions = {
        count: 10,
        balance: "1000000000000000000",
        genesisPath: "data/genesis.json",
        outputPath: "data/test-wallets.json",
    }

    for (let i = 2; i < argv.length; i++) {
        const arg = argv[i]

        if (arg === "--help") {
            showHelp()
        }

        const handler = ARG_HANDLERS[arg]
        if (handler && argv[i + 1]) {
            handler(options, argv[i + 1])
            i++
        }
    }

    return options
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
wc -l scripts/generate-test-wallets.ts
```

Repository: kynesyslabs/node
Length of output: 96
🏁 Script executed:

```shell
cat -n scripts/generate-test-wallets.ts
```

Repository: kynesyslabs/node
Length of output: 6022
Add validation for `--count` to prevent silent generation failures.
`Number.parseInt()` can return `NaN` or negative values, causing the generation loop to be skipped silently (since `0 < NaN` and `0 < negative` are both false). This results in zero wallets being generated without clear error feedback.
Validate that `--count` is a positive integer after parsing:
🛠️ Suggested validation

```diff
 function parseArgs(argv: string[]): CliOptions {
     const options: CliOptions = {
         count: 10,
         balance: "1000000000000000000",
         genesisPath: "data/genesis.json",
         outputPath: "data/test-wallets.json",
     }
     for (let i = 2; i < argv.length; i++) {
         const arg = argv[i]
         if (arg === "--help") {
             showHelp()
         }
         const handler = ARG_HANDLERS[arg]
         if (handler && argv[i + 1]) {
             handler(options, argv[i + 1])
             i++
         }
     }
+    if (!Number.isFinite(options.count) || options.count <= 0) {
+        throw new Error("--count must be a positive integer")
+    }
     return options
 }
```
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
function parseArgs(argv: string[]): CliOptions {
    const options: CliOptions = {
        count: 10,
        balance: "1000000000000000000",
        genesisPath: "data/genesis.json",
        outputPath: "data/test-wallets.json",
    }
    for (let i = 2; i < argv.length; i++) {
        const arg = argv[i]
        if (arg === "--help") {
            showHelp()
        }
        const handler = ARG_HANDLERS[arg]
        if (handler && argv[i + 1]) {
            handler(options, argv[i + 1])
            i++
        }
    }
    if (!Number.isFinite(options.count) || options.count <= 0) {
        throw new Error("--count must be a positive integer")
    }
    return options
}
```
🤖 Prompt for AI Agents
In `@scripts/generate-test-wallets.ts` around lines 44 - 67, In parseArgs,
validate the parsed --count value (options.count) is a positive integer after
calling the ARG_HANDLERS parser: use Number.parseInt(...) result and check
Number.isInteger(value) && value > 0; if the check fails, print a clear error
and exit (or throw) so generation cannot proceed silently with
NaN/negative/zero; update the handler for the "--count" entry in ARG_HANDLERS
and the parseArgs flow to perform this validation immediately after parsing and
before returning options.
```typescript
const elapsed = ((Date.now() - startTime) / 1000).toFixed(1)
const tps = (successCount / Number.parseFloat(elapsed)).toFixed(2)
console.log(`   📊 Progress: ${i + 1}/${options.count} | Success: ${successCount} | Failed: ${failCount} | TPS: ${tps}`)
```
Potential division by zero in TPS calculation.
If the first 10 transactions complete in under 100ms, `elapsed` could round to `"0.0"`, and `Number.parseFloat("0.0")` returns 0, causing `Infinity` TPS. Guard against this edge case.
🛡️ Suggested fix

```diff
 if ((i + 1) % 10 === 0 || i === options.count - 1) {
-    const elapsed = ((Date.now() - startTime) / 1000).toFixed(1)
-    const tps = (successCount / Number.parseFloat(elapsed)).toFixed(2)
+    const elapsedSec = (Date.now() - startTime) / 1000
+    const tps = elapsedSec > 0 ? (successCount / elapsedSec).toFixed(2) : "N/A"
+    const elapsed = elapsedSec.toFixed(1)
     console.log(`   📊 Progress: ${i + 1}/${options.count} | Success: ${successCount} | Failed: ${failCount} | TPS: ${tps}`)
 }
```
📝 Committable suggestion
```typescript
if ((i + 1) % 10 === 0 || i === options.count - 1) {
    const elapsedSec = (Date.now() - startTime) / 1000
    const tps = elapsedSec > 0 ? (successCount / elapsedSec).toFixed(2) : "N/A"
    const elapsed = elapsedSec.toFixed(1)
    console.log(`   📊 Progress: ${i + 1}/${options.count} | Success: ${successCount} | Failed: ${failCount} | TPS: ${tps}`)
}
```
🤖 Prompt for AI Agents
In `@scripts/l2ps-stress-test.ts` around lines 302 - 304, The TPS calculation can
divide by zero when elapsed rounds to "0.0"; modify the block around elapsed/tps
to compute elapsedSeconds = (Date.now() - startTime) / 1000 (as a number) and
guard against zero by using a small epsilon (e.g. if elapsedSeconds <= 0 then
elapsedSeconds = 0.001) before computing tps = successCount / elapsedSeconds,
then format the displayed elapsed and tps strings; update the variables
referenced (elapsed, tps, successCount, startTime) in the logging line to use
these safe values.
```typescript
const totalTime = (Date.now() - startTime) / 1000
const avgDuration = results.reduce((sum, r) => sum + r.duration, 0) / results.length
```
Guard against empty results array.
If `options.count` is somehow 0, `results.length` would be 0, causing division by zero when calculating `avgDuration`.
const totalTime = (Date.now() - startTime) / 1000
- const avgDuration = results.reduce((sum, r) => sum + r.duration, 0) / results.length
+ const avgDuration = results.length > 0
+ ? results.reduce((sum, r) => sum + r.duration, 0) / results.length
📝 Committable suggestion
```typescript
const totalTime = (Date.now() - startTime) / 1000
const avgDuration = results.length > 0
    ? results.reduce((sum, r) => sum + r.duration, 0) / results.length
    : 0
```
🤖 Prompt for AI Agents
In `@scripts/l2ps-stress-test.ts` around lines 324 - 325, The avgDuration
calculation can divide by zero if results is empty (e.g., options.count === 0);
update the code around the avgDuration computation to guard against an empty
results array by checking results.length and returning a sensible default (e.g.,
0 or NaN) when length is 0 before performing the sum / division; reference the
variables avgDuration, results and options.count and replace the current
one-liner with a guarded calculation that sums durations only when
results.length > 0 and sets avgDuration = 0 when results.length === 0.
src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts
```ts
private updateCycleTime(cycleTime: number): void {
    this.stats.lastCycleTime = cycleTime

    // Calculate running average
    const totalTime = (this.stats.averageCycleTime * (this.stats.successfulCycles - 1)) + cycleTime
    this.stats.averageCycleTime = Math.round(totalTime / this.stats.successfulCycles)
}
```
Potential division by zero in updateCycleTime.
When successfulCycles is 1, the calculation (this.stats.averageCycleTime * (this.stats.successfulCycles - 1)) evaluates to 0 * averageCycleTime, which is fine. However, if this method is ever called when successfulCycles is 0 (e.g., due to a bug), it will cause division by zero.
🛡️ Defensive fix

```diff
 private updateCycleTime(cycleTime: number): void {
     this.stats.lastCycleTime = cycleTime
     // Calculate running average
-    const totalTime = (this.stats.averageCycleTime * (this.stats.successfulCycles - 1)) + cycleTime
-    this.stats.averageCycleTime = Math.round(totalTime / this.stats.successfulCycles)
+    if (this.stats.successfulCycles > 0) {
+        const totalTime = (this.stats.averageCycleTime * (this.stats.successfulCycles - 1)) + cycleTime
+        this.stats.averageCycleTime = Math.round(totalTime / this.stats.successfulCycles)
+    }
 }
```

📝 Committable suggestion
```ts
private updateCycleTime(cycleTime: number): void {
    this.stats.lastCycleTime = cycleTime
    // Calculate running average
    if (this.stats.successfulCycles > 0) {
        const totalTime = (this.stats.averageCycleTime * (this.stats.successfulCycles - 1)) + cycleTime
        this.stats.averageCycleTime = Math.round(totalTime / this.stats.successfulCycles)
    }
}
```
🤖 Prompt for AI Agents
In `@src/libs/l2ps/L2PSHashService.ts` around lines 453 - 459, The updateCycleTime
method can divide by zero when this.stats.successfulCycles is 0; modify
updateCycleTime in L2PSHashService to guard against that by checking if
this.stats.successfulCycles <= 0 and in that case set
this.stats.averageCycleTime = cycleTime (and return) before performing the
running-average formula, otherwise compute the total and assign Math.round(total
/ this.stats.successfulCycles) as before; ensure you still set
this.stats.lastCycleTime = cycleTime at the start.
```ts
const isBun = (globalThis as any).Bun !== undefined;
if (isBun) {
    // Suppress web-worker errors in Bun by patching dispatchEvent
    const originalDispatchEvent = EventTarget.prototype.dispatchEvent;
    EventTarget.prototype.dispatchEvent = function (event: any) {
        if (!(event instanceof Event)) {
            // Convert plain object to Event for Bun compatibility
            const realEvent = new Event(event.type || 'message');
            Object.assign(realEvent, event);
            return originalDispatchEvent.call(this, realEvent);
        }
        return originalDispatchEvent.call(this, event);
    };
}
```
Global prototype modification could cause unexpected side effects.
Patching EventTarget.prototype.dispatchEvent globally for Bun compatibility can affect other code in the process. Consider isolating this patch or using a more targeted approach.
💡 Suggestion
Consider wrapping the patch in a way that only affects the specific use case:
```ts
// Store reference to restore later if needed
const originalDispatchEvent = EventTarget.prototype.dispatchEvent;
// Apply patch only within the prover's scope if possible
// Or at minimum, add documentation about the global side effect
```

Also consider checking if the patch is already applied to avoid double-patching:
```diff
 if (isBun) {
+    if (!(EventTarget.prototype as any).__bunPatched) {
     const originalDispatchEvent = EventTarget.prototype.dispatchEvent;
     EventTarget.prototype.dispatchEvent = function (event: any) {
         ...
     };
+    (EventTarget.prototype as any).__bunPatched = true;
+    }
 }
```

🧰 Tools
🪛 ESLint
[error] 10-10: Extra semicolon.
(semi)
[error] 13-13: Extra semicolon.
(semi)
[error] 17-17: Extra semicolon.
(semi)
[error] 18-18: Extra semicolon.
(semi)
[error] 19-19: Extra semicolon.
(semi)
[error] 21-21: Extra semicolon.
(semi)
[error] 22-22: Extra semicolon.
(semi)
🤖 Prompt for AI Agents
In `@src/libs/l2ps/zk/L2PSBatchProver.ts` around lines 10 - 23, The global patch
of EventTarget.prototype.dispatchEvent (wrapped under isBun) can have broad side
effects; modify the approach in L2PSBatchProver so the patch is applied only
when strictly needed: introduce a guarded, idempotent patch mechanism (e.g.,
check a module-scoped flag before replacing
EventTarget.prototype.dispatchEvent), store the originalDispatchEvent in a
module-scoped variable so you can restore it in a cleanup method on the
L2PSBatchProver instance (or after the specific operation completes), and scope
the usage so the patched behavior is only active during the prover’s operations
rather than for the whole process; ensure you reference and protect
EventTarget.prototype.dispatchEvent, originalDispatchEvent, and isBun when
implementing this change.
```ts
async terminate(): Promise<void> {
    if (this.childProcess) {
        this.childProcess.kill();
        this.childProcess = null;
        this.processReady = false;
        log.info('[L2PSBatchProver] Subprocess terminated');
    }
}
```
Subprocess termination doesn't reject pending requests.
When terminate() is called, pending requests remain unresolved, potentially causing promise leaks. The error handler at lines 150-158 handles this for process errors, but explicit termination doesn't.
🐛 Suggested fix

```diff
 async terminate(): Promise<void> {
     if (this.childProcess) {
+        // Reject all pending requests before killing
+        for (const [id, pending] of this.pendingRequests) {
+            pending.reject(new Error('Subprocess terminated'));
+            this.pendingRequests.delete(id);
+        }
         this.childProcess.kill();
         this.childProcess = null;
         this.processReady = false;
         log.info('[L2PSBatchProver] Subprocess terminated');
     }
 }
```

🧰 Tools
🪛 ESLint
[error] 277-277: Extra semicolon.
(semi)
[error] 278-278: Extra semicolon.
(semi)
[error] 279-279: Extra semicolon.
(semi)
[error] 280-280: Extra semicolon.
(semi)
🤖 Prompt for AI Agents
In `@src/libs/l2ps/zk/L2PSBatchProver.ts` around lines 275 - 282, terminate()
currently kills the subprocess but leaves any pending requests unresolved;
update terminate() (and use the same pendingRequests Map used by the
request/response handlers) to iterate over pendingRequests, call each stored
reject callback with an Error like "process terminated", clear the map, then
proceed to kill the childProcess, set this.childProcess = null and
this.processReady = false, and log the termination so all awaiting promises are
rejected rather than leaked.
This PR modifies Files that will be reverted:
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @.gitignore:
- Around line 129-136: The .gitignore contains a redundant entry "bridges_docs"
(without trailing slash) that duplicates the existing "bridges_docs/" ignore
earlier; remove the duplicate "bridges_docs" line so only the canonical
"bridges_docs/" entry remains and the trailing-slash style is consistent.
In `@src/utilities/sharedState.ts`:
- Around line 112-114: The comment on the l2psBatchNonce field is misleading
about persistence; update the comment on the l2psBatchNonce property in
sharedState.ts to state it's session-persistent (e.g., "Session-persistent nonce
for L2PS batch transactions") to match the behavior in L2PSBatchAggregator
(which retains nonce only for the process lifetime), or alternatively confirm
and document if you intend to make it durable across restarts; reference the
l2psBatchNonce variable and L2PSBatchAggregator usage when making the change.
🧹 Nitpick comments (6)
src/libs/blockchain/chain.ts (1)
208-214: Inconsistent error handling compared to `getTxByHash`.

This method now has identical logic to `getTxByHash` (lines 74-89), but lacks the try-catch error handling present there. If the database query fails, errors will propagate without logging. Consider one of these approaches:

- Add consistent error handling to match `getTxByHash`
- Consolidate both methods into one to eliminate duplication

♻️ Option 1: Add consistent error handling

```diff
 static async getTransactionFromHash(hash: string): Promise<Transaction | null> {
+    try {
         const rawTx = await this.transactions.findOneBy({ hash: ILike(hash) })
         if (!rawTx) {
             return null
         }
         return Transaction.fromRawTransaction(rawTx)
+    } catch (error) {
+        log.error("[ChainDB] [ ERROR ]: " + JSON.stringify(error))
+        throw error
+    }
 }
```

♻️ Option 2: Consolidate by delegating to `getTxByHash`

```diff
 static async getTransactionFromHash(hash: string): Promise<Transaction | null> {
-    const rawTx = await this.transactions.findOneBy({ hash: ILike(hash) })
-    if (!rawTx) {
-        return null
-    }
-    return Transaction.fromRawTransaction(rawTx)
+    return this.getTxByHash(hash)
 }
```

.env.example (2)
L2PS_ZK_ENABLED=true) may cause unexpected resource consumption for users unfamiliar with the performance implications. Consider adding an inline comment warning about resource requirements, or defaulting tofalsewith documentation explaining when to enable it.📝 Suggested documentation enhancement
# ZK Proof Settings +# WARNING: ZK proof generation is CPU/memory intensive. Only enable if hardware supports it. +# Recommended minimum: 4+ CPU cores, 8GB+ RAM L2PS_ZK_ENABLED=true
13-74: Optional: Address dotenv-linter ordering warnings.The static analysis tool flagged several
UnorderedKeywarnings suggesting alphabetical ordering of environment variables. The current logical grouping (related settings together) is reasonable and arguably more maintainable than strict alphabetical ordering. However, if your team prefers to satisfy the linter, you could reorder keys alphabetically within each section.This is purely a style preference with no functional impact.
package.json (1)
26-26: Consider a cross‑platform wrapper (or document the bash requirement).
npm runon Windows won’t havebashby default, so this script will fail for contributors who don’t use Git Bash/WSL. If Windows support matters, consider a Node wrapper or explicitly document the bash prerequisite in onboarding docs.src/libs/network/manageNodeCall.ts (1)
735-758: Avoid loading the entire L2PS mempool just to compute stats.
For large UIDs this can be heavy; consider a stats query (count + min/max timestamps) instead of pulling all rows.♻️ Possible refactor (using a dedicated stats helper)
- const transactions = await L2PSMempool.getByUID(data.l2psUid, "processed") - response.response = { - l2psUid: data.l2psUid, - transactionCount: transactions.length, - lastTimestamp: transactions.length > 0 - ? transactions[transactions.length - 1].timestamp - : 0, - oldestTimestamp: transactions.length > 0 - ? transactions[0].timestamp - : 0, - } + const stats = await L2PSMempool.getStatsByUID(data.l2psUid, "processed") + response.response = { + l2psUid: data.l2psUid, + transactionCount: stats.count, + lastTimestamp: stats.lastTimestamp, + oldestTimestamp: stats.oldestTimestamp, + }src/libs/blockchain/routines/Sync.ts (1)
580-620: Consider wiring the new peer-failover helpers or removing them.
findNextAvailablePeer/handlePeerUnreachablearen’t referenced in this file yet; either use them in the retry path or drop them to avoid dead code.
```
.serena-backup
PR_COMMENTS
bridges_docs
claudedocs
dist
docs
local_tests
omniprotocol_fixtures_scripts
```
Remove redundant bridges_docs ignore entry.
bridges_docs/ is already ignored earlier; the new bridges_docs entry is redundant and inconsistent with the trailing slash style.
🧹 Suggested cleanup
```diff
-bridges_docs
```

📝 Committable suggestion
```
.serena-backup
PR_COMMENTS
claudedocs
dist
docs
local_tests
omniprotocol_fixtures_scripts
```
🤖 Prompt for AI Agents
In @.gitignore around lines 129 - 136, The .gitignore contains a redundant entry
"bridges_docs" (without trailing slash) that duplicates the existing
"bridges_docs/" ignore earlier; remove the duplicate "bridges_docs" line so only
the canonical "bridges_docs/" entry remains and the trailing-slash style is
consistent.
```ts
// SECTION L2PS
l2psJoinedUids: string[] = [] // UIDs of the L2PS networks that are joined to the node (loaded from the data directory)
l2psBatchNonce: number = 0 // Persistent nonce for L2PS batch transactions
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Search for l2psBatchNonce usage to understand persistence requirements
rg -n -C3 'l2psBatchNonce'
```

Repository: kynesyslabs/node
Length of output: 1887
Clarify comment: session-persistent nonce, not permanently persistent.
The comment "Persistent nonce for L2PS batch transactions" is misleading. The implementation resets to 0 on node restart. Looking at L2PSBatchAggregator.ts (lines 620-642), the nonce is intentionally session-persistent only: "Use shared state to persist nonce across the session... This survives within the same process lifetime."
Update the comment to clarify this is session-scoped (e.g., "Session-persistent nonce for L2PS batch transactions") or verify whether nonce reuse across restarts is acceptable for L2PS transaction security semantics.
🤖 Prompt for AI Agents
In `@src/utilities/sharedState.ts` around lines 112 - 114, The comment on the
l2psBatchNonce field is misleading about persistence; update the comment on the
l2psBatchNonce property in sharedState.ts to state it's session-persistent
(e.g., "Session-persistent nonce for L2PS batch transactions") to match the
behavior in L2PSBatchAggregator (which retains nonce only for the process
lifetime), or alternatively confirm and document if you intend to make it
durable across restarts; reference the l2psBatchNonce variable and
L2PSBatchAggregator usage when making the change.
… circuit range checks, and improve signaling server robustness with offline message rollback.
This PR modifies Files that will be reverted:
…rotection, and refactor network handlers for improved error logging and type safety.
This PR modifies Files that will be reverted:
…ool transactions, enhance L2PS circuit with underflow protection, and refactor graceful shutdown.
This PR modifies Files that will be reverted:
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/libs/network/endpointHandlers.ts (1)
322-331: ⚠️ Potential issue | 🟠 Major

Wrap the subnet case in a block to satisfy noSwitchDeclarations.

Biome flags switch-scoped `var` declarations; this will fail lint.

✅ Suggested fix

```diff
-            case "subnet":
+            case "subnet": {
                 payload = tx.content.data
                 log.debug(
                     "[handleExecuteTransaction] Subnet payload: " +
                         JSON.stringify(payload[1]),
                 )
-                var subnetResult = await ServerHandlers.handleSubnetTx(
+                const subnetResult = await ServerHandlers.handleSubnetTx(
                     tx as L2PSTransaction,
                 )
                 result.response = subnetResult
                 break
+            }
```
🤖 Fix all issues with AI agents
In `@src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts`:
- Around line 706-731: The code computes a timestamp and includes it in
messageContent before computing messageHash, but then calls Date.now() again
when building the offlineMessage entity causing a potential millisecond
mismatch; change the offlineMessage creation to reuse the earlier timestamp
variable (converted to string to match TypeORM bigint behavior) so the stored
offlineMessage.timestamp exactly matches the timestamp used in
messageContent/messageHash (referencing variables messageContent, timestamp,
messageHash and the offlineMessageRepository.create call).
In `@src/libs/blockchain/l2ps_mempool.ts`:
- Around line 344-348: getHashForL2PS currently restricts the query to
L2PSMempoolTx records with status: "processed", which omits transactions that
later become "executed" or "batched" and breaks deterministic hashing; update
the FindManyOptions where-clause in getHashForL2PS to include all
non-failed/non-confirmed statuses (e.g., ["processed","executed","batched"] or
equivalent predicate) so the consolidated hash uses every relevant L2PSMempoolTx
for the given l2ps_uid.
In `@src/libs/blockchain/routines/Sync.ts`:
- Around line 119-133: The calls to discoverL2PSParticipants and
exchangeL2PSParticipation are passing an unnecessary second argument
getSharedState.l2psJoinedUids; both helpers already read shared state
internally, so remove the extra parameter and call them with only the peers
argument (e.g., replace discoverL2PSParticipants(peers,
getSharedState.l2psJoinedUids) with discoverL2PSParticipants(peers) and
similarly for exchangeL2PSParticipation(peers, ...)). Ensure any related
.then/.catch handlers and logging (e.g., log.debug/log.error) remain unchanged.
In `@src/libs/l2ps/L2PSConcurrentSync.ts`:
- Around line 128-145: processReceivedTransactions currently adds peer-supplied
items directly via L2PSMempool.addTransaction(..., "processed"), which bypasses
decrypt/signature/original_hash checks; change this to either route incoming txs
through the existing validation pipeline (call handleL2PS or the decrypt+confirm
functions used by the main flow) before adding, or insert them into the mempool
with status "pending" and run the same validation routine that verifies
encrypted_tx, original_hash and signatures and only promote to "processed" when
that routine succeeds; update references in processReceivedTransactions and
avoid direct "processed" writes via L2PSMempool.addTransaction until validation
completes.
In `@src/libs/l2ps/zk/L2PSBatchProver.ts`:
- Around line 168-186: The ready-timeout handler currently switches to
main-thread proving but leaves the spawned subprocess and the '__ready__'
pendingRequests entry alive; modify the timeout callback (the readyTimeout
closure) to also terminate/cleanup the child subprocess (e.g., call whatever
shutdown/kill on this.child or this.subprocess), remove the '__ready__' key from
this.pendingRequests, and clear any handlers so no lingering listeners remain,
then set this.useSubprocess = false and resolve; also ensure the checkReady
handler clears the timeout and pendingRequests on success to avoid duplicates.
In `@src/libs/omniprotocol/protocol/handlers/l2ps.ts`:
- Around line 397-405: Before calling L2PSHashes.updateHash you must
authenticate the update: verify the payload is signed by a valid participant and
that the signer is an active participant of the targeted L2PS (identified by
request.l2psUid). Add a step to (1) validate the signature on the incoming
payload (using your existing signature verification helper) against the provided
signer/public key fields on the request, and (2) confirm that the recovered
signer is in the participant set for the L2PS (use the module/method that
returns participants or an isParticipant check for request.l2psUid). Only after
both checks pass should you invoke L2PSHashes.updateHash(request.l2psUid,
request.consolidatedHash, request.transactionCount,
BigInt(request.blockNumber)); reject/throw otherwise.
- Around line 300-311: The handler handleL2PSGetBatchStatus is querying the L2PS
mempool with the wrong status; change the getByUID call on L2PSMempool to use
the "pending" status instead of "processed" (i.e.,
L2PSMempool.getByUID(request.l2psUid, "pending")) so it returns transactions
awaiting aggregation; keep the surrounding logic (encoding via encodeResponse
and the returned fields like pendingTransactions and l2psUid) unchanged.
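The authentication gate described for the hash-update handler can be sketched as follows. The request shape and the helpers `verifySignature` and `getParticipants` are hypothetical stand-ins for whatever the codebase actually exposes; only the ordering of the two checks before `L2PSHashes.updateHash` is the point.

```typescript
// Hypothetical shape of the request; field names follow the prompt above.
interface HashUpdateRequest {
    l2psUid: string
    consolidatedHash: string
    signerPublicKey: string
    signature: string
}

// `verifySignature` and `getParticipants` stand in for the real helpers.
function authorizeHashUpdate(
    request: HashUpdateRequest,
    verifySignature: (message: string, signature: string, publicKey: string) => boolean,
    getParticipants: (l2psUid: string) => string[],
): void {
    // 1. The payload must carry a valid signature over the consolidated hash
    if (!verifySignature(request.consolidatedHash, request.signature, request.signerPublicKey)) {
        throw new Error("Invalid signature on L2PS hash update")
    }
    // 2. The recovered signer must be an active participant of the targeted L2PS
    if (!getParticipants(request.l2psUid).includes(request.signerPublicKey)) {
        throw new Error(`Signer is not a participant of ${request.l2psUid}`)
    }
    // Only after both checks pass should L2PSHashes.updateHash be invoked
}
```

Rejecting before any state change keeps unauthenticated peers from overwriting another network's consolidated hash.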
🧹 Nitpick comments (3)
src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts (2)
408-411: Avoid `@ts-ignore` — declare an explicit return type instead.

The `storeOfflineMessage` method returns `offlineMessage.id` but lacks a return type annotation, forcing a `@ts-ignore` and double cast here. Adding proper typing eliminates the need for suppression.

♻️ Proposed fix

Update the `storeOfflineMessage` signature (around line 692):

```diff
-    private async storeOfflineMessage(senderId: string, targetId: string, message: SerializedEncryptedObject) {
+    private async storeOfflineMessage(senderId: string, targetId: string, message: SerializedEncryptedObject): Promise<string> {
```

Then remove the ts-ignore and cast here:

```diff
-    // @ts-ignore - We know this returns a string ID now
-    messageId = await this.storeOfflineMessage(senderId, payload.targetId, payload.message) as unknown as string
+    messageId = await this.storeOfflineMessage(senderId, payload.targetId, payload.message)
```
844-854: Consider batching mutex acquisitions for better performance.

The loop acquires the mutex separately for each sender. For recipients with many offline messages from different senders, this creates multiple lock/unlock cycles. Batching the updates into a single mutex-protected block would be more efficient.

♻️ Proposed optimization

```diff
 // REVIEW: PR Fix #9 - Reset offline message counts for senders after successful delivery
 if (sentCount > 0) {
-    // REVIEW: PR Fix #2 - Use mutex to prevent lost updates during concurrent deliveries
-    for (const [senderId, count] of senderCounts.entries()) {
-        await this.countMutex.runExclusive(async () => {
+    await this.countMutex.runExclusive(async () => {
+        for (const [senderId, count] of senderCounts.entries()) {
             const currentCount = this.offlineMessageCounts.get(senderId) || 0
             const newCount = Math.max(0, currentCount - count)
             if (newCount === 0) {
                 this.offlineMessageCounts.delete(senderId)
             } else {
                 this.offlineMessageCounts.set(senderId, newCount)
             }
-        })
-    }
+        }
+    })
     console.log(`Sent ${sentCount} offline messages to ${peerId}`)
 }
```

src/libs/network/routines/transactions/handleL2PS.ts (1)
109-112: Add explicit hash computation after decryption for defensive validation.

While `Transaction.confirmTx()` validates hash coherence (lines 65–69), it depends on the decrypted transaction already having the hash field populated. For stronger defensive coding and independence from external library behavior, compute the canonical hash explicitly after decryption:

```ts
// After decryptTx returns, ensure hash is populated
if (!decryptedTx.hash) {
    decryptedTx = Transaction.hash(decryptedTx)
    if (!decryptedTx) {
        return createErrorResponse(response, 400, "Failed to compute transaction hash")
    }
}
```

This makes the hash computation explicit and independent of whether demosdk populates it, improving code clarity and resilience.
```ts
const timestamp = Date.now()
const messageContent = JSON.stringify({
    message, // Keys in alphabetical order
    senderId,
    targetId,
    timestamp,
})
const messageHash = Hashing.sha256(messageContent)

// NOTE: Future improvement - will be replaced with sender signature verification once client-side signing is implemented
// Current: Sign with node's private key for integrity (not authentication)
// REVIEW: PR Fix #14 - Add null safety check for private key access (location 2/3)
if (!getSharedState.identity?.ed25519?.privateKey) {
    throw new Error("[Signaling Server] Private key not available for offline message signing")
}

const signature = Cryptography.sign(messageHash, getSharedState.identity.ed25519.privateKey)

const offlineMessage = offlineMessageRepository.create({
    recipientPublicKey: targetId,
    senderPublicKey: senderId,
    messageHash,
    encryptedContent: message,
    signature: Buffer.from(signature).toString("base64"),
    // REVIEW: PR Fix #9 - timestamp is string type to match TypeORM bigint behavior
    timestamp: Date.now().toString(),
```
Timestamp mismatch between hash and stored entity.
The timestamp variable (line 706) is used in the message hash, but line 731 calls Date.now() again for the entity's timestamp field. These values can differ by milliseconds, creating inconsistency between what's hashed and what's stored.
🐛 Proposed fix

```diff
 const offlineMessage = offlineMessageRepository.create({
     recipientPublicKey: targetId,
     senderPublicKey: senderId,
     messageHash,
     encryptedContent: message,
     signature: Buffer.from(signature).toString("base64"),
     // REVIEW: PR Fix #9 - timestamp is string type to match TypeORM bigint behavior
-    timestamp: Date.now().toString(),
+    timestamp: timestamp.toString(),
     status: "pending",
 })
```

📝 Committable suggestion
```ts
const timestamp = Date.now()
const messageContent = JSON.stringify({
    message, // Keys in alphabetical order
    senderId,
    targetId,
    timestamp,
})
const messageHash = Hashing.sha256(messageContent)

// NOTE: Future improvement - will be replaced with sender signature verification once client-side signing is implemented
// Current: Sign with node's private key for integrity (not authentication)
// REVIEW: PR Fix #14 - Add null safety check for private key access (location 2/3)
if (!getSharedState.identity?.ed25519?.privateKey) {
    throw new Error("[Signaling Server] Private key not available for offline message signing")
}

const signature = Cryptography.sign(messageHash, getSharedState.identity.ed25519.privateKey)

const offlineMessage = offlineMessageRepository.create({
    recipientPublicKey: targetId,
    senderPublicKey: senderId,
    messageHash,
    encryptedContent: message,
    signature: Buffer.from(signature).toString("base64"),
    // REVIEW: PR Fix #9 - timestamp is string type to match TypeORM bigint behavior
    timestamp: timestamp.toString(),
    status: "pending",
})
```
🤖 Prompt for AI Agents
In `@src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts`
around lines 706 - 731, The code computes a timestamp and includes it in
messageContent before computing messageHash, but then calls Date.now() again
when building the offlineMessage entity causing a potential millisecond
mismatch; change the offlineMessage creation to reuse the earlier timestamp
variable (converted to string to match TypeORM bigint behavior) so the stored
offlineMessage.timestamp exactly matches the timestamp used in
messageContent/messageHash (referencing variables messageContent, timestamp,
messageHash and the offlineMessageRepository.create call).
```ts
const options: FindManyOptions<L2PSMempoolTx> = {
    where: {
        l2ps_uid: l2psUid,
        status: "processed", // Only include successfully processed transactions
    },
```
Consolidated hash ignores executed/batched transactions.
getHashForL2PS filters only status: "processed". After execution, status changes to "executed", so those txs drop out of the hash, which can break validator relay determinism. Consider including all non-failed/non-confirmed statuses (e.g., processed/executed/batched).
✅ Suggested fix

```diff
-    where: {
-        l2ps_uid: l2psUid,
-        status: "processed", // Only include successfully processed transactions
-    },
+    where: {
+        l2ps_uid: l2psUid,
+        status: In([
+            L2PS_STATUS.PROCESSED,
+            L2PS_STATUS.EXECUTED,
+            L2PS_STATUS.BATCHED,
+        ]),
+    },
```

📝 Committable suggestion
```typescript
const options: FindManyOptions<L2PSMempoolTx> = {
    where: {
        l2ps_uid: l2psUid,
        status: In([
            L2PS_STATUS.PROCESSED,
            L2PS_STATUS.EXECUTED,
            L2PS_STATUS.BATCHED,
        ]),
    },
```
🤖 Prompt for AI Agents
In `@src/libs/blockchain/l2ps_mempool.ts` around lines 344 - 348, getHashForL2PS
currently restricts the query to L2PSMempoolTx records with status: "processed",
which omits transactions that later become "executed" or "batched" and breaks
deterministic hashing; update the FindManyOptions where-clause in getHashForL2PS
to include all non-failed/non-confirmed statuses (e.g.,
["processed","executed","batched"] or equivalent predicate) so the consolidated
hash uses every relevant L2PSMempoolTx for the given l2ps_uid.
…down and mempool transaction saving.
This PR modifies Files that will be reverted:
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/libs/blockchain/l2ps_mempool.ts`:
- Around lines 265-268: `Number(result.max_seq)` can produce NaN, which then
propagates. Validate the numeric conversion (e.g., `const parsed =
Number(result?.max_seq)`, falling back to -1 when `!Number.isFinite(parsed)`)
and return the validated value + 1, ensuring that null, undefined, and
non-numeric strings all fall back to -1 before adding 1.
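A minimal sketch of that guard (the function name and standalone shape are hypothetical; the real code works against a query result inside the mempool class):

```typescript
// Sketch of the guarded conversion: anything that is not a finite number
// (null, undefined, corrupted strings) is treated as "no rows yet" (-1)
// before incrementing, so NaN can never propagate into sequence numbers.
function nextSequence(maxSeq: string | number | null | undefined): number {
    const parsed = Number(maxSeq)
    const valid =
        maxSeq !== null && maxSeq !== undefined && Number.isFinite(parsed)
    return (valid ? parsed : -1) + 1
}
```

Note the explicit null check: `Number(null)` is `0`, so relying on `Number.isFinite` alone would silently treat a missing row as sequence 0 instead of starting fresh.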
🧹 Nitpick comments (4)
src/libs/blockchain/l2ps_mempool.ts (4)
195-212: Remove unnecessary `any` cast for private method access. Line 197 uses `(this as any).getNextSequenceNumber(...)`, which bypasses type checking. Since `getNextSequenceNumber` is a private static method of the same class, you can call it directly using the class name.

♻️ Suggested fix

```diff
- const sequenceNumber = await (this as any).getNextSequenceNumber(l2psUid, transactionalEntityManager)
+ const sequenceNumber = await L2PSMempool.getNextSequenceNumber(l2psUid, transactionalEntityManager)
```
310-310: Add non-null assertion after `ensureInitialized()` for type safety. After `ensureInitialized()` completes, `this.repo` is guaranteed to be non-null, but TypeScript doesn't infer this. Add `!` to `this.repo` calls to make the invariant explicit and avoid potential type errors. This pattern applies to lines 310, 328, 378, 427, 461, 507, 553, 588, 609, 636, 667, 684, 701, 727, 768, 771, and 784.

♻️ Example fix

```diff
- return await this.repo.find(options)
+ return await this.repo!.find(options)
```
432-432: Inconsistent null handling for `result.affected`. Line 432 accesses `result.affected > 0` directly, but `affected` can be `undefined` according to TypeORM's `UpdateResult` type. Line 470 correctly uses `(result.affected ?? 0) > 0`. Standardize across all methods for consistency.

♻️ Suggested fix

```diff
- const updated = result.affected > 0
+ const updated = (result.affected ?? 0) > 0
```

Also apply to line 512:

```diff
- const updated = result.affected || 0
+ const updated = result.affected ?? 0
```
778-794: Add type annotations to `reduce` accumulators. The `reduce` callbacks use an empty object `{}` as the initial value without a type annotation, resulting in an implicit `any` accumulator. This could cause type inference issues.

♻️ Suggested fix

```diff
- const transactionsByUID = byUID.reduce((acc, row) => {
+ const transactionsByUID = byUID.reduce<Record<string, number>>((acc, row) => {
      acc[row.l2ps_uid] = Number.parseInt(row.count, 10)
      return acc
  }, {})
  // ...
- const transactionsByStatus = byStatus.reduce((acc, row) => {
+ const transactionsByStatus = byStatus.reduce<Record<string, number>>((acc, row) => {
      acc[row.status] = Number.parseInt(row.count, 10)
      return acc
  }, {})
```
```typescript
// result.max_seq may be a string from BIGINT cast, so we use Number() or BigInt()
// We return Number since we expect sequence to stay within JS precision (2^53 - 1)
const maxSeq = result?.max_seq !== null && result?.max_seq !== undefined ? Number(result.max_seq) : -1
return maxSeq + 1
```
Handle potential NaN from malformed sequence numbers.
If result.max_seq contains a non-numeric string (e.g., corrupted data), Number(result.max_seq) returns NaN, causing NaN + 1 to propagate as NaN. Consider adding a validation check.
🛡️ Suggested fix

```diff
- const maxSeq = result?.max_seq !== null && result?.max_seq !== undefined ? Number(result.max_seq) : -1
- return maxSeq + 1
+ const rawMaxSeq = result?.max_seq
+ const maxSeq = rawMaxSeq !== null && rawMaxSeq !== undefined ? Number(rawMaxSeq) : -1
+ if (Number.isNaN(maxSeq)) {
+     log.warning(`[L2PS Mempool] Invalid sequence number in DB for ${l2psUid}: ${rawMaxSeq}, falling back to 0`)
+     return 0
+ }
+ return maxSeq + 1
```

📝 Committable suggestion
```typescript
// result.max_seq may be a string from BIGINT cast, so we use Number() or BigInt()
// We return Number since we expect sequence to stay within JS precision (2^53 - 1)
const rawMaxSeq = result?.max_seq
const maxSeq = rawMaxSeq !== null && rawMaxSeq !== undefined ? Number(rawMaxSeq) : -1
if (Number.isNaN(maxSeq)) {
    log.warning(`[L2PS Mempool] Invalid sequence number in DB for ${l2psUid}: ${rawMaxSeq}, falling back to 0`)
    return 0
}
return maxSeq + 1
```
🤖 Prompt for AI Agents
In `@src/libs/blockchain/l2ps_mempool.ts` around lines 265-268:
`Number(result.max_seq)` can produce NaN, which then propagates. Update the
logic around `result.max_seq` and the `maxSeq` local to validate the numeric
conversion (e.g., `const parsed = Number(result?.max_seq)`, falling back to -1
when `!Number.isFinite(parsed)`) and return the validated value + 1, ensuring
that null, undefined, and non-numeric strings all fall back to -1 before
adding 1.


User description
L2PS Reworking
PR Type
Enhancement
Description
Complete L2PS (Layer 2 Private Subnet) and DTR (Distributed Transaction Routing) Implementation
Core Features Implemented:
L2PS Network Management: Refactored to singleton pattern with lazy initialization, transaction encryption/decryption, and signature verification using `UnifiedCrypto`
L2PS Mempool Manager: New dedicated mempool for L2PS transactions with duplicate detection, hash generation, and comprehensive statistics
Validator Hash Service: Periodic L2PS hash generation and relay to validators every 5 seconds with reentrancy protection
DTR Transaction Routing: Non-validator nodes relay transactions to validators with parallel relay (5 validator concurrency limit) and retry mechanism
L2PS Concurrent Sync: Peer discovery and mempool synchronization across L2PS participants with incremental sync and batch duplicate detection
Instant Messaging Enhancement: Offline message storage with DoS protection, blockchain integration, and nonce-based replay prevention using mutex-based thread-safe operations
Validator Consensus: L2PS hash mapping manager for validators with atomic upsert operations and content-blind privacy model
Background Services: DTR relay retry service with validator caching optimization and L2PS hash generation service with graceful shutdown
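The reentrancy protection mentioned for the hash service can be illustrated with a generic sketch (this is not the actual `L2PSHashService` code; class and method names are hypothetical): a busy flag makes an interval-driven job skip a tick instead of overlapping a still-running cycle.

```typescript
// Generic sketch of an interval job with a reentrancy guard: if the previous
// cycle is still running when the timer fires again, the new tick is skipped.
class PeriodicJob {
    private running = false
    private timer?: ReturnType<typeof setInterval>
    public skipped = 0

    constructor(private readonly work: () => Promise<void>) {}

    start(intervalMs: number): void {
        this.timer = setInterval(() => void this.tick(), intervalMs)
    }

    async tick(): Promise<void> {
        if (this.running) {
            this.skipped++ // reentrancy guard: never overlap cycles
            return
        }
        this.running = true
        try {
            await this.work()
        } finally {
            this.running = false
        }
    }

    stop(): void {
        if (this.timer) clearInterval(this.timer)
    }
}
```

With a 5-second interval, a hash-generation cycle that occasionally takes longer than 5 seconds simply causes skipped ticks rather than concurrent runs against the same mempool.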
Database & Entity Changes:
New `L2PSMempool` entity with composite indexing for efficient transaction querying
New `L2PSHashes` entity for validator consensus with UID-to-hash mappings
New `OfflineMessages` entity for instant messaging with status tracking
Updated `GCRSubnetsTxs` entity to use `L2PSTransaction` type
Datasource refactored to singleton pattern
Integration Points:
L2PS transaction handler with mempool integration and signature verification
NodeCall endpoints for L2PS participation queries, mempool info, and transaction retrieval
Concurrent L2PS sync during blockchain synchronization
DTR relay handler with validator status verification
Shared state extensions for DTR caching and L2PS participation tracking
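The parallel relay described above (fan-out to validators with a concurrency limit of 5, tolerating per-validator failures) can be sketched like this. The `relayTx` callback and the return shape are hypothetical, not the node's actual API:

```typescript
// Fan a transaction out to validators in chunks of `concurrency`, using
// Promise.allSettled so one failing validator does not abort the batch.
async function relayToValidators(
    validators: string[],
    relayTx: (validator: string) => Promise<void>,
    concurrency = 5,
): Promise<{ ok: number; failed: number }> {
    let ok = 0
    let failed = 0
    for (let i = 0; i < validators.length; i += concurrency) {
        const chunk = validators.slice(i, i + concurrency)
        const results = await Promise.allSettled(chunk.map(v => relayTx(v)))
        for (const r of results) {
            if (r.status === "fulfilled") ok++
            else failed++
        }
    }
    return { ok, failed }
}
```

`Promise.allSettled` (rather than `Promise.all`) is what makes the per-validator failure tolerance work: rejected relays are counted instead of short-circuiting the whole batch, which is what a retry service can then pick up.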
Infrastructure:
Added `async-mutex` dependency for synchronization primitives
Updated `@kynesyslabs/demosdk` to ^2.2.71
Comprehensive documentation and implementation guides for L2PS phases and DTR architecture
Development guidelines and onboarding documentation for future sessions
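The `async-mutex` package provides the serialization primitive used for the mutex-based operations above (its real API is `new Mutex()` plus `runExclusive`). As a way of seeing what that primitive guarantees, here is a toy stand-in, not the library itself, that chains critical sections on a promise so they never interleave:

```typescript
// Toy stand-in for async-mutex's Mutex: critical sections are chained on a
// promise tail, so the next section only starts after the previous settles.
class TinyMutex {
    private tail: Promise<unknown> = Promise.resolve()

    runExclusive<T>(fn: () => Promise<T>): Promise<T> {
        const run = this.tail.then(fn, fn)
        this.tail = run.catch(() => undefined) // keep the chain alive on errors
        return run
    }
}
```

In production code the real `async-mutex` is preferable (it adds cancellation, timeouts, and semaphores), but the interleaving guarantee is the same: two `runExclusive` sections never overlap.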
Diagram Walkthrough
File Walkthrough
21 files
parallelNetworks.ts
L2PS Network Management Refactored to Singleton Pattern
src/libs/l2ps/parallelNetworks.ts
`Subnet` class to new `ParallelNetworks` singleton managing multiple L2PS networks
conditions during `loadL2PS()`
signature verification using `UnifiedCrypto`
`L2PSNodeConfig` interface for configuration management with file-based key loading and validation
`processL2PSTransaction()` for mempool integration
signalingServer.ts
Instant Messaging with Offline Storage and Nonce Management
src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts
uniqueness and replay prevention
per sender with `MAX_OFFLINE_MESSAGES_PER_SENDER`)
mandatory audit trail consistency
transactional semantics and WebSocket state checking
atomic operations via `Mutex`
l2ps_mempool.ts
L2PS Mempool Manager with Hash Generation
src/libs/blockchain/l2ps_mempool.ts
locking for race condition prevention
deterministic ordering for validator relay
hashes
comprehensive statistics
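The deterministic-ordering requirement above can be illustrated with a sketch (field names hypothetical, not the actual entity shape): sort by sequence number before hashing, so every node derives the same consolidated digest for the same transaction set regardless of local insertion order.

```typescript
import { createHash } from "node:crypto"

interface MempoolTx {
    sequence: number
    hash: string
}

// Sort deterministically by sequence, then hash the ordered tx hashes;
// insertion order on a given node no longer affects the digest.
function consolidatedHash(txs: MempoolTx[]): string {
    const ordered = [...txs].sort((a, b) => a.sequence - b.sequence)
    const digest = createHash("sha256")
    for (const tx of ordered) digest.update(tx.hash)
    return digest.digest("hex")
}
```

This is the property the validator relay depends on: two participants holding the same set of transactions report the same hash, without the validator ever seeing transaction contents.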
L2PSHashService.ts
L2PS Hash Generation and Validator Relay Service
src/libs/l2ps/L2PSHashService.ts
every 5 seconds
generation cycles
DemosSDK instance for efficiency instead of creating new instances per cycle
sequential fallback
support
endpointHandlers.ts
DTR Transaction Routing and L2PS Hash Update Handling
src/libs/network/endpointHandlers.ts
nodes to relay transactions to validators
using `Promise.allSettled()`
`handleL2PSHashUpdate()` handler for processing L2PS hash updates from other nodes
consensus
relayRetryService.ts
DTR Relay Retry Service with Validator Optimization
src/libs/network/dtr/relayRetryService.ts
non-validator nodes to validators
number changes)
indefinite hanging
selection
cache eviction
L2PSConcurrentSync.ts
L2PS Concurrent Sync and Peer Discovery
src/libs/l2ps/L2PSConcurrentSync.ts
across peers
graceful failure handling
redundant transfers
repository access checks
generation for muid collision prevention
l2ps_hashes.ts
L2PS Hash Mapping Manager for Validator Consensus
src/libs/blockchain/l2ps_hashes.ts
consensus
concurrent updates
monitoring
validators
GCRSubnetsTxs.ts
Update GCR Subnet Transactions Type Definition
src/model/entities/GCRv2/GCRSubnetsTxs.ts
`tx_data` column type from `EncryptedTransaction` to `L2PSTransaction`
handleL2PS.ts
L2PS transaction handler refactoring with mempool integration
src/libs/network/routines/transactions/handleL2PS.ts
error handling for nested data structures
`original_hash` field
payload validation
and L2PS UID tracking
manageNodeCall.ts
DTR relay and L2PS mempool synchronization endpoints
src/libs/network/manageNodeCall.ts
`RELAY_TX` case) with validator status verification and transaction validation
`getL2PSParticipationById`, `getL2PSMempoolInfo`, `getL2PSTransactions` operations and L2PS mempool queries
mempool insertion
Sync.ts
Concurrent L2PS sync integration with blockchain synchronization
src/libs/blockchain/routines/Sync.ts
`discoverL2PSParticipants()`
download via `syncL2PSWithPeer()`
`exchangeL2PSParticipation()`
blockchain sync performance
index.ts
Background service initialization for DTR and L2PS
src/index.ts
check
and L2PS services
checking
transaction.ts
Transaction class constructor refactoring for flexibility
src/libs/blockchain/transaction.ts
`Transaction` class constructor to accept optional partial data for flexible initialization
assignment
`from_ed25519_address` moved to beginning
`Object.assign()` for cleaner property setup
L2PSMempool.ts
L2PS mempool entity with composite indexing
src/model/entities/L2PSMempool.ts
JSONB support
`l2ps_uid` with `timestamp` and `status` for efficient querying
preservation and entity purpose
original hash tracking
datasource.ts
Datasource refactoring to singleton pattern
src/model/datasource.ts
singleton pattern
initialization
`OfflineMessage` entity to entities array
L2PSHashes.ts
L2PS hash storage entity for validator consensus
src/model/entities/L2PSHashes.ts
mappings for validators
`l2ps_uid` with supporting columns for hash, transaction count, and timestamps
model
detection
OfflineMessages.ts
Offline messages entity for instant messaging
src/model/entities/OfflineMessages.ts
recipient and sender keys
precision loss
sharedState.ts
Shared state extensions for DTR and L2PS
src/utilities/sharedState.ts
validityDataCacheMap for DTR retry mechanism storing ValidityData by transaction hash
`l2psJoinedUids` array to track L2PS networks the node participates in
`ValidityData` type from demosdk for DTR caching support
mempool_v2.ts
Mempool transaction removal for DTR relay
src/libs/blockchain/mempool_v2.ts
`removeTransaction()` static method for DTR relay success cleanup
validator relay
isValidator.ts
Validator status detection utility function
src/libs/consensus/v2/routines/isValidator.ts
for next block
`getCommonValidatorSeed()` and `getShard()` logic for validator determination
1 file
handleStep.ts
L2PS import path update to deprecated module
src/libs/network/routines/transactions/demosWork/handleStep.ts
location
`parallelNetworks` to `parallelNetworks_deprecated`
3 files
pay.ts
Code formatting consistency improvements
src/features/multichain/routines/executors/pay.ts
calls
server_rpc.ts
Minor formatting fix for response object
src/libs/network/server_rpc.ts
consistency
validateUint8Array.ts
Code style consistency improvementssrc/utilities/validateUint8Array.ts
19 files
L2PS_DTR_IMPLEMENTATION.md
L2PS and DTR implementation documentation
src/libs/l2ps/L2PS_DTR_IMPLEMENTATION.md
integration
L2PS_PHASES.md
L2PS implementation phases and action items
L2PS_PHASES.md
criteria
plan_of_action_for_offline_messages.md
Offline messaging and L2PS quantum-safe encryption plan
src/features/InstantMessagingProtocol/signalingServer/plan_of_action_for_offline_messages.md
README.md
DTR implementation overview and architecture guide
dtr_implementation/README.md
architecture
L2PS_TESTING.md
L2PS Testing and Validation Guide Creation
L2PS_TESTING.md
implementation
phase-by-phase testing
validation procedures
runtime validation
session_2025_01_31_l2ps_completion.md
Session Summary for L2PS Implementation Completion
.serena/memories/session_2025_01_31_l2ps_completion.md
finished (100%)
endpoints, concurrent sync, and blockchain integration
and non-blocking operations
validation
DTR_MINIMAL_IMPLEMENTATION.md
DTR Minimal Implementation Plan and Strategy
dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md
strategy leveraging existing infrastructure
`endpointHandlers.ts` with ~20 lines of DTR logic
with block-aware optimization
production implementation
l2ps_onboarding_guide.md
L2PS Onboarding Guide for Future Sessions
.serena/memories/l2ps_onboarding_guide.md
privacy model
layer) and all implementation phases
concepts for future LLM sessions
checklist
l2ps_architecture.md
L2PS Architecture Documentation with Diagrams
.serena/memories/l2ps_architecture.md
component interactions
and validator hash updates
characteristics
l2ps_implementation_status.md
L2PS Implementation Status - All Phases Complete
.serena/memories/l2ps_implementation_status.md
(NodeCall endpoints), 3c-2 (concurrent sync), and 3c-3 (blockchain
integration)
production code
available
l2ps_remaining_work.md
L2PS Remaining Work Documentation
.serena/memories/l2ps_remaining_work.md
completed)
implementation details
and sync service
priorities
l2ps_code_patterns.md
L2PS Code Patterns and Conventions Reference
.serena/memories/l2ps_code_patterns.md
implementation
database patterns
ParallelNetworks, and PeerManager
development
development_guidelines.md
Development Guidelines and Best Practices
.serena/memories/development_guidelines.md
architecture, and best practices
standards
and use linting for validation
development workflow summary
codebase_structure.md
Codebase Structure and Organization Reference
.serena/memories/codebase_structure.md
and feature modules
documentation locations
`@/` prefix and naming conventions for repository
suggested_commands.md
Suggested Commands Reference Guide
.serena/memories/suggested_commands.md
testing, and node operations
code_style_conventions.md
Code Style and Conventions Reference
.serena/memories/code_style_conventions.md
PascalCase for classes)
and trailing commas
`@/` path aliases instead of relative paths
task_completion_checklist.md
Task Completion Checklist and Validation Guide
.serena/memories/task_completion_checklist.md
integration
`bun run lint:fix` execution before marking tasks complete
considerations
validator_status_minimal.md
Validator Status Minimal Implementation Approach
dtr_implementation/validator_status_minimal.md
single function
`getShard`, `getCommonValidatorSeed`) with zero modifications
`isValidatorForNextBlock()` function and optional `getValidatorsForRelay()` helper
effectively
tech_stack.md
Tech Stack and Dependencies Reference
.serena/memories/tech_stack.md
dependencies
frameworks
Docker and PostgreSQL
settings
3 files
extensions.json
VSCode extension recommendation addition
.vscode/extensions.json
`nur-publisher.hypercomments-vscode` extension to recommended extensions list
project.yml
Serena Project Configuration File
.serena/project.yml
package.json
Package Configuration and Dependency Updates
package.json
`lint:fix` command to exclude `local_tests/**` directory from linting
`@kynesyslabs/demosdk` from ^2.2.70 to ^2.2.71
`async-mutex` ^0.5.0 as new dependency for synchronization primitives
4 files
Summary by CodeRabbit
New Features
Documentation
Chores