Conversation


@tcsenpai tcsenpai commented Nov 8, 2025

User description

L2PS Reworking


PR Type

Enhancement


Description

Complete L2PS (Layer 2 Private Subnet) and DTR (Distributed Transaction Routing) Implementation

Core Features Implemented:

  • L2PS Network Management: Refactored to singleton pattern with lazy initialization, transaction encryption/decryption, and signature verification using UnifiedCrypto

  • L2PS Mempool Manager: New dedicated mempool for L2PS transactions with duplicate detection, hash generation, and comprehensive statistics

  • Validator Hash Service: Periodic L2PS hash generation and relay to validators every 5 seconds with reentrancy protection

  • DTR Transaction Routing: Non-validator nodes relay transactions to validators in parallel (concurrency limit of 5 validators) with a retry mechanism

  • L2PS Concurrent Sync: Peer discovery and mempool synchronization across L2PS participants with incremental sync and batch duplicate detection

  • Instant Messaging Enhancement: Offline message storage with DoS protection, blockchain integration, and nonce-based replay prevention via mutex-guarded, thread-safe operations

  • Validator Consensus: L2PS hash mapping manager for validators with atomic upsert operations and content-blind privacy model

  • Background Services: DTR relay retry service with validator caching optimization and L2PS hash generation service with graceful shutdown

Database & Entity Changes:

  • New L2PSMempool entity with composite indexing for efficient transaction querying

  • New L2PSHashes entity for validator consensus with UID-to-hash mappings

  • New OfflineMessages entity for instant messaging with status tracking

  • Updated GCRSubnetsTxs entity to use L2PSTransaction type

  • Datasource refactored to singleton pattern

Integration Points:

  • L2PS transaction handler with mempool integration and signature verification

  • NodeCall endpoints for L2PS participation queries, mempool info, and transaction retrieval

  • Concurrent L2PS sync during blockchain synchronization

  • DTR relay handler with validator status verification

  • Shared state extensions for DTR caching and L2PS participation tracking

Infrastructure:

  • Added async-mutex dependency for synchronization primitives

  • Updated @kynesyslabs/demosdk to ^2.2.71

  • Comprehensive documentation and implementation guides for L2PS phases and DTR architecture

  • Development guidelines and onboarding documentation for future sessions


Diagram Walkthrough

flowchart LR
  A["Non-Validator Nodes"] -- "DTR Relay" --> B["Validators"]
  C["L2PS Participants"] -- "Transaction Submit" --> D["L2PS Mempool"]
  D -- "Hash Generation" --> E["L2PS Hash Service"]
  E -- "Relay Hashes" --> B
  F["Peer Nodes"] -- "Concurrent Sync" --> C
  G["Instant Messaging"] -- "Offline Storage" --> H["OfflineMessages Entity"]
  H -- "Delivery on Reconnect" --> I["Message Recipients"]
  D -- "Duplicate Detection" --> J["Original Hash Tracking"]
  B -- "Consensus" --> K["L2PS Hashes Entity"]

File Walkthrough

Relevant files
Enhancement
21 files
parallelNetworks.ts
L2PS Network Management Refactored to Singleton Pattern   

src/libs/l2ps/parallelNetworks.ts

  • Complete rewrite from old Subnet class to new ParallelNetworks
    singleton managing multiple L2PS networks
  • Implements lazy initialization with promise locking to prevent race
    conditions during loadL2PS()
  • Adds comprehensive L2PS transaction encryption/decryption with
    signature verification using UnifiedCrypto
  • Introduces L2PSNodeConfig interface for configuration management with
    file-based key loading and validation
  • Adds transaction processing pipeline with processL2PSTransaction() for
    mempool integration
+379/-216
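The lazy-initialization-with-promise-locking pattern described above can be sketched as follows. This is an illustrative reduction, not the actual `ParallelNetworks` API: concurrent callers of `loadL2PS()` share one in-flight promise, so the expensive load runs exactly once even under a race.

```typescript
// Sketch of singleton + lazy init + promise lock. Class and method names
// are hypothetical stand-ins for the real ParallelNetworks implementation.
class ParallelNetworksSketch {
    private static instance: ParallelNetworksSketch | null = null
    private loadPromise: Promise<void> | null = null
    private loaded = false
    loadCount = 0 // exposed for illustration only

    static getInstance(): ParallelNetworksSketch {
        if (!this.instance) this.instance = new ParallelNetworksSketch()
        return this.instance
    }

    // Concurrent callers share the same in-flight promise, so the
    // expensive load body executes exactly once.
    async loadL2PS(): Promise<void> {
        if (this.loaded) return
        if (!this.loadPromise) {
            this.loadPromise = this.doLoad()
        }
        return this.loadPromise
    }

    private async doLoad(): Promise<void> {
        this.loadCount++
        await new Promise(r => setTimeout(r, 10)) // stand-in for config/key loading
        this.loaded = true
    }
}
```

Without the `loadPromise` guard, two callers racing past the `loaded` check would each start their own load; with it, the second caller simply awaits the first caller's promise.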
signalingServer.ts
Instant Messaging with Offline Storage and Nonce Management

src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts

  • Adds mutex-based thread-safe nonce management for transaction
    uniqueness and replay prevention
  • Implements offline message storage with DoS protection (rate limiting
    per sender with MAX_OFFLINE_MESSAGES_PER_SENDER)
  • Adds blockchain storage for both online and offline messages with
    mandatory audit trail consistency
  • Implements offline message delivery on peer reconnection with
    transactional semantics and WebSocket state checking
  • Adds per-sender nonce counter and offline message count tracking with
    atomic operations via Mutex
+303/-13
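The mutex-guarded nonce counter described above can be sketched like this. The real code uses the async-mutex package; here a minimal promise-chain lock stands in so the sketch runs without dependencies, and the function names are illustrative.

```typescript
// Minimal promise-chain mutex (stand-in for async-mutex's Mutex.runExclusive).
class SimpleMutex {
    private tail: Promise<void> = Promise.resolve()
    runExclusive<T>(fn: () => Promise<T> | T): Promise<T> {
        const result = this.tail.then(fn)
        // Next caller waits for this one, whether it resolved or rejected.
        this.tail = result.then(() => undefined, () => undefined)
        return result
    }
}

const nonceMutex = new SimpleMutex()
const nonces = new Map<string, number>()

// Atomically read-then-increment a sender's nonce so concurrent messages
// can never reuse a value (replay prevention), even with awaits inside
// the critical section.
async function nextNonce(sender: string): Promise<number> {
    return nonceMutex.runExclusive(async () => {
        const n = nonces.get(sender) ?? 0
        await new Promise(r => setTimeout(r, 1)) // simulated async work mid-section
        nonces.set(sender, n + 1)
        return n
    })
}
```

Without the mutex, the `await` between the read and the write would let two concurrent callers observe the same counter value and issue duplicate nonces.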
l2ps_mempool.ts
L2PS Mempool Manager with Hash Generation                               

src/libs/blockchain/l2ps_mempool.ts

  • New L2PS-specific mempool manager with lazy initialization and promise
    locking for race condition prevention
  • Implements consolidated hash generation for L2PS networks with
    deterministic ordering for validator relay
  • Adds duplicate detection via both original and encrypted transaction
    hashes
  • Provides transaction status tracking, cleanup routines, and
    comprehensive statistics
  • Includes block number validation and error handling for edge cases
+477/-0 
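The deterministic-ordering idea behind the consolidated hash can be sketched as follows: transaction hashes are sorted before digesting, so every node derives the same consolidated hash regardless of mempool insertion order. A toy FNV-1a hash stands in for the real digest function, and the names are illustrative.

```typescript
// Toy 32-bit FNV-1a hash (stand-in for the real digest).
function fnv1a(input: string): string {
    let h = 0x811c9dc5
    for (let i = 0; i < input.length; i++) {
        h ^= input.charCodeAt(i)
        h = Math.imul(h, 0x01000193) >>> 0
    }
    return h.toString(16).padStart(8, "0")
}

// Sorting first is the key step: it makes the result independent of the
// order in which transactions entered the mempool.
function consolidatedHash(txHashes: string[]): string {
    return fnv1a([...txHashes].sort().join("|"))
}
```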
L2PSHashService.ts
L2PS Hash Generation and Validator Relay Service                 

src/libs/l2ps/L2PSHashService.ts

  • New service for periodic L2PS hash generation and relay to validators
    every 5 seconds
  • Implements reentrancy protection to prevent overlapping hash
    generation cycles
  • Reuses Demos SDK instance for efficiency instead of creating new
    instances per cycle
  • Relays hash updates to validators via DTR infrastructure with
    sequential fallback
  • Includes comprehensive statistics tracking and graceful shutdown
    support
+410/-0 
endpointHandlers.ts
DTR Transaction Routing and L2PS Hash Update Handling       

src/libs/network/endpointHandlers.ts

  • Adds DTR (Distributed Transaction Routing) logic for non-validator
    nodes to relay transactions to validators
  • Implements parallel relay with concurrency limiting (5 validators)
    using Promise.allSettled()
  • Adds handleL2PSHashUpdate() handler for processing L2PS hash updates
    from other nodes
  • Validates L2PS hash payload structure and stores hashes for validator
    consensus
  • Adds ValidityData caching for retry service and fallback local storage
+199/-34
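The parallel relay with a concurrency cap can be sketched like this: validators are processed in chunks of 5 via `Promise.allSettled()`, so one failed relay never rejects the whole batch. The function names are hypothetical stand-ins.

```typescript
// Sketch of chunked parallel relay with Promise.allSettled. At most
// RELAY_CONCURRENCY relay calls are in flight at once, and per-validator
// failures are counted rather than thrown.
const RELAY_CONCURRENCY = 5

async function relayToValidators(
    validators: string[],
    relay: (v: string) => Promise<void>,
): Promise<{ succeeded: number; failed: number }> {
    let succeeded = 0
    let failed = 0
    for (let i = 0; i < validators.length; i += RELAY_CONCURRENCY) {
        const chunk = validators.slice(i, i + RELAY_CONCURRENCY)
        const results = await Promise.allSettled(chunk.map(relay))
        for (const r of results) r.status === "fulfilled" ? succeeded++ : failed++
    }
    return { succeeded, failed }
}
```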
relayRetryService.ts
DTR Relay Retry Service with Validator Optimization           

src/libs/network/dtr/relayRetryService.ts

  • New background service for retrying failed transaction relays from
    non-validator nodes to validators
  • Implements optimized validator caching (only recalculates when block
    number changes)
  • Adds timeout protection (5 seconds) for validator calls to prevent
    indefinite hanging
  • Implements Fisher-Yates shuffle for truly uniform random validator
    selection
  • Includes cleanup routines for stale retry entries and ValidityData
    cache eviction
+343/-0 
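The Fisher-Yates shuffle referenced above is the standard way to get a uniformly random permutation, unlike the common `sort(() => Math.random() - 0.5)` anti-pattern, which is biased. A self-contained sketch:

```typescript
// Fisher-Yates shuffle: walk from the end, swapping each element with a
// uniformly chosen earlier (or same) position. Returns a new array and
// leaves the input untouched.
function fisherYatesShuffle<T>(items: T[]): T[] {
    const a = [...items]
    for (let i = a.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1))
        ;[a[i], a[j]] = [a[j], a[i]]
    }
    return a
}
```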
L2PSConcurrentSync.ts
L2PS Concurrent Sync and Peer Discovery                                   

src/libs/l2ps/L2PSConcurrentSync.ts

  • New module for L2PS participant discovery and mempool synchronization
    across peers
  • Implements parallel peer queries for L2PS participation discovery with
    graceful failure handling
  • Adds incremental sync with timestamp-based filtering to avoid
    redundant transfers
  • Implements batch duplicate detection for efficiency and safe
    repository access checks
  • Includes participation exchange broadcast and randomized UUID
    generation for muid collision prevention
+303/-0 
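The incremental sync with batch duplicate detection reduces to two filters: only entries newer than the last synced timestamp are considered, and already-known hashes are screened out in one Set-based pass instead of per-row lookups. A sketch with illustrative types and names:

```typescript
// Sketch of incremental sync filtering. Entry shape and function name are
// hypothetical, not the module's actual types.
interface SyncEntry {
    hash: string
    timestamp: number
}

function selectNewEntries(
    peerEntries: SyncEntry[],
    lastSyncedAt: number,
    knownHashes: Iterable<string>,
): SyncEntry[] {
    const known = new Set(knownHashes) // one batch structure, O(1) membership
    return peerEntries.filter(
        e => e.timestamp > lastSyncedAt && !known.has(e.hash),
    )
}
```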
l2ps_hashes.ts
L2PS Hash Mapping Manager for Validator Consensus               

src/libs/blockchain/l2ps_hashes.ts

  • New manager for L2PS UID to hash mappings used by validators for
    consensus
  • Implements atomic upsert operations to prevent race conditions from
    concurrent updates
  • Provides hash retrieval, statistics, and pagination support for
    monitoring
  • Stores only hash mappings (content-blind) to preserve privacy for
    validators
  • Includes comprehensive error handling and initialization validation
+237/-0 
GCRSubnetsTxs.ts
Update GCR Subnet Transactions Type Definition                     

src/model/entities/GCRv2/GCRSubnetsTxs.ts

  • Updates tx_data column type from EncryptedTransaction to
    L2PSTransaction
  • Aligns entity with new L2PS transaction type definitions from SDK
+2/-2     
handleL2PS.ts
L2PS transaction handler refactoring with mempool integration

src/libs/network/routines/transactions/handleL2PS.ts

  • Refactored L2PS transaction handler with comprehensive validation and
    error handling for nested data structures
  • Integrated L2PS mempool storage with duplicate detection via
    original_hash field
  • Added transaction decryption with signature verification and encrypted
    payload validation
  • Implemented structured response with encrypted hash, original hash,
    and L2PS UID tracking
+124/-37
manageNodeCall.ts
DTR relay and L2PS mempool synchronization endpoints         

src/libs/network/manageNodeCall.ts

  • Added DTR relay transaction handler (RELAY_TX case) with validator
    status verification and transaction validation
  • Implemented three L2PS NodeCall endpoints: getL2PSParticipationById,
    getL2PSMempoolInfo, getL2PSTransactions
  • Added comprehensive error handling and logging for DTR relay
    operations and L2PS mempool queries
  • Integrated transaction coherence and signature validation before
    mempool insertion
+167/-5 
Sync.ts
Concurrent L2PS sync integration with blockchain synchronization

src/libs/blockchain/routines/Sync.ts

  • Added concurrent L2PS participant discovery during block sync via
    discoverL2PSParticipants()
  • Integrated L2PS mempool synchronization with peers during block
    download via syncL2PSWithPeer()
  • Added L2PS participation exchange with newly discovered peers via
    exchangeL2PSParticipation()
  • All L2PS operations run non-blocking in background to preserve
    blockchain sync performance
+53/-0   
index.ts
Background service initialization for DTR and L2PS             

src/index.ts

  • Added DTR relay retry service initialization with production mode
    check
  • Added L2PS hash generation service startup for participating nodes
  • Implemented graceful shutdown handlers (SIGINT/SIGTERM) for both DTR
    and L2PS services
  • Services initialize after main loop to ensure proper sync status
    checking
+57/-0   
transaction.ts
Transaction class constructor refactoring for flexibility

src/libs/blockchain/transaction.ts

  • Refactored Transaction class constructor to accept optional partial
    data for flexible initialization
  • Changed property declarations to use non-null assertion with definite
    assignment
  • Reordered transaction content fields with from_ed25519_address moved
    to beginning
  • Improved initialization with Object.assign() for cleaner property
    setup
+36/-27 
L2PSMempool.ts
L2PS mempool entity with composite indexing                           

src/model/entities/L2PSMempool.ts

  • Created new TypeORM entity for L2PS mempool transaction storage with
    JSONB support
  • Defined composite indexes on l2ps_uid with timestamp and status for
    efficient querying
  • Added comprehensive JSDoc documentation explaining privacy
    preservation and entity purpose
  • Stores encrypted L2PS transactions separately from main mempool with
    original hash tracking
+72/-0   
datasource.ts
Datasource refactoring to singleton pattern                           

src/model/datasource.ts

  • Refactored datasource initialization from module-level export to
    singleton pattern
  • Moved DataSource configuration into private constructor for lazy
    initialization
  • Added OfflineMessage entity to entities array
  • Removed duplicate entity entries and cleaned up entity list
+26/-29 
L2PSHashes.ts
L2PS hash storage entity for validator consensus                 

src/model/entities/L2PSHashes.ts

  • Created new TypeORM entity for storing L2PS UID to consolidated hash
    mappings for validators
  • Defined primary key on l2ps_uid with supporting columns for hash,
    transaction count, and timestamps
  • Added comprehensive JSDoc explaining validator content-blind consensus
    model
  • Includes block number tracking for consensus ordering and staleness
    detection
+55/-0   
OfflineMessages.ts
Offline messages entity for instant messaging                       

src/model/entities/OfflineMessages.ts

  • Created new TypeORM entity for offline message storage with indexed
    recipient and sender keys
  • Added support for encrypted content storage via JSONB column
  • Defined message status field with three states: pending, sent, failed
  • Includes timestamp tracking with bigint type to prevent JavaScript
    precision loss
+34/-0   
sharedState.ts
Shared state extensions for DTR and L2PS                                 

src/utilities/sharedState.ts

  • Added validityDataCache Map for DTR retry mechanism storing
    ValidityData by transaction hash
  • Added l2psJoinedUids array to track L2PS networks the node
    participates in
  • Imported ValidityData type from demosdk for DTR caching support
+9/-1     
mempool_v2.ts
Mempool transaction removal for DTR relay                               

src/libs/blockchain/mempool_v2.ts

  • Added removeTransaction() static method for DTR relay success cleanup
  • Method removes transactions from mempool by hash after successful
    validator relay
  • Includes logging for removal tracking and error handling
+21/-0   
isValidator.ts
Validator status detection utility function                           

src/libs/consensus/v2/routines/isValidator.ts

  • Created new utility function to determine if current node is validator
    for next block
  • Reuses existing getCommonValidatorSeed() and getShard() logic for
    validator determination
  • Returns boolean with conservative fallback to false on errors
+15/-0   
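The conservative-fallback behavior described above can be sketched as follows. `computeSeed` is a hypothetical stand-in for the `getCommonValidatorSeed()`/`getShard()` logic: on any error the function returns false, so a faulty node relays as a non-validator rather than wrongly acting as one.

```typescript
// Sketch of the conservative fallback: errors in validator determination
// resolve to "not a validator". computeSeed is an illustrative stand-in.
function isValidatorForNextBlock(
    nodeId: string,
    computeSeed: () => string[],
): boolean {
    try {
        return computeSeed().includes(nodeId)
    } catch {
        return false // conservative: never claim validator status on error
    }
}
```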
Miscellaneous
1 file
handleStep.ts
L2PS import path update to deprecated module                         

src/libs/network/routines/transactions/demosWork/handleStep.ts

  • Updated L2PS message import path from deprecated module to new
    location
  • Changed import from parallelNetworks to parallelNetworks_deprecated
+1/-1     
Formatting
3 files
pay.ts
Code formatting consistency improvements                                 

src/features/multichain/routines/executors/pay.ts

  • Standardized semicolon usage in variable declarations and function
    calls
  • Minor formatting consistency improvements
+2/-2     
server_rpc.ts
Minor formatting fix for response object                                 

src/libs/network/server_rpc.ts

  • Added trailing comma to MCP server status response object for
    consistency
+1/-1     
validateUint8Array.ts
Code style consistency improvements                                           

src/utilities/validateUint8Array.ts

  • Changed string quotes from single to double quotes for consistency
  • Removed trailing semicolon from return statement for consistency
+2/-2     
Documentation
19 files
L2PS_DTR_IMPLEMENTATION.md
L2PS and DTR implementation documentation                               

src/libs/l2ps/L2PS_DTR_IMPLEMENTATION.md

  • Comprehensive implementation documentation for L2PS and DTR
    integration
  • Detailed architecture overview with transaction flow diagrams
  • Phase-by-phase implementation status with completed and planned work
  • Privacy model validation and DTR integration points documentation
+630/-0 
L2PS_PHASES.md
L2PS implementation phases and action items                           

L2PS_PHASES.md

  • Detailed actionable implementation steps for L2PS phases 1-3c
  • Complete phase descriptions with code templates and validation
    criteria
  • Implementation status tracking and success metrics
  • File modification summary and completion criteria
+731/-0 
plan_of_action_for_offline_messages.md
Offline messaging and L2PS quantum-safe encryption plan   

src/features/InstantMessagingProtocol/signalingServer/plan_of_action_for_offline_messages.md

  • Comprehensive plan for offline messaging with blockchain integration
  • L2PS ML-KEM-AES quantum-safe encryption architecture documentation
  • Phase 2 planning for L2PS-integrated messaging system
  • Implementation status tracking and future enhancement roadmap
+479/-0 
README.md
DTR implementation overview and architecture guide             

dtr_implementation/README.md

  • Comprehensive DTR (Distributed Transaction Routing) overview and
    architecture
  • Problem statement and two-tier transaction architecture explanation
  • Security advantages and technical benefits documentation
  • DTR flow architecture with validation pipeline and performance metrics
+273/-0 
L2PS_TESTING.md
L2PS Testing and Validation Guide Creation                             

L2PS_TESTING.md

  • Created comprehensive 17-test scenario validation guide for L2PS
    implementation
  • Covers database schema verification, node startup validation, and
    phase-by-phase testing
  • Includes performance testing, error recovery, edge cases, and privacy
    validation procedures
  • Provides completion checklist and known issues to watch for during
    runtime validation
+496/-0 
session_2025_01_31_l2ps_completion.md
Session Summary for L2PS Implementation Completion             

.serena/memories/session_2025_01_31_l2ps_completion.md

  • Documents complete L2PS implementation session with all phases
    finished (100%)
  • Records 4 commits implementing validator hash storage, NodeCall
    endpoints, concurrent sync, and blockchain integration
  • Summarizes technical discoveries including auto-initialization pattern
    and non-blocking operations
  • Provides file organization summary and next steps for runtime
    validation
+385/-0 
DTR_MINIMAL_IMPLEMENTATION.md
DTR Minimal Implementation Plan and Strategy                         

dtr_implementation/DTR_MINIMAL_IMPLEMENTATION.md

  • Outlines minimal DTR (Distributed Transaction Routing) implementation
    strategy leveraging existing infrastructure
  • Describes single-point modification approach in endpointHandlers.ts
    with ~20 lines of DTR logic
  • Details multi-validator retry mechanism and background retry service
    with block-aware optimization
  • Includes enhanced fallback strategy and complete flow diagram for
    production implementation
+354/-0 
l2ps_onboarding_guide.md
L2PS Onboarding Guide for Future Sessions                               

.serena/memories/l2ps_onboarding_guide.md

  • Comprehensive onboarding guide explaining L2PS system architecture and
    privacy model
  • Documents three-tier architecture (participants, validators, sync
    layer) and all implementation phases
  • Provides file organization, key data structures, and important
    concepts for future LLM sessions
  • Includes code flow examples, NodeCall endpoints reference, and testing
    checklist
+395/-0 
l2ps_architecture.md
L2PS Architecture Documentation with Diagrams                       

.serena/memories/l2ps_architecture.md

  • Detailed L2PS system architecture with ASCII diagrams showing
    component interactions
  • Documents data flow for transaction submission, privacy separation,
    and validator hash updates
  • Includes network topology, security model, and performance
    characteristics
  • Provides threat protection analysis and trust boundary definitions
+215/-0 
l2ps_implementation_status.md
L2PS Implementation Status - All Phases Complete                 

.serena/memories/l2ps_implementation_status.md

  • Status report showing all L2PS phases complete (100%) as of 2025-01-31
  • Details implementation of Phase 3b (validator hash storage), 3c-1
    (NodeCall endpoints), 3c-2 (concurrent sync), and 3c-3 (blockchain
    integration)
  • Lists 3 new files created and 4 files modified with ~650 lines of
    production code
  • Notes testing status as pending with comprehensive validation guide
    available
+168/-0 
l2ps_remaining_work.md
L2PS Remaining Work Documentation                                               

.serena/memories/l2ps_remaining_work.md

  • Documents remaining work priorities for L2PS implementation (now
    completed)
  • Outlines 4 priority phases with specific file locations and
    implementation details
  • Provides code examples for validator hash storage, NodeCall endpoints,
    and sync service
  • Includes testing considerations and dependency relationships between
    priorities
+178/-0 
l2ps_code_patterns.md
L2PS Code Patterns and Conventions Reference                         

.serena/memories/l2ps_code_patterns.md

  • Reference guide for L2PS code patterns and conventions used throughout
    implementation
  • Documents file locations, service patterns, NodeCall patterns, and
    database patterns
  • Provides key integration points including shared state,
    ParallelNetworks, and PeerManager
  • Lists important constraints and logging conventions for L2PS
    development
+205/-0 
development_guidelines.md
Development Guidelines and Best Practices                               

.serena/memories/development_guidelines.md

  • Comprehensive development guidelines covering core principles,
    architecture, and best practices
  • Emphasizes maintainability, planning workflow, and code quality
    standards
  • Details critical requirement to never start node during development
    and use linting for validation
  • Includes repository-specific notes, testing guidelines, and
    development workflow summary
+175/-0 
codebase_structure.md
Codebase Structure and Organization Reference                       

.serena/memories/codebase_structure.md

  • Documents complete codebase structure including root directory layout
    and feature modules
  • Details source code organization, configuration files, and
    documentation locations
  • Explains path aliases using @/ prefix and naming conventions for
    repository
  • Provides build output information and ignored directories reference
+145/-0 
suggested_commands.md
Suggested Commands Reference Guide                                             

.serena/memories/suggested_commands.md

  • Reference guide for essential development commands including linting,
    testing, and node operations
  • Documents package management, database operations, and Docker commands
  • Provides standard development workflow and troubleshooting commands
  • Emphasizes critical requirement to never start node during development
+142/-0 
code_style_conventions.md
Code Style and Conventions Reference                                         

.serena/memories/code_style_conventions.md

  • Documents ESLint-enforced naming conventions (camelCase for functions,
    PascalCase for classes)
  • Details code formatting rules including double quotes, no semicolons,
    and trailing commas
  • Specifies import organization with mandatory @/ path aliases instead
    of relative paths
  • Lists TypeScript configuration settings and documentation standards
+117/-0 
task_completion_checklist.md
Task Completion Checklist and Validation Guide                     

.serena/memories/task_completion_checklist.md

  • Comprehensive pre-completion validation checklist for code quality and
    integration
  • Emphasizes mandatory bun run lint:fix execution before marking tasks
    complete
  • Details code quality, testing, integration, and security
    considerations
  • Includes critical warnings against starting node during development
+108/-0 
validator_status_minimal.md
Validator Status Minimal Implementation Approach                 

dtr_implementation/validator_status_minimal.md

  • Minimal implementation approach for validator status checking using
    single function
  • Leverages existing consensus routines (getShard,
    getCommonValidatorSeed) with zero modifications
  • Provides simple isValidatorForNextBlock() function and optional
    getValidatorsForRelay() helper
  • Documents usage pattern and explains why minimal approach works
    effectively
+88/-0   
tech_stack.md
Tech Stack and Dependencies Reference                                       

.serena/memories/tech_stack.md

  • Documents complete tech stack including core technologies and key
    dependencies
  • Details blockchain/crypto libraries, database/ORM, and server/API
    frameworks
  • Lists development tools and infrastructure requirements including
    Docker and PostgreSQL
  • Specifies build configuration with path aliases and TypeScript
    settings
+52/-0   
Configuration changes
3 files
extensions.json
VSCode extension recommendation addition                                 

.vscode/extensions.json

  • Added nur-publisher.hypercomments-vscode extension to recommended
    extensions list
+2/-1     
project.yml
Serena Project Configuration File                                               

.serena/project.yml

  • Project configuration file for Serena development environment
  • Specifies TypeScript as primary language with UTF-8 encoding
  • Enables gitignore-based file filtering and read-write mode
  • Lists all available development tools (excluded_tools is empty)
+84/-0   
package.json
Package Configuration and Dependency Updates                         

package.json

  • Updated lint:fix command to exclude local_tests/** directory from
    linting
  • Upgraded @kynesyslabs/demosdk from ^2.2.70 to ^2.2.71
  • Added async-mutex ^0.5.0 as new dependency for synchronization
    primitives
+3/-2     
Additional files
4 files
l2ps_overview.md +44/-0   
project_purpose.md +26/-0   
settings.json +2/-20   
manageExecution.ts +0/-10   

Summary by CodeRabbit

  • New Features

    • Full L2PS support: privacy-preserving mempool, network loading, concurrent mempool sync, periodic hash generation, batch aggregation, ZK proof tooling, consensus hooks and NodeCall endpoints
    • Offline messaging: persistent offline storage, delivery on connect, DoS controls
    • CLI tools: L2PS load/stress tests, batch/send utilities, test-wallet generator
  • Documentation

    • Extensive L2PS guides, quickstart, testing, architecture, onboarding, and developer checklists
  • Chores

    • Expanded ignore patterns, updated environment template and editor workspace defaults


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 14

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
.gitignore (2)

1-208: ⚠️ Potential issue | 🟡 Minor

Clean up numerous duplicate entries throughout the file.

Beyond the previously flagged duplicates (lines 114–127), many additional duplicates exist:

  • bridges_docs/ or bridges_docs: lines 7, 11, 131
  • data/chain.db: lines 15, 22, 41, 42
  • yarn.lock: lines 38, 48
  • publickey: lines 49, 50
  • .env: lines 26, 110
  • .DS_Store: lines 32, 128
  • .serena-backup: lines 129, 147, 149
  • PR_COMMENTS: lines 130, 150
  • dist: lines 133, 151
  • local_tests: lines 135, 183
  • claudedocs: lines 132, 184
  • temp: lines 166, 186
  • omniprotocol_fixtures_scripts: lines 136, 188
  • STORAGE_PROGRAMS_SPEC.md: lines 172, 187
  • CLAUDE.sync-conflict-20250901-171031-7JPPSQB.md: lines 169, 173
  • src/features/bridges/LiquidityTank_UserGuide.md: lines 163, 182

Consolidate all duplicates by keeping only one entry per pattern. Consider grouping related patterns together for better maintainability.


177-177: ⚠️ Potential issue | 🔴 Critical

Remove overly broad ignore pattern that conflicts with tracked source files.

Line 177 ignores the entire src/features/bridges/ directory, which conflicts with three tracked implementation files:

  • src/features/bridges/bridgeUtils.ts
  • src/features/bridges/bridges.ts
  • src/features/bridges/rubic.ts

Either remove line 177 if bridges code should be tracked, or use more specific patterns (like lines 181–182) to exclude only subdirectories and documentation files while preserving the source files.

🤖 Fix all issues with AI agents
In `@scripts/l2ps-load-test.ts`:
- Around line 226-241: The loop calls buildL2PSTransaction with nonce++, which consumes the nonce even when confirm/broadcast fails. Capture the current value (e.g. const currentNonce = nonce), pass it to buildL2PSTransaction, and increment nonce only after demos.broadcast(...) succeeds (after successCount++). Keep the buildL2PSTransaction, demos.confirm, demos.broadcast, and successCount references intact so the nonce advances only on a successful broadcast.
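The suggested fix pattern can be sketched as follows: pass the current nonce without mutating it, and advance only after the broadcast succeeds, so a failed attempt can be retried with the same nonce. `broadcast` here is a hypothetical stand-in for `demos.broadcast`.

```typescript
// Sketch of nonce-on-success. Names are illustrative, not the script's
// actual symbols.
async function sendBatch(
    startNonce: number,
    attempts: number,
    broadcast: (nonce: number) => Promise<boolean>,
): Promise<{ nonce: number; successCount: number }> {
    let nonce = startNonce
    let successCount = 0
    for (let i = 0; i < attempts; i++) {
        const currentNonce = nonce // capture instead of nonce++
        const ok = await broadcast(currentNonce)
        if (ok) {
            successCount++
            nonce++ // advance only after a successful broadcast
        }
    }
    return { nonce, successCount }
}
```

A failed attempt reuses its nonce on the next try, which is exactly the behavior the review comment asks for.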

In `@scripts/l2ps-stress-test.ts`:
- Around line 37-46: The CLI's --concurrency value from parseArgs (CliOptions) is parsed but never used; either implement parallel processing or remove the flag. Option A: in the main send loop (the serial dispatch code around lines 263-319), replace the sequential send with a bounded-concurrency worker pool that reads options.concurrency and runs at most that many sendTx/sendBatch calls in parallel (a simple async worker queue or a p-limit style limiter), aggregating errors and keeping the progress/counting logic correct. Option B: if serial execution is preferred, remove concurrency from CliOptions and parseArgs, drop all references and flag parsing, and update the help output so the flag is no longer advertised. Either way, update parseArgs and the main send loop together.
- Around line 280-287: Passing sender.nonce++ into sendTransaction increments the local nonce even when the send fails. Pass the current value (sender.nonce) without mutating, await the call, verify success/outerHash, and only then do sender.nonce++, so failures do not advance the nonce.
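The bounded-concurrency worker pool suggested for the stress test (option A) can be sketched as below: `concurrency` workers pull task indices from a shared counter, so at most that many sends run at once while results keep their original order. Names are illustrative, not the script's actual symbols.

```typescript
// p-limit style worker pool sketch. Since JS is single-threaded, the
// shared `next` counter needs no locking: reading and incrementing it is
// synchronous within each worker iteration.
async function runWithConcurrency<T>(
    tasks: (() => Promise<T>)[],
    concurrency: number,
): Promise<T[]> {
    const results: T[] = new Array(tasks.length)
    let next = 0
    async function worker(): Promise<void> {
        while (next < tasks.length) {
            const i = next++
            results[i] = await tasks[i]()
        }
    }
    const workers = Array.from(
        { length: Math.min(concurrency, tasks.length) },
        () => worker(),
    )
    await Promise.all(workers)
    return results
}
```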

In `@src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts`:
- Around line 404-443: The offline path must roll back DB state if blockchain storage fails. Have storeOfflineMessage return the stored message ID (and/or a success object), and add a rollbackOfflineMessage(storedMessageId, senderId, targetId) that deletes the DB entry and decrements the sender's offline-message counter (reversing whatever rate-limit side effect storeOfflineMessage applied). In the catch for storeMessageOnBlockchain, call rollbackOfflineMessage(storedId, senderId, payload.targetId) before sendError(ws, ...); alternatively, if blockchain failure should be non-fatal, change the error response to indicate the message was queued anyway. Keep the MAX_OFFLINE_MESSAGES_PER_SENDER logic authoritative in storeOfflineMessage, with rollbackOfflineMessage adjusting that quota consistently.

In `@src/index.ts`:
- Around line 881-914: There are duplicate SIGINT/SIGTERM handlers causing only
the last one (which calls gracefulShutdown()) to run and preventing DTRManager
and L2PS services from being stopped; consolidate by creating a single shutdown
handler that calls gracefulShutdown() and also invokes
DTRManager.getInstance().stop(), L2PSHashService.getInstance().stop(), and
L2PSBatchAggregator.getInstance().stop() (preserving the getSharedState.PROD
check for DTRManager), wrap each stop call in try/catch, await
gracefulShutdown() if it returns a Promise, and only call process.exit(0) after
all stops complete so all services (OmniProtocol/MCP/TLSNotary/Metrics plus
DTRManager and L2PS) are reliably shut down.
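The consolidated handler can be sketched as below; the service list and `stop()` signatures are assumptions based on the names mentioned above:

```typescript
type Stoppable = { name: string; stop: () => void | Promise<void> }

// One handler for both SIGINT and SIGTERM: stop every service, then shut down.
async function shutdownAll(
    gracefulShutdown: () => Promise<void>,
    services: Stoppable[],
): Promise<void> {
    for (const svc of services) {
        try {
            await svc.stop() // isolate each stop so one failure cannot skip the rest
        } catch (err) {
            console.error(`Failed to stop ${svc.name}:`, err)
        }
    }
    await gracefulShutdown()
    // caller runs process.exit(0) only after everything above has completed
}
```

Registered once via `process.on("SIGINT", handler)` and `process.on("SIGTERM", handler)`, replacing the duplicate listeners.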

In `@src/libs/l2ps/L2PSBatchAggregator.ts`:
- Around line 527-533: The conversion loses precision because Number(rawAmount)
cannot faithfully represent values above Number.MAX_SAFE_INTEGER, so
Math.floor(Number(rawAmount)) silently corrupts large amounts; update the
logic in L2PSBatchAggregator (the block assigning amount from rawAmount) to
perform safe BigInt conversion instead: if rawAmount is a bigint use it
directly, if it's a string validate it represents an integer (no decimal
point/whitespace) and call BigInt(rawAmount), if it's a number only accept it
when Number.isSafeInteger(rawAmount) (otherwise treat as invalid), and wrap the
conversion in try/catch to set amount = 0n on invalid input; reference the
variables/function names amount and rawAmount and replace the current
Math.floor(Number(...)) -> BigInt pattern with this validated path.
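The validated path can be sketched as:

```typescript
// Safe rawAmount -> bigint conversion, replacing the lossy
// Math.floor(Number(...)) -> BigInt pattern.
function toSafeAmount(rawAmount: unknown): bigint {
    try {
        if (typeof rawAmount === "bigint") return rawAmount
        if (typeof rawAmount === "string") {
            // integer digits only: no decimal point, sign, or whitespace
            return /^\d+$/.test(rawAmount) ? BigInt(rawAmount) : 0n
        }
        if (typeof rawAmount === "number" && Number.isSafeInteger(rawAmount)) {
            return BigInt(rawAmount)
        }
        return 0n
    } catch {
        return 0n // invalid input collapses to zero rather than throwing
    }
}
```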

In `@src/libs/l2ps/L2PSConsensus.ts`:
- Around line 417-492: Both rollbackProofsForBlock and getBlockStats currently
call L2PSProofManager.getProofs() with a global limit and then filter by
applied_block_number in-memory, which will miss proofs when a block exceeds the
cap; modify L2PSProofManager.getProofs to accept an optional
applied_block_number param and apply that filter at the query level, then update
L2PSConsensus.rollbackProofsForBlock and L2PSConsensus.getBlockStats to pass the
blockNumber into getProofs (removing reliance on the hard-coded 1000/10000 caps)
so the database returns only proofs for that block and prevents incomplete
rollbacks and undercounted stats.
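The intent can be illustrated with a minimal in-memory stand-in for the proof store; the real fix would add the equivalent WHERE clause to the database query inside `L2PSProofManager.getProofs`:

```typescript
// Illustrative stand-in: the applied_block_number filter moves into the query.
interface Proof { id: number; applied_block_number: number }

function getProofs(
    proofs: Proof[], // stand-in for the database table
    options: { limit?: number; applied_block_number?: number } = {},
): Proof[] {
    let result = proofs
    if (options.applied_block_number !== undefined) {
        // query-level filter: runs before any limit, so no proofs are missed
        result = result.filter(p => p.applied_block_number === options.applied_block_number)
    }
    return options.limit !== undefined ? result.slice(0, options.limit) : result
}
```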

In `@src/libs/l2ps/L2PSHashService.ts`:
- Around line 45-47: The GENERATION_INTERVAL constant in class L2PSHashService
is set directly from process.env.L2PS_HASH_INTERVAL_MS and can be NaN or too
small, causing CPU spin; change the initialization so you parse the env var,
check Number.isFinite/Number.isNaN (or isNaN) and clamp it to a sensible minimum
(e.g. MIN_INTERVAL = 1000 ms) and/or a safe default (e.g. DEFAULT_INTERVAL =
5000 ms) if parsing fails or the value is below the minimum; update the
GENERATION_INTERVAL assignment to use this validated/clamped value and keep the
env var name L2PS_HASH_INTERVAL_MS for configuration.
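A sketch of the validated assignment (constants as suggested above):

```typescript
const DEFAULT_INTERVAL = 5000 // ms, used when parsing fails
const MIN_INTERVAL = 1000 // ms, floor to avoid CPU spin

function resolveGenerationInterval(raw: string | undefined): number {
    const parsed = Number.parseInt(raw ?? "", 10)
    if (!Number.isFinite(parsed)) return DEFAULT_INTERVAL // unset or NaN
    return Math.max(parsed, MIN_INTERVAL) // clamp values below the minimum
}

// Usage: GENERATION_INTERVAL = resolveGenerationInterval(process.env.L2PS_HASH_INTERVAL_MS)
```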

In `@src/libs/l2ps/zk/BunPlonkWrapper.ts`:
- Around line 14-20: This file imports ffjavascript (symbols: getCurveFromName,
utils, Scalar, unstringifyBigInts) and js-sha3 (symbol: keccak256 via jsSha3)
but those packages are missing from package.json; add them to your project's
dependencies by installing ffjavascript and js-sha3 (e.g., npm install
ffjavascript js-sha3 or pnpm add ffjavascript js-sha3 / bun add ffjavascript
js-sha3) so the imports in BunPlonkWrapper.ts resolve at runtime.

In `@src/libs/l2ps/zk/circuits/l2ps_batch_5.circom`:
- Around line 13-28: The BalanceTransfer template performs raw subtraction and
needs range and underflow/overflow guards: add Num2Bits(128) instances for
sender_before, sender_after, receiver_before, receiver_after, and amount to
constrain them to 128-bit ranges, then instantiate LessThan(128) and wire it as
LessThan128(amount, sender_before) to enforce amount <= sender_before (prevent
underflow) and also enforce LessThan128(receiver_before + amount, 2^128) by
constraining receiver_after with Num2Bits(128) or explicitly checking
receiver_after < 2^128 to prevent receiver overflow; update constraints around
sender_after === sender_before - amount and receiver_after === receiver_before +
amount accordingly so arithmetic only occurs after these bounds are proven.

In `@src/libs/l2ps/zk/L2PSBatchProver.ts`:
- Around line 247-266: The code registers the same request twice into
this.pendingRequests (first at this.pendingRequests.set(id, { resolve, reject })
then again with timeout-wrapping), which overwrites the original entry and
causes the timeout wrapper to be lost; fix by registering a single entry that
wraps the original resolve/reject with timeout handling: create the timeout via
setTimeout pointing to reject(new Error('Subprocess request timeout')) and then
set this.pendingRequests.set(id, { resolve: (value) => { clearTimeout(timeout);
resolve(value); }, reject: (error) => { clearTimeout(timeout); reject(error); }
}) so only one map entry exists and the timeout is cleared on either resolve or
reject (use symbols: this.pendingRequests, id, timeout, resolve, reject,
setTimeout).
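The single-registration pattern looks roughly like this (map shape simplified for illustration):

```typescript
type Pending = { resolve: (v: unknown) => void; reject: (e: Error) => void }
const pendingRequests = new Map<string, Pending>()

// One map entry whose resolve/reject both clear the shared timeout.
function registerRequest(id: string, timeoutMs: number): Promise<unknown> {
    return new Promise((resolve, reject) => {
        const timeout = setTimeout(() => {
            pendingRequests.delete(id)
            reject(new Error("Subprocess request timeout"))
        }, timeoutMs)
        pendingRequests.set(id, {
            resolve: value => { clearTimeout(timeout); pendingRequests.delete(id); resolve(value) },
            reject: error => { clearTimeout(timeout); pendingRequests.delete(id); reject(error) },
        })
    })
}
```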

In `@src/libs/network/manageNodeCall.ts`:
- Around line 833-839: The timestamp validation only rejects timestamps older
than 5 minutes but allows future timestamps; update the check around
requestTime/now (the parseInt(data.timestamp) validation in this block) to
enforce a two-sided bound: consider a window const MAX_SKEW = 5 * 60 * 1000 and
reject if requestTime < now - MAX_SKEW OR requestTime > now + MAX_SKEW, setting
response.result and response.response as done currently (e.g., 401 with an
explanatory "Request expired or timestamp outside allowed skew" message) so
signatures can't be extended by future timestamps.
- Around line 876-878: The limit/offset variables in manageNodeCall.ts are taken
directly from data (const limit = data.limit || 100; const offset = data.offset
|| 0) and need validation to prevent oversized DB queries: parse data.limit and
data.offset as integers, enforce non-negative offset, clamp limit to a safe
MAX_LIMIT (e.g., define MAX_LIMIT = 1000) and a minimum of 1, and if parsing
fails fall back to the existing defaults; update the code that uses limit/offset
to rely on these sanitized values so queries never receive unbounded or negative
values.
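Both checks in this handler can be sketched together; `MAX_SKEW` and `MAX_LIMIT` are as named above, and the `100`/`0` fallbacks mirror the existing defaults:

```typescript
const MAX_SKEW = 5 * 60 * 1000 // ms
const MAX_LIMIT = 1000

// Two-sided bound: reject stale and future-dated timestamps alike.
function isTimestampValid(requestTime: number, now: number = Date.now()): boolean {
    return Number.isFinite(requestTime)
        && requestTime >= now - MAX_SKEW
        && requestTime <= now + MAX_SKEW
}

// Clamp pagination inputs so queries never see unbounded or negative values.
function sanitizePagination(rawLimit: unknown, rawOffset: unknown): { limit: number; offset: number } {
    const parsedLimit = Number.parseInt(String(rawLimit), 10)
    const parsedOffset = Number.parseInt(String(rawOffset), 10)
    return {
        limit: Number.isInteger(parsedLimit) ? Math.min(Math.max(parsedLimit, 1), MAX_LIMIT) : 100,
        offset: Number.isInteger(parsedOffset) && parsedOffset >= 0 ? parsedOffset : 0,
    }
}
```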

In `@src/libs/network/routines/transactions/handleL2PS.ts`:
- Around line 101-108: After extracting encryptedPayload (L2PSEncryptedPayload)
and originalHash, compute the canonical hash of the decrypted transaction bytes
and compare it to originalHash before proceeding; if they differ return
createErrorResponse(response, 400, "Decrypted transaction hash does not match
payload.original_hash"). Place this check immediately after you decrypt the
payload (use whatever decryption result variable you already have, e.g.,
decryptedTx or decryptedTxBytes) and ensure you use the same hashing
function/format used elsewhere for tx deduplication so the comparison is
canonical.
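The recheck reduces to a hash comparison; SHA-256 here is a stand-in and should be replaced with whatever hash the mempool already uses for tx deduplication:

```typescript
import { createHash } from "node:crypto"

// Stand-in canonical hash: match the function used elsewhere for dedup.
function hashTx(txBytes: Buffer): string {
    return createHash("sha256").update(txBytes).digest("hex")
}

// Run immediately after decryption, before any further processing.
function verifyDecryptedHash(decryptedTxBytes: Buffer, originalHash: string): boolean {
    return hashTx(decryptedTxBytes) === originalHash
}
```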
🟡 Minor comments (14)
src/libs/l2ps/L2PS_QUICKSTART.md-76-84 (1)

76-84: ⚠️ Potential issue | 🟡 Minor

Add language identifiers to fenced code blocks (markdownlint MD040).

This file has multiple fenced blocks without a language; adding text keeps markdownlint happy and improves readability.

🛠️ Proposed update
-```
+```text
 src/libs/l2ps/zk/
 ├── keys/
 │   ├── batch_5/   # For 1-5 tx batches (~37K constraints)
 │   └── batch_10/  # For 6-10 tx batches (~74K constraints)
 └── ptau/          # Powers of tau files
-```
+```

-```
+```text
 [L2PS] Loaded network: testnet_l2ps_001
 [L2PS Batch Aggregator] Started
-```
+```

-```
+```text
 User Transactions          Batch Aggregator           L1 Chain
       │                         │                        │
 TX 1 ─┤  (encrypted)            │                        │
 TX 2 ─┤  (1 DEM fee each)       │                        │
 TX 3 ─┼────────────────────────→│                        │
 TX 4 ─┤       in mempool        │   (every 10 sec)       │
 TX 5 ─┤                         │                        │
       │                         │  Aggregate GCR edits   │
       │                         │  Generate ZK proof     │
       │                         │  Create 1 batch tx ───→│
       │                         │                        │
       │                         │                        │ Consensus applies
       │                         │                        │ GCR edits to L1
-```
+```

-```
+```text
 node/
 ├── data/l2ps/testnet_l2ps_001/
 │   ├── config.json       # L2PS network config
 │   ├── private_key.txt   # AES-256 key (64 hex chars)
 │   └── iv.txt            # Initialization vector (32 hex chars)
 ├── docs/poc-app/
 │   ├── src/App.tsx       # POC application
 │   └── .env              # Client configuration
 ├── src/libs/l2ps/
 │   ├── L2PSTransactionExecutor.ts  # Transaction processing
 │   ├── L2PSBatchAggregator.ts      # Batch creation
 │   └── zk/               # ZK proof system
 ├── scripts/
 │   ├── send-l2-batch.ts        # Quick test
 │   ├── l2ps-load-test.ts       # Load test
 │   └── l2ps-stress-test.ts     # Stress test
 └── mnemonic.txt          # Your wallet
-```
+```

Also applies to: 112-116, 198-212, 326-346

src/libs/l2ps/L2PS_QUICKSTART.md-156-162 (1)

156-162: ⚠️ Potential issue | 🟡 Minor

Normalize table formatting and wrap the bare URL (markdownlint MD058/MD060/MD034).

Add spaces around pipes, surround tables with blank lines, and wrap http://127.0.0.1:53550 in backticks (or <...>). This resolves the markdownlint warnings and improves readability.

🛠️ Example fix (apply to all tables)
-| Feature | Description |
-|---------|-------------|
-| **Send L1/L2PS** | Toggle between public and private transactions |
-| **Transaction History** | View L1, L2PS, or All transactions |
-| **Learn Tab** | Interactive demos explaining L2PS |
-| **Privacy Demo** | Try authenticated vs unauthenticated access |
+| Feature | Description |
+| ------- | ----------- |
+| **Send L1/L2PS** | Toggle between public and private transactions |
+| **Transaction History** | View L1, L2PS, or All transactions |
+| **Learn Tab** | Interactive demos explaining L2PS |
+| **Privacy Demo** | Try authenticated vs unauthenticated access |

-| `--node <url>` | Node RPC URL | http://127.0.0.1:53550 |
+| `--node <url>` | Node RPC URL | `http://127.0.0.1:53550` |

Also applies to: 179-187, 216-221, 253-258, 266-273, 284-287

src/libs/identity/identity.ts-26-26 (1)

26-26: ⚠️ Potential issue | 🟡 Minor

Update import path to include .js extension.

The @scure/bip39@2.0.1 package exports wordlists with the .js extension. Change the import to:

import { wordlist } from "@scure/bip39/wordlists/english.js"

This same fix is needed in src/libs/utils/showPubkey.ts as well.

src/libs/blockchain/l2ps_mempool.ts-212-217 (1)

212-217: ⚠️ Potential issue | 🟡 Minor

Timestamp fallback for sequence numbers could cause gaps and collisions.

When getNextSequenceNumber fails, it falls back to Date.now(). This could:

  1. Create large gaps in sequence numbers (millisecond timestamps vs sequential integers)
  2. Cause collisions if called multiple times in same millisecond

Consider throwing the error or using a more predictable fallback.

🛡️ Proposed fix
         } catch (error) {
             const errorMsg = error instanceof Error ? error.message : String(error)
             log.error(`[L2PS Mempool] Error getting next sequence number: ${errorMsg}`)
-            // Fallback to timestamp-based sequence
-            return Date.now()
+            // Re-throw to let caller handle - timestamp fallback causes sequence gaps
+            throw error
         }
src/libs/l2ps/L2PSProofManager.ts-35-45 (1)

35-45: ⚠️ Potential issue | 🟡 Minor

Use locale-independent comparison for deterministic hashing.

localeCompare behavior varies by runtime locale, which could produce different hashes across environments. For cryptographic determinism, use simple string comparison.

🔒 Proposed fix for deterministic sorting
 function deterministicStringify(obj: any): string {
     return JSON.stringify(obj, (key, value) => {
         if (value && typeof value === 'object' && !Array.isArray(value)) {
-            return Object.keys(value).sort((a, b) => a.localeCompare(b)).reduce((sorted: any, k) => {
+            return Object.keys(value).sort().reduce((sorted: any, k) => {
                 sorted[k] = value[k]
                 return sorted
             }, {})
         }
         return value
     })
 }
src/libs/l2ps/L2PSTransactionExecutor.ts-204-210 (1)

204-210: ⚠️ Potential issue | 🟡 Minor

Amount type validation may reject valid BigInt/string amounts.

The validation at Line 208 checks typeof amount !== 'number', but transaction amounts in blockchain systems are often BigInt or string to handle large values. This could incorrectly reject valid transactions.

🛡️ Proposed fix for flexible amount handling
-            const [to, amount] = nativePayload.args as [string, number]
+            const [to, rawAmount] = nativePayload.args as [string, number | string | bigint]
             const sender = tx.content.from as string

-            // Validate amount (type check and positive)
-            if (typeof amount !== 'number' || !Number.isFinite(amount) || amount <= 0) {
-                return { success: false, message: "Invalid amount: must be a positive number" }
+            // Validate and normalize amount
+            let amount: bigint
+            try {
+                amount = BigInt(rawAmount)
+            } catch {
+                return { success: false, message: "Invalid amount: cannot convert to BigInt" }
+            }
+            if (amount <= 0n) {
+                return { success: false, message: "Invalid amount: must be positive" }
             }
src/libs/l2ps/zk/README.md-7-7 (1)

7-7: ⚠️ Potential issue | 🟡 Minor

Inconsistency between documented max batch size and file structure.

Line 7 states the system "Supports up to 10 transactions per batch", but:

  • Line 84 shows a batch_20/ directory in the file structure
  • Line 94 documents performance for 20 tx batches

Please clarify the actual maximum supported batch size and update the documentation accordingly.

Also applies to: 84-84, 94-94

src/libs/network/routines/transactions/demosWork/handleStep.ts-45-46 (1)

45-46: ⚠️ Potential issue | 🟡 Minor

Double cast should be replaced with proper type assertion.

The as unknown as L2PSTransaction pattern is inconsistent with how other contexts (xm, web2, native) handle the same task variable using single type assertions. This suggests either:

  1. step.content needs explicit typing for the l2ps context, or
  2. The SDK's L2PSTransaction type is incompatible with WorkStep.content

Consider explicitly typing task based on the context value, or verifying that the single-cast pattern works for the l2ps case like it does for other handlers.

scripts/l2ps-load-test.ts-48-56 (1)

48-56: ⚠️ Potential issue | 🟡 Minor

Validate numeric CLI flags to avoid NaN/negative values.

parseInt can yield NaN or negative values, which later impact loop bounds and TPS math. Fail fast on invalid inputs to keep the run deterministic.

💡 Suggested fix
         } else if (arg === "--count" && argv[i + 1]) {
-            options.count = parseInt(argv[i + 1], 10)
+            const count = Number.parseInt(argv[i + 1], 10)
+            if (!Number.isInteger(count) || count < 1) {
+                throw new Error("--count must be a positive integer")
+            }
+            options.count = count
             i++
         } else if (arg === "--value" && argv[i + 1]) {
-            options.value = parseInt(argv[i + 1], 10)
+            const value = Number.parseInt(argv[i + 1], 10)
+            if (!Number.isInteger(value) || value < 0) {
+                throw new Error("--value must be a non-negative integer")
+            }
+            options.value = value
             i++
         } else if (arg === "--delay" && argv[i + 1]) {
-            options.delayMs = parseInt(argv[i + 1], 10)
+            const delay = Number.parseInt(argv[i + 1], 10)
+            if (!Number.isInteger(delay) || delay < 0) {
+                throw new Error("--delay must be a non-negative integer")
+            }
+            options.delayMs = delay
             i++
         } else if (arg === "--help") {
scripts/send-l2-batch.ts-325-327 (1)

325-327: ⚠️ Potential issue | 🟡 Minor

Validate --value before building transactions.

Number(options.value) can yield NaN or negative values; that will propagate into invalid transaction amounts.

💡 Suggested fix
     const amount = options.value ? Number(options.value) : 0
+    if (!Number.isFinite(amount) || amount < 0) {
+        throw new Error("--value must be a non-negative number")
+    }
scripts/l2ps-load-test.ts-83-89 (1)

83-89: ⚠️ Potential issue | 🟡 Minor

Reject empty/odd-length hex key material early.

sanitizeHexValue currently allows empty or odd-length values, which can blow up downstream. Add basic length guards before converting to bytes.

💡 Suggested fix
 function sanitizeHexValue(value: string, label: string): string {
     const cleaned = value.trim().replace(/^0x/, "").replaceAll(/\s+/g, "")
+    if (cleaned.length === 0) {
+        throw new Error(`${label} is empty`)
+    }
+    if (cleaned.length % 2 !== 0) {
+        throw new Error(`${label} must have an even number of hex characters`)
+    }
     if (!/^[0-9a-fA-F]+$/.test(cleaned)) {
         throw new Error(`${label} contains non-hex characters`)
     }
     return cleaned.toLowerCase()
 }
src/libs/l2ps/L2PSHashService.ts-140-167 (1)

140-167: ⚠️ Potential issue | 🟡 Minor

Close OmniProtocol connection pool on stop.

The connectionPool is created in start() but never closed in stop(), leaving pooled sockets open on shutdown/restart. Call this.connectionPool?.shutdown() and null the reference to avoid resource leaks.

Suggested fix
         // Wait for ongoing operation to complete
         const startTime = Date.now()
         while (this.isGenerating && (Date.now() - startTime) < timeoutMs) {
             await new Promise(resolve => setTimeout(resolve, 100))
         }
 
+        // Close OmniProtocol connection pool if initialized
+        if (this.connectionPool) {
+            await this.connectionPool.shutdown()
+            this.connectionPool = null
+        }
+
         if (this.isGenerating) {
             log.warning("[L2PS Hash Service] Forced shutdown - operation still in progress")
         }
src/libs/l2ps/parallelNetworks.ts-303-316 (1)

303-316: ⚠️ Potential issue | 🟡 Minor

Signature verification assumes from field contains the signer's public key.

The verification at Lines 304-309 uses encryptedTx.content.from as the public key. This may not be correct if:

  1. from is an address rather than a raw public key
  2. The transaction was signed by a different key than the from address

Consider validating that from is a valid hex public key before use, or using a dedicated signer_public_key field.
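A minimal guard sketch; the 32-byte (64 hex char) length assumes ed25519 public keys and should be adjusted if the signing scheme differs:

```typescript
// Heuristic check before treating `from` as a raw public key.
function isLikelyPublicKey(from: string): boolean {
    const cleaned = from.replace(/^0x/, "")
    return /^[0-9a-fA-F]{64}$/.test(cleaned) // 32-byte hex
}
```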

src/libs/network/endpointHandlers.ts-845-854 (1)

845-854: ⚠️ Potential issue | 🟡 Minor

Authorization check only verifies local participation, not sender membership.

The validation at lines 846-854 checks if this node has joined the L2PS network via parallelNetworks.getL2PS(l2psUid), but doesn't verify that the sender of the hash update (available in tx.content.from) is an authorized participant in that network. While an L2PS participant cache exists in L2PSConcurrentSync.ts, it's not consulted during hash update validation, allowing any node to submit hash updates regardless of their network membership status.

Consider validating the sender against the L2PS network's discovered participants before accepting the hash update.

🧹 Nitpick comments (28)
scripts/generate-test-wallets.ts (4)

31-33: Add validation for --count argument.

parseInt returns NaN for non-numeric input, which would cause the wallet generation loop to silently skip. Consider adding basic validation.

✨ Suggested validation
         if (arg === "--count" && argv[i + 1]) {
-            options.count = parseInt(argv[i + 1], 10)
+            const count = parseInt(argv[i + 1], 10)
+            if (isNaN(count) || count <= 0) {
+                console.error("Error: --count must be a positive integer")
+                process.exit(1)
+            }
+            options.count = count
             i++

113-126: Add security warning for mnemonic output file.

The output file contains sensitive mnemonic phrases that can be used to derive private keys. Consider adding a security warning and ensuring the file isn't accidentally committed to version control.

🔒 Suggested security improvements
     // Save wallet mnemonics to file
     const outputPath = path.resolve(options.outputPath)
     const walletsData = {
         generated_at: new Date().toISOString(),
         count: newWallets.length,
         balance: options.balance,
+        warning: "SENSITIVE: This file contains mnemonic phrases. Keep secure and never commit to version control.",
         wallets: newWallets.map(w => ({
             index: w.index,
             address: w.address,
             mnemonic: w.mnemonic,
         })),
     }
     writeFileSync(outputPath, JSON.stringify(walletsData, null, 2))
     console.log(`\n💾 Saved wallet mnemonics to: ${outputPath}`)
+    console.log(`   ⚠️  WARNING: This file contains sensitive mnemonic phrases!`)
+    console.log(`   ⚠️  Ensure it's added to .gitignore and stored securely.`)

89-106: Theoretical infinite loop risk on address collision.

The i-- pattern retries wallet generation on collision, but if address space were exhausted or the SDK consistently returned duplicates, this would loop indefinitely. While practically impossible with the Ed25519 address space, adding a max-retry guard would be defensive.

🛡️ Suggested defensive guard
+    const MAX_RETRIES_PER_WALLET = 10
+    let retries = 0
+
     for (let i = 0; i < options.count; i++) {
         const wallet = await generateWallet()
 
         // Skip if already exists
         if (existingAddresses.has(wallet.address.toLowerCase())) {
+            if (++retries >= MAX_RETRIES_PER_WALLET) {
+                throw new Error(`Failed to generate unique wallet after ${MAX_RETRIES_PER_WALLET} attempts`)
+            }
             console.log(`   ⚠️  Wallet ${i + 1} already exists, regenerating...`)
             i--
             continue
         }
+        retries = 0

136-139: Consider logging full error for debugging.

Logging only err.message loses the stack trace, which can make debugging harder when something fails unexpectedly.

🔍 Suggested improvement
 main().catch(err => {
-    console.error("❌ Error:", err.message)
+    console.error("❌ Error:", err.message || err)
+    if (process.env.DEBUG) {
+        console.error(err.stack)
+    }
     process.exit(1)
 })
src/libs/l2ps/zk/scripts/setup_all_batches.sh (1)

28-37: Harden PTAU download checks for portability and failure handling.

stat -c%s breaks on macOS/BSD, and curl without --fail can leave a small HTML error file that still passes. Consider a portable size check and fail‑fast curl.

🛠️ Suggested tweak
-    if [[ ! -f "$PTAU_DIR/$file" ]] || [[ $(stat -c%s "$PTAU_DIR/$file") -lt 1000000 ]]; then
+    if [[ ! -f "$PTAU_DIR/$file" ]] || [[ $(wc -c < "$PTAU_DIR/$file") -lt 1000000 ]]; then
         echo -e "${YELLOW}Downloading pot${size}...${NC}"
         rm -f "$PTAU_DIR/$file"
-        curl -L -o "$PTAU_DIR/$file" "$url"
+        curl -fL --retry 3 -o "$PTAU_DIR/$file" "$url"
     else
         echo "pot${size} already exists"
     fi
scripts/l2ps-stress-test.ts (1)

273-273: Use const for receiverIdx (ESLint prefer-const).

receiverIdx is never reassigned, so it can be declared as const.

src/libs/l2ps/zk/circuits/l2ps_batch_10.circom (2)

26-27: Unused check signal — remove or replace with proper constraint.

The check signal computes sender_after * sender_after but the result is never used or constrained. Squaring in a finite field does not enforce non-negativity (e.g., -1 squared equals 1 mod prime). If the intent is to ensure sender_after >= 0, Circom requires explicit range proofs via bit decomposition (e.g., Num2Bits).

Either remove the unused signal to reduce constraints, or implement proper range checking if needed.

🔧 Proposed fix (remove unused signal)
-    signal check;
-    check <== sender_after * sender_after;
-    
     component preHasher = Poseidon(2);

20-21: pre_hash output is computed but never used.

The BalanceTransfer template computes both pre_hash and post_hash outputs, but only post_hash is used in the state chain (Line 71). If pre_hash is not needed for verification, consider removing it to reduce circuit constraints (~2.3K per transfer × 10 = ~23K constraints saved).

If pre_hash is reserved for future use (e.g., state diff proofs), document this intent.

Also applies to: 69-72

src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts (1)

812-822: Consider batching mutex acquisitions for rate limit updates.

The loop acquires the mutex separately for each sender, which could cause contention with high delivery volumes. Consider collecting all updates and applying them in a single mutex-protected batch.

♻️ Proposed optimization
         // REVIEW: PR Fix `#9` - Reset offline message counts for senders after successful delivery
         if (sentCount > 0) {
-            // REVIEW: PR Fix `#2` - Use mutex to prevent lost updates during concurrent deliveries
-            for (const [senderId, count] of senderCounts.entries()) {
-                await this.countMutex.runExclusive(async () => {
-                    const currentCount = this.offlineMessageCounts.get(senderId) || 0
-                    const newCount = Math.max(0, currentCount - count)
-                    if (newCount === 0) {
-                        this.offlineMessageCounts.delete(senderId)
-                    } else {
-                        this.offlineMessageCounts.set(senderId, newCount)
-                    }
-                })
-            }
+            // Batch update all sender counts in single mutex acquisition
+            await this.countMutex.runExclusive(async () => {
+                for (const [senderId, count] of senderCounts.entries()) {
+                    const currentCount = this.offlineMessageCounts.get(senderId) || 0
+                    const newCount = Math.max(0, currentCount - count)
+                    if (newCount === 0) {
+                        this.offlineMessageCounts.delete(senderId)
+                    } else {
+                        this.offlineMessageCounts.set(senderId, newCount)
+                    }
+                }
+            })
             console.log(`Sent ${sentCount} offline messages to ${peerId}`)
         }
.env.example (1)

23-24: Consider defaulting L2PS_ZK_ENABLED to false for easier local development.

ZK proof generation typically requires compiled circuits, trusted setup files, and significant compute resources. Defaulting to true may cause failures for developers who haven't completed the ZK setup. Consider false as default with documentation on enabling it.

💡 Proposed change
 # ZK Proof Settings
-L2PS_ZK_ENABLED=true
+L2PS_ZK_ENABLED=false
src/libs/l2ps/L2PSTransactionExecutor.ts (1)

124-124: Remove trivially inferred type annotations (ESLint).

Static analysis flagged redundant type annotations on default parameter values.

🧹 Proposed cleanup
-        simulate: boolean = false
+        simulate = false

-        batchIndex: number = 0,
+        batchIndex = 0,

-        limit: number = 100,
-        offset: number = 0
+        limit = 100,
+        offset = 0

Also applies to: 322-322, 393-394

src/libs/l2ps/L2PSProofManager.ts (2)

223-227: Empty GCR edits rejection may be too strict.

The validation rejects proofs with gcr_edits.length === 0, but some proof types (e.g., state attestations, no-op proofs) may legitimately have zero edits. Consider if this should be configurable or type-dependent.


117-117: Remove trivially inferred type annotations (ESLint).

🧹 Proposed cleanup
-        transactionCount: number = 1,
+        transactionCount = 1,

-        limit: number = 100
+        limit = 100

Also applies to: 318-318

src/libs/blockchain/l2ps_mempool.ts (1)

573-581: SQL timestamp comparison relies on string-to-bigint cast.

The cleanup query casts string timestamps to BIGINT for comparison. This works but depends on timestamps being stored as numeric strings. Consider adding a comment documenting this assumption, or validating timestamp format on write.

package.json (1)

26-26: Cross-platform compatibility concern with bash script.

The l2ps:zk:setup script uses cd with && and invokes bash directly, which won't work on Windows. Since this is a development/setup script that's likely run once per environment, this may be acceptable, but consider documenting this limitation or providing a cross-platform alternative.

src/libs/l2ps/zk/README.md (1)

65-65: Consider adding language specifiers to fenced code blocks.

Lines 65 and 80 have fenced code blocks without language specifiers. Adding a language (e.g., ```text or ```plaintext) would satisfy linter rules and may improve rendering in some Markdown viewers.

Also applies to: 80-80

src/libs/blockchain/routines/Sync.ts (1)

423-433: Consider limiting concurrent L2PS sync operations.

The loop starts a fire-and-forget syncL2PSWithPeer call for each l2psUid without any concurrency control. If a node participates in many L2PS networks, this could create a burst of parallel network requests.

Consider using a concurrency limiter (e.g., p-limit or manual batching) similar to how DTR relay uses a concurrency limit of 5 as mentioned in the PR description.
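A minimal limiter in the spirit of p-limit can be sketched as follows (cap of 5 matching the DTR relay limit; this is illustrative, not the library's API):

```typescript
// Caps the number of tasks running concurrently; extras wait in a FIFO queue.
function createLimiter(max: number) {
    let active = 0
    const queue: Array<() => void> = []
    const next = () => {
        active--
        queue.shift()?.() // wake the oldest waiter, if any
    }
    return async function run<T>(task: () => Promise<T>): Promise<T> {
        if (active >= max) await new Promise<void>(resolve => queue.push(resolve))
        active++
        try {
            return await task()
        } finally {
            next()
        }
    }
}

// Usage: const limit = createLimiter(5); l2psUids.map(uid => limit(() => syncL2PSWithPeer(uid)))
```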

src/index.ts (1)

853-858: ParallelNetworks initialization error handling could swallow important errors.

The catch block logs the error but continues execution. Consider whether a failure to load L2PS networks should be a warning vs. preventing L2PS services from starting with stale/incorrect configuration.

src/libs/l2ps/L2PSConcurrentSync.ts (5)

6-6: Unused import.

L2PSTransactionExecutor is imported but never used in this file.

🧹 Remove unused import
-import L2PSTransactionExecutor from "./L2PSTransactionExecutor"

37-54: Fire-and-forget pattern may hide discovery failures.

The peer.call().then() pattern dispatches requests without awaiting them, which means:

  1. Discovery errors are silently swallowed (Line 52-54)
  2. discoverL2PSParticipants returns before discovery completes
  3. Callers cannot know if discovery succeeded

If this is intentional for performance (non-blocking discovery), consider at least logging failures instead of ignoring them entirely.

♻️ Log discovery failures for observability
             }).catch(() => {
-                // Ignore errors during discovery
+                log.debug(`[L2PS-SYNC] Discovery query failed for peer ${peer.identity}`)
             })

42-42: Potential muid collision in tight loops.

Using Date.now() for unique identifiers can produce duplicates when called in rapid succession within the same millisecond. Consider adding a random suffix or using a counter.

🔧 Add randomness to prevent collisions
-                        muid: `l2ps_discovery_${Date.now()}` // Unique ID
+                        muid: `l2ps_discovery_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`

87-88: Potential precision loss with BigInt timestamp.

L2PSMempoolTx may store timestamp as bigint. Converting via Number(latestTx.timestamp) could lose precision for very large values. Since timestamps are typically within safe integer range this is low risk, but consider using BigInt consistently if the mempool stores timestamps as bigint.


103-125: Type safety could be improved for synced transactions.

The code uses any[] type assertion (Line 104) and validates structure minimally before casting to L2PSTransaction (Line 122). While the structural checks at Lines 116-119 help, consider using a type guard or validation function for more robust type safety.

src/libs/l2ps/zk/L2PSBatchProver.ts (3)

9-23: Bun compatibility patch is reasonable but risky.

Monkey-patching EventTarget.prototype.dispatchEvent is a necessary workaround for Bun's web-worker limitations, but it affects all EventTarget instances globally. Consider documenting this side effect and ensuring it doesn't break other event-driven code.


541-541: Synchronous file read blocks event loop.

fs.readFileSync in verifyProof could block the event loop for large verification key files. Consider using async fs.promises.readFile or caching the verification key after first load.

♻️ Use async file read or cache
-        const vkey = JSON.parse(fs.readFileSync(vkeyPath, 'utf-8'));
+        const vkey = JSON.parse(await fs.promises.readFile(vkeyPath, 'utf-8'));

Or cache the verification key in the loadedKeys map to avoid repeated file reads.
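A sketch of the caching variant, assuming the verification key file is JSON; loadedKeys here is a standalone map, not the prover's actual field:

```typescript
import { readFile } from "node:fs/promises"

// Cache parsed verification keys by path so each file is read at most once.
const loadedKeys = new Map<string, unknown>()

async function getVerificationKey(vkeyPath: string): Promise<unknown> {
    const cached = loadedKeys.get(vkeyPath)
    if (cached !== undefined) return cached
    const vkey: unknown = JSON.parse(await readFile(vkeyPath, "utf-8"))
    loadedKeys.set(vkeyPath, vkey)
    return vkey
}
```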


186-186: Empty reject function loses error information.

The ready handler uses an empty reject function, which means any initialization errors for the __ready__ request are silently discarded.

🔧 Add logging for initialization errors
-                this.pendingRequests.set('__ready__', { resolve: checkReady, reject: () => {} });
+                this.pendingRequests.set('__ready__', { 
+                    resolve: checkReady, 
+                    reject: (err) => log.debug(`[L2PSBatchProver] Ready handler error: ${err}`) 
+                });
src/libs/l2ps/L2PSBatchAggregator.ts (2)

479-486: HMAC key truncation reduces cryptographic strength.

Truncating the private key hex to 64 characters (32 bytes) for the HMAC key is acceptable for HMAC-SHA256, but the truncation approach (slice(0, 64)) assumes the private key is at least 64 hex chars. If shorter, you'd get a weaker key. Consider validating the key length or using a proper key derivation function.
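A sketch of the validate-then-derive approach using Node's built-in HKDF; the info label and function names are illustrative:

```typescript
import { createHmac, hkdfSync } from "node:crypto"

// Reject keys that are too short, then derive a uniform 32-byte HMAC key
// with HKDF-SHA256 instead of truncating the hex string.
function deriveHmacKey(privateKeyHex: string): Buffer {
    if (privateKeyHex.length < 64) {
        throw new Error("Private key must be at least 32 bytes (64 hex chars)")
    }
    const okm = hkdfSync("sha256", Buffer.from(privateKeyHex, "hex"), Buffer.alloc(0), "l2ps-batch-hmac", 32)
    return Buffer.from(okm)
}

function hmacSha256(key: Buffer, message: string): string {
    return createHmac("sha256", key).update(message).digest("hex")
}
```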


752-752: Type casting to any bypasses mempool type checking.

Casting batchTransaction to any when calling Mempool.addTransaction defeats TypeScript's type safety. Consider ensuring the transaction object matches the expected type or updating the mempool interface.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 15

🤖 Fix all issues with AI agents
In `@scripts/generate-test-wallets.ts`:
- Around line 44-67: In parseArgs, validate the parsed --count value
(options.count) is a positive integer after calling the ARG_HANDLERS parser: use
Number.parseInt(...) result and check Number.isInteger(value) && value > 0; if
the check fails, print a clear error and exit (or throw) so generation cannot
proceed silently with NaN/negative/zero; update the handler for the "--count"
entry in ARG_HANDLERS and the parseArgs flow to perform this validation
immediately after parsing and before returning options.

In `@scripts/l2ps-stress-test.ts`:
- Around line 324-325: The avgDuration calculation can divide by zero if results
is empty (e.g., options.count === 0); update the code around the avgDuration
computation to guard against an empty results array by checking results.length
and returning a sensible default (e.g., 0 or NaN) when length is 0 before
performing the sum / division; reference the variables avgDuration, results and
options.count and replace the current one-liner with a guarded calculation that
sums durations only when results.length > 0 and sets avgDuration = 0 when
results.length === 0.
- Around line 302-304: The TPS calculation can divide by zero when elapsed
rounds to "0.0"; modify the block around elapsed/tps to compute elapsedSeconds =
(Date.now() - startTime) / 1000 (as a number) and guard against zero by using a
small epsilon (e.g. if elapsedSeconds <= 0 then elapsedSeconds = 0.001) before
computing tps = successCount / elapsedSeconds, then format the displayed elapsed
and tps strings; update the variables referenced (elapsed, tps, successCount,
startTime) in the logging line to use these safe values.

In `@src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts`:
- Around line 190-191: The switch "register" case in signalingServer.ts declares
a lexical variable let registerMessage (data as ImRegisterMessage) directly in
the case, violating no-case-declarations; fix it by wrapping the entire
"register" case body in a block { ... } so the declaration is scoped correctly
(locate the switch handling the "register" case and the let registerMessage
variable and enclose that case's statements in braces).

In `@src/libs/blockchain/l2ps_mempool.ts`:
- Around line 166-179: getNextSequenceNumber plus the immediate repo.save in
addTransaction is race-prone: two concurrent callers can read the same max and
insert duplicate sequence_number. Add a DB-level UNIQUE constraint on (l2ps_uid,
sequence_number) and change addTransaction to allocate sequence numbers
atomically (e.g., use a DB sequence, an atomic increment row with SELECT ... FOR
UPDATE inside a transaction, or an INSERT ... RETURNING on a sequence table)
instead of relying on getNextSequenceNumber + save; also catch unique-constraint
conflicts from repo.save and retry the allocation/insert loop a few times before
failing. Ensure references to getNextSequenceNumber, addTransaction, and
repo.save are updated accordingly.
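Independent of the ORM, the retry shape suggested above can be sketched as a small helper with injected insert and conflict-detection callbacks (all names are illustrative; the real fix still needs the DB-level UNIQUE constraint):

```typescript
// Retry allocation + insert a few times when a unique-constraint conflict
// signals that a concurrent writer claimed the same sequence_number.
async function insertWithSequenceRetry<T>(
    getMaxSequence: () => Promise<number>,
    insert: (sequence: number) => Promise<T>,
    isUniqueViolation: (err: unknown) => boolean,
    maxAttempts = 3,
): Promise<T> {
    let lastError: unknown
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        const sequence = (await getMaxSequence()) + 1
        try {
            return await insert(sequence)
        } catch (err) {
            if (!isUniqueViolation(err)) throw err
            lastError = err // lost the race; re-read the max and retry
        }
    }
    throw lastError
}
```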

In `@src/libs/l2ps/L2PSBatchAggregator.ts`:
- Around line 82-94: The env var parsing in L2PSBatchAggregator
(AGGREGATION_INTERVAL, MIN_BATCH_SIZE, MAX_BATCH_SIZE, CLEANUP_AGE_MS) lacks NaN
checks and bounds enforcement; add a small helper (e.g. parseEnvInt) that reads
process.env[name], parses to integer, returns the default if parse yields NaN,
and clamps to optional min/max, then replace the direct Number.parseInt calls
with parseEnvInt invocations (ensure MAX_BATCH_SIZE still enforces the circuit
cap of 10 and any sensible minimums like >=1 and reasonable lower/upper bounds).
- Around line 600-613: getNextBatchNonce claims to use persistent storage but
currently relies on in-memory sharedState (so nonces reset on process restart);
update the storage layer used by getLastNonceFromStorage() and
saveNonceToStorage() in the L2PSBatchAggregator class to persist the last nonce
to durable storage (e.g., a file or database) and read it on startup before
generating a new nonce; implement atomic write and error handling (fallback to
safe default) and ensure getNextBatchNonce still calls the updated
getLastNonceFromStorage and saveNonceToStorage so newNonce is persisted across
restarts.
- Around line 802-808: The updateCycleTime method can divide by zero if
this.stats.successfulCycles is 0; modify updateCycleTime to guard against that
by checking this.stats.successfulCycles before computing the running average:
set this.stats.lastCycleTime = cycleTime as now, then if
this.stats.successfulCycles <= 0 set this.stats.averageCycleTime = cycleTime (or
initialize appropriately), otherwise compute totalTime and set averageCycleTime
using the existing formula; reference the updateCycleTime method and the stats
properties (stats.averageCycleTime, stats.successfulCycles, stats.lastCycleTime)
when making the change.
- Around line 730-734: The signature is created over a domain-separated message
(this.SIGNATURE_DOMAIN + ":" + contentString) which causes mempool validation to
fail because tx.hash is currently the SHA256 of contentString only; fix by
making tx.hash match what is signed: in L2PSBatchAggregator where you build the
domainSeparatedMessage and call ucrypto.sign, compute tx.hash as the SHA256 of
that same domainSeparatedMessage (instead of contentString) so transaction.hash
and the signature input match, and ensure the signature object still contains
domain (SIGNATURE_DOMAIN); alternatively, if you prefer verification-side
changes, update the signature validation logic in transaction.ts (the validation
function that currently hashes contentString) to detect signatures with
signature.domain === this.SIGNATURE_DOMAIN and prepend the domain to the message
before hashing/verification so the signed bytes align with the signer.

In `@src/libs/l2ps/L2PSHashService.ts`:
- Around line 453-459: The updateCycleTime method can divide by zero when
this.stats.successfulCycles is 0; modify updateCycleTime in L2PSHashService to
guard against that by checking if this.stats.successfulCycles <= 0 and in that
case set this.stats.averageCycleTime = cycleTime (and return) before performing
the running-average formula, otherwise compute the total and assign
Math.round(total / this.stats.successfulCycles) as before; ensure you still set
this.stats.lastCycleTime = cycleTime at the start.
- Around line 113-122: The ConnectionPool created in start() when
this.omniEnabled is true (new ConnectionPool({...})) is never closed, causing
resource leaks on repeated start/stop; update the stop() method to check for
this.connectionPool and call its shutdown/close method (or appropriate API) and
null it out, ensuring ConnectionPool is disposed before returning so subsequent
start() calls can recreate it safely; reference the ConnectionPool instantiation
in start(), the this.connectionPool field, and the stop() method when making the
change.

In `@src/libs/l2ps/zk/L2PSBatchProver.ts`:
- Around line 275-282: terminate() currently kills the subprocess but leaves any
pending requests unresolved; update terminate() (and use the same
pendingRequests Map used by the request/response handlers) to iterate over
pendingRequests, call each stored reject callback with an Error like "process
terminated", clear the map, then proceed to kill the childProcess, set
this.childProcess = null and this.processReady = false, and log the termination
so all awaiting promises are rejected rather than leaked.
- Around line 10-23: The global patch of EventTarget.prototype.dispatchEvent
(wrapped under isBun) can have broad side effects; modify the approach in
L2PSBatchProver so the patch is applied only when strictly needed: introduce a
guarded, idempotent patch mechanism (e.g., check a module-scoped flag before
replacing EventTarget.prototype.dispatchEvent), store the originalDispatchEvent
in a module-scoped variable so you can restore it in a cleanup method on the
L2PSBatchProver instance (or after the specific operation completes), and scope
the usage so the patched behavior is only active during the prover’s operations
rather than for the whole process; ensure you reference and protect
EventTarget.prototype.dispatchEvent, originalDispatchEvent, and isBun when
implementing this change.

In `@src/libs/network/manageNodeCall.ts`:
- Around line 69-70: The variable declaration for peerlist inside the
switch/case is using let and not scoped to the case, causing possible cross-case
access; change to a block-scoped constant by wrapping the case body in braces
and declare const peerlist = await getPeerlist(), then compute response.response
= Hashing.sha256(JSON.stringify(peerlist)). Locate the code around getPeerlist,
peerlist and response.response (and Hashing.sha256) and replace the let with
const inside a new { ... } block so peerlist is lexically scoped to that case.

In `@src/libs/network/routines/transactions/handleL2PS.ts`:
- Line 71: The return performs an unsafe double-cast "decryptedTx as unknown as
Transaction", bypassing TypeScript checks; update handleL2PS to validate the
decryptedTx returned from l2psInstance.decryptTx before casting: either correct
l2psInstance.decryptTx's declared return type, or implement a runtime check/type
guard (e.g., isTransaction) that verifies required fields (hash, content,
signature or whatever Transaction requires) and on failure return an error
(e.g., { decryptedTx: null, error: "Invalid decrypted transaction structure" }),
otherwise return the validated decryptedTx typed as Transaction; ensure you
reference decryptedTx, Transaction and l2psInstance.decryptTx in the change.
🧹 Nitpick comments (16)
src/features/tlsnotary/TLSNotaryService.ts (1)

190-193: Guard against NaN when env vars are malformed.

Number.parseInt can yield NaN, which would later break the service (e.g., invalid port). Consider a small guard that falls back to defaults when parsing fails.

💡 Proposed refactor
-  return {
-    port: Number.parseInt(process.env.TLSNOTARY_PORT ?? "7047", 10),
-    signingKey,
-    maxSentData: Number.parseInt(process.env.TLSNOTARY_MAX_SENT_DATA ?? "16384", 10),
-    maxRecvData: Number.parseInt(process.env.TLSNOTARY_MAX_RECV_DATA ?? "65536", 10),
-    autoStart: process.env.TLSNOTARY_AUTO_START?.toLowerCase() !== "false",
-    mode,
-  }
+  const port = Number.parseInt(process.env.TLSNOTARY_PORT ?? "7047", 10)
+  const maxSentData = Number.parseInt(process.env.TLSNOTARY_MAX_SENT_DATA ?? "16384", 10)
+  const maxRecvData = Number.parseInt(process.env.TLSNOTARY_MAX_RECV_DATA ?? "65536", 10)
+
+  return {
+    port: Number.isFinite(port) ? port : 7047,
+    signingKey,
+    maxSentData: Number.isFinite(maxSentData) ? maxSentData : 16384,
+    maxRecvData: Number.isFinite(maxRecvData) ? maxRecvData : 65536,
+    autoStart: process.env.TLSNOTARY_AUTO_START?.toLowerCase() !== "false",
+    mode,
+  }
scripts/l2ps-stress-test.ts (3)

43-46: Consider adding validation for numeric CLI options.

Parsed numeric values (count, value, concurrency, delay) are not validated. Invalid inputs like --count -5 or --delay abc could cause unexpected behavior (NaN or negative values).

💡 Suggested validation helper
+function parsePositiveInt(val: string, label: string, defaultVal: number): number {
+    const parsed = Number.parseInt(val, 10)
+    if (Number.isNaN(parsed) || parsed <= 0) {
+        console.warn(`Warning: Invalid ${label} "${val}", using default ${defaultVal}`)
+        return defaultVal
+    }
+    return parsed
+}
+
 const ARG_HANDLERS: Record<string, ArgHandler> = {
     "--node": (opts, val) => { opts.nodeUrl = val },
     "--uid": (opts, val) => { opts.uid = val },
     "--wallets-file": (opts, val) => { opts.walletsFile = val },
-    "--count": (opts, val) => { opts.count = Number.parseInt(val, 10) },
-    "--value": (opts, val) => { opts.value = Number.parseInt(val, 10) },
-    "--concurrency": (opts, val) => { opts.concurrency = Number.parseInt(val, 10) },
-    "--delay": (opts, val) => { opts.delayMs = Number.parseInt(val, 10) },
+    "--count": (opts, val) => { opts.count = parsePositiveInt(val, "count", 100) },
+    "--value": (opts, val) => { opts.value = parsePositiveInt(val, "value", 10) },
+    "--concurrency": (opts, val) => { opts.concurrency = parsePositiveInt(val, "concurrency", 5) },
+    "--delay": (opts, val) => { opts.delayMs = Math.max(0, Number.parseInt(val, 10) || 0) },
 }

275-275: Use const instead of let for receiverIdx.

The variable is never reassigned after initialization.

🔧 Suggested fix
-        let receiverIdx = (senderIdx + 1 + Math.floor(Math.random() * (walletConnections.length - 1))) % walletConnections.length
+        const receiverIdx = (senderIdx + 1 + Math.floor(Math.random() * (walletConnections.length - 1))) % walletConnections.length

240-263: Wallet connections are not cleaned up.

The Demos instances are connected but never disconnected when the test completes or fails. Consider adding cleanup to release resources gracefully.

💡 Suggested cleanup pattern
+    // Cleanup function
+    const cleanup = async () => {
+        for (const conn of walletConnections) {
+            try {
+                await conn.demos.disconnect?.()
+            } catch { /* ignore cleanup errors */ }
+        }
+    }
+
     // Run stress test
     console.log(`\n🔥 Starting stress test...`)

Then call await cleanup() before the final summary or in a finally block.

scripts/generate-test-wallets.ts (1)

117-135: Ensure output directory exists before writing mnemonics.
writeFileSync will fail if the output directory doesn’t exist (e.g., fresh checkout).

🧰 Suggested fix
-import { existsSync, readFileSync, writeFileSync } from "node:fs"
+import { existsSync, readFileSync, writeFileSync, mkdirSync } from "node:fs"
@@
     // Save wallet mnemonics to file
     const outputPath = path.resolve(options.outputPath)
+    mkdirSync(path.dirname(outputPath), { recursive: true })
     const walletsData = {
         generated_at: new Date().toISOString(),
         count: newWallets.length,
         balance: options.balance,
src/utilities/sharedState.ts (1)

105-108: Drop redundant type annotation to satisfy lint.
ESLint flags the explicit : number as trivially inferred.

🧹 Lint fix
-    l2psBatchNonce: number = 0 // Persistent nonce for L2PS batch transactions
+    l2psBatchNonce = 0 // Persistent nonce for L2PS batch transactions
scripts/l2ps-load-test.ts (1)

269-272: Avoid returning from forEach callback (Biome lint).
Wrap the body in a block to silence the lint rule.

🧹 Lint-friendly tweak
-        results.errors.slice(0, 5).forEach(e => console.log(`   - ${e}`))
+        results.errors.slice(0, 5).forEach(e => {
+            console.log(`   - ${e}`)
+        })
src/libs/l2ps/L2PSHashService.ts (2)

302-302: Use explicit typing instead of any for validator parameter.

The hashUpdateTx and validator parameters are typed as any, reducing type safety and making the code harder to maintain.

♻️ Suggested improvement
-    private async relayToValidators(hashUpdateTx: any): Promise<void> {
+    private async relayToValidators(hashUpdateTx: Transaction): Promise<void> {

Similarly for relayViaOmniProtocol:

-    private async relayViaOmniProtocol(validator: any, hashUpdateTx: any): Promise<boolean> {
+    private async relayViaOmniProtocol(validator: Peer, hashUpdateTx: Transaction): Promise<boolean> {

219-223: Early return on empty UIDs should be logged at debug level.

Silently returning when there are no joined L2PS networks makes debugging harder. Consider adding a debug log.

📝 Suggested improvement
             if (joinedUIDs.length === 0) {
+                log.debug("[L2PS Hash Service] No joined L2PS networks to process")
                 return // No L2PS networks to process
             }
src/libs/network/manageNodeCall.ts (1)

706-707: Replace console.log with structured logger.

Multiple L2PS endpoints use console.log instead of the project's log utility, which bypasses log levels and structured logging.

♻️ Suggested fix
         case "getL2PSParticipationById":
-            console.log("[L2PS] Received L2PS participation query")
+            log.debug("[L2PS] Received L2PS participation query")
             ...
         case "getL2PSMempoolInfo": {
-            console.log("[L2PS] Received L2PS mempool info request")
+            log.debug("[L2PS] Received L2PS mempool info request")
             ...
         case "getL2PSTransactions": {
-            console.log("[L2PS] Received L2PS transactions sync request")
+            log.debug("[L2PS] Received L2PS transactions sync request")
             ...
         case "getL2PSAccountTransactions": {
-            console.log("[L2PS] Received account transactions request")
+            log.debug("[L2PS] Received account transactions request")

Also applies to: 734-734, 767-767, 813-813

src/libs/l2ps/L2PSConcurrentSync.ts (4)

36-53: Fire-and-forget pattern silently swallows all errors.

The .catch(() => { ... }) with empty block and the outer catch that does nothing make it hard to debug discovery failures. At minimum, log discovery errors at debug level.

📝 Suggested improvement
                 peer.call({
                     method: "nodeCall",
                     params: [{
                         message: "getL2PSParticipationById",
                         data: { l2psUid: uid },
                         muid: `l2ps_discovery_${Date.now()}` // Unique ID
                     }]
                 }).then(response => {
                     if (response?.result === 200 && response?.response?.participating) {
                         addL2PSParticipant(uid, peer.identity)
                         log.debug(`[L2PS-SYNC] Discovered participant for ${uid}: ${peer.identity}`)
 
                         // Opportunistic sync after discovery
                         syncL2PSWithPeer(peer, uid)
                     }
-                }).catch(() => {
-                    // Ignore errors during discovery
+                }).catch((err) => {
+                    log.debug(`[L2PS-SYNC] Discovery query failed for ${uid} on ${peer.identity}: ${err instanceof Error ? err.message : String(err)}`)
                 })
 
-            } catch {
+            } catch (err) {
+                log.debug(`[L2PS-SYNC] Discovery error for ${uid}: ${err instanceof Error ? err.message : String(err)}`)
                 // Discovery errors are non-critical, peer may be unreachable
             }

41-41: Non-unique muid if called rapidly.

Using Date.now() for muid may produce duplicates if called within the same millisecond. Consider adding a random suffix or counter.

♻️ Suggested fix
-                        muid: `l2ps_discovery_${Date.now()}` // Unique ID
+                        muid: `l2ps_discovery_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`

103-103: Type assertion to any[] loses type safety.

Consider creating a proper interface for the transaction response shape to maintain type safety throughout the sync flow.

♻️ Suggested approach
interface L2PSTransactionResponse {
    hash: string
    l2ps_uid: string
    original_hash: string
    encrypted_tx: L2PSTransaction
    timestamp: number
    block_number: number
}

// Then use:
const txs = response.response.transactions as L2PSTransactionResponse[]

15-16: Module-level cache without TTL or size limits.

The l2psParticipantCache grows unbounded and entries never expire. If peers disconnect or leave networks, stale entries persist indefinitely.

Consider adding:

  1. A TTL for cache entries
  2. A maximum cache size
  3. Integration with peer disconnect events to remove stale entries
interface CachedParticipant {
    nodeId: string
    discoveredAt: number
}

const CACHE_TTL_MS = 5 * 60 * 1000 // 5 minutes
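Building on the snippet above, a self-contained sketch of a TTL-bounded, size-capped cache (the function and constants are illustrative, not the module's real API):

```typescript
interface CachedParticipant {
    nodeId: string
    discoveredAt: number
}

const CACHE_TTL_MS = 5 * 60 * 1000 // 5 minutes
const MAX_ENTRIES_PER_UID = 1000

const participantCache = new Map<string, CachedParticipant[]>()

function addParticipant(uid: string, nodeId: string, now = Date.now()): void {
    const entries = participantCache.get(uid) ?? []
    // Evict expired entries on every write so stale peers cannot accumulate
    const fresh = entries.filter(e => now - e.discoveredAt < CACHE_TTL_MS)
    if (!fresh.some(e => e.nodeId === nodeId)) {
        fresh.push({ nodeId, discoveredAt: now })
    }
    // Keep only the most recent entries to bound memory
    participantCache.set(uid, fresh.slice(-MAX_ENTRIES_PER_UID))
}
```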
src/libs/l2ps/zk/L2PSBatchProver.ts (1)

541-541: Synchronous file read blocks the event loop.

fs.readFileSync in verifyProof blocks the event loop. For consistency with the async pattern and to avoid blocking, consider using fs.promises.readFile.

♻️ Suggested fix
+import { readFile } from 'node:fs/promises';

     async verifyProof(batchProof: BatchProof): Promise<boolean> {
         ...
-        const vkey = JSON.parse(fs.readFileSync(vkeyPath, 'utf-8'));
+        const vkey = JSON.parse(await readFile(vkeyPath, 'utf-8'));
src/libs/l2ps/parallelNetworks.ts (1)

313-315: Consider whether missing signature should be an error.

Currently, a missing signature only logs a warning and proceeds with decryption. Depending on security requirements, you may want to reject transactions without signatures entirely.

// If signatures should be required:
} else {
    throw new Error(`L2PS transaction missing required signature for ${uid}`)
}

Comment on lines +44 to +67
function parseArgs(argv: string[]): CliOptions {
const options: CliOptions = {
count: 10,
balance: "1000000000000000000",
genesisPath: "data/genesis.json",
outputPath: "data/test-wallets.json",
}

for (let i = 2; i < argv.length; i++) {
const arg = argv[i]

if (arg === "--help") {
showHelp()
}

const handler = ARG_HANDLERS[arg]
if (handler && argv[i + 1]) {
handler(options, argv[i + 1])
i++
}
}

return options
}

⚠️ Potential issue | 🟡 Minor



Add validation for --count to prevent silent generation failures.

Number.parseInt() can return NaN or negative values, causing the generation loop to skip silently (since 0 < NaN and 0 < negative are both false). This results in zero wallets being generated without clear error feedback.

Validate that --count is a positive integer after parsing:

🛠️ Suggested validation
 function parseArgs(argv: string[]): CliOptions {
     const options: CliOptions = {
         count: 10,
         balance: "1000000000000000000",
         genesisPath: "data/genesis.json",
         outputPath: "data/test-wallets.json",
     }

     for (let i = 2; i < argv.length; i++) {
         const arg = argv[i]

         if (arg === "--help") {
             showHelp()
         }

         const handler = ARG_HANDLERS[arg]
         if (handler && argv[i + 1]) {
             handler(options, argv[i + 1])
             i++
         }
     }

+    if (!Number.isFinite(options.count) || options.count <= 0) {
+        throw new Error("--count must be a positive integer")
+    }
     return options
 }

Comment on lines +302 to +304
const elapsed = ((Date.now() - startTime) / 1000).toFixed(1)
const tps = (successCount / Number.parseFloat(elapsed)).toFixed(2)
console.log(` 📊 Progress: ${i + 1}/${options.count} | Success: ${successCount} | Failed: ${failCount} | TPS: ${tps}`)

⚠️ Potential issue | 🟡 Minor

Potential division by zero in TPS calculation.

If the first 10 transactions complete in under 100ms, elapsed could round to "0.0", and Number.parseFloat("0.0") returns 0, causing Infinity TPS. Guard against this edge case.

🛡️ Suggested fix
             if ((i + 1) % 10 === 0 || i === options.count - 1) {
-                const elapsed = ((Date.now() - startTime) / 1000).toFixed(1)
-                const tps = (successCount / Number.parseFloat(elapsed)).toFixed(2)
+                const elapsedSec = (Date.now() - startTime) / 1000
+                const tps = elapsedSec > 0 ? (successCount / elapsedSec).toFixed(2) : "N/A"
+                const elapsed = elapsedSec.toFixed(1)
                 console.log(`   📊 Progress: ${i + 1}/${options.count} | Success: ${successCount} | Failed: ${failCount} | TPS: ${tps}`)
             }

Comment on lines +324 to +325
const totalTime = (Date.now() - startTime) / 1000
const avgDuration = results.reduce((sum, r) => sum + r.duration, 0) / results.length

⚠️ Potential issue | 🟡 Minor

Guard against empty results array.

If options.count is somehow 0, results.length would be 0, causing division by zero when calculating avgDuration.

🛡️ Suggested fix
     const totalTime = (Date.now() - startTime) / 1000
-    const avgDuration = results.reduce((sum, r) => sum + r.duration, 0) / results.length
+    const avgDuration = results.length > 0 
+        ? results.reduce((sum, r) => sum + r.duration, 0) / results.length 
+        : 0

Comment on lines +453 to +459
private updateCycleTime(cycleTime: number): void {
this.stats.lastCycleTime = cycleTime

// Calculate running average
const totalTime = (this.stats.averageCycleTime * (this.stats.successfulCycles - 1)) + cycleTime
this.stats.averageCycleTime = Math.round(totalTime / this.stats.successfulCycles)
}

⚠️ Potential issue | 🟡 Minor

Potential division by zero in updateCycleTime.

When successfulCycles is 1, the calculation (this.stats.averageCycleTime * (this.stats.successfulCycles - 1)) evaluates to 0 * averageCycleTime, which is fine. However, if this method is ever called when successfulCycles is 0 (e.g., due to a bug), it will cause division by zero.

🛡️ Defensive fix
     private updateCycleTime(cycleTime: number): void {
         this.stats.lastCycleTime = cycleTime
 
         // Calculate running average
-        const totalTime = (this.stats.averageCycleTime * (this.stats.successfulCycles - 1)) + cycleTime
-        this.stats.averageCycleTime = Math.round(totalTime / this.stats.successfulCycles)
+        if (this.stats.successfulCycles > 0) {
+            const totalTime = (this.stats.averageCycleTime * (this.stats.successfulCycles - 1)) + cycleTime
+            this.stats.averageCycleTime = Math.round(totalTime / this.stats.successfulCycles)
+        }
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     private updateCycleTime(cycleTime: number): void {
         this.stats.lastCycleTime = cycleTime

         // Calculate running average
-        const totalTime = (this.stats.averageCycleTime * (this.stats.successfulCycles - 1)) + cycleTime
-        this.stats.averageCycleTime = Math.round(totalTime / this.stats.successfulCycles)
+        if (this.stats.successfulCycles > 0) {
+            const totalTime = (this.stats.averageCycleTime * (this.stats.successfulCycles - 1)) + cycleTime
+            this.stats.averageCycleTime = Math.round(totalTime / this.stats.successfulCycles)
+        }
     }
🤖 Prompt for AI Agents
In `@src/libs/l2ps/L2PSHashService.ts` around lines 453 - 459, The updateCycleTime
method can divide by zero when this.stats.successfulCycles is 0; modify
updateCycleTime in L2PSHashService to guard against that by checking if
this.stats.successfulCycles <= 0 and in that case set
this.stats.averageCycleTime = cycleTime (and return) before performing the
running-average formula, otherwise compute the total and assign Math.round(total
/ this.stats.successfulCycles) as before; ensure you still set
this.stats.lastCycleTime = cycleTime at the start.

Comment on lines +10 to +23
const isBun = (globalThis as any).Bun !== undefined;
if (isBun) {
// Suppress web-worker errors in Bun by patching dispatchEvent
const originalDispatchEvent = EventTarget.prototype.dispatchEvent;
EventTarget.prototype.dispatchEvent = function (event: any) {
if (!(event instanceof Event)) {
// Convert plain object to Event for Bun compatibility
const realEvent = new Event(event.type || 'message');
Object.assign(realEvent, event);
return originalDispatchEvent.call(this, realEvent);
}
return originalDispatchEvent.call(this, event);
};
}

⚠️ Potential issue | 🟡 Minor

Global prototype modification could cause unexpected side effects.

Patching EventTarget.prototype.dispatchEvent globally for Bun compatibility can affect other code in the process. Consider isolating this patch or using a more targeted approach.

💡 Suggestion

Consider wrapping the patch in a way that only affects the specific use case:

// Store reference to restore later if needed
const originalDispatchEvent = EventTarget.prototype.dispatchEvent;

// Apply patch only within the prover's scope if possible
// Or at minimum, add documentation about the global side effect

Also consider checking if the patch is already applied to avoid double-patching:

 if (isBun) {
+    if (!(EventTarget.prototype as any).__bunPatched) {
         const originalDispatchEvent = EventTarget.prototype.dispatchEvent;
         EventTarget.prototype.dispatchEvent = function (event: any) {
             ...
         };
+        (EventTarget.prototype as any).__bunPatched = true;
+    }
 }
🧰 Tools
🪛 ESLint

[error] 10-10: Extra semicolon. (semi)
[error] 13-13: Extra semicolon. (semi)
[error] 17-17: Extra semicolon. (semi)
[error] 18-18: Extra semicolon. (semi)
[error] 19-19: Extra semicolon. (semi)
[error] 21-21: Extra semicolon. (semi)
[error] 22-22: Extra semicolon. (semi)

🤖 Prompt for AI Agents
In `@src/libs/l2ps/zk/L2PSBatchProver.ts` around lines 10 - 23, The global patch
of EventTarget.prototype.dispatchEvent (wrapped under isBun) can have broad side
effects; modify the approach in L2PSBatchProver so the patch is applied only
when strictly needed: introduce a guarded, idempotent patch mechanism (e.g.,
check a module-scoped flag before replacing
EventTarget.prototype.dispatchEvent), store the originalDispatchEvent in a
module-scoped variable so you can restore it in a cleanup method on the
L2PSBatchProver instance (or after the specific operation completes), and scope
the usage so the patched behavior is only active during the prover’s operations
rather than for the whole process; ensure you reference and protect
EventTarget.prototype.dispatchEvent, originalDispatchEvent, and isBun when
implementing this change.
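A guarded, restorable patch of the kind described could be sketched as follows. `installBunEventPatch` and its property-filtering copy loop are illustrative assumptions, not the prover's actual code; filtering out keys already present on the `Event` also avoids assigning read-only properties like `type`:

```typescript
type Restore = () => void

let patched = false

// Install the dispatchEvent shim at most once and hand back a restore
// function so callers can scope the patch to a prover operation.
function installBunEventPatch(): Restore {
    if (patched) return () => {}
    const original = EventTarget.prototype.dispatchEvent
    EventTarget.prototype.dispatchEvent = function (this: EventTarget, event: any): boolean {
        if (!(event instanceof Event)) {
            // Convert a plain object to a real Event, copying only
            // properties the Event does not already define.
            const realEvent = new Event(event?.type ?? "message")
            for (const key of Object.keys(event ?? {})) {
                if (!(key in realEvent)) {
                    (realEvent as any)[key] = (event as any)[key]
                }
            }
            return original.call(this, realEvent)
        }
        return original.call(this, event)
    }
    patched = true
    return () => {
        EventTarget.prototype.dispatchEvent = original
        patched = false
    }
}
```

Calling the returned restore function after the prover finishes keeps the global side effect bounded to the operation that needed it.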

Comment on lines +275 to +282
async terminate(): Promise<void> {
if (this.childProcess) {
this.childProcess.kill();
this.childProcess = null;
this.processReady = false;
log.info('[L2PSBatchProver] Subprocess terminated');
}
}

⚠️ Potential issue | 🟡 Minor

Subprocess termination doesn't reject pending requests.

When terminate() is called, pending requests remain unresolved, potentially causing promise leaks. The error handler at lines 150-158 handles this for process errors, but explicit termination doesn't.

🐛 Suggested fix
     async terminate(): Promise<void> {
         if (this.childProcess) {
+            // Reject all pending requests before killing
+            for (const [id, pending] of this.pendingRequests) {
+                pending.reject(new Error('Subprocess terminated'));
+                this.pendingRequests.delete(id);
+            }
             this.childProcess.kill();
             this.childProcess = null;
             this.processReady = false;
             log.info('[L2PSBatchProver] Subprocess terminated');
         }
     }
🧰 Tools
🪛 ESLint

[error] 277-277: Extra semicolon. (semi)
[error] 278-278: Extra semicolon. (semi)
[error] 279-279: Extra semicolon. (semi)
[error] 280-280: Extra semicolon. (semi)

🤖 Prompt for AI Agents
In `@src/libs/l2ps/zk/L2PSBatchProver.ts` around lines 275 - 282, terminate()
currently kills the subprocess but leaves any pending requests unresolved;
update terminate() (and use the same pendingRequests Map used by the
request/response handlers) to iterate over pendingRequests, call each stored
reject callback with an Error like "process terminated", clear the map, then
proceed to kill the childProcess, set this.childProcess = null and
this.processReady = false, and log the termination so all awaiting promises are
rejected rather than leaked.

@github-actions

github-actions bot commented Feb 2, 2026

⚠️ MCP Memory Files Detected

This PR modifies .serena/ files. After merge, these changes will be automatically reverted to preserve branch-specific MCP memories.

Files that will be reverted:

  • .serena/memories/development_guidelines.md
  • .serena/memories/l2ps_architecture.md
  • .serena/memories/l2ps_code_patterns.md
  • .serena/memories/l2ps_implementation_status.md
  • .serena/memories/l2ps_onboarding_guide.md
  • .serena/memories/l2ps_overview.md
  • .serena/memories/l2ps_remaining_work.md
  • .serena/memories/project_purpose.md
  • .serena/memories/session_2025_01_31_l2ps_completion.md
  • .serena/memories/suggested_commands.md
  • .serena/memories/task_completion_checklist.md

@github-actions

github-actions bot commented Feb 2, 2026

⚠️ Beads Issue Tracking Files Detected

This PR modifies .beads/ issue tracking files. After merge, these changes will be automatically reverted to preserve branch-specific issue tracking.

Files that will be reverted:

  • .beads/metadata.json

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In @.gitignore:
- Around line 129-136: The .gitignore contains a redundant entry "bridges_docs"
(without trailing slash) that duplicates the existing "bridges_docs/" ignore
earlier; remove the duplicate "bridges_docs" line so only the canonical
"bridges_docs/" entry remains and the trailing-slash style is consistent.

In `@src/utilities/sharedState.ts`:
- Around line 112-114: The comment on the l2psBatchNonce field is misleading
about persistence; update the comment on the l2psBatchNonce property in
sharedState.ts to state it's session-persistent (e.g., "Session-persistent nonce
for L2PS batch transactions") to match the behavior in L2PSBatchAggregator
(which retains nonce only for the process lifetime), or alternatively confirm
and document if you intend to make it durable across restarts; reference the
l2psBatchNonce variable and L2PSBatchAggregator usage when making the change.
🧹 Nitpick comments (6)
src/libs/blockchain/chain.ts (1)

208-214: Inconsistent error handling compared to getTxByHash.

This method now has identical logic to getTxByHash (lines 74-89), but lacks the try-catch error handling present there. If the database query fails, errors will propagate without logging.

Consider one of these approaches:

  1. Add consistent error handling to match getTxByHash
  2. Consolidate both methods into one to eliminate duplication
♻️ Option 1: Add consistent error handling
 static async getTransactionFromHash(hash: string): Promise<Transaction | null> {
+    try {
         const rawTx = await this.transactions.findOneBy({ hash: ILike(hash) })
         if (!rawTx) {
             return null
         }
         return Transaction.fromRawTransaction(rawTx)
+    } catch (error) {
+        log.error("[ChainDB] [ ERROR ]: " + JSON.stringify(error))
+        throw error
+    }
 }
♻️ Option 2: Consolidate by delegating to `getTxByHash`
 static async getTransactionFromHash(hash: string): Promise<Transaction | null> {
-    const rawTx = await this.transactions.findOneBy({ hash: ILike(hash) })
-    if (!rawTx) {
-        return null
-    }
-    return Transaction.fromRawTransaction(rawTx)
+    return this.getTxByHash(hash)
 }
.env.example (2)

23-24: Consider adding performance warning for ZK proof enablement.

ZK-SNARK PLONK proof generation is CPU and memory-intensive. Enabling it by default (L2PS_ZK_ENABLED=true) may cause unexpected resource consumption for users unfamiliar with the performance implications. Consider adding an inline comment warning about resource requirements, or defaulting to false with documentation explaining when to enable it.

📝 Suggested documentation enhancement
 # ZK Proof Settings
+# WARNING: ZK proof generation is CPU/memory intensive. Only enable if hardware supports it.
+# Recommended minimum: 4+ CPU cores, 8GB+ RAM
 L2PS_ZK_ENABLED=true

13-74: Optional: Address dotenv-linter ordering warnings.

The static analysis tool flagged several UnorderedKey warnings suggesting alphabetical ordering of environment variables. The current logical grouping (related settings together) is reasonable and arguably more maintainable than strict alphabetical ordering. However, if your team prefers to satisfy the linter, you could reorder keys alphabetically within each section.

This is purely a style preference with no functional impact.

package.json (1)

26-26: Consider a cross‑platform wrapper (or document the bash requirement).

npm run on Windows won’t have bash by default, so this script will fail for contributors who don’t use Git Bash/WSL. If Windows support matters, consider a Node wrapper or explicitly document the bash prerequisite in onboarding docs.
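A minimal Node wrapper of the kind suggested could look like the sketch below (the helper names and the wrapped command are hypothetical; `spawnSync` picks `cmd.exe` on Windows and POSIX `sh` elsewhere):

```typescript
// Hypothetical cross-platform wrapper replacing a bash-only npm script.
import { spawnSync } from "node:child_process"

function shellFor(platform: NodeJS.Platform): { cmd: string; args: string[] } {
    // cmd.exe ships with Windows; sh is guaranteed on POSIX systems.
    return platform === "win32"
        ? { cmd: "cmd.exe", args: ["/d", "/s", "/c"] }
        : { cmd: "sh", args: ["-c"] }
}

function run(command: string): number {
    const { cmd, args } = shellFor(process.platform)
    const result = spawnSync(cmd, [...args, command], { stdio: "inherit" })
    // status is null when the child was killed by a signal; treat as failure.
    return result.status ?? 1
}
```

An npm script would then call `node dist/scripts/run-build.js` (path hypothetical) instead of invoking bash directly.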

src/libs/network/manageNodeCall.ts (1)

735-758: Avoid loading the entire L2PS mempool just to compute stats.
For large UIDs this can be heavy; consider a stats query (count + min/max timestamps) instead of pulling all rows.

♻️ Possible refactor (using a dedicated stats helper)
-                const transactions = await L2PSMempool.getByUID(data.l2psUid, "processed")
-                response.response = {
-                    l2psUid: data.l2psUid,
-                    transactionCount: transactions.length,
-                    lastTimestamp: transactions.length > 0
-                        ? transactions[transactions.length - 1].timestamp
-                        : 0,
-                    oldestTimestamp: transactions.length > 0
-                        ? transactions[0].timestamp
-                        : 0,
-                }
+                const stats = await L2PSMempool.getStatsByUID(data.l2psUid, "processed")
+                response.response = {
+                    l2psUid: data.l2psUid,
+                    transactionCount: stats.count,
+                    lastTimestamp: stats.lastTimestamp,
+                    oldestTimestamp: stats.oldestTimestamp,
+                }
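The suggested `getStatsByUID` helper would boil down to a single aggregate query. A sketch of its shape, written as plain SQL for clarity (the repo would route this through TypeORM; table and column names here are assumptions):

```typescript
// Assumed shape of the stats a dedicated helper would return.
interface MempoolStats {
    count: number
    oldestTimestamp: number
    lastTimestamp: number
}

// Build the aggregate query: COUNT/MIN/MAX run inside the database,
// so no transaction rows are transferred to the node process.
function buildStatsQuery(table: string): string {
    return (
        `SELECT COUNT(*) AS count, ` +
        `COALESCE(MIN(timestamp), 0) AS oldestTimestamp, ` +
        `COALESCE(MAX(timestamp), 0) AS lastTimestamp ` +
        `FROM ${table} WHERE l2ps_uid = $1 AND status = $2`
    )
}
```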
src/libs/blockchain/routines/Sync.ts (1)

580-620: Consider wiring the new peer-failover helpers or removing them.
findNextAvailablePeer / handlePeerUnreachable aren’t referenced in this file yet; either use them in the retry path or drop them to avoid dead code.

Comment on lines +129 to +136
.serena-backup
PR_COMMENTS
bridges_docs
claudedocs
dist
docs
local_tests
omniprotocol_fixtures_scripts

⚠️ Potential issue | 🟡 Minor

Remove redundant bridges_docs ignore entry.

bridges_docs/ is already ignored earlier; the new bridges_docs entry is redundant and inconsistent with the trailing slash style.

🧹 Suggested cleanup
-bridges_docs
📝 Committable suggestion


Suggested change
 .serena-backup
 PR_COMMENTS
-bridges_docs
 claudedocs
 dist
 docs
 local_tests
 omniprotocol_fixtures_scripts
🤖 Prompt for AI Agents
In @.gitignore around lines 129 - 136, The .gitignore contains a redundant entry
"bridges_docs" (without trailing slash) that duplicates the existing
"bridges_docs/" ignore earlier; remove the duplicate "bridges_docs" line so only
the canonical "bridges_docs/" entry remains and the trailing-slash style is
consistent.

Comment on lines +112 to +114
// SECTION L2PS
l2psJoinedUids: string[] = [] // UIDs of the L2PS networks that are joined to the node (loaded from the data directory)
l2psBatchNonce: number = 0 // Persistent nonce for L2PS batch transactions

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for l2psBatchNonce usage to understand persistence requirements
rg -n -C3 'l2psBatchNonce'

Repository: kynesyslabs/node

Length of output: 1887


Clarify comment: session-persistent nonce, not permanently persistent.

The comment "Persistent nonce for L2PS batch transactions" is misleading. The implementation resets to 0 on node restart. Looking at L2PSBatchAggregator.ts (lines 620-642), the nonce is intentionally session-persistent only: "Use shared state to persist nonce across the session... This survives within the same process lifetime."

Update the comment to clarify this is session-scoped (e.g., "Session-persistent nonce for L2PS batch transactions") or verify whether nonce reuse across restarts is acceptable for L2PS transaction security semantics.

🤖 Prompt for AI Agents
In `@src/utilities/sharedState.ts` around lines 112 - 114, The comment on the
l2psBatchNonce field is misleading about persistence; update the comment on the
l2psBatchNonce property in sharedState.ts to state it's session-persistent
(e.g., "Session-persistent nonce for L2PS batch transactions") to match the
behavior in L2PSBatchAggregator (which retains nonce only for the process
lifetime), or alternatively confirm and document if you intend to make it
durable across restarts; reference the l2psBatchNonce variable and
L2PSBatchAggregator usage when making the change.

… circuit range checks, and improve signaling server robustness with offline message rollback.
…rotection, and refactor network handlers for improved error logging and type safety.
…ool transactions, enhance L2PS circuit with underflow protection, and refactor graceful shutdown.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/libs/network/endpointHandlers.ts (1)

322-331: ⚠️ Potential issue | 🟠 Major

Wrap the subnet case in a block to satisfy noSwitchDeclarations.

Biome flags switch-scoped var declarations; this will fail lint.

✅ Suggested fix
-            case "subnet":
+            case "subnet": {
                 payload = tx.content.data
                 log.debug(
                     "[handleExecuteTransaction] Subnet payload: " +
                     JSON.stringify(payload[1]),
                 )
-                var subnetResult = await ServerHandlers.handleSubnetTx(
+                const subnetResult = await ServerHandlers.handleSubnetTx(
                     tx as L2PSTransaction,
                 )
                 result.response = subnetResult
                 break
+            }
🤖 Fix all issues with AI agents
In `@src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts`:
- Around line 706-731: The code computes a timestamp and includes it in
messageContent before computing messageHash, but then calls Date.now() again
when building the offlineMessage entity causing a potential millisecond
mismatch; change the offlineMessage creation to reuse the earlier timestamp
variable (converted to string to match TypeORM bigint behavior) so the stored
offlineMessage.timestamp exactly matches the timestamp used in
messageContent/messageHash (referencing variables messageContent, timestamp,
messageHash and the offlineMessageRepository.create call).

In `@src/libs/blockchain/l2ps_mempool.ts`:
- Around line 344-348: getHashForL2PS currently restricts the query to
L2PSMempoolTx records with status: "processed", which omits transactions that
later become "executed" or "batched" and breaks deterministic hashing; update
the FindManyOptions where-clause in getHashForL2PS to include all
non-failed/non-confirmed statuses (e.g., ["processed","executed","batched"] or
equivalent predicate) so the consolidated hash uses every relevant L2PSMempoolTx
for the given l2ps_uid.

In `@src/libs/blockchain/routines/Sync.ts`:
- Around line 119-133: The calls to discoverL2PSParticipants and
exchangeL2PSParticipation are passing an unnecessary second argument
getSharedState.l2psJoinedUids; both helpers already read shared state
internally, so remove the extra parameter and call them with only the peers
argument (e.g., replace discoverL2PSParticipants(peers,
getSharedState.l2psJoinedUids) with discoverL2PSParticipants(peers) and
similarly for exchangeL2PSParticipation(peers, ...)). Ensure any related
.then/.catch handlers and logging (e.g., log.debug/log.error) remain unchanged.

In `@src/libs/l2ps/L2PSConcurrentSync.ts`:
- Around line 128-145: processReceivedTransactions currently adds peer-supplied
items directly via L2PSMempool.addTransaction(..., "processed"), which bypasses
decrypt/signature/original_hash checks; change this to either route incoming txs
through the existing validation pipeline (call handleL2PS or the decrypt+confirm
functions used by the main flow) before adding, or insert them into the mempool
with status "pending" and run the same validation routine that verifies
encrypted_tx, original_hash and signatures and only promote to "processed" when
that routine succeeds; update references in processReceivedTransactions and
avoid direct "processed" writes via L2PSMempool.addTransaction until validation
completes.

In `@src/libs/l2ps/zk/L2PSBatchProver.ts`:
- Around line 168-186: The ready-timeout handler currently switches to
main-thread proving but leaves the spawned subprocess and the '__ready__'
pendingRequests entry alive; modify the timeout callback (the readyTimeout
closure) to also terminate/cleanup the child subprocess (e.g., call whatever
shutdown/kill on this.child or this.subprocess), remove the '__ready__' key from
this.pendingRequests, and clear any handlers so no lingering listeners remain,
then set this.useSubprocess = false and resolve; also ensure the checkReady
handler clears the timeout and pendingRequests on success to avoid duplicates.

In `@src/libs/omniprotocol/protocol/handlers/l2ps.ts`:
- Around line 397-405: Before calling L2PSHashes.updateHash you must
authenticate the update: verify the payload is signed by a valid participant and
that the signer is an active participant of the targeted L2PS (identified by
request.l2psUid). Add a step to (1) validate the signature on the incoming
payload (using your existing signature verification helper) against the provided
signer/public key fields on the request, and (2) confirm that the recovered
signer is in the participant set for the L2PS (use the module/method that
returns participants or an isParticipant check for request.l2psUid). Only after
both checks pass should you invoke L2PSHashes.updateHash(request.l2psUid,
request.consolidatedHash, request.transactionCount,
BigInt(request.blockNumber)); reject/throw otherwise.
- Around line 300-311: The handler handleL2PSGetBatchStatus is querying the L2PS
mempool with the wrong status; change the getByUID call on L2PSMempool to use
the "pending" status instead of "processed" (i.e.,
L2PSMempool.getByUID(request.l2psUid, "pending")) so it returns transactions
awaiting aggregation; keep the surrounding logic (encoding via encodeResponse
and the returned fields like pendingTransactions and l2psUid) unchanged.
🧹 Nitpick comments (3)
src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts (2)

408-411: Avoid @ts-ignore — declare an explicit return type instead.

The storeOfflineMessage method returns offlineMessage.id but lacks a return type annotation, forcing a @ts-ignore and double cast here. Adding proper typing eliminates the need for suppression.

♻️ Proposed fix

Update storeOfflineMessage signature (around line 692):

-    private async storeOfflineMessage(senderId: string, targetId: string, message: SerializedEncryptedObject) {
+    private async storeOfflineMessage(senderId: string, targetId: string, message: SerializedEncryptedObject): Promise<string> {

Then remove the ts-ignore and cast here:

-                // `@ts-ignore` - We know this returns a string ID now
-                messageId = await this.storeOfflineMessage(senderId, payload.targetId, payload.message) as unknown as string
+                messageId = await this.storeOfflineMessage(senderId, payload.targetId, payload.message)

844-854: Consider batching mutex acquisitions for better performance.

The loop acquires the mutex separately for each sender. For recipients with many offline messages from different senders, this creates multiple lock/unlock cycles. Batching the updates into a single mutex-protected block would be more efficient.

♻️ Proposed optimization
         // REVIEW: PR Fix `#9` - Reset offline message counts for senders after successful delivery
         if (sentCount > 0) {
-            // REVIEW: PR Fix `#2` - Use mutex to prevent lost updates during concurrent deliveries
-            for (const [senderId, count] of senderCounts.entries()) {
-                await this.countMutex.runExclusive(async () => {
+            await this.countMutex.runExclusive(async () => {
+                for (const [senderId, count] of senderCounts.entries()) {
                     const currentCount = this.offlineMessageCounts.get(senderId) || 0
                     const newCount = Math.max(0, currentCount - count)
                     if (newCount === 0) {
                         this.offlineMessageCounts.delete(senderId)
                     } else {
                         this.offlineMessageCounts.set(senderId, newCount)
                     }
-                })
-            }
+                }
+            })
             console.log(`Sent ${sentCount} offline messages to ${peerId}`)
         }
src/libs/network/routines/transactions/handleL2PS.ts (1)

109-112: Add explicit hash computation after decryption for defensive validation.

While Transaction.confirmTx() validates hash coherence (lines 65–69), it depends on the decrypted transaction already having the hash field populated. For stronger defensive coding and independence from external library behavior, compute the canonical hash explicitly after decryption:

// After decryptTx returns, ensure hash is populated
if (!decryptedTx.hash) {
    decryptedTx = Transaction.hash(decryptedTx)
    if (!decryptedTx) {
        return createErrorResponse(response, 400, "Failed to compute transaction hash")
    }
}

This makes the hash computation explicit and independent of whether demosdk populates it, improving code clarity and resilience.

Comment on lines +706 to +731
const timestamp = Date.now()
const messageContent = JSON.stringify({
message, // Keys in alphabetical order
senderId,
targetId,
timestamp,
})
const messageHash = Hashing.sha256(messageContent)

// NOTE: Future improvement - will be replaced with sender signature verification once client-side signing is implemented
// Current: Sign with node's private key for integrity (not authentication)
// REVIEW: PR Fix #14 - Add null safety check for private key access (location 2/3)
if (!getSharedState.identity?.ed25519?.privateKey) {
throw new Error("[Signaling Server] Private key not available for offline message signing")
}

const signature = Cryptography.sign(messageHash, getSharedState.identity.ed25519.privateKey)

const offlineMessage = offlineMessageRepository.create({
recipientPublicKey: targetId,
senderPublicKey: senderId,
messageHash,
encryptedContent: message,
signature: Buffer.from(signature).toString("base64"),
// REVIEW: PR Fix #9 - timestamp is string type to match TypeORM bigint behavior
timestamp: Date.now().toString(),

⚠️ Potential issue | 🟡 Minor

Timestamp mismatch between hash and stored entity.

The timestamp variable (line 706) is used in the message hash, but line 731 calls Date.now() again for the entity's timestamp field. These values can differ by milliseconds, creating inconsistency between what's hashed and what's stored.

🐛 Proposed fix
             const offlineMessage = offlineMessageRepository.create({
                 recipientPublicKey: targetId,
                 senderPublicKey: senderId,
                 messageHash,
                 encryptedContent: message,
                 signature: Buffer.from(signature).toString("base64"),
                 // REVIEW: PR Fix `#9` - timestamp is string type to match TypeORM bigint behavior
-                timestamp: Date.now().toString(),
+                timestamp: timestamp.toString(),
                 status: "pending",
             })
📝 Committable suggestion


Suggested change
 const timestamp = Date.now()
 const messageContent = JSON.stringify({
     message, // Keys in alphabetical order
     senderId,
     targetId,
     timestamp,
 })
 const messageHash = Hashing.sha256(messageContent)

 // NOTE: Future improvement - will be replaced with sender signature verification once client-side signing is implemented
 // Current: Sign with node's private key for integrity (not authentication)
 // REVIEW: PR Fix #14 - Add null safety check for private key access (location 2/3)
 if (!getSharedState.identity?.ed25519?.privateKey) {
     throw new Error("[Signaling Server] Private key not available for offline message signing")
 }

 const signature = Cryptography.sign(messageHash, getSharedState.identity.ed25519.privateKey)

 const offlineMessage = offlineMessageRepository.create({
     recipientPublicKey: targetId,
     senderPublicKey: senderId,
     messageHash,
     encryptedContent: message,
     signature: Buffer.from(signature).toString("base64"),
     // REVIEW: PR Fix #9 - timestamp is string type to match TypeORM bigint behavior
-    timestamp: Date.now().toString(),
+    timestamp: timestamp.toString(),
     status: "pending",
 })
🤖 Prompt for AI Agents
In `@src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts`
around lines 706 - 731, The code computes a timestamp and includes it in
messageContent before computing messageHash, but then calls Date.now() again
when building the offlineMessage entity causing a potential millisecond
mismatch; change the offlineMessage creation to reuse the earlier timestamp
variable (converted to string to match TypeORM bigint behavior) so the stored
offlineMessage.timestamp exactly matches the timestamp used in
messageContent/messageHash (referencing variables messageContent, timestamp,
messageHash and the offlineMessageRepository.create call).

Comment on lines +344 to +348
const options: FindManyOptions<L2PSMempoolTx> = {
where: {
l2ps_uid: l2psUid,
status: "processed", // Only include successfully processed transactions
},

⚠️ Potential issue | 🟠 Major

Consolidated hash ignores executed/batched transactions.

getHashForL2PS filters only status: "processed". After execution, status changes to "executed", so those txs drop out of the hash, which can break validator relay determinism. Consider including all non-failed/non-confirmed statuses (e.g., processed/executed/batched).

✅ Suggested fix
-                where: {
-                    l2ps_uid: l2psUid,
-                    status: "processed",  // Only include successfully processed transactions
-                },
+                where: {
+                    l2ps_uid: l2psUid,
+                    status: In([
+                        L2PS_STATUS.PROCESSED,
+                        L2PS_STATUS.EXECUTED,
+                        L2PS_STATUS.BATCHED,
+                    ]),
+                },
📝 Committable suggestion


Suggested change
            const options: FindManyOptions<L2PSMempoolTx> = {
                where: {
                    l2ps_uid: l2psUid,
                    status: In([
                        L2PS_STATUS.PROCESSED,
                        L2PS_STATUS.EXECUTED,
                        L2PS_STATUS.BATCHED,
                    ]),
                },
🤖 Prompt for AI Agents
In `@src/libs/blockchain/l2ps_mempool.ts` around lines 344 - 348, getHashForL2PS
currently restricts the query to L2PSMempoolTx records with status: "processed",
which omits transactions that later become "executed" or "batched" and breaks
deterministic hashing; update the FindManyOptions where-clause in getHashForL2PS
to include all non-failed/non-confirmed statuses (e.g.,
["processed","executed","batched"] or equivalent predicate) so the consolidated
hash uses every relevant L2PSMempoolTx for the given l2ps_uid.

@github-actions

github-actions bot commented Feb 3, 2026

⚠️ Beads Issue Tracking Files Detected

This PR modifies .beads/ issue tracking files. After merge, these changes will be automatically reverted to preserve branch-specific issue tracking.

Files that will be reverted:

  • .beads/metadata.json

@github-actions

github-actions bot commented Feb 3, 2026

⚠️ MCP Memory Files Detected

This PR modifies .serena/ files. After merge, these changes will be automatically reverted to preserve branch-specific MCP memories.

Files that will be reverted:

  • .serena/memories/development_guidelines.md
  • .serena/memories/l2ps_architecture.md
  • .serena/memories/l2ps_code_patterns.md
  • .serena/memories/l2ps_implementation_status.md
  • .serena/memories/l2ps_onboarding_guide.md
  • .serena/memories/l2ps_overview.md
  • .serena/memories/l2ps_remaining_work.md
  • .serena/memories/project_purpose.md
  • .serena/memories/session_2025_01_31_l2ps_completion.md
  • .serena/memories/suggested_commands.md
  • .serena/memories/task_completion_checklist.md

@sonarqubecloud

sonarqubecloud bot commented Feb 3, 2026

Quality Gate failed

Failed conditions
1 Security Hotspot
3.2% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@src/libs/blockchain/l2ps_mempool.ts`:
- Around line 265-268: The code currently does Number(result.max_seq) and can
produce NaN which will propagate; update the logic around result.max_seq and the
maxSeq local so you validate the numeric conversion (e.g., const parsed =
Number(result?.max_seq); if (!Number.isFinite(parsed)) parsedFallback = -1) and
then use that validated value when returning parsedFallback + 1; reference
result.max_seq and the maxSeq variable in your change and ensure null/undefined
and non-numeric strings both fall back to -1 before adding 1.
🧹 Nitpick comments (4)
src/libs/blockchain/l2ps_mempool.ts (4)

195-212: Remove unnecessary any cast for private method access.

Line 197 uses (this as any).getNextSequenceNumber(...) which bypasses type checking. Since getNextSequenceNumber is a private static method of the same class, you can call it directly using the class name.

♻️ Suggested fix
-                    const sequenceNumber = await (this as any).getNextSequenceNumber(l2psUid, transactionalEntityManager)
+                    const sequenceNumber = await L2PSMempool.getNextSequenceNumber(l2psUid, transactionalEntityManager)

310-310: Add non-null assertion after ensureInitialized() for type safety.

After ensureInitialized() completes, this.repo is guaranteed to be non-null, but TypeScript doesn't infer this. Add ! to this.repo calls to make the invariant explicit and avoid potential type errors. This pattern applies to lines 310, 328, 378, 427, 461, 507, 553, 588, 609, 636, 667, 684, 701, 727, 768, 771, and 784.

♻️ Example fix
-            return await this.repo.find(options)
+            return await this.repo!.find(options)

432-432: Inconsistent null handling for result.affected.

Line 432 accesses result.affected > 0 directly, but affected can be undefined according to TypeORM's UpdateResult type. Line 470 correctly uses (result.affected ?? 0) > 0. Standardize across all methods for consistency.

♻️ Suggested fix
-            const updated = result.affected > 0
+            const updated = (result.affected ?? 0) > 0

Also apply to line 512:

-            const updated = result.affected || 0
+            const updated = result.affected ?? 0

778-794: Add type annotations to reduce accumulators.

The reduce callbacks use an empty object {} as the initial value without type annotation, resulting in implicit any for the accumulator. This could cause type inference issues.

♻️ Suggested fix
-            const transactionsByUID = byUID.reduce((acc, row) => {
+            const transactionsByUID = byUID.reduce<Record<string, number>>((acc, row) => {
                 acc[row.l2ps_uid] = Number.parseInt(row.count, 10)
                 return acc
             }, {})

             // ...

-            const transactionsByStatus = byStatus.reduce((acc, row) => {
+            const transactionsByStatus = byStatus.reduce<Record<string, number>>((acc, row) => {
                 acc[row.status] = Number.parseInt(row.count, 10)
                 return acc
             }, {})

Comment on lines +265 to +268
// result.max_seq may be a string from BIGINT cast, so we use Number() or BigInt()
// We return Number since we expect sequence to stay within JS precision (2^53 - 1)
const maxSeq = result?.max_seq !== null && result?.max_seq !== undefined ? Number(result.max_seq) : -1
return maxSeq + 1

⚠️ Potential issue | 🟡 Minor

Handle potential NaN from malformed sequence numbers.

If result.max_seq contains a non-numeric string (e.g., corrupted data), Number(result.max_seq) returns NaN, causing NaN + 1 to propagate as NaN. Consider adding a validation check.

🛡️ Suggested fix
-            const maxSeq = result?.max_seq !== null && result?.max_seq !== undefined ? Number(result.max_seq) : -1
-            return maxSeq + 1
+            const rawMaxSeq = result?.max_seq
+            const maxSeq = rawMaxSeq !== null && rawMaxSeq !== undefined ? Number(rawMaxSeq) : -1
+            if (Number.isNaN(maxSeq)) {
+                log.warning(`[L2PS Mempool] Invalid sequence number in DB for ${l2psUid}: ${rawMaxSeq}, falling back to 0`)
+                return 0
+            }
+            return maxSeq + 1
📝 Committable suggestion


Suggested change
            // result.max_seq may be a string from BIGINT cast, so we use Number() or BigInt()
            // We return Number since we expect sequence to stay within JS precision (2^53 - 1)
            const rawMaxSeq = result?.max_seq
            const maxSeq = rawMaxSeq !== null && rawMaxSeq !== undefined ? Number(rawMaxSeq) : -1
            if (Number.isNaN(maxSeq)) {
                log.warning(`[L2PS Mempool] Invalid sequence number in DB for ${l2psUid}: ${rawMaxSeq}, falling back to 0`)
                return 0
            }
            return maxSeq + 1
🤖 Prompt for AI Agents
In `@src/libs/blockchain/l2ps_mempool.ts` around lines 265 - 268, The code
currently does Number(result.max_seq) and can produce NaN which will propagate;
update the logic around result.max_seq and the maxSeq local so you validate the
numeric conversion (e.g., const parsed = Number(result?.max_seq); if
(!Number.isFinite(parsed)) parsedFallback = -1) and then use that validated
value when returning parsedFallback + 1; reference result.max_seq and the maxSeq
variable in your change and ensure null/undefined and non-numeric strings both
fall back to -1 before adding 1.
