From 49cf0e160df0b87f9018c87510bba12cef4fa308 Mon Sep 17 00:00:00 2001 From: tac0turtle Date: Wed, 28 Jan 2026 13:20:12 +0100 Subject: [PATCH 1/4] rewrite and restructure docs --- docs/concepts/block-lifecycle.md | 759 +++++++++++++ docs/concepts/data-availability.md | 75 ++ docs/concepts/fee-systems.md | 153 +++ docs/concepts/finality.md | 55 + docs/concepts/p2p-networking.md | 60 ++ docs/concepts/sequencing.md | 108 ++ docs/concepts/transaction-flow.md | 53 + docs/ev-abci/integration-guide.md | 130 +++ docs/ev-abci/migration-from-cometbft.md | 286 +++++ docs/ev-abci/modules/migration-manager.md | 143 +++ docs/ev-abci/modules/staking-wrapper.md | 96 ++ docs/ev-abci/overview.md | 76 ++ docs/ev-abci/rpc-compatibility.md | 135 +++ docs/ev-reth/configuration.md | 128 +++ docs/ev-reth/engine-api.md | 170 +++ docs/ev-reth/features/base-fee-redirect.md | 86 ++ docs/ev-reth/features/contract-size-limits.md | 71 ++ docs/ev-reth/features/deploy-allowlist.md | 77 ++ docs/ev-reth/features/mint-precompile.md | 87 ++ docs/ev-reth/overview.md | 68 ++ docs/getting-started/choose-your-path.md | 118 +++ .../cosmos/integrate-ev-abci.md | 109 ++ .../getting-started/cosmos/migration-guide.md | 115 ++ docs/getting-started/cosmos/quickstart.md | 85 ++ .../custom/implement-executor.md | 212 ++++ docs/getting-started/custom/quickstart.md | 140 +++ docs/getting-started/evm/deploy-contracts.md | 144 +++ docs/getting-started/evm/quickstart.md | 88 ++ docs/getting-started/evm/setup-ev-reth.md | 134 +++ docs/guides/advanced/based-sequencing.md | 76 ++ docs/guides/advanced/custom-precompiles.md | 11 + docs/guides/advanced/forced-inclusion.md | 128 +++ docs/guides/da-layers/celestia.md | 153 +++ docs/guides/da-layers/local-da.md | 56 + docs/guides/operations/deployment.md | 49 + docs/guides/operations/monitoring.md | 79 ++ docs/guides/operations/troubleshooting.md | 10 + docs/guides/operations/upgrades.md | 9 + docs/guides/running-nodes/aggregator.md | 12 + docs/guides/running-nodes/attester.md 
| 9 + docs/guides/running-nodes/full-node.md | 104 ++ docs/guides/running-nodes/light-node.md | 9 + docs/guides/tools/blob-decoder.md | 158 +++ docs/guides/tools/visualizer.md | 240 +++++ docs/overview/architecture.md | 184 ++++ docs/overview/execution-environments.md | 31 + docs/overview/what-is-evolve.md | 95 ++ docs/reference/api/abci-rpc.md | 9 + docs/reference/api/engine-api.md | 10 + docs/reference/api/rpc-endpoints.md | 10 + docs/reference/configuration/ev-abci-flags.md | 8 + .../reference/configuration/ev-node-config.md | 999 ++++++++++++++++++ .../configuration/ev-reth-chainspec.md | 12 + docs/reference/interfaces/da.md | 12 + docs/reference/interfaces/executor.md | 11 + docs/reference/interfaces/sequencer.md | 11 + docs/reference/specs/block-manager.md | 759 +++++++++++++ docs/reference/specs/block-validity.md | 125 +++ docs/reference/specs/da.md | 63 ++ docs/reference/specs/full-node.md | 107 ++ docs/reference/specs/header-sync.md | 108 ++ docs/reference/specs/out-of-order-blocks.png | Bin 0 -> 27206 bytes docs/reference/specs/overview.md | 17 + docs/reference/specs/store.md | 92 ++ docs/reference/specs/termination.png | Bin 0 -> 42225 bytes 65 files changed, 7727 insertions(+) create mode 100644 docs/concepts/block-lifecycle.md create mode 100644 docs/concepts/data-availability.md create mode 100644 docs/concepts/fee-systems.md create mode 100644 docs/concepts/finality.md create mode 100644 docs/concepts/p2p-networking.md create mode 100644 docs/concepts/sequencing.md create mode 100644 docs/concepts/transaction-flow.md create mode 100644 docs/ev-abci/integration-guide.md create mode 100644 docs/ev-abci/migration-from-cometbft.md create mode 100644 docs/ev-abci/modules/migration-manager.md create mode 100644 docs/ev-abci/modules/staking-wrapper.md create mode 100644 docs/ev-abci/overview.md create mode 100644 docs/ev-abci/rpc-compatibility.md create mode 100644 docs/ev-reth/configuration.md create mode 100644 docs/ev-reth/engine-api.md create mode 
100644 docs/ev-reth/features/base-fee-redirect.md create mode 100644 docs/ev-reth/features/contract-size-limits.md create mode 100644 docs/ev-reth/features/deploy-allowlist.md create mode 100644 docs/ev-reth/features/mint-precompile.md create mode 100644 docs/ev-reth/overview.md create mode 100644 docs/getting-started/choose-your-path.md create mode 100644 docs/getting-started/cosmos/integrate-ev-abci.md create mode 100644 docs/getting-started/cosmos/migration-guide.md create mode 100644 docs/getting-started/cosmos/quickstart.md create mode 100644 docs/getting-started/custom/implement-executor.md create mode 100644 docs/getting-started/custom/quickstart.md create mode 100644 docs/getting-started/evm/deploy-contracts.md create mode 100644 docs/getting-started/evm/quickstart.md create mode 100644 docs/getting-started/evm/setup-ev-reth.md create mode 100644 docs/guides/advanced/based-sequencing.md create mode 100644 docs/guides/advanced/custom-precompiles.md create mode 100644 docs/guides/advanced/forced-inclusion.md create mode 100644 docs/guides/da-layers/celestia.md create mode 100644 docs/guides/da-layers/local-da.md create mode 100644 docs/guides/operations/deployment.md create mode 100644 docs/guides/operations/monitoring.md create mode 100644 docs/guides/operations/troubleshooting.md create mode 100644 docs/guides/operations/upgrades.md create mode 100644 docs/guides/running-nodes/aggregator.md create mode 100644 docs/guides/running-nodes/attester.md create mode 100644 docs/guides/running-nodes/full-node.md create mode 100644 docs/guides/running-nodes/light-node.md create mode 100644 docs/guides/tools/blob-decoder.md create mode 100644 docs/guides/tools/visualizer.md create mode 100644 docs/overview/architecture.md create mode 100644 docs/overview/execution-environments.md create mode 100644 docs/overview/what-is-evolve.md create mode 100644 docs/reference/api/abci-rpc.md create mode 100644 docs/reference/api/engine-api.md create mode 100644 
docs/reference/api/rpc-endpoints.md create mode 100644 docs/reference/configuration/ev-abci-flags.md create mode 100644 docs/reference/configuration/ev-node-config.md create mode 100644 docs/reference/configuration/ev-reth-chainspec.md create mode 100644 docs/reference/interfaces/da.md create mode 100644 docs/reference/interfaces/executor.md create mode 100644 docs/reference/interfaces/sequencer.md create mode 100644 docs/reference/specs/block-manager.md create mode 100644 docs/reference/specs/block-validity.md create mode 100644 docs/reference/specs/da.md create mode 100644 docs/reference/specs/full-node.md create mode 100644 docs/reference/specs/header-sync.md create mode 100644 docs/reference/specs/out-of-order-blocks.png create mode 100644 docs/reference/specs/overview.md create mode 100644 docs/reference/specs/store.md create mode 100644 docs/reference/specs/termination.png diff --git a/docs/concepts/block-lifecycle.md b/docs/concepts/block-lifecycle.md new file mode 100644 index 0000000000..c97171f90e --- /dev/null +++ b/docs/concepts/block-lifecycle.md @@ -0,0 +1,759 @@ +# Block Components + +## Abstract + +The block package provides a modular component-based architecture for handling block-related operations in full nodes. Instead of a single monolithic manager, the system is divided into specialized components that work together, each responsible for specific aspects of block processing. This architecture enables better separation of concerns, easier testing, and more flexible node configurations. 
+
+The main components are:
+
+- **Executor**: Handles block production and state transitions (aggregator nodes only)
+- **Reaper**: Periodically retrieves transactions and submits them to the sequencer (aggregator nodes only)
+- **Submitter**: Manages submission of headers and data to the DA network (aggregator nodes only)
+- **Syncer**: Handles synchronization from both DA and P2P sources (all full nodes)
+- **Cache Manager**: Coordinates caching and tracking of blocks across all components
+
+A full node coordinates these components based on its role:
+
+- **Aggregator nodes**: Use all components for block production, submission, and synchronization
+- **Non-aggregator full nodes**: Use only Syncer and Cache for block synchronization
+
+```mermaid
+sequenceDiagram
+    title Overview of Block Manager
+
+    participant User
+    participant Sequencer
+    participant Full Node 1
+    participant Full Node 2
+    participant DA Layer
+
+    User->>Sequencer: Send Tx
+    Sequencer->>Sequencer: Generate Block
+    Sequencer->>DA Layer: Publish Block
+
+    Sequencer->>Full Node 1: Gossip Block
+    Sequencer->>Full Node 2: Gossip Block
+    Full Node 1->>Full Node 1: Verify Block
+    Full Node 1->>Full Node 2: Gossip Block
+    Full Node 1->>Full Node 1: Mark Block Soft Confirmed
+
+    Full Node 2->>Full Node 2: Verify Block
+    Full Node 2->>Full Node 2: Mark Block Soft Confirmed
+
+    DA Layer->>Full Node 1: Retrieve Block
+    Full Node 1->>Full Node 1: Mark Block DA Included
+
+    DA Layer->>Full Node 2: Retrieve Block
+    Full Node 2->>Full Node 2: Mark Block DA Included
+```
+
+### Component Architecture Overview
+
+```mermaid
+flowchart TB
+    subgraph Block Components [Modular Block Components]
+        EXE[Executor<br/>Block Production]
+        REA[Reaper<br/>Tx Collection]
+        SUB[Submitter<br/>DA Submission]
+        SYN[Syncer<br/>Block Sync]
+        CAC[Cache Manager<br/>State Tracking]
+    end
+
+    subgraph External Components
+        CEXE[Core Executor]
+        SEQ[Sequencer]
+        DA[DA Layer]
+        HS[Header Store/P2P]
+        DS[Data Store/P2P]
+        ST[Local Store]
+    end
+
+    REA -->|GetTxs| CEXE
+    REA -->|SubmitBatch| SEQ
+    REA -->|Notify| EXE
+
+    EXE -->|CreateBlock| CEXE
+    EXE -->|ApplyBlock| CEXE
+    EXE -->|Save| ST
+    EXE -->|Track| CAC
+
+    EXE -->|Headers| SUB
+    EXE -->|Data| SUB
+    SUB -->|Submit| DA
+    SUB -->|Track| CAC
+
+    DA -->|Retrieve| SYN
+    HS -->|Headers| SYN
+    DS -->|Data| SYN
+
+    SYN -->|ApplyBlock| CEXE
+    SYN -->|Save| ST
+    SYN -->|Track| CAC
+    SYN -->|SetFinal| CEXE
+
+    CAC -->|Coordinate| EXE
+    CAC -->|Coordinate| SUB
+    CAC -->|Coordinate| SYN
+```
+
+## Protocol/Component Description
+
+The block components are initialized based on the node type:
+
+### Aggregator Components
+
+Aggregator nodes create all components for full block production and synchronization capabilities:
+
+```go
+components := block.NewAggregatorComponents(
+    config,    // Node configuration
+    genesis,   // Genesis state
+    store,     // Local datastore
+    executor,  // Core executor for state transitions
+    sequencer, // Sequencer client
+    da,        // DA client
+    signer,    // Block signing key
+    // P2P stores and options...
+)
+```
+
+### Non-Aggregator Components
+
+Non-aggregator full nodes create only synchronization components:
+
+```go
+components := block.NewSyncComponents(
+    config,   // Node configuration
+    genesis,  // Genesis state
+    store,    // Local datastore
+    executor, // Core executor for state transitions
+    da,       // DA client
+    // P2P stores and options...
(no signer or sequencer needed) +) +``` + +### Component Initialization Parameters + +| **Name** | **Type** | **Description** | +| --------------------------- | ----------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| signing key | crypto.PrivKey | used for signing blocks and data after creation | +| config | config.BlockManagerConfig | block manager configurations (see config options below) | +| genesis | \*cmtypes.GenesisDoc | initialize the block manager with genesis state (genesis configuration defined in `config/genesis.json` file under the app directory) | +| store | store.Store | local datastore for storing chain blocks and states (default local store path is `$db_dir/evolve` and `db_dir` specified in the `config.yaml` file under the app directory) | +| mempool, proxyapp, eventbus | mempool.Mempool, proxy.AppConnConsensus, \*cmtypes.EventBus | for initializing the executor (state transition function). 
mempool is also used in the manager to check for availability of transactions for lazy block production | +| dalc | da.DAClient | the data availability light client used to submit and retrieve blocks to DA network | +| headerStore | *goheaderstore.Store[*types.SignedHeader] | to store and retrieve block headers gossiped over the P2P network | +| dataStore | *goheaderstore.Store[*types.SignedData] | to store and retrieve block data gossiped over the P2P network | +| signaturePayloadProvider | types.SignaturePayloadProvider | optional custom provider for header signature payloads | +| sequencer | core.Sequencer | used to retrieve batches of transactions from the sequencing layer | +| reaper | \*Reaper | component that periodically retrieves transactions from the executor and submits them to the sequencer | + +### Configuration Options + +The block components share a common configuration: + +| Name | Type | Description | +| ------------------------ | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | +| BlockTime | time.Duration | time interval used for block production and block retrieval from block store ([`defaultBlockTime`][defaultBlockTime]) | +| DABlockTime | time.Duration | time interval used for both block publication to DA network and block retrieval from DA network ([`defaultDABlockTime`][defaultDABlockTime]) | +| DAStartHeight | uint64 | block retrieval from DA network starts from this height | +| LazyBlockInterval | time.Duration | time interval used for block production in lazy aggregator mode even when there are no transactions ([`defaultLazyBlockTime`][defaultLazyBlockTime]) | +| LazyMode | bool | when set to true, enables lazy aggregation mode which produces blocks only when transactions are available or at LazyBlockInterval intervals | +| MaxPendingHeadersAndData | uint64 | maximum number of pending headers and data blocks before pausing 
block production (default: 100) | +| MaxSubmitAttempts | int | maximum number of retry attempts for DA submissions (default: 30) | +| MempoolTTL | int | number of blocks to wait when transaction is stuck in DA mempool (default: 25) | +| GasPrice | float64 | gas price for DA submissions (-1 for automatic/default) | +| GasMultiplier | float64 | multiplier for gas price on DA submission retries (default: 1.3) | +| Namespace | da.Namespace | DA namespace ID for block submissions (deprecated, use HeaderNamespace and DataNamespace instead) | +| HeaderNamespace | string | namespace ID for submitting headers to DA layer (automatically encoded by the node) | +| DataNamespace | string | namespace ID for submitting data to DA layer (automatically encoded by the node) | +| RequestTimeout | duration | per-request timeout for DA `GetIDs`/`Get` calls; higher values tolerate slow DA nodes, lower values fail faster (default: 30s) | + +### Block Production (Executor Component) + +When the full node is operating as an aggregator, the **Executor component** handles block production. There are two modes of block production, which can be specified in the block manager configurations: `normal` and `lazy`. + +In `normal` mode, the block manager runs a timer, which is set to the `BlockTime` configuration parameter, and continuously produces blocks at `BlockTime` intervals. + +In `lazy` mode, the block manager implements a dual timer mechanism: + +```mermaid +flowchart LR + subgraph Lazy Aggregation Mode + R[Reaper] -->|GetTxs| CE[Core Executor] + CE -->|Txs Available| R + R -->|Submit to Sequencer| S[Sequencer] + R -->|NotifyNewTransactions| N[txNotifyCh] + + N --> E{Executor Logic} + BT[blockTimer] --> E + LT[lazyTimer] --> E + + E -->|Txs Available| P1[Produce Block with Txs] + E -->|No Txs & LazyTimer| P2[Produce Empty Block] + + P1 --> B[Block Creation] + P2 --> B + end +``` + +1. A `blockTimer` that triggers block production at regular intervals when transactions are available +2. 
A `lazyTimer` that ensures blocks are produced at `LazyBlockInterval` intervals even during periods of inactivity + +The block manager starts building a block when any transaction becomes available in the mempool via a notification channel (`txNotifyCh`). When the `Reaper` detects new transactions, it calls `Manager.NotifyNewTransactions()`, which performs a non-blocking signal on this channel. The block manager also produces empty blocks at regular intervals to maintain consistency with the DA layer, ensuring a 1:1 mapping between DA layer blocks and execution layer blocks. + +The Reaper component periodically retrieves transactions from the core executor and submits them to the sequencer. It runs independently and notifies the Executor component when new transactions are available, enabling responsive block production in lazy mode. + +#### Building the Block + +The Executor component of aggregator nodes performs the following steps to produce a block: + +```mermaid +flowchart TD + A[Timer Trigger / Transaction Notification] --> B[Retrieve Batch] + B --> C{Transactions Available?} + C -->|Yes| D[Create Block with Txs] + C -->|No| E[Create Empty Block] + D --> F[Generate Header & Data] + E --> F + F --> G[Sign Header → SignedHeader] + F --> H[Sign Data → SignedData] + G --> I[Apply Block] + H --> I + I --> J[Update State] + J --> K[Save to Store] + K --> L[Add to pendingHeaders] + K --> M[Add to pendingData] + L --> N[Broadcast Header to P2P] + M --> O[Broadcast Data to P2P] +``` + +- Retrieve a batch of transactions using `retrieveBatch()` which interfaces with the sequencer +- Call `CreateBlock` using executor with the retrieved transactions +- Create separate header and data structures from the block +- Sign the header using `signing key` to generate `SignedHeader` +- Sign the data using `signing key` to generate `SignedData` (if transactions exist) +- Call `ApplyBlock` using executor to generate an updated state +- Save the block, validators, and updated state 
to local store +- Add the newly generated header to `pendingHeaders` queue +- Add the newly generated data to `pendingData` queue (if not empty) +- Publish the newly generated header and data to channels to notify other components of the sequencer node (such as block and header gossip) + +Note: When no transactions are available, the block manager creates blocks with empty data using a special `dataHashForEmptyTxs` marker. The header and data separation architecture allows headers and data to be submitted and retrieved independently from the DA layer. + +### Block Publication to DA Network (Submitter Component) + +The **Submitter component** of aggregator nodes implements separate submission loops for headers and data, both operating at `DABlockTime` intervals. Headers and data are submitted to different namespaces to improve scalability and allow for more flexible data availability strategies: + +```mermaid +flowchart LR + subgraph Header Submission + H1[pendingHeaders Queue] --> H2[Header Submission Loop] + H2 --> H3[Marshal to Protobuf] + H3 --> H4[Submit to DA] + H4 -->|Success| H5[Remove from Queue] + H4 -->|Failure| H6[Keep in Queue & Retry] + end + + subgraph Data Submission + D1[pendingData Queue] --> D2[Data Submission Loop] + D2 --> D3[Marshal to Protobuf] + D3 --> D4[Submit to DA] + D4 -->|Success| D5[Remove from Queue] + D4 -->|Failure| D6[Keep in Queue & Retry] + end + + H2 -.->|DABlockTime| H2 + D2 -.->|DABlockTime| D2 +``` + +#### Header Submission Loop + +The `HeaderSubmissionLoop` manages the submission of signed headers to the DA network: + +- Retrieves pending headers from the `pendingHeaders` queue +- Marshals headers to protobuf format +- Submits to DA using the generic `submitToDA` helper with the configured `HeaderNamespace` +- On success, removes submitted headers from the pending queue +- On failure, headers remain in the queue for retry + +#### Data Submission Loop + +The `DataSubmissionLoop` manages the submission of signed data to the DA 
network: + +- Retrieves pending data from the `pendingData` queue +- Marshals data to protobuf format +- Submits to DA using the generic `submitToDA` helper with the configured `DataNamespace` +- On success, removes submitted data from the pending queue +- On failure, data remains in the queue for retry + +#### Generic Submission Logic + +Both loops use a shared `submitToDA` function that provides: + +- Namespace-specific submission based on header or data type +- Retry logic with configurable maximum attempts via `MaxSubmitAttempts` configuration +- Exponential backoff starting at `initialBackoff` (100ms), doubling each attempt, capped at `DABlockTime` +- Gas price management with `GasMultiplier` applied on retries using a centralized `retryStrategy` +- Recursive batch splitting for handling "too big" DA submissions that exceed blob size limits +- Comprehensive error handling for different DA submission failure types (mempool issues, context cancellation, blob size limits) +- Comprehensive metrics tracking for attempts, successes, and failures +- Context-aware cancellation support + +#### Retry Strategy and Error Handling + +The DA submission system implements sophisticated retry logic using a centralized `retryStrategy` struct to handle various failure scenarios: + +```mermaid +flowchart TD + A[Submit to DA] --> B{Submission Result} + B -->|Success| C[Reset Backoff & Adjust Gas Price Down] + B -->|Too Big| D{Batch Size > 1?} + B -->|Mempool/Not Included| E[Mempool Backoff Strategy] + B -->|Context Canceled| F[Stop Submission] + B -->|Other Error| G[Exponential Backoff] + + D -->|Yes| H[Recursive Batch Splitting] + D -->|No| I[Skip Single Item - Cannot Split] + + E --> J[Set Backoff = MempoolTTL * BlockTime] + E --> K[Multiply Gas Price by GasMultiplier] + + G --> L[Double Backoff Time] + G --> M[Cap at MaxBackoff - BlockTime] + + H --> N[Split into Two Halves] + N --> O[Submit First Half] + O --> P[Submit Second Half] + P --> Q{Both Halves Processed?} + Q 
-->|Yes| R[Combine Results] + Q -->|No| S[Handle Partial Success] + + C --> T[Update Pending Queues] + T --> U[Post-Submit Actions] +``` + +##### Retry Strategy Features + +- **Centralized State Management**: The `retryStrategy` struct manages attempt counts, backoff timing, and gas price adjustments +- **Multiple Backoff Types**: + - Exponential backoff for general failures (doubles each attempt, capped at `BlockTime`) + - Mempool-specific backoff (waits `MempoolTTL * BlockTime` for stuck transactions) + - Success-based backoff reset with gas price reduction +- **Gas Price Management**: + - Increases gas price by `GasMultiplier` on mempool failures + - Decreases gas price after successful submissions (bounded by initial price) + - Supports automatic gas price detection (`-1` value) +- **Intelligent Batch Splitting**: + - Recursively splits batches that exceed DA blob size limits + - Handles partial submissions within split batches + - Prevents infinite recursion with proper base cases +- **Comprehensive Error Classification**: + - `StatusSuccess`: Full or partial successful submission + - `StatusTooBig`: Triggers batch splitting logic + - `StatusNotIncludedInBlock`/`StatusAlreadyInMempool`: Mempool-specific handling + - `StatusContextCanceled`: Graceful shutdown support + - Other errors: Standard exponential backoff + +The manager enforces a limit on pending headers and data through `MaxPendingHeadersAndData` configuration. When this limit is reached, block production pauses to prevent unbounded growth of the pending queues. + +### Block Retrieval from DA Network (Syncer Component) + +The **Syncer component** implements a `RetrieveLoop` through its DARetriever that regularly pulls headers and data from the DA network. 
The retrieval process supports both legacy single-namespace mode (for backward compatibility) and the new separate namespace mode: + +```mermaid +flowchart TD + A[Start RetrieveLoop] --> B[Get DA Height] + B --> C{DABlockTime Timer} + C --> D[GetHeightPair from DA] + D --> E{Result?} + E -->|Success| F[Validate Signatures] + E -->|NotFound| G[Increment Height] + E -->|Error| H[Retry Logic] + + F --> I[Check Sequencer Info] + I --> J[Mark DA Included] + J --> K[Send to Sync] + K --> L[Increment Height] + L --> M[Immediate Next Retrieval] + + G --> C + H --> N{Retries < 10?} + N -->|Yes| O[Wait 100ms] + N -->|No| P[Log Error & Stall] + O --> D + M --> D +``` + +#### Retrieval Process + +1. **Height Management**: Starts from the latest of: + - DA height from the last state in local store + - `DAStartHeight` configuration parameter + - Maintains and increments `daHeight` counter after successful retrievals + +2. **Retrieval Mechanism**: + - Executes at `DABlockTime` intervals + - Implements namespace migration support: + - First attempts legacy namespace retrieval if migration not completed + - Falls back to separate header and data namespace retrieval + - Tracks migration status to optimize future retrievals + - Retrieves from separate namespaces: + - Headers from `HeaderNamespace` + - Data from `DataNamespace` + - Combines results from both namespaces + - Handles three possible outcomes: + - `Success`: Process retrieved header and/or data + - `NotFound`: No chain block at this DA height (normal case) + - `Error`: Retry with backoff + +3. **Error Handling**: + - Implements retry logic with 100ms delay between attempts + - After 10 retries, logs error and stalls retrieval + - Does not increment `daHeight` on persistent errors + +4. 
**Processing Retrieved Blocks**: + - Validates header and data signatures + - Checks sequencer information + - Marks blocks as DA included in caches + - Sends to sync goroutine for state update + - Successful processing triggers immediate next retrieval without waiting for timer + - Updates namespace migration status when appropriate: + - Marks migration complete when data is found in new namespaces + - Persists migration state to avoid future legacy checks + +#### Header and Data Caching + +The retrieval system uses persistent caches for both headers and data: + +- Prevents duplicate processing +- Tracks DA inclusion status +- Supports out-of-order block arrival +- Enables efficient sync from P2P and DA sources +- Maintains namespace migration state for optimized retrieval + +For more details on DA integration, see the [Data Availability specification](./da.md). + +#### Out-of-Order Chain Blocks on DA + +Evolve should support blocks arriving out-of-order on DA, like so: +![out-of-order blocks](./out-of-order-blocks.png) + +#### Termination Condition + +If the sequencer double-signs two blocks at the same height, evidence of the fault should be posted to DA. Evolve full nodes should process the longest valid chain up to the height of the fault evidence, and terminate. See diagram: +![termination condition](./termination.png) + +### Block Sync Service (Syncer Component) + +The **Syncer component** manages the synchronization of headers and data through its P2PHandler and coordination with the Cache Manager: + +#### Architecture + +- **Header Store**: Uses `goheader.Store[*types.SignedHeader]` for header management +- **Data Store**: Uses `goheader.Store[*types.SignedData]` for data management +- **Separation of Concerns**: Headers and data are handled independently, supporting the header/data separation architecture + +#### Synchronization Flow + +1. **Header Sync**: Headers created by the sequencer are sent to the header store for P2P gossip +2. 
**Data Sync**: Data blocks are sent to the data store for P2P gossip +3. **Cache Integration**: Both header and data caches track seen items to prevent duplicates +4. **DA Inclusion Tracking**: Separate tracking for header and data DA inclusion status + +### Block Publication to P2P network (Executor Component) + +The **Executor component** of aggregator nodes publishes headers and data separately to the P2P network: + +#### Header Publication + +- Headers are sent through the header broadcast channel +- Written to the header store for P2P gossip +- Broadcast to network peers via header sync service + +#### Data Publication + +- Data blocks are sent through the data broadcast channel +- Written to the data store for P2P gossip +- Broadcast to network peers via data sync service + +Non-sequencer full nodes receive headers and data through the P2P sync service and do not publish blocks themselves. + +### Block Retrieval from P2P network (Syncer Component) + +The **Syncer component** retrieves headers and data separately from P2P stores through its P2PHandler: + +#### Header Store Retrieval Loop + +The `HeaderStoreRetrieveLoop`: + +- Operates at `BlockTime` intervals via `headerStoreCh` signals +- Tracks `headerStoreHeight` for the last retrieved header +- Retrieves all headers between last height and current store height +- Validates sequencer information using `assertUsingExpectedSingleSequencer` +- Marks headers as "seen" in the header cache +- Sends headers to sync goroutine via `headerInCh` + +#### Data Store Retrieval Loop + +The `DataStoreRetrieveLoop`: + +- Operates at `BlockTime` intervals via `dataStoreCh` signals +- Tracks `dataStoreHeight` for the last retrieved data +- Retrieves all data blocks between last height and current store height +- Validates data signatures using `assertValidSignedData` +- Marks data as "seen" in the data cache +- Sends data to sync goroutine via `dataInCh` + +#### Soft Confirmations + +Headers and data retrieved from P2P are 
marked as soft confirmed until both: + +1. The corresponding header is seen on the DA layer +2. The corresponding data is seen on the DA layer + +Once both conditions are met, the block is marked as DA-included. + +#### About Soft Confirmations and DA Inclusions + +The block manager retrieves blocks from both the P2P network and the underlying DA network because the blocks are available in the P2P network faster and DA retrieval is slower (e.g., 1 second vs 6 seconds). +The blocks retrieved from the P2P network are only marked as soft confirmed until the DA retrieval succeeds on those blocks and they are marked DA-included. +DA-included blocks are considered to have a higher level of finality. + +**DAIncluderLoop**: +The `DAIncluderLoop` is responsible for advancing the `DAIncludedHeight` by: + +- Checking if blocks after the current height have both header and data marked as DA-included in caches +- Stopping advancement if either header or data is missing for a height +- Calling `SetFinal` on the executor when a block becomes DA-included +- Storing the Evolve height to DA height mapping for tracking +- Ensuring only blocks with both header and data present are considered DA-included + +### State Update after Block Retrieval (Syncer Component) + +The **Syncer component** uses a `SyncLoop` to coordinate state updates from blocks retrieved via P2P or DA networks: + +```mermaid +flowchart TD + subgraph Sources + P1[P2P Header Store] --> H[headerInCh] + P2[P2P Data Store] --> D[dataInCh] + DA1[DA Header Retrieval] --> H + DA2[DA Data Retrieval] --> D + end + + subgraph SyncLoop + H --> S[Sync Goroutine] + D --> S + S --> C{Header & Data for Same Height?} + C -->|Yes| R[Reconstruct Block] + C -->|No| W[Wait for Matching Pair] + R --> V[Validate Signatures] + V --> A[ApplyBlock] + A --> CM[Commit] + CM --> ST[Store Block & State] + ST --> F{DA Included?} + F -->|Yes| FN[SetFinal] + F -->|No| E[End] + FN --> U[Update DA Height] + end +``` + +#### Sync Loop Architecture + 
+The `SyncLoop` processes headers and data from multiple sources:
+
+- Headers from `headerInCh` (P2P and DA sources)
+- Data from `dataInCh` (P2P and DA sources)
+- Maintains caches to track processed items
+- Ensures ordered processing by height
+
+#### State Update Process
+
+When both header and data are available for a height:
+
+1. **Block Reconstruction**: Combines header and data into a complete block
+2. **Validation**: Verifies header and data signatures match expectations
+3. **ApplyBlock**:
+   - Validates the block against current state
+   - Executes transactions
+   - Captures validator updates
+   - Returns updated state
+4. **Commit**:
+   - Persists execution results
+   - Updates mempool by removing included transactions
+   - Publishes block events
+5. **Storage**:
+   - Stores the block, validators, and updated state
+   - Updates last state in manager
+6. **Finalization**:
+   - When block is DA-included, calls `SetFinal` on executor
+   - Updates DA included height
+
+## Message Structure/Communication Format
+
+### Component Communication
+
+The components communicate through well-defined interfaces:
+
+#### Executor ↔ Core Executor
+
+- `InitChain`: initializes the chain state with the given genesis time, initial height, and chain ID, calling `InitChainSync` on the executor to obtain the initial `appHash`.
+- `CreateBlock`: prepares a block with transactions from the provided batch data.
+- `ApplyBlock`: validates the block, executes its transactions, captures validator updates, and returns the updated state.
+- `SetFinal`: marks the block as final when both its header and data are confirmed on the DA layer.
+- `GetTxs`: retrieves transactions from the application (used by the Reaper component).
+
+#### Reaper ↔ Sequencer
+
+- `GetNextBatch`: retrieves the next batch of transactions to include in a block.
+- `VerifyBatch`: validates that a batch came from the expected sequencer.
+ +#### Submitter/Syncer ↔ DA Layer + +- `Submit`: submits headers or data blobs to the DA network. +- `Get`: retrieves headers or data blobs from the DA network. +- `GetHeightPair`: retrieves both header and data at a specific DA height. + +## Assumptions and Considerations + +### Component Architecture + +- The block package uses a modular component architecture instead of a monolithic manager +- Components are created based on node type: aggregator nodes get all components, non-aggregator nodes only get synchronization components +- Each component has a specific responsibility and communicates through well-defined interfaces +- Components share a common Cache Manager for coordination and state tracking + +### Initialization and State Management + +- Components load the initial state from the local store and use genesis if not found in the local store, when the node (re)starts +- During startup the Syncer invokes the execution Replayer to re-execute any blocks the local execution layer is missing; the replayer enforces strict app-hash matching so a mismatch aborts initialization instead of silently drifting out of sync +- The default mode for aggregator nodes is normal (not lazy) +- Components coordinate through channels and shared cache structures + +### Block Production (Executor Component) + +- The Executor can produce empty blocks +- In lazy aggregation mode, the Executor maintains consistency with the DA layer by producing empty blocks at regular intervals, ensuring a 1:1 mapping between DA layer blocks and execution layer blocks +- The lazy aggregation mechanism uses a dual timer approach: + - A `blockTimer` that triggers block production when transactions are available + - A `lazyTimer` that ensures blocks are produced even during periods of inactivity +- Empty batches are handled differently in lazy mode - instead of discarding them, they are returned with the `ErrNoBatch` error, allowing the caller to create empty blocks with proper timestamps +- 
Transaction notifications from the `Reaper` to the `Executor` are handled via a non-blocking notification channel (`txNotifyCh`) to prevent backpressure
+
+### DA Submission (Submitter Component)
+
+- The Submitter enforces the `MaxPendingHeadersAndData` limit to prevent unbounded growth of pending queues during DA submission issues
+- Headers and data are submitted separately to the DA layer using different namespaces, supporting the header/data separation architecture
+- The Cache Manager uses persistent caches for headers and data to track seen items and DA inclusion status
+- Namespace migration is handled transparently by the Syncer, with automatic detection and state persistence to optimize future operations
+- The system supports backward compatibility with legacy single-namespace deployments while transitioning to separate namespaces
+- Gas price management in the Submitter includes automatic adjustment with the `GasMultiplier` on DA submission retries
+
+### Storage and Persistence
+
+- Components use persistent storage (disk) when the `root_dir` and `db_path` configuration parameters are specified in the `config.yaml` file under the app directory. If these parameters are not specified, in-memory storage is used, which is lost when the node stops
+- The Syncer does not re-apply blocks when they transition from soft confirmed to DA included status.
The block is only marked DA included in the caches +- Header and data stores use separate prefixes for isolation in the underlying database +- The genesis `ChainID` is used to create separate `PubSubTopID`s for headers and data in go-header + +### P2P and Synchronization + +- Block sync over the P2P network works only when a full node is connected to the P2P network by specifying the initial seeds to connect to via `P2PConfig.Seeds` configuration parameter when starting the full node +- Node's context is passed down to all components to support graceful shutdown and cancellation + +### Architecture Design Decisions + +- The Executor supports custom signature payload providers for headers, enabling flexible signing schemes +- The component architecture supports the separation of header and data structures in Evolve. This allows for expanding the sequencing scheme beyond single sequencing and enables the use of a decentralized sequencer mode. For detailed information on this architecture, see the [Header and Data Separation ADR](../../adr/adr-014-header-and-data-separation.md) +- Components process blocks with a minimal header format, which is designed to eliminate dependency on CometBFT's header format and can be used to produce an execution layer tailored header if needed. 
For details on this header structure, see the [Evolve Minimal Header](../../adr/adr-015-rollkit-minimal-header.md) specification + +## Metrics + +The block components expose comprehensive metrics for monitoring through the shared Metrics instance: + +### Block Production Metrics (Executor Component) + +- `last_block_produced_height`: Height of the last produced block +- `last_block_produced_time`: Timestamp of the last produced block +- `aggregation_type`: Current aggregation mode (normal/lazy) +- `block_size_bytes`: Size distribution of produced blocks +- `produced_empty_blocks_total`: Count of empty blocks produced + +### DA Metrics (Submitter and Syncer Components) + +- `da_submission_attempts_total`: Total DA submission attempts +- `da_submission_success_total`: Successful DA submissions +- `da_submission_failure_total`: Failed DA submissions +- `da_retrieval_attempts_total`: Total DA retrieval attempts +- `da_retrieval_success_total`: Successful DA retrievals +- `da_retrieval_failure_total`: Failed DA retrievals +- `da_height`: Current DA retrieval height +- `pending_headers_count`: Number of headers pending DA submission +- `pending_data_count`: Number of data blocks pending DA submission + +### Sync Metrics (Syncer Component) + +- `sync_height`: Current sync height +- `da_included_height`: Height of last DA-included block +- `soft_confirmed_height`: Height of last soft confirmed block +- `header_store_height`: Current header store height +- `data_store_height`: Current data store height + +### Performance Metrics (All Components) + +- `block_production_time`: Time to produce a block +- `da_submission_time`: Time to submit to DA +- `state_update_time`: Time to apply block and update state +- `channel_buffer_usage`: Usage of internal channels + +### Error Metrics (All Components) + +- `errors_total`: Total errors by type and operation + +## Implementation + +The modular block components are implemented in the following packages: + +- [Executor]: Block 
production and state transitions (`block/internal/executing/`) +- [Reaper]: Transaction collection and submission (`block/internal/reaping/`) +- [Submitter]: DA submission logic (`block/internal/submitting/`) +- [Syncer]: Block synchronization from DA and P2P (`block/internal/syncing/`) +- [Cache Manager]: Coordination and state tracking (`block/internal/cache/`) +- [Components]: Main components orchestration (`block/components.go`) + +See [tutorial] for running a multi-node network with both aggregator and non-aggregator full nodes. + +## References + +[1] [Go Header][go-header] + +[2] [Block Sync][block-sync] + +[3] [Full Node][full-node] + +[4] [Block Components][Components] + +[5] [Tutorial][tutorial] + +[6] [Header and Data Separation ADR](../../adr/adr-014-header-and-data-separation.md) + +[7] [Evolve Minimal Header](../../adr/adr-015-rollkit-minimal-header.md) + +[8] [Data Availability](./da.md) + +[9] [Lazy Aggregation with DA Layer Consistency ADR](../../adr/adr-021-lazy-aggregation.md) + +[defaultBlockTime]: https://github.com/evstack/ev-node/blob/main/pkg/config/defaults.go#L50 +[defaultDABlockTime]: https://github.com/evstack/ev-node/blob/main/pkg/config/defaults.go#L59 +[defaultLazyBlockTime]: https://github.com/evstack/ev-node/blob/main/pkg/config/defaults.go#L52 +[go-header]: https://github.com/celestiaorg/go-header +[block-sync]: https://github.com/evstack/ev-node/blob/main/pkg/sync/sync_service.go +[full-node]: https://github.com/evstack/ev-node/blob/main/node/full.go +[Executor]: https://github.com/evstack/ev-node/blob/main/block/internal/executing/executor.go +[Reaper]: https://github.com/evstack/ev-node/blob/main/block/internal/reaping/reaper.go +[Submitter]: https://github.com/evstack/ev-node/blob/main/block/internal/submitting/submitter.go +[Syncer]: https://github.com/evstack/ev-node/blob/main/block/internal/syncing/syncer.go +[Cache Manager]: https://github.com/evstack/ev-node/blob/main/block/internal/cache/manager.go +[Components]: 
https://github.com/evstack/ev-node/blob/main/block/components.go +[tutorial]: https://ev.xyz/guides/full-node diff --git a/docs/concepts/data-availability.md b/docs/concepts/data-availability.md new file mode 100644 index 0000000000..69896ce4ab --- /dev/null +++ b/docs/concepts/data-availability.md @@ -0,0 +1,75 @@ +# Data Availability + +Data availability (DA) ensures that all transaction data required to verify the chain's state is accessible to anyone. + +## Why DA Matters + +Without data availability guarantees: +- Nodes can't verify state transitions +- Users can't prove their balances +- The chain's security model breaks down + +Evolve uses external DA layers to provide these guarantees, rather than storing all data on L1. + +## How Evolve Handles Data Availability + +Evolve is DA-agnostic and can integrate with different DA layers: + +### Local DA + +- **Use case**: Development and testing +- **Guarantee**: None (operator can withhold data) +- **Latency**: Instant + +### Celestia + +- **Use case**: Production deployments +- **Guarantee**: Data availability sampling (DAS) +- **Latency**: ~12 seconds to finality + +### Custom DA + +Implement the [DA interface](/reference/interfaces/da) to integrate any DA layer. 
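As a sketch of what implementing a DA backend involves, here is a toy in-memory layer exposing the `Submit`/`Get` operations described in this documentation. This is purely illustrative — ev-node's actual interface carries more parameters (namespaces, gas prices, blob commitments):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// memDA is a toy in-memory DA layer: it orders blobs by an
// ever-increasing height, like Local DA but even simpler. It provides
// no availability guarantee, which is why such backends are for
// development and testing only.
type memDA struct {
	mu     sync.Mutex
	blobs  map[uint64][][]byte
	height uint64
}

func newMemDA() *memDA { return &memDA{blobs: make(map[uint64][][]byte)} }

// Submit stores blobs at the next height and returns that height.
func (d *memDA) Submit(blobs [][]byte) (uint64, error) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.height++
	d.blobs[d.height] = blobs
	return d.height, nil
}

// Get retrieves all blobs stored at a given height.
func (d *memDA) Get(height uint64) ([][]byte, error) {
	d.mu.Lock()
	defer d.mu.Unlock()
	blobs, ok := d.blobs[height]
	if !ok {
		return nil, errors.New("height not found")
	}
	return blobs, nil
}

func main() {
	da := newMemDA()
	h, _ := da.Submit([][]byte{[]byte("header"), []byte("data")})
	got, _ := da.Get(h)
	fmt.Println(h, len(got)) // 1 2
}
```

A production integration would replace the map with submissions to a real DA network and add retry and gas-price handling, as the Submitter component does.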
+ +## DA Flow + +``` +Block Produced + │ + ▼ +┌─────────────────┐ +│ Submitter │ Queues block for DA +└────────┬────────┘ + │ + ▼ +┌─────────────────┐ +│ DA Layer │ Stores and orders data +└────────┬────────┘ + │ + ▼ +┌─────────────────┐ +│ Full Nodes │ Retrieve and verify +└─────────────────┘ +``` + +## Namespaces + +Evolve uses DA namespaces to organize data: + +| Namespace | Purpose | +|-----------|---------| +| Header | Block headers | +| Data | Transaction data | +| Forced Inclusion | User-submitted transactions | + +## Best Practices + +- **Development**: Use Local DA for fast iteration +- **Testnet**: Use Celestia testnet (Mocha or Arabica) +- **Production**: Use Celestia mainnet or equivalent + +## Learn More + +- [Local DA Guide](/guides/da-layers/local-da) +- [Celestia Guide](/guides/da-layers/celestia) +- [DA Interface Reference](/reference/interfaces/da) diff --git a/docs/concepts/fee-systems.md b/docs/concepts/fee-systems.md new file mode 100644 index 0000000000..057fc43c08 --- /dev/null +++ b/docs/concepts/fee-systems.md @@ -0,0 +1,153 @@ +# Fee Systems + +Evolve chains have two layers of fees: execution fees (paid to process transactions) and DA fees (paid to post data). + +## Execution Fees + +### EVM (ev-reth) + +Uses EIP-1559 fee model: + +``` +Transaction Fee = (Base Fee + Priority Fee) × Gas Used +``` + +| Component | Destination | Purpose | +|-----------|-------------|---------| +| Base Fee | Burned (or redirected) | Congestion pricing | +| Priority Fee | Sequencer | Incentive for inclusion | + +#### Base Fee Redirect + +By default, base fees are burned. ev-reth can redirect them to a treasury: + +```json +{ + "config": { + "evolve": { + "baseFeeSink": "0xTREASURY", + "baseFeeRedirectActivationHeight": 0 + } + } +} +``` + +See [Base Fee Redirect](/ev-reth/features/base-fee-redirect) for details. 
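To make the split concrete, here is the arithmetic from the formula above, with illustrative numbers (a 21,000-gas transfer at a 10 gwei base fee and a 2 gwei priority fee):

```go
package main

import "fmt"

// feeSplit applies Transaction Fee = (Base Fee + Priority Fee) × Gas Used
// and returns the portion burned (or redirected) vs. paid to the sequencer.
// All values are in wei.
func feeSplit(baseFee, priorityFee, gasUsed uint64) (burnedOrRedirected, toSequencer, total uint64) {
	burnedOrRedirected = baseFee * gasUsed
	toSequencer = priorityFee * gasUsed
	return burnedOrRedirected, toSequencer, burnedOrRedirected + toSequencer
}

func main() {
	const gwei = 1_000_000_000
	burned, tip, total := feeSplit(10*gwei, 2*gwei, 21_000)
	// With baseFeeSink configured, the "burned" share goes to the
	// treasury address instead of being destroyed.
	fmt.Println(burned, tip, total) // 210000000000000 42000000000000 252000000000000
}
```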
+ +### Cosmos SDK (ev-abci) + +Uses standard Cosmos SDK fee model: + +``` +Transaction Fee = Gas Price × Gas Used +``` + +Configure minimum gas prices: + +```toml +# app.toml +minimum-gas-prices = "0.025stake" +``` + +Fees go to the fee collector module and can be distributed via standard Cosmos mechanisms. + +## DA Fees + +Both execution environments incur DA fees when blocks are posted to the DA layer. + +### Cost Factors + +| Factor | Impact | +|--------|--------| +| Block size | Linear cost increase | +| DA gas price | Market-driven, varies | +| Batching | Amortizes overhead | +| Compression | Reduces data size | + +### Who Pays? + +The sequencer pays DA fees from their own funds. They recover costs through: +- Priority fees from users +- Base fee redirect (if configured) +- External subsidy + +### Optimization Strategies + +#### Lazy Aggregation + +Only produce blocks when there are transactions: + +```yaml +node: + lazy-aggregator: true + lazy-block-time: 1s # Max wait time +``` + +Reduces empty blocks and DA costs. 
+ +#### Batching + +ev-node batches multiple blocks into single DA submissions: + +```yaml +da: + batch-size-threshold: 100000 # bytes + batch-max-delay: 5s +``` + +#### Compression + +Enable blob compression: + +```yaml +da: + compression: true +``` + +## Fee Flow Diagram + +``` +User Transaction + │ + │ Pays: Gas Price × Gas + ▼ +┌─────────────────┐ +│ Sequencer │ +│ │ +│ Receives: │ +│ - Priority fees │ +│ - Base fees* │ +└────────┬────────┘ + │ + │ Pays: DA fees + ▼ +┌─────────────────┐ +│ DA Layer │ +│ (Celestia) │ +└─────────────────┘ + +* If base fee redirect is enabled +``` + +## Estimating Costs + +### Execution Costs + +EVM: +```bash +cast estimate --rpc-url http://localhost:8545 "transfer(address,uint256)" +``` + +Cosmos: +```bash +appd tx bank send 1000stake --gas auto --gas-adjustment 1.3 +``` + +### DA Costs + +Depends on: +- DA layer pricing (e.g., Celestia gas price) +- Data size per block +- Submission frequency + +Use the [Celestia Gas Calculator](/guides/tools/celestia-gas-calculator) for estimates. diff --git a/docs/concepts/finality.md b/docs/concepts/finality.md new file mode 100644 index 0000000000..be965a4443 --- /dev/null +++ b/docs/concepts/finality.md @@ -0,0 +1,55 @@ +# Finality + +Finality determines when a transaction is irreversible. Evolve has a multi-stage finality model. 
+
+## Finality Stages
+
+```
+Transaction Submitted
+         │
+         ▼
+┌───────────────────┐
+│  Soft Confirmed   │ ← Block produced, gossiped via P2P
+└─────────┬─────────┘
+          │
+          ▼
+┌───────────────────┐
+│   DA Finalized    │ ← DA layer confirms inclusion
+└───────────────────┘
+```
+
+### Soft Confirmation
+
+When a block is produced and gossiped via P2P:
+
+- **Latency**: Milliseconds (block time)
+- **Guarantee**: Sequencer has committed to this ordering
+- **Risk**: Sequencer could equivocate (produce conflicting blocks)
+
+### DA Finalized
+
+When the DA layer confirms the block is included:
+
+- **Latency**: ~6 seconds (Celestia)
+- **Guarantee**: Block data is permanently available and ordered
+- **Risk**: None (assuming DA layer security)
+
+## Choosing Finality Thresholds
+
+| Use Case | Recommended Finality |
+|----------|---------------------|
+| Display balance | Soft confirmation |
+| Accept payment | Soft confirmation |
+| Process withdrawal | DA finalized |
+| Bridge transfer | DA finalized |
+
+## Configuration
+
+Block time affects soft confirmation latency:
+
+```yaml
+node:
+  block-time: 100ms
+```
+
+DA finality depends on the DA layer. Celestia provides ~6 second finality.
diff --git a/docs/concepts/p2p-networking.md b/docs/concepts/p2p-networking.md
new file mode 100644
index 0000000000..14309d9e3c
--- /dev/null
+++ b/docs/concepts/p2p-networking.md
@@ -0,0 +1,60 @@
+# P2P
+
+Every node (both full and light) runs a P2P client that uses the [go-libp2p][go-libp2p] networking stack to gossip transactions in the chain's P2P network. The same P2P client is also used by the header and block sync services for gossiping headers and blocks.
+
+The following parameters are required to create a new instance of a P2P client:
+
+* P2PConfig (described below)
+* a [go-libp2p][go-libp2p] private key, used to create a libp2p connection and join the p2p network.
+* chainID: an identifier used as a namespace within the p2p network for peer discovery.
The namespace acts as a subnetwork within the p2p network, where peer connections are limited to the same namespace.
+* datastore: an instance of [go-datastore][go-datastore] used by the connection gator to store blocked and allowed peers.
+* logger
+
+```go
+// P2PConfig stores configuration related to peer-to-peer networking.
+type P2PConfig struct {
+	ListenAddress string // Address to listen for incoming connections
+	Seeds         string // Comma separated list of seed nodes to connect to
+	BlockedPeers  string // Comma separated list of nodes to ignore
+	AllowedPeers  string // Comma separated list of nodes to whitelist
+}
+```
+
+A P2P client also instantiates a [connection gator][conngater] to block and allow the peers specified in the `P2PConfig`.
+
+It also sets up a gossiper using the gossip topic `chainID+txTopicSuffix` (`txTopicSuffix` is defined in [p2p/client.go][client.go]), a Distributed Hash Table (DHT) using the `Seeds` defined in the `P2PConfig`, and peer discovery using go-libp2p's `discovery.RoutingDiscovery`.
+
+A P2P client provides the interface `SetTxValidator(p2p.GossipValidator)` for specifying a gossip validator, which defines how to handle an incoming `GossipMessage` in the P2P network. A `GossipMessage` represents a message gossiped via the P2P network (e.g., a transaction or a block).
+
+```go
+// GossipValidator is a callback function type.
+type GossipValidator func(*GossipMessage) bool
+```
+
+Full nodes define a transaction validator (shown below) as the gossip validator, processing gossiped transactions and adding them to the mempool, whereas light nodes pass a dummy validator, since they do not process gossiped transactions.
+
+```go
+// newTxValidator creates a pubsub validator that uses the node's mempool to check the
+// transaction.
If the transaction is valid, then it is added to the mempool +func (n *FullNode) newTxValidator() p2p.GossipValidator { +``` + +```go +// Dummy validator that always returns a callback function with boolean `false` +func (ln *LightNode) falseValidator() p2p.GossipValidator { +``` + +## References + +[1] [client.go][client.go] + +[2] [go-datastore][go-datastore] + +[3] [go-libp2p][go-libp2p] + +[4] [conngater][conngater] + +[client.go]: https://github.com/evstack/ev-node/blob/main/pkg/p2p/client.go +[go-datastore]: https://github.com/ipfs/go-datastore +[go-libp2p]: https://github.com/libp2p/go-libp2p +[conngater]: https://github.com/libp2p/go-libp2p/tree/master/p2p/net/conngater diff --git a/docs/concepts/sequencing.md b/docs/concepts/sequencing.md new file mode 100644 index 0000000000..4141dc56f7 --- /dev/null +++ b/docs/concepts/sequencing.md @@ -0,0 +1,108 @@ +# Sequencing + +Sequencing determines transaction ordering. The sequencer collects transactions, orders them, and produces blocks. + +## Sequencer Interface + +The [Sequencer interface](https://github.com/evstack/ev-node/blob/main/core/sequencer/sequencing.go) defines how ev-node communicates with sequencing implementations: + +```go +type Sequencer interface { + // Submit transactions to the sequencer + SubmitBatchTxs(ctx context.Context, req SubmitBatchTxsRequest) (*SubmitBatchTxsResponse, error) + + // Get the next batch of ordered transactions + GetNextBatch(ctx context.Context, req GetNextBatchRequest) (*GetNextBatchResponse, error) + + // Verify a batch from another source + VerifyBatch(ctx context.Context, req VerifyBatchRequest) (*VerifyBatchResponse, error) +} +``` + +## Sequencing Modes + +### Single Sequencer + +One node orders transactions and produces blocks. 
+
+```
+User → Mempool → Sequencer → Block → DA
+```
+
+**Characteristics:**
+- Fast block times (~100ms possible)
+- Simple operation
+- Single point of ordering (with forced inclusion for censorship resistance)
+
+**Configuration:**
+```yaml
+node:
+  aggregator: true
+  block-time: 100ms
+```
+
+See [Single Sequencer / Forced Inclusion](/guides/advanced/forced-inclusion) for details.
+
+### Based Sequencer
+
+Transaction ordering is determined by the DA layer. Every full node derives blocks independently.
+
+```
+User → DA Layer → All Nodes Derive Same Blocks
+```
+
+**Characteristics:**
+- No single sequencer
+- Ordering from the DA layer (slower blocks)
+- Maximum censorship resistance
+
+**Configuration:**
+```yaml
+node:
+  aggregator: true
+  based-sequencer: true
+```
+
+See [Based Sequencing](/guides/advanced/based-sequencing) for details.
+
+## Choosing a Sequencing Mode
+
+| Factor | Single Sequencer | Based Sequencer |
+|--------|-----------------|-----------------|
+| Block time | ~100ms | ~12s (DA block time) |
+| Censorship resistance | Forced inclusion | Native |
+| Complexity | Lower | Higher |
+| MEV | Sequencer controls | DA layer controls |
+
+## Forced Inclusion
+
+Single sequencer mode includes forced inclusion for censorship resistance:
+
+1. Users can submit transactions directly to the DA layer
+2. The sequencer must include these within a grace period
+3. Failing to include them marks the sequencer as malicious
+4. The chain can then transition to based mode
+
+This provides a safety mechanism while maintaining fast block times.
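The grace-period rule can be expressed as a simple deadline check against DA heights. This is an illustrative sketch, not ev-node's actual enforcement code — the type and parameter names are made up:

```go
package main

import "fmt"

// ForcedTx is a transaction a user posted directly to the DA layer.
type ForcedTx struct {
	SeenAtDAHeight uint64 // DA height where the tx first appeared
	IncludedAt     uint64 // DA height of inclusion in a chain block; 0 = not yet included
}

// sequencerCensored reports whether the sequencer missed the grace
// period for any forced transaction at the current DA height.
func sequencerCensored(txs []ForcedTx, currentDAHeight, gracePeriod uint64) bool {
	for _, tx := range txs {
		deadline := tx.SeenAtDAHeight + gracePeriod
		if tx.IncludedAt == 0 && currentDAHeight > deadline {
			return true // tx expired without inclusion → sequencer flagged as malicious
		}
	}
	return false
}

func main() {
	txs := []ForcedTx{{SeenAtDAHeight: 100}}
	fmt.Println(sequencerCensored(txs, 105, 10)) // false: still within grace period
	fmt.Println(sequencerCensored(txs, 111, 10)) // true: deadline passed
}
```

Because every full node sees the same forced transactions on the DA layer, every node reaches the same verdict, which is what makes the transition to based mode possible.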
+ +## Transaction Flow + +```mermaid +sequenceDiagram + participant User + participant Mempool + participant Sequencer + participant DA + + User->>Mempool: Submit tx + Sequencer->>Mempool: GetTxs() + Mempool->>Sequencer: Pending txs + Sequencer->>Sequencer: Order & Execute + Sequencer->>DA: Submit block +``` + +## Learn More + +- [Single Sequencer / Forced Inclusion](/guides/advanced/forced-inclusion) +- [Based Sequencing](/guides/advanced/based-sequencing) +- [Sequencer Interface Reference](/reference/interfaces/sequencer) diff --git a/docs/concepts/transaction-flow.md b/docs/concepts/transaction-flow.md new file mode 100644 index 0000000000..8d055321f5 --- /dev/null +++ b/docs/concepts/transaction-flow.md @@ -0,0 +1,53 @@ +# Transaction flow + +Chain users use a light node to communicate with the chain P2P network for two primary reasons: + +- submitting transactions +- gossiping headers and fraud proofs + +Here's what the typical transaction flow looks like: + +## Transaction submission + +```mermaid +sequenceDiagram + participant User + participant LightNode + participant FullNode + + User->>LightNode: Submit Transaction + LightNode->>FullNode: Gossip Transaction + FullNode-->>User: Refuse (if invalid) +``` + +## Transaction validation and processing + +```mermaid +sequenceDiagram + participant FullNode + participant Sequencer + + FullNode->>FullNode: Check Validity + FullNode->>FullNode: Add to Mempool (if valid) + FullNode-->>User: Transaction Processed (if valid) + FullNode->>Sequencer: Inform about Valid Transaction + Sequencer->>DALayer: Add to Chain Block +``` + +## Block processing + +```mermaid +sequenceDiagram + participant DALayer + participant FullNode + participant Chain + + DALayer->>Chain: Update State + DALayer->>FullNode: Download & Validate Block +``` + +To transact, users submit a transaction to their light node, which gossips the transaction to a full node. Before adding the transaction to their mempool, the full node checks its validity. 
Valid transactions are included in the mempool, while invalid ones are refused, and the user's transaction will not be processed. + +If the transaction is valid and has been included in the mempool, the sequencer can add it to a chain block, which is then submitted to the data availability (DA) layer. This results in a successful transaction flow for the user, and the state of the chain is updated accordingly. + +After the block is submitted to the DA layer, the full nodes download and validate the block. diff --git a/docs/ev-abci/integration-guide.md b/docs/ev-abci/integration-guide.md new file mode 100644 index 0000000000..67b855de4e --- /dev/null +++ b/docs/ev-abci/integration-guide.md @@ -0,0 +1,130 @@ +# Integration Guide + +Integrate ev-abci into a Cosmos SDK application. + +## Overview + +ev-abci replaces CometBFT as the consensus layer. Your ABCI application logic remains unchanged—only the node startup code changes. + +## Prerequisites + +- Cosmos SDK v0.50+ application +- Go 1.22+ + +## Step 1: Add Dependency + +```bash +go get github.com/evstack/ev-abci@latest +``` + +## Step 2: Modify Start Command + +Locate your app's entrypoint (typically `cmd//root.go` or `main.go`). + +### Before (CometBFT) + +```go +import ( + "github.com/cosmos/cosmos-sdk/server" +) + +// In your root command setup: +server.AddCommands(rootCmd, app.DefaultNodeHome, newApp, appExport) +``` + +### After (ev-abci) + +```go +import ( + "github.com/cosmos/cosmos-sdk/server" + evabci "github.com/evstack/ev-abci/server" +) + +// Keep existing commands for init, genesis, keys, etc. 
+server.AddCommands(rootCmd, app.DefaultNodeHome, newApp, appExport) + +// Replace the start command +startCmd := &cobra.Command{ + Use: "start", + Short: "Run the node", + RunE: func(cmd *cobra.Command, _ []string) error { + return evabci.StartHandler(cmd, newApp) + }, +} +evabci.AddFlags(startCmd) +rootCmd.AddCommand(startCmd) +``` + +## Step 3: Build + +```bash +go build -o appd ./cmd/appd +``` + +## Step 4: Verify + +Check for ev-abci flags: + +```bash +./appd start --help +``` + +Expected flags: +``` +--evnode.node.aggregator Run as block producer +--evnode.da.address DA layer address +--evnode.signer.passphrase Signer passphrase +--evnode.node.block_time Block production interval +``` + +## Step 5: Initialize + +Standard Cosmos SDK initialization: + +```bash +./appd init mynode --chain-id mychain-1 +./appd keys add mykey --keyring-backend test +./appd genesis add-genesis-account mykey 1000000000stake --keyring-backend test +./appd genesis gentx mykey 1000000stake --chain-id mychain-1 --keyring-backend test +./appd genesis collect-gentxs +``` + +## Step 6: Start + +```bash +./appd start \ + --evnode.node.aggregator \ + --evnode.da.address http://localhost:7980 \ + --evnode.signer.passphrase secret +``` + +## Configuration + +### ev-node Flags + +| Flag | Description | Default | +|------|-------------|---------| +| `--evnode.node.aggregator` | Run as sequencer | `false` | +| `--evnode.node.block_time` | Block interval | `1s` | +| `--evnode.da.address` | DA layer URL | required | +| `--evnode.signer.passphrase` | Signer passphrase | required | +| `--evnode.p2p.peers` | P2P peer addresses | none | + +### Full Node (Non-Sequencer) + +```bash +./appd start \ + --evnode.da.address http://localhost:7980 \ + --evnode.p2p.peers @:26659 +``` + +## RPC Compatibility + +ev-abci provides CometBFT-compatible RPC endpoints. Existing clients work without modification. + +See [RPC Compatibility](/ev-abci/rpc-compatibility) for details. 
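Because the response shapes match CometBFT's, existing client code keeps working. For instance, a client reading the latest height from a `/status` response — shown here against a canned JSON body rather than a live node; the field paths are the standard CometBFT ones:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// statusResponse models the subset of CometBFT's /status result that
// most clients read.
type statusResponse struct {
	Result struct {
		SyncInfo struct {
			LatestBlockHeight string `json:"latest_block_height"`
			CatchingUp        bool   `json:"catching_up"`
		} `json:"sync_info"`
	} `json:"result"`
}

// latestHeight extracts sync_info.latest_block_height from a /status body.
func latestHeight(body []byte) (string, error) {
	var s statusResponse
	if err := json.Unmarshal(body, &s); err != nil {
		return "", err
	}
	return s.Result.SyncInfo.LatestBlockHeight, nil
}

func main() {
	// A trimmed /status response as ev-abci's compatible RPC would return it.
	body := []byte(`{"result":{"sync_info":{"latest_block_height":"42","catching_up":false}}}`)
	h, err := latestHeight(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(h) // 42
}
```

In practice the same struct works whether the body came from CometBFT or from an ev-abci node, which is the point of the compatibility layer.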
+ +## Next Steps + +- [Migration from CometBFT](/ev-abci/migration-from-cometbft) — Migrate existing chain +- [RPC Compatibility](/ev-abci/rpc-compatibility) — Endpoint compatibility diff --git a/docs/ev-abci/migration-from-cometbft.md b/docs/ev-abci/migration-from-cometbft.md new file mode 100644 index 0000000000..f49ba6df6f --- /dev/null +++ b/docs/ev-abci/migration-from-cometbft.md @@ -0,0 +1,286 @@ +# Migrating an Existing Chain to ev-abci + +This guide is for developers of existing Cosmos SDK chains who want to replace their node's default CometBFT consensus engine with the `ev-abci` implementation. By following these steps, you will migrate your chain to run as an `ev-abci` node while preserving chain state. + +## Overview of Migration Process + +The migration process involves the following key phases: + +1. **Code Preparation:** Add migration module, staking wrapper, and upgrade handler to your existing chain +2. **Governance Proposal:** Create and pass a governance proposal to initiate the migration +3. **State Export:** Export the current chain state at the designated upgrade height +4. **Node Reconfiguration:** Wire the `ev-abci` start handler into your node's entrypoint +5. **Migration Execution:** Run `appd evolve-migrate` to transform the exported state +6. **Chain Restart:** Start the new `ev-abci` node with the migrated state + +This document will guide you through each phase. + +--- + +## Phase 1: Code Preparation - Add Migration Module and Staking Wrapper + +The first step prepares your existing chain for migration by integrating the necessary modules. + +### Step 1: Add Migration Manager Module + +Add the `migrationmngr` module to your application. This module manages the transition from a PoS validator set to a sequencer-based model. 
+ +*Note: For detailed information about the migration manager, please refer to the [migration manager documentation](https://github.com/evstack/ev-abci/tree/main/modules/migrationmngr).* + +In your `app.go` file: + +1. Import the migration manager module: + +```go +import ( + // ... + migrationmngr "github.com/evstack/ev-abci/modules/migrationmngr" + migrationmngrkeeper "github.com/evstack/ev-abci/modules/migrationmngr/keeper" + migrationmngrtypes "github.com/evstack/ev-abci/modules/migrationmngr/types" + // ... +) +``` + +2. Add the migration manager keeper to your app struct +3. Register the module in your module manager +4. Configure the migration manager in your app initialization + +### Step 2: Replace Staking Module with Wrapper + +**Goal:** Ensure the `migrationmngr` module is the *sole* source of validator set updates during migration. + +Replace the standard Cosmos SDK `x/staking` module with the **staking wrapper module** provided in `ev-abci`. The wrapper's `EndBlock` method prevents validator updates from the staking module, delegating that responsibility to the `migrationmngr` module during migration. + +In your `app.go` file (and any other files that import the staking module): + +**Replace this:** + +```go +import ( + // ... + "github.com/cosmos/cosmos-sdk/x/staking" + stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" + // ... +) +``` + +**With this:** + +```go +import ( + // ... + "github.com/evstack/ev-abci/modules/staking" // The wrapper module + stakingkeeper "github.com/evstack/ev-abci/modules/staking/keeper" + stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" // Staking types remain the same + // ... +) +``` + +By changing the import path, your application will automatically use the wrapper module. No other changes to your `EndBlocker` method are needed. 
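Conceptually, the wrapper's trick is that its `EndBlock` discards the validator updates computed by `x/staking`, so only `migrationmngr` can change the consensus validator set. A minimal sketch of that idea — not the actual ev-abci code; the types are simplified stand-ins for the SDK's:

```go
package main

import "fmt"

// ValidatorUpdate is a simplified stand-in for abci.ValidatorUpdate.
type ValidatorUpdate struct {
	PubKey string
	Power  int64
}

// innerEndBlock stands in for the embedded x/staking EndBlocker, which
// still runs so that unbonding, bookkeeping, etc. proceed normally.
func innerEndBlock() []ValidatorUpdate {
	return []ValidatorUpdate{{PubKey: "valA", Power: 100}}
}

// wrappedEndBlock runs the inner staking logic but returns no validator
// updates, leaving the validator set under migrationmngr's control.
func wrappedEndBlock() []ValidatorUpdate {
	_ = innerEndBlock() // staking side effects happen; updates are discarded
	return nil
}

func main() {
	fmt.Println(len(innerEndBlock()))   // 1: staking wants to update the set
	fmt.Println(len(wrappedEndBlock())) // 0: wrapper suppresses it
}
```

This is why only the import path changes: the wrapper preserves the staking module's state machine while taking over the single output that matters for consensus.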
+ +--- + +## Phase 2: Create Upgrade Handler + +Create an upgrade handler in your `app.go` that will be triggered when the governance proposal is executed. + +```go +func (app *App) setupUpgradeHandlers() { + app.UpgradeKeeper.SetUpgradeHandler( + "v2-migrate-to-evolve", // Upgrade name must match governance proposal + func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + // The upgrade handler can initialize state for the migration manager if needed + // The actual migration will happen during the evolve-migrate step + return app.mm.RunMigrations(ctx, app.configurator, fromVM) + }, + ) +} +``` + +Call this function in your app initialization code in `app.go`. + +--- + +## Phase 3: Create Governance Proposal for Migration + +Create and submit a software upgrade governance proposal to initiate the migration at a specific block height. + +```bash +# Create the governance proposal + tx gov submit-proposal software-upgrade v2-migrate-to-evolve \ + --title "Migrate to Evolve" \ + --description "Upgrade chain to use ev-abci consensus" \ + --upgrade-height \ + --from \ + --chain-id + +# Vote on the proposal (repeat for validators to reach quorum) + tx gov vote yes --from +``` + +Wait for the proposal to pass and for the chain to reach the upgrade height. The chain will halt at the specified height, waiting for the upgrade to be applied. + +### Trigger Migration to Evolve + +After the upgrade proposal has passed, submit the `MsgMigrateToEvolve` message to initiate the actual migration process. This can be done through a governance proposal or directly if your chain's authority allows it. 
```bash
# Submit a MsgMigrateToEvolve governance proposal (if using governance)
appd tx gov submit-proposal migrate-to-evolve \
  --title "Trigger Migration to Evolve" \
  --description "Execute migration to ev-abci consensus" \
  --from <key> \
  --chain-id <chain-id>

# Or submit directly if authority allows (the authority address depends on your chain configuration)
appd tx migrationmngr migrate-to-evolve \
  --from <authority-key> \
  --chain-id <chain-id>
```

Once this message is processed, the migration manager module handles the transition from the PoS validator set to the sequencer-based model.

---

## Phase 4: Wire ev-abci Start Handler in root.go

**⚠️ Important:** Complete this phase BEFORE the chain halts at the upgrade height. Do NOT start your node yet - you will start it in Phase 6 after running the migration command.

Modify your node's entrypoint to use the `ev-abci` server commands.

### Locate Your Application's Entrypoint

Open the main entrypoint file for your chain's binary, usually found at `cmd/appd/main.go` or `root.go`.

### Modify the Start Command

Add the `ev-abci` start handler to your root command. This is similar to the [Ignite Apps evolve template](https://github.com/ignite/apps/blob/main/evolve/template/init.go#L48-L60).

```go
// cmd/appd/main.go (or root.go)
package main

import (
	"os"

	"github.com/cosmos/cosmos-sdk/server"
	tx "github.com/cosmos/cosmos-sdk/x/auth/tx"
	"github.com/spf13/cobra"

	// Import the ev-abci server package
	evabci_server "github.com/evstack/ev-abci/server"

	"<your-module>/app"
)

func main() {
	rootCmd := &cobra.Command{
		Use:   "appd",
		Short: "Your App Daemon (ev-abci enabled)",
	}

	// Keep existing commands (keys, export, etc.)
	server.AddCommands(rootCmd, app.DefaultNodeHome, app.New, app.MakeEncodingConfig(), tx.DefaultSignModes)

	// --- Wire ev-abci start handler ---
	startCmd := &cobra.Command{
		Use:   "start",
		Short: "Run the full node with ev-abci",
		RunE: func(cmd *cobra.Command, _ []string) error {
			return server.Start(cmd, evabci_server.StartHandler())
		},
	}

	evabci_server.AddFlags(startCmd)
	rootCmd.AddCommand(startCmd)
	// --- End of ev-abci changes ---

	if err := rootCmd.Execute(); err != nil {
		server.HandleError(err)
		os.Exit(1)
	}
}
```

### Build Your Application

Re-build your application's binary with the updated code:

```sh
go build -o appd ./cmd/appd
```

**⚠️ Important:** Do NOT start the node yet. Proceed directly to Phase 5 to run the migration command.

---

## Phase 5: Run evolve-migrate

After the chain halts at the upgrade height, run the migration command to transform the CometBFT data to Evolve format.

**⚠️ Critical:** The node must NOT be running when you execute this command. Ensure all node processes are stopped before proceeding.

```bash
# Run the migration command
appd evolve-migrate

# Optional: specify the DA height for the Evolve state (defaults to 1)
appd evolve-migrate --da-height <da-height>
```

The `evolve-migrate` command performs the following operations:

1. **Migrates all blocks** from the CometBFT blockstore to the Evolve store
2. **Converts the CometBFT state** to Evolve state format
3. **Creates `ev_genesis.json`** - a minimal genesis file that the node automatically detects and uses on subsequent startups
4. **Saves state** to the ABCI execution store for compatibility
5. **Seeds sync stores** with the latest migrated header and data
6. **Cleans up migration state** from the application database

**Important Notes:**

- The migration processes blocks in reverse order (from latest to earliest)
- If blocks are missing (e.g., due to pruning), they will be skipped.
Migration stops if more than the configured maximum number of blocks are missing
- Vote extensions are not supported in Evolve - if they were enabled in your chain, they will have no effect after migration
- The command operates on the data in your node's home directory (e.g., `~/.appd/data/`)
- After successful migration, the `ev_genesis.json` file is used automatically on node restart

---

## Phase 6: Start New ev-abci Node

Start your node with the migrated state:

```bash
appd start
```

Verify that the node starts successfully:

```sh
# Check that ev-abci flags are available
appd start --help

# You should see flags like:
# --ev-node.attester-mode
# --ev-node.aggregator
# --ev-node.sequencer-url
# etc.
```

Your node is now running with `ev-abci` instead of CometBFT. The chain continues from the same state but with the new consensus engine.

---

## Summary

The migration process follows these key phases:

1. **Code Preparation:** Modify your chain code to add the migration manager module and the staking wrapper
2. **Create Upgrade Handler:** Define the upgrade logic that will be triggered by governance
3. **Governance Proposal:** Submit and pass a software upgrade proposal
4. **Wire Start Handler:** Update your node's entrypoint to use the `ev-abci` start command
5. **Execute Migration:** Run `appd evolve-migrate` to transform the CometBFT data to Evolve format
6. **Restart Chain:** Start the new `ev-abci` node with the migrated state

This approach ensures a smooth migration with minimal downtime and preserves all chain state and history.

diff --git a/docs/ev-abci/modules/migration-manager.md b/docs/ev-abci/modules/migration-manager.md
new file mode 100644
index 0000000000..203cd10587
--- /dev/null
+++ b/docs/ev-abci/modules/migration-manager.md
@@ -0,0 +1,143 @@

# Migration Manager Module

Coordinates the transition from CometBFT multi-validator consensus to Evolve single-sequencer mode.
## Purpose

The migration manager:

- Stores the designated sequencer address
- Tracks the migration height
- Coordinates with the staking wrapper to freeze validators
- Provides the `MsgMigrateToEvolve` message for triggering migration

## Installation

### Add to app.go

```go
import (
	migrationmngr "github.com/evstack/ev-abci/modules/migrationmngr"
	migrationmngrkeeper "github.com/evstack/ev-abci/modules/migrationmngr/keeper"
	migrationmngrtypes "github.com/evstack/ev-abci/modules/migrationmngr/types"
)

// Add store key
keys := sdk.NewKVStoreKeys(
	// ... other keys
	migrationmngrtypes.StoreKey,
)

// Create keeper
app.MigrationManagerKeeper = migrationmngrkeeper.NewKeeper(
	appCodec,
	keys[migrationmngrtypes.StoreKey],
	app.StakingKeeper,
	app.BankKeeper,
	authtypes.NewModuleAddress(govtypes.ModuleName).String(),
)

// Add to module manager
app.ModuleManager = module.NewManager(
	// ... other modules
	migrationmngr.NewAppModule(appCodec, app.MigrationManagerKeeper),
)
```

### Genesis Configuration

```json
{
  "app_state": {
    "migrationmngr": {
      "params": {
        "sequencer_address": "",
        "migration_height": "0"
      }
    }
  }
}
```

## Migration Flow

### 1. Governance Proposal

Submit a proposal to set the migration parameters:

```bash
appd tx gov submit-proposal set-sequencer \
  --sequencer-address cosmos1... \
  --migration-height 5000001 \
  --from <key>
```

### 2. Vote and Pass

Standard governance voting process.

### 3. Chain Halts

At the migration height, the chain halts automatically.

### 4. Run Migration

```bash
appd evolve-migrate
```

### 5.
Restart with ev-abci

```bash
appd start \
  --evnode.node.aggregator \
  --evnode.da.address <da-address> \
  --evnode.signer.passphrase <passphrase>
```

## Messages

### MsgSetMigrationParams

Set migration parameters (governance-gated):

```protobuf
message MsgSetMigrationParams {
  string authority = 1;
  string sequencer_address = 2;
  int64 migration_height = 3;
}
```

### MsgMigrateToEvolve

Trigger the migration (called internally):

```protobuf
message MsgMigrateToEvolve {
  string authority = 1;
}
```

## Queries

```bash
# Get migration params
appd query migrationmngr params

# Get previous validators (post-migration)
appd query migrationmngr previous-validators
```

## State

| Key | Description |
|-----|-------------|
| `params` | Sequencer address and migration height |
| `previous_validators` | Validator set before migration (for reference) |
| `migration_complete` | Boolean flag |

## Next Steps

- [Staking Wrapper](/ev-abci/modules/staking-wrapper) — Freeze validator set
- [Migration from CometBFT](/ev-abci/migration-from-cometbft) — Full migration guide

diff --git a/docs/ev-abci/modules/staking-wrapper.md b/docs/ev-abci/modules/staking-wrapper.md
new file mode 100644
index 0000000000..e9e71607f0
--- /dev/null
+++ b/docs/ev-abci/modules/staking-wrapper.md
@@ -0,0 +1,96 @@

# Staking Wrapper Module

A wrapper around the Cosmos SDK staking module that prevents validator set changes during migration.

## Purpose

When migrating from CometBFT to Evolve, the validator set must be frozen to allow a clean transition to single-sequencer mode.
The staking wrapper:

- Prevents new delegations and undelegations from affecting the validator set
- Blocks validator creation and updates
- Allows the migration manager to perform the final transition

## Installation

Replace your staking module import:

```go
// Before
import "github.com/cosmos/cosmos-sdk/x/staking"

// After
import "github.com/evstack/ev-abci/modules/staking"
```

The wrapper is API-compatible with the standard staking module.

## Behavior

### Normal Operation

Before migration is triggered, the wrapper behaves identically to the standard staking module:

- Delegations work normally
- Validator operations work normally
- Rewards distribution works normally

### During Migration

Once the migration manager signals migration mode:

- `EndBlock` returns an empty validator update set
- Delegation changes are recorded but don't affect validators
- Validator creation/modification is blocked

### After Migration

Post-migration, the staking module becomes read-only for validator operations. The single sequencer is now the only block producer.

## Integration

### app.go

```go
import (
	stakingkeeper "github.com/evstack/ev-abci/modules/staking/keeper"
	stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
)

// In your NewApp function:
app.StakingKeeper = stakingkeeper.NewKeeper(
	appCodec,
	keys[stakingtypes.StoreKey],
	app.AccountKeeper,
	app.BankKeeper,
	authtypes.NewModuleAddress(govtypes.ModuleName).String(),
)
```

### Module Manager

```go
import (
	staking "github.com/evstack/ev-abci/modules/staking"
)

// In your module manager:
app.ModuleManager = module.NewManager(
	// ... other modules
	staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper),
)
```

## Queries

All standard staking queries remain available:

```bash
appd query staking validators
appd query staking delegations <delegator-address>
+appd query staking pool +``` + +## Next Steps + +- [Migration Manager](/ev-abci/modules/migration-manager) — Coordinate the migration +- [Migration from CometBFT](/ev-abci/migration-from-cometbft) — Full migration guide diff --git a/docs/ev-abci/overview.md b/docs/ev-abci/overview.md new file mode 100644 index 0000000000..2331fa5df6 --- /dev/null +++ b/docs/ev-abci/overview.md @@ -0,0 +1,76 @@ +# ev-abci Overview + +ev-abci is an ABCI adapter that allows Cosmos SDK applications to run on Evolve instead of CometBFT. + +## What is ev-abci? + +ev-abci provides: + +- **Drop-in replacement** — Swap CometBFT for Evolve with minimal code changes +- **ABCI compatibility** — Your existing Cosmos SDK modules work unchanged +- **CometBFT RPC compatibility** — Existing clients and tooling continue to work +- **Migration tooling** — Migrate existing chains from CometBFT to Evolve + +## Architecture + +``` +┌─────────────────────────────────────────┐ +│ Your Cosmos App │ +│ ┌─────────────────────────────────┐ │ +│ │ Cosmos SDK Modules │ │ +│ │ (bank, staking, gov, etc.) │ │ +│ └─────────────────────────────────┘ │ +│ │ ABCI │ +│ ┌───────────────▼─────────────────┐ │ +│ │ ev-abci │ │ +│ │ (ABCI adapter + RPC server) │ │ +│ └───────────────┬─────────────────┘ │ +└──────────────────┼──────────────────────┘ + │ Executor Interface +┌──────────────────▼──────────────────────┐ +│ ev-node │ +│ (consensus + DA + P2P) │ +└─────────────────────────────────────────┘ +``` + +ev-abci implements the Executor interface, translating ev-node's calls into ABCI calls to your application. 
+ +## Key Differences from CometBFT + +| Aspect | CometBFT | ev-abci | +|--------|----------|---------| +| Validators | Multiple validators with staking | Single sequencer | +| Consensus | BFT consensus rounds | Sequencer produces blocks | +| Finality | Instant (BFT) | Soft (P2P) → Hard (DA) | +| Block time | ~6s typical | Configurable (100ms+) | +| Vote extensions | Supported | Not supported | + +## Benefits + +- **No validator coordination** — Single sequencer eliminates consensus overhead +- **Faster blocks** — No BFT round-trips, blocks as fast as 100ms +- **DA-secured** — Security from data availability, not validator set +- **Simpler operations** — No validator management, slashing, or jailing + +## Trade-offs + +- **Single sequencer** — One node produces blocks (with forced inclusion for censorship resistance) +- **Different finality model** — Soft confirmation before DA finality +- **No vote extensions** — ABCI++ vote extensions not available + +## Modules + +ev-abci includes helper modules for migration: + +- [Staking Wrapper](/ev-abci/modules/staking-wrapper) — Prevents validator updates during migration +- [Migration Manager](/ev-abci/modules/migration-manager) — Handles validator set transition + +## Repository + +- GitHub: [github.com/evstack/ev-abci](https://github.com/evstack/ev-abci) + +## Next Steps + +- [Cosmos SDK Quickstart](/getting-started/cosmos/quickstart) — Get started +- [Integration Guide](/ev-abci/integration-guide) — Manual integration +- [Migration from CometBFT](/ev-abci/migration-from-cometbft) — Migrate existing chain diff --git a/docs/ev-abci/rpc-compatibility.md b/docs/ev-abci/rpc-compatibility.md new file mode 100644 index 0000000000..9b8e8f9898 --- /dev/null +++ b/docs/ev-abci/rpc-compatibility.md @@ -0,0 +1,135 @@ +# RPC Compatibility + +ev-abci provides CometBFT-compatible RPC endpoints for client compatibility. + +## Overview + +Existing Cosmos SDK clients expect CometBFT RPC endpoints. 
ev-abci implements these endpoints so tools like: + +- Cosmos SDK CLI +- Keplr wallet +- CosmJS +- Block explorers + +continue to work without modification. + +## Supported Endpoints + +### Query Methods + +| Endpoint | Status | Notes | +|----------|--------|-------| +| `/abci_query` | ✓ | Full support | +| `/block` | ✓ | Full support | +| `/block_by_hash` | ✓ | Full support | +| `/block_results` | ✓ | Full support | +| `/blockchain` | ✓ | Full support | +| `/commit` | ✓ | Full support | +| `/consensus_params` | ✓ | Full support | +| `/genesis` | ✓ | Full support | +| `/health` | ✓ | Full support | +| `/status` | ✓ | Full support | +| `/tx` | ✓ | Full support | +| `/tx_search` | ✓ | Full support | +| `/validators` | ✓ | Returns sequencer | + +### Transaction Methods + +| Endpoint | Status | Notes | +|----------|--------|-------| +| `/broadcast_tx_async` | ✓ | Full support | +| `/broadcast_tx_sync` | ✓ | Full support | +| `/broadcast_tx_commit` | ✓ | Waits for inclusion | +| `/check_tx` | ✓ | Full support | + +### Subscription Methods + +| Endpoint | Status | Notes | +|----------|--------|-------| +| `/subscribe` | ✓ | WebSocket events | +| `/unsubscribe` | ✓ | Full support | +| `/unsubscribe_all` | ✓ | Full support | + +## Unsupported Endpoints + +| Endpoint | Reason | +|----------|--------| +| `/consensus_state` | No BFT consensus | +| `/dump_consensus_state` | No BFT consensus | +| `/net_info` | Different P2P model | +| `/num_unconfirmed_txs` | Different mempool | +| `/unconfirmed_txs` | Different mempool | + +## Behavioral Differences + +### Validators + +`/validators` returns the single sequencer rather than a validator set: + +```json +{ + "validators": [ + { + "address": "...", + "voting_power": "1", + "proposer_priority": "0" + } + ], + "count": "1", + "total": "1" +} +``` + +### Commit + +`/commit` returns a simplified commit structure since there's no BFT voting: + +```json +{ + "signed_header": { + "header": { ... 
}, + "commit": { + "height": "100", + "signatures": [ + { + "validator_address": "...", + "signature": "..." + } + ] + } + } +} +``` + +### Block Time + +Block timestamps reflect actual production time, which may be faster than CometBFT's typical 6s blocks. + +## Port Configuration + +Default ports match CometBFT: + +| Port | Purpose | +|------|---------| +| 26657 | RPC | +| 26656 | P2P | + +Configure via flags: +```bash +--evnode.rpc.address tcp://0.0.0.0:26657 +--evnode.p2p.listen /ip4/0.0.0.0/tcp/26656 +``` + +## Client Configuration + +No client changes needed. Point clients at the same RPC URL: + +```javascript +// CosmJS +const client = await StargateClient.connect("http://localhost:26657"); +``` + +```bash +# CLI +appd config node tcp://localhost:26657 +``` diff --git a/docs/ev-reth/configuration.md b/docs/ev-reth/configuration.md new file mode 100644 index 0000000000..5ef7821395 --- /dev/null +++ b/docs/ev-reth/configuration.md @@ -0,0 +1,128 @@ +# ev-reth Configuration + +Configure ev-reth through chainspec (genesis.json) and command-line flags. + +## Chainspec + +The chainspec defines chain parameters. ev-reth uses standard Ethereum genesis format with Evolve extensions. 
+ +### Basic Structure + +```json +{ + "config": { + "chainId": 1337, + "homesteadBlock": 0, + "eip150Block": 0, + "eip155Block": 0, + "eip158Block": 0, + "byzantiumBlock": 0, + "constantinopleBlock": 0, + "petersburgBlock": 0, + "istanbulBlock": 0, + "berlinBlock": 0, + "londonBlock": 0, + "shanghaiTime": 0, + "cancunTime": 0 + }, + "alloc": {}, + "coinbase": "0x0000000000000000000000000000000000000000", + "difficulty": "0x0", + "gasLimit": "0x1c9c380", + "nonce": "0x0", + "timestamp": "0x0" +} +``` + +### Evolve Extensions + +Add under `config.evolve`: + +```json +{ + "config": { + "chainId": 1337, + "evolve": { + "baseFeeSink": "0x...", + "baseFeeRedirectActivationHeight": 0, + "deployAllowlist": { + "admin": "0x...", + "enabled": ["0x..."] + }, + "contractSizeLimit": 49152, + "mintPrecompile": { + "admin": "0x...", + "address": "0x0000000000000000000000000000000000000100" + } + } + } +} +``` + +See [Features](/ev-reth/features/base-fee-redirect) for detailed configuration of each extension. 
+ +## Command-Line Flags + +### RPC + +```bash +--http # Enable HTTP JSON-RPC +--http.addr 0.0.0.0 # Listen address +--http.port 8545 # Listen port +--http.api eth,net,web3 # Enabled APIs +``` + +### Engine API + +```bash +--authrpc.addr 0.0.0.0 # Engine API address +--authrpc.port 8551 # Engine API port +--authrpc.jwtsecret jwt.hex # JWT secret file +``` + +### Data + +```bash +--datadir /data # Data directory +--chain genesis.json # Chainspec file +``` + +## Docker + +Default `docker-compose.yml`: + +```yaml +services: + reth: + image: ghcr.io/evstack/ev-reth:latest + ports: + - "8545:8545" + - "8551:8551" + volumes: + - ./data:/data + - ./genesis.json:/genesis.json + - ./jwt.hex:/jwt.hex + command: + - node + - --chain=/genesis.json + - --http + - --http.addr=0.0.0.0 + - --http.api=eth,net,web3,txpool + - --authrpc.addr=0.0.0.0 + - --authrpc.jwtsecret=/jwt.hex +``` + +## JWT Secret + +Generate for Engine API authentication: + +```bash +openssl rand -hex 32 > jwt.hex +``` + +Both ev-reth and ev-node must use the same secret. + +## Next Steps + +- [Engine API](/ev-reth/engine-api) — Communication protocol +- [Chainspec Reference](/reference/configuration/ev-reth-chainspec) — Full field reference diff --git a/docs/ev-reth/engine-api.md b/docs/ev-reth/engine-api.md new file mode 100644 index 0000000000..905e6e3075 --- /dev/null +++ b/docs/ev-reth/engine-api.md @@ -0,0 +1,170 @@ +# Engine API + +ev-node communicates with ev-reth through the Ethereum Engine API, the same protocol used by Ethereum consensus clients. + +## Overview + +The Engine API is a JSON-RPC interface authenticated with JWT. ev-node acts as the consensus client, driving ev-reth (execution client) to build and finalize blocks. 
+ +## Authentication + +All Engine API calls require JWT authentication: + +```bash +# Generate shared secret +openssl rand -hex 32 > jwt.hex +``` + +Configure both sides: +- ev-reth: `--authrpc.jwtsecret jwt.hex` +- ev-node: `--evm.jwt-secret jwt.hex` + +## Block Production Flow + +``` +ev-node ev-reth + │ │ + │ 1. engine_forkchoiceUpdatedV3 │ + │ (headBlockHash, payloadAttributes) │ + │─────────────────────────────────────────►│ + │ │ + │ 2. {payloadId} │ + │◄─────────────────────────────────────────│ + │ │ + │ 3. engine_getPayloadV3(payloadId) │ + │─────────────────────────────────────────►│ + │ │ + │ 4. {executionPayload, blockValue} │ + │◄─────────────────────────────────────────│ + │ │ + │ [ev-node broadcasts to P2P, submits DA] │ + │ │ + │ 5. engine_newPayloadV3(executionPayload)│ + │─────────────────────────────────────────►│ + │ │ + │ 6. {status: VALID} │ + │◄─────────────────────────────────────────│ + │ │ + │ 7. engine_forkchoiceUpdatedV3 │ + │ (newHeadBlockHash) │ + │─────────────────────────────────────────►│ + │ │ +``` + +## Methods + +### engine_forkchoiceUpdatedV3 + +Update the fork choice and optionally start building a new block. + +**Request:** +```json +{ + "method": "engine_forkchoiceUpdatedV3", + "params": [ + { + "headBlockHash": "0x...", + "safeBlockHash": "0x...", + "finalizedBlockHash": "0x..." + }, + { + "timestamp": "0x...", + "prevRandao": "0x...", + "suggestedFeeRecipient": "0x...", + "withdrawals": [], + "parentBeaconBlockRoot": "0x..." + } + ] +} +``` + +**Response:** +```json +{ + "payloadStatus": { + "status": "VALID", + "latestValidHash": "0x..." + }, + "payloadId": "0x..." +} +``` + +### engine_getPayloadV3 + +Retrieve a built payload. 
+ +**Request:** +```json +{ + "method": "engine_getPayloadV3", + "params": ["0x...payloadId"] +} +``` + +**Response:** +```json +{ + "executionPayload": { + "parentHash": "0x...", + "feeRecipient": "0x...", + "stateRoot": "0x...", + "receiptsRoot": "0x...", + "logsBloom": "0x...", + "prevRandao": "0x...", + "blockNumber": "0x1", + "gasLimit": "0x...", + "gasUsed": "0x...", + "timestamp": "0x...", + "extraData": "0x", + "baseFeePerGas": "0x...", + "blockHash": "0x...", + "transactions": ["0x..."] + }, + "blockValue": "0x..." +} +``` + +### engine_newPayloadV3 + +Validate and execute a payload. + +**Request:** +```json +{ + "method": "engine_newPayloadV3", + "params": [ + { "executionPayload": "..." }, + ["0x...versionedHashes"], + "0x...parentBeaconBlockRoot" + ] +} +``` + +**Response:** +```json +{ + "status": "VALID", + "latestValidHash": "0x..." +} +``` + +## Status Codes + +| Status | Meaning | +|--------|---------| +| `VALID` | Payload is valid | +| `INVALID` | Payload is invalid | +| `SYNCING` | Node is syncing | +| `ACCEPTED` | Payload accepted but not yet validated | + +## Ports + +| Port | Purpose | +|------|---------| +| 8545 | JSON-RPC (public) | +| 8551 | Engine API (authenticated) | + +## Next Steps + +- [Engine API Reference](/reference/api/engine-api) — Full method reference +- [Configuration](/ev-reth/configuration) — ev-reth settings diff --git a/docs/ev-reth/features/base-fee-redirect.md b/docs/ev-reth/features/base-fee-redirect.md new file mode 100644 index 0000000000..2165bae15b --- /dev/null +++ b/docs/ev-reth/features/base-fee-redirect.md @@ -0,0 +1,86 @@ +# Base Fee Redirect + +Redirect EIP-1559 base fees to a treasury address instead of burning them. + +## Overview + +In standard Ethereum, base fees are burned. 
ev-reth allows redirecting these fees to a specified address, enabling: + +- Protocol revenue collection +- Treasury funding +- DAO-controlled fee distribution + +## Configuration + +In your chainspec (`genesis.json`): + +```json +{ + "config": { + "evolve": { + "baseFeeSink": "0xYOUR_TREASURY_ADDRESS", + "baseFeeRedirectActivationHeight": 0 + } + } +} +``` + +| Field | Description | +|-------|-------------| +| `baseFeeSink` | Address to receive base fees | +| `baseFeeRedirectActivationHeight` | Block height to activate (0 = genesis) | + +## How It Works + +``` +Transaction Fee = Base Fee + Priority Fee + +Standard Ethereum: +├── Base Fee → Burned +└── Priority Fee → Block producer + +With Base Fee Redirect: +├── Base Fee → baseFeeSink address +└── Priority Fee → Block producer (fee recipient) +``` + +## Example + +Treasury at `0x1234...`: + +```json +{ + "config": { + "chainId": 1337, + "evolve": { + "baseFeeSink": "0x1234567890123456789012345678901234567890", + "baseFeeRedirectActivationHeight": 0 + } + } +} +``` + +All base fees from block 0 onward go to the treasury. + +## Activation at Later Height + +To activate after chain launch: + +```json +{ + "config": { + "evolve": { + "baseFeeSink": "0x...", + "baseFeeRedirectActivationHeight": 1000000 + } + } +} +``` + +Fees are burned until block 1,000,000, then redirected. + +## Use Cases + +- **Protocol treasury** — Fund development, grants, or operations +- **Staking rewards** — Distribute to token holders +- **Burn address** — Set to `0x0` to explicitly burn (default behavior) diff --git a/docs/ev-reth/features/contract-size-limits.md b/docs/ev-reth/features/contract-size-limits.md new file mode 100644 index 0000000000..0d73f30c03 --- /dev/null +++ b/docs/ev-reth/features/contract-size-limits.md @@ -0,0 +1,71 @@ +# Contract Size Limits + +Increase the maximum contract bytecode size beyond Ethereum's 24KB limit. 
+ +## Overview + +Ethereum limits contract size to 24,576 bytes (24KB) via [EIP-170](https://eips.ethereum.org/EIPS/eip-170). ev-reth allows increasing this limit for use cases requiring larger contracts: + +- Complex DeFi protocols +- On-chain game logic +- ZK verification contracts + +## Configuration + +In your chainspec (`genesis.json`): + +```json +{ + "config": { + "evolve": { + "contractSizeLimit": 49152 + } + } +} +``` + +| Field | Description | Default | +|-------|-------------|---------| +| `contractSizeLimit` | Max bytecode size in bytes | 24576 (24KB) | + +## Common Values + +| Size | Bytes | Use Case | +|------|-------|----------| +| 24KB | 24576 | Ethereum default | +| 48KB | 49152 | 2x limit | +| 64KB | 65536 | 2.67x limit | +| 128KB | 131072 | Large contracts | + +## Trade-offs + +**Pros:** +- Deploy larger, more complex contracts +- Avoid splitting logic across multiple contracts +- Simpler contract architecture + +**Cons:** +- Higher deployment gas costs +- Longer deployment times +- May impact block gas limits + +## Example + +Allow contracts up to 64KB: + +```json +{ + "config": { + "chainId": 1337, + "evolve": { + "contractSizeLimit": 65536 + } + } +} +``` + +## Considerations + +- This is a chain-wide setting—affects all deployments +- Existing tooling may warn about large contracts +- Consider gas costs for deployment and interaction diff --git a/docs/ev-reth/features/deploy-allowlist.md b/docs/ev-reth/features/deploy-allowlist.md new file mode 100644 index 0000000000..7b44b5908e --- /dev/null +++ b/docs/ev-reth/features/deploy-allowlist.md @@ -0,0 +1,77 @@ +# Deploy Allowlist + +Restrict contract deployment to a set of approved addresses. + +## Overview + +By default, any address can deploy contracts. 
The deploy allowlist restricts deployment to explicitly approved addresses, useful for: + +- Permissioned chains +- Controlled rollouts +- Compliance requirements + +## Configuration + +In your chainspec (`genesis.json`): + +```json +{ + "config": { + "evolve": { + "deployAllowlist": { + "admin": "0xADMIN_ADDRESS", + "enabled": [ + "0xDEPLOYER_1", + "0xDEPLOYER_2" + ] + } + } + } +} +``` + +| Field | Description | +|-------|-------------| +| `admin` | Address that can modify the allowlist | +| `enabled` | Addresses allowed to deploy contracts | + +## How It Works + +1. User attempts `CREATE` or `CREATE2` opcode +2. ev-reth checks if sender is in `enabled` list +3. If not allowed, transaction reverts + +## Admin Operations + +The admin can modify the allowlist via precompile calls: + +```solidity +interface IDeployAllowlist { + function addDeployer(address deployer) external; + function removeDeployer(address deployer) external; + function isAllowed(address deployer) external view returns (bool); +} +``` + +Precompile address: `0x0000000000000000000000000000000000000101` + +## Disabling + +To allow unrestricted deployment, omit the `deployAllowlist` config entirely or set an empty `enabled` list with no admin. + +## Example: Single Deployer + +```json +{ + "config": { + "evolve": { + "deployAllowlist": { + "admin": "0xAdminAddress", + "enabled": ["0xAdminAddress"] + } + } + } +} +``` + +Only the admin can deploy contracts initially. They can add more deployers later. diff --git a/docs/ev-reth/features/mint-precompile.md b/docs/ev-reth/features/mint-precompile.md new file mode 100644 index 0000000000..d876c7bf9e --- /dev/null +++ b/docs/ev-reth/features/mint-precompile.md @@ -0,0 +1,87 @@ +# Mint Precompile + +A custom precompile for minting native tokens. + +## Overview + +The mint precompile allows authorized addresses to mint native tokens (ETH equivalent) directly. 
This enables: + +- Bridge minting (mint when assets are bridged in) +- Inflation schedules +- Programmatic rewards +- Airdrops + +## Configuration + +In your chainspec (`genesis.json`): + +```json +{ + "config": { + "evolve": { + "mintPrecompile": { + "admin": "0xMINT_ADMIN_ADDRESS", + "address": "0x0000000000000000000000000000000000000100" + } + } + } +} +``` + +| Field | Description | +|-------|-------------| +| `admin` | Address authorized to call mint | +| `address` | Precompile address (conventionally `0x100`) | + +## Interface + +```solidity +interface IMintPrecompile { + // Mint native tokens to recipient + function mint(address recipient, uint256 amount) external; +} +``` + +## Usage + +From an authorized contract: + +```solidity +contract Bridge { + IMintPrecompile constant MINT = IMintPrecompile(0x0000000000000000000000000000000000000100); + + function bridgeIn(address recipient, uint256 amount) external { + // Verify bridge proof... + + // Mint native tokens + MINT.mint(recipient, amount); + } +} +``` + +## Security + +- Only the `admin` address can call `mint()` +- Calls from other addresses revert +- The admin is typically a bridge contract or multisig + +## Changing Admin + +The admin cannot be changed after genesis. To update, you would need a chain upgrade with a new chainspec. + +## Example: Bridge Setup + +```json +{ + "config": { + "evolve": { + "mintPrecompile": { + "admin": "0xBridgeContractAddress", + "address": "0x0000000000000000000000000000000000000100" + } + } + } +} +``` + +The bridge contract can mint tokens when users bridge assets from another chain. diff --git a/docs/ev-reth/overview.md b/docs/ev-reth/overview.md new file mode 100644 index 0000000000..79d3c5d423 --- /dev/null +++ b/docs/ev-reth/overview.md @@ -0,0 +1,68 @@ +# ev-reth Overview + +ev-reth is a modified [reth](https://github.com/paradigmxyz/reth) Ethereum execution client optimized for Evolve rollups. + +## What is ev-reth? 
+ +ev-reth extends reth with: + +- **Engine API integration** — Driven by ev-node for block production +- **Rollup-specific features** — Base fee redirect, deploy allowlist, custom precompiles +- **Configurable chain parameters** — Contract size limits, custom gas settings + +## Architecture + +``` +┌─────────────────────────────────────────┐ +│ ev-node │ +│ (consensus + DA + P2P) │ +└─────────────────┬───────────────────────┘ + │ Engine API + │ (JWT authenticated) +┌─────────────────▼───────────────────────┐ +│ ev-reth │ +│ (EVM execution) │ +│ ┌───────────┐ ┌───────────────────┐ │ +│ │ State DB │ │ Transaction Pool │ │ +│ └───────────┘ └───────────────────┘ │ +│ ┌───────────────────────────────────┐ │ +│ │ EVM + Precompiles │ │ +│ └───────────────────────────────────┘ │ +└─────────────────────────────────────────┘ +``` + +ev-node drives ev-reth through the Engine API: +1. ev-node calls `engine_forkchoiceUpdated` with payload attributes +2. ev-reth builds a block from pending transactions +3. ev-node calls `engine_getPayload` to retrieve the block +4. ev-node broadcasts and submits to DA +5. ev-node calls `engine_newPayload` to finalize + +## Features + +| Feature | Description | +|---------|-------------| +| [Base Fee Redirect](/ev-reth/features/base-fee-redirect) | Send base fees to treasury instead of burning | +| [Deploy Allowlist](/ev-reth/features/deploy-allowlist) | Restrict who can deploy contracts | +| [Contract Size Limits](/ev-reth/features/contract-size-limits) | Increase max contract size beyond 24KB | +| [Mint Precompile](/ev-reth/features/mint-precompile) | Native token minting for bridges | + +## When to Use ev-reth + +Use ev-reth when you want: + +- Full EVM compatibility +- Ethereum tooling (Foundry, Hardhat, etc.) +- Standard wallet support (MetaMask, etc.) 
+- High-performance Rust execution + +## Repository + +- GitHub: [github.com/evstack/ev-reth](https://github.com/evstack/ev-reth) +- Based on: [paradigmxyz/reth](https://github.com/paradigmxyz/reth) + +## Next Steps + +- [EVM Quickstart](/getting-started/evm/quickstart) — Get started +- [Configuration](/ev-reth/configuration) — Chainspec and settings +- [Engine API](/ev-reth/engine-api) — How ev-node communicates with ev-reth diff --git a/docs/getting-started/choose-your-path.md b/docs/getting-started/choose-your-path.md new file mode 100644 index 0000000000..b07a1b6e05 --- /dev/null +++ b/docs/getting-started/choose-your-path.md @@ -0,0 +1,118 @@ +# Choose Your Path + +Evolve supports three execution environments. Your choice depends on your existing codebase, target users, and development resources. + +## Quick Comparison + +| | EVM (ev-reth) | Cosmos SDK (ev-abci) | Custom Executor | +|----------------------|------------------------------------|-----------------------------|---------------------| +| **Best for** | New chains, DeFi, NFTs | Existing Cosmos chains | Novel VMs, research | +| **Language** | Solidity, Vyper | Go | Any | +| **Wallet support** | MetaMask, Rainbow, all EVM wallets | Keplr, Leap, Cosmos wallets | Build your own | +| **Block explorer** | Blockscout, any EVM explorer | Mintscan, Ping.pub | Build your own | +| **Tooling maturity** | Excellent | Good | None | +| **Setup complexity** | Low | Medium | High | +| **Migration path** | Deploy existing contracts | Migrate existing chain | N/A | + +## EVM (ev-reth) + +Use ev-reth if you want Ethereum compatibility. + +### Pros + +- **Wallet ecosystem** — MetaMask, Rainbow, Rabby, and every EVM wallet works out of the box. Users don't need new software. +- **Developer tooling** — Foundry, Hardhat, Remix, Tenderly, and the entire Ethereum toolchain works unchanged. +- **Contract portability** — Deploy existing Solidity/Vyper contracts without modification. 
+- **Block explorers** — Blockscout, Etherscan-compatible APIs, and standard indexers work immediately. +- **RPC compatibility** — Standard Ethereum JSON-RPC means existing frontend code works. + +### Cons + +- **EVM constraints** — Bound by EVM gas model and execution semantics. + +### When to choose EVM + +- Building a new chain and want maximum user/developer reach +- Need access to EVM DeFi tooling (Uniswap, lending protocols, etc.) +- Want users to connect with wallets they already have + +**→ [EVM Quickstart](/getting-started/evm/quickstart)** + +## Cosmos SDK (ev-abci) + +Use ev-abci if you have an existing Cosmos chain or want Cosmos SDK modules. + +### Pros + +- **Migration path** — Existing Cosmos SDK chains can migrate without rewriting application logic. +- **Cosmos tooling** — Ignite CLI, Cosmos SDK modules, and familiar Go development. +- **Custom modules** — Build application-specific logic beyond what smart contracts allow. +- **Established wallets** — Keplr, Leap, and Cosmos wallets have strong user bases. + +### Cons + +- **Smaller wallet ecosystem** — Fewer wallets than EVM, though major ones are well-supported. +- **Migration complexity** — Moving from CometBFT requires careful migration. +- **Different mental model** — Cosmos SDK modules differ significantly from smart contracts. + +### When to choose Cosmos SDK + +- Have an existing Cosmos SDK chain running on CometBFT +- Want to shed validator overhead while keeping your application logic +- Prefer Go over Solidity for application development + +**→ [Cosmos SDK Quickstart](/getting-started/cosmos/quickstart)** + +## Custom Executor + +Use a custom executor if you need something neither EVM nor Cosmos SDK provides. + +### Pros + +- **Maximum flexibility** — Implement any state machine, any VM, any execution model. +- **Performance optimization** — Tailor execution to your specific use case. +- **Novel designs** — Build zkVMs, specialized rollups, or research prototypes. 
+ +### Cons + +- **No wallet support** — You must build or integrate wallet connectivity. +- **No tooling** — No block explorers, no development frameworks, no debugging tools. +- **High development cost** — Everything beyond ev-node itself is your responsibility. +- **No ecosystem** — Users and developers must learn your custom environment. + +### When to choose Custom + +- Building a novel VM (zkVM, MoveVM, etc.) +- Research or experimental chains +- Highly specialized state machines (gaming, specific financial instruments) +- Have resources to build full tooling stack + +**→ [Custom Executor Quickstart](/getting-started/custom/quickstart)** + +## Decision Tree + +``` +Do you have an existing Cosmos SDK chain? +├── Yes → Cosmos SDK (ev-abci) +└── No + │ + Do you need a custom VM or non-standard execution? + ├── Yes → Custom Executor + └── No + │ + Do you want maximum wallet/tooling support? + ├── Yes → EVM (ev-reth) + └── No + │ + Do you prefer Go over Solidity? + ├── Yes → Cosmos SDK (ev-abci) + └── No → EVM (ev-reth) +``` + +## Switching Later + +- **EVM → Cosmos SDK**: Not practical. Different execution models, would require chain restart. +- **Cosmos SDK → EVM**: Not practical. Same reason. +- **Custom → Either**: Possible if you design for it, but significant work. + +Choose based on your long-term needs. The execution environment is a foundational decision. diff --git a/docs/getting-started/cosmos/integrate-ev-abci.md b/docs/getting-started/cosmos/integrate-ev-abci.md new file mode 100644 index 0000000000..9983693a49 --- /dev/null +++ b/docs/getting-started/cosmos/integrate-ev-abci.md @@ -0,0 +1,109 @@ +# Integrate ev-abci + +Manually integrate ev-abci into an existing Cosmos SDK application. + +## Overview + +ev-abci replaces CometBFT as the consensus engine for your Cosmos SDK chain. Your application logic remains unchanged—only the node startup code changes. + +## 1. Add Dependency + +```bash +go get github.com/evstack/ev-abci@latest +``` + +## 2. 
Modify Your Start Command
+
+Locate your application's entrypoint, typically `cmd/appd/main.go` or `cmd/appd/root.go`.
+
+Replace the CometBFT server with ev-abci:
+
+```go
+package main
+
+import (
+	"os"
+
+	"github.com/cosmos/cosmos-sdk/server"
+	"github.com/spf13/cobra"
+
+	// Import ev-abci server
+	evabci "github.com/evstack/ev-abci/server"
+
+	"your-app/app"
+)
+
+func main() {
+	rootCmd := &cobra.Command{
+		Use:   "appd",
+		Short: "Your App Daemon",
+	}
+
+	// Keep existing commands
+	server.AddCommands(rootCmd, app.DefaultNodeHome, app.New, app.MakeEncodingConfig())
+
+	// Replace start command with ev-abci
+	startCmd := &cobra.Command{
+		Use:   "start",
+		Short: "Run the node with ev-abci",
+		RunE: func(cmd *cobra.Command, _ []string) error {
+			return evabci.StartHandler(cmd, app.New)
+		},
+	}
+
+	evabci.AddFlags(startCmd)
+	rootCmd.AddCommand(startCmd)
+
+	if err := rootCmd.Execute(); err != nil {
+		os.Exit(1)
+	}
+}
+```
+
+## 3. Build
+
+```bash
+go build -o appd ./cmd/appd
+```
+
+## 4. Verify
+
+Check that ev-abci flags are available:
+
+```bash
+./appd start --help
+```
+
+You should see flags like:
+```
+--evnode.node.aggregator
+--evnode.da.address
+--evnode.signer.passphrase
+```
+
+## 5. 
Initialize and Run + +```bash +# Initialize (same as before) +./appd init mynode --chain-id mychain-1 + +# Start with ev-abci +./appd start \ + --evnode.node.aggregator \ + --evnode.da.address http://localhost:7980 \ + --evnode.signer.passphrase secret +``` + +## Key Differences from CometBFT + +| Aspect | CometBFT | ev-abci | +|--------|----------|---------| +| Validators | Multiple validators with staking | Single sequencer | +| Consensus | BFT consensus rounds | Sequencer produces blocks | +| Finality | Instant (BFT) | Soft (P2P) → Hard (DA) | +| Block time | ~6s typical | Configurable (100ms+) | + +## Next Steps + +- [Migration Guide](/getting-started/cosmos/migration-guide) — Migrate existing chain with state +- [ev-abci Overview](/ev-abci/overview) — Architecture details diff --git a/docs/getting-started/cosmos/migration-guide.md b/docs/getting-started/cosmos/migration-guide.md new file mode 100644 index 0000000000..59e09b9e9d --- /dev/null +++ b/docs/getting-started/cosmos/migration-guide.md @@ -0,0 +1,115 @@ +# Migration Guide + +Migrate an existing Cosmos SDK chain from CometBFT to Evolve while preserving state. + +## Overview + +The migration process: + +1. Add migration modules to your chain +2. Pass governance proposal to halt at upgrade height +3. Export state and run migration +4. Restart with ev-abci + +## Phase 1: Add Migration Modules + +### Add Migration Manager + +The migration manager handles the transition from multi-validator to single-sequencer. + +```go +import ( + migrationmngr "github.com/evstack/ev-abci/modules/migrationmngr" + migrationmngrkeeper "github.com/evstack/ev-abci/modules/migrationmngr/keeper" + migrationmngrtypes "github.com/evstack/ev-abci/modules/migrationmngr/types" +) +``` + +Add the keeper to your app and register the module. 
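
Registering the module follows the usual Cosmos SDK app-wiring pattern. A rough sketch of the `app.go` changes is below — the keeper constructor and module registration signatures are assumptions for illustration (this is not compilable as-is); check the ev-abci repository for the exact API:

```go
// Illustrative wiring sketch — signatures are assumptions.
// In app.go: add a store key and construct the keeper.
keys := storetypes.NewKVStoreKeys(migrationmngrtypes.StoreKey /* , ...existing keys */)

app.MigrationMngrKeeper = migrationmngrkeeper.NewKeeper(
	appCodec,
	runtime.NewKVStoreService(keys[migrationmngrtypes.StoreKey]),
	app.StakingKeeper, // so the manager can take over validator-set handling
)

// Register the module alongside your existing modules.
app.ModuleManager = module.NewManager(
	// ...existing modules...
	migrationmngr.NewAppModule(appCodec, app.MigrationMngrKeeper),
)
```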
+
+### Replace Staking Module
+
+Replace the standard staking module with ev-abci's wrapper to prevent validator updates during migration:
+
+```go
+// Replace this:
+import "github.com/cosmos/cosmos-sdk/x/staking"
+
+// With this:
+import "github.com/evstack/ev-abci/modules/staking"
+```
+
+## Phase 2: Governance Proposal
+
+Submit a software upgrade proposal:
+
+```bash
+appd tx gov submit-proposal software-upgrade v2-evolve \
+  --title "Migrate to Evolve" \
+  --description "Upgrade to ev-abci consensus" \
+  --upgrade-height <upgrade-height> \
+  --from <key-name>
+```
+
+Vote on the proposal and wait for it to pass.
+
+## Phase 3: Wire ev-abci
+
+Before the chain halts, update your start command to use ev-abci (see [Integrate ev-abci](/getting-started/cosmos/integrate-ev-abci)).
+
+Rebuild your binary:
+
+```bash
+go build -o appd ./cmd/appd
+```
+
+**Do not start the node yet.**
+
+## Phase 4: Run Migration
+
+After the chain halts at the upgrade height:
+
+```bash
+appd evolve-migrate
+```
+
+This command:
+- Migrates blocks from CometBFT to Evolve format
+- Converts state to Evolve format
+- Creates `ev_genesis.json`
+- Seeds sync stores
+
+## Phase 5: Restart
+
+Start with ev-abci:
+
+```bash
+appd start \
+  --evnode.node.aggregator \
+  --evnode.da.address <da-address> \
+  --evnode.signer.passphrase <passphrase>
+```
+
+The chain continues from the last CometBFT state with the new consensus engine. 
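
After the restart, it's worth confirming programmatically that blocks are being produced again. Assuming your node exposes a CometBFT-compatible `/status` endpoint (ev-abci aims for RPC compatibility, but the exact response shape here is an assumption — adjust the field names to your node's actual output), a small Go check might look like:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strconv"
)

// parseHeight extracts latest_block_height from a CometBFT-style
// /status response body. The field layout is an assumption based on
// CometBFT's RPC shape.
func parseHeight(body []byte) (int64, error) {
	var s struct {
		Result struct {
			SyncInfo struct {
				LatestBlockHeight string `json:"latest_block_height"`
			} `json:"sync_info"`
		} `json:"result"`
	}
	if err := json.Unmarshal(body, &s); err != nil {
		return 0, err
	}
	return strconv.ParseInt(s.Result.SyncInfo.LatestBlockHeight, 10, 64)
}

func main() {
	// Default CometBFT RPC port; adjust for your setup.
	resp, err := http.Get("http://localhost:26657/status")
	if err != nil {
		fmt.Println("node not reachable:", err)
		return
	}
	defer resp.Body.Close()

	var buf [4096]byte
	n, _ := resp.Body.Read(buf[:])
	h, err := parseHeight(buf[:n])
	if err != nil {
		fmt.Println("unexpected response:", err)
		return
	}
	fmt.Println("latest height:", h)
}
```

Run it twice a few seconds apart: a rising height confirms the sequencer is producing blocks again.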
+
+## Considerations
+
+- **Downtime**: Chain is halted during migration (typically minutes)
+- **Coordination**: All node operators must upgrade simultaneously
+- **Rollback**: Keep CometBFT binary and data backup for emergency rollback
+- **Vote extensions**: Not supported in Evolve—will have no effect after migration
+
+## Full Node Migration
+
+For non-sequencer nodes, skip the aggregator flag:
+
+```bash
+appd start \
+  --evnode.da.address <da-address> \
+  --evnode.p2p.peers <node-id>@<host>:<port>
+```
+
+## Next Steps
+
+- [ev-abci Migration from CometBFT](/ev-abci/migration-from-cometbft) — Detailed migration reference
+- [Run a Full Node](/guides/running-nodes/full-node) — Non-sequencer setup
diff --git a/docs/getting-started/cosmos/quickstart.md b/docs/getting-started/cosmos/quickstart.md
new file mode 100644
index 0000000000..6e87a6d792
--- /dev/null
+++ b/docs/getting-started/cosmos/quickstart.md
@@ -0,0 +1,85 @@
+# Cosmos SDK Quickstart
+
+Get a Cosmos SDK chain running on Evolve using Ignite CLI.
+
+## Prerequisites
+
+- Go 1.22+
+- [Ignite CLI](https://docs.ignite.com/welcome/install)
+
+## 1. Start Local DA
+
+```bash
+go install github.com/evstack/ev-node/tools/local-da@latest
+local-da
+```
+
+Keep this running in a separate terminal.
+
+## 2. Create a New Chain
+
+```bash
+ignite scaffold chain mychain --address-prefix mychain
+cd mychain
+```
+
+## 3. Add Evolve
+
+Install the Evolve plugin for Ignite:
+
+```bash
+ignite app install -g github.com/ignite/apps/evolve
+```
+
+Add Evolve to your chain:
+
+```bash
+ignite evolve add
+```
+
+This modifies your chain to use ev-abci instead of CometBFT.
+
+## 4. Build and Initialize
+
+```bash
+make install
+
+mychaind init mynode --chain-id mychain-1
+mychaind keys add mykey --keyring-backend test
+mychaind genesis add-genesis-account mykey 1000000000stake --keyring-backend test
+mychaind genesis gentx mykey 1000000stake --chain-id mychain-1 --keyring-backend test
+mychaind genesis collect-gentxs
+```
+
+## 5. 
Start the Chain + +```bash +mychaind start \ + --evnode.node.aggregator \ + --evnode.da.address http://localhost:7980 \ + --evnode.signer.passphrase secret +``` + +You should see blocks being produced: +``` +INF block marked as DA included blockHeight=1 +INF block marked as DA included blockHeight=2 +``` + +## 6. Interact + +In another terminal: + +```bash +# Check balance +mychaind query bank balances $(mychaind keys show mykey -a --keyring-backend test) + +# Send tokens +mychaind tx bank send mykey mychain1... 1000stake --keyring-backend test --chain-id mychain-1 -y +``` + +## Next Steps + +- [Integrate ev-abci](/getting-started/cosmos/integrate-ev-abci) — Manual integration without Ignite +- [Migration Guide](/getting-started/cosmos/migration-guide) — Migrate existing CometBFT chain +- [Connect to Celestia](/guides/da-layers/celestia) — Production DA layer diff --git a/docs/getting-started/custom/implement-executor.md b/docs/getting-started/custom/implement-executor.md new file mode 100644 index 0000000000..18ec9c0dfa --- /dev/null +++ b/docs/getting-started/custom/implement-executor.md @@ -0,0 +1,212 @@ +# Implement Executor Interface + +Deep dive into each method of the Executor interface. + +## Interface Overview + +```go +type Executor interface { + InitChain(ctx context.Context, genesis Genesis) ([]byte, error) + GetTxs(ctx context.Context) ([][]byte, error) + ExecuteTxs(ctx context.Context, txs [][]byte, height uint64, timestamp time.Time) (*ExecutionResult, error) + SetFinal(ctx context.Context, height uint64) error +} +``` + +## InitChain + +Called once when the chain starts for the first time. 
+ +```go +func (e *MyExecutor) InitChain(ctx context.Context, genesis Genesis) ([]byte, error) +``` + +**Parameters:** +- `genesis` — Contains initial state, chain ID, and configuration + +**Returns:** +- Initial state root (hash of genesis state) +- Error if initialization fails + +**Responsibilities:** +- Parse genesis data +- Initialize state storage +- Set up initial accounts/balances +- Return deterministic state root + +**Example:** + +```go +func (e *MyExecutor) InitChain(ctx context.Context, genesis Genesis) ([]byte, error) { + // Parse genesis + var state GenesisState + if err := json.Unmarshal(genesis.AppState, &state); err != nil { + return nil, err + } + + // Initialize state + for addr, balance := range state.Balances { + e.db.Set([]byte(addr), []byte(balance)) + } + + // Compute and return state root + return e.db.Hash(), nil +} +``` + +## GetTxs + +Called by the sequencer to get pending transactions for the next block. + +```go +func (e *MyExecutor) GetTxs(ctx context.Context) ([][]byte, error) +``` + +**Returns:** +- Slice of transaction bytes from your mempool +- Error if retrieval fails + +**Responsibilities:** +- Return transactions ready for inclusion +- Optionally prioritize by fee, nonce, etc. +- Remove invalid transactions + +**Example:** + +```go +func (e *MyExecutor) GetTxs(ctx context.Context) ([][]byte, error) { + txs := e.mempool.GetPending(100) // Get up to 100 txs + return txs, nil +} +``` + +## ExecuteTxs + +The core execution method. Called for every block. 
+ +```go +func (e *MyExecutor) ExecuteTxs( + ctx context.Context, + txs [][]byte, + height uint64, + timestamp time.Time, +) (*ExecutionResult, error) +``` + +**Parameters:** +- `txs` — Ordered transactions to execute +- `height` — Block height +- `timestamp` — Block timestamp + +**Returns:** +- `ExecutionResult` containing new state root and gas used +- Error only for system failures (not tx failures) + +**Responsibilities:** +- Execute each transaction in order +- Update state +- Track gas usage +- Handle transaction failures gracefully +- Return new state root + +**Example:** + +```go +func (e *MyExecutor) ExecuteTxs( + ctx context.Context, + txs [][]byte, + height uint64, + timestamp time.Time, +) (*ExecutionResult, error) { + var totalGas uint64 + + for _, txBytes := range txs { + tx, err := DecodeTx(txBytes) + if err != nil { + continue // Skip invalid tx + } + + gas, err := e.executeTx(tx) + if err != nil { + // Log but continue - tx failure != block failure + continue + } + + totalGas += gas + } + + // Commit state changes + stateRoot := e.db.Commit() + + return &ExecutionResult{ + StateRoot: stateRoot, + GasUsed: totalGas, + }, nil +} +``` + +## SetFinal + +Called when a block is confirmed on the DA layer. + +```go +func (e *MyExecutor) SetFinal(ctx context.Context, height uint64) error +``` + +**Parameters:** +- `height` — The block height that is now DA-finalized + +**Responsibilities:** +- Mark state as finalized +- Prune old state if desired +- Trigger any finality-dependent logic + +**Example:** + +```go +func (e *MyExecutor) SetFinal(ctx context.Context, height uint64) error { + // Mark height as final + e.finalHeight = height + + // Optionally prune old state + if height > 100 { + e.db.Prune(height - 100) + } + + return nil +} +``` + +## State Management Tips + +1. **Determinism** — ExecuteTxs must be deterministic. Same inputs must produce same state root. + +2. **Atomicity** — Either all state changes for a block commit, or none do. + +3. 
**Crash recovery** — State should be recoverable after crash. ev-node will replay blocks if needed. + +4. **Gas metering** — Track computational cost to prevent DoS. + +## Testing + +Test your executor in isolation: + +```go +func TestExecuteTxs(t *testing.T) { + exec := NewMyExecutor() + + // Initialize + _, err := exec.InitChain(ctx, genesis) + require.NoError(t, err) + + // Execute + result, err := exec.ExecuteTxs(ctx, txs, 1, time.Now()) + require.NoError(t, err) + require.NotEmpty(t, result.StateRoot) +} +``` + +## Next Steps + +- [Executor Interface Reference](/reference/interfaces/executor) — Full type definitions +- [Testapp Source](https://github.com/evstack/ev-node/tree/main/apps/testapp) — Working example diff --git a/docs/getting-started/custom/quickstart.md b/docs/getting-started/custom/quickstart.md new file mode 100644 index 0000000000..87785de584 --- /dev/null +++ b/docs/getting-started/custom/quickstart.md @@ -0,0 +1,140 @@ +# Custom Executor Quickstart + +Build a minimal custom executor to understand how ev-node integrates with execution layers. + +## Prerequisites + +- Go 1.22+ +- Familiarity with Go interfaces + +## 1. Start Local DA + +```bash +go install github.com/evstack/ev-node/tools/local-da@latest +local-da +``` + +Keep this running. + +## 2. Clone ev-node + +```bash +git clone https://github.com/evstack/ev-node.git +cd ev-node +``` + +## 3. Explore the Testapp + +ev-node includes a reference executor in `apps/testapp/`. This is a minimal key-value store: + +```bash +ls apps/testapp/ +``` + +Key files: +- `executor.go` — Implements the Executor interface +- `main.go` — Wires everything together + +## 4. Build and Run + +```bash +make build + +./build/testapp init --evnode.node.aggregator --evnode.signer.passphrase secret + +./build/testapp start --evnode.signer.passphrase secret +``` + +You should see blocks being produced. + +## 5. 
Understand the Executor Interface + +The core interface your executor must implement: + +```go +type Executor interface { + // Initialize chain state from genesis + InitChain(ctx context.Context, genesis Genesis) (stateRoot []byte, err error) + + // Return pending transactions from mempool + GetTxs(ctx context.Context) (txs [][]byte, err error) + + // Execute transactions and return new state root + ExecuteTxs(ctx context.Context, txs [][]byte, height uint64, timestamp time.Time) (*ExecutionResult, error) + + // Mark a height as DA-finalized + SetFinal(ctx context.Context, height uint64) error +} +``` + +## 6. Create Your Own Executor + +Create a new file `my_executor.go`: + +```go +package main + +import ( + "context" + "time" + + "github.com/evstack/ev-node/core/execution" +) + +type MyExecutor struct { + state map[string]string +} + +func NewMyExecutor() *MyExecutor { + return &MyExecutor{state: make(map[string]string)} +} + +func (e *MyExecutor) InitChain(ctx context.Context, genesis execution.Genesis) ([]byte, error) { + // Initialize from genesis + return []byte("genesis-root"), nil +} + +func (e *MyExecutor) GetTxs(ctx context.Context) ([][]byte, error) { + // Return pending transactions + return nil, nil +} + +func (e *MyExecutor) ExecuteTxs(ctx context.Context, txs [][]byte, height uint64, timestamp time.Time) (*execution.ExecutionResult, error) { + // Process transactions, update state + for _, tx := range txs { + // Your logic here + _ = tx + } + + return &execution.ExecutionResult{ + StateRoot: []byte("new-root"), + GasUsed: 0, + }, nil +} + +func (e *MyExecutor) SetFinal(ctx context.Context, height uint64) error { + // Height is now DA-finalized + return nil +} +``` + +## 7. Wire It Up + +See `apps/testapp/main.go` for how to create a full node with your executor: + +```go +executor := NewMyExecutor() + +node, err := node.NewFullNode( + ctx, + config, + executor, + // ... 
other options +) +``` + +## Next Steps + +- [Implement Executor](/getting-started/custom/implement-executor) — Deep dive into each method +- [Executor Interface Reference](/reference/interfaces/executor) — Full interface documentation +- [Testapp Source](https://github.com/evstack/ev-node/tree/main/apps/testapp) — Reference implementation diff --git a/docs/getting-started/evm/deploy-contracts.md b/docs/getting-started/evm/deploy-contracts.md new file mode 100644 index 0000000000..716810919f --- /dev/null +++ b/docs/getting-started/evm/deploy-contracts.md @@ -0,0 +1,144 @@ +# Deploy Contracts + +Deploy smart contracts to your Evolve EVM chain using Foundry or Hardhat. + +## Network Configuration + +| Setting | Local | Testnet (example) | +|---------|-------|-------------------| +| RPC URL | http://localhost:8545 | https://rpc.your-chain.com | +| Chain ID | 1337 | Your chain ID | +| Currency | ETH | Your native token | + +## Foundry + +### Install + +```bash +curl -L https://foundry.paradigm.xyz | bash +foundryup +``` + +### Configure + +Create or update `foundry.toml`: + +```toml +[profile.default] +src = "src" +out = "out" +libs = ["lib"] + +[rpc_endpoints] +local = "http://localhost:8545" +``` + +### Deploy + +```bash +# Deploy a contract +forge create src/MyContract.sol:MyContract \ + --rpc-url local \ + --private-key $PRIVATE_KEY + +# Deploy with constructor args +forge create src/Token.sol:Token \ + --rpc-url local \ + --private-key $PRIVATE_KEY \ + --constructor-args "MyToken" "MTK" 18 + +# Deploy and verify (if explorer supports it) +forge create src/MyContract.sol:MyContract \ + --rpc-url local \ + --private-key $PRIVATE_KEY \ + --verify +``` + +### Interact + +```bash +# Call a read function +cast call $CONTRACT_ADDRESS "balanceOf(address)" $WALLET_ADDRESS --rpc-url local + +# Send a transaction +cast send $CONTRACT_ADDRESS "transfer(address,uint256)" $TO_ADDRESS 1000 \ + --rpc-url local \ + --private-key $PRIVATE_KEY +``` + +## Hardhat + +### Install + 
+```bash +npm init -y +npm install --save-dev hardhat @nomicfoundation/hardhat-toolbox +npx hardhat init +``` + +### Configure + +Update `hardhat.config.js`: + +```javascript +require("@nomicfoundation/hardhat-toolbox"); + +module.exports = { + solidity: "0.8.24", + networks: { + local: { + url: "http://localhost:8545", + accounts: [process.env.PRIVATE_KEY], + }, + }, +}; +``` + +### Deploy + +Create `scripts/deploy.js`: + +```javascript +const hre = require("hardhat"); + +async function main() { + const Contract = await hre.ethers.getContractFactory("MyContract"); + const contract = await Contract.deploy(); + await contract.waitForDeployment(); + + console.log("Deployed to:", await contract.getAddress()); +} + +main().catch((error) => { + console.error(error); + process.exit(1); +}); +``` + +Run: + +```bash +npx hardhat run scripts/deploy.js --network local +``` + +## Prefunded Accounts + +The default chainspec includes prefunded accounts for testing. Check your `genesis.json` `alloc` section for available addresses. + +To add your own: + +```json +{ + "alloc": { + "0xYourAddress": { + "balance": "0x200000000000000000000000000000000000000000000000000000000000000" + } + } +} +``` + +## Next Steps + +- [Configure ev-reth](/getting-started/evm/setup-ev-reth) — Chainspec customization +- [Base Fee Redirect](/ev-reth/features/base-fee-redirect) — Send fees to treasury +- [Deploy Allowlist](/ev-reth/features/deploy-allowlist) — Restrict contract deployment diff --git a/docs/getting-started/evm/quickstart.md b/docs/getting-started/evm/quickstart.md new file mode 100644 index 0000000000..4f25f3b4ab --- /dev/null +++ b/docs/getting-started/evm/quickstart.md @@ -0,0 +1,88 @@ +# EVM Quickstart + +Get an EVM rollup running locally in under 5 minutes. + +## Prerequisites + +- Go 1.22+ +- Docker +- Git + +## 1. 
Start Local DA + +```bash +go install github.com/evstack/ev-node/tools/local-da@latest +local-da +``` + +You should see: +``` +INF Listening on host=localhost port=7980 +``` + +Keep this running in a separate terminal. + +## 2. Start ev-reth + +```bash +git clone https://github.com/evstack/ev-reth.git +cd ev-reth +docker compose up -d +``` + +This starts reth with Evolve's Engine API configuration. The default ports: +- `8545` — JSON-RPC +- `8551` — Engine API + +## 3. Start ev-node + +In a new terminal: + +```bash +git clone https://github.com/evstack/ev-node.git +cd ev-node +make build-evm +``` + +Initialize and start: + +```bash +./build/evm init --evnode.node.aggregator --evnode.signer.passphrase secret + +./build/evm start \ + --evnode.node.aggregator \ + --evnode.signer.passphrase secret \ + --evnode.node.block_time 1s +``` + +You should see blocks being produced: +``` +INF block marked as DA included blockHeight=1 +INF block marked as DA included blockHeight=2 +``` + +## 4. Connect a Wallet + +Add the network to MetaMask: + +| Setting | Value | +|---------|-------| +| Network Name | Evolve Local | +| RPC URL | http://localhost:8545 | +| Chain ID | 1337 | +| Currency | ETH | + +## 5. 
Deploy a Contract
+
+With Foundry:
+
+```bash
+forge create src/Counter.sol:Counter --rpc-url http://localhost:8545 --private-key <your-private-key>
+```
+
+## Next Steps
+
+- [Configure ev-reth](/getting-started/evm/setup-ev-reth) — Customize chainspec, features
+- [Deploy Contracts](/getting-started/evm/deploy-contracts) — Foundry and Hardhat setup
+- [Connect to Celestia](/guides/da-layers/celestia) — Production DA layer
+- [Run a Full Node](/guides/running-nodes/full-node) — Non-sequencer node setup
diff --git a/docs/getting-started/evm/setup-ev-reth.md b/docs/getting-started/evm/setup-ev-reth.md
new file mode 100644
index 0000000000..8475312235
--- /dev/null
+++ b/docs/getting-started/evm/setup-ev-reth.md
@@ -0,0 +1,134 @@
+# Configure ev-reth
+
+ev-reth is a modified [reth](https://github.com/paradigmxyz/reth) client with Evolve-specific features. This guide covers configuration options.
+
+## Chainspec
+
+The chainspec (`genesis.json`) defines your chain's parameters. ev-reth extends the standard Ethereum genesis format with Evolve-specific fields. 
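
Because the Evolve extensions live under `config.evolve` (documented in the sections below), deployment tooling can read them with ordinary JSON decoding. A minimal Go sketch — the struct fields mirror the example chainspecs in this guide and are illustrative, not a normative schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// evolveConfig models the Evolve-specific extensions under
// config.evolve. Field names follow the examples in this guide.
type evolveConfig struct {
	BaseFeeSink       string `json:"baseFeeSink"`
	ContractSizeLimit uint64 `json:"contractSizeLimit"`
}

type chainConfig struct {
	ChainID uint64        `json:"chainId"`
	Evolve  *evolveConfig `json:"evolve"` // nil when no Evolve extensions are set
}

type genesisFile struct {
	Config chainConfig `json:"config"`
}

// parseGenesis decodes a chainspec and returns its chain config.
func parseGenesis(data []byte) (chainConfig, error) {
	var g genesisFile
	if err := json.Unmarshal(data, &g); err != nil {
		return chainConfig{}, err
	}
	return g.Config, nil
}

func main() {
	spec := []byte(`{"config":{"chainId":1337,"evolve":{"baseFeeSink":"0xTREASURY","contractSizeLimit":49152}}}`)
	cfg, err := parseGenesis(spec)
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.ChainID, cfg.Evolve.BaseFeeSink)
}
```

This kind of check is handy in CI to assert that the chainspec you deploy actually carries the treasury address and size limit you expect.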
+ +### Minimal Chainspec + +```json +{ + "config": { + "chainId": 1337, + "homesteadBlock": 0, + "eip150Block": 0, + "eip155Block": 0, + "eip158Block": 0, + "byzantiumBlock": 0, + "constantinopleBlock": 0, + "petersburgBlock": 0, + "istanbulBlock": 0, + "berlinBlock": 0, + "londonBlock": 0, + "shanghaiTime": 0, + "cancunTime": 0 + }, + "alloc": { + "0xYOUR_ADDRESS": { + "balance": "0x200000000000000000000000000000000000000000000000000000000000000" + } + }, + "coinbase": "0x0000000000000000000000000000000000000000", + "difficulty": "0x0", + "gasLimit": "0x1c9c380", + "nonce": "0x0", + "timestamp": "0x0" +} +``` + +### Evolve Extensions + +Add these under `config.evolve`: + +```json +{ + "config": { + "chainId": 1337, + "evolve": { + "baseFeeSink": "0xTREASURY_ADDRESS", + "baseFeeRedirectActivationHeight": 0, + "deployAllowlist": { + "admin": "0xADMIN_ADDRESS", + "enabled": ["0xDEPLOYER1", "0xDEPLOYER2"] + }, + "contractSizeLimit": 49152, + "mintPrecompile": { + "admin": "0xMINT_ADMIN", + "address": "0x0000000000000000000000000000000000000100" + } + } + } +} +``` + +| Field | Description | +|-------|-------------| +| `baseFeeSink` | Address to receive base fees instead of burning | +| `deployAllowlist` | Restrict contract deployment to allowlisted addresses | +| `contractSizeLimit` | Override default 24KB contract size limit | +| `mintPrecompile` | Enable native token minting precompile | + +## Docker Configuration + +The default `docker-compose.yml` in ev-reth: + +```yaml +services: + reth: + image: ghcr.io/evstack/ev-reth:latest + ports: + - "8545:8545" # JSON-RPC + - "8551:8551" # Engine API + volumes: + - ./data:/data + - ./genesis.json:/genesis.json + - ./jwt.hex:/jwt.hex + command: + - node + - --chain=/genesis.json + - --http + - --http.addr=0.0.0.0 + - --http.api=eth,net,web3,txpool + - --authrpc.addr=0.0.0.0 + - --authrpc.jwtsecret=/jwt.hex +``` + +### JWT Secret + +Generate a JWT secret for Engine API authentication: + +```bash +openssl rand -hex 32 > 
jwt.hex +``` + +Both ev-reth and ev-node must use the same JWT secret. + +## Environment Variables + +| Variable | Description | Default | +|----------|-------------|---------| +| `RUST_LOG` | Log level | `info` | +| `RETH_DATA_DIR` | Data directory | `/data` | + +## Command Line Flags + +Common flags when running ev-reth directly: + +```bash +ev-reth node \ + --chain genesis.json \ + --http \ + --http.addr 0.0.0.0 \ + --http.port 8545 \ + --http.api eth,net,web3,txpool,debug,trace \ + --authrpc.addr 0.0.0.0 \ + --authrpc.port 8551 \ + --authrpc.jwtsecret jwt.hex +``` + +## Next Steps + +- [ev-reth Features](/ev-reth/features/base-fee-redirect) — Detailed feature documentation +- [ev-reth Chainspec Reference](/reference/configuration/ev-reth-chainspec) — Full configuration reference diff --git a/docs/guides/advanced/based-sequencing.md b/docs/guides/advanced/based-sequencing.md new file mode 100644 index 0000000000..c99bf279fa --- /dev/null +++ b/docs/guides/advanced/based-sequencing.md @@ -0,0 +1,76 @@ +# Based Sequencing + +Based sequencing is a decentralized sequencing model where transaction ordering is determined by the base layer (Celestia) rather than a centralized sequencer. In this model, **every full node acts as its own proposer** by independently and deterministically deriving the next batch of transactions directly from the base layer. + +## How Based Sequencing Works + +### Transaction Submission + +Users submit transactions to the base layer's forced inclusion namespace. These transactions are posted as blobs to the DA layer, where they become part of the canonical transaction ordering. + +```text +User → Base Layer (DA) → Full Nodes retrieve and execute +``` + +### Deterministic Batch Construction + +All full nodes independently construct identical batches by: + +1. **Retrieving forced inclusion transactions** from the base layer at epoch boundaries +2. 
**Applying forkchoice rules** to determine batch composition:
+   - `MaxBytes`: Maximum byte size per batch (respects block size limits)
+   - DA epoch boundaries
+3. **Smoothing large transactions** across multiple blocks when necessary
+
+### Epoch-Based Processing
+
+Forced inclusion transactions are retrieved in epochs defined by `DAEpochForcedInclusion`. For example, with an epoch size of 10:
+
+- DA heights 100-109 form one epoch
+- DA heights 110-119 form the next epoch
+- Transactions from each epoch must be included before the epoch ends
+
+Epoch duration determines the block time in based sequencing: the block time is a multiple of the DA layer's block time.
+Additionally, because no headers are published in this mode, lazy mode has no effect.
+
+## Block Smoothing
+
+When forced inclusion transactions exceed the `MaxBytes` limit for a single block, they can be "smoothed" across multiple blocks within the same epoch. This ensures that:
+
+- Large transactions don't block the chain
+- All transactions are eventually included
+- The system remains censorship-resistant
+
+### Example
+
+```text
+Epoch [100, 104]:
+  - Block 1: Includes 1.5 MB of forced inclusion txs (partial)
+  - Block 2: Includes remaining 0.5 MB + new regular txs
+  - All epoch transactions included before DA height 105
+```
+
+## Trust Assumptions
+
+Based sequencing minimizes trust assumptions:
+
+- **No trusted sequencer** - ordering comes from the base layer
+- **No proposer selection** - every full node derives blocks independently
+- **Deterministic consensus** - all honest nodes converge on the same chain
+- **Base layer security** - inherits the security guarantees of the DA layer
+- **No malicious actor concern** - invalid blocks are automatically rejected by validation rules
+
+## Comparison with Single Sequencer
+
+| Feature | Based Sequencing | Single Sequencer |
+| --------------------- | ----------------------------- | ----------------------------- |
+| Decentralization | ✅ Fully 
decentralized | ❌ Single point of control | +| Censorship Resistance | ✅ Guaranteed by base layer | ⚠️ Guaranteed by base layer | +| Latency | ⚠️ Depends on DA layer (~12s) | ✅ Low latency (configurable) | +| Block Time Control | ❌ Factor of DA block time | ✅ Configurable by sequencer | +| Trust Assumptions | ✅ Minimal (only DA layer) | ❌ Trust the sequencer | + +## Further Reading + +- [Data Availability](../data-availability.md) - Understanding the DA layer +- [Transaction Flow](../transaction-flow.md) - How transactions move through the system diff --git a/docs/guides/advanced/custom-precompiles.md b/docs/guides/advanced/custom-precompiles.md new file mode 100644 index 0000000000..f0a4a4b1a8 --- /dev/null +++ b/docs/guides/advanced/custom-precompiles.md @@ -0,0 +1,11 @@ +# Custom Precompiles + + diff --git a/docs/guides/advanced/forced-inclusion.md b/docs/guides/advanced/forced-inclusion.md new file mode 100644 index 0000000000..38494af3eb --- /dev/null +++ b/docs/guides/advanced/forced-inclusion.md @@ -0,0 +1,128 @@ +# Single Sequencer + +A single sequencer is the simplest sequencing architecture for an Evolve-based chain. In this model, one node (the sequencer) is responsible for ordering transactions, producing blocks, and submitting data to the data availability (DA) layer. + +## How the Single Sequencer Model Works + +1. **Transaction Submission:** + - Users submit transactions to the execution environment via RPC or other interfaces. +2. **Transaction Collection and Ordering:** + - The execution environment collects incoming transactions. + - The sequencer requests a batch of transactions from the execution environment to be included in the next block. +3. **Block Production:** + - **Without lazy mode:** the sequencer produces new blocks at fixed intervals. 
+ - **With lazy mode:** the sequencer produces a block once either + - enough transactions are collected + - the lazy-mode block interval elapses + More info in the [lazy mode configuration guide](../config.md#lazy-mode-lazy-aggregator). + - Each block contains a batch of ordered transactions and metadata. + +4. **Data Availability Posting:** + - The sequencer posts the block data to the configured DA layer (e.g., Celestia). + - This ensures that anyone can access the data needed to reconstruct the chain state. + +5. **State Update:** + - The sequencer updates the chain state based on the new block and makes the updated state available to light clients and full nodes. + +## Transaction Flow Diagram + +```mermaid +sequenceDiagram + participant User + participant ExecutionEnv as Execution Environment + participant Sequencer + participant DA as Data Availability Layer + + User->>ExecutionEnv: Submit transaction + Sequencer->>ExecutionEnv: Request batch for block + ExecutionEnv->>Sequencer: Provide batch of transactions + Sequencer->>DA: Post block data + Sequencer->>ExecutionEnv: Update state + ExecutionEnv->>User: State/query response +``` + +## Forced Inclusion + +While the single sequencer controls transaction ordering, the system provides a censorship-resistance mechanism called **forced inclusion**. This ensures users can always include their transactions even if the sequencer refuses to process them. + +### How Forced Inclusion Works + +1. **Direct DA Submission:** + - Users can submit transactions directly to the DA layer's forced inclusion namespace + - These transactions bypass the sequencer entirely + +2. **Epoch-Based Retrieval:** + - The sequencer retrieves forced inclusion transactions from the DA layer at epoch boundaries + - Epochs are defined by `DAEpochForcedInclusion` in the genesis configuration + +3. 
**Mandatory Inclusion:** + - The sequencer MUST include all forced inclusion transactions from an epoch before the epoch ends + - Full nodes verify that forced inclusion transactions are properly included + +4. **Smoothing:** + - If forced inclusion transactions exceed block size limits (`MaxBytes`), they can be spread across multiple blocks within the same epoch + - All transactions must be included before moving to the next epoch + +### Example + +```text +Epoch [100, 109] (epoch size = 10): + - User submits tx directly to DA at height 102 + - Sequencer retrieves forced txs at epoch start (height 100) + - Sequencer includes forced tx in blocks before height 110 +``` + +See [Based Sequencing](./based.md) for a fully decentralized alternative that relies entirely on forced inclusion. + +## Detecting Malicious Sequencer Behavior + +Full nodes continuously monitor the sequencer to ensure it follows consensus rules, particularly around forced inclusion: + +### Censorship Detection + +If a sequencer fails to include forced inclusion transactions past their epoch boundary, full nodes will: + +1. **Detect the violation** - missing transactions from past epochs +2. **Reject invalid blocks** - do not build on top of censoring blocks +3. **Log the violation** with transaction hashes and epoch details +4. 
**Halt consensus** - the chain cannot progress with a malicious sequencer + +### Recovery from Malicious Sequencer + +When a malicious sequencer is detected (censoring forced inclusion transactions): + +**All nodes must restart the chain in based sequencing mode:** + +```bash +# Restart with based sequencing enabled +./evnode start --node.aggregator --node.based_sequencer +``` + +**In based sequencing mode:** + +- No single sequencer controls transaction ordering +- Every full node derives blocks independently from the DA layer +- Forced inclusion becomes the primary (and only) transaction submission method +- Censorship becomes impossible as ordering comes from the DA layer + +**Important considerations:** + +- All full nodes should coordinate the switch to based mode +- The chain continues from the last valid state +- Users submit transactions directly to the DA layer going forward +- This is a one-way transition - moving back to single sequencer requires social consensus + +See [Based Sequencing documentation](./based.md) for details on operating in this mode. + +## Advantages + +- **Simplicity:** Easy to set up and operate, making it ideal for development, testing, and small-scale deployments compared to other more complex sequencers. +- **Low Latency:** Fast block production and transaction inclusion, since there is no consensus overhead among multiple sequencers. +- **Independence from DA block time:** The sequencer can produce blocks on its own schedule, without being tied to the block time of the DA layer, enabling more flexible transaction processing than DA-timed sequencers. +- **Forced inclusion fallback:** Users can always submit transactions via the DA layer if the sequencer is unresponsive or censoring. + +## Disadvantages + +- **Single point of failure:** If the sequencer goes offline, block production stops (though the chain can transition to based mode). 
+- **Trust requirement:** Users must trust the sequencer to include their transactions in a timely manner (mitigated by forced inclusion). +- **Censorship risk:** A malicious sequencer can temporarily censor transactions until forced inclusion activates or the chain transitions to based mode. diff --git a/docs/guides/da-layers/celestia.md b/docs/guides/da-layers/celestia.md new file mode 100644 index 0000000000..6b6c092b05 --- /dev/null +++ b/docs/guides/da-layers/celestia.md @@ -0,0 +1,153 @@ +# Using Celestia as DA + + + + +## 🌞 Introduction {#introduction} + +This tutorial serves as a comprehensive guide for deploying your chain on Celestia's data availability (DA) network. From the Evolve perspective, there's no difference in posting blocks to Celestia's testnets or Mainnet Beta. + +Before proceeding, ensure that you have completed the [gm-world](../gm-world.md) tutorial, which covers installing the Testapp CLI and running a chain against a local DA network. + +## 🪶 Running a Celestia light node + +Before you can start your chain node, you need to initialize, sync, and fund a light node on one of Celestia's networks, running a compatible version: + +Find more information on how to run a light node in the [Celestia documentation](https://celestia.org/run-a-light-node/#start-up-a-node). 
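As a rough sketch, initializing and starting a light node on Mocha looks like the following. Treat this as an illustration: the exact flags and the core endpoint depend on your celestia-node version, and `<core-rpc-endpoint>` is a placeholder you must replace with an endpoint from the Celestia docs.

```bash
# Initialize the light node store and keys for the Mocha testnet
celestia light init --p2p.network mocha

# Start the light node against a consensus (core) endpoint of your choice
celestia light start --core.ip <core-rpc-endpoint> --p2p.network mocha
```

Once the node is running, fund its default account before moving on to the next steps.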
+ +::: code-group + +```sh-vue [Arabica] +Evolve Version: {{constants.celestiaNodeArabicaEvolveTag}} +Celestia Node Version: {{constants.celestiaNodeArabicaTag}} +``` + +```sh-vue [Mocha] +Evolve Version: {{constants.celestiaNodeMochaEvolveTag}} +Celestia Node Version: {{constants.celestiaNodeMochaTag}} +``` + +```sh-vue [Mainnet] +Evolve Version: {{constants.celestiaNodeMainnetEvolveTag}} +Celestia Node Version: {{constants.celestiaNodeMainnetTag}} +``` + +::: + +- [Arabica Devnet](https://docs.celestia.org/how-to-guides/arabica-devnet) +- [Mocha Testnet](https://docs.celestia.org/how-to-guides/mocha-testnet) +- [Mainnet Beta](https://docs.celestia.org/how-to-guides/mainnet) + +The main difference lies in how you fund your wallet address: using testnet TIA or [TIA](https://docs.celestia.org/learn/tia#overview-of-tia) for Mainnet Beta. + +After successfully starting a light node, it's time to start posting the batches of block data that your chain generates to Celestia. + +## 🏗️ Prerequisites {#prerequisites} + +- `gmd` CLI installed from the [gm-world](../gm-world.md) tutorial. + +## 🛠️ Configuring flags for DA + +Now that we are posting to Celestia instead of the local DA, the chain's start command requires three DA configuration flags: + +- `--evnode.da.start_height` +- `--evnode.da.auth_token` +- `--evnode.da.namespace` + +:::tip +Optionally, you could also set the `--evnode.da.block_time` flag. This should be set to the finality time of the DA layer, not its actual block time, as Evolve does not handle reorganization logic. The default value is 15 seconds. +::: + +Let's determine which values to provide for each of them. + +First, let's query the DA layer start height using our light node. 
+ +```bash +DA_BLOCK_HEIGHT=$(celestia header network-head | jq -r '.result.header.height') +echo -e "\n Your DA_BLOCK_HEIGHT is $DA_BLOCK_HEIGHT \n" +``` + +The output of the command above will look similar to this: + +```bash + Your DA_BLOCK_HEIGHT is 2127672 +``` + +Now, let's obtain the authentication token of your light node using the following command: + +::: code-group + +```bash [Arabica Devnet] +AUTH_TOKEN=$(celestia light auth write --p2p.network arabica) +echo -e "\n Your DA AUTH_TOKEN is $AUTH_TOKEN \n" +``` + +```bash [Mocha Testnet] +AUTH_TOKEN=$(celestia light auth write --p2p.network mocha) +echo -e "\n Your DA AUTH_TOKEN is $AUTH_TOKEN \n" +``` + +```bash [Mainnet Beta] +AUTH_TOKEN=$(celestia light auth write) +echo -e "\n Your DA AUTH_TOKEN is $AUTH_TOKEN \n" +``` + +::: + +The output of the command above will look similar to this: + +```bash + Your DA AUTH_TOKEN is eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJwdWJsaWMiLCJyZWFkIiwid3JpdGUiXX0.cSrJjpfUdTNFtzGho69V0D_8kyECn9Mzv8ghJSpKRDE +``` + +Next, let's set up the namespace to be used for posting data on Celestia. Evolve supports separate namespaces for headers and data, but for simplicity, we'll use a single namespace for both: + +```bash +DA_NAMESPACE="fancy_namespace" +``` + +**Advanced Configuration:** For production deployments, you can use separate namespaces for headers and data to optimize syncing: + +- `--evnode.da.header_namespace` for block headers +- `--evnode.da.data_namespace` for transaction data + +The namespace values are automatically encoded by the node to ensure compatibility with Celestia. + +[Learn more about namespaces](https://docs.celestia.org/tutorials/node-tutorial#namespaces). 
+Lastly, set your DA address for your light node, which by default runs at +port 26658: + +```bash +DA_ADDRESS=http://localhost:26658 +``` + +## 🔥 Running your chain connected to Celestia light node + +Finally, let's initiate the chain node with all the flags: + +```bash +gmd start \ + --evnode.node.aggregator \ + --evnode.da.start_height $DA_BLOCK_HEIGHT \ + --evnode.da.auth_token $AUTH_TOKEN \ + --evnode.da.header_namespace $DA_NAMESPACE \ + --evnode.da.data_namespace $DA_NAMESPACE \ + --evnode.da.address $DA_ADDRESS +``` + +Now, the chain is running and posting blocks (aggregated in batches) to Celestia. You can view your chain by using your namespace or account on one of Celestia's block explorers. + +For example, [here on Celenium for Arabica](https://arabica.celenium.io/). + +Other explorers: + +- [Arabica testnet](https://docs.celestia.org/how-to-guides/arabica-devnet) +- [Mocha testnet](https://docs.celestia.org/how-to-guides/mocha-testnet) +- [Mainnet Beta](https://docs.celestia.org/how-to-guides/mainnet) + +## 🎉 Next steps + +Congratulations! You've built a local chain that posts data to Celestia's DA layer. Well done! Now, go forth and build something great! Good luck! diff --git a/docs/guides/da-layers/local-da.md b/docs/guides/da-layers/local-da.md new file mode 100644 index 0000000000..912724c52e --- /dev/null +++ b/docs/guides/da-layers/local-da.md @@ -0,0 +1,56 @@ +# Using Local DA + + + + +## Introduction {#introduction} + +This tutorial serves as a comprehensive guide for using the [local-da](../../../tools/local-da) with your chain. + +Before proceeding, ensure that you have completed the [build a chain](../gm-world.md) tutorial, which covers setting up, building, and running your chain. 
+ +## Setting Up a Local DA Network + +To set up a local DA network node on your machine, run the following command to install the local DA node: + +```bash-vue +go install github.com/evstack/ev-node/tools/local-da@latest +``` + +This installs the `local-da` binary; run `local-da` to start the node, which will then listen on port `7980`. + +## Configuring your chain to connect to the local DA network + +To connect your chain to the local DA network, you need to pass the `--evnode.da.address` flag with the local DA node address. + +## Run your chain + +Start your chain node with the following command, making sure to include the DA address flag: + +::: code-group + +```sh [Quick Start] +testapp start --evnode.da.address http://localhost:7980 +``` + +```sh [gm-world Chain] +testapp start \ + --evnode.node.aggregator \ + --evnode.da.address http://localhost:7980 +``` + +::: + +You should see the following log message indicating that your chain is connected to the local DA network: + +```shell +11:07AM INF NewLocalDA: initialized LocalDA module=local-da +11:07AM INF Listening on host=localhost maxBlobSize=1974272 module=da port=7980 +11:07AM INF server started listening on=localhost:7980 module=da +``` + +## Summary + +By following these steps, you will set up a local DA network node and configure your chain to post data to it. This setup is useful for testing and development in a controlled environment. You can find more information in the [local-da README](../../../tools/local-da/README.md). diff --git a/docs/guides/operations/deployment.md b/docs/guides/operations/deployment.md new file mode 100644 index 0000000000..dcc78ad54e --- /dev/null +++ b/docs/guides/operations/deployment.md @@ -0,0 +1,49 @@ +--- +description: This page provides an overview of some common ways to deploy chains. 
+--- + +# 🚀 Deploying Your Chain + +One of the benefits of building chains with Evolve is the flexibility you have as a developer to choose things like the DA layer, the settlement scheme, and the execution environment. + +You can learn more about Evolve architecture [here](../../learn/specs/overview.md). + +The challenge that comes with this flexibility is that there are more services that now need to be deployed and managed while running your chain. + +In the tutorials so far, you've seen various helper scripts used to make things easier. While great for tutorials, there are better ways to deploy and manage chains than using various bash scripts. + +## 🏗️ Deployment Scales + +Depending on your needs and the stage of your chain development, there are different deployment approaches you can take: + +### 🏠 Local Development + +For development and testing purposes, you can deploy your chain locally using containerized environments. This approach provides: + +- Quick iteration and testing +- No external dependencies +- Full control over the environment +- Cost-effective development + +### 🌐 Testnet Deployment + +When you're ready to test with real network conditions, you can deploy to testnet environments. This includes: + +- Integration with testnet DA networks +- Real network latency and conditions +- Multi-node testing scenarios +- Pre-production validation + +## 📚 Available Deployment Guides + +Choose the deployment approach that matches your current needs: + +- [🌐 Testnet Deployment](./testnet.md) - Deploy on testnet with external DA networks + +:::warning Disclaimer +These examples are for educational purposes only. Before deploying your chain for production use you should fully understand the services you are deploying and your choice in deployment method. 
+::: + +## 🎉 Next Steps + +For production mainnet deployments, consider additional requirements such as monitoring, security audits, infrastructure hardening, and operational procedures that go beyond the scope of these tutorials. diff --git a/docs/guides/operations/monitoring.md b/docs/guides/operations/monitoring.md new file mode 100644 index 0000000000..6e47357703 --- /dev/null +++ b/docs/guides/operations/monitoring.md @@ -0,0 +1,79 @@ +# Evolve Metrics Guide + +## How to configure metrics + +Evolve can report and serve Prometheus metrics, which can be consumed by Prometheus collector(s). + +This functionality is disabled by default. + +To enable Prometheus metrics, set `instrumentation.prometheus=true` in your Evolve node's configuration file. + +Metrics will be served under `/metrics` on port 26660 by default. The listening address can be changed using the `instrumentation.prometheus_listen_addr` configuration option. + +## List of available metrics + +You can find the full list of available metrics in the [Technical Specifications](../learn/specs/block-manager.md#metrics). + +## Viewing Metrics + +Once your Evolve node is running with metrics enabled, you can view the metrics by: + +1. Accessing the metrics endpoint directly: + + ```bash + curl http://localhost:26660/metrics + ``` + +2. Configuring Prometheus to scrape these metrics by adding the following to your `prometheus.yml`: + + ```yaml + scrape_configs: + - job_name: evolve + static_configs: + - targets: ['localhost:26660'] + ``` + +3. Using Grafana with Prometheus as a data source to visualize the metrics. + +## Example Prometheus Configuration + +Here's a basic Prometheus configuration to scrape metrics from an Evolve node: + +```yaml +global: + scrape_interval: 15s + evaluation_interval: 15s + +scrape_configs: + - job_name: evolve + static_configs: + - targets: ['localhost:26660'] +``` + +## Troubleshooting + +If you're not seeing metrics: + +1. 
Ensure metrics are enabled in your configuration with `instrumentation.prometheus=true` +2. Verify the metrics endpoint is accessible: `curl http://localhost:26660/metrics` +3. Check your Prometheus configuration is correctly pointing to your Evolve node +4. Examine the Evolve node logs for any errors related to the metrics server + +## Advanced Configuration + +For more advanced metrics configuration, you can adjust the following settings in your configuration file: + +```yaml +instrumentation: + prometheus: true + prometheus_listen_addr: ":26660" + max_open_connections: 3 + namespace: "evolve" +``` + +These settings allow you to: + +- Enable/disable Prometheus metrics +- Change the listening address for the metrics server +- Limit the maximum number of open connections to the metrics server +- Set a custom namespace for all metrics diff --git a/docs/guides/operations/troubleshooting.md b/docs/guides/operations/troubleshooting.md new file mode 100644 index 0000000000..a3e26be799 --- /dev/null +++ b/docs/guides/operations/troubleshooting.md @@ -0,0 +1,10 @@ +# Troubleshooting + + diff --git a/docs/guides/operations/upgrades.md b/docs/guides/operations/upgrades.md new file mode 100644 index 0000000000..f130f001bc --- /dev/null +++ b/docs/guides/operations/upgrades.md @@ -0,0 +1,9 @@ +# Upgrades + + diff --git a/docs/guides/running-nodes/aggregator.md b/docs/guides/running-nodes/aggregator.md new file mode 100644 index 0000000000..36b436f16c --- /dev/null +++ b/docs/guides/running-nodes/aggregator.md @@ -0,0 +1,12 @@ +# Aggregator Node + + diff --git a/docs/guides/running-nodes/attester.md b/docs/guides/running-nodes/attester.md new file mode 100644 index 0000000000..1e1a234392 --- /dev/null +++ b/docs/guides/running-nodes/attester.md @@ -0,0 +1,9 @@ +# Attester Node + + diff --git a/docs/guides/running-nodes/full-node.md b/docs/guides/running-nodes/full-node.md new file mode 100644 index 0000000000..753985033e --- /dev/null +++ 
b/docs/guides/running-nodes/full-node.md @@ -0,0 +1,104 @@ +# Chain Full Node Setup Guide + +## Introduction + +This guide covers how to set up a full node to run alongside a sequencer node in an Evolve-based blockchain network. A full node maintains a complete copy of the blockchain and helps validate transactions, improving the network's decentralization and security. + +> **Note: The guide on how to run an Evolve EVM full node can be found [in the evm section](./evm/single.md#setting-up-a-full-node).** + +## Prerequisites + +Before proceeding, ensure that you have completed the [build a chain](./gm-world.md) tutorial, which covers setting up, building, and running your chain. + +Ensure that you have: + +- A local Data Availability (DA) network node running on port `7980`. +- An Evolve sequencer node running and posting blocks to the DA network. + +## Setting Up Your Full Node + +### Initialize Chain Config and Copy Genesis File + +Let's set a terminal variable for the chain ID. + +```bash +CHAIN_ID=gm +``` + +Initialize the chain config for the full node (let's call it `FullNode`) and set the chain ID: + +```bash +gmd init FullNode --chain-id $CHAIN_ID --home $HOME/.${CHAIN_ID}_fn +``` + +Copy the genesis file from the sequencer node: + +```bash +cp $HOME/.$CHAIN_ID/config/genesis.json $HOME/.${CHAIN_ID}_fn/config/genesis.json +``` + +### Set Up P2P Connection to Sequencer Node + +Identify the sequencer node's P2P address from its logs. It will look similar to: + +```text +1:55PM INF listening on address=/ip4/127.0.0.1/tcp/7676/p2p/12D3KooWJbD9TQoMSSSUyfhHMmgVY3LqCjxYFz8wQ92Qa6DAqtmh module=p2p +``` + +Create an environment variable with the P2P address: + +```bash +export P2P_ID="12D3KooWJbD9TQoMSSSUyfhHMmgVY3LqCjxYFz8wQ92Qa6DAqtmh" +``` + +### Start the Full Node + +We are now ready to run our full node. If we are running the full node on the same machine as the sequencer, we need to make sure we update the ports to avoid conflicts. 
+ +Make sure to include these flags with your start command: + +```sh + --rpc.laddr tcp://127.0.0.1:46657 \ + --grpc.address 127.0.0.1:9390 \ + --p2p.laddr "0.0.0.0:46656" \ + --api.address tcp://localhost:1318 +``` + +Run your full node with the following command: + +```bash +gmd start \ + --evnode.da.address http://127.0.0.1:7980 \ + --p2p.seeds $P2P_ID@127.0.0.1:7676 \ + --minimum-gas-prices 0stake \ + --rpc.laddr tcp://127.0.0.1:46657 \ + --grpc.address 127.0.0.1:9390 \ + --p2p.laddr "0.0.0.0:46656" \ + --api.address tcp://localhost:1318 \ + --home $HOME/.${CHAIN_ID}_fn +``` + +Key points about this command: + +- `chain_id` is generally the `$CHAIN_ID`, which is `gm` in this case. +- The ports and addresses are different from the sequencer node to avoid conflicts. Not everything may be necessary for your setup. +- We use the `P2P_ID` environment variable to set the seed node. + +## Verifying Full Node Operation + +After starting your full node, you should see output similar to: + +``` bash +2:33PM DBG indexed transactions height=1 module=txindex num_txs=0 +2:33PM INF block marked as DA included blockHash=7897885B959F52BF0D772E35F8DA638CF8BBC361C819C3FD3E61DCEF5034D1CC blockHeight=5532 module=BlockManager +``` + +This output indicates that your full node is successfully connecting to the network and processing blocks. + +:::tip +If your chain uses EVM as an execution layer and you see an error like `datadir already used by another process`, it means you have to remove all the state from chain data directory (`/root/.yourchain_fn/data/`) and specify a different data directory for the EVM client. +::: + +## Conclusion + +You've now set up a full node running alongside your Evolve sequencer. 
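To sanity-check the setup, you can query the full node's RPC on the custom port configured above. This assumes a CometBFT-compatible `/status` endpoint, which Cosmos SDK chains expose by default, and that `jq` is installed:

```bash
# Latest block height seen by the full node (RPC was moved to port 46657 above)
curl -s http://127.0.0.1:46657/status | jq -r '.result.sync_info.latest_block_height'
```

If the reported height keeps increasing, the full node is following the sequencer.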
diff --git a/docs/guides/running-nodes/light-node.md b/docs/guides/running-nodes/light-node.md new file mode 100644 index 0000000000..bfcfac5fbf --- /dev/null +++ b/docs/guides/running-nodes/light-node.md @@ -0,0 +1,9 @@ +# Light Node + + diff --git a/docs/guides/tools/blob-decoder.md b/docs/guides/tools/blob-decoder.md new file mode 100644 index 0000000000..8879f39fa4 --- /dev/null +++ b/docs/guides/tools/blob-decoder.md @@ -0,0 +1,158 @@ +# Blob Decoder Tool + +The blob decoder is a utility tool for decoding and inspecting blobs from data availability (DA) layers such as Celestia. It provides both a web interface and an API for decoding blob data into a human-readable format. + +## Overview + +The blob decoder helps developers and operators inspect the contents of blobs submitted to DA layers. It can decode: + +- Raw blob data (hex or base64 encoded) +- Block data structures +- Transaction payloads +- Protobuf-encoded messages + +## Usage + +### Starting the Server + +```bash +# Run with default port (8080) +go run tools/blob-decoder/main.go +``` + +The server will start and display: + +- Web interface URL: `http://localhost:8080` +- API endpoint: `http://localhost:8080/api/decode` + +### Web Interface + +1. Open your browser to `http://localhost:8080` +2. Paste your blob data in the input field +3. Select the encoding format (hex or base64) +4. 
Click "Decode" to see the parsed output + +### API Usage + +The decoder provides a REST API for programmatic access: + +```bash +# Decode hex-encoded blob +curl -X POST http://localhost:8080/api/decode \ + -H "Content-Type: application/json" \ + -d '{ + "data": "0x1234abcd...", + "encoding": "hex" + }' + +# Decode base64-encoded blob +curl -X POST http://localhost:8080/api/decode \ + -H "Content-Type: application/json" \ + -d '{ + "data": "SGVsbG8gV29ybGQ=", + "encoding": "base64" + }' +``` + +#### API Request Format + +```json +{ + "data": "string", // The encoded blob data + "encoding": "string" // Either "hex" or "base64" +} +``` + +#### API Response Format + +```json +{ + "success": true, + "decoded": { + // Decoded data structure + }, + "error": "string" // Only present if success is false +} +``` + +## Supported Data Types + +### Block Data + +The decoder can parse ev-node block structures: + +- Block height +- Timestamp +- Parent hash +- Transaction list +- Validator information +- Data commitments + +### Transaction Data + +Decodes individual transactions including: + +- Transaction type +- Sender/receiver addresses +- Value/amount +- Gas parameters +- Payload data + +### Protobuf Messages + +Automatically detects and decodes protobuf-encoded messages used in ev-node: + +- Block headers +- Transaction batches +- State updates +- DA commitments + +## Examples + +### Decoding a Block Blob + +```bash +# Example block blob (hex encoded) +curl -X POST http://localhost:8080/api/decode \ + -H "Content-Type: application/json" \ + -d '{ + "data": "0a2408011220...", + "encoding": "hex" + }' +``` + +Response: + +```json +{ + "success": true, + "decoded": { + "height": 100, + "timestamp": "2024-01-15T10:30:00Z", + "parentHash": "0xabc123...", + "transactions": [ + { + "type": "transfer", + "from": "0x123...", + "to": "0x456...", + "value": "1000000000000000000" + } + ] + } +} +``` + +### Decoding DA Commitment + +```bash +curl -X POST http://localhost:8080/api/decode \ 
+ -H "Content-Type: application/json" \ + -d '{ + "data": "eyJjb21taXRtZW50IjogIi4uLiJ9", + "encoding": "base64" + }' +``` + +### Celestia + +For Celestia blobs, you can decode namespace data and payment information from [celenium](https://celenium.io/namespaces). diff --git a/docs/guides/tools/visualizer.md b/docs/guides/tools/visualizer.md new file mode 100644 index 0000000000..55ebc99801 --- /dev/null +++ b/docs/guides/tools/visualizer.md @@ -0,0 +1,240 @@ +# DA Visualizer + +The Data Availability (DA) Visualizer is a built-in monitoring tool in Evolve that provides real-time insights into blob submissions to the DA layer. It offers a web-based interface for tracking submission statistics, monitoring DA layer health, and analyzing blob details. + +**Note**: Only aggregator nodes submit data to the DA layer. Non-aggregator nodes will not display submission data. + +## Overview + +The DA Visualizer provides: + +- Real-time monitoring of blob submissions (last 100 submissions) +- Success/failure statistics and trends +- Gas price tracking and cost analysis +- DA layer health monitoring +- Detailed blob inspection capabilities +- Recent submission history + +## Enabling the DA Visualizer + +The DA Visualizer is disabled by default. To enable it, use the following configuration: + +### Via Command-line Flag + +```bash +testapp start --rollkit.rpc.enable_da_visualization +``` + +### Via Configuration File + +Add the following to your `evnode.yml` configuration file: + +```yaml +rpc: + enable_da_visualization: true +``` + +## Accessing the DA Visualizer + +Once enabled, the DA Visualizer is accessible through your node's RPC server. By default, this is: + +``` +http://localhost:7331/da +``` + +The visualizer provides several API endpoints and a web interface: + +### Web Interface + +Navigate to `http://localhost:7331/da` in your web browser to access the interactive dashboard. 
+ +### API Endpoints + +The following REST API endpoints are available for programmatic access: + +#### Get Recent Submissions + +```bash +GET /da/submissions +``` + +Returns the most recent blob submissions (up to 100 kept in memory). + +#### Get Blob Details + +```bash +GET /da/blob?id={blob_id} +``` + +Returns detailed information about a specific blob submission. + +#### Get DA Statistics + +```bash +GET /da/stats +``` + +Returns aggregated statistics including: + +- Total submissions count +- Success/failure rates +- Average gas price +- Total gas spent +- Average blob size +- Submission trends + +#### Get DA Health Status + +```bash +GET /da/health +``` + +Returns the current health status of the DA layer including: + +- Connection status +- Recent error rates +- Performance metrics +- Last successful submission timestamp + +## Features + +### Real-time Monitoring + +The dashboard automatically updates every 30 seconds, displaying: + +- Recent submission feed with status indicators (last 100 submissions) +- Success rate percentage +- Current gas price trends +- Submission history + +### Submission Details + +Each submission entry shows: + +- Timestamp +- Blob ID with link to detailed view +- Number of blobs in the batch +- Submission status (success/failure) +- Gas price used +- Error messages (if any) + +### Statistics Dashboard + +The statistics section provides: + +- **Performance Metrics**: Success rate, average submission time +- **Cost Analysis**: Total gas spent, average gas price over time +- **Volume Metrics**: Total blobs submitted, average blob size +- **Trend Analysis**: Hourly and daily submission patterns + +### Health Monitoring + +The health status indicator shows: + +- 🟢 **Healthy**: DA layer responding normally +- 🟡 **Warning**: Some failures but overall functional +- 🔴 **Critical**: High failure rate or connection issues + +## Use Cases + +### For Node Operators + +- Monitor the reliability of DA submissions +- Track gas costs and optimize 
gas price settings +- Identify patterns in submission failures +- Ensure DA layer connectivity + +### For Developers + +- Debug DA submission issues +- Analyze blob data structure +- Monitor application-specific submission patterns +- Test DA layer integration + +### For Network Monitoring + +- Track overall network DA usage +- Identify congestion periods +- Monitor gas price fluctuations +- Analyze submission patterns across the network + +## Configuration Options + +When enabling the DA Visualizer, you may want to adjust related RPC settings: + +```yaml +rpc: + address: "0.0.0.0:7331" # Bind to all interfaces for remote access + enable_da_visualization: true +``` + +**Security Note**: If binding to all interfaces (`0.0.0.0`), ensure proper firewall rules are in place to restrict access to trusted sources only. + +## Troubleshooting + +### Visualizer Not Accessible + +1. Verify the DA Visualizer is enabled: + - Check your configuration file or ensure the flag is set + - Look for log entries confirming "DA visualization endpoints registered" + +2. Check the RPC server is running: + - Verify the RPC address in logs + - Ensure no port conflicts + +3. For remote access: + - Ensure the RPC server is bound to an accessible interface + - Check firewall settings + +### No Data Displayed + +1. Verify your node is in aggregator mode (only aggregators submit to DA) +2. Check DA layer connectivity in the node logs +3. Ensure transactions are being processed +4. Note that the visualizer only keeps the last 100 submissions in memory + +### API Errors + +- **404 Not Found**: DA Visualizer not enabled +- **500 Internal Server Error**: Check node logs for DA connection issues +- **Empty responses**: No submissions have been made yet + +## Example Usage + +### Using curl to access the API + +```bash +# Get recent submissions (returns up to 100) +curl http://localhost:7331/da/submissions + +# Get specific blob details +curl http://localhost:7331/da/blob?id=abc123... 
+ +# Get statistics +curl http://localhost:7331/da/stats + +# Check DA health +curl http://localhost:7331/da/health +``` + +### Monitoring with scripts + +```bash +#!/bin/bash +# Simple monitoring script + +while true; do + health=$(curl -s http://localhost:7331/da/health | jq -r '.status') + if [ "$health" != "healthy" ]; then + echo "DA layer issue detected: $health" + # Send alert... + fi + sleep 30 +done +``` + +## Related Configuration + +For complete DA layer configuration options, see the [Config Reference](../../learn/config.md#data-availability-configuration-da). + +For metrics and monitoring setup, see the [Metrics Guide](../metrics.md). diff --git a/docs/overview/architecture.md b/docs/overview/architecture.md new file mode 100644 index 0000000000..0681a5cbec --- /dev/null +++ b/docs/overview/architecture.md @@ -0,0 +1,184 @@ +# Architecture + +Evolve uses a modular architecture where each component has a well-defined interface and can be swapped independently. This document provides an overview of how the pieces fit together. 
+ +## System Overview + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ Client Apps │ +│ (wallets, dapps, indexers) │ +└─────────────────────────────┬───────────────────────────────────┘ + │ JSON-RPC / gRPC +┌─────────────────────────────▼───────────────────────────────────┐ +│ ev-node │ +│ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌──────────────┐ │ +│ │ Block │ │ Sequencer │ │ P2P │ │ Sync │ │ +│ │ Components│ │ │ │ Network │ │ Services │ │ +│ └─────┬─────┘ └─────┬─────┘ └─────┬─────┘ └───────┬──────┘ │ +└────────┼──────────────┼──────────────┼────────────────┼─────────┘ + │ │ │ │ + │ Executor │ Sequencer │ libp2p │ DA Client + ▼ ▼ ▼ ▼ +┌────────────────┐ ┌──────────┐ ┌─────────────────────────────────┐ +│ Executor │ │Sequencer │ │ DA Layer │ +│ (ev-reth or │ │(single, │ │ (Celestia) │ +│ ev-abci) │ │ based) │ │ │ +└────────────────┘ └──────────┘ └─────────────────────────────────┘ +``` + +## Core Design Principles + +1. **Zero-dependency core** — The `core/` package contains only interfaces with no external dependencies. This keeps the API stable and allows any implementation. + +2. **Modular components** — Executor, Sequencer, and DA layer are all pluggable. Swap them without changing ev-node. + +3. **Separation of concerns** — Block production, syncing, and DA submission run as independent components that communicate through well-defined channels. + +4. **Two operating modes** — Nodes run as either an Aggregator (produces blocks) or Sync-only (follows chain). + +## Block Components + +The block package is the heart of ev-node. 
It's organized into specialized components: + +| Component | Responsibility | Runs On | +|-----------|---------------|---------| +| **Executor** | Produces blocks by getting batches from sequencer and executing via execution layer | Aggregator only | +| **Reaper** | Scrapes transactions from execution layer mempool and submits to sequencer | Aggregator only | +| **Syncer** | Coordinates block sync from DA layer and P2P network | All nodes | +| **Submitter** | Submits blocks to DA layer and tracks inclusion | Aggregator only | +| **Cache** | Manages in-memory state for headers, data, and pending submissions | All nodes | + +### Component Interaction + +``` + ┌─────────────┐ + │ Reaper │ + │ (tx scrape)│ + └──────┬──────┘ + │ Submit batch + ▼ +┌─────────────┐ ┌─────────────┐ ┌─────────────┐ +│ Sequencer │◄───│ Executor │───►│ Broadcaster │ +│ │ │(block prod) │ │ (P2P) │ +└─────────────┘ └──────┬──────┘ └─────────────┘ + │ + │ Queue for submission + ▼ + ┌─────────────┐ + │ Submitter │───► DA Layer + │ │ + └──────┬──────┘ + │ + │ Track inclusion + ▼ + ┌─────────────┐ + │ Cache │ + └─────────────┘ +``` + +## Node Types + +Evolve supports several node configurations: + +| Type | Block Production | Full Validation | DA Submission | Use Case | +|------|-----------------|-----------------|---------------|----------| +| **Aggregator** | Yes | Yes | Yes | Block producer (sequencer) | +| **Full Node** | No | Yes | No | RPC provider, validator | +| **Light Node** | No | Headers only | No | Mobile, embedded clients | +| **Attester** | No | Yes | No | Soft consensus participant | + +### Aggregator + +The aggregator (also called sequencer node) produces blocks: + +1. Reaper collects transactions from execution layer +2. Executor gets ordered batch from sequencer +3. Executor calls execution layer to process transactions +4. Executor creates and signs block (header + data) +5. Broadcaster gossips block to P2P network +6. 
Submitter queues block for DA submission + +### Full Node + +Full nodes sync and validate without producing blocks: + +1. Syncer receives blocks from DA layer and/or P2P +2. Validates header signatures and data hashes +3. Executes transactions via execution layer +4. Verifies resulting state root matches header +5. Persists validated blocks to local store + +## Data Flow + +### Block Production (Aggregator) + +``` +User Tx → Execution Layer Mempool + │ + ▼ + Reaper scrapes txs + │ + ▼ + Sequencer orders batch + │ + ▼ + Executor.ExecuteTxs() + │ + ├──► SignedHeader + Data + │ + ├──► P2P Broadcast (soft confirmation) + │ + └──► Submitter Queue + │ + ▼ + DA Layer (hard confirmation) +``` + +### Block Sync (Non-Aggregator) + +``` +┌────────────────────────────────────────┐ +│ Syncer │ +├────────────┬────────────┬──────────────┤ +│ DA Worker │ P2P Worker │Forced Incl. │ +│ │ │ Worker │ +└─────┬──────┴─────┬──────┴───────┬──────┘ + │ │ │ + └────────────┴──────────────┘ + │ + ▼ + processHeightEvent() + │ + ▼ + Validate → Execute → Persist +``` + +## P2P Network + +Built on libp2p with: + +- **GossipSub** for transaction and block propagation +- **Kademlia DHT** for peer discovery +- **Topics**: `{chainID}-tx`, `{chainID}-header`, `{chainID}-data` + +Nodes discover peers through: +1. Bootstrap/seed nodes +2. DHT peer exchange +3. 
PEX (peer exchange protocol) + +## Storage + +ev-node uses a key-value store (badger) for: + +- **Headers** — Indexed by height and hash +- **Data** — Transaction lists indexed by height +- **State** — Last committed height, app hash, DA height +- **Pending** — Blocks awaiting DA inclusion + +## Further Reading + +- [Block Lifecycle](/concepts/block-lifecycle) — Detailed block processing flow +- [Sequencing](/concepts/sequencing) — How transaction ordering works +- [Data Availability](/concepts/data-availability) — DA layer integration +- [Executor Interface](/reference/interfaces/executor) — Full interface reference diff --git a/docs/overview/execution-environments.md b/docs/overview/execution-environments.md new file mode 100644 index 0000000000..c2c76503e8 --- /dev/null +++ b/docs/overview/execution-environments.md @@ -0,0 +1,31 @@ +# Execution Layers in Evolve + +Evolve is designed to be modular and flexible, allowing different execution layers to be plugged in. Evolve defines a general-purpose execution interface ([see execution.go](https://github.com/evstack/ev-node/blob/main/core/execution/execution.go)) that enables developers to integrate any compatible application as the chain's execution layer. + +This means you can use a variety of Cosmos SDK or Reth compatible applications as the execution environment for your chain: choose the execution environment that best fits your use case. + +## Supported Execution Layers + +### Cosmos SDK Execution Layer + +Evolve natively supports Cosmos SDK-based applications as the execution layer for a chain via the ABCI (Application Blockchain Interface) protocol. The Cosmos SDK provides a rich set of modules for staking, governance, IBC, and more, and is widely used in the Cosmos ecosystem. This integration allows developers to leverage the full power and flexibility of the Cosmos SDK when building their chain applications. 
+ +- [Cosmos SDK Documentation](https://docs.cosmos.network/) +- [Cosmos SDK ABCI Documentation](https://docs.cosmos.network/main/build/abci/introduction) +- [Evolve ABCI Adapter](https://github.com/evstack/ev-abci) + +### Reth + +Reth is a high-performance Ethereum execution client written in Rust. Evolve can integrate Reth as an execution layer, enabling Ethereum-compatible chains to process EVM transactions and maintain Ethereum-like state. This allows developers to build chains that leverage the Ethereum ecosystem, tooling, and smart contracts, while benefiting from Evolve's modular consensus and data availability. + +For more information about Reth, see the official documentation: + +- [Reth GitHub Repository](https://github.com/paradigmxyz/reth) +- [Evolve Reth Integration](https://github.com/evstack/ev-reth) + +## How It Works + +- Evolve acts as the consensus layer and uses Celestia as its data availability layer. +- The execution layer (Cosmos SDK app or Reth) processes transactions and maintains application state. + +For more details on integrating an execution layer with Evolve, see the respective documentation links above. diff --git a/docs/overview/what-is-evolve.md b/docs/overview/what-is-evolve.md new file mode 100644 index 0000000000..1f49b1d6fc --- /dev/null +++ b/docs/overview/what-is-evolve.md @@ -0,0 +1,95 @@ +# Introduction + +Evolve is the fastest way to launch your own modular network — without validator overhead or token lock-in. + +Built on Celestia, Evolve offers L1-level control with L2-level performance. + +This isn't a toolkit. It's a launch stack. + +No fees. No middlemen. No revenue share. + +## What is Evolve + +Evolve is a launch stack for L1s. It gives you full control over execution — without CometBFT, validator ops, or lock-in. + +It's [open-source](https://github.com/evstack/ev-node), production-ready, and fully composable.
+ +At its core is `ev-node`, a modular node that exposes an [Execution interface](https://github.com/evstack/ev-node/blob/main/core/execution/execution.go) — letting you bring any VM or execution logic, including the Cosmos SDK or custom-built runtimes. + +Evolving from the Cosmos SDK? + +Migrate without rewriting your stack. Bring your logic and state to Evolve and shed validator overhead — all while gaining performance and execution freedom. + +Evolve is how you launch your network. Modular. Production-ready. Yours. + +With Evolve, you get: + +- Full control over execution — use any VM +- Low-cost launch — no emissions, no validator inflation +- Speed to traction — from local devnet to testnet in minutes +- Keep sequencer revenue — monetize directly +- Optional L1 validator network for fast finality and staking + +Powered by Celestia — toward 1GB blocks, multi-VM freedom, and execution without compromising flexibility or cost. + +## What problems is Evolve solving + +### 1. Scalability and customizability + +Deploying your decentralized application as a smart contract on a shared blockchain has many limitations. Your smart contract has to share computational resources with every other application, so scalability is limited. + +Plus, you're restricted to the execution environment that the shared blockchain uses, so developer flexibility is limited as well. + +### 2. Security and time to market + +Deploying a new chain might sound like the perfect solution for the problems listed above. While that's partly true, deploying a new layer 1 chain presents a complex set of challenges and trade-offs for developers looking to build blockchain products. + +Deploying a legacy layer 1 has huge barriers to entry: time, capital, token emissions, and expertise. + +In order to secure the network, developers must bootstrap a sufficiently secure set of validators, incurring the overhead of managing a full consensus network.
This requires paying validators with inflationary tokens, putting the network's business sustainability at risk. Network effects are also critical for success, but can be challenging to achieve, as the network must gain widespread adoption to be secure and valuable. + +In a potential future with millions of chains, it's unlikely all of those chains will be able to sustainably attract a sufficiently secure and decentralized validator set. + +## Why Evolve + +Evolve addresses the challenges of deploying either a smart contract or a new layer 1 by minimizing these trade-offs through Evolve chains. + +With Evolve, developers can benefit from: + +- **Shared security:** Chains inherit security from a data availability layer by posting blocks to it. Chains reduce the trust assumptions placed on chain sequencers by allowing full nodes to download and verify the transactions in the blocks posted by the sequencer. For optimistic or zk-chains, in case of fraudulent blocks, full nodes can generate fraud or zk-proofs, which they can share with the rest of the network, including light nodes. Our roadmap includes the ability for light clients to receive and verify proofs, so that everyday users can enjoy high security guarantees. + +- **Scalability:** Evolve chains are deployed on specialized data availability layers like Celestia, allowing them to leverage the scalability of the DA layer directly. Additionally, chain transactions are executed off-chain rather than on the data availability layer. This means chains have their own dedicated computational resources, rather than sharing computational resources with other applications. + +- **Customizability:** Evolve is built as an open-source modular framework to make it easier for developers to reuse the four main components and customize their chains. These components are data availability layers, execution environments, proof systems, and sequencer schemes.
+ +- **Faster time-to-market:** Evolve eliminates the need to bootstrap a validator set, manage a consensus network, incur high economic costs, and face other trade-offs that come with deploying a legacy layer 1. Evolve's goal is to make deploying a chain as easy as it is to deploy a smart contract, cutting the time it takes to bring blockchain products to market from months (or even years) to just minutes. + +- **Sovereignty:** Evolve also enables developers to deploy chains for cases where communities require sovereignty. + +## How can you use Evolve + +As briefly mentioned above, Evolve can be used in many different ways: for chains, for settlement layers, and in the future even for L3s. + +### Chain with any VM + +Evolve gives developers the flexibility to use pre-existing ABCI-compatible state machines or create a custom state machine tailored to their chain needs. Evolve does not restrict the use of any specific virtual machine, allowing developers to experiment and bring innovative applications to life. + +### Cosmos SDK + +Just as developers use the Cosmos SDK to build a layer 1 chain, they can use it to create an Evolve-compatible chain. The Cosmos SDK has great [documentation](https://docs.cosmos.network/main) and tooling that developers can leverage to learn. + +Another possibility is taking an existing layer 1 built with the Cosmos SDK and deploying it as an Evolve chain. Evolve gives your network a forward path. Migrate seamlessly, keep your logic, and evolve into a modular, high-performance system without CometBFT bottlenecks and with zero validator overhead. + +### Build a settlement layer + +[Settlement layers](https://celestia.org/learn/modular-settlement-layers/settlement-in-the-modular-stack/) are ideal for developers who want to avoid deploying chains. They provide a platform for chains to verify proofs and resolve disputes.
Additionally, they act as a hub for chains to facilitate trust-minimized token transfers and liquidity sharing between chains that share the same settlement layer. Think of settlement layers as a special type of execution layer. + +## When can you use Evolve + +As of today, Evolve provides a single sequencer, an execution interface (Engine API or ABCI), and a connection to Celestia. + +We're currently working on implementing many new and exciting features such as light nodes and state fraud proofs. + +Head down to the next section to learn more about what's coming for Evolve. If you're ready to start building, you can skip to the [Guides](../guides/quick-start.md) section. + +Spoiler alert: whichever you choose, it's going to be a great rabbit hole! diff --git a/docs/reference/api/abci-rpc.md b/docs/reference/api/abci-rpc.md new file mode 100644 index 0000000000..2a2aa22ad1 --- /dev/null +++ b/docs/reference/api/abci-rpc.md @@ -0,0 +1,9 @@ +# ABCI RPC Reference + + diff --git a/docs/reference/api/engine-api.md b/docs/reference/api/engine-api.md new file mode 100644 index 0000000000..6aab0b6c77 --- /dev/null +++ b/docs/reference/api/engine-api.md @@ -0,0 +1,10 @@ +# Engine API Reference + + diff --git a/docs/reference/api/rpc-endpoints.md b/docs/reference/api/rpc-endpoints.md new file mode 100644 index 0000000000..dc360c0e88 --- /dev/null +++ b/docs/reference/api/rpc-endpoints.md @@ -0,0 +1,10 @@ +# RPC Endpoints + + diff --git a/docs/reference/configuration/ev-abci-flags.md b/docs/reference/configuration/ev-abci-flags.md new file mode 100644 index 0000000000..4c7aaddaf6 --- /dev/null +++ b/docs/reference/configuration/ev-abci-flags.md @@ -0,0 +1,8 @@ +# ev-abci Flags + + diff --git a/docs/reference/configuration/ev-node-config.md b/docs/reference/configuration/ev-node-config.md new file mode 100644 index 0000000000..ba900a1630 --- /dev/null +++ b/docs/reference/configuration/ev-node-config.md @@ -0,0 +1,999 @@ +# Config + +This document provides a comprehensive
reference for all configuration options available in Evolve. Understanding these configurations will help you tailor Evolve's behavior to your specific needs, whether you're running an aggregator, a full node, or a light client. + +## Table of Contents + +- [DA-Only Sync Mode](#da-only-sync-mode) +- [Introduction to Configurations](#configs) +- [Base Configuration](#base-configuration) + - [Root Directory](#root-directory) + - [Database Path](#database-path) + - [Chain ID](#chain-id) +- [Node Configuration (`node`)](#node-configuration-node) + - [Aggregator Mode](#aggregator-mode) + - [Light Client Mode](#light-client-mode) + - [Block Time](#block-time) + - [Maximum Pending Blocks](#maximum-pending-blocks) + - [Lazy Mode (Lazy Aggregator)](#lazy-mode-lazy-aggregator) + - [Lazy Block Interval](#lazy-block-interval) +- [Data Availability Configuration (`da`)](#data-availability-configuration-da) + - [DA Service Address](#da-service-address) + - [DA Authentication Token](#da-authentication-token) + - [DA Gas Price](#da-gas-price) + - [DA Gas Multiplier](#da-gas-multiplier) + - [DA Submit Options](#da-submit-options) + - [DA Signing Addresses](#da-signing-addresses) + - [DA Namespace](#da-namespace) + - [DA Header Namespace](#da-namespace) + - [DA Data Namespace](#da-data-namespace) + - [DA Block Time](#da-block-time) + - [DA Mempool TTL](#da-mempool-ttl) + - [DA Request Timeout](#da-request-timeout) + - [DA Batching Strategy](#da-batching-strategy) + - [DA Batch Size Threshold](#da-batch-size-threshold) + - [DA Batch Max Delay](#da-batch-max-delay) + - [DA Batch Min Items](#da-batch-min-items) +- [P2P Configuration (`p2p`)](#p2p-configuration-p2p) + - [P2P Listen Address](#p2p-listen-address) + - [P2P Peers](#p2p-peers) + - [P2P Blocked Peers](#p2p-blocked-peers) + - [P2P Allowed Peers](#p2p-allowed-peers) +- [RPC Configuration (`rpc`)](#rpc-configuration-rpc) + - [RPC Server Address](#rpc-server-address) + - [Enable DA Visualization](#enable-da-visualization) + - 
[Health Endpoints](#health-endpoints) +- [Instrumentation Configuration (`instrumentation`)](#instrumentation-configuration-instrumentation) + - [Enable Prometheus Metrics](#enable-prometheus-metrics) + - [Prometheus Listen Address](#prometheus-listen-address) + - [Maximum Open Connections](#maximum-open-connections) + - [Enable Pprof Profiling](#enable-pprof-profiling) + - [Pprof Listen Address](#pprof-listen-address) +- [Logging Configuration (`log`)](#logging-configuration-log) + - [Log Level](#log-level) + - [Log Format](#log-format) + - [Log Trace (Stack Traces)](#log-trace-stack-traces) +- [Signer Configuration (`signer`)](#signer-configuration-signer) + - [Signer Type](#signer-type) + - [Signer Path](#signer-path) + - [Signer Passphrase](#signer-passphrase) + +## DA-Only Sync Mode + +Evolve supports running nodes that sync exclusively from the Data Availability (DA) layer without participating in P2P networking. This mode is useful for: + +- **Pure DA followers**: Nodes that only need the canonical chain data from DA +- **Resource optimization**: Reducing network overhead by avoiding P2P gossip +- **Simplified deployment**: No need to configure or maintain P2P peer connections +- **Isolated environments**: Nodes that should not participate in P2P communication + +**To enable DA-only sync mode:** + +1. **Leave P2P peers empty** (default behavior): + + ```yaml + p2p: + peers: "" # Empty or omit this field entirely + ``` + +2. **Configure DA connection** (required): + + ```yaml + da: + address: "your-da-service:port" + namespace: "your-namespace" + # ... other DA configuration + ``` + +3. **Optional**: You can still configure P2P listen address for potential future connections, but without peers, no P2P networking will occur. 
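Putting the steps above together, a minimal `evnode.yml` for a DA-only follower might look like the following sketch; the address, token, and namespace values are placeholders for your own deployment:

```yaml
chain_id: "my-evolve-chain"
p2p:
  peers: "" # empty: no P2P networking, DA-only sync
da:
  address: "localhost:26659"        # your DA service endpoint
  auth_token: "YOUR_DA_AUTH_TOKEN"  # if the DA service requires it
  namespace: "MY_UNIQUE_NAMESPACE_ID"
```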
+ +When running in DA-only mode, the node will: + +- ✅ Sync blocks and headers from the DA layer +- ✅ Validate transactions and maintain state +- ✅ Serve RPC requests +- ❌ Not participate in P2P gossip or peer discovery +- ❌ Not share blocks with other nodes via P2P +- ❌ Not receive transactions via P2P (only from direct RPC submission) + +## Configs + +Evolve configurations can be managed through a YAML file (typically `evnode.yml` located in `~/.evolve/config/` or `/config/`) and command-line flags. The system prioritizes configurations in the following order (highest priority first): + +1. **Command-line flags:** Override all other settings. +2. **YAML configuration file:** Values specified in the `evnode.yml` file. +3. **Default values:** Predefined defaults within Evolve. + +Environment variables can also be used, typically prefixed with your executable's name (e.g., `YOURAPP_CHAIN_ID="my-chain"`). + +## Base Configuration + +These are fundamental settings for your Evolve node. + +### Root Directory + +**Description:** +The root directory where Evolve stores its data, including the database and configuration files. This is a foundational setting that dictates where all other file paths are resolved from. + +**YAML:** +This option is not set within the YAML configuration file itself, as it specifies the location _of_ the configuration file and other application data. + +**Command-line Flag:** +`--home <directory>` +_Example:_ `--home /mnt/data/evolve_node` +_Default:_ `~/.evolve` (or a directory derived from the application name if `defaultHome` is customized). +_Constant:_ `FlagRootDir` + +### Database Path + +**Description:** +The path, relative to the Root Directory, where the Evolve database will be stored. This database contains blockchain state, blocks, and other critical node data.
+ +**YAML:** +Set this in your configuration file at the top level: + +```yaml +db_path: "data" +``` + +**Command-line Flag:** +`--rollkit.db_path <path>` +_Example:_ `--rollkit.db_path "node_db"` +_Default:_ `"data"` +_Constant:_ `FlagDBPath` + +### Chain ID + +**Description:** +The unique identifier for your chain. This ID is used to differentiate your network from others and is crucial for network communication and transaction validation. + +**YAML:** +Set this in your configuration file at the top level: + +```yaml +chain_id: "my-evolve-chain" +``` + +**Command-line Flag:** +`--chain_id <chain-id>` +_Example:_ `--chain_id "super_rollup_testnet_v1"` +_Default:_ `"evolve"` +_Constant:_ `FlagChainID` + +## Node Configuration (`node`) + +Settings related to the core behavior of the Evolve node, including its mode of operation and block production parameters. + +**YAML Section:** + +```yaml +node: + # ... node configurations ... +``` + +### Aggregator Mode + +**Description:** +If true, the node runs in aggregator mode. Aggregators are responsible for producing blocks by collecting transactions, ordering them, and proposing them to the network. + +**YAML:** + +```yaml +node: + aggregator: true +``` + +**Command-line Flag:** +`--rollkit.node.aggregator` (boolean, presence enables it) +_Example:_ `--rollkit.node.aggregator` +_Default:_ `false` +_Constant:_ `FlagAggregator` + +### Light Client Mode + +**Description:** +If true, the node runs in light client mode. Light clients rely on full nodes for block headers and state information, offering a lightweight way to interact with the chain without storing all data. + +**YAML:** + +```yaml +node: + light: true +``` + +**Command-line Flag:** +`--rollkit.node.light` (boolean, presence enables it) +_Example:_ `--rollkit.node.light` +_Default:_ `false` +_Constant:_ `FlagLight` + +### Block Time + +**Description:** +The target time interval between consecutive blocks produced by an aggregator.
This duration (e.g., "500ms", "1s", "5s") dictates the pace of block production. + +**YAML:** + +```yaml +node: + block_time: "1s" +``` + +**Command-line Flag:** +`--rollkit.node.block_time <duration>` +_Example:_ `--rollkit.node.block_time 2s` +_Default:_ `"1s"` +_Constant:_ `FlagBlockTime` + +### Maximum Pending Blocks + +**Description:** +The maximum number of blocks that can be pending Data Availability (DA) submission. When this limit is reached, the aggregator pauses block production until some blocks are confirmed on the DA layer. Use 0 for no limit. This helps manage resource usage and DA layer capacity. + +**YAML:** + +```yaml +node: + max_pending_blocks: 100 +``` + +**Command-line Flag:** +`--rollkit.node.max_pending_blocks <number>` +_Example:_ `--rollkit.node.max_pending_blocks 50` +_Default:_ `0` (no limit) +_Constant:_ `FlagMaxPendingBlocks` + +### Lazy Mode (Lazy Aggregator) + +**Description:** +Enables lazy aggregation mode. In this mode, blocks are produced only when new transactions are available in the mempool or after the `lazy_block_interval` has passed. This optimizes resource usage by avoiding the creation of empty blocks during periods of inactivity. + +**YAML:** + +```yaml +node: + lazy_mode: true +``` + +**Command-line Flag:** +`--rollkit.node.lazy_mode` (boolean, presence enables it) +_Example:_ `--rollkit.node.lazy_mode` +_Default:_ `false` +_Constant:_ `FlagLazyAggregator` + +### Lazy Block Interval + +**Description:** +The maximum time interval between blocks when running in lazy aggregation mode (`lazy_mode`). This ensures that blocks are produced periodically even if there are no new transactions, keeping the chain active. This value is generally larger than `block_time`.
+ +**YAML:** + +```yaml +node: + lazy_block_interval: "30s" +``` + +**Command-line Flag:** +`--rollkit.node.lazy_block_interval <duration>` +_Example:_ `--rollkit.node.lazy_block_interval 1m` +_Default:_ `"30s"` +_Constant:_ `FlagLazyBlockTime` + +## Data Availability Configuration (`da`) + +Parameters for connecting and interacting with the Data Availability (DA) layer, which Evolve uses to publish block data. + +**YAML Section:** + +```yaml +da: + # ... DA configurations ... +``` + +### DA Service Address + +**Description:** +The network address (host:port) of the Data Availability layer service. Evolve connects to this endpoint to submit and retrieve block data. + +**YAML:** + +```yaml +da: + address: "localhost:26659" +``` + +**Command-line Flag:** +`--rollkit.da.address <address>` +_Example:_ `--rollkit.da.address 192.168.1.100:26659` +_Default:_ `""` (empty, must be configured if DA is used) +_Constant:_ `FlagDAAddress` + +### DA Authentication Token + +**Description:** +The authentication token required to interact with the DA layer service, if the service mandates authentication. + +**YAML:** + +```yaml +da: + auth_token: "YOUR_DA_AUTH_TOKEN" +``` + +**Command-line Flag:** +`--rollkit.da.auth_token <token>` +_Example:_ `--rollkit.da.auth_token mysecrettoken` +_Default:_ `""` (empty) +_Constant:_ `FlagDAAuthToken` + +### DA Gas Price + +**Description:** +The gas price to use for transactions submitted to the DA layer. A value of -1 indicates automatic gas price determination (if supported by the DA layer). Higher values may lead to faster inclusion of data. + +**YAML:** + +```yaml +da: + gas_price: 0.025 +``` + +**Command-line Flag:** +`--rollkit.da.gas_price <price>` +_Example:_ `--rollkit.da.gas_price 0.05` +_Default:_ `-1` (automatic) +_Constant:_ `FlagDAGasPrice` + +### DA Gas Multiplier + +**Description:** +A multiplier applied to the gas price when retrying failed DA submissions.
Values greater than 1 increase the gas price on retries, potentially improving the chances of successful inclusion. + +**YAML:** + +```yaml +da: + gas_multiplier: 1.1 +``` + +**Command-line Flag:** +`--rollkit.da.gas_multiplier <multiplier>` +_Example:_ `--rollkit.da.gas_multiplier 1.5` +_Default:_ `1.0` (no multiplication) +_Constant:_ `FlagDAGasMultiplier` + +### DA Submit Options + +**Description:** +Additional options passed to the DA layer when submitting data. The format and meaning of these options depend on the specific DA implementation being used. For example, with Celestia, this can include custom gas settings or other submission parameters in JSON format. + +**Note:** If you configure multiple signing addresses (see [DA Signing Addresses](#da-signing-addresses)), the selected signing address will be automatically merged into these options as a JSON field `signer_address` (matching Celestia's TxConfig schema). If the base options are already valid JSON, the signing address is added to the existing object; otherwise, a new JSON object is created. + +**YAML:** + +```yaml +da: + submit_options: '{"key":"value"}' # Example, format depends on DA layer +``` + +**Command-line Flag:** +`--rollkit.da.submit_options <json-string>` +_Example:_ `--rollkit.da.submit_options '{"custom_param":true}'` +_Default:_ `""` (empty) +_Constant:_ `FlagDASubmitOptions` + +### DA Signing Addresses + +**Description:** +A comma-separated list of signing addresses to use for DA blob submissions. When multiple addresses are provided, they will be used in round-robin fashion to prevent sequence mismatches that can occur with high-throughput Cosmos SDK-based DA layers. This is particularly useful for Celestia when submitting many transactions concurrently. + +Each submission will select the next address in the list, and that address will be automatically added to the `submit_options` as `signer_address`.
This ensures that the DA layer (e.g., celestia-node) uses the specified account for signing that particular blob submission. + +**Setup Requirements:** + +- All addresses must be loaded into the DA node's keyring and have sufficient funds for transaction fees +- For Celestia, see the guide on setting up multiple accounts in the DA node documentation + +**YAML:** + +```yaml +da: + signing_addresses: + - "celestia1abc123..." + - "celestia1def456..." + - "celestia1ghi789..." +``` + +**Command-line Flag:** +`--rollkit.da.signing_addresses <addresses>` +_Example:_ `--rollkit.da.signing_addresses celestia1abc...,celestia1def...,celestia1ghi...` +_Default:_ `[]` (empty, uses default DA node behavior) +_Constant:_ `FlagDASigningAddresses` + +**Behavior:** + +- If no signing addresses are configured, submissions use the DA layer's default signing behavior +- If one address is configured, all submissions use that address +- If multiple addresses are configured, they are used in round-robin order to distribute the load and prevent nonce/sequence conflicts +- The address selection is thread-safe for concurrent submissions + +### DA Namespace + +**Description:** +The namespace ID used when submitting blobs (block data) to the DA layer. This helps segregate data from different chains or applications on a shared DA layer. + +**Note:** If only `namespace` is provided, it is used for both headers and data; otherwise, `data_namespace` is used for data. Using a separate data namespace allows light clients to sync faster. + +**YAML:** + +```yaml +da: + namespace: "MY_UNIQUE_NAMESPACE_ID" +``` + +**Command-line Flag:** +`--rollkit.da.namespace <namespace>` +_Example:_ `--rollkit.da.namespace 0x1234567890abcdef` +_Default:_ `""` (empty) +_Constant:_ `FlagDANamespace` + +### DA Data Namespace + +**Description:** +The namespace ID specifically for submitting transaction data to the DA layer. Transaction data is submitted separately from headers, enabling nodes to sync only the data they need.
The namespace value is encoded by the node to ensure proper formatting and compatibility with the DA layer. + +**YAML:** + +```yaml +da: + data_namespace: "DATA_NAMESPACE_ID" +``` + +**Command-line Flag:** +`--rollkit.da.data_namespace ` +_Example:_ `--rollkit.da.data_namespace my_data_namespace` +_Default:_ Falls back to `namespace` if not set +_Constant:_ `FlagDADataNamespace` + +### DA Block Time + +**Description:** +The average block time of the Data Availability chain (specified as a duration string, e.g., "15s", "1m"). This value influences: + +- The frequency of DA layer syncing. +- The maximum backoff time for retrying DA submissions. +- Calculation of transaction expiration when multiplied by `mempool_ttl`. + +**YAML:** + +```yaml +da: + block_time: "6s" +``` + +**Command-line Flag:** +`--rollkit.da.block_time ` +_Example:_ `--rollkit.da.block_time 12s` +_Default:_ `"6s"` +_Constant:_ `FlagDABlockTime` + +### DA Mempool TTL + +**Description:** +The number of DA blocks after which a transaction submitted to the DA layer is considered expired and potentially dropped from the DA layer's mempool. This also controls the retry backoff timing for DA submissions. + +**YAML:** + +```yaml +da: + mempool_ttl: 20 +``` + +**Command-line Flag:** +`--rollkit.da.mempool_ttl ` +_Example:_ `--rollkit.da.mempool_ttl 30` +_Default:_ `20` +_Constant:_ `FlagDAMempoolTTL` + +### DA Request Timeout + +**Description:** +Per-request timeout applied to DA `GetIDs` and `Get` RPC calls while retrieving blobs. Increase this value if your DA endpoint has high latency to avoid premature failures; decrease it to make the syncer fail fast and free resources sooner when the DA node becomes unresponsive. 
+ +**YAML:** + +```yaml +da: + request_timeout: "30s" +``` + +**Command-line Flag:** +`--rollkit.da.request_timeout ` +_Example:_ `--rollkit.da.request_timeout 45s` +_Default:_ `"30s"` +_Constant:_ `FlagDARequestTimeout` + +### DA Batching Strategy + +**Description:** +Controls how blocks are batched before submission to the DA layer. Different strategies offer trade-offs between latency, cost efficiency, and throughput. All strategies pass through the DA submitter which performs additional size checks and may further split batches that exceed the DA layer's blob size limit. + +Available strategies: + +- **`immediate`**: Submits as soon as any items are available. Best for low-latency requirements where cost is not a concern. +- **`size`**: Waits until the batch reaches a size threshold (fraction of max blob size). Best for maximizing blob utilization and minimizing costs when latency is flexible. +- **`time`**: Waits for a time interval before submitting. Provides predictable submission timing aligned with DA block times. +- **`adaptive`**: Balances between size and time constraints—submits when either the size threshold is reached OR the max delay expires. Recommended for most production deployments as it optimizes both cost and latency. + +**YAML:** + +```yaml +da: + batching_strategy: "time" +``` + +**Command-line Flag:** +`--rollkit.da.batching_strategy ` +_Example:_ `--rollkit.da.batching_strategy adaptive` +_Default:_ `"time"` +_Constant:_ `FlagDABatchingStrategy` + +### DA Batch Size Threshold + +**Description:** +The minimum blob size threshold (as a fraction of the maximum blob size, between 0.0 and 1.0) before submitting a batch. Only applies to the `size` and `adaptive` strategies. For example, a value of 0.8 means the batch will be submitted when it reaches 80% of the maximum blob size. + +Higher values maximize blob utilization and reduce costs but may increase latency. Lower values reduce latency but may result in less efficient blob usage. 
+ +**YAML:** + +```yaml +da: + batch_size_threshold: 0.8 +``` + +**Command-line Flag:** +`--rollkit.da.batch_size_threshold ` +_Example:_ `--rollkit.da.batch_size_threshold 0.9` +_Default:_ `0.8` (80% of max blob size) +_Constant:_ `FlagDABatchSizeThreshold` + +### DA Batch Max Delay + +**Description:** +The maximum time to wait before submitting a batch regardless of size. Applies to the `time` and `adaptive` strategies. Lower values reduce latency but may increase costs due to smaller batches. This value is typically aligned with the DA chain's block time to ensure submissions land in consecutive blocks. + +When set to 0, defaults to the DA BlockTime value. + +**YAML:** + +```yaml +da: + batch_max_delay: "6s" +``` + +**Command-line Flag:** +`--rollkit.da.batch_max_delay ` +_Example:_ `--rollkit.da.batch_max_delay 12s` +_Default:_ `0` (uses DA BlockTime) +_Constant:_ `FlagDABatchMaxDelay` + +### DA Batch Min Items + +**Description:** +The minimum number of items (headers or data) to accumulate before considering submission. This helps avoid submitting single items when more are expected soon, improving batching efficiency. All strategies respect this minimum. + +**YAML:** + +```yaml +da: + batch_min_items: 1 +``` + +**Command-line Flag:** +`--rollkit.da.batch_min_items ` +_Example:_ `--rollkit.da.batch_min_items 5` +_Default:_ `1` +_Constant:_ `FlagDABatchMinItems` + +## P2P Configuration (`p2p`) + +Settings for peer-to-peer networking, enabling nodes to discover each other, exchange blocks, and share transactions. + +**YAML Section:** + +```yaml +p2p: + # ... P2P configurations ... +``` + +### P2P Listen Address + +**Description:** +The network address (host:port) on which the Evolve node will listen for incoming P2P connections from other nodes. 
+ +**YAML:** + +```yaml +p2p: + listen_address: "0.0.0.0:7676" +``` + +**Command-line Flag:** +`--rollkit.p2p.listen_address ` +_Example:_ `--rollkit.p2p.listen_address /ip4/127.0.0.1/tcp/26656` +_Default:_ `"/ip4/0.0.0.0/tcp/7676"` +_Constant:_ `FlagP2PListenAddress` + +### P2P Peers + +**Description:** +A comma-separated list of peer addresses (e.g., multiaddresses) that the node will attempt to connect to for bootstrapping its P2P connections. These are often referred to as seed nodes. + +**For DA-only sync mode:** Leave this field empty (default) to disable P2P networking entirely. When no peers are configured, the node will sync exclusively from the Data Availability layer without participating in P2P gossip, peer discovery, or block sharing. This is useful for nodes that only need to follow the canonical chain data from DA. + +**YAML:** + +```yaml +p2p: + peers: "/ip4/some_peer_ip/tcp/7676/p2p/PEER_ID1,/ip4/another_peer_ip/tcp/7676/p2p/PEER_ID2" + # For DA-only sync, leave peers empty: + # peers: "" +``` + +**Command-line Flag:** +`--rollkit.p2p.peers ` +_Example:_ `--rollkit.p2p.peers /dns4/seed.example.com/tcp/26656/p2p/12D3KooW...` +_Default:_ `""` (empty - enables DA-only sync mode) +_Constant:_ `FlagP2PPeers` + +### P2P Blocked Peers + +**Description:** +A comma-separated list of peer IDs that the node should block from connecting. This can be used to prevent connections from known malicious or problematic peers. + +**YAML:** + +```yaml +p2p: + blocked_peers: "PEER_ID_TO_BLOCK1,PEER_ID_TO_BLOCK2" +``` + +**Command-line Flag:** +`--rollkit.p2p.blocked_peers ` +_Example:_ `--rollkit.p2p.blocked_peers 12D3KooW...,12D3KooX...` +_Default:_ `""` (empty) +_Constant:_ `FlagP2PBlockedPeers` + +### P2P Allowed Peers + +**Description:** +A comma-separated list of peer IDs that the node should exclusively allow connections from. If this list is non-empty, only peers in this list will be able to connect. 
+ +**YAML:** + +```yaml +p2p: + allowed_peers: "PEER_ID_TO_ALLOW1,PEER_ID_TO_ALLOW2" +``` + +**Command-line Flag:** +`--rollkit.p2p.allowed_peers ` +_Example:_ `--rollkit.p2p.allowed_peers 12D3KooY...,12D3KooZ...` +_Default:_ `""` (empty, allow all unless blocked) +_Constant:_ `FlagP2PAllowedPeers` + +## RPC Configuration (`rpc`) + +Settings for the Remote Procedure Call (RPC) server, which allows clients and applications to interact with the Evolve node. + +**YAML Section:** + +```yaml +rpc: + # ... RPC configurations ... +``` + +### RPC Server Address + +**Description:** +The network address (host:port) to which the RPC server will bind and listen for incoming requests. + +**YAML:** + +```yaml +rpc: + address: "127.0.0.1:7331" +``` + +**Command-line Flag:** +`--rollkit.rpc.address ` +_Example:_ `--rollkit.rpc.address 0.0.0.0:26657` +_Default:_ `"127.0.0.1:7331"` +_Constant:_ `FlagRPCAddress` + +### Enable DA Visualization + +**Description:** +If true, enables the Data Availability (DA) visualization endpoints that provide real-time monitoring of blob submissions to the DA layer. This includes a web-based dashboard and REST API endpoints for tracking submission statistics, monitoring DA health, and analyzing blob details. Only aggregator nodes submit data to the DA layer, so this feature is most useful when running in aggregator mode. + +**YAML:** + +```yaml +rpc: + enable_da_visualization: true +``` + +**Command-line Flag:** +`--rollkit.rpc.enable_da_visualization` (boolean, presence enables it) +_Example:_ `--rollkit.rpc.enable_da_visualization` +_Default:_ `false` +_Constant:_ `FlagRPCEnableDAVisualization` + +See the [DA Visualizer Guide](../guides/da/visualizer.md) for detailed information on using this feature. + +### Health Endpoints + +#### `/health/live` + +Returns `200 OK` if the process is alive and can access the store. 
+ +```bash +curl http://localhost:7331/health/live +``` + +#### `/health/ready` + +Returns `200 OK` if the node can serve correct data. Checks: + +- P2P is listening (if enabled) +- Has synced blocks +- Not too far behind network +- Non-aggregators: has peers +- Aggregators: producing blocks at expected rate + +```bash +curl http://localhost:7331/health/ready +``` + +Configure max blocks behind: + +```yaml +node: + readiness_max_blocks_behind: 15 +``` + +## Instrumentation Configuration (`instrumentation`) + +Settings for enabling and configuring metrics and profiling endpoints, useful for monitoring node performance and debugging. + +**YAML Section:** + +```yaml +instrumentation: + # ... instrumentation configurations ... +``` + +### Enable Prometheus Metrics + +**Description:** +If true, enables the Prometheus metrics endpoint, allowing Prometheus to scrape operational data from the Evolve node. + +**YAML:** + +```yaml +instrumentation: + prometheus: true +``` + +**Command-line Flag:** +`--rollkit.instrumentation.prometheus` (boolean, presence enables it) +_Example:_ `--rollkit.instrumentation.prometheus` +_Default:_ `false` +_Constant:_ `FlagPrometheus` + +### Prometheus Listen Address + +**Description:** +The network address (host:port) where the Prometheus metrics server will listen for scraping requests. + +See [Metrics](../guides/metrics.md) for more details on what metrics are exposed. + +**YAML:** + +```yaml +instrumentation: + prometheus_listen_addr: ":2112" +``` + +**Command-line Flag:** +`--rollkit.instrumentation.prometheus_listen_addr ` +_Example:_ `--rollkit.instrumentation.prometheus_listen_addr 0.0.0.0:9090` +_Default:_ `":2112"` +_Constant:_ `FlagPrometheusListenAddr` + +### Maximum Open Connections + +**Description:** +The maximum number of simultaneous connections allowed for the metrics server (e.g., Prometheus endpoint). 
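Prometheus is the typical client of this metrics server; a scrape job pointing at the default listen address might look like the following (the job name is hypothetical):

```yaml
scrape_configs:
  - job_name: "evolve-node" # hypothetical job name
    static_configs:
      - targets: ["localhost:2112"] # default prometheus_listen_addr
```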
+ +**YAML:** + +```yaml +instrumentation: + max_open_connections: 100 +``` + +**Command-line Flag:** +`--rollkit.instrumentation.max_open_connections ` +_Example:_ `--rollkit.instrumentation.max_open_connections 50` +_Default:_ (Refer to `DefaultInstrumentationConfig()` in code, typically a reasonable number like 100) +_Constant:_ `FlagMaxOpenConnections` + +### Enable Pprof Profiling + +**Description:** +If true, enables the pprof HTTP endpoint, which provides runtime profiling data for debugging performance issues. Accessing these endpoints can help diagnose CPU and memory usage. + +**YAML:** + +```yaml +instrumentation: + pprof: true +``` + +**Command-line Flag:** +`--rollkit.instrumentation.pprof` (boolean, presence enables it) +_Example:_ `--rollkit.instrumentation.pprof` +_Default:_ `false` +_Constant:_ `FlagPprof` + +### Pprof Listen Address + +**Description:** +The network address (host:port) where the pprof HTTP server will listen for profiling requests. + +**YAML:** + +```yaml +instrumentation: + pprof_listen_addr: "localhost:6060" +``` + +**Command-line Flag:** +`--rollkit.instrumentation.pprof_listen_addr ` +_Example:_ `--rollkit.instrumentation.pprof_listen_addr 0.0.0.0:6061` +_Default:_ `"localhost:6060"` +_Constant:_ `FlagPprofListenAddr` + +## Logging Configuration (`log`) + +Settings that control the verbosity and format of log output from the Evolve node. These are typically set via global flags. + +**YAML Section:** + +```yaml +log: + # ... logging configurations ... +``` + +### Log Level + +**Description:** +Sets the minimum severity level for log messages to be displayed. Common levels include `debug`, `info`, `warn`, `error`. 
+ +**YAML:** + +```yaml +log: + level: "info" +``` + +**Command-line Flag:** +`--log.level ` (Note: some applications might use a different flag name like `--log_level`) +_Example:_ `--log.level debug` +_Default:_ `"info"` +_Constant:_ `FlagLogLevel` (value: "evolve.log.level", but often overridden by global app flags) + +### Log Format + +**Description:** +Sets the format for log output. Common formats include `text` (human-readable) and `json` (structured, machine-readable). + +**YAML:** + +```yaml +log: + format: "text" +``` + +**Command-line Flag:** +`--log.format ` (Note: some applications might use a different flag name like `--log_format`) +_Example:_ `--log.format json` +_Default:_ `"text"` +_Constant:_ `FlagLogFormat` (value: "evolve.log.format", but often overridden by global app flags) + +### Log Trace (Stack Traces) + +**Description:** +If true, enables the inclusion of stack traces in error logs. This can be very helpful for debugging issues by showing the call stack at the point of an error. + +**YAML:** + +```yaml +log: + trace: false +``` + +**Command-line Flag:** +`--log.trace` (boolean, presence enables it; Note: some applications might use a different flag name like `--log_trace`) +_Example:_ `--log.trace` +_Default:_ `false` +_Constant:_ `FlagLogTrace` (value: "evolve.log.trace", but often overridden by global app flags) + +## Signer Configuration (`signer`) + +Settings related to the signing mechanism used by the node, particularly for aggregators that need to sign blocks. + +**YAML Section:** + +```yaml +signer: + # ... signer configurations ... +``` + +### Signer Type + +**Description:** +Specifies the type of remote signer to use. Common options might include `file` (for key files) or `grpc` (for connecting to a remote signing service). 
+ +**YAML:** + +```yaml +signer: + signer_type: "file" +``` + +**Command-line Flag:** +`--rollkit.signer.signer_type ` +_Example:_ `--rollkit.signer.signer_type grpc` +_Default:_ (Depends on application, often "file" or none if not an aggregator) +_Constant:_ `FlagSignerType` + +### Signer Path + +**Description:** +The path to the signer file (if `signer_type` is `file`) or the address of the remote signer service (if `signer_type` is `grpc` or similar). + +**YAML:** + +```yaml +signer: + signer_path: "/path/to/priv_validator_key.json" # For file signer + # signer_path: "localhost:9000" # For gRPC signer +``` + +**Command-line Flag:** +`--rollkit.signer.signer_path ` +_Example:_ `--rollkit.signer.signer_path ./config` +_Default:_ (Depends on application) +_Constant:_ `FlagSignerPath` + +### Signer Passphrase + +**Description:** +The passphrase required to decrypt or access the signer key, particularly if using a `file` signer and the key is encrypted, or if the aggregator mode is enabled and requires it. This flag is not directly a field in the `SignerConfig` struct but is used in conjunction with it. + +**YAML:** +This is typically not stored in the YAML file for security reasons but provided via flag or environment variable. + +**Command-line Flag:** +`--rollkit.signer.passphrase ` +_Example:_ `--rollkit.signer.passphrase "mysecretpassphrase"` +_Default:_ `""` (empty) +_Constant:_ `FlagSignerPassphrase` +_Note:_ Be cautious with providing passphrases directly on the command line in shared environments due to history logging. Environment variables or secure input methods are often preferred. + +--- + +This reference should help you configure your Evolve node effectively. Always refer to the specific version of Evolve you are using, as options and defaults may change over time. 
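Pulling several of the options above together, a minimal config sketch might look like this (values are illustrative, not recommendations):

```yaml
da:
  block_time: "6s"
  gas_multiplier: 1.1
  batching_strategy: "adaptive"
  batch_size_threshold: 0.8
p2p:
  listen_address: "/ip4/0.0.0.0/tcp/7676"
rpc:
  address: "127.0.0.1:7331"
instrumentation:
  prometheus: true
  prometheus_listen_addr: ":2112"
log:
  level: "info"
  format: "text"
```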
diff --git a/docs/reference/configuration/ev-reth-chainspec.md b/docs/reference/configuration/ev-reth-chainspec.md new file mode 100644 index 0000000000..1d18055a8b --- /dev/null +++ b/docs/reference/configuration/ev-reth-chainspec.md @@ -0,0 +1,12 @@ +# ev-reth Chainspec + + diff --git a/docs/reference/interfaces/da.md b/docs/reference/interfaces/da.md new file mode 100644 index 0000000000..e5bc3bed52 --- /dev/null +++ b/docs/reference/interfaces/da.md @@ -0,0 +1,12 @@ +# DA Interface + + diff --git a/docs/reference/interfaces/executor.md b/docs/reference/interfaces/executor.md new file mode 100644 index 0000000000..8e22250e95 --- /dev/null +++ b/docs/reference/interfaces/executor.md @@ -0,0 +1,11 @@ +# Executor Interface + + diff --git a/docs/reference/interfaces/sequencer.md b/docs/reference/interfaces/sequencer.md new file mode 100644 index 0000000000..186a46062e --- /dev/null +++ b/docs/reference/interfaces/sequencer.md @@ -0,0 +1,11 @@ +# Sequencer Interface + + diff --git a/docs/reference/specs/block-manager.md b/docs/reference/specs/block-manager.md new file mode 100644 index 0000000000..c97171f90e --- /dev/null +++ b/docs/reference/specs/block-manager.md @@ -0,0 +1,759 @@ +# Block Components + +## Abstract + +The block package provides a modular component-based architecture for handling block-related operations in full nodes. Instead of a single monolithic manager, the system is divided into specialized components that work together, each responsible for specific aspects of block processing. This architecture enables better separation of concerns, easier testing, and more flexible node configurations. 
+
+The main components are:
+
+- **Executor**: Handles block production and state transitions (aggregator nodes only)
+- **Reaper**: Periodically retrieves transactions and submits them to the sequencer (aggregator nodes only)
+- **Submitter**: Manages submission of headers and data to the DA network (aggregator nodes only)
+- **Syncer**: Handles synchronization from both DA and P2P sources (all full nodes)
+- **Cache Manager**: Coordinates caching and tracking of blocks across all components
+
+A full node coordinates these components based on its role:
+
+- **Aggregator nodes**: Use all components for block production, submission, and synchronization
+- **Non-aggregator full nodes**: Use only Syncer and Cache for block synchronization
+
+```mermaid
+sequenceDiagram
+    title Overview of Block Manager
+
+    participant User
+    participant Sequencer
+    participant Full Node 1
+    participant Full Node 2
+    participant DA Layer
+
+    User->>Sequencer: Send Tx
+    Sequencer->>Sequencer: Generate Block
+    Sequencer->>DA Layer: Publish Block
+
+    Sequencer->>Full Node 1: Gossip Block
+    Sequencer->>Full Node 2: Gossip Block
+    Full Node 1->>Full Node 1: Verify Block
+    Full Node 1->>Full Node 2: Gossip Block
+    Full Node 1->>Full Node 1: Mark Block Soft Confirmed
+
+    Full Node 2->>Full Node 2: Verify Block
+    Full Node 2->>Full Node 2: Mark Block Soft Confirmed
+
+    DA Layer->>Full Node 1: Retrieve Block
+    Full Node 1->>Full Node 1: Mark Block DA Included
+
+    DA Layer->>Full Node 2: Retrieve Block
+    Full Node 2->>Full Node 2: Mark Block DA Included
+```
+
+### Component Architecture Overview
+
+```mermaid
+flowchart TB
+    subgraph Block Components [Modular Block Components]
+        EXE[Executor<br/>Block Production]
+        REA[Reaper<br/>Tx Collection]
+        SUB[Submitter<br/>DA Submission]
+        SYN[Syncer<br/>Block Sync]
+        CAC[Cache Manager<br/>State Tracking]
+    end
+
+    subgraph External Components
+        CEXE[Core Executor]
+        SEQ[Sequencer]
+        DA[DA Layer]
+        HS[Header Store/P2P]
+        DS[Data Store/P2P]
+        ST[Local Store]
+    end
+
+    REA -->|GetTxs| CEXE
+    REA -->|SubmitBatch| SEQ
+    REA -->|Notify| EXE
+
+    EXE -->|CreateBlock| CEXE
+    EXE -->|ApplyBlock| CEXE
+    EXE -->|Save| ST
+    EXE -->|Track| CAC
+
+    EXE -->|Headers| SUB
+    EXE -->|Data| SUB
+    SUB -->|Submit| DA
+    SUB -->|Track| CAC
+
+    DA -->|Retrieve| SYN
+    HS -->|Headers| SYN
+    DS -->|Data| SYN
+
+    SYN -->|ApplyBlock| CEXE
+    SYN -->|Save| ST
+    SYN -->|Track| CAC
+    SYN -->|SetFinal| CEXE
+
+    CAC -->|Coordinate| EXE
+    CAC -->|Coordinate| SUB
+    CAC -->|Coordinate| SYN
+```
+
+## Protocol/Component Description
+
+The block components are initialized based on the node type:
+
+### Aggregator Components
+
+Aggregator nodes create all components for full block production and synchronization capabilities:
+
+```go
+components := block.NewAggregatorComponents(
+    config,    // Node configuration
+    genesis,   // Genesis state
+    store,     // Local datastore
+    executor,  // Core executor for state transitions
+    sequencer, // Sequencer client
+    da,        // DA client
+    signer,    // Block signing key
+    // P2P stores and options...
+)
+```
+
+### Non-Aggregator Components
+
+Non-aggregator full nodes create only synchronization components:
+
+```go
+components := block.NewSyncComponents(
+    config,   // Node configuration
+    genesis,  // Genesis state
+    store,    // Local datastore
+    executor, // Core executor for state transitions
+    da,       // DA client
+    // P2P stores and options... (no signer or sequencer needed)
+)
+```
+
+### Component Initialization Parameters
+
+| **Name** | **Type** | **Description** |
+| --- | --- | --- |
+| signing key | crypto.PrivKey | used for signing blocks and data after creation |
+| config | config.BlockManagerConfig | block manager configurations (see config options below) |
+| genesis | \*cmtypes.GenesisDoc | initialize the block manager with genesis state (genesis configuration defined in `config/genesis.json` file under the app directory) |
+| store | store.Store | local datastore for storing chain blocks and states (default local store path is `$db_dir/evolve` and `db_dir` specified in the `config.yaml` file under the app directory) |
+| mempool, proxyapp, eventbus | mempool.Mempool, proxy.AppConnConsensus, \*cmtypes.EventBus | for initializing the executor (state transition function); mempool is also used in the manager to check for availability of transactions for lazy block production |
+| dalc | da.DAClient | the data availability light client used to submit and retrieve blocks to DA network |
+| headerStore | *goheaderstore.Store[*types.SignedHeader] | to store and retrieve block headers gossiped over the P2P network |
+| dataStore | *goheaderstore.Store[*types.SignedData] | to store and retrieve block data gossiped over the P2P network |
+| signaturePayloadProvider | types.SignaturePayloadProvider | optional custom provider for header signature payloads |
+| sequencer | core.Sequencer | used to retrieve batches of transactions from the sequencing layer |
+| reaper | \*Reaper | component that periodically retrieves transactions from the executor and submits them to the sequencer |
+
+### Configuration Options
+
+The block components share a common configuration:
+
+| Name | Type | Description |
+| --- | --- | --- |
+| BlockTime | time.Duration | time interval used for block production and block retrieval from block store ([`defaultBlockTime`][defaultBlockTime]) |
+| DABlockTime | time.Duration | time interval used for both block publication to DA network and block retrieval from DA network ([`defaultDABlockTime`][defaultDABlockTime]) |
+| DAStartHeight | uint64 | block retrieval from DA network starts from this height |
+| LazyBlockInterval | time.Duration | time interval used for block production in lazy aggregator mode even when there are no transactions ([`defaultLazyBlockTime`][defaultLazyBlockTime]) |
+| LazyMode | bool | when set to true, enables lazy aggregation mode which produces blocks only when transactions are available or at LazyBlockInterval intervals |
+| MaxPendingHeadersAndData | uint64 | maximum number of pending headers and data blocks before pausing block production (default: 100) |
+| MaxSubmitAttempts | int | maximum number of retry attempts for DA submissions (default: 30) |
+| MempoolTTL | int | number of blocks to wait when transaction is stuck in DA mempool (default: 25) |
+| GasPrice | float64 | gas price for DA submissions (-1 for automatic/default) |
+| GasMultiplier | float64 | multiplier for gas price on DA submission retries (default: 1.3) |
+| Namespace | da.Namespace | DA namespace ID for block submissions (deprecated, use HeaderNamespace and DataNamespace instead) |
+| HeaderNamespace | string | namespace ID for submitting headers to DA layer (automatically encoded by the node) |
+| DataNamespace | string | namespace ID for submitting data to DA layer (automatically encoded by the node) |
+| RequestTimeout | duration | per-request timeout for DA `GetIDs`/`Get` calls; higher values tolerate slow DA nodes, lower values fail faster (default: 30s) |
+
+### Block Production (Executor Component)
+
+When the full node is operating as an aggregator, the **Executor component** handles block production. There are two modes of block production, which can be specified in the block manager configurations: `normal` and `lazy`.
+
+In `normal` mode, the block manager runs a timer, which is set to the `BlockTime` configuration parameter, and continuously produces blocks at `BlockTime` intervals.
+
+In `lazy` mode, the block manager implements a dual timer mechanism:
+
+```mermaid
+flowchart LR
+    subgraph Lazy Aggregation Mode
+        R[Reaper] -->|GetTxs| CE[Core Executor]
+        CE -->|Txs Available| R
+        R -->|Submit to Sequencer| S[Sequencer]
+        R -->|NotifyNewTransactions| N[txNotifyCh]
+
+        N --> E{Executor Logic}
+        BT[blockTimer] --> E
+        LT[lazyTimer] --> E
+
+        E -->|Txs Available| P1[Produce Block with Txs]
+        E -->|No Txs & LazyTimer| P2[Produce Empty Block]
+
+        P1 --> B[Block Creation]
+        P2 --> B
+    end
+```
+
+1. A `blockTimer` that triggers block production at regular intervals when transactions are available
+2. 
A `lazyTimer` that ensures blocks are produced at `LazyBlockInterval` intervals even during periods of inactivity + +The block manager starts building a block when any transaction becomes available in the mempool via a notification channel (`txNotifyCh`). When the `Reaper` detects new transactions, it calls `Manager.NotifyNewTransactions()`, which performs a non-blocking signal on this channel. The block manager also produces empty blocks at regular intervals to maintain consistency with the DA layer, ensuring a 1:1 mapping between DA layer blocks and execution layer blocks. + +The Reaper component periodically retrieves transactions from the core executor and submits them to the sequencer. It runs independently and notifies the Executor component when new transactions are available, enabling responsive block production in lazy mode. + +#### Building the Block + +The Executor component of aggregator nodes performs the following steps to produce a block: + +```mermaid +flowchart TD + A[Timer Trigger / Transaction Notification] --> B[Retrieve Batch] + B --> C{Transactions Available?} + C -->|Yes| D[Create Block with Txs] + C -->|No| E[Create Empty Block] + D --> F[Generate Header & Data] + E --> F + F --> G[Sign Header → SignedHeader] + F --> H[Sign Data → SignedData] + G --> I[Apply Block] + H --> I + I --> J[Update State] + J --> K[Save to Store] + K --> L[Add to pendingHeaders] + K --> M[Add to pendingData] + L --> N[Broadcast Header to P2P] + M --> O[Broadcast Data to P2P] +``` + +- Retrieve a batch of transactions using `retrieveBatch()` which interfaces with the sequencer +- Call `CreateBlock` using executor with the retrieved transactions +- Create separate header and data structures from the block +- Sign the header using `signing key` to generate `SignedHeader` +- Sign the data using `signing key` to generate `SignedData` (if transactions exist) +- Call `ApplyBlock` using executor to generate an updated state +- Save the block, validators, and updated state 
to local store +- Add the newly generated header to `pendingHeaders` queue +- Add the newly generated data to `pendingData` queue (if not empty) +- Publish the newly generated header and data to channels to notify other components of the sequencer node (such as block and header gossip) + +Note: When no transactions are available, the block manager creates blocks with empty data using a special `dataHashForEmptyTxs` marker. The header and data separation architecture allows headers and data to be submitted and retrieved independently from the DA layer. + +### Block Publication to DA Network (Submitter Component) + +The **Submitter component** of aggregator nodes implements separate submission loops for headers and data, both operating at `DABlockTime` intervals. Headers and data are submitted to different namespaces to improve scalability and allow for more flexible data availability strategies: + +```mermaid +flowchart LR + subgraph Header Submission + H1[pendingHeaders Queue] --> H2[Header Submission Loop] + H2 --> H3[Marshal to Protobuf] + H3 --> H4[Submit to DA] + H4 -->|Success| H5[Remove from Queue] + H4 -->|Failure| H6[Keep in Queue & Retry] + end + + subgraph Data Submission + D1[pendingData Queue] --> D2[Data Submission Loop] + D2 --> D3[Marshal to Protobuf] + D3 --> D4[Submit to DA] + D4 -->|Success| D5[Remove from Queue] + D4 -->|Failure| D6[Keep in Queue & Retry] + end + + H2 -.->|DABlockTime| H2 + D2 -.->|DABlockTime| D2 +``` + +#### Header Submission Loop + +The `HeaderSubmissionLoop` manages the submission of signed headers to the DA network: + +- Retrieves pending headers from the `pendingHeaders` queue +- Marshals headers to protobuf format +- Submits to DA using the generic `submitToDA` helper with the configured `HeaderNamespace` +- On success, removes submitted headers from the pending queue +- On failure, headers remain in the queue for retry + +#### Data Submission Loop + +The `DataSubmissionLoop` manages the submission of signed data to the DA 
network:
+
+- Retrieves pending data from the `pendingData` queue
+- Marshals data to protobuf format
+- Submits to DA using the generic `submitToDA` helper with the configured `DataNamespace`
+- On success, removes submitted data from the pending queue
+- On failure, data remains in the queue for retry
+
+#### Generic Submission Logic
+
+Both loops use a shared `submitToDA` function that provides:
+
+- Namespace-specific submission based on header or data type
+- Retry logic with configurable maximum attempts via `MaxSubmitAttempts` configuration
+- Exponential backoff starting at `initialBackoff` (100ms), doubling each attempt, capped at `DABlockTime`
+- Gas price management with `GasMultiplier` applied on retries using a centralized `retryStrategy`
+- Recursive batch splitting for handling "too big" DA submissions that exceed blob size limits
+- Comprehensive error handling for different DA submission failure types (mempool issues, context cancellation, blob size limits)
+- Comprehensive metrics tracking for attempts, successes, and failures
+- Context-aware cancellation support
+
+#### Retry Strategy and Error Handling
+
+The DA submission system implements sophisticated retry logic using a centralized `retryStrategy` struct to handle various failure scenarios:
+
+```mermaid
+flowchart TD
+    A[Submit to DA] --> B{Submission Result}
+    B -->|Success| C[Reset Backoff & Adjust Gas Price Down]
+    B -->|Too Big| D{Batch Size > 1?}
+    B -->|Mempool/Not Included| E[Mempool Backoff Strategy]
+    B -->|Context Canceled| F[Stop Submission]
+    B -->|Other Error| G[Exponential Backoff]
+
+    D -->|Yes| H[Recursive Batch Splitting]
+    D -->|No| I[Skip Single Item - Cannot Split]
+
+    E --> J[Set Backoff = MempoolTTL * BlockTime]
+    E --> K[Multiply Gas Price by GasMultiplier]
+
+    G --> L[Double Backoff Time]
+    G --> M[Cap at MaxBackoff - BlockTime]
+
+    H --> N[Split into Two Halves]
+    N --> O[Submit First Half]
+    O --> P[Submit Second Half]
+    P --> Q{Both Halves Processed?}
+    Q -->|Yes| R[Combine Results]
+    Q -->|No| S[Handle Partial Success]
+
+    C --> T[Update Pending Queues]
+    T --> U[Post-Submit Actions]
+```
+
+##### Retry Strategy Features
+
+- **Centralized State Management**: The `retryStrategy` struct manages attempt counts, backoff timing, and gas price adjustments
+- **Multiple Backoff Types**:
+  - Exponential backoff for general failures (doubles each attempt, capped at `BlockTime`)
+  - Mempool-specific backoff (waits `MempoolTTL * BlockTime` for stuck transactions)
+  - Success-based backoff reset with gas price reduction
+- **Gas Price Management**:
+  - Increases gas price by `GasMultiplier` on mempool failures
+  - Decreases gas price after successful submissions (bounded by initial price)
+  - Supports automatic gas price detection (`-1` value)
+- **Intelligent Batch Splitting**:
+  - Recursively splits batches that exceed DA blob size limits
+  - Handles partial submissions within split batches
+  - Prevents infinite recursion with proper base cases
+- **Comprehensive Error Classification**:
+  - `StatusSuccess`: Full or partial successful submission
+  - `StatusTooBig`: Triggers batch splitting logic
+  - `StatusNotIncludedInBlock`/`StatusAlreadyInMempool`: Mempool-specific handling
+  - `StatusContextCanceled`: Graceful shutdown support
+  - Other errors: Standard exponential backoff
+
+The manager enforces a limit on pending headers and data through `MaxPendingHeadersAndData` configuration. When this limit is reached, block production pauses to prevent unbounded growth of the pending queues.
+
+### Block Retrieval from DA Network (Syncer Component)
+
+The **Syncer component** implements a `RetrieveLoop` through its DARetriever that regularly pulls headers and data from the DA network. 
The retrieval process supports both legacy single-namespace mode (for backward compatibility) and the new separate namespace mode:
+
+```mermaid
+flowchart TD
+    A[Start RetrieveLoop] --> B[Get DA Height]
+    B --> C{DABlockTime Timer}
+    C --> D[GetHeightPair from DA]
+    D --> E{Result?}
+    E -->|Success| F[Validate Signatures]
+    E -->|NotFound| G[Increment Height]
+    E -->|Error| H[Retry Logic]
+
+    F --> I[Check Sequencer Info]
+    I --> J[Mark DA Included]
+    J --> K[Send to Sync]
+    K --> L[Increment Height]
+    L --> M[Immediate Next Retrieval]
+
+    G --> C
+    H --> N{Retries < 10?}
+    N -->|Yes| O[Wait 100ms]
+    N -->|No| P[Log Error & Stall]
+    O --> D
+    M --> D
+```
+
+#### Retrieval Process
+
+1. **Height Management**:
+   - Starts from the later of the DA height recorded in the last state in the local store and the `DAStartHeight` configuration parameter
+   - Maintains and increments the `daHeight` counter after each successful retrieval
+
+2. **Retrieval Mechanism**:
+   - Executes at `DABlockTime` intervals
+   - Implements namespace migration support:
+     - First attempts legacy namespace retrieval if migration is not completed
+     - Falls back to separate header and data namespace retrieval
+     - Tracks migration status to optimize future retrievals
+   - Retrieves from separate namespaces:
+     - Headers from `HeaderNamespace`
+     - Data from `DataNamespace`
+     - Combines results from both namespaces
+   - Handles three possible outcomes:
+     - `Success`: Process retrieved header and/or data
+     - `NotFound`: No chain block at this DA height (the normal case)
+     - `Error`: Retry with backoff
+
+3. **Error Handling**:
+   - Implements retry logic with a 100ms delay between attempts
+   - After 10 retries, logs the error and stalls retrieval
+   - Does not increment `daHeight` on persistent errors
+
+4. 
**Processing Retrieved Blocks**: + - Validates header and data signatures + - Checks sequencer information + - Marks blocks as DA included in caches + - Sends to sync goroutine for state update + - Successful processing triggers immediate next retrieval without waiting for timer + - Updates namespace migration status when appropriate: + - Marks migration complete when data is found in new namespaces + - Persists migration state to avoid future legacy checks + +#### Header and Data Caching + +The retrieval system uses persistent caches for both headers and data: + +- Prevents duplicate processing +- Tracks DA inclusion status +- Supports out-of-order block arrival +- Enables efficient sync from P2P and DA sources +- Maintains namespace migration state for optimized retrieval + +For more details on DA integration, see the [Data Availability specification](./da.md). + +#### Out-of-Order Chain Blocks on DA + +Evolve should support blocks arriving out-of-order on DA, like so: +![out-of-order blocks](./out-of-order-blocks.png) + +#### Termination Condition + +If the sequencer double-signs two blocks at the same height, evidence of the fault should be posted to DA. Evolve full nodes should process the longest valid chain up to the height of the fault evidence, and terminate. See diagram: +![termination condition](./termination.png) + +### Block Sync Service (Syncer Component) + +The **Syncer component** manages the synchronization of headers and data through its P2PHandler and coordination with the Cache Manager: + +#### Architecture + +- **Header Store**: Uses `goheader.Store[*types.SignedHeader]` for header management +- **Data Store**: Uses `goheader.Store[*types.SignedData]` for data management +- **Separation of Concerns**: Headers and data are handled independently, supporting the header/data separation architecture + +#### Synchronization Flow + +1. **Header Sync**: Headers created by the sequencer are sent to the header store for P2P gossip +2. 
**Data Sync**: Data blocks are sent to the data store for P2P gossip +3. **Cache Integration**: Both header and data caches track seen items to prevent duplicates +4. **DA Inclusion Tracking**: Separate tracking for header and data DA inclusion status + +### Block Publication to P2P network (Executor Component) + +The **Executor component** of aggregator nodes publishes headers and data separately to the P2P network: + +#### Header Publication + +- Headers are sent through the header broadcast channel +- Written to the header store for P2P gossip +- Broadcast to network peers via header sync service + +#### Data Publication + +- Data blocks are sent through the data broadcast channel +- Written to the data store for P2P gossip +- Broadcast to network peers via data sync service + +Non-sequencer full nodes receive headers and data through the P2P sync service and do not publish blocks themselves. + +### Block Retrieval from P2P network (Syncer Component) + +The **Syncer component** retrieves headers and data separately from P2P stores through its P2PHandler: + +#### Header Store Retrieval Loop + +The `HeaderStoreRetrieveLoop`: + +- Operates at `BlockTime` intervals via `headerStoreCh` signals +- Tracks `headerStoreHeight` for the last retrieved header +- Retrieves all headers between last height and current store height +- Validates sequencer information using `assertUsingExpectedSingleSequencer` +- Marks headers as "seen" in the header cache +- Sends headers to sync goroutine via `headerInCh` + +#### Data Store Retrieval Loop + +The `DataStoreRetrieveLoop`: + +- Operates at `BlockTime` intervals via `dataStoreCh` signals +- Tracks `dataStoreHeight` for the last retrieved data +- Retrieves all data blocks between last height and current store height +- Validates data signatures using `assertValidSignedData` +- Marks data as "seen" in the data cache +- Sends data to sync goroutine via `dataInCh` + +#### Soft Confirmations + +Headers and data retrieved from P2P are 
marked as soft confirmed until both: + +1. The corresponding header is seen on the DA layer +2. The corresponding data is seen on the DA layer + +Once both conditions are met, the block is marked as DA-included. + +#### About Soft Confirmations and DA Inclusions + +The block manager retrieves blocks from both the P2P network and the underlying DA network because the blocks are available in the P2P network faster and DA retrieval is slower (e.g., 1 second vs 6 seconds). +The blocks retrieved from the P2P network are only marked as soft confirmed until the DA retrieval succeeds on those blocks and they are marked DA-included. +DA-included blocks are considered to have a higher level of finality. + +**DAIncluderLoop**: +The `DAIncluderLoop` is responsible for advancing the `DAIncludedHeight` by: + +- Checking if blocks after the current height have both header and data marked as DA-included in caches +- Stopping advancement if either header or data is missing for a height +- Calling `SetFinal` on the executor when a block becomes DA-included +- Storing the Evolve height to DA height mapping for tracking +- Ensuring only blocks with both header and data present are considered DA-included + +### State Update after Block Retrieval (Syncer Component) + +The **Syncer component** uses a `SyncLoop` to coordinate state updates from blocks retrieved via P2P or DA networks: + +```mermaid +flowchart TD + subgraph Sources + P1[P2P Header Store] --> H[headerInCh] + P2[P2P Data Store] --> D[dataInCh] + DA1[DA Header Retrieval] --> H + DA2[DA Data Retrieval] --> D + end + + subgraph SyncLoop + H --> S[Sync Goroutine] + D --> S + S --> C{Header & Data for Same Height?} + C -->|Yes| R[Reconstruct Block] + C -->|No| W[Wait for Matching Pair] + R --> V[Validate Signatures] + V --> A[ApplyBlock] + A --> CM[Commit] + CM --> ST[Store Block & State] + ST --> F{DA Included?} + F -->|Yes| FN[SetFinal] + F -->|No| E[End] + FN --> U[Update DA Height] + end +``` + +#### Sync Loop Architecture + 
+The `SyncLoop` processes headers and data from multiple sources: + +- Headers from `headerInCh` (P2P and DA sources) +- Data from `dataInCh` (P2P and DA sources) +- Maintains caches to track processed items +- Ensures ordered processing by height + +#### State Update Process + +When both header and data are available for a height: + +1. **Block Reconstruction**: Combines header and data into a complete block +2. **Validation**: Verifies header and data signatures match expectations +3. **ApplyBlock**: + - Validates the block against current state + - Executes transactions + - Captures validator updates + - Returns updated state +4. **Commit**: + - Persists execution results + - Updates mempool by removing included transactions + - Publishes block events +5. **Storage**: + - Stores the block, validators, and updated state + - Updates last state in manager +6. **Finalization**: + - When block is DA-included, calls `SetFinal` on executor + - Updates DA included height + +## Message Structure/Communication Format + +### Component Communication + +The components communicate through well-defined interfaces: + +#### Executor ↔ Core Executor + +- `InitChain`: initializes the chain state with the given genesis time, initial height, and chain ID using `InitChainSync` on the executor to obtain initial `appHash` and initialize the state. +- `CreateBlock`: prepares a block with transactions from the provided batch data. +- `ApplyBlock`: validates the block, executes the block (apply transactions), captures validator updates, and returns updated state. +- `SetFinal`: marks the block as final when both its header and data are confirmed on the DA layer. +- `GetTxs`: retrieves transactions from the application (used by Reaper component). + +#### Reaper ↔ Sequencer + +- `GetNextBatch`: retrieves the next batch of transactions to include in a block. +- `VerifyBatch`: validates that a batch came from the expected sequencer. 
+ +#### Submitter/Syncer ↔ DA Layer + +- `Submit`: submits headers or data blobs to the DA network. +- `Get`: retrieves headers or data blobs from the DA network. +- `GetHeightPair`: retrieves both header and data at a specific DA height. + +## Assumptions and Considerations + +### Component Architecture + +- The block package uses a modular component architecture instead of a monolithic manager +- Components are created based on node type: aggregator nodes get all components, non-aggregator nodes only get synchronization components +- Each component has a specific responsibility and communicates through well-defined interfaces +- Components share a common Cache Manager for coordination and state tracking + +### Initialization and State Management + +- Components load the initial state from the local store and use genesis if not found in the local store, when the node (re)starts +- During startup the Syncer invokes the execution Replayer to re-execute any blocks the local execution layer is missing; the replayer enforces strict app-hash matching so a mismatch aborts initialization instead of silently drifting out of sync +- The default mode for aggregator nodes is normal (not lazy) +- Components coordinate through channels and shared cache structures + +### Block Production (Executor Component) + +- The Executor can produce empty blocks +- In lazy aggregation mode, the Executor maintains consistency with the DA layer by producing empty blocks at regular intervals, ensuring a 1:1 mapping between DA layer blocks and execution layer blocks +- The lazy aggregation mechanism uses a dual timer approach: + - A `blockTimer` that triggers block production when transactions are available + - A `lazyTimer` that ensures blocks are produced even during periods of inactivity +- Empty batches are handled differently in lazy mode - instead of discarding them, they are returned with the `ErrNoBatch` error, allowing the caller to create empty blocks with proper timestamps +- 
Transaction notifications from the `Reaper` to the `Executor` are handled via a non-blocking notification channel (`txNotifyCh`) to prevent backpressure + +### DA Submission (Submitter Component) + +- The Submitter enforces `MaxPendingHeadersAndData` limit to prevent unbounded growth of pending queues during DA submission issues +- Headers and data are submitted separately to the DA layer using different namespaces, supporting the header/data separation architecture +- The Cache Manager uses persistent caches for headers and data to track seen items and DA inclusion status +- Namespace migration is handled transparently by the Syncer, with automatic detection and state persistence to optimize future operations +- The system supports backward compatibility with legacy single-namespace deployments while transitioning to separate namespaces +- Gas price management in the Submitter includes automatic adjustment with `GasMultiplier` on DA submission retries + +### Storage and Persistence + +- Components use persistent storage (disk) when the `root_dir` and `db_path` configuration parameters are specified in `config.yaml` file under the app directory. If these configuration parameters are not specified, the in-memory storage is used, which will not be persistent if the node stops +- The Syncer does not re-apply blocks when they transition from soft confirmed to DA included status. 
The block is only marked DA included in the caches +- Header and data stores use separate prefixes for isolation in the underlying database +- The genesis `ChainID` is used to create separate `PubSubTopID`s for headers and data in go-header + +### P2P and Synchronization + +- Block sync over the P2P network works only when a full node is connected to the P2P network by specifying the initial seeds to connect to via `P2PConfig.Seeds` configuration parameter when starting the full node +- Node's context is passed down to all components to support graceful shutdown and cancellation + +### Architecture Design Decisions + +- The Executor supports custom signature payload providers for headers, enabling flexible signing schemes +- The component architecture supports the separation of header and data structures in Evolve. This allows for expanding the sequencing scheme beyond single sequencing and enables the use of a decentralized sequencer mode. For detailed information on this architecture, see the [Header and Data Separation ADR](../../adr/adr-014-header-and-data-separation.md) +- Components process blocks with a minimal header format, which is designed to eliminate dependency on CometBFT's header format and can be used to produce an execution layer tailored header if needed. 
For details on this header structure, see the [Evolve Minimal Header](../../adr/adr-015-rollkit-minimal-header.md) specification + +## Metrics + +The block components expose comprehensive metrics for monitoring through the shared Metrics instance: + +### Block Production Metrics (Executor Component) + +- `last_block_produced_height`: Height of the last produced block +- `last_block_produced_time`: Timestamp of the last produced block +- `aggregation_type`: Current aggregation mode (normal/lazy) +- `block_size_bytes`: Size distribution of produced blocks +- `produced_empty_blocks_total`: Count of empty blocks produced + +### DA Metrics (Submitter and Syncer Components) + +- `da_submission_attempts_total`: Total DA submission attempts +- `da_submission_success_total`: Successful DA submissions +- `da_submission_failure_total`: Failed DA submissions +- `da_retrieval_attempts_total`: Total DA retrieval attempts +- `da_retrieval_success_total`: Successful DA retrievals +- `da_retrieval_failure_total`: Failed DA retrievals +- `da_height`: Current DA retrieval height +- `pending_headers_count`: Number of headers pending DA submission +- `pending_data_count`: Number of data blocks pending DA submission + +### Sync Metrics (Syncer Component) + +- `sync_height`: Current sync height +- `da_included_height`: Height of last DA-included block +- `soft_confirmed_height`: Height of last soft confirmed block +- `header_store_height`: Current header store height +- `data_store_height`: Current data store height + +### Performance Metrics (All Components) + +- `block_production_time`: Time to produce a block +- `da_submission_time`: Time to submit to DA +- `state_update_time`: Time to apply block and update state +- `channel_buffer_usage`: Usage of internal channels + +### Error Metrics (All Components) + +- `errors_total`: Total errors by type and operation + +## Implementation + +The modular block components are implemented in the following packages: + +- [Executor]: Block 
production and state transitions (`block/internal/executing/`) +- [Reaper]: Transaction collection and submission (`block/internal/reaping/`) +- [Submitter]: DA submission logic (`block/internal/submitting/`) +- [Syncer]: Block synchronization from DA and P2P (`block/internal/syncing/`) +- [Cache Manager]: Coordination and state tracking (`block/internal/cache/`) +- [Components]: Main components orchestration (`block/components.go`) + +See [tutorial] for running a multi-node network with both aggregator and non-aggregator full nodes. + +## References + +[1] [Go Header][go-header] + +[2] [Block Sync][block-sync] + +[3] [Full Node][full-node] + +[4] [Block Components][Components] + +[5] [Tutorial][tutorial] + +[6] [Header and Data Separation ADR](../../adr/adr-014-header-and-data-separation.md) + +[7] [Evolve Minimal Header](../../adr/adr-015-rollkit-minimal-header.md) + +[8] [Data Availability](./da.md) + +[9] [Lazy Aggregation with DA Layer Consistency ADR](../../adr/adr-021-lazy-aggregation.md) + +[defaultBlockTime]: https://github.com/evstack/ev-node/blob/main/pkg/config/defaults.go#L50 +[defaultDABlockTime]: https://github.com/evstack/ev-node/blob/main/pkg/config/defaults.go#L59 +[defaultLazyBlockTime]: https://github.com/evstack/ev-node/blob/main/pkg/config/defaults.go#L52 +[go-header]: https://github.com/celestiaorg/go-header +[block-sync]: https://github.com/evstack/ev-node/blob/main/pkg/sync/sync_service.go +[full-node]: https://github.com/evstack/ev-node/blob/main/node/full.go +[Executor]: https://github.com/evstack/ev-node/blob/main/block/internal/executing/executor.go +[Reaper]: https://github.com/evstack/ev-node/blob/main/block/internal/reaping/reaper.go +[Submitter]: https://github.com/evstack/ev-node/blob/main/block/internal/submitting/submitter.go +[Syncer]: https://github.com/evstack/ev-node/blob/main/block/internal/syncing/syncer.go +[Cache Manager]: https://github.com/evstack/ev-node/blob/main/block/internal/cache/manager.go +[Components]: 
https://github.com/evstack/ev-node/blob/main/block/components.go
+[tutorial]: https://ev.xyz/guides/full-node
diff --git a/docs/reference/specs/block-validity.md b/docs/reference/specs/block-validity.md
new file mode 100644
index 0000000000..6bd6964a5b
--- /dev/null
+++ b/docs/reference/specs/block-validity.md
@@ -0,0 +1,125 @@
+# Block and Header Validity
+
+## Abstract
+
+Like all blockchains, an Evolve chain is defined as the chain of **valid** blocks from genesis to head. The block and header validity rules therefore define the chain.
+
+Verifying a block/header is done in 3 parts:
+
+1. Verify correct serialization according to the protobuf spec
+
+2. Perform basic validation of the types
+
+3. Perform verification of the new block against the previously accepted block
+
+Evolve uses a header/data separation architecture where headers and data can be validated independently. The system has moved from a multi-validator model to a single signer model for simplified sequencer management.
+
+## Basic Validation
+
+Each type contains a `.ValidateBasic()` method, which verifies that certain basic invariants hold. The `ValidateBasic()` calls are nested for each structure.
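A minimal Go sketch of steps 2 and 3 above; the `Header` type and its field set here are toy stand-ins (the real types live in `types/` and carry more fields):

```go
package main

import (
	"errors"
	"fmt"
)

// Header is a toy stand-in carrying just enough fields to show the
// basic-validation and verification steps from this spec.
type Header struct {
	Height          uint64
	Hash            string // hash of this header
	LastHeaderHash  string // link to the previously accepted header
	ProposerAddress string
}

// ValidateBasic checks invariants that need no prior state (step 2).
func (h *Header) ValidateBasic() error {
	if h.ProposerAddress == "" {
		return errors.New("proposer address is nil")
	}
	return nil
}

// Verify checks a new header against the previously accepted one (step 3).
func (h *Header) Verify(untrusted *Header) error {
	if untrusted.Height != h.Height+1 {
		return fmt.Errorf("headers are not adjacent: %d after %d", untrusted.Height, h.Height)
	}
	if untrusted.LastHeaderHash != h.Hash {
		return errors.New("broken link to previous header")
	}
	return nil
}

func main() {
	trusted := &Header{Height: 9, Hash: "abc", ProposerAddress: "seq"}
	next := &Header{Height: 10, Hash: "def", LastHeaderHash: "abc", ProposerAddress: "seq"}
	// Step 1 (protobuf deserialization) is omitted; steps 2 and 3 both pass:
	fmt.Println(next.ValidateBasic(), trusted.Verify(next))
}
```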
+
+### SignedHeader Validation
+
+```go
+SignedHeader.ValidateBasic()
+  // Make sure the SignedHeader's Header passes basic validation
+  Header.ValidateBasic()
+    verify ProposerAddress not nil
+  // Make sure the SignedHeader's signature passes basic validation
+  Signature.ValidateBasic()
+    // Ensure that someone signed the header
+    verify len(sh.Signature) not 0
+  // For based chains (sh.Signer.IsEmpty()), pass validation
+  if !sh.Signer.IsEmpty():
+    // Verify the signer matches the proposer address
+    verify sh.Signer.Address == sh.ProposerAddress
+    // Verify signature using custom verifier if set, otherwise use default
+    if sh.verifier != nil:
+      verify sh.verifier(sh) == nil
+    else:
+      verify sh.Signature.Verify(sh.Signer.PubKey, sh.Header.MarshalBinary())
+```
+
+### SignedData Validation
+
+```go
+SignedData.ValidateBasic()
+  // Basic validation of the Data itself always passes
+  Data.ValidateBasic() // always passes
+  // Make sure the signature is valid
+  Signature.ValidateBasic()
+    verify len(sd.Signature) not 0
+  // Verify the signer
+  if !sd.Signer.IsEmpty():
+    verify sd.Signature.Verify(sd.Signer.PubKey, sd.Data.MarshalBinary())
+```
+
+### Block Validation
+
+Blocks are composed of SignedHeader and Data:
+
+```go
+// Block validation happens by validating header and data separately,
+// then ensuring the data hash matches
+verify SignedHeader.ValidateBasic() == nil
+verify Data.Hash() == SignedHeader.DataHash
+```
+
+## Verification Against Previous Block
+
+```go
+SignedHeader.Verify(untrustedHeader *SignedHeader)
+  // Basic validation is handled by go-header before this
+  Header.Verify(untrustedHeader)
+    // Verify height sequence
+    if untrustedHeader.Height != h.Height + 1:
+      if untrustedHeader.Height > h.Height + 1:
+        return soft verification failure
+      return error "headers are not adjacent"
+    // Verify the link to previous header
+    verify untrustedHeader.LastHeaderHash == h.Header.Hash()
+    // Note: ValidatorHash field exists for compatibility but is not 
validated
+```
+
+## [Data](https://github.com/evstack/ev-node/blob/main/types/data.go)
+
+| **Field Name** | **Valid State** | **Validation** |
+|----------------|---------------------------------|--------------------------------------|
+| Txs | Transaction data of the block | Data.Hash() == SignedHeader.DataHash |
+| Metadata | Optional p2p gossiping metadata | Not validated |
+
+## [SignedHeader](https://github.com/evstack/ev-node/blob/main/types/signed_header.go)
+
+| **Field Name** | **Valid State** | **Validation** |
+|----------------|-----------------|----------------|
+| Header | Valid header for the block | `Header` passes `ValidateBasic()` and `Verify()` |
+| Signature | Valid signature from the single sequencer | `Signature` passes `ValidateBasic()`, verified against signer |
+| Signer | Information about who signed the header | Must match ProposerAddress if not empty (based chain case) |
+| verifier | Optional custom signature verification function | Used instead of default verification if set |
+
+## [Header](https://github.com/evstack/ev-node/blob/main/types/header.go)
+
+***Note***: Evolve has moved to a single signer model. The multi-validator architecture has been replaced with a simpler single sequencer approach.
+
+| **Field Name** | **Valid State** | **Validation** |
+|----------------|-----------------|----------------|
+| **BaseHeader** | | |
+| Height | Height of the previous accepted header, plus 1 | Checked in the `Verify()` step |
+| Time | Timestamp of the block | Not validated in Evolve |
+| ChainID | The hard-coded ChainID of the chain | Should be checked as soon as the header is received |
+| **Header** | | |
+| Version | Unused | |
+| LastHeaderHash | The hash of the previous accepted block | Checked in the `Verify()` step |
+| DataHash | Correct hash of the block's Data field | Checked in the `ValidateBasic()` step |
+| AppHash | The correct state root after executing the block's transactions against the accepted state | Checked during block execution |
+| ProposerAddress | Address of the expected proposer | Must match Signer.Address in SignedHeader |
+| ValidatorHash | Compatibility field for Tendermint light client | Not validated |
+
+## [Signer](https://github.com/evstack/ev-node/blob/main/types/signed_header.go)
+
+The Signer type replaces the previous ValidatorSet for single sequencer operation:
+
+| **Field Name** | **Valid State** | **Validation** |
+|----------------|-----------------|----------------|
+| PubKey | Public key of the signer | Must not be nil if Signer is not empty |
+| Address | Address derived from the public key | Must match ProposerAddress |
diff --git a/docs/reference/specs/da.md b/docs/reference/specs/da.md
new file mode 100644
index 0000000000..481a433852
--- /dev/null
+++ b/docs/reference/specs/da.md
@@ -0,0 +1,63 @@
+# DA
+
+Evolve provides a generic [data availability interface][da-interface] for modular blockchains. Any DA that implements this interface can be used with Evolve.
+
+## Details
+
+`Client` can connect via JSON-RPC transports using Evolve's [jsonrpc][jsonrpc] implementations.
The connection can be configured using the following CLI flags:
+
+* `--rollkit.da.address`: URL address of the DA service (default: "grpc://localhost:26650")
+* `--rollkit.da.auth_token`: authentication token of the DA service
+* `--rollkit.da.namespace`: namespace to use when submitting blobs to the DA service (deprecated)
+* `--rollkit.da.header_namespace`: namespace to use when submitting headers to the DA service
+* `--rollkit.da.data_namespace`: namespace to use when submitting data to the DA service
+
+The Submitter component now submits headers and data separately to the DA layer using different namespaces:
+
+* **Headers**: Submitted to the namespace specified by `--rollkit.da.header_namespace` (or falls back to `--rollkit.da.namespace` if not set)
+* **Data**: Submitted to the namespace specified by `--rollkit.da.data_namespace` (or falls back to `--rollkit.da.namespace` if not set)
+
+Each submission first encodes the headers or data using protobuf (the encoded data are called blobs) and invokes the `Submit` method on the underlying DA implementation with the appropriate namespace. On successful submission (`StatusSuccess`), the DA block height which included the blobs is returned.
+
+To make sure that the serialized blocks don't exceed the underlying DA's blob limits, the submitter fetches the blob size limit by calling `Config`, which returns the limit as a `uint64` byte count, then includes serialized blocks until the limit is reached. If the limit is reached, it submits the partial set and returns the count of successfully submitted blocks as `SubmittedCount`. The caller should retry with the remaining blocks until all the blocks are submitted. If the first block itself is over the limit, an error is returned.
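The size-limited batching described above can be sketched as follows; `splitToLimit` and its behavior are illustrative assumptions modeled on the description, not the real submitter code:

```go
package main

import (
	"errors"
	"fmt"
)

// splitToLimit greedily packs serialized blobs into a single submission,
// stopping before the DA blob-size limit would be exceeded. It returns the
// number of blobs that fit (the caller retries with the remainder), and an
// error when even the first blob is over the limit.
func splitToLimit(blobs [][]byte, limit uint64) (int, error) {
	if len(blobs) > 0 && uint64(len(blobs[0])) > limit {
		return 0, errors.New("first blob exceeds DA blob size limit")
	}
	var total uint64
	count := 0
	for _, b := range blobs {
		if total+uint64(len(b)) > limit {
			break // partial set: remaining blobs go in the next submission
		}
		total += uint64(len(b))
		count++
	}
	return count, nil
}

func main() {
	blobs := [][]byte{make([]byte, 40), make([]byte, 40), make([]byte, 40)}
	n, err := splitToLimit(blobs, 100)
	fmt.Println(n, err) // two 40-byte blobs fit under a 100-byte limit
}
```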
+
+The `Submit` call may result in an error (`StatusError`) from the underlying DA implementation in the following scenarios:
+
+* the total blob size exceeds the underlying DA's limits (this includes empty blobs)
+* implementation-specific failures; e.g., for [celestia-da-json-rpc][jsonrpc], an invalid namespace, failure to create the commitment or proof, or too low a gas price can each return an error
+
+The retrieval process now supports both legacy single-namespace mode and separate namespace mode:
+
+1. **Legacy Mode Support**: For backward compatibility, the system first attempts to retrieve from the legacy namespace if migration has not been completed.
+
+2. **Separate Namespace Retrieval**: The system retrieves headers and data separately:
+   * Headers are retrieved from the `HeaderNamespace`
+   * Data is retrieved from the `DataNamespace`
+   * Results from both namespaces are combined
+
+3. **Namespace Migration**: The system automatically detects and tracks namespace migration:
+   * When data is found in the new namespaces, migration is marked as complete
+   * Migration state is persisted to optimize future retrievals
+   * Once migration is complete, legacy namespace checks are skipped
+
+If there are no blocks available for a given DA height in any namespace, `StatusNotFound` is returned (which is not an error case). The retrieved blobs are converted back to headers and data, then combined into complete blocks for processing.
+
+Both header/data submission and retrieval operations may fail when the DA node or the underlying DA blockchain is unhealthy: for example, the DA mempool is full, the DA submit transaction's nonce clashes with another transaction from the DA submitter account, or the DA node is not synced.
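The migration-aware retrieval order can be sketched like this; `Status`, the namespace strings, and the `getFn` callback are illustrative assumptions, not the real retriever API:

```go
package main

import "fmt"

// Status mirrors the retrieval outcomes described above.
type Status int

const (
	StatusSuccess Status = iota
	StatusNotFound
)

// getFn stands in for a namespace-scoped DA Get call.
type getFn func(height uint64, namespace string) Status

// retrieve tries the legacy namespace until migration is known complete,
// then reads headers and data from their separate namespaces. Finding
// anything in a new namespace marks migration complete.
func retrieve(height uint64, migrated *bool, get getFn) Status {
	if !*migrated {
		if get(height, "legacy") == StatusSuccess {
			return StatusSuccess
		}
	}
	h := get(height, "header-ns")
	d := get(height, "data-ns")
	if h == StatusSuccess || d == StatusSuccess {
		*migrated = true // persisted in real code to skip future legacy checks
		return StatusSuccess
	}
	return StatusNotFound
}

func main() {
	migrated := false
	// Nothing in the legacy namespace; a header exists in the new namespace.
	get := func(height uint64, ns string) Status {
		if ns == "header-ns" {
			return StatusSuccess
		}
		return StatusNotFound
	}
	res := retrieve(5, &migrated, get)
	fmt.Println(res == StatusSuccess, migrated) // retrieval succeeds and migration is marked
}
```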
+
+## Namespace Separation Benefits
+
+The separation of headers and data into different namespaces provides several advantages:
+
+* **Improved Scalability**: Headers and data can be processed independently, allowing for more efficient resource utilization
+* **Flexible Data Availability**: Different availability guarantees can be applied to headers vs data
+* **Optimized Retrieval**: Clients can retrieve only the data they need (e.g., light clients may only need headers)
+* **Backward Compatibility**: The system maintains support for legacy single-namespace deployments while enabling gradual migration
+
+## References
+
+[1] [da-interface][da-interface]
+
+[2] [jsonrpc][jsonrpc]
+
+[da-interface]: https://github.com/evstack/ev-node/blob/main/block/public.go
+[jsonrpc]: https://github.com/evstack/ev-node/tree/main/pkg/da/jsonrpc
diff --git a/docs/reference/specs/full-node.md b/docs/reference/specs/full-node.md
new file mode 100644
index 0000000000..f909536b50
--- /dev/null
+++ b/docs/reference/specs/full-node.md
@@ -0,0 +1,107 @@
+# Full Node
+
+## Abstract
+
+A Full Node is a top-level service that encapsulates different components of Evolve and initializes/manages them.
+
+## Details
+
+### Full Node Details
+
+A Full Node is initialized inside the Cosmos SDK start script along with the node configuration, a private key to use in the P2P client, a private key for signing blocks as a block proposer, a client creator, a genesis document, and a logger. It uses them to initialize the components described below. The TxIndexer, BlockIndexer, and IndexerService components exist to ensure CometBFT compatibility, since they are needed for most of the RPC calls in CometBFT's `SignClient` interface.
+
+Note that unlike a light node, which only syncs and stores block headers seen on the P2P layer, the full node also syncs and stores full blocks seen on both the P2P network and the DA layer. Full blocks contain all the transactions published as part of the block.
+
+The Full Node mainly encapsulates and initializes/manages the following components:
+
+### genesisDoc
+
+The [genesis] document contains information about the initial state of the chain, in particular its validator set.
+
+### conf
+
+The [node configuration] contains all the necessary settings for the node to be initialized and function properly.
+
+### P2P
+
+The [peer-to-peer client] is used to gossip transactions between full nodes in the network.
+
+### Store
+
+The [Store] is initialized with `DefaultStore`, an implementation of the [store interface], which is used for storing and retrieving blocks, commits, and state.
+
+### blockComponents
+
+The [Block Components] provide a modular architecture for managing block-related operations. Instead of a single monolithic manager, the system uses specialized components:
+
+**For Aggregator Nodes:**
+
+- **Executor**: Block production (normal and lazy modes) and state transitions
+- **Reaper**: Transaction collection and submission to sequencer
+- **Submitter**: Header and data submission to DA layer
+- **Syncer**: Block retrieval and synchronization from DA and P2P
+- **Cache Manager**: Coordination and tracking across all components
+
+**For Non-Aggregator Nodes:**
+
+- **Syncer**: Block retrieval and synchronization from DA and P2P
+- **Cache Manager**: Tracking and caching of synchronized blocks
+
+This modular architecture implements header/data separation, where headers and transaction data are handled independently by different components.
+
+### dalc
+
+The [Data Availability Layer Client][dalc] is used to interact with the data availability layer. It is initialized with the DA Layer and DA Config specified in the node configuration.
+
+### hSyncService
+
+The [Header Sync Service] is used for syncing signed headers between nodes over P2P. It operates independently from data sync to support light clients.
+ +### dSyncService + +The [Data Sync Service] is used for syncing transaction data between nodes over P2P. This service is only used by full nodes, not light nodes. + +## Message Structure/Communication Format + +The Full Node communicates with other nodes in the network using the P2P client. It also communicates with the application using the ABCI proxy connections. The communication format is based on the P2P and ABCI protocols. + +## Assumptions and Considerations + +The Full Node assumes that the configuration, private keys, client creator, genesis document, and logger are correctly passed in by the Cosmos SDK. It also assumes that the P2P client, data availability layer client, block components, and other services can be started and stopped without errors. + +## Implementation + +See [full node] + +## References + +[1] [Full Node][full node] + +[2] [Genesis Document][genesis] + +[3] [Node Configuration][node configuration] + +[4] [Peer to Peer Client][peer-to-peer client] + +[5] [Store][Store] + +[6] [Store Interface][store interface] + +[7] [Block Components][block components] + +[8] [Data Availability Layer Client][dalc] + +[9] [Header Sync Service][Header Sync Service] + +[10] [Data Sync Service][Data Sync Service] + +[full node]: https://github.com/evstack/ev-node/blob/main/node/full.go +[genesis]: https://github.com/cometbft/cometbft/blob/main/spec/core/genesis.md +[node configuration]: https://github.com/evstack/ev-node/blob/main/pkg/config/config.go +[peer-to-peer client]: https://github.com/evstack/ev-node/blob/main/pkg/p2p/client.go +[Store]: https://github.com/evstack/ev-node/blob/main/pkg/store/store.go +[store interface]: https://github.com/evstack/ev-node/blob/main/pkg/store/types.go +[Block Components]: https://github.com/evstack/ev-node/blob/main/block/components.go +[dalc]: https://github.com/evstack/ev-node/blob/main/block/public.go +[Header Sync Service]: https://github.com/evstack/ev-node/blob/main/pkg/sync/sync_service.go +[Data Sync 
Service]: https://github.com/evstack/ev-node/blob/main/pkg/sync/sync_service.go diff --git a/docs/reference/specs/header-sync.md b/docs/reference/specs/header-sync.md new file mode 100644 index 0000000000..750f325933 --- /dev/null +++ b/docs/reference/specs/header-sync.md @@ -0,0 +1,108 @@ +# Header and Data Sync + +## Abstract + +The nodes in the P2P network sync headers and data using separate sync services that implement the [go-header][go-header] interface. Evolve uses a header/data separation architecture where headers and transaction data are synchronized independently through parallel services. Each sync service consists of several components as listed below. + +|Component|Description| +|---|---| +|store| a prefixed [datastore][datastore] where synced items are stored (`headerSync` prefix for headers, `dataSync` prefix for data)| +|subscriber| a [libp2p][libp2p] node pubsub subscriber for the specific data type| +|P2P server| a server for handling requests between peers in the P2P network| +|exchange| a client that enables sending in/out-bound requests from/to the P2P network| +|syncer| a service for efficient synchronization. 
When a P2P node falls behind and wants to catch up to the latest network head via P2P network, it can use the syncer.| + +## Details + +Evolve implements two separate sync services: + +### Header Sync Service + +- Synchronizes `SignedHeader` structures containing block headers with signatures +- Used by all node types (sequencer, full, and light) +- Essential for maintaining the canonical view of the chain + +### Data Sync Service + +- Synchronizes `Data` structures containing transaction data +- Used only by full nodes and sequencers +- Light nodes do not run this service as they only need headers + +Both services: + +- Utilize the generic `SyncService[H header.Header[H]]` implementation +- Inherit the `ConnectionGater` from the node's P2P client for peer management +- Use `NodeConfig.BlockTime` to determine outdated items during sync +- Operate independently on separate P2P topics and datastores + +### Consumption of Sync Services + +#### Header Sync + +- Sequencer nodes publish signed headers to the P2P network after block creation +- Full and light nodes receive and store headers for chain validation +- Headers contain commitments (DataHash) that link to the corresponding data + +#### Data Sync + +- Sequencer nodes publish transaction data separately from headers +- Only full nodes receive and store data (light nodes skip this) +- Data is linked to headers through the DataHash commitment + +#### Parallel Broadcasting + +The Executor component (in aggregator nodes) broadcasts headers and data in parallel when publishing blocks: + +- Headers are sent through `headerBroadcaster` +- Data is sent through `dataBroadcaster` +- This enables efficient network propagation of both components + +## Assumptions + +- Separate datastores are created with different prefixes: + - Headers: `headerSync` prefix on the main datastore + - Data: `dataSync` prefix on the main datastore +- Network IDs are suffixed to distinguish services: + - Header sync: `{network}-headerSync` + - 
Data sync: `{network}-dataSync`
+- Chain IDs for pubsub topics are also separated:
+  - Headers: `{chainID}-headerSync` creates a topic like `/gm-headerSync/header-sub/v0.0.1`
+  - Data: `{chainID}-dataSync` creates a topic like `/gm-dataSync/header-sub/v0.0.1`
+- Both stores must contain at least one item before the syncer starts:
+  - On first boot, the services fetch the configured genesis height from peers
+  - On restart, each store reuses its latest item to derive the initial height requested from peers
+- Sync services work only when connected to the P2P network via `P2PConfig.Seeds`
+- Node context is passed to all components for graceful shutdown
+- Headers and data are linked through the DataHash but synced independently
+
+## Implementation
+
+The sync service implementation can be found in [pkg/sync/sync_service.go][sync-service]. The generic `SyncService[H header.Header[H]]` is instantiated as:
+
+- `HeaderSyncService` for syncing `*types.SignedHeader`
+- `DataSyncService` for syncing `*types.Data`
+
+Full nodes create and start both services, while light nodes only start the header sync service. The services are created in the [full][fullnode] and [light][lightnode] node implementations.
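The ID and topic suffixing convention above can be sketched as plain string construction. The helper name and return shape are illustrative, not the ev-node API; go-header composes the actual pubsub topic string internally.

```go
package main

import "fmt"

// syncIdentifiers derives the per-service network IDs and pubsub
// topics following the suffixing convention described above.
// (Helper name and return shape are illustrative only.)
func syncIdentifiers(networkID, chainID string) (headerNet, dataNet, headerTopic, dataTopic string) {
	// Network IDs distinguish the two sync services on the wire.
	headerNet = networkID + "-headerSync"
	dataNet = networkID + "-dataSync"
	// go-header topics take the form /<suffixed chainID>/header-sub/v0.0.1.
	headerTopic = fmt.Sprintf("/%s-headerSync/header-sub/v0.0.1", chainID)
	dataTopic = fmt.Sprintf("/%s-dataSync/header-sub/v0.0.1", chainID)
	return
}

func main() {
	_, _, h, d := syncIdentifiers("testnet", "gm")
	fmt.Println(h) // /gm-headerSync/header-sub/v0.0.1
	fmt.Println(d) // /gm-dataSync/header-sub/v0.0.1
}
```

For the example chain ID `gm` used in the spec, this reproduces the two topics listed above.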
+
+The block components integrate with both services through:
+
+- The Syncer component's P2PHandler retrieves headers and data from P2P
+- The Executor component publishes headers and data through broadcast channels
+- Separate stores and channels manage header and data synchronization
+
+## References
+
+[1] [Header Sync][sync-service]
+
+[2] [Full Node][fullnode]
+
+[3] [Light Node][lightnode]
+
+[4] [go-header][go-header]
+
+[sync-service]: https://github.com/evstack/ev-node/blob/main/pkg/sync/sync_service.go
+[fullnode]: https://github.com/evstack/ev-node/blob/main/node/full.go
+[lightnode]: https://github.com/evstack/ev-node/blob/main/node/light.go
+[go-header]: https://github.com/celestiaorg/go-header
+[libp2p]: https://github.com/libp2p/go-libp2p
+[datastore]: https://github.com/ipfs/go-datastore
diff --git a/docs/reference/specs/out-of-order-blocks.png b/docs/reference/specs/out-of-order-blocks.png
new file mode 100644
index 0000000000000000000000000000000000000000..fa7a955cb97a9e5c4408bbc6f37bd1e475ec5492
GIT binary patch
literal 27206
zqHbc`!Yt#SGLl}%oC|s$keU0X6l0n;<QJ0GP39^#F6m#S(0w)IrpCw4 zz|P&>t*N7vZ6}twh8s}E9{2L`5qhZ_^1|H^g$DvBynP9t45CYxw459pU_P7#YWjQ- z6+2jy)ZLys z(oTl0F7(jvM}8f_G}Of8Cj?$`wRX;5a2?;y*UoKNso=9nCP$XXc`+E!)cI# zF-M&{ohgcj{seRzivdETPoAXX?4-Pk5V(kw8Zp%cjK77b&_0qpY*&M;= z^l>g{WwH{B4`-qA*D3A%vYhSt=FRMv4AL5tCPG)bLgI|B7hCR9pGC(ZrJs=ud&3ec z;-^(rKW_58676i5|Aq__v9|mbK20ReQ|Y@;q@PL$hXnN~{(@vWe=6p*N{$IY5;&aG z@mWVXDE?Gnw~7jp%yc9(S63EOJJz+GTPK}qfRm%Q@VQ~N^^hXM<{l+ix?bYwB_Ka1 z(EWZr*+{U1{~4B2r)PrwZE*UfZ0kzivfKJ5MR^o=TTqFQViLH8jejk0tOp(ynu2qa zlUpQvfuVT%z81AcQ<`@92p5HazYKVggPy!5r|y-WwlZhOp5k5_3b%-C{)R@vac`9z1#! z=CiMZvelN2Y64cdO#SssA^5~Scu*YaS3p564=C@q4{m~ZE&ux46J-JH3Z{kThyaQ| zGK-MGVcy(ka=prrk(-gRm;mJc-uk8AJw31i@b%sf|rA4jbg;rRF|ZKK0;21wYvDi zgYXh4oR|#M`q0oC=ynzV^>Nc*T}r?UfE_7TkWuMwiYJ5$SvzMtfH(DA)T++E5p;h5 zEQ{dUJ za%HR1ohmAbFksH=U7WNA%!nT-&uIlx;&H)(P@EBRXg|9oQ?yDu%B^U%VNF)KYO^(s z2&@J!!GVEbU1vq}lAG~-AI7G%+Rd?sqeZ~&zOE#D};_{`+W}K8Z^&Trqp=pDEWooU57Yjji^8A9XiS<2zCrbMP0J+ySj-w5ElF)Fk zSSJ^o!^h*)eCX65ZGnw|L<=y4!=>>;mu}sP@<}u!(rW_I1|RI~6uqSD;tB%?QmsK_ z@{Jpcj29BXX4Elrp}4nxT1qt~(!b3@_&1p@j^6Q$6Hk2LO&7po*k~%&Z4L(`d5*bj z)S0hn2e&Zur~%y`ZWUxg6zUPGOhF*}Nk;twxol?^bGL`07WSH`JdCUl4+&_H#;G z+UH(Yj!~q~I%zJhpZPtwxDs~cb93#u63NFE$^ZX9{Qm>_UtSFCjh~|)|5fB!mcaS> PLMM)y9R1V4@yfped`{D3 literal 0 HcmV?d00001 diff --git a/docs/reference/specs/overview.md b/docs/reference/specs/overview.md new file mode 100644 index 0000000000..0621ad0983 --- /dev/null +++ b/docs/reference/specs/overview.md @@ -0,0 +1,17 @@ +# Specs Overview + +Welcome to the Evolve Technical Specifications. + +This is comprehensive documentation on the inner components of Evolve, including data storage, transaction processing, and more. It’s an essential resource for developers looking to understand, contribute to, and leverage the full capabilities of Evolve. + +Each file in this folder covers a specific aspect of the system, from block management to data availability and networking. 
Use this page as a starting point to explore the technical details and architecture of Evolve. + +## Table of Contents + +- [Block Components](./block-manager.md): Explains the modular component architecture for block processing in Evolve. +- [Block Validity](./block-validity.md): Details the rules and checks for block validity within the protocol. +- [Data Availability (DA)](./da.md): Describes how Evolve ensures data availability and integrates with DA layers. +- [Full Node](./full_node.md): Outlines the architecture and operation of a full node in Evolve. +- [Header Sync](./header-sync.md): Covers the process and protocol for synchronizing block headers. +- [P2P](./p2p.md): Documents the peer-to-peer networking layer and its protocols. +- [Store](./store.md): Provides information about the storage subsystem and data management. diff --git a/docs/reference/specs/store.md b/docs/reference/specs/store.md new file mode 100644 index 0000000000..8432902f7f --- /dev/null +++ b/docs/reference/specs/store.md @@ -0,0 +1,92 @@ +# Store + +## Abstract + +The Store interface defines methods for storing and retrieving blocks, commits, and the state of the blockchain. + +## Protocol/Component Description + +The Store interface defines the following methods: + +- `Height`: Returns the height of the highest block in the store. +- `SetHeight`: Sets given height in the store if it's higher than the existing height in the store. +- `SaveBlock`: Saves a block (containing both header and data) along with its seen signature. +- `GetBlock`: Returns a block at a given height. +- `GetBlockByHash`: Returns a block with a given block header hash. + +Note: While blocks are stored as complete units in the store, the block components handle headers and data separately during synchronization and DA layer interaction. + +- `SaveBlockResponses`: Saves block responses in the Store. +- `GetBlockResponses`: Returns block results at a given height. 
+- `GetSignature`: Returns a signature for a block at a given height.
+- `GetSignatureByHash`: Returns a signature for a block with a given block header hash.
+- `UpdateState`: Updates the state saved in the Store. Only one State is stored.
+- `GetState`: Returns the last state saved with UpdateState.
+- `SaveValidators`: Saves the validator set at a given height.
+- `GetValidators`: Returns the validator set at a given height.
+
+The `TxnDatastore` interface inside [go-datastore] is used for constructing different key-value stores for the underlying storage of a full node. There are two different implementations of `TxnDatastore` in [kv.go]:
+
+- `NewTestInMemoryKVStore`: Builds a key-value store that uses the [BadgerDB] library and operates in memory, without accessing the disk. Used only in unit tests and integration tests.
+
+- `NewDefaultKVStore`: Builds a key-value store that uses the [BadgerDB] library and stores the data on disk at the specified path.
+
+An Evolve full node is [initialized][full_node_store_initialization] using `NewDefaultKVStore` as the base key-value store for underlying storage. To store various types of data in this base key-value store, different prefixes are used: `mainPrefix`, `dalcPrefix`, and `indexerPrefix`. The `mainPrefix`, equal to `0`, is used for the main node data; the `dalcPrefix`, equal to `1`, is used for Data Availability Layer Client (DALC) data; and the `indexerPrefix`, equal to `2`, is used for indexing-related data.
+
+For the main node data, the `DefaultStore` struct, an implementation of the Store interface, is used with the following prefixes for the various types of data within it:
+
+- `blockPrefix` with value "b": Used to store complete blocks in the key-value store.
+- `indexPrefix` with value "i": Used to index the blocks stored in the key-value store.
+- `commitPrefix` with value "c": Used to store commits related to the blocks.
+- `statePrefix` with value "s": Used to store the state of the blockchain.
+- `responsesPrefix` with value "r": Used to store responses related to the blocks.
+- `validatorsPrefix` with value "v": Used to store validator sets at a given height.
+
+Additional prefixes used by the sync services:
+
+- `headerSyncPrefix` with value "hs": Used by the header sync service for P2P-synced headers.
+- `dataSyncPrefix` with value "ds": Used by the data sync service for P2P-synced transaction data.
+
+For example, in a call to `GetBlockByHash` for some block hash `<hash>`, the key used in the full node's base key-value store will be `/0/b/<hash>`, where `0` is the main store prefix and `b` is the block prefix. Similarly, in a call to `GetValidators` for some height `<height>`, the key used in the full node's base key-value store will be `/0/v/<height>`, where `0` is the main store prefix and `v` is the validator set prefix.
+
+Inside the key-value store, values of these various types of data, such as `Block`, are stored as byte arrays that are encoded and decoded using the corresponding Protobuf [marshal and unmarshal methods][serialization].
+
+The store is used most heavily by the [block components] to perform their functions correctly. Since the block components run multiple goroutines, access to the store is protected by mutex locks to synchronize reads and writes and prevent race conditions.
+
+## Message Structure/Communication Format
+
+The Store does not communicate over the network, so there is no message structure or communication format.
+
+## Assumptions and Considerations
+
+The Store assumes that the underlying datastore is reliable and provides atomicity for transactions. It also assumes that the data passed to it for storage is valid and correctly formatted.
+
+## Implementation
+
+See [Store Interface][store_interface] and [Default Store][default_store] for its implementation.
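The key layout described above can be sketched as plain key construction. This is a simplification of the prefixed go-datastore wrapping that `DefaultStore` actually performs; the prefix values are taken from the spec text, while the helper name is illustrative.

```go
package main

import "fmt"

// Prefix values as described in the spec text above.
const (
	mainPrefix       = "0" // main node data
	blockPrefix      = "b" // complete blocks
	validatorsPrefix = "v" // validator sets
)

// storeKey sketches how the full key in the base key-value store is
// composed, e.g. /0/b/<hash> for a block looked up by hash, or
// /0/v/<height> for a validator set at a height.
func storeKey(storePrefix, typePrefix, id string) string {
	return fmt.Sprintf("/%s/%s/%s", storePrefix, typePrefix, id)
}

func main() {
	fmt.Println(storeKey(mainPrefix, blockPrefix, "abc123"))    // /0/b/abc123
	fmt.Println(storeKey(mainPrefix, validatorsPrefix, "42")) // /0/v/42
}
```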
+ +## References + +[1] [Store Interface][store_interface] + +[2] [Default Store][default_store] + +[3] [Full Node Store Initialization][full_node_store_initialization] + +[4] [Block Components][block components] + +[5] [Badger DB][BadgerDB] + +[6] [Go Datastore][go-datastore] + +[7] [Key Value Store][kv.go] + +[8] [Serialization][serialization] + +[store_interface]: https://github.com/evstack/ev-node/blob/main/pkg/store/types.go#L11 +[default_store]: https://github.com/evstack/ev-node/blob/main/pkg/store/store.go +[full_node_store_initialization]: https://github.com/evstack/ev-node/blob/main/node/full.go#L96 +[block components]: https://github.com/evstack/ev-node/blob/main/block/components.go +[BadgerDB]: https://github.com/dgraph-io/badger +[go-datastore]: https://github.com/ipfs/go-datastore +[kv.go]: https://github.com/evstack/ev-node/blob/main/pkg/store/kv.go +[serialization]: https://github.com/evstack/ev-node/blob/main/types/serialization.go diff --git a/docs/reference/specs/termination.png b/docs/reference/specs/termination.png new file mode 100644 index 0000000000000000000000000000000000000000..0b61c8f23298531c1ac2bbfa555470fd61fec5d9 GIT binary patch literal 42225 zcmeFZWn7lq*Di{Hgp`7$gbD&mcb6D62&jaFNJ>d6%_9m(r!+itN_VR?C`dOd4bt6b zJi6BUuXmsGVZZOUvwz#)`oLOC?t9*Ij&Y4^jB$=B$=;Dtf2I2?0axSl{7Gw_qJlw~$Jcw|(P?I}HC1tQrm?)T zeNQZV`Dyt4$bDaVUd746@xwvCaQ$jx+w~3cMhsHYfbVR6q zEveyg;jTG45ncOS!m}n)vbK@n)UdX`D#B)kdO7@2?|VIU?$3{$`1rhkequz!Ao%^0 zCvKx8#_yl>(K&nm{Lg5#|Ns9r&wB*+($b%ISH}AK`komW93LNJ5m5D~-X}UI$cKT2 zPa*p1g{i5js;a87@f-pJn_aiIjU~0}QY0k}4S`WLj;fKdk&)0FZfrto93x?|w61H? 
z-lk7t&;(gVxqD2y6LqR=jb6McjEmFO(4cU@5)u+}*;{41c{4jZTZkY&wzs$U=g*(7 zU!yPK!YgF3>YR6|c#OOJNLiTQ5Ba%WGg9S8Dh$0*oAbRyk5oXOFU?axi>LiSu|UG4 zq>w0D^>S^}#>VF0;K0Gb;rIi3CG8B_=H@0bG4bK<$`ull5|1-?U4e4!4ous$j)1^G zOLn7{)>i2lNx!Tv_me}uDDHDFF2znsCJEVj=kK1L9CgPD6y)UiHPE5GIJAdn8XIQ> z2P?mN^~%ibX2G>F$qt5Pd?R#v-YD*NVdNVp)Wk6E-EILo0Ib@A%-vl zHxEaMl$pfzqUxOIISkGy47ARoT!fXW>H5x2v)yI=yTv6Xb#-+dPbwR8zg&i=I}GM& zJ|i_Eq!GT{f%%Amg{5<9imu@%X5;N&&ymlbJ;}X9335f9gjla;ra9D$0sLSo13q`WMgz-^1qId?d8_~ za{`6uNl8f)6B9M)sL5b%W+ax`-uw?q4IcgpaP5Hurn5zc2V_~8*k ze_l;RmAtbu7A3xZh=G>XIEQukYox^Oa63CE=VyTqTArernOTGT3ASFQ>jyyXg5 z5~Es%LVHKY&l@;sDu^!x+Z~ZC(?5Ui=aQj~U8JQ=OiJp8zsvF{&yyQ5K_OrjFJ}~% z46Ai(>+HnC!<+u=*X&=scp)Wq&SL;w|H@##&J|M9=0C4sS|0r1#f=!u%FkaN$Q^3- zCwC%6M~f)@fDrdMbzPgR155h-G8Iit*{~Ytom&OX8DM~4C1WJxdOQitZP4_w%K3Uk z9Dh;p7~^}?q3g@{Z)1;DSo?4z5-7P0-lT9>SdL*Cp}T~^pO~0-yh~`vH%Z`B#paLk_%2cQ5iwG1U>gd*A!!?c0q@hr_i_b}}+;-#>Z*;0Do( zb=D~D0{Z`#U}xL=7W+keuhRy z8z~lPmSg2oQc|u<_rQ%}xQ9Myx3==V#d5IEtzD}>(si08?VZOU5-oJzc`hF6ADWaz zM@7}~*WU9^4DGH?bm|r4WAgOSxvnEBO~C3|9_Jr#C7mf8Efz4WcpU7GStmKo#k`z! 
znjbMq`cP0XpK7EMHD2%LB76m(*DLRuLuoBp9Jj?VC;4MWo*s$gb+1aFTZS7yo2~11 z$X3?ArTV~QoVLCQ$hHSdobJ<|?cWzy`UHp7zI=ST*I*L7r<5Q(ySywvk55QA7hX;A z_U&7hjO$&JL^Q$?mSOH$lMsg7kH)5YPk(&&5m0nzk(&RSuvk8lEbWu8Q)64C%5OC; zGyS<+;;c3$If=w=&zLH%J4LnI5j zQ4WHq=Q=Mfed>6=j)>KJKy-iR6EMG1*n=sXRvPv8UX7a;SD7NM^!2Z<*lgTC<&9q$ zDxl-IgXT#Lo@Hfa#iX1#Q}Folrx1kr#J=F0l=`DpkwvAsxNLbr`vsLae#`okU2QYg z`*Ly}XD3r<5St<+A{r(AL(c!2s1f+>u-^E*TrXFj_)+nihS<+HA~4LDMocrt&S7B5 zJWHRMn}hfZAxG||`0nttZn3?1`(Qj+`p4gIlqd4@^Ysl`WtbW-w;ZFtAH+G~?c*c= zOlaZJ%?WDJSDb2P_L;d-3F_U6VoulopvUTHS#GQ>j#9uS48@R>mHzB-`)+@Z`u&B} z;NrV^;%CQpOB8Lg@+2}sFTSH4=`lFaa`s$j5F)1iz1n;D+Ys5b$}J8CwXNT|$g~iN zZTFZL)V1*IIWMLktu^S?b!(SfXyRe?-F|X`S&aaG0hY$RWbJVjS2@4%Ep=tA;%u{% zW6{pik}Z3CsNu9M_Ci|IyLazsgzfid!j<<0$)y8X&!O2`m?O_iedl;Y{<#w2{^Koq z+jajqphFR_+Cr{b@bK}Ky{(Q(R~^7(Y2V zIep9cnZxl*mVe_{NjDie`PR35pA^)R^pTeSHW()m;VYgGEcIurS(GRziATKZe9vZ4 zQeAzBz@}XouPXVeO{$L+x_pGe+*gTM9WI)F7A(eSbvn!8VK!AS)*fK_J|d#jd}#HH zz|>_Dil2aM44eFxqc7}EbjX<1F41Eh!cs2ZIGJi=DX882?nUI+fZo;JoryW+x)m5L z$q^G1!+VT}gM+luK_N6A-oa&d?|&~)pZ?cYdpOft-QJ|gT5U=0n^roHc)}!+msWk# zul`<|FCsrDC&h+FlcZdhF`Ak)a5k+YO;OktfKoBYqGRq$oNVLEaU1OM5}b%^-%A*E z&squnQhrd68TM_+K2tRcn;b2-luxmA&T@axp*NYb7=Ll-&V%=8XhNJl<;}i`$$B@D zyKcuDtrk2tZoK8!d-db(-i_L$)fzVZJV&PUSh3PY5}v4I<2R}Tj%&fg!0?7&&t-)# zn_JK0#9?#ehmq{;PY~%+na6fLXb}L9d zhNEtqcm*FH-@-XdYFN(kSeIP-UkhSvkbkhTv0~Z`Id~=3m(|xaIs_glI-!H-n zy$krfJ_a6WMpRrq@P59#Y)>jfEKd)I8tTtZk8*Q!)6F3M(YkGat68nit*w1howU)X zEKz1Y6hz3IuT75Gq22^e>9EuvXI&3))d`tFU*K6Wy(VB*$&+IEBeGg4z1AhbJy?nL(JO+!!ii_y2o$4|Lu|gdV-`E za|bcnRctIO!E1ge=OS=p&!e)AyPD`|Gmx}1-9h=KR0)t|f)r+XG*4;@)9-JXcd=C% zP01jql)7w$ixz@`K@fkj`4p`K^9ygrdz0;0p`>^!}$j5lt`$`={7 z6Ps~Wj2pKG(cB1(h>F_c!?Ex}D#xf|j`JWU{@Gl7AZyLzlPLOj3@kW={7LOAt}V#) z-gSup8wihM_(4D??)rlM+#Ca*qXw^5Yasd zQchBpSpVxw8e^#?+~a>;I!LRn72Q^|3LP(k2)+WRKu)e7)Tqjwd*{Vk|6Uhz8IAN2V`WL$z5Ju4?8 zqfcwII$qT!Vr*ifMtGBJgoA&&s=hvP=thK@Cm4ir4x%{ApCrR;hA#9o&Ef`bFTu6`P&!%@Iu3WqU$QgA^NSps&o7dR` zEv;$@tASSpxIILTIxZKpb}?v9Tq{~E|LjA=^z1STiFBgH;r8NoIvzPYLruBkidOM` 
zfZ}KFt3YgAbpx+6?_i@xm|@(ybt|g~N*;x?yA4-|OV-W$GMXn&IhqQee$JY3*U3?^GR82x`S!DHJIrHpNh9l%T9hl)ZN|vSxdmvU11g$^S%s4`24c7O?tqm(!cpz z0!cCB%%jI2v?3D|#f>^50BOSz3y>76={@D1De)n=Le^VpItp39!pw{dl*Wcr#ck(Z z#ba@$vL!Ae6CjFkL-e=?(K8jxzB$55~QH6;iRk$OKk-eHAO%!cM3|U-ype|fpFe+I+_bl| zo94Lv*?wg-vUVTx(G2CJ#F&^W^C9+spQ~efe!kUIy<7hcSfI%D>jvxJKPrkm@hVJw z^=r83*TLZ7&N6H@UaAGopZ(Nl@6zZ)3IN*^7Z+z>Xi7V~R8>>sT$KQmK))aT*OuN18|_XIWam~hD`IMHw0>Byzz+u$KC{2OiRkk-%E&~mETdRe)^jN zXvcsNjE_G_P!e!Ie#O_F4xGrFh)!u&n4bO_Y~j)FN^W-c59JFAcT|>$Fo)C_3nhM4 zG(d8;xVQ+Fl+)AGM+qCWD`-?v&=UD;2UgM#d4Y96{&cy0O&Eq_c(D-=j z%a_xPRm&ygdRx0V0X*Vq@9vOr`jG|Jw9g^WmDb}hV>;SB}7p6@<3czNI~*6N@HA-N`jQL z1%Y_f2_W^TK$nJ&&Uz-wqrU>Y0oG7tayRIRzTxoj5CfYK>I8ly!hAo&zJQ0&&QP=^ z7#bQDVxPUP>R=e-fj^{zD-O_nE5IR#Mim(ZR#0& zE-aV|hghk(V!n8YdE*ZuyyE3tyt6zM?zO$UJ9o$y3JnV=xkF#$739%DM=F`MwXOiR zcx0^6yk^8ZBaCe6t$?T;1FAAII0y1eaV8K>-@ktk4r4pp0d{C*_U3RuY>XnQUR6u$ zRt768tD#Sy(}Kk>qp$FsRO@%XzDR3oD2K$JdD-=Zc%In$$7-BNU@yp@k0#P*N09}? zy{_2USSZ(pCov8j93J*EC#V(6B#FB%EYb5Qrs}OST(^%S?v{K^LPzj~SW5N}i*^ZdFwkS^ZE2<+Xwahvi4MU2xD#N$f7p&RE#ktxZi`v3&jk ztOIB}tur>W907$AWo2a%;ZWlXy_S)ek&&S)j)8#zIcO`nG#xJ*6ndXP{ZvW{k&NAC zDA=?OjSE*^G&vM^P^mTv6O>H>ivfDSGEq~(EJIkWK0Q4R`>3s{c~dMtUyqAmq9Z`V z^1CpO+@H+rl&~6e&eK7&o}*VO^xJAweil+&!e!0nOSCSSklkO z2Ro32n1p1+e85yC`FvC!ak805PfyPa2uw*Hr%sc#PT`D-RN}6JdZQU&cmqR20{r}s zs`1e^2^Pl|ffpK0;;Vn}KrNPYBe4qN1Xahq`}eOLKD2^-ZDI0mJ2Tf6Z8=I|`>g5H zDSQpQzjwC68A-`w+(zC51)lpdGBWq>VNsg<=z3RGRz?S0jcjgf!Y}qVWNByT0J8y~ zA5S48^Z|@lL4UsY>({TvzRX}so($`$de^|Vkj!K%yYX+AP027xJ7$6(5Cqyrap)-- zS3{{K{t>0~-ZOFHixGy9gG^UrRTmfALbAQ-6AE1kD9DeLnuVuIL?=)3^6?F{w)(Ij z+kmG)DVm#y2e5ZpQaNhiDhe1BVtfc`w4e_|a4kzKK!ImATF8#^B`sRO)#!W$9^|>R zuL&Z3y}hr!yas$J!;^KW|9on`DYE_6jT<+17QRW3zJ2B5qUNEdq0yG<$P4x6BKt+v zgGNcy_!!BfOy#6{o9TO2#%bZ#Q15$S)I0p{#KIqo;@r7&u3~b?TVJ}n=Yz3%_$K^vd64b;_3IoQ-N{S>{?sCltnBPTM>@!w8W#hDpHMANNDW|wR6a|cQS$;f zS+CYW=Wx+8ox(5#CC($&4B3c?2#SLfFC0ec8rIigs5)8vlw^2`W~phY;8QPgSSqF7 zn_+~Y!_wu*{Ko~l1{^Y8Z)~PpETMH4=`oe@`^nlnngl>ZXNa&Qz4dPx#bUbH`B8CF 
zj`CglbJR1xi(*foKFzbfY64xGnc`w@@zBqz4<2wqK2o%Gw9!hJsTlW2RrOP*ofss= zd>s+_hj};zUGe(pVugIK`Ct=z1u~gmxA)|~Lo%T9S`+@B+1Xhjjd{=3cXuZt?MeC+ zdM#DducikB-O2E1l!QYYKmkA{xw{~8Sf2!tRj1ew;DLFq2Q%bps?G~al-VHx<|zUjblmgkos0}uT>F)i9AKwe206>ta%X$4;(C+Um>=_C>*T3D%5%mRiY z#cX_+ge_yZ2U3UG{kVgCeZL2D;cboHH8^WHBv%Ft77VE z=_D2^l5D>n=bX{=90%S>-q;)#sjY$Li(L;hoT=+{cq2X3c-D}AVC!27BEv?)VTmrV4ySUbcB7Vdu2khOhq_>?LW(5D-BBJ4J2EVn-~1g_oN*# zacGM;FG z?JL+=S{|H7s|CJfa+xJY=gK@$q#r9H`|Y4jM8>z^#=!>ED5Lkglu#h$ZE1 z@1lj~e@rpifjZr+ROPX7Yc{*Lem?iW^uh`{hNZ{<(Z>}Oeq|vP*F5us@Xe_L3e>h3 zMD{B7tPloi^NBmeiOLxj>ilyYW|4zWpY|SOq-NxWuRX>GJAFU&&~|X+-PWGnHMYHY z(e%*o%zxQEt2OzBA3U3NtzXmQ6gqNlIpxLbJJx7eAv>9rb$_f$^z(RZJ%o{&{I{fvof`7(N4IfKID@ju6= zH7sm^d^#o_EbcTUw$~272M##vuyg3*K^n@zClc&!PDVen9L}3#B?N^RvZwN@mU` zx*S$mmo`kZ2P3m>@CdP#x}3jQCWo!+HD2j=n_|Jlq5sAgy3qE@GCcDk; z`1UH}a<@Xa5g|ianx#I+rKqRX{Nt2;VS|@GBYT8CXq9CY5#X1GhD2y8^`4abG>CSj z{pTdj$g9IuQhRjnj!W}bPrDI`?~I#oHnn(-tQfpQoNks4jaFD{Q(s|Oy+)*XgKBqY zXtU`a^d-gHhBUbA9_)c%Rg;dY;Xpn>fbACXnRZN{!Q#@`=!ON zOztO}Cq!b5$`+#06YTFP78pM9G3A!V4*Spe1kP=-k(=_ z9(&O%uKNxXPtTL+s*K$-3D<8KmNg4iE;B_fUMm;=2{U-wOl*7Ocm-!goh1cdzX}mx zSc>9cd|KSx#_ir;V5ykbJNlj=;WkfKr=F<5B`Eh7#Nmy-#p8XFDbgix?ams^P1|Kf zE{|U-+H|^4-)=G9aK;wQ{m-$Hd68UMhz_W;c3s~Q+Y|5kksnk)P2NVrj5S9356+X7 zD?Ki-u^gMYBxHi^=8fQ|ojlsct`^m<7~i_Dx0^tzoVI zbx~zPsDQ+=ckFauLrq*Yw*a1Th5tkR?YW*FCOX$bx3Ti<#6(!l|N2@c@5hq|5q>f8 z4o_%}7>4uzi8j174%GzI${*G{#yn2f{a0C!r=%009jZ+C=G>3X?R&2Qk47Afm_xj? 
z2|dm>_BNU6KmG$rlk95mY2B783Q)eSx0*UzbPMHHRR~>C`vpLxagQOL&XePrqUV1c z>hF;JK7##)!65k=RdoErYZL8rVM@C88i|uK|2`PUWqZ2I2R*6Ea?&*-!lf1ie&V%z zBMq~Qa0+HrCCh%FwtvO0IiI(Ye{rXK;xnyNN8c6PAl1JjW|(90#ftCWr5l!d%O_~O zZqK(p8^HbzmI+IXBl(JPnRM>2I-|KZH6qVpQK2Dm@c)YY@^V9C96@sZCc}m{-Cbo4 zdlK8jCq$1yho7DGiFCFsGtS;@#WM--g9A95O*6#knV6EnHdHJ6} z0`QY$zqF}s+x>APceaS}JgkBkqmGf;cYMn(Sj>@ASc^7S93?Tmj6nPe?FO=o83p<7 z-+z34`LcG3Nz|*l8n=3)p!US%R;r7B@&5}2f1u-+ZmjDRTn$?tqdT#5UE1Bh+kCbv zkpaNZQFl;eM|A!?in7R$&eFNvQ%M!wI*uD(zUHxx6VAl{$E&x^b*fcP&o;h_e6By$ zqEbk}B|t^Bf4u2!x_|GAGlVd$@r=(pG!x!RVN7cH{{}q^8Z%FD=wANQ`DLqms49>* zAFn2z35nI*OYD91BH}Ny^}bXdO#EV;w46`PTI8SCmk?|Rt!uORFVE^VoYjfe{qbao zsN35E10?fnNs($C!K(Nux>;0fZ&o$mqA)fiWZ>MBMruq!NcQh=Iw@=*^TaWruBWyC zJG0`m$Z9z#e^ECG3tA1#G~d|N;QNp9_OfsI`z4JptgB&;o_lMHPGl+YKUoP5EQ53)AtDQbpaL40*h+iCKQ` zZ&3bk@Z*51d_Ae6VzQ#UMB8-+z-Lyl{`S%T1~;dWdX6kZYEYQgockZJ?R%aGN6r7y z26FbWL)iasrxFzButtskhJpXqsW(d2uyXeNxAP=@ObXMe=5nlRXj(v9B zk2UXv?+V?5OS7oP9#+oV-oRp`b8$3b)VRT*Qp3ObPdY?3D>&dV%KbYCn%+8>mk$WO;LSD#T&IxwT`y=f&MYkCXNE7|9QADBO_?DeHwHhN=ESp9`(gu>F{D(O&q*60iBzP z@$q3k9{onokdP4QhHkwJFi1t`=g56rpd=&ny=?e0eMf1n11MuOQhTt)mG}t1@f!C( z1YxtfdOuRiOM}$u)vNWJU7xPyn16}o^Xmu*3gS;w*opqeuIlRQs>dMSeg0b=Xi%T&}J1MeVF`QR#tX#tfC<6VNMdD{>EZ_l#tyV^DUmb zOQ$j-VUdx|T0U6NESq)ZeIYDkEu0WUeq9CFdZ&RlN{AK2^{i!mh^6qBVN^5A{R>s$ z-@ofu3$0Gp#YwbeC#$YaH6+2i%dMxZ)Qno7SMX>b%3}f+!;SZAh@qPeKt+IqV|uGw z8+x$o*$J_rk7T|Te(y5>!wehfEHeh33>N5h-^Va{vt15Faa~_YD2uTBTM}$T=?dDa zI*%W>>J>LS#|zqgf~HLEtL>k{!Rgta3{`Ljn_3orA0jo?)abSDZC(fIG+DmlobSy1 zd}~Wf@5PjCiZgjATFXOe?O8l|AambEP*Gm?`I)A%g!q}ac&Zd*^F6=Uih-lMyM3lg} zbdRyn^?71%1wFx5)lyNp$RVYF0TK!lwG7wO4i@MXEgUyuGZ0Yu|! 
z&B7{#I0(9+IRYw_eLQjvQa%zP3diNRm2Q|%f%$_1othTVobNs`c5^#{#=FbzubYEs zpbQp-3T;>zv3T}$s{47p$8mbOd`4lc0#@V9Wnaadx1)JXf=)7nJdNM*-jh3;jnaFC zi>_AKB-zkYwG~bz-ZsvI2O_vO)$pSCzcbai^Kn;94LTxjf}Uf#Ku6!QUtxwq*xuB^ zfzKnYe7{>?mm4oN}79 zsvCg}Yvf)>URD0xDyH#+cSr0xF41rjngv9(_!C9@-oSERMM`a95xrk+-`r!u z5c{=UFbgRO#e(riTC775PeA+krNLANO@Q*u!br(HY{~w!__v_-QdaH;KiN3s2}p>n z0s*;XyJe7+QzD=F`@T|nXedIfql#N@akB0R=3ov!$5Zh{$npJ^wUC#Svwit8svLd9w3>+y z#%3hYD)H0Udnq_X^o1Ys_Sk_OW1tx7t`Z!Oy`o%AgmM8|m_0Pqd*&iY!O&CyfaK(4 z==6)&&fs_yel9lsmQq+)2%SBc{Ya^baDsLzv`%fxy}v;lBrA*g`PHNO4t$V800c>v zYq+{4T*|6dY6!XB3g)&_&^0gGx7DLd&R!#&gshdg8}ijXPKG zhQhLIzk4qJ8ZDC-R^S_xynlQ&WvNx>oNPCrlEv(lXW)sjBKu{6d^bEi46TCA$2^7n z<97)8cdGL8EZW0v7@E(7W4~z`+6{sY1tAQf0A&TVSid(SZjs~g+)+?a*t%7ckzov8 zWN6bg`)a4R!Q*UweH|KbA<8M&)#5gpiuu*+X3$~sAnN^lGE!2I^0QPlUk8`HTBGbF z;(jbhBY$7E`JUC7jY(!S!Yx8}71hD@KE;q65-%HtVBKHy;`??;iaNV2AwI~KgsQ)N zTd9W4=+#c4#Htn+8aWA9)+#mK_=d+p9q1pGd>@$}ri! z$Q7p(akPLI9kgz|<1fZoFWI*%tU)i{8Htsg{0W+KG7oC|74E2YK=ON$?;@WRC@2of zL!}7KK*|8!>>z*tIG7SEqvCrhC56(^Er{L-R0lO%d7!@o&>UXdIPJPQRraGV;`ex zo=##PdF!TZqNLI9GkuPHS6y8V2$By<+=e_v*;5jsfV@KmN1y6ppea|J%$@*W0)zlr zhkRmJ23}w6Ij76TRuCd<+q!PYbwJ}yx5QWx##J zkN)`4I6H~pC^N|**y4KL&%8m!J0KZ+Fx1c{ws*WP?Gz#VJ5*F47{N=kC3@QW#F@OE zH=vu_Nq1rCZNTW}Y-c+uWh~nlByBAIOPFm;6myBK2nPXDAq@P%ybuQmUQa!Dc0cc2 zjDYnd0IQdm*Q~^)5@FugAXxGD^}T9xuWk)g7^E~j*o%hal{P>-UEhQouo>xA*`~t) z-BNtMAh($7=W$dc*nR{C|EiP-Ft(5)S0<*i^uDKKDa=FjesXdWq>cU0dQ~)QX%)G9 zT7OQjt@I%9x3#-Pw~duIlXJw@UAVuR^&sBb8{x}1 zyXF#V827UB@=#+GkNg7y)a^0whCpQENg68L!Pagd4aT zjIK({L^{p1UCMq1gH4Xd`x_s$D?s`p2qtj#s^lS|WgSXKF*PN@#TA-QC_FZh`#7B` z0^JCGI^>qjm9!90f-2Urr312~B;b5#Mqb>X=aq4nc0@%*VUq3EMAb#CQG4DL%q_#K zy~kLwHb0wii@yPKWqkFv(A+weB#1s&^@B#lF+>OHXegmfR$xL82XqxgGeSa6FtP-5 z48q`puRfvL)5~fRe+H&dgLnQd>>=Fkc`y!mCr5$mVi-4&ZWq84P(URpk<-wuEH96J z&=$~L=5ABP#=-&>F`tM?9Q9VZeAKHAN*KPSPCKWHzKBql8q_pzAZN%laHT~iBzHv{ zCW69HGRo1)ioL?)8h+_JQ#ss>4oSa9Fno5H9Le$+gflf&Re_W|jACL5hiULNFz}); zrRL+s+6)>vkX1m;b#ryyu5Y|cm>uK2Z>YF{F|yp$bVoRt()xBeI}Ddh`*7=EKi7M1 
z^xT|yYrM)1)P`m+7G`Jfa{Gd;@xle}`WJzhg`aidSZX4%ztb+Sba-3aBj;!@_QBogK_4fl_On^(>ab2;_b= z9riS)SBIAQ8O8W01z%hxpfN*}{d&|EN>9%IL!`oEuhA&2n9*3R_|XKdskY0m zY*+bWsif+Lo)jAb+4Y@#_mHzEBZcoqw9YdszMe)>&~G{$8O=f#R8m|FVnbRGOuGIM z73zNrQ-~o&N^){3=K3(F16cO(;X_cz%r)SnQ35f~`tU&vG)@_cai;qGFyLBT3tGG- zx?4PL+Hk}$O2(m4cvV88p|o@xnAdUCx$mPgmVJ^9-%<_c_~nrWWF^VHLZoD5&Q4C0 z&W0EmmbbK2UQ=?T)c&Q-Z}G_6JW!z={<(ScCZHVGCK)b0u&Cg8l{VC!s{23ql2$;^|YdgyVwINO1aEQ}rIGV{CSSASqVl z7zg1MpebY=x(Z~Lz@KLZ+KL7S2bU6vu=CL4o=64Cm(^L0@#i?%7TL$8miq<=XF3-3 z(D{R!9P|A7=BB2!jEt}|##F%Tr^j*VXt}B9$I^Jd34NPj6cUP=8oNwks`l+Qu|gQ) znm!U?XA6V5D5=?KnSU)GkM5;Qmw*C1ve7~5@^3VJ&Et>ZJbVwMa);G7qi9C*?wDrevl1arPO4Oy%`o0220nGyWVKQWp#AgIM{#fv0N)VY+qB z1Xc{Zy!Hz8vK6vQO1iJ&?)sB)rOM9C&aO0kfru4D$M*@QiubuYkz80F8frPYxy>9m zn28MBnXL`Po4c6!PO&{9QMz~wncdLBgCQUwaF_yRp{%T|v=&^U+(#ce0P^vN`uo4P zx}uq4vJEpPJfFWKa1qiwkmiWr_AY(G+4H>THL-d}qaakF7o_Zm4V$C&ju$hN+LD@E zTEaxOx3=i}GUkjZ8!h012gyiEzGD54T?H}GdT_J~6@?wX zzH{ZNGO0e>x^xKDeNr}NsX%<_Q(!f^LKL~jAEdbgVPuY9ygMT|v}z@-#fjeV-2DKG zkF-MC;zDCml)R=FNj>)0fAlh~+Rb%U?O?k6xSdl0DUuou2q{(Feim_|ef1%;2v>S* zm-1A|g^qyksVC#?<~9R|`5C8NaYo4GZLk&Esh{=1ZL|H)hQ4}oDzr_>wcE$Oh3O(CZ}d9h3m(yg7N2`4-0?P65EA7 z=+y9Ez52a3U4Go1Uhs>5_KG+7?-Q(BuqALah^Hm$X_O91plXF^U67p}x~iU*m!}|N zx5dm9K_~7{C5y_E4ku4vduh;2VdvjZX(_G zT-C0~aWY_pqq@HTJpKhVQFRG$aWw&R`&cpJkmqkKL?W{P%OeFdVREdcsR_tws@Z#l zB3Q+CN25Bb$&QAmubpQUXFMP8e)@84Z~I;3{cK#@`G~kUke0jr_;g;pwUJ*V>kXy( z^ZJz^QwQ1Pw9eb@k0(z{^k!~5Q8izULQpnJilFiF@c}ymBKR2wQV#^-xsqENrg0iF z6YHI85~uZG{%1?%*MlFD2+rGYFFYV33Wn(r@tQ_sjg&Y2J0^*4iwTcDXnCwhSxxN` zo_^_UVB*@QAZAlG6BYDAz#xp(Q7wccPsfw-nJ9xnB2|FB8t0z)a31nU$!>AU$w6_^ahK)M@gIp zzObaQiN6TdSlnuGNpg7mh^q%QuwGaMqc985Ja*+iVa4e@+)W1Fbr-NX4P1E(lJgbG;o_$mFF_OY zz)a+W5fYfw`e&)5XUW8ssQz;X;vq_E6A}8I7^^xf%M{QWsxBGbP(0p_p7XjWi4mdF; zYb*>Xr*P{=(7Sh++?F7b4G#-r-pEXHm(%?4;e*i;Tu2g)EH8nNq z7{j}u{7dyCO}&GmqoE;FK_N_9bpWjYjK+dJ1(AO`+oF8_vFcX7g~+I(;dOzY8v{d{ z9S>ddPv@_Dl+?U9XsJ%gy)5F<8ChI?rdaX=pErTQWo_VuX}kOLfxD`7+wGH2C|?#D 
zELMbC7bPX7D?4JP$dC9Z8TB@`(DZDDxYFQE1<5#=tfqC+d%r)Q;s+uv2=NIeuw?9B5O}-9NGN&43HjhJH%r>w=yUl zD)ZfLPlPKQfVujmoUmDKb$gtK@AjNV={mpTPkQwe_b>%ywtXhW-!9)}e5jycF52UHkD1|{GzGI2866e0*pS5)sV19L8F7ET3Zms=MXlcKsUV{w}_ZIPu zec{8wv>D}=gfsq(r!|Jgcct3PtPQA6MOR%bZW3%Gh(iX9AE;u~?d_K}+)LPtw)yev z{GvptD=e~b!jMo4eGm*A!!NbDh{{^WZH-0&2t>G+@TGWMBp=U_!WV`R>l$(GH z*;mCMf^ZiD4Ar+S`biwcaX!86KPC|xA>Gc*&%qIlDrNnG8RVlCYY%NLEx9<5=DUAK zV7E1&!hd=84#uJMTsQ76{N1 z-!KpM^@UsJBz=;aH|Z{%g%KeyWw$X&HI4GE2Bj!b7E7fsRbBk)Sk8`*PPNpaqGro{Fvjt)HFeX0~jAXAT{S01_MYv=+&nk_TsYE@3G`sh*b z)H>yXj{8;(1R89muCPol;`mm{PD%-PQTeL^Js_C=`D5m$Hm`ViMmOB~1f=rG-H=PJ z0ItI(^jOY~p66zw_kd`uCa)3eEw)oGtgpZ5h>Q-qDy8xo#x~-(F%lq?ihGhj>2c;} zJJ2h39g2f65%V-DTL*$;4O~LS_5Om42^5WhLv9Aqi1-z$Mk7$A_e!5lNtfQfzC{$z z^acgLiptd%VqWfprQCvHlO3#4^=JfP;*_D|CbHjus)fHW+Z43Oild9-b?wsAZlo1m z>~>>#<&(CPH9;q92z$+moshRtj!_e%pYgX(NroC?dnD(rE7PXLNEYL)6PWBb(yPzL z<)oQ5VjU2>c`VPlMR34iM@)MppAnY2Rk|K*LZgKFk&B^W+WrXfjY3YF)6@4KwCXl@ zlV%^>dkyzYg(pfb+Y4{3Ll__BflFCzh#u(ZB*ewZ&ogMd0+5Kqgu6w)No-WkElF^!2}5nP*Xb?AS$`xdbpTbs^+T7 zJ`2|lERL43xMdX-K>h5bV#=K*&}i#M(?}PLWCe?E`tl-hczSxv=CdA|b#E{B-N?3j z9~t>Uv&36z3~DzpJa!B>R6IShv_wW(SZ~sQTD)QEk%?J{xY+LXaNV&bT%yI)5>uGF4=)Dt`0|J|E`h|l;1p$p>! 
z6xt6T-r(zj85)k-_1pgUR>Ap{#ShZ#=D$A2-~Q6_3~mQ_#vS@=eMr|uF&@!Kx!{FO z)ElE&40v3>GpHR=`Su)!qIV@gu+Db!%Xa|+Q?B8pK8l%8R1Qbo8dhpGp%wpf1K@ZI z?i2`Pyck;W=_BO=l+x{&`rk$VglYV<)tWWaV||d_SrgiSt!E^9InVd?r=Z42lO$bM zZ6r5+l3qdTa)h^gAnx(J?lEJ__g+dv8@Qh&X>0zJ3|R3$ymSND!*F};`o>-34= z4>Ot-rXX5`dmQy)-g?FticTZBk2!KIH}&5${}_pytlG+VN$VU=&&zqGbW(o!`+iI`*3DwNN;avr_RZx<-CCOa{9gD`OQt+;#!!Y31z&< zWrgJ7;aS{*dnJ_Vpv%@yJPwy87*m*T%}JW9J~u#)kSuicDdO8#^^z=r1Bv?!a=6(l zs0)1csUI;#9;H30blsUNPN*0^2nalOx}(&u7gg}}n@7ueVT=kAa3CiQgSa#l1G_eP z$5b9hjNAM$;Sx8&7vDd<#e)$;k^iT?HxK8sd)LN^QW2%WBOztTlrj~WDnsTmQ-(?+ zL^5SoX;vAN3@KzLWS&ZcF=Wh~%(Kk1_q^-7zt6Mx{vF5r@9)_Aeg1il%ExEj>t5?x z*L7a!c`g89%9Vk&^&HRi*TIak5^(s)H{t0ql6@`Wpjp{YhAl^Gm8k?3a{=odox>4; z^p9`xeAbmuQun;Qb8>U#=VwM+G-p0(kP%z$B{W(w4*BZ7ijKG1(y+PboBt0RtLB_^E-d223ubL~8U z0!Qc2Y~$Ul(`6D0>!RHste)RwDlU-EqNvK7?1*Shb{vBc*UR_u745l@ph$KVw7Dz3esQ|H!whJ<7Mfx^^y1 z#M5%CT&-AUb~WZw4krn z`ft2N7;1_ZyN~f&^A`OoS@s-Ss!h$S9pGCNQUkHfxAHv|LVCu_5eO914(0uo=QPpz z+SX%3o5?3B8fd+-V}AB@BJ_yOx7)$G>EfG^_9KkMMl$wTymS53OGy|p?QsB9o0YKX zLz7qj^}em)4{5_h9Uc}XLUGSun`>wGi|(Olcnb1%e5TK{IG^)UA2Tu!_MBk+1-x?L zgcfC|G4ws%z1Mxn!gF~n`-;{6wJP-|wq##DX3IR(&NMVNr5}C&+uh{MHo;IO0@-1^ z=*n}G-kd*aBip)A5&dGe+uH?;-9udaM$2cyly$z>h^d6$N5012yyV-z5D^l)s%_diM5n@L}Q0-R++lJRT=H zkBX=82_bCNqX3XUt#NtLsqmI8dy*f@K9}%%6jJaao4kFh@|9>+dHGOR@wyMa06|-j zJ9hCEZQb>nZTrHP)&@!**|^ONb#>hC(Jrq)w5gE)$ZmwU1}!V+e&mL;gpogFz~|VtMLs_ zC6*4I*v&|sUWkH)RcDV zZx>ykTz4C*`JBDy#@d15g61SjoXoW%pI>rZ=hL{Lc-8kH-Tax5F!#!MPX;^7%oC#B zW<51#V_#_ZAzljcyi;!4>Jq-w!3}JR-lH!r(?6GmhozdD8hkdvQ)K2iNvTlFO{NLZ zztCZ%2`V`T=Xac*J72xKKmgUQ((4_ywNkg4rs^9!01>_tmwU4B6}c5-9{`nlv(q+5 zaxo9+ED_|8dmQZS+B!OlCgtQ$(SEHC6FJ5}p-b>(sHxdsK~(E|5biiU;m*iYczyvb zD2w&Kvs)+9#1#T*2`JGlxPE=*a~Wz%ZiBZ^$wMgHou~RB1P&|=ef;c2(HVo(7e%*x zW|6%_-xJWSz(xSW{#p3keM%oJb|AEx7vc?g29Rh*k_R@1KCTyzCJRCvRON*rr>JP_ z8O8YCl^BK=*Hq+!w(t6vu;MRU6%=TuQ!}P5P70-et`nEp-E1#u9Fnzmz5lSd&4DCI zPm^DB+?sZW*R^VUmAyyijXOA<`?k{CFLh?h=xh_ecwEaNT*Km$)t3tWA?tI>*NhcJ?=skw3m)MS|Kl 
zF|m%_;57POJ&#ow8#YppKpqvUub>1)YRdI^|LxJJ8>e{p8!CKgLYDopt?h)ca29wr zPX&Db1BYrlY)|h7N4%daCVuFIeA`{E%uj$mg1;fGJ9Y{xDo>D+&F_`fKg=7Mt)?BD zkid1|KuJwaD$u{Dnx`lg)Bd|REuXFRJ z;tf&-T~Pbt9$AEdOO( zJHzYIY=ZA7j4vzY&eZm;XTDe&fj}Vo^X~;n2aAZAUio zRlI?=Wk&;S*ypV}UFmn*{{1L^t#A7E$FBzJaTobLotc^vwd<#1{&v6q`^lfk7f|`5 zXNis3&#NB0iTBV4Dth=_{lN?UTG<~*c8c)orL(fK_M$WD=Dm}C5LPsMIS4W9UM^kA zWX=zxe@|K~cj`vXP8WByWS&2Nj;Wg3p>iP-47V#cj-Y8taHy&`nJaG@HhoX}#~-;3 zWmXe7lbr>E%z^Y3v@aDu0;Bx^LH4m@!EF<@za2=?WI?M0YbpBV2Ii{_f-&Tpru#R7 zc#r>t?}b$WP^GgVsklp@6`!77_wi#7C`onUl#KOK#`%sAZ=BUar%%p=OrrIRx%8I) zaw+mZpOp#$0z-rD*3yBIJJjZpU@D=}0VWJV5IauLJBe=~Is4Cs?-SEVOG``G+rWVq zd+kFh>CQjXMWXf@3lI>KgD9gr%xd30ycu!a#`uheipJ$!F$HBclSsH72*R_}$!Q+1 zd)QEg{P{!2Z8Jdiv#eVDpFI=4P(qGk9U*ZJs_MYa*A#%VA3+-n()H+Q0KrKJG`hcL zUp3td1nY^!G;de5l8wy-@X9y%XT6L!z>3}0qIbj{1a}0&eHS#iFAa(M@&hS@0dj0F zg1i0cNRI?4$n6H!Y6@Olh@`kT#B?La?or#yT;u#j{<(?(dYXruL?tC(jw7;dL4b4H zcZBXM{UEqW2w#iK%k#sHgp+M_uE*-qpOz+Isjy8dggGqykXkga*EBHu{B`u!r6Vv9 z0^d)KCw2Go2GczL=qi{9U4{A{Jz&7)y=t%v2o4Ja+la~N&|Dw9X3@if)uyAJT@?2n zDti?bY7F08JEU3}<@NdV=fN5QDGzQKH!`-_|M--I0#h7ngc%n0sJNQ9O=?UFY&Vx7PKwEPe5+qnVi# zN66PA0g5UN@ za0BtC%G6oVvqq!dqC@v-g?B#CnR1E*bTpBUn1lG#v}zH|ffgcoEWxK2d|6Jz^um3( zfzx5nj%Nx5{EFhvspZ{L)bcLpuQnxx2o${7Ry!{-cZ;>Z!MW@BJ!wY;Ud=Z?tXJl% zjr9FUm(VlDcEr;5r)m7`_$cq~#YZHC$Z`>vvMigek@lm(uQg90WfbA|^W`y{h+XoS zX5_w+O};|FP+=1+Wm-bHsKh43DU%KgxtbbBD3|8t6A{-Q{U|m!`ZH zD-@ueGweaSfN2zy%?R)vbLoD(C*|~#kMTj$CFaK`a-8Ps0!da!Qj0+#$ibQd@VkTp zvoTlBH}IO*Gi9*p%0PGk?^9*;gYRe-YbEvv|G|SjfLb+1IB!Jrqepbov_uX~EsM6S z({Tt(dvZ#;5CpE zG81l{2I#7p~0G@Yj1gWQH{ft@7@mYmKCM{(xLBx6ZP#^T4B_j&AjPoCE0ev2C z%T`TJe$NB`bcJ_UG*65NEO8U}XrITq+z8Jg2;mR2-SXr~P$KUWcC6DFV9d=OckinR znmBSkYgu>VtB{mxsIYRljBeDW4TPC7TGDu*QAM9g1+b#V);=cO^sTUJ%i;}sAXOHd zT2@xJpPfjt(D;GbhvKTD7yZdprCth7DVm79Wv&^yx*FF`71B+EIcphQXN!6(EZ1wD>O|+$B!oBEKRDDx zQiw2x!w8!x(Z+oL5p?Qs^#I0 z%*n|?+=~+$2nA+tKld&1>%vy7s2a^~mj zHrKMfe!gMGrg*gpFv!|L+NKamPY-Oy#tMN0L4*w`PSgZG3i8`-N3fr#da|jiBgETY 
z%TSK%QaoBP3{hKPjC6f^&hX3a;I0R@6B9ASXpe3jgtLgZH@S!Jh9(h*VJ$3!sA9hK!!0>3O$()DvFE)`UJ;X@hd$J2(X|+Owzmb# zdL7>Jc}HZ;OG#!%MpPNvvtyLxjW}13^E56^`&39DJq|A}a5BNNNo;y)hPi>c-b&Mc zo<2F#lJG84(ff}{3=cd!C6M`TxQIrrM-7W=s&uH*5f*gwia}&8iq(M7j#1=y-66e zlD5@?SU*u0^JvLqW0D})Vb0V2@tL}R1wHuL9Iwgov5f~A_$$Qw;Xtn-QKxaMzdx99 z`RfZ%sFyBp6VUj&%|P&Kt04Thup7WIv1q33uL?ORuFNQh=4n0Jv|#Ed9xSQBt47en z0F}4`LPs&yrc8w9gE9OZDq(|tp9*|}A`Z&>sLADs*#7jkx9%*+oeBP!(R(UWZUMM? z!S82r2UZJwO+A)M0G@Z(lB>|DT^wae>;sV)UkqPt9&vX)hp7Wd7u1H67 zwXl@aaKjMJNA0ep3$iq|DP}*}_Ed7Lp_i+27rL5nF{!tcj*8GqOG}sO8mpW_ggEzv zbcysX2``-48{a51WbUC1sAw;6&cKMikJ>aE!ndUpvR-2AqU6!3D!Z@<>Q{VC=(z#Gudu zx2N{2kXKSl(uh%jcQ6`v19i`3)?05%Qh~j zD#3gG9TqNPH{q$gr8du|Xi;`U7`BLdh_lQ_^A4=gzJu+U-&873EFM;JsDO%PAMXV% z)`!2T@vM&FEE-XH)FFab{8xBY*ZXMMqkAufCuZtCw})G~FXqgarZx|GO~s^+k+Tf_ zTDx71q1{&@M2TlK9<=)d(^S%9lg}nLQSuf15S>KF#bfdVh2cGK7GtiR5!)j_KFcJ{ z1MlA`MW1qK#D;gZsv)Q#YDA=ePCY)>c?0m?z;3s+(_aw0mw%F56a9HKK;`SQnDV#O zI&akJs(5(TGVKVLK#iT(9$NlO4TvgXr;&vmE}YSj=W^cYKsbxf@4T5;AFEo))Sj6z zeVBak$hu4&NKP`+(s9I-MDmMoG`DRwbUK@$RX)qViuOKQ+eQ){eY+d3263G&a4Td@ z{qbgJb!mD6U;b#i*-Q5(Q@7*(|M}H02%7&`CAT;}NgFx4lMCG!J8Hiip3?|3EKdn9 zJJj)LIeZmq>Hwi;mV(@zgHwc449hQyf4X4PAZShHamZ5!d_h2{^u^gau)F=ZHj|v3 zoN?IuGr+gm>^x^1b%yU>J7v05rs!q;?e!g>CHRVmpPt#dZHXdl%TD&P9zgexcHYRs zb|%b)cXMe?diWpMx1bR+`AJBs^wXR#&$$)lwZo)&3kaq{HS3{!BnrsQMW_y)>`l(@ z@iIUQCHh@(lSmzWfG762*5r3v=~U1=iwxW<#VBV zU6kgW)*HS8@F^nz(g z0JL^F{mrx-(^1t2l1_SMg$0u&_x8YZzpOsgI&UbRlIW1rUtgVVW?18=Cv6S7Y=0z@ zc(vwi%-NRBDiPTA&3Ym89ZoN)eyxoqB6)iU8K2UJ@Df-TzJ5Zj_-9l!5)1Ezm=))O zVgaK+&EQ8Qb)=ud&e;=D9fgDW+>+vU9XW4$V$z7-PD7lZr8nnW*Bb#A--g@OrVNYi%|j zlvth79D>ijPL57Q;5(}336Dr$e~H%jn+ORq|$+rPMh@de9639m=PsI(;{2ixl3hfgh<0_Aia(%3CUCcnV6e za&aj{RB)VWw$3BG@$tX_ShVDF_DkjSixHE^wBeoG`9zdXpt-Ug8#!&{K6`eEJ-#*b zoY9A|4Y)mG7+u=Du54Y;J}f%Z<8eyHyXYgK95O^*1{ygCqFT&^Yd@a9W%cej3f_Qj zBthCKZUGMq`!TJ>Yc2yDW%zs1tP5zKDRSCI@vp&#F-1<~^iZ!B)3I%qf*-)wa^?)?bthP#+=Mc~u>Anqg8bOQk*`W&*3 zKz5W0=R{A`MBT^r5~uFY57n2N<~?HlRHiG!8h6kD(9Rj= 
zfvc}RhIN1Ns;J^#sI%xUaT|^aHnKqbx;4`>+#M@!DsbRO{^hTG=|OeTY`zKdf9_Pe zN`w)In_39bB-h~@2+}bc!Zj8nPs89hs&v;iuk8z>I=e|wqL4juJr^98+*)z_nRl^8 z<=-2fyrdgKvWwh2@Dwq3JK1hkc#G5HPN3hHh(+z6dBh|MOD_ALTGuO_!|K-0*N)*1M_5n#qy#YF5m+wD9 z&gba#4QKdY%ut(11m(CoNIm&aAkE&Cl+TB7YhBD z6b162{?n!j^(0LKxWjDg2EkK}NKII$$J%o>5BvZqJ8M92b+$axxIvg&>P3R|z)a`s zvkc0U!V%tGTHSAO!nugzuL~n8m**!^Ii^6ssT>aHH+@)?g(8rP+Y@+AItw+ESe!LW z&NrWn55Gw4Wc%=E5x<43O;v%qEzRWKrF4h_6(vHCn%w=A390n)ngz7Yv~L;Wi?_;2 zeE*Ed$-!_&BC+g?KMiks_oZo(8c)w(pXt(Vk6mqLWmtk3z+$5}>DNPV%tr;(X{oq( zdfg(~RtY_fez69jVf23u=^Ez)3vhl>sMBVkJHmQh2*<%S+Ke!!S!tXNZn@+a0R$8q zud1C#DORnxKkQX)EkM(T%$^U3d5sSq5hzu2i!R`l5haxn{?8)ou*Y^Xe1~*IBsdQk z%~FDSZRE|o7d%baOpE!zIbB+rphvDssB%Ug&r=O>43{q@rMyskL;rc#V|R9iTYpyl zjyU`(1b4Zg)&6XX(5Ui~p7B=$+?ww&!rpS|z=7A-E^XaA5VXlL)H0k+X@5h?mBj;Wa$`<9}q+*8Jl=|QGnordaU zy0#DR*&2cRXw}6ZjE_sm7Wf(X1*Wc?l*k?hKx9b5P>=T|Yx$e#|)W$_D@eGSEE{B?TziP$Q?H%V&b`e>10`fA#2L*iF9|kpGNA@9^Q*4CIliI{bPL_@#Z}B zi;pj(Cp^saY~SMwZ{B|el;UONN&xZFFMfWAlmjlWwT3t?lBnRQu{Q zq<}2ngx?3Y<2*w_J50A-@puH*c!WAucmML);*-(zj8uYL0?}u_c_0;9TEAH)mx>~1 zDA>02^JjFE=ST>+;Znm-Va*5K}5} zX>K(tc7Mg%1AH(cW;FEv{rgbQ!opu%m(RG!Kw=vhrU{%%L_oa{V^z7eoX!fZA^@u?u!&a6Kt!Ql3Ta+V^O`5=u28U`+V zaFW&B@FO6xgmS%_36c;j61KBtly@&G%>8*v_N`#O5SID0ci>+tj6DyP+&i1D&e6wT z05IU@7UCu^XFnnp?kei2Z*1HfwB;QC@0j>58P^XiB&HU&2jl4vNPac*y>?HL78AI%D(|2{Wj=}f&@L_8RLji-L!I0s7eOBh?{%<~WL=rQ&{{-$E z!V$|#Iu9+osHna;>z<}O2g{*netv0iQ>Z>n@%ycpr1D>&NC{fRqm|w3MMVhR)7(T? 
z*EJ`Wo98Y%Sy&8%cGY6IKj&YwB)S<%Cvi){vu91X+LZ5DkY#P;Nh_;SbY&3la;-Z= zHM5`I;UvNoC3-m>H&Ux`xt)rr5^_}hB>Ic<9-+g)ijn1EW`V^aa3^|hIdXbhKZK-x zN8~Eue~zqJ!5o$?lT*fND4REYSquQ%s)+IMpVNGe#R6V!-_b4{ZZ6(;LeAV6A_R0VhaFw?q~wg&sR_kt=;F(|C&92c0U)F1@!r#B@%rApw$gS>Z5{Dw{L25 zaqVQ`xF5v1cmoY5ywch0%>_8sh6p+0$Mv+grIdUm{{ZI$8+t322)FIJ1ycS(M{ee#et zyTZ+jaa9OgifT+p&P(|@y$HUKxk!a}@1>Zt#sr_MK(ie=_S&Hpm(7$f)PLv0o=yNv z0q|1M^M6`(v%}6U^`Sp#5`J&rcO>@vy}5SA$lGea8LbCL;W_LG7#Pmo2=3obPcMSD zuY3)&`-~qy>eREtmapE``*0CkC)^2F#$eTCSu`q$JVYPC7F~ldD{7g$6t{hmdP}gK z4Z#}=KWZWir9olWURBTmN}WVX9{+wn7Z@Xf(eMLv>*H|j`@xc{_vDy`jW_kj+c&>* z?~FP7K!>=P_tJX5N0C$-BD#Y@pvCWp@xJ1PTdGoVrA3d+`};mV`*3gOK&T=GK6n)$ z3{~cuM)-yz>k6Rd5s6wb^{I4{?(F^=hq9;Su+#)1Bht;3dk)+x&|H4lkqM!>6?I^tOm_-uG5-3tlcz($5`48o)ESDLrt3c`o75|q4Y zLbcv_G@DJJxxRR{Wb*g=mdPO>Of6n(ogLVO5TD~|$9eh2k+-U*2a>+O;tzLn7Z${g zrVbYaHU2!2j8}C<#r4jHTr^wpYBXIV?T?E6^*x$@hxMphNM$hJL}OwwUv`gqU358m zPFS~luMkT`eTi%aY5dG>+%$q)J-{{Il1ZV!h?!vI+IgW&A}H5$eGMvibeJfwRt%=t& zM(rrs$6@&sv>XCE+uh@m_Z%5iG@~Hm-?UmtR;%EPv25zZN}jJ>6;$5I!U5gF>5c6c z5>?lRB7cvPk?ft@pw9dSHs#Cm;guxcuae%*w|9&r@++F4E7{6XEv_SP^Q!7rlB=Y? 
zUlnlfl64F{_tpHaBE|2;HTqX1RMQQNy7B>f8Uy?!j_2P0I2_JM#K^CwOkXr@a&|2-D}R1LN@)B!`RQOih~Sh@Vcy~w9c+t(@nQX#^fkpz23U@_w#SAWeRmTp>~2N z(&6t}cl<6m^eV$a*YNU{HSNp$H`jW++s>5``TxG+f8RvC4x|`7shO2EaiYULhwp11 zqxlcY6p`XHaS2Q+_q66w8;jj&b)4_M%d3^8;+4pK z^|Oecqyod@;)VH$&U0}|DFq~40^4s=4BkF{&x48X*wSs9Jr2Y6R=sEDsqZE~C!6sd zI$`3XWbw^eYko(|mC9nfiSyr2#l_wFy;&aC?Taksm9Rz6d$@fqa`}+)3x^Z6By?}} z+TT{3O$g#tLna)%-g@oS9R0j?XXrl`HmUXA=B9#=*IK03t?J6%120epCGY$(tgvRE zX!Io|I--?^$ze9eo=AR{snXMDf#V@n1>Q^g>~eazQ4 z*Ba@Eep1qtNfoy0dOdx}Y(&)U0Y1R1rLAK`5=4JrYJeS*4our>iRyv_y{E z$Jw2cRr0aQWU?vj>ugcvjv(R9|1hj$;84j(lTYn-Y}h2Ed*LKrx$6}cEv=J(PHax9 z+S{>LgF9kIPY1B4B)j)qjp31+S6x=Uchk<`p4iMxVqc}s^LH~z{~sfiH&4+|b{Sxy z++5q)My6Wi)|csdjoCWuPQ`EqWh6h@jx%0QR+qbajNBU?$|}$#d3&yC;Wd6y@{J8M z3!gb=*K4ma{?*|SR`qseq_a!$?~tl+!Am84UK#Ie)yHaxvQ}YarF`XKP=eUpsAPwn zvrQB4zt5LVMOX5kRz0f`P!E?IpAS#hnHAo&)@R zUKLc*`$=01XI+-Ba*nc06i-N05c6bs$@O)S$z4o<19M-@9X0ihcmACFN2KzzH1~QUSe{I5q&!=2;yz_(WKX>eVWdX+v_#PRexCf%~LOXvQ|gX(u~A-h1!fHaq5# zm>?~$CnJ)H0B7dK*Iq9ln&9<*$aP3C>NjHHYSlC*bvxb5MtE$hs`9aQLErdPA9ul& zHABix$Cy8-HvNw`il|o$KyC==>IVH1u*T0f$ap4ii z$dd-|-}QBguQuO(qDY(n*a?Hz`}^FaFMD$(;JF@x2{qZ}5!Ed?1xq7VW0T$?9_Boq zs{8jkR`4=v%CsEYE0my}B9^SoFrRied4bG6(RwGBcEa_sS%*!k9sLqbZ~uNbuC7szsDjbW%lWlR-(b;i{VDPMWMy=i!`-1LK9ps zFWcl!`|Yr!0q5u+i{#Hh;<_lGXZ>tX+}2h+f5Ef)X8iP%GyOMuLz>&M-Ttv$mkO^$ zcWBLbCR%p%S_!2V8D%LpD)I%FG6qOGfgE(__ozP8yjcDjs{T@`ty^^Rj;Cye0>ksg zOjl37e+MfPvYGpDlq-X|om|IS83O#Bx>eSH%{jJR zB4vw_XSZ8z`a$vQV6XWr8IC-sxw@pk6D6uk^@koS9xuoS3#xp{dEoYVH49>_q0w~f zh;`D)KUc0V$*P`})t&2o7gNua-Y5ko`et0+}m|=0Ava|ZXt(coU=R=?_*If)ccn z)za!LA7AXcKRDWY>E6(Piavi5I^sBVzSn)0udtwMqm#In&0tjrPOM}9Tw(2kDUV66 ze!r{Kg^BXC)t(A!Zl6@I)0(%&x&7x{*$loVO0KYTay!&hBmRux-CEaP8H>Fq8VZYp z|DK<~DH3t3{)q03`COWL>>{#lZltAr5-sl1j=?N?YwjP!Ix7A?BHkZ+{QHOi4eY7W zwT6Ek#DR@$%}MFX*2&UeNJ>F;k|s zLQpVuoH9e5sk9}UpG^IbzpR|%Gu}0J&*svr((1bR6&5 zE^>@J@z$R`Bf8p@WHig`gJB-66W~pNVxZ3ugKQIgQApI%yNtOq0D9Lchw~r#XGyE^ zKA5P!fiWEytssbab&z7cXdpfOo7N z`>y59H>+Y4jBm;c90qGof(Qph;dasHtX8GFZDrbaiHk=*#(41o|G>-=m#ac3(s(S9 
zrgxQxd8yt?>hi~jg|W(eYG222>p_%EFg7t6@l^!HS98zX@;CC7eyD$^!uf(kZcLsZ zdZR*rB-DzXIonUUU-;_)J+*_zb@cZvcsEQ|w33K}*NBcJ=J~w>IydMkSn%#)49=JD81S&Wi<{8j|D9l2Lf!$=jCOL#YqgL=!f&`V^N79|yE>S%!kR%_ z-Us!O>brqa3r2``SB}(MWr$VL3A0#rW)BJp3rqS#o`sSIerfwh)ZK)vt zp|y{62@PuyX#wt^?aG&$L_b?rMh3TN$ITAV2R*p>L(WveEJT`J%b0RAlO(o>rY*U- zUwC-9ukW?~D*9PQD)Msl29(1+n&eF#hwAQBcsr~lRs{y`vOY_#xTq6CV*hzpB%y3Y z4;DzXqB{lIhqrKz2Qm&oKOxA>yTUtWB}FMD8iD}++@*(7M?h%){V5sXh5KkA%a1O4 zdh>mlm@_|kh=9{9zAnmgX;QO-!hQIC1z{nFHcX#PL-WQBQ95sSJ=eD*anp7q6)pbp z<-*aUN6~uI?)oZ40w!P(=0iJ_UW+|x`wdfcy>H-N$l;L93LqzN@V~rKBueoM6DET)(c=RZWe+SRWc%=|AVc}e~Nm*&!3Q&UiQ zxT3yT&rdciM#`oYBBh%q)=qvU7t-4oJ`Lc1bjy18=MIy%bAenE_B2Spw~#H0K62MEDuDPz8?hg_(v4&oH> zg`>Q$?|xHG0rm_I0eQW<4-Rl^KKwu(v?b6bi?UV)z4LS6pP}gtMl#+FHd{xIt;x2S zU@PO63P^boNBwW)UHG`k)uZoD1=TSY^$>nVeiMFI{s8{OK-1%!AK+SM?&VE{EbaZ> z&A2Iq#A^%r?>~`m5X7I(Zy+T^aYt^FY{UPaOL-AGKBfIqgj*Q<_TTPZJO77#Of8Aj z1ru5+cHD6kyxnU*yu>6t*9t^q2u1QTsCSL8va-R4#W@SDBCaVlLHC+v5y*GQg_9qUM>c3N5*Svi~7aMMFK`*2qS z*V0(FTZ<#Vjb&5s`a+cG9ugAo^o!eSkq&V7d_Y4r<}?P?_X~8_M$WIU5YqDH_R>1U z(G)HAqglV%AMdpOcqyt7%53HdEI>-o(9*aBpi%}|9 zBDzTR*$YO~D;r)hooNbY8*WMt6)o|Z9;hi^ookyCor=ZwSOSA_T| z46bFcT~7Y~O@pxeb&iMTW!JdI>r>oPmD)KgQ!Am?6*UzVHRa{T`NHeJN~bG>Dvb+t z$F`7=oYLGvK3Hh9_EG%CsY6nV(nTv%l^Luy&PR|DfFS$s*S?)wYKO1I{-mjumw%$C zpEvW<7x#X1pRK_6KLXMxbtpr}%Tb)TbwI({4bKCrQ{haVB8WkjAkTqtY`DyKE-ML# z%Iwwx_z0xXH@JFrLLCV%-^`Xi)7j~+Q%qg<^~!jdK*_O!YlLSPwjzv0!tB>T=EtSc z3xqiRt_>E6q@7!r`@Fp8KtZxmp(uJZq9CMHg0nE~ab@cHQIp;p5x!=lZ0qoX2Fy}d ztSWQJ5di^Brm{(lB$!*Lx}NQui&tl{r>BO7zY=%Gi|5Z5KvX|*Vr8z)24rp<@Lr3` z((BfyU#=f76(xJOl4};>ENe9Una#deg=&ag= zyJl(kvq`PbXs(s{Eqbo!M-Ajp^w)$s*#_cv@0Dk~j3#}mH2a=U6zntJQX3NTr0<33 z?9c77-=bm`rh*=O%r-7QmC!eb3(#i0xvjM+%~FaUL8Y|ecy*jDk*urj4X8TyhgAE|!22|AwW$F;#Q6K&h(IQ5=7MLzIU6Mdi^ zSYu1oEOK*}4}nU=mj$=tCK==s6#DPc?+ISFr%Xw5L6KHd z1^nxU^HU7M_Ef2c%RlKzix=6~?=tL*wE`15OU@(HV$pLY>ZBKf#>&Ip(_Jo=ziv%c z2wC>S`rnnJnS?iJdZFjZ+U&@QpA#KUCz!IW-McgQG4K^z>6IqJ9BRFS`$<}jSkFSl zMUQS0ULozGf*zmr=BKiB?yH$SF?)|`-yW0=CTPV^jb(QYt3oTno<`Uk 
zENS-9Qf~=9)_(0HNF}qelJFE0pKsSp))x{FXR4G-!*wS-XTHhNh)sHp^s2acJ$t{}bB95q zV7mGQzqF)DAwlwPZ)M{gJ{>mj+`ZWQ;|#sIxKwkTU$I?m&(MRNPii?Ab5s~hGfPAl zf3VQc$H#hja{1~1jQ6$*kFH5LyKTd}87CwDH23_2cFl?N%bLw39Vy{a(+j_fC0A#4 z``XVRqG`o z+1~(Zq$6%$NN~DFQ;VAu>{X!lY~|nBbuzG&aFHaOctNEY&}F%@3@@j8W>?VLLyrZ2 z5`#d`Y{7AxX9}sls2ix0>udAH;U+RTHB5zP;u6Ts#edq^MXd3tfoTjOQZBWv|Dg$` z75~bZwsiw|F4EC)q_RKI_7pw;p5(z)by~^{NnstmC|~ankd`WXAn&XW5g73HZl+{i zt=vn{AlDk^Nv^(NisW-UZo9BzSL#c7ZpCTKmM1q|=S6>JCan~&Cf9zCJ9L{Q*}!NY z{m^kggKH+A9+B!xOd2=@xiH!pMf7ORC#H%r(a`_A!P#sXYOqgS{URM0=ulSk7l1VH)K66qI(~lU4-eHP- zx=-Ziz5@^R?t)>n-uowlyR;0Ff-E3 zJYVZtPhhmCjZD<3NmcvsO->_|uQz76*=Z*DDV(t)ccr;{RaaL#J~_L=9=Y%g&p8q48Y%pCy$p`1IA$WB2 z^{_}uPFD$_QgBqRQIXkX9x@Fw(#kEHiSiN9lsX)e-dxc!bO@RrDxB{mI5fe(=yJR6 zd=!#eM;rskB2O%+(P1FmiEt9bTIT7|@tT|7JE$IYnA-0e&*`j?Uwd~vT;#ZB=;#BNyI@Hb>`VoKim6D`B;ZNlqMrj3n|QF z!M4M_AStl320geob@1A@S_&GbB;B;1r)k7N8r=64qmT5 zamVGMe&*Cfao4ZNtLI9!DxNNYnvvR-B3(Dh=yV*OPufY(*JtXVD8`Am(b@=vEx58X-7ZS1 zDk_e1E65OMMwn+N_In`hbP;YWr3SDNk+*iF8++kEhCP^iqBi%t6QF*WqTgTqBY`&L zZGhtThSYcCxSmI>6LdM;g&W4<)8R~03!%kBv9;97&sz}Y{aMnZmk9|VU zZSquPe8Dexe!1SQ)O|Z+z=&~bm{8>}-IBa|`;@fIB z&66&TqqFs02&nen7GB7P3&aSV4=;s76{)C8+tqCO&5j;ls};S*f%qsf(9tuolRSCemRwXO}suEVz^&S#}zlqVkk+Yf| zz<&EpWYHO+5*V?@JICxukG}I1sAV;4w2~N(~j4W*)%*~ zdIqXcwO6+~lfk=%+xpVRPt}%hx?Dzh-IK^|o%2AMBJ5jBD1+l6sfs=~Feb69PEXWL zip+VQazU<^*{L;Dqr*25?BBOTS zJ*~A>o6;4@S*;y(*Bg?Yf9*DCGjxg8#vzJ_K#WJc-7a&L#|W3@~gqhVj^I6firpO`asQgQTh8T zKq7gEv^}{#*nfL1Lz^^P={DPPZCfxu0~Vg+TFE1cOW*-dEu{pT)5T+jK@RKC$vFW# z%j#wk)WIt(tF1J8Qu2%b?lHIR^f^N42VCmQ6|SXh?LeMwdZBn}m^;K1F|sEtEX-s{ zxod0LrdC}kW(MEs0miOQ?x}WX>y(}dTYt5f+b8t47`youU%bU)1r(cq`^!dMQv zgC@R{xDDfHxKn2fpc{ZY@rA?>p86jGg#1qidHxqX+k)p0egnzir=;Cj>m>3rD$*&Z H&foe!<-cS1 literal 0 HcmV?d00001 From 09ad0498d349347a4da1b753d7d4cda37d278658 Mon Sep 17 00:00:00 2001 From: tac0turtle Date: Wed, 28 Jan 2026 13:30:08 +0100 Subject: [PATCH 2/4] refernce and change of sequencing concept --- docs/concepts/sequencing.md | 158 +++++++------- 
docs/reference/api/abci-rpc.md | 201 +++++++++++++++++- docs/reference/api/engine-api.md | 189 +++++++++++++++- docs/reference/api/rpc-endpoints.md | 186 +++++++++++++++- docs/reference/configuration/ev-abci-flags.md | 107 +++++++++- .../configuration/ev-reth-chainspec.md | 172 +++++++++++++-- docs/reference/interfaces/da.md | 201 +++++++++++++++++- docs/reference/interfaces/executor.md | 192 ++++++++++++++++- docs/reference/interfaces/sequencer.md | 166 ++++++++++++++- 9 files changed, 1426 insertions(+), 146 deletions(-) diff --git a/docs/concepts/sequencing.md b/docs/concepts/sequencing.md index 4141dc56f7..89ccbd6910 100644 --- a/docs/concepts/sequencing.md +++ b/docs/concepts/sequencing.md @@ -1,108 +1,120 @@ # Sequencing -Sequencing determines transaction ordering. The sequencer collects transactions, orders them, and produces blocks. +Sequencing is the process of determining the order of transactions in a blockchain. In rollups, the sequencer is the entity responsible for collecting transactions from users, ordering them, and producing blocks that are eventually posted to the data availability (DA) layer. -## Sequencer Interface +Transaction ordering matters because it determines execution outcomes. Two transactions that touch the same state can produce different results depending on which executes first. The sequencer's ordering decisions directly impact users, particularly in DeFi where transaction order can mean the difference between a successful trade and a failed one. 
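The point about order-dependent outcomes can be made concrete with a toy example. This is purely illustrative, not ev-node code; the account model and the helpers `apply` and `run` are invented for this sketch. Two transfers both draw on the same balance, and which one succeeds depends entirely on sequencing.

```go
package main

import "fmt"

// tx is a toy transfer; apply executes it only if the sender can cover it.
type tx struct {
	from, to string
	amount   int
}

func apply(balances map[string]int, t tx) bool {
	if balances[t.from] < t.amount {
		return false // insufficient funds: the transaction fails
	}
	balances[t.from] -= t.amount
	balances[t.to] += t.amount
	return true
}

// run executes the transactions in the given order against a fresh state
// where alice starts with 10, and reports which ones succeeded.
func run(order []tx) []bool {
	balances := map[string]int{"alice": 10}
	results := make([]bool, len(order))
	for i, t := range order {
		results[i] = apply(balances, t)
	}
	return results
}

func main() {
	toBob := tx{"alice", "bob", 10}
	toCarol := tx{"alice", "carol", 10}
	// Same two transactions, opposite orders: exactly one succeeds either
	// way, but the ordering decides whether bob or carol gets paid.
	fmt.Println(run([]tx{toBob, toCarol})) // [true false]
	fmt.Println(run([]tx{toCarol, toBob})) // [true false]
}
```

Either ordering is internally valid; the divergence in who gets paid is why control over sequencing carries real power.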
-The [Sequencer interface](https://github.com/evstack/ev-node/blob/main/core/sequencer/sequencing.go) defines how ev-node communicates with sequencing implementations: +## The Role of the Sequencer -```go -type Sequencer interface { - // Submit transactions to the sequencer - SubmitBatchTxs(ctx context.Context, req SubmitBatchTxsRequest) (*SubmitBatchTxsResponse, error) +A sequencer performs three core functions: - // Get the next batch of ordered transactions - GetNextBatch(ctx context.Context, req GetNextBatchRequest) (*GetNextBatchResponse, error) +1. **Transaction collection** — Accepting transactions from users and holding them in a mempool +2. **Ordering** — Deciding which transactions to include and in what order +3. **Block production** — Bundling ordered transactions into blocks and publishing them - // Verify a batch from another source - VerifyBatch(ctx context.Context, req VerifyBatchRequest) (*VerifyBatchResponse, error) -} -``` - -## Sequencing Modes +In traditional L1 blockchains, these functions are distributed across validators through consensus. In rollups, sequencing can be handled differently depending on the design goals. -### Single Sequencer +## Single Sequencer -One node orders transactions and produces blocks. +The simplest approach is a single sequencer: one designated node that orders all transactions. 
``` -User → Mempool → Sequencer → Block → DA +User → Sequencer → Block → DA Layer ``` -**Characteristics:** -- Fast block times (~100ms possible) -- Simple operation -- Single point of ordering (with forced inclusion for censorship resistance) +**Advantages:** -**Configuration:** -```yaml -node: - aggregator: true - block-time: 100ms -``` +- **Low latency** — No consensus required means block times can be very fast (sub-second) +- **Simple operation** — One node, one source of truth for ordering +- **Predictable performance** — No coordination overhead + +**Disadvantages:** + +- **Centralization** — Single point of control over transaction ordering +- **Censorship risk** — The sequencer can refuse to include specific transactions +- **Liveness dependency** — If the sequencer goes down, the chain halts +- **MEV extraction** — The sequencer has full visibility and can reorder for profit -See [Single Sequencer / Forced Inclusion](/guides/advanced/forced-inclusion) for details. +Most production rollups today use single sequencers because the performance benefits are significant and the trust assumptions are often acceptable for their use cases. -### Based Sequencer +## Based Sequencing -Transaction ordering is determined by the DA layer. Every full node derives blocks independently. +Based sequencing (also called "based rollups") delegates transaction ordering to the underlying DA layer. Instead of a dedicated sequencer, users submit transactions directly to the DA layer, and all rollup nodes independently derive the same ordering from DA blocks. 
``` -User → DA Layer → All Nodes Derive Same Blocks +User → DA Layer → All Nodes Derive Same Order ``` -**Characteristics:** -- No single sequencer -- Ordering from DA layer (slower blocks) -- Maximum censorship resistance +**Advantages:** -**Configuration:** -```yaml -node: - aggregator: true - based-sequencer: true -``` +- **Decentralization** — No privileged sequencer role +- **Censorship resistance** — Inherits the censorship resistance of the DA layer +- **Liveness** — Chain stays live as long as the DA layer is live +- **Shared security** — Ordering is secured by the DA layer's consensus -See [Based Sequencing](/guides/advanced/based-sequencing) for details. +**Disadvantages:** -## Choosing a Sequencing Mode +- **Higher latency** — Block times are bounded by DA layer block times (e.g., ~12s for Ethereum) +- **MEV leakage** — MEV flows to DA layer validators rather than the rollup +- **Complexity** — Requires deterministic derivation rules that all nodes must follow -| Factor | Single Sequencer | Based Sequencer | -|--------|-----------------|-----------------| -| Block time | ~100ms | ~12s (DA block time) | -| Censorship resistance | Forced inclusion | Native | -| Complexity | Lower | Higher | -| MEV | Sequencer controls | DA layer controls | +Based sequencing is compelling for applications that prioritize decentralization over speed. -## Forced Inclusion +## Hybrid Approaches -Single sequencer mode includes forced inclusion for censorship resistance: +### Forced Inclusion -1. Users can submit transactions directly to DA -2. Sequencer must include these within a grace period -3. Failure to include marks sequencer as malicious -4. Chain can transition to based mode +Forced inclusion is a mechanism that combines the performance of single sequencing with censorship resistance guarantees. It works as follows: -This provides a safety mechanism while maintaining fast block times. +1. Users normally submit transactions to the sequencer for fast inclusion +2. 
If censored, users can submit transactions directly to the DA layer +3. The sequencer must include DA-submitted transactions within a defined time window +4. Failure to include triggers penalties or allows the chain to transition to based mode -## Transaction Flow +This gives users an escape hatch while maintaining the benefits of centralized sequencing for the common case. -```mermaid -sequenceDiagram - participant User - participant Mempool - participant Sequencer - participant DA +### Shared Sequencing - User->>Mempool: Submit tx - Sequencer->>Mempool: GetTxs() - Mempool->>Sequencer: Pending txs - Sequencer->>Sequencer: Order & Execute - Sequencer->>DA: Submit block -``` +Multiple rollups can share a sequencer or sequencer network. This enables: + +- **Atomic cross-rollup transactions** — Transactions that span multiple rollups can be ordered atomically +- **Shared MEV** — Revenue from cross-rollup MEV can be distributed +- **Reduced costs** — Infrastructure costs are amortized across chains + +Shared sequencing is an active area of research and development. + +## MEV Considerations + +Maximal Extractable Value (MEV) is the profit a sequencer can extract by reordering, inserting, or censoring transactions. Common MEV strategies include: + +- **Frontrunning** — Inserting a transaction before a target transaction +- **Backrunning** — Inserting a transaction immediately after a target +- **Sandwich attacks** — Combining frontrunning and backrunning around a target + +The sequencing design determines who captures MEV: + +| Design | MEV Captured By | +|-------------------|--------------------------| +| Single sequencer | Sequencer operator | +| Based sequencing | DA layer validators | +| Shared sequencing | Shared sequencer network | + +Some rollups implement MEV mitigation through encrypted mempools, fair ordering protocols, or MEV redistribution to users. 
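Returning to forced inclusion: the inclusion-window rule described above can be sketched as a simple deadline check. This is a hypothetical model (the field names, window parameter, and violation rule are illustrative, not ev-node's actual logic): a transaction seen on the DA layer at height `h` must appear in a rollup block before DA height `h + window`.

```go
package main

import "fmt"

// forcedTx models a transaction submitted directly to the DA layer.
type forcedTx struct {
	seenAtDAHeight   uint64 // DA height where the tx first appeared
	includedAtHeight uint64 // DA height of the rollup block that included it (0 = not yet)
}

// violatesWindow reports whether the sequencer has missed the
// forced-inclusion deadline as of the current DA height.
func violatesWindow(t forcedTx, currentDAHeight, window uint64) bool {
	deadline := t.seenAtDAHeight + window
	if t.includedAtHeight != 0 && t.includedAtHeight <= deadline {
		return false // included in time
	}
	return currentDAHeight > deadline // past the deadline without inclusion
}

func main() {
	// Tx seen at DA height 100 with a 10-block window: deadline is 110.
	fmt.Println(violatesWindow(forcedTx{seenAtDAHeight: 100}, 105, 10))                          // false: still within the window
	fmt.Println(violatesWindow(forcedTx{seenAtDAHeight: 100}, 111, 10))                          // true: deadline missed
	fmt.Println(violatesWindow(forcedTx{seenAtDAHeight: 100, includedAtHeight: 108}, 120, 10))   // false: included in time
}
```

A full node running such a check can flag a censoring sequencer and, depending on chain rules, trigger penalties or a fallback to based mode.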
+ +## Choosing a Sequencing Model + +| Factor | Single Sequencer | Based Sequencer | +|------------------------|---------------------------|---------------------| +| Block time | Sub-second possible | DA layer block time | +| Censorship resistance | Requires forced inclusion | Native | +| Liveness | Sequencer must be online | DA layer liveness | +| MEV control | Sequencer controlled | DA layer controlled | +| Operational complexity | Lower | Higher | + +The right choice depends on your application's priorities. High-frequency trading applications might prefer single sequencing for speed. Applications handling high-value, censorship-sensitive transactions might prefer based sequencing for its guarantees. ## Learn More -- [Single Sequencer / Forced Inclusion](/guides/advanced/forced-inclusion) -- [Based Sequencing](/guides/advanced/based-sequencing) -- [Sequencer Interface Reference](/reference/interfaces/sequencer) +- [Forced Inclusion](/guides/advanced/forced-inclusion) — Implementing censorship resistance with single sequencing +- [Based Sequencing](/guides/advanced/based-sequencing) — Running a based rollup +- [Sequencer Interface](/reference/interfaces/sequencer) — Implementation reference diff --git a/docs/reference/api/abci-rpc.md b/docs/reference/api/abci-rpc.md index 2a2aa22ad1..ffca6a18a4 100644 --- a/docs/reference/api/abci-rpc.md +++ b/docs/reference/api/abci-rpc.md @@ -1,9 +1,196 @@ # ABCI RPC Reference - +CometBFT-compatible RPC endpoints provided by ev-abci. + +## Query Methods + +### /abci_query + +Query application state. + +**Request:** + +```bash +curl 'http://localhost:26657/abci_query?path="/store/bank/key"&data=0x...' +``` + +**Response:** + +```json +{ + "jsonrpc": "2.0", + "result": { + "response": { + "code": 0, + "value": "base64encodedvalue", + "height": "1000" + } + }, + "id": 1 +} +``` + +### /block + +Get block at height. 
+ +**Request:** + +```bash +curl 'http://localhost:26657/block?height=100' +``` + +### /block_results + +Get block results (tx results, events). + +**Request:** + +```bash +curl 'http://localhost:26657/block_results?height=100' +``` + +### /commit + +Get commit (signatures) at height. + +**Request:** + +```bash +curl 'http://localhost:26657/commit?height=100' +``` + +### /validators + +Get validator set (returns sequencer in Evolve). + +**Request:** + +```bash +curl 'http://localhost:26657/validators?height=100' +``` + +### /status + +Get node status. + +**Request:** + +```bash +curl 'http://localhost:26657/status' +``` + +### /genesis + +Get genesis document. + +**Request:** + +```bash +curl 'http://localhost:26657/genesis' +``` + +### /health + +Health check. + +**Request:** + +```bash +curl 'http://localhost:26657/health' +``` + +## Transaction Methods + +### /broadcast_tx_async + +Broadcast transaction, return immediately. + +**Request:** + +```bash +curl 'http://localhost:26657/broadcast_tx_async?tx=0x...' +``` + +### /broadcast_tx_sync + +Broadcast transaction, wait for CheckTx. + +**Request:** + +```bash +curl 'http://localhost:26657/broadcast_tx_sync?tx=0x...' +``` + +### /broadcast_tx_commit + +Broadcast transaction, wait for inclusion. + +**Request:** + +```bash +curl 'http://localhost:26657/broadcast_tx_commit?tx=0x...' +``` + +### /tx + +Get transaction by hash. + +**Request:** + +```bash +curl 'http://localhost:26657/tx?hash=0x...' +``` + +### /tx_search + +Search transactions. + +**Request:** + +```bash +curl 'http://localhost:26657/tx_search?query="tx.height=100"' +``` + +## WebSocket + +### /subscribe + +Subscribe to events. 
+ +```json +{ + "jsonrpc": "2.0", + "method": "subscribe", + "params": {"query": "tm.event='NewBlock'"}, + "id": 1 +} +``` + +Event types: + +- `NewBlock` — New block committed +- `Tx` — Transaction included +- `NewBlockHeader` — New block header + +## Unsupported Methods + +These CometBFT methods are not supported in ev-abci: + +| Method | Reason | +|--------|--------| +| `/consensus_state` | No BFT consensus | +| `/dump_consensus_state` | No BFT consensus | +| `/net_info` | Different P2P model | +| `/unconfirmed_txs` | Different mempool | +| `/num_unconfirmed_txs` | Different mempool | + +## Port + +Default: `26657` + +Configure: + +```bash +--evnode.rpc.address tcp://0.0.0.0:26657 +``` diff --git a/docs/reference/api/engine-api.md b/docs/reference/api/engine-api.md index 6aab0b6c77..15854aad2d 100644 --- a/docs/reference/api/engine-api.md +++ b/docs/reference/api/engine-api.md @@ -1,10 +1,183 @@ # Engine API Reference - +Engine API methods used by ev-node to communicate with ev-reth. + +## Authentication + +All requests require JWT authentication via the `Authorization` header: + +``` +Authorization: Bearer <jwt-token> +``` + +Generate the shared secret used to sign JWTs: + +```bash +openssl rand -hex 32 > jwt.hex +``` + +## Methods + +### engine_forkchoiceUpdatedV3 + +Update fork choice and optionally build a new block. + +**Request:** + +```json +{ + "jsonrpc": "2.0", + "method": "engine_forkchoiceUpdatedV3", + "params": [ + { + "headBlockHash": "0x...", + "safeBlockHash": "0x...", + "finalizedBlockHash": "0x..." + }, + { + "timestamp": "0x...", + "prevRandao": "0x...", + "suggestedFeeRecipient": "0x...", + "withdrawals": [], + "parentBeaconBlockRoot": "0x..." + } + ], + "id": 1 +} +``` + +**Response:** + +```json +{ + "jsonrpc": "2.0", + "result": { + "payloadStatus": { + "status": "VALID", + "latestValidHash": "0x..." + }, + "payloadId": "0x..." + }, + "id": 1 +} +``` + +### engine_getPayloadV3 + +Get a built payload.
+ +**Request:** + +```json +{ + "jsonrpc": "2.0", + "method": "engine_getPayloadV3", + "params": ["0x...payloadId"], + "id": 1 +} +``` + +**Response:** + +```json +{ + "jsonrpc": "2.0", + "result": { + "executionPayload": { + "parentHash": "0x...", + "feeRecipient": "0x...", + "stateRoot": "0x...", + "receiptsRoot": "0x...", + "logsBloom": "0x...", + "prevRandao": "0x...", + "blockNumber": "0x1", + "gasLimit": "0x...", + "gasUsed": "0x...", + "timestamp": "0x...", + "extraData": "0x", + "baseFeePerGas": "0x...", + "blockHash": "0x...", + "transactions": ["0x..."], + "withdrawals": [], + "blobGasUsed": "0x0", + "excessBlobGas": "0x0" + }, + "blockValue": "0x...", + "blobsBundle": { + "commitments": [], + "proofs": [], + "blobs": [] + }, + "shouldOverrideBuilder": false + }, + "id": 1 +} +``` + +### engine_newPayloadV3 + +Validate and execute a payload. + +**Request:** + +```json +{ + "jsonrpc": "2.0", + "method": "engine_newPayloadV3", + "params": [ + { + "parentHash": "0x...", + "feeRecipient": "0x...", + "stateRoot": "0x...", + "receiptsRoot": "0x...", + "logsBloom": "0x...", + "prevRandao": "0x...", + "blockNumber": "0x1", + "gasLimit": "0x...", + "gasUsed": "0x...", + "timestamp": "0x...", + "extraData": "0x", + "baseFeePerGas": "0x...", + "blockHash": "0x...", + "transactions": ["0x..."], + "withdrawals": [], + "blobGasUsed": "0x0", + "excessBlobGas": "0x0" + }, + ["0x...expectedBlobVersionedHashes"], + "0x...parentBeaconBlockRoot" + ], + "id": 1 +} +``` + +**Response:** + +```json +{ + "jsonrpc": "2.0", + "result": { + "status": "VALID", + "latestValidHash": "0x...", + "validationError": null + }, + "id": 1 +} +``` + +## Payload Status + +| Status | Description | +|--------|-------------| +| `VALID` | Payload is valid | +| `INVALID` | Payload failed validation | +| `SYNCING` | Node is syncing, cannot validate | +| `ACCEPTED` | Payload accepted, validation pending | +| `INVALID_BLOCK_HASH` | Block hash mismatch | + +## Ports + +| Port | Purpose | 
+|------|---------| +| 8551 | Engine API (authenticated) | +| 8545 | JSON-RPC (public) | diff --git a/docs/reference/api/rpc-endpoints.md b/docs/reference/api/rpc-endpoints.md index dc360c0e88..a8e41a782e 100644 --- a/docs/reference/api/rpc-endpoints.md +++ b/docs/reference/api/rpc-endpoints.md @@ -1,10 +1,176 @@ -# RPC Endpoints - - +# RPC Endpoints Reference + +ev-node JSON-RPC endpoints. + +## Health + +### GET /health + +Check node health. + +**Response:** + +```json +{ + "status": "ok" +} +``` + +## Block Queries + +### POST /block + +Get block by height. + +**Request:** + +```json +{ + "jsonrpc": "2.0", + "method": "block", + "params": { "height": "100" }, + "id": 1 +} +``` + +**Response:** + +```json +{ + "jsonrpc": "2.0", + "result": { + "block": { + "header": { + "height": "100", + "time": "2024-01-01T00:00:00Z", + "last_header_hash": "0x...", + "data_hash": "0x...", + "app_hash": "0x...", + "proposer_address": "0x..." + }, + "data": { + "txs": ["0x..."] + } + } + }, + "id": 1 +} +``` + +### POST /header + +Get header by height. + +**Request:** + +```json +{ + "jsonrpc": "2.0", + "method": "header", + "params": { "height": "100" }, + "id": 1 +} +``` + +### POST /block_by_hash + +Get block by hash. + +**Request:** + +```json +{ + "jsonrpc": "2.0", + "method": "block_by_hash", + "params": { "hash": "0x..." }, + "id": 1 +} +``` + +## Transaction Queries + +### POST /tx + +Get transaction by hash. + +**Request:** + +```json +{ + "jsonrpc": "2.0", + "method": "tx", + "params": { "hash": "0x..." }, + "id": 1 +} +``` + +## Status + +### POST /status + +Get node status. + +**Response:** + +```json +{ + "jsonrpc": "2.0", + "result": { + "node_info": { + "network": "chain-id", + "version": "1.0.0" + }, + "sync_info": { + "latest_block_height": "1000", + "latest_block_time": "2024-01-01T00:00:00Z", + "catching_up": false + } + }, + "id": 1 +} +``` + +## DA Status + +### POST /da_status + +Get DA layer status. 
+ +**Response:** + +```json +{ + "jsonrpc": "2.0", + "result": { + "da_height": "5000", + "last_submitted_height": "999", + "pending_blocks": 1 + }, + "id": 1 +} +``` + +## Configuration + +Default port: `26657` + +Configure via flag: + +```bash +--evnode.rpc.address tcp://0.0.0.0:26657 +``` + +## WebSocket + +Subscribe to events via WebSocket at `ws://localhost:26657/websocket`. + +### Subscribe to new blocks + +```json +{ + "jsonrpc": "2.0", + "method": "subscribe", + "params": { "query": "tm.event='NewBlock'" }, + "id": 1 +} +``` diff --git a/docs/reference/configuration/ev-abci-flags.md b/docs/reference/configuration/ev-abci-flags.md index 4c7aaddaf6..2733a8a907 100644 --- a/docs/reference/configuration/ev-abci-flags.md +++ b/docs/reference/configuration/ev-abci-flags.md @@ -1,8 +1,99 @@ -# ev-abci Flags - - +# ev-abci Flags Reference + +Command-line flags for Cosmos SDK applications using ev-abci. + +## ev-node Flags + +These flags configure the underlying ev-node instance. + +### Node Configuration + +| Flag | Type | Default | Description | +|---------------------------------|----------|---------|------------------------------| +| `--evnode.node.aggregator` | bool | `false` | Run as block producer | +| `--evnode.node.block_time` | duration | `1s` | Block production interval | +| `--evnode.node.lazy_aggregator` | bool | `false` | Only produce blocks with txs | +| `--evnode.node.lazy_block_time` | duration | `1s` | Max wait in lazy mode | + +### DA Configuration + +| Flag | Type | Default | Description | +|--------------------------|--------|----------|-------------------------| +| `--evnode.da.address` | string | required | DA layer URL | +| `--evnode.da.auth_token` | string | `""` | DA authentication token | +| `--evnode.da.namespace` | string | `""` | DA namespace (hex) | +| `--evnode.da.gas_price` | float | `0.01` | DA gas price | + +### P2P Configuration + +| Flag | Type | Default | Description | 
+|------------------------|--------|--------------------------|--------------------------------| +| `--evnode.p2p.listen` | string | `/ip4/0.0.0.0/tcp/26656` | P2P listen address | +| `--evnode.p2p.peers` | string | `""` | Comma-separated peer addresses | +| `--evnode.p2p.blocked` | string | `""` | Blocked peer IDs | + +### Signer Configuration + +| Flag | Type | Default | Description | +|------------------------------|--------|----------|-----------------------| +| `--evnode.signer.passphrase` | string | required | Signer key passphrase | + +### RPC Configuration + +| Flag | Type | Default | Description | +|------------------------|--------|-----------------------|--------------------| +| `--evnode.rpc.address` | string | `tcp://0.0.0.0:26657` | RPC listen address | + +## Cosmos SDK Flags + +Standard Cosmos SDK flags remain available: + +| Flag | Description | +|----------------|--------------------------------------| +| `--home` | Application home directory | +| `--log_level` | Log level (debug, info, warn, error) | +| `--log_format` | Log format (plain, json) | +| `--trace` | Enable full stack traces | + +## Environment Variables + +Flags can be set via environment variables: + +```bash +EVNODE_NODE_AGGREGATOR=true +EVNODE_DA_ADDRESS=http://localhost:7980 +EVNODE_SIGNER_PASSPHRASE=secret +``` + +Pattern: `EVNODE_
_` (uppercase, underscores) + +## Examples + +### Sequencer Node + +```bash +appd start \ + --evnode.node.aggregator \ + --evnode.node.block_time 500ms \ + --evnode.da.address http://localhost:7980 \ + --evnode.signer.passphrase secret +``` + +### Full Node + +```bash +appd start \ + --evnode.da.address http://localhost:7980 \ + --evnode.p2p.peers 12D3KooW...@sequencer.example.com:26656 +``` + +### Lazy Aggregator + +```bash +appd start \ + --evnode.node.aggregator \ + --evnode.node.lazy_aggregator \ + --evnode.node.lazy_block_time 5s \ + --evnode.da.address http://localhost:7980 \ + --evnode.signer.passphrase secret +``` diff --git a/docs/reference/configuration/ev-reth-chainspec.md b/docs/reference/configuration/ev-reth-chainspec.md index 1d18055a8b..2f0813085c 100644 --- a/docs/reference/configuration/ev-reth-chainspec.md +++ b/docs/reference/configuration/ev-reth-chainspec.md @@ -1,12 +1,160 @@ -# ev-reth Chainspec - - +# ev-reth Chainspec Reference + +Complete reference for ev-reth chainspec (genesis.json) configuration. + +## Structure + +```json +{ + "config": { }, + "alloc": { }, + "coinbase": "0x...", + "difficulty": "0x0", + "gasLimit": "0x...", + "nonce": "0x0", + "timestamp": "0x0" +} +``` + +## config + +Chain configuration parameters. 
+ +### Standard Ethereum Fields + +| Field | Type | Description | +|-------|------|-------------| +| `chainId` | number | Unique chain identifier | +| `homesteadBlock` | number | Homestead fork block (use 0) | +| `eip150Block` | number | EIP-150 fork block (use 0) | +| `eip155Block` | number | EIP-155 fork block (use 0) | +| `eip158Block` | number | EIP-158 fork block (use 0) | +| `byzantiumBlock` | number | Byzantium fork block (use 0) | +| `constantinopleBlock` | number | Constantinople fork block (use 0) | +| `petersburgBlock` | number | Petersburg fork block (use 0) | +| `istanbulBlock` | number | Istanbul fork block (use 0) | +| `berlinBlock` | number | Berlin fork block (use 0) | +| `londonBlock` | number | London fork block (use 0) | +| `shanghaiTime` | number | Shanghai fork timestamp (use 0) | +| `cancunTime` | number | Cancun fork timestamp (use 0) | + +### config.evolve + +Evolve-specific extensions. + +| Field | Type | Description | +|-------|------|-------------| +| `baseFeeSink` | address | Redirect base fees to this address | +| `baseFeeRedirectActivationHeight` | number | Block height to activate redirect | +| `deployAllowlist` | object | Contract deployment restrictions | +| `contractSizeLimit` | number | Max contract bytecode size (bytes) | +| `mintPrecompile` | object | Native token minting precompile | + +#### deployAllowlist + +```json +{ + "admin": "0x...", + "enabled": ["0x...", "0x..."] +} +``` + +| Field | Type | Description | +|-------|------|-------------| +| `admin` | address | Can modify the allowlist | +| `enabled` | address[] | Addresses allowed to deploy | + +#### mintPrecompile + +```json +{ + "admin": "0x...", + "address": "0x0000000000000000000000000000000000000100" +} +``` + +| Field | Type | Description | +|-------|------|-------------| +| `admin` | address | Can call mint() | +| `address` | address | Precompile address | + +## alloc + +Pre-funded accounts and contract deployments. 
+ +```json +{ + "alloc": { + "0xAddress1": { + "balance": "0x..." + }, + "0xAddress2": { + "balance": "0x...", + "code": "0x...", + "storage": { + "0x0": "0x..." + } + } + } +} +``` + +| Field | Type | Description | +|-------|------|-------------| +| `balance` | hex string | Wei balance | +| `code` | hex string | Contract bytecode (optional) | +| `storage` | object | Storage slots (optional) | +| `nonce` | hex string | Account nonce (optional) | + +## Top-Level Fields + +| Field | Type | Description | +|-------|------|-------------| +| `coinbase` | address | Default fee recipient | +| `difficulty` | hex string | Initial difficulty (use "0x0") | +| `gasLimit` | hex string | Block gas limit | +| `nonce` | hex string | Genesis nonce (use "0x0") | +| `timestamp` | hex string | Genesis timestamp | +| `extraData` | hex string | Extra data (optional) | +| `mixHash` | hex string | Mix hash (optional) | + +## Example + +```json +{ + "config": { + "chainId": 1337, + "homesteadBlock": 0, + "eip150Block": 0, + "eip155Block": 0, + "eip158Block": 0, + "byzantiumBlock": 0, + "constantinopleBlock": 0, + "petersburgBlock": 0, + "istanbulBlock": 0, + "berlinBlock": 0, + "londonBlock": 0, + "shanghaiTime": 0, + "cancunTime": 0, + "evolve": { + "baseFeeSink": "0x1234567890123456789012345678901234567890", + "baseFeeRedirectActivationHeight": 0, + "contractSizeLimit": 49152, + "mintPrecompile": { + "admin": "0xBridgeContract", + "address": "0x0000000000000000000000000000000000000100" + } + } + }, + "alloc": { + "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266": { + "balance": "0x200000000000000000000000000000000000000000000000000000000000000" + } + }, + "coinbase": "0x0000000000000000000000000000000000000000", + "difficulty": "0x0", + "gasLimit": "0x1c9c380", + "nonce": "0x0", + "timestamp": "0x0" +} +``` diff --git a/docs/reference/interfaces/da.md b/docs/reference/interfaces/da.md index e5bc3bed52..8cf99a8dee 100644 --- a/docs/reference/interfaces/da.md +++ 
b/docs/reference/interfaces/da.md @@ -1,12 +1,193 @@ # DA Interface - +The DA (Data Availability) interface defines how ev-node submits and retrieves data from the DA layer. + +## Client Interface + +```go +type Client interface { + Submit(ctx context.Context, data [][]byte, gasPrice float64, namespace []byte, options []byte) ResultSubmit + Retrieve(ctx context.Context, height uint64, namespace []byte) ResultRetrieve + Get(ctx context.Context, ids []ID, namespace []byte) ([]Blob, error) + GetHeaderNamespace() []byte + GetDataNamespace() []byte + GetForcedInclusionNamespace() []byte + HasForcedInclusionNamespace() bool +} +``` + +## Methods + +### Submit + +Submits blobs to the DA layer. + +```go +Submit(ctx context.Context, data [][]byte, gasPrice float64, namespace []byte, options []byte) ResultSubmit +``` + +**Parameters:** + +- `data` - Blobs to submit +- `gasPrice` - DA layer gas price +- `namespace` - Target namespace +- `options` - DA-specific options (JSON encoded) + +**Returns:** + +```go +type ResultSubmit struct { + BaseResult +} +``` + +### Retrieve + +Retrieves all blobs at a DA height and namespace. + +```go +Retrieve(ctx context.Context, height uint64, namespace []byte) ResultRetrieve +``` + +**Returns:** + +```go +type ResultRetrieve struct { + BaseResult + Data [][]byte // Retrieved blobs +} +``` + +### Get + +Retrieves specific blobs by their IDs. 
+ +```go +Get(ctx context.Context, ids []ID, namespace []byte) ([]Blob, error) +``` + +### Namespace Accessors + +```go +GetHeaderNamespace() []byte // Namespace for block headers +GetDataNamespace() []byte // Namespace for block data +GetForcedInclusionNamespace() []byte // Namespace for forced inclusion txs +HasForcedInclusionNamespace() bool // Whether forced inclusion is enabled +``` + +## Verifier Interface + +For sequencers that need to verify batch inclusion: + +```go +type Verifier interface { + GetProofs(ctx context.Context, ids []ID, namespace []byte) ([]Proof, error) + Validate(ctx context.Context, ids []ID, proofs []Proof, namespace []byte) ([]bool, error) +} +``` + +## FullClient Interface + +Combines Client and Verifier: + +```go +type FullClient interface { + Client + Verifier +} +``` + +## Types + +### Core Types + +```go +type Blob = []byte // Raw data +type ID = []byte // Blob identifier (height + commitment) +type Commitment = []byte // Cryptographic commitment +type Proof = []byte // Inclusion proof +``` + +### BaseResult + +Common fields for DA operations: + +```go +type BaseResult struct { + Code StatusCode + Message string + Height uint64 + SubmittedCount uint64 + BlobSize uint64 + IDs [][]byte + Timestamp time.Time +} +``` + +### Status Codes + +```go +const ( + StatusUnknown StatusCode = iota + StatusSuccess + StatusNotFound + StatusNotIncludedInBlock + StatusAlreadyInMempool + StatusTooBig + StatusContextDeadline + StatusError + StatusIncorrectAccountSequence + StatusContextCanceled + StatusHeightFromFuture +) +``` + +## ID Format + +IDs encode both height and commitment: + +```go +// ID = height (8 bytes, little-endian) + commitment +func SplitID(id []byte) (height uint64, commitment []byte, error) +``` + +## Namespaces + +DA uses 29-byte namespaces (Celestia format): + +- 1 byte version +- 28 bytes identifier + +Three namespaces are used: + +| Namespace | Purpose | +|------------------|-----------------------------------------| +| Header 
| Block headers | +| Data | Transaction data | +| Forced Inclusion | User-submitted censorship-resistant txs | + +## Implementations + +| Implementation | Package | Description | +|----------------|-------------------|---------------------| +| Celestia | `pkg/da/celestia` | Production DA layer | +| Local DA | `pkg/da/local` | Development/testing | + +## Configuration + +```bash +# Celestia +--evnode.da.address http://localhost:26658 +--evnode.da.auth_token +--evnode.da.namespace +--evnode.da.gas_price 0.01 + +# Local DA +--evnode.da.address http://localhost:7980 +``` + +## See Also + +- [Data Availability Concepts](/concepts/data-availability) +- [Celestia Guide](/guides/da-layers/celestia) +- [Local DA Guide](/guides/da-layers/local-da) diff --git a/docs/reference/interfaces/executor.md b/docs/reference/interfaces/executor.md index 8e22250e95..5cb0e9f8d8 100644 --- a/docs/reference/interfaces/executor.md +++ b/docs/reference/interfaces/executor.md @@ -1,11 +1,185 @@ # Executor Interface - +The Executor interface defines how ev-node communicates with execution layers. Implement this interface to run custom execution environments on Evolve. + +## Interface Definition + +```go +type Executor interface { + InitChain(ctx context.Context, genesisTime time.Time, initialHeight uint64, chainID string) (stateRoot []byte, err error) + GetTxs(ctx context.Context) ([][]byte, error) + ExecuteTxs(ctx context.Context, txs [][]byte, blockHeight uint64, timestamp time.Time, prevStateRoot []byte) (updatedStateRoot []byte, err error) + SetFinal(ctx context.Context, blockHeight uint64) error + GetExecutionInfo(ctx context.Context) (ExecutionInfo, error) + FilterTxs(ctx context.Context, txs [][]byte, maxBytes, maxGas uint64, hasForceIncludedTransaction bool) ([]FilterStatus, error) +} +``` + +## Methods + +### InitChain + +Initializes the blockchain with genesis parameters. 
+ +```go +InitChain(ctx context.Context, genesisTime time.Time, initialHeight uint64, chainID string) (stateRoot []byte, err error) +``` + +**Parameters:** + +- `genesisTime` - Chain start timestamp (UTC) +- `initialHeight` - First block height (must be > 0) +- `chainID` - Unique chain identifier + +**Returns:** + +- `stateRoot` - Hash representing initial state + +**Requirements:** + +- Must be idempotent (repeated calls return same result) +- Must validate genesis parameters +- Must generate deterministic initial state root + +### GetTxs + +Fetches transactions from the execution layer's mempool. + +```go +GetTxs(ctx context.Context) ([][]byte, error) +``` + +**Returns:** + +- Slice of valid transactions + +**Requirements:** + +- Return only currently valid transactions +- Do not remove transactions from mempool +- May remove invalid transactions + +### ExecuteTxs + +Processes transactions to produce a new block state. + +```go +ExecuteTxs(ctx context.Context, txs [][]byte, blockHeight uint64, timestamp time.Time, prevStateRoot []byte) (updatedStateRoot []byte, err error) +``` + +**Parameters:** + +- `txs` - Ordered list of transactions +- `blockHeight` - Height of block being created +- `timestamp` - Block timestamp (UTC) +- `prevStateRoot` - Previous block's state root + +**Returns:** + +- `updatedStateRoot` - New state root after execution + +**Requirements:** + +- Must be deterministic +- Must handle empty transaction lists +- Must handle malformed transactions gracefully +- Must validate against previous state root + +### SetFinal + +Marks a block as finalized. + +```go +SetFinal(ctx context.Context, blockHeight uint64) error +``` + +**Parameters:** + +- `blockHeight` - Height to finalize + +**Requirements:** + +- Must be idempotent +- Must verify block exists +- Finalized blocks cannot be reverted + +### GetExecutionInfo + +Returns current execution layer parameters. 
+ +```go +GetExecutionInfo(ctx context.Context) (ExecutionInfo, error) +``` + +**Returns:** + +```go +type ExecutionInfo struct { + MaxGas uint64 // Maximum gas per block (0 = no gas-based limiting) +} +``` + +### FilterTxs + +Validates and filters transactions for block inclusion. + +```go +FilterTxs(ctx context.Context, txs [][]byte, maxBytes, maxGas uint64, hasForceIncludedTransaction bool) ([]FilterStatus, error) +``` + +**Parameters:** + +- `txs` - All transactions (force-included + mempool) +- `maxBytes` - Maximum cumulative size (0 = no limit) +- `maxGas` - Maximum cumulative gas (0 = no limit) +- `hasForceIncludedTransaction` - Whether force-included txs are present + +**Returns:** + +```go +type FilterStatus int + +const ( + FilterOK FilterStatus = iota // Include in batch + FilterRemove // Invalid, remove + FilterPostpone // Valid but exceeds limits, postpone +) +``` + +## Optional Interfaces + +### HeightProvider + +Enables height synchronization checks between ev-node and the execution layer. + +```go +type HeightProvider interface { + GetLatestHeight(ctx context.Context) (uint64, error) +} +``` + +Useful for detecting desynchronization after crashes or restarts. + +### Rollbackable + +Enables automatic rollback when execution layer is ahead of consensus. + +```go +type Rollbackable interface { + Rollback(ctx context.Context, targetHeight uint64) error +} +``` + +Only implement if your execution layer supports in-flight rollback. + +## Implementations + +| Implementation | Package | Description | +|----------------|---------|-------------| +| ev-reth | `execution/evm` | EVM execution via Engine API | +| ev-abci | `execution/abci` | Cosmos SDK via ABCI | +| testapp | `apps/testapp` | Simple key-value store | + +## Implementation Guide + +See [Implement Custom Executor](/getting-started/custom/implement-executor) for a step-by-step guide. 
diff --git a/docs/reference/interfaces/sequencer.md b/docs/reference/interfaces/sequencer.md index 186a46062e..ead2eb9477 100644 --- a/docs/reference/interfaces/sequencer.md +++ b/docs/reference/interfaces/sequencer.md @@ -1,11 +1,159 @@ # Sequencer Interface - +The Sequencer interface defines how ev-node orders transactions for block production. Two implementations are provided: single sequencer and based sequencer. + +## Interface Definition + +```go +type Sequencer interface { + SubmitBatchTxs(ctx context.Context, req SubmitBatchTxsRequest) (*SubmitBatchTxsResponse, error) + GetNextBatch(ctx context.Context, req GetNextBatchRequest) (*GetNextBatchResponse, error) + VerifyBatch(ctx context.Context, req VerifyBatchRequest) (*VerifyBatchResponse, error) + SetDAHeight(height uint64) + GetDAHeight() uint64 +} +``` + +## Methods + +### SubmitBatchTxs + +Submits a batch of transactions from the executor to the sequencer. + +```go +SubmitBatchTxs(ctx context.Context, req SubmitBatchTxsRequest) (*SubmitBatchTxsResponse, error) +``` + +**Request:** + +```go +type SubmitBatchTxsRequest struct { + Id []byte // Chain identifier + Batch *Batch // Transactions to submit +} + +type Batch struct { + Transactions [][]byte +} +``` + +### GetNextBatch + +Returns the next batch of transactions for block production. + +```go +GetNextBatch(ctx context.Context, req GetNextBatchRequest) (*GetNextBatchResponse, error) +``` + +**Request:** + +```go +type GetNextBatchRequest struct { + Id []byte // Chain identifier + LastBatchData [][]byte // Previous batch data + MaxBytes uint64 // Maximum batch size +} +``` + +**Response:** + +```go +type GetNextBatchResponse struct { + Batch *Batch // Transactions to include + Timestamp time.Time // Block timestamp + BatchData [][]byte // Data for verification +} +``` + +### VerifyBatch + +Verifies a batch received from another node during sync. 
+ +```go +VerifyBatch(ctx context.Context, req VerifyBatchRequest) (*VerifyBatchResponse, error) +``` + +**Request:** + +```go +type VerifyBatchRequest struct { + Id []byte // Chain identifier + BatchData [][]byte // Batch data to verify +} +``` + +**Response:** + +```go +type VerifyBatchResponse struct { + Status bool // true if valid +} +``` + +### SetDAHeight / GetDAHeight + +Track the current DA height for forced inclusion retrieval. + +```go +SetDAHeight(height uint64) +GetDAHeight() uint64 +``` + +## Batch Type + +```go +type Batch struct { + Transactions [][]byte +} + +// Hash returns SHA256 hash of the batch +func (batch *Batch) Hash() ([]byte, error) +``` + +The hash is computed deterministically: + +1. Write transaction count as uint64 (big-endian) +2. For each transaction: write length as uint64, then bytes + +## Implementations + +### Single Sequencer + +Located in `pkg/sequencers/single/`. + +- Maintains local mempool +- Supports forced inclusion from DA +- Default for most deployments + +### Based Sequencer + +Located in `pkg/sequencers/based/`. + +- No local mempool +- All transactions come from DA layer +- Maximum censorship resistance + +## Configuration + +Select sequencer mode via configuration: + +```yaml +# Single sequencer (default) +sequencer: + type: single + +# Based sequencer +sequencer: + type: based +``` + +## Forced Inclusion + +Both sequencer implementations support forced inclusion, but with different behaviors: + +| Sequencer | Forced Inclusion Source | Mempool | +|-----------|------------------------|---------| +| Single | DA namespace + local mempool | Yes | +| Based | DA namespace only | No | + +The sequencer tracks DA height via `SetDAHeight()` to know which forced inclusion transactions to include. 
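The deterministic hashing scheme described above (transaction count as a big-endian uint64, then each transaction's length followed by its bytes) can be sketched as a standalone Go function. This is an illustrative reimplementation; the canonical version is the `Batch.Hash` method in ev-node.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// hashBatch computes a SHA-256 over a batch of transactions:
// 1. the transaction count as a big-endian uint64,
// 2. for each transaction, its length as a big-endian uint64, then its bytes.
// Length-prefixing makes the encoding unambiguous, so distinct batches
// cannot collide by re-splitting the same concatenated bytes.
func hashBatch(txs [][]byte) []byte {
	h := sha256.New()
	buf := make([]byte, 8)

	binary.BigEndian.PutUint64(buf, uint64(len(txs)))
	h.Write(buf)

	for _, tx := range txs {
		binary.BigEndian.PutUint64(buf, uint64(len(tx)))
		h.Write(buf)
		h.Write(tx)
	}
	return h.Sum(nil)
}

func main() {
	txs := [][]byte{[]byte("tx1"), []byte("tx2")}
	fmt.Printf("batch hash: %x\n", hashBatch(txs))
}
```

Because the encoding is order-sensitive, reordering transactions produces a different hash, which is what allows syncing nodes to verify they received the exact batch the sequencer produced.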
From 43a6a26e0e0cc3610f5beeaeab6efd88a6a23400 Mon Sep 17 00:00:00 2001 From: tac0turtle Date: Wed, 28 Jan 2026 13:40:12 +0100 Subject: [PATCH 3/4] Refactor documentation for data availability layers and node operations - Updated Celestia guide to clarify prerequisites, installation, and configuration for connecting Evolve chains to Celestia. - Enhanced Local DA documentation with installation instructions, configuration options, and use cases for development and testing. - Expanded troubleshooting guide with detailed diagnostic commands, common issues, and solutions for node operations. - Created comprehensive upgrades guide covering minor and major upgrades, version compatibility, and rollback procedures. - Added aggregator node documentation detailing configuration, block production settings, and monitoring options. - Introduced attester node overview with configuration and use cases for low-latency applications. - Removed outdated light node documentation. - Improved formatting and clarity in ev-reth chainspec reference for better readability. 
--- docs/ev-abci/overview.md | 14 +- docs/guides/advanced/custom-precompiles.md | 286 +++++++++++++++- docs/guides/da-layers/celestia.md | 250 +++++++++----- docs/guides/da-layers/local-da.md | 190 ++++++++-- docs/guides/operations/troubleshooting.md | 324 +++++++++++++++++- docs/guides/operations/upgrades.md | 278 ++++++++++++++- docs/guides/running-nodes/aggregator.md | 202 ++++++++++- docs/guides/running-nodes/attester.md | 72 +++- docs/guides/running-nodes/light-node.md | 9 - .../configuration/ev-reth-chainspec.md | 82 ++--- 10 files changed, 1493 insertions(+), 214 deletions(-) delete mode 100644 docs/guides/running-nodes/light-node.md diff --git a/docs/ev-abci/overview.md b/docs/ev-abci/overview.md index 2331fa5df6..3c32a6f03f 100644 --- a/docs/ev-abci/overview.md +++ b/docs/ev-abci/overview.md @@ -37,13 +37,13 @@ ev-abci implements the Executor interface, translating ev-node's calls into ABCI ## Key Differences from CometBFT -| Aspect | CometBFT | ev-abci | -|--------|----------|---------| -| Validators | Multiple validators with staking | Single sequencer | -| Consensus | BFT consensus rounds | Sequencer produces blocks | -| Finality | Instant (BFT) | Soft (P2P) → Hard (DA) | -| Block time | ~6s typical | Configurable (100ms+) | -| Vote extensions | Supported | Not supported | +| Aspect | CometBFT | ev-abci | +|-----------------|----------------------------------|---------------------------| +| Validators | Multiple validators with staking | Single sequencer | +| Consensus | BFT consensus rounds | Sequencer produces blocks | +| Finality | Instant (BFT) | Soft (P2P) → Hard (DA) | +| Block time | ~6s typical | Configurable (100ms+) | +| Vote extensions | Supported | Not supported | ## Benefits diff --git a/docs/guides/advanced/custom-precompiles.md b/docs/guides/advanced/custom-precompiles.md index f0a4a4b1a8..94eaa69ba6 100644 --- a/docs/guides/advanced/custom-precompiles.md +++ b/docs/guides/advanced/custom-precompiles.md @@ -1,11 +1,279 @@ # Custom 
Precompiles - +ev-reth supports custom EVM precompiled contracts for chain-specific functionality. This guide covers the built-in precompiles and how to add custom ones. + +## What Are Precompiles? + +Precompiles are special contracts at predefined addresses that execute native code instead of EVM bytecode. They're used for: + +- Computationally expensive operations (cryptography, hashing) +- Chain-specific functionality (minting, governance) +- Operations impossible or inefficient in Solidity + +## Built-in ev-reth Precompiles + +### Mint Precompile + +Allows an authorized address to mint native tokens. Useful for bridging scenarios. + +**Address:** `0x0000000000000000000000000000000000000100` + +**Configuration (chainspec):** + +```json +{ + "config": { + "evolve": { + "mintPrecompile": { + "admin": "0xBridgeContract", + "address": "0x0000000000000000000000000000000000000100" + } + } + } +} +``` + +**Interface:** + +```solidity +interface IMint { + /// @notice Mint native tokens to a recipient + /// @param recipient Address to receive tokens + /// @param amount Amount to mint (in wei) + function mint(address recipient, uint256 amount) external; +} +``` + +**Usage:** + +```solidity +// Only callable by admin address +IMint(0x0000000000000000000000000000000000000100).mint( + 0xRecipient, + 1 ether +); +``` + +See [Mint Precompile Reference](/ev-reth/features/mint-precompile) for details. + +## Creating Custom Precompiles + +Custom precompiles require modifying ev-reth source code. 
+
+### Step 1: Define the Precompile
+
+Create a new precompile in `crates/precompiles/src/`:
+
+```rust
+// my_precompile.rs
+use revm::precompile::{PrecompileError, PrecompileOutput, PrecompileResult};
+use revm::primitives::{address, Address, Bytes, U256};
+
+pub const MY_PRECOMPILE_ADDRESS: Address = address!("0000000000000000000000000000000000000200");
+
+pub fn my_precompile(input: &Bytes, gas_limit: u64) -> PrecompileResult {
+    // Check gas
+    let gas_used = 1000; // Base gas cost
+    if gas_used > gas_limit {
+        return Err(PrecompileError::OutOfGas);
+    }
+
+    // Parse input
+    // input[0..4] = function selector
+    // input[4..] = encoded arguments
+
+    // Execute logic
+    let result = process_input(input)?;
+
+    Ok(PrecompileOutput {
+        gas_used,
+        bytes: result,
+    })
+}
+
+fn process_input(input: &Bytes) -> Result<Bytes, PrecompileError> {
+    // Your custom logic here
+    Ok(Bytes::new())
+}
+```
+
+### Step 2: Register the Precompile
+
+Add the precompile to the precompile set:
+
+```rust
+// In precompiles/src/lib.rs
+pub fn evolve_precompiles(chain_spec: &ChainSpec) -> PrecompileSet {
+    let mut precompiles = standard_precompiles();
+
+    // Add mint precompile if configured
+    if let Some(mint_config) = &chain_spec.evolve.mint_precompile {
+        precompiles.insert(mint_config.address, mint_precompile);
+    }
+
+    // Add your custom precompile
+    if chain_spec.evolve.my_feature_enabled {
+        precompiles.insert(MY_PRECOMPILE_ADDRESS, my_precompile);
+    }
+
+    precompiles
+}
+```
+
+### Step 3: Add Chainspec Configuration
+
+Define configuration structure:
+
+```rust
+// In chainspec types
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct MyPrecompileConfig {
+    pub address: Address,
+    pub admin: Option<Address>
, + pub some_parameter: u64, +} +``` + +Update chainspec parsing to include new config. + +### Step 4: Build and Test + +```bash +# Build ev-reth +cargo build --release + +# Run tests +cargo test --package ev-reth-precompiles +``` + +## Precompile Best Practices + +### Gas Metering + +Charge gas proportional to computation: + +```rust +fn my_precompile(input: &Bytes, gas_limit: u64) -> PrecompileResult { + // Base cost + let mut gas_used = 100; + + // Per-byte cost for input processing + gas_used += input.len() as u64 * 3; + + // Additional cost for expensive operations + if requires_crypto_operation(input) { + gas_used += 10000; + } + + if gas_used > gas_limit { + return Err(PrecompileError::OutOfGas); + } + + // Process... +} +``` + +### Access Control + +For privileged operations, check caller: + +```rust +fn admin_only_precompile( + input: &Bytes, + context: &PrecompileContext, + config: &MyConfig, +) -> PrecompileResult { + // Verify caller is admin + if context.caller != config.admin { + return Err(PrecompileError::Custom("unauthorized".into())); + } + + // Process... +} +``` + +### Input Validation + +Always validate input thoroughly: + +```rust +fn my_precompile(input: &Bytes) -> PrecompileResult { + // Check minimum length + if input.len() < 36 { // 4 byte selector + 32 byte arg + return Err(PrecompileError::InvalidInput); + } + + // Validate selector + let selector = &input[0..4]; + if selector != MY_FUNCTION_SELECTOR { + return Err(PrecompileError::InvalidInput); + } + + // Parse and validate arguments + let amount = U256::from_be_slice(&input[4..36]); + if amount.is_zero() { + return Err(PrecompileError::InvalidInput); + } + + // Process... 
+} +``` + +### Determinism + +Precompiles must be deterministic: + +- No random number generation +- No external network calls +- No time-dependent logic +- Same input always produces same output + +## Testing Precompiles + +### Unit Tests + +```rust +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_my_precompile_success() { + let input = encode_input(/* args */); + let result = my_precompile(&input, 100000).unwrap(); + assert_eq!(result.bytes, expected_output()); + } + + #[test] + fn test_my_precompile_out_of_gas() { + let input = encode_input(/* args */); + let result = my_precompile(&input, 10); // Too little gas + assert!(matches!(result, Err(PrecompileError::OutOfGas))); + } +} +``` + +### Integration Tests + +Test precompile calls from Solidity: + +```solidity +// test/MyPrecompile.t.sol +contract MyPrecompileTest is Test { + address constant PRECOMPILE = 0x0000000000000000000000000000000000000200; + + function testPrecompileCall() public { + (bool success, bytes memory result) = PRECOMPILE.call( + abi.encodeWithSignature("myFunction(uint256)", 100) + ); + assertTrue(success); + // Assert result... + } +} +``` + +## See Also + +- [Mint Precompile](/ev-reth/features/mint-precompile) - Built-in minting +- [ev-reth Configuration](/ev-reth/configuration) - Chainspec setup +- [ev-reth Overview](/ev-reth/overview) - Architecture diff --git a/docs/guides/da-layers/celestia.md b/docs/guides/da-layers/celestia.md index 6b6c092b05..907c470a8a 100644 --- a/docs/guides/da-layers/celestia.md +++ b/docs/guides/da-layers/celestia.md @@ -1,153 +1,229 @@ -# Using Celestia as DA +# Celestia - - +This guide covers connecting your Evolve chain to Celestia for production data availability. -## 🌞 Introduction {#introduction} +## Prerequisites -This tutorial serves as a comprehensive guide for deploying your chain on Celestia's data availability (DA) network. From the Evolve perspective, there's no difference in posting blocks to Celestia's testnets or Mainnet Beta. 
+- Completed an Evolve quickstart tutorial +- Familiarity with running a Celestia light node -Before proceeding, ensure that you have completed the [gm-world](../gm-world.md) tutorial, which covers installing the Testapp CLI and running a chain against a local DA network. +## Running a Celestia Light Node -## 🪶 Running a Celestia light node +Before starting your Evolve chain, you need a Celestia light node running and synced. -Before you can start your chain node, you need to initiate, sync, and fund a light node on one of Celestia's networks on a compatible version: +### Version Compatibility -Find more information on how to run a light node in the [Celestia documentation](https://celestia.org/run-a-light-node/#start-up-a-node). +Ensure compatible versions between ev-node and celestia-node: -::: code-group +| Network | celestia-node | +|---------|---------------| +| Arabica | v0.20.x | +| Mocha | v0.20.x | +| Mainnet | v0.20.x | -```sh-vue [Arabica] -Evolve Version: {{constants.celestiaNodeArabicaEvolveTag}} -Celestia Node Version: {{constants.celestiaNodeArabicaTag}} -``` +### Installation -```sh-vue [Mocha] -Evolve Version: {{constants.celestiaNodeMochaEvolveTag}} -Celestia Node Version: {{constants.celestiaNodeMochaTag}} -``` +Follow the [Celestia documentation](https://docs.celestia.org/how-to-guides/light-node) to install and run a light node. 
-```sh-vue [Mainnet] -Evolve Version: {{constants.celestiaNodeMainnetEvolveTag}} -Celestia Node Version: {{constants.celestiaNodeMainnetTag}} -``` +**Quick start:** -::: - -- [Arabica Devnet](https://docs.celestia.org/how-to-guides/arabica-devnet) -- [Mocha Testnet](https://docs.celestia.org/how-to-guides/mocha-testnet) -- [Mainnet Beta](https://docs.celestia.org/how-to-guides/mainnet) +```bash +# Install celestia-node +curl -sL https://docs.celestia.org/install.sh | bash -The main difference lies in how you fund your wallet address: using testnet TIA or [TIA](https://docs.celestia.org/learn/tia#overview-of-tia) for Mainnet Beta. +# Initialize (choose your network) +celestia light init --p2p.network mocha -After successfully starting a light node, it's time to start posting the batches of blocks of data that your chain generates to Celestia. +# Start the node +celestia light start --p2p.network mocha +``` -## 🏗️ Prerequisites {#prerequisites} +### Network Options -- `gmd` CLI installed from the [gm-world](../gm-world.md) tutorial. +- [Arabica Devnet](https://docs.celestia.org/how-to-guides/arabica-devnet) - Development testing +- [Mocha Testnet](https://docs.celestia.org/how-to-guides/mocha-testnet) - Pre-production testing +- [Mainnet Beta](https://docs.celestia.org/how-to-guides/mainnet) - Production -## 🛠️ Configuring flags for DA +## Configuring Evolve for Celestia -Now that we are posting to the Celestia DA instead of the local DA, the `evolve start` command requires three DA configuration flags: +### Required Configuration -- `--evnode.da.start_height` -- `--evnode.da.auth_token` -- `--evnode.da.namespace` +The following flags are required to connect to Celestia: -:::tip -Optionally, you could also set the `--evnode.da.block_time` flag. This should be set to the finality time of the DA layer, not its actual block time, as Evolve does not handle reorganization logic. The default value is 15 seconds. 
-::: +| Flag | Description | +|------|-------------| +| `--evnode.da.address` | Celestia node RPC endpoint | +| `--evnode.da.auth_token` | JWT authentication token | +| `--evnode.da.header_namespace` | Namespace for block headers | +| `--evnode.da.data_namespace` | Namespace for transaction data | -Let's determine which values to provide for each of them. +### Get DA Block Height -First, let's query the DA layer start height using our light node. +Query the current DA height to set as your starting point: ```bash DA_BLOCK_HEIGHT=$(celestia header network-head | jq -r '.result.header.height') -echo -e "\n Your DA_BLOCK_HEIGHT is $DA_BLOCK_HEIGHT \n" +echo "Your DA_BLOCK_HEIGHT is $DA_BLOCK_HEIGHT" ``` -The output of the command above will look similar to this: - -```bash - Your DA_BLOCK_HEIGHT is 2127672 -``` +### Get Authentication Token -Now, let's obtain the authentication token of your light node using the following command: +Generate a write token for your light node: -::: code-group +**Arabica:** -```bash [Arabica Devnet] +```bash AUTH_TOKEN=$(celestia light auth write --p2p.network arabica) -echo -e "\n Your DA AUTH_TOKEN is $AUTH_TOKEN \n" ``` -```bash [Mocha Testnet] +**Mocha:** + +```bash AUTH_TOKEN=$(celestia light auth write --p2p.network mocha) -echo -e "\n Your DA AUTH_TOKEN is $AUTH_TOKEN \n" ``` -```bash [Mainnet Beta] +**Mainnet:** + +```bash AUTH_TOKEN=$(celestia light auth write) -echo -e "\n Your DA AUTH_TOKEN is $AUTH_TOKEN \n" ``` -::: +### Set Namespaces -The output of the command above will look similar to this: +Choose unique namespaces for your chain's headers and data: ```bash - Your DA AUTH_TOKEN is eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJwdWJsaWMiLCJyZWFkIiwid3JpdGUiXX0.cSrJjpfUdTNFtzGho69V0D_8kyECn9Mzv8ghJSpKRDE +DA_HEADER_NAMESPACE="my_chain_headers" +DA_DATA_NAMESPACE="my_chain_data" ``` -Next, let's set up the namespace to be used for posting data on Celestia. 
Evolve supports separate namespaces for headers and data, but for simplicity, we'll use a single namespace for both: +The namespace values are automatically encoded by ev-node for Celestia compatibility. -```bash -DA_NAMESPACE="fancy_namespace" -``` +You can use the same namespace for both headers and data, or separate them for optimized syncing (light clients can sync headers only). -**Advanced Configuration:** For production deployments, you can use separate namespaces for headers and data to optimize syncing: +### Set DA Address -- `--evnode.da.header_namespace` for block headers -- `--evnode.da.data_namespace` for transaction data +Default Celestia light node port is 26658: -The namespace values are automatically encoded by the node to ensure compatibility with Celestia. +```bash +DA_ADDRESS=http://localhost:26658 +``` -[Learn more about namespaces](https://docs.celestia.org/tutorials/node-tutorial#namespaces). -::: +## Running Your Chain -Lastly, set your DA address for your light node, which by default runs at -port 26658: +Start your chain with Celestia configuration: ```bash -DA_ADDRESS=http://localhost:26658 +evnode start \ + --evnode.node.aggregator \ + --evnode.da.auth_token $AUTH_TOKEN \ + --evnode.da.header_namespace $DA_HEADER_NAMESPACE \ + --evnode.da.data_namespace $DA_DATA_NAMESPACE \ + --evnode.da.address $DA_ADDRESS ``` -## 🔥 Running your chain connected to Celestia light node - -Finally, let's initiate the chain node with all the flags: +For Cosmos SDK chains: ```bash -gmd start \ +appd start \ --evnode.node.aggregator \ --evnode.da.auth_token $AUTH_TOKEN \ - --evnode.da.header_namespace $DA_NAMESPACE \ - --evnode.da.data_namespace $DA_NAMESPACE \ + --evnode.da.header_namespace $DA_HEADER_NAMESPACE \ + --evnode.da.data_namespace $DA_DATA_NAMESPACE \ --evnode.da.address $DA_ADDRESS ``` -Now, the chain is running and posting blocks (aggregated in batches) to Celestia. 
You can view your chain by using your namespace or account on one of Celestia's block explorers. +## Viewing Your Chain Data + +Once running, you can view your chain's data on Celestia block explorers: + +- [Celenium (Arabica)](https://arabica.celenium.io/) +- [Celenium (Mocha)](https://mocha.celenium.io/) +- [Celenium (Mainnet)](https://celenium.io/) + +Search by your namespace or account address to see submitted blobs. + +## Configuration Options + +### Gas Price + +Set the gas price for DA submissions: + +```bash +--evnode.da.gas_price 0.01 +``` + +Higher gas prices result in faster inclusion during congestion. -For example, [here on Celenium for Arabica](https://arabica.celenium.io/). +### Block Time -Other explorers: +Set the expected DA block time (affects retry timing): -- [Arabica testnet](https://docs.celestia.org/how-to-guides/arabica-devnet) -- [Mocha testnet](https://docs.celestia.org/how-to-guides/mocha-testnet) -- [Mainnet Beta](https://docs.celestia.org/how-to-guides/mainnet) +```bash +--evnode.da.block_time 6s +``` + +Celestia's block time is approximately 6 seconds. + +### Multiple Signing Addresses + +For high-throughput chains, use multiple signing addresses to avoid nonce conflicts: + +```bash +--evnode.da.signing_addresses celestia1abc...,celestia1def...,celestia1ghi... +``` + +All addresses must be funded and loaded in the Celestia node's keyring. + +## Funding Your Account + +### Testnet (Mocha/Arabica) + +Get testnet TIA from faucets: + +- [Mocha Faucet](https://faucet.celestia-mocha.com/) +- [Arabica Faucet](https://faucet.celestia-arabica.com/) + +### Mainnet + +Purchase TIA and transfer to your Celestia light node address. + +Check your address: + +```bash +celestia state account-address +``` + +## Troubleshooting + +### Out of Funds + +If you see `Code: 19` errors, your account is out of TIA: + +1. Fund your account +2. Increase gas price to unstick pending transactions +3. 
Restart your chain + +See [Troubleshooting Guide](/guides/operations/troubleshooting) for details. + +### Connection Refused + +Verify your Celestia node is running: + +```bash +curl http://localhost:26658/header/sync_state +``` + +### Token Expired + +Regenerate your auth token: + +```bash +celestia light auth write --p2p.network +``` -## 🎉 Next steps +## See Also -Congratulations! You've built a local chain that posts data to Celestia's DA layer. Well done! Now, go forth and build something great! Good luck! +- [Local DA Guide](/guides/da-layers/local-da) - Development setup +- [Troubleshooting](/guides/operations/troubleshooting) - Common issues +- [Configuration Reference](/reference/configuration/ev-node-config) - All DA options diff --git a/docs/guides/da-layers/local-da.md b/docs/guides/da-layers/local-da.md index 912724c52e..d9256c0577 100644 --- a/docs/guides/da-layers/local-da.md +++ b/docs/guides/da-layers/local-da.md @@ -1,56 +1,188 @@ -# Using Local DA +# Local DA - - +Local DA is a development-only data availability layer for testing Evolve chains without connecting to a real DA network. -## Introduction {#introduction} +## Overview -This tutorial serves as a comprehensive guide for using the [local-da](../../../tools/local-da) with your chain. +Local DA provides: -Before proceeding, ensure that you have completed the [build a chain](../gm-world.md) tutorial, which covers setting-up, building and running your chain. +- Fast, local blob storage +- No authentication required +- No gas fees +- Instant "finality" -## Setting Up a Local DA Network +**Warning:** Local DA is for development only. It provides no actual data availability guarantees. 
-To set up a local DA network node on your machine, run the following script to install and start the local DA node: +## Installation -```bash-vue +Install the local-da binary: + +```bash go install github.com/evstack/ev-node/tools/local-da@latest ``` -This script will build and run the node, which will then listen on port `7980`. +Or build from source: + +```bash +cd ev-node/tools/local-da +go build -o local-da . +``` + +## Running Local DA + +Start the local DA server: + +```bash +local-da +``` -## Configuring your chain to connect to the local DA network +Default output: -To connect your chain to the local DA network, you need to pass the `--evnode.da.address` flag with the local DA node address. +``` +INF NewLocalDA: initialized LocalDA module=local-da +INF Listening on host=localhost maxBlobSize=1974272 module=da port=7980 +INF server started listening on=localhost:7980 module=da +``` -## Run your chain +### Configuration -Start your chain node with the following command, ensuring to include the DA address flag: +| Flag | Default | Description | +|------|---------|-------------| +| `--host` | `localhost` | Listen address | +| `--port` | `7980` | Listen port | -::: code-group +Example with custom port: -```sh [Quick Start] -testapp start --evnode.da.address http://localhost:7980 +```bash +local-da --port 8080 ``` -```sh [gm-world Chain] -testapp start \ +## Connecting Your Chain + +Start your Evolve chain with the local DA address: + +```bash +evnode start \ --evnode.node.aggregator \ + --evnode.da.address http://localhost:7980 +``` + +For Cosmos SDK chains: + +```bash +appd start \ + --evnode.node.aggregator \ + --evnode.da.address http://localhost:7980 +``` + +## Features + +### No Authentication + +Unlike Celestia, local DA requires no auth token: + +```bash +# Celestia requires +--evnode.da.auth_token + +# Local DA does not +--evnode.da.address http://localhost:7980 +``` + +### No Namespace Required + +Namespace is optional with local DA: + +```bash +# 
Optional +--evnode.da.namespace my_namespace +``` + +### Instant Submission + +Blobs are stored immediately with no block time delay. + +## Use Cases + +### Local Development + +Test your chain logic without DA layer complexity: + +```bash +# Terminal 1: Start local DA +local-da + +# Terminal 2: Start your chain +evnode start --evnode.da.address http://localhost:7980 +``` + +### CI/CD Testing + +Use local DA in automated tests: + +```bash +# Start local DA in background +local-da & +LOCAL_DA_PID=$! + +# Run tests +go test ./... + +# Cleanup +kill $LOCAL_DA_PID +``` + +### Integration Testing + +Test multi-node setups locally: + +```bash +# Start local DA +local-da --port 7980 + +# Start sequencer +evnode start \ + --evnode.node.aggregator \ + --evnode.da.address http://localhost:7980 \ + --evnode.p2p.listen /ip4/0.0.0.0/tcp/7676 + +# Start full node (separate terminal) +evnode start \ --evnode.da.address http://localhost:7980 \ + --evnode.p2p.peers /ip4/127.0.0.1/tcp/7676/p2p/ ``` -::: +## Limitations -You should see the following log message indicating that your chain is connected to the local DA network: +Local DA is **not suitable for**: -```shell -11:07AM INF NewLocalDA: initialized LocalDA module=local-da -11:07AM INF Listening on host=localhost maxBlobSize=1974272 module=da port=7980 -11:07AM INF server started listening on=localhost:7980 module=da +- Production deployments +- Security testing +- Performance benchmarking (no real network latency) +- Testing DA-specific features (proofs, commitments) + +## Transitioning to Celestia + +When ready for production, switch to Celestia: + +1. Set up a Celestia light node +2. 
Update your start command: + +```bash +# From local DA +--evnode.da.address http://localhost:7980 + +# To Celestia +--evnode.da.address http://localhost:26658 +--evnode.da.auth_token $AUTH_TOKEN +--evnode.da.header_namespace $HEADER_NAMESPACE +--evnode.da.data_namespace $DATA_NAMESPACE ``` -## Summary +See [Celestia Guide](/guides/da-layers/celestia) for full instructions. + +## See Also -By following these steps, you will set up a local DA network node and configure your chain to post data to it. This setup is useful for testing and development in a controlled environment. You can find more information in the [local-da README](../../../tools/local-da/README.md) +- [Celestia Guide](/guides/da-layers/celestia) - Production DA setup +- [EVM Quickstart](/getting-started/evm/quickstart) - Getting started with EVM +- [Cosmos Quickstart](/getting-started/cosmos/quickstart) - Getting started with Cosmos SDK diff --git a/docs/guides/operations/troubleshooting.md b/docs/guides/operations/troubleshooting.md index a3e26be799..c8fbcf5623 100644 --- a/docs/guides/operations/troubleshooting.md +++ b/docs/guides/operations/troubleshooting.md @@ -1,10 +1,318 @@ # Troubleshooting - +Common issues and solutions when running Evolve nodes. + +## Diagnostic Commands + +### Check Node Status + +```bash +# Health check +curl http://localhost:7331/health/live +curl http://localhost:7331/health/ready + +# Node status +curl http://localhost:26657/status +``` + +### View Logs + +```bash +# Follow logs in real-time +journalctl -u evnode -f + +# Search for errors +journalctl -u evnode | grep -i error +``` + +## Common Issues + +### Node Won't Start + +**Symptom:** Node exits immediately after starting. + +**Solutions:** + +1. Check for port conflicts: + +```bash +lsof -i :26657 +lsof -i :7676 +``` + +1. Verify configuration file syntax: + +```bash +cat ~/.evnode/config/evnode.yml +``` + +1. 
Check data directory permissions: + +```bash +ls -la ~/.evnode/data +``` + +### DA Connection Failures + +**Symptom:** Logs show `DA layer submission failed` errors. + +**Error example:** + +``` +ERR DA layer submission failed error="connection refused" +``` + +**Solutions:** + +1. Verify DA endpoint is reachable: + +```bash +curl http://localhost:26658/health +``` + +1. Check authentication token (Celestia): + +```bash +celestia light auth write --p2p.network mocha +``` + +1. Verify DA node is fully synced: + +```bash +celestia header sync-state +``` + +### Out of DA Funds + +**Symptom:** `Code: 19` errors in logs. + +**Error example:** + +``` +ERR DA layer submission failed error="Codespace: 'sdk', Code: 19, Message: " +``` + +**Solutions:** + +1. Check DA account balance +2. Fund the account with more tokens +3. Increase gas price to unstick pending transactions: + +```bash +--evnode.da.gas_price 0.05 +``` + +See [Restart Chain Guide](/guides/restart-chain) for detailed steps. + +### P2P Connection Issues + +**Symptom:** Node not syncing, no peers connected. + +**Solutions:** + +1. Verify peer address format: + +```bash +# Correct format +/ip4/1.2.3.4/tcp/7676/p2p/12D3KooWABC... + +# NOT just the peer ID +12D3KooWABC... +``` + +1. Check firewall allows P2P port: + +```bash +sudo ufw status +# Ensure port 7676 (or your P2P port) is allowed +``` + +1. Try DA-only sync mode (no P2P): + +```bash +evnode start --evnode.da.address http://localhost:26658 +# Leave --evnode.p2p.peers empty +``` + +### Node Falling Behind + +**Symptom:** `catching_up: true` in status, height increasing slowly. + +**Solutions:** + +1. Check system resources: + +```bash +htop +df -h +``` + +1. Increase DA request timeout: + +```bash +--evnode.da.request_timeout 60s +``` + +1. Verify DA layer is responding quickly: + +```bash +time curl http://localhost:26658/header/sync_state +``` + +### Execution Layer Desync + +**Symptom:** State root mismatches, execution errors. 
+ +**EVM (ev-reth):** + +```bash +# Check ev-reth logs for errors +journalctl -u ev-reth -f + +# Verify Engine API connectivity +curl -X POST -H "Content-Type: application/json" \ + -H "Authorization: Bearer $(cat jwt.hex)" \ + --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \ + http://localhost:8551 +``` + +**Cosmos SDK (ev-abci):** + +```bash +# Check app hash consistency +curl http://localhost:26657/status | jq '.sync_info' +``` + +## Reset Procedures + +### Soft Reset (Keep Genesis) + +Reset state while keeping configuration: + +```bash +# Stop the node +systemctl stop evnode + +# Clear data directory +rm -rf ~/.evnode/data/* + +# Restart +systemctl start evnode +``` + +### Hard Reset (Full Reinitialize) + +Complete reset including configuration: + +```bash +# Stop the node +systemctl stop evnode + +# Remove everything +rm -rf ~/.evnode + +# Reinitialize +evnode init +``` + +### Reset EVM State (ev-reth) + +```bash +# Stop both nodes +systemctl stop evnode ev-reth + +# Clear ev-reth data +rm -rf ~/.ev-reth/db + +# Clear ev-node cache +rm -rf ~/.evnode/data/cache + +# Restart +systemctl start ev-reth evnode +``` + +## Log Analysis + +### Important Log Messages + +**Healthy operation:** + +``` +INF Creating and publishing block height=100 module=BlockManager +INF block marked as DA included blockHeight=100 module=BlockManager +INF indexed block events height=100 module=txindex +``` + +**Warning signs:** + +``` +WRN block production slowed due to pending DA submissions +WRN peer connection failed, retrying +``` + +**Errors requiring action:** + +``` +ERR DA layer submission failed +ERR failed to execute block +ERR P2P network unavailable +``` + +### Enable Debug Logging + +```bash +evnode start --log.level debug +``` + +Or in configuration: + +```yaml +log: + level: debug +``` + +## Performance Issues + +### High Memory Usage + +1. Reduce cache size in configuration +2. Enable lazy aggregation mode +3. 
Limit max pending blocks: + +```bash +--evnode.node.max_pending_blocks 50 +``` + +### High CPU Usage + +1. Increase block time: + +```bash +--evnode.node.block_time 2s +``` + +1. Check for transaction spam +2. Monitor execution layer performance + +### Disk Space + +1. Check disk usage: + +```bash +du -sh ~/.evnode/data/* +``` + +1. Prune old data (if supported by execution layer) +2. Consider moving data to larger disk + +## Getting Help + +1. Check logs for specific error messages +2. Search [GitHub Issues](https://github.com/evstack/ev-node/issues) +3. Join the community Discord for support + +## See Also + +- [Reset State Guide](/guides/reset-state) - Detailed reset procedures +- [Restart Chain Guide](/guides/restart-chain) - Restarting after issues +- [Monitoring Guide](/guides/operations/monitoring) - Proactive monitoring diff --git a/docs/guides/operations/upgrades.md b/docs/guides/operations/upgrades.md index f130f001bc..bde6d852d0 100644 --- a/docs/guides/operations/upgrades.md +++ b/docs/guides/operations/upgrades.md @@ -1,9 +1,273 @@ # Upgrades - +Guide for upgrading Evolve nodes and handling version migrations. + +## Upgrade Types + +### Minor Upgrades (Patch/Minor Version) + +Non-breaking changes, bug fixes, and minor improvements. + +**Process:** + +1. Stop the node +2. Replace binary +3. Restart + +```bash +# Stop +systemctl stop evnode + +# Upgrade (example with go install) +go install github.com/evstack/ev-node@v1.2.3 + +# Restart +systemctl start evnode +``` + +### Major Upgrades (Breaking Changes) + +May require state migration or coordinated network upgrade. + +**Process:** + +1. Review changelog for breaking changes +2. Coordinate upgrade height with network +3. Stop at designated height +4. Upgrade binary +5. Run any migration scripts +6. 
Restart + +## ev-node Upgrades + +### Check Current Version + +```bash +evnode version +``` + +### Upgrade Binary + +**Using Go:** + +```bash +go install github.com/evstack/ev-node@latest +``` + +**Using Docker:** + +```bash +docker pull evstack/evnode:latest +``` + +**From Source:** + +```bash +cd ev-node +git fetch --tags +git checkout v1.2.3 +make build +``` + +### Configuration Changes + +After upgrading, check for new or changed configuration options: + +1. Review the [changelog](https://github.com/evstack/ev-node/releases) +2. Compare your config with the new defaults +3. Update configuration as needed + +## ev-reth Upgrades + +### Version Compatibility + +ev-reth versions must be compatible with ev-node. Check the compatibility matrix: + +| ev-node | ev-reth | +|---------|---------| +| v1.x | v0.x | + +### Upgrade Process + +```bash +# Stop both nodes +systemctl stop evnode ev-reth + +# Upgrade ev-reth +cd ev-reth +git fetch --tags +git checkout v0.2.0 +cargo build --release + +# Verify chainspec compatibility +# (check for new required fields) + +# Restart +systemctl start ev-reth evnode +``` + +### Database Migrations + +Some ev-reth upgrades require database migration: + +```bash +# Check if migration needed +ev-reth db version + +# Run migration if needed +ev-reth db migrate +``` + +## ev-abci Upgrades + +### Cosmos SDK Compatibility + +ev-abci tracks Cosmos SDK versions. Ensure your app's SDK version is compatible: + +| ev-abci | Cosmos SDK | +|---------|------------| +| v1.x | v0.50.x | + +### Module Upgrades + +For Cosmos SDK apps with custom modules: + +1. Update module dependencies in `go.mod` +2. Run any module migration handlers +3. 
Update genesis if needed + +```go +// In app.go upgrade handler +app.UpgradeKeeper.SetUpgradeHandler( + "v2", + func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) { + // Migration logic + return app.ModuleManager.RunMigrations(ctx, app.Configurator(), fromVM) + }, +) +``` + +## Coordinated Network Upgrades + +For networks with multiple node operators: + +### Planning + +1. Announce upgrade timeline (minimum 1 week notice) +2. Agree on upgrade block height +3. Share upgrade binary/instructions + +### Execution + +1. All nodes stop at designated height +2. Operators upgrade binaries +3. Coordinators verify readiness +4. Network restarts + +### Handling Stragglers + +If some nodes don't upgrade: + +- They will reject new blocks (if consensus rules changed) +- They can sync from upgraded nodes after upgrading + +## Rollback Procedures + +If an upgrade causes issues: + +### ev-node Rollback + +```bash +# Stop +systemctl stop evnode + +# Restore previous binary +cp /backup/evnode-v1.1.0 /usr/local/bin/evnode + +# Optionally restore data +# (only if upgrade corrupted state) +rm -rf ~/.evnode/data +cp -r /backup/evnode-data ~/.evnode/data + +# Restart +systemctl start evnode +``` + +### ev-reth Rollback + +```bash +# Stop +systemctl stop ev-reth evnode + +# Restore binary +cp /backup/ev-reth-v0.1.0 /usr/local/bin/ev-reth + +# Restore database if needed +rm -rf ~/.ev-reth/db +cp -r /backup/ev-reth-db ~/.ev-reth/db + +# Restart +systemctl start ev-reth evnode +``` + +## State Migration + +### Export State + +Before major upgrades, export state: + +```bash +# ev-node +evnode export > state-export.json + +# Cosmos SDK +appd export --height > genesis-export.json +``` + +### Migrate State + +If state format changes: + +```bash +# Run migration tool +evnode migrate state-export.json --to-version v2 > state-migrated.json +``` + +### Import State + +```bash +# Initialize with migrated state +evnode init --genesis state-migrated.json 
+``` + +## Best Practices + +### Pre-Upgrade Checklist + +- [ ] Review changelog for breaking changes +- [ ] Test upgrade on testnet first +- [ ] Backup current state +- [ ] Backup configuration files +- [ ] Notify dependent services +- [ ] Schedule maintenance window + +### Post-Upgrade Verification + +- [ ] Node starts successfully +- [ ] Blocks are being produced/synced +- [ ] RPC endpoints responding +- [ ] Metrics reporting correctly +- [ ] P2P connections established + +### Automation + +Consider automating upgrades with tools like: + +- Ansible playbooks +- Kubernetes operators +- systemd timers for scheduled upgrades + +## See Also + +- [Troubleshooting Guide](/guides/operations/troubleshooting) - Handling upgrade issues +- [Deployment Guide](/guides/operations/deployment) - Infrastructure setup diff --git a/docs/guides/running-nodes/aggregator.md b/docs/guides/running-nodes/aggregator.md index 36b436f16c..49f0776c74 100644 --- a/docs/guides/running-nodes/aggregator.md +++ b/docs/guides/running-nodes/aggregator.md @@ -1,12 +1,194 @@ # Aggregator Node - +An aggregator (also called sequencer) is the node responsible for producing blocks in an Evolve chain. It collects transactions, orders them, creates blocks, and submits data to the DA layer. 
+ +## Prerequisites + +- ev-node installed +- Access to a DA layer (Celestia or local-da) +- Signer key for block signing + +## Configuration + +Enable aggregator mode with the `--evnode.node.aggregator` flag: + +```bash +evnode start --evnode.node.aggregator +``` + +### Required Flags + +| Flag | Description | +|------------------------------|-------------------------| +| `--evnode.node.aggregator` | Enable block production | +| `--evnode.da.address` | DA layer endpoint | +| `--evnode.signer.passphrase` | Signer key passphrase | + +### Block Time Configuration + +Control how often blocks are produced: + +```bash +# Produce blocks every 500ms +evnode start \ + --evnode.node.aggregator \ + --evnode.node.block_time 500ms +``` + +Default block time is 1 second. + +## Lazy Aggregation Mode + +Lazy mode only produces blocks when there are transactions, reducing DA costs during low activity periods: + +```bash +evnode start \ + --evnode.node.aggregator \ + --evnode.node.lazy_aggregator \ + --evnode.node.lazy_block_time 30s +``` + +| Flag | Description | +|---------------------------------|--------------------------------------| +| `--evnode.node.lazy_aggregator` | Enable lazy mode | +| `--evnode.node.lazy_block_time` | Max wait between blocks in lazy mode | + +In lazy mode: + +- Blocks are produced immediately when transactions arrive +- If no transactions, wait up to `lazy_block_time` before producing an empty block +- Reduces DA submission costs during idle periods + +## DA Submission Settings + +Configure how blocks are batched and submitted to DA: + +```bash +evnode start \ + --evnode.node.aggregator \ + --evnode.da.address http://localhost:26658 \ + --evnode.da.namespace "my_namespace" \ + --evnode.da.gas_price 0.01 \ + --evnode.da.batching_strategy adaptive +``` + +### Batching Strategies + +| Strategy | Description | +|-------------|---------------------------------------------| +| `immediate` | Submit as soon as blocks are ready | +| `time` | Wait for time 
interval before submitting | +| `size` | Wait until batch reaches size threshold | +| `adaptive` | Balance between size and time (recommended) | + +### Max Pending Blocks + +Limit how many blocks can be waiting for DA submission: + +```bash +--evnode.node.max_pending_blocks 100 +``` + +When this limit is reached, block production pauses until some blocks are confirmed on DA. + +## Signer Configuration + +The aggregator needs a signer key to sign blocks: + +```bash +# Using file-based signer +evnode start \ + --evnode.node.aggregator \ + --evnode.signer.signer_type file \ + --evnode.signer.signer_path /path/to/keys \ + --evnode.signer.passphrase "your-passphrase" +``` + +## Complete Example + +### EVM Chain (ev-reth) + +```bash +evnode start \ + --evnode.node.aggregator \ + --evnode.node.block_time 1s \ + --evnode.da.address http://localhost:26658 \ + --evnode.da.namespace "my_evm_chain" \ + --evnode.da.gas_price 0.01 \ + --evnode.signer.passphrase "secret" \ + --evnode.rpc.address tcp://0.0.0.0:26657 +``` + +### Cosmos SDK Chain (ev-abci) + +```bash +appd start \ + --evnode.node.aggregator \ + --evnode.node.block_time 1s \ + --evnode.da.address http://localhost:26658 \ + --evnode.da.namespace "my_cosmos_chain" \ + --evnode.signer.passphrase "secret" +``` + +## Monitoring + +Enable metrics to monitor aggregator performance: + +```bash +evnode start \ + --evnode.node.aggregator \ + --evnode.instrumentation.prometheus \ + --evnode.instrumentation.prometheus_listen_addr :2112 +``` + +Key metrics to watch: + +- `evolve_block_height` - Current block height +- `evolve_da_submission_total` - DA submissions count +- `evolve_da_submission_failures` - Failed DA submissions + +Enable the DA visualizer for detailed submission monitoring: + +```bash +--evnode.rpc.enable_da_visualization +``` + +Then access `http://localhost:7331/da` in your browser. 
+ +## Health Checks + +The aggregator exposes health endpoints: + +```bash +# Liveness check +curl http://localhost:7331/health/live + +# Readiness check (includes block production rate) +curl http://localhost:7331/health/ready +``` + +## Troubleshooting + +### Blocks Not Being Produced + +1. Verify aggregator mode is enabled in logs +2. Check DA layer connectivity +3. Ensure signer key is accessible + +### DA Submission Failures + +1. Check DA layer endpoint is reachable +2. Verify DA account has sufficient funds +3. Increase gas price if transactions are being outbid + +### High Pending Block Count + +1. Reduce block time or enable lazy mode +2. Increase DA gas price for faster inclusion +3. Check DA layer congestion + +## See Also + +- [Full Node Guide](/guides/running-nodes/full-node) - Running a non-producing node +- [DA Visualization](/guides/tools/visualizer) - Monitor DA submissions +- [Monitoring Guide](/guides/operations/monitoring) - Prometheus metrics diff --git a/docs/guides/running-nodes/attester.md b/docs/guides/running-nodes/attester.md index 1e1a234392..66b7e5442b 100644 --- a/docs/guides/running-nodes/attester.md +++ b/docs/guides/running-nodes/attester.md @@ -1,9 +1,67 @@ # Attester Node - +Attester nodes participate in the validator network to provide faster soft finality through attestations. This is an advanced feature for chains requiring sub-DA-finality confirmation times. + +## Overview + +Attesters: + +- Validate blocks produced by the aggregator +- Sign attestations confirming block validity +- Participate in a soft consensus protocol +- Enable faster finality than DA-only confirmation + +## Status + +The attester network feature is under active development. This documentation will be updated as the feature matures. + +For technical details on the validator network design, see [ADR-022: Validator Network](https://github.com/evstack/ev-node/blob/main/specs/src/adr/adr-022-validator-network.md). 
+ +## How It Works + +### Soft Finality + +Without attesters, finality depends on DA confirmation (~6-12 seconds for Celestia). With an attester network: + +1. Aggregator produces block +2. Attesters validate and sign attestations +3. When threshold of attestations collected, block has soft finality +4. DA finality provides hard finality later + +### Trust Model + +- Soft finality requires trusting the attester set (configurable threshold) +- Hard finality (DA) remains trustless +- Applications can choose which finality level to wait for + +## Configuration (Preview) + +```bash +# Run as attester (preview configuration) +evnode start \ + --evnode.node.attester \ + --evnode.da.address http://localhost:26658 \ + --evnode.p2p.peers /ip4/sequencer.example.com/tcp/7676/p2p/12D3KooW... +``` + +## Use Cases + +### Low-Latency Applications + +Applications requiring confirmation faster than DA finality: + +- Trading platforms +- Gaming +- Real-time settlement + +### Enhanced Security + +Additional validation layer before DA confirmation: + +- Multi-party validation +- Early fraud detection + +## See Also + +- [Finality Concepts](/concepts/finality) - Understanding finality in Evolve +- [Full Node Guide](/guides/running-nodes/full-node) - Running a full node diff --git a/docs/guides/running-nodes/light-node.md b/docs/guides/running-nodes/light-node.md deleted file mode 100644 index bfcfac5fbf..0000000000 --- a/docs/guides/running-nodes/light-node.md +++ /dev/null @@ -1,9 +0,0 @@ -# Light Node - - diff --git a/docs/reference/configuration/ev-reth-chainspec.md b/docs/reference/configuration/ev-reth-chainspec.md index 2f0813085c..9a6585e071 100644 --- a/docs/reference/configuration/ev-reth-chainspec.md +++ b/docs/reference/configuration/ev-reth-chainspec.md @@ -22,33 +22,33 @@ Chain configuration parameters. 
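For orientation before the field-by-field reference, a skeletal chainspec combining fields documented on this page (all values are illustrative; a real chainspec also sets an `alloc` section and the remaining top-level fields described below):

```json
{
  "config": {
    "chainId": 1337,
    "homesteadBlock": 0,
    "eip150Block": 0,
    "eip155Block": 0,
    "eip158Block": 0,
    "byzantiumBlock": 0,
    "constantinopleBlock": 0,
    "petersburgBlock": 0,
    "istanbulBlock": 0,
    "berlinBlock": 0,
    "londonBlock": 0,
    "shanghaiTime": 0,
    "cancunTime": 0,
    "evolve": {
      "baseFeeSink": "0x1111111111111111111111111111111111111111",
      "baseFeeRedirectActivationHeight": 0
    }
  },
  "difficulty": "0x0",
  "gasLimit": "0x1c9c380",
  "nonce": "0x0",
  "timestamp": "0x0"
}
```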
### Standard Ethereum Fields -| Field | Type | Description | -|-------|------|-------------| -| `chainId` | number | Unique chain identifier | -| `homesteadBlock` | number | Homestead fork block (use 0) | -| `eip150Block` | number | EIP-150 fork block (use 0) | -| `eip155Block` | number | EIP-155 fork block (use 0) | -| `eip158Block` | number | EIP-158 fork block (use 0) | -| `byzantiumBlock` | number | Byzantium fork block (use 0) | +| Field | Type | Description | +|-----------------------|--------|-----------------------------------| +| `chainId` | number | Unique chain identifier | +| `homesteadBlock` | number | Homestead fork block (use 0) | +| `eip150Block` | number | EIP-150 fork block (use 0) | +| `eip155Block` | number | EIP-155 fork block (use 0) | +| `eip158Block` | number | EIP-158 fork block (use 0) | +| `byzantiumBlock` | number | Byzantium fork block (use 0) | | `constantinopleBlock` | number | Constantinople fork block (use 0) | -| `petersburgBlock` | number | Petersburg fork block (use 0) | -| `istanbulBlock` | number | Istanbul fork block (use 0) | -| `berlinBlock` | number | Berlin fork block (use 0) | -| `londonBlock` | number | London fork block (use 0) | -| `shanghaiTime` | number | Shanghai fork timestamp (use 0) | -| `cancunTime` | number | Cancun fork timestamp (use 0) | +| `petersburgBlock` | number | Petersburg fork block (use 0) | +| `istanbulBlock` | number | Istanbul fork block (use 0) | +| `berlinBlock` | number | Berlin fork block (use 0) | +| `londonBlock` | number | London fork block (use 0) | +| `shanghaiTime` | number | Shanghai fork timestamp (use 0) | +| `cancunTime` | number | Cancun fork timestamp (use 0) | ### config.evolve Evolve-specific extensions. 
-| Field | Type | Description | -|-------|------|-------------| -| `baseFeeSink` | address | Redirect base fees to this address | -| `baseFeeRedirectActivationHeight` | number | Block height to activate redirect | -| `deployAllowlist` | object | Contract deployment restrictions | -| `contractSizeLimit` | number | Max contract bytecode size (bytes) | -| `mintPrecompile` | object | Native token minting precompile | +| Field | Type | Description | +|-----------------------------------|---------|------------------------------------| +| `baseFeeSink` | address | Redirect base fees to this address | +| `baseFeeRedirectActivationHeight` | number | Block height to activate redirect | +| `deployAllowlist` | object | Contract deployment restrictions | +| `contractSizeLimit` | number | Max contract bytecode size (bytes) | +| `mintPrecompile` | object | Native token minting precompile | #### deployAllowlist @@ -59,9 +59,9 @@ Evolve-specific extensions. } ``` -| Field | Type | Description | -|-------|------|-------------| -| `admin` | address | Can modify the allowlist | +| Field | Type | Description | +|-----------|-----------|-----------------------------| +| `admin` | address | Can modify the allowlist | | `enabled` | address[] | Addresses allowed to deploy | #### mintPrecompile @@ -73,9 +73,9 @@ Evolve-specific extensions. } ``` -| Field | Type | Description | -|-------|------|-------------| -| `admin` | address | Can call mint() | +| Field | Type | Description | +|-----------|---------|--------------------| +| `admin` | address | Can call mint() | | `address` | address | Precompile address | ## alloc @@ -99,24 +99,24 @@ Pre-funded accounts and contract deployments. 
} ``` -| Field | Type | Description | -|-------|------|-------------| -| `balance` | hex string | Wei balance | -| `code` | hex string | Contract bytecode (optional) | -| `storage` | object | Storage slots (optional) | -| `nonce` | hex string | Account nonce (optional) | +| Field | Type | Description | +|-----------|------------|------------------------------| +| `balance` | hex string | Wei balance | +| `code` | hex string | Contract bytecode (optional) | +| `storage` | object | Storage slots (optional) | +| `nonce` | hex string | Account nonce (optional) | ## Top-Level Fields -| Field | Type | Description | -|-------|------|-------------| -| `coinbase` | address | Default fee recipient | +| Field | Type | Description | +|--------------|------------|--------------------------------| +| `coinbase` | address | Default fee recipient | | `difficulty` | hex string | Initial difficulty (use "0x0") | -| `gasLimit` | hex string | Block gas limit | -| `nonce` | hex string | Genesis nonce (use "0x0") | -| `timestamp` | hex string | Genesis timestamp | -| `extraData` | hex string | Extra data (optional) | -| `mixHash` | hex string | Mix hash (optional) | +| `gasLimit` | hex string | Block gas limit | +| `nonce` | hex string | Genesis nonce (use "0x0") | +| `timestamp` | hex string | Genesis timestamp | +| `extraData` | hex string | Extra data (optional) | +| `mixHash` | hex string | Mix hash (optional) | ## Example From 88050248cfe554d2169801ee6dd5b3fa92e66c33 Mon Sep 17 00:00:00 2001 From: tac0turtle Date: Wed, 28 Jan 2026 13:47:58 +0100 Subject: [PATCH 4/4] format --- RELEASE.md | 4 ++-- docs/concepts/data-availability.md | 1 + docs/concepts/fee-systems.md | 4 ++++ docs/ev-abci/integration-guide.md | 1 + docs/ev-abci/migration-from-cometbft.md | 6 +++--- docs/ev-abci/rpc-compatibility.md | 1 + docs/ev-reth/engine-api.md | 7 +++++++ docs/ev-reth/features/contract-size-limits.md | 2 ++ docs/ev-reth/overview.md | 1 + docs/getting-started/cosmos/integrate-ev-abci.md | 1 + 
docs/getting-started/cosmos/migration-guide.md | 1 + docs/getting-started/cosmos/quickstart.md | 1 + docs/getting-started/custom/implement-executor.md | 10 ++++++++++ docs/getting-started/custom/quickstart.md | 1 + docs/getting-started/evm/deploy-contracts.md | 2 +- docs/getting-started/evm/quickstart.md | 5 ++++- docs/guides/migrating-to-ev-abci.md | 6 +++--- docs/overview/architecture.md | 1 + scripts/utils.mk | 4 ++-- 19 files changed, 47 insertions(+), 12 deletions(-) diff --git a/RELEASE.md b/RELEASE.md index fd11b700ca..e269b6680f 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -97,8 +97,8 @@ Packages must be released in the following order: These packages only depend on `core` and can be released in parallel after `core`: -2. **github.com/evstack/ev-node** - Path: `./` (root) -3. **github.com/evstack/ev-node/execution/evm** - Path: `./execution/evm` +1. **github.com/evstack/ev-node** - Path: `./` (root) +2. **github.com/evstack/ev-node/execution/evm** - Path: `./execution/evm` #### Phase 3: Application Packages diff --git a/docs/concepts/data-availability.md b/docs/concepts/data-availability.md index 69896ce4ab..cd3af8eaa2 100644 --- a/docs/concepts/data-availability.md +++ b/docs/concepts/data-availability.md @@ -5,6 +5,7 @@ Data availability (DA) ensures that all transaction data required to verify the ## Why DA Matters Without data availability guarantees: + - Nodes can't verify state transitions - Users can't prove their balances - The chain's security model breaks down diff --git a/docs/concepts/fee-systems.md b/docs/concepts/fee-systems.md index 057fc43c08..e3e67bf158 100644 --- a/docs/concepts/fee-systems.md +++ b/docs/concepts/fee-systems.md @@ -67,6 +67,7 @@ Both execution environments incur DA fees when blocks are posted to the DA layer ### Who Pays? The sequencer pays DA fees from their own funds. 
They recover costs through: + - Priority fees from users - Base fee redirect (if configured) - External subsidy @@ -134,11 +135,13 @@ User Transaction ### Execution Costs EVM: + ```bash cast estimate --rpc-url http://localhost:8545 "transfer(address,uint256)" ``` Cosmos: + ```bash appd tx bank send 1000stake --gas auto --gas-adjustment 1.3 ``` @@ -146,6 +149,7 @@ appd tx bank send 1000stake --gas auto --gas-adjustment 1.3 ### DA Costs Depends on: + - DA layer pricing (e.g., Celestia gas price) - Data size per block - Submission frequency diff --git a/docs/ev-abci/integration-guide.md b/docs/ev-abci/integration-guide.md index 67b855de4e..fc9463350a 100644 --- a/docs/ev-abci/integration-guide.md +++ b/docs/ev-abci/integration-guide.md @@ -70,6 +70,7 @@ Check for ev-abci flags: ``` Expected flags: + ``` --evnode.node.aggregator Run as block producer --evnode.da.address DA layer address diff --git a/docs/ev-abci/migration-from-cometbft.md b/docs/ev-abci/migration-from-cometbft.md index f49ba6df6f..eb6abcd9e0 100644 --- a/docs/ev-abci/migration-from-cometbft.md +++ b/docs/ev-abci/migration-from-cometbft.md @@ -41,9 +41,9 @@ import ( ) ``` -2. Add the migration manager keeper to your app struct -3. Register the module in your module manager -4. Configure the migration manager in your app initialization +1. Add the migration manager keeper to your app struct +2. Register the module in your module manager +3. 
Configure the migration manager in your app initialization ### Step 2: Replace Staking Module with Wrapper diff --git a/docs/ev-abci/rpc-compatibility.md b/docs/ev-abci/rpc-compatibility.md index 9b8e8f9898..99dffca702 100644 --- a/docs/ev-abci/rpc-compatibility.md +++ b/docs/ev-abci/rpc-compatibility.md @@ -115,6 +115,7 @@ Default ports match CometBFT: | 26656 | P2P | Configure via flags: + ```bash --evnode.rpc.address tcp://0.0.0.0:26657 --evnode.p2p.listen /ip4/0.0.0.0/tcp/26656 diff --git a/docs/ev-reth/engine-api.md b/docs/ev-reth/engine-api.md index 905e6e3075..d0b1784cd2 100644 --- a/docs/ev-reth/engine-api.md +++ b/docs/ev-reth/engine-api.md @@ -16,6 +16,7 @@ openssl rand -hex 32 > jwt.hex ``` Configure both sides: + - ev-reth: `--authrpc.jwtsecret jwt.hex` - ev-node: `--evm.jwt-secret jwt.hex` @@ -58,6 +59,7 @@ ev-node ev-reth Update the fork choice and optionally start building a new block. **Request:** + ```json { "method": "engine_forkchoiceUpdatedV3", @@ -79,6 +81,7 @@ Update the fork choice and optionally start building a new block. ``` **Response:** + ```json { "payloadStatus": { @@ -94,6 +97,7 @@ Update the fork choice and optionally start building a new block. Retrieve a built payload. **Request:** + ```json { "method": "engine_getPayloadV3", @@ -102,6 +106,7 @@ Retrieve a built payload. ``` **Response:** + ```json { "executionPayload": { @@ -129,6 +134,7 @@ Retrieve a built payload. Validate and execute a payload. **Request:** + ```json { "method": "engine_newPayloadV3", @@ -141,6 +147,7 @@ Validate and execute a payload. 
``` **Response:** + ```json { "status": "VALID", diff --git a/docs/ev-reth/features/contract-size-limits.md b/docs/ev-reth/features/contract-size-limits.md index 0d73f30c03..ee90d240ff 100644 --- a/docs/ev-reth/features/contract-size-limits.md +++ b/docs/ev-reth/features/contract-size-limits.md @@ -40,11 +40,13 @@ In your chainspec (`genesis.json`): ## Trade-offs **Pros:** + - Deploy larger, more complex contracts - Avoid splitting logic across multiple contracts - Simpler contract architecture **Cons:** + - Higher deployment gas costs - Longer deployment times - May impact block gas limits diff --git a/docs/ev-reth/overview.md b/docs/ev-reth/overview.md index 79d3c5d423..bbba326875 100644 --- a/docs/ev-reth/overview.md +++ b/docs/ev-reth/overview.md @@ -32,6 +32,7 @@ ev-reth extends reth with: ``` ev-node drives ev-reth through the Engine API: + 1. ev-node calls `engine_forkchoiceUpdated` with payload attributes 2. ev-reth builds a block from pending transactions 3. ev-node calls `engine_getPayload` to retrieve the block diff --git a/docs/getting-started/cosmos/integrate-ev-abci.md b/docs/getting-started/cosmos/integrate-ev-abci.md index 9983693a49..192bfb75e9 100644 --- a/docs/getting-started/cosmos/integrate-ev-abci.md +++ b/docs/getting-started/cosmos/integrate-ev-abci.md @@ -75,6 +75,7 @@ Check that ev-abci flags are available: ``` You should see flags like: + ``` --evnode.node.aggregator --evnode.da.address diff --git a/docs/getting-started/cosmos/migration-guide.md b/docs/getting-started/cosmos/migration-guide.md index 59e09b9e9d..b0bd81553a 100644 --- a/docs/getting-started/cosmos/migration-guide.md +++ b/docs/getting-started/cosmos/migration-guide.md @@ -74,6 +74,7 @@ appd evolve-migrate ``` This command: + - Migrates blocks from CometBFT to Evolve format - Converts state to Evolve format - Creates `ev_genesis.json` diff --git a/docs/getting-started/cosmos/quickstart.md b/docs/getting-started/cosmos/quickstart.md index 6e87a6d792..4bd6a73b37 100644 --- 
a/docs/getting-started/cosmos/quickstart.md +++ b/docs/getting-started/cosmos/quickstart.md @@ -61,6 +61,7 @@ mychaind start \ ``` You should see blocks being produced: + ``` INF block marked as DA included blockHeight=1 INF block marked as DA included blockHeight=2 diff --git a/docs/getting-started/custom/implement-executor.md b/docs/getting-started/custom/implement-executor.md index 18ec9c0dfa..ee1c362e02 100644 --- a/docs/getting-started/custom/implement-executor.md +++ b/docs/getting-started/custom/implement-executor.md @@ -22,13 +22,16 @@ func (e *MyExecutor) InitChain(ctx context.Context, genesis Genesis) ([]byte, er ``` **Parameters:** + - `genesis` — Contains initial state, chain ID, and configuration **Returns:** + - Initial state root (hash of genesis state) - Error if initialization fails **Responsibilities:** + - Parse genesis data - Initialize state storage - Set up initial accounts/balances @@ -63,10 +66,12 @@ func (e *MyExecutor) GetTxs(ctx context.Context) ([][]byte, error) ``` **Returns:** + - Slice of transaction bytes from your mempool - Error if retrieval fails **Responsibilities:** + - Return transactions ready for inclusion - Optionally prioritize by fee, nonce, etc. 
- Remove invalid transactions @@ -94,15 +99,18 @@ func (e *MyExecutor) ExecuteTxs( ``` **Parameters:** + - `txs` — Ordered transactions to execute - `height` — Block height - `timestamp` — Block timestamp **Returns:** + - `ExecutionResult` containing new state root and gas used - Error only for system failures (not tx failures) **Responsibilities:** + - Execute each transaction in order - Update state - Track gas usage @@ -154,9 +162,11 @@ func (e *MyExecutor) SetFinal(ctx context.Context, height uint64) error ``` **Parameters:** + - `height` — The block height that is now DA-finalized **Responsibilities:** + - Mark state as finalized - Prune old state if desired - Trigger any finality-dependent logic diff --git a/docs/getting-started/custom/quickstart.md b/docs/getting-started/custom/quickstart.md index 87785de584..6e86526393 100644 --- a/docs/getting-started/custom/quickstart.md +++ b/docs/getting-started/custom/quickstart.md @@ -32,6 +32,7 @@ ls apps/testapp/ ``` Key files: + - `executor.go` — Implements the Executor interface - `main.go` — Wires everything together diff --git a/docs/getting-started/evm/deploy-contracts.md b/docs/getting-started/evm/deploy-contracts.md index 716810919f..09e18ca17a 100644 --- a/docs/getting-started/evm/deploy-contracts.md +++ b/docs/getting-started/evm/deploy-contracts.md @@ -6,7 +6,7 @@ Deploy smart contracts to your Evolve EVM chain using Foundry or Hardhat. 
 | Setting | Local | Testnet (example) |
 |---------|-------|-------------------|
-| RPC URL | http://localhost:8545 | https://rpc.your-chain.com |
+| RPC URL | `http://localhost:8545` | `https://rpc.your-chain.com` |
 | Chain ID | 1337 | Your chain ID |
 | Currency | ETH | Your native token |

diff --git a/docs/getting-started/evm/quickstart.md b/docs/getting-started/evm/quickstart.md
index 4f25f3b4ab..001faa5dad 100644
--- a/docs/getting-started/evm/quickstart.md
+++ b/docs/getting-started/evm/quickstart.md
@@ -16,6 +16,7 @@ local-da
 ```

 You should see:
+
 ```
 INF Listening on host=localhost port=7980
 ```
@@ -31,6 +32,7 @@ docker compose up -d
 ```

 This starts reth with Evolve's Engine API configuration. The default ports:
+
 - `8545` — JSON-RPC
 - `8551` — Engine API
@@ -56,6 +58,7 @@ Initialize and start:
 ```

 You should see blocks being produced:
+
 ```
 INF block marked as DA included blockHeight=1
 INF block marked as DA included blockHeight=2
@@ -68,7 +71,7 @@ Add the network to MetaMask:

 | Setting | Value |
 |---------|-------|
 | Network Name | Evolve Local |
-| RPC URL | http://localhost:8545 |
+| RPC URL | `http://localhost:8545` |
 | Chain ID | 1337 |
 | Currency | ETH |

diff --git a/docs/guides/migrating-to-ev-abci.md b/docs/guides/migrating-to-ev-abci.md
index f49ba6df6f..eb6abcd9e0 100644
--- a/docs/guides/migrating-to-ev-abci.md
+++ b/docs/guides/migrating-to-ev-abci.md
@@ -41,9 +41,9 @@ import (
 )
 ```

-2. Add the migration manager keeper to your app struct
-3. Register the module in your module manager
-4. Configure the migration manager in your app initialization
+1. Add the migration manager keeper to your app struct
+2. Register the module in your module manager
+3. 
Configure the migration manager in your app initialization ### Step 2: Replace Staking Module with Wrapper diff --git a/docs/overview/architecture.md b/docs/overview/architecture.md index 0681a5cbec..a262ba1236 100644 --- a/docs/overview/architecture.md +++ b/docs/overview/architecture.md @@ -163,6 +163,7 @@ Built on libp2p with: - **Topics**: `{chainID}-tx`, `{chainID}-header`, `{chainID}-data` Nodes discover peers through: + 1. Bootstrap/seed nodes 2. DHT peer exchange 3. PEX (peer exchange protocol) diff --git a/scripts/utils.mk b/scripts/utils.mk index c56d8d8116..1594806378 100644 --- a/scripts/utils.mk +++ b/scripts/utils.mk @@ -15,7 +15,7 @@ lint: vet @echo "--> Running golangci-lint" @golangci-lint run @echo "--> Running markdownlint" - @markdownlint --config .markdownlint.yaml '**/*.md' + @npx markdownlint-cli --config .markdownlint.yaml '**/*.md' @echo "--> Running hadolint" @hadolint test/docker/mockserv.Dockerfile @echo "--> Running yamllint" @@ -31,7 +31,7 @@ lint-fix: @echo "--> Formatting go" @golangci-lint run --fix @echo "--> Formatting markdownlint" - @markdownlint --config .markdownlint.yaml --ignore './changelog.md' '**/*.md' -f + @npx markdownlint-cli --config .markdownlint.yaml --ignore './changelog.md' '**/*.md' -f .PHONY: lint-fix ## vet: Run go vet