diff --git a/docs/concepts/block-lifecycle.md b/docs/concepts/block-lifecycle.md
new file mode 100644
index 000000000..c97171f90
--- /dev/null
+++ b/docs/concepts/block-lifecycle.md
@@ -0,0 +1,759 @@
+# Block Components
+
+## Abstract
+
+The block package provides a modular component-based architecture for handling block-related operations in full nodes. Instead of a single monolithic manager, the system is divided into specialized components that work together, each responsible for specific aspects of block processing. This architecture enables better separation of concerns, easier testing, and more flexible node configurations.
+
+The main components are:
+
+- **Executor**: Handles block production and state transitions (aggregator nodes only)
+- **Reaper**: Periodically retrieves transactions and submits them to the sequencer (aggregator nodes only)
+- **Submitter**: Manages submission of headers and data to the DA network (aggregator nodes only)
+- **Syncer**: Handles synchronization from both DA and P2P sources (all full nodes)
+- **Cache Manager**: Coordinates caching and tracking of blocks across all components
+
+A full node coordinates these components based on its role:
+
+- **Aggregator nodes**: Use all components for block production, submission, and synchronization
+- **Non-aggregator full nodes**: Use only Syncer and Cache for block synchronization
+
+```mermaid
+sequenceDiagram
+ title Overview of Block Manager
+
+ participant User
+ participant Sequencer
+ participant FN1 as Full Node 1
+ participant FN2 as Full Node 2
+ participant DAL as DA Layer
+
+ User->>Sequencer: Send Tx
+ Sequencer->>Sequencer: Generate Block
+ Sequencer->>DAL: Publish Block
+
+ Sequencer->>FN1: Gossip Block
+ Sequencer->>FN2: Gossip Block
+ FN1->>FN1: Verify Block
+ FN1->>FN2: Gossip Block
+ FN1->>FN1: Mark Block Soft Confirmed
+
+ FN2->>FN2: Verify Block
+ FN2->>FN2: Mark Block Soft Confirmed
+
+ DAL->>FN1: Retrieve Block
+ FN1->>FN1: Mark Block DA Included
+
+ DAL->>FN2: Retrieve Block
+ FN2->>FN2: Mark Block DA Included
+```
+
+### Component Architecture Overview
+
+```mermaid
+flowchart TB
+ subgraph BlockComponents [Modular Block Components]
+ EXE[Executor<br/>Block Production]
+ REA[Reaper<br/>Tx Collection]
+ SUB[Submitter<br/>DA Submission]
+ SYN[Syncer<br/>Block Sync]
+ CAC[Cache Manager<br/>State Tracking]
+ end
+
+ subgraph ExternalComponents [External Components]
+ CEXE[Core Executor]
+ SEQ[Sequencer]
+ DA[DA Layer]
+ HS[Header Store/P2P]
+ DS[Data Store/P2P]
+ ST[Local Store]
+ end
+
+ REA -->|GetTxs| CEXE
+ REA -->|SubmitBatch| SEQ
+ REA -->|Notify| EXE
+
+ EXE -->|CreateBlock| CEXE
+ EXE -->|ApplyBlock| CEXE
+ EXE -->|Save| ST
+ EXE -->|Track| CAC
+
+ EXE -->|Headers| SUB
+ EXE -->|Data| SUB
+ SUB -->|Submit| DA
+ SUB -->|Track| CAC
+
+ DA -->|Retrieve| SYN
+ HS -->|Headers| SYN
+ DS -->|Data| SYN
+
+ SYN -->|ApplyBlock| CEXE
+ SYN -->|Save| ST
+ SYN -->|Track| CAC
+ SYN -->|SetFinal| CEXE
+
+ CAC -->|Coordinate| EXE
+ CAC -->|Coordinate| SUB
+ CAC -->|Coordinate| SYN
+```
+
+## Protocol/Component Description
+
+The block components are initialized based on the node type:
+
+### Aggregator Components
+
+Aggregator nodes create all components for full block production and synchronization capabilities:
+
+```go
+components := block.NewAggregatorComponents(
+ config, // Node configuration
+ genesis, // Genesis state
+ store, // Local datastore
+ executor, // Core executor for state transitions
+ sequencer, // Sequencer client
+ da, // DA client
+ signer, // Block signing key
+ // P2P stores and options...
+)
+```
+
+### Non-Aggregator Components
+
+Non-aggregator full nodes create only synchronization components:
+
+```go
+components := block.NewSyncComponents(
+ config, // Node configuration
+ genesis, // Genesis state
+ store, // Local datastore
+ executor, // Core executor for state transitions
+ da, // DA client
+ // P2P stores and options... (no signer or sequencer needed)
+)
+```
+
+### Component Initialization Parameters
+
+| **Name** | **Type** | **Description** |
+| --------------------------- | ----------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| signing key | crypto.PrivKey | used for signing blocks and data after creation |
+| config | config.BlockManagerConfig | block manager configurations (see config options below) |
+| genesis | \*cmtypes.GenesisDoc | initialize the block manager with genesis state (genesis configuration defined in `config/genesis.json` file under the app directory) |
+| store | store.Store | local datastore for storing chain blocks and states (default local store path is `$db_dir/evolve` and `db_dir` specified in the `config.yaml` file under the app directory) |
+| mempool, proxyapp, eventbus | mempool.Mempool, proxy.AppConnConsensus, \*cmtypes.EventBus | for initializing the executor (state transition function). mempool is also used in the manager to check for availability of transactions for lazy block production |
+| dalc | da.DAClient | the data availability light client used to submit and retrieve blocks to DA network |
+| headerStore | \*goheaderstore.Store[\*types.SignedHeader] | to store and retrieve block headers gossiped over the P2P network |
+| dataStore | \*goheaderstore.Store[\*types.SignedData] | to store and retrieve block data gossiped over the P2P network |
+| signaturePayloadProvider | types.SignaturePayloadProvider | optional custom provider for header signature payloads |
+| sequencer | core.Sequencer | used to retrieve batches of transactions from the sequencing layer |
+| reaper | \*Reaper | component that periodically retrieves transactions from the executor and submits them to the sequencer |
+
+### Configuration Options
+
+The block components share a common configuration:
+
+| Name | Type | Description |
+| ------------------------ | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
+| BlockTime | time.Duration | time interval used for block production and block retrieval from block store ([`defaultBlockTime`][defaultBlockTime]) |
+| DABlockTime | time.Duration | time interval used for both block publication to DA network and block retrieval from DA network ([`defaultDABlockTime`][defaultDABlockTime]) |
+| DAStartHeight | uint64 | block retrieval from DA network starts from this height |
+| LazyBlockInterval | time.Duration | time interval used for block production in lazy aggregator mode even when there are no transactions ([`defaultLazyBlockTime`][defaultLazyBlockTime]) |
+| LazyMode | bool | when set to true, enables lazy aggregation mode which produces blocks only when transactions are available or at LazyBlockInterval intervals |
+| MaxPendingHeadersAndData | uint64 | maximum number of pending headers and data blocks before pausing block production (default: 100) |
+| MaxSubmitAttempts | int | maximum number of retry attempts for DA submissions (default: 30) |
+| MempoolTTL | int | number of blocks to wait when transaction is stuck in DA mempool (default: 25) |
+| GasPrice | float64 | gas price for DA submissions (-1 for automatic/default) |
+| GasMultiplier | float64 | multiplier for gas price on DA submission retries (default: 1.3) |
+| Namespace | da.Namespace | DA namespace ID for block submissions (deprecated, use HeaderNamespace and DataNamespace instead) |
+| HeaderNamespace | string | namespace ID for submitting headers to DA layer (automatically encoded by the node) |
+| DataNamespace | string | namespace ID for submitting data to DA layer (automatically encoded by the node) |
+| RequestTimeout | time.Duration | per-request timeout for DA `GetIDs`/`Get` calls; higher values tolerate slow DA nodes, lower values fail faster (default: 30s) |
+
+### Block Production (Executor Component)
+
+When the full node operates as an aggregator, the **Executor component** handles block production. There are two modes of block production, selectable in the block configuration: `normal` and `lazy`.
+
+In `normal` mode, the Executor runs a timer set to the `BlockTime` configuration parameter and produces a block at every interval.
+
+In `lazy` mode, the Executor implements a dual timer mechanism:
+
+```mermaid
+flowchart LR
+ subgraph LazyAggregation [Lazy Aggregation Mode]
+ R[Reaper] -->|GetTxs| CE[Core Executor]
+ CE -->|Txs Available| R
+ R -->|Submit to Sequencer| S[Sequencer]
+ R -->|NotifyNewTransactions| N[txNotifyCh]
+
+ N --> E{Executor Logic}
+ BT[blockTimer] --> E
+ LT[lazyTimer] --> E
+
+ E -->|Txs Available| P1[Produce Block with Txs]
+ E -->|No Txs & LazyTimer| P2[Produce Empty Block]
+
+ P1 --> B[Block Creation]
+ P2 --> B
+ end
+```
+
+1. A `blockTimer` that triggers block production at regular intervals when transactions are available
+2. A `lazyTimer` that ensures blocks are produced at `LazyBlockInterval` intervals even during periods of inactivity
+
+The Executor starts building a block as soon as a transaction becomes available, signaled through a notification channel (`txNotifyCh`). When the `Reaper` detects new transactions, it calls `NotifyNewTransactions()`, which performs a non-blocking send on this channel. The Executor also produces empty blocks at regular intervals to stay consistent with the DA layer, ensuring a 1:1 mapping between DA layer blocks and execution layer blocks.
+
+The Reaper component periodically retrieves transactions from the core executor and submits them to the sequencer. It runs independently and notifies the Executor when new transactions are available, enabling responsive block production in lazy mode.
+
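+The sketch below illustrates the dual-timer mechanism. It is a minimal sketch under stated assumptions: `produceBlock`, the channel wiring, and the timer-reset behavior are illustrative stand-ins for the actual Executor logic, not the real implementation.
+
+```go
+import (
+	"context"
+	"time"
+)
+
+// lazyLoop sketches lazy-mode block production with a dual timer.
+// txNotifyCh carries non-blocking signals from the Reaper.
+func lazyLoop(ctx context.Context, blockTime, lazyInterval time.Duration,
+	txNotifyCh <-chan struct{}, produceBlock func(empty bool)) {
+	blockTimer := time.NewTimer(blockTime)
+	lazyTimer := time.NewTimer(lazyInterval)
+	defer blockTimer.Stop()
+	defer lazyTimer.Stop()
+
+	txsAvailable := false
+	for {
+		select {
+		case <-ctx.Done():
+			return
+		case <-txNotifyCh:
+			txsAvailable = true // next blockTimer tick produces a block
+		case <-blockTimer.C:
+			if txsAvailable {
+				produceBlock(false)
+				txsAvailable = false
+				lazyTimer.Reset(lazyInterval) // a block was produced; push back the lazy timer
+			}
+			blockTimer.Reset(blockTime)
+		case <-lazyTimer.C:
+			// No transactions for a full LazyBlockInterval: produce an empty
+			// block to keep the 1:1 mapping with DA layer blocks.
+			produceBlock(true)
+			lazyTimer.Reset(lazyInterval)
+		}
+	}
+}
+
+// notifyNewTransactions shows the non-blocking send used to signal the loop.
+func notifyNewTransactions(txNotifyCh chan<- struct{}) {
+	select {
+	case txNotifyCh <- struct{}{}:
+	default: // a signal is already pending; dropping this one is safe
+	}
+}
+```
+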
+#### Building the Block
+
+The Executor component of aggregator nodes performs the following steps to produce a block:
+
+```mermaid
+flowchart TD
+ A[Timer Trigger / Transaction Notification] --> B[Retrieve Batch]
+ B --> C{Transactions Available?}
+ C -->|Yes| D[Create Block with Txs]
+ C -->|No| E[Create Empty Block]
+ D --> F[Generate Header & Data]
+ E --> F
+ F --> G[Sign Header → SignedHeader]
+ F --> H[Sign Data → SignedData]
+ G --> I[Apply Block]
+ H --> I
+ I --> J[Update State]
+ J --> K[Save to Store]
+ K --> L[Add to pendingHeaders]
+ K --> M[Add to pendingData]
+ L --> N[Broadcast Header to P2P]
+ M --> O[Broadcast Data to P2P]
+```
+
+- Retrieve a batch of transactions using `retrieveBatch()` which interfaces with the sequencer
+- Call `CreateBlock` using executor with the retrieved transactions
+- Create separate header and data structures from the block
+- Sign the header using `signing key` to generate `SignedHeader`
+- Sign the data using `signing key` to generate `SignedData` (if transactions exist)
+- Call `ApplyBlock` using executor to generate an updated state
+- Save the block, validators, and updated state to local store
+- Add the newly generated header to `pendingHeaders` queue
+- Add the newly generated data to `pendingData` queue (if not empty)
+- Publish the newly generated header and data to channels to notify other components of the sequencer node (such as block and header gossip)
+
+Note: When no transactions are available, the block manager creates blocks with empty data using a special `dataHashForEmptyTxs` marker. The header and data separation architecture allows headers and data to be submitted and retrieved independently from the DA layer.
+
+### Block Publication to DA Network (Submitter Component)
+
+The **Submitter component** of aggregator nodes implements separate submission loops for headers and data, both operating at `DABlockTime` intervals. Headers and data are submitted to different namespaces to improve scalability and allow for more flexible data availability strategies:
+
+```mermaid
+flowchart LR
+ subgraph HeaderSubmission [Header Submission]
+ H1[pendingHeaders Queue] --> H2[Header Submission Loop]
+ H2 --> H3[Marshal to Protobuf]
+ H3 --> H4[Submit to DA]
+ H4 -->|Success| H5[Remove from Queue]
+ H4 -->|Failure| H6[Keep in Queue & Retry]
+ end
+
+ subgraph DataSubmission [Data Submission]
+ D1[pendingData Queue] --> D2[Data Submission Loop]
+ D2 --> D3[Marshal to Protobuf]
+ D3 --> D4[Submit to DA]
+ D4 -->|Success| D5[Remove from Queue]
+ D4 -->|Failure| D6[Keep in Queue & Retry]
+ end
+
+ H2 -.->|DABlockTime| H2
+ D2 -.->|DABlockTime| D2
+```
+
+#### Header Submission Loop
+
+The `HeaderSubmissionLoop` manages the submission of signed headers to the DA network:
+
+- Retrieves pending headers from the `pendingHeaders` queue
+- Marshals headers to protobuf format
+- Submits to DA using the generic `submitToDA` helper with the configured `HeaderNamespace`
+- On success, removes submitted headers from the pending queue
+- On failure, headers remain in the queue for retry
+
+#### Data Submission Loop
+
+The `DataSubmissionLoop` manages the submission of signed data to the DA network:
+
+- Retrieves pending data from the `pendingData` queue
+- Marshals data to protobuf format
+- Submits to DA using the generic `submitToDA` helper with the configured `DataNamespace`
+- On success, removes submitted data from the pending queue
+- On failure, data remains in the queue for retry
+
+#### Generic Submission Logic
+
+Both loops use a shared `submitToDA` function that provides:
+
+- Namespace-specific submission based on header or data type
+- Retry logic with configurable maximum attempts via `MaxSubmitAttempts` configuration
+- Exponential backoff starting at `initialBackoff` (100ms), doubling each attempt, capped at `DABlockTime`
+- Gas price management with `GasMultiplier` applied on retries using a centralized `retryStrategy`
+- Recursive batch splitting for handling "too big" DA submissions that exceed blob size limits
+- Comprehensive error handling for different DA submission failure types (mempool issues, context cancellation, blob size limits)
+- Comprehensive metrics tracking for attempts, successes, and failures
+- Context-aware cancellation support
+
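+A condensed sketch of this retry behavior follows, under simplifying assumptions: the real `submitToDA` distinguishes mempool errors from other failures, splits oversized batches, and records metrics, whereas here `submit` stands in for the DA client call and the gas price is raised on every failure.
+
+```go
+import (
+	"context"
+	"errors"
+	"time"
+)
+
+// submitWithRetry sketches exponential backoff with gas-price bumping.
+func submitWithRetry(ctx context.Context, submit func(gasPrice float64) error,
+	gasPrice, gasMultiplier float64, maxAttempts int, maxBackoff time.Duration) error {
+	backoff := 100 * time.Millisecond // initialBackoff
+	for attempt := 0; attempt < maxAttempts; attempt++ {
+		if err := submit(gasPrice); err == nil {
+			return nil
+		} else if errors.Is(err, context.Canceled) {
+			return err // graceful shutdown: do not retry
+		}
+		gasPrice *= gasMultiplier // bid higher on the next attempt
+		select {
+		case <-ctx.Done():
+			return ctx.Err()
+		case <-time.After(backoff):
+		}
+		if backoff *= 2; backoff > maxBackoff {
+			backoff = maxBackoff // capped, per the configuration above
+		}
+	}
+	return errors.New("max submit attempts reached")
+}
+```
+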
+#### Retry Strategy and Error Handling
+
+The DA submission system implements sophisticated retry logic using a centralized `retryStrategy` struct to handle various failure scenarios:
+
+```mermaid
+flowchart TD
+ A[Submit to DA] --> B{Submission Result}
+ B -->|Success| C[Reset Backoff & Adjust Gas Price Down]
+ B -->|Too Big| D{Batch Size > 1?}
+ B -->|Mempool/Not Included| E[Mempool Backoff Strategy]
+ B -->|Context Canceled| F[Stop Submission]
+ B -->|Other Error| G[Exponential Backoff]
+
+ D -->|Yes| H[Recursive Batch Splitting]
+ D -->|No| I[Skip Single Item - Cannot Split]
+
+ E --> J[Set Backoff = MempoolTTL * BlockTime]
+ E --> K[Multiply Gas Price by GasMultiplier]
+
+ G --> L[Double Backoff Time]
+ G --> M[Cap at MaxBackoff - BlockTime]
+
+ H --> N[Split into Two Halves]
+ N --> O[Submit First Half]
+ O --> P[Submit Second Half]
+ P --> Q{Both Halves Processed?}
+ Q -->|Yes| R[Combine Results]
+ Q -->|No| S[Handle Partial Success]
+
+ C --> T[Update Pending Queues]
+ T --> U[Post-Submit Actions]
+```
+
+##### Retry Strategy Features
+
+- **Centralized State Management**: The `retryStrategy` struct manages attempt counts, backoff timing, and gas price adjustments
+- **Multiple Backoff Types**:
+ - Exponential backoff for general failures (doubles each attempt, capped at `BlockTime`)
+ - Mempool-specific backoff (waits `MempoolTTL * BlockTime` for stuck transactions)
+ - Success-based backoff reset with gas price reduction
+- **Gas Price Management**:
+ - Increases gas price by `GasMultiplier` on mempool failures
+ - Decreases gas price after successful submissions (bounded by initial price)
+ - Supports automatic gas price detection (`-1` value)
+- **Intelligent Batch Splitting**:
+ - Recursively splits batches that exceed DA blob size limits
+ - Handles partial submissions within split batches
+ - Prevents infinite recursion with proper base cases
+- **Comprehensive Error Classification**:
+ - `StatusSuccess`: Full or partial successful submission
+ - `StatusTooBig`: Triggers batch splitting logic
+ - `StatusNotIncludedInBlock`/`StatusAlreadyInMempool`: Mempool-specific handling
+ - `StatusContextCanceled`: Graceful shutdown support
+ - Other errors: Standard exponential backoff
+
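+The batch-splitting branch can be sketched on its own. Here `trySubmit` is a hypothetical helper that reports only whether the batch exceeded the blob size limit; the real code also tracks partial successes and results.
+
+```go
+// submitSplitting recursively halves batches that the DA layer rejects as
+// too big. A single item that still exceeds the limit cannot be split further.
+func submitSplitting(batch [][]byte, trySubmit func([][]byte) (tooBig bool)) {
+	if len(batch) == 0 {
+		return
+	}
+	if !trySubmit(batch) {
+		return // submitted in one blob
+	}
+	if len(batch) == 1 {
+		return // base case: single oversized item, cannot split
+	}
+	mid := len(batch) / 2
+	submitSplitting(batch[:mid], trySubmit) // first half
+	submitSplitting(batch[mid:], trySubmit) // second half
+}
+```
+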
+The Submitter enforces a limit on pending headers and data through the `MaxPendingHeadersAndData` configuration. When this limit is reached, block production pauses to prevent unbounded growth of the pending queues.
+
+### Block Retrieval from DA Network (Syncer Component)
+
+The **Syncer component** implements a `RetrieveLoop` through its DARetriever that regularly pulls headers and data from the DA network. The retrieval process supports both legacy single-namespace mode (for backward compatibility) and the new separate namespace mode:
+
+```mermaid
+flowchart TD
+ A[Start RetrieveLoop] --> B[Get DA Height]
+ B --> C{DABlockTime Timer}
+ C --> D[GetHeightPair from DA]
+ D --> E{Result?}
+ E -->|Success| F[Validate Signatures]
+ E -->|NotFound| G[Increment Height]
+ E -->|Error| H[Retry Logic]
+
+ F --> I[Check Sequencer Info]
+ I --> J[Mark DA Included]
+ J --> K[Send to Sync]
+ K --> L[Increment Height]
+ L --> M[Immediate Next Retrieval]
+
+ G --> C
+ H --> N{Retries < 10?}
+ N -->|Yes| O[Wait 100ms]
+ N -->|No| P[Log Error & Stall]
+ O --> D
+ M --> D
+```
+
+#### Retrieval Process
+
+1. **Height Management**: Starts from the latest of:
+ - DA height from the last state in local store
+ - `DAStartHeight` configuration parameter
+ - Maintains and increments `daHeight` counter after successful retrievals
+
+2. **Retrieval Mechanism**:
+ - Executes at `DABlockTime` intervals
+ - Implements namespace migration support:
+ - First attempts legacy namespace retrieval if migration not completed
+ - Falls back to separate header and data namespace retrieval
+ - Tracks migration status to optimize future retrievals
+ - Retrieves from separate namespaces:
+ - Headers from `HeaderNamespace`
+ - Data from `DataNamespace`
+ - Combines results from both namespaces
+ - Handles three possible outcomes:
+ - `Success`: Process retrieved header and/or data
+ - `NotFound`: No chain block at this DA height (normal case)
+ - `Error`: Retry with backoff
+
+3. **Error Handling**:
+ - Implements retry logic with 100ms delay between attempts
+ - After 10 retries, logs error and stalls retrieval
+ - Does not increment `daHeight` on persistent errors
+
+4. **Processing Retrieved Blocks**:
+ - Validates header and data signatures
+ - Checks sequencer information
+ - Marks blocks as DA included in caches
+ - Sends to sync goroutine for state update
+ - Successful processing triggers immediate next retrieval without waiting for timer
+ - Updates namespace migration status when appropriate:
+ - Marks migration complete when data is found in new namespaces
+ - Persists migration state to avoid future legacy checks
+
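+The height-management skeleton of this loop can be sketched as follows, assuming a `retrieve` stand-in that collapses namespace handling, validation, and forwarding into one call:
+
+```go
+import (
+	"context"
+	"time"
+)
+
+type retrieveResult int
+
+const (
+	resultSuccess  retrieveResult = iota // header/data processed
+	resultNotFound                       // no chain block at this DA height
+	resultError                          // transient retrieval failure
+)
+
+// retrieveLoop sketches DA height management: advance on success or
+// not-found, retry on errors without advancing, stall after 10 failures.
+func retrieveLoop(ctx context.Context, daBlockTime time.Duration, daHeight uint64,
+	retrieve func(height uint64) retrieveResult) {
+	ticker := time.NewTicker(daBlockTime)
+	defer ticker.Stop()
+outer:
+	for {
+		select {
+		case <-ctx.Done():
+			return
+		case <-ticker.C:
+		}
+		for retries := 0; retries < 10; {
+			switch retrieve(daHeight) {
+			case resultSuccess:
+				daHeight++ // immediately try the next height without waiting
+			case resultNotFound:
+				daHeight++
+				continue outer // normal case: wait for the next tick
+			case resultError:
+				retries++ // daHeight is NOT incremented on errors
+				time.Sleep(100 * time.Millisecond)
+			}
+		}
+		// 10 retries exhausted: log the error and stall until the next tick
+	}
+}
+```
+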
+#### Header and Data Caching
+
+The retrieval system uses persistent caches for both headers and data:
+
+- Prevents duplicate processing
+- Tracks DA inclusion status
+- Supports out-of-order block arrival
+- Enables efficient sync from P2P and DA sources
+- Maintains namespace migration state for optimized retrieval
+
+For more details on DA integration, see the [Data Availability specification](./da.md).
+
+#### Out-of-Order Chain Blocks on DA
+
+Evolve supports chain blocks arriving out-of-order on DA: the header and data caches buffer whichever half arrives first until its counterpart is retrieved, so sync can proceed as soon as a complete pair is available.
+
+#### Termination Condition
+
+If the sequencer double-signs two blocks at the same height, evidence of the fault should be posted to DA. Evolve full nodes should process the longest valid chain up to the height of the fault evidence, and then terminate.
+
+### Block Sync Service (Syncer Component)
+
+The **Syncer component** manages the synchronization of headers and data through its P2PHandler and coordination with the Cache Manager:
+
+#### Architecture
+
+- **Header Store**: Uses `goheader.Store[*types.SignedHeader]` for header management
+- **Data Store**: Uses `goheader.Store[*types.SignedData]` for data management
+- **Separation of Concerns**: Headers and data are handled independently, supporting the header/data separation architecture
+
+#### Synchronization Flow
+
+1. **Header Sync**: Headers created by the sequencer are sent to the header store for P2P gossip
+2. **Data Sync**: Data blocks are sent to the data store for P2P gossip
+3. **Cache Integration**: Both header and data caches track seen items to prevent duplicates
+4. **DA Inclusion Tracking**: Separate tracking for header and data DA inclusion status
+
+### Block Publication to P2P network (Executor Component)
+
+The **Executor component** of aggregator nodes publishes headers and data separately to the P2P network:
+
+#### Header Publication
+
+- Headers are sent through the header broadcast channel
+- Written to the header store for P2P gossip
+- Broadcast to network peers via header sync service
+
+#### Data Publication
+
+- Data blocks are sent through the data broadcast channel
+- Written to the data store for P2P gossip
+- Broadcast to network peers via data sync service
+
+Non-sequencer full nodes receive headers and data through the P2P sync service and do not publish blocks themselves.
+
+### Block Retrieval from P2P network (Syncer Component)
+
+The **Syncer component** retrieves headers and data separately from P2P stores through its P2PHandler:
+
+#### Header Store Retrieval Loop
+
+The `HeaderStoreRetrieveLoop`:
+
+- Operates at `BlockTime` intervals via `headerStoreCh` signals
+- Tracks `headerStoreHeight` for the last retrieved header
+- Retrieves all headers between last height and current store height
+- Validates sequencer information using `assertUsingExpectedSingleSequencer`
+- Marks headers as "seen" in the header cache
+- Sends headers to sync goroutine via `headerInCh`
+
+#### Data Store Retrieval Loop
+
+The `DataStoreRetrieveLoop`:
+
+- Operates at `BlockTime` intervals via `dataStoreCh` signals
+- Tracks `dataStoreHeight` for the last retrieved data
+- Retrieves all data blocks between last height and current store height
+- Validates data signatures using `assertValidSignedData`
+- Marks data as "seen" in the data cache
+- Sends data to sync goroutine via `dataInCh`
+
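+Both loops share the same shape, sketched below with function values standing in for the go-header store and the validation/forwarding calls (not the actual code):
+
+```go
+import "context"
+
+// storeRetrieveLoop drains every item between the last retrieved height and
+// the store's current head, validating and forwarding each one.
+func storeRetrieveLoop(ctx context.Context, signal <-chan struct{},
+	storeHead func() uint64, fetchValidateForward func(height uint64) error) {
+	var lastHeight uint64 // headerStoreHeight / dataStoreHeight
+	for {
+		select {
+		case <-ctx.Done():
+			return
+		case <-signal: // fires at BlockTime intervals (headerStoreCh / dataStoreCh)
+		}
+		for h := lastHeight + 1; h <= storeHead(); h++ {
+			if err := fetchValidateForward(h); err != nil {
+				break // resume from the same height on the next signal
+			}
+			lastHeight = h // marked "seen" and sent to the sync goroutine
+		}
+	}
+}
+```
+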
+#### Soft Confirmations
+
+Headers and data retrieved from P2P are marked as soft confirmed until both:
+
+1. The corresponding header is seen on the DA layer
+2. The corresponding data is seen on the DA layer
+
+Once both conditions are met, the block is marked as DA-included.
+
+#### About Soft Confirmations and DA Inclusions
+
+The Syncer retrieves blocks from both the P2P network and the DA network because blocks propagate over P2P faster than they can be retrieved from DA (e.g., ~1 second vs ~6 seconds).
+Blocks retrieved from the P2P network are only marked soft confirmed until DA retrieval succeeds for them and they are marked DA-included.
+DA-included blocks are considered to have a higher level of finality.
+
+**DAIncluderLoop**:
+The `DAIncluderLoop` is responsible for advancing the `DAIncludedHeight` by:
+
+- Checking if blocks after the current height have both header and data marked as DA-included in caches
+- Stopping advancement if either header or data is missing for a height
+- Calling `SetFinal` on the executor when a block becomes DA-included
+- Storing the Evolve height to DA height mapping for tracking
+- Ensuring only blocks with both header and data present are considered DA-included
+
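+The advancement rule can be expressed compactly. This sketch assumes cache-lookup and `setFinal` callbacks in place of the real cache manager and executor:
+
+```go
+// advanceDAIncludedHeight moves the DA-included height forward only while
+// BOTH the header and the data for the next height are DA-included.
+func advanceDAIncludedHeight(height uint64,
+	headerDAIncluded, dataDAIncluded func(uint64) bool, setFinal func(uint64)) uint64 {
+	for headerDAIncluded(height+1) && dataDAIncluded(height+1) {
+		height++
+		setFinal(height) // SetFinal on the executor: the block is now final
+	}
+	return height // stops as soon as either half is missing
+}
+```
+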
+### State Update after Block Retrieval (Syncer Component)
+
+The **Syncer component** uses a `SyncLoop` to coordinate state updates from blocks retrieved via P2P or DA networks:
+
+```mermaid
+flowchart TD
+ subgraph Sources
+ P1[P2P Header Store] --> H[headerInCh]
+ P2[P2P Data Store] --> D[dataInCh]
+ DA1[DA Header Retrieval] --> H
+ DA2[DA Data Retrieval] --> D
+ end
+
+ subgraph SyncLoop
+ H --> S[Sync Goroutine]
+ D --> S
+ S --> C{Header & Data for Same Height?}
+ C -->|Yes| R[Reconstruct Block]
+ C -->|No| W[Wait for Matching Pair]
+ R --> V[Validate Signatures]
+ V --> A[ApplyBlock]
+ A --> CM[Commit]
+ CM --> ST[Store Block & State]
+ ST --> F{DA Included?}
+ F -->|Yes| FN[SetFinal]
+ F -->|No| E[End]
+ FN --> U[Update DA Height]
+ end
+```
+
+#### Sync Loop Architecture
+
+The `SyncLoop` processes headers and data from multiple sources:
+
+- Headers from `headerInCh` (P2P and DA sources)
+- Data from `dataInCh` (P2P and DA sources)
+- Maintains caches to track processed items
+- Ensures ordered processing by height
+
+#### State Update Process
+
+When both header and data are available for a height:
+
+1. **Block Reconstruction**: Combines header and data into a complete block
+2. **Validation**: Verifies header and data signatures match expectations
+3. **ApplyBlock**:
+ - Validates the block against current state
+ - Executes transactions
+ - Captures validator updates
+ - Returns updated state
+4. **Commit**:
+ - Persists execution results
+ - Updates mempool by removing included transactions
+ - Publishes block events
+5. **Storage**:
+ - Stores the block, validators, and updated state
+ - Updates last state in manager
+6. **Finalization**:
+ - When block is DA-included, calls `SetFinal` on executor
+ - Updates DA included height
+
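+The pairing rule at the heart of the loop is sketched below, with heights standing in for the actual header and data values; the real loop also validates signatures and handles out-of-order arrivals via the caches:
+
+```go
+import "context"
+
+// syncLoop applies blocks strictly in height order, and only once both the
+// header and the data for a height have arrived from P2P or DA.
+func syncLoop(ctx context.Context, headerIn, dataIn <-chan uint64, applyBlock func(uint64)) {
+	headers := map[uint64]bool{}
+	data := map[uint64]bool{}
+	next := uint64(1) // next height to apply
+	for {
+		select {
+		case <-ctx.Done():
+			return
+		case h := <-headerIn:
+			headers[h] = true
+		case h := <-dataIn:
+			data[h] = true
+		}
+		for headers[next] && data[next] { // complete pair available
+			applyBlock(next) // validate, execute, commit, store
+			delete(headers, next)
+			delete(data, next)
+			next++
+		}
+	}
+}
+```
+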
+## Message Structure/Communication Format
+
+### Component Communication
+
+The components communicate through well-defined interfaces:
+
+#### Executor ↔ Core Executor
+
+- `InitChain`: initializes the chain state with the given genesis time, initial height, and chain ID, calling `InitChainSync` on the executor to obtain the initial `appHash` and set up the state.
+- `CreateBlock`: prepares a block with transactions from the provided batch data.
+- `ApplyBlock`: validates the block, executes the block (apply transactions), captures validator updates, and returns updated state.
+- `SetFinal`: marks the block as final when both its header and data are confirmed on the DA layer.
+- `GetTxs`: retrieves transactions from the application (used by Reaper component).
+
+#### Reaper ↔ Sequencer
+
+- `GetNextBatch`: retrieves the next batch of transactions to include in a block.
+- `VerifyBatch`: validates that a batch came from the expected sequencer.
+
+#### Submitter/Syncer ↔ DA Layer
+
+- `Submit`: submits headers or data blobs to the DA network.
+- `Get`: retrieves headers or data blobs from the DA network.
+- `GetHeightPair`: retrieves both header and data at a specific DA height.
+
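+Condensed into Go, these surfaces look roughly as follows. The signatures are illustrative only (simplified types, no options or errors beyond the basics); the real interfaces live in the core packages:
+
+```go
+import (
+	"context"
+	"time"
+)
+
+// CoreExecutor: illustrative subset used by the Executor, Syncer, and Reaper.
+type CoreExecutor interface {
+	InitChain(ctx context.Context, genesisTime time.Time, initialHeight uint64, chainID string) (stateRoot []byte, err error)
+	CreateBlock(ctx context.Context, txs [][]byte) (header, data []byte, err error)
+	ApplyBlock(ctx context.Context, header, data []byte) (stateRoot []byte, err error)
+	SetFinal(ctx context.Context, height uint64) error
+	GetTxs(ctx context.Context) ([][]byte, error)
+}
+
+// Sequencer: illustrative subset used by the Reaper and Executor.
+type Sequencer interface {
+	GetNextBatch(ctx context.Context) (batch [][]byte, err error)
+	VerifyBatch(ctx context.Context, batch [][]byte) (ok bool, err error)
+}
+
+// DALayer: illustrative subset used by the Submitter and Syncer.
+type DALayer interface {
+	Submit(ctx context.Context, blobs [][]byte, namespace []byte) error
+	Get(ctx context.Context, daHeight uint64, namespace []byte) (blobs [][]byte, err error)
+}
+```
+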
+## Assumptions and Considerations
+
+### Component Architecture
+
+- The block package uses a modular component architecture instead of a monolithic manager
+- Components are created based on node type: aggregator nodes get all components, non-aggregator nodes only get synchronization components
+- Each component has a specific responsibility and communicates through well-defined interfaces
+- Components share a common Cache Manager for coordination and state tracking
+
+### Initialization and State Management
+
+- When the node (re)starts, components load the initial state from the local store, falling back to genesis if no state is found
+- During startup, the Syncer invokes the execution Replayer to re-execute any blocks the local execution layer is missing; the Replayer enforces strict app-hash matching, so a mismatch aborts initialization instead of silently drifting out of sync
+- The default mode for aggregator nodes is normal (not lazy)
+- Components coordinate through channels and shared cache structures
+
+### Block Production (Executor Component)
+
+- The Executor can produce empty blocks
+- In lazy aggregation mode, the Executor maintains consistency with the DA layer by producing empty blocks at regular intervals, ensuring a 1:1 mapping between DA layer blocks and execution layer blocks
+- The lazy aggregation mechanism uses a dual timer approach:
+ - A `blockTimer` that triggers block production when transactions are available
+ - A `lazyTimer` that ensures blocks are produced even during periods of inactivity
+- Empty batches are handled differently in lazy mode: instead of being discarded, they are returned with the `ErrNoBatch` error, allowing the caller to create empty blocks with proper timestamps
+- Transaction notifications from the `Reaper` to the `Executor` are handled via a non-blocking notification channel (`txNotifyCh`) to prevent backpressure
+
+### DA Submission (Submitter Component)
+
+- The Submitter enforces `MaxPendingHeadersAndData` limit to prevent unbounded growth of pending queues during DA submission issues
+- Headers and data are submitted separately to the DA layer using different namespaces, supporting the header/data separation architecture
+- The Cache Manager uses persistent caches for headers and data to track seen items and DA inclusion status
+- Namespace migration is handled transparently by the Syncer, with automatic detection and state persistence to optimize future operations
+- The system supports backward compatibility with legacy single-namespace deployments while transitioning to separate namespaces
+- Gas price management in the Submitter includes automatic adjustment with `GasMultiplier` on DA submission retries
+
+### Storage and Persistence
+
+- Components use persistent storage (disk) when the `root_dir` and `db_path` configuration parameters are specified in the `config.yaml` file under the app directory. If these parameters are not specified, in-memory storage is used and all data is lost when the node stops
+- The Syncer does not re-apply blocks when they transition from soft confirmed to DA included status. The block is only marked DA included in the caches
+- Header and data stores use separate prefixes for isolation in the underlying database
+- The genesis `ChainID` is used to create separate `PubSubTopID`s for headers and data in go-header
+
+### P2P and Synchronization
+
+- Block sync over the P2P network works only when the full node is connected to the P2P network; the initial seeds to connect to are specified via the `P2PConfig.Seeds` configuration parameter at startup
+- The node's context is passed down to all components to support graceful shutdown and cancellation
+
+### Architecture Design Decisions
+
+- The Executor supports custom signature payload providers for headers, enabling flexible signing schemes
+- The component architecture supports the separation of header and data structures in Evolve. This allows for expanding the sequencing scheme beyond single sequencing and enables the use of a decentralized sequencer mode. For detailed information on this architecture, see the [Header and Data Separation ADR](../../adr/adr-014-header-and-data-separation.md)
+- Components process blocks with a minimal header format, which is designed to eliminate dependency on CometBFT's header format and can be used to produce an execution layer tailored header if needed. For details on this header structure, see the [Evolve Minimal Header](../../adr/adr-015-rollkit-minimal-header.md) specification
+
+## Metrics
+
+The block components expose comprehensive metrics for monitoring through the shared Metrics instance:
+
+### Block Production Metrics (Executor Component)
+
+- `last_block_produced_height`: Height of the last produced block
+- `last_block_produced_time`: Timestamp of the last produced block
+- `aggregation_type`: Current aggregation mode (normal/lazy)
+- `block_size_bytes`: Size distribution of produced blocks
+- `produced_empty_blocks_total`: Count of empty blocks produced
+
+### DA Metrics (Submitter and Syncer Components)
+
+- `da_submission_attempts_total`: Total DA submission attempts
+- `da_submission_success_total`: Successful DA submissions
+- `da_submission_failure_total`: Failed DA submissions
+- `da_retrieval_attempts_total`: Total DA retrieval attempts
+- `da_retrieval_success_total`: Successful DA retrievals
+- `da_retrieval_failure_total`: Failed DA retrievals
+- `da_height`: Current DA retrieval height
+- `pending_headers_count`: Number of headers pending DA submission
+- `pending_data_count`: Number of data blocks pending DA submission
+
+### Sync Metrics (Syncer Component)
+
+- `sync_height`: Current sync height
+- `da_included_height`: Height of last DA-included block
+- `soft_confirmed_height`: Height of last soft confirmed block
+- `header_store_height`: Current header store height
+- `data_store_height`: Current data store height
+
+### Performance Metrics (All Components)
+
+- `block_production_time`: Time to produce a block
+- `da_submission_time`: Time to submit to DA
+- `state_update_time`: Time to apply block and update state
+- `channel_buffer_usage`: Usage of internal channels
+
+### Error Metrics (All Components)
+
+- `errors_total`: Total errors by type and operation
+
+## Implementation
+
+The modular block components are implemented in the following packages:
+
+- [Executor]: Block production and state transitions (`block/internal/executing/`)
+- [Reaper]: Transaction collection and submission (`block/internal/reaping/`)
+- [Submitter]: DA submission logic (`block/internal/submitting/`)
+- [Syncer]: Block synchronization from DA and P2P (`block/internal/syncing/`)
+- [Cache Manager]: Coordination and state tracking (`block/internal/cache/`)
+- [Components]: Main components orchestration (`block/components.go`)
+
+See [tutorial] for running a multi-node network with both aggregator and non-aggregator full nodes.
+
+## References
+
+[1] [Go Header][go-header]
+
+[2] [Block Sync][block-sync]
+
+[3] [Full Node][full-node]
+
+[4] [Block Components][Components]
+
+[5] [Tutorial][tutorial]
+
+[6] [Header and Data Separation ADR](../../adr/adr-014-header-and-data-separation.md)
+
+[7] [Evolve Minimal Header](../../adr/adr-015-rollkit-minimal-header.md)
+
+[8] [Data Availability](./da.md)
+
+[9] [Lazy Aggregation with DA Layer Consistency ADR](../../adr/adr-021-lazy-aggregation.md)
+
+[defaultBlockTime]: https://github.com/evstack/ev-node/blob/main/pkg/config/defaults.go#L50
+[defaultDABlockTime]: https://github.com/evstack/ev-node/blob/main/pkg/config/defaults.go#L59
+[defaultLazyBlockTime]: https://github.com/evstack/ev-node/blob/main/pkg/config/defaults.go#L52
+[go-header]: https://github.com/celestiaorg/go-header
+[block-sync]: https://github.com/evstack/ev-node/blob/main/pkg/sync/sync_service.go
+[full-node]: https://github.com/evstack/ev-node/blob/main/node/full.go
+[Executor]: https://github.com/evstack/ev-node/blob/main/block/internal/executing/executor.go
+[Reaper]: https://github.com/evstack/ev-node/blob/main/block/internal/reaping/reaper.go
+[Submitter]: https://github.com/evstack/ev-node/blob/main/block/internal/submitting/submitter.go
+[Syncer]: https://github.com/evstack/ev-node/blob/main/block/internal/syncing/syncer.go
+[Cache Manager]: https://github.com/evstack/ev-node/blob/main/block/internal/cache/manager.go
+[Components]: https://github.com/evstack/ev-node/blob/main/block/components.go
+[tutorial]: https://ev.xyz/guides/full-node
diff --git a/docs/concepts/data-availability.md b/docs/concepts/data-availability.md
new file mode 100644
index 000000000..cd3af8eaa
--- /dev/null
+++ b/docs/concepts/data-availability.md
@@ -0,0 +1,76 @@
+# Data Availability
+
+Data availability (DA) ensures that all transaction data required to verify the chain's state is accessible to anyone.
+
+## Why DA Matters
+
+Without data availability guarantees:
+
+- Nodes can't verify state transitions
+- Users can't prove their balances
+- The chain's security model breaks down
+
+Evolve uses external DA layers to provide these guarantees, rather than storing all data on L1.
+
+## How Evolve Handles Data Availability
+
+Evolve is DA-agnostic and can integrate with different DA layers:
+
+### Local DA
+
+- **Use case**: Development and testing
+- **Guarantee**: None (operator can withhold data)
+- **Latency**: Instant
+
+### Celestia
+
+- **Use case**: Production deployments
+- **Guarantee**: Data availability sampling (DAS)
+- **Latency**: ~6 seconds to finality
+
+### Custom DA
+
+Implement the [DA interface](/reference/interfaces/da) to integrate any DA layer.
+
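+As a rough sketch, the interface centers on namespaced blob submission and retrieval. The signatures below are simplified; the authoritative definition is in the reference linked above:
+
+```go
+import "context"
+
+// DA is a simplified sketch; see the interface reference for the exact
+// signatures and auxiliary types.
+type DA interface {
+	// Submit posts blobs under a namespace and returns their IDs.
+	Submit(ctx context.Context, blobs [][]byte, gasPrice float64, namespace []byte) (ids [][]byte, err error)
+	// GetIDs lists the IDs of all blobs in a namespace at a DA height.
+	GetIDs(ctx context.Context, height uint64, namespace []byte) (ids [][]byte, err error)
+	// Get fetches blobs by their IDs.
+	Get(ctx context.Context, ids [][]byte, namespace []byte) (blobs [][]byte, err error)
+}
+```
+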
+## DA Flow
+
+```
+Block Produced
+ │
+ ▼
+┌─────────────────┐
+│ Submitter │ Queues block for DA
+└────────┬────────┘
+ │
+ ▼
+┌─────────────────┐
+│ DA Layer │ Stores and orders data
+└────────┬────────┘
+ │
+ ▼
+┌─────────────────┐
+│ Full Nodes │ Retrieve and verify
+└─────────────────┘
+```
+
+## Namespaces
+
+Evolve uses DA namespaces to organize data:
+
+| Namespace | Purpose |
+|-----------|---------|
+| Header | Block headers |
+| Data | Transaction data |
+| Forced Inclusion | User-submitted transactions |
+
+## Best Practices
+
+- **Development**: Use Local DA for fast iteration
+- **Testnet**: Use Celestia testnet (Mocha or Arabica)
+- **Production**: Use Celestia mainnet or equivalent
+
+## Learn More
+
+- [Local DA Guide](/guides/da-layers/local-da)
+- [Celestia Guide](/guides/da-layers/celestia)
+- [DA Interface Reference](/reference/interfaces/da)
diff --git a/docs/concepts/fee-systems.md b/docs/concepts/fee-systems.md
new file mode 100644
index 000000000..e3e67bf15
--- /dev/null
+++ b/docs/concepts/fee-systems.md
@@ -0,0 +1,157 @@
+# Fee Systems
+
+Evolve chains have two layers of fees: execution fees (paid to process transactions) and DA fees (paid to post data).
+
+## Execution Fees
+
+### EVM (ev-reth)
+
+Uses EIP-1559 fee model:
+
+```
+Transaction Fee = (Base Fee + Priority Fee) × Gas Used
+```
+
+| Component | Destination | Purpose |
+|-----------|-------------|---------|
+| Base Fee | Burned (or redirected) | Congestion pricing |
+| Priority Fee | Sequencer | Incentive for inclusion |
+
+#### Base Fee Redirect
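+For example, a plain transfer consuming 21,000 gas with a 10 gwei base fee and a 2 gwei priority fee costs 21,000 × 12 gwei = 252,000 gwei ≈ 0.000252 ETH, of which the 42,000 gwei priority portion goes to the sequencer.
+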
+
+By default, base fees are burned. ev-reth can redirect them to a treasury:
+
+```json
+{
+ "config": {
+ "evolve": {
+ "baseFeeSink": "0xTREASURY",
+ "baseFeeRedirectActivationHeight": 0
+ }
+ }
+}
+```
+
+See [Base Fee Redirect](/ev-reth/features/base-fee-redirect) for details.
+
+### Cosmos SDK (ev-abci)
+
+Uses standard Cosmos SDK fee model:
+
+```
+Transaction Fee = Gas Price × Gas Used
+```
+
+Configure minimum gas prices:
+
+```toml
+# app.toml
+minimum-gas-prices = "0.025stake"
+```
+
+Fees go to the fee collector module and can be distributed via standard Cosmos mechanisms.
+
+## DA Fees
+
+Both execution environments incur DA fees when blocks are posted to the DA layer.
+
+### Cost Factors
+
+| Factor | Impact |
+|--------|--------|
+| Block size | Linear cost increase |
+| DA gas price | Market-driven, varies |
+| Batching | Amortizes overhead |
+| Compression | Reduces data size |
+
+### Who Pays?
+
+The sequencer pays DA fees from its own funds. It recovers costs through:
+
+- Priority fees from users
+- Base fee redirect (if configured)
+- External subsidy
+
+### Optimization Strategies
+
+#### Lazy Aggregation
+
+Only produce blocks when there are transactions:
+
+```yaml
+node:
+ lazy-aggregator: true
+ lazy-block-time: 1s # Max wait time
+```
+
+Reduces empty blocks and DA costs.
+
+#### Batching
+
+ev-node batches multiple blocks into single DA submissions:
+
+```yaml
+da:
+ batch-size-threshold: 100000 # bytes
+ batch-max-delay: 5s
+```
+
+#### Compression
+
+Enable blob compression:
+
+```yaml
+da:
+ compression: true
+```
+
+## Fee Flow Diagram
+
+```
+User Transaction
+ │
+ │ Pays: Gas Price × Gas
+ ▼
+┌─────────────────┐
+│ Sequencer │
+│ │
+│ Receives: │
+│ - Priority fees │
+│ - Base fees* │
+└────────┬────────┘
+ │
+ │ Pays: DA fees
+ ▼
+┌─────────────────┐
+│ DA Layer │
+│ (Celestia) │
+└─────────────────┘
+
+* If base fee redirect is enabled
+```
+
+## Estimating Costs
+
+### Execution Costs
+
+EVM:
+
+```bash
+cast estimate --rpc-url http://localhost:8545 "transfer(address,uint256)"
+```
+
+Cosmos:
+
+```bash
+appd tx bank send 1000stake --gas auto --gas-adjustment 1.3
+```
+
+### DA Costs
+
+Depends on:
+
+- DA layer pricing (e.g., Celestia gas price)
+- Data size per block
+- Submission frequency
+
+Use the [Celestia Gas Calculator](/guides/tools/celestia-gas-calculator) for estimates.
diff --git a/docs/concepts/finality.md b/docs/concepts/finality.md
new file mode 100644
index 000000000..be965a444
--- /dev/null
+++ b/docs/concepts/finality.md
@@ -0,0 +1,55 @@
+# Finality
+
+Finality determines when a transaction is irreversible. Evolve has a multi-stage finality model.
+
+## Finality Stages
+
+```
+Transaction Submitted
+ │
+ ▼
+┌───────────────────┐
+│ Soft Confirmed │ ← Block produced, gossiped via P2P
+└─────────┬─────────┘
+ │
+ ▼
+┌───────────────────┐
+│ DA Finalized │ ← DA layer confirms inclusion
+└───────────────────┘
+```
+
+### Soft Confirmation
+
+When a block is produced and gossiped via P2P:
+
+- **Latency**: Milliseconds (block time)
+- **Guarantee**: Sequencer has committed to this ordering
+- **Risk**: Sequencer could equivocate (produce conflicting blocks)
+
+### DA Finalized
+
+When the DA layer confirms the block is included:
+
+- **Latency**: ~6 seconds (Celestia)
+- **Guarantee**: Block data is permanently available and ordered
+- **Risk**: None (assuming DA layer security)
+
+## Choosing Finality Thresholds
+
+| Use Case | Recommended Finality |
+|----------|---------------------|
+| Display balance | Soft confirmation |
+| Accept payment | Soft confirmation |
+| Process withdrawal | DA finalized |
+| Bridge transfer | DA finalized |
+
+## Configuration
+
+Block time affects soft confirmation latency:
+
+```yaml
+node:
+ block-time: 100ms
+```
+
+DA finality depends on the DA layer. Celestia provides ~6 second finality.
diff --git a/docs/concepts/p2p-networking.md b/docs/concepts/p2p-networking.md
new file mode 100644
index 000000000..14309d9e3
--- /dev/null
+++ b/docs/concepts/p2p-networking.md
@@ -0,0 +1,60 @@
+# P2P
+
+Every node (both full and light) runs a P2P client that uses the [go-libp2p][go-libp2p] networking stack to gossip transactions in the chain's P2P network. The same P2P client is also used by the header and block sync services for gossiping headers and blocks.
+
+The following parameters are required to create a new instance of a P2P client:
+
+* P2PConfig (described below)
+* [go-libp2p][go-libp2p] private key: used to create the libp2p connection and join the P2P network.
+* chainID: identifier used as a namespace for peer discovery within the P2P network. The namespace acts as a sub-network, limiting peer connections to peers in the same namespace.
+* datastore: an instance of [go-datastore][go-datastore] used by the connection gator to store blocked and allowed peers.
+* logger
+
+```go
+// P2PConfig stores configuration related to peer-to-peer networking.
+type P2PConfig struct {
+ ListenAddress string // Address to listen for incoming connections
+ Seeds string // Comma separated list of seed nodes to connect to
+ BlockedPeers string // Comma separated list of nodes to ignore
+ AllowedPeers string // Comma separated list of nodes to whitelist
+}
+```
+
+A P2P client also instantiates a [connection gator][conngater] to block and allow peers specified in the `P2PConfig`.
+
+It also sets up a gossiper using the gossip topic `<chainID>+<txTopicSuffix>` (`txTopicSuffix` is defined in [p2p/client.go][client.go]), a Distributed Hash Table (DHT) using the `Seeds` defined in the `P2PConfig`, and peer discovery using go-libp2p's `discovery.RoutingDiscovery`.
+
+A P2P client provides an interface `SetTxValidator(p2p.GossipValidator)` for specifying a gossip validator, which defines how to handle an incoming `GossipMessage` in the P2P network. A `GossipMessage` represents a message gossiped over the P2P network (e.g., a transaction or a block).
+
+```go
+// GossipValidator is a callback function type.
+type GossipValidator func(*GossipMessage) bool
+```
+
+Full nodes set a transaction validator (shown below) as their gossip validator, which processes gossiped transactions and adds them to the mempool; light nodes pass a dummy validator, since they do not process gossiped transactions.
+
+```go
+// newTxValidator creates a pubsub validator that uses the node's mempool to check the
+// transaction. If the transaction is valid, then it is added to the mempool
+func (n *FullNode) newTxValidator() p2p.GossipValidator {
+```
+
+```go
+// falseValidator returns a dummy validator callback that always returns false
+func (ln *LightNode) falseValidator() p2p.GossipValidator {
+```
+
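+For illustration, wiring a custom validator might look like the sketch below, where `client` is a P2P client instance, `maxTxBytes` is an application-defined limit, and the `Data` field of `GossipMessage` is assumed:
+
+```go
+// A minimal sketch: accept (and further propagate) only non-empty
+// transactions below a size limit.
+client.SetTxValidator(func(msg *p2p.GossipMessage) bool {
+	return len(msg.Data) > 0 && len(msg.Data) <= maxTxBytes
+})
+```
+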
+## References
+
+[1] [client.go][client.go]
+
+[2] [go-datastore][go-datastore]
+
+[3] [go-libp2p][go-libp2p]
+
+[4] [conngater][conngater]
+
+[client.go]: https://github.com/evstack/ev-node/blob/main/pkg/p2p/client.go
+[go-datastore]: https://github.com/ipfs/go-datastore
+[go-libp2p]: https://github.com/libp2p/go-libp2p
+[conngater]: https://github.com/libp2p/go-libp2p/tree/master/p2p/net/conngater
diff --git a/docs/concepts/sequencing.md b/docs/concepts/sequencing.md
new file mode 100644
index 000000000..89ccbd691
--- /dev/null
+++ b/docs/concepts/sequencing.md
@@ -0,0 +1,120 @@
+# Sequencing
+
+Sequencing is the process of determining the order of transactions in a blockchain. In rollups, the sequencer is the entity responsible for collecting transactions from users, ordering them, and producing blocks that are eventually posted to the data availability (DA) layer.
+
+Transaction ordering matters because it determines execution outcomes. Two transactions that touch the same state can produce different results depending on which executes first. The sequencer's ordering decisions directly impact users, particularly in DeFi where transaction order can mean the difference between a successful trade and a failed one.
+
+## The Role of the Sequencer
+
+A sequencer performs three core functions:
+
+1. **Transaction collection** — Accepting transactions from users and holding them in a mempool
+2. **Ordering** — Deciding which transactions to include and in what order
+3. **Block production** — Bundling ordered transactions into blocks and publishing them
+
+In traditional L1 blockchains, these functions are distributed across validators through consensus. In rollups, sequencing can be handled differently depending on the design goals.
+
+## Single Sequencer
+
+The simplest approach is a single sequencer: one designated node that orders all transactions.
+
+```
+User → Sequencer → Block → DA Layer
+```
+
+**Advantages:**
+
+- **Low latency** — No consensus required means block times can be very fast (sub-second)
+- **Simple operation** — One node, one source of truth for ordering
+- **Predictable performance** — No coordination overhead
+
+**Disadvantages:**
+
+- **Centralization** — Single point of control over transaction ordering
+- **Censorship risk** — The sequencer can refuse to include specific transactions
+- **Liveness dependency** — If the sequencer goes down, the chain halts
+- **MEV extraction** — The sequencer has full visibility and can reorder for profit
+
+Most production rollups today use single sequencers because the performance benefits are significant and the trust assumptions are often acceptable for their use cases.
+
+## Based Sequencing
+
+Based sequencing (also called "based rollups") delegates transaction ordering to the underlying DA layer. Instead of a dedicated sequencer, users submit transactions directly to the DA layer, and all rollup nodes independently derive the same ordering from DA blocks.
+
+```
+User → DA Layer → All Nodes Derive Same Order
+```
+
+**Advantages:**
+
+- **Decentralization** — No privileged sequencer role
+- **Censorship resistance** — Inherits the censorship resistance of the DA layer
+- **Liveness** — Chain stays live as long as the DA layer is live
+- **Shared security** — Ordering is secured by the DA layer's consensus
+
+**Disadvantages:**
+
+- **Higher latency** — Block times are bounded by DA layer block times (e.g., ~12s for Ethereum)
+- **MEV leakage** — MEV flows to DA layer validators rather than the rollup
+- **Complexity** — Requires deterministic derivation rules that all nodes must follow
+
+Based sequencing is compelling for applications that prioritize decentralization over speed.
+
+## Hybrid Approaches
+
+### Forced Inclusion
+
+Forced inclusion is a mechanism that combines the performance of single sequencing with censorship resistance guarantees. It works as follows:
+
+1. Users normally submit transactions to the sequencer for fast inclusion
+2. If censored, users can submit transactions directly to the DA layer
+3. The sequencer must include DA-submitted transactions within a defined time window
+4. Failure to include triggers penalties or allows the chain to transition to based mode
+
+This gives users an escape hatch while maintaining the benefits of centralized sequencing for the common case.
+
+### Shared Sequencing
+
+Multiple rollups can share a sequencer or sequencer network. This enables:
+
+- **Atomic cross-rollup transactions** — Transactions that span multiple rollups can be ordered atomically
+- **Shared MEV** — Revenue from cross-rollup MEV can be distributed
+- **Reduced costs** — Infrastructure costs are amortized across chains
+
+Shared sequencing is an active area of research and development.
+
+## MEV Considerations
+
+Maximal Extractable Value (MEV) is the profit a sequencer can extract by reordering, inserting, or censoring transactions. Common MEV strategies include:
+
+- **Frontrunning** — Inserting a transaction before a target transaction
+- **Backrunning** — Inserting a transaction immediately after a target
+- **Sandwich attacks** — Combining frontrunning and backrunning around a target
+
+The sequencing design determines who captures MEV:
+
+| Design | MEV Captured By |
+|-------------------|--------------------------|
+| Single sequencer | Sequencer operator |
+| Based sequencing | DA layer validators |
+| Shared sequencing | Shared sequencer network |
+
+Some rollups implement MEV mitigation through encrypted mempools, fair ordering protocols, or MEV redistribution to users.
+
+## Choosing a Sequencing Model
+
+| Factor | Single Sequencer | Based Sequencer |
+|------------------------|---------------------------|---------------------|
+| Block time | Sub-second possible | DA layer block time |
+| Censorship resistance | Requires forced inclusion | Native |
+| Liveness | Sequencer must be online | DA layer liveness |
+| MEV control | Sequencer controlled | DA layer controlled |
+| Operational complexity | Lower | Higher |
+
+The right choice depends on your application's priorities. High-frequency trading applications might prefer single sequencing for speed. Applications handling high-value, censorship-sensitive transactions might prefer based sequencing for its guarantees.
+
+## Learn More
+
+- [Forced Inclusion](/guides/advanced/forced-inclusion) — Implementing censorship resistance with single sequencing
+- [Based Sequencing](/guides/advanced/based-sequencing) — Running a based rollup
+- [Sequencer Interface](/reference/interfaces/sequencer) — Implementation reference
diff --git a/docs/concepts/transaction-flow.md b/docs/concepts/transaction-flow.md
new file mode 100644
index 000000000..8d055321f
--- /dev/null
+++ b/docs/concepts/transaction-flow.md
@@ -0,0 +1,53 @@
+# Transaction flow
+
+Chain users use a light node to communicate with the chain P2P network for two primary reasons:
+
+- submitting transactions
+- gossiping headers and fraud proofs
+
+Here's what the typical transaction flow looks like:
+
+## Transaction submission
+
+```mermaid
+sequenceDiagram
+ participant User
+ participant LightNode
+ participant FullNode
+
+ User->>LightNode: Submit Transaction
+ LightNode->>FullNode: Gossip Transaction
+ FullNode-->>User: Refuse (if invalid)
+```
+
+## Transaction validation and processing
+
+```mermaid
+sequenceDiagram
+ participant FullNode
+ participant Sequencer
+
+ FullNode->>FullNode: Check Validity
+ FullNode->>FullNode: Add to Mempool (if valid)
+ FullNode-->>User: Transaction Processed (if valid)
+ FullNode->>Sequencer: Inform about Valid Transaction
+ Sequencer->>DALayer: Add to Chain Block
+```
+
+## Block processing
+
+```mermaid
+sequenceDiagram
+ participant DALayer
+ participant FullNode
+ participant Chain
+
+ DALayer->>FullNode: Download & Validate Block
+ FullNode->>Chain: Update State
+```
+
+To transact, users submit a transaction to their light node, which gossips it to a full node. Before adding the transaction to its mempool, the full node checks the transaction's validity. Valid transactions are added to the mempool; invalid ones are refused and never processed.
+
+If the transaction is valid and has been included in the mempool, the sequencer can add it to a chain block, which is then submitted to the data availability (DA) layer. This results in a successful transaction flow for the user, and the state of the chain is updated accordingly.
+
+After the block is submitted to the DA layer, the full nodes download and validate the block.
diff --git a/docs/ev-abci/integration-guide.md b/docs/ev-abci/integration-guide.md
new file mode 100644
index 000000000..fc9463350
--- /dev/null
+++ b/docs/ev-abci/integration-guide.md
@@ -0,0 +1,131 @@
+# Integration Guide
+
+Integrate ev-abci into a Cosmos SDK application.
+
+## Overview
+
+ev-abci replaces CometBFT as the consensus layer. Your ABCI application logic remains unchanged—only the node startup code changes.
+
+## Prerequisites
+
+- Cosmos SDK v0.50+ application
+- Go 1.22+
+
+## Step 1: Add Dependency
+
+```bash
+go get github.com/evstack/ev-abci@latest
+```
+
+## Step 2: Modify Start Command
+
+Locate your app's entrypoint (typically `cmd//root.go` or `main.go`).
+
+### Before (CometBFT)
+
+```go
+import (
+ "github.com/cosmos/cosmos-sdk/server"
+)
+
+// In your root command setup:
+server.AddCommands(rootCmd, app.DefaultNodeHome, newApp, appExport)
+```
+
+### After (ev-abci)
+
+```go
+import (
+ "github.com/cosmos/cosmos-sdk/server"
+ evabci "github.com/evstack/ev-abci/server"
+)
+
+// Keep existing commands for init, genesis, keys, etc.
+server.AddCommands(rootCmd, app.DefaultNodeHome, newApp, appExport)
+
+// Replace the start command
+startCmd := &cobra.Command{
+ Use: "start",
+ Short: "Run the node",
+ RunE: func(cmd *cobra.Command, _ []string) error {
+ return evabci.StartHandler(cmd, newApp)
+ },
+}
+evabci.AddFlags(startCmd)
+rootCmd.AddCommand(startCmd)
+```
+
+## Step 3: Build
+
+```bash
+go build -o appd ./cmd/appd
+```
+
+## Step 4: Verify
+
+Check for ev-abci flags:
+
+```bash
+./appd start --help
+```
+
+Expected flags:
+
+```
+--evnode.node.aggregator Run as block producer
+--evnode.da.address DA layer address
+--evnode.signer.passphrase Signer passphrase
+--evnode.node.block_time Block production interval
+```
+
+## Step 5: Initialize
+
+Standard Cosmos SDK initialization:
+
+```bash
+./appd init mynode --chain-id mychain-1
+./appd keys add mykey --keyring-backend test
+./appd genesis add-genesis-account mykey 1000000000stake --keyring-backend test
+./appd genesis gentx mykey 1000000stake --chain-id mychain-1 --keyring-backend test
+./appd genesis collect-gentxs
+```
+
+## Step 6: Start
+
+```bash
+./appd start \
+ --evnode.node.aggregator \
+ --evnode.da.address http://localhost:7980 \
+ --evnode.signer.passphrase secret
+```
+
+## Configuration
+
+### ev-node Flags
+
+| Flag | Description | Default |
+|------|-------------|---------|
+| `--evnode.node.aggregator` | Run as sequencer | `false` |
+| `--evnode.node.block_time` | Block interval | `1s` |
+| `--evnode.da.address` | DA layer URL | required |
+| `--evnode.signer.passphrase` | Signer passphrase | required |
+| `--evnode.p2p.peers` | P2P peer addresses | none |
+
+### Full Node (Non-Sequencer)
+
+```bash
+./appd start \
+ --evnode.da.address http://localhost:7980 \
+ --evnode.p2p.peers @:26659
+```
+
+## RPC Compatibility
+
+ev-abci provides CometBFT-compatible RPC endpoints. Existing clients work without modification.
+
+See [RPC Compatibility](/ev-abci/rpc-compatibility) for details.
+
+## Next Steps
+
+- [Migration from CometBFT](/ev-abci/migration-from-cometbft) — Migrate existing chain
+- [RPC Compatibility](/ev-abci/rpc-compatibility) — Endpoint compatibility
diff --git a/docs/ev-abci/migration-from-cometbft.md b/docs/ev-abci/migration-from-cometbft.md
new file mode 100644
index 000000000..eb6abcd9e
--- /dev/null
+++ b/docs/ev-abci/migration-from-cometbft.md
@@ -0,0 +1,286 @@
+# Migrating an Existing Chain to ev-abci
+
+This guide is for developers of existing Cosmos SDK chains who want to replace their node's default CometBFT consensus engine with the `ev-abci` implementation. By following these steps, you will migrate your chain to run as an `ev-abci` node while preserving chain state.
+
+## Overview of Migration Process
+
+The migration process involves the following key phases:
+
+1. **Code Preparation:** Add migration module, staking wrapper, and upgrade handler to your existing chain
+2. **Governance Proposal:** Create and pass a governance proposal to initiate the migration
+3. **State Export:** Export the current chain state at the designated upgrade height
+4. **Node Reconfiguration:** Wire the `ev-abci` start handler into your node's entrypoint
+5. **Migration Execution:** Run `appd evolve-migrate` to transform the exported state
+6. **Chain Restart:** Start the new `ev-abci` node with the migrated state
+
+This document will guide you through each phase.
+
+---
+
+## Phase 1: Code Preparation - Add Migration Module and Staking Wrapper
+
+The first step prepares your existing chain for migration by integrating the necessary modules.
+
+### Step 1: Add Migration Manager Module
+
+Add the `migrationmngr` module to your application. This module manages the transition from a PoS validator set to a sequencer-based model.
+
+*Note: For detailed information about the migration manager, please refer to the [migration manager documentation](https://github.com/evstack/ev-abci/tree/main/modules/migrationmngr).*
+
+In your `app.go` file:
+
+1. Import the migration manager module:
+
+```go
+import (
+ // ...
+ migrationmngr "github.com/evstack/ev-abci/modules/migrationmngr"
+ migrationmngrkeeper "github.com/evstack/ev-abci/modules/migrationmngr/keeper"
+ migrationmngrtypes "github.com/evstack/ev-abci/modules/migrationmngr/types"
+ // ...
+)
+```
+
+2. Add the migration manager keeper to your app struct
+3. Register the module in your module manager
+4. Configure the migration manager in your app initialization
+
+### Step 2: Replace Staking Module with Wrapper
+
+**Goal:** Ensure the `migrationmngr` module is the *sole* source of validator set updates during migration.
+
+Replace the standard Cosmos SDK `x/staking` module with the **staking wrapper module** provided in `ev-abci`. The wrapper's `EndBlock` method prevents validator updates from the staking module, delegating that responsibility to the `migrationmngr` module during migration.
+
+In your `app.go` file (and any other files that import the staking module):
+
+**Replace this:**
+
+```go
+import (
+ // ...
+ "github.com/cosmos/cosmos-sdk/x/staking"
+ stakingkeeper "github.com/cosmos/cosmos-sdk/x/staking/keeper"
+ stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
+ // ...
+)
+```
+
+**With this:**
+
+```go
+import (
+ // ...
+ "github.com/evstack/ev-abci/modules/staking" // The wrapper module
+ stakingkeeper "github.com/evstack/ev-abci/modules/staking/keeper"
+ stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types" // Staking types remain the same
+ // ...
+)
+```
+
+By changing the import path, your application will automatically use the wrapper module. No other changes to your `EndBlocker` method are needed.
+
+---
+
+## Phase 2: Create Upgrade Handler
+
+Create an upgrade handler in your `app.go` that will be triggered when the governance proposal is executed.
+
+```go
+func (app *App) setupUpgradeHandlers() {
+ app.UpgradeKeeper.SetUpgradeHandler(
+ "v2-migrate-to-evolve", // Upgrade name must match governance proposal
+ func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+ // The upgrade handler can initialize state for the migration manager if needed
+ // The actual migration will happen during the evolve-migrate step
+ return app.mm.RunMigrations(ctx, app.configurator, fromVM)
+ },
+ )
+}
+```
+
+Call this function in your app initialization code in `app.go`.
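+
+For example, a minimal sketch of the call site (assuming a typical `NewApp` constructor; the surrounding names are illustrative):
+
+```go
+func NewApp( /* ... */ ) *App {
+	app := &App{ /* ... */ }
+
+	// ... keeper and module wiring ...
+
+	// Register the upgrade handler so the chain can apply the migration upgrade
+	app.setupUpgradeHandlers()
+
+	return app
+}
+```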
+
+---
+
+## Phase 3: Create Governance Proposal for Migration
+
+Create and submit a software upgrade governance proposal to initiate the migration at a specific block height.
+
+```bash
+# Create the governance proposal
+appd tx gov submit-proposal software-upgrade v2-migrate-to-evolve \
+  --title "Migrate to Evolve" \
+  --description "Upgrade chain to use ev-abci consensus" \
+  --upgrade-height <upgrade-height> \
+  --from <key> \
+  --chain-id <chain-id>
+
+# Vote on the proposal (repeat for validators to reach quorum)
+appd tx gov vote <proposal-id> yes --from <key>
+```
+
+Wait for the proposal to pass and for the chain to reach the upgrade height. The chain will halt at the specified height, waiting for the upgrade to be applied.
+
+### Trigger Migration to Evolve
+
+After the upgrade proposal has passed, submit the `MsgMigrateToEvolve` message to initiate the actual migration process. This can be done through a governance proposal or directly if your chain's authority allows it.
+
+```bash
+# Submit MsgMigrateToEvolve governance proposal (if using governance)
+appd tx gov submit-proposal migrate-to-evolve \
+  --title "Trigger Migration to Evolve" \
+  --description "Execute migration to ev-abci consensus" \
+  --from <key> \
+  --chain-id <chain-id>
+
+# Or submit directly if authority allows (authority address depends on your chain configuration)
+appd tx migrationmngr migrate-to-evolve \
+  --from <key> \
+  --chain-id <chain-id>
+```
+
+Once this message is processed, the migration manager module will handle the transition from the PoS validator set to the sequencer-based model.
+
+---
+
+## Phase 4: Wire ev-abci Start Handler in root.go
+
+**⚠️ Important:** Complete this phase BEFORE the chain halts at the upgrade height. Do NOT start your node yet; you will start it in Phase 6, after running the migration command.
+
+Modify your node's entrypoint to use the `ev-abci` server commands.
+
+### Locate Your Application's Entrypoint
+
+Open the main entrypoint file for your chain's binary, usually found at `cmd/<appd>/main.go` or `root.go`.
+
+### Modify the Start Command
+
+Add the `ev-abci` start handler to your root command. This is similar to the [Ignite Apps evolve template](https://github.com/ignite/apps/blob/main/evolve/template/init.go#L48-L60).
+
+```go
+// cmd/<appd>/main.go (or root.go)
+package main
+
+import (
+ "os"
+
+ "github.com/cosmos/cosmos-sdk/server"
+ "github.com/spf13/cobra"
+
+ // Import the ev-abci server package
+ evabci_server "github.com/evstack/ev-abci/server"
+
+ "/app"
+)
+
+func main() {
+ rootCmd := &cobra.Command{
+ Use: "",
+ Short: "Your App Daemon (ev-abci enabled)",
+ }
+
+ // Keep existing commands (keys, export, etc.)
+ server.AddCommands(rootCmd, app.DefaultNodeHome, app.New, app.MakeEncodingConfig())
+
+ // --- Wire ev-abci start handler ---
+ startCmd := &cobra.Command{
+ Use: "start",
+ Short: "Run the full node with ev-abci",
+ RunE: func(cmd *cobra.Command, _ []string) error {
+ return evabci_server.StartHandler(cmd, app.New)
+ },
+ }
+
+ evabci_server.AddFlags(startCmd)
+ rootCmd.AddCommand(startCmd)
+ // --- End of ev-abci changes ---
+
+ if err := rootCmd.Execute(); err != nil {
+ os.Exit(1)
+ }
+}
+```
+
+### Build Your Application
+
+Re-build your application's binary with the updated code:
+
+```sh
+go build -o <appd> ./cmd/<appd>
+```
+
+**⚠️ Important:** Do NOT start the node yet. Proceed directly to Phase 5 to run the migration command.
+
+---
+
+## Phase 5: Run evolve-migrate
+
+After the chain halts at the upgrade height, run the migration command to transform the CometBFT data to Evolve format.
+
+**⚠️ Critical:** The node must NOT be running when you execute this command. Ensure all node processes are stopped before proceeding.
+
+```bash
+# Run the migration command
+appd evolve-migrate
+
+# Optional: specify the DA height for the Evolve state (defaults to 1)
+appd evolve-migrate --da-height <da-height>
+```
+
+The `evolve-migrate` command performs the following operations:
+
+1. **Migrates all blocks** from the CometBFT blockstore to the Evolve store
+2. **Converts the CometBFT state** to Evolve state format
+3. **Creates `ev_genesis.json`** - a minimal genesis file that the node will automatically detect and use on subsequent startups
+4. **Saves state** to the ABCI execution store for compatibility
+5. **Seeds sync stores** with the latest migrated header and data
+6. **Cleans up migration state** from the application database
+
+**Important Notes:**
+
+- The migration processes blocks in reverse order (from latest to earliest)
+- If blocks are missing (e.g., due to pruning), they will be skipped. Migration stops if more than the configured maximum number of blocks are missing
+- Vote extensions are not supported in Evolve; if they were enabled in your chain, they will have no effect after migration
+- The command operates on the data in your node's home directory (e.g., `~/.appd/data/`)
+- After successful migration, the `ev_genesis.json` file will be used automatically on node restart
+
+---
+
+## Phase 6: Start New ev-abci Node
+
+Start your node with the migrated state:
+
+```bash
+appd start
+```
+
+Verify that the node starts successfully:
+
+```sh
+# Check that ev-abci flags are available
+appd start --help
+
+# You should see flags like:
+# --evnode.node.aggregator
+# --evnode.da.address
+# --evnode.signer.passphrase
+# etc.
+```
+
+Your node is now running with `ev-abci` instead of CometBFT. The chain continues from the same state but with the new consensus engine.
+
+---
+
+## Summary
+
+The migration process follows these key phases:
+
+1. **Code Preparation:** Modify your chain code to add the migration manager module and staking wrapper
+2. **Create Upgrade Handler:** Define the upgrade logic that will be triggered by governance
+3. **Governance Proposal:** Submit and pass a software upgrade proposal
+4. **Wire Start Handler:** Update your node's entrypoint to use the `ev-abci` start command
+5. **Execute Migration:** Run `appd evolve-migrate` to transform the exported state
+6. **Restart Chain:** Start the new `ev-abci` node with the migrated state
+
+This approach ensures a smooth migration with minimal downtime and preserves all chain state and history.
diff --git a/docs/ev-abci/modules/migration-manager.md b/docs/ev-abci/modules/migration-manager.md
new file mode 100644
index 000000000..203cd1058
--- /dev/null
+++ b/docs/ev-abci/modules/migration-manager.md
@@ -0,0 +1,143 @@
+# Migration Manager Module
+
+Coordinates the transition from CometBFT multi-validator consensus to Evolve single-sequencer mode.
+
+## Purpose
+
+The migration manager:
+
+- Stores the designated sequencer address
+- Tracks migration height
+- Coordinates with the staking wrapper to freeze validators
+- Provides the `MsgMigrateToEvolve` message for triggering migration
+
+## Installation
+
+### Add to app.go
+
+```go
+import (
+ migrationmngr "github.com/evstack/ev-abci/modules/migrationmngr"
+ migrationmngrkeeper "github.com/evstack/ev-abci/modules/migrationmngr/keeper"
+ migrationmngrtypes "github.com/evstack/ev-abci/modules/migrationmngr/types"
+)
+
+// Add store key
+keys := sdk.NewKVStoreKeys(
+ // ... other keys
+ migrationmngrtypes.StoreKey,
+)
+
+// Create keeper
+app.MigrationManagerKeeper = migrationmngrkeeper.NewKeeper(
+ appCodec,
+ keys[migrationmngrtypes.StoreKey],
+ app.StakingKeeper,
+ app.BankKeeper,
+ authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+)
+
+// Add to module manager
+app.ModuleManager = module.NewManager(
+ // ... other modules
+ migrationmngr.NewAppModule(appCodec, app.MigrationManagerKeeper),
+)
+```
+
+### Genesis Configuration
+
+```json
+{
+ "app_state": {
+ "migrationmngr": {
+ "params": {
+ "sequencer_address": "",
+ "migration_height": "0"
+ }
+ }
+ }
+}
+```
+
+## Migration Flow
+
+### 1. Governance Proposal
+
+Submit a proposal to set migration parameters:
+
+```bash
+appd tx gov submit-proposal set-sequencer \
+ --sequencer-address cosmos1... \
+ --migration-height 5000001 \
+  --from <key>
+```
+
+### 2. Vote and Pass
+
+Standard governance voting process.
+
+### 3. Chain Halts
+
+At migration height, the chain halts automatically.
+
+### 4. Run Migration
+
+```bash
+appd evolve-migrate
+```
+
+### 5. Restart with ev-abci
+
+```bash
+appd start \
+ --evnode.node.aggregator \
+  --evnode.da.address <da-address> \
+  --evnode.signer.passphrase <passphrase>
+```
+
+## Messages
+
+### MsgSetMigrationParams
+
+Set migration parameters (governance-gated):
+
+```protobuf
+message MsgSetMigrationParams {
+ string authority = 1;
+ string sequencer_address = 2;
+ int64 migration_height = 3;
+}
+```
+
+### MsgMigrateToEvolve
+
+Trigger the migration (called internally):
+
+```protobuf
+message MsgMigrateToEvolve {
+ string authority = 1;
+}
+```
+
+## Queries
+
+```bash
+# Get migration params
+appd query migrationmngr params
+
+# Get previous validators (post-migration)
+appd query migrationmngr previous-validators
+```
+
+## State
+
+| Key | Description |
+|-----|-------------|
+| `params` | Sequencer address and migration height |
+| `previous_validators` | Validator set before migration (for reference) |
+| `migration_complete` | Boolean flag |
+
+## Next Steps
+
+- [Staking Wrapper](/ev-abci/modules/staking-wrapper) — Freeze validator set
+- [Migration from CometBFT](/ev-abci/migration-from-cometbft) — Full migration guide
diff --git a/docs/ev-abci/modules/staking-wrapper.md b/docs/ev-abci/modules/staking-wrapper.md
new file mode 100644
index 000000000..e9e71607f
--- /dev/null
+++ b/docs/ev-abci/modules/staking-wrapper.md
@@ -0,0 +1,96 @@
+# Staking Wrapper Module
+
+A wrapper around the Cosmos SDK staking module that prevents validator set changes during migration.
+
+## Purpose
+
+When migrating from CometBFT to Evolve, the validator set must be frozen to allow a clean transition to single-sequencer mode. The staking wrapper:
+
+- Prevents new delegations and undelegations from affecting the validator set
+- Blocks validator creation and updates
+- Allows the migration manager to perform the final transition
+
+## Installation
+
+Replace your staking module import:
+
+```go
+// Before
+import "github.com/cosmos/cosmos-sdk/x/staking"
+
+// After
+import "github.com/evstack/ev-abci/modules/staking"
+```
+
+The wrapper is API-compatible with the standard staking module.
+
+## Behavior
+
+### Normal Operation
+
+Before migration is triggered, the wrapper behaves identically to the standard staking module:
+
+- Delegations work normally
+- Validator operations work normally
+- Rewards distribution works normally
+
+### During Migration
+
+Once the migration manager signals migration mode:
+
+- `EndBlock` returns an empty validator update set
+- Delegation changes are recorded but don't affect validators
+- Validator creation/modification is blocked
+
+### After Migration
+
+Post-migration, the staking module becomes read-only for validator operations. The single sequencer is now the only block producer.
+
+## Integration
+
+### app.go
+
+```go
+import (
+ stakingkeeper "github.com/evstack/ev-abci/modules/staking/keeper"
+ stakingtypes "github.com/cosmos/cosmos-sdk/x/staking/types"
+)
+
+// In your NewApp function:
+app.StakingKeeper = stakingkeeper.NewKeeper(
+ appCodec,
+ keys[stakingtypes.StoreKey],
+ app.AccountKeeper,
+ app.BankKeeper,
+ authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+)
+```
+
+### Module Manager
+
+```go
+import (
+ staking "github.com/evstack/ev-abci/modules/staking"
+)
+
+// In your module manager:
+app.ModuleManager = module.NewManager(
+ // ... other modules
+ staking.NewAppModule(appCodec, app.StakingKeeper, app.AccountKeeper, app.BankKeeper),
+)
+```
+
+## Queries
+
+All standard staking queries remain available:
+
+```bash
+appd query staking validators
+appd query staking delegations
+appd query staking pool
+```
+
+## Next Steps
+
+- [Migration Manager](/ev-abci/modules/migration-manager) — Coordinate the migration
+- [Migration from CometBFT](/ev-abci/migration-from-cometbft) — Full migration guide
diff --git a/docs/ev-abci/overview.md b/docs/ev-abci/overview.md
new file mode 100644
index 000000000..3c32a6f03
--- /dev/null
+++ b/docs/ev-abci/overview.md
@@ -0,0 +1,76 @@
+# ev-abci Overview
+
+ev-abci is an ABCI adapter that allows Cosmos SDK applications to run on Evolve instead of CometBFT.
+
+## What is ev-abci?
+
+ev-abci provides:
+
+- **Drop-in replacement** — Swap CometBFT for Evolve with minimal code changes
+- **ABCI compatibility** — Your existing Cosmos SDK modules work unchanged
+- **CometBFT RPC compatibility** — Existing clients and tooling continue to work
+- **Migration tooling** — Migrate existing chains from CometBFT to Evolve
+
+## Architecture
+
+```
+┌─────────────────────────────────────────┐
+│ Your Cosmos App │
+│ ┌─────────────────────────────────┐ │
+│ │ Cosmos SDK Modules │ │
+│ │ (bank, staking, gov, etc.) │ │
+│ └─────────────────────────────────┘ │
+│ │ ABCI │
+│ ┌───────────────▼─────────────────┐ │
+│ │ ev-abci │ │
+│ │ (ABCI adapter + RPC server) │ │
+│ └───────────────┬─────────────────┘ │
+└──────────────────┼──────────────────────┘
+ │ Executor Interface
+┌──────────────────▼──────────────────────┐
+│ ev-node │
+│ (consensus + DA + P2P) │
+└─────────────────────────────────────────┘
+```
+
+ev-abci implements the Executor interface, translating ev-node's calls into ABCI calls to your application.
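+
+Conceptually, the translation looks roughly like this (a simplified sketch, not the actual ev-abci source; it assumes a Cosmos SDK v0.50 `BaseApp` and ABCI 2.0 types, with surrounding types elided):
+
+```go
+// Sketch: one ev-node execution call becomes FinalizeBlock + Commit.
+func (a *Adapter) ExecuteTxs(ctx context.Context, txs [][]byte, height uint64, ts time.Time) (*ExecutionResult, error) {
+	res, err := a.app.FinalizeBlock(&abci.RequestFinalizeBlock{
+		Height: int64(height),
+		Time:   ts,
+		Txs:    txs,
+	})
+	if err != nil {
+		return nil, err
+	}
+	if _, err := a.app.Commit(); err != nil {
+		return nil, err
+	}
+	// The app hash becomes the state root that ev-node tracks.
+	return &ExecutionResult{StateRoot: res.AppHash}, nil
+}
+```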
+
+## Key Differences from CometBFT
+
+| Aspect | CometBFT | ev-abci |
+|-----------------|----------------------------------|---------------------------|
+| Validators | Multiple validators with staking | Single sequencer |
+| Consensus | BFT consensus rounds | Sequencer produces blocks |
+| Finality | Instant (BFT) | Soft (P2P) → Hard (DA) |
+| Block time | ~6s typical | Configurable (100ms+) |
+| Vote extensions | Supported | Not supported |
+
+## Benefits
+
+- **No validator coordination** — Single sequencer eliminates consensus overhead
+- **Faster blocks** — No BFT round-trips, blocks as fast as 100ms
+- **DA-secured** — Security from data availability, not validator set
+- **Simpler operations** — No validator management, slashing, or jailing
+
+## Trade-offs
+
+- **Single sequencer** — One node produces blocks (with forced inclusion for censorship resistance)
+- **Different finality model** — Soft confirmation before DA finality
+- **No vote extensions** — ABCI++ vote extensions not available
+
+## Modules
+
+ev-abci includes helper modules for migration:
+
+- [Staking Wrapper](/ev-abci/modules/staking-wrapper) — Prevents validator updates during migration
+- [Migration Manager](/ev-abci/modules/migration-manager) — Handles validator set transition
+
+## Repository
+
+- GitHub: [github.com/evstack/ev-abci](https://github.com/evstack/ev-abci)
+
+## Next Steps
+
+- [Cosmos SDK Quickstart](/getting-started/cosmos/quickstart) — Get started
+- [Integration Guide](/ev-abci/integration-guide) — Manual integration
+- [Migration from CometBFT](/ev-abci/migration-from-cometbft) — Migrate existing chain
diff --git a/docs/ev-abci/rpc-compatibility.md b/docs/ev-abci/rpc-compatibility.md
new file mode 100644
index 000000000..99dffca70
--- /dev/null
+++ b/docs/ev-abci/rpc-compatibility.md
@@ -0,0 +1,136 @@
+# RPC Compatibility
+
+ev-abci provides CometBFT-compatible RPC endpoints for client compatibility.
+
+## Overview
+
+Existing Cosmos SDK clients expect CometBFT RPC endpoints. ev-abci implements these endpoints so tools like:
+
+- Cosmos SDK CLI
+- Keplr wallet
+- CosmJS
+- Block explorers
+
+continue to work without modification.
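+
+For example, you can query a running node exactly as you would query CometBFT (assuming the default RPC port 26657):
+
+```bash
+curl -s http://localhost:26657/status
+```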
+
+## Supported Endpoints
+
+### Query Methods
+
+| Endpoint | Status | Notes |
+|----------|--------|-------|
+| `/abci_query` | ✓ | Full support |
+| `/block` | ✓ | Full support |
+| `/block_by_hash` | ✓ | Full support |
+| `/block_results` | ✓ | Full support |
+| `/blockchain` | ✓ | Full support |
+| `/commit` | ✓ | Full support |
+| `/consensus_params` | ✓ | Full support |
+| `/genesis` | ✓ | Full support |
+| `/health` | ✓ | Full support |
+| `/status` | ✓ | Full support |
+| `/tx` | ✓ | Full support |
+| `/tx_search` | ✓ | Full support |
+| `/validators` | ✓ | Returns sequencer |
+
+### Transaction Methods
+
+| Endpoint | Status | Notes |
+|----------|--------|-------|
+| `/broadcast_tx_async` | ✓ | Full support |
+| `/broadcast_tx_sync` | ✓ | Full support |
+| `/broadcast_tx_commit` | ✓ | Waits for inclusion |
+| `/check_tx` | ✓ | Full support |
+
+### Subscription Methods
+
+| Endpoint | Status | Notes |
+|----------|--------|-------|
+| `/subscribe` | ✓ | WebSocket events |
+| `/unsubscribe` | ✓ | Full support |
+| `/unsubscribe_all` | ✓ | Full support |
+
+## Unsupported Endpoints
+
+| Endpoint | Reason |
+|----------|--------|
+| `/consensus_state` | No BFT consensus |
+| `/dump_consensus_state` | No BFT consensus |
+| `/net_info` | Different P2P model |
+| `/num_unconfirmed_txs` | Different mempool |
+| `/unconfirmed_txs` | Different mempool |
+
+## Behavioral Differences
+
+### Validators
+
+`/validators` returns the single sequencer rather than a validator set:
+
+```json
+{
+ "validators": [
+ {
+ "address": "...",
+ "voting_power": "1",
+ "proposer_priority": "0"
+ }
+ ],
+ "count": "1",
+ "total": "1"
+}
+```
+
+### Commit
+
+`/commit` returns a simplified commit structure since there's no BFT voting:
+
+```json
+{
+ "signed_header": {
+ "header": { ... },
+ "commit": {
+ "height": "100",
+ "signatures": [
+ {
+ "validator_address": "...",
+ "signature": "..."
+ }
+ ]
+ }
+ }
+}
+```
+
+### Block Time
+
+Block timestamps reflect actual production time, which may be faster than CometBFT's typical 6s blocks.
+
+## Port Configuration
+
+Default ports match CometBFT:
+
+| Port | Purpose |
+|------|---------|
+| 26657 | RPC |
+| 26656 | P2P |
+
+Configure via flags:
+
+```bash
+--evnode.rpc.address tcp://0.0.0.0:26657
+--evnode.p2p.listen /ip4/0.0.0.0/tcp/26656
+```
+
+## Client Configuration
+
+No client changes needed. Point clients at the same RPC URL:
+
+```javascript
+// CosmJS
+const client = await StargateClient.connect("http://localhost:26657");
+```
+
+```bash
+# CLI
+appd config node tcp://localhost:26657
+```
diff --git a/docs/ev-reth/configuration.md b/docs/ev-reth/configuration.md
new file mode 100644
index 000000000..5ef782139
--- /dev/null
+++ b/docs/ev-reth/configuration.md
@@ -0,0 +1,128 @@
+# ev-reth Configuration
+
+Configure ev-reth through chainspec (genesis.json) and command-line flags.
+
+## Chainspec
+
+The chainspec defines chain parameters. ev-reth uses standard Ethereum genesis format with Evolve extensions.
+
+### Basic Structure
+
+```json
+{
+ "config": {
+ "chainId": 1337,
+ "homesteadBlock": 0,
+ "eip150Block": 0,
+ "eip155Block": 0,
+ "eip158Block": 0,
+ "byzantiumBlock": 0,
+ "constantinopleBlock": 0,
+ "petersburgBlock": 0,
+ "istanbulBlock": 0,
+ "berlinBlock": 0,
+ "londonBlock": 0,
+ "shanghaiTime": 0,
+ "cancunTime": 0
+ },
+ "alloc": {},
+ "coinbase": "0x0000000000000000000000000000000000000000",
+ "difficulty": "0x0",
+ "gasLimit": "0x1c9c380",
+ "nonce": "0x0",
+ "timestamp": "0x0"
+}
+```
+
+### Evolve Extensions
+
+Add under `config.evolve`:
+
+```json
+{
+ "config": {
+ "chainId": 1337,
+ "evolve": {
+ "baseFeeSink": "0x...",
+ "baseFeeRedirectActivationHeight": 0,
+ "deployAllowlist": {
+ "admin": "0x...",
+ "enabled": ["0x..."]
+ },
+ "contractSizeLimit": 49152,
+ "mintPrecompile": {
+ "admin": "0x...",
+ "address": "0x0000000000000000000000000000000000000100"
+ }
+ }
+ }
+}
+```
+
+See [Features](/ev-reth/features/base-fee-redirect) for detailed configuration of each extension.
+
+## Command-Line Flags
+
+### RPC
+
+```bash
+--http # Enable HTTP JSON-RPC
+--http.addr 0.0.0.0 # Listen address
+--http.port 8545 # Listen port
+--http.api eth,net,web3 # Enabled APIs
+```
+
+### Engine API
+
+```bash
+--authrpc.addr 0.0.0.0 # Engine API address
+--authrpc.port 8551 # Engine API port
+--authrpc.jwtsecret jwt.hex # JWT secret file
+```
+
+### Data
+
+```bash
+--datadir /data # Data directory
+--chain genesis.json # Chainspec file
+```
+
+## Docker
+
+Default `docker-compose.yml`:
+
+```yaml
+services:
+ reth:
+ image: ghcr.io/evstack/ev-reth:latest
+ ports:
+ - "8545:8545"
+ - "8551:8551"
+ volumes:
+ - ./data:/data
+ - ./genesis.json:/genesis.json
+ - ./jwt.hex:/jwt.hex
+ command:
+ - node
+ - --chain=/genesis.json
+ - --http
+ - --http.addr=0.0.0.0
+ - --http.api=eth,net,web3,txpool
+ - --authrpc.addr=0.0.0.0
+ - --authrpc.jwtsecret=/jwt.hex
+```
+
+## JWT Secret
+
+Generate for Engine API authentication:
+
+```bash
+openssl rand -hex 32 > jwt.hex
+```
+
+Both ev-reth and ev-node must use the same secret.
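+
+For example, pointing both sides at the same file (binary names are illustrative; the ev-node flag is the one documented on the [Engine API](/ev-reth/engine-api) page):
+
+```bash
+# ev-reth side
+ev-reth node --chain genesis.json --authrpc.jwtsecret jwt.hex
+
+# ev-node side
+evnode start --evm.jwt-secret jwt.hex
+```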
+
+## Next Steps
+
+- [Engine API](/ev-reth/engine-api) — Communication protocol
+- [Chainspec Reference](/reference/configuration/ev-reth-chainspec) — Full field reference
diff --git a/docs/ev-reth/engine-api.md b/docs/ev-reth/engine-api.md
new file mode 100644
index 000000000..d0b1784cd
--- /dev/null
+++ b/docs/ev-reth/engine-api.md
@@ -0,0 +1,177 @@
+# Engine API
+
+ev-node communicates with ev-reth through the Ethereum Engine API, the same protocol used by Ethereum consensus clients.
+
+## Overview
+
+The Engine API is a JSON-RPC interface authenticated with JWT. ev-node acts as the consensus client, driving ev-reth (execution client) to build and finalize blocks.
+
+## Authentication
+
+All Engine API calls require JWT authentication:
+
+```bash
+# Generate shared secret
+openssl rand -hex 32 > jwt.hex
+```
+
+Configure both sides:
+
+- ev-reth: `--authrpc.jwtsecret jwt.hex`
+- ev-node: `--evm.jwt-secret jwt.hex`
+
+## Block Production Flow
+
+```
+ev-node ev-reth
+ │ │
+ │ 1. engine_forkchoiceUpdatedV3 │
+ │ (headBlockHash, payloadAttributes) │
+ │─────────────────────────────────────────►│
+ │ │
+ │ 2. {payloadId} │
+ │◄─────────────────────────────────────────│
+ │ │
+ │ 3. engine_getPayloadV3(payloadId) │
+ │─────────────────────────────────────────►│
+ │ │
+ │ 4. {executionPayload, blockValue} │
+ │◄─────────────────────────────────────────│
+ │ │
+ │ [ev-node broadcasts to P2P, submits DA] │
+ │ │
+ │ 5. engine_newPayloadV3(executionPayload)│
+ │─────────────────────────────────────────►│
+ │ │
+ │ 6. {status: VALID} │
+ │◄─────────────────────────────────────────│
+ │ │
+ │ 7. engine_forkchoiceUpdatedV3 │
+ │ (newHeadBlockHash) │
+ │─────────────────────────────────────────►│
+ │ │
+```
+
+## Methods
+
+### engine_forkchoiceUpdatedV3
+
+Update the fork choice and optionally start building a new block.
+
+**Request:**
+
+```json
+{
+ "method": "engine_forkchoiceUpdatedV3",
+ "params": [
+ {
+ "headBlockHash": "0x...",
+ "safeBlockHash": "0x...",
+ "finalizedBlockHash": "0x..."
+ },
+ {
+ "timestamp": "0x...",
+ "prevRandao": "0x...",
+ "suggestedFeeRecipient": "0x...",
+ "withdrawals": [],
+ "parentBeaconBlockRoot": "0x..."
+ }
+ ]
+}
+```
+
+**Response:**
+
+```json
+{
+ "payloadStatus": {
+ "status": "VALID",
+ "latestValidHash": "0x..."
+ },
+ "payloadId": "0x..."
+}
+```
+
+### engine_getPayloadV3
+
+Retrieve a built payload.
+
+**Request:**
+
+```json
+{
+ "method": "engine_getPayloadV3",
+ "params": ["0x...payloadId"]
+}
+```
+
+**Response:**
+
+```json
+{
+ "executionPayload": {
+ "parentHash": "0x...",
+ "feeRecipient": "0x...",
+ "stateRoot": "0x...",
+ "receiptsRoot": "0x...",
+ "logsBloom": "0x...",
+ "prevRandao": "0x...",
+ "blockNumber": "0x1",
+ "gasLimit": "0x...",
+ "gasUsed": "0x...",
+ "timestamp": "0x...",
+ "extraData": "0x",
+ "baseFeePerGas": "0x...",
+ "blockHash": "0x...",
+ "transactions": ["0x..."]
+ },
+ "blockValue": "0x..."
+}
+```
+
+### engine_newPayloadV3
+
+Validate and execute a payload.
+
+**Request:**
+
+```json
+{
+ "method": "engine_newPayloadV3",
+ "params": [
+ { "executionPayload": "..." },
+ ["0x...versionedHashes"],
+ "0x...parentBeaconBlockRoot"
+ ]
+}
+```
+
+**Response:**
+
+```json
+{
+ "status": "VALID",
+ "latestValidHash": "0x..."
+}
+```
+
+## Status Codes
+
+| Status | Meaning |
+|--------|---------|
+| `VALID` | Payload is valid |
+| `INVALID` | Payload is invalid |
+| `SYNCING` | Node is syncing |
+| `ACCEPTED` | Payload accepted but not yet validated |
+
+## Ports
+
+| Port | Purpose |
+|------|---------|
+| 8545 | JSON-RPC (public) |
+| 8551 | Engine API (authenticated) |
+
+## Next Steps
+
+- [Engine API Reference](/reference/api/engine-api) — Full method reference
+- [Configuration](/ev-reth/configuration) — ev-reth settings
diff --git a/docs/ev-reth/features/base-fee-redirect.md b/docs/ev-reth/features/base-fee-redirect.md
new file mode 100644
index 000000000..2165bae15
--- /dev/null
+++ b/docs/ev-reth/features/base-fee-redirect.md
@@ -0,0 +1,86 @@
+# Base Fee Redirect
+
+Redirect EIP-1559 base fees to a treasury address instead of burning them.
+
+## Overview
+
+In standard Ethereum, base fees are burned. ev-reth allows redirecting these fees to a specified address, enabling:
+
+- Protocol revenue collection
+- Treasury funding
+- DAO-controlled fee distribution
+
+## Configuration
+
+In your chainspec (`genesis.json`):
+
+```json
+{
+ "config": {
+ "evolve": {
+ "baseFeeSink": "0xYOUR_TREASURY_ADDRESS",
+ "baseFeeRedirectActivationHeight": 0
+ }
+ }
+}
+```
+
+| Field | Description |
+|-------|-------------|
+| `baseFeeSink` | Address to receive base fees |
+| `baseFeeRedirectActivationHeight` | Block height to activate (0 = genesis) |
+
+## How It Works
+
+```
+Transaction Fee = Base Fee + Priority Fee
+
+Standard Ethereum:
+├── Base Fee → Burned
+└── Priority Fee → Block producer
+
+With Base Fee Redirect:
+├── Base Fee → baseFeeSink address
+└── Priority Fee → Block producer (fee recipient)
+```
+
+## Example
+
+Treasury at `0x1234...`:
+
+```json
+{
+ "config": {
+ "chainId": 1337,
+ "evolve": {
+ "baseFeeSink": "0x1234567890123456789012345678901234567890",
+ "baseFeeRedirectActivationHeight": 0
+ }
+ }
+}
+```
+
+All base fees from block 0 onward go to the treasury.
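+
+One quick way to sanity-check the redirect on a local chain is to watch the sink balance grow as transactions land (using Foundry's `cast`; the RPC URL is an assumption for a local setup):
+
+```bash
+cast balance 0x1234567890123456789012345678901234567890 --rpc-url http://localhost:8545
+```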
+
+## Activation at Later Height
+
+To activate after chain launch:
+
+```json
+{
+ "config": {
+ "evolve": {
+ "baseFeeSink": "0x...",
+ "baseFeeRedirectActivationHeight": 1000000
+ }
+ }
+}
+```
+
+Fees are burned until block 1,000,000, then redirected.
+
+## Use Cases
+
+- **Protocol treasury** — Fund development, grants, or operations
+- **Staking rewards** — Distribute to token holders
+- **Burn address** — Set `baseFeeSink` to the zero address to keep burning fees explicitly (burning is also the default when no sink is configured)
diff --git a/docs/ev-reth/features/contract-size-limits.md b/docs/ev-reth/features/contract-size-limits.md
new file mode 100644
index 000000000..ee90d240f
--- /dev/null
+++ b/docs/ev-reth/features/contract-size-limits.md
@@ -0,0 +1,73 @@
+# Contract Size Limits
+
+Increase the maximum contract bytecode size beyond Ethereum's 24KB limit.
+
+## Overview
+
+Ethereum limits contract size to 24,576 bytes (24KB) via [EIP-170](https://eips.ethereum.org/EIPS/eip-170). ev-reth allows increasing this limit for use cases requiring larger contracts:
+
+- Complex DeFi protocols
+- On-chain game logic
+- ZK verification contracts
+
+## Configuration
+
+In your chainspec (`genesis.json`):
+
+```json
+{
+ "config": {
+ "evolve": {
+ "contractSizeLimit": 49152
+ }
+ }
+}
+```
+
+| Field | Description | Default |
+|-------|-------------|---------|
+| `contractSizeLimit` | Max bytecode size in bytes | 24576 (24KB) |
+
+## Common Values
+
+| Size | Bytes | Use Case |
+|------|-------|----------|
+| 24KB | 24576 | Ethereum default |
+| 48KB | 49152 | 2x limit |
+| 64KB | 65536 | 2.67x limit |
+| 128KB | 131072 | Large contracts |
+
+## Trade-offs
+
+**Pros:**
+
+- Deploy larger, more complex contracts
+- Avoid splitting logic across multiple contracts
+- Simpler contract architecture
+
+**Cons:**
+
+- Higher deployment gas costs
+- Longer deployment times
+- May impact block gas limits
+
+## Example
+
+Allow contracts up to 64KB:
+
+```json
+{
+ "config": {
+ "chainId": 1337,
+ "evolve": {
+ "contractSizeLimit": 65536
+ }
+ }
+}
+```
+
+## Considerations
+
+- This is a chain-wide setting—affects all deployments
+- Existing tooling may warn about large contracts
+- Consider gas costs for deployment and interaction
diff --git a/docs/ev-reth/features/deploy-allowlist.md b/docs/ev-reth/features/deploy-allowlist.md
new file mode 100644
index 000000000..7b44b5908
--- /dev/null
+++ b/docs/ev-reth/features/deploy-allowlist.md
@@ -0,0 +1,77 @@
+# Deploy Allowlist
+
+Restrict contract deployment to a set of approved addresses.
+
+## Overview
+
+By default, any address can deploy contracts. The deploy allowlist restricts deployment to explicitly approved addresses, useful for:
+
+- Permissioned chains
+- Controlled rollouts
+- Compliance requirements
+
+## Configuration
+
+In your chainspec (`genesis.json`):
+
+```json
+{
+ "config": {
+ "evolve": {
+ "deployAllowlist": {
+ "admin": "0xADMIN_ADDRESS",
+ "enabled": [
+ "0xDEPLOYER_1",
+ "0xDEPLOYER_2"
+ ]
+ }
+ }
+ }
+}
+```
+
+| Field | Description |
+|-------|-------------|
+| `admin` | Address that can modify the allowlist |
+| `enabled` | Addresses allowed to deploy contracts |
+
+## How It Works
+
+1. User attempts `CREATE` or `CREATE2` opcode
+2. ev-reth checks if sender is in `enabled` list
+3. If not allowed, transaction reverts
+
+## Admin Operations
+
+The admin can modify the allowlist via precompile calls:
+
+```solidity
+interface IDeployAllowlist {
+ function addDeployer(address deployer) external;
+ function removeDeployer(address deployer) external;
+ function isAllowed(address deployer) external view returns (bool);
+}
+```
+
+Precompile address: `0x0000000000000000000000000000000000000101`
+
+## Disabling
+
+To allow unrestricted deployment, omit the `deployAllowlist` config entirely or set an empty `enabled` list with no admin.
+
+## Example: Single Deployer
+
+```json
+{
+ "config": {
+ "evolve": {
+ "deployAllowlist": {
+ "admin": "0xAdminAddress",
+ "enabled": ["0xAdminAddress"]
+ }
+ }
+ }
+}
+```
+
+Only the admin can deploy contracts initially. They can add more deployers later.
diff --git a/docs/ev-reth/features/mint-precompile.md b/docs/ev-reth/features/mint-precompile.md
new file mode 100644
index 000000000..d876c7bf9
--- /dev/null
+++ b/docs/ev-reth/features/mint-precompile.md
@@ -0,0 +1,87 @@
+# Mint Precompile
+
+A custom precompile for minting native tokens.
+
+## Overview
+
+The mint precompile allows authorized addresses to mint native tokens (ETH equivalent) directly. This enables:
+
+- Bridge minting (mint when assets are bridged in)
+- Inflation schedules
+- Programmatic rewards
+- Airdrops
+
+## Configuration
+
+In your chainspec (`genesis.json`):
+
+```json
+{
+ "config": {
+ "evolve": {
+ "mintPrecompile": {
+ "admin": "0xMINT_ADMIN_ADDRESS",
+ "address": "0x0000000000000000000000000000000000000100"
+ }
+ }
+ }
+}
+```
+
+| Field | Description |
+|-------|-------------|
+| `admin` | Address authorized to call mint |
+| `address` | Precompile address (conventionally `0x100`) |
+
+## Interface
+
+```solidity
+interface IMintPrecompile {
+ // Mint native tokens to recipient
+ function mint(address recipient, uint256 amount) external;
+}
+```
+
+## Usage
+
+From an authorized contract:
+
+```solidity
+contract Bridge {
+ IMintPrecompile constant MINT = IMintPrecompile(0x0000000000000000000000000000000000000100);
+
+ function bridgeIn(address recipient, uint256 amount) external {
+ // Verify bridge proof...
+
+ // Mint native tokens
+ MINT.mint(recipient, amount);
+ }
+}
+```
+
+## Security
+
+- Only the `admin` address can call `mint()`
+- Calls from other addresses revert
+- The admin is typically a bridge contract or multisig
+
+## Changing Admin
+
+The admin cannot be changed after genesis. To update, you would need a chain upgrade with a new chainspec.
+
+## Example: Bridge Setup
+
+```json
+{
+ "config": {
+ "evolve": {
+ "mintPrecompile": {
+ "admin": "0xBridgeContractAddress",
+ "address": "0x0000000000000000000000000000000000000100"
+ }
+ }
+ }
+}
+```
+
+The bridge contract can mint tokens when users bridge assets from another chain.
diff --git a/docs/ev-reth/overview.md b/docs/ev-reth/overview.md
new file mode 100644
index 000000000..bbba32687
--- /dev/null
+++ b/docs/ev-reth/overview.md
@@ -0,0 +1,69 @@
+# ev-reth Overview
+
+ev-reth is a modified [reth](https://github.com/paradigmxyz/reth) Ethereum execution client optimized for Evolve rollups.
+
+## What is ev-reth?
+
+ev-reth extends reth with:
+
+- **Engine API integration** — Driven by ev-node for block production
+- **Rollup-specific features** — Base fee redirect, deploy allowlist, custom precompiles
+- **Configurable chain parameters** — Contract size limits, custom gas settings
+
+## Architecture
+
+```
+┌─────────────────────────────────────────┐
+│ ev-node │
+│ (consensus + DA + P2P) │
+└─────────────────┬───────────────────────┘
+ │ Engine API
+ │ (JWT authenticated)
+┌─────────────────▼───────────────────────┐
+│ ev-reth │
+│ (EVM execution) │
+│ ┌───────────┐ ┌───────────────────┐ │
+│ │ State DB │ │ Transaction Pool │ │
+│ └───────────┘ └───────────────────┘ │
+│ ┌───────────────────────────────────┐ │
+│ │ EVM + Precompiles │ │
+│ └───────────────────────────────────┘ │
+└─────────────────────────────────────────┘
+```
+
+ev-node drives ev-reth through the Engine API:
+
+1. ev-node calls `engine_forkchoiceUpdated` with payload attributes
+2. ev-reth builds a block from pending transactions
+3. ev-node calls `engine_getPayload` to retrieve the block
+4. ev-node broadcasts and submits to DA
+5. ev-node calls `engine_newPayload` to finalize
+
+## Features
+
+| Feature | Description |
+|---------|-------------|
+| [Base Fee Redirect](/ev-reth/features/base-fee-redirect) | Send base fees to treasury instead of burning |
+| [Deploy Allowlist](/ev-reth/features/deploy-allowlist) | Restrict who can deploy contracts |
+| [Contract Size Limits](/ev-reth/features/contract-size-limits) | Increase max contract size beyond 24KB |
+| [Mint Precompile](/ev-reth/features/mint-precompile) | Native token minting for bridges |
+
+## When to Use ev-reth
+
+Use ev-reth when you want:
+
+- Full EVM compatibility
+- Ethereum tooling (Foundry, Hardhat, etc.)
+- Standard wallet support (MetaMask, etc.)
+- High-performance Rust execution
+
+## Repository
+
+- GitHub: [github.com/evstack/ev-reth](https://github.com/evstack/ev-reth)
+- Based on: [paradigmxyz/reth](https://github.com/paradigmxyz/reth)
+
+## Next Steps
+
+- [EVM Quickstart](/getting-started/evm/quickstart) — Get started
+- [Configuration](/ev-reth/configuration) — Chainspec and settings
+- [Engine API](/ev-reth/engine-api) — How ev-node communicates with ev-reth
diff --git a/docs/getting-started/choose-your-path.md b/docs/getting-started/choose-your-path.md
new file mode 100644
index 000000000..b07a1b6e0
--- /dev/null
+++ b/docs/getting-started/choose-your-path.md
@@ -0,0 +1,118 @@
+# Choose Your Path
+
+Evolve supports three execution environments. Your choice depends on your existing codebase, target users, and development resources.
+
+## Quick Comparison
+
+| | EVM (ev-reth) | Cosmos SDK (ev-abci) | Custom Executor |
+|----------------------|------------------------------------|-----------------------------|---------------------|
+| **Best for** | New chains, DeFi, NFTs | Existing Cosmos chains | Novel VMs, research |
+| **Language** | Solidity, Vyper | Go | Any |
+| **Wallet support** | MetaMask, Rainbow, all EVM wallets | Keplr, Leap, Cosmos wallets | Build your own |
+| **Block explorer** | Blockscout, any EVM explorer | Mintscan, Ping.pub | Build your own |
+| **Tooling maturity** | Excellent | Good | None |
+| **Setup complexity** | Low | Medium | High |
+| **Migration path** | Deploy existing contracts | Migrate existing chain | N/A |
+
+## EVM (ev-reth)
+
+Use ev-reth if you want Ethereum compatibility.
+
+### Pros
+
+- **Wallet ecosystem** — MetaMask, Rainbow, Rabby, and every EVM wallet works out of the box. Users don't need new software.
+- **Developer tooling** — Foundry, Hardhat, Remix, Tenderly, and the entire Ethereum toolchain works unchanged.
+- **Contract portability** — Deploy existing Solidity/Vyper contracts without modification.
+- **Block explorers** — Blockscout, Etherscan-compatible APIs, and standard indexers work immediately.
+- **RPC compatibility** — Standard Ethereum JSON-RPC means existing frontend code works.
+
+### Cons
+
+- **EVM constraints** — Bound by EVM gas model and execution semantics.
+
+### When to choose EVM
+
+- Building a new chain and want maximum user/developer reach
+- Need access to EVM DeFi tooling (Uniswap, lending protocols, etc.)
+- Want users to connect with wallets they already have
+
+**→ [EVM Quickstart](/getting-started/evm/quickstart)**
+
+## Cosmos SDK (ev-abci)
+
+Use ev-abci if you have an existing Cosmos chain or want Cosmos SDK modules.
+
+### Pros
+
+- **Migration path** — Existing Cosmos SDK chains can migrate without rewriting application logic.
+- **Cosmos tooling** — Ignite CLI, Cosmos SDK modules, and familiar Go development.
+- **Custom modules** — Build application-specific logic beyond what smart contracts allow.
+- **Established wallets** — Keplr, Leap, and Cosmos wallets have strong user bases.
+
+### Cons
+
+- **Smaller wallet ecosystem** — Fewer wallets than EVM, though major ones are well-supported.
+- **Migration complexity** — Moving from CometBFT requires careful migration.
+- **Different mental model** — Cosmos SDK modules differ significantly from smart contracts.
+
+### When to choose Cosmos SDK
+
+- Have an existing Cosmos SDK chain running on CometBFT
+- Want to shed validator overhead while keeping your application logic
+- Prefer Go over Solidity for application development
+
+**→ [Cosmos SDK Quickstart](/getting-started/cosmos/quickstart)**
+
+## Custom Executor
+
+Use a custom executor if you need something neither EVM nor Cosmos SDK provides.
+
+### Pros
+
+- **Maximum flexibility** — Implement any state machine, any VM, any execution model.
+- **Performance optimization** — Tailor execution to your specific use case.
+- **Novel designs** — Build zkVMs, specialized rollups, or research prototypes.
+
+### Cons
+
+- **No wallet support** — You must build or integrate wallet connectivity.
+- **No tooling** — No block explorers, no development frameworks, no debugging tools.
+- **High development cost** — Everything beyond ev-node itself is your responsibility.
+- **No ecosystem** — Users and developers must learn your custom environment.
+
+### When to choose Custom
+
+- Building a novel VM (zkVM, MoveVM, etc.)
+- Research or experimental chains
+- Highly specialized state machines (gaming, specific financial instruments)
+- Have resources to build full tooling stack
+
+**→ [Custom Executor Quickstart](/getting-started/custom/quickstart)**
+
+## Decision Tree
+
+```
+Do you have an existing Cosmos SDK chain?
+├── Yes → Cosmos SDK (ev-abci)
+└── No
+ │
+ Do you need a custom VM or non-standard execution?
+ ├── Yes → Custom Executor
+ └── No
+ │
+ Do you want maximum wallet/tooling support?
+ ├── Yes → EVM (ev-reth)
+ └── No
+ │
+ Do you prefer Go over Solidity?
+ ├── Yes → Cosmos SDK (ev-abci)
+ └── No → EVM (ev-reth)
+```
+
+## Switching Later
+
+- **EVM → Cosmos SDK**: Not practical. Different execution models, would require chain restart.
+- **Cosmos SDK → EVM**: Not practical. Same reason.
+- **Custom → Either**: Possible if you design for it, but significant work.
+
+Choose based on your long-term needs. The execution environment is a foundational decision.
diff --git a/docs/getting-started/cosmos/integrate-ev-abci.md b/docs/getting-started/cosmos/integrate-ev-abci.md
new file mode 100644
index 000000000..192bfb75e
--- /dev/null
+++ b/docs/getting-started/cosmos/integrate-ev-abci.md
@@ -0,0 +1,110 @@
+# Integrate ev-abci
+
+Manually integrate ev-abci into an existing Cosmos SDK application.
+
+## Overview
+
+ev-abci replaces CometBFT as the consensus engine for your Cosmos SDK chain. Your application logic remains unchanged—only the node startup code changes.
+
+## 1. Add Dependency
+
+```bash
+go get github.com/evstack/ev-abci@latest
+```
+
+## 2. Modify Your Start Command
+
+Locate your application's entrypoint, typically `cmd/<appd>/main.go` or `cmd/<appd>/root.go`.
+
+Replace the CometBFT server with ev-abci:
+
+```go
+package main
+
+import (
+ "os"
+
+ "github.com/cosmos/cosmos-sdk/server"
+ "github.com/spf13/cobra"
+
+ // Import ev-abci server
+ evabci "github.com/evstack/ev-abci/server"
+
+ "your-app/app"
+)
+
+func main() {
+ rootCmd := &cobra.Command{
+ Use: "appd",
+ Short: "Your App Daemon",
+ }
+
+ // Keep existing commands
+ server.AddCommands(rootCmd, app.DefaultNodeHome, app.New, app.MakeEncodingConfig())
+
+ // Replace start command with ev-abci
+ startCmd := &cobra.Command{
+ Use: "start",
+ Short: "Run the node with ev-abci",
+ RunE: func(cmd *cobra.Command, _ []string) error {
+ return evabci.StartHandler(cmd, app.New)
+ },
+ }
+
+ evabci.AddFlags(startCmd)
+ rootCmd.AddCommand(startCmd)
+
+ if err := rootCmd.Execute(); err != nil {
+ os.Exit(1)
+ }
+}
+```
+
+## 3. Build
+
+```bash
+go build -o appd ./cmd/appd
+```
+
+## 4. Verify
+
+Check that ev-abci flags are available:
+
+```bash
+./appd start --help
+```
+
+You should see flags like:
+
+```
+--evnode.node.aggregator
+--evnode.da.address
+--evnode.signer.passphrase
+```
+
+## 5. Initialize and Run
+
+```bash
+# Initialize (same as before)
+./appd init mynode --chain-id mychain-1
+
+# Start with ev-abci
+./appd start \
+ --evnode.node.aggregator \
+ --evnode.da.address http://localhost:7980 \
+ --evnode.signer.passphrase secret
+```
+
+## Key Differences from CometBFT
+
+| Aspect | CometBFT | ev-abci |
+|--------|----------|---------|
+| Validators | Multiple validators with staking | Single sequencer |
+| Consensus | BFT consensus rounds | Sequencer produces blocks |
+| Finality | Instant (BFT) | Soft (P2P) → Hard (DA) |
+| Block time | ~6s typical | Configurable (100ms+) |
+
+## Next Steps
+
+- [Migration Guide](/getting-started/cosmos/migration-guide) — Migrate existing chain with state
+- [ev-abci Overview](/ev-abci/overview) — Architecture details
diff --git a/docs/getting-started/cosmos/migration-guide.md b/docs/getting-started/cosmos/migration-guide.md
new file mode 100644
index 000000000..b0bd81553
--- /dev/null
+++ b/docs/getting-started/cosmos/migration-guide.md
@@ -0,0 +1,116 @@
+# Migration Guide
+
+Migrate an existing Cosmos SDK chain from CometBFT to Evolve while preserving state.
+
+## Overview
+
+The migration process:
+
+1. Add migration modules to your chain
+2. Pass governance proposal to halt at upgrade height
+3. Export state and run migration
+4. Restart with ev-abci
+
+## Phase 1: Add Migration Modules
+
+### Add Migration Manager
+
+The migration manager handles the transition from multi-validator to single-sequencer.
+
+```go
+import (
+ migrationmngr "github.com/evstack/ev-abci/modules/migrationmngr"
+ migrationmngrkeeper "github.com/evstack/ev-abci/modules/migrationmngr/keeper"
+ migrationmngrtypes "github.com/evstack/ev-abci/modules/migrationmngr/types"
+)
+```
+
+Add the keeper to your app and register the module.
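+
+A condensed sketch of that wiring (see the [Migration Manager](/ev-abci/modules/migration-manager) docs for the full version):
+
+```go
+// Store key
+keys := sdk.NewKVStoreKeys(
+	// ... other keys
+	migrationmngrtypes.StoreKey,
+)
+
+// Keeper
+app.MigrationManagerKeeper = migrationmngrkeeper.NewKeeper(
+	appCodec,
+	keys[migrationmngrtypes.StoreKey],
+	app.StakingKeeper,
+	app.BankKeeper,
+	authtypes.NewModuleAddress(govtypes.ModuleName).String(),
+)
+
+// Module registration
+app.ModuleManager = module.NewManager(
+	// ... other modules
+	migrationmngr.NewAppModule(appCodec, app.MigrationManagerKeeper),
+)
+```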
+
+### Replace Staking Module
+
+Replace the standard staking module with ev-abci's wrapper to prevent validator updates during migration:
+
+```go
+// Replace this:
+import "github.com/cosmos/cosmos-sdk/x/staking"
+
+// With this:
+import "github.com/evstack/ev-abci/modules/staking"
+```
+
+## Phase 2: Governance Proposal
+
+Submit a software upgrade proposal:
+
+```bash
+appd tx gov submit-proposal software-upgrade v2-evolve \
+ --title "Migrate to Evolve" \
+ --description "Upgrade to ev-abci consensus" \
+  --upgrade-height <upgrade-height> \
+  --from <key>
+```
+
+Vote on the proposal and wait for it to pass.
+
+## Phase 3: Wire ev-abci
+
+Before the chain halts, update your start command to use ev-abci (see [Integrate ev-abci](/getting-started/cosmos/integrate-ev-abci)).
+
+Rebuild your binary:
+
+```bash
+go build -o appd ./cmd/appd
+```
+
+**Do not start the node yet.**
+
+## Phase 4: Run Migration
+
+After the chain halts at the upgrade height:
+
+```bash
+appd evolve-migrate
+```
+
+This command:
+
+- Migrates blocks from CometBFT to Evolve format
+- Converts state to Evolve format
+- Creates `ev_genesis.json`
+- Seeds sync stores
+
+## Phase 5: Restart
+
+Start with ev-abci:
+
+```bash
+appd start \
+ --evnode.node.aggregator \
+  --evnode.da.address <da-address> \
+  --evnode.signer.passphrase <passphrase>
+```
+
+The chain continues from the last CometBFT state with the new consensus engine.
+
+## Considerations
+
+- **Downtime**: Chain is halted during migration (typically minutes)
+- **Coordination**: All node operators must upgrade simultaneously
+- **Rollback**: Keep CometBFT binary and data backup for emergency rollback
+- **Vote extensions**: Not supported in Evolve—will have no effect after migration
+
+## Full Node Migration
+
+For non-sequencer nodes, skip the aggregator flag:
+
+```bash
+appd start \
+  --evnode.da.address <da-address> \
+  --evnode.p2p.peers <peer-id>@<host>:<port>
+```
+
+## Next Steps
+
+- [ev-abci Migration from CometBFT](/ev-abci/migration-from-cometbft) — Detailed migration reference
+- [Run a Full Node](/guides/running-nodes/full-node) — Non-sequencer setup
diff --git a/docs/getting-started/cosmos/quickstart.md b/docs/getting-started/cosmos/quickstart.md
new file mode 100644
index 000000000..4bd6a73b3
--- /dev/null
+++ b/docs/getting-started/cosmos/quickstart.md
@@ -0,0 +1,86 @@
+# Cosmos SDK Quickstart
+
+Get a Cosmos SDK chain running on Evolve using Ignite CLI.
+
+## Prerequisites
+
+- Go 1.22+
+- [Ignite CLI](https://docs.ignite.com/welcome/install)
+
+## 1. Start Local DA
+
+```bash
+go install github.com/evstack/ev-node/tools/local-da@latest
+local-da
+```
+
+Keep this running in a separate terminal.
+
+## 2. Create a New Chain
+
+```bash
+ignite scaffold chain mychain --address-prefix mychain
+cd mychain
+```
+
+## 3. Add Evolve
+
+Install the Evolve plugin for Ignite:
+
+```bash
+ignite app install -g github.com/ignite/apps/evolve
+```
+
+Add Evolve to your chain:
+
+```bash
+ignite evolve add
+```
+
+This modifies your chain to use ev-abci instead of CometBFT.
+
+## 4. Build and Initialize
+
+```bash
+make install
+
+mychaind init mynode --chain-id mychain-1
+mychaind keys add mykey --keyring-backend test
+mychaind genesis add-genesis-account mykey 1000000000stake --keyring-backend test
+mychaind genesis gentx mykey 1000000stake --chain-id mychain-1 --keyring-backend test
+mychaind genesis collect-gentxs
+```
+
+## 5. Start the Chain
+
+```bash
+mychaind start \
+ --evnode.node.aggregator \
+ --evnode.da.address http://localhost:7980 \
+ --evnode.signer.passphrase secret
+```
+
+You should see blocks being produced:
+
+```
+INF block marked as DA included blockHeight=1
+INF block marked as DA included blockHeight=2
+```
+
+## 6. Interact
+
+In another terminal:
+
+```bash
+# Check balance
+mychaind query bank balances $(mychaind keys show mykey -a --keyring-backend test)
+
+# Send tokens
+mychaind tx bank send mykey mychain1... 1000stake --keyring-backend test --chain-id mychain-1 -y
+```
+
+## Next Steps
+
+- [Integrate ev-abci](/getting-started/cosmos/integrate-ev-abci) — Manual integration without Ignite
+- [Migration Guide](/getting-started/cosmos/migration-guide) — Migrate existing CometBFT chain
+- [Connect to Celestia](/guides/da-layers/celestia) — Production DA layer
diff --git a/docs/getting-started/custom/implement-executor.md b/docs/getting-started/custom/implement-executor.md
new file mode 100644
index 000000000..ee1c362e0
--- /dev/null
+++ b/docs/getting-started/custom/implement-executor.md
@@ -0,0 +1,222 @@
+# Implement Executor Interface
+
+Deep dive into each method of the Executor interface.
+
+## Interface Overview
+
+```go
+type Executor interface {
+ InitChain(ctx context.Context, genesis Genesis) ([]byte, error)
+ GetTxs(ctx context.Context) ([][]byte, error)
+ ExecuteTxs(ctx context.Context, txs [][]byte, height uint64, timestamp time.Time) (*ExecutionResult, error)
+ SetFinal(ctx context.Context, height uint64) error
+}
+```
+
+## InitChain
+
+Called once when the chain starts for the first time.
+
+```go
+func (e *MyExecutor) InitChain(ctx context.Context, genesis Genesis) ([]byte, error)
+```
+
+**Parameters:**
+
+- `genesis` — Contains initial state, chain ID, and configuration
+
+**Returns:**
+
+- Initial state root (hash of genesis state)
+- Error if initialization fails
+
+**Responsibilities:**
+
+- Parse genesis data
+- Initialize state storage
+- Set up initial accounts/balances
+- Return deterministic state root
+
+**Example:**
+
+```go
+func (e *MyExecutor) InitChain(ctx context.Context, genesis Genesis) ([]byte, error) {
+ // Parse genesis
+ var state GenesisState
+ if err := json.Unmarshal(genesis.AppState, &state); err != nil {
+ return nil, err
+ }
+
+ // Initialize state
+ for addr, balance := range state.Balances {
+ e.db.Set([]byte(addr), []byte(balance))
+ }
+
+ // Compute and return state root
+ return e.db.Hash(), nil
+}
+```
+
+## GetTxs
+
+Called by the sequencer to get pending transactions for the next block.
+
+```go
+func (e *MyExecutor) GetTxs(ctx context.Context) ([][]byte, error)
+```
+
+**Returns:**
+
+- Slice of transaction bytes from your mempool
+- Error if retrieval fails
+
+**Responsibilities:**
+
+- Return transactions ready for inclusion
+- Optionally prioritize by fee, nonce, etc.
+- Remove invalid transactions
+
+**Example:**
+
+```go
+func (e *MyExecutor) GetTxs(ctx context.Context) ([][]byte, error) {
+ txs := e.mempool.GetPending(100) // Get up to 100 txs
+ return txs, nil
+}
+```
+
+## ExecuteTxs
+
+The core execution method. Called for every block.
+
+```go
+func (e *MyExecutor) ExecuteTxs(
+ ctx context.Context,
+ txs [][]byte,
+ height uint64,
+ timestamp time.Time,
+) (*ExecutionResult, error)
+```
+
+**Parameters:**
+
+- `txs` — Ordered transactions to execute
+- `height` — Block height
+- `timestamp` — Block timestamp
+
+**Returns:**
+
+- `ExecutionResult` containing new state root and gas used
+- Error only for system failures (not tx failures)
+
+**Responsibilities:**
+
+- Execute each transaction in order
+- Update state
+- Track gas usage
+- Handle transaction failures gracefully
+- Return new state root
+
+**Example:**
+
+```go
+func (e *MyExecutor) ExecuteTxs(
+ ctx context.Context,
+ txs [][]byte,
+ height uint64,
+ timestamp time.Time,
+) (*ExecutionResult, error) {
+ var totalGas uint64
+
+ for _, txBytes := range txs {
+ tx, err := DecodeTx(txBytes)
+ if err != nil {
+ continue // Skip invalid tx
+ }
+
+ gas, err := e.executeTx(tx)
+ if err != nil {
+ // Log but continue - tx failure != block failure
+ continue
+ }
+
+ totalGas += gas
+ }
+
+ // Commit state changes
+ stateRoot := e.db.Commit()
+
+ return &ExecutionResult{
+ StateRoot: stateRoot,
+ GasUsed: totalGas,
+ }, nil
+}
+```
+
+## SetFinal
+
+Called when a block is confirmed on the DA layer.
+
+```go
+func (e *MyExecutor) SetFinal(ctx context.Context, height uint64) error
+```
+
+**Parameters:**
+
+- `height` — The block height that is now DA-finalized
+
+**Responsibilities:**
+
+- Mark state as finalized
+- Prune old state if desired
+- Trigger any finality-dependent logic
+
+**Example:**
+
+```go
+func (e *MyExecutor) SetFinal(ctx context.Context, height uint64) error {
+ // Mark height as final
+ e.finalHeight = height
+
+ // Optionally prune old state
+ if height > 100 {
+ e.db.Prune(height - 100)
+ }
+
+ return nil
+}
+```
+
+## State Management Tips
+
+1. **Determinism** — ExecuteTxs must be deterministic. Same inputs must produce same state root.
+
+2. **Atomicity** — Either all state changes for a block commit, or none do (see the sketch after this list).
+
+3. **Crash recovery** — State should be recoverable after crash. ev-node will replay blocks if needed.
+
+4. **Gas metering** — Track computational cost to prevent DoS.
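+
+A common pattern for the determinism and atomicity points is to stage all writes for a block in a batch and commit once (a sketch with hypothetical `NewBatch`/`Commit` helpers, not part of the Executor interface):
+
+```go
+// Stage writes for the block; nothing touches the live DB until Commit.
+batch := e.db.NewBatch()
+for _, txBytes := range txs {
+	if tx, err := DecodeTx(txBytes); err == nil {
+		applyTx(batch, tx) // writes go into the batch, not the live state
+	}
+}
+stateRoot, err := batch.Commit() // all-or-nothing, yields the new root
+```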
+
+## Testing
+
+Test your executor in isolation:
+
+```go
+func TestExecuteTxs(t *testing.T) {
+ exec := NewMyExecutor()
+
+ // Initialize
+ _, err := exec.InitChain(ctx, genesis)
+ require.NoError(t, err)
+
+ // Execute
+ result, err := exec.ExecuteTxs(ctx, txs, 1, time.Now())
+ require.NoError(t, err)
+ require.NotEmpty(t, result.StateRoot)
+}
+```
+
+## Next Steps
+
+- [Executor Interface Reference](/reference/interfaces/executor) — Full type definitions
+- [Testapp Source](https://github.com/evstack/ev-node/tree/main/apps/testapp) — Working example
diff --git a/docs/getting-started/custom/quickstart.md b/docs/getting-started/custom/quickstart.md
new file mode 100644
index 000000000..6e8652639
--- /dev/null
+++ b/docs/getting-started/custom/quickstart.md
@@ -0,0 +1,141 @@
+# Custom Executor Quickstart
+
+Build a minimal custom executor to understand how ev-node integrates with execution layers.
+
+## Prerequisites
+
+- Go 1.22+
+- Familiarity with Go interfaces
+
+## 1. Start Local DA
+
+```bash
+go install github.com/evstack/ev-node/tools/local-da@latest
+local-da
+```
+
+Keep this running.
+
+## 2. Clone ev-node
+
+```bash
+git clone https://github.com/evstack/ev-node.git
+cd ev-node
+```
+
+## 3. Explore the Testapp
+
+ev-node includes a reference executor in `apps/testapp/`. This is a minimal key-value store:
+
+```bash
+ls apps/testapp/
+```
+
+Key files:
+
+- `executor.go` — Implements the Executor interface
+- `main.go` — Wires everything together
+
+## 4. Build and Run
+
+```bash
+make build
+
+./build/testapp init --evnode.node.aggregator --evnode.signer.passphrase secret
+
+./build/testapp start --evnode.signer.passphrase secret
+```
+
+You should see blocks being produced.
+
+## 5. Understand the Executor Interface
+
+The core interface your executor must implement:
+
+```go
+type Executor interface {
+ // Initialize chain state from genesis
+ InitChain(ctx context.Context, genesis Genesis) (stateRoot []byte, err error)
+
+ // Return pending transactions from mempool
+ GetTxs(ctx context.Context) (txs [][]byte, err error)
+
+ // Execute transactions and return new state root
+ ExecuteTxs(ctx context.Context, txs [][]byte, height uint64, timestamp time.Time) (*ExecutionResult, error)
+
+ // Mark a height as DA-finalized
+ SetFinal(ctx context.Context, height uint64) error
+}
+```
+
+## 6. Create Your Own Executor
+
+Create a new file `my_executor.go`:
+
+```go
+package main
+
+import (
+ "context"
+ "time"
+
+ "github.com/evstack/ev-node/core/execution"
+)
+
+type MyExecutor struct {
+ state map[string]string
+}
+
+func NewMyExecutor() *MyExecutor {
+ return &MyExecutor{state: make(map[string]string)}
+}
+
+func (e *MyExecutor) InitChain(ctx context.Context, genesis execution.Genesis) ([]byte, error) {
+ // Initialize from genesis
+ return []byte("genesis-root"), nil
+}
+
+func (e *MyExecutor) GetTxs(ctx context.Context) ([][]byte, error) {
+ // Return pending transactions
+ return nil, nil
+}
+
+func (e *MyExecutor) ExecuteTxs(ctx context.Context, txs [][]byte, height uint64, timestamp time.Time) (*execution.ExecutionResult, error) {
+ // Process transactions, update state
+ for _, tx := range txs {
+ // Your logic here
+ _ = tx
+ }
+
+ return &execution.ExecutionResult{
+ StateRoot: []byte("new-root"),
+ GasUsed: 0,
+ }, nil
+}
+
+func (e *MyExecutor) SetFinal(ctx context.Context, height uint64) error {
+ // Height is now DA-finalized
+ return nil
+}
+```
+
+## 7. Wire It Up
+
+See `apps/testapp/main.go` for how to create a full node with your executor:
+
+```go
+executor := NewMyExecutor()
+
+node, err := node.NewFullNode(
+ ctx,
+ config,
+ executor,
+ // ... other options
+)
+```
+
+## Next Steps
+
+- [Implement Executor](/getting-started/custom/implement-executor) — Deep dive into each method
+- [Executor Interface Reference](/reference/interfaces/executor) — Full interface documentation
+- [Testapp Source](https://github.com/evstack/ev-node/tree/main/apps/testapp) — Reference implementation
diff --git a/docs/getting-started/evm/deploy-contracts.md b/docs/getting-started/evm/deploy-contracts.md
new file mode 100644
index 000000000..09e18ca17
--- /dev/null
+++ b/docs/getting-started/evm/deploy-contracts.md
@@ -0,0 +1,144 @@
+# Deploy Contracts
+
+Deploy smart contracts to your Evolve EVM chain using Foundry or Hardhat.
+
+## Network Configuration
+
+| Setting | Local | Testnet (example) |
+|---------|-------|-------------------|
+| RPC URL | http://localhost:8545 | Your RPC endpoint |
+| Chain ID | 1337 | Your chain ID |
+| Currency | ETH | Your native token |
+
+## Foundry
+
+### Install
+
+```bash
+curl -L https://foundry.paradigm.xyz | bash
+foundryup
+```
+
+### Configure
+
+Create or update `foundry.toml`:
+
+```toml
+[profile.default]
+src = "src"
+out = "out"
+libs = ["lib"]
+
+[rpc_endpoints]
+local = "http://localhost:8545"
+```
+
+### Deploy
+
+```bash
+# Deploy a contract
+forge create src/MyContract.sol:MyContract \
+ --rpc-url local \
+ --private-key $PRIVATE_KEY
+
+# Deploy with constructor args
+forge create src/Token.sol:Token \
+ --rpc-url local \
+ --private-key $PRIVATE_KEY \
+ --constructor-args "MyToken" "MTK" 18
+
+# Deploy and verify (if explorer supports it)
+forge create src/MyContract.sol:MyContract \
+ --rpc-url local \
+ --private-key $PRIVATE_KEY \
+ --verify
+```
+
+### Interact
+
+```bash
+# Call a read function
+cast call $CONTRACT_ADDRESS "balanceOf(address)" $WALLET_ADDRESS --rpc-url local
+
+# Send a transaction
+cast send $CONTRACT_ADDRESS "transfer(address,uint256)" $TO_ADDRESS 1000 \
+ --rpc-url local \
+ --private-key $PRIVATE_KEY
+```
+
+## Hardhat
+
+### Install
+
+```bash
+npm init -y
+npm install --save-dev hardhat @nomicfoundation/hardhat-toolbox
+npx hardhat init
+```
+
+### Configure
+
+Update `hardhat.config.js`:
+
+```javascript
+require("@nomicfoundation/hardhat-toolbox");
+
+module.exports = {
+ solidity: "0.8.24",
+ networks: {
+ local: {
+ url: "http://localhost:8545",
+ accounts: [process.env.PRIVATE_KEY],
+ },
+ },
+};
+```
+
+### Deploy
+
+Create `scripts/deploy.js`:
+
+```javascript
+const hre = require("hardhat");
+
+async function main() {
+ const Contract = await hre.ethers.getContractFactory("MyContract");
+ const contract = await Contract.deploy();
+ await contract.waitForDeployment();
+
+ console.log("Deployed to:", await contract.getAddress());
+}
+
+main().catch((error) => {
+ console.error(error);
+ process.exit(1);
+});
+```
+
+Run:
+
+```bash
+npx hardhat run scripts/deploy.js --network local
+```
+
+## Prefunded Accounts
+
+The default chainspec includes prefunded accounts for testing. Check your `genesis.json` `alloc` section for available addresses.
+
+To add your own:
+
+```json
+{
+ "alloc": {
+ "0xYourAddress": {
+ "balance": "0x200000000000000000000000000000000000000000000000000000000000000"
+ }
+ }
+}
+```
+
+## Next Steps
+
+- [Configure ev-reth](/getting-started/evm/setup-ev-reth) — Chainspec customization
+- [Base Fee Redirect](/ev-reth/features/base-fee-redirect) — Send fees to treasury
+- [Deploy Allowlist](/ev-reth/features/deploy-allowlist) — Restrict contract deployment
diff --git a/docs/getting-started/evm/quickstart.md b/docs/getting-started/evm/quickstart.md
new file mode 100644
index 000000000..001faa5da
--- /dev/null
+++ b/docs/getting-started/evm/quickstart.md
@@ -0,0 +1,91 @@
+# EVM Quickstart
+
+Get an EVM rollup running locally in under 5 minutes.
+
+## Prerequisites
+
+- Go 1.22+
+- Docker
+- Git
+
+## 1. Start Local DA
+
+```bash
+go install github.com/evstack/ev-node/tools/local-da@latest
+local-da
+```
+
+You should see:
+
+```
+INF Listening on host=localhost port=7980
+```
+
+Keep this running in a separate terminal.
+
+## 2. Start ev-reth
+
+```bash
+git clone https://github.com/evstack/ev-reth.git
+cd ev-reth
+docker compose up -d
+```
+
+This starts reth with Evolve's Engine API configuration. The default ports:
+
+- `8545` — JSON-RPC
+- `8551` — Engine API
+
+## 3. Start ev-node
+
+In a new terminal:
+
+```bash
+git clone https://github.com/evstack/ev-node.git
+cd ev-node
+make build-evm
+```
+
+Initialize and start:
+
+```bash
+./build/evm init --evnode.node.aggregator --evnode.signer.passphrase secret
+
+./build/evm start \
+ --evnode.node.aggregator \
+ --evnode.signer.passphrase secret \
+ --evnode.node.block_time 1s
+```
+
+You should see blocks being produced:
+
+```
+INF block marked as DA included blockHeight=1
+INF block marked as DA included blockHeight=2
+```
+
+## 4. Connect a Wallet
+
+Add the network to MetaMask:
+
+| Setting | Value |
+|---------|-------|
+| Network Name | Evolve Local |
+| RPC URL | http://localhost:8545 |
+| Chain ID | 1337 |
+| Currency | ETH |
+
+## 5. Deploy a Contract
+
+With Foundry:
+
+```bash
+forge create src/Counter.sol:Counter --rpc-url http://localhost:8545 --private-key $PRIVATE_KEY
+```
+
+## Next Steps
+
+- [Configure ev-reth](/getting-started/evm/setup-ev-reth) — Customize chainspec, features
+- [Deploy Contracts](/getting-started/evm/deploy-contracts) — Foundry and Hardhat setup
+- [Connect to Celestia](/guides/da-layers/celestia) — Production DA layer
+- [Run a Full Node](/guides/running-nodes/full-node) — Non-sequencer node setup
diff --git a/docs/getting-started/evm/setup-ev-reth.md b/docs/getting-started/evm/setup-ev-reth.md
new file mode 100644
index 000000000..847531223
--- /dev/null
+++ b/docs/getting-started/evm/setup-ev-reth.md
@@ -0,0 +1,134 @@
+# Configure ev-reth
+
+ev-reth is a modified [reth](https://github.com/paradigmxyz/reth) client with Evolve-specific features. This guide covers configuration options.
+
+## Chainspec
+
+The chainspec (`genesis.json`) defines your chain's parameters. ev-reth extends the standard Ethereum genesis format with Evolve-specific fields.
+
+### Minimal Chainspec
+
+```json
+{
+ "config": {
+ "chainId": 1337,
+ "homesteadBlock": 0,
+ "eip150Block": 0,
+ "eip155Block": 0,
+ "eip158Block": 0,
+ "byzantiumBlock": 0,
+ "constantinopleBlock": 0,
+ "petersburgBlock": 0,
+ "istanbulBlock": 0,
+ "berlinBlock": 0,
+ "londonBlock": 0,
+ "shanghaiTime": 0,
+ "cancunTime": 0
+ },
+ "alloc": {
+ "0xYOUR_ADDRESS": {
+ "balance": "0x200000000000000000000000000000000000000000000000000000000000000"
+ }
+ },
+ "coinbase": "0x0000000000000000000000000000000000000000",
+ "difficulty": "0x0",
+ "gasLimit": "0x1c9c380",
+ "nonce": "0x0",
+ "timestamp": "0x0"
+}
+```
+
+### Evolve Extensions
+
+Add these under `config.evolve`:
+
+```json
+{
+ "config": {
+ "chainId": 1337,
+ "evolve": {
+ "baseFeeSink": "0xTREASURY_ADDRESS",
+ "baseFeeRedirectActivationHeight": 0,
+ "deployAllowlist": {
+ "admin": "0xADMIN_ADDRESS",
+ "enabled": ["0xDEPLOYER1", "0xDEPLOYER2"]
+ },
+ "contractSizeLimit": 49152,
+ "mintPrecompile": {
+ "admin": "0xMINT_ADMIN",
+ "address": "0x0000000000000000000000000000000000000100"
+ }
+ }
+ }
+}
+```
+
+| Field | Description |
+|-------|-------------|
+| `baseFeeSink` | Address to receive base fees instead of burning |
+| `deployAllowlist` | Restrict contract deployment to allowlisted addresses |
+| `contractSizeLimit` | Override default 24KB contract size limit |
+| `mintPrecompile` | Enable native token minting precompile |
+
+## Docker Configuration
+
+The default `docker-compose.yml` in ev-reth:
+
+```yaml
+services:
+ reth:
+ image: ghcr.io/evstack/ev-reth:latest
+ ports:
+ - "8545:8545" # JSON-RPC
+ - "8551:8551" # Engine API
+ volumes:
+ - ./data:/data
+ - ./genesis.json:/genesis.json
+ - ./jwt.hex:/jwt.hex
+ command:
+ - node
+ - --chain=/genesis.json
+ - --http
+ - --http.addr=0.0.0.0
+ - --http.api=eth,net,web3,txpool
+ - --authrpc.addr=0.0.0.0
+ - --authrpc.jwtsecret=/jwt.hex
+```
+
+### JWT Secret
+
+Generate a JWT secret for Engine API authentication:
+
+```bash
+openssl rand -hex 32 > jwt.hex
+```
+
+Both ev-reth and ev-node must use the same JWT secret.
+
+## Environment Variables
+
+| Variable | Description | Default |
+|----------|-------------|---------|
+| `RUST_LOG` | Log level | `info` |
+| `RETH_DATA_DIR` | Data directory | `/data` |
+
+## Command Line Flags
+
+Common flags when running ev-reth directly:
+
+```bash
+ev-reth node \
+ --chain genesis.json \
+ --http \
+ --http.addr 0.0.0.0 \
+ --http.port 8545 \
+ --http.api eth,net,web3,txpool,debug,trace \
+ --authrpc.addr 0.0.0.0 \
+ --authrpc.port 8551 \
+ --authrpc.jwtsecret jwt.hex
+```
+
+## Next Steps
+
+- [ev-reth Features](/ev-reth/features/base-fee-redirect) — Detailed feature documentation
+- [ev-reth Chainspec Reference](/reference/configuration/ev-reth-chainspec) — Full configuration reference
diff --git a/docs/guides/advanced/based-sequencing.md b/docs/guides/advanced/based-sequencing.md
new file mode 100644
index 000000000..c99bf279f
--- /dev/null
+++ b/docs/guides/advanced/based-sequencing.md
@@ -0,0 +1,76 @@
+# Based Sequencing
+
+Based sequencing is a decentralized sequencing model where transaction ordering is determined by the base layer (Celestia) rather than a centralized sequencer. In this model, **every full node acts as its own proposer** by independently and deterministically deriving the next batch of transactions directly from the base layer.
+
+## How Based Sequencing Works
+
+### Transaction Submission
+
+Users submit transactions to the base layer's forced inclusion namespace. These transactions are posted as blobs to the DA layer, where they become part of the canonical transaction ordering.
+
+```text
+User → Base Layer (DA) → Full Nodes retrieve and execute
+```
+
+### Deterministic Batch Construction
+
+All full nodes independently construct identical batches by:
+
+1. **Retrieving forced inclusion transactions** from the base layer at epoch boundaries
+2. **Applying forkchoice rules** to determine batch composition:
+ - `MaxBytes`: Maximum byte size per batch (respects block size limits)
+ - DA epoch boundaries
+3. **Smoothing large transactions** across multiple blocks when necessary
+
+### Epoch-Based Processing
+
+Forced inclusion transactions are retrieved in epochs defined by `DAEpochForcedInclusion`. For example, with an epoch size of 10:
+
+- DA heights 100-109 form one epoch
+- DA heights 110-119 form the next epoch
+- Transactions from each epoch must be included before the epoch ends
+
+Epoch durations determine the block time in based sequencing.
+Additionally, because no headers are published, lazy mode has no effect: the block time is a multiple of the DA layer's block time.
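+
+As a sketch, the epoch arithmetic reduces to the following (the helper and parameter names are illustrative, not ev-node APIs):
+
+```go
+// epochBounds returns the first and last DA heights of the epoch containing
+// daHeight, assuming daHeight >= genesisDAHeight and a fixed epoch size.
+func epochBounds(daHeight, genesisDAHeight, epochSize uint64) (start, end uint64) {
+    index := (daHeight - genesisDAHeight) / epochSize
+    start = genesisDAHeight + index*epochSize
+    end = start + epochSize - 1
+    return start, end
+}
+```
+
+With `genesisDAHeight = 100` and `epochSize = 10`, heights 100-109 map to epoch [100, 109], matching the example above.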
+
+## Block Smoothing
+
+When forced inclusion transactions exceed the `MaxBytes` limit for a single block, they can be "smoothed" across multiple blocks within the same epoch. This ensures that:
+
+- Large transactions don't block the chain
+- All transactions are eventually included
+- The system remains censorship-resistant
+
+### Example
+
+```text
+Epoch [100, 104]:
+ - Block 1: Includes 1.5 MB of forced inclusion txs (partial)
+ - Block 2: Includes remaining 0.5 MB + new regular txs
+ - All epoch transactions included before DA height 105
+```
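+
+A minimal sketch of the splitting logic, assuming the batch builder packs transactions greedily up to `MaxBytes` (illustrative, not the actual ev-node implementation):
+
+```go
+// smoothBatches greedily packs txs into per-block batches of at most
+// maxBytes each. A single tx larger than maxBytes still gets its own
+// (oversized) batch; handling that case is left to the caller.
+func smoothBatches(txs [][]byte, maxBytes int) [][][]byte {
+    var batches [][][]byte
+    var current [][]byte
+    size := 0
+    for _, tx := range txs {
+        if size+len(tx) > maxBytes && len(current) > 0 {
+            batches = append(batches, current)
+            current, size = nil, 0
+        }
+        current = append(current, tx)
+        size += len(tx)
+    }
+    if len(current) > 0 {
+        batches = append(batches, current)
+    }
+    return batches
+}
+```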
+
+## Trust Assumptions
+
+Based sequencing minimizes trust assumptions:
+
+- **No trusted sequencer** - ordering comes from the base layer
+- **No proposer selection** - every full node derives blocks independently
+- **Deterministic consensus** - all honest nodes converge on the same chain
+- **Base layer security** - inherits the security guarantees of the DA layer
+- **Validation-enforced safety** - invalid blocks are automatically rejected by validation rules
+
+## Comparison with Single Sequencer
+
+| Feature | Based Sequencing | Single Sequencer |
+| --------------------- | ----------------------------- | --------------------------------- |
+| Decentralization | ✅ Fully decentralized | ❌ Single point of control |
+| Censorship Resistance | ✅ Guaranteed by base layer | ⚠️ Via forced inclusion (delayed) |
+| Latency | ⚠️ Depends on DA layer (~12s) | ✅ Low latency (configurable) |
+| Block Time Control | ❌ Tied to DA block time | ✅ Configurable by sequencer |
+| Trust Assumptions | ✅ Minimal (only DA layer) | ❌ Trust the sequencer |
+
+## Further Reading
+
+- [Data Availability](../data-availability.md) - Understanding the DA layer
+- [Transaction Flow](../transaction-flow.md) - How transactions move through the system
diff --git a/docs/guides/advanced/custom-precompiles.md b/docs/guides/advanced/custom-precompiles.md
new file mode 100644
index 000000000..94eaa69ba
--- /dev/null
+++ b/docs/guides/advanced/custom-precompiles.md
@@ -0,0 +1,279 @@
+# Custom Precompiles
+
+ev-reth supports custom EVM precompiled contracts for chain-specific functionality. This guide covers the built-in precompiles and how to add custom ones.
+
+## What Are Precompiles?
+
+Precompiles are special contracts at predefined addresses that execute native code instead of EVM bytecode. They're used for:
+
+- Computationally expensive operations (cryptography, hashing)
+- Chain-specific functionality (minting, governance)
+- Operations impossible or inefficient in Solidity
+
+## Built-in ev-reth Precompiles
+
+### Mint Precompile
+
+Allows an authorized address to mint native tokens. Useful for bridging scenarios.
+
+**Address:** `0x0000000000000000000000000000000000000100`
+
+**Configuration (chainspec):**
+
+```json
+{
+ "config": {
+ "evolve": {
+ "mintPrecompile": {
+ "admin": "0xBridgeContract",
+ "address": "0x0000000000000000000000000000000000000100"
+ }
+ }
+ }
+}
+```
+
+**Interface:**
+
+```solidity
+interface IMint {
+ /// @notice Mint native tokens to a recipient
+ /// @param recipient Address to receive tokens
+ /// @param amount Amount to mint (in wei)
+ function mint(address recipient, uint256 amount) external;
+}
+```
+
+**Usage:**
+
+```solidity
+// Only callable by admin address
+IMint(0x0000000000000000000000000000000000000100).mint(
+ 0xRecipient,
+ 1 ether
+);
+```
+
+See [Mint Precompile Reference](/ev-reth/features/mint-precompile) for details.
+
+## Creating Custom Precompiles
+
+Custom precompiles require modifying ev-reth source code.
+
+### Step 1: Define the Precompile
+
+Create a new precompile in `crates/precompiles/src/`:
+
+```rust
+// my_precompile.rs
+use revm::precompile::{PrecompileError, PrecompileOutput, PrecompileResult};
+use revm::primitives::{address, Address, Bytes};
+
+pub const MY_PRECOMPILE_ADDRESS: Address = address!("0000000000000000000000000000000000000200");
+
+pub fn my_precompile(input: &Bytes, gas_limit: u64) -> PrecompileResult {
+    // Check gas against a flat base cost
+    let gas_used = 1000; // Base gas cost
+    if gas_used > gas_limit {
+        return Err(PrecompileError::OutOfGas);
+    }
+
+    // Parse input
+    // input[0..4] = function selector
+    // input[4..]  = encoded arguments
+
+    // Execute logic
+    let result = process_input(input)?;
+
+    Ok(PrecompileOutput {
+        gas_used,
+        bytes: result,
+    })
+}
+
+fn process_input(input: &Bytes) -> Result<Bytes, PrecompileError> {
+    // Your custom logic here
+    Ok(Bytes::new())
+}
+```
+
+### Step 2: Register the Precompile
+
+Add the precompile to the precompile set:
+
+```rust
+// In precompiles/src/lib.rs
+pub fn evolve_precompiles(chain_spec: &ChainSpec) -> PrecompileSet {
+ let mut precompiles = standard_precompiles();
+
+ // Add mint precompile if configured
+ if let Some(mint_config) = &chain_spec.evolve.mint_precompile {
+ precompiles.insert(mint_config.address, mint_precompile);
+ }
+
+ // Add your custom precompile
+ if chain_spec.evolve.my_feature_enabled {
+ precompiles.insert(MY_PRECOMPILE_ADDRESS, my_precompile);
+ }
+
+ precompiles
+}
+```
+
+### Step 3: Add Chainspec Configuration
+
+Define configuration structure:
+
+```rust
+// In chainspec types
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct MyPrecompileConfig {
+    pub address: Address,
+    pub admin: Option<Address>,
+    pub some_parameter: u64,
+}
+```
+
+Update chainspec parsing to include new config.
+
+### Step 4: Build and Test
+
+```bash
+# Build ev-reth
+cargo build --release
+
+# Run tests
+cargo test --package ev-reth-precompiles
+```
+
+## Precompile Best Practices
+
+### Gas Metering
+
+Charge gas proportional to computation:
+
+```rust
+fn my_precompile(input: &Bytes, gas_limit: u64) -> PrecompileResult {
+ // Base cost
+ let mut gas_used = 100;
+
+ // Per-byte cost for input processing
+ gas_used += input.len() as u64 * 3;
+
+ // Additional cost for expensive operations
+ if requires_crypto_operation(input) {
+ gas_used += 10000;
+ }
+
+ if gas_used > gas_limit {
+ return Err(PrecompileError::OutOfGas);
+ }
+
+ // Process...
+}
+```
+
+### Access Control
+
+For privileged operations, check caller:
+
+```rust
+fn admin_only_precompile(
+ input: &Bytes,
+ context: &PrecompileContext,
+ config: &MyConfig,
+) -> PrecompileResult {
+ // Verify caller is admin
+ if context.caller != config.admin {
+ return Err(PrecompileError::Custom("unauthorized".into()));
+ }
+
+ // Process...
+}
+```
+
+### Input Validation
+
+Always validate input thoroughly:
+
+```rust
+fn my_precompile(input: &Bytes) -> PrecompileResult {
+ // Check minimum length
+ if input.len() < 36 { // 4 byte selector + 32 byte arg
+ return Err(PrecompileError::InvalidInput);
+ }
+
+ // Validate selector
+ let selector = &input[0..4];
+ if selector != MY_FUNCTION_SELECTOR {
+ return Err(PrecompileError::InvalidInput);
+ }
+
+ // Parse and validate arguments
+ let amount = U256::from_be_slice(&input[4..36]);
+ if amount.is_zero() {
+ return Err(PrecompileError::InvalidInput);
+ }
+
+ // Process...
+}
+```
+
+### Determinism
+
+Precompiles must be deterministic:
+
+- No random number generation
+- No external network calls
+- No time-dependent logic
+- Same input always produces same output
+
+## Testing Precompiles
+
+### Unit Tests
+
+```rust
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ #[test]
+ fn test_my_precompile_success() {
+ let input = encode_input(/* args */);
+ let result = my_precompile(&input, 100000).unwrap();
+ assert_eq!(result.bytes, expected_output());
+ }
+
+ #[test]
+ fn test_my_precompile_out_of_gas() {
+ let input = encode_input(/* args */);
+ let result = my_precompile(&input, 10); // Too little gas
+ assert!(matches!(result, Err(PrecompileError::OutOfGas)));
+ }
+}
+```
+
+### Integration Tests
+
+Test precompile calls from Solidity:
+
+```solidity
+// test/MyPrecompile.t.sol
+contract MyPrecompileTest is Test {
+ address constant PRECOMPILE = 0x0000000000000000000000000000000000000200;
+
+ function testPrecompileCall() public {
+ (bool success, bytes memory result) = PRECOMPILE.call(
+ abi.encodeWithSignature("myFunction(uint256)", 100)
+ );
+ assertTrue(success);
+ // Assert result...
+ }
+}
+```
+
+## See Also
+
+- [Mint Precompile](/ev-reth/features/mint-precompile) - Built-in minting
+- [ev-reth Configuration](/ev-reth/configuration) - Chainspec setup
+- [ev-reth Overview](/ev-reth/overview) - Architecture
diff --git a/docs/guides/advanced/forced-inclusion.md b/docs/guides/advanced/forced-inclusion.md
new file mode 100644
index 000000000..38494af3e
--- /dev/null
+++ b/docs/guides/advanced/forced-inclusion.md
@@ -0,0 +1,128 @@
+# Single Sequencer
+
+A single sequencer is the simplest sequencing architecture for an Evolve-based chain. In this model, one node (the sequencer) is responsible for ordering transactions, producing blocks, and submitting data to the data availability (DA) layer.
+
+## How the Single Sequencer Model Works
+
+1. **Transaction Submission:**
+ - Users submit transactions to the execution environment via RPC or other interfaces.
+2. **Transaction Collection and Ordering:**
+ - The execution environment collects incoming transactions.
+ - The sequencer requests a batch of transactions from the execution environment to be included in the next block.
+3. **Block Production:**
+ - **Without lazy mode:** the sequencer produces new blocks at fixed intervals.
+   - **With lazy mode:** the sequencer produces a block once either enough transactions are collected or the lazy-mode block interval elapses. See the [lazy mode configuration guide](../config.md#lazy-mode-lazy-aggregator) for details.
+ - Each block contains a batch of ordered transactions and metadata.
+
+4. **Data Availability Posting:**
+ - The sequencer posts the block data to the configured DA layer (e.g., Celestia).
+ - This ensures that anyone can access the data needed to reconstruct the chain state.
+
+5. **State Update:**
+ - The sequencer updates the chain state based on the new block and makes the updated state available to light clients and full nodes.
+
+## Transaction Flow Diagram
+
+```mermaid
+sequenceDiagram
+ participant User
+ participant ExecutionEnv as Execution Environment
+ participant Sequencer
+ participant DA as Data Availability Layer
+
+ User->>ExecutionEnv: Submit transaction
+ Sequencer->>ExecutionEnv: Request batch for block
+ ExecutionEnv->>Sequencer: Provide batch of transactions
+ Sequencer->>DA: Post block data
+ Sequencer->>ExecutionEnv: Update state
+ ExecutionEnv->>User: State/query response
+```
+
+## Forced Inclusion
+
+While the single sequencer controls transaction ordering, the system provides a censorship-resistance mechanism called **forced inclusion**. This ensures users can always include their transactions even if the sequencer refuses to process them.
+
+### How Forced Inclusion Works
+
+1. **Direct DA Submission:**
+ - Users can submit transactions directly to the DA layer's forced inclusion namespace
+ - These transactions bypass the sequencer entirely
+
+2. **Epoch-Based Retrieval:**
+ - The sequencer retrieves forced inclusion transactions from the DA layer at epoch boundaries
+ - Epochs are defined by `DAEpochForcedInclusion` in the genesis configuration
+
+3. **Mandatory Inclusion:**
+ - The sequencer MUST include all forced inclusion transactions from an epoch before the epoch ends
+ - Full nodes verify that forced inclusion transactions are properly included
+
+4. **Smoothing:**
+ - If forced inclusion transactions exceed block size limits (`MaxBytes`), they can be spread across multiple blocks within the same epoch
+ - All transactions must be included before moving to the next epoch
+
+### Example
+
+```text
+Epoch [100, 109] (epoch size = 10):
+ - User submits tx directly to DA at height 102
+ - Sequencer retrieves forced txs at epoch start (height 100)
+ - Sequencer includes forced tx in blocks before height 110
+```
+
+See [Based Sequencing](./based-sequencing.md) for a fully decentralized alternative that relies entirely on forced inclusion.
+
+## Detecting Malicious Sequencer Behavior
+
+Full nodes continuously monitor the sequencer to ensure it follows consensus rules, particularly around forced inclusion:
+
+### Censorship Detection
+
+If a sequencer fails to include forced inclusion transactions past their epoch boundary, full nodes will:
+
+1. **Detect the violation** - missing transactions from past epochs
+2. **Reject invalid blocks** - do not build on top of censoring blocks
+3. **Log the violation** with transaction hashes and epoch details
+4. **Halt consensus** - the chain cannot progress with a malicious sequencer
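+
+Conceptually, the censorship check reduces to a sketch like this (the names are illustrative):
+
+```go
+// pastEpochViolated reports whether forced-inclusion transactions from an
+// already-closed epoch are still missing from the chain, which full nodes
+// treat as evidence of sequencer censorship.
+func pastEpochViolated(epochEnd, currentDAHeight uint64, missingForcedTxs int) bool {
+    return currentDAHeight > epochEnd && missingForcedTxs > 0
+}
+```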
+
+### Recovery from Malicious Sequencer
+
+When a malicious sequencer is detected (censoring forced inclusion transactions):
+
+**All nodes must restart the chain in based sequencing mode:**
+
+```bash
+# Restart with based sequencing enabled
+./evnode start --evnode.node.aggregator --evnode.node.based_sequencer
+```
+
+**In based sequencing mode:**
+
+- No single sequencer controls transaction ordering
+- Every full node derives blocks independently from the DA layer
+- Forced inclusion becomes the primary (and only) transaction submission method
+- Censorship becomes impossible as ordering comes from the DA layer
+
+**Important considerations:**
+
+- All full nodes should coordinate the switch to based mode
+- The chain continues from the last valid state
+- Users submit transactions directly to the DA layer going forward
+- This is a one-way transition - moving back to single sequencer requires social consensus
+
+See the [Based Sequencing documentation](./based-sequencing.md) for details on operating in this mode.
+
+## Advantages
+
+- **Simplicity:** Easy to set up and operate, making it ideal for development, testing, and small-scale deployments, with less operational complexity than multi-node sequencing setups.
+- **Low Latency:** Fast block production and transaction inclusion, since there is no consensus overhead among multiple sequencers.
+- **Independence from DA block time:** The sequencer can produce blocks on its own schedule, without being tied to the block time of the DA layer, enabling more flexible transaction processing than DA-timed sequencers.
+- **Forced inclusion fallback:** Users can always submit transactions via the DA layer if the sequencer is unresponsive or censoring.
+
+## Disadvantages
+
+- **Single point of failure:** If the sequencer goes offline, block production stops (though the chain can transition to based mode).
+- **Trust requirement:** Users must trust the sequencer to include their transactions in a timely manner (mitigated by forced inclusion).
+- **Censorship risk:** A malicious sequencer can temporarily censor transactions until forced inclusion activates or the chain transitions to based mode.
diff --git a/docs/guides/da-layers/celestia.md b/docs/guides/da-layers/celestia.md
new file mode 100644
index 000000000..907c470a8
--- /dev/null
+++ b/docs/guides/da-layers/celestia.md
@@ -0,0 +1,229 @@
+# Celestia
+
+This guide covers connecting your Evolve chain to Celestia for production data availability.
+
+## Prerequisites
+
+- Completed an Evolve quickstart tutorial
+- Familiarity with running a Celestia light node
+
+## Running a Celestia Light Node
+
+Before starting your Evolve chain, you need a Celestia light node running and synced.
+
+### Version Compatibility
+
+Ensure compatible versions between ev-node and celestia-node:
+
+| Network | celestia-node |
+|---------|---------------|
+| Arabica | v0.20.x |
+| Mocha | v0.20.x |
+| Mainnet | v0.20.x |
+
+### Installation
+
+Follow the [Celestia documentation](https://docs.celestia.org/how-to-guides/light-node) to install and run a light node.
+
+**Quick start:**
+
+```bash
+# Install celestia-node
+curl -sL https://docs.celestia.org/install.sh | bash
+
+# Initialize (choose your network)
+celestia light init --p2p.network mocha
+
+# Start the node
+celestia light start --p2p.network mocha
+```
+
+### Network Options
+
+- [Arabica Devnet](https://docs.celestia.org/how-to-guides/arabica-devnet) - Development testing
+- [Mocha Testnet](https://docs.celestia.org/how-to-guides/mocha-testnet) - Pre-production testing
+- [Mainnet Beta](https://docs.celestia.org/how-to-guides/mainnet) - Production
+
+## Configuring Evolve for Celestia
+
+### Required Configuration
+
+The following flags are required to connect to Celestia:
+
+| Flag | Description |
+|------|-------------|
+| `--evnode.da.address` | Celestia node RPC endpoint |
+| `--evnode.da.auth_token` | JWT authentication token |
+| `--evnode.da.header_namespace` | Namespace for block headers |
+| `--evnode.da.data_namespace` | Namespace for transaction data |
+
+### Get DA Block Height
+
+Query the current DA height to set as your starting point:
+
+```bash
+DA_BLOCK_HEIGHT=$(celestia header network-head | jq -r '.result.header.height')
+echo "Your DA_BLOCK_HEIGHT is $DA_BLOCK_HEIGHT"
+```
+
+### Get Authentication Token
+
+Generate a write token for your light node:
+
+**Arabica:**
+
+```bash
+AUTH_TOKEN=$(celestia light auth write --p2p.network arabica)
+```
+
+**Mocha:**
+
+```bash
+AUTH_TOKEN=$(celestia light auth write --p2p.network mocha)
+```
+
+**Mainnet:**
+
+```bash
+AUTH_TOKEN=$(celestia light auth write)
+```
+
+### Set Namespaces
+
+Choose unique namespaces for your chain's headers and data:
+
+```bash
+DA_HEADER_NAMESPACE="my_chain_headers"
+DA_DATA_NAMESPACE="my_chain_data"
+```
+
+The namespace values are automatically encoded by ev-node for Celestia compatibility.
+
+You can use the same namespace for both headers and data, or separate them for optimized syncing (light clients can sync headers only).
+
+### Set DA Address
+
+Default Celestia light node port is 26658:
+
+```bash
+DA_ADDRESS=http://localhost:26658
+```
+
+## Running Your Chain
+
+Start your chain with Celestia configuration:
+
+```bash
+evnode start \
+ --evnode.node.aggregator \
+ --evnode.da.auth_token $AUTH_TOKEN \
+ --evnode.da.header_namespace $DA_HEADER_NAMESPACE \
+ --evnode.da.data_namespace $DA_DATA_NAMESPACE \
+ --evnode.da.address $DA_ADDRESS
+```
+
+For Cosmos SDK chains:
+
+```bash
+appd start \
+ --evnode.node.aggregator \
+ --evnode.da.auth_token $AUTH_TOKEN \
+ --evnode.da.header_namespace $DA_HEADER_NAMESPACE \
+ --evnode.da.data_namespace $DA_DATA_NAMESPACE \
+ --evnode.da.address $DA_ADDRESS
+```
+
+## Viewing Your Chain Data
+
+Once running, you can view your chain's data on Celestia block explorers:
+
+- [Celenium (Arabica)](https://arabica.celenium.io/)
+- [Celenium (Mocha)](https://mocha.celenium.io/)
+- [Celenium (Mainnet)](https://celenium.io/)
+
+Search by your namespace or account address to see submitted blobs.
+
+## Configuration Options
+
+### Gas Price
+
+Set the gas price for DA submissions:
+
+```bash
+--evnode.da.gas_price 0.01
+```
+
+Higher gas prices result in faster inclusion during congestion.
+
+### Block Time
+
+Set the expected DA block time (affects retry timing):
+
+```bash
+--evnode.da.block_time 6s
+```
+
+Celestia's block time is approximately 6 seconds.
+
+### Multiple Signing Addresses
+
+For high-throughput chains, use multiple signing addresses to avoid nonce conflicts:
+
+```bash
+--evnode.da.signing_addresses celestia1abc...,celestia1def...,celestia1ghi...
+```
+
+All addresses must be funded and loaded in the Celestia node's keyring.
+
+## Funding Your Account
+
+### Testnet (Mocha/Arabica)
+
+Get testnet TIA from faucets:
+
+- [Mocha Faucet](https://faucet.celestia-mocha.com/)
+- [Arabica Faucet](https://faucet.celestia-arabica.com/)
+
+### Mainnet
+
+Purchase TIA and transfer to your Celestia light node address.
+
+Check your address:
+
+```bash
+celestia state account-address
+```
+
+## Troubleshooting
+
+### Out of Funds
+
+If you see `Code: 19` errors, your account is out of TIA:
+
+1. Fund your account
+2. Increase gas price to unstick pending transactions
+3. Restart your chain
+
+See [Troubleshooting Guide](/guides/operations/troubleshooting) for details.
+
+### Connection Refused
+
+Verify your Celestia node is running:
+
+```bash
+curl http://localhost:26658/header/sync_state
+```
+
+### Token Expired
+
+Regenerate your auth token:
+
+```bash
+celestia light auth write --p2p.network <network>
+```
+
+## See Also
+
+- [Local DA Guide](/guides/da-layers/local-da) - Development setup
+- [Troubleshooting](/guides/operations/troubleshooting) - Common issues
+- [Configuration Reference](/reference/configuration/ev-node-config) - All DA options
diff --git a/docs/guides/da-layers/local-da.md b/docs/guides/da-layers/local-da.md
new file mode 100644
index 000000000..d9256c057
--- /dev/null
+++ b/docs/guides/da-layers/local-da.md
@@ -0,0 +1,188 @@
+# Local DA
+
+Local DA is a development-only data availability layer for testing Evolve chains without connecting to a real DA network.
+
+## Overview
+
+Local DA provides:
+
+- Fast, local blob storage
+- No authentication required
+- No gas fees
+- Instant "finality"
+
+**Warning:** Local DA is for development only. It provides no actual data availability guarantees.
+
+## Installation
+
+Install the local-da binary:
+
+```bash
+go install github.com/evstack/ev-node/tools/local-da@latest
+```
+
+Or build from source:
+
+```bash
+cd ev-node/tools/local-da
+go build -o local-da .
+```
+
+## Running Local DA
+
+Start the local DA server:
+
+```bash
+local-da
+```
+
+Default output:
+
+```
+INF NewLocalDA: initialized LocalDA module=local-da
+INF Listening on host=localhost maxBlobSize=1974272 module=da port=7980
+INF server started listening on=localhost:7980 module=da
+```
+
+### Configuration
+
+| Flag | Default | Description |
+|------|---------|-------------|
+| `--host` | `localhost` | Listen address |
+| `--port` | `7980` | Listen port |
+
+Example with custom port:
+
+```bash
+local-da --port 8080
+```
+
+## Connecting Your Chain
+
+Start your Evolve chain with the local DA address:
+
+```bash
+evnode start \
+ --evnode.node.aggregator \
+ --evnode.da.address http://localhost:7980
+```
+
+For Cosmos SDK chains:
+
+```bash
+appd start \
+ --evnode.node.aggregator \
+ --evnode.da.address http://localhost:7980
+```
+
+## Features
+
+### No Authentication
+
+Unlike Celestia, local DA requires no auth token:
+
+```bash
+# Celestia requires
+--evnode.da.auth_token
+
+# Local DA does not
+--evnode.da.address http://localhost:7980
+```
+
+### No Namespace Required
+
+Namespace is optional with local DA:
+
+```bash
+# Optional
+--evnode.da.namespace my_namespace
+```
+
+### Instant Submission
+
+Blobs are stored immediately with no block time delay.
+
+## Use Cases
+
+### Local Development
+
+Test your chain logic without DA layer complexity:
+
+```bash
+# Terminal 1: Start local DA
+local-da
+
+# Terminal 2: Start your chain
+evnode start --evnode.da.address http://localhost:7980
+```
+
+### CI/CD Testing
+
+Use local DA in automated tests:
+
+```bash
+# Start local DA in background
+local-da &
+LOCAL_DA_PID=$!
+
+# Run tests
+go test ./...
+
+# Cleanup
+kill $LOCAL_DA_PID
+```
+
+### Integration Testing
+
+Test multi-node setups locally:
+
+```bash
+# Start local DA
+local-da --port 7980
+
+# Start sequencer
+evnode start \
+ --evnode.node.aggregator \
+ --evnode.da.address http://localhost:7980 \
+ --evnode.p2p.listen /ip4/0.0.0.0/tcp/7676
+
+# Start full node (separate terminal)
+evnode start \
+ --evnode.da.address http://localhost:7980 \
+  --evnode.p2p.peers /ip4/127.0.0.1/tcp/7676/p2p/<sequencer-peer-id>
+```
+
+## Limitations
+
+Local DA is **not suitable for**:
+
+- Production deployments
+- Security testing
+- Performance benchmarking (no real network latency)
+- Testing DA-specific features (proofs, commitments)
+
+## Transitioning to Celestia
+
+When ready for production, switch to Celestia:
+
+1. Set up a Celestia light node
+2. Update your start command:
+
+```bash
+# From local DA
+--evnode.da.address http://localhost:7980
+
+# To Celestia
+--evnode.da.address http://localhost:26658
+--evnode.da.auth_token $AUTH_TOKEN
+--evnode.da.header_namespace $HEADER_NAMESPACE
+--evnode.da.data_namespace $DATA_NAMESPACE
+```
+
+See [Celestia Guide](/guides/da-layers/celestia) for full instructions.
+
+## See Also
+
+- [Celestia Guide](/guides/da-layers/celestia) - Production DA setup
+- [EVM Quickstart](/getting-started/evm/quickstart) - Getting started with EVM
+- [Cosmos Quickstart](/getting-started/cosmos/quickstart) - Getting started with Cosmos SDK
diff --git a/docs/guides/operations/deployment.md b/docs/guides/operations/deployment.md
new file mode 100644
index 000000000..dcc78ad54
--- /dev/null
+++ b/docs/guides/operations/deployment.md
@@ -0,0 +1,49 @@
+---
+description: This page provides an overview of some common ways to deploy chains.
+---
+
+# 🚀 Deploying Your Chain
+
+One of the benefits of building chains with Evolve is the flexibility you have as a developer to choose things like the DA layer, the settlement scheme, and the execution environment.
+
+You can learn more about Evolve architecture [here](../../learn/specs/overview.md).
+
+The challenge that comes with this flexibility is that there are more services that now need to be deployed and managed while running your chain.
+
+In the tutorials so far, you've seen various helper scripts used to make things easier. While great for tutorials, there are better ways to deploy and manage chains than using various bash scripts.
+
+## 🏗️ Deployment Scales
+
+Depending on your needs and the stage of your chain development, there are different deployment approaches you can take:
+
+### 🏠 Local Development
+
+For development and testing purposes, you can deploy your chain locally using containerized environments. This approach provides:
+
+- Quick iteration and testing
+- No external dependencies
+- Full control over the environment
+- Cost-effective development
+
+### 🌐 Testnet Deployment
+
+When you're ready to test with real network conditions, you can deploy to testnet environments. This includes:
+
+- Integration with testnet DA networks
+- Real network latency and conditions
+- Multi-node testing scenarios
+- Pre-production validation
+
+## 📚 Available Deployment Guides
+
+Choose the deployment approach that matches your current needs:
+
+- [🌐 Testnet Deployment](./testnet.md) - Deploy on testnet with external DA networks
+
+:::warning Disclaimer
+These examples are for educational purposes only. Before deploying your chain to production, you should fully understand the services you are deploying and your chosen deployment method.
+:::
+
+## 🎉 Next Steps
+
+For production mainnet deployments, consider additional requirements such as monitoring, security audits, infrastructure hardening, and operational procedures that go beyond the scope of these tutorials.
diff --git a/docs/guides/operations/monitoring.md b/docs/guides/operations/monitoring.md
new file mode 100644
index 000000000..6e4735770
--- /dev/null
+++ b/docs/guides/operations/monitoring.md
@@ -0,0 +1,79 @@
+# Evolve Metrics Guide
+
+## How to configure metrics
+
+Evolve can report and serve Prometheus metrics, which can be consumed by Prometheus collector(s).
+
+This functionality is disabled by default.
+
+To enable Prometheus metrics, set `instrumentation.prometheus=true` in your Evolve node's configuration file.
+
+Metrics will be served under `/metrics` on port 26660 by default. The listening address can be changed using the `instrumentation.prometheus_listen_addr` configuration option.
+
+## List of available metrics
+
+You can find the full list of available metrics in the [Technical Specifications](../../learn/specs/block-manager.md#metrics).
+
+## Viewing Metrics
+
+Once your Evolve node is running with metrics enabled, you can view the metrics by:
+
+1. Accessing the metrics endpoint directly:
+
+ ```bash
+ curl http://localhost:26660/metrics
+ ```
+
+2. Configuring Prometheus to scrape these metrics by adding the following to your `prometheus.yml`:
+
+ ```yaml
+ scrape_configs:
+ - job_name: evolve
+ static_configs:
+ - targets: ['localhost:26660']
+ ```
+
+3. Using Grafana with Prometheus as a data source to visualize the metrics.
+
+## Example Prometheus Configuration
+
+Here's a basic Prometheus configuration to scrape metrics from an Evolve node:
+
+```yaml
+global:
+ scrape_interval: 15s
+ evaluation_interval: 15s
+
+scrape_configs:
+ - job_name: evolve
+ static_configs:
+ - targets: ['localhost:26660']
+```
+
+## Troubleshooting
+
+If you're not seeing metrics:
+
+1. Ensure metrics are enabled in your configuration with `instrumentation.prometheus=true`
+2. Verify the metrics endpoint is accessible: `curl http://localhost:26660/metrics`
+3. Check your Prometheus configuration is correctly pointing to your Evolve node
+4. Examine the Evolve node logs for any errors related to the metrics server
+
+## Advanced Configuration
+
+For more advanced metrics configuration, you can adjust the following settings in your configuration file:
+
+```yaml
+instrumentation:
+ prometheus: true
+ prometheus_listen_addr: ":26660"
+ max_open_connections: 3
+ namespace: "evolve"
+```
+
+These settings allow you to:
+
+- Enable/disable Prometheus metrics
+- Change the listening address for the metrics server
+- Limit the maximum number of open connections to the metrics server
+- Set a custom namespace for all metrics
diff --git a/docs/guides/operations/troubleshooting.md b/docs/guides/operations/troubleshooting.md
new file mode 100644
index 000000000..c8fbcf562
--- /dev/null
+++ b/docs/guides/operations/troubleshooting.md
@@ -0,0 +1,318 @@
+# Troubleshooting
+
+Common issues and solutions when running Evolve nodes.
+
+## Diagnostic Commands
+
+### Check Node Status
+
+```bash
+# Health check
+curl http://localhost:7331/health/live
+curl http://localhost:7331/health/ready
+
+# Node status
+curl http://localhost:26657/status
+```
+
+### View Logs
+
+```bash
+# Follow logs in real-time
+journalctl -u evnode -f
+
+# Search for errors
+journalctl -u evnode | grep -i error
+```
+
+## Common Issues
+
+### Node Won't Start
+
+**Symptom:** Node exits immediately after starting.
+
+**Solutions:**
+
+1. Check for port conflicts:
+
+```bash
+lsof -i :26657
+lsof -i :7676
+```
+
+1. Verify configuration file syntax:
+
+```bash
+cat ~/.evnode/config/evnode.yml
+```
+
+1. Check data directory permissions:
+
+```bash
+ls -la ~/.evnode/data
+```
+
+### DA Connection Failures
+
+**Symptom:** Logs show `DA layer submission failed` errors.
+
+**Error example:**
+
+```
+ERR DA layer submission failed error="connection refused"
+```
+
+**Solutions:**
+
+1. Verify DA endpoint is reachable:
+
+```bash
+curl http://localhost:26658/health
+```
+
+1. Check authentication token (Celestia):
+
+```bash
+celestia light auth write --p2p.network mocha
+```
+
+1. Verify DA node is fully synced:
+
+```bash
+celestia header sync-state
+```
+
+### Out of DA Funds
+
+**Symptom:** `Code: 19` errors in logs.
+
+**Error example:**
+
+```
+ERR DA layer submission failed error="Codespace: 'sdk', Code: 19, Message: "
+```
+
+**Solutions:**
+
+1. Check DA account balance
+2. Fund the account with more tokens
+3. Increase gas price to unstick pending transactions:
+
+```bash
+--evnode.da.gas_price 0.05
+```
+
+See [Restart Chain Guide](/guides/restart-chain) for detailed steps.
+
+### P2P Connection Issues
+
+**Symptom:** Node not syncing, no peers connected.
+
+**Solutions:**
+
+1. Verify peer address format:
+
+```bash
+# Correct format
+/ip4/1.2.3.4/tcp/7676/p2p/12D3KooWABC...
+
+# NOT just the peer ID
+12D3KooWABC...
+```
+
+1. Check firewall allows P2P port:
+
+```bash
+sudo ufw status
+# Ensure port 7676 (or your P2P port) is allowed
+```
+
+1. Try DA-only sync mode (no P2P):
+
+```bash
+evnode start --evnode.da.address http://localhost:26658
+# Leave --evnode.p2p.peers empty
+```
+
+### Node Falling Behind
+
+**Symptom:** `catching_up: true` in status, height increasing slowly.
+
+**Solutions:**
+
+1. Check system resources:
+
+```bash
+htop
+df -h
+```
+
+1. Increase DA request timeout:
+
+```bash
+--evnode.da.request_timeout 60s
+```
+
+1. Verify DA layer is responding quickly:
+
+```bash
+time curl http://localhost:26658/header/sync_state
+```
+
+### Execution Layer Desync
+
+**Symptom:** State root mismatches, execution errors.
+
+**EVM (ev-reth):**
+
+```bash
+# Check ev-reth logs for errors
+journalctl -u ev-reth -f
+
+# Verify Engine API connectivity
+curl -X POST -H "Content-Type: application/json" \
+ -H "Authorization: Bearer $(cat jwt.hex)" \
+ --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
+ http://localhost:8551
+```
+
+**Cosmos SDK (ev-abci):**
+
+```bash
+# Check app hash consistency
+curl http://localhost:26657/status | jq '.sync_info'
+```
+
+## Reset Procedures
+
+### Soft Reset (Keep Genesis)
+
+Reset state while keeping configuration:
+
+```bash
+# Stop the node
+systemctl stop evnode
+
+# Clear data directory
+rm -rf ~/.evnode/data/*
+
+# Restart
+systemctl start evnode
+```
+
+### Hard Reset (Full Reinitialize)
+
+Complete reset including configuration:
+
+```bash
+# Stop the node
+systemctl stop evnode
+
+# Remove everything
+rm -rf ~/.evnode
+
+# Reinitialize
+evnode init
+```
+
+### Reset EVM State (ev-reth)
+
+```bash
+# Stop both nodes
+systemctl stop evnode ev-reth
+
+# Clear ev-reth data
+rm -rf ~/.ev-reth/db
+
+# Clear ev-node cache
+rm -rf ~/.evnode/data/cache
+
+# Restart
+systemctl start ev-reth evnode
+```
+
+## Log Analysis
+
+### Important Log Messages
+
+**Healthy operation:**
+
+```
+INF Creating and publishing block height=100 module=BlockManager
+INF block marked as DA included blockHeight=100 module=BlockManager
+INF indexed block events height=100 module=txindex
+```
+
+**Warning signs:**
+
+```
+WRN block production slowed due to pending DA submissions
+WRN peer connection failed, retrying
+```
+
+**Errors requiring action:**
+
+```
+ERR DA layer submission failed
+ERR failed to execute block
+ERR P2P network unavailable
+```
+
+### Enable Debug Logging
+
+```bash
+evnode start --log.level debug
+```
+
+Or in configuration:
+
+```yaml
+log:
+ level: debug
+```
+
+## Performance Issues
+
+### High Memory Usage
+
+1. Reduce cache size in configuration
+2. Enable lazy aggregation mode
+3. Limit max pending blocks:
+
+```bash
+--evnode.node.max_pending_blocks 50
+```
+
+### High CPU Usage
+
+1. Increase block time:
+
+```bash
+--evnode.node.block_time 2s
+```
+
+1. Check for transaction spam
+2. Monitor execution layer performance
+
+### Disk Space
+
+1. Check disk usage:
+
+```bash
+du -sh ~/.evnode/data/*
+```
+
+1. Prune old data (if supported by execution layer)
+2. Consider moving data to larger disk
+
+## Getting Help
+
+1. Check logs for specific error messages
+2. Search [GitHub Issues](https://github.com/evstack/ev-node/issues)
+3. Join the community Discord for support
+
+## See Also
+
+- [Reset State Guide](/guides/reset-state) - Detailed reset procedures
+- [Restart Chain Guide](/guides/restart-chain) - Restarting after issues
+- [Monitoring Guide](/guides/operations/monitoring) - Proactive monitoring
diff --git a/docs/guides/operations/upgrades.md b/docs/guides/operations/upgrades.md
new file mode 100644
index 000000000..bde6d852d
--- /dev/null
+++ b/docs/guides/operations/upgrades.md
@@ -0,0 +1,273 @@
+# Upgrades
+
+Guide for upgrading Evolve nodes and handling version migrations.
+
+## Upgrade Types
+
+### Minor Upgrades (Patch/Minor Version)
+
+Non-breaking changes, bug fixes, and minor improvements.
+
+**Process:**
+
+1. Stop the node
+2. Replace binary
+3. Restart
+
+```bash
+# Stop
+systemctl stop evnode
+
+# Upgrade (example with go install)
+go install github.com/evstack/ev-node@v1.2.3
+
+# Restart
+systemctl start evnode
+```
+
+### Major Upgrades (Breaking Changes)
+
+May require state migration or coordinated network upgrade.
+
+**Process:**
+
+1. Review changelog for breaking changes
+2. Coordinate upgrade height with network
+3. Stop at designated height
+4. Upgrade binary
+5. Run any migration scripts
+6. Restart
+
+## ev-node Upgrades
+
+### Check Current Version
+
+```bash
+evnode version
+```
+
+### Upgrade Binary
+
+**Using Go:**
+
+```bash
+go install github.com/evstack/ev-node@latest
+```
+
+**Using Docker:**
+
+```bash
+docker pull evstack/evnode:latest
+```
+
+**From Source:**
+
+```bash
+cd ev-node
+git fetch --tags
+git checkout v1.2.3
+make build
+```
+
+### Configuration Changes
+
+After upgrading, check for new or changed configuration options:
+
+1. Review the [changelog](https://github.com/evstack/ev-node/releases)
+2. Compare your config with the new defaults
+3. Update configuration as needed
+
+## ev-reth Upgrades
+
+### Version Compatibility
+
+ev-reth versions must be compatible with ev-node. Check the compatibility matrix:
+
+| ev-node | ev-reth |
+|---------|---------|
+| v1.x | v0.x |
+
+### Upgrade Process
+
+```bash
+# Stop both nodes
+systemctl stop evnode ev-reth
+
+# Upgrade ev-reth
+cd ev-reth
+git fetch --tags
+git checkout v0.2.0
+cargo build --release
+
+# Verify chainspec compatibility
+# (check for new required fields)
+
+# Restart
+systemctl start ev-reth evnode
+```
+
+### Database Migrations
+
+Some ev-reth upgrades require database migration:
+
+```bash
+# Check if migration needed
+ev-reth db version
+
+# Run migration if needed
+ev-reth db migrate
+```
+
+## ev-abci Upgrades
+
+### Cosmos SDK Compatibility
+
+ev-abci tracks Cosmos SDK versions. Ensure your app's SDK version is compatible:
+
+| ev-abci | Cosmos SDK |
+|---------|------------|
+| v1.x | v0.50.x |
+
+### Module Upgrades
+
+For Cosmos SDK apps with custom modules:
+
+1. Update module dependencies in `go.mod`
+2. Run any module migration handlers
+3. Update genesis if needed
+
+```go
+// In app.go upgrade handler
+app.UpgradeKeeper.SetUpgradeHandler(
+ "v2",
+ func(ctx sdk.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
+ // Migration logic
+ return app.ModuleManager.RunMigrations(ctx, app.Configurator(), fromVM)
+ },
+)
+```
+
+## Coordinated Network Upgrades
+
+For networks with multiple node operators:
+
+### Planning
+
+1. Announce upgrade timeline (minimum 1 week notice)
+2. Agree on upgrade block height
+3. Share upgrade binary/instructions
+
+### Execution
+
+1. All nodes stop at designated height
+2. Operators upgrade binaries
+3. Coordinators verify readiness
+4. Network restarts
+
+### Handling Stragglers
+
+If some nodes don't upgrade:
+
+- They will reject new blocks (if consensus rules changed)
+- They can sync from upgraded nodes after upgrading
+
+## Rollback Procedures
+
+If an upgrade causes issues:
+
+### ev-node Rollback
+
+```bash
+# Stop
+systemctl stop evnode
+
+# Restore previous binary
+cp /backup/evnode-v1.1.0 /usr/local/bin/evnode
+
+# Optionally restore data
+# (only if upgrade corrupted state)
+rm -rf ~/.evnode/data
+cp -r /backup/evnode-data ~/.evnode/data
+
+# Restart
+systemctl start evnode
+```
+
+### ev-reth Rollback
+
+```bash
+# Stop
+systemctl stop ev-reth evnode
+
+# Restore binary
+cp /backup/ev-reth-v0.1.0 /usr/local/bin/ev-reth
+
+# Restore database if needed
+rm -rf ~/.ev-reth/db
+cp -r /backup/ev-reth-db ~/.ev-reth/db
+
+# Restart
+systemctl start ev-reth evnode
+```
+
+## State Migration
+
+### Export State
+
+Before major upgrades, export state:
+
+```bash
+# ev-node
+evnode export > state-export.json
+
+# Cosmos SDK
+appd export --height > genesis-export.json
+```
+
+### Migrate State
+
+If state format changes:
+
+```bash
+# Run migration tool
+evnode migrate state-export.json --to-version v2 > state-migrated.json
+```
+
+### Import State
+
+```bash
+# Initialize with migrated state
+evnode init --genesis state-migrated.json
+```
+
+## Best Practices
+
+### Pre-Upgrade Checklist
+
+- [ ] Review changelog for breaking changes
+- [ ] Test upgrade on testnet first
+- [ ] Backup current state
+- [ ] Backup configuration files
+- [ ] Notify dependent services
+- [ ] Schedule maintenance window
+
+### Post-Upgrade Verification
+
+- [ ] Node starts successfully
+- [ ] Blocks are being produced/synced
+- [ ] RPC endpoints responding
+- [ ] Metrics reporting correctly
+- [ ] P2P connections established
+
+### Automation
+
+Consider automating upgrades with tools like:
+
+- Ansible playbooks
+- Kubernetes operators
+- systemd timers for scheduled upgrades
+
+## See Also
+
+- [Troubleshooting Guide](/guides/operations/troubleshooting) - Handling upgrade issues
+- [Deployment Guide](/guides/operations/deployment) - Infrastructure setup
diff --git a/docs/guides/running-nodes/aggregator.md b/docs/guides/running-nodes/aggregator.md
new file mode 100644
index 000000000..49f0776c7
--- /dev/null
+++ b/docs/guides/running-nodes/aggregator.md
@@ -0,0 +1,194 @@
+# Aggregator Node
+
+An aggregator (also called sequencer) is the node responsible for producing blocks in an Evolve chain. It collects transactions, orders them, creates blocks, and submits data to the DA layer.
+
+## Prerequisites
+
+- ev-node installed
+- Access to a DA layer (Celestia or local-da)
+- Signer key for block signing
+
+## Configuration
+
+Enable aggregator mode with the `--evnode.node.aggregator` flag:
+
+```bash
+evnode start --evnode.node.aggregator
+```
+
+### Required Flags
+
+| Flag | Description |
+|------------------------------|-------------------------|
+| `--evnode.node.aggregator` | Enable block production |
+| `--evnode.da.address` | DA layer endpoint |
+| `--evnode.signer.passphrase` | Signer key passphrase |
+
+### Block Time Configuration
+
+Control how often blocks are produced:
+
+```bash
+# Produce blocks every 500ms
+evnode start \
+ --evnode.node.aggregator \
+ --evnode.node.block_time 500ms
+```
+
+Default block time is 1 second.
+
+## Lazy Aggregation Mode
+
+Lazy mode only produces blocks when there are transactions, reducing DA costs during low activity periods:
+
+```bash
+evnode start \
+ --evnode.node.aggregator \
+ --evnode.node.lazy_aggregator \
+ --evnode.node.lazy_block_time 30s
+```
+
+| Flag | Description |
+|---------------------------------|--------------------------------------|
+| `--evnode.node.lazy_aggregator` | Enable lazy mode |
+| `--evnode.node.lazy_block_time` | Max wait between blocks in lazy mode |
+
+In lazy mode:
+
+- Blocks are produced immediately when transactions arrive
+- If no transactions, wait up to `lazy_block_time` before producing an empty block
+- Reduces DA submission costs during idle periods
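+
+A sketch of the lazy production loop, assuming the node is signalled when new transactions arrive (the channel and callback names are illustrative, not ev-node APIs):
+
+```go
+// lazyLoop produces a block as soon as transactions arrive, or after
+// lazyBlockTime elapses with no activity.
+func lazyLoop(ctx context.Context, txNotify <-chan struct{}, lazyBlockTime time.Duration, produce func()) {
+    timer := time.NewTimer(lazyBlockTime)
+    defer timer.Stop()
+    for {
+        select {
+        case <-ctx.Done():
+            return
+        case <-txNotify:
+            produce() // transactions waiting: produce immediately
+        case <-timer.C:
+            produce() // idle timeout reached: produce an empty block
+        }
+        timer.Reset(lazyBlockTime)
+    }
+}
+```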
+
+## DA Submission Settings
+
+Configure how blocks are batched and submitted to DA:
+
+```bash
+evnode start \
+ --evnode.node.aggregator \
+ --evnode.da.address http://localhost:26658 \
+ --evnode.da.namespace "my_namespace" \
+ --evnode.da.gas_price 0.01 \
+ --evnode.da.batching_strategy adaptive
+```
+
+### Batching Strategies
+
+| Strategy | Description |
+|-------------|---------------------------------------------|
+| `immediate` | Submit as soon as blocks are ready |
+| `time` | Wait for time interval before submitting |
+| `size` | Wait until batch reaches size threshold |
+| `adaptive` | Balance between size and time (recommended) |
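+
+As a rough illustration, an adaptive rule combines the size and time triggers (this is a sketch, not the actual ev-node heuristic):
+
+```go
+// shouldSubmit flushes the pending batch once it is either large enough or
+// old enough, balancing DA costs against inclusion latency.
+func shouldSubmit(batchBytes int, batchAge time.Duration, maxBytes int, maxWait time.Duration) bool {
+    return batchBytes >= maxBytes || batchAge >= maxWait
+}
+```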
+
+### Max Pending Blocks
+
+Limit how many blocks can be waiting for DA submission:
+
+```bash
+--evnode.node.max_pending_blocks 100
+```
+
+When this limit is reached, block production pauses until some blocks are confirmed on DA.
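+
+Conceptually, the backpressure check looks like this (illustrative names):
+
+```go
+// canProduce allows a new block only while fewer than maxPending produced
+// blocks are still awaiting DA inclusion.
+func canProduce(producedHeight, daIncludedHeight, maxPending uint64) bool {
+    return producedHeight-daIncludedHeight < maxPending
+}
+```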
+
+## Signer Configuration
+
+The aggregator needs a signer key to sign blocks:
+
+```bash
+# Using file-based signer
+evnode start \
+ --evnode.node.aggregator \
+ --evnode.signer.signer_type file \
+ --evnode.signer.signer_path /path/to/keys \
+ --evnode.signer.passphrase "your-passphrase"
+```
+
+## Complete Example
+
+### EVM Chain (ev-reth)
+
+```bash
+evnode start \
+ --evnode.node.aggregator \
+ --evnode.node.block_time 1s \
+ --evnode.da.address http://localhost:26658 \
+ --evnode.da.namespace "my_evm_chain" \
+ --evnode.da.gas_price 0.01 \
+ --evnode.signer.passphrase "secret" \
+ --evnode.rpc.address tcp://0.0.0.0:26657
+```
+
+### Cosmos SDK Chain (ev-abci)
+
+```bash
+appd start \
+ --evnode.node.aggregator \
+ --evnode.node.block_time 1s \
+ --evnode.da.address http://localhost:26658 \
+ --evnode.da.namespace "my_cosmos_chain" \
+ --evnode.signer.passphrase "secret"
+```
+
+## Monitoring
+
+Enable metrics to monitor aggregator performance:
+
+```bash
+evnode start \
+ --evnode.node.aggregator \
+ --evnode.instrumentation.prometheus \
+ --evnode.instrumentation.prometheus_listen_addr :2112
+```
+
+Key metrics to watch:
+
+- `evolve_block_height` - Current block height
+- `evolve_da_submission_total` - DA submissions count
+- `evolve_da_submission_failures` - Failed DA submissions
+
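+For a quick check from the command line, you can scrape the Prometheus endpoint directly (`/metrics` is the standard Prometheus path, assumed here):
+
+```bash
+# Filter the key aggregator metrics from the Prometheus endpoint
+curl -s http://localhost:2112/metrics | grep -E 'evolve_(block_height|da_submission)'
+```
+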
+Enable the DA visualizer for detailed submission monitoring:
+
+```bash
+--evnode.rpc.enable_da_visualization
+```
+
+Then access `http://localhost:7331/da` in your browser.
+
+## Health Checks
+
+The aggregator exposes health endpoints:
+
+```bash
+# Liveness check
+curl http://localhost:7331/health/live
+
+# Readiness check (includes block production rate)
+curl http://localhost:7331/health/ready
+```
+
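+A minimal sketch of how a deployment script might gate on readiness, assuming the endpoint returns a non-2xx status until the node is ready:
+
+```bash
+# Block until the aggregator reports ready (-f makes curl fail on HTTP errors)
+until curl -sf http://localhost:7331/health/ready >/dev/null; do
+  echo "waiting for aggregator to be ready..."
+  sleep 2
+done
+```
+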
+## Troubleshooting
+
+### Blocks Not Being Produced
+
+1. Verify aggregator mode is enabled in logs
+2. Check DA layer connectivity
+3. Ensure signer key is accessible
+
+### DA Submission Failures
+
+1. Check DA layer endpoint is reachable
+2. Verify DA account has sufficient funds
+3. Increase gas price if transactions are being outbid
+
+### High Pending Block Count
+
+1. Reduce block time or enable lazy mode
+2. Increase DA gas price for faster inclusion
+3. Check DA layer congestion
+
+## See Also
+
+- [Full Node Guide](/guides/running-nodes/full-node) - Running a non-producing node
+- [DA Visualization](/guides/tools/visualizer) - Monitor DA submissions
+- [Monitoring Guide](/guides/operations/monitoring) - Prometheus metrics
diff --git a/docs/guides/running-nodes/attester.md b/docs/guides/running-nodes/attester.md
new file mode 100644
index 000000000..66b7e5442
--- /dev/null
+++ b/docs/guides/running-nodes/attester.md
@@ -0,0 +1,67 @@
+# Attester Node
+
+Attester nodes participate in the validator network to provide faster soft finality through attestations. This is an advanced feature for chains requiring sub-DA-finality confirmation times.
+
+## Overview
+
+Attesters:
+
+- Validate blocks produced by the aggregator
+- Sign attestations confirming block validity
+- Participate in a soft consensus protocol
+- Enable faster finality than DA-only confirmation
+
+## Status
+
+The attester network feature is under active development. This documentation will be updated as the feature matures.
+
+For technical details on the validator network design, see [ADR-022: Validator Network](https://github.com/evstack/ev-node/blob/main/specs/src/adr/adr-022-validator-network.md).
+
+## How It Works
+
+### Soft Finality
+
+Without attesters, finality depends on DA confirmation (~6-12 seconds for Celestia). With an attester network:
+
+1. Aggregator produces block
+2. Attesters validate and sign attestations
+3. When a threshold of attestations is collected, the block has soft finality
+4. DA finality provides hard finality later
+
+### Trust Model
+
+- Soft finality requires trusting the attester set (configurable threshold)
+- Hard finality (DA) remains trustless
+- Applications can choose which finality level to wait for
+
+## Configuration (Preview)
+
+```bash
+# Run as attester (preview configuration)
+evnode start \
+ --evnode.node.attester \
+ --evnode.da.address http://localhost:26658 \
+  --evnode.p2p.peers /dns4/sequencer.example.com/tcp/7676/p2p/12D3KooW...
+```
+
+## Use Cases
+
+### Low-Latency Applications
+
+Applications requiring confirmation faster than DA finality:
+
+- Trading platforms
+- Gaming
+- Real-time settlement
+
+### Enhanced Security
+
+Additional validation layer before DA confirmation:
+
+- Multi-party validation
+- Early fraud detection
+
+## See Also
+
+- [Finality Concepts](/concepts/finality) - Understanding finality in Evolve
+- [Full Node Guide](/guides/running-nodes/full-node) - Running a full node
diff --git a/docs/guides/running-nodes/full-node.md b/docs/guides/running-nodes/full-node.md
new file mode 100644
index 000000000..753985033
--- /dev/null
+++ b/docs/guides/running-nodes/full-node.md
@@ -0,0 +1,104 @@
+# Chain Full Node Setup Guide
+
+## Introduction
+
+This guide covers how to set up a full node to run alongside a sequencer node in an Evolve-based blockchain network. A full node maintains a complete copy of the blockchain and helps validate transactions, improving the network's decentralization and security.
+
+> **Note:** The guide on how to run an Evolve EVM full node can be found [in the evm section](./evm/single.md#setting-up-a-full-node).
+
+## Prerequisites
+
+Before proceeding, ensure that you have completed the [build a chain](./gm-world.md) tutorial, which covers setting up, building, and running your chain.
+
+Ensure that you have:
+
+- A local Data Availability (DA) network node running on port `7980`.
+- An Evolve sequencer node running and posting blocks to the DA network.
+
+## Setting Up Your Full Node
+
+### Initialize Chain Config and Copy Genesis File
+
+Let's set a terminal variable for the chain ID.
+
+```bash
+CHAIN_ID=gm
+```
+
+Initialize the chain config for the full node (let's call it `FullNode`), using your chain ID:
+
+```bash
+gmd init FullNode --chain-id $CHAIN_ID --home $HOME/.${CHAIN_ID}_fn
+```
+
+Copy the genesis file from the sequencer node:
+
+```bash
+cp $HOME/.$CHAIN_ID/config/genesis.json $HOME/.${CHAIN_ID}_fn/config/genesis.json
+```
+
+### Set Up P2P Connection to Sequencer Node
+
+Identify the sequencer node's P2P address from its logs. It will look similar to:
+
+```text
+1:55PM INF listening on address=/ip4/127.0.0.1/tcp/7676/p2p/12D3KooWJbD9TQoMSSSUyfhHMmgVY3LqCjxYFz8wQ92Qa6DAqtmh module=p2p
+```
+
+Create an environment variable with the P2P address:
+
+```bash
+export P2P_ID="12D3KooWJbD9TQoMSSSUyfhHMmgVY3LqCjxYFz8wQ92Qa6DAqtmh"
+```
+
+### Start the Full Node
+
+We are now ready to run our full node. If the full node runs on the same machine as the sequencer, update the ports to avoid conflicts.
+
+Make sure to include these flags with your start command:
+
+```bash
+ --rpc.laddr tcp://127.0.0.1:46657 \
+ --grpc.address 127.0.0.1:9390 \
+ --p2p.laddr "0.0.0.0:46656" \
+ --api.address tcp://localhost:1318
+```
+
+Run your full node with the following command:
+
+```bash
+gmd start \
+ --evnode.da.address http://127.0.0.1:7980 \
+ --p2p.seeds $P2P_ID@127.0.0.1:7676 \
+ --minimum-gas-prices 0stake \
+ --rpc.laddr tcp://127.0.0.1:46657 \
+ --grpc.address 127.0.0.1:9390 \
+ --p2p.laddr "0.0.0.0:46656" \
+ --api.address tcp://localhost:1318 \
+ --home $HOME/.${CHAIN_ID}_fn
+```
+
+Key points about this command:
+
+- The chain ID is the `$CHAIN_ID` variable set earlier (`gm` in this case); the full node's home directory is derived from it.
+- The ports and addresses are different from the sequencer node to avoid conflicts. Not everything may be necessary for your setup.
+- We use the `P2P_ID` environment variable to set the seed node.
+
+## Verifying Full Node Operation
+
+After starting your full node, you should see output similar to:
+
+```bash
+2:33PM DBG indexed transactions height=1 module=txindex num_txs=0
+2:33PM INF block marked as DA included blockHash=7897885B959F52BF0D772E35F8DA638CF8BBC361C819C3FD3E61DCEF5034D1CC blockHeight=5532 module=BlockManager
+```
+
+This output indicates that your full node is successfully connecting to the network and processing blocks.
+
+:::tip
+If your chain uses EVM as an execution layer and you see an error like `datadir already used by another process`, remove all state from the chain data directory (`/root/.yourchain_fn/data/`) and specify a different data directory for the EVM client.
+:::
+
+## Conclusion
+
+You've now set up a full node running alongside your Evolve sequencer.
diff --git a/docs/guides/tools/blob-decoder.md b/docs/guides/tools/blob-decoder.md
new file mode 100644
index 000000000..8879f39fa
--- /dev/null
+++ b/docs/guides/tools/blob-decoder.md
@@ -0,0 +1,158 @@
+# Blob Decoder Tool
+
+The blob decoder is a utility tool for decoding and inspecting blobs from data availability (DA) layers such as Celestia. It provides both a web interface and an API for decoding blob data into a human-readable format.
+
+## Overview
+
+The blob decoder helps developers and operators inspect the contents of blobs submitted to DA layers. It can decode:
+
+- Raw blob data (hex or base64 encoded)
+- Block data structures
+- Transaction payloads
+- Protobuf-encoded messages
+
+## Usage
+
+### Starting the Server
+
+```bash
+# Run with default port (8080)
+go run tools/blob-decoder/main.go
+```
+
+The server will start and display:
+
+- Web interface URL: `http://localhost:8080`
+- API endpoint: `http://localhost:8080/api/decode`
+
+### Web Interface
+
+1. Open your browser to `http://localhost:8080`
+2. Paste your blob data in the input field
+3. Select the encoding format (hex or base64)
+4. Click "Decode" to see the parsed output
+
+### API Usage
+
+The decoder provides a REST API for programmatic access:
+
+```bash
+# Decode hex-encoded blob
+curl -X POST http://localhost:8080/api/decode \
+ -H "Content-Type: application/json" \
+ -d '{
+ "data": "0x1234abcd...",
+ "encoding": "hex"
+ }'
+
+# Decode base64-encoded blob
+curl -X POST http://localhost:8080/api/decode \
+ -H "Content-Type: application/json" \
+ -d '{
+ "data": "SGVsbG8gV29ybGQ=",
+ "encoding": "base64"
+ }'
+```
+
+#### API Request Format
+
+```json
+{
+ "data": "string", // The encoded blob data
+ "encoding": "string" // Either "hex" or "base64"
+}
+```
+
+#### API Response Format
+
+```json
+{
+ "success": true,
+ "decoded": {
+ // Decoded data structure
+ },
+ "error": "string" // Only present if success is false
+}
+```
+
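+A small sketch for scripting against the API, assuming `jq` is installed; it relies only on the `success` and `error` fields documented above:
+
+```bash
+# Decode a blob and fail loudly if the decoder reports an error
+resp=$(curl -s -X POST http://localhost:8080/api/decode \
+  -H "Content-Type: application/json" \
+  -d '{"data": "SGVsbG8gV29ybGQ=", "encoding": "base64"}')
+echo "$resp" | jq -e '.success' >/dev/null || echo "decode failed: $(echo "$resp" | jq -r '.error')"
+```
+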
+## Supported Data Types
+
+### Block Data
+
+The decoder can parse ev-node block structures:
+
+- Block height
+- Timestamp
+- Parent hash
+- Transaction list
+- Validator information
+- Data commitments
+
+### Transaction Data
+
+Decodes individual transactions including:
+
+- Transaction type
+- Sender/receiver addresses
+- Value/amount
+- Gas parameters
+- Payload data
+
+### Protobuf Messages
+
+Automatically detects and decodes protobuf-encoded messages used in ev-node:
+
+- Block headers
+- Transaction batches
+- State updates
+- DA commitments
+
+## Examples
+
+### Decoding a Block Blob
+
+```bash
+# Example block blob (hex encoded)
+curl -X POST http://localhost:8080/api/decode \
+ -H "Content-Type: application/json" \
+ -d '{
+ "data": "0a2408011220...",
+ "encoding": "hex"
+ }'
+```
+
+Response:
+
+```json
+{
+ "success": true,
+ "decoded": {
+ "height": 100,
+ "timestamp": "2024-01-15T10:30:00Z",
+ "parentHash": "0xabc123...",
+ "transactions": [
+ {
+ "type": "transfer",
+ "from": "0x123...",
+ "to": "0x456...",
+ "value": "1000000000000000000"
+ }
+ ]
+ }
+}
+```
+
+### Decoding DA Commitment
+
+```bash
+curl -X POST http://localhost:8080/api/decode \
+ -H "Content-Type: application/json" \
+ -d '{
+ "data": "eyJjb21taXRtZW50IjogIi4uLiJ9",
+ "encoding": "base64"
+ }'
+```
+
+### Celestia
+
+For Celestia blobs, you can also inspect namespace data and payment information on [Celenium](https://celenium.io/namespaces).
diff --git a/docs/guides/tools/visualizer.md b/docs/guides/tools/visualizer.md
new file mode 100644
index 000000000..55ebc9980
--- /dev/null
+++ b/docs/guides/tools/visualizer.md
@@ -0,0 +1,240 @@
+# DA Visualizer
+
+The Data Availability (DA) Visualizer is a built-in monitoring tool in Evolve that provides real-time insights into blob submissions to the DA layer. It offers a web-based interface for tracking submission statistics, monitoring DA layer health, and analyzing blob details.
+
+**Note**: Only aggregator nodes submit data to the DA layer. Non-aggregator nodes will not display submission data.
+
+## Overview
+
+The DA Visualizer provides:
+
+- Real-time monitoring of blob submissions (last 100 submissions)
+- Success/failure statistics and trends
+- Gas price tracking and cost analysis
+- DA layer health monitoring
+- Detailed blob inspection capabilities
+- Recent submission history
+
+## Enabling the DA Visualizer
+
+The DA Visualizer is disabled by default. To enable it, use the following configuration:
+
+### Via Command-line Flag
+
+```bash
+testapp start --evnode.rpc.enable_da_visualization
+```
+
+### Via Configuration File
+
+Add the following to your `evnode.yml` configuration file:
+
+```yaml
+rpc:
+ enable_da_visualization: true
+```
+
+## Accessing the DA Visualizer
+
+Once enabled, the DA Visualizer is accessible through your node's RPC server. By default, this is:
+
+```text
+http://localhost:7331/da
+```
+
+The visualizer provides several API endpoints and a web interface:
+
+### Web Interface
+
+Navigate to `http://localhost:7331/da` in your web browser to access the interactive dashboard.
+
+### API Endpoints
+
+The following REST API endpoints are available for programmatic access:
+
+#### Get Recent Submissions
+
+```bash
+GET /da/submissions
+```
+
+Returns the most recent blob submissions (up to 100 kept in memory).
+
+#### Get Blob Details
+
+```bash
+GET /da/blob?id={blob_id}
+```
+
+Returns detailed information about a specific blob submission.
+
+#### Get DA Statistics
+
+```bash
+GET /da/stats
+```
+
+Returns aggregated statistics including:
+
+- Total submissions count
+- Success/failure rates
+- Average gas price
+- Total gas spent
+- Average blob size
+- Submission trends
+
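+For example, you can fetch these statistics from the command line; since the exact response field names are not documented here, this sketch simply pretty-prints the JSON:
+
+```bash
+# Pretty-print the aggregated DA statistics (requires jq)
+curl -s http://localhost:7331/da/stats | jq .
+```
+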
+#### Get DA Health Status
+
+```bash
+GET /da/health
+```
+
+Returns the current health status of the DA layer including:
+
+- Connection status
+- Recent error rates
+- Performance metrics
+- Last successful submission timestamp
+
+## Features
+
+### Real-time Monitoring
+
+The dashboard automatically updates every 30 seconds, displaying:
+
+- Recent submission feed with status indicators (last 100 submissions)
+- Success rate percentage
+- Current gas price trends
+- Submission history
+
+### Submission Details
+
+Each submission entry shows:
+
+- Timestamp
+- Blob ID with link to detailed view
+- Number of blobs in the batch
+- Submission status (success/failure)
+- Gas price used
+- Error messages (if any)
+
+### Statistics Dashboard
+
+The statistics section provides:
+
+- **Performance Metrics**: Success rate, average submission time
+- **Cost Analysis**: Total gas spent, average gas price over time
+- **Volume Metrics**: Total blobs submitted, average blob size
+- **Trend Analysis**: Hourly and daily submission patterns
+
+### Health Monitoring
+
+The health status indicator shows:
+
+- 🟢 **Healthy**: DA layer responding normally
+- 🟡 **Warning**: Some failures but overall functional
+- 🔴 **Critical**: High failure rate or connection issues
+
+## Use Cases
+
+### For Node Operators
+
+- Monitor the reliability of DA submissions
+- Track gas costs and optimize gas price settings
+- Identify patterns in submission failures
+- Ensure DA layer connectivity
+
+### For Developers
+
+- Debug DA submission issues
+- Analyze blob data structure
+- Monitor application-specific submission patterns
+- Test DA layer integration
+
+### For Network Monitoring
+
+- Track overall network DA usage
+- Identify congestion periods
+- Monitor gas price fluctuations
+- Analyze submission patterns across the network
+
+## Configuration Options
+
+When enabling the DA Visualizer, you may want to adjust related RPC settings:
+
+```yaml
+rpc:
+ address: "0.0.0.0:7331" # Bind to all interfaces for remote access
+ enable_da_visualization: true
+```
+
+**Security Note**: If binding to all interfaces (`0.0.0.0`), ensure proper firewall rules are in place to restrict access to trusted sources only.
+
+## Troubleshooting
+
+### Visualizer Not Accessible
+
+1. Verify the DA Visualizer is enabled:
+ - Check your configuration file or ensure the flag is set
+ - Look for log entries confirming "DA visualization endpoints registered"
+
+2. Check the RPC server is running:
+ - Verify the RPC address in logs
+ - Ensure no port conflicts
+
+3. For remote access:
+ - Ensure the RPC server is bound to an accessible interface
+ - Check firewall settings
+
+### No Data Displayed
+
+1. Verify your node is in aggregator mode (only aggregators submit to DA)
+2. Check DA layer connectivity in the node logs
+3. Ensure transactions are being processed
+4. Note that the visualizer only keeps the last 100 submissions in memory
+
+### API Errors
+
+- **404 Not Found**: DA Visualizer not enabled
+- **500 Internal Server Error**: Check node logs for DA connection issues
+- **Empty responses**: No submissions have been made yet
+
+## Example Usage
+
+### Using curl to access the API
+
+```bash
+# Get recent submissions (returns up to 100)
+curl http://localhost:7331/da/submissions
+
+# Get specific blob details
+curl http://localhost:7331/da/blob?id=abc123...
+
+# Get statistics
+curl http://localhost:7331/da/stats
+
+# Check DA health
+curl http://localhost:7331/da/health
+```
+
+### Monitoring with scripts
+
+```bash
+#!/bin/bash
+# Simple monitoring script
+
+while true; do
+ health=$(curl -s http://localhost:7331/da/health | jq -r '.status')
+ if [ "$health" != "healthy" ]; then
+ echo "DA layer issue detected: $health"
+ # Send alert...
+ fi
+ sleep 30
+done
+```
+
+## Related Configuration
+
+For complete DA layer configuration options, see the [Config Reference](../../learn/config.md#data-availability-configuration-da).
+
+For metrics and monitoring setup, see the [Metrics Guide](../metrics.md).
diff --git a/docs/overview/architecture.md b/docs/overview/architecture.md
new file mode 100644
index 000000000..a262ba123
--- /dev/null
+++ b/docs/overview/architecture.md
@@ -0,0 +1,185 @@
+# Architecture
+
+Evolve uses a modular architecture where each component has a well-defined interface and can be swapped independently. This document provides an overview of how the pieces fit together.
+
+## System Overview
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Client Apps │
+│ (wallets, dapps, indexers) │
+└─────────────────────────────┬───────────────────────────────────┘
+ │ JSON-RPC / gRPC
+┌─────────────────────────────▼───────────────────────────────────┐
+│ ev-node │
+│ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌──────────────┐ │
+│ │ Block │ │ Sequencer │ │ P2P │ │ Sync │ │
+│ │ Components│ │ │ │ Network │ │ Services │ │
+│ └─────┬─────┘ └─────┬─────┘ └─────┬─────┘ └───────┬──────┘ │
+└────────┼──────────────┼──────────────┼────────────────┼─────────┘
+ │ │ │ │
+ │ Executor │ Sequencer │ libp2p │ DA Client
+ ▼ ▼ ▼ ▼
+┌────────────────┐ ┌──────────┐ ┌─────────────────────────────────┐
+│ Executor │ │Sequencer │ │ DA Layer │
+│ (ev-reth or │ │(single, │ │ (Celestia) │
+│ ev-abci) │ │ based) │ │ │
+└────────────────┘ └──────────┘ └─────────────────────────────────┘
+```
+
+## Core Design Principles
+
+1. **Zero-dependency core** — The `core/` package contains only interfaces with no external dependencies. This keeps the API stable and allows any implementation.
+
+2. **Modular components** — Executor, Sequencer, and DA layer are all pluggable. Swap them without changing ev-node.
+
+3. **Separation of concerns** — Block production, syncing, and DA submission run as independent components that communicate through well-defined channels.
+
+4. **Two operating modes** — Nodes run as either an Aggregator (produces blocks) or Sync-only (follows chain).
+
+## Block Components
+
+The block package is the heart of ev-node. It's organized into specialized components:
+
+| Component | Responsibility | Runs On |
+|-----------|---------------|---------|
+| **Executor** | Produces blocks by getting batches from sequencer and executing via execution layer | Aggregator only |
+| **Reaper** | Scrapes transactions from execution layer mempool and submits to sequencer | Aggregator only |
+| **Syncer** | Coordinates block sync from DA layer and P2P network | All nodes |
+| **Submitter** | Submits blocks to DA layer and tracks inclusion | Aggregator only |
+| **Cache** | Manages in-memory state for headers, data, and pending submissions | All nodes |
+
+### Component Interaction
+
+```
+ ┌─────────────┐
+ │ Reaper │
+ │ (tx scrape)│
+ └──────┬──────┘
+ │ Submit batch
+ ▼
+┌─────────────┐ ┌─────────────┐ ┌─────────────┐
+│ Sequencer │◄───│ Executor │───►│ Broadcaster │
+│ │ │(block prod) │ │ (P2P) │
+└─────────────┘ └──────┬──────┘ └─────────────┘
+ │
+ │ Queue for submission
+ ▼
+ ┌─────────────┐
+ │ Submitter │───► DA Layer
+ │ │
+ └──────┬──────┘
+ │
+ │ Track inclusion
+ ▼
+ ┌─────────────┐
+ │ Cache │
+ └─────────────┘
+```
+
+## Node Types
+
+Evolve supports several node configurations:
+
+| Type | Block Production | Full Validation | DA Submission | Use Case |
+|------|-----------------|-----------------|---------------|----------|
+| **Aggregator** | Yes | Yes | Yes | Block producer (sequencer) |
+| **Full Node** | No | Yes | No | RPC provider, validator |
+| **Light Node** | No | Headers only | No | Mobile, embedded clients |
+| **Attester** | No | Yes | No | Soft consensus participant |
+
+### Aggregator
+
+The aggregator (also called sequencer node) produces blocks:
+
+1. Reaper collects transactions from execution layer
+2. Executor gets ordered batch from sequencer
+3. Executor calls execution layer to process transactions
+4. Executor creates and signs block (header + data)
+5. Broadcaster gossips block to P2P network
+6. Submitter queues block for DA submission
+
+### Full Node
+
+Full nodes sync and validate without producing blocks:
+
+1. Syncer receives blocks from DA layer and/or P2P
+2. Validates header signatures and data hashes
+3. Executes transactions via execution layer
+4. Verifies resulting state root matches header
+5. Persists validated blocks to local store
+
+## Data Flow
+
+### Block Production (Aggregator)
+
+```
+User Tx → Execution Layer Mempool
+ │
+ ▼
+ Reaper scrapes txs
+ │
+ ▼
+ Sequencer orders batch
+ │
+ ▼
+ Executor.ExecuteTxs()
+ │
+ ├──► SignedHeader + Data
+ │
+ ├──► P2P Broadcast (soft confirmation)
+ │
+ └──► Submitter Queue
+ │
+ ▼
+ DA Layer (hard confirmation)
+```
+
+### Block Sync (Non-Aggregator)
+
+```
+┌────────────────────────────────────────┐
+│ Syncer │
+├────────────┬────────────┬──────────────┤
+│ DA Worker │ P2P Worker │Forced Incl. │
+│ │ │ Worker │
+└─────┬──────┴─────┬──────┴───────┬──────┘
+ │ │ │
+ └────────────┴──────────────┘
+ │
+ ▼
+ processHeightEvent()
+ │
+ ▼
+ Validate → Execute → Persist
+```
+
+## P2P Network
+
+Built on libp2p with:
+
+- **GossipSub** for transaction and block propagation
+- **Kademlia DHT** for peer discovery
+- **Topics**: `{chainID}-tx`, `{chainID}-header`, `{chainID}-data`
+
+Nodes discover peers through:
+
+1. Bootstrap/seed nodes
+2. DHT peer exchange
+3. PEX (peer exchange protocol)
+
+## Storage
+
+ev-node uses a key-value store (badger) for:
+
+- **Headers** — Indexed by height and hash
+- **Data** — Transaction lists indexed by height
+- **State** — Last committed height, app hash, DA height
+- **Pending** — Blocks awaiting DA inclusion
+
+## Further Reading
+
+- [Block Lifecycle](/concepts/block-lifecycle) — Detailed block processing flow
+- [Sequencing](/concepts/sequencing) — How transaction ordering works
+- [Data Availability](/concepts/data-availability) — DA layer integration
+- [Executor Interface](/reference/interfaces/executor) — Full interface reference
diff --git a/docs/overview/execution-environments.md b/docs/overview/execution-environments.md
new file mode 100644
index 000000000..c2c76503e
--- /dev/null
+++ b/docs/overview/execution-environments.md
@@ -0,0 +1,31 @@
+# Execution Layers in Evolve
+
+Evolve is designed to be modular and flexible, allowing different execution layers to be plugged in. Evolve defines a general-purpose execution interface ([see execution.go](https://github.com/evstack/ev-node/blob/main/core/execution/execution.go)) that enables developers to integrate any compatible application as the chain's execution layer.
+
+This means you can use a variety of Cosmos SDK- or Reth-compatible applications as the execution environment for your chain: choose the execution environment that best fits your use case.
+
+## Supported Execution Layers
+
+### Cosmos SDK Execution Layer
+
+Evolve natively supports Cosmos SDK-based applications as the execution layer for a chain via the ABCI (Application Blockchain Interface) protocol. The Cosmos SDK provides a rich set of modules for staking, governance, IBC, and more, and is widely used in the Cosmos ecosystem. This integration allows developers to leverage the full power and flexibility of the Cosmos SDK when building their chain applications.
+
+- [Cosmos SDK Documentation](https://docs.cosmos.network/)
+- [Cosmos SDK ABCI Documentation](https://docs.cosmos.network/main/build/abci/introduction)
+- [Evolve ABCI Adapter](https://github.com/evstack/ev-abci)
+
+### Reth
+
+Reth is a high-performance Ethereum execution client written in Rust. Evolve can integrate Reth as an execution layer, enabling Ethereum-compatible chains to process EVM transactions and maintain Ethereum-like state. This allows developers to build chains that leverage the Ethereum ecosystem, tooling, and smart contracts, while benefiting from Evolve's modular consensus and data availability.
+
+For more information about Reth, see the official documentation:
+
+- [Reth GitHub Repository](https://github.com/paradigmxyz/reth)
+- [Evolve Reth Integration](https://github.com/evstack/ev-reth)
+
+## How It Works
+
+- Evolve acts as the consensus layer and uses Celestia as its data availability layer.
+- The execution layer (Cosmos SDK app or Reth) processes transactions and maintains application state.
+
+For more details on integrating an execution layer with Evolve, see the respective documentation links above.
diff --git a/docs/overview/what-is-evolve.md b/docs/overview/what-is-evolve.md
new file mode 100644
index 000000000..1f49b1d6f
--- /dev/null
+++ b/docs/overview/what-is-evolve.md
@@ -0,0 +1,95 @@
+# Introduction
+
+Evolve is the fastest way to launch your own modular network — without validator overhead or token lock-in.
+
+Built on Celestia, Evolve offers L1-level control with L2-level performance.
+
+This isn't a toolkit. It's a launch stack.
+
+No fees. No middlemen. No revenue share.
+
+## What is Evolve
+
+Evolve is a launch stack for L1s. It gives you full control over execution — without CometBFT, validator ops, or lock-in.
+
+It's [open-source](https://github.com/evstack/ev-node), production-ready, and fully composable.
+
+At its core is `ev-node`, a modular node that exposes an [Execution interface](https://github.com/evstack/ev-node/blob/main/core/execution/execution.go) — letting you bring any VM or execution logic, including the Cosmos SDK or custom-built runtimes.
+
+Evolving from Cosmos SDK?
+
+Migrate without rewriting your stack. Bring your logic and state to Evolve and shed validator overhead — all while gaining performance and execution freedom.
+
+Evolve is how you launch your network. Modular. Production-ready. Yours.
+
+With Evolve, you get:
+
+- Full control over execution — use any VM
+- Low-cost launch — no emissions, no validator inflation
+- Speed to traction — from local devnet to testnet in minutes
+- Keep sequencer revenue — monetize directly
+- Optional L1 validator network for fast finality and staking
+
+Powered by Celestia — toward 1GB blocks, multi-VM freedom, and execution without compromising flexibility or cost.
+
+## What problems is Evolve solving
+
+### 1. Scalability and customizability
+
+Deploying your decentralized application as a smart contract on a shared blockchain has many limitations. Your smart contract has to share computational resources with every other application, so scalability is limited.
+
+Plus, you're restricted to the execution environment that the shared blockchain uses, so developer flexibility is limited as well.
+
+### 2. Security and time to market
+
+Deploying a new chain might sound like the perfect solution for the problems listed above. While it's somewhat true, deploying a new layer 1 chain presents a complex set of challenges and trade-offs for developers looking to build blockchain products.
+
+Deploying a legacy layer 1 has huge barriers to entry: time, capital, token emissions and expertise.
+
+In order to secure the network, developers must bootstrap a sufficiently secure set of validators, incurring the overhead of managing a full consensus network. This requires paying validators with inflationary tokens, putting the network's business sustainability at risk. Network effects are also critical for success, but can be challenging to achieve as the network must gain widespread adoption to be secure and valuable.
+
+In a potential future with millions of chains, it's unlikely all of those chains will be able to sustainably attract a sufficiently secure and decentralized validator set.
+
+## Why Evolve
+
+Evolve addresses the challenges of deploying either a smart contract or a new layer 1, minimizing these trade-offs through Evolve chains.
+
+With Evolve, developers can benefit from:
+
+- **Shared security**: Chains inherit security from a data availability layer, by posting blocks to it. Chains reduce the trust assumptions placed on chain sequencers by allowing full nodes to download and verify the transactions in the blocks posted by the sequencer. For optimistic or zk-chains, in case of fraudulent blocks, full nodes can generate fraud or zk-proofs, which they can share with the rest of the network, including light nodes. Our roadmap includes the ability for light clients to receive and verify proofs, so that everyday users can enjoy high security guarantees.
+
+- **Scalability:** Evolve chains are deployed on specialized data availability layers like Celestia, which directly leverages the scalability of the DA layer. Additionally, chain transactions are executed off-chain rather than on the data availability layer. This means chains have their own dedicated computational resources, rather than sharing computational resources with other applications.
+
+- **Customizability:** Evolve is built as an open source modular framework, to make it easier for developers to reuse the four main components and customize their chains. These components are data availability layers, execution environments, proof systems, and sequencer schemes.
+
+- **Faster time-to-market:** Evolve eliminates the need to bootstrap a validator set, manage a consensus network, incur high economic costs, and face other trade-offs that come with deploying a legacy layer 1. Evolve's goal is to make deploying a chain as easy as it is to deploy a smart contract, cutting the time it takes to bring blockchain products to market from months (or even years) to just minutes.
+
+- **Sovereignty**: Evolve also enables developers to deploy chains for cases where communities require sovereignty.
+
+## How can you use Evolve
+
+As briefly mentioned above, Evolve can be used in many different ways: for chains, for settlement layers, and in the future even for L3s.
+
+### Chain with any VM
+
+Evolve gives developers the flexibility to use pre-existing ABCI-compatible state machines or create a custom state machine tailored to their chain needs. Evolve does not restrict the use of any specific virtual machine, allowing developers to experiment and bring innovative applications to life.
+
+### Cosmos SDK
+
+Just as developers use the Cosmos SDK to build a layer 1 chain, they can use it to create an Evolve-compatible chain. The Cosmos SDK has great [documentation](https://docs.cosmos.network/main) and tooling that developers can leverage to learn.
+
+Another possibility is taking an existing layer 1 built with the Cosmos SDK and deploying it as an Evolve chain. Evolve gives your network a forward path: migrate seamlessly, keep your logic, and evolve into a modular, high-performance system without CometBFT bottlenecks and with zero validator overhead.
+
+### Build a settlement layer
+
+[Settlement layers](https://celestia.org/learn/modular-settlement-layers/settlement-in-the-modular-stack/) are ideal for developers who want to avoid deploying chains. They provide a platform for chains to verify proofs and resolve disputes. Additionally, they act as a hub for chains to facilitate trust-minimized token transfers and liquidity sharing between chains that share the same settlement layer. Think of settlement layers as a special type of execution layer.
+
+## When can you use Evolve
+
+As of today, Evolve provides a single sequencer, an execution interface (Engine API or ABCI), and a connection to Celestia.
+
+We're currently working on implementing many new and exciting features such as light nodes and state fraud proofs.
+
+Head down to the next section to learn more about what's coming for Evolve. If you're ready to start building, you can skip to the [Guides](../guides/quick-start.md) section.
+
+Spoiler alert: whichever you choose, it's going to be a great rabbit hole!
diff --git a/docs/reference/api/abci-rpc.md b/docs/reference/api/abci-rpc.md
new file mode 100644
index 000000000..ffca6a18a
--- /dev/null
+++ b/docs/reference/api/abci-rpc.md
@@ -0,0 +1,196 @@
+# ABCI RPC Reference
+
+CometBFT-compatible RPC endpoints provided by ev-abci.
+
+## Query Methods
+
+### /abci_query
+
+Query application state.
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/abci_query?path="/store/bank/key"&data=0x...'
+```
+
+**Response:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "result": {
+ "response": {
+ "code": 0,
+ "value": "base64encodedvalue",
+ "height": "1000"
+ }
+ },
+ "id": 1
+}
+```
+
+### /block
+
+Get block at height.
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/block?height=100'
+```
+
+### /block_results
+
+Get block results (tx results, events).
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/block_results?height=100'
+```
+
+### /commit
+
+Get commit (signatures) at height.
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/commit?height=100'
+```
+
+### /validators
+
+Get the validator set (returns the sequencer in Evolve).
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/validators?height=100'
+```
+
+### /status
+
+Get node status.
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/status'
+```
+
+### /genesis
+
+Get genesis document.
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/genesis'
+```
+
+### /health
+
+Health check.
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/health'
+```
+
+## Transaction Methods
+
+### /broadcast_tx_async
+
+Broadcast transaction, return immediately.
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/broadcast_tx_async?tx=0x...'
+```
+
+### /broadcast_tx_sync
+
+Broadcast transaction, wait for CheckTx.
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/broadcast_tx_sync?tx=0x...'
+```
+
+### /broadcast_tx_commit
+
+Broadcast transaction, wait for inclusion.
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/broadcast_tx_commit?tx=0x...'
+```
+
+### /tx
+
+Get transaction by hash.
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/tx?hash=0x...'
+```
+
+### /tx_search
+
+Search transactions.
+
+**Request:**
+
+```bash
+curl 'http://localhost:26657/tx_search?query="tx.height=100"'
+```
+
+## WebSocket
+
+### /subscribe
+
+Subscribe to events.
+
+```json
+{
+ "jsonrpc": "2.0",
+ "method": "subscribe",
+ "params": {"query": "tm.event='NewBlock'"},
+ "id": 1
+}
+```
+
+Event types:
+
+- `NewBlock` — New block committed
+- `Tx` — Transaction included
+- `NewBlockHeader` — New block header
+
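+A quick way to try a subscription from the command line, assuming the `wscat` utility (`npm install -g wscat`) and the default `/websocket` path:
+
+```bash
+# Connect, subscribe to NewBlock events, and wait 10s for responses
+wscat -c ws://localhost:26657/websocket -w 10 \
+  -x '{"jsonrpc":"2.0","method":"subscribe","params":{"query":"tm.event='\''NewBlock'\''"},"id":1}'
+```
+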
+## Unsupported Methods
+
+These CometBFT methods are not supported in ev-abci:
+
+| Method | Reason |
+|--------|--------|
+| `/consensus_state` | No BFT consensus |
+| `/dump_consensus_state` | No BFT consensus |
+| `/net_info` | Different P2P model |
+| `/unconfirmed_txs` | Different mempool |
+| `/num_unconfirmed_txs` | Different mempool |
+
+## Port
+
+Default: `26657`
+
+Configure:
+
+```bash
+--evnode.rpc.address tcp://0.0.0.0:26657
+```
diff --git a/docs/reference/api/engine-api.md b/docs/reference/api/engine-api.md
new file mode 100644
index 000000000..15854aad2
--- /dev/null
+++ b/docs/reference/api/engine-api.md
@@ -0,0 +1,183 @@
+# Engine API Reference
+
+Engine API methods used by ev-node to communicate with ev-reth.
+
+## Authentication
+
+All requests require JWT authentication via the `Authorization` header:
+
+```text
+Authorization: Bearer <jwt-token>
+```
+
+Generate the shared secret used to sign JWTs:
+
+```bash
+openssl rand -hex 32 > jwt.hex
+```
+
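+A sketch of a raw call, assuming `$JWT` already holds a token signed with the secret in `jwt.hex` (in normal operation ev-node generates these tokens itself). Passing `null` payload attributes performs a fork-choice update without building a block:
+
+```bash
+curl -s http://localhost:8551 \
+  -H "Authorization: Bearer $JWT" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "jsonrpc": "2.0",
+    "method": "engine_forkchoiceUpdatedV3",
+    "params": [
+      {
+        "headBlockHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
+        "safeBlockHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
+        "finalizedBlockHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
+      },
+      null
+    ],
+    "id": 1
+  }'
+```
+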
+## Methods
+
+### engine_forkchoiceUpdatedV3
+
+Update fork choice and optionally build a new block.
+
+**Request:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "method": "engine_forkchoiceUpdatedV3",
+ "params": [
+ {
+ "headBlockHash": "0x...",
+ "safeBlockHash": "0x...",
+ "finalizedBlockHash": "0x..."
+ },
+ {
+ "timestamp": "0x...",
+ "prevRandao": "0x...",
+ "suggestedFeeRecipient": "0x...",
+ "withdrawals": [],
+ "parentBeaconBlockRoot": "0x..."
+ }
+ ],
+ "id": 1
+}
+```
+
+**Response:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "result": {
+ "payloadStatus": {
+ "status": "VALID",
+ "latestValidHash": "0x..."
+ },
+ "payloadId": "0x..."
+ },
+ "id": 1
+}
+```
+
+### engine_getPayloadV3
+
+Get a built payload.
+
+**Request:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "method": "engine_getPayloadV3",
+ "params": ["0x...payloadId"],
+ "id": 1
+}
+```
+
+**Response:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "result": {
+ "executionPayload": {
+ "parentHash": "0x...",
+ "feeRecipient": "0x...",
+ "stateRoot": "0x...",
+ "receiptsRoot": "0x...",
+ "logsBloom": "0x...",
+ "prevRandao": "0x...",
+ "blockNumber": "0x1",
+ "gasLimit": "0x...",
+ "gasUsed": "0x...",
+ "timestamp": "0x...",
+ "extraData": "0x",
+ "baseFeePerGas": "0x...",
+ "blockHash": "0x...",
+ "transactions": ["0x..."],
+ "withdrawals": [],
+ "blobGasUsed": "0x0",
+ "excessBlobGas": "0x0"
+ },
+ "blockValue": "0x...",
+ "blobsBundle": {
+ "commitments": [],
+ "proofs": [],
+ "blobs": []
+ },
+ "shouldOverrideBuilder": false
+ },
+ "id": 1
+}
+```
+
+### engine_newPayloadV3
+
+Validate and execute a payload.
+
+**Request:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "method": "engine_newPayloadV3",
+ "params": [
+ {
+ "parentHash": "0x...",
+ "feeRecipient": "0x...",
+ "stateRoot": "0x...",
+ "receiptsRoot": "0x...",
+ "logsBloom": "0x...",
+ "prevRandao": "0x...",
+ "blockNumber": "0x1",
+ "gasLimit": "0x...",
+ "gasUsed": "0x...",
+ "timestamp": "0x...",
+ "extraData": "0x",
+ "baseFeePerGas": "0x...",
+ "blockHash": "0x...",
+ "transactions": ["0x..."],
+ "withdrawals": [],
+ "blobGasUsed": "0x0",
+ "excessBlobGas": "0x0"
+ },
+ ["0x...expectedBlobVersionedHashes"],
+ "0x...parentBeaconBlockRoot"
+ ],
+ "id": 1
+}
+```
+
+**Response:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "result": {
+ "status": "VALID",
+ "latestValidHash": "0x...",
+ "validationError": null
+ },
+ "id": 1
+}
+```
+
+## Payload Status
+
+| Status | Description |
+|--------|-------------|
+| `VALID` | Payload is valid |
+| `INVALID` | Payload failed validation |
+| `SYNCING` | Node is syncing, cannot validate |
+| `ACCEPTED` | Payload accepted, validation pending |
+| `INVALID_BLOCK_HASH` | Block hash mismatch |
+
+## Ports
+
+| Port | Purpose |
+|------|---------|
+| 8551 | Engine API (authenticated) |
+| 8545 | JSON-RPC (public) |
diff --git a/docs/reference/api/rpc-endpoints.md b/docs/reference/api/rpc-endpoints.md
new file mode 100644
index 000000000..a8e41a782
--- /dev/null
+++ b/docs/reference/api/rpc-endpoints.md
@@ -0,0 +1,176 @@
+# RPC Endpoints Reference
+
+ev-node JSON-RPC endpoints.
+
+## Health
+
+### GET /health
+
+Check node health.
+
+**Response:**
+
+```json
+{
+ "status": "ok"
+}
+```
+
+## Block Queries
+
+### POST /block
+
+Get block by height.
+
+**Request:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "method": "block",
+ "params": { "height": "100" },
+ "id": 1
+}
+```
+
+**Response:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "result": {
+ "block": {
+ "header": {
+ "height": "100",
+ "time": "2024-01-01T00:00:00Z",
+ "last_header_hash": "0x...",
+ "data_hash": "0x...",
+ "app_hash": "0x...",
+ "proposer_address": "0x..."
+ },
+ "data": {
+ "txs": ["0x..."]
+ }
+ }
+ },
+ "id": 1
+}
+```
+
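+For instance, the same query as a curl call (assuming the method name doubles as the HTTP path, as the headings on this page suggest):
+
+```bash
+curl -s -X POST http://localhost:26657/block \
+  -H "Content-Type: application/json" \
+  -d '{"jsonrpc": "2.0", "method": "block", "params": {"height": "100"}, "id": 1}'
+```
+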
+### POST /header
+
+Get header by height.
+
+**Request:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "method": "header",
+ "params": { "height": "100" },
+ "id": 1
+}
+```
+
+### POST /block_by_hash
+
+Get block by hash.
+
+**Request:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "method": "block_by_hash",
+ "params": { "hash": "0x..." },
+ "id": 1
+}
+```
+
+## Transaction Queries
+
+### POST /tx
+
+Get transaction by hash.
+
+**Request:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "method": "tx",
+ "params": { "hash": "0x..." },
+ "id": 1
+}
+```
+
+## Status
+
+### POST /status
+
+Get node status.
+
+**Response:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "result": {
+ "node_info": {
+ "network": "chain-id",
+ "version": "1.0.0"
+ },
+ "sync_info": {
+ "latest_block_height": "1000",
+ "latest_block_time": "2024-01-01T00:00:00Z",
+ "catching_up": false
+ }
+ },
+ "id": 1
+}
+```
+
+## DA Status
+
+### POST /da_status
+
+Get DA layer status.
+
+**Response:**
+
+```json
+{
+ "jsonrpc": "2.0",
+ "result": {
+ "da_height": "5000",
+ "last_submitted_height": "999",
+ "pending_blocks": 1
+ },
+ "id": 1
+}
+```
+
+## Configuration
+
+Default port: `26657`
+
+Configure via flag:
+
+```bash
+--evnode.rpc.address tcp://0.0.0.0:26657
+```
+
+## WebSocket
+
+Subscribe to events via WebSocket at `ws://localhost:26657/websocket`.
+
+### Subscribe to new blocks
+
+```json
+{
+ "jsonrpc": "2.0",
+ "method": "subscribe",
+ "params": { "query": "tm.event='NewBlock'" },
+ "id": 1
+}
+```
diff --git a/docs/reference/configuration/ev-abci-flags.md b/docs/reference/configuration/ev-abci-flags.md
new file mode 100644
index 000000000..2733a8a90
--- /dev/null
+++ b/docs/reference/configuration/ev-abci-flags.md
@@ -0,0 +1,99 @@
+# ev-abci Flags Reference
+
+Command-line flags for Cosmos SDK applications using ev-abci.
+
+## ev-node Flags
+
+These flags configure the underlying ev-node instance.
+
+### Node Configuration
+
+| Flag | Type | Default | Description |
+|---------------------------------|----------|---------|------------------------------|
+| `--evnode.node.aggregator` | bool | `false` | Run as block producer |
+| `--evnode.node.block_time` | duration | `1s` | Block production interval |
+| `--evnode.node.lazy_aggregator` | bool | `false` | Only produce blocks with txs |
+| `--evnode.node.lazy_block_time` | duration | `1s` | Max wait in lazy mode |
+
+### DA Configuration
+
+| Flag | Type | Default | Description |
+|--------------------------|--------|----------|-------------------------|
+| `--evnode.da.address` | string | required | DA layer URL |
+| `--evnode.da.auth_token` | string | `""` | DA authentication token |
+| `--evnode.da.namespace` | string | `""` | DA namespace (hex) |
+| `--evnode.da.gas_price` | float | `0.01` | DA gas price |
+
+### P2P Configuration
+
+| Flag | Type | Default | Description |
+|------------------------|--------|--------------------------|--------------------------------|
+| `--evnode.p2p.listen` | string | `/ip4/0.0.0.0/tcp/26656` | P2P listen address |
+| `--evnode.p2p.peers` | string | `""` | Comma-separated peer addresses |
+| `--evnode.p2p.blocked` | string | `""` | Blocked peer IDs |
+
+### Signer Configuration
+
+| Flag | Type | Default | Description |
+|------------------------------|--------|----------|-----------------------|
+| `--evnode.signer.passphrase` | string | required | Signer key passphrase |
+
+### RPC Configuration
+
+| Flag | Type | Default | Description |
+|------------------------|--------|-----------------------|--------------------|
+| `--evnode.rpc.address` | string | `tcp://0.0.0.0:26657` | RPC listen address |
+
+## Cosmos SDK Flags
+
+Standard Cosmos SDK flags remain available:
+
+| Flag | Description |
+|----------------|--------------------------------------|
+| `--home` | Application home directory |
+| `--log_level` | Log level (debug, info, warn, error) |
+| `--log_format` | Log format (plain, json) |
+| `--trace` | Enable full stack traces |
+
+## Environment Variables
+
+Flags can be set via environment variables:
+
+```bash
+EVNODE_NODE_AGGREGATOR=true
+EVNODE_DA_ADDRESS=http://localhost:7980
+EVNODE_SIGNER_PASSPHRASE=secret
+```
+
+Pattern: `EVNODE_<SECTION>_<FIELD>` (uppercase, underscores)
+
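+For instance, a sketch of the sequencer setup below driven entirely by environment variables (names derived from the stated pattern):
+
+```bash
+export EVNODE_NODE_AGGREGATOR=true
+export EVNODE_NODE_BLOCK_TIME=500ms
+export EVNODE_DA_ADDRESS=http://localhost:7980
+export EVNODE_SIGNER_PASSPHRASE=secret
+appd start
+```
+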
+## Examples
+
+### Sequencer Node
+
+```bash
+appd start \
+ --evnode.node.aggregator \
+ --evnode.node.block_time 500ms \
+ --evnode.da.address http://localhost:7980 \
+ --evnode.signer.passphrase secret
+```
+
+### Full Node
+
+```bash
+appd start \
+ --evnode.da.address http://localhost:7980 \
+ --evnode.p2p.peers 12D3KooW...@sequencer.example.com:26656
+```
+
+### Lazy Aggregator
+
+```bash
+appd start \
+ --evnode.node.aggregator \
+ --evnode.node.lazy_aggregator \
+ --evnode.node.lazy_block_time 5s \
+ --evnode.da.address http://localhost:7980 \
+ --evnode.signer.passphrase secret
+```
diff --git a/docs/reference/configuration/ev-node-config.md b/docs/reference/configuration/ev-node-config.md
new file mode 100644
index 000000000..ba900a163
--- /dev/null
+++ b/docs/reference/configuration/ev-node-config.md
@@ -0,0 +1,999 @@
+# Config
+
+This document provides a comprehensive reference for all configuration options available in Evolve. Understanding these configurations will help you tailor Evolve's behavior to your specific needs, whether you're running an aggregator, a full node, or a light client.
+
+## Table of Contents
+
+- [DA-Only Sync Mode](#da-only-sync-mode)
+- [Introduction to Configurations](#configs)
+- [Base Configuration](#base-configuration)
+ - [Root Directory](#root-directory)
+ - [Database Path](#database-path)
+ - [Chain ID](#chain-id)
+- [Node Configuration (`node`)](#node-configuration-node)
+ - [Aggregator Mode](#aggregator-mode)
+ - [Light Client Mode](#light-client-mode)
+ - [Block Time](#block-time)
+ - [Maximum Pending Blocks](#maximum-pending-blocks)
+ - [Lazy Mode (Lazy Aggregator)](#lazy-mode-lazy-aggregator)
+ - [Lazy Block Interval](#lazy-block-interval)
+- [Data Availability Configuration (`da`)](#data-availability-configuration-da)
+ - [DA Service Address](#da-service-address)
+ - [DA Authentication Token](#da-authentication-token)
+ - [DA Gas Price](#da-gas-price)
+ - [DA Gas Multiplier](#da-gas-multiplier)
+ - [DA Submit Options](#da-submit-options)
+ - [DA Signing Addresses](#da-signing-addresses)
+ - [DA Namespace](#da-namespace)
+  - [DA Header Namespace](#da-header-namespace)
+ - [DA Data Namespace](#da-data-namespace)
+ - [DA Block Time](#da-block-time)
+ - [DA Mempool TTL](#da-mempool-ttl)
+ - [DA Request Timeout](#da-request-timeout)
+ - [DA Batching Strategy](#da-batching-strategy)
+ - [DA Batch Size Threshold](#da-batch-size-threshold)
+ - [DA Batch Max Delay](#da-batch-max-delay)
+ - [DA Batch Min Items](#da-batch-min-items)
+- [P2P Configuration (`p2p`)](#p2p-configuration-p2p)
+ - [P2P Listen Address](#p2p-listen-address)
+ - [P2P Peers](#p2p-peers)
+ - [P2P Blocked Peers](#p2p-blocked-peers)
+ - [P2P Allowed Peers](#p2p-allowed-peers)
+- [RPC Configuration (`rpc`)](#rpc-configuration-rpc)
+ - [RPC Server Address](#rpc-server-address)
+ - [Enable DA Visualization](#enable-da-visualization)
+ - [Health Endpoints](#health-endpoints)
+- [Instrumentation Configuration (`instrumentation`)](#instrumentation-configuration-instrumentation)
+ - [Enable Prometheus Metrics](#enable-prometheus-metrics)
+ - [Prometheus Listen Address](#prometheus-listen-address)
+ - [Maximum Open Connections](#maximum-open-connections)
+ - [Enable Pprof Profiling](#enable-pprof-profiling)
+ - [Pprof Listen Address](#pprof-listen-address)
+- [Logging Configuration (`log`)](#logging-configuration-log)
+ - [Log Level](#log-level)
+ - [Log Format](#log-format)
+ - [Log Trace (Stack Traces)](#log-trace-stack-traces)
+- [Signer Configuration (`signer`)](#signer-configuration-signer)
+ - [Signer Type](#signer-type)
+ - [Signer Path](#signer-path)
+ - [Signer Passphrase](#signer-passphrase)
+
+## DA-Only Sync Mode
+
+Evolve supports running nodes that sync exclusively from the Data Availability (DA) layer without participating in P2P networking. This mode is useful for:
+
+- **Pure DA followers**: Nodes that only need the canonical chain data from DA
+- **Resource optimization**: Reducing network overhead by avoiding P2P gossip
+- **Simplified deployment**: No need to configure or maintain P2P peer connections
+- **Isolated environments**: Nodes that should not participate in P2P communication
+
+**To enable DA-only sync mode:**
+
+1. **Leave P2P peers empty** (default behavior):
+
+ ```yaml
+ p2p:
+ peers: "" # Empty or omit this field entirely
+ ```
+
+2. **Configure DA connection** (required):
+
+ ```yaml
+ da:
+ address: "your-da-service:port"
+ namespace: "your-namespace"
+ # ... other DA configuration
+ ```
+
+3. **Optional**: You can still configure a P2P listen address for potential future connections, but without peers, no P2P networking will occur.
+
+When running in DA-only mode, the node will:
+
+- ✅ Sync blocks and headers from the DA layer
+- ✅ Validate transactions and maintain state
+- ✅ Serve RPC requests
+- ❌ Not participate in P2P gossip or peer discovery
+- ❌ Not share blocks with other nodes via P2P
+- ❌ Not receive transactions via P2P (only from direct RPC submission)
+
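+A minimal sketch of starting a DA-only follower from the command line, using only flags documented on this page:
+
+```bash
+# No P2P peers configured, so the node syncs exclusively from DA
+evnode start \
+  --rollkit.da.address localhost:26659 \
+  --rollkit.da.namespace MY_UNIQUE_NAMESPACE_ID
+```
+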
+## Configs
+
+Evolve configurations can be managed through a YAML file (typically `evnode.yml`, located in `~/.evolve/config/` or `<root_dir>/config/`) and command-line flags. The system prioritizes configurations in the following order (highest priority first):
+
+1. **Command-line flags:** Override all other settings.
+2. **YAML configuration file:** Values specified in the `evnode.yml` file.
+3. **Default values:** Predefined defaults within Evolve.
+
+Environment variables can also be used, typically prefixed with your executable's name (e.g., `YOURAPP_CHAIN_ID="my-chain"`).
+
+## Base Configuration
+
+These are fundamental settings for your Evolve node.
+
+### Root Directory
+
+**Description:**
+The root directory where Evolve stores its data, including the database and configuration files. This is a foundational setting that dictates where all other file paths are resolved from.
+
+**YAML:**
+This option is not set within the YAML configuration file itself, as it specifies the location _of_ the configuration file and other application data.
+
+**Command-line Flag:**
+`--home <directory>`
+_Example:_ `--home /mnt/data/evolve_node`
+_Default:_ `~/.evolve` (or a directory derived from the application name if `defaultHome` is customized).
+_Constant:_ `FlagRootDir`
+
+### Database Path
+
+**Description:**
+The path, relative to the Root Directory, where the Evolve database will be stored. This database contains blockchain state, blocks, and other critical node data.
+
+**YAML:**
+Set this in your configuration file at the top level:
+
+```yaml
+db_path: "data"
+```
+
+**Command-line Flag:**
+`--rollkit.db_path <path>`
+_Example:_ `--rollkit.db_path "node_db"`
+_Default:_ `"data"`
+_Constant:_ `FlagDBPath`
+
+### Chain ID
+
+**Description:**
+The unique identifier for your chain. This ID is used to differentiate your network from others and is crucial for network communication and transaction validation.
+
+**YAML:**
+Set this in your configuration file at the top level:
+
+```yaml
+chain_id: "my-evolve-chain"
+```
+
+**Command-line Flag:**
+`--chain_id <string>`
+_Example:_ `--chain_id "super_rollup_testnet_v1"`
+_Default:_ `"evolve"`
+_Constant:_ `FlagChainID`
+
+## Node Configuration (`node`)
+
+Settings related to the core behavior of the Evolve node, including its mode of operation and block production parameters.
+
+**YAML Section:**
+
+```yaml
+node:
+ # ... node configurations ...
+```
+
+### Aggregator Mode
+
+**Description:**
+If true, the node runs in aggregator mode. Aggregators are responsible for producing blocks by collecting transactions, ordering them, and proposing them to the network.
+
+**YAML:**
+
+```yaml
+node:
+ aggregator: true
+```
+
+**Command-line Flag:**
+`--rollkit.node.aggregator` (boolean, presence enables it)
+_Example:_ `--rollkit.node.aggregator`
+_Default:_ `false`
+_Constant:_ `FlagAggregator`
+
+### Light Client Mode
+
+**Description:**
+If true, the node runs in light client mode. Light clients rely on full nodes for block headers and state information, offering a lightweight way to interact with the chain without storing all data.
+
+**YAML:**
+
+```yaml
+node:
+ light: true
+```
+
+**Command-line Flag:**
+`--rollkit.node.light` (boolean, presence enables it)
+_Example:_ `--rollkit.node.light`
+_Default:_ `false`
+_Constant:_ `FlagLight`
+
+### Block Time
+
+**Description:**
+The target time interval between consecutive blocks produced by an aggregator. This duration (e.g., "500ms", "1s", "5s") dictates the pace of block production.
+
+**YAML:**
+
+```yaml
+node:
+ block_time: "1s"
+```
+
+**Command-line Flag:**
+`--rollkit.node.block_time <duration>`
+_Example:_ `--rollkit.node.block_time 2s`
+_Default:_ `"1s"`
+_Constant:_ `FlagBlockTime`
+
+### Maximum Pending Blocks
+
+**Description:**
+The maximum number of blocks that can be pending Data Availability (DA) submission. When this limit is reached, the aggregator pauses block production until some blocks are confirmed on the DA layer. Use 0 for no limit. This helps manage resource usage and DA layer capacity.
+
+**YAML:**
+
+```yaml
+node:
+ max_pending_blocks: 100
+```
+
+**Command-line Flag:**
+`--rollkit.node.max_pending_blocks <number>`
+_Example:_ `--rollkit.node.max_pending_blocks 50`
+_Default:_ `0` (no limit)
+_Constant:_ `FlagMaxPendingBlocks`
+
+### Lazy Mode (Lazy Aggregator)
+
+**Description:**
+Enables lazy aggregation mode. In this mode, blocks are produced only when new transactions are available in the mempool or after the `lazy_block_interval` has passed. This optimizes resource usage by avoiding the creation of empty blocks during periods of inactivity.
+
+**YAML:**
+
+```yaml
+node:
+ lazy_mode: true
+```
+
+**Command-line Flag:**
+`--rollkit.node.lazy_mode` (boolean, presence enables it)
+_Example:_ `--rollkit.node.lazy_mode`
+_Default:_ `false`
+_Constant:_ `FlagLazyAggregator`
+
+### Lazy Block Interval
+
+**Description:**
+The maximum time interval between blocks when running in lazy aggregation mode (`lazy_mode`). This ensures that blocks are produced periodically even if there are no new transactions, keeping the chain active. This value is generally larger than `block_time`.
+
+**YAML:**
+
+```yaml
+node:
+ lazy_block_interval: "30s"
+```
+
+**Command-line Flag:**
+`--rollkit.node.lazy_block_interval <duration>`
+_Example:_ `--rollkit.node.lazy_block_interval 1m`
+_Default:_ `"30s"`
+_Constant:_ `FlagLazyBlockTime`
+
+## Data Availability Configuration (`da`)
+
+Parameters for connecting and interacting with the Data Availability (DA) layer, which Evolve uses to publish block data.
+
+**YAML Section:**
+
+```yaml
+da:
+ # ... DA configurations ...
+```
+
+### DA Service Address
+
+**Description:**
+The network address (host:port) of the Data Availability layer service. Evolve connects to this endpoint to submit and retrieve block data.
+
+**YAML:**
+
+```yaml
+da:
+ address: "localhost:26659"
+```
+
+**Command-line Flag:**
+`--rollkit.da.address <host:port>`
+_Example:_ `--rollkit.da.address 192.168.1.100:26659`
+_Default:_ `""` (empty, must be configured if DA is used)
+_Constant:_ `FlagDAAddress`
+
+### DA Authentication Token
+
+**Description:**
+The authentication token required to interact with the DA layer service, if the service mandates authentication.
+
+**YAML:**
+
+```yaml
+da:
+ auth_token: "YOUR_DA_AUTH_TOKEN"
+```
+
+**Command-line Flag:**
+`--rollkit.da.auth_token <token>`
+_Example:_ `--rollkit.da.auth_token mysecrettoken`
+_Default:_ `""` (empty)
+_Constant:_ `FlagDAAuthToken`
+
+### DA Gas Price
+
+**Description:**
+The gas price to use for transactions submitted to the DA layer. A value of -1 indicates automatic gas price determination (if supported by the DA layer). Higher values may lead to faster inclusion of data.
+
+**YAML:**
+
+```yaml
+da:
+ gas_price: 0.025
+```
+
+**Command-line Flag:**
+`--rollkit.da.gas_price <float>`
+_Example:_ `--rollkit.da.gas_price 0.05`
+_Default:_ `-1` (automatic)
+_Constant:_ `FlagDAGasPrice`
+
+### DA Gas Multiplier
+
+**Description:**
+A multiplier applied to the gas price when retrying failed DA submissions. Values greater than 1 increase the gas price on retries, potentially improving the chances of successful inclusion. For example, with a gas price of 0.025 and a multiplier of 1.5, the first retry bids 0.0375.
+
+**YAML:**
+
+```yaml
+da:
+ gas_multiplier: 1.1
+```
+
+**Command-line Flag:**
+`--rollkit.da.gas_multiplier <float>`
+_Example:_ `--rollkit.da.gas_multiplier 1.5`
+_Default:_ `1.0` (no multiplication)
+_Constant:_ `FlagDAGasMultiplier`
+
+### DA Submit Options
+
+**Description:**
+Additional options passed to the DA layer when submitting data. The format and meaning of these options depend on the specific DA implementation being used. For example, with Celestia, this can include custom gas settings or other submission parameters in JSON format.
+
+**Note:** If you configure multiple signing addresses (see [DA Signing Addresses](#da-signing-addresses)), the selected signing address will be automatically merged into these options as a JSON field `signer_address` (matching Celestia's TxConfig schema). If the base options are already valid JSON, the signing address is added to the existing object; otherwise, a new JSON object is created.
+
+**YAML:**
+
+```yaml
+da:
+ submit_options: '{"key":"value"}' # Example, format depends on DA layer
+```
+
+**Command-line Flag:**
+`--rollkit.da.submit_options <json-string>`
+_Example:_ `--rollkit.da.submit_options '{"custom_param":true}'`
+_Default:_ `""` (empty)
+_Constant:_ `FlagDASubmitOptions`
+
+### DA Signing Addresses
+
+**Description:**
+A comma-separated list of signing addresses to use for DA blob submissions. When multiple addresses are provided, they will be used in round-robin fashion to prevent sequence mismatches that can occur with high-throughput Cosmos SDK-based DA layers. This is particularly useful for Celestia when submitting many transactions concurrently.
+
+Each submission will select the next address in the list, and that address will be automatically added to the `submit_options` as `signer_address`. This ensures that the DA layer (e.g., celestia-node) uses the specified account for signing that particular blob submission.
+
+**Setup Requirements:**
+
+- All addresses must be loaded into the DA node's keyring and have sufficient funds for transaction fees
+- For Celestia, see the guide on setting up multiple accounts in the DA node documentation
+
+**YAML:**
+
+```yaml
+da:
+ signing_addresses:
+ - "celestia1abc123..."
+ - "celestia1def456..."
+ - "celestia1ghi789..."
+```
+
+**Command-line Flag:**
+`--rollkit.da.signing_addresses <addresses>`
+_Example:_ `--rollkit.da.signing_addresses celestia1abc...,celestia1def...,celestia1ghi...`
+_Default:_ `[]` (empty, uses default DA node behavior)
+_Constant:_ `FlagDASigningAddresses`
+
+**Behavior:**
+
+- If no signing addresses are configured, submissions use the DA layer's default signing behavior
+- If one address is configured, all submissions use that address
+- If multiple addresses are configured, they are used in round-robin order to distribute the load and prevent nonce/sequence conflicts
+- The address selection is thread-safe for concurrent submissions
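+
+As an illustration, the round-robin selection can be sketched as follows. This is a hedged sketch, not the actual implementation; the type and method names are hypothetical:
+
+```go
+import "sync"
+
+// addressRotator sketches thread-safe round-robin selection.
+type addressRotator struct {
+    mu    sync.Mutex
+    addrs []string
+    next  int
+}
+
+// pick returns the next signing address, wrapping around the list.
+// The mutex keeps selection safe under concurrent submissions.
+func (r *addressRotator) pick() string {
+    r.mu.Lock()
+    defer r.mu.Unlock()
+    addr := r.addrs[r.next]
+    r.next = (r.next + 1) % len(r.addrs)
+    return addr
+}
+```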
+
+### DA Namespace
+
+**Description:**
+The namespace ID used when submitting blobs (block data) to the DA layer. This helps segregate data from different chains or applications on a shared DA layer.
+
+**Note:** If only `namespace` is provided, it is used for both headers and data; otherwise `data_namespace` is used for transaction data. Keeping headers in their own namespace allows light clients to sync faster, since they can fetch headers without downloading transaction data.
+
+**YAML:**
+
+```yaml
+da:
+ namespace: "MY_UNIQUE_NAMESPACE_ID"
+```
+
+**Command-line Flag:**
+`--rollkit.da.namespace <namespace>`
+_Example:_ `--rollkit.da.namespace 0x1234567890abcdef`
+_Default:_ `""` (empty)
+_Constant:_ `FlagDANamespace`
+
+### DA Data Namespace
+
+**Description:**
+The namespace ID specifically for submitting transaction data to the DA layer. Transaction data is submitted separately from headers, enabling nodes to sync only the data they need. The namespace value is encoded by the node to ensure proper formatting and compatibility with the DA layer.
+
+**YAML:**
+
+```yaml
+da:
+ data_namespace: "DATA_NAMESPACE_ID"
+```
+
+**Command-line Flag:**
+`--rollkit.da.data_namespace <namespace>`
+_Example:_ `--rollkit.da.data_namespace my_data_namespace`
+_Default:_ Falls back to `namespace` if not set
+_Constant:_ `FlagDADataNamespace`
+
+### DA Block Time
+
+**Description:**
+The average block time of the Data Availability chain (specified as a duration string, e.g., "15s", "1m"). This value influences:
+
+- The frequency of DA layer syncing.
+- The maximum backoff time for retrying DA submissions.
+- Calculation of transaction expiration when multiplied by `mempool_ttl`.
+
+**YAML:**
+
+```yaml
+da:
+ block_time: "6s"
+```
+
+**Command-line Flag:**
+`--rollkit.da.block_time <duration>`
+_Example:_ `--rollkit.da.block_time 12s`
+_Default:_ `"6s"`
+_Constant:_ `FlagDABlockTime`
+
+### DA Mempool TTL
+
+**Description:**
+The number of DA blocks after which a transaction submitted to the DA layer is considered expired and potentially dropped from the DA layer's mempool. This also controls the retry backoff timing for DA submissions. For example, with the default DA `block_time` of 6s and a `mempool_ttl` of 20, a stuck submission is retried after roughly 120 seconds.
+
+**YAML:**
+
+```yaml
+da:
+ mempool_ttl: 20
+```
+
+**Command-line Flag:**
+`--rollkit.da.mempool_ttl <blocks>`
+_Example:_ `--rollkit.da.mempool_ttl 30`
+_Default:_ `20`
+_Constant:_ `FlagDAMempoolTTL`
+
+### DA Request Timeout
+
+**Description:**
+Per-request timeout applied to DA `GetIDs` and `Get` RPC calls while retrieving blobs. Increase this value if your DA endpoint has high latency to avoid premature failures; decrease it to make the syncer fail fast and free resources sooner when the DA node becomes unresponsive.
+
+**YAML:**
+
+```yaml
+da:
+ request_timeout: "30s"
+```
+
+**Command-line Flag:**
+`--rollkit.da.request_timeout <duration>`
+_Example:_ `--rollkit.da.request_timeout 45s`
+_Default:_ `"30s"`
+_Constant:_ `FlagDARequestTimeout`
+
+### DA Batching Strategy
+
+**Description:**
+Controls how blocks are batched before submission to the DA layer. Different strategies offer trade-offs between latency, cost efficiency, and throughput. All strategies pass through the DA submitter which performs additional size checks and may further split batches that exceed the DA layer's blob size limit.
+
+Available strategies:
+
+- **`immediate`**: Submits as soon as any items are available. Best for low-latency requirements where cost is not a concern.
+- **`size`**: Waits until the batch reaches a size threshold (fraction of max blob size). Best for maximizing blob utilization and minimizing costs when latency is flexible.
+- **`time`**: Waits for a time interval before submitting. Provides predictable submission timing aligned with DA block times.
+- **`adaptive`**: Balances between size and time constraints—submits when either the size threshold is reached OR the max delay expires. Recommended for most production deployments as it optimizes both cost and latency.
+
+**YAML:**
+
+```yaml
+da:
+ batching_strategy: "time"
+```
+
+**Command-line Flag:**
+`--rollkit.da.batching_strategy <strategy>`
+_Example:_ `--rollkit.da.batching_strategy adaptive`
+_Default:_ `"time"`
+_Constant:_ `FlagDABatchingStrategy`
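+
+For intuition, the adaptive trigger can be modeled as below. This is a simplified sketch under the semantics described above, not the submitter's actual code; all names are illustrative:
+
+```go
+import "time"
+
+// shouldSubmit models the adaptive strategy: submit once the batch reaches
+// the size threshold OR the max delay elapses, but never below min items.
+func shouldSubmit(batchBytes, maxBlobBytes uint64, items, minItems int,
+    sinceFirstItem, maxDelay time.Duration, threshold float64) bool {
+    if items < minItems {
+        return false // all strategies respect the minimum item count
+    }
+    sizeReady := float64(batchBytes) >= threshold*float64(maxBlobBytes)
+    timeReady := sinceFirstItem >= maxDelay
+    return sizeReady || timeReady
+}
+```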
+
+### DA Batch Size Threshold
+
+**Description:**
+The minimum blob size threshold (as a fraction of the maximum blob size, between 0.0 and 1.0) before submitting a batch. Only applies to the `size` and `adaptive` strategies. For example, a value of 0.8 means the batch will be submitted when it reaches 80% of the maximum blob size.
+
+Higher values maximize blob utilization and reduce costs but may increase latency. Lower values reduce latency but may result in less efficient blob usage.
+
+**YAML:**
+
+```yaml
+da:
+ batch_size_threshold: 0.8
+```
+
+**Command-line Flag:**
+`--rollkit.da.batch_size_threshold <fraction>`
+_Example:_ `--rollkit.da.batch_size_threshold 0.9`
+_Default:_ `0.8` (80% of max blob size)
+_Constant:_ `FlagDABatchSizeThreshold`
+
+### DA Batch Max Delay
+
+**Description:**
+The maximum time to wait before submitting a batch regardless of size. Applies to the `time` and `adaptive` strategies. Lower values reduce latency but may increase costs due to smaller batches. This value is typically aligned with the DA chain's block time to ensure submissions land in consecutive blocks.
+
+When set to 0, defaults to the DA BlockTime value.
+
+**YAML:**
+
+```yaml
+da:
+ batch_max_delay: "6s"
+```
+
+**Command-line Flag:**
+`--rollkit.da.batch_max_delay <duration>`
+_Example:_ `--rollkit.da.batch_max_delay 12s`
+_Default:_ `0` (uses DA BlockTime)
+_Constant:_ `FlagDABatchMaxDelay`
+
+### DA Batch Min Items
+
+**Description:**
+The minimum number of items (headers or data) to accumulate before considering submission. This helps avoid submitting single items when more are expected soon, improving batching efficiency. All strategies respect this minimum.
+
+**YAML:**
+
+```yaml
+da:
+ batch_min_items: 1
+```
+
+**Command-line Flag:**
+`--rollkit.da.batch_min_items <count>`
+_Example:_ `--rollkit.da.batch_min_items 5`
+_Default:_ `1`
+_Constant:_ `FlagDABatchMinItems`
+
+## P2P Configuration (`p2p`)
+
+Settings for peer-to-peer networking, enabling nodes to discover each other, exchange blocks, and share transactions.
+
+**YAML Section:**
+
+```yaml
+p2p:
+ # ... P2P configurations ...
+```
+
+### P2P Listen Address
+
+**Description:**
+The network address (host:port) on which the Evolve node will listen for incoming P2P connections from other nodes.
+
+**YAML:**
+
+```yaml
+p2p:
+ listen_address: "0.0.0.0:7676"
+```
+
+**Command-line Flag:**
+`--rollkit.p2p.listen_address <multiaddr>`
+_Example:_ `--rollkit.p2p.listen_address /ip4/127.0.0.1/tcp/26656`
+_Default:_ `"/ip4/0.0.0.0/tcp/7676"`
+_Constant:_ `FlagP2PListenAddress`
+
+### P2P Peers
+
+**Description:**
+A comma-separated list of peer addresses (e.g., multiaddresses) that the node will attempt to connect to for bootstrapping its P2P connections. These are often referred to as seed nodes.
+
+**For DA-only sync mode:** Leave this field empty (default) to disable P2P networking entirely. When no peers are configured, the node will sync exclusively from the Data Availability layer without participating in P2P gossip, peer discovery, or block sharing. This is useful for nodes that only need to follow the canonical chain data from DA.
+
+**YAML:**
+
+```yaml
+p2p:
+ peers: "/ip4/some_peer_ip/tcp/7676/p2p/PEER_ID1,/ip4/another_peer_ip/tcp/7676/p2p/PEER_ID2"
+ # For DA-only sync, leave peers empty:
+ # peers: ""
+```
+
+**Command-line Flag:**
+`--rollkit.p2p.peers <peers>`
+_Example:_ `--rollkit.p2p.peers /dns4/seed.example.com/tcp/26656/p2p/12D3KooW...`
+_Default:_ `""` (empty - enables DA-only sync mode)
+_Constant:_ `FlagP2PPeers`
+
+### P2P Blocked Peers
+
+**Description:**
+A comma-separated list of peer IDs that the node should block from connecting. This can be used to prevent connections from known malicious or problematic peers.
+
+**YAML:**
+
+```yaml
+p2p:
+ blocked_peers: "PEER_ID_TO_BLOCK1,PEER_ID_TO_BLOCK2"
+```
+
+**Command-line Flag:**
+`--rollkit.p2p.blocked_peers <peer-ids>`
+_Example:_ `--rollkit.p2p.blocked_peers 12D3KooW...,12D3KooX...`
+_Default:_ `""` (empty)
+_Constant:_ `FlagP2PBlockedPeers`
+
+### P2P Allowed Peers
+
+**Description:**
+A comma-separated list of peer IDs that the node should exclusively allow connections from. If this list is non-empty, only peers in this list will be able to connect.
+
+**YAML:**
+
+```yaml
+p2p:
+ allowed_peers: "PEER_ID_TO_ALLOW1,PEER_ID_TO_ALLOW2"
+```
+
+**Command-line Flag:**
+`--rollkit.p2p.allowed_peers <peer-ids>`
+_Example:_ `--rollkit.p2p.allowed_peers 12D3KooY...,12D3KooZ...`
+_Default:_ `""` (empty, allow all unless blocked)
+_Constant:_ `FlagP2PAllowedPeers`
+
+## RPC Configuration (`rpc`)
+
+Settings for the Remote Procedure Call (RPC) server, which allows clients and applications to interact with the Evolve node.
+
+**YAML Section:**
+
+```yaml
+rpc:
+ # ... RPC configurations ...
+```
+
+### RPC Server Address
+
+**Description:**
+The network address (host:port) to which the RPC server will bind and listen for incoming requests.
+
+**YAML:**
+
+```yaml
+rpc:
+ address: "127.0.0.1:7331"
+```
+
+**Command-line Flag:**
+`--rollkit.rpc.address <address>`
+_Example:_ `--rollkit.rpc.address 0.0.0.0:26657`
+_Default:_ `"127.0.0.1:7331"`
+_Constant:_ `FlagRPCAddress`
+
+### Enable DA Visualization
+
+**Description:**
+If true, enables the Data Availability (DA) visualization endpoints that provide real-time monitoring of blob submissions to the DA layer. This includes a web-based dashboard and REST API endpoints for tracking submission statistics, monitoring DA health, and analyzing blob details. Only aggregator nodes submit data to the DA layer, so this feature is most useful when running in aggregator mode.
+
+**YAML:**
+
+```yaml
+rpc:
+ enable_da_visualization: true
+```
+
+**Command-line Flag:**
+`--rollkit.rpc.enable_da_visualization` (boolean, presence enables it)
+_Example:_ `--rollkit.rpc.enable_da_visualization`
+_Default:_ `false`
+_Constant:_ `FlagRPCEnableDAVisualization`
+
+See the [DA Visualizer Guide](../guides/da/visualizer.md) for detailed information on using this feature.
+
+### Health Endpoints
+
+#### `/health/live`
+
+Returns `200 OK` if the process is alive and can access the store.
+
+```bash
+curl http://localhost:7331/health/live
+```
+
+#### `/health/ready`
+
+Returns `200 OK` if the node can serve correct data. Checks:
+
+- P2P is listening (if enabled)
+- Has synced blocks
+- Not too far behind network
+- Non-aggregators: has peers
+- Aggregators: producing blocks at expected rate
+
+```bash
+curl http://localhost:7331/health/ready
+```
+
+Configure max blocks behind:
+
+```yaml
+node:
+ readiness_max_blocks_behind: 15
+```
+
+## Instrumentation Configuration (`instrumentation`)
+
+Settings for enabling and configuring metrics and profiling endpoints, useful for monitoring node performance and debugging.
+
+**YAML Section:**
+
+```yaml
+instrumentation:
+ # ... instrumentation configurations ...
+```
+
+### Enable Prometheus Metrics
+
+**Description:**
+If true, enables the Prometheus metrics endpoint, allowing Prometheus to scrape operational data from the Evolve node.
+
+**YAML:**
+
+```yaml
+instrumentation:
+ prometheus: true
+```
+
+**Command-line Flag:**
+`--rollkit.instrumentation.prometheus` (boolean, presence enables it)
+_Example:_ `--rollkit.instrumentation.prometheus`
+_Default:_ `false`
+_Constant:_ `FlagPrometheus`
+
+### Prometheus Listen Address
+
+**Description:**
+The network address (host:port) where the Prometheus metrics server will listen for scraping requests.
+
+See [Metrics](../guides/metrics.md) for more details on what metrics are exposed.
+
+**YAML:**
+
+```yaml
+instrumentation:
+ prometheus_listen_addr: ":2112"
+```
+
+**Command-line Flag:**
+`--rollkit.instrumentation.prometheus_listen_addr <address>`
+_Example:_ `--rollkit.instrumentation.prometheus_listen_addr 0.0.0.0:9090`
+_Default:_ `":2112"`
+_Constant:_ `FlagPrometheusListenAddr`
+
+### Maximum Open Connections
+
+**Description:**
+The maximum number of simultaneous connections allowed for the metrics server (e.g., Prometheus endpoint).
+
+**YAML:**
+
+```yaml
+instrumentation:
+ max_open_connections: 100
+```
+
+**Command-line Flag:**
+`--rollkit.instrumentation.max_open_connections <count>`
+_Example:_ `--rollkit.instrumentation.max_open_connections 50`
+_Default:_ (Refer to `DefaultInstrumentationConfig()` in code, typically a reasonable number like 100)
+_Constant:_ `FlagMaxOpenConnections`
+
+### Enable Pprof Profiling
+
+**Description:**
+If true, enables the pprof HTTP endpoint, which provides runtime profiling data for debugging performance issues. Accessing these endpoints can help diagnose CPU and memory usage.
+
+**YAML:**
+
+```yaml
+instrumentation:
+ pprof: true
+```
+
+**Command-line Flag:**
+`--rollkit.instrumentation.pprof` (boolean, presence enables it)
+_Example:_ `--rollkit.instrumentation.pprof`
+_Default:_ `false`
+_Constant:_ `FlagPprof`
+
+### Pprof Listen Address
+
+**Description:**
+The network address (host:port) where the pprof HTTP server will listen for profiling requests.
+
+**YAML:**
+
+```yaml
+instrumentation:
+ pprof_listen_addr: "localhost:6060"
+```
+
+**Command-line Flag:**
+`--rollkit.instrumentation.pprof_listen_addr <address>`
+_Example:_ `--rollkit.instrumentation.pprof_listen_addr 0.0.0.0:6061`
+_Default:_ `"localhost:6060"`
+_Constant:_ `FlagPprofListenAddr`
+
+## Logging Configuration (`log`)
+
+Settings that control the verbosity and format of log output from the Evolve node. These are typically set via global flags.
+
+**YAML Section:**
+
+```yaml
+log:
+ # ... logging configurations ...
+```
+
+### Log Level
+
+**Description:**
+Sets the minimum severity level for log messages to be displayed. Common levels include `debug`, `info`, `warn`, `error`.
+
+**YAML:**
+
+```yaml
+log:
+ level: "info"
+```
+
+**Command-line Flag:**
+`--log.level <level>` (Note: some applications might use a different flag name like `--log_level`)
+_Example:_ `--log.level debug`
+_Default:_ `"info"`
+_Constant:_ `FlagLogLevel` (value: "evolve.log.level", but often overridden by global app flags)
+
+### Log Format
+
+**Description:**
+Sets the format for log output. Common formats include `text` (human-readable) and `json` (structured, machine-readable).
+
+**YAML:**
+
+```yaml
+log:
+ format: "text"
+```
+
+**Command-line Flag:**
+`--log.format <format>` (Note: some applications might use a different flag name like `--log_format`)
+_Example:_ `--log.format json`
+_Default:_ `"text"`
+_Constant:_ `FlagLogFormat` (value: "evolve.log.format", but often overridden by global app flags)
+
+### Log Trace (Stack Traces)
+
+**Description:**
+If true, enables the inclusion of stack traces in error logs. This can be very helpful for debugging issues by showing the call stack at the point of an error.
+
+**YAML:**
+
+```yaml
+log:
+ trace: false
+```
+
+**Command-line Flag:**
+`--log.trace` (boolean, presence enables it; Note: some applications might use a different flag name like `--log_trace`)
+_Example:_ `--log.trace`
+_Default:_ `false`
+_Constant:_ `FlagLogTrace` (value: "evolve.log.trace", but often overridden by global app flags)
+
+## Signer Configuration (`signer`)
+
+Settings related to the signing mechanism used by the node, particularly for aggregators that need to sign blocks.
+
+**YAML Section:**
+
+```yaml
+signer:
+ # ... signer configurations ...
+```
+
+### Signer Type
+
+**Description:**
+Specifies the type of remote signer to use. Common options might include `file` (for key files) or `grpc` (for connecting to a remote signing service).
+
+**YAML:**
+
+```yaml
+signer:
+ signer_type: "file"
+```
+
+**Command-line Flag:**
+`--rollkit.signer.signer_type <type>`
+_Example:_ `--rollkit.signer.signer_type grpc`
+_Default:_ (Depends on application, often "file" or none if not an aggregator)
+_Constant:_ `FlagSignerType`
+
+### Signer Path
+
+**Description:**
+The path to the signer file (if `signer_type` is `file`) or the address of the remote signer service (if `signer_type` is `grpc` or similar).
+
+**YAML:**
+
+```yaml
+signer:
+ signer_path: "/path/to/priv_validator_key.json" # For file signer
+ # signer_path: "localhost:9000" # For gRPC signer
+```
+
+**Command-line Flag:**
+`--rollkit.signer.signer_path <path>`
+_Example:_ `--rollkit.signer.signer_path ./config`
+_Default:_ (Depends on application)
+_Constant:_ `FlagSignerPath`
+
+### Signer Passphrase
+
+**Description:**
+The passphrase required to decrypt or access the signer key, particularly if using a `file` signer and the key is encrypted, or if the aggregator mode is enabled and requires it. This flag is not directly a field in the `SignerConfig` struct but is used in conjunction with it.
+
+**YAML:**
+This is typically not stored in the YAML file for security reasons but provided via flag or environment variable.
+
+**Command-line Flag:**
+`--rollkit.signer.passphrase <passphrase>`
+_Example:_ `--rollkit.signer.passphrase "mysecretpassphrase"`
+_Default:_ `""` (empty)
+_Constant:_ `FlagSignerPassphrase`
+_Note:_ Be cautious with providing passphrases directly on the command line in shared environments due to history logging. Environment variables or secure input methods are often preferred.
+
+---
+
+This reference should help you configure your Evolve node effectively. Always refer to the specific version of Evolve you are using, as options and defaults may change over time.
diff --git a/docs/reference/configuration/ev-reth-chainspec.md b/docs/reference/configuration/ev-reth-chainspec.md
new file mode 100644
index 000000000..9a6585e07
--- /dev/null
+++ b/docs/reference/configuration/ev-reth-chainspec.md
@@ -0,0 +1,160 @@
+# ev-reth Chainspec Reference
+
+Complete reference for ev-reth chainspec (genesis.json) configuration.
+
+## Structure
+
+```json
+{
+ "config": { },
+ "alloc": { },
+ "coinbase": "0x...",
+ "difficulty": "0x0",
+ "gasLimit": "0x...",
+ "nonce": "0x0",
+ "timestamp": "0x0"
+}
+```
+
+## config
+
+Chain configuration parameters.
+
+### Standard Ethereum Fields
+
+| Field | Type | Description |
+|-----------------------|--------|-----------------------------------|
+| `chainId` | number | Unique chain identifier |
+| `homesteadBlock` | number | Homestead fork block (use 0) |
+| `eip150Block` | number | EIP-150 fork block (use 0) |
+| `eip155Block` | number | EIP-155 fork block (use 0) |
+| `eip158Block` | number | EIP-158 fork block (use 0) |
+| `byzantiumBlock` | number | Byzantium fork block (use 0) |
+| `constantinopleBlock` | number | Constantinople fork block (use 0) |
+| `petersburgBlock` | number | Petersburg fork block (use 0) |
+| `istanbulBlock` | number | Istanbul fork block (use 0) |
+| `berlinBlock` | number | Berlin fork block (use 0) |
+| `londonBlock` | number | London fork block (use 0) |
+| `shanghaiTime` | number | Shanghai fork timestamp (use 0) |
+| `cancunTime` | number | Cancun fork timestamp (use 0) |
+
+### config.evolve
+
+Evolve-specific extensions.
+
+| Field | Type | Description |
+|-----------------------------------|---------|------------------------------------|
+| `baseFeeSink` | address | Redirect base fees to this address |
+| `baseFeeRedirectActivationHeight` | number | Block height to activate redirect |
+| `deployAllowlist` | object | Contract deployment restrictions |
+| `contractSizeLimit` | number | Max contract bytecode size (bytes) |
+| `mintPrecompile` | object | Native token minting precompile |
+
+#### deployAllowlist
+
+```json
+{
+ "admin": "0x...",
+ "enabled": ["0x...", "0x..."]
+}
+```
+
+| Field | Type | Description |
+|-----------|-----------|-----------------------------|
+| `admin` | address | Can modify the allowlist |
+| `enabled` | address[] | Addresses allowed to deploy |
+
+#### mintPrecompile
+
+```json
+{
+ "admin": "0x...",
+ "address": "0x0000000000000000000000000000000000000100"
+}
+```
+
+| Field | Type | Description |
+|-----------|---------|--------------------|
+| `admin` | address | Can call mint() |
+| `address` | address | Precompile address |
+
+## alloc
+
+Pre-funded accounts and contract deployments.
+
+```json
+{
+ "alloc": {
+ "0xAddress1": {
+ "balance": "0x..."
+ },
+ "0xAddress2": {
+ "balance": "0x...",
+ "code": "0x...",
+ "storage": {
+ "0x0": "0x..."
+ }
+ }
+ }
+}
+```
+
+| Field | Type | Description |
+|-----------|------------|------------------------------|
+| `balance` | hex string | Wei balance |
+| `code` | hex string | Contract bytecode (optional) |
+| `storage` | object | Storage slots (optional) |
+| `nonce` | hex string | Account nonce (optional) |
+
+## Top-Level Fields
+
+| Field | Type | Description |
+|--------------|------------|--------------------------------|
+| `coinbase` | address | Default fee recipient |
+| `difficulty` | hex string | Initial difficulty (use "0x0") |
+| `gasLimit` | hex string | Block gas limit |
+| `nonce` | hex string | Genesis nonce (use "0x0") |
+| `timestamp` | hex string | Genesis timestamp |
+| `extraData` | hex string | Extra data (optional) |
+| `mixHash` | hex string | Mix hash (optional) |
+
+## Example
+
+```json
+{
+ "config": {
+ "chainId": 1337,
+ "homesteadBlock": 0,
+ "eip150Block": 0,
+ "eip155Block": 0,
+ "eip158Block": 0,
+ "byzantiumBlock": 0,
+ "constantinopleBlock": 0,
+ "petersburgBlock": 0,
+ "istanbulBlock": 0,
+ "berlinBlock": 0,
+ "londonBlock": 0,
+ "shanghaiTime": 0,
+ "cancunTime": 0,
+ "evolve": {
+ "baseFeeSink": "0x1234567890123456789012345678901234567890",
+ "baseFeeRedirectActivationHeight": 0,
+ "contractSizeLimit": 49152,
+ "mintPrecompile": {
+ "admin": "0xBridgeContract",
+ "address": "0x0000000000000000000000000000000000000100"
+ }
+ }
+ },
+ "alloc": {
+ "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266": {
+ "balance": "0x200000000000000000000000000000000000000000000000000000000000000"
+ }
+ },
+ "coinbase": "0x0000000000000000000000000000000000000000",
+ "difficulty": "0x0",
+ "gasLimit": "0x1c9c380",
+ "nonce": "0x0",
+ "timestamp": "0x0"
+}
+```
diff --git a/docs/reference/interfaces/da.md b/docs/reference/interfaces/da.md
new file mode 100644
index 000000000..8cf99a8de
--- /dev/null
+++ b/docs/reference/interfaces/da.md
@@ -0,0 +1,193 @@
+# DA Interface
+
+The DA (Data Availability) interface defines how ev-node submits and retrieves data from the DA layer.
+
+## Client Interface
+
+```go
+type Client interface {
+ Submit(ctx context.Context, data [][]byte, gasPrice float64, namespace []byte, options []byte) ResultSubmit
+ Retrieve(ctx context.Context, height uint64, namespace []byte) ResultRetrieve
+ Get(ctx context.Context, ids []ID, namespace []byte) ([]Blob, error)
+ GetHeaderNamespace() []byte
+ GetDataNamespace() []byte
+ GetForcedInclusionNamespace() []byte
+ HasForcedInclusionNamespace() bool
+}
+```
+
+## Methods
+
+### Submit
+
+Submits blobs to the DA layer.
+
+```go
+Submit(ctx context.Context, data [][]byte, gasPrice float64, namespace []byte, options []byte) ResultSubmit
+```
+
+**Parameters:**
+
+- `data` - Blobs to submit
+- `gasPrice` - DA layer gas price
+- `namespace` - Target namespace
+- `options` - DA-specific options (JSON encoded)
+
+**Returns:**
+
+```go
+type ResultSubmit struct {
+ BaseResult
+}
+```
+
+### Retrieve
+
+Retrieves all blobs at a DA height and namespace.
+
+```go
+Retrieve(ctx context.Context, height uint64, namespace []byte) ResultRetrieve
+```
+
+**Returns:**
+
+```go
+type ResultRetrieve struct {
+ BaseResult
+ Data [][]byte // Retrieved blobs
+}
+```
+
+### Get
+
+Retrieves specific blobs by their IDs.
+
+```go
+Get(ctx context.Context, ids []ID, namespace []byte) ([]Blob, error)
+```
+
+### Namespace Accessors
+
+```go
+GetHeaderNamespace() []byte // Namespace for block headers
+GetDataNamespace() []byte // Namespace for block data
+GetForcedInclusionNamespace() []byte // Namespace for forced inclusion txs
+HasForcedInclusionNamespace() bool // Whether forced inclusion is enabled
+```
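+
+A hedged usage sketch of the client: submit blobs to the data namespace, then read them back at the height reported in the result. Error handling is simplified, and `-1` requests automatic gas pricing:
+
+```go
+import (
+    "context"
+    "fmt"
+)
+
+// roundTrip submits blobs and retrieves them back; illustrative only.
+func roundTrip(ctx context.Context, c Client, blobs [][]byte) ([][]byte, error) {
+    res := c.Submit(ctx, blobs, -1, c.GetDataNamespace(), nil)
+    if res.Code != StatusSuccess {
+        return nil, fmt.Errorf("submit failed: %s", res.Message)
+    }
+    // The result reports the DA height at which the blobs were included.
+    ret := c.Retrieve(ctx, res.Height, c.GetDataNamespace())
+    if ret.Code != StatusSuccess {
+        return nil, fmt.Errorf("retrieve failed: %s", ret.Message)
+    }
+    return ret.Data, nil
+}
+```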
+
+## Verifier Interface
+
+For sequencers that need to verify batch inclusion:
+
+```go
+type Verifier interface {
+ GetProofs(ctx context.Context, ids []ID, namespace []byte) ([]Proof, error)
+ Validate(ctx context.Context, ids []ID, proofs []Proof, namespace []byte) ([]bool, error)
+}
+```
+
+## FullClient Interface
+
+Combines Client and Verifier:
+
+```go
+type FullClient interface {
+ Client
+ Verifier
+}
+```
+
+## Types
+
+### Core Types
+
+```go
+type Blob = []byte // Raw data
+type ID = []byte // Blob identifier (height + commitment)
+type Commitment = []byte // Cryptographic commitment
+type Proof = []byte // Inclusion proof
+```
+
+### BaseResult
+
+Common fields for DA operations:
+
+```go
+type BaseResult struct {
+ Code StatusCode
+ Message string
+ Height uint64
+ SubmittedCount uint64
+ BlobSize uint64
+ IDs [][]byte
+ Timestamp time.Time
+}
+```
+
+### Status Codes
+
+```go
+const (
+ StatusUnknown StatusCode = iota
+ StatusSuccess
+ StatusNotFound
+ StatusNotIncludedInBlock
+ StatusAlreadyInMempool
+ StatusTooBig
+ StatusContextDeadline
+ StatusError
+ StatusIncorrectAccountSequence
+ StatusContextCanceled
+ StatusHeightFromFuture
+)
+```
+
+## ID Format
+
+IDs encode both height and commitment:
+
+```go
+// ID = height (8 bytes, little-endian) + commitment
+func SplitID(id []byte) (height uint64, commitment []byte, err error)
+```
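+
+A possible implementation consistent with this layout (a sketch; the library's actual function may differ):
+
+```go
+import (
+    "encoding/binary"
+    "fmt"
+)
+
+// SplitID decodes the little-endian height prefix and returns the commitment.
+func SplitID(id []byte) (height uint64, commitment []byte, err error) {
+    if len(id) < 8 {
+        return 0, nil, fmt.Errorf("id too short: %d bytes", len(id))
+    }
+    return binary.LittleEndian.Uint64(id[:8]), id[8:], nil
+}
+```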
+
+## Namespaces
+
+DA uses 29-byte namespaces (Celestia format):
+
+- 1 byte version
+- 28 bytes identifier
+
+Three namespaces are used:
+
+| Namespace | Purpose |
+|------------------|-----------------------------------------|
+| Header | Block headers |
+| Data | Transaction data |
+| Forced Inclusion | User-submitted censorship-resistant txs |
+
+## Implementations
+
+| Implementation | Package | Description |
+|----------------|-------------------|---------------------|
+| Celestia | `pkg/da/celestia` | Production DA layer |
+| Local DA | `pkg/da/local` | Development/testing |
+
+## Configuration
+
+```bash
+# Celestia
+--evnode.da.address http://localhost:26658
+--evnode.da.auth_token <token>
+--evnode.da.namespace <namespace>
+--evnode.da.gas_price 0.01
+
+# Local DA
+--evnode.da.address http://localhost:7980
+```
+
+## See Also
+
+- [Data Availability Concepts](/concepts/data-availability)
+- [Celestia Guide](/guides/da-layers/celestia)
+- [Local DA Guide](/guides/da-layers/local-da)
diff --git a/docs/reference/interfaces/executor.md b/docs/reference/interfaces/executor.md
new file mode 100644
index 000000000..5cb0e9f8d
--- /dev/null
+++ b/docs/reference/interfaces/executor.md
@@ -0,0 +1,185 @@
+# Executor Interface
+
+The Executor interface defines how ev-node communicates with execution layers. Implement this interface to run custom execution environments on Evolve.
+
+## Interface Definition
+
+```go
+type Executor interface {
+ InitChain(ctx context.Context, genesisTime time.Time, initialHeight uint64, chainID string) (stateRoot []byte, err error)
+ GetTxs(ctx context.Context) ([][]byte, error)
+ ExecuteTxs(ctx context.Context, txs [][]byte, blockHeight uint64, timestamp time.Time, prevStateRoot []byte) (updatedStateRoot []byte, err error)
+ SetFinal(ctx context.Context, blockHeight uint64) error
+ GetExecutionInfo(ctx context.Context) (ExecutionInfo, error)
+ FilterTxs(ctx context.Context, txs [][]byte, maxBytes, maxGas uint64, hasForceIncludedTransaction bool) ([]FilterStatus, error)
+}
+```
+
+## Methods
+
+### InitChain
+
+Initializes the blockchain with genesis parameters.
+
+```go
+InitChain(ctx context.Context, genesisTime time.Time, initialHeight uint64, chainID string) (stateRoot []byte, err error)
+```
+
+**Parameters:**
+
+- `genesisTime` - Chain start timestamp (UTC)
+- `initialHeight` - First block height (must be > 0)
+- `chainID` - Unique chain identifier
+
+**Returns:**
+
+- `stateRoot` - Hash representing initial state
+
+**Requirements:**
+
+- Must be idempotent (repeated calls return same result)
+- Must validate genesis parameters
+- Must generate deterministic initial state root
+
+### GetTxs
+
+Fetches transactions from the execution layer's mempool.
+
+```go
+GetTxs(ctx context.Context) ([][]byte, error)
+```
+
+**Returns:**
+
+- Slice of valid transactions
+
+**Requirements:**
+
+- Return only currently valid transactions
+- Do not remove transactions from mempool
+- May remove invalid transactions
+
+### ExecuteTxs
+
+Processes transactions to produce a new block state.
+
+```go
+ExecuteTxs(ctx context.Context, txs [][]byte, blockHeight uint64, timestamp time.Time, prevStateRoot []byte) (updatedStateRoot []byte, err error)
+```
+
+**Parameters:**
+
+- `txs` - Ordered list of transactions
+- `blockHeight` - Height of block being created
+- `timestamp` - Block timestamp (UTC)
+- `prevStateRoot` - Previous block's state root
+
+**Returns:**
+
+- `updatedStateRoot` - New state root after execution
+
+**Requirements:**
+
+- Must be deterministic
+- Must handle empty transaction lists
+- Must handle malformed transactions gracefully
+- Must validate against previous state root
+
+### SetFinal
+
+Marks a block as finalized.
+
+```go
+SetFinal(ctx context.Context, blockHeight uint64) error
+```
+
+**Parameters:**
+
+- `blockHeight` - Height to finalize
+
+**Requirements:**
+
+- Must be idempotent
+- Must verify block exists
+- Finalized blocks cannot be reverted
+
+### GetExecutionInfo
+
+Returns current execution layer parameters.
+
+```go
+GetExecutionInfo(ctx context.Context) (ExecutionInfo, error)
+```
+
+**Returns:**
+
+```go
+type ExecutionInfo struct {
+ MaxGas uint64 // Maximum gas per block (0 = no gas-based limiting)
+}
+```
+
+### FilterTxs
+
+Validates and filters transactions for block inclusion.
+
+```go
+FilterTxs(ctx context.Context, txs [][]byte, maxBytes, maxGas uint64, hasForceIncludedTransaction bool) ([]FilterStatus, error)
+```
+
+**Parameters:**
+
+- `txs` - All transactions (force-included + mempool)
+- `maxBytes` - Maximum cumulative size (0 = no limit)
+- `maxGas` - Maximum cumulative gas (0 = no limit)
+- `hasForceIncludedTransaction` - Whether force-included txs are present
+
+**Returns:**
+
+```go
+type FilterStatus int
+
+const (
+ FilterOK FilterStatus = iota // Include in batch
+ FilterRemove // Invalid, remove
+ FilterPostpone // Valid but exceeds limits, postpone
+)
+```
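+
+To make the contract concrete, here is a minimal toy Executor whose state root is a running hash over executed transactions. It is a hedged sketch (not one of the real implementations listed below) that reuses the `ExecutionInfo` and `FilterStatus` types shown above and satisfies the determinism and idempotency requirements:
+
+```go
+import (
+    "context"
+    "crypto/sha256"
+    "encoding/binary"
+    "fmt"
+    "time"
+)
+
+// toyExecutor derives each state root by hashing the previous root with
+// the executed transactions. Illustrative only.
+type toyExecutor struct {
+    finalized uint64
+}
+
+func (e *toyExecutor) InitChain(ctx context.Context, genesisTime time.Time, initialHeight uint64, chainID string) ([]byte, error) {
+    if initialHeight == 0 {
+        return nil, fmt.Errorf("initial height must be > 0")
+    }
+    // Deterministic genesis root derived only from genesis parameters,
+    // so repeated calls return the same result (idempotency).
+    h := sha256.New()
+    h.Write([]byte(chainID))
+    var ts [8]byte
+    binary.BigEndian.PutUint64(ts[:], uint64(genesisTime.UnixNano()))
+    h.Write(ts[:])
+    return h.Sum(nil), nil
+}
+
+func (e *toyExecutor) GetTxs(ctx context.Context) ([][]byte, error) {
+    return nil, nil // no mempool in this toy
+}
+
+func (e *toyExecutor) ExecuteTxs(ctx context.Context, txs [][]byte, blockHeight uint64, timestamp time.Time, prevStateRoot []byte) ([]byte, error) {
+    h := sha256.New()
+    h.Write(prevStateRoot) // chain the new root to the previous state
+    for _, tx := range txs {
+        h.Write(tx)
+    }
+    return h.Sum(nil), nil // deterministic; empty tx lists are fine
+}
+
+func (e *toyExecutor) SetFinal(ctx context.Context, blockHeight uint64) error {
+    if blockHeight > e.finalized {
+        e.finalized = blockHeight
+    }
+    return nil // idempotent: re-finalizing is a no-op
+}
+
+func (e *toyExecutor) GetExecutionInfo(ctx context.Context) (ExecutionInfo, error) {
+    return ExecutionInfo{MaxGas: 0}, nil // 0 = no gas-based limiting
+}
+
+func (e *toyExecutor) FilterTxs(ctx context.Context, txs [][]byte, maxBytes, maxGas uint64, hasForceIncludedTransaction bool) ([]FilterStatus, error) {
+    statuses := make([]FilterStatus, len(txs))
+    var total uint64
+    for i, tx := range txs {
+        if maxBytes > 0 && total+uint64(len(tx)) > maxBytes {
+            statuses[i] = FilterPostpone // valid but over the byte budget
+            continue
+        }
+        total += uint64(len(tx))
+        statuses[i] = FilterOK
+    }
+    return statuses, nil
+}
+```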
+
+## Optional Interfaces
+
+### HeightProvider
+
+Enables height synchronization checks between ev-node and the execution layer.
+
+```go
+type HeightProvider interface {
+ GetLatestHeight(ctx context.Context) (uint64, error)
+}
+```
+
+Useful for detecting desynchronization after crashes or restarts.
+
+### Rollbackable
+
+Enables automatic rollback when execution layer is ahead of consensus.
+
+```go
+type Rollbackable interface {
+ Rollback(ctx context.Context, targetHeight uint64) error
+}
+```
+
+Only implement if your execution layer supports in-flight rollback.
+
+## Implementations
+
+| Implementation | Package | Description |
+|----------------|---------|-------------|
+| ev-reth | `execution/evm` | EVM execution via Engine API |
+| ev-abci | `execution/abci` | Cosmos SDK via ABCI |
+| testapp | `apps/testapp` | Simple key-value store |
+
+## Implementation Guide
+
+See [Implement Custom Executor](/getting-started/custom/implement-executor) for a step-by-step guide.
diff --git a/docs/reference/interfaces/sequencer.md b/docs/reference/interfaces/sequencer.md
new file mode 100644
index 000000000..ead2eb947
--- /dev/null
+++ b/docs/reference/interfaces/sequencer.md
@@ -0,0 +1,159 @@
+# Sequencer Interface
+
+The Sequencer interface defines how ev-node orders transactions for block production. Two implementations are provided: single sequencer and based sequencer.
+
+## Interface Definition
+
+```go
+type Sequencer interface {
+ SubmitBatchTxs(ctx context.Context, req SubmitBatchTxsRequest) (*SubmitBatchTxsResponse, error)
+ GetNextBatch(ctx context.Context, req GetNextBatchRequest) (*GetNextBatchResponse, error)
+ VerifyBatch(ctx context.Context, req VerifyBatchRequest) (*VerifyBatchResponse, error)
+ SetDAHeight(height uint64)
+ GetDAHeight() uint64
+}
+```
+
+## Methods
+
+### SubmitBatchTxs
+
+Submits a batch of transactions from the executor to the sequencer.
+
+```go
+SubmitBatchTxs(ctx context.Context, req SubmitBatchTxsRequest) (*SubmitBatchTxsResponse, error)
+```
+
+**Request:**
+
+```go
+type SubmitBatchTxsRequest struct {
+ Id []byte // Chain identifier
+ Batch *Batch // Transactions to submit
+}
+
+type Batch struct {
+ Transactions [][]byte
+}
+```
+
+### GetNextBatch
+
+Returns the next batch of transactions for block production.
+
+```go
+GetNextBatch(ctx context.Context, req GetNextBatchRequest) (*GetNextBatchResponse, error)
+```
+
+**Request:**
+
+```go
+type GetNextBatchRequest struct {
+ Id []byte // Chain identifier
+ LastBatchData [][]byte // Previous batch data
+ MaxBytes uint64 // Maximum batch size
+}
+```
+
+**Response:**
+
+```go
+type GetNextBatchResponse struct {
+ Batch *Batch // Transactions to include
+ Timestamp time.Time // Block timestamp
+ BatchData [][]byte // Data for verification
+}
+```
+
+### VerifyBatch
+
+Verifies a batch received from another node during sync.
+
+```go
+VerifyBatch(ctx context.Context, req VerifyBatchRequest) (*VerifyBatchResponse, error)
+```
+
+**Request:**
+
+```go
+type VerifyBatchRequest struct {
+ Id []byte // Chain identifier
+ BatchData [][]byte // Batch data to verify
+}
+```
+
+**Response:**
+
+```go
+type VerifyBatchResponse struct {
+ Status bool // true if valid
+}
+```
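+
+A hedged sketch of how a block producer might drive these methods together, using the request and response types defined above (the function and variable names are illustrative):
+
+```go
+import "context"
+
+// produceNext submits collected txs, then pulls the next batch for block
+// production. Error handling is condensed for brevity.
+func produceNext(ctx context.Context, seq Sequencer, chainID []byte,
+    txs [][]byte, lastBatch [][]byte, maxBytes uint64) (*GetNextBatchResponse, error) {
+    if _, err := seq.SubmitBatchTxs(ctx, SubmitBatchTxsRequest{
+        Id:    chainID,
+        Batch: &Batch{Transactions: txs},
+    }); err != nil {
+        return nil, err
+    }
+    return seq.GetNextBatch(ctx, GetNextBatchRequest{
+        Id:            chainID,
+        LastBatchData: lastBatch,
+        MaxBytes:      maxBytes,
+    })
+}
+```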
+
+### SetDAHeight / GetDAHeight
+
+Track the current DA height for forced inclusion retrieval.
+
+```go
+SetDAHeight(height uint64)
+GetDAHeight() uint64
+```
+
+## Batch Type
+
+```go
+type Batch struct {
+ Transactions [][]byte
+}
+
+// Hash returns SHA256 hash of the batch
+func (batch *Batch) Hash() ([]byte, error)
+```
+
+The hash is computed deterministically:
+
+1. Write transaction count as uint64 (big-endian)
+2. For each transaction: write length as uint64, then bytes
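+
+A sketch of that computation, assumed to mirror the description above (not the library source):
+
+```go
+import (
+    "crypto/sha256"
+    "encoding/binary"
+)
+
+// hashBatch writes the tx count, then each tx as length-prefixed bytes,
+// with all lengths encoded as big-endian uint64.
+func hashBatch(txs [][]byte) []byte {
+    h := sha256.New()
+    var buf [8]byte
+    binary.BigEndian.PutUint64(buf[:], uint64(len(txs)))
+    h.Write(buf[:])
+    for _, tx := range txs {
+        binary.BigEndian.PutUint64(buf[:], uint64(len(tx)))
+        h.Write(buf[:])
+        h.Write(tx)
+    }
+    return h.Sum(nil)
+}
+```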
+
+## Implementations
+
+### Single Sequencer
+
+Located in `pkg/sequencers/single/`.
+
+- Maintains local mempool
+- Supports forced inclusion from DA
+- Default for most deployments
+
+### Based Sequencer
+
+Located in `pkg/sequencers/based/`.
+
+- No local mempool
+- All transactions come from DA layer
+- Maximum censorship resistance
+
+## Configuration
+
+Select sequencer mode via configuration:
+
+```yaml
+# Single sequencer (default)
+sequencer:
+ type: single
+
+# Based sequencer
+sequencer:
+ type: based
+```
+
+## Forced Inclusion
+
+Both sequencer implementations support forced inclusion, but with different behaviors:
+
+| Sequencer | Forced Inclusion Source | Mempool |
+|-----------|------------------------|---------|
+| Single | DA namespace + local mempool | Yes |
+| Based | DA namespace only | No |
+
+The sequencer tracks DA height via `SetDAHeight()` to know which forced inclusion transactions to include.
diff --git a/docs/reference/specs/block-manager.md b/docs/reference/specs/block-manager.md
new file mode 100644
index 000000000..c97171f90
--- /dev/null
+++ b/docs/reference/specs/block-manager.md
@@ -0,0 +1,759 @@
+# Block Components
+
+## Abstract
+
+The block package provides a modular component-based architecture for handling block-related operations in full nodes. Instead of a single monolithic manager, the system is divided into specialized components that work together, each responsible for specific aspects of block processing. This architecture enables better separation of concerns, easier testing, and more flexible node configurations.
+
+The main components are:
+
+- **Executor**: Handles block production and state transitions (aggregator nodes only)
+- **Reaper**: Periodically retrieves transactions and submits them to the sequencer (aggregator nodes only)
+- **Submitter**: Manages submission of headers and data to the DA network (aggregator nodes only)
+- **Syncer**: Handles synchronization from both DA and P2P sources (all full nodes)
+- **Cache Manager**: Coordinates caching and tracking of blocks across all components
+
+A full node coordinates these components based on its role:
+
+- **Aggregator nodes**: Use all components for block production, submission, and synchronization
+- **Non-aggregator full nodes**: Use only Syncer and Cache for block synchronization
+
+```mermaid
+sequenceDiagram
+ title Overview of Block Manager
+
+ participant User
+ participant Sequencer
+ participant Full Node 1
+ participant Full Node 2
+ participant DA Layer
+
+ User->>Sequencer: Send Tx
+ Sequencer->>Sequencer: Generate Block
+ Sequencer->>DA Layer: Publish Block
+
+ Sequencer->>Full Node 1: Gossip Block
+ Sequencer->>Full Node 2: Gossip Block
+ Full Node 1->>Full Node 1: Verify Block
+ Full Node 1->>Full Node 2: Gossip Block
+ Full Node 1->>Full Node 1: Mark Block Soft Confirmed
+
+ Full Node 2->>Full Node 2: Verify Block
+ Full Node 2->>Full Node 2: Mark Block Soft Confirmed
+
+ DA Layer->>Full Node 1: Retrieve Block
+ Full Node 1->>Full Node 1: Mark Block DA Included
+
+ DA Layer->>Full Node 2: Retrieve Block
+ Full Node 2->>Full Node 2: Mark Block DA Included
+```
+
+### Component Architecture Overview
+
+```mermaid
+flowchart TB
+ subgraph Block Components [Modular Block Components]
+        EXE[Executor<br/>Block Production]
+        REA[Reaper<br/>Tx Collection]
+        SUB[Submitter<br/>DA Submission]
+        SYN[Syncer<br/>Block Sync]
+        CAC[Cache Manager<br/>State Tracking]
+ end
+
+ subgraph External Components
+ CEXE[Core Executor]
+ SEQ[Sequencer]
+ DA[DA Layer]
+ HS[Header Store/P2P]
+ DS[Data Store/P2P]
+ ST[Local Store]
+ end
+
+ REA -->|GetTxs| CEXE
+ REA -->|SubmitBatch| SEQ
+ REA -->|Notify| EXE
+
+ EXE -->|CreateBlock| CEXE
+ EXE -->|ApplyBlock| CEXE
+ EXE -->|Save| ST
+ EXE -->|Track| CAC
+
+ EXE -->|Headers| SUB
+ EXE -->|Data| SUB
+ SUB -->|Submit| DA
+ SUB -->|Track| CAC
+
+ DA -->|Retrieve| SYN
+ HS -->|Headers| SYN
+ DS -->|Data| SYN
+
+ SYN -->|ApplyBlock| CEXE
+ SYN -->|Save| ST
+ SYN -->|Track| CAC
+ SYN -->|SetFinal| CEXE
+
+ CAC -->|Coordinate| EXE
+ CAC -->|Coordinate| SUB
+ CAC -->|Coordinate| SYN
+```
+
+## Protocol/Component Description
+
+The block components are initialized based on the node type:
+
+### Aggregator Components
+
+Aggregator nodes create all components for full block production and synchronization capabilities:
+
+```go
+components := block.NewAggregatorComponents(
+ config, // Node configuration
+ genesis, // Genesis state
+ store, // Local datastore
+ executor, // Core executor for state transitions
+ sequencer, // Sequencer client
+ da, // DA client
+ signer, // Block signing key
+ // P2P stores and options...
+)
+```
+
+### Non-Aggregator Components
+
+Non-aggregator full nodes create only synchronization components:
+
+```go
+components := block.NewSyncComponents(
+ config, // Node configuration
+ genesis, // Genesis state
+ store, // Local datastore
+ executor, // Core executor for state transitions
+ da, // DA client
+ // P2P stores and options... (no signer or sequencer needed)
+)
+```
+
+### Component Initialization Parameters
+
+| **Name** | **Type** | **Description** |
+| --------------------------- | ----------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| signing key | crypto.PrivKey | used for signing blocks and data after creation |
+| config | config.BlockManagerConfig | block manager configurations (see config options below) |
+| genesis | \*cmtypes.GenesisDoc | initialize the block manager with genesis state (genesis configuration defined in `config/genesis.json` file under the app directory) |
+| store | store.Store | local datastore for storing chain blocks and states (default local store path is `$db_dir/evolve` and `db_dir` specified in the `config.yaml` file under the app directory) |
+| mempool, proxyapp, eventbus | mempool.Mempool, proxy.AppConnConsensus, \*cmtypes.EventBus | for initializing the executor (state transition function). mempool is also used in the manager to check for availability of transactions for lazy block production |
+| dalc | da.DAClient | the data availability light client used to submit and retrieve blocks to DA network |
+| headerStore                 | \*goheaderstore.Store[\*types.SignedHeader]                  | to store and retrieve block headers gossiped over the P2P network |
+| dataStore                   | \*goheaderstore.Store[\*types.SignedData]                    | to store and retrieve block data gossiped over the P2P network |
+| signaturePayloadProvider | types.SignaturePayloadProvider | optional custom provider for header signature payloads |
+| sequencer | core.Sequencer | used to retrieve batches of transactions from the sequencing layer |
+| reaper | \*Reaper | component that periodically retrieves transactions from the executor and submits them to the sequencer |
+
+### Configuration Options
+
+The block components share a common configuration:
+
+| Name | Type | Description |
+| ------------------------ | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
+| BlockTime | time.Duration | time interval used for block production and block retrieval from block store ([`defaultBlockTime`][defaultBlockTime]) |
+| DABlockTime | time.Duration | time interval used for both block publication to DA network and block retrieval from DA network ([`defaultDABlockTime`][defaultDABlockTime]) |
+| DAStartHeight | uint64 | block retrieval from DA network starts from this height |
+| LazyBlockInterval | time.Duration | time interval used for block production in lazy aggregator mode even when there are no transactions ([`defaultLazyBlockTime`][defaultLazyBlockTime]) |
+| LazyMode | bool | when set to true, enables lazy aggregation mode which produces blocks only when transactions are available or at LazyBlockInterval intervals |
+| MaxPendingHeadersAndData | uint64 | maximum number of pending headers and data blocks before pausing block production (default: 100) |
+| MaxSubmitAttempts | int | maximum number of retry attempts for DA submissions (default: 30) |
+| MempoolTTL | int | number of blocks to wait when transaction is stuck in DA mempool (default: 25) |
+| GasPrice | float64 | gas price for DA submissions (-1 for automatic/default) |
+| GasMultiplier | float64 | multiplier for gas price on DA submission retries (default: 1.3) |
+| Namespace | da.Namespace | DA namespace ID for block submissions (deprecated, use HeaderNamespace and DataNamespace instead) |
+| HeaderNamespace | string | namespace ID for submitting headers to DA layer (automatically encoded by the node) |
+| DataNamespace | string | namespace ID for submitting data to DA layer (automatically encoded by the node) |
+| RequestTimeout           | time.Duration | per-request timeout for DA `GetIDs`/`Get` calls; higher values tolerate slow DA nodes, lower values fail faster (default: 30s) |
+
+### Block Production (Executor Component)
+
+When the full node is operating as an aggregator, the **Executor component** handles block production. There are two modes of block production, which can be specified in the block manager configurations: `normal` and `lazy`.
+
+In `normal` mode, the block manager runs a timer, which is set to the `BlockTime` configuration parameter, and continuously produces blocks at `BlockTime` intervals.
+
+In `lazy` mode, the block manager implements a dual timer mechanism:
+
+```mermaid
+flowchart LR
+ subgraph Lazy Aggregation Mode
+ R[Reaper] -->|GetTxs| CE[Core Executor]
+ CE -->|Txs Available| R
+ R -->|Submit to Sequencer| S[Sequencer]
+ R -->|NotifyNewTransactions| N[txNotifyCh]
+
+ N --> E{Executor Logic}
+ BT[blockTimer] --> E
+ LT[lazyTimer] --> E
+
+ E -->|Txs Available| P1[Produce Block with Txs]
+ E -->|No Txs & LazyTimer| P2[Produce Empty Block]
+
+ P1 --> B[Block Creation]
+ P2 --> B
+ end
+```
+
+1. A `blockTimer` that triggers block production at regular intervals when transactions are available
+2. A `lazyTimer` that ensures blocks are produced at `LazyBlockInterval` intervals even during periods of inactivity
+
+The block manager starts building a block when any transaction becomes available in the mempool via a notification channel (`txNotifyCh`). When the `Reaper` detects new transactions, it calls `Manager.NotifyNewTransactions()`, which performs a non-blocking signal on this channel. The block manager also produces empty blocks at regular intervals to maintain consistency with the DA layer, ensuring a 1:1 mapping between DA layer blocks and execution layer blocks.
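+
+A condensed sketch of this dual-timer loop is shown below. The helpers `hasPendingTxs` and `produceBlock` are hypothetical stand-ins for the real executor logic:
+
+```go
+import (
+    "context"
+    "time"
+)
+
+// lazyLoop sketches lazy-mode production: transaction notifications wake the
+// loop, the block timer paces non-empty blocks, and the lazy timer guarantees
+// an empty block after periods of inactivity.
+func lazyLoop(ctx context.Context, blockTime, lazyInterval time.Duration,
+    txNotifyCh <-chan struct{}, hasPendingTxs func() bool, produceBlock func(empty bool)) {
+    blockTimer := time.NewTimer(blockTime)
+    lazyTimer := time.NewTimer(lazyInterval)
+    defer blockTimer.Stop()
+    defer lazyTimer.Stop()
+
+    for {
+        select {
+        case <-txNotifyCh:
+            // Reaper signalled new transactions; they are picked up on the
+            // next block timer tick.
+        case <-blockTimer.C:
+            if hasPendingTxs() {
+                produceBlock(false)
+                lazyTimer.Reset(lazyInterval) // a full block resets the lazy timer
+            }
+            blockTimer.Reset(blockTime)
+        case <-lazyTimer.C:
+            produceBlock(true) // empty block keeps the 1:1 mapping with DA blocks
+            lazyTimer.Reset(lazyInterval)
+        case <-ctx.Done():
+            return
+        }
+    }
+}
+```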
+
+The Reaper component periodically retrieves transactions from the core executor and submits them to the sequencer. It runs independently and notifies the Executor component when new transactions are available, enabling responsive block production in lazy mode.
+
+#### Building the Block
+
+The Executor component of aggregator nodes performs the following steps to produce a block:
+
+```mermaid
+flowchart TD
+ A[Timer Trigger / Transaction Notification] --> B[Retrieve Batch]
+ B --> C{Transactions Available?}
+ C -->|Yes| D[Create Block with Txs]
+ C -->|No| E[Create Empty Block]
+ D --> F[Generate Header & Data]
+ E --> F
+ F --> G[Sign Header → SignedHeader]
+ F --> H[Sign Data → SignedData]
+ G --> I[Apply Block]
+ H --> I
+ I --> J[Update State]
+ J --> K[Save to Store]
+ K --> L[Add to pendingHeaders]
+ K --> M[Add to pendingData]
+ L --> N[Broadcast Header to P2P]
+ M --> O[Broadcast Data to P2P]
+```
+
+- Retrieve a batch of transactions using `retrieveBatch()` which interfaces with the sequencer
+- Call `CreateBlock` using executor with the retrieved transactions
+- Create separate header and data structures from the block
+- Sign the header using `signing key` to generate `SignedHeader`
+- Sign the data using `signing key` to generate `SignedData` (if transactions exist)
+- Call `ApplyBlock` using executor to generate an updated state
+- Save the block, validators, and updated state to local store
+- Add the newly generated header to `pendingHeaders` queue
+- Add the newly generated data to `pendingData` queue (if not empty)
+- Publish the newly generated header and data to channels to notify other components of the sequencer node (such as block and header gossip)
+
+Note: When no transactions are available, the block manager creates blocks with empty data using a special `dataHashForEmptyTxs` marker. The header and data separation architecture allows headers and data to be submitted and retrieved independently from the DA layer.
+
+### Block Publication to DA Network (Submitter Component)
+
+The **Submitter component** of aggregator nodes implements separate submission loops for headers and data, both operating at `DABlockTime` intervals. Headers and data are submitted to different namespaces to improve scalability and allow for more flexible data availability strategies:
+
+```mermaid
+flowchart LR
+ subgraph Header Submission
+ H1[pendingHeaders Queue] --> H2[Header Submission Loop]
+ H2 --> H3[Marshal to Protobuf]
+ H3 --> H4[Submit to DA]
+ H4 -->|Success| H5[Remove from Queue]
+ H4 -->|Failure| H6[Keep in Queue & Retry]
+ end
+
+ subgraph Data Submission
+ D1[pendingData Queue] --> D2[Data Submission Loop]
+ D2 --> D3[Marshal to Protobuf]
+ D3 --> D4[Submit to DA]
+ D4 -->|Success| D5[Remove from Queue]
+ D4 -->|Failure| D6[Keep in Queue & Retry]
+ end
+
+ H2 -.->|DABlockTime| H2
+ D2 -.->|DABlockTime| D2
+```
+
+#### Header Submission Loop
+
+The `HeaderSubmissionLoop` manages the submission of signed headers to the DA network:
+
+- Retrieves pending headers from the `pendingHeaders` queue
+- Marshals headers to protobuf format
+- Submits to DA using the generic `submitToDA` helper with the configured `HeaderNamespace`
+- On success, removes submitted headers from the pending queue
+- On failure, headers remain in the queue for retry
+
+#### Data Submission Loop
+
+The `DataSubmissionLoop` manages the submission of signed data to the DA network:
+
+- Retrieves pending data from the `pendingData` queue
+- Marshals data to protobuf format
+- Submits to DA using the generic `submitToDA` helper with the configured `DataNamespace`
+- On success, removes submitted data from the pending queue
+- On failure, data remains in the queue for retry
+
+#### Generic Submission Logic
+
+Both loops use a shared `submitToDA` function that provides:
+
+- Namespace-specific submission based on header or data type
+- Retry logic with configurable maximum attempts via `MaxSubmitAttempts` configuration
+- Exponential backoff starting at `initialBackoff` (100ms), doubling each attempt, capped at `DABlockTime`
+- Gas price management with `GasMultiplier` applied on retries using a centralized `retryStrategy`
+- Recursive batch splitting for handling "too big" DA submissions that exceed blob size limits
+- Comprehensive error handling for different DA submission failure types (mempool issues, context cancellation, blob size limits)
+- Comprehensive metrics tracking for attempts, successes, and failures
+- Context-aware cancellation support
+
+#### Retry Strategy and Error Handling
+
+The DA submission system implements sophisticated retry logic using a centralized `retryStrategy` struct to handle various failure scenarios:
+
+```mermaid
+flowchart TD
+ A[Submit to DA] --> B{Submission Result}
+ B -->|Success| C[Reset Backoff & Adjust Gas Price Down]
+ B -->|Too Big| D{Batch Size > 1?}
+ B -->|Mempool/Not Included| E[Mempool Backoff Strategy]
+ B -->|Context Canceled| F[Stop Submission]
+ B -->|Other Error| G[Exponential Backoff]
+
+ D -->|Yes| H[Recursive Batch Splitting]
+ D -->|No| I[Skip Single Item - Cannot Split]
+
+ E --> J[Set Backoff = MempoolTTL * BlockTime]
+ E --> K[Multiply Gas Price by GasMultiplier]
+
+ G --> L[Double Backoff Time]
+ G --> M[Cap at MaxBackoff - BlockTime]
+
+ H --> N[Split into Two Halves]
+ N --> O[Submit First Half]
+ O --> P[Submit Second Half]
+ P --> Q{Both Halves Processed?}
+ Q -->|Yes| R[Combine Results]
+ Q -->|No| S[Handle Partial Success]
+
+ C --> T[Update Pending Queues]
+ T --> U[Post-Submit Actions]
+```
+
+##### Retry Strategy Features
+
+- **Centralized State Management**: The `retryStrategy` struct manages attempt counts, backoff timing, and gas price adjustments
+- **Multiple Backoff Types**:
+ - Exponential backoff for general failures (doubles each attempt, capped at `BlockTime`)
+ - Mempool-specific backoff (waits `MempoolTTL * BlockTime` for stuck transactions)
+ - Success-based backoff reset with gas price reduction
+- **Gas Price Management**:
+ - Increases gas price by `GasMultiplier` on mempool failures
+ - Decreases gas price after successful submissions (bounded by initial price)
+ - Supports automatic gas price detection (`-1` value)
+- **Intelligent Batch Splitting**:
+ - Recursively splits batches that exceed DA blob size limits
+ - Handles partial submissions within split batches
+ - Prevents infinite recursion with proper base cases
+- **Comprehensive Error Classification**:
+ - `StatusSuccess`: Full or partial successful submission
+ - `StatusTooBig`: Triggers batch splitting logic
+ - `StatusNotIncludedInBlock`/`StatusAlreadyInMempool`: Mempool-specific handling
+ - `StatusContextCanceled`: Graceful shutdown support
+ - Other errors: Standard exponential backoff
+
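+The gas and backoff adjustments can be summarized in a small sketch that mirrors the behavior above (the type and method names are illustrative, not the actual `retryStrategy`):
+
+```go
+import "time"
+
+// retrySketch tracks backoff and gas price across submission attempts.
+type retrySketch struct {
+    backoff  time.Duration
+    gasPrice float64
+}
+
+func (r *retrySketch) onFailure(maxBackoff time.Duration) {
+    // Generic failure: exponential backoff from 100ms, capped at maxBackoff.
+    if r.backoff == 0 {
+        r.backoff = 100 * time.Millisecond
+    } else {
+        r.backoff *= 2
+    }
+    if r.backoff > maxBackoff {
+        r.backoff = maxBackoff
+    }
+}
+
+func (r *retrySketch) onMempoolFailure(mempoolTTL int, blockTime time.Duration, gasMultiplier float64) {
+    // Stuck in the DA mempool: wait out the TTL and bid a higher gas price.
+    r.backoff = time.Duration(mempoolTTL) * blockTime
+    if r.gasPrice > 0 {
+        r.gasPrice *= gasMultiplier
+    }
+}
+
+func (r *retrySketch) onSuccess(initialGasPrice, gasMultiplier float64) {
+    // Reset backoff and let the gas price drift back down, bounded below
+    // by the initial price.
+    r.backoff = 0
+    if r.gasPrice > 0 {
+        r.gasPrice /= gasMultiplier
+        if r.gasPrice < initialGasPrice {
+            r.gasPrice = initialGasPrice
+        }
+    }
+}
+```
+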
+The manager enforces a limit on pending headers and data through `MaxPendingHeadersAndData` configuration. When this limit is reached, block production pauses to prevent unbounded growth of the pending queues.
+
+### Block Retrieval from DA Network (Syncer Component)
+
+The **Syncer component** implements a `RetrieveLoop` through its DARetriever that regularly pulls headers and data from the DA network. The retrieval process supports both legacy single-namespace mode (for backward compatibility) and the new separate namespace mode:
+
+```mermaid
+flowchart TD
+ A[Start RetrieveLoop] --> B[Get DA Height]
+ B --> C{DABlockTime Timer}
+ C --> D[GetHeightPair from DA]
+ D --> E{Result?}
+ E -->|Success| F[Validate Signatures]
+ E -->|NotFound| G[Increment Height]
+ E -->|Error| H[Retry Logic]
+
+ F --> I[Check Sequencer Info]
+ I --> J[Mark DA Included]
+ J --> K[Send to Sync]
+ K --> L[Increment Height]
+ L --> M[Immediate Next Retrieval]
+
+ G --> C
+ H --> N{Retries < 10?}
+ N -->|Yes| O[Wait 100ms]
+ N -->|No| P[Log Error & Stall]
+ O --> D
+ M --> D
+```
+
+#### Retrieval Process
+
+1. **Height Management**: Starts from the latest of:
+ - DA height from the last state in local store
+ - `DAStartHeight` configuration parameter
+ - Maintains and increments `daHeight` counter after successful retrievals
+
+2. **Retrieval Mechanism**:
+ - Executes at `DABlockTime` intervals
+ - Implements namespace migration support:
+ - First attempts legacy namespace retrieval if migration not completed
+ - Falls back to separate header and data namespace retrieval
+ - Tracks migration status to optimize future retrievals
+ - Retrieves from separate namespaces:
+ - Headers from `HeaderNamespace`
+ - Data from `DataNamespace`
+ - Combines results from both namespaces
+ - Handles three possible outcomes:
+ - `Success`: Process retrieved header and/or data
+ - `NotFound`: No chain block at this DA height (normal case)
+ - `Error`: Retry with backoff
+
+3. **Error Handling**:
+ - Implements retry logic with 100ms delay between attempts
+ - After 10 retries, logs error and stalls retrieval
+ - Does not increment `daHeight` on persistent errors
+
+4. **Processing Retrieved Blocks**:
+ - Validates header and data signatures
+ - Checks sequencer information
+ - Marks blocks as DA included in caches
+ - Sends to sync goroutine for state update
+ - Successful processing triggers immediate next retrieval without waiting for timer
+ - Updates namespace migration status when appropriate:
+ - Marks migration complete when data is found in new namespaces
+ - Persists migration state to avoid future legacy checks
+
+#### Header and Data Caching
+
+The retrieval system uses persistent caches for both headers and data:
+
+- Prevents duplicate processing
+- Tracks DA inclusion status
+- Supports out-of-order block arrival
+- Enables efficient sync from P2P and DA sources
+- Maintains namespace migration state for optimized retrieval
+
+For more details on DA integration, see the [Data Availability specification](./da.md).
+
+#### Out-of-Order Chain Blocks on DA
+
+Evolve should support blocks arriving out-of-order on DA.
+
+
+#### Termination Condition
+
+If the sequencer double-signs two blocks at the same height, evidence of the fault should be posted to DA. Evolve full nodes should process the longest valid chain up to the height of the fault evidence, and terminate.
+
+
+### Block Sync Service (Syncer Component)
+
+The **Syncer component** manages the synchronization of headers and data through its P2PHandler and coordination with the Cache Manager:
+
+#### Architecture
+
+- **Header Store**: Uses `goheader.Store[*types.SignedHeader]` for header management
+- **Data Store**: Uses `goheader.Store[*types.SignedData]` for data management
+- **Separation of Concerns**: Headers and data are handled independently, supporting the header/data separation architecture
+
+#### Synchronization Flow
+
+1. **Header Sync**: Headers created by the sequencer are sent to the header store for P2P gossip
+2. **Data Sync**: Data blocks are sent to the data store for P2P gossip
+3. **Cache Integration**: Both header and data caches track seen items to prevent duplicates
+4. **DA Inclusion Tracking**: Separate tracking for header and data DA inclusion status
+
+### Block Publication to P2P network (Executor Component)
+
+The **Executor component** of aggregator nodes publishes headers and data separately to the P2P network:
+
+#### Header Publication
+
+- Headers are sent through the header broadcast channel
+- Written to the header store for P2P gossip
+- Broadcast to network peers via header sync service
+
+#### Data Publication
+
+- Data blocks are sent through the data broadcast channel
+- Written to the data store for P2P gossip
+- Broadcast to network peers via data sync service
+
+Non-sequencer full nodes receive headers and data through the P2P sync service and do not publish blocks themselves.
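+
+A sketch of the publication path, assuming hypothetical channel names (`headerBroadcastCh`, `dataBroadcastCh`):
+
+```go
+// Sketch only: the Executor hands the signed header and the block data to
+// separate broadcast pipelines, which write to the P2P stores and gossip.
+func (e *Executor) publish(ctx context.Context, header *types.SignedHeader, data *types.Data) error {
+	select {
+	case e.headerBroadcastCh <- header: // picked up by the header sync service
+	case <-ctx.Done():
+		return ctx.Err()
+	}
+	select {
+	case e.dataBroadcastCh <- data: // picked up by the data sync service
+	case <-ctx.Done():
+		return ctx.Err()
+	}
+	return nil
+}
+```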
+
+### Block Retrieval from P2P network (Syncer Component)
+
+The **Syncer component** retrieves headers and data separately from P2P stores through its P2PHandler:
+
+#### Header Store Retrieval Loop
+
+The `HeaderStoreRetrieveLoop`:
+
+- Operates at `BlockTime` intervals via `headerStoreCh` signals
+- Tracks `headerStoreHeight` for the last retrieved header
+- Retrieves all headers between last height and current store height
+- Validates sequencer information using `assertUsingExpectedSingleSequencer`
+- Marks headers as "seen" in the header cache
+- Sends headers to sync goroutine via `headerInCh`
+
+#### Data Store Retrieval Loop
+
+The `DataStoreRetrieveLoop` (a combined sketch of both loops follows this list):
+
+- Operates at `BlockTime` intervals via `dataStoreCh` signals
+- Tracks `dataStoreHeight` for the last retrieved data
+- Retrieves all data blocks between last height and current store height
+- Validates data signatures using `assertValidSignedData`
+- Marks data as "seen" in the data cache
+- Sends data to sync goroutine via `dataInCh`
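+
+Both loops share the same drain-and-forward pattern; here is the header variant, with illustrative cache and channel names (the data loop differs only in its types and validation helper):
+
+```go
+// Sketch only: drain the P2P header store from the last retrieved height
+// up to its current height, validating and forwarding each header.
+func (s *Syncer) headerStoreRetrieveLoop(ctx context.Context) {
+	for {
+		select {
+		case <-ctx.Done():
+			return
+		case <-s.headerStoreCh: // signaled roughly every BlockTime
+		}
+		storeHeight := s.headerStore.Height()
+		for h := s.headerStoreHeight + 1; h <= storeHeight; h++ {
+			header, err := s.headerStore.GetByHeight(ctx, h)
+			if err != nil {
+				break // retry from this height on the next signal
+			}
+			if err := s.assertUsingExpectedSingleSequencer(header); err != nil {
+				break // reject headers from an unexpected sequencer
+			}
+			s.cache.SetHeaderSeen(header.Hash().String())
+			s.headerInCh <- header // hand off to the sync goroutine
+			s.headerStoreHeight = h
+		}
+	}
+}
+```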
+
+#### Soft Confirmations
+
+Headers and data retrieved from P2P are marked as soft confirmed until both:
+
+1. The corresponding header is seen on the DA layer
+2. The corresponding data is seen on the DA layer
+
+Once both conditions are met, the block is marked as DA-included.
+
+#### About Soft Confirmations and DA Inclusions
+
+The Syncer retrieves blocks from both the P2P network and the underlying DA network because blocks propagate over the P2P network faster than they can be retrieved from DA (e.g., ~1 second vs ~6 seconds).
+Blocks retrieved from the P2P network are only marked as soft confirmed until DA retrieval succeeds for them and they are marked DA-included.
+DA-included blocks carry a higher level of finality.
+
+**DAIncluderLoop**:
+The `DAIncluderLoop` is responsible for advancing the `DAIncludedHeight` (see the sketch after this list) by:
+
+- Checking if blocks after the current height have both header and data marked as DA-included in caches
+- Stopping advancement if either header or data is missing for a height
+- Calling `SetFinal` on the executor when a block becomes DA-included
+- Storing the Evolve height to DA height mapping for tracking
+- Ensuring only blocks with both header and data present are considered DA-included
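+
+A sketch of the advancement rule, with illustrative cache and store accessors:
+
+```go
+// Sketch only: advance DAIncludedHeight while both halves of the next
+// block are marked DA included, finalizing each block as we go.
+func (s *Syncer) advanceDAIncludedHeight(ctx context.Context) error {
+	for {
+		next := s.daIncludedHeight + 1
+		// Stop if either the header or the data is missing or not yet on DA.
+		if !s.cache.IsHeaderDAIncluded(next) || !s.cache.IsDataDAIncluded(next) {
+			return nil
+		}
+		if err := s.exec.SetFinal(ctx, next); err != nil {
+			return err
+		}
+		// Record the Evolve height -> DA height mapping for tracking.
+		s.store.SetDAHeightForHeight(next, s.cache.DAHeightFor(next))
+		s.daIncludedHeight = next
+	}
+}
+```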
+
+### State Update after Block Retrieval (Syncer Component)
+
+The **Syncer component** uses a `SyncLoop` to coordinate state updates from blocks retrieved via P2P or DA networks:
+
+```mermaid
+flowchart TD
+ subgraph Sources
+ P1[P2P Header Store] --> H[headerInCh]
+ P2[P2P Data Store] --> D[dataInCh]
+ DA1[DA Header Retrieval] --> H
+ DA2[DA Data Retrieval] --> D
+ end
+
+ subgraph SyncLoop
+ H --> S[Sync Goroutine]
+ D --> S
+ S --> C{Header & Data for Same Height?}
+ C -->|Yes| R[Reconstruct Block]
+ C -->|No| W[Wait for Matching Pair]
+ R --> V[Validate Signatures]
+ V --> A[ApplyBlock]
+ A --> CM[Commit]
+ CM --> ST[Store Block & State]
+ ST --> F{DA Included?}
+ F -->|Yes| FN[SetFinal]
+ F -->|No| E[End]
+ FN --> U[Update DA Height]
+ end
+```
+
+#### Sync Loop Architecture
+
+The `SyncLoop` processes headers and data from multiple sources:
+
+- Headers from `headerInCh` (P2P and DA sources)
+- Data from `dataInCh` (P2P and DA sources)
+- Maintains caches to track processed items
+- Ensures ordered processing by height
+
+#### State Update Process
+
+When both header and data are available for a height (sketched after this list):
+
+1. **Block Reconstruction**: Combines header and data into a complete block
+2. **Validation**: Verifies header and data signatures match expectations
+3. **ApplyBlock**:
+ - Validates the block against current state
+ - Executes transactions
+ - Captures validator updates
+ - Returns updated state
+4. **Commit**:
+ - Persists execution results
+ - Updates mempool by removing included transactions
+ - Publishes block events
+5. **Storage**:
+ - Stores the block, validators, and updated state
+ - Updates last state in manager
+6. **Finalization**:
+ - When block is DA-included, calls `SetFinal` on executor
+ - Updates DA included height
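+
+A condensed sketch of this sequence, with illustrative method and helper names (`makeBlock`, `validateBlock`):
+
+```go
+// Sketch only: process the next height once its header and data have both arrived.
+func (s *Syncer) trySyncNext(ctx context.Context) error {
+	h := s.state.LastBlockHeight + 1
+	header, data := s.cache.GetHeader(h), s.cache.GetData(h)
+	if header == nil || data == nil {
+		return nil // wait for the matching half to arrive
+	}
+	block := makeBlock(header, data)               // 1. reconstruct
+	if err := s.validateBlock(block); err != nil { // 2. validate signatures
+		return err
+	}
+	newState, err := s.exec.ApplyBlock(ctx, s.state, block) // 3. apply
+	if err != nil {
+		return err
+	}
+	if err := s.exec.Commit(ctx, block); err != nil { // 4. commit
+		return err
+	}
+	if err := s.store.SaveBlock(ctx, block, newState); err != nil { // 5. store
+		return err
+	}
+	s.state = newState
+	if s.cache.IsDAIncluded(h) { // 6. finalize when both halves are on DA
+		return s.exec.SetFinal(ctx, h)
+	}
+	return nil
+}
+```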
+
+## Message Structure/Communication Format
+
+### Component Communication
+
+The components communicate through well-defined interfaces:
+
+#### Executor ↔ Core Executor
+
+- `InitChain`: initializes the chain state with the given genesis time, initial height, and chain ID, calling `InitChainSync` on the executor to obtain the initial `appHash`.
+- `CreateBlock`: prepares a block with transactions from the provided batch data.
+- `ApplyBlock`: validates the block, executes the block (apply transactions), captures validator updates, and returns updated state.
+- `SetFinal`: marks the block as final when both its header and data are confirmed on the DA layer.
+- `GetTxs`: retrieves transactions from the application (used by Reaper component).
+
+#### Reaper ↔ Sequencer
+
+- `GetNextBatch`: retrieves the next batch of transactions to include in a block.
+- `VerifyBatch`: validates that a batch came from the expected sequencer.
+
+#### Submitter/Syncer ↔ DA Layer
+
+- `Submit`: submits headers or data blobs to the DA network.
+- `Get`: retrieves headers or data blobs from the DA network.
+- `GetHeightPair`: retrieves both header and data at a specific DA height (a rough Go sketch of this surface follows the list).
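+
+The sketch below renders these DA-facing calls as a Go interface; signatures are illustrative, not the exact API:
+
+```go
+// Sketch only: the shape of the DA interaction used by Submitter and Syncer.
+type DALayer interface {
+	// Submit posts encoded header or data blobs to a namespace and returns
+	// the DA height that included them.
+	Submit(ctx context.Context, blobs [][]byte, gasPrice float64, namespace []byte) (uint64, error)
+	// Get returns all blobs found at a DA height within a namespace.
+	Get(ctx context.Context, daHeight uint64, namespace []byte) ([][]byte, error)
+	// GetHeightPair fetches header and data blobs at a single DA height.
+	GetHeightPair(ctx context.Context, daHeight uint64) (headers, data [][]byte, err error)
+}
+```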
+
+## Assumptions and Considerations
+
+### Component Architecture
+
+- The block package uses a modular component architecture instead of a monolithic manager
+- Components are created based on node type: aggregator nodes get all components, non-aggregator nodes only get synchronization components
+- Each component has a specific responsibility and communicates through well-defined interfaces
+- Components share a common Cache Manager for coordination and state tracking
+
+### Initialization and State Management
+
+- When the node (re)starts, components load the initial state from the local store, falling back to genesis if no state is found
+- During startup the Syncer invokes the execution Replayer to re-execute any blocks the local execution layer is missing; the replayer enforces strict app-hash matching so a mismatch aborts initialization instead of silently drifting out of sync
+- The default mode for aggregator nodes is normal (not lazy)
+- Components coordinate through channels and shared cache structures
+
+### Block Production (Executor Component)
+
+- The Executor can produce empty blocks
+- In lazy aggregation mode, the Executor maintains consistency with the DA layer by producing empty blocks at regular intervals, ensuring a 1:1 mapping between DA layer blocks and execution layer blocks
+- The lazy aggregation mechanism uses a dual timer approach (sketched below):
+  - A `blockTimer` that triggers block production when transactions are available
+  - A `lazyTimer` that ensures blocks are produced even during periods of inactivity
+- Empty batches are handled differently in lazy mode: instead of being discarded, they are returned with the `ErrNoBatch` error, allowing the caller to create empty blocks with proper timestamps
+- Transaction notifications from the `Reaper` to the `Executor` are handled via a non-blocking notification channel (`txNotifyCh`) to prevent backpressure
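+
+A sketch of the dual-timer loop; timer and channel names follow the description above, while the structure and helpers (`txsAvailable`, `produceBlock`) are illustrative:
+
+```go
+// Sketch only: lazy aggregation with a blockTimer for active periods and a
+// lazyTimer that guarantees an empty block during inactivity.
+func (e *Executor) lazyAggregationLoop(ctx context.Context) {
+	blockTimer := time.NewTimer(e.blockTime)
+	lazyTimer := time.NewTimer(e.lazyBlockTime)
+	defer blockTimer.Stop()
+	defer lazyTimer.Stop()
+	for {
+		select {
+		case <-ctx.Done():
+			return
+		case <-e.txNotifyCh:
+			// Non-blocking notification from the Reaper: transactions are
+			// pending; the next blockTimer tick will pick them up.
+		case <-blockTimer.C:
+			if e.txsAvailable() {
+				e.produceBlock(ctx, false) // block with transactions
+				lazyTimer.Reset(e.lazyBlockTime)
+			}
+			blockTimer.Reset(e.blockTime)
+		case <-lazyTimer.C:
+			// Inactivity: produce an empty block to keep the 1:1 mapping
+			// with DA layer blocks.
+			e.produceBlock(ctx, true) // empty block
+			lazyTimer.Reset(e.lazyBlockTime)
+		}
+	}
+}
+```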
+
+### DA Submission (Submitter Component)
+
+- The Submitter enforces `MaxPendingHeadersAndData` limit to prevent unbounded growth of pending queues during DA submission issues
+- Headers and data are submitted separately to the DA layer using different namespaces, supporting the header/data separation architecture
+- The Cache Manager uses persistent caches for headers and data to track seen items and DA inclusion status
+- Namespace migration is handled transparently by the Syncer, with automatic detection and state persistence to optimize future operations
+- The system supports backward compatibility with legacy single-namespace deployments while transitioning to separate namespaces
+- Gas price management in the Submitter includes automatic adjustment with `GasMultiplier` on DA submission retries, as sketched below
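+
+A sketch of the escalation logic, with illustrative config field names:
+
+```go
+// Sketch only: each failed DA submission raises the gas price by
+// GasMultiplier before the next attempt.
+func (s *Submitter) submitWithEscalation(ctx context.Context, blobs [][]byte, ns []byte) (uint64, error) {
+	gasPrice := s.config.DA.GasPrice
+	var lastErr error
+	for attempt := 0; attempt < s.config.DA.MaxSubmitAttempts; attempt++ {
+		height, err := s.da.Submit(ctx, blobs, gasPrice, ns)
+		if err == nil {
+			return height, nil
+		}
+		lastErr = err
+		gasPrice *= s.config.DA.GasMultiplier // e.g. 1.2 bids 20% more
+	}
+	return 0, lastErr
+}
+```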
+
+### Storage and Persistence
+
+- Components use persistent storage (disk) when the `root_dir` and `db_path` configuration parameters are specified in the `config.yaml` file under the app directory. If these parameters are not specified, in-memory storage is used, which does not persist across node restarts
+- The Syncer does not re-apply blocks when they transition from soft confirmed to DA included status. The block is only marked DA included in the caches
+- Header and data stores use separate prefixes for isolation in the underlying database
+- The genesis `ChainID` is used to create separate `PubSubTopID`s for headers and data in go-header
+
+### P2P and Synchronization
+
+- Block sync over the P2P network works only when the full node is connected to the P2P network; the initial seed nodes to connect to are specified via the `P2PConfig.Seeds` configuration parameter when starting the full node
+- The node's context is passed down to all components to support graceful shutdown and cancellation
+
+### Architecture Design Decisions
+
+- The Executor supports custom signature payload providers for headers, enabling flexible signing schemes
+- The component architecture supports the separation of header and data structures in Evolve. This allows for expanding the sequencing scheme beyond single sequencing and enables the use of a decentralized sequencer mode. For detailed information on this architecture, see the [Header and Data Separation ADR](../../adr/adr-014-header-and-data-separation.md)
+- Components process blocks with a minimal header format, which is designed to eliminate dependency on CometBFT's header format and can be used to produce an execution layer tailored header if needed. For details on this header structure, see the [Evolve Minimal Header](../../adr/adr-015-rollkit-minimal-header.md) specification
+
+## Metrics
+
+The block components expose comprehensive metrics for monitoring through the shared Metrics instance:
+
+### Block Production Metrics (Executor Component)
+
+- `last_block_produced_height`: Height of the last produced block
+- `last_block_produced_time`: Timestamp of the last produced block
+- `aggregation_type`: Current aggregation mode (normal/lazy)
+- `block_size_bytes`: Size distribution of produced blocks
+- `produced_empty_blocks_total`: Count of empty blocks produced
+
+### DA Metrics (Submitter and Syncer Components)
+
+- `da_submission_attempts_total`: Total DA submission attempts
+- `da_submission_success_total`: Successful DA submissions
+- `da_submission_failure_total`: Failed DA submissions
+- `da_retrieval_attempts_total`: Total DA retrieval attempts
+- `da_retrieval_success_total`: Successful DA retrievals
+- `da_retrieval_failure_total`: Failed DA retrievals
+- `da_height`: Current DA retrieval height
+- `pending_headers_count`: Number of headers pending DA submission
+- `pending_data_count`: Number of data blocks pending DA submission
+
+### Sync Metrics (Syncer Component)
+
+- `sync_height`: Current sync height
+- `da_included_height`: Height of last DA-included block
+- `soft_confirmed_height`: Height of last soft confirmed block
+- `header_store_height`: Current header store height
+- `data_store_height`: Current data store height
+
+### Performance Metrics (All Components)
+
+- `block_production_time`: Time to produce a block
+- `da_submission_time`: Time to submit to DA
+- `state_update_time`: Time to apply block and update state
+- `channel_buffer_usage`: Usage of internal channels
+
+### Error Metrics (All Components)
+
+- `errors_total`: Total errors by type and operation
+
+## Implementation
+
+The modular block components are implemented in the following packages:
+
+- [Executor]: Block production and state transitions (`block/internal/executing/`)
+- [Reaper]: Transaction collection and submission (`block/internal/reaping/`)
+- [Submitter]: DA submission logic (`block/internal/submitting/`)
+- [Syncer]: Block synchronization from DA and P2P (`block/internal/syncing/`)
+- [Cache Manager]: Coordination and state tracking (`block/internal/cache/`)
+- [Components]: Main components orchestration (`block/components.go`)
+
+See [tutorial] for running a multi-node network with both aggregator and non-aggregator full nodes.
+
+## References
+
+[1] [Go Header][go-header]
+
+[2] [Block Sync][block-sync]
+
+[3] [Full Node][full-node]
+
+[4] [Block Components][Components]
+
+[5] [Tutorial][tutorial]
+
+[6] [Header and Data Separation ADR](../../adr/adr-014-header-and-data-separation.md)
+
+[7] [Evolve Minimal Header](../../adr/adr-015-rollkit-minimal-header.md)
+
+[8] [Data Availability](./da.md)
+
+[9] [Lazy Aggregation with DA Layer Consistency ADR](../../adr/adr-021-lazy-aggregation.md)
+
+[defaultBlockTime]: https://github.com/evstack/ev-node/blob/main/pkg/config/defaults.go#L50
+[defaultDABlockTime]: https://github.com/evstack/ev-node/blob/main/pkg/config/defaults.go#L59
+[defaultLazyBlockTime]: https://github.com/evstack/ev-node/blob/main/pkg/config/defaults.go#L52
+[go-header]: https://github.com/celestiaorg/go-header
+[block-sync]: https://github.com/evstack/ev-node/blob/main/pkg/sync/sync_service.go
+[full-node]: https://github.com/evstack/ev-node/blob/main/node/full.go
+[Executor]: https://github.com/evstack/ev-node/blob/main/block/internal/executing/executor.go
+[Reaper]: https://github.com/evstack/ev-node/blob/main/block/internal/reaping/reaper.go
+[Submitter]: https://github.com/evstack/ev-node/blob/main/block/internal/submitting/submitter.go
+[Syncer]: https://github.com/evstack/ev-node/blob/main/block/internal/syncing/syncer.go
+[Cache Manager]: https://github.com/evstack/ev-node/blob/main/block/internal/cache/manager.go
+[Components]: https://github.com/evstack/ev-node/blob/main/block/components.go
+[tutorial]: https://ev.xyz/guides/full-node
diff --git a/docs/reference/specs/block-validity.md b/docs/reference/specs/block-validity.md
new file mode 100644
index 000000000..6bd6964a5
--- /dev/null
+++ b/docs/reference/specs/block-validity.md
@@ -0,0 +1,125 @@
+# Block and Header Validity
+
+## Abstract
+
+As with all blockchains, a chain is defined as the sequence of **valid** blocks from the genesis to the head. Thus, the block and header validity rules define the chain.
+
+Verifying a block/header is done in 3 parts:
+
+1. Verify correct serialization according to the protobuf spec
+
+2. Perform basic validation of the types
+
+3. Perform verification of the new block against the previously accepted block
+
+Evolve uses a header/data separation architecture where headers and data can be validated independently. The system has moved from a multi-validator model to a single signer model for simplified sequencer management.
+
+## Basic Validation
+
+Each type contains a `.ValidateBasic()` method, which verifies that certain basic invariants hold. The `ValidateBasic()` calls are nested for each structure.
+
+### SignedHeader Validation
+
+```go
+SignedHeader.ValidateBasic()
+ // Make sure the SignedHeader's Header passes basic validation
+ Header.ValidateBasic()
+ verify ProposerAddress not nil
+ // Make sure the SignedHeader's signature passes basic validation
+ Signature.ValidateBasic()
+ // Ensure that someone signed the header
+ verify len(c.Signatures) not 0
+ // For based chains (sh.Signer.IsEmpty()), pass validation
+ if !sh.Signer.IsEmpty():
+ // Verify the signer matches the proposer address
+ verify sh.Signer.Address == sh.ProposerAddress
+ // Verify signature using custom verifier if set, otherwise use default
+ if sh.verifier != nil:
+ verify sh.verifier(sh) == nil
+ else:
+ verify sh.Signature.Verify(sh.Signer.PubKey, sh.Header.MarshalBinary())
+```
+
+### SignedData Validation
+
+```go
+SignedData.ValidateBasic()
+ // Always passes basic validation for the Data itself
+ Data.ValidateBasic() // always passes
+ // Make sure the signature is valid
+ Signature.ValidateBasic()
+ verify len(c.Signatures) not 0
+ // Verify the signer
+    if !sd.Signer.IsEmpty():
+ verify sd.Signature.Verify(sd.Signer.PubKey, sd.Data.MarshalBinary())
+```
+
+### Block Validation
+
+Blocks are composed of SignedHeader and Data:
+
+```go
+// Block validation happens by validating header and data separately
+// then ensuring data hash matches
+verify SignedHeader.ValidateBasic() == nil
+verify Data.Hash() == SignedHeader.DataHash
+```
+
+## Verification Against Previous Block
+
+```go
+SignedHeader.Verify(untrustedHeader *SignedHeader)
+ // Basic validation is handled by go-header before this
+ Header.Verify(untrustedHeader)
+ // Verify height sequence
+ if untrustedHeader.Height != h.Height + 1:
+ if untrustedHeader.Height > h.Height + 1:
+ return soft verification failure
+ return error "headers are not adjacent"
+ // Verify the link to previous header
+ verify untrustedHeader.LastHeaderHash == h.Header.Hash()
+ // Note: ValidatorHash field exists for compatibility but is not validated
+```
+
+## [Data](https://github.com/evstack/ev-node/blob/main/types/data.go)
+
+| **Field Name** | **Valid State** | **Validation** |
+|----------------|-----------------------------------------|------------------------------------|
+| Txs | Transaction data of the block | Data.Hash() == SignedHeader.DataHash |
+| Metadata | Optional p2p gossiping metadata | Not validated |
+
+## [SignedHeader](https://github.com/evstack/ev-node/blob/main/types/signed_header.go)
+
+| **Field Name** | **Valid State** | **Validation** |
+|----------------|--------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|
+| Header | Valid header for the block | `Header` passes `ValidateBasic()` and `Verify()` |
+| Signature | Valid signature from the single sequencer | `Signature` passes `ValidateBasic()`, verified against signer |
+| Signer | Information about who signed the header | Must match ProposerAddress if not empty (based chain case) |
+| verifier | Optional custom signature verification function | Used instead of default verification if set |
+
+## [Header](https://github.com/evstack/ev-node/blob/main/types/header.go)
+
+***Note***: Evolve has moved to a single signer model. The multi-validator architecture has been replaced with a simpler single sequencer approach.
+
+| **Field Name** | **Valid State** | **Validation** |
+|---------------------|--------------------------------------------------------------------------------------------|---------------------------------------|
+| **BaseHeader** | | |
+| Height | Height of the previous accepted header, plus 1. | checked in the `Verify()` step |
+| Time | Timestamp of the block | Not validated in Evolve |
+| ChainID | The hard-coded ChainID of the chain | Should be checked as soon as the header is received |
+| **Header** | | |
+| Version | unused | |
+| LastHeaderHash | The hash of the previous accepted block | checked in the `Verify()` step |
+| DataHash | Correct hash of the block's Data field | checked in the `ValidateBasic()` step |
+| AppHash | The correct state root after executing the block's transactions against the accepted state | checked during block execution |
+| ProposerAddress | Address of the expected proposer | Must match Signer.Address in SignedHeader |
+| ValidatorHash | Compatibility field for Tendermint light client | Not validated |
+
+## [Signer](https://github.com/evstack/ev-node/blob/main/types/signed_header.go)
+
+The Signer type replaces the previous ValidatorSet for single sequencer operation:
+
+| **Field Name** | **Valid State** | **Validation** |
+|----------------|-----------------------------------------------------------------|-----------------------------|
+| PubKey | Public key of the signer | Must not be nil if Signer is not empty |
+| Address | Address derived from the public key | Must match ProposerAddress |
diff --git a/docs/reference/specs/da.md b/docs/reference/specs/da.md
new file mode 100644
index 000000000..481a43385
--- /dev/null
+++ b/docs/reference/specs/da.md
@@ -0,0 +1,63 @@
+# DA
+
+Evolve provides a generic [data availability interface][da-interface] for modular blockchains. Any DA that implements this interface can be used with Evolve.
+
+## Details
+
+`Client` can connect via JSON-RPC transports using Evolve's [jsonrpc][jsonrpc] implementations. The connection can be configured using the following CLI flags:
+
+* `--rollkit.da.address`: url address of the DA service (default: "grpc://localhost:26650")
+* `--rollkit.da.auth_token`: authentication token of the DA service
+* `--rollkit.da.namespace`: namespace to use when submitting blobs to the DA service (deprecated)
+* `--rollkit.da.header_namespace`: namespace to use when submitting headers to the DA service
+* `--rollkit.da.data_namespace`: namespace to use when submitting data to the DA service
+
+The Submitter component now submits headers and data separately to the DA layer using different namespaces:
+
+* **Headers**: Submitted to the namespace specified by `--rollkit.da.header_namespace` (or falls back to `--rollkit.da.namespace` if not set)
+* **Data**: Submitted to the namespace specified by `--rollkit.da.data_namespace` (or falls back to `--rollkit.da.namespace` if not set)
+
+Each submission first encodes the headers or data using protobuf (the encoded data are called blobs) and invokes the `Submit` method on the underlying DA implementation with the appropriate namespace. On successful submission (`StatusSuccess`), the DA block height which included the blobs is returned.
+
+To make sure that the serialised blocks don't exceed the underlying DA's blob limits, the Submitter fetches the blob size limit by calling `Config`, which returns the limit in bytes as a `uint64`, then includes serialised blocks until the limit is reached. If the limit is reached, it submits the partial set and returns the count of successfully submitted blocks as `SubmittedCount`. The caller should retry with the remaining blocks until all the blocks are submitted. If the first block itself is over the limit, an error is returned.
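+
+A sketch of the packing rule, assuming a hypothetical helper name:
+
+```go
+// Sketch only: include blobs until the DA blob-size limit is reached;
+// the caller retries with whatever was not included.
+func packBlobs(blobs [][]byte, limit uint64) ([][]byte, error) {
+	var toSubmit [][]byte
+	var total uint64
+	for i, blob := range blobs {
+		size := uint64(len(blob))
+		if i == 0 && size > limit {
+			return nil, fmt.Errorf("first blob (%d bytes) exceeds DA limit (%d bytes)", size, limit)
+		}
+		if total+size > limit {
+			break // submit the partial set; SubmittedCount = len(toSubmit)
+		}
+		total += size
+		toSubmit = append(toSubmit, blob)
+	}
+	return toSubmit, nil
+}
+```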
+
+The `Submit` call may result in an error (`StatusError`) from the underlying DA implementation in the following scenarios:
+
+* the total blob size exceeds the underlying DA's limits (including empty blobs)
+* implementation-specific failures, e.g., for [celestia-da-json-rpc][jsonrpc]: an invalid namespace, inability to create the commitment or proof, too low a gas price, etc.
+
+The retrieval process now supports both legacy single-namespace mode and separate namespace mode:
+
+1. **Legacy Mode Support**: For backward compatibility, the system first attempts to retrieve from the legacy namespace if migration has not been completed.
+
+2. **Separate Namespace Retrieval**: The system retrieves headers and data separately:
+ * Headers are retrieved from the `HeaderNamespace`
+ * Data is retrieved from the `DataNamespace`
+ * Results from both namespaces are combined
+
+3. **Namespace Migration**: The system automatically detects and tracks namespace migration:
+ * When data is found in new namespaces, migration is marked as complete
+ * Migration state is persisted to optimize future retrievals
+ * Once migration is complete, legacy namespace checks are skipped
+
+If there are no blocks available for a given DA height in any namespace, `StatusNotFound` is returned (which is not an error case). The retrieved blobs are converted back to headers and data, then combined into complete blocks for processing.
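+
+A sketch of migration-aware retrieval at a single DA height, with illustrative namespace fields, cache methods, and a hypothetical `splitHeadersAndData` helper:
+
+```go
+// Sketch only: check the legacy namespace until migration is observed,
+// then read headers and data from their separate namespaces.
+func (s *Syncer) retrieveAt(ctx context.Context, daHeight uint64) (headers, data [][]byte, err error) {
+	if !s.cache.NamespaceMigrationDone() {
+		if blobs, err := s.da.Get(ctx, daHeight, s.legacyNamespace); err == nil && len(blobs) > 0 {
+			return splitHeadersAndData(blobs) // hypothetical helper
+		}
+	}
+	if headers, err = s.da.Get(ctx, daHeight, s.headerNamespace); err != nil {
+		return nil, nil, err
+	}
+	if data, err = s.da.Get(ctx, daHeight, s.dataNamespace); err != nil {
+		return nil, nil, err
+	}
+	if len(headers) > 0 || len(data) > 0 {
+		// Persist migration state so future retrievals skip the legacy check.
+		s.cache.MarkNamespaceMigrationDone()
+	}
+	return headers, data, nil
+}
+```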
+
+Both header/data submission and retrieval operations may fail when the DA node or the DA blockchain it runs against has problems: for example, the DA mempool is full, the DA submit transaction's nonce clashes with another transaction from the DA submitter account, or the DA node is not synced.
+
+## Namespace Separation Benefits
+
+The separation of headers and data into different namespaces provides several advantages:
+
+* **Improved Scalability**: Headers and data can be processed independently, allowing for more efficient resource utilization
+* **Flexible Data Availability**: Different availability guarantees can be applied to headers vs data
+* **Optimized Retrieval**: Clients can retrieve only the data they need (e.g., light clients may only need headers)
+* **Backward Compatibility**: The system maintains support for legacy single-namespace deployments while enabling gradual migration
+
+## References
+
+[1] [da-interface][da-interface]
+
+[2] [jsonrpc][jsonrpc]
+
+[da-interface]: https://github.com/evstack/ev-node/blob/main/block/public.go
+[jsonrpc]: https://github.com/evstack/ev-node/tree/main/pkg/da/jsonrpc
diff --git a/docs/reference/specs/full-node.md b/docs/reference/specs/full-node.md
new file mode 100644
index 000000000..f909536b5
--- /dev/null
+++ b/docs/reference/specs/full-node.md
@@ -0,0 +1,107 @@
+# Full Node
+
+## Abstract
+
+A Full Node is a top-level service that encapsulates different components of Evolve and initializes/manages them.
+
+## Details
+
+### Full Node Details
+
+A Full Node is initialized inside the Cosmos SDK start script along with the node configuration, a private key to use in the P2P client, a private key for signing blocks as a block proposer, a client creator, a genesis document, and a logger. It uses them to initialize the components described below. The TxIndexer, BlockIndexer, and IndexerService components exist to ensure CometBFT compatibility, since they are needed for most of the RPC calls in CometBFT's `SignClient` interface.
+
+Note that unlike a light node which only syncs and stores block headers seen on the P2P layer, the full node also syncs and stores full blocks seen on both the P2P network and the DA layer. Full blocks contain all the transactions published as part of the block.
+
+The Full Node mainly encapsulates and initializes/manages the following components:
+
+### genesisDoc
+
+The [genesis] document contains information about the initial state of the chain, in particular its validator set.
+
+### conf
+
+The [node configuration] contains all the necessary settings for the node to be initialized and function properly.
+
+### P2P
+
+The [peer-to-peer client] is used to gossip transactions between full nodes in the network.
+
+### Store
+
+The [Store] is initialized with `DefaultStore`, an implementation of the [store interface] which is used for storing and retrieving blocks, commits, and state.
+
+### blockComponents
+
+The [Block Components] provide a modular architecture for managing block-related operations. Instead of a single monolithic manager, the system uses specialized components:
+
+**For Aggregator Nodes:**
+
+- **Executor**: Block production (normal and lazy modes) and state transitions
+- **Reaper**: Transaction collection and submission to sequencer
+- **Submitter**: Header and data submission to DA layer
+- **Syncer**: Block retrieval and synchronization from DA and P2P
+- **Cache Manager**: Coordination and tracking across all components
+
+**For Non-Aggregator Nodes:**
+
+- **Syncer**: Block retrieval and synchronization from DA and P2P
+- **Cache Manager**: Tracking and caching of synchronized blocks
+
+This modular architecture implements header/data separation where headers and transaction data are handled independently by different components.
+
+### dalc
+
+The [Data Availability Layer Client][dalc] is used to interact with the data availability layer. It is initialized with the DA Layer and DA Config specified in the node configuration.
+
+### hSyncService
+
+The [Header Sync Service] is used for syncing signed headers between nodes over P2P. It operates independently from data sync to support light clients.
+
+### dSyncService
+
+The [Data Sync Service] is used for syncing transaction data between nodes over P2P. This service is only used by full nodes, not light nodes.
+
+## Message Structure/Communication Format
+
+The Full Node communicates with other nodes in the network using the P2P client. It also communicates with the application using the ABCI proxy connections. The communication format is based on the P2P and ABCI protocols.
+
+## Assumptions and Considerations
+
+The Full Node assumes that the configuration, private keys, client creator, genesis document, and logger are correctly passed in by the Cosmos SDK. It also assumes that the P2P client, data availability layer client, block components, and other services can be started and stopped without errors.
+
+## Implementation
+
+See [full node]
+
+## References
+
+[1] [Full Node][full node]
+
+[2] [Genesis Document][genesis]
+
+[3] [Node Configuration][node configuration]
+
+[4] [Peer to Peer Client][peer-to-peer client]
+
+[5] [Store][Store]
+
+[6] [Store Interface][store interface]
+
+[7] [Block Components][block components]
+
+[8] [Data Availability Layer Client][dalc]
+
+[9] [Header Sync Service][Header Sync Service]
+
+[10] [Data Sync Service][Data Sync Service]
+
+[full node]: https://github.com/evstack/ev-node/blob/main/node/full.go
+[genesis]: https://github.com/cometbft/cometbft/blob/main/spec/core/genesis.md
+[node configuration]: https://github.com/evstack/ev-node/blob/main/pkg/config/config.go
+[peer-to-peer client]: https://github.com/evstack/ev-node/blob/main/pkg/p2p/client.go
+[Store]: https://github.com/evstack/ev-node/blob/main/pkg/store/store.go
+[store interface]: https://github.com/evstack/ev-node/blob/main/pkg/store/types.go
+[Block Components]: https://github.com/evstack/ev-node/blob/main/block/components.go
+[dalc]: https://github.com/evstack/ev-node/blob/main/block/public.go
+[Header Sync Service]: https://github.com/evstack/ev-node/blob/main/pkg/sync/sync_service.go
+[Data Sync Service]: https://github.com/evstack/ev-node/blob/main/pkg/sync/sync_service.go
diff --git a/docs/reference/specs/header-sync.md b/docs/reference/specs/header-sync.md
new file mode 100644
index 000000000..750f32593
--- /dev/null
+++ b/docs/reference/specs/header-sync.md
@@ -0,0 +1,108 @@
+# Header and Data Sync
+
+## Abstract
+
+The nodes in the P2P network sync headers and data using separate sync services that implement the [go-header][go-header] interface. Evolve uses a header/data separation architecture where headers and transaction data are synchronized independently through parallel services. Each sync service consists of several components as listed below.
+
+|Component|Description|
+|---|---|
+|store| a prefixed [datastore][datastore] where synced items are stored (`headerSync` prefix for headers, `dataSync` prefix for data)|
+|subscriber| a [libp2p][libp2p] node pubsub subscriber for the specific data type|
+|P2P server| a server for handling requests between peers in the P2P network|
+|exchange| a client that enables sending in/out-bound requests from/to the P2P network|
+|syncer| a service for efficient synchronization. When a P2P node falls behind and wants to catch up to the latest network head via the P2P network, it can use the syncer.|
+
+## Details
+
+Evolve implements two separate sync services:
+
+### Header Sync Service
+
+- Synchronizes `SignedHeader` structures containing block headers with signatures
+- Used by all node types (sequencer, full, and light)
+- Essential for maintaining the canonical view of the chain
+
+### Data Sync Service
+
+- Synchronizes `Data` structures containing transaction data
+- Used only by full nodes and sequencers
+- Light nodes do not run this service as they only need headers
+
+Both services:
+
+- Utilize the generic `SyncService[H header.Header[H]]` implementation
+- Inherit the `ConnectionGater` from the node's P2P client for peer management
+- Use `NodeConfig.BlockTime` to determine outdated items during sync
+- Operate independently on separate P2P topics and datastores
+
+### Consumption of Sync Services
+
+#### Header Sync
+
+- Sequencer nodes publish signed headers to the P2P network after block creation
+- Full and light nodes receive and store headers for chain validation
+- Headers contain commitments (DataHash) that link to the corresponding data
+
+#### Data Sync
+
+- Sequencer nodes publish transaction data separately from headers
+- Only full nodes receive and store data (light nodes skip this)
+- Data is linked to headers through the DataHash commitment
+
+#### Parallel Broadcasting
+
+The Executor component (in aggregator nodes) broadcasts headers and data in parallel when publishing blocks:
+
+- Headers are sent through `headerBroadcaster`
+- Data is sent through `dataBroadcaster`
+- This enables efficient network propagation of both components
+
+## Assumptions
+
+- Separate datastores are created with different prefixes:
+ - Headers: `headerSync` prefix on the main datastore
+ - Data: `dataSync` prefix on the main datastore
+- Network IDs are suffixed to distinguish services (see the sketch after this list):
+  - Header sync: `{network}-headerSync`
+  - Data sync: `{network}-dataSync`
+- Chain IDs for pubsub topics are also separated:
+  - Headers: `{chainID}-headerSync` creates a topic like `/gm-headerSync/header-sub/v0.0.1`
+  - Data: `{chainID}-dataSync` creates a topic like `/gm-dataSync/header-sub/v0.0.1`
+- Both stores must contain at least one item before the syncer starts:
+ - On first boot, the services fetch the configured genesis height from peers
+ - On restart, each store reuses its latest item to derive the initial height requested from peers
+- Sync services work only when the node is connected to the P2P network via `P2PConfig.Seeds`
+- The node's context is passed to all components for graceful shutdown
+- Headers and data are linked through DataHash but synced independently
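+
+A sketch of the identifier derivation described above (the helper name is illustrative):
+
+```go
+// Sketch only: derive the service-specific network ID and pubsub chain ID.
+func syncIdentifiers(network, chainID string, isData bool) (networkID, topicChainID string) {
+	suffix := "-headerSync"
+	if isData {
+		suffix = "-dataSync"
+	}
+	// For network and chain ID "gm", the header service uses "gm-headerSync",
+	// which go-header turns into a topic like /gm-headerSync/header-sub/v0.0.1.
+	return network + suffix, chainID + suffix
+}
+```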
+
+## Implementation
+
+The sync service implementation can be found in [pkg/sync/sync_service.go][sync-service]. The generic `SyncService[H header.Header[H]]` is instantiated as:
+
+- `HeaderSyncService` for syncing `*types.SignedHeader`
+- `DataSyncService` for syncing `*types.Data`
+
+Full nodes create and start both services, while light nodes only start the header sync service. The services are created in [full][fullnode] and [light][lightnode] node implementations.
+
+The block components integrate with both services through:
+
+- The Syncer component's P2PHandler retrieves headers and data from P2P
+- The Executor component publishes headers and data through broadcast channels
+- Separate stores and channels manage header and data synchronization
+
+## References
+
+[1] [Header Sync][sync-service]
+
+[2] [Full Node][fullnode]
+
+[3] [Light Node][lightnode]
+
+[4] [go-header][go-header]
+
+[sync-service]: https://github.com/evstack/ev-node/blob/main/pkg/sync/sync_service.go
+[fullnode]: https://github.com/evstack/ev-node/blob/main/node/full.go
+[lightnode]: https://github.com/evstack/ev-node/blob/main/node/light.go
+[go-header]: https://github.com/celestiaorg/go-header
+[libp2p]: https://github.com/libp2p/go-libp2p
+[datastore]: https://github.com/ipfs/go-datastore
diff --git a/docs/reference/specs/out-of-order-blocks.png b/docs/reference/specs/out-of-order-blocks.png
new file mode 100644
index 000000000..fa7a955cb
Binary files /dev/null and b/docs/reference/specs/out-of-order-blocks.png differ
diff --git a/docs/reference/specs/overview.md b/docs/reference/specs/overview.md
new file mode 100644
index 000000000..0621ad098
--- /dev/null
+++ b/docs/reference/specs/overview.md
@@ -0,0 +1,17 @@
+# Specs Overview
+
+Welcome to the Evolve Technical Specifications.
+
+This is comprehensive documentation on the inner components of Evolve, including data storage, transaction processing, and more. It’s an essential resource for developers looking to understand, contribute to, and leverage the full capabilities of Evolve.
+
+Each file in this folder covers a specific aspect of the system, from block management to data availability and networking. Use this page as a starting point to explore the technical details and architecture of Evolve.
+
+## Table of Contents
+
+- [Block Components](../../concepts/block-lifecycle.md): Explains the modular component architecture for block processing in Evolve.
+- [Block Validity](./block-validity.md): Details the rules and checks for block validity within the protocol.
+- [Data Availability (DA)](./da.md): Describes how Evolve ensures data availability and integrates with DA layers.
+- [Full Node](./full-node.md): Outlines the architecture and operation of a full node in Evolve.
+- [Header Sync](./header-sync.md): Covers the process and protocol for synchronizing block headers.
+- [P2P](./p2p.md): Documents the peer-to-peer networking layer and its protocols.
+- [Store](./store.md): Provides information about the storage subsystem and data management.
diff --git a/docs/reference/specs/store.md b/docs/reference/specs/store.md
new file mode 100644
index 000000000..8432902f7
--- /dev/null
+++ b/docs/reference/specs/store.md
@@ -0,0 +1,92 @@
+# Store
+
+## Abstract
+
+The Store interface defines methods for storing and retrieving blocks, commits, and the state of the blockchain.
+
+## Protocol/Component Description
+
+The Store interface defines the following methods (a condensed Go sketch follows the list):
+
+- `Height`: Returns the height of the highest block in the store.
+- `SetHeight`: Sets given height in the store if it's higher than the existing height in the store.
+- `SaveBlock`: Saves a block (containing both header and data) along with its seen signature.
+- `GetBlock`: Returns a block at a given height.
+- `GetBlockByHash`: Returns a block with a given block header hash.
+- `SaveBlockResponses`: Saves block responses in the Store.
+- `GetBlockResponses`: Returns block results at a given height.
+- `GetSignature`: Returns a signature for a block at a given height.
+- `GetSignatureByHash`: Returns a signature for a block with a given block header hash.
+- `UpdateState`: Updates the state saved in the Store. Only one State is stored.
+- `GetState`: Returns the last state saved with UpdateState.
+- `SaveValidators`: Saves the validator set at a given height.
+- `GetValidators`: Returns the validator set at a given height.
+
+Note: while blocks are stored as complete units in the store, the block components handle headers and data separately during synchronization and DA layer interaction.
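+
+A condensed Go sketch of this interface (type and signature details are illustrative; see `pkg/store/types.go` for the authoritative definition):
+
+```go
+// Sketch only: the Store surface described above.
+type Store interface {
+	Height() uint64
+	SetHeight(ctx context.Context, height uint64)
+	SaveBlock(ctx context.Context, block *types.Block, signature *types.Signature) error
+	GetBlock(ctx context.Context, height uint64) (*types.Block, error)
+	GetBlockByHash(ctx context.Context, hash types.Hash) (*types.Block, error)
+	SaveBlockResponses(ctx context.Context, height uint64, responses *types.BlockResponses) error
+	GetBlockResponses(ctx context.Context, height uint64) (*types.BlockResponses, error)
+	GetSignature(ctx context.Context, height uint64) (*types.Signature, error)
+	GetSignatureByHash(ctx context.Context, hash types.Hash) (*types.Signature, error)
+	UpdateState(ctx context.Context, state types.State) error
+	GetState(ctx context.Context) (types.State, error)
+	SaveValidators(ctx context.Context, height uint64, vals *types.ValidatorSet) error
+	GetValidators(ctx context.Context, height uint64) (*types.ValidatorSet, error)
+}
+```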
+
+The `TxnDatastore` interface inside [go-datastore] is used for constructing different key-value stores for the underlying storage of a full node. There are two different implementations of `TxnDatastore` in [kv.go]:
+
+- `NewTestInMemoryKVStore`: Builds a key-value store that uses the [BadgerDB] library and operates in-memory, without accessing the disk. Used only across unit tests and integration tests.
+
+- `NewDefaultKVStore`: Builds a key-value store that uses the [BadgerDB] library and stores the data on disk at the specified path.
+
+An Evolve full node is [initialized][full_node_store_initialization] using `NewDefaultKVStore` as the base key-value store for underlying storage. To store various types of data in this base key-value store, different prefixes are used: `mainPrefix`, `dalcPrefix`, and `indexerPrefix`. The `mainPrefix` equal to `0` is used for the main node data, `dalcPrefix` equal to `1` is used for Data Availability Layer Client (DALC) data, and `indexerPrefix` equal to `2` is used for indexing-related data.
+
+For the main node data, `DefaultStore` struct, an implementation of the Store interface, is used with the following prefixes for various types of data within it:
+
+- `blockPrefix` with value "b": Used to store complete blocks in the key-value store.
+- `indexPrefix` with value "i": Used to index the blocks stored in the key-value store.
+- `commitPrefix` with value "c": Used to store commits related to the blocks.
+- `statePrefix` with value "s": Used to store the state of the blockchain.
+- `responsesPrefix` with value "r": Used to store responses related to the blocks.
+- `validatorsPrefix` with value "v": Used to store validator sets at a given height.
+
+Additional prefixes used by sync services:
+
+- `headerSyncPrefix` with value "hs": Used by the header sync service for P2P synced headers.
+- `dataSyncPrefix` with value "ds": Used by the data sync service for P2P synced transaction data.
+
+For example, in a call to `GetBlockByHash` for some block hash `<hash>`, the key used in the full node's base key-value store will be `/0/b/<hash>`, where `0` is the main store prefix and `b` is the block prefix. Similarly, in a call to `GetValidators` for some height `<height>`, the key used will be `/0/v/<height>`, where `0` is the main store prefix and `v` is the validator set prefix.
+
+Inside the key-value store, the value of these various types of data like `Block` is stored as a byte array which is encoded and decoded using the corresponding Protobuf [marshal and unmarshal methods][serialization].
+
+The store is used most widely inside the [block components] to perform their functions correctly. Since the block components run multiple goroutines, access to the store is protected by mutex locks to prevent race conditions.
+
+## Message Structure/Communication Format
+
+The Store does not communicate over the network, so there is no message structure or communication format.
+
+## Assumptions and Considerations
+
+The Store assumes that the underlying datastore is reliable and provides atomicity for transactions. It also assumes that the data passed to it for storage is valid and correctly formatted.
+
+## Implementation
+
+See [Store Interface][store_interface] and [Default Store][default_store] for its implementation.
+
+## References
+
+[1] [Store Interface][store_interface]
+
+[2] [Default Store][default_store]
+
+[3] [Full Node Store Initialization][full_node_store_initialization]
+
+[4] [Block Components][block components]
+
+[5] [Badger DB][BadgerDB]
+
+[6] [Go Datastore][go-datastore]
+
+[7] [Key Value Store][kv.go]
+
+[8] [Serialization][serialization]
+
+[store_interface]: https://github.com/evstack/ev-node/blob/main/pkg/store/types.go#L11
+[default_store]: https://github.com/evstack/ev-node/blob/main/pkg/store/store.go
+[full_node_store_initialization]: https://github.com/evstack/ev-node/blob/main/node/full.go#L96
+[block components]: https://github.com/evstack/ev-node/blob/main/block/components.go
+[BadgerDB]: https://github.com/dgraph-io/badger
+[go-datastore]: https://github.com/ipfs/go-datastore
+[kv.go]: https://github.com/evstack/ev-node/blob/main/pkg/store/kv.go
+[serialization]: https://github.com/evstack/ev-node/blob/main/types/serialization.go
diff --git a/docs/reference/specs/termination.png b/docs/reference/specs/termination.png
new file mode 100644
index 000000000..0b61c8f23
Binary files /dev/null and b/docs/reference/specs/termination.png differ
diff --git a/scripts/utils.mk b/scripts/utils.mk
index c56d8d811..159480637 100644
--- a/scripts/utils.mk
+++ b/scripts/utils.mk
@@ -15,7 +15,7 @@ lint: vet
@echo "--> Running golangci-lint"
@golangci-lint run
@echo "--> Running markdownlint"
- @markdownlint --config .markdownlint.yaml '**/*.md'
+ @npx markdownlint-cli --config .markdownlint.yaml '**/*.md'
@echo "--> Running hadolint"
@hadolint test/docker/mockserv.Dockerfile
@echo "--> Running yamllint"
@@ -31,7 +31,7 @@ lint-fix:
@echo "--> Formatting go"
@golangci-lint run --fix
@echo "--> Formatting markdownlint"
- @markdownlint --config .markdownlint.yaml --ignore './changelog.md' '**/*.md' -f
+ @npx markdownlint-cli --config .markdownlint.yaml --ignore './changelog.md' '**/*.md' -f
.PHONY: lint-fix
## vet: Run go vet