
Conversation

@alpe
Contributor

@alpe alpe commented Nov 27, 2025

Overview

Resolves #2609

The basic idea is to store an additional DAHeightHint field within the p2p store.

Since SignedHeader and Data are used in other places too, I added a DAHeightHintContainer type to wrap the tuple for the store only.
The DA height hint is added by the da_submitter and read in the syncer to fetch the missing DA header/data for the most recent block as required.

Please note: this is a breaking change to the p2p network and store

@claude
Contributor

claude bot commented Nov 27, 2025

Claude finished @alpe's task in 5m 9s


Comprehensive Code Review: DA Hints in P2P

I've completed a thorough review of this PR implementing DA height hints in the p2p layer. Here are my findings across multiple dimensions:


🏗️ Architecture & Design

✅ Strengths

  • Clean abstraction: The P2PEnvelope[H] generic wrapper is elegant and avoids polluting core SignedHeader and Data types
  • Binary compatibility strategy: Smart use of protobuf's optional fields and type mirroring (P2PSignedHeader/P2PData) enables backward-compatible wire format
  • Separation of concerns: DA hint management is cleanly isolated to p2p layer without leaking into core domain types
  • AsyncDARetriever pattern: Worker pool with deduplication is a solid design for targeted DA retrieval

⚠️ Concerns

1. Type safety gap in P2PEnvelope.MarshalBinary/UnmarshalBinary (types/p2p_envelope.go:86-153)

switch msg := any(e.Message).(type) {
case *Data:
    // ...
case *SignedHeader:
    // ...
default:
    return nil, fmt.Errorf("unsupported type for toProto: %T", msg)
}

The type switch on any(e.Message) defeats Go's type safety. If a new header type is added, the mismatch only surfaces as a runtime error rather than a compile-time failure. This is particularly risky since P2PEnvelope is generic.

Recommendation: Consider using a marker interface or sealed type pattern:

type P2PMarshallable[H header.Header[H]] interface {
    header.Header[H]
    toP2PProto() (proto.Message, error)
    fromP2PProto(proto.Message) error
}

2. TODO comment indicates incomplete migration (block/internal/syncing/syncer.go:680)

if event.Source == common.SourceDA { // TODO(@julienrbrt): To be reverted once DA Hints are merged

This suggests the PR isn't fully integrated. The comment implies DA events should also populate hints, but they currently don't. This creates an asymmetry where:

  • P2P events have DaHeightHints populated
  • DA events have DaHeightHints = [2]uint64{0, 0}

3. Implicit zero-value semantics (block/internal/syncing/syncer.go:606-608)

switch {
case event.DaHeightHints == [2]uint64{0, 0}:
    // empty, nothing to do

Using zero as a sentinel for "no hint" is fragile. What if DA height 0 is valid? Consider using *uint64 or a dedicated Optional[uint64] type.
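
A minimal sketch of that explicit-optional shape, assuming the event could carry a small wrapper (or a *uint64) instead of the zero sentinel; every name here is illustrative and not taken from the PR:

// Hypothetical wrapper; Valid distinguishes "no hint" from "hint at DA height 0".
type DAHeightHint struct {
    Height uint64
    Valid  bool
}

func hintFromPtr(p *uint64) DAHeightHint {
    if p == nil {
        return DAHeightHint{} // absent hint
    }
    return DAHeightHint{Height: *p, Valid: true}
}

Checking hint.Valid rather than comparing against [2]uint64{0, 0} would keep DA height 0 usable.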


🛡️ Error Handling & Resilience

✅ Strengths

  • Graceful degradation: AsyncDARetriever logs errors but doesn't crash (async_da_retriever.go:99-101)
  • Context cancellation respected: Proper context.Done() checks throughout
  • Worker pool bounded: 5 workers + 100-buffer channel prevents unbounded goroutine spawning

⚠️ Concerns

1. Silent error swallowing in DA hint appender (block/internal/submitting/da_submitter.go:213-216, 258-261)

if err := s.headerDAHintAppender.AppendDAHint(ctx, res.Height, hashes...); err != nil {
    s.logger.Error().Err(err).Msg("failed to append da height hint in header p2p store")
    // ignoring error here, since we don't want to block the block submission
}

While the comment justifies this, silently ignoring errors means hints can be lost without observable impact on submissions. This creates a silent degradation where:

  • DA submission succeeds ✅
  • Hints fail to persist ❌
  • Future P2P syncs miss optimization opportunities

Recommendation: Add a metric to track hint append failures for observability:

s.metrics.DAHintAppendFailures.Add(1)

2. AsyncDARetriever drops requests when full (block/internal/syncing/async_da_retriever.go:69-75)

select {
case r.workCh <- height:
    r.inFlight[height] = struct{}{}
default:
    r.logger.Debug().Uint64("height", height).Msg("DA retrieval worker pool full, dropping request")
}

Dropped requests are only logged at DEBUG level. If the pool is saturated, this silently degrades sync performance without visibility.

Recommendation: Add metric + warn on sustained drops:

r.metrics.DARetrievalDropped.Add(1)
if r.consecutiveDrops.Add(1) > 10 {
    r.logger.Warn().Msg("AsyncDARetriever pool saturated, consider tuning worker count")
}

3. No retry logic in AsyncDARetriever (block/internal/syncing/async_da_retriever.go:98-101)

events, err := r.retriever.RetrieveFromDA(r.ctx, height)
if err != nil {
    r.logger.Debug().Err(err).Uint64("height", height).Msg("async DA retrieval failed")
    return
}

Transient DA failures (network blips, rate limits) are not retried. The hint-driven optimization is lost permanently for that block.

Recommendation: Implement exponential backoff retry (3 attempts) before giving up.
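
A hedged sketch of that retry, written as a standalone generic helper so the DA event type stays abstract; the helper name, 3-attempt count, and 500ms base delay are illustrative defaults, not values from the PR (only "context" and "time" from the standard library are needed):

func retryWithBackoff[T any](ctx context.Context, attempts int, base time.Duration,
    fn func(context.Context) (T, error)) (T, error) {
    var zero T
    var lastErr error
    delay := base
    for i := 0; i < attempts; i++ {
        out, err := fn(ctx)
        if err == nil {
            return out, nil
        }
        lastErr = err
        select {
        case <-ctx.Done():
            return zero, ctx.Err()
        case <-time.After(delay):
            delay *= 2 // exponential backoff between attempts
        }
    }
    return zero, lastErr
}

processRetrieval could then wrap r.retriever.RetrieveFromDA in the fn closure and keep the existing debug log for the final error.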


🔒 Security

✅ Strengths

  • No credential exposure: DA hints are uint64 values, no sensitive data
  • Proposer validation: P2P handler validates proposer address before processing (p2p_handler.go:90-93)

⚠️ Concerns

1. P2P hint injection vector (types/p2p_envelope.go:129-131, 146-148)

if pData.DaHeightHint != nil {
    e.DAHeightHint = *pData.DaHeightHint
}

Untrusted P2P peers can inject arbitrary DAHeightHint values. While this doesn't compromise correctness (DA retrieval validates blocks), it creates DoS vectors:

  • Resource exhaustion: Attacker sends hints pointing to very high DA heights, triggering expensive DA queries
  • Cache pollution: Invalid hints pollute the async retriever's work queue

Current mitigation: AsyncDARetriever has bounded queue + deduplication, limiting impact.

Recommendation: Add sanity bounds checking:

if pData.DaHeightHint != nil {
    hint := *pData.DaHeightHint
    if hint > 0 && hint < currentDAHeight + 10000 { // reasonable lookahead
        e.DAHeightHint = hint
    } else {
        logger.Warn().Uint64("hint", hint).Msg("rejecting out-of-bounds DA hint from peer")
    }
}

2. No rate limiting on AsyncDARetriever submissions
A malicious or buggy component could spam RequestRetrieval() calls, exhausting the worker pool and blocking legitimate requests.

Recommendation: Add per-height rate limiting or a time-based deduplication window.
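
A sketch of the time-based deduplication window, assuming RequestRetrieval would consult a per-height "last requested" timestamp before enqueueing; the 30-second window and all names are illustrative (only "sync" and "time" from the standard library are needed):

// Hypothetical limiter; not part of the PR.
type requestLimiter struct {
    mu     sync.Mutex
    window time.Duration // e.g. 30 * time.Second
    seen   map[uint64]time.Time
}

func (l *requestLimiter) allow(height uint64) bool {
    l.mu.Lock()
    defer l.mu.Unlock()
    now := time.Now()
    if t, ok := l.seen[height]; ok && now.Sub(t) < l.window {
        return false // requested recently; drop the duplicate
    }
    l.seen[height] = now
    return true
}

RequestRetrieval would call allow(height) before writing to workCh, which also bounds how often a misbehaving caller can occupy the pool with the same height.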


⚡ Performance & Resource Efficiency

✅ Strengths

  • Targeted DA retrieval: Hints enable direct DA fetching vs. sequential scanning (huge win for catch-up sync)
  • Bounded concurrency: 5 workers + 100-buffer channel prevents goroutine leaks
  • Deduplication: inFlight map prevents duplicate work (async_da_retriever.go:65-67)

⚠️ Concerns

1. AsyncDARetriever.inFlight map grows unbounded (block/internal/syncing/async_da_retriever.go:34)

inFlight  map[uint64]struct{}

Heights are added but only removed on completion (line 94). If a DA retrieval hangs, its entry is never deleted, so the map can keep growing for the lifetime of the process.

Recommendation: Add periodic cleanup of stale entries (e.g., > 5 min old) or use a fixed-size LRU cache.
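
A sketch of the TTL-based cleanup, assuming inFlight were changed from map[uint64]struct{} to a map of enqueue timestamps; the field name inFlightAt and the sweep cadence are illustrative, not from the PR:

// evictStale drops entries older than ttl; call it from a ticker in the run loop.
func (r *AsyncDARetriever) evictStale(ttl time.Duration) {
    r.mu.Lock()
    defer r.mu.Unlock()
    now := time.Now()
    for height, enqueued := range r.inFlightAt { // hypothetical map[uint64]time.Time
        if now.Sub(enqueued) > ttl {
            delete(r.inFlightAt, height)
        }
    }
}

A time.Ticker in the retriever's main loop could invoke evictStale(5*time.Minute) once a minute.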

2. Marshaling overhead in P2PEnvelope (types/p2p_envelope.go:86-112)
Every marshal creates intermediate protobuf structs (pb.P2PData, pb.P2PSignedHeader) and copies field-by-field:

mirrorPb = &pb.P2PData{
    Metadata:     pData.Metadata,
    Txs:          pData.Txs,
    DaHeightHint: &e.DAHeightHint,
}

This is called for every p2p broadcast, adding CPU + GC pressure.

Impact: Moderate on aggregator nodes (frequent broadcasts), minimal on full nodes.

Recommendation: Profile in production. If hot, consider zero-copy protobuf encoding or caching marshaled results.

3. Sync service getters query state on every request (pkg/sync/sync_service.go:92-104, 106-118)

state, err := daStore.GetStateAtHeight(ctx, d.Height())
if err != nil {
    if !errors.Is(err, store.ErrNotFound) {
        return nil, err
    }
    return &types.P2PData{Message: d, DAHeightHint: 0}, nil
}
return &types.P2PData{Message: d, DAHeightHint: state.DAHeight}, nil

Every p2p GetByHeight triggers a DB lookup for state. For batch sync requests, this could be 100s of queries.

Recommendation: Add an LRU cache for recent state lookups (keyed by height).
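
A minimal sketch of such a cache, assuming hashicorp/golang-lru/v2 would be an acceptable dependency; the lookup callback stands in for the real GetStateAtHeight call and the 1024 size is arbitrary:

import (
    "context"

    lru "github.com/hashicorp/golang-lru/v2"
)

type daHeightCache struct {
    entries *lru.Cache[uint64, uint64] // block height -> DA height
    lookup  func(ctx context.Context, height uint64) (uint64, error)
}

func newDAHeightCache(lookup func(context.Context, uint64) (uint64, error)) (*daHeightCache, error) {
    c, err := lru.New[uint64, uint64](1024)
    if err != nil {
        return nil, err
    }
    return &daHeightCache{entries: c, lookup: lookup}, nil
}

func (c *daHeightCache) get(ctx context.Context, height uint64) (uint64, error) {
    if v, ok := c.entries.Get(height); ok {
        return v, nil
    }
    v, err := c.lookup(ctx, height)
    if err != nil {
        return 0, err
    }
    c.entries.Add(height, v)
    return v, nil
}

Heights already seen in a peer batch would then hit the cache instead of the DB; heights that are not yet DA-included should simply not be cached so they can be re-resolved later.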


🧪 Test Coverage & Reliability

✅ Strengths

  • Binary compatibility tests: Excellent coverage in types/p2p_envelope_test.go:92-153 proving old ↔ new wire format works
  • AsyncDARetriever tests: Good coverage of deduplication + worker pool limits (async_da_retriever_test.go)
  • Integration tests: DA hint flow tested end-to-end (sync_service_test.go:98-200)

⚠️ Concerns

1. Missing negative test cases

  • What happens if DA hint points to a non-existent DA height?
  • What if hint is far in the future (DA height 999999)?
  • What if header hint != data hint for the same block?

2. Race conditions not tested
AsyncDARetriever.inFlight is accessed under a mutex (lines 62-63). At first glance the deletion around line 94 looks like it happens outside the lock, which would race with concurrent RequestRetrieval calls.

Current code (async_da_retriever.go:61-76):

func (r *AsyncDARetriever) RequestRetrieval(height uint64) {
    r.mu.Lock()
    defer r.mu.Unlock()
    
    if _, exists := r.inFlight[height]; exists {
        return
    }
    
    select {
    case r.workCh <- height:
        r.inFlight[height] = struct{}{} // ✅ Locked
    default:
        r.logger.Debug().Msg("pool full")
    }
}

func (r *AsyncDARetriever) processRetrieval(height uint64) {
    defer func() {
        r.mu.Lock()
        delete(r.inFlight, height) // ✅ Locked
        r.mu.Unlock()
    }()
    // ...
}

On closer review this is safe: the deferred cleanup also takes the mutex, so all map access is protected. False alarm on my part.

3. Codecov shows 66% patch coverage with 93 missing lines
Key gaps:

  • pkg/sync/sync_service.go: 29 missing lines (DA hint append logic)
  • block/internal/syncing/syncer.go: 35 missing lines (hint-triggered retrieval)

Recommendation: Add tests for:

// Test DA hint triggers targeted retrieval
event := common.DAHeightEvent{
    Source: common.SourceP2P,
    DaHeightHints: [2]uint64{100, 100},
    // ...
}
// Assert asyncDARetriever.RequestRetrieval(100) was called

📚 Documentation & Developer Experience

✅ Strengths

  • PR description clear: Explains the breaking change and migration strategy
  • Inline comments: Key decisions are documented (e.g., binary compatibility, TODO markers)
  • Type comments: P2PEnvelope, AsyncDARetriever have good package-level docs

⚠️ Concerns

1. Missing migration guide
The PR notes this is a breaking change requiring:

  • P2P network protocol upgrade
  • P2P store migration

But there's no:

  • Migration script for existing p2p stores
  • Rollout instructions (blue/green? rolling restart?)
  • Compatibility matrix (which versions can talk to which?)

Per review comments, @alpe added binary-compatible types, but tac0turtle asks (2026-01-12):

"was this tested on an existing network? If not, please do that before merging"

2. TODO comment suggests incomplete work (syncer.go:680)
The TODO references this PR (#2891), implying the feature isn't done. This should be:

  • Resolved before merge, OR
  • Moved to a follow-up issue with clear acceptance criteria

3. No update to CLAUDE.md or docs
The architecture has changed (new p2p types, async retrieval pattern), but CLAUDE.md doesn't mention:

  • The DA hints optimization
  • The P2PEnvelope pattern for future reference

🚀 Migration Strategy & Backward Compatibility

✅ Strengths

  • Binary compatibility achieved: P2PSignedHeader/P2PData protobuf types use optional fields, allowing old nodes to ignore hints
  • Graceful fallback: If hint is 0, system falls back to sequential DA scanning (syncer.go:606)

⚠️ Critical Concerns

1. No store migration provided
Existing p2p stores contain SignedHeader/Data objects. After upgrade:

  • New code expects P2PSignedHeader/P2PData
  • Old entries have no hints

What happens? Code should handle this (0 hint = no optimization), but was this tested per tac0turtle's comment?

Recommendation:

  • Add integration test: "Load p2p store from v1.0, upgrade to v2.0, verify sync works"
  • Provide store migration tool OR document that stores can be safely rebuilt

2. Network partition risk during rollout
If 50% of network upgrades first:

  • Old → New messages: Old nodes send SignedHeader, new nodes receive as P2PSignedHeader{hint: 0} ✅ Works
  • New → Old messages: New nodes send P2PSignedHeader{hint: 100}, old nodes... ?

Question: Do old protobuf decoders ignore unknown fields? If not, this could crash old nodes.

Testing needed:

// Test mixed-version network
oldNode := startNodeV1()
newNode := startNodeV2()
connectPeers(oldNode, newNode)
// Verify bidirectional message flow works

3. Forced network-wide upgrade
The PR description states:

"Please note: this is a breaking change to the p2p network and store"

But review comments suggest binary compatibility was added later. Clarify:

  • Can v1 and v2 nodes coexist on the network?
  • What's the upgrade procedure for production networks?

🎯 Summary & Recommendations

Priority Recommendations

🔴 Critical (Must Fix Before Merge)

  1. Verify network compatibility: Test old ↔ new node communication in mixed-version network
  2. Resolve TODO in syncer.go:680: Either complete DA-sourced hint population or document why it's deferred
  3. Test on existing network: Per tac0turtle's requirement (2026-01-12 comment)
  4. Document migration path: Store migration + rollout procedure

🟡 High Priority (Should Fix)

  1. Add DA hint bounds checking: Prevent malicious/invalid hints from triggering expensive DA queries (Security)
  2. Fix AsyncDARetriever.inFlight memory leak: Add TTL-based cleanup for stale entries (Performance)
  3. Add retry logic to AsyncDARetriever: Handle transient DA failures (Resilience)
  4. Improve test coverage: Add negative cases + hint-triggered retrieval tests (Quality)

🟢 Nice to Have (Future Work)

  1. Add metrics: DAHintAppendFailures, DARetrievalDropped, DAHintCacheHits (Observability)
  2. Optimize state lookups: Cache recent state queries in sync service getters (Performance)
  3. Type-safe marshaling: Replace type switch with interface-based approach (Maintainability)
  4. Update CLAUDE.md: Document new p2p patterns for future contributors (Documentation)

Overall Assessment

Architecture: 8/10 - Clean design with good separation of concerns. Minor type safety concerns.

Security: 7/10 - No critical vulnerabilities, but untrusted hint injection needs bounds checking.

Performance: 8/10 - Excellent optimization potential (targeted DA retrieval), minor resource leak concerns.

Reliability: 6/10 - Missing retry logic, silent error swallowing, and incomplete migration strategy are concerning.

Testing: 7/10 - Good coverage of happy paths, needs more negative/edge case testing.

Documentation: 6/10 - Code is documented, but migration guide and network compatibility matrix are missing.

Overall: This is a valuable optimization, but the migration strategy needs clarification before merging to production. The CI is still running - I recommend waiting for test results and verifying the TODO at syncer.go:680 is resolved.


Next Steps

  1. ✅ Wait for CI to complete (currently 5 jobs in progress)
  2. Address the critical recommendations above
  3. Verify with @tac0turtle that network testing has been completed
  4. Resolve the TODO comment or move to follow-up issue

@github-actions
Contributor

github-actions bot commented Nov 27, 2025

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build: ✅ passed | Format: ⏩ skipped | Lint: ✅ passed | Breaking: ✅ passed | Updated (UTC): Jan 28, 2026, 4:55 PM

@alpe alpe changed the title Alex/hints feat: DA hints in p2p Nov 28, 2025
@codecov

codecov bot commented Nov 28, 2025

Codecov Report

❌ Patch coverage is 66.30435% with 93 lines in your changes missing coverage. Please review.
✅ Project coverage is 56.05%. Comparing base (4252154) to head (b8ec42f).

Files with missing lines | Patch % | Lines
pkg/sync/sync_service.go | 57.83% | 21 missing, 14 partials ⚠️
types/p2p_envelope.go | 53.42% | 29 missing, 5 partials ⚠️
block/internal/syncing/syncer.go | 58.62% | 12 missing ⚠️
block/internal/syncing/async_da_retriever.go | 86.53% | 6 missing, 1 partial ⚠️
block/internal/submitting/da_submitter.go | 80.95% | 2 missing, 2 partials ⚠️
pkg/store/store.go | 0.00% | 1 missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2891      +/-   ##
==========================================
+ Coverage   55.55%   56.05%   +0.50%     
==========================================
  Files         116      118       +2     
  Lines       11477    11706     +229     
==========================================
+ Hits         6376     6562     +186     
- Misses       4401     4425      +24     
- Partials      700      719      +19     
Flag: combined | Coverage: 56.05% <66.30%> (+0.50%) ⬆️

Flags with carried forward coverage won't be shown.


alpe added 3 commits November 28, 2025 17:20
* main:
  refactor: omit unnecessary reassignment (#2892)
  build(deps): Bump the all-go group across 5 directories with 6 updates (#2881)
  chore: fix inconsistent method name in retryWithBackoffOnPayloadStatus comment (#2889)
  fix: ensure consistent network ID usage in P2P subscriber (#2884)
cache.SetHeaderDAIncluded(headerHash.String(), res.Height, header.Height())
hashes[i] = headerHash
}
if err := s.headerDAHintAppender.AppendDAHint(ctx, res.Height, hashes...); err != nil {
Contributor Author

This is where the DA height is passed to the sync service to update the p2p store

Msg("P2P event with DA height hint, triggering targeted DA retrieval")

// Trigger targeted DA retrieval in background via worker pool
s.asyncDARetriever.RequestRetrieval(daHeightHint)
Contributor Author

This is where the "fetch from DA" is triggered for the current block event height

type SignedHeaderWithDAHint = DAHeightHintContainer[*types.SignedHeader]
type DataWithDAHint = DAHeightHintContainer[*types.Data]

type DAHeightHintContainer[H header.Header[H]] struct {
Contributor Author

@alpe alpe Dec 1, 2025

This is a data container to persist the DA hint together with the block header or data.
types.SignedHeader and types.Data are used all over the place, so I did not modify them but instead introduced this type for the p2p store and transfer only.

It may make sense to make this a Proto type. WDYT?

return nil
}

func (s *SyncService[V]) AppendDAHint(ctx context.Context, daHeight uint64, hashes ...types.Hash) error {
Contributor Author

Stores the DA height hints

@alpe alpe marked this pull request as ready for review December 1, 2025 09:32
@tac0turtle
Contributor

If the DA hint is not in the proto, how do other nodes get knowledge of the hint?

Also, how would an existing network handle using this feature? It's breaking, so is it safe to upgrade?

"github.com/evstack/ev-node/block/internal/cache"
"github.com/evstack/ev-node/block/internal/common"
"github.com/evstack/ev-node/block/internal/da"
coreda "github.com/evstack/ev-node/core/da"
Member

nit: gci linter

Member

@julienrbrt julienrbrt left a comment

Nice! It really makes sense.

I share the same concern as @tac0turtle however about the upgrade strategy given it is p2p breaking.

julienrbrt previously approved these changes Dec 2, 2025
@alpe
Contributor Author

alpe commented Dec 2, 2025

If the DA hint is not in the proto, how do other nodes get knowledge of the hint?

The sync_service wraps the header/data payload in a DAHeightHintContainer object that is passed upstream to the p2p layer. When the DA height is known, the store is updated.

Also, how would an existing network handle using this feature? It's breaking, so is it safe to upgrade?

It is a breaking change. Instead of signed header or data types, the p2p network exchanges DAHeightHintContainer. This would be incompatible. Also the existing p2p stores would need migration to work.

@julienrbrt
Member

julienrbrt commented Dec 4, 2025

Could we broadcast both until every network is updated? Then, in a final release, we can basically discard the previous one.

@alpe
Contributor Author

alpe commented Dec 5, 2025

FYI: this PR is missing a migration strategy for the p2p store (and ideally the network).

* main:
  refactor(sequencers): persist prepended batch (#2907)
  feat(evm): add force inclusion command (#2888)
  feat: DA client, remove interface part 1: copy subset of types needed for the client using blob rpc. (#2905)
  feat: forced inclusion (#2797)
  fix: fix and cleanup metrics (sequencers + block) (#2904)
  build(deps): Bump mdast-util-to-hast from 13.2.0 to 13.2.1 in /docs in the npm_and_yarn group across 1 directory (#2900)
  refactor(block): centralize timeout in client (#2903)
  build(deps): Bump the all-go group across 2 directories with 3 updates (#2898)
  chore: bump default timeout (#2902)
  fix: revert default db (#2897)
  refactor: remove obsolete // +build tag (#2899)
  fix:da visualiser namespace  (#2895)
alpe added 3 commits December 15, 2025 10:52
* main:
  chore: execute goimports to format the code (#2924)
  refactor(block)!: remove GetLastState from components (#2923)
  feat(syncing): add grace period for missing force txs inclusion (#2915)
  chore: minor improvement for docs (#2918)
  feat: DA Client remove interface part 2,  add client for celestia blob api   (#2909)
  chore: update rust deps (#2917)
  feat(sequencers/based): add based batch time (#2911)
  build(deps): Bump golangci/golangci-lint-action from 9.1.0 to 9.2.0 (#2914)
  refactor(sequencers): implement batch position persistance (#2908)
github-merge-queue bot pushed a commit that referenced this pull request Dec 15, 2025

## Overview

Temporary fix until #2891.
After #2891 the verification for p2p blocks will be done in the
background.

ref: #2906

@alpe
Contributor Author

alpe commented Dec 15, 2025

I have added 2 new types for the p2p store that are binary compatible with types.Data and SignedHeader. With this, we should be able to roll this out without breaking the in-flight p2p data and store.
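
A hedged sketch of the kind of round-trip test that would back this claim, assuming the generated pb.P2PData/pb.Data types named in the review above (exact package path and field sets are assumptions) and relying on proto decoders tolerating unknown fields:

// Not from the PR: illustrative wire-compatibility check using
// google.golang.org/protobuf/proto and the generated pb package.
func TestP2PDataWireCompatSketch(t *testing.T) {
    hint := uint64(100)
    newBytes, err := proto.Marshal(&pb.P2PData{DaHeightHint: &hint})
    if err != nil {
        t.Fatal(err)
    }
    var legacy pb.Data
    if err := proto.Unmarshal(newBytes, &legacy); err != nil {
        t.Fatalf("old decoder rejected new wire format: %v", err)
    }

    oldBytes, err := proto.Marshal(&pb.Data{})
    if err != nil {
        t.Fatal(err)
    }
    var upgraded pb.P2PData
    if err := proto.Unmarshal(oldBytes, &upgraded); err != nil {
        t.Fatalf("new decoder rejected old wire format: %v", err)
    }
    if upgraded.DaHeightHint != nil {
        t.Fatal("expected no hint when decoding legacy bytes")
    }
}

The compatibility only holds while the mirror types keep identical field numbers and the hint uses a previously unused tag.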

julienrbrt previously approved these changes Dec 16, 2025
alpe added 3 commits December 19, 2025 17:00
* main:
  feat: use DA timestamp (#2939)
  chore: improve code comments clarity (#2943)
  build(deps): bump libp2p (#2937)
(cherry picked from commit ad3e21b)
julienrbrt previously approved these changes Dec 19, 2025
* main:
  fix: make evm_execution more robust (#2942)
  fix(sequencers/single): deterministic queue (#2938)
  fix(block): fix init logic sequencer for da epoch fetching (#2926)
github-merge-queue bot pushed a commit that referenced this pull request Jan 2, 2026
Introduce envelope for headers on DA to fail fast on unauthorized
content.
Similar approach as in #2891 with a binary compatible sibling type that
carries the additional information.
 
* Add DAHeaderEnvelope type to wrap signed headers on DA
  * Binary compatible with the `SignedHeader` proto type
  * Includes a signature of the plain content
* DARetriever checks for a valid signature early in the process
* Supports `SignedHeader` for legacy compatibility until the first signed envelope is read
alpe added 2 commits January 8, 2026 10:06
* main:
  chore: fix some minor issues in the comments (#2955)
  feat: make reaper poll duration configurable (#2951)
  chore!: move sequencers to pkg (#2931)
  feat: Ensure Header integrity on DA (#2948)
  feat(testda): add header support with GetHeaderByHeight method (#2946)
  chore: improve code comments clarity (#2947)
  chore(sequencers): optimize store check (#2945)
@tac0turtle
Contributor

CI seems to be having some issues; can these be fixed?

Also, was this tested on an existing network? If not, please do that before merging.

alpe added 10 commits January 19, 2026 09:46
* main:
  fix: inconsistent state detection and rollback (#2983)
  chore: improve graceful shutdown restarts (#2985)
  feat(submitting): add posting strategies (#2973)
  chore: adding syncing tracing (#2981)
  feat(tracing): adding block production tracing (#2980)
  feat(tracing): Add Store, P2P and Config tracing (#2972)
  chore: fix upgrade test (#2979)
  build(deps): Bump github.com/ethereum/go-ethereum from 1.16.7 to 1.16.8 in /execution/evm/test in the go_modules group across 1 directory (#2974)
  feat(tracing): adding tracing to DA client (#2968)
  chore: create onboarding skill  (#2971)
  test: add e2e tests for force inclusion (part 2) (#2970)
  feat(tracing): adding eth client tracing (#2960)
  test: add e2e tests for force inclusion (#2964)
  build(deps): Bump the all-go group across 4 directories with 10 updates (#2969)
  fix: Fail fast when executor ahead (#2966)
  feat(block): async epoch fetching (#2952)
  perf: tune badger defaults and add db bench (#2950)
  feat(tracing): add tracing to EngineClient (#2959)
  chore: inject W3C headers into engine client and eth client (#2958)
  feat: adding tracing for Executor and added initial configuration (#2957)
* main:
  feat(tracing): tracing part 9 sequencer (#2990)
  build(deps): use mainline go-header (#2988)
* main:
  chore: update calculator for strategies  (#2995)
  chore: adding tracing for da submitter (#2993)
  feat(tracing): part 10 da retriever tracing (#2991)
  chore: add da posting strategy to docs (#2992)
* main:
  build(deps): Bump the all-go group across 5 directories with 5 updates (#2999)
  feat(tracing): adding forced inclusion tracing (#2997)
* main:
  feat(tracing): add store tracing (#3001)
  feat: p2p exchange wrapper  (#2855)
* main:
  fix(docs): remove blog link from sidebar to fix 404 (#3014)
  build(deps): Bump github.com/cometbft/cometbft from 0.38.20 to 0.38.21 in /execution/evm/test in the go_modules group across 1 directory (#3011)
  refactor: use slices.Contains to simplify code (#3010)
  chore: Bump mermaid version and dependencies (#3009)
  chore: Bump github.com/consensys/gnark-crypto only (#3008)
  test: evm contract interaction (#3006)
  chore: remove redundant log (#3007)
  fix: return values correctly not nil (#3004)
  feat: expose execution client params to ev-node (#2982)
alpe added 2 commits January 27, 2026 16:09
* main:
  build(deps): Bump the all-go group across 3 directories with 1 update (#3015)
* main:
  ci: strip app prefix (#3028)
  ci: fix release workflow (#3027)
  chore: prep apps (#3025)
  build: fix docker-compose for evm (#3022)
  chore: prepare execution release (#3021)
  chore: prep changelog (#3020)
  refactor(e2e): extract shared test helpers to DockerTestSuite (#3017)
  feat: High availabilty via RAFT (#2987)
  chore: bump to core rc.1 (#3018)
