feat(ethexe-consensus): mini-announces for instant injected TX promises #5321
Conversation
Summary of Changes (Gemini Code Assist)
This pull request optimizes the transaction promise delivery mechanism in the ethexe-consensus module. By implementing a 'mini-announce' flow, the system can now inject and process transactions immediately after the primary block announce is computed, rather than waiting for the next full block cycle. This significantly improves responsiveness for injected transactions while maintaining system stability through a configurable cap on mini-announces per block.
Code Review
This pull request introduces a state machine update to support 'mini-announces' in the ethexe consensus layer, allowing for more granular transaction processing within a block cycle. While the implementation enables this new flow, there are critical issues regarding state transitions in process_new_head and process_computed_announce that could lead to dropped batch commitments or deadlocks in the subordinate state machine. I have provided feedback on these critical logic gaps and a medium-severity suggestion for handling redundant mini-announce inclusions.
```rust
let state = Coordinator::create(self.ctx, self.validators, batch, self.block)?;

let state = match next_block {
    Some(block) => state.process_new_head(block)?,
    None => state,
};
```
In a multi-validator setup (where signatures_threshold > 1), Coordinator::create returns a Coordinator state that must wait for signatures from other validators. However, the code immediately calls state.process_new_head(block) on this new state. Since Coordinator does not override process_new_head, it uses the default implementation which transitions to Initial. This effectively drops the coordination for the previous block immediately upon the arrival of the next block, making it impossible to collect signatures and commit the batch.
The state machine needs to be able to handle the coordination of the previous block while simultaneously starting the processing of the new block, or the transition to Initial must be deferred until coordination is complete.
Fixed. The code now checks state.is_initial() before passing next_block. For threshold>1, the Coordinator keeps running to collect validation replies, and the next block arrives naturally via the service event loop.
Updated in 102613a. Coordinator now overrides process_new_head to buffer the block instead of dying. When submission completes, next_block is passed to Initial. Works for all threshold values.
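For readers following the thread, here is a minimal sketch of the buffering idea described above. The types and method names are simplified stand-ins, not the actual ethexe-consensus API; the point is only that a new head is remembered instead of resetting the in-flight coordination.

```rust
// Hypothetical, simplified stand-ins for the real Coordinator state machine.
#[derive(Debug, Clone, PartialEq)]
struct SimpleBlockData(u64);

struct Coordinator {
    // Head that arrived while signatures were still being collected.
    next_block: Option<SimpleBlockData>,
}

impl Coordinator {
    // Instead of transitioning to Initial (which would drop the in-flight
    // batch), remember the new head and keep waiting for validation replies.
    fn process_new_head(&mut self, block: SimpleBlockData) {
        self.next_block = Some(block);
    }

    // When batch submission completes, hand the buffered head over so the
    // next state can start processing it immediately.
    fn into_next_block(self) -> Option<SimpleBlockData> {
        self.next_block
    }
}

fn main() {
    let mut coordinator = Coordinator { next_block: None };
    coordinator.process_new_head(SimpleBlockData(101));
    assert_eq!(coordinator.into_next_block(), Some(SimpleBlockData(101)));
}
```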
```rust
fn process_new_head(mut self, block: SimpleBlockData) -> Result<ValidatorState> {
    if let State::ReadyForMiniAnnounce { last_announce_hash } = &self.state {
        // Create batch commitment before transitioning to Initial for the new head.
        // This defers batch creation from block N's announce-compute time to block N+1's
        // arrival, but ensures the batch is still created before processing the new block.
        let last_announce_hash = *last_announce_hash;
        self.next_block = Some(block);
        self.state = State::AggregateBatchCommitment {
            future: self
                .ctx
                .core
                .batch_manager
                .clone()
                .create_batch_commitment(self.block, last_announce_hash)
                .boxed(),
        };
        Ok(self.into())
    } else {
        DefaultProcessing::new_head(self, block)
    }
}
```
The process_new_head implementation currently only handles the ReadyForMiniAnnounce state. If a new ETH block arrives while the producer is in the WaitingAnnounceComputed state (i.e., waiting for a mini-announce to be computed), it will fall through to DefaultProcessing::new_head, which transitions to Initial. This causes the producer to drop its state for the current block without ever triggering the AggregateBatchCommitment. As a result, the batch commitment for the block that was just finished will be lost.
You should handle WaitingAnnounceComputed by triggering the batch commitment using the last successfully computed announce hash (which can be retrieved from the database as the parent of the pending mini-announce).
This is pre-existing behavior, not introduced by this PR. The original code also had WaitingAnnounceComputed with default process_new_head → Initial. The mini-announce changes didn't modify this path.
```diff
     mut self,
     computed_announce_hash: HashOf<Announce>,
 ) -> Result<ValidatorState> {
     match &self.state {
         State::WaitingAnnounceComputed { announce_hash }
             if *announce_hash == computed_announce_hash =>
         {
-            if self.is_validator {
-                Participant::create(self.ctx, self.block, self.producer)
-            } else {
-                Initial::create(self.ctx)
-            }
+            // Enter waiting state for more announces (mini-announces).
+            // The Participant transition happens when the validation request arrives.
+            self.state = State::ReadyForMoreAnnounces;
+            Ok(self.into())
         }
```
When a subordinate finishes computing an announce and enters ReadyForMoreAnnounces, it must check if a ValidationRequest from the producer is already present in ctx.pending_events. If the producer sent the validation request while the subordinate was still computing the announce, the subordinate will now hang in ReadyForMoreAnnounces indefinitely (or until another mini-announce arrives), missing the opportunity to participate in the batch commitment.
```rust
fn process_computed_announce(
    mut self,
    computed_announce_hash: HashOf<Announce>,
) -> Result<ValidatorState> {
    match &self.state {
        State::WaitingAnnounceComputed { announce_hash }
            if *announce_hash == computed_announce_hash =>
        {
            self.state = State::ReadyForMoreAnnounces;
            // Check if we should already transition to Participant
            if self.is_validator
                && self.ctx.pending_events.iter().any(|e| {
                    matches!(e, PendingEvent::ValidationRequest(r) if r.address() == self.producer)
                })
            {
                return Participant::create(self.ctx, self.block, self.producer);
            }
            Ok(self.into())
        }
        _ => DefaultProcessing::computed_announce(self, computed_announce_hash),
    }
}
```
Fixed. Added replay_pending_events() which drains pending events after entering ReadyForMoreAnnounces. Any mini-announces or VRs that arrived during computation are now processed immediately.
```rust
if !newly_included {
    return Ok(self.into());
}
```
If newly_included is false, it means this mini-announce was already included in the database (e.g., by another validator or a previous attempt). In this case, the producer should still transition to WaitingAnnounceComputed(announce_hash) to wait for its computation result, rather than staying in ReadyForMiniAnnounce. Staying in ReadyForMiniAnnounce might lead to redundant attempts to produce the same mini-announce or a failure to progress once the computation finishes.
Suggested change:

```diff
 if !newly_included {
+    self.state = State::WaitingAnnounceComputed(announce_hash);
     return Ok(self.into());
 }
```
This is intentional. If the announce was already included (duplicate from RPC abuse), there's no computation to wait for. Transitioning to WaitingAnnounceComputed would block forever waiting for a computation event that will never come. Staying in ReadyForMiniAnnounce is the correct fallback.
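A minimal sketch of that decision, using illustrative stand-in types rather than the real producer state enum:

```rust
// Illustrative only: why a duplicate (already-included) mini-announce must not
// wait for a computation event that will never arrive.
#[derive(Debug, PartialEq)]
enum ProducerState {
    ReadyForMiniAnnounce,
    WaitingAnnounceComputed([u8; 32]),
}

fn after_include(newly_included: bool, announce_hash: [u8; 32]) -> ProducerState {
    if newly_included {
        // Fresh announce: the compute service will emit a "computed" event.
        ProducerState::WaitingAnnounceComputed(announce_hash)
    } else {
        // Already included earlier: no computation event will ever arrive,
        // so stay ready for the next mini-announce instead of blocking.
        ProducerState::ReadyForMiniAnnounce
    }
}

fn main() {
    assert_eq!(after_include(false, [0u8; 32]), ProducerState::ReadyForMiniAnnounce);
    assert!(matches!(
        after_include(true, [1u8; 32]),
        ProducerState::WaitingAnnounceComputed(_)
    ));
}
```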
Eliminate the 0-12s ETH block wait for injected transaction promise delivery. After the first announce computes, the producer enters ReadyForMiniAnnounce state and immediately creates a new announce when an injected TX arrives, delivering promises in ~400ms.

Key changes:
- Producer: ReadyForMiniAnnounce state with mini-announce creation, batch commitment deferred to next block's process_new_head
- Subordinate: ReadyForMoreAnnounces state accepts mini-announces from producer, transitions to Participant on validation request
- DoS protection: MAX_MINI_ANNOUNCES_PER_BLOCK cap (30)
- Pool drain: queued TXs checked on entering ReadyForMiniAnnounce

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
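A sketch of the DoS cap mentioned in the key changes. The constant value comes from the description above; the struct and method names are hypothetical, not the real producer.rs fields.

```rust
// Illustrative per-block budget for mini-announces (DoS guard).
const MAX_MINI_ANNOUNCES_PER_BLOCK: u32 = 30;

struct MiniAnnounceBudget {
    produced_this_block: u32,
}

impl MiniAnnounceBudget {
    // Only allow a bounded number of mini-announces per ETH block.
    fn try_consume(&mut self) -> bool {
        if self.produced_this_block >= MAX_MINI_ANNOUNCES_PER_BLOCK {
            return false;
        }
        self.produced_this_block += 1;
        true
    }

    // Reset when a new ETH block head arrives.
    fn on_new_block(&mut self) {
        self.produced_this_block = 0;
    }
}

fn main() {
    let mut budget = MiniAnnounceBudget { produced_this_block: 0 };
    assert!((0..MAX_MINI_ANNOUNCES_PER_BLOCK).all(|_| budget.try_consume()));
    assert!(!budget.try_consume()); // further injected TXs wait for the next block
    budget.on_new_block();
    assert!(budget.try_consume());
}
```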
Force-pushed from b2ff58f to 1af3fa5.
```rust
    } else {
        DefaultProcessing::new_head(self, block)
    }
}
```
severity: medium — process_new_head only guards the ReadyForMiniAnnounce state; when a new Ethereum head arrives while the producer is in WaitingAnnounceComputed (computing a mini-announce), it falls through to DefaultProcessing::new_head, which transitions to Initial and silently abandons the current computation.
The batch commitment for block N is intended to be triggered by the next process_new_head call from ReadyForMiniAnnounce (lines 156-174). However, if a TX injection caused a mini-announce (entering WaitingAnnounceComputed again), and a new head arrives before that computation completes, the batch commitment for block N is never created — all injected transactions from the last confirmed announce are deferred to block N+1 in the pool, and block N's commit is permanently skipped.
Consider saving the incoming block as next_block when in WaitingAnnounceComputed and applying it after the computation completes via the existing ReadyForMiniAnnounce → process_new_head path. No test currently covers this race.
Documented as TODO in code (producer.rs:188-193). Pre-existing behavior, not introduced by mini-announces. Also added a batch timer in ReadyForMiniAnnounce (af65fee) so batch commitment fires on a deadline without waiting for the next block.
```rust
        }
        _ => DefaultProcessing::validation_request(self, request),
    }
}
```
severity: low — Non-validator subordinates in ReadyForMoreAnnounces that receive a validation request from the producer hit the _ if request.address() == self.producer arm (line 125), which re-adds the request to pending and stays in Subordinate. On the next mini-announce computation, replay_pending_events drains pending and replays the request, which re-adds it again — creating an unbounded recycle loop until the next block arrives.
The loop is harmless (pending queue is bounded by MAX_PENDING_EVENTS = 10) but wastes processing. Since non-validators never transition to Participant, validation requests are meaningless for them in ReadyForMoreAnnounces. The first arm (line 119) should either drop the request instead of saving it, or the guard could be !self.is_validator rather than relying on the fallthrough arm.
Fixed in f3c2e76. Non-validators in ReadyForMoreAnnounces now drop the VR instead of re-enqueuing it.
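A rough sketch of the resulting dispatch; the names are hypothetical stand-ins for the real subordinate handler, but the decision table matches the fix described above.

```rust
// In ReadyForMoreAnnounces, only validators keep the producer's validation
// request; non-validators drop it instead of re-queuing it forever.
enum VrAction {
    TransitionToParticipant,
    DropRequest,
    DeferToDefault, // not from the producer: fall back to default handling
}

fn handle_validation_request(from_producer: bool, is_validator: bool) -> VrAction {
    match (from_producer, is_validator) {
        (true, true) => VrAction::TransitionToParticipant,
        // Non-validators never become Participant, so the request is useless;
        // re-queuing it would just recycle through replay_pending_events.
        (true, false) => VrAction::DropRequest,
        (false, _) => VrAction::DeferToDefault,
    }
}

fn main() {
    assert!(matches!(handle_validation_request(true, false), VrAction::DropRequest));
    assert!(matches!(
        handle_validation_request(true, true),
        VrAction::TransitionToParticipant
    ));
}
```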
Summary
Introduces mini-announces for the ethexe Producer — after the main block announce is computed, the producer stays in a new ReadyForMiniAnnounce state.
Findings
…unces Non-validators receiving a validation request in ReadyForMoreAnnounces would re-enqueue it as pending, causing replay_pending_events to recycle it indefinitely. Drop it instead since non-validators never transition to Participant. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
…nounce computation Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
@ukint-vs The idea looks good and makes sense, but in the current implementation a separate mini-announce is produced for each injected transaction. So, for me, this is now the major issue that should be fixed before the normal PR review. But, still, the idea to allow the producer to collect transactions within a slot looks good.

I also checked the state switching in subordinate, producer and participant.
When a mini-announce shares the same block_hash as its parent announce, canonical Ethereum events were already processed by the parent. Skip them to avoid duplicate state transitions (message queueing, program creation, etc.). Only injected transactions are processed. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Fixed in 6c02fd7.

/delta-review
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Force-pushed from e5a0e1f to 72c90a8.
…ests Mini-announces defer batch commitment to the next ETH block via process_new_head. Tests without continuous block generation hang because no next block arrives. Flip the default so blocks arrive automatically. The multiple_validators test explicitly opts out. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
/review-delta
Delta Review
Three fixes: skip duplicate canonical events on mini-announces, document the WaitingAnnounceComputed batch-skip race with a TODO, and drop the non-validator VR recycle loop in ReadyForMoreAnnounces — plus flipping the test default to continuous_block_generation=true.
Previous Issues
New Findings
No new issues.
The hack in wait_for() already handles deferred batch commitment by periodically forcing new blocks. Setting continuous_block_generation to true broke the mailbox test which relies on the hack being active (hack is None when continuous_block_generation is true). Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
When hack forces new blocks during batch aggregation, process_new_head was falling through to DefaultProcessing which killed the batch future. Now AggregateBatchCommitment absorbs new heads by updating next_block, letting the batch complete before processing the latest block. Also reverts continuous_block_generation default back to false since the hack already handles deferred batch commitment correctly. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
pending_events stores newest at front (push_front). replay_pending_events was iterating front-to-back, processing child mini-announces before parents. accept_announce rejects children whose parent isn't yet included. Reverse iteration order so parents are processed first. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
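A minimal illustration of the ordering issue with the standard library's VecDeque; the real code stores richer pending events, this only shows the iteration direction.

```rust
use std::collections::VecDeque;

// Events are pushed to the front (newest first), so replay must walk
// back-to-front to process parent announces before their children.
fn main() {
    let mut pending: VecDeque<&str> = VecDeque::new();
    pending.push_front("announce A (parent)");
    pending.push_front("announce B (child of A)");

    // Front-to-back would yield B before A, and the child would be rejected
    // because its parent is not yet included. Reverse iteration fixes that.
    let replay_order: Vec<&str> = pending.iter().rev().copied().collect();
    assert_eq!(replay_order, vec!["announce A (parent)", "announce B (child of A)"]);
}
```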
…ests Mini-announces defer batch commitment to the next ETH block. Without auto block generation, the producer hangs in ReadyForMiniAnnounce and never commits, causing test processes to leak on shutdown. AggregateBatchCommitment now absorbs new heads (71a9eb3), so continuous blocks no longer kill in-flight batches. The multiple_validators test explicitly opts out since it requires manual block control. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Instead of waiting indefinitely for the next ETH block to trigger batch commitment, start a timer (producer_delay duration) when entering ReadyForMiniAnnounce. When it fires, batch commitment proceeds without needing a new head. If a mini-announce or new head arrives first, the timer is naturally reset/superseded. This fixes test hangs where no ETH block arrives to trigger batch commitment. Reverts continuous_block_generation default back to false. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
With 4 live validators (threshold>1), the Coordinator waits for validation replies. Previously next_block was discarded for threshold>1, losing block N+1 entirely. Now Coordinator overrides process_new_head to buffer the block. When submission completes, next_block is passed to Initial. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Replace event-driven mini-announces with timer-based polling per ecol-master's feedback. Every poll_timer tick (producer_delay), check the TX pool: if TXs found, create a batched mini-announce; if empty, create batch commitment immediately.

Changes:
- Remove MAX_MINI_ANNOUNCES_PER_BLOCK, mini_announce_count, process_injected_transaction override
- Rename batch_timer to poll_timer in ReadyForMiniAnnounce
- poll_next_state polls pool on timer, batches TXs naturally
- produce_mini_announce_with_txs takes pre-selected TXs
- Restore threshold_one, threshold_two, code_commitments_only tests

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
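A simplified sketch of the timer-tick decision, assuming a plain Vec as a stand-in for the TX pool; the real code goes through the producer's batch manager and pool APIs.

```rust
enum TickOutcome {
    MiniAnnounce(Vec<String>), // batch all currently queued injected TXs
    BatchCommitment,           // pool empty: proceed to batch commitment
}

fn on_poll_timer_tick(pool: &mut Vec<String>) -> TickOutcome {
    if pool.is_empty() {
        // No pending injected TXs: proceed straight to batch commitment.
        TickOutcome::BatchCommitment
    } else {
        // Drain everything queued since the last tick into one mini-announce,
        // so TXs are batched per producer_delay instead of one announce per TX.
        TickOutcome::MiniAnnounce(pool.drain(..).collect())
    }
}

fn main() {
    let mut pool = vec!["tx1".to_string(), "tx2".to_string()];
    assert!(matches!(on_poll_timer_tick(&mut pool), TickOutcome::MiniAnnounce(txs) if txs.len() == 2));
    assert!(matches!(on_poll_timer_tick(&mut pool), TickOutcome::BatchCommitment));
}
```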
/review-delta
```rust
// batch commitment for this block is skipped. The announces are still in DB
// and will be picked up by the next block's collect_not_committed_predecessors,
// but block-specific code/validator/reward commitments could be missed.
DefaultProcessing::new_head(self, block)
```
The TODO at line 166 is still present. With the removal of MAX_MINI_ANNOUNCES_PER_BLOCK, the producer now spends more time cycling through WaitingAnnounceComputed → ReadyForMiniAnnounce as long as the pool is non-empty. The window where a new block head can arrive in WaitingAnnounceComputed (and miss batch commitment) has grown relative to the old eager-trigger design. Previously, the cap at 30 mini-announces would park the state in ReadyForMiniAnnounce, where the new block correctly triggers batch commitment. Now the only exit is an empty pool, which may not happen under sustained TX load. Consider bumping this TODO to a tracked issue if it can't be fixed here.
Delta Review
Replaces the eager per-TX mini-announce trigger and the MAX_MINI_ANNOUNCES_PER_BLOCK cap with timer-based polling of the TX pool.
Previous Issues
New Findings
When Coordinator buffers a new head but can't reach signature threshold (not enough validators), it hangs forever. Now if a second head arrives while still waiting, Coordinator gives up and transitions to Initial. Also un-comments the validator restart test logic that ecol-master disabled for debugging. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
All 3 issues addressed:

Pre-existing issues (batch lost during WaitingAnnounceComputed) are tracked as known issues.
Force-pushed from 692793a to 4250edc.
Mini-announces chain within the same block (parent-child), but all CDL-bounded loops assumed 1 announce = 1 block. This caused:
- Exponential announce set growth in propagate_announces
- Convergence failure in find_announces_common_predecessor
- Shallow scoring in best_announce (CDL budget consumed intra-block)
- Premature expiry or missed expiry in propagate_one_base_announce
- Recovery failure when the chain has more announces than CDL blocks
- Incorrect batch expiry calculation

Fixes:
- Add leaf_announces() filter for propagation and predecessor search
- Count block transitions (not announce hops) in all CDL-bounded loops
- Filter best_parent_announce to base announces (cross-block parents)
- Fix off-by-one: use > not >= to match S1's <= CDL semantics
- Update S3 theory comment for mini-announce parent rules
- Add 7 tests covering chained mini-announce scenarios

NOTE: Behavioral change — non-base announces now live 1 block longer than before. The old code expired at CDL-1 blocks distance (off-by-one vs S1, which says <= CDL). This is intentional and matches the protocol spec. Two existing test assertions were updated accordingly.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
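For illustration, a self-contained sketch of counting block transitions instead of announce hops when walking an announce chain. The types and traversal are simplified stand-ins for the real announce storage, not the actual announces.rs implementation.

```rust
// Several mini-announces inside one block consume only one unit of the CDL budget.
struct AnnounceNode {
    block_hash: u64,
    parent: Option<usize>, // index of the parent announce, if any
}

fn announces_within_cdl(chain: &[AnnounceNode], start: usize, cdl: u32) -> usize {
    let mut blocks_seen = 0u32;
    let mut visited = 0usize;
    let mut current = Some(start);

    while let Some(idx) = current {
        visited += 1;
        let node = &chain[idx];
        // Only a parent in a *different* block counts toward the CDL budget;
        // mini-announce hops within the same block are free.
        if let Some(parent_idx) = node.parent {
            if chain[parent_idx].block_hash != node.block_hash {
                blocks_seen += 1;
                if blocks_seen > cdl {
                    break;
                }
            }
        }
        current = node.parent;
    }
    visited
}

fn main() {
    // Block 1: base announce (index 0); block 2: announce 1 plus two mini-announces 2, 3.
    let chain = vec![
        AnnounceNode { block_hash: 1, parent: None },
        AnnounceNode { block_hash: 2, parent: Some(0) },
        AnnounceNode { block_hash: 2, parent: Some(1) },
        AnnounceNode { block_hash: 2, parent: Some(2) },
    ];
    // With a budget of 1 block transition, all four announces are still reachable:
    // the three announces in block 2 cost nothing extra.
    assert_eq!(announces_within_cdl(&chain, 3, 1), 4);
}
```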
Force-pushed from 4250edc to 3cd275d.
```rust
        let (announce, _pub_key) = verified_announce.into_parts();
        self.send_announce_for_computation(announce)
    }
    State::ReadyForMoreAnnounces
```
Where is the check of the parent? A new mini-announce must be built on top of the latest one.
Fixed in fc0ea47. ReadyForMoreAnnounces now tracks latest_announce_hash (the just-computed announce). Mini-announces are only accepted if announce.parent == latest_announce_hash. If parent doesn't match, falls through to DefaultProcessing::announce_from_producer which queues as pending with a warning.
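A minimal sketch of that acceptance rule, with hypothetical types in place of the real announce structures:

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct AnnounceHash(u64);

struct Announce {
    parent: AnnounceHash,
}

// A mini-announce is only accepted if it extends the announce we just finished
// computing; anything else falls through to default handling (queued as pending).
fn accept_mini_announce(latest_announce_hash: AnnounceHash, incoming: &Announce) -> bool {
    incoming.parent == latest_announce_hash
}

fn main() {
    let latest = AnnounceHash(7);
    let on_top = Announce { parent: AnnounceHash(7) };
    let stale = Announce { parent: AnnounceHash(3) };
    assert!(accept_mini_announce(latest, &on_top));
    assert!(!accept_mini_announce(latest, &stale));
}
```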
```rust
            State::ReadyForMoreAnnounces
                if request.address() == self.producer && self.is_validator =>
            {
                self.ctx.pending(request);
                Participant::create(self.ctx, self.block, self.producer)
            }
```
What if the producer sends mini-announce MA, then validation request VR, but due to network issues VR arrives before MA?
Fixed in fc0ea47. Before entering Participant, we now check if the VR's head announce is computed locally (db.announce_meta(h).computed). If not computed (MA hasn't arrived yet), the VR is saved to pending and subordinate stays in ReadyForMoreAnnounces. When the MA arrives and computes, replay_pending_events replays the VR, and by then the head is computed.
Edge case: if VR is deferred and no more MAs arrive, VR sits in pending until next ETH block resets to Initial (~12s). Batch fails threshold and is recovered by collect_not_committed_predecessors. Acceptable for a rare network reordering scenario.
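A simplified sketch of the ordering guard, using a HashSet as a stand-in for the computed-announce lookup in the database:

```rust
use std::collections::HashSet;

enum VrDecision {
    EnterParticipant,
    DeferToPending,
}

// Before entering Participant, confirm the request's head announce has already
// been computed locally; otherwise park the request until the announce arrives.
fn handle_validation_request(
    computed_announces: &HashSet<u64>, // stand-in for the db's computed flag
    vr_head_announce: u64,
) -> VrDecision {
    if computed_announces.contains(&vr_head_announce) {
        VrDecision::EnterParticipant
    } else {
        // The mini-announce referenced by the VR has not arrived/computed yet
        // (network reordering): replay the VR after the next announce computes.
        VrDecision::DeferToPending
    }
}

fn main() {
    let computed: HashSet<u64> = [1, 2].into_iter().collect();
    assert!(matches!(handle_validation_request(&computed, 2), VrDecision::EnterParticipant));
    assert!(matches!(handle_validation_request(&computed, 3), VrDecision::DeferToPending));
}
```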
…race Addresses two review comments from Grisha on subordinate mini-announce handling:

1. ReadyForMoreAnnounces now tracks latest_announce_hash. Mini-announces are only accepted if their parent matches the latest computed announce, preventing forks from stale or out-of-order announces.
2. Validation requests are deferred when their head_announce is not yet computed locally (VR arrived before MA due to network reordering). The VR is saved to pending and replayed after the next announce computes. If no more MAs arrive, the next block's process_new_head resets state.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Force-pushed from ed1f17c to fc0ea47.
Superseded by #5352 (two-phase compute). Same goal, simpler approach: depth-1 with canonical-only compute instead of depth-2 mini-announces. Eliminates CDL patches, subordinate state explosion, and gossip reorder complexity.

Reopening: this is the approach that actually delivers 400ms promise latency. Looking for ways to simplify the implementation (reduce CDL patches, subordinate complexity).
Replace ReadyForMoreAnnounces state + replay_pending_events with a simpler 2-state loop: WaitingForAnnounce ↔ WaitingAnnounceComputed. After an announce computes, the subordinate loops back to WaitingForAnnounce instead of entering a third state. VR handling moves to WaitingForAnnounce (checks head_computed directly). Pending events are replayed after each announce computes via process_pending_after_compute. Saves ~130 lines vs the original mini-announces subordinate. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Two off-by-one bugs in the block-aware CDL conversion introduced by mini-announces:

1. best_announce: used `>` instead of `>=` to check blocks_seen against CDL. Scored CDL+1 blocks instead of CDL, giving weight to stale announces outside the commit window.
2. calculate_batch_expiry: broke before examining the announce at the CDL boundary block. Moved the is_base() check before the break so the boundary announce is accounted for. This changes expiry from 1 to 0 for chains where the oldest not-base announce sits at the boundary.

Both bugs follow the same pattern: the boundary condition fires before the announce at that boundary is examined. The master code (simple for-loop) didn't have this issue because the loop body ran before the counter incremented.

Found by Codex structured review (gpt-5.4).

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
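To make the second bug concrete, a small sketch of the examine-then-bound pattern the fix restores; the data and function are illustrative, not the real calculate_batch_expiry.

```rust
// If the loop enforced the CDL bound *before* looking at the announce sitting
// exactly at the boundary block, that announce was silently skipped. Examining
// first and then bounding matches the original simple for-loop semantics.
fn count_examined(block_distances: &[u32], cdl: u32) -> usize {
    let mut examined = 0;
    for &distance in block_distances {
        if distance > cdl {
            // Strictly beyond the window: stop.
            break;
        }
        // Announces at distance == cdl are still inside the commit window.
        examined += 1;
    }
    examined
}

fn main() {
    // Distances (in blocks) of successive announces from the head.
    let distances = [0, 1, 2, 3];
    // With CDL = 2, the announce at the boundary (distance 2) must be counted.
    assert_eq!(count_examined(&distances, 2), 3);
}
```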
The UnknownParent deferral in process_announce pushed to pending_events without checking MAX_PENDING_EVENTS. A Byzantine producer could spam unknown-parent announces to grow the queue unboundedly. Now drops the announce if the queue is full. Found by Codex adversarial challenge (gpt-5.4). Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
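A minimal sketch of the bounded deferral; the constant matches the MAX_PENDING_EVENTS = 10 mentioned earlier in review, but the function and event type are illustrative:

```rust
use std::collections::VecDeque;

// Unknown-parent announces are only deferred while there is room, so a
// Byzantine producer cannot grow the pending queue without limit.
const MAX_PENDING_EVENTS: usize = 10;

fn defer_unknown_parent(pending: &mut VecDeque<String>, announce: String) -> bool {
    if pending.len() >= MAX_PENDING_EVENTS {
        // Queue full: drop the announce instead of queuing it.
        return false;
    }
    pending.push_front(announce);
    true
}

fn main() {
    let mut pending = VecDeque::new();
    for i in 0..MAX_PENDING_EVENTS {
        assert!(defer_unknown_parent(&mut pending, format!("announce-{i}")));
    }
    assert!(!defer_unknown_parent(&mut pending, "spam".to_string()));
}
```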
Rewrite based on actual code tracing and cross-model review findings. Key corrections from this session:
- Programs only init via Ethereum (not from injected TXs)
- Processor ordering is correct as-is (injected first = priority)
- CDL boundary must use >= not > (off-by-one found and fixed)
- Subordinate simplified to 2 states (ReadyForMoreAnnounces removed)
- Pending queue cap enforced during steady-state deferral
- Batch commitment loss is pre-existing, not mini-announces-specific

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
/review-delta
Extract shared patterns to eliminate duplication and centralize block-aware CDL counting logic:
- AnnounceChainWalker: single source of truth for block-transition tracking across 3 call sites (propagate_one_base_announce, best_announce, calculate_batch_expiry)
- Producer::finalize_announce: consolidates announce inclusion, signing, publishing, and state transition from two 96%-identical methods
- Subordinate::transition_to_computing: DRYs the accept → emit → transition sequence used in two call sites

All 82 consensus tests + 21 compute tests pass, clippy clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
These are local Claude Code worktree references that should never be committed. They cause CI failures because git treats them as submodule paths with no matching .gitmodules entry. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Summary

After the main block announce computes, the producer polls the TX pool on a producer_delay interval. If TXs are found, it creates a batched mini-announce; if empty, it creates the batch commitment immediately.

Architecture

Changes

Producer (ethexe/consensus/src/validator/producer.rs)
- ReadyForMiniAnnounce state with poll_timer. Every tick, checks the TX pool via select_for_announce. TXs found → produce_mini_announce_with_txs (batched). Empty → AggregateBatchCommitment.
- process_new_head override: when a new block arrives in ReadyForMiniAnnounce or AggregateBatchCommitment, buffers it in next_block and creates the batch commitment. Passes next_block through Coordinator to Initial.
- process_injected_transaction override: TXs go to the pool via DefaultProcessing; the timer picks them up.

Coordinator (ethexe/consensus/src/validator/coordinator.rs)
- next_block buffering: overrides process_new_head to buffer the block instead of dying. Passes next_block to Initial after submission.

Subordinate (ethexe/consensus/src/validator/subordinate.rs)
- ReadyForMoreAnnounces state: accepts mini-announces from the producer after the first announce computes.
- replay_pending_events: after entering ReadyForMoreAnnounces, replays pending events oldest-first so parent announces are processed before children.
- Validation requests from the producer move validators to Participant::create. Non-validators drop the VR.

Compute (ethexe/compute/src/compute.rs)
- When a mini-announce shares its parent's block_hash, canonical Ethereum events were already processed. Pass empty events; only process injected transactions.

Announces correctness (ethexe/consensus/src/announces.rs, batch/utils.rs)

Mini-announces chain within the same block (parent-child), but all CDL-bounded loops assumed 1 announce = 1 block. Fixed 7 functions:
- propagate_announces: leaf_announces() filter prevents exponential growth
- propagate_one_base_announce
- find_announces_common_predecessor
- best_announce
- best_parent_announce
- recover_announces_chain_if_needed
- calculate_batch_expiry

Behavioral change: non-base announces now live 1 block longer than before. The old code expired at CDL-1 blocks distance (off-by-one vs S1, which says <= CDL). This is intentional and matches the protocol spec. Two existing test assertions updated.

Trade-offs
- Batch commitment is deferred; announces not committed for a block are picked up by the next block's collect_not_committed_predecessors.

Known issues (tracked)
- Batch commitment can be skipped when a new head arrives in WaitingAnnounceComputed (pre-existing).

Test plan
- cargo nextest run -p ethexe-consensus — 85 tests pass (7 new for chained mini-announces)
- cargo nextest run -p ethexe-compute — 21 tests pass
- cargo clippy -p ethexe-consensus — clean

🤖 Generated with Claude Code