
fix(blockassembly): close queue race for children of conflicting parents#808

Merged
oskarszoon merged 5 commits into bsv-blockchain:main from
oskarszoon:fix/blockassembly-conflicting-queue-race
May 4, 2026

Conversation

oskarszoon commented May 4, 2026

Problem

Block assembly produced mining candidates that ValidateBlock rejected with:

parent transaction X of tx Y has no block IDs

SVNode rejected the same blocks with bad-txns-inputs-missingorspent. The dirty parent/child state survives BA pod restarts because it lives in the persistent UTXO store: the parent tx has Conflicting=true while a child that spends its outputs still has Conflicting=false.

Root cause

Three layered gaps in how conflicting state propagates to the block-assembly queue:

  1. Cascade discovery is store-bound. MarkConflictingRecursively walks parent.outputs → recorded spender in the store. A child whose Spend has not been committed yet (still mid-validation, in flight in the BA queue) is invisible to the cascade — the parent's output has no spender row pointing to it.
  2. Cascade hashes were thrown away. ProcessConflicting called MarkConflictingRecursively and discarded the second return value (markedOrder), so even the descendants the cascade did discover never reached any downstream filter.
  3. Dequeue paths checked self-hash only. dequeueDuringBlockMovement filtered by transactionMap.Exists(hash) and losingTxHashesMap.Exists(hash). Neither walked TxInpoints.ParentTxHashes. Same applies to the post-reset / startup queue drain — it dropped everything wholesale without identifying which in-flight items were now invalid.
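To illustrate gap 1, here is a minimal stand-alone sketch of a store-bound cascade. The names (Hash, spenderOf, markConflictingRecursively) are simplified stand-ins, not the real implementation: the walk can only follow spender rows already committed to the store, so an in-flight child whose Spend is uncommitted is never discovered.

```go
package main

import "fmt"

// Hash stands in for chainhash.Hash in this toy model.
type Hash string

// markConflictingRecursively does a BFS from root over committed
// spender rows (spenderOf maps parent hash -> recorded spenders,
// a stand-in for "parent.outputs -> recorded spender" in the store)
// and returns the marked-order slice. Anything whose Spend has not
// been committed yet simply has no row here and is never visited.
func markConflictingRecursively(root Hash, spenderOf map[Hash][]Hash) []Hash {
	marked := []Hash{}
	queue := []Hash{root}
	seen := map[Hash]struct{}{}
	for len(queue) > 0 {
		h := queue[0]
		queue = queue[1:]
		if _, dup := seen[h]; dup {
			continue
		}
		seen[h] = struct{}{}
		marked = append(marked, h) // BFS marked-order slice
		queue = append(queue, spenderOf[h]...)
	}
	return marked
}

func main() {
	// Child C2's Spend is still mid-validation, so P has no spender
	// row for it; only the committed child C1 is reachable.
	spenders := map[Hash][]Hash{"P": {"C1"}}
	fmt.Println(markConflictingRecursively("P", spenders))
	// C2 is invisible to the cascade; only P and C1 are marked.
}
```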

The race window when a node moves a block forward (moveForwardBlock):

T0  parent P added to UTXO store via validator
T1  ProcessConflicting (during moveForwardBlock with ConflictingNodes)
    flags P.Conflicting=true. Cascade walks P.outputs → spenders, finds
    none for child C: C's Spend is not committed yet.
T2  Event loop falls into dequeueDuringBlockMovement to drain whatever
    accumulated during the moveForwardBlock case. Filter only checks
    self-hash. C admitted into subtree.
T3  C lands in subtree. Mining candidate built. Block REJECTED.

A second window exists at startup / Reset: loadUnminedTransactions runs while gRPC AddTx is already enqueueing. If validateUnminedTxInputs cascades a parent + descendants, any in-flight child of those parents already in the queue is admitted by the default-case dequeue once the goroutine starts.

Fix

Two transient drop-set drains, both scoped to the cascade event that produced them. Default-case dequeue stays untouched (no hot-path overhead added).

  • ProcessConflicting now returns the BFS marked-order slice from MarkConflictingRecursively (previously discarded).
  • processConflictingTransactions builds a transient map[chainhash.Hash]struct{} from that slice and threads it through RemainderTransactionParams.ConflictingHashes into dequeueDuringBlockMovement. The drain rejects any node whose own hash is in the set OR whose TxInpoints.ParentTxHashes contains a hash in the set. On parent match the node's hash is also added to the set so any later-in-batch descendant is caught.
  • Interface.DrainQueue(dropHashes) is the generic drain entry point used by BlockAssembler after loadUnminedTransactions to flush in-flight queue items whose parents the cascade just flagged. Implemented on SubtreeProcessor as a thin wrapper over dequeueDuringBlockMovement with skipNotification=true so it is safe to invoke before subtree-announcement listeners are wired up.
  • BlockAssembler.markAsConflicting accumulates cascade hashes in unminedDropHashes. BA.Start drains the queue with that set after loadUnminedTransactions returns and before stp.Start fires the goroutine. BA.Reset's postProcessFn does the same after its own loadUnminedTransactions call, before the existing post-postProcess drain.
  • All sets are local — no persistent map on SubtreeProcessor, no overhead on the always-on default-case dequeue.
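The drop-set filter can be sketched as follows. This is a hypothetical miniature, not the actual dequeueDuringBlockMovement code: it rejects any item whose own hash is in the set or whose parent list intersects it, and on a parent match adds the child's hash so later-in-batch descendants are caught without a store round-trip.

```go
package main

import "fmt"

// Hash stands in for chainhash.Hash; queuedTx.parents stands in for
// TxInpoints.ParentTxHashes.
type Hash string

type queuedTx struct {
	hash    Hash
	parents []Hash
}

// filterDrain keeps only queue items that are neither in the drop set
// nor spend an output of anything in it. The set is mutated on parent
// matches so the propagation works within a single pass.
func filterDrain(queue []queuedTx, drop map[Hash]struct{}) []queuedTx {
	kept := make([]queuedTx, 0, len(queue))
	for _, tx := range queue {
		if _, bad := drop[tx.hash]; bad {
			continue // self-hash is in the conflicting set
		}
		childOfConflict := false
		for _, p := range tx.parents {
			if _, bad := drop[p]; bad {
				childOfConflict = true
				break
			}
		}
		if childOfConflict {
			drop[tx.hash] = struct{}{} // catch this tx's descendants too
			continue
		}
		kept = append(kept, tx)
	}
	return kept
}

func main() {
	drop := map[Hash]struct{}{"P": {}}
	queue := []queuedTx{
		{hash: "C", parents: []Hash{"P"}}, // child of conflicting P: dropped
		{hash: "D", parents: []Hash{"C"}}, // grandchild: dropped via propagation
		{hash: "X", parents: []Hash{"Z"}}, // unrelated: survives
	}
	for _, tx := range filterDrain(queue, drop) {
		fmt.Println(tx.hash)
	}
	// prints only: X
}
```

Because the set is a local variable scoped to one drain, none of this logic runs on the always-on default-case dequeue.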

Relationship to #806

PR #806 closed the in-memory subtree side of the same incident: validateParentChain now cascade-filters descendants of rejected unmined txs at restart and propagates the conflicting flag to the UTXO store via MarkConflictingRecursively. Operators with the bad state already on disk should pull #806 — that fix unblocks restart by preventing conflicting descendants from being loaded as unmined on the next start.

This PR closes the queue side of the race that #806 addresses for the in-memory subtree state — the in-flight items that arrived via AddTx during the cascade window. The two are complementary; both are needed to fully close the window.

Tests

  • New regression test TestDequeueDuringBlockMovement_RejectsChildOfConflictingParent in services/blockassembly/subtreeprocessor/conflicting_queue_race_test.go drives dequeueDuringBlockMovement directly with a synthetic queue and a conflicting set; asserts the child is dropped, the unrelated tx survives, and the rejected child is added to the set so its descendants are caught later in the same drain.

Full package suites green:

go test -count=1 -timeout=600s -tags testtxmetacache \
  ./services/blockassembly/subtreeprocessor/ \
  ./services/blockassembly/ \
  ./stores/utxo/ \
  ./stores/utxo/sql/ \
  ./stores/txmetacache/
# 911 passed

Lint clean (golangci-lint run --timeout=5m --disable gosec --disable prealloc).

Operational note

For a node already holding the bad parent/child state on disk: pull #806. With #806 in place, the next restart's validateParentChain will detect and cascade the orphans, propagate the flag to the UTXO store, and skip them on load. No special RPC or destructive action needed. This PR adds the queue-side defence so the same race does not re-occur once the node is healthy.

Production observed parent.Conflicting=true with child.Conflicting=false
on teranode-mainnet-eu-1 (v0.15.0-beta-3), producing mining candidates
rejected as bad-txns-inputs-missingorspent. The cascade in
ProcessConflicting / MarkConflictingRecursively only walks recorded
spenders, so a child whose Spend has not yet been committed when the
parent flips conflicting slips past. The dequeue paths (Phase 2
default-case filter and dequeueDuringBlockMovement) had no Conflicting
check on self-hash or on parent inpoints, so the in-flight child landed
in the next mining candidate and the block was rejected by ValidateBlock
with "parent transaction X of tx Y has no block IDs".

Fix:

- ProcessConflicting now returns the BFS marked-order slice from
  MarkConflictingRecursively (previously discarded).
- SubtreeProcessor gains a conflictingMap (separate from removeMap),
  populated by processConflictingTransactions and by
  BlockAssembler.markAsConflicting via a new MarkConflicting/
  GetConflictingMap pair on the Interface.
- Both dequeue filters consult conflictingMap on self-hash and on every
  TxInpoints.ParentTxHashes entry. When a child is rejected because of a
  conflicting parent, its own hash is added to the map so any
  later-arriving descendant is also caught without a store round-trip.
- Reset clears conflictingMap, mirroring removeMap, so it does not leak
  across resets.

Adds a regression test that reproduces the queue-race shape end-to-end:
fails before the fix, passes after.

github-actions Bot commented May 4, 2026

🤖 Claude Code Review

Status: Complete

Summary

This PR addresses a critical production bug where children of conflicting parents could enter block assembly and produce invalid mining candidates. The fix is well-architected and follows the project's defensive engineering approach.

Analysis: No issues found. The implementation correctly closes the identified race window using a transient conflicting-hash set threaded through the block-movement drain path. The approach is minimal, scoped appropriately, and includes regression tests.

Key strengths:

  • Minimal changes confined to the specific race window
  • Strong separation of concerns (transient set not persisted on processor)
  • Comprehensive documentation explaining the race and fix
  • Regression test that directly validates the production scenario
  • Default-case dequeue path intentionally left unchanged

The code is production-ready.

oskarszoon requested a review from icellan, May 4, 2026 13:27

github-actions Bot commented May 4, 2026

Benchmark Comparison Report

Baseline: main (unknown)

Current: PR-808 (a676e37)

Summary

  • Regressions: 0
  • Improvements: 0
  • Unchanged: 142
  • Significance level: p < 0.05
All benchmark results (sec/op)
Benchmark Baseline Current Change p-value
_NewBlockFromBytes-4 1.676µ 1.663µ ~ 0.500
SplitSyncedParentMap_SetIfNotExists/256_buckets-4 61.57n 61.72n ~ 0.100
SplitSyncedParentMap_SetIfNotExists/16_buckets-4 61.75n 61.91n ~ 1.000
SplitSyncedParentMap_SetIfNotExists/1_bucket-4 62.07n 61.80n ~ 0.700
SplitSyncedParentMap_ConcurrentSetIfNotExists/256_buckets... 30.60n 30.98n ~ 1.000
SplitSyncedParentMap_ConcurrentSetIfNotExists/16_buckets_... 53.07n 54.73n ~ 0.100
SplitSyncedParentMap_ConcurrentSetIfNotExists/1_bucket_pa... 112.1n 117.2n ~ 0.200
MiningCandidate_Stringify_Short-4 267.9n 264.6n ~ 0.100
MiningCandidate_Stringify_Long-4 1.947µ 1.914µ ~ 0.100
MiningSolution_Stringify-4 985.4n 976.8n ~ 0.100
BlockInfo_MarshalJSON-4 1.790µ 1.768µ ~ 0.100
NewFromBytes-4 129.4n 142.2n ~ 0.600
Mine_EasyDifficulty-4 66.88µ 66.96µ ~ 0.700
Mine_WithAddress-4 7.278µ 7.022µ ~ 0.700
DirectSubtreeAdd/4_per_subtree-4 62.53n 61.31n ~ 1.000
DirectSubtreeAdd/64_per_subtree-4 31.68n 31.87n ~ 0.500
DirectSubtreeAdd/256_per_subtree-4 30.59n 30.54n ~ 0.400
DirectSubtreeAdd/1024_per_subtree-4 29.23n 29.47n ~ 0.100
DirectSubtreeAdd/2048_per_subtree-4 28.76n 29.01n ~ 0.100
SubtreeProcessorAdd/4_per_subtree-4 279.8n 284.1n ~ 0.700
SubtreeProcessorAdd/64_per_subtree-4 279.6n 277.0n ~ 1.000
SubtreeProcessorAdd/256_per_subtree-4 276.9n 280.5n ~ 0.700
SubtreeProcessorAdd/1024_per_subtree-4 268.0n 272.4n ~ 0.100
SubtreeProcessorAdd/2048_per_subtree-4 268.9n 270.5n ~ 0.700
SubtreeProcessorRotate/4_per_subtree-4 276.1n 274.5n ~ 0.100
SubtreeProcessorRotate/64_per_subtree-4 271.0n 273.2n ~ 0.700
SubtreeProcessorRotate/256_per_subtree-4 272.1n 275.7n ~ 1.000
SubtreeProcessorRotate/1024_per_subtree-4 270.2n 273.1n ~ 0.100
SubtreeNodeAddOnly/4_per_subtree-4 54.74n 55.08n ~ 0.700
SubtreeNodeAddOnly/64_per_subtree-4 34.57n 35.35n ~ 0.100
SubtreeNodeAddOnly/256_per_subtree-4 33.61n 34.07n ~ 0.100
SubtreeNodeAddOnly/1024_per_subtree-4 32.67n 33.09n ~ 0.100
SubtreeCreationOnly/4_per_subtree-4 113.3n 113.1n ~ 0.700
SubtreeCreationOnly/64_per_subtree-4 404.2n 399.9n ~ 0.400
SubtreeCreationOnly/256_per_subtree-4 1.342µ 1.473µ ~ 0.100
SubtreeCreationOnly/1024_per_subtree-4 4.375µ 4.529µ ~ 0.400
SubtreeCreationOnly/2048_per_subtree-4 8.149µ 8.494µ ~ 0.100
SubtreeProcessorOverheadBreakdown/64_per_subtree-4 274.4n 270.2n ~ 0.100
SubtreeProcessorOverheadBreakdown/1024_per_subtree-4 275.3n 273.7n ~ 0.200
ParallelGetAndSetIfNotExists/1k_nodes-4 823.7µ 622.0µ ~ 0.700
ParallelGetAndSetIfNotExists/10k_nodes-4 1.364m 1.352m ~ 1.000
ParallelGetAndSetIfNotExists/50k_nodes-4 6.752m 6.694m ~ 0.100
ParallelGetAndSetIfNotExists/100k_nodes-4 13.51m 13.35m ~ 0.700
SequentialGetAndSetIfNotExists/1k_nodes-4 668.2µ 661.9µ ~ 0.700
SequentialGetAndSetIfNotExists/10k_nodes-4 2.802m 2.776m ~ 0.100
SequentialGetAndSetIfNotExists/50k_nodes-4 10.74m 10.30m ~ 0.200
SequentialGetAndSetIfNotExists/100k_nodes-4 20.67m 20.02m ~ 0.200
ProcessOwnBlockSubtreeNodesParallel/1k_nodes-4 674.9µ 650.4µ ~ 0.100
ProcessOwnBlockSubtreeNodesParallel/10k_nodes-4 4.289m 4.276m ~ 0.400
ProcessOwnBlockSubtreeNodesParallel/100k_nodes-4 16.93m 16.95m ~ 1.000
ProcessOwnBlockSubtreeNodesSequential/1k_nodes-4 739.2µ 693.9µ ~ 0.100
ProcessOwnBlockSubtreeNodesSequential/10k_nodes-4 6.261m 5.955m ~ 0.100
ProcessOwnBlockSubtreeNodesSequential/100k_nodes-4 43.14m 38.27m ~ 0.100
BlockAssembler_AddTx-4 0.02839n 0.02962n ~ 0.700
AddNode-4 10.75 10.82 ~ 1.000
AddNodeWithMap-4 11.09 10.86 ~ 1.000
DiskTxMap_SetIfNotExists-4 3.941µ 3.957µ ~ 1.000
DiskTxMap_SetIfNotExists_Parallel-4 3.954µ 3.791µ ~ 0.700
DiskTxMap_ExistenceOnly-4 322.9n 306.0n ~ 0.200
Queue-4 200.1n 204.9n ~ 0.700
AtomicPointer-4 8.131n 8.121n ~ 0.700
ReorgOptimizations/DedupFilterPipeline/Old/10K-4 808.6µ 809.3µ ~ 0.700
ReorgOptimizations/DedupFilterPipeline/New/10K-4 783.5µ 775.4µ ~ 0.700
ReorgOptimizations/AllMarkFalse/Old/10K-4 114.7µ 125.8µ ~ 0.100
ReorgOptimizations/AllMarkFalse/New/10K-4 58.19µ 58.31µ ~ 0.400
ReorgOptimizations/HashSlicePool/Old/10K-4 69.46µ 61.46µ ~ 0.100
ReorgOptimizations/HashSlicePool/New/10K-4 11.77µ 11.79µ ~ 0.200
ReorgOptimizations/NodeFlags/Old/10K-4 5.262µ 5.314µ ~ 0.400
ReorgOptimizations/NodeFlags/New/10K-4 1.798µ 1.815µ ~ 0.400
ReorgOptimizations/DedupFilterPipeline/Old/100K-4 9.528m 9.345m ~ 0.400
ReorgOptimizations/DedupFilterPipeline/New/100K-4 9.869m 9.575m ~ 0.100
ReorgOptimizations/AllMarkFalse/Old/100K-4 1.170m 1.157m ~ 0.400
ReorgOptimizations/AllMarkFalse/New/100K-4 730.6µ 731.3µ ~ 0.100
ReorgOptimizations/HashSlicePool/Old/100K-4 603.2µ 608.4µ ~ 0.100
ReorgOptimizations/HashSlicePool/New/100K-4 312.4µ 308.0µ ~ 1.000
ReorgOptimizations/NodeFlags/Old/100K-4 54.71µ 55.04µ ~ 0.400
ReorgOptimizations/NodeFlags/New/100K-4 19.14µ 19.16µ ~ 1.000
TxMapSetIfNotExists-4 51.10n 50.37n ~ 0.700
TxMapSetIfNotExistsDuplicate-4 43.40n 43.35n ~ 0.200
ChannelSendReceive-4 663.1n 699.6n ~ 0.100
CalcBlockWork-4 467.7n 468.6n ~ 0.700
CalculateWork-4 659.8n 629.1n ~ 0.400
BuildBlockLocatorString_Helpers/Size_10-4 1.310µ 1.310µ ~ 0.700
BuildBlockLocatorString_Helpers/Size_100-4 15.19µ 12.70µ ~ 0.400
BuildBlockLocatorString_Helpers/Size_1000-4 123.7µ 124.2µ ~ 0.100
CatchupWithHeaderCache-4 104.5m 104.8m ~ 0.100
_BufferPoolAllocation/16KB-4 3.635µ 3.697µ ~ 0.700
_BufferPoolAllocation/32KB-4 9.798µ 8.948µ ~ 0.700
_BufferPoolAllocation/64KB-4 17.34µ 16.80µ ~ 0.400
_BufferPoolAllocation/128KB-4 29.58µ 33.12µ ~ 0.100
_BufferPoolAllocation/512KB-4 117.5µ 115.1µ ~ 0.400
_BufferPoolConcurrent/32KB-4 20.12µ 18.38µ ~ 0.100
_BufferPoolConcurrent/64KB-4 33.82µ 27.91µ ~ 0.100
_BufferPoolConcurrent/512KB-4 169.6µ 153.4µ ~ 0.100
_SubtreeDeserializationWithBufferSizes/16KB-4 713.2µ 640.3µ ~ 0.100
_SubtreeDeserializationWithBufferSizes/32KB-4 723.5µ 623.1µ ~ 0.100
_SubtreeDeserializationWithBufferSizes/64KB-4 716.7µ 618.7µ ~ 0.100
_SubtreeDeserializationWithBufferSizes/128KB-4 738.4µ 637.7µ ~ 0.100
_SubtreeDeserializationWithBufferSizes/512KB-4 720.1µ 650.2µ ~ 0.100
_SubtreeDataDeserializationWithBufferSizes/16KB-4 37.71m 36.68m ~ 0.700
_SubtreeDataDeserializationWithBufferSizes/32KB-4 37.32m 36.43m ~ 0.400
_SubtreeDataDeserializationWithBufferSizes/64KB-4 36.82m 36.49m ~ 0.200
_SubtreeDataDeserializationWithBufferSizes/128KB-4 36.97m 36.10m ~ 0.200
_SubtreeDataDeserializationWithBufferSizes/512KB-4 36.50m 36.03m ~ 0.400
_PooledVsNonPooled/Pooled-4 741.0n 738.9n ~ 0.300
_PooledVsNonPooled/NonPooled-4 7.152µ 6.874µ ~ 0.100
_MemoryFootprint/Current_512KB_32concurrent-4 7.437µ 6.924µ ~ 0.400
_MemoryFootprint/Proposed_32KB_32concurrent-4 10.616µ 9.648µ ~ 0.100
_MemoryFootprint/Alternative_64KB_32concurrent-4 10.850µ 9.405µ ~ 0.100
_prepareTxsPerLevel-4 409.6m 408.8m ~ 0.700
_prepareTxsPerLevelOrdered-4 3.515m 3.654m ~ 0.400
_prepareTxsPerLevel_Comparison/Original-4 420.8m 417.4m ~ 1.000
_prepareTxsPerLevel_Comparison/Optimized-4 3.552m 3.473m ~ 0.100
SubtreeSizes/10k_tx_4_per_subtree-4 1.249m 1.275m ~ 0.700
SubtreeSizes/10k_tx_16_per_subtree-4 294.3µ 297.9µ ~ 0.400
SubtreeSizes/10k_tx_64_per_subtree-4 70.59µ 71.21µ ~ 0.200
SubtreeSizes/10k_tx_256_per_subtree-4 17.55µ 17.81µ ~ 0.100
SubtreeSizes/10k_tx_512_per_subtree-4 8.838µ 8.819µ ~ 0.700
SubtreeSizes/10k_tx_1024_per_subtree-4 4.352µ 4.341µ ~ 0.100
SubtreeSizes/10k_tx_2k_per_subtree-4 2.167µ 2.168µ ~ 0.700
BlockSizeScaling/10k_tx_64_per_subtree-4 69.06µ 69.76µ ~ 0.400
BlockSizeScaling/10k_tx_256_per_subtree-4 17.29µ 17.43µ ~ 0.400
BlockSizeScaling/10k_tx_1024_per_subtree-4 4.311µ 4.372µ ~ 0.100
BlockSizeScaling/50k_tx_64_per_subtree-4 362.8µ 367.2µ ~ 1.000
BlockSizeScaling/50k_tx_256_per_subtree-4 92.39µ 87.32µ ~ 0.400
BlockSizeScaling/50k_tx_1024_per_subtree-4 21.55µ 21.53µ ~ 0.700
SubtreeAllocations/small_subtrees_exists_check-4 148.5µ 147.7µ ~ 1.000
SubtreeAllocations/small_subtrees_data_fetch-4 156.3µ 157.9µ ~ 0.700
SubtreeAllocations/small_subtrees_full_validation-4 334.9µ 310.3µ ~ 0.100
SubtreeAllocations/medium_subtrees_exists_check-4 8.677µ 8.822µ ~ 0.400
SubtreeAllocations/medium_subtrees_data_fetch-4 9.537µ 9.239µ ~ 0.700
SubtreeAllocations/medium_subtrees_full_validation-4 18.83µ 17.40µ ~ 0.200
SubtreeAllocations/large_subtrees_exists_check-4 2.095µ 2.081µ ~ 0.700
SubtreeAllocations/large_subtrees_data_fetch-4 2.314µ 2.242µ ~ 0.300
SubtreeAllocations/large_subtrees_full_validation-4 4.476µ 4.356µ ~ 0.100
StoreBlock_Sequential/BelowCSVHeight-4 303.8µ 305.7µ ~ 0.400
StoreBlock_Sequential/AboveCSVHeight-4 318.7µ 315.4µ ~ 0.700
GetUtxoHashes-4 272.3n 268.8n ~ 1.000
GetUtxoHashes_ManyOutputs-4 46.47µ 47.74µ ~ 0.400
_NewMetaDataFromBytes-4 233.1n 231.8n ~ 0.500
_Bytes-4 613.5n 610.5n ~ 0.700
_MetaBytes-4 569.5n 564.3n ~ 0.100

Threshold: >10% with p < 0.05 | Generated: 2026-05-04 15:22 UTC

ordishs's comment was marked as outdated.

oskarszoon added 3 commits May 4, 2026 16:27
Address review feedback: conflictingMap as a persistent SubtreeProcessor field
imposed a hot-path lookup on every default-case dequeue, which is wrong. The
conflicting-state knowledge is only valid for the duration of one block
movement (and the immediate post-cascade drain), not for the lifetime of the
processor.

- processConflictingTransactions returns a transient
  map[chainhash.Hash]struct{} of every hash flagged Conflicting=true by the
  BFS cascade (immediate losers + every descendant returned by
  MarkConflictingRecursively).
- The set is threaded through RemainderTransactionParams.ConflictingHashes
  into dequeueDuringBlockMovement, which rejects any node whose own hash is
  in the set OR whose TxInpoints.ParentTxHashes contains a hash in the set.
  On parent match the node's hash is added to the set so any later-in-batch
  descendants are caught.
- Default-case Phase 2 filter is unchanged: removeMap + currentTxMap dedup
  only, no conflicting lookup.
- No new SubtreeProcessor fields, no new Interface methods, no new mock
  methods. The cascade information lives only in local variables for the
  duration of one moveForwardBlock event.
- BlockAssembler.markAsConflicting reverted to its pre-fix shape; the reload
  path's cascade-to-descendants concern is handled separately upstream by
  PR bsv-blockchain#806 (validateParentChain).

Test rewritten to drive dequeueDuringBlockMovement directly, no event loop.
…nsactions

loadUnminedTransactions runs while gRPC AddTx is already enqueueing on the
input queue. If validateUnminedTxInputs cascades a parent + descendants as
conflicting in the UTXO store, any in-flight child of those parents that
arrived before the cascade ran is sitting in the queue and will be admitted
to the next mining candidate by default-case dequeue.

This was the post-reset / startup half of the production race seen on
teranode-mainnet-eu-1; PR bsv-blockchain#806 closed the in-memory subtree side, but the
queue side stayed open.

Fix:

- Add Interface.DrainQueue(dropHashes) — generic queue drain that drops any
  tx whose hash, or whose TxInpoints.ParentTxHashes entry, is in dropHashes.
  On parent-match the dropped tx's own hash is added to the set so any
  later-in-batch descendant is also caught without an extra store
  round-trip. Implemented on SubtreeProcessor as a thin wrapper over
  dequeueDuringBlockMovement with skipNotification=true.
- BlockAssembler accumulates cascade hashes in unminedDropHashes during
  loadUnminedTransactions (markAsConflicting writes the
  MarkConflictingRecursively return there). Field is serialised by the
  existing unminedTransactionsLoading flag.
- BA.Start drains the queue with that set after loadUnminedTransactions
  returns, before stp.Start fires the goroutine.
- BA.Reset's postProcessFn does the same after its loadUnminedTransactions
  call, before stp.reset's existing post-postProcess drain runs.
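The startup sequencing above can be modeled in a small stand-alone toy, assuming hypothetical names (assembler, loadUnmined, drainQueue, startDequeue): cascade hashes are accumulated during unmined-load, the drain flushes in-flight children with that set, and only then does the dequeue goroutine start.

```go
package main

import "fmt"

// Toy model of the BA.Start ordering: load -> drain -> start.
type assembler struct {
	queue      []string            // in-flight AddTx items already enqueued
	dropHashes map[string]struct{} // unminedDropHashes analogue
	log        []string
}

func (a *assembler) loadUnmined() {
	// The cascade flags parent "P" conflicting; its child "C" is
	// already sitting in the queue at this point.
	a.dropHashes["P"] = struct{}{}
	a.log = append(a.log, "load")
}

// drainQueue drops any queued tx whose parent is in dropHashes,
// adding the dropped tx's own hash so its descendants are caught.
func (a *assembler) drainQueue(parentOf map[string]string) {
	kept := a.queue[:0:0]
	for _, tx := range a.queue {
		if _, bad := a.dropHashes[parentOf[tx]]; bad {
			a.dropHashes[tx] = struct{}{}
			continue
		}
		kept = append(kept, tx)
	}
	a.queue = kept
	a.log = append(a.log, "drain")
}

func (a *assembler) startDequeue() { a.log = append(a.log, "start") }

func main() {
	a := &assembler{queue: []string{"C", "X"}, dropHashes: map[string]struct{}{}}
	parentOf := map[string]string{"C": "P", "X": "Z"}
	a.loadUnmined()
	a.drainQueue(parentOf)
	a.startDequeue() // the goroutine only sees the cleaned queue
	fmt.Println(a.queue, a.log)
}
```

If the start step ran before the drain, "C" would already have been dequeued into a subtree, which is exactly the window the PR closes.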

sonarqubecloud Bot commented May 4, 2026

Quality Gate failed

Failed conditions
70.8% Coverage on New Code (required ≥ 80%)

See analysis details on SonarQube Cloud

@oskarszoon oskarszoon merged commit 40c9ec0 into bsv-blockchain:main May 4, 2026
25 checks passed