
perf(blockchain/sql): on_main_chain fast path for GetBlockHeaders / GetBlockHeaderIDs #817

Merged
oskarszoon merged 5 commits into bsv-blockchain:main from oskarszoon:perf/legacy-sync-headers-on-main-chain
May 7, 2026
Conversation

@oskarszoon
Contributor

Summary

Replace the recursive CTE walk with a backward index scan over idx_on_main_chain_height when the start hash is on the main chain. It falls back to the existing CTE for fork tips, unknown hashes, or while a main-chain rebuild is in flight; this is the same hybrid pattern used in GetLatestBlockHeaderFromBlockLocator.
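The selection logic described above can be sketched roughly as follows (the type, field, and method names here are illustrative assumptions, not the PR's actual code):

```go
package main

import "fmt"

// store models the SQL store; in the real code mainChainRebuilding is an
// atomic counter and the probe is a database query.
type store struct {
	mainChainRebuilding int64
}

// probeOnMainChain stands in for something like
// `SELECT on_main_chain, height FROM blocks WHERE hash = $1`.
func (s *store) probeOnMainChain(hash string) (onMain bool, height uint64, ok bool) {
	// database lookup elided in this sketch
	return true, 80000, true
}

// selectQuery takes the fast path only when it is provably safe and
// otherwise falls back to the authoritative recursive CTE.
func (s *store) selectQuery(startHash string, n uint64) string {
	if s.mainChainRebuilding != 0 {
		return "CTE" // rebuild in flight: CTE is authoritative
	}
	onMain, _, ok := s.probeOnMainChain(startHash)
	if !ok || !onMain {
		return "CTE" // unknown hash or fork tip
	}
	return "fast" // backward index scan over idx_on_main_chain_height
}

func main() {
	s := &store{}
	fmt.Println(s.selectQuery("deadbeef", 100)) // fast
	s.mainChainRebuilding = 1
	fmt.Println(s.selectQuery("deadbeef", 100)) // CTE
}
```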

This closes a remediation gap: most other chain-walk queries already use on_main_chain (GetBlockByHeight, GetBlockHeadersByHeight, GetBlocksByHeight, GetLastNBlocks, CheckBlockIsInCurrentChain, etc.) but GetBlockHeaders and GetBlockHeaderIDs still ran the recursive CTE on every call.

Why

During legacy sync, pg_stat_statements on a fresh testnet node showed the heavy CTE in GetBlockHeaders was the top time-consuming query — 47 calls × 14.18 ms avg = 666 ms total over 6 minutes of sync, returning ~5,288 rows per call when block-validation requested catchup batches. GetBlockHeaders(parent_hash, N) is on the per-block hot path: every legacy sync block triggers it for MTP, ancestor fetch, and fork checks.

Measurements

EXPLAIN ANALYZE on an 80k-block testnet DB (35 MB, fully cached in shared_buffers):

N      Variant                      Exec time   Buffer hits
100    CTE (current)                0.554 ms    604
100    on_main_chain (fast path)    0.186 ms    118
5288   CTE (current)                17.403 ms   31,729
5288   on_main_chain (fast path)    2.875 ms    4,728

Speedup: 2.97× at N=100, 6.05× at N=5288. I/O reduction: 5–7× at both sizes.

Speedup grows with N because the CTE pays one index lookup per ancestor plus a final sort, while the fast path is a single ordered index scan (idx_on_main_chain_height partial index already exists).

On a production-sized DB (~1.27M blocks) the CTE additionally pays cold-buffer cost on parent_id walks; expected real-world speedup is 10–20× for the catchup-sized batches observed in pg_stat_statements.

Hit rate

The sync hot path always passes parent-of-incoming-block as the start hash, which is virtually always on_main_chain = true. Fork tips and unknown hashes fall through to the CTE — preserving correctness for reorg and validation paths.

Correctness

Same hybrid pattern, same guards as GetLatestBlockHeaderFromBlockLocator:

  • Check mainChainRebuilding.Load() == 0 first; if a rebuild is in flight, take the CTE (authoritative path).
  • Probe SELECT on_main_chain FROM blocks WHERE hash = $1; if false / NULL / error, take the CTE.
  • Otherwise take the fast path: WHERE on_main_chain = true AND height ∈ (start_height − $N, start_height].
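Under those guards, the two query shapes look roughly like the following (the SQL is a sketch: table, column, and index names follow the description above, not a verified schema):

```go
package main

import (
	"fmt"
	"strings"
)

// CTE form: one index lookup per ancestor along parent_id, plus a final
// sort. Column names here are assumptions for illustration.
const cteQuery = `
WITH RECURSIVE chain AS (
    SELECT b.* FROM blocks b WHERE b.hash = $1
    UNION ALL
    SELECT p.* FROM blocks p JOIN chain c ON p.id = c.parent_id
)
SELECT hash, height FROM chain LIMIT $2`

// Fast path: a single ordered range scan that the partial index
// idx_on_main_chain_height can serve; $2 is the probed start height.
const fastQuery = `
SELECT hash, height FROM blocks
WHERE on_main_chain = true
  AND height <= $2
  AND height >  $2 - $3
ORDER BY height DESC`

// usesRecursion distinguishes the two forms by their SQL shape.
func usesRecursion(q string) bool { return strings.Contains(q, "WITH RECURSIVE") }

func main() {
	fmt.Println(usesRecursion(cteQuery))  // true
	fmt.Println(usesRecursion(fastQuery)) // false
}
```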

TOCTOU between the probe and the main query is bounded by the store's single-writer model: at worst one call returns slightly-stale data, the next call sees the updated mainChainRebuilding guard and takes the CTE. Acceptable and self-healing.

Test plan

  • go test ./stores/blockchain/sql/... — 475 tests pass
  • go test ./services/blockchain/... — pass (25.4s)
  • go test ./services/blockvalidation/... — pass (164.5s)
  • go vet ./stores/blockchain/sql/... — clean
  • Existing fork-tip test in TestSQLGetBlockHeaders covers the CTE-fallback path (block2Alt is on a fork → fast path probe returns on_main_chain=false → falls back to CTE)

Related follow-ups (not in this PR)

Other quick wins identified during the same investigation, deferred to separate PRs:

  1. In-process MTP cache in blockchain service — cache the last 11 headers in a ring buffer to eliminate the per-block GetBlockHeaders(hash, 11) call entirely.
  2. Pool bufio.NewReaderSize in subtreevalidation — observed 16 GB cumulative allocation in 47 min uptime, reducible to near zero with sync.Pool.
  3. Batch INSERT into scheduled_blob_deletions — currently one INSERT per row; multi-row VALUES list saves ~50× round-trips per block.
  4. Pool swiss.NewMap[outpoint] in blockvalidation — observed 224 GB cumulative allocation, dominant heap churn driving GC pressure.

These don't share files or test infrastructure with the SQL store change so they belong in separate PRs.

…etBlockHeaderIDs

Replace the recursive CTE walk with a backward index scan over
idx_on_main_chain_height when the start hash is on the main chain. Falls
back to the existing CTE for fork tips, unknown hashes, or while a
main-chain rebuild is in flight, so the CTE remains authoritative.

EXPLAIN ANALYZE on an 80k-block testnet DB (35 MB, fully cached):

  N=100   CTE 0.554 ms / 604 buffer hits
          fast 0.186 ms / 118 buffer hits     -> 2.97x, -80% I/O
  N=5288  CTE 17.4 ms / 31729 buffer hits
          fast 2.875 ms / 4728 buffer hits    -> 6.05x, -85% I/O

Speedup grows with N because the CTE pays one index lookup per ancestor
plus a final sort, while the fast path is a single ordered index scan
(idx_on_main_chain_height already exists).

On a production-sized DB (~1.27M blocks) the CTE pays additional cold-
buffer cost; expected real-world speedup is 10-20x for the catchup-
sized batches observed in pg_stat_statements during legacy IBD.

Sync hot path always feeds parent-of-incoming-block as the start hash,
which is virtually always on_main_chain, so the fast path hits >99% of
the time during normal sync.

Same hybrid pattern used in GetLatestBlockHeaderFromBlockLocator. Same
TOCTOU caveats: the on_main_chain probe and the main query are non-
atomic, but the store's single-writer model bounds staleness to one call
and the CTE fallback is self-healing.
@github-actions
Contributor

github-actions Bot commented May 5, 2026

🤖 Claude Code Review

Status: Complete

All previously identified issues have been resolved by the author:

History:

  • ✅ Fixed: Intra-query TOCTOU race (commit fccb108) - resolved height in probe, bound as literal parameter
  • ✅ Fixed: Documentation gaps (commit 68fbc8b) - file-level docs updated to describe hybrid strategy
  • ✅ Fixed: Formatting (commit 2a78a88) - gofmt applied to fix indentation
  • ✅ Verified: Comprehensive test coverage added (commit f1625c3) - 9 new tests covering fast path, CTE fallback, fork handling, rebuild guard, and query selector logic

Current Review:
No issues found. The PR implements a solid performance optimization with:

  • Clear fast path / CTE fallback separation
  • Appropriate guards (mainChainRebuilding, on_main_chain probe)
  • Comprehensive test coverage for all code paths
  • Accurate documentation matching implementation
  • Consistent with existing patterns (GetLatestBlockHeaderFromBlockLocator)

The implementation correctly handles the TOCTOU window between probe and main query as documented and acceptable under the single-writer model.

Comment thread stores/blockchain/sql/GetBlockHeaders.go Outdated
Comment thread stores/blockchain/sql/GetBlockHeaderIDs.go Outdated
@github-actions
Contributor

github-actions Bot commented May 5, 2026

Benchmark Comparison Report

Baseline: main (unknown)

Current: PR-817 (c6efd4f)

Summary

  • Regressions: 0
  • Improvements: 0
  • Unchanged: 142
  • Significance level: p < 0.05
All benchmark results (sec/op)
Benchmark Baseline Current Change p-value
_NewBlockFromBytes-4 1.678µ 1.694µ ~ 0.100
SplitSyncedParentMap_SetIfNotExists/256_buckets-4 61.61n 61.75n ~ 0.700
SplitSyncedParentMap_SetIfNotExists/16_buckets-4 61.46n 61.69n ~ 0.200
SplitSyncedParentMap_SetIfNotExists/1_bucket-4 61.66n 61.65n ~ 0.800
SplitSyncedParentMap_ConcurrentSetIfNotExists/256_buckets... 30.18n 30.19n ~ 1.000
SplitSyncedParentMap_ConcurrentSetIfNotExists/16_buckets_... 51.25n 51.64n ~ 1.000
SplitSyncedParentMap_ConcurrentSetIfNotExists/1_bucket_pa... 106.7n 108.2n ~ 0.400
MiningCandidate_Stringify_Short-4 265.5n 263.1n ~ 0.700
MiningCandidate_Stringify_Long-4 1.903µ 1.911µ ~ 0.100
MiningSolution_Stringify-4 976.1n 976.6n ~ 1.000
BlockInfo_MarshalJSON-4 1.776µ 1.766µ ~ 0.200
NewFromBytes-4 125.5n 136.9n ~ 0.200
Mine_EasyDifficulty-4 59.73µ 60.54µ ~ 0.200
Mine_WithAddress-4 6.950µ 6.776µ ~ 0.700
BlockAssembler_AddTx-4 0.02691n 0.02707n ~ 0.400
AddNode-4 10.89 10.99 ~ 0.700
AddNodeWithMap-4 10.78 11.15 ~ 0.400
DirectSubtreeAdd/4_per_subtree-4 76.11n 75.98n ~ 0.700
DirectSubtreeAdd/64_per_subtree-4 42.23n 41.93n ~ 1.000
DirectSubtreeAdd/256_per_subtree-4 40.81n 40.99n ~ 0.100
DirectSubtreeAdd/1024_per_subtree-4 39.40n 39.28n ~ 1.000
DirectSubtreeAdd/2048_per_subtree-4 38.83n 38.87n ~ 1.000
SubtreeProcessorAdd/4_per_subtree-4 337.2n 337.4n ~ 0.300
SubtreeProcessorAdd/64_per_subtree-4 325.4n 325.7n ~ 1.000
SubtreeProcessorAdd/256_per_subtree-4 318.8n 328.0n ~ 0.200
SubtreeProcessorAdd/1024_per_subtree-4 320.4n 320.7n ~ 1.000
SubtreeProcessorAdd/2048_per_subtree-4 319.4n 317.3n ~ 0.400
SubtreeProcessorRotate/4_per_subtree-4 322.5n 317.6n ~ 0.100
SubtreeProcessorRotate/64_per_subtree-4 318.0n 319.1n ~ 0.400
SubtreeProcessorRotate/256_per_subtree-4 318.2n 320.0n ~ 0.400
SubtreeProcessorRotate/1024_per_subtree-4 316.9n 314.2n ~ 0.200
SubtreeNodeAddOnly/4_per_subtree-4 88.71n 88.89n ~ 0.700
SubtreeNodeAddOnly/64_per_subtree-4 65.08n 65.31n ~ 0.400
SubtreeNodeAddOnly/256_per_subtree-4 64.27n 64.48n ~ 0.700
SubtreeNodeAddOnly/1024_per_subtree-4 63.95n 63.73n ~ 0.400
SubtreeCreationOnly/4_per_subtree-4 148.5n 148.4n ~ 0.700
SubtreeCreationOnly/64_per_subtree-4 549.4n 547.6n ~ 0.400
SubtreeCreationOnly/256_per_subtree-4 1.995µ 2.015µ ~ 0.400
SubtreeCreationOnly/1024_per_subtree-4 6.388µ 6.372µ ~ 1.000
SubtreeCreationOnly/2048_per_subtree-4 11.85µ 11.64µ ~ 0.400
SubtreeProcessorOverheadBreakdown/64_per_subtree-4 312.0n 318.3n ~ 0.700
SubtreeProcessorOverheadBreakdown/1024_per_subtree-4 315.3n 320.3n ~ 0.200
ParallelGetAndSetIfNotExists/1k_nodes-4 666.7µ 655.3µ ~ 0.200
ParallelGetAndSetIfNotExists/10k_nodes-4 1.769m 1.764m ~ 1.000
ParallelGetAndSetIfNotExists/50k_nodes-4 8.984m 9.135m ~ 0.100
ParallelGetAndSetIfNotExists/100k_nodes-4 18.34m 18.31m ~ 0.700
SequentialGetAndSetIfNotExists/1k_nodes-4 704.7µ 708.1µ ~ 0.700
SequentialGetAndSetIfNotExists/10k_nodes-4 3.426m 3.456m ~ 0.400
SequentialGetAndSetIfNotExists/50k_nodes-4 12.69m 13.02m ~ 0.100
SequentialGetAndSetIfNotExists/100k_nodes-4 24.61m 24.42m ~ 0.700
ProcessOwnBlockSubtreeNodesParallel/1k_nodes-4 725.3µ 740.1µ ~ 0.400
ProcessOwnBlockSubtreeNodesParallel/10k_nodes-4 4.903m 4.763m ~ 0.100
ProcessOwnBlockSubtreeNodesParallel/100k_nodes-4 22.07m 21.94m ~ 1.000
ProcessOwnBlockSubtreeNodesSequential/1k_nodes-4 762.2µ 762.4µ ~ 1.000
ProcessOwnBlockSubtreeNodesSequential/10k_nodes-4 6.928m 6.799m ~ 0.200
ProcessOwnBlockSubtreeNodesSequential/100k_nodes-4 49.69m 48.08m ~ 0.100
DiskTxMap_SetIfNotExists-4 4.128µ 4.125µ ~ 0.700
DiskTxMap_SetIfNotExists_Parallel-4 4.082µ 3.920µ ~ 0.100
DiskTxMap_ExistenceOnly-4 409.9n 470.8n ~ 0.700
Queue-4 208.1n 199.1n ~ 0.100
AtomicPointer-4 8.139n 8.128n ~ 0.100
ReorgOptimizations/DedupFilterPipeline/Old/10K-4 829.7µ 801.6µ ~ 0.100
ReorgOptimizations/DedupFilterPipeline/New/10K-4 800.2µ 797.7µ ~ 1.000
ReorgOptimizations/AllMarkFalse/Old/10K-4 126.4µ 116.7µ ~ 0.100
ReorgOptimizations/AllMarkFalse/New/10K-4 58.18µ 58.46µ ~ 0.700
ReorgOptimizations/HashSlicePool/Old/10K-4 64.29µ 64.01µ ~ 1.000
ReorgOptimizations/HashSlicePool/New/10K-4 11.80µ 11.80µ ~ 1.000
ReorgOptimizations/NodeFlags/Old/10K-4 5.562µ 5.447µ ~ 0.400
ReorgOptimizations/NodeFlags/New/10K-4 1.840µ 1.857µ ~ 0.400
ReorgOptimizations/DedupFilterPipeline/Old/100K-4 12.46m 12.85m ~ 0.200
ReorgOptimizations/DedupFilterPipeline/New/100K-4 12.06m 12.11m ~ 1.000
ReorgOptimizations/AllMarkFalse/Old/100K-4 1.241m 1.184m ~ 0.100
ReorgOptimizations/AllMarkFalse/New/100K-4 728.3µ 727.0µ ~ 0.700
ReorgOptimizations/HashSlicePool/Old/100K-4 592.5µ 582.0µ ~ 0.200
ReorgOptimizations/HashSlicePool/New/100K-4 300.5µ 319.4µ ~ 0.100
ReorgOptimizations/NodeFlags/Old/100K-4 52.62µ 53.31µ ~ 0.400
ReorgOptimizations/NodeFlags/New/100K-4 18.92µ 18.11µ ~ 0.700
TxMapSetIfNotExists-4 51.60n 49.45n ~ 0.100
TxMapSetIfNotExistsDuplicate-4 43.57n 43.58n ~ 1.000
ChannelSendReceive-4 693.1n 666.1n ~ 0.100
CalcBlockWork-4 557.7n 599.5n ~ 0.400
CalculateWork-4 744.2n 767.9n ~ 0.100
BuildBlockLocatorString_Helpers/Size_10-4 1.602µ 1.631µ ~ 0.700
BuildBlockLocatorString_Helpers/Size_100-4 12.40µ 12.49µ ~ 0.100
BuildBlockLocatorString_Helpers/Size_1000-4 122.2µ 121.8µ ~ 1.000
CatchupWithHeaderCache-4 104.4m 104.4m ~ 1.000
_BufferPoolAllocation/16KB-4 4.275µ 4.078µ ~ 1.000
_BufferPoolAllocation/32KB-4 7.844µ 7.893µ ~ 0.700
_BufferPoolAllocation/64KB-4 15.30µ 17.57µ ~ 0.100
_BufferPoolAllocation/128KB-4 30.28µ 32.50µ ~ 0.100
_BufferPoolAllocation/512KB-4 107.9µ 112.0µ ~ 0.700
_BufferPoolConcurrent/32KB-4 17.44µ 18.86µ ~ 0.100
_BufferPoolConcurrent/64KB-4 27.85µ 29.72µ ~ 0.100
_BufferPoolConcurrent/512KB-4 142.6µ 154.4µ ~ 0.100
_SubtreeDeserializationWithBufferSizes/16KB-4 639.2µ 648.9µ ~ 0.100
_SubtreeDeserializationWithBufferSizes/32KB-4 620.6µ 646.2µ ~ 0.100
_SubtreeDeserializationWithBufferSizes/64KB-4 627.7µ 643.5µ ~ 0.200
_SubtreeDeserializationWithBufferSizes/128KB-4 618.7µ 637.4µ ~ 0.100
_SubtreeDeserializationWithBufferSizes/512KB-4 639.6µ 640.9µ ~ 0.700
_SubtreeDataDeserializationWithBufferSizes/16KB-4 35.69m 35.70m ~ 0.700
_SubtreeDataDeserializationWithBufferSizes/32KB-4 35.62m 35.58m ~ 1.000
_SubtreeDataDeserializationWithBufferSizes/64KB-4 35.66m 35.98m ~ 0.100
_SubtreeDataDeserializationWithBufferSizes/128KB-4 35.57m 35.91m ~ 0.100
_SubtreeDataDeserializationWithBufferSizes/512KB-4 35.22m 35.08m ~ 0.700
_PooledVsNonPooled/Pooled-4 737.0n 742.5n ~ 0.100
_PooledVsNonPooled/NonPooled-4 7.043µ 7.730µ ~ 0.100
_MemoryFootprint/Current_512KB_32concurrent-4 6.786µ 7.047µ ~ 0.200
_MemoryFootprint/Proposed_32KB_32concurrent-4 9.153µ 10.140µ ~ 0.100
_MemoryFootprint/Alternative_64KB_32concurrent-4 8.831µ 10.136µ ~ 0.100
_prepareTxsPerLevel-4 413.0m 410.6m ~ 1.000
_prepareTxsPerLevelOrdered-4 3.701m 3.644m ~ 0.700
_prepareTxsPerLevel_Comparison/Original-4 424.3m 420.8m ~ 0.400
_prepareTxsPerLevel_Comparison/Optimized-4 3.990m 3.578m ~ 0.100
SubtreeSizes/10k_tx_4_per_subtree-4 1.365m 1.372m ~ 0.400
SubtreeSizes/10k_tx_16_per_subtree-4 316.3µ 320.5µ ~ 0.400
SubtreeSizes/10k_tx_64_per_subtree-4 76.24µ 78.61µ ~ 0.200
SubtreeSizes/10k_tx_256_per_subtree-4 19.17µ 19.38µ ~ 0.100
SubtreeSizes/10k_tx_512_per_subtree-4 9.414µ 9.590µ ~ 0.100
SubtreeSizes/10k_tx_1024_per_subtree-4 4.746µ 4.729µ ~ 0.800
SubtreeSizes/10k_tx_2k_per_subtree-4 2.366µ 2.344µ ~ 0.400
BlockSizeScaling/10k_tx_64_per_subtree-4 75.25µ 75.44µ ~ 1.000
BlockSizeScaling/10k_tx_256_per_subtree-4 19.02µ 18.78µ ~ 0.700
BlockSizeScaling/10k_tx_1024_per_subtree-4 4.722µ 4.731µ ~ 1.000
BlockSizeScaling/50k_tx_64_per_subtree-4 401.5µ 398.6µ ~ 0.700
BlockSizeScaling/50k_tx_256_per_subtree-4 96.25µ 94.03µ ~ 0.200
BlockSizeScaling/50k_tx_1024_per_subtree-4 23.58µ 23.15µ ~ 0.200
SubtreeAllocations/small_subtrees_exists_check-4 157.6µ 162.1µ ~ 0.100
SubtreeAllocations/small_subtrees_data_fetch-4 164.8µ 167.8µ ~ 0.100
SubtreeAllocations/small_subtrees_full_validation-4 328.7µ 328.8µ ~ 1.000
SubtreeAllocations/medium_subtrees_exists_check-4 9.343µ 9.432µ ~ 0.100
SubtreeAllocations/medium_subtrees_data_fetch-4 9.875µ 9.958µ ~ 0.400
SubtreeAllocations/medium_subtrees_full_validation-4 19.30µ 19.03µ ~ 0.100
SubtreeAllocations/large_subtrees_exists_check-4 2.241µ 2.236µ ~ 1.000
SubtreeAllocations/large_subtrees_data_fetch-4 2.367µ 2.368µ ~ 1.000
SubtreeAllocations/large_subtrees_full_validation-4 4.833µ 4.759µ ~ 0.100
StoreBlock_Sequential/BelowCSVHeight-4 252.6µ 250.5µ ~ 0.400
StoreBlock_Sequential/AboveCSVHeight-4 251.7µ 252.5µ ~ 1.000
GetUtxoHashes-4 270.8n 273.2n ~ 0.700
GetUtxoHashes_ManyOutputs-4 46.87µ 52.08µ ~ 0.100
_NewMetaDataFromBytes-4 230.3n 229.7n ~ 0.400
_Bytes-4 608.9n 625.0n ~ 0.100
_MetaBytes-4 561.3n 564.6n ~ 0.400

Threshold: >10% with p < 0.05 | Generated: 2026-05-07 08:13 UTC

…tra-query TOCTOU

The fast path previously evaluated `SELECT height FROM blocks WHERE hash = $1`
twice within the same SELECT (once for the `<=` upper bound and once for the
`> ... - $N` lower bound). That is a race within a single call, not between
calls, so the single-writer staleness argument did not cover it.

Resolve the start-block height in the existing probe (which already runs once
per call), and bind it as a literal parameter to the main query. Two wins:

1. Eliminates the intra-query race entirely — the upper and lower bounds are
   guaranteed to come from the same height value.
2. Restores optimal plan choice. The CTE form (WITH start_block) caused the
   planner to pick `idx_height` with a Filter at small N (~3x slower than the
   subquery form). With the height bound as a literal parameter, the planner
   uses the `idx_on_main_chain_height` partial index for a clean range scan.

EXPLAIN ANALYZE on 80k-block testnet DB:
  N=100   literal-bound: 0.158 ms / 107 buffer hits
  N=5288  literal-bound: 2.546 ms / 4714 buffer hits

Same plan shape and timings as the original subquery-twice form, with the
race removed.

Addresses Claude Code Review feedback on PR bsv-blockchain#817.
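The before/after bound construction described in this commit can be sketched as follows (illustrative SQL fragments; column and parameter names are assumptions):

```go
package main

import (
	"fmt"
	"strings"
)

// Before the fix: the start block's height subquery appeared twice in
// one SELECT, so the two bounds could in principle see different rows.
const boundsBefore = `
  AND height <= (SELECT height FROM blocks WHERE hash = $1)
  AND height >  (SELECT height FROM blocks WHERE hash = $1) - $2`

// After the fix: the probe resolves the height once per call and binds
// it as a parameter, so both bounds derive from the same value and the
// planner can use idx_on_main_chain_height for a clean range scan.
const boundsAfter = `
  AND height <= $2
  AND height >  $2 - $3`

// subqueryCount reports how many times the height subquery appears.
func subqueryCount(q string) int {
	return strings.Count(q, "SELECT height FROM blocks")
}

func main() {
	fmt.Println(subqueryCount(boundsBefore)) // 2
	fmt.Println(subqueryCount(boundsAfter))  // 0
}
```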
Comment thread stores/blockchain/sql/GetBlockHeaders.go
Comment thread stores/blockchain/sql/GetBlockHeaderIDs.go
oskarszoon added 2 commits May 5, 2026 21:07
… package and func docs

The package- and function-level comments still described only the recursive CTE
path. Update them to reflect the hybrid approach now in place: cache → on_main_chain
fast path → CTE fallback. Behaviour unchanged.

Addresses Claude Code Review minor feedback on PR bsv-blockchain#817.
The previous doc-update commit used 7-space indentation under bullet points;
gofmt expects 5 spaces (one tab plus the bullet glyph alignment). Apply
gofmt -w. No semantic change.

Fixes the golangci-lint `gci` failure on PR bsv-blockchain#817 CI.
Contributor

@icellan icellan left a comment


Missing tests!

…lectors

Add explicit named tests for the hybrid query selector introduced in
GetBlockHeaders and GetBlockHeaderIDs:

1. FastPath: assert fast path and CTE return identical headers/IDs for a
   main-chain start block (correctness invariant across both paths).
2. ForkTipFallback: assert CTE is used for a fork-tip start block and
   returns the fork's own ancestor chain, not the main-chain blocks at
   the same heights.
3. CTEWhenRebuilding: assert correct results are returned via the CTE
   fallback while mainChainRebuilding > 0.
4. UnknownHashReturnsEmpty: assert empty result with nil error when the
   start hash does not exist in the store.
5. BuildQuery_FastPathQuery: unit-test buildGetBlockHeadersQuery and
   buildGetBlockHeaderIDsQuery — confirm the returned query contains
   "on_main_chain = true" and not "WITH RECURSIVE" for an on-chain block
   with mainChainRebuilding == 0.
6. BuildQuery_CTEQuery: confirm the CTE form ("WITH RECURSIVE") is
   returned when mainChainRebuilding > 0 or the start block is a fork tip.

Additional integration tests: ModelForkReturnsOnlyForkAncestors,
ModelForkTipIDs, FastPathCacheNotPoisoned.
@sonarqubecloud

sonarqubecloud Bot commented May 7, 2026

@oskarszoon oskarszoon merged commit 3bec34c into bsv-blockchain:main May 7, 2026
25 checks passed
@oskarszoon oskarszoon deleted the perf/legacy-sync-headers-on-main-chain branch May 7, 2026 14:35