server: fix peer add/done race between peerHandler and syncManager #2480
Aharonee wants to merge 4 commits into btcsuite:master
Conversation
Force-pushed ee422e0 to 50b62a3
Pull Request Test Coverage Report for Build 22067556256
I experience the same issue. Wow, really need this.
server.go
Outdated
)

// peerLifecycleEvent represents a peer connection or disconnection event.
// Using a single channel for both event types guarantees FIFO ordering:
Do we have the "first-in" part? Can OnVerAck be delayed and send its part after "done" event is sent? E.g. if OnVerAck runs longer than negotiateTimeout.
Good catch, there still seems to be a potential race in that scenario.
I've pushed a commit which changes the peerDoneHandler into peerLifecycleHandler, and delegates responsibility for both add peer and done peer events to it.
That way a single goroutine will manage synchronization and correct ordering of the peer lifecycle events.
Does that make sense?
This change looks good to me!
peerDoneHandler ran as a separate goroutine per peer and independently notified both peerHandler (via the donePeers channel) and the sync manager (via syncManager.DonePeer) about a peer disconnect. Because these two sends were unsynchronized, the sync manager could observe DonePeer before NewPeer when a peer connected and disconnected quickly. This caused the sync manager to log "unknown peer", then later register the already-dead peer as a sync candidate that was never cleaned up, potentially leaving it stuck with a dead sync peer.

Two structural changes eliminate the race:

1. Merge the newPeers and donePeers channels into a single peerLifecycle channel. Since OnVerAck (add) always fires before WaitForDisconnect returns (done), a single FIFO channel guarantees peerHandler always processes add before done for a given peer, removing the select ambiguity where Go could pick done first.

2. Move the syncManager.DonePeer call and orphan eviction from peerDoneHandler into handleDonePeerMsg, which runs inside peerHandler. All sync manager peer lifecycle notifications now originate from the single peerHandler goroutine and flow into sm.msgChan in guaranteed add-before-done order.
Force-pushed 50b62a3 to 091b790
Address review feedback on the peer add/done race fix:

- Make peerLifecycleHandler (renamed from peerDoneHandler) the sole sender of both peerAdd and peerDone events for each peer. OnVerAck now closes a signal channel (verAckCh) instead of sending directly, and peerLifecycleHandler selects on verAckCh vs peer.Done() to decide whether to send peerAdd before peerDone. This guarantees ordering by construction: a single goroutine sends both events sequentially, eliminating the negotiateTimeout race window.
- Add a Done() method to peer.Peer exposing the quit channel read-only, enabling select-based disconnect detection from server code.
- Remove the now-unused AddPeer method.
- Address style feedback: 80-char line limit, empty lines between switch cases, break long function calls, use require.GreaterOrEqualf instead of if+Fatalf, bump syncRaceConcurrency to 300 for backpressure testing, add TestPreVerackDisconnect for disconnect prior to verack.
// peerAdd is always enqueued before peerDone.
func (s *server) peerLifecycleHandler(sp *serverPeer) {
	// Wait for the handshake to complete or the peer to
	// disconnect, whichever comes first.
	select {
	case <-sp.verAckCh:
		s.peerLifecycle <- peerLifecycleEvent{
			action: peerAdd, sp: sp,
		}

	case <-sp.Peer.Done():
		// Disconnected before verack; no peerAdd needed.
	}
If both sp.verAckCh and sp.Peer.Done() have messages to receive, select chooses pseudorandomly among them. So peerAdd can be skipped even if VerAckReceived is true, and handleDonePeerMsg will call DonePeer for an unknown peer.
Does it make sense to prioritize receiving from sp.verAckCh or check VerAckReceived if sp.Peer.Done() fired?
I think it should be fine to skip add peer if done peer event has already occurred.
After all, the peer has disconnected so we can avoid notifying the server of a new peer just to notify it right away after to remove it.
My main concern was done peer being processed before add peer, but done peer processing for an unknown peer that has already disconnected seems harmless.
You are right. My proposal would only improve log message clarity (avoiding "unknown peer" being logged), not the correctness of the code itself. It is optional.
Note that the code no longer matches the commit message. The commit says: "Prioritize verAckCh in peerLifecycleHandler select to avoid nondeterministic peerAdd skipping when both channels are ready." However, there are no changes in peerLifecycleHandler's select logic.
I suggest implementing prioritization exactly as described in the commit message: first attempt a non-blocking receive from sp.verAckCh, and only in the default case fall back to the two-channel select. That would bring the code in line with the commit message.
const (
-	peerAdd peerLifecycleAction = iota
+	peerAdd peerLifecycleAction = iota
this formatting change should belong to the first commit
I can squash both commits and force push if you prefer, but wouldn't it be more convenient for you to review the diff each time and only squash merge at the end?
Sure, let's keep them separate for now.
server.go
Outdated
	close(sp.verAckCh)
}
The current code allows calling OnVerAck only once. Should we safeguard for the future using sync.Once?
I think safeguarding this could potentially hide a bug, and if it is called twice we would prefer a loud panic.
This pattern is also a consistent pattern used in the codebase, for example: Peer.quit channel is not safeguarded and is closed by Peer.Disconnect().
Hmm, maybe we can produce an error instead, if it is closed already?

select {
case <-sp.verAckCh:
	log Error
default:
	close(sp.verAckCh)
}

The error won't let it pass unnoticed, but at least it won't panic and crash. What do you think?
server.go
Outdated
// goroutine (peerLifecycleHandler), guaranteeing that peerAdd is
// always enqueued before peerDone.
I propose to adjust the comment to reflect that peerAdd may be skipped.
server.go
Outdated
knownAddresses lru.Cache
banScore       connmgr.DynamicBanScore
quit           chan struct{}
verAckCh       chan struct{} // closed when OnVerAck fires
Formatting:
// Closed when OnVerAck fires.
verAckCh chan struct{}
server.go
Outdated
)

// peerLifecycleEvent represents a peer connection or disconnection event.
// Using a single channel for both event types guarantees FIFO ordering:
This change looks good to me!
Prioritize verAckCh in peerLifecycleHandler select to avoid nondeterministic peerAdd skipping when both channels are ready. Guard OnVerAck against double-close by checking the channel before closing, logging an error instead of panicking. Adjust peerLifecycleEvent comment to reflect that peerAdd may be skipped when the peer disconnects before or concurrently with verack. Fix verAckCh field comment formatting.
starius left a comment:

LGTM! 💾
The fix is currently validated only by integration tests. They cover the main failure mode well, but there are still unit-level coverage gaps that should be addressed with direct regression tests:

- Add an OnVerAck idempotency test for the double-call guard (case <-sp.verAckCh): call OnVerAck twice on the same serverPeer, assert no panic on the second call, and assert verAckCh remains closed.
- Add a deterministic peerLifecycleHandler test for the simultaneous-ready edge case: make both verAckCh and Peer.Done() ready at the same time, then assert the intended behavior is stable and documented.
- Add a direct per-peer lifecycle ordering test: in the verack-then-disconnect path, assert the emitted events for the same peer are exactly peerAdd followed by peerDone.

I suggest adding these as regression tests that pass with the fix and fail if any part of the fix is rolled back. That ensures the core invariants are verified directly.
select {
case <-sp.verAckCh:
	peerLog.Errorf("OnVerAck called more than once "+
		"for peer %v", sp)
default:
	close(sp.verAckCh)
}
I propose to cover this with a direct test (not rpctest) to make sure that it works as expected (logs, not panics by closing the channel again). To be clear, the code is correct, but it is better to cover such fragile things explicitly, IMHO. Specifically: call OnVerAck twice on the same serverPeer and assert no panic + verAckCh remains closed.
// was received, then waits for disconnect and sends peerDone.
// Because both sends originate from this single goroutine,
// peerAdd is always enqueued before peerDone.
func (s *server) peerLifecycleHandler(sp *serverPeer) {
Please add a deterministic unit test for peerLifecycleHandler where verack arrives before disconnect, and assert peerLifecycle emits peerAdd first and peerDone second.
We don't have internal observability in integration/sync_race_test.go since it runs against a separate btcd process, so we can add a direct regression test in server_test.go to cover the fix.
The test has to build before the fix is applied, but fail because of that race that we're fixing.
integration/sync_race_test.go
Outdated
for time.Now().Before(deadline) && iter < syncRaceIterations {
	for i := 0; i < syncRaceConcurrency; i++ {
		go func() {
			_ = fakePeerConn(nodeAddr)
We should check this error and fail the test if it fails.
fakePeerConn errors are ignored and the test increments done regardless. On constrained/slow environments, many connection attempts can fail and the test may not actually execute enough handshake+disconnect cycles to stress the race as intended.
integration/sync_race_test.go
Outdated
for i := 0; i < 50; i++ {
	conn, err := net.DialTimeout("tcp", nodeAddr, 5*time.Second)
	if err != nil {
		continue
This error also should not be ignored. This may result in the test passing without meaningful stress.
integration/sync_race_test.go
Outdated
nodeTCP, err := net.ResolveTCPAddr("tcp", nodeAddr)
if err != nil {
	conn.Close()
	continue
This error also should not be ignored. This may result in the test passing without meaningful stress.
integration/sync_race_test.go
Outdated
msgVersion := wire.NewMsgVersion(me, you, nonce, 0)
msgVersion.Services = wire.SFNodeNetwork | wire.SFNodeWitness

_ = wire.WriteMessage(
This error also should not be ignored. This may result in the test passing without meaningful stress.
integration/sync_race_test.go
Outdated
// sending verack. This produces a peerDone without a preceding
// peerAdd in the lifecycle channel.
s/produces/is expected to produce/
We do not validate the internal state of the running btcd process, so this is only our assumption, not an actual verification.
Address review feedback on the peer add/done race fix:
Add three direct unit tests in server_test.go that exercise the fix
without the full server or rpctest harness:
- TestOnVerAckDoubleCall: call OnVerAck twice on the same serverPeer,
assert no panic and verAckCh remains closed.
- TestPeerLifecycleOrdering: verack before disconnect emits peerAdd
then peerDone in order.
- TestPeerLifecycleSimultaneousReady: both verAckCh and Peer.Done()
ready before the handler runs; assert peerDone always arrives and
peerAdd, if emitted, precedes it (100 iterations).
Harden integration tests in sync_race_test.go:
- Check fakePeerConn errors via require.NoError instead of discarding.
- Extract dialAndSendVersion helper for TestPreVerackDisconnect;
check all errors instead of silently continuing.
- Fix comment wording ("produces" -> "is expected to produce").
Diff looks nice, ran the entire test suite, all green. Nice work.
Summary
Fix a race condition where the sync manager can permanently get stuck with a dead sync peer after rapid peer connect/disconnect cycles.
The Race Condition
peerDoneHandler ran as a separate goroutine per peer and independently notified two event loops about a disconnect:

- the donePeers channel (consumed by peerHandler).
- syncManager.DonePeer() directly (sends to sm.msgChan, consumed by blockHandler).

Meanwhile, peerHandler only called syncManager.NewPeer() when it processed the newPeers channel. Because these two paths were unsynchronized, blockHandler could observe DonePeer before NewPeer for the same peer.

A second vector existed even if DonePeer were moved into peerHandler: two separate buffered channels (newPeers/donePeers) let Go's select randomly pick the done case before the add case when both were ready simultaneously.

A third vector existed due to negotiateTimeout: if the 30s timeout in peer.Peer.start() fired between verAckReceived = true and the OnVerAck callback completing, peerDoneHandler could observe VerAckReceived() == true and send peerDone before the OnVerAck callback sent peerAdd.

Consequences: The sync manager receives DonePeer for an unknown peer (logged as a warning, no cleanup). Then NewPeer arrives for the already-dead peer -- the sync manager registers it as a candidate and potentially selects it as syncPeer. Since it is already disconnected, no subsequent DonePeer arrives to clear it. The node is stuck: it believes it has a sync peer, ignores new candidates, and never makes chain progress.

What Triggers It
Any scenario that produces rapid connect/disconnect cycles:
The Fix
Three structural changes eliminate all race vectors:
1. Merge newPeers and donePeers into a single peerLifecycle channel. A single FIFO channel eliminates the select-ambiguity vector where Go's select could pick done before add.

2. Move syncManager.DonePeer() and orphan eviction into handleDonePeerMsg. All sync manager notifications now flow through the peerHandler goroutine.

3. Make peerLifecycleHandler (renamed from peerDoneHandler) the sole sender of both peerAdd and peerDone for each peer. OnVerAck no longer sends to the channel directly; it closes a signal channel (verAckCh). peerLifecycleHandler selects on verAckCh vs peer.Done() (a new method exposing the peer's quit channel), sends peerAdd if verack was received, then waits for disconnect and sends peerDone. Because both sends originate from the same goroutine, peerAdd is never enqueued after peerDone. If both channels are ready simultaneously, Go's select is nondeterministic so peerAdd may be skipped -- this is harmless (the peer is already disconnected) and documented in the peerLifecycleEvent comment.

Additionally, OnVerAck is guarded against double-close: if called more than once on the same peer, it logs an error instead of panicking.

Reproducing on master (without the fix)

The included integration test can demonstrate the corruption on an unpatched master branch:

git checkout master
git checkout bugfix/peer_race_condition -- integration/sync_race_test.go
go test -tags=rpctest -v -run TestSyncManagerRaceCorruption ./integration/ -count=10 -timeout 900s

Test Plan

- go build ./... compiles cleanly
- go test -tags=rpctest -v -run TestSyncManagerRaceCorruption ./integration/ -count=10 -timeout 900s passes
- TestPreVerackDisconnect passes (disconnect before verack)
- TestOnVerAckDoubleCall -- unit test calling OnVerAck twice on the same serverPeer, asserting no panic and verAckCh remains closed
- TestPeerLifecycleOrdering -- unit test asserting verack-then-disconnect emits peerAdd followed by peerDone
- TestPeerLifecycleSimultaneousReady -- unit test asserting stable behavior when both verAckCh and Peer.Done() are ready simultaneously (100 iterations, verifying the ordering invariant holds regardless of which select case wins)
go build ./...compiles cleanlygo test -tags=rpctest -v -run TestSyncManagerRaceCorruption ./integration/ -count=10 -timeout 900spassesTestPreVerackDisconnectpasses (disconnect before verack)TestOnVerAckDoubleCall-- unit test callingOnVerAcktwice on the sameserverPeer, asserting no panic andverAckChremains closedTestPeerLifecycleOrdering-- unit test asserting verack-then-disconnect emitspeerAddfollowed bypeerDoneTestPeerLifecycleSimultaneousReady-- unit test asserting stable behavior when bothverAckChandPeer.Done()are ready simultaneously (100 iterations, verifying ordering invariant holds regardless of whichselectcase wins)