Commit d5e4829

Do not track HTLC IDs as separate MPP parts which need claiming
When we claim an MPP payment, we need to track which channels have had the preimage durably added to their `ChannelMonitor` to ensure we don't remove the preimage from any `ChannelMonitor`s until all `ChannelMonitor`s have the preimage.

Previously, we tracked each MPP part, down to the HTLC ID, as a part which we needed to get the preimage on disk for. However, this is not necessary - once a `ChannelMonitor` has a preimage, it applies it to all inbound HTLCs with the same payment hash.

Further, this can cause a channel to wait on itself in cases of high-latency synchronous persistence -
 * If we receive an MPP payment for which multiple parts came to us over the same channel,
 * and claim the MPP payment, creating a `ChannelMonitorUpdate` for the first part but enqueueing the remaining HTLC claim(s) in the channel's holding cell,
 * and we receive a `revoke_and_ack` for the same channel before the `ChannelManager::claim_payment` method completes (as each claim waits for the `ChannelMonitorUpdate` persistence),
 * we will cause the `ChannelMonitorUpdate` for that `revoke_and_ack` to go into the blocked set, waiting on the MPP parts to be fully claimed,
 * but when `claim_payment` goes to add the next `ChannelMonitorUpdate` for the MPP claim, it will be placed in the blocked set, since the blocked set is non-empty.

Thus, we'll end up with a `ChannelMonitorUpdate` in the blocked set which is needed to unblock the channel since it is a part of the MPP set which blocked the channel.
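The dedup-by-channel idea can be illustrated with a minimal, self-contained Rust sketch. This is only an illustration: `ChannelKey` and `MppClaimTracker` are simplified stand-ins, not LDK's `PendingMPPClaim`, `PublicKey`, `OutPoint`, or `ChannelId` types, and the real logic lives in `ChannelManager` in the diff below.

// Illustrative sketch of the tracking change described above; types simplified.
type ChannelKey = (u64 /* counterparty */, u64 /* funding outpoint */, u64 /* channel id */);

struct MppClaimTracker {
	channels_without_preimage: Vec<ChannelKey>,
	channels_with_preimage: Vec<ChannelKey>,
}

impl MppClaimTracker {
	// Track each channel at most once, even if several MPP parts (HTLCs) arrived
	// over it: once a monitor has the preimage it claims all matching HTLCs.
	fn new(parts: &[ChannelKey]) -> Self {
		let mut channels_without_preimage = Vec::with_capacity(parts.len());
		for chan in parts {
			if !channels_without_preimage.contains(chan) {
				channels_without_preimage.push(*chan);
			}
		}
		Self { channels_without_preimage, channels_with_preimage: Vec::new() }
	}

	// Called once a channel's monitor update containing the preimage has been
	// durably persisted. Returns true when every channel is done.
	fn mark_persisted(&mut self, chan: ChannelKey) -> bool {
		if let Some(pos) = self.channels_without_preimage.iter().position(|c| *c == chan) {
			let done = self.channels_without_preimage.remove(pos);
			self.channels_with_preimage.push(done);
		}
		self.channels_without_preimage.is_empty()
	}
}

fn main() {
	// Two MPP parts over channel (1, 0, 42) and one over (2, 0, 43): only two
	// channels need the preimage persisted, not three HTLCs.
	let parts = [(1, 0, 42), (1, 0, 42), (2, 0, 43)];
	let mut tracker = MppClaimTracker::new(&parts);
	assert!(!tracker.mark_persisted((1, 0, 42)));
	assert!(tracker.mark_persisted((2, 0, 43)));
}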
1 parent cc7a8cc commit d5e4829

4 files changed, +289 -26 lines changed
lightning/src/ln/chanmon_update_fail_tests.rs

+221
@@ -3860,3 +3860,224 @@ fn test_claim_to_closed_channel_blocks_claimed_event() {
 	nodes[1].chain_monitor.complete_sole_pending_chan_update(&chan_a.2);
 	expect_payment_claimed!(nodes[1], payment_hash, 1_000_000);
 }
+
+#[test]
+#[cfg(all(feature = "std", not(target_os = "windows")))]
+fn test_single_channel_multiple_mpp() {
+	use std::sync::atomic::{AtomicBool, Ordering};
+
+	// Test what happens when we attempt to claim an MPP with many parts that came to us through
+	// the same channel with a synchronous persistence interface which has very high latency.
+	//
+	// Previously, if a `revoke_and_ack` came in while we were still running in
+	// `ChannelManager::claim_payment` we'd end up hanging waiting to apply a
+	// `ChannelMonitorUpdate` until after it completed. See the commit which introduced this test
+	// for more info.
+	let chanmon_cfgs = create_chanmon_cfgs(9);
+	let node_cfgs = create_node_cfgs(9, &chanmon_cfgs);
+	let node_chanmgrs = create_node_chanmgrs(9, &node_cfgs, &[const { None }; 9]);
+	let mut nodes = create_network(9, &node_cfgs, &node_chanmgrs);
+
+	let node_7_id = nodes[7].node.get_our_node_id();
+	let node_8_id = nodes[8].node.get_our_node_id();
+
+	// Send an MPP payment in six parts along the path shown from top to bottom
+	//        0
+	//  1 2 3 4 5 6
+	//        7
+	//        8
+	//
+	// We can in theory reproduce this issue with fewer channels/HTLCs, but getting this test
+	// robust is rather challenging. We rely on having the main test thread wait on locks held in
+	// the background `claim_funds` thread and unlocking when the `claim_funds` thread completes a
+	// single `ChannelMonitorUpdate`.
+	// This thread calls `get_and_clear_pending_msg_events()` and `handle_revoke_and_ack()`, both
+	// of which require `ChannelManager` locks, but we have to make sure this thread gets a chance
+	// to be blocked on the mutexes before we let the background thread wake `claim_funds` so that
+	// the mutex can switch to this main thread.
+	// This relies on our locks being fair, but also on our threads getting runtime during the test
+	// run, which can be pretty competitive. Thus we do a dumb dance to be as conservative as
+	// possible - we have a background thread which completes a `ChannelMonitorUpdate` (by sending
+	// into the `write_blocker` mpsc) but it doesn't run until a mpsc channel sends from this main
+	// thread to the background thread, and then we let it sleep a while before we send the
+	// `ChannelMonitorUpdate` unblocker.
+	// Further, we give ourselves two chances each time, needing 4 HTLCs just to unlock our two
+	// `ChannelManager` calls. We then need a few remaining HTLCs to actually trigger the bug, so
+	// we use 6 HTLCs.
+	// Finally, we do not run this test on Windows because it, somehow, in 2025, still does not
+	// implement actual preemptive multitasking and thinks that cooperative multitasking is
+	// acceptable in the 21st century, let alone a quarter of the way into it.
+	const MAX_THREAD_INIT_TIME: std::time::Duration = std::time::Duration::from_secs(1);
+
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 1, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 2, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 3, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 4, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 5, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 0, 6, 100_000, 0);
+
+	create_announced_chan_between_nodes_with_value(&nodes, 1, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 2, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 3, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 4, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 5, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 6, 7, 100_000, 0);
+	create_announced_chan_between_nodes_with_value(&nodes, 7, 8, 1_000_000, 0);
+
+	let (mut route, payment_hash, payment_preimage, payment_secret) = get_route_and_payment_hash!(&nodes[0], nodes[8], 50_000_000);
+
+	send_along_route_with_secret(&nodes[0], route, &[&[&nodes[1], &nodes[7], &nodes[8]], &[&nodes[2], &nodes[7], &nodes[8]], &[&nodes[3], &nodes[7], &nodes[8]], &[&nodes[4], &nodes[7], &nodes[8]], &[&nodes[5], &nodes[7], &nodes[8]], &[&nodes[6], &nodes[7], &nodes[8]]], 50_000_000, payment_hash, payment_secret);
+
+	let (do_a_write, blocker) = std::sync::mpsc::sync_channel(0);
+	*nodes[8].chain_monitor.write_blocker.lock().unwrap() = Some(blocker);
+
+	// Until we have std::thread::scoped we have to unsafe { turn off the borrow checker }.
+	// We do this by casting a pointer to a `TestChannelManager` to a pointer to a
+	// `TestChannelManager` with different (in this case 'static) lifetime.
+	// This is even suggested in the second example at
+	// https://doc.rust-lang.org/std/mem/fn.transmute.html#examples
+	let claim_node: &'static TestChannelManager<'static, 'static> =
+		unsafe { std::mem::transmute(nodes[8].node as &TestChannelManager) };
+	let thrd = std::thread::spawn(move || {
+		// Initiate the claim in a background thread as it will immediately block waiting on the
+		// `write_blocker` we set above.
+		claim_node.claim_funds(payment_preimage);
+	});
+
+	// First unlock one monitor so that we have a pending
+	// `update_fulfill_htlc`/`commitment_signed` pair to pass to our counterparty.
+	do_a_write.send(()).unwrap();
+
+	// Then fetch the `update_fulfill_htlc`/`commitment_signed`. Note that
+	// `get_and_clear_pending_msg_events` will immediately hang trying to take a peer lock which
+	// `claim_funds` is holding. Thus, we release a second write after a small sleep in the
+	// background to give `claim_funds` a chance to step forward, unblocking
+	// `get_and_clear_pending_msg_events`.
+	let do_a_write_background = do_a_write.clone();
+	let block_thrd2 = AtomicBool::new(true);
+	let block_thrd2_read: &'static AtomicBool = unsafe { std::mem::transmute(&block_thrd2) };
+	let thrd2 = std::thread::spawn(move || {
+		while block_thrd2_read.load(Ordering::Acquire) {
+			std::thread::yield_now();
+		}
+		std::thread::sleep(MAX_THREAD_INIT_TIME);
+		do_a_write_background.send(()).unwrap();
+		std::thread::sleep(MAX_THREAD_INIT_TIME);
+		do_a_write_background.send(()).unwrap();
+	});
+	block_thrd2.store(false, Ordering::Release);
+	let first_updates = get_htlc_update_msgs(&nodes[8], &nodes[7].node.get_our_node_id());
+	thrd2.join().unwrap();
+
+	// Disconnect node 7 from all its peers so it doesn't bother to fail the HTLCs back
+	nodes[7].node.peer_disconnected(nodes[1].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[2].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[3].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[4].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[5].node.get_our_node_id());
+	nodes[7].node.peer_disconnected(nodes[6].node.get_our_node_id());
+
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &first_updates.update_fulfill_htlcs[0]);
+	check_added_monitors(&nodes[7], 1);
+	expect_payment_forwarded!(nodes[7], nodes[1], nodes[8], Some(1000), false, false);
+	nodes[7].node.handle_commitment_signed(node_8_id, &first_updates.commitment_signed);
+	check_added_monitors(&nodes[7], 1);
+	let (raa, cs) = get_revoke_commit_msgs(&nodes[7], &node_8_id);
+
+	// Now, handle the `revoke_and_ack` from node 7. Note that `claim_funds` is still blocked on
+	// our peer lock, so we have to release a write to let it process.
+	// After this call completes, the channel previously would be locked up and should not be able
+	// to make further progress.
+	let do_a_write_background = do_a_write.clone();
+	let block_thrd3 = AtomicBool::new(true);
+	let block_thrd3_read: &'static AtomicBool = unsafe { std::mem::transmute(&block_thrd3) };
+	let thrd3 = std::thread::spawn(move || {
+		while block_thrd3_read.load(Ordering::Acquire) {
+			std::thread::yield_now();
+		}
+		std::thread::sleep(MAX_THREAD_INIT_TIME);
+		do_a_write_background.send(()).unwrap();
+		std::thread::sleep(MAX_THREAD_INIT_TIME);
+		do_a_write_background.send(()).unwrap();
+	});
+	block_thrd3.store(false, Ordering::Release);
+	nodes[8].node.handle_revoke_and_ack(node_7_id, &raa);
+	thrd3.join().unwrap();
+	assert!(!thrd.is_finished());
+
+	let thrd4 = std::thread::spawn(move || {
+		do_a_write.send(()).unwrap();
+		do_a_write.send(()).unwrap();
+	});
+
+	thrd4.join().unwrap();
+	thrd.join().unwrap();
+
+	expect_payment_claimed!(nodes[8], payment_hash, 50_000_000);
+
+	// At the end, we should have 7 ChannelMonitorUpdates - 6 for HTLC claims, and one for the
+	// above `revoke_and_ack`.
+	check_added_monitors(&nodes[8], 7);
+
+	// Now drive everything to the end, at least as far as node 7 is concerned...
+	*nodes[8].chain_monitor.write_blocker.lock().unwrap() = None;
+	nodes[8].node.handle_commitment_signed(node_7_id, &cs);
+	check_added_monitors(&nodes[8], 1);
+
+	let (updates, raa) = get_updates_and_revoke(&nodes[8], &nodes[7].node.get_our_node_id());
+
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &updates.update_fulfill_htlcs[0]);
+	expect_payment_forwarded!(nodes[7], nodes[2], nodes[8], Some(1000), false, false);
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &updates.update_fulfill_htlcs[1]);
+	expect_payment_forwarded!(nodes[7], nodes[3], nodes[8], Some(1000), false, false);
+	let mut next_source = 4;
+	if let Some(update) = updates.update_fulfill_htlcs.get(2) {
+		nodes[7].node.handle_update_fulfill_htlc(node_8_id, update);
+		expect_payment_forwarded!(nodes[7], nodes[4], nodes[8], Some(1000), false, false);
+		next_source += 1;
+	}
+
+	nodes[7].node.handle_commitment_signed(node_8_id, &updates.commitment_signed);
+	nodes[7].node.handle_revoke_and_ack(node_8_id, &raa);
+	if updates.update_fulfill_htlcs.get(2).is_some() {
+		check_added_monitors(&nodes[7], 5);
+	} else {
+		check_added_monitors(&nodes[7], 4);
+	}
+
+	let (raa, cs) = get_revoke_commit_msgs(&nodes[7], &node_8_id);
+
+	nodes[8].node.handle_revoke_and_ack(node_7_id, &raa);
+	nodes[8].node.handle_commitment_signed(node_7_id, &cs);
+	check_added_monitors(&nodes[8], 2);
+
+	let (updates, raa) = get_updates_and_revoke(&nodes[8], &node_7_id);
+
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &updates.update_fulfill_htlcs[0]);
+	expect_payment_forwarded!(nodes[7], nodes[next_source], nodes[8], Some(1000), false, false);
+	next_source += 1;
+	nodes[7].node.handle_update_fulfill_htlc(node_8_id, &updates.update_fulfill_htlcs[1]);
+	expect_payment_forwarded!(nodes[7], nodes[next_source], nodes[8], Some(1000), false, false);
+	next_source += 1;
+	if let Some(update) = updates.update_fulfill_htlcs.get(2) {
+		nodes[7].node.handle_update_fulfill_htlc(node_8_id, update);
+		expect_payment_forwarded!(nodes[7], nodes[next_source], nodes[8], Some(1000), false, false);
+	}
+
+	nodes[7].node.handle_commitment_signed(node_8_id, &updates.commitment_signed);
+	nodes[7].node.handle_revoke_and_ack(node_8_id, &raa);
+	if updates.update_fulfill_htlcs.get(2).is_some() {
+		check_added_monitors(&nodes[7], 5);
+	} else {
+		check_added_monitors(&nodes[7], 4);
+	}
+
+	let (raa, cs) = get_revoke_commit_msgs(&nodes[7], &node_8_id);
+	nodes[8].node.handle_revoke_and_ack(node_7_id, &raa);
+	nodes[8].node.handle_commitment_signed(node_7_id, &cs);
+	check_added_monitors(&nodes[8], 2);
+
+	let raa = get_event_msg!(nodes[8], MessageSendEvent::SendRevokeAndACK, node_7_id);
+	nodes[7].node.handle_revoke_and_ack(node_8_id, &raa);
+	check_added_monitors(&nodes[7], 1);
+}
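
The `write_blocker` rendezvous the test leans on can be sketched in isolation. This is a hedged, standalone illustration rather than the test utilities' actual chain-monitor hook: a zero-capacity `sync_channel` makes each simulated persist call block until the driving thread explicitly releases it, which is how the test above paces `claim_funds` one `ChannelMonitorUpdate` at a time.

use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
	// Zero capacity: every send rendezvouses with a matching recv, so the
	// "persisting" thread below cannot complete a write until this thread
	// explicitly releases one.
	let (do_a_write, blocker) = sync_channel::<()>(0);
	let writer = thread::spawn(move || {
		for i in 0..2 {
			// Stand-in for a monitor persist call gated on the blocker.
			blocker.recv().unwrap();
			println!("simulated monitor write {i} completed");
		}
	});
	// Release exactly two writes, one at a time.
	do_a_write.send(()).unwrap();
	do_a_write.send(()).unwrap();
	writer.join().unwrap();
}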

lightning/src/ln/channelmanager.rs

+34-26
@@ -1132,7 +1132,7 @@ pub(crate) enum MonitorUpdateCompletionAction {
 		/// A pending MPP claim which hasn't yet completed.
 		///
 		/// Not written to disk.
-		pending_mpp_claim: Option<(PublicKey, ChannelId, u64, PendingMPPClaimPointer)>,
+		pending_mpp_claim: Option<(PublicKey, ChannelId, PendingMPPClaimPointer)>,
 	},
 	/// Indicates an [`events::Event`] should be surfaced to the user and possibly resume the
 	/// operation of another channel.
@@ -1234,10 +1234,16 @@ impl From<&MPPClaimHTLCSource> for HTLCClaimSource {
 	}
 }
 
+#[derive(Debug)]
+pub(crate) struct PendingMPPClaim {
+	channels_without_preimage: Vec<(PublicKey, OutPoint, ChannelId)>,
+	channels_with_preimage: Vec<(PublicKey, OutPoint, ChannelId)>,
+}
+
 #[derive(Clone, Debug, Hash, PartialEq, Eq)]
 /// The source of an HTLC which is being claimed as a part of an incoming payment. Each part is
-/// tracked in [`PendingMPPClaim`] as well as in [`ChannelMonitor`]s, so that it can be converted
-/// to an [`HTLCClaimSource`] for claim replays on startup.
+/// tracked in [`ChannelMonitor`]s, so that it can be converted to an [`HTLCClaimSource`] for claim
+/// replays on startup.
 struct MPPClaimHTLCSource {
 	counterparty_node_id: PublicKey,
 	funding_txo: OutPoint,
@@ -1252,12 +1258,6 @@ impl_writeable_tlv_based!(MPPClaimHTLCSource, {
 	(6, htlc_id, required),
 });
 
-#[derive(Debug)]
-pub(crate) struct PendingMPPClaim {
-	channels_without_preimage: Vec<MPPClaimHTLCSource>,
-	channels_with_preimage: Vec<MPPClaimHTLCSource>,
-}
-
 #[derive(Clone, Debug, PartialEq, Eq)]
 /// When we're claiming a(n MPP) payment, we want to store information about that payment in the
 /// [`ChannelMonitor`] so that we can replay the claim without any information from the
@@ -7207,8 +7207,15 @@ where
 				}
 			}).collect();
 			let pending_mpp_claim_ptr_opt = if sources.len() > 1 {
+				let mut channels_without_preimage = Vec::with_capacity(mpp_parts.len());
+				for part in mpp_parts.iter() {
+					let chan = (part.counterparty_node_id, part.funding_txo, part.channel_id);
+					if !channels_without_preimage.contains(&chan) {
+						channels_without_preimage.push(chan);
+					}
+				}
 				Some(Arc::new(Mutex::new(PendingMPPClaim {
-					channels_without_preimage: mpp_parts.clone(),
+					channels_without_preimage,
 					channels_with_preimage: Vec::new(),
 				})))
 			} else {
@@ -7219,7 +7226,7 @@ where
 			let this_mpp_claim = pending_mpp_claim_ptr_opt.as_ref().and_then(|pending_mpp_claim|
 				if let Some(cp_id) = htlc.prev_hop.counterparty_node_id {
 					let claim_ptr = PendingMPPClaimPointer(Arc::clone(pending_mpp_claim));
-					Some((cp_id, htlc.prev_hop.channel_id, htlc.prev_hop.htlc_id, claim_ptr))
+					Some((cp_id, htlc.prev_hop.channel_id, claim_ptr))
 				} else {
 					None
 				}
@@ -7552,7 +7559,7 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
 		for action in actions.into_iter() {
 			match action {
 				MonitorUpdateCompletionAction::PaymentClaimed { payment_hash, pending_mpp_claim } => {
-					if let Some((counterparty_node_id, chan_id, htlc_id, claim_ptr)) = pending_mpp_claim {
+					if let Some((counterparty_node_id, chan_id, claim_ptr)) = pending_mpp_claim {
 						let per_peer_state = self.per_peer_state.read().unwrap();
 						per_peer_state.get(&counterparty_node_id).map(|peer_state_mutex| {
 							let mut peer_state = peer_state_mutex.lock().unwrap();
@@ -7563,24 +7570,17 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
 							if *pending_claim == claim_ptr {
 								let mut pending_claim_state_lock = pending_claim.0.lock().unwrap();
 								let pending_claim_state = &mut *pending_claim_state_lock;
-								pending_claim_state.channels_without_preimage.retain(|htlc_info| {
+								pending_claim_state.channels_without_preimage.retain(|(cp, op, cid)| {
 									let this_claim =
-										htlc_info.counterparty_node_id == counterparty_node_id
-										&& htlc_info.channel_id == chan_id
-										&& htlc_info.htlc_id == htlc_id;
+										*cp == counterparty_node_id && *cid == chan_id;
 									if this_claim {
-										pending_claim_state.channels_with_preimage.push(htlc_info.clone());
+										pending_claim_state.channels_with_preimage.push((*cp, *op, *cid));
 										false
 									} else { true }
 								});
 								if pending_claim_state.channels_without_preimage.is_empty() {
-									for htlc_info in pending_claim_state.channels_with_preimage.iter() {
-										let freed_chan = (
-											htlc_info.counterparty_node_id,
-											htlc_info.funding_txo,
-											htlc_info.channel_id,
-											blocker.clone()
-										);
+									for (cp, op, cid) in pending_claim_state.channels_with_preimage.iter() {
+										let freed_chan = (*cp, *op, *cid, blocker.clone());
 										freed_channels.push(freed_chan);
 									}
 								}
@@ -14786,8 +14786,16 @@ where
 					if payment_claim.mpp_parts.is_empty() {
 						return Err(DecodeError::InvalidValue);
 					}
+					let mut channels_without_preimage = payment_claim.mpp_parts.iter()
+						.map(|htlc_info| (htlc_info.counterparty_node_id, htlc_info.funding_txo, htlc_info.channel_id))
+						.collect::<Vec<_>>();
+					// If we have multiple MPP parts which were received over the same channel,
+					// we only track the channel once, since once the preimage is durably in the
+					// `ChannelMonitor` it will be used for all HTLCs with a matching hash.
+					channels_without_preimage.sort_unstable();
+					channels_without_preimage.dedup();
 					let pending_claims = PendingMPPClaim {
-						channels_without_preimage: payment_claim.mpp_parts.clone(),
+						channels_without_preimage,
 						channels_with_preimage: Vec::new(),
 					};
 					let pending_claim_ptr_opt = Some(Arc::new(Mutex::new(pending_claims)));
@@ -14820,7 +14828,7 @@ where
 
 					for part in payment_claim.mpp_parts.iter() {
 						let pending_mpp_claim = pending_claim_ptr_opt.as_ref().map(|ptr| (
-							part.counterparty_node_id, part.channel_id, part.htlc_id,
+							part.counterparty_node_id, part.channel_id,
 							PendingMPPClaimPointer(Arc::clone(&ptr))
 						));
 						let pending_claim_ptr = pending_claim_ptr_opt.as_ref().map(|ptr|
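
The reload-path dedup above (collect one `(counterparty, funding_txo, channel_id)` key per MPP part, then `sort_unstable` + `dedup`) can be shown standalone. This is a minimal sketch with plain tuple keys in place of LDK's types; `Vec::dedup` only removes adjacent duplicates, which is why the sort comes first, whereas the claim path above dedups with a `contains` check while building the list.

fn dedup_channels(mut keys: Vec<(u64, u64, u64)>) -> Vec<(u64, u64, u64)> {
	// Sort so that duplicate keys become adjacent, then drop the duplicates.
	keys.sort_unstable();
	keys.dedup();
	keys
}

fn main() {
	// Three MPP parts, two of which arrived over the same channel.
	let parts = vec![(2, 0, 7), (1, 0, 3), (2, 0, 7)];
	assert_eq!(dedup_channels(parts), vec![(1, 0, 3), (2, 0, 7)]);
}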

lightning/src/ln/functional_test_utils.rs

+20
@@ -779,6 +779,26 @@ pub fn get_revoke_commit_msgs<CM: AChannelManager, H: NodeHolder<CM=CM>>(node: &
 	})
 }
 
+/// Gets an `UpdateHTLCs` and `revoke_and_ack` (i.e. after we get a responding `commitment_signed`
+/// while we have updates in the holding cell).
+pub fn get_updates_and_revoke<CM: AChannelManager, H: NodeHolder<CM=CM>>(node: &H, recipient: &PublicKey) -> (msgs::CommitmentUpdate, msgs::RevokeAndACK) {
+	let events = node.node().get_and_clear_pending_msg_events();
+	assert_eq!(events.len(), 2);
+	(match events[0] {
+		MessageSendEvent::UpdateHTLCs { ref node_id, ref updates } => {
+			assert_eq!(node_id, recipient);
+			(*updates).clone()
+		},
+		_ => panic!("Unexpected event"),
+	}, match events[1] {
+		MessageSendEvent::SendRevokeAndACK { ref node_id, ref msg } => {
+			assert_eq!(node_id, recipient);
+			(*msg).clone()
+		},
+		_ => panic!("Unexpected event"),
+	})
+}
+
 #[macro_export]
 /// Gets an RAA and CS which were sent in response to a commitment update
 ///
