
lightning-liquidity: Introduce EventQueue notifier and wake BP for message processing #3509


Open — tnull wants to merge 14 commits into main from 2025-01-liquidity-cleanup

Conversation

tnull
Contributor

@tnull tnull commented Jan 8, 2025

As discussed/mentioned in #3436, this is a prefactor to upcoming persistence/LSPS1 service/LSPS5 work.

Here, we introduce an `EventQueueNotifierGuard` type, reducing the potential for lock contention arising from calling `EventQueue::enqueue` while holding peer state locks.

We furthermore wake the background processor to trigger message processing after we enqueue new messages.
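For illustration, here is a minimal sketch of the guard pattern described above (the type and method names are placeholders, not the actual lightning-liquidity API): events are buffered in the guard while peer-state locks are held, and waiters are only notified when the guard is dropped, i.e. after those locks have been released.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};

/// Hypothetical event type standing in for lightning-liquidity's events.
#[derive(Debug)]
struct Event(u64);

/// Simplified event queue: a buffer plus a condition variable to wake waiters.
struct EventQueue {
	queue: Mutex<VecDeque<Event>>,
	condvar: Condvar,
}

impl EventQueue {
	fn new() -> Arc<Self> {
		Arc::new(Self { queue: Mutex::new(VecDeque::new()), condvar: Condvar::new() })
	}

	/// Acquire the notifier guard *before* taking any peer-state locks.
	fn notifier(self: &Arc<Self>) -> EventQueueNotifierGuard {
		EventQueueNotifierGuard { queue: Arc::clone(self), pending: Vec::new() }
	}
}

/// Buffers enqueued events and only notifies waiters on drop, i.e., after the
/// caller has released whatever other locks it held while enqueuing.
struct EventQueueNotifierGuard {
	queue: Arc<EventQueue>,
	pending: Vec<Event>,
}

impl EventQueueNotifierGuard {
	fn enqueue(&mut self, event: Event) {
		self.pending.push(event);
	}
}

impl Drop for EventQueueNotifierGuard {
	fn drop(&mut self) {
		if self.pending.is_empty() {
			return;
		}
		let mut queue = self.queue.queue.lock().unwrap();
		queue.extend(self.pending.drain(..));
		// Waiters are only woken here, outside of any peer-state locks.
		self.queue.condvar.notify_all();
	}
}

fn main() {
	let event_queue = EventQueue::new();
	let peer_state = Mutex::new(0u64);

	// Guard first, then the peer-state lock; notification only happens after
	// both have been released.
	let mut notifier = event_queue.notifier();
	{
		let mut state = peer_state.lock().unwrap();
		*state += 1;
		notifier.enqueue(Event(*state));
	}
	drop(notifier);

	println!("queued: {:?}", event_queue.queue.lock().unwrap());
}
```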

@tnull tnull requested a review from TheBlueMatt January 8, 2025 11:19
@tnull tnull force-pushed the 2025-01-liquidity-cleanup branch 3 times, most recently from 90256c6 to f24e843 Compare January 14, 2025 12:43
@tnull tnull force-pushed the 2025-01-liquidity-cleanup branch 2 times, most recently from ec27a7a to d24eca4 Compare January 16, 2025 08:57
@tnull
Contributor Author

tnull commented Jan 16, 2025

Rebased after #3533 landed; this should be ready for review (cc @TheBlueMatt).

@TheBlueMatt TheBlueMatt added the "weekly goal (Someone wants to land this this week)" label Jan 18, 2025
@tnull tnull force-pushed the 2025-01-liquidity-cleanup branch 2 times, most recently from 90a1b34 to cfce460 Compare January 20, 2025 10:14
@TheBlueMatt
Collaborator

Why end up going with a direct function callback instead of the lightning Future logic (and integrating into the BP to drive the calls for LDK users and letting other users drive via an async task)?

@tnull
Contributor Author

tnull commented Jan 22, 2025

Why end up going with a direct function callback instead of the lightning Future logic (and integrating into the BP to drive the calls for LDK users and letting other users drive via an async task)?

Mhh, that might be an alternative approach, yes. I wonder if it would make sense to add this more generally to the CustomMessageHandler interface, as every CMH will likely need to call back into PeerHandler to trigger message processing?
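To illustrate the kind of wake-and-process pattern under discussion, here is a minimal, dependency-free sketch (a blocking stand-in, not the lightning `Future` machinery or any real LDK API): the message handler signals a notifier when it enqueues messages, and a background driver wakes up and would then call something like `PeerManager::process_events`.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::{Duration, Instant};

/// Blocking stand-in for a wake mechanism like lightning's `Notifier`; the
/// real logic would hand out a pollable future instead of blocking.
#[derive(Default)]
struct WakeFlag {
	pending: Mutex<bool>,
	condvar: Condvar,
}

impl WakeFlag {
	fn notify(&self) {
		*self.pending.lock().unwrap() = true;
		self.condvar.notify_all();
	}

	/// Wait until notified or until `timeout` elapses; returns whether we were
	/// notified. Multiple `notify` calls before this returns coalesce into one.
	fn wait(&self, timeout: Duration) -> bool {
		let deadline = Instant::now() + timeout;
		let mut pending = self.pending.lock().unwrap();
		while !*pending {
			let now = Instant::now();
			if now >= deadline {
				return false;
			}
			pending = self.condvar.wait_timeout(pending, deadline - now).unwrap().0;
		}
		*pending = false;
		true
	}
}

fn main() {
	let wake = Arc::new(WakeFlag::default());

	// Stand-in for the background-processor loop: when woken, it would call
	// something like `peer_manager.process_events()` to flush queued messages.
	let bp_wake = Arc::clone(&wake);
	let bp = thread::spawn(move || {
		if bp_wake.wait(Duration::from_secs(5)) {
			println!("BP woken: processing enqueued LSPS messages");
		}
	});

	// Stand-in for the message queue/handler: enqueue a message, then wake the BP.
	wake.notify();
	bp.join().unwrap();
}
```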

@tnull tnull force-pushed the 2025-01-liquidity-cleanup branch from cfce460 to cbd4754 Compare January 28, 2025 15:43
@tnull tnull force-pushed the 2025-01-liquidity-cleanup branch from cbd4754 to 03eb649 Compare January 28, 2025 15:49
@tnull
Contributor Author

tnull commented Jan 28, 2025

Now switched to having the BP woken to trigger message processing. Had to rebase on top of #3546 to avoid silent rebase conflicts while adding lightning-liquidity as a dependency of lightning-background-processor.

This should generally be good for review, but I'm currently still fighting some issues arising from `RefCell` not being `Sync + Send` under --no-default-features --features futures.
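For context on that last point, a small illustration of why `RefCell` trips up `Sync` bounds (purely illustrative, not code from this PR):

```rust
use std::cell::RefCell;

// A handler with interior-mutable state. `RefCell<T>` is `Send` when `T: Send`,
// but it is never `Sync`, so this struct isn't `Sync` either and cannot satisfy
// `Sync` bounds like the ones discussed later in this PR.
struct Handler {
	counter: RefCell<u64>,
}

fn requires_sync<T: Sync>(_t: &T) {}

fn main() {
	let handler = Handler { counter: RefCell::new(0) };
	*handler.counter.borrow_mut() += 1;

	// Uncommenting the next line fails to compile with
	// "`RefCell<u64>` cannot be shared between threads safely":
	// requires_sync(&handler);

	println!("counter = {}", handler.counter.borrow());
}
```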

@tnull tnull changed the title from "lightning-liquidity: Introduce MessageQueue and EventQueue notifier types" to "lightning-liquidity: Introduce EventQueue notifier and wake BP for message processing" Jan 28, 2025
@tnull tnull force-pushed the 2025-01-liquidity-cleanup branch 2 times, most recently from ea35484 to 0845ea1 Compare January 29, 2025 13:19

codecov bot commented Jan 29, 2025

Codecov Report

Attention: Patch coverage is 74.14966% with 76 lines in your changes missing coverage. Please review.

Project coverage is 89.07%. Comparing base (83e9e80) to head (278a877).

Files with missing lines Patch % Lines
lightning-liquidity/src/lsps2/service.rs 70.00% 18 Missing and 15 partials ⚠️
lightning-liquidity/src/lsps1/client.rs 0.00% 18 Missing ⚠️
lightning-background-processor/src/lib.rs 84.09% 14 Missing ⚠️
lightning-liquidity/src/events/mod.rs 50.00% 6 Missing ⚠️
lightning-liquidity/src/lsps0/client.rs 0.00% 3 Missing ⚠️
lightning-liquidity/src/events/event_queue.rs 93.75% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3509      +/-   ##
==========================================
- Coverage   89.12%   89.07%   -0.05%     
==========================================
  Files         156      157       +1     
  Lines      123514   123528      +14     
  Branches   123514   123528      +14     
==========================================
- Hits       110086   110037      -49     
- Misses      10749    10803      +54     
- Partials     2679     2688       +9     


@tnull tnull force-pushed the 2025-01-liquidity-cleanup branch 5 times, most recently from ca96d1f to 4af5fc3 Compare January 29, 2025 16:28
Collaborator

@TheBlueMatt TheBlueMatt left a comment


Would be nice to get some additional context in some of the commit messages, especially "Relax Sync + Send bounds on BackgroundProcessor where possible", which really needs an explainer of why we think it's a good idea to do so.


let (expected_payment_size_msat, mpp_mode) =
	if let Some(payment_size_msat) = payment_size_msat {
		(*payment_size_msat, true)
	} else {
		debug_assert_eq!(num_htlcs, 1);
		if num_htlcs != 1 {
			// Revert the queue before error'ing
Collaborator


Not sure I understand the point of this logic here: we've pushed a new HTLC onto payment_queue, but we're erroring, which causes the HTLC to be failed back. The comment seems to imply we want to revert to the old queue state (which I think would be right?), but that isn't what we're doing.

Contributor Author


Hmm, maybe the comment could be more specific, but the idea here is to preserve the previous behavior, which keeps all HTLCs in the queue until we get HTLCHandlingFailed, at which point we need to clean up the queue, no?

Collaborator


Right, so two things:
(a) I think there's an existing bug here - LSPS2ServiceHandler::htlc_intercepted calls through to here, and when it gets an Err it calls channelmanager....fail_intercepted_htlc, but the code here keeps the HTLC listed as pending.
(b) Doing the take at the top looks pretty brittle, and isn't required; e.g., the following patch compiles:

$ git diff -U2
diff --git a/lightning-liquidity/src/lsps2/service.rs b/lightning-liquidity/src/lsps2/service.rs
index 69acc5ba6..f6ba6aec7 100644
--- a/lightning-liquidity/src/lsps2/service.rs
+++ b/lightning-liquidity/src/lsps2/service.rs
@@ -142,6 +142,5 @@ impl OutboundJITChannelState {
                let new_state;
                let res = match self {
-                       OutboundJITChannelState::PendingInitialPayment { payment_queue: old_payment_queue } => {
-                               let mut payment_queue = core::mem::take(old_payment_queue);
+                       OutboundJITChannelState::PendingInitialPayment { payment_queue } => {
                                let (total_expected_outbound_amount_msat, num_htlcs) = payment_queue.add_htlc(htlc);

@@ -152,6 +151,4 @@ impl OutboundJITChannelState {
                                                debug_assert_eq!(num_htlcs, 1);
                                                if num_htlcs != 1 {
-                                                       // Revert the queue before error'ing
-                                                       core::mem::swap(old_payment_queue, &mut payment_queue);
                                                        return Err(ChannelStateError(
                                                                "Paying via multiple HTLCs is disallowed in \"no-MPP+var-invoice\" mode.".to_string()
@@ -164,6 +161,4 @@ impl OutboundJITChannelState {
                                        || expected_payment_size_msat > opening_fee_params.max_payment_size_msat
                                {
-                                       // Revert the queue before error'ing
-                                       core::mem::swap(old_payment_queue, &mut payment_queue);
                                        return Err(ChannelStateError(
                                                        format!("Payment size violates our limits: expected_payment_size_msat = {}, min_payment_size_msat = {}, max_payment_size_msat = {}",
@@ -181,6 +176,4 @@ impl OutboundJITChannelState {
                                        opening_fee
                                } else {
-                                       // Revert the queue before error'ing
-                                       core::mem::swap(old_payment_queue, &mut payment_queue);
                                        return Err(ChannelStateError(
                                                        format!("Could not compute valid opening fee with min_fee_msat = {}, proportional = {}, and expected_payment_size_msat = {}",
@@ -199,5 +192,5 @@ impl OutboundJITChannelState {
                                {
                                        new_state = OutboundJITChannelState::PendingChannelOpen {
-                                               payment_queue,
+                                               payment_queue: core::mem::take(payment_queue),
                                                opening_fee_msat,
                                        };
@@ -210,9 +203,7 @@ impl OutboundJITChannelState {
                                        if mpp_mode {
                                                new_state =
-                                                       OutboundJITChannelState::PendingInitialPayment { payment_queue };
+                                                       OutboundJITChannelState::PendingInitialPayment { payment_queue: core::mem::take(payment_queue) };
                                                Ok(None)
                                        } else {
-                                               // Revert the queue before error'ing
-                                               core::mem::swap(old_payment_queue, &mut payment_queue);
                                                return Err(ChannelStateError(
                                                        "Intercepted HTLC is too small to pay opening fee".to_string(),

Contributor Author

@tnull tnull Feb 12, 2025


(a) I think there's an existing bug here - LSPS2ServiceHandler::htlc_intercepted calls through to here, and when it gets an Err it calls channelmanager....fail_intercepted_htlc, but the code here keeps the HTLC listed as pending.

Well, we definitely need to keep the payment as pending; the question is when we'd remove the HTLC. My thinking was that it would always happen as part of handling HTLCHandlingFailed, but IIUC we need #3551 for this to work reliably in all cases.

(b) doing the take at the top looks pretty brittle, and isn't required. eg the following patch compiles:

I think I considered this but somehow arrived at the opinion it wouldn't work (maybe something to do with persistence, as this commit came from that draft branch). I can't recall why I thought that, however, and indeed the above approach seems to work just as well while being cleaner. Now added in a fixup.

Collaborator


Well, we def. need to keep the payment as pending, question is when we'd remove the HTLC. My thinking was that it would always happen as part of handling HTLCHandlingFailed, but IIUC, we need #3551 for this to work reliably in all cases.

Ah, okay, I wasn't considering that fail_intercepted_htlc generates an HTLCHandlingFailed and that we handle it that way... That said, I'm not really convinced that's the right approach: it leaves a race where our state doesn't match the HTLC state, and we might think we've received enough HTLCs if another one comes in before we process the event.

Collaborator


Do you plan on changing the approach here?

Contributor Author


Do you plan on changing the approach here?

No, I think I'd prefer to tackle this pre-existing issue in a follow-up, also since we might want to work out a common pattern in conjunction with/after #3712.

@@ -166,6 +163,8 @@ impl OutboundJITChannelState {
	if expected_payment_size_msat < opening_fee_params.min_payment_size_msat
		|| expected_payment_size_msat > opening_fee_params.max_payment_size_msat
	{
		// Revert the queue before error'ing
		core::mem::swap(old_payment_queue, &mut payment_queue);
Collaborator


Might be easier to just push the HTLC right before we move to the success block, instead of pushing it and then trying to revert before returning an Err... As it is, we'll end up adding a new error return in the future and forgetting to revert.

Contributor Author


Yeah, although note that all following calculations are made based on the state with the HTLC already added. Pushing only in the success case would essentially mean refactoring the logic to run the same calculations twice.

@tnull tnull force-pushed the 2025-01-liquidity-cleanup branch from 4af5fc3 to 30e5fab Compare February 5, 2025 08:42
@tnull
Contributor Author

tnull commented Feb 5, 2025

Would be nice to get some additional context in some of the commit messages, especially "Relax Sync + Send bounds on BackgroundProcessor where possible" which really need an explainer of why we think its a good idea to do so.

Alright, now added some rationale there.

@tnull tnull force-pushed the 2025-01-liquidity-cleanup branch from 30e5fab to eb3ea83 Compare February 5, 2025 11:18
@tnull tnull force-pushed the 2025-01-liquidity-cleanup branch 7 times, most recently from b4dc220 to fd7f65a Compare April 11, 2025 11:20
@tnull
Contributor Author

tnull commented Apr 11, 2025

Somehow this is now failing no-std-check. My hunch is cc7a8cc has something to do with it, but I have yet to investigate further.

@TheBlueMatt
Collaborator

Looks like we just need default-features = false on the serde-json dependency.

@tnull tnull force-pushed the 2025-01-liquidity-cleanup branch from fd7f65a to a69b7e7 Compare April 11, 2025 16:24
@tnull
Contributor Author

tnull commented Apr 11, 2025

Looks like we just need default-features = false on the serde-json dependency.

Lol, thanks for pointing that out. Hope it was indeed just that, even though it's an embarrassing oversight on my part.

@TheBlueMatt
Collaborator

Oops, now missing:

$ git diff
diff --git a/lightning-invoice/Cargo.toml b/lightning-invoice/Cargo.toml
index 7c49b2177..8e0c7587f 100644
--- a/lightning-invoice/Cargo.toml
+++ b/lightning-invoice/Cargo.toml
@@ -20,7 +20,7 @@ std = []
 [dependencies]
 bech32 = { version = "0.11.0", default-features = false }
 lightning-types = { version = "0.3.0", path = "../lightning-types", default-features = false }
-serde = { version = "1.0.118", optional = true }
+serde = { version = "1.0", optional = true, default-features = false, features = ["alloc"] }
 bitcoin = { version = "0.32.2", default-features = false, features = ["secp-recovery"] }

 [dev-dependencies]

@tnull tnull force-pushed the 2025-01-liquidity-cleanup branch from a69b7e7 to 2c2d3a8 Compare April 12, 2025 05:04
Collaborator

@TheBlueMatt TheBlueMatt left a comment


Basically LGTM, then.

@@ -30,17 +30,18 @@ use core::pin::Pin;
 use core::task::{Context, Poll};

 /// Used to signal to one of many waiters that the condition they're waiting on has happened.
-pub(crate) struct Notifier {
+pub struct Notifier {
Collaborator


This needs drastically better documentation if we're gonna make it public. Discuss use-cases, describe that notification coalescing happens, and detail when a notification is considered "received" (either a future poll completes Ready, or a callback returns (or just is called?)).
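To make the requested semantics concrete, here is a purely illustrative mini-notifier (not the real lightning `Notifier` implementation) demonstrating the two points above: notifications coalesce, and a notification counts as received once a future's `poll` returns `Ready`, which consumes it.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Illustrative-only notifier: notifications coalesce, and a notification is
/// "received" once a future's `poll` returns `Ready`, which consumes it.
#[derive(Default)]
struct MiniNotifier {
	pending: Mutex<bool>,
}

impl MiniNotifier {
	fn notify(&self) {
		// Coalescing: notifying twice before anyone polls still yields a
		// single pending notification.
		*self.pending.lock().unwrap() = true;
	}

	fn get_future(self: &Arc<Self>) -> MiniFuture {
		MiniFuture { notifier: Arc::clone(self) }
	}
}

struct MiniFuture {
	notifier: Arc<MiniNotifier>,
}

impl Future for MiniFuture {
	type Output = ();
	fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
		let mut pending = self.notifier.pending.lock().unwrap();
		if *pending {
			// Returning `Ready` consumes the (coalesced) notification.
			*pending = false;
			Poll::Ready(())
		} else {
			// A real implementation would store `_cx.waker()` here so it can
			// be woken on the next `notify` call.
			Poll::Pending
		}
	}
}

// Minimal no-op waker so we can poll by hand without pulling in an executor.
fn noop_waker() -> Waker {
	unsafe fn clone(_: *const ()) -> RawWaker {
		RawWaker::new(std::ptr::null(), &VTABLE)
	}
	unsafe fn noop(_: *const ()) {}
	static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
	unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
	let notifier = Arc::new(MiniNotifier::default());
	let waker = noop_waker();
	let mut cx = Context::from_waker(&waker);

	notifier.notify();
	notifier.notify(); // coalesced with the first call

	let mut fut = notifier.get_future();
	assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready(()));

	// The notification was consumed above, so a fresh future is now pending.
	let mut fut2 = notifier.get_future();
	assert_eq!(Pin::new(&mut fut2).poll(&mut cx), Poll::Pending);
}
```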

tnull added 12 commits April 15, 2025 10:47
Previously, when enqueuing new events to the `EventQueue`, we'd
directly attempt to wake any notifiers/notify any threads waiting on
the `Condvar` about the newly available events. This could of course
mean we'd notify them while still holding some locks ourselves, e.g., on
the peer state.

Here, we instead introduce an `EventQueueNotifierGuard` type that will
notify about pending events if necessary, which mitigates any potential
lock contention: we now simply have to ensure that any method calling
`enqueue` holds the notifier before acquiring any locks.

.. in order to make handling generics easier, just as we do with
`AChannelManager`, `AOnionMessenger`, etc.

Instead of doing the callback dance, we have
`lightning-background-processor` take `lightning-liquidity` as a
dependency and wake the BP whenever we enqueue new messages to the
`MessageQueue`.

It seems that a lot of the generics on `BackgroundProcessor` don't
actually require the `Sync + Send` bounds. Here, we therefore
drop them where possible, as the unnecessary bounds could result in
the compiler disallowing the use of certain types that aren't `Sync +
Send`, even when run in threadless environments (i.e., some `no_std`
environments).
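As a minimal illustration of the rationale in that last commit message (the processor types below are hypothetical, not the actual BackgroundProcessor signature): if a generic parameter never crosses a thread boundary, dropping the `Send + Sync` bounds lets single-threaded users plug in types, e.g. anything holding an `Rc`, that are neither.

```rust
use std::rc::Rc;

/// Hypothetical event-handler trait; the real BackgroundProcessor generics are
/// considerably more involved.
trait EventHandler {
	fn handle_event(&self, event: u32);
}

struct Logger;
impl EventHandler for Logger {
	fn handle_event(&self, event: u32) {
		println!("event: {event}");
	}
}

/// A handler holding an `Rc` is neither `Send` nor `Sync`.
struct SharedLogger(Rc<Logger>);
impl EventHandler for SharedLogger {
	fn handle_event(&self, event: u32) {
		self.0.handle_event(event);
	}
}

// Over-constrained version: requires `Send + Sync` even though the handler is
// only ever used from the thread that owns the processor.
#[allow(dead_code)]
struct StrictProcessor<H: EventHandler + Send + Sync> {
	handler: H,
}

// Relaxed version: no thread-safety bounds, so `Rc`-based handlers work in
// threadless (e.g., some `no_std`) environments.
struct RelaxedProcessor<H: EventHandler> {
	handler: H,
}

impl<H: EventHandler> RelaxedProcessor<H> {
	fn process(&self, event: u32) {
		self.handler.handle_event(event);
	}
}

fn main() {
	// Compiles with the relaxed bounds...
	let processor = RelaxedProcessor { handler: SharedLogger(Rc::new(Logger)) };
	processor.process(42);

	// ...whereas `StrictProcessor { handler: SharedLogger(Rc::new(Logger)) }`
	// would be rejected because `Rc<Logger>` is neither `Send` nor `Sync`.
	let _strict = StrictProcessor { handler: Logger };
}
```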
@TheBlueMatt
Collaborator

Feel free to squash (once there's better Notifier documentation), then I'm happy.

Labels: lightning-liquidity, weekly goal (Someone wants to land this this week)