
Sweeper async change destination source fetching #3734

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Status: Open. joostjager wants to merge 5 commits into main from the async-sweep branch.

Conversation

joostjager (Contributor) commented Apr 14, 2025:

This PR converts OutputSweeper to take an async ChangeDestinationSource implementation. This allows a (remote) address fetch call to run without blocking chain notifications.

Furthermore, the changes demonstrate how LDK could be written in a natively async way while still allowing usage from a sync context via wrappers.

Part of #3540
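
For orientation, a minimal sketch of the shape this PR moves toward: an async ChangeDestinationSource trait, the sync variant kept for users without an async runtime, and a wrapper bridging the two. The trait and wrapper names match those discussed later in this thread, but the signatures are simplified stand-ins rather than the exact LDK definitions.

use core::future::Future;
use core::pin::Pin;

/// Simplified stand-in for bitcoin's ScriptBuf.
pub struct ScriptBuf(pub Vec<u8>);

/// Async variant: the sweeper awaits the (possibly remote) address fetch,
/// so chain notifications are never blocked on it.
pub trait ChangeDestinationSource {
    fn get_change_destination_script<'a>(
        &'a self,
    ) -> Pin<Box<dyn Future<Output = Result<ScriptBuf, ()>> + Send + 'a>>;
}

/// Sync variant retained for callers without an async runtime.
pub trait ChangeDestinationSourceSync {
    fn get_change_destination_script(&self) -> Result<ScriptBuf, ()>;
}

/// Exposes any sync implementation through the async trait by returning an
/// already-completed future.
pub struct ChangeDestinationSourceSyncWrapper<T: ChangeDestinationSourceSync>(pub T);

impl<T: ChangeDestinationSourceSync> ChangeDestinationSource
    for ChangeDestinationSourceSyncWrapper<T>
{
    fn get_change_destination_script<'a>(
        &'a self,
    ) -> Pin<Box<dyn Future<Output = Result<ScriptBuf, ()>> + Send + 'a>> {
        // The sync call runs eagerly; the returned future is immediately ready.
        let res = self.0.get_change_destination_script();
        Box::pin(async move { res })
    }
}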

ldk-reviews-bot commented Apr 14, 2025:

👋 Thanks for assigning @TheBlueMatt as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

}

Ok(())
self.persist_state(&*state_lock).map_err(|e| {
joostjager (Contributor Author):

No more sweeping in this method, all moved to a timer.

if let Some(spending_tx) = spending_tx_opt {
	self.broadcaster.broadcast_transactions(&[&spending_tx]);
}
let _ = self.persist_state(&*state_lock).map_err(|e| {
joostjager (Contributor Author):

No more sweeping in this event handler.

// Don't generate and broadcast if still delayed
return false;
// Prevent concurrent sweeping.
if sweeper_state.sweep_pending {
joostjager (Contributor Author):

Added this boolean. Seemed easiest with a locked state already.

joostjager (Contributor Author):

An alternative option with an AtomicBool might create a nested-locks situation that could be risky?

tnull (Contributor):

I'm not sure if I quite understand the issue, but whatever we do, we should definitely not add a flag to the SweeperState, which is the sweeper state to be persisted, and should not reflect runtime-specific behavior.

joostjager (Contributor Author):

To me, state isn't necessarily limited to persistent state; I could live with some items being persisted and others not. The AtomicBool option perhaps locks unnecessarily, and potentially with nesting too. Maybe I could split the persistent part of the state into a separate struct and group it with the runtime state. Then a single mutex could still guard all of it?

joostjager (Contributor Author):

Continued with the persistent/runtime split in a prep. commit.
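
A sketch of what that persistent/runtime split could look like, with both halves behind the single mutex discussed above. SweeperState, RuntimeSweeperState, and sweep_pending follow the names used in this thread; the guard methods are illustrative.

use std::sync::Mutex;

/// Fields that get serialized to disk.
struct PersistentSweeperState {
    // ...tracked outputs, best block, etc.
}

/// Fields that only matter while the process is running.
struct RuntimeSweeperState {
    /// Set while a sweep is in flight.
    sweep_pending: bool,
}

/// One mutex guards both halves, avoiding nested locks or a separate AtomicBool.
struct SweeperState {
    persistent: PersistentSweeperState,
    runtime: RuntimeSweeperState,
}

struct OutputSweeper {
    sweeper_state: Mutex<SweeperState>,
}

impl OutputSweeper {
    /// Returns true if the caller may start a sweep, marking it pending if so.
    fn try_begin_sweep(&self) -> bool {
        let mut state = self.sweeper_state.lock().unwrap();
        if state.runtime.sweep_pending {
            // Prevent concurrent sweeping.
            return false;
        }
        state.runtime.sweep_pending = true;
        true
    }

    /// Called once the async sweep completes (or fails).
    fn end_sweep(&self) {
        self.sweeper_state.lock().unwrap().runtime.sweep_pending = false;
    }
}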

if respend_descriptors.is_empty() {
	// Nothing to do.
	return None;
}
let change_destination_script = change_destination_script_result?;
joostjager (Contributor Author):

Todo: address risk of getting a tx with a new address every block.

joostjager (Contributor Author):

Todo: investigate what BDK does here?

joostjager (Contributor Author):

Traced through BDK a bit. It seems there is only inflation if we actually use the address in a tx; it won't blindly regenerate addresses when called. But I will double-check this.

@joostjager joostjager force-pushed the async-sweep branch 2 times, most recently from c3420ab to d6051d9 Compare April 16, 2025 09:28
@@ -313,7 +313,7 @@ macro_rules! define_run_body {
$channel_manager: ident, $process_channel_manager_events: expr,
$onion_messenger: ident, $process_onion_message_handler_events: expr,
$peer_manager: ident, $gossip_sync: ident,
-$sweeper: ident,
+$sweeper: expr,
Contributor:

I think this needs to be renamed if it's not actually the sweeper but a closure.

joostjager (Contributor Author), Apr 21, 2025:

Good point. Renamed to $process_sweeper in line with the naming of the other closures such as process_chain_monitor_events.
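
For readers less familiar with macro fragment specifiers, a toy illustration of why the ident-to-expr change was needed; the macro body here is a stub, not the real define_run_body.

macro_rules! define_run_body {
    ($process_sweeper: expr) => {
        loop {
            // ...other event processing elided...
            $process_sweeper;
            break;
        }
    };
}

fn main() {
    let process_sweeper = || println!("regenerate and broadcast sweeps if necessary");
    // An `ident` fragment only accepts a bare name; `expr` also accepts
    // this call expression, matching the other process_* callbacks.
    define_run_body!(process_sweeper());
}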

@@ -922,14 +937,18 @@ impl BackgroundProcessor {
PM: 'static + Deref + Send + Sync,
S: 'static + Deref<Target = SC> + Send + Sync,
SC: for<'b> WriteableScore<'b>,
D: 'static + Deref + Send + Sync,
Contributor:

Are we sure we need all these Send + Sync bounds? In #3509 we made an effort to remove a bunch of these. Let's try not to go the other way in this PR.

joostjager (Contributor Author):

I could indeed remove the ones on the dependencies of the output sweeper.

@@ -463,6 +449,27 @@ where
self.sweeper_state.lock().unwrap().best_block
}

/// Regenerates and broadcasts the spending transaction for any outputs that are pending
pub fn regenerate_and_broadcast_spend_if_necessary_locked(&self) -> Result<(), ()> {
Contributor:

Why the _locked suffix? What is locked when? A pattern you'd find more often in the codebase would be to name this regenerate_and_broadcast_spends_if_necessary and add an _internal suffix to the non-pub method.

Although in this case I'm not quite getting why you split these up to begin with, only to completely rewrite them and change the entire API once again in the second commit.

joostjager (Contributor Author):

Yes, will redo this. This PR is far from final.

joostjager (Contributor Author):

Cleaned this up. It's all in one method now, showing better where we come from (the original sweeper, with sweeping in event handlers) and where we are going (async sweeping).

@@ -30,7 +30,6 @@
//! * `grind_signatures`

#![cfg_attr(not(any(test, fuzzing, feature = "_test_utils")), deny(missing_docs))]
-#![cfg_attr(not(any(test, feature = "_test_utils")), forbid(unsafe_code))]
Contributor:

Is this really necessary? Can't we find a way without the unsafe dummy waker?

joostjager (Contributor Author):

Apparently not. Discussed this with @TheBlueMatt and concluded that unsafe stands out enough in reviews to not need this extra safeguard.
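
For context, the dummy-waker pattern at issue looks roughly like this: poll a future that is known to be immediately ready (as the sync wrapper's futures are) using a waker that does nothing. This is an illustrative reconstruction, not the exact dummy_waker helper in the PR.

use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

fn dummy_raw_waker() -> RawWaker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        dummy_raw_waker()
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(core::ptr::null(), &VTABLE)
}

/// Polls `fut` once, expecting it to already be ready.
fn poll_once<F: Future>(fut: Pin<&mut F>) -> Option<F::Output> {
    // Building a Waker from raw parts is the `unsafe` that required relaxing
    // the crate-level forbid(unsafe_code) in the hunk above.
    let waker = unsafe { Waker::from_raw(dummy_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    match fut.poll(&mut cx) {
        Poll::Ready(value) => Some(value),
        Poll::Pending => None,
    }
}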

@joostjager joostjager force-pushed the async-sweep branch 2 times, most recently from 90d7104 to 370d677 Compare April 16, 2025 16:32
sweeper: OutputSweeper<B, D, E, F, K, L, O>,
}

impl<B: Deref, D: Deref, E: Deref, F: Deref, K: Deref, L: Deref, O: Deref>
joostjager (Contributor Author), Apr 16, 2025:

What might be good about this wrapper is that it eliminates the possibility of someone implementing async logic in combination with future poll/ready checking. This wrapper only accepts a sync trait.

@joostjager joostjager force-pushed the async-sweep branch 4 times, most recently from a9812ea to df47e8d Compare April 18, 2025 10:23
codecov bot commented Apr 21, 2025:

Codecov Report

Attention: Patch coverage is 80.08130% with 49 lines in your changes missing coverage. Please review.

Project coverage is 90.14%. Comparing base (46cb5ff) to head (a93b09a).
Report is 46 commits behind head on main.

Files with missing lines                    Patch %   Lines
lightning/src/util/sweep.rs                 75.41%    33 missing, 11 partials
lightning/src/util/async_poll.rs            57.14%    3 missing
lightning-background-processor/src/lib.rs   96.22%    2 missing
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3734      +/-   ##
==========================================
+ Coverage   89.11%   90.14%   +1.02%     
==========================================
  Files         156      156              
  Lines      123435   132005    +8570     
  Branches   123435   132005    +8570     
==========================================
+ Hits       109995   118990    +8995     
+ Misses      10758    10444     -314     
+ Partials     2682     2571     -111     


@joostjager joostjager force-pushed the async-sweep branch 6 times, most recently from f3e911e to 6cd735c Compare April 21, 2025 11:35
@joostjager joostjager force-pushed the async-sweep branch 9 times, most recently from 73b4076 to 0a9fffd Compare April 21, 2025 13:57
@@ -656,6 +672,9 @@ use futures_util::{dummy_waker, OptionalSelector, Selector, SelectorOutput};
/// # F: lightning::chain::Filter + Send + Sync + 'static,
/// # FE: lightning::chain::chaininterface::FeeEstimator + Send + Sync + 'static,
/// # UL: lightning::routing::utxo::UtxoLookup + Send + Sync + 'static,
/// # D: lightning::sign::ChangeDestinationSource + Send + Sync + 'static,
joostjager (Contributor Author):

I don't know if it is worth keeping this example; it's another thing to update besides ldk-node and ldk-sample.

To prepare for asynchronous processing of the sweep, we need to decouple
the spending from the chain notifications. These notifications run in a
sync context and wouldn't allow calls into an async trait.

Instead we now periodically call into the sweeper, to open up the
possibility to do so from an async context if desired.
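
A sketch of the resulting timer-driven flow, assuming a tokio runtime and a stubbed-out sweeper type; in the PR itself the background processor's loop plays this role, and the method name follows the discussion above.

use std::sync::Arc;
use std::time::Duration;

/// Stand-in for the real OutputSweeper.
struct OutputSweeper;

impl OutputSweeper {
    async fn regenerate_and_broadcast_spend_if_necessary(&self) -> Result<(), ()> {
        // ...await the (possibly remote) change address fetch, then build and
        // broadcast the sweep transaction...
        Ok(())
    }
}

/// Periodically kicks the sweeper instead of sweeping inside chain
/// notification handlers.
async fn sweeper_timer_task(sweeper: Arc<OutputSweeper>) {
    let mut interval = tokio::time::interval(Duration::from_secs(30));
    loop {
        interval.tick().await;
        let _ = sweeper.regenerate_and_broadcast_spend_if_necessary().await;
    }
}
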
Allow runtime-only state within the same mutex lock.
@@ -36,11 +36,18 @@ use lightning::onion_message::messenger::AOnionMessenger;
use lightning::routing::gossip::{NetworkGraph, P2PGossipSync};
use lightning::routing::scoring::{ScoreUpdate, WriteableScore};
use lightning::routing::utxo::UtxoLookup;
use lightning::sign::{ChangeDestinationSource, OutputSpender};
#[cfg(feature = "futures")]
joostjager (Contributor Author):

With the sweeper being natively async now, is there still much use for this feature flag?

@joostjager joostjager marked this pull request as ready for review April 21, 2025 14:34
@joostjager joostjager requested review from tnull and TheBlueMatt April 21, 2025 14:34
@joostjager joostjager added the weekly goal Someone wants to land this this week label Apr 21, 2025
TheBlueMatt (Collaborator) left a comment:

Basically LGTM, were there any major questions remaining you wanted resolved?

/// This method should return a different value each time it is called, to avoid linking
/// on-chain funds controlled by the same user.
fn get_change_destination_script(&self) -> Result<ScriptBuf, ()>;
}

/// A wrapper around [`ChangeDestinationSource`] to allow for async calls.
pub struct ChangeDestinationSourceSyncWrapper<T: Deref>(T)
TheBlueMatt (Collaborator):

Looks like this only needs to be public for tests. Would be nice to make it pub(crate) when not building at least _test_utils, in that case.

joostjager (Contributor Author):

Changed. Needed to duplicate the struct definition, not sure if there is a better way?

TheBlueMatt (Collaborator):

Sadly I don't believe so (short of a macro, which probably isn't justified here).
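
The duplication mentioned above presumably looks something like the following, since Rust has no conditional visibility modifier; generics and bounds are omitted in this sketch.

// Public when tests (or _test_utils consumers) need to reach it...
#[cfg(any(test, feature = "_test_utils"))]
pub struct ChangeDestinationSourceSyncWrapper<T>(T);

// ...and crate-private otherwise, so the definition appears twice under
// opposite cfg gates.
#[cfg(not(any(test, feature = "_test_utils")))]
pub(crate) struct ChangeDestinationSourceSyncWrapper<T>(T);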

@@ -755,6 +782,11 @@ where
}
}

struct RuntimeSweeperState {
TheBlueMatt (Collaborator):

Is this just to avoid hitting sweep_pending in the serialization logic for SweeperState? That can be avoided by writing it as (sweep_pending, static_value, false).

joostjager (Contributor Author):

@tnull commented #3734 (comment) though.

Either way is fine for me, although I have a slight preference for the least amount of code (your suggestion).

TheBlueMatt (Collaborator):

I don't feel particularly strongly, but in general we've tended to treat everything in memory as the "runtime state" and only consider the difference at the serialization layer.
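
A hand-rolled sketch of the (sweep_pending, static_value, false) idea: the flag stays in SweeperState but is never written, and is pinned to false on read. LDK would express this via its TLV serialization macros; the manual io impls below only show the semantics.

use std::io;

struct SweeperState {
    best_block_height: u32, // stand-in for the real persisted fields
    sweep_pending: bool,    // runtime-only
}

impl SweeperState {
    fn write<W: io::Write>(&self, w: &mut W) -> io::Result<()> {
        // sweep_pending is deliberately never serialized.
        w.write_all(&self.best_block_height.to_be_bytes())
    }

    fn read<R: io::Read>(r: &mut R) -> io::Result<Self> {
        let mut buf = [0u8; 4];
        r.read_exact(&mut buf)?;
        // On read the flag takes its static value, false.
        Ok(SweeperState { best_block_height: u32::from_be_bytes(buf), sweep_pending: false })
    }
}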

pub fn sweeper_async(
&self,
) -> Arc<OutputSweeper<B, Arc<ChangeDestinationSourceSyncWrapper<D>>, E, F, K, L, O>> {
self.sweeper.clone()
TheBlueMatt (Collaborator):

Bleh, it's a bit weird to wrap the sweeper in an Arc just so that we can expose it here. Would returning a reference suffice?

joostjager (Contributor Author), Apr 22, 2025:

I don't think it is possible, because we would then need to move that reference into the async bp thread? Of course there is always uncertainty - for me - around these kinds of statements in Rust.

TheBlueMatt (Collaborator):

Hmm, it's definitely possible, but it's not quite as clean as I was hoping, due to the tokio::spawns in the BP tests:

diff --git a/lightning-background-processor/src/lib.rs b/lightning-background-processor/src/lib.rs
index e52affecf..da3df6ecb 100644
--- a/lightning-background-processor/src/lib.rs
+++ b/lightning-background-processor/src/lib.rs
@@ -2062,5 +2062,5 @@ mod tests {
                        nodes[0].rapid_gossip_sync(),
                        nodes[0].peer_manager.clone(),
-                       Some(nodes[0].sweeper.sweeper_async()),
+                       Some(OutputSweeperSync::sweeper_async(Arc::clone(&nodes[0].sweeper))),
                        nodes[0].logger.clone(),
                        Some(nodes[0].scorer.clone()),
@@ -2562,5 +2562,5 @@ mod tests {
                        nodes[0].rapid_gossip_sync(),
                        nodes[0].peer_manager.clone(),
-                       Some(nodes[0].sweeper.sweeper_async()),
+                       Some(OutputSweeperSync::sweeper_async(Arc::clone(&nodes[0].sweeper))),
                        nodes[0].logger.clone(),
                        Some(nodes[0].scorer.clone()),
@@ -2776,5 +2776,5 @@ mod tests {
                        nodes[0].no_gossip_sync(),
                        nodes[0].peer_manager.clone(),
-                       Some(nodes[0].sweeper.sweeper_async()),
+                       Some(OutputSweeperSync::sweeper_async(Arc::clone(&nodes[0].sweeper))),
                        nodes[0].logger.clone(),
                        Some(nodes[0].scorer.clone()),
diff --git a/lightning/src/sign/mod.rs b/lightning/src/sign/mod.rs
index e0d455109..cbc186e08 100644
--- a/lightning/src/sign/mod.rs
+++ b/lightning/src/sign/mod.rs
@@ -1029,4 +1029,20 @@ where
 }

+/// Because this wrapper is used by the sweeper to hold an underlying change destination in a
+/// generic which requires `Deref`, we implement a dummy `Deref` here.
+///
+/// This is borderline bad practice and can occasionally result in spurious compiler errors due to
+/// infinite auto-deref recursion, but it avoids a more complicated indirection and the type is not
+/// public, so there's not really any harm.
+impl<T: Deref> Deref for ChangeDestinationSourceSyncWrapper<T>
+where
+       T::Target: ChangeDestinationSourceSync,
+{
+       type Target = Self;
+       fn deref(&self) -> &Self {
+               &self
+       }
+}
+
 mod sealed {
        use bitcoin::secp256k1::{Scalar, SecretKey};
diff --git a/lightning/src/util/sweep.rs b/lightning/src/util/sweep.rs
index 3b0ce5e5e..f8f4626ac 100644
--- a/lightning/src/util/sweep.rs
+++ b/lightning/src/util/sweep.rs
@@ -928,5 +928,5 @@ where
        O::Target: OutputSpender,
 {
-       sweeper: Arc<OutputSweeper<B, Arc<ChangeDestinationSourceSyncWrapper<D>>, E, F, K, L, O>>,
+       sweeper: OutputSweeper<B, ChangeDestinationSourceSyncWrapper<D>, E, F, K, L, O>,
 }

@@ -948,5 +948,5 @@ where
        ) -> Self {
                let change_destination_source =
-                       Arc::new(ChangeDestinationSourceSyncWrapper::new(change_destination_source));
+                       ChangeDestinationSourceSyncWrapper::new(change_destination_source);

                let sweeper = OutputSweeper::new(
@@ -960,5 +960,5 @@ where
                        logger,
                );
-               Self { sweeper: Arc::new(sweeper) }
+               Self { sweeper }
        }

@@ -997,9 +997,13 @@ where

        /// Returns the inner async sweeper for testing purposes.
+       ///
+       /// Note that this leaks the provided `Arc`, keeping this sweeper in memory forever.
        #[cfg(any(test, feature = "_test_utils"))]
        pub fn sweeper_async(
-               &self,
-       ) -> Arc<OutputSweeper<B, Arc<ChangeDestinationSourceSyncWrapper<D>>, E, F, K, L, O>> {
-               self.sweeper.clone()
+               us: Arc<Self>,
+       ) -> &'static OutputSweeper<B, ChangeDestinationSourceSyncWrapper<D>, E, F, K, L, O> {
+               let res = unsafe { core::mem::transmute(&us.sweeper) };
+               core::mem::forget(us);
+               res
        }
 }

joostjager (Contributor Author) commented Apr 22, 2025:

> Basically LGTM, were there any major questions remaining you wanted resolved?

Few things:

  1. We are breaking the sweeper API in this PR. Are there specific things to check/double-check now? Perhaps update ldk-node to see if it works? Not sure if much more can be done, given that rust-lightning is an open-source library.

  2. The address inflation risk should have remained the same as before. We are calling the sweeper on a timer now, but the filter function is still checking latest_broadcast_height. Or did I miss anything there?

  3. Futures flag: Sweeper async change destination source fetching #3734 (comment)

@joostjager joostjager requested a review from TheBlueMatt April 22, 2025 06:38
TheBlueMatt (Collaborator):

> We are breaking the sweeper API in this PR. Are there specific things to check/double-check now? Perhaps update ldk-node to see if it works? Not sure if much more can be done, given that rust-lightning is an open-source library.

In general we don't care too much about API breaks between versions. Obviously we don't want to require downstream projects completely overhaul their integration with LDK just for the sake of it, but this change should basically just require adding "Sync" in a few places, so it should be trivial.

> The address inflation risk should have remained the same as before. We are calling the sweeper on a timer now, but the filter function is still checking latest_broadcast_height. Or did I miss anything there?

Sounds correct to me.

> Futures flag: Sweeper async change destination source fetching #3734 (comment)

Indeed, I see no reason not to remove the futures feature from BP.

TheBlueMatt (Collaborator) left a comment:

Happy to see the futures flag removed in a followup or here, up to you.
