
Commit 5a7ee85

Async blocking task support
Added the `BlockingTaskQueue` type. A `BlockingTaskQueue` allows a Rust closure to be scheduled on a foreign thread where blocking operations are okay. The closure runs inside the parent future, which is nice because it allows the closure to reference its outside scope. On the foreign side, a `BlockingTaskQueue` is a native type that runs a task in some sort of thread queue (`DispatchQueue`, `CoroutineContext`, `futures.Executor`, etc.). Updated handle maps to always start at 1 rather than 0, which is now reserved for a NULL handle. Added new tests for this in the futures fixtures and updated the tests to check that handles are being released properly.
1 parent 075efd9 commit 5a7ee85
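
The handle-map change is easy to picture: handles are allocated starting at 1, leaving 0 free to mean "no handle". Below is a minimal sketch of the idea (illustrative only, not UniFFI's actual handle-map code; the type and method names are made up for this example):

```rust
use std::collections::HashMap;

/// Illustrative handle map: the first handle is 1, so 0 can be reserved for NULL.
struct HandleMap<T> {
    next_handle: u64,
    entries: HashMap<u64, T>,
}

impl<T> HandleMap<T> {
    fn new() -> Self {
        // Start at 1; handle 0 is never handed out.
        Self { next_handle: 1, entries: HashMap::new() }
    }

    fn insert(&mut self, value: T) -> u64 {
        let handle = self.next_handle;
        self.next_handle += 1;
        self.entries.insert(handle, value);
        handle
    }

    fn remove(&mut self, handle: u64) -> Option<T> {
        // Returning the value lets callers check that each handle is released exactly once.
        self.entries.remove(&handle)
    }
}
```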

File tree

44 files changed: +1112, -156 lines


Cargo.lock

Lines changed: 75 additions & 6 deletions
(Generated file; diff not rendered.)

docs/manual/src/futures.md

Lines changed: 60 additions & 0 deletions
@@ -71,3 +71,63 @@ pub trait SayAfterTrait: Send + Sync {

The following new section is appended after the existing `SayAfterTrait` example:
## Blocking tasks

Rust executors are designed around the assumption that the `Future::poll` function will return quickly.
This assumption, combined with cooperative scheduling, allows a large number of futures to be handled by a small number of threads.
Foreign executors make similar assumptions, and sometimes more extreme ones.
For example, the Python event loop is single-threaded: if any task spends a long time between `await` points, it will block all other tasks from progressing.

This raises the question of how async code can interact with blocking code that performs blocking I/O, long-running computations without `await` breaks, etc.
UniFFI defines the `BlockingTaskQueue` type, which is a foreign object that schedules work on a thread where it is okay to block.

On the Rust side, `BlockingTaskQueue` is a UniFFI type that can safely run blocking code.
Its `execute` method works like tokio's [block_in_place](https://docs.rs/tokio/latest/tokio/task/fn.block_in_place.html) function: it takes a closure and runs it in the `BlockingTaskQueue`.
This closure can reference the outside scope (i.e. it does not need to be `'static`).
For example:
```rust
#[derive(uniffi::Object)]
struct DataStore {
    // Used to run blocking tasks
    queue: uniffi::BlockingTaskQueue,
    // Low-level DB object with blocking methods
    db: Mutex<Database>,
}

#[uniffi::export]
impl DataStore {
    #[uniffi::constructor]
    fn new(queue: uniffi::BlockingTaskQueue) -> Self {
        Self {
            queue,
            db: Mutex::new(Database::new()),
        }
    }

    async fn fetch_all_items(&self) -> Vec<DbItem> {
        self.queue.execute(|| self.db.lock().fetch_all_items()).await
    }
}
```
On the foreign side, `BlockingTaskQueue` corresponds to a language-dependent class.

### Kotlin

Kotlin uses `CoroutineContext` for its `BlockingTaskQueue`.
Any `CoroutineContext` will work, but `Dispatchers.IO` is usually a good choice.
A `DataStore` from the example above can be created with `DataStore(Dispatchers.IO)`.

### Swift

Swift uses `DispatchQueue` for its `BlockingTaskQueue`.
The user-initiated global queue is normally a good choice.
A `DataStore` from the example above can be created with `DataStore(queue: DispatchQueue.global(qos: .userInitiated))`.
The `DispatchQueue` should be concurrent.

### Python

Python uses a `futures.Executor` for its `BlockingTaskQueue`.
`ThreadPoolExecutor` is typically a good choice.
A `DataStore` from the example above can be created with `DataStore(ThreadPoolExecutor())`.

fixtures/futures/Cargo.toml

Lines changed: 1 addition & 0 deletions
```diff
@@ -17,6 +17,7 @@ path = "src/bin.rs"
 [dependencies]
 uniffi = { workspace = true, features = ["tokio", "cli"] }
 async-trait = "0.1"
+futures = "0.3.29"
 thiserror = "1.0"
 tokio = { version = "1.24.1", features = ["time", "sync"] }
 once_cell = "1.18.0"
```
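
The new `futures` dependency provides `FuturesUnordered`, which the fixture below uses to await several blocking tasks concurrently. Here is a tiny standalone sketch of that pattern using only the `futures` crate (the values and assertions are illustrative, not part of the fixture):

```rust
use futures::executor::block_on;
use futures::stream::{FuturesUnordered, StreamExt};

fn main() {
    block_on(async {
        // Collect several independent futures and await them in completion order.
        let mut futures: FuturesUnordered<_> = (1..=3).map(|i| async move { i * i }).collect();
        let mut results = Vec::new();
        while let Some(result) = futures.next().await {
            results.push(result);
        }
        // Completion order is not guaranteed, so sort before comparing.
        results.sort();
        assert_eq!(results, vec![1, 4, 9]);
    });
}
```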

fixtures/futures/src/lib.rs

Lines changed: 56 additions & 0 deletions
```diff
@@ -11,6 +11,8 @@ use std::{
     time::Duration,
 };
 
+use futures::stream::{FuturesUnordered, StreamExt};
+
 /// Non-blocking timer future.
 pub struct TimerFuture {
     shared_state: Arc<Mutex<SharedState>>,
@@ -387,4 +389,58 @@ fn get_say_after_udl_traits() -> Vec<Arc<dyn SayAfterUdlTrait>> {
     vec![Arc::new(SayAfterImpl1), Arc::new(SayAfterImpl2)]
 }
 
+/// Async function that uses a blocking task queue to do its work
+#[uniffi::export]
+pub async fn calc_square(queue: uniffi::BlockingTaskQueue, value: i32) -> i32 {
+    queue.execute(|| value * value).await
+}
+
+/// Same as before, but this one runs multiple tasks
+#[uniffi::export]
+pub async fn calc_squares(queue: uniffi::BlockingTaskQueue, items: Vec<i32>) -> Vec<i32> {
+    // Use `FuturesUnordered`, which is known to be a tricky API to work with, to test our
+    // blocking task queue code.  In particular, if we don't notify the waker then
+    // `FuturesUnordered` will not poll again.
+    let mut futures: FuturesUnordered<_> = (0..items.len())
+        .map(|i| {
+            // Test that we can use references from the surrounding scope
+            let items = &items;
+            queue.execute(move || items[i] * items[i])
+        })
+        .collect();
+    let mut results = vec![];
+    while let Some(result) = futures.next().await {
+        results.push(result);
+    }
+    results.sort();
+    results
+}
+
+/// ...and this one uses multiple BlockingTaskQueues
+#[uniffi::export]
+pub async fn calc_squares_multi_queue(
+    queues: Vec<uniffi::BlockingTaskQueue>,
+    items: Vec<i32>,
+) -> Vec<i32> {
+    let mut futures: FuturesUnordered<_> = (0..items.len())
+        .map(|i| {
+            // Test that we can use references from the surrounding scope
+            let items = &items;
+            queues[i].execute(move || items[i] * items[i])
+        })
+        .collect();
+    let mut results = vec![];
+    while let Some(result) = futures.next().await {
+        results.push(result);
+    }
+    results.sort();
+    results
+}
+
+/// Like calc_square, but it clones the BlockingTaskQueue first then drops both copies. Used to
+/// test that a) the clone works and b) we correctly drop the references.
+#[uniffi::export]
+pub async fn calc_square_with_clone(queue: uniffi::BlockingTaskQueue, value: i32) -> i32 {
+    queue.clone().execute(|| value * value).await
+}
+
 uniffi::include_scaffolding!("futures");
```
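
The waker comment in `calc_squares` points at the key contract behind `BlockingTaskQueue`: when the blocking closure finishes on the foreign thread, the parent future's waker has to be notified, otherwise combinators like `FuturesUnordered` will never poll it again. Here is a minimal sketch of that wake-on-completion pattern, using a plain thread instead of a foreign task queue and requiring `'static` closures for simplicity (an illustration of the general `Future`/`Waker` contract, not UniFFI's implementation):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};
use std::thread;

/// State shared between the future and the worker thread that completes it.
struct Shared<T> {
    result: Option<T>,
    waker: Option<Waker>,
}

/// A future that resolves once the worker thread stores a result.
struct BlockingTask<T> {
    shared: Arc<Mutex<Shared<T>>>,
}

impl<T> Future for BlockingTask<T> {
    type Output = T;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
        let mut shared = self.shared.lock().unwrap();
        match shared.result.take() {
            Some(value) => Poll::Ready(value),
            None => {
                // Store the most recent waker so the worker thread can wake us later.
                shared.waker = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }
}

/// Run `work` on a new thread and resolve the returned future when it finishes.
fn run_blocking<T, F>(work: F) -> BlockingTask<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let shared = Arc::new(Mutex::new(Shared { result: None, waker: None }));
    let worker_shared = Arc::clone(&shared);
    thread::spawn(move || {
        let value = work();
        let mut shared = worker_shared.lock().unwrap();
        shared.result = Some(value);
        // Without this wake, the parent future would stay pending forever.
        if let Some(waker) = shared.waker.take() {
            waker.wake();
        }
    });
    BlockingTask { shared }
}
```

A call like `run_blocking(|| some_blocking_call()).await` (the function name is just a placeholder) then behaves the way the fixture expects from `queue.execute(...)`, except that the real `BlockingTaskQueue` also accepts non-`'static` closures and runs them on the foreign thread queue.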
