How often do you want non-send futures?

This is actually an interesting example, because while the future returned by takes_not_sync is necessarily !Send, the future returned by outer_future could be Send for the same reason that it can use self-borrowing in the first place (assuming NotSync: Send). When you move the future to another thread you move both the NotSync value and the reference to that value, so they never end up being owned by different threads.
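
A minimal sketch of that situation, using the thread's names as hypothetical stand-ins (and assuming NotSync: Send):

use std::cell::Cell;

struct NotSync(Cell<u32>); // Cell<u32> is Send but !Sync

async fn takes_not_sync(x: &NotSync) {
    // &NotSync is !Send (because NotSync is !Sync) and is held across
    // an await point, so this future is !Send.
    async {}.await;
    x.0.set(1);
}

async fn outer_future() {
    let x = NotSync(Cell::new(0));
    // The NotSync value and the reference to it both live inside this
    // one future, so moving the future moves them together; in principle
    // it could be Send, even though it is inferred !Send today.
    takes_not_sync(&x).await;
}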

This seems much more difficult to detect automatically at first glance. Internal lifetimes are relatively easy because they exist at the type-system level, but if an async fn takes a &impl !Sync value, I think it could soundly stash a raw pointer to that value in TLS and access it through that pointer when polling, as long as it is careful to obey the lifetime on the reference. That async fn would definitely be !Send and could not be treated as Send by outer_future, so I'm not sure what constraints would be needed for an async fn to hide internal !Syncness.
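
A hedged sketch of that hypothetical stash (sneaky and the TLS slot are invented for illustration, reusing NotSync from the sketch above):

use std::cell::Cell;
use std::ptr;

thread_local! {
    static STASH: Cell<*const NotSync> = Cell::new(ptr::null());
}

async fn sneaky(x: &NotSync) {
    STASH.with(|slot| slot.set(x as *const NotSync));
    async {}.await; // suspension point; we may be polled again later
    // Only sound because this future is !Send: the raw pointer is read
    // on the same thread that stored it, and the borrow of `x` lasts for
    // the whole call, keeping the pointee alive.
    let _v = STASH.with(|slot| unsafe { (*slot.get()).0.get() });
}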

I think it's very important that the signature of a function shows what the function returns. If I see the following in documentation or code

async fn foo() -> i32;

then it should be clear whether the function returns an impl Future<Output = i32> or an impl Future<Output = i32> + Send. Whatever syntax we choose, there should be a difference between Send and !Send. Leaking Send makes code less readable because you now have to read the entire function body to figure out what it returns. Besides, this makes it easy to introduce an accidental breaking change by making the returned future !Send.

6 Likes

The problem with saying that is that leaking has already shipped with impl Trait: given a function

fn foo() -> impl Future<Output = i32>

you can’t tell from the signature whether the return value is Send or not; you need to look into the body.

You can force the return value to be Send by writing impl Future<Output = i32> + Send, but you can’t force your users to not rely on the return value being Send if it accidentally is; that would require something like negative trait bounds to hide the leaked auto trait: impl Future<Output = i32> + !Send.
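
Concretely, the leak lets downstream code depend on an undocumented property (require_send and user are illustrative):

use std::future::Future;

fn foo() -> impl Future<Output = i32> {
    async { 42 } // happens to be Send today
}

fn require_send<T: Send>(_: &T) {}

fn user() {
    let fut = foo();
    // Compiles only because the auto trait leaked through impl Trait;
    // if foo later holds an Rc across an await, this caller breaks.
    require_send(&fut);
}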

And, IMO, it’s very important that you can interchangeably use async fn() and fn() -> impl Future (given that you correctly expand the lifetimes of the arguments to match the generated async fn signature). For performance reasons it should be easy to implement a trait that specifies an async fn() with a partially desugared fn() -> impl Future { async { ... } }, to give you some synchronous setup before the future is returned. That seems like it would rule out having different auto-trait leakage between async fn and impl Trait.
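
For example, a sketch of that partial desugaring (Connection, validate_addr and open_conn are placeholder stubs):

use std::future::Future;

struct Connection;

fn validate_addr(addr: String) -> String { addr } // stand-in for real checks

async fn open_conn(_addr: String) -> Connection { Connection }

fn connect(addr: String) -> impl Future<Output = Connection> {
    // Runs synchronously when connect is called, before the first poll:
    let addr = validate_addr(addr);
    // Only this part runs when the returned future is polled:
    async move { open_conn(addr).await }
}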

2 Likes

Can this be changed in a future edition? IIUC, cargo fix should be able to automatically add auto-trait bounds in most cases. I guess we will also need a way to specify the bounds mentioned by @cramertj. Do we have proposals which address this problem?

1 Like

I don't really know what there is to add beyond your summary.

The previous assumption that drove this thread - that users would frequently need to say that the return types of generic methods are Send - was wrong, because it's only necessary if that type is still generic at the point the Send bound is actually required. As you said, the Send bound becomes necessary in two ways, usually:

  1. Spawning the future onto a multithreaded executor.
  2. Boxing the future into a Box<dyn Future + Send>.

In the first case, because the normal pattern is to push spawns outward as much as possible, it's quite rare to spawn a future that is still parameterized by generics at that point of the program: as a rule, you already know the fully concrete type of your future at the point where you spawn it.
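
A minimal sketch of that pattern (Store, RealStore and handle are invented names, assuming the futures 0.3 SpawnExt API):

use futures::executor::ThreadPool;
use futures::task::SpawnExt;

trait Store { fn get(&self) -> u32; }

struct RealStore;
impl Store for RealStore { fn get(&self) -> u32 { 7 } }

// Generic code, with no Send bound of its own...
async fn handle<S: Store>(store: S) {
    let _ = store.get();
}

// ...because at the spawn point the future's type is fully concrete,
// so the compiler can check Send directly:
fn start() {
    let pool = ThreadPool::new().unwrap();
    pool.spawn(handle(RealStore)).unwrap();
}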

The second is a sort of self-healing problem: the most common reason to construct trait objects right now is to return futures from a method, because we don't have async methods. In other words, the source of the problem also solves the problem.

The other reasons for constructing trait objects are either fairly uncommon (recursive async functions, which shouldn't need dynamic dispatch in the long term once implementation limits are solved) or specifically an optimization (when erasing the type is actually more performant). The latter suggests you are an advanced user who can handle the annotation burden, and it is also less likely when the type isn't concrete in the first place (because you don't know enough about the type to judge whether erasure would be an optimization).

In other words, the core belief of this thread was wrong: that the Send bound would be infectious and necessary deep inside programs, far from spawn. This just isn't true, so the system will work with methods just as it does with concrete functions.

This would be quite thorny to change, and it is not something we desire to change; it's how the feature was intentionally designed.

4 Likes

For clarity, here is an implementation of the OP example based on what currently works on nightly, requiring Pin<Box<dyn Future>>:

#![warn(rust_2018_idioms)]
#![feature(async_await)]

use std::future::Future;
use std::pin::Pin;

use futures::{
    executor::ThreadPool,
    future::FutureExt,
    task::{SpawnError, SpawnExt},
};

pub trait Process {
    // The associated type lets each implementation pick its own future
    // type, including whether it is Send.
    type ProcessFutr: Future<Output=()> + ?Sized;
    fn process(&self) -> Pin<Box<Self::ProcessFutr>>;
}

// Generic consuming code opts into the Send bound only where it needs it.
pub fn spawn_process<P>(p: P) -> Result<(), SpawnError>
    where P: Process, P::ProcessFutr: Send + 'static
{
    let mut rt = ThreadPool::new().unwrap();
    let handle = rt.spawn_with_handle(p.process())?;
    rt.run(handle);
    Ok(())
}

pub struct MyProcess;

impl Process for MyProcess {
    // This implementation opts into Send.
    type ProcessFutr = dyn Future<Output=()> + Send + 'static;
    fn process(&self) -> Pin<Box<Self::ProcessFutr>> {
        async {
            eprintln!("processed");
        }.boxed()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn my_spawn() {
        let p = MyProcess {};
        spawn_process(p).unwrap();
    }
}

Now, the desired final state leapfrogs even over allowing impl Future in trait method return position; but could something like the syntax below be supported by a future compiler, with plenty of new automation?

pub trait Process {
    async fn process(&self); // default: ?Send
}

pub fn spawn_process<P>(p: P) -> Result<(), SpawnError>
    where P: Process, P::process::async: Send + 'static
{
    let mut rt = ThreadPool::new().unwrap();
    let handle = rt.spawn_with_handle(p.process())?;
    rt.run(handle);
    Ok(())
}

pub struct MyProcess;

impl Process for MyProcess {
    async fn process(&self)
        where async: Send + 'static // clause optional with clever autotrait?
    {
        eprintln!("processed");
    }
}

Note that this is different from what was shown in the OP: it's missing the lifetime handling of async fn (the future created in process cannot borrow self). Without GATs it is impossible to accurately simulate async fn in traits.

That allows me to finally understand the motivation for your two trait-with-GAT-plus-async examples in this thread, thanks! No examples actually show the self reference being borrowed, however. Anyway, what I was trying to demonstrate with this last example is:

  1. async trait methods could continue to return ?Send futures by default, like async functions and return-position impl Future do today.

  2. With current associated types (and, presumably, GATs) it is possible for trait method implementations to be either Send or !Send.

  3. It's possible for generic consuming code (e.g. spawn_process) to still require the Send bound when it is needed. Presumably specialization could further allow Send and !Send versions of such code to coexist.

  4. Specifying the Send bound for compatible (generated/hidden) async syntax requires some way of externally referencing and placing bounds on an otherwise unnamed return type. So P::process::async: Send in the last example is one way that IMO fits nicely with the previously suggested where async: Send syntax. To the extent I understood it above, outputof(P::process) might be another.

  5. I'm not yet convinced that cleverness/auto-traits can't play the same role with trait async method implementations.

The RPC system of capnproto-rust depends heavily on non-Send futures. In particular, its code generation produces traits with non-Send asynchronous methods. Currently these methods return a Promise<T,E> – which is either an immediate Result<T,E> or a Box<Future<T,E> + 'static>, where the future is not bounded by Send. Here is an example of an implementation of such a method. I have been hoping to migrate these methods to the new async fn syntax.
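
Roughly the shape described, paraphrased in std::future terms rather than the exact capnproto-rust definition:

use std::future::Future;

enum Promise<T, E> {
    // Already-available result:
    Immediate(Result<T, E>),
    // Deferred result; note there is no Send bound on the boxed future:
    Deferred(Box<dyn Future<Output = Result<T, E>> + 'static>),
}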

I believe that enabling concurrent tasks to share mutable data without needing to synchronize (e.g. use a mutex) is one of the primary benefits of writing asynchronous code, so I think that adding default Send bounds is a bit sad.

9 Likes

Tagging/annotating a block of code (such as a file, type, trait, or similar) in one place as being Send (or not Send) would be helpful.

I feel like it should be possible to implement this as a macro in a crate, using a technique similar to the static_assertions crate. Maybe it could even be added to the static_assertions crate itself. I'll open a feature request issue.

Edit: issue opened: Feature request: `assert_expr_impl!` · Issue #14 · nvzqz/static-assertions · GitHub
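
In the meantime, a crate-independent sketch of an expression-level assertion (assert_send and demo are illustrative):

// An identity function whose bound makes the wrapped expression fail
// to compile if it ever stops being Send.
fn assert_send<T: Send>(t: T) -> T { t }

fn demo() {
    let fut = assert_send(async {
        // compiles only while this future remains Send
    });
    drop(fut);
}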

Apologies if this has been answered already, but why don’t we just not allow async fn-style declarations without an implementation?

IMO, the async keyword only makes sense on implementations. On declarations it serves no purpose. I should be able to declare a trait like this:

trait GiveMeAFuture<T> {
    fn give_me_a_future(&self) -> impl Future<Output = T>;
}

And implement it like this:

impl<T> GiveMeAFuture<T> for () {
    async fn give_me_a_future(&self) -> T {
        ...
    }
}

2 Likes

For one, we can't use impl Trait in traits. Two, we can't name the return type of async methods, though this could be solved by existential associated types. Otherwise this would be a good idea.
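
For reference, a sketch of how existential associated types could name the return type, using the existential_type nightly feature of the time (since renamed to type_alias_impl_trait); treat the details as an assumption:

#![feature(existential_type)]

use std::future::Future;

trait GiveMeAFuture<T> {
    type Fut: Future<Output = T>;
    fn give_me_a_future(&self) -> Self::Fut;
}

impl GiveMeAFuture<u32> for () {
    // Names the otherwise-unnameable async block type:
    existential type Fut: Future<Output = u32>;
    fn give_me_a_future(&self) -> Self::Fut {
        async { 42 }
    }
}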

1 Like

There is a purpose: to avoid having to manually write the annoyingly complicated lifetimes in the signature of the function. E.g. your example should be

trait GiveMeAFuture<T> {
    fn give_me_a_future(&self) -> impl Future<Output = T> + '_;
}

and as you add more references or generic parameters to the function, the overhead of specifying the lifetimes becomes much greater. (This is part of why there was a suggestion to add syntax for declaring async closure types; otherwise you have to manually do the lifetime desugaring of async fn signatures whenever you want to take an async closure.)
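
To illustrate, a sketch of that manual desugaring with two input lifetimes (Map and lookup are placeholders); a plain + 'a + 'b bound would wrongly require the future to outlive both lifetimes, hence the usual capture-helper workaround:

use std::future::Future;

// Dummy trait that lets the returned future capture a lifetime without
// being forced to outlive it:
trait Captures<'a> {}
impl<'a, T: ?Sized> Captures<'a> for T {}

struct Map;
impl Map {
    fn lookup(&self, _key: &str) -> Option<String> { None }
}

// Manual desugaring of
// `async fn get<'a, 'b>(map: &'a Map, key: &'b str) -> Option<String>`:
fn get<'a, 'b>(
    map: &'a Map,
    key: &'b str,
) -> impl Future<Output = Option<String>> + Captures<'a> + Captures<'b> {
    async move { map.lookup(key) }
}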

This thread makes me think that perhaps async {...} blocks should be stabilized before async fn. That way we could start to see how many functions return an async {..} block and have a return type of impl Future vs impl Future + Send.

I'm also starting to wonder: is it possible for an async {..} block to implement Clone and Copy like closures can? If so, then maybe async(Clone) fn would end up being a somewhat common thing.

7 Likes

I suspect any nontrivial async block (that is, one that uses await) wouldn't be able to be Clone, due to being !Unpin (so cloning would need to rewrite self-pointers into a new, unpinned value) and/or because cloning during a suspension just outright doesn't make sense: who wakes the new one?

It could make sense to clone an unstarted future, but once it’s been polled once, cloning it doesn’t make sense in most cases.

Related issue on whether generators should be Clone; exactly the same reasoning applies to async-created types (i.e. because of self-referentiality they probably can't be, but you can take an impl (FnOnce() -> impl Future) + Clone instead if you want to be able to clone and start multiple from the beginning).
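
A minimal sketch of that factory pattern (demo is illustrative):

fn demo() {
    let input = 21u32;
    // The factory closure is Clone (and Copy) even though a started
    // future is not; each call yields a fresh, unstarted future.
    let factory = move || async move { input * 2 };
    let first = factory();
    let second = factory.clone()(); // start over from the beginning
    let _ = (first, second);
}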

2 Likes

I hope you don't mind me adding considerations to this already long thread. What I want to talk about isn't necessarily limited to futures either.

I run into a lot of trouble designing libraries which are transparent to Send: e.g. defining behavior in traits, defining how those traits interact with each other, and returning futures from trait methods (currently as Box<dyn Future<_>>, because of the lack of async trait methods). The user owns the types that they implement the trait for, and the user spawns the futures; or, if the library does so, it does so on a global executor defined by the application developer.

As an example, take an actor model with a Message trait. Ideally: if the type implementing Message is Send, the user can send the message to an actor living in another thread; if it is !Send, they can't. Another example: if the type implementing the Actor trait is Send, its mailbox (which owns the actor) can be spawned on a thread pool; if it's !Send, only on a single-threaded executor.

This means the constraints faced by the user are their own. The library is not constraining them.

It's currently, AFAICT, not possible to express this in Rust. All the traits and implementation code of the actor library that deal with generic parameters like A: Actor always have to specify Send-ness. The same is true for boxed trait objects in return types.

This imposes severe design limitations. Either everything has to be Send, and the library can never be used with types that aren't (even in a single-threaded program), or nothing is Send, and end users can never use a thread pool to spawn the futures returned by the library (notably the mailboxes of their actors).
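
A sketch of that forced choice, with invented trait names:

use std::future::Future;
use std::pin::Pin;

// All-Send: rules out Rc-holding actors, even on a single-threaded executor.
trait SendActor: Send {
    fn handle(&mut self) -> Pin<Box<dyn Future<Output = ()> + Send + '_>>;
}

// No Send at all: rules out spawning mailboxes on a thread pool.
trait LocalActor {
    fn handle(&mut self) -> Pin<Box<dyn Future<Output = ()> + '_>>;
}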

The only reasonable solution I can figure out for this problem right now is a ?Send unbound telling the compiler to automatically figure out from the underlying type whether it is safe to send it across threads, when that actually happens. The compiler knows, when the final application is compiled, whether things are sent across threads or not. Maybe there are other ways to solve this than ?Send, but I haven't found them yet.


As a late answer to the title of this post: I think the whole point of async programming is that you can do concurrent programming without threads. Therefore I think support for !Send futures should be first class. I use single-threaded executors all the time (and with data that is !Send, most commonly Rc, but it could be other things), and I'm not "on fuchsia". The main reason I use single-threaded executors is performance: in some use cases the overhead from locks in channels is the majority of the running time.
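
For reference, a minimal single-threaded sketch with futures' LocalPool: shared !Send data, no locks (assuming the futures 0.3 API):

use std::cell::RefCell;
use std::rc::Rc;

use futures::executor::LocalPool;
use futures::task::LocalSpawnExt;

fn main() {
    let mut pool = LocalPool::new();
    let shared = Rc::new(RefCell::new(0)); // !Send, and no mutex anywhere
    let task = {
        let shared = shared.clone();
        async move { *shared.borrow_mut() += 1 }
    };
    // spawn_local accepts !Send futures, unlike Spawn::spawn.
    pool.spawner().spawn_local(task).unwrap();
    pool.run();
    assert_eq!(*shared.borrow(), 1);
}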

8 Likes

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.