How often do you want non-Send futures?

Yes, I was thinking about this. It seems connected to what I was saying earlier: my suspicion is that this is more of a "project-wide default" (with exceptions) than a default that applies uniformly across codebases.

This is not a proposal, but you could imagine something like #![async(?Send)] being used to toggle the meaning of async fn within a lexical scope between -> impl Future and -> impl Future + Send.

Alternatively, you could imagine a procedural macro like #[async] fn foo() that desugars into async(?Send) fn foo() (or even having two procedural #[async] macros, so that you choose between use async_send::async and use async_unsend::async, or something like that).
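For concreteness, here are the two meanings such a toggle would switch between, written by hand in today's Rust (a minimal sketch; the function names are made up):

use std::future::Future;

// What `async fn` would mean under a Send-by-default mode:
fn fetch_send() -> impl Future<Output = u32> + Send {
    async { 42 }
}

// What `async(?Send) fn` would mean: no Send requirement on the future.
fn fetch_local() -> impl Future<Output = u32> {
    async { 42 }
}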

The usual danger of these sorts of "modes" is that copying and pasting code from, e.g., Stack Overflow can lead to surprising errors (if that code was written assuming ?Send or whatever).

Anyway, I'm just thinking out loud here.

1 Like

You're referring to an &self function? I think in that case it would be that Self: Sync must hold, no?

Interesting idea. It would be quite challenging to implement, though, since the desugaring occurs quite early in the pipeline. Also, I imagine it will sometimes be tricky to tell whether something is Send (e.g., async fn foo<T>(x: &T) -- without a Sync bound on T, this is not Send, but maybe it was meant to be).

Yes, in that case it would be just Self: Sync.

Yeah -- ideally I'd like this to turn into fn foo<'a, T>(x: &'a T) -> impl Future<Output = ()> + SendIf<T: Sync> + 'a (when written in a trait).
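There's no SendIf in the language today, but the conditional bound it describes is exactly what auto traits already compute structurally for a hand-written future type (a sketch; FooFuture is a made-up stand-in for the compiler-generated state machine):

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Hand-written equivalent of the state machine for `async fn foo<'a, T>(x: &'a T)`.
struct FooFuture<'a, T>(&'a T);

impl<'a, T> Future for FooFuture<'a, T> {
    type Output = ();
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        Poll::Ready(())
    }
}

fn assert_send<F: Send>(_: F) {}

fn check<T: Sync>(x: &T) {
    // `FooFuture<'a, T>: Send` holds iff `&'a T: Send`, i.e. iff `T: Sync` --
    // the conditional bound `SendIf` would spell out in the signature.
    assert_send(FooFuture(x));
}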

2 Likes

On the questions:

  1. I think often enough: in performance-critical contexts, or in ones where data simply can't be moved between threads (e.g., futures and the data they obtain on a UI event loop like glib's are always bound to its single executor thread).
  2. Tide, as I understand it, is a library that defines a few handler functions and doesn't bring in an executor. It should run with all kinds of futures, Send and non-Send alike, in order to be usable in all contexts. An executor/framework like tokio, being multithreaded and work-stealing, might require futures to be Send.

Actually, I was also caught by surprise by this topic: I implemented an async mutex. The mutex and its async fn lock() are Send + Sync; however, the returned Future and LockGuard were not. This didn't cause any issues in my local testing with a single-threaded executor, and I expected it also not to be necessary for a multithreaded one (since it's not necessary for synchronous mutexes, and since I expected people to work with the lock only on a single thread, even if await points are involved).

However, I hadn't taken into account the possibility that some executors migrate futures between threads. That obviously requires adding Send to those types.
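The spawn boundary of a work-stealing runtime is where this bites. A sketch assuming tokio's API (tokio::spawn requires the task future to be Send + 'static, so the lock future, and any guard held across an .await, must be Send too):

use tokio::sync::Mutex;

async fn increment(mutex: &'static Mutex<u32>) {
    // The lock future lives across its own `.await`, so this task future
    // is `Send` only because tokio's lock future and guard are `Send`.
    let mut guard = mutex.lock().await;
    *guard += 1;
}

fn spawn_it(mutex: &'static Mutex<u32>) {
    // `tokio::spawn` requires `F: Future + Send + 'static`.
    tokio::spawn(increment(mutex));
}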

At first glance it seems a bit confusing that executors put requirements on the code they run, instead of only the code having requirements on the executor.

If we think about defaults: in which situations would a Send future be required?

  • It wouldn't be required for a single-threaded scheduler (e.g., what JavaScript/libuv/glib/Qt/etc. do).
  • It wouldn't be required for a multithreaded scheduler that doesn't migrate work between threads (e.g., what async runtimes like Netty do).
  • It would be required for a multithreaded scheduler that always moves work between threads (e.g., juliex).
  • It would be required for a multithreaded scheduler that sometimes migrates work between threads via work-stealing (AFAIK the tokio runtime does this).

My opinion is that the first two are great concepts for some applications, and that no design decision should make them the less-preferred options.

Regarding solutions:

I wouldn't support a project-wide default or configuration. I have seen projects that ended up embedding three async runtimes because of a handful of dependencies that each bundled a different one. Libraries that start their own self-contained async runtime should have no impact on each other, and I wouldn't want integrators of those libraries to have to find out why things stop working as expected when another dependency is added.

Deriving the bound from other context (e.g., lifetimes or the impl Trait form) seems clever, but might be hard for newcomers to understand. It's already not super obvious that (and which) lifetimes get propagated onto the implicitly generated Future. I would rather have some explicit notation, where omitting the notation always has the same meaning (either Send or non-Send).

6 Likes

@nikomatsakis Also, it's about more than Send. I often require other traits on futures besides Send; Debug is a common one.

1 Like

If the default includes any auto trait, the behavior of RPIT (return-position impl Trait) in traits should probably be "leak parameter auto-traits". As an example, consider the minimal async function, in free and trait positions:

use std::future::Future;

fn echo<T>(t: T) -> impl Future<Output = T> { async move { t } }

trait Echo<T> {
    fn echo(t: T) -> impl Future<Output = T>;
}

impl<T> Echo<T> for () {
    fn echo(t: T) -> impl Future<Output = T> { async move { t } }
}

If it weren't for the fact that I firmly believe RPIT and async fn should have the same auto-trait inference behavior, I'd potentially argue for this "parameter auto-trait leakage" to be the behavior of all async fn. (That is: iff a struct holding all the parameters would fulfill the auto trait, the auto trait is required and exposed for the return value.) This makes the inference local to the function header. I'm almost willing to argue that non-trait RPIT should behave that way too, with impl Trait + ?AutoTrait to opt out, even though that would have to wait until the 2021 edition at the earliest.
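Concretely, the proposed rule amounts to asking whether a notional struct of the captured parameters implements the auto trait (a sketch of the check itself, not of any compiler machinery; EchoParams is made up):

// For `fn echo<T>(t: T) -> impl Future<Output = T>`, the notional
// parameter struct is:
struct EchoParams<T> {
    t: T,
}

fn assert_send<X: Send>(_: X) {}

fn demo() {
    // T = u32 is Send, so the future would be required & exposed as Send:
    assert_send(EchoParams { t: 1u32 });
    // With T = Rc<i32> the parameter struct is not Send, so no bound:
    // assert_send(EchoParams { t: std::rc::Rc::new(1) }); // would not compile
}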

Actually, the "utility" of auto-trait leakage is conditional auto-trait implementation, which must depend on the arguments in some fashion. Given all-parameter auto-trait inference, the only time you'd have to fall back to manual newtypes (which can have arbitrarily complex trait impl predicates) just for auto traits is when the auto-trait implementation depends on only some of the parameters, and the remaining parameters may not implement the auto trait.

I already had a gut feeling that auto-trait leakage through impl Trait (a feature introduced to easily restrict the API promised to the caller) was a misfeature. I've now convinced myself that the above auto-trait inference mirrors how we treat lifetime inference (in async fn more so than in free fns, as we capture all lifetimes instead of only a unique one) and would be more desirable if we could redo things.
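For reference, here is what today's leakage looks like, and why it can bite: callers can silently depend on an auto trait that appears nowhere in the signature (a small self-contained demonstration):

use std::future::Future;

fn make() -> impl Future<Output = u32> {
    async { 42 }
}

fn assert_send<F: Send>(_: F) {}

fn caller() {
    // Compiles today purely because the hidden type happens to be Send.
    // If `make`'s body later holds an `Rc` across an `.await`, this caller
    // breaks even though `make`'s signature never changed.
    assert_send(make());
}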

I firmly believe that async fn and its impl Trait desugaring (fn ... -> impl Future + 'all) should behave the same in terms of auto-trait inference. If we decide not to hold the position that they should be the same in all positions (leak in non-trait contexts, no bound in traits), we could make async special by enforcing this everywhere, or apply it only to RPIT/async in traits. If we do apply it to traits, however, we should probably provide a migration path for non-trait usages: warn quickly for cases that don't meet the weaker inference, and upgrade to an error in the next edition.

I have a hard time pushing any of those paths, however. Auto-trait “inference-from-parameters” feels better than “leakage-from-implementation”, but not enough better to warrant edition breakage. If we did want to push the migration, though, sooner would be better.

(As a side note, even a minimal impl Debug for async fn futures would probably be super valuable.)

3 Likes

While networking/systems developers usually have multiple threads at hand, GUI developers must work on a single thread most of the time. At the same time, both groups have strong motivations to use async, for different reasons.

The Rust language can hardly sacrifice either group, because both represent major areas of software development. Therefore, although it will be hard, I think we have to find a way to make both groups happy, although I have no idea how.

2 Likes

Generally, GUI developers must interact with the GUI on a single thread, but the IO operations they perform can run on different threads just fine.

IMO, instead of restricting all futures to !Send and running everything on the GUI thread, there should be ergonomic ways to run cross-executor futures: the IO operations run on the multi-threaded worker pool, and their results are sent back to the single-threaded GUI executor to update the UI.

There are already helpers for situations like this, but I wonder if there are ways they could be improved:

// This creates a `Future + !Send` because it references `!Send` GUI resources.
// (`Error` is a placeholder error type.)
async fn button_handler(&self) -> Result<(), Error> {
    // Spawn a `Future + Send` onto a multi-threaded executor to do some IO work;
    // `spawn_with_handle` returns a handle future that resolves to the task's output.
    let handle = self.io_pool.spawn_with_handle(async {
        Ok(do_io_request().await?.parse()?)
    })?;
    // Await the handle on the GUI executor, then interact with the UI elements.
    self.update_text(handle.await?);
    Ok(())
}
2 Likes

I see multiple issues there:

  • Multi-threaded async is the norm in languages like C#, and Rust programmers shouldn’t have to jump through too many hoops to support this mode of operation. I think Send futures should be the default.
  • GUI applications require that all GUI updates are done on the UI thread. C#, for example, solves this with .ConfigureAwait(true/false), but this makes async a pain to use when you’re not working with UI.

If we get a separate spawn_and_continue_on_the_current_thread function that can be used by UI engines (it really needs a better name), then only single-threaded environments should be affected and only if they want to use non-Send futures. I’m not sure how common they are.
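For reference on the single-threaded side, the futures crate already provides an executor that accepts !Send futures and keeps them on the current thread (a sketch using futures 0.3's LocalPool; this isn't the proposed spawn_and_continue_on_the_current_thread, just adjacent existing machinery):

use futures::executor::LocalPool;
use futures::task::LocalSpawnExt;
use std::rc::Rc;

fn main() {
    let mut pool = LocalPool::new();
    let spawner = pool.spawner();

    // `spawn_local` accepts `!Send` futures; the task never leaves this thread.
    let data = Rc::new(42);
    spawner
        .spawn_local(async move {
            println!("{}", data);
        })
        .unwrap();

    pool.run();
}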

Let's be very clear about what we're talking about here: we are not talking about making it impossible to use async/await to create a non-Send future. There are multiple points in the design space, of course, but we can rule out any point that makes that impossible, and we highly value keeping the annotation burden for that case low. I personally prefer adding async(?Send) as the relatively minor annotation needed to create an async item that will not be checked to be Send.

We’re just talking about the defaults here.

I think there are multiplicative factors that really strongly favor the Send bound:

  • I believe the significant majority of our users will need their futures to be Send. Rust is designed to make multithreading easier, and most open-source executor libraries have taken advantage of this by scheduling tasks across multiple threads.
  • The cost for users whose futures must be Send is much higher than for users who don't need Send. Users who don't need Send only have to add an annotation when they want to take advantage of being single-threaded - in other words, when they actually want to use non-threadsafe primitives. While this won't be uncommon, it's different from the experience of users who must be Send, who would need to annotate every trait method they call to ensure that the future it returns is Send (maybe in the trait, or worse, at the call site).

In other words, combining the fact that most people will need Send with the fact that the wrong default is more burdensome for people who need Send than for people who don't, it seems clear to me what the default should be.
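Today's manual desugaring makes that burden concrete: with boxed futures, + Send has to be hand-written into every trait method whose result callers may want to spawn on a multithreaded executor (a sketch; Service and Response are made-up names):

use std::future::Future;
use std::pin::Pin;

struct Response;

trait Service {
    // Without `+ Send` here, no implementation's future can be spawned on a
    // work-stealing executor - and the bound must be repeated in every such method.
    fn call(&self) -> Pin<Box<dyn Future<Output = Response> + Send + '_>>;
}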

And this is exactly the sort of question on which I think Fuchsia is very unrepresentative. Unlike Fuchsia, I expect - based on the movement in the open source community and from other production users so far - that the majority of users will be writing network services that balance tasks across threads. This is borne out by the defaults in the tokio crate, the runtime crate, and even the futures crate (where the boxing abstractions include a Send bound by default). To repeat: this is about the defaults, and it's been clear for a long time that the default for the futures ecosystem is multithreaded.

16 Likes

Are you referring just to using async fn in trait definitions (which there is plenty of time to discuss before it is stabilized), or would you prefer updating the current implementation to use this annotation on concrete async fn as well (which is a much tighter schedule)?

This is an interesting idea, but it seems to run into the same problem that "just" bounding async methods as Send does (the reason this is being brought up before the MVP stabilization): there would be a difference in rules between trait-associated fns and concrete fns.

However, this also seems like a backward-compatible relaxation if the initial rules require Send by default - is that right? If so, we could explore it as a way to make the single-threaded use case easier.

EDIT: It occurs to me that it may not be backward compatible in traits; I have to ponder that more. But if we were to make Send the default for concrete fns, I believe we could later relax it backward-compatibly, allowing some use cases to remove their (?Send) annotation.

1 Like

That's why this is being discussed now.

The origin of this thread is essentially that I have long held a private opinion, which I didn't share to avoid adding further complication to shipping async/await, and because I knew Taylor had pretty strong contrary opinions. But a conversation with Florian yesterday made me feel I should ask Niko if this actually should be addressed as a potential blocker, and Niko felt it should be.

I feel that, looking at trait methods in isolation, the trade-off is really clear: Send is the right default (as we discussed last year). I considered it unfortunate that this would differ from the default outside of trait methods, but I was accepting that "wart" as the cost of avoiding making the decision on a short time frame, because I also saw no advantage to imposing a default outside of traits (the leakage makes things work there).

So yes, because I think async methods should definitely be Send by default, and I think it's better if we are consistent across the board, I do think concrete async fn should have this as well. But the time frame is an obvious problem for this.

I expect that if we decide to pursue this more seriously, we will have to slip to 1.38, but I would be very unhappy if we slipped beyond that because of this issue, so whether we go further with this or not I want this resolved fairly quickly and not as a protracted issue like await syntax was.

7 Likes

In another thread I suggested the following syntax for Futures that need Debug:

#[derive(Debug)]
async fn foo() {
  ...
}

Is derive out of the question in this context for positive bounds:

#[derive(Send)]
async fn foo() {
  ...
}

or negative bounds?

#[derive(!Send)]
async fn foo() {
  ...
}
2 Likes

Derives don't apply to functions; you can use normal proc macros here.

AFAIK, an additional problem here is that you can't write impl Debug for the anonymous future type returned by foo, so neither a derive nor a proc macro could help you without model changes there.

1 Like

It's not the same as deriving; the type implements or does not implement Send regardless. The issue is asserting, as a bound, that it must (or need not) implement Send (especially valuable for trait definitions). In any event, it's just a syntactic difference from having async(Send) or async(?Send): you can't define this in library code the way you could a proc macro.

4 Likes

I would prefer a consistent default here; anything else would add another implicit "but" to trait functions. I appreciate Fuchsia's case, where futures may not be Send; I wonder if there is some way to help there.

Judging from this and other comments, I think the proposal for how we would change things from the current situation hasn't been made clear; in both my posts and Niko's, it's been alluded to in a way that's maybe vague.

The proposal is that async items would be lowered to a type that is expected to implement both Future and Send (essentially impl Future + Send). Today, they are only expected to implement Future, but they opportunistically "leak" their Send impl if they are in fact Send (this is how impl Future always works, because auto traits leak as part of how impl Trait works).

If we changed this to instead require Send, checking every async item as if it had to be Send, we would add a syntax to the language for removing that Send bound:

async(?Send) fn single_threaded() -> Foo {
    let rc = Rc::new(0);
    future().await;   // the !Send Rc is alive across await, this
                      // is OK because we wrote async(?Send)
    rc.clone();
    // ...
}

So the annotation overhead when you do want to have !Send state alive across an await point is to adjust async to async(?Send).
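Under today's rules the equivalent mistake only surfaces at a use site; the proposal moves the check to the definition unless it's marked async(?Send). A self-contained illustration of today's behavior:

use std::rc::Rc;

async fn single_threaded() {
    let rc = Rc::new(0);
    async {}.await; // `rc` is alive across this await, so the future is `!Send`
    drop(rc);
}

fn assert_send<F: Send>(_: F) {}

fn check() {
    // Today this only fails here, at the use site, not at the definition:
    // assert_send(single_threaded()); // error: `Rc<i32>` held across `.await`
}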

7 Likes