Runtime-agnostic cooperative task scheduling budget

Thought I just had: does poll_cooperate implicitly wake the task if it returns Pending? Is that handled at the Context level or does the registered callback have to do it itself?


In Tokio, the equivalent of poll_cooperate (called poll_proceed there) wakes the waker when it returns Pending.


poll_cooperate has to arrange for the task to be woken if it returns Poll::Pending just like a normal Future::poll implementation.

Currently, Tokio does this the trivial way, by simply calling the waker before it returns Poll::Pending, but I'd prefer that we don't code this into the shared mechanism. For example, I can imagine a runtime that tracks task scheduling metrics, and decides based on those whether it's going to call the waker now, or put the task into a queue to be woken at some later point (when a worker goes idle or a timeout is reached, for example).

In general, we've not really explored the whole task scheduling space in any depth, and thus I'd prefer to leave options open for runtimes to do weird stuff with scheduling wherever possible. We can't do that in runtime-agnostic crates (hence not in std or core), but we shouldn't make it hard for runtimes to experiment with clever scheduling ideas.

So in that case it would be up to the registered callback to handle it however it wants. It might then be better to have poll_cooperate: fn(&mut Context<'_>) -> Poll<Cooperation>, so simple cases can just call cx.waker().wake_by_ref() rather than having to somehow thread access to the waker through themselves.

EDIT: Though, that still gives no way to store and access state such as a counter, short of e.g. thread-locals. Another option could be pushing the data pointer up another level, though that's starting to get quite complicated, with a lot of raw pointers, and I wonder if there's a simpler API design:

impl ContextBuilder {
    fn set_poll_cooperate(
        &mut self,
        data: *mut (),
        poll_cooperate: fn(*mut ()) -> Poll<Cooperation>,
        drop: fn(*mut ()),
    ) { ... }
}

I would agree with passing &mut Context to the runtime's poll_cooperate function. Beyond that, though, if the runtime wants to do something more complex, it already needs some form of task-local handling, and it should be able to piggy-back on the task-local system it uses already.

A separate change would be to add a per-task runtime_data: *mut () pointer to Context, so that the runtime has a place to attach a pointer back to whatever it needs to think about with this task, but I think that's out-of-scope for the cooperation mechanism.


It's IMHO a reasonable design to key task-local state off of some part of the context, rather than some form of scoped thread-local. It's not realistic currently, since wrapping a waker to poll child futures builds a fresh context, but I'd prefer if we didn't completely prevent this strategy by not providing a context to some poll operations.



I agree this makes sense. And it would require new APIs for building a Context by copying everything from another Context but setting a new Waker.

Is there any widely used API that defines a new waker other than FuturesUnordered? FuturesUnordered would need to be updated anyway, to better handle preemption.


It is already possible, but not really nice. We use private types backing our wakers, which in turn point to a private context for the current execution. We avoid any TLS/globals this way. A Context is all that is needed to access the state of the current executor. It requires nightly for feature(waker_getters), though. Also see #96992 (Tracking Issue for `waker_getters` · Issue #96992 · rust-lang/rust · GitHub).

The biggest issue we ran into was figuring out what to do when callers mix and match executors, or move futures across executors. We try to tie all state to the Future, rather than the Context, to allow moving the future. Whenever it is moved onto a supported executor, we stuff the required information into the Context.

So yeah, we definitely want the context available in every call.
