Pre-RFC: Contextual parameters

Yeah, I do not doubt the existence of crates which depend on tokio for its traits. The point I'm raising is that API/method signatures/documentation lacking information about which runtime a crate depends on is not compelling justification for an implicit contextual parameter feature. hyper as an example only supports this point: It isn't affected one way or another.

I've honestly tried to find examples that do have this problem: Depends on a specific runtime, but docs are unclear about which one. So far, they all either have a direct dependency on the runtime they need, or feature flags to choose the runtime the user wants. Both appear in docs. :person_shrugging:

Framing contextuals/implicits as a binding mode is clever and I haven't seen that before.

Extremely important caveat: allocation and panicking *are* potential reentrancy paths, because both run arbitrary user code, via #[global_allocator] and the panic hook, respectively. So you effectively can't soundly justify unsynchronized &mut access via reentrancy reasoning for anything nontrivial.

That's Handle::[try_]current, with Handle::enter to establish being in the runtime scope.

And implementing your own version of this is quite straightforward now, thanks to OnceCell. An interesting design would be a runtime which takes a handle for any handle-requiring task, but which also provides a macro that generates a set of crate-local APIs backed by a crate-local global wrapping the runtime's API. It's gone into the design sketch doc for my theoretical toy runtime (faesync).
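As a rough sketch of that pattern (using std's OnceLock, the std counterpart of OnceCell; the `Handle` type here is a hypothetical stand-in for a real runtime handle such as tokio's, not any existing API):

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for a runtime handle (e.g. tokio::runtime::Handle).
#[derive(Clone)]
struct Handle {
    name: &'static str,
}

impl Handle {
    fn spawn_blocking(&self, f: impl FnOnce()) {
        // A real handle would schedule onto the runtime; here we just run inline.
        f();
    }
}

// Crate-local global: registered once, then used by crate-local wrapper APIs.
static HANDLE: OnceLock<Handle> = OnceLock::new();

fn set_handle(handle: Handle) {
    HANDLE.set(handle).ok().expect("handle already registered");
}

// Crate-local wrapper that forwards to the globally-registered handle,
// so downstream code in the crate never threads the handle explicitly.
fn spawn_blocking(f: impl FnOnce()) {
    HANDLE
        .get()
        .expect("runtime handle not registered")
        .spawn_blocking(f);
}

fn main() {
    set_handle(Handle { name: "toy" });
    spawn_blocking(|| println!("running on {}", HANDLE.get().unwrap().name));
}
```

A macro could stamp out such wrappers for each handle-requiring API, which is the crate-local-global design described above.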

Some libraries do something similar. One I know off the top of my head is buttplug-rs, which guards each of its entry points with checking that a tokio runtime is available or otherwise spins up its own global tokio runtime to enter.

But using the existing contextual tokio is just so much easier than taking a handle, and it's also easy to accidentally forget to guard some API surface and never notice, because you only ever test calling it in a context where a tokio runtime has already been established.

I expect the (hopeful) eventual std async spawn API support will likely look quite similar, though. But the other parts of runtime reactor support will remain interesting even then…

This is extremely pedantic, but, strictly speaking, something like FuturesUnordered is an executor, and nesting pseudo-executors is common; it's even a major point of async, via join!- and select!-style sub-task concurrency. A full "runtime", by contrast, bundles together an executor (what polls futures when they're ready, thus executing them) with a reactor (what notifies futures that they're ready, reacting to external progress) and an API that utilizes that reactor, and (deliberately) using more than one runtime in an application is rare. Utilizing more than one executor for different systems is still uncommon, but a lot less so than you might expect if you don't do much with async beyond relatively simple plumbing. A notable example: bevy's task pools.

In the platonic ideal of an implementation, these two subsystems would be completely independent. And similar to how the rayon thread pool is managed, if a reactor is needed but isn't running yet, it would get implicitly spawned onto the executor when needed, and the executor would poll the reactor to wake the other spawned tasks. This design can actually work quite well (I think async-std polls the reactor like this) but it suffers from the fact that in practice runtimes want to couple the executor and reactor together to improve performance: optimizing latency/throughput requires cooperation between task scheduling and wakeup.
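To make the executor/reactor split concrete, here's a minimal std-only executor sketch: it polls a future and parks the thread until woken, with no reactor at all, so it can only complete futures that don't wait on external events (names like `Parker` are my own, not from any runtime):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// Wakes the executor thread by flipping a flag under a mutex.
struct Parker {
    woken: Mutex<bool>,
    cvar: Condvar,
}

impl Wake for Parker {
    fn wake(self: Arc<Self>) {
        *self.woken.lock().unwrap() = true;
        self.cvar.notify_one();
    }
}

// A bare executor: poll the future, sleep until the waker fires, repeat.
// A reactor would be what actually calls `wake` in response to I/O.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let parker = Arc::new(Parker {
        woken: Mutex::new(false),
        cvar: Condvar::new(),
    });
    let waker = Waker::from(parker.clone());
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
        let mut woken = parker.woken.lock().unwrap();
        while !*woken {
            woken = parker.cvar.wait(woken).unwrap();
        }
        *woken = false;
    }
}

fn main() {
    // No reactor needed: this future is ready on the first poll.
    let n = block_on(async { 1 + 2 });
    println!("{n}");
}
```

Spawning a reactor as just another task on such an executor, rayon-style, is exactly the "platonic" decoupled design described above.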

6 Likes

AFAIK Agda and Lean support so-called "instance arguments", which look very similar to this proposal. You define them along with the other parameters but with slightly different syntax (inside double curly braces for Agda, inside square brackets for Lean), and the compiler will allow you to call those functions without specifying those arguments; instead it searches for them in the caller's context, among its instance parameters or other explicit instances. In these languages they are mainly used to implement type classes, though, as opposed to passing context around.

In Haskell there's this paper: https://okmij.org/ftp/Haskell/tr-15-04.pdf and this library to go with it

1 Like

I would call this "implicit parameters", which IMO makes it much clearer what they do and how they work. But unfortunately it seems Scala already picked this IMO rather non-intuitive term, so no matter what we call this, it'll be confusing. :confused:

Anyway, this is definitely an interesting feature worth exploring. We do in fact already have an implicit parameter in Rust: that's how #[track_caller] is implemented.
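To illustrate: with #[track_caller], the compiler implicitly passes the call site's Location to the callee, which reads it via Location::caller(); conceptually an implicit parameter, even though no parameter appears in the signature:

```rust
use std::panic::Location;

// #[track_caller] makes the compiler thread the caller's Location
// through to this function as a hidden argument.
#[track_caller]
fn where_am_i() -> &'static Location<'static> {
    Location::caller()
}

fn main() {
    let loc = where_am_i();
    // Reports the call site in main, not the body of where_am_i.
    println!("called from {}:{}", loc.file(), loc.line());
}
```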

Scala 2 does call these implicit parameters. Scala 3 changed the vocabulary and semantics quite a bit.

1 Like

There are also the panic handler and the (global) allocator, although these are implemented as linked symbols rather than as additional/implicit function parameters.

Similar to contextual parameters, they inject data/behavior into called functions (chosen at compile time instead of in some caller, e.g. main). The main differences:

  • There can only be one panic handler and one global allocator per binary (a restriction)
  • Their values must be known at compile time (due to how they are implemented)

I'm not suggesting to use contextual parameters for those two, but considering them may be useful to know what's needed in practice.

For example: Vec has additional functions that take an allocator explicitly, overriding the default/global one so that different allocators can coexist in the same binary. If the allocator were a contextual parameter, it would be really useful if an Option<T> context (or one marked otherwise) could be left out entirely (as in, no Option<T> needs to exist in the caller at all). [1]


  1. For the global allocator it would also need to be possible to inject it through functions that are not aware of it (i.e. not part of their function signatures), but there are good arguments against this capability. ↩︎

A bit of theoretical musing, if someone is interested in this viewpoint:

Contextual/implicit parameters can be seen as the dual of effects – coeffects.

Where effects are returned/triggered by a function and a compatible effect handler is looked up on the stack, coeffects are taken as parameters, and the stack is searched for a "coeffect provider".

Actually, I would go so far as to say that effects are contextual parameters, just with different syntax. You can see this pretty clearly in how they work in Unison.

Another thing worth noting is that if you implement contextual parameters correctly, they basically give you scoped generics for free once you combine them with invariant lifetimes.
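The invariant-lifetime half of that already works on stable today; this is the "branding" trick (as in GhostCell), sketched here with hypothetical names (`Token`, `scope`). A contextual parameter would just make passing the token implicit:

```rust
use std::marker::PhantomData;

// A brand token: the fn-pointer PhantomData makes `'id` invariant, so
// each call to `scope` mints a lifetime that unifies with no other.
struct Token<'id> {
    _invariant: PhantomData<fn(&'id ()) -> &'id ()>,
}

// The higher-ranked bound forces the closure to work for a fresh,
// caller-unnameable 'id, i.e. a scoped "generic" introduced at runtime.
fn scope<R>(f: impl for<'id> FnOnce(Token<'id>) -> R) -> R {
    f(Token { _invariant: PhantomData })
}

fn main() {
    let n = scope(|_token| {
        // Values branded with this 'id can only be used together with
        // this token; tokens from other `scope` calls won't type-check.
        40 + 2
    });
    println!("{n}");
}
```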

They look pretty similar, but one important additional aspect of effects is that they can divert control flow, which doesn't seem possible with just contextual parameters.

I mean, that seems to mainly be a limitation of not having first-class continuations.

Also, not every effect system allows diverting control flow, and often when it is possible, it's via hardcoded intrinsic effects, not any emergent property of the system.