Pre-pre-RFC/Working Prototype: Borrow-Aware Automated Context Passing

I'm currently in the process of drafting an RFC for this proposal based on the discussions in this thread.

Hello, all!

I spent the past few months writing a custom rustc_driver that implements automated borrow-aware context passing, called AuToken (permalink). The linked repository's README describes how to use the tool, its semantics, some example usages, and the project's open design questions. The tool is fully functional, and I've been using it in a personal project of mine for about a month now.

This project attempts to do something very similar to the "Contexts and capabilities in Rust" blog post but seems to achieve it with less machinery. I haven't really looked into that proposal too much yet, so I can't compare them properly. I'll try to get on that as soon as I can!

This post is marked as a "pre-pre-RFC" because I'm interested in writing a proper RFC for this feature and would like help resolving all the open questions and generally getting feedback from other Rustaceans before spending all the effort required to write a usable RFC.

Thank you!

Riley


I think I have part of a solution to the problem of broken semantic versioning. Of course, feel free to explore alternatives; none of this is even close to being set in stone—let alone implemented!

As a reminder of the problem, any analysis rule which requires us to know, ahead of time, which trait members borrow something will introduce semantic versioning hazards. Consider the following snippet:

use autoken::*;

// upstream version 1
pub fn wrap_closure(f: impl Fn() + 'static) -> impl Fn() + 'static {
    move || {}
}

// upstream version 2
pub fn wrap_closure(f: impl Fn() + 'static) -> impl Fn() + 'static {
    move || f()
}

// downstream consumer
fn my_consumer() {
    let foo = wrap_closure(|| {
        let _ = BorrowsOne::<u32>::acquire_mut();  
    });
    
    // The upgrade from version 1 to 2 causes this to break since unsizing
    // operations are only permitted if `foo` doesn't borrow anything.
    let foo = &foo as &dyn Fn();
}

A similar issue occurs if the unsizing is done in the upstream crate instead:

use autoken::*;

// upstream version 1
pub fn wrap_closure(f: impl Fn() + 'static) -> impl Fn() + 'static {
    move || {}
}

// upstream version 2
pub fn wrap_closure(f: impl Fn() + 'static) -> impl Fn() + 'static {
    let f = &f as &dyn Fn();
    move || {}
}

// downstream consumer
fn my_consumer() {
    // The upgrade from version 1 to 2 causes this to break since unsizing
    // operations are only permitted if `f` doesn't borrow anything.
    wrap_closure(|| {
        let _ = BorrowsOne::<u32>::acquire_mut();  
    });
}

While we could use markers to determine whether a given trait is allowed to be unsized à la ?Sized, doing this would only introduce even more function colors into an already quite colorful language!

pub fn wrap_closure(f: impl Fn() + 'static) -> impl Fn() + 'static {
    move || f()
}

pub fn wrap_closure_sized(f: impl ?unsizable Fn() + 'static) -> impl ?unsizable Fn() + 'static {
    move || f()
}

pub fn does_not_compile(f: impl ?unsizable Fn() + 'static) -> impl Fn() + 'static {
    // ERROR: a closure which must remain unsizable cannot call a
    // potentially-borrowing (`?unsizable`) trait member.
    move || f()
}

I'd rather not block this proposal on the already complex keyword generics proposal. Instead, I suggest that we ban trait members which perform borrows without absorbing them. Although this sounds quite restrictive, it really isn't! Traits can always emulate the legacy behavior by taking Borrows and BorrowsOne objects as parameters like so:

use autoken::*;

pub fn wrap_closure<T: TokenSet>(f: impl Fn(&mut Borrows<T>) + 'static)
    -> impl Fn(&mut Borrows<T>) + 'static
{
    // We know this closure implementation borrows nothing and is therefore
    // a safe `Fn` implementation since `f(b)` is calling a trait method
    // which we inductively know to borrow nothing!
    move |b| f(b)
}

fn my_consumer() {
    let borrows_nothing = wrap_closure::<()>(|_| {});
    let borrows_something = wrap_closure::<Mut<u32>>(|b| {
        // We need new syntax for this since the old syntax used closures,
        // which can no longer leak borrows.
        absorb b {
            BorrowsOne::<u32>::acquire_mut();
        }
      
        // Personally, I'd also introduce a form of `absorb` which applies the
        // absorption operation to the entire block to avoid rightward drift.
        //
        // Something like:
        //
        // ```
        // |b| {
        //     absorb b;
        //     autoken::BorrowsOne::<u32>::acquire_mut();
        // }
        // ```
        //
        // ...or even:
        //
        // ```
        // |absorb b| {
        //     autoken::BorrowsOne::<u32>::acquire_mut();
        // }
        // ```
        //
        // ...would go a long way in making the feature usable!
    });

    // Oh look, trait implementations are always unsizable!
    let dyn_1 = &borrows_nothing as &dyn Fn(&mut Borrows<()>);
    let dyn_2 = &borrows_something as &dyn Fn(&mut Borrows<autoken::Mut<u32>>);
}

One might be concerned that this restriction would require users to introduce &mut Borrows<T> parameters everywhere. Luckily, this pattern is needed far less often than one might expect, since it is usually possible for a user to do something like this:

use autoken::*;

pub fn wrap_closure<'a>(f: impl Fn() + 'a) -> impl Fn() + 'a {
    move || f()
}

fn my_consumer() {
    let mut b = BorrowsOne::<u32>::acquire_mut();
    let borrows_something = wrap_closure(|| {
        absorb b;
        BorrowsOne::<u32>::acquire_mut();
    });
    borrows_something();
}

Ideally, we would make this pattern more convenient by allowing borrow acquisitions to be inferred. This, however, would require some complex compiler trickery to defer borrow-set inference until all other types have been inferred, so it might be a good idea to leave that feature to a later proposal.
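To make that idea concrete, here is a purely hypothetical sketch of what inferred acquisition might look like; the compiler would insert the equivalent of the acquire_mut/absorb boilerplate above on the user's behalf, and none of this elaboration is implemented:

use autoken::*;

pub fn wrap_closure<'a>(f: impl Fn() + 'a) -> impl Fn() + 'a {
    move || f()
}

fn my_consumer() {
    // Hypothetically, the compiler could infer that this closure needs a
    // mutable `u32` token and acquire-and-absorb it at the binding site,
    // making the explicit `let mut b = ...; absorb b;` dance unnecessary.
    let borrows_something = wrap_closure(|| {
        let _ = BorrowsOne::<u32>::acquire_mut();
    });
    borrows_something();
}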

Preventing trait members from leaking token borrows also fixes the other big semantic versioning breaker: trait method ties. Here's a reminder of the hazard:

use autoken::*;

pub trait MyTrait {
    fn run<'a>(self) -> &'a ();
}

// version 1
pub fn my_func(f: impl MyTrait, g: impl MyTrait) {
    let _a = f.run();
    let _b = g.run();
}

// version 2
pub fn my_func(f: impl MyTrait, g: impl MyTrait) {
    let a = f.run();
    let b = g.run();
    let _ = (a, b);
}

fn my_consumer() {
    struct Breaks;

    impl MyTrait for Breaks {
        fn run<'a>(self) -> &'a () {
            tie!('a => mut u32);
            &()
        }
    }

    // The upgrade from version 1 to 2 causes this to break since the
    // two results of `.run()` are forced to live concurrently, making
    // `u32` mutably borrowed in more than one place and causing a
    // compiler error.
    my_func(Breaks, Breaks);
}

Since trait methods can no longer leak their token borrows, hazardous ties become impossible.
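Concretely, under this rule, the hazardous implementation above would be rejected at its definition site rather than at some downstream use (error wording hypothetical):

impl MyTrait for Breaks {
    fn run<'a>(self) -> &'a () {
        // ERROR (hypothetical): trait methods may not leak token borrows;
        // `tie!('a => mut u32)` leaks a mutable borrow of the `u32` token
        // into the caller's scope.
        tie!('a => mut u32);
        &()
    }
}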

One might wonder how we can express token-tied lifetimes in trait methods now. Not to fear: this is still entirely possible by encoding the tie in the trait method's signature:

use autoken::*;

cap! {
    MyCap = Vec<u32>;
}

trait MyTrait {
    type Tokens: TokenSet;

    fn do_something<'a>(&self, b: &'a mut Borrows<Self::Tokens>) -> &'a mut Vec<u32>;
}

struct Demo;

impl MyTrait for Demo {
    type Tokens = Mut<MyCap>;
  
    fn do_something<'a>(&self, b: &'a mut Borrows<Self::Tokens>) -> &'a mut Vec<u32> {
        // Since we can absorb `b` for `'a`, leaking `MyCap` from the scope is perfectly fine.
        absorb b {
            cap!(mut MyCap)
        }
    }
}

With that simple change, we've effectively eliminated all of our semantic-versioning breakers without losing an ounce of the system's expressiveness. Neat!

This leaves just one open question for the analyzer: how do we handle generic borrows? Although they don't break semantic versioning outright, they do create some potentially nasty foot-guns:

use autoken::*;

// version 1
pub fn my_func<T, V>() {
    let _a = BorrowsOne::<T>::acquire_mut();
    let _b = BorrowsOne::<V>::acquire_mut();
}

// version 2
pub fn my_func<T, V>() {
    let a = BorrowsOne::<T>::acquire_mut();
    let b = BorrowsOne::<V>::acquire_mut();
    let _ = (a, b);
}

fn my_consumer() {
    // The upgrade from version 1 to 2 causes this to break since the
    // borrows of `T` and `V` now alias mutably, causing a compiler error.
    my_func::<u32, u32>();
}

Although it's tempting to ban borrows involving generic parameters as well, I'm much less confident in the impact of that restriction since, unlike the one proposed above, it would genuinely reduce the feature's expressiveness. Maybe we could just lint this type of usage as potentially hazardous, since I only really expect power users to employ it?
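In the meantime, a cautious downstream consumer can always sidestep the foot-gun by instantiating such functions with distinct marker types, since borrows of distinct token types never alias (a sketch reusing my_func from above):

use autoken::*;

struct TokenA;
struct TokenB;

fn my_cautious_consumer() {
    // `TokenA` and `TokenB` are distinct types, so their borrows can never
    // alias and both versions of `my_func` remain fine.
    my_func::<TokenA, TokenB>();
}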

P.S. Well, we do technically lose the neat Deref overloading trick, but we could get that back by introducing an alternative, token-aware form of Deref. I think something like this could work, assuming we automatically acquire and pass tokens on each dereference operation:

trait TokenDeref {
    type Output;
    type Tokens: TokenSet;

    fn deref_tokens<'t>(&self, tokens: &'t Borrows<Self::Tokens>) -> &'t Self::Output;
}

trait TokenDerefMut: TokenDeref {
    fn deref_tokens_mut<'t>(&self, tokens: &'t mut Borrows<Self::Tokens>) -> &'t mut Self::Output;
}

I have to say I'm not a fan of this whole idea (as I understand it). It adds hidden "action at a distance", which seems likely to lead to spaghetti code.

I much prefer explicit parameters. It makes it easier to understand the code when reading it. In your small examples it doesn't matter of course, but consider a code base that is hundreds of thousands of lines or more (as is common in industry).

Globals already have this problem, but at least there is only one instance of them (it is not relative to the call stack context). Thread locals have this issue even more since they are call stack relative (and I strongly dislike them for that reason). This seems just as bad as thread locals (but probably interacts better with work stealing async).

So yeah, I strongly dislike this (and any other implicit call stack based context passing). Make code explicit. Code is read far more than written, a bit of verbosity is worth it.


We are writing embedded software and got tired of passing a lot of state related to networking, the filesystem, etc. down through arguments. On std, the OS acts as your global state.

We decided to solve it using generics and dependency injection, and will soon try the teloc library (GitHub - p0lunin/teloc: Simple, compile-time DI framework for Rust; it currently requires std, but we'll submit a PR to make it no_std). This is still an explicit way to pass things around, yet we expect it won't force us to add a new parameter to every function in the chain when introducing something new, while also avoiding global state.

Your approach seems to address the same issue, and I would like to see where it goes, but I'm not sure what you want to achieve with an RFC. I don't really believe we will agree to something this implicit when you can get 70% of the ergonomics with clever traits.


Manually passing context is reasonable in many cases (I mean, it's the default way we pass things!) and I can totally see how this feature could be abused, but that doesn't mean that scenarios where this feature is invaluable don't also exist!

All languages need to pass some amount of context around. In most other languages, that isn't a big problem since, once a given piece of context is handed to a subsystem, the subsystem can stash a reference to it and reuse that reference for all subsequent method calls. In Rust, however, stashing a reference inside a subsystem comes with a significant penalty: to mutate that object, you are now forced to employ interior mutability. Likewise, dynamic dependency injection only really works if either a) you're fine with borrowing objects at bundle granularity or b) you're fine with making every single component of your bundle immutable.
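For a concrete illustration of that penalty (a toy example of my own, not something from the project):

use std::cell::RefCell;
use std::rc::Rc;

// Once a subsystem stashes its context, mutating that context requires
// interior mutability, trading compile-time borrow checking for runtime
// checks that can panic:
struct AudioSystem {
    assets: Rc<RefCell<Vec<String>>>,
}

impl AudioSystem {
    fn play(&self, name: &str) {
        // Panics at runtime if any other borrow of `assets` is live here.
        self.assets.borrow_mut().push(name.to_string());
    }
}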

And that's super unfortunate since plenty of patterns require granular exterior mutability to function, and that makes those patterns impossible to effectively dependency inject. One such pattern I absolutely love is the generational arena, which I gave as an example in the "neat examples" section of the project README. This pattern is essentially magic: it gives you a way to emulate shared mutable references which can be freely copied and destroyed without requiring any sort of runtime borrow-checking.
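For reference, here is a heavily condensed sketch of what that pattern looks like under this proposal; Player and Players are made-up stand-ins, and the README has the real example:

use autoken::*;
use generational_arena::{Arena, Index};

cap! {
    Players = Arena<Player>;
}

struct Player {
    health: u32,
}

// `Index` is plain old data: it can be copied and dropped freely without
// any runtime borrow tracking. Only the borrow of the arena itself is
// checked, and that happens at compile time.
fn damage(target: Index) {
    cap!(mut Players)[target].health -= 10;
}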

I do sympathize with the potential usage hazards this feature may introduce but I do think that the benefit of reducing our dependence on runtime borrow checking and replacing it with a compile-time checked alternative is highly compelling. Implementing this feature makes the borrow checker considerably more powerful without really changing any of its core analyses.

Interior mutability does not obviate the problem of spooky action at a distance (SAAD); it just hides it until runtime, ultimately limiting the types of APIs we can safely borrow check. A similar thing can be said about the solution of manually passing everything as a parameter. Doing so doesn't prevent spooky action at a distance either; it just means that every time a user wants to use some context deep in a function chain, they have to manually update the signatures of all ancestor functions, essentially replacing automated SAAD with tedious manual SAAD.


That seems like a fair motivation.

My main gripe is still the hidden data flow though. It would be good if there were a way to follow (statically, at compile time) how the data flows; with explicit parameters that is easy, with implicit context it is not. It is, in my opinion, worth considering what can be done to improve the ergonomics of "who sets what where and when", especially when the developer who wrote the code left the company two years ago and you need to fix a bug urgently.

Also, since autoken is apparently based on thread locals (I missed that detail the first time around), it won't play well with multi-threaded async executors. In fact, as I understand it, autoken won't work well with async at all, since the scoping will be wrong. That is probably worth figuring out how to fix.


It is in my opinion worth considering what can be done to improve the ergonomics of "who sets what where and when"

It's worth noting that there is no notion of setting a capability; instead, everything is bound in a scoped context. When you see that a function consumes a given piece of context using e.g. cap!(mut MyContext), you know that a scope providing this context must be present along every ancestor call path and that this entire chain involves neither generic nor dynamic dispatch. You would still have to do the same kind of ancestry tracing with explicit parameter passing, since the existence of those parameters would still not tell you where the value was originally obtained. The only things that are implicit here are the intermediate forwarding steps, and I don't think those are particularly interesting to any developer.

Also, since autoken is based on thread locals apparently (missed that detail first time around), it won't play well with multi-threaded async executors.

That's actually purely an implementation detail. If we amend AuToken's current design with my proposal in this comment, what we can do with AuToken is no different from what we can do with manual parameter passing today. In other words, you are still beholden to the restriction that, if a trait doesn't have a parameter through which to pass along a given piece of information, you can't pass it along.

It does still play well with multi-threaded async executors since, within a future's body, you can still pass context along. It is not possible, however, to just forward context across e.g. a tokio::spawn boundary since doing so would require the capability to live for 'static.
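A sketch of that distinction, assuming the amended design (Counter is a made-up capability, and I'm assuming cap!(mut ...) can be dereferenced like the &mut it desugars to):

use autoken::*;

cap! {
    Counter = u32;
}

async fn works_fine() {
    // Within a single future's body, context flows exactly as it does in
    // synchronous code:
    *cap!(mut Counter) += 1;
    subtask().await;
}

async fn subtask() {
    *cap!(mut Counter) += 1;
}

// By contrast, this would be rejected (as intended): the spawned task may
// outlive every scope providing `Counter`, so the capability would have
// to live for 'static.
//
// tokio::spawn(async { *cap!(mut Counter) += 1 });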

I guess the implications of using thread_local! internally do sort of show up if you try to call thread::scope, but that gets properly rejected because Borrows inherits !Send and !Sync from MyCap's marker type. An actual Rust feature with the ability to modify code-gen would let this work.
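For example, this is the kind of program I'd expect to be rejected today, if I'm reading the semantics right (sketch only):

use autoken::*;

cap! {
    MyCap = u32;
}

fn demo() {
    let borrow = BorrowsOne::<MyCap>::acquire_mut();

    std::thread::scope(|s| {
        s.spawn(|| {
            // ERROR (as intended): `borrow` is neither Send nor Sync, so a
            // closure capturing it cannot cross the thread boundary, even
            // though the lifetimes would otherwise check out.
            let _ = &borrow;
        });
    });
}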


I'd appreciate a comparison to Meta's facet concept. I'm not fully sure the following is a fair description, but:

They go a different way: you have one or more "facets", which are either structs/enums or object-safe traits. A pair of traits is then defined for every facet: FacetRef, which lets you get at your facet from a container by shared reference (so code looks like let my_facet: &MyFacetName = container.my_facet_name()), and an Arc-returning counterpart (used as container.my_facet_name_arc()) that shares the instance held in the container.

There's then some logic that creates "factories" that know how to build containers, including handling dependencies between facets in a container (as long as the dependencies are not circular), and the expectation is that you have a factory that creates the containers you want.

Users of facets can take a context parameter whose type is similar to impl MyFacetNameRef + MyOtherFacetRef + MyArcFacetArc + Copy, allowing them to take a shared reference to the container and to pass that reference on to other users. This lets functions narrow the API surface they accept (e.g. a function can take impl MyFacetNameRef only, in which case its context object exposes just fn my_facet_name(&self) -> &MyFacetName), while still having one shared context for the entire application and letting monomorphization produce the correct code.

And, of course, facets can have interior mutability, allowing them to carry mutable contexts, not just immutable ones.
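If it helps, here is a hand-rolled approximation of that shape; every name below is hypothetical since the real crate isn't published in a consumable form:

use std::sync::Arc;

struct Config { verbose: bool }
struct Logger;

// The pair of generated per-facet traits:
trait ConfigRef { fn config(&self) -> &Config; }
trait LoggerArc { fn logger_arc(&self) -> Arc<Logger>; }

// A factory-built container implements the trait for every facet it holds:
struct Container {
    config: Config,
    logger: Arc<Logger>,
}

impl ConfigRef for &Container {
    fn config(&self) -> &Config { &self.config }
}

impl LoggerArc for &Container {
    fn logger_arc(&self) -> Arc<Logger> { Arc::clone(&self.logger) }
}

// Users narrow the API surface to just the facets they need while still
// receiving one shared, copyable context object:
fn do_work(cx: impl ConfigRef + Copy) {
    if cx.config().verbose { /* ... */ }
}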

Uh, no README, and it's not on docs.rs. Very hard to explore that API. Not sure it's fair to ask for a comparison with some mostly unknown and unpublished crate.

That's why I described it rather than just pointing at the code: the source is published, so if my description is inadequate you can look at the source. Thus, I don't think it's entirely unfair to compare the two.

Especially since they both end up in roughly the same problem space: you don't want pub statics as your global context, you don't want a single dependency crate that supplies a "god object" as context, but you do need to somehow get context into all the functions that care about it.

Facets are not actually exploring the same problem space as this prototype because, as you mention, facets require interior mutability to work with mutable context elements. As I mentioned in an earlier comment, the context-injection solution this proposal implements, meanwhile, can effectively inject mutable references to context elements where each element's borrow status is tracked independently.

I have actually previously explored a solution to this problem where parameter passing is done explicitly while still allowing each component to be individually borrow checked, and the response was fairly cold. In my opinion, since this problem already requires a language feature to solve effectively, we might as well avoid half-measures and eliminate the need to explicitly pass these context bundles entirely, since that task really isn't that interesting to users. Indeed, the predecessor to this proposal seems to have stalled not because users are uncomfortable with the concept of automated context passing, but because there were too many complex edge cases to solve. As far as I can tell, this solution manages to solve most of them. (Perhaps @tmandry has a different perspective on this?)

It seems like you and @Vorpal are still a little bit confused about what this feature is supposed to do. Fundamentally, it is pure sugar for manually passing references through a chain of functions. For example, this program:

use autoken::*;

cap! {
    Cx1 = Vec<Foo>;
    Cx2 = Vec<Bar>;
    Cx3 = Vec<Baz>;
}

fn main() {
    cap! {
        Cx1: &mut my_vec_1,
        Cx2: &mut my_vec_2,
        Cx3: &mut my_vec_3,
    =>
        foo(my, params, here);
    }
}

fn foo(a: Random, b: Parameters, c: Here) {
    bar(other, params, here);
}

fn bar(a: More, b: Random, c: Parameters) {
    let my_value = &mut cap!(mut Cx1)[1];
    let my_other_value = baz(even, more, params);
    do_something_with(my_value);
    do_something_else_with(my_other_value);
}

fn baz<'a>(g: Wow, h: Even, i: More) -> &'a mut Baz {
    tie!('a => mut Cx3);

    cap!(mut Cx2).push(Bar { ... });
    &mut cap!(mut Cx3)[2]
}

Would desugar to:

fn main() {
    foo(&mut my_vec_1, &mut my_vec_2, &mut my_vec_3, my, params, here);
}

fn foo(cx_1: &mut Vec<Foo>, cx_2: &mut Vec<Bar>, cx_3: &mut Vec<Baz>, a: Random, b: Parameters, c: Here) {
    bar(cx_1, cx_2, cx_3, other, params, here);
}

fn bar(cx_1: &mut Vec<Foo>, cx_2: &mut Vec<Bar>, cx_3: &mut Vec<Baz>, a: More, b: Random, c: Parameters) {
    let my_value = &mut cx_1[1];
    let my_other_value = baz(cx_2, cx_3, even, more, params);
    do_something_with(my_value);
    do_something_else_with(my_other_value);
}

fn baz<'a>(cx_2: &mut Vec<Bar>, cx_3: &'a mut Vec<Baz>, g: Wow, h: Even, i: More) -> &'a mut Baz {
    cx_2.push(Bar { ... });
    &mut cx_3[2]
}

Note that each element of the context gets its own parameter, allowing each element to be borrowed separately.

Although I implemented it with thread locals in my prototype, there is no need to do that in an actual implementation; indeed, we might find it useful, for the sake of no_std support, to pass this context like any other parameter under the hood.


In my opinion, what would really sell this as a language feature is if it provided a way to pass something down from a high level to a deep one while verifying that intermediate layers do not touch it. Riffing on your syntax, perhaps that could look like:

fn foo(...params) {
    bar(19, "magic", params);
    baz(23, "science", params);
    blurf(42, "more magic");
}

fn bar(x: i32, y: &str, cap!(Cx1: &mut Vec<Foo>)) {
    let my_value = Cx1[1];
    do_something_with(x, y, my_value);
}

fn baz(x: i32, y: &str, cap!(Cx2: &mut Vec<Bar>), ...params) {
    Cx2.push(Bar::from(x, y, params));
}

fn blurf(x: i32, y: &str) {
    do_something_else_with(x, y);
}

foo receives the context, and has the ability to pass it along to other functions, but cannot access any elements of the context itself.

bar receives only Cx1 from the context; it can use it however it wants, but cannot call functions that need the context.

baz receives Cx2 from the context and can use it, and can also pass the whole context to other functions that need it. (TBD: whether the context passed down by baz includes Cx2 -- I can imagine situations where you'd want that and situations where you wouldn't.)

blurf doesn't get the context or any of its components.

And, crucially, you don't have to analyze any function bodies to know these things. Everything relevant is in each function's signature.


By "the ability to verify that intermediate layers do not touch it," I assume you mean that you're trying to acquire a context element in such a way that deeper functions cannot re-borrow it? If so, that's already possible! You can just borrow the capability token for the entire duration of the function calls you're trying to exclude from the context like so:

use autoken::*;

cap! {
    MyCap = Vec<u32>;
    MyOtherCap = Vec<u32>;
}

fn main() {
    cap! {
        MyCap: &mut vec![42, 12],
        MyOtherCap: &mut vec![1, 2, 3],
    =>
        entry();
    }
}

fn entry() {
    cap!(mut MyOtherCap).push(4);

    let stop = BorrowsOne::<MyCap>::acquire_mut();
    cannot_borrow_my_cap();
    let _ = stop;
}

fn cannot_borrow_my_cap() {
    // Uncommenting this line would cause a borrow-checker error because
    // `stop` and `cap!(mut MyCap)` would be borrowing the same token mutably
    // at the same time.
    // cap!(mut MyCap).clear();

    // This, however, is still allowed since the borrow taken by the earlier
    // `cap!(mut MyOtherCap).push(4)` call ended before `cannot_borrow_my_cap`
    // was called.
    cap!(mut MyOtherCap).push(5);
}

You could define a macro to automate this process like so:

#[doc(hidden)]
pub struct Holder<T>(pub T);

// The no-op `Drop` impl keeps the wrapped token borrow alive until the
// end of the enclosing scope instead of letting NLL end it early.
impl<T> Drop for Holder<T> {
    fn drop(&mut self) {}
}

#[macro_export]
macro_rules! take_cap {
    ($($name:ident = $ty:ty),*$(,)?) => {$(
        let mut $name = $crate::Holder(::autoken::cap!(mut $ty));
        let $name = &mut $name.0;
    )*};
}

// ================================================================ //

...

fn entry() {
    take_cap!(_cap = MyCap);

    cap!(mut MyOtherCap).push(4);
    cannot_borrow_my_cap();
}

...

Additionally, you can limit the item visibility of capabilities you don't want other modules to access.
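For example, assuming cap! items follow the usual privacy rules (which I haven't double-checked against the prototype):

mod engine {
    autoken::cap! {
        // With no `pub`, code outside this module cannot name `RenderQueue`,
        // so it cannot borrow or consume it either.
        RenderQueue = Vec<u32>;
    }
}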

No, I mean the opposite of that.

In my code example, foo is the "intermediate layer" I'm talking about. It receives the context from its caller and passes it down to bar and baz without accessing any element of the context itself, and you can tell that this is what foo does by looking at its function signature. If foo were to start accessing context elements in the future, its signature would have to change. That's the property I want.

Similarly, you can tell from the function signatures exactly which context elements bar and baz use, and that baz passes the whole context down further but bar does not.

I want this because the biggest problem (in my opinion) with both "the entire context" objects and passing lots of individual context arguments is that you have to dig through every function body to tell what is actually used where.


Thanks for this description; it helps enormously with understanding what you're doing.

The thing I dislike is that the cap! and tie! macros are doing "magic": no parameter is passed to represent the context, and yet it appears, having been hidden from view completely in foo. And this breaks my local reasoning: bar and baz somehow have access to things that, from my perspective (where main is in a different module from the other functions), don't exist, since I'm starting at foo and working through its callees.

Can you do something to ensure that, looking only at foo's signature, I can tell what context is available to pass on to bar and baz?


Isn't one of the goals of this proposal that the author of foo does not have to know what context items bar and baz require? That seems in direct conflict with what you're asking for.


Sorry if I'm still not understanding this correctly. Here's how I'm currently interpreting the rules of your proposal:

  • ...name represents an arbitrary collection of contextual elements.
  • You can extract individual elements from the collection. When you do so, those elements leave the context for the scope of the function unless you forward them along explicitly.
  • Presence of an element in the context is checked at compile time.

That sounds like it would suffer from ergonomic issues related to having to manually manage temporary interruptions in borrows. Consider this program:

//! This module manages everything humanoid :)

use generational_arena::*;

cap Humans = Arena<Human>;
cap Zombies = Arena<Zombie>;

fn my_func(
    cap mut humans: Humans,
    cap mut zombies: Zombies,
    ...cx,
) {
    other_func_1("my", "params", "here", humans, zombies, ...cx);
    humans[my_human].do_something();
    other_func_2("my", "params", "here", humans, zombies, ...cx);
    other_func_3("my", "params", "here", humans, zombies, ...cx);
    zombies[my_zombie].do_something_else();
}

fn other_func_xx(
    arg_1: &str,
    arg_2: &str,
    arg_3: &str,
    cap mut humans: Humans,
    cap mut zombies: Zombies,
    ...cx,
) {
    ...
}

The user would find themselves having to manually re-forward the contextual elements they extracted from the context. Although this work is bounded to a single function rather than an entire function chain, it's still quite tedious, and I don't think it adds much of interest for either a programmer or a reviewer. The only time a user really cares about whether a callee uses a given contextual element is when a) they're breaking an invariant on the object and want to make sure that no one accidentally observes the broken invariant or b) the program fails to compile because of a concurrent borrow error. The former scenario is quite exceptional and likely better signaled with an explicit borrow prohibition, as shown in my previous example. The latter is a development-time-only scenario, and the compiler can already point out exactly where the concurrent borrow is happening.

I would be more comfortable with this if all functions requiring context had to make this explicit, for example:

fn foo() using Ctx {
    bar();
    baz();
}

And if baz requires the context, this would desugar to

fn foo(Ctx: Ctx) {
    bar();
    baz(Ctx);
}

This is important because, if foo is part of a library, users of the library have to know what context is required, as this doesn't work:

fn main() {
    foo(); // no context provided!
}

If I'm understanding your proposal correctly as pure syntactic sugar for automatically forwarding contextual elements to the functions directly requesting it, this suffers from the cascade of signature changes one would need to perform to introduce a new contextual element to a function deep in the call chain.

For example, if we wanted to refactor this code...

fn foo() using Foo {
    bar();
}

fn bar() using Foo {
    baz();
}

fn baz() using Foo {
    // (do something with foo)
}

...to make it such that baz also consumes Bar, we'd need to update all of the ancestor signatures as well:

fn foo() using Foo, Bar {
    bar();
}

fn bar() using Foo, Bar {
    baz();
}

fn baz() using Foo, Bar {
    // (do something with foo, bar)
}

...which is the exact type of unnecessary churn this feature tries to avoid!

In my proposal, I'd imagine that library consumers would learn what context is required from a section of the documentation auto-generated by rustdoc.

Yes, exactly. But I don't think having to make it explicit in every signature is a downside. Yes, it makes adding context more cumbersome. But having worked on a large React app with a lot of dependency injection via useContext, I found that the implicit control flow made the code harder to understand, so I'd rather have a more explicit approach.

P.S. I imagine that rust-analyzer could add a code action to automatically add the context dependency everywhere it is required. Would this make it ergonomic enough for you?
