Almost all of this is unrelated to what I thought I was proposing. Let me try to say again what I was proposing:
It should be possible to tell which context elements a function uses directly (not by passing the context down to functions it calls) by looking only at that function's signature; you should not have to analyze its body.
One hypothetical way to write this would be by moving the cap! macro invocations in your proof-of-concept from the function body to its argument list.
A function that takes cap! arguments should be able to directly use the corresponding elements of the context and no others. If (and only if) this function also takes a ...params argument then it can also pass the whole context along to functions it calls.
A function with only a ...params argument and no cap! arguments should not be able to use any element of the context directly. However, it should still be able to add things to the context and pass the whole context down to other functions that need it.
A function that takes neither a ...params argument nor any cap! arguments should have no access at all to any context present in its caller.
That's all I'm proposing; in particular I have not thought at all about how this should interact with the borrowing and lifetime rules.
And entirely circumvent the feature! (that's kinda why I assumed it would be non-local in my response)
Ah, whoops. I remember drafting something warning you about that potential confusion but I must have deleted it while editing my response. The ...rest syntax was just supposed to be shorthand for "the user is probably defining a function with additional unrelated parameters" and I was originally too lazy to write them out. It's not part of the proposal whatsoever. Sorry for the confusion!
I am not 100% sure I understand what you don't like about the code that "entirely circumvent[s] the feature", but if the issue is what I think it is, the fault lies with the author of use_cx_1. It's fine in general for context-using functions to take callbacks and pass context elements to those callbacks. The problem is that use_cx_1 has no other purpose than to make it possible for foo to do arbitrary things to Cx1 without declaring that it does this. It wouldn't come up in real code.
The mechanics I'm proposing are meant to ease human comprehension and static analysis; not to be an enforceable security or visibility barrier.
I guess it is part of my proposal then. I consider it just as important for use of the context as a whole to be visible in a function's signature and callsites.
I'm actually starting to warm up to this idea, although I'm still not a big fan of how loose its guarantees are. I do agree that it can be hard to tell which elements you're borrowing in a given function without looking through the entire thing. This is a big problem for crate authors, since they risk borrowing context accidentally and unexpectedly introducing backwards-compatibility breakage. Perhaps it would be a good idea to let a function be marked as unable to borrow anything from its parent other than the explicitly listed contextual elements? AuToken can technically do that already with the unsizing restrictions, but those are potentially being removed, so we'd need to find a way to enshrine this as an official part of the proposal.
Perhaps a #[borrows(A, B, C)] directive would work? I was thinking of something like this:
#[borrows(MyCx)]
pub fn my_lib_api() {
    inner_func();
}

fn inner_func() {
    innerer_func();
}

fn innerer_func() {
    // This works because one of its callers, `my_lib_api`, explicitly allow-lists the component.
    cap!(mut MyCx).do_something();

    // Using a component that no caller allow-lists, however, would not work; we'd see an error
    // telling us that we might need to extend `my_lib_api`'s signature to reflect the new
    // required context element:
    // cap!(mut SomeOtherCx).do_something();
}
To encourage users to actually use this feature, I was thinking of defining a warn-by-default lint for publicly visible functions which omit this attribute. I don't want to make it a hard error since users might want to prototype their crate first before settling on a final set of contextual requirements.
Yes, and my belief is that that goal is an anti-goal. After all, you can program without function parameters, using static items for everything instead, and indeed, I've worked on systems where that was the norm (not in Rust). But it leads to hard-to-maintain code, because you break local reasoning; there's information being smuggled around out of sight of function parameters.
I would prefer to see the "no need for foo to explicitly know what context items bar and baz require" filled by a means to say "foo's context parameter is bar's needed context + baz's needed context". Opening up some syntax for bikeshedding, you'd put the context after the where clause in a using clause, and be able to write something like:
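(The snippet was elided here; going by the description in the following paragraphs, it presumably looked something along these lines. The element types `SomeStruct`/`OtherType` and the bodies are stand-ins; only the shape of the `using` clauses is taken from the text.)

```
fn foo(params: ParamsType) -> ReturnType
using bar()
{ … }

fn bar() using mut Cx1: Vec<Foo>, baz() { … }

fn baz<'a>() using mut Cx3: SomeStruct<'a>, mut Cx2: OtherType { … }
```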
In this bikesheddable version, foo declares in its signature that bar's context is in play, but that it's opaque to foo (so I know that foo itself doesn't touch any part of the context, but does need a context set up for bar).
In turn, bar declares that it uses Cx1 from the context, and that Cx1 is a Vec<Foo>; it also needs all of baz's context available for passing through. And baz declares that it uses Cx3 and Cx2 from context, but nothing more.
I can also see from baz's signature that the lifetime 'a is related to Cx3 somehow. And, in this model, if baz needs read-only access to Cx1 now, its signature would change:
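(The changed signature was elided; reconstructed from the surrounding description, it presumably gained a read-only Cx1 entry, with `SomeStruct`/`OtherType` again being stand-in names:)

```
fn baz<'a>() using Cx1: Vec<Foo>, mut Cx3: SomeStruct<'a>, mut Cx2: OtherType { … }
```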
This tells me that it can read Cx1 - and because of the baz() syntax in bar's declared context needs, it automatically picks up this requirement, even if baz didn't already need Cx1.
Also note in here that foo declares that it does not touch the context - it just passes through enough to be able to call bar. bar declares that it can modify Cx1, and passes through what baz needs. The modified baz says it reads Cx1, and modifies Cx3 and Cx2.
I think it's a dangerous but extremely useful feature if used correctly:
For logging/tracing you often want/need knowledge about how you're called (in the tracing crate, a span). It doesn't have an impact on control flow or application behavior (except for logging) and currently uses a thread-local static to store it, which also means care has to be taken when using async. One example is connecting logs to HTTP requests. I think this could be a good alternative for such use cases.
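To illustrate the status quo being described, here is a minimal sketch of how span context is typically smuggled through a call tree via a thread-local today. All identifiers (`CURRENT_SPAN`, `with_span`, `log`, `handle_request`) are invented for illustration; this is not the tracing crate's actual machinery, just the underlying pattern:

```rust
use std::cell::RefCell;

// A thread-local stack of span names that every callee implicitly reads.
thread_local! {
    static CURRENT_SPAN: RefCell<Vec<String>> = RefCell::new(Vec::new());
}

// Push a span, run the closure, pop the span.
fn with_span<R>(name: &str, f: impl FnOnce() -> R) -> R {
    CURRENT_SPAN.with(|s| s.borrow_mut().push(name.to_string()));
    let result = f();
    CURRENT_SPAN.with(|s| { s.borrow_mut().pop(); });
    result
}

fn log(msg: &str) -> String {
    // The span context arrives invisibly: nothing in `log`'s signature says so.
    CURRENT_SPAN.with(|s| format!("[{}] {}", s.borrow().join("/"), msg))
}

fn handle_request() -> String {
    log("handling") // sees whatever spans the caller happened to set up
}

fn main() {
    let line = with_span("http", || with_span("req-42", handle_request));
    println!("{}", line); // prints "[http/req-42] handling"
}
```

Note that if the closure migrates to another thread (as async executors may do), the thread-local stack is left behind, which is exactly the hazard mentioned above.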
Things like dependency injection are a bit more difficult, as they (by design) have an impact on runtime behavior. The pattern I've seen most often in Rust (and used myself) is to make all functions structs generic over a trait holding those injected dependencies types and store them in the struct (or be generic over a single dependency). This could be used for that, yes, but I think that falls into the dangerous section, similar to your example of using static items for everything.
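The trait-generic injection pattern mentioned here can be sketched as follows. All names (`Clock`, `FixedClock`, `Service`) are invented for illustration; the point is that the dependency appears in the type signature rather than arriving through hidden context:

```rust
// The injected dependency is described by a trait...
trait Clock {
    fn now(&self) -> u64;
}

// ...with a concrete (here: deterministic) implementation.
struct FixedClock(u64);
impl Clock for FixedClock {
    fn now(&self) -> u64 { self.0 }
}

// The service is generic over its dependencies and stores them,
// so its requirements are visible wherever the type is named.
struct Service<C: Clock> {
    clock: C,
}

impl<C: Clock> Service<C> {
    fn timestamped(&self, msg: &str) -> String {
        format!("{}: {}", self.clock.now(), msg)
    }
}

fn main() {
    let svc = Service { clock: FixedClock(1700000000) };
    println!("{}", svc.timestamped("started")); // prints "1700000000: started"
}
```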
A related case (from Go): context.Context, which can be used for deadlines, timeouts or storing additional key-value pairs. There you have to add a ctx context.Context argument to every single function, which ends up not giving any readability benefit.
Here is another example where having this would be really useful, without breaking local reasoning:
If you want to modify the output of Debug or Display (for example, to add colors if enabled), you currently have to either not use Debug/Display altogether or wrap your type in a different type that stores the context, because those traits have no way of passing context to the implementations, and adding that would (as far as I can tell) be a breaking change:
struct Config {
    supports_color: bool,
    my_color: Color, // e.g. Color::Red
}

struct SomeType(());

impl Debug for SomeType {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        // Goal: print in `Config::my_color` if supported.
        // Problem: the only way to pass in `Config` is via
        // a (thread-local) `static`.
        todo!()
    }
}

struct ConfigAwareSomeType {
    config: Config, // could be a reference
    value: (),
}

impl Debug for ConfigAwareSomeType {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        // Now the config can be read from `self`.
        todo!()
    }
}

// But every time you want to use this you have to wrap the type:
fn main() {
    let config = todo!();
    println!("{:?}", ConfigAwareSomeType { config, value: todo!() })
}
As said above: currently the only alternative is to use a thread-local or static variable for Config, which I guess isn't terrible, but can break once you introduce async code that switches threads, unless you use a non-thread-local static and have everything use the same value of Config. And I'm currently not sure how much Rust can optimize/inline thread-local statics.
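For concreteness, the thread-local workaround being described might look like this. This is a hedged sketch; the `CONFIG` name and a simplified `Config` (just the `supports_color` flag) are illustrative, not from the original post:

```rust
use std::cell::Cell;
use std::fmt::{self, Debug};

#[derive(Clone, Copy)]
struct Config {
    supports_color: bool,
}

// The config lives in a thread-local, invisible to every signature.
thread_local! {
    static CONFIG: Cell<Config> = Cell::new(Config { supports_color: false });
}

struct SomeType(u32);

impl Debug for SomeType {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // The Debug impl reads the config behind the formatter's back.
        let cfg = CONFIG.with(|c| c.get());
        if cfg.supports_color {
            write!(f, "\x1b[31m{}\x1b[0m", self.0) // ANSI red, if enabled
        } else {
            write!(f, "{}", self.0)
        }
    }
}

fn main() {
    CONFIG.with(|c| c.set(Config { supports_color: false }));
    println!("{:?}", SomeType(7)); // prints "7"
}
```

If an async task sets `CONFIG` and is then resumed on a different worker thread, the Debug impl silently sees that other thread's value, which is the breakage mentioned above.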
I'd argue that (in this example) having a "hidden" argument is better for local reasoning than using a (thread-local) static, as this has to be set by someone in the call hierarchy.
And my experience of BASIC, JavaScript and assembly codebases is that it gets used incorrectly, because it's quicker to add one more global than to thread a parameter down.
I don't see the argument; can you spell out how a hidden argument is better than an explicit argument, provided by wrapping the type? I see both the hidden argument and the static as awful style that should be banned because they both break local reasoning, by requiring me to go up and down the call hierarchy to work out where all the parameters actually come from.
I meant it's better than using a static, not better than having it in a wrapper type:
With a static you have to go up the call stack and look at all siblings, as they could have set the thread local as well (it has to be mutable somewhere to be useful, at least for initialization), so in the example below you'd have to look at A-F to know the value of a thread-local static in G, and you have to look at every function's body.
With a non-mutable context (created in A) you'd only have to look up through the parents, because the "siblings" cannot influence the context, so in this example you only have to look at A, E and F to know the value in G, and there you probably only have to look at the function signature to see whether it uses/provides the context.
In case of the wrapping type I have to either use generics (making it more difficult to reason about) or modify the signature of E and F, giving little to no benefit in terms of readability/reasoning. To know what the Config in the wrapped type is I still have to look through A, E and F to know the value in G. You still have to go up the call hierarchy to know where Config comes from.
// Call hierarchy (not shown as functions for readability: A calls B, which calls C; then A calls D; ...)
A
  B
    C
  D
  E
    F
      G
        H
          I
If you're looking at E you do have to go up or down the call hierarchy to know if there is any context, true.
And one big downside of using such a wrapper type is that you have to modify the Display implementation of every single type that uses this and wants the colored implementation, or you have to duplicate all types to have one variant with and one without the Config. Both add a lot of boilerplate and (imo) end up reducing the readability.
while bounding refactor complexity for introducing new context requirements to deeply nested functions
…which is the short version of the design objectives I laid out in this comment.
Your solution just asks users to enumerate their callees at the top of the function instead of inferring them implicitly from the body. Indeed, if a macro could determine the set of functions called in a function body and add all of them to the function signature automatically, your solution would be almost indistinguishable from mine.
This makes me wonder: why are you so much more comfortable with this solution than you are with mine?
Because in this solution, when I'm presented with foo in isolation (a common experience for me as the "Rust expert" at my employer), I can see immediately that there's code I'm not being shown that might matter here - I can see that bar's context requirements may affect what's going on.
Further, if foo calls baz directly, I'd expect a compile error; foo is declaring that it requires an opaque pass-through context suitable for calling bar, but does not declare that it needs a context suitable for calling baz. Similar would apply if calling quux that only needs Cx1: Vec<Foo> in its context; if I neither declare that I need the individual elements of the context, nor that I pass through a suitable context for quux, then the fact that bar requires Cx1: Vec<Foo> should be irrelevant at this point.
It's the same class of argument as for "no type inference in function signatures"; there's no technical reason why I couldn't write:
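(The snippet was elided here; presumably something like the following, shown with the explicit types that hypothetical signature inference would have to reconstruct. The names `arg1`/`arg2` come from the text; the function name and body are invented:)

```rust
// Rust requires these types to be spelled out in the signature;
// the hypothetical feature would infer them from the body instead.
fn concat(arg1: String, arg2: &str) -> String {
    let mut s = arg1;  // `arg1` is used as a String...
    s.push_str(arg2);  // ...`arg2` as a &str...
    s                  // ...and the return type follows as String.
}

fn main() {
    println!("{}", concat(String::from("foo"), "bar")); // prints "foobar"
}
```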
and have Rust infer that arg1 is a String, while arg2 is &str based on the body (and thus the return type is String).
Basically, I want it to be abundantly clear when I read code pasted into Slack in isolation that I'm missing key details, so that I can ask the right questions.
Right, and that's the reason I dislike both approaches equally - if all I have to hand is E, because that's what a colleague has pasted into Slack, I'd like to know quickly that E and its callees depend on hidden information from higher in the hierarchy so that I can tell them that I need to see A as well in order to help them out.
I've actually implemented this solution before in userland using the world's most involved name resolution hack, and used it in the game engine project I linked in the opening post of this thread!
It works by having the user define the set of types that they are interested in giving to a function as a parameter of that function like so:
You could then "upcast" these objects in a borrow-aware fashion by calling cx!(cx_to_upcast). It's, of course, not as nice as your solution, but it comes fairly close, and proc-macros could certainly make it closer.
The big problem I found in using it was that every level of abstraction I added to my project would forever find itself being lugged through the call tree, making these context lists grow unboundedly large as my application grew. For example, performing a ray cast in my game engine requires access to:
- block data in the world
- a map from Block ID to per-block collider
- an AABB tracker for entity colliders
I could always write an abstraction to perform a raycast while taking into account all these different sources of data, but I could never abstract away the set of context elements that it requires. If I wanted to implement a generic weapon raycast routine which modifies ray casts with respect to some item state, I'd need to carry all of the aforementioned raycast context items with me in addition to whatever context is required for items.
In essence, this system encouraged users to either write fewer abstractions or merge their smaller abstractions into bigger monolithic abstractions. These types of decisions are made by software engineers all the time, but it feels super goofy to make these big choices with massive impact on program architecture in the name of ergonomics, especially when considering that most other languages can support dependency injection solutions which obviate the need for that kind of trade-off.
Having migrated my game engine from using that solution on the main branch to using AuToken on the branch linked in the opening post of this thread, I have already found that I'm much more comfortable writing code that actually makes the most sense in the "has a single responsibility" sense than I did with the previous solution. This massive change happened purely because I no longer had to balance the tradeoff between my architecture and the ergonomics of context passing. Do not underestimate the effect ergonomics have on the way users interact with your language!
Hmm, you're the second person (along with @zackw) who has advocated for a way to summarily see the effects of context passing without reading the body of a function, so I guess there's actual demand for something like this? I do see the potential benefit of clarifying how context is threaded around, since it warns people that, e.g., a pure function may not actually be pure.
I think the reason I'm not immediately on board with this feature is that, for modules where almost all functions take in context (cf. my game engine, where most functions take in arenas), the list becomes quite noisy, making it difficult to spot the functions that actually are of interest to the reviewer.
One way to perhaps alleviate this could be to split up context into different "realms". For example, I could have a "voxel" realm for all my world state context and a "rendering" realm for all my rendering state. Each contextual element would live in exactly one realm. If I find myself in a context where I'm frequently using both, I could define a realm alias called "voxel rendering" defined to encompass both "voxel" and "rendering".
A syntax like this could work:
realm Voxel;
cap MyCap1 in Voxel = MyType1;
cap MyCap2 in Voxel = MyType2;

realm Render;
cap MyCap3 in Render = MyType3;

realm VoxelRenderer = Voxel + Render;

fn foo() use VoxelRenderer {
    foo(use _); // Shorthand for `use <set of realms in current function>`
}

fn bar() use VoxelRenderer {
    baz(use Voxel);
    maz(use Render);
    faz();
}

fn baz() use Voxel {
    // Works:
    MyCap1.do_something();
    MyCap2.do_something();
    faz();

    // Doesn't work:
    // MyCap3.do_something();
    // maz(use Render);
}

fn maz() use Render {
    // (analogous to baz)
}

fn faz() {
    // Can't use any context.
}
Realms don't affect borrow checking; they only affect the set of components your function could theoretically borrow. In the context of checking whether a given use directive is compatible with a given function signature, equality of the two sets is duck-typed. Realm aliases can be nested, and duplicates in their expanded set of base realms are perfectly acceptable.
I want to be able to see the context from the function signature; I don't mind if it's relatively hard to work out what the context is (unless I have experience in that area of the code), just that it's trivial to see the context.
I've already suggested a very explicit using syntax for bikeshedding:
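(The example was elided here; per the description that follows, it presumably read something like the following, with the function name and `…` body being placeholders:)

```
fn function(params: ParamsType) -> ReturnType
using cx1: Vec<Foo>, mut cx2: Vec<Bar>, baz()
{ … }
```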
where I'm calling out the individual pieces of my context - in this case "read-only cx1: Vec<Foo>, mutable cx2: Vec<Bar>, opaque whatever baz needs that's not already declared". If, in this syntax, baz needs cx1 or cx2, then the system should unify that with the explicit cx1 and cx2, so I'm reading the same cx1 that baz will see, and I'm mutating the same cx2 that baz will see.
I'd extend this with a concept of "using aliases" (again, bikesheddable syntax):
using Render = render(), mut vulkan_state: VulkanState;
using Voxel = mut voxel_data: BTreeSet<VoxelData>;
using GameArena =
    Render,
    Voxel,
    render(),
    mut state_objects: Vec<Objects>;
Then, you could write a function like:
fn function(params: ParamsType) using GameArena {
    …
}
That way, there's the explicit flag for me that this uses the GameArena context, which I either know about, or look up as I need it. You can flag all your functions as GameArena, and you can choose to limit some functions - for example, rendering functions might only have using Render, while voxel-specific functions might have using Voxel; as these contexts are subsets of the larger GameArena context, a function that's declared using GameArena can call a function that's declared using Render or using Voxel, but not the other way around.
I don't really have a good intuition for when and how I'd use a using alias. Would I create one giant one for my module that lets me call other functions in that module? Should I be putting context items in the alias? Are there things I should be putting in the function signature instead?
then you don't need a using alias, but now you have a giant using clause instead.
And I'd expect the requirements for calling a function to be that either that function is specified in your using clause (after expanding aliases), or that all the items in the callee's context are specified in the caller's context.
So:
using GameCtx = Render, Voxel, mut game_objs: Vec<GameObjects>;

fn game_function(params: ParamsType) -> ReturnType using GameCtx {
    …
    render_fn();
    …
}

fn render_fn() using Render {
    …
}
is legitimate, because once you've expanded the aliases out, game_function's using clause is a superset of render_fn's.
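(The foo/bar/quux example discussed next was elided; reconstructed from the description, it presumably looked something like this, with Cx1: Vec<Foo> taken from the earlier posts and the `…` body a placeholder:)

```
fn foo() using bar() {
    bar();      // OK: `using bar()` is declared
    // quux();  // error: foo declares neither quux() nor quux's items
}

fn bar() using mut Cx1: Vec<Foo> {
    quux();     // OK: everything in quux's using clause appears in bar's
}

fn quux() using Cx1: Vec<Foo> { … }
```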
The call to quux() in bar is OK, because everything in quux's using clause is explicitly part of bar's using clause. The call to bar() in foo is OK, because foo specifies using bar(). But the call to quux() in foo is a compile error, because foo only specifies using bar(), and thus its using clause neither specifies "I will call quux()" nor "here are the items quux needs".
I think I also argued for this, though not in as clear terms I admit. A key aspect of Rust type inference is that it stops at the function signature. This allows local reasoning, not just for the compiler but also for humans.
For me anything that breaks that is a non-starter and would mean I would ban it in any code base where I have a say in the matter. Hopefully there would be a lint for that in clippy in such a hypothetical future.
The using directive specifies which components could be borrowed, not which components are borrowed, right? Otherwise, this wouldn't compile:
using MySet = mut Foo: Vec<u32>, mut Bar: u32;

fn foo() using MySet {
    for _v in &Foo {
        bar();
    }
}

fn bar() using MySet {
    *Bar += 1;
}
Does this mean that everything in a using is in scope?
using MySet = mut Foo: u32;

fn foo() using MySet { // This resolves to `mut Foo: u32`
    *Foo += 1; // So we can access `Foo` here.
    bar();
}

fn bar() using MySet { // This resolves to `mut Foo: u32`
    *Foo += 1; // Same here!
}
If so, what about:
fn foo() using bar() { // This resolves to `mut Foo: u32`
    // ...so can we access `Foo` here?
    bar();
}

fn bar() using mut Foo: u32 {
    *Foo += 1;
}
What about:
using MySet = bar();

fn foo() using MySet { // This resolves to `mut Foo: u32`
    // ...so can we access `Foo` here?
    bar();
}

fn bar() using mut Foo: u32 {
    *Foo += 1;
}
If all of these compile identically, it feels like this solution and my "realms" solution are identical - just with your solution being capable of introducing access sets derived from function signatures, and not requiring the call-site to be explicitly annotated.
Correct - the using directive specifies which components this function, or any of its callees could borrow. The using func_name() syntax allows you to declare that you won't be borrowing things yourself, but you'll be calling func_name() and therefore need its context in place.
This is what I expect, yes - mut Foo: u32 is in scope in both foo and bar, since their using statements declare that it's present. And it's the same thing in both - so calling foo() results in Foo being incremented twice from the caller's perspective.
In this case, the using clause for foo() does not resolve to mut Foo: u32 directly; the bar() is special syntax to say that we need whatever bar() needs in context.
As a result, foo() has no way to name mut Foo: u32, since it's not in foo()'s context and therefore not nameable inside foo(). But it is present in the context as a whole, and therefore accessible in bar(). You'd have to add mut Foo: u32 to foo()'s using clause to have access to it.
Again, MySet just specifies "my context will be good enough to call bar()", so foo() can't name anything that's in bar()'s context. With a slight modification, though, you could access Foo inside foo():
using MySet = bar();

fn foo() using MySet, mut Foo: u32 { // This resolves to `mut Foo: u32`
    *Foo += 1;
    bar();
}
Here, you're declaring that you want a mut Foo: u32 in your context, and you also want a context suitable for calling bar(). Context unification determines that bar and foo both have a Foo: u32 in there, which is therefore the same thing in both places.
And note that this also means that in:
using MySet = bar();

fn foo() using MySet, mut Foo: u32 { // This resolves to `mut Foo: u32`
    *Foo += 1;
    bar();
}

fn bar() using Foo: u32, mut Bar: u32 {
    *Bar += *Foo;
}
Foo is the same place in both bar and foo - it's just that in bar, it's read-only, while in foo, it's read-write.
Edit: And yes, this is very similar to what you've now described with realms; I'd be fine with your realms solution, too - the important thing is that when I look at a function signature, I can see what it requires from its caller, and what it returns to its caller, even if I have to then look up individual items to make sense of it. So, I'm OK with:
fn foo() use VoxelRenderer {
because I can see that there's a need for a VoxelRenderer thing, and I can look that up to work out what it is and what it means. I don't like it, because there's a layer hiding things from me, but I get the use case for not having a context parameter.
Interesting question: If the context is immutable, wouldn't a pure function still be pure, as long as you consider the context part of its input?
Perhaps a stupid question, but if you have to - on the entire call hierarchy - explicitly mention the context in the function signature, including name and type, or a list of all functions it calls, doesn't that mean you've reinvented normal function arguments (potentially in a struct so there is only one argument)? At that point I see basically no difference to normal function arguments except a different syntax.
Instead of using GameCtx you could then just as well write the following (potentially requiring nightly to allow mutably borrowing individual fields separately):
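(The snippet was elided here; it was presumably something along these lines: the `using GameCtx` clause replaced by an ordinary context struct passed explicitly. All type names are stand-ins reconstructed from the earlier `using GameCtx = Render, Voxel, mut game_objs: Vec<GameObjects>;` example, and the return type/body are invented to make the sketch self-contained:)

```rust
// Stand-in types; the real ones came from earlier posts in the thread.
struct Render;
struct Voxel;
struct GameObjects;
struct ParamsType;

// The `using GameCtx = Render, Voxel, mut game_objs: Vec<GameObjects>;`
// alias, expressed as a plain struct of borrows:
struct GameCtx<'a> {
    render: &'a Render,
    voxel: &'a Voxel,
    game_objs: &'a mut Vec<GameObjects>,
}

// The context is now just an ordinary, visible argument.
fn game_function(_params: ParamsType, cx: &mut GameCtx<'_>) -> usize {
    cx.game_objs.push(GameObjects);
    cx.game_objs.len()
}

fn main() {
    let (render, voxel, mut objs) = (Render, Voxel, Vec::new());
    let mut cx = GameCtx { render: &render, voxel: &voxel, game_objs: &mut objs };
    println!("{}", game_function(ParamsType, &mut cx)); // prints "1"
}
```

(On stable Rust the whole struct is borrowed at once; borrowing individual fields separately through a method is where the nightly caveat above comes in.)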
I honestly can't see a benefit/difference to normal function arguments if there is a requirement to list them on every function in the call hierarchy. In my opinion it makes sense to have it in the signature of functions that provide or use the context, but I don't think there is a reason for this feature if you have to list it everywhere in the call hierarchy (due to it then effectively being a different syntax for something we already have).