I find the "doesn't have to provide all parts of the realm" feature very hard to keep track of mentally, and its interactions with #[borrows_only] make it harder to follow, not easier; given the ability to define a realm as a union of other realms, I would prefer the first version (at least) to say that you must provide all parts of the realm a function requires, but that a given function does not need to borrow all of the realm.
Then, a library that wants both "no need to provide full context" and "easy use if you've got full context" can define lots of small realms, together with composite realms for "I want this component of the library to just work" and "I want the whole library to just work".
I see some confusing things here. Is there only allowed to be one instance of a given type in a using block? What if we have something like:
fn compare_offscreen() using r1: Render, r2: Render {
    // Render to each context and compare results.
}
Similar for name conflicts. What if there is something like:
using Tree = bark: TreeBark;
using Dog = bark: DogBark;
using Backyard = using Tree, Dog;
How do I differentiate the two bark names in anything using Backyard? Where does such a rename happen?
Given the name conflicts above, I feel that there needs to be a way to associate names at call boundaries for things like this. Something like:
compare_offscreen() using offscreen as r1, onscreen as r2;
I also question what the type of compare_offscreen is. Can I put it in a Box<Fn()> or do I need to have Box<Fn() using Render, Render>? That feels like a new function color (axis!) to me. Are the names of the Render contexts relevant to the type, or does a caller of such a Box<Fn() using> need to manually wire up local context names to the callee names if the Box "erases" them? How, then, do I differentiate the two Render instances in compare_offscreen if names are erased? It also feels as if the names and/or positional order of contexts are now part of the API and are subject to semver hazards as well. My previous comments (in older threads) on the semver hazards of parameter name support then apply here as well.
With that proposal, I think it would be reasonable to scrap the entire nominal realm typing thing, instead defining realms as sets of contextual parameters with their compatibility relation being a simple subset check.
Then, bind could place its scope in a realm that's the union of its containing realm and the contextual parameters that get introduced through the bind:
realm MyRealm = CX_1, CX_2, CX_3;
realm MyBiggerRealm = MyRealm, CX_4;

fn foo() use MyRealm {
    bind CX_4 = &mut vec![] {
        // The realm is `CX_1, CX_2, CX_3, CX_4`, which is compatible with
        // `bar`'s `MyBiggerRealm`.
        bar();
    }
}

fn bar() use MyBiggerRealm {
    // ...
}
These rules are much more intuitive but they do come with some drawbacks:
- The actual borrow set of a function is already part of the interface since it has an impact on granular borrow checking, so this will not get rid of #[borrows_only].
- It has some impacts on semantic versioning (first three responses in this comment).
- Splitting up realms like that to minimize the required bound set for users seems like churn, and I don't know how many library authors will do that considering it only affects cases where users want to create a context for a one-off function call.
The last one is pretty important because, to get as much flexibility as possible from their functions, users will have to manually minimize their realm sets to get as close as possible to their actual usage set, which seems awfully close to breaking the bounded refactors principle. Then again, they already have to do this for #[borrows_only] so maybe it's okay? The big difference in my mind, though, is that borrows-only can be done right before the crate is published and can be largely diagnostic driven, whereas there is no diagnostic for when a realm is too big (if there were one, we'd break the bounded refactors principle), and users may potentially have to minimize realm sets for internal functions too.
I'm not too sure how to reconcile the tradeoff between potential churn and learnability.
It does, however, make the role of #[borrows_only] much easier for me to follow; now, instead of both amending what the realm must contain and affecting borrow checker decisions, it just changes the granular borrow check from "whatever's needed based on the function body" to "what's declared in #[borrows_only]". And while I believe that #[borrows_only] ought to be compulsory, I can see the argument for making this a warn-by-default lint to let you omit it during prototyping.
Against that, if I've understood things properly, shrinking the realm your function requires is always SemVer-compatible (i.e. if I used to require realm Foo = Bar, Baz, Quux, and I change to requiring just Bar, nothing breaks that used to work, because my new requirements are a subset of my old ones). Thus, while there is churn to reduce the requirements you impose on your users, it's churn that can go into minor version updates (and is thus considered a safe update by cargo update).
And note that under the version you were proposing before, adding uses of something in your existing realm is a compatibility break, hence a new major version needed under SemVer rules.
A function which actually borrows MY_REALM_1_CX will require a realm granting access to that contextual parameter, regardless of whether that borrow is actually happening in code or defined by the #[borrows_only] attribute so this behavior seems consistent?
Ah, I might be misusing the word churn. I was kinda just using it as shorthand for "annoying refactor stuff people don't really want to do", not as a comment about SemVer stuff.
What I really meant to say is that, if a bind expression is forced to bind everything in a realm, including things which aren't borrowed, there is an incentive for users to make their realms as tight to the actual borrow set as possible lest they make some function calls impossible without providing more context. This seems bad because, in every other scenario, realms just define a conservative approximation of what context a function could possibly access to help users read the effects of a function call at a glance. In the context of binding context, however, it serves as an exact description of the required context. Because of this, I think users are either going to go through the effort of making all realms as small as possible (which would probably require defining one realm per function) or are just going to keep on using realms like they were before the change, making a bunch of one-off function calls impossible.
(I sort of clarified this point in an edit to the comment you responded to, but I guess you were already replying to it by the time I made the edit; sorry about that!)
It's consistent, but it's annoying that unless the developer filled in #[borrows_only], I have to look at the entire function body and all its callees to determine which bits of the realm are borrowed, and which bits don't matter. In the variant where you can partially bind a realm, it's a problem because a refactor of the body of a function I call can break my code.
You're not actually talking about reducing the amount of annoying refactor stuff people have to do, though; you're talking about who pays the cost of annoying refactor stuff.
In the case where only the elements I borrow from a realm matter, and I can partially bind a realm (leaving some elements unbound), any change to the elements actually borrowed by a function forces callers of that function to engage in annoying refactors to ensure that they bind enough of the realm. This also breaks the property I want where I can look at a function signature and know what needs to be bound in the realm to make it work properly.
This, in turn, means that changing your implementation details is enough to force all of your callers to refactor - making it a big deal for a library to change anything (technically, under the "partially bound realms" proposal, all changes to a function's used parts of a realm are breaking changes under SemVer and need a new major number).
In the case where a caller has to bind everything in a realm, the refactoring cost is moved onto the library; it has to define a "small enough" realm to make the functions callable, but can do so as a minor version change, and the code churn that results should be very small.
I prefer the second alternative since the first means that I either lock down my dependencies to known-good versions, or I act as-if I'm writing code in the second alternative, or I accept that my code could be broken by bringing in an essential bug fix for a library I depend upon (directly or indirectly). None of those are good options, compared to the second alternative, where I have to get the library to accept a change to reduce the realm the function I want to call requires, but I am at least able to take bug fixes from the library upstream as needed.
Unless I'm missing something crucial, this one can also be achieved by appending a parameter to an existing context struct, regardless of how it's being passed. Inheritance of those can be emulated by a base-struct field, which could likewise be added there by a macro:
#[inherit(Subsystem1, Subsystem2)]
struct App {
    ...
    // subsystem1: Subsystem1
    // subsystem2: Subsystem2 (these fields are added by the macro)
}
Yes, a little unergonomic, agreed.
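For what it's worth, here is roughly what that expansion could look like in today's Rust, written out by hand (all names here are made up for illustration, not from any real macro); disjoint field borrows already give you a crude version of per-subsystem granularity:

```rust
// A hand-written version of what the hypothetical `#[inherit(...)]` macro
// above might expand to: each subsystem's context is a plain struct, and
// the composed `App` context embeds them as fields.
struct Subsystem1 {
    counter: u32,
}

struct Subsystem2 {
    label: String,
}

struct App {
    subsystem1: Subsystem1,
    subsystem2: Subsystem2,
}

// Functions that only need one subsystem borrow just that field, which
// keeps the borrows disjoint without any language extension.
fn bump(cx: &mut Subsystem1) {
    cx.counter += 1;
}

fn describe(cx: &Subsystem2) -> String {
    format!("label = {}", cx.label)
}

fn main() {
    let mut app = App {
        subsystem1: Subsystem1 { counter: 0 },
        subsystem2: Subsystem2 { label: "demo".into() },
    };
    // Disjoint field borrows: both can be active at once.
    bump(&mut app.subsystem1);
    let text = describe(&app.subsystem2);
    assert_eq!(app.subsystem1.counter, 1);
    assert_eq!(text, "label = demo");
}
```

The unergonomic part is exactly the threading: every caller has to know which field of `App` to pass down.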
That's a big benefit; it can also be done with the previously proposed view types in the context, plus inference of which parts we really borrow (as opposed to what we had in the `with` proposal, where the whole context was always borrowed).
Those help the mechanism, but in the other direction: they ease and enhance functions that already explicitly pass context around, through a use clause.
But they don't remove the need to actually write, at each invocation and declaration site, what you pass (realm, explicit parameter, it doesn't matter), at which point you might as well introduce an explicit last argument and use the facet pattern, with, again, only a little inconvenience.
Going with the logger example again (the same goes for an allocator, etc.):
realm Log;
ctx LOGGER in Log: &'static dyn Logger;

fn foo() use Log {
    LOGGER.error("foo is called =(");
}

fn bar() use Log {
    // in reality we're more likely to extract the error and pass it to foo
    if do_stuff().is_err() {
        foo(use Log) // or foo(use _)
    }
}

fn main() {
    bind (LOGGER = make_logger()) {
        bar()
    }
}
Look at bar - does it not pass some additional arguments with just another syntax?
As @yigal100 pointed out, we've really just rehashed the old argument about statics vs. tons of arguments, wrapped in bundles and fancy syntax.
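To make that comparison concrete, here is the same logger example desugared into today's Rust with an explicit parameter. This is only a sketch; `Logger`, `do_stuff`, and `CollectingLogger` are made-up stand-ins for the names in the hypothetical snippet above, not a real API:

```rust
use std::cell::RefCell;

// A `Logger` trait standing in for the `ctx LOGGER` above.
trait Logger {
    fn error(&self, msg: &str);
}

// A logger that records messages so the example is easy to inspect.
#[derive(Default)]
struct CollectingLogger {
    messages: RefCell<Vec<String>>,
}

impl CollectingLogger {
    fn messages(&self) -> Vec<String> {
        self.messages.borrow().clone()
    }
}

impl Logger for CollectingLogger {
    fn error(&self, msg: &str) {
        self.messages.borrow_mut().push(msg.to_string());
    }
}

// Stand-in for the fallible operation in the snippet above.
fn do_stuff() -> Result<(), ()> {
    Err(())
}

fn foo(logger: &dyn Logger) {
    logger.error("foo is called =(");
}

fn bar(logger: &dyn Logger) {
    if do_stuff().is_err() {
        foo(logger); // the explicit forwarding the proposal would hide
    }
}

fn main() {
    let logger = CollectingLogger::default();
    bar(&logger);
    assert_eq!(logger.messages(), vec!["foo is called =(".to_string()]);
}
```

Every call site has to mention `logger`, which is exactly the forwarding noise the proposed `use` clauses would hide - the question in this thread is whether hiding it is worth the cost.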
It feels like a pretty comparable situation to borrow-checking, though.
If you have:
// (using a type-alias style of definition since that seems to be
// more desired by users on this thread)
realm MyRealm = MY_CX_1, MY_CX_2, MY_CX_3;
ctx MY_CX_1 in MyRealm = Vec<u32>;

fn foo() use MyRealm {
    for _v in MY_CX_1.iter() {
        bar();
    }
}

fn bar() use MyRealm {
    baz();
}

fn baz() use MyRealm {
    MY_CX_1.clear();
}
Whether foo compiles is determined by the actual borrow set, not the realm, so you'd have to look at the entire function body and all its callees to determine which bits of the realm are borrowed. This isn't a problem, though, since this is an error scenario and, in error scenarios, the compiler can just point out what you're missing.
In that sense, I think partial contexts should be the same. They're an error scenario so adding the necessary context can be a task driven by the compiler diagnostic.
but it's annoying that unless the developer filled in #[borrows_only].
Rustdoc will always show the actual borrow set of a function since that is very much an important part of its public API, so, for API usage, in addition to getting help from the compiler, you could also just look at the rustdoc.
Also, the warnings that the lint generates will tell you the full attribute to apply to your function, so you could just copy-paste that or have Rust automatically apply the fix for you, since the fixes should be machine-applicable. Of course, you should probably double-check that set to see if there's anything else you want to add to make the function more future-proof. In that sense, it's better than existing context passing solutions since you can remove contextual parameters from the function without having to update the signature and produce a breaking change.
Yes, library authors have to commit to a borrow set as they would with any other context injection system. It's just that, here, unless you specify #[borrows_only], this set will be implicit. That's why there's a warn-by-default lint against omitting it. Unless you're actively prototyping a crate or implementing a crate that's purely an internal helper crate for other crates, you should include #[borrows_only] on your public functions.
Perhaps it would be a good idea to let borrows_only borrow an entire realm so crate authors can just pessimistically say that they'll borrow everything and loosen it later as they commit to more decisions about their crate.
It doesn't break the bounded refactors property because you only have to make this change at bind sites, not at every intermediate site between the provider of the context and the consumer of the context. We can't just make up a context for the user, so this seems like reasonable work to ask of the user.
Since context can be passed from quite far away or acquired using a Bundle, you can easily design programs which minimize the number of places where you have to make these types of changes. For example, you could combine the runtime dependency injection system of an ECS with the static dependency injection of this feature to make updates really not that difficult:
use bevy_context::Resources;

// (a macro could generate these two lines)
realm Sys1 = MY_CX_1, MY_CX_2, MY_CX_3;
type Sys1Res = (Mut<MY_CX_1>, Mut<MY_CX_2>, Mut<MY_CX_3>);

ctx MY_CX_1 = MyResource1;
ctx MY_CX_2 = MyResource2;
ctx MY_CX_3 = MyResource3;

fn sys_1(res: Resources<Sys1Res>) {
    bind res.bundle();
    ...
}

fn sys_2(res: Resources<Sys1Res>) {
    bind res.bundle();
    ...
}

fn sys_3(res: Resources<Sys1Res>) {
    bind res.bundle();
    ...
}
(...although, personally, I tend to just write out the entire resource set for each system since I can apply those changes automatically and the resulting borrow sets are smaller, which benefits automatic parallelism scheduling)
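For readers who want to see the shape of that pattern without the hypothetical `bevy_context` crate, here is a rough std-only analogue where the "bundle" is just a tuple of disjoint mutable borrows handed out by the container (all names are illustrative, not a real API):

```rust
// Stand-ins for the resources in the hypothetical snippet above.
struct MyResource1(Vec<u32>);
struct MyResource2(u32);

// `World` plays the role of the runtime DI container.
struct World {
    res1: MyResource1,
    res2: MyResource2,
}

impl World {
    // The "bundle": hand out disjoint borrows of exactly the resources
    // one system needs, so its borrow set stays visible in the signature.
    fn sys_1_res(&mut self) -> (&mut MyResource1, &mut MyResource2) {
        (&mut self.res1, &mut self.res2)
    }
}

// The system declares its resource tuple explicitly.
fn sys_1((r1, r2): (&mut MyResource1, &mut MyResource2)) {
    r1.0.push(r2.0);
    r2.0 += 1;
}

fn main() {
    let mut world = World {
        res1: MyResource1(vec![]),
        res2: MyResource2(7),
    };
    sys_1(world.sys_1_res());
    sys_1(world.sys_1_res());
    assert_eq!(world.res1.0, vec![7, 8]);
    assert_eq!(world.res2.0, 9);
}
```

The point being: the only place that has to change when a system's resource needs grow is the bundle-producing function, which mirrors the "updates really not that difficult" claim above.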
The problem is that defining super precise realms requires you to define realms based on the structure of your function rather than by some user-defined notion of what a reasonable subsystem looks like.
You'd probably have to either write your realm like this...
ctx MY_CX_1 = Vec<u32>;
ctx MY_CX_2 = Vec<u32>;
ctx MY_CX_3 = Vec<u32>;

fn foo() use MY_CX_1, MY_CX_2, MY_CX_3 {
    for _v in MY_CX_1.iter_mut() {
        bar();
    }
}

fn bar() use MY_CX_2, MY_CX_3 {
    for _v in MY_CX_2.iter_mut() {
        baz();
    }
}

fn baz() use MY_CX_3 {
    MY_CX_3.push(42);
}
...which would be precise and useful to consumers but break the bounded refactors principle, or write it like this:
ctx MY_CX_1 = Vec<u32>;
ctx MY_CX_2 = Vec<u32>;
ctx MY_CX_3 = Vec<u32>;

realm Foo = MY_CX_1, Bar;
fn foo() use Foo {
    for _v in MY_CX_1.iter_mut() {
        bar();
    }
}

realm Bar = MY_CX_2, Baz;
fn bar() use Bar {
    for _v in MY_CX_2.iter_mut() {
        baz();
    }
}

realm Baz = MY_CX_3;
fn baz() use Baz {
    MY_CX_3.push(42);
}
...which satisfies the bounded refactors property but makes it harder to figure out what Foo actually is without using an IDE or tracing the entire realm.
In practice, since your proposed realm semantics only affect partial bindings instead of borrow checking, I think most people are just going to make larger conservative realms to get both the bounded refactors property and the readability and sacrifice the ability to make short succinct one-off contexts for the single function call.
The point of a realm is to say "this function will access a subset of the contextual parameters in this realm and nothing else." A realm being big only has the downside of reducing the number of guarantees you can derive just by looking at the function's signature, since it claims that the function potentially accesses a lot more than it actually accesses. If you find yourself in that scenario, that's probably a sign that you made your subsystem too big and should refactor it into multiple subsystems with their own realms. Otherwise, realms being big is a good thing since it gives you room to grow and forces callers to be conservative about making assumptions about your internal functions that may change as the implementation changes.
I disagree - it means that refactors become unbounded, since every single bind site in the world may have to change for a one-liner change in a library that consumes the context.
And yes, it sounds like the inference of borrowing from function bodies creates a similar unbounded refactor problem - which to me implies that you need to make #[borrows_only] compulsory, and have crate authors start with "I borrow the entire realm with an exclusive borrow", and only relax that when they've settled down their code.
I think we need to differentiate between interesting refactors and mechanistic refactors.
An interesting refactor is one which requires user intervention to make an actual decision, since the compiler would have no way of making that decision for the user.
For example, if a user updates a function to depend on some new context and that causes a borrow checker error, the user doing the refactor has to do the interesting work of figuring out how to rewrite their code such that borrows can be proven non-aliasing. This isn't a problem unique to Rust, either: if a Java programmer decides that a function should mutate a collection and that function is called somewhere while that collection is being iterated over, it is up to the user to devise a mechanism for avoiding concurrent modification.
For example, if a user updates a function to depend on some new context and that causes a missing context error, the user doing the refactor has to figure out which value to provide for that contextual parameter because the compiler can't just make one up for them. And if there's some unified context injection scheme that already makes that obvious, they can just update the single function that fetches the context from that system and places it into a bundle.
A mechanistic refactor, meanwhile, has exactly one obvious way to handle it that the compiler could handle automatically. If you update a function baz which takes a parameter from bar, which takes a parameter from foo, the obvious refactor is to just pass it along! Of course, sometimes, in the course of performing a mechanistic refactor, users may discover that the refactor would cause them to do something unexpected in a call-site that broke. That's why realms exist! They tell you when you borrow something that some old consumer didn't expect you to borrow.
When I say that we should make refactors bounded, I'm talking about making mechanistic refactors bounded. Ideally, a compiler could also solve interesting refactors for us as well but that would require a really advanced compiler capable of thinking about code (and would also probably threaten the job security of the computer science profession as a whole so let's maybe avoid that).
The impulse of your solution is to make it so that users pay the cost for the interesting refactor ahead of time but, again, that seems ill-advised, since it encourages users to do a mechanistic refactor to avoid making their design overly constrained and I feel like many people aren't going to do that for their crates. See my fourth quote response in this comment for details on what I mean by the mechanistic refactor.
Also, people can add stuff to their realm and that would still require an unbounded interesting refactor so I don't even think that solves the problem of unbounded interesting refactors of bind expressions.
I'm usually fairly negative on implicits, but this concept is growing on me a bit.
One useful way of thinking about this is to treat realm as logically similar to struct, and ctx as logically similar to fields. There are a few specific things that differentiate the implicit context from just a context parameter, though, which are worth listing out:
1. Implicit context is passed implicitly based on lexical scope rather than as a mentioned parameter.
2. The used context realm is an unordered structural set of context objects (which are themselves nominally identified), rather than being nominal (like structs) or ordered (like tuples).
3. Multiple context realms union together, not sum.
4. Borrowing from the context realm is field-precise, which is not functionality available to current types.
5. Which objects from the context realm are borrowed (and how) is inferred from the implementation by default.
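To illustrate the field-precision point: today, borrow precision is per-field inside a function body but collapses at function boundaries, which is exactly the gap the context-realm proposal targets. A small sketch (all names invented for illustration):

```rust
// A plain struct standing in for a "realm" with two "ctx" fields.
struct Cx {
    items: Vec<u32>,
    log: Vec<String>,
}

impl Cx {
    // Whole-struct borrow: while this runs, no other borrow of any field
    // of `self` can be live, even though it only touches two fields.
    fn push_logged(&mut self, v: u32) {
        self.items.push(v);
        self.log.push(format!("pushed {v}"));
    }
}

fn main() {
    let mut cx = Cx { items: vec![], log: vec![] };
    cx.push_logged(1);

    // Field-precise borrows work only when spelled out at the use site:
    let items = &mut cx.items; // borrows just `items`...
    let log = &cx.log;         // ...so `log` stays readable concurrently
    items.push(2);
    assert_eq!(log.len(), 1);
    assert_eq!(cx.items, vec![1, 2]);
}
```

Points 4 and 5 in the list above would, in effect, push this per-field precision through function signatures.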
That being said, Rust so far has held to the property that function signatures are a boundary for inference. While implicit context is genuinely useful, I believe the most borrow inference that Rust is willing to accept is limited to crossing non-pub functions, such that the module boundary remains strongly specified. One of Rust's selling points is "fearless refactoring" of large projects, and while this is in large part a conversation with type/borrow errors, it's also how the types make it nicely transparent when a change can break callers[1].
Additionally, if the goal is to bound the "interesting" refactor, the compiler would be able to point out what context you need to add to the signature and apply a cargo fix to any signatures (e.g. as an IDE intent) to propagate the context. Just like you can write -> _ for a function return type and have the compiler tell you what the inferred return type is.
Hmm... wouldn't the lint requiring users to end the chain of inference and write out their borrow sets explicitly using a #[borrows_only] attribute preserve that property? If users use those attributes (something the warn-by-default lint for missing #[borrows_only] attributes should strongly encourage), the breaking change should remain transparent.
Small clarification: I think you meant to say that the goal was to bound the mechanistic refactor: a refactor that the compiler can perform automatically since there is only one obvious way to do it.
Yes, we could just have the compiler automatically apply context forwarding changes and that would achieve all three objectives, but it would also make function signatures massive and hard to read. In my opinion, writing out a succinct realm indicating a superset of potential accesses is easier to read than writing out the full component set since, although the latter is more precise, the former is much easier to audit semantically, e.g. reasoning that "this function only modifies world state so I don't need to worry about it clobbering my rendering state" is much quicker and more certain than "this function touches Zombies, Players, WeatherState, and so on; none of those look like a rendering subsystem but maybe I missed something."
I haven't read this entire thread, but wanted to respond to some points briefly.
It stalled because no one was working on it. I didn't have time to devote beyond sharing and discussing the core ideas.
From skimming your doc, it brings in concepts like "realms" for how to talk about sets of context objects. I'm unsure how necessary this is, since much of the same expressiveness can be achieved by adding more trait bounds to existing named context objects. I agree with the idea that we might want aliases for that, e.g.
context version_info: AsRef<VersionNumber>;
context source_info = version_info where Self: AsRef<SourceHash>;
or similar.
While I'm not categorically opposed to something else, I'm skeptical of a proposal that introduces too many new kinds of things to the language. We already have traits and those have supertrait relationships. If we need a new kind of noun, we should justify it independently in terms of other use cases it enables, or through experience that the existing set is not enough. Either way it should very likely be in a separate RFC. (That said, I don't mean to discourage wider ranging, non-normative "vision docs", either independently or as appendices and "future work" in an RFC.)
It looks like the doc uses type state ("bundles") as a way to prove that, within a lifetime attached to the bundle type, a wrapped value has all the context it needs to implement a particular trait. This is fine and isomorphic to scoped impls, and this isomorphism is what gave me confidence that scoped impls could in fact be implemented.
I would say that if we are already adding contexts as a language-level feature we should consider going all the way and adding scoped impls. That said, I'm open to the idea that it's not enough of an ergonomic win to justify the added complexity (both in the language and its implementation). Experimentation is needed here.
The way I have thought of solving this problem is to attach context requirements (with clauses in my proposal) at increasing levels of scope:
1. Function (or method)
2. Entire impl
3. Entire module
The last is essentially a kind of module-level generics which means it would be an entire RFC of its own.
The first implementation challenge is expressing the feature in the type system at all. A secondary challenge relates to making implementation (edit: and compilation) efficient in the presence of many data-ful context objects. If we are worried about too many parameters being passed along, we could try to design some kind of context lookup mechanism or use thread locals.
Mutability should be correctly modeled by considering contexts a parameter of each function that requires them, or in the case of generics, as an embedded lifetime in any parameter with a bound that is only met within a scope. This should be sound. It's hard to say how ergonomic it would be.
I've been thinking of ways to try and reuse existing Rust concepts to accomplish the goals of this feature and this is the best thing I could come up with so far.
Traits have a flat structure where sets of implemented methods are unioned based off the trait inheritance hierarchy. Structure composition, meanwhile, does not properly union if you include the same structure more than once. This makes traits a good analogue for realms.
Naturally, then, trait methods would be contextual parameters. But what would their signature be?
If we implement a regular granular borrows system, we'll still fail to achieve the bounded refactors property we're looking for.
// This could be auto-generated with a proc-macro.
trait MyRealm {
    place cx_1;
    place cx_2;
    place cx_3;

    fn cx_1(&{cx_1} self) -> &Vec<u32>;
    fn cx_1_mut(&{mut cx_1} self) -> &mut Vec<u32>;

    fn cx_2(&{cx_2} self) -> &Vec<u32>;
    fn cx_2_mut(&{mut cx_2} self) -> &mut Vec<u32>;

    fn cx_3(&{cx_3} self) -> &Vec<u32>;
    fn cx_3_mut(&{mut cx_3} self) -> &mut Vec<u32>;
}

fn foo(cx: &{mut cx_1, mut cx_2, mut cx_3} impl MyRealm) {
    for _v in cx.cx_1_mut() {
        bar(cx);
    }
}

fn bar(cx: &{mut cx_2, mut cx_3} impl MyRealm) {
    for _v in cx.cx_2_mut() {
        baz(cx);
    }
}

fn baz(cx: &{mut cx_3} impl MyRealm) {
    cx.cx_3_mut().push(42);
}
We'd need to implement an inference system on top of the granular borrows system! Luckily, I think the semantics of this extension are relatively simple so long as we only allow inference on non-trait-members. We'd also probably want to deny inference on public functions or implement a warn-by-default lint.
// (same `MyRealm` definition as above)

fn foo(cx: &{_} impl MyRealm) {
    for _v in cx.cx_1_mut() {
        bar(cx);
    }
}

fn bar(cx: &{_} impl MyRealm) {
    for _v in cx.cx_2_mut() {
        baz(cx);
    }
}

fn baz(cx: &{_} impl MyRealm) {
    cx.cx_3_mut().push(42);
}
Bundles are just realm traits without a subset-borrow, which is pretty nice. To emulate inferred bundle sets, we could expand the subset-borrow inference system to work with impl Trait-in-type-alias:
As far as I can tell, this is isomorphic to the current proposal except for lack of partially-filled-out context.
The big benefit with such a proposal is that it doesn't require any explicit support for the use-case of context passing and sub-borrows already seem to be a well-desired feature. It should also be easier to explain to new users since it involves fewer distinct concepts. It also makes the code-generation for the feature clearer: if you use impl MyRealm, you monomorphize the context set; if you use dyn MyRealm, you use dynamic dispatch to fetch the context.
The big downside is that handling the full complexity of representing places in the type-system is much harder than the complexity of implementing a dedicated context passing proposal. A big open question for any sub-place-borrowing system is how to encode aliasing between multiple generic places.
Consider the following example:
trait TraitA {
    place place1;
    fn thing_1(&{mut place1} self) -> &mut Type1;
}

trait TraitB {
    place place2;
    fn thing_2(&{mut place2} self) -> &mut Type2;
}

fn foo(v: &mut (impl TraitA + TraitB)) {
    let a = v.thing_1();
    let b = v.thing_2();
    let _ = a;
}
We have to assume that place1 and place2 alias since TraitA and TraitB could be defined in totally separate crates but, if we do that, we can no longer nest realms effectively. To fix this, we'd need to be able to generically assert that places don't alias using negative constraints but solving negative constraints in a completely correct way would be SAT-equivalent and therefore NP-complete. I genuinely have no clue how you'd get this alternative proposal to work.
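For comparison, here is how the same shape plays out in today's Rust without place information: the compiler must assume the two accessors overlap, so their results can't be live simultaneously and the calls have to be sequenced. This sketch (names invented) compiles; the overlapping version in the snippet above would not:

```rust
// Two context-accessor traits, as they would look without `place` support.
trait TraitA {
    fn thing_1(&mut self) -> &mut u32;
}

trait TraitB {
    fn thing_2(&mut self) -> &mut String;
}

struct Both {
    n: u32,
    s: String,
}

impl TraitA for Both {
    fn thing_1(&mut self) -> &mut u32 {
        &mut self.n
    }
}

impl TraitB for Both {
    fn thing_2(&mut self) -> &mut String {
        &mut self.s
    }
}

fn foo(v: &mut (impl TraitA + TraitB)) {
    // Sequential borrows compile; holding both results at once would be
    // rejected, even though this particular impl touches disjoint fields.
    *v.thing_1() += 1;
    v.thing_2().push('!');
}

fn main() {
    let mut both = Both { n: 0, s: String::new() };
    foo(&mut both);
    assert_eq!(both.n, 1);
    assert_eq!(both.s, "!");
}
```

This is the conservative assumption that `place` declarations would let generic code refute.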
- default implicit but explicit on module boundary propagation
- context data
- bundle of those
- some union mechanism for bundles
- granular borrows
In the `with` proposals we didn't have unions and granular borrows at all; bundles were supposed to be handled by structs alone; context data was typed and distinguished based on either type (my variant) or nominal declaration (@tmandry's).
I don't believe that any of those concepts (except granular borrows) needs a new item kind, but rather a clever way to reuse structs (for 3 and 4) and a fundamental extension of the view types that Niko sketched to achieve the 6th.
1 and 2 are basically the proposal with its idea, and the 5 is a kind of inheritance of fields, which I'm not sure why we want as such - we could require disambiguation of some kind if a user happens to have identical fields in two contexts that they use in the same place; and then, how do we construct the context at all if such unification is allowed?
In the future possibilities section, can you look at whether it might be possible to restrict an entire trait impl to be used only where a given context is available? This would allow, for example, an async executor integrated with a render loop that looks something like this:
let mut sprites: Vec<Sprite> = vec![];
loop {
    for ev in pending_events() {
        input_wakers[ev].wake();
    }
    bind(SPRITES = &mut sprites) {
        for task in pending_tasks {
            task.poll(…); // Method of `Future`, may modify `sprites`
            if is_frame_late() { break; }
        }
    }
    clear_screen();
    render_sprites(&sprites);
    present();
}
Currently, this pattern requires something like scoped_tls and interior mutability to achieve and it would be nice to eventually get proper language support for it.
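For concreteness, here is a minimal std-only sketch of that workaround, using `thread_local!` plus interior mutability in place of `scoped_tls` (all names are made up). Note that, unlike the proposed `bind`, this version can't hold a borrow of stack data (hence the `'static` strings), which is part of why proper language support would be nicer:

```rust
use std::cell::RefCell;

// The thread-local standing in for the bound `SPRITES` context.
thread_local! {
    static SPRITES: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

// Plays the role of the proposed `bind` block: makes `SPRITES` available
// to everything called inside the closure, then clears it on exit.
fn bind_sprites<R>(initial: Vec<&'static str>, f: impl FnOnce() -> R) -> R {
    SPRITES.with(|s| *s.borrow_mut() = initial);
    let out = f();
    SPRITES.with(|s| s.borrow_mut().clear()); // "unbind" on scope exit
    out
}

// Stand-in for a polled task that mutates the bound context.
fn spawn_sprite(name: &'static str) {
    SPRITES.with(|s| s.borrow_mut().push(name));
}

fn main() {
    let count = bind_sprites(vec!["player"], || {
        spawn_sprite("zombie");
        SPRITES.with(|s| s.borrow().len())
    });
    assert_eq!(count, 2);
}
```

The interior mutability also means aliasing is checked at runtime (`RefCell` panics on conflicting borrows) rather than statically, which the language feature would fix.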
I am still more or less ±0 on the whole idea; I don't hate it but I don't feel a huge need for it either? But the thing I originally got into this thread to ask for seems to have gotten lost in the shuffle, so I'm going to ask for it again.
Suppose crate Main calls functions in crate Steamshovel which call functions in crate Scoop. The functions in crate Scoop require some context items which are to be provided by Main.
Can we make it so that Steamshovel not only doesn't need to, but cannot access those context items except as an undifferentiated blob of "stuff I get from Main and pass down to Scoop"? Steamshovel will of course need to be able to refer to the blob as a whole, what I believe is called the "realm" in the proposal, but it should behave like an object of opaque type to Steamshovel. And, as a consequence, if the set of context items required by Scoop changes, Steamshovel should not have to change at all. (Ideally, in a future where Rust has a stable shared-library ABI, Steamshovel should not even need to be recompiled.)
Another way to put it is that I think the proposal needs to give Scoop a way to have similar privacy semantics to what it can have now if its API is all methods of a "context object" that only it can construct, while still giving Main the ability to put the context together itself. Because, if we didn't want to give Main that ability, then we could just have the sole member of Scoop's "realm" be an opaque context object. But threading one opaque context object down a call chain is not enough of a hassle to make this proposal worth it; it only starts sounding to me like something with ergonomic value when Main calls transitively into many low-level libraries with overlapping things each wants in their context.
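As a point of comparison, the "opaque blob" behavior is roughly expressible today by making Steamshovel generic over the context type. This sketch (all names illustrative; the modules stand in for the three crates) shows the layering, though under today's compilation model Steamshovel would still need a recompile when Scoop's requirements change:

```rust
// --- crate `scoop`: defines what it needs from the context ---
mod scoop {
    pub trait ScoopContext {
        fn depth_limit(&self) -> usize;
    }

    pub fn dig(cx: &impl ScoopContext) -> usize {
        cx.depth_limit().min(3)
    }
}

// --- crate `steamshovel`: sees the context only as an opaque `C` it
// threads through; it cannot inspect or construct it ---
mod steamshovel {
    use super::scoop;

    pub fn excavate<C: scoop::ScoopContext>(cx: &C) -> usize {
        scoop::dig(cx) * 2
    }
}

// --- crate `main`: the only place that constructs the context ---
struct MainContext {
    depth: usize,
}

impl scoop::ScoopContext for MainContext {
    fn depth_limit(&self) -> usize {
        self.depth
    }
}

fn main() {
    let cx = MainContext { depth: 10 };
    assert_eq!(steamshovel::excavate(&cx), 6);
}
```

If Scoop's context requirements grow, only Main's impl and Scoop's trait change; Steamshovel's generic signature stays textually identical, which is the "undifferentiated blob" property, minus the ABI-stability part.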
I also still want to see how this whole thing is going to be implemented at the level of assembly language. I'm worried that it will interfere with other low-level things I want to see happen, such as stable shared-library ABIs.
There is a lot of syntax in this thread; to me it reads very much like "what if we could pass normal parameters in as part of the <...> list," à la:
let ctx = Context::new();
f<ctx>(1, 2); // we can be explicit if we want to be
f(1, 2); // or `ctx` can be inferred as it's the only candidate
I expect there will need to be some new syntax to avoid overlap; fn f<ctx: Context>(...) { is already taken. But if this existed I would use it, and while I can imagine insanely implicit monstrosities built with this, I personally wouldn't abuse it. I'd love to just see it as an experiment, under any reasonable syntax.