Fleshing out libstd "scenarios"

This is intended to be a follow-up to @aturon’s previous pre-RFC for platform-specific APIs in the standard library. The libs team discussed this topic during triage today and we started out by outlining some problems with today’s methodology.

Current approach

First, I’d like to recap what I mean by “today’s methodology”. Today we largely expose a platform-agnostic API surface area in the standard library. This isn’t always sufficient, however, so platform-specific APIs can be found in the std::os module. This module is organized along the lines of std::os::$platform::$module such as std::os::unix::fs or std::os::windows::process. Put another way, if we want to expose a platform-specific API, we allocate a new module inside of std::os and figure out how to put it there (with an extension trait giving access to the functionality).
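
For concreteness, here’s a minimal sketch of how that extension-trait pattern looks to a user today (this uses the real AsRawFd trait from std::os::unix::io):

#[cfg(unix)]
use std::os::unix::io::AsRawFd; // extension trait living under std::os

#[cfg(unix)]
fn print_fd(f: &std::fs::File) {
    // as_raw_fd is only reachable once the trait from std::os is imported
    println!("fd = {}", f.as_raw_fd());
}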

This works out great for users who don’t want to worry about a Windows/Unix distinction: you just avoid the std::os module, and otherwise the rest of the standard library is generally cross-platform enough to get by.

Problems with current approach

This has worked pretty well for simply giving access to platform-specific functionality, but unfortunately there are a number of problems with this approach to APIs in the standard library:

  • It’s very difficult for libraries other than the standard library to follow this convention. In practice it seems to be very rarely followed, even though this sort of compatibility problem certainly comes up for other libraries as well!
  • Right now the design requires a strict hierarchy of features. Features are not always hierarchical, however; CPU features are one example.
  • APIs don’t live in their “natural” location. For example the CommandExt::exec function on Unix is pretty far away from the relevant Command type (see the sketch below).
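
To illustrate that last point, here’s roughly what calling exec looks like today (these are real APIs, just simplified): the method conceptually belongs on Command, but you have to reach over into std::os::unix::process to get at it.

use std::process::Command;
#[cfg(unix)]
use std::os::unix::process::CommandExt; // far away from std::process::Command

#[cfg(unix)]
fn replace_process() {
    // exec only returns if launching the new program failed
    let err = Command::new("ls").exec();
    panic!("exec failed: {}", err);
}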

Proposed solution

Continuing the concept of “scenarios” from before, I’ve been thinking that we can solve these problems with the addition of a new attribute and system to the compiler. Specifically:

  • The standard library will define scenarios, such as unix and windows.
  • APIs in the standard library can be tagged with a scenario.
  • Consumers of the standard library can enable particular scenarios (or none).
  • The standard library will have a set of “default on” scenarios, but these scenarios may not always exist for all platforms.

Some strawman syntax for defining scenario-gated APIs would perhaps look like:

// src/libstd/fs.rs
impl File {
    #[cfg(unix)]
    #[scenario(unix)]
    pub fn as_raw_fd(&self) -> c_int {
        self.fd 
    }

    #[cfg(windows)]
    #[scenario(windows)]
    pub fn as_raw_handle(&self) -> *mut HANDLE {
        self.handle
    }
}

Here we’re working with std::fs::File, and we implement the as_raw_fd and as_raw_handle methods directly on the File type. This solves the problem above where APIs don’t appear in their “natural locations”. Additionally, I’m thinking that #[cfg] is entirely orthogonal to #[scenario]. We can see here that the methods are tagged with both, but you could imagine #[scenario] implying #[cfg] perhaps eventually.

This is just an example of defining a scenario-gated API in the standard library. Consumers would perhaps look like:

#![scenario(unix, windows)]

#[cfg(unix)]
fn cross_platform_method(f: &File) {
    let fd = f.as_raw_fd();
    // ...
}

#[cfg(windows)]
fn cross_platform_method(f: &File) {
    let handle = f.as_raw_handle();
    // ...
}

Here we see a crate that activates the unix and windows scenarios, meaning that it will now have access to those methods in the standard library. There’s still a separate implementation for Windows and Unix, however. Each implementation uses the methods as if they were defined on the type inherently (which in a sense they are).

If the #![scenario(unix, windows)] were omitted then the above code would be a compile error. For example this code would not compile:

fn main() {
    let f = File::open("/dev/null").unwrap();
    println!("{}", f.as_raw_fd());
}

Here we’re calling the as_raw_fd method (and maybe even compiling for Unix), but the unix scenario was not activated. The compiler could perhaps even provide a tailored error message indicating that this method does exist but the gating scenario isn’t activated.

Usage in external libraries

I’d like for this system to be extended to all crates, not just the standard library. For example, the net2 crate provides Unix- and Windows-specific APIs, and the openssl crate provides functionality depending on what version of OpenSSL you’re linked against.

As described, though, I think it’s possible for all of this to extend naturally to other crates. Crates would simply tag their methods with #[scenario(foo)], which would then require that scenario to be activated in a downstream crate before use (just like the standard library). This should be relatively lightweight to add as well!
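
As a rough, hypothetical sketch of what that could look like for a third-party crate (the type and method here are made up for illustration; only the attribute shapes follow the strawman above):

// in some library crate:
impl TcpConfig {
    #[cfg(unix)]
    #[scenario(unix)]
    pub fn set_unix_only_option(&self) { /* ... */ }
}

// in a downstream crate that wants to call it:
#![scenario(unix)]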

I would also imagine that there’s sort of a global namespace for scenarios. That way if any other crate decides to put APIs behind the unix scenario (like the standard library) then you’ll only need to activate that scenario once.

Default scenarios

Ok, so this may plausibly provide a solution for platform-specific APIs. There are platforms like emscripten, however, which don’t have features like threads that are present on the “major platforms” of Unix and Windows. To handle this I’d propose something like the following:

  • Basically the entire standard library is split up into scenarios. Every API (maybe entire modules) would be tagged as such.
  • The standard library then defines a set of these scenarios that is activated by default. That is, code does not have to opt in to these APIs.
  • Crates could, however, disable this default set and re-enable scenarios individually.

With a system like this, the standard library would simply be missing APIs on platforms where they’re not supported. For example, on emscripten the std::thread module would simply be missing. If you’d like your crate to explicitly support emscripten then you have two possibilities:

  1. You simply avoid std::thread. Code then naturally compiles for emscripten as you never accessed what doesn’t exist.
  2. You explicitly state in your crate #![scenario(emscripten)] (or something like that). Basically you whitelist a few scenarios in the standard library whose APIs you can use. This disallows access to std::thread if a use of it accidentally leaks in (see the sketch below).
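
A strawman sketch of what that second option might look like in practice (the attribute and the exact scenario granularity are illustrative only):

// Crate explicitly whitelists the scenarios whose APIs it may use.
#![scenario(emscripten)]

fn main() {
    // std::thread is not part of the emscripten scenario under this scheme,
    // so a stray std::thread::spawn(..) here would fail to compile instead
    // of silently breaking the emscripten port.
}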

This way crates can explicitly declare what scenarios they support and get a guarantee into the future about continuing to support those scenarios. Furthermore, new platforms can simply omit swaths of the standard library. Supporting such a platform would then be a matter of either declaring a scenario that crates can avoid, or of crates using #[cfg] appropriately to use a feature on Unix/Windows and avoid it on the platform that lacks it.


Ok, so that’s what I’ve been thinking. What are your thoughts on this? Does your platform perhaps not fit into this sort of model? Is this perhaps too permissive? Can you see a killer hole in this we should close?

I think an important sentiment brought up by @aturon at the triage meeting today is that if you step back it’s actually pretty hard to be worse than today’s system. To that end we can probably get a lot of mileage without trying to be perfect for something like this. Not to say we shouldn’t try to refine it, but just something to keep in mind!

I think the functionality of this system sounds pretty good. I think it even allows for future linking of various scenarios into one binary (think runtime scenario detection for e.g. SIMD).

I wonder how all of this is going to look in rustdoc.

I worry that with cfg(target), cfg(feature) and scenario() there’s a lot of opportunity for confusing combinations of conditional compilation. But perhaps we shouldn’t think of scenario() as conditional compilation, but then what should we think of it?

I’m wondering if or how this intersects with winapi's need to deal with API families. I know the bunny has been hurting for some way to control which one is selected in a coherent fashion.

/cc @retep998

I'm not entirely clear on the motivation for a new attribute (perhaps I don't fully get what a scenario means). Was the option of putting this into #[cfg], e.g. #![cfg(scenario_optout_all)], #![cfg(scenario_whitelist(linux,windows))] considered? I don't feel strongly, I'm just curious about the reasoning (I note this was questioned in the last thread as well).

I had a similar question in mind when I read the integer portability thread, but I never posted it and it's relevant here anyway: what do you think backwards compatibility will look like? Say I've disabled all default scenarios (somehow) and then explicitly opted into something like "has_threads". What happens when (to take a topical example) we realise that floats aren't fully portable and they need to be hidden behind a scenario? When I upgrade, does my build a) start failing because I use floats (but have disabled everything with a wildcard), or b) silently opt in to any non-portable features on each upgrade (because wildcard disabling of scenarios doesn't exist)?

This concerns me - doing this for every platform or target combination is an enormous undertaking and I don't know if it's possible for all these scenarios to live in the stdlib. If there's a new processor or new OS or anything that people want to target (1, 2) that isn't supported in the stdlib at the moment, it has to go through the PR/review/merge/wait-for-stable cycle. Is that sustainable? My impression is that there's currently a bit of pushback on the likely churn for supporting very unstable targets. For some particularly niche targets there may even be nobody on the rust team able to perform more than a surface review of proposed changes. Once targets do get into stable, can they ever be removed (breaking backwards compatibility?), and if not how do they get updated with new things in the stdlib if no contributor has access to the platform? I've just opened a random configure script generated by autoconf and it has checks for a Motorola sysv4 platform, which, after some searching, looks like it may have been released in 1988.

As a result, it seems like it'd make sense to somehow allow third party authors to define scenarios on libstd outside of the rust repo, so that platforms without interested maintainers die by stagnation rather than someone having to decide when to rip them out. To be clear, I don't have a solid idea of what this would look like. A straw man proposal could be to have 'scenario' crates that are always unstable (like compiler plugins) that must select whether to include or exclude each individual top-level module in the stdlib (therefore breaking on top-level stdlib module changes).

It makes sense for the stdlib to contain some first-class scenarios and groupings, but finding some way to allow this work to happen alongside rust seems prudent, particularly in the context of other things that are working towards modularity (e.g. cargo building stdlib).


I think it would be better to use feature detection instead of platform detection. That is, say "Enable this feature if the underlying library's feature X is available," e.g. "Enable this feature if threads are available," similar to the proposed C++ __has_include feature that's implemented in Clang.

The difference between scenarios and features isn't so clear. It seems like saying "I depend on library X with feature (group) Y" means that the build will fail if feature (group) Y isn't available on this platform. It seems like saying "I depend on library X with scenario Y" means that I can condition Y-using code on whether feature Y is actually available. If this is the only semantic difference then I think it would be good to name things in a way that shows how similar they are. For example, instead of #[scenario(x)] you could have #[cfg(has_feature(std/x))].

It seems like it would be very bad style for any library crate to use the default scenario. I think it's good to bite the bullet and deprecate it from the beginning.


For libraries, would all scenarios be validated/typechecked during compilation or only those that are currently active?

If yes: let’s say I have a cross-platform function with different implementations for different scenarios. Would the “scenario checker” recognize that they are the same?

I.e. would this be legal:

#[scenario(scenario1)]
fn cross_platform_fn() {
    // ...
}

#[scenario(scenario2)]
fn cross_platform_fn() {
    // ...
}

#[scenario(scenario1, scenario2)]
fn cross_platform_fn_2() {
    cross_platform_fn()
}

Tangential musing: some features contain platform detection within them (e.g. process spawning), which makes me realise that 'scenario' crates intended for platforms would probably want to inject code into libstd to plug in bits of functionality missing for your target. Which basically becomes a hacky way to maintain your own stdlib. So perhaps I agree that scenarios should be feature-only, because any platforms that can't be expressed by choosing scenarios of 'portable' features will need to maintain a stdlib anyway.

On the flip side, platform-based scenarios are nicer for some use-cases. If I'm writing something for Windows and Linux I probably don't really care about opting into threads, I just want to import it and get going. Which brings me to

If scenarios get added for virtually every module in stdlib, that creates a lot of boilerplate and I can see it becoming a meme that you sometimes (e.g. threads) have to 'import twice' in libraries (the scenario and the use). This seems like purity over practicality. Being able to opt-out of all features seems like a useful thing for portable applications/libraries though.

For the record, the initial versions of glium were using a similar system but with Cargo features. For example if you wanted to use geometry shaders, you'd enable the geometry_shaders feature. There were around ten to fifteen different features if I remember correctly.

I ended up dropping the concept pretty quickly because of the negative feedback, as the documentation became messy and people were getting confused by the system.

Forgive me for being cranky (perhaps it's too early in the morning), but I think we need less strawman syntax and more mathematics. I see some syntax, but little clarified semantics since the last post.

Additionally, I'm thinking that #[cfg] is entirely orthogonal to #[scenario]. We can see here that the methods are tagged with both, but you could imagine #[scenario] implying #[cfg] perhaps eventually.

Uh, sounds like they are not orthogonal? What is the meaning of cfg without scenario, or scenario without cfg?

Here we see a crate that activates the unix and windows scenarios

This gets confusing, since we normally think of #![foo] as applying #[foo] to all items; I believe that convention currently holds for all items. I think we can restore this convention, but more on that in a bit.

I would also imagine that there's sort of a global namespace for scenarios.

Whoa, this is very dangerous! To preserve modularity, if any crate can define scenarios, I think we need to namespace them by crate. Otherwise we end up with a situation where nobody but the standard library can confidently take a name from the global namespace.


Ok, here is my design sketch. First, an aside: I don't see how there won't be really awkward overlap between scenarios, features, and built-in cfg tokens. Features are nice per-crate black-box variables, and built-in cfg tokens are nicely connected to target definitions. Scenarios clearly will need some elements of both, so hopefully we can flesh out the division of labor once we figure out what scenarios mean.

Second, the problem. When we say code is portable, we mean one of two things. The first, and arguably original, type of portability might be deemed parametric. This means the code "simply doesn't do anything non-portable", or more formally, it doesn't use any cfg'd/scenario'd item or language construct, and is thus trivially guaranteed to work on all systems. The second would be non-parametric, where the code attempts to maintain compatibility by defining piecewise its implementation--i.e. a bunch of cfg'd versions of internal methods.

The second case usually isn't as good, because the cfg assumptions are rarely pred and not(pred), but rather a bunch of different predicates with the hope that they are disjoint and at least one is true. This hope invariably fails eventually, as new systems are introduced which play with basic assumptions in various ways. At the same time, the second case is inevitable when writing software. The problem is we want to verify the portability of code without brute-force compiling on different platforms, because that is intractable.

Without trying to sound academic, I firmly believe the essence of the scenarios problem is satisfiability modulo theories. I mean less the associations of off-the-shelf solvers than the core mathematical concept. Plain boolean satisfiability is "does this formula hold true under any interpretation"; SMT is "does this formula hold true under any interpretation consistent with the theory", i.e. "does this formula hold true under any of these interpretations I actually care about". What is a formula? For our purposes it is a boolean expression (no quantifiers)... like a cfg formula. What are the variables? Primitive cfg tokens attached by the compiler to targets, library-defined features, and perhaps something else if we need it (I think we won't). What are the theories/sets-of-interpretations? Basic axioms like and(windows, unix) = false [though @eternaleye will shut down that example beautifully]. What does SMT become when you negate your formula? "Does this formula hold true under all of these interpretations I actually care about". What do you get putting it all together? "Does this cfg hold true under all of the possible truthinesses of cfg-primitives/features I care about". Boom! That's portability.

Wait!, you ask. My code has many cfgs, not just one! How can that be portability?! Here's how. Recall how name resolution works: ignoring things like globs and shadowing (cause what twisted soul would use those in conjunction with #[cfg(..)]), we need all identifiers in an item to resolve to exactly one item. This means we need the cfg of each item to imply that exactly one of the cfgs of everything it could possibly resolve to is true:

cfg-of-item => one-of(cfg-of-helper-v1, cfg-of-helper-v2, ...) 

Now => is straight-up propositional logic, and one-of can be elaborated as propositional logic, so via this means we can derive a portability formula for each item in a crate. The standard interpretation of #![foo] actually works, in that we are simply asking all items in the module to be portable up to this cfg-theory (but yes, we may want to remove the module instead of making it empty, I don't know).
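
As a small worked example of that implication (names are hypothetical):

#[cfg(unix)]
fn helper() { /* unix impl */ }

#[cfg(windows)]
fn helper() { /* windows impl */ }

#[cfg(any(unix, windows))]
fn caller() { helper() }

// The portability formula for caller() is
//     any(unix, windows) => one-of(unix, windows)
// which must hold for every interpretation the theory admits. Under an axiom
// like not(and(unix, windows)) it does, so caller() is "portable up to" that
// theory without compiling for each target; drop the axiom and the formula
// fails for the interpretation where both are true.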

Ok, I've got to go to work so there's no time to proof-read or talk about anything else. Later I will dive into what theories Cargo's current resolution algorithm implies, and what to think about target specs / builtin configs (@nagisa's RFC!).


If #[scenarios] can somehow support my requirements detailed below then I am all for it.

I disagree here. IMO, there's a clear third pole, and one that's underserved. To wit:

  • Built-in cfg tokens are connected to target definitions, and both their existence and their value are determined by the target
  • Feature flags are connected to individual crates, but while their existence is determined by the crate, their value (whether they are enabled) is controlled by Cargo. This is important to note, as due to the additive nature the user has no way to mandate a feature be disabled.
  • Scenarios, as I envision them, are connected to root crates. While they are defined... elsewhere (and that's a can of worms), their value would be specified by the root crate, and they would place an upper bound on features. Features like asm or std should depend on the appropriate scenario, and when a scenario is not in the bounding set defined by the root crate, Cargo would be forbidden from enabling features that depend on it.

In that model, Rust code would never see scenarios directly. They are an additional input to Cargo's feature-enabling algorithm... and that's entirely sufficient for every use case I've seen.
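
A purely hypothetical sketch of what that could look like from the root crate's side (no such manifest syntax exists today; the scenario names are made up):

# Cargo.toml of the root crate
[scenarios]
# Upper bound on what Cargo may enable anywhere in the dependency graph; a
# feature anywhere that depends on, say, the "std" or "asm" scenario would be
# refused rather than silently enabled by some other dependent.
allow = ["core", "alloc"]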

Second, the problem. When we say code is portable, we mean one of two things. [...] The second would be non-parametric, where the code attempts to maintain compatibility by defining piecewise its implementation--i.e. a bunch of cfg'd versions of internal methods.

I'll note that seeing scenarios as being about portability is, IMO, a mistake, largely because it leads people towards using them for the second approach. If they're going to go down that (painful) road, then the hard-wired cfg is better at it.

My personal view is that scenarios serve best when framed as being about operating under constraints. I want to operate in some domain that requires Rust code not do a thing. Whether that's "attempt to use assembly", "call a memory allocator", "use WinAPI calls", or anything else. There is currently no mechanism for this at all.

Basic axioms like and(windows, unix) = false [though @eternaleye will shut down that example beautifully].

Indeed I shall! Making this not an axiom is rather the whole point of the midipix project, which not only provides a POSIX-satisfying C library on Windows (by reimplementing the Linux syscall ABI in terms of Windows calls, and then putting musl on top), but also provides support for cleanly calling the Windows APIs in a seamless manner, even up to using UTF-8 for both sets of APIs.


Very good points! I view the two systems as orthogonal in the sense that #[cfg] runs first, cleaning out good chunks of the AST. Afterwards we've got items tagged with #[scenario] that then get propagated through.

In that sense I think rustdoc would definitely want to render scenarios. The documentation for the standard library would then, for example, simply show all Unix-specific functions (like as_raw_fd) as "this requires the unix scenario".

Note that this is not unlike today's rendering of documentation of the standard library where AsRawFd is an implemented trait for types like File. Also note that we would definitely still need to publish Windows documentation!


Ah, this may actually be explained by my above comment as well. I'm at least envisioning that #[cfg] still runs to prune out and strip the AST of unconfigured items. What remains is #[scenario]-tagged items. I think of it as: #[cfg] omits items from compilation, and #[scenario] omits items from downstream consumers (but still compiles them).

Another very good question! I think we can boil down the backcompat story into two categories: those APIs which today are platform-specific (e.g. std::os) and those that aren't (e.g. std::thread). For the first category, the entire std::os module will not require a scenario to use, but scenario-requiring methods and traits will also be added in their natural locations (e.g. AtomicU8).

For the second category I think this is what falls under the "default scenario" category. By default the standard library will give you floats, threads, etc. Some platforms, however, just fundamentally don't have these primitives (like threads on emscripten, for now). These platforms would simply have modules disappear under this scheme. I'm currently under the impression that all these targets are nightly anyway so breakage is either expected and/or ok.

Good points! I wonder if we could perhaps enhance custom target specifications in this regard? E.g. if we have the standard library sliced up into a number of scenario categories, you could imagine that any platform could pick and choose what slices of the standard library are compiled in at will? I'm not sure if this covers all the use cases you're thinking about, however.


This is an interesting idea! I feel, though, that this is actually a separate feature we might also want to have. I think this helps when you want to opportunistically work on a platform that you don't even know about (e.g. automatically work on platforms without threads, rather than specifically working around emscripten itself).

Independently though I think scenarios are still worthwhile as one of the major features of them is gated APIs by default. That is, you can't accidentally use as_raw_fd even though it's available. Put another way, you can't accidentally make your code less portable by default at least.

Now I think we'd definitely want a feature like this for various use cases:

  • Working across multiple versions of the standard library
  • Working generically across platforms that don't have a particular feature instead of listing them out exhaustively

This can be emulated today with a build script that does this sort of detection, but it'd be great to have a standard feature for it yeah.
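
For reference, a minimal sketch of that build-script emulation (the emitted cfg name has_threads is made up for illustration):

// build.rs
fn main() {
    // Build scripts receive the target triple in the TARGET env var; emit a
    // custom cfg the crate can then test with #[cfg(has_threads)].
    let target = std::env::var("TARGET").unwrap_or_default();
    if !target.contains("emscripten") {
        println!("cargo:rustc-cfg=has_threads");
    }
}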

Initially I think this sounds like a great idea, but I think it ends up being too conservative to be practical. In some sense the standard library is going to provide something by default, so there's a "default scenario" no matter what. In that case, what's that default scenario? For example you could imagine platforms that don't have threads, floats, 64-bit integers, etc.; in other words, platforms missing crucial bits that the standard library needs in order to operate.

So I see a spectrum here: on one end there are 8-bit processors from the 80s, and on the other there's x86_64 Linux on Intel's newest chip today. The line of what the standard library provides by default falls somewhere on this spectrum, and either extreme is too unergonomic to work with.

That all leads me to the conclusion that today's choice of APIs provided by the standard library strikes a reasonable balance between being ergonomic to use and reasonably portable. It's of course the wrong decision for some maintainers who want libraries to work in as many areas as possible, but it's very much the "right" decision for others writing, e.g., CLI apps (say, Cargo and/or rustc).

Note, though, that the intention is definitely to accommodate developers who work across tons of platforms. The "opt in" would in theory just be a line or two at the top of the crate indicating that the default scenario should be turned off.


I envision the compiler working in a few ways:

  1. When compiling a crate, first, all #[cfg] items are stripped.
  2. Second, the compiler works as today. Everything has to typecheck and not conflict as such.
  3. When loading a crate as a dependency, you'll always load a crate with a set of scenarios active for that crate. Only configured APIs (e.g. activated scenarios) are loaded from the crate.

In that sense your code example would not compile because cross_platform_fn is defined twice. You'd have to use #[cfg] to select between the two there. I see #[scenario] as unlocking functionality, not changing implementations (like #[cfg] does).
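
Concretely, something like this (the cfg predicates are placeholders; the scenario names come from the question above):

#[cfg(unix)]
#[scenario(scenario1)]
fn cross_platform_fn() { /* one impl */ }

#[cfg(windows)]
#[scenario(scenario2)]
fn cross_platform_fn() { /* the other impl */ }

#[scenario(scenario1, scenario2)]
fn cross_platform_fn_2() {
    // #[cfg] has already picked exactly one definition above, so this call
    // resolves unambiguously; #[scenario] only controls downstream visibility.
    cross_platform_fn()
}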


True, yeah, although I'm realizing now that I've not been thinking about this in the sense of changing functionality. I'm envisioning scenarios as: we're shipping you an API that's as full-featured for the platform as it can possibly be; it's just that some of it is turned off because we don't want you to be able to use it by default.


Interesting! I'd be curious to dig more into what's going on here. I see the documentation as a critical point which we need to solve, but I think it will also be a very simple one to solve (e.g. rustdoc just renders all #[scenario]-tagged items specially, like it does with #[stable] today).

Could you elaborate more on the confusion you were seeing though? Perhaps there's kernels of confusion we could head off?


I would not generally think of it like this - in terms of platforms, and purely additive - but in terms of features. Rather, I would think of it as "this is a crate that works without the windows/unix/threads/networking scenarios". The "default" scenario, which all present (non-no-std?) crates are assumed to operate under, is the "windows/unix/threads/networking" scenario. In the future there will be scenarios that subtract parts of std, and thus subtract parts of the std scenario. So the emscripten std would not declare support for any of the aforementioned scenarios. Any emscripten-supporting crates would need to declare they work without chunks of std, vaguely like:

#![scenario(core, collections, alloc, rand)] // For an additive declaration started from a position of "no-scenarios"
#![scenario(without(threads, networking))] // Declaring which features you can live without

In the subtractive case, the compiler won't let you build until you've subtracted all the parts that the emscripten port doesn't support. And you can imagine further declaring (somewhere) "I want to support this set of platforms", and have cargo/rustc tell you (without having actual cross-compile capability to that platform) "no, you are not in the right scenario, do these things to fix".

This ability to have the toolchain statically tell you which platforms you can support without actually attempting to compile I think is the thing we need out of scenarios. Otherwise it's just a reformulation of cfg.

In the subtractive case, the compiler won't let you build until you've subtracted all the parts that the emscripten port doesn't support.

The problem is that this doesn't axiomatize: What if I'm using a crate that provides RDMA, and I want to build "without infiniband" (for any number of reasons - licensing, not having the hardware, etc)?

The only way what you propose could work is if the set of scenarios is closed, and only extensible by rustc, in the definition of target specs. I feel that would cripple scenarios, which could (IMO) fill a desperately needed role of allowing root crates to limit Cargo's additive feature behavior. See also a couple of recent issues where the lack of such an ability has made Cargo considerably less ergonomic for crate authors.


The ability for the root crate to enable and disable scenarios/features and have that control how upstream crates are compiled would be amazing. Imagine being able to disable the openssl scenario without having to manually go through every single dependency to ensure it doesn’t forget to disable the default features of some other dependency so it can disable the default features of yet another dependency so that it doesn’t depend on the openssl crate. Features are great for enabling required functionality, but are terrible for optional functionality.


I'm afraid I don't know what the relationship between RDMA and infiniband is so I have a hard time visualizing what you are describing. I might imagine the RDMA crate either does or does not declare the infiniband scenario, and the downstream crate does or does not consume it.

The way I've imagined scenarios is as a relatively simple effect system propagated through the dag. For any compilation configuration, a crate may or may not introduce a scenario, which is propagated downstream. Downstream crates must then match, for the compilation configuration, the set of scenarios they require with the scenarios present in the world. (Hm, to that end I might expect scenarios to be declared in Cargo.toml, not in source, and enforced entirely within cargo).
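
A hypothetical sketch of what declaring that in Cargo.toml could look like (syntax invented purely for illustration):

# In the RDMA crate's Cargo.toml: this build configuration introduces a scenario.
[scenarios]
provides = ["infiniband"]

# A downstream crate would then declare `requires = ["infiniband"]` in its own
# [scenarios] section, and cargo would check the required set against what the
# rest of the world provides for the given compilation configuration.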

To be honest I didn't elaborate because I don't really remember exactly what was happening (it's been a long time, and since the library was in heavy development I didn't document all the changes correctly).

I think the biggest problem is that users were trying to use functions that are present in the documentation without realizing that they needed to tweak their Cargo.toml to enable the corresponding feature.

In the case of glium, the problem is that the error that the user would get is that the function simply doesn't exist. If you integrate "scenarios" in the core of Rust, that would be solved by instead returning an error saying that they need to enable the corresponding scenario.

(Disclaimer: I don't write libraries and don't use cargo in non-trivial ways.) Cargo features don't have values? Something like "USE_OPENSSL=0"? Because what you are describing sounds very similar to this thing from cmake

target_compile_definitions(my_root_crate PUBLIC USE_OPENSSL=0)

RDMA is a general term for "remote direct memory access". There are a number of technologies this can be done over, including Fibre Channel, Infiniband, and Ethernet.

I'm not clear on what you mean by "consume" here - it sounds like you intend these to propagate in the same direction and manner as features (dependencies declare them, direct dependents enable/disable them), and are arguing in favor of a "default on, opt out" semantics.

In my opinion, this would not work well: That propagation manner is insufficient (dependencies declaring them mandates direct dependents enable/disable them due to namespacing issues; if direct dependents control them, they're isomorphic to features; if they're isomorphic to features I see minimal benefit), and opt out is dangerous with any other propagation manner (because if I'm building a kernel, I know what is available, but I don't know what crazy things my Nth-level dependencies might want to be available).

They do not - they are all boolean, and are enabled by cargo if any dependent requires them. This is why negative-polarity options (no_foo) are dangerous/invalid.

No, cargo features don't have values. They are either enabled or disabled.

The problem is that some dependency might depend on hyper without disabling the default features, effectively forcing hyper to always enable the openssl feature, despite hyper and that dependency both being able to work without that feature. Once enabled by a downstream dependency, that feature is stuck enabled and the user has no control over it, unless that dependency provides its own features to enable/disable the features of its dependencies (and so on transitively until you reach the root crate), which is hell. If even one dependency fails to provide this, everything is ruined unless the user or root crate vendors their own version of that dependency to disable the default features of hyper and have its own feature to enable the openssl feature of hyper. For complex iron projects this process might have to be repeated for multiple dependencies, and can get quite hairy when the problematic dependency is several layers deep.
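
For concreteness, a sketch of the kind of dependency graph that causes this (the middleware crate and the exact feature wiring are hypothetical; only hyper and openssl are real):

# Excerpt from the root crate's Cargo.toml: it tries to opt out.
[dependencies]
hyper = { version = "0.9", default-features = false }
some-middleware = "0.1"   # hypothetical crate that also depends on hyper

# Meanwhile, some-middleware's own Cargo.toml simply says hyper = "0.9",
# leaving default features on. Cargo unions feature sets across the graph, so
# the openssl-backed default ends up enabled for everyone, and the root crate
# has no way to turn it back off.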

What Cargo needs is several things:

  1. A way to distinguish between features that are required and optional features that are nice to have.
  2. A way for upstream crates to tell downstream crates which features are actually enabled, so they can take advantage of optional features when they are enabled.
  3. A way for the user or root crate to enable and disable those optional features.

As a bonus, it would be really cool if the build script for a crate could choose which optional features to enable.
