[IDEA] Implied enum types

I'm trying to find a way to make it a general rule. When I'm next asked to :white_check_mark: an FCP or leave a concern, what's the rule that you want me to be using to decide whether a sugar is ok?

To go back to my set_radio_state(Default::default())?; example, one could easily argue

I'm not convinced that Default::default() is in any way better than having you write RadioState::default(), just to avoid writing use some_crate::context::RadioState;.

But that's not how Rust works today.

Or, relatedly, we could have required that people say

let x: RadioState = whatever();
set_radio_state(x)?;

to make it easier for the maintenance programmer, but, again, we don't.

What's the general principle that applies to Rust today that would be violated by allowing set_radio_state(.AllBlocked)?;?

(Note that I actually do personally have some concerns about the general feature. But I'd like to focus the bounds on it as much as possible, so there's the possibility for someone to have a clever idea to address the original motivation without the downsides.)

3 Likes

I think for me the general guideline is: if a change in one piece of code (e.g. changing the type of a parameter, or adding a parameter to a function) is likely to mean a change in contract (i.e. something I need to consider and possibly act on), does this feature make it less likely that I will notice the change in contract I may need to act on?

[IDEA] Implied enum types - #30 by farnz has a great example of this - because the type of the parameter has changed, I may want to audit whether I have a change to make where I call a function.

From that perspective, changing:

#[derive(Default)]
enum RadioState {
  Disabled,
  #[default]
  Enabled,
}

fn set_radio_state(state: RadioState);

to

#[derive(Default)]
enum RadioToEnable {
  #[default]
  None,
  Bluetooth,
  Wifi,
}

fn set_radio_state(state: RadioToEnable);

would be an example of a place where set_radio_state(Default::default()) can be problematic (and note, there is a pedantic clippy lint about this). While we have to live with the choices we've made, it would be great to avoid adding more hazards like this.
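
To make that concrete, here's a minimal, self-contained sketch of the call site across that change, reusing the post-change enum above (the function body is a stub, not real code):

#[derive(Default)]
#[allow(dead_code)]
enum RadioToEnable {
    #[default]
    None,
    Bluetooth,
    Wifi,
}

fn set_radio_state(_state: RadioToEnable) { /* stub */ }

fn main() {
    // Before the change this line meant RadioState::Enabled ("turn the radio on");
    // after the change the very same line still compiles, but now means
    // RadioToEnable::None ("enable no radios").
    set_radio_state(Default::default());
}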

set_radio_state(x.into()) is less scary to me, because an explicitly implemented conversion between types has probably had some thought given to it (e.g. I'd assume a From impl between RadioState and RadioToEnable would map Disabled and None because someone took the time to think about it).

Personally, I'm much less concerned with "can I tell what the type is just from reading the code without an IDE" and much more concerned with "do I need to re-consider my code based on a change made to some other part of code".

3 Likes

The Default::default() example is IMHO a very unconvincing one, because it's already allowed, and nothing terrible has happened. There's Type::default() for those who prefer to be more explicit (and so is Type::Variant). And even then, the example where Default::default() could be problematic is also an example where the implied enum type would have worked fine: set_radio_state(Enabled) wouldn't compile and it'd have to be changed to some other option. So it's not a big deal, and not that related to the proposed syntax.

Library authors want to have clear APIs. Application authors want to write maintainable code. If Rust gets this feature, both will be aware of it. This is a cooperative situation, not an adversarial one. If a particular combination of names and syntax gives a terrible result, people will avoid it. If a variant name would be ambiguous, misleading, or have unexpectedly different meaning after an API change, library authors will pick different names and/or users will fall back to specifying the full type name.

So I'm not really worried that this could be used to make awful APIs that are unclear at the call site. This isn't that different from the overuse-of-bool-arguments anti-pattern. Somebody could design an API like call(.No, .Nope, .Nah) just like they could design call(false, false, false). But that is a bad practice, and people generally avoid it. It might happen sometimes anyway in poorly thought-out APIs, but that isn't a big deal, just like functions overusing bools weren't a reason to avoid having bool literals in the language.

6 Likes

My experience is that people don't check the definition of the called function unless it's clearly not what they think it is; therefore, if I can write foo(Disabled)?;, people will not check the type foo takes, but instead assume it based on their past experience of the foo function.

My overriding rule is around locality of reasoning - anything that forces me to reason non-locally is problematic by design, since I know that software developers generally aren't great at non-local reasoning (we rely on heuristics like "that's what a similarly named bit of code in a codebase I worked on 10 years ago did"). Applying that overriding rule to implicitness, the general rule I'd like you to apply is that increasing implicitness should not also increase the room for non-local ambiguity.

Taking the foo(Disabled) example, if I have two enums visible at the point I call foo, Bar and Baz, both of which have a Disabled variant, you've introduced new non-local ambiguity - I have to look up whether foo's argument is of type Bar or type Baz to determine which variant is in use. On the other hand, if the only enum in scope with a Disabled variant is Bar, then foo(Disabled) is not ambiguous locally. There's a wrinkle here, in that if the file I'm in has a use Bar::Disabled;, and foo is fn foo(bar: Bar), then the ambiguity is resolved in Bar's favour - but that at least breaks loudly when I refactor foo to take a Baz, since the resolved Bar::Disabled no longer matches the parameter type.

This is, FWIW, the same rule for ambiguity that you get if you do use Bar::*; use Baz::*, and thus there's precedent for this sort of ambiguity handling. Thus, reducing this feature to "if you don't specify the enum variant, Rust will attempt to find the type for you as-if you'd written use _::* for all enums in scope" would fit my rule, even though I don't like it, because it's not introducing new ambiguity that isn't already one use statement away. But that then introduces the question of whether this is worth it, given that the programmer can already do use Bar::*; use Baz::* and get the same effect.
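
For reference, a small sketch of that existing glob-import behaviour in today's Rust (Bar, Baz, and takes_bar are made-up names):

#[allow(dead_code)]
#[derive(Debug)]
enum Bar { Disabled, BarOnly }

#[allow(dead_code)]
#[derive(Debug)]
enum Baz { Disabled, BazOnly }

use crate::Bar::*;
use crate::Baz::*;

fn takes_bar(bar: Bar) {
    println!("{bar:?}");
}

fn main() {
    // `BarOnly` is exported by only one glob, so it resolves fine.
    takes_bar(BarOnly);

    // `Disabled` is exported by both globs, so today's Rust rejects it:
    // takes_bar(Disabled); // error[E0659]: `Disabled` is ambiguous
}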

And this also works with my maintenance programmer example - if I split up RadioState into separate enums for each radio type, plus an overriding state, then all of those enums will have a Disabled variant. If I then have two of those enums in scope, I can't call set_wifi_state(Disabled), since, while the compiler knows that set_wifi_state takes a WifiState, it can see that Disabled could be any of WifiState::Disabled (thanks to that being in this module), crate::zigbee_radio::ZigbeeState::Disabled (no use statements, but this is crate-wide visible), or RadioState::Disabled (thanks to a use crate::RadioState in this module), and it'll refuse to continue until I fix up the ambiguity (with an explicit use WifiState::Disabled, or changing the call to set_wifi_state, or removing the other enum variants from visibility).

1 Like

I absolutely don't understand why you would care about the type of the enum, if it compiles. The compiler already ensures the method gets the right type.

set_wifi_state(_::Disabled);
set_zigbee_state(_::Disabled);

If these two compile to two different types like WifiState::Disabled and ZigBeeState::Disabled, that's great. The code communicates to human exactly what it does with no more information than necessary — you know it's disabled, not enabled, and not set to stun. The tedious detail for computers is found by the computer.

With type inference, deref, coercions, match ergonomics, impl trait, From and ?, and iterator chains, Rust routinely operates on types you can't clearly see from the code at the call site. It even has types you can't name. But that's fine, because in the majority of cases the code can still be clear from its structure and the names of functions and variables. Everything is still type checked, so if something isn't what it looks like, it'll likely fail to compile. There are edge cases, but people can avoid these and use more verbose syntax there.
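
As a small illustration of that point: none of the intermediate types in a chain like this ever appear in the source, and one of them (the closure's type) can't even be named, yet the code reads fine and is fully checked.

fn main() {
    // Map<RangeInclusive<u32>, {closure}>, then Filter<...> -- none written out.
    let total: u32 = (1..=10u32)
        .map(|x| x * 2)
        .filter(|x| x % 3 == 0)
        .sum();
    println!("{total}"); // 36
}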

7 Likes

No, that's a disaster, not "great"; you've just created a serious bug, affecting our regulatory compliance and resulting in the company being fined.

The old code had a single function, fn set_wifi_state(state: RadioState), which controlled all the radios, because in the design at the time, we only had a WiFi radio. Over time, lots of bits of code have had calls to set_wifi_state(_::Disabled) added, because they want us to be RF-silent, and people keep adding them in places where radio silence is required.

I'm now refactoring this code, partly because it's confusing that you call set_wifi_state(_::Disabled) to turn off the ZigBee radio, but mostly because we now have a requirement to control WiFi and ZigBee separately. I thus change from one function to three:

fn set_wifi_state(state: WifiState)
fn set_zigbee_state(state: ZigbeeState)
fn set_radio_state(state: RadioState)

And I'm now trying to play whack-a-mole with concurrent development by other people to ensure that set_wifi_state is only used for cases where you genuinely want to control the WiFi radio and not all radios.

In today's Rust, the compiler helps me play whack-a-mole; if I have set_wifi_state(RadioState::Disabled), then I know that the call site predates the refactoring, and the build will break, allowing me to look at the use case. In your world, the compiler silently accepts the code and potentially allows us to emit ZigBee RF when we thought it was set to be radio silent. Oops.

And the whole reason I used an enum, instead of fn set_wifi_state(enabled: bool) was to make sure that if I did need to refactor later, I'd get compile errors from code where people weren't keeping up-to-date with the current status. I could easily have used fn set_wifi_state(enabled: bool), but then the compiler would not help me identify cases where someone did the wrong thing.
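
A rough, self-contained sketch of that whack-a-mole game, with placeholder names (the hypothetical _:: call is shown commented out, since it isn't valid Rust today):

#[allow(dead_code)]
enum WifiState { Enabled, Disabled }

#[allow(dead_code)]
enum RadioState { Enabled, Disabled }

// Post-refactor signature: the parameter narrowed from RadioState to WifiState.
fn set_wifi_state(_state: WifiState) { /* stub */ }

fn main() {
    // A pre-refactor call site that spelled out the type is caught by the compiler:
    // set_wifi_state(RadioState::Disabled); // error[E0308]: mismatched types

    // An inferred call site would silently re-resolve to WifiState::Disabled
    // and keep compiling, losing the audit point:
    // set_wifi_state(_::Disabled);

    set_wifi_state(WifiState::Disabled);
}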

3 Likes

This is absolutely a stretch. Why would a wifi function name its enum something other than wifi? Realistically, for a wifi-only system, it could have been set_wifi_state(WifiState::Disabled). If it was named after Radio, then it could have been the all-radio-controlling set_radio_state(RadioState::Disabled) that can disable new radio types. I just don't understand why there would be a case where you have a wifi function controlling zigbee just because its enum was named wrong.

A new version supporting zigbee could have added set_zigbee_state(ZigBeeState::Disabled) and kept set_radio_state(State::Disabled) disabling both. Or it could make it extensible and support set_enabled(Radio::Zigbee, false) instead. In all of these cases it's not a problem with naming the enum; it's a semantic problem with the library adding a new state to control that old code is not controlling, because it can't possibly know it needs to. It's a semantic change that needs new API calls.

But let's say it was a vague function with an important enum type: set_radio(WifiState::Enabled), and it was changed to control multiple radios. Then a call to set_radio(AllRadioControl::Enabled) enabling all of them could be a trap. But a reasonable library upgrade doesn't need to create such a trap for its users. If there is an important change to default behavior, it could rename the variants to EnableWifiOnly and EnableAll, change set_radio to set_radios, or create individual methods. There are plenty of ways to handle this. In a world where the implied enum name feature exists, the library author would be aware that _::Enable could get a new meaning, and would just make sure not to screw up that meaning.

It's up to the library user to ensure they use a library with a competent API, especially if it's an important project. And library authors do care about their public API, and don't have to set up traps for users.

3 Likes

I'm using bad naming because I'm trying very hard not to break NDAs relating to real code, while still giving an example that shows the essence of the problem.

And my experience is that any project of significant size has at least one API where a semantic change has been made, but nobody has done it correctly. There are two ways to handle this:

  1. The way I've experienced in many C and C++ projects, where you simply have to know that the old function was misnamed, and that while it's called set_wifi_state, it's actually supposed to be set_radio_state, and if you want to control WiFi state, you should use set_wifi_radio_state instead.
  2. The way Rust projects have tended to do it, where they change argument types so that misuse of an API doesn't compile, and then fixing everything up. I want us to stay in this place, where a mistake is relatively manageable to fix, because the compiler helps you do the right thing.

I would also note that I can make the same argument as you're making about enum types with respect to unsafe - the compiler knows when I'm calling an unsafe function, and it can deduce that the right thing to do is to act as-if I was in an unsafe block, making the code compile. The code communicates to the human exactly what it's doing with no more information than necessary, and the tedious detail for computers (the unsafe block) is found by the computer.

In the majority of cases, the code is still clear from its structure and names of functions and variables; everything is still type checked, so if something isn't what it looks like, it'll likely fail to compile. There are edge cases, but people can avoid these and use more verbose syntax (say a !unsafe block) there.

It's up to the library authors to ensure that they don't name unsafe functions in a confusing way, and the library authors are aware that there's a risk of users screwing up with unsafe functions, and just make sure not to screw up by naming an unsafe function in a way that doesn't make it clear that it's unchecked.

Aaron called this "context-dependence" in his classic post on the Reasoning Footprint.

The example about type annotations describes it thusly:

Context-dependence: because data types and functions are annotated, it's easy to determine the information that's influencing the outcome of inference. You only need to look shallowly at code outside of the current function. Another way of saying this is that type inference is performed modularly, one function body at a time.

I would say that "look shallowly" is a good description of what's needed for implied enum types. After all, even set_radio_state(RadioState::Enabled)? isn't just local, because RadioState itself would often come from a use, forcing you to look elsewhere in the file.

Looking at the other two axes:

  • Applicability: I use the set_radio_state(.Enabled)? syntax in my posts here to give it that "heads-up that this [is] happening". And it's only elidable where inference has already determined what the type must be -- dbg!(.Enabled) would never work.

  • Power: Because it only works where the type is already forced by something else, this can't "radically change program behaviour or its types".

So yes, this does have some impact on locality of reasoning, but not a particularly bad one -- far less than trait method resolution, for example. And it's low impact on the other axes, so I see it as fitting well in the kinds of tradeoffs that Rust has generally been happy to make.

I think, instead, that this is the core issue:

My personal concern with this, as I linked above, is entirely about naming:

The problem is not in locality of reasoning, but in naming conventions.

Calling foo(.Yes, .Yes) is as bad as foo(true, true), but people make enums like that today because this feature doesn't exist (see InheritStability::Yes and InheritDeprecation::Yes, for example), and thus the call would be foo(InheritStability::Yes, InheritDeprecation::Yes), which is entirely readable.

Thus to me, if Rust was designed around this feature it wouldn't be .Disable, but .DisableRadio, and you'd solve your update problem by removing the Disable variant and calling it DisableWifi or DisableAllRadios, so you still get the compiler error for the callsites, just in a different way.

Therefore I think the big unknown here is in how to get the goodness of not needing to say the irrelevant when it's irrelevant, but still make sense with the general non-stuttering scoped nature of enums in existing Rust. The non-localness, by itself, is not the issue.

(And, of course, there's the big overarching question of whether the language grammar needs to ensure this, or whether -- like foo(true, true) -- the appropriate mitigation might just be code review.)

I consider this a false parallel, because the potential impacts are so different. UB makes it impossible to reason about your program, because a program that hits UB is allowed to do literally anything. Getting a variant from the only enum that can possibly be passed to a function is of tiny impact, relatively.

Logic errors are always possible, but when it's just a logic problem and not UB, then logging and debuggers and testing and such are meaningful.

4 Likes

But UB is just one type of logic error - it's a consistently serious one (in that wherever it occurs, it's serious), to be sure, but it's just a class of logic errors.

And IME of embedded (my preferred domain to work in), other logic errors are often more serious than UB; UB is completely unpredictable, but other logic errors are often predictably lethal. Note in this context that the final binary does not have UB itself; the processor and system are entirely defined, and thus the code's behaviour can be predicted from the resulting machine code. The issue with UB in this context is that it makes modification very brittle - an apparently innocuous change (such as compiling at 11am instead of 1pm, or compiling while reading e-mail instead of browsing the web) can result in a binary with radically different behaviour.

Treating UB as "different" to logic errors is thus fallacious reasoning - the presence of UB is a logic error in your code, and the only reason to treat UB specially is that the consequences of UB are always severe, whereas the consequences of other logic errors can vary from "something that should have been asparagus green is artichoke green instead" all the way through to "the program consistently kills people".

After all, the Therac-25 deaths were "just" the result of a logic error, not undefined behaviour - but the consequences were huge, and arguably larger than the consequences of most UB out there.

This is why my suggestion is that it should be accepted where it's unambiguous locally - if the only visible enum in this scope with a Disabled variant is RadioState, then deducing that _::Disabled is RadioState::Disabled is unambiguous - there's literally nothing else it could possibly be. This is equivalent to what you'd get if, for all enums visible at this point in the program, you wrote use $enum::*; - unambiguous variants are accepted, but ambiguous ones are rejected. Using my example, this would allow set_wifi_state(_::Channel1ApMode), because only the WifiState enum has a Channel1ApMode variant visible in that scope, but not _::Disabled, since there are (in my example case) three enums visible in the caller's scope that have a Disabled variant.

I'd also be happy with the original feature as an IDE assist - you type set_wifi_state(Disabled), and the IDE assist autocorrects that to set_wifi_state(RadioState::Disabled) (or set_wifi_state(WifiState::Disabled), post-refactor) for you.

My concern comes in not when writing the code for the first time, but when refactoring - if something changes the types, the compiler may silently amend the meaning of the code drastically, ignoring all my carefully written documentation. As a result, any change to the called method can result in the compiler "auto-fixing" your code without warning; and it's this silent auto-fix behaviour that scares me, not the original code author's situation (hence being happy with it as an IDE assist).

And because it doesn't record what the original type was, there's no way for the compiler to know that my refactor has changed the type here significantly. At least with the IDE assist, the decision for which type you're using is made at the point the code is written, and if the function is changed after you made that decision, you get an error.

In general, I'm scared of silent auto-fixes - where behaviour changes significantly, I want a human in the loop doing the fix, because that human is likely to then test the changed behaviour. This means that cargo fix is great, because the human can review the changes it makes - but having rustc silently apply cargo fix auto-fixes in memory and retry would not be good.

2 Likes

I also disagree with the idea that this fits in well with the kinds of tradeoffs that Rust has generally been happy to make. All of the following, with the exception of BIG, are compile-time errors, despite the fact that I'm using the forms of literal that Rust is happy to type-deduce today:

const BIG: u32 = 0x12345;
const LE_MASK_1: u8 = 0x12345 & u8::MAX;
const LE_MASK_2: u8 = BIG & u8::MAX;
const F1: f32 = 42;
const F2: f64 = 43;
const U1: u32 = 12.0;
const U2: u64 = BIG;
const U3: i64 = BIG;
const BIG_F64: f64 = BIG;

fn convert(val: u8) -> f32 {
    val / 4.0 - 10.0
}

fn convert2(val: u8) -> f32 {
    let val: f32 = val.into();
    val / 4.0 - 10
}

fn convert3(val: u8) -> f32 {
    let val: f32 = val.into();
    val / 4 - 10.0
}

And none of them are ambiguous - so what rule are you applying that says that all of these should be errors, but that we should deduce the enum type?

It's a fairly common position that it would be nice if {integer} literals could be used where {float} literals are currently required. IIUC it's not happened yet mainly because adjusting this core fallback-having inference is very fragile, and it's not that much extra work to write out the .0 to make a float literal.

Numeric literals are also fairly different from inferred enum stems. When the source is 12.0, that's fairly clearly asking for a floating-point number, not an integer. If the source says set_wifi_state(_::Disabled), it's fairly clearly asking for the Disabled variant (or associated item, I suppose) of whatever type the argument is.

Yes, currently enum types can be (and fairly commonly are) kind of used like function argument labels; the example of foo(InheritStability::Yes, InheritDeprecation::Yes) trivially maps to foo(inheritStability = true, inheritDeprecation = true). But I agree that the set_wifi_state example is a poor example; if set_wifi_state(_::Disabled) ever does anything other than just set the Wi-Fi state to disabled, the function was poorly named at that point in time, no matter what the argument type was.

If the goal is to deliberately break users of set_wifi_state so each one can be audited for changing semantics, it remains trivial to do by changing the names involved (other than the name of the argument type) or slapping #[deprecated] on the function to have the compiler give you a list that will even discover other uses that use .into() or similar and remain type-correct after the refactor.

The same pitfall can easily be hit without _::Variant syntax via any generic return type, e.g. false.into() or "disabled".parse()? or default() or any of many options. You might prefer to enable pedantic/restriction lints to limit such inference backedges, but _::Variant isn't anything more interesting than existing inference backedges.
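
A sketch of one such existing backedge, under made-up names: the parameter type alone decides which From/Default impl is used, with no type name at the call site, so these calls would also silently re-resolve if the parameter type later changed to another enum with compatible impls.

#[derive(Default)]
#[allow(dead_code)]
enum WifiState {
    #[default]
    Disabled,
    Enabled,
}

impl From<bool> for WifiState {
    fn from(enabled: bool) -> Self {
        if enabled { WifiState::Enabled } else { WifiState::Disabled }
    }
}

fn set_wifi_state(_state: WifiState) { /* stub */ }

fn main() {
    set_wifi_state(false.into());        // From<bool> impl picked purely by the parameter type
    set_wifi_state(Default::default());  // Default impl picked the same way
}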

The one limitation I'd potentially agree with is requiring the enum type to be in scope (but not that the variant name is unique among enums in scope), with the rationale being derived from how lookup for <_>::method() works today (needs to come from a trait in scope).

2 Likes

Note that it doesn't work because it's not forced to one particular type in inference.

Looking at this one, for example:

const LE_MASK_1: u8 = 0x12345 & u8::MAX;

If I replace that with Default::default(), it doesn't work either:

error[E0790]: cannot call associated function on trait without specifying the corresponding `impl` type
 --> src/main.rs:2:21
  |
2 | let LE_MASK_1: u8 = Default::default() & u8::MAX;
  |                     ^^^^^^^^^^^^^^^^ cannot call associated function of trait

And, fundamentally, this happens because both u8: BitAnd<u8> and &u8: BitAnd<u8> exist, so it's not forced to one type by type inference.

Similarly,

fn foo(_: impl Into<u32>) {}
foo(0x12345);

doesn't work either, even though

fn bar(_: u32) {}
bar(0x12345);

does.

So I would expect similar things here -- in a context that's forced to one specific type, rustc would use that type to look for the .Variant, but anything with more flexibility -- like the "into trick" or whatever -- and it wouldn't work any more, requiring that it be fully qualified.

1 Like

But they are forced to one type by the site of use, same as in this suggestion; because const LE_MASK_1: u8 is a u8, by definition, following your reasoning for claiming that Rust already makes this tradeoff, Rust should be able to deduce that the type must be u8. The fact that it doesn't says that Rust doesn't make this tradeoff today, but instead requires you to be verbose.

In this suggestion, if I have FooEnum::Disabled and BarEnum::Disabled, then a straight let dis = .Disabled; in isolation is not forced to one particular type in inference, because without context, Rust can't tell which of the two enums I meant. But, in this suggestion, const dis: FooEnum = .Disabled; would be inferred correctly as const dis: FooEnum = FooEnum::Disabled. This is a class of inference that Rust doesn't do on literals today.

Similar applies to const F1: f32 = 42; - Rust today says that, despite type inference forcing this to be one, and only one, type, it won't do the inference, it'll force you to give it the right type of literal.

That's two cases where Rust today doesn't do what this suggestion says (of inferring the missing bits for a literal), despite doing inference on the types already.

Further, it doesn't infer the types for the following code (Playground):

mod foo {
    #[derive(Debug)]
    pub enum FooEnum {
        Disabled,
        LovelyFoo,
    }
    
    #[derive(Debug)]
    pub enum BarEnum {
        Disabled,
        BarBarBar,
    }
}

use foo::*;
use foo::FooEnum::*;
use foo::BarEnum::*;

fn use_foo(foo: FooEnum) {
    println!("{foo:?}")   
}

fn main() {
    use_foo(LovelyFoo);
    use_foo(Disabled);
}

So we've got yet another example of a very similar feature in today's Rust (the difference between the proposal and this feature is that the proposal deletes the use foo::FooEnum::*; line), where Rust also differs from this feature proposal.

And I've not even got into other similar cases, like how Rust doesn't treat let s: String = "Hello World"; the same way as let s: String = "Hello World".to_owned();, despite this (or functionally similar ways) being the only sane way to convert a string literal to an owned String. Again, given the context that you have a receiver that needs a given type, Rust could infer this, the way this proposal suggests, but doesn't.

I could live with that limitation - for the sorts of "refactor to fix bad code" I'm concerned about, you wouldn't have both enums in scope (assuming that by "in scope", you mean "available without a full path to the item", as opposed to "visible if you know the full path to the item") until you've done the fixes, because by definition, you're introducing a new enum to catch out cases where the old use was simply wrong.

However, I'd still prefer the version where, in today's Rust, use crate::foo::FooEnum; or use crate::foo::*; implied use crate::foo::FooEnum::*;, with the current semantics of a glob import from an enum - if FooEnum's variants are uniquely named, then this works, while if FooEnum and BarEnum overlap, you need to disambiguate the overlap.

Not to be a buzzkill, but I'm not sure there's consensus to be found here. It's the old "Does adaptive language behavior increase complexity meaningfully" debate that is being had regularly. If it goes on long enough it just brings out bad patterns.

Personally, being in the very weird sliver of the Venn diagram that both agrees that adaptiveness causes complexity and also wants the _::* version of this feature because it can de-noise some code significantly, I still maintain that this doesn't even have to be a conflict as often as it seems to be, and that we could have general adaptive defaults with solid guards in the form of restriction lints and fully qualified syntaxes, among other things.

In this case specifically, _::* in patterns seems so distinct as a feature I'd think it would make great first-contributor issues for Rust-Analyzer and Clippy.

4 Likes

It's the old "Does adaptive language behavior increase complexity meaningfully" debate that is being had regularly. If it goes on long enough it just brings out bad patterns.

I fully agree. It seems to me that the main issue in this pre-RFC is that it can cause confusing code - whether that's similar enum names causing bugs, syntax that looks like the pre-existing leading :: meaning a crate, or just importing bare enum variants like ExampleEnumProperty. From what I see, I feel we can create an RFC out of this, as it would allow further discussion.

1 Like

Even if you call it "Implied enum types", in reality you are asking for "inferred Path" or "inferred partial path".

  1. The proposed syntax is strange: underscore _ mostly means "don't care", as in "don't evaluate this, I never use it". Maybe the double dot .. is handier, with the meaning "find the rest implicitly by yourself":
use_foo(..::Disabled);
  2. Since searching for this path has a cost, it's mostly handy in prototyping.
  3. It has a much wider use:
type G = std::boxed::Box<dyn std::ops::FnOnce(isize) -> isize>;

let now = ::std::time::Instant::now();

let s : String = String::from("some string");

if (n as u64) > (::std::u32::MAX as u64) { }

AllocError { layout: alloc::alloc::Layout,     },

fn check_size<T>(val: T)
where  Assert<{ core::mem::size_of::<T>() < 768 }>: IsTrue,

match foreign_item_kind {
    ForeignItemKind::Static(a, b, c) => ItemKind::Static(
        StaticItem { ty: a, mutability: b, expr: c }.into()),
    ForeignItemKind::Fn(fn_kind) => ItemKind::Fn(fn_kind),
    ForeignItemKind::TyAlias(ty_alias_kind) => ItemKind::TyAlias(ty_alias_kind),
    ForeignItemKind::MacCall(a) => ItemKind::MacCall(a),
}

we could cut a bit for easier reading

type G = ..::Box<dyn ..::FnOnce(isize) -> isize>;

let now = ..::time::Instant::now();

let s : String = ..::from("some string");

if (n as u64) > (..::u32::MAX as u64) { }

AllocError { layout: ..::alloc::Layout,     },

fn check_size<T>(val: T)
where  Assert<{ ..::size_of::<T>() < 768 }>: IsTrue,

let item_kind = match foreign_item_kind {
    ForeignItemKind::Fn(fn_kind) => ItemKind::Fn(fn_kind),
    ..::Static(a, b, c) => ..::Static(
        StaticItem { ty: a, mutability: b, expr: c }.into()),
    ..::TyAlias(ty_alias_kind) => ..::TyAlias(ty_alias_kind),
    ..::MacCall(a) => ..::MacCall(a),
};
  4. This must cause a compile error if an ambiguity exists in the match, i.e. if several paths satisfy the known type boundaries.

You are right that the feature can be generalized, but do we really want it? The reasoning for enum variants is that they live in a different module and so they are less ergonomic to use, but this doesn't apply in general.

_ already means inferred type, so it doesn't seem strange at all. Meanwhile .. means "the rest", not "infer", so that would be strange to me. Also, ..::Disabled already parses as a range with end ::Disabled. Although ::Disabled must refer to a crate and thus this is never semantically correct, syntactically it is already allowed and would be weird to parse.

Another reason I don't like this syntax is that it is difficult to read: there are a lot of dots in ..::

6 Likes

I originally dived in simply because I saw objections to this feature being dismissed with "if you don't like it, don't use it", which to my mind is a bad way to design language features - notably because I read much more code than I write, and thus if I don't like it, I have to get everyone I work with to not use it for this to apply, and transitively, that ends up removing the feature.

Having now thought about this feature a lot, I have two hard questions:

  1. Is this only going to apply where the type is fixed in the signature, or is it going to apply where a generic type has been monomorphized to an enum? Put differently, do you expect let mut v: Vec<FooEnum> = Vec::new(); v.push(_::Foo); to work?

  2. Are we going to "look through" traits to find possible enums? For example, if I have m: HashMap<FooEnum, BarEnum>, is the compiler going to accept m.get(&_::Foo);, and determine that, because FooEnum: Borrow<FooEnum>, it's possible that _ should be FooEnum in this case? If we do look through traits, how do we handle the case where I have two possible choices for which enum _ should be (e.g. for m.get(&_::Foo), both impl Borrow<BarEnum> for FooEnum and impl Borrow<QuuxEnum> for FooEnum are in scope, and both QuuxEnum and BarEnum have a Foo variant)?

These two questions are hard, because the answers affect how usable the feature actually is in practice. If the answer to question 1 is no, then the feature becomes very limited in scope - even the standard collections don't work with it, and I suspect it'll push people to a very different style of coding as a result.

The second is the nasty one, because there's a SemVer hazard hidden in it; if the answer's "no, we don't look through traits", then you have to explain why m.insert(_::Foo, _::Foo) is OK, but m.get(&_::Foo) is not. If you do look through traits in this sense, then adding a trait implementation to any enum is potentially a breaking change, since by adding the trait implementation, I can cause previously valid and unambiguous code to either fail to compile, or change in meaning (depending on how you handle the case where two enums could be chosen).
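
For context on question 2, here's roughly how that Borrow-based lookup already behaves in today's Rust (illustrative types; get's signature is approximately fn get<Q>(&self, k: &Q) where K: Borrow<Q>, which is what would force the "look through traits" decision):

use std::collections::HashMap;

#[derive(PartialEq, Eq, Hash)]
#[allow(dead_code)]
enum FooEnum { Foo, Bar }

fn main() {
    let mut m: HashMap<FooEnum, u32> = HashMap::new();
    m.insert(FooEnum::Foo, 1);

    // With an explicit variant, Q is pinned to FooEnum by the argument itself,
    // not by the signature; the signature only requires FooEnum: Borrow<Q>.
    let _ = m.get(&FooEnum::Foo);

    // The same signature happily accepts a different key type when a Borrow impl
    // exists, e.g. String keys looked up by &str:
    let mut s: HashMap<String, u32> = HashMap::new();
    s.insert("radio".to_owned(), 2);
    let _ = s.get("radio");
}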

1 Like