Implicit widening, polymorphic indexing, and similar ideas

Forking from Restarting the int/uint Discussion, because I think these deserve their own. All of them are in some way about treating code mixing different numeric types more liberally, and (for most of them) about thinking of the various fixed-size integer types as restricted subsets (intervals) of the full mathematical integers, which are thereby related in various ways, instead of as independent domains which must be explicitly cast between.

Implicit widening is the idea that an integer type A should be automatically coerced to another integer type B if the conversion is “lossless”: if B can represent all of the values which A can. /u/sivadeilra gave a persuasive argument for this on reddit. Concretely, this would mean that the following automatic coercions would be added (a short sketch of the effect follows the list):

  • i8 -> i16
  • i8 -> i32
  • i8 -> i64
  • i16 -> i32
  • i16 -> i64
  • i32 -> i64
  • u8 -> u16
  • u8 -> i16
  • u8 -> u32
  • u8 -> i32
  • u8 -> u64
  • u8 -> i64
  • u16 -> u32
  • u16 -> i32
  • u16 -> u64
  • u16 -> i64
  • u32 -> u64
  • u32 -> i64
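
For concreteness, here is roughly what the change would buy. This is a minimal sketch assuming only the coercions above; sum_micros is a hypothetical function, not anything proposed in the thread.

// A minimal sketch of the ergonomic difference (sum_micros is hypothetical).
fn sum_micros(total: i64) -> i64 { total }

fn main() {
    let delta: i32 = 1_500;
    // Today: the narrower type must be cast explicitly, even though the
    // conversion can never lose information.
    let _ = sum_micros(delta as i64);
    // With implicit widening, `sum_micros(delta)` would also be accepted,
    // and a remaining cast would signal a potentially lossy narrowing.
}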

A delicate question is how to handle integers of platform-dependent size, namely the current int and uint (which will hopefully, as a separate matter, be renamed). Having the legality of those conversions depend on the target platform seems undesirable (not portable); doing automatic, potentially lossy conversions does likewise; and so requiring explicit casts seems like it might still be the least-bad option in this case. (If they have some guaranteed minimum size, then some conversions could be provided on that basis.)

What I’m going to call polymorphic indexing is the idea that arrays (or collections generally) could be indexed by any integer type, not just uint. This was suggested by @cmr on reddit. The thinking here, essentially, is that runtime index-out-of-bounds errors are going to be possible no matter what type you use, so there’s no reason why negative values of signed types, or values exceeding the address space, couldn’t simply lead to the same result.

Integer promotion is (essentially) the C idea that the result of an arithmetic operation between different types has the type of the largest operand type. So, for example, adding an i16 to an i32 would be allowed, and result in an i32. (C’s actual rules are more gnarly, e.g. everything smaller than an int is promoted to an int.) Building on RFC 439, we could do this straightforwardly, if we wanted to, by simply providing impl Add<i16> for i32, and so on, between each relevant pair of types. This is not primarily a way to reduce overflow bugs, but rather, a bit like lifetime elision, just a useful shorthand to avoid having to write so many casts. A delicate question here is whether/how to handle promotion among signed, unsigned, and/or floating point types. (An argument could also be made that multiplication should promote to the next largest type; but this can’t work past 64 bits, and it’s not possible to prevent overflows in this manner in general, so it’s not clear that this would be a good idea on balance.)
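
As a rough illustration, here is the kind of impl this would amount to, written against a local stand-in trait (MyAdd) so the snippet compiles on its own; the actual proposal would provide such impls for the real Add trait.

trait MyAdd<Rhs> {
    type Output;
    fn my_add(self, rhs: Rhs) -> Self::Output;
}

impl MyAdd<i16> for i32 {
    type Output = i32;
    fn my_add(self, rhs: i16) -> i32 {
        // Widen the narrower operand, then add at the wider width.
        self + rhs as i32
    }
}

fn main() {
    assert_eq!(100_000_i32.my_add(-3_i16), 99_997);
}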

Finally, along similar lines, what I’m going to call heterogeneous comparisons could also be enabled by RFC 439. This would be accomplished by having impls of {Partial,}{Eq,Ord} between each pair of integer types, and the result of the comparisons would be as if both operands had been cast to an unbounded integer type beforehand. In other words, you could write n < m where n: i32 and m: u8 and it would “do the right thing”: if n is negative, the result is true. (I got this idea from someone on the internet, as well, but I can’t remember from whom, or where.)
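
A minimal sketch of the intended semantics, using the types from the example above and written as a free function (the proposal itself would provide trait impls instead):

fn lt_i32_u8(n: i32, m: u8) -> bool {
    // i32 can represent every u8 value, so widen the u8 side and compare.
    n < m as i32
}

fn main() {
    assert!(lt_i32_u8(-1, 0));     // any negative i32 is less than any u8
    assert!(!lt_i32_u8(300, 255)); // 300 > 255, so not less
}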


Is this an argument for encouraging more use of i64 (either conventionally or by making it the default)? It seems to solve the biggest problem with it.

I just made an argument on the other thread that this is actually not the case. A slightly edited version below:

Failing a build on an architecture smaller than that envisioned and tested by the original programmer is exactly what you want to happen. If somebody only built and tested their code on a 64-bit platform, and made choices that are known not to be safe on a 16-bit one, then I'd much rather the code fail to compile than compile and then fail at runtime.

Infrastructure could materially assist this approach, and the result would be an ecosystem of code that would far more reliably work across architectures.

Travis CI could be leveraged to do builds on all target platforms, and its output could be automatically used by cargo and crates.io to provide useful error messages about why a dependency fails. It could display a red/green list of architectures on the crates.io site to indicate which architectures each package is safe for.

Everything about this approach seems appealing to me. It would greatly encourage pull requests to fix architecture dependency issues by exposing those issues and making them first-class artifacts of the dependency infrastructure.

To put this another way, it is impossible for Rust to ensure that all code decisions made for platform X are possible to run (safely) on platform Y. Therefore we should not try to totally hide the issue of portability, but instead make it an explicit feature exposed by the build, documentation, and dependency systems.


I think the argument for “just do it at runtime” (as I remember it from @cmr) was that either you have a real index-out-of-range error, or you will discover an out-of-memory error before the overflow in the cast. There weren’t situations (that came up) where overflow in downcasting was the root problem.

I’m not sure the right way to say “+1”, but … that. Widening and indexing would largely remove the occurrences of uint from my code, and I could no longer have an opinion on the other thread, which would be great! +1 +1 +1 … boom.

C’s widening rules cause significant confusion about where the widening occurs (classically, you have things like void *malloc(size_t); int a, b; malloc(a*b), with the multiplication overflowing before the widening). I would prefer Rust not to have these problems.

However, allowing indexing with any integer type should be harmless for both correctness and performance, as long as we do the bounds checks on the wider type.

Maybe we could have something like

// Convert any integer index to uint, signalling an unrepresentable value
// (e.g. a negative one) with None so it becomes an ordinary bounds failure.
trait AsUint {
    fn as_uint(&self) -> Option<uint>;
}

impl<I: AsUint, T> ops::Index<I, T> for [T] {
    fn index(&self, index: &I) -> &T {
        let u = index.as_uint();
        match u {
            // In range: index the underlying buffer directly.
            Some(i) if i < self.len() => unsafe { mem::transmute(self.repr().data.offset(i as int)) },
            // Negative, too large, or unrepresentable: the usual bounds failure.
            _ => fail_bounds_check(self.len(), u),
        }
    }
}

Though AsUint would need to be a lang item for the compiler-inserted indexing.


IMO implicit integer widening is just an obviously good idea. I continue to be surprised that Rust doesn’t have it. There’s no room for error (it’s a strictly lossless conversion), it’s just a pure usability win. C# got this right and I’ve never heard anyone complain about it in that language’s community.


I would feel that auto-widening integer types would reduce some of the safety that Rust gives. I’ve run into a lot of what end up being security bugs in C code similar to:

let a: u16 = ...;
let b: u16 = ...;

... = malloc(a * b);

This is probably slightly less of a problem in Rust because most code will have bounds checking. Although the conversions are slightly annoying, they do require the programmer to at least think about where the conversion occurs.
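
To make the concern concrete, here is a small self-contained check; wrapping_mul stands in for what an unchecked release-mode multiply would do.

fn main() {
    let a: u16 = 300;
    let b: u16 = 300;
    // 300 * 300 = 90_000, which wraps to 24_464 at u16 width (90_000 mod 65_536).
    assert_eq!(a.wrapping_mul(b) as usize, 24_464);
    // Widening each operand first avoids the overflow entirely.
    assert_eq!(a as usize * b as usize, 90_000);
}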

The above AsUint trait doesn’t really help, because the conversion will still happen after the overflow.

FWIW, consider this a -1 from me.


It might work in some cases to let the result of multiplication be a wider type. I can’t think of a way right off the bat that would make that work well, but at least in this example leaving out implicit widening solves the problem as well.

Conversions between numeric types add complexity to the core language and reduce explicitness, so I’d be very conservative on the way of adding them to the language. (I outlined some of my concerns here, some of them may be irrelevant after the operator reform.)

If people really feel "as uint"s becoming too common in their code, then some less intrusive measures can be taken. As a first step, polymorphic bounds-checked indexing and heterogeneous arithmetical and logical operations will hopefully make >90% of these people happy and they do not require implicit conversions in the language.

If it is not enough, generic conversion traits proposed by @aturon can be used instead of built-in implicit conversions. They are more general, don’t treat built-in and user defined types differently, don’t mess with the core language and, most importantly, give users a choice - to allow or to disallow conversions in each particular case.

Also, interactions between signed and unsigned types and between pointer-sized types and everything else (not in the case of bounds-checked indexing) still look suspicious and could be banned from heterogeneous arithmetical/logical operations and generic conversion traits.

That wasn't the actual motivation, but it seems like a consequence; it would make working with non-uint number types in general less painful, and i64 happens to be one of them.

Yeah, I did see it. The argument sounds plausible to me, but I don't have the expertise, and haven't thought long and hard enough about it, to truly judge.

@arielb1, @d3zd3z, @petrochenkov: Thanks for pointing out this issue with widening / promotion! (Which are distinct ideas, but very much in the same space.)

I suspect that the true problem here is not actually with widening/promotion itself, but with the types of the arithmetic operators (equivalently, with the definitions of their respective traits), and that the requirement for explicit casts is just an ugly bandaid for this fundamental issue. The fact that the type of an arithmetic expression is derived solely from the types of its operands has always struck me as a bit curious, and as likely to be a historical holdover from how C/C++ happen to do things, but these examples now make me think that this is not just a curiosity, but possibly an actual design flaw.

All of the given examples, as far as I can tell, would work correctly if arithmetic operations were polymorphic in their result type: that is, if the result type were an input type (parameter) of the given trait rather than an (associated) output type. For example, instead of trait Mul<RHS> { type Result; ... } we would have trait Mul<RHS, Result> { ... }, with impls wherever narrowing isn't involved. To make the mentioned examples work, there would be an impl Mul<u16, usize> for u16, with mul defined as lhs as usize * rhs as usize: which is what we would currently write by hand, given the explicit-casts requirement. (I'm assuming for this example that the conversion from u16 to usize is lossless, but so do the mentioned examples, otherwise implicit widening wouldn't be in play; the principle itself is more general.)
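
A compilable sketch of that shape, using a local stand-in trait (MulTo) rather than the real Mul:

trait MulTo<Rhs, Result> {
    fn mul_to(self, rhs: Rhs) -> Result;
}

impl MulTo<u16, usize> for u16 {
    fn mul_to(self, rhs: u16) -> usize {
        // Exactly what one would write by hand today with explicit casts.
        self as usize * rhs as usize
    }
}

fn main() {
    let (a, b): (u16, u16) = (300, 300);
    let n: usize = a.mul_to(b); // no intermediate u16 overflow
    assert_eq!(n, 90_000);
}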

Now this leads me to wonder just how badly doing this would screw up type inference.

cc @aturon: I hope you might have some thoughts about all of this.

I think straight-up widening is too confusing, but I would totally be down for having separate heterogeneous operators. These would be syntactically distinct, e.g. **, ++, --. For arithmetic: whereas the normal operators might have an associated output type, these would have both the arguments and the return type as input types. Also, they could only be implemented for the combinations where overflow is impossible, giving the programmer some handy guarantees and keeping them from overlapping with the normal ones. For == it would simply be backed by a heterogeneous equality trait.
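
A sketch of the arithmetic part with a local stand-in trait (WideAdd is a hypothetical name): implemented only where overflow is impossible, with both the argument and the result types as inputs to the trait.

trait WideAdd<Rhs, Out> {
    fn wide_add(self, rhs: Rhs) -> Out;
}

impl WideAdd<u8, u16> for u8 {
    fn wide_add(self, rhs: u8) -> u16 {
        // Cannot overflow: 255 + 255 = 510 fits comfortably in u16.
        self as u16 + rhs as u16
    }
}

fn main() {
    assert_eq!(255_u8.wide_add(255_u8), 510_u16);
}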

As an aside, I’d like to mention at least x86 (32 bit) has built-in support for u32 -> u32 -> u64 multiplication.
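
In current Rust that operation is usually spelled by widening the operands first; whether the backend emits the single-instruction form is up to the code generator.

fn widening_mul(a: u32, b: u32) -> u64 {
    a as u64 * b as u64
}

fn main() {
    assert_eq!(widening_mul(3_000_000_000, 3), 9_000_000_000);
}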


I do indeed! Let's call the proposal where you widen eagerly based on return type "return type polymorphism".

Should the result type of operators be an associated or an input type?

Putting aside the specifics of the proposal here, you've made me question the design in ops reform which is moving us to associated result types. (Note that this has not yet landed but there is a PR poised to do it). I'm not sure we can make a decision about your proposal right now, but moving to associated types will make it impossible to carry out in the way you suggested. So what are the pros/cons of associated types in general?
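
To make the comparison concrete, the two shapes look roughly like this (local stand-in traits, not the actual std definitions):

// Associated ("output") type: the result is uniquely determined by the impl
// chosen for the input types, so it never participates in dispatch.
trait AddAssoc<Rhs = Self> {
    type Output;
    fn add(self, rhs: Rhs) -> Self::Output;
}

// Input type: the result is a type parameter, so impls may differ only in
// their result type, which is what enables return type polymorphism.
trait AddInput<Rhs = Self, Output = Self> {
    fn add(self, rhs: Rhs) -> Output;
}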

Pros of associated types

  • Makes it easier to bound by a family of connected types. Doesn't really apply to operators, because the default type parameters do this for you: right now when you write T: Add you get T: Add<T, T>, giving you a similar ergonomic benefit to associated types (albeit only for a particular, common case).

  • Makes it easier to reason about your code, because associated types are uniquely determined from input types. For this particular case, it means that you do not have to take the return type into account when figuring out what a piece of code using e.g. + does. See below for more.

  • Aids type inference. Because associated types are uniquely determined by input types, the compiler can drive inference from inputs to outputs. We've worked hard to make this the case for input types as well when there is a unique impl, but it's not foolproof and can fail in edge cases.

  • In certain cases reduces coherence problems for blanket trait impls. I don't recall the specifics at the moment, but this motivates the associated type for Iterator, for example.

Cons of associated types

  • Less overloading flexibility. In this particular case, committing to associated types means that the built-in operators can never be overloaded by return type.

For the existing operator traits, we have the benefit of experience: they've been using all input types for some time now, without any annotation burden cropping up. That means it should be relatively safe to stick with input types (leaving the door open to using return type polymorphism on built-in types later, but not committing to it).

Can return type polymorphism be used without introducing an annotation burden?

It's not entirely clear. To make it work, you'd need for default type parameter fallback, general type inference, and trait matching to all work in concert in a case like:

let x: u16 = ...
let y: u16 = ...
let z = x + y; // how is the type here determined? there's a dispatch ambiguity

We'd have to carefully think about the global consequences of using default type parameter fallback to resolve trait matching ambiguities.

Is return type polymorphism on numeric operations a good idea?

I'm not sure. Certainly for the examples earlier in the thread it saves you from an unintended overflow. Given our new rule that says overflows on release builds produce unspecified values, widening early (and thereby producing a different, "better" result) is acceptable -- if you still think of the code as written as producing an overflow.

On the other hand, I worry about:

let x: u8 = ...
let y: u16 = ...
let z: u32 = ...
let w: u64 = x * y * z; // quick, what widenings are happening here?

In other words, it may become very difficult to tell when you've correctly widened to avoid overflow. The answer to the question above likely depends on the intricate interplay of inference, default fallback, and trait matching. (Layered on top of widening coercions, for that matter -- you'd still want those when e.g. calling a non-operator function with a smaller int type).

All told, this design feels pretty shaky to me. I'm more confident in the idea of introducing implicit widening together with a lint to catch the problematic cases raised in this thread. (Or perhaps multiple lints). More on that soon.

On the other hand, I'm reasonably comfortable with leaving the operator traits using all input types, since we know it works. (We can somewhat avoid the "harder to reason about the meaning of +" issue by simply not using result polymorphism in std.)


Oh, I wasn't aware of that. The old trait type parameters were "output types" in the way that associated types now are; apparently in the transition, the operator traits kept their syntax and changed their semantics?

Could you spell out of how default type parameter fallback would/might operate here? (They seem straightforward for functions and types, but default types in traits (and can you have them in impls too?) are harder for me to wrap my head around.)

Either way, I was assuming that in cases like this, the type of z would itself be inferred from its use site. (So type inference would flow "backwards" relative to control flow, just as with the operators themselves. Which is pretty standard in languages with more sophisticated inference than C++.) If various defaults have to be involved, that makes me a bit uneasy, but maybe seeing it spelled out could ease my unease.

I think the fundamental idea here would be precisely that you wouldn't have to think about it: that if you choose a wide enough result type, the underlying mechanisms will ensure that the operand types will always be appropriately widened before the operations are performed. (And a cast is required only if narrowing would take place; conversely, if you see a cast, it is a telltale sign of narrowing.) One would have to think through the details of those "underlying mechanisms" to figure out whether/how such a "peace of mind" guarantee is actually possible to provide, but it sounds at least plausible to me.

(With return type polymorphism, there's now a fifth idea in addition to the four in the original post (albeit closely tied to some of them); do you have any opinion about the others?)

Yes, exactly. Multidispatch landed long before associated types.

To get forward inference there, you'd have to say something like: when multiple trait impls could match but there's a default type parameter, use the default to inform which impl to choose. But that would add more subtlety to an already pretty complex dispatch system.

As far as forward versus backward inference, the point is that right now we can flow in the forward direction for these operators, and that's at risk if we make this change. The global implications are not entirely clear to me.

That's appealing, but I'm not sure whether we can get there. My basic worry is that we're placing widenings at the mercy of a quite complicated mix of inference, dispatch and defaults, and there will be corner cases that go astray.

Another thing that @nmatsakis has pointed out to me is that, although inference with our current traits is working reasonably well today, if we do not change to associated types, that may limit our ability to tweak inference in the future. In other words, keeping them as input types means we're relying on the somewhat aggressive, heuristic inference we're doing right now, for fundamental operators of the language. We risk boxing ourselves in. That changes my "it's working OK today so we're probably OK tomorrow" attitude.

Ah, sorry! My basic feelings are:

  • Implicit widening, paired with (possibly multiple) lints, seems like a clear improvement. Targeted linting is IMO likely to be more effective in catching mistakes than requiring coercions everywhere (which will lead to blind coercions). I'm currently working on an RFC that captures the summary of the int discussion as well as implicit widening (and some other things) and will spell out my thoughts in more detail there.

  • Polymorphic indexing: seems unobjectionable to me. Given that we're doing the bounds check anyway, I see very little downside.

  • Promotion and heterogeneous comparisons: if we do implicit widening, I believe that these will both fall out for free.

I’m a bit unclear on this: if we accept implicit widening, what exactly would polymorphic indexing entail?

Suppose you have:

let v: Vec<Something> = ...
let i: u64 = ...
v[i]

Implicit widening alone would still keep this as a type error. It might let you go u32 -> usize but not u64 -> usize implicitly (if we bake in a >= 32-bit arch assumption). But there's no harm in indexing with a u64 as long as you check for overflow before truncating -- it's basically just like the bounds check you do anyway.
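
A rough sketch of that "check before truncating" behavior, written here as a free function over slices:

fn index_u64<T>(slice: &[T], i: u64) -> &T {
    // Compare at the index's own width, so an oversized u64 fails the bounds
    // check instead of being silently wrapped on a 32-bit target.
    assert!(i < slice.len() as u64, "index out of bounds");
    &slice[i as usize] // safe to truncate: i < len <= usize::MAX
}

fn main() {
    let v = [10, 20, 30];
    assert_eq!(*index_u64(&v, 2), 30);
}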

So polymorphic indexing further improves the ergonomics beyond what implicit widening provides.

Could you elaborate on this a bit? Are there confusing examples that've bitten you in the past?

I'm very wary of introducing implicit widening. While it does seem like it could lead to better ergonomics, it also seems like it could cause complications with inference and also result in confusion about when widening occurs. Consider the following code:

fn some_func(_: i32) {}

let a: i16 = 23;
let b: i16 = 56;
let c = a + b;
some_func(c);  // Today, this is an error

What happens? Is c inferred to be an i16, as it would be without the function call, and then widened at the call site? Is c now inferred to be an i32, so that it's the result of a + b that gets widened? Is the compiler fully eager, widening the values of a and b before performing the addition (today, if you omitted the types on a and b, then a, b, and c would all be inferred to be i32s)? If c is inferred to be an i32, would later adding a function call that passes it for an i16 parameter cause the inferred type to silently change from i32 to i16?

I'd like to see polymorphic indexing, heterogeneous comparisons, and possibly limited heterogeneous arithmetic (it might be nice to support operating on signed ints, unsigned ints, or floats of different sizes, yielding a result of the larger type, but I think mixing them should still require casts, with the possible exception of mixing an unsigned int with a larger signed int). I feel like this will probably clear up a lot of the ergonomic issues, and can be achieved solely by providing additional trait implementations beyond those proposed by RFC 439. Implicit widening feels like a much more significant change to the language, with a lot more considerations to take into account and worry about getting right. I think it should only be considered if things are found to still be too painful after the other ideas have been experimented with.

I believe implementing heterogeneous comparisons would allow things that implicit widening wouldn't, such as comparing an i64 to a u64 and having it do the right thing.
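
A sketch of why this case is beyond widening: no primitive integer type holds every i64 and every u64, so the comparison itself has to be sign-aware (written here as a free function; the proposal would put this behind the comparison traits).

fn lt_i64_u64(n: i64, m: u64) -> bool {
    // Any negative i64 is less than any u64; otherwise both values fit in u64.
    n < 0 || (n as u64) < m
}

fn main() {
    assert!(lt_i64_u64(-1, 0));
    assert!(!lt_i64_u64(1, 0));
}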


Great stuff, Gábor! These should improve ergonomics and safety.

Implicit widening sounds great to me for the reasons above including not burying narrowing casts in a sea of unneeded casts.

+1

Is the reason why Rust doesn't do it already to force you to be explicit about which stage to widen your computation?

Having implicit conversions to/from platform-dependent integers depend on the target platform is quite desirable. Such code is not portable.

Polymorphic indexing sounds great.

To tweak that, indexing with an i64 on a 32-bit target would just do the array bounds check, no more and no less. No need for a narrowing pre-check. (BTW the compiler could optimize the test 0 <= index < length into one unsigned comparison at the width of index.)

Please don't bake in a >= 32-bit architecture assumption. Rust is so much safer than C/C++ for embedded systems.

Integer promotion sounds great.

Yeah, it's complicated. Do programmers remember that rule when coding math? OTOH it avoids some overflows in intermediate values and results.

Multiplication forced-promotion to the next largest type could be more of a problem than a solution, and why for multiplication but not addition and subtraction? Programmers can manually widen an operand before multiplying when needed.

Heterogeneous comparisons sound great. When a C++ compiler complains about signed/unsigned ambiguity in n < m, it takes me a long time to figure out how to fix it. The compiler can handle it reliably.

Return type polymorphism

Scary! That's way more complicated and unpredictable than C's "promote to int" rule.

(There's a long StackOverflow thread asking if Haskell has target typing. Some of the discussion goes over my head but the message is: No, it has something more indescribable and unpredictable, where a type probability function collapses over the entire program into an eigenvalue. But I digress.)