Pre-RFC: Untyped constants



  • Start Date: 2015-01-15
  • RFC PR: (leave this empty)
  • Rust Issue: (leave this empty)


Allow “typeless constants”, which sidestep the issue of having to define a separate constant per type. This helps ergonomics in the case of the common C pattern of using #define for constants.


The biggest pain this intends to solve is #define-based constants. Additionally, some C APIs, such as OpenGL, often typedef an intrinsic integer type, but due to integer promotion rules in C, functions that take that typedef as a parameter will accept an int, even if the typedef is to an unsigned int. This causes usability issues for automatically generated or precise FFI bindings.
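To illustrate the status quo, here is a sketch with a hypothetical hand-written binding (the constant name and function are illustrative, not from a real crate): once the constant is pinned to one concrete type, every use at a different width needs an explicit cast.

```rust
// Hypothetical binding: the constant must be given one concrete type up
// front, even though C callers would use it at several integer widths.
const GL_TEXTURE_2D: u32 = 0x0DE1;

// Stand-in for an FFI function whose C signature takes `int`.
fn bind_texture(target: i32) -> i32 {
    target
}

fn main() {
    // The cast is required even though the value plainly fits in an i32.
    assert_eq!(bind_texture(GL_TEXTURE_2D as i32), 0x0DE1);
}
```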

Detailed design

Change the syntax of const to:

const_item : CONST id EQ expr | CONST id COLON type EQ expr

When the optional type is not provided, the const is said to be "typeless" and the expr can only be a numeric literal. When such a const is used, its type will be a new ty_infer which is either an IntVar or a FloatVar depending on which kind of literal the const has.

This is roughly similar to what Go does.
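Concretely, under the proposed grammar a typeless const and two uses might look like this (hypothetical syntax; this does not compile in today's Rust):

```rust
const MASK = 0xFF; // typeless: the literal's type is an unresolved IntVar

fn low_byte(x: u8) -> u8 {
    x & MASK // MASK is inferred as u8 here
}

fn low_word(x: u64) -> u64 {
    x & MASK // and as u64 here, from the same constant
}
```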



Drawbacks

The status quo isn’t too bad, but it does become annoying.

Unresolved questions

  • Pre-RFC: Elide type annotations from const and static Items
  • Getting rid of unnecessary explicit conversions with named constants

One serious drawback is that const inference can’t be implemented after this change.


I’m unfamiliar with what “const inference” is. Can you explain it to me? In particular, I expect that something involving inference (and not merely being typeless), would use const FOO: _ = ....

const VALUE = "string";

(Just like inference looks in other places. In this case it would be restricted to auto-style inference from the initializer, as in C++.)


I’m unsure how what I propose means that this cannot be implemented. What I have proposed here is exactly that feature, but currently limited to numeric constants (since no other literals have types that vary by context the way numeric literals do).


const inference meaning that the const VALUE = "string"; is semantically equivalent to const VALUE: &'static str = "string";. This means that constants are still typed, like before, just that the type is inferred by the initializing expression.


That seems better served by const FOO: _ = ..., and it would be more consistent with other syntax.


No, I don’t think it would be more consistent, inference does not need a _ to infer the type otherwise. You don’t have to write let maybe_str: _ = Some("a"); either.

But that’s kind of orthogonal. Unless we want to discuss how this const type inference feature will look syntactically, using this syntax (which is definitely viable for it) for something else is a drawback, because it won’t be available for the other feature later.


One thought I’ve had for a while is having semi-magical “compile-time only” types.

Imagine for a moment that integer literals are actually of type IntegerLiteral. IntegerLiteral has infinite (practical) precision. It’s also flagged in some way as CompileTimeOnly: instances of it cannot exist in the final output. As a relevant aside, note that const items can be viewed as not making it into the output.

So, an IntegerLiteral would have rules regarding how it decays into the primitive integer types (u8, i32, usize, etc.), defaulting to i32 in the absence of any more specific conversion. It would probably also be useful for it to decay into something like an IntegerLiteralStr type (newtype around &'static str), for the use of things like bignum types.

This would mean that const LIFE = 42; would be equivalent to const LIFE: IntegerLiteral = 42; and, in any context where it was actually used, could decay into an appropriate runtime type based on context.
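A sketch in today's Rust of how such decay might behave: `IntegerLiteral` is modeled here as an i128 and the trait name is made up for illustration, but the range checks mirror what a real coercion would need.

```rust
// Made-up trait simulating "decay" from a wide literal type into the
// primitive integer types.
trait CoerceFromLiteral: Sized {
    fn coerce(lit: i128) -> Option<Self>;
}

impl CoerceFromLiteral for u8 {
    fn coerce(lit: i128) -> Option<Self> {
        u8::try_from(lit).ok()
    }
}

impl CoerceFromLiteral for i32 {
    fn coerce(lit: i128) -> Option<Self> {
        i32::try_from(lit).ok()
    }
}

fn main() {
    // 300 fits in an i32, but decaying it to u8 must fail.
    assert_eq!(<i32 as CoerceFromLiteral>::coerce(300), Some(300));
    assert_eq!(<u8 as CoerceFromLiteral>::coerce(300), None);
}
```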

If/when Rust gets CTFE, such types could be moved into the standard library, complete with custom coercions, allowing things like bignums to be initialised directly from literals.

And it wouldn’t interfere with const inference. :slight_smile:


This also depends somewhat on how Rust will implement literals as type-parameters, for instance as:

struct Array<T: Copy, n: UINT>([T; n]);

Here n would have to be a compile time constant. The “trait” for n is UINT, which stands for unsigned compile-time integers.
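For comparison, Rust did eventually gain a form of this feature via const generics; a sketch of the Array example using the stabilized syntax (with the concrete type usize in place of the hypothetical UINT bound):

```rust
// The value parameter N must be a compile-time constant of type usize.
struct Array<T: Copy, const N: usize>([T; N]);

fn main() {
    let a = Array::<u8, 3>([1, 2, 3]);
    assert_eq!(a.0.len(), 3);
}
```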


@tbu I think your and @cmr’s proposals can go hand in hand. If you want a typeless int, adding a u32 or similar suffix is a type error; on the other hand, if you want to use type inference, not adding a suffix is also an error, because inference is not possible then.

const x = 235;

Clearly a literal/untyped constant.

const x = 235u32;

Clearly a u32 inferred constant.

Having a compile-time literal int type, as mentioned by @DanielKeep, would make this syntax more sound, but it is not needed. Note that a literal type might bring some minor problems for the case mentioned by @engstad. E.g. if you pass a literal to a function, the function parameter has a limited-precision type but the literal has theoretically unlimited precision, so you would have to track down each usage of the literal to determine whether a generic parameter is valid. For clarification:

fn buz<n: IntegerLiteral>(x: bool) {
    if x {
        let a: u32 = n;
    } else {
        let b: u64 = n;
    }
}
This would mean that a type in the method signature is indirectly limited by the implementation, or you would have to use casts when assigning, which in turn might lead to unexpected behavior.

Nevertheless, having compile-time fixed-size int types for generic parameters would be nice.


I think your example should be valid, but it’s tricky because this idea really depends on having CTFE. It’s possible to implement without CTFE by adding special cases in the compiler, but it should still retain the same semantics.

If this was done with CTFE, I’d expect it to actually desugar to something like:

let a: u32 = const {
    let maybe_u32: Option<u32> =
        <u32 as CoerceFrom<IntegerLiteral>>::coerce(n);
    maybe_u32.unwrap()
};

The const there is just made-up syntax to force a given expression to be evaluated using CTFE. As such, the compiler would have no way of knowing whether or not that piece of code can be evaluated at compile-time before it substitutes n and tries it.


Btw, this const inference isn’t my idea, I just read it in the issues list once:


For FFI bindings such as winapi the lack of untyped constants is a major annoyance. I don’t even need general untyped constants, just integers is sufficient.

#define FOO 273

What am I supposed to translate this into? What type do I assign to the constant? Currently this involves making a decision for every single constant on what type to assign it to minimize casting for users of it, but sometimes there is no right decision due to constants being used in multiple scenarios with different types. With untyped constants this would no longer be a concern. There’s over twenty five thousand constants in winapi already, and there will only be more added so please make my life easier.

Basically, I need to be able to define an integer constant without specifying a type, and then whenever you use that constant somewhere it is treated like an integer literal and the type is inferred on the spot using standard type inference. If multiple pieces of code use the same constant but with different types, then it should still work just fine. I don’t even particularly care about the syntax, just as long as consumers of those constants can use them just like any other constant.


If we’re being literal, macro_rules! FOO(() => (273)); but this has to be invoked as FOO!().

CPP makes a distinction between object-like and function-like macros – maybe there’s room for that in Rust’s macro syntax too, like macro_rules! FOO => (273); so you could just use FOO! here.
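The macro workaround mentioned above does work today, because the macro body is an untyped literal and each expansion is inferred independently (a minimal sketch):

```rust
// Each expansion of FOO!() is a fresh integer literal, so the type is
// inferred separately at every use site.
macro_rules! FOO {
    () => {
        273
    };
}

fn main() {
    let a: u16 = FOO!(); // inferred as u16 here
    let b: u64 = FOO!(); // and as u64 here
    assert_eq!(a as u64, b);
}
```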


No, Rust macros are explicit, and for C the macro is only an unfortunate implementation detail, not the goal. Copying that to Rust copies the bad aspect of it and makes it worse.

I’d prefer if Rust had constants more like Go. Rust already has a pseudo-type for integer literals, so allowing it in constants would make sense IMHO.


Note that @retep998 opened an RFC:

While I do think object-like macros could be nice, it’s orthogonal to that RFC. In any case, I was not suggesting removing the explicitness of macro invocation. FOO! is still explicit, just not function-like.


I’ve always wondered why consts declared within a function need to be explicitly typed. It seems like type inference should be no different between them and a normal immutable let binding. I suppose a decent argument is to keep it consistent across all consts.

For example:

fn main() {
    let x = 1;        // type inferred
    const Y: i32 = 1; // annotation required
}

I find myself wanting to use const as much as possible for the potential to avoid allocations, but then I rub up against the less ergonomic reality of losing type inference.


The fact that a definition is nested within a function changes nothing about it, it’s purely for scoping. Any deviation from that would be a potentially useful but quite complex language change.


Just for the record (you may know this), there’s generally no benefit to using const for small values like integers, because those usually don’t need to live in memory in the first place; they’re just loaded into registers with load-immediate instructions. It might make a difference if you end up taking a reference to the constant, but probably not a significant difference. That also goes for pointers, including strings (since the contents of strings are already statically allocated). So const is only beneficial for arrays and large structs.

That said, I think inference would be nice. As @eddyb noted, it could go hand in hand with other changes, such as allowing consts, typedefs, and other items nested within functions to use generic parameters from the function, which would be somewhat complex to implement but very useful.