Pre-RFC: Elide type annotations from const and static Items


As part of the Rust ergonomics initiative, I’m working on an RFC to allow type annotations on const and static items to be elided. I’d like to get some feedback on a few particular issues I’ve come across, as well as overall thoughts on the idea.

(Note: there’s some previous discussion of this issue from before I was involved.)

Here’s the high-level plan: if a unique type can be inferred for a const/static declaration based only on the right-hand side of the declaration (including any known types for any variables/functions in the RHS), then the type can be elided. So the following would type check:

const GREETING = "Hello"; // infers to &'static str

fn get_greeting() -> &'static str {
    GREETING
}

Unlike variables defined with let, though, this would use only local type inference: it would not attempt to infer a type based on a later use of the item. For example, the following would result in a type error, because there are multiple possible types for the literal 42 (e.g. i16, i32, etc.), even though the use in get_big_number would require it to be i16.

const THE_ANSWER = 42; // nothing in RHS indicates this must be i16

fn get_big_number() -> i16 {
    THE_ANSWER
}

So, first issue: today, the only thing preventing closures (rather than merely references to closures) from being const/static items is that there’s no way for a programmer to write down their type. This change would remove that barrier. From a technical standpoint there seem to be no difficulties, but how would the type of a closure be documented in something like rustdoc?

Second issue: AFAIK, this would be the first change that would allow items exported from a crate to have an inferred type. Without careful design, this could cause unintended issues. For example, say I have a crate that exports a constant FOO with type i64, but FOO is never used in the crate itself. If I remove the type annotation on FOO (and we assume the compiler infers some default type like i32), I can compile the crate without error, but downstream consumers of my crate might get compiler errors.

From what I can tell, this is an issue only for numeric literals (but please correct me if I’m wrong). I’ve thought of three different options to deal with this, and I’d like the community’s feedback on them:

  1. Infer a type only if the type is completely determined by the right-hand side of the declaration. So 42i32 is fine, but 42 is not.
  2. If the item is exported, require that the type be completely determined by the RHS; otherwise just infer the default type for numerics (i32 for integers, f64 for floats). This is more flexible than option 1, but determining whether a constant is exported is slightly complicated: for example, a non-exported constant BAR could be used as the definition of exported constant FOO.
  3. Always default numeric literals to i32/f64, regardless of context. This obviously incurs the problem described above, but perhaps the trade-off is worth it. A compiler warning would probably be a good idea here.
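To make the breakage in option 3 concrete, here’s a sketch of the scenario (the crate and item names are hypothetical, and the re-inferred upstream const is simulated locally since two crates can’t fit in one snippet):

```rust
// Upstream, in a hypothetical crate `mylib`, the annotation is removed and
// (under option 3) the compiler silently infers i32 instead of the old i64:
//
//     pub const FOO: i64 = 1_000_000; // before
//     pub const FOO = 1_000_000;      // after: falls back to i32
//
// Stand-in for the re-inferred upstream constant:
const FOO: i32 = 1_000_000;

// A downstream function that relied on the old i64 type:
fn takes_i64(n: i64) -> i64 {
    n
}

fn main() {
    // takes_i64(FOO);           // error[E0308]: expected `i64`, found `i32`
    let n = takes_i64(FOO as i64); // downstream must now cast explicitly
    assert_eq!(n, 1_000_000);
}
```

The upstream crate compiles cleanly either way; only downstream users see the error, which is what makes this change easy to ship by accident.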

I’m curious what the community thinks about this, as well as whether there are other issues I haven’t thought of. Any and all feedback welcome!


So actually I think this might be resolved already by this RFC. Since const closures can’t capture an environment, they can already be coerced to fn types. I don’t know whether this is implemented correctly in consts or not, but basically I think we can allow you to create a const closure today. (Of course, without type inference a const closure seems like just a worse function.)

So to answer the question we should probably coerce them to fn types and display that as the type in rustdoc?
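A minimal sketch of what that looks like (this relies on closure-to-fn-pointer coercion, which only applies to non-capturing closures):

```rust
// A non-capturing closure coerces to a `fn` pointer, whose type *can* be
// written down, so this is expressible today without any inference:
const F: fn() -> i32 = || 22;

fn main() {
    assert_eq!(F(), 22);
}
```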

As to the integer fallback issue, I think I prefer not doing any integer fallback in consts (the first option) - at least in the first pass. That’s the most conservative option, and if it turns out we decide we should do it later, it’s backwards compatible to add it.


For integers I’d rather have Go-like untyped numbers: Pre-RFC: Untyped constants


It’d be great to see a larger suite of motivating examples where this helps, examples where elision won’t be allowed, and examples where elision is allowed but possibly dubious (in that it’s difficult for a human to discern the type). These will help us judge the tradeoffs involved.

I agree with others that, at the outset, we should not employ any inference fallback (both for numeric literals, but also ideally for trait input types as well).


I’m not sure if we can do this easily, unless we just skip the “integer fallback step” in constants in general. This is probably backwards incompatible. Well, ok, I take that back. I suppose that we can have a special step for constants – basically, before we apply any type variable fallback (I don’t think this is specific to integers), we will check that the final type of the expression is fully determined, and does not contain any type variables. Seems plausible.

Still, I’m not sure how much I care about

pub const FOO = 22;

inferring to the type i32. I’m not sure how realistic it is that you export a constant that you don’t ever use…at worst you can issue a new semver or deprecate the existing constant and issue a new one.


This is sort of side-stepping the problem. Doesn’t mean it’s the wrong answer, but having a fn() is not entirely equivalent to a closure type, since (a) it has non-zero size and (b) calls to it are virtual calls. Point (a) might matter if you do something like Box::new(F), where const F = || 22;. In that case, if the type of F is a zero-sized closure, then Box::new(F) doesn’t actually allocate anything.
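Both differences can be observed directly (a sketch; the fn pointer is assumed to be usize-sized, which holds on common platforms):

```rust
use std::mem::{size_of, size_of_val};

fn main() {
    // (a) A non-capturing closure is a zero-sized type...
    let closure = || 22;
    assert_eq!(size_of_val(&closure), 0);

    // ...while the coerced fn pointer occupies a whole word.
    let fn_ptr: fn() -> i32 = closure;
    assert_eq!(size_of_val(&fn_ptr), size_of::<usize>());

    // Boxing the zero-sized closure allocates nothing, and calls through it
    // can be statically dispatched; a boxed fn pointer is called indirectly.
    let boxed = Box::new(closure);
    assert_eq!(boxed(), 22);
}
```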


To clarify, do you mean that we would do no inference fallback at all? I was thinking that this is not backwards compatible, but I guess we could continue doing inference fallback for constants with an explicit type. Or we could use the rule I suggested – that before fallback, we check that the result type itself is fully inferred, rather than skipping fallback altogether.

Can you give an example of what you mean with respect to trait input types? Do you mean that we would modify the way that trait selection works?


That sounds like a recent RFC for polymorphic constants, which was postponed. My impression from that discussion is that while the idea is good, it would be better to have a separate syntax for those kinds of constants. So I think that might be an orthogonal issue to this one.


That’s a fair point, and the breakage would only happen if someone adds/removes an annotation in an already published crate, which I’m guessing would be a rare occurrence (but perhaps someone else knows better?). Consistency with let would certainly be nice.

Can you think of examples other than integers and floats? Those are the only valid const expressions I could find that have multiple possible typings, but I might have missed some.

I do think all such fallbacks in const/static items should follow the same principle, though, whether it’s to always use the fallback, or never, or whatever it may be. Like @aturon and @withoutboats, I lean towards option 1 because of backwards compatibility (and we could always loosen the rule later), but the inconsistency there in both the typechecker and the programmer’s mental reasoning might make option 3 a worthwhile solution instead.


I don’t think there are other cases today where uninferred type variables with fallback could end up in the final type, but I think there could easily be such cases in the future. For example, we’ve talked about user-specified forms of fallback (e.g., for custom allocators), and we’ve also floated ideas about changing how the fallback to ! works. (For example, I was considering that the type of Ok('a') perhaps ought to be Result<char, ?T> where ?T is a variable that falls back to !.)

In any case, the other thing to consider is something like this (which, even if it doesn’t work now, will work soon-ish):

const fn foo<T>(_: T) -> char { 'a' }
const X = foo(22);

Do we want to disallow 22 here from falling back to i32, even though the final type of X will be char either way?


There’s only one simplest syntax const FOO = 1, so both features can’t have it. AFAIK if Rust had Go-like constants then inference for integer constants wouldn’t even be needed, so that one syntax could be used for the most broadly applicable feature.

OTOH, if this simple syntax was taken for inference, then polymorphic constants would have to use a more complex syntax. Sadly, from the RFC that seems to be where Rust is heading. I’m not enthusiastic about that. While it is sort-of consistent with the rest of Rust (FOO<T: Integer>: T = 25), it’s IMHO ugly and brings complexity of generics and special traits to a thing that in other languages is just a “plain” simple number.


Here are a few. If I think of other scenarios over the next few days I’ll try to add examples for them.

These examples should all allow their types to be elided:

const A: bool = true;
const B: i32 = 42i32;
const C: &str = "hello";
const D: (f32, f32) = (1.0f32, 2.0f32);
const E: [i64; 3] = [1i64, 2i64, 3i64];

// structs with explicit types on numeric arguments are definitely okay
struct Foo {
    a: i32,
}

const F: Foo = Foo { a: 5i32 };

// slightly modified from an example @withoutboats provided on IRC
struct SomeType {
    x: bool,
    y: bool,
}

const INPUTS: &'static [(&'static str, SomeType)] =
    &[("abc", SomeType { x: true, y: true }), ("def", SomeType { x: false, y: false }), ...];

Eliding types for string literals (as in C above) could be especially helpful for beginners to delay having to learn the nuances of Rust’s string-handling.

Type elision on the following could do “the right thing” if fallback is used, but later changes to them could potentially cause issues that wouldn’t be immediately apparent (although, as @nikomatsakis points out, it’s unlikely a constant would be exported without being used AND have a type annotation added or removed):

const X: i32 = 5;
const Y: (f64, f64) = (1.0, 3.14);

The following examples would give the wrong type if fallback is used and their annotations are elided, although the typechecker would likely catch the issue, and programmers would likely expect the inferred types to differ from these annotations:

const Z: f32 = 2.5;
const W: [i16; 4] = [1, 2, 3, 4];
const V: (f32, f64) = (1.0, 5.4); // unlikely, but possible

Struct arguments are interesting, as well. For instance, consider these:

struct Foo {
    a: i32,
}

struct Bar {
    a: i64,
}

const F = Foo { a: 1 };
const G = Bar { a: 2 };

Both of those have a unique typing, so I think they should be acceptable. So to me, the main question around fallback is whether it should be used if the expression has multiple possible types (as in const N = 5).

Hmm, that’s tricky. I’ll have to give generic const functions more thought.


(Duplicating what I wrote in the RFC comments) It’s better when impl Trait shortcut from RFC 1951 is used:

// "impl Integer" == "any Integer" here, not "some Integer"
const FOO: impl Integer = 25;


Heh, I don’t think it’s particularly tricky. I think the answer is that we do want to accept such a thing, even if we are trying to be conservative. This is why I was specific about restricting unbound type variables (of any kind) from appearing in the output type before we apply fallback, but not from existing at all.

Currently, the type-checker works by first doing a “pre-pass” that collects constraints, then solving constraints that are yet unsolved, and then applying fallback for any variables that remain unbound. We could easily interject between these phases. I do, however, hope in the future to consider having these phases “bleed together” a bit more, in which case we might have to revisit these considerations.


Oh, I see what you mean now. Yes, given that type-checking process, I think your proposed output type rule makes sense.


@aturon Could you give an example of what you meant by fallback for trait input types? I don’t know what that would be.


Yes, sorry, that was pretty obscure. “Input” type refers to either Self or a type parameter for a trait (as opposed to an associated type). We will infer an input type in certain cases where we can narrow down possible impls such that there’s only one type to choose:

trait Convert<T> {
    fn convert(self) -> T;
}

impl<T> Convert<T> for T {
    fn convert(self) -> T {
        self
    }
}

impl Convert<u64> for u32 {
    fn convert(self) -> u64 {
        self as u64
    }
}

fn main() {
    let x = true.convert(); // type `bool` inferred, it's the only choice
    let y = 0u32.convert(); // error: cannot infer type, two possible choices
}

It’s going to be pretty rare, I think, for this kind of thing to show up in const exprs, but it could happen.


Is there a fallback rule here, or is this just another example of requiring the expressions to be fully inferred? The problem with expressions with numeric literals is that if you remove the type annotation, the type may change (but still pass the typechecker) because fallback says “if the literal’s type is still unconstrained after inference, use the default of i32/f64”. I don’t see a similar kind of “default” rule here that would cause the type of x or y to change without a type annotation: each one can pass the typechecker only if it has a unique type.
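For reference, the fallback rule in question is the one let bindings use today: an integer literal whose type is still unconstrained after inference defaults to i32, and a float literal to f64 (a quick sketch):

```rust
use std::mem::size_of_val;

fn main() {
    // Nothing constrains the literal, so it falls back to the default i32...
    let x = 42;
    assert_eq!(size_of_val(&x), 4);

    // ...and an unconstrained float literal falls back to f64.
    let y = 1.0;
    assert_eq!(size_of_val(&y), 8);

    // With a constraint, inference picks that type and no fallback occurs.
    let z: i64 = 42;
    assert_eq!(size_of_val(&z), 8);
}
```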


I thought this might be what you mean. It would be pretty hard, at least with the current setup of type checking, for us to prevent the type of a const from being influenced by this sort of thing. It’s not like this is some sort of “fallback” step that occurs at a discrete point in time – we’d need something like a modal switch.


Yeah, I was thinking about the Chalk model we’ve been talking about, which views this as more of a fallback. But this also seems very deep into the weeds and shouldn’t block getting an RFC together here.