[MIR] constant evaluation

Macros don’t have access to types, constants do.

How do you handle type Foo<T> = [u8; rand(0, size_of::<T>())];?
Can Foo<u8> in two different contexts produce different results?

And that’s just the tip of the iceberg.
How does it integrate with traits?
How does cross-crate coherence even keep working?

Our type system is already hard enough to prove properties about.
How do you expect us to integrate side-effects into constant evaluation in the type system without breaking anything?

I don’t think it’s impossible, I just don’t expect you to be able to convince anyone on the Rust team to even touch this idea in 1.x, as it is, IMO, harder to get right than dependent typing, in a safe-by-default language.

EDIT: It’s actually impossible (breaks coherence), see my other response below.

There’s a previous discussion on the matter you might also want to look over.

hmm… I totally agree that we do not want life-before-main. Running on first access is supposed to be as you said: “at least in the Haskell-style lazy-evaluation sense”

Edit: regarding the different types between host and target - I don’t think it’s that much more difficult. A macro needs to handle two contexts: everything it executes internally is indeed for the host arch, but anything it returns will be for the target arch. Given an ergonomic quasi-quoting syntax, it is clear which is used where. I also do not see any valid use-case whatsoever for passing such data between the two contexts. Yes, stat is different on each platform, but it contains FS info. Given that you cross-compile from host A to target B, how can it ever be useful to have FS info specifically tied to A’s FS when executing code on B, which has a separate FS?

I’ve asked this question several times in this discussion and no one has yet provided an example where this is useful.

For your example, size_of::<u8>() should be exactly the same on all platforms: one byte. But I understand your more general point. I do wish that macros would have access to types too, but I understand that it’s off the table atm.
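For illustration, a minimal self-contained check of that distinction between portable and target-dependent sizes:

```rust
use std::mem::size_of;

fn main() {
    // u8 is one byte on every target.
    assert_eq!(size_of::<u8>(), 1);
    // usize, by contrast, depends on the target's pointer width.
    assert!(size_of::<usize>() == 4 || size_of::<usize>() == 8);
}
```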

The lazy evaluation I was talking about occurs solely at compile-time - it is how I think of non-recursive const-const dependencies being allowed.
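To illustrate that compile-time laziness: non-recursive const-to-const dependencies already work today regardless of declaration order, since the compiler evaluates each constant on demand (a minimal sketch):

```rust
// A is allowed to refer to B even though B is declared later;
// the compiler evaluates B lazily when it needs A's value.
const A: u32 = B + 1;
const B: u32 = 2;

fn main() {
    assert_eq!(A, 3);
}
```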

All statics are evaluated eagerly once at translation time, and never at any other time.

@vadimcn did no such thing. They made a point about PathBuf containing an allocation. My own post included the admission that

But while I have a rough idea how expanded CTFE would handle these problems (use a vastly restricted subset of the language to rule out most sources of non-portability, run with target cfg values and bit sizes to mitigate the remaining sources, and @arielb1 spelled out how the memory allocation thing might work), I have no idea how your proposal would solve these things. I would like:

  • An account of how (say) a Vec<String> value in the host computer's memory is translated back into a Rust constant expression (which should not require vastly expanding the scope of const fn because if we do that we can save ourselves the multi-staged thing), including how this constant expression results in the correct bits being written to the final binary's .data segment
  • An account of how non-portable values like (say) Vec<PathBuf> or libc::tm would be rejected or otherwise handled gracefully by the above mechanism

My question was about whether Foo<X> in two different places produces the same type or not.

  1. These can be in two different crates using the crate with the definition of Foo, with no way to communicate with each other.

  2. It gets even worse with traits, because all impls of Trait are instantiated every time <T as Trait> resolution is attempted.
    That would mean impl resolution could succeed once and then later fail, unless you cached all instantiations.

So you end up with a system where… the value is fixed after being first observed?
But you can only do this in the same crate, as shown by 1. above.
I forget if I’ve used this argument before, but I didn’t expect to actually… prove it’s impossible to generate the same type.

Combine 1 and 2, say you have a crate with:

trait Trait<B> {}
impl<T> Trait<[u8; rand(0, size_of::<T>())]> for T {}

Now different downstream crates will observe different sets of impls, and if you combine two of those crates, you will have broken coherence.

EDIT: XPOST https://www.reddit.com/r/rust/comments/907a6d/thoughts_on_compiletime_function_evaluation_and/e2pdqnt/ (you might want to read that comment too)


Ok, but again, each stage is isolated, so how is having different impls observed in different processes a problem? The entire premise here is that the macro is compiled and executed separately from the target code.

I was talking about type-dependent constants feeding back into types in the target crates, not in some sort of compiler plugin.

It’s something we want to support (via const fn and associated constants), and IMO Rust will be crippled without it.

You can totally write syntax extensions that do IO, but you can’t have type parameters as the input, as that can be used to break coherence after the fact, as shown above.

sorry for the quote confusion, indeed you made that observation.

The multi-stage thing is about isolation, so you cannot just return a Vec. You can use (i.e. allocate) a Vec internally inside the macro, but you cannot return that memory, as it belongs to a separate process (the compiler). You can return generated source code. That returned source code can either be a constant expression such as ["hello", "goodbye"], which the compiler will put in the target executable’s memory, or an expression such as Vec::<String>::new(), which will be translated to code that allocates at run-time.
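A rough sketch of that separation, with generate_greetings as a hypothetical stand-in for the macro stage: it may freely allocate a Vec on the host, but what crosses the boundary is source text for a constant expression, not the host memory itself.

```rust
// Stand-in for the "macro" stage: runs on the host, allocates freely,
// but returns *source code* rather than host memory.
fn generate_greetings() -> String {
    let words = vec!["hello", "goodbye"]; // host-side allocation, fine
    // Emit a constant-expression literal; the compiler would embed the
    // resulting array in the target binary, not copy the host Vec.
    format!(
        "[{}]",
        words
            .iter()
            .map(|w| format!("{:?}", w))
            .collect::<Vec<_>>()
            .join(", ")
    )
}

fn main() {
    assert_eq!(generate_greetings(), r#"["hello", "goodbye"]"#);
}
```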

Regarding non-portable values - depends on what you want to do with them.

ok, I see now what you are talking about. In that case, the formulation is simply incorrect. Inside a macro, a let size = size_of::<T>() refers to the macro’s (i.e. the compiler’s HOST) type T. That means the macro generates code based on the macro’s environment. What you really want is the type T of the compiler’s TARGET. Therefore it needs to be an input to the macro.

Then you’d use it as let size = quote!(size_of::<$T>()). Does that answer your concerns?

edit: typo

So you basically want a syntax extension that is JIT-ted from the source code?

The problem with syntax extensions is that they run on the host, and therefore they can’t easily access target things - including target types.

But can the macro be triggered with an input from a generic?
That is, does your plan involve invoking magic! for every T an impl<T> Trait<magic!(T)> for T is used with?

Ah! I see what you mean.

I actually envisaged the following scenario:

const MY_CONSTANT: &'static str = {
    const BUFFER: String = some_const_fn();
    &BUFFER
};

which I would expect to work, and not require leaking.

I had not thought about leaking; while it would work, I am afraid that this would further differentiate the fn and const fn realms: it would make const fn less likely to be useful at runtime.

That’s the same as:

const fn my_constant() -> &'static str {
    let buffer: String = some_const_fn();
    &buffer
}

Which is a lifetime error, because you’re returning a reference to a stack value.

Maybe we could special-case const contexts to perform the leaking/“interning”, but again, you need to find a good rule for it (in this case, at the type-level).

EDIT: I was wrong, in this case the String would indeed be 'static, and its data not accessible except through a shared reference, so the whole Box::leak deduplication mechanism would work just as well here.
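For reference, a minimal sketch of the Box::leak pattern alluded to here: leak the allocation once and hand out a 'static shared reference (leak_str is an illustrative name, not an existing API):

```rust
// Leak a heap String to obtain a &'static str. The data is never freed
// and is only reachable through a shared reference afterwards, which is
// the runtime analogue of the interning discussed above.
fn leak_str(s: String) -> &'static str {
    Box::leak(s.into_boxed_str())
}

fn main() {
    let s: &'static str = leak_str(String::from("hello"));
    assert_eq!(s, "hello");
}
```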

For what it's worth, this would work just as well:

const MY_CONSTANT: &'static str = &some_const_fn();

That would be even sweeter, indeed. I suppose this relies on the fact that rustc materializes temporaries in anonymous variables.

Indeed, it would work the same way this does:

const FIVE: &'static i32 = &5;
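This relies on rvalue static promotion; a minimal self-contained check that such borrows of constants work in both const and static items (names are illustrative):

```rust
// Rvalue static promotion: the literal 5 is promoted to a static memory
// location, so taking a 'static reference to it is allowed.
const FIVE: &'static i32 = &5;
static ALSO_FIVE: &'static i32 = &5;

fn main() {
    assert_eq!(*FIVE, 5);
    assert_eq!(*ALSO_FIVE, 5);
}
```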

What’s worrying is this:

const MY_CONSTANT: &'static String = &some_const_fn();

This can either be banned (on the premise of reaching a “dynamically allocated” memory region from a const, without a &str “chokepoint”), or allowed… but if it is allowed, should each mention of MY_CONSTANT leak a Box<String>?

If it’s deduplicated, you end up with life-before-main, which we also don’t want, although it’s not arbitrary code, only an allocation with constant bytes put into it.

Why would mentioning MY_CONSTANT cause a leak?

I would expect the compiler to instantiate an unnamed 'static variable of type String for the result of some_const_fn, like it would do in a function scope, and then create a reference to it.

Is the plan to re-compute MY_CONSTANT each and every time it is mentioned?

I had thought that each constant would be evaluated once (and only once) and the evaluation cached in the MIR somehow, and thus it would only be codegen’d once too.

But where do you codegen the runtime String allocation? This doesn’t have a trivial answer.

EDIT: Oh, since the destructor never runs, you may be able to have String point to a static allocation anyway.
I have not considered that outcome until now.

EDIT2: I’ve updated the original post with more details on dynamic allocations, given the fruitful discussion.
Thank you, everyone!

Oh sorry! It seemed the obvious strategy to me since the beginning (cannot allocate before main anyway) so I am afraid I never spelled it out :frowning:

As you mention in your revised post, there seems to be a division between what you can evaluate (any const fn) and what you can store in a const item: UnsafeCell and the like1. While an Arc could be used within the evaluation code, it cannot be stored in .rodata. This seems to mean that some types could be const and others not.

I do disagree slightly with the static BAD: Arc<Mutex<String>> = Arc::new(Mutex::new(String::from("foo"))); example: I think it should be okay if we let the static or const keyword determine the context of the memory allocations, and thus where they occur. In the case of static, that would be the .data segment; in the case of const, the .rodata segment.
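For contrast, the distinction that already exists today without any of this: a static gets a single fixed address in the binary, while a const has no address of its own and is inlined at each use site (a minimal sketch):

```rust
static S: i32 = 42; // one fixed location in the binary's data segment
const C: i32 = 42;  // no address of its own; inlined at each use

fn main() {
    let a = &S as *const i32;
    let b = &S as *const i32;
    assert_eq!(a, b); // the static always has the same address
    assert_eq!(C, 42);
}
```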

As you mention though, there is at least one hitch: these memory areas are not handled by the memory allocator, and therefore attempting to query them through it is impossible. Unfortunately, instrumenting the call to behave differently when pointing inside .data or .rodata would introduce some overhead on all calls. This issue seems specific to static data though (could @Manishearth confirm?), so maybe just continuing to use lazy_static! for such data is fine for now (I don’t personally have any strong urge to make global variables any cheaper or more ergonomic to use anyway).

1 Determining the exact extent of “and the like” is left as an exercise to the poor implementer :slightly_smiling:
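A sketch of the run-on-first-access pattern that lazy_static! provides, simulated here with std::sync::OnceLock (a later stdlib addition, used so the snippet is dependency-free):

```rust
use std::sync::{Mutex, OnceLock};

// The allocation happens on first access, never before main.
static CONFIG: OnceLock<Mutex<String>> = OnceLock::new();

fn config() -> &'static Mutex<String> {
    CONFIG.get_or_init(|| Mutex::new(String::from("foo")))
}

fn main() {
    assert_eq!(config().lock().unwrap().as_str(), "foo");
}
```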