Pre-RFC: stabilization of target_feature


Awesome proposal, really liking it!

The description of this PR says that it’s currently possible to call functions with the target_feature attribute on platforms that don’t support the corresponding feature, without any checks.

Therefore I think there should be a check in the compiler that only allows such functions to be called from functions that have the same target_feature attribute, from code guarded by a cfg! check for that target_feature, or from an unsafe context. Later on we could also add semi-static branching that runs a cpuid or /proc/cpuinfo check the first time we visit, and has much smaller overhead on subsequent runs (in the best case: none), either by self-updating the running code to hard-wire the behaviour to the decision just made, or (maybe smarter) by setting a function pointer. There has been discussion of that optimisation already, but I think it’s too much for the initial stabilisation proposal.


This is actually something I have been working on using a procedural macro attribute, see Suggestion for a low-effort way to take advantage of SIMD and other architecture specific tricks LLVM knows about


Would it make sense to have -ffast-math exposed as a feature so the same mechanisms can be used to opt-in to it? See here for a discussion on why it’s valuable:


Yes, that is correct, there isn’t really a good way to check this.

Therefore I think there should be a check in the compiler, that only allows such functions to be called in functions that have the same target_feature attribute, or in code that has a cfg! check for target_feature, or in an unsafe context.

I think this is a good idea, but we really haven’t been able to figure out a solid way to implement it. First, we need all three possibilities you mention. Second, if we require functions using #[target_feature] to be unsafe functions, users will still need to write unsafe on code that has a cfg! check for target_feature. Third, executing the wrong target feature at run-time raises an illegal instruction exception and your process crashes. This doesn’t introduce “memory unsafety”, so unsafe is not a perfect fit for this either. Fourth, cfg! can be used in normal if cfg!(...) { ... } branches, so the compiler would need to evaluate these at compile-time, and fold them, in order to do what you propose here. Those if conditions can mix run-time and compile-time checks, so it is unclear to me whether this would be actionable.
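To illustrate the fourth point, here is a minimal sketch (the names `runtime_flag` and `mixed_condition` are hypothetical, chosen only for illustration) of why a compile-time check on cfg! guards is hard: the macro just expands to a plain `true` or `false` literal, which can be freely mixed with run-time conditions into one ordinary boolean expression.

```rust
// After macro expansion, `mixed_condition` is `true && runtime_flag()` (or
// `false && runtime_flag()`): a normal run-time branch that the type checker
// has no reason to treat specially.
fn runtime_flag() -> bool {
    std::env::args().count() > 1
}

fn mixed_condition() -> bool {
    cfg!(target_arch = "x86_64") && runtime_flag()
}

fn main() {
    if mixed_condition() {
        println!("feature path taken");
    } else {
        println!("fallback path taken");
    }
}
```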

One possible solution could be to require an attribute when calling #[target_feature] functions from functions that lack the same attribute, something like #[target_dependent_call] foo_with_target_feature_attr(...), but we probably won’t be able to avoid this for cfg! usage in general due to the fourth reason mentioned above.

The current plan was to push for the compiler to warn on all the cases in which it is sure that your code is going to raise an illegal instruction exception, but maybe somebody has a better idea about how to make this warning good enough that it can become an error and that is still ergonomic to use.

The way target_feature currently works in Rust, it is intended to be an implementation detail of libraries and not part of their API, but we cannot control how people use it. Making these functions unsafe would be my best bet, but it is not a perfect fit, so… I am a bit torn about this as well.

Later on we can also add semi-static branching that runs a cpuid or /proc/cpuinfo check

I envision this as a third-party cpuinfo library, that users can choose to branch at run-time and set function pointers manually to the correct implementations on initialization. Some users might want to choose on every execution, only once on initialization, or directly at compile-time, without run-time checking (so that the calls can be inlined).
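The “choose once at initialization, then call through a function pointer” strategy could be sketched roughly like this. Everything here is hypothetical: `cpu_supports_avx2` stands in for a real cpuid/cpuinfo query, and the AVX2 variant has a plain scalar body just to keep the sketch runnable.

```rust
use std::mem;
use std::sync::atomic::{AtomicUsize, Ordering};

fn sum_fallback(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

// In real code this would be a #[target_feature] function with an AVX2
// body; the scalar body here only keeps the sketch self-contained.
fn sum_avx2(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

fn cpu_supports_avx2() -> bool {
    false // placeholder: a cpuinfo library would answer this
}

// Cache the chosen implementation so the feature check runs at most a
// handful of times; afterwards a call is a load plus an indirect call.
static SUM_IMPL: AtomicUsize = AtomicUsize::new(0);

fn sum(xs: &[f32]) -> f32 {
    let mut ptr = SUM_IMPL.load(Ordering::Relaxed);
    if ptr == 0 {
        ptr = if cpu_supports_avx2() {
            sum_avx2 as usize
        } else {
            sum_fallback as usize
        };
        SUM_IMPL.store(ptr, Ordering::Relaxed);
    }
    let f: fn(&[f32]) -> f32 = unsafe { mem::transmute(ptr) };
    f(xs)
}

fn main() {
    println!("{}", sum(&[1.0, 2.0, 3.0]));
}
```

A library could equally well expose the raw detection result and let the user decide between per-call checks, one-time initialization, or a purely compile-time choice, as described above.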


Would it make sense to have -ffast-math exposed as a feature so the same mechanisms can be used to opt-in to it? See here for a discussion on why it’s valuable:

I think one can already pass -mllvm ... flags directly to rustc to achieve this, but maybe rustc could offer an option that does this for you. This, however, enables LLVM optimizations that change the behavior of your program, and it is not target dependent, so it is somewhat orthogonal to this issue.

Whether it makes sense to have a function attribute #[fast_math] that tells LLVM to generate code for that function using unsafe fast-math optimizations, I don’t know. It would depend on whether 1) LLVM supports this, and 2) those assumptions would leak to other parts of the program without users noticing. I wouldn’t mind this being available under a different feature flag on nightly to experiment with, but it is orthogonal to target_feature stabilization. This does not mean that one could not reuse the parts of rustc that target_feature uses to implement it; rather, I do not think it is very constructive to mix both discussions, because they address conceptually different issues.


@gnzlbg I don’t know of any way to enable ffast-math in rustc right now. If you have a pointer to that it would be helpful as I have been doing a simple benchmark for these kinds of features:

The reason I thought of ffast-math together with target_feature is the following:

  • ffast-math sometimes amounts to the ability to exploit extra instructions that couldn’t be used if you wanted to follow IEEE 754 strictly, so in a way it’s just like enabling some kinds of SIMD
  • The situations where you want to enable ffast-math are often the same as for target_feature. What you often want to tell the compiler is “for this code, use AVX and enable ffast-math” so that the generated code uses as many SIMD speedups as possible


@gnzlbg Thanks so much for writing this up! Sorry it has taken me so long to respond, but I’m just starting to ramp up work again on SIMD.

I feel like I pretty much agree with the consensus that you and @alexcrichton reached. In particular, I like the idea of requiring a feature to use unstable target features in code. It does seem more consistent with Rust’s overall stabilization story. It’s true that some nightly-only crates will break, but the fix seems relatively simple to me and I think that kind of breakage is acceptable.

One thing that I’m still not quite sure on is #[target_feature]. In particular, why is it necessary to stabilize it in this RFC? It seems like it can’t actually be used on Rust stable without any stable feature names? I think #[target_feature] probably needs more work to help prevent not just monomorphization errors, but runtime errors that look very suspicious to me. Take this program, for example, using the stdsimd crate:


extern crate stdsimd;

use std::env;
use stdsimd::simd;

#[target_feature = "-sse2"] // strange, perhaps nonsensical, but allowed!
fn myop(
    (x0, x1, x2, x3): (u64, u64, u64, u64),
    (y0, y1, y2, y3): (u64, u64, u64, u64),
) -> (u64, u64, u64, u64) {
    let x = simd::u64x4::new(x0, x1, x2, x3);
    let y = simd::u64x4::new(y0, y1, y2, y3);
    let r = x * y;
    (r.extract(0), r.extract(1), r.extract(2), r.extract(3))
}

fn main() {
    let x = env::args().nth(1).unwrap().parse().unwrap();
    let y = env::args().nth(2).unwrap().parse().unwrap();
    let r = myop((x, x, x, x), (y, y, y, y));
    println!("{:?}", r);
}

What is the expected output of this program? IMO, since I’m using LLVM’s support for cross platform SIMD vector types, I’d expect this to work. But specifically disabling SSE2 seems to cause things to go very wrong:

$ RUSTFLAGS="-C target-cpu=native" cargo run --release --example types 7 11
    Finished release [optimized + debuginfo] target(s) in 0.0 secs
     Running `target/release/examples/types 7 11`
thread 'main' panicked at 'assertion failed: idx < 4', src/
note: Run with `RUST_BACKTRACE=1` for a backtrace.

The specific assert that’s failing is this: — the 21 printed in the output above is the result of adding println!("{:?}", idx); just before the assert. Given that constants of 0, 1, 2 and 3 are used, something is clearly pretty wrong. I don’t have the expertise to diagnose the problem, though. But the fact that this is possible with no use of unsafe at all in my program probably means this behavior is unacceptable.

If I use #[target_feature = "+avx2"] instead, then things work just fine:

$ RUSTFLAGS="-C target-cpu=native" cargo run --release --example wat 7 11
   Compiling stdsimd v0.0.1 (file:///home/andrew/data/projects/rust/stdsimd)
    Finished release [optimized + debuginfo] target(s) in 0.38 secs
     Running `target/release/examples/types 7 11`
(77, 77, 77, 77)

Hell, you don’t even need target_feature at all for this to be correct, since it’s using the cross platform vector API. But still, the point remains.

In any case… This might all be moot if you aren’t stabilizing any feature names, but I’m not sure what the gain is?


I’ve added the above program to the repo. So it should be easy to reproduce:

$ git clone git://
$ cd stdsimd
$ RUSTFLAGS="-C target-cpu=native" cargo run --release --example wat 7 11


@alexcrichton and I talked a bit about my issue above on IRC in #rust-libs. Basically, there are two fundamental problems with #[target_feature]:

  1. It can lead to SIGILL (or equivalent).
  2. It can lead to ABI mismatches between function calls. And in particular, this doesn’t seem discoverable until monomorphization time.

With respect to (1), there doesn’t seem to be much we can do. I think we can probably assume that executing an unknown instruction will result in sane behavior like your program aborting (although this isn’t obviously 100% correct to me, since it seems like a platform specific thing).

With respect to (2), it seems a bit trickier to handle, but @alexcrichton thinks it’s possible to detect ABI mismatches during monomorphization. In particular, we could start by reporting a hard error, but it also seems possible to resolve the ABI mismatch by inserting shims. The error at monomorphization time seems unfortunate, but it’s not quite clear what other path we could take.


I’m not familiar enough with CPU ISAs, but are CPUs guaranteed to raise SIGILL if they encounter an instruction they don’t know? What about ARM, with its vast number of variants: do they reuse instruction encodings? Do we really want to guarantee this for all current and future CPU ISAs?

I think it also goes against the spirit of Rust and its strong static guarantees. If we could find a compile-time solution here, we’d really be improving things.

Well we already have semi-arbitrary rules regarding what constant expressions are and what aren’t. Similarly, the borrow checker can infer lifetimes sometimes, and sometimes it can’t. Or take this code:

    let i;
    if false {
        i = 3;
    }
    i = 4;

It’s obvious that there is no re-assignment to i, yet the compiler complains about it.

Why not have a similar (imperfect) check for branches? It should be easy to allow a large enough subset of cases, no? E.g. you could reject if {anything} || cfg!(), allow if {anything} && cfg!(), and reject all forms of if f(cfg!()).

I would argue that most uses would be literally only if cfg!(...) { without any mixing of other conditions. If we start with that as a basis and then later expand to more complicated forms, like cfg!(..) && call_of_target_dependent_fn(), we’d immediately have something useful for most people, while improving on other languages by having safety.
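The common pattern such a check could target first might look like this sketch (the function names are hypothetical stand-ins; `avx_path` represents a #[target_feature] function):

```rust
fn avx_path() -> &'static str {
    "avx" // stand-in for a function compiled with #[target_feature]
}

fn fallback_path() -> &'static str {
    "fallback"
}

// A call to a feature-dependent function guarded by nothing but a cfg!
// test: the simplest case such a compiler check would need to recognize.
fn choose() -> &'static str {
    if cfg!(target_feature = "avx") {
        avx_path()
    } else {
        fallback_path()
    }
}

fn main() {
    println!("{}", choose());
}
```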


The only compile-time solution that has been proposed is to require that functions annotated with #[target_feature] also must be annotated with unsafe. While I like this proposal, I also think that we shouldn’t make things unsafe lightly, in particular when it might add “a new kind of unsafe”. @alexcrichton @burntsushi what do you think? I don’t know of any architecture in which trying to execute an unsupported instruction can lead to memory unsafety, but as @est31 points out, this does not mean that there aren’t any architectures present or future in which this could be the case (e.g. an architecture that just “skips” unknown instructions instead of raising an exception).

Having said this, even if we require #[target_feature]-functions to be unsafe, there is nothing the compiler can do at compile-time to catch a mistake when using these. #[target_feature] means “compiler, I know that this function will only be called on targets that support this feature, so you are allowed to emit code that uses it”. If you then run your binary on a target without that feature, everything is still fine as long as that function is not called at run-time. I could imagine that it might be possible to add some sort of run-time instrumentation that could catch misuses at runtime in debug builds, but nothing that helps at compile-time.

The compiler complains about it precisely because it doesn’t understand that if false will never execute. It actually must complain, because if false { ... } is perfectly valid Rust code that must be type-checked, etc. Removing the if false { ... } branch is an optimization, and this optimization is neither part of the language nor guaranteed by it. So:

Because of the above, this cannot be done. Constant folding is not part of Rust, the language.

Sadly in if cfg!(...) { ... } the cfg! is just a macro that will expand to true or false, so it is not really any different than if false { ... } or if true { ... } and because of the above there is nothing we can do here.


That’s wrong. Rust does have a concept of what is “constant” and what isn’t. Just look at let i = [0; 42 + 33 + foo()]; where foo is implemented as fn foo() -> usize { 3 }. That errors, while let i = [0; 42 + 33]; works.

That’s wrong as well. You just need to change the macro expander to return a special AST item (let’s call it baz) that specifies “this comes from a cfg macro with content X”, and then later on, after name resolution, implement a visitor with the following behaviour:

  1. When it encounters if baz(X), it adds X to a stack/vec inside self (and removes that X again after recursing into the if’s body)
  2. When it encounters a function with target_feature it traverses that list and checks for each item whether the target_feature is “implied” by the item or not. If it is not, it emits an error.

I am pretty certain that this is implementable inside the compiler.


I agree with you here, we do not need to stabilize it in this RFC. However, I do think that when stabilizing either of them we should make sure that users won’t have to learn subtle differences or corner cases between the two. That is, we should at least have a broad picture of how cfg!(target_feature) and #[target_feature] will interact, and show that we have given some thought to learnability, usability, and ergonomics. For example, @est31 wonders whether we can have good compile-time errors for #[target_feature] inside cfg!(target_feature). I think the RFC should not merely wonder about this, but provide a clear yes/no answer on whether it would be possible under the proposed big picture.

I think this would be solved by requiring functions annotated with #[target_feature] to be unsafe fns. I don’t have a proposal to what the semantics should be here yet, but my mental model right now is that "#[target_feature] is a coarser version of asm!" and we cannot really prove much about it at compile-time (probably what happens if one screws up is platform-specific). I think that we should have better errors or warnings for your particular example, and I also think that maybe unsafe is not really correct because I am not sure if we can do better or if using #[target_feature] might actually lead to memory unsafety (your example might just be hitting an LLVM bug).

This might all be moot if you aren’t stabilizing any feature names, but I’m not sure what the gain is?

From the discussion above the consensus was that stabilization of any feature name would then go through their own RFC. Those RFCs are just way harder to write if we don’t have clear semantics for what #[target_feature] / cfg!(target_feature) are actually supposed to do.

While this RFC is about “stabilization” of target_feature, I think the best path forward would be to:

  • agree on what exactly cfg!(target_feature)/#[target_feature] should actually do, then implement that in nightly,
  • write RFCs to stabilize some of the target features and implement those in nightly
  • if we do not find any errors in our proposed semantics for cfg!(target_feature)/#[target_feature] then we can start stabilizing things.

These things ended up on nightly without an RFC, so maybe instead of “stabilization of target_feature” the first RFC should just be “target_feature”.


If you check out the Rust reference it says that:

In the [expr ';' expr] form, the expression after the ';' must be a constant expression

(emphasis mine). If you change your first example by making foo a const fn it also works.
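To make the const fn point concrete, a sketch of the fixed example (assuming foo’s body from the earlier post):

```rust
// With foo as a const fn, the call is a constant expression and may
// appear in an array-length position.
const fn foo() -> usize {
    3
}

fn main() {
    let i = [0u8; 42 + 33 + foo()]; // length 78, computed at compile time
    println!("{}", i.len());
}
```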

However, when Rust introduces if expr, it doesn’t specify any semantics for when expr is a constant expression, which basically means that even if expr is something as trivial as false, the implementation is allowed to evaluate it at run-time (in fact this is what rustc still does; LLVM’s opt is what evaluates this at compile-time). This means we cannot provide errors at compile-time, but we could provide a soft warning.

Basically, what you propose would need to change the Rust language specification of if to either handle constant expressions differently or to add some construct for conditionally branching on const-exprs in such a way that would allow us to emit an error here. Independently of what my opinions would be on such RFCs, we would still need to handle what happens when the expr in the if is evaluated at run-time, e.g., because fn foo() -> bool { cfg!(target_feature = "+avx") } is not a const fn and hence calling it is not a const-expr.

If the language is ever extended to handle those cases, we can revisit what we do with target features to provide hard errors where we can, but “improving” if for constant expressions is completely out of scope of this RFC (and my personal opinion is that RFCs making if handle const-exprs in such a way that we could provide hard errors here will not be merged, but that something like const if const-expr { ... } might have a chance of succeeding).

That’s wrong as well. You just need to change the macro expander to return a special AST item (lets call it baz)

So it is right. I am not saying that we could not add a new AST node-type to handle it, I am just saying that this is how cfg! currently works, it is just a macro (and IMO this is a good thing, so I doubt any RFC proposing to change it to return a new kind of thing will succeed). Anyhow, that would require changes to cfg! and the language, and should be part of a different RFC as well. We already have cfg!; this is just about cfg!(target_feature).

IMO to make what you want actionable we would need:

  • special handling in if of const-exprs, or of some new kind of node (the result of cfg!(target_feature)).
  • make cfg! return some new kind of thing, not only for target_feature
  • make cfg!(target_feature) return some other new kind (so that we can do some checking against #[target_feature]), that either resolves to a const-expr and can be handled by the “improved” if or make if handle this type of kind differently.

Anybody is free to start writing RFCs on those and ping us, but when writing an RFC for target_feature right now I sadly have to work with the language “as is”. I can, and will, incorporate your feedback in the alternatives section, but unless somebody puts a tremendous amount of work into those proposals, they won’t be actionable. And even if they were, the wins would still be small, because one of the most important uses of #[target_feature] is the ability to ship a binary with code for different target features that selects the right code path at run-time.

The compiler cannot know at compile-time whether the run-time handling is done properly (without at least even more extensive changes to the language), so even if all that compile-time machinery were in place, we would still need to deal with “what happens at run-time when you screw up”, which is the part this RFC covers. If we treat all mistakes as run-time errors and say “if you screw up, then undefined behavior”, undefined behavior still allows us to, e.g., emit warnings at compile-time that users can turn into hard errors if they want. So not all is lost; this isn’t really bad considering the Rust language we currently have, and it leaves the door open to do something better if the language ever improves enough to allow it.


Yeah, that’s a very good point, in fact, that, I admit, makes most of the recent discussion moot. If we wanted static guarantees, we could just decide not to add the target_feature attribute and instead force users to use the less powerful cfg(target_feature). Additionally, there is already RFC 1868, which covers the topic at length, so there is no need to re-invent the wheel here, although I guess since it’s a new feature we could be stricter with it. But let’s not do it. I drop my demand for such a way of checking :slight_smile:.

I was suggesting this change because I think that target_feature in its current form is too unsafe for Rust, which prides itself on being a “safe” language. All I basically want is that there should be no way to call a target_feature function from safe code and have it SIGILL, except when some unsafe code outside of the language has made a mistake.

So my next proposal is very simple: just require every target_feature function to be an unsafe fn, and then external crates can come up with (proc) macros to call such a function and dispatch on it based on various criteria, from safe code.
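A sketch of that proposal, using the attribute syntax and feature-detection macro that were eventually stabilized (at the time of this thread one would use a cpuid crate instead); the function names are hypothetical:

```rust
// The #[target_feature] function is unsafe, and a safe wrapper dispatches
// to it only after a run-time feature check.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse4.1")]
unsafe fn dot_sse41(a: &[f32], b: &[f32]) -> f32 {
    // Body is plain Rust; the attribute lets LLVM use SSE4.1 here.
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn dot_fallback(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

// Safe to call from safe code: the unsafe fn is only reached after the
// run-time check proves the feature is available.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("sse4.1") {
            return unsafe { dot_sse41(a, b) };
        }
    }
    dot_fallback(a, b)
}

fn main() {
    println!("{}", dot(&[1.0, 2.0], &[3.0, 4.0]));
}
```

A proc macro could generate exactly this kind of wrapper, which is what the proposal amounts to.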


I tend towards this and will try to justify it in the RFC. I just need to find the “proper wording”, as in, which kind of unsafety #[target_feature] introduces. Is it like dereferencing a null pointer? Accessing data out of bounds? Creating a data race? I don’t think so; it is something else. It would help a lot if the kind of unsafety it introduces were already covered by unsafe in other situations so that I could just reference those, but if not, I am going to need to define it more precisely in “Rust reference”-style language.


IMO unsafe is unnecessary: using the wrong target feature here is no less unsafe than passing the wrong target feature on the command line, or even compiling for the wrong target.

What I think the compiler should do though, is disallow any features that aren’t additive to the default target features (on the command line and as an attribute), that would stop @burntsushi’s -sse2.


But those things are not part of Rust, the language, but rather part of rustc, the implementation. Stabilizing target_feature would make it part of the language, where we do need to make sure that safe code cannot introduce memory unsafety. So the question is: can target_feature be used to introduce memory unsafety in safe Rust? I don’t know yet, but this should be resolved before the RFC.

Note that if we make it unsafe at first, we can always make it safe later in a backwards compatible way. In particular, whether it can be used to introduce memory unsafety will heavily depend on the features one enables/disables, so maybe we cannot even specify this in general, but might need something more fine-grained, like making some features that can be exploited to introduce unsafety require an unsafe function annotation.

If we find that it is indeed possible to introduce memory unsafety with some features, the RFC for target feature should specify what to do in those cases.


@gnzlbg I feel like you kind of jumped to “make it unsafe” without considering the fact that we might actually be able to detect when there are problems. For example, if we could reason that “memory unsafety only occurs when there is an ABI mismatch” and we could also say that “we can detect all ABI mismatches,” then we could feasibly declare #[target_feature] as safe to use so long as we either reject any ABI mismatches or “fix” them with shims. With that said, I think the fundamental problem is that nobody here is actually sure of any of this, so I’m with you on the “we need to figure this out” part. However, I do think we could move forward with submitting an RFC and say that these are implementation details that can be worked out after-the-fact (where the RFC of course discusses the problem). Basically, in the worst case, we require that all #[target_feature] functions be unsafe.

Another noodle scratcher is whether unsafe is actually appropriate or not. And I don’t mean in the “we’re not sure if it causes memory unsafety or not” sense, but rather in the “does it leak out past the boundaries of unsafe and into safe APIs” sense. I think unsafe fixes my specific example—since I needed to use #[target_feature]—but does it fix all such examples of ABI mismatches? Moreover, if #[target_feature] is unsafe, then it follows that all vendor intrinsics are necessarily unsafe as well.

One other small note: I wouldn’t necessarily expect that we need an RFC for every new target feature that we stabilize. RFCs are pretty heavyweight, and I’d expect either your target_feature RFC or the SIMD RFC to lay out a general path for how to stabilize additional target features. A normal FCP like we do for small library APIs seems fine to me, for example. And similarly for adding new vendor intrinsics.

@est31 Ignoring #[target_feature] unfortunately is not an option. It’s critical to a robust SIMD story on Rust. It needs to exist, just like raw pointers need to exist.


If I understood correctly, @est31 was hoping that as long as you don’t write unsafe you wouldn’t be able to introduce segfaults (like from illegal instruction exceptions) into Rust programs using #[target_feature], even though these segfaults cannot cause memory unsafety. The following question was then raised in this context: “Does each current and future platform that Rust supports produce an illegal instruction exception when it encounters an instruction that it doesn’t know / support?” (or can we assume/require this in the Rust execution model?) I don’t know the answer, but if the answer is yes (using an illegal instruction always raises a CPU exception) then we know that at least for a wide-range of target features the process will crash before any kind of memory unsafety can happen, which will mean that it might make sense to make #[target_feature] safe.

Otherwise, we either figure out a way to catch these at compile-time (something that I do not know how to do since whether these functions are called or not is a run-time issue), or we might need to make #[target_feature] unsafe. For example, suppose a CPU skips unknown instructions and continues execution silently, if we relied on that instruction being executed to guarantee memory safety and it is skipped, memory unsafety will be introduced in the program.

EDIT: I really don’t know how to make #[target_feature] safe in general (for all possible target features and all possible platforms), but I hope somebody knows how because otherwise we might need to make it unsafe in one way or another.

EDIT2: I think that we can probably work out the ABI issues in practice btw.


@gnzlbg Oh I see, yeah I was only talking about ABI mismatches. The SIGILL issue is more cut and dry to me because I actually understand it. That is, I know exactly the thing that we don’t know: whether executing an unrecognized instruction will never violate memory safety on all supported platforms. But in the case of ABI mismatches, I’m still a little murky on the details because my mental model isn’t sophisticated enough to understand all of the cases that can lead to an ABI mismatch. Moreover, I don’t specifically know the technical details enough to envision the code necessary to detect ABI mismatches. On top of all of that, I’m concerned about whether or not ABI mismatches are properly encapsulated enough by unsafe (but again, I don’t have specific concrete statements, but I don’t know enough to prove that we shouldn’t be concerned about it).