Rust's stability story should apply to bool in FFI


On stable Rust, Rust’s bool in FFI has the same ABI as C’s _Bool. Apparently, while this is the case, it hasn’t been formally decided that it should be. I think Rust should adopt the position that the act of shipping this feature on stable Rust and keeping it shipped for an extended period of time (ever since 1.0, even?), during which deployed code has come to rely on it, should, under the Rust stability policy, constitute a commitment to this feature.

Fact I assume to be true

Based on issue comments and on my own observation that bool in FFI appears to work on the platforms Firefox runs on, I am assuming that it’s presently true that Rust’s bool (and pointers to it) in FFI has the same extern "C" ABI representation as C’s _Bool (and pointers to it).

It’s wrong to break this, so it should be documented as reliable

There is deployed code out there that assumes that Rust bool in extern "C" signatures maps to C’s _Bool. It’s easy to assume that this is correct, since rusty-cheddar generates C headers that suggest that it works that way and running the code gives the appearance of it working.

It has come to my attention that while this actually works, there hasn’t been a formal decision that it should work and, hence, there’s a proposal to make the compiler complain to programmers who are assuming that it works.

I believe this is unproductive. A lint complaining about bool in FFI suggests that the equivalence of bool and _Bool shouldn’t be relied on, which in turn suggests that the equivalence might get broken. But breaking it would violate the Rust stability story, so the equivalence mustn’t be broken, and suggesting that it might be just causes programmers to waste effort trying to figure out what to do about the lint.

Since the combination of Rust’s stability story, the act of shipping the behavior that Rust bool in FFI is equivalent to C _Bool on the stable channel, and deployed code relying on this equivalence logically has to result in a commitment to keep it so, I request that it be officially documented as being so.


I’ll go ahead and address the counter-arguments I’ve seen.

Counter-argument: bool in FFI is reserved

Counter-counter-argument: It shouldn’t matter if it’s theoretically a bug that the compiler hasn’t refused to compile code that uses bool in extern "C" function signatures. What should matter is that the behavior has shipped on the stable channel, has been put to use by deployed code and isn’t a soundness bug. Taking a different position would totally undermine Rust’s stability story and de facto adopt C’s approach of blaming the programmer if the programmer relies on something that seems to work but hasn’t explicitly been documented as reliable.

Counter-argument: FFI is unsafe

Counter-counter-argument: FFI is unsafe, because it’s up to the programmer to ensure that code on the other side of the interface upholds the relevant invariants. FFI is not unsafe as an excuse for the FFI to undergo breaking changes. In fact, while Rust’s non-FFI ABI is explicitly subject to change (and this, unlike bool in FFI being “reserved”, is highly visibly communicated), it’s a key feature of FFI that it, in contrast to the non-FFI ABI, is not subject to breaking changes.

Counter-argument: There have been soundness fixes

Counter-counter-argument: These are opposite cases. A soundness bug threatens the whole value proposition of Rust and the breakage from the fix tends to be theoretical. In this case, it’s in practice a feature that’s in real use that bool maps to _Bool, so the breakage would be real, but changing it would “fix” a theoretical concern.

Counter-argument: There exist C implementations whose _Bool is too weird to commit Rust’s bool to match

Counter-counter-argument: Rust’s stability story should mean that there’s a commitment not to break what has already shipped. This means that Rust’s bool in FFI has to continue to match C’s _Bool on the Windows and System V-ish systems that Rust has already been deployed on. Suggesting that this part might be subject to change makes programmers waste time. OTOH, warning programmers about niche platforms that Rust might support in the future would be a gross misprioritization, putting theoretical future niche concerns ahead of programmer productivity on actual present mainstream and not-quite-so-mainstream systems. Rust already, quite appropriately, forgoes interop with some imaginable C implementations, most notably those that don’t provide two’s complement signed integers. If in the future Rust actually ends up targeting platforms whose C ABI is too weird for Rust’s bool to match, dealing with that mismatch should be a problem for developers in that niche to solve and shouldn’t spill over to inconvenience everyone.


I suspect this situation is very similar to the debate over stabilizing the default drop order, where a certain behavior was shipped on stable and depended on by a bunch of code and eventually stabilized because no one could come up with a convincing rationale for changing the behavior in the future (at least, not one that would justify the practical breakage) and it seemed extremely unlikely that would ever change.

In fact, my opinion here is exactly what Niko said in his first comment on that drop order RFC PR, so I’m just going to find/replace his wording:

I’d also like to hear an affirmative case for why we should change the [bool ABI] – as a rule of thumb, I think that [ABIs] should be well-defined unless there is a strong reason for it not to be, and [bool] seems to me to be not so different from any other [FFI type]. Put another way, changing [the bool ABI] seems to me to be primarily a vector by which we can surprise people and cause bugs in their code. (It doesn’t, e.g., affect performance all that much, and if it did, people could [use some other custom type] to accommodate that.) Is there an example of why we might want to change the [bool ABI]?


I am not really an expert in FFI, but could we, instead of guaranteeing that bool is _Bool, guarantee that bool is a single byte with the two valid values 0 and 1, which happens to be the representation of _Bool on most platforms?

This quite probably is exactly the thing being proposed, but the phrasing is a bit different: instead of tying Rust to C, we directly specify the bool ABI, and it becomes your responsibility, as a programmer, to ensure that C’s _Bool ABI matches the Rust ABI on your platforms.


I’ve made the bool == _Bool assumption in my crates already, so changes to Rust’s bool representation could break my code, but it’s not the end of the world. I could change it to libc::c_bool or something like that if it existed. One way or the other, Rust needs to have a representation of C’s bool on non-exotic platforms (I won’t mind if it doesn’t work on PDP and VAX).

Are there performance reasons to prefer alternative bool representations?

The problem is with references to booleans. In some C ABIs, in-memory booleans can have values other than 0 or 1, with all non-0 values being true (I’m not sure which architectures these are). The problem with that is that you could create a Rust &bool that points to a value that is neither 0 nor 1, and supporting that would pessimize Rust programs that involve match statements.

I think one compromise would be to allow bool to appear in C ABIs directly (if a bool is an argument or return value of an extern "C" function, there’s 1 obvious thing to do - to apply the necessary fixups - and there are other cases where we do ABI fixups anyway because of ABI issues, so this is not exactly unexpected), but not behind a reference or non-repr(transparent) struct.


The C standard specifically requires the true value of _Bool to compare equal to 1, and also specifically requires the numerical result of a true comparison operation to be 1. Therefore I doubt that any such ABI exists, and I don’t think the theoretical possibility should be accepted as a counter-argument to Henri’s proposal. If you can cite a specific ABI where in-memory booleans really can have values other than 0 or 1, then I might change my mind depending on how obscure it is, but I think the burden’s on you to actually find one.


Assuming that my current understanding that MSVC on Windows and gcc/clang on Linux/BSD on today-relevant CPUs represent _Bool that way is correct (I don’t know my way around GCC and clang sources well enough to check), that would work for me.

I think it would be terribly inefficient to actually leave it up to every programmer to research this, so even if the normative definition was as formulated above, I think it would be good for Rust documentation to say for which targets the definition matches C _Bool. Hopefully at present the answer is “on all supported targets”.


The C99 standard only requires two things, as far as I could find: a value != 0 is considered truthy for purposes of conversion to _Bool, and a conversion to _Bool will produce a value of either 1 or 0. However, it does not specify anything about the actually valid bit patterns for _Bool. Therefore, ∀x ∈ (0, 255), the following snippet is well defined (assuming the implementation defines _Bool to be a byte) when banana is implemented in C, but would not necessarily be valid if banana were implemented in Rust:

#include <stdint.h>

extern void banana(_Bool *bptr);

void peach(uint8_t x) {
    banana((_Bool *)&x);
}
That snippet is not correct (although it will not result in undefined behavior as written - only if banana attempts to read from or write to that parameter). Due to TBAA rules, passing a uint8_t object to a parameter as a _Bool* will result in undefined behavior if the function ever reads or writes from that parameter.

Second, that’s an incorrect assumption about the definition in the C standard - it doesn’t define what happens when you read an object of type uint8_t through a pointer of type _Bool *, and therefore it doesn’t need to care about what happens. The valid bit patterns aren’t defined by the C standard; however, they are defined by specific platforms. For x86/x86-64 platforms, _Bool is defined as a byte-width object with value 0 or 1.

What C defines is:

  • Conversion from 0 to _Bool will result in a _Bool with value 0.

  • Conversion from an integer or floating point type that is not equal to 0, to _Bool, will result in a _Bool with value 1.

  • Conversion from a _Bool with value 0, to an integer or floating point type, will result in a 0 value of that type.

  • Conversion from a _Bool with value 1, to an integer or floating point type, will result in a 1 value of that type.

In practice, the code that you’ve written, assuming TBAA is turned off, will only work for x = 0, and x = 1.


We finally got around to discussing this in the Lang Team meeting. We had the following questions, perhaps @ubsan, @nagisa, or others may be in a position to answer authoritatively:

  • Are there any extant platforms where _Bool in the C ABI is permitted to take values other than 0 or 1?
    • How I understood prior answers is “no, truth-y values will coerce to 1”
  • Does the C standard permit platforms where _Bool is permitted to take values other than 0 or 1?
  • Side question: are there extant platforms where sizeof(_Bool) != 1 ?
    • We believe the C standard permits other sizes, but weren’t clear if anyone took advantage of that freedom.

Our preference is to define bool as _Bool. This would be what most people expect and avoid massive breakage. If the C standard requires values of 0 or 1, then there seems to be basically no drawback. Otherwise, it implies a potential performance pitfall on matches (matches on bools, mind you) but doesn’t seem like a huge problem. Still, it’d be good to know the answers to those questions before reaching a final consensus.


The C standard does not specify ABI.

However, there is no platform I know of where _Bool is not a char, with values true = 1 and false = 0.

Let’s link this back the other way again, because there’s more discussion following @nikomatsakis’s comment on the PR:
