Optimization barriers suitable for cryptographic use

On those architectures you could still write these primitives in out-of-line assembly and link to them via FFI.

That might actually be a good idea on all platforms so that you don't expose secret keys to Rust in any way.

5 Likes

Frankly, relying on something like cc is a horrible solution for this problem. Not only does it have issues in practice (e.g. with cross-compilation), but it also goes against the goals of many projects that strive to implement cryptography in pure Rust.

Why introduce such dirty hacks instead of improving black_box guarantees?

The approach used by crates like subtle relies on introducing types that are "opaque" to the optimizer (e.g. u8 instead of bool) and structuring code in a way that forces the compiler to generate logical operations (e.g. AND, OR, etc.) instead of branches. black_box could be useful here to create optimization barriers, so the compiler is unable to deduce that this u8 can take only the two values 0 and 1. Sure, it's not a completely bulletproof solution, but it's the best we can do with the current version of Rust without going all the way to hand-written assembly.
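To make the pattern concrete, here is a minimal sketch of that style of constant-time selection (the name `ct_select` is illustrative, not subtle's actual API):

```rust
use core::hint::black_box;

/// Select `a` if `choice == 1`, `b` if `choice == 0`, without branching.
/// `black_box` hides from the optimizer that `choice` is only ever 0 or 1,
/// so it cannot turn the masking arithmetic back into a branch.
fn ct_select(choice: u8, a: u32, b: u32) -> u32 {
    // choice 0 -> mask 0x0000_0000, choice 1 -> mask 0xFFFF_FFFF
    let mask = (black_box(choice) as u32).wrapping_neg();
    (a & mask) | (b & !mask)
}
```

Whether the barrier actually holds is exactly the guarantee being asked for in this thread.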

5 Likes

If you're only going for best effort anyway, why are you bothered by the warning in the docs? You can still use the function on a best-effort basis.

The warning is there because that's not the usual best practice in cryptography: you don't usually want to say "the failure probability is 2^-256 plus 1% because of best-effort guesses".

2 Likes

Because with the current wording alternative compilers can ignore black_box completely. See the Zulip discussion linked above.

Unfortunately, in the real world writing secure code often boils down to different levels of "best effort", especially with hardware vendors prioritizing performance over security by default. So the only thing we can do is to gradually resolve different concerns step-by-step depending on their severity. Improving guarantees for black_box is the easiest low-hanging fruit.

3 Likes

subtle actually already has a core_hint_black_box feature that makes use of it for Choice, which is another example of off-label usage per this documentation. But it still seems strictly better than the previous strategy of using a volatile read.

1 Like

In the Zulip thread people are talking about manually inspecting assembly for all platforms and all rustc versions.

Surely that's even more of a "dirty hack" than writing that assembly once per architecture? At least then the assembly wouldn't change with a new rustc version.

4 Likes

Manual inspection is not required. It's possible to create automated tests: compile a sample function that depends on black_box to generate branchless code, then have the test check that no branches were in fact generated. IIRC we already have something similar for SIMD intrinsics, to check that an intrinsic compiles into the desired instruction.
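A rough sketch of what such a test could look like, assuming an x86_64 target, an `objdump` binary on PATH, and a pre-built sample artifact (the mnemonic list is illustrative, not exhaustive):

```rust
use std::process::Command;

/// Scan x86_64 disassembly text for conditional-branch mnemonics.
/// (Illustrative subset; a real test would cover each target's full set.)
fn has_conditional_branch(disasm: &str) -> bool {
    const COND_BRANCHES: &[&str] = &[
        "je ", "jne ", "jz ", "jnz ", "jl ", "jle ", "jg ", "jge ",
        "ja ", "jae ", "jb ", "jbe ", "js ", "jns ",
    ];
    disasm
        .lines()
        .any(|line| COND_BRANCHES.iter().any(|m| line.contains(m)))
}

/// Disassemble a compiled sample and assert the black_box-based code
/// stayed branchless. `path` points at the hypothetical test artifact.
fn assert_branchless(path: &str) {
    let out = Command::new("objdump")
        .args(["-d", path])
        .output()
        .expect("failed to run objdump");
    let disasm = String::from_utf8_lossy(&out.stdout);
    assert!(
        !has_conditional_branch(&disasm),
        "conditional branch found in {path}"
    );
}
```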

Arguing that black_box is not bulletproof, and that we therefore should not bother improving its guarantees, sounds to me like arguing that we should not use Rust instead of C/C++ because it does not solve 100% of memory issues and the Rust compiler may break in future releases.

1 Like

With LLVM's optimizer free to insert branches anywhere at any time, the assembly alternative for cryptography really becomes writing entire algorithms in assembly, as opposed to using Rust at all.

5 Likes

I mean you could make the process like:

  • write the cryptographic functions/blocks in Rust, as before
  • generate and verify the assembly, and embed it in the project (and use it instead)
  • maybe have some sort of test which verifies that the code still matches the assembly

And for targets which don't have verified assembly, just use the Rust version and generate a warning that it might not be constant time. (Are there pragmas or something similar for generating warnings, like in C++?)

It would also fix the far worse case where a crypto library is compiled with -O0.

3 Likes

Warnings for non-local dependencies are suppressed, with the exception of future-compat warnings. Lints get ignored using --cap-lints=allow, and the warnings produced by build scripts are ignored too, I believe.

1 Like

How about deprecated? You could mark the non asm-variant deprecated.

That wouldn't work if the user of the crypto library is itself a non-local dependency, as said dependency would have the deprecated warnings disabled too.

1 Like

An alternative compiler can also add arbitrary data-dependent branching anywhere in the program, with or without black_box. It's allowed to implement ^ with a branching special case for one side being zero, for example, which is entirely acceptable under Rust semantics and would be intolerable for cryptographic code.
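To make that concrete, here is a sketch of a lowering of `^` a conforming compiler could legally choose: the results are identical, but the data-dependent branch is a timing leak.

```rust
/// A semantically correct but timing-leaky implementation of `a ^ b`:
/// the language only requires the right result, not a branchless one.
fn branchy_xor(a: u64, b: u64) -> u64 {
    if b == 0 {
        a // "fast path" special case: fine per language semantics
    } else {
        a ^ b
    }
}
```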


I think that anyone wanting a stronger comment on black_box needs to write what comment they would like it to be instead, and how that's a material difference.

Basically, I don't see "you can use this for crypto but it's still not stable and you still need to manually inspect the output assembly of every single build to confirm it did what you want" as materially different from "we don't guarantee it to do anything", which is what "this is allowed to do nothing" basically means.

6 Likes

black_box should guarantee that from the compiler's point of view it acts as an opaque pure function. It can be modeled either as an empty asm! block or as an extern identity function.

For "primitive" types we want to get the same effect as for the following asm! block, but automatically for all supported targets:

use core::arch::asm;

pub fn black_box_usize(mut dummy: usize) -> usize {
    unsafe {
        // An empty asm template that claims to consume and produce the
        // value, forcing the compiler to treat it as opaque.
        asm!(
            "# {}",
            inout(reg) dummy,
            options(preserves_flags, nostack, pure, nomem),
        );
    }
    dummy
}

With "complex" types we could model it like this instead:

use core::arch::asm;
use core::mem::ManuallyDrop;

pub fn black_box<T>(dummy: T) -> T {
    let mut dummy = ManuallyDrop::new(dummy);
    let mut dummy_ptr: *mut ManuallyDrop<T> = &mut dummy;
    unsafe {
        // `readonly` rather than `nomem`: the asm must be assumed to read
        // through the pointer, otherwise the store above could be elided.
        asm!(
            "# {}",
            inout(reg) dummy_ptr,
            options(preserves_flags, nostack, pure, readonly),
        );
        core::ptr::read(dummy_ptr.cast())
    }
}

It introduces unnecessary copies, but it can be a reasonable starting point.

Yes, there is still room for potentially surprising optimizations (e.g. when "observing" value with black_box without using its output), but it's still much better than the current status quo, which is effectively defended with the "all-or-nothing" fallacy.

The current wording isn’t just “we don’t guarantee it to do anything”. If it were, that’d be fine, IMO.

Instead it says "As such, it must not be relied upon to control critical program behavior. This immediately precludes any direct use of this function for cryptographic or security purposes".

This makes it sound like it’s completely unsuitable for any cryptographic usage whatsoever, and that any such usage is a mistake and should be removed and replaced with something else. It’s basically saying “don’t use this for cryptography, ever!” But asm! aside, there is nothing else more suitable.

A less harsh, absolutist wording which talks directly to guarantees instead of usage like “This function provides no guarantees when used in cryptographic and security-critical contexts” would be significantly better, IMO.

PR to change the wording: reword the hint::blackbox non-guarantees by the8472 ¡ Pull Request #126703 ¡ rust-lang/rust ¡ GitHub

This was and remains correct if you're targeting Rust-the-language, i.e. code that adheres to the language and library specification. What you're doing is something else entirely. You're targeting one rustc instance and manipulating it into emitting whatever assembly you wish. You could even exploit compiler bugs to get it to do your bidding under those circumstances. Under such circumstances you are already ignoring API contracts.

4 Likes

Thank you!

The way it was worded before was so stringent it made it sound even worse than a no-op, like using it at all would actively break otherwise working cryptographic code.

The new wording is much better.

Even with the new wording, it's still allowed to do that, so long as it's still an identity function.

There's no difference between "would" and "might" at the language-spec level, for things it defines as unobservable. (Just like how we can splat the secret key all over the place in memory without violating any language rules.)

5 Likes

black_box may also act as an optimization barrier, but it doesn't actually say that further operations will be implemented in constant time. Would that be an interesting direction to go down instead? If integers and booleans and a few other types had constant_time_add etc.?

(There’s still plenty of work here, LLVM would have to support operations tagged as constant-time, probably as new intrinsics that need to be carefully implemented for each architecture.)
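For flavor, here is the kind of branchless primitive such a hypothetical constant_time_* API could expose, written today in plain Rust (the function name is illustrative; without compiler support the optimizer remains free to reintroduce branches, which is exactly the gap new intrinsics would close):

```rust
/// Branchless 64-bit equality: returns 1 if x == y, else 0.
fn ct_eq_u64(x: u64, y: u64) -> u64 {
    let d = x ^ y; // zero iff equal
    // For nonzero d, `d | d.wrapping_neg()` always has its top bit set.
    1 ^ ((d | d.wrapping_neg()) >> 63)
}
```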

4 Likes

I think this makes more sense with special types, e.g. an OpaqueU64 that doesn't leak its data except via some specific, explicit operations.

It's not sufficient to prevent the generated code path from depending on the data during arithmetic operations like addition; you also have to prevent it from depending on the data in less direct ways, such as speculative execution.
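A minimal sketch of what such a type could look like (the name `OpaqueU64` and its API are purely illustrative, not from any existing crate):

```rust
use core::hint::black_box;

/// Hypothetical opaque wrapper: the raw value is never exposed directly,
/// only through operations intended to be constant time, plus one
/// explicit, named "leak" point.
pub struct OpaqueU64(u64);

impl OpaqueU64 {
    pub fn new(v: u64) -> Self {
        // black_box keeps the optimizer from reasoning about the value.
        Self(black_box(v))
    }

    /// Branchless select: `mask` must be all ones (take self) or all
    /// zeros (take other).
    pub fn select(&self, other: &Self, mask: u64) -> Self {
        Self((self.0 & mask) | (other.0 & !mask))
    }

    /// The only way to get the plain value back out.
    pub fn declassify(self) -> u64 {
        self.0
    }
}
```

Of course, as noted above, a type-level wall only controls the architectural data flow; it can't by itself rule out leaks via speculation or other microarchitectural channels.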