It depends on what you say the AM specification of the `asm!` block is, which is your responsibility to provide as the author of the `asm!`. (Additionally, a (normal) function definition cannot be UB by itself; you must call it in order for the AM to encounter the UB (which may then time travel).)
With the current draft specification of the AM, this function could be defined, although it would need to use angelic nondeterminism, the heaviest hammer we have, as there is no existing AM operation which determines whether a byte is uninitialized without also throwing UB if it encounters an uninitialized byte. (This is sufficient for there to be no UB at the (current) LLVM level, as specified.)
This operation is not `freeze`, though, although you could use it as `freeze`. `freeze` replaces an uninit byte via demonically nondeterministic choice, meaning that if any possible output value could trigger UB, your program has UB. Angelic nondeterminism is the opposite, doing "what the programmer intended[1]."
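The demonic/angelic distinction can be made concrete with a toy model (mine, not the AM's actual formalism): treat the values a nondeterministic choice could produce as all 256 bytes, and a predicate saying which chosen values would lead the program into UB.

```rust
/// Demonic choice: the adversary picks the value, so the program has UB
/// if ANY possible value triggers it.
fn demonic_has_ub(ub: impl Fn(u8) -> bool) -> bool {
    (0u8..=255).any(ub)
}

/// Angelic choice: the value is picked to avoid UB if possible, so the
/// program has UB only if EVERY possible value triggers it.
fn angelic_has_ub(ub: impl Fn(u8) -> bool) -> bool {
    (0u8..=255).all(ub)
}

fn main() {
    // Suppose the frozen byte feeds a branch that is UB only when it is 0:
    let ub_if_zero = |b: u8| b == 0;
    assert!(demonic_has_ub(ub_if_zero));  // the adversary can pick 0
    assert!(!angelic_has_ub(ub_if_zero)); // a nonzero pick avoids UB
    println!("ok");
}
```

This is why a demonic `freeze` is safe to reason about (you must handle every possible byte) while angelic nondeterminism is the "heaviest hammer": it quantifies over executions in the programmer's favor.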
Additionally, it's still undecided whether `asm!` semantics are actually allowed to use nondeterministic pick/choose semantics, or whether those semantics are limited to being used as part of the definition of the actual usable AM operations.
So the full answer to your question is that it is undecided. By a strict reading of the currently defined rules, it is impossible to define your function's behavior, thus it is undefined. By a loose reading, the building blocks do exist, so if so defined, it is defined. The project currently explicitly reserves the right to decide either way, so it would be unsound either to define your function or to rely on the absence of a `freeze` operation.
My personal opinion is that we are more likely to expose a `freeze` than not, but this is not a strongly held opinion. After all, the preservation of `write_volatile` to zeroize secrets on drop is only maintained by best-effort quality of implementation in the compiler[2].
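For context, the zeroize-on-drop pattern mentioned above is typically built on volatile stores, roughly like this sketch (the `zeroize` helper is mine; real crates such as `zeroize` add compiler fences and more):

```rust
/// Best-effort zeroization: volatile writes tell the compiler the stores
/// are observable, so it should not elide them even though the buffer may
/// never be read again. As discussed above, this is quality of
/// implementation, not a hard guarantee.
fn zeroize(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        // SAFETY: `b` is a valid, aligned, exclusive reference.
        unsafe { core::ptr::write_volatile(b, 0) };
    }
}

fn main() {
    let mut key = [0xA5u8; 8];
    zeroize(&mut key);
    assert_eq!(key, [0u8; 8]);
    println!("zeroized");
}
```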
The main issue is getting ecosystem agreement on what this class of dangerous is. Soundness and `unsafe` are strictly (albeit still incompletely) defined in terms of the reachability of UB given downstream code fulfilling the letter of the safety contract and no more. (This letter may unfortunately be vague handwaving in some cases, such as the dynamic borrowing rules.) You can, although it may be difficult, objectively verify that any `unsafe` code is (or isn't) sound by looking at it and any code within its safety barrier (and any, hopefully rare, ambient rules and exploits considered out of scope by upstream libraries).
There's no such objective definition possible for "dangerous" functionality. Ecosystem crates already sometimes feel the need to extend the set of "not strictly considered `unsafe` but could still cause UB" conditions. The correct indication is a longer, expressive name that captures the downside, such as `.sort()` vs `.sort_unstable()`.
It's definitely cheeky, but if we define it as `unsafe fn freeze(MaybeUninit<T>) -> T`, it is `unsafe`, but only because it's really `freeze_assume_valid`, or whatever name you want to give the compound operation.
I don't think this is a good idea, as it obscures the fact that `freeze`-ing owned values is not in and of itself `unsafe`, and clarity of soundness requirements should be the primary objective of `unsafe` API design. But we could do it.
Formally, angelic nondeterminism picks an execution without UB if one exists. Restating it as "causes UB if the behavior the programmer expected would cause UB," though, shouldn't invalidate any reasoning about it, at least any non-formal reasoning. ↩︎
The compiler could in theory see that nothing can ever read the write and elide it, despite it being volatile, if the memory is known to ultimately be allocated on the stack or the `Global` heap, and thus to be "normally" behaved memory. In practice we just don't do this. ↩︎