Some assorted notes:
Generally speaking, Rust considers any item which is not `pub` to be internal and not exported in any way.
Doing linker tricks is inherently unsafe and underdocumented. But `volatile` isn't the way to say that other code could be looking at a `static`; `pub` and `#[used]` are.
If it's another process, then using `volatile` is proper.
AIUI, these become relevant only in the face of whole-program optimization. Given it's “smart enough,” the optimizer would be justified in noticing that you only ever read this allocated object (e.g. from mmap), replacing all of your atomic reads with non-atomic reads, and coalescing time-separated reads. If some other process causes that memory to change, you have UB.
You need the reads to be volatile so that the volatile quality can do the “abstract machine IO” equivalent of the other process manipulating the visible memory.
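A minimal sketch of that cross-process case (the mapping, the flag location, and its meaning are all invented here); the only point is that every read goes through `read_volatile`, so the compiler cannot coalesce or drop them:

```rust
use core::ptr;

/// Spin until another process writes a nonzero value into shared memory.
/// `shared` is assumed to point into a mapping (e.g. from `mmap`) that
/// another process also writes to; the layout here is made up.
unsafe fn wait_for_flag(shared: *const u32) -> u32 {
    loop {
        // Volatile read: the compiler must perform it on every iteration and
        // may not coalesce it with, or replace it by, an earlier read.
        let value = ptr::read_volatile(shared);
        if value != 0 {
            return value;
        }
        core::hint::spin_loop();
    }
}
```

In practice you would likely also want the access to be atomic, to avoid a data race with the writer, which is exactly the volatile-atomic gap discussed below.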
Member of T-opsem, but not speaking for the team.
I believe the temperature is roughly that we do want access to volatile atomics, but it's comparatively low priority. The abstract op.sem is straightforward: do both the atomic “thing” and the volatile “thing” to guard the operation.
I believe volatile atomics are also a case where LLVM `unordered` may be useful semantically, as generally only the “cannot tear” part of atomics is desired, and the synchronization of even `monotonic` (our `Relaxed`) isn't necessary.
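To make that concrete, here is one hypothetical shape such an API could take; nothing like this exists in the standard library today, and every name below is invented:

```rust
use core::sync::atomic::Ordering;

/// Hypothetical sketch only. The idea is an access that is both atomic
/// (no tearing, no data-race UB) and volatile (the access may not be
/// elided, merged, or invented by the compiler).
pub trait VolatileAtomic {
    type Value;

    /// Stand-in for "do the atomic thing and the volatile thing at once".
    unsafe fn volatile_load(this: *const Self, order: Ordering) -> Self::Value;
    unsafe fn volatile_store(this: *mut Self, value: Self::Value, order: Ordering);
}
```

For the `unordered`-style use case described above, even the `order` parameter is arguably more than is needed, since only the no-tearing guarantee is wanted.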
IIUC, because there's no simple way to restrict `AtomicCell<T>` to only primitive integer types with processor atomics support as a trait obligation. It could have been some `trait Atomic` over the relevant types that dispatched to the various `intrinsics::atomic_*` functions, but then you still have the follow-up question of why `AtomicCell<IndexNewtype>` can't work, even just with `load`/`store`. It's essentially the numeric trait design problem, but worse.
`AtomicNN` was a “working enough” solution. And you can generalize over atomic sizes with associated types in a library, e.g. radium.
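A minimal sketch of that associated-types approach (the general shape of the idea only, with a hypothetical trait name; this is not radium's actual API):

```rust
use core::sync::atomic::{AtomicU8, AtomicU16, AtomicU32, AtomicU64, Ordering};

/// Maps a primitive integer to its standard-library atomic wrapper, so a
/// cell type can be generic over "integers that have processor atomics".
pub trait HasAtomic: Copy {
    type Atomic;
    fn load(atomic: &Self::Atomic, order: Ordering) -> Self;
    fn store(atomic: &Self::Atomic, value: Self, order: Ordering);
}

macro_rules! impl_has_atomic {
    ($($int:ty => $atomic:ty),* $(,)?) => {$(
        impl HasAtomic for $int {
            type Atomic = $atomic;
            fn load(atomic: &Self::Atomic, order: Ordering) -> Self {
                atomic.load(order)
            }
            fn store(atomic: &Self::Atomic, value: Self, order: Ordering) {
                atomic.store(value, order)
            }
        }
    )*};
}

impl_has_atomic! {
    u8 => AtomicU8,
    u16 => AtomicU16,
    u32 => AtomicU32,
    u64 => AtomicU64,
}
```

A cell built on `T: HasAtomic` still can't accept an `IndexNewtype` without further machinery, which is the follow-up problem mentioned above.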
Also FWIW, `&VolatileCell<T>` as a library type is fundamentally broken and cannot be correctly papered over. It could be implemented with compiler magic, but that compiler magic is necessary to prevent spurious accesses, similar to the magic applied for `UnsafeCell` and `UnsafePinned` (`async`/`!Unpin`).
However, you then also have to ask what the semantics of `&(u64, VolatileCell<u64>, u64)` are, or of other compound types with a volatile place sitting in their middle. It's not a question that lacks a reasonable answer, but it's much less self-evident than just asking what it means for an access to be both volatile and atomic.
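For reference, the naive library-only version looks roughly like this (a sketch, not any particular crate's code); the problem is that nothing in the type prevents the compiler from inventing its own non-volatile accesses to the memory behind the `&self` reference, which is exactly the spurious-access issue the compiler magic would have to rule out:

```rust
use core::cell::UnsafeCell;
use core::ptr;

/// Naive library-only VolatileCell. Without compiler support, a shared
/// reference to this still permits the compiler to introduce dereferences
/// of the underlying memory on its own, so "every access is volatile"
/// cannot actually be upheld by the library alone.
#[repr(transparent)]
pub struct VolatileCell<T> {
    value: UnsafeCell<T>,
}

impl<T: Copy> VolatileCell<T> {
    pub fn get(&self) -> T {
        // SAFETY: the pointer comes from a live UnsafeCell we hold a
        // reference to, so it is valid for reads.
        unsafe { ptr::read_volatile(self.value.get()) }
    }

    pub fn set(&self, value: T) {
        // SAFETY: as above; UnsafeCell permits mutation through &self.
        unsafe { ptr::write_volatile(self.value.get(), value) }
    }
}
```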
(Not said with any authority.) A write on the Abstract Machine is observable if that write could at any point be validly read without causing UB. Given whole-program knowledge, any Rust Allocated Objects (i.e. allocated via `Box`/`alloc::alloc::alloc` (heap); `let` bindings and function parameters (stack); or `static` items (global)) are constrained to access within the Abstract Machine's vision unless
- the memory is sourced from outside the AM (e.g. an `extern static` or an `extern fn`-originating pointer);
- the `static` place is visible externally to the AM (e.g. it has a known export name (`#[no_mangle]`/`#[export_name]`) or is marked `pub` and `#[used]`); or
- the memory is visible through a pointer which has been passed beyond the AM's visibility (e.g. to an `extern fn`), whose provenance has not been invalidated, and a read through which would be sequenced with the write (i.e. would not race with it and be UB).
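As an illustration of the second and third bullets (the `inspect` extern function is hypothetical; only its existence matters):

```rust
use core::sync::atomic::{AtomicU32, Ordering};

// Second bullet: a static with a known export name. Unknown external code
// may validly read it, so writes to it are observable and cannot be elided.
#[no_mangle]
pub static STATUS_WORD: AtomicU32 = AtomicU32::new(0);

extern "C" {
    // Hypothetical external function standing in for unknown code.
    fn inspect(ptr: *const u32);
}

pub fn publish(value: u32) {
    // Observable because the static above is exported.
    STATUS_WORD.store(value, Ordering::Relaxed);

    // Third bullet: once this pointer crosses into unknown code, a properly
    // sequenced read through it can observe the write to `local`, so the
    // compiler can no longer treat `local` as private to the AM.
    let local = value;
    unsafe { inspect(&local) };
}
```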
There's no one definition of “observable” because we're aiming at an operational specification of the Rust abstract machine (op.sem == operational semantics). This definition just falls out of the definition of `extern`al linkage as being unknown code that could possibly do any defined operation to the AM state (i.e. you could define the external operations as some number of threads doing some sequence of valid things that Rust could do). So this isn't exhaustive, and shouldn't be.
`volatile` then essentially turns `*place = Read.volatile(pointer);` into `extern_arbitrary(); *place = Read(pointer); extern_arbitrary();`. ...But it's unfortunately not that simple, because the set of things which LLVM permits a volatile access to do (per LangRef) is actually smaller than the set of things which an arbitrary function call can do. Or at least, I think it is; this part is solidly out of my believed understanding. (Critically, w.r.t. atomic synchronization. Atomics are reasonably well studied. `volatile` is AFAICT still much more vibe-based, around “don't do this optimization” rather than operational.)
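Translated into Rust-shaped pseudocode, that model looks roughly like this (`extern_arbitrary` is a made-up stand-in for unknown-but-defined external code, not a real function):

```rust
extern "C" {
    // Stand-in for "unknown code that may do anything the AM permits".
    fn extern_arbitrary();
}

/// Roughly how the paragraph above models a volatile read: bracket a plain
/// read with points at which the outside world may validly act on any
/// memory it can legitimately reach.
unsafe fn modeled_volatile_read(pointer: *const u32) -> u32 {
    extern_arbitrary();
    let value = *pointer;
    extern_arbitrary();
    value
}
```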
Yes, this is the current working model for any communication “out of” the Rust AM world; any operation done by code “outside” the AM is modeled as native AM threads doing the AM operations corresponding to whatever the external operations are, according to the implementation mapping semantics between the shared semantics (e.g. LLVM-IR for LTO, or x86 for the processor) and the AM semantics.
The one wrinkle to ask about is whether any such additional threads are allowed to be running when entering `main()`. They certainly are after the first `extern fn` call, as that call can be said to spawn all of those threads necessary to model the outside world.