(I waffled on whether this belonged on Users or Internals; send me off to Users as needed.)
In an embedded application, there is a 32-bit (word-sized) memory-mapped register. It contains several bit fields. I want to update one of them atomically without accidentally clobbering changes to others.
In C++ I might model this problem as follows:
#include <atomic>
#include <cstdint>

// mask and value describe the bit field being updated.
extern const uint32_t mask, value;
// "register" is a reserved word in C++, so the MMIO register is "reg" here.
extern std::atomic<uint32_t> volatile reg;

void update_register() {
    bool success = false;
    do {
        uint32_t val = reg.load();
        // Retry until the read-modify-write lands without clobbering others.
        success = reg.compare_exchange_weak(val, (val & ~mask) | value);
    } while (!success);
}
This does what I want: the volatile std::atomic preserves accesses/ordering, and the compiler produces (ARM) code using ldrex / strex to get correct atomic semantics.
Rust, like LLVM, models the concept of “volatile accesses” at the operation level instead of the type level, using the volatile_load and volatile_store intrinsics.
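For concreteness, a plain volatile read-modify-write through the stable wrappers around those intrinsics (core::ptr::read_volatile / write_volatile) would look roughly like the sketch below; the mask, value, and function name are made up for illustration. The accesses are preserved and ordered, but the sequence is not atomic:

use core::ptr;

// Hypothetical stand-ins for the field being updated.
const MASK: u32 = 0x0000_00f0;
const VALUE: u32 = 0x0000_0030;

// Plain volatile read-modify-write: the accesses are preserved and ordered,
// but the sequence is NOT atomic, so a concurrent writer can be lost.
unsafe fn update_register_volatile_only(reg_addr: *mut u32) {
    let val = ptr::read_volatile(reg_addr);
    ptr::write_volatile(reg_addr, (val & !MASK) | VALUE);
}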
Separately, Rust has atomic access intrinsics, seemingly modeled after LLVM’s atomic operations.
However, to implement cases like this, one needs the volatile flag that LLVM memory-accessing operations carry alongside their other attributes (such as memory ordering). Rust’s atomic intrinsics capture some of the Cartesian product of these attributes (e.g. there are 17 compare-exchange intrinsics, a flattening of the attributes of a single LLVM instruction), but not the concept of access volatility.
As a result, it’s not clear how I get atomic operations with LLVM/C11 volatile semantics in Rust.
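The closest thing I can see writing today is something like the following sketch, which views the register word as an AtomicU32 (the constants and names are again hypothetical). It gets the atomic read-modify-write, but as far as I can tell nothing about it is volatile:

use core::sync::atomic::{AtomicU32, Ordering};

// Hypothetical stand-ins for the field being updated.
const MASK: u32 = 0x0000_00f0;
const VALUE: u32 = 0x0000_0030;

// CAS loop over the register word, viewed as an AtomicU32. The update is
// atomic (ldrex/strex on ARM), but nothing in the types or the operations
// says "volatile", which is the gap described above.
unsafe fn update_register_atomic_only(reg_addr: *mut u32) {
    let reg = &*(reg_addr as *const AtomicU32);
    let mut val = reg.load(Ordering::Relaxed);
    while let Err(current) = reg.compare_exchange_weak(
        val,
        (val & !MASK) | VALUE,
        Ordering::SeqCst,
        Ordering::Relaxed,
    ) {
        val = current;
    }
}

(Whether casting an MMIO address to &AtomicU32 like this is even allowed is part of what I’m unsure about.)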
I am aware that I could work around this by
- Wrapping accesses with a lock instead of using atomic updates (roughly sketched below), and
- Bypassing the compiler and using inline assembly,
but I’d like to avoid both if possible.
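For reference, the lock-based workaround I have in mind is roughly the following; the Mutex is a placeholder, since on a single-core MCU it would more likely be an interrupt-disabling critical section:

use core::ptr;
use std::sync::Mutex;

// Placeholder lock serializing every writer of this register.
static REG_LOCK: Mutex<()> = Mutex::new(());

// Workaround: plain volatile read-modify-write, made exclusive by the lock
// rather than by an atomic compare-exchange.
unsafe fn update_register_locked(reg_addr: *mut u32, mask: u32, value: u32) {
    let _guard = REG_LOCK.lock().unwrap();
    let val = ptr::read_volatile(reg_addr);
    ptr::write_volatile(reg_addr, (val & !mask) | value);
}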
Thanks!