Volatile and sensitive memory

One can have a VolatileCell in a static that gets placed at a specific significant address (with a const fn new method). That said, the new method isn't the interesting part: unless you're making a point I don't understand, the core here is get and set since they're the gate-keepers of the actual volatile reads/writes (i.e. one can ignore the new methods entirely).
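For concreteness, here is a rough sketch of what such a cell's get/set could look like (illustrative only, not the exact code under discussion); the point is that every access to the inner value is funneled through a volatile read or write:

use core::cell::UnsafeCell;
use core::ptr;

pub struct VolatileCell<T> {
    value: UnsafeCell<T>,
}

// Allow placing the cell in a `static` (whether this is sound for MMIO is
// exactly what the rest of this thread is about).
unsafe impl<T: Send> Sync for VolatileCell<T> {}

impl<T: Copy> VolatileCell<T> {
    // const fn so the cell can be used as the initializer of a static.
    pub const fn new(value: T) -> Self {
        VolatileCell { value: UnsafeCell::new(value) }
    }

    // The gate-keepers: all reads and writes of the inner value are volatile.
    pub fn get(&self) -> T {
        unsafe { ptr::read_volatile(self.value.get()) }
    }

    pub fn set(&self, value: T) {
        unsafe { ptr::write_volatile(self.value.get(), value) }
    }
}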

Now I’m curious… what’s the syntax for placing a static at a specific address at run time?

There’s no syntax in Rust itself, so I believe the easiest way at the moment is to drive the linker directly and use its support for that sort of thing, e.g. http://stackoverflow.com/a/19831133/1256624 or http://mcuoneclipse.com/2012/11/01/defining-variables-at-absolute-addresses-with-gcc/ for the C side, with #[link_section = "..."] being the Rust equivalent of the section("...") attribute. (This is obviously somewhat unfortunate, but I suspect almost all cases where one has specific memory addresses of interest are bare-metal/embedded situations, where one is probably going to have to be interacting with a linker directly anyway.)
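For illustration, the pattern looks roughly like this, using the VolatileCell sketch from above; the section name is made up and has to be mapped to the desired absolute address in a linker script (and later posts in this thread revisit whether this reference-based pattern is sound for MMIO):

// ".mmio_regs" is a hypothetical section; a linker script must place it
// at the intended address for this to mean anything.
#[link_section = ".mmio_regs"]
static CONTROL: VolatileCell<u32> = VolatileCell::new(0);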


Does UnsafeCell guarantee that no accesses are generated into it without actual accesses in the code?

No

Sorry for dropping out of this thread. I've just read the entire thing and wanted to throw in my two cents.

First, a meta-comment: I don't believe we can expect a "definitive answer" to the questions that @briansmith is raising at this time, because the "Rust memory model" is not yet defined. As @arielb1 says, we "ought to" -- and we are working on settling it, but it is a complex equation with a lot of variables, and frankly one with an unclear priority. As a starting point, I've been going through all the discussion threads and so forth and trying to gather up a list of important examples, along with collecting discussion. This can be found in this repository:

At the moment, these examples are culled directly from discussion threads. I do plan to go back over them and try to eliminate duplicates / simplify / coalesce. Then I hope we can evaluate various proposals and see how they "score" against the examples. I've added in the various examples from this discussion as well.

All that being said, @briansmith needs to write some code today. For the time being, we have de facto adopted LLVM's "volatile access model". Personally, I am pretty comfortable with this, and I would not expect major changes. This implies to me that @huon's example of a volatile wrapper is currently correct (as @huon points out, the VolatileCell example is not, though for a reason that is orthogonal to volatility).

The key point is that the compiler will not randomly introduce reads of an &T -- it will only introduce reads that it can prove may happen at some point in the future. This is (I believe) true even with the dereferenceable attribute. So if you have a field x: T and it is only ever read via a volatile read, then the compiler will not introduce spurious, non-volatile reads of it. I believe these are LLVM's (and hence Rust's) current semantics (@briansmith seems to have come to the same conclusion).

So TL;DR I think @briansmith should adopt a volatile wrapper like @huon's example. It will work fine today. It may need some adjustment in the future, but that seems fairly unlikely, and should not affect consumers of the API.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.

For anyone reading this thread as a reference for volatile: this no longer reflects our current understanding of how volatile and dereferenceable interact. See this GitHub thread for the latest discussion, and in particular this quote by @hanna-kruppe:

If an lvalue is accessed through only volatile loads, LLVM will not add accesses that aren't volatile.

I've not seen this guarantee stated anywhere in LLVM docs. It seems reasonable on its own, but the dereferenceable attribute may very well throw a wrench in it. Without that attribute, LLVM may not assume it can insert loads from a pointer, so it won't insert any unless it sees a pre-existing load, which can be extended (with a bit of care) to take the volatile-ness of the existing loads into account and not take pre-existing volatile loads as permission to insert new loads. On the other hand, by definition dereferenceable means it's OK to insert any loads from the address anywhere.

While one may be tempted to say "OK but don't do that if there are volatile accesses", that's not really possible. The transformation that wants to insert a load may not be able to see the volatile accesses (e.g., because they are in another function), or there may not even be any accesses at all to the location (e.g., if the source program creates a &VolatileCell but doesn't use it).

So, the current thinking is that you should not have references pointing to MMIO memory as the compiler is allowed to introduce spurious reads. But if you just want to zero-out memory and make sure the writes really happen, this should not affect you. (Please don't reply here but continue in the other thread).
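As a concrete illustration of that guidance, MMIO access can go through raw pointers and the volatile intrinsics without ever materializing a reference (the address below is made up):

use core::ptr;

fn read_status() -> u32 {
    // Hypothetical MMIO register address, for illustration only.
    let status_reg = 0x4000_0000usize as *const u32;
    // Only a raw pointer is involved; no &u32 to the MMIO location is ever
    // created, so the compiler gets no licence to insert spurious reads.
    unsafe { ptr::read_volatile(status_reg) }
}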


The idea that Rust code should avoid using references in some circumstances, when the semantics of Rust references are the primary reason many people are using Rust instead of C++, seems absurd. This seems like a case where a bunch of locally-reasonable small decisions add up to a bad decision.


I don't think Rust's (or LLVM's) machine model of its input IR provides many guarantees with regard to timing. The number of inserted spurious reads is only remotely connected to this, as even such fundamental things as

let a = b*c;

are not guaranteed to be constant time. The compiler is allowed to insert arbitrary new branches that check and depend on the inputs and outputs of such a value computation (e.g. it may check that both operands are smaller than half the register size, then find some clever way to exploit this). References are, in some sense, exactly the same; the only difference is that their value also allows reasoning about the values in their pointed-to memory.

And that is a good thing. Without it, the compiler wouldn't be allowed to do most of the awesome optimizations it does. Vectorizing an array copy, for example, depends on the alignment of the input, so the compiler introduces a conditional head and tail when the source or target array is not aligned and does a faster copy of the aligned inner part. But then the timing of the code leaks information about the alignment, and I see no reason why this should not be allowed or wanted in generic code. I think this example is enough to conclude that *a = b is not even close to constant time for every type (and that includes implicit assignment from a return value), so why should it be for some arbitrary subset?

So references yield non-constant-time semantics completely independently of any assumptions about non-spurious reads; they are not special because of it. And I think this demonstrates something else: how is the compiler supposed to know which inputs are safe to leak and which are not? For a generic xor combinator it may be safe to leak the length when it does not depend on any secret inputs in any of its usages. Leaking bits of the address, however, is risky. There is, however, no inherent difference between the two. I think anything short of checking the fully optimized LLVM output (maybe you don't need to go down to machine code) against a manually compiled definition of safe inputs will fall short of a constant-time guarantee. That concern is somewhat different from the concern of MMIO.


First of all, I can see many other reasons to use Rust over C++, from enums to a type system providing safe abstractions to traits. References are not the primary reason IMO.

But hyperboles aside, the alternative here would be to pessimize 99.99% of the code (everyone using references that do not point to MMIO memory) just to make MMIO memory (an extremely niche use case) slightly less unsafe (but still unsafe) to work with. That seems absurd to me. That also violates the zero-cost principle "you don't pay for what you don't use", as everyone would be paying for MMIO all the time. This is inherently a case where there is a conflict between high-level optimizations and low-level control.

Several solutions have been proposed in the GitHub thread, some can be implemented as a library today (such as creating a custom newtype around raw pointers and using that as an "MMIO pointer type"), some require some more design and an RFC.


Maybe you misunderstood my "many people" for "most people" or "all people." The memory safety properties of Rust enabled by the reference semantics and the borrow checker (without forcing the use of a GC) were the deciding factor in every project I've been involved in that switched to Rust. I don't deny that Rust has lots of other good and nice-to-have features, but borrow checking's importance is on another level, so designs should be optimized for leveraging the borrow checker.

Several solutions have been proposed in the GitHub thread, some can be implemented as a library today (such as creating a custom newtype around raw pointers and using that as an "MMIO pointer type"), some require some more design and an RFC.

We need a "volatile reference" type either in the language or in libcore, and a trait that abstracts over non-volatile and volatile references. I agree that "more design and an RFC" are needed, as well as the implementation. This should be done and standardized (in libcore) before any final decision is made on how volatile memory works in Rust.


If you really require it, would it be possible to build custom borrow-checked pointer types? It's not the prettiest, but it would give you very precise control over which reads and writes are allowed to occur and how.

use core::marker::PhantomData;

// A raw pointer plus a lifetime marker; accesses go through explicit methods, not Deref.
struct PoorMansRef<'a, T> {
    ptr: *const T,
    lifetime: PhantomData<&'a T>,
}
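
One could then expose reads only through an explicit volatile method, for example (a sketch building on the snippet above, not part of the original suggestion):

impl<'a, T: Copy> PoorMansRef<'a, T> {
    // Every read is an explicit volatile load; with no Deref impl, the
    // compiler never sees a plain reference to the pointed-to memory.
    fn read(&self) -> T {
        unsafe { core::ptr::read_volatile(self.ptr) }
    }
}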

Yes, that’s the kind of interface that I think people would want. However, I’d like to see it specified in more detail. For example, it seems like one could not implement Deref and DerefMut for that because Deref::deref is fn deref(&self) -> &Self::Target and we can’t (for reasons described above) use a reference to refer to the volatile item.


It’s possible we could/should create an UnsafeDeref[Mut] which is an unsafe overload of raw pointer deref, and gives out *[const|mut] Self::Target.
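Purely as an illustration of that shape (nothing like this exists in libcore today; names and signatures are made up):

trait UnsafeDeref {
    type Target: ?Sized;
    // Hands out a raw pointer instead of a reference; actually reading
    // through it remains the caller's (unsafe) responsibility.
    fn unsafe_deref(&self) -> *const Self::Target;
}

trait UnsafeDerefMut: UnsafeDeref {
    fn unsafe_deref_mut(&mut self) -> *mut Self::Target;
}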

Its relationship to today’s Deref is complicated, however. We’d probably want them to be mutually exclusive, as they use the same syntax.

People were recently talking about adding something akin to C’s -> operator to Rust. Perhaps UnsafeDeref could specifically overload that?

C's -> operator is merely sugar for dereference-then-field-access, which Rust already performs. It makes no real sense to introduce it as such, since its result is an lvalue and then we'd be stuck with a reference again. On references this is already done automatically today.

  • foo->bar works in C
  • (*foo).bar works in C+Rust
  • foo.bar works in Rust

C++'s operator-> is Deref/DerefMut in disguise, since it likewise does not allow evaluating to an actual pointer result but dereferences the final pointer as well. So yes, some new operator would be nice, but not operator->. I'd argue it is better to avoid reusing the same symbol with different semantics.

Returning a raw pointer in that trait seems suboptimal as well. Note that Deref both takes and returns a reference. The equivalent would be to take and return a pointer, which defeats its use for providing custom pointer types.


(*foo).bar works but is gross and doesn’t chain well. That’s like the whole point of adding ->


Accessing a volatile variable through a non-volatile lvalue is undefined behavior in C and C++. I would assume that doing the same from Rust (without using the volatile intrinsics) would also be undefined behavior.