Attributes on address ranges

In between other work, I've been forced to think about memory quite a bit lately: things like memory-mapped I/O, memory-mapped files, NUMA architectures, and other arcane and twisty things-that-pretend-to-be-ordinary-memory-but-aren't. This got me thinking about how we could tell Rust that certain address ranges have certain attributes. I think I may have an idea of how to do so, but it would mean creating a new, probably magical type that the compiler is fully aware of, and therefore knows how to treat specially. Here's a rough outline of the type in pseudo-Rust:

use std::ops::Range;

#[non_exhaustive]
pub enum Attribute {
    NoReadNoWrite,
    ReadOnly,
    WriteOnly,
    ReadWrite,
    MemoryMappedIO,
    // I have no idea what other stuff could be added, but 
    // I'm sure it'll be a lot!
}

pub struct AddressRangeAttributes {
    range: Range<usize>,
    attributes: Vec<Attribute>, // Maybe a set would be better?
}

So far, this is pretty boring (and likely both incomplete and not fine-grained enough, but I digress), but the magic is that there is a side-effect when you create an instance; the compiler knows that for the lifetime of that instance, that address range has those attributes. So a function that creates memory-mapped IO could do something similar to the following:

pub struct MemoryMappedFile {
    _memory_attributes: AddressRangeAttributes,
    // Whatever else is needed by this type.
}

impl MemoryMappedFile {
    pub fn new() -> Result<MemoryMappedFile, Error> {
        Ok(MemoryMappedFile{
            _memory_attributes: AddressRangeAttributes::new(...)?,
            // Etc., etc., etc.
        })
    }
}

The compiler knows about this magical type and does whatever it can to statically verify that the constraints given by the instance don't violate things that are known to be true. For example, the prelude could contain a 'static AddressRangeAttributes{range: 0..1024, attributes: vec![Attribute::NoReadNoWrite]} to mark that memory range as off limits. The type would also provide runtime access so that verification can be done for the things that can't be verified at compile time (e.g., there is a global memory allocation still live that overlaps the given range, so you can't create a memory-mapped region there).
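One wrinkle: vec! can't appear in a 'static item today, so a sketch of such a prelude entry would need a slice instead. Everything below (the names, the static, the whole shape) is my own invention for illustration, not an actual proposal:

```rust
use std::ops::Range;

// Hypothetical sketch, not real std API.
#[non_exhaustive]
pub enum Attribute {
    NoReadNoWrite,
    ReadOnly,
    WriteOnly,
    ReadWrite,
    MemoryMappedIO,
}

pub struct AddressRangeAttributes {
    pub range: Range<usize>,
    // A 'static slice instead of Vec, so the value can live in a static item.
    pub attributes: &'static [Attribute],
}

// The imagined prelude entry marking the first page as off limits.
pub static NULL_PAGE: AddressRangeAttributes = AddressRangeAttributes {
    range: 0..1024,
    attributes: &[Attribute::NoReadNoWrite],
};
```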

The reason I want to do it via the compiler is because some attributes might affect where the compiler chooses to allocate variables. For example, if I had speed attributes, I could mark that certain address ranges are naturally slower than other ones (like in a NUMA architecture), and with lifetimes I can make statements about how long that slowness lasts, in case I've done some temporary memory mapped I/O tricks.

Anyways, let me know what you guys think.


Features that help use "weird" kinds of memory safely and easily would certainly be very useful.

However, this particular solution looks like a chicken-and-egg problem. This code would run in an already-compiled program, so it couldn't affect its own compilation, which has already happened in the past.

Also by design all lifetimes are removed before code is generated, so lifetimes can't possibly affect what the code is doing. They're not instructions, but a redundant description (assertion) of what the code is doing anyway.


Yup, and that's where I'm stuck. I don't know how to make this work within the compiler. I know that the compiler is able to calculate some things (constants and constant functions), so if everything were known at compile time, then in theory it would work to some extent. At runtime it would need to update some kind of global structure that things like the global allocator would have access to, allowing the allocator to make decisions about which address ranges can be handed out (in case someone finds a way of using mmap() in a new and horrible way).
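To illustrate the runtime half, here's a minimal sketch of the kind of global structure an allocator or mmap() wrapper could consult before handing out a range. The names and error handling are invented for illustration:

```rust
use std::ops::Range;
use std::sync::Mutex;

// Hypothetical global registry of reserved address ranges.
static RESERVED: Mutex<Vec<Range<usize>>> = Mutex::new(Vec::new());

/// Reserve `range`, failing if it overlaps anything already reserved.
fn try_reserve(range: Range<usize>) -> Result<(), &'static str> {
    let mut reserved = RESERVED.lock().unwrap();
    let overlap = reserved
        .iter()
        .any(|r| r.start < range.end && range.start < r.end);
    if overlap {
        Err("range overlaps an existing reservation")
    } else {
        reserved.push(range);
        Ok(())
    }
}
```

A real version would also need a way to release reservations and to record which attributes each range carries, but the overlap test is the core of it.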

Basically, this isn't even a half-baked idea, it's a starting point for better ideas, and I'm really hoping that someone else that is smarter than me can figure out those better ideas.

Why I don't want attributes

As far as I know, you can't give lifetimes to attributes. In my opinion, that drastically reduces their usefulness in this case. Even if we could tie an attribute to stack frames (e.g., 'this is true at frame 45 and below'), that would give us some kind of lifetime, but as it is... not so much.

Assumptions such as that &[u8] can't be modified from outside are fundamental, baked deeply into generated code, and also into the design and implementation of algorithms around it, including unsafe code that is outside of the compiler's control. It's not something that can be toggled in a running program. Code that uses slices (which aren't inlined compile-time constants) treats the address as an unknown variable, so it can't switch behaviors depending on an address, at least not without overhead that would make Rust unfit for purpose.

Basically address-based decisions are the hardest. Virtual memory (that allows operating systems to vary behavior per address range) is possible only because CPUs have dedicated hardware to enable it. However, even with hardware support, dynamically switching code paths between truly-immutable and not-really-immutable based on each memory access would be incredibly difficult and require compiling two versions of every function, plus glue code that can jump from one version to another without losing progress.


I think a way more realistic approach would be to introduce a new kind of reference, like &surprise_mut [u8], that would statically change the compiler's assumptions, regardless of which memory range it's used with. It'd be the responsibility of library authors to use it instead of &[u8] for any memory that may be mapped.


Ah, I think there may be some confusion about the purpose of this; it isn't to change something like & to &mut behind anyone's back, it's to let the compiler know what may be done with an address range. For example, we might state that the address range 0-1024 bytes can neither be read nor written, and the entire range 1024-65536 is ordinary memory that can be both read and written. The compiler can look at these static declarations and know that any attempt to read or write the first page is bad, but that the rest is permitted. So, if the stack were to be placed somewhere, where would it go? In the area you're allowed to read from and write to. However, just because the AddressRangeAttributes say that you are permitted to do this from a hardware/OS point of view, it doesn't mean you can do it from Rust's point of view. Rust still enforces that all of the requirements of &[u8] remain; AddressRangeAttributes doesn't change this.

What's more, there may be mutually incompatible AddressRangeAttributes instances. For example, unallocated ordinary RAM that can be read and written could have 'static attributes that signal this fact. When the global allocator allocates some address region, it may create a new AddressRangeAttributes instance on the stack that marks the range as 'in use'. If you try to mmap() over an allocated range, the attributes would conflict, preventing you from doing so, but mmap() might still work across unallocated ordinary RAM (and once you've mmap()ed some region, the global allocator would hit a conflict if it tried to hand out that memory).
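The conflict rule I have in mind might look something like the following. The attribute set and the compatibility logic are both made up for illustration; two instances conflict when their ranges overlap and either side has claimed the memory:

```rust
use std::ops::Range;

// Illustrative attributes; a real design would need many more.
#[derive(Clone, Copy, PartialEq)]
enum Attribute {
    ReadWrite,
    InUse,
}

struct AddressRangeAttributes {
    range: Range<usize>,
    attributes: Vec<Attribute>,
}

fn overlaps(a: &Range<usize>, b: &Range<usize>) -> bool {
    a.start < b.end && b.start < a.end
}

/// Made-up rule: overlapping instances conflict if either side has
/// already marked its range as in use (e.g. by the global allocator).
fn conflicts(a: &AddressRangeAttributes, b: &AddressRangeAttributes) -> bool {
    overlaps(&a.range, &b.range)
        && (a.attributes.contains(&Attribute::InUse)
            || b.attributes.contains(&Attribute::InUse))
}
```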

I hope all of this is making sense, I'm kind of figuring it out as I talk about it.

I don't understand what the compiler is supposed to do with this information. Let's say we define that addresses 0...1000 are forbidden, 1000...2000 are OK. What does this function compile to?

unsafe fn peek(addr: *const u8) -> u8 {
   *addr
}

And does it behave differently in these two cases?

peek(999 as *const u8);
peek(1234 as *const u8);

and what about:

peek((999 + random()) as *const u8);

I was thinking it would be useful for static analysis, with the compiler inserting extra code for runtime checks if it can't verify invariants statically. So with peek() as defined, the compiler might be able to generate several different forms of peek(), depending on what it knows at compile time. For peek(999 as *const u8);, that is known at compile time to be in a forbidden range, so the compiler can throw up an error during compilation. For peek(1234 as *const u8);, there is no error at compile time.

peek((999 + random()) as *const u8); can't be verified statically, but if the AddressRangeAttributes are known at compile time, then a better panic message could be generated by inserting better assert!() statements. Maybe even do something clever to help debuggers out. Here's a taste of what the compiler could generate:

unsafe fn peek(addr: *const u8) -> u8 {
   assert!(intrinsic_memory_check(addr, Attribute::ReadWrite), "The address at {:?} can't be read because the memory map shows that as an Attribute::NoReadNoWrite region for the 'static lifetime.", addr);
   *addr
}

(OK, I'll admit that is a terrible error message, and there are far better ways to handle it, but that gives you an idea of where I was thinking this could head)

All of the points you're raising are good ones though, this is why I wanted to put this idea up here, so that people that are way smarter than me can poke at it more.

As I recall, this is traditionally the job of the linker, rather than the compiler. Perhaps the right solution is to have some kind of linker-control crate type, where only one can be included in any program. It could then define the relevant memory segments for the platform, and provide some kind of unsafe static references which indicate the location of each segment.


That might work if user code has access to the same information. I really want to make sure that if we're writing embedded code and writing to actual physical memory, our code knows what the different regions of memory are like.

I just had a sudden thought; would it make sense to have this work with build.rs scripts? My thought is that you might define a new trait similar to std::fmt::Display, but which when called produces compiler flags for rustc. This would permit all 'static usages, but in a manner that doesn't involve (much) modification to the compiler/linker. It would just be a memory map that could be checked for validity by the compiler (in the sense that mutually incompatible settings can't be constructed, lifetimes are all checked, etc.). The downside to this is that you can't create memory maps that change dynamically, so if you have some really odd hot-swap devices that keep popping into and out of memory, you'll just have to make some range permanently allocated for those devices, whether or not they are plugged in at the time.
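For the fully-static case, the build.rs route could lean on cargo directives that already exist (cargo:rustc-cfg and cargo:rustc-env). The memory map values below are invented for illustration; a real build script would print these strings from its fn main():

```rust
// Sketch of what a build.rs could emit. cargo:rustc-cfg and
// cargo:rustc-env are real directives that cargo understands; the
// address ranges here are made up, not any particular platform's map.
fn directives() -> Vec<String> {
    // Illustrative ranges; a real map would come from the target's datasheet.
    let mmio = 0x4000_0000usize..0x4800_0000usize;
    let ram = 0x2000_0000usize..0x2002_0000usize;
    vec![
        "cargo:rustc-cfg=static_memory_map".to_string(),
        format!("cargo:rustc-env=MMIO_START={:#x}", mmio.start),
        format!("cargo:rustc-env=MMIO_END={:#x}", mmio.end),
        format!("cargo:rustc-env=RAM_START={:#x}", ram.start),
        format!("cargo:rustc-env=RAM_END={:#x}", ram.end),
    ]
}
```

The crate being built could then read the map back with env!("MMIO_START") and friends, keeping everything checkable at compile time.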

Such a runtime safe-address-range database query for every single byte accessed in memory is a massive overhead, completely unacceptable for a systems programming language.

Note that compile-time analyzable cases are going to be super rare. ASLR randomizes a process's memory layout. Dynamically allocated memory has unknown addresses. The stack address is also unknown — even if Rust could assume the address at which the stack starts, it can't assume how much data will be on the stack at the moment a function is called.

Runtime address range checks are already performed via MMU/page tables. This approach appears to be an extremely slow software emulation of the MMU hardware. And if it's for checking access to memory-mapped ranges, it's already 100% redundant with actual MMU-backed address range checks.


I agree. My hope was that this would work kind of like a borrow checker/lifetime specification, with the compiler determining statically what is true at compile time as far as is possible, and that code insertion would be a rare event. If it were a common/ordinary event, then there's no way it could ever work fast enough to make it useful.

As for an MMU, do all systems that rust currently targets and plans to target in the future have MMUs? What happens if you have physical and not virtual addresses?

Borrow checking is based on types, not addresses (contrast with garbage collection or valgrind, which are based on addresses).

So borrow-checker-like access checks would be feasible, using a type like &surprise_mut [u8] I've mentioned before. Use of such type can alter assumptions in compiled code at no extra cost.


Cool! Then forget about my idea, and go with that instead!

Is it possible to nest instances as well? For example, we have a 'static instance that covers a large address range specifying that it is ordinary read/write RAM (no MMIO or other weirdness of any kind). None of that range has been allocated yet, so we're not going to stomp on anyone's toes here. We then ask for a new instance (call it subinstance) that is a sub-range of the address range that marks it as read-only. Asking for that subrange means that it needs to be verified as being compatible with the parent range's attributes. You can't ask for a subrange that is outside of what the parent can provide, and the parent can verify if there is already a subinstance that overlaps the requested range (e.g., the global allocator might grab some large range to allocate from, preventing it from being made read-only at any time). If that's possible with the borrow checker, and we can develop a sufficient set of 'surprising' types, then I'm all for what you're suggesting!
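The containment and overlap checks for that nesting scheme are straightforward to sketch. The function names and signatures are invented for illustration:

```rust
use std::ops::Range;

/// Does `child` fit entirely inside `parent`?
fn is_subrange(child: &Range<usize>, parent: &Range<usize>) -> bool {
    parent.start <= child.start && child.end <= parent.end
}

/// Could a new subinstance covering `requested` be carved out of `parent`,
/// given the subranges that have already been handed out?
fn can_carve(
    parent: &Range<usize>,
    existing: &[Range<usize>],
    requested: &Range<usize>,
) -> bool {
    is_subrange(requested, parent)
        && !existing
            .iter()
            .any(|r| r.start < requested.end && requested.start < r.end)
}
```

Whether the borrow checker itself could enforce this statically is exactly the open question, but at least the runtime rule is cheap.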

First, some background: Rust tends to separate features into abstract language-level features and system-specific implementations built on those language features.

For example, Rust language gives certain thread-safety guarantees, even though the language has no concept of what a thread is! The language only has Send and Sync traits, and then libraries use these basic building blocks to create safe interfaces for threads, mutexes, atomics, parallel iterators, etc.

So for memmap Rust-the-language would similarly have no idea what mmap is, but could provide abstract types that could help 3rd-party and system-specific libraries to express how mapped memory can be used.

If you have &[u8] I don't think there exists a safe way to convert it to memory-mapped slice, because other copies of this slice could exist elsewhere and still claim the memory never changes. You can't convert &[u8] to &mut [u8] for the same reason.

I think you could write a function that converts &mut [u8] to something like &memmappedmut [u8], due to the exclusivity guarantee of &mut. It may be possible to also have an equivalent of slice.split_at_mut() that makes a partial memory-mapped slice. Or you could wrap mmap and have it return &memmappedmut [u8] (or MappedVec<u8>) in the first place.
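Pending any new reference type, a library today can approximate this with a wrapper over a raw pointer whose accesses are volatile, so the compiler never assumes the memory is stable. The MappedSlice name and its API below are my own sketch, not an existing crate:

```rust
use std::marker::PhantomData;

/// A sketch of a "surprise-mutable" slice built on &mut's exclusivity.
pub struct MappedSlice<'a> {
    ptr: *mut u8,
    len: usize,
    _marker: PhantomData<&'a mut [u8]>,
}

impl<'a> MappedSlice<'a> {
    /// Sound because &mut guarantees exclusive access for 'a.
    pub fn from_mut(slice: &'a mut [u8]) -> Self {
        MappedSlice {
            ptr: slice.as_mut_ptr(),
            len: slice.len(),
            _marker: PhantomData,
        }
    }

    pub fn read(&self, i: usize) -> u8 {
        assert!(i < self.len);
        // Volatile read: the compiler won't cache or elide it.
        unsafe { self.ptr.add(i).read_volatile() }
    }

    pub fn write(&mut self, i: usize, v: u8) {
        assert!(i < self.len);
        unsafe { self.ptr.add(i).write_volatile(v) }
    }
}
```

A real mmap wrapper would construct this directly from the mapped pointer and length rather than going through an existing slice.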


Methods on slices would work fine though. slice.memory_mapped_slice(start: usize, finish: usize) -> Result<&memmappedmut [u8], error> (I know my syntax is bad, but you get the idea.)

If self were &mut [u8] for this method, then yes.

Works for me! The only issue I see then is how to handle combinatorial explosion. Thoughts?

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.