You wouldn't be able to provide Deref/DerefMut, but you could provide read and write methods or similar.
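For instance, a minimal sketch of that wrapper shape (the name `ZeroablePtr` and the volatile-based method bodies are assumptions for illustration, not an existing API):

```rust
use core::ptr;

// Hypothetical sketch: a wrapper that cannot offer Deref/DerefMut
// (those hand out `&T`/`&mut T`, which must be non-null), but exposes
// read/write methods instead.
pub struct ZeroablePtr<T>(*mut T);

impl<T> ZeroablePtr<T> {
    /// SAFETY: `addr` must be valid for reads and writes of a `T`
    /// for the lifetime of this wrapper.
    pub unsafe fn new(addr: usize) -> Self {
        Self(addr as *mut T)
    }

    /// Reads the value. Volatile, so the compiler cannot exploit the
    /// "references are never null" assumption on today's Rust.
    pub fn read(&self) -> T {
        unsafe { ptr::read_volatile(self.0) }
    }

    pub fn write(&self, value: T) {
        unsafe { ptr::write_volatile(self.0, value) }
    }
}
```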
Relying on field projection and reborrow means this solution depends on two RFCs that are both still open experiments. If either changes direction or stalls, the entire library path disappears. This is the weakest link.
Also, ergonomics matter more than it might seem - `field_definition!(S, field_name)` + `*field!(&mut T, field_name)` vs `pointer.field` won't be a minor difference when you're working with dozens of struct fields.
```rust
impl<'t, Container, Field: 't> GetField<Container, Field> for &'t Container {
    type Output = &'t Field;

    fn get_field(self, field: FieldDefinition<Container, Field>) -> Self::Output {
        unsafe { &*ptr::from_ref(self).get_field(field) }
    }
}
```
This is instant UB when the pointer is 0x0.
That's precisely the problem: a wrapper type without Deref or its equivalent would lose its purpose as a reference.
The library path doesn't disappear without these features, it just remains somewhat unergonomic.
It's a very exotic use case, so I don't think it's a big deal if it's a bit verbose until Rust adds native field projection support. Cell and MaybeUninit don't have field projection support either, and those are much more common.
That function doesn't receive a pointer; it receives a reference, which can't be null. That playground only demonstrates how field projections can work in stable Rust, not how to handle usable null pointers.
For usable null pointers you'd need something like this playground
Fair, I withdraw the UB concern here; the remaining concerns still stand.
Cell exists to hide references behind interior mutability; MaybeUninit exists to hide uninitialised memory. Field projection is a nice-to-have feature, or even a soundness hazard for them - their purpose is fulfilled without field projection or aided by its absence. A zeroable reference is the opposite - transparent access to fields is its entire purpose. Without field projection, it doesn't become verbose; it fails to be what it is.
And looking at replace_memory in your any_mem sketch, that's the operation ptr::replace couldn't do soundly because it relies on &mut *dst internally (rust#138351). Your playground itself annotates read_memory/write_memory as "needs compiler magic" - doesn't that confirm the gap has to be closed below the library level?
That's why I proposed those two functions as additions to the standard library earlier. I think having read/write/copy and copy_nonoverlapping variants in core would make sense. But anything more complex than that belongs in a third-party crate.
They are not strictly necessary, since read_volatile works. But that sacrifices performance and const compatibility.
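As a sketch, the proposed pair could even be expressed today on top of the volatile primitives - sound but non-const and slower, which is exactly the gap compiler support would close. The names follow the read_memory/write_memory discussion earlier in the thread; the bodies are placeholder assumptions, not the intended final implementation:

```rust
use core::ptr;

/// Sketch of the proposed `core::ptr` addition: a read that stays sound
/// at address 0x0. Here it simply delegates to read_volatile, trading
/// away optimization and `const`-ness.
///
/// SAFETY: `src` must point to memory valid for reads of `T`.
pub unsafe fn read_memory<T>(src: *const T) -> T {
    unsafe { ptr::read_volatile(src) }
}

/// SAFETY: `dst` must point to memory valid for writes of `T`.
pub unsafe fn write_memory<T>(dst: *mut T, value: T) {
    unsafe { ptr::write_volatile(dst, value) }
}
```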
Cell's purpose is to enable shared single threaded mutability. It will become much more useful once you can manipulate individual fields of a struct.
The benefits for MaybeUninit are smaller, but there are no soundness issues with projecting a &mut MaybeUninit<Struct> to a &mut MaybeUninit<Field> either.
Verbose field projection can be implemented in a library already. So native field projection is only an ergonomics improvement.
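To illustrate what "verbose but possible in a library" means, here is a hand-written projection for MaybeUninit (the `Point` struct is invented for the example; native field projection would generate something like this per field):

```rust
use core::mem::MaybeUninit;

// Illustrative struct, not from the thread.
#[allow(dead_code)]
struct Point {
    x: u32,
    y: u32,
}

// A manual (verbose) projection from &mut MaybeUninit<Point> to
// &mut MaybeUninit<u32> for the `x` field.
fn project_x(p: &mut MaybeUninit<Point>) -> &mut MaybeUninit<u32> {
    // SAFETY: `x` lies within `*p`'s allocation; `&raw mut` creates no
    // intermediate reference to uninitialised data; and MaybeUninit<u32>
    // places no validity requirement on the underlying bytes.
    unsafe { &mut *(&raw mut (*p.as_mut_ptr()).x).cast::<MaybeUninit<u32>>() }
}
```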
I admittedly didn't take OP's original proposal to change & semantics very seriously, since it was clearly impractical. But now that they backtracked on that point, I'm leaning towards this becoming an RFC proper. But that means this plan explicitly won't solve ecosystem compatibility.
Almost every crate uses & references somewhere. It'd suck if all that code needed to be duplicated. With a hypothetical AnywherePtr, the fallout would be comparable to async.
But I don't think the majority of that code directly expects & to be non-zero. If mechanically replacing &/&mut with AnywherePtr/AnywherePtrMut sufficed most of the time, we would want to automate that.
Unfortunately, as the different invariants propagate through the call stack, this is a really hard problem. Again, similarly to async, I believe reference zeroability has to be considered an effect. Since a valid & can be safely cast to AnywherePtr at any point, this could be modeled mostly implicitly. We just have to figure out whether any given function ultimately relies on a & reference actually being nonzero. Currently, one can make that assumption:
- directly, by reading the reference's address value
- indirectly, by placing the reference in a container whose layout is exploited by unsafe code
Unless I missed something, both of those operations can be detected statically. Any functions that don't use either can then be annotated #[zref], similar to const, another annotation signifying lack of a default effect[1].
One could also try the opposite of casting AnywherePtr to & and calling non-#[zref] fn from #[zref] fn, as long as the address is assured non-zero by some other means beyond typeck.
Internally, `#[zref]` would just replace any `&`/`&mut` arguments with `AnywherePtr`/`AnywherePtrMut`. However, this would have to penetrate any `struct`/`enum` layers, which wouldn't be possible without lang support. ↩︎
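For concreteness, a minimal sketch of the AnywherePtr shape assumed in this argument (the layout and API here are guesses, not a settled design):

```rust
use core::marker::PhantomData;

// Hypothetical sketch of `AnywherePtr`: layout-wise a raw pointer with
// the full address range (including 0x0 - no non-null niche), plus a
// lifetime so it can stand in for a reference in APIs.
#[repr(transparent)]
pub struct AnywherePtr<'a, T> {
    pub addr: *const T,
    _life: PhantomData<&'a T>,
}

// A valid `&T` upholds every invariant `AnywherePtr` needs and more,
// so the widening conversion is safe at any point.
impl<'a, T> From<&'a T> for AnywherePtr<'a, T> {
    fn from(r: &'a T) -> Self {
        AnywherePtr {
            addr: r,
            _life: PhantomData,
        }
    }
}
```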
Even setting all of that aside, core itself heavily depends on &T. This cannot be resolved by a third-party library, and even introducing alternatives within core would invite circular dependencies. Moreover, &T does not even live in core. It resides in the compiler as TyKind::Ref, and a single line in the layout computation ensures it can never reach 0x0:
```rust
// compiler/rustc_ty_utils/src/layout.rs:410-414
ty::Ref(_, pointee, _) | ty::RawPtr(pointee, _) => {
    let mut data_ptr = scalar_unit(Pointer(AddressSpace::ZERO));
    if !ty.is_raw_ptr() {
        data_ptr.valid_range_mut().start = 1;
    }
    // ...
```
This single expression is where the non-null invariant is physically enforced. No library - whether in core or third-party - can reach below this.
As noted above, this has already led to a soundness issue.
The zeroable reference primitive proposed to address this may seem like an excessive solution at first glance, but consider NonZero<T>. Today, NonZero<usize> sits alongside usize, and the two coexist. &T currently occupies the position of NonZero<usize>, but the usize that should sit beside it is missing. This is not a new paradigm, nor a sledgehammer to crack a nut. It simply restores the missing counterpart: if &T is the NonZero<usize>, the proposed zeroable reference is the plain usize.
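The analogy is checkable today with size_of - a small compile-time sketch, nothing proposal-specific:

```rust
use core::mem::size_of;
use core::num::NonZero;

// `&T` already plays the NonZero<usize> role: both donate a zero niche,
// so Option wraps them for free.
const _: () = assert!(size_of::<Option<NonZero<usize>>>() == size_of::<usize>());
const _: () = assert!(size_of::<Option<&'static u8>>() == size_of::<&'static u8>());

// Plain usize has no niche, so its Option costs an extra word. The
// proposed zeroable reference would be the analogous niche-free
// partner for `&T`.
const _: () = assert!(size_of::<Option<usize>>() == 2 * size_of::<usize>());
```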
Thank you for the thoughtful contribution. After reviewing your suggestion, I find the pattern of treating 0x0 as valid when no niche-dependent pattern is present to be elegant, and the analogy to async effects is an interesting direction worth exploring further. That said, I'm also concerned that static detection could miss niche dependencies in certain cases (e.g. FFI), introducing UB, and that the compiler complexity would increase considerably. The zeroable reference primitive in the current proposal is designed as a more conservative path that minimises these concerns - but if the concerns around static detection and FFI coverage can be addressed, I think this could be a compelling approach.
The "soundness issue" is that I'm using a dummy implementation for the new core::ptr functions I proposed. You could replace it with read_volatile today and mark the calling functions non-const. Then it would work and be sound today, just a bit slower.
It is an excessive solution, since the benefits are so small. The use-cases you address are extremely rare, and already have usable workarounds today. And in the end, Rust doesn't have to be usable for every single application.
You can of course continue to argue for complex solutions which have absolutely no chance of being accepted. Or you could try to find low impact solutions (like the handful of functions I proposed) which help your use-cases without impacting Rust at large. Those might actually have a chance of happening.
Your read_memory / write_memory proposal in #93 was genuinely helpful and I appreciated it. However, under Phase I, ptr::read / ptr::write themselves would become valid at 0x0 which means the separate functions would be redundant rather than complementary. This isn't a rejection of your idea; it's that the two approaches solve the same layer differently, and if Phase I lands, the existing APIs already cover what read_memory was designed to do.
```rust
// library/core/src/ptr/mod.rs:1546-1569
pub const unsafe fn replace<T>(dst: *mut T, src: T) -> T {
    // ...
    unsafe {
        mem::replace(&mut *dst, src)
    }
}
```
Uhh... I think we have a divergence here - I meant that core itself relies on non-zero references internally, which was the root of rust#138351. I didn't mean your dummy impl, sorry about the misunderstanding.
To be clear: the Rust Core Team explicitly stated that embedded devices (which are no_std) are one of four important domains, and committed to making embedded a first-class target.
And the environments in question are not edge-cases, they're the highest-stakes subset. The Vorago VA108xx in the OP is a radiation-hardened MCU literally embedded in satellites (Astranis DemoSat-2, TechEdSat-8 CubeSat), and the workaround it was forced into (inline assembly and volatile API) is a direct violation of safety standards.
Anyway, I don't think we disagree on as much as it might seem - your read_memory / write_memory proposal came from recognising the same gap and I think your proposal stands on its own merit even if my RFC never lands. Where we differ is on scope, and I think that's better resolved through discussion in a formal RFC than here. I'd welcome your contribution there.
You keep saying this, but I don't see how that is Rust's problem.
The cost of your proposal, even in the best case, is an extra source of confusion for everyone dealing with raw pointers: when we are saying whether the raw pointer is dereferenceable / valid / whatever, do we allow for null or not? For backwards compatibility, we have to continue assuming that code is only robust against non-null dereferenceable pointers. That will have to be the default. But people will read documentation saying that actually accessing null pointers is fine, and then assume they are fine for crates that use the old terminology, and cause UB. This is a serious problem, it will cause UB bugs in real code. So the question is, can you demonstrate that the benefits outweigh this significant cost?
So far, I don't think you can. Field projections provide everything you need to build a suitable abstraction in library code. The ergonomics aren't perfect, but easily good enough for a niche use case such as this. We don't accept RFCs on the off-chance that other RFCs (with significant momentum behind them!) don't make it, so your talk about "weakest links" doesn't support your RFC either. What remains is your claim that the missed optimizations caused by volatile accesses matter. So far this claim has not been substantiated with any evidence. You have one example of code actually accessing address 0, and that code is apparently doing a big memcpy of some data from address 0x0 to non-volatile memory -- there's nothing to optimize here, a volatile memcpy will be just as fast as a regular one.
The work-around is volatile accesses. There is no need for inline assembly. You keep making incorrect claims even after their incorrectness has been pointed out to you. This is tiring. Maybe at the time said code was written, they had to use inline asm, but Rust has been improved in the meantime, so it's disingenuous to keep bringing this up as an argument for further changes.
Your OP also still talks about "reference construction" and "slice creation", despite it having been made unambiguously clear to you that those will never allow null.
This really seems entirely pointless. Either you have MMIO or things like interrupt vectors at these weird addresses (0 and usize::MAX), in which case using volatile access is the correct approach anyway. Or you have normal RAM, in which case losing one byte/word is not a big deal, even on embedded targets that Rust supports.
In fact, you can still use that address to store some state via volatile access if you absolutely insist on it, no need for inline asm.
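For example, a sketch of such volatile state-keeping (hosted-runnable, so the address is a parameter here rather than the literal 0x0 or usize-max it would be on the target):

```rust
use core::ptr;

// Keeping state at a fixed special address via volatile access - no
// inline asm needed. On the real target, `addr` would be the hardware
// address (e.g. 0x0).
//
// SAFETY (both fns): `addr` must be valid for volatile access to a u32.
unsafe fn store_state(addr: *mut u32, value: u32) {
    unsafe { ptr::write_volatile(addr, value) }
}

unsafe fn load_state(addr: *const u32) -> u32 {
    unsafe { ptr::read_volatile(addr) }
}
```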
I think it would be useful for you to consider at this point that your proposal really isn't going anywhere, and is a waste of time and effort, not just for you, but for others as well.
Hello,
Just my 2 cents on this: I implemented the bootloader for the MCU with program RAM at address 0x0. I will probably update my implementation to use read_volatile now; it might be that this did not work when I first wrote the code, or I simply missed the option. Both options used in this code snippet work just fine:
```rust
#[entry]
fn main() -> ! {
    // ...
    #[allow(clippy::zero_ptr)]
    let first_four_bytes_volatile_read =
        unsafe { core::ptr::read_volatile(0x0000_0000 as *const u32) }.to_ne_bytes();
    let mut buf: [u8; 4] = [0; 4];
    read_four_bytes_at_addr_zero(&mut buf);
    assert_eq!(first_four_bytes_volatile_read, buf);
}

fn read_four_bytes_at_addr_zero(buf: &mut [u8; 4]) {
    #[allow(clippy::zero_ptr)]
    unsafe {
        core::arch::asm!(
            "ldr r0, [{0}]", // Load 4 bytes from src into r0 register
            "str r0, [{1}]", // Store r0 register into first_four_bytes
            in(reg) 0x0 as *const u8,    // Input: src pointer (0x0)
            in(reg) buf as *mut [u8; 4], // Input: destination pointer
            out("r0") _,                 // r0 is clobbered as scratch
        );
    }
}
```
On the RFC: I am happy with the way this is now without requiring special language level support, especially now that I know I can use a volatile read for that special case in my bootloader (no assembly required). I have not read all the discussion, but the gist seems to be that this feature is not worth the drawbacks it would mean, especially because there are solutions for them.
You simply cannot declare ∀x, ¬P(x) by closing your eyes and plugging your ears in front of ∃x, P(x). Even worse, it's physical evidence. Worse still, the evidence is literally flying above our heads - literally embedded in the ISS.
Participation is voluntary.
Dropped.
The Rust Foundation literally said:
- What does it take to ship Rust in safety-critical? | Rust Blog
- Rust Foundation 2025 Year in Review | Highlights, Impact & Ecosystem Updates
- Rust's 2018 roadmap | Rust Blog
- Announcing the Embedded Devices Working Group
Worse, safety-critical developers said:
Worse, the entire industry said:
- GitHub - rustfoundation/safety-critical-rust-consortium: Documentation, code and information for the Safety Critical Rust Consortium · GitHub
- The New Safety-Critical Rust Consortium: We’re in! - Ferrous Systems
- [2405.18135] Bringing Rust to Safety-Critical Systems in Space
Worse, the federal government of the USA said:
- https://bidenwhitehouse.archives.gov/wp-content/uploads/2024/02/Final-ONCD-Technical-Report.pdf
- https://media.defense.gov/2025/Jun/23/2003742198/-1/-1/0/CSI_MEMORY_SAFE_LANGUAGES_REDUCING_VULNERABILITIES_IN_MODERN_SOFTWARE_DEVELOPMENT.PDF
- https://www.cisa.gov/sites/default/files/2023-12/The-Case-for-Memory-Safe-Roadmaps-508c.pdf
But people will read documentation saying that actually accessing null pointers is fine, and then assume they are fine for crates that use the old terminology, and cause UB. This is a serious problem, it will cause UB bugs in real code.
I would take this concern seriously if it were backed by a concrete scenario. I made the same request in #83: if there is existing unsafe code whose soundness depends on the non-null assumption for raw pointers, I would appreciate a concrete example. (Does any existing code actually use "0x0 is UB, therefore unreachable" as part of its soundness proof?) This is not intended as pressure - I need concrete evidence to assess cost and risk.
So the question is, can you demonstrate that the benefits outweigh this significant cost?
The benefits of this proposal are threefold:
- Enabling normal pointer operations and language-level reference-like operations on addresses that are valid by hardware guarantee.
- Enabling compliance with safety certification requirements without resorting to volatile access or inline assembly. A safety certification standard for Rust (e.g. MISRA-Rust) does not yet exist, but closing this gap now directly contributes to future specification efforts and Rust's adoption in safety-critical domains.
- Structural elimination of the resulting audit risk.
On the cost side, the cost has been described in principle but not yet demonstrated in practice.
Field projections provide everything you need to build a suitable abstraction in library code. The ergonomics aren't perfect, but easily good enough for a niche usecase such as this. We don't accept RFCs on the off-chance that other RFCs (with significant momentum behind them!) don't make it, so your talk about "weakest links" doesn't support your RFC either.
I understand that perspective, but as noted earlier, in mission-critical engineering, depending on unstable features is not permitted. Having sufficient momentum is not the same as being stabilised, and even if field projection does land, I do not see how that constitutes grounds for rejecting this RFC.
What remains is your claim that the missed optimizations caused by volatile accesses matter.
You have one example of code actually accessing address 0, and that code is apparently doing a big memcpy of some data from address 0x0 to non-volatile memory -- there's nothing to optimize here, a volatile memcpy will be just as fast as a regular one.
That example is the simplest possible case - a value-by-value read. The moment you need a struct reference, a slice, or a bulk copy at 0x0, read_volatile cannot help. In this domain, one is enough. One fails, everything fails. Moreover, I am the only one who has presented a real benchmark in this thread. I expect the same level of evidence for the claim that volatile memcpy is equally performant.
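For reference, a "volatile memcpy" can only be written as an element-wise loop today, which is where the optimization question lives - a sketch, not a benchmark:

```rust
// Each element access is an individual volatile operation the compiler
// may not merge, widen, or vectorise - unlike ptr::copy_nonoverlapping,
// which can lower to an optimised memcpy. Whether that gap matters is
// the target-dependent question a benchmark has to answer.
//
// SAFETY: `src` and `dst` must each be valid for `count` elements of `T`,
// and the regions must not overlap.
unsafe fn volatile_copy<T: Copy>(mut src: *const T, mut dst: *mut T, count: usize) {
    for _ in 0..count {
        unsafe {
            dst.write_volatile(src.read_volatile());
            src = src.add(1);
            dst = dst.add(1);
        }
    }
}
```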
The work-around is volatile accesses. There is no need for inline assembly. You keep making incorrect claims even after their incorrectness has been pointed out to you. This is tiring.
Acknowledged, I added volatile access alongside inline assembly in #112.
Your OP also still talks about "reference construction" and "slice creation", despite it having been made unambiguously clear to you that those will never allow null.
The generalised examples in Need and significance are intended to demonstrate how the current AM breaks, not to claim that the code should become well-defined behaviour as written. Though it could be misinterpreted, so I added a disclaimer - refer to the Before reading section.
Thank you for coming by and confirming the volatile path - I appreciate it, and I'm glad your bootloader case is resolved.
Your case was a single u32 read, which read_volatile covers well. But the patterns that arise on the same hardware family get harder quickly:
- A struct placed at 0x0 by firmware: you need `&mut DevTreeBlob` to call methods, mutate fields, pass to APIs expecting references - `read_volatile` cannot construct that.
- A slice starting at 0x0: `from_raw_parts` constructs `&[T]`, which is instant UB.
- Bulk operations: `ptr::copy` across a region that includes 0x0 - currently UB.
Detailed patterns here
Consider a 16-bit target whose device tree is placed at 0x0 by the hardware with 64 kiB of RAM:
```rust
// This address is forced by the hardware.
// Rust does not get to choose it.
const BLOB_P: usize = 0;
const _: () = assert!(usize::BITS == 16);

#[unsafe(no_mangle)]
extern "C" fn ignite() -> ! {
    // BLOB can never be read volatilely;
    // there's no available RAM to copy the entire struct.
    let mut blob = unsafe { &mut *(BLOB_P as *mut DevTreeBlob) }; // instant UB upon reference construction
    let mapping = blob.foo();
    blob.bar |= 0b1;
    ...
}
```

Even when spare RAM exists, the address may still come from outside the programme - and cannot be controlled:
```rust
use core::slice::from_raw_parts as mkslice;

// `map` address is reported by the firmware.
// Rust does not get to choose it.
// Caller ensures there's at least one entry.
#[unsafe(no_mangle)]
extern "C" fn spark(map: *const RamLayout, len: NonZeroUsize) -> ! {
    for entry in unsafe { mkslice(map, len.get()).iter() } {
        // instant UB upon calling `from_raw_parts`,
        // as `from_raw_parts` constructs `&[T]`
        ...
    }
    ...
}
```
These are not exotic patterns; they're the bread and butter of bare-metal firmware on Cortex-M. And the hardware you work with - the VA108xx/VA416xx family - is currently operational on the ISS (STP-H5), STPSat-5, STPSat-6, and Astranis DemoSat-2. Future software on these platforms may need exactly these patterns.
Would any of these arise in your work, or in work you've seen at IRS?
I would take this concern seriously if it were backed by a concrete scenario. I made the same request in #83: if there is existing `unsafe` code whose soundness depends on the non-null assumption for raw pointers, I would appreciate a concrete example. (Does any existing code actually use "0x0 is UB, therefore unreachable" as part of its soundness proof?) This is not intended as pressure - I need concrete evidence to assess cost and risk.
You are the one who is proposing to change the language in subtle ways. The burden is on you to convince us that there is no significant risk here. I am informing you that I haven't been convinced yet (and reading the room, I think the same goes for a bunch of other project members).
I understand that perspective, but as noted earlier, in mission-critical engineering, depending on unstable features is not permitted. Having sufficient momentum is not the same as being stabilised, and even if field projection does land, I do not see how that constitutes grounds for rejecting this RFC.
Nobody said that unstable field projections are sufficient for you. But they will likely be stable eventually, and new feature proposals are expected to be evaluated against the language Rust will be, not just the language it is today.
If a feature that is already in development makes your motivation evaporate, then that's just not sufficient motivation. It makes zero sense to dismiss field projections here. We need to ensure that long-term, Rust remains a coherent well-rounded language, so of course we take into account ongoing experiments when new features are discussed.
This is not up for discussion, it is how the Rust project works. You will not make progress here if you keep pushing back against basic, well-established norms of the Rust community.
So, your motivation needs to make the case why a library type wrapping a raw pointer with field projections that internally uses volatile is not sufficient.