Pre-RFC: Unsafe Types


#1

In the Reddit and Hacker News discussions of my recent blog post, some people raised concerns about the observation I based the post on: that if your program crashes, the bug can lie outside of an unsafe block. I think the main concern was that this makes unsafe significantly less useful as a lint.

Now, of course unsafe is not a lint; it has a fairly precise definition: stuff is marked unsafe if it can violate the safety guarantees of your program; in particular, if it can cause a crash. But then, unsafe also serves to raise the attention of the reader and to make sure they double-check this piece of code. Just think of the GitHub bots that add extra warnings to PRs when you modify unsafe code. Certainly, it’d be nice if that bot could have caught evil.

I think there is a way to extend unsafe to answer these concerns. I am not saying this is what I want to happen - I am not decided yet - but well, my brain came up with this idea, so now I’m dropping it here to see whether anybody thinks it is useful :wink:

Proposal: Unsafe types

My proposal is to add the notion of an unsafe type to Rust, e.g.

pub unsafe struct Vec<T> { ... }

The consequence of this annotation on the type would be that writing to a private field of this type becomes an unsafe operation (this includes the “writes” performed by constructors). Furthermore, taking a mutable borrow of such a field becomes unsafe.
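To make the motivation concrete, here is a sketch of the problem in Rust that compiles today, using a made-up type (Chunk) in place of Vec: safe code in the defining module can break a private field’s invariant without any unsafe block. Under this proposal, the assignment in evil would itself require unsafe.

```rust
mod chunk {
    pub struct Chunk {
        data: Vec<u8>,
        len: usize, // invariant: len <= data.len()
    }

    impl Chunk {
        pub fn new(data: Vec<u8>) -> Chunk {
            let len = data.len();
            Chunk { data, len }
        }

        // Entirely safe code today, even though it breaks the invariant
        // that the unsafe code in `last` relies on.
        pub fn evil(&mut self) {
            self.len += 1;
        }

        pub fn last(&self) -> u8 {
            // Unsafe code trusting the invariant on `len`.
            unsafe { *self.data.get_unchecked(self.len - 1) }
        }
    }
}

fn main() {
    let c = chunk::Chunk::new(vec![1, 2, 3]);
    assert_eq!(c.last(), 3); // fine, as long as nobody called `evil`
}
```

Calling evil followed by last would read out of bounds, yet no unsafe block points the reviewer at the assignment that caused it.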

Motivation

The motivation for this is to “fix” the fact that code like evil in my blog post can cause crashes, without being unsafe. The reason it can cause crashes is that it violates invariants. The reason that this code is not considered unsafe is that Rust does not know that it violates invariants. The solution is to tell it :slight_smile:

So, semantically speaking, adding unsafe to a type means “the private fields of this type may carry additional invariants that you, compiler, do not know anything about”. This has no operational consequences, but it means that whenever someone is writing to such a field, that could potentially break the invariant. The compiler cannot know if this particular write is okay, but it can at least make you aware that there is something extra to check here.

Public fields cannot carry invariants, and are hence excluded from this treatment.

Of course, invariants could also be violated by taking a mutable borrow of such a field and writing through it, so this also has to be unsafe. What could remain safe is taking a raw pointer to this field, so &mut v.len as *mut _ could be considered safe. However, I assume that people actually rarely create pointers to such fields and send these pointers all around the world. That would be dangerous exactly because everybody writing through these pointers has to be aware of the invariants.
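The distinction drawn here already exists in today’s Rust: creating a raw pointer is a safe operation; only the write through it needs an unsafe block. A minimal illustration:

```rust
fn main() {
    let mut len: usize = 3;
    let p = &mut len as *mut usize; // safe: merely creating the raw pointer
    unsafe {
        *p = 4; // unsafe: the actual write through the pointer
    }
    assert_eq!(len, 4);
}
```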

Drawbacks

This adds more stuff to the language; extra complexity should not be added without a good reason.

This does not automatically make anything safer, or point programmers to anything new. People still have to actually tell the compiler that a particular type has additional invariants; if they forget, we’re back to the status quo.

But I think programmers usually, intuitively, know about the distinction between unsafe code that relies on local invariants within the same function, and unsafe code that relies on invariants which are maintained as a coordinated effort of the entire module. They only have to remember once to tell the compiler that a particular type carries such invariants, and then the compiler will keep reminding them that they have to double-check every write.

I don’t think we have a way to mark just the “left-hand part of an assignment” as unsafe, so e.g. an assignment to the len field of a vector would now have to be

unsafe { v.len = f(); }

such that the unsafe block also covers the entire right-hand side of the assignment. This is unnecessarily broad. In principle, we could allow

unsafe { v.len } = f();

but I don’t think unsafe l-values are a thing right now :wink: (and this is even worse, since this is only unsafe if the l-value is used for writing). One could write

*unsafe{ &mut v.len } = f();

but oh my, please not^^.

Alternatives

The effect of the type annotation could be expanded to public fields. This would make the rule simpler. I think this is useless for public types, since public fields of public types cannot carry any useful invariant, and instead of making the rest of the world write unsafe around writes to this field, you can just make it private. (Are there any examples of public types with additional invariants that also have public fields?) But maybe there’s a use-case here for public fields in private types. I don’t think however that this justifies adding an extra dependency on the visibility of the type.

Not just writes, but also reads could be considered unsafe. This would be necessary if the user’s invariant on this field is actually weaker than the base type, so that reading from this field and assuming it has the announced type would be wrong. However, we have some types that come with so few (read: no) a-priori promises that I can’t think of any case where this is useful, and having the given type of the field be a lower bound on its actual, semantic type is, I think, a useful piece of documentation.

Instead of unsafe types, we could have unsafe fields, to mark the individual fields that carry additional invariants. These fields would have to be private. This may require more writing for types with many fields that carry invariants, but on the plus side this answers all questions about whether only private or also public fields are covered.
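The unsafe-fields alternative might look like this (hypothetical syntax, not valid Rust today; the field layout is a simplified Vec-like sketch):

```rust
struct Vec<T> {
    ptr: *mut T,
    cap: usize,
    unsafe len: usize, // writes to `len` would require an unsafe block
}
```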


#2

I really don’t like the notion of an unsafe type. It gives the feeling that the type is unsafe to use while it is only unsafe to modify.

I think unsafe fields would be much more natural.


#3

I agree that unsafe fields make more sense. Creating an implicit relationship between unsafeness and publicity just sounds confusing.


#4

The idea of unsafe fields has come up before, see this issue https://github.com/rust-lang/rfcs/issues/381 and the linked RFCs. I think I prefer unsafe fields to unsafe types - finer-grained is nice, plus it makes it more a part of the implementation than the interface.

The general idea of unsafe fields was (iirc) considered favourably in the past; it might be a good time for a new RFC in that direction.


#5

One can also use some form of UnsafeCell<T> implemented as a library. Unfortunately, it’s a bit more cumbersome.
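A sketch of what such a library wrapper could look like (the name InvariantCell is made up for illustration): the constructor and the mutable accessor are unsafe, so every write to the wrapped field goes through an unsafe block, while reads stay safe - matching the read/write split discussed above.

```rust
pub struct InvariantCell<T>(T);

impl<T> InvariantCell<T> {
    /// Safety: the caller must establish the field's invariant for `value`.
    pub unsafe fn new(value: T) -> Self {
        InvariantCell(value)
    }

    /// Reading is safe; the stored value is at least a valid `T`.
    pub fn get(&self) -> &T {
        &self.0
    }

    /// Safety: the caller must preserve the module's invariant.
    pub unsafe fn get_mut(&mut self) -> &mut T {
        &mut self.0
    }
}

fn main() {
    let mut len = unsafe { InvariantCell::new(3usize) };
    assert_eq!(*len.get(), 3);
    unsafe {
        *len.get_mut() = 4;
    }
    assert_eq!(*len.get(), 4);
}
```

The cumbersomeness is visible: every access grows a method call, and the wrapper leaks into the field’s type.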


#6

I didn’t know this has come up before, thanks for the pointer!

I think I agree unsafe fields would be more natural; I arrived at the same conclusion over night. If you look at a type like RcBox:

struct RcBox<T: ?Sized> {
    strong: Cell<usize>,
    weak: Cell<usize>,
    value: T,
}

then even though all fields are private, only strong and weak have extra invariants on them. However, this example also shows a limitation of the suggested approach: Modifying the contents of the cells will not require an unsafe block, even though it could violate invariants. It’s not clear to me how to best fix that.
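The limitation can be demonstrated with code that compiles today (the weak field is omitted for brevity): even if strong were an unsafe field, its contents are mutated through Cell’s safe interior-mutability API, so no unsafe block would be required at the mutation site.

```rust
use std::cell::Cell;

struct RcBox<T> {
    strong: Cell<usize>, // invariant: equals the number of live handles
    value: T,
}

fn main() {
    let b = RcBox { strong: Cell::new(1), value: "hi" };
    // Safe code "losing" a reference count; an unsafe field annotation on
    // `strong` would not catch this, since the write goes through Cell::set.
    b.strong.set(0);
    assert_eq!(b.strong.get(), 0);
    assert_eq!(b.value, "hi");
}
```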

So I guess the question of whether we want something like this boils down to:

Do we want to (artificially?) extend the set of operations considered unsafe, such that more of the code that could cause memory unsafety has to be in unsafe blocks?

The caveat is that we will never be able to make sure that all such code is in unsafe blocks.

Also, on another note, I forgot to stress that I think this does not violate the rule that unsafe should only cover operations that can actually lead to violations of Rust’s safety promises (the rule that was coined when forget was deemed safe). If the programmer’s annotation on the field is correct, i.e., if there really are additional invariants on the given fields that unsafe code relies on, then writing bad values to such fields really can cause safety violations.


#7

Do you think it would be useful to have a special (optional) comment that the compiler can output when these private fields are not used within unsafe blocks?

For example, with Vec’s len field, one could add a comment explaining the same thing you explained in your blog post. I think the usefulness of this lies in onboarding someone new into a project as well as a helpful guide for someone not familiar with that area of the code.


#8

I have no objection to unsafe fields per se, but I do think it’s important to emphasize the more in your phrase “more code”. Just because the simple example you gave in your post of updating the len field on Vec is no longer a problem doesn’t mean to me that every crash is due to a bug in an unsafe block somewhere. Imagine that you have some unsafe code using a (safely implemented, unlike the stdlib’s copy) HashSet – it could be relying on that set to detect some conflicts, but if the set has a bug, you will still get a crash, and the bug will be in safe code. (I think that example first came from @glaebhoerl? Or maybe even you? Can’t remember.)

So I guess all this is to say that I found the first few paragraphs of your pre-rfc somewhat misleading, in that it seemed to suggest it solved this problem, rather than simply making it somewhat less likely.

To me, the goal of providing better heuristics – like GH bots – doesn’t feel very motivating. But the goal of making it easier and natural to better document your invariants is appealing. unsafe fields feel like they are more likely to achieve that goal, so I think between the two those are the ones that I would favor.


#9

To this end, what I would really like is to infer a “privacy boundary”, within which all code is considered “suspect”. Unsafe fields (or types) could help with this actually. If you declare a field as unsafe, and that field is private to some module M, then all code within M is suspect (and, by extension, all code that it can reach is part of your trusted codebase, whether that code is marked safe or not).


#10

So I guess all this is to say that I found the first few paragraphs of your pre-rfc somewhat misleading, in that it seemed to suggest it solved this problem, rather than simply making it somewhat less likely.

Point taken, I did not think enough about the trouble caused by functionally incorrect dependencies.

If you declare a field as unsafe, and that field is private to some module M, then all code within M is suspect (and, by extension, all code that it can reach is part of your trusted codebase, whether that code is marked safe or not).

This is an over-approximation though. It is perfectly conceivable - and I assume it happens all the time - that the safety of some unsafe code relies solely on the safety of its dependency, which, if that dependency is safe, would mean there is nothing to trust.


#11

Yes, I agree it is an over-approximation. Sorry if I suggested otherwise. :slightly_smiling:


#12

To me unsafe fields don’t seem different at all from unsafe traits, which Rust already has. In both cases the original declarer identifies that manipulating this thing (whether that’s implementing a trait or assigning to a field) could break an invariant that an unsafe block relies on. So these fields should behave the same way, I think - if you declare a field as unsafe, writing to that field requires an unsafe block, but not reading from it, analogous to the fact that implementing Send is unsafe, but not bounding by it.
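The existing precedent, in Rust that compiles today: declaring a trait unsafe makes implementing it unsafe, while bounding on it stays safe. (ExactThree is a made-up trait for illustration.)

```rust
// Implementors promise `three` really returns 3; unsafe code may rely on it.
unsafe trait ExactThree {
    fn three(&self) -> usize;
}

struct Unit;

// The promise is taken on here, so the impl must be `unsafe`:
unsafe impl ExactThree for Unit {
    fn three(&self) -> usize {
        3
    }
}

// Consuming the guarantee needs no unsafe at all:
fn consume<T: ExactThree>(x: &T) -> usize {
    x.three()
}

fn main() {
    assert_eq!(consume(&Unit), 3);
}
```

The proposed unsafe fields would mirror this exactly: the write (like the impl) is the point where the promise is made, the read (like the bound) is where it is safely consumed.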

I also don’t know why pub unsafe fields should inherently be disallowed.


#13

Unsafe traits are materially different because they’re handling the problem that generic code can’t trust anything to be implemented correctly. That is, we’re dealing with an open set of implementations, and you can’t just expect everyone to do things right (that’s the C(++) approach). Unsafe traits are basically overriding this and saying “yes, I really need to trust that my implementors are correct”. Note that this is exceedingly rare; literally Send and Sync are the only stable consumers of this functionality, and that’s because there’s no way to guard against a type being thread-unsafe if you’re explicitly trying to do threaded things.

With unsafe fields, you’re generally only concerned with a closed set of implementations: the code in the same module as the struct. This is more controlled, and more reasonable to expect correct behaviour from. I agree with Niko’s analysis that this doesn’t really solve the problem of unsafe code needing to trust safe code to be implemented correctly.

As a simpler example, we need only look at the wonders of this PR, which was just trying to make a style change: cleaning up an if-statement that checked whether a pointer had two sentinel values - totally safe - but that was guarding some subsequent unsafe code. An error in the refactoring subsequently led to segfaults.

The only real guarantee we have around unsafe is that if you completely forgo it you’re good to go. As soon as you start using unsafe, you need to be incredibly careful.


#14

The bolded is only true if those fields are private.

But yes, this is the difference - unsafe traits can be implemented outside of their declaring module, whereas unsafe fields usually could not be assigned to outside of theirs. However, to me that only introduces a difference in degree of urgency, and not a difference in the worthiness of the idea.


#15

This is an accurate assessment if you consider

The only reason why Safe Rust can be memory safe with threading, without all code being 100% thread-safe

vs

A self-imposed lint for failing to realize that some fields are trusted by unsafe code

to merely be a matter of urgency, and not worthiness. I disagree. There is clearly a qualitative difference here. Unsafe traits made Rust as we know it possible. Unsafe types/fields can be completely replaced by getters/setters at the cost of some ergonomics. In fact, that’s already the case today: we have NonZero, which is exactly an unsafe type.
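The NonZero pattern referenced here, reconstructed as a standalone sketch (the standard library’s NonZero is unstable internals, so this defines its own newtype): a private field plus an unsafe constructor already behaves like an “unsafe field”, at the cost of getter/constructor boilerplate.

```rust
pub struct NonZeroUsize(usize);

impl NonZeroUsize {
    /// Safety: `n` must not be zero; unsafe code may rely on that invariant.
    pub unsafe fn new_unchecked(n: usize) -> Self {
        NonZeroUsize(n)
    }

    /// Reading is safe; the invariant guarantees a valid, nonzero value.
    pub fn get(&self) -> usize {
        self.0
    }
}

fn main() {
    let n = unsafe { NonZeroUsize::new_unchecked(5) };
    assert_eq!(n.get(), 5);
}
```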


#16

Sorry, but this seems very exaggerated to me. What makes Send and Sync work is their OIBIT nature - they are implemented automatically for most types and explicitly not implemented for certain types (and this non-implementation carries through to types that have fields of those types). It being unsafe to explicitly implement them is a very valuable roadblock against errors, but it doesn’t play a fundamental role.


#17

The important part is that safe code can’t explicitly implement it. This allows unsafe code to trust the implementation to be correct. Autoderiving is of course important for making this at all tolerable. However, all of the following impls would have to be magical lang items if unsafe code couldn’t implement them either:

src/liballoc/arc.rs:unsafe impl<T: ?Sized + Sync + Send> Send for Arc<T> {}
src/liballoc/arc.rs:unsafe impl<T: ?Sized + Sync + Send> Sync for Arc<T> {}
src/liballoc/arc.rs:unsafe impl<T: ?Sized + Sync + Send> Send for Weak<T> {}
src/liballoc/arc.rs:unsafe impl<T: ?Sized + Sync + Send> Sync for Weak<T> {}
src/liballoc/arc.rs:unsafe impl<T: ?Sized + Sync + Send> Send for ArcInner<T> {}
src/liballoc/arc.rs:unsafe impl<T: ?Sized + Sync + Send> Sync for ArcInner<T> {}
src/libcollections/btree/node.rs:unsafe impl<K: Sync, V: Sync> Sync for MoveTraversalImpl<K, V> {}
src/libcollections/btree/node.rs:unsafe impl<K: Send, V: Send> Send for MoveTraversalImpl<K, V> {}
src/libcollections/linked_list.rs:unsafe impl<T: Send> Send for Rawlink<T> {}
src/libcollections/linked_list.rs:unsafe impl<T: Sync> Sync for Rawlink<T> {}
src/libcollections/string.rs:unsafe impl<'a> Sync for Drain<'a> {}
src/libcollections/string.rs:unsafe impl<'a> Send for Drain<'a> {}
src/libcollections/vec.rs:unsafe impl<T: Send> Send for IntoIter<T> {}
src/libcollections/vec.rs:unsafe impl<T: Sync> Sync for IntoIter<T> {}
src/libcollections/vec.rs:unsafe impl<'a, T: Sync> Sync for Drain<'a, T> {}
src/libcollections/vec.rs:unsafe impl<'a, T: Send> Send for Drain<'a, T> {}
src/libcollections/vec_deque.rs:unsafe impl<'a, T: Sync> Sync for Drain<'a, T> {}
src/libcollections/vec_deque.rs:unsafe impl<'a, T: Send> Send for Drain<'a, T> {}
src/libcore/array.rs:unsafe impl<T, A: Unsize<[T]>> FixedSizeArray<T> for A {
src/libcore/cell.rs:unsafe impl<T> Send for Cell<T> where T: Send {}
src/libcore/cell.rs:unsafe impl<T: ?Sized> Send for RefCell<T> where T: Send {}
src/libcore/marker.rs:    unsafe impl<'a, T: Sync + ?Sized> Send for &'a T {}
src/libcore/marker.rs:    unsafe impl<'a, T: Send + ?Sized> Send for &'a mut T {}
src/libcore/nonzero.rs:unsafe impl<T:?Sized> Zeroable for *const T {}
src/libcore/nonzero.rs:unsafe impl<T:?Sized> Zeroable for *mut T {}
src/libcore/nonzero.rs:unsafe impl Zeroable for isize {}
src/libcore/nonzero.rs:unsafe impl Zeroable for usize {}
src/libcore/nonzero.rs:unsafe impl Zeroable for i8 {}
src/libcore/nonzero.rs:unsafe impl Zeroable for u8 {}
src/libcore/nonzero.rs:unsafe impl Zeroable for i16 {}
src/libcore/nonzero.rs:unsafe impl Zeroable for u16 {}
src/libcore/nonzero.rs:unsafe impl Zeroable for i32 {}
src/libcore/nonzero.rs:unsafe impl Zeroable for u32 {}
src/libcore/nonzero.rs:unsafe impl Zeroable for i64 {}
src/libcore/nonzero.rs:unsafe impl Zeroable for u64 {}
src/libcore/ptr.rs:unsafe impl<T: Send + ?Sized> Send for Unique<T> { }
src/libcore/ptr.rs:unsafe impl<T: Sync + ?Sized> Sync for Unique<T> { }
src/libcore/raw.rs:unsafe impl<T> Repr<Slice<T>> for [T] {}
src/libcore/raw.rs:unsafe impl Repr<Slice<u8>> for str {}
src/libcore/slice.rs:unsafe impl<'a, T: Sync> Sync for Iter<'a, T> {}
src/libcore/slice.rs:unsafe impl<'a, T: Sync> Send for Iter<'a, T> {}
src/libcore/slice.rs:unsafe impl<'a, T: Sync> Sync for IterMut<'a, T> {}
src/libcore/slice.rs:unsafe impl<'a, T: Send> Send for IterMut<'a, T> {}
src/libcore/str/pattern.rs:unsafe impl<'a, C: CharEq> Searcher<'a> for CharEqSearcher<'a, C> {
src/libcore/str/pattern.rs:unsafe impl<'a, C: CharEq> ReverseSearcher<'a> for CharEqSearcher<'a, C> {
src/libcore/str/pattern.rs:unsafe impl<'a> Searcher<'a> for CharSearcher<'a> {
src/libcore/str/pattern.rs:unsafe impl<'a> ReverseSearcher<'a> for CharSearcher<'a> {
src/libcore/str/pattern.rs:unsafe impl<'a, 'b> Searcher<'a> for CharSliceSearcher<'a, 'b> {
src/libcore/str/pattern.rs:unsafe impl<'a, 'b> ReverseSearcher<'a> for CharSliceSearcher<'a, 'b> {
src/libcore/str/pattern.rs:unsafe impl<'a, F> Searcher<'a> for CharPredicateSearcher<'a, F>
src/libcore/str/pattern.rs:unsafe impl<'a, F> ReverseSearcher<'a> for CharPredicateSearcher<'a, F>
src/libcore/str/pattern.rs:unsafe impl<'a, 'b> Searcher<'a> for StrSearcher<'a, 'b> {
src/libcore/str/pattern.rs:unsafe impl<'a, 'b> ReverseSearcher<'a> for StrSearcher<'a, 'b> {
src/libcore/sync/atomic.rs:unsafe impl Sync for AtomicBool {}
src/libcore/sync/atomic.rs:unsafe impl Sync for AtomicIsize {}
src/libcore/sync/atomic.rs:unsafe impl Sync for AtomicUsize {}
src/libcore/sync/atomic.rs:unsafe impl<T> Send for AtomicPtr<T> {}
src/libcore/sync/atomic.rs:unsafe impl<T> Sync for AtomicPtr<T> {}
src/librustc/diagnostics.rs:unsafe impl<T> Sync for NotThreadSafe<T> {}
src/librustc_trans/back/msvc/registry.rs:unsafe impl Sync for RegistryKey {}
src/librustc_trans/back/msvc/registry.rs:unsafe impl Send for RegistryKey {}
src/librustc_trans/back/write.rs:unsafe impl Send for ModuleConfig { }
src/librustc_trans/trans/mod.rs:unsafe impl Send for ModuleTranslation { }
src/librustc_trans/trans/mod.rs:unsafe impl Sync for ModuleTranslation { }
src/libstd/collections/hash/table.rs:unsafe impl<K: Send, V: Send> Send for RawTable<K, V> {}
src/libstd/collections/hash/table.rs:unsafe impl<K: Sync, V: Sync> Sync for RawTable<K, V> {}
src/libstd/collections/hash/table.rs:unsafe impl<'a, K: Sync, V: Sync> Sync for Iter<'a, K, V> {}
src/libstd/collections/hash/table.rs:unsafe impl<'a, K: Sync, V: Sync> Send for Iter<'a, K, V> {}
src/libstd/collections/hash/table.rs:unsafe impl<'a, K: Sync, V: Sync> Sync for IterMut<'a, K, V> {}
src/libstd/collections/hash/table.rs:unsafe impl<'a, K: Send, V: Send> Send for IterMut<'a, K, V> {}
src/libstd/collections/hash/table.rs:unsafe impl<K: Sync, V: Sync> Sync for IntoIter<K, V> {}
src/libstd/collections/hash/table.rs:unsafe impl<K: Send, V: Send> Send for IntoIter<K, V> {}
src/libstd/collections/hash/table.rs:unsafe impl<'a, K: Sync, V: Sync> Sync for Drain<'a, K, V> {}
src/libstd/collections/hash/table.rs:unsafe impl<'a, K: Send, V: Send> Send for Drain<'a, K, V> {}
src/libstd/io/lazy.rs:unsafe impl<T> Sync for Lazy<T> {}
src/libstd/sync/mpsc/blocking.rs:unsafe impl Send for Inner {}
src/libstd/sync/mpsc/blocking.rs:unsafe impl Sync for Inner {}
src/libstd/sync/mpsc/mod.rs:unsafe impl<T: Send> Send for Receiver<T> { }
src/libstd/sync/mpsc/mod.rs:unsafe impl<T: Send> Send for Sender<T> { }
src/libstd/sync/mpsc/mod.rs:unsafe impl<T: Send> Send for SyncSender<T> {}
src/libstd/sync/mpsc/mpsc_queue.rs:unsafe impl<T: Send> Send for Queue<T> { }
src/libstd/sync/mpsc/mpsc_queue.rs:unsafe impl<T: Send> Sync for Queue<T> { }
src/libstd/sync/mpsc/spsc_queue.rs:unsafe impl<T: Send> Send for Queue<T> { }
src/libstd/sync/mpsc/spsc_queue.rs:unsafe impl<T: Send> Sync for Queue<T> { }
src/libstd/sync/mpsc/sync.rs:unsafe impl<T: Send> Send for Packet<T> { }
src/libstd/sync/mpsc/sync.rs:unsafe impl<T: Send> Sync for Packet<T> { }
src/libstd/sync/mpsc/sync.rs:unsafe impl<T: Send> Send for State<T> {}
src/libstd/sync/mpsc/sync.rs:unsafe impl Send for Node {}
src/libstd/sync/mutex.rs:unsafe impl<T: ?Sized + Send> Send for Mutex<T> { }
src/libstd/sync/mutex.rs:unsafe impl<T: ?Sized + Send> Sync for Mutex<T> { }
src/libstd/sync/mutex.rs:unsafe impl Sync for Dummy {}
src/libstd/sync/mutex.rs:    unsafe impl<T: Send> Send for Packet<T> {}
src/libstd/sync/mutex.rs:    unsafe impl<T> Sync for Packet<T> {}
src/libstd/sync/rwlock.rs:unsafe impl<T: ?Sized + Send + Sync> Send for RwLock<T> {}
src/libstd/sync/rwlock.rs:unsafe impl<T: ?Sized + Send + Sync> Sync for RwLock<T> {}
src/libstd/sync/rwlock.rs:unsafe impl Sync for Dummy {}
src/libstd/sys/common/mutex.rs:unsafe impl Sync for Mutex {}
src/libstd/sys/common/net.rs:unsafe impl Sync for LookupHost {}
src/libstd/sys/common/net.rs:unsafe impl Send for LookupHost {}
src/libstd/sys/common/poison.rs:unsafe impl Send for Flag {}
src/libstd/sys/common/poison.rs:unsafe impl Sync for Flag {}
src/libstd/sys/common/remutex.rs:unsafe impl<T: Send> Send for ReentrantMutex<T> {}
src/libstd/sys/common/remutex.rs:unsafe impl<T: Send> Sync for ReentrantMutex<T> {}
src/libstd/sys/unix/condvar.rs:unsafe impl Send for Condvar {}
src/libstd/sys/unix/condvar.rs:unsafe impl Sync for Condvar {}
src/libstd/sys/unix/fs.rs:unsafe impl Send for Dir {}
src/libstd/sys/unix/fs.rs:unsafe impl Sync for Dir {}
src/libstd/sys/unix/mutex.rs:unsafe impl Send for Mutex {}
src/libstd/sys/unix/mutex.rs:unsafe impl Sync for Mutex {}
src/libstd/sys/unix/mutex.rs:unsafe impl Send for ReentrantMutex {}
src/libstd/sys/unix/mutex.rs:unsafe impl Sync for ReentrantMutex {}
src/libstd/sys/unix/rwlock.rs:unsafe impl Send for RWLock {}
src/libstd/sys/unix/rwlock.rs:unsafe impl Sync for RWLock {}
src/libstd/sys/unix/thread.rs:unsafe impl Send for Thread {}
src/libstd/sys/unix/thread.rs:unsafe impl Sync for Thread {}
src/libstd/sys/windows/condvar.rs:unsafe impl Send for Condvar {}
src/libstd/sys/windows/condvar.rs:unsafe impl Sync for Condvar {}
src/libstd/sys/windows/fs.rs:unsafe impl Send for FindNextFileHandle {}
src/libstd/sys/windows/fs.rs:unsafe impl Sync for FindNextFileHandle {}
src/libstd/sys/windows/handle.rs:unsafe impl Send for RawHandle {}
src/libstd/sys/windows/handle.rs:unsafe impl Sync for RawHandle {}
src/libstd/sys/windows/mutex.rs:unsafe impl Send for Mutex {}
src/libstd/sys/windows/mutex.rs:unsafe impl Sync for Mutex {}
src/libstd/sys/windows/mutex.rs:unsafe impl Send for ReentrantMutex {}
src/libstd/sys/windows/mutex.rs:unsafe impl Sync for ReentrantMutex {}
src/libstd/sys/windows/rwlock.rs:unsafe impl Send for RWLock {}
src/libstd/sys/windows/rwlock.rs:unsafe impl Sync for RWLock {}
src/libstd/thread/local.rs:    unsafe impl<T> ::marker::Sync for Key<T> { }
src/libstd/thread/local.rs:    unsafe impl<T> ::marker::Sync for Key<T> { }
src/libstd/thread/mod.rs:unsafe impl<T: Send> Send for Packet<T> {}
src/libstd/thread/mod.rs:unsafe impl<T: Sync> Sync for Packet<T> {}
src/libstd/thread/scoped_tls.rs:    unsafe impl<T> ::marker::Sync for KeyInner<T> { }
src/libstd/thread/scoped_tls.rs:    unsafe impl<T> marker::Sync for KeyInner<T> { }

And it would be impossible for safe threading primitives to be efficiently implemented outside of std (e.g. crossbeam), or for any unsafe code outside std to declare that it is indeed thread-safe (e.g. contain-rs).


#18

Of course that is important. But how is it different from the fact that unsafe code has to trust the value of the len field?

I think we must be miscommunicating enormously though if you think I’m suggesting that Send and Sync should have required lang items to implement.


#19

The distinction is entirely in the matter of open vs closed implementations. Generic code (and higher-order code) has to operate under the suspicion that the implementations it consumes are horribly busted. Concrete code doesn’t.

This is analogous to the difference between giving your best friend a key to your house (concrete, closed trust), and giving everyone you meet a key to your house (generic, open trust). Yes, your best friend can be an idiot and mess everything up, but it’s probably not something worth worrying about too much. Trusting everyone in the world to do the right thing, however, is how we end up with the dangers of C(++).

Note that the Pre-RFC doesn’t propose exposing pub-unsafe fields (which I believe @reem had at least one use-case for as an ergonomics thing). I’m focusing only on the “using it for private stuff” aspect, which is the only motivation given.


#21

Well, I think we have the same understanding of the matter, at least, and are just disagreeing over a subjective measure of the importance of the difference. I don’t see a difference in kind (only a difference in degree) between trusting that the vec module is upholding its invariants and trusting that all of the Rust code I depend on is upholding its invariants. We both agree that unsafe traits are a bigger deal than unsafe fields, but we disagree about how much bigger; coming to consensus on that won’t really impact this RFC at all.