Safe Library Imports

That sounds like we actually mostly agree on this stuff, and the disagreement is mostly a matter of tone and emphasis.

In particular, where you say “demonize” in your post, I’d say “the bar for declaring a piece of unsafe code ‘known-sound’ should be extremely high”. That’s obviously very different from, and not at all in conflict with, my “don’t demonize”, meaning “don’t discourage library authors from using known-sound unsafe code”.

(again, if anyone can think of better slogans/shorthands, I’m all ears)

As a quasi-conclusion: I would think it falls under the purview of the security working group’s efforts to audit and minimize unsafe throughout the ecosystem to also decide:

  • exactly where we should set the bar for declaring unsafe code to be “known-sound” (e.g. a test suite with strong coverage metrics that passes fully under the stacked borrows model; see the sketch after this list)
  • whether there are any uses of unsafe where a no-unsafe feature would be appropriate for whatever reason (e.g. soundness that depends on unresolved questions in the unsafe code guidelines, so it’s impossible to prove sound or unsound for now)
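
To make that bar concrete, here’s a minimal sketch (a hypothetical function, not from any real crate) of the shape a known-sound unsafe item might take: a documented safety argument plus a test suite that can be run under Miri, which checks execution against the stacked borrows rules:

```rust
/// Returns the first element without a bounds check.
///
/// SAFETY: the caller must guarantee `xs` is non-empty. The debug
/// assertion and the Miri-checked test suite back up that argument.
unsafe fn first_unchecked(xs: &[u32]) -> u32 {
    debug_assert!(!xs.is_empty());
    *xs.get_unchecked(0)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn reads_first_element() {
        // `cargo +nightly miri test` runs this under the stacked
        // borrows checker and flags undefined behavior.
        assert_eq!(unsafe { first_unchecked(&[7, 8]) }, 7);
    }
}
```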

Unless someone has a concrete example they’d like to dig into, I think we’ve probably exhausted the philosophical discussion for now.


Well said!

There's the question of who knows it. If the maintainer/publisher knows it to be sound, that doesn't help my knowledge of its soundness. Whereas code that doesn't use unsafe is sound (with respect to types and memory).

That's fair. Of course, being able to avoid unsafe then still has merit, as it saves a lot of work meeting and checking this extremely high bar.

I think there'd be mileage in discussing what unsafe is really meant to be. It seems that anything that becomes a common unsafe pattern probably creates a need for an equivalent non-unsafe language feature, but that's probably a discussion for a whole new thread.

And that gets us right back to web-of-trust tools :)

I think it's generally agreed that common unsafe patterns are good candidates for addition to the language or std (I'm not sure if you meant "language feature" to include std), although not every legitimate, known-sound use of unsafe is something that could be made into a language feature (especially FFI and embedded work); the canonical already-absorbed example is sketched below. But yeah, definitely a new thread if there's anything more to say there.
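
(That canonical example is `split_at_mut`: it can't be written in safe Rust, because the borrow checker can't see that the two halves don't alias, so std encapsulates the unsafe once behind a safe signature. A rough sketch of what std does internally, simplified to `i32` slices:)

```rust
use std::slice;

// Safe API, unsafe inside: the assert makes the raw-pointer arithmetic
// below sound, and callers never see any of it.
fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
    let len = values.len();
    let ptr = values.as_mut_ptr();
    assert!(mid <= len);
    unsafe {
        (
            slice::from_raw_parts_mut(ptr, mid),
            slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut v = [1, 2, 3, 4];
    let (a, b) = split_at_mut(&mut v, 2);
    a[0] = 10;
    b[0] = 30;
    assert_eq!(v, [10, 2, 30, 4]);
}
```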

Sure, though they're much more complicated than restricting unsafe and types with system access.

By “demonizing unsafe” I mean treating all uses of unsafe blocks as unacceptable by default, because of what they might be, and not based on what they actually are. That’s fear and prejudice.

I’m fine with solutions that are based on trusting authors to use unsafe properly and/or code audits to actually check what the code is doing. That is healthy skepticism and rejection of code that is actually dangerous.


Going back again to the original post: the official zlib library is thoroughly reviewed and fuzzed. It has an OK track record (some bugs, but nothing major in the last decade). I’d consider it acceptable to use, despite being 100% “unsafe” with 40000 lines of code.

Currently, I can use this library without any problems. I can build my libraries on top of it and have gzip that is pretty fast and safe in practice.

But if using this library caused my libraries to be marked as dangerous and banned from some “safe” contexts, then I wouldn’t use it. I just don’t want to deal with that. Instead, I would use a slower “safe-subset-Rust” gzip library. It would be a loss of performance for no real gain in safety, driven by social pressure rather than technology.
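
(For concreteness, the pattern in question is the usual thin safe wrapper over FFI. A minimal sketch, using two real zlib entry points with error handling simplified:)

```rust
use std::os::raw::{c_int, c_uchar, c_ulong};

#[link(name = "z")]
extern "C" {
    #[link_name = "compressBound"]
    fn compress_bound(source_len: c_ulong) -> c_ulong;
    fn compress(
        dest: *mut c_uchar,
        dest_len: *mut c_ulong,
        source: *const c_uchar,
        source_len: c_ulong,
    ) -> c_int;
}

/// Compress `input` with zlib. All the unsafety is confined here;
/// callers get an ordinary safe API and never touch a raw pointer.
pub fn zlib_compress(input: &[u8]) -> Result<Vec<u8>, c_int> {
    let mut out_len = unsafe { compress_bound(input.len() as c_ulong) };
    let mut out = vec![0u8; out_len as usize];
    let rc = unsafe {
        compress(
            out.as_mut_ptr(),
            &mut out_len,
            input.as_ptr(),
            input.len() as c_ulong,
        )
    };
    if rc != 0 {
        return Err(rc); // Z_MEM_ERROR or Z_BUF_ERROR
    }
    out.truncate(out_len as usize);
    Ok(out)
}
```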


That's very emotive language. Note that the security working group's mission includes:

  • Most tasks shouldn't require dangerous features such as unsafe. This includes FFI.

So a little fear might be a good thing.

A fair point. I would still personally prefer to have a no-unsafe option, but some ways to moderate it may also be useful. I'd expect in the long term to be able to say 'I trust zlib' without having to trust other unsafe or native libraries. The conditional compilation options mentioned earlier would allow you to use no-unsafe zlib or native zlib depending on what the including crate trusts, wouldn't they?
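
(A sketch of how that could look, assuming a hypothetical `native-zlib` feature; the pure-Rust backend stands in for something like miniz_oxide:)

```rust
// One shared zlib crate, one public API, two backends chosen at compile
// time. A consumer that forbids unsafe simply builds without the
// `native-zlib` feature and gets the pure-Rust backend.

#[cfg(feature = "native-zlib")]
mod backend {
    pub fn decompress(_data: &[u8]) -> Vec<u8> {
        // FFI into libz lives here, with the unsafe confined to this module.
        unimplemented!()
    }
}

#[cfg(not(feature = "native-zlib"))]
mod backend {
    pub fn decompress(_data: &[u8]) -> Vec<u8> {
        // Pure-Rust inflate implementation.
        unimplemented!()
    }
}

// Callers import this and never care which backend they got.
pub use backend::decompress;
```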

For what it's worth, I made a pretty detailed proposal for how to implement a feature like this by tagging/associating usages of unsafe with cargo features, with the goal of allowing crate consumers to specify which dependent crates are allowed to transitively consume unsafe features of other crates.
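
To give a rough flavor of the crate-side half (hypothetical feature name `unsafe-ffi`; this is a sketch of what can be approximated with plain Cargo features today, not the proposal itself):

```rust
// Crate root: unless the consumer enables the `unsafe-ffi` feature,
// the compiler itself rejects every unsafe block in this crate.
#![cfg_attr(not(feature = "unsafe-ffi"), forbid(unsafe_code))]

#[cfg(feature = "unsafe-ffi")]
pub mod fast_path {
    // unsafe / FFI-backed code is only even compiled when the consumer
    // opted in via the feature.
}
```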

For your original use case of importing unaudited libraries and knowing they can’t access files or network, the safety seems to be all-or-nothing. If FFI was allowed, how would Rust ensure it can’t call fopen? How could FFI work while still forbidding arbitrary pointer dereference?

FFI is as dangerous as unsafe, so one would typically not want to allow it. If one wanted file access, one would allow import of a well-known module that can read/write files. That module being allowed FFI still isolates it from the layer you don't trust.

If you're talking about the FFI for zlib, you'd have a zlib module that you allowed FFI/unsafe, which you might permit as a well-known library, but you wouldn't have to give the png library that uses it the ability to use FFI/unsafe itself. Ideally, there'd be one common zlib with an unsafe version and a safe version; you'd import that from the png lib and not have to care whether you're in a context that allows unsafe.

Right, that's why I'm confused by:

Most tasks shouldn’t require dangerous features such as unsafe. This includes FFI.

If the semantics of FFI don’t change (e.g. it’s not turned into emulation/sandboxing), then allowing it is identical to allowing unsafe. Will it just have a different syntax without the unsafe keyword, with the new syntax getting the same treatment as the unsafe keyword? What’s the point?

Of course you could vet crates and allow them (that’s the trust/auditing approach I think is the best solution), but then the distinction between FFI and unsafe is not meaningful: you still allow arbitrary pointer dereference and vet that it’s not abused.

It looks like an FFI call requires unsafe, separate from the definitions. I think what the group is trying to say is that one can't just declare "using unsafe only for FFI is fine"; they're saying (I think) that FFI is as unsafe as any other use of unsafe and so won't get a free pass.
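
(Concretely, with `getpid` from libc as a stand-in, on a Unix target:)

```rust
use std::os::raw::c_int;

extern "C" {
    // The declaration itself needs no `unsafe` keyword...
    fn getpid() -> c_int;
}

fn main() {
    // ...but every call site does, because the compiler can verify
    // nothing about what the foreign function actually does.
    let pid = unsafe { getpid() };
    println!("pid = {}", pid);
}
```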


Indeed. With the later modification that some of the unsafe in std is simply trusted by default, it could be a good part of the solution. Though the FFI in std would then probably be trusted and you'd want another restriction to prevent file and network access and such, by default.

I was suggesting using the subsystems of std as the rough modeling dimensions for "unsafe features":

https://github.com/rust-lang/rust/tree/master/src/libstd

You can imagine things like file and network access requiring, at the very least, that a std/io unsafe feature be enabled.

For FFI, I'm okay with the existing API remaining unsafe forever; however, I would be interested in seeing it wrapped in something higher-level that can provide safe abstractions. We've seen a few things of this nature spring up for doing language-specific bindings. All that said, if you had such a safe_ffi crate wrapping the unsafety, this same mechanism could be used to control which crates have "safe" FFI access.
