Safe Library Imports

I think there's a lot of confusion, discussion scope creep and talking past each other going on, so let me take a step back and make explicit a bunch of stuff I was probably far too terse or implicit about in my last post (and I think is also being hinted at in everyone else's posts).

Solving Security and "Shooting Down" Discussion

Security is hard. There is no One True Threat Model that applies to all users of Rust. There is no single tool, or single feature that can solve all the security issues that matter in practice. Perhaps more importantly, these tools/features/threat models often cannot be discussed in isolation from each other because all these concerns intersect and overlap very heavily.

Like any other challenging problem with no easy answers, there is a long history of people proposing overly simplistic solutions that ignore part of the complexity and in practice would cause more problems than they solve. Explaining why they'd cause more problems than they solve is also very difficult, and it's often impossible to truly convince everyone. In particular, the people suggesting these solutions often get so defensive that they stop thinking about what's actually best for the Rust language and ecosystem in favor of just trying to "win the argument" (this happens on a lot of internals threads, not just security-related ones).

"Mantras" like the ones you listed are what happens when the same argument has to be repeatedly invoked against new equally flawed proposals on a regular basis. The intent is not (or at least it's never supposed to be) about "shooting down" propsoals or "shutting down" discussion, but only to avoid wasting everyone's time rehashing things that "everyone" knows already. Of course, the newcomer making this familiar and flawed proposal is not part of the "everyone" that already knows it's familiar and flawed, which leads to this perception of undeserved hostility.

We've clearly crossed the line where this shorthand is leading to everyone talking past each other instead of avoiding wasted discussion. Which is why I'm about to bite the bullet and rehash some stuff (or to borrow your phrase, do our share of the "emotional labor").

Philosophical Objections

"Demonizing Unsafe"

For historical context, I believe this term originates in [Pre-RFC] Cargo Safety Rails - #25 by nrc, and I've probably helped popularize it by using it in posts like [Pre-RFC] Cargo Safety Rails - #51 by Ixrec and Crate capability lists - #15 by Ixrec.

Here are some of the key sound bites:

  • "While I agree that evaluating the quality-level of a program including its dependencies is a good goal, I think focussing on unsafety as a way to do so is somewhat naive."
  • "I believe that the majority of unsafe code is not a problem - it is small, self-contained, and innocuous."
  • "unsafe code is not the only source of bugs. If you care so much about safety/quality that you would check every dependent crate for unsafe code, you should almost certainly be checking them all for logic bugs too."
  • "If we treat known-sound unsafe code differently from safe code, that makes known-sound unsafe code seem more dangerous than it really is, and makes safe code seem more safe than it really is."
  • "there’s a very strong risk of any audit framework/tooling like this unintentionally leading to crate authors being discouraged from using any unsafe code for optimization, even when the soundness of that unsafe code is uncontroversial ... we’d be causing more harm than good if we introduced a system that did have this problem in practice (if nothing else, it risks encouraging the idea that security is at odds with performance)."

As in those threads, I'm assuming it's uncontroversial that there is such a thing as "known-sound unsafe code", that we as a community are capable of identifying it, and that it should not be treated any differently than safe code by the ecosystem (even though the compiler and core language obviously do need to treat it differently).
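
To make that concrete, here's the kind of toy example I have in mind when I say "known-sound": the invariant the unsafe block relies on is established right next to it, so a reviewer can check it locally. The function itself is made up purely for illustration.

// Interpret a byte slice as a string slice, given that it is pure ASCII.
pub fn ascii_to_str(bytes: &[u8]) -> &str {
    assert!(bytes.is_ascii());
    // SAFETY: every byte was just checked to be ASCII, and ASCII is valid UTF-8.
    unsafe { std::str::from_utf8_unchecked(bytes) }
}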

Hopefully it's now obvious why this is a legitimate objection to the proposal in this thread, just as it was in those past threads. If "safe imports" were added to the language, it would become a breaking change for any library to replace safe code with (even known-sound) unsafe code. While I hate to sound like I'm shutting down proposals, we have to be able to express objections to proposals, and I really do believe this one would cause far more harm than good.

If anyone can think of a better shorthand for this position than "don't demonize unsafe", I'd love to hear it.

To be super clear, I do think the presence of unsafe is a good heuristic for where to focus personal or community auditing efforts. I also think that the bar for "known-sound" ought to be very high, and I agree that only a minority of people in the Rust community can meet that bar. But none of that contradicts the rest of this section.

One True Threat Model

In an even broader sense, the real problem with many of these past threads is that they're effectively proposing that we hardcode one specific threat model into cargo or crates.io or (in this case) the Rust language itself.

This is where the mantras of “false sense of security”, “not the real problem”, “static analysis is the answer”, and “trust audits are the answer” come in.

What we should really say is:

  • only the application developer can decide what their threat model should be
  • obviously, there is no single threat model that is correct for all apps all the time
  • less obviously, there is also no good "default" or "lowest common denominator" threat model that is "good enough" for "most" apps
  • we should avoid designs that would give application developers the impression that they don't need to pick a threat model, or that we've solved that problem for them and they don't need to think about what their threat model is
  • we should develop tools and features that enable you to enforce whatever threat model you care about

Or at least that's what I have in mind when I say things like that. I think we all agree that there are important threat models where ruling out file system access, network access or un-audited unsafe code would be extremely valuable, but there are also other important threat models more interested in side channels, timing attacks, DoS attacks and so on. We simply shouldn't hardcode any one of these threat models.

Incomplete/Unclear Proposal

How would we actually use these features?

Another common problem that I think gets in the way of productive discussion is a failure to articulate how security-related proposals for tools or language changes would actually get used in practice. Because of the obvious practical reality that nobody can clean-room reimplement, or even manually audit, all of their dependencies, the workflow you have in mind becomes extremely important.

For instance, with the proposal in this thread, the only usage I'm aware of is to simply mark all your imports as "safe", see which ones fail to compile, then remove those "safe" annotations and audit those crates. That's essentially the same workflow as running a web-of-trust tool on your project to produce a list of dependencies that need auditing. If tools like cargo-crev ranked the un-audited crates by things like "contains unsafe", you'd get all the same benefits of focusing on more-likely-to-be-dangerous code without "demonizing" any of the crates with known-sound unsafe (assuming we can define "known-sound" in terms of community audits, which is a big assumption).

So in addition to the abstract philosophical objections given above, I'm also just not seeing how a language feature like this would be a practical improvement over an external tool. I think this is a big part of the reason many are responding by saying this solves the wrong problem or other tools do a better job.

Did you have some other workflow in mind? Are there any imports you wouldn't always want to apply "safe" to? Did you want to imbue "safe" with some other semantics beyond sometimes failing compilation? When you need a crate that uses unsafe, would you do something other than audit it for soundness?

Why all the talk of effect systems?

I think this happened because your original post said "they cannot access files or the network", and it's not at all clear how this could possibly be implemented (in a robust enough way to provide any meaningful security guarantees) without a full blown effect system. A more recent post of yours also said "If you can get static assurances from the type and build system, you don’t need trust audits and further static analysis" which raises the same question. And as I was writing this, your new post responding to my accidentally posted incomplete draft of this post is making further claims of this sort.

So while I agree that your original post intended to be a far simpler proposal than an effect system, you've been consistently making claims that conflict with that intent. Importantly, a lot of your responses to the other objections seem to rely on these claims that are hard to make sense of without an effect system. I think this is the biggest reason why so many posts in this specific thread seem to be talking past each other, but it's not really relevant to the broader issues that this thread has in common with many past threads, and that's why this is the shortest section.

Moving Forward

Fundamentally, I believe that "solving security" is too complex of a problem to be solved by simply having people submit proposals and then debating each proposal in isolation the way this forum is currently set up. This is not about whether the "pro-safe import" people "lose" to the "anti-safe import" people; that's completely the wrong way to frame this kind of discussion, yet it is how it gets implicitly framed by the very format of "one user posts a pre-RFC thread, everyone else posts responses to it".

And that is why my first instinct when presented with this thread was to link everyone to the working group. This is exactly the kind of thing working groups are for: look at a complex problem, discuss a wide range of possible solutions with input from as many interested parties as possible, and decide as a group which solutions are worth pursuing and which aren't, based in part on past experience and on which other solutions are already out there, in order to avoid redundancy or confusion.

I happen to believe the specific proposal that started this thread is not part of the ideal set of solutions. But I also think that's far less important than the idea that it makes no sense to discuss this specific proposal in isolation. There are absolutely ideas in this proposal that should be investigated thoroughly, and as far as I know they already are being investigated in other ways.

For example, I think we're all interested in ways to express "these dependencies cannot use the network". But we don't seem to have any concrete proposal for how to define that, and some proposals for enforcing it have the obvious problem that they make adding any kind of network feature a backwards-incompatible change. However, perhaps we could:

  • make all crates that access the network on purpose tag themselves in some way (see the sketch after this list)
  • get the whole Rust community to agree on a certain tagging system
  • teach web-of-trust tools to show these tags in their output
  • make it one of the audit criteria for web-of-trust tools to check that a crate is tagged correctly, that none of its unsafe code circumvents the tag, and so on
  • state that the intended workflow is not forbidding network usage at compile time but instead a CI job that fails if new network users ever appear in your dependency tree
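
To make the first bullet a little more concrete, the "tagging" could be as lightweight as a free-form metadata table, which Cargo already permits and ignores; the table and key names below are entirely made up, not an existing convention:

# In a crate's Cargo.toml (hypothetical convention, nothing standardized)
[package.metadata.capabilities]
network = true        # this crate deliberately opens network connections
filesystem = false    # and never touches the file system on its own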

I think there's probably a viable solution in here. But as you can see, this faint sketch of an idea is already at least five separate proposals, and I believe that's typical of security issues. That's why I think threads discussing single proposals in isolation are not a good way to make progress. If I had more free time or relevant expertise, I'd probably join a working group myself.

I do think that threads discussing problems in isolation might be worthwhile though. For instance, a thread just focused on how to define "network access" could at least work out whether there are any usefully tool-able definitions of that phrase. And personally I'd really like to see a thread about people's intended usage of transitive language subsetting features like this; as discussed above, the only usage I know of is "audit every crate using X", but that makes a lot of these security-related feature proposals obviously redundant, so they must have something else in mind.

12 Likes

I don't think the latter part is uncontroversial. The whole crux of the "demonize unsafe or not" discussion seems to be exactly that controversy.

I'd love having better tools for unsafe tracking, auditing and so forth. And I'm pretty sure I'm not the only one.

1 Like

Ah, yeah that’s bad wording. I should’ve been more explicit there.

So there are definitely projects that will want to “treat unsafe code differently” in the sense of auditing those crates, or forbidding new ones from entering their dependency tree without an audit, and so forth. That’s obviously fine, and is one of the many use cases that we want to enable.

What I meant to say was that we shouldn’t treat unsafe code in any way that would “unintentionally lead to crate authors being discouraged from using any unsafe code for optimization, even when the soundness of that unsafe code is uncontroversial”. And to be extra clear, we absolutely want to discourage using unsafe code when its soundness is not crystal clear for whatever reason.

Is that still controversial?

2 Likes

Phrased like that I agree it's a good principle, yeah.

Thanks for clarifying!

1 Like

I didn't know of this "known-sound unsafe code" that one has to write when writing Rust code. It would indeed be a huge barrier to writing code without unsafe. Are there any resources that describe it? Why has Rust not adapted to make it unnecessary?

If I need to spend a lot of time engaging with or being part of a relevant working group, how would I go about that?

hardcode one specific threat model

That's not what this is. This is directly applying the principle of least authority to module imports. Rather than letting every module load anything it likes, we instead choose what it can load (and whether it can use unsafe, because that's a bypass). There are many threat models it affects and many it does not. It is a tool, and one that provides more guarantees than many of the other tools under consideration.

I think we all agree that there are important threat models where ruling out file system access, network access or un-audited unsafe code would be extremely valuable

Agreed.

For instance, with the proposal in this thread, the only usage I’m aware of is to simply mark all your imports as “safe”, see which ones fail to compile, then remove those “safe” annotations and then audit those crates.

When finding crates, it would be a category that you'd search inside first. If you find a crate for your purposes that has 'least authority', you'd likely use it in place of one that does not. The crate may offer extra features if you give it more authority.

Are there any imports you wouldn’t always want to apply “safe” to?

You'd want that for most imports. If you're importing a library that runs plugins for a paint package, you might not restrict it. But it, in turn, would likely restrict the plugins.

Did you want to imbue “safe” with some other semantics beyond sometimes failing compilation?

I'd want packages to be able to contain optional unsafe code, or optional file or network access code, which is compiled in or left out accordingly. A first version could just forbid it all, though.

When you need a crate that uses unsafe, would you do something other than audit it for soundness?

After auditing it, I'd have to pin the version and add myself to whatever mailing list might inform me of security flaws in that version. Or I'd fork it, or submit a pull request for an unsafe-optional version. The trust/audit systems under discussion elsewhere could help in this case, as long as sufficiently trusted people have had time to audit it.

effect system

Effect systems affect all function types. This proposal does not alter any type signatures.

consistently making claims that conflict

not part of the ideal set of solutions

These in particular feel like rejection before understanding, especially given that you still don't know how this proposal differs from an Effect System.

That's part of my problem. It seems common for people to form that belief before even properly understanding what is being proposed, and to stick to it.

It feels like no matter what I do, some people here will decide that what I'm describing is actually something I'm explicitly saying it's not, or that it cannot do what I know from direct experience it does, and is hence unsuitable, without first asking the questions that would aid understanding.

I wonder whether some sort of real-time chat might help, to avoid so much talking past each other. But while people are comfortable making authoritative negative claims about a proposal they don't yet understand, it is hard to see a way forward.

This problem has been addressed for other languages, such as the Caja project for JavaScript and the Joe-E object capability subset of Java. In each case, there was a need to tame the library. Section 4.2.1 of the Joe-E paper describes the process.

I found it interesting how little it affected usability. For example, you control file access by replacing methods that take path names with methods that take open file handles. The result is that you can allow file access without the risk of allowing code to open arbitrary files.
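
In Rust terms, a rough sketch of that pattern (the function name is made up) is a library API that accepts an already-open handle rather than a path, so the caller decides exactly which file is reachable:

use std::fs::File;
use std::io::{self, Read};

// The library never chooses a path; it can only read the one file the caller
// opened and handed over, e.g. load_config(&mut File::open("app.toml")?).
pub fn load_config(file: &mut File) -> io::Result<String> {
    let mut contents = String::new();
    file.read_to_string(&mut contents)?;
    Ok(contents)
}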

1 Like

This is where I was going with my "75% solution" comments.

There are many interesting, perf-critical crates in which one absolutely wants unsafe to be used. If one uses them, then one needs to trust them somehow. That might be faith in humanity, faith in the crates.io maintainers to remove broken things, faith in a community reputation, faith in your code audit process, whatever. One does need to trust it, but is fine with doing so because it provides enough value to be worth it.

The problem, as I see it, is all the little crates. For example, I have this silly little crate:

Nobody needs this crate. Right now, even I wouldn't use it if I needed to get permission for things, because it does use unsafe. If you look at it, the unsafe is clearly sound, but why would I even bother auditing it when all it does is let me move .rev() calls around?

But if there were a new language feature that meant it didn't need unsafe, or if it could use a well-known trusted crate to do what it needs instead of the unsafe, then I think it would be really nice to be able to argue "look, it's a low-risk crate" and not need to audit it, to help encourage more crate usage.

I'll note that in a world where all code needs to be audited, no upgrades can be done without looking at the changes, and thus breaking on new unsafe code is a feature.

2 Likes

Yeah, it does. It has to. Let's imagine a scenario where this is done entirely at the crate level, with the appropriate trickery of having a std_nonetwork variant whose types are re-exported by std_network and so on, attempting to implement all of this in Cargo without touching the core Rust language itself.

Imagine I've written a crate, let's hypothetically say ammonia, which doesn't require network support. Let's say it also depends on content-security-policy. Ammonia shall be marked as no-network, and it shall always be marked as no-network, because filtering HTML is supposed to be a deterministic process.

Let's further imagine content-security-policy has optional networking support. This is also not really hypothetical; the CSP level 3 standard specifies a way to report violations. Presumably, I would make this a Cargo feature (let's call it network-report-violations) with conditional compilation. Cargo features are supposed to be additive, and I would be careful to ensure that I followed that here, so if one crate that depends on content-security-policy turns on networking support, it shouldn't affect ones that don't require that feature. Naturally, this also makes content-security-policy a crate that optionally relies on network.

How should Cargo ensure that ammonia never calls the network-using functions that content-security-policy may or may not have? Obviously, ammonia should not be allowed to turn on the feature within its own Cargo.toml, but if it went ahead and called the network-requiring functions anyway, then it would fail to compile if the feature was turned off, but it would successfully compile if it was turned on by a completely different crate, and it would violate the sandbox by doing so.
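
To put the hazard in code terms (item and feature names are illustrative sketches, not the crates' real APIs):

// Inside content-security-policy:
#[cfg(feature = "network-report-violations")]
pub fn report_violation(endpoint: &str, report: &[u8]) {
    // Placeholder body; a real implementation would send `report` to `endpoint`.
    let _ = (endpoint, report);
}

// Inside ammonia, which never enables that feature itself, a call like
//     content_security_policy::report_violation("https://example.com/csp", b"...");
// fails to compile while the feature is off, but compiles -- and talks to the
// network -- as soon as any other crate in the build turns the feature on,
// because Cargo unifies features across the whole dependency graph.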

1 Like

This is indeed a significant problem, and one I was hoping would be raised and perhaps answered by someone. The question of what to do when the same crate is used more than once with differing permissions is a good one.

It's perhaps worth noting that ammonia can be included no-network while being explicitly allowed to import content-security-policy with the network-report-violations feature, as a user-level workaround for this multi-import problem. But I'm sure others with more knowledge than me about the lines between cargo imports and rust types could find a better answer.

I’ve had this conversation a few times:

— Rust is thread-safe and automatically manages memory thanks to static analysis!
— What?! That’s not possible! How does it solve <this difficult problem>?
— It doesn’t! :smiley:

Rust as a language is an interesting hack: instead of solving all the hard/unsolvable edge cases, it only solves the easy ones, and leaves the rest to unsafe.

I find this combination incredibly powerful, because it’s safe and easy most of the time, but it doesn’t have any hard limits on how low-level and performant it can get.

That’s remarkably different from “safe-only” languages that pay a performance and/or complexity penalty (things like GC and/or stricter immutability) in order to be able to do more useful things safely, and they still must have some hard limits on what they can do.

Rust wasn’t designed as a “safe-only” language from the start, so the safe subset is intentionally smaller and not maximally powerful, because it never needed to be. So to me “demonizing unsafe” in Rust is a lose-lose scenario: you throw away the powerful/performant part, and you’re left with a half-language that is less powerful than proper “safe-only” languages.

8 Likes

There are ways to handle @notriddle’s problem. One approach is to use a tamed object capabilities library. Just as you limit file access by passing an open file handle instead of a path, you can pass a reference to something that forwards HTTP requests only to a specific set of URLs or that sends all requests to localhost.
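
As a rough sketch (trait and type names are made up; the point is only that the caller, not the library, decides where requests may go):

// A capability the caller hands to the library instead of ambient network access.
pub trait HttpReporter {
    fn post(&self, url: &str, body: &[u8]) -> Result<(), String>;
}

// A wrapper that forwards requests only to an allow-listed set of URL prefixes.
pub struct AllowList<R> {
    inner: R,
    allowed_prefixes: Vec<String>,
}

impl<R: HttpReporter> HttpReporter for AllowList<R> {
    fn post(&self, url: &str, body: &[u8]) -> Result<(), String> {
        if self.allowed_prefixes.iter().any(|p| url.starts_with(p.as_str())) {
            self.inner.post(url, body)
        } else {
            Err(format!("request to {} is not permitted", url))
        }
    }
}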

1 Like

In my experience, same-language sandboxing of arbitrary untrusted code is incredibly hard and almost always fails. Perhaps the most famous example is Java applets, which have had an almost continuous stream of security vulnerabilities. But I’ve also discovered vulnerabilities in multiple other sandboxing projects. I don’t think it is as easy or reliable as you seem to be assuming.

Yeah, the runtime Java security layer is hugely overcomplicated. It has class loaders, security managers, stack inspectors, protection domains, access controllers, access controller contexts, privileged blocks, and more, all trying to work together to intuit intent, along with easy bypasses via Reflection and Serialization. There were plenty of mistakes both in the design and the implementation, in large part because it is so big. Rust is not Java, and the principle of least authority plus preventing imports is much simpler.

A large part of what's being spoken about here is compile-time, not run-time, and is already enforced by static typing. Also, the principle of least authority tends to lead to smaller attack surfaces almost everywhere, as running code then holds fewer things that can be attacked.

Yeah, there will always be vulnerabilities at multiple levels. No matter how good or bad Java's security layer was, attacks using reflection or bad class validation simply bypassed it. That shouldn't make us give up on good security measures, any more than it makes us give up on memory protection.

It's still a significant improvement. If you can demonstrate some Rust code that, without using unsafe or importing any modules at all, can do anything at all at the OS level, that might be more convincing.

Basically, every time you reduce the amount of authority available to code (or leaked via other types), you improve the security of a system. You don't have to be aiming for capabilities; the effort still helps. In a statically typed language like Rust, the majority of the security enforcement is already in the compiler, as it comes down to types and their visibility.

1 Like

This question still stands. And so does the general question of what ways forward there potentially are for this kind of compile-time security enhancement. Some sort of Pre-RFC with things fleshed out? Some sort of spike/prototype? A chat with some security folks?

Why not start by taking the question directly to the security-WG? @bascule

The ideal solution here is for consumers of the library to choose what they prefer: is the security/safety of your application far more critical than its speed (e.g. a non-time-constrained application, or an internet-facing server)? Then use the safe version of the lib. Is the security constraint less important than speed (e.g. videogames)? Then do pick the unsafe version of the lib.

Given #[cfg(feature = "no-unsafe")] and cfg_if!, I think it is really possible for a library author to expose a library where users can opt in/out of unsafe, even if it requires/implies a runtime cost (e.g. RefCell, Rc, Option pattern matching, ...):

Cargo.toml:

[dependencies]
some-lib = { version = "0.6.9", features = ["no-unsafe"] }

It could be nice to be able to opt into this mode for untrusted dependencies when safety is paramount, while keeping the unsafe in other dependencies (not just ::std, obviously) for speed.

Empowering library users by giving them the choice.

That puts a burden on library maintainers to maintain two versions.

If you require safety and can accept a performance penalty, then use WASM in a sandbox. That puts the burden on you, not on literally everyone else in the ecosystem.

@dhm This again seems to make the assumption that the presence of unsafe in a library always, without exception means client applications using it are less safe.

I still don’t understand this position, so let’s try this:

  • If the unsafe code was known for a fact to be perfectly sound, would you still want the library to maintain a no-unsafe feature?

  • If some new unsafe code optimization is proposed, and it’s not known to be sound, would you want the library to add it anyway alongside a no-unsafe feature until soundness can be proven or disproven?

I think most of the comments here (or at least mine) are made from the perspective that if unsafe code is not known to be sound, it should never be published in the first place, but if it is known to be sound, then it’s not a threat, and there doesn’t seem to be any sort of “probably sound” middle ground, so there’s no use case for a no-unsafe feature. Which part of this would you disagree with?

4 Likes

If library maintainers want to earn the trust of more skeptical users, it is obvious that some kind of effort is required:

  • they sacrifice performance to gain trust, by not using unsafe;

  • they "prove" (or hint a plausible sketch of a proof) that the unsafe used is sound (or get someone or something to do it for them (i.e. auditing));

  • they accept "doubling" the work required to maintain some functions / structs by providing two versions of each: a reliably fast one and a reliably safe one (that is, for instance, what I have started doing);

  • or they do not do any of the above (because performance is paramount for the library to be interesting and because they do not have the time for the alternatives); in that case users should be warned / told about the library maintainers' choice.

That is a genuinely interesting alternative; I'll look into it. It makes me think there should be a Rust Security Book, with its own section about using Rust in a WASM sandbox.

(I thank you @Ixrec for your good attitude in trying to come to an agreement).

In my (maybe not so) humble opinion (aside: the very fact of qualifying oneself as humble is a contradiction xD):

  • My concern with unsafe is that it is dangerously close to C. For many this is not an issue, since many programs have been written in C without any vulnerabilities whatsoever. Yet, where I work (security-related field), C is the bread and butter of most sneaky / harder-to-spot bugs: stack & heap overflows, integer arithmetic with pointers (e.g. through indices), use-after-free, data races... Of course, there are other sources of bugs: eval/include-ing user input, file accesses (read/reveal secrets, write/corrupt system data), but those are less sneaky (sometimes they are even related to evil intent from the programmer (e.g. a malicious package / dependency)). The point of all this is that C / unsafe bugs are easy to miss. And many people do not realise to what extent that is true.

    I, for instance, have used unsafe in a crate while trying really, really hard to be careful, writing tests, etc., and yet I used ::std::mem::uninitialized for a user-given generic type T. Even if I never read such a value, it was still UB, since the variable could hold a bit pattern incompatible with its type (e.g., the value 3_u8 for a bool, or a null reference), and the compiler may use that type-provided layout guarantee for some exotic optimization. I did not know that at the time.

    On the same level, I have seen people attempt to prevent bugs by zero-ing secrets (e.g., on drop), and even go as far as recommending it, without paying any special attention to generics: back to wrong-bit-layout UB.

    TL;DR: UB is so incredibly easy to get when using unsafe that I do demonize it. I'd rather demonize it than naively trust it.

  • Especially when there are usually safe alternatives to what the code is attempting to do, by using Rcs, RefCells, more Options. And in many cases the runtime cost is negligible.

  • On the other hand, I do trust attempts at fixing the latent danger of unsafe, not only when it has been proven safe (obviously), but also when it has been "thoroughly" audited:

It does not seem necessary to me;

That would be my dream. But, more realistically, it should at the very least be added as a major semver change ("safe API -> potentially unsafe API" = "API change"). Once proven sound, it could be retro-added as a minor semver change.