An idea to mitigate attacks through malicious crates


#62

I guess I disagree about where we should start. Answering the question you just posted sounds rather difficult, and any sort of precise list would be the subject of much discussion/debate.

I am personally more interested in specifying a reusable mechanism for modeling these design boundaries, one which works for arbitrary Rust crates and std alike, than in answering questions like “what should the boundaries for std be?”. That feels a bit like putting the cart before the horse, as the answer to that question seems to depend on exactly how the mechanism works.

To answer your question in broad strokes: std is already broken down into relevant modules and tagged with existing features. I think you start with a list roughly shaped like how std is implemented today, and potentially coalesce subsystems where there is consensus that escalations between them are trivial enough that modeling them separately provides no value.

While I agree there might be merit in collapsing fs and command, I think there are also advantages to keeping them separate. Is an argument like "a Linux-specific API can be used to escalate fs to net" a sufficient reason to collapse them or, failing that, make one a dependency of the other? I think that’s debatable. I also think this is maybe not the time and place to have that particular debate.

Mainly, I want to point out that I think it’s incorrect to say that because a privilege escalation path between two proposed permissions exists in a particular context, they are “exactly the same thing”. As I pointed out earlier, there is (or was) an escalation path between net and cmd for users running an unauthenticated Redis instance on their local computer. Does this mean separating the net and cmd features/permissions provides no value, or is misleading to users? That’s debatable, but personally I think the answer is “no”.

Finally, I again want to point out the danger that, without sufficiently fine-grained authority, crates will be given too much authority, and that will result in vulnerabilities that could have been prevented by a least-authority approach.


#63

It seems that there are (at least) two ways to look at the problem.

  1. I believe that we should start by defining the invariant we wish to guarantee, and then define a mechanism to guarantee it.
  2. If I read correctly what you write, @bascule, you have a mechanism in mind, and you wish to explore how much you can guarantee with it.

Both approaches are sound, but require different methodologies and will lead to different results.

As stated above, I believe that the best way to reach the result stated initially in this thread (“mitigate attacks through malicious crates”) is to start by defining an invariant that will be sufficient to mitigate such attacks, then define the mechanism, then possibly refine it.

I’m fully aware, of course, that I may be wrong 🙂

Is this an accurate summary of our differences in methodology?


#64

I agree that fine-grained authority is desirable, and that crates should be given the least authority they need. To do that we need to define specific capabilities that are less than “crate can do anything”. Unfortunately, this is rather challenging because Rust doesn’t currently have any way to sandbox parts of an application at run-time, so they are free to do anything the OS lets them. The question then becomes: are there meaningful capabilities that don’t grant the crate everything? One possibility is net, which sounds like it at least needs very specific conditions in order to escalate. Another could be a variant of fs restricted to known safe paths, although it would be incompatible with the existing std::fs.
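
For the sake of illustration, a minimal sketch of what such a restricted variant might look like (the ScopedFs name and API are hypothetical, and this glosses over symlink/TOCTOU issues a real design would have to address):

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Hypothetical capability handle: filesystem access scoped to one root directory.
pub struct ScopedFs {
    root: PathBuf,
}

impl ScopedFs {
    /// Canonicalize the root up front so the escape check below is meaningful.
    pub fn new(root: impl AsRef<Path>) -> io::Result<Self> {
        Ok(Self { root: root.as_ref().canonicalize()? })
    }

    /// Refuse any path that resolves outside the granted root.
    fn resolve(&self, path: &Path) -> io::Result<PathBuf> {
        let candidate = self.root.join(path).canonicalize()?;
        if candidate.starts_with(&self.root) {
            Ok(candidate)
        } else {
            Err(io::Error::new(
                io::ErrorKind::PermissionDenied,
                "path escapes the granted scope",
            ))
        }
    }

    pub fn read(&self, path: impl AsRef<Path>) -> io::Result<Vec<u8>> {
        fs::read(self.resolve(path.as_ref())?)
    }
}
```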


#65

Yep, I agree with this.

In regard to “an invariant that will be sufficient to mitigate such attacks”, to me the only sound place to start is by addressing unsafe. My proposal started with trying to unify the problem in terms of unsafe because unless unsafe is addressed (in a transitive manner), no other finer-grained authority is possible: unsafe is an escape hatch through which any such guarantees can be bypassed. However, rather than trying to just address unsafe, my proposal uses the mechanism by which it addresses unsafe as a foundation for finer-grained authority.

Concretely, something I was trying to avoid was the notion of something like the cap_unsafe mentioned earlier, i.e. granting unbounded authority. Inasmuch as my proposal was a backwards-compatible, opt-in addition to the language, that sort of approach is what we already have today, so my thought on the matter was: if people are opting into micromanaging the authority of their dependencies, they should always get something which is strictly better than the status quo. Otherwise, what’s the point?

My personal thoughts on “capabilities” tend to steer more towards the OCap school than, say, POSIX capabilities, and to me the notion of completely unrestrained, unbounded authority is an antipattern to be avoided.


#66

Good points. This suggests that we may want, at a later stage, to introduce custom capabilities.

Also, it has been pointed out to me that in operating systems such as Redox or Fuchsia, std::fs, std::command and std::net are effectively different capabilities. This suggests that, as advocated by @bascule, we should find a way to treat them differently, either now or eventually.

I believe that we should start with whichever is simplest and refine later.


#67

I’m not sure I understand what you mean.

As long as you have an unsafe function, it can be used to call std::fs, std::command or std::net without declaring it, just by jumping to an address in the binary – or more visibly, it can emulate the feature by calling into libc or equivalent. So whatever you do, I cannot see unsafe being anything other than the union of all capabilities.
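
To make that concrete, here is a minimal sketch (Linux-flavored, with the open flag hard-coded purely for illustration) of code that reaches the filesystem without ever naming std::fs:

```rust
use std::ffi::CString;
use std::os::raw::{c_char, c_int};

extern "C" {
    // Declared by hand rather than through the `libc` crate, to emphasize
    // that no fs-related dependency or feature needs to appear anywhere.
    fn open(path: *const c_char, flags: c_int) -> c_int;
}

fn sneaky_open(path: &str) -> c_int {
    let c_path = CString::new(path).expect("no interior NUL byte");
    // 0 == O_RDONLY on Linux; hard-coded only to keep the sketch short.
    unsafe { open(c_path.as_ptr(), 0) }
}
```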

Do we agree on this point or am I missing something?


#68

I agree unsafe is still unsafe, but perhaps I can better clarify the overall framework of my proposal.

Looking at it a bit through a more traditional OCap lens, the situation can be modeled as an access control decision involving three different “principals”, or perhaps more precisely in cargo terminology (at least in the context of my proposal) three “packages”. I’ll call them A, B, and C:

  • A: the parent package attempting to consume two different dependent packages: the first (B) as a direct dependency, and the second (C) as a transitive dependency
  • B: a direct dependency of package A
  • C: a transitive dependency of package B exporting unsafe behavior

My proposal attempts to attenuate/minimize package B's ability to vicariously/transitively call functions with “interior unsafety” in package C.

Another way of describing this concept is that package A wishes to delegate to package B the authority to make use of certain unsafe features of C. Because those features are cargo features, each usage of unsafe is “tagged” with a particular cargo feature.

B itself would not contain unsafe and could even be #![forbid(unsafe_code)], but instead of being able to call any function in C which makes use of unsafe, its visibility of C (at least in the context of “interior unsafety”) would be scoped to the explicitly whitelisted features in C which A has delegated it the authority to call.

It’s true that within the scope of C, when it makes use of unsafe, it has unbounded authority. However, the idea is to catalogue and whitelist these usages, so changes to them are visible.
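
As a very rough sketch of the shape of this (with modules standing in for separate packages, and the “raw-alloc” feature name purely hypothetical):

```rust
// Rough sketch only: modules stand in for separate packages, and the
// "raw-alloc" feature name is made up for illustration.

// --- package C: each use of interior unsafety is tagged with a cargo feature
pub mod c {
    /// Only compiled, and hence only callable, for consumers that have been
    /// granted the hypothetical "raw-alloc" feature.
    #[cfg(feature = "raw-alloc")]
    pub fn alloc_page() -> *mut u8 {
        let layout = std::alloc::Layout::from_size_align(4096, 4096).unwrap();
        // The unsafety stays inside C; callers see a safe signature.
        unsafe { std::alloc::alloc(layout) }
    }
}

// --- package B: contains no unsafe itself (it could even be
// #![forbid(unsafe_code)]), and can only reach c::alloc_page if A has
// delegated the "raw-alloc" feature of C to it.
pub mod b {
    #[cfg(feature = "raw-alloc")]
    pub fn page_buffer() -> *mut u8 {
        super::c::alloc_page()
    }
}
```

In a real implementation the delegation itself would live in A’s Cargo.toml rather than in source, but hopefully this shows how each use of unsafe becomes an explicitly named thing which A can grant or withhold.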

It’s worth noting delegation is tricky to get right, and often a bit counterintuitive because most access control systems implement delegation poorly. Here’s one of my favorite papers on the matter:

http://waterken.sourceforge.net/aclsdont/current.pdf


#69

I mentioned this upthread, but right now every unsafe fn is implicitly one big unsafe block. As a result, there are fns with larger unsafe code areas than necessary; indeed, sometimes entirely safe fn bodies like Vec::set_len get treated as unsafe. These increases in unsafe code area magnify the malicious-crate risk rather significantly. RFC 2585, unsafe blocks in unsafe fns, should fix this design error.
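
To illustrate with a simplified example (not the real Vec::set_len signature):

```rust
// Today: the body of an unsafe fn is implicitly one big unsafe block,
// even when it performs no unsafe operations at all.
pub unsafe fn set_len_style(len_field: &mut usize, new_len: usize) {
    // An entirely safe assignment, yet it sits inside an implicit unsafe
    // block and inflates the area a reviewer must treat as unsafe.
    *len_field = new_len;
}

// Under RFC 2585 (the unsafe_op_in_unsafe_fn lint), unsafe operations
// inside an unsafe fn must be wrapped explicitly, shrinking the region
// that actually needs auditing:
pub unsafe fn read_first(ptr: *const u8) -> u8 {
    // Only this expression needs (and gets) an unsafe block.
    unsafe { *ptr }
}
```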


#70

So RFC 2585 would limit the amount of code that needs to be reviewed in a crate that uses unsafe, right?


#71

Not by much. The conventional unit of unsafe in Rust is the module: if a module uses an unsafe block, you must audit the entire module.

If a module has unsafe fns but does not use unsafe blocks (or have unsafe impls), then I suppose you no longer have to audit it. How could this even happen? I can think of some possibilities:

  1. (Unlikely) The unsafe fns currently delegate to safe functions, but are marked unsafe so that unsafe optimizations could be added in the future based on documented preconditions (sketched after this list).
  2. (Somewhat likely) The unsafe fns belong to traits, and the impls in this module don’t require unsafety.
  3. (Most likely) There is another module in the crate which fails to encapsulate its unsafety, and therefore should fail auditing.
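
For what it’s worth, possibility 1 would look something like this (a made-up example):

```rust
/// # Safety
///
/// `index` must be less than `slice.len()`. Today the body is entirely safe
/// and bounds-checked, but the `unsafe` marker reserves the right to switch
/// to an unchecked access later based on this documented precondition.
pub unsafe fn get_assuming_in_bounds(slice: &[u32], index: usize) -> u32 {
    slice[index]
}
```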

#72

  4. A crate only contains low-level unsafe bindings (e.g. libc). I imagine this situation is rare, and it still requires downstream crates to use unsafe blocks.

#73

I don’t think capabilities are the way to go here. For one thing, they won’t do anything as long as there are known soundness bugs in the compiler anyway. For another, Rust is not Haskell for a reason, and trying to retrofit it is going to end in tears. But it seems to me to be the wrong way to approach the problem to begin with.

What we really need is a way to make it easier for people to ensure that they are only running code that was reviewed by people they trust. Of course, even with this, there are a lot of practical difficulties and tradeoffs, so I expect the average user will not use this. But we could experiment with such a system for highly sensitive use cases.


#74

This does not scale. There’s simply no practical way to audit every single version of every single crate.

The purpose of the capability system is that crates which don’t need capabilities don’t need to be audited, so auditing can be focused only on the crates which are potentially dangerous.

It’s exactly the same principle as Rust’s existing partition between safe and unsafe code: focusing developer audits on the parts which are dangerous.

In other words, yes, you’re right that we need a trust-based system, but that’s in addition to a capability-based system.


#75

Note that there is already at least one trust-based experimental tool out there: https://github.com/dpc/crev/tree/master/cargo-crev

So it’s probably a good idea to keep this thread focused on a capability system.


closed #76

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.