Crate capability lists

Yes, it is more bulletproof and simpler, but I think it is still too big a restriction. If we cannot implement the more complex version of the checker, then that's a big problem. The way I see it, either it is possible to implement this checker and we should do it, or it's literally not possible (for reasons) and we are doomed to build a system on human auditors and trust networks. I'll be honest though: I am not perfectly familiar with the exact limitations of const fns (aren't they constantly changing?), but there is no way that most crates can be implemented in const fns. Do we have statistics about this? I don't know what the average crate is using. Are most of them wrappers around unsafe stuff? Maybe we should start with getting statistics. Maybe this idea is infeasible because the average crate is using at least one unsafe block and a C library, so there's nothing to gain from an automatic checker.
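
As a starting point for such statistics, here is a rough sketch that counts unsafe blocks and unsafe fns in a single source file using the syn crate (assuming syn 1.x with its "full" and "visit" features enabled); crawling all of crates.io and detecting linked C libraries is out of scope here, and the file path is just a placeholder:

```rust
// Cargo.toml: syn = { version = "1", features = ["full", "visit"] }
use syn::visit::Visit;

#[derive(Default)]
struct UnsafeCounter {
    unsafe_blocks: usize,
    unsafe_fns: usize,
}

impl<'ast> Visit<'ast> for UnsafeCounter {
    // Called for every `unsafe { ... }` block expression.
    fn visit_expr_unsafe(&mut self, node: &'ast syn::ExprUnsafe) {
        self.unsafe_blocks += 1;
        syn::visit::visit_expr_unsafe(self, node);
    }
    // Called for every free `fn` item; check for the `unsafe` qualifier.
    fn visit_item_fn(&mut self, node: &'ast syn::ItemFn) {
        if node.sig.unsafety.is_some() {
            self.unsafe_fns += 1;
        }
        syn::visit::visit_item_fn(self, node);
    }
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder path: a real survey would walk every file of every crate.
    let src = std::fs::read_to_string("src/lib.rs")?;
    let file = syn::parse_file(&src)?;
    let mut counter = UnsafeCounter::default();
    counter.visit_file(&file);
    println!(
        "unsafe blocks: {}, unsafe fns: {}",
        counter.unsafe_blocks, counter.unsafe_fns
    );
    Ok(())
}
```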

Basically, everything that isn't a) unsafe, b) random, or c) environment-dependent should eventually be able to be const.

const is roughly a conservative determinism guarantee. If something can be computed at compile time and guaranteed to have the exact same result as at runtime, it can (theoretically) be const. (This eliminates anything that touches the file system, @Soni, as the file system can be different at compile time and at run time.)

const is similar to unsafe in that way; the restriction of const is that you can only call other const things. Even allocation is theoretically const-safe, at least if only immutable references "escape".

For now it's very conservative. But hopefully, if you can do it in pure Haskell, you can do it in const Rust.
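
To make the intuition concrete, here is a minimal sketch (my own example, not from the proposal) of the kind of function that passes this conservative determinism test on stable Rust today:

```rust
// A function whose result cannot depend on the environment: same answer at
// compile time and at run time, so it is allowed to be const.
const fn gcd(mut a: u64, mut b: u64) -> u64 {
    while b != 0 {
        let t = a % b;
        a = b;
        b = t;
    }
    a
}

// Evaluated entirely at compile time.
const G: u64 = gcd(48, 36);

fn main() {
    // The same function works at run time, with an identical result.
    assert_eq!(gcd(48, 36), G);
    // By contrast, std::fs::read("config.toml") or a random number generator
    // could never be const: the file system and RNG can differ between
    // compile time and run time, so the determinism guarantee would break.
}
```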

I think if this proposal were fleshed out a little more, we could reconcile these concerns. Namely, I suggested unsafe-features would be off by default in my proposal. What if that were only the case if allow-unsafe = false or unsafe-features were present for that crate, and if they weren't, all unsafe-features were on by default? (Or perhaps default unsafe-features would only be honored in the event that allow-unsafe = false or unsafe-features weren't in use.)

Furthermore, since this proposal is built on conditional compilation, crate authors could even include a safe fallback for users of allow-unsafe, with the option to opt in to the additional performance using unsafe-features (see the sketch after this list). I think this approach could give consumers of crates choices around security vs. performance:

  • Nothing would change for projects which don't choose to make use of the allow-unsafe or unsafe-features attributes. They'd automatically get opted into the unsafe performance optimizations.
  • Users of allow-unsafe could get a less performant, safe fallback.
  • If the additional performance is desired, unsafe-features can be used instead of allow-unsafe to opt in to the unsafe optimization.
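
To illustrate the fallback idea mentioned above, here is a sketch of what a crate author might write; the feature name unsafe-fast-path is made up for illustration, since the proposal doesn't fix an exact syntax:

```rust
// Hypothetical: `unsafe-fast-path` would be declared as one of the crate's
// unsafe-features in Cargo.toml under the proposal; the name is invented.

/// Fast path: only compiled in when the consumer opts in via unsafe-features.
#[cfg(feature = "unsafe-fast-path")]
pub fn sum(values: &[u64]) -> u64 {
    let mut total = 0u64;
    let mut i = 0;
    while i < values.len() {
        // SAFETY: `i` is always in bounds of `values` here.
        total = total.wrapping_add(unsafe { *values.get_unchecked(i) });
        i += 1;
    }
    total
}

/// Safe fallback: what consumers building with allow-unsafe = false would get.
#[cfg(not(feature = "unsafe-fast-path"))]
pub fn sum(values: &[u64]) -> u64 {
    values.iter().copied().fold(0, u64::wrapping_add)
}
```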

I imagine things like [patch] could even be used to globally enable certain unsafe-features for all crates.

Overall I would expect a feature like this wouldn't be commonly used, and would require transitive adoption and support on a crate-by-crate basis in a dependency hierarchy. In that regard I think it's a bit like #![no_std]: a feature which is extremely useful to a subset of the community (people using Rust in any sort of "high assurance" capacity) and which can be largely ignored by the rest.

I discussed this in the "Tough questions" section of my proposal, and suggested that std could be allowed to "bless" certain unsafe code which does not provide access to ambient authority or cause potentially security-critical side effects.

Though I didn't explicitly call it out as such, I think allocators would belong to this category.

I don't think the allow-unsafe = false approach or a list of crate capabilities will be practical on their own, mostly because of their transitive nature. I believe we need review infrastructure, so that we can configure our builds with conditions like "allow unsafe only in crate versions reviewed by group X", "allow network/file IO only for crate versions reviewed by Y or whitelisted by me", etc. This configuration should probably be local, i.e. it should not influence the build process of downstream crates.
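
To make the shape of such a configuration concrete, here is a rough sketch (entirely hypothetical; cargo has no such policy mechanism today, and all names are invented) of the per-crate-version decision a build tool would have to make under those conditions:

```rust
use std::collections::HashSet;

// Facts a tool could extract (or fetch) about one crate version.
struct CrateVersion {
    name: String,
    uses_unsafe: bool,
    does_network_or_file_io: bool,
}

// The local, per-project policy described above.
struct Policy {
    unsafe_reviewers: HashSet<String>, // "allow unsafe only if reviewed by group X"
    io_reviewers: HashSet<String>,     // "allow network/file IO only if reviewed by Y..."
    io_whitelist: HashSet<String>,     // "...or whitelisted by me"
}

fn version_allowed(
    krate: &CrateVersion,
    reviewed_by: &HashSet<String>, // groups that have reviewed this exact version
    policy: &Policy,
) -> bool {
    let unsafe_ok =
        !krate.uses_unsafe || !reviewed_by.is_disjoint(&policy.unsafe_reviewers);
    let io_ok = !krate.does_network_or_file_io
        || policy.io_whitelist.contains(&krate.name)
        || !reviewed_by.is_disjoint(&policy.io_reviewers);
    unsafe_ok && io_ok
}

fn main() {
    let policy = Policy {
        unsafe_reviewers: ["group-x".to_string()].into_iter().collect(),
        io_reviewers: ["group-y".to_string()].into_iter().collect(),
        io_whitelist: HashSet::new(),
    };
    let krate = CrateVersion {
        name: "some-sys-crate".to_string(),
        uses_unsafe: true,
        does_network_or_file_io: false,
    };
    // Reviewed by group-x, so the unsafe condition is satisfied.
    let reviews: HashSet<String> = ["group-x".to_string()].into_iter().collect();
    assert!(version_allowed(&krate, &reviews, &policy));
}
```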

Doesn't this essentially mean that group X is the maintainer of the project? If I simply do not use code produced by the original maintainer, then group X is the maintainer, at least for me. I don't think this approach scales either.

No, X can be an organization which reviews crates across the whole ecosystem (e.g. by selecting the most important crates which use unsafe). The main tasks for people in this organization will be reviewing published crates, not writing code or fixing issues, and they will not have any crate publishing rights, so it's a bit strange to call them "maintainers". If, for example, a maintainer publishes a patch update, cargo update will not switch to it until it has been reviewed by X.

The only solution that will actually work must be tool-driven and automatic; otherwise someone has to manually check every single version of every crate. I think it is the same deal as the memory safety issues Rust is supposed to solve: we thought for a while that we could solve them by simply asking a large group of people to review everything, and it didn't seem to work. Of course the automatic tool cannot possibly tell which change is harmless and which is tricky, but it should be able to recognize the trivially, provably safe ones, so only the tricky changes must be manually reviewed, similar to the case of the borrow checker…

This idea (in broad strokes) came up on the kickoff calls we had for the Secure Code WG. I opened an issue about it here:

I don't want to hijack, but since you're considering tracing a bunch of package metadata, it seems like one of the 'capabilities' you might want to track is software license.

In effect, if you use a package with a more restrictive (viral) license than the one you presently have, or a dependency switches to a license that is more restrictive, people who use Rust for production software would probably want to know that.

It's possible this should be a separate item, it just triggered my memory as I was reading the proposal and the notion of 'trust' above.

I am not too familiar with the details of software licenses; is there some kind of ordering in the world of licenses that a tool could follow? Other than listing all licenses (from the whole dependency tree), what could a tool do to help the developer?

There are many licenses that are "compatible" or "incompatible" with other licenses, many projects (like Rust) that allow users to choose one of multiple licenses, and some licenses that impose additional requirements you have to actively keep track of. I doubt it forms anything as mathematically clean as a strict partial order, but there's plenty that an automated tool could theoretically help with.

Though that's very much a language-agnostic problem, and it applies over FFI boundaries, so there's an argument that a language-specific package manager like cargo might not be the right place to try solving it.
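
As a toy illustration of what such a tool could check (a deliberate oversimplification encoding only a few commonly cited compatibility facts, and certainly not legal advice), note that the relation being checked is directional rather than an ordering:

```rust
// Toy example: a directional "may this project depend on that license?"
// relation over SPDX-style identifiers. Real compatibility is far more
// nuanced; this only illustrates the shape of the tooling problem.

fn may_depend_on(project_license: &str, dep_license: &str) -> bool {
    match (project_license, dep_license) {
        // Permissive dependencies are commonly usable from anything.
        (_, "MIT") | (_, "BSD-3-Clause") => true,
        // Copyleft projects can absorb same-license code...
        ("GPL-3.0-only", "GPL-3.0-only") => true,
        // ...and GPLv3 projects can use Apache-2.0 code, though not vice
        // versa, which is why this is not a symmetric relation or a simple
        // order.
        ("GPL-3.0-only", "Apache-2.0") => true,
        // Everything else is flagged for a human to decide.
        _ => false,
    }
}

fn main() {
    // A permissive project pulling in a copyleft dependency is exactly the
    // "more restrictive (viral) license" situation mentioned above.
    assert!(!may_depend_on("MIT", "GPL-3.0-only"));
    assert!(may_depend_on("GPL-3.0-only", "Apache-2.0"));
}
```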

There was a related discussion on u.r-l.o a few days ago. A sample script was posted that approximates the kind of dependency-tree license checking being discussed here.
