I assure you the spirit of the proposal is to be neither unfriendly nor antagonistic to the health of the ecosystem, though we can disagree about whether it would, in practice, help or hurt it.
My position is that, given how deep dependency graphs already are, it's often unreasonable to actually audit every single crate I depend on. If I care about ensuring type safety (for security or reliability reasons, for example), that just means I'm not going to use anything from crates.io.
For me, this isn't theoretical; it's the reason we don't use crates.io in Tock (yet, anyway). So from my perspective, not having a mechanism to address this is harmful to the ecosystem.
Of course, that could be too special a case to solve for generally, and I don't think it's an unreasonable conclusion to say that this feature is better off overall in, say, a fork of cargo. But anyway, that's my justification for the proposal being in service of the ecosystem rather than hostile to it.
I wonder if some of the other forms of the proposal mitigate this concern at all. For example, @lxrec suggested having an explicit opt-in in the top-level crates being compiled (`allow_unsafe_default = false`), which would (or could?) mean that if the end user doesn't explicitly opt in, there would be zero impact even if some crate along the dependency chain uses it.
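To make the shape of that concrete, here is a rough sketch of how such an opt-in might look in the top-level crate's manifest. The key name is the one floated above; where it would live and what the exact syntax would be are pure guesses on my part (today Cargo would reject an unknown key like this):

```toml
# Hypothetical Cargo.toml for the top-level (binary) crate.
[package]
name = "my-app"
version = "0.1.0"

# Not real Cargo syntax: `allow_unsafe_default` is the key name suggested in
# this thread. Setting it to false would mean the crate's dependencies get
# built with `unsafe` forbidden; leaving it unset would change nothing.
allow_unsafe_default = false

[dependencies]
some-dependency = "1.0"
```

Since the default would remain `true`, a dependency tree whose top-level crate never sets this would build exactly as it does today.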
FWIW, in Safe Haskell I don't think it really has this effect (although Safe Haskell has module-level granularity, so a library author can declare certain parts trusted or safe). In practice, what happened there is that the vast majority of Hackage packages were already safe, but some packages (vector, aeson, binary, bytestring, wai, etc.) are not, for performance or ergonomic reasons, and they are simply marked as Trustworthy. Perhaps it would be helpful to empirically assess what that would look like for crates.io as well? That kind of comes down to whether this is completely an objection in principle or a pragmatic one...
I also wonder if @Nemo157's suggestion of having a way for dependent crates to detect whether they are compiled with `unsafe` allowed would mitigate this concern. For example, if you are making a non-breaking change that uses `unsafe` as a performance improvement, you could simply gate that change on whether you're "allowed" to use `unsafe`.
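As a rough sketch of that idea: assuming the mechanism surfaced as a `cfg` flag (I'm calling it `unsafe_allowed` here purely for illustration; no such flag exists today), a library could keep a safe fallback alongside the faster `unsafe` path:

```rust
// Hypothetical: `unsafe_allowed` is not a real cfg flag today; it stands in
// for whatever signal @Nemo157's suggestion would expose to library code.

/// Internal helper: the bytes have already been validated as UTF-8 upstream.
#[cfg(unsafe_allowed)]
fn bytes_to_str(validated: &[u8]) -> &str {
    // SAFETY: `validated` was checked to be valid UTF-8 before reaching here,
    // so skipping revalidation is sound and saves a pass over the data.
    unsafe { std::str::from_utf8_unchecked(validated) }
}

/// Same API when `unsafe` is not allowed: revalidate instead of trusting the
/// earlier check, trading a little speed for a fully safe build.
#[cfg(not(unsafe_allowed))]
fn bytes_to_str(validated: &[u8]) -> &str {
    std::str::from_utf8(validated).expect("input was validated as UTF-8 upstream")
}
```

The point is that the change stays non-breaking either way: the API is identical, and only the internals differ depending on whether the end user allowed `unsafe` in this dependency.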
(I'm also realizing that maybe the fact that we're using "allow" here is triggering negative reactions, because of course as a library author you can use whatever you want; it's just that if I want to depend on your library without trusting it, that would only be possible if your library compiles with `-F unsafe_code`.)
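For readers less familiar with the lint machinery: `-F unsafe_code` is the command-line form of forbidding the `unsafe_code` lint, i.e. the same thing a crate can already do to itself in its root today, and it only applies per crate. The proposal is essentially about a downstream user being able to impose that on dependencies they don't trust. A sketch of the status quo:

```rust
// Crate root (lib.rs) of a library that promises to stay free of `unsafe`.
// This is the attribute form of `rustc -F unsafe_code`, and it applies only
// to this crate, not to anything it depends on.
#![forbid(unsafe_code)]

pub fn double_all(xs: &mut [u32]) {
    for x in xs.iter_mut() {
        *x *= 2;
    }
}

// With the attribute above, any `unsafe` block or function added anywhere in
// this crate is a compile error, but a dependency of this crate can still use
// `unsafe` freely.
```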
I completely agree. This proposal only practically works if `unsafe` is used sparingly (I believe it is, but am not sure). Though I think having some technical support to help library users figure out which dependencies need the most auditing is complementary to reducing the amount of `unsafe` code.
If I'm writing C, I basically need to be fairly confident about every line of code I compile into my binary (whether I or someone else wrote it). Part of the goal of a strong type system (different people have different justifications, but I think this is one of them) is to mitigate that. If I have no idea whether a dependency is using `unsafe`, that subverts this goal.