Proposal to add "features" to standard library

After mastering Rust in smaller and larger hobby projects, I wanted to use Rust in production. But my boss asked me about the danger of "supply chain attacks" in Rust. Sadly, the Rust crate ecosystem is "potentially very dangerous". I haven't had any bad experience so far, but the doors are wide open for malevolent actors! A single ransomware attack is enough to make life miserable and to destroy the reputation of the Rust crate ecosystem. That bad reputation would then stick to Rust forever.

My goal here is to make it easier to review the "potentially harmful" code of all the crates in the dependency tree of my Rust project.
There can be hundreds of dependency crates. I believe it is my responsibility to manually review every single one of them, in exactly the version that cargo resolves and uses for compilation. Then, with the tool "cargo-crev", I can write a subjective opinion on why I think a crate version is safe to use in our project. Finally, I can show this list of reviews to my boss for approval.
To make this feasible, we must reduce the code to review to a minimum - just the "potentially harmful" code. Maybe this is not perfect, but it is much better than what we have today.

The situation today

The standard library is one "monolithic crate". It is used by default; there is no need to write anything in Cargo.toml. It has functions that can read/write files, send data over the network, run commands, ... All very "potentially harmful" operations, ideal for a ransomware attack.
We can only choose to disable the dependency on it with #![no_std]. That means we then use a minimal subset of "std" called "core", and we lose most of the functionality of the "std" library.
We can then add some functionality back with crates like "alloc" or "heapless" and crates for I/O operations and others.

My proposal

I would like to propose the idea of adding "features" to the standard library. Features are used widely in crates to enable or disable optional functionality and are a standard part of the Cargo build system. These features could enable or disable some "std" functions that are potentially harmful and can lead to "supply chain attacks". This would make it easier to review just the code that is "potentially harmful".
Still, we need to review every dependency crate in the dependency tree, but the amount of critical code becomes much smaller and more manageable. Tools like "cargo-crev" should then help to make these reviews available to other developers and our bosses.
I don't want to make a thorough analysis of all the use cases here. Just a simple one that is representative of the idea. I expect that Rust developers will have a lot of good ideas around this concept.
First let's solve the backward compatibility issue:
the "default" feature would have all functionality enabled, just like the "std" library does today.
The first feature is "core". Used alone, it is the same as #![no_std] today.

version = "x.y.z"
default-features = false
features = ["core"]
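A fuller sketch of where such a declaration might live in Cargo.toml. Note that "std" is not currently declarable as a regular dependency; the dependency key and the "core" feature here are hypothetical:

```toml
# Hypothetical sketch: std declared like an ordinary dependency with features.
# Neither this "std" dependency table nor the "core" feature exist today.
[dependencies]
std = { version = "x.y.z", default-features = false, features = ["core"] }
```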

File write and read

Probably the first "potentially harmful" operations to isolate from the "std" library are behind "fs_write". Without the feature "fs_write", these functions cannot be used: std::fs::File::create, std::fs::write, std::fs::create_dir, and a lot of other similar functions.
Similarly, the feature "fs_read" isolates the functions std::fs::File::open, std::fs::read, std::fs::read_to_string, std::fs::read_dir, and other similar functions.
With the feature "fs", the crate can still manipulate file objects; just the critical functions for creating, opening, reading, and writing files are not included in "fs".
For example:
We write a Rust project and add a "simple third-party dependency crate" that cannot read/write files. Our code has the responsibility to create a File object and pass it to the "third-party dependency crate" for manipulation. After that, it returns the File object to our code, where we can inspect it, handle errors, and eventually write the file. So we have full control over which files and directories are manipulated.
The dependency trees in real-life projects are much more complex than this. We cannot review the code of only the dependency that we add to our project. We must inspect all the dependencies that come with it, until we get down to the "std" library. All the dependencies down the tree are "potentially harmful".

In the Cargo.toml of the simple library crate, we can enable these features when appropriate:

version = "x.y.z"
default-features = false
features = ["core", "fs", "fs_read", "fs_write"]

Other features

All the operations in "std" that are "potentially harmful" should be isolated behind a feature: fs read/write, network I/O, environment access, std::process::Command, ...
Further, things like FFI to external C code, asm code, and even the use of "unsafe" should also be enabled with some kind of "feature".
These features apply only to the direct dependency on the "std" library, for every crate separately. There is no need to complicate things by making these features transitive through the dependency tree.

Code Reviews

The main goal of these features is to simplify the manual review of every dependency crate we use. It is a colossal task, but it could be manageable with the help of tools like "cargo-crev".
We can use "cargo tree" to find exactly the versions of all the crates we depend on.
First, we review the "leaves of the tree": crates that have only the "std" dependency. We just look at the enabled features in Cargo.toml. If no "potentially harmful" feature is enabled on the "std" dependency, we can be confident that the crate cannot do any harm.
When a "potentially harmful" feature is enabled, we can thoroughly inspect only the code that uses that feature. This is easy to find: if we disable the feature in Cargo.toml, the compiler will show errors wherever the feature is used.
Second, we go to the "caller crates" and inspect how they use the "std" library. Then we inspect their calls to the "potentially harmful" functions of the dependencies that we already know. And so on. Not easy, but possible.
I am sure that, with time, good crate design will isolate the "potentially harmful" code in dedicated, thoroughly commented modules to make reviews easier. It is in the interest of the crate author to make the review easier and boost confidence in the crate's security.
I think there is no alternative to manually reviewing third-party code in the open-source community. It must be done for every crate version we use. Even a simple small "security update" of a mini crate can be a surprise "malicious" version. Who knows?
When any dependency needs to increment its version for any reason, it must first be manually reviewed. Automated dependency updates by CI systems are super dangerous and should be avoided. Instead, we need a security warning from tools like RustSec! Then we review the code of the new version and finally update the dependency manually in our code.


To be productive, we need to use third-party crates in Rust. It is not realistic to write everything in-house. The standard library is really small, and this is on purpose. But it is impossible to review all the code of all the crates in search of a few lines of "potentially harmful" code.
There must be a better way to reduce the amount of code that needs to be inspected. One step that Rust has already made in this direction is the introduction of "unsafe" blocks. They are really easy to find and inspect for soundness. This is great for memory corruption! But what is more dangerous for "supply chain attacks": memory corruption in "unsafe", or a simple file write that encrypts all of your documents and asks for a ransom?
The latter can be difficult to find, isolate, and inspect. If only these "potentially harmful" operations could be enabled/disabled with a "feature" in the standard library.
We are also worried about other "potentially dangerous" aspects of Rust, like the fact that procedural macros can run arbitrary code on the development system at build time, and even at edit time with the use of "rust analyzer" or similar tools. We really don't feel safe about the third-party crate ecosystem of Rust, and that must be addressed.


In the past years I have read a lot of discussions around this topic in Rust.
Sadly, I didn't find anything moving in the right direction. If I am wrong, please inform me.
I found a lot of skepticism and negativity, with little constructive effort or signs of approval:
"This is not a perfect solution, than it makes no sense to do anything.",
"Nobody had this problem till now, don't panic around nothing",
"This is bad and will just make a false sense of security",
"Don't demonize the unsafe, some unsafe is sound",
"Other languages had tried and failed",
"Just use a sandbox like WebAssembly engine or Docker",
"If I add a dependency to my code, I don't want to check all transitive dependencies",
"The responsibility to review the dependency code is on the author of the library, not me",
"Transitive dependencies need a super complicated effects system or Capability-based security",
"The burden on the author of the library crate is too big",
"The quantity of strangeness in Rust is high, this will add strangeness and break the adoption for new developers",
"There are much more urgent and important things to be done for Rust",
"It is impossible, just stop thinking about it", ...

Here is a good discussion with a lot of links to other discussions on this topic:


There's a plan to make std customizable with features, but this will not change Rust's trust/security model, which can't guarantee what you'd like.


You can't sandbox Rust code without OS-level sandboxes, even if you prevent usage of unsafe code. There are various soundness issues in the compiler making it possible to circumvent these kinds of restrictions. Also note that both Java and C# gave up on providing a way to restrict what specific pieces of code could do, due to having too many bugs. The only reliable ways to isolate code are OS sandboxes and wasm.

Edit: I see you already noted this in the past discussion section.


While I certainly support features that reduce the surface for supply chain attacks, I don't think this feature will help.

Like already said by @bjorn3, std code is not the only way to cause harm. Even without unsafe, the code can have undefined behavior. Also, unsafe is not local: an innocent-looking unsafe block can cause harm in a far-away place.

To repeat, Rust is not a sandbox language. It was never meant to be, and if we'd tried, we would have failed. Rust is designed to save you from yourself - not from an attacker.


Some previous discussion of ideas along these lines:

My take on this overall idea, which is similar to yours LucianoBestia (I called mine "unsafe features"):


I think it should be handled not by features, but by splitting std into a number of "standard" crates. Here is an old thread which discusses such an approach:

Though I must say that the security motivation looks quite weak to me. As already noted, malicious code can easily circumvent such "protection".


Note that features don't even do what you want. Even if I only enable feature "a" from a crate, I can still use functionality gated by feature "b" if any package in the tree enables that feature gate.

What you want is a novel (to Rust) feature called capabilities. These do make review easier! But due to the aforementioned issues, they shouldn't be trusted 100%; they're just to make review easier and accidentally doing something outside of what you're supposed to have permission to harder.

My answer to anyone asking for this level of review for Rust is "What's the current practice?" The benefit that Rust claims to provide isn't perfection, it's an incremental improvement over the status quo. Do you implicitly trust libc and the STL already? Then it isn't a stretch that Rust libstd should have that implicit trust.

Adopting Rust is a benefit if it improves the code trust situation. It doesn't have to fix it, just make it easier to be better than the current solution.


The crate cap-std "Capability-based version of the Rust standard library" by the Bytecode Alliance is interesting and promising!

The Bytecode Alliance is a nonprofit organization dedicated to creating secure new software foundations, building on standards such as WebAssembly and WebAssembly System Interface (WASI).

I hope it will not be limited only for WebAssembly, but useful also for other targets.


Cap-std works outside of wasm. In fact, wasmtime uses the cap-* family of crates to implement large parts of WASI.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.