There is “Safe Haskell”. Using the inherent purity of the language, we can limit access at the API level to be sure that an untrusted module can’t do whatever it pleases (short of DoS by resource exhaustion). Such a scheme can be useful when fast, optimized, yet untrusted plugins are wanted.
While Rust does not have “inherent purity”, it has “inherent memory safety”. So, similarly, by whitelisting the API modules a module may use and forbidding `unsafe` blocks, one could provide a security level similar to Safe Haskell or to Java sandboxes.
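Half of this already exists in today's Rust: the `unsafe_code` lint can be set to `forbid`, turning any `unsafe` block into a hard compile error. A minimal sketch (the module-whitelisting half is the part that would be new):

```rust
// Untrusted code can be confined to a module where `unsafe` is a hard
// compile error via the existing `unsafe_code` lint:
#[forbid(unsafe_code)]
mod untrusted {
    // Adding an `unsafe { ... }` block anywhere in this module would
    // make compilation fail; only safe, checked APIs are usable here.
    pub fn sum(xs: &[i32]) -> i32 {
        xs.iter().sum()
    }
}

fn main() {
    assert_eq!(untrusted::sum(&[1, 2, 3]), 6);
}
```

Note that `forbid` (unlike `deny`) cannot be overridden by inner `#[allow]` attributes, which is exactly the property a sandbox would need.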
In Safe Haskell there are three trust levels for modules: Safe, Trustworthy, and Unsafe.
“Default/undecided” modules are automatically inferred as Safe if they don’t use any unsafe features and depend only on Safe or Trustworthy modules. Which Trustworthy modules are actually trusted is decided by the project, not hardcoded into the language.
To make Safe Rust, rustc should be able to:
- compile “evil” code without triggering arbitrary behaviour at compilation time (procedural macros and `build.rs` scripts run arbitrary code today);
- bar “unsafe” blocks everywhere except in some explicitly trustworthy modules;
- generate code that cannot trigger arbitrary behaviour, except through a “Trustworthy” module somewhere in the stack.
In general, the “trustworthy” module[s] should “monopolize” the application’s access to actual syscalls and raw memory operations.
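A hedged sketch of what this “monopoly” could look like (module and function names are hypothetical, chosen just for illustration): a single trustworthy module owns all `unsafe`, and untrusted code only ever sees its safe, checked API:

```rust
// Hypothetical layout: `trustworthy_mem` would be the only module where
// `unsafe` is permitted; everything else compiles as "Safe Rust".
mod trustworthy_mem {
    /// Safe wrapper: checks bounds before touching raw memory.
    pub fn read_first(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            None
        } else {
            // SAFETY: emptiness was checked above, so the pointer is
            // valid for a one-byte read.
            Some(unsafe { *bytes.as_ptr() })
        }
    }
}

// "Untrusted plugin" code: contains no `unsafe` and can reach raw
// memory only through the whitelisted API above.
fn untrusted_plugin(data: &[u8]) -> u8 {
    trustworthy_mem::read_first(data).unwrap_or(0)
}

fn main() {
    assert_eq!(untrusted_plugin(&[42, 1, 2]), 42);
    assert_eq!(untrusted_plugin(&[]), 0);
}
```

This mirrors the Safe Haskell division of labour: the trustworthy module makes a safety promise the compiler can’t verify, and everything outside it is mechanically checkable.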
- Is such a scheme viable for Rust? Are there architectural obstacles that would prevent it from being implemented someday?
- Is the ability to implement Safe Rust in the future worth considering when moving Rust forward today?
- Shall I create the respective feature request issue on GitHub?