Not sure how much of it is currently implemented. I suppose I meant those would be implementations related to rustc. It remains to be seen how high a priority memory-protection features should be. The problem is that embedded does not, and likely will not, support ASLR, DEP, or many other mitigations. The same goes for IoT. This was an earlier post on some of it:
It’s certainly possible to program with 100% memory safety in C/C++, the same way that it’s possible to do a skyscraper tightrope walk. Whether one should attempt it is another question. In Rust’s case, though, proper Unsafe Coding Guidelines can ensure that the tightrope comes with a handrail. Also, unsafe marks exactly where code auditing should be focused, which makes auditing easier for that reason alone. Lastly, targeted testing and fuzzing of crates allows automated discovery of memory problems in the code. If unsafe does not occur too many times in a crate, and with enough added work on these tools, one can be reasonably assured that test-fuzzing will hit all the possible code paths.
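To make the audit-surface point concrete, here is a minimal sketch (the names are mine, not from any particular crate) of the usual pattern: a safe wrapper whose single unsafe block is the only place a reviewer, or a fuzzer hammering the public API, needs to scrutinise:

```rust
/// Returns the element at `idx`, or `None` if out of bounds.
/// The entire unsafe surface of this function is one block with a
/// documented safety argument - that is what an auditor reviews.
fn checked_get(data: &[u8], idx: usize) -> Option<u8> {
    if idx < data.len() {
        // SAFETY: `idx < data.len()` was checked on the line above,
        // so the unchecked access cannot read out of bounds.
        Some(unsafe { *data.get_unchecked(idx) })
    } else {
        None
    }
}

fn main() {
    let v = [10u8, 20, 30];
    assert_eq!(checked_get(&v, 1), Some(20));
    assert_eq!(checked_get(&v, 3), None);
    println!("ok");
}
```

A fuzz target would then feed arbitrary `(data, idx)` pairs at `checked_get` and rely on the sanitizers to catch any violation of that safety comment.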
If everyone had read and remembered (https://www.amazon.com/Art-Software-Security-Assessment-Vulnerabilities/dp/0321444426), C/C++ would be much safer. The problem is that it takes two-plus months of full-time reading and practising to meaningfully work through those books. (I tried doing it in three weeks - I didn’t get far enough.) So almost no-one in industry who isn’t paid for that sort of thing is going to work through them. That means unsafe should be a last resort - and crates.io should automatically detect and flag indiscriminate use of unsafe, so such code can be corrected early on. The idea is that rustc+cargo should keep programmers from accidentally walking into the ‘here be dragons’ parts of coding unless someone intentionally chooses to jump over the handrail - just like the borrow checker does. This implies work needs to be done to separate, and warn/error on, ‘unsafe unsafe’ as opposed to ‘safe unsafe’. DEP/ASLR etc. should then really be the second tier of defence, since needing them means the attackers have already broken the front door down. But on Windows/Linux such second-tier mitigations should still be used wherever possible.
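As a starting point for that handrail, rustc already ships a built-in lint that turns any use of unsafe in a crate into a hard error; a crates.io-side flag could be sketched along the same lines (tools such as cargo-geiger already count unsafe occurrences per crate). A minimal sketch:

```rust
// Forbid all unsafe code in this crate at compile time: any unsafe
// block, function, or impl becomes a hard error instead of something
// an auditor has to hunt for.
#![forbid(unsafe_code)]

// An ordinary safe function; the lint guarantees the whole crate
// stays unsafe-free.
fn checked_len(s: &str) -> usize {
    s.len()
}

fn main() {
    assert_eq!(checked_len("dragons"), 7);
    // Uncommenting the next line would fail to compile under the lint:
    // unsafe { std::ptr::null::<u8>().read(); }
    println!("crate contains no unsafe code");
}
```

The harder ‘unsafe unsafe’ vs ‘safe unsafe’ distinction is exactly what this all-or-nothing lint cannot express today, which is where the extra tooling work would come in.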
I like that statement. I was continuing a point from my earlier posts: traditionally, far too much effort is expended on a handful of theoretical security features, while a whole raft of previously invented features is left unimplemented or not easily implementable. And this is coming from someone who’s spent disproportionate effort pursuing quantum-cryptography studies, mind you. So call it speaking from the far side of personal experience.