Another example of a Rust user being bitten by “unoptimized by default.”
There are also people complaining about long compile times, and those complaints might increase if we change to “optimized by default”.
Thanks for the example, @Valloric. I added it to the Cargo issue.
An alternative (for Cargo users anyway) would be to nag users when running “cargo build” that they are building an unoptimized binary and suggest using the “--release” flag before testing performance. There would also be a message indicating a cargo command to run that will disable the nag (by writing to the Cargo config file).
@tomjakubowski Yep, I created a Cargo issue to do that. If you agree with it, please comment on it and +1 it.
I’ll admit that I myself got bit by debug-by-default when I first got started with Rust, but that was before cargo was a thing. cargo bench implies optimizations, and therefore would have protected me from my mistake.
As such I’m a bit torn. I feel like there’s a reasonable motivation to be had on both sides. I guess lacking optimization is a bigger foot-gun than having optimization when you don’t care about it. Although missing debug asserts or debug info is presumably not great?
People evaluating Rust are unlikely to know about cargo; even if they do, they won't know about cargo bench.
If you compile without debug symbols and need them, you'll immediately notice you don't have them. Since the user explicitly needs debug symbols, they'll know they need to look at how to get a debug build.
If you need perf, you'll just notice that "Rust is slow." If you come from any of the Big 5 languages I listed previously (and you are certainly likely to), the thought that a magic compiler flag is needed to get any real perf will never even cross your mind.
Debug-by-default is just a poor design, period. It's a footgun of massive proportions. Rust is already breaking with programming convention in a myriad ways to eliminate footguns (which is excellent); it should break with convention here as well.
Let's not leave footguns in place because of obsolete conventions created by backwards compatibility decisions made 40 years ago.
I don't think so; Cargo is advertised all over the tutorials, and it's usually even assumed that you want it over plain rustc.
It's not a poor design, "period". Please be willing to listen to counter-arguments.
Sorry for the triple reply, but this particular sentence is just rhetoric; let's leave that behind and focus on arguments only. Thanks.
+1 for enabling optimisations by default.
Especially when, for any project bigger than one file, there is Cargo. In most cases when rustc is run “manually” it is for learning, micro-benchmarking or a PoC. Anyone who works on the third option seriously doesn’t give a s… about performance, but the first two are the most important for us. IMHO most micro-benchmarks will be written by haters and others who only want to show that “Rust is bad”, and someone who is still learning will notice when (s)he cannot debug the program and will add the appropriate flags (or at least will ask how to debug rather than say that “Rust is s… because it runs 99999x slower than XYZ”).
You won’t know if you’re missing debug_asserts, but I suppose those are more important in a language like C++ where so much is UB, and debug_asserts really save you. In Rust most of our debug_asserts are internal sanity checks against the implementation, and not things that are actually guarding against bad user code.
I have been hoping to toss some debug_asserts in some of our unsafe APIs, though. e.g. maybe get_unchecked is still checked in debug builds. Using unsafe code should be considered moderately “advanced” though.
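To illustrate what I mean, here’s a rough sketch of the kind of check I have in mind (the function is made up for illustration, not the actual std implementation of get_unchecked):

```rust
// Hypothetical unchecked accessor that still validates its precondition
// when debug assertions are enabled. Illustrative only.
unsafe fn get_unchecked_demo<T>(slice: &[T], index: usize) -> &T {
    // Compiled out entirely when debug assertions are off, so release
    // builds pay nothing; development builds catch misuse immediately.
    debug_assert!(
        index < slice.len(),
        "index {} out of bounds (len {})",
        index,
        slice.len()
    );
    unsafe { &*slice.as_ptr().add(index) }
}

fn main() {
    let v = [10, 20, 30];
    // A sound call: the index is in bounds.
    let x = unsafe { get_unchecked_demo(&v, 1) };
    println!("{}", x);
}
```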
Although debug_asserts are actually disjoint from optimization. They’re tied to the ndebug flag, so you can compile at O3 with ndebug=false (which I believe is basically what the nightlies do?). Presumably you are proposing passing ndebug=true by default as well?
There’s a big difference between a release build, and just building with optimizations. I do believe building with optimizations by default is the ideal behavior, but those debug asserts really ought not go away until the final build that will be distributed. Tests should still run on debug builds; those asserts are what find your bugs!
I agree that debug/release isn’t a strict binary. Optimizing with assertions on seems like a good default behavior. Full release builds are for benchmarking and deployment.
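As a small sketch of what “optimizing with assertions on” buys you (just an illustration, not any existing profile):

```rust
// debug_assert! is keyed off the `debug_assertions` cfg, not the
// optimization level, so an optimized build can still keep these checks.
fn halve(n: u32) -> u32 {
    debug_assert!(n % 2 == 0, "halve() expects an even number, got {}", n);
    n / 2
}

fn main() {
    // With debug assertions on, this panics and points at the bad call;
    // with them off, the check vanishes and 3 is silently returned.
    println!("{}", halve(7));
}
```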
@Gankro, speaking of ndebug, do you know why Rust kept the same strange negated boolean name as C++? debug=true or debug=false seems so much simpler.
Compiler flags are totally adhoc right now, as far as I know. Haven’t been stabilized/reviewed at all.
Thanks. I’ll take a look and see if there’s already an issue for that. I’d like to get into the Rust code, and that seems like it would be an easy task.
Another thing still to consider is the integer overflow behavior. Currently (before 1.0) the plan is to have integer overflow and underflow detection only in “debug” builds. If we change the default to optimize but keep the debug behavior, does the overflow checking stay on? It seems reasonable that it should, but the generated checks can prevent a lot of peephole optimizations due to the extra branch. It would definitely hurt the performance of numerics-heavy code.
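For a concrete picture of the difference (a minimal sketch, assuming the pre-1.0 plan of overflow checks only in debug-style builds):

```rust
fn main() {
    // Parse at runtime so the compiler can't reject the overflow statically.
    let x: u8 = "255".parse().unwrap();

    // With overflow checks enabled this panics at runtime;
    // with the checks compiled out it would wrap around to 0.
    let y = x + 1;
    println!("255 + 1 = {}", y);

    // Code that actually wants wrapping can opt in explicitly in any build:
    println!("wrapping: {}", x.wrapping_add(1));
}
```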
There is a great document somewhere describing the different build types, the C committee’s opinion on them, and why NDEBUG is the correct choice, but I can’t seem to find it right now.
#ifndef NDEBUG ... is used in C because it means that if there is a typo in the macro name or somebody forgot about it, the section is turned on (which usually means assertions are enabled). The first reason doesn’t apply to Rust code because the compiler will tell you if the macro names are wrong.
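A tiny example of that point, for comparison with the C pattern:

```rust
fn main() {
    // The debug/release switch is reached through a macro, so a typo is a
    // hard compile error rather than a silently skipped (or enabled) block.
    debug_assert!(1 + 1 == 2); // compiled out when debug assertions are off
    // debug_asert!(1 + 1 == 2); // typo: fails to compile ("cannot find macro")
}
```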
Optimisation levels: are there only two (on/off)? GCC supports many (extracts from man page):
With -O, the compiler tries to reduce code size and execution time, without performing any optimizations that take a great deal of compilation time.
-O2 Optimize even more. GCC performs nearly all supported optimizations that do not involve a space-speed tradeoff. As compared to -O, this option increases both compilation time and the performance of the generated code.
-O3 Optimize yet more.
-O0 Reduce compilation time and make debugging produce the expected results. This is the default.
-Os Optimize for size. -Os enables all -O2 optimizations that do not typically increase code size. It also performs further optimizations designed to reduce code size.
As noted here, disabling optimisations can be useful to reduce compilation time and to make code behave in a more expected manner when run in a debugger. This implies that under specific circumstances disabling optimisations is useful, but not that this needs to be the default.
From my point of view, an optimisation level equivalent to -O1 or -O2 would be a good default. (Not -O3, because some of those optimisations result in numeric approximations or, in some cases, significantly larger code size and maybe other drawbacks.)
I agree with optimizing by default for rustc, at least, since by the time you need to debug anything, you already know (or will rapidly learn) how to turn optimizations off. For cargo it’s harder to say, since I don’t want to have to type significantly more during development to get a debug build rather than a release one. But that could be fixed by adding a marker in Cargo.toml which specifies the default for a plain cargo build, letting the user decide.