I think optimizing by default is very important for newcomers. As @tanadeau said, even experienced devs coming from dynamic languages like JavaScript, Python, and Ruby, or from static, enterprise languages like Java and C#, are probably completely unaware of the distinction between “debug” and “optimized” builds. Note that those five languages represent the vast majority of “first languages” for people learning to code today.
rustc needs to be designed so that its users fall into the pit of success. Today that’s sadly not the case: a newbie running rustc foo.rs gets a slow build by default, and might easily conclude that the equivalent code they wrote in Java is faster and dismiss Rust outright. This isn’t a theoretical concern; situations like this come up on IRC all the time.
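For concreteness, this is the flag in question: rustc’s -O (shorthand for --opt-level=2), with the equivalent split in Cargo being the --release flag.

```shell
# Default: unoptimized debug build (what a newbie gets)
rustc foo.rs

# Optimized build: must be requested explicitly with -O
# (shorthand for --opt-level=2)
rustc -O foo.rs

# The same split exists in Cargo:
cargo build            # debug profile, no optimizations
cargo build --release  # optimized profile
```

The difference is not cosmetic: depending on the code, the optimized build can easily run an order of magnitude faster than the default one.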
Anyone with more than passing experience with Rust will know how to get a debug build when they need one, but the default should certainly be an optimized build.
Think through why the unoptimized build is the default in the first place: it’s a convention that started decades ago with early compilers that had no optimizing mode at all. When optimization was added later, it went behind a flag so as not to break the way people were already using those compilers. I don’t think anyone could make a reasonable case for “unoptimized by default” in a vacuum, with no precedent set by other compilers.
rustc should break with convention here, because in this case the conventional approach hurts users. Today’s default is “fail-deadly” instead of “fail-safe”: forget to pass a flag you might not even know exists, and you get a build that’s very slow and almost certainly not what you wanted (and if you’re a newbie, drop the “almost”; it’s certainly not what you wanted).