I'm just curious as to what would be the downside of having `#[derive(Debug)]` enabled by default for ALL structs etc. when building the debug target, rather than having to sprinkle it on a handful of datatypes manually during development?
A few obstacles come to mind:

- There'd need to be a way to turn it off for things like encryption keys and for structs where you want to implement `Debug` manually.
- It would require making `Debug` special in a way that other traits aren't.
- It would open the floodgates to people asking for additional traits to be made special in the same way.
- Since `Debug` is defined in the standard library (in `core::fmt`) rather than in the language itself, the feature would need to handle the discontinuity between `no_std` and `std` crates.
None of these are unsolvable problems, of course. And I actually like the sound of that, too. But it involves making a trait special when right now it's just an ordinary trait with an ordinary derive macro, and a lot of people don't like adding special cases like that.
Ok, that's an awesome answer. Totally agree.
As we can't just enable it by default because of those issues, how about solving this another way? What would people think of a command-line flag, e.g.:

`cargo build --derive-debug-all`

...or something like that? To me, that would be handy.
That wouldn't be useful at all: the code wouldn't compile without the flag (or, if it did, the flag would have no effect).
hmm... yeah, you're right.
Any thoughts on how something like this could work then?
I've found `#![warn(missing_debug_implementations)]` to be "enough," personally, as it reminds you when you forget to derive `Debug`, and I'm used to just accepting that every struct requires some amount of attributes to tell it how it should behave.
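For anyone who hasn't used it: that's a crate-level attribute, placed once at the top of `lib.rs`. A minimal sketch (the struct is just an illustration, and the warning text is paraphrased):

```rust
#![warn(missing_debug_implementations)]

// warning: type does not implement `Debug`; consider adding
// `#[derive(Debug)]` or a manual implementation
pub struct Widget {
    pub id: u32,
}
```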
The "perfect" "magic Debug
" to me would be the functional equivalent of providing a blanket default impl
for all types with the implementation of #[derive(Debug)]
. Thus, you'd be able to override it immediately by providing your own impl Debug
which is more specific than for<T>
(i.e. any impl that you could write anyway). It'd also have to always apply; silently turning off if some member doesn't impl Debug
(either because it didn't opt in to magic Debug
impls or through some explicit opt-out of the magic impl (impl !Debug
? Could that be generally applicable to default impl
?), because otherwise it's a huge semver hazard to accidentally make a struct not impl Debug
anymore by introducing a new private member.
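A sketch of that semver hazard, with a hypothetical `RawHandle` standing in for any non-`Debug` member:

```rust
// v1.0 of some library: under "silently turning off" semantics,
// Config would get the magic Debug impl, and downstream crates
// would come to rely on it.
pub struct Config {
    name: String,
}

// v1.1 adds a *private* field of a non-Debug type (hypothetical):
struct RawHandle(*mut ());

pub struct ConfigV1_1 {
    name: String,
    raw: RawHandle, // magic impl silently turns off here...
}
// ...and every downstream `println!("{:?}", config)` stops compiling,
// even though the library made no visible API change.
```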
We could potentially add a magic `default` automatic `Debug` derive for all types (even those where it'd be a compile error, for the reason above) on an edition boundary (probably one that has stable specialization, though), but I think the problems with it outweigh the potential benefits. However, `missing_debug_implementations` is a good candidate for raising to warn-by-default (at least when running lints via `cargo clippy` rather than `cargo check`), I think, especially if it can be/is limited to only `pub` types (I don't recall off the top of my head whether it is).
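As an aside, on newer toolchains (Cargo 1.74+) a project can also opt in crate-wide from the manifest rather than a source attribute, via the `[lints]` table:

```toml
[lints.rust]
missing_debug_implementations = "warn"
```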
The problem with `missing_debug_implementations` is that you have to manually add it (or let your editor do it, etc.), which is what I take issue with; IMHO lots of `derive(Debug)` sprinkled around clutters code too.
Alternatively, and this is the very thing I've been opposed to, we could add an option to `cargo fmt` which adds the `derive` everywhere!

... I started this thread hating the clutter, but now I think auto-sprinkling may be the best of all worlds!
If the lint is marked as machine-applicable, then `cargo fix` will be able to apply it for you. (`cargo fmt` afterwards would then `merge_derives` for you.)
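A sketch of that workflow, assuming the lint's suggestion were marked machine-applicable:

```console
$ cargo fix   # applies machine-applicable suggestions, adding the derives
$ cargo fmt   # rustfmt's merge_derives option (true by default) collapses
              # any stacked #[derive(...)] attributes into a single one
```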
And I'm in favor of making `missing_debug_implementations` warn-by-default (for `pub` types) so that you don't have to opt into it.
I haven't measured, but I expect it'd affect compilation times and executable sizes. There would be more macros to expand and compile, and I don't know whether the linker can be trusted to remove unused `Debug` implementations.
We actually have a case study here with syn: they've made derive impls opt-in specifically because they both negatively impact compile time for their huge number of AST structs and are rarely used in practice.
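For reference, syn's opt-in looks like this in a consumer's manifest; the `extra-traits` feature is what gates `Debug` (and friends) on the AST types:

```toml
[dependencies]
# Debug (plus Eq, PartialEq, Hash) on syn's syntax tree is opt-in:
syn = { version = "2", features = ["extra-traits"] }
```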
A "magic default impl" might have more freedom to only generate when requested and/or be more efficient than a syntactical derive (which tbh I don't think #[derive__Default]
even is anyway?), but ultimately, yes, always applying a Debug
derive will have some performance impact.
Personally, I haven't been too annoyed by that, since it's usually `#[derive(Debug, Clone)]` or more, so just removing the `Debug,` doesn't feel like it makes all that much of a difference.
Which is, of course, additional evidence for notriddle's point about floodgates and wanting the same feature for more traits. I suppose that "auto-derived" traits would combo well with expanded negative impls, though...
What's wrong with just writing `#[derive(Debug)]`? Seriously, I don't see a problem to be solved here.
One problem is when your type contains types from a library, and the library author neglected to write `#[derive(Debug)]` on all their types.
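A sketch of the failure mode, with `ThirdPartyHandle` standing in for a type from such a library:

```rust
// Imagine this lives in someone else's crate, without a Debug impl:
pub struct ThirdPartyHandle;

#[derive(Debug)] // error[E0277]: `ThirdPartyHandle` doesn't implement `Debug`
struct MyState {
    handle: ThirdPartyHandle,
}
```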
The API guidelines recommend preemptively implementing `Debug` for all types, but there's no great tooling to help authors do this consistently.
This is a nice tool; I didn't know it existed.
If a problem exists at all here, I would say it is that both the `missing_debug_implementations` lint and the `#[derive(Debug)]` attribute actively require an action to be taken on the part of the programmer, and since programmers tend to be lazy, I expect most people want a "zero-action" solution. In other words, they want the generation of `Debug` impls automated away completely.
Such a solution would have to play nice with extant manual `Debug` impls, though, because those usually exist because the author deemed the derivable `Debug` impl insufficient.
That's been my experience when I've deliberately written a manual `Debug` impl for a struct: to better convey the data in an application-related 2D form.
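For instance (a minimal sketch; the `Grid` type and its layout are hypothetical), a manual impl can render the data as a 2D grid instead of the derived one-line form:

```rust
use std::fmt;

struct Grid {
    width: usize,
    cells: Vec<u8>,
}

impl fmt::Debug for Grid {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Print one row of cells per line rather than the derived
        // `Grid { width: 3, cells: [..] }` representation.
        for row in self.cells.chunks(self.width) {
            for cell in row {
                write!(f, "{cell:3}")?;
            }
            writeln!(f)?;
        }
        Ok(())
    }
}
```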
In this case, I just have to disagree that "zero-action" is desirable here. It's basically magic with downsides stemming from reasons similar to what you mentioned. Furthermore, adding the one crate attribute literally once per project shouldn't be deemed too much effort.
Edit: while thinking about it, an obvious solution to "I didn't know about / I don't want to type `missing_debug_implementations`" is to make the lint warn-by-default (or higher; up for discussion). However, I'm not sure whether that can be done without a breaking change. I'd actually welcome more warnings enabled by default, both in rustc and in Clippy.
I agree with making it an enabled-by-default Clippy lint, but disagree with enabling the lint by default in rustc, as there are legitimate reasons not to derive `Debug`. (Compilation performance is, to me, one such legitimate reason.)
Note that that's a reason it shouldn't be deny-by-default (or worse, a hard error), but not a reason it shouldn't be warn-by-default. If you have one of those legitimate reasons, you can disable it, globally or just for the one type. (Like many of our other warn-by-default lints, such as the naming conventions.)
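Per-type opt-out looks the same as for any other warn-by-default lint (the key type here is just an illustration):

```rust
// Deliberately no Debug impl: we never want key material in logs.
#[allow(missing_debug_implementations)]
pub struct SecretKey([u8; 32]);
```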
Since nobody has mentioned this: making a struct automagically `derive(Debug)` when possible makes it so that adding a non-`Debug` field to a `pub` type becomes a breaking change.
The solution, as said by @CAD97, would require specialization:
```rust
#![feature(specialization)]

use core::fmt::{self, Debug, Formatter};

// Illustrative only: coherence would forbid this blanket impl outside
// of `core` itself, so it would have to ship with the language.
impl<T: ?Sized> Debug for T {
    default fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
        f.write_str("<no Debug representation available>")
    }
}
```
This way everything would be guaranteed to have a `Debug` impl, after which changing `derive(Debug)` in a future edition to be opt-out¹ rather than opt-in would just be a matter of measuring the impact on compilation times.
¹ Ideally with an attribute applicable to a type definition (à la `#[derive...]`), but also applicable to a module (like the other attributes): the latter case would allow people concerned about compile times to keep functioning as things do now.
All this having been said, I am with @H2CO3 in that this is not really an actual concern, provided `missing_debug_implementations` becomes a warn-by-default lint:
What is greater, the laziness of programmers, or the itching of a warning?