As an aside, I generally don’t advocate for using unsafe in a macro, as it can be very hard to reason about how it may be (ab)used. Consider, for example, this contrived library for reading from a 2D array: the library initially appears safe when looked at in isolation, but if the macro is passed custom types that overload the arithmetic operators, all of its assumptions are broken and the macro can read out of bounds.
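The library itself isn’t reproduced here, but a self-contained sketch of the same shape of problem (the grid_get! macro, the Evil type, and all names below are my own, not the original library’s) might look like this:

use std::ops::Mul;

const WIDTH: usize = 4;
static GRID: [u8; WIDTH * WIDTH] = [1; WIDTH * WIDTH];

// The macro checks its arguments against WIDTH, then reads the flattened
// index without a bounds check, assuming the check implies the index is valid.
macro_rules! grid_get {( $row:expr, $col:expr ) => ({
    let (row, col) = ($row, $col);
    assert!(row < WIDTH && col < WIDTH);
    // Intended safety argument: row and col were just checked against WIDTH,
    // so row * WIDTH + col is in bounds... but that only holds for plain integers.
    unsafe { *GRID.get_unchecked(row * WIDTH + col) }
})}

// A custom "index" type whose comparison and arithmetic lie to the macro.
struct Evil;

impl PartialEq<usize> for Evil {
    fn eq(&self, _: &usize) -> bool { false }
}
impl PartialOrd<usize> for Evil {
    fn partial_cmp(&self, _: &usize) -> Option<std::cmp::Ordering> {
        Some(std::cmp::Ordering::Less) // always claims to be "in bounds"
    }
}
impl Mul<usize> for Evil {
    type Output = usize;
    fn mul(self, _: usize) -> usize { 1_000_000 } // produces a wildly out-of-range index
}

fn main() {
    let fine = grid_get!(1usize, 2usize); // reads GRID[6], as intended
    let oops = grid_get!(Evil, 0usize);   // passes the check, then reads far out of bounds: UB
    let _ = (fine, oops);
}

Everything here compiles without the caller writing unsafe, yet the second call is undefined behavior: the macro’s internal unsafe was justified only under assumptions about the types it would be handed.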
However, this post aims to propose a more sensible default behavior for interacting with macros that intend to build safe constructions on top of underlying unsafe code.
To clarify, the problem is that unsafe is an implementation detail of this macro, and shouldn’t affect how external safe code interacts with it, but in my example it does:
let x = value_equals_unsafe_operation!(*a.get_unchecked(100));
It’s valid to pass additional unsafe code (here, an out-of-bounds unchecked read) as the expression given to this macro, without any error.
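The macro’s definition isn’t shown in this section; for this example, the only property that matters is that the macro wraps the caller’s expression in an unsafe block, roughly like this sketch:

macro_rules! value_equals_unsafe_operation {( $expr:expr ) => ( unsafe { $expr } )}

fn main ()
{
    let a: Vec<u8> = vec![0; 10];
    // Compiles with no `unsafe` written by the caller, even though
    // `a.get_unchecked(100)` is an out-of-bounds unchecked read (UB at runtime).
    let x = value_equals_unsafe_operation!(*a.get_unchecked(100));
    let _ = x;
}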
I don’t think this current behavior is a preferable default. It is not obvious to the macro author that this is possible, and the suggested ways of preventing it (matching on the expression, or assigning the expression to a temporary variable, sketched below) are error prone (easy to forget when writing or refactoring macros) and not aesthetically pleasing.
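For reference, the temporary-variable workaround looks roughly like this (a sketch with a made-up macro that needs unsafe internally; the point is that the caller’s expression is bound outside the macro’s unsafe block, so the caller must write their own unsafe if their expression needs it):

macro_rules! first_byte {( $bytes:expr ) => ({
    // The caller's expression is evaluated out here, in safe code, so any
    // unsafe operations inside it must be wrapped in `unsafe` by the caller.
    let bytes: &[u8] = $bytes;
    assert!(!bytes.is_empty());
    unsafe { *bytes.get_unchecked(0) }
})}

fn main ()
{
    let data = [1u8, 2, 3];
    assert_eq!(first_byte!(&data), 1);

    // With the temporary in place, this no longer compiles unless the caller
    // writes their own `unsafe` block around the raw-pointer dereference:
    // let b = first_byte!(&*(std::ptr::null::<[u8; 1]>())); // error[E0133]
}

Nothing enforces this pattern, though: dropping the intermediate binding during a refactor silently reintroduces the problem, which is why I call it error prone.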
I think that if code does intend this behavior (creating wrappers around unsafe that allow hiding implicit unsafe code in externally supplied expressions), that is bad hygiene and shouldn’t be allowed by default:
macro_rules! identity {( $expr:expr ) => ( unsafe { $expr } )}

fn main ()
{
    // look, no unsafe!
    identity!(
        ::std::mem::transmute::<(), ()>(())
    );
}
The suggestion of creating a new unsafe_expr to retain the current behavior would work, but I really think any use cases that depend on this are bad hygiene and should prefer to always be explicit when their expressions require unsafe:
macro_rules! identity {( $expr:expr ) => ( $expr )}

fn main ()
{
    identity!(
        unsafe { ::std::mem::transmute::<(), ()>(()) }
    );
}