Don't warn about nested unsafe blocks by default?


#1

I encountered a case where I had nested unsafe {} blocks. While the outer block was necessary, I wanted to retain the inner unsafe because it paralleled some other code elsewhere; I also wanted to explicitly highlight that specific unsafe operation.

Obviously I could have just suppressed the warning, but it occurs to me that since unsafe is an annotation anyway, there’s no real downside to allowing it to be nested. I’m suggesting the warning could be off by default, and explicitly enabled with no loss.
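For concreteness, here's a minimal (made-up) illustration of the situation: the code compiles either way, but rustc's unused_unsafe lint fires on the inner block by default.

```rust
// Hypothetical example: the inner `unsafe` block is redundant because the
// outer one already permits unsafe operations, so rustc warns
// "unnecessary `unsafe` block" on it (it still compiles fine).
fn read_two(p: *const i32) -> (i32, i32) {
    unsafe {
        let a = *p;
        // Nested block kept purely as an annotation.
        let b = unsafe { *p.add(1) };
        (a, b)
    }
}
```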


#2

I think this warning makes sense because nested unsafe blocks are redundant. A nested unsafe block doesn’t do anything useful, because unsafe code has already been allowed by the outer unsafe block/fn. I quite like that the compiler reminds me to remove unnecessary unsafe blocks. I have never encountered a situation where it would really make sense to introduce a nested unsafe block; to the contrary, this warning has been useful to me quite a few times. Consider using a normal block instead. Or, if you really want, you can do this ^^’:

/* unsafe */ {
    ...
} 

Edit: Warnings like this one are quite useful because they give us a clearer understanding about how we write our code. It’s similar to the error we get when we import the same thing twice. It’s good that the Rust compiler reminds us to take another look when we’re doing something awkward.


#3

Macros, like array_ref, in unsafe fn?


#4

I think you are giving an excellent argument for why we have #[allow] :slight_smile:

However, I agree with the others here that the default behavior should be for the compiler to say “hey, that looks funny, are you sure that’s what you meant?”. Most of the time the answer is “no”, but sometimes it is “yes” and that’s what #[allow] is for.
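To spell out that escape hatch (function and names invented for illustration): the lint can be silenced locally with #[allow(unused_unsafe)], keeping the redundant inner block as deliberate documentation.

```rust
// Sketch of the `#[allow]` opt-out: the attribute silences the lint for
// just this statement, so the inner `unsafe` survives as an annotation.
fn read_it(p: *const i32) -> i32 {
    unsafe {
        #[allow(unused_unsafe)]
        // Inner block kept intentionally, e.g. to mirror similar code elsewhere.
        let v = unsafe { *p };
        v
    }
}
```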


#5

Just make it a function; shouldn’t that be the default solution, rather than silencing a warning?


#6

Really, I feel this is the problem. I see no legitimate reason why the body of an unsafe fn is implicitly wrapped in unsafe { }.

Having unsafe { } around unsafe operations in an unsafe fn is something I would find useful, and I think it should be required. But just to have them at all, I have to disable a lint, which prevents me from identifying which unsafe blocks are really meaningless (nested unsafe blocks, or unused unsafe).


#7

Well, you don’t need to put a const {} block around the body of a const fn, nor an async {} block around the body of an async fn. And so many unsafe fns read/write pointers or call FFI that I think overall it’d be more noisy than helpful – certainly in the unsafe fn I was looking at most recently, adding unsafe {} blocks wouldn’t be helpful to me.
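That is, the analogy looks like this (trivial made-up function): the body of a const fn is just ordinary code, with no extra block marking the constant-evaluable parts.

```rust
// A `const fn` body needs no `const { }` wrapper; the whole body is
// implicitly const-evaluable, just as an `unsafe fn` body is implicitly
// an unsafe context today.
const fn square(x: i32) -> i32 {
    x * x
}

// Usable both at compile time...
const NINE: i32 = square(3);
```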

Do you have an example of a function where you’d like them?


#8

FFI wrappers are especially where I want it, because that’s where you can end up with a bunch of similar-looking functions that actually differ very slightly in safety considerations.

pub unsafe fn extract_compute_0d(&mut self, name: &str) -> FailResult<f64>
{
    let it = self.extract_compute_any_d(name,
                                        ComputeStyle::Global,
                                        ComputeType::Scalar)?;
    Ok(it.clone())
}

pub unsafe fn extract_compute_1d(
    &mut self,
    name: &str,
    style: ComputeStyle,
    len: usize,
) -> FailResult<Vec<f64>>
{
    let it = self.extract_compute_any_d(name,
                                        style,
                                        ComputeType::Vector)?;

    Ok(::std::slice::from_raw_parts(it, len)
        .iter().map(|&c| c as f64).collect())
}

When I ask myself the question, why are these functions unsafe, I see the first one, and realize, “oh, it’s because I call this function”… and then when I look at the second one, my eyes see the same function being called again and stop searching.

It takes careful reading for me to notice that the second line has another unsafe call, and thus stronger requirements to validate.

Edit: And in fact, it has a bug, now that I am looking at it. extract_compute_any_d returns a for<'a> &'a T, which points to uninitialized data if len = 0.
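One way to patch that bug, sketched with invented names standing in for the real API (this is not the actual code above): bail out before from_raw_parts whenever the slice would be empty, so the possibly-dangling pointer is never touched.

```rust
// Hedged sketch of a fix for the `len = 0` case: `from_raw_parts` requires
// a valid, non-dangling pointer even for zero-length slices, so guard it.
unsafe fn collect_vector(it: *const f64, len: usize) -> Vec<f64> {
    if len == 0 {
        return Vec::new(); // never dereference or validate `it` at all
    }
    std::slice::from_raw_parts(it, len).to_vec()
}
```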


#9

Hmmm… though I guess there is a distinction between the two unsafe things in my example:

  • The safety of the call to extract_compute_any_d is the responsibility of the caller. I’d want an unsafe block here to make it easier to document the requirements expected of the caller.
  • The safety of std::slice::from_raw_parts is assured by the unsafe fn. I’d want an unsafe block here for auditing purposes.

Having an unsafe block around each one seems to defeat the purpose of having the unsafe block around the other…


#10

Now, think of a slightly larger unsafe function, where most of the code is actually safe, except for one line. Thanks to the implicit unsafe block, you have to dig to find what’s actually unsafe. Not only is it a problem when reading code, but it’s also a problem when writing code, because you have to be extra careful that what you think is safe actually is, instead of just focusing on the one thing that is, in fact, unsafe.

This has been, for me, the major hurdle when writing unsafe code. So much so that I would actually start writing my unsafe functions without the unsafe keyword, or make them call a private safe function.
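The private-safe-function pattern looks something like this (all names here are made up for the sketch): the unsafe fn shrinks to a thin shim around the one genuinely unsafe step, and everything else gets full compiler checking.

```rust
// The only genuinely unsafe step lives in the thin `unsafe fn` shim.
unsafe fn process(p: *const u8, len: usize) -> usize {
    let bytes = std::slice::from_raw_parts(p, len);
    process_safe(bytes)
}

// Everything else is ordinary safe Rust, fully checked by the compiler.
fn process_safe(bytes: &[u8]) -> usize {
    bytes.iter().filter(|&&b| b != 0).count()
}
```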


#11

That sounds like a good plan to me!


#12

I usually sprinkle the function with // Safety: ... comments when I write it, e.g. // Safety: Pointer is known to be valid. That way, I can spot the unsafe operations easily when I look at it later and the individual comments remind me why each unsafe operation is safe.
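In code, that style looks roughly like this (the function itself is a made-up example):

```rust
// Each unsafe operation carries a `// Safety:` comment stating why it is
// sound, which also makes the unsafe spots easy to grep for later.
unsafe fn first_and_last(p: *const i32, len: usize) -> (i32, i32) {
    // Safety: caller guarantees `p` points to at least `len` initialized
    // i32 values and that `len >= 1`.
    let s = std::slice::from_raw_parts(p, len);
    (s[0], s[len - 1])
}
```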


#13

I agree that this was a mistake. Just because a function is unsafe to call doesn’t mean it should lose all support in identifying where it is performing unsafe operations.

If I could come up with a good migration plan, I’d propose an RFC that fixes this… but it’d have to be backwards compatible, somehow.


#14

@alercah floated the idea of unsafe(my_obligation) (“named unsafe obligations”)… you could write something like:

unsafe(obligation_a) fn foo(..) -> R1 { .. }

unsafe(obligation_b) fn bar(..) -> R2 { .. }

fn baz(..) -> R3 {
   ..
   let r1 = unsafe(obligation_a) { foo() };
   ..
   let r2 = unsafe(obligation_b) { bar() };
   ..
   r3
}

this could make obligations more clear.


#15

We could roll this back on an edition, couldn’t we? It’s an easy mechanical transformation that could be applied by rustfix.


#16

Can you put a safe fn inside an unsafe fn and get safety assurances? Or is that context-sensitive?


#17

Internal functions seem not to inherit unsafe:

unsafe fn foo(x: *const i32) -> i32 {
    fn inner(x: *const i32) -> i32 { *x }
    inner(x)
}

produces

error[E0133]: dereference of raw pointer requires unsafe function or block
 --> src/main.rs:2:38
  |
2 |     fn inner(x: *const i32) -> i32 { *x }
  |                                      ^^ dereference of raw pointer

error: aborting due to previous error

#18

Ooh, nice! That’s a good way around the unsafe fn issues! Can we get a clippy lint for that?

I feel like we should promote that so we can fix unsafe fn later. Like deprecation, but different.


#19

Strangely, using a closure instead doesn’t trigger any error:

unsafe fn foo(x: *const i32) -> i32 {
    || -> i32 {
        *x
    }()
}

#20

Thanks all! I wasn’t expecting a drive-by whinge to generate so much thoughtful conversation.

While I still think nested unsafe shouldn’t warn by default, I can see this is definitely a personal preference thing.

The specific case was a function being passed to C (via pointer), so it was required to be unsafe to match the signature, but most of the code inside was pure safe Rust, with just a bit of pointer manipulation on the parameters. In this case, it would also have helped if unsafe fn didn’t implicitly wrap its body in unsafe {}.

However, I think having a nested safe function is a reasonable workaround in this case.
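For anyone landing here later, that workaround looks roughly like this (the callback signature is invented for illustration, not the actual C API from the original case): the extern "C" fn stays unsafe to match the foreign signature, while all the real logic lives in a safe inner function.

```rust
// The unsafe extern fn is a thin shim matching the C-side signature; the
// single unsafe step is forming the slice from the raw pointer.
unsafe extern "C" fn on_sample(data: *const f64, len: usize) -> f64 {
    let samples = std::slice::from_raw_parts(data, len);
    average(samples)
}

// All the actual work is plain safe Rust, checked by the compiler.
fn average(samples: &[f64]) -> f64 {
    if samples.is_empty() {
        return 0.0;
    }
    samples.iter().sum::<f64>() / samples.len() as f64
}
```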