I’d say it depends on the use case.
If you have security-critical code like OpenSSH or OpenSSL, then almost every logic bug is automatically also a security bug. The same goes for operating system kernels and web services: if you write a web server in Rust, a logic bug can expose the entire user database or allow people to log in as admin. There are many examples of logic bugs in websites affecting security, like this one. Safe Rust doesn’t prevent that.
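As a hypothetical sketch of what such a logic bug can look like in entirely safe Rust (the `User` type and `can_access_admin_panel` function are made up for illustration):

```rust
// Hypothetical sketch: an authorization check in entirely safe Rust.
// The borrow checker and type system are happy; the logic is still wrong.
struct User {
    is_admin: bool,
}

fn can_access_admin_panel(user: &User) -> bool {
    // Logic bug: the `!` inverts the check, so everyone *except*
    // admins gets access. Memory-safe, type-correct, and a security hole.
    !user.is_admin
}

fn main() {
    let guest = User { is_admin: false };
    assert!(can_access_admin_panel(&guest)); // oops: the guest gets in
}
```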
However, there are other use cases for Rust. Think of media players or image viewers. These tools need to handle untrusted data and display it to the user. Here there is a big difference between logic bugs, which in the worst case give you a black or corrupted image, and memory safety bugs, which may give you a “your files have been encrypted” message. The only places where memory safety bugs can be introduced are unsafe blocks, so it makes sense to minimize them and, if possible, provide an option to get rid of them completely (see the sketch below).
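For the “get rid of them completely” option, Rust has a built-in mechanism: setting the `unsafe_code` lint to `forbid` makes any use of `unsafe` a compile error:

```rust
// Crate-level attribute: compilation fails if any `unsafe` block,
// `unsafe fn`, or `unsafe impl` appears anywhere in this crate.
#![forbid(unsafe_code)]

fn main() {
    // Uncommenting the next line is a compile error, not a runtime risk:
    // unsafe { std::hint::unreachable_unchecked() };
    println!("this crate contains no unsafe code of its own");
}
```

Note that this only covers the crate’s own code; dependencies (and the standard library) can still contain unsafe code, which is why being careful about which dependencies use unsafe matters.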
As an example, think of the Nautilus thumbnailer bug, which achieved code execution by exploiting memory unsafety, merely from viewing a directory containing a prepared file in the file explorer. There are definitely parts of a file explorer where you might have security-relevant logic bugs, but for displaying thumbnails there is no logic bug of real relevance to security, at least not directly (you can use a wrong thumbnail to convince people that a file is something other than it really is, but that is a far less severe security issue than an attacker gaining code execution on your machine), while every memory safety bug is relevant to security. If it had been all safe Rust, the bug wouldn’t have happened. If it had been mostly safe Rust with unsafe components, it might have happened.
Sure, a malicious crate author might also add malicious code to an image decoding library, but that assumes actual malicious intent from the author, which is much less prevalent, and it can be recognized far more easily during review than the susceptibility to making mistakes when writing unsafe code (which affects most people, sadly). If I wanted to hide a security vulnerability, I would do it in unsafe code, not in safe code.
Unsafe code also makes logic bugs worse. Unsafe code often works safely only under certain invariants, typically upheld by the safe code around it, and if a logic bug violates those invariants, memory safety is violated as well.
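A minimal sketch of how that plays out (the `Ring` type is invented for illustration):

```rust
struct Ring {
    buf: Vec<u8>,
    head: usize,
}

impl Ring {
    fn advance(&mut self) {
        // Logic bug: the modulus should be `self.buf.len()`. In fully safe
        // code this would at worst cause an out-of-bounds panic; here it
        // breaks the invariant the unsafe block below relies on.
        self.head = (self.head + 1) % (self.buf.len() + 1);
    }

    fn current(&self) -> u8 {
        // SAFETY (intended): `head` is always < `buf.len()`.
        // The logic bug in `advance` can violate that, turning a plain
        // logic bug into undefined behavior.
        unsafe { *self.buf.get_unchecked(self.head) }
    }
}

fn main() {
    let mut r = Ring { buf: vec![1, 2, 3], head: 0 };
    for _ in 0..3 {
        r.advance();
    }
    // `head` is now 3 == buf.len(), so this reads out of bounds (UB).
    let _ = r.current();
}
```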
To summarize: for many use cases, being cautious about which libraries are allowed to use unsafe is in no way “security theater” but a legitimate strategy to minimize attack surface.