And just for clarity: a function exposed to Python which can cause UB is incorrect in the same way that a Rust function which can cause UB but isn't marked unsafe is incorrect. Evaluating Python from Rust should be considered safe, because Python cannot be used to cause UB without the presence of a soundness bug in the unsafe bindings layer.
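To make that concrete, here's a minimal sketch in plain Rust (function names invented for illustration): the "incorrect" shape is a safe signature whose body can hit UB, and the fix is moving the obligation into the signature. A Python-exposed function with a reachable UB path has the same defect as the first function here.

```rust
// UNSOUND: marked safe, but a caller can trigger UB from safe code.
// This is the same defect as a Python-exposed function that can cause UB.
fn first_byte_unsound(v: &[u8], i: usize) -> u8 {
    // No bounds check: `i >= v.len()` is UB, yet callers were never
    // required to write `unsafe`. This function is incorrect.
    unsafe { *v.get_unchecked(i) }
}

/// The correct signature moves the obligation to the caller.
/// # Safety
/// `i` must be less than `v.len()`.
unsafe fn first_byte(v: &[u8], i: usize) -> u8 {
    unsafe { *v.get_unchecked(i) }
}

fn main() {
    let v = [1u8, 2, 3];
    // Sound call site: the caller discharges the documented precondition.
    let b = unsafe { first_byte(&v, 0) };
    assert_eq!(b, 1);
    // `first_byte_unsound(&v, 3)` would compile without `unsafe` — and be UB.
    assert_eq!(first_byte_unsound(&v, 2), 3);
}
```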
Rust is in fact relatively unique in how strictly it demarcates the unsafe barrier between code that is trusted for soundness and code that isn't. It's not the only language to use the "unsafe" label, but most other primarily-safe languages with an unsafe escape hatch just provide some "be careful" API namespace that otherwise looks like any other code, e.g. Haskell's System.IO.Unsafe or Swift's UnsafePointer.
The presence of an unsafe-style escape hatch does not itself mean that calling into that language is always unsafe, of course. (If it did, calling any Rust code would necessarily be unsafe.) The point of unsafe is defaults: if the foreign language typically uses unsafe constructions everywhere, calling it should be unsafe by default; if the foreign language is safe by default and relegates unsafe constructions to a discouraged escape hatch used for optimization purposes, calling it should be safe by default.
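A sketch of that "defaults" distinction, with both foreign languages stood in by invented Rust modules: a C-like library where every entry point carries preconditions binds as unsafe by default, while a safe-by-default scripting runtime can expose a safe evaluation entry point.

```rust
// Hypothetical bindings, names invented for illustration.

// A C-like library: every entry point has soundness preconditions,
// so the bindings are `unsafe fn` by default.
mod c_like {
    /// # Safety
    /// `ptr` must point to `len` initialized bytes.
    pub unsafe fn sum_bytes(ptr: *const u8, len: usize) -> u64 {
        let mut total = 0u64;
        for i in 0..len {
            total += u64::from(unsafe { *ptr.add(i) });
        }
        total
    }
}

// A safe-by-default scripting runtime: the language itself cannot cause
// UB, so evaluating it is a safe function.
mod script_like {
    pub fn eval(src: &str) -> String {
        src.to_uppercase() // stand-in for a real interpreter
    }
}

fn main() {
    let data = [1u8, 2, 3];
    // Unsafe-by-default foreign code: the caller vouches for preconditions.
    let sum = unsafe { c_like::sum_bytes(data.as_ptr(), data.len()) };
    assert_eq!(sum, 6);
    // Safe-by-default foreign code: no unsafe at the call site.
    assert_eq!(script_like::eval("ok"), "OK");
}
```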
There will always be some things out of scope for the unsafe guarantee of UB freedom; these are the axioms assumed by the system which cannot be enforced within the system (such as that my memory isn't accessible externally, e.g. via /proc/&lt;pid&gt;/mem). Safe languages with less strictly controlled unsafe escape hatches are generally understood to be handled the same way: the unsafety should be marked in whatever way fits the language and not exposed to consuming APIs which expect the standard UB-freedom.
Even if you do have a bridge library taking a stricter stance, that doesn't have to mean your code is littered with unnecessary unsafe. You could have something like C#'s unsafe, where you must pass the AllowUnsafeBlocks option when setting up the compiler/CLR to enable the use of unsafe functionality. Enabling this flag would probably be considered unsafe from Rust, but needn't make further use of C# code unsafe from Rust.
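A sketch of what such a bridge API could look like on the Rust side (the `Runtime` type and its methods are invented, modeled loosely on C#'s AllowUnsafeBlocks): the one-time configuration step is the unsafe promise, and everything afterwards is safe.

```rust
// Hypothetical bridge runtime; all names invented for illustration.
struct Runtime {
    allow_unsafe_blocks: bool,
}

impl Runtime {
    /// Safe constructor: foreign unsafe functionality stays disabled.
    fn new() -> Self {
        Runtime { allow_unsafe_blocks: false }
    }

    /// # Safety
    /// By calling this, you vouch that all foreign code loaded into this
    /// runtime upholds Rust's soundness rules despite using unsafe blocks.
    unsafe fn with_unsafe_blocks() -> Self {
        Runtime { allow_unsafe_blocks: true }
    }

    /// Safe either way: the promise was made (or not) once, up front.
    fn eval(&self, src: &str) -> Result<String, &'static str> {
        if src.contains("unsafe") && !self.allow_unsafe_blocks {
            return Err("unsafe blocks are not enabled for this runtime");
        }
        Ok(format!("evaluated: {src}")) // stand-in for real evaluation
    }
}

fn main() {
    let locked = Runtime::new();
    assert!(locked.eval("unsafe { ... }").is_err());

    // The single unsafe step; every later `eval` call is safe.
    let open = unsafe { Runtime::with_unsafe_blocks() };
    assert!(open.eval("unsafe { ... }").is_ok());
}
```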
The "unsafe on entry" pattern, where you're promising never to misuse the resulting object at any point in the future, isn't ideal, to be clear; it's much better if unsafe APIs have relatively clear and localizable preconditions with limited postconditions. But for APIs where most of the surface should be considered safe, and it's just some theoretical edge case that could cause UB, it can be argued for. (It does make passing any derived proof-carrying types across crate boundaries a huge footgun, though: since the other library never wrote the unsafe promise itself, it obtaining a proof of that promise is unsound if it can use the proof to cause UB.)
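The footgun can be shown with a small invented sketch: `Trusted` is a proof-carrying token whose existence means *someone* wrote the unsafe promise. Within one crate that's fine; handed across a crate boundary, the receiving crate gets the proof without ever having made the promise.

```rust
// Hypothetical proof-carrying token; names invented for illustration.
#[derive(Clone, Copy)]
struct Trusted(());

impl Trusted {
    /// # Safety
    /// The caller promises, for the rest of the program, never to trigger
    /// the theoretical UB edge cases of this API.
    unsafe fn assume() -> Self {
        Trusted(())
    }
}

/// Safe signature: the obligation already lives in the `Trusted` token,
/// so no checks are performed here.
fn fast_path(_proof: Trusted) -> u32 {
    42
}

fn main() {
    // Fine within one crate: the promise and its uses are co-located.
    let proof = unsafe { Trusted::assume() };
    assert_eq!(fast_path(proof), 42);
    // The footgun: pass `proof` to another crate, and that crate reaches
    // `fast_path` without ever having written the unsafe promise itself.
}
```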
With well-designed script embedding APIs that provide security features for safely running user (i.e. untrusted) scripts, there are usually clever ways to keep the amount of unsafe required for a proper API to a minimum. Generally, it'll roughly take this shape: mount any trusted code, call a single unsafe function to mark the loaded script as trusted (thus allowing it to do potentially unsafe things), and then evaluate any user code. The user code can go through the trusted interface, but can't itself directly use any potentially unsafe functionality. If you're running potentially untrusted code (and if you have plugins, you will be, since people will install and run plugins from the web with minimal vetting, if any), you already want some sort of sandboxing to limit access to APIs that aren't unsafe but are still abusable, like filesystem access (e.g. via cap-std or similar). If you have that set up for your scripting interface, adding the unsafe trust boundary should not be all that difficult.
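The shape described above can be sketched with an invented `Engine` type (no real embedding library is assumed): one unsafe call blesses trusted code, and user code can only route through what was blessed.

```rust
// Hypothetical embedding engine; all names invented for illustration.
use std::collections::HashSet;

struct Engine {
    trusted: HashSet<String>,
}

impl Engine {
    fn new() -> Self {
        Engine { trusted: HashSet::new() }
    }

    /// The single unsafe step: mark a loaded script as trusted, allowing
    /// it to use potentially unsafe host functionality.
    /// # Safety
    /// The caller vouches that this script cannot be made to cause UB.
    unsafe fn mark_trusted(&mut self, name: &str) {
        self.trusted.insert(name.to_string());
    }

    /// Safe: user code may call *through* a trusted script's interface,
    /// but is refused direct access to unsafe host functionality.
    fn eval_user(&self, via: &str) -> Result<(), &'static str> {
        if self.trusted.contains(via) {
            Ok(()) // stand-in for actually evaluating the user script
        } else {
            Err("user code may not reach unsafe host APIs directly")
        }
    }
}

fn main() {
    let mut engine = Engine::new();
    // Trusted code is blessed once, behind a single unsafe call.
    unsafe { engine.mark_trusted("stdlib") };
    // User code routed through the trusted interface: allowed.
    assert!(engine.eval_user("stdlib").is_ok());
    // User code reaching for unsafe functionality on its own: rejected.
    assert!(engine.eval_user("web_plugin").is_err());
}
```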
But as a final note: if you can write an API such that it doesn't require unsafe, at the cost of some small runtime assertion overhead that is sublinear in the amount of other work done, absolutely prefer doing that first, and only add unsafe *_unchecked versions if the checks show up as a performance bottleneck. They probably won't, and your software will be more resilient to programming errors. Ideally, the use of unsafe should be encapsulated in core abstractions/collections and in limited performance-critical subsections.
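That layering mirrors std's own slice API (`get` vs `get_unchecked`); a minimal sketch with an invented `Buffer` type:

```rust
// Hypothetical collection; the type and method names are invented,
// mirroring std's `slice::get` / `slice::get_unchecked` layering.
struct Buffer {
    data: Vec<u8>,
}

impl Buffer {
    /// Safe default: an O(1) bounds check, sublinear in whatever real
    /// work the caller goes on to do with the byte.
    fn get(&self, i: usize) -> Option<u8> {
        self.data.get(i).copied()
    }

    /// Opt-in escape hatch, added only if `get` proves to be a
    /// measured bottleneck.
    /// # Safety
    /// `i` must be less than `self.data.len()`.
    unsafe fn get_unchecked(&self, i: usize) -> u8 {
        unsafe { *self.data.get_unchecked(i) }
    }
}

fn main() {
    let buf = Buffer { data: vec![10, 20, 30] };
    assert_eq!(buf.get(1), Some(20));
    assert_eq!(buf.get(9), None); // programming error caught, not UB
    assert_eq!(unsafe { buf.get_unchecked(2) }, 30);
}
```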