I remember having this discussion before, about offset_of (whether it should be a keyword or an intrinsic macro) and about await (whether it should be a keyword or a method with an exotic calling convention). The thing is… it doesn’t matter much. Whichever way the decision goes, you’re still accessing the same compiler intrinsic, which behaves the exact same way and has the exact same limitations and overheads. The only thing that changes is the syntax. That’s not to say syntax doesn’t matter at all – sometimes it matters more than others – but let’s not pretend this issue is something more profound than that.
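For the record, offset_of did eventually ship as a macro in core::mem (stable since Rust 1.77), and either spelling would have bottomed out in the same intrinsic. A minimal illustration – the Packet struct here is just a made-up example:

```rust
use std::mem::offset_of;

// A hypothetical repr(C) struct, so the field layout is predictable:
// `kind` at offset 0, then padding, then `len` at the u32 alignment.
#[repr(C)]
struct Packet {
    kind: u8,
    len: u32,
}

fn main() {
    // Whether spelled as a keyword or a macro, this expands to the
    // same compiler intrinsic; only the surface syntax differs.
    let off = offset_of!(Packet, len);
    println!("{off}"); // prints 4
}
```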
And by the way,
This isn’t true in C either. You cannot write your own offsetof macro without invoking UB. You cannot write your own stdarg.h. You cannot portably compute INT_MIN or define uint32_t without #ifdefs checking for each particular target and compiler. You cannot write a typedef for the type that the sizeof operator yields without referring to size_t itself. And it makes no more sense to give up those things than it does to give up the volatile keyword.
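The Rust analogues make the same point: things like i32::MIN and size_of come from core and the compiler, just as INT_MIN and sizeof come from the C implementation. You can verify them, but you couldn’t portably derive them from scratch:

```rust
fn main() {
    // i32::MIN is supplied by core, the way limits.h supplies INT_MIN;
    // the two's-complement identity below checks it but doesn't let
    // you define it without the compiler's help.
    assert_eq!(i32::MIN, -i32::MAX - 1);

    // Likewise size_of is an intrinsic, the way sizeof is an operator.
    assert_eq!(core::mem::size_of::<u32>(), 4);

    println!("ok");
}
```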
I can agree that accessing fundamental compiler intrinsics ought not to pull in additional runtime cost, but the core-versus-std split already accomplishes that quite well. I see no good reason to ever forgo core: it doesn’t save you any resources, and it doesn’t grant you any degrees of freedom beyond what the already-implemented version gives you. Even if you defined your own UnsafeCell, you’d still have to use it exactly like the one in core, because the only possible implementation is to use the #[lang] attribute to tell the compiler ‘yes, this is that thing’. It would be, at best, an exercise in reinventing the wheel.
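To make that concrete, here’s a sketch of how you’d use core’s UnsafeCell today (the Counter type is hypothetical). A hand-rolled copy would need the same #[lang] blessing to get the aliasing exemption, so the usage pattern couldn’t differ:

```rust
use core::cell::UnsafeCell;

// A toy interior-mutability counter built on core's UnsafeCell.
// Swapping in a home-made UnsafeCell wouldn't change a line of this.
struct Counter {
    value: UnsafeCell<u32>,
}

impl Counter {
    fn new() -> Self {
        Counter { value: UnsafeCell::new(0) }
    }

    // Mutates through a shared reference; UnsafeCell is what makes
    // this legal at all.
    fn bump(&self) -> u32 {
        // SAFETY: single-threaded example and no other references to
        // the inner value are live during this call.
        unsafe {
            let p = self.value.get();
            *p += 1;
            *p
        }
    }
}

fn main() {
    let c = Counter::new();
    c.bump();
    println!("{}", c.bump()); // prints 2
}
```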