Basically, I’d like to brainstorm the trade-offs of adding new API items to a library based on the version of the compiler building the library.
The context in which this has come up is adding various types of support for u128, which was stabilized in Rust 1.26: https://github.com/TyOverby/bincode/issues/250
The idea here would be to do version sniffing in build.rs and then automatically add new APIs specifically for u128 when compiling with rustc 1.26 or newer. There would be no need to enable a specific feature; you’d just get the new APIs automatically. I’ve used this same strategy for enabling SIMD in the regex library, and it works great. However, in the case of regex, there are no changes to the public API; it’s just internal optimizations.
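To make the version-sniffing idea concrete, here is a minimal sketch of what such a build.rs could look like. This is hypothetical, not bincode's actual build script; the cfg name `has_u128` and the helper `parse_minor` are names I made up for illustration:

```rust
// build.rs sketch (hypothetical; not bincode's actual build script).
use std::env;
use std::process::Command;

/// Parse the minor version out of `rustc --version` output like
/// "rustc 1.26.0 (a77568041 2018-05-07)". Returns None on anything
/// unrecognized, so unknown compilers conservatively get no new APIs.
fn parse_minor(version: &str) -> Option<u32> {
    let mut pieces = version.split('.');
    if pieces.next() != Some("rustc 1") {
        return None;
    }
    pieces.next()?.parse().ok()
}

fn main() {
    // Cargo sets RUSTC for build scripts; fall back to plain `rustc`.
    let rustc = env::var("RUSTC").unwrap_or_else(|_| "rustc".to_string());
    let minor = Command::new(rustc)
        .arg("--version")
        .output()
        .ok()
        .and_then(|out| String::from_utf8(out.stdout).ok())
        .and_then(|v| parse_minor(&v));
    // u128/i128 were stabilized in Rust 1.26, so only emit the
    // (hypothetical) cfg flag on 1.26 or newer.
    if minor.map_or(false, |m| m >= 26) {
        println!("cargo:rustc-cfg=has_u128");
    }
}
```

The library code would then gate the new items with `#[cfg(has_u128)]`, so on older compilers those items simply aren't compiled at all.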
So basically, adding new things to the API automatically is making me feel a bit nervous, but I’m having trouble pinning down concrete reasons why. One failure mode that comes to mind: someone views the docs for the library generated with a newer compiler, which include the new API items, while developing against an older compiler, and is surprised when a public API item isn’t available. AIUI, the first error message they’ll see is “some_thing is not available” rather than “some_thing uses u128, which isn’t available on your Rust 1.22 compiler.” So I could see the failure modes being pretty mystifying here. I don’t know how heavily to weigh them, though.
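To illustrate why the error is mystifying, here is a sketch of what a cfg-gated item looks like from the user's side. Again, `has_u128` and `serialize_u128` are hypothetical names, not bincode's real API:

```rust
// Sketch of a cfg-gated public API item. The `has_u128` cfg would be
// emitted by build.rs on Rust 1.26+; both names are hypothetical.

#[cfg(has_u128)]
pub fn serialize_u128(v: u128) -> Vec<u8> {
    v.to_le_bytes().to_vec()
}

fn main() {
    // On a compiler where build.rs did not emit `has_u128`, the item
    // above is never compiled, so any call to `serialize_u128` fails
    // with "cannot find function" and no hint that the real cause is
    // the u128 stabilization cutoff.
    if cfg!(has_u128) {
        println!("u128 support compiled in");
    } else {
        println!("u128 support not compiled in");
    }
}
```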
Are there other concrete reasons to avoid doing this?
N.B. In an effort to preempt off-topic discussion, I’d like to avoid going down the path of questioning why we don’t just increase the minimum required Rust version, or why the crate doesn’t just do a semver bump. There are plenty of other threads on those topics. For this discussion, let’s take “older rustc compatibility” as a given.