Trade-offs of adding new APIs based on compiler version

Basically, what I’d like to do is brainstorm the trade-offs of adding new API items to a library conditionally, based on the version of the compiler building it.

The context in which this has come up is adding various types of support for u128, which was stabilized in Rust 1.26: https://github.com/TyOverby/bincode/issues/250

The idea here would be to do version sniffing in build.rs and then automatically add new APIs specifically for u128 on rustc 1.26 or newer. There would be no need to enable a specific feature for it; you just get it automatically. I’ve used this same strategy for enabling SIMD in the regex library and it works great; however, in regex’s case there are no changes to the public API. It’s just internal optimizations.
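
For concreteness, here is a minimal sketch of what that version sniffing in build.rs could look like. The cfg name has_u128 is made up for illustration; it is not bincode’s actual flag:

    // build.rs: detect the rustc version and emit a cfg flag.
    use std::env;
    use std::process::Command;

    fn main() {
        // Cargo sets RUSTC to the compiler it is about to use.
        let rustc = env::var("RUSTC").unwrap_or_else(|_| "rustc".into());
        let out = Command::new(rustc)
            .arg("--version")
            .output()
            .expect("failed to run rustc --version");
        let version = String::from_utf8(out.stdout).expect("non-UTF-8 rustc output");

        // Output looks like "rustc 1.26.0 (abc123456 2018-05-07)";
        // pull out the minor version between the first two dots.
        let minor: u32 = version
            .split('.')
            .nth(1)
            .and_then(|s| s.parse().ok())
            .expect("unexpected rustc version string");

        if minor >= 26 {
            // Lets the crate compile `#[cfg(has_u128)]` items on 1.26+.
            println!("cargo:rustc-cfg=has_u128");
        }
    }

The library would then mark the new items with #[cfg(has_u128)], so users on older compilers never see them.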

So basically, adding new things to the API automatically is making me feel a bit nervous, but I’m having trouble pinning down concrete reasons why. One thing that comes to mind is a user viewing the docs for a library, which contain new API items because the docs were generated with a newer compiler, while developing on an older compiler, and being surprised when a public API item isn’t available. As I understand it, the first error message they’ll see is “some_thing is not available” rather than “some_thing uses u128, which isn’t available on your Rust 1.22 compiler.” So I could see the failure modes being pretty mystifying here. I don’t know how heavily to weight them, though.

Are there other concrete reasons to avoid doing this?

N.B. In an effort to preempt off-topic discussion, I’d like to avoid going down the path of questioning why we don’t just increase the minimum required Rust version, or why the crate doesn’t just do a semver bump. There are plenty of other threads on those topics. For this discussion, let’s take “older rustc compatibility” as a given.

I will confess we have been a little bolder about this in the Rand project, and have already implemented a couple of new APIs (and one optimisation) based on compiler version; a sketch of what such a gate can look like follows the list:

  • [1.22] Minimum supported version
  • [1.26] u128 / i128 generation support
  • [1.26] impl<'a, D, R, T> iter::FusedIterator for DistIter<'a, D, R, T>
  • [1.26] set_word_pos / get_word_pos for ChaChaRng (new functionality, not likely to be used much)
  • [1.27] impl<X: SampleUniform> From<RangeInclusive<X>> for Uniform<X>
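
As promised, here is a sketch of what such a gate can look like in the source. The cfg name and the function are illustrative, not Rand’s actual code. Note that gating is needed not only for new types like u128 but also for trait paths such as iter::FusedIterator, which do not exist at all on older compilers:

    // Compiled only when build.rs detected rustc >= 1.26
    // (the cfg name `rustc_1_26` is hypothetical).
    #[cfg(rustc_1_26)]
    pub fn gen_u128<R: rand_core::RngCore>(rng: &mut R) -> u128 {
        // Splice two u64 draws into one u128.
        ((rng.next_u64() as u128) << 64) | (rng.next_u64() as u128)
    }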

Ideally, rustdoc should show the compiler version required for each feature of the API. This already happens for the std lib.

Serde already does this too.

  • [1.13] Minimum supported version with none of the below.
  • [1.20] CString::into_boxed_c_str enables us to deserialize Box<CStr>.
  • [1.21] From<Box<T>> for Rc<T> / Arc<T> enables deserializing rc for T: ?Sized.
  • [1.25] core::time::Duration makes Duration impls available for no_std.
  • [1.26] Impls for 128-bit integers.
  • [1.27] Impls for core::ops::RangeInclusive.
  • [1.28] Impls for core::num::NonZero*.

The only downside so far is that adding a build script to do version detection adds about 0.5 seconds to compile time. In my opinion, the advantages make it worthwhile.

Putting the new API behind a Cargo feature that is not automatically detected does not solve this, though. Either docs.rs builds with all-features = true and you have the same problem, or it builds without all features and nobody finds out about the new additions.

Good point.

If Serde has been doing this for several release cycles and it has worked well, then that definitely makes me much less nervous about this!

To me it’s absolutely fine. It’s basically how libstd works :)

I prefer wrapping it in feature gates. Even though that has the same discoverability problem as a build script, at least the gate is discoverable when reading the source code.
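
As a sketch, assuming a feature declared as i128 = [] under [features] in Cargo.toml, the gate sits right next to the item it guards (the function itself is illustrative, not bincode’s API):

    // Compiled only with `cargo build --features i128`; anyone
    // reading the source sees the gate immediately.
    #[cfg(feature = "i128")]
    pub fn encode_u128(value: u128, buf: &mut Vec<u8>) {
        // Little-endian bytes as a stand-in for a real wire format.
        buf.extend_from_slice(&value.to_le_bytes());
    }

Unlike an auto-detected cfg, though, users must opt in explicitly.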

This is what bincode currently does, and it is causing some issues, hence the motivation for this discussion. In this case, rand_pcg (which uses the u128 type) has optional Serde support, and that in turn can use bincode for binary encoding, so the bincode/i128 feature is only needed when a user encodes rand_pcg via bincode. Arguably, the crate doing the encoding should be the one to depend on bincode/i128, but that puts a hard-to-document requirement on users and causes problems with rand_pcg's own tests, which use bincode as a dev-dependency and need i128 support only on compilers >= 1.26; unfortunately, neither Cargo.toml nor build.rs can express a dependency feature that is conditional on compiler version.
