GSoC project planning: SemVer specification and tool architecture overview


As I am now working on the GSoC project discussed in a previous thread, I’m reaching out to the wider community to settle a few questions about the what and how of implementing the proposed system.

A quick reminder: The goal is to implement a tool to check whether a library crate adheres to the semantic versioning specification, and to generate a recommended version number for the unpublished version being analyzed. Before implementation work can start, the major design concepts have to be agreed upon, and a consistent specification of semantic versioning as it applies to Rust has to be written down. The following gist elaborates on these two aspects and will serve as a reference during development. All comments and suggestions are welcome.

I’m glad to be working on this project and open to discussion of the approaches outlined above, and I’d like to thank everybody who has already participated in such discussion. While I’m not a fan of formal introductions, you can also get in touch over IRC, where I use the same nickname (and should be available most of the time).


It looks like a great start to me!

AFAIK, the API evolution RFC is still in effect, so I suspect you’ll want to give that a careful read. The RFC discusses a number of technically breaking changes that aren’t (or shouldn’t be) considered semver incompatible. For example, adding a new default method to a trait can technically break downstream code, but we don’t release a new major version when adding such a method. There are several other cases like that outlined in the RFC.
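To illustrate the default-method case, here is a minimal sketch (all trait and type names are hypothetical) of how adding a defaulted method upstream can break a downstream crate through method-name ambiguity, even though the RFC classifies it as a minor change:

```rust
// Simulating two crates in one file. Suppose `Upstream` later gains a
// default method `fn describe(&self) -> &'static str { "upstream" }`.
trait Upstream {
    fn name(&self) -> &'static str { "upstream" }
}

// A downstream trait that already defines `describe`:
trait Downstream {
    fn describe(&self) -> &'static str { "downstream" }
}

struct Widget;
impl Upstream for Widget {}
impl Downstream for Widget {}

fn main() {
    // Today this compiles and resolves to `Downstream::describe`.
    // After the upstream addition, the plain method call would become
    // ambiguous and require UFCS disambiguation — a technical breakage
    // that nevertheless only warrants a minor version bump.
    assert_eq!(Widget.describe(), "downstream");
    assert_eq!(Widget.name(), "upstream");
}
```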

In practice, at least when considering std, we will not make a minor version if it breaks a lot of public crates, so there is an element of judgment involved here that an automatic tool can’t capture. (Which I think is OK! Just worth mentioning. :-))


There’s currently an RFC on Public/Private Dependencies under discussion that would be relevant to this:

For example, if crate a depends on crates b and c, and b depends on c, then b needs to bump its major version if it begins to depend on a new major version of c, as otherwise a could face a transitive dependency conflict.

Specifically if b were to have a private dependency on c then incrementing the major version of b's dependency on c wouldn’t be a breaking change.
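The distinction hinges on whether c’s types appear in b’s public API. A minimal sketch (crates simulated as modules, all names hypothetical):

```rust
// Stand-in for crate `c` at version 1.x:
pub mod c_v1 {
    pub struct Widget(pub u32);
}

// Stand-in for crate `b`:
pub mod b {
    // Public dependency: `c`'s type appears in `b`'s own API, so a
    // consumer of `b` can observe which major version of `c` is used.
    // Bumping `c` would then be a breaking change for `b` as well.
    pub use super::c_v1 as c;
    pub fn make_widget() -> c::Widget {
        c::Widget(1)
    }

    // Private dependency: `c` is only used internally; only `u32`
    // escapes, so a major bump of `c` leaves `b`'s interface intact.
    pub fn widget_id() -> u32 {
        super::c_v1::Widget(2).0
    }
}

fn main() {
    assert_eq!(b::make_widget().0, 1);
    assert_eq!(b::widget_id(), 2);
}
```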

I guess there’s no chance of that being merged and implemented in time to affect this project, but it might be something to keep in mind as a likely future extension.


Oh, I didn’t know this was accepted. Congrats! I was soon going to write something about how the chalk + trait stuff might be leveraged for lots of this.


How would it differ from leveraging the current compiler architecture, which offers type inference via HM-like unification, trait queries and even a fulfillment context (which we also had before chalk)?


The compat modality encodes the compatibility relation.

not { compat { not { new interface } } }

This, asked in the context of the old interface, is “does there exist a compatible crate entailing this interface”.
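Read as standard modal logic, the formula above is just the usual box/diamond duality, with compat playing the role of the box modality (a sketch of the intended reading, not anything the compiler implements today):

```latex
% "not (compat (not NewInterface))" is the dual (diamond) modality:
\neg\, \Box_{\mathrm{compat}}\, \neg\, \varphi_{\mathrm{new}}
\;\equiv\;
\Diamond_{\mathrm{compat}}\, \varphi_{\mathrm{new}}
% i.e. "some compat-related world (crate version) entails the new interface".
```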

Now I don’t mean to say that will just work out of the box—from some quick research, Prolog only does HORNSAT, while what I wrote requires real SAT. We’ll probably need at least to implement the dual (diamond or similar) modality more directly. Overlapping impl checking also seems impossible to do with just Horn clauses, and in any event a single authoritative implementation of the compatibility relation in rustc for all uses strikes me as an exciting goal.

What are “fulfillment contexts”? I don’t mean to say progress can’t already be made with rustc. As with everything I dream up, it will best-case serve as a long term plan :).


A growable set of “obligations” (pending predicates, especially those involving inference variables) which get resolved “behind the scenes” (in some arbitrary order), resulting in types being inferred and/or errors produced.

It’s the trait (more generally, predicate) equivalent of an (HM) inference context.


I’m skeptical that there is much value in mechanically checking semver compatibility (or for that matter, that semver is all it’s cracked up to be). At best you can mechanically check that a code change is unlikely to cause compile errors in clients, but that isn’t the same thing as detecting breakages.

This would be both too lax, in that it allows semantic breakages which don’t change the API, and too strict, in flagging changes as backwards incompatible that wouldn’t break any real world code.
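Both failure modes are easy to exhibit concretely. A minimal sketch (hypothetical functions standing in for two published versions of an API):

```rust
// Too lax: the signature is identical across versions, but the
// behavior changed — a semantic breakage invisible to any API diff.
fn parse_flag_v1(s: &str) -> bool { s == "yes" }
fn parse_flag_v2(s: &str) -> bool { s == "yes" || s == "y" }

// Too strict: generalizing `&String` to `&str` changes the signature,
// so a naive checker flags it as breaking, yet callers passing
// `&some_string` keep compiling thanks to deref coercion.
fn width_v1(s: &String) -> usize { s.len() }
fn width_v2(s: &str) -> usize { s.len() }

fn main() {
    assert!(!parse_flag_v1("y"));
    assert!(parse_flag_v2("y")); // same API, different result

    let s = String::from("abc");
    assert_eq!(width_v1(&s), 3);
    assert_eq!(width_v2(&s), 3); // same call site works for both
}
```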


Great that this project is getting off the ground now. Together with a friend I started semantic-rs a while ago. Initially we relied on commit annotations to make version decisions, but we always had the idea to automate this to some extent using automatic SemVer compatibility checks.

We would be happy to assist with your upcoming project and maybe use it as the backend library for our tool at some point.


I think it fits perfectly with the overall design of Rust: it doesn’t ensure that your program is bug-free, can’t crash, or—in this case—is semver compatible. It just gives you strong guarantees that rule out some of the most common and most severe problems, such as dangling pointers in the core language or—in this case—API breakage for libraries using this tool.

Elm, which inspired this project, ships a similar tool with its package manager and advertises it as a main language feature. API breakage might not seem like a problem coming from an established language with a mature ecosystem and well-written frameworks. But if you have had your TypeScript web app suddenly fail to build because someone changed the API of a major component in a patch-level release, then you really wish such a tool had existed.


To be fair, due to Elm’s significantly simpler type system, their implementation fits in less than 1k lines of code and, from a quick glance, seems to provide strong guarantees. I still agree with your position (otherwise I wouldn’t have applied for this project), but I thought this should be pointed out.