I’m not very familiar with how this works in Boost. I have read the documentation and examples, but I may be missing a number of things.

I have several concerns about the Boost implementation:

it depends on magic constants that must be globally unique across the entire application, which is Not Good (presumably they are needed for normalization, but I’m not sure);

Boost seems to provide auto-conversion between units, which is very much not in the spirit of Rust (hopefully, this can be removed);

does the Boost implementation support any kind of inference?

Also, I’d like to see the error messages. Not that I don’t trust Boost to have human-readable error messages, but…

And solving linear systems of equations is the only thing one needs to be able to do to check whether one unit can be converted into another, and to perform the conversion. The Dimensional Analysis chapter of the Boost.Units documentation explains this in detail.
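To make the linear-algebra view concrete, here is a small sketch of the idea (my own illustration, not Boost’s code): represent a unit’s dimension as a vector of integer exponents over the base dimensions, so that multiplying and dividing units adds and subtracts vectors, and two units are dimensionally compatible exactly when their exponent vectors are equal.

```rust
// Sketch: dimensions as integer exponent vectors over [length, mass, time].
// This is only the underlying arithmetic, not Boost.Units' implementation.

type Dim = [i32; 3]; // exponents of [length, mass, time]

const LENGTH: Dim = [1, 0, 0];
const TIME: Dim = [0, 0, 1];

// Multiplying units adds exponent vectors.
fn mul(a: Dim, b: Dim) -> Dim {
    [a[0] + b[0], a[1] + b[1], a[2] + b[2]]
}

// Dividing units subtracts exponent vectors.
fn div(a: Dim, b: Dim) -> Dim {
    [a[0] - b[0], a[1] - b[1], a[2] - b[2]]
}

// Two units can be converted into one another exactly when their
// exponent vectors coincide (the scale factor is a separate concern).
fn convertible(a: Dim, b: Dim) -> bool {
    a == b
}

fn main() {
    let velocity = div(LENGTH, TIME); // [1, 0, -1]
    assert_eq!(velocity, [1, 0, -1]);
    assert_eq!(mul(LENGTH, LENGTH), [2, 0, 0]); // area
    // m/s and mile/h share a dimension, so conversion is possible...
    assert!(convertible(velocity, div(LENGTH, TIME)));
    // ...but a length is not a velocity.
    assert!(!convertible(LENGTH, velocity));
}
```

With a fixed, known set of base dimensions this check is trivial; the open question in this thread is how to do the same at the type level without hardcoding the number of dimensions.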

Ok, that is a very nice hack. I'll assume that it also works on integers, which is what we need here. However, it still has at least two major problems:

you need to be able to reduce the problem to a set of constraints in a number of dimensions known at compile-time – uom, dimensioned, etc. handle this by closing the set of dimensions, while Boost handles this with non-compositional magic numbers;

users need to be able to understand error messages.

Oh, and technically, I suspect that the only way to extend this hack to work with a number of dimensions that is not hardcoded is to rely upon code generation, which would bring us back to the problem of uom, dimensioned, etc.

> while Boost handles this with non-compositional magic numbers

Which ones?

Ok, I wrote a very early (and incomplete) draft. Any help welcome!

Looks good! It would be cool if the user-guide section showed, for example, how to implement “mini” SI and CGS systems of units that include length, time, and velocity; then how to specify the conversions between the natural units of those systems; and finally how to convert between a velocity in CGS and a velocity in SI, for example m/s to mile/h, or km/h to inches/s.
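At the value level, those conversions reduce to checking that the dimensions match and multiplying by a ratio of scale factors. A rough sketch of that reduction (the names and API here are mine for illustration, not uom’s or any other library’s):

```rust
// Value-level sketch of cross-system conversion (illustrative only).
// A unit pairs a dimension with a scale factor relative to SI base units.

#[derive(Clone, Copy)]
struct Unit {
    dim: [i32; 3], // exponents of [length, mass, time]
    scale: f64,    // factor relative to the coherent SI unit
}

// "Mini" SI and CGS velocity units, as in the requested example.
const M_PER_S: Unit = Unit { dim: [1, 0, -1], scale: 1.0 };
const KM_PER_H: Unit = Unit { dim: [1, 0, -1], scale: 1000.0 / 3600.0 };
const CM_PER_S: Unit = Unit { dim: [1, 0, -1], scale: 0.01 };
const METRE: Unit = Unit { dim: [1, 0, 0], scale: 1.0 };

/// Converts `value` from one unit to another, failing if dimensions differ.
fn convert(value: f64, from: Unit, to: Unit) -> Option<f64> {
    if from.dim != to.dim {
        return None; // e.g. velocity -> length is meaningless
    }
    Some(value * from.scale / to.scale)
}

fn main() {
    // 1 m/s is 3.6 km/h.
    assert!((convert(1.0, M_PER_S, KM_PER_H).unwrap() - 3.6).abs() < 1e-9);
    // SI -> CGS: 1 m/s is 100 cm/s.
    assert!((convert(1.0, M_PER_S, CM_PER_S).unwrap() - 100.0).abs() < 1e-9);
    // A velocity cannot be converted to a length.
    assert!(convert(1.0, M_PER_S, METRE).is_none());
}
```

The interesting part of the design space is doing the `dim` comparison at compile time rather than at run time, which is what the rest of this thread is about.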

Isn’t it enough to introduce `where T < U`, `where T <= U` and `where T == U` bounds, where `T < U` is an arbitrary total order on types (at least compatible with in-crate declaration order or lexicographic order, so that units come out in a consistent order)?

Along with specialization and a smart enough trait resolution system (especially one that knows that `T < U || T == U || T > U`), that should make it possible to define everything else as a library while retaining full composability.

Or equivalently, add negative where bounds, `where T == U`, and a `TypeLess<T> : U` compiler-provided trait.
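As a rough illustration of what such an order buys, it can be approximated today by hand-assigning each tag type an ordinal constant; the proposals above would instead have the compiler supply the order, removing the hand assignment (this sketch is mine, not from any RFC):

```rust
// Sketch: approximating a total order on types with a hand-assigned rank.
// A compiler-provided `T < U` relation would make these ranks unnecessary.

trait BaseUnit {
    const RANK: u32; // stands in for the hypothetical `T < U` ordering
}

struct Metre;
struct Kilogram;

impl BaseUnit for Metre {
    const RANK: u32 = 1;
}
impl BaseUnit for Kilogram {
    const RANK: u32 = 2;
}

// `T < U` holds iff `T::RANK < U::RANK`; a units library would use this
// to keep type lists of base units sorted in one canonical order.
fn less_than<T: BaseUnit, U: BaseUnit>() -> bool {
    T::RANK < U::RANK
}

fn main() {
    assert!(less_than::<Metre, Kilogram>());
    assert!(!less_than::<Kilogram, Metre>());
}
```

Note that the hand-assigned ranks are exactly the globally unique magic numbers being criticized elsewhere in this thread, which is why a compiler-provided order would be an improvement.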

This looks, smells and sounds like something that really-really-really shouldn’t be baked into the language. Typesafe units are nice, and probably pretty much necessary for any reasonable numeric/engineering/physics/simulation work that claims to be safe or correct by design. However, they are so fundamentally type-related and so well described by types that before (and ideally instead of) adding them as magic special cases to the language, we should really strive to make the type system more powerful in general and try to express them in terms of more fundamental features of the type system. In the wider perspective, adding even more built-in traits isn’t much better than requiring globally unique magic constants. While it is of course significantly less fragile, it still doesn’t feel any more principled or general.

So instead of rushing to an RFC that extends the magic part of the language even more (of which we already have way more than would be justified in a language that wants to solve C++'s bloat/complexity problem…), I suggest that we instead consider what kind of expressiveness/additional power the type system needs, and try to achieve those goals in a less coupled, less units-specific manner. That would also mean that more potential users could benefit from them.

> I suggest that we instead consider what kind of expressiveness/additional power the type system needs, and try to achieve those goals in a less coupled, less units-specific manner. That would also mean that more potential users could benefit from them.

Of course. I just want to start the conversation somewhere. I'll try to post an updated draft tomorrow; feel free to poke holes in it if you think that something smaller would work.

With this, we can define the base dimensions for length, mass, and time as:

```cpp
/// base dimension of length
struct length_base_dimension : base_dimension<length_base_dimension, 1> { };
/// base dimension of mass
struct mass_base_dimension : base_dimension<mass_base_dimension, 2> { };
/// base dimension of time
struct time_base_dimension : base_dimension<time_base_dimension, 3> { };
```

It is important to note that the choice of order is completely arbitrary as long as each tag
has a unique enumerable value; non-unique ordinals are flagged as errors at compile-time.

Note that these tag types are only a convenience. Units are type lists of base dimensions, and being able to sort these type lists makes it possible to always reduce composite units to the same unit type, which is a form of unification. The only thing required for sorting is that the base units have a total order; the concrete order is irrelevant.

Without sorting capabilities, one expression might produce kg * m * s and a different expression might produce m * kg * s. Because the base units are in a different order, these expressions have different types and would not unify. If you had a total order and a way to sort according to it every time a composite expression is produced, one could always produce consistent types that unify.
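That normalization step can be sketched at the value level (a toy model, not any library’s internals): represent a composite unit as (base-unit ordinal, exponent) pairs, sort by ordinal, merge equal ordinals, and drop exponents that cancel to zero.

```rust
// Toy model of the unification step: canonicalize a composite unit by
// sorting its base-unit factors on an arbitrary but total order (ordinals).

const M: u32 = 1;  // length
const KG: u32 = 2; // mass
const S: u32 = 3;  // time

/// Normalizes a list of (base-unit ordinal, exponent) factors: sort by
/// ordinal, merge exponents of equal ordinals, drop zero exponents.
fn canonicalize(mut factors: Vec<(u32, i32)>) -> Vec<(u32, i32)> {
    factors.sort_by_key(|&(ord, _)| ord);
    let mut out: Vec<(u32, i32)> = Vec::new();
    for (ord, exp) in factors {
        match out.last_mut() {
            Some(last) if last.0 == ord => last.1 += exp,
            _ => out.push((ord, exp)),
        }
    }
    out.retain(|&(_, exp)| exp != 0); // cancelled dimensions disappear
    out
}

fn main() {
    // kg * m * s and m * kg * s normalize to the same representation.
    assert_eq!(
        canonicalize(vec![(KG, 1), (M, 1), (S, 1)]),
        canonicalize(vec![(M, 1), (KG, 1), (S, 1)]),
    );
    // s * m / s cancels down to plain m once in canonical form.
    assert_eq!(canonicalize(vec![(S, 1), (M, 1), (S, -1)]), vec![(M, 1)]);
}
```

The same bookkeeping done over type lists instead of `Vec`s is what the sorting discussion here is about.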

Assigning each base unit a unique integer is just one of the many ways there are to give a sequence of types a total order.

But it does provide a total ordering that is sharable between different systems of units, such as between SI and CGS. Lexicographic ordering of the base units within each system, for instance, would not provide that.

> Without sorting capabilities, one expression might produce kg * m * s and a different expression might produce m * kg * s. Because the base units are in a different order, these expressions have different types and would not unify. If you had a total order and a way to sort according to it every time a composite expression is produced, one could always produce consistent types that unify.

Exactly.

> Assigning each base unit a unique integer is just one of the many ways there are to give a sequence of types a total order.

Indeed. A total ordering shareable between different systems of units can be convenient for the implementation, depending on how convenient the units library wants to be with respect to conversions, but it is not necessary.

Well, you need magic numbers that are unique across your entire application. So if you have two libraries and they collide by using the same magic number, it's game over.

> Indeed. A total ordering shareable between different systems of units can be convenient for the implementation, depending on how convenient the units library wants to be with respect to conversions, but it is not necessary.

Looks to me like without these numbers, `s*m/s != m`, which is a pretty big deal.