Any RFC for Units of Measure?


Yeah, this is a pragmatic decision that could (and probably would) be sensible to make if UoM did make it into Rust. The exact representation is an important question too: I think parameterising by the type of the quantity (e.g. u32, f32, etc.) would also probably be useful.

These are essentially library questions, however, so it’s probably worth waiting until we have a proposed solution for the language problem (canonicalisation) before bikeshedding about those questions.


That seems to me to be an odd and counter-productive position, especially considering that 5.6[km] + 2.3[nm] is likely to result in a much more off-the-wall and useless value than 5.0[m] + 3.0[ft].


I think that whether this is possible or not depends on which particular approach we settle on.

We could, for example, have a single type constructor per system of units where all base units are encoded in the Unit type as type-level consts: SI::Unit</*time*/-1, /*length*/ 1, ...>, such that each operation on units like Mul or Add generates new already canonicalized types by operating on the exponents.

Converting between systems is then just an impl From<GCS::Unit> for SI::Unit away. This has pros and cons, but is doable in a library.


I should’ve mentioned that 5.6[km] + 2.3[nm] probably should be disallowed as well without an explicit conversion to meters. In other words, we would have two different units, “kilometers” and “nanometers”, i.e. 5.2f32[nm] would store 5.2 in memory, not 5.2e-9. But of course I would like to have a custom literal “nm”, using which I’d get a value in “meter” units. Units and custom literals are related to each other, but probably should not be mixed.


OK, I see how that would be a more consistent position; I’m just not sure I understand why you would find that useful. Why should I be able to do 5.0[m] + 0.00000000032[m], but not 5.0[m] + 3.2[nm] or 10000[in] + 3.2[Gm]?


Again, you are mixing custom literals and units of measurement. I was talking about units, not about literals. If nm and m in 5.0[m] + 3.2[nm] are custom literals which desugar to Meter(5.0) + Meter(3.2e-9), I have no problem with it. But if it desugars to Meter(5.0) + Nanometer(3.2), I think we should heavily discourage allowing such operations.
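To illustrate the distinction, here is a tiny sketch where the hypothetical literals m and nm are just constructor functions that convert into the single Meter unit up front, so addition only ever sees meters:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct Meter(f64);

impl std::ops::Add for Meter {
    type Output = Meter;
    fn add(self, rhs: Meter) -> Meter {
        Meter(self.0 + rhs.0)
    }
}

// Desugared forms of hypothetical `5.0[m]` and `3.2[nm]` literals: the
// conversion to the base unit happens at construction time.
fn m(v: f64) -> Meter {
    Meter(v)
}
fn nm(v: f64) -> Meter {
    Meter(v * 1e-9)
}

fn main() {
    // Fine: both sides are Meter, no cross-unit arithmetic occurs.
    let total = m(5.0) + nm(3.2);
    assert!((total.0 - 5.0000000032).abs() < 1e-12);
}
```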


I guess I’d like to see units of measure be in terms of hierarchies/dimensions. For example, the “length” dimension has several units of measure hierarchies, the 2 obvious ones are:

  • etc -> Gm -> Mm -> km -> m -> cm -> mm -> etc
  • etc -> Miles -> Yards -> Feet -> Inches -> etc

A length value consists of a number of components from a particular hierarchy of the length dimension. When adding values from the same hierarchy, the components are added directly; when adding values from different hierarchies, the secondary hierarchy’s components are first converted to the closest-order-of-magnitude components of the primary hierarchy, and then the components are added. Something like that would give the greatest flexibility and accuracy, to my mind.
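A rough sketch of the closest-order-of-magnitude rule (the conversion factors are real; the matching rule is my reading of the proposal, and the names are made up):

```rust
// Components of the metric hierarchy with their factors relative to the metre.
const METRIC: &[(&str, f64)] = &[("km", 1e3), ("m", 1.0), ("cm", 1e-2), ("mm", 1e-3)];

// Express a length (given in metres) in the metric component whose order
// of magnitude is closest, as the proposal suggests for cross-hierarchy sums.
fn closest_metric(value_m: f64) -> (&'static str, f64) {
    let target = value_m.abs().log10();
    METRIC
        .iter()
        .min_by(|a, b| {
            let da = (target - a.1.log10()).abs();
            let db = (target - b.1.log10()).abs();
            da.partial_cmp(&db).unwrap()
        })
        .map(|&(name, factor)| (name, value_m / factor))
        .unwrap()
}

fn main() {
    // One yard = 0.9144 m: the closest metric component is the metre.
    let (unit, v) = closest_metric(0.9144);
    assert_eq!(unit, "m");
    assert!((v - 0.9144).abs() < 1e-12);
    // One inch = 0.0254 m: the closest metric component is the centimetre.
    let (unit, v) = closest_metric(0.0254);
    assert_eq!(unit, "cm");
    assert!((v - 2.54).abs() < 1e-12);
}
```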


I still don’t understand why you think that. I understand it is your position, I just don’t think I understand why it is your position.


I believe my position is in line with the current Rust approach to implicit conversions between types (and units are essentially a subset of types), i.e. it’s currently forbidden to write code like 1u32 + 1.3f32, even though some may say that the conversion here is “trivial” and should be done automatically.
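For reference, this is the status quo being appealed to: mixed-type arithmetic is rejected, and the conversion has to be written explicitly:

```rust
fn main() {
    let a = 1u32;
    let b = 1.3f32;
    // let c = a + b; // compile error: cannot add `f32` to `u32`
    let c = a as f32 + b; // the conversion must be spelled out
    assert!((c - 2.3).abs() < 1e-6);
}
```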


Yes, but forbidding the addition of different units of measure simply because they express different magnitudes is more akin to forbidding:

let x : f32 = 0.0000000001;
let y : f32 = 1413411351.0;
let z = x + y; // forbidden because the magnitudes are so different that there will be potential loss of information

Which we don’t forbid in reality, rather than:

let x : f32 = 0.0000000001;
let y : i32 = 1413411351;
let z = x + y; // forbidden because there will be loss of precision that you may not be thinking about because f32 and i32 have different ranges and precisions available to them. 

Which we do in fact forbid in reality. I guess I just don’t see those as the same thing at all. Perhaps it’s just me though.


I can’t answer for @newpavlov, but I’m of the same opinion. 1 m and 1000 mm are very different in terms of precision and range: 1 usize mm would convert to 0 usize meters, and a very big usize meter measure might overflow when converted to mm.

I have a background in industrial automation, and units of measure are something I really missed there (I’m doing other stuff now). But at least in that context, most conversions between units require decisions about what to preserve: range or precision.

Of course, all of this is moot if a meter is just sugar for the literal 1000 mm (or any other arbitrary base length). But that has its own challenges. I’d argue that it would be surprising if an f32 were unable to encode the length of 1 meter precisely just because the base length is one nanometer.

I’d also argue that (at least in engineering contexts) a wild mix of different dimensions is a sign of a badly designed system and shouldn’t be encouraged, but that’s just an opinion, and off-topic. :wink:


If I’m not misunderstanding something, this approach would make every expansion of the set of base dimensions (for any “unit” type) a breaking change, without a language feature like variadic generics, which seems like a show-stopper.


I think this would depend on how exactly we would allow the system unit types (e.g. SI::Unit) to leak, if at all. If we only expose concrete unit types like SI::meter and SI::second that hide the Unit type (e.g. via new types), then we could add new base dimensions as an implementation detail of the SI unit system. An alternative might be to use default const parameters (analogous to default type parameters on nightly) for new base units.
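A small sketch of the newtype approach (hypothetical names throughout): the exponent-carrying Unit type stays private to the module, so appending a base dimension to it would be an implementation detail rather than a public API change:

```rust
mod si {
    // Private representation: base-dimension exponents (time, length).
    // A new base dimension could be appended here without breaking the
    // public newtypes below.
    #[derive(Debug, Clone, Copy, PartialEq)]
    struct Unit<const TIME: i32, const LEN: i32>(f64);

    // Public concrete unit type; the internal `Unit` never leaks.
    #[derive(Debug, Clone, Copy, PartialEq)]
    pub struct Meter(Unit<0, 1>);

    impl Meter {
        pub fn new(v: f64) -> Self {
            Meter(Unit(v))
        }
        pub fn value(self) -> f64 {
            (self.0).0
        }
    }
}

fn main() {
    let m = si::Meter::new(2.0);
    assert_eq!(m.value(), 2.0);
}
```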


Also, adding new base dimensions to the main unit systems does not happen often. Anecdotally, in ~15 years of Boost.Units, I recall it happening once for SI. In such a Rust library, we would just bump the major version and let people upgrade whenever they can (we could add a compatibility layer with the previous version).


So, one way we can get unification is by doing it in “the stupidest way possible”, i.e. by just saying “this is so”. We introduce type quotients (I’m guessing the type theorists have their own name for this concept). Essentially, we’d want to allow introduction of rules like the following (warning, ad-hoc syntax).

quot<A, B> UnitMul<A, B> = UnitMul<B, A> where ..;

More specifically, we have a grammar like

quot_type := "quot" TyParams? Ty = Ty Where? ";"

which does more-or-less what you expect: two copies of Mul are glued together along the given equality. Unfortunately, there are a few problems with this:

  • There’s no good way to say “ah yes, this is how these types are equivalent”, unless we require the types to both be ZSTs or uninhabited, or require a const body after the declaration, or whatever.
  • This is probably a coherence and inference disaster.

As a neat example, type aliases desugar to quotients:

type New<T, ..> = Ty<F<T>, ..>;
// becomes
struct New<T, ..> { /* fields of Ty<F<T>, ..> */ }
quot<T, ..> New<T, ..> = Ty<F<T>, ..> where ..;

which is admittedly a pretty silly example.

I rather like this solution, because it doesn’t require us to bake units into the language, and we can build much more hilariously complicated structures than what is essentially the polynomial ring Z[x1, x2, ...]. Of course, it’s also hilariously unfeasible.


If you don’t expose the underlying Unit type, then you have to have a type alias for every combination of dimensions (e.g. metre_second_kilogram), which means 2^n types, for n dimensions. You’re also unable to provide conversion functions (e.g. SI::Unit to GCS::Unit) in user code.

This could work and is certainly a smaller language change than the other suggestions.


This is a limited form of quotient types. The “dimension type canonicalisation” is essentially a way to encode quotient types (whose types can be normalised). (In general, full quotient types are not useful without dependent types.)


What I had in mind was to add ways for users to create these type aliases themselves, while hiding the details, e.g. Mul and Div, such that type metre_second_kilogram = unit::Mul<SI::metre, unit::Mul<SI::second, SI::kilogram>>; would do the right thing, and Mul would be provided by the library and hide the unit internals, e.g., type Mul<A, B> = Unit<A::B0 + B::B0, A::B1 + B::B1, ...>. But this approach is really unergonomic, and in these examples it would leak the internal Unit type.
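For comparison, one way to get a Mul-style alias on stable Rust today is a trait with an associated output type, at the cost of hand-writing each canonical product type (all names here are hypothetical):

```rust
// Each unit is a marker type; the canonical product is defined by hand.
struct Meter;
struct Second;
struct MeterSecond;

// Type-level multiplication as a trait with an associated output type.
trait UnitMul<Rhs> {
    type Output;
}
impl UnitMul<Second> for Meter {
    type Output = MeterSecond;
}

// The library-facing alias then reads close to the proposal:
type Mul<A, B> = <A as UnitMul<B>>::Output;
type MeterSecondAlias = Mul<Meter, Second>;

fn main() {
    use std::any::TypeId;
    // The alias resolves to the canonical product type.
    assert_eq!(TypeId::of::<MeterSecondAlias>(), TypeId::of::<MeterSecond>());
}
```

The obvious downside, as noted above, is that every product type (and impl) must be written out manually, which does not scale to arbitrary compound units.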

I think that you are right and that this is not easily doable, and even if it were, it wouldn’t really be ergonomic, which is bad because creating compound units from the base ones is a very common operation :confused:

This could work and is certainly a smaller language change than the other suggestions.

FWIW, I am not advocating this as a good solution for units, but rather pointing out that once we get const generics, we might be able to do much better than what we can do today. The infinite extensibility that systems like F# give you sounds really nice, but I am not sure yet whether it’s worth adding a language feature just to solve this specific problem. In C++, which has had imperfect solutions like Boost.Units for a long time, not that many people discuss language-level solutions for this particular problem, because Boost.Units is pretty much enough for most users in practice. We can aim for something better, but we should weigh the result against the alternatives.


Sounds about right. I figure that full quotient types are complete overkill, and that we’d want some kind of restricted form to be able to declare different types as extensionally equal. I just want A<T> + B<U> / A<F<T, U>> = B<G<T, U>>.


Yeah, that’s one thing that Go does well. A const can be any integer or float type without casting. It would be good to see something close to that in Rust.

As long as a const is known at compile time, it should be possible to make it into any type it fits.