I think that the encouragements to look at Homotopy Type Theory (HoTT) are well-meaning but rather off-topic. What you are looking for, for associated types, is type-level computation, in a form that has been formalized in various ways in programming language theory (for example, research on ML module systems uses singleton types to compute with type definitions) and is essentially unrelated to the higher-dimensional nature of HoTT. Equality in general is a deep problem in type theory, and HoTT has a lot of exposure right now, so it is natural for people to suspect a connection, but I don’t think there is one here. (In technical terms, F-omega already has a type-equality relation in its type system that does type-level computation, while it doesn’t have any notion of propositional equality.)

On the other hand, my comment on reading the blog post was that there was a bit too much Prolog and a bit too little type theory, in the following sense: you are not only solving a proof search problem, you are building a *program fragment* that has computational meaning – the execution behavior of your program depends on the fragment that is inferred during trait elaboration.

By *elaboration*, I am referring to the idea of transforming the source code that people write into a more *explicit* form in which the calls to trait-bound functions are replaced/enriched/annotated with the path of the specific implementation of the function that is actually going to run, and calls to methods with trait assumptions on their generic parameters are enriched/annotated with an extra argument carrying the specific implementation of the trait for the call’s instance. (This does not mean that trait dispatch has to be implemented this way in practice; you can always explain compiled specializations as inlining of these extra explicit arguments.) In particular, while the code before elaboration does not satisfy the *type erasability* property (if you erase all information about types, you cannot execute the program anymore), the explicit program after elaboration has a type-erased semantics that is faithful to the language semantics. Understanding this transformation pass is important because (1) having an intermediate form that permits type erasure is an interesting property of your language (it tells you the minimum information a programmer needs to understand to accurately predict program execution) and (2) this intermediate form is a good way to think about human–program interactions in an IDE: it is precisely what you want to show (under some nice syntax) when a programmer wonders “what is going to happen with this trait-bound call here?”, and understanding the syntax of this elaborated form also helps you write better error messages when elaboration fails (because there is an ambiguity, for example).
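To make this concrete, here is a small sketch in Rust of what elaboration into an explicit, type-erasable form could look like (the names `Describe`, `DescribeDict`, `show_elaborated` are all hypothetical; real compilers do this on an internal representation, not on source syntax):

```rust
// Before elaboration: the behavior of `show` depends on trait dispatch,
// i.e. on type information that erasure would destroy.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for i32 {
    fn describe(&self) -> String {
        format!("the number {}", self)
    }
}

fn show<T: Describe>(x: T) -> String {
    x.describe()
}

// After elaboration: the chosen implementation is an explicit extra
// argument (a "dictionary"), so the program makes sense even with all
// type information erased.
struct DescribeDict<T> {
    describe: fn(&T) -> String,
}

fn show_elaborated<T>(dict: &DescribeDict<T>, x: T) -> String {
    (dict.describe)(&x)
}

fn main() {
    // The elaboration witness for `i32: Describe`, written out as a value.
    let describe_i32 = DescribeDict::<i32> {
        describe: |n: &i32| format!("the number {}", n),
    };
    // Both versions agree on every input.
    assert_eq!(show(5), show_elaborated(&describe_i32, 5));
}
```

The elaborated call site names its implementation explicitly, which is exactly the information an IDE could surface for “what is going to happen with this trait-bound call here?”.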

If you are lucky, you can design this elaboration in such a way that the post-elaboration programs are expressed in exactly the same syntax as the pre-elaboration programs that users wrote. This property trivially holds of Scala’s implicit parameters (an implicit parameter is just a value/term that the user could have written themselves), but it is a bit less obvious for Haskell’s type classes or Rust’s traits. Using the Haskell syntax that I am most familiar with, you could elaborate something like:

```
class Eq a where
  eq :: a -> a -> Bool

instance Eq Int where
  eq n1 n2 = ...

instance Eq a => Eq (List a) where
  eq [] [] = True
  eq (x:xs) (y:ys) = eq x y && eq xs ys
  eq _ _ = False

discr :: Eq a => a -> a -> Int
discr x y = if eq x y then 1 else 0

assert (discr [1] [2] == 0)
```

into the more explicit transformed program

```
type eq a = (a -> a -> bool)

eqInt :: eq Int = ...

eqList :: eq a -> eq (List a)
eqList eqA [] [] = True
eqList eqA (x:xs) (y:ys) = eqA x y && eqList eqA xs ys
eqList eqA _ _ = False

discr :: eq a -> a -> a -> Int
discr eqA x y = if eqA x y then 1 else 0

assert (discr (eqList eqInt) [1] [2] == 0)
```

Notice that:

- a class/trait becomes the type of the values carrying its operation(s): the class `Eq a` becomes the type `eq a` of comparison functions
- a ground instance (`Int` is in the class `Eq`) is turned into a value of that type: `eqInt`, of type `eq Int`
- a conditional instance (if `a` is in the class `Eq`, so are lists of `a`) becomes a *function* from values of type `eq a` to values of type `eq (List a)`
- the call `discr [1] [2]` in the source version has a behavior that depends on “which definition of the equality will be chosen”. In the explicit version, we can easily describe which definition has been chosen by a source term: `(eqList eqInt)`. I could show this term to the user; if there were a conflict between two possible definitions, I could show both candidates to explain the conflict.
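For Rust readers, the same elaborated program can be sketched in Rust itself (the names `EqDict`, `eq_int`, `eq_list`, `discr` are mine, and boxed closures stand in for the dictionary values; this is an illustration, not how rustc actually represents trait dispatch):

```rust
// The class `eq a` becomes a struct holding the comparison function.
struct EqDict<A> {
    eq: Box<dyn Fn(&A, &A) -> bool>,
}

// Ground instance: a value of type `EqDict<i32>` (like `eqInt`).
fn eq_int() -> EqDict<i32> {
    EqDict { eq: Box::new(|a: &i32, b: &i32| a == b) }
}

// Conditional instance: a *function* from `EqDict<A>` to `EqDict<Vec<A>>`
// (like `eqList`).
fn eq_list<A: 'static>(eq_a: EqDict<A>) -> EqDict<Vec<A>> {
    EqDict {
        eq: Box::new(move |xs: &Vec<A>, ys: &Vec<A>| {
            xs.len() == ys.len()
                && xs.iter().zip(ys.iter()).all(|(x, y)| (eq_a.eq)(x, y))
        }),
    }
}

// `discr` takes the dictionary as an explicit extra argument.
fn discr<A>(eq_a: &EqDict<A>, x: &A, y: &A) -> i32 {
    if (eq_a.eq)(x, y) { 1 } else { 0 }
}

fn main() {
    // The elaboration witness for `Eq (List Int)` is the term
    // `eq_list(eq_int())` -- an ordinary Rust expression.
    let dict = eq_list(eq_int());
    assert_eq!(discr(&dict, &vec![1], &vec![2]), 0);
    assert_eq!(discr(&dict, &vec![1], &vec![1]), 1);
}
```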

The Prolog rules presented in the blog post explain *whether* there exists a solution to this elaboration problem, but not *what* the solution is. There are two ways to describe the relation between these two questions:

- if you are familiar with logic programming and proof theory, you will say that the *proof term(s)* implicitly generated by the Prolog search have exactly the structure of the elaboration witness you are looking for; the only thing you have to do, after the search has succeeded (or possibly found two candidates, for example), is to transform these proof derivations into terms
- if you are familiar with functional programming and type theory, you will say that you are actually doing *program search*, searching for a program (of type `eq (List Int)` in my example), and that the reason this looks like Prolog is the Curry-Howard isomorphism (the type system of your elaboration terms also describes a logic).
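To make the program-search view concrete, here is a toy sketch (in Rust, with made-up names): the goal is a type, the two match arms mirror the two instance declarations above, and a successful search returns the witness *term* rather than a bare yes/no:

```rust
// A toy witness search. Goals are (pseudo-)types, and the result of a
// successful search is the elaboration witness as a term.
#[derive(Debug, Clone, PartialEq)]
enum Ty {
    Int,
    List(Box<Ty>),
}

// Search for a witness of `Eq ty`. The two clauses mirror:
//   instance Eq Int
//   instance Eq a => Eq (List a)
fn search_eq(ty: &Ty) -> Option<String> {
    match ty {
        Ty::Int => Some("eq_int".to_string()),
        Ty::List(elem) => {
            // Recursive subgoal, like the premise of a Prolog clause;
            // the sub-derivation becomes a sub-term of the witness.
            let sub = search_eq(elem)?;
            Some(format!("eq_list({})", sub))
        }
    }
}

fn main() {
    let goal = Ty::List(Box::new(Ty::List(Box::new(Ty::Int))));
    // The witness is exactly the term we could show to the user.
    assert_eq!(search_eq(&goal), Some("eq_list(eq_list(eq_int))".to_string()));
}
```

The Prolog view reads `search_eq` as clause resolution; the type-theoretic view reads it as building a program of the goal type. It is the same recursion either way.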

Of course, neither of these two views is intrinsically better than the other: they are both perfectly correct, and it is interesting to keep both perspectives in mind. Which one matters more depends on the specific question you are asking about the system you are studying. (I think this is important to point out because the original blog post is not balanced in this respect: it only presents the logic-programming view of the problem.)

Here are some reasons why I think also working out the type-theoretic view (that elaboration witnesses are *terms* that we are searching for) is important:

- The terms in this story really influence the runtime semantics of your programs; they are not just formal objects whose existence matters but whose precise details can be ignored (as is common in logic, and is sometimes the perspective on Prolog derivations).
- You will want to show those terms to the user, and explain the behavior of the search by using them. (This is especially helpful if the terms can be expressed in Rust itself, as you end up talking about program search in the language the user already knows, and not about programming in a completely different logic language.) This is important for debugging, and also for understanding the compilation, optimization opportunities, and performance model of programs – which matters to the Rust community.
- If there are cases where you cannot enforce consistency of trait resolution (there are locally two distinct solutions to a trait elaboration problem), the term view is essential, I think, to reason about what should be done about it. For now Rust has fairly strict restrictions on trait declarations to avoid inconsistency issues, but those are in tension with modularity, so you will get requests to relax them and may end up having to tolerate some amount of inconsistency as the design evolves.
- Finally, and I think this is maybe the most important point, the equalities that you are concerned with are the equalities between the terms and the types of your language. Equalities between terms matter in order to understand when two elaboration paths are in fact equivalent (and thus unambiguous), for example resolving `Eq Int` directly compared to going through `Ord Int => Eq Int` in Haskell (the witness of the latter resolution path is equivalent, as a term, to the witness of the former). Equalities between types matter as part of the type-directed search for witnesses. The blog post explains that trait queries are lowered to propositions in some unspecified logic whose propositions are given a somewhat Rust-like syntax; but what really happens is that the goals of elaboration precisely use *Rust types*, and the equalities are the equality relations of the Rust type system.
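A sketch of that last equivalence, with hypothetical names and a superclass-style dictionary (this is an illustration of the idea, not Haskell's or Rust's actual superclass machinery): two resolution paths produce two different witness terms, and deciding that the ambiguity is harmless amounts to checking that the terms are equivalent.

```rust
// Dictionary for `Eq Int`: just a comparison function.
struct EqDict {
    eq: fn(&i32, &i32) -> bool,
}

// Dictionary for `Ord Int`, carrying its `Eq` superclass dictionary.
struct OrdDict {
    eq_super: EqDict,
    lt: fn(&i32, &i32) -> bool,
}

fn eq_int() -> EqDict {
    EqDict { eq: |a, b| a == b }
}

fn ord_int() -> OrdDict {
    OrdDict { eq_super: eq_int(), lt: |a, b| a < b }
}

fn main() {
    // Path 1: the direct instance.        Witness term: eq_int()
    // Path 2: project out of `Ord Int`.   Witness term: ord_int().eq_super
    let w1 = eq_int();
    let w2 = ord_int().eq_super;
    // The two witnesses are equivalent as terms: they denote the same
    // comparison, so the ambiguity between the two paths is harmless.
    for (a, b) in [(3, 3), (3, 4), (0, -1)] {
        assert_eq!((w1.eq)(&a, &b), (w2.eq)(&a, &b));
    }
}
```

Reasoning about when such witnesses coincide is exactly where a notion of term equality earns its keep.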

If you see trait search as elaboration into an existing logic, you will be able to apply existing proof search techniques exactly unchanged, but every desirable property of your design will have to be checked through the encoding from types to logic propositions, and from logic proof terms to programs or elaboration witnesses. If you see trait search as term search in your existing type system, you will have to re-interpret existing proof search methods inside a slightly different setting, but the terms and the types manipulated will be those that users know and think about. You need to combine both views to productively think about the problem.