I find it remarkable that the “no filters at all” rule raises several interesting questions with regard to other corners of your language design. This seems to suggest that, even if you don’t adopt it in the end, making sure that it would work well is a nice way to clean up other aspects of the language.

#### Interactions with inference

I think that the problem you describe is the sign of an implementation smell: if you know that *both* `&'a T: Hash` and `for<'x> &'x T: Hash` hold, only the latter should be considered; more generally, I would expect inference to be monotonic: the more you learn, the more general the types you infer.
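
To make the two bounds concrete, here is a minimal Rust sketch (the function name `hash_of` is hypothetical, purely for illustration). The higher-ranked bound `for<'x> &'x T: Hash` is the more general of the two: it subsumes `&'a T: Hash` for any specific `'a`, which is why it is the one inference should keep.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical helper: the higher-ranked bound works for a borrow at
// *any* lifetime, so it subsumes `&'a T: Hash` for one specific 'a.
fn hash_of<T>(value: &T) -> u64
where
    for<'x> &'x T: Hash,
{
    let mut h = DefaultHasher::new();
    value.hash(&mut h); // the bound is instantiated at `value`'s lifetime
    h.finish()
}

fn main() {
    // `DefaultHasher::new()` is deterministic within a process, so
    // hashing the same value twice yields the same digest.
    assert_eq!(hash_of(&42u32), hash_of(&42u32));
    println!("ok");
}
```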

#### Declaration-site implicitness declarations?

In Agda, you write `(x : A)` in a declaration to quantify over a variable whose value will be explicitly provided by the caller at the use site, and `{x : A}` to quantify over a variable that will be implicitly provided at the use site. It might make sense to let the people declaring a bound decide whether it should be implicitly implied at the use sites of their declarations or explicitly provided by the context.
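
A loose Rust analogy to the Agda distinction (the function `first` is a made-up example): type parameters behave like Agda's `{x : A}`, normally inferred at the call site, but the turbofish syntax lets the caller opt into the explicit `(x : A)` style.

```rust
// Rust type parameters are implicit at use sites, like Agda's {x : A}:
// the compiler infers them, but they can be supplied explicitly.
fn first<T: Clone>(items: &[T]) -> Option<T> {
    items.first().cloned()
}

fn main() {
    let xs = vec![1, 2, 3];
    let a = first(&xs);        // T inferred implicitly, as with {x : A}
    let b = first::<i32>(&xs); // T provided explicitly, as with (x : A)
    assert_eq!(a, Some(1));
    assert_eq!(b, Some(1));
}
```

The difference from Agda is that Rust fixes this choice per use site rather than per declaration; the question above is whether the declaration should get to pick the default.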

I am not convinced, however, that this is a good idea. Is there an API-design case for having this flexibility, or is there rather a good default for everyone?

#### Implicitly implied from where?

In Haskell, using a type-class operation in a function body suffices to have the corresponding type-class constraint inferred in the function signature. This works well for the Haskell community, but it is frequently described as “a bit too implicit” – in Scala, for example, implicit parameters must be explicitly abstracted over in the method declaration.

So far, you are only proposing implicit assumptions at the level of a single type signature: extra bounds would be implied by the well-formedness constraints of types appearing in the function signature, not by other parts of the program (such as the body of the function with this signature). This is a more restricted form of inference and, I believe, significantly safer / more conservative.
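
Concretely, in current Rust (a sketch with made-up names), a bound stated on a type definition must be repeated on every function whose signature mentions that type; the signature-level proposal would derive the bound from the signature's well-formedness alone, never from the body:

```rust
use std::hash::Hash;

// A container whose definition carries a bound.
struct Tagged<T: Hash>(T);

// Today the bound must be repeated here. Under signature-level implied
// bounds, `T: Hash` would follow from `Tagged<T>` being well-formed in
// the signature -- crucially *not* from anything in the function body.
fn unwrap_tagged<T: Hash>(t: Tagged<T>) -> T {
    t.0
}

fn main() {
    assert_eq!(unwrap_tagged(Tagged(7u32)), 7);
    println!("ok");
}
```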

#### `Hash<NotHash>`: Consistency of abstractions

In his PhD thesis¹, Julien Crétin worked on the question of “when can an assumption safely be made implicit?” and its relation to the question of “when is it sound to reduce/compute under an assumption?”. He proposed² the idea of classifying abstractions (function parameters, be they lifetime parameters, type parameters, values/arguments, or even bounds) into *consistent* abstractions and *inconsistent* abstractions. An abstraction abstracts over a class of objects (or over a logical proposition), and it is consistent when the class of objects is known to be non-empty, or when the proposition is true – otherwise it is inconsistent.

For example, abstracting over a lifetime `'a` or a type `T` is always consistent, because we know that some lifetimes and types do exist. Subtyping-bounded abstractions, abstracting over “a type `T` such that `T <= Foo` holds”, are also consistent, as we know that `Foo` itself is in the class of valid types. Abstracting over a term variable of type `Hash<NotHash>` is inconsistent, as we know that there is no value of this type – the body of such a function would be dead code.
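
The dead-code situation can be shown directly in Rust with an uninhabited type (`Void` and `absurd` are illustrative names, not anything from the standard library):

```rust
// An uninhabited type: no value of `Void` can ever be constructed.
enum Void {}

// Abstracting over a term of an uninhabited type is an *inconsistent*
// abstraction: the body of this function is dead code, and the empty
// `match` makes that explicit.
#[allow(dead_code)]
fn absurd(v: Void) -> u64 {
    match v {} // no arms are needed: this point is unreachable
}

fn main() {
    // `Option<Void>` has exactly one inhabitant, `None` -- a small
    // runtime witness that `Void` itself is empty.
    let only: Option<Void> = None;
    assert!(only.is_none());
    println!("ok");
}
```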

An interesting aspect of this design is that *grouping abstractions matters*. It is not the same thing to abstract over all `T <= Foo`, and to abstract over all `T` first and then later assume that `T <= Foo`. In the latter case, the second abstraction is inconsistent, in the sense that for an abstract `T` we cannot prove that `T <= Foo` holds. (This is the shape of GADT assumptions: they add equalities/constraints over a pre-existing type parameter, and they are inconsistent abstractions that must block reduction.)

I think that a reasonable take-away from Julien’s work is that it makes sense to distinguish consistent and inconsistent abstractions in a programming language, and to force users to be *explicit* about the fact that a given abstraction is inconsistent. This makes it possible to report a warning or error when an abstraction that they believe is consistent in fact is not, which often corresponds to a programming error. I believe that this approach is a good solution to the usual C++-templates problem of “we cannot see whether we made a mistake without trying to use the new definition in client code”.

In particular, allowing consistent implied bounds to remain implicit while asking for inconsistent implied bounds to be explicit would be an interesting “filter” to consider.

However, I think that the question of consistency of trait bounds should be treated with care: when is the abstraction over `T where T: Trait` consistent? Formally, it should be consistent if there exists at least one `T` such that `T: Trait` holds – but what is the scope of declarations that should be searched? This question interacts directly with the part of your language design concerned with the scoping of trait instances (orphan instances, etc.); it may or may not be the case that the type-checker can take a closed-world view of trait declarations and know for sure that a trait has no inhabitants.
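
Both situations can be sketched in Rust (all trait and function names below are invented for the example). A bound backed by at least one impl in scope is consistent; a bound on a trait with no impls type-checks but admits no call site, and whether a compiler may *conclude* that depends on whether downstream crates are allowed to add impls, which is exactly the orphan-rule question:

```rust
// `Inhabited` has an impl in this crate; `Uninhabited` has none.
trait Inhabited { fn tag(&self) -> u64; }
trait Uninhabited { fn tag(&self) -> u64; }

struct Unit;
impl Inhabited for Unit { fn tag(&self) -> u64 { 1 } }

// Consistent relative to this crate: `Unit` witnesses the bound.
fn with_inhabited<T: Inhabited>(x: T) -> u64 { x.tag() }

// Type-checks, but inconsistent relative to this crate: no type
// satisfies the bound, so no call site can be written here. A
// downstream crate could still add an impl and make it consistent.
#[allow(dead_code)]
fn with_uninhabited<T: Uninhabited>(x: T) -> u64 { x.tag() }

fn main() {
    assert_eq!(with_inhabited(Unit), 1);
    println!("ok");
}
```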

¹: See the article-sized version of his work, *System F with Coercion Constraints* (PDF), Julien Crétin and Didier Rémy, 2014. I myself did a bit of work on the topic, looking at how to design language features for *inconsistent* abstractions (those that should block reduction), which is less relevant to the current discussion, but see these slides just in case.

²: The question of consistency of contexts was also discussed in other works around the same time, for example *System FC with Explicit Kind Equality* by Stephanie Weirich, Justin Hsu, and Richard A. Eisenberg, 2013, but there it is more of a side concern and, in particular, the question of which abstractions preserve consistency is not studied in depth.