I’ve been working with @nikomatsakis and @withoutboats on Chalk, and have written up a post about the piece I’ve been working on: negative reasoning, which is vital for coherence checking, type checking, and specialization. Take a look and let me know what you think!

I am not yet expert enough to help you, but regarding this:

> It will consolidate our existing, somewhat ad hoc implementation into something far more principled and expressive,

That's one of the important differences between a good language and a probably hopeless one.

I don’t know, but it sounds a lot to me like some kind of intuitionistic type theory could be used, one that drops the law of excluded middle so that a proposition can remain *not yet* proved or disproved.

Leaving some proposition as a `Maybe` is at least a conservative stance, should the prover become more powerful in the future.
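One way to make that stance concrete is a three-valued answer with Kleene-style conjunction, where `Maybe` records “not yet proved or disproved”. This is a sketch of my own, not Chalk’s actual types:

```rust
// A three-valued proof result: `Maybe` is the conservative "not yet
// proved or disproved" outcome. (Hypothetical sketch, not Chalk's API.)
#[derive(Clone, Copy, Debug, PartialEq)]
enum Answer {
    Yes,
    No,
    Maybe,
}

// Kleene-style conjunction: a definite `No` wins, a definite `Yes`
// requires both sides, and anything else stays `Maybe`.
fn and(a: Answer, b: Answer) -> Answer {
    use Answer::*;
    match (a, b) {
        (No, _) | (_, No) => No,
        (Yes, Yes) => Yes,
        _ => Maybe,
    }
}

fn main() {
    use Answer::*;
    assert_eq!(and(Yes, Maybe), Maybe); // stays open until actually proved
    assert_eq!(and(No, Maybe), No);     // a refutation is definite
    println!("three-valued conjunction ok");
}
```

The point is that `Maybe` never silently upgrades to `Yes`: a stronger future prover can only turn it into a definite answer.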

There’s a close relationship between modal logic and intuitionistic logic: one of the basic models of intuitionistic logic *is* a possible-worlds interpretation! Put differently, the use of the `compat` modality here just gives you an intuitionistic logic.

I’m a little confused about how `compat` interacts with type inference. In an example like

```
trait Foo<T> {
    fn make() -> T;
}

trait Bar<T> {}

#[derive(Debug)]
struct Local;

impl<T> Foo<Vec<T>> for () where (): Bar<T> {
    fn make() -> Vec<T> {
        panic!()
    }
}

impl Foo<Local> for () {
    fn make() -> Local {
        Local
    }
}

fn main() {
    println!("{:?}", <()>::make());
}
```

Suppose we are selecting `(): Foo<?T>`. If we restrict to the first impl, `exists<T> { compat { (): Foo<T> using impl #1 } }` is `Maybe { T = Vec<?U> }`, while the second impl gives `Maybe { T = Local }`. Do we not unify anything (in contrast to the example in the OP) because both impls unify type-vars?

Another possibility would be to do “impl candidate selection” in classical mode: if we get a solid “no” for all candidates but one, “apply” that impl to advance type-checking, but otherwise do all typeck selection in compat mode. That is more “leaky” with respect to specialization than the current situation (I need to find a good example), but it should be fairly principled.
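For a feel of what “applying the single surviving candidate” means, here is a minimal sketch (the trait and impl here are my own invention, not from the post): with exactly one candidate impl, today’s inference can already commit to it and unify the type variable:

```rust
trait Make<T> {
    fn make() -> T;
}

struct Local;

impl Make<Local> for () {
    fn make() -> Local {
        Local
    }
}

fn takes_local(_: Local) {}

fn main() {
    // The obligation `(): Make<?T>` has exactly one candidate impl, so
    // inference can commit to it and conclude ?T = Local with no
    // annotation; a second `Make<_> for ()` impl would make this
    // ambiguous instead.
    let x = <() as Make<_>>::make();
    takes_local(x); // only type-checks because ?T was unified with Local
    println!("selected the only candidate");
}
```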

I agree that the strategy described in the post for picking between options seems too simplistic, but it feels like we could propagate a bit more information back to handle this case. In particular, the thing that @aturon was going for, I believe, was that we would be looking for candidates that "are only yielding `maybe` because of the closed-world assumption" (to some approximation). If we propagated that back up (why we chose `Maybe`, essentially), that might be enough to pick at each level, no?

(The current compiler actually has a rather more conservative strategy than this, which also gives us some room. The most notable thing is that we refuse to proceed if the `Self` type is unknown, because we are afraid it might wind up resolved to an associated type projection. I would really like to do away with this hesitation in chalk, ideally by having some other inference rules that let us go from the fact that a projection exists to the trait's WF conditions; this is related to chalk#12.)

I just wrote a long post in the thread related to the earlier blog posts on Chalk about a related topic, so I’ll keep this one shorter.

I have been working in this area for several years, and the problem you encountered with ‘maybe’ outcomes seems to be handled by “deferred goals”, which is one technique used in Prolog engines to implement constraints like disequality (and even inequalities on numeric “kinds”).

A quick summary: disequalities in types/logic are easy to deal with by deferring the goal (and the output can be the unsolved deferred goals in the case of a ‘maybe’, which can be pretty-printed as disequalities or inequalities), but negation is tough. I have not found any of the simple treatments of negation to be sound (failure, inconsistency, etc.), and right now the only sound option appears to be to rewrite the clauses into disequalities, which is easy for simple cases (“not(X = Y)” becomes “X /= Y”) but harder for the general case, including recursion.
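To illustrate the deferred-goal technique, here is a small self-contained sketch (my own toy term representation and names, not any real engine): a disequality is checked by trial unification, and when the answer depends on still-unbound variables the goal is deferred rather than decided:

```rust
use std::collections::HashMap;

// Toy first-order terms: inference variables and applied constructors.
#[derive(Clone, Debug, PartialEq)]
enum Term {
    Var(usize),
    App(&'static str, Vec<Term>),
}

type Subst = HashMap<usize, Term>;

// Chase variable bindings so we always look at the representative term.
fn resolve(t: &Term, s: &Subst) -> Term {
    match t {
        Term::Var(v) => match s.get(v) {
            Some(bound) => resolve(bound, s),
            None => t.clone(),
        },
        Term::App(f, args) => Term::App(f, args.iter().map(|a| resolve(a, s)).collect()),
    }
}

// Plain unification (no occurs check or trailing, to keep the sketch short).
fn unify(a: &Term, b: &Term, s: &mut Subst) -> bool {
    let (a, b) = (resolve(a, s), resolve(b, s));
    match (a, b) {
        (Term::Var(x), Term::Var(y)) if x == y => true,
        (Term::Var(x), t) | (t, Term::Var(x)) => {
            s.insert(x, t);
            true
        }
        (Term::App(f, xs), Term::App(g, ys)) => {
            f == g && xs.len() == ys.len() && xs.iter().zip(&ys).all(|(x, y)| unify(x, y, s))
        }
    }
}

// Outcome of a disequality goal `a /= b`.
#[derive(Debug, PartialEq)]
enum DisEq {
    Holds,    // the terms can never be equal: a definite "yes"
    Fails,    // the terms are already equal: a definite "no"
    Deferred, // depends on unbound variables: the "maybe" case
}

// Decide `a /= b` by trial unification against a scratch substitution:
// failure means the terms can never be equal, success with no new
// bindings means they already are, and success that had to bind
// variables means the goal must be deferred.
fn dis_eq(a: &Term, b: &Term, s: &Subst) -> DisEq {
    let mut trial = s.clone();
    if !unify(a, b, &mut trial) {
        DisEq::Holds
    } else if trial.len() == s.len() {
        DisEq::Fails
    } else {
        DisEq::Deferred
    }
}

fn main() {
    let mut s = Subst::new();
    let a = Term::App("a", vec![]);
    let b = Term::App("b", vec![]);
    let x = Term::Var(0);
    assert_eq!(dis_eq(&a, &b, &s), DisEq::Holds);    // a /= b outright
    assert_eq!(dis_eq(&x, &a, &s), DisEq::Deferred); // X unbound: defer
    unify(&x, &b, &mut s);                           // later, X = b
    assert_eq!(dis_eq(&x, &a, &s), DisEq::Holds);    // deferred goal discharged
    println!("deferred disequality sketch ok");
}
```

A real engine (e.g. Prolog’s `dif/2`) additionally re-wakes each deferred goal whenever one of its variables gets bound; the sketch just re-checks on demand.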

Regarding specialisation, most-specific match can lead to instability (new code changing existing behaviours). Although I think it is a viable approach, I prefer requiring coherence (no overlapping impls), but using the disequality constraint to disambiguate. For example, the following is problematic:

```
(a).
(X).
```

But this is not:

```
(a).
(X), X /= a.
```

This requires that typeclass dependencies take part in matching (i.e. the bounds of a trait implementation must be satisfied to choose it). This must be resolved recursively, and requires backtracking to solve. I don’t know if I am the first to say this, but I have said it before on LtU: “any sufficiently complex type system ends up re-implementing a Prolog-like logic language badly”.

@aturon great post! I wonder whether the other modality (is it called the diamond modality?) can be used to check whether two crate interfaces are compatible. E.g. given foo-1.0 and foo-1.1, take the former to be necessary and ask if the latter is possible.

Negation tricks allow one to encode one modality with the other, so all that possibly remains, I suppose, is to encode the interfaces of crate items that aren’t relevant to the trait system in chalk.

Hmm, I’m confused about all the talk of excluded middle and intuitionism. In classical logic, there is still the “yes, no, maybe” of (a) deriving a proof of a formula, (b) deriving a proof of its negation, or (c) establishing that the formula is independent of the given context/axioms.

Negation-as-failure basically turns the negations of independent propositions into additional axioms, but this makes for a non-monotonic logic (https://plato.stanford.edu/entries/logic-nonmonotonic/), whereas “orthodox” classical and intuitionistic logics alike are monotonic.

Provability logic (https://plato.stanford.edu/entries/logic-provability/) is a modal logic. While classical logic leads to independence and a logic of provability, the logic of provability itself is not really classical.

@sanxiyn thanks for linking me that—I had never heard of provability logic before—but I’m afraid I’m missing how it’s relevant.

I think I have the answer to my own question: I was making a mistake in thinking that the lack of excluded middle and negation-as-failure were somehow related. Modal logic alone is indeed fine with `¬□p ∧ ¬□¬p`, and negation-as-failure does indeed make for a non-monotonic logic, but those two relationships have nothing to do with each other.

The interplay is because a single world, in the conventional semantics for model theory, must assign a truth value to all atomic propositions. Those worlds are closed, and so negation-as-failure is a sound “trick” to avoid keeping track of the false atoms (if we did keep track of all of them, the context would be “maximal”). At the same time, that means that any set of *multiple* distinct worlds must have some case where `world-0 ⊨ p` while `world-1 ⊭ p`, so negation-as-failure or excluded middle within the box modality is ruled out, as we then have `worlds ⊨ ¬□p ∧ ¬□¬p`. This means that the box modality’s lack of excluded middle can be seen as reifying normal/classical independence.
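That last point can be checked mechanically. In this little sketch (plain Rust, my own formulation), □p is read as “p holds in all worlds”, and a two-world model that disagrees on p validates `¬□p ∧ ¬□¬p`:

```rust
fn main() {
    // Each world's valuation of the single atom p.
    let worlds = [true, false];
    let box_p = worlds.iter().all(|&p| p);      // □p: p holds in every world
    let box_not_p = worlds.iter().all(|&p| !p); // □¬p: ¬p holds in every world
    // The worlds disagree on p, so neither □p nor □¬p is forced:
    assert!(!box_p && !box_not_p); // ¬□p ∧ ¬□¬p
    println!("the two-world model validates ¬□p ∧ ¬□¬p");
}
```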
