Pre-RFC: make self less special

trait Logger {
  fn log(&mut self, record: &LogRecord);
}

impl Logger for Foo

becomes (sans syntax bikeshedding):

trait Logger<self A> {
  fn log(&mut self, record: &LogRecord);
}

impl Logger<Foo>


This RFC:

  1. Makes traits like Add, which currently have to arbitrarily choose one operand to be self, no longer have to do so. Instead, they can choose not to have a self at all.

  2. Makes trait inheritance syntax unnecessary:

    trait Foo : Bar becomes

    trait<A: Bar> Foo
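For context on point 1, here is how today's std::ops::Add arbitrarily privileges the left-hand operand as Self, even though addition is symmetric (a minimal sketch; the Meters newtype is a hypothetical example):

```rust
use std::ops::Add;

struct Meters(f64);

// The left operand becomes Self; the right operand is just a parameter.
impl Add for Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters {
        Meters(self.0 + rhs.0)
    }
}
```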

Alternative syntax

trait Logger<Self>


If it went that way, not only trait inheritance but the whole : Trait syntax would be redundant:

trait Foo<self A> where Bar<A>

This is how Haskell works.
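For comparison, today's Rust already treats trait inheritance as sugar for a bound on Self, which is what this alternative would generalize. A minimal sketch of the existing equivalence (Foo, Bar, and S are hypothetical names):

```rust
trait Bar {
    fn bar(&self) -> u32;
}

// Equivalent to `trait Foo: Bar` in today's Rust;
// the proposal would make the where-clause form the only primitive.
trait Foo where Self: Bar {
    fn foo(&self) -> u32 {
        self.bar() + 1
    }
}

struct S;
impl Bar for S { fn bar(&self) -> u32 { 41 } }
impl Foo for S {}
```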



This has been proposed before. I am generally not keen on the idea. Among other things, Self is special – it is the type which can be erased for trait objects. I realize that we could generalize into full-on existential types a la Haskell, but I have little desire to do that, and it would have more implications than it might appear at first glance. I quite like the symmetry between the object-oriented and functional styles that we have achieved. I also find that, for most traits, there is a “primary” type and so having Self be special feels like a good fit (admittedly this is not particularly true for binary operators, but I think they are more the exception than the rule).



I have a couple of mixtures of notes and partially-written RFCs (“Deconflate…” and “Existentially…”) along vaguely similar lines but more radical (and hence vanishingly improbable), if anybody cares.

I realize that we could generalize into full-on existential types a la Haskell

I wouldn’t really call what Haskell has “full-on”… are you thinking of the ability to wrap existential types behind data constructors? This is pretty limited and I think Rust’s existing notion of DSTs is already more powerful in some ways (IIRC, but possibly not).



This RFC is 2 things:

  1. Treat Self as a normal parameter in the impl and trait syntax. The advantage of this is that it reduces complexity and makes “trait inheritance” unnecessary. IMO, this doesn’t make Self feel any less special, especially if it’s given special treatment, say forcing it to be the first parameter or giving it special impl<Self = Foo> syntax.

  2. Allow traits without Self. This has the advantage of not forcing traits such as Add to have a Self value, and having the trait objects of such traits not awkwardly store the arbitrary Self value (trait objects of traits without Self would just be a vtable). This is an improvement on the current state of affairs where non-object oriented traits such as operators erase an arbitrarily chosen value. Personally, I remember reaching for trait objects without Self (which this would solve) in the past.
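To illustrate the “just a vtable” point: in today's Rust one can approximate a Self-less trait object by implementing the trait on zero-sized marker types, so the erased value carries no data and the object is effectively only a vtable (Codec, Json, and Yaml are hypothetical names for this sketch):

```rust
trait Codec {
    fn name(&self) -> &'static str;
}

// Zero-sized types: the "value" behind the trait object holds no data.
struct Json;
struct Yaml;

impl Codec for Json { fn name(&self) -> &'static str { "json" } }
impl Codec for Yaml { fn name(&self) -> &'static str { "yaml" } }

fn pick(json: bool) -> &'static dyn Codec {
    if json { &Json } else { &Yaml }
}
```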



I also find that, for most traits, there is a “primary” type and so having Self be special feels like a good fit

Most traits have one type. That does not mean that for most multi-parameter traits there is a principal type.

I quite like the symmetry between the object-oriented and functional styles that we have achieved

I am not sure I see in what sense Rust is still really an object-oriented language unless one uses a loose definition for OOP. Is it because it still allows the metaphor of “applying methods to an object” via the dot notation? Isn’t that, 90% of the time, a purely syntactic characteristic once you get rid of the metaphor? Traits could have been considered as contracts between several types rather than reusing the OOP idea of behaviors inherent to instances of a single type. Sure, multi-dispatch mitigates this asymmetry, but it also makes more obvious that the conceptual purpose of Self is (to me) rather weak in the global balance.

Now, as you mention them, it is true that dynamic linking through trait objects arguably feels pretty OO but:

  • I have always felt that Rust philosophy deliberately emphasizes parametric polymorphism over sub-typing polymorphism (and consequently over OOP);
  • as designed, trait objects currently exclude (AFAIU) a certain category of traits, because they require conforming to the OOP philosophy of privileging (at most) one parameter of the method.

In the end I feel like the enduring presence of a few OOP features in the global and (according to me) not-so-symmetric design results in some limitations rather than actual strengths when considered from the perspective of pure expressiveness. As has already been mentioned, it may be a successful gamble to keep this OOP flavor as a marketing strategy: make standard (C++?) programmers feel comfortable and decrease the intimidation factor for newcomers (and I agree this is a valid argument). I also agree that even if the current design is kept it is already very expressive :grinning: But sometimes I have the impression some Rustinians believe Rust intrinsically needs to be/remain as object-oriented as possible. I don’t.



nikomatsakis wrote:

I realize that we could generalize into full-on existential types a la Haskell, but I have little desire to do that, and it would have more implications than it might appear at first glance.

Would you mind elaborating on why? Is it because of the complication with monomorphization? IIRC, Cyclone had existentials. It seems there is some overlap with DSTs, but I have had difficulty using them.

If Rust someday gets HKTs, I feel the lack of existentials and higher-rank polymorphism are going to become a much more noticeable limitation. I don’t think Rust should necessarily do everything Haskell does, but for fancier math/algebra stuff one might want to implement, these are useful features.



Can you elaborate on this?



Well, for one Existentials + traits + associated types is a whole lot of fun. But there are certain types of existential (e.g. concrete constrained associated type, existential phantom type), that don’t cause too many problems, and can’t really be expressed with the current syntax.
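For reference, the first of these — an existential whose associated type is concretely constrained — is expressible as a trait object in today's Rust, even though it wasn't at the time of this thread (the evens helper is a hypothetical name):

```rust
// An existential "some iterator, whose Item is concretely u32":
// the iterator's type is erased, the associated type is pinned down.
fn evens() -> Box<dyn Iterator<Item = u32>> {
    Box::new((0..10).filter(|n| n % 2 == 0))
}
```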



One thing I’ve wanted which I haven’t been able to do nicely without existentials is Coyoneda (specialized for a given functor of course). I need it to be able to encode signature functors for free monad macros.

For example (using Haskell notation)

data Exists f = MkEx { runEx :: forall r. (forall a. f a -> r) -> r }
data CoyonedaF f a i = CoyonedaF { k :: i -> a, fi :: f i }
newtype Coyoneda f a = Coyoneda (Exists (CoyonedaF f a))
data Free f a = Return a | Join (f (Free f a))
data Signature r x
  = Ask (r -> x)
  | Local (r -> r) (Coyoneda (Reader r) x)
  deriving (Functor)
newtype Reader r a = Reader { toFree :: Free (Signature r) a }

Some I can do without it:

pub enum Signature<'a, S, X> {
    Get(Box<FnOnce<(S,), X> + 'a>),
    Put(S, X),
}

monad!(State, Signature, map, [ S, ])

I attempted something with associated types at one point but couldn’t even try it because boxed traits weren’t working with them:

struct F<'a, A>;
trait CoyonedaFT<'a, A: 'a> {
    type I;
    fn k(i: <Self as CoyonedaFT<'a, A>>::I) -> A;
    fn fi() -> F<'a, <Self as CoyonedaFT<'a, A>>::I>;
}
type CoyonedaF<'a, A> = Box<CoyonedaFT<'a, A> + 'a>;

If there’s a way to do this using DSTs or something else aside from unsafe casts, I’d be very interested to see how. Even using casts it’s tricky if you want to avoid multiple traversals when you lower to the underlying functor, which kind of kills the whole map fusion thing.
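For what it's worth, a Coyoneda specialized to one concrete functor (Vec here) can be sketched in safe Rust with boxed closures, though it illustrates exactly the limitation above: without real existentials, the composed function can't be stored separately from the container, so each map wraps another boxed thunk and the traversals are not fused (CoyonedaVec and its methods are hypothetical names):

```rust
// Each `map` layers a boxed closure over the previous one; the work is
// deferred until `lower`, but there is one traversal per `map` call,
// not a single fused pass.
struct CoyonedaVec<A> {
    thunk: Box<dyn FnOnce() -> Vec<A>>,
}

impl<A: 'static> CoyonedaVec<A> {
    fn lift(v: Vec<A>) -> Self {
        CoyonedaVec { thunk: Box::new(move || v) }
    }

    fn map<B: 'static>(self, f: impl Fn(A) -> B + 'static) -> CoyonedaVec<B> {
        CoyonedaVec {
            thunk: Box::new(move || (self.thunk)().into_iter().map(f).collect::<Vec<B>>()),
        }
    }

    fn lower(self) -> Vec<A> {
        (self.thunk)()
    }
}
```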



It’s like self in Lua. In Lua, self is not a keyword; Lua’s colon syntax implicitly names the first argument self. Personally I would be happy if Self became a keyword.



Thinking out loud a bit:

I don’t think the trait definition makes sense, as it implies that given a CoyonedaFT trait object, you can somehow directly call k and fi, despite not knowing what types they are supposed to be (because they depend on I, which is runtime-variable in the Haskell version).

In Haskell, lowerCoyoneda (as defined in kan-extensions) takes a Coyoneda instance (which does not itself know about Functors) and, given a functor instance F and A, instantiates (to translate into Rust) F::fmap::<B, A>! So all implementations of fmap have to have dictionary-passing-based machine code implementations lying around, but they probably want to extract data from b (e.g. for Maybe you need to extract the inner value from Just, which means record-offset data needs to be in those dictionaries and everything memcpy'd around), which adds a lot of complexity… let me know if I’m wrong.

To make this more straightforward to generate code for, I think you want each Coyoneda instance to itself store the code to lowerCoyoneda it, as this way the calling convention doesn’t need to know anything about B. i.e. B is Self, the erased type (if you really want multiple erased types you could impl on a tuple, but you don’t need it here). Since you’re already treating the idea of Functor outside the type system (as required, thanks to the lack of HKT), it’s possible to only implement it for Coyoneda instances for functors, but… then how do you map between different functors? Too tired to figure that out.

But you aren’t even doing that, are you? You’re not taking advantage of Coyoneda's ability to work with a non-functor f, or converting between functors. So all you end up with is

trait CoyonedaForSomeFunctor<A> {
   fn lower_coyoneda(&self) -> SomeFunctor<A>;
}
But obviously this is just an indirect way to store SomeFunctor<A> (lazily), hardly useful at all.

At this point I’m a little lost because I don’t understand your Signature; anyway, I need to sleep. But I’m not sure how Coyoneda is actually helpful. Maybe I did something wrong.



What happens if you attempt to implement a trait twice in two different modules?



This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.