Looking over some of the recent discussion has got me thinking, and I think I can now clearly articulate a good question about landing an initial implementation of generators/coroutines in the compiler: how certain do we want to be of the syntax before landing it? The crucial piece here is landing in the compiler, not stabilization. I would want clear community consensus on all syntactic decisions before stabilization, of course!
I would personally think that we don’t need to 100% nail down the syntax before landing this as an unstable feature on nightly (i.e. for the time being). Much of async/await, for example, doesn’t actually end up exposing the underlying generator/coroutine syntax anyway! That’s not to say we should accept just anything, however, because certain decisions will affect the implementation in the compiler as well.
One of the major differences I noticed between @vadimcn’s RFC and @Zoxc’s implementation is how a generator/coroutine is defined. In the RFC a coroutine is a “closure with the `yield` keyword in it,” whereas in the implementation you call a function/closure containing a `yield` keyword to get a generator. For example, in @vadimcn’s RFC you’d write:
```rust
fn range(a, b) -> impl Generator {
    || {
        for x in a..b {
            yield x;
        }
    }
}
```
whereas in @Zoxc’s implementation I believe you write:
```rust
fn range(a, b) -> impl Generator {
    for x in a..b {
        yield x;
    }
}
```
This, I believe, has a lot of ramifications for the implementation in the compiler itself. The way type checking and the like interact would, I suspect, change at a relatively deep level depending on the decision here. I believe there’s another feature in @Zoxc’s branch called `gen arg` to support this syntax as well, but I’m not sure I quite understand it. @Zoxc, maybe you can elaborate here?
I personally feel, from a compiler implementor’s perspective, that @vadimcn’s construction is the way to go. It has a clear definition of what a coroutine is, when code does and does not run, how captures/arguments work, etc. From a user’s perspective, however, @Zoxc’s implementation seems clearly superior, as there’s simply less syntax involved.
So to me one question is: how do we resolve this? There are really two questions here, though: the short-term resolution and the long-term resolution. I’d personally think that in the short term we should take routes like the one @vadimcn proposed, which seem more conservative from an implementation perspective (less impact on the compiler). Once we’ve got experience with generators and coroutines, though, we can revisit decisions like this and see how much weight the ergonomic argument carries.
So @Zoxc, does that sound accurate? Do you disagree with anything? Or @vadimcn, am I missing something? It’s worth noting that decisions like this (and in general, most coroutine/generator decisions) don’t actually affect async/await at all! The async/await desugaring is easily translatable regardless.
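To make that last claim concrete: the reason the desugaring is so insensitive to this choice is that an `async` body bottoms out in the same kind of state machine either way, with each await point acting as a suspension point just like a `yield`. Here’s a hedged, stable-Rust sketch of that shape; `Poll`, `SimpleFuture`, `Delay`, and `AddTwo` are all names I’ve made up for illustration (this is deliberately not the real `std` futures API), hand-writing the state machine for a body along the lines of `let a = delay(1).await; a + delay(2).await`:

```rust
use std::mem;

// Illustrative stand-in for a futures API: Ready when a value is
// available, Pending when the computation must suspend.
enum Poll<T> {
    Ready(T),
    Pending,
}

trait SimpleFuture {
    type Output;
    fn poll(&mut self) -> Poll<Self::Output>;
}

// A stub future: Pending on the first poll, Ready on the second.
struct Delay {
    polled: bool,
    value: i32,
}

impl SimpleFuture for Delay {
    type Output = i32;
    fn poll(&mut self) -> Poll<i32> {
        if self.polled {
            Poll::Ready(self.value)
        } else {
            self.polled = true;
            Poll::Pending
        }
    }
}

// Hand-written state machine for the async body: each await point is a
// state the machine can suspend in, exactly like a `yield` in a generator.
enum AddTwo {
    First(Delay),        // waiting on the first future
    Second(i32, Delay),  // first result saved, waiting on the second
    Done,
}

impl SimpleFuture for AddTwo {
    type Output = i32;
    fn poll(&mut self) -> Poll<i32> {
        loop {
            // Take ownership of the current state so we can move out of it.
            match mem::replace(self, AddTwo::Done) {
                AddTwo::First(mut f) => match f.poll() {
                    Poll::Ready(a) => {
                        *self = AddTwo::Second(a, Delay { polled: false, value: 2 });
                    }
                    Poll::Pending => {
                        *self = AddTwo::First(f);
                        return Poll::Pending;
                    }
                },
                AddTwo::Second(a, mut f) => match f.poll() {
                    Poll::Ready(b) => return Poll::Ready(a + b),
                    Poll::Pending => {
                        *self = AddTwo::Second(a, f);
                        return Poll::Pending;
                    }
                },
                AddTwo::Done => panic!("polled after completion"),
            }
        }
    }
}

fn main() {
    let mut fut = AddTwo::First(Delay { polled: false, value: 1 });
    let result = loop {
        if let Poll::Ready(v) = fut.poll() {
            break v;
        }
    };
    assert_eq!(result, 3);
    println!("result = {}", result);
}
```

Whether the surface syntax defines the state machine via a closure or via a bare function body only changes how this type is spelled and captured, not the machinery itself, which is why async/await can be layered on top of either proposal.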
In general, though, I think this is an example of a syntactic question that’d be good to classify. In some sense a lot of these can fall in the bucket of “do we need to resolve this in the short term?” I’d imagine that 99% of these questions need to be resolved by the stabilization phase of generators, but far fewer by the implementation phase.
@nikomatsakis or other compiler/lang folks, are there specific issues you’d like to see resolved before landing an unstable implementation? So far a point about a test suite has been raised, which I’d be more than willing to help write myself, but I’m curious whether there are other key points about generators you’d like to see resolved before landing an unstable implementation rather than before stabilization.