OK; then I can similarly go on to declare a bunch of various possible feature proposals “small” because they are conceptually simple to explain pre-rigorously:
- total functions must always terminate.
- HKTs (higher-kinded types) allow quantification over type operators.
- (value) dependent types allow types to depend on values.
- pi-types allow function argument types and return types to depend on the (runtime) value of previous arguments.
- sigma-types allow the type of the second half of a pair to depend on the value of the first half.
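For concreteness, the last three of these can be sketched in a dependently typed language such as Lean; the `Vec` type below is defined here purely for illustration, and Lean's totality checking covers the first item as well:

```lean
-- A length-indexed vector: the *type* `Vec α n` mentions the *value* `n`.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- Pi-type: the return type `Vec α n` depends on the value of the
-- argument `n`. Lean also checks that this function is total.
def fill (n : Nat) (x : α) : Vec α n :=
  match n with
  | 0     => .nil
  | k + 1 => .cons x (fill k x)

-- Sigma-type: the type of the second component (`Vec Bool n`)
-- depends on the value of the first component (`n`).
def someVec : (n : Nat) × Vec Bool n := ⟨2, fill 2 true⟩
```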
I must say, it is not at all clear to me what “we” mean by “small” and “large”, and I am not getting any wiser as the thread moves on and various things I assumed were large features are now declared simple => small. In particular, I think the method of evaluating what is simple/small versus large/complex is too arbitrary. If you want to (do you?) formalize this into a language team policy, then it must be much easier to tell what is what. I think that will be too difficult, so I don’t want us to legislate here.
Personally, I believe that if a feature adds more compensating value to the language than it costs, and is pre-rigorously conceptually simple to explain, it should be added. A language with fewer concepts is not to be preferred, in my view. Of course, once you start adding features which duplicate existing features and do not advance generality in significant ways, the cost is still there but the compensating value is much smaller. I think this follows naturally.
I wholeheartedly agree with you on all of these specifics. And I do think specifics are important, because otherwise we will just be talking past each other. Is it right to see your view more as being against redundancy than as it being OK to limit expressive power (because more expressive power usually requires more concepts… 1ML notwithstanding…)?
I do personally think that syntactic sugar is good to add in some cases, provided that it really is sugar and desugars into a simpler core language where soundness is easy to reason about formally.
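As a concrete illustration of what I mean by sugar over a simpler core: Rust’s `?` operator desugars into an early-return `match` (roughly; I am glossing over the `Try` trait machinery and the `From` conversion on the error type):

```rust
use std::num::ParseIntError;

// With sugar: `?` propagates the `Err` case by returning early.
fn parse_both(a: &str, b: &str) -> Result<(i32, i32), ParseIntError> {
    let x = a.parse::<i32>()?;
    let y = b.parse::<i32>()?;
    Ok((x, y))
}

// Roughly what the function above desugars to in the core language:
fn parse_both_desugared(a: &str, b: &str) -> Result<(i32, i32), ParseIntError> {
    let x = match a.parse::<i32>() {
        Ok(v) => v,
        Err(e) => return Err(e),
    };
    let y = match b.parse::<i32>() {
        Ok(v) => v,
        Err(e) => return Err(e),
    };
    Ok((x, y))
}

fn main() {
    // Both forms agree, so soundness only has to be argued for the core form.
    assert_eq!(parse_both("1", "2"), Ok((1, 2)));
    assert_eq!(parse_both("1", "2"), parse_both_desugared("1", "2"));
    assert!(parse_both("x", "2").is_err());
    assert_eq!(parse_both("x", "2"), parse_both_desugared("x", "2"));
}
```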
It is conceptually simple to explain, yes. However, all the things you can do with GATs that were impossible before will take some doing to explain, and that explanation is necessary so that people can turn the new feature into useful practice. So I think you are overstating the simplicity of GATs.
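To give one of those “newly possible” things: the standard example is a lending iterator, where each item borrows from the iterator itself. The trait and the `Counter` type below are hypothetical names for illustration, not anything from `std`:

```rust
// A "lending iterator": each item borrows from the iterator itself.
// Expressing that borrow requires an associated type parameterized by a
// lifetime, i.e. a generic associated type (GAT).
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Yields string slices into its own reused buffer, something a plain
// `Iterator` cannot express (its `Item` cannot borrow from `self`).
struct Counter {
    buf: String,
    n: u32,
}

impl LendingIterator for Counter {
    type Item<'a>
        = &'a str
    where
        Self: 'a;

    fn next(&mut self) -> Option<&str> {
        if self.n >= 3 {
            return None;
        }
        self.n += 1;
        self.buf = format!("item {}", self.n);
        Some(&self.buf)
    }
}

fn main() {
    let mut c = Counter { buf: String::new(), n: 0 };
    assert_eq!(c.next(), Some("item 1"));
    assert_eq!(c.next(), Some("item 2"));
    assert_eq!(c.next(), Some("item 3"));
    assert_eq!(c.next(), None);
}
```

Explaining the GAT syntax itself takes a paragraph; explaining when you need one (and why `Iterator` cannot do this) is the part that takes some doing.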
This feels somewhat like a straw man. My points were mostly about the type system, not about control flow. How would “goto” work, and how would it preserve soundness (which I take as an absolute requirement in all cases…)?
I do agree with this. I also think it is very uncontroversial.
What I don’t agree with is that this implies a language with a small number of concepts.
Syntactic sugar and general higher-order constructs, which both increase the number of concepts, can help strip massive amounts of boilerplate and plumbing out of code. I believe this to be essential to understanding the intent of a particular piece of code quickly and at a glance.
While we can reason about this in the abstract, is there any empirical evidence for it? By analogy with natural languages: Swedish is a simpler language than French (grammatically, at least with respect to verb conjugation), yet people learn French just fine, and you don’t need to learn all of either language to converse in it. Likewise, I don’t think most rustaceans need to learn what #[repr(C)] does until they are in the context of needing to talk to C. When they are, they can lazily learn it then and there.
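And when you do reach that context, the concept is small: `#[repr(C)]` pins down field order and C-style padding, whereas without it the compiler is free to reorder fields. A minimal sketch (the `Packet` struct is a made-up example; the exact sizes assume the usual 4-byte alignment of `u32`):

```rust
use std::mem::{align_of, size_of};

// `#[repr(C)]` guarantees C-compatible layout: fields in declaration
// order, with C's padding rules. Only relevant at an FFI boundary.
#[repr(C)]
struct Packet {
    tag: u8,  // offset 0, followed by 3 bytes of padding
    len: u32, // offset 4 (aligned to 4 bytes, as C requires)
}

fn main() {
    // The C layout rules make the size and alignment predictable.
    assert_eq!(align_of::<Packet>(), 4);
    assert_eq!(size_of::<Packet>(), 8);
}
```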
What constitutes too powerful? Are dependent types too powerful? – More specifics would be good here so that I can reason about your criteria.
This is a great point; to actually talk the same language, we need a base set of primitives, involving some syntactic sugar, such that you can learn common constructs once and don’t have to learn them slightly differently for each project. I think most of the time spent learning a new language goes into learning the important libraries in the ecosystem (rayon, rocket.rs, futures-rs, tokio, etc.) that make the language tick.