[Roadmap 2017] Productivity: learning curve and expressiveness


Continuing the discussion from Setting our vision for the 2017 cycle:

So I wrote this:

A bit on tradeoffs

I wanted to expand on those thoughts a bit here. When we think about the language, I think we have to pay attention to rounding out and smoothing the language we have; we should be very judicious about adding new features (which is not to say that I oppose new features!). In the initial roadmap post, @aturon proposed this theme for Rust in 2017:

Rust: safe, fast, productive — pick three.

This is catchy, but it carries a really deep meaning. A lot of times when we think about “learning curve” or “new users” we tend to view those needs as being in contrast with experienced users. But I think there is a better way of looking at things. We need to work hard to find solutions that address the requirements of experienced users while keeping the learning curve in mind.

After all, we all – experienced or n00b alike – love using those areas of the language that work really smoothly (and there are many). As an example, I remember how the standard library felt back in the day when it was basically a dumping ground of utility functions we needed for the compiler. Then we went through the runup to 1.0, with the obsessive focus on improving ergonomics – but not at the cost of performance or expressiveness – and we wound up with the, frankly, awesome stdlib that we have today. This was a massive community success at overcoming this “new user vs experienced user” tradeoff.

Another example, one that I focused on in the RustConf keynote, is closure inference in the language. It took us a while to iterate here, but we have arrived at a closure system that addresses a huge diversity of needs (sync code like iterators, parallelism like rayon, async code like futures, syntax-like helpers like catch or get_or, etc.) while still feeling very nice to use as a new user. Part of that is a lot of compiler smarts under the hood (e.g., when a variable is captured in the environment, the compiler analyzes the closure body to decide if that capture should be “by value” or not, etc.).

Some things that are on my mind

What follows is a list of problems that I see as a common source of confusion for new or intermediate users. I would love to hear other candidates that aren’t listed here. I’d also like to know which of these things do not belong on this list – for example, because the concerns are too niche.

I’m adding a few notes on possible solutions but trying not to get bogged down in the details of how we would fix any particular problem. I’d like to be optimistic and just assume that with enough effort we can solve the problems to everyone’s satisfaction (i.e., in a way that preserves the constraints you need for your particular problem). The biggest question to my mind is – for which problems is it worth investing the effort to find and implement those solutions?

  • The fact that string literals have type &'static str which cannot be readily coerced to String
    • This is literally the first thing I talk about in tutorials, after “Hello, world”
    • I would like a way for an &'static str to be coerced to a String, ideally without allocation
  • ref and ref mut on match bindings are confusing and annoying
    • I would be happy to never write match *x { Some(ref y) => ... } again but rather just write match x { Some(y) => ... }
    • would also need autoderef in some places
    • @nrc and I think that a scheme like closure inference could be used here
  • lexical lifetimes on borrows
    • I think there is general agreement that something more like non-lexical lifetimes would make the borrow checker more ergonomic and nicer to use
  • Fiddling around with integer sizes
    • Everyone can agree that it’s annoying to deal with usize vs u32 and so forth
    • This has been discussed numerous times and there are even more complex trade-offs than usual
    • But it seems like some kind of widening (perhaps with optional linting) could help here
  • References to copy types (e.g., &u32), references on comparison operations
    • It’d be nice if we could use a &u32 more like a u32 most of the time
    • It sort of works: map(|x| x + 1).filter(|&x| x > 2) (note that filter required the &)
    • We may not be able to do better than progressively addressing more and more cases
  • Some kind of auto-clone
    • Working with Rc and Arc requires a lot of manual cloning
    • Sometimes you want this level of control; often you don’t
    • The same thing will affect persistent collection libraries, which make big use of Arc etc
    • In comparison, Swift makes ref-count adjustment more automatic, and Go leverages a GC
      • when targeting higher-level environments, this puts Rust at an ergonomic disadvantage
  • #[derive(PartialEq, Eq, PartialOrd, Ord, Copy, Clone, etc)]
    • there are a lot of fine-grained traits to derive
    • easy to forget something; annoying to have to derive both PartialEq and Eq
    • maybe shorthands for common combinations?
    • note: custom derive may allow experimentation here
  • language-level support for AsRef-like pattern
    • we often have things in the std lib where you can choose whether or not to give ownership of an argument
    • example: path.join(foo); here foo can be a PathBuf (give ownership) or &Path (not) or &'static str etc
    • it’d be nice if we could make this pattern smoother in the language, perhaps just for some particular cases
  • lifting lifetime/type parameters to modules
    • often you have large blocks of code that share parameters
    • I’ve mentioned a few times I think it would be great to float these to the module level
    • should write up a more complete proposal…
  • inferring T: 'x annotations at the type level
    • these are just kind of annoying and nobody has a good intuition for what they mean
    • I think we could infer them, just want to experiment to be sure it doesn’t lead to confusing errors later on
    • on the other hand, the compiler tells you what to type, and you type it, maybe not so bad?
  • explicit types on statics and constants
    • often we could infer these, but it’d add dependencies between items
      • but then, impl trait already does
  • trait aliases
    • it’s annoying and non-DRY to repeat a large number of constraints like Foo+Bar+Baz
    • related to inferred bounds:
      • e.g., if I have struct Foo<T: Eq>, and fn bar<T>(x: Foo<T>), can the compiler just figure out that T: Eq must hold?
        • answer: yes, but we should talk about the details :slight_smile:
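
To ground the first bullet on this list: the conversion from a string literal to a String is explicit today. A minimal sketch of the status quo, using a hypothetical `takes_string` function for illustration:

```rust
fn takes_string(s: String) -> usize {
    s.len()
}

fn main() {
    // A string literal is a `&'static str`; passing it where a `String`
    // is expected currently requires an explicit conversion:
    let n = takes_string("hello".to_string());
    // `String::from` and `.to_owned()` are equivalent spellings:
    let m = takes_string(String::from("world"));
    assert_eq!(n + m, 10);
}
```

The proposal above is essentially to let the compiler insert that conversion for literals, ideally without paying for an allocation.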

Some new features we might consider

I think there are definitely some “new features” we should consider to make Rust more expressive. Another way to look at this is that I think sometimes the easiest way to make something more productive and ergonomic is to extend the language a bit. I talked about some of those on the main thread, but here is a list of extensions that I personally think might be important to pursue, and why:

  • specialization, impl Trait
    • have to finish these up
  • const generics
    • enables many high-performance domains
  • “virtual structs” story (e.g., fields in traits, variant types)
    • modeling OOP-like constructs is important for some domains and can be really challenging
  • coherence limitations
    • being able to implement traits for types in other crates seems like a pressing problem (e.g., @sgrif has raised this)
    • negative reasoning may be part of this
  • custom error messages
    • when building abstractions like Diesel, Futures, and Rayon, you often wind up with bad type errors
    • the problem is that the library is in a better position to give a semantic error than the compiler
    • i.e., the compiler knows that Foo: ActiveRecord doesn’t hold, but diesel might know that this means the field doesn’t appear in your database schema (or whatever)
  • stabilizing and pursuing placement new (<-) a bit more
    • maybe important for low-memory domains?
  • iterables and abstracting over Rc vs Arc etc
    • not sure how to prioritize this, but RFC 1598 seems like it might be a not-that-hard approach to solving it
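
As a concrete taste of one item on this list, here is how impl Trait reads in return position (still unstable at the time of this discussion); a minimal sketch:

```rust
// `impl Trait` in return position: the caller knows only that the result
// implements `Iterator<Item = u32>`, not the concrete adapter type,
// so the (unnameable) closure-based iterator chain stays hidden.
fn evens_up_to(n: u32) -> impl Iterator<Item = u32> {
    (0..n).filter(|x| x % 2 == 0)
}

fn main() {
    let v: Vec<u32> = evens_up_to(10).collect();
    assert_eq!(v, vec![0, 2, 4, 6, 8]);
}
```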


Are you talking about wfcheck annotations or something else?


I’m talking about

struct Foo<'x, T: 'x> { // that `T: 'x` annotation there
    field: &'x T,
}


From the other angle: I found the work to make the Path-related stuff easy to use actually made it harder to learn how to use it, and harder to read the code for the path stuff. I also remember being confused about why &String coerces to &str automatically; for a long time I thought this was because of AsRef but after like a year I learned that this is (probably) because of Deref.

Similarly, are you sure you want &'static str, an immutable value, to coerce automatically to String, a mutable value? This makes me think of C++, where such things are possible due to implicit converting constructors, which many advanced C++ users consider an anti-pattern because of how error-prone they are.

What’s the process for verifying that a type is a value type? First, you have to make sure that PartialEq makes sense for it. Then Eq, then PartialOrd, then Ord, then Copy, then Clone. The current syntax is slightly annoying, but it provides a framework for guiding one through this verification process. I can think of things in my own code, such as ring::digest::Digest, which are value types but which require custom implementations of Eq and PartialEq instead of the derived implementations. If I were to use #[derive(ValueType)] or similar, I very well might have overlooked that detail.

So, in some ways this seems like there are trade-offs between convenience and correctness. I think a lot of people (e.g. me) are attracted to Rust precisely because we prefer such tradeoffs to be made in favor of correctness pretty much all the time.

Some of the other points you mention, such as the need for ref in patterns, do trip people up, but once you learn them you are set. In the case of ref, I still forget the ref often but at least I know what to do when I see the error message.

Keep in mind that at some level, adding new features to the language and/or stdlib necessarily makes the language/stdlib harder to learn rather than easier to learn, if no other reason than it adds more things to learn. Things that are purely about convenience have to make things really better in order to justify their cost.


… I can’t think of what you’d even do with a String that “owns” immutable data. That said, maybe you could use capacity == 0 && len != 0 to indicate “my contents are static”, then have it point to static string data. With default type fallback, I can’t think of any immediate issue with it.

Could you use whatever machinery is used to support things like let Blah(x) = *thing;, but just extend the “pointerness” of *thing to x? I’m not sure I like the idea in general, because it can be important to know when you’re aliasing and when you’re not.

My problem with implicit coercions is that you can’t easily tell which parts of an expression are in what type. If we’re going to add them, maybe really aggressive widening that has to widen every input into the expression to the same output type right at the beginning. i.e. if the result of (a + b) * (c / d) is widened to i64, then all of a, b, c, and d get immediately widened to i64 when they’re used. e as T / e: T would block this process. That way, the whole thing is evaluated under the same rules, and it’s at least tractable to work out the types.

This one makes me feel really uneasy. If it’s a significant problem, perhaps explicit clones could be shortened to a new suffix operator?

In D, you could write template S(T) { struct S {} }, which was the same as struct S(T) {}. template could also have multiple items, and functioned like a namespace. So, yeah, I can kinda see mod m<T> { struct A; struct B; } being logically equivalent to mod m { struct A<T>; struct B<T>; }. That said, that could screw with privacy, so maybe a non-namespaced <T> { struct A; struct B; } would be good.

I like something I read once (forget where): allow inference on private stuff, not public stuff. Also, impl Trait isn’t quite the same thing, since it tells you what you can do with the returned value.

As a fan of explicitness, yes please. Heck, just making where clauses on traits checked would help.

Can’t remember where I put the draft RFC for it, but I like naming impls so they can be explicitly imported (the sticky part is generics).

That’s a really interesting idea. No idea how you’d go about that, though. :stuck_out_tongue:


Modules are convenient because they’re an existing unit of scope, but unless you’re proposing parameterized modules (or ML functors), the parameters don’t have to be hoisted all the way to the module.

e.g. Idris using-notation hoists parameters of a group of adjacent declarations.


&'static str and String are equally mutable. They both can be mutated with a mutable reference to them - the former by assigning a different static string, the latter assigning a different String or using the mutator methods.
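
Both senses of “mutation” in play here type-check today; a minimal sketch of what this post is describing:

```rust
fn main() {
    // A mutable `&'static str` binding can be "mutated" by pointing it at
    // a different static string -- the referent itself never changes:
    let mut s: &'static str = "hello";
    s = "world";
    assert_eq!(s, "world");

    // A `String` owns its contents, so mutation changes them in place:
    let mut owned = String::from("hello");
    owned.push_str(", world");
    assert_eq!(owned, "hello, world");
}
```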


Regarding the str / String thing: From my own experience answering questions in #rust and #rust-beginners, this is indeed a very common speed bump on the learning curve, but it goes much, much deeper than string literals not being Strings. Very few programs that beginners have trouble with would be solved by implicitly adding a .to_string() here and there. Just as often, they’re writing out one type but actually need another, or want to modify a &str, or need to decide if a text field in a struct should be a String or a &str, or are simply hopelessly confused by the fact that there are two types. The proposed implicit conversion might allow a tutorial to delay the “oh BTW there are two string types” part a little bit, but since the difference matters in most non-trivial programs that handle strings, beginners will be confronted with the issue very quickly. Adding yet another thing to explain when the time comes just makes the whole thing even messier. Compare:

  1. String literals are &'static str
  2. String literals are &'static str except sometimes they are String, but this only works for literals, not for other &str values you want to convert.

So, in summary, I am opposed to the literal->String conversion because I don’t see it actually helping people: it adds more stuff to the first few hours of learning without actually making it easier to write programs larger than a presentation slide. And that is without even touching on the performance concerns!

Are there other ways to reduce the complexity here? I have little hope. The String / &str split is pretty fundamental and thus hard to hide. Implicit conversions, if they go far enough, might help a little bit, but is this a price we’re willing to pay, keeping in mind how that affects the language for everyone else?

Performance concerns, briefly: Barring an implicit conversion that allocates, which would obviously be Very Bad™, this will turn String into a slightly more compact Cow<'static, str>. Potentially mutating operations such as deref_mut, truncate, remove, insert, as_mut_vec, into_boxed_string need to either fail or allocate a copy of the 'static data, and need a branch to tell if they have 'static data at hand. All other &mut String methods are also in jeopardy, unless the cap == 0 && len != 0 trick works out okay with their capacity checks. There will also be additional branches when reallocating and dropping, by necessity.
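
The Cow<'static, str> shape sketched in that last paragraph already exists in std, branch-on-mutation included; a small illustration (`shout` is a made-up example function):

```rust
use std::borrow::Cow;

fn shout(mut text: Cow<'static, str>) -> Cow<'static, str> {
    // `to_mut` is exactly the branch described above: borrowed data is
    // cloned into an owned `String` on first mutation, while already-owned
    // data is mutated in place with no extra allocation.
    text.to_mut().push('!');
    text
}

fn main() {
    let borrowed: Cow<'static, str> = Cow::Borrowed("hi"); // no allocation yet
    let owned: Cow<'static, str> = Cow::Owned(String::from("hey"));
    assert_eq!(shout(borrowed), "hi!");
    assert_eq!(shout(owned), "hey!");
}
```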

String contents inherit the mutability of the string, &str contents do not (because it’s a & reference, nothing special about str here). That is the interesting notion of mutability (along with interior mutability), since every type can be mutated in the sense you describe.


A lot of scary stuff in the list moving Rust to more “implicit” territory.


This is a great list of pain points, but perhaps some of them could be solved by Clippy/IDE tools rather than language sugar. The benefit of IDE tools is that they can teach language concepts while correcting your code. This is useful for newcomers, but ideally the IDE tools should also be able to fix small errors on the fly without slowing you down when you are busy.

EDIT Perhaps the rule should be: prefer language sugar over IDE tools when it helps readability.


You can explain &str and String as we already do and say that string literals just work.


Coercions can’t always Just Work. You can see this with deref coercions. They are very useful in many circumstances, but too often there is not enough context to pick up that the coercion is needed. While it may work well enough to be able to skim over the issue in the first hours of learning, that just makes it more confusing for the beginner when it doesn’t work for the first time.


I shared this concern at first, but during the keynote, they compared these kinds of implicit conversions to the way that closures implicitly determine which Fn trait they implement, which has worked out quite well. There’s always a risk that DWIM features will hurt more than they help, but I think we don’t have enough details on the proposal to make a call about that in this thread.


Like many commenters here, I’m personally opposed to implicit-y and coercion-y things. We can and probably will debate them endlessly, but I’d rather focus on other, less controversial measures instead. There’s a decent number of funny corner cases in the language that are confusing even for experts. And while most beginners live in blissful ignorance of them, if they do run across them they’ll fare no better. Some examples:

  • Drop is very complex (and as I always point out, &mut seems wrong to me)
  • Closures in argument position get better inference.
  • Box needs lots of compiler magic
  • Secret &uniq borrow for closure bodies (though this worries me a lot less than when I first heard of it :))
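
The argument-position inference quirk in that list can be seen with a small sketch (`call_with` is a made-up helper with a higher-ranked Fn bound):

```rust
// A higher-ranked bound: `f` must accept a `&str` of *any* lifetime.
fn call_with<F>(f: F) -> usize
where
    F: for<'a> Fn(&'a str) -> usize,
{
    f("hello")
}

fn main() {
    // In argument position, the closure's parameter type and higher-ranked
    // lifetime are inferred directly from `call_with`'s signature:
    let n = call_with(|s| s.len());

    // Bound to a variable first, inference has less context; historically
    // this needed the explicit `: &str` annotation to compile:
    let f = |s: &str| s.len();
    let m = call_with(f);

    assert_eq!(n, m);
}
```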

I would not mind a moratorium on all but the most trivial new language features so we can take a census of these odd corners. After that, I’d like to think about how we can remove these corner cases. For example, my Stateful MIR proposal (A Stateful MIR for Rust) is as much an attempt to reduce the number of features by subsuming them into fewer, more powerful ones (or at least providing safe desugarings) as it is a drive for more expressiveness for expressiveness’s sake.

People talk about the Rube Goldberg machines that C++ (and to a lesser extent Scala) are, and I think it’s important to realize that even when that’s explicitly not our intent, we can get there by mistake. I think a periodic enumeration and culling of oddities is a great and necessary bit of housekeeping to keep that from happening.


Exactly what I came into this thread to say. Once you have a problem with higher-ranked lifetimes and inference you won’t really care about little edge cases like having to deref something explicitly.

this answer is nearly incomprehensible to me because I’m not an expert


How are any of these issues that could bite a new user? These are all complexities that primarily matter to the language implementer, not to the language user. I don’t need to know anything about how special cased Box’s implementation is to use it, for example.


From the user-perspective, explicit control is the main difference between Copy and Clone. This doesn’t entirely overlap with what goes on inside a struct, it’s true, so maybe Rc could derive AutoClone or something. Not something I see as very important however.

+1. Maybe some names like #[derive(OrdEqCopy)] vs #[derive(OrdEqClone)], etc.? Only for the most common ones; e.g., not many types are PartialOrd but not Ord.

[quote=“nikomatsakis, post:1, topic:4097”] language-level support for AsRef-like pattern[/quote] Maybe. It might also save having to template functions, if from the callee side an AsRef<T> parameter is simply seen as &T. Not sure if there are any downsides to that? Or even letting any &T parameter be passed as a move (but from the caller’s perspective, x: T goes out of scope in foo(x) even if passed as &T, so there is no implicit aliasing).
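
For reference, the pattern under discussion as the standard library does it today (`describe` is a made-up example function):

```rust
use std::path::{Path, PathBuf};

// The std pattern: a generic `AsRef` bound lets callers pass owned or
// borrowed path-like values, while the body only ever sees a `&Path`.
fn describe<P: AsRef<Path>>(path: P) -> String {
    let path: &Path = path.as_ref();
    format!("{}", path.display())
}

fn main() {
    let owned = PathBuf::from("/tmp/log.txt");
    assert_eq!(describe(&owned), "/tmp/log.txt"); // borrowed
    assert_eq!(describe(owned), "/tmp/log.txt"); // ownership given
    assert_eq!(describe("/tmp/log.txt"), "/tmp/log.txt"); // &'static str
}
```

The proposal above is to make some form of this work without the caller-side generic machinery.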


This blog post about rewriting a Python thing in Rust mentions many ergonomic papercuts, including the verbosity of using Rc<RefCell>, confusion about references, dereferences and the syntax around them, ergonomic issues with Clone and Index, and annoyance with integer casts.


Maybe it would help to have simple RcCell and RcRefCell wrappers? And probably something for the sync side – ArcMutex and ArcRwLock?

Deref is supposed to make those fairly transparent already, but wrappers could still smooth some rough edges like initialization.
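
A minimal sketch of what such a wrapper could look like — `RcRefCell` and `rc_ref_cell` are hypothetical names here, not existing std items:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical convenience wrapper: a type alias plus a constructor helper
// that smooths over the nested `Rc::new(RefCell::new(..))` initialization.
type RcRefCell<T> = Rc<RefCell<T>>;

fn rc_ref_cell<T>(value: T) -> RcRefCell<T> {
    Rc::new(RefCell::new(value))
}

fn main() {
    let shared = rc_ref_cell(vec![1, 2]);
    let alias = Rc::clone(&shared); // second handle to the same cell
    alias.borrow_mut().push(3);
    assert_eq!(*shared.borrow(), vec![1, 2, 3]);
}
```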


That blog post also says:

Part of the problem is that auto-deref means you can get by without the right derefs in some circumstances, but in others, doing the same thing fails for no apparent reason.

I agree that introducing more special cases to improve ergonomics could make the language less regular and harder to understand/learn.