[Roadmap 2017] Productivity: learning curve and expressiveness


“What the hell is this error message! All I did was bind the closure as a local variable because I went over 100 chars!” “Hmm, maybe I should try to write my own box, because I want something to be slightly different.” “I dropped these fields one by one before, but when I wanted to encapsulate the pattern with a Drop impl, I couldn’t avoid these ‘move out of &mut’ errors!”


Could you give a code example for the first and third so I could see a clear idea of what you’re talking about?

I really don’t think anyone who is thinking they should write their own Box implementation is still in the “funnel” stage of the learning curve. I don’t really think that’s a use case we need to worry about supporting for anyone but the most advanced users.

I also don’t think new users are writing their own Drops very often.


What follows is a list of problems that I see as a common source of confusion for new or intermediate users. I would love to hear other candidates that aren’t listed here.

It’s a nice and bold list, I hope other people will add more items to it. See below for some suggestions from me.

for which problems is it worth investing the effort to find and implement those solutions?

Most of them, in a few years of work.

The fact that string literals have type &'static str which cannot be readily coerced to String

I would like a way for an &'static str to be coerced to a String, ideally without allocation

This is good to make the Rust code a little less noisy (but this doesn’t help newbies much).

ref and ref mut on match bindings are confusing and annoying

References to copy types (e.g., &u32), references on comparison operations

But we should also be careful avoiding adding too many special cases to the language.

lexical lifetimes on borrows


Fiddling around with integer sizes

Everybody has to agree that it’s annoying to deal with usize vs u32 and so forth. This has been discussed numerous times and there are even more complex trade-offs than usual. But it seems like some kind of widening (perhaps with optional linting) could help.

On this topic I think:

  • There’s a need for a built-in safe cast, either in the language or in the Prelude;
  • Implicit coercions, or ways to index arrays/vectors/slices with values that can be safely coerced to usize.
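As a small sketch of what a safe, explicit widening looks like today: From/Into are implemented for the lossless integer conversions, so u8/u16 indices can already be widened without an `as` cast (u32 → usize is the missing case, since that widening is only lossless on some platforms):

```rust
fn main() {
    let v = [10u32, 20, 30];
    let i: u16 = 2;
    // u16 -> usize is always a lossless widening, so From is implemented;
    // u32 -> usize is platform-dependent, so it is not.
    let x = v[usize::from(i)];
    assert_eq!(x, 30);
}
```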

#[derive(PartialEq, Eq, PartialOrd, Ord, Copy, Clone, etc)]

The code to manually implement PartialEq, Eq, PartialOrd, Ord, Add, Sub, etc is quite verbose.

explicit types on statics and constants

One way to improve the situation is to accept code like this (two different small features: inferring the array length from the initializer, and accepting a byte-string literal as a [u8; N]):

// This is especially handy when you have many items in the array:
const DATA: [u8; _] = [1, 2, 3];

const DIRECTIONS: [u8; 4] = b"LRUD";

Below I list a few more things that I think could help Rust become more handy and more succinct, while retaining its safety.

Looking at code like this, I’d like improvements in the type inferencer, so there’s no need to add that “::” at the end:

println!("Total: {}", (0i32 ..)
                      .map(|n| n * n)
                      .take_while(|&n| n < 100)
                      .filter(|&n| is_odd(n))
                      .sum::<i32>());

The syntax for struct literals is sometimes too repetitive and long:

#![allow(dead_code, unused_variables)]

struct Item {
    name: &'static str,
    weight: usize,
    value: usize,
}

const ITEMS: [Item; 22] = [
    Item { name: "map",                    weight: 9,   value: 150 },
    Item { name: "compass",                weight: 13,  value: 35  },
    Item { name: "water",                  weight: 153, value: 200 },
    Item { name: "sandwich",               weight: 50,  value: 160 },
    Item { name: "glucose",                weight: 15,  value: 60  },
    Item { name: "tin",                    weight: 68,  value: 45  },
    Item { name: "banana",                 weight: 27,  value: 60  },
    Item { name: "apple",                  weight: 39,  value: 40  },
    Item { name: "cheese",                 weight: 23,  value: 30  },
    Item { name: "beer",                   weight: 52,  value: 10  },
    Item { name: "suntancream",            weight: 11,  value: 70  },
    Item { name: "camera",                 weight: 32,  value: 30  },
    Item { name: "T-shirt",                weight: 24,  value: 15  },
    Item { name: "trousers",               weight: 48,  value: 10  },
    Item { name: "umbrella",               weight: 73,  value: 40  },
    Item { name: "waterproof trousers",    weight: 42,  value: 70  },
    Item { name: "waterproof overclothes", weight: 43,  value: 75  },
    Item { name: "note-case",              weight: 22,  value: 80  },
    Item { name: "sunglasses",             weight: 7,   value: 20  },
    Item { name: "towel",                  weight: 18,  value: 12  },
    Item { name: "socks",                  weight: 4,   value: 50  },
    Item { name: "book",                   weight: 30,  value: 10  },
];

Improvements in compile-time execution of code could help; but while this version is more DRY, it produces a Vec instead of the better [Item; 22]:

const ITEMS2: Vec<Item> =
    [("map",                    9,   150),
     ("compass",                13,  35 ),
     ("water",                  153, 200),
     ("sandwich",               50,  160),
     ("glucose",                15,  60 ),
     ("tin",                    68,  45 ),
     ("banana",                 27,  60 ),
     ("apple",                  39,  40 ),
     ("cheese",                 23,  30 ),
     ("beer",                   52,  10 ),
     ("suntancream",            11,  70 ),
     ("camera",                 32,  30 ),
     ("T-shirt",                24,  15 ),
     ("trousers",               48,  10 ),
     ("umbrella",               73,  40 ),
     ("waterproof trousers",    42,  70 ),
     ("waterproof overclothes", 43,  75 ),
     ("note-case",              22,  80 ),
     ("sunglasses",             7,   20 ),
     ("towel",                  18,  12 ),
     ("socks",                  4,   50 ),
     ("book",                   30,  10 )]
    .map(|&(n, w, v)| Item { name: n, weight: w, value: v });

I’d like some macro in Prelude to create handy associative array literals:

macro_rules! map(
    { $($key:expr => $value:expr),+ } => {
        {
            let mut m = ::std::collections::HashMap::new();
            $( m.insert($key, $value); )+
            m
        }
    };
);

fn main() {
    let names = map!{ 1 => "one", 2 => "two" };
    println!("{} -> {:?}", 1, names.get(&1));
    println!("{} -> {:?}", 10, names.get(&10));
}

I’d like enums of chars too:

#[derive(Copy, Clone, Debug, Eq, PartialEq)]
enum MapItem {
    Wall,
    Start,
    Goal,
    Down,
    Up,
    Path,
    Empty,
}

match c {
    '#' => MapItem::Wall,
    'S' => MapItem::Start,
    'G' => MapItem::Goal,
    'D' => MapItem::Down,
    'U' => MapItem::Up,
    ' ' => MapItem::Empty,
    _   => MapItem::Empty, // Error!
}

match *item {
    MapItem::Wall  => '#',
    MapItem::Start => 'S',
    MapItem::Goal  => 'G',
    MapItem::Down  => 'D',
    MapItem::Up    => 'U',
    MapItem::Path  => '*',
    MapItem::Empty => ' ',
}

In D language you can write this, and then you don’t need the two not-DRY conversion tables:

enum MapItem : char {
    Wall  = '#',
    Start = 'S',
    Goal  = 'G',
    Down  = 'D',
    Up    = 'U',
    Path  = '*',
    Empty = ' '
}

Also it’s worth taking a look at Ada enumerations; they offer several nice features. They allow code similar to this, which is very handy in a lot of situations:

enum MapItem: char { '#', 'S', 'G', 'D', 'U', '*', ' ' }
let arr1: [MapItem; 4] = ['#', 'D', 'U', '*'];
let arr2: [MapItem; 4] = "#DU*";
enum TrueMapItem : MapItem { 'S', 'G', 'D', 'U' } // Subset.

This kind of code with literals of subsets of values is quite useful and helps catch some bugs at compile-time.

This line of Python code should be simpler (and shorter) in Rust:

assert [10, 20, 30].index(20) == 1

Currently you have to write:

assert_eq!([10, 20, 30].iter().position(|&x| x == 20).unwrap(), 1);
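The one-liner could be recovered with a small extension trait; `index_of` here is a hypothetical name, not a std method:

```rust
// Hypothetical convenience trait; not part of std.
trait IndexOf<T: PartialEq> {
    fn index_of(&self, item: &T) -> Option<usize>;
}

impl<T: PartialEq> IndexOf<T> for [T] {
    fn index_of(&self, item: &T) -> Option<usize> {
        self.iter().position(|x| x == item)
    }
}

fn main() {
    assert_eq!([10, 20, 30].index_of(&20), Some(1));
}
```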

I’d like a slice fill shortcut in Rust:

arr[] = 10; // D language code

Arrays.fill(arr, 10); // Java code

arr[2 .. $] = 10; // D code

In Rust you write something like this, which is longer and more bug-prone:

for i in 0 .. arr.len() { arr[i] = 10; }
for i in 2 .. arr.len() { arr[i] = 10; }
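For what it’s worth, iterators already remove the index bookkeeping (though it is still not a one-token fill):

```rust
fn main() {
    let mut arr = [0u32; 6];
    // fill the whole slice, no index arithmetic
    for x in arr.iter_mut() { *x = 10; }
    // fill from index 2 onward
    for x in arr[2..].iter_mut() { *x = 10; }
    assert!(arr.iter().all(|&x| x == 10));
}
```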

I’d like this supported in Rust:

fn foo(x: u64) -> u32 {
    u32::from(x % 1_000)
}

This isn’t using try_from because, with value range analysis, the compiler sees that “x % 1_000” always fits in a u32.

Similar code in D language:

uint foo(in ulong x) {
    return x % 1_000;
}

This can’t be done, but it’s sometimes useful:

for n in (10u32 .. 28).step_by(2).rev() {}

for n in (27u32 ... 10).step_by(-2) {}

This point is more speculative. Currently you have to write:

let a: u32 = 10;
let b = 3;
let c = 20;
let minimum = *[a, b, c].iter().min().unwrap();

But when the length of the iterator is known at compile time and it’s greater than zero, it should not return an Option<> but the value itself (or a reference to the value).

But you can’t change the return type like that. So how do we solve this problem?

With a fixed-size lazy iteration?

let minimum: u32 = *[a, b, c].fixed_iter().min();

With a specific function that requires a static length inside the iterator?

let minimum: u32 = *[a, b, c].iter().fixed_min();

In general the compile-time knowledge of the length of an array is precious information that should be kept and used as much as possible, and not thrown away immediately.

In D language you can do similar things, but I don’t know if they are simple enough to do in Rust.

Collections like arrays, slices, Vec, VecDeque, circular buffers and more could have an indices() method, so instead of:

for i in 0 .. arr.len() { ...
(0 .. arr.len()).map(|i| ...

You can write:

for i in arr.indices() { ...
arr.indices().map(|i| ...

In Ada and Go languages there are similar features.
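For slices at least, such a method can be sketched today as an extension trait; `Indices` is a hypothetical name, not part of std:

```rust
use std::ops::Range;

// Hypothetical extension trait; `indices` is not a std method.
trait Indices {
    fn indices(&self) -> Range<usize>;
}

impl<T> Indices for [T] {
    fn indices(&self) -> Range<usize> {
        0 .. self.len()
    }
}

fn main() {
    let arr = [10, 20, 30];
    let idx: Vec<usize> = arr.indices().collect();
    assert_eq!(idx, vec![0, 1, 2]);
}
```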

A standard method like indices() is also useful for library-defined matrices and tensors, which have more complex indices:

for (r, c) in my_matrix.indices() { ...

In Rust I’d like safe functions like Java’s safe Float.floatToRawIntBits operation ( https://docs.oracle.com/javase/7/docs/api/java/lang/Float.html#floatToRawIntBits(float) ).

Currently you need an unsafe mem::transmute or unsafe unions. The four functions would convert:

f32 => u32
f64 => u64
f32 <= u32
f64 <= u64
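Until such methods are in std (something along these lines was later stabilised as f32::to_bits / f32::from_bits), safe wrappers over the transmute are straightforward, since f32/u32 and f64/u64 have identical size and alignment — this is just a sketch:

```rust
// Safe wrappers around the (sound) same-size transmutes.
fn f32_to_bits(x: f32) -> u32 {
    unsafe { std::mem::transmute(x) }
}

fn f32_from_bits(b: u32) -> f32 {
    unsafe { std::mem::transmute(b) }
}

fn main() {
    // 1.0f32 has the well-known bit pattern 0x3f80_0000
    assert_eq!(f32_to_bits(1.0), 0x3f80_0000);
    assert_eq!(f32_from_bits(0x3f80_0000), 1.0);
}
```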

I’d like this code to compile:

let n: i32 = 100;
let mut arr = [0u32; usize::try_from(n).unwrap()];

On slices I’d like a uniq() method similar to:

fn sort_uniq<T: Ord + PartialEq>(arr: &mut [T]) -> &[T] {
    // Assumes `arr` is already sorted.
    let ln = arr.len();
    if ln <= 1 { return arr; }

    // Avoid bounds checks by using raw pointers.
    let p = arr.as_mut_ptr();
    let mut r: usize = 1;
    let mut w: usize = 1;

    while r < ln {
        unsafe {
            let p_r = p.offset(r as isize);
            let p_wm1 = p.offset((w - 1) as isize);
            if *p_r != *p_wm1 {
                if r != w {
                    let p_w = p_wm1.offset(1);
                    std::mem::swap(&mut *p_r, &mut *p_w);
                }
                w += 1;
            }
        }
        r += 1;
    }

    &arr[.. w]
}

I’d also like a lazy uniq() function like in D language.
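A minimal lazy version (collapsing adjacent equal items, as uniq does on sorted input) can be sketched as a stand-alone iterator adapter; `Uniq` and `uniq` are hypothetical names, not std items:

```rust
// Hypothetical lazy uniq adapter: yields each run of adjacent
// equal items once.
struct Uniq<I: Iterator> {
    iter: I,
    last: Option<I::Item>,
}

impl<I> Iterator for Uniq<I>
    where I: Iterator, I::Item: PartialEq + Clone
{
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        while let Some(x) = self.iter.next() {
            if self.last.as_ref() != Some(&x) {
                self.last = Some(x.clone());
                return Some(x);
            }
        }
        None
    }
}

fn uniq<I: Iterator>(iter: I) -> Uniq<I> {
    Uniq { iter: iter, last: None }
}

fn main() {
    let v: Vec<i32> = uniq([1, 1, 2, 2, 2, 3].iter().cloned()).collect();
    assert_eq!(v, vec![1, 2, 3]);
}
```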

I’d like some Slice Length Analysis. While this feature could do many useful things, at its basic level it allows handy code like:

fn main() {
    let v1: [u32; 4] = [10, 20, 30, 40];
    let a1: [u32; 2] = v1[1 .. 3];
}
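Until something like that exists, the closest safe spelling copies through a runtime-checked slice:

```rust
fn main() {
    let v1: [u32; 4] = [10, 20, 30, 40];
    let mut a1 = [0u32; 2];
    // panics at runtime (not compile time) if the lengths disagree
    a1.copy_from_slice(&v1[1 .. 3]);
    assert_eq!(a1, [20, 30]);
}
```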

I like to chain iterators on multiple lines, but when you need to sort the items you have to break the chain. So I’d like some way to sort without breaking the chain of iterators (to do this, the sort of the D language returns a SortedRange that you can convert to an array with a release() method).

In Python there’s also the sorted() function that converts any iterable into a list, sorts it, and returns it. It’s a simple yet handy feature. Doing the same on Rust iterator chains looks handy.
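A sorted() adapter in the D/Python style can be sketched as an extension trait today (the itertools crate provides something similar); `Sorted` here is a hypothetical name:

```rust
// Hypothetical extension trait, not part of std: collects, sorts,
// and hands back an iterator so the chain can continue.
trait Sorted: Iterator + Sized {
    fn sorted(self) -> std::vec::IntoIter<Self::Item>
        where Self::Item: Ord
    {
        let mut v: Vec<_> = self.collect();
        v.sort();
        v.into_iter()
    }
}

impl<I: Iterator> Sorted for I {}

fn main() {
    let v: Vec<u32> = [3u32, 1, 2].iter().cloned().sorted().map(|x| x * 10).collect();
    assert_eq!(v, vec![10, 20, 30]);
}
```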

In Rust I miss some common algorithms found in the C++ and D standard libraries. One set of functions I miss is those that treat a sorted Vec as a set: insert, remove, intersect, union, difference, etc. I’d also like a rotate() function like in C++/D.
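One such function, sketched on top of binary_search (`sorted_insert` is a hypothetical name, not a std or C++/D API):

```rust
// Insert into a sorted Vec, keeping it sorted.
fn sorted_insert<T: Ord>(v: &mut Vec<T>, x: T) {
    // binary_search returns Err(pos) with the insertion point
    // when the item is absent, Ok(pos) when it is present.
    let pos = v.binary_search(&x).unwrap_or_else(|e| e);
    v.insert(pos, x);
}

fn main() {
    let mut v = vec![1, 3, 5];
    sorted_insert(&mut v, 4);
    assert_eq!(v, vec![1, 3, 4, 5]);
}
```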

I often write “.collect::<Vec<_>>()”, so I’d like a standard shortcut, like a “.to_vec()” method for iterators, or something similar.
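Such a shorthand can already be written as a one-off extension trait; `ToVec` is a hypothetical name, not part of std:

```rust
// Hypothetical shorthand for .collect::<Vec<_>>().
trait ToVec: Iterator + Sized {
    fn to_vec(self) -> Vec<Self::Item> {
        self.collect()
    }
}

impl<I: Iterator> ToVec for I {}

fn main() {
    assert_eq!((0u32 .. 3).to_vec(), vec![0, 1, 2]);
}
```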

In the Rust Prelude I’d like a Python-like iterator comprehension macro. This would simplify iterator chains, especially when you need to iterate over two iterators in some kind of Cartesian product.

I mean Python code like:

>>> [i * j for i in xrange(5) for j in xrange(6)]
[0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 0, 2, 4, 6, 8, 10, 0, 3, 6, 9, 12, 15, 0, 4, 8, 12, 16, 20]

In Rust you can use flat_map(), but it becomes a little messy:

fn main() {
    let data = (0 .. 5)
               .flat_map(|i| (0 .. 6)
                             .map(move |j| i * j))
               .collect::<Vec<_>>();
}

With a Rust Prelude macro it could become something like:

fn main() {
    let data = iter!{ i * j, for i in 0 .. 5, for j in 0 .. 6 };
}

Or even

fn main() {
    let data = iter!{ i * j, for i in 0 .. 5, for j in 0 .. 6 }.to_vec();
}

This macro could also accept if clauses, like in Python and Haskell.
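A cut-down version of such a macro (one or two for clauses, no if filters) can already be written in stable Rust; this is only a sketch, not a proposed Prelude design:

```rust
// Minimal comprehension macro sketch.
macro_rules! iter {
    ($e:expr, for $i:ident in $it:expr) => {
        $it.map(move |$i| $e)
    };
    ($e:expr, for $i:ident in $it:expr, for $j:ident in $jt:expr) => {
        // The inner iterator expression is re-evaluated per outer item,
        // giving the Cartesian product.
        $it.flat_map(move |$i| $jt.map(move |$j| $e))
    };
}

fn main() {
    let data: Vec<u32> = iter!(i * j, for i in 0u32 .. 2, for j in 0u32 .. 3).collect();
    assert_eq!(data, vec![0, 0, 0, 0, 1, 2]);
}
```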

Python (and D) have simple means to join a lazy iterable of strings; this is a very common operation that I miss in Rust:

from itertools import imap
a_lazy_iterable = imap(str, xrange(10))
print "".join(a_lazy_iterable)

(In Rust you typically have to first convert the iterable into a Vec, which is wasteful.)
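At least for the no-separator case this is already avoidable: collecting an iterator of strings directly into a String skips the intermediate Vec (joining with a separator is what still needs itertools or a manual loop):

```rust
fn main() {
    // FromIterator<String> for String concatenates lazily produced pieces.
    let s: String = (0 .. 10).map(|n| n.to_string()).collect();
    assert_eq!(s, "0123456789");
}
```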

The way the D language denotes the length of the array with a $ is:

v[$ - 2] = 10;

In Rust I suggest to add a syntax like:

v[# - 2] = 10;

It’s very handy in a lot of cases (I discussed those cases in a past post, but so far nothing has moved on this handy feature). Currently you have to use v.len(), which is not DRY and much less ergonomic in many cases. The D language also offers a way to define the $ for library-defined matrices, and so on. This syntax is rather important if you want to use Rust for scientific code.
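For comparison, the current spelling hoists the length into a temporary, since writing v[v.len() - 2] = 10; directly can trip the (lexical) borrow checker:

```rust
fn main() {
    let mut v = vec![1, 2, 3, 4];
    // the length has to be read out before the mutable indexing
    let n = v.len();
    v[n - 2] = 10;
    assert_eq!(v, vec![1, 2, 10, 4]);
}
```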

New blog post about the lang team's initiative for this year

I tend to agree with your other points here. But my own experience is that it’s fairly likely for beginners to want to do FFI/interop, which can often result in needing to write one’s own Drop impls.


Good point. I think the disconnect is that there are multiple new user stories:

  • some new users want to write an application in Rust (a network service, a command line tool, a desktop app, etc)
  • some new users want to write a library or plugin to fit into some project they have in another language (whether that language is C or a scripting language they need a native extension for).

These two new user stories have very different needs. A user writing a whole network service in Rust is unlikely to care about Drop or #[no_mangle] for quite a while, whereas a user writing a native extension cares a lot about those, but not as much about how stable our async IO or serialization are.


I would like a way for an &'static str to be coerced to a String, ideally without allocation

I agree with the criticisms here - it’s not going to move this out of the first day of things you learn, but it’s going to add yet more complexity to the rules.

Further, this is just one corner of the problem. The last thing Rust needs is another rule that doesn’t generalise.

Fiddling around with integer sizes

This only upsets me with indexing. some_vec[my_u8 as usize] just feels like busywork.

language-level support for AsRef-like pattern

One particularly irksome instance is that you can’t do much with a temporary array. Since its IntoIterator is by reference, flat_map(|_| &[1, 2, 3]) doesn’t work. If it used AsRef for its impls, this wouldn’t be a problem. Rust’s stdin also has awkward semantics, since you can’t just chain io::stdin().lock(). Having lock be AsRef would solve that too.
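To illustrate the flat_map case: returning a borrowed temporary array is rejected, so today one workaround is to return something owned, at the cost of an allocation per element:

```rust
fn main() {
    // flat_map(|_| &[1, 2, 3]) fails: the array would be a temporary
    // owned by the closure. Returning an owned Vec works:
    let v: Vec<i32> = (0 .. 2).flat_map(|_| vec![1, 2, 3]).collect();
    assert_eq!(v, vec![1, 2, 3, 1, 2, 3]);
}
```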

Some language feature could improve things here quite easily.

lexical lifetimes on borrows, impl Trait

These are the two most major usability improvements IMO.


I think widening of integers and converting &'static str to String should be done with .into() instead of introducing more special rules.
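Both of those conversions do already work explicitly through From/Into today; the suggestion is about leaning on that machinery rather than new coercion rules:

```rust
fn main() {
    // lossless integer widening via From/Into
    let a: u32 = 10;
    let b: u64 = a.into();
    assert_eq!(b, 10);

    // &'static str -> String, likewise via Into
    let s: String = "hello".into();
    assert_eq!(s, "hello");
}
```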


Glad to see so much activity on this thread. =) I’ve been traveling and engaged in some all-day meetings and so forth this week, so I don’t have time for a lengthy post, but I wanted to just note a few things. With respect to some of the problems I mentioned, I agree with many of the concerns raised, and I’m sure when/if we get around to making specific proposals we’ll get into those details. I think we can find ways to overcome some of these objections.

I really liked @leonardo’s post, which focused in a lot of the kind of problems that they are encountering. I’d love to hear more of this. I also liked examples where there are things we’d like to type but no idea how to make it work out (e.g., let mut arr = [0u32; usize::try_from(n).unwrap()]; – I actually have some thoughts here that might support this).

It’s encouraging to see some dovetailing: for example, some kind of support for const generics would also allow us to address some issues around [T; N] types. For example, we could implement into_iter on such a type.


Yes please! I’d be happy to redo #32871 myself when integer generics are possible.


Re the string problem. I think a nice solution would be to add support for ‘from literal’. Then we can have a SimpleString crate which has a string type that newcomers can use without worrying about the more complex string types and which allocates all over the place. Opting out of the allocation means just not using that crate. Some support for creating a SimpleString (or Str or whatever) from a string literal token would be needed to make this transparent.

Support for custom literals would also be useful for BigNum types and other numerical libraries.

In terms of implementation, the first thing to come to mind is to allow crates to provide the functionality as a procedural macro (tagged with an attribute for identification), that is then run normally (though implicitly) in macro expansion. The downside of this is time to stabilisation, but since the macro needs would be very simple, we could fast track it in the same way as custom derive.


Looking at the big picture/requirements rather than solutions: I think it is great to push on this stuff. I’m especially keen to see us tackle some of the smaller, low-hanging-fruit solutions rather than sweeping changes. I think focussing on ergonomic paper cuts is the way to get big returns for relatively little effort and with relatively few risks to the language. In contrast, I worry deeply about adding to our complexity budget with big new features.

The fact that string literals have type &'static str which cannot be readily coerced to String

ref and ref mut on match bindings are confusing and annoying

lexical lifetimes on borrows

Fiddling around with integer sizes

These all seem like pure win to me. NLL is a big item, but I don’t think it significantly affects the complexity of the language (for users; clearly it complicates the implementation). However, I think we have significant non-technical debt in terms of documentation and tooling for lifetimes/borrowing, and I think this feature adds to that debt. We should prioritise paying some of it off too.

References to copy types (e.g., &u32), references on comparison operations

I think this is a worthy target, but whether it can be addressed depends on the solution. If there are nice solutions, it seems like something good to do.

Some kind of auto-clone

This I’m not keen to think of as a goal. If we want to tackle the higher-level space, then I think we need to think hard about a holistic strategy rather than polishing specific pain points in ways that might be detrimental elsewhere. Put another way: although it is annoying, I don’t think cloning Rc/Arc is a pain point for most Rust code today.

By strategy, I mean that maybe for higher-level programming we want a new type like Gc which doesn’t need cloning, or even a new language (RustScript!) or language mode. I.e., there may be bigger language changes that give the desired ergonomics here.

#[derive(PartialEq, Eq, PartialOrd, Ord, Copy, Clone, etc)]

Derive shorthands seem like an obvious and low-cost thing to add; I’d love to see this.

language-level support for AsRef-like pattern

Could you expand on why you think this needs language-level support? My experience has been that (mostly) it works smoothly as is. The risk/reward on this one seems much higher than the others.

lifting lifetime/type parameters to modules

I would like to see this, but it does seem to add to the complexity budget a bit. I’d be happy to put this one off for a while longer.

inferring T: 'x annotations at the type level

Would be great to play around with elision/inference here.

explicit types on statics and constants

I think we could cautiously add more inference here, perhaps starting by bailing if we need cross-item dependencies?

trait aliases

Trait aliases (something like type but for traits) seem quite different from inferred bounds; why do you think they are related? I’m very keen on trait aliases, not so sure about inferring bounds.

We’ve also discussed type and lifetime parameters on impls as a paper-cut we might tackle.

I’ve warmed up to the idea of getting rid of extern crate, although I’m still not very convinced it would buy us a lot.

Seems worth thinking about if we can simplify the module rules too.

I would also like to tackle some of the boilerplate around structs (derive(new), default values for fields), and add enum variant types (currently blocked on finalising the default generics stuff).

I want enough of CTFE stabilised to have RefCells in statics.

#[transparent] (#1744) for easing the ugly stack traces we give out.

On the bigger items, I strongly think we should focus on finishing off the stuff we have in flight, rather than adding to that mountain. It would be much better for the language to finish and stabilise specialisation, impl trait, default generics, CTFE, allocators, etc. than to start on const generics, virtual structs, or HKT-ish stuff.


another ergonomics nit (and related to virtual structs): I would like to have some solution to the ‘newtype deriving’ problem, i.e., some way to write an opaque type with no access to the internals (i.e., like a newtype, unlike a type alias), but that allows access to all public (or accessible) fields/methods without using Deref (which is inappropriate and has problems with priv type in pub signature errors) and without writing a bunch of boilerplate impls.


The recent trend of trying to redirect Rust from a fast, safe systems programming language to a language focused on developer productivity is worrisome. One example is the requests to reduce a large portion of the explicitness in the language, in order to either type fewer keystrokes or to avoid thinking about how the work is done under the hood. It’s fine if someone who doesn’t care about low-level details finds a way to write fast code, or if a Rust beginner trips less over the compiler; but if this means the compiler becomes more willing to accept inefficient programs, or it becomes harder for programmers to spot previously easily noticeable bugs, then we should have second thoughts. Rust is “the” systems language that a lot of people have been hoping for for a long time, and it has the potential to fundamentally change the landscape of the critical infrastructure of this information age. No second language in sight has the ability to do this. Meanwhile, while it is nice to have yet another high-productivity language, that need is arguably less urgent in an era when multiple high-quality, successful dynamically typed or garbage-collected languages already exist.


I absolutely agree, and feel the same way. That said, to change sides for a moment: the more people using Rust, the stronger the network effect can be. More community resources, more publicity, more available programmers, more jobs… all of those would possibly be helped by helping more people use Rust. And the more people using Rust, the fewer projects that may need to turn to unsafer languages for performance, or slower languages for safety.

Plus, I also believe there’s a point of diminishing returns on explicitness: at some point, you start skimming or cutting corners because it’s all just too much to deal with.

It’s a tightrope walking act, really. I think we need people arguing both sides.


Modern languages have shown that you can design a safe systems language that’s also sufficiently handy to use and sufficiently predictable. This also allows a lot of people to avoid switching to a more convenient language: you can use Rust for more purposes.

Your concerns are reasonable, but they should be applied to each specific feature we want to change or add, and not as a blanket over the whole effort of trying to make Rust more handy.


I’m 100% sure I can speak for the rest of the language team, and probably the core team too, that we have no intention at all of making Rust any less of a “fast, safe systems programming language”. While we do want to focus on developer productivity, we will only do so where we can make programming in Rust more productive as well as keeping it performant and safe. We strongly believe you can have all three.

Of course opinions may differ about exactly how we keep being fast and safe at the same time as improving productivity, and that is why we have these discussions in public, to make sure we are doing the right thing.


We don’t have any particular feature proposal to discuss the trade offs of, so I think a lot of the responses on this thread are premature. Maybe what you think you don’t like about an idea in Niko’s post isn’t actually true of the proposal he has in mind. We can’t talk about the performance implications (for example) until we have actual RFCs.

I think the thesis question of this thread is this:

  • Is Rust as accessible and easy to use as we want it to be? If not, is it worth putting focus on solving that problem this year?


All things considered, what I want most is that Rust not add more implicit features or coercions. I could say (or rant) more, but perhaps it’s best to just leave it at this: implicit/coercion == bad.


I’m interested in what people mean by productivity. Given that I feel most productive in C++, I don’t quite understand what other people want or how many of the requested features even achieve this.

Interestingly, you didn’t mention what was said about Rust vs Go in terms of learning curve, etc:

However, this does show that Rust’s infamous learning curve isn’t that bad, especially compared to the much touted simplicity of Go. Furthermore, becoming familiar with a language is basically a one time cost. I wouldn’t be surprised to find that if this experiment were done by someone who is expert in Go and Rust instead of a beginner, the Rust version would be completed faster than the Go version.

Initially, I thought having a #[derive(ValueType)] would be nice too, until I realized that perhaps ValueType may not end up being what I want it to mean. And then I started thinking about the two usability scenarios for derive, which are writing and reading.

When writing a derive, in almost all cases I need to find the docs (this should be easier, since it’s in neither reference (libs or language) and instead is only in the book) for what things are supported, and then I need to determine whether my type needs a custom implementation or whether the derived one works. At this point, I fail to see how not having to type out each one results in a win. The only scenario that results in a time saving is that you blindly type #[derive(ValueType)], it works without complaint, and you don’t have to think about it anymore. But I personally wouldn’t want to optimize for that.

When reading, however, I’d much rather have a list of traits that are derived than have to read the documentation to figure out whether I remember exactly which incarnation of ValueType Rust decided to use.

So, what’s the value of a shorthand when it complicates reading and doesn’t significantly change writing?


This. As for writing, this is a problem which should be solved by tools. I want to type out,

struct Foo {
    a: String,
    b: i32,

have a light bulb appear near this struct, hit Alt+Enter and get #[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)] above the struct. This is not quite possible yet, because IntelliJ Rust does not yet know whether it is possible to derive a trait for the struct (we will try to implement this), and the idea of such intentions and quick fixes is not present in RLS/racer (please correct me if I am wrong; I have not double-checked this). Though even simple syntax-based assistance can make writing derive more pleasant: an intention to add the #[derive()] itself + completion for common traits (a gif of a similar feature in IntelliJ Rust) + completion of pairs of traits (TODO).

This is just an example though. The main theme is that it is the tooling, and not the language itself, that defines productivity. Java is a very verbose language, but you can be extremely productive in it if you use Eclipse or IDEA. And you don’t even need to be an expert, because these tools, unlike the language, are automatically discoverable.

To be clear, Rust is concise and dense, and this is great, but adding alternative ways to do something just because the proper tooling is nonexistent at the moment is a bad idea imo.

Too bored to type all the derives? Let the tool type them for you, because it knows which traits are automatically derivable.

Have “Hello, world”, but want String? That’s easy, it is a compiler error, so hitting Alt+Enter should just add an into() call.

Accidentally moved out of a struct with a pattern? That’s easy too: Alt+Enter, and here is your ref. And if you want to get fancy, add an intention to switch all the patterns in the match between destructuring moves and inspecting refs.

Is there a &&i32 in your chain of iterators? Warn users if they match it as x and not as &&x in the lambda argument, and suggest a quick fix.

The problem of productivity and learning curve is huge, but imo the solution is not more language design but better tool support. (And tools are not just about completion; hippie-expand works great in practice! Tools are about helping you look at your program as a Rust program, and not as a sequence of Unicode characters.)

Looks like I’ve got carried away a bit and even wrote a post without a smiley :fearful: