Setting our vision for the 2017 cycle

Can you say a bit more about what you are thinking of when you say “project infrastructure”? When reading what @aturon wrote, I was primarily thinking of things like bors and our own internal CI infrastructure, but it’s not clear to me how that would help adoption (except indirectly by making Rust better).

I feel like we had worked out a fairly decent story here at some point, but we’ve stalled in terms of pushing the various pieces. Specialization and “fields in traits” were intended to be starting points, for example. I’ve got to revisit the “fields in traits” RFC in any case…

I like these ideas, and also the similar guide mentioned earlier in this thread: an “If you’re used to inheritance, here’s what to do in Rust instead” kind of guide. My main concern with this approach is making such guides accessible. Should we just let these guides appear on the interwebs naturally? Should there be a curated list of guides? What’s the story here? And for someone coming from another language, where do they start?

If it frees up the core team’s time because more of what they do is automated, I could see how that would contribute to Rust indirectly, but beyond that I don’t see how it would increase adoption, other than by enabling more features to be created.

These vision statements are really good, and I think cover all of the important bases.

On the learning curve, I think it’s important that we also consider significant investment in tooling, e.g. rustdoc. I’ve heard @steveklabnik mention that it doesn’t get much love, and that feels like a pity. One example: the Windows docs for std are currently not available online, because rustdoc will only build the docs for the current platform (or, more precisely, for the currently active set of conditional-compilation conditions). How are Windows/OS X users supposed to learn std in this situation? Another: how do you document module items whose definition depends on conditional compilation? Do I have to copy the docs for each platform?

If the current order of the vision statements implies prioritization, I would prioritize the FFI story more. In trying to sell Rust, integrating well with already existing/used technology is incredibly important for making an exploration of Rust as a technology low-risk.

One other thing that came up in threads on this forum recently: it would be an interesting challenge to try to organize a Rust fund to help finance the library ecosystem. I came up with a concept around “turning precious metal into Rust”: a yearly pitch to stated Friends of Rust (I thought up all kinds of puns, too, but forgot them) to contribute on the order of 1k/10k/25k to a funding team that would then approve grant proposals from the community. It might hinge, to some extent, on demonstrating the community’s independence from Mozilla.

Or making it much simpler and shorter to define an insecure-but-fast hash.

This seems like a nice and simple idea.

Isn’t this short enough? https://doc.rust-lang.org/std/collections/struct.HashMap.html#examples-3 You just have to use the desired hasher instead.

Could you elaborate? What about no_std doesn’t work?

It’s not easy and not short.

So, I haven't spoken with the docs team about all of this yet, given that it was just posted, but here's my own personal thought:

For the docs team, I would hope that we can manage to switch from reactive to proactive.

What do I mean by this? Well, ever since I came on board, I've been in 'reactive mode.' "Here's the language and standard library, get to work." Consider it like a consumer/producer problem:

The "producers" are the people working on things: the language, libraries, etc. Once they've produced their code, it goes into the buffer. The consumer is whoever writes the docs: something gets popped off the queue, worked on, and documented.

The problem is, with many more producers than consumers, the buffer fills up. This leads to bugs like "Ensure accepted RFCs are all in the manual and elsewhere" (rust-lang/rust#20137), and you can see what happened there.

So, what to do? Well, now that it's not just me, I'm hoping that we as a group can finish off the backlog in the buffer. Once we do, we can stop fighting fires and start taking a more proactive stance on docs. Some of that is tied into all the other goals, for example. So, let's take one of them: FFI. We could have a whole little mini-book on FFI and Rust, and how to make it good. But I don't have time for that; the standard library still has functions with no docs. This would be the "proactive" mode: we're focusing on what docs the ecosystem needs to have, rather than just catching up on a huge backlog.

The next level is getting rid of the producer/consumer issue entirely: @chriskrycho opened "RFC: Require documentation for all new features" (rust-lang/rfcs#1636), which basically says "nothing can ever go into the buffer". In order to manage that, we as a team will have to help others help us by getting more people to write docs, even if they don't usually write them. We want to keep high standards, yet not let that block actually shipping.


Now, for some replies:


Could you elaborate on what you mean here? What doesn't work about it?

I think this post by @wycats is a big part of it:

One argument passed for the hasher isn't short enough? What would be your ideal syntax here?

I think this post covers a lot of the problems one runs into when turning off std. I would consider making an OS about as bare-metal as you can get, and it's also what you run into on embedded development systems which have no OS. One shouldn’t have to use things like ralloc just to get it to work. These are the kinds of problems I meant by workarounds: you can use no_std, and there are solutions, but they’re not baked into Rust without finagling. http://os.phil-opp.com/set-up-rust.html

One thing I wonder, actually: how feasible would it be to (in essence) do “symbolic execution” of #[cfg] directives? Specifically, have rustdoc “branch” on encountering a #[cfg] directive, generate documentation under both alternatives, and emit fragments for each such section. That would then allow selecting from a menu of #[cfg] options when reading the docs.
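To make the branching idea concrete, here's a sketch of the kind of #[cfg]-split API rustdoc would have to handle (the function `path_sep` is made up for illustration): today only the variant matching the active cfg set is compiled and documented.

```rust
/// Returns the primary path separator for the target OS.
/// A hypothetical #[cfg]-split item: rustdoc today documents
/// only whichever branch matches the current target.
#[cfg(windows)]
pub fn path_sep() -> char {
    '\\'
}

#[cfg(not(windows))]
pub fn path_sep() -> char {
    '/'
}

fn main() {
    // Under the proposal, docs for *both* branches would be generated,
    // and the reader could pick a cfg configuration from a menu.
    println!("path separator: {}", path_sep());
}
```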

In terms of phrasing this as a vision statement, “Users of any platform should be able to easily read the documentation relevant to a crate as they would actually use it.”

Do you have a complete, correct, up-to-date usage example somewhere on the Rust site or elsewhere on the internet? Just showing the first step and asking readers “to use the desired hasher instead” isn’t enough, in my opinion. On that page I’d like to see a complete example of fast hashing of a simple struct (like a struct { a: char, b: f64 }). Once the example is complete, I think the resulting code is neither short nor simple.
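For reference, a complete (if toy) sketch of what that looks like today. `TrivialHasher` is a made-up, insecure example hasher, and the struct uses u64 rather than f64, since f64 does not implement Hash (because NaN != NaN):

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// A deliberately trivial, insecure-but-fast hasher (made up for
// illustration): folds each byte into a u64 with multiply-xor.
#[derive(Default)]
struct TrivialHasher(u64);

impl Hasher for TrivialHasher {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 = self.0.wrapping_mul(0x0100_0000_01b3) ^ b as u64;
        }
    }
}

// Note: u64 instead of f64, because f64 is not Hash.
#[derive(Hash, PartialEq, Eq)]
struct Key {
    a: char,
    b: u64,
}

fn main() {
    // HashMap parameterized over the custom BuildHasher.
    let mut map: HashMap<Key, i32, BuildHasherDefault<TrivialHasher>> =
        HashMap::default();
    map.insert(Key { a: 'x', b: 7 }, 1);
    assert_eq!(map.get(&Key { a: 'x', b: 7 }), Some(&1));
}
```

Even as a sketch, it's a full Hasher impl plus a type-level BuildHasherDefault annotation just to swap the hash function, which I think supports the point that this is neither short nor simple.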

I am not a Rust programmer; I only know some C++. Not sure if my idea makes any sense…

I have heard the question: “Can Rust do OOP?” The answer seems to be yes and no. There is the classic book “Design Patterns” by the Gang of Four. My idea is a guide that says: when you want to implement a certain design pattern (from this book, or other newer design patterns), what is the best way to implement it in Rust? And when it is not possible to implement it directly, what should be done instead to achieve the same code quality or the same goals as the pattern? The guide should at least cover the design patterns in the book mentioned; the more, the better.
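As one data point for what such a guide could contain, here's a hedged sketch (all names made up) of the GoF Strategy pattern in Rust: instead of subclassing an abstract base class, the interchangeable behavior is a value implementing a trait (or simply a closure).

```rust
// GoF "Strategy" in Rust: the varying behavior is a trait,
// and each concrete strategy is a type implementing it.
trait Greeter {
    fn greet(&self, name: &str) -> String;
}

struct Formal;
impl Greeter for Formal {
    fn greet(&self, name: &str) -> String {
        format!("Good day, {}.", name)
    }
}

struct Casual;
impl Greeter for Casual {
    fn greet(&self, name: &str) -> String {
        format!("Hey {}!", name)
    }
}

// The "context" accepts any strategy via a trait object.
fn announce(strategy: &dyn Greeter, name: &str) -> String {
    strategy.greet(name)
}

fn main() {
    assert_eq!(announce(&Formal, "Ada"), "Good day, Ada.");
    assert_eq!(announce(&Casual, "Ada"), "Hey Ada!");
}
```

A guide like the one proposed could walk through each pattern this way, noting where Rust's traits, enums, and closures replace inheritance-based machinery.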

I assume you’re specifically referring to the “fixing linker errors” section? That seems to me a totally reasonable thing when you’re working bare-metal. In my experience, regardless of whether you’re working in C, assembly, or Rust, you always need some boilerplate code and linker scripts when doing OS dev.

I would love to see firmer support for the existing non-x86 architectures. I feel a bit guilty that I’m only building i686 and x86_64 for Fedora so far, but those are the only Tier 1 targets. Perhaps I should bite the Tier 2 bullet for aarch64 and armv7, but the rest don’t have bootstrap binaries for rustc and cargo; I’d hope for ppc64 and ppc64le next.

I can see that. I guess it would just be nice to have it just work for the target platform without the extra bit of linking setup, e.g. when compiling for Arduino with no_std, rustc would know what to do without that extra work. Making it more accessible to people who just want to code on a system, without having to figure out how to configure it properly, could go a long way towards adoption of Rust in that ecosystem.
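For comparison, this is roughly the manual configuration that "just working" would replace: a sketch of a cargo config assuming an ARM Cortex-M target and a `link.x`-style linker script supplied by a runtime crate (the target and script name are illustrative, not a recipe).

```toml
# .cargo/config (sketch): pick the bare-metal target explicitly
# and pass the linker script the runtime crate expects.
[build]
target = "thumbv7em-none-eabihf"

[target.thumbv7em-none-eabihf]
rustflags = ["-C", "link-arg=-Tlink.x"]
```

The wish upthread is that selecting the target alone would be enough, with the linking details handled by the toolchain.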

In terms of compiler errors, linker errors are currently the weakest: it’s just “we ran this unreadably long command and something didn’t work”.

I know it’s probably hard to fix, and it’s not rustc's fault but the rest of the toolchain's, but still: these errors happen, and it’s often hard to figure out why.

I don’t know whether the linker output is unambiguously parseable, or whether there is some way to make it emit something machine-readable, but it would be very helpful if rustc could tell you which specific crates have unresolved symbols.

But that’s implementing a new hasher, not using a different one. I would argue that people with enough know-how to implement a hash function themselves are perfectly capable of implementing Rust’s hasher interface.
