Proposal: The Rust Platform

I don't understand this reasoning. Not endorsing a library is not some sort of punishment (surely someone who doesn't manage such a crate is not an 'outcast'), and the Rust team has the right to define the qualities of a library which are the minimum for endorsement, and stability and backward compatibility are very reasonable qualities to require.

There are existential questions about why anyone would want to maintain an open source library at all, but those seem outside the scope of this conversation.

I certainly agree this would be awesome. My experiences with other ecosystems have not made me confident that it happens often, though :confused:


This might be a simplistic question, but I'll ask it anyway:

Say my library is already using a large fraction of the rust-platform, as individual dependencies. Will there be any advantages in "migrating" to the rust platform? The obvious one seems to be that I will no longer have to investigate any compatibility issues between various crates. But are there any others?

A number of good points have been raised above and elsewhere, so I'll try to avoid repeating those and just offer some brief suggestions. Note first that I'm not hostile to the ideas raised here, but I don't think they're particularly necessary, and I think some of the other pain points brought up here (e.g. the UI on crates.io) and on HN and Reddit are at least as important as whether we have a blessed meta-package or not.

  1. I strongly endorse the idea of enabling "meta-packages" without expressly endorsing any. Making the build and packaging tooling robust around this will enable the community to coalesce around the things which are valuable, and those can overlap in unique ways. Some of the best stuff going on in the Python ecosystem is around exactly these kinds of community-driven sets of libraries/metapackages, but the official tooling has lagged so badly that entire alternative distribution tools have grown up around them (e.g. Conda).

  2. Put a high priority on supporting and shipping the in-progress tool for hosting docs in a centralized location (presumably, ultimately, as part of crates.io), and work at a cultural level to make that the default location for documentation.

    • This will help enormously with discovery, as well as providing the opportunity for integration with the "docset" tools out there (Dash/Zeal/etc. and anything compatible with them). I find that really nice when I want to look something up for Elixir: you can just pull up hexdocs.pm, and those docs can be integrated into any docset tool.

    • As part of that effort, build tools to support the docs/various-content.md model which is used for the Rust docs themselves (which both @steveklabnik and I have taken abortive stabs at in the past) and make those integrate with the documentation hosting as well.

    It is hard to overstate how significant an impact I think this will have on the community.

  3. Look at building out something like Ember Observer. When I'm evaluating an Ember add-on for integration into our application, I evaluate it there first. It is a community-maintained resource, and there are definitely some things that would need to be handled differently for Rust crates than for Ember add-ons, but there are more than a few similarities, too.

Each of those could very well be integrated with the broader idea you've thrown out, but they would also be useful regardless of whether any official Rust Platform ships, and would help address many of the same underlying issues.


Separate from all of that: I think the idea of shipping a small set of standard tools (rustfmt, clippy, etc.) is great, and that should definitely happen. Small support applications feel very different to me (and, I suspect, to most other developers) than do libraries, especially libraries which can basically function as standard libraries.


And, even if a library were at 1.0, if a Rust platform became available and I used it, I'd expect to be able to continue using it for a reasonable amount of time, but inclusion in a platform doesn't mean the maintainer has the bandwidth to support an LTS version of their library with only cherry-picked fixes (and there's at least one popular open source project with plenty of resources that simply doesn't 'do' patch branches: if you're not on the latest version and are having an issue, tough). So now you need to decide what compatibility guarantees a platform makes and figure out whether included projects can apply for exceptions, either because of important fixes or lack of resources. If I were the maintainer of a popular library (I'm not, so I may be wrong), I might actively not want to be included in a platform due to the possible additional burden.

Taking a thought from upthread: potentially, if some team or company wants to make a platform, they could release a -platform crate and just pub extern everything. At that point, I would consider them to have agreed to act as the gatekeepers for any issues that require them to negotiate with upstream library authors, and possibly to branch off themselves. If this (being willing to shoulder work themselves, rather than always deferring to or pushing work upstream) seems like a huge burden and nobody wants to do it, maybe that's a red flag!
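As a minimal sketch of what such an unofficial facade crate could look like (the acme-platform name and the re-exported crates are purely illustrative; its Cargo.toml would pin the member versions):

// lib.rs of a hypothetical `acme-platform` facade crate: it pins its
// dependencies in its own Cargo.toml and simply re-exports them, so a
// downstream user depends on one crate and inherits one coherent version set.
pub extern crate libc;
pub extern crate regex;
pub extern crate serde;

// A downstream crate would then write, for example:
//     extern crate acme_platform;
//     use acme_platform::regex::Regex;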

The dependency bloat problem (mentioned upthread) also worries me. I've never written a line of Haskell, and my single experience with it was downloading it to build a tool, at which point it seemed like half the world was being compiled and I ended up with a 100MB binary. Some crates feel like they're on this road (which I can live with; the creators of these crates are delivering immense value), but I don't know if the tradeoff is right for a platform (even with pre-compilation).

I haven't really mentioned anything other than libs (e.g. the compiler, cargo, and particularly cross-compilation targets) because I think there's important lower-hanging fruit/prerequisite work for all of them: third parties right now can't easily or officially swap out or release replacement parts of the toolchain, and a platform feels like it'd add to this list.

That's basically what happens anyway when an enterprise/LTS distro includes a set of packages. It becomes a "platform" which that distro promises to maintain.


Yes, I had Linux distros and Rails LTS in mind for this bit. But the blog post isn't keen on the libs team doing it, for a couple of reasons. My off-the-cuff suggestion was for 'lib platforms' to always be unofficial (just a normal crate, maybe with some bells and whistles) and to require whatever team or company is doing it to have the resources to support a distro model (rather than the proposed libs team collaboration model). I have a few reasons for not being a fan of the collaboration idea, but I'll wait for the next blog post.

I like this approach (better than the simpler version I proposed on Reddit). Also, the reason I prefer this over the simple meta-crate alternative that many others brought up is the fine-grained control over which crates are fetched and compiled. I have a few things to add (let's call such a tight group of crates a "clique"):

[cliques]
platform = "2.6"
security = "1.1"
piston = "1.9"

[dependencies]
libc = "platform"
regex = "platform"
crypto = "security"
...

A few more advantages apart from the ones I mentioned on Reddit:

  • Distributed curation of various cliques
  • Minimized effects of "endorsement"

But of course this brings an additional layer of version conflicts and adds complexity, such as crate collisions (with conflicting versions) between cliques. One way to adapt to this is to strictly follow a few conventions. A couple that come to mind:

  • Every clique version should be compatible with at least one version of the "platform".
  • Every clique version should maintain a list of rustc versions it compiles with.

To keep clique maintenance easy, the "platform" clique should contain only a very few, extremely popular and useful crates (maybe around 10-20) with a high degree of maintenance guarantee.

To make this easy for general users, maybe cargo could suggest clique/crate versions when there are conflicts, based on compatibility metadata with rustc and the platform.

Update: -ed!

Update 2: – OR –

Avoid having library users deal with such complexity at all, and let these "cliques" be used exclusively for automatic version matching and suggestion by cargo.

The users would not specify which cliques they depend on; instead, all crates would carry membership metadata for the various cliques, for cargo's suggestion logic to use.

PS: Sadly, the documentation concern is completely ignored!

I would point out that there are a few algorithms around "scoring", and simply counting the number of +/- votes is just the simplest of them all.

Be it clicks or downloads or whatever, it is relatively easy to weight them by when they occurred (so that the further in the past they occurred, the less weight they carry).
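As a minimal sketch of that kind of time-decayed weighting, assuming a simple exponential decay (the event times and the 90-day half-life below are purely illustrative):

use std::time::{Duration, SystemTime};

// Weight of a single event (download, click, vote, ...) at `now`,
// halving every `half_life`.
fn decayed_weight(event_time: SystemTime, now: SystemTime, half_life: Duration) -> f64 {
    let age = now
        .duration_since(event_time)
        .unwrap_or(Duration::ZERO)
        .as_secs_f64();
    0.5f64.powf(age / half_life.as_secs_f64())
}

fn decayed_score(events: &[SystemTime], now: SystemTime, half_life: Duration) -> f64 {
    events
        .iter()
        .map(|&t| decayed_weight(t, now, half_life))
        .sum()
}

fn main() {
    let now = SystemTime::now();
    let day = Duration::from_secs(24 * 60 * 60);
    // Three downloads: today, 30 days ago, 180 days ago.
    let events = [now, now - 30 * day, now - 180 * day];
    // With a 90-day half-life, older downloads count for progressively less.
    println!("score = {:.3}", decayed_score(&events, now, 90 * day));
}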

Similarly, there are consensus algorithms which use probabilities to determine whether the votes gathered so far can be used to derive a significant "average" vote or not (yet): the goal is to avoid having a single 5-star vote push a library into first position, or having one 5-star and one 0-star vote combine into a 2.5-star average, and instead wait for more votes to settle.
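One common way to get that "wait for more votes" behaviour is a Bayesian average, which shrinks a crate's mean rating toward a prior (e.g. the site-wide mean) until enough votes have been gathered. A minimal sketch, with an illustrative prior mean and prior weight:

// Bayesian average: the prior acts like `prior_weight` phantom votes at
// `prior_mean`, so a handful of real votes cannot swing the score much.
fn bayesian_average(ratings: &[f64], prior_mean: f64, prior_weight: f64) -> f64 {
    let n = ratings.len() as f64;
    let sum: f64 = ratings.iter().sum();
    (prior_weight * prior_mean + sum) / (prior_weight + n)
}

fn main() {
    let prior_mean = 3.0; // assume the site-wide average rating is 3 stars
    let prior_weight = 10.0; // a single rating moves the score only slightly

    // One 5-star vote does not shoot the crate to the top...
    println!("{:.2}", bayesian_average(&[5.0], prior_mean, prior_weight)); // 3.18

    // ...and one 5-star plus one 0-star does not settle at 2.5 either.
    println!("{:.2}", bayesian_average(&[5.0, 0.0], prior_mean, prior_weight)); // 2.92

    // Many consistent votes eventually dominate the prior.
    let many_fives = vec![5.0; 200];
    println!("{:.2}", bayesian_average(&many_fives, prior_mean, prior_weight)); // 4.90
}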


I personally think that there are various metrics which matter, with regard to a library:

  • its maturity/stability: is it likely to evolve into something very different?
  • its overall quality (generally linked to maturity): how happy are the users, how many bugs are open, how old are they, ...?
  • its activity: an inactive library can be a risk (no maintainer, no one remembers how it works)
  • ...

The score will generally correlate with the happiness of its users (though often only during the honeymoon period, when the tougher issues may not have surfaced yet).

On the other hand, the endorsement by a large community (as materialized by inclusion in a well-regarded package) speaks to maturity, stability, and a commitment to maintenance.

For winapi I am holding back from 1.0 because I don't feel I have perfected the design yet, and that is mainly due to there being a lot of features I want that aren't implemented yet such as unions and alignment and packing and bitfields, many of which even have accepted RFCs! The Rust Platform would do nothing to fix that. The only thing that would fix that is for MIR to stop blocking everything and all the features I need to finally be implemented and stabilized. Then I can finally settle on a good design and hit 1.0.
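For context, the kind of layout control being referred to looks roughly like the sketch below; the type names are purely illustrative (unions, repr(packed), and repr(align) have since been implemented and stabilized, while native bitfields still have not):

// Illustrative FFI-style layout features: a C-compatible union, a packed
// struct, and an over-aligned struct. Names here are made up for the example.
#[repr(C)]
#[derive(Clone, Copy)]
struct Parts {
    low_part: u32,
    high_part: i32,
}

#[repr(C)]
union LargeIntegerParts {
    parts: Parts,
    quad_part: i64,
}

#[repr(C, packed)]
struct PackedHeader {
    tag: u8,
    value: u32,
}

#[repr(C, align(16))]
struct Aligned16 {
    data: [u8; 16],
}

fn main() {
    let li = LargeIntegerParts { quad_part: 0x1_0000_0001 };
    // Reading a union field is unsafe; type punning is the whole point here.
    let low = unsafe { li.parts.low_part };
    println!("low part: {:#x}", low); // 0x1
    println!("size of PackedHeader: {}", std::mem::size_of::<PackedHeader>()); // 5
    println!("align of Aligned16: {}", std::mem::align_of::<Aligned16>()); // 16
}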

I would love it if crates.io had some way to see popular keywords so I could know which keywords I should match up to. The case sensitivity of keywords is also kind of awkward.

I like this idea.


To expand on the question of why so many crates aren't 1.0 yet: 95% of existing gamedev-related crates today will segfault if you do something wrong (i.e. functions are marked safe when in reality they are unsafe). Publishing a memory-unsafe crate is okay, but version 1.0 usually means that the crate is production-ready, which is obviously not the case if it can crash at any time.

I've posted a follow-up as a separate Discuss post. Thanks again for the feedback, everyone!

I'm imagining something akin to Ruby Toolbox. It categorizes gems and shows statistics like how many favorites and downloads gems have, how recently it's been updated and how active it is in general, etc. These statistics can help people discover gems for the use cases they need and perhaps some ways to choose among the similar ones. There's no curation or official blessing involved.

