Follow up: the Rust Platform


One heads up: I’m going to be away for the next 1.5 weeks, but I wanted to get out a response to all the feedback before I left. I’m looking forward to jumping back into this discussion soon!


This looks really great @aturon. Thanks for staying on top of the feedback and keeping the conversation moving and focused.


and we should revive efforts to gather guidelines in a central place.

How about we add a conventions directory to src/doc in the Rust repository?

I’ve already put a loose hanging doc in src/doc, so it’s not without precedent to store them there.


Where does stdx fit in with all of this? Maintenance seems to have stagnated, but it shares a number of goals with the proposed “platform”. Is there some fundamental reason why that is nonviable?

On an unrelated note, I would love central doc hosting. Documentation hosting is a real pain to set up right now. I’m going the auto-upload gh-pages route, but I decided to use a different upload key for each repo and have each upload to its own gh-pages branch instead of a single docs repository. That seemed like the more robust solution at the time, but it’s painful to set up: each repo entails generating a new RSA key, encrypting it with Travis, and modifying the scripts to use it. I could probably write a shell script to automate it, but it generally feels like it shouldn’t be a permanent solution.


I’ve brought this up before, but I think there are language changes that could make this part of the goal easier to accomplish. My go-to example here is something like Serde or Diesel deserializing dates. chrono has become the de facto standard for this, but right now the language forces unnecessary dependencies/coupling because of the orphan rule. An implementation of Deserialize or FromSql<DateTime> for chrono::NaiveDateTime has to live either in the crate providing the trait or in chrono itself. There’s no reason for a deserialization or persistence library to care which date library the community uses, and there’s certainly no reason for a date-time library to care about these concerns. The ability to have a separate serde-chrono or diesel-chrono crate as the glue layer would make this vastly easier, but it would require an escape hatch for the orphan rule.
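To make the coupling concrete, here is a minimal sketch of the glue impl in question, using local stand-in modules in place of the real crates (`serde_like` for serde, `chrono_like` for chrono; all names here are illustrative, not the real APIs). It only compiles because both the trait and the type are local to this crate; in a real third-party `serde-chrono` glue crate, this exact impl is what the orphan rule forbids.

```rust
// Stand-in for the serde side: a deserialization trait.
mod serde_like {
    pub trait Deserialize: Sized {
        fn deserialize(input: &str) -> Option<Self>;
    }
}

// Stand-in for the chrono side: a date-time type.
mod chrono_like {
    #[derive(Debug, PartialEq)]
    pub struct NaiveDateTime {
        pub secs: i64,
    }
}

// The impl a `serde-chrono` glue crate would want to write. With the real
// crates, both the trait and the type are foreign, so the orphan rule
// rejects it; here both are local, so it compiles.
impl serde_like::Deserialize for chrono_like::NaiveDateTime {
    fn deserialize(input: &str) -> Option<Self> {
        input
            .parse::<i64>()
            .ok()
            .map(|secs| chrono_like::NaiveDateTime { secs })
    }
}

fn main() {
    use serde_like::Deserialize;
    let dt = chrono_like::NaiveDateTime::deserialize("42").unwrap();
    assert_eq!(dt, chrono_like::NaiveDateTime { secs: 42 });
}
```

With the real crates, the impl has to move into chrono (behind a feature flag) or into the trait’s crate, which is exactly the unwanted coupling described above.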


Seems reasonable (though personally I’d like to see user-facing docs moving out of the main repo). The most difficult part about standardizing conventions is the politics though.

It’s conceptually a precursor. At the moment it can’t be used as an extended standard library because suitable technology doesn’t exist in the stack. Needs cargo metapackages. But it or something like it could fill the same role as the proposed platform metapackage, unofficially.

There’s a promising, completely automatic solution for this waiting in the wings. Not sure if it’s been announced publicly yet, but it’s going to be pretty nice.


I mean, someone has to care. If we were to say that that someone should be a third party, we run into huge problems where you and I can both act as that third party, splitting the ecosystem, because our libraries are totally incompatible (you can’t even build a crate that merely depends on both of them, with no code at all!). This is even worse than the situation we have right now.

As always, I think modifying the coherence rules to enable more blanket impls is the solution here. We should have a base library (std, presumably) defining a trait DateTime, and serde should define how to Serialize any T: DateTime, operating against that abstract interface. The most effective strategy I know of for this is mutually exclusive traits.
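A minimal sketch of the blanket-impl half of this idea, with stand-in trait and type names (the mutually-exclusive-traits machinery needed to make several such blanket impls coexist is not shown, since it doesn’t exist in the language today):

```rust
// Stand-in for a std-defined abstract interface for date-time types.
trait DateTime {
    fn timestamp(&self) -> i64;
}

// Stand-in for serde's serialization trait.
trait Serialize {
    fn serialize(&self) -> String;
}

// The blanket impl: any T implementing the shared DateTime interface is
// serializable, so a serde-like crate never names chrono's concrete types.
// (Coherence currently limits how many blanket impls like this can coexist,
// which is where mutually exclusive traits would come in.)
impl<T: DateTime> Serialize for T {
    fn serialize(&self) -> String {
        format!("{{\"timestamp\":{}}}", self.timestamp())
    }
}

// Stand-in for chrono::NaiveDateTime; it only implements the std interface.
struct NaiveDateTime {
    secs: i64,
}

impl DateTime for NaiveDateTime {
    fn timestamp(&self) -> i64 {
        self.secs
    }
}

fn main() {
    let dt = NaiveDateTime { secs: 1_466_000_000 };
    assert_eq!(dt.serialize(), "{\"timestamp\":1466000000}");
}
```

The key property is that the date library and the serialization library each depend only on the crate defining `DateTime`, never on each other.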


That’s slightly off-topic, but in my opinion it is not a good idea for a crate to expose types that will be used for communication between libraries, or you open the gates to semver hell. See this post, which came after half the ecosystem broke because of this exact problem.

Anything that is part of the public API of a library should be either defined by the library itself or part of the stdlib. As for the situation you described, I’m in favor of putting DateTime in the stdlib.


A big +1. I don’t have much to say (other than that I like the improvement and roadmap ideas, along with related thoughts) because the new proposal is a huge pivot and much leaner. So I want to register my thanks to @aturon for continuing the Rust subteam trend of taking feedback on board wholeheartedly, and for demonstrating that I (as a mostly hands-off follower of Rust language development) can remain hands-off and trust the subteams.


While I agree with this statement, I’d like to point out that, comparing the ecosystems of Python and Ruby, it is a huge advantage for Python to have a good, extensive standard library that fits 90% of small use cases. It’s one reason Python is often chosen over Ruby for projects. You can’t always access third-party packages easily and may have to rely on the distribution’s packages, so having a well-curated standard library in official distribution repositories is a big advantage for many use cases. I’m not saying Rust should ship with third-party libraries, but I can imagine that a rust-stdx package, which could become a de facto standard and be distributed by Debian/CentOS/Ubuntu in addition to Rust, would be a big plus for Rust in such restricted environments.


A bit of a meta-comment: when talking about large standard libraries, people sometimes point to Python as an example of how a large standard library boosts adoption (“batteries included”), but one thing people rarely bring up is that Python only gained a packaging system fairly late in its lifetime (Python began development in 1991; the first package-management tool arrived around 2005). People clamoured to add things to the standard library because, despite all the downsides, it was still less painful than trying to distribute packages any other way. And sure enough, as Python’s packaging tools have matured, there’s been less pressure to include things in the standard library (although even today, those tools are still less mature than, say, Cargo).

As @aturon says, the important things are the existence and discoverability of high-quality third-party libraries. For Python, reaching that goal involved having a large, standardised collection of packages, but Rust+cargo is in a very different place, and that goal may lie in a very different direction.


An amendment to the meta-comment: in Python you must also distribute dependencies to the end user of your program, which would keep the cost of a non-standard-library dependency higher even if Python had perfect packaging tools.


I understand where you’re coming from, but… how do two libraries communicate with each other, then?

By your statement, the only way two libraries can talk to each other is either:

  • exposing std types
  • having the user write scaffolding to translate from one library type to the other library type

The first is extremely restrictive; the second is extremely boilerplate-heavy (wasteful, error-prone, and a performance issue).

I am afraid that having a library accept, in its public API, a type created by another library (such as a Matrix) cannot be avoided. To push the reasoning to an extreme: proprietary code will never put its XyConsumer type in the standard library, and yet many of the libraries in its repositories might need to access such a consumer.

Composition of 3rd-party libraries is necessary.

That being said, circumventing the orphan rule may not be the best solution. After all, Rust has 0-cost newtypes.

Today automatic derivation does not work for such newtypes, because it expects the inner type to implement the trait, whereas the lack of an implementation is precisely the reason the newtype is necessary in the first place. Still, that’s just a shortcoming of derivation today, and maybe it could be solved generically.

(Afterwards, there’s still the issue that the newtype requires wrapping/unwrapping; Deref, AsRef, From, etc. can probably help there.)
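A minimal sketch of the newtype workaround, again with local stand-in modules in place of the real crates (all names are illustrative). The newtype is local, so implementing the foreign trait for it is allowed, and `Deref`/`From` reduce (but don’t eliminate) the wrapping noise:

```rust
use std::ops::Deref;

// Stand-in for serde's trait.
mod serde_like {
    pub trait Serialize {
        fn serialize(&self) -> String;
    }
}

// Stand-in for chrono's type.
mod chrono_like {
    pub struct NaiveDateTime {
        pub secs: i64,
    }
}

// The newtype is defined locally, so the orphan rule is satisfied.
pub struct DateTimeWrapper(pub chrono_like::NaiveDateTime);

impl serde_like::Serialize for DateTimeWrapper {
    fn serialize(&self) -> String {
        self.0.secs.to_string()
    }
}

// Deref lets callers reach the inner type's fields and methods directly.
impl Deref for DateTimeWrapper {
    type Target = chrono_like::NaiveDateTime;
    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

// From makes wrapping a one-call affair.
impl From<chrono_like::NaiveDateTime> for DateTimeWrapper {
    fn from(inner: chrono_like::NaiveDateTime) -> Self {
        DateTimeWrapper(inner)
    }
}

fn main() {
    use serde_like::Serialize;
    let wrapped: DateTimeWrapper = chrono_like::NaiveDateTime { secs: 42 }.into();
    assert_eq!(wrapped.serialize(), "42");
    assert_eq!(wrapped.secs, 42); // field access through Deref
}
```

The remaining cost is that every API boundary still has to agree on the wrapper type rather than the original, which is the ergonomic gap the comment above points at.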


In terms of improving Cargo search, it may be worthwhile to consider whether a crate includes a repository link, description, or documentation when sorting search results.


Java is very successful (especially in the enterprise world) at applying @withoutboats’s suggestion.

J2EE is a set of additional industry standards that go beyond the regular SDK (language + stdlib). It contains standardized APIs for many cross-cutting concerns that would otherwise require 3rd-party libs to communicate, such as logging, DB access, serialization, dependency injection, etc.

Look at DI, for example: first came Spring, then other containers followed; then a common practice and idiom was established and was deemed worthy of standardization. Java does not actually provide any DI container, but it does define an industry-standard API (via the JCP) that 3rd-party vendors (Spring) conform to.

Rust could follow a similar path, where common interfaces, abstractions, and idioms are derived from existing best practices and added to the Rust distribution. I’d also like to adopt the distinction between the stdlib, which holds only the most minimal building blocks and abstractions, and a more encompassing set of interfaces as a “Rust Platform”. In any case, the implementations should still be provided by 3rd-party crates, and the “Platform” should contain only the trait definitions that are the required glue for libs to communicate.


Thank you for really taking the time and effort to incorporate all the feedback with an open mind. This followup post and proposal really solidify my trust in the people behind rust, and the future of the platform.


Not sure if this has already been suggested, but what about having a set of quality criteria for packages to be included in ‘The Platform’ which can be automatically tested?

For example:

  • a minimum level of API coverage
  • a README and Users Guide
  • unit tests
  • standalone sample code
  • integration tests (*)
  • more…?

There should be enough metadata to process this automatically, and the results could provide feedback to developers as to which areas need improvement before their crates are eligible for inclusion.

This avoids the problem of having to bless a single ‘category leader’ (e.g. particular database bindings) and puts all packages on an equal footing. The more these things can be automated, the easier it will be to detect regressions and let packages compete on their merits.
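One possible shape for that metadata, sketched as a Cargo.toml fragment: Cargo already ignores unknown `[package.metadata.*]` tables, so tooling could read a table like this without any Cargo changes. The `platform-criteria` table name and all keys below are invented for illustration, not an existing convention.

```toml
# Hypothetical per-crate declaration of the criteria listed above.
[package.metadata.platform-criteria]
readme = "README.md"
users-guide = "docs/guide.md"
min-doc-coverage = 90                          # percent of public API documented
integration-tests = ["tests/serde_compat.rs"]  # compatibility tests against deps
```

An automated checker could then verify each claim (the files exist, the tests pass, the coverage threshold is met) and report which criteria a crate still fails.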

As for integration testing, any package declaring a dependency on another should not only declare the required version but actively test for compatibility. For example, if chrono depends on serde, then it should test the features it requires.

An automated build system can then track the most recent version for which a build works, and not update the version in the Platform set of packages until all dependent packages build successfully. Obviously this is a non-trivial problem (!) but certainly doable, and doesn’t rely on manual curation or testing.


I think we all need to collect some deep statistics about the dependencies inside our (not open source) software written in Rust. Which crates are used together? Which versions? The numbers would tell us what such a platform should contain.

I didn’t say this before: I think Python’s batteries are not great, because some of the APIs are extremely awful. Look at Tk, logging, etc. Users were forced to carry them anyway.

I like the metapackages idea; it’s cool and retains choice. Often I need something simple, like a crate re-export feature:

libc = { version = "...", pub = true }


Yeah, and this is one of many things the platform metapackage could solve. If we end up not providing it in the default installation, this may be one specific problem we don’t solve: you’ll need to take at least one more step to get the common components necessary to do simple things. But that step might be as simple as adding a one-liner to your Cargo.toml pulling in an unofficial collection of common crates. It’s not as simple as it could be, but still an improvement over the status quo.

Shipping a platform metapackage by default in some distributions (Debian) but not others (rustup) seems ineffective since nobody will be able to rely on it being available.

This is one avenue, yeah, but even if we go this sort of minimal, interface-heavy route, I am always inclined not to literally put (big) new things directly in std. For example, I don’t want anything related to databases in std. If we were to define e.g. an ODBC-like interface for Rust and distribute it to everyone, I would want it in a different crate. At the same time, the way we distribute std is basically an artifact of its development process and the fact that it has unstable hooks directly into the compiler. Any crates ‘above’ std I strongly prefer to distribute through our standard mechanism (cargo).

Yeah, this is something we can do, and part of the intent. Having a path from ‘new crate’ to ‘crate listed in an official directory’ to ‘crate that is an official part of Rust’ gives us increasing leverage over various quality issues, improving the quality of the whole ecosystem. And as long as the criteria are objective, it shouldn’t cause too much teeth-gnashing about favoritism.

Lots of great feedback here. Thanks, all.


I totally agree and it was the whole premise of my comment that the stdlib should remain as minimal as it is today.

All I’m saying is that if we want an endorsed official “Rust Platform” meta-package, then it should contain at most the standardized APIs & interfaces.
Actually, I’d go further and suggest that we have an entire set of such officially endorsed meta-packages, each standardizing a specific area. So, e.g., we could have a “Rust-DB” cargo meta-package that defines the official ODBC-like API.