Revisiting Rust's modules

Update: to keep discussion manageable, I’ve closed this thread in favor of a new follow-up thread focused on the next blog post in this series.


As part of the Ergonomics Initiative, I, @withoutboats, and several others on the Rust language team have been taking a hard look at Rust’s module system; you can see some earlier thoughts here and discussion here.

There are two related perspectives for improvement here: learnability and productivity.

  • Modules are not a place that Rust was trying to innovate at 1.0, but they are nevertheless often reported as one of the major stumbling blocks to learning Rust. We should fix that.
  • Even for seasoned Rustaceans, the module system has several deficiencies, as we’ll dig into below. Ideally, we can solve these problems while also making modules easier to learn.

This post is going to explore some of the known problems, give a few insights, and then explore the design space afresh. It does not contain a specific favored proposal, but rather a collection of ideas with various tradeoffs.

I want to say at the outset that, for this post, I’m going to completely ignore backwards-compatibility. Not for lack of importance, but rather because I think it’s a useful exercise to explore the full design space in an unconstrained way, and then separately to see how best to fit those lessons back into today’s Rust.

http://aturon.github.io/blog/2017/07/26/revisiting-rusts-modules/

31 Likes

There’s been some interesting work on module systems by authors who (conveniently enough) are also working on Rust stuff. In particular, MixML (which gave rise to Backpack for Haskell), and what seems to be an in-progress successor, 1ML.

I’d definitely be interested in hearing Derek Dreyer’s thoughts on the intersection of those projects and Rust.

Also, you seem to have a formatting issue in one of the headings: An example of misalignment: facades in **futures**

(FWIW, Derek was my postdoc supervisor and I’m pretty familiar with his work.)

This line of academic work is focused on a pretty distinct set of concerns, mostly connected to type abstraction and its interaction with side-effecting operations and recursive modules. My post is more focused on the surface-level expression of modular structure, and has only a small effect on the semantic model of modules (mostly around introducing a new kind of privacy scope); this is the kind of thing that is generally below the abstraction level of a formal academic model. In short, I don’t think there’s much relevance to what this particular effort is about.

That said! I think that work on type abstraction, functors etc does have a lot of relevance to Rust, in at least two ways:

  • The semantics of type equality for impl Trait boils down to something like the split between generative and applicative functors in ML-descendant languages. I talked to Derek at length about this a couple of years ago, and we both believe that the applicative semantics (allowing equality for two values of type impl Trait when all input types are the same) is the right one for Rust.

  • We sometimes talk about a simple form of functors for Rust, particularly in cases where large amounts of code are parameterized over some lifetime (which happens in the compiler quite often).
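The applicative flavor can already be glimpsed in today's return-position impl Trait: calls to the same function with the same input types yield one and the same opaque type. A small sketch (function and names are mine, purely for illustration):

```rust
use std::fmt::Display;

// Both calls to `describe` with the same input type `T` return the *same*
// opaque type, which is the applicative behavior: equality of the hidden
// type follows from equality of the input types.
fn describe<T: Display>(x: T) -> impl Fn() -> String {
    move || format!("value: {}", x)
}

fn main() {
    let a = describe(1_u32);
    let b = describe(2_u32); // same input type => same opaque type as `a`
    let v = vec![a, b];      // compiles only because the two types are equal
    assert_eq!(v[0](), "value: 1");
    assert_eq!(v[1](), "value: 2");
}
```

A generative semantics would instead mint a fresh abstract type per use, and the `vec![a, b]` line would be rejected.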

That makes sense! There is one other case where I think that work is hugely relevant to Rust, though: the growing number of cases where a global need must be satisfied at most once. More and more cases of this have been cropping up:

  • Global allocators
  • std platform backends
  • Linking to external libraries statically
  • Panic runtimes
  • Replacing crates behind the facade
  • 1-of-N cargo features (for selecting backends)
  • lang items
  • event loops
  • etc
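To make the first item concrete: registering a global allocator is exactly this at-most-once shape — the program picks one, and a second registration anywhere in the crate graph is rejected at compile/link time. A minimal sketch using the `System` allocator from `std::alloc`:

```rust
use std::alloc::System;

// Exactly one `#[global_allocator]` is allowed per program; declaring a
// second one anywhere in the dependency graph is an error.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // All heap allocation now goes through `System`.
    let v: Vec<u32> = (0..4).collect();
    assert_eq!(v, [0, 1, 2, 3]);
}
```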
4 Likes

This works if your platform-specific code is limited to one file. But what if you have a whole tree of platform-specific code? e.g. as in std::sys?

In that case, you'd employ a facade pattern just like today. It's always available as a fallback.

I think it’d still be a win, as you’d do something like this:

src/
    foo/
        facade.rs
        _unix/...
        _windows/...

And facade.rs would only contain:

#[cfg(all(unix, not(windows)))]
pub use unix::*;

#[cfg(all(windows, not(unix)))]
pub use windows::*;

While it does revive facades, it’s less intrusive than what’s done today.

EDIT: Actually, this raises a question. @aturon, what happens in this case?

src/
    foo/
        bar.rs
    _foo/
        baz.rs
  • Is the latter foo actually imported as _foo, contrary to my example above?
    • Feels like a mandatory identifier constraint, which has so far been avoided in favor of lints
  • Do they get somehow merged?
    • Potentially surprising to users
  • Do they collide and error?
    • Semantics gap with the FS, exactly what the proposal tries to avoid
  • Does one shadow the other?
    • Likely VERY surprising to users
2 Likes

Your proposed system still seems way too complicated. If we’re considering potential systems at all, why not just use the Haskell style and have an export list at the top of a module? It says what the module exports to the outer world, and you can read it all in one place. It can re-export things too, so the facade pattern can still be used as necessary. It’s immediately clear to a reader: “if you import this module, these functions/structs/enums/etc are what you will be importing”. Then the crate’s public API as a whole is whatever lib.rs exports. Simple to understand.
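For comparison, today's facade idiom already approximates an export list: a root module can gather its whole public surface into one block of re-exports while the definitions live in a private submodule. A sketch with invented item names:

```rust
// The "export list": everything this module offers is re-exported here,
// in one place, while the definitions live in the private `detail` module.
pub use self::detail::{render, Widget};

mod detail {
    pub struct Widget {
        pub id: u32,
    }

    pub fn render(w: &Widget) -> String {
        format!("widget #{}", w.id)
    }
}

fn main() {
    let w = Widget { id: 7 };
    assert_eq!(render(&w), "widget #7");
}
```

The difference from Haskell is that the list is opt-in and lives alongside ordinary `pub` visibility rather than replacing it.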

1 Like

I find myself liking a lot about this proposal.

I think the cost of the first primary downside (longer search for item declaration within a module) is perhaps higher than is indicated in the post. One concrete example would be browsing several crates on GitHub. I frequently find myself poking through several repositories to examine an API, or to find examples of something, or some other reason, and cloning each of these repositories to navigate them is high friction. Today, even if I have to “dereference” several module system “pointers” to get to an item’s definition in the crate, I still have a (usually short) process to arrive at the unambiguous source of truth. In the proposed system, I would probably have to rely on a tool like ripgrep, editor support like the RLS or tags, or some other local tool (read: clone the repository and make sure my editor configuration is happy with it) to quickly find an item’s declaration without manually reading every file in a directory. Given all of the advantages I can see with the proposal, perhaps this isn’t too high of a cost to pay, but it’s worth addressing the “parachute reader” case more directly I think.

Another minor thing (I know, I know, not in the spirit of the requested dialogue): if directories are used to indicate privacy, my immediate impression is that it seems to me that pub(crate) should sort lexicographically after pub, not before. When reading someone else’s code, it is very valuable to have the organization of the code (and how it shows up in my editor/a code browser like GitHub) reflect the priorities and organization of the reader as well as the author.

11 Likes

I want to be careful not to derail on these fine details yet, but: I would expect them to collide and give an error. I don't see this as a semantic gap with the fs; it's just that in the fs, you write pub(crate) as a leading _ :slight_smile:

Anyway, let's save design discussion on these points for later, and focus on finding bigger flaws!

Mm, this is a good point. Though I wouldn't mind a bit more elaboration on the API piece -- if you're just examining an API and examples, presumably docs.rs is enough? Can you spell out what drives you to the source?

One mitigating argument, which I didn't lay out in the post, is that I believe in this brave new world we're likely to have a bit more directory nesting than today, and a bit fewer files in any given directory (since you now need a directory to create a new module namespace). I suspect that in practice, that fact together with reasonable naming means you have a decent chance of guessing which file is relevant. But, uh, that's not an amazing argument :slight_smile:

Well, the semantic gap is “Counts as a collision in Rust, does not count as a collision in the FS” - but yeah, I agree it’s a detail that can be pushed back.

2 Likes

Sorry, that wording was too vague. In reality, there are a few situations I’d class under “examining an API” as I was thinking of it. It’s probably better described as “reading the source as a non-contributor.” Some examples where I often do this on github rather than locally:

  • looking at existing implementations of a crate’s trait that I want to implement on my own types
  • evaluating a few competing crates for a task (looking at implementation details of competing solutions is useful but I’m still just browsing and the docs may not tell the whole story)
  • seeing if a bug I’m hitting could be fixed easily enough to onboard as a contributor and submit a quick PR
  • answering a question I have about the output of a function but where the docs aren’t clear
  • I’m sure there are others but I can’t think of them right now and this seems like a decently motivated list to me :slight_smile:

When I’m browsing source to find examples of usage, it’s usually because the crate doesn’t have sufficiently rich examples in its docs (or any at all). While rust has a significantly better docs culture than most other ecosystems I work in, there’s sometimes no substitute for seeing how a crate author uses their own API. Tests are a common place that I’ll look, for example. One could perhaps make the argument that if I’m going to this extent to understand a crate, I should just clone it.

It’s already pretty accepted that a reasonable experience of writing Rust includes tool support. Perhaps the general form of this question is how much tool support should be needed for Rust code to be easily read.

Related: are there perhaps tools that would make it easier for me to do this “parachute reading” in my own editor? Maybe the process of cloning and opening a given crate’s source could be automatic enough that the low friction of a browser+GitHub wouldn’t be worth it.

I buy that we’ll have significantly fewer files per directory in the brave new world, actually. Still not an incredibly strong argument in my mind, but it definitely makes sense.

3 Likes
  • Perhaps pub(mod) would be easier to teach than pub(self)? The default visibility (nothing) is equivalent to pub(file).
  • pub(crate) would be a lot more common than it currently is. It might encourage tighter coupling. (I think this was a concern with the original pub(crate) RFC)
  • A compiler switch could tell the compiler which module format to use. Cargo should support it as well. Backwards compatibility is relatively easy to implement if the default format is the current one. New projects use the new format via a Cargo setting.
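For reference, the visibility levels under discussion already exist as `pub(...)` restrictions in today's Rust. A quick sketch of how each one scopes (item names invented):

```rust
mod outer {
    pub mod inner {
        pub(self) fn only_here() -> &'static str { "self" }     // visible in `inner` only
        pub(super) fn up_one() -> &'static str { "super" }      // visible in `outer`
        pub(crate) fn whole_crate() -> &'static str { "crate" } // anywhere in this crate
        pub fn everyone() -> &'static str { "pub" }             // downstream crates too

        pub fn demo() -> &'static str {
            only_here() // pub(self) is callable from its own module
        }
    }

    pub fn from_outer() -> &'static str {
        inner::up_one() // pub(super) reaches the parent module
    }
}

fn main() {
    assert_eq!(outer::inner::demo(), "self");
    assert_eq!(outer::from_outer(), "super");
    assert_eq!(outer::inner::whole_crate(), "crate"); // pub(crate) from the crate root
    assert_eq!(outer::inner::everyone(), "pub");
}
```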
1 Like

One paper cut I run into is the need for the #[macro_use] annotation and the lack of macro namespaces. Macro name-spacing is probably an entirely orthogonal issue with an orthogonal fix, but in terms of developer experience it falls solidly in the ‘annoying things one has to remember when using modules and crates’ category.
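To illustrate the paper cut: a macro declared in a module is invisible to the rest of the crate unless the module carries `#[macro_use]`, and the `mod` item must appear before any use of the macro. A minimal sketch (the `double!` macro is invented):

```rust
// Without `#[macro_use]`, `double!` would not be visible outside `macros`;
// and unlike ordinary items, the `mod` must come *before* the call site.
#[macro_use]
mod macros {
    macro_rules! double {
        ($x:expr) => {
            $x * 2
        };
    }
}

fn main() {
    assert_eq!(double!(21), 42);
}
```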

4 Likes

One thing that stands out is that the survey includes only library crates. It might be that terminal application crates use somewhat different idioms for modules.

The biggest pain point for me is the amount of ceremony required for workspaces. If possible, I love to split code into separate crates, to enforce acyclic dependencies. These crates are an implementation detail: they don’t stand on their own and are not suitable for publishing to crates.io.

To split off a private crate today, I need to create a pure boilerplate Cargo.toml with a completely irrelevant version number, either use a boilerplatish src/lib.rs layout or violate conventions for a flatter directory structure, and then I need to modify other Cargo.tomls to specify the dependency.

Perhaps Cargo should have a notion of workspace private crate? Or perhaps at the language level it should be possible to mark a module as a separate compilation unit?
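For context, this is roughly the boilerplate involved today. All paths, names, and version numbers below are invented for illustration:

```toml
# Cargo.toml (workspace root)
[workspace]
members = ["internal/acyclic_core"]

[package]
name = "my_app"
version = "0.1.0"

[dependencies]
acyclic_core = { path = "internal/acyclic_core" }

# internal/acyclic_core/Cargo.toml -- pure boilerplate
[package]
name = "acyclic_core"
version = "0.1.0"   # irrelevant: the crate only exists inside this workspace
publish = false     # the closest thing today to "workspace private"
```

`publish = false` prevents an accidental upload to crates.io, but it doesn't remove any of the ceremony described above.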

11 Likes

I have a lot of trouble understanding the proposed visibility rules. Since I’ve been opposed to changes to the modules system in the past, you may think that I’m exaggerating to prove my point, but no, I really don’t grasp it. Unless I’m dumb, I don’t see how this is an improvement to the learnability issue.

One thing I really dislike (if I’m not mistaken) is that you can’t have private modules defined by a directory.

Many times in the past I have noticed that one of my foo.rs modules contains a lot of code, so I replaced it with foo/mod.rs, foo/subpart1.rs, foo/subpart2.rs, and so on. With this proposal I wouldn’t be able to do this anymore without changing the semantics of visibility?

I discovered that the stdlib doesn’t mind having files with something like 2k lines of code, but I really can’t stand it. I really like splitting things across many files, independently of the API I want to expose.

19 Likes

The diagnosis of learnability and productivity issues is spot-on! I generally like the proposal and I could work with it.

I do sometimes leave other (unfinished, experimental) .rs files in directories, so automatic inclusion of everything would require me to be more diligent about this.

Would it be possible to use _-prefixed files as automatic includes? That matches the pattern used in SCSS.

src/
  future/
    mod.rs
    _and_then.rs
    _flatten.rs
    _fuse.rs

Auto-including only _-prefixed files would make the “magic” behavior more explicit. It’d also restore the ability to have files define modules.

Intuitively, right now, directly after reading your proposal, I don’t have the feeling that it actually makes things easier to learn. But I’m aware that there is a very good chance that this is simply due to me being used to the current module system.

What I like most about your proposal is that it solves the FS-tree vs. module tree mismatch (I didn’t know that this pattern had a name…). This could have the very real effect that files will be a lot smaller, since it is easy now to split code into multiple files without having to jump through several hoops to retain the module tree. I would certainly create more files than before!

About the “finding the definition” problem: if files really get a lot smaller, then I could easily imagine creating a file for each (non-tiny) type with the same name as the type. The same goes for large free functions. And if the module requires many tiny definitions of something, those could be put into the mod.rs. My point is: I think with a couple of guidelines on how to split and name the files, it should be fairly easy to quickly locate most of the definitions.

And a minor nit...

Adding an underscore to declare a pub(crate) module might be confusing. Everywhere else in the language, an underscore at the beginning of an identifier means that this identifier is not used, but the compiler shall not warn about it. Also: module names starting with underscores are allowed today – would this change in your proposal?

2 Likes

Everything old is new again… Rust actually used to have an export list. It was painful, so now we have visibility instead.

https://github.com/rust-lang/rust/issues/1893

5 Likes