Plan to test parallel rustc

From rustup's perspective, we'd need to know what the channel name would be, and then we could add support for it. We're currently restricted to the three channels (stable, beta, nightly), and we do some special handling for nightly, so that'd need some decision-making (i.e. is this new channel more nightly or more beta?)

Otherwise probably not a huge amount of work, plus a release :smiley:


Would it be possible to have something a bit different from a channel, with no automatic rustup update, just custom builds that get installed by name (or name and date)? Having to invent a channel name and issue a new rustup release is exactly what we'd want to avoid for each such experiment.

Would it be possible to have an "experiment" pseudo-channel, where there's no such thing as rustup install experiment, only rustup install experiment-foo or rustup install experiment-foo-2019-12-18?


Yes, in theory we could do that -- it'd essentially be a channel, just not one which rustup tries to update on demand. Regardless, a new rustup release will be needed, because currently the format of channel names is limited and rustup won't download something it doesn't think is a channel. I'm also considering a mode of rustup toolchain install https://..... which would allow arbitrary manifest URLs resulting in an installation. That'd allow people to try stuff from different buckets, or self-hosted channels, in the future.
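
To illustrate, such an invocation might look something like the following; the URL is purely a placeholder, with the file named after the channel-rust-*.toml convention the existing dist manifests use:

```
# Hypothetical: this mode doesn't exist yet; the URL below is only a placeholder.
rustup toolchain install https://example.com/custom-builds/channel-rust-experiment.toml
```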


Any such installation should still require a signature; rustup should maintain a chain of trust. And given an official signature, it seems reasonable to expect the build to appear in an official location.

I'd be happy to work with you on a spec for rustup-installed experiments. This one could have been experiment-parallel-2019-12-18, for instance. How about experiment-identifier for any alphanumeric identifier, and optionally experiment-identifier-date, where experiment-identifier installs the latest experiment-identifier-date?
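
Spelled out as commands (hypothetical, of course; no such channel exists yet):

```
rustup toolchain install experiment-parallel              # installs the latest experiment-parallel-<date>
rustup toolchain install experiment-parallel-2019-12-18   # installs a specific dated build
```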

That sounds reasonable, and I agree that trust chains are critical. Perhaps we can talk about that on Discord at some point when our timezones are compatible?


Don't know if this is better placed here or in the sister thread, but I am wondering if it would be helpful to assemble a script that automatically installs the needed compiler versions, has predefined crates to test against, and invokes the different scenarios (full, incremental, check, ...). Those predefined crates would be extendable with crates individual people care about. I don't know how helpful it is to have different crates on different platforms/architectures; I think you would want to "pin" one set of variables (the predefined crates) and vary the CPU/OS etc., while still letting people add their own crates to the list. I could run 3 or 4 crates "by hand" without losing interest, but I could also run a script, let the machine work for an hour, and do the laundry or the dishes in the meantime while it gathers a bigger dataset.
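
Something like this sketch is what I have in mind; the toolchain names, the crates.txt format, and the use of GNU time are all assumptions on my part, not an existing tool:

```
#!/usr/bin/env bash
# Sketch only: toolchain names, the crates.txt format, and GNU /usr/bin/time
# (for the -f flag) are assumptions.
set -euo pipefail

TOOLCHAINS=("nightly" "nightly-parallel")   # assumed names for the serial and parallel builds
CRATES_FILE="crates.txt"                    # "name url" pairs, one per line

while read -r name url; do
  case "$name" in ''|'#'*) continue ;; esac            # skip blank lines and comments
  [ -d "$name" ] || git clone --depth 1 "$url" "$name" < /dev/null
  for tc in "${TOOLCHAINS[@]}"; do
    (
      cd "$name"
      cargo +"$tc" clean
      /usr/bin/time -f "$name $tc full: %e s" cargo +"$tc" build          # full debug build
      touch src/lib.rs 2>/dev/null || touch src/main.rs                   # dirty one file
      /usr/bin/time -f "$name $tc incremental: %e s" cargo +"$tc" build   # incremental rebuild
      cargo +"$tc" clean
      /usr/bin/time -f "$name $tc check: %e s" cargo +"$tc" check         # check-only pass
    )
  done
done < "$CRATES_FILE"
```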


BTW: Thanks to the working group for getting this far! It has taken a lot of work to get this all working, and I just wanted to thank you all.


This reminds me of the discussion to add a nightly-alt channel to rustup: Add support for a nightly-alt toolchain · Issue #1099 · rust-lang/rustup · GitHub

The proposal didn't materialize, as the LLVM assertions were disabled unconditionally.

AFAIK, a huge part of the point of asking the public to test this is so that they can test it on their own code, including private codebases not published to crates.io. Asking everyone to use the same crates would not even be testing what we want to test anymore (especially since it's still limited to 4 threads by default, so a lot of hardware variation won't kick in yet).

The first post of the sister thread is basically that already; it lists 10 shell commands that do all of these steps: Help test parallel rustc!

I think you missed the point of my post. I explicitly mentioned, twice, that it is possible to add crates you care about.

The "value" of "pinning" a certain set of crates that everybody runs is to eliminate a variable if values look out of place. What if i run my crate and the results are off? If we have results for "common" crates that looks good by everybody else expect on my machine we can narrow the problem down because it is likely that the problem is not within the specialty of my crate but with my platform that is causing the problem. Like on Notebooks, when the thread/core utilization rises it increases the temperature and may lead to throttling which would be visible in the compilation of the "common" crates as well. And it would still be useful because if i care about 10 crates it would still be helpful and the easier it is to gather more data the better. Running different crates on different platforms without a "baseline" to compare against makes it harder to investigate a problem when it emerges. People already reporting slowdowns – what is the cause, the platform or the specific crate? Everybody running a baseline set of crates AND their individual ones – by just adding them to a list the script is reading in – is an advantage from my perspective.

I still see doing this by hand with 10-20 crates as very time consuming. I could easily let a machine do all the work while watching a movie; I don't think I would sit in front of my PC for an hour, watching it compile and repeatedly entering the same commands.
