Tools Team: tell us your sorrows

Area: cargo

The most annoying thing is that cargo always produces duplicate error messages. Instead of 2 errors you get 4, and then you try to figure out what’s wrong with the 3rd error, only to find out that it’s just a duplicate of the 1st error you’ve already fixed. It’s somehow connected to the fact that the crate is built with multiple different configurations. It’s understandable that rustc produces a set of errors for each build, but if cargo runs rustc twice, it should be cargo’s job to deduplicate the errors.

There is no easy way to run cargo clippy or cargo check and make sure they actually do their work. If the crate or its dependencies haven’t changed since the last run, it won’t be analyzed again, so the output will look as if everything is fine even when it isn’t. It’s possible to work around this with cargo clean, but there is no easy way to clean all workspace members without also cleaning all the dependencies. Caching makes sense for building but makes no sense for linting.

Cargo commands are inconsistent about handling the -p argument in workspaces. For example, cargo publish -p ... doesn’t work for some reason. It’s even worse with plugins: for example, cargo add doesn’t support -p. It seems that every custom command must reimplement this functionality. It would be much better if this option just worked without needing extra support from custom command authors.

--help works inconsistently: cargo add --help works, but cargo fmt --help is useless and you have to run cargo fmt -- --help instead.

Everyone is tired of recompiling dependencies, so cargo needs some kind of system-wide binary cache.

I personally dislike that cargo always puts build files in the source directory. It’s possible to override the target directory for each project, but it’s not possible to configure this globally.

22 Likes

Packaging, like cargo-wix, has been discussed by the WG-CLI. I need to wrap up a couple more loose ends (workspace support for cargo-release being one), and then I plan to focus on the packaging space, including working with the cargo-wix maintainer.

I don't remember if icons and manifests came up in that linked issue. Could you post on it?

2 Likes

Area: cross compilation

Rust and rustup go very far in supporting cross compilation. They’re 99% there. Unfortunately, it never worked for me because they need the target’s linker, which they can’t provide and which is a PITA to obtain. E.g. adding targets on Windows is futile, because the MSVC linker can’t do anything with them.
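
For reference, this is roughly the manual step that’s needed once you do manage to obtain a target linker yourself (a sketch only; the target triple and linker name are examples):

# .cargo/config -- point Cargo at a cross-linker obtained outside of rustup,
# since rustup ships the Rust target libraries but not the linker itself.
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"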

4 Likes

I don't think it makes sense for the CLI WG to be concerned about icons. They're more of a GUI thing. Manifests are executable resources, and so are icons, but that's the only connection they have. My XML manifest does not reference my icon at all. The "application icon" that Windows displays for an EXE or a DLL is just whichever icon resource comes first.

My present workaround is to include a .res file every time I invoke the linker, and that .res file has both the icon and the manifest in it. I’m pretty sure that if Rust added a way to take care of including the manifest automatically, I could still use the same workaround to bundle the icon. It would just mean linking two resource files at once (one of them implicitly generated by rustc or cargo, the other generated by hand).
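
For what it’s worth, a sketch of how I imagine that workaround could be folded into a build script, assuming a Cargo that supports the rustc-link-arg instruction and a pre-compiled app.res (icon + manifest) next to Cargo.toml (the file name is made up):

// build.rs
fn main() {
    // link.exe accepts compiled .res files directly, so passing the file as a
    // linker argument embeds both the icon and the XML manifest resources.
    let res = format!("{}/app.res", std::env::var("CARGO_MANIFEST_DIR").unwrap());
    println!("cargo:rustc-link-arg={}", res);
    println!("cargo:rerun-if-changed={}", res);
}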

Area: debugging proc-macros

When writing a procedural macro, you want to see the expanded result for some example inputs while debugging. The only way I could find is to println! the final TokenStream inside the proc macro, but without any formatting it’s very hard to read.

I’d like a tool that can expand a proc-macro (once the proc-macro crate is compiled) and format the result with rustfmt. Ideally, this should be integrated in my IDE. IntelliJ can expand normal macro_rules! macros, but not procedural ones.
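
For context, this is roughly what the println! workaround looks like today (a minimal sketch of a derive macro; MyTrait and all the names are made up, and it assumes syn and quote as dependencies):

use proc_macro::TokenStream;

#[proc_macro_derive(MyTrait)]
pub fn derive_my_trait(input: TokenStream) -> TokenStream {
    let ast = syn::parse_macro_input!(input as syn::DeriveInput);
    let name = &ast.ident;
    let expanded = quote::quote! {
        impl MyTrait for #name {}
    };
    // Dump the unformatted expansion during `cargo build`; making it
    // readable means piping it through rustfmt by hand.
    println!("{}", expanded);
    expanded.into()
}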

11 Likes

This is in addition to what @crlf0710 said. Make sure debugging works for every part of Rust, i.e.:

  • Closures
  • Unions
  • Anything else that is anonymous, or where there are numerous types compacted into one.
  • Macros: can they even be debugged in a debugger?

These are all powerful parts of Rust, which I currently have to debug using log::trace!() statements.

Also, document how to get custom formatters to work in various debuggers. For example, if I’ve gone to the trouble of implementing std::fmt::Debug on a type, it would be nice if debuggers used it when displaying the type.

Edit: Just read to the end of the thread and saw that @Aloso said the same thing I noted about macros.

1 Like

Area

Source code generation

Basically, I find myself writing a lot of what is essentially boilerplate. For example, if I have the following function:

pub fn foo(a: Bar) -> bool {
    // stuff here.
}

I will thereafter have to expand it into something like:

/// # Foo's Bars
///
/// Will Foo a Bar if it hasn’t already been Foo’d.  Will not Foo a Bar
/// if it has already been Foo'd, so this is safe to use on any Bar
/// you come across, without checking to see if it has already
/// been Foo'd.
///
/// # Example - added if needed
///
/// ```rust
/// use crate::bar::Bar;
/// fn main() {
///     let mut bar = Bar::new();
///     match foo(&mut bar) {
///         true => println!("Did foo the bar"),
///         false => println!("Bar was already foo'd, not doing anything")
///     };
/// }
/// ```
///
/// # Parameters - added if needed
///
/// - `a` - A Bar that may or may not have already been Foo'd.  
///
/// # Returns - added if needed
///
/// `true` iff `a` wasn't Foo'd before this call.  If this call returns,
/// rather than panics, then `a` will have been Foo'd successfully.
///
/// # Panics - added if needed
///
/// Will panic if `a` has been Baz'd.
///
/// # Errors - added if needed
/// # Safety - added if needed
/// # Aborts - added if needed
/// # Undefined Behavior - added if needed
pub fn foo(a: &mut Bar) -> bool {
    // stuff here.
}

// A whole bunch more functions

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_foo() {
        // Some kind of test
    }
}

That is a lot of boilerplate for each function, struct, union, etc. What I want is a tool that is able to analyze the contents of a module (probably via the AST) and generate the boilerplate in place as needed (kind of like how cargo fmt figures out where whitespace should be added). It should do the following:

  • If there are no module-level docs, add the appropriate block at the top. Put in a FIXME comment anywhere I need to fill something in.
  • For each member of a struct, enum, union, etc., make sure that there is documentation in place.
  • For each function/method, add in the docs as shown above intelligently. That is, if my function doesn’t have any parameters, don’t bother with that section. If it does have parameters, add in a single stub line for each parameter. Same for every other heading.
  • For each chunk of executable code, add in a unit test stub whose contents are assert!(false), so that I know what I need to fill in.
  • Don’t overwrite what is already there! I want this to be an idempotent operation, so that I can run it whenever I want, kind of like how I run cargo fmt --all on my code right before each commit.

Is there a tool for this already?

Not as far as I can tell, but I may not be looking in the right places.

If not: which tool do you think it should be included in?

cargo fmt, probably with a new switch like cargo fmt --generate_doc_stubs --generate_test_stubs

Did you manage to work around the lack of tooling?

Copy/pasting function names and prefixing them with test_, or copy/pasting home-grown documentation stubs, etc.

5 Likes

cargo rustc -- -Z unpretty=expanded

Or, outside of cargo:

rustc myfile.rs -Z unpretty=expanded

3 Likes

Area: Cargo / build targets

Building a plug-in with bidirectional linkage that can be loaded from a C or C++ application is extremely painful and requires platform-by-platform linker-flag tweaks.

Essentially, plug-ins require a cdylib-like library target that expects a specified list of symbols to be undefined at link time, which will be provided by the component that loads it.

Is there a tool for this already?

Not one that addresses this use case.

If not: which tool do you think it should be included in?

I would like to see it as an extension to the [lib] section in Cargo.toml, something like:

[lib]
crate-type = ["plugin"]  # why did making this a one-per-crate target seem like a good idea again?!
symbols = { file = "ffi_link_symbols.rs" }

Did you manage to work around the lack of tooling?

Yes, using .cargo/config files with rustflags to specify link-args for each target platform… and a lot of redundantly specified symbols.
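
For the record, a sketch of what that looks like for one of the platforms (the target and flags are examples; on macOS, -undefined dynamic_lookup tells the linker to leave the host-provided symbols unresolved until the plug-in is loaded):

# .cargo/config
[target.x86_64-apple-darwin]
rustflags = ["-C", "link-args=-Wl,-undefined,dynamic_lookup"]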

1 Like

Aren’t the symbols handled by having a -sys crate dependency? (I’ll admit no knowledge of unruly plugin architectures.)

The problem with -sys crates is that they are assumed to be linked by the emitted library… they must exist at the time the plug-in is compiled, and even if their linkage is load-time, the same C library has to be loaded with the plug-in. When the symbols in question are interfaces the application expects to provide to the plug-in, the -sys model doesn’t help at all.

FWIW, there are other (arguably better) ways to design a plug-in architecture. If I had control over the application side, I could bind the application’s symbols to the plug-in dynamically, with Option<FunctionSignature> binding…
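
A rough sketch of that idea (all names here are hypothetical): instead of leaving the host’s symbols undefined at link time, the host hands the plug-in a table of optional function pointers when it loads it.

use std::os::raw::c_char;

// Table of callbacks that the host application fills in and passes to the plug-in.
#[repr(C)]
pub struct HostApi {
    // Each entry is optional so the plug-in can degrade gracefully
    // when the host doesn't provide a particular callback.
    pub log_message: Option<extern "C" fn(msg: *const c_char)>,
    pub host_version: Option<extern "C" fn() -> u32>,
}

static mut HOST_API: Option<HostApi> = None;

// Called by the host right after it loads the plug-in.
#[no_mangle]
pub extern "C" fn plugin_init(api: HostApi) {
    unsafe { HOST_API = Some(api) };
}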

It might be feasible to create a C shim that does that and link the Rust library against that…

This is indeed annoying.

On Unix I use touch src/lib.rs && cargo check, although this "trick" does not scale well with workspaces :sweat_smile:

You will love cargo expand :slight_smile:

One more issue is that cargo test output is cluttered. Every library and binary in a workspace is tested separately, and doc tests are also run separately from the rest. The usual result is a whole screen of useless running 0 tests; test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out, and you always have to scroll up to see the actual tests. Again, cargo runs multiple test binaries but doesn’t bother to join the results. I understand that it’s non-trivial to fix, but it’s surely something to work on.

One can say that when tests pass it doesn’t matter what the output is, but I disagree. Looking at the last line of the output is a simple sanity check. If I see zeros there, I start thinking that my tests weren’t picked up and there is some kind of configuration issue.

15 Likes

Area: IDE support, code assistance/RLS

It would be extremely handy to automatically generate trait stubs when I start trying to implement a trait from another library. I currently use VSCode with RLS.
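
For example, if I type impl std::fmt::Display for MyType {} (MyType being one of my own types), the IDE could expand it into a skeleton like this, leaving only the bodies for me to fill in:

use std::fmt;

struct MyType;

impl fmt::Display for MyType {
    // Generated stub: signature copied from the trait, body left to me.
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        unimplemented!()
    }
}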

P.S. I’m sorry if this is in the wrong WG. I don’t know what responsibilities the tools team owns.

2 Likes

Area: Clean up after Cargo

One of the many hard jobs that Cargo has is deciding when to reuse resources between projects and when to isolate the builds of separate projects. Cargo manages to make it easy to have different projects with different settings and never compile a mismatched set. Unfortunately, this leads to a major problem: the dependencies that are rebuilt once per project never get cleaned up. There is cargo clean, but it triggers a full rebuild. This means that it is impractical to use a global cache for build dependencies, as cleaning it would trigger a full rebuild of every project on the system.

Existing tools: cargo-sweep

cargo-sweep is a tool that mucks around in the target folder, using Cargo’s fingerprint files and file-system access times to decide when an artifact can be deleted. Unfortunately it requires file-system access times, which are maintained on ~0% of real systems. There was an attempt to get Cargo to touch files to make cargo-sweep work, but it broke the playground, so it is now an unstable feature with insufficient testing to determine whether it can ever be stabilized.

There is a big opportunity to get some testing on CI, but at the moment cargo-sweep needs to be installed from source (which more than defeats the time saved). I have not figured out how to use trust well enough to get binary releases working.

5 Likes

Standardizing libtest’s JSON output would make this relatively easy. We already have an unstable API, and there are experiments with alternative wrappers around tests. We can use those to experiment with alternative implementations before pushing anything into cargo.

On a related note, it might be good to eventually get built-in JUnit support. The handling of benchmarks is an issue.

I love that you’re working on this! Thanks.

Area: Debugger

I would like to have feature parity with C++ debuggers on all platforms.

This was mentioned in #15, but I’ll elaborate a bit.

The first thing I noticed when trying to set up debugging for my project is that it’s not clear whether Rust is meant to be debugged with GDB, LLDB, or something else (the project is dockerized & on Linux). It would be nice to have direct instructions for which debugger I should use and what functionality I can expect to get from the debugger.

The second issue I’ve encountered is that LLDB debugging, including with the rust-lldb wrapper, doesn’t seem to have feature parity with LLDB’s C++ debugging. I can set breakpoints and inspect some variables, but I can’t inspect some other variables nor can I evaluate Rust expressions. I haven’t been able to work around it; I am not sure if GDB works better on Linux today.

Area: Build systems

This is Google-specific, but can be generalized. Cargo isn’t integrated with Google’s internal build system, blaze. I don’t know how much of this is just that cargo & blaze haven’t been integrated and how much of it is that cargo is missing necessary hooks. I think trying to set up bazel integration (or integration with some other generalized build system) would expose whatever functionality gaps there are.

3 Likes

I am not sure if GDB works better on Linux today.

Yes, GDB works better. Use GDB. Do not use LLDB.

Getting RLS working is a PITA

1 Like

It would be great to have namespaced features finished up in Cargo. But improvements to how Cargo handles being offline are probably my top concern.