Building a Rust app within Zephyr, lots of configs

I am working on a project to bring Rust support to the Zephyr RTOS. One of the challenges is going to be incorporating the two build systems.

I have a prototype working: some CMake rules within a Zephyr build invoke a cargo build to produce a static library, which the CMake build system then links into the rest of Zephyr. However, there are a few issues I still need to solve; some I have a pretty good idea how to approach, and some I'm not so sure about. For example, I need to map between the Zephyr notion of a build target and a --target that I can pass to Cargo, but this can easily be done within Cargo.

One area I'm not sure how to do is the large set of configuration options that are available within Zephyr. Zephyr uses Kconfig, and a given build will end up with possibly thousands of these. The Zephyr build system writes these to a header file, which can then be used for conditional compilation. Many of these are needed to develop proper bindings to Zephyr, as the Zephyr interfaces frequently use these config settings to adjust structures, and even how syscalls are made.

However, rustc does not support conditional compilation based on anything read from a file (as far as I can tell, this is quite intentional).

What I'd thought of was to give the cargo config file a rustflags entry containing all of these configurations, passed as --cfg CONFIG_FEATURE_AAA and so on. There are also configs that are more like consts; those would make more sense to include in a generated source file.
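For concreteness, the generated .cargo/config.toml I have in mind would look something like this (the config names here are illustrative, not real Kconfig symbols):

```toml
# Generated .cargo/config.toml (sketch; symbol names are made up)
[build]
rustflags = [
    "--cfg", "CONFIG_FEATURE_AAA",
    "--cfg", "CONFIG_FEATURE_BBB",
]
```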

However, cargo also really doesn't want .cargo/config.toml to be a generated file. Aside from modifying .cargo/config.toml in the user's source tree, the other idea I had was to generate the source tree the build happens from: Cargo.toml and Cargo.lock would be copied from the source tree, the .cargo/config.toml would be generated, and the source files would be short stubs that just contain include!("../path/to/user/source").

Any other ideas on how to do this? I'd like a solution that works on stable, as well as any thoughts on future approaches that might work better.

> The Zephyr build system writes these to a header file, which can then be used for conditional compilation. Many of these are needed to develop proper bindings to Zephyr, as the Zephyr interfaces frequently use these config settings to adjust structures, and even how syscalls are made.

In your Rust packages that provide Zephyr bindings, include a build script. The build script can read the header file (perhaps from a path given in an environment variable) and translate it into cargo:rustc-cfg outputs, adding cfgs the Rust code can then make use of. Build scripts can also generate source files if necessary (these should be written to the special OUT_DIR, not into the package's normal source tree).
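A minimal sketch of such a build.rs, assuming the Zephyr build exports the path to the generated header in an environment variable (ZEPHYR_AUTOCONF_H is an invented name) and that boolean Kconfig symbols appear as `#define CONFIG_FOO 1` lines:

```rust
use std::env;
use std::fs;

/// Parse `#define CONFIG_FOO 1` lines and return the names of boolean
/// Kconfig symbols (those whose value is exactly `1`).
fn bool_configs(header: &str) -> Vec<String> {
    header
        .lines()
        .filter_map(|line| {
            let rest = line.strip_prefix("#define ")?;
            let (name, value) = rest.split_once(' ')?;
            (name.starts_with("CONFIG_") && value.trim() == "1")
                .then(|| name.to_string())
        })
        .collect()
}

fn main() {
    // ZEPHYR_AUTOCONF_H is a made-up variable name; the real integration
    // would decide how CMake hands this path to cargo.
    if let Ok(path) = env::var("ZEPHYR_AUTOCONF_H") {
        // Re-run the build script if the generated header changes.
        println!("cargo:rerun-if-changed={path}");
        let header = fs::read_to_string(&path).expect("readable autoconf header");
        for name in bool_configs(&header) {
            // Each line makes the symbol usable as #[cfg(CONFIG_FOO)]
            // in this package's code.
            println!("cargo:rustc-cfg={name}");
        }
    }
}
```

Non-boolean settings (hex or string values) don't fit the cfg model well; those could instead be emitted as consts into a generated file under OUT_DIR that the crate pulls in with include!.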

Build scripts can even depend on libraries, so this conversion logic can be written once in a library that is declared under [build-dependencies] of each package that needs it.


If it helps, cargo rustc can read a file containing all sorts of arguments to be passed on the command line; see "Command-line Arguments" in The rustc book.

In a similar case to yours (although using Meson instead of CMake), generating a file containing a bunch of --cfg=KCONFIG_FLAG lines and passing it as cargo rustc -- @path/to/generated/configfile did the trick.

I was able to add a build script to a zephyr-sys crate, which for now just outputs a single cfg to test with. However, I can't seem to get a crate that depends on that crate to see this cfg. Checking the compilation command lines confirms it: although zephyr-sys is built with the cfg option, crates that depend on it don't get those same cfgs set.

Does this mean that every crate that needs to know these configuration settings must have its own small build script that invokes the shared conversion library (as a build-dependency) to get the cfg settings?

This is as expected: a build script sets cfgs only for its own package's compilation. Consider the alternative: if cfgs were global, you'd have an unmanaged global namespace that could easily produce subtle conflicts between crates.

> Does this mean that every crate that needs to know these configuration settings must have its own small build script that invokes the shared conversion library (as a build-dependency) to get the cfg settings?

Yes, but you should minimize the number of crates that actually need conditional compilation, because conditional compilation is intrinsically fragile: it's hard to be sure it's correct without testing multi-crate builds against many different configurations.

Instead, have your -sys crate (or other appropriate crates) export items whose definitions adapt to the configuration, so that dependent crates need only use those items, not write cfgs of their own.
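For example (a sketch: CONFIG_USERSPACE is a real Kconfig symbol, but the function names and bodies here are invented placeholders), the -sys crate can hide the cfg behind one exported item with a stable signature:

```rust
// Inside the hypothetical zephyr-sys crate: the cfg chooses the
// implementation, but dependents only ever see `uptime_get`.

#[cfg(CONFIG_USERSPACE)]
fn invoke_uptime() -> i64 {
    // Placeholder for the syscall-trampoline path used when
    // userspace support is enabled.
    0
}

#[cfg(not(CONFIG_USERSPACE))]
fn invoke_uptime() -> i64 {
    // Placeholder for a direct function call when there is no
    // user/kernel boundary.
    0
}

/// Dependent crates call this and never write the cfg themselves.
pub fn uptime_get() -> i64 {
    invoke_uptime()
}
```

Dependent crates then compile identically under every configuration; only the -sys crate's internals vary.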
