Idea: façade crates

In the absence of that, anyone using an interface anywhere in the dependency tree forces the top-level crate to pick an implementation, even if the interface is used as an implementation detail. I think it'd be common, as an example, for a framework-style library (e.g. a web framework or a service framework) to make various default choices for its users. Users should still be able to override those choices, most of the time, but I think it makes sense to have defaults. And I wouldn't expect that the interface crate itself will always be in the best position to choose the default implementation.

My main concern is that libraries will add facade impls for convenience's sake. We already observed something similar with rand/getrandom: people were enabling the js feature (which enables entropy retrieval on wasm32-unknown under the assumption that wasm-bindgen is used) even though it was intended only for binary crates. Having such a dependency in a dependency tree makes the lives of those who target non-browser/non-Node WASM quite difficult.

The ability to override a facade impl alleviates the issue a bit, but projects which do not select their own set of facade impls are likely to encounter build breakage due to conflicting choices made by libraries, which practically removes the convenience win of not needing to pick all facade impls.

Also note that declaring the addition of a facade impl by a library to be a breaking change will not work well in practice, because such an addition is viral. In other words, you cannot encapsulate such a breaking change: if your dependency has added a facade impl, you may not be able to keep your library backwards compatible if you expose a type from that dependency as part of your public API. This can trigger a wave of breaking changes downstream, which is obviously far from ideal.

In my opinion, that is what the proposal effectively does, just with better Cargo integration, which should significantly improve ergonomics and make the feature more robust. When you define a facade crate, you effectively say that this crate has a link-time dependency. In a certain sense, it can be viewed as a "deferred" dependency. A facade impl crate states that it could be used as a dependency of its facade, which has to be explicitly selected by a project.

So one project I work on (VTK) has something like this; we call it "object factories" (because it works at the object level, not the crate level). The basic setup is that there's an abstract class that "everyone" actually uses, say vtkRenderWindow. If you need a render window, you ask for this[1]. However, it is abstract in the sense that the base class actually knows "nothing": it is constructed in the normal VTK way, which hides constructors and instead offers a ::New() static method, and that is where the subclass gets snuck in, transparently to code that isn't aware of the subclasses at all. ::New() ends up looking into a registry to construct the first "eligible" subclass that is available and returns that instead[2]. Say, vtkXOpenGLRenderWindow for X-using platforms; macOS gets vtkCocoaOpenGLRenderWindow, etc.

Now, how this registry gets populated is what is of interest here. VTK supports static and shared builds, so how do shared builds ensure that a concrete implementation is available in practice? There's some nasty preprocessor magic, helped by the build system, which knows which libraries are IMPLEMENTABLE (those with the abstract base classes) and which ones IMPLEMENTS <X> (those providing implementation(s) of base classes in library <X>). This triggers some macro nastiness where a header is generated, the path to that header is injected via -D, and it is then #include'd from the base class's module header. It expands to a Schwarz-counted registration mechanism for the subclass registration routines.
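
For concreteness, here is a rough Rust translation of that pattern (all names here are made up; this is just the registry idea, not what VTK actually generates):

use std::sync::{Mutex, OnceLock};

// Hypothetical abstract interface, standing in for vtkRenderWindow.
trait RenderWindow {
    fn render(&self);
}

// Constructor functions that implementation crates push into the registry.
type Ctor = fn() -> Box<dyn RenderWindow>;

static REGISTRY: OnceLock<Mutex<Vec<Ctor>>> = OnceLock::new();

// Each implementation calls this once at startup (VTK does the equivalent via
// generated, Schwarz-counted registration code).
fn register(ctor: Ctor) {
    REGISTRY.get_or_init(|| Mutex::new(Vec::new())).lock().unwrap().push(ctor);
}

// The ::New()-style entry point: first-come-first-serve over whatever registered.
fn new_render_window() -> Option<Box<dyn RenderWindow>> {
    let registry = REGISTRY.get()?.lock().unwrap();
    registry.first().copied().map(|ctor| ctor())
}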

Now, the relevant part here is that this does not give a lot of control. If you have multiple subclasses available, the base class is only ever going to give one instance, because ::New() is always parameter-less. So how do you pick? I have a proposal here where the end user can hint which implementation is preferred by using attributes to guide selection (this is all much easier over there, since it is runtime rather than build-time selection of the subclass, and far richer logic is available).

How is it relevant here? A few things that it has shown me about these kinds of things:

  • the overall graph leaf is not necessarily always aware of everything going on (e.g., Python knows nothing of this stuff, so it is up to the Python modules to set it up): defaults are useful, but any crate should be able to express an overriding preference because it may be the "leaf" for the use of any given facade crate;
  • multiple preferences: first-come-first-serve is unpredictable (i.e., the build graph can shift and toposort different crates to "earlier" on subtle changes) and therefore not useful here. Some kind of preference mechanism is useful (see the issue where I propose attributes on the implementations which can then be selected over; a rough sketch of such priority-based selection follows this list). Note that preferences may be platform-specific (e.g., systemd's journal APIs for logging on Linux, syslog on other Unixes, and…whatever makes sense on Windows). Testing may also want to ignore all of that and use its own stdout-based logging infrastructure instead (I do this, at least).
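
To make the second point concrete, here is a sketch of what priority-based selection could look like in Rust. The trait, the stand-in implementations, and the priorities are all hypothetical; a real Cargo-level mechanism would presumably resolve this at build or link time rather than in a runtime Vec, but the selection-policy question is the same.

use std::sync::{Mutex, OnceLock};

// Hypothetical facade trait with two stand-in implementations.
trait Logger {
    fn log(&self, message: &str);
}

struct StdoutLogger;
impl Logger for StdoutLogger {
    fn log(&self, message: &str) { println!("{message}"); }
}

struct StderrLogger;
impl Logger for StderrLogger {
    fn log(&self, message: &str) { eprintln!("{message}"); }
}

fn make_stdout() -> Box<dyn Logger> { Box::new(StdoutLogger) }
fn make_stderr() -> Box<dyn Logger> { Box::new(StderrLogger) }

// Each entry carries an explicit priority so the winner does not depend on
// registration (i.e. toposort) order; platform-specific entries are cfg-gated.
type Entry = (u32, fn() -> Box<dyn Logger>);
static REGISTRY: OnceLock<Mutex<Vec<Entry>>> = OnceLock::new();

fn register(priority: u32, ctor: fn() -> Box<dyn Logger>) {
    REGISTRY.get_or_init(|| Mutex::new(Vec::new())).lock().unwrap().push((priority, ctor));
}

fn pick_logger() -> Option<Box<dyn Logger>> {
    let entries = REGISTRY.get()?.lock().unwrap();
    // Highest priority wins; ties would need a real policy.
    entries.iter().max_by_key(|(priority, _)| *priority).map(|(_, ctor)| ctor())
}

fn register_defaults() {
    // e.g. a journal-backed impl would register here on Linux, syslog on other
    // Unixes, and so on; stderr just stands in for a platform default.
    #[cfg(unix)]
    register(10, make_stderr);
    // A test harness (or the leaf crate) can register a higher-priority override.
    #[cfg(test)]
    register(100, make_stdout);
}

fn main() {
    register_defaults();
    if let Some(logger) = pick_logger() {
        logger.log("facade selected");
    }
}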

Given this second bullet point, I don't think that build-time resolution is enough; some kind of link-time decision needs to be available (i.e., to make the test executable prefer something that the binary itself doesn't). I'm also of the opinion that explicit is better than implicit, and I actually like log's explicit setup calls. If Cargo could generate that for me, it'd be great. Is it something that could be mocked up as a build.rs-time dependency? Or is there just not enough info at that point?
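
For reference, log's explicit setup call looks roughly like this (a minimal logger along the lines of the example in log's documentation, assuming a dependency on the log crate):

use log::{Level, LevelFilter, Log, Metadata, Record, SetLoggerError};

// Minimal stdout logger, close to the example in log's docs.
struct StdoutLogger;

impl Log for StdoutLogger {
    fn enabled(&self, metadata: &Metadata) -> bool {
        metadata.level() <= Level::Info
    }

    fn log(&self, record: &Record) {
        if self.enabled(record.metadata()) {
            println!("{} - {}", record.level(), record.args());
        }
    }

    fn flush(&self) {}
}

static LOGGER: StdoutLogger = StdoutLogger;

// The explicit setup call that the binary makes exactly once.
fn init_logging() -> Result<(), SetLoggerError> {
    log::set_logger(&LOGGER)?;
    log::set_max_level(LevelFilter::Info);
    Ok(())
}

fn main() {
    init_logging().expect("a logger was already set");
    log::info!("explicit setup done");
}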

Anyways, this ended up a bit rambly, but I saw a similarity here with prior experience on similar use cases that might help with some of the decision making.


  1. In practice, you ask for vtkOpenGLRenderWindow, but not relevant here. ↩︎

  2. It is first-come-first-serve with the asterisk that there's an API to "turn off" a subclass. It's global state, but something that'd be great to fix some day. ↩︎


The primary benefit of traits is that they guarantee seamless integration with all existing Rust features: if a feature is supported on traits, it's supported on your facade items. Otherwise we end up in the situation of this proposal, where effectively a separate, duplicate language is introduced to replicate existing features (three attributes were already mentioned: #[facade], #[facade_impl], and #[optional_facade], plus some expected new magic in Cargo.toml, as well as extra parameters to rustc).

It's also not true that these traits are never used in generic programming. It's not super common, but there is nothing preventing us from e.g. writing stuff like

struct InstrumentedAlloc<A>(A);
unsafe impl<A: GlobalAlloc> GlobalAlloc for InstrumentedAlloc<A> { .. }

where InstrumentedAlloc is some wrapper around the global allocator which collects extra metrics or modifies the allocator's behavior in some way (e.g. logs the sizes of requested allocations). It's not super common, but I'd be surprised if people didn't use such patterns.
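
For concreteness, a compilable version of that wrapper over std's System allocator could look like the following (the byte counter is just a stand-in for whatever metrics collection one actually wants):

use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

static BYTES_REQUESTED: AtomicUsize = AtomicUsize::new(0);

struct InstrumentedAlloc<A>(A);

unsafe impl<A: GlobalAlloc> GlobalAlloc for InstrumentedAlloc<A> {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Record the requested size, then forward to the wrapped allocator.
        BYTES_REQUESTED.fetch_add(layout.size(), Ordering::Relaxed);
        self.0.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        self.0.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: InstrumentedAlloc<System> = InstrumentedAlloc(System);

fn main() {
    let v = vec![1u8, 2, 3];
    drop(v);
    println!("bytes requested so far: {}", BYTES_REQUESTED.load(Ordering::Relaxed));
}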


It would be nice to have a good list of use cases included in this proposal. There is GlobalAlloc, but it's already special-cased in the language. There's log, but alone it's not compelling enough. How often do people mess up their logger setup? Also tracing seems to be even more popular nowadays, and it uses an entirely different interface, even though it has a compatibility shim for log. However, it doesn't seem to fit the facade impl pattern proposed here.

What else? One could expect something like async executors, but it is entirely unclear to me whether this proposal is sufficient to implement something like an "executor facade". Part of the problem is that different executors can have very different capabilities (e.g. an executor which doesn't assume an OS likely doesn't have support for timers). It's also not true that an executor facade is always implemented only once. For example, it's not that uncommon, even if inconvenient and undesirable, to run tokio and async-std executors on different threads.

This seems to imply that executors can not be implemented via the proposed facades, eliminating a potentially strong use case. So what are other examples?


It'd be sufficient for a global spawn, or spawn_blocking, or similar. And those could be separate traits, so that an executor can implement a subset of them.
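
A rough sketch of what such split traits could look like (names and signatures here are hypothetical; futures are boxed only to keep the traits object-safe and the example short):

use std::future::Future;
use std::pin::Pin;

type BoxFuture = Pin<Box<dyn Future<Output = ()> + Send + 'static>>;

// Hypothetical facade traits, split so an executor can implement only the
// capabilities it actually has (e.g. no spawn_blocking on a no-OS executor).
trait Spawn {
    fn spawn(&self, future: BoxFuture);
}

trait SpawnBlocking {
    fn spawn_blocking(&self, work: Box<dyn FnOnce() + Send + 'static>);
}

// Consumers bound on only the capability they need.
fn run_in_background<E: Spawn>(executor: &E, future: BoxFuture) {
    executor.spawn(future);
}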


Actually, that brings up an interesting question: spawn and spawn_blocking are disjoint, and using one doesn't mandate using the other. But what if you did have that situation?

For example, I'll go back to allocation. It's valid (if not so useful) to have alloc without dealloc; how would we handle a global provider if these were split?

trait Alloc {
    fn alloc(layout: Layout) -> Option<NonNull<Memory>>;
}

unsafe trait Dealloc: Alloc {
    unsafe fn dealloc(memory: NonNull<Memory>, layout: Layout);
}

#[global_resource]
static GLOBAL_ALLOC: Alloc + Dealloc;

#[derive(Copy)]
struct GlobalAlloc;

impl Alloc for GlobalAlloc
where GLOBAL_ALLOC: Alloc
=> GLOBAL_ALLOC;

impl Dealloc for GlobalAlloc
where GLOBAL_ALLOC: Dealloc
=> GLOBAL_ALLOC;

Presuming it's possible to have a compilation where no GLOBAL_ALLOC is provided if it is not used (a necessary feature for a GlobalAsyncExecutor, unless std provides a basic default executor a la block_on and thread::spawn(|| block_on)), it's desirable to support two "support levels": one with Dealloc support and one with only Alloc.

(This specific case can be emulated by a noöp dealloc that makes everything leak, but the split may still be useful, e.g. for allowing deliberately leaked &'static mut allocation with no intent to dealloc while disallowing other short-lived heap allocation.)
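
For illustration, that emulation on top of today's GlobalAlloc could look roughly like this (LeakingAlloc is a made-up name; it forwards alloc to the System allocator and simply never frees):

use std::alloc::{GlobalAlloc, Layout, System};

// Leak-only allocator: real alloc, no-op dealloc, so "Alloc without Dealloc"
// can be emulated on top of today's GlobalAlloc.
struct LeakingAlloc;

unsafe impl GlobalAlloc for LeakingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {
        // Intentionally leak: memory is never returned to the system.
    }
}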

Even for an async executor, it's still desirable for there to be only one static instead of two. The split statics can each be a ZST reference to the actual shared static location of the global allocation, but this introduces an unnecessary double indirection into generic uses of the split global resource handles[1], compared to the global resource location itself being the single static involved.


  1. i.e. uses which take &impl Executor or &dyn Executor and can't take an impl Executor directly; the latter could itself be a zero-sized reference to the static global resource and would (ideally) monomorphize away these indirection-to-ZST layers. ↩︎
