Thoughts on Rust GUIs


I agree in principle; but CSS has a lot going for it in terms of developer familiarity – we shouldn’t underestimate that.

AFAIK (correct me if I’m wrong) WebRender isn’t reliant on CSS; you could define some other surface style language and use it to talk to WebRender. To have our cake and eat it too, we could provide both that other style language and CSS, plus a translation from CSS to the other style language as a const fn (or macros) – that way we gain most of the benefits of both familiarity and performance.
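As a rough sketch of the idea (all names here are illustrative – `Style` and `style!` are hypothetical, not a real WebRender or CSS API), the "other style language" could be plain Rust structs, with a macro providing CSS-like surface syntax that costs nothing at runtime:

```rust
// Hypothetical sketch: a `style!` macro that translates a CSS-like
// declaration list into a made-up native `Style` struct at compile time,
// so the surface syntax stays familiar but the renderer never sees CSS.

#[derive(Debug, Default, PartialEq)]
struct Style {
    width: u32,      // pixels
    height: u32,     // pixels
    background: u32, // 0xRRGGBB
}

macro_rules! style {
    ( $( $field:ident : $value:expr ),* $(,)? ) => {
        Style { $( $field: $value, )* ..Style::default() }
    };
}

fn main() {
    // Roughly "width: 200px; height: 50px; background: #336699":
    let s = style! { width: 200, height: 50, background: 0x336699 };
    assert_eq!(s.width, 200);
    println!("{:?}", s);
}
```

A full CSS translation (selectors, cascade, shorthand properties) would of course be much more involved; this only shows where the compile-time boundary could sit.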


I love this initiative, thanks everyone for working on it! I’ve been focusing on delegation, as I think it’ll enable ergonomic implementation of custom widgets in a pure Rust GUI library.


I found some slides about different potential approaches for making HTML5 canvas accessible. This is applicable because using canvas is similar to using any graphics API: As far as accessibility is concerned, it just produces an opaque (uninspectable) image. These slides explore some different approaches to adding more information.


I wonder if this topic can grow into a separate community WG of its own.

While not urgent, the GUI problem is deeply associated with desktop application development use cases. Being able to write small portable programs ergonomically as showcases of the language can improve its “general-purpose” image, and maybe get more non-Rust people interested in it.

EDIT: It seems there actually is an existing group, though I don’t know the details.


I have just bumped into a mega-post about the state of cross-platform GUI libraries from 2016; it might be worthwhile to check it out:


I hope low latency will be a key tenet of whichever Rust GUI rises to prominence:

This to me sounds like the most promising project so far, especially with Rust quickly becoming the de-facto language for WASM, making frameworks like Yew that much more viable.

Do you post updates anywhere other than your internal chat and the GitHub repo?


Thank you :-). Currently we are working internally in our focus time group, but you can find information about the current progress in the #orbital channel of the Redox chat. We will soon release a roadmap on our GitHub repository.


It’s pretty weird to see people propose that the standard Rust GUI library should be some web-based thing. There’s no problem with people wanting fast and “portable” prototyping for apps using web technologies, but given that Rust is a statically-typed, compiled, heavily performance-, correctness-, and ergonomics-oriented systems programming language, I don’t think it makes sense to have the “official” GUI library be based on flaky and painful-to-use technologies such as HTML and CSS.

HTML was never meant to be a GUI or app deployment platform; it was meant to be a simple document delivery system. It’s just that people started abusing it for app development, for which it sometimes works, it sometimes doesn’t, and its design (and means of usage) certainly conflicts with many if not all principles of modern GUI libraries.

Furthermore, having to compile half of a browser engine into my app just in order to display a button and an image doesn’t sound terribly attractive from a developer’s point of view. Not to mention the infinitude of security holes that operating a browser engine opens up. It’s the same huge set of security problems that client-side web development has in general, except that nobody will pay attention to them now because it’s being used in a different context, and so most programmers will just assume that “it’s safe”.

So I’d be strongly in favor of a new, Rust-idiomatic GUI library which probably wraps different underlying native libraries on different platforms. Completely hiding differences between OSes is impossible anyway, that’s incidentally why I think the “web is portable” argument is a fallacy — just look at the variety of cross-platform libraries that offer “seamless, platform-independent” development for mobile OSes such as Android and iOS; none of them managed to achieve that noble goal so far. Anyway, any library would certainly need to enable somehow dropping down to the underlying platform without too much pain.


I’m a little surprised that nobody has referred to tokio/futures in this thread. A GUI is inherently a complex async system: if the user clicks this, show that modal, etc. And obviously those applications need network access and file reads/writes, but none of it should ever block the rendering thread. We already have a library for that – tokio.
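The "never block the rendering thread" part can be sketched without tokio at all – here using std channels as a stand-in for a futures executor, with slow work running off-thread and posting a completion back as an ordinary event (all names are illustrative):

```rust
use std::sync::mpsc;
use std::thread;

#[derive(Debug)]
enum Event {
    ButtonClicked,
    DataLoaded(String),
}

// Drain the event queue without ever blocking on the slow work itself;
// completions come back into the same queue as ordinary events.
fn run_event_loop() -> Vec<String> {
    let (tx, rx) = mpsc::channel();
    tx.send(Event::ButtonClicked).unwrap(); // simulate a click from the windowing layer

    let mut loaded = Vec::new();
    while let Ok(event) = rx.try_recv() {
        match event {
            Event::ButtonClicked => {
                let tx = tx.clone();
                // Pretend this is a network or file read running off-thread.
                let worker = thread::spawn(move || {
                    tx.send(Event::DataLoaded("payload".into())).unwrap();
                });
                // Joined immediately only to keep this sketch deterministic;
                // a real loop would keep pumping events instead of waiting.
                worker.join().unwrap();
            }
            Event::DataLoaded(data) => loaded.push(data),
        }
    }
    loaded
}

fn main() {
    println!("loaded: {:?}", run_event_loop());
}
```

With futures, the worker thread would instead be a future polled by an executor, but the shape – UI events in a queue, completions re-entering that queue – stays the same.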


tokio is about high-performance I/O, but futures is generic, so it could make sense; maybe someone will write a GUI event reactor (perhaps based on winit’s work? I don’t know).


relm was using it for a time, then removed it because it was just too painful to use.


Has anyone experimented with ECS-based GUIs by chance? Rather than the OO-style, ECS seems like something that would fit a lot better with the Rust approach.

I’m just now finally starting to learn a bit more about it, so if by chance someone has good reading material please do share :slight_smile:


The best material I know is actually not specifically about ECS, but on data-oriented design, with some ECS mixed in:

As far as applying it to the GUI problem goes, I suspect there’s still a lot of institutional game programmer knowledge that hasn’t been “figured out” for desktop GUIs yet. I would start by figuring out what the hard and slow parts of a GUI are, and how we might write them if we had complete control over the input data structures, independent of the other parts of the system.

It may not wind up looking like an ECS, exactly, since game world entities are shaped and interact rather unlike UI elements. But I do think we’ve hit on some of the possibilities already- WebRender-style retained display lists, event queues rather than nested event handlers, separate layout algorithms, etc.
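To make the ECS framing concrete, here is a rough sketch of what columnar widget storage could look like – entities as plain indices, each "component" in its own `Vec`, and a "system" iterating one column at a time instead of walking an object tree. All names (`Ui`, `Bounds`, `layout_vertical`) are illustrative, not from any real library:

```rust
// ECS-style storage applied to widgets: entities are indices, components
// live in parallel columnar Vecs, and systems iterate over columns.

#[derive(Debug, Clone, Copy, PartialEq)]
struct Bounds { x: f32, y: f32, w: f32, h: f32 }

#[derive(Default)]
struct Ui {
    bounds: Vec<Option<Bounds>>,
    labels: Vec<Option<String>>,
}

impl Ui {
    // Allocate a new entity: one slot in every component column.
    fn spawn(&mut self) -> usize {
        self.bounds.push(None);
        self.labels.push(None);
        self.bounds.len() - 1
    }

    // A toy "layout system": stack every widget that has bounds vertically.
    fn layout_vertical(&mut self) {
        let mut y = 0.0;
        for slot in self.bounds.iter_mut().flatten() {
            slot.y = y;
            y += slot.h;
        }
    }
}

fn main() {
    let mut ui = Ui::default();
    let a = ui.spawn();
    let b = ui.spawn();
    ui.bounds[a] = Some(Bounds { x: 0.0, y: 0.0, w: 100.0, h: 20.0 });
    ui.bounds[b] = Some(Bounds { x: 0.0, y: 0.0, w: 100.0, h: 30.0 });
    ui.labels[b] = Some("OK".to_string());
    ui.layout_vertical();
    assert_eq!(ui.bounds[b].unwrap().y, 20.0);
}
```

A real design would need parent/child relationships and smarter storage than `Vec<Option<T>>`, but the data-oriented shape – queries over columns rather than virtual calls on a tree – is the point.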


Note that the Elm architecture, and so React, follows a slightly modified version of ECS. The main difference is that there’s only a single entity in the system: the root component.

I’m not saying this approach is better. Even in DOM-based web UI it’s often necessary to handle multiple entities; e.g. popup/toast-style modals ideally shouldn’t be components, but people already implement them as React components.

But React has become the mainstream way to develop web frontends, and people are already building awesome UIs with it in production. So this style of UI isn’t somehow experimental; it’s the market standard.
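For reference, the Elm architecture itself (setting aside any ECS comparison, which later replies dispute) boils down to a single model, a message enum, and a pure update function. A minimal Rust sketch, with all names illustrative:

```rust
// The Elm architecture in miniature: Model, Msg, update, view.

#[derive(Debug, Clone, PartialEq)]
struct Model {
    count: i32,
}

enum Msg {
    Increment,
    Decrement,
}

// Pure state transition: the only way the model ever changes.
fn update(model: Model, msg: Msg) -> Model {
    match msg {
        Msg::Increment => Model { count: model.count + 1 },
        Msg::Decrement => Model { count: model.count - 1 },
    }
}

// The view is just a function of the model; a real library would diff
// or re-render this output rather than build a String.
fn view(model: &Model) -> String {
    format!("count = {}", model.count)
}

fn main() {
    let mut model = Model { count: 0 };
    for msg in [Msg::Increment, Msg::Increment, Msg::Decrement] {
        model = update(model, msg);
    }
    assert_eq!(model.count, 1);
    println!("{}", view(&model));
}
```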


I’m… really not sure how React has anything to do with ECS. I mean, it uses the word “component,” but a React component is very different from an ECS component.


After a quick search, I found that I totally misunderstood what ECS is. They don’t seem to be in the same domain at all. Sorry if I confused you!


A follow-up on using ECS for GUIs. Looks like @raphlinus (Xi creator) has been experimenting with it, and has had very positive experiences with this approach:


Thanks for looping me in to this thread, @jntrnr. Reading through it, I have a few thoughts.

First, an explicit goal of xi-win is what I’d describe as “uncompromising performance.” To me, this means not repainting the whole screen just because a GPU can do it in a small number of milliseconds, not creating make-work intermediate representations, etc. It’s a separate question whether this is the right point in the space, but at the very least I want to understand what’s possible and how hard it is to actually achieve. I’m finding that performance is an interesting problem, and I’ll have more to say about it as I build it out (note that I’m not making strong performance claims about the existing prototype, it takes a number of shortcuts).

The next observation is that I’m finding that modern UI is often quite layered, as exemplified by Flutter. At the top layers, you want to have a discussion whether you’re using a Flux-style functional reactive pattern, some kind of declarative approach, a DSL of some kind, or whatnot. At the lower levels, you want to be concerned about whether you’re effectively using the GPU resources, avoiding redundant work when the deltas are small, etc. In between, you have some really difficult questions around, for example, how to decompress images off the main thread speculatively, before they scroll into view. Of course, imgui is something of a reaction to the layered approach. From my perspective, imgui makes tradeoffs that are more suitable for a game than a traditional productivity-style app.

I’m explicitly only addressing the lower layers in my prototype. I think it’s also very much worth exploring the question, “what is the most concise, idiomatic expression of some UI in Rust?” Then as a separate question how to map it to an incremental widget tree implementation, which I think is still the gold standard for performance and ability to do things like accessibility and responsive layout.

I don’t claim to have the answers, but I hope my prototype is interesting input to the discussion.


Great post! This feels like a great start toward making a traditional object graph fit into Rust (though it’s not really an ECS, as I’m sure you’ve heard enough of on the Reddit thread :slight_smile:). Separating event handling out into queues, and moving layout into the widget container, are both things that we’ve kind of touched on in this thread and I’m glad to see they’re working out.

The question of minimal repainting is interesting. It’s definitely worth doing in some cases, like scrolling or localized changes. APIs like DXGI and DirectComposition do provide specialized support for updating sub-rectangles, moving layers without re-blitting them, etc. But in general I suspect part of the reason for repainting the whole screen on the GPU is not just that it’s “fast enough” but that it may be faster and lower-power than running the complicated logic of a full minimal-repaint system.
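Even the simplest version of that logic – accumulating damaged rectangles and handing the compositor their union instead of the whole screen – already adds bookkeeping to every frame. A minimal sketch (names like `DamageTracker` are illustrative, not from any real compositor API):

```rust
// Dirty-region tracking in its simplest form: union all invalidated
// rectangles per frame, then repaint only that region.

#[derive(Debug, Clone, Copy, PartialEq)]
struct Rect { x0: i32, y0: i32, x1: i32, y1: i32 }

impl Rect {
    fn union(self, other: Rect) -> Rect {
        Rect {
            x0: self.x0.min(other.x0),
            y0: self.y0.min(other.y0),
            x1: self.x1.max(other.x1),
            y1: self.y1.max(other.y1),
        }
    }
}

#[derive(Default)]
struct DamageTracker { dirty: Option<Rect> }

impl DamageTracker {
    fn invalidate(&mut self, r: Rect) {
        self.dirty = Some(match self.dirty {
            Some(d) => d.union(r),
            None => r,
        });
    }

    // Returns the region to repaint (if any) and resets for the next frame.
    fn take(&mut self) -> Option<Rect> {
        self.dirty.take()
    }
}

fn main() {
    let mut damage = DamageTracker::default();
    damage.invalidate(Rect { x0: 0, y0: 0, x1: 10, y1: 10 });
    damage.invalidate(Rect { x0: 5, y0: 5, x1: 20, y1: 15 });
    println!("{:?}", damage.take());
}
```

Note the single-union approach can over-invalidate badly (two small changes in opposite corners dirty nearly the whole screen); real systems keep a list of disjoint rects, which is exactly the kind of complexity that makes full-screen repaint tempting.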

I always had the impression that imgui was more of a reaction to traditional data binding models like MVC than to the layered approach in general. It certainly makes tradeoffs for games, but I think both imgui and Flux are onto something where data binding is concerned. The idea of a separate, canonical model that the UI automatically reflects, without any extra maintenance (e.g. signals/slots or the Controller of MVC) is extremely nice.


I think it’s useful to classify applications into four kinds:

  1. Per-platform UI. Applications that are happy to write platform-specific UI code for each platform. They probably just want a direct Rust interface to platform APIs.
  2. Wrap native widgets. Applications which can get by with a toolkit like wxWidgets or React Native that just wraps native widgets. Doesn’t have to mean least-common-denominator, because you can drop in self-implemented components, but some kinds of extensions beyond native functionality are difficult.
  3. Full-featured reimplementation. Applications that want native-looking and full-featured UI (accessibility etc) but for whatever reason, can’t use native widgets and need to reimplement their own (and accept the weight of that).
  4. Cut-down reimplementation. Applications that don’t care about being native-looking and are willing to ignore features like accessibility, in exchange for portability, flexibility, performance, and reduced footprint with minimal effort.

I think it would be very reasonable for each kind of application to prefer a different toolkit.

I don’t have much to say about 1 and 2, but Rust versions of React Native sound worth exploring.

For 3: I spent years working on layout and rendering for Gecko, and I have seen the amount of work required to reimplement close-to-native UI with broad enough coverage for most applications with all the -ilities (accessibility, GPU acceleration, etc etc). Not only is it a mountain of work up front, it is also a lot of ongoing work as underlying OS platforms evolve and your implementation has to migrate from framework to framework and implement new look-and-feels. I very strongly recommend NOT repeating that in Rust, and instead, figure out how to reuse an existing framework like Qt or GTK or a browser engine in idiomatic Rust. A browser engine has the advantage of reusing technologies familiar to the most developers.

For 4, because different applications are willing to compromise in different ways, you won’t converge on a single popular toolkit. Also I fear 4 is a siren song: it is very very tempting to look at existing UI frameworks, decide that they’re much more complicated than your immediate needs, and start writing your own, but over time as your needs grow you find that eventually you would have been better off had you started with a full-featured framework. Classic greenfield fever. But it can still make sense sometimes (often games).