I think we've recursed too deeply into a specific detail of the discussion. I'm not sure if I want to propose this specifically. The only take-away from this is that existence of private fields is likely a solvable problem, and not a dealbreaker for reflection/serialization interfaces.
This is true, but reflection has to solve the same problem: either you give it access to private fields (equivalent to your #1 requirement) or you continue to enforce privacy (with the same limitations for serde-like crates).
Given both the semver and safety implications of exposing private fields, I suspect it's probably not worth it. However, those same considerations apply when the crate itself provides serde trait impls! It may turn out that the types people want "adapter crates" for are already fully public. Something worth investigating for anyone trying to solve this problem.
Yes, I mentioned this at the end of my post: there are alternatives to "feed an external type's tokens into an existing derive macro, and relax the orphan rules." Ideally we wouldn't need to relax the orphan rules at all: adapter crates could provide methods like `Path::display` to build their newtype wrappers that implement the relevant trait(s). This provides a disambiguation point similar to the ones in various first-class module systems.
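To make that concrete, here is a minimal sketch of the newtype-adapter pattern, in the spirit of `Path::display`. All the names here (`Point`, `ToJson`, `PointJson`, `PointExt`) are hypothetical stand-ins, not any real crate's API:

```rust
// Stand-in for a type from an external crate that we don't own.
pub struct Point {
    pub x: i32,
    pub y: i32,
}

// Stand-in for a serde-like trait from a third crate.
pub trait ToJson {
    fn to_json(&self) -> String;
}

// Adapter crate: a newtype wrapper we *do* own, so implementing the
// foreign trait for it doesn't run afoul of the orphan rules.
pub struct PointJson<'a>(&'a Point);

impl ToJson for PointJson<'_> {
    fn to_json(&self) -> String {
        format!(r#"{{"x":{},"y":{}}}"#, self.0.x, self.0.y)
    }
}

// Extension trait mirroring `Path::display()`: a method that builds the
// wrapper, giving callers an explicit disambiguation point.
pub trait PointExt {
    fn json(&self) -> PointJson<'_>;
}

impl PointExt for Point {
    fn json(&self) -> PointJson<'_> {
        PointJson(self)
    }
}
```

A caller would then write `point.json().to_json()`, choosing the adapter explicitly rather than relying on a blanket trait impl.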
For what it's worth, I think some kind of reflection capability would allow Rust to grow beyond certain boundaries that exist today - not just because of the issues with orphan rules, but also other limitations, for example the lack of a stable ABI (which seems even less likely to change in the near future from what I'm seeing). If you can access any public APIs through metaprogramming, then you can define whatever interface you want to interact with arbitrary Rust code - admittedly with reduced performance, but WASM<->JS transformations don't seem very efficient either at the moment, yet WASM (and thus Rust) still has obvious potential and valid uses even right now.
That being said, Rust is not (at least currently) being positioned in the same way as languages with reflection capabilities such as Java or C#. In Java, you can take a binary artifact (a WAR file), upload it to a running server process, and run it within that process as long as it follows some basic rules. It's kind of hard to picture a Rust program doing the same thing - as cool as the idea itself is.
Thus, in my opinion, even if reflection has its place in Rust, the first attempt should be made with a separate crate instead of a language feature. There would be some limitations (I've done some PoCs and handling trait implementations seems rather iffy, lifetimes hugely complicate matters, and don't even get me started on unsized types), but a crate using macros or even some source-code parsing magic could still cover a lot of the use cases - and if the crate becomes popular enough, perhaps the language itself will later evolve in that direction as well.
This would not be a useful thing for Rust IMHO. This implies dynamic linking to a bunch of libraries (which is all it really is at the end of the day), which prevents things like LTO and aggressive AOT inlining. Rust is not a JIT language and never should be and so will never benefit from such a capability.
I disagree. While dynamic linking does have some cost associated with it, it is a useful abstraction, and Rust supports lots of non-zero-cost abstractions. The fact that Rust does static linking by default (and gains some performance benefits from it) doesn't mean we should take dynamic linking off the table solely for performance reasons.
The WAR thing, for example, seems really close to a plugin system, and I know there's demand for that; I don't think we want to ban plugins just because static linking performs better.
Coming back to the topic at hand, though: I think that for WAR/plugin loading schemes, an even bigger requirement than runtime reflection is a stable ABI. Or at least, a stable ABI is required either way, and there are ways to accomplish the loading without runtime reflection. (And you can currently get around the unstable ABI by using stuff like
Most importantly, you can do stuff like that today with no new language features required; see for example https://michael-f-bryan.github.io/rust-ffi-guide/dynamic_loading.html
Thanks for the link! There's some really interesting stuff in there, and it certainly works well for a plugin-based architecture. It works less well for general interfaces, though, which is what I wanted to emphasize in my previous post (I only brought up WARs as an example of a seemingly less Rust-like use case - but I think I've certainly been proven wrong on that count).
The problem with existing solutions today is that they either require the library author to take non-Rust usage into account (and thus define the API with repr(C) structs, functions, etc.), or require a helper crate written specifically for that library which provides the same thing. With a reflection API, as long as the metadata can be retrieved for the library in question, you could have one single generic repr(C) helper crate for all Rust libraries. That's it. Sure, performance will be significantly worse, but should that become a concern you could still write a library-specific helper crate that does away with the indirections made necessary by reflection.
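As a rough illustration of what such a generic repr(C) surface might look like: the helper funnels every exported function through one FFI-safe erased signature and looks it up by name. Everything here is made up for the sake of the sketch (a real helper would need to erase far more than a single `i64 -> i64` shape, and would pay the indirection cost mentioned above):

```rust
// Hypothetical FFI-safe descriptor a generic helper crate could consume.
#[repr(C)]
pub struct ErasedFn {
    pub name: *const u8,
    pub name_len: usize,
    // All calls are funneled through one C-compatible signature.
    pub call: extern "C" fn(i64) -> i64,
}

// A function the wrapped library wants to expose.
extern "C" fn double(arg: i64) -> i64 {
    arg * 2
}

// Metadata the wrapped library (or a reflection mechanism) would emit.
pub fn export_table() -> Vec<ErasedFn> {
    vec![ErasedFn {
        name: "double".as_ptr(),
        name_len: 6,
        call: double,
    }]
}

// A consumer that knows nothing about the library: look up by name,
// call through the erased signature.
pub fn call_by_name(table: &[ErasedFn], name: &str, arg: i64) -> Option<i64> {
    table
        .iter()
        .find(|f| unsafe {
            std::slice::from_raw_parts(f.name, f.name_len) == name.as_bytes()
        })
        .map(|f| (f.call)(arg))
}
```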
One more thing regarding performance. I understand that one of the goals of Rust is providing a lower-level C-like language that can be used for the same purposes. For embedded programming and the like, performance is critical and reflection has absolutely no place there. But that's just one of the goals - and I think we're selling Rust short if we stop there. What actually drew me to this language was the memory and thread safety guarantees (all without using a garbage collector, but even if Rust had GC those two things would still be huge). These are universally important, and give the language potential to be used pretty much everywhere - and in some of those cases, reflection would certainly be useful. (Again, not necessarily as a language feature, but perhaps as a library.)
But compile-time reflection could have a place there, and I'd argue that most use cases for reflection are achievable with compile-time reflection.
That said... why not both? Despite @krojew describing their design as "runtime reflection", since all the APIs are `const fn`, it seems suitable for compile-time reflection as well. Even if we don't adopt their design, it seems like a powerful idea to use the same API for both kinds of reflection.
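As a small illustration of "one API serving both worlds" in today's Rust (not the design under discussion; all names here are hypothetical), per-type metadata exposed through associated constants is already usable in both const and runtime contexts:

```rust
// Hypothetical compile-time "reflection" metadata.
pub struct FieldInfo {
    pub name: &'static str,
}

pub trait Meta {
    const TYPE_NAME: &'static str;
    const FIELDS: &'static [FieldInfo];
}

pub struct User {
    pub name: String,
    pub age: u32,
}

// In a real design, a derive macro would generate this impl.
impl Meta for User {
    const TYPE_NAME: &'static str = "User";
    const FIELDS: &'static [FieldInfo] = &[
        FieldInfo { name: "name" },
        FieldInfo { name: "age" },
    ];
}

// The same metadata is usable at compile time...
pub const USER_FIELD_COUNT: usize = User::FIELDS.len();
// ...and can equally be inspected at runtime through the same trait.
```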
In our system, we've built something very similar to what is proposed in this RFC. What we have is:
- we have our own trait-object-based "reflection" API.
- we've re-built serialization to use that API instead of using `serde` directly. Exactly the case mentioned in the RFC.
- we use that API for other purposes, too (for example, validation against dynamic type schemas).
So, this RFC would be down our alley, right? Well, not quite, I don't think so. Here is why.
Based on my experience building this system, my biggest concern is how generalizable reflection, as a pattern, can really be.
In our case, we've built a very targeted reflection API for the kind of data we need to work with. Among other things, this means we don't have to deal with:
- Which "base" types we could use. For example, `serde`'s base types are very limited: if they work for you, you're good; if they don't, it's sometimes not easy to work around that limitation. For example, the way `serde_json` supports "arbitrary precision numbers" (which the `serde` data model doesn't support directly) is a bit hacky right now (in my opinion) and actually gets in the way of other `serde` features. The list of types in this RFC doesn't include "arbitrary precision decimal" (for valid reasons, of course), so it would have to be represented as a "struct" or something (so we would lose efficiency here).
- Even though we use dynamic dispatch, we optimized it for exactly the use cases we want. The types we work with are very regular, but have some interesting specific features. For example, instead of scanning the "type system" for a field with a given name (and keeping some cache around to optimize these lookups), we can "ask" our data types directly via a single call with the signature `fn get<'a>(&'a self, field_name: &str) -> FieldRef<'a>`, where `FieldRef` would be an `enum` indicating the "kind" of the field (list? singular object? etc.) together with a trait object to work with that kind of field. This actually allowed us to do serialization/deserialization faster than when we used `serde`, so I'm not completely on board with the implicit assumption that "static typing is always faster". Yes, it usually is, but there are nuances, so sometimes it isn't (and there are also plenty of cases where the performance trade-off is acceptable).
- Variability in how types are represented in Rust vs. what we want over the wire. This is another case where we can do better than an off-the-shelf library like `serde`: we can make slight adjustments between how we represent types in Rust (so they make sense to application developers) and how these types are serialized over the wire (how we need the JSON to look). Plus, we can support more "difficult" formats like XML, which are hard to support in a generic library without a ton of attribute annotations (should a field be an element? an attribute? what about namespaces? etc.).
- Other optimizations that I think are hard to do in the generic case. For example, we provide a mutability API in our "reflection" API, and one major assumption we make is that every data type we have is `Default`, so we can always create an "empty" field if it wasn't initialized already. That simplifies the API / deserialization code a lot, simply because you can create an empty "thing" and then fill it in later. Without that assumption, you have to accumulate all the fields first and only then create the data type instance (and I have a feeling this is where we might be getting a performance advantage, since it avoids drop tracking for local variables - not that I know much about that!).
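A minimal sketch of the `get`-by-name shape described above (illustrative types only; the real system's `FieldRef` kinds and trait objects are richer than this):

```rust
use std::fmt::Display;

// The "kind" enum: each variant carries a trait object for that kind.
// `Display` stands in for a scalar-field trait here.
pub enum FieldRef<'a> {
    Missing,
    Scalar(&'a dyn Display),
    List(&'a dyn ListRef),
}

pub trait ListRef {
    fn len(&self) -> usize;
}

impl ListRef for Vec<i64> {
    fn len(&self) -> usize {
        Vec::len(self)
    }
}

pub trait Reflect {
    fn get<'a>(&'a self, field_name: &str) -> FieldRef<'a>;
}

// `Default` mirrors the "every type can start empty" assumption above.
#[derive(Default)]
pub struct Order {
    pub id: i64,
    pub quantities: Vec<i64>,
}

impl Reflect for Order {
    // One direct call per lookup: the match is resolved per concrete
    // type, so no name-to-field cache is needed.
    fn get<'a>(&'a self, field_name: &str) -> FieldRef<'a> {
        match field_name {
            "id" => FieldRef::Scalar(&self.id),
            "quantities" => FieldRef::List(&self.quantities),
            _ => FieldRef::Missing,
        }
    }
}
```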
So I think that even though this RFC is as close to our use case (let's call it "large enterprise application") as it could ever be, it's these details that make me think that no, it is not going to work for us.
What would help us? (A little bit off-topic.) The things that I think would help us are smaller building blocks that we could use to create our "reflection" API, such as:
- compile-time macros for things like 1) field offsets and 2) trait object vtables (yeah... there's a lot behind this "tiny" ask)
- maybe some ABI stability for trait objects and such (so that types can be dynamically loadable)
- probably, other little things that I'm forgetting about...
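Worth noting on the first item in that list: field offsets are now covered by the standard library's `std::mem::offset_of!` macro (stable since Rust 1.77), which is evaluated at compile time:

```rust
use std::mem::offset_of;

// repr(C) makes the layout, and therefore the offsets, predictable.
#[repr(C)]
struct Packet {
    kind: u8,
    // 3 bytes of padding follow here under repr(C).
    len: u32,
    payload: [u8; 8],
}

// Both offsets are computed entirely at compile time.
pub const LEN_OFFSET: usize = offset_of!(Packet, len);
pub const PAYLOAD_OFFSET: usize = offset_of!(Packet, payload);
```

The vtable half of the ask, by contrast, still has no stable equivalent.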
I'd argue that Rust's support for self-description and dynamic loading in a WASM context is actually top-tier right now thanks to wasm-bindgen and wasmtime. Companies like Fastly are also taking real advantage of our good WASM support to, as you said, "upload [a binary artifact] to a running server process."
An MVP of compile-time reflection can be done right now, but it wouldn't be zero-cost. Last time I tried to achieve this, I found the getter/setter pattern useful for accessing fields via their names. The only reason it wasn't zero-cost was the poor support for const generics at the time. Now it is limited by const fn in traits, lifetime-parametric type IDs (type checking is done with `Any`), and const evaluation of higher-kinded functions. See https://github.com/rust-lang/rfcs/issues/2743
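For reference, a minimal (non-const, hence not zero-cost) version of that getter/setter-by-name pattern with `Any`-based type checking might look like this; the names are illustrative:

```rust
use std::any::Any;

// Field access by name; callers downcast through `Any` to recover the
// concrete type, which is where the type checking happens.
pub trait FieldAccess {
    fn field(&self, name: &str) -> Option<&dyn Any>;
    fn field_mut(&mut self, name: &str) -> Option<&mut dyn Any>;
}

pub struct Config {
    pub retries: u32,
    pub host: String,
}

// In practice a derive macro would generate this boilerplate.
impl FieldAccess for Config {
    fn field(&self, name: &str) -> Option<&dyn Any> {
        match name {
            "retries" => Some(&self.retries),
            "host" => Some(&self.host),
            _ => None,
        }
    }

    fn field_mut(&mut self, name: &str) -> Option<&mut dyn Any> {
        match name {
            "retries" => Some(&mut self.retries),
            "host" => Some(&mut self.host),
            _ => None,
        }
    }
}
```

The cost is the runtime string match and the `Any` downcast; a const-generic version of the same idea is what would make it zero-cost.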