Unsafe specialization?

Given how ancient the discussion about Rust specialization is, I'm pretty sure that some variant of this idea has been discussed before, and would gladly be redirected to the appropriate forum/zulip threads, RFCs, blog posts, etc for further reading.

But a thought just occurred to me yesterday: if providing safe and sound specialization is so hard, what if the MVP way out was to first release specialization as an unsafe language feature, with the aim to eventually expose a safe subset or make it fully safe as compiler technology marches on?

With this approach, the prerequisite for shipping specialization would be reduced to specifying what constitutes sound usage of specialization, so that it can be written down as a safety contract for early users. Of course, this is still a Hard Problem, as evidenced by the fact that we've already been through several iterations of the specialization specification. But maybe it's a significantly easier problem than having the compiler detect incorrect usage of specialization at compile time, which is what Rust has always aimed for so far, and what I agree the end goal should be?

My main intent here is to remove one of the things that makes core/std special, and help user-defined container and iterator types to enjoy the kind of optimizations that std containers and iterators have been getting for a long time. I think that this is a good initial target audience for the feature, because container and iterator developers are already one of the large users of unsafe Rust, as they constantly operate near the edge of what compiler optimizers can do. So for them, specialization being initially unsafe would not be an absolute deal-breaker.

4 Likes

I would very much love to see this. For example, given a pair of AsyncRead/AsyncWrite implementations, if both are file descriptors there are ways to speed up copying between them on Linux using specialized syscalls. You can speed it up even more if you know what kind of file descriptors they are (pipes, sockets, or regular files). This is something specialization would allow.
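A hypothetical sketch of that shape, using the sync Read/Write traits for brevity (note: "default fn" is unstable specialization syntax, and SpecCopy/kernel_copy are invented names for illustration, not a real API):

```rust
// Hypothetical sketch only: `default fn` is unstable specialization
// syntax, and SpecCopy/kernel_copy are invented names, not a real API.
use std::io::{self, Read, Write};
use std::os::fd::{AsRawFd, RawFd};

trait SpecCopy<W: Write> {
    fn spec_copy(&mut self, writer: &mut W) -> io::Result<u64>;
}

impl<R: Read, W: Write> SpecCopy<W> for R {
    // Fallback: the plain userspace read/write loop.
    default fn spec_copy(&mut self, writer: &mut W) -> io::Result<u64> {
        io::copy(self, writer)
    }
}

impl<R: Read + AsRawFd, W: Write + AsRawFd> SpecCopy<W> for R {
    // Fast path: both ends are file descriptors, so the kernel can move
    // the bytes directly (copy_file_range, splice, sendfile, ...).
    fn spec_copy(&mut self, writer: &mut W) -> io::Result<u64> {
        kernel_copy(self.as_raw_fd(), writer.as_raw_fd())
    }
}

// Invented helper standing in for the syscall-based implementation.
fn kernel_copy(_src: RawFd, _dst: RawFd) -> io::Result<u64> {
    unimplemented!()
}
```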

The problem is that specialization causes action at a distance. If you have an unsafe specialization trait and implement it conditionally on a safe trait, then your safe trait becomes a source of unsoundness whenever unsafe code somewhere depends on the distinction. The lifetime-dependence is usually tricky to weaponize in practice, but for soundness the bar is higher than that: exploitation must be impossible, not merely difficult.
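For reference, this is roughly the well-known sketch of how a lifetime-dependent specializing impl goes wrong (nightly-only; Describe and describe_any are illustrative names):

```rust
#![feature(specialization)]

trait Describe {
    fn describe(&self) -> &'static str;
}

impl<T> Describe for T {
    default fn describe(&self) -> &'static str { "anything" }
}

// A specializing impl that depends on a lifetime.
impl Describe for &'static str {
    fn describe(&self) -> &'static str { "a static string" }
}

fn describe_any<T: Describe>(x: T) -> &'static str {
    x.describe()
}

fn call<'a>(s: &'a str) -> &'static str {
    // Impl selection for the monomorphized describe_any::<&'a str>
    // happens after lifetimes are erased, so the &'static str impl can
    // be chosen even when 'a is not 'static. Here that merely picks the
    // wrong string; with associated types the same hole can be
    // escalated to an illegal lifetime extension, i.e. real UB.
    describe_any(s)
}
```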

A subset of specialization could be stabilized. But that would lock us into the current syntax, which might not be sufficient for lattice specialization.

Related thread: Idea: making traits specialization safe over an edition boundary

1 Like

I think the syntax issue is minimal. We can always change syntax across an edition later if necessary.

I have a feeling that specialization in Rust is unnecessarily tied to trait implementation defaults.

In libstd, every use of specialization seems to want an if/else construct (if it's a Vec do this, else do the generic thing), but instead has to create a one-off SpecFoo trait and use method dispatch for the "if" and "else" implementations.
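Concretely, the one-off helper-trait shape looks roughly like this (nightly-only min_specialization; SpecFoo and its method are invented names standing in for std's internal helpers such as SpecExtend):

```rust
#![feature(min_specialization)]

// One-off helper trait: it exists only to express
// "if it's a Vec, do this; else do the generic thing".
trait SpecFoo {
    fn foo(&self) -> usize;
}

// The "else" branch: generic fallback.
impl<T> SpecFoo for T {
    default fn foo(&self) -> usize { 0 }
}

// The "if it's a Vec" branch.
impl<T> SpecFoo for Vec<T> {
    fn foo(&self) -> usize { self.capacity() }
}
```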

How about adding specialization in the form of a downcast function? One that works statically, without Box or Any bound.
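On stable Rust, the closest approximation is dynamic and requires the Any bound (i.e. T: 'static), which is exactly the restriction a static downcast would lift. A minimal sketch of that stable approximation:

```rust
use std::any::Any;

// Stable approximation of a "downcast-style" specialization: the Any
// bound (hence T: 'static) is required today; a static downcast would
// resolve this branch at compile time and drop that restriction.
fn describe<T: Any>(value: &T) -> &'static str {
    let value: &dyn Any = value;
    if let Some(v) = value.downcast_ref::<Vec<u8>>() {
        // Specialized branch: we have a concrete &Vec<u8> here.
        let _ = v.len();
        "specialized Vec<u8> path"
    } else {
        // Generic fallback branch.
        "generic path"
    }
}

fn main() {
    assert_eq!(describe(&vec![1u8, 2]), "specialized Vec<u8> path");
    assert_eq!(describe(&"hello"), "generic path");
    println!("ok");
}
```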

3 Likes

A lot of those aren't strictly necessary; we do them because specialization isn't stable and (iirc) because they might cause inference problems. Otherwise one could just slap default on the public trait impls, add additional specializing ones, and even have separate documentation for the specializing impls.

Those internal specializing trait impls (I call them specialization helpers) are not to be confused with specialization traits like InPlaceIterable or TrustedStep; the latter are not simple if/else because they're open and can have implementations in multiple crates.

Some of what looks like if-else is also due to limitations of min_specialization. With full specialization we'd likely use more open traits in some of those cases too. E.g. the io::copy specializations could be made accessible to users so std could pierce through their wrappers when they opt into that.

It seems to me that action at a distance is already a property of both unsafe and the trait system in stable Rust:

  • Unsafe causes action at a distance, because UB in badly written/encapsulated unsafe code can trigger symptoms in seemingly unrelated safe code.
  • Traits cause action at a distance because part of their purpose is to let people from other crates inject code into the generics that you write.

This is why in stable Rust, unsafe code authors already operate under the constraint that they should not rely on non-unsafe traits being implemented correctly for safety.

I get the impression that a generalization of this existing principle could encompass the scenario that you are describing, but I suspect I may just be misunderstanding it completely!

Thanks for the pointer! :heart:

Take BufReader<R: ?Sized> for example and note that it's not R: Read. The Read bound is only on the impl block.

So any useful specialization will have to add an impl SpecHelper for BufReader<R> where R: Read + ActualSpecializationTrait bound. The latter trait is about the optimization; the Read bound is just the basic necessity to make the whole thing be Read. Sure, one could use where Self: Read instead, but then you lose access to a lot of methods that you're going to need in practice.

Now, by having a specializing impl conditional on a safe trait, you get all the soundness holes around lifetime erasure that the current specialization implementation has. And you can't require it to be unsafe, because Read is already stable.

2 Likes

I see, thanks for the clarification! I thought this was about ActualSpecializationTrait needing to be an unsafe trait, but indeed, if the mere bound on Read causes the current specialization issues, this is a non-starter.

Not strictly. If impl for BufReader<R> where R: Read + SpecMarker specializes default impl for BufReader<R> where R: Read, then that should be sound. The required property is that the bound defining the specialized subset is specialization safe (not potentially lifetime-dependent).

Although in fairness, this isn't necessarily useful for the specialization tricks you typically want to do, because the specialization would generally want a base case of impl for R where R: Read, and the coherence rules for impl for BufReader<R> where R: Read safely specializing that would be dangerously subtle and probably would not hold.
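In hypothetical code (specialization-style syntax; SpecHelper and SpecMarker are invented names), the sound shape sketched above would be:

```rust
// Hypothetical sketch, not a real API.
use std::io::{self, BufReader, Read};

trait SpecMarker {}

trait SpecHelper {
    fn fill_fast(&mut self) -> io::Result<()>;
}

impl<R: Read + ?Sized> SpecHelper for BufReader<R> {
    // Base case: generic path.
    default fn fill_fast(&mut self) -> io::Result<()> { Ok(()) }
}

// Sound only if whether a type implements SpecMarker can never depend
// on lifetimes, i.e. the specialized subset is "specialization safe".
impl<R: Read + SpecMarker + ?Sized> SpecHelper for BufReader<R> {
    fn fill_fast(&mut self) -> io::Result<()> { Ok(()) }
}
```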

1 Like

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.