So therefore adding a type parameter `W: RcWeakMode = WithWeak` to `Rc<T: ?Sized>`, i.e. `Rc<T: ?Sized, W: RcWeakMode = WithWeak>`, should NOT break backwards compatibility… right?
For now at least, code like `let r: Rc<u8> = Rc::new(0);` would not break, but `let r = Rc::new(0);` would:

```rust
use std::marker::PhantomData;

struct Rc<T, W = i32>(T, PhantomData<W>);

impl<T, W> Rc<T, W> {
    fn new(t: T) -> Self {
        Rc(t, PhantomData)
    }
}

fn main() {
    let rc: Rc<usize> = Rc::new(0); // compiles
    let rc = Rc::new(0); // error: type annotations needed
}
```
That is super counter-intuitive… and means that defaults do influence inference, no?
No – the break is that `Rc::new` has a new type parameter. How to not have a break? Don’t do that…
So `Rc::new(1)` would try to unify `Rc<T, W>`, and so it unifies `T == usize` but cannot unify `W == WithWeak`, since it “does not mean ‘infer U and fall back to i32’”?
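To see the distinction concretely, here is a minimal sketch reusing the toy `Rc` from the snippet above: the default is expanded wherever the type is written out in type position, but it is never consulted as an inference fallback.

```rust
use std::marker::PhantomData;

struct Rc<T, W = i32>(T, PhantomData<W>);

impl<T, W> Rc<T, W> {
    fn new(t: T) -> Self {
        Rc(t, PhantomData)
    }
}

fn main() {
    // In type position, `Rc<usize>` is sugar for `Rc<usize, i32>`:
    let _a: Rc<usize> = Rc::new(0);
    // ...but `W` is still a free parameter, so any annotation unifies:
    let _b: Rc<usize, u8> = Rc::new(0);
    // With no annotation there is nothing to unify `W` against, and
    // the default is not used as a fallback:
    // let _c = Rc::new(0); // error: type annotations needed
}
```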
Is there a particular reason that type inference works this way w.r.t. default parameters? It still feels counter-intuitive and unergonomic…
So I guess modifying `Rc` by adding a type parameter is out of the question then…?
You could leave `Rc::new` parameterized only on `T`, forcing `WithWeak`, just like `HashSet::new` always uses `RandomState`. A separate constructor can create the `NoWeak` variant. Methods like `Rc::downgrade` can also be implemented only for `Rc<T, WithWeak>`.
```rust
impl<T> Rc<T, WithWeak> {
    fn new(value: T) -> Self { ... }
    fn downgrade(this: &Self) -> Weak<T> { ... }
}

impl<T> Rc<T, NoWeak> {
    fn new_never_weak(value: T) -> Self { ... }
}

impl<T, W: RcWeakMode> Rc<T, W> {
    // common methods
}
```
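Here is a compilable sketch of that layout, with placeholder marker types and a stub `Weak` (the `RcWeakMode`, `WithWeak`, and `NoWeak` names are illustrative only). Because each constructor lives on an impl that pins `W`, plain `Rc::new(0)` keeps inferring exactly as it does today:

```rust
use std::marker::PhantomData;

struct WithWeak;
struct NoWeak;

trait RcWeakMode {}
impl RcWeakMode for WithWeak {}
impl RcWeakMode for NoWeak {}

// Toy stand-ins for the real types, just enough to show the impl split.
struct Rc<T, W: RcWeakMode = WithWeak>(T, PhantomData<W>);
struct Weak<T>(PhantomData<T>);

impl<T> Rc<T, WithWeak> {
    // `W` is fixed by this impl, so inference never has to guess it.
    fn new(value: T) -> Self {
        Rc(value, PhantomData)
    }
    fn downgrade(_this: &Self) -> Weak<T> {
        Weak(PhantomData)
    }
}

impl<T> Rc<T, NoWeak> {
    fn new_never_weak(value: T) -> Self {
        Rc(value, PhantomData)
    }
}

impl<T, W: RcWeakMode> Rc<T, W> {
    // Common methods go here.
    fn get(&self) -> &T {
        &self.0
    }
}

fn main() {
    let a = Rc::new(0);            // infers Rc<i32, WithWeak>, no annotation needed
    let _w = Rc::downgrade(&a);
    let b = Rc::new_never_weak(1); // infers Rc<i32, NoWeak>
    assert_eq!(*a.get() + *b.get(), 1);
}
```

Note that `Rc::new(0)` still infers with no annotation, because `new` is only found on the `WithWeak` impl, exactly as `HashSet::new` pins `RandomState`.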
Of course you can do that! Stupid me!
It seems that the consensus is to extend `Rc`, `Arc` with new parameters, based on a trait, set by default to work as `Rc`, `Arc` do today.
I guess this is ready to move over to the RFC stage… perhaps I can get it merged before September is over…
Oops, you're right. Sorry for the confusion.
This seems to me like the kind of micro-optimization that only matters in situations where a user is already profiling and finds they need it. I don’t see why this can’t be solved by an external crate.
Even adding a type parameter has a cost, as it increases the complexity of the public interface of the Rc type.
It could be implemented in an external crate were it not for the large number of unstable features required to get it working on stable Rust, so such an external crate will not be available for stable Rust for some time.
The standard library’s monopoly on the usage of unstable features is a bit frustrating at times.
It’s only `CoerceUnsized` that you’d really miss in stable Rust, I think? Is there something else?
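For reference, a nightly-only sketch of what `CoerceUnsized` buys you (`MyRc` is a made-up stand-in): without this impl, a user-defined pointer type cannot participate in unsizing coercions such as `MyRc<[u8; 3]>` to `MyRc<[u8]>`.

```rust
#![feature(coerce_unsized, unsize)]

use std::marker::Unsize;
use std::ops::CoerceUnsized;

// Minimal pointer-like wrapper; `?Sized` so it can hold slices/trait objects.
struct MyRc<T: ?Sized> {
    ptr: *const T,
}

// Mirrors the unstable impl that std's Rc uses to enable unsizing.
impl<T: ?Sized + Unsize<U>, U: ?Sized> CoerceUnsized<MyRc<U>> for MyRc<T> {}

fn main() {
    let array: &[u8; 3] = &[1, 2, 3];
    let sized = MyRc { ptr: array as *const [u8; 3] };
    let _coerced: MyRc<[u8]> = sized; // coercion requires CoerceUnsized
}
```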
Oh sigh, `NonZero` too.
And:

- `allocator_api` for `Layout` and `Heap.{alloc, dealloc}`, used for `shared_from_slice`.
- `shared`, used for wrapping `RcBox`.
- `specialization`, used for specializing for `Copy` in `shared_from_slice`.
- `optin_builtin_traits`, used for `!Sync`, `!Send`.
- `unsize`, used for `Unsize`.
I don’t think `NonZero` is required as long as you have `Shared`.
Technically `!Sync`, `!Send` follows from using `Shared`, so that one is redundant.
I don’t think all of these are necessary (for example you can make a type `!Send` and `!Sync` with a `PhantomData<*const ()>`; `shared` is for a type you could write yourself, assuming you could get around the other features it uses).
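For instance, a minimal stable-Rust sketch of that `PhantomData` trick (the type name is made up): raw pointers are neither `Send` nor `Sync`, so embedding one via `PhantomData` opts the containing type out of both, at no runtime cost.

```rust
use std::marker::PhantomData;

// `*const ()` is neither Send nor Sync, and PhantomData propagates
// that to the containing type without storing an actual pointer.
struct NotThreadSafe {
    value: u8,
    _marker: PhantomData<*const ()>,
}

fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn main() {
    let x = NotThreadSafe { value: 0, _marker: PhantomData };
    println!("{}", x.value);
    // Both of these fail to compile:
    // assert_send::<NotThreadSafe>();
    // assert_sync::<NotThreadSafe>();
}
```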
The basics were possible in Rust-1.0-era stable Rust: https://docs.rs/rc/0.1.1/rc/ ; it was released when Rust 1.1 was the most recent stable version. It has most of the basic features, and `NonZero` is actually not that important.
Fair enough, but I don’t see how you can get around:

- `allocator_api`
- `specialization`
- `coerce_unsized` / `unsize`
In any case, uses of `Rc` / `Arc` fall under two categories: a) caching, and b) modelling tree-like structures. Currently, `Rc` / `Arc` is a zero-cost abstraction for the latter case, but not for the former. I’d argue that a basic primitive for the former should be in the standard library as well.