@Gankra did a pretty good job in this comment: Collection Traits, Take 2 - #15 by Gankra where basically the complexities that the different operations should have are clear, and some operations are separated into their own traits. For example, `Insert` is its own trait, and `Map` requires it, while `Seq` has its own `insert` method which is different because it takes an index. However, `Seq` could also implement `Insert` with the same complexity as `Map`, because without an index that would just mean "insert somewhere", and `Seq`s can typically insert somewhere in O(1): `Vec` and `Deque` can insert at the end in amortized O(1), and any List can insert at the front in O(1). If the `Insert` trait specified that `Insert::insert` has at worst O(log N) complexity, that trait could be implemented for a lot of collections, but it could not be implemented for, say, a sorted vector, and thus a sorted vector could not implement `Map`. That would kind of match my expectations: the proper way to use a sorted vector when inserting in a loop is to bulk insert at the end + sort + unique, which is O(N log N), and that API is necessarily going to clash with the API of collections that can insert in O(1) or O(log N) and thus can just insert the elements one by one in a loop.
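As a rough sketch of that shape (these names and signatures are my own placeholders, not the actual traits from the linked proposal):

```rust
trait Insert {
    type Item;
    /// Position-less insertion: "put this element somewhere".
    fn insert(&mut self, value: Self::Item);
}

/// Associative collections require the position-less insert.
trait Map: Insert { /* lookup, removal, ... */ }

/// Sequences additionally get a positional insert, which may
/// legitimately be O(N) for a `Vec`.
trait Seq: Insert {
    fn insert_at(&mut self, index: usize, value: Self::Item);
}

// A `Vec` can satisfy the position-less `Insert` in amortized O(1)
// by simply pushing at the end.
impl<T> Insert for Vec<T> {
    type Item = T;
    fn insert(&mut self, value: T) {
        self.push(value);
    }
}
```

Under a bound like "at worst O(log N)" on `Insert::insert`, a sorted vector simply would not get an `Insert` impl, and therefore no `Map` impl either, which matches the expectation above.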
So in my opinion, just adding docs to @Gankra's proposal specifying a worst-case complexity that implementations have to ensure would be alright. If some implementation violates that, well, then it's a bad implementation, but no unsafety can result from that.
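Concretely, I imagine the contract living in a doc comment, roughly like this (a sketch, not the proposal's actual wording):

```rust
trait Insert {
    type Item;

    /// Inserts `value` somewhere in the collection.
    ///
    /// # Complexity
    ///
    /// Implementations must run in at worst O(log N) time, where N is
    /// the number of elements already in the collection. A slower
    /// implementation is a bug, but it can never cause unsafety.
    fn insert(&mut self, value: Self::Item);
}
```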
Enforcing this at the type level is necessarily going to complicate @Gankra's proposal. One thing we could do is, instead of having just one `Insert` trait, have many traits for different complexity classes:
```rust
mod worst_1 {
    // ^ replace worst_1 with a suitable name,
    // I am bad at naming
    pub trait Insert {
        type Item;
        fn insert(&mut self, value: Self::Item);
    }
}
mod worst_logN { pub trait Insert { type Item; fn insert(&mut self, value: Self::Item); } }
mod worst_N { pub trait Insert { type Item; fn insert(&mut self, value: Self::Item); } }
```
and then building the "collection" traits on top of these, for example:
```rust
// So a Map's insert is required to be at worst O(log N)
trait Map: worst_logN::Insert {}
trait Seq: worst_1::Insert {}
```
Finally, we could twist this further such that a type that implements `worst_logN::Insert` automatically also implements `worst_N::Insert`, so that if you have code that requires `worst_N::Insert` you can pass it `Map`s and `Seq`s without issues. Blanket impls could be one way to do this, but I always felt that designs with many blanket impls turn out brittle.
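For concreteness, such a blanket impl could look roughly like this (building on the hypothetical `worst_*` modules sketched above):

```rust
// Anything that inserts in at worst O(log N) trivially also satisfies
// the looser O(N) bound, so forward the implementation.
impl<C: worst_logN::Insert> worst_N::Insert for C {
    type Item = <C as worst_logN::Insert>::Item;
    fn insert(&mut self, value: <C as worst_logN::Insert>::Item) {
        worst_logN::Insert::insert(self, value);
    }
}
```

A second blanket impl from `worst_1::Insert` to `worst_logN::Insert` would complete the chain.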
Personally, I don't know if going this far is worth it. In particular, this lets you do nonsensical things like implementing `Insert` for a collection multiple times, once for each of the different complexity guarantees. Maybe adding blanket impls could make this an error, but specialization would probably make it possible again.
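A tiny illustration of that concern, reusing the hypothetical modules and blanket impl sketched above:

```rust
use std::collections::BTreeSet;

// A tree-based set can honestly claim the O(log N) class.
impl<T: Ord> worst_logN::Insert for BTreeSet<T> {
    type Item = T;
    fn insert(&mut self, value: T) {
        BTreeSet::insert(self, value);
    }
}

// Nothing in principle stops someone from *also* writing a direct
// `worst_N::Insert` impl for `BTreeSet`, claiming two complexity
// classes at once. The blanket impl above happens to turn that into a
// coherence error (E0119), but specialization would presumably make
// it possible again.
```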
I like @Gankra's proposal a lot. It is simple, and adding some comments saying which complexity implementors should uphold when implementing the traits would be enough for me. If overspecification is a concern, we don't really have to make the bounds too tight either: I'd be fine with not using O(1) anywhere and making O(log N) the tightest complexity class that we specify. That would already allow many traits to be implemented for both sequences and associative collections, and honestly the difference between O(1) and O(log N) is really blurry in practice, to the point that I at least don't care that much about it (O(1) gives 1, while O(log 1000000) gives about 6 with a base-10 log, which is still really close to 1 across the whole range of inputs [0, 1000000]).