If you want to insert elements into the middle of a vector (say, at position n), you can do it in linear time as well, for example by using an iterator API, roughly like xs.take(n).chain(new_elements).chain(xs.drop(n)).collect(). Each operation that processes elements left to right and amortizes to O(1) with a linked list will also amortize to O(1) with Vec. Not sure which one will be more performant, but at least the number of allocations in Vec’s case will be smaller.
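A concrete sketch of that iterator chain on stable Rust (using `skip` in place of the hypothetical `drop`, since `Iterator` has no such method):

```rust
// Build a new Vec containing xs[..n], then the new elements, then xs[n..].
// Single pass, O(len(xs) + len(new_elements)) time, one allocation.
fn insert_middle<T: Clone>(xs: &[T], n: usize, new_elements: &[T]) -> Vec<T> {
    xs.iter()
        .take(n)
        .chain(new_elements.iter())
        .chain(xs.iter().skip(n))
        .cloned()
        .collect()
}
```

The `collect` pre-sizes the allocation from the iterator's size hint, which is part of why the allocation count stays low.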
To be asymptotically faster than Vec, you’ll need a sequence of operations like “move ±n elements from the current position and do an op”; otherwise, if you process the list in a single direction, you can do the same with Vec by constructing a new one alongside.
EDIT: to clarify, I am not trying to say that using cursors with a linked list is niche; my claim is that the problems which have better big-O complexity when solved with linked lists are rare. In particular, just traversing a list in a single direction while adding and removing elements is not such a problem.
Maybe it is subtle but AFAICT this operation is O(N + M) where N is the number of elements in the sequence and M the number of elements that are inserted.
Let’s make the problem more concrete: Given a sequence with O(N) elements, insert M elements from a Stream [*] into the sequence in O(M).
If you use vectors, and insert the elements one by one, you get O(N * M) complexity, because for each element from the Stream that you insert, you have to shift N vector elements. Here it is much better to do what your iterator code is doing: insert M elements into a separate vector which is amortized O(1) per element, and then you insert that vector into the original one once the Stream has been exhausted. This results in O(N + M) complexity and requires O(M) extra storage.
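A sketch of that buffer-then-splice approach (`insert_stream` and its signature are illustrative; `Vec::splice` is the real std method doing the single O(N + M) shift):

```rust
// Drain the stream into a temporary buffer (amortized O(1) per element,
// O(M) extra storage), then splice it in with one shift of the tail.
fn insert_stream<T>(xs: &mut Vec<T>, pos: usize, stream: impl Iterator<Item = T>) {
    let buffer: Vec<T> = stream.collect(); // O(M)
    xs.splice(pos..pos, buffer);           // O(N + M): one shift, not M shifts
}
```

Inserting one by one with `Vec::insert` instead would shift the tail once per element, giving the O(N * M) behaviour described above.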
With a doubly-linked list, you have to find the middle once, which is O(N), but then all insertions are O(1). So you get O(N+M) complexity. However, depending on how you see things, finding the middle is amortized over the M insertions, so you could also argue that inserting is amortized O(1), and this takes O(M) time for a list. Also, if you already have a pointer to the middle stored somewhere, this is unarguably an O(M) operation. So for a list, it makes a bit of a difference whether you have a pointer to the insertion point lying around or not.
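Rust’s `LinkedList` cursor API is unstable, but the same cost structure can be demonstrated on stable with `split_off` and `append` (the helper name here is made up):

```rust
use std::collections::LinkedList;

// `split_off` walks to the insertion point once (O(N)); each push_back is
// O(1); `append` re-links the saved tail in O(1).
fn list_insert_at<T>(list: &mut LinkedList<T>, pos: usize, items: impl Iterator<Item = T>) {
    let mut tail = list.split_off(pos); // O(N): find the middle
    for item in items {
        list.push_back(item);           // O(1) per inserted element
    }
    list.append(&mut tail);             // O(1): splice the tail back on
}
```

With a cursor (or a stored pointer to the middle), the O(N) `split_off` walk disappears and only the O(M) insertions remain.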
Using the Stream here is not necessary. If you have the elements in a vector already, inserting into a vector is still O(N + M), but again inserting into a list can be O(N+M) or O(M) depending on whether you have a pointer to the middle around or not. What the Stream shows is that with a list you don’t need extra storage. You also don’t need multiple passes to implement this operation, which can be important, for example, if concurrency is involved (although in that case a “normal” list won’t work).
Also, things change even further if instead of having the elements to insert in a vector, you already have them in a list, because then inserting into a vector is still O(N + M) but inserting them into another list is O(N), or if you have a pointer to the insertion point lying around: O(1).
Or in other words, if you can a priori have a pointer to the insertion point and reuse it, inserting into a vector is always O(N + M), but inserting into a list is either O(M) or O(1). This can be, and often is, much faster than inserting into a vector, particularly if N is large and M is small. OTOH if you don’t have a pointer to the insertion point, then this becomes O(N + M) and O(N) for the list, in which case the fastest solution will depend on the “list traversal cost” vs the “vector memcpy cost”.
The take away here isn’t that list can be faster than vector, or that vector is often/generally faster than list, but that for lists having a pointer to the insertion/splicing/erasing point makes a huge difference performance wise. If a list API does not support this, then you are better off just using a vector because this is the main advantage of lists.
[*] That is, the number of elements to be inserted is a priori unknown, one only knows M once the Stream has been exhausted.
Yep, I should have stated explicitly that linear applies to both input parameters.
With a doubly-linked list, you have to find the middle once, which is O(N), but then all insertions are O(1) (or amortized O(1) if you count the find + insertions together). So you get O(M) complexity with O(1) storage.
Hm, that’s still O(N + M), N for finding, M for insertions.
The situation for allocation is tricky to analyze precisely. If you are willing to give up on memcpy, you can append the M elements to the end and then rearrange the vector in place in linear time (and given that vectors often have handy spare capacity, you might be able to pull off some nice trick with it).
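One way to sketch that append-and-rearrange idea is with `slice::rotate_right`, which rotates in place in time linear in the rotated slice, so the whole operation stays O(N + M) with no temporary buffer:

```rust
// Append the new elements at the end (amortized O(1) each, reusing spare
// capacity), then rotate them into position in place.
fn insert_via_rotate<T>(xs: &mut Vec<T>, pos: usize, items: impl Iterator<Item = T>) {
    let old_len = xs.len();
    xs.extend(items);          // append M elements at the end
    let m = xs.len() - old_len;
    xs[pos..].rotate_right(m); // move them to `pos`, linear in the tail
}
```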
So I still maintain that, complexity-wise, both data structures are the same for this particular problem. I won’t make any claims regarding actual wall-clock performance.
Oh sorry, I posted too early and was still editing the post.
Hm, that’s still O(N + M), N for finding, M for insertions.
Yeah, I think this depends on how you want to see it. Strictly speaking this is O(N + M), but I also do find reasonable the argument that inserting into the list is amortized O(1) if you have to find the insertion point (although maybe this is wrong? I can’t recall exactly when we are allowed to say that an operation amortizes over another).
So I still maintain that complexity wise both data structures are the same for this particular problem
They are the same as long as you don’t have a pointer to the middle available. If I give you a pointer to the middle, the vector solution is still O(N + M) but the one using a list is O(M) or O(1), depending on the form in which I provide the elements to insert. That can make a big difference if N is much larger than M (these are all orders of magnitude anyways).
Complexity aside, a nice intrusive linked list implementation provides more functionality than something like Vec gives you out of the box. Instead of needing a separate iterator type, you can remove an item from a list with just a pointer to it, a pattern that fits naturally with an object-oriented style:
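A minimal sketch of that remove-by-pointer pattern (all names here are illustrative; a real intrusive list would be generic and embed the links via a member, and in Rust this currently requires unsafe code):

```rust
use std::ptr;

// The prev/next links live inside the element itself, so removing an
// element needs only a pointer to it -- no iterator, no index lookup.
struct Node {
    value: i32,
    prev: *mut Node,
    next: *mut Node,
}

struct List {
    head: *mut Node,
}

impl List {
    fn new() -> Self {
        List { head: ptr::null_mut() }
    }

    // Link an existing node at the front; the list itself never allocates.
    unsafe fn push_front(&mut self, node: *mut Node) {
        (*node).prev = ptr::null_mut();
        (*node).next = self.head;
        if !self.head.is_null() {
            (*self.head).prev = node;
        }
        self.head = node;
    }

    // Remove by pointer in O(1): just re-link the neighbours.
    unsafe fn remove(&mut self, node: *mut Node) {
        if (*node).prev.is_null() {
            self.head = (*node).next;
        } else {
            (*(*node).prev).next = (*node).next;
        }
        if !(*node).next.is_null() {
            (*(*node).next).prev = (*node).prev;
        }
    }
}
```

The caller owns the nodes’ storage, which is exactly why this works without an allocator.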
Similarly, you can use a pointer as an insertion point, although in my experience that’s less often needed.
It’s not actually difficult to implement this on top of Vec or Vec-of-Vec: you just need to have each element store its own index, and adjust the indices whenever you move elements around. (This might add overhead but won’t change asymptotic complexity.) But I’ve never seen a library that does this automatically, and doing it manually is a pain. In higher level languages I’ve often just used a hash set for this purpose, which is a bit wasteful but similarly allows constructs like set.remove(object).
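A sketch of that store-your-own-index pattern on top of Vec (types and names are made up for illustration):

```rust
// Each element records its own slot, so callers can remove by "handle".
// The index fix-up adds overhead but not asymptotic cost: removal is O(N)
// either way, because of the shift.
struct Item {
    id: u32,
    index: usize, // the element's own position, kept up to date
}

struct IndexedVec {
    items: Vec<Item>,
}

impl IndexedVec {
    fn push(&mut self, id: u32) -> usize {
        let index = self.items.len();
        self.items.push(Item { id, index });
        index
    }

    // Order-preserving removal: shift, then patch the moved elements.
    fn remove(&mut self, index: usize) -> u32 {
        let item = self.items.remove(index);
        for (i, it) in self.items.iter_mut().enumerate().skip(index) {
            it.index = i;
        }
        item.id
    }
}
```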
Intrusive linked lists also never allocate memory; if you’re in an environment where allocation can fail, having one less error case to think about is often much more important than any small difference in runtime performance (one way or the other). They also work if you have no allocator at all.
Finally, intrusive lists are less complex to implement by hand than something like Vec-of-Vec, which can matter if you’re worried about code size, or if (like me) you just feel like writing everything from scratch.
As for performance:
If you are doing anything O(N) with linked lists (other than iterating over the list), you’re doing it wrong.
If N is large, an intrusive linked list will probably be slower than a Vec-based approach (but not asymptotically slower) due to cache effects. But if N is small, it’s probably faster – for example, accessing the first element is a single indirection (list.first) rather than two, and you save the cost of allocating/deallocating the elements array.
All in all, from a language-agnostic perspective, I think there are compelling use cases for intrusive linked lists. They’re less compelling in Rust where they interact poorly with the borrow checker (for now), but they still have their uses.
However, I’ve been referring exclusively to intrusive lists. Non-intrusive lists have almost none of the above benefits. They don’t support removing by pointer; they rely on an allocator; and if the element type needs to be a pointer (e.g. because you’re using Rc), then they have more indirections and more allocations than you’d get with Vec. (If you don’t have experience with intrusive lists, I can understand why linked lists as a whole seem ‘obviously’ useless!)
So, in my opinion, by all means deprecate std::LinkedList. But… maybe hold out hope that someday we’ll find a way to make the borrow checker expressive enough to handle more kinds of data structures safely, including intrusive linked lists.
The simple implementation would be a Vec<Box<_>>, since the items are stored in LRU order rather than sorted order.
For performance reasons, uluru stores all its data inline in an array, without any heap allocations or pointer indirection. (It’s used in very hot code in Firefox and Servo.) This has the side benefit that uluru does not depend on std.
Rayon implements its traits for LinkedList, along with all of the other std collections, so I wouldn’t really count that as a “use”. But we do use it for real in the generic collect and extend implementations, where it’s helpful to have very fast concatenation of two lists for parallel reduction. We have benchmarks of different approaches in rayon-demo, and you can see the last time I measured it here:
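This is not Rayon’s actual code, just the property it exploits: `LinkedList::append` splices one list onto another in O(1), which makes it a cheap combining step for parallel reduction:

```rust
use std::collections::LinkedList;

// Combine two partial results in O(1), regardless of either list's length.
// With Vec, the same step would cost O(len(right)) to copy elements over.
fn reduce_lists<T>(mut left: LinkedList<T>, mut right: LinkedList<T>) -> LinkedList<T> {
    left.append(&mut right);
    left
}
```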
I’ve wanted linked lists in Rust but never std::collections::LinkedList. Instead, I’ve wanted a linked list trait so that I could roll my own index-based linked list inside some arrangement of other data structures, so vaguely a linked list in custom arenas.
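A sketch of what such an index-based list might look like, with a Vec as the arena (all names are hypothetical):

```rust
// Nodes live in a Vec "arena" and link to each other by index instead of
// by pointer, so there is no unsafe code and no per-node allocation.
struct ArenaList<T> {
    nodes: Vec<(T, Option<usize>)>, // (value, index of the next node)
    head: Option<usize>,
}

impl<T> ArenaList<T> {
    fn new() -> Self {
        ArenaList { nodes: Vec::new(), head: None }
    }

    fn push_front(&mut self, value: T) -> usize {
        let idx = self.nodes.len();
        self.nodes.push((value, self.head));
        self.head = Some(idx);
        idx // the index doubles as a stable handle to the node
    }

    fn to_vec(&self) -> Vec<&T> {
        let mut out = Vec::new();
        let mut cur = self.head;
        while let Some(i) = cur {
            out.push(&self.nodes[i].0);
            cur = self.nodes[i].1;
        }
        out
    }
}
```

Because handles are plain `usize` values, they sidestep the borrow-checker problems that pointer-based lists run into.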
As many pointed out already, a linked list like std::LinkedList or std::list is almost never the right choice. This is why either of them is very rarely used. What is often a good choice is an intrusive linked list. There are many good points above on when an intrusive linked list is a good choice. There is a very important one missing: when an ordered collection of trait objects is needed (or more generically, a polymorphic ordered collection), an intrusive doubly linked list is quite often the right choice. And since neither std::LinkedList nor std::list supports polymorphic T’s, we get to roll our own.
To me, it shouldn’t be deprecated without first creating a List trait and/or impl that is better and actually meets the needs mentioned (like polymorphic T’s). The docs already recommend against using it, since there are better options.
Currently, none of the existing implementations on crates.io is good enough for all use cases. It is unclear to me how they could be, since trade-offs like intrusive vs non-intrusive, skip lists vs singly vs doubly linked, arena-backed, with or without cursors, etc. appear irreconcilable. I am also extremely skeptical that coming up with a List trait that abstracts over all of those while remaining useful is even possible.
Basically, when you want a linked list, you want the linked list that’s good for your problem, which is different from the linked lists that are good for all other problems. That’s just the opposite of “genericity”.
If the current std::LinkedList is good enough for you, you can still use it after deprecation just fine, or use any of the crates on crates.io, or use VecDeque if it’s good enough, etc.
Right, I understand that. I’m just worried about the perception among the non-Rust community and those considering adopting Rust. Does seeing things deprecated from the stdlib without a replacement create a perception of instability? I realize that the justification for deprecating it is right, but it would be good to have “something” (if that is possible) before deprecating. If the consensus is that there can be no useful “something” in its place, then that is fine.