Presumably both versions would exist and you would just use the one matching the key you’re starting with, no?
Yes, that's correct, and that's why I immediately suggested we keep both:
That duplicates a rather large API surface area to only handle one of several use cases that the entry API doesn't service. For example, `Borrow` isn't sufficient to allow allocation-free lookup via composite keys, and `ToOwned` doesn't handle cases where the computation of the owned value is fallible.
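To make the composite-key point concrete, here is a small illustration (the key type and values are invented for the example, not taken from the discussion):

```rust
use std::collections::HashMap;

// With Borrow, a HashMap<String, V> can be queried with a plain &str. But
// for a composite key there is no owned type that can Borrow as
// (&str, &str), since Borrow::borrow must return a reference to data the
// key already owns. Every lookup is therefore forced to allocate:
fn lookup<'a>(map: &'a HashMap<(String, String), u32>, a: &str, b: &str) -> Option<&'a u32> {
    map.get(&(a.to_owned(), b.to_owned())) // two allocations per lookup
}
```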
Trying to introduce a borrowed path for Entry was litigated pretty heavily in https://github.com/rust-lang/rfcs/pull/1769 and there were plenty of nasty tradeoffs that people weren’t very happy with.
Adding raw iterator support would also be very useful, for example for experimenting with https://github.com/rust-lang/rfcs/issues/1893
I don't know… I think half of the tradeoffs from that thread go away if you just give up on using the same method for borrowed and owned keys.
The other half are based on the `ToOwned` trait itself being too limited, due to supporting only a single owned type for any reference type. That does sound messy. On the other hand, it's a pretty fundamental issue (especially if custom allocators for standard library types ever become a thing), and punting on `entry_ref` won't make it go away. Why not solve it now?
But I guess that deserves to be its own RFC rather than tagging along with this one.
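For reference, the single-owned-type constraint being discussed is visible in the trait's shape (abbreviated from std, with `clone_into` omitted):

```rust
use std::borrow::Borrow;

// Abbreviated std definition: Owned is a single associated type, so a
// borrowed type like str can only ever produce String; there is no room
// for, say, an owned form parameterized over a custom allocator.
pub trait ToOwned {
    type Owned: Borrow<Self>;
    fn to_owned(&self) -> Self::Owned;
}
```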
I just wanted to say that I like these ideas very much. Also, I was talking to @jonhoo and someone else whose user name I don’t know at the Boston Rust meetup recently, and they were talking about some of their other use cases that involve getting more “introspection” into the Rust hashmaps. Hopefully they can describe their needs here themselves though.
The other person was @fintelia. We're working on a (currently underdocumented and largely impossible to use) new database built in Rust, and especially for reads we often end up bottlenecked by how quickly we can look up things in `HashMap`s (we'd likely have similar issues if we wanted to support range lookups with `BTreeMap`). There are many instances where we could make use of a lower-level API to `HashMap` (in fact, @fintelia forked off `HashMap` from `std` for this purpose). I'll try to enumerate some here:
- If given a batch of keys to read from/write to the map, we'd like to operate on that batch in bucket order. This can significantly speed up some workloads. Sorting by the key obviously doesn't work (since placement is determined by the hash, not the key), but according to @fintelia (who has looked at the code more closely than I have), even sorting by the hash doesn't work, because only the lower bits of the hash are used to choose a bucket.
- We often have many maps where the same value is used as a key. We'd ideally like to memoize its hash the first time we insert that key into a map, then clone the key together with its hash and carry both along, so that we can reuse the hash the next time the same key is used in a different map (a sketch of precomputing a hash this way follows this list).
- We generally use maps as caches, which means we also need to be able to do eviction. In our particular case, we sometimes want to evict a random element (or perhaps even a span of random elements). For this it would be extremely useful to be able to go to a particular bucket index in the map (even if it's empty), and then scan forwards and remove + collect, say, the next `n` non-empty elements.
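For the hash-memoization point above, the hashing half is already expressible with std's `BuildHasher`; what's missing is a stable way to hand the cached hash back to the map. A minimal sketch of the expressible half (every map involved would have to share the same `RandomState` for the cached hash to be valid):

```rust
use std::collections::hash_map::RandomState;
use std::collections::HashMap;
use std::hash::{BuildHasher, Hash, Hasher};

/// Compute a key's hash once so it can be cloned around alongside the key.
/// The result is only meaningful for maps built from this same state.
fn precompute_hash<K: Hash + ?Sized>(state: &RandomState, key: &K) -> u64 {
    let mut hasher = state.build_hasher();
    key.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let state = RandomState::new();
    let hash = precompute_hash(&state, "some key");
    // Maps sharing clones of `state` would all agree on `hash`; the missing
    // piece is an API that accepts it instead of rehashing the key.
    let _map: HashMap<&str, u32, RandomState> = HashMap::with_hasher(state.clone());
    println!("cached hash: {}", hash);
}
```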
One thing we discussed with @nikomatsakis at the meetup was whether we should have a new `std` module like `std::lowlevel` that contains traits for low-level access to various Rust data structures (e.g., `HashMap`). This has the nice added benefit that in order to use things like `raw_entry_at`, you would need to explicitly do something like `use std::lowlevel::HashMapSuperPowerfulMethods`.
Oh yeah, I forgot about that. I liked this idea =)
Small bikeshed: `std::collections::hash_map::raw::SuperPowerfulStuff` might be more "local"?
Maybe, though I suspect we might end up with these kinds of "raw" APIs for a handful of different types (e.g., `BTreeMap`), and it'd be nice if there was one place to "collect" all these sort-of-kind-of-unsafe methods.
Perhaps you could use re-exports and have `::raw` modules in both `std::collections::raw` and `std::collections::hash_map::raw`, so that we can have our cake and eat it too? (we'd get a raw collections prelude of sorts)
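A rough sketch of how that re-export layout could look; the module names are just the ones floated above, and nothing like this exists in std today:

```rust
// Hypothetical layout: each collection keeps its own `raw` module, and
// std::collections::raw re-exports them all as a "raw prelude" of sorts.
pub mod collections {
    pub mod hash_map {
        pub mod raw {
            // raw HashMap APIs would live here
        }
    }
    pub mod btree_map {
        pub mod raw {
            // raw BTreeMap APIs would live here
        }
    }
    pub mod raw {
        pub use super::btree_map::raw as btree_map;
        pub use super::hash_map::raw as hash_map;
    }
}
```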
I’m a bit dubious about letting you walk around in the buckets of a HashMap, as that’s the sort of thing that can get us more locked into specific implementations. Haven’t thought about it much though.
In general, I'd rather not have a `std::low_level` sort of thing, for the same reason. The standard library is too general to ever justify exposing ultra-low-level details. You'll always need to fork for that kind of stuff, I think.
The bucket walking (at least in our case) is not really about the semantics of what goes in each bucket, but rather wanting sequential memory access to semi-random elements. Being able to do batch inserts/lookups in bucket order is similarly not about knowledge of bucketing. I think there is some room for exposing the fact that there are buckets that are sequential in memory, without exposing the exact mapping from keys to buckets.
I'm not sure I understand your objection to `std::low_level`? The proposal would be to take the methods you proposed in the original post and have them be behind a trait defined in `std::low_level`, implemented only for `HashMap`. That way, users would need to explicitly state that they're using a low-level API (`use std::low_level::RawHashMap`). In a sense, that proposal is only about how people would be able to access whatever API we'd end up exposing, not what that API is.
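A minimal sketch of that gating pattern, with an entirely hypothetical trait and method (the point is the explicit `use`, not the specific API):

```rust
mod low_level {
    // Hypothetical opt-in trait: this method does not exist on HashMap
    // today, and the names here are illustrative only.
    pub trait RawHashMap<K, V> {
        /// Look up by a hash the caller computed themselves.
        fn get_hashed(&self, hash: u64, key: &K) -> Option<&V>;
    }

    // In std, the impl for HashMap would live next to its internals; the
    // trait is never in the prelude, so callers must write
    // `use std::low_level::RawHashMap;` before the methods resolve.
}
```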
I've done some benchmarking of Rust `HashMap`s here. The current public APIs on `HashMap` do not provide optimal performance. In particular, I found two pieces of functionality to be missing:
- There is no way to provide a precomputed hash value during an insertion, despite batch precomputation of hashes providing a ~25% speedup on bulk insertions. (I didn't track down the cause of the speedup, but I suspect that vectorization played a role.)
- Computing the bucket index that a key will land in is quite brittle, yet creating a `HashMap` from entries sorted by this value can be several times faster. This could be improved either by making bucket computation more predictable, as @jonhoo mentioned, or by directly exposing a function to compute the index (a sketch of the typical low-bits mapping follows this list).
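For illustration, here is the kind of mapping such a function would typically expose for a power-of-two-capacity open-addressing table. This is a hedged sketch of the general technique, not the actual std computation, which is an implementation detail and subject to change:

```rust
/// Sketch: with a power-of-two capacity, the starting bucket is usually
/// taken from the low bits of the hash (which is also why sorting by the
/// full hash value, as mentioned earlier, does not yield bucket order).
fn bucket_index(hash: u64, capacity: usize) -> usize {
    debug_assert!(capacity.is_power_of_two());
    (hash as usize) & (capacity - 1)
}

fn main() {
    // Sorting a batch by this index before insertion is what produced the
    // speedups measured above (assuming a fixed capacity and hasher).
    let mut batch: Vec<(u64, &str)> = vec![(0xdead_beef, "a"), (0x1234, "b")];
    batch.sort_by_key(|&(hash, _)| bucket_index(hash, 1024));
    println!("{:?}", batch);
}
```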
The objection is that once something gets into `std`, it's stable and the API must be maintained indefinitely. The current `HashMap` API doesn't expose too much of the internals, so we have some leeway to change the `HashMap` internals in the future. The more we expose of the inner workings, the less we can change later.
Oh, sure, I realize that. The proposal to have `std::low_level` was premised on @Gankra's suggestions making it into `std` at all. I'm saying that if we are to add these methods, then I think they should be behind this kind of explicit trait, rather than being methods directly on `HashMap`. I separately think that it is indeed a good idea to add some methods along these lines, though you are totally right that we'd want to be careful about what we commit to. In particular, I think it'd probably be fine to commit to open addressing (as opposed to per-bucket chaining), but I don't think it'd be fine to, say, commit to Robin Hood hashing.
I have the same concerns as @Gankra.
IMO, if @jonhoo and @fintelia need to use the exact implementation details of the `std::collections` types, what they are currently doing is the right way to do so. That is, fork the collection, put it in a crate, and rely on that.
So a different approach to help them here would be to make that easier, for example, by moving liballoc/libstd into the nursery, and splitting the different collections into different crates, so that they can have their own finer-grained fork that they can synchronize with upstream at their own pace.
OTOH, some of the operations they want are pretty high level, like bulk inserting items, or inserting with an initial pre-computed hash. These operations, like the ones @Gankra proposes, are things that any potential future `HashMap` improvement would be able to offer anyway. Or at least, one could design these APIs in such a way that the evolution of the implementation is not constrained by them.
What I don't understand is exactly why we need to put these utilities in `std::low_level` or some other special place. They are reasonable things to do when one needs to, and I don't really see how adding special extra hoops helps anybody in any way. They should just sit alongside the other collection operations, have good docs, and the `unsafe` operations should require `unsafe` code to use them. That would be good enough for me.
Just FTR, "Having to make wrapper types to do custom search logic is annoying" seems like a red herring to me.
The reason is that extension methods are always an option, even for types in libstd.
@JoeyAcc I’m not sure what you mean by using wrapper types for custom search logic. AFAIK neither they nor extension methods allow an end user to access private fields within standard library types. In order to adjust the hash to index computation like I described above, I had to copy the ~5000 lines of HashMap implementation into my own crate, manually remove the stability annotations from every single public function, and then switch the randomness implementation back to using the rand crate.
@gnzlbg None of the operations that have been proposed so far are unsafe. In fact, that's probably part of the concern: these additions would open up tons of subtle ways of accidentally shooting yourself in the foot. For instance, the compiler won't stop you from providing the wrong hash to `insert_hashed_nocheck`, but if you do, other seemingly unrelated lookups in the `HashMap` might fail later on.
Plus, there's also just a lot of functions that we'd want. At the very least, every function that takes a key (i.e. most of them) should also have a `*_hashed_nocheck` version that lets the user pass along the hash.
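To make the breadth concrete, here is a hypothetical illustration of what that duplicated surface would look like, with the contract the compiler cannot check spelled out; none of these signatures exist today:

```rust
/// Hypothetical *_hashed_nocheck twins. Contract (unchecked): `hash` must
/// equal what the map's own BuildHasher produces for `key`. Inserting with
/// a mismatched hash silently corrupts probing, so later, seemingly
/// unrelated lookups can miss.
pub trait HashedNocheck<K, V> {
    fn insert_hashed_nocheck(&mut self, hash: u64, key: K, value: V) -> Option<V>;
    fn get_hashed_nocheck(&self, hash: u64, key: &K) -> Option<&V>;
    fn remove_hashed_nocheck(&mut self, hash: u64, key: &K) -> Option<V>;
    fn contains_key_hashed_nocheck(&self, hash: u64, key: &K) -> bool;
    // ...and so on, for every key-taking method on the map.
}
```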