Help wanted: fast hash maps in std

What if the default Hasher stayed the same, and the user could change the hasher via a builder?

I think an elegant way would be an optional type parameter or a new `new` constructor.

For example, to create a fast hash table for u8:

    let map = HashMap::with_hasher(|e: u8| e % 64);

@bluss

We unfortunately can’t use oneshot hashing for any type where the oneshot version would be different from the non-oneshot version. Notably this includes strings, since we append the 0xff byte, and primitive slices, since we prepend the length.

You run into breakage when a user has defined their own type which can borrow to a &str (think InternedString). If we add a oneshot hash implementation to &str and use it in HashMap and HashSet, the user’s code would suddenly fail to look things up if InternedString is missing an identical oneshot implementation.
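To make the Borrow contract concrete, here is a minimal sketch of the hypothetical InternedString from the scenario above. The whole point is that the owned key and the borrowed &str must hash identically, or lookups through Borrow silently come up empty:

```rust
use std::borrow::Borrow;
use std::hash::{Hash, Hasher};

// Hypothetical user type from the post above: it borrows to str, so a
// HashMap<InternedString, V> can be queried with a plain &str.
#[derive(PartialEq, Eq)]
struct InternedString(String);

impl Borrow<str> for InternedString {
    fn borrow(&self) -> &str {
        &self.0
    }
}

// The HashMap contract requires Borrow'd keys to hash identically to the
// owned key. If &str grew a special oneshot path that this impl didn't
// mirror, get("foo") on a map keyed by InternedString would stop working.
impl Hash for InternedString {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.0.hash(state) // must match str's Hash byte-for-byte
    }
}
```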

@dns2utf8

I think this is what you’re looking for? https://doc.rust-lang.org/std/collections/struct.HashMap.html#method.with_hasher
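For reference, the existing constructor takes a BuildHasher rather than a closure. A self-contained sketch (DefaultHasher is used only to avoid pulling in a third-party crate; a fast external hasher type would slot in the same way):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::BuildHasherDefault;

// The stable API today: HashMap::with_hasher takes a BuildHasher.
// BuildHasherDefault turns any Default + Hasher type into one.
fn fast_map() -> HashMap<u8, &'static str, BuildHasherDefault<DefaultHasher>> {
    HashMap::with_hasher(BuildHasherDefault::default())
}
```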

Why is it important to continue doing those things?

It avoids trivial collisions between things like ("foo", "bar") and ("foob", "ar").
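This collision is easy to demonstrate with the streaming Hasher API: the hasher state only ever sees the concatenated bytes, so without a separator the two tuples are indistinguishable.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Hash a pair of strings with no separator between the fields.
// The hasher sees only the raw byte stream, so any split of the same
// concatenation ("foobar") produces the same hash.
fn hash_pair_no_separator(a: &str, b: &str) -> u64 {
    let mut h = DefaultHasher::new();
    h.write(a.as_bytes());
    h.write(b.as_bytes());
    h.finish()
}
```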

We also can’t realistically change it, for the same reason we run into with oneshot: user-defined types that have to hash the same way as a &str because they’re linked via Borrow.

I wanted to point out a relevant idea and commentary from the 2017 planning thread:

@comex, @wycats, maybe you can continue discussion here?

Wouldn’t the solution be more generic if it was implemented as part of the hashing of a tuple rather than as part of String?

For example:

    fn hash<S: Hasher>(&self, state: &mut S) {
        let (ref a, ref b) = *self;
        a.hash(state);
        state.write_u8(0xff);
        b.hash(state);
        state.write_u8(0xff);
    }

Just curious, what’s meant by ‘oneshot’?

But then you still have collisions between e.g. ([0xff], []) vs ([], [0xff]). 0xff works specifically for strings because that’s not a valid UTF-8 byte.

Oneshot hashing is when you have a single byte buffer you need to hash and can give it to the hasher all at once. It can be more efficient than iterative hashing since you don’t need to carry a bunch of state around. You can also pick an algorithm based on the input size to maximize throughput.
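A sketch of what a oneshot entry point might look like (nothing like this exists in std; the function and its size cutoff are purely illustrative). Because the whole buffer arrives in one call, the implementation can pick a strategy by length instead of carrying streaming state:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Hypothetical oneshot hash: the full input is available up front, so we
// can dispatch on length to maximize throughput.
fn oneshot_hash(bytes: &[u8]) -> u64 {
    if bytes.len() <= 8 {
        // Tiny inputs: a simple FNV-1a style mix with no setup cost.
        let mut h: u64 = 0xcbf2_9ce4_8422_2325;
        for &b in bytes {
            h = (h ^ b as u64).wrapping_mul(0x0000_0100_0000_01b3);
        }
        h
    } else {
        // Longer inputs: hand the whole slice to a streaming hasher at once.
        let mut s = DefaultHasher::new();
        s.write(bytes);
        s.finish()
    }
}
```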

I hadn’t considered that, but that’s because my initial suggestion was something the current API can’t do, for example:

    fn hash<S: Hasher>(&self, state: &mut S) {
        let (ref a, ref b) = *self;
        let mut a_s = S::new(); // This doesn't exist currently.
        a.hash(&mut a_s);
        state.write_u64(a_s.finish());
        let mut b_s = S::new(); // This doesn't exist currently.
        b.hash(&mut b_s);
        state.write_u64(b_s.finish());
    }

There are probably other implications to this that I’m not aware of, but it does solve the problem.

For tuples, how about first hashing the lengths of all the variable-length subcomponents together, before any of their contents? Wouldn’t this defeat any attempt to trigger a collision?
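A minimal sketch of that idea for a pair of byte slices (the function is illustrative, not a proposed std API): writing both lengths up front means `([0xff], [])` and `([], [0xff])` feed the hasher different byte streams.

```rust
use std::hash::Hasher;

// Length-prefix hashing for a pair: all lengths first, then all contents.
// The two problem inputs from above now differ in their length prefixes
// (1, 0) vs (0, 1) before any content bytes are written.
fn hash_pair_lengths_first<H: Hasher>(a: &[u8], b: &[u8], state: &mut H) {
    state.write_usize(a.len());
    state.write_usize(b.len());
    state.write(a);
    state.write(b);
}
```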

(The tuple collision problem reminds me of the issue of unambiguously serializing data. Perhaps some techniques in this space could be useful?)

How is the implementation going to find the lengths of all the variable-length subcomponents?

See also:

https://github.com/contain-rs/hashmap2/pull/5/files

a fairly old (March) PR to a fork of the current HashMap implementation, which implements this already. I’m not sure what the current state is.

I really should try hacking on HashMap myself as there are a bunch of other potential optimizations I’d like to look at.

For some reason, MurmurHash3 hasn’t been extensively investigated. It performs vastly better than xxHash for short keys because it has much less state to populate, but has top-quality (non-cryptographic) properties. For example, a 1-byte array can give a 32 bit hash in ~4 ns in Scala, which handily beats everything on https://cdn.rawgit.com/Gankro/hash-rs/7b9cf787a830c1e52dcaf6ec37d2985c8a30bce1/index.html save FNV. A somewhat-carefully optimized Rust implementation could probably outdo that substantially.
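Part of why MurmurHash3 is cheap on short keys is how small its per-block work is. For a taste, here is its published 32-bit finalizer (fmix32, with the well-known constants from the reference implementation); the entire avalanche step is two multiplies and three xor-shifts:

```rust
// MurmurHash3's 32-bit finalizer (fmix32). This is only the avalanche
// step of the full algorithm, shown to illustrate how little state and
// work is involved compared to heavier streaming hashers.
fn fmix32(mut h: u32) -> u32 {
    h ^= h >> 16;
    h = h.wrapping_mul(0x85eb_ca6b);
    h ^= h >> 13;
    h = h.wrapping_mul(0xc2b2_ae35);
    h ^= h >> 16;
    h
}
```

Each step of fmix32 is invertible, so it is a bijection on u32: it scrambles bits without discarding any input entropy.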

I strongly support the idea of having context-dependent hashing (switch to a slower algorithm if many collisions are detected), but I am not sure that FNV is a good choice for a default for anything. You don’t have to spend very much more time to get at least somewhat better hashing even at very short key sizes.

I’m improving the cache awareness in this PR: https://github.com/rust-lang/rust/pull/36692. This will stack with any hashing improvements.

How much does the extra write_u8(0xff) cost? If it’s large, it’s worth noting we can avoid it for String where s.capacity() > s.len() by writing the 0xff into the spare capacity of the underlying buffer.
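An unsafe sketch of that trick, purely as illustration (assumption: we had mutable access to the String, which HashMap lookups do not actually have): if there is spare capacity, place the 0xff terminator in the buffer itself so the hasher sees one contiguous slice instead of two writes.

```rust
use std::hash::Hasher;

// Illustration only: write str's 0xff terminator into the String's spare
// capacity so the whole input reaches the hasher as a single slice.
fn hash_str_with_terminator<H: Hasher>(s: &mut String, h: &mut H) {
    if s.capacity() > s.len() {
        let len = s.len();
        unsafe {
            let ptr = s.as_mut_vec().as_mut_ptr();
            // In-bounds: capacity > len, so byte `len` is allocated spare space.
            std::ptr::write(ptr.add(len), 0xff);
            h.write(std::slice::from_raw_parts(ptr, len + 1));
        }
    } else {
        // No spare room: fall back to the two writes done today.
        h.write(s.as_bytes());
        h.write(&[0xff]);
    }
}
```

Both branches feed the hasher the identical byte stream (the string bytes followed by 0xff), so the result is the same either way.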

A lot; I benchmarked something similar here: https://github.com/rust-lang/rfcs/pull/1666#issuecomment-230730783

We should get rid of it altogether (there’s already open issues on the subject).

FWIW, even FnvHasher is on the slow side. Look at https://github.com/rust-lang/rust/pull/37229 where I replaced FnvHasher with a worse-but-faster hash function and sped up rustc itself by 2–6% across most workloads…

I think it’s important to contextualize a bit here. That workload is 98%+ usize and u64 writes, and for those the PR’s hasher is a single instruction instead of 16 non-parallelizable ones (FNV). Even if the hash function is a bit weak and has poor distribution in the lower bits, resolving the resulting partial collisions is not that expensive in rustc’s hashmap usage, hence the speedup.
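A sketch of that "worse but faster" style of hasher: a u64/usize write is a single rotate-xor-multiply mix step rather than FNV’s byte-at-a-time loop. The constant here is the one popularized by the Firefox/Fx hasher, used illustratively; this is not the exact code from the PR.

```rust
use std::hash::Hasher;

// A word-at-a-time hasher in the Fx style. Each integer write is one mix
// step; byte slices fall back to mixing byte by byte.
struct QuickHasher(u64);

impl QuickHasher {
    fn mix(&mut self, word: u64) {
        self.0 = (self.0.rotate_left(5) ^ word).wrapping_mul(0x517c_c1b7_2722_0a95);
    }
}

impl Hasher for QuickHasher {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.mix(b as u64);
        }
    }
    fn write_u64(&mut self, n: u64) {
        self.mix(n); // the whole write is a single mix step
    }
    fn write_usize(&mut self, n: usize) {
        self.mix(n as u64);
    }
}
```

The trade-off is exactly the one described above: the low bits avalanche poorly, but when nearly all writes are single integers, the per-write cost dominates and a table that tolerates partial collisions still comes out ahead.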