Hashing in C
Hash Tables
By Eric Suh
Hash tables are an efficient implementation of a keyed array data structure, a structure sometimes known as an associative array or map. If you're working in C++, you can take advantage of the STL map container for keyed arrays implemented using binary trees, but this article will give you some of the theory behind how a hash table works.
Keyed Arrays vs. Indexed Arrays
One of the biggest drawbacks to a language like C is that there are no keyed arrays. In a normal C array (also called an indexed array), the only way to access an element would be through its index number. To find element 50 of an array named "employees" you have to access it like this:
employees[50];
In a keyed array, however, you would be able to associate each element with a "key," which can be anything from a name to a product model number. So, if you have a keyed array of employee records, you could access the record of employee "John Brown" like this:
employees["Brown, John"];
One basic form of a keyed array is called the hash table. In a hash table, a key is used to find an element instead of an index number. Since the hash table has to be coded using an indexed array, there has to be some way of transforming a key to an index number. That way is called the hashing function.
Hashing Functions
A hashing function can be just about anything. How the hashing function is actually coded depends on the situation, but generally the hashing function should return a value based on a key and the size of the array the hashing table is built on. Also, one important thing that is sometimes overlooked is that a hashing function has to return the same value every time it is given the same key.
Let's say you wanted to organize a list of about 200 addresses by people's last names. A hash table would be ideal for this sort of thing, so that you can access the records with the people's last names as the keys.
First, we have to determine the size of the array we're using. Let's use a 260 element array so that there can be an average of about 10 element spaces per letter of the alphabet.
Now, we have to make a hashing function. First, let's create a relationship between letters and numbers:
A --> 0 B --> 1 C --> 2 D --> 3 ... and so on until Z --> 25.
The easiest way to organize the hash table would be based on the first letter of the last name.
Since we have 260 elements, we can multiply the first letter of the last name by 10. So, when a key like "Smith" is given, the key would be transformed to the index 180 (S is the 19th letter of the alphabet, so S --> 18, and 18 * 10 = 180).
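A minimal sketch of that hashing function in C might look like this (the 260-element table size comes from the discussion above; the function name is illustrative, and it assumes the key starts with an uppercase letter):

#include <stddef.h>

#define TABLE_SIZE 260   /* 26 letters * 10 slots per letter */

/* Map the first letter of the last name (assumed to be an uppercase A-Z)
   to 0..25, then multiply by 10 to get the starting index. */
size_t first_letter_hash(const char *last_name)
{
    return (size_t)(last_name[0] - 'A') * 10;
}

/* first_letter_hash("Smith") == 180, since 'S' - 'A' == 18 and 18 * 10 == 180. */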
Since we use a simple function to generate an index number quickly, and we use the fact that the index number can be used to access an element directly, a hash table's access time is quite small. A linked list of keys and elements wouldn't be nearly as fast, since you would have to search through every single key-element pair.
Collisions and Collision Handling
Problems, of course, arise when we have last names with the same first letter. So "Webster" and "Whitney" would correspond to the same index number, 220. A situation like this, when two keys get sent to the same location in the array, is called a collision. If you're trying to insert an element, you might find that the space is already filled by a different one.
Of course, you might try to just make a huge array and thus make it almost impossible for collisions to happen, but then that defeats the purpose of using a hash table. One of the advantages of the hash table is that it is both fast and small.
Collision handling with open addressing
The simplest collision handling algorithm is known as the open addressing method, or the closed hashing method. When you are adding an element, say "Whitney," and you find that another element is already there ("Webster," for instance), then you would just proceed to the next element space (the one after "Webster"). If that is filled, you go on to the next one, and so on, until you find an empty space to insert the new element (all those extra element spaces come in handy after all!).
...
220 "White"   | <-- ### COLLISION ### : Gotta move on to the next.
221 "Webster" | <-- ### COLLISION ### : Next one.
222           | <-- Ahhh, perfect. Insert here.
223           |
...
Since we modified the insertion algorithm, we also have to change the function that finds the element. You have to have some way of verifying that you've found the element you want, and not some other element. The simplest way is to just compare keys. (Does this record have the last name "Whitney"? Does this one?) If the element you find is not one of them, just move on to the next element until you reach the one you want or you find an empty space (which means the element is not in the table).
Sounds simple, right? Well, it gets more complicated. What if you have so many collisions that you run off the end of the array?
If you're trying to insert "Zorba" and all the elements are filled because of the collision handling, then what? Look at the example:
...
258 "Whitney" | <-- Nope, not empty.
259 "Zeno"    | <-- Nope, not empty.
--------------- <-- Ummm, what now?
The easiest thing to do is to just wrap around to the beginning again. If there are still no empty spaces, then we have to resize the array, since there isn't enough space in the hash table for all of the elements. If we resize the array, of course, we'll have to come up with a tweak to our hash function (or at least how we handle it) so that it covers the right range of values again, but at least we'll have room. (Note that resizing the array means that occasionally inserting a value will cause an O(n) copy operation to take place, but that on average this should happen only once for every n items inserted, so insertion should be on average constant time, O(1).) (If you aren't sure what terms like "O(n)" and "constant time" mean, take a look at the Cprogramming.com article series on algorithmic efficiency.) As you can see, resizing isn't all that bad--still, if you know the amount of space you will need to start with, you can save your program some work.
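Putting the insertion and lookup rules together, a sketch of open addressing with linear probing and wraparound might look like this (the struct and function names are illustrative, resizing is omitted, and the first-letter hash from above is assumed):

#include <stddef.h>
#include <string.h>

#define TABLE_SIZE 260

struct record {
    char last_name[32];          /* an empty string marks an unused slot */
    /* ... other employee fields ... */
};

static struct record table[TABLE_SIZE];

/* Probe forward from the hashed index, wrapping around at the end of the
   array.  Returns 0 on success, -1 if the table is completely full (a real
   implementation would resize instead). */
int insert_record(const struct record *r)
{
    size_t start = (size_t)(r->last_name[0] - 'A') * 10;  /* assumes uppercase first letter */
    for (size_t i = 0; i < TABLE_SIZE; i++) {
        size_t idx = (start + i) % TABLE_SIZE;             /* the wraparound */
        if (table[idx].last_name[0] == '\0') {
            table[idx] = *r;
            return 0;
        }
    }
    return -1;
}

/* Lookup probes the same way, comparing keys until it finds a match or hits
   an empty slot (which means the element is not in the table). */
struct record *find_record(const char *last_name)
{
    size_t start = (size_t)(last_name[0] - 'A') * 10;
    for (size_t i = 0; i < TABLE_SIZE; i++) {
        size_t idx = (start + i) % TABLE_SIZE;
        if (table[idx].last_name[0] == '\0')
            return NULL;
        if (strcmp(table[idx].last_name, last_name) == 0)
            return &table[idx];
    }
    return NULL;
}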
Handling collisions with separate chaining
A second collision handling strategy is to store a linked list at each element in the hash data structure. This way, when a collision occurs, you can just add the element into the linked list that is stored at the hash index. If you have only a single element with a particular hash value, then you have a single element list--no performance penalty. If you have a lot of elements hashing to the same value, you'll see a slowdown of course, but no more than you otherwise would see with hash collisions.
One nice thing about separate chaining is that having a bunch of values that hash "near" each other is less important. With open addressing, if you have a cluster of values that hash to nearly the same value, you'll run out of open space in that part of the hash. With separate chaining, each element that has a different hash value will not impact the other elements.
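A sketch of separate chaining, where each array slot holds the head of a singly linked list of the entries that hashed there (again, the names here are illustrative):

#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE 260

struct chain_node {
    char key[32];
    /* ... value fields ... */
    struct chain_node *next;
};

static struct chain_node *buckets[TABLE_SIZE];

/* Insert at the head of the chain for the already-computed hash index.
   A collision simply makes this list one node longer. */
int chain_insert(const char *key, size_t index)
{
    struct chain_node *node = malloc(sizeof *node);
    if (node == NULL)
        return -1;
    strncpy(node->key, key, sizeof node->key - 1);
    node->key[sizeof node->key - 1] = '\0';
    node->next = buckets[index];
    buckets[index] = node;
    return 0;
}

/* Lookup walks the chain at the hashed index, comparing keys. */
struct chain_node *chain_find(const char *key, size_t index)
{
    struct chain_node *n;
    for (n = buckets[index]; n != NULL; n = n->next)
        if (strcmp(n->key, key) == 0)
            return n;
    return NULL;
}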
Resizing dynamically based on a load factor
Generally speaking, you wouldn't want your hash table to grow completely full because this will make lookups take much longer. If a value isn't in the array, with open addressing, you have to keep looking until you hit an empty location or you get back to the starting point--in other words, with a completely full table, lookups could be O(n), which is horrible. A real hash table implementation will keep track of its load factor, the ratio of elements to array size. If you have a 10 element array, with 7 elements, the load factor is 0.7. In fact, 0.7 is generally about the right time to resize the underlying array.
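As a sketch, the load-factor bookkeeping might look something like this (the struct, field, and threshold names are illustrative, "struct entry" is left incomplete, and the resize itself is not shown):

#include <stddef.h>

#define MAX_LOAD_FACTOR 0.7

struct hash_table {
    struct entry *slots;   /* the underlying indexed array */
    size_t capacity;       /* number of slots in that array */
    size_t count;          /* number of elements currently stored */
};

/* Returns nonzero if the table should grow before the next insertion. */
int needs_resize(const struct hash_table *t)
{
    return (double)t->count / (double)t->capacity > MAX_LOAD_FACTOR;
}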
Choosing a Good Hash Algorithm
The more collisions you have, the worse the performance of your hash table will be. With enough elements in your hash table, you can get an average performance that's quite good--essentially constant time O(1). (The trick is to make the array grow over time as you start to fill up the array.) But if you have a lot of elements that hash to the same value, then you will have to start doing a lookup through a list of elements that all have the same hash value. This can make your hash lookups go from constant time to being, well, linear time in the number of elements. Imagine if your hash function hashed all values to 0, putting them in the first element of the array. Then it would be just a really complicated way of implementing a linear search.
Choosing a good hash algorithm can require some care and experimentation, and it will depend on your problem domain. If you're working with names, you probably don't want a hash algorithm that just looks at the first letter, because the letters of the alphabet are not used evenly--you'll find a lot more names that start with S than with Z. You also want your hash function to be fast--you don't want to lose all the time savings you're getting from the hash table because you're computing the hash function really slowly. It's a delicate balance. For one good hash function, check out the hash algorithm discussed in the next section.
Now you're ready to implement your first hash table! Give it a try. It isn't too hard, and the end result is quite useful.
Hash Functions
© Copyright 2004-2008 by Paul Hsieh
Why look at hash functions?
In a previous job, I was asked to look at hashing functions and got into a dispute with my boss about how such functions should be designed. I had advocated the use of LFSRs or CRCs that would be customized to the size of the table, as discussed in "Numerical Recipes". My boss advocated simply performing a modulo-by-prime operation and cited Knuth's 30-year-old The Art of Computer Programming. I showed him examples where modulo by prime had extremely bad collisions, but this did not seem to sway him at all. It seems Knuth's rewrites have come too late.
A coworker "settled" the dispute by discovering Bob Jenkins' hash function. This outperformed both of our suggestions while being based on better analysis regarding collisions. I had bookmarked the webpage and occasionally referred to it in future projects, and noticed the two additions of the "One at a time Hash" and "FNV hash" as updates to the page over time. The thing about Bob Jenkins' function is that the code is messy and uses a large number of mystery constants whose construction I did not understand. Both the "One at a time Hash" and "FNV Hash" are extremely elegant with very few magic constants.
Bob Jenkins himself indicated that FNV outperformed his own function, so at first I simply took his word for it and started using the FNV hash blindly on all occasions. Later, I finally had the occasion to measure the real performance inside of a project. After a number of miscalculations and mismeasurements, I decided that I needed to study the problem for real.
Analyzing existing hash functions
The first thing I noticed was that on my system (an Athlon XP) the Bob Jenkins function outperformed basically everything else (including the FNV function) by a large margin. How do we resolve this contradiction with Bob Jenkins' claim? Simple -- he was measuring on a "Pentium". Intel's latest Pentium IV is known to have very slow shifters which slows down every hash function except the FNV function which uses a multiply instead. The Opteron/Athlon 64 architecture has a vastly improved integer multiplier (unchallenged by any other architecture, save perhaps the Itanium) which suggests that the FNV hash should do well on that system as well.
But at a more fundamental level I wanted to understand what the real performance limiters were for these functions to see if a recoding of them might help (I performed a similar exercise for some reference MD5 code and got a dramatic performance boost, so I was optimistic). Bob Jenkins' code is too convoluted, and it seemed that the compiler or out-of-order CPU architectures could easily find the parallelism that was there (and there is some to be had).
But the CRCs and the One-at-a-Time hash are completely serial, with each instruction dependent on the one before it. So I split the input data into odd and even bytes, calculated two parallel CRCs and One-at-a-Time hashes, and at the end combined one into the other as if it were more data. This markedly improved the performance of these functions, but not quite up to the point of outperforming the Bob Jenkins hash, so I did not completely pursue the task of proving the suitability of these modified functions.
If I was going to try to outperform Bob Jenkins' function, I would have to take a step back and understand the functional nature of the bottleneck in these functions. The functions other than Bob Jenkins' basically operated on the idea of consuming one 8-bit byte at a time of input data and mixing each in some injective way into some 32-bit accumulator which, after possible post-processing, is simply output. One can see the motivation of this idea--each input byte can be mixed twice with a large degree of freedom in the 32-bit accumulator without self-overlapping. Thus in successive steps it's only really required to sort of "spread out" consecutive sequences of at most 8 bits in such a way that previous bytes don't obviously cancel out.
This is seen explicitly in the "One at a Time hash" function. In fact, for each byte only a few very simple operations are performed, with a final short sequence of operations required at the end. These operations at the end are required to make sure the bits in the last few bytes fully "avalanche" to all the output bytes. Avalanching is the property between an input and output bit where the output bit will flip with a probability p ("close" to 0.5) if the input bit is flipped relative to any random input data. A good hash function requires avalanching from all input bits to all the output bits. (Incidentally, Bob Jenkins overly chastises CRCs for their lack of avalanching--CRCs are not supposed to be truncated to fewer bits as other more general hash functions are; you are supposed to construct a custom CRC for each number of bits you require.)
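For reference, here is the One-at-a-Time hash as it is commonly published: a few simple operations per byte, followed by a short finalization sequence that finishes avalanching the last few bytes (the function name and parameter types here are as commonly circulated, not taken from this article):

#include <stdint.h>
#include <stddef.h>

uint32_t one_at_a_time(const unsigned char *key, size_t len)
{
    uint32_t hash = 0;
    size_t i;
    for (i = 0; i < len; i++) {
        hash += key[i];          /* mix in one byte at a time */
        hash += hash << 10;
        hash ^= hash >> 6;
    }
    hash += hash << 3;           /* final mixing to avalanche the last bytes */
    hash ^= hash >> 11;
    hash += hash << 15;
    return hash;
}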
Creating a new hash function
Using the One at a Time hash function as a model, the next obvious question to ask is "why not try to use fewer operations between data fragments"? The idea would be to rely more heavily on fixups at the end to produce the final avalanching which adds a constant time overhead in hopes of reducing the linear overhead. I.e., the mixing function would in fact operate much more slowly relative to the stream of input bytes, but this would not matter to the bulk of the early bytes because they would eventually reach a maximal point of avalanching anyway.
So my thought was to use fewer instructions per input fragment and to increase the size of the input fragment from 8 bits to 16 bits. On the x86 this latter idea has a particularly high impact on performance, since these architectures have hardware support for unaligned 16-bit word accesses. Using Bob Jenkins' definition of avalanching, I chose an inner loop instruction sequence that I thought might work by interleaving two 16-bit words, then wrote a program to search for parameters which gave the greatest amount of avalanching before requiring a fixup for the end. I then added instructions that would be the equivalent of unrolling the inner loop, corresponding to padding the input with a fixed number of zeros, then scanned for the set of parameters which could complete the avalanching for all the real input bits.
I was shocked to find that there were no significant impediments to this exercise, and I easily found a hash function with all these properties after a few hours of work. I then subjected all realistic sub-bit patterns of the hash output to a simple statistical test and verified that it had a distribution equivalent to a uniformly random map.
The moment of truth came with the performance test--but given the architecture, it was a foregone conclusion. My hash function performs around 66% faster than Bob Jenkins' function, as tested with various compilers.
Below is the code:
IMPORTANT NOTE: Since there has been a lot of interest for the code below, I have decided to additionally provide it under the LGPL 2.1 license. This provision applies to the code below only and not to any other code including other source archives or listings from this site unless otherwise specified.
The LGPL 2.1 is not necessarily a more liberal license than my derivative license, but this additional licensing makes the code available to more developers. Note that this does not give you multi-licensing rights. You can only use the code under one of the licenses at a time.
#include "pstdint.h" /* Replace with <stdint.h> if appropriate */

#undef get16bits
#if (defined(__GNUC__) && defined(__i386__)) || defined(__WATCOMC__) \
  || defined(_MSC_VER) || defined (__BORLANDC__) || defined (__TURBOC__)
#define get16bits(d) (*((const uint16_t *) (d)))
#endif

#if !defined (get16bits)
#define get16bits(d) ((((uint32_t)(((const uint8_t *)(d))[1])) << 8)\
                       +(uint32_t)(((const uint8_t *)(d))[0]) )
#endif

uint32_t SuperFastHash (const char * data, int len) {
    uint32_t hash = len, tmp;
    int rem;

    if (len <= 0 || data == NULL) return 0;

    rem = len & 3;
    len >>= 2;

    /* Main loop */
    for (;len > 0; len--) {
        hash  += get16bits (data);
        tmp    = (get16bits (data+2) << 11) ^ hash;
        hash   = (hash << 16) ^ tmp;
        data  += 2*sizeof (uint16_t);
        hash  += hash >> 11;
    }

    /* Handle end cases */
    switch (rem) {
        case 3: hash += get16bits (data);
                hash ^= hash << 16;
                hash ^= ((signed char)data[sizeof (uint16_t)]) << 18;
                hash += hash >> 11;
                break;
        case 2: hash += get16bits (data);
                hash ^= hash << 11;
                hash += hash >> 17;
                break;
        case 1: hash += (signed char)*data;
                hash ^= hash << 10;
                hash += hash >> 1;
    }

    /* Force "avalanching" of final 127 bits */
    hash ^= hash << 3;
    hash += hash >> 5;
    hash ^= hash << 4;
    hash += hash >> 17;
    hash ^= hash << 25;
    hash += hash >> 6;

    return hash;
}
Below are the results of a benchmark:

                  AMD Athlon XP 1.620 GHz                                     Power4 1 GHz     UltraSparc III 1.2 GHz
                  Intel C/C++     MSVC         WATCOM C/C++   GCC -O3         GCC -O3          CC 32bit -O
                  /O2 /G6 /Qaxi   /O2 /Ot      /otexan /6r    -march=         -mpowerpc64
                  /Qxi /Qip       /Og /G6                     athlon-xp

 CRC32            6.42            5.66         5.66           5.67            14.06             8.75
 One at a Time    5.76            5.66         5.66           5.69            12.79             5.57
 Alpha Numeric    3.29            4.06         4.06           5.67            10.26             5.52
 FNV Hash         4.88            4.84         4.83           4.87             8.92            11.98
 Bob Jenkins      2.08            2.36         2.03           2.07             6.16             3.08
 SuperFastHash    1.54            1.92         1.59           1.34             3.71             2.15

Data is the time in seconds taken to hash a random buffer of 256 bytes 5 million times.
MSVC seems to have a hard time optimizing the two faster hash functions, and surprisingly the open source gcc is able to turn in the outright fastest result. Well done!
For the hash function to have the correct properties, it is assumed that CHAR_BIT is 8 and computations use two's complement arithmetic.
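If you want the compiler to enforce those assumptions, a minimal C11 guard (not part of the original code) could be added:

#include <limits.h>

/* Illustrative compile-time checks only; requires C11 _Static_assert. */
_Static_assert(CHAR_BIT == 8, "SuperFastHash assumes 8-bit chars");
_Static_assert((-1 & 3) == 3, "SuperFastHash assumes two's complement arithmetic");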
I was initially worried that using a portable way of accessing 16 bits at a time would erode the performance significantly. Although none of the x86 compilers correctly reduced the portable version to the direct version (something worth complaining about), subsequent testing showed that this did not lead to the drastic performance drop that I thought it would (only about 20%). This leads me to believe that even on RISC architectures this function should perform very well versus the Bob Jenkins hash or other hashes.
- Update(1)
- David C. wrote: I tested your hash function against all of the popular ones, including Bob Burtles. It turns out it was not only the quickest but had the best distribution (I created histograms of the chain lengths). The architecture I work on is IBM Z/OS (S/390 mainframes). Well done mate, will be using your code from now on! Ok, not exactly RISC, but at least this demonstrates that this function is good beyond the x86 architecture.
- Update(2)
- I have recently gained access to a Power4 based Linux machine, as can be seen in the updated performance table above. (Interestingly, even normalizing for clock rate differences, the Athlon XP is 35-40% faster than the Power4 architecture). I did not see any appreciable performance difference between gcc and the native cc compiler, so I just reported the results from gcc. The performance ratio between SuperFastHash and Bob Jenkins is only slightly less impressive, so the main point about its advantage still holds.
- Update(3)
- Feedback from Tim Rentsch suggested that to be fair, Bob Jenkins' hash should leverage the x86's unaligned access feature as well (this helps it even more than it does my function, because it accesses 32 bits at once). I have also rescheduled the operations of SuperFastHash to maximally leverage the pipelines of modern CPUs. I have made the code more uniform in its treatment of integers, so besides being portable from a compilation point of view, it is now more portable from a semantic point of view. And finally I have added results from an alphanumeric hash that has been discussed on USENET.
- Update(4)
- It seems that this hash function has gotten a little bit of attention lately. The function has been tested and implemented in Apple's open source WebKit (used in Safari) sources. Presumably this may mean that the code will eventually make its way back into Konqueror as well. In addition, Macromedia appears to have adopted it and will use it in a future deployment. I guess people think there's something to this. (Update (4a): Google's Chrome browser is based on WebKit and continues to use this hash function.)
- Update(5)
- People have been asking for an incremental version of the hash function. Here is a simple diff sequence that shows how to do this:
13,14c13,14
< uint32_t SuperFastHash (const char * data, int len) {
< uint32_t hash = len, tmp;
---
> uint32_t SuperFastHash (const char * data, int len, uint32_t hash) {
> uint32_t tmp;
You initialize it by setting hash to some constant (like 0, or some other number if you want to avoid 0 sinking), and you keep it going by feeding the incremental output of each call into the hash parameter of each successive call. This will make the algorithm functionally comparable to Bob Jenkins' hash from an incremental point of view. You can improve this by splitting out the final avalanching code into a kind of "finalization step". But it should be pointed out that if you split the same data into two different partitionings, then the hash value that you get out will not necessarily be the same (Bob Jenkins' hash has the same problem). If you wish to use an incremental hash which is partition-independent and has very low "per-call" overhead, then Bob Jenkins' "One-At-A-Time" hash is recommended.
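As a usage sketch, assuming the modified three-argument signature from the diff above (the seed constant and the function name hash_two_chunks are mine, not part of the original code):

#include <stdint.h>

/* Assumed incremental signature per the diff above. */
uint32_t SuperFastHash (const char * data, int len, uint32_t hash);

/* Hash a message that arrives in two pieces by feeding each call's result
   into the next call's hash parameter.  Note that different partitionings
   of the same data will generally give different results. */
uint32_t hash_two_chunks (const char *part1, int len1,
                          const char *part2, int len2)
{
    uint32_t h = 0x9e3779b9;             /* arbitrary non-zero seed to avoid "0 sinking" */
    h = SuperFastHash (part1, len1, h);
    h = SuperFastHash (part2, len2, h);
    return h;
}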
- Update(6)
- In Google's open source "sparse hash table" project, the documentation makes the following observation: " ... in one test of the default SGI STL string hash function against the Hsieh hash function ..., for a particular set of string keys, the Hsieh function resulted in hashtable lookups that were 20 times as fast as the STLPort hash function. The string keys were chosen to be "hard" to hash well, so these results may not be typical, but they are suggestive."
Future work to be considered
The newest generation of CPUs is capable of 64-bit computation, and certainly in a few years we can expect widespread development and tool availability for 64-bit software. So should this idea work by reading 32 bits at a time within a 64-bit accumulator? Probably, and we could expect the result to have roughly twice the asymptotic performance.
There's also the question of the inline dependencies. There are 6 instruction dependencies in the inner loop, so it's quite possible that the odd and even word splits and recombination might lead to a substantial performance boost. (Rescheduling the operations actually saturates even the 3-pipeline Athlon, so unrolling is not necessary.)
- Update(1)
- There have been some requests for an incremental version of SuperFastHash. This is straightforward enough to do by accepting the "hash" value as a parameter and initializing it with some non-zero constant (since the point is to assume that the length is not available until all the data is read). The only sticking issue is which constant to choose.
- Update(2)
- Tim Rentsch has noticed that the bit avalanching probability of SuperFastHash deviates from 50% more than Bob Jenkins' hash does--this is true; in fact it is between 5/12 and 7/12 (by design), while Bob Jenkins' hash appears to be far closer to 50%. There are some easy hacks limited to adding to the avalanche code at the end (for example, adding hash += (hash << 16) | (hash >> 16) to the end) to make all the bits of my hash function avalanche with a probability between .485 and .515, however it's probably best that I revisit this to see how to achieve this with the least additional impact. (See next note.)
- Update(3)
- Following comments by Tim Rentsch and Bob Jenkins I have revisited the final tempering code and have updated it to reduce the bit correlation range to between .485 and .515 without introducing other anomalies. It now also directly passes Bob Jenkins' lookup2 test.