
I find that if I initialize an empty dictionary at the beginning and then add elements to it in a for loop (about 110,000 keys; the value for each key is a list that also grows inside the loop), the insertion speed drops as the loop progresses.

I suspect that the problem is that the dictionary does not know the number of keys at initialization time and is not doing anything very smart about it, so perhaps hash collisions become frequent and slow it down.

If I know the number of keys and exactly what those keys are, is there any way in Python to make a dict (or a hashtable) work more efficiently? I vaguely remember that if you know the keys, you can design the hash function smartly (a perfect hash?) and allocate the space beforehand.
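For reference, the pattern described above looks roughly like this (a hypothetical reconstruction; the actual keys and values are not shown in the question):

```python
# Hypothetical sketch of the slow pattern described above.
d = {}
for i in range(110_000):             # ~110,000 keys in the real workload
    key = f"key_{i}"                 # placeholder keys; actual keys unknown
    d.setdefault(key, []).append(i)  # each value is a list built up in the loop
```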

Raymond Hettinger
szli
    Hashtable performance can be improved by removing/reducing collisions. This can be achieved by pre-allocating an optimal number of buckets, or creating a perfect hash function from a set of known keys. Unfortunately, Python dictionaries do not give you low-level access to the internals of the hash table, so you cannot fine-tune them in this manner. – Charles Salvia Apr 27 '13 at 21:34
  • How much memory does this dict require? (Did you say the lists are increasing in size?) It can be measured with [pympler](http://packages.python.org/Pympler/). If the size is causing Python to hit swap memory, you could see a dramatic slow-down. – unutbu Apr 27 '13 at 22:13

1 Answer


> If I know the number of keys and exactly what those keys are, is there any way in Python to make a dict (or a hashtable) work more efficiently? I vaguely remember that if you know the keys, you can design the hash function smartly (a perfect hash?) and allocate the space beforehand.

Python doesn't expose a pre-sizing option to speed up the "growth phase" of a dictionary, nor does it provide any direct control over "placement" in the dictionary.

That said, if the keys are always known in advance, you can store them in a set and build your dictionaries from the set using dict.fromkeys(). That classmethod is optimized to pre-size the dictionary based on the set size and it can populate the dictionary without any new calls to __hash__():

>>> keys = {'red', 'green', 'blue', 'yellow', 'orange', 'pink', 'black'}
>>> d = dict.fromkeys(keys)  # dict is pre-sized to 32 empty slots
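One caveat when the values are lists, as in the question: passing a list as the second argument to dict.fromkeys() shares a single list among all keys. Build the per-key lists in a second pass instead:

```python
keys = {'red', 'green', 'blue'}

# Pitfall: every key shares the SAME list object.
shared = dict.fromkeys(keys, [])
shared['red'].append(1)        # shared['green'] is now [1] as well

# Pre-size with fromkeys, then give each key its own list:
d = dict.fromkeys(keys)
for k in d:
    d[k] = []
d['red'].append(1)             # d['green'] stays []
```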

If reducing collisions is your goal, you can run experiments on the insertion order in the dictionary to minimize pile-ups. (Take a look at Brent's variation on Algorithm D in Knuth's TAOCP to get an idea of how this is done).

By instrumenting a pure Python model for dictionaries (such as this one), it is possible to count the weighted-average number of probes for an alternative insertion order. For example, inserting dict.fromkeys([11100, 22200, 44400, 33300]) averages 1.75 probes per lookup. That beats the 2.25 average probes per lookup for dict.fromkeys([33300, 22200, 11100, 44400]).
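A minimal sketch of that kind of instrumentation is shown below. It uses plain linear probing as a stand-in for CPython's perturbed probe sequence, so the counts differ from the 1.75/2.25 figures above, but the idea is the same: instrument insertion and compare orders. Note that with linear probing the total number of probes here stays the same across orders; what changes is which keys become the expensive ones to find:

```python
def probe_counts(keys, size=8):
    """Insertion probe counts in a toy open-addressing table.

    Plain linear probing stands in for CPython's actual perturbed
    probing, so these numbers only illustrate the technique.
    """
    table = [None] * size
    counts = {}
    for key in keys:
        i = hash(key) % size
        probes = 1
        while table[i] is not None:     # collision: walk to next slot
            i = (i + 1) % size
            probes += 1
        table[i] = key
        counts[key] = probes
    return counts

probe_counts([0, 8, 1])   # {0: 1, 8: 2, 1: 2}
probe_counts([0, 1, 8])   # {0: 1, 1: 1, 8: 3} -- key 8 now costs 3 probes
```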

Another "trick" is to increase sparseness in a fully populated dictionary by fooling it into increasing its size without adding new keys:

 d = dict.fromkeys(['red', 'green', 'blue', 'yellow', 'orange'])
 d.update(dict(d))     # This makes room for additional keys
                       # and makes the set collision-free.
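Whether this trick still enlarges the table depends on the CPython version (the dict internals changed substantially in 3.6, and sizing behavior is an implementation detail); sys.getsizeof lets you check on your own build rather than assume:

```python
import sys

d = dict.fromkeys(['red', 'green', 'blue', 'yellow', 'orange'])
before = sys.getsizeof(d)
d.update(dict(d))          # same keys; may or may not grow the table
after = sys.getsizeof(d)
print(before, after)       # sizes are CPython-version dependent
```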

Lastly, you can introduce your own custom __hash__() for your keys with the goal of eliminating all collisions (perhaps using a perfect hash generator such as gperf).
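A minimal sketch of that last idea, assuming the key set is fixed and known: wrap each key in a class whose __hash__() returns a precomputed, distinct small integer, so no two keys share a bucket in a table large enough to hold them. The class name and the hard-coded slot table are illustrative stand-ins for what a real generator such as gperf would compute:

```python
class PerfectKey:
    """Wraps a known key with a precomputed, collision-free hash.

    _slots assigns each known key a distinct small integer; with at
    least len(_slots) power-of-two buckets, no two keys collide.
    (Hypothetical hand-built table; a perfect-hash generator such
    as gperf derives such a function automatically.)
    """
    _slots = {'red': 0, 'green': 1, 'blue': 2, 'yellow': 3}

    def __init__(self, name):
        self.name = name

    def __hash__(self):
        return self._slots[self.name]

    def __eq__(self, other):
        return isinstance(other, PerfectKey) and self.name == other.name

d = {PerfectKey(k): k.upper() for k in PerfectKey._slots}
d[PerfectKey('red')]   # 'RED'
```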

Raymond Hettinger