What is the default __hash__ in python?

I quite often use funky stuff as keys for dictionaries, so I am wondering what the right way to do it is - and this goes through implementing good hash methods for my objects. I am aware of other questions asked here, like good way to implement hash, but I'd like to understand how the default __hash__ works for custom objects, and whether it is possible to rely on it.

I have noticed that mutables are explicitly unhashable, since hash({}) raises an error... but strangely, custom classes are hashable:

>>> class Object(object): pass
>>> o = Object()
>>> hash(o)

So, does anybody know how this default hash function works? By understanding this, I'd like to know:

Can I rely on this default hash if I put objects of the same type as keys of a dictionary? e.g.:

key1 = MyObject()
key2 = MyObject()
key3 = MyObject()
{key1: 1, key2: 'blabla', key3: 456}

Can I rely on it if I use objects of different types as keys in a dictionary? e.g.:

{int: 123, MyObject(10): 'bla', 'plo': 890}

And in the last case also, how do I make sure that my custom hashes don't clash with the builtin hashes? e.g.:

{int: 123, MyObject(10): 'bla', MyObjectWithCustomHash(123): 890}

What you can rely on: custom objects have a default hash() that is based in some way on the identity of the object, i.e. any object using the default hash will have a constant value for that hash over its lifetime, and different objects may or may not have different hash values.

You cannot rely on any particular relationship between the value returned by id() and the value returned by hash(). In the standard C implementation of Python 2.6 and earlier they were the same; in Python 2.7-3.2, hash(x) == id(x)/16.

Edit: originally I wrote that in releases 3.2.3 and later, or 2.7.3 and later, the hash value may be randomised, and that in Python 3.3 the relationship will always be randomised. In fact, that randomisation at present only applies to hashing strings, so the divide-by-16 relationship may continue to hold for now, but don't bank on it.

Hash collisions don't usually matter: for a dictionary lookup to find an object, it must have the same hash and must also compare equal. Collisions only matter if you get a very high proportion of them, as in the denial-of-service attack that led to recent versions of Python being able to randomise the hash calculation.
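To make that concrete, here is a small sketch (the class name Colliding is made up for illustration) showing that keys with identical hash values still resolve correctly, because the dict falls back to __eq__:

```python
# Two distinct keys that deliberately share a hash value.
# A dict compares hashes first, then falls back to __eq__, so a
# collision alone never causes a wrong lookup, only a tiny slowdown.
class Colliding:
    def __init__(self, name):
        self.name = name

    def __hash__(self):
        return 42  # every instance collides on purpose

    def __eq__(self, other):
        return isinstance(other, Colliding) and self.name == other.name

d = {Colliding("a"): 1, Colliding("b"): 2}
print(d[Colliding("a")], d[Colliding("b")])  # -> 1 2
```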

Older versions of CPython used the value of id() directly for the default hash(); newer versions use id()/16, because in CPython all ids are multiples of 16 and the low bits would otherwise never be set. This is purely an implementation detail: the default hash() is generated from id(), but exactly how changes between releases.


The documentation states that custom objects rely on id() as their hash() implementation:

CPython implementation detail: This is the address of the object in memory.

If you mix custom objects with builtin types like int, there might be hash collisions, but that's no problem at all as long as the hashes are equally distributed. Don't investigate too much unless you really hit a performance problem.


In Python 3, the following function (from Python/pyhash.c) is applied to the id() of the object to produce the default hash for subclasses of object:

Py_hash_t
_Py_HashPointer(void *p)
{
    Py_hash_t x;
    size_t y = (size_t)p;
    /* bottom 3 or 4 bits are likely to be 0; rotate y by 4 to avoid
       excessive hash collisions for dicts and sets */
    y = (y >> 4) | (y << (8 * SIZEOF_VOID_P - 4));
    x = (Py_hash_t)y;
    if (x == -1)
        x = -2;
    return x;
}

SIZEOF_VOID_P is 8 for 64-bit Python and 4 for 32-bit Python.

>>> class test: pass
...
>>> a = test()
>>> id(a)
4325845928
>>> hash(a)
-9223372036584410438

You can see that the hash is calculated from id(a) using the formula (id(a) >> 4) | (id(a) << (8 * SIZEOF_VOID_P - 4)), where the bitwise operations are performed on C signed integers. For example, for the a defined above:

>>> import numpy
>>> y = numpy.array([4325845928], dtype='int64')
>>> SIZEOF_VOID_P = 8
>>> (y >> 4) | (y << (8 * SIZEOF_VOID_P - 4))
array([-9223372036584410438])

Note that I am using numpy.array(dtype='int64') so that the bitwise operations act the same way they would in C (if you perform the same operation on Python ints you get different behavior because they don't overflow). See https://stackoverflow.com/a/5994397/161801.
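If numpy isn't handy, the same rotation can be reproduced in pure Python by masking to 64 bits and then reinterpreting the unsigned result as a signed integer. hash_pointer is a name I made up for this sketch, and it assumes a 64-bit build:

```python
SIZEOF_VOID_P = 8            # 64-bit build assumed
BITS = 8 * SIZEOF_VOID_P
MASK = (1 << BITS) - 1

def hash_pointer(p):
    # rotate right by 4 within a 64-bit word, as _Py_HashPointer does
    y = ((p >> 4) | (p << (BITS - 4))) & MASK
    # reinterpret the unsigned word as a signed Py_hash_t
    x = y - (1 << BITS) if y >= (1 << (BITS - 1)) else y
    return -2 if x == -1 else x  # -1 is reserved for errors in CPython

print(hash_pointer(4325845928))  # -> -9223372036584410438
```

Unlike the numpy version, this needs an explicit mask because Python ints never overflow.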


The default hash for user-defined classes is derived from their id. This gives behaviour that is often useful: using an instance of a user-defined class as a dictionary key will allow the associated value to be retrieved when exactly the same object is provided again to look up the value. e.g.:

>>> class Foo(object):
    def __init__(self, foo):
        self.foo = foo


>>> f = Foo(10)
>>> d = {f: 10}
>>> d[f]
10

This matches the default equality of user-defined classes:

>>> g = Foo(10)
>>> f == g
False
>>> d[g]

Traceback (most recent call last):
  File "<pyshell#9>", line 1, in <module>
    d[g]
KeyError: <__main__.Foo object at 0x0000000002D69390>

Note that even though f and g have the same values for their attributes, they are not equal and looking up g in d doesn't find the value stored under f. Furthermore, even if we change the value of f.foo, looking up f in d still finds the value:

>>> f.foo = 11
>>> d[f]
10

The assumption is that instances of some arbitrary new class should be treated as non-equivalent, unless the programmer specifically declares the conditions for two instances to be treated as equivalent by defining __eq__ and __hash__.

And this pretty much works; if I define a Car class, I probably consider two cars with identical attributes to be representing two different cars. If I have a dictionary mapping cars to registered owners, I don't want to find Alice when I look up Bob's car, even if Alice and Bob happen to own identical cars! OTOH, if I define a class to represent postal codes, I probably do want to consider two different objects with the same code to be interchangeable representations of "the same" thing, and in this case if I had a dictionary mapping postal codes to states, I would clearly want to be able to find the same state with two different objects representing the same post code.
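The postal-code case can be sketched like this (PostalCode is a made-up example class): defining __eq__ and __hash__ together gives the class value semantics, so two distinct objects with the same code are interchangeable as keys:

```python
class PostalCode:
    def __init__(self, code):
        self.code = code

    def __eq__(self, other):
        return isinstance(other, PostalCode) and self.code == other.code

    def __hash__(self):
        # hash the same attribute that __eq__ compares, so equal
        # objects always land in the same hash bucket
        return hash(self.code)

states = {PostalCode("90210"): "California"}
print(states[PostalCode("90210")])  # -> California
```

Note that in Python 3, defining __eq__ without __hash__ sets __hash__ to None, making instances unhashable, so the two must be defined together.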

I refer to this as the difference between "value types" and "object types". Value types represent some value, and it's the value I care about, not each individual object's identity. Two different ways of coming up with the same value are equally good, and the "contract" of code passing around value types usually just promises to give you an object with some value, without specifying which particular object it is. For object types OTOH, each individual instance has its own identity, even if it contains exactly the same data as another instance. The "contract" of code passing around object types usually promises to keep track of the exact individual objects.

So why don't the built-in mutable classes use their id as their hash? It's because they're all containers, and we usually consider containers to be mostly like value types, with their value determined by the contained elements:

>>> [1, 2, 3] == [1, 2, 3]
True
>>> {f: 10} == {f: 10}
True

But mutable containers have a value that is transient. Some given list currently has the value [1, 2, 3], but it can be mutated into having the value [4, 5, 6]. If you could use lists as dictionary keys, then we'd have to make a ruling on whether lookup should use the list's (current) value, or its identity. Either way we can be (very) surprised when the value of an object currently being used as a dictionary key is changed by mutating it. Using objects as dictionary keys only works well when the object's value is its identity, or when an object's identity is irrelevant to its value. So the answer chosen by Python is to declare mutable containers unhashable.
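In practice that ruling looks like this: a list can't be a dictionary key at all, while an equal-valued immutable tuple can (a small sketch):

```python
d = {}
try:
    d[[1, 2, 3]] = "by value?"   # lists are explicitly unhashable
except TypeError as e:
    print(e)                     # -> unhashable type: 'list'

d[(1, 2, 3)] = "ok"              # tuples hash by their contents
print(d[(1, 2, 3)])              # -> ok
```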


Now, more specific details in answer to your direct questions:

1) Since this default hash in CPython maps to the object's memory address (directly only before 2.7, according to other answers/comments, and derived from it since), no two objects using default hashing that are both live at the same time can possibly clash on their hash values in CPython, regardless of the classes involved (and if an object is being stored as a dictionary key, it is live). I would also expect that other Python implementations that don't use memory addresses as hashes still have fine hash distributions among objects using the default hashing. So yes, you can rely on it.

2) So long as you don't return as your custom hash a result that is exactly the hash of some existing object, you should be relatively fine. My understanding is that Python's hash-based containers are relatively tolerant of sub-optimal hash functions, so long as they're not completely degenerate.


>>> class C(object):
...     pass
... 
>>> c = C()
>>> hash(c) == id(c)
True

See the builtin id() function. (Note that hash(c) == id(c) only holds on older CPython versions; newer versions derive the hash from id() by rotating it, as shown in the answer above.)
