This could be a bug, or it could be something I don’t understand about when numpy decides to convert the types of the objects in an “object” array.
import numpy as np
X = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [1158941147679947299,0]
Y = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [11589411476799472995,0]
Z = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [115894114767994729956,0]
print(type(X[0]),X[0]) # <class 'int'> 7047216832217320738
print(type(Y[0]),Y[0]) # <class 'float'> 1.7477687161336848e+19
print(type(Z[0]),Z[0]) # <class 'int'> 121782390452532103395
The arrays themselves remain “object” dtype, as expected. What is unexpected is that the Y array’s elements got converted to “floats”. Why is that happening? As a consequence I immediately lose precision in my combinatorics. To make things even stranger, removing the 0 fixes things:
X = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [1158941147679947299]
Y = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [11589411476799472995]
Z = np.array([5888275684537373439, 1945629710750298993],dtype=object) + [115894114767994729956]
print(type(X[0]),X[0]) # <class 'int'> 7047216832217320738
print(type(Y[0]),Y[0]) # <class 'int'> 17477687161336846434
print(type(Z[0]),Z[0]) # <class 'int'> 121782390452532103395
I have tried other variations, such as larger and smaller numbers, but rarely (if ever) end up with “floats”. Something very specific about the size of these particular “int” values seems to trigger it.