I’m passing various bits of data to a function that computes variance along the first dimension. Sometimes the variance is zero, which is fine, but then the following strange thing happens:
>> sigma = data.var(axis=0) + 1e-7 # data has zero variance so all entries should equal 1e-7
>> sigma
array([1.e-07, 1.e-07, 1.e-07, ..., 1.e-07, 1.e-07, 1.e-07], dtype=float16)
>> (sigma==1e-7).all()
True
>> sigma[0]==1e-7
False
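In case it's useful, here is a self-contained stand-in for my setup (a sketch: I'm assuming the only relevant details are that the array is float16 and has zero variance along axis 0):
>> import numpy as np
>> data = np.zeros((1000, 50), dtype=np.float16)  # zero variance, same dtype as my real data
>> sigma = data.var(axis=0) + 1e-7                # same computation as above
>> sigma.dtype
dtype('float16')
>> (sigma == 1e-7).all(), sigma[0] == 1e-7        # same pair of comparisons as above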
On its own, the False from sigma[0]==1e-7 would be explained by the 16-bit precision, and indeed
>> np.float16(1e-7)==1e-7
False
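Just to spell out the precision point, the nearest float16 to 1e-7, widened back to a Python float, is visibly different:
>> float(np.float16(1e-7))  # nearest half-precision value to 1e-7
1.1920928955078125e-07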
But that seems to contradict the elementwise comparison, which says every entry is equal to 1e-7. This was causing a bug in my code. I can redesign around it, but I want to understand why NumPy behaves this way so I'm not caught out again in the future.