Why do numpy float32s correctly return numbers below the float32 minimum value?
The smallest positive normal number that an IEEE-754 32-bit float can store is 2^-126 ≈ 1.18e-38. Why, then, do I get the following output?
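For example, a loop like the following minimal sketch (the starting value 1e-38 is arbitrary) keeps producing nonzero float32 values well below that bound:

```python
import numpy as np

# finfo agrees that the smallest positive *normal* float32 is about 1.18e-38
print(np.finfo(np.float32).tiny)   # 1.1754944e-38

x = np.float32(1e-38)
while x > 0:
    last_nonzero = x
    x = np.float32(x / 2)          # the halving continues well past 1.18e-38
print(last_nonzero)                # 1e-45, far below the supposed minimum
```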
Surprising behaviour of numpy float16 when testing equality
I’m passing various bits of data to a function that computes variance along the first dimension. Sometimes the variance is zero, which is fine, but then the following strange thing happens:
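Something like the following minimal sketch reproduces it (the helper name variance_along_first_dim, the array shape, and the value 0.1 are illustrative guesses, not the original code):

```python
import numpy as np

def variance_along_first_dim(x):
    # hypothetical stand-in for the function described above
    return np.var(x, axis=0)

a = np.full((4, 2), 0.1, dtype=np.float16)
print(variance_along_first_dim(a))   # [0. 0.] -- zero variance, as expected

# the surprise: the float16 and float32 versions of "the same" value
# print identically but compare unequal
b16 = np.float16(0.1)
b32 = np.float32(0.1)
print(b16, b32)    # 0.1 0.1
print(b16 == b32)  # False -- 0.0999755859375 != 0.10000000149011612
```

The printed 0.1 is just the shortest decimal string that round-trips to each dtype; the underlying binary values differ, so the == test fails even though the output looks identical.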