While playing around with matrices as part of learning Linear Algebra, I ran into a strange issue where numpy and pytorch produce an incorrect determinant for a matrix, while tensorflow gives the expected result.
In [110]: a = np.array([[15, 3], [10, 2]], dtype=np.float32)
In [111]: np.linalg.det(a)
Out[111]: 1.6653345e-15
To the naked eye the determinant of this matrix is 0. Even though the dtype is float32, every entry is exactly representable, so no rounding of the inputs is needed here; why does numpy round these numbers off and give a non-zero determinant?
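One thing I did notice: the residual is tiny relative to the size of the entries, so a tolerance-based check (the scaling heuristic below is my own, not an official recipe) still classifies the matrix as singular:

```python
import numpy as np

a = np.array([[15, 3], [10, 2]], dtype=np.float32)
d = np.linalg.det(a)

# The residual is on the order of float32 machine epsilon times the
# matrix scale, so compare against a tolerance instead of exact zero.
eps = np.finfo(np.float32).eps             # ~1.19e-07
tol = eps * np.abs(a).max() ** a.shape[0]  # rough scale-aware tolerance (my own heuristic)
print(abs(d) < tol)                        # True -> numerically zero
```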
The same thing happens with pytorch as well:
In [114]: at = torch.tensor([[15, 3], [10, 2]], dtype=torch.float32)
In [115]: torch.linalg.det(at)
Out[115]: tensor(-2.6822e-06)
However, tensorflow works as expected:
In [117]: af = tf.Variable([[15, 3], [10, 2]], dtype=tf.float32)
In [118]: tf.linalg.det(af)
Out[118]: <tf.Tensor: shape=(), dtype=float32, numpy=0.0>
There is more weirdness here:
In [121]: a = np.array([[15, 3], [20, 4]], dtype=np.float32)
In [122]: np.linalg.det(a)
Out[122]: 0.0
In [119]: at = torch.tensor([[15, 3], [20, 4]], dtype=torch.float32)
In [120]: torch.linalg.det(at)
Out[120]: tensor(-5.3644e-06)
In [123]: af = tf.Variable([[15, 3], [20, 4]], dtype=tf.float32)
In [124]: tf.linalg.det(af)
Out[124]: <tf.Tensor: shape=(), dtype=float32, numpy=-0.0>
So out of the three libraries, tensorflow seems to be the most stable. Can someone please explain this behavior, or is it an anomaly? I understand this could cause issues with a bunch of other matrix operations.