I’m encountering an issue with Cholesky decomposition in PyTorch when running on a GPU. The following code works perfectly on the CPU:
import torch
device = torch.device('cpu')
torch.manual_seed(1)
size = 4096
F = torch.rand(size // 2, size // 2).to(device)
F = torch.matmul(F, F.T)  # symmetric Gram matrix, positive semi-definite by construction
torch.linalg.cholesky(F)
However, when I move the matrix to the GPU (the only change is device = torch.device('cuda')), I get an error:
import torch
device = torch.device('cuda')
torch.manual_seed(1)
size = 4096
F = torch.rand(size // 2, size // 2).to(device)
F = torch.matmul(F, F.T)
torch.linalg.cholesky(F)
The error message is:
torch._C._LinAlgError: linalg.cholesky: The factorization could not be completed because the input is not positive-definite (the leading minor of order 2044 is not positive-definite).
Why does the exact same matrix factorize successfully on the CPU but fail on the GPU, and how can I resolve this?
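For context, a couple of workarounds I've seen suggested are sketched below (untested on my side; the 1e-5 jitter factor is an arbitrary guess, not a recommended value), but I'd mainly like to understand why the CPU and GPU behave differently:

import torch

device = torch.device('cuda')
torch.manual_seed(1)
size = 4096
F = torch.rand(size // 2, size // 2).to(device)
F = torch.matmul(F, F.T)

# Diagnostic: in float32, the smallest eigenvalue can be swamped by round-off
# if the matrix is very ill-conditioned.
evals = torch.linalg.eigvalsh(F)
print(evals.min().item(), evals.max().item())

# Workaround 1 (sketch): add "jitter" to the diagonal so the matrix stays well
# conditioned in float32 (the 1e-5 factor is my own guess).
jitter = 1e-5 * evals.max() * torch.eye(F.shape[0], device=device)
L = torch.linalg.cholesky(F + jitter)

# Workaround 2 (sketch): factor in float64, where round-off error is much smaller.
L64 = torch.linalg.cholesky(F.double())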
PS:
PyTorch version: 2.3.0
CUDA version: 12.1