I have recently installed PyTorch with CUDA support on my machine, but when I run `torch.cuda.is_available()`, it returns `False`. I verified my GPU setup using `nvidia-smi`, and it seems that my system recognizes the GPU correctly.
Here are the steps I’ve followed so far:

- Installed PyTorch with CUDA support using the command `pip install torch torchvision torchaudio`.
- Confirmed that my GPU is recognized by my system using `nvidia-smi`.
- Verified that the correct CUDA version is installed.
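To rule out a CPU-only build, the wheel itself can be probed without touching the GPU; my understanding is that `torch.version.cuda` is `None` on CPU-only wheels, which a plain `pip install torch` can install. A small sketch of that check (the function name is mine, just for illustration):

```python
import importlib.util

def cuda_build_info():
    """Return a small report on the installed torch build, or None if torch is absent."""
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    return {
        "version": torch.__version__,
        # None here means a CPU-only wheel was installed.
        "built_with_cuda": torch.version.cuda,
        "is_available": torch.cuda.is_available(),
    }

print(cuda_build_info())
```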
Despite this, running `torch.cuda.is_available()` still returns `False`. What could be the reason for this issue, and how can I resolve it?
Additional Information:

- Operating System: Windows 11
- Python Version: 3.10
- PyTorch Version: 2.3.1
- CUDA Version: 12.5.82
- GPU Model: NVIDIA GeForce RTX 4060
What I’ve Tried:

- Reinstalling PyTorch with CUDA support.
- Updating GPU drivers.
- Rebooting the system.
```python
import torch

print(torch.cuda.is_available())    # Returns False
print(torch.cuda.current_device())  # Raises RuntimeError: No CUDA GPUs are available
```
I solved this problem by uninstalling all Torch and CUDA packages on my PC. Then I used the install selector on the PyTorch website and installed the CUDA 11.8 build.
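For context on why the 11.8 build works: NVIDIA drivers are backward compatible, so a driver reporting CUDA 12.5 in `nvidia-smi` can run wheels compiled against an older toolkit such as cu118, but not the other way around (the selector on the PyTorch site produces a command along the lines of `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`). A tiny sketch of that compatibility rule (the function name is mine, just for illustration):

```python
def driver_supports_wheel(driver_cuda: str, wheel_cuda: str) -> bool:
    """True if a driver reporting `driver_cuda` can run a wheel built
    against `wheel_cuda` (drivers are backward compatible)."""
    as_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return as_tuple(driver_cuda) >= as_tuple(wheel_cuda)

# The asker's driver reports CUDA 12.5, so a cu118 wheel should load.
print(driver_supports_wheel("12.5", "11.8"))  # True
```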