>>> import torch
>>> torch.cuda.is_available()
miniconda3/envs/test_gpu/lib/python3.10/site-packages/torch/cuda/__init__.py:118: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 2: out of memory (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
False
OS: Ubuntu 22.04 running on Hyper-V
NVIDIA-SMI: 550.76.01
Driver Version: 552.22
Supported CUDA Version: 12.4
nvcc: Build cuda_12.1.r12.1/compiler.32415258_0
Python 3.10.14
torch 2.3.0
tensorflow 2.16.1
It was working before, but after I installed Ollama (or possibly did something else), the program stopped working.
I'm sure there is enough GPU memory:
1549MiB / 8188MiB used
I have also tried TensorFlow, and it raised the same error.
I have also tried using nvcc to compile a .cu file directly, and that works.
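For reference, the same check can be reproduced without PyTorch or TensorFlow in the loop. This is a minimal ctypes sketch of my own (the helper name `cuda_device_count` and the list of candidate library names are assumptions, not from any framework); it calls `cudaGetDeviceCount` directly and prints the raw runtime error code, which in my case should be 2 ("out of memory"), matching the warning above:

```python
import ctypes

def cuda_device_count():
    """Query the CUDA runtime directly for its device count.

    Returns (count, error_code). An error_code of 0 means success;
    2 is cudaErrorMemoryAllocation ("out of memory"), the same code
    PyTorch reports in the warning. Returns (0, None) if no CUDA
    runtime library can be loaded at all.
    """
    # Try a few common sonames for libcudart (assumed names, adjust
    # for your installed CUDA toolkit version).
    for name in ("libcudart.so", "libcudart.so.12", "libcudart.so.11.0"):
        try:
            cudart = ctypes.CDLL(name)
            break
        except OSError:
            continue
    else:
        return 0, None  # no CUDA runtime found on the loader path

    count = ctypes.c_int(0)
    # cudaError_t cudaGetDeviceCount(int *count)
    err = cudart.cudaGetDeviceCount(ctypes.byref(count))
    return count.value, err

print(cuda_device_count())
```

If this also reports error 2, the problem is below the Python frameworks, in the driver/runtime layer itself.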
Can anyone tell me what's going on? Thanks.