ISSUE:
I have been trying without success to run TensorFlow in GPU mode. When I run the following code to check for available devices:
<code>from tensorflow.python.client import device_lib

def get_available_devices():
    # list_local_devices() returns one DeviceAttributes entry per visible device
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos]

print(get_available_devices())
</code>
The output is:
<code>['/device:CPU:0']
</code>
The expected output should be:
<code>['/device:CPU:0', '/device:GPU:0']
</code>
When I run:
<code>import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
</code>
The output is:
<code>Num GPUs Available: 0
</code>
Here are my specifications:
- CUDA version: 12.2
- cuDNN version: 8.9.2
- GPU model: Nvidia GeForce MX350
- Operating System: Windows 11
- TensorFlow version: 2.16.0
Steps I have taken:
- Installed CUDA 12.2
- Downloaded and set up cuDNN 8.9.2
- Set the CUDA path in the environment variables (see the sanity-check snippet after this list)
- Restarted the PC
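In case it helps narrow things down, here is a small sanity-check sketch I can run. I'm assuming <code>tf.test.is_built_with_cuda()</code> and <code>tf.sysconfig.get_build_info()</code> behave the way I expect (the exact keys in the build-info dict may differ between versions):
<code># Sanity check: was this TensorFlow wheel built with CUDA at all,
# and is the CUDA path visible to the Python process?
import os
import tensorflow as tf

print("Built with CUDA:", tf.test.is_built_with_cuda())

# get_build_info() returns a dict-like object; on GPU builds it should
# include the CUDA/cuDNN versions the wheel was compiled against
# (key names assumed here, they may vary by version).
build_info = tf.sysconfig.get_build_info()
print("cuda_version:", build_info.get("cuda_version"))
print("cudnn_version:", build_info.get("cudnn_version"))

# Environment variable the CUDA installer normally creates on Windows,
# and which I set manually as one of the steps above.
print("CUDA_PATH:", os.environ.get("CUDA_PATH"))
</code>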
What could be the issue here, and how can I resolve it to ensure TensorFlow recognizes and uses the GPU?