When running the script 'inference_acc.py', I encounter a CUDA error. The error message is: "CUDA error: no kernel image is available for execution on the device".
Context:
CUDA version: 12.4
Python version: 3.8.10
Torch version: 2.2.0
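For completeness, my understanding is that this error can mean the installed PyTorch build does not include kernels for the GPU's compute capability, so a quick check like the one below (separate from inference_acc.py, and only a sketch of what I would run) should show which CUDA build PyTorch ships with and which GPU architectures its kernels were compiled for:

```python
import torch

# Environment check, separate from inference_acc.py:
# compare the GPU's compute capability with the
# architectures the installed PyTorch wheel was built for.
print(torch.__version__)                    # PyTorch version, e.g. 2.2.0
print(torch.version.cuda)                   # CUDA version the wheel was built against
print(torch.cuda.is_available())            # whether a CUDA device is visible at all
print(torch.cuda.get_device_name(0))        # GPU model
print(torch.cuda.get_device_capability(0))  # GPU compute capability, e.g. (7, 5)
print(torch.cuda.get_arch_list())           # architectures compiled into this build
```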
Issue:
The error occurs when the script executes certain CUDA operations, in particular a matrix multiplication with torch.matmul().
I am moving tensors to the GPU with the .cuda() method to speed up the computation.
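A minimal sketch of the failing pattern looks roughly like this (the shapes and variable names are placeholders, not the actual code from inference_acc.py):

```python
import torch

# Move two small tensors to the GPU and multiply them;
# this is the same pattern that fails in the full script.
a = torch.randn(64, 128).cuda()
b = torch.randn(128, 32).cuda()
out = torch.matmul(a, b)  # raises "no kernel image is available for execution on the device"
print(out.shape)
```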
I'm seeking guidance on how to resolve this CUDA error so that the script's CUDA operations run correctly.
Any insights or suggestions on how to troubleshoot this would be greatly appreciated. Thank you!