I am using TensorFlow in a Jupyter notebook. Suppose I want to flush everything in my GPU memory without restarting the kernel (that is, without touching my RAM contents). Please do not dive into the reasons behind my demand; in the end I should be able to clear the GPU memory at will.
There are tons of discussions on this simple question, but without a clear answer. Many of them recommend numba.cuda.close(). However, this causes issues for me, eventually killing the kernel and defeating my purpose.
Below is a minimal example.
#---- cell 1 ----
import tensorflow as tf
from numba import cuda

#---- cell 2 ----
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)

#---- cell 3 ----
# device = cuda.get_current_device(); device.reset()
cuda.select_device(0); cuda.close()
After cell 3 the memory is released, but when I attempt to execute cell 2 again, the kernel dies. In fact, in a fresh kernel, once I execute cell 2 and then cell 3, I can never execute cell 2 again without killing the kernel, no matter what I do. I am puzzled. The same thing happens if I use device = cuda.get_current_device(); device.reset() instead of cuda.close().
So my questions are:
- Can I use the GPU again after executing cuda.select_device(0); cuda.close()? If so, how? Note that this question has been asked before, but there is no clear answer.
- If this problem with cuda.close() cannot be avoided, is there a better solution for releasing all GPU memory without touching anything in RAM in an IPython notebook? tf.keras.backend.clear_session() did not release the memory in the first place.
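For context, one workaround I have considered is isolating the GPU work in a short-lived child process, since the CUDA driver frees all of a process's GPU memory when that process exits. A minimal sketch of the pattern (in the real version, tensorflow would be imported and the matmul run inside the worker; the arithmetic here is a hypothetical stand-in so the sketch runs without a GPU):

```python
import multiprocessing as mp

def gpu_job(result_queue):
    # All GPU state would live only inside this child process. In the real
    # version you would `import tensorflow as tf` HERE (not in the parent),
    # build the tensors from cell 2, and put the result on the queue.
    # The arithmetic below is a hypothetical stand-in for c[0, 0] of the matmul.
    result_queue.put(1.0 * 1.0 + 2.0 * 3.0 + 3.0 * 5.0)

if __name__ == "__main__":
    # "spawn" starts a fresh interpreter, so the child inherits no CUDA context.
    ctx = mp.get_context("spawn")
    q = ctx.Queue()
    p = ctx.Process(target=gpu_job, args=(q,))
    p.start()
    result = q.get()   # fetch the result before joining
    p.join()           # when the child exits, the driver frees ALL its GPU memory
    print(result)      # 22.0 -- c[0, 0] of the matmul in cell 2
```

This keeps the notebook kernel's RAM intact, but it is clumsy for interactive work, which is why I am hoping for an in-kernel solution.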
Thanks in advance.