I know this question has been asked before, but the existing answers are quite cluttered and did not work for me. My GPU has 20 GB of memory and my training set is around 15 GB, as I am classifying time series of 2D images. The problem is that I can't load such a large dataset onto the GPU (loading it on the CPU works fine). I tried:
import tensorflow as tf

batch_size = 32
train_dataset = (
    tf.data.Dataset.from_tensor_slices((train_data_arr, train_label))
    .shuffle(len(train_data_arr))
    .batch(batch_size)
)
as well as some other options in the tf.data API, but none of them worked.
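For instance, tf.data.experimental.prefetch_to_device stages upcoming batches in GPU memory ahead of the training loop; this is a sketch of the kind of option I mean, not my exact code, and the device string and buffer size are assumptions:

# Sketch: stage upcoming batches on the GPU. Per the TF docs, this has to be
# the final transformation in the pipeline.
train_dataset = train_dataset.apply(
    tf.data.experimental.prefetch_to_device("/GPU:0", buffer_size=2)
)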
Note that my question is not about the model or data pre-processing; it is only about loading such a large dataset onto the GPU. Is there a way to do this?
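To make the goal concrete, what I mean by "loading on the GPU" is roughly the explicit placement below (a sketch; whether this mechanism can actually hold ~15 GB of data is exactly what I am asking):

import tensorflow as tf

# Goal: the full arrays resident in GPU memory, so each training step
# avoids a host-to-device copy.
with tf.device("/GPU:0"):
    data_gpu = tf.constant(train_data_arr)   # ~15 GB in my case
    labels_gpu = tf.constant(train_label)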