I have a large image dataset, around 1 GB on disk, and I'm writing the data loading for training an image model.
However, once decoded into image tensors, the images take up far more memory (for example, 100 full-HD 1920×1080 RGB images as float32 PyTorch tensors take up roughly 2.5 GB).
If I keep them on disk, the dataloader has to read from disk every time a batch is formed, which is a huge time sink in my use case. (Reading a batch of 100 images from disk takes around 700 ms on my setup, and ideally I need to cut that by at least 50%.)
I would like to cache the encoded JPEGs in RAM and decode them with multiple parallel workers when forming a batch, turning a disk-bound task into a CPU-bound decoding task (and since the decoding can be spread across multiple processes, batch generation should be much faster).
My current solution is the following:
In my dataset class, which subclasses `torch.utils.data.Dataset`, the constructor (`__init__`) opens every image file with:
```python
for image_filename in image_filenames:
    with open(image_filename, 'rb') as f:
        self.binary_images.append(io.BytesIO(f.read()))
```
My `__getitem__(self, image_idx)` method then uses PIL's `Image.open(self.binary_images[image_idx])` to open the buffer and converts the result to a NumPy array.
Is this idea feasible, and is it the most efficient approach? For example, does my 1 GB of JPEGs end up occupying roughly 1 GB of RAM once cached, and what other ways are there to make this faster?
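To partially answer my own question about the 1 GB figure, I'm estimating the cache size like this (a rough sketch: it counts only the buffer contents, not the per-object Python overhead):

```python
import io

def cache_size_bytes(binary_images):
    """Total bytes held by a list of BytesIO buffers (the encoded JPEG
    sizes, excluding per-object interpreter overhead)."""
    return sum(buf.getbuffer().nbytes for buf in binary_images)

# Example with dummy 1 KiB "encoded" payloads:
bufs = [io.BytesIO(b'\xff\xd8' + b'\x00' * 1022) for _ in range(3)]
print(cache_size_bytes(bufs))  # -> 3072
```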
Note: please don't refer me to FFCV, as I have a very specific constraint that FFCV can't handle.