I have a `docker.tar` file that contains numerous Docker images; the file is quite large, at around 44 GB. The images are loaded from the tar file, retagged, and then pushed to another registry. All in all, this process takes about 40 minutes because of how many images there are.
So far, I’ve managed to cut this time down to about 20 minutes by using `xargs` to push the images in parallel once they have been retagged. Loading the images is the next thing I want to address, as it also takes a long time.
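For context, the retag-and-push step looks roughly like this. It is a sketch rather than my exact script; `registry.example.com` and the parallelism level are placeholders:

```shell
# Sketch only: retag every local image and push it, running up to
# 8 pushes in parallel. REGISTRY is a placeholder, not my real registry.
REGISTRY=registry.example.com
docker image ls --format '{{.Repository}}:{{.Tag}}' \
  | grep -v '<none>' \
  | xargs -P 8 -I {} sh -c \
      "docker tag '{}' '$REGISTRY/{}' && docker push '$REGISTRY/{}'"
```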
I have tried using `split` to break the original tar file into smaller parts and then running `docker load` on each part via `xargs`, but I get errors saying the new tar files are not valid (incorrect header, unexpected EOF, etc.).
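I suspect these errors happen because `split -b` cuts the archive at arbitrary byte boundaries, so only the first chunk even starts with a tar header. A quick demonstration with a dummy tar (the file names and sizes here are made up for illustration):

```shell
# Build a small tar, split it by bytes, and show that a middle chunk
# is not itself a valid tar archive: its first bytes are file data,
# not a tar header, so tools like `docker load` reject it.
tmp=$(mktemp -d)
head -c 8192 /dev/zero | tr '\0' 'A' > "$tmp/payload"   # 8 KB dummy file
tar -cf "$tmp/demo.tar" -C "$tmp" payload
split -b 4096 "$tmp/demo.tar" "$tmp/part-"              # part-aa, part-ab, ...
tar -tf "$tmp/part-ab" 2>/dev/null || echo "part-ab is not a valid tar"
```

The chunks only become usable again once recombined (`cat part-* | docker load`), which defeats the purpose of splitting; producing independently loadable tars would mean repacking per-image archives instead of byte-splitting the big one.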
Apart from that, I haven’t found much on the topic besides this thread: https://forums.docker.com/t/docker-save-load-performance/9245, and the one comment there that suggests an improvement deals with `docker save` rather than `docker load`.
Is there any other way I can improve the speed of `docker load`? Ideally, any improvements would need to be doable in bash.