OOM: Memory increase issue when training a model with PyTorch on WSL2
I am experiencing steadily increasing memory usage, eventually leading to saturation, while training a deep learning model with PyTorch on WSL2. The problem does not occur on native Linux with the exact same code; the only differences are the PyTorch version, the CUDA version, and the OS. I have tested this on two different setups: