Is it possible to train a model requiring 20GB vRAM using TensorFlow’s MultiWorkerMirroredStrategy on a server with GPUs of varying vRAM capacities?
I am trying to perform distributed training using TensorFlow’s MultiWorkerMirroredStrategy on a server equipped with four GPUs with the following specifications: