I have a pre-trained CNN model on MNIST, and each time I load the trained weights and biases to run inference. Is there any way to skip zero operations in the conv and fc layers during the inference phase only? (I don't want to retrain it, so no backpropagation is needed.)
Since MNIST images are sparse, I expect much lower execution time when skipping zero operations. Optimality of the work is not that important to me; I only want to see how the execution time differs for different rates of zeros in the input.
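For context, this is roughly how I am measuring it (`Net` and `mnist_cnn.pt` are placeholders for my actual model class and weights file):

```python
import time
import torch

model = Net()  # placeholder for my trained CNN class
model.load_state_dict(torch.load("mnist_cnn.pt"))
model.eval()

x = torch.rand(1000, 1, 28, 28)  # dummy MNIST-shaped batch

with torch.no_grad():
    for zero_rate in (0.0, 0.5, 0.9):
        # Zero out a given fraction of input pixels to vary sparsity.
        xs = x.clone()
        xs[torch.rand_like(xs) < zero_rate] = 0.0

        start = time.perf_counter()
        model(xs)
        elapsed = time.perf_counter() - start
        print(f"zero rate {zero_rate:.1f}: {elapsed:.4f}s")
```

With the standard dense layers, the timings are essentially identical regardless of the zero rate.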
I tried some repositories for sparse convolution, but they assume you retrain the model afterwards. I was expecting to find a simple change in the PyTorch code that just skips the zero operations. I also tried to find a way to change the C++ codebase of PyTorch, but I couldn't figure it out.
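For the fully connected layers, the closest thing I found is converting the input to a sparse tensor and using `torch.sparse.mm`, something like this sketch (`sparse_linear` is just a name I made up; it reuses the trained dense weights and only sparsifies the activations):

```python
import torch

def sparse_linear(x, fc):
    # Convert the (mostly zero) flattened activations to COO format
    # so torch.sparse.mm can exploit the zero entries.
    x_sp = x.to_sparse()
    # fc.weight has shape (out_features, in_features), so transpose it.
    return torch.sparse.mm(x_sp, fc.weight.t()) + fc.bias
```

But I found nothing comparable for the conv layers in core PyTorch, and I'm not sure this is the right approach. Is there a simpler way?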