Some layers of my deep learning model are initialized in the model class but are not used in the forward pass. I found that when the code for these layers is kept, the performance differs from when that code is deleted. Is this caused by the model's initialization? An example is as follows:
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, ***):
        super().__init__()
        self.layer1 = ***
        self.layer2 = ***
        self.layer3 = ***  # defined here but never called in forward()

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        return out
When self.layer3 is kept, the performance is different from when self.layer3 is deleted. However, self.layer3 does not participate in training or testing, so why can it influence the performance?
I also tried deleting the code of the model initialization; the result is still the same as described above.
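For reference, here is a minimal sketch of how one might check whether the unused layer simply advances the random number generator during construction (the Net class, keep_unused_layer flag, and layer sizes below are invented for illustration, not taken from my real model):

import torch
import torch.nn as nn

class Net(nn.Module):
    # Hypothetical stand-in for the real model; sizes are arbitrary.
    def __init__(self, keep_unused_layer):
        super().__init__()
        self.layer1 = nn.Linear(8, 8)
        self.layer2 = nn.Linear(8, 8)
        if keep_unused_layer:
            self.layer3 = nn.Linear(8, 8)  # never called in forward()

    def forward(self, x):
        return self.layer2(self.layer1(x))

torch.manual_seed(0)
a = Net(keep_unused_layer=False)
state_a = torch.get_rng_state()

torch.manual_seed(0)
b = Net(keep_unused_layer=True)
state_b = torch.get_rng_state()

# The layers that are actually used start from identical weights here,
# because layer3 is constructed after them ...
print(torch.equal(a.layer1.weight, b.layer1.weight))  # True

# ... but the RNG state after building the model is no longer the same,
# so any randomness that comes later (dropout masks, data shuffling,
# modules created afterwards) can differ between the two versions.
print(torch.equal(state_a, state_b))  # False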