I am pretraining an encoder within a VAE architecture. My goal is to transfer the encoder's weights and fine-tune the encoder on a classification task. Since my final goal is classification rather than generation, is using a "dumb" (minimal, frozen) decoder a common practice?
For example:
# "Dumb" decoder
self.decoder = nn.Sequential(
    nn.Linear(latent_size, 5000),
    nn.Sigmoid()
)

# Make the decoder not trainable
for param in self.decoder.parameters():
    param.requires_grad = False
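
For reference, this is roughly how I intend to reuse the encoder afterwards. The Classifier class, pretrained_encoder, and num_classes below are just placeholders for illustration, not code I have written yet:

import torch.nn as nn

class Classifier(nn.Module):
    """Wraps the pretrained VAE encoder with a new classification head."""
    def __init__(self, pretrained_encoder, latent_size, num_classes):
        super().__init__()
        self.encoder = pretrained_encoder                # weights taken from the VAE
        self.head = nn.Linear(latent_size, num_classes)  # new, randomly initialized head

    def forward(self, x):
        z = self.encoder(x)  # latent representation (e.g. the mean of q(z|x))
        return self.head(z)

# clf = Classifier(vae.encoder, latent_size, num_classes)
# ...then fine-tune clf end to end with a cross-entropy loss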