I am working on an autoencoder project and would like to understand how to implement a contrastive loss for it.
As far as I understand, a contrastive loss operates on pairs of latent-space representations: it pulls them toward each other if they belong to the same class and pushes them apart if they belong to different classes. In my project, I also have an MLP module that takes the autoencoder's latent representations as input, so that I can link a latent representation to a label.
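To make my question concrete, here is a rough sketch of the pairwise contrastive loss as I understand it (following the classic margin-based formulation; the function name, `margin` value, and pair construction are my own assumptions, not from any particular library):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, same_class, margin=1.0):
    """Margin-based contrastive loss on pairs of latent codes.

    z1, z2:     (batch, latent_dim) latent representations of the two pair members
    same_class: (batch,) float tensor, 1.0 if the pair shares a class, else 0.0
    """
    d = F.pairwise_distance(z1, z2)  # Euclidean distance per pair
    # Same-class pairs: penalize any distance (pull together)
    pos = same_class * d.pow(2)
    # Different-class pairs: penalize only if closer than the margin (push apart)
    neg = (1.0 - same_class) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()
```

Is this roughly the right shape, or is there a standard PyTorch building block I should be using instead?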
The main issue in my case is that I don't know how to implement the contrastive loss algorithm in PyTorch for an autoencoder. Are there any references I could look into?
Moreover, does it make sense to combine the contrastive loss with the other losses (the autoencoder's reconstruction loss and the CrossEntropy loss for the MLP block) using just a simple sum?
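For context, what I had in mind for combining the losses is a plain weighted sum like the following (the `lambda_*` weights are made-up placeholders; I assume they would need tuning so that no single term dominates):

```python
import torch

# Hypothetical weights for each objective -- not tuned values
lambda_rec, lambda_con, lambda_ce = 1.0, 0.1, 1.0

def total_loss(rec_loss, con_loss, ce_loss):
    # Simple weighted sum of reconstruction, contrastive, and classification losses
    return lambda_rec * rec_loss + lambda_con * con_loss + lambda_ce * ce_loss
```

Is this kind of weighted sum a reasonable way to train all three objectives jointly, or are there pitfalls (e.g. conflicting gradients, scale mismatch) I should watch out for?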