Hi, I am experimenting with the CIFAR-10 and CIFAR-10.1 datasets: I train a ResNet-18 model on the CIFAR-10 training data and evaluate on the CIFAR-10 test data and on CIFAR-10.1 (attribute-shifted data). I use supervised contrastive learning to train the feature encoder (the feature vector dimension is currently 64) and then train the projection head (a single-layer MLP) on top of it. On evaluation, this model gives 91% accuracy on the CIFAR-10 test data and 81% on CIFAR-10.1.
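For reference, the supervised contrastive (SupCon) loss I use for the encoder looks roughly like this. This is a minimal NumPy sketch, not my exact training code; the function name, batch, and temperature value are illustrative:

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of embeddings (sketch)."""
    # L2-normalize so dot products are cosine similarities
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity from the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # positives: other samples in the batch that share the anchor's label
    pos_mask = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1) \
        / np.maximum(pos_mask.sum(axis=1), 1)
    return per_anchor.mean()

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 64))             # batch of 8 embeddings, dim 64
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])  # two samples per class
loss = supcon_loss(feats, labels)
print(float(loss))
```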
To adapt the feature encoder to the unseen attribute-shifted data, I fine-tune it on the CIFAR-10.1 dataset with a self-supervised contrastive loss. However, the fine-tuned feature encoder, combined with the projection head pretrained on the CIFAR-10 training data, performs worse than the model that was not fine-tuned: accuracy on CIFAR-10.1 drops to 56% (from 81%). Can you tell me why the fine-tuned model is performing worse?
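The fine-tuning step uses a SimCLR-style NT-Xent loss on two augmented views of each CIFAR-10.1 image, roughly as sketched below in NumPy. The batch size, temperature, and the perturbation standing in for augmentation are illustrative, not my actual settings:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Self-supervised NT-Xent (SimCLR) loss for two augmented views (sketch)."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine-similarity space
    n = len(z1)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                    # drop self-similarity
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # each view's single positive is its augmented partner in the other half
    partner = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    return -log_prob[np.arange(2 * n), partner].mean()

rng = np.random.default_rng(1)
view1 = rng.normal(size=(4, 64))                 # encoder outputs, view 1
view2 = view1 + 0.1 * rng.normal(size=(4, 64))   # perturbed stand-in for view 2
loss = nt_xent_loss(view1, view2)
print(float(loss))
```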
I would like to understand why, after fine-tuning the feature encoder on CIFAR-10.1, the model performs worse both on CIFAR-10.1 and on the CIFAR-10 test set.