I understand that there are various methods for calibration, including using logistic regression.
In the paper “On Calibration of Modern Neural Networks” (https://proceedings.mlr.press/v70/guo17a/guo17a.pdf), it is suggested that a logistic (Platt-style) calibration, such as temperature scaling, be applied to the logit scores before the softmax converts them to the [0, 1] range.
However, I’m working with a deep neural network whose output scores lie in the range [-1, 1]. For example, I trained the network with the AMSoftmax loss, so the final layer produces cosine similarities as scores.
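For context, here is a minimal sketch of what I mean by applying Platt-style calibration directly to such scores. Since Platt scaling is just a one-dimensional logistic regression, nothing in it actually requires the input to be an unbounded logit; the data below are synthetic stand-ins for a held-out validation set of cosine similarities and pair labels:

```python
import numpy as np

# Synthetic stand-in for held-out validation data: cosine-similarity
# scores in [-1, 1] and binary labels (1 = positive / matching pair).
rng = np.random.default_rng(0)
pos = rng.uniform(0.2, 1.0, 500)   # positive pairs tend to score high
neg = rng.uniform(-1.0, 0.4, 500)  # negative pairs tend to score low
scores = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(500), np.zeros(500)])

def fit_platt(s, y, lr=0.1, steps=2000):
    """Fit p(y=1|s) = sigmoid(a*s + b) by gradient descent on log loss."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * s + b)))
        a -= lr * np.mean((p - y) * s)  # d(log loss)/da
        b -= lr * np.mean(p - y)        # d(log loss)/db
    return a, b

a, b = fit_platt(scores, labels)
# Calibrated probabilities in (0, 1) from [-1, 1] scores.
probs = 1.0 / (1.0 + np.exp(-(a * scores + b)))
```

My uncertainty is whether mapping bounded cosine scores through a sigmoid like this is well-founded, or whether the bounded range calls for a different calibration map (e.g., beta calibration or isotonic regression).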
Could you recommend any papers or resources that specifically address calibrating deep neural networks when the scores fall within the [-1,1] range? I’ve had difficulty finding relevant information on this topic.
Thank you very much!