Recently, while studying the code you released for “bob.paper.iccv2023_face_ti”, I ran into a problem. Here are my questions:
When I run your evaluation code (evaluation_pipeline.py) with your trained models, the results differ substantially from those reported in the paper. For example, using the trained model ElasticFace-ArcFace_loss.pth, I set FR_system to ElasticFace and FR_target to ArcFace in evaluation_pipeline.py to perform a black-box attack on the LFW dataset. This produces scores_inversion-dev.csv and scores-dev.csv, from which I compute the SAR with eval_SAR_TMR.py.
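For reference, the only edits I made to evaluation_pipeline.py were roughly the following (FR_system and FR_target are the parameters I mentioned above; the `dataset` and `checkpoint` names and the path are placeholders for my local setup, not necessarily the repository's exact variable names):

```python
# My settings in evaluation_pipeline.py (checkpoint path is local).
FR_system = 'ElasticFace'  # feature extractor used for the inversion
FR_target = 'ArcFace'      # target system differs => black-box attack
dataset = 'LFW'
checkpoint = './models/ElasticFace-ArcFace_loss.pth'  # released model
```

Running the pipeline and then eval_SAR_TMR.py on the two score files gives: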
```
FMR: 0.01 threshold: -0.8191349171709174 TMR: 0.976, SAR: 0.5722518676627535
FMR: 0.001 threshold: -0.7618033404275485 TMR: 0.964, SAR: 0.3755602988260406
```
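For transparency, this is essentially how I compute those numbers from the two score files: threshold on the licit impostor scores at the given FMR, then report TMR on the genuine pairs and SAR on the inversion-attack scores. It is a minimal sketch; the `label`/`score` column names are my assumption about the CSV layout, not necessarily what eval_SAR_TMR.py does internally:

```python
import numpy as np
import pandas as pd

def sar_at_fmr(scores_csv, inversion_csv, fmr):
    """Set the threshold at the given FMR on the licit protocol, then
    report TMR (genuine pairs) and SAR (inverted-template attacks)."""
    licit = pd.read_csv(scores_csv)
    attack = pd.read_csv(inversion_csv)

    genuine = licit.loc[licit['label'] == 'genuine', 'score'].to_numpy()
    impostor = licit.loc[licit['label'] == 'impostor', 'score'].to_numpy()

    # Threshold such that roughly a fraction `fmr` of impostor scores
    # fall above it.
    threshold = np.quantile(impostor, 1.0 - fmr)

    tmr = float(np.mean(genuine >= threshold))
    sar = float(np.mean(attack['score'].to_numpy() >= threshold))
    return threshold, tmr, sar

for fmr in (1e-2, 1e-3):
    thr, tmr, sar = sar_at_fmr('scores-dev.csv',
                               'scores_inversion-dev.csv', fmr)
    print(f'FMR: {fmr} threshold: {thr} TMR: {tmr:.3f}, SAR: {sar}')
```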
I also evaluated the other trained models you provide, and the results are consistently about 30 points (SAR) lower than those reported in your excellent work. My environment configuration matches yours as closely as I can tell. Could you help me work out what is going wrong? Could you share your scores_inversion-dev.csv and scores-dev.csv files? And are there any other evaluation settings in evaluation_pipeline.py beyond the pretrained model path, the dataset, and the FR parameters? I would appreciate any help in resolving this.