I’m a student working on a school project and need help.
My binary classification Convolutional Neural Network has very high accuracy (>96%) on the validation data, and it performed just as well on the test set. However, when I use LIME to visualise the parts of the image that were important to its decision making, more often than not it's highlighting the background (a sketch of how I'm calling LIME is below the questions). So my questions are:
Why is it doing this and has anyone seen this before?
How can it achieve 96% accuracy when it's literally looking at a black mask to make its decision?
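In case the way I'm generating the explanations is part of the problem, this is roughly how I'm calling LIME. It's a minimal sketch where `model` and `image` are placeholders for my trained Keras classifier and one preprocessed sample:

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# `model` is my trained Keras binary classifier and `image` is one
# preprocessed (H, W, 3) sample -- both are placeholders here.

def predict_fn(images):
    # LIME expects per-class probabilities, so expand the single sigmoid
    # output p into [1 - p, p] for the two classes.
    p = model.predict(np.array(images))
    return np.hstack([1.0 - p, p])

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image.astype("double"), predict_fn, top_labels=2, num_samples=1000
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img, mask)  # highlighted superpixels for plotting
```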
The reason I applied a black mask to the images is that the entire dataset I was given had the exact same background: white rollers. As you can see from one of the images I've uploaded, the model was leaning heavily on the rollers in its decision making, so I preprocessed the background to be entirely black (0, 0, 0) RGB pixels. However, the model now seems to be using that somehow.
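For context, the masking step looked roughly like this (a sketch from memory; the threshold value and function name are illustrative rather than my exact code):

```python
import numpy as np

def mask_background(image, threshold=230):
    # Treat near-white pixels (the rollers) as background and zero them out.
    # `threshold` is illustrative; my actual segmentation differed slightly.
    background = np.all(image >= threshold, axis=-1)
    masked = image.copy()
    masked[background] = (0, 0, 0)
    return masked
```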
I am just stumped and greatly appreciate any help!
[model architecture]
[example of issue with rollers]
[example of issue with black mask applied](https://i.sstatic.net/TM0FGHCJ.png)
I've tried various architectures, some built with Keras layers, and I've even tried a pretrained ResNet50 (a sketch of that setup is below). I've also varied most of the important hyperparameters, and the behaviour persists. If it helps, I can provide any particulars.
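For reference, the pretrained-ResNet50 variant was set up roughly like this (a sketch with placeholder hyperparameters, not my exact configuration):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # single-unit binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```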
Thanks in advance for any help! 🙂