I am studying attacks on machine learning systems.
To do this, I used the Adversarial Robustness Toolbox (ART) library and tried to reproduce its example notebook "Demonstration of a black-box attack on Tesseract OCR using BlackBoxClassifier".
I followed the instructions and tried several IDEs, but I keep running into a problem at the JPEG compression step, which is meant to defend the classifier by compressing the input images.
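For context, here is a minimal sketch (plain NumPy only, not the ART API) of the idea behind that defence: a compression-like preprocessing step quantizes pixel values, which can erase the small perturbations a black-box attack adds before they ever reach the model's predict function. The `quantize` and `predict` functions below are hypothetical stand-ins, not part of ART:

```python
import numpy as np

def quantize(x, step=32):
    """Crude stand-in for JPEG compression: coarse quantization
    discards small pixel-level perturbations."""
    return np.round(x / step) * step

def predict(images):
    """Hypothetical black-box scorer: labels an image 1 if its
    mean brightness exceeds 128, else 0."""
    return (images.mean(axis=(1, 2)) > 128).astype(int)

clean = np.full((1, 8, 8), 120.0)
perturbed = clean + 10.0  # small adversarial nudge pushes the mean to 130

print(predict(perturbed))            # perturbation flips the label to 1
print(predict(quantize(perturbed)))  # quantization snaps 130 back to 128, label 0
```

This is only an illustration of why a compression defence interferes with pixel-level attacks; the real notebook wires ART's JPEG-compression preprocessor into the classifier instead.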
In the code below I try to run a black-box attack on my model, modeled on the notebook provided by the Adversarial Robustness Toolbox, but it fails.
My code:
https://github.com/mostaf7583/bacheloe/blob/master/blackbox.ipynb
I was expecting the attack to output some adversarial images, but instead it produced this:
[screenshot of the error output]