I am working on a project to generate a forged image (almost a copy of the source, but not an exact copy) from some source images.
After some epochs, the generator starts producing all-white images. I think this is because the backgrounds of all the source images are white pixels.
I want the model to learn the lines and shapes of the source images and generate them, but due to the small proportion of black to white pixels, the model fails to learn the black pixels. Any suggestions about this model, or even other architectures, would help!
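For context, here is a minimal NumPy sketch of the imbalance I mean, and of one idea I am considering: a per-pixel weight map that upweights the rare dark pixels so a plain L1 reconstruction loss cannot be minimized by outputting all-white. The image, threshold, and weight values are illustrative, not from my actual data:

```python
import numpy as np

# Synthetic "line drawing": white background with one thin black line,
# an illustrative stand-in for a source image (grayscale in [0, 1]).
img = np.ones((64, 64), dtype=np.float32)  # white = 1.0
img[30:32, 8:56] = 0.0                     # a thin black line

# The imbalance: dark pixels are a tiny fraction of the image.
dark_frac = float((img < 0.5).mean())
print(f"dark pixel fraction: {dark_frac:.4f}")  # ~0.023 here

# Idea: weight each pixel inversely to its class frequency
# (values are illustrative, not tuned).
weights = np.where(img < 0.5, 1.0 / dark_frac, 1.0 / (1.0 - dark_frac))

def weighted_l1(pred, target, w):
    """Weighted L1 reconstruction loss (NumPy sketch of the idea)."""
    return float((w * np.abs(pred - target)).mean())

# An all-white output is cheap under plain L1 but expensive once weighted.
all_white = np.ones_like(img)
print("unweighted L1 for all-white:", float(np.abs(all_white - img).mean()))
print("weighted   L1 for all-white:", weighted_l1(all_white, img, weights))
```

The same weight map could be passed to the generator's reconstruction term in the training loss; I have not verified that this alone fixes the collapse.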