Our project is working on implementing style transfer using styles from Across the Spider-Verse. We've gotten some good outputs so far, but we were wondering whether we could better align the VGG model with the styles of the universe we are relying on, in order to get better output.
The idea is that, by using transfer learning on VGG to classify the different universes from the film, we could tune VGG to pick up on some of the stylistic features used in each universe (e.g. better recognize Gwen's paintbrush strokes or Miles's half-tones).
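For context, the fine-tuning step looks roughly like this (a minimal sketch of what we tried, not our exact setup; the `universe_frames` folder, the number of classes, and the choice of VGG-19 are placeholders):

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_UNIVERSES = 6              # placeholder: one class per universe in our dataset
DATA_DIR = "universe_frames"   # placeholder: folder-per-class layout for ImageFolder

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder(DATA_DIR, transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from ImageNet-pretrained VGG-19 and swap the classifier head for our labels.
# We fine-tune all layers so the conv features themselves adapt to the universes.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
vgg.classifier[6] = nn.Linear(4096, NUM_UNIVERSES)

optimizer = torch.optim.Adam(vgg.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

vgg.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(vgg(images), labels)
    loss.backward()
    optimizer.step()
```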
We've experimented with this and ended up with a VGG model that seems to have difficulty encoding content: when we run style transfer with the fine-tuned model, our output images have almost all of the detail removed (e.g. extremely blurred or blocky).
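This is roughly the Gatys-style optimization loop where the problem shows up (again a sketch; the layer indices and weights below are illustrative choices on our part, not anything canonical):

```python
import torch
import torch.nn.functional as F

# Assumes `vgg` is the fine-tuned model from above; only the conv stack is used.
features = vgg.features.eval()
for p in features.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {21}               # conv4_2 in VGG-19 (our choice)
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 .. conv5_1 (our choice)

def extract(img):
    """Collect activations at the content and style layers."""
    content, style = {}, {}
    x = img
    for i, layer in enumerate(features):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content[i] = x
        if i in STYLE_LAYERS:
            style[i] = x
    return content, style

def gram(x):
    b, c, h, w = x.shape
    f = x.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def transfer(content_img, style_img, steps=300, style_weight=1e5):
    with torch.no_grad():
        target_c, _ = extract(content_img)
        _, target_s = extract(style_img)
    result = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([result], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        c, s = extract(result)
        loss = sum(F.mse_loss(c[i], target_c[i]) for i in CONTENT_LAYERS)
        loss = loss + style_weight * sum(
            F.mse_loss(gram(s[i]), gram(target_s[i])) for i in STYLE_LAYERS)
        loss.backward()
        opt.step()
    return result.detach()
```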
This made us wonder whether this approach is feasible at all. It doesn't seem that any other papers use anything but VGG (1).