I am quite new to using images with ML algorithms, and with VAEs in particular.
I want to train a Variational Autoencoder on some X-ray images coming from DICOM files. The thing is that the images have different Photometric Interpretations: MONOCHROME1 and MONOCHROME2. When I plot MONOCHROME2 images the result is a green-blue image, while MONOCHROME1 images come out green-yellow.
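For reference, this is roughly how I read and display the files (a minimal sketch using pydicom and matplotlib; the file name is just a placeholder):

```python
# Minimal sketch of how I currently read and plot a DICOM file
# (placeholder file name; my real data is a folder of such files).
import pydicom
import matplotlib.pyplot as plt

ds = pydicom.dcmread("some_xray.dcm")     # placeholder path
print(ds.PhotometricInterpretation)       # MONOCHROME1 or MONOCHROME2

pixels = ds.pixel_array                   # 2-D grayscale array
plt.imshow(pixels)                        # no cmap given, so matplotlib
plt.show()                                # applies its default colormap
```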
I would like to ask two things:
- Should I put all the images on the same color scale for this problem? In other words, does color matter to a variational autoencoder, or will it ignore colors and look only at shapes?
- How can I transform my pixel array to RGB for each Photometric Interpretation I have? (I've included a sketch of my current attempt below.)
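Here is a rough sketch of what I have tried so far for the conversion (assuming pydicom's `pixel_array`; I'm not sure that inverting MONOCHROME1 this way and stacking the channel three times is the right approach):

```python
# Rough sketch of my current attempt: invert MONOCHROME1 so both
# interpretations look the same, normalize to [0, 1], then repeat the
# grayscale channel three times to get an RGB-like array.
import numpy as np
import pydicom

def dicom_to_rgb(path):
    ds = pydicom.dcmread(path)
    arr = ds.pixel_array.astype(np.float32)

    # MONOCHROME1 stores inverted intensities (low value = bright),
    # so flip it to match MONOCHROME2.
    if ds.PhotometricInterpretation == "MONOCHROME1":
        arr = arr.max() - arr

    # Scale to [0, 1].
    arr = (arr - arr.min()) / (arr.max() - arr.min() + 1e-8)

    # Repeat the single grayscale channel three times -> (H, W, 3).
    return np.stack([arr, arr, arr], axis=-1)
```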
I have looked for similar questions here but couldn't find anything that clears this up for me.
Thanks in advance