I have a few questions after exploring VAEs for a while. In the standard VAE setup, we assume a single latent variable of shape `(B, H, W, D)`, parameterized by `mu` and `var`, with the prior N(0, I).
- Latent distribution: I read a bit about the Chi distribution and wonder whether the L2 norm of the latent, of shape `(B,)`, is a good indicator that the latent is Gaussian-like. In standard VAE training, I find that its value stays stably near `(D-1)**0.5`, which fits the central Chi distribution: the mode of a Chi distribution with `D` degrees of freedom is `(D-1)**0.5`. The follow-up question: if the L2 norm does not match this expected value, can we say that the latent distribution is more complicated than a Gaussian? (See the first sketch after this list.)
- Posterior collapse: (1) What are the symptoms of posterior collapse? Does it have to be strictly `mu ≈ 0`, `var ≈ 1`, and `KL ≈ 0`? (2) How should I interpret the mean value of `var`? I observed that it is also affected by `D`. Also, if an extremely small `var` indicates that the model is very certain about the input, does an exploding `var` tell us that the encoder lacks capacity? In general, do we prefer a smaller `var`, or is there an ideal value? (See the second sketch after this list.)
- Given an image, I tried converting it into Y/UV channels and learning a separate set of latents for each. Specifically, I applied the standard process to both: use one or two encoders to generate `mu` and `var` for the Y and UV inputs, compute the two KLs separately w.r.t. the normal prior, sample from both posteriors, and concatenate the samples before decoding. The decoder's job is to reconstruct the original RGB image. (The third sketch after this list shows the setup.)
My expectation was that if the VAE can construct a smooth, interpolatable latent space for images, it should also be able to handle the Y/UV latents and place the two latents from the same image close together.
Unfortunately, I didn't observe this in the experiments. The reconstruction is fine, but the stats of the latents (I will just label them 1 and 2 for convenience) are very confusing. (1) It is hard to read anything from `mu1` and `mu2`, as their values are very close to 0. However, I can always find that `var2` explodes (refer to what I asked in bullet 2), sometimes up to 200. (2) Usually, `var1` looks more Gaussian-like, as its L2 norm converges to `(D-1)**0.5`, but `var2` has a slightly larger value (refer to what I asked in bullet 1). (3) I also compute the cosine similarity and L2 distance between `mu1` and `mu2`. They are mostly orthogonal, which aligns with the observation that high-dimensional random vectors are naturally near-orthogonal. Also, `L2(mu1, mu2)` and `L2(mu2, origin)` have similar values, both larger than `L2(mu1, origin)`. What is the right way to digest these stats? Or, in general, is the VAE framework unsuitable for learning two separate but correlated Gaussian-like latent variables?
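To make the questions concrete, here are three minimal sketches of the diagnostics mentioned above (NumPy/PyTorch, with illustrative shapes; none of this is my exact code). The first covers the norm check from the first bullet: for `z ~ N(0, I_D)`, `||z||_2` is Chi-distributed with `D` degrees of freedom, whose mode is `(D-1)**0.5` and whose mean is approximately `(D-0.5)**0.5`.

```python
import numpy as np

B, D = 1024, 256                     # batch size and latent dim (illustrative)
z = np.random.randn(B, D)            # stand-in for sampled posterior latents

norms = np.linalg.norm(z, axis=1)    # (B,) L2 norm per latent vector
print("mean ||z||        :", norms.mean())      # ~ sqrt(D - 0.5) ~ 15.98
print("chi mode sqrt(D-1):", np.sqrt(D - 1))    # ~ 15.97
print("std  ||z||        :", norms.std())       # ~ sqrt(1/2) ~ 0.71
```

Of course this checks only a single scalar statistic of the distribution, which is exactly why I am unsure how much a mismatch (or a match) really implies.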
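The second sketch covers the collapse diagnostics from the second bullet: the per-dimension KL of a diagonal Gaussian posterior `N(mu, diag(var))` against the `N(0, I)` prior, plus an "active dimensions" count (the `0.01` threshold is just a heuristic I picked, not a canonical value).

```python
import torch

def kl_per_dim(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL( N(mu, var) || N(0, I) ) per latent dimension; returns shape (B, D)."""
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0)

B, D = 1024, 256
mu = torch.zeros(B, D)        # stand-ins for encoder outputs; a collapsed
logvar = torch.zeros(B, D)    # posterior has mu ~ 0, var ~ 1, hence KL ~ 0

kl_dim = kl_per_dim(mu, logvar).mean(dim=0)     # average KL per dimension
print("mean KL/dim :", kl_dim.mean().item())
print("active dims :", (kl_dim > 0.01).sum().item(), "/", D)  # heuristic threshold
print("mean var    :", logvar.exp().mean().item())
```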
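The third sketch is the Y/UV setup itself, together with the cross-latent stats from point (3). Here `enc_y`, `enc_uv`, and `dec` are placeholders for the actual encoder/decoder modules, and the MSE term stands in for my reconstruction loss.

```python
import torch
import torch.nn.functional as F

def reparam(mu, logvar):
    # z = mu + sigma * eps, the standard reparameterization trick
    return mu + (0.5 * logvar).exp() * torch.randn_like(mu)

def kl_to_prior(mu, logvar):
    # KL to N(0, I), summed over dims, averaged over the batch
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1).mean()

def step(y, uv, rgb, enc_y, enc_uv, dec):
    mu1, logvar1 = enc_y(y)      # latent 1 from the Y channel
    mu2, logvar2 = enc_uv(uv)    # latent 2 from the UV channels
    z = torch.cat([reparam(mu1, logvar1), reparam(mu2, logvar2)], dim=1)
    recon = dec(z)               # decoder reconstructs the original RGB image
    loss = (F.mse_loss(recon, rgb)
            + kl_to_prior(mu1, logvar1)
            + kl_to_prior(mu2, logvar2))

    # the stats I report: cosine similarity and the three L2 distances
    stats = {
        "cos(mu1, mu2)":   F.cosine_similarity(mu1, mu2, dim=1).mean().item(),
        "L2(mu1, mu2)":    (mu1 - mu2).norm(dim=1).mean().item(),
        "L2(mu1, origin)": mu1.norm(dim=1).mean().item(),
        "L2(mu2, origin)": mu2.norm(dim=1).mean().item(),
    }
    return loss, stats
```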
Thank you for any insights!
I've run lots of experiments and am trying to build a better understanding of the high-dimensional latent space.