Authors: Søren Hauberg, David Eklund, Georgios Arvanitidis, Dimitris Kalatzis
DOI:
Keywords:
Abstract: Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent space, which is generally assumed to be Euclidean. This assumption naturally leads to the common choice of a standard Gaussian prior over continuous latent variables. Recent work has, however, shown that this prior has a detrimental effect on model capacity, leading to subpar performance. We propose that the Euclidean assumption lies at the heart of this failure mode. To counter this, we assume a Riemannian structure over the latent space, which constitutes a more principled geometric view of the latent codes, and replace the standard Gaussian prior with a Riemannian Brownian motion prior. We propose an efficient inference scheme that does not rely on the unknown normalizing factor of this prior. Finally, we demonstrate that this prior significantly increases model capacity using only one additional scalar parameter.
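To make the central object of the abstract concrete, the following is a minimal sketch (not the authors' implementation) of how Brownian motion on a Riemannian manifold can be simulated in local coordinates with an Euler-Maruyama scheme. In coordinates, Brownian motion with generator one-half the Laplace-Beltrami operator satisfies dX^i = (1/2)|G|^{-1/2} ∂_j(|G|^{1/2} (G^{-1})^{ij}) dt + (σ dW)^i with σσ^T = G^{-1}. The conformal metric used here is a hypothetical stand-in; in the paper's setting G(x) would be the pull-back metric induced by the decoder.

```python
import numpy as np

def metric(x):
    # Hypothetical example metric on R^2 (NOT from the paper):
    # conformal metric G(x) = (1 + ||x||^2) * I.
    return (1.0 + x @ x) * np.eye(2)

def drift(x, h=1e-5):
    # Drift of coordinate Brownian motion:
    # b^i = (1/2) |G|^{-1/2} d_j ( |G|^{1/2} (G^{-1})^{ij} ),
    # with the j-derivative taken by central finite differences.
    d = len(x)
    sqrt_det = np.sqrt(np.linalg.det(metric(x)))
    b = np.zeros(d)
    for j in range(d):
        e = np.zeros(d); e[j] = h
        Gp, Gm = metric(x + e), metric(x - e)
        Ap = np.sqrt(np.linalg.det(Gp)) * np.linalg.inv(Gp)
        Am = np.sqrt(np.linalg.det(Gm)) * np.linalg.inv(Gm)
        b += (Ap[:, j] - Am[:, j]) / (2 * h)
    return 0.5 * b / sqrt_det

def brownian_step(x, dt, rng):
    # One Euler-Maruyama step: drift plus metric-adapted noise,
    # where sigma @ sigma.T = G(x)^{-1}.
    sigma = np.linalg.cholesky(np.linalg.inv(metric(x)))
    noise = sigma @ rng.standard_normal(len(x))
    return x + drift(x) * dt + np.sqrt(dt) * noise

# Sample one approximate Brownian path started at the origin.
rng = np.random.default_rng(0)
x = np.zeros(2)
for _ in range(1000):
    x = brownian_step(x, 1e-3, rng)
print(x)
```

Because the metric grows with ||x||^2, the diffusion term shrinks away from the origin, so samples concentrate where the metric is small; this is the mechanism by which a Brownian motion prior adapts to latent-space geometry, in contrast to a fixed Gaussian.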