Variational Autoencoders with Riemannian Brownian Motion Priors

Authors: Søren Hauberg, David Eklund, Georgios Arvanitidis, Dimitris Kalatzis

Abstract: Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent space, which is generally assumed to be Euclidean. This assumption naturally leads to the common choice of a standard Gaussian prior over continuous latent variables. Recent work has, however, shown that this prior has a detrimental effect on model capacity, leading to subpar performance. We propose that the Euclidean assumption lies at the heart of this failure mode. To counter this, we assume a Riemannian structure over the latent space, which constitutes a more principled geometric view of the latent codes, and replace the standard Gaussian prior with a Riemannian Brownian motion prior. We propose an efficient inference scheme that does not rely on the unknown normalizing factor of this prior. Finally, we demonstrate that this prior significantly increases model capacity using only one additional scalar parameter.
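To illustrate the idea of a Brownian motion prior on a Riemannian latent space, here is a minimal sketch in Python/NumPy. It simulates Brownian motion in local coordinates by drawing each Euler–Maruyama step from N(0, dt · G(z)⁻¹), where G is the latent-space metric; the `metric` function here is a hypothetical stand-in (the paper uses the pullback metric of the decoder), and the Christoffel-symbol drift term of the full manifold SDE is omitted for brevity, so this is only a coarse approximation of the actual sampler:

```python
import numpy as np

def metric(z):
    # Hypothetical latent-space metric: a simple position-dependent
    # conformal metric, standing in for the decoder's pullback metric.
    d = z.shape[0]
    return np.eye(d) * (1.0 + z @ z)

def brownian_motion_step(z, dt, rng):
    """One Euler-Maruyama step of Brownian motion in local coordinates.

    Steps are drawn from N(0, dt * G(z)^{-1}); regions where the metric
    is large (high distortion) are traversed slowly, so the process
    concentrates where the manifold is "flat".
    """
    G = metric(z)
    cov = dt * np.linalg.inv(G)
    return z + rng.multivariate_normal(np.zeros(z.shape[0]), cov)

def sample_prior(z0, t, n_steps, rng):
    # Simulate a path of total time t starting at the prior centre z0;
    # the endpoint is an approximate draw from the Brownian motion prior.
    z, dt = z0.copy(), t / n_steps
    for _ in range(n_steps):
        z = brownian_motion_step(z, dt, rng)
    return z
```

The total diffusion time `t` plays the role of the single additional scalar parameter mentioned in the abstract: it controls the spread of the prior without requiring the (unknown) normalizing constant of the resulting density.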
