Deep Learning of Representations

Authors: Yoshua Bengio, Aaron Courville

DOI: 10.1007/978-3-642-36657-4_1


Abstract: Unsupervised learning of representations has been found useful in many applications and benefits from several advantages, e.g., where there are many unlabeled examples and few labeled ones (semi-supervised learning), or where the examples come from a distribution different from but related to the one of interest (self-taught learning, multi-task learning, and domain adaptation). Some of these algorithms have successfully been used to learn a hierarchy of features, i.e., to build a deep architecture, either as initialization for a supervised predictor or as a generative model. Deep learning algorithms can yield representations that are more abstract and better disentangle the hidden factors of variation underlying the unknown generating distribution, i.e., capture invariances and discover non-local structure in that distribution. This chapter reviews the main motivations and ideas behind deep learning algorithms and their representation-learning components, as well as recent results in this area, and proposes a vision of challenges and hopes on the road ahead, focusing on the questions of invariance and disentangling.
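To make the idea of unsupervised representation learning concrete, the following is a minimal sketch (not from the chapter) of a single-layer autoencoder with tied weights, trained by gradient descent to reconstruct its input. The toy data, layer sizes, and learning rate are illustrative assumptions; real deep-learning pipelines stack such layers and add regularization (e.g., denoising or contraction).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points near a 1-D manifold embedded in 5-D space
# (illustrative assumption, standing in for real unlabeled examples).
t = rng.uniform(-1, 1, size=(200, 1))
X = np.hstack([t, t**2, np.sin(3 * t), 0.5 * t, -t]) + 0.01 * rng.normal(size=(200, 5))

n_in, n_hidden = X.shape[1], 2
W = rng.normal(scale=0.1, size=(n_in, n_hidden))  # tied encoder/decoder weights
b = np.zeros(n_hidden)                            # encoder bias
c = np.zeros(n_in)                                # decoder bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruct(X):
    H = sigmoid(X @ W + b)   # encoder: hidden representation
    return H @ W.T + c, H    # decoder: linear reconstruction

mse_before = np.mean((reconstruct(X)[0] - X) ** 2)

lr, n = 0.05, X.shape[0]
for epoch in range(500):
    X_hat, H = reconstruct(X)
    err = X_hat - X                      # reconstruction error
    dH = (err @ W) * H * (1 - H)         # backprop through sigmoid encoder
    gW = X.T @ dH + err.T @ H            # tied weights: encoder + decoder grads add
    W -= lr * gW / n
    b -= lr * dH.sum(axis=0) / n
    c -= lr * err.sum(axis=0) / n

mse_after = np.mean((reconstruct(X)[0] - X) ** 2)
```

After training, the two hidden units form a learned representation of the 5-D input; in the chapter's framing, such features could then initialize a supervised predictor or serve as one layer of a deeper model.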

References (168)
Matthew D. Zeiler. ADADELTA: An Adaptive Learning Rate Method. arXiv preprint, 2012.
Salah Rifai, Grégoire Mesnil, Pascal Vincent, Xavier Muller, Yoshua Bengio, Yann Dauphin, Xavier Glorot. Higher Order Contractive Auto-Encoder. European Conference on Machine Learning, pp. 645–660, 2011. DOI: 10.1007/978-3-642-23783-6_41
Nicolas Usunier, Samy Bengio, Jason Weston. WSABIE: Scaling Up to Large Vocabulary Image Annotation. International Joint Conference on Artificial Intelligence, pp. 2764–2770, 2011. DOI: 10.5591/978-1-57735-516-8/IJCAI11-460
Yoshua Bengio, Xavier Glorot, Antoine Bordes. Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach. International Conference on Machine Learning, pp. 513–520, 2011.
Yoshua Bengio, Hugo Larochelle, Dumitru Erhan. Zero-Data Learning of New Tasks. National Conference on Artificial Intelligence, pp. 646–651, 2008.
Yoshua Bengio, Frederic Morin. Hierarchical Probabilistic Neural Network Language Model. International Conference on Artificial Intelligence and Statistics, 2005.
Geoffrey E. Hinton. A Practical Guide to Training Restricted Boltzmann Machines. Neural Networks: Tricks of the Trade (2nd ed.), pp. 599–619, 2012. DOI: 10.1007/978-3-642-35289-8_32
Yoshua Bengio, Aaron C. Courville, Pascal Vincent. Unsupervised Feature Learning and Deep Learning: A Review and New Perspectives. 2012.
Thierry Bertin-Mahieux, Pierre-Antoine Manzagol, Douglas Eck. On the Use of Sparse Time Relative Auditory Codes for Music. International Symposium/Conference on Music Information Retrieval, pp. 603–608, 2008.
Ruslan Salakhutdinov. Learning Deep Boltzmann Machines Using Adaptive MCMC. International Conference on Machine Learning, pp. 943–950, 2010.