Authors: Yoshua Bengio, Aaron Courville
DOI: 10.1007/978-3-642-36657-4_1
Keywords:
Abstract: Unsupervised learning of representations has been found useful in many applications and benefits from several advantages, e.g., where there are many unlabeled examples and few labeled ones (semi-supervised learning), or where the examples come from a distribution different from but related to the one of interest (self-taught learning, multi-task learning, domain adaptation). Some of these algorithms have successfully been used to learn a hierarchy of features, i.e., to build a deep architecture, either as initialization for a supervised predictor or as a generative model. Deep learning algorithms can yield representations that are more abstract and better disentangle the hidden factors of variation underlying the unknown generating distribution, i.e., capture invariances and discover non-local structure in that distribution. This chapter reviews the main motivations and ideas behind deep learning algorithms and their representation-learning components, as well as recent results in this area, and proposes a vision of the challenges and hopes on the road ahead, focusing on the questions of invariance and disentangling.