Authors: Jakub M. Tomczak, Adam Gonczarek
DOI: 10.1007/978-3-319-08422-0_27
Keywords:
Abstract: Sparsity has been a concept of interest in machine learning for many years. In deep learning, sparse solutions play a crucial role in obtaining robust and discriminative features. In this paper, we study a new regularization term for sparse hidden units activation in the context of the Restricted Boltzmann Machine (RBM). Our proposition is based on the symmetric Kullback-Leibler divergence applied to compare the actual and the desired distribution over the active hidden units. We evaluate our method against two other sparsity-enforcing regularization terms by measuring the empirical classification error on two datasets: (i) an image dataset (MNIST), (ii) a document corpus (20-newsgroups).
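The abstract does not give the exact formulation of the regularizer, but a common way to express a sparsity target for hidden units is a symmetric KL divergence between a desired Bernoulli activation level and each unit's mean activation over a minibatch. The sketch below is an illustrative assumption of that idea, not the paper's definitive method; the function name, the target level `p_target`, and the per-unit averaging are choices made here for illustration.

```python
import numpy as np

def symmetric_kl_sparsity_penalty(hidden_probs, p_target=0.1, eps=1e-8):
    """Illustrative sketch (assumed formulation, not necessarily the paper's):
    symmetric KL divergence between a desired Bernoulli activation level
    p_target and the mean activation q_j of each hidden unit, averaged
    over hidden units.

    hidden_probs: array of shape (batch_size, n_hidden) holding
                  P(h_j = 1 | v) for a minibatch.
    """
    # Mean activation per hidden unit, clipped away from 0 and 1 for stability.
    q = np.clip(hidden_probs.mean(axis=0), eps, 1.0 - eps)
    p = p_target
    # Symmetric KL for Bernoulli distributions: KL(p || q_j) + KL(q_j || p).
    kl_pq = p * np.log(p / q) + (1.0 - p) * np.log((1.0 - p) / (1.0 - q))
    kl_qp = q * np.log(q / p) + (1.0 - q) * np.log((1.0 - q) / (1.0 - p))
    return np.mean(kl_pq + kl_qp)


# Example usage on random hidden probabilities from a hypothetical RBM:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.uniform(size=(64, 500))  # batch of 64, 500 hidden units
    print(symmetric_kl_sparsity_penalty(probs, p_target=0.1))
```

In a training loop, such a penalty would typically be scaled by a weight and added to the RBM objective so that gradients push the mean hidden activations toward the desired sparsity level.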