Authors: Lawrence K. Saul, Youngmin Cho
DOI:
Keywords: Transduction (machine learning), Linear classifier, Support vector machine, Computer science, Curse of dimensionality, Pattern recognition, Supervised learning, Artificial intelligence, Machine learning, Dimensionality reduction, Simple (abstract algebra), Non-negative matrix factorization
Abstract: We show how to incorporate information from labeled examples into nonnegative matrix factorization (NMF), a popular unsupervised learning algorithm for dimensionality reduction. In addition to mapping the data into a space of lower dimensionality, our approach aims to preserve the nonnegative components of the data that are important for classification. We identify these components from the support vectors of large-margin classifiers and derive iterative updates to preserve them in a semi-supervised version of NMF. These updates have a simple multiplicative form like their unsupervised counterparts; they are also guaranteed at each iteration to decrease their loss function, a weighted sum of I-divergences that captures the trade-off between unsupervised and supervised learning. We evaluate these updates for dimensionality reduction when they are used as a precursor to linear classification. In this role, we find that they yield much better performance than their unsupervised counterparts. We also find one unexpected benefit of the low dimensional representations discovered by our approach: often they yield more accurate classifiers than both ordinary and transductive SVMs trained in the original input space.
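For reference, below is a minimal sketch of the unsupervised multiplicative updates for NMF under the I-divergence (generalized Kullback-Leibler) objective, the counterpart that the abstract says the semi-supervised updates resemble. This is not the paper's semi-supervised algorithm; the function name, initialization, iteration count, and smoothing constant are illustrative assumptions.

```python
import numpy as np

def nmf_idiv(V, r, n_iter=200, eps=1e-9, seed=0):
    """Unsupervised NMF minimizing the I-divergence D(V || WH)
    via the classic Lee-Seung multiplicative updates.

    V    : (m, n) nonnegative data matrix
    r    : target rank (reduced dimensionality)
    eps  : small constant to avoid division by zero (assumed value)
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps   # nonnegative basis
    H = rng.random((r, n)) + eps   # nonnegative coefficients

    for _ in range(n_iter):
        # H <- H * (W^T (V / WH)) / (W^T 1)
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        # W <- W * ((V / WH) H^T) / (1 H^T)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H
```

Each of these updates is guaranteed not to increase the I-divergence, the same monotone-decrease property the abstract cites for the semi-supervised loss (a weighted sum of I-divergences trading off reconstruction against the supervised term).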