Authors: Vince D. Calhoun, Sergey M. Plis, Vamsi K. Potluru, Thomas P. Hayes, Jonathan Le Roux
DOI:
Keywords:
Abstract: Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem, which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm; however, its optimization is NP-hard. Mixed norms, such as the L$_1$/L$_2$ measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy. This is in contrast to computationally cheaper alternatives such as the plain L$_1$ norm. However, present algorithms designed for optimizing the mixed norm are slow, and other formulations for sparse NMF have been proposed using those cheaper norms instead. Our algorithm allows us to solve the mixed-norm constraints while not sacrificing computation time. We present experimental evidence on real-world datasets showing that our new algorithm performs an order of magnitude faster compared to the current state-of-the-art solvers and is suitable for large-scale datasets.
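As a point of reference for the L$_1$/L$_2$ mixed-norm sparsity mentioned in the abstract, the measure commonly used in the sparse NMF literature (often attributed to Hoyer, 2004) scores a vector between 0 (perfectly dense) and 1 (exactly one nonzero entry). The sketch below is illustrative only; the function name `hoyer_sparseness` and the zero-vector convention are assumptions, not taken from the paper.

```python
import numpy as np

def hoyer_sparseness(x):
    """L1/L2-based sparseness in [0, 1]:
    0 for a uniform vector, 1 for a vector with a single nonzero entry."""
    x = np.asarray(x, dtype=float)
    n = x.size
    l1 = np.abs(x).sum()
    l2 = np.sqrt((x ** 2).sum())
    if l2 == 0:
        return 0.0  # assumed convention for the all-zero vector
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

# A dense vector scores near 0, a maximally sparse one scores 1.
print(hoyer_sparseness(np.ones(10)))               # 0.0
print(hoyer_sparseness(np.r_[1.0, np.zeros(9)]))   # 1.0
```

Constraining each learnt feature to a target value of this measure is what makes the problem harder than plain L$_1$ regularization, which is the trade-off the abstract's algorithm is addressing.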