Authors: Paul Sajda, Wotao Yin, Jianing Shi, Stanley Osher
DOI:
Keywords:
Abstract: l1-regularized logistic regression, also known as sparse logistic regression, is widely used in machine learning, computer vision, data mining, bioinformatics and neural signal processing. The use of l1 regularization attributes attractive properties to the classifier, such as feature selection, robustness to noise, and, as a result, classifier generality in the context of supervised learning. When a sparse logistic regression problem has large-scale data in high dimensions, it is computationally expensive to minimize the non-differentiable l1-norm in the objective function. Motivated by recent work (Koh et al., 2007; Hale et al., 2008), we propose a novel hybrid algorithm based on combining two types of optimization iterations: one being very fast and memory friendly while the other is slower but more accurate. Called hybrid iterative shrinkage (HIS), the resulting algorithm is comprised of a fixed point continuation phase and an interior point phase. The first phase is based completely on memory-efficient operations such as matrix-vector multiplications, while the second phase is based on a truncated Newton's method. Furthermore, we show that various optimization techniques, including line search and continuation, can significantly accelerate convergence. The algorithm has global convergence at a geometric rate (a Q-linear rate in optimization terminology). We present a numerical comparison with several existing algorithms, including an analysis using benchmark data from the UCI machine learning repository, and show that our algorithm is the most computationally efficient without loss of accuracy.
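For concreteness, below is a minimal sketch of the kind of fixed-point (iterative shrinkage) phase the abstract describes, assuming the standard objective \( \min_w \frac{1}{m}\sum_{i=1}^{m} \log\bigl(1 + e^{-b_i a_i^\top w}\bigr) + \lambda \|w\|_1 \) and a plain gradient-step-plus-soft-threshold update. The variable names (`A`, `b`, `lam`, `tau`) and the step size are illustrative assumptions, not the paper's tuned HIS algorithm, which also includes a continuation schedule and a truncated-Newton interior point phase.

```python
import numpy as np

def soft_threshold(x, t):
    """Shrinkage operator: elementwise soft-thresholding of x at level t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def logistic_grad(A, b, w):
    """Gradient of the average logistic loss (1/m) * sum_i log(1 + exp(-b_i a_i^T w))."""
    m = A.shape[0]
    z = b * (A @ w)             # classification margins, one matrix-vector product
    s = -b / (1.0 + np.exp(z))  # d/dz of log(1 + exp(-z)), scaled by labels
    return A.T @ s / m          # a second matrix-vector product

def shrinkage_phase(A, b, lam, tau=0.5, iters=500, tol=1e-6):
    """Fixed-point iteration  w <- shrink(w - tau * grad(w), tau * lam).

    Uses only matrix-vector products, so it is fast and memory friendly,
    matching the first (fixed point continuation) phase described above.
    `tau` is an illustrative step size; convergence requires it to be small
    enough relative to the Lipschitz constant of the loss gradient.
    """
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        w_new = soft_threshold(w - tau * logistic_grad(A, b, w), tau * lam)
        if np.linalg.norm(w_new - w) <= tol * max(1.0, np.linalg.norm(w)):
            break
        w = w_new
    return w
```

Because the only expensive operations are the two products with A and its transpose, each iteration scales to large, high-dimensional data; the slower, more accurate interior point phase would then be invoked near the solution to polish the result.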