Authors: Hongbo Zhou, Qiang Cheng
DOI: 10.1109/TNNLS.2014.2314129
Keywords:
Abstract: This paper presents an accurate, efficient, and scalable algorithm for minimizing a special family of convex functions that have an $l_p$ loss function as an additive component. For this problem, well-known learning algorithms often have well-established results on accuracy and efficiency, but there exist rarely any reports of explicit linear scalability with respect to the problem size. The proposed approach starts by developing a second-order learning procedure with iterative descent for general penalization, and then builds an efficient algorithm restricted to satisfy Karmarkar's projective scaling condition. Under this condition, a lightweight, scalable message passing algorithm (MPA) is further developed by constructing a series of simpler equivalent problems. The MPA is intrinsically scalable because it only involves matrix-vector multiplication and avoids matrix inversion operations. The MPA is proven to be globally convergent for convex formulations; for nonconvex situations, it converges to a stationary point. The accuracy, efficiency, scalability, and applicability of the proposed method are verified through extensive experiments on sparse signal recovery, face image classification, and over-complete dictionary learning.
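The abstract's key scalability claim is that the solver relies only on matrix-vector products and never inverts a matrix. As a rough illustration of that design principle, and not the paper's actual MPA, the following is a minimal proximal-gradient (ISTA-style) sketch for the $p = 1$ case; the names ista_l1, soft_threshold, and the parameter lam are illustrative assumptions, not identifiers from the paper.

import numpy as np

def soft_threshold(v, t):
    """Entrywise soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_l1(A, b, lam, n_iters=500):
    """Illustrative sketch: minimize 0.5*||Ax - b||_2^2 + lam*||x||_1.

    Each iteration costs two matrix-vector products (A @ x and A.T @ r),
    so per-iteration work grows linearly with the size of A; no matrix
    inversion is performed, mirroring the scalability property the
    abstract attributes to the MPA.
    """
    # Step size 1/L, where L = ||A||_2^2 bounds the gradient's
    # Lipschitz constant for the least-squares term.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        r = A @ x - b                                    # residual: one mat-vec
        x = soft_threshold(x - (A.T @ r) / L, lam / L)   # gradient step + prox
    return x

# Toy sparse-recovery usage: recover a sparse x0 from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400)) / 10.0
x0 = np.zeros(400)
x0[[5, 50, 300]] = [1.0, -2.0, 0.5]
b = A @ x0 + 0.01 * rng.standard_normal(100)
x_hat = ista_l1(A, b, lam=0.01)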