Authors: Qi Qian, Rong Jin, Jinfeng Yi, Lijun Zhang, Shenghuo Zhu
DOI: 10.1007/S10994-014-5456-X
Keywords: Task (computing), Matrix (mathematics), Stochastic gradient descent, Online learning, Mathematics, Hybrid approach, Constraint (information theory), Adaptive sampling, Mathematical optimization
Abstract: Distance metric learning (DML) is an important task that has found applications in many domains. The high computational cost of DML arises from the large number of variables to be determined and the constraint that a distance metric has to be a positive semi-definite (PSD) matrix. Although stochastic gradient descent (SGD) has been successfully applied to improve the efficiency of DML, it can still be computationally expensive because, in order to ensure that the solution is a PSD matrix, it has to project the updated distance metric onto the PSD cone at every iteration, an expensive operation. We address this challenge by developing two strategies within SGD, i.e. mini-batch and adaptive sampling, that effectively reduce the number of updates (i.e., projections onto the PSD cone) in SGD. We also develop hybrid approaches that combine the strength of adaptive sampling with that of mini-batch online learning techniques to further improve the computational efficiency of SGD for DML. We prove theoretical guarantees for both strategies and conduct an extensive empirical study to verify the effectiveness of the proposed algorithms.
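The abstract does not spell out the algorithmic details, so the following is only a minimal sketch of the two ideas it names: amortizing the PSD projection over a mini-batch, and skipping updates via loss-dependent adaptive sampling. It assumes a triplet hinge-loss formulation of DML; the function names (project_psd, minibatch_sgd_dml, adaptive_sgd_dml), the specific loss, and the sampling probability are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone by clipping
    negative eigenvalues to zero. This eigendecomposition is the
    expensive per-iteration step the paper seeks to avoid."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return (V * np.maximum(w, 0.0)) @ V.T

def minibatch_sgd_dml(triplets, X, d, eta=0.1, batch_size=10, epochs=5):
    """Mini-batch SGD for DML on triplets (anchor, positive, negative):
    gradients are accumulated over a batch, so only one PSD projection
    is paid per batch instead of per constraint. Illustrative sketch."""
    M = np.eye(d)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        rng.shuffle(triplets)
        for start in range(0, len(triplets), batch_size):
            G = np.zeros((d, d))
            for (a, p, n) in triplets[start:start + batch_size]:
                dp = X[a] - X[p]  # anchor-positive difference
                dn = X[a] - X[n]  # anchor-negative difference
                # hinge constraint: dist(a, n) >= dist(a, p) + 1
                if 1 + dp @ M @ dp - dn @ M @ dn > 0:
                    G += np.outer(dp, dp) - np.outer(dn, dn)
            if np.any(G):  # project only when some constraint was violated
                M = project_psd(M - eta * G / batch_size)
    return M

def adaptive_sgd_dml(triplets, X, d, eta=0.1, epochs=5):
    """Adaptive-sampling flavor: after drawing a constraint, perform the
    projection-requiring update only with a probability that grows with
    the incurred loss, so easy constraints rarely trigger a projection.
    The probability rule here is a stand-in, not the paper's."""
    M = np.eye(d)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for (a, p, n) in triplets:
            dp, dn = X[a] - X[p], X[a] - X[n]
            loss = 1 + dp @ M @ dp - dn @ M @ dn
            prob = min(1.0, max(loss, 0.0))
            if loss > 0 and rng.random() < prob:
                # rescale by 1/prob to keep the stochastic gradient unbiased
                G = (np.outer(dp, dp) - np.outer(dn, dn)) / prob
                M = project_psd(M - eta * G)
    return M
```

In both sketches the returned M stays PSD, and the number of calls to project_psd, rather than the number of sampled constraints, is the quantity being reduced; a hybrid in the spirit of the abstract would apply the adaptive skip rule to whole mini-batches.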