Authors: Masoud Faraki, Mehrtash T. Harandi, Fatih Porikli
DOI: 10.1016/J.PATREC.2017.09.017
Keywords:
Abstract: In this paper, we devise a kernel version of the recently introduced keep it simple and straightforward metric learning method, hence adding a novel dimension to its applicability in scenarios where input data is non-linearly distributed. To this end, we make use of infinite-dimensional covariance matrices and show how a matrix in a reproducing kernel Hilbert space can be projected onto the positive cone efficiently. In particular, we propose two techniques towards projecting onto the positive cone in a reproducing kernel Hilbert space. The first one, though approximating the solution, enjoys a closed-form analytic formulation. The second solution is more accurate but requires Riemannian optimization techniques. Nevertheless, both solutions scale up very well, as our empirical evaluations suggest. For the sake of completeness, we also employ the Nyström method to approximate a reproducing kernel Hilbert space before learning a metric. Our experiments evidence that, compared to state-of-the-art algorithms, working directly in the reproducing kernel Hilbert space leads to more robust and better performances.
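For context, the baseline KISS metric learning method referenced in the abstract computes the difference of the inverse covariance matrices of similar- and dissimilar-pair difference vectors and then projects the result onto the positive semidefinite cone; the Nyström route mentioned above approximates the kernel feature map so that this linear recipe can be run in an (approximate) RKHS. Below is a minimal Python sketch of that baseline pipeline, assuming an RBF kernel; the function names (`kissme_metric`, `nystroem_features`) are illustrative and not from the paper, and the paper's two exact RKHS projection techniques (closed-form approximation and Riemannian optimization) are not reproduced here.

```python
import numpy as np

def kissme_metric(diff_sim, diff_dis):
    """Baseline KISS metric: M = inv(Cov_S) - inv(Cov_D), projected
    onto the PSD cone. diff_sim / diff_dis are (n_pairs, d) arrays of
    difference vectors for similar and dissimilar pairs."""
    cov_s = diff_sim.T @ diff_sim / len(diff_sim)  # similar-pair covariance
    cov_d = diff_dis.T @ diff_dis / len(diff_dis)  # dissimilar-pair covariance
    M = np.linalg.inv(cov_s) - np.linalg.inv(cov_d)
    # Euclidean projection onto the PSD cone: clip negative eigenvalues.
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0.0, None)) @ V.T

def nystroem_features(X, landmarks, gamma=1.0):
    """Nystroem approximation of an RBF feature map, so the linear
    KISSME recipe above can be applied in an approximate RKHS."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    K_mm = rbf(landmarks, landmarks)
    w, V = np.linalg.eigh(K_mm)
    w = np.clip(w, 1e-12, None)
    W = (V * w ** -0.5) @ V.T  # K_mm^{-1/2}
    return rbf(X, landmarks) @ W
```

As a usage note under these assumptions, one would map the data through `nystroem_features`, form difference vectors for labeled similar/dissimilar pairs, and score new pairs with the Mahalanobis-type distance (x - y)^T M (x - y) using the returned PSD matrix M.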