Authors: Daniel Xavier De Sousa, Thierson Couto Rosa, Wellington Santos Martins, Rodrigo Silva, Marcos André Gonçalves
DOI: 10.1007/978-3-642-35063-4_38
Keywords:
Abstract: Traditional Learning to Rank (L2R) is usually conducted in batch mode, in which a single ranking function is learned from training data and then used to order results for all future queries. This approach is not flexible, since future queries may differ considerably from those present in the training set and, consequently, the learned function may not work properly for them. Ideally, a distinct learning process should be performed on demand for each query. Nevertheless, on-demand L2R can significantly degrade query processing time, as learning has to be performed on-the-fly before the ranking function can be applied. In this paper we present a parallel implementation of an on-demand L2R technique that drastically reduces response time with respect to a previous serial implementation. Our implementation uses thousands of GPU threads to learn a ranking function for each query, and takes advantage of a reduced training set obtained through active learning. Experiments with the LETOR benchmark show that our proposed implementation achieves a mean speedup of 127x over the sequential version, while producing very competitive effectiveness.
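To illustrate the on-demand idea the abstract describes, the sketch below trains a tiny ranking function for a single query rather than reusing one batch-trained model. This is a minimal CPU illustration, not the authors' GPU implementation: the linear pointwise ranker, the uncertainty-based selection rule, and all function names (`train_on_demand_ranker`, `rank`, `n_active`) are assumptions made for the example.

```python
def train_on_demand_ranker(docs, labels, n_active=4, epochs=50, lr=0.1):
    """Learn a tiny linear pointwise ranker for ONE query on demand.

    Hypothetical stand-in for the paper's per-query L2R: "active
    learning" is simulated by training only on a small subset of
    documents, preferring those whose current score is most uncertain
    (closest to the 0/1 midpoint).
    """
    n_feat = len(docs[0])
    w = [0.0] * n_feat

    def score(x):
        return sum(wi * xi for wi, xi in zip(w, x))

    # Active selection: seed with two documents, then repeatedly add
    # the remaining document whose score is nearest 0.5.
    pool = list(range(len(docs)))
    selected, rest = pool[:2], pool[2:]
    while len(selected) < min(n_active, len(pool)) and rest:
        rest.sort(key=lambda i: abs(score(docs[i]) - 0.5))
        selected.append(rest.pop(0))

    # Pointwise squared-loss SGD on the selected documents only,
    # keeping the per-query training cost small.
    for _ in range(epochs):
        for i in selected:
            err = score(docs[i]) - labels[i]
            for j in range(n_feat):
                w[j] -= lr * err * docs[i][j]
    return w

def rank(docs, w):
    """Return document indices ordered by descending learned score."""
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for x in docs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])
```

In the paper's setting this per-query training step is what makes on-demand L2R slow on a CPU; the reported 127x speedup comes from mapping it onto thousands of GPU threads, which the sketch above does not attempt.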