Authors: M. Ehsan Abbasnejad, Edwin V. Bonilla, Scott Sanner
DOI: 10.1007/978-3-642-40991-2_33
Keywords:
Abstract: We propose a decision-theoretic sparsification method for Gaussian process preference learning. This method overcomes the loss-insensitive nature of popular sparsification approaches such as the Informative Vector Machine (IVM). Instead of selecting a subset of users and items as inducing points based on uncertainty-reduction principles, our approach is underpinned by decision theory and directly incorporates the loss function inherent to the underlying preference learning problem. We show that, under different specifications of the loss function, IVM's differential entropy criterion, a value-of-information criterion, and an upper confidence bound (UCB) criterion used in the bandit setting can all be recovered from our framework. We refer to our method as the Valuable Vector Machine (VVM), as it selects the most useful points during sparsification so as to minimize the corresponding loss. We evaluate our method on one synthetic and two real-world datasets, including one generated via Amazon Mechanical Turk and another collected from Facebook. Experiments show that variants of VVM significantly outperform IVM on all datasets under similar computational constraints.
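
Note: the following is a minimal illustrative sketch, not the paper's VVM implementation. It only makes the abstract's claim concrete that entropy-based, UCB, and value-of-information-style criteria are all instances of a single "score the candidates, pick the best" selection scheme. The posterior means and variances are synthetic stand-ins, and the expected-improvement acquisition is a common textbook proxy for a value-of-information criterion; the paper's actual loss functions may differ.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu = rng.normal(size=20)          # hypothetical GP posterior mean utility per candidate
var = rng.uniform(0.1, 1.0, 20)   # hypothetical GP posterior variance per candidate

def entropy_score(mu, var):
    # IVM-style differential entropy criterion: a Gaussian's entropy is
    # 0.5 * log(2*pi*e*var), so maximizing it reduces to picking the
    # most uncertain candidate, regardless of any downstream loss.
    return 0.5 * np.log(2 * np.pi * np.e * var)

def ucb_score(mu, var, beta=2.0):
    # Upper confidence bound criterion from the bandit setting: favor
    # candidates that are promising (high mean) or uncertain (high variance).
    return mu + beta * np.sqrt(var)

def expected_improvement_score(mu, var):
    # Expected improvement over the current best mean, used here as a
    # loss-sensitive, value-of-information-flavored stand-in.
    best = mu.max()
    sigma = np.sqrt(var)
    z = (mu - best) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

for name, score in [("entropy", entropy_score(mu, var)),
                    ("ucb", ucb_score(mu, var)),
                    ("ei", expected_improvement_score(mu, var))]:
    print(name, "selects candidate", int(np.argmax(score)))

Swapping the scoring function changes which point is chosen as the next inducing point; the loss-insensitivity the abstract criticizes corresponds to entropy_score, which ignores mu entirely.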