Authors: Brian D. Ziebart, Anqi Liu, Lev Reyzin
DOI:
Keywords: Semi-supervised learning, Pessimism, Computer science, Binary classification, Active learning, Machine learning, Multi-task learning, Generalization error, Artificial intelligence, Probabilistic logic
Abstract: Existing approaches to active learning are generally optimistic about their certainty with respect to data shift between labeled and unlabeled data. They assume that unknown datapoint labels follow the inductive biases of the learner. As a result, the most useful datapoint labels, those that refute current inductive biases, are rarely solicited. We propose a shift-pessimistic approach that assumes the worst-case conditional label distribution. This approach more closely aligns model uncertainty with generalization error, enabling more useful label solicitation. We investigate the theoretical benefits of this approach and demonstrate its empirical advantages on probabilistic binary classification tasks.
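To make the optimistic/pessimistic contrast in the abstract concrete, here is a minimal illustrative sketch, not the authors' algorithm. It compares an "optimistic" query score, taken directly from the model's own predicted probabilities, against a "pessimistic" score that falls back toward the worst-case (maximum-entropy) label distribution for unlabeled points far from the labeled sample. The distance-based trust heuristic, the `scale` parameter, and all function names below are assumptions for illustration only; the paper's actual method is a minimax formulation over conditional label distributions.

```python
# Illustrative sketch only: contrasts optimistic vs. pessimistic uncertainty
# for choosing which unlabeled point to query in binary active learning.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.linear_model import LogisticRegression


def binary_entropy(p):
    """Entropy (in nats) of a Bernoulli distribution with P(y=1) = p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))


def query_index(X_lab, y_lab, X_unl, pessimistic=True, scale=1.0):
    """Return the index of the unlabeled point with the highest uncertainty."""
    model = LogisticRegression().fit(X_lab, y_lab)
    p = model.predict_proba(X_unl)[:, 1]  # model's own belief P(y=1 | x)

    if not pessimistic:
        # Optimistic: trust the model's predicted probabilities everywhere,
        # so points the model is confident about are never solicited.
        return int(np.argmax(binary_entropy(p)))

    # Heuristic stand-in for shift-pessimism: the farther a point lies from
    # the labeled data, the less the model's belief is trusted, and the
    # closer its assumed label distribution moves to the worst case (0.5).
    dist = cdist(X_unl, X_lab).min(axis=1)
    trust = np.exp(-dist / scale)           # ~1 near labeled data, ->0 far away
    p_pess = trust * p + (1 - trust) * 0.5  # blend toward maximum entropy
    return int(np.argmax(binary_entropy(p_pess)))
```

Under this heuristic, the pessimistic score stays high exactly where the labeled data gives little evidence, so the learner solicits labels that can refute its current inductive biases rather than only those near its decision boundary, mirroring the alignment between uncertainty and generalization error described in the abstract.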