Author: Yi Zhang
DOI:
Keywords:
Abstract: Many problems in information extraction, text mining, natural language processing and other fields exhibit the same property: multiple prediction tasks are related in the sense that their outputs (labels) satisfy certain constraints. In this paper, we propose an active learning framework that exploits such relations among tasks. Intuitively, with tasks coupled by constraints, we can utilize not only the uncertainty of prediction in a single task but also the inconsistency of predictions across tasks. We formalize this idea as a cross-task value of information criterion, in which the reward of a labeling assignment is propagated and measured over all relevant tasks reachable through the constraints. A specific example of our framework leads to a cross entropy measure on coupled tasks, which generalizes classical single-task uncertainty sampling. We conduct experiments on two real-world problems: web information extraction and document classification. Empirical results demonstrate the effectiveness of our framework in actively collecting labeled examples for coupled tasks.
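Below is a minimal sketch, not the paper's exact formulation, of the intuition the abstract describes: classical entropy-based uncertainty sampling scores an example by the uncertainty of a single task's prediction, while a cross-task score additionally rewards inconsistency (here, cross entropy) between the predictions of two constraint-coupled tasks. The aligned label spaces, the additive combination of the two terms, and all function names are illustrative assumptions.

```python
# Illustrative sketch only: single-task uncertainty sampling vs. a
# cross-task score that also rewards disagreement between two coupled tasks.
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a categorical distribution p."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def cross_entropy(p, q, eps=1e-12):
    """Cross entropy H(p, q) between two categorical distributions."""
    q = np.clip(q, eps, 1.0)
    return -np.sum(p * np.log(q))

def single_task_score(p_task1):
    """Classical uncertainty sampling: prefer the most uncertain prediction."""
    return entropy(p_task1)

def cross_task_score(p_task1, p_task2):
    """Assumed illustrative combination: single-task uncertainty plus the
    cross entropy between the two tasks' predictions, so examples on which
    the coupled tasks disagree are ranked higher."""
    return entropy(p_task1) + cross_entropy(p_task1, p_task2)

# Toy data: two unlabeled examples, two tasks with (assumed) aligned labels.
preds_task1 = np.array([[0.60, 0.40], [0.55, 0.45]])
preds_task2 = np.array([[0.65, 0.35], [0.10, 0.90]])

scores = [cross_task_score(p1, p2) for p1, p2 in zip(preds_task1, preds_task2)]
query = int(np.argmax(scores))  # example 1 wins: the two tasks disagree sharply on it
print(scores, query)
```

Under single-task entropy alone the two toy examples look almost equally uncertain; the cross-task term is what separates them, which mirrors the abstract's claim that inconsistency across coupled tasks adds information beyond single-task uncertainty.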