Authors: Evan Wei Xiang, Qiang Yang, Sinno Jialin Pan, Weike Pan, Jian Su
DOI: 10.5591/978-1-57735-516-8/IJCAI11-392
Keywords:
Abstract: Transfer learning addresses the problem that labeled training data are insufficient to produce a high-performance model. Typically, given a target learning task, most transfer learning approaches require the designers to select one or more auxiliary tasks as sources. However, how to select the right source data to enable effective knowledge transfer automatically is still an unsolved problem, which limits the applicability of transfer learning. In this paper, we take one step ahead and propose a novel transfer learning framework, known as source-selection-free transfer learning (SSFTL), to free users from the need to select source domains. Instead of asking the users for source and target data pairs, as traditional transfer learning does, SSFTL turns to online information sources such as the World Wide Web or Wikipedia for help. The source data for transfer learning may be hidden somewhere within these large online sources, but we do not know where they are. Based on the online information sources, we train a large number of classifiers. Then, given a target task, a bridge is built between the labels of the potential source candidates and the target domain in SSFTL via social media with a tag cloud serving as a label translator. An added advantage of SSFTL is that, unlike many previous transfer learning approaches, which are difficult to scale up to the Web scale, SSFTL is highly scalable and can offset much of the transfer learning work to an offline stage. We demonstrate the effectiveness and efficiency of SSFTL through extensive experiments on several real-world datasets in text classification.