Authors: Mikel Penagarikano, Eduardo Lleida, Jesus Villalba, Alberto Abad, Mireia Diez
DOI:
Keywords:
Abstract: This paper describes the most relevant features of a collaborative multi-site submission to the NIST 2011 Language Recognition Evaluation (LRE), consisting of one primary and three contrastive systems, each fusing different combinations of 13 state-of-the-art (acoustic and phonotactic) language recognition subsystems. The collaboration focused on collecting and sharing training data for those target languages for which few development data were provided by NIST, defining a common development dataset to train backend and fusion parameters, and selecting the best fusions. Official and post-key results are presented and compared, revealing that the greedy approach applied to the fusions was suboptimal but achieved very competitive performance. Several factors contributed to the high performance attained by the BLZ submission, including the availability of training data for low-resource languages, the reliability of the development data (consisting only of data audited by NIST), the diversity of modeling approaches and datasets in the systems considered for fusion, and the effectiveness of the greedy search for the optimal fusion.
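The abstract refers to a greedy search over fusions of the 13 subsystems. As an illustration only, the following Python sketch shows one common form of greedy forward selection on development scores; the toy cost function, equal-weight score averaging, and all identifiers are assumptions for illustration, not the paper's actual backend or fusion recipe.

```python
# Hypothetical sketch of greedy fusion selection: start from the single best
# subsystem, repeatedly add the subsystem whose inclusion most reduces a
# development-set cost, and stop when no addition helps.
# The cost and the equal-weight averaging are placeholders, not the BLZ method.
import numpy as np

def dev_cost(scores: np.ndarray, labels: np.ndarray) -> float:
    """Toy cost: error rate when picking the highest-scoring language.
    scores: (n_trials, n_languages); labels: (n_trials,) integer language ids."""
    return float(np.mean(np.argmax(scores, axis=1) != labels))

def greedy_fusion(subsystem_scores: list[np.ndarray], labels: np.ndarray) -> list[int]:
    """Return indices of subsystems selected by greedy forward search."""
    remaining = set(range(len(subsystem_scores)))
    selected: list[int] = []
    best_cost = np.inf
    while remaining:
        candidate, candidate_cost = None, best_cost
        for i in remaining:
            # Fuse the already-selected subsystems plus candidate i by averaging scores.
            fused = np.mean([subsystem_scores[j] for j in selected + [i]], axis=0)
            cost = dev_cost(fused, labels)
            if cost < candidate_cost:
                candidate, candidate_cost = i, cost
        if candidate is None:  # no remaining subsystem improves the fusion
            break
        selected.append(candidate)
        remaining.remove(candidate)
        best_cost = candidate_cost
    return selected
```

Such a forward search is greedy in that it never revisits earlier choices, which is why it can be suboptimal relative to an exhaustive search over all subsystem combinations while remaining far cheaper to run.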