Authors: Takaaki Hori, Shinji Watanabe, Atsushi Nakamura
DOI:
Keywords: Machine learning, Word error rate, Speech recognition, Minification, Viterbi beam search, Heuristic, Pruning (decision trees), Computer science, Beam search, Artificial intelligence, Heuristic (computer science), Function (mathematics), Vocabulary
Abstract: This paper describes improvements in a search error risk minimization approach to fast beam search for speech recognition. In our previous work, we proposed this approach, which reduces search errors by optimizing the pruning criterion. While conventional methods use heuristic criteria to prune hypotheses, our method employs a function that makes a more precise pruning decision using rich features extracted from each hypothesis. The parameters of the function can be estimated to minimize a loss function based on search error risk. In this paper, we improve the approach by introducing a modified loss function, the arc-averaged risk, which potentially has a higher correlation with the actual word error rate than the original one. We also investigate various combinations of features. Experimental results show that a further reduction in search errors over the original method is obtained on a 100K-word vocabulary lecture speech transcription task.
Index Terms: speech recognition, beam search, pruning, search error, WFST
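To make the contrast in the abstract concrete, the following is a minimal sketch (not the paper's actual implementation) of the two pruning styles it compares: a conventional heuristic criterion that keeps hypotheses within a fixed beam width of the best score, versus a learned decision function over per-hypothesis features. The feature set, weights, and threshold here are hypothetical placeholders; in the paper, the function's parameters would be estimated to minimize a risk-based loss.

```python
def heuristic_prune(hyps, beam_width):
    """Conventional pruning: keep hypotheses whose log score falls
    within `beam_width` of the best score (a fixed heuristic)."""
    best = max(score for _, score in hyps)
    return [(h, s) for h, s in hyps if s >= best - beam_width]

def learned_prune(hyps, weights, threshold=0.0):
    """Learned pruning sketch: keep a hypothesis if a linear function
    of its features exceeds a threshold. Features and weights below
    are toy placeholders, not the paper's actual feature set."""
    best = max(score for _, score in hyps)
    kept = []
    for h, s in hyps:
        feats = {"score_gap": s - best, "length": float(len(h.split()))}
        decision = sum(weights.get(k, 0.0) * v for k, v in feats.items())
        if decision >= threshold:
            kept.append((h, s))
    return kept

if __name__ == "__main__":
    # Three partial hypotheses with log-probability scores.
    hyps = [("a b", -10.0), ("a c", -12.5), ("a d", -20.0)]
    print(heuristic_prune(hyps, beam_width=5.0))
    print(learned_prune(hyps, {"score_gap": 1.0, "length": 0.0},
                        threshold=-6.0))
```

With a richer feature set and trained weights, the learned function can keep low-scoring hypotheses that are still likely to recover, which is the intuition behind replacing the fixed beam width.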