Authors: A. Adjoudani, C. Benoît
DOI: 10.1007/978-3-662-13015-5_35
Keywords:
Abstract: In this paper, we describe two architectures for combining automatic speechreading and acoustic speech recognition. We propose a model which can improve the performance of an audio-visual recognizer in a speaker-dependent, isolated-word task. This is achieved by using a hybrid system based on HMMs trained respectively with acoustic and optic data. Both architectures have been tested on degraded audio over a wide range of S/N ratios. The results of these experiments are presented and discussed.