Authors: Holger Mitterer, Eva Reinisch
DOI: 10.3758/S13414-016-1249-6
Keywords: Gesture, Auditory perception, Speech recognition, Eye tracking, Speech perception, Motor theory of speech perception, Visual processing, Neurocomputational speech processing, Visual perception, Communication, Psychology
Abstract: Two experiments examined the time course of the use of auditory and visual speech cues in spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of information from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the processing load was reduced by keeping the display constant over a fixed number of successive trials. Under these conditions, the visual speech cues were used. Moreover, the eye-tracking data indicated that this visual information was used immediately, even earlier than the auditory information. In combination, the results indicate that visual speech cues are not used automatically, but when they are used, they are used immediately.