Author(s): Dominic W. Massaro, Peter B. Egan
DOI: 10.3758/BF03212421
Keywords:
Abstract: This experiment examines how emotion is perceived by using facial and vocal cues of a speaker. Three levels of affect were presented using a computer-generated face. Three levels of affect were obtained by recording the voice of a male amateur actor who spoke a semantically neutral word in different simulated emotional states. These two independent variables were presented to subjects in all possible permutations-visual alone, auditory alone, and visual and auditory together-which gave a total set of 15 stimuli. The subjects were asked to judge the stimuli in a two-alternative forced choice task (either HAPPY or ANGRY). The results indicate that subjects evaluate and integrate information from both modalities to perceive emotion. The influence of one modality was greater to the extent that the other was ambiguous (neutral). A fuzzy logical model of perception (FLMP) fit the judgments significantly better than an additive model, which weakens theories based on an additive combination of modalities, categorical perception, and perception from only a single modality.
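The contrast between the two models in the abstract can be sketched numerically. The snippet below uses the standard formulation of the FLMP (multiplicative integration of fuzzy truth values with a relative-goodness decision rule) against a simple additive average; the particular support values are hypothetical and not taken from the paper.

```python
# Sketch of the two integration rules contrasted in the abstract.
# a and v are hypothetical truth values (0..1) representing how much
# the auditory (voice) and visual (face) cues each support "HAPPY";
# the numbers below are illustrative, not data from the experiment.

def flmp(a: float, v: float) -> float:
    """Fuzzy logical model of perception: multiplicative integration
    followed by a relative-goodness (choice-rule) decision stage."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

def additive(a: float, v: float) -> float:
    """Additive model: a simple average of the two sources."""
    return (a + v) / 2

# When one source is ambiguous (0.5), the FLMP lets the unambiguous
# source dominate, while averaging dilutes its influence:
print(flmp(0.9, 0.5))      # -> 0.9
print(additive(0.9, 0.5))  # -> 0.7
```

This illustrates the abstract's finding that the influence of one modality is greatest when the other is neutral: under the FLMP an ambiguous cue (0.5) leaves the decision to the clearer cue, whereas an additive rule always pulls the response toward the mean.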