Abstract: The role of emotions and other affective states in Human-Computer Interaction (HCI) is gaining importance. Introducing affect into computer applications typically makes these systems more efficient, effective, and enjoyable. This paper presents a model that is able to extract interpersonal stance from vocal signals. To achieve this, a dataset of 3840 sentences spoken by 20 semi-professional actors was built and used to train and test a model based on Support Vector Machines (SVMs). An analysis of the results indicates that there is much variation in the way people express stance, which makes it difficult to build a generic model. Instead, the model shows good performance at the individual level (with accuracy above 80%). The implications of these findings for HCI are discussed.
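The per-speaker setup described above can be illustrated with a minimal sketch: one SVM is trained and tested on the sentences of a single speaker. The feature vectors, label set, and scikit-learn pipeline below are assumptions for illustration, not the authors' actual feature extraction or model configuration.

```python
# Minimal sketch: per-speaker stance classification with an SVM.
# Feature vectors and labels here are synthetic placeholders; the paper's
# real input would be acoustic features extracted from the spoken sentences.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical data for one speaker: 192 sentences (3840 / 20 actors),
# each represented by a small acoustic feature vector, with a stance label.
n_sentences, n_features, n_stances = 192, 12, 4
X = rng.normal(size=(n_sentences, n_features))
y = rng.integers(0, n_stances, size=n_sentences)

# Train and evaluate an individual-level model, mirroring the paper's
# finding that models work best when fit per speaker.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"per-speaker test accuracy: {accuracy:.2f}")
```

With real acoustic features the per-speaker accuracy reported in the paper exceeds 80%; on the random placeholder data above the score is near chance, which is expected.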