Emotion space for analysis and synthesis of facial expression

Authors: S. Morishima, H. Harashima

DOI: 10.1109/ROMAN.1993.367724

Keywords: Face (geometry), Expression (mathematics), Generalization, Artificial neural network, Layer (object-oriented design), Speech recognition, Computer science, Facial Action Coding System, Facial expression, Pattern recognition (psychology), Artificial intelligence

Abstract: This paper presents a new emotion model which gives criteria for deciding a human's condition from a face image. Our final goal is to realize a very natural and user-friendly human-machine communication environment by giving the computer terminal or system the ability to understand the user's condition as well. It is therefore necessary to express the emotional meaning of a parameterized facial expression and its motion quantitatively. The proposed emotion model is based on a 5-layered neural network, which has generalization and nonlinear mapping performance. The input and output layers have the same number of units, so an identity mapping can be realized and an emotion space is constructed in the middle layer (3rd layer). The middle layer represents the value in emotion space: the mapping from input to middle layer corresponds to emotion recognition, and the mapping from middle layer to output corresponds to expression synthesis. Training is performed with 13 typical emotion patterns expressed by parameters. A subjective test proves the propriety of this model. The Facial Action Coding System is selected as an efficient criterion to describe delicate facial expression and motion.
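The architecture described in the abstract is essentially an autoencoder: a 5-layer network trained on identity mapping, whose 3-unit middle layer forms the emotion space. The sketch below is a minimal illustration of that idea, not the authors' implementation; the layer sizes, learning rate, and random stand-ins for the 13 expression-parameter patterns are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 17 expression parameters in and out (the paper
# parameterizes expressions via FACS-style units), 3-unit middle layer
# that serves as the "emotion space".
sizes = [17, 10, 3, 10, 17]


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def init_params(sizes):
    # One (weight, bias) pair per layer transition.
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]


def forward(params, x):
    # Return activations of every layer, input included.
    acts = [x]
    for W, b in params:
        x = sigmoid(x @ W + b)
        acts.append(x)
    return acts


def train_step(params, x, lr=0.2):
    # One step of plain backpropagation on the identity mapping:
    # the target is the input itself.
    acts = forward(params, x)
    delta = (acts[-1] - x) * acts[-1] * (1.0 - acts[-1])
    for i in reversed(range(len(params))):
        W, b = params[i]
        gW = acts[i].T @ delta
        gb = delta.sum(axis=0)
        delta = (delta @ W.T) * acts[i] * (1.0 - acts[i])
        params[i] = (W - lr * gW, b - lr * gb)
    return params


# 13 typical expression-parameter patterns (random stand-ins here).
X = rng.uniform(0.1, 0.9, (13, 17))

params = init_params(sizes)
err_before = np.mean((forward(params, X)[-1] - X) ** 2)
for _ in range(3000):
    params = train_step(params, X)
err_after = np.mean((forward(params, X)[-1] - X) ** 2)

# 3rd-layer activations: one 3-D emotion-space point per pattern.
emotion_space = forward(params, X)[2]
print(emotion_space.shape)  # (13, 3)
```

Because input and output dimensions match and the middle layer is the narrowest, training toward identity forces the 13 patterns through a 3-dimensional bottleneck; recognition and synthesis are then the encoder and decoder halves of the same network.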

References (11)
Shigeo Morishima, Hiroshi Harashima, Facial Animation Synthesis for Human-Machine Communication System. International Conference on Human-Computer Interaction, pp. 1085-1090 (1993)
S. Morishima, H. Harashima, Human machine interface using media conversion and model-based coding schemes. International Symposium on Visual Computing, pp. 95-105 (1992), 10.1007/978-4-431-68204-2_7
Paul Ekman, Wallace V. Friesen, Facial Action Coding System. PsycTESTS Dataset (2019), 10.1037/T27734-000
S. Morishima, H. Harashima, H. Miyakawa, A proposal of a knowledge based isolated word recognition. International Conference on Acoustics, Speech, and Signal Processing, vol. 11, pp. 713-716 (1986), 10.1109/ICASSP.1986.1169202
S. Morishima, T. Sakaguchi, H. Harashima, A facial image synthesis system for human-machine interface. Robot and Human Interactive Communication, pp. 363-368 (1992), 10.1109/ROMAN.1992.253860
Yumiko Fukuda, Shizuo Hiki, Characteristics of the mouth shape in the production of Japanese. The Journal of the Acoustical Society of Japan (E), vol. 3, pp. 75-91 (1982), 10.1250/AST.3.75
C.S. Choi, H. Harashima, T. Takebe, Analysis and synthesis of facial expressions in knowledge-based coding of facial image sequences. International Conference on Acoustics, Speech, and Signal Processing, pp. 2737-2740 (1991), 10.1109/ICASSP.1991.150968
Michael Potmesil, Eric M. Hoffert, The pixel machine: a parallel image computer. International Conference on Computer Graphics and Interactive Techniques, vol. 23, pp. 69-78 (1989), 10.1145/74333.74340
Carl E. Williams, Kenneth N. Stevens, Emotions and Speech: Some Acoustical Correlates. The Journal of the Acoustical Society of America, vol. 52, pp. 1238-1250 (1972), 10.1121/1.1913238
S. Morishima, H. Harashima, A media conversion from speech to facial image for intelligent man-machine interface. IEEE Journal on Selected Areas in Communications, vol. 9, pp. 594-600 (1991), 10.1109/49.81953