Authors: Nathan Henderson, Wookhee Min, Jonathan Rowe, James Lester
DOI:
Keywords:
Abstract: Accurately recognizing students' affective states is critical for enabling adaptive learning environments to promote engagement and enhance learning outcomes. Multimodal approaches to student affect recognition capture multi-dimensional patterns of student behavior through the use of multiple data channels. An important factor in multimodal affect recognition is the context in which affect is experienced and exhibited. In this paper, we present a multimodal, multitask affect recognition framework that predicts students' future affective states as auxiliary training tasks and uses prior affective states as input features to capture bi-directional affective dynamics and enhance the training of affect recognition models. Additionally, we investigate cross-stitch networks to maintain parameterized separation between shared and task-specific representations and task-specific uncertainty-weighted loss functions for contextual …
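The abstract names two mechanisms: cross-stitch units that learn how much to share between task branches, and a loss that weights each task by learned uncertainty. The following is a minimal sketch of both ideas, not the authors' implementation; it assumes a PyTorch setup with two illustrative task branches (current affect and the auxiliary future-affect task), and all module and variable names are hypothetical.

```python
import torch
import torch.nn as nn


class CrossStitchUnit(nn.Module):
    """Learned linear mixing of activations from two task-specific branches."""

    def __init__(self):
        super().__init__()
        # 2x2 mixing matrix, initialized near identity so each branch
        # starts out relying mostly on its own features.
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1], [0.1, 0.9]]))

    def forward(self, x_a, x_b):
        mixed_a = self.alpha[0, 0] * x_a + self.alpha[0, 1] * x_b
        mixed_b = self.alpha[1, 0] * x_a + self.alpha[1, 1] * x_b
        return mixed_a, mixed_b


class UncertaintyWeightedLoss(nn.Module):
    """Weights per-task losses by learned homoscedastic uncertainty
    (Kendall et al., 2018): sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2)."""

    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total


if __name__ == "__main__":
    # Hypothetical usage: combine a current-affect loss with an auxiliary
    # future-affect loss via cross-stitching and uncertainty weighting.
    stitch = CrossStitchUnit()
    weigher = UncertaintyWeightedLoss(num_tasks=2)

    feat_current = torch.randn(8, 64)  # features from the current-affect branch
    feat_future = torch.randn(8, 64)   # features from the future-affect branch
    mixed_current, mixed_future = stitch(feat_current, feat_future)

    loss_current = mixed_current.pow(2).mean()  # placeholder task losses
    loss_future = mixed_future.pow(2).mean()
    total_loss = weigher([loss_current, loss_future])
    total_loss.backward()
```

In this sketch, the cross-stitch parameters and the per-task log-variances are trained jointly with the rest of the network, so the model can learn both how much representation to share across tasks and how heavily to weight the auxiliary future-affect objective.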