“…There is an entire class of solutions, called multimodal affective human–computer interaction, that enables computer systems to recognize specific affective states. In these approaches, emotions can be recognized in many ways, including those based on:
- Voice parameters (timbre, raised voice, speaking rate, linguistic analysis, and speech errors) [7,8,9,10] (see the sketch after this list);
- Characteristics of writing [11,12,13,14,15];
- Changes in facial expressions in specific areas of the face [16,17,18,19,20];
- Gesture and posture analysis [21,22,23,24];
- Characterization of biological signals, including but not limited to respiration, skin conductance, blood pressure, brain imaging, and brain bioelectrical signals [25,26,27,28,29,30];
- Context: assessing the fit between the emotion and the context of expression [31].
…”
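
To make the first modality more concrete, the following is a minimal sketch of extracting simple voice parameters of the kind listed above: pitch statistics (a proxy for a raised voice), loudness, and a crude speaking-rate estimate. It assumes the librosa library is available; the file name utterance.wav and the feature names are hypothetical, and none of this reflects the specific methods of the cited works.

```python
# Minimal sketch: extract basic voice parameters from one utterance.
# Assumes librosa is installed; "utterance.wav" is a hypothetical file.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=None)

# Fundamental frequency (F0) track; a raised voice often shows as a
# higher mean F0 and larger F0 variability.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
pitch_mean = float(np.nanmean(f0))
pitch_std = float(np.nanstd(f0))

# RMS energy as a simple loudness proxy.
rms = librosa.feature.rms(y=y)[0]
loudness_mean = float(np.mean(rms))

# Onset count per second as a rough speaking-rate proxy; a true
# syllable rate would need a dedicated detector.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
duration = librosa.get_duration(y=y, sr=sr)
speaking_rate = len(onsets) / duration if duration > 0 else 0.0

features = {
    "pitch_mean_hz": pitch_mean,
    "pitch_std_hz": pitch_std,
    "loudness_rms": loudness_mean,
    "onsets_per_sec": speaking_rate,
}
print(features)
```

In practice, such a feature vector would be one input among many to a trained emotion classifier; linguistic analysis and speech-error detection, also listed above, require separate speech-recognition and text-processing stages.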