Summary
Human facial expressions are complex, multi-component signals that can communicate rich information about emotions,1,2,3,4,5 including specific categories, such as “anger,” and broader dimensions, such as “negative valence, high arousal.”6,7,8
An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information—i.e., specific categories and broader dimensions—via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication.9
We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver’s perceptions to model the specific facial signal components that represent emotion category and dimensional information to them.10,11,12
First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions13 plus 19 complex emotions3) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent—i.e., multiplex—categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results—based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms—show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. These findings provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.