Emotional facial expressions critically impact social interactions and cognition. However, emotion research to date has generally relied on the assumption that people represent categorical emotions in the same way, using standardized stimulus sets and overlooking important individual differences. To address this problem, we developed and tested a task that uses genetic algorithms to derive assumption-free, participant-generated emotional expressions. One hundred and five participants generated a subjective representation of happy, angry, fearful and sad faces. Population-level consistency was observed for happy faces, but fearful and sad faces showed a high degree of variability. High test–retest reliability was observed across all emotions. A separate group of 108 individuals accurately identified happy and angry faces from the first study, while fearful and sad faces were commonly misidentified. These findings are an important first step towards understanding individual differences in emotion representation, with the potential to reconceptualize the way we study atypical emotion processing in future research.
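To illustrate the general approach, the sketch below shows how a genetic algorithm can evolve face stimuli toward a participant's internal representation: candidate faces are encoded as parameter vectors, the participant's ratings serve as the fitness function, and selection, crossover, and mutation iterate toward the subjective representation. This is a minimal illustrative sketch, not the authors' implementation; all names and parameters (N_PARAMS, rate_face, POP_SIZE, and the stand-in target vector) are hypothetical assumptions.

```python
# Minimal genetic-algorithm sketch (hypothetical; not the paper's actual code).
# Faces are parameter vectors; in the real task, fitness would come from a
# participant's ratings rather than the stand-in target used here.
import random

N_PARAMS = 20        # hypothetical dimensionality of the face-parameter space
POP_SIZE = 30        # candidate faces shown per generation
N_GENERATIONS = 15
MUTATION_SD = 0.1

def random_face():
    """A candidate face as a vector of normalized parameters."""
    return [random.uniform(-1.0, 1.0) for _ in range(N_PARAMS)]

def rate_face(face):
    """Placeholder fitness: in the task, this would be the participant's
    rating of how well the rendered face matches the target emotion."""
    target = [0.5] * N_PARAMS  # stand-in for the participant's representation
    return -sum((f - t) ** 2 for f, t in zip(face, target))

def crossover(parent_a, parent_b):
    """Uniform crossover: each parameter inherited from a random parent."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(face):
    """Gaussian perturbation of each parameter, clipped to the valid range."""
    return [max(-1.0, min(1.0, p + random.gauss(0.0, MUTATION_SD)))
            for p in face]

population = [random_face() for _ in range(POP_SIZE)]
for generation in range(N_GENERATIONS):
    # Rank faces by (participant-supplied) fitness and keep the top half.
    population.sort(key=rate_face, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Refill the population by recombining and mutating survivors.
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=rate_face)  # the evolved subjective representation
```

Because the fitness signal is the participant's own judgment rather than an experimenter-defined template, the converged stimulus reflects that individual's representation of the emotion, which is what makes the method assumption-free at the population level.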