“…This allowed us to adjust the faces parametrically along multiple dimensions, including age, gender, ethnicity, and emotional expression. We chose FaceGen because it has been validated by human participant ratings (Roesch et al., 2011) and has been widely used in studies of emotional expression (e.g., Hass, Weston, & Lim, 2016; N’Diaye, Sander, & Vuilleumier, 2009; Oosterhof & Todorov, 2009). To prepare the final 12 face stimuli for testing participants’ sensitivity to happy faces, we first generated 36 happy faces from four identities (two female) with FaceGen Modeler by varying expressiveness at 0%, 12%, 25%, 37%, 50%, 62%, 75%, 87%, and 100%.…”
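The stimulus set described above is a full crossing of identity and expressiveness level. A minimal sketch of that design grid, using hypothetical identity labels (the original identities are not named in the text), confirms the stated count of 36 generated faces:

```python
from itertools import product

# Hypothetical labels for the four FaceGen identities (two female, two male)
identities = ["F1", "F2", "M1", "M2"]

# Nine happy-expressiveness steps per identity, spanning 0% to 100%
n_levels = 9

# Full factorial crossing: every identity at every expressiveness level
stimuli = list(product(identities, range(n_levels)))
print(len(stimuli))  # 4 identities x 9 levels = 36 happy faces
```

From this 36-face pool, the authors then selected the final 12 test stimuli; the selection criteria are described in the surrounding text, not in this sketch.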