“…When directly compared (which is rare), listeners tend to label vocalizations more accurately than prosody, especially for anger, joy, sadness, disgust, and fear (Hawk et al., 2009). Nonetheless, cross-cultural studies that have presented emotional prosody (Pell, Paulmann, Dara, Alasseri, & Kotz, 2009; Scherer, Banse, & Wallbott, 2001; Thompson & Balkwill, 2006) or vocalizations (Laukka et al., 2013; Sauter & Eimer, 2009; Sauter & Scott, 2007) argue that each type of vocal expression possesses a ‘universal’ set of acoustic features that uniquely refers to different basic emotions and predicts how listeners assign meaning to these signals. Recent data on speech-embedded emotions are also beginning to reveal the time course of emotional prosody recognition, based on behavioral judgments of stimuli gated into different durations; this work shows that acoustic patterns in speech differentiate rapidly, conveying basic emotional meanings to listeners after approximately 400–800 ms of acoustic information (Cornew, Carver, & Love, 2010; Jiang, Paulmann, Robin, & Pell, in press; Pell & Kotz, 2011; Rigoulot, Wassiliwizky, & Pell, 2013).…”