The distinctiveness of emotion expressions in faces and voices is thought to increase with emotion intensity, but recent work on human nonverbal vocalizations challenges this common assumption: peak emotions in fact elicit maximal confusion. Whether this perceptual pattern reflects changing physical stimulus attributes or listeners' varying ability to use the available information is unknown. To adjudicate between these alternatives, we tested intensity effects on objective stimulus properties using supervised learning models and information-theoretic analyses. We show that ambiguity is not a mere perceptual phenomenon but instead reflects a tradeoff between emotion-category and emotion-intensity information available in the vocalizations' low-level acoustic structure. This componential information about emotion is weighted differently across intensity levels: the composition of signal parts serving classification, intensification, or both varies substantially with the expressed emotional intensity. Maximally intense vocal expressions primarily signal intensity and, to a lesser extent, emotion category, suggesting that the communicative function of vocalizations shifts with their social or biological relevance.
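The information-theoretic logic above can be illustrated with a minimal sketch: for a discretized acoustic feature, one can compare how much mutual information it carries about emotion category versus emotion intensity. This is a toy example on synthetic labels, not the authors' actual features or analysis pipeline; the plug-in mutual-information estimator and the simulated data are illustrative assumptions only.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of discrete mutual information I(X;Y) in bits.

    x, y: equal-length sequences of discrete labels (e.g. a binned
    acoustic feature and emotion-category or intensity labels).
    """
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            p_xy = np.mean((x == xv) & (y == yv))  # joint probability
            if p_xy > 0:
                p_x = np.mean(x == xv)             # marginals
                p_y = np.mean(y == yv)
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

# Synthetic demo (hypothetical data): a binned feature that tracks
# intensity closely but category only loosely, mimicking the reported
# tradeoff at high intensities.
rng = np.random.default_rng(0)
n = 2000
category = rng.integers(0, 4, n)    # 4 emotion categories
intensity = rng.integers(0, 3, n)   # 3 intensity levels
feature = intensity + (rng.random(n) < 0.2) * category  # noisy mixture
feature = np.round(feature).astype(int)

print("I(feature; category)  =", round(mutual_information(feature, category), 3), "bits")
print("I(feature; intensity) =", round(mutual_information(feature, intensity), 3), "bits")
```

In this construction the feature yields more bits about intensity than about category, which is the kind of asymmetry the abstract describes for maximally intense vocalizations; on real data one would estimate such quantities per acoustic feature with bias-corrected estimators.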