We address two research applications in this methodological review: starting from an audio recording, the goal may be to characterize nonlinear phenomena (NLP) at the level of voice production or to test their perceptual effects on listeners. A crucial prerequisite for this work is the ability to detect NLP in acoustic signals, which can then be correlated with biologically relevant information about the caller and with listeners’ reactions. NLP are often annotated manually, but this is labor-intensive and not very reliable; advanced visualization aids such as reassigned spectrograms and phasegrams, which we describe, can make manual annotation easier and more consistent. Objective acoustic features can also be useful, including general descriptives (harmonics-to-noise ratio, cepstral peak prominence, vocal roughness), statistics derived from nonlinear dynamics (correlation dimension), and NLP-specific measures (depth of modulation and subharmonics). On the perception side, playback studies can benefit greatly from tools for directly manipulating NLP in recordings. Adding frequency jumps, amplitude modulation, and subharmonics is relatively straightforward; creating biphonation, imitating chaos, or removing NLP from a recording is more challenging, but feasible with parametric voice synthesis. We describe the most promising algorithms for analyzing and manipulating NLP and provide detailed examples with audio files and R code in the supplementary materials (https://osf.io/gs8u3/).
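To illustrate why adding amplitude modulation and subharmonics is considered relatively straightforward, the sketch below modulates the amplitude of a synthetic tone at half its fundamental frequency, which introduces sidebands at f0 ± f0/2 and thus a subharmonic-like spectrum (period doubling). This is a minimal illustration in Python/NumPy, not the R code from the supplementary materials; the carrier frequency, sample rate, and modulation depth are arbitrary example values.

```python
import numpy as np

sr = 16000                     # sample rate in Hz (example value)
dur = 0.5                      # duration in seconds
f0 = 440.0                     # fundamental of the synthetic "voice"
t = np.arange(int(dur * sr)) / sr

# A pure tone stands in for a harmonic voice source.
carrier = np.sin(2 * np.pi * f0 * t)

# Amplitude modulation at f0/2 adds sidebands at f0 +/- f0/2
# (here 220 and 660 Hz), heard as a subharmonic at half the pitch.
mod_depth = 0.4                # modulation depth, 0..1 (example value)
modulator = 1.0 + mod_depth * np.sin(2 * np.pi * (f0 / 2) * t)
modulated = carrier * modulator

# Inspect the spectrum: energy now appears at f0 and at the sidebands.
spec = np.abs(np.fft.rfft(modulated))
freqs = np.fft.rfftfreq(len(modulated), 1.0 / sr)
```

The same idea scales to real recordings: multiplying the waveform by a slowly varying modulator leaves the original harmonics in place while adding the modulation sidebands that listeners perceive as subharmonics or roughness, depending on the modulation rate.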