The perception of complex acoustic signals such as speech and music depends on the interaction between peripheral and central auditory processing. As information travels from the cochlea to primary and associative auditory cortices, the incoming sound is subjected to increasingly detailed and refined analysis. These levels of analysis are thought to include low-level automatic processes that detect, discriminate, and group sounds that are similar in physical attributes such as frequency, intensity, and location, as well as higher-level schema-driven processes that reflect listeners' experience and knowledge of the auditory environment. In this review, we describe studies that have used event-related brain potentials to investigate the processing of complex acoustic signals (e.g., speech, music). In particular, we examine the effect of hearing loss on the neural representation of sound and how cognitive factors and learning can help compensate for perceptual difficulties. The notion of auditory scene analysis is used as a conceptual framework for studying and interpreting the perception of sound.