A computational model of the dolphin auditory system was developed to describe how multiple discrimination cues may be represented and employed during echolocation discrimination tasks. The model consisted of a bank of gammatone filters followed by half-wave rectification and low-pass filtering. The output of the model resembles a spectrogram; however, the model reflects the temporal and spectral resolving properties of the dolphin auditory system. Model outputs were organized to represent discrimination cues related to spectral, temporal, and intensity information. Two empirical experiments, a phase discrimination experiment [Johnson et al., Animal Sonar Processes and Performance (Plenum, New York, 1988)] and a cylinder wall thickness discrimination task [Au and Pawloski, J. Comp. Physiol. A 170, 41-47 (1992)], were then simulated. Model performance was compared to dolphin performance. Although multiple discrimination cues were potentially available to the dolphin, simulation results suggest that temporal information was used in the former experiment and spectral information in the latter. The model's representation of sound provides a closer approximation of what the dolphin may be hearing than conventional spectrogram, time-amplitude, or spectral representations.
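
The sketch below illustrates the general processing chain named in the abstract: a gammatone filter bank, half-wave rectification, and low-pass filtering, producing a spectrogram-like (channel by time) representation. The filter count, center-frequency range, gammatone order, low-pass cutoff, and the example click signal are illustrative assumptions, not the parameters used in the paper.

```python
# Minimal sketch of a gammatone filter bank -> half-wave rectification ->
# low-pass filtering model. All parameter values below are assumptions.
import numpy as np
from scipy.signal import butter, lfilter

def gammatone_ir(fc, fs, order=4, duration=0.010):
    """Gammatone impulse response at center frequency fc (Hz)."""
    t = np.arange(0, duration, 1.0 / fs)
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)        # equivalent rectangular bandwidth
    b = 1.019 * erb                                 # gammatone bandwidth parameter
    ir = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return ir / np.max(np.abs(ir))

def auditory_model(signal, fs, n_channels=40, f_lo=5e3, f_hi=150e3, lp_cutoff=2e3):
    """Filter bank -> half-wave rectification -> low-pass filtering.

    Returns center frequencies and a (channels x time) spectrogram-like array.
    """
    # Log-spaced center frequencies spanning an assumed dolphin hearing range.
    fcs = np.geomspace(f_lo, f_hi, n_channels)
    b_lp, a_lp = butter(2, lp_cutoff / (fs / 2))    # smoothing (envelope) filter
    out = np.empty((n_channels, len(signal)))
    for i, fc in enumerate(fcs):
        band = np.convolve(signal, gammatone_ir(fc, fs), mode="full")[: len(signal)]
        rectified = np.maximum(band, 0.0)           # half-wave rectification
        out[i] = lfilter(b_lp, a_lp, rectified)     # low-pass filtered envelope
    return fcs, out

# Example: response to a short click-like pulse sampled at 500 kHz (assumed values).
fs = 500_000
t = np.arange(0, 0.001, 1.0 / fs)
click = np.exp(-((t - 0.0002) ** 2) / (2 * (20e-6) ** 2)) * np.cos(2 * np.pi * 100e3 * t)
fcs, representation = auditory_model(click, fs)
```

The resulting channel-by-time array can then be reduced to spectral, temporal, and intensity cue vectors for simulating discrimination performance, in the spirit of the analysis described above.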