Abstract: Complex sounds, especially natural sounds, can be parametrically characterized by many acoustic and perceptual features, one of which is temporal modulation. Temporal modulations describe changes of a sound in amplitude (amplitude modulation, AM) or in frequency (frequency modulation, FM). AM and FM are fundamental components of communication sounds, such as human speech and species-specific vocalizations, as well as music. Temporal modulations are encoded in at least two ways: temporal coding and rate coding. Magnetoencephalography (MEG), with its high temporal resolution and simultaneous access to multiple auditory cortical areas, is a non-invasive tool that can measure and describe the temporal coding of auditory modulations. We refer to the neural temporal encoding of temporal acoustic modulations as "modulation encoding". For simple, individually presented acoustic modulations, modulation encoding is well described by a simple modulation transfer function (MTF). Even in this simple case, however, the MTF may depend strongly on the type of modulation being encoded (e.g. AM vs. FM, narrowband vs. broadband) or the context in which the modulation is heard (e.g. attended vs. unattended).
Here we present a range of different types of modulation encoding employed by human auditory cortex. The simplest examples are for sinusoidally amplitude-modulated carriers spanning a range of bandwidths (with special emphasis on the modulation rates relevant to speech and other natural sounds: below a few tens of Hz). We provide evidence that the modulation transfer functions are low-pass in shape and relatively independent of bandwidth. When several modulations are applied concurrently, however, the modulation encoding typically, but not always, becomes non-linear: the neural responses contain components not only at the rates of the acoustic modulations but also at cross-modulation frequencies. The physiological occurrence, or absence, of these cross terms seems to be in accord with the psychophysical concept of modulation filterbanks.
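To see why concurrent modulations can produce cross-modulation frequencies, consider a carrier amplitude-modulated by two sinusoidal envelopes at once: the product of the two modulation terms contains components at the sum and difference of the two rates. The sketch below illustrates this with hypothetical rates (3 Hz and 5 Hz) and depths chosen only for illustration; it is not an analysis from the study, just a minimal demonstration of the arithmetic of cross terms.

```python
import math

fs = 1000          # sampling rate (Hz)
N = fs             # one second of samples, giving 1 Hz frequency resolution
f1, f2 = 3.0, 5.0  # two concurrent modulation rates (hypothetical values)
m = 0.5            # modulation depth for both envelopes (hypothetical)

# Envelope of a carrier amplitude-modulated by both rates concurrently.
# Expanding the product: 1 + m*sin(2*pi*f1*t) + m*sin(2*pi*f2*t)
#                          + m^2 * sin(2*pi*f1*t) * sin(2*pi*f2*t),
# and the last term equals cosines at f2 - f1 and f2 + f1 (the cross terms).
env = [(1 + m * math.sin(2 * math.pi * f1 * n / fs)) *
       (1 + m * math.sin(2 * math.pi * f2 * n / fs)) for n in range(N)]

def amp_at(f_hz, x):
    """Amplitude of the sinusoidal component of x at an integer frequency
    f_hz, via a single-bin discrete Fourier transform."""
    re = sum(v * math.cos(2 * math.pi * f_hz * n / len(x)) for n, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * f_hz * n / len(x)) for n, v in enumerate(x))
    return 2 * math.hypot(re, im) / len(x)

for f in (2, 3, 5, 8, 4):
    print(f"{f} Hz: {amp_at(f, env):.3f}")
# Energy appears at the modulation rates (3, 5 Hz) and at the
# cross-modulation frequencies 5 - 3 = 2 Hz and 5 + 3 = 8 Hz,
# but not at unrelated rates such as 4 Hz.
```

A strictly linear modulation encoding would respond only at 3 and 5 Hz; the presence or absence of the 2 Hz and 8 Hz components in neural responses is what distinguishes the linear and non-linear regimes discussed above.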