Objective
Ecologically valid signals (e.g., vowels) have multiple components of substantially different frequencies and amplitudes that may not be equally represented in cortex. In this study, we investigate a signal of intermediate complexity, the two-frequency composite tone, a stimulus lying between simple sinusoids and ecologically valid signals such as speech. We aim to characterize the cortical response properties of such signals to better understand how complex signals may be represented in auditory cortex.
Design
Using magnetoencephalography, we assessed the sensitivity of the M100/N100m auditory-evoked component to manipulations of the power ratio of the individual frequency components of the two-frequency complexes. Fourteen right-handed subjects with normal hearing were scanned while passively listening to 10 complex and 12 simple signals. Each complex signal was composed of one higher frequency and one lower frequency sinusoid; the lower frequency component was presented at one of five loudness levels relative to the higher frequency one: −20, −10, 0, +10, or +20 dB. The simple signals comprised all of the complex signal components presented in isolation.
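To illustrate the stimulus construction, the sketch below synthesizes a two-frequency composite in which the lower frequency component is scaled relative to the higher frequency component by a specified level difference in dB (amplitude ratio 10^(ΔL/20)). The component frequencies (100 Hz and 1000 Hz), duration, and sampling rate are illustrative assumptions, not values taken from this section.

    import numpy as np

    def two_tone_complex(f_low=100.0, f_high=1000.0, level_diff_db=0.0,
                         duration=0.4, fs=44100):
        """Synthesize a two-frequency composite tone.

        level_diff_db is the level of the lower frequency component relative
        to the higher frequency component (e.g., -20, -10, 0, +10, +20 dB).
        These parameter values are assumptions for illustration only.
        """
        t = np.arange(int(duration * fs)) / fs
        # Convert the relative level in dB to a linear amplitude ratio.
        low_amp = 10.0 ** (level_diff_db / 20.0)
        composite = (low_amp * np.sin(2 * np.pi * f_low * t)
                     + np.sin(2 * np.pi * f_high * t))
        # Normalize to avoid clipping on playback.
        return composite / np.abs(composite).max()

    # Example: lower frequency component 10 dB above the higher one.
    stim = two_tone_complex(level_diff_db=10.0)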
Results
The data replicate and extend several previous findings: (1) the systematic dependence of M100 latency on signal intensity; (2) the dependence of M100 latency on signal frequency, with lower frequency signals (~100 Hz) exhibiting longer latencies than higher frequency signals (~1000 Hz), even at matched loudness levels; and (3), importantly, the observation that, relative to simple signals, complex signals show increased response amplitude, as one might predict, but decreased M100 latencies.
Conclusion
The data suggest that by the time the M100 is generated in auditory cortex (~70 to 80 ms after stimulus onset), integrative processing across frequency channels has taken place, and this integration is observable in the modulation of the M100. In light of these data, models that attribute more time and processing resources to more complex stimuli merit reevaluation: our data show that acoustically more complex signals are associated with robust temporal facilitation across frequencies and signal amplitude levels.