An important question for research on audiovisual integration in humans is whether multisensory information is brought together in the primary sensory or association areas of the cortex. For example, can auditory information activate primary visual cortex directly, or must it first be processed by the primary auditory cortex and higher-order association areas? Studying the information flow of audiovisual processing in the human brain is therefore crucial for uncovering the neural mechanisms of audiovisual integration.

Although many electroencephalography (EEG) studies have investigated the temporal aspects of brain processing during audiovisual integration, the limited spatial resolution of EEG prevents it from resolving the actual propagation route across brain regions in any detail. Conversely, functional magnetic resonance imaging (fMRI) offers high spatial but relatively low temporal resolution, and most previous fMRI studies have accordingly emphasized the spatial localization of brain activity during audiovisual processing. So far, only a few fMRI studies have investigated the temporal sequence of brain activations, and those studies relied mainly on the framework of the general linear model (GLM) (for review, see Formisano and Goebel, 2003). A recent fMRI study by Fuhrmann Alpert et al. (2008) published in The Journal of Neuroscience examined the temporal characteristics of audiovisual processing, using mutual information to help assess the relative timing of activations in different brain areas under simultaneous audiovisual (AV) stimulation as well as separate auditory and visual stimulation.

Mutual information is a measure of the statistical interdependence of two random variables, such as a particular stimulus condition (e.g., AV stimulation) and the blood oxygenation level-dependent (BOLD) response: a higher mutual information value implies greater predictability of the BOLD signal from the preceding stimuli. Compared with conventional GLM analysis, mutual information has two advantages: it captures not only linear but also nonlinear relationships between the two variables, and it requires no prior assumption about the shape of that relationship [e.g., a hemodynamic response function (HRF)] (Fuhrmann Alpert et al., 2007). By estimating, for each voxel and each latency after stimulus onset, the mutual information between the preceding stimulus condition and the BOLD response, this approach can detect both brain activation and the preferred latency at which the BOLD signal is most informative about the preceding stimuli (see the sketch at the end of this section). Assuming that the preferred latency reflects brain processing time, the temporal sequence of brain activity can be revealed by comparing the preferred latencies of different brain regions.

Fuhrmann Alpert et al. (2008) found that AV-related activity occurs earliest in the primary auditory and visual cortices and later in the inferior frontal cortex [Fuhrmann Alpert et al. (2008), their Figs. 3
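To make the latency analysis concrete, the following is a minimal sketch of how mutual information between a discrete stimulus condition and a discretized single-voxel BOLD amplitude could be computed over a grid of candidate latencies, with the preferred latency taken as the one that maximizes the information. This is not the authors' implementation: the function names, the equal-population binning, the latency grid in units of scans, and the omission of limited-sampling bias correction and significance testing are all simplifying assumptions made here for illustration.

```python
import numpy as np

def mutual_information(stim, bold_binned):
    """Mutual information (bits) between two discrete variables,
    estimated from their joint histogram. Both inputs are integer
    arrays of equal length (one entry per trial)."""
    joint = np.zeros((stim.max() + 1, bold_binned.max() + 1))
    for s, b in zip(stim, bold_binned):
        joint[s, b] += 1
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)   # marginal over stimulus labels
    pb = joint.sum(axis=0, keepdims=True)   # marginal over BOLD bins
    nz = joint > 0                           # avoid log(0) terms
    return np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pb)[nz]))

def preferred_latency(stim_labels, voxel_ts, onsets, latencies, n_bins=8):
    """Scan candidate latencies (in scans after stimulus onset) and return
    the latency at which the voxel's BOLD amplitude carries the most
    information about the preceding stimulus condition."""
    mi_per_lag = []
    for lag in latencies:
        # BOLD amplitude sampled `lag` scans after each stimulus onset
        samples = voxel_ts[onsets + lag]
        # discretize amplitudes into roughly equally populated bins
        edges = np.quantile(samples, np.linspace(0, 1, n_bins + 1)[1:-1])
        binned = np.digitize(samples, edges)
        mi_per_lag.append(mutual_information(stim_labels, binned))
    mi_per_lag = np.array(mi_per_lag)
    return latencies[int(np.argmax(mi_per_lag))], mi_per_lag

# Example usage (hypothetical data): stim_labels codes the condition of each
# trial (e.g., 0 = auditory, 1 = visual, 2 = AV), voxel_ts is one voxel's
# time series, onsets gives the scan index of each trial onset.
# lag_star, mi_curve = preferred_latency(stim_labels, voxel_ts, onsets,
#                                        latencies=np.arange(0, 10))
```

Comparing the resulting preferred latencies across voxels or regions is what, under the assumption that preferred latency tracks processing time, yields the temporal ordering described above.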