OBJECTIVE Maximal safe tumor resection in language areas of the brain relies on a patient’s ability to perform intraoperative language tasks. Assessing the performance of these tasks during awake craniotomies allows the neurosurgeon to identify and preserve brain regions that are critical for language processing. However, receiving sedation and analgesia just prior to an awake craniotomy may reduce a patient’s wakefulness, leading to transient language and/or cognitive impairments that do not completely subside before language testing begins. At present, the degree to which wakefulness influences intraoperative language task performance is unclear. Therefore, the authors sought to determine whether any of 5 brief measures of wakefulness predicts such performance during awake craniotomies for glioma resection.

METHODS The authors recruited 21 patients with dominant hemisphere low- and high-grade gliomas. Each patient performed baseline wakefulness measures in addition to picture-naming and text-reading language tasks 24 hours before undergoing an awake craniotomy. The patients performed these same tasks again in the operating room following the cessation of anesthesia medications. The authors then conducted statistical analyses to investigate potential relationships between wakefulness measures and language task performance.

RESULTS Relative to baseline, performance on 3 of the 4 objective wakefulness measures (rapid counting, button pressing, and vigilance) declined in the operating room. Moreover, these declines appeared in the complete absence of self-reported changes in arousal. Performance on language tasks similarly declined in the intraoperative setting, with patients experiencing greater declines in picture naming than in text reading. Finally, performance declines on rapid counting and vigilance wakefulness tasks predicted performance declines on the picture-naming task.

CONCLUSIONS Current subjective methods for assessing wakefulness during awake craniotomies may be insufficient. The administration of objective measures of wakefulness just prior to language task administration may help to ensure that patients are ready for testing. It may also allow neurosurgeons to identify patients who are at risk for poor intraoperative performance.
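The abstract does not specify the statistical analyses used, but the core question (do baseline-to-intraoperative declines on a wakefulness task predict declines on picture naming?) can be illustrated with a short sketch. The simulated scores, variable names, and the choice of a Spearman rank correlation below are assumptions for illustration, not the authors’ actual pipeline.

```python
# Hypothetical sketch of a decline analysis like the one described above.
# All data are simulated; the Spearman correlation is an assumed choice,
# not necessarily the authors' method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients = 21

# Simulated performance scores (e.g., items correct per minute) at
# baseline (24 h preop) and intraoperatively, after anesthesia cessation.
baseline_counting = rng.normal(60, 8, n_patients)
intraop_counting = baseline_counting - rng.normal(8, 4, n_patients)
baseline_naming = rng.normal(40, 5, n_patients)
intraop_naming = baseline_naming - rng.normal(6, 3, n_patients)

# Decline = baseline performance minus intraoperative performance.
counting_decline = baseline_counting - intraop_counting
naming_decline = baseline_naming - intraop_naming

# Does decline on the wakefulness task predict decline on picture naming?
rho, p = stats.spearmanr(counting_decline, naming_decline)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

A rank correlation is a conservative default here because, with 21 patients, decline scores may not be normally distributed; a parametric regression would be a reasonable alternative.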
Co-occurring sounds can facilitate perception of spatially and temporally correspondent visual events. Separate lines of research have identified two putatively distinct neural mechanisms underlying two types of crossmodal facilitation: Whereas crossmodal phase resetting is thought to underlie enhancements based on temporal correspondences, lateralized occipital event-related potentials (ERPs) are thought to reflect enhancements based on spatial correspondences. Here, we sought to clarify the relationship between these two effects to assess whether they reflect two distinct mechanisms or, rather, two facets of the same underlying process. To identify the neural generators of each effect, we examined crossmodal responses to lateralized sounds in visually responsive cortex of 22 patients using electrocorticographic recordings. Auditory-driven phase reset and ERP responses in visual cortex displayed similar topography, revealing significant activity in pericalcarine, inferior occipital–temporal, and posterior parietal cortex, with maximal activity in lateral occipitotemporal cortex (potentially V5/hMT+). Laterality effects showed similar but less widespread topography. To test whether lateralized and nonlateralized components of crossmodal ERPs emerged from common or distinct neural generators, we compared responses throughout visual cortex. Visual electrodes responded to both contralateral and ipsilateral sounds with a contralateral bias, suggesting that previously observed laterality effects do not emerge from a distinct neural generator but rather reflect laterality-biased responses in the same neural populations that produce phase-resetting responses. These results suggest that crossmodal phase reset and ERP responses previously found to reflect spatial and temporal facilitation in visual cortex may reflect the same underlying mechanism. We propose a new unified model to account for these and previous results.
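One standard way to quantify sound-driven phase reset in epoched electrophysiological data is inter-trial phase coherence (ITC): if a sound resets ongoing oscillations, phase becomes aligned across trials after sound onset. The sketch below assumes simulated epochs from a single visual-cortex electrode, a theta/alpha analysis band, and a 1 kHz sampling rate; these are illustrative assumptions, not the study’s actual parameters.

```python
# Minimal ITC sketch for detecting auditory-driven phase reset.
# Data, filter band, and sampling rate are assumptions for illustration.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000  # sampling rate (Hz), assumed
rng = np.random.default_rng(1)
# trials: (n_trials, n_samples) epochs from one visual electrode,
# time-locked to lateralized sound onset (simulated here).
trials = rng.standard_normal((100, 1000))

# Band-pass in a low-frequency band (4-12 Hz), where crossmodal
# phase reset is typically reported.
b, a = butter(3, [4, 12], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, trials, axis=1)

# Instantaneous phase from the analytic signal.
phase = np.angle(hilbert(filtered, axis=1))

# ITC at each time point: length of the mean unit phase vector across
# trials. Values near 1 indicate consistent post-stimulus phase
# alignment (a reset); values near 0 indicate random phase.
itc = np.abs(np.mean(np.exp(1j * phase), axis=0))
print(itc.shape, itc.max())
```

In practice, post-stimulus ITC would be compared against a pre-stimulus baseline or a trial-shuffled surrogate distribution to establish significance.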
Sounds enhance our ability to detect, localize, and respond to co-occurring visual targets. Research suggests that sounds improve visual processing by resetting the phase of ongoing oscillations in visual cortex. However, it remains unclear what information is relayed from the auditory system to visual areas and whether sounds modulate visual activity even in the absence of visual stimuli (e.g., during passive listening). Using intracranial electroencephalography (iEEG) in humans, we examined the sensitivity of visual cortex to three forms of auditory information during a passive listening task: auditory onset responses, auditory offset responses, and rhythmic entrainment to sounds. Because some auditory neurons respond to both sound onsets and offsets, visual timing and duration processing could benefit from both types of signal. Additionally, if auditory entrainment information is relayed to visual cortex, it could support the processing of complex stimulus dynamics that are aligned between auditory and visual stimuli. Results demonstrate that in visual cortex, amplitude-modulated sounds elicited transient onset and offset responses in multiple areas, but no entrainment to sound modulation frequencies. These findings suggest that activity in visual cortex (as measured with iEEG in response to auditory stimuli) may not be affected by temporally fine-grained auditory stimulus dynamics during passive listening (though it remains possible that this signal may be observable with simultaneous auditory-visual stimuli). Moreover, auditory responses were maximal in low-level visual cortex, potentially implicating a direct pathway for rapid interactions between auditory and visual cortices. This mechanism may facilitate perception by time-locking visual computations to environmental events marked by auditory discontinuities.
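A common test for rhythmic entrainment is to look for a spectral peak at the sound’s amplitude-modulation (AM) rate in the trial-averaged response, since only phase-locked (entrained) activity survives averaging. The sketch below uses an assumed 4 Hz AM rate, simulated epochs, and a simple signal-to-noise measure against neighboring frequency bins; none of these parameters are taken from the study itself. An SNR near 1, as expected for the simulated noise here, corresponds to the null entrainment result reported above.

```python
# Hedged sketch of an entrainment test at a sound's AM rate.
# Data, AM rate, and epoch length are illustrative assumptions.
import numpy as np

fs = 1000          # sampling rate (Hz), assumed
am_freq = 4.0      # AM rate of the sound (Hz), assumed
rng = np.random.default_rng(2)
trials = rng.standard_normal((100, 2000))  # (n_trials, n_samples), 2 s epochs

# Average across trials first so only phase-locked activity survives,
# then take the amplitude spectrum of the evoked response.
evoked = trials.mean(axis=0)
freqs = np.fft.rfftfreq(evoked.size, d=1 / fs)
amp = np.abs(np.fft.rfft(evoked))

target = int(np.argmin(np.abs(freqs - am_freq)))
# SNR: amplitude at the AM frequency relative to the mean of surrounding
# bins (excluding the immediate neighbors to avoid spectral leakage).
neighbors = np.r_[amp[target - 5:target - 1], amp[target + 2:target + 6]]
snr = amp[target] / neighbors.mean()
print(f"SNR at {am_freq} Hz: {snr:.2f}")  # SNR >> 1 would suggest entrainment
```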