Overt speech production in functional magnetic resonance imaging (fMRI) studies is often associated with imaging artifacts attributable to both movement and susceptibility. Various image-processing methods have been proposed to remove these artifacts from the data, but none has been shown to work with continuous overt speech, at least over periods greater than 3 s. In this study, natural, continuous, overt sentence production was evaluated in normal volunteers using both arterial spin labeling (ASL) and conventional echoplanar blood oxygenation level-dependent (BOLD) imaging sequences on the same 1.5-T scanner. We found high congruency between activation results obtained with ASL and the de facto gold standard in overt language production imaging, positron emission tomography (PET). No task-related artifacts were found in the ASL study. However, the BOLD data showed artifacts that appeared as large bilateral false-positive temporopolar activations; percent signal change estimated in these regions showed signal increases and temporal dynamics incongruent with typical BOLD activations. These artifacts were not distributed uniformly but were aligned at the frontotemporal base, close to the oropharynx. The calculated head movement parameters for overt speech blocks were within the range of the rest blocks, indicating that head movement is unlikely to be the cause of the artifact. We conclude that ASL is not influenced by overt speech artifacts, whereas BOLD showed significant susceptibility artifacts, especially in the opercular and insular regions, where activation would be expected. ASL may prove to be the method of choice for fMRI investigations of continuous overt speech.
Short-term memory (STM), or the ability to hold verbal information in mind for a few seconds, is known to rely on the integrity of a frontoparietal network of areas. Here, we used functional magnetic resonance imaging to ask whether a similar network is engaged when verbal information is conveyed through a visuospatial language, American Sign Language, rather than through speech. Deaf native signers and hearing native English speakers performed a verbal recall task in which they had to first encode a list of letters in memory, maintain it for a few seconds, and finally recall it in the order presented. The frontoparietal network described as mediating STM in speakers was also observed in signers, with its recruitment appearing independent of language modality. This finding supports the view that signed and spoken STM rely on similar mechanisms. However, deaf signers and hearing speakers differentially engaged key structures of the frontoparietal network as the stages of STM unfolded. In particular, deaf signers relied to a greater extent than hearing speakers on passive memory storage areas during encoding and maintenance, but on executive process areas during recall. This work opens new avenues for understanding similarities and differences in STM performance between signers and speakers.