Verbal working memory for Japanese sentences written in different orthographies was investigated using near-infrared spectroscopy (NIRS). Twelve participants read aloud three sentences presented sequentially on a CRT display in the Reading task. In the Reading Span Test (RST) task, the participants additionally memorized an underlined target word in each sentence. Four conditions were presented: (1) a sentence with spaces using only Kana characters (Kana-S); (2) a sentence with spaces using Kanji and Kana characters (Kanji-S); (3) a sentence without spaces using only Kana characters (Kana-NS); and (4) a sentence without spaces using Kanji and Kana characters (Kanji-NS). Oxy-Hb waveforms in frontal areas increased significantly when the sentences were read aloud in the RST task compared to the Reading task. In the RST task, Oxy-Hb waveforms in the right dorsolateral prefrontal cortex during recall of the target words increased significantly for Kana-NS and Kanji-S (i.e., unfamiliar orthography) compared to Kana-S and Kanji-NS (i.e., familiar orthography). These results suggest that increased attentional control is necessary for working memory for orthographically unfamiliar sentences.
Audio-visual neural interaction was examined using ERPs. Eleven male volunteers participated in this study. EEGs were recorded from 19 scalp locations. Japanese vowels (/a/ or /i/) or white noise (/noise/) were used as auditory stimuli. Face images pronouncing a vowel ([a] or [i]) were used as visual stimuli. The study comprised three conditions: (1) an audio-visual condition (AV condition) with bimodal stimulus presentation, (2) an auditory condition (A condition) with auditory-only stimulus presentation, and (3) a visual condition (V condition) with visual-only stimulus presentation. In the AV condition, the audio-visual stimulus pairs were phonetically congruent (audio /a/, visual [a]), incongruent (audio /a/, visual [i]), or deviant (/noise/, visual [a]). The participants were instructed to press a button for the vowel /a/ or the /noise/ stimulus. Audio-visual interaction was examined by subtracting the ERPs in the A or V condition from the ERPs in the AV condition. Cross-modal facilitatory effects were not observed in auditory perception. On the other hand, topographical changes occurred in the face-specific negative components around 170 ms depending on audio-visual informational congruency: the center of negative activity shifted toward the left hemisphere for incongruent stimuli. This result may reflect suppression of incongruent visual information.