To explore the multimodality of two representative EFL textbook series for Chinese college students, Experiencing English (EE) and New Century College English (NCCE), this study compared their visual and verbal semiotic modes. Through multimodal discourse analysis, the study aims to shed light on how to develop high-quality multimodal EFL textbooks. The main findings are: (1) EE and NCCE are similar in that their representative multimodal texts are visually-verbally coherent, and both demonstrate prominent features of intersemiotic semantic relations; (2) they differ in that EE displays a higher degree of interpersonal intersemiotic complementarity, and multimodality facilitates the realization of different modern educational concepts: constructivism in EE and humanism in NCCE; and (3) these differences are related to, or may partially result from, differences in the language difficulty of the textbooks and the English proficiency of their target learners. As a pioneering attempt to probe the possible relationship between multimodality and modern educational concepts in EFL textbooks, the study shows the importance of properly arranging the different modes in a double-page spread. It also suggests that EFL textbook compilers consider learners' English proficiency and appropriately adjust the variety and number of multimodal resources to achieve optimal intersemiotic complementarity.
This study examined visual-tactile perceptual integration in deaf and normal-hearing
individuals. Participants were presented with photos of faces or
pictures of an oval in either a visual mode or a visual-tactile mode in a
recognition learning task. Event-related potentials (ERPs) were recorded while
participants recognized photos of faces and pictures of ovals in the learning stage.
Results from the parietal-occipital region showed that photos of faces
accompanied by vibration elicited more positive-going ERP responses than
photos of faces without vibration, as indicated in the P1 and
N170 components, in both deaf and hearing individuals. However, pictures of ovals
accompanied by vibration produced more positive-going ERP responses than
pictures of ovals without vibration only in the N170 component, and only in deaf
individuals. A reversed pattern appeared in the temporal region: photos of
faces with vibration elicited less positive ERPs than photos of
faces without vibration in both N170 and N300 for deaf individuals, but no such
pattern appeared in N170 or N300 for normal-hearing individuals. The results suggest that
multisensory integration across the visual and tactile modalities involves
more fundamental perceptual regions than auditory regions. Moreover,
auditory deprivation plays an essential role at the perceptual encoding
stage of multisensory integration.