Introduction: The literature indicates that few studies have been conducted with persons with visual impairments (that is, those who are blind or have low vision) concerning mobile application, or "app," usage. The current study explores the use of mobile apps with this population globally.
Methods: A total of 259 participants with visual impairments completed an online survey. Descriptive statistics and bivariate tests were used to examine associations between demographic characteristics and mobile app use.
Results: The participants rated special apps as useful (95.4%) and accessible (91.1%) tools for individuals with visual impairments. More than 90% of the middle-aged adult group strongly agreed with the practicality of special apps, a significantly higher percentage than was observed in the young and older adult groups. In addition, the participants with low vision considered special apps less accessible than did those with blindness (p < .05).
Discussion: Results show that persons with visual impairments frequently use apps specifically designed for them to accomplish daily activities. Furthermore, this population is satisfied with mobile apps and would like to see improvements and new apps.
Implications for practitioners: Developers of apps for individuals with visual impairments need to refine and test the existing apps. Practitioners need to be knowledgeable about app usage so they can provide effective instruction to their students or clients. This study provides preliminary information regarding app usage among persons with visual impairments.
Previous emotion studies in education have focused mainly on the superiority of positive emotion (e.g., enjoyment) over negative emotion (e.g., fear) for learning performance. However, few studies have considered different arousal levels of learners' emotion. For example, the effects of calm positive or negative emotion have not been discussed in comparison with arousing positive or negative emotion. Based on the Limited Capacity Model of Motivated Mediated Message Processing (LC4MP), this study investigated how learners' emotional valence and arousal, induced by video clips, influenced their learning performance and mental effort in an animated instruction with different modalities (written text versus spoken text). A total of 206 participants were randomly assigned to one of eight groups formed by crossing four emotion conditions, (a) calm positive, (b) calm negative, (c) arousing positive, and (d) arousing negative, with two modality conditions (written text and spoken text). The results showed that both arousing groups outperformed the calm groups on a recall test, regardless of valence, but only under the written-text condition; emotional valence and arousal did not significantly influence learning performance under the spoken-text condition. The results provide partial support for the LC4MP model and imply that an arousing emotional state has the potential to enhance multimedia learning.
Jongpil Cheon is an assistant professor in the Instructional Technology program at Texas Tech University. His research interests involve implementing immersive online learning environments and investigating advanced technologies for interactive learning. Steven Crooks is an associate professor in the Instructional Technology program at Texas Tech University. His research areas include multimedia learning, online learning and the design of authentic learning environments. Sungwon Chung is a doctoral student in the Instructional Technology program at Texas Tech University. His research interests are digital game-based learning and multimedia design principles.
Abstract
This study investigated the segmenting and modality principles in instructional animation. Two segmentation conditions (active pause vs passive pause) were presented in combination with two modality conditions (written text vs spoken text). The results showed a significant effect of segmentation condition, whereas the modality effect was not found. The groups with embedded questions between segments (ie, active pause) outperformed the pause-only groups (ie, passive pause) on both recall and transfer tests, regardless of the mode of text. The findings of the study imply that embedding a stimulus (eg, a testing occasion) between segments would be more effective than pauses alone.