Adaptive Educational Hypermedia Systems (AEHS) play a crucial role in supporting adaptive learning and substantially outperform learner-controlled systems. AEHS page indexing and hyperspace rely mostly on navigation support, which provides learners with a user-friendly, interactive learning environment. These features give the systems a unique ability to adapt to learners' preferences. However, obtaining timely and accurate information for the adaptive decision-making process remains a challenge, because understanding of the individual learner evolves dynamically: learners' learning styles change spontaneously, which makes it hard for system developers to integrate learning objects with learning styles in real time. Previous studies have applied multi-level navigation support to this problem, but that approach undermines learning motivation by imposing time and work overload on learners. To address this challenge, this study proposes a bioinformatics-based adaptive navigation support triggered by changes in learners' motivation states in real time. An eye-tracking sensor and adaptive time-locked Learning Objects (LOs) were used, with learners' pupil dilation, reading time, and reaction time serving as inputs to the adaptation process and its evaluation. The results show that the proposed approach improved the AEHS adaptation process and increased learners' performance by up to 78%.
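The abstract does not disclose the adaptation rule itself, so the following Python sketch is only a hypothetical illustration of how pupil dilation, reading time, and reaction time might be combined into a motivation state that selects the next time-locked LO; all thresholds, names, and the decision rule are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: thresholds and the decision rule below are
# assumptions; the paper does not specify its adaptation algorithm.
from dataclasses import dataclass

@dataclass
class GazeSample:
    pupil_diameter_mm: float   # pupil size from the eye-tracking sensor
    reading_time_s: float      # time spent reading the current LO
    reaction_time_s: float     # time to respond to an embedded prompt

def motivation_state(sample: GazeSample,
                     baseline_pupil_mm: float = 3.5,
                     max_reading_time_s: float = 90.0) -> str:
    """Classify a learner's momentary motivation state.

    Pupil dilation above baseline is read as engagement; long reading
    plus slow reactions is read as overload. Cut-offs are placeholders.
    """
    dilated = sample.pupil_diameter_mm > baseline_pupil_mm * 1.1
    slow = (sample.reading_time_s > max_reading_time_s
            or sample.reaction_time_s > 5.0)
    if dilated and not slow:
        return "engaged"
    if slow:
        return "overloaded"
    return "neutral"

def next_learning_object(state: str, current_lo: str) -> str:
    """Pick the next time-locked Learning Object from the state."""
    if state == "engaged":
        return current_lo + ":advanced"    # deepen the topic
    if state == "overloaded":
        return current_lo + ":simplified"  # reduce load, keep motivation
    return current_lo + ":next"            # continue the default path
```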
An optimal learning environment depends strongly on aptitude-treatment interaction. Although advances in ICT (Information and Communication Technologies) support knowledge-sharing platforms such as e-learning and multimedia, designing learning content for these platforms still relies on aptitude-treatment interaction to sustain a learner-centric environment. Achieving an optimal learning environment in an e-learning platform remains a challenge, however, because learners vary in their skills and abilities with respect to both the learning process and the learning content, which makes it difficult to monitor learners' cognition states and the quality of learning content. To overcome these difficulties, this study proposes a learner-centric approach based on metacognitive experiences, the e-learning Prior Knowledge Assessment System (ePKAS). ePKAS detects and evaluates learners' prior-knowledge profiles in order to monitor learners' cognition states and adapt those states to e-learning platforms based on visual contact. The study investigated students' reactions to multimedia content in light of their past experiences. The results show that students respond more attentively and accurately (93%) to learning content closely related to their past experiences. The study's scope is limited to visual contact in order to support the involvement of people with hearing impairment in e-learning platforms.
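The abstract gives no data model or matching rule for ePKAS, so the short Python sketch below is an illustrative assumption of the core idea: scoring how closely a multimedia item relates to a learner's prior-experience profile and serving the closest match. The tag-set profiles and Jaccard scoring are invented for illustration.

```python
# Hypothetical sketch of the ePKAS idea: profile structure and
# relevance scoring are illustrative assumptions, not the paper's model.

def relevance_to_prior_knowledge(content_tags: set[str],
                                 learner_profile: set[str]) -> float:
    """Jaccard overlap between content topic tags and the learner's
    prior-experience profile; higher means closer to past experience."""
    union = content_tags | learner_profile
    if not union:
        return 0.0
    return len(content_tags & learner_profile) / len(union)

def select_content(candidates: dict[str, set[str]],
                   learner_profile: set[str]) -> str:
    """Serve the multimedia item most related to past experience,
    mirroring the finding that such items draw more attentive and
    accurate responses."""
    return max(candidates, key=lambda cid: relevance_to_prior_knowledge(
        candidates[cid], learner_profile))

# Example usage with made-up profiles:
profile = {"geometry", "sign-language", "maps"}
items = {
    "vid_geometry_basics": {"geometry", "shapes"},
    "vid_chemistry_intro": {"chemistry", "lab"},
}
print(select_content(items, profile))  # -> "vid_geometry_basics"
```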
Real-time communication between deaf and hearing people is still a barrier that isolates deaf people from the hearing world. Over ninety percent of deaf children are born to hearing parents, yet most of them can only learn to communicate in sign language at school. One reason is that hearing parents have neither the time nor the support to learn sign language in order to communicate with and support their children. Not surprisingly, deaf pupils struggle in oral-only education. Since many hearing pupils do not even know that sign language exists, they cannot communicate directly with the deaf without a sign language interpreter. Therefore, to enable face-to-face conversation between deaf and hearing people, it is important not only to sustain real-time conversation between the deaf and their hearing counterparts but also to equip hearing people with the basics of sign language. However, speech-to-sign conversion remains a challenge due to dialectal and sign-language variation, variability in speech utterances, and the lack of a written form for sign language. This paper proposes a solution, Face-to-Face Conversation between Deaf and Hearing people (FFCDH), to address these issues. FFCDH supports real-time conversation and also allows hearing users to learn signs with the same meaning the deaf understand. Moreover, FFCDH records hearing users' speech and converts it into signs for the deaf. It also lets deaf users adjust the loudness of their own speech by displaying the volume of their voice. The system's performance in supporting the deaf was evaluated on a real test-bed. The results show that everyday English and Japanese conversational phrases are recognized with over 90 percent accuracy on average, and the average coherence for simple content exceeds 94 percent. For long and complex phrases, however, accuracy and coherence are slightly lower because the system cannot comprehend long, complex context at a large scope.
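FFCDH's actual recognizer and sign renderer are not described in the abstract, so the Python sketch below is a simplified stand-in for two of the described functions: mapping recognized speech to sign clips and giving deaf speakers visual volume feedback. The phrase dictionary, greedy matcher, and RMS thresholds are all assumptions.

```python
# Illustrative pipeline sketch: the phrase-to-sign lookup and volume
# feedback below are simplified stand-ins, not FFCDH's implementation.

SIGN_DICTIONARY = {
    # hypothetical mapping from recognized phrases to sign-clip IDs
    "good morning": "sign_clip_good_morning",
    "thank you": "sign_clip_thank_you",
}

def speech_to_signs(transcript: str) -> list[str]:
    """Greedily match known phrases in the recognized transcript and
    return the ordered list of sign clips to play for the deaf user."""
    words = transcript.lower().split()
    clips, i = [], 0
    while i < len(words):
        # try the longest phrase first (here, two words, then one)
        for span in (2, 1):
            phrase = " ".join(words[i:i + span])
            if phrase in SIGN_DICTIONARY:
                clips.append(SIGN_DICTIONARY[phrase])
                i += span
                break
        else:
            i += 1  # unknown word: skip (a real system might fingerspell)
    return clips

def volume_feedback(rms_level: float, target: float = 0.5) -> str:
    """Tell a deaf speaker whether to raise or lower their voice,
    based on the microphone's RMS level (0.0 to 1.0)."""
    if rms_level < target * 0.6:
        return "speak louder"
    if rms_level > target * 1.4:
        return "speak softer"
    return "volume OK"

print(speech_to_signs("Good morning thank you"))
# -> ['sign_clip_good_morning', 'sign_clip_thank_you']
```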