The predominance of using videos for learning has become a phenomenon that will endure for generations to come. This is reflected in the prevalence of open learning platforms for generating and using videos (YouTube, MOOCs, Khan Academy, etc.). However, learners may not be able to detect the main points in a video and relate them to the domain of study, which can hinder the effectiveness of using videos for learning. To address these challenges, we aim to develop automatic ways to generate video narratives to support learning. We assume that the domain for which the videos are processed has been computationally represented (via an ontology). We propose VISC-L, a generic framework for segmenting, characterising and aggregating video segments, which provides the foundation for generating the narratives. The design of the narrative framework, underpinned by Ausubel's subsumption theory, is in progress. The work is being implemented in two different domains and evaluated with users to test their awareness of the domain aspects.
Building on this motivation, our research develops automatic ways to segment videos, characterise the segments, and finalise the segmentation by aggregating adjacent segments within a video that share the same focus topic(s) or topic-concept(s). We present VISC-L, a framework for automated video segmenting and characterising to support learning. We use the deep-learning BERT-base-uncased model with a binary classifier to identify the focus topic of each segment, and then a semantic tagging algorithm to identify the focus concept within that topic. Adjacent segments within a video with the same focus topic/concept are aggregated to generate the final characterised video segments. We have evaluated the usefulness of watching the identified segments and their characterisations compared with the video segmentation provided by Google.
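To make the pipeline concrete, below is a minimal sketch of the two technical steps named above: scoring each segment's focus topic with a BERT-base-uncased binary classifier, and aggregating adjacent segments that share a focus. The sentence-pair formulation, the segment tuple layout, and the helper names `focus_topic` and `aggregate` are illustrative assumptions, not the paper's actual implementation, and the classifier head would need to be fine-tuned on labelled data before its scores are meaningful.

```python
# Minimal sketch of the pipeline described above (assumptions noted
# in comments; this is not the paper's code).
from itertools import groupby

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Binary head over sentence pairs; in practice this head must be
# fine-tuned on labelled (topic, segment) pairs before use.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

def focus_topic(segment_text: str, candidate_topics: list[str]) -> str:
    """Pick the ontology topic the segment most plausibly focuses on.

    Assumption: topics are scored via sentence-pair classification,
    reading P(label=1) as 'segment focuses on this topic'.
    """
    def score(topic: str) -> float:
        inputs = tokenizer(topic, segment_text, truncation=True,
                           return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        return logits.softmax(dim=-1)[0, 1].item()
    return max(candidate_topics, key=score)

def aggregate(segments: list[tuple[float, float, str]]):
    """Merge adjacent (start, end, focus) segments sharing a focus."""
    merged = []
    for focus, run in groupby(segments, key=lambda s: s[2]):
        run = list(run)
        merged.append((run[0][0], run[-1][1], focus))
    return merged
```

For example, `aggregate([(0, 30, "sorting"), (30, 60, "sorting"), (60, 90, "recursion")])` returns `[(0, 60, "sorting"), (60, 90, "recursion")]`, which is the adjacent-segment aggregation behaviour the abstract describes.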