Video has become a predominant medium for learning and is likely to remain so for generations to come. This has led to a proliferation of videos created and shared on open learning platforms (YouTube, MOOCs, Khan Academy, etc.). However, learners may not be able to identify the main points in a video and relate them to the domain they are studying, which can hinder the effectiveness of video-based learning. To address these challenges, we aim to develop automatic methods for generating video narratives that support learning. We assume that the domain to which the videos belong has been computationally represented via an ontology. We propose VISC-L, a generic framework for segmenting, characterising, and aggregating video segments, which provides the foundation for generating the narratives. The design of the narrative framework, underpinned by Ausubel's subsumption theory, is in progress. The work is being implemented in two different domains and evaluated with users to test their awareness of the domain aspects.