Students' feedback is an effective mechanism that provides valuable insights about the teaching-learning process. Handling the opinions students express in reviews is a labour-intensive and tedious task, as it is typically performed manually. While manual processing may be viable for small-scale courses involving feedback from only a few students, it is impractical for large-scale cases such as online courses in general and MOOCs in particular. To address this issue, we propose in this paper a framework for automatically analyzing opinions of students expressed in reviews. Specifically, the framework relies on aspect-level sentiment analysis and aims to automatically identify the sentiment, or opinion polarity, expressed towards a given aspect of the MOOC. The proposed framework takes advantage of weakly supervised annotation of MOOC-related aspects and propagates this weak supervision signal to identify the aspect categories discussed in unlabelled students' reviews. Consequently, it significantly reduces the need for manually annotated data, which is the main bottleneck for all deep learning techniques. Experiments are performed on a large-scale real-world education dataset containing around 105k students' reviews collected from Coursera and on a dataset comprising 5,989 students' feedback entries from traditional classroom settings. The experimental results indicate that the proposed framework attains promising performance in both aspect category identification and aspect sentiment classification. Moreover, the results suggest that the framework yields more accurate results than expensive, labour-intensive sentiment analysis techniques that rely heavily on manually labelled data.
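To illustrate how a weak supervision signal can be propagated from a small seed lexicon to unlabelled reviews, a minimal Python sketch is given below. The aspect names, seed keywords, and the TF-IDF plus logistic-regression pipeline are illustrative assumptions, not the components reported in the paper.

```python
# Minimal sketch of weakly supervised aspect-category identification.
# The aspects and seed keywords below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

SEED_KEYWORDS = {
    "instructor": ["instructor", "teacher", "professor", "lecturer"],
    "content":    ["material", "content", "topic", "syllabus"],
    "assessment": ["quiz", "assignment", "exam", "grading"],
}

def weak_label(review: str):
    """Return the first aspect whose seed keyword appears in the review, else None."""
    text = review.lower()
    for aspect, keywords in SEED_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return aspect
    return None

def train_aspect_classifier(reviews):
    # Keep only reviews the seed lexicon could weakly label.
    labelled = [(r, weak_label(r)) for r in reviews]
    labelled = [(r, y) for r, y in labelled if y is not None]
    texts, labels = zip(*labelled)

    # Propagate the weak signal: fit a supervised model on the weak labels
    # so it can generalise to reviews the lexicon does not cover.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    X = vectorizer.fit_transform(texts)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    return vectorizer, clf

# Usage:
#   vec, clf = train_aspect_classifier(all_reviews)
#   clf.predict(vec.transform(["The quizzes were far too hard"]))
```

The key design point this sketch reflects is that no manual annotation is required: the trained classifier, not the lexicon, is applied to new reviews, so coverage extends beyond the seed keywords.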
In this paper, we present our design approach for bridging outdoor and indoor learning activities with the support of mobile and positioning technologies. To illustrate these research efforts, we describe the outcomes of two trials we conducted with more than 50 elementary school children. The activities presented in this paper aim to support the notion of situated learning with mobile and positioning technologies and to promote new ways of collaboration based on the users' learning context. The results of our experiments indicate that children enjoyed learning activities in which mobile devices are used in situ, supporting the activities in the context in which they take place.
Continuous change changes everything; it introduces various uncertainties, which may harm the sustainability of software systems. We argue that integrating runtime adaptation and evolution is crucial for the sustainability of software systems. Realising this integration calls for a radical change in the way software is developed and operated. Our position is that we need to Design for Sustainability. To that end, we present: (i) the AdEpS model (Adaptation and Evolution processes for Sustainability) to handle and mitigate uncertainties by means of integrating runtime adaptation and evolution, and (ii) a set of engineering principles to design software systems that facilitate the application of the AdEpS model to build sustainable software.
The advent of MOOC platforms has brought an abundance of educational video content, which makes selecting the best-fitting content for a specific topic a lengthy process. To tackle this challenge, in this paper we report our research efforts in using deep learning techniques to manage and classify educational content for various search and retrieval applications, in order to provide a more personalized learning experience. In this regard, we propose a framework that takes advantage of feature representations and deep learning to classify video lectures in a MOOC setting and thereby aid effective search and retrieval. The framework consists of three main modules. The first module, pre-processing, handles video-to-text conversion. The second module, transcript representation, maps the text of lecture transcripts into a vector space by exploiting different representation techniques, including bag-of-words, embeddings, transfer learning, and topic modeling. The final module covers classifiers, whose aim is to label video lectures with the appropriate categories. Two deep learning models, namely a feed-forward deep neural network (DNN) and a convolutional neural network (CNN), are examined as part of the classifier module. Multiple simulations are carried out on a large-scale real dataset using various feature representations and classification techniques to test and validate the proposed framework.
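A minimal sketch of the second and third modules is given below, assuming transcripts have already been extracted as plain text by the pre-processing stage. The TF-IDF representation, layer sizes, and training settings are illustrative choices, not the configuration reported in the paper.

```python
# Sketch of transcript representation (module 2) feeding a feed-forward DNN
# classifier (module 3); hyperparameters here are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras import layers, models

def build_dnn(input_dim: int, num_classes: int) -> models.Model:
    """Feed-forward DNN over a bag-of-words / TF-IDF transcript representation."""
    model = models.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def train_lecture_classifier(transcripts, categories, epochs=5):
    # Module 2: represent transcripts in vector space (embeddings or topic
    # models could be swapped in for TF-IDF).
    vectorizer = TfidfVectorizer(max_features=20000)
    X = vectorizer.fit_transform(transcripts).toarray().astype("float32")

    # Module 3: classifier that assigns each lecture to a category.
    encoder = LabelEncoder()
    y = encoder.fit_transform(categories)
    model = build_dnn(X.shape[1], len(encoder.classes_))
    model.fit(X, y, epochs=epochs, batch_size=32, validation_split=0.1)
    return vectorizer, encoder, model
```

Swapping the dense layers for one-dimensional convolutions over an embedding sequence would give the CNN variant mentioned in the abstract; the overall pipeline structure stays the same.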