Abstract- A learning environment generates massive knowledge by means of the services provided in MOOCs. Such knowledge is produced through the interactions of learning actors. This fact motivates researchers to put forward solutions for exploiting big data, drawing on learning analytics techniques as well as big data techniques applied to the educational field. In this context, the present article introduces a uniform model intended to facilitate the exploitation of the experiences produced by the interactions of pedagogical actors. The aim of the proposed model is to provide a unified analysis of the massive data generated by learning actors. The model first pre-processes the massive data produced in an e-learning system, and subsequently applies machine learning, defined by rules that measure the relevance of actors' knowledge. All the processing stages of this model are brought together in an algorithm that produces a learning actor knowledge tree.

Keywords- learning analytics, operational data, machine learning, big data analysis, knowledge management

Introduction
Currently, the field of education is developing rapidly throughout the world, owing to the changes brought about by the implementation of Massive Open Online Courses (MOOCs) [1]. A great many research projects have been funded to draw researchers' attention to this massive data and to in-depth studies of MOOC platforms (COURSERA, OPEN EDX, etc.). MOOCs generate big data in the form of activity traces. Such data are of three types: structured, semi-structured and unstructured. In [2], the author conducted an in-depth study of the types of data generated by the interactions of educational actors in online learning systems: structured data are found in databases, semi-structured data in XML and JSON files, and unstructured data in documents, video recordings, audio, etc.

Big data analysis [3] combines big data techniques with learning analytics. This combination makes it possible to integrate learning analytics (LA) algorithms with learning systems built on big data. Learning analytics comprises a set of algorithms used for the analysis and pre-processing of the massive data generated in MOOCs; two approaches are found, one supervised and one unsupervised [4,5]. Big data, on the other hand, reflects the tendency of actors to store massive data of different natures and to process them in parallel, following an architecture [6] built on three key elements: HDFS, MapReduce and YARN.

Research problem
The massive data generated by the services offered within MOOC systems are structured, semi-structured and unstructured. Given this fact, an in-depth analysis covering all the dimensions of this massive data is required. To this end, the author in [7] identifies three dimensions...
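The abstract refers to MapReduce-style parallel processing of activity traces but gives no code. The following is a minimal sketch, in plain Python rather than an actual Hadoop job, of how a map phase and a reduce phase could aggregate trace records per actor and action; the trace fields (actor id, action, resource) are illustrative assumptions, not the paper's actual schema.

```python
# Sketch of a MapReduce-style pass over activity-trace lines,
# counting interactions per (actor, action) pair.
from collections import defaultdict

def map_phase(trace_lines):
    """Emit one ((actor_id, action), 1) pair per raw trace line."""
    for line in trace_lines:
        actor_id, action, resource = line.strip().split("\t")[:3]
        yield (actor_id, action), 1

def reduce_phase(pairs):
    """Sum the counts per key, as a reducer would after the shuffle step."""
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

traces = [
    "a01\tforum_post\tcourse1/week2",
    "a01\tvideo_view\tcourse1/week2",
    "a02\tforum_post\tcourse1/week1",
]
print(reduce_phase(map_phase(traces)))
# {('a01', 'forum_post'): 1, ('a01', 'video_view'): 1, ('a02', 'forum_post'): 1}
```

In a real deployment the same map and reduce functions would run distributed over HDFS blocks under YARN; the sketch only shows the data flow.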
With the proliferation of distance learning platforms, in particular open-access ones such as Massive Open Online Courses (MOOCs), the learner is overwhelmed with data, not all of which serves his interests. The MOOC offers tools that allow learners to seek information, express their ideas, and participate in discussions in an online forum. This forum is a huge and constantly growing repository of rich data, yet exploiting it to find information relevant to the learner is difficult. Similarly, the tutor faces the difficult task of managing a large number of learners. To this end, a chatbot able to answer learners' requests in natural language is necessary for the smooth running of a course in the MOOC. The chatbot acts as an assistant and guide for both learners and tutors. However, chatbot responses come from a knowledge base, which must be relevant. Extracting knowledge to answer questions is a difficult task because of the number of MOOC participants. Learners' interactions with the MOOC platform generate massive amounts of information, particularly in discussion forums where they seek answers to their questions. Identifying and extracting knowledge from online forums relies on collaborative interactions between learners. In this article we propose a new approach for answering learners' questions in a relevant and instantaneous way through a natural-language chatbot. Our model is based on the LDA Bayesian statistical method, applied to threads posted in the forum, and classifies them to provide the learner with a semantically rich response. These threads, taken from the discussion forum in the form of knowledge, enrich the chatbot's knowledge base. In parallel, we map the extracted knowledge to an ontology, to provide the learner with pedagogical resources that serve as learning support.
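The abstract names LDA as the method for classifying forum threads but does not show its pipeline. Below is an illustrative sketch using scikit-learn, not the authors' own implementation; the sample threads and the number of topics are assumptions made for the example.

```python
# Topic modelling of forum threads with LDA (scikit-learn sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

threads = [
    "how do I submit the week 2 quiz before the deadline",
    "the video on gradient descent will not load in my browser",
    "where can I find extra exercises on gradient descent",
]

# Bag-of-words representation of the thread texts.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(threads)

# Fit LDA with a small, assumed number of topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_dist = lda.fit_transform(X)

# Assign each thread to its dominant topic; such labels could then be
# used to index the chatbot's knowledge base.
for text, dist in zip(threads, topic_dist):
    print(dist.argmax(), text)
```

The per-thread topic distribution is what would let the chatbot retrieve the most relevant cluster of forum knowledge for an incoming question.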
Higher education is increasingly integrating free learning management systems (LMS). The main objective of this integration is the automation of online educational processes for the benefit of all the actors who use these systems. These processes are developed through the integration and implementation of learning scenarios similar to those of traditional learning systems. LMS produce big data traces emerging from actors' interactions in online learning. However, adequate instruments for representing the knowledge extracted from these big traces are lacking. In this context, the research at hand aims to transform the big data produced by interactions into big knowledge that can be used in MOOCs by actors at a given learning level within a given learning domain, be it formal or informal. To achieve this objective, ontological approaches are taken, namely mapping, learning and enrichment, in addition to artificial intelligence-based approaches relevant to our research context. In this paper, we propose three interconnected algorithms for a better ontological representation of learning actors' knowledge, relying heavily on artificial intelligence approaches throughout the stages of this work. To verify the validity of our contribution, we implement an experiment on an example of knowledge sources.
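The abstract mentions mapping extracted knowledge to an ontology without detailing the representation. The following is a hypothetical sketch using rdflib; the class and property names (Actor, Knowledge, hasKnowledge) are assumptions for illustration and not the paper's actual schema.

```python
# Representing an extracted piece of learner knowledge in a small RDF
# ontology with rdflib (illustrative only).
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/mooc#")
g = Graph()
g.bind("ex", EX)

# Minimal T-Box: two classes and one relation between them.
g.add((EX.Actor, RDF.type, RDFS.Class))
g.add((EX.Knowledge, RDF.type, RDFS.Class))
g.add((EX.hasKnowledge, RDFS.domain, EX.Actor))
g.add((EX.hasKnowledge, RDFS.range, EX.Knowledge))

# A-Box: one learner linked to a knowledge item extracted from a forum thread.
g.add((EX.learner_01, RDF.type, EX.Actor))
g.add((EX.k_42, RDF.type, EX.Knowledge))
g.add((EX.k_42, RDFS.label, Literal("gradient descent explanation")))
g.add((EX.learner_01, EX.hasKnowledge, EX.k_42))

print(g.serialize(format="turtle"))
```

Ontology enrichment, in this setting, would amount to adding further individuals and relations of this kind as new knowledge is extracted from the traces.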