Deep learning approaches can be applied to large amounts of data to simplify and improve the engineering of automated decision-making, rather than relying on human-encoded heuristics. The need for faster and more effective decisions about systems, processes, and applications gave rise to many artificial intelligence-motivated approaches such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and fuzzy analytics. Deep learning deploys multiple layers of cascaded processing elements to perform feature extraction and transformation, learning multiple levels of representation that correspond to distinct levels of abstraction. Applications of deep learning algorithms include weather forecasting, object recognition, stock market forecasting, medical diagnosis, and emergency warning systems. This paper investigates the performance of the deep learning approach with respect to processing components, data representation, and data types. To achieve this, a deep learning algorithm based on a long short-term memory recurrent neural network (LSTM-RNN) was used to learn hidden patterns and features in textual and image datasets. The outcomes reveal that the image-based deep learning model was faster than the sentiment (text)-based model (3.49 min versus 18.25 min), owing to the well-defined patterns in its data representation, and that the LSTM-RNN on images also achieved higher classification accuracy (96.50% versus 85.69%) due to the network architecture, processing elements, and features of the underlying datasets.
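For readers who want a concrete picture of the model family this abstract refers to, the following is a minimal sketch (not the authors' implementation) of an LSTM-based text classifier in Keras; the vocabulary size, sequence length, layer widths, and dummy data are all illustrative assumptions.

```python
# Minimal sketch of an LSTM-RNN text (sentiment) classifier.
# All sizes and the random data are assumptions, not the paper's setup.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

VOCAB_SIZE = 10_000   # assumed vocabulary size
SEQ_LEN = 100         # assumed padded sequence length

model = Sequential([
    Embedding(VOCAB_SIZE, 64),        # token ids -> dense vectors
    LSTM(128),                        # recurrent feature extractor
    Dense(1, activation="sigmoid"),   # binary sentiment output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data standing in for a tokenized, padded sentiment corpus.
X = np.random.randint(0, VOCAB_SIZE, size=(256, SEQ_LEN))
y = np.random.randint(0, 2, size=(256,))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```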
The educational sector has seen extensive research on predicting student performance with supervised and unsupervised machine learning algorithms. Most student-performance data are imbalanced: the final classes are not equally represented. Along with the size of the dataset, this problem affects a model's prediction accuracy. In this paper, the Synthetic Minority Oversampling Technique (SMOTE) filter is applied to the dataset to measure its effect on model accuracy. Four feature selection approaches are applied to find the attributes most correlated with student performance. The SMOTE filter is examined before and after applying the feature selection approaches to measure model accuracy with supervised and unsupervised algorithms. Three supervised and three unsupervised algorithms are examined with each feature selection approach to predict student performance. The findings show that the supervised algorithms (LMT, Simple Logistic, and Random Forest) achieved high accuracy after applying SMOTE without feature selection, while the prediction accuracies of the unsupervised algorithms (Canopy, EM, and Farthest First) improved after applying the feature selection approaches together with the SMOTE filter.
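The abstract describes a Weka-based workflow; the sketch below is a rough Python analogue under assumed data, combining a feature-selection step, the SMOTE filter from imbalanced-learn, and a Random Forest classifier, to show how the pieces of such a pipeline fit together.

```python
# Rough Python analogue of the SMOTE + feature-selection + classifier workflow.
# The synthetic dataset and all parameter choices are assumptions.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic imbalanced "student performance" data standing in for the real dataset.
X, y = make_classification(n_samples=500, n_features=20, weights=[0.85, 0.15],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Feature selection: keep the attributes most informative about the final class.
selector = SelectKBest(mutual_info_classif, k=8).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# SMOTE: synthesize minority-class examples so the classes are balanced.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_tr_sel, y_tr)

clf = RandomForestClassifier(random_state=42).fit(X_bal, y_bal)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te_sel)))
```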
For learning environments such as schools and colleges, predicting student performance is one of the most crucial topics, since it aids in building practical systems that, among other things, promote academic performance and prevent dropout. Decision-makers and stakeholders in educational institutions seek tools that help predict the number of courses a student will fail; such tools can also help identify and investigate the factors that led to that failure. In this paper, several supervised machine learning algorithms are investigated to find the optimal algorithm for predicting the number of failed courses. The imbalanced dataset is handled with the Synthetic Minority Oversampling Technique (SMOTE) to obtain an equal representation of the final class. Two feature selection approaches are implemented to find the one that produces the most accurate predictions: a wrapper with Particle Swarm Optimization (PSO) to find the optimal subset of features, and Info Gain with a ranker to find the individual features most correlated with the final class. Several supervised algorithms are implemented (Naïve Bayes, Random Forest, Random Tree, C4.5, LMT, Logistic, and the Sequential Minimal Optimization algorithm (SMO)). The findings show that the wrapper with PSO combined with SMOTE outperforms the Info Gain filter with SMOTE and improves the performance of the algorithms. Random Forest outperforms the other supervised machine learning algorithms, with an average TP rate and recall of 85.6% and an ROC area of 96.7%. Povzetek: A method for predicting students' success using machine learning is described.
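As a rough illustration of the two feature-selection routes described above, the following Python sketch ranks features by mutual information (an Info Gain analogue) and uses scikit-learn's SequentialFeatureSelector as a stand-in for the wrapper-with-PSO search (PSO itself is not shown), then balances the training data with SMOTE and reports recall and ROC AUC for a Random Forest. The synthetic dataset and every parameter are assumptions, not the paper's configuration.

```python
# Sketch: Info Gain-style ranking vs. a wrapper-style subset search, then SMOTE.
# SequentialFeatureSelector is only a stand-in for the paper's PSO wrapper.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif, SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, roc_auc_score

X, y = make_classification(n_samples=600, n_features=15, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Info Gain analogue: rank individual features by mutual information with the class.
ranking = np.argsort(mutual_info_classif(X_tr, y_tr, random_state=0))[::-1]
print("feature ranking:", ranking)

# Wrapper-style search for a feature subset (stand-in for wrapper + PSO).
rf = RandomForestClassifier(random_state=0)
wrapper = SequentialFeatureSelector(rf, n_features_to_select=6).fit(X_tr, y_tr)
X_tr_w, X_te_w = wrapper.transform(X_tr), wrapper.transform(X_te)

# Balance the training data with SMOTE, then train and evaluate.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr_w, y_tr)
rf.fit(X_bal, y_bal)
print("recall:", recall_score(y_te, rf.predict(X_te_w)))
print("ROC AUC:", roc_auc_score(y_te, rf.predict_proba(X_te_w)[:, 1]))
```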