Prediction and analysis of stock market data play a vital role in today's economy. The methods used for prediction can be classified into (1) linear algorithms, such as Moving Average (MA) and Auto-Regressive Integrated Moving Average (ARIMA), and (2) non-linear models, such as Artificial Neural Networks and deep learning. In this work, we use the results of previous research papers to demonstrate the potential of models such as ARIMA, Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM) for forecasting the stock price of an organization from its available historical data. We then implement some of these methods to check and compare their efficiency on the same problem. We used the Independently Recurrent Neural Network (IndRNN) to explore better efficiency for stock prediction and found that it gives better accuracy than the currently prevailing methods. We also propose an enhancement to IndRNN by replacing its default activation function with a more effective one, the Parametric Rectified Linear Unit (PReLU). Our proposed approach can serve as an efficient alternative to today's typical methods for predicting time-series data.
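The enhancement described above can be sketched in a few lines: an IndRNN cell keeps a per-neuron (element-wise) recurrent weight vector instead of a full recurrent matrix, and the proposed variant swaps the usual ReLU activation for PReLU. This is a minimal illustrative sketch, not the paper's implementation; all names, sizes, and the sample price values are hypothetical.

```python
import numpy as np

def prelu(x, a=0.25):
    # Parametric ReLU: negative inputs are scaled by a learnable slope a
    return np.where(x > 0, x, a * x)

class IndRNNCell:
    """Minimal IndRNN cell sketch: the recurrent connection is an
    element-wise weight vector u (one weight per neuron), not a matrix."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 0.1, (hidden_size, input_size))  # input weights
        self.u = rng.uniform(0, 1, hidden_size)                 # per-neuron recurrent weights
        self.b = np.zeros(hidden_size)

    def step(self, x, h_prev):
        # h_t = PReLU(W x_t + u * h_{t-1} + b), with * element-wise
        return prelu(self.W @ x + self.u * h_prev + self.b)

# Run a short univariate price sequence (hypothetical values) through the cell
cell = IndRNNCell(input_size=1, hidden_size=4)
h = np.zeros(4)
for price in [101.2, 100.8, 102.5]:
    h = cell.step(np.array([price]), h)
print(h.shape)
```

Because the recurrent weights are element-wise, each neuron's gradient through time depends only on its own scalar weight, which is what lets IndRNN train over long sequences without the vanishing/exploding behavior of a full recurrent matrix.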
This erratum is published because a typo related to a co-author's name was discovered: Alsahref should read Alsharef. The original article has been updated. Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The automated identification of toxicity in text is a crucial area of text analysis, since social media is replete with unfiltered content ranging from mildly abusive to downright hateful. Researchers have found unintended bias and unfairness caused by training datasets, leading to inaccurate classification of toxic words in context. In this paper, several approaches for locating toxicity in text are assessed and presented, aiming to enhance the overall quality of text classification. General unsupervised methods were used, relying on state-of-the-art models and external embeddings, to improve accuracy while relieving bias and enhancing the F1-score. The suggested approaches combine a long short-term memory (LSTM) deep learning model with GloVe word embeddings, and LSTM with word embeddings generated by the Bidirectional Encoder Representations from Transformers (BERT), respectively. These models were trained and tested on large secondary qualitative data containing a large number of comments classified as toxic or not. Results show that an acceptable accuracy of 94% and an F1-score of 0.89 were achieved using LSTM with BERT word embeddings in the binary classification of comments (toxic and nontoxic). The combination of LSTM and BERT performed better than both LSTM alone and LSTM with GloVe word embeddings. This paper addresses the problem of classifying comments with high accuracy by pretraining models on larger corpora of text (high-quality word embeddings) rather than relying on the training data alone.
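The pipeline described above (pretrained token embeddings fed through an LSTM, with a sigmoid head for the binary toxic/nontoxic decision) can be sketched as follows. This is a toy numpy illustration under stated assumptions, not the paper's model: the random vectors stand in for BERT or GloVe token embeddings, and all sizes and names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LSTMCell:
    """Toy LSTM cell consuming precomputed token embeddings
    (stand-ins for BERT/GloVe vectors); a sketch, not the paper's model."""
    def __init__(self, emb_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        k = emb_size + hidden_size
        # one stacked weight matrix for the input, forget, output and cell gates
        self.W = rng.normal(0, 0.1, (4 * hidden_size, k))
        self.b = np.zeros(4 * hidden_size)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # update cell state
        h = sigmoid(o) * np.tanh(c)                   # new hidden state
        return h, c

# Classify one comment: run its token embeddings through the LSTM,
# then apply a sigmoid head to the final hidden state.
emb_size, hidden = 8, 16
cell = LSTMCell(emb_size, hidden)
rng = np.random.default_rng(1)
tokens = rng.normal(size=(5, emb_size))   # 5 tokens, stand-in embeddings
w_out = rng.normal(0, 0.1, hidden)        # untrained classifier weights
h, c = np.zeros(hidden), np.zeros(hidden)
for x in tokens:
    h, c = cell.step(x, h, c)
p_toxic = float(sigmoid(w_out @ h))       # probability the comment is toxic
print(p_toxic)
```

The design point the abstract makes is that quality comes mostly from the embeddings: swapping the random stand-ins for contextual BERT vectors, while keeping the LSTM head unchanged, is what produced the reported gain over GloVe and plain LSTM.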