News media exist to inform the public at large, and the importance of understanding the semantics of news coverage is hard to overstate. Traditionally, a news text is assigned to a single category; however, a single news item may contain information from more than one domain. This paper proposes a multi-label text classification model for news. The proposed model is an automated expert system designed to optimize a CNN's classification of multi-label news items. The performance of a CNN depends heavily on its hyperparameters, and tuning their values manually is cumbersome and inefficient. The spotted hyena optimizer (SHO), a high-level metaheuristic optimization algorithm, offers strong exploration and exploitation capabilities. SHO generates a population of candidate solutions, each a set of hyperparameters to be optimized, and the process is repeated until the desired optimal solution is reached. SHO is integrated to automate the tuning of the CNN's hyperparameters: learning rate, momentum, number of epochs, batch size, dropout, number of nodes, and activation function. Four publicly available news datasets are used to evaluate the proposed model. The tuned hyperparameters and faster convergence of the proposed model yield higher multi-label news classification performance than a baseline CNN and other CNN optimization approaches. The resulting accuracies are 93.6%, 90.8%, 68.7%, and 95.4% for RCV1-v2, Reuters-21578, Slashdot, and NELA-GT-2019, respectively.
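As a rough illustration of how such a metaheuristic search over CNN hyperparameters can be organized, the sketch below implements a simplified population-based loop inspired by SHO's encircling/attacking update. The search bounds, the decaying control parameter, and the `fitness` placeholder (which in a real system would train and validate the CNN on the news data) are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

# Hypothetical search space for the CNN hyperparameters named in the abstract.
# Bounds are illustrative, not taken from the paper.
SPACE = {
    "learning_rate": (1e-4, 1e-1),   # continuous
    "momentum":      (0.5, 0.99),    # continuous
    "epochs":        (5, 50),        # integer
    "batch_size":    (16, 256),      # integer
    "dropout":       (0.1, 0.6),     # continuous
    "nodes":         (32, 512),      # integer
}
ACTIVATIONS = ["relu", "tanh", "elu"]  # categorical dimension, handled separately

def decode(vec, act_idx):
    """Map a real-valued position vector back to concrete hyperparameters."""
    hp = {}
    for v, k in zip(vec, SPACE):
        lo, hi = SPACE[k]
        v = float(np.clip(v, lo, hi))
        hp[k] = int(round(v)) if isinstance(lo, int) else v
    hp["activation"] = ACTIVATIONS[act_idx]
    return hp

def fitness(hp):
    """Placeholder objective. In the real model this would train the CNN with
    `hp` and return validation accuracy on the news dataset."""
    return -((hp["learning_rate"] - 0.01) ** 2) - 0.01 * hp["dropout"]  # dummy

def sho_search(pop_size=10, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    lows = np.array([SPACE[k][0] for k in SPACE], dtype=float)
    highs = np.array([SPACE[k][1] for k in SPACE], dtype=float)
    pop = rng.uniform(lows, highs, size=(pop_size, len(SPACE)))
    acts = rng.integers(0, len(ACTIVATIONS), size=pop_size)
    scores = np.array([fitness(decode(p, a)) for p, a in zip(pop, acts)])
    for t in range(iters):
        h = 5.0 * (1 - t / iters)              # control parameter decays 5 -> 0
        best = pop[scores.argmax()].copy()     # "prey" = best hyperparameter set
        for i in range(pop_size):
            B = 2.0 * rng.random(len(SPACE))
            E = 2.0 * h * rng.random(len(SPACE)) - h
            D = np.abs(B * best - pop[i])      # encircling distance
            pop[i] = np.clip(best - E * D, lows, highs)
            if rng.random() < 0.2:             # occasional jump in the categorical dim
                acts[i] = rng.integers(0, len(ACTIVATIONS))
            scores[i] = fitness(decode(pop[i], acts[i]))
    k = scores.argmax()
    return decode(pop[k], acts[k]), scores[k]

if __name__ == "__main__":
    best_hp, best_score = sho_search()
    print(best_hp, best_score)
```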
Numerous studies have been conducted to meet the growing need for analytic tools capable of processing the increasing amounts of textual data available online, and sentiment analysis (SA) has emerged as a frontrunner in this field. Current studies focus on the English language, while minority languages such as Roman Urdu are largely ignored because of their complex syntax and lexical variety. In recent years, deep learning (DL) models have become the standard in this field. Despite their early success, the full potential of DL models for text SA has not yet been explored. CNNs have achieved leading accuracy for sentiment analysis, yet they still have shortcomings. First, CNNs need a significant amount of data to train. Second, a CNN presumes that all words have the same impact on the polarity of a statement. To address these gaps, this study proposes a CNN with an attention mechanism and transfer learning to improve SA performance. In experiments, the proposed model achieves higher classification accuracy than state-of-the-art methods.
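A minimal sketch of the kind of attention-augmented text CNN described here is given below, assuming PyTorch. The pretrained-embedding warm start stands in for the transfer-learning component, attention pooling replaces plain max-pooling so that words contribute unequally to the polarity decision, and all layer sizes are illustrative rather than the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveCNN(nn.Module):
    """Sketch of a text CNN whose convolutional features are pooled with a
    learned attention weighting instead of uniform/max pooling."""

    def __init__(self, vocab_size, emb_dim=300, n_filters=128,
                 kernel_size=3, n_classes=2, pretrained=None):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        if pretrained is not None:                        # transfer learning:
            self.embedding.weight.data.copy_(pretrained)  # warm-start from
            self.embedding.weight.requires_grad = False   # pretrained vectors
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=1)
        self.attn = nn.Linear(n_filters, 1)               # scores each position
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, tokens):                 # tokens: (batch, seq_len) int ids
        x = self.embedding(tokens)             # (batch, seq_len, emb_dim)
        x = F.relu(self.conv(x.transpose(1, 2)))          # (batch, filters, seq_len)
        x = x.transpose(1, 2)                  # (batch, seq_len, filters)
        weights = torch.softmax(self.attn(x).squeeze(-1), dim=1)   # word weights
        pooled = torch.bmm(weights.unsqueeze(1), x).squeeze(1)     # weighted sum
        return self.fc(pooled)

# Example forward pass with a dummy batch of token ids.
model = AttentiveCNN(vocab_size=10_000)
logits = model(torch.randint(1, 10_000, (4, 50)))
print(logits.shape)  # torch.Size([4, 2])
```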
Sentiment analysis is an active research area within data mining. Most researchers employ deep learning models for sentiment analysis because of their ability to self-learn and to process vast amounts of data. However, the performance of deep learning models depends on the values of their hyperparameters, and determining suitable values is a cumbersome task. The goal of this study is to increase the accuracy of stacked autoencoders for sentiment analysis using a heuristic optimization approach. We propose a hybrid model, GA(SAE)-SVM, that combines a genetic algorithm (GA), a stacked autoencoder (SAE), and a support vector machine (SVM) for fine-grained sentiment analysis. Features are extracted using continuous bag-of-words (CBOW) and then fed into the SAE. In the proposed GA(SAE)-SVM, the hyperparameters of the SAE are optimized using the GA, and the features extracted by the SAE are passed to the SVM for final classification. A comparison with random search and grid search for parameter optimization shows that GA optimization is faster than grid search and selects better values than random search, resulting in improved accuracy. We evaluate the proposed model on eight benchmark datasets, where it outperforms baseline and state-of-the-art techniques.
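The sketch below illustrates the SAE-to-SVM portion of such a pipeline under stated assumptions: random vectors stand in for CBOW document features, the autoencoder layers are trained greedily with fixed sizes and learning rate, and the GA step that would normally select those hyperparameters is left out. It is a minimal sketch, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

def train_ae_layer(X, hidden, lr=1e-3, epochs=100):
    """Fit one dense autoencoder layer on X and return the trained encoder.
    `hidden` and `lr` are the kind of values the GA would search over."""
    X = torch.as_tensor(X, dtype=torch.float32)
    enc = nn.Linear(X.shape[1], hidden)
    dec = nn.Linear(hidden, X.shape[1])
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(torch.sigmoid(enc(X))), X)
        loss.backward()
        opt.step()
    return enc

def encode(encoders, X):
    """Push features through the stack of trained encoder layers."""
    Z = torch.as_tensor(X, dtype=torch.float32)
    with torch.no_grad():
        for enc in encoders:
            Z = torch.sigmoid(enc(Z))
    return Z.numpy()

# Illustrative run on random vectors standing in for CBOW document features.
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(200, 300)), rng.integers(0, 5, 200)  # 5 sentiment grades
X_te, y_te = rng.normal(size=(50, 300)), rng.integers(0, 5, 50)

encoders, Z = [], X_tr
for h in (128, 64):            # layer widths the GA would normally choose
    encoders.append(train_ae_layer(Z, hidden=h))
    Z = encode(encoders, X_tr)

svm = SVC(kernel="rbf").fit(Z, y_tr)
print("accuracy:", svm.score(encode(encoders, X_te), y_te))
```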
News media agencies are known to publish misinformation, disinformation, and propaganda for money, wider news propagation, political influence, or other unfair gains. The exponential increase in the use of social media has also contributed to the frequent spread of fake news. This study extends the concept of symmetry into deep learning approaches for advanced natural language processing, thereby improving the identification of fake news and propaganda. This paper proposes a hybrid HyproBert model for automatic fake news detection. First, HyproBert uses DistilBERT for tokenization and word embeddings. The embeddings are fed into a convolution layer to highlight and extract spatial features. The output is then passed to a BiGRU to extract contextual features. A capsule network (CapsNet) with a self-attention layer then processes the BiGRU output to model the hierarchical relationships among the spatial features. Finally, a dense layer combines all the features for classification. The proposed HyproBert model is evaluated on two fake news datasets (ISOT and FA-KES) and achieves higher performance than baseline and state-of-the-art models.
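A simplified sketch of the staged pipeline described above is shown below, assuming PyTorch and Hugging Face transformers. The capsule-network component is omitted for brevity, with a plain multi-head self-attention block standing in for it, so this is an approximation of the architecture for orientation, not the authors' HyproBert implementation; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn
from transformers import DistilBertModel, DistilBertTokenizerFast

class HyproBertSketch(nn.Module):
    """Simplified sketch of the staged pipeline: DistilBERT embeddings ->
    convolution (spatial features) -> BiGRU (contextual features) ->
    self-attention -> dense classifier. The paper's CapsNet layer is
    replaced here by standard multi-head self-attention."""

    def __init__(self, n_filters=128, gru_hidden=64, n_classes=2):
        super().__init__()
        self.bert = DistilBertModel.from_pretrained("distilbert-base-uncased")
        self.conv = nn.Conv1d(self.bert.config.dim, n_filters, kernel_size=3, padding=1)
        self.bigru = nn.GRU(n_filters, gru_hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * gru_hidden, num_heads=4, batch_first=True)
        self.fc = nn.Linear(2 * gru_hidden, n_classes)

    def forward(self, input_ids, attention_mask):
        emb = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        x = torch.relu(self.conv(emb.transpose(1, 2))).transpose(1, 2)  # spatial
        x, _ = self.bigru(x)                    # contextual, (batch, seq, 2*hidden)
        x, _ = self.attn(x, x, x)               # self-attention over positions
        return self.fc(x.mean(dim=1))           # pool, then classify

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
batch = tokenizer(["a suspicious headline", "a routine news report"],
                  padding=True, return_tensors="pt")
model = HyproBertSketch()
print(model(batch["input_ids"], batch["attention_mask"]).shape)  # torch.Size([2, 2])
```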
Sentiment analysis has been extensively researched for a variety of languages. However, because readily available natural language processing resources are scarce, Urdu sentiment analysis still requires additional study. Urdu's rich morphological structure makes it a challenging language for text processing, and determining the optimal classifier is the most difficult aspect. Several studies have incorporated ensemble learning to boost performance by decreasing error rates and preventing overfitting; however, the baseline classifiers and the fusion procedure limit the performance of such ensemble approaches. This research makes several contributions toward incorporating the concept of symmetry into the deep learning model and architecture. First, it presents a new meta-learning ensemble (MLE) method for Urdu that fuses basic machine learning and deep learning models using two tiers of meta-classifiers; the proposed ensemble technique combines the predictions of inter- and intra-committee classifiers at two separate levels. Second, the performance of various committees of deep baseline classifiers is compared with that of the proposed ensemble model. Finally, the findings are extended by contrasting the efficiency of the proposed ensemble approach with that of other, more advanced ensemble techniques. Additionally, the proposed model reduces complexity and overfitting during training. The results show that the proposed MLE approach greatly enhances the classification accuracy of the baseline deep models.
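As an illustration of the two-tier idea, the sketch below nests scikit-learn StackingClassifier objects: each committee is fused by its own intra-committee meta-classifier, and a second-level meta-classifier fuses the committees. The base learners and the synthetic data are placeholders, not the paper's actual machine learning and deep learning committees or its Urdu corpus.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Intra-committee tier: each committee of base learners is fused by its own
# meta-classifier. The committee compositions here are illustrative stand-ins.
committee_a = StackingClassifier(
    estimators=[("svm", SVC(probability=True)), ("nb", GaussianNB())],
    final_estimator=LogisticRegression())
committee_b = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression())

# Inter-committee tier: a second-level meta-classifier fuses the committees.
ensemble = StackingClassifier(
    estimators=[("committee_a", committee_a), ("committee_b", committee_b)],
    final_estimator=LogisticRegression())

# Dummy feature matrix standing in for vectorized Urdu reviews.
X, y = make_classification(n_samples=600, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
print("accuracy:", ensemble.fit(X_tr, y_tr).score(X_te, y_te))
```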