Earlier multi-layer learning networks are prone to falling into local extrema during supervised learning. If the training samples sufficiently cover future samples, the learned multi-layer weights can generalize well to new test samples. This paper studies machine short-text sentiment classification based on Bayesian network and deep neural network algorithms. It first introduces Bayesian networks and deep neural networks, then analyzes comments from popular emotional communication platforms such as Twitter and Weibo. Modeling techniques are applied to popular reviews to conduct classification experiments on unigrams, bigrams, parts of speech, dependency labels, and triplet dependencies. The results show that classification accuracy ranges from a minimum of 0.8116 to a maximum of 0.87, obtained when the triplet dependency feature uses 12,000 input nodes; the reconstruction error of the Boltzmann machine is bounded between 7.3175 and 26.5429, and the average classification accuracy is 0.8301. These results illustrate the advantage of triplet dependency features for text representation in sentiment classification, and show that Bayesian networks and deep neural networks perform well on short-text emotion classification.
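The unigram and bigram features mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name `ngram_features` and the example sentence are illustrative assumptions:

```python
def ngram_features(tokens, n):
    """Return all contiguous n-grams (as tuples) from a token sequence.
    A unigram is the n=1 case, a bigram the n=2 case."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Hypothetical short text, tokenized by whitespace
tokens = "the movie was not good".split()
unigrams = ngram_features(tokens, 1)  # [('the',), ('movie',), ...]
bigrams = ngram_features(tokens, 2)   # [('the', 'movie'), ('movie', 'was'), ...]
```

In a sentiment classifier, such n-grams would typically be mapped to input nodes of the network, e.g. via a vocabulary index or bag-of-words count vector.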
With the development of Internet technology and the shift of news media toward informatization, news images, texts, and other information have grown explosively. News image and text classification can effectively address the resulting disorder of news information. Early news image-text classification relied on hand-built classifiers following specific classification rules, but the error rate was high and the classification speed slow. Machine learning later replaced manual classification of news image texts; although efficiency improved greatly, classification remained time-consuming. The bidirectional encoder representations from transformers (BERT) model uses the transformer encoder to pretrain on news image text and improve classification efficiency. Comparing machine learning and BERT models on news image-text classification, experiments showed that the BERT-based algorithm achieved average precision, recall, and F1 values of 96.6%, 95.7%, and 96.1%, respectively, about 5 percentage points higher on all three criteria than the machine learning model. Its classification speed was 1.8 times that of a news image-text classification algorithm based on a support vector machine. Therefore, a BERT-based news image-text classification algorithm can improve both the accuracy and efficiency of news image-text classification.
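The precision, recall, and F1 metrics cited above are standard quantities; a minimal sketch of how they are computed from true-positive, false-positive, and false-negative counts (the counts here are illustrative, not the paper's data):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts.
    F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one class
p, r, f = precision_recall_f1(tp=90, fp=10, fn=10)
# p == 0.9, r == 0.9, f == 0.9
```

Reported "average" precision/recall/F1 values are then typically macro- or micro-averages of these per-class figures over all news categories.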