With the rapid development of the mobile internet, people increasingly rely on the internet to comment on products and stores, and sentiment classification of these comments has become a research hotspot. Among existing methods, deep learning is widely applied to text classification. To address information loss, weak context modeling and related problems, this paper improves on the Transformer model to reduce training difficulty and training time while achieving higher overall recall and accuracy in text sentiment classification. The Transformer replaces the traditional convolutional neural network (CNN) and recurrent neural network (RNN) and is based entirely on the attention mechanism, which improves training speed and reduces training difficulty. Taking e-commerce reviews as the research object and applying deep learning theory, the text is first preprocessed by word vectorization; the instance normalization (IN) method and the GELU activation function are then applied on top of the original model to analyze the emotional tendencies of online users toward stores or products. The experimental results show that our method improves recall by 9.71%, 6.05%, 5.58% and 5.12% over BiLSTM, the Naive Bayes model, the serial BiLSTM_CNN model and BiLSTM with an attention mechanism, respectively, and approaches the peak F1 value among the tested models. These findings show that our method improves text sentiment classification accuracy and can be effectively applied to text classification.
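The abstract does not give implementation details for the IN and GELU changes; the PyTorch sketch below is a minimal, assumed illustration of an encoder block in which LayerNorm is swapped for instance normalization and ReLU for GELU. All module names and hyper-parameters are illustrative, not the authors' released code.

```python
# Hypothetical sketch: a Transformer encoder block with Instance
# Normalization (IN) in place of LayerNorm and GELU in place of ReLU.
import torch
import torch.nn as nn

class INTransformerBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=8, d_ff=1024, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        # InstanceNorm1d normalizes each feature channel per sample
        # across the sequence dimension (expects N x C x L input).
        self.norm1 = nn.InstanceNorm1d(d_model, affine=True)
        self.norm2 = nn.InstanceNorm1d(d_model, affine=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),                 # GELU instead of ReLU
            nn.Dropout(dropout),
            nn.Linear(d_ff, d_model),
        )
        self.drop = nn.Dropout(dropout)

    def _in_norm(self, norm, x):
        # x: (batch, seq_len, d_model) -> transpose for InstanceNorm1d.
        return norm(x.transpose(1, 2)).transpose(1, 2)

    def forward(self, x, key_padding_mask=None):
        attn_out, _ = self.attn(x, x, x, key_padding_mask=key_padding_mask)
        x = self._in_norm(self.norm1, x + self.drop(attn_out))
        x = self._in_norm(self.norm2, x + self.drop(self.ffn(x)))
        return x
```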
This is the second time SRCB has participated in WAT. This paper describes our neural machine translation systems for the shared translation tasks of WAT 2019. We participated in the ASPEC tasks and submitted results for four language pairs: English-Japanese, Japanese-English, Chinese-Japanese, and Japanese-Chinese. We used the Transformer model as the baseline and experimented with relative position representations, data augmentation, deeper models, and ensembling. Experiments show that all of these methods yield substantial improvements.
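The abstract does not specify which relative-position variant was used; the single-head PyTorch sketch below follows the key-side relative position embeddings of Shaw et al. (2018) as one common formulation. The clipping distance and layer sizes are assumptions for illustration only.

```python
# Minimal single-head self-attention with relative position representations
# (key-side embeddings indexed by clipped relative distance).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelPosSelfAttention(nn.Module):
    def __init__(self, d_model=512, max_rel_dist=16):
        super().__init__()
        self.d_model = d_model
        self.max_rel_dist = max_rel_dist
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # One embedding per clipped relative distance in [-k, k].
        self.rel_k = nn.Embedding(2 * max_rel_dist + 1, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        b, n, d = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # Relative distances j - i, clipped and shifted to be >= 0.
        pos = torch.arange(n, device=x.device)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_rel_dist,
                                                  self.max_rel_dist)
        a_k = self.rel_k(rel + self.max_rel_dist)          # (n, n, d)

        # Content term plus position term: q_i . (k_j + a_ij).
        logits = torch.matmul(q, k.transpose(-2, -1))      # (b, n, n)
        logits = logits + torch.einsum('bid,ijd->bij', q, a_k)
        attn = F.softmax(logits / d ** 0.5, dim=-1)
        return torch.matmul(attn, v)
```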
Against the background of the increasingly severe global greenhouse effect, China's 14th Five-Year Plan proposes to "promote green development and the harmonious coexistence of man and nature", providing a new platform for faster and better low-carbon development. As China's economy enters a stage of high-quality development in the new era, the low-carbon economy is of great significance to the overall green transformation of China's economic and social development. To assess the development level of China's low-carbon economy, this paper estimates the carbon emissions and carbon emission intensity of energy consumption from 2008 to 2017, applies the LMDI model to decompose the factors influencing carbon emissions, analyzes the contribution rate of each driving factor, and proposes countermeasures for energy saving, emission reduction and low-carbon development. The results show that economic growth and energy intensity are the largest factors driving and suppressing carbon emissions, respectively. Measures should be taken to improve the energy structure, increase energy utilization efficiency, develop low-carbon industries, and promote low-carbon lifestyles.
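The abstract does not spell out the decomposition identity; a commonly used additive LMDI-I scheme over a Kaya-type identity is sketched below, with the factor set (emission coefficient, energy structure, energy intensity, economic growth, population) assumed for illustration.

```latex
% Kaya-type identity: total emissions C as a product of driving factors,
% summed over energy types i (E_i: consumption of energy type i,
% E: total energy, G: GDP, P: population).
\[
  C \;=\; \sum_i \frac{C_i}{E_i} \cdot \frac{E_i}{E}
          \cdot \frac{E}{G} \cdot \frac{G}{P} \cdot P
\]
% Additive LMDI-I: contribution of factor x to the change in emissions
% between base year 0 and target year T, using the logarithmic mean weight.
\[
  \Delta C_{x} \;=\; \sum_i \frac{C_i^{T}-C_i^{0}}{\ln C_i^{T}-\ln C_i^{0}}
                 \,\ln\!\frac{x_i^{T}}{x_i^{0}},
  \qquad
  \Delta C \;=\; C^{T}-C^{0} \;=\; \sum_{x} \Delta C_{x}.
\]
```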
In 2020, SARS-CoV-2 touched the hearts of people across the country, and Weibo became a representative platform for expressing feelings on the internet. Traditional sentiment-dictionary and machine learning methods perform poorly at text emotion recognition, whereas the BERT pre-training model, built on a bidirectional Transformer, better captures the emotion expressed in text and effectively improves model accuracy. Building on the BERT pre-training model, an attention mechanism is introduced to weight the key features, making emotion classification more accurate. In an analysis of the emotions expressed by netizens on Weibo during the epidemic, the accuracy of this model is 6.25%, 4.69% and 2.67% higher than that of the textCNN, BiLSTM and BiLSTM+Attention models, respectively. The overall performance of this model is the best, and it can effectively recognize text emotion.
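The exact architecture is not given in the abstract; the sketch below is an assumed BERT classifier with an additive attention-pooling layer over token states, in the spirit of "weighting the key features". The checkpoint name, layer sizes and the three-class output are illustrative assumptions.

```python
# Hedged sketch: BERT + attention pooling for Weibo emotion classification.
import torch
import torch.nn as nn
from transformers import BertModel

class BertAttnClassifier(nn.Module):
    def __init__(self, num_classes=3, bert_name="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # Additive attention scores one weight per token, so salient
        # (emotion-bearing) tokens dominate the pooled representation.
        self.attn = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        states = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        scores = self.attn(states).squeeze(-1)               # (batch, seq)
        scores = scores.masked_fill(attention_mask == 0, -1e9)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        pooled = (weights * states).sum(dim=1)               # weighted sum
        return self.classifier(pooled)
```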