The propagation of rumors on social media has been identified as a critical problem in recent years, as rumors can cause social panic or economic turmoil, giving rise to the need for faster identification. With advances in deep learning, research based on neural networks has become popular. Most existing methods extensively adopt recurrent neural networks (RNNs), such as gated recurrent units (GRUs) and long short-term memory (LSTM). This significantly degrades the concurrency of the models, increasing their consumption of time and computational resources. This study proposes a model with multilayer transformer encoding blocks for detecting rumors. The self-attention mechanism in the transformer encoding blocks gives the proposed model better concurrency and improves its performance. Because it uses few or no recurrent computations, the proposed model executes faster than models based on RNNs. Experiments on two real-world social media datasets verified that our model achieves significantly better results than baseline methods: accuracy increased by 1.1% and 2.2% on the Weibo and PHEME datasets, respectively. Compared with methods that use RNNs as feature extractors, the training time of the proposed model was reduced by 16% on Weibo and 70% on PHEME.
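To make the architectural idea concrete, the following is a minimal sketch (not the authors' released code) of a rumor classifier built from stacked transformer encoder blocks. The class name, layer counts, and dimensions are illustrative assumptions; it shows how self-attention replaces recurrence so that all positions in a sequence are processed in parallel.

```python
import torch
import torch.nn as nn


class TransformerRumorClassifier(nn.Module):
    """Hypothetical sketch: stacked transformer encoder blocks + linear head."""

    def __init__(self, vocab_size, d_model=256, n_heads=4,
                 n_layers=3, max_len=512, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Learned positional embeddings stand in for recurrence: order
        # information is injected once, then self-attention attends over
        # all positions simultaneously (the source of the concurrency gain).
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, token_ids, padding_mask=None):
        # token_ids: (batch, seq_len); padding_mask: True at padded positions.
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        x = self.embed(token_ids) + self.pos(positions)
        x = self.encoder(x, src_key_padding_mask=padding_mask)
        # Mean-pool over the sequence and classify rumor vs. non-rumor.
        return self.classifier(x.mean(dim=1))


# Usage example: a batch of 8 posts, each padded/truncated to 128 tokens.
model = TransformerRumorClassifier(vocab_size=30000)
logits = model(torch.randint(0, 30000, (8, 128)))
print(logits.shape)  # torch.Size([8, 2])
```

Unlike a GRU or LSTM, nothing in the encoder's forward pass depends on the hidden state of the previous time step, so the whole sequence can be computed in one parallel pass, which is consistent with the reduced training times reported above.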