Fake news can cause serious problems in society. It is therefore necessary to detect such news, a task that comes with challenges related to event coverage, verification, and datasets. Benchmark datasets in this area suffer from various problems, such as insufficient information about news samples and a lack of subject diversity. The present paper proposes a model that uses feature extraction and machine learning algorithms to address some of these problems. In the feature extraction phase, two new features (named coherence and cohesion), along with other key features, were extracted from news samples. In the detection phase, the news samples of each dataset were first sorted in a specific order (easier samples at the beginning and harder ones towards the end) using a hybrid method combining statistical descriptors and a k-nearest neighbors algorithm. Then, inspired by human learning principles, the sorted news samples were fed to Long Short-Term Memory networks and classical machine learning algorithms for fake news detection. The results indicated that the proposed model outperforms benchmark models in fake news detection.
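To make the easy-to-hard ordering step concrete, the following is a minimal sketch, not the authors' implementation: it approximates each sample's difficulty by how much its k nearest neighbors disagree with its label, sorts samples from easiest to hardest, and trains an incremental classifier on the ordered batches. The synthetic features, the disagreement-based difficulty proxy, and the SGD classifier are all assumptions standing in for the paper's extracted features, statistical descriptors, and LSTM/classical learners.

```python
# Hypothetical sketch of curriculum-style ordering for fake news detection.
# Assumptions: synthetic features stand in for extracted ones (coherence,
# cohesion, etc.); k-NN label disagreement is used as a difficulty proxy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import SGDClassifier

# Stand-in for extracted news features and real/fake labels.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Difficulty proxy: fraction of the k nearest neighbors with a different label.
k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)                      # first neighbor is the sample itself
disagreement = (y[idx[:, 1:]] != y[:, None]).mean(axis=1)

# Sort samples from easiest (low disagreement) to hardest.
order = np.argsort(disagreement)

# Feed easier batches first to an incremental learner (order matters here,
# unlike for batch training).
clf = SGDClassifier(random_state=0)
classes = np.unique(y)
for batch in np.array_split(order, 5):
    clf.partial_fit(X[batch], y[batch], classes=classes)

print("training accuracy:", clf.score(X, y))
```

In the paper itself the ordered samples are passed to LSTM and classical machine learning models; the incremental classifier above is used only to illustrate why the ordering matters for sequentially trained learners.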