2018
DOI: 10.4108/eai.13-7-2018.163973
Classification of Fake News by Fine-tuning Deep Bidirectional Transformers based Language Model

Abstract: With the ever-increasing rate of information dissemination and absorption, "Fake News" has become a real menace. People these days often fall prey to fake news that aligns with their perceptions. Checking the authenticity of news articles manually is a time-consuming and laborious task, thus giving rise to the need for automated computational tools that can provide insights about the degree of fakeness of news articles. In this paper, a Natural Language Processing (NLP) based mechanism is proposed to …

Cited by 22 publications (16 citation statements)
References 28 publications (28 reference statements)
“…BERT is composed of two stages, i.e., unsupervised pre-training and supervised fine-tuning. Aggarwal et al [5] showed that BERT outperformed LSTM and gradient boosted tree models even with minimal text pre-processing. To improve the performance of BERT, Jwa et al [27] proposed a model that classified the data using weighted cross-entropy.…”
Section: Related Work
confidence: 99%
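The weighted cross-entropy mentioned in the citation above can be illustrated in isolation. The sketch below is a minimal NumPy version of the loss idea, not Jwa et al.'s implementation: the function name, the shapes, and the choice of per-class weights are assumptions for illustration only.

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_weights):
    """Mean cross-entropy where each example is weighted by its true class's weight."""
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Predicted probability of each example's true class.
    p_true = probs[np.arange(len(labels)), labels]
    # Weight each example's loss by the weight of its true class,
    # e.g. up-weighting a rarer "fake" class in an imbalanced dataset.
    w = class_weights[labels]
    return float((-w * np.log(p_true)).mean())
```

With uniform logits and equal unit weights over two classes, this reduces to ln 2 ≈ 0.693; doubling a class's weight doubles its contribution to the mean, which is how the loss counteracts class imbalance.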
“…Recently, more successful models based on deep learning techniques have been used to detect misinformation. For example, Aggarwal et al [5] detected misinformation using BERT with very minimal text pre-processing, yet obtained very good performance. It was also reported that by April 2020 Facebook had removed more than fifty million posts related to COVID-19, since they were classified as misinformation using machine learning-based NLP techniques [6].…”
confidence: 99%
“…BERT is one of the newest advances in natural language modeling and is state-of-the-art in various text data sets ( Sun et al, 2019 ; Aggarwal et al, 2020 ; González-Carvajal and Garrido-Merchán, 2020 ).…”
Section: Methods
confidence: 99%
“…The result: people are polarized into those who speak positively or negatively [5]. There is a long way to go to tackle the problem of fake news detection; transfer learning promises to be a strong means of progress in the field [6]. Furthermore, this article examines language abuse or language criminalization that appeared in the public sphere during the 2019 Indonesian Presidential election.…”
Section: Introduction
confidence: 99%