2019 IEEE International Conference on Big Data (Big Data) 2019
DOI: 10.1109/bigdata47090.2019.9005980

Detecting Fake News Articles

Cited by 32 publications (9 citation statements)
References 14 publications
“…Lin J. et al [50] presented a framework that extracted 134 features and built traditional machine learning models, such as XGBoost and Random Forest. They also developed a deep learning-based model, an LSTM with a self-attention mechanism.…”
Section: Cross-* Detection Methodologies
confidence: 99%
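The feature-based side of the framework described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual 134-feature set: the features (word count, mean word length, exclamation marks, all-caps words), the toy corpus, and the choice of Random Forest (XGBoost would be used analogously) are all assumptions made for the example.

```python
# Hypothetical sketch: hand-crafted text features + a traditional
# classifier, in the spirit of the framework cited above.
from sklearn.ensemble import RandomForestClassifier

def extract_features(text):
    """Map an article's text to a small numeric feature vector (illustrative)."""
    words = text.split()
    return [
        len(words),                                       # word count
        sum(len(w) for w in words) / max(len(words), 1),  # mean word length
        text.count("!"),                                  # exclamation marks
        sum(w.isupper() for w in words),                  # all-caps words
    ]

# Toy labeled corpus (1 = fake, 0 = real), for illustration only.
articles = [
    ("SHOCKING!!! You WON'T believe this MIRACLE cure!!!", 1),
    ("BREAKING!!! Aliens CONFIRMED by SECRET government files!!!", 1),
    ("The central bank held interest rates steady on Tuesday.", 0),
    ("Researchers published a peer-reviewed study on crop yields.", 0),
]
X = [extract_features(text) for text, _ in articles]
y = [label for _, label in articles]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict([extract_features("AMAZING!!! Doctors HATE this trick!!!")])
```

On this separable toy data the classifier flags the exclamation-heavy query as fake; a real pipeline would of course use far richer features and a held-out evaluation set.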
“…In a study by Escolà-Gascón et al (2021), critical thinking was found to predict both lower stress levels among Spanish physicians and better identification of fake news. Lin et al (2019) presented a system that extracts 134 features from news articles and builds machine learning and deep learning models that rely solely on textual input. They then compared their models against a baseline to determine which performs better on the fake news detection task.…”
Section: Related Work
confidence: 99%
“…• LSTM-ATT (Lin et al, 2019): LSTM-ATT uses long short-term memory (LSTM) with an attention mechanism. The model takes a 300-dimensional vector representation of news articles as input to a two-layer LSTM for fake news detection.…”
Section: Dataset
confidence: 99%
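The attention step in a model like LSTM-ATT can be sketched in isolation. In this minimal NumPy illustration the LSTM itself is mocked: `hidden_states` stands in for the per-token outputs of the two-layer LSTM, and the dimensions and random weights are assumptions for the example (the cited model uses 300-dimensional inputs).

```python
# Minimal sketch of attention pooling over LSTM outputs, assuming the
# LSTM's per-timestep hidden states are already computed.
import numpy as np

def attention_pool(hidden_states, w):
    """Collapse a (seq_len, d) matrix of hidden states into one d-vector.

    Each timestep gets a scalar score via a learnable vector w; the
    scores are softmax-normalized into attention weights, and the pooled
    vector is the attention-weighted sum of the hidden states.
    """
    scores = np.tanh(hidden_states @ w)             # (seq_len,)
    scores = scores - scores.max()                  # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # attention weights
    return alpha @ hidden_states                    # (d,)

rng = np.random.default_rng(0)
seq_len, d = 10, 8                              # assumed toy sizes
hidden_states = rng.normal(size=(seq_len, d))   # stand-in LSTM outputs
w = rng.normal(size=d)                          # learnable scoring vector

pooled = attention_pool(hidden_states, w)
# `pooled` would then feed a dense layer + sigmoid for the fake/real label.
```

The pooled vector has the same dimensionality as a single hidden state, so it plugs directly into a standard classification head.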
“…One possible explanation is that LIWC can capture linguistic features of news articles through words that denote psycholinguistic characteristics. LSTM-ATT, which uses extensive preprocessing with count and sentiment features along with hyperparameter tuning (Lin et al, 2019), performs similarly to HAN on PolitiFact but outperforms HAN on GossipCop. One reason may be that the attention mechanism captures a relevant representation of the input.…”
Section: Comparison With State-of-the-art
confidence: 99%