2021
DOI: 10.1016/j.ipm.2021.102610

Detecting fake news by exploring the consistency of multimodal data

Cited by 137 publications (44 citation statements)
References 37 publications
“…Since it is difficult to find both pertinent and pristine images to match these fictions, fake news generators sometimes use manipulated images to support non-factual scenarios. Researchers refer to this cue as the similarity relationship between text and image (Zhou, Wu, and Zafarani 2020; Giachanou, Zhang, and Rosso 2020; Xue et al. 2021), which can be captured with a variety of similarity-measuring techniques, such as cosine similarity between the title and image-tag embeddings (Zhou, Wu, and Zafarani 2020; Giachanou, Zhang, and Rosso 2020) or dedicated similarity-measurement architectures (Xue et al. 2021).…”
Section: Multi-modal Features and Clues
confidence: 99%
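The cosine-similarity cue described in the statement above can be sketched in a few lines. This is a minimal illustration, not code from any of the cited papers; the toy 4-dimensional embeddings stand in for real text- and image-encoder outputs.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for a title embedding and an
# image-tag embedding produced by real encoders.
title_emb = [0.2, 0.7, 0.1, 0.0]
image_tags_emb = [0.1, 0.8, 0.0, 0.1]
score = cosine_similarity(title_emb, image_tags_emb)
# A low score suggests the image may not match the text.
```

In practice the embeddings would come from learned encoders (e.g. a sentence encoder for the title and a tag/caption encoder for the image), and the score would feed into a classifier rather than being thresholded directly.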
“…Singhal et al. (2021) develop an inter-modality-discordance-based fake news detector that learns discriminating features and employs a modified contrastive loss to explore the inter-modality discordance. Xue et al. (2021) propose a Multimodal Consistency Neural Network (MCNN) with a similarity-measurement module that measures the similarity of multi-modal data to detect possible mismatches between image and text. Lastly, Biamby et al. (2021) leverage a Contrastive Language-Image Pre-Training (CLIP) model (Radford et al. 2021) to jointly learn image/text representations and detect image-text inconsistencies in Tweets.…”
Section: Generative Architectures
confidence: 99%
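The contrastive-loss idea referenced above reduces, in its simplest margin-based form, to the sketch below. This is a generic illustration of a margin contrastive loss on a text-image similarity score, not the specific modified loss of Singhal et al. (2021); the margin value is an illustrative assumption.

```python
def contrastive_loss(sim, label, margin=0.5):
    """Margin-based contrastive loss on a text-image similarity score.

    label = 1 for a matching (consistent) pair, 0 for a mismatched pair.
    Matching pairs are pulled toward similarity 1; mismatched pairs are
    penalized only while their similarity still exceeds the margin.
    """
    if label == 1:
        return (1.0 - sim) ** 2           # push similarity toward 1
    return max(0.0, sim - margin) ** 2    # push similarity below margin
```

During training, averaging this loss over matching and mismatched pairs encourages the encoders to assign high similarity to consistent text-image pairs and low similarity to discordant ones.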
“…Instead of obtaining in-the-wild knowledge, recent works leverage entity background information obtained from knowledge graphs (Cui, Seo, Tabar, Ma, Wang and Lee, 2020; Zhang, Fang, Qian and Xu, 2019; Hu, Yang, Zhang, Zhong, Tang, Shi, Duan and Zhou, 2021). For multi-modal scenarios, entity knowledge is important for bridging text-image semantics (Xue, Wang, Tian, Li, Shi and Wei, 2021; Qi, Cao and Sheng, 2021b; Qi, Cao, Li, Liu, Sheng, Mi, He, Lv, Guo and Yu, 2021a; Li, Sun, Yu, Tian, Yao and Xu, 2021). These methods can provide accurate and explainable evidence, but face issues of source credibility and scalability.…”
Section: False News Detection
confidence: 99%
“…However, these studies do not consider consistencies between multi-modal information as our work does. While both SAFE (Zhou et al., 2020) and MCNN (Xue et al., 2021) consider the relevance between textual and visual information, our work differs from theirs in that we distinguish modal-unique from modal-shared information and also model inconsistencies between content and external knowledge.…”
Section: Related Work
confidence: 99%
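The modal-shared versus modal-unique split mentioned in the statement above can be illustrated with a simple linear decomposition: project one modality's embedding onto the other's to get a shared component, and keep the orthogonal residual as the unique part. This is a toy stand-in for the learned decomposition the citing paper describes, not its actual method.

```python
def decompose(a, b):
    """Split embedding a into a component shared with b (its projection
    onto b) and a modal-unique residual orthogonal to b."""
    dot = sum(x * y for x, y in zip(a, b))
    b_norm_sq = sum(y * y for y in b)
    shared = [dot / b_norm_sq * y for y in b]
    unique = [x - s for x, s in zip(a, shared)]
    return shared, unique
```

In a learned system, the split would be produced by trained projection networks with losses encouraging the shared parts of the two modalities to agree, but the geometric intuition is the same.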
“…On the other hand, it also provides a great opportunity to identify rumors. Xue et al. (2021) show that, to catch the public's eye, rumors tend to use theatrical, comical, and attractive images that are irrelevant to the post content. In general, it is often difficult to find pertinent, non-manipulated images to match fictional events, so posts with mismatched textual and visual information are more likely to be fake (Zhou et al., 2020).…”
Section: Introduction
confidence: 99%
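In its simplest form, the cue described above, that mismatched text and image signal a likely fake, amounts to thresholding a text-image similarity score. The helper name and the threshold value here are illustrative assumptions, not taken from the cited papers.

```python
def likely_fake(text_image_similarity, threshold=0.3):
    """Flag a post as suspicious when its text-image similarity falls
    below an (illustrative) threshold; real systems feed the score into
    a trained classifier instead of hard-thresholding it."""
    return text_image_similarity < threshold
```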